---
abstract: 'The minimality of the penalty function associated with a convex risk measure is analyzed in this paper. First, in a general static framework, we provide necessary and sufficient conditions for a penalty function defined on a convex and closed subset of the absolutely continuous measures with respect to some reference measure $\mathbb{P}$ to be minimal on this set. When the probability space supports a Lévy process, we establish results that guarantee the minimality property of a penalty function described in terms of the coefficients associated with the density processes. The set of density processes is described and the convergence of its quadratic variation is analyzed.'
author:
- '[Daniel Hernández–Hernández[^1]   Leonel Pérez-Hernández[^2]]{}'
title: 'Characterization of the minimal penalty of a convex risk measure with applications to Lévy processes.'
---

**Key words:** Convex risk measures, Fenchel-Legendre transformation, minimal penalization, Lévy process.\
**Mathematical Subject Classification:** 91B30, 46E30.

Introduction
============

The definition of coherent risk measure was introduced by Artzner *et al.* in their fundamental works [@ADEH; @1997], [@ADEH; @1999] for finite probability spaces, giving an axiomatic characterization that was later extended by Delbaen [@Delbaen; @2002] to general probability spaces. In those papers one of the fundamental axioms was positive homogeneity; in subsequent works this axiom was dropped, leading to the concept of convex risk measure introduced by Föllmer and Schied [@FoellSch; @2002; @a], [@FoellSch; @2002; @b], Frittelli and Rosazza Gianin [@FritRsza; @2002], [@FritRsza; @2004] and Heath [@Heath; @2000]. This is a rich area that has received considerable attention, and there exists by now a well established theory in the static and dynamic cases, but many questions in the static framework remain unanswered and need to be analyzed carefully. The one we focus on in this paper is the characterization of the penalty functions that are minimal for the corresponding static risk measure. Up to now, there are mainly two ways to deal with minimal penalty functions, namely the definition and the biduality relation. With the results presented in this paper we can start with a penalty function, which essentially discriminates between models within a convex closed subset of absolutely continuous probability measures with respect to (w.r.t.) the market measure, and then guarantee that it corresponds to the minimal penalty of the corresponding convex risk measure on this subset. This property is, as we will see, closely related to the lower semicontinuity of the penalty function, and the difficulty of proving this property depends on the structure of the probability space. We first provide a general framework, within a measurable space with a reference probability measure $\mathbb{P}$, and show necessary and sufficient conditions for a penalty function defined on a convex and closed subset of the absolutely continuous measures with respect to the reference measure to be minimal within this subset. The characterization of the form of the penalty functions that are minimal when the probability space supports a Lévy process is then studied. This requires characterizing the set of absolutely continuous measures for this space, which is done using results that describe the density process for spaces supporting semimartingales with the weak predictable representation property.
Roughly speaking, using the weak representation property, every density process splits into two parts: one related to the continuous local martingale part of the decomposition and the other to the corresponding discontinuous one. A continuity property for the quadratic variation of a sequence of densities converging in $L^{1}$ is established. From this characterization of the densities, a family of penalty functions is proposed, which turns out to be minimal for the risk measures generated by duality. The paper is organized as follows. Section 2 contains the description of the minimal penalty functions for a general probability space, providing necessary and sufficient conditions, the latter restricted to a subset of equivalent probability measures. Section 3 reports the structure of the densities for a probability space that supports a Lévy process and the convergence properties needed to prove the lower semicontinuity of the set of penalty functions defined in Section 4. In this last section we show that these penalty functions are minimal.

Minimal penalty function of risk measures concentrated in $\mathcal{Q}_{\ll }\left( \mathbb{P}\right) $. \[Sect Minimal Penalty Function of CMR\]
=================================================================================================================================================

Any penalty function $\psi $ induces a convex risk measure $\rho $, which in turn has a representation by means of a minimal penalty function $\psi _{\rho }^{\ast }.$ Starting with a penalty function $\psi $ concentrated on a convex and closed subset of the set of absolutely continuous probability measures with respect to some reference measure $\mathbb{P}$, in this section we give necessary and sufficient conditions that guarantee that $\psi $ is the minimal penalty within this set. We begin by briefly recalling some known results from the theory of static risk measures, and then a characterization of minimal penalties is presented.

Preliminaries from static measures of risk [Subsect:\_Preliminaries\_SCRM]{}
----------------------------------------------------------------------------

Let $X:\Omega \rightarrow \mathbb{R}$ be a mapping from a set $\Omega $ of possible market scenarios, representing the discounted net worth of the position. Uncertainty is represented by the measurable space $(\Omega, \mathcal{F})$, and we denote by $\mathcal{X}$ the linear space of bounded financial positions, including constant functions.

1. The function $\rho :\mathcal{X}\rightarrow \mathbb{R}$, quantifying the risk of $X$, is a *monetary risk measure* if it satisfies the following properties: $$\begin{array}{rl} \text{Monotonicity:} & \text{If }X\leq Y\text{ then }\rho \left( X\right) \geq \rho \left( Y\right) \ \forall X,Y\in \mathcal{X}.\end{array} \label{Monotonicity}$$$\smallskip \ $$$\begin{array}{rl} \text{Translation Invariance:} & \rho \left( X+a\right) =\rho \left( X\right) -a\ \forall a\in \mathbb{R}\ \forall X\in \mathcal{X}.\end{array} \label{Translation Invariance}$$

2. When this function also satisfies the convexity property $$\begin{array}{rl} & \rho \left( \lambda X+\left( 1-\lambda \right) Y\right) \leq \lambda \rho \left( X\right) +\left( 1-\lambda \right) \rho \left( Y\right) \ \forall \lambda \in \left[ 0,1\right] \ \forall X,Y\in \mathcal{X},\end{array} \label{Convexity}$$it is said that $\rho $ is a convex risk measure.
3. The function $\rho $ is called normalized if $\rho \left( 0\right) =0$, and sensitive, with respect to a measure $\mathbb{P}$, when for each $X\in L_{+}^{\infty }\left( \mathbb{P}\right) $ with $\mathbb{P}\left[ X>0\right] >0$ we have that $\rho \left( -X\right) >\rho \left( 0\right) .$

We say that a set function $\mathbb{Q}:\mathcal{F}\rightarrow \left[ 0,1\right] $ is a *probability content* if it is finitely additive and $\mathbb{Q}\left( \Omega \right) =1$. The set of *probability contents* on this measurable space is denoted by $\mathcal{Q}_{cont}$. From the general theory of static convex risk measures [@FoellSch; @2004], we know that any map $\psi :\mathcal{Q}_{cont}\rightarrow \mathbb{R}\cup \{+\infty \},$ with $\inf\nolimits_{\mathbb{Q}\in \mathcal{Q}_{cont}}\psi (\mathbb{Q})\in \mathbb{R}$, induces a static convex measure of risk as a mapping $\rho :\mathfrak{M}_{b}\rightarrow \mathbb{R}$ given by $$\rho (X):=\sup\nolimits_{\mathbb{Q}\in \mathcal{Q}_{cont}}\left\{ \mathbb{E}_{\mathbb{Q}}\left[ -X\right] -\psi (\mathbb{Q})\right\} . \label{Static_CMR_induced_by_phi}$$Here $\mathfrak{M}$ denotes the class of measurable functions and $\mathfrak{M}_{b}$ the subclass of bounded measurable functions. The function $\psi$ will be referred to as a *penalty function*. Föllmer and Schied [@FoellSch; @2002; @b Theorem 3.2] and Frittelli and Rosazza Gianin [@FritRsza; @2002 Corollary 7] proved that any convex risk measure is essentially of this form. More precisely, a convex risk measure $\rho $ on the space $\mathfrak{M}_{b}\left( \Omega ,\mathcal{F}\right) $ has the representation $$\rho (X)=\sup\limits_{\mathbb{Q}\in \mathcal{Q}_{cont}}\left\{ \mathbb{E}_{\mathbb{Q}}\left[ -X\right] -\psi _{\rho }^{\ast }\left( \mathbb{Q}\right) \right\} , \label{Static_CMR_Robust_representation}$$where $$\psi _{\rho }^{\ast }\left( \mathbb{Q}\right) :=\sup\limits_{X\in \mathcal{A}_{\rho }}\mathbb{E}_{\mathbb{Q}}\left[ -X\right] , \label{Def._minimal_penalty}$$and $\mathcal{A}_{\rho }:=\left\{ X\in \mathfrak{M}_{b}:\rho (X)\leq 0\right\} $ is the *acceptance set* of $\rho .$ The penalty $\psi _{\rho }^{\ast }$ is called the *minimal penalty function* associated to $\rho $ because, for any other penalty function $\psi $ fulfilling $\left( \ref{Static_CMR_Robust_representation}\right) $, $\psi \left( \mathbb{Q}\right) \geq \psi _{\rho }^{\ast }\left( \mathbb{Q}\right) $ for all $\mathbb{Q}\in \mathcal{Q}_{cont}.$ Furthermore, for the minimal penalty function, the following biduality relation is satisfied: $$\psi _{\rho }^{\ast }\left( \mathbb{Q}\right) =\sup_{X\in \mathfrak{M}_{b}\left( \Omega ,\mathcal{F}\right) }\left\{ \mathbb{E}_{\mathbb{Q}}\left[ -X\right] -\rho \left( X\right) \right\} ,\quad \forall \mathbb{Q\in }\mathcal{Q}_{cont}. \label{static convex rsk msr biduality}$$ Let $\mathcal{Q}\left( \Omega ,\mathcal{F}\right) $ be the family of probability measures on the measurable space $\left( \Omega ,\mathcal{F}\right) .$ Among the measures of risk, the class of those concentrated on the set of probability measures $\mathcal{Q\subset Q}_{cont}$ is of special interest. Recall that a function $I:E\subset \mathbb{R}^{\Omega }\rightarrow \mathbb{R}$ is *sequentially continuous from below (above)* when $\left\{ X_{n}\right\} _{n\in \mathbb{N}}\uparrow X\Rightarrow \lim_{n\rightarrow \infty }I\left( X_{n}\right) =I\left( X\right) $ (respectively $\left\{ X_{n}\right\} _{n\in \mathbb{N}}\downarrow X\Rightarrow \lim_{n\rightarrow \infty }I\left( X_{n}\right) =I\left( X\right) $).
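A standard example to keep in mind (recalled here only for orientation, see e.g. [@FoellSch; @2004]) is the entropic risk measure: for a fixed $\beta >0$ it is given, together with its minimal penalty on $\mathcal{Q}_{\ll }(\mathbb{P})$, by $$\rho _{ent}\left( X\right) :=\frac{1}{\beta }\log \mathbb{E}_{\mathbb{P}}\left[ e^{-\beta X}\right] ,\qquad \psi _{\rho _{ent}}^{\ast }\left( \mathbb{Q}\right) =\frac{1}{\beta }H\left( \mathbb{Q}|\mathbb{P}\right) :=\frac{1}{\beta }\mathbb{E}_{\mathbb{Q}}\left[ \log \frac{d\mathbb{Q}}{d\mathbb{P}}\right] ,$$where $H\left( \mathbb{Q}|\mathbb{P}\right) $ denotes the relative entropy; this risk measure is normalized, sensitive and sequentially continuous from below.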
Föllmer and Schied [@FoellSch; @2004] proved that any convex measure of risk which is sequentially continuous from below is concentrated on the set $\mathcal{Q}$. Later, Krätschmer [@Kraetschmer; @2005 Prop. 3 p. 601] established that sequential continuity from below is not only a sufficient but also a necessary condition in order to have a representation by means of the minimal penalty function in terms of probability measures. We denote by $\mathcal{Q}_{\ll }(\mathbb{P})$ the subclass of probability measures absolutely continuous with respect to $\mathbb{P}$ and by $\mathcal{Q}_{\approx }\left( \mathbb{P}\right) $ the subclass of equivalent probability measures. Of course, $\mathcal{Q}_{\approx }\left( \mathbb{P}\right) \subset \mathcal{Q}_{\ll }(\mathbb{P})\subset \mathcal{Q}\left( \Omega ,\mathcal{F}\right) $.

\[Remarkpsi(Q)=+oo\_for\_Q\_not\_<<\] When a convex risk measure on $\mathcal{X}:=L^{\infty }\left( \mathbb{P}\right) $ satisfies the property $$\rho \left( X\right) =\rho \left( Y\right) \text{ if }X=Y\ \mathbb{P}\text{-a.s.} \label{rho(X)=rho(Y)_for_X=Y}$$and is represented by a penalty function $\psi $ as in $\left( \ref{Static_CMR_induced_by_phi}\right) $, we have that $$\mathbb{Q}\in \mathcal{Q}_{cont}\setminus \mathcal{Q}_{cont}^{\ll }\Longrightarrow \psi \left( \mathbb{Q}\right) =+\infty , \label{psi(Q)=+oo_for_Q_not_<<}$$where $\mathcal{Q}_{cont}^{\ll }$ is the set of contents absolutely continuous with respect to $\mathbb{P}$; see [@FoellSch; @2004].

Minimal penalty functions [Subsect:\_Minimal\_penalty\_functions]{}
-------------------------------------------------------------------

The minimality property of the penalty function turns out to be quite relevant, and it is a desirable property that is not easy to prove in general. For instance, in the study of robust portfolio optimization problems (see, for example, Schied [@Schd; @2007] and Hernández-Hernández and Pérez-Hernández [@PerHer]), using techniques of duality, the minimality property is a necessary condition in order to have a well posed dual problem. More recently, the dual representations of dynamic risk measures were analyzed by Barrieu and El Karoui [@BaElKa2009], while the connection with BSDEs and $g$-expectations has been studied by Delbaen *et al.* [@DelPenRz]. The minimality of the penalty function also plays a crucial role in the characterization of the time consistency property for dynamic risk measures (see Bion-Nadal [@BionNa2008], [@BionNa2009]). In the next sections we will show some of the difficulties that appear when proving the minimality of the penalty function when the probability space $(\Omega, \mathcal{F},\mathbb{P})$ supports a Lévy process. However, to establish the results of this section we only need to fix a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. When we deal with a set of absolutely continuous probability measures $\mathcal{K}\subset \mathcal{Q}_{\ll }(\mathbb{P})$ it is necessary to make reference to some topological concepts, meaning that we are considering the corresponding set of densities and the strong topology in $L^{1}\left( \mathbb{P}\right) .$ Recall that within a locally convex space, a convex set $\mathcal{K}$ is weakly closed if and only if $\mathcal{K}$ is closed in the original topology [@FoellSch; @2004 Thm A.59].
\[static minimal penalty funct. in Q(<<) <=>\] Let $\psi :\mathcal{K}\subset \mathcal{Q}_{\ll }(\mathbb{P})\rightarrow \mathbb{R}\cup \{+\infty \} $ be a function with $\inf\nolimits_{\mathbb{Q}\in \mathcal{K}}\psi (\mathbb{Q})\in \mathbb{R},$ and define the extension $\psi (\mathbb{Q}):=\infty $ for each $\mathbb{Q}\in \mathcal{Q}_{cont}\setminus \mathcal{K}$, with $\mathcal{K}$ a convex closed set. Also, define the function $\Psi $, with domain in $L^{1}$, as $$\Psi \left( D\right) :=\left\{ \begin{array}{rl} \psi \left( \mathbb{Q}\right) & \text{if }D=d\mathbb{Q}/d\mathbb{P}\text{ for }\mathbb{Q}\in \mathcal{K} \\ \infty & \text{otherwise.}\end{array}\right.$$Then, for the convex measure of risk $\rho (X):=\sup\limits_{\mathbb{Q}\in \mathcal{Q}_{cont}}\left\{ \mathbb{E}_{\mathbb{Q}}\left[ -X\right] -\psi \left( \mathbb{Q}\right) \right\} $ associated with $\psi $ the following assertions hold:

$\left( a\right) $ If $\rho $ has the function $\psi $ as minimal penalty $\psi _{\rho }^{\ast }$ (i.e. $\psi =\psi _{\rho }^{\ast }$), then $\Psi $ is a proper convex function and lower semicontinuous w.r.t. the (strong) $L^{1}$-topology or, equivalently, w.r.t. the weak topology $\sigma \left( L^{1},L^{\infty }\right) $.

$\left( b\right) $ If $\Psi $ is lower semicontinuous w.r.t. the (strong) $L^{1}$-topology or, equivalently, w.r.t. the weak topology $\sigma \left( L^{1},L^{\infty }\right) ,$ then $$\psi \mathbf{1}_{\mathcal{Q}_{\ll }(\mathbb{P})}=\psi _{\rho }^{\ast }\mathbf{1}_{\mathcal{Q}_{\ll }(\mathbb{P})}. \label{PSI_l.s.c=>psi*=psi_on_Q<<}$$

*Proof:* $\left( a\right) $ Recall that $\sigma \left( L^{1},L^{\infty }\right) $ is the coarsest topology on $L^{1}\left( \mathbb{P}\right) $ under which every linear functional of the form $Z\mapsto \mathbb{E}_{\mathbb{P}}\left[ ZU\right] $, with $U\in L^{\infty }\left( \mathbb{P}\right) $, is continuous; hence $\Psi _{0}^{X}\left( Z\right) :=\mathbb{E}_{\mathbb{P}}\left[ Z\left( -X\right) \right] $, with $Z\in L^1$, is a continuous function for each fixed $X\in \mathfrak{M}_{b}\left( \Omega ,\mathcal{F}\right) $. For $\delta \left( \mathcal{K}\right) :=\left\{ Z:Z=d\mathbb{Q}/d\mathbb{P}\text{ with }\mathbb{Q}\in \mathcal{K}\right\} $ we have that$$\Psi _{1}^{X}\left( Z\right) :=\Psi _{0}^{X}\left( Z\right) \mathbf{1}_{\delta \left( \mathcal{K}\right) }\left( Z\right) +\infty \times \mathbf{1}_{L^{1}\setminus \delta \left( \mathcal{K}\right) }\left( Z\right)$$is clearly lower semicontinuous on $\delta \left( \mathcal{K}\right) .$ For $Z^{\prime }\in L^{1}\left( \mathbb{P}\right) \setminus \delta \left( \mathcal{K}\right) $ arbitrarily fixed, we have from the Hahn-Banach Theorem that there is a continuous linear functional $l\left( Z\right) $ with $l\left( Z^{\prime }\right) <\inf_{Z\in \delta \left( \mathcal{K}\right) }l\left( Z\right) $.
Taking $\varepsilon :=\frac{1}{2}\left\{ \inf_{Z\in \delta \left( \mathcal{K}\right) }l\left( Z\right) -l\left( Z^{\prime }\right) \right\} $ we have that the weak open ball $B\left( Z^{\prime },\varepsilon \right) :=\left\{ Z\in L^{1}\left( \mathbb{P}\right) :\left\vert l\left( Z^{\prime }\right) -l\left( Z\right) \right\vert <\varepsilon \right\} $ satisfies $B\left( Z^{\prime },\varepsilon \right) \cap \delta \left( \mathcal{K}\right) =\varnothing .$ Therefore, $\Psi _{1}^{X}\left( Z\right) $ is weakly lower semicontinuous on $L^{1}\left( \mathbb{P}\right) ,$ as well as $\Psi _{2}^{X}\left( Z\right) :=\Psi _{1}^{X}\left( Z\right) -\rho \left( X\right) .$ If $$\psi \left( \mathbb{Q}\right) =\psi _{\rho }^{\ast }\left( \mathbb{Q}\right) =\sup_{X\in \mathfrak{M}_{b}\left( \Omega ,\mathcal{F}\right) }\left\{ \int Z\left( -X\right) d\mathbb{P}-\rho \left( X\right) \right\},$$ where $Z:=d\mathbb{Q}/d\mathbb{P},$ we have that $\Psi \left( Z\right) =\sup_{X\in \mathfrak{M}_{b}\left( \Omega ,\mathcal{F}\right) }\left\{ \Psi _{2}^{X}\left( Z\right) \right\} $ is the supremum of a family of convex lower semicontinuous functions with respect to the topology $\sigma \left( L^{1},L^{\infty }\right) $, and $\Psi \left( Z\right) $ preserves both properties.

$\left( b\right) $ The Fenchel-Legendre transform (conjugate function) $\Psi ^{\ast }:\ L^{\infty }\left( \mathbb{P}\right) \longrightarrow \mathbb{R}$ satisfies, for each $U\in L^{\infty }\left( \mathbb{P}\right) $, $$\Psi ^{\ast }\left( U\right) =\sup\limits_{Z\in \delta \left( \mathcal{K}\right) }\left\{ \int ZUd\mathbb{P}-\Psi \left( Z\right) \right\} =\sup\limits_{\mathbb{Q}\in \mathcal{Q}_{cont}}\left\{ \mathbb{E}_{\mathbb{Q}}\left[ U\right] -\psi \left( \mathbb{Q}\right) \right\} \equiv \rho \left( -U\right) .$$ From the lower semicontinuity of $\Psi $ w.r.t. the weak topology $\sigma \left( L^{1},L^{\infty }\right) $ it follows that $\Psi =\Psi ^{\ast \ast }$. Considering the weak$^{\ast }$-topology $\sigma \left( L^{\infty }\left( \mathbb{P}\right) ,L^{1}\left( \mathbb{P}\right) \right) $, for $Z=d\mathbb{Q}/d\mathbb{P}$ we have that $$\psi \left( \mathbb{Q}\right) =\Psi \left( Z\right) =\Psi ^{\ast \ast }\left( Z\right) =\sup\limits_{U\in L^{\infty }\left( \mathbb{P}\right) }\left\{ \int Z\left( -U\right) d\mathbb{P}-\Psi ^{\ast }\left( -U\right) \right\} =\psi _{\rho }^{\ast }\left( \mathbb{Q}\right) .$$$\Box $

1. As pointed out in Remark \[Remarkpsi(Q)=+oo\_for\_Q\_not\_<<\], we have that $$\mathbb{Q}\in \mathcal{Q}_{cont}\setminus \mathcal{Q}_{cont}^{\ll }\Longrightarrow \psi _{\rho }^{\ast }\left( \mathbb{Q}\right) =+\infty =\psi \left( \mathbb{Q}\right).$$ Therefore, under the conditions of Lemma \[static minimal penalty funct. in Q(<<) <=>\] $\left( b\right) $ the penalty function $\psi $ might differ from $\psi _{\rho }^{\ast }$ on $\mathcal{Q}_{cont}^{\ll }\setminus \mathcal{Q}_{\ll }.$ For instance, the penalty function defined as $\psi \left( \mathbb{Q}\right) :=\infty \times \mathbf{1}_{\mathcal{Q}_{cont}\setminus \mathcal{Q}_{\ll }(\mathbb{P})}\left( \mathbb{Q}\right) $ leads to the worst case risk measure $\rho (X):=\sup\nolimits_{\mathbb{Q}\in \mathcal{Q}_{\ll }(\mathbb{P})}\mathbb{E}_{\mathbb{Q}}\left[ -X\right] $, which has as minimal penalty the function $$\psi _{\rho }^{\ast }\left( \mathbb{Q}\right) =\infty \times \mathbf{1}_{\mathcal{Q}_{cont}\setminus \mathcal{Q}_{cont}^{\ll }}\left( \mathbb{Q}\right).$$
2. Note that the total variation distance $d_{TV}\left( \mathbb{Q}^{1},\mathbb{Q}^{2}\right) :=\sup_{A\in \mathcal{F}}\left\vert \mathbb{Q}^{1}\left[ A\right] -\mathbb{Q}^{2}\left[ A\right] \right\vert $, with $\mathbb{Q}^{1},\;\mathbb{Q}^{2}\in \mathcal{Q}_{\ll }$, fulfills $d_{TV}\left( \mathbb{Q}^{1},\mathbb{Q}^{2}\right) \leq \left\Vert d\mathbb{Q}^{1}/d\mathbb{P}-d\mathbb{Q}^{2}/d\mathbb{P}\right\Vert _{L^{1}}$. Therefore, the minimal penalty function is lower semicontinuous in the total variation topology; see Remark 4.16 (b) p. 163 in [@FoellSch; @2004].

Preliminaries from stochastic calculus\[Sect. Preliminaries\]
=============================================================

Within a probability space which supports a semimartingale with the weak predictable representation property, there is a representation of the density processes of the absolutely continuous probability measures by means of two coefficients. Roughly speaking, this means that the linear space of local martingales is generated by two components. Through these coefficients we can represent every local martingale as a combination of two components, namely a stochastic integral with respect to the continuous part of the semimartingale and an integral with respect to its compensated jump measure. This is of course the case for local martingales, and in particular it holds for the martingales associated with the corresponding density processes. In this section we review those concepts of stochastic calculus that are relevant to understand these representation properties, and prove a continuity property for the quadratic variation of a sequence of uniformly integrable martingales converging in $L^{1}$. This result is one of the contributions of this paper.

Fundamentals of Lévy processes and semimartingales [Subsect:\_Fundamentals\_Levy\_and\_Semimartingales]{}
---------------------------------------------------------------------------------------------------------

Let $\left( \Omega ,\mathcal{F},\mathbb{P}\right) $ be a probability space. We say that $L:=\left\{ L_{t}\right\} _{t\in \mathbb{R}_{+}}$ is a Lévy process for this probability space if it is an adapted càdlàg process with independent stationary increments starting at zero. The filtration considered is $\mathbb{F}:=\left\{ \mathcal{F}_{t}^{\mathbb{P}}\left( L\right) \right\} _{t\in \mathbb{R}_{+}}$, the completion of its natural filtration, i.e. $\mathcal{F}_{t}^{\mathbb{P}}\left( L\right) :=\sigma \left\{ L_{s}:s\leq t\right\} \vee \mathcal{N}$, where $\mathcal{N}$ is the $\sigma $-algebra generated by all $\mathbb{P}$-null sets. The jump measure of $L$ is denoted by $\mu :\Omega \times \left( \mathcal{B}\left( \mathbb{R}_{+}\right) \otimes \mathcal{B}\left( \mathbb{R}_{0}\right) \right) \rightarrow \mathbb{N}$, where $\mathbb{R}_{0}:=\mathbb{R}\setminus \left\{ 0\right\} $. The dual predictable projection of this measure, also known as its Lévy system, satisfies the relation $\mu ^{\mathcal{P}}\left( dt,dx\right) =dt\times \nu \left( dx\right) $, where $\nu \left( \cdot \right) :=\mathbb{E}\left[ \mu \left( \left[ 0,1\right] \times \cdot \right) \right] $ is the intensity or Lévy measure of $L.$ The Lévy-Itô decomposition of $L$ is given by $$L_{t}=bt+W_{t}+\int\limits_{\left[ 0,t\right] \times \left\{ 0<\left\vert x\right\vert \leq 1\right\} }xd\left\{ \mu -\mu ^{\mathcal{P}}\right\} +\int\limits_{\left[ 0,t\right] \times \left\{ \left\vert x\right\vert >1\right\} }x\mu \left( ds,dx\right) .
\label{Levy-Ito_decomposition}$$It implies that $L^{c}=W$ is the Wiener process, and hence $\left[ L^{c}\right] _{t}=t$, where $\left( \cdot \right) ^{c}$ and $\left[ \,\cdot \,\right] $ denote the continuous martingale part and the process of quadratic variation of any semimartingale, respectively. For the predictable quadratic variation we use the notation $\left\langle \,\cdot \,\right\rangle $. Denote by $\mathcal{V}$ the set of càdlàg, adapted processes with finite variation, and let $\mathcal{V}^{+}\subset \mathcal{V}$ be the subset of non-decreasing processes in $\mathcal{V}$ starting at zero. Let $\mathcal{A}\subset \mathcal{V}$ be the class of processes with integrable variation, i.e. $A\in \mathcal{A}$ if and only if $\bigvee_{0}^{\infty }A\in L^{1}\left( \mathbb{P}\right) $, where $\bigvee_{0}^{t}A$ denotes the variation of $A$ over the finite interval $\left[ 0,t\right] $. The subset $\mathcal{A}^{+}=\mathcal{A\cap V}^{+}$ represents those processes which are also increasing, i.e. with non-negative right-continuous increasing trajectories. Furthermore, $\mathcal{A}_{loc}$ (resp. $\mathcal{A}_{loc}^{+}$) is the collection of adapted processes with locally integrable variation (resp. adapted locally integrable increasing processes). For a càdlàg process $X$ we denote by $X_{-}:=\left( X_{t-}\right) $ the left hand limit process, where $X_{0-}:=X_{0}$ by convention, and by $\bigtriangleup X=\left( \bigtriangleup X_{t}\right) $ the jump process $\bigtriangleup X_{t}:=X_{t}-X_{t-}$. Given an adapted càdlàg semimartingale $U$, the jump measure and its dual predictable projection (or compensator) are denoted by $\mu _{U}\left( \left[ 0,t\right] \times A\right) :=\sum_{s\leq t}\mathbf{1}_{A}\left( \triangle U_{s}\right) $ and $\mu _{U}^{\mathcal{P}}$, respectively. Further, we denote by $\mathcal{P}\subset \mathcal{F}\otimes \mathcal{B}\left( \mathbb{R}_{+}\right) $ the predictable $\sigma $-algebra and by $\widetilde{\mathcal{P}}:=\mathcal{P}\otimes \mathcal{B}\left( \mathbb{R}_{0}\right) .$ With some abuse of notation, we write $\theta _{1}\in \widetilde{\mathcal{P}}$ when the function $\theta _{1}:$ $\Omega \times \mathbb{R}_{+}\times \mathbb{R}_{0}\rightarrow \mathbb{R}$ is $\widetilde{\mathcal{P}}$-measurable, and $\theta \in \mathcal{P}$ when $\theta $ is a predictable process. Let $$\begin{array}{clc} \mathcal{L}\left( U^{c}\right) := & \left\{ \theta \in \mathcal{P}:\exists \left\{ \tau _{n}\right\} _{n\in \mathbb{N}}\text{ sequence of stopping times with }\tau _{n}\uparrow \infty \right. & \\ & \left. \text{and }\mathbb{E}\left[ \int\limits_{0}^{\tau _{n}}\theta ^{2}d\left[ U^{c}\right] \right] <\infty \ \forall n\in \mathbb{N}\right\} & \end{array} \label{Def._L(U)}$$be the class of predictable processes $\theta \in \mathcal{P}$ which are integrable with respect to $U^{c}$ in the local martingale sense, and by $$\Lambda \left( U^{c}\right) :=\left\{ \int \theta _{0}dU^{c}:\theta _{0}\in \mathcal{L}\left( U^{c}\right) \right\}$$the linear space of processes which admit a representation as a stochastic integral with respect to $U^{c}$.
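For instance, in the Lévy setting above, where $U=L$ and $U^{c}=W$ with $\left[ W\right] _{t}=t$, it is straightforward to check that this definition reduces to the more familiar description $$\mathcal{L}\left( W\right) =\left\{ \theta \in \mathcal{P}:\int_{0}^{t}\theta _{s}^{2}\,ds<\infty \ \ \mathbb{P}\text{-a.s. for all }t\in \mathbb{R}_{+}\right\} ,$$since one may take the localizing sequence $\tau _{n}:=n\wedge \inf \left\{ t\geq 0:\int_{0}^{t}\theta _{s}^{2}\,ds\geq n\right\} $; each element of $\Lambda \left( W\right) $ is then a continuous local martingale.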
For an integer valued random measure $\mu ^{\prime }$ we denote by $\mathcal{G}\left( \mu ^{\prime }\right) $ the class of $\widetilde{\mathcal{P}}$-measurable processes $\theta _{1}:$ $\Omega \times \mathbb{R}_{+}\times \mathbb{R}_{0}\rightarrow \mathbb{R}$ satisfying the following conditions: $$\begin{array}{cl} \left( i\right) & \theta _{1}\in \widetilde{\mathcal{P}}, \\ \left( ii\right) & \int\limits_{\mathbb{R}_{0}}\left\vert \theta _{1}\left( t,x\right) \right\vert \left( \mu ^{\prime }\right) ^{\mathcal{P}}\left( \left\{ t\right\} ,dx\right) <\infty \ \forall t>0, \\ \left( iii\right) & \text{The process } \\ & \left\{ \sqrt{\sum\limits_{s\leq t}\left\{ \int\limits_{\mathbb{R}_{0}}\theta _{1}\left( s,x\right) \mu ^{\prime }\left( \left\{ s\right\} ,dx\right) -\int\limits_{\mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left( \mu ^{\prime }\right) ^{\mathcal{P}}\left( \left\{ s\right\} ,dx\right) \right\} ^{2}}\right\} _{t\in \mathbb{R}_{+}}\in \mathcal{A}_{loc}^{+}.\end{array}$$The set $\mathcal{G}\left( \mu ^{\prime }\right) $ represents the domain of the functional $\theta _{1}\rightarrow \int \theta _{1}d\left( \mu ^{\prime }-\left( \mu ^{\prime }\right) ^{\mathcal{P}}\right) ,$ which assigns to $\theta _{1}$ the unique purely discontinuous local martingale $M$ with $$\bigtriangleup M_{t}=\int\limits_{\mathbb{R}_{0}}\theta _{1}\left( t,x\right) \mu ^{\prime }\left( \left\{ t\right\} ,dx\right) -\int\limits_{\mathbb{R}_{0}}\theta _{1}\left( t,x\right) \left( \mu ^{\prime }\right) ^{\mathcal{P}}\left( \left\{ t\right\} ,dx\right) .$$ We use the notation $\int \theta _{1}d\left( \mu ^{\prime }-\left( \mu ^{\prime }\right) ^{\mathcal{P}}\right) $ to denote the value of this functional at $\theta _{1}$. It is important to point out that this functional is not, in general, the integral with respect to the difference of two measures. For a detailed exposition on these topics see He, Wang and Yan [@HeWanYan] or Jacod and Shiryaev [Jcd&Shry 2003]{}, which are our basic references. In particular, for the Lévy process $L$ with jump measure $\mu $, $$\mathcal{G}\left( \mu \right) \equiv \left\{ \theta _{1}\in \widetilde{\mathcal{P}}:\left\{ \sqrt{\sum\limits_{s\leq t}\left\{ \theta _{1}\left( s,\triangle L_{s}\right) \right\} ^{2}\mathbf{1}_{\mathbb{R}_{0}}\left( \triangle L_{s}\right) }\right\} _{t\in \mathbb{R}_{+}}\in \mathcal{A}_{loc}^{+}\right\} , \label{G(miu) Definition}$$since $\mu ^{\mathcal{P}}\left( \left\{ t\right\} \times A\right) =0$, for any Borel set $A$ of $\mathbb{R}_{0}$. We say that the semimartingale $U$ has the *weak property of predictable representation* when $$\mathcal{M}_{loc,0}=\Lambda \left( U^{c}\right) +\left\{ \int \theta _{1}d\left( \mu _{U}-\mu _{U}^{\mathcal{P}}\right) :\theta _{1}\in \mathcal{G}\left( \mu _{U}\right) \right\} ,\ \label{Def_weak_predictable_repres.}$$where the previous sum is the linear sum of the vector spaces, and $\mathcal{M}_{loc,0}$ is the linear space of local martingales starting at zero. Let $\mathcal{M}$ and $\mathcal{M}_{\infty }$ denote the classes of càdlàg martingales and càdlàg uniformly integrable martingales, respectively. The following lemma is interesting in itself for understanding the continuity properties of the quadratic variation of a convergent sequence of uniformly integrable martingales. It will play a central role in the proof of the lower semicontinuity of the penalization function introduced in Section \[Sect Penalty Function for densities\].
Observe that the assertion of this lemma is valid in a general filtered probability space and not only for the completed natural filtration of the Lévy process introduced above. \[E\[|Mn-M|\]->0=>\[Mn-M\](oo)->0\_in\_P\]For $\left\{ M^{\left( n\right) }\right\} _{n\in \mathbb{N}}\subset \mathcal{M}_{\infty }$ and $M\in \mathcal{M}_{\infty }$ the following implication holds $$M_{\infty }^{\left( n\right) }\overset{L^{1}}{\underset{n\rightarrow \infty }{\longrightarrow }}M_{\infty }\Longrightarrow \left[ M^{\left( n\right) }-M\right] _{\infty }\overset{\mathbb{P}}{\longrightarrow }0.$$Moreover,$$M_{\infty }^{\left( n\right) }\overset{L^{1}}{\underset{n\rightarrow \infty }{\longrightarrow }}M_{\infty }\Longrightarrow \left[ M^{\left( n\right) }-M\right] _{t}\overset{\mathbb{P}}{\underset{n\rightarrow \infty }{\longrightarrow }}0\;\; \forall t.$$ *Proof.* From the $L^{1}$ convergence of $M_{\infty }^{\left( n\right) }$ to $M_{\infty }$, we have that $\{M_{\infty }^{\left( n\right) }\}_{n\in \mathbb{N}}\cup \left\{ M_{\infty }\right\} $ is uniformly integrable, which is equivalent to the existence of a convex and increasing function $G:[0,+\infty )\rightarrow \lbrack 0,+\infty )$ such that $$\left( i\right) \quad \lim_{x\rightarrow \infty }\frac{G\left( x\right) }{x}=\infty ,$$and $$\left( ii\right) \quad \sup_{n\in \mathbb{N}}\mathbb{E}\left[ G\left( \left\vert M_{\infty }^{\left( n\right) }\right\vert \right) \right] \vee \mathbb{E}\left[ G\left( \left\vert M_{\infty }\right\vert \right) \right] <\infty .$$Now, define the stopping times $$\tau _{k}^{n}:=\inf \left\{ u>0:\sup_{t\leq u}\left\vert M_{t}^{\left( n\right) }-M_{t}\right\vert \geq k\right\} .$$Observe that the estimate $\sup_{n\in \mathbb{N}}\mathbb{E}\left[ G\left( \left\vert M_{\tau _{k}^{n}}^{\left( n\right) }\right\vert \right) \right] \leq \sup_{n\in \mathbb{N}}\mathbb{E}\left[ G\left( \left\vert M_{\infty }^{\left( n\right) }\right\vert \right) \right] $ implies the uniform integrability of $\left\{ M_{\tau _{k}^{n}}^{\left( n\right) }\right\} _{n\in \mathbb{N}}$ for each fixed $k$. Since any uniformly integrable càdlàg martingale is of class $\mathcal{D}$, the uniform integrability of $\left\{ M_{\tau _{k}^{n}}\right\} _{n\in \mathbb{N}}$ follows for all $k\in \mathbb{N}$, and hence $\left\{ \sup\nolimits_{t\leq \tau _{k}^{n}}\left\vert M_{t}^{\left( n\right) }-M_{t}\right\vert \right\} _{n\in \mathbb{N}}$ is uniformly integrable. This and the maximal inequality for supermartingales $$\begin{aligned} \mathbb{P}\left[ \sup_{t\in \mathbb{R}_{+}}\left\vert M_{t}^{\left( n\right) }-M_{t}\right\vert \geq \varepsilon \right] &\leq &\frac{1}{\varepsilon }\left\{ \sup_{t\in \mathbb{R}_{+}}\mathbb{E}\left[ \left\vert M_{t}^{\left( n\right) }-M_{t}\right\vert \right] \right\} \\ &\leq &\frac{1}{\varepsilon }\mathbb{E}\left[ \left\vert M_{\infty }^{\left( n\right) }-M_{\infty }\right\vert \right] \longrightarrow 0,\end{aligned}$$yields the convergence of $\left\{ \sup\nolimits_{t\leq \tau _{k}^{n}}\left\vert M_{t}^{\left( n\right) }-M_{t}\right\vert \right\} _{n\in \mathbb{N}}$ in $L^{1}$ to $0$. The second Davis’ inequality [@HeWanYan Thm.
10.28] guarantees that, for some constant $C$, $$\mathbb{E}\left[ \sqrt{\left[ M^{\left( n\right) }-M\right] _{\tau _{k}^{n}}}\right] \leq C\mathbb{E}\left[ \sup\limits_{t\leq \tau _{k}^{n}}\left\vert M_{t}^{\left( n\right) }-M_{t}\right\vert \right] \underset{n\rightarrow \infty }{\longrightarrow }0\quad \forall k\in \mathbb{N},$$and hence $\left[ M^{\left( n\right) }-M\right] _{\tau _{k}^{n}}\underset{n\rightarrow \infty }{\overset{\mathbb{P}}{\longrightarrow }}0$ for all $k\in \mathbb{N}.$ Finally, to prove that $\left[ M^{\left( n\right) }-M\right] _{\infty }\overset{\mathbb{P}}{\rightarrow }0$ we argue by contradiction: if $\left[ M^{\left( n\right) }-M\right] _{\infty }\overset{\mathbb{P}}{\nrightarrow }0$, then there exist $\varepsilon >0$ and $\left\{ n_{k}\right\} _{k\in \mathbb{N}}\subset \mathbb{N}$ with $$d\left( \left[ M^{\left( n_{k}\right) }-M\right] _{\infty },0\right) \geq \varepsilon$$for all $k\in \mathbb{N},$ where $d\left( X,Y\right) :=\inf \left\{ \varepsilon >0:\mathbb{P}\left[ \left\vert X-Y\right\vert >\varepsilon \right] \leq \varepsilon \right\} $ is the Ky Fan metric. We shall denote the subsequence as the original sequence, trying to keep the notation as simple as possible. Using a diagonal argument, a subsequence $\left\{ n_{i}\right\} _{i\in \mathbb{N}}\subset \mathbb{N}$ can be chosen with the property that $d\left( \left[ M^{\left( n_{i}\right) }-M\right] _{\tau _{k}^{n_{i}}},0\right) <\frac{1}{k}$ for all $i\geq k.$ Since $$\lim_{k\rightarrow \infty }\left[ M^{\left( n_{i}\right) }-M\right] _{\tau _{k}^{n_{i}}}=\left[ M^{\left( n_{i}\right) }-M\right] _{\infty }\quad \mathbb{P}-a.s.,$$we can find some $k\left( n_{i}\right) \geq i$ such that $$d\left( \left[ M^{\left( n_{i}\right) }-M\right] _{\tau _{k\left( n_{i}\right) }^{n_{i}}},\left[ M^{\left( n_{i}\right) }-M\right] _{\infty }\right) <\frac{1}{k}.$$Then, using the estimate $$\mathbb{P}\left[ \left\vert \left[ M^{\left( n_{k}\right) }-M\right] _{\tau _{k\left( n_{k}\right) }^{n_{k}}}-\left[ M^{\left( n_{k}\right) }-M\right] _{\tau _{k}^{n_{k}}}\right\vert >\varepsilon \right] \leq \mathbb{P}\left[ \left\{ \sup\limits_{t\in \mathbb{R}_{+}}\left\vert M_{t}^{\left( n_{k}\right) }-M_{t}\right\vert \geq k\right\} \right] ,$$it follows that $$d\left( \left[ M^{\left( n_{k}\right) }-M\right] _{\tau _{k\left( n_{k}\right) }^{n_{k}}},\left[ M^{\left( n_{k}\right) }-M\right] _{\tau _{k}^{n_{k}}}\right) \underset{k\rightarrow \infty }{\longrightarrow }0,$$which yields a contradiction with $\varepsilon \leq d\left( \left[ M^{\left( n_{k}\right) }-M\right] _{\infty },0\right) $. Thus, $\left[ M^{\left( n\right) }-M\right] _{\infty }\overset{\mathbb{P}}{\rightarrow }0.$ The last part of this lemma follows immediately from the first statement. $\Box $ Using Doob's stopping theorem we conclude that, for $M\in \mathcal{M}_{\infty }$ and a stopping time $\tau $, $M^{\tau }\in \mathcal{M}_{\infty };$ therefore the following result follows as a corollary.
\[E\[|(Mn-M)thau|\]->0=>\[Mn-M\]thau->0\_in\_P\]For $\left\{ M^{\left( n\right) }\right\} _{n\in \mathbb{N}}\subset \mathcal{M}_{\infty }$, $M\in \mathcal{M}_{\infty }$ and any stopping time $\tau $ the following holds:$$M_{\tau }^{\left( n\right) }\overset{L^{1}}{\rightarrow }M_{\tau }\Longrightarrow \left[ M^{\left( n\right) }-M\right] _{\tau }\overset{\mathbb{P}}{\longrightarrow }0.$$ *Proof.* $\left[ \left( M^{\left( n\right) }\right) ^{\tau }-M^{\tau }\right] _{\infty }=\left[ M^{\left( n\right) }-M\right] _{\infty }^{\tau }=\left[ M^{\left( n\right) }-M\right] _{\tau }\overset{\mathbb{P}}{\longrightarrow }0.$ $\Box $

Density processes \[Sect. Density\_Processes\]
----------------------------------------------

Given an absolutely continuous probability measure $\mathbb{Q}\ll \mathbb{P}$ in a filtered probability space, where a semimartingale with the weak predictable representation property is defined, the structure of the density process has been studied extensively by several authors; see Theorem 14.41 in He, Wang and Yan [@HeWanYan] or Theorem III.5.19 in Jacod and Shiryaev. Denote by $D_{t}:=\mathbb{E}\left[ \left. \frac{d\mathbb{Q}}{d\mathbb{P}}\right\vert \mathcal{F}_{t}\right] $ the càdlàg version of the density process. For the increasing sequence of stopping times $\tau _{n}:=\inf \left\{ t\geq 0:D_{t}<\frac{1}{n}\right\} $, $n\geq 1$, and $\tau _{0}:=\sup_{n}\tau _{n}$, we have $D_{t}\left( \omega \right) =0$ $\forall t\geq \tau _{0}\left( \omega \right) $ and $D_{t}\left( \omega \right) >0$ $\forall t<\tau _{0}\left( \omega \right) ,$ i.e.$$D=D\mathbf{1}_{[\hspace{-0.05cm}[0,\tau _{0}[\hspace{-0.04cm}[}, \label{D=D1[[0,To[[}$$and the process $$\frac{1}{D_{s-}}\mathbf{1}_{[\hspace{-0.05cm}[D_{-}\not=0]\hspace{-0.04cm}]}\text{ is integrable w.r.t. }D, \label{1/D_integrable_wrt_D}$$where we abuse notation by setting $[\hspace{-0.05cm}[D_{-}\not=0]\hspace{-0.04cm}]:=\left\{ \left( \omega ,t\right) \in \Omega \times \mathbb{R}_{+}:D_{t-}\left( \omega \right) \neq 0\right\} .$ Both conditions $\left( \ref{D=D1[[0,To[[}\right) $ and $\left( \ref{1/D_integrable_wrt_D}\right) $ are necessary and sufficient for a semimartingale to be an *exponential semimartingale* [@HeWanYan Thm. 9.41], i.e. $D=\mathcal{E}\left( Z\right) $, the Doléans-Dade exponential of another semimartingale $Z$. In that case we have $$\tau _{0}=\inf \left\{ t>0:D_{t-}=0\text{ or }D_{t}=0\right\} =\inf \left\{ t>0:\triangle Z_{t}=-1\right\}. \label{Tau0=JumpZ=-1}$$ It is well known that Lévy processes satisfy the weak property of predictable representation [@HeWanYan] when the completed natural filtration is considered. In the following lemma we present the characterization of the density processes in this case.
\[Q<<P =>\] Given an absolutely continuous probability measure $\mathbb{Q}\ll \mathbb{P}$, there exist coefficients $\theta _{0}\in \mathcal{L}\left( W\right) \ $and $\theta _{1}\in \mathcal{G}\left( \mu \right) $ such that $$\frac{d\mathbb{Q}_{t}}{d\mathbb{P}_{t}}=\frac{d\mathbb{Q}_{t}}{d\mathbb{P}_{t}}\mathbf{1}_{[\hspace{-0.05cm}[0,\tau _{0}[\hspace{-0.04cm}[}=\mathcal{E}\left( Z^{\theta }\right) \left( t\right) , \label{Dt=exp(Zt)}$$where $Z_{t}^{\theta }\in \mathcal{M}_{loc}$ is the local martingale given by$$Z_{t}^{\theta }:=\int\limits_{]0,t]}\theta _{0}dW+\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right) , \label{Def._Ztheta(t)}$$and $\mathcal{E}$ represents the Doléans-Dade exponential of a semimartingale. The coefficients $\theta _{0}$ and $\theta _{1}$ are $dt$-a.s. and $\mu _{\mathbb{P}}^{\mathcal{P}}\left( ds,dx\right) $-a.s. unique on $[\hspace{-0.05cm}[0,\tau _{0}]\hspace{-0.04cm}]$ and $[\hspace{-0.05cm}[0,\tau _{0}]\hspace{-0.04cm}]\times \mathbb{R}_{0}$, respectively, for $\mathbb{P}$-almost all $\omega $. Furthermore, the coefficients can be chosen with $\theta _{0}=0$ on $]\hspace{-0.05cm}]\tau _{0},\infty \lbrack \hspace{-0.04cm}[$ and $\theta _{1}=0$ on $]\hspace{-0.05cm}]\tau _{0},\infty \lbrack \hspace{-0.04cm}[\times \mathbb{R}_{0}$. *Proof.* We only address the uniqueness of the coefficients $\theta _{0}$ and $\theta _{1},$ because the representation follows from $\left( \ref{D=D1[[0,To[[}\right) $ and $\left( \ref{1/D_integrable_wrt_D}\right) .$ Let us assume that we have two possible vectors $\theta :=\left( \theta _{0},\theta _{1}\right) $ and $\theta ^{\prime }:=\left( \theta _{0}^{\prime },\theta _{1}^{\prime }\right) $ satisfying the representation, i.e.
$$\begin{array}{rl} D_{u}\mathbf{1}_{[\hspace{-0.05cm}[0,\tau _{0}[\hspace{-0.04cm}[} & =\int D_{t-}d\{\int\limits_{]0,t]}\theta _{0}\left( s\right) dW_{s}+\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right) \} \\ & =\int D_{t-}d\{\int\limits_{]0,t]}\theta _{0}^{\prime }\left( s\right) dW_{s}+\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}^{\prime }\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right) \},\end{array}$$and thus$$\begin{aligned} \triangle D_{t} &=&D_{t-}\triangle \left( \int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right) \right) \\ &=&D_{t-}\triangle \left( \int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}^{\prime }\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right) \right) .\end{aligned}$$Since $D_{t-}>0$ on $[\hspace{-0.05cm}[0,\tau _{0}[\hspace{-0.04cm}[,$ it follows that $$\triangle \left( \int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right) \right) =\triangle \left( \int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}^{\prime }\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right) \right) .$$ Since two purely discontinuous local martingales with the same jumps are equal, it follows that $$\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right) =\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}^{\prime }\left( s,x\right) \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right)$$and thus $$\int D_{t-}d\{\int\limits_{]0,t]}\theta _{0}\left( s\right) dW_{s}\}=\int D_{t-}d\{\int\limits_{]0,t]}\theta _{0}^{\prime }\left( s\right) dW_{s}\}.$$Then, $$0=\left[ \int D_{s-}d\left\{ \int\nolimits_{]0,s]}\left( \theta _{0}^{\prime }\left( u\right) -\theta _{0}\left( u\right) \right) dW_{u}\right\} \right] _{t}=\int\limits_{]0,t]}\left( D_{s-}\right) ^{2}\left\{ \theta _{0}^{\prime }\left( s\right) -\theta _{0}\left( s\right) \right\} ^{2}ds$$and thus $\theta _{0}^{\prime }=\theta _{0}\ dt$-a.s. on $[\hspace{-0.05cm}[0,\tau _{0}]\hspace{-0.04cm}]$ for $\mathbb{P}$-almost all $\omega $. On the other hand, the equality $$\begin{aligned} 0 &=&\left\langle \int \left\{ \theta _{1}^{\prime }\left( s,x\right) -\theta _{1}\left( s,x\right) \right\} \left( \mu \left( ds,dx\right) -ds\ \nu \left( dx\right) \right) \right\rangle _{t} \\ &=&\int\limits_{]0,t]\times \mathbb{R}_{0}}\left\{ \theta _{1}^{\prime }\left( s,x\right) -\theta _{1}\left( s,x\right) \right\} ^{2}\nu \left( dx\right) ds\end{aligned}$$implies that $\theta _{1}\left( s,x\right) =\theta _{1}^{\prime }\left( s,x\right) \quad \mu _{\mathbb{P}}^{\mathcal{P}}\left( ds,dx\right) $-a.s. on $[\hspace{-0.05cm}[0,\tau _{0}]\hspace{-0.04cm}]\times \mathbb{R}_{0}$ for $\mathbb{P}$-almost all $\omega $. $\Box $ For $\mathbb{Q}\ll \mathbb{P}$ the function $\theta _{1}\left( \omega ,t,x\right) $ described in Lemma \[Q<<P =>\] determines the density of the predictable projection $\mu _{\mathbb{Q}}^{\mathcal{P}}\left( dt,dx\right) $ with respect to $\mu _{\mathbb{P}}^{\mathcal{P}}\left( dt,dx\right) $ (see He, Wang and Yan [@HeWanYan] or Jacod and Shiryaev).
More precisely, for $B\in \left( \mathcal{B}\left( \mathbb{R}_{+}\right) \otimes \mathcal{B}\left( \mathbb{R}_{0}\right) \right) $ we have $$\mu _{\mathbb{Q}}^{\mathcal{P}}\left( \omega ,B\right) =\int_{B}\left( 1+\theta _{1}\left( \omega ,t,x\right) \right) \mu _{\mathbb{P}}^{\mathcal{P}}\left( dt,dx\right) . \label{Q<<P=>_miu_wrt_Q}$$ In what follows we restrict ourselves to the time interval $\left[ 0,T\right] ,$ for some $T>0$ fixed, and we take $\mathcal{F}=\mathcal{F}_{T}.$ The corresponding classes of density processes associated to $\mathcal{Q}_{\ll }(\mathbb{P})$ and $\mathcal{Q}_{\approx }\left( \mathbb{P}\right) $ are denoted by $\mathcal{D}_{\ll }\left( \mathbb{P}\right) $ and $\mathcal{D}_{\approx }\left( \mathbb{P}\right) $, respectively. For instance, in the former case $$\mathcal{D}_{\ll }\left( \mathbb{P}\right) :=\left\{ D=\left\{ D_{t}\right\} _{t\in \left[ 0,T\right] }:\exists \mathbb{Q}\in \mathcal{Q}_{\ll }\left( \mathbb{P}\right) \text{ with }D_{t}=\left. \frac{d\mathbb{Q}}{d\mathbb{P}}\right\vert _{\mathcal{F}_{t}}\right\} , \label{Def._D<<}$$and the processes in this set are of the form $$\begin{array}{rl} D_{t}= & \exp \left\{ \int\limits_{]0,t]}\theta _{0}dW+\int\limits_{]0,t]\times \mathbb{R}_{0}}\theta _{1}\left( s,x\right) \left( \mu \left( ds,dx\right) -\nu \left( dx\right) ds\right) -\frac{1}{2}\int\limits_{]0,t]}\left( \theta _{0}\right) ^{2}ds\right\} \times \\ & \times \exp \left\{ \int\limits_{]0,t]\times \mathbb{R}_{0}}\left\{ \ln \left( 1+\theta _{1}\left( s,x\right) \right) -\theta _{1}\left( s,x\right) \right\} \mu \left( ds,dx\right) \right\}\end{array} \label{D(t) explicita}$$for $\theta _{0}\in \mathcal{L}\left( W\right) $ and $\theta _{1}\in \mathcal{G}\left( \mu \right) $. The set $\mathcal{D}_{\ll }\left( \mathbb{P}\right) $ is characterized as follows. \[D<<\_<=>\] $D$ belongs to $\mathcal{D}_{\ll }\left( \mathbb{P}\right) $ if and only if there are $\theta _{0}\in \mathcal{L}\left( W\right) $ and $\theta _{1}\in \mathcal{G}\left( \mu \right) $ with $\theta _{1}\geq -1$ such that $D_{t}=\mathcal{E}\left( Z^{\theta }\right) \left( t\right) \ \mathbb{P}$-a.s. $\forall t\in \left[ 0,T\right] $ and $\mathbb{E}_{\mathbb{P}}\left[ \mathcal{E}\left( Z^{\theta }\right) \left( t\right) \right] =1\ \forall t\geq 0$, where $Z^{\theta }\left( t\right) $ is defined by $\left( \ref{Def._Ztheta(t)}\right) .$ *Proof.* The necessity follows from Lemma \[Q<<P =>\]. Conversely, let $\theta _{0}\in \mathcal{L}\left( W\right) $ and $\theta _{1}\in \mathcal{G}\left( \mu \right) $ be as in the statement. Since $D_{t}=1+\int\nolimits_{]0,t]}D_{s-}dZ_{s}^{\theta }\in \mathcal{M}_{loc}$ is a nonnegative local martingale, it is a supermartingale, with constant expectation by our assumptions. Therefore, it is a martingale, and hence the density process of an absolutely continuous probability measure. $\Box$ Since density processes are essentially uniformly integrable martingales, using Lemma \[E\[|Mn-M|\]->0=>\[Mn-M\](oo)->0\_in\_P\] and Corollary \[E\[|(Mn-M)thau|\]->0=>\[Mn-M\]thau->0\_in\_P\] the following proposition follows immediately. \[E\[|Dn-D|\]->0 => \[Dn-D\](T)->0\_in\_P\] Let $\left\{ \mathbb{Q}^{\left( n\right) }\right\} _{n\in \mathbb{N}}$ be a sequence in $\mathcal{Q}_{\ll }(\mathbb{P})$, with $D_{T}^{\left( n\right) }:=\left. \frac{d\mathbb{Q}^{\left( n\right) }}{d\mathbb{P}}\right\vert _{\mathcal{F}_{T}}$ converging to $D_{T}:=\left.
\frac{d\mathbb{Q}}{d\mathbb{P}}\right\vert _{\mathcal{F}_{T}}$ in $L^{1}\left( \mathbb{P}\right) $. For the corresponding density processes $D_{t}^{\left( n\right) }:=\mathbb{E}_{\mathbb{P}}\left[ D_{T}^{\left( n\right) }\left\vert \mathcal{F}_{t}\right. \right] $ and $D_{t}:=\mathbb{E}_{\mathbb{P}}\left[ D_{T}\left\vert \mathcal{F}_{t}\right. \right] $, for $t\in \left[ 0,T\right] $, we have$$\left[ D^{\left( n\right) }-D\right] _{T}\overset{\mathbb{P}}{\rightarrow }0.$$

Penalty functions for densities\[Sect Penalty Function for densities\]
======================================================================

Now, we shall introduce a family of penalty functions for the density processes described in Section \[Sect. Density\_Processes\], for the absolutely continuous measures $\mathbb{Q}\in \mathcal{Q}_{\ll }\left( \mathbb{P}\right) $. Let $h:\mathbb{R}_{+}\rightarrow \mathbb{R}_{+}$ and $h_{0},h_{1}:\mathbb{R}\rightarrow \mathbb{R}_{+}$ be convex functions with $0=h\left( 0\right) =h_{0}\left( 0\right) =h_{1}\left( 0\right) $. Define the penalty function, with $\tau_0$ as in (\[Tau0=JumpZ=-1\]), by $$\begin{array}{rl} \vartheta \left( \mathbb{Q}\right) := & \mathbb{E}_{\mathbb{Q}}\left[ \int\limits_{0}^{T\wedge \tau _{0}}h\left( h_{0}\left( \theta _{0}\left( t\right) \right) +\int\nolimits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}\left( \theta _{1}\left( t,x\right) \right) \nu \left( dx\right) \right) dt\right] \mathbf{1}_{\mathcal{Q}_{\ll }}\left( \mathbb{Q}\right) \\ & +\infty \times \mathbf{1}_{\mathcal{Q}_{cont}\setminus \mathcal{Q}_{\ll }}\left( \mathbb{Q}\right) ,\end{array} \label{Def._penalty_theta}$$ where $\theta _{0},$ $\theta _{1}$ are the processes associated to $\mathbb{Q}$ from Lemma \[Q<<P =>\] and $\delta :\mathbb{R}_{+}\times \mathbb{R}_{0}\rightarrow \mathbb{R}_{+}$ is an arbitrary but fixed nonnegative function with $\delta \left( t,x\right) \in \mathcal{G}\left( \mu \right) $. Since $\theta _{0}\equiv 0$ on $[\hspace{-0.05cm}[\tau _{0},\infty \lbrack \hspace{-0.04cm}[$ and $\theta _{1}\equiv 0$ on $[\hspace{-0.05cm}[\tau _{0},\infty \lbrack \hspace{-0.04cm}[\times \mathbb{R}_{0}$, we have, from the conditions imposed on $h,h_{0},$ and $h_{1}$, $$\begin{array}{rl} \vartheta \left( \mathbb{Q}\right) = & \mathbb{E}_{\mathbb{Q}}\left[ \int\limits_{0}^{T}h\left( h_{0}\left( \theta _{0}\left( t\right) \right) +\int\nolimits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}\left( \theta _{1}\left( t,x\right) \right) \nu \left( dx\right) \right) dt\right] \mathbf{1}_{\mathcal{Q}_{\ll }}\left( \mathbb{Q}\right) \\ & +\infty \times \mathbf{1}_{\mathcal{Q}_{cont}\setminus \mathcal{Q}_{\ll }}\left( \mathbb{Q}\right) .\end{array} \label{Def._penalty_theta_(2)}$$Further, define the convex measure of risk $$\rho \left( X\right) :=\sup_{\mathbb{Q\in }\mathcal{Q}_{\ll }(\mathbb{P})}\left\{ \mathbb{E}_{\mathbb{Q}}\left[ -X\right] -\vartheta \left( \mathbb{Q}\right) \right\} . \label{rho def.}$$Notice that $\rho $ is a normalized and sensitive measure of risk. For each class of probability measures introduced so far, the subclass of those measures with a finite penalization is considered. We will denote by $\mathcal{Q}^{\vartheta },$ $\mathcal{Q}_{\ll }^{\vartheta }(\mathbb{P})$ and $\mathcal{Q}_{\approx }^{\vartheta }(\mathbb{P})$ the respective subclasses, i.e.
$$\mathcal{Q}^{\vartheta }:=\left\{ \mathbb{Q}\in \mathcal{Q}:\vartheta \left( \mathbb{Q}\right) <\infty \right\} ,\ \mathcal{Q}_{\ll }^{\vartheta }(\mathbb{P}):=\mathcal{Q}^{\vartheta }\cap \mathcal{Q}_{\ll }(\mathbb{P})\text{ and }\mathcal{Q}_{\approx }^{\vartheta }(\mathbb{P}):=\mathcal{Q}^{\vartheta }\cap \mathcal{Q}_{\approx }(\mathbb{P}). \label{Def._Qdelta(P)}$$Notice that $\mathcal{Q}_{\approx }^{\vartheta }(\mathbb{P})\neq \varnothing .$ The next theorem establishes the minimality on $\mathcal{Q}_{\ll }\left( \mathbb{P}\right) $ of the penalty function introduced above for the risk measure $\rho $. Its proof is based on the sufficient conditions given in Lemma \[static minimal penalty funct. in Q(<<) <=>\]. \[theta=minimal penalty function\] The penalty function $\vartheta $ defined in $\left( \ref{Def._penalty_theta}\right) $ is equal to the minimal penalty function of the convex risk measure $\rho $, given by $\left( \ref{rho def.}\right) $, on $\mathcal{Q}_{\ll }\left( \mathbb{P}\right) $, i.e.$$\vartheta \mathbf{1}_{\mathcal{Q}_{\ll }\left( \mathbb{P}\right) }=\psi _{\rho }^{\ast }\mathbf{1}_{\mathcal{Q}_{\ll }\left( \mathbb{P}\right) }.$$ *Proof:* From Lemma \[static minimal penalty funct. in Q(<<) <=>\] $\left( b\right) $, we need to show that the penalization $\vartheta $ is proper and convex, and that the corresponding identification, defined as $\Theta \left( Z\right) :=\vartheta \left( \mathbb{Q}\right) $ if $Z\in \delta \left( \mathcal{Q}_{\ll }\left( \mathbb{P}\right) \right) :=\left\{ Z\in L^{1}\left( \mathbb{P}\right) :Z=d\mathbb{Q}/d\mathbb{P}\text{ with }\mathbb{Q}\in \mathcal{Q}_{\ll }\left( \mathbb{P}\right) \right\} $ and $\Theta \left( Z\right) :=\infty $ on $L^{1}\setminus \delta \left( \mathcal{Q}_{\ll }\left( \mathbb{P}\right) \right) $, is lower semicontinuous with respect to the strong topology. First, observe that the function $\vartheta $ is proper, since $\vartheta \left( \mathbb{P}\right) =0$. To verify the convexity of $\vartheta $, choose $\mathbb{Q}$, $\widetilde{\mathbb{Q}}\in \mathcal{Q}_{\ll }^{\vartheta }$ and define $\mathbb{Q}^{\lambda }:=\lambda \mathbb{Q}+\left( 1-\lambda \right) \widetilde{\mathbb{Q}}$, for $\lambda \in \left[ 0,1\right] $. Notice that the corresponding density process can be written as $D^{\lambda }:=\dfrac{d\mathbb{Q}^{\lambda }}{d\mathbb{P}}=\lambda D+\left( 1-\lambda \right) \widetilde{D}$ $\mathbb{P}$-a.s.
Now, from Lemma \[Q<<P =>\], let $\left( \theta _{0},\theta _{1}\right) $ and $(\widetilde{\theta }_{0},\widetilde{\theta }_{1})$ be the processes associated to $\mathbb{Q}$ and $\widetilde{\mathbb{Q}}$, respectively, and observe that from$$D_{t}=1+\int\limits_{\left[ 0,t\right] }D_{s-}\theta _{0}\left( s\right) dW_{s}+\int\limits_{\left[ 0,t\right] \times \mathbb{R}_{0}}D_{s-}\theta _{1}\left( s,x\right) d\left( \mu \left( ds,dx\right) -ds\nu \left( dx\right) \right)$$and the corresponding expression for $\widetilde{D}$ we have, for $\tau _{n}^{\lambda }:=\inf \left\{ t\geq 0:D_{t}^{\lambda }\leq \frac{1}{n}\right\} $, $$\int\limits_{0}^{t\wedge \tau _{n}^{\lambda }}\left( D_{s-}^{\lambda }\right) ^{-1}dD_{s}^{\lambda }=\int\limits_{0}^{t\wedge \tau _{n}^{\lambda }}\tfrac{\lambda D_{s-}\theta _{0}\left( s\right) +\left( 1-\lambda \right) \widetilde{D}_{s-}\widetilde{\theta }_{0}\left( s\right) }{\left( \lambda D_{s-}+\left( 1-\lambda \right) \widetilde{D}_{s-}\right) }dW_{s}+\int\limits_{\left[ 0,t\wedge \tau _{n}^{\lambda }\right] \times \mathbb{R}_{0}}\tfrac{\lambda D_{s-}\theta _{1}\left( s,x\right) +\left( 1-\lambda \right) \widetilde{D}_{s-}\widetilde{\theta }_{1}\left( s,x\right) }{\left( \lambda D_{s-}+\left( 1-\lambda \right) \widetilde{D}_{s-}\right) }d\left( \mu -\mu _{\mathbb{P}}^{\mathcal{P}}\right) .$$On the other hand, the weak predictable representation property of the local martingale $\int\nolimits_{0}^{t\wedge \tau _{n}^{\lambda }}\left( D_{s-}^{\lambda }\right) ^{-1}dD_{s}^{\lambda }$ yields $$\int\limits_{0}^{t\wedge \tau _{n}^{\lambda }}\left( D_{s-}^{\lambda }\right) ^{-1}dD_{s}^{\lambda }=\int\limits_{0}^{t\wedge \tau _{n}^{\lambda }}\theta _{0}^{\lambda }\left( s\right) dW_{s}+\int\limits_{\left[ 0,t\wedge \tau _{n}^{\lambda }\right] \times \mathbb{R}_{0}}\theta _{1}^{\lambda }\left( s,x\right) d\left( \mu -\mu _{\mathbb{P}}^{\mathcal{P}}\right) ,$$where the identification $$\theta _{0}^{\lambda }\left( s\right) =\frac{\lambda D_{s-}\theta _{0}\left( s\right) +\left( 1-\lambda \right) \widetilde{D}_{s-}\widetilde{\theta }_{0}\left( s\right) }{\left( \lambda D_{s-}+\left( 1-\lambda \right) \widetilde{D}_{s-}\right) },$$and $$\theta _{1}^{\lambda }\left( s,x\right) =\frac{\lambda D_{s-}\theta _{1}\left( s,x\right) +\left( 1-\lambda \right) \widetilde{D}_{s-}\widetilde{\theta }_{1}\left( s,x\right) }{\left( \lambda D_{s-}+\left( 1-\lambda \right) \widetilde{D}_{s-}\right) }$$is possible thanks to the uniqueness of the representation in Lemma [Q<<P =>]{}. The convexity now follows from the convexity of $h$, $h_{0}$ and $h_{1}$, using the fact that any convex function is continuous in the interior of its domain.
More specifically, $$\begin{array}{rl} \vartheta \left( \mathbb{Q}^{\lambda }\right) \leq & \mathbb{E}_{\mathbb{Q}^{\lambda }}\left[ \int\limits_{\left[ 0,T\right] }\tfrac{\lambda D_{s}}{\left( \lambda D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}\right) }h\left( h_{0}\left( \theta _{0}\left( s\right) \right) +\int\limits_{\mathbb{R}_{0}}\delta \left( s,x\right) h_{1}\left( \theta _{1}\left( s,x\right) \right) \nu \left( dx\right) \right) ds\right] \\ & +\mathbb{E}_{\mathbb{Q}^{\lambda }}\left[ \int\limits_{\left[ 0,T\right] }\tfrac{\left( 1-\lambda \right) \widetilde{D}_{s}}{\left( \lambda D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}\right) }h\left( h_{0}\left( \widetilde{\theta }_{0}\left( s\right) \right) +\int\limits_{\mathbb{R}_{0}}\delta \left( s,x\right) h_{1}(\widetilde{\theta }_{1}\left( s,x\right) )\nu \left( dx\right) \right) ds\right] \\ = & \int\limits_{\left[ 0,T\right] }\int\limits_{\Omega }\dfrac{\lambda D_{s}}{\left( \lambda D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}\right) }h\left( h_{0}\left( \theta _{0}\left( s\right) \right) +\int\limits_{\mathbb{R}_{0}}\delta \left( s,x\right) h_{1}\left( \theta _{1}\left( s,x\right) \right) \nu \left( dx\right) \right) \\ & \ \ \ \ \ \ \ \ \times \left( \lambda D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}\right) \mathbf{1}_{\left\{ \lambda D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}>0\right\} }d\mathbb{P}ds \\ & +\int\limits_{\left[ 0,T\right] }\int\limits_{\Omega }\dfrac{\left( 1-\lambda \right) \widetilde{D}_{s}}{\left( \lambda D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}\right) }h\left( h_{0}\left( \widetilde{\theta }_{0}\left( s\right) \right) +\int\limits_{\mathbb{R}_{0}}\delta \left( s,x\right) h_{1}(\widetilde{\theta }_{1}\left( s,x\right) )\nu \left( dx\right) \right) \\ & \ \ \ \ \ \ \ \ \times \left( \lambda D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}\right) \mathbf{1}_{\left\{ \lambda D_{s}+\left( 1-\lambda \right) \widetilde{D}_{s}>0\right\} }d\mathbb{P}ds \\ = & \lambda \vartheta \left( \mathbb{Q}\right) +\left( 1-\lambda \right) \vartheta \left( \widetilde{\mathbb{Q}}\right) ,\end{array}$$where we used that $\left\{ \int\nolimits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}\left( \theta _{1}\left( t,x\right) \right) \nu \left( dx\right) \right\} _{t\in \mathbb{R}_{+}}$ and $\left\{ \int\nolimits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}(\widetilde{\theta }_{1}\left( t,x\right) )\nu \left( dx\right) \right\} _{t\in \mathbb{R}_{+}}$ are predictable processes. It remains to prove the lower semicontinuity of $\Theta $. As pointed out earlier, it is enough to consider a sequence of densities $Z^{\left( n\right) }:=\frac{d\mathbb{Q}^{\left( n\right) }}{d\mathbb{P}}\in \delta \left( \mathcal{Q}_{\ll }\left( \mathbb{P}\right) \right) $ converging in $L^{1}\left( \mathbb{P}\right) $ to $Z:=\frac{d\mathbb{Q}}{d\mathbb{P}}$. Denote the corresponding density processes by $D^{\left( n\right) }$ and $D$, respectively. 
In Proposition \[E\[|Dn-D|\]->0 => \[Dn-D\](T)->0\_in\_P\] it was verified that the quadratic variation $$\begin{aligned} \left[ D^{\left( n\right) }-D\right] _{T} &=&\int\limits_{0}^{T}\left\{ D_{s-}^{\left( n\right) }\theta _{0}^{\left( n\right) }\left( s\right) -D_{s-}\theta _{0}\left( s\right) \right\} ^{2}ds \\ &&+\int\limits_{\left[ 0,T\right] \times \mathbb{R}_{0}}\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}\mu \left( ds,dx\right)\end{aligned}$$ converges to zero in probability. This implies that $$\left. \begin{array}{cc} & \int\nolimits_{0}^{T}\left\{ D_{s-}^{\left( n\right) }\theta _{0}^{\left( n\right) }\left( s\right) -D_{s-}\theta _{0}\left( s\right) \right\} ^{2}ds\overset{\mathbb{P}}{\rightarrow }0, \\ \text{and } & \\ & \int\limits_{\left[ 0,T\right] \times \mathbb{R}_{0}}\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}\mu \left( ds,dx\right) \overset{\mathbb{P}}{\rightarrow }0.\end{array}\right\} \label{[]=>*}$$ Then, for an arbitrary but fixed subsequence, there exists a sub-subsequence such that $\mathbb{P}$-a.s. $$\left\{ D_{s-}^{\left( n\right) }\theta _{0}^{\left( n\right) }\left( s\right) -D_{s-}\theta _{0}\left( s\right) \right\} ^{2}\overset{L^{1}\left( \lambda \right) }{\longrightarrow }0$$ and $$\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}\overset{L^{1}\left( \mu \right) }{\longrightarrow }0,$$ where for simplicity the sub-subsequence is denoted as the original sequence. We now claim that along this sub-subsequence it also holds that $$\left\{ \begin{array}{c} D_{s-}^{\left( n\right) }\theta _{0}^{\left( n\right) }\left( s\right) \overset{\lambda \times \mathbb{P}\text{-a.s.}}{\longrightarrow }D_{s-}\theta _{0}\left( s\right) , \\ \smallskip \ \\ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) \overset{\mu \times \mathbb{P}\text{-a.s.}}{\longrightarrow }D_{s-}\theta _{1}\left( s,x\right) .\end{array}\right. \label{[]=>*.1}$$ We first present the argument for the second assertion in $\left( \ref{[]=>*.1}\right) $. Assume the opposite; then there exists $C\in \mathcal{B}\left( \left[ 0,T\right] \right) \otimes \mathcal{B}\left( \mathbb{R}_{0}\right) \otimes \mathcal{F}_{T}$, with $\mu \times \mathbb{P}\left[ C\right] >0$, such that for each $\left( s,x,\omega \right) \in C$ $$\lim_{n\rightarrow \infty }\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}=c\neq 0,$$ or the limit does not exist. Let $C\left( \omega \right) :=\left\{ \left( t,x\right) \in \left[ 0,T\right] \times \mathbb{R}_{0}:\left( t,x,\omega \right) \in C\right\} $ be the $\omega $-section of $C$. Observe that $B:=\left\{ \omega \in \Omega :\mu \left[ C\left( \omega \right) \right] >0\right\} $ has positive probability: $\mathbb{P}\left[ B\right] >0$. From $\left( \ref{[]=>*}\right) $, any arbitrary but fixed subsequence has a sub-subsequence converging $\mathbb{P}$-a.s.
Denoting such a sub-subsequence simply by $n$, we can fix $\omega \in B$ with$$\begin{aligned} &&\int\nolimits_{C\left( \omega \right) }\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}d\mu \left( s,x\right) \\ &\leq &\int\nolimits_{\left[ 0,T\right] \times \mathbb{R}_{0}}\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}d\mu \left( s,x\right) \underset{n\rightarrow \infty }{\longrightarrow }0,\end{aligned}$$and hence $\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}$ converges in $\mu $-measure to $0$ on $C\left( \omega \right) .$ Again, for any subsequence there is a sub-subsequence converging $\mu $-a.s. to $0$. Furthermore, for an arbitrary but fixed $\left( s,x\right) \in C\left( \omega \right) $, when the limit does not exist $$\begin{array}{clc} a & :=\underset{n\rightarrow \infty }{\lim \inf }\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2} & \\ & \neq \underset{n\rightarrow \infty }{\lim \sup }\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2} & =:b,\end{array}$$and we can choose converging subsequences $n\left( i\right) $ and $n\left( j\right) $ with $$\begin{aligned} \underset{i\rightarrow \infty }{\lim }\left\{ D_{s-}^{n\left( i\right) }\theta _{1}^{n\left( i\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2} &=&a \\ \underset{j\rightarrow \infty }{\lim }\left\{ D_{s-}^{n\left( j\right) }\theta _{1}^{n\left( j\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2} &=&b.\end{aligned}$$From the above argument, there are sub-subsequences $n\left( i\left( k\right) \right) $ and $n\left( j\left( k\right) \right) $ such that $$\begin{aligned} a &=&\underset{k\rightarrow \infty }{\lim }\left\{ D_{s-}^{n\left( i\left( k\right) \right) }\theta _{1}^{n\left( i\left( k\right) \right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}=0 \\ b &=&\underset{k\rightarrow \infty }{\lim }\left\{ D_{s-}^{n\left( j\left( k\right) \right) }\theta _{1}^{n\left( j\left( k\right) \right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}=0,\end{aligned}$$which is clearly a contradiction. For the case when $$\underset{n\rightarrow \infty }{\lim }\left\{ D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) -D_{s-}\theta _{1}\left( s,x\right) \right\} ^{2}=c\neq 0,$$the same argument can be used, and get a subsequence converging to $0$, having a contradiction again. Therefore, the second part of our claim in $\left( \ref{[]=>*.1}\right) $ holds. Since $D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) ,\ D_{s-}\theta _{1}\left( s,x\right) \in \mathcal{G}\left( \mu \right) $, we have, in particular, that $D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) \in \widetilde{\mathcal{P}}$ and $D_{s-}\theta _{1}\left( s,x\right) \in \widetilde{\mathcal{P}}$ and hence $C\in \widetilde{\mathcal{P}}$. 
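Recall that $\mu _{\mathbb{P}}^{\mathcal{P}}\left( ds,dx\right) =ds\,\nu \left( dx\right) $ is the (dual) predictable projection, or compensator, of the jump measure $\mu $ under $\mathbb{P}$, so that for every nonnegative $\widetilde{\mathcal{P}}$-measurable function $W$ $$\mathbb{E}\left[ \int\nolimits_{\left[ 0,T\right] \times \mathbb{R}_{0}}W\,d\mu \right] =\mathbb{E}\left[ \int\nolimits_{\left[ 0,T\right] \times \mathbb{R}_{0}}W\,d\mu _{\mathbb{P}}^{\mathcal{P}}\right] ;$$ this is what is applied with $W=\mathbf{1}_{C}$ in the next display.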
From the definition of the predictable projection it follows that $$\begin{aligned} 0 &=&\mu \times \mathbb{P}\left[ C\right] =\int\limits_{\Omega }\int\limits_{\left[ 0,T\right] \times \mathbb{R}_{0}}\mathbf{1}_{C}\left( s,x,\omega \right) d\mu \,d\mathbb{P}=\int\limits_{\Omega }\int\limits_{\left[ 0,T\right] \times \mathbb{R}_{0}}\mathbf{1}_{C}\left( s,x,\omega \right) d\mu _{\mathbb{P}}^{\mathcal{P}}\,d\mathbb{P} \\ &=&\int\limits_{\Omega }\int\limits_{\mathbb{R}_{0}}\int\limits_{\left[ 0,T\right] }\mathbf{1}_{C}\left( s,x,\omega \right) ds\,d\nu \,d\mathbb{P}=\lambda \times \nu \times \mathbb{P}\left[ C\right] ,\end{aligned}$$ and thus $$D_{s-}^{\left( n\right) }\theta _{1}^{\left( n\right) }\left( s,x\right) \overset{\lambda \times \nu \times \mathbb{P}\text{-a.s.}}{\longrightarrow }D_{s-}\theta _{1}\left( s,x\right) .$$ Since $\int\limits_{\Omega \times \left[ 0,T\right] }\left\vert D_{t-}^{\left( n\right) }-D_{t-}\right\vert d\mathbb{P}\times dt=\int\limits_{\Omega \times \left[ 0,T\right] }\left\vert D_{t}^{\left( n\right) }-D_{t}\right\vert d\mathbb{P}\times dt\longrightarrow 0$, we have that $\left\{ D_{t-}^{\left( n\right) }\right\} _{t\in \left[ 0,T\right] }$ $\overset{L^{1}\left( \lambda \times \mathbb{P}\right) }{\longrightarrow }\left\{ D_{t-}\right\} _{t\in \left[ 0,T\right] }$ and $\left\{ D_{t}^{\left( n\right) }\right\} _{t\in \left[ 0,T\right] }$ $\overset{L^{1}\left( \lambda \times \mathbb{P}\right) }{\longrightarrow }\left\{ D_{t}\right\} _{t\in \left[ 0,T\right] }.$ Then, for an arbitrary but fixed subsequence $\left\{ n_{k}\right\} _{k\in \mathbb{N}}\subset \mathbb{N}$, there is a sub-subsequence $\left\{ n_{k_{i}}\right\} _{i\in \mathbb{N}}\subset \mathbb{N}$ such that $$\begin{array}{ccc} D_{t-}^{\left( n_{k_{i}}\right) }\theta _{1}^{\left( n_{k_{i}}\right) }\left( t,x\right) & \overset{\lambda \times \nu \times \mathbb{P}\text{-a.s.}}{\longrightarrow } & D_{t-}\theta _{1}\left( t,x\right) , \\ D_{t-}^{\left( n_{k_{i}}\right) } & \overset{\lambda \times \mathbb{P}\text{-a.s.}}{\longrightarrow } & D_{t-}, \\ D_{t}^{\left( n_{k_{i}}\right) } & \overset{\lambda \times \mathbb{P}\text{-a.s.}}{\longrightarrow } & D_{t}.\end{array}$$ Furthermore, $\mathbb{Q}\ll \mathbb{P}$ implies that $\lambda \times \nu \times \mathbb{Q}\ll \lambda \times \nu \times \mathbb{P}$, and then $$\begin{array}{ccc} D_{t-}^{\left( n_{k_{i}}\right) }\theta _{1}^{\left( n_{k_{i}}\right) }\left( t,x\right) & \overset{\lambda \times \nu \times \mathbb{Q}\text{-a.s.}}{\longrightarrow } & D_{t-}\theta _{1}\left( t,x\right) , \\ D_{t-}^{\left( n_{k_{i}}\right) } & \overset{\lambda \times \nu \times \mathbb{Q}\text{-a.s.}}{\longrightarrow } & D_{t-},\end{array}$$ and $$D_{t}^{\left( n_{k_{i}}\right) }\overset{\lambda \times \nu \times \mathbb{Q}\text{-a.s.}}{\longrightarrow }D_{t}. \label{[]=>*.2}$$ Finally, noting that $\inf_{t\in \left[ 0,T\right] }D_{t}>0$ $\mathbb{Q}$-a.s., we conclude that $$\theta _{1}^{\left( n_{k_{i}}\right) }\left( t,x\right) \overset{\lambda \times \nu \times \mathbb{Q}\text{-a.s.}}{\longrightarrow }\theta _{1}\left( t,x\right) .
\label{[]=>*.3}$$ The first assertion in $\left( \ref{[]=>*.1}\right) $ can be proved using essentially the same ideas as above for the second part, concluding that for an arbitrary but fixed subsequence $\left\{ n_{k}\right\} _{k\in \mathbb{N}}\subset \mathbb{N}$, there is a sub-subsequence $\left\{ n_{k_{i}}\right\} _{i\in \mathbb{N}}\subset \mathbb{N}$ such that $$\left\{ D_{t}^{\left( n_{k_{i}}\right) }\right\} _{t\in \left[ 0,T\right] }\overset{\lambda \times \mathbb{Q}\text{-a.s.}}{\longrightarrow }\left\{ D_{t}\right\} _{t\in \left[ 0,T\right] } \label{[]=>*.4}$$ and $$\left\{ \theta _{0}^{\left( n_{k_{i}}\right) }\left( t\right) \right\} _{t\in \left[ 0,T\right] }\overset{\lambda \times \mathbb{Q}\text{-a.s.}}{\longrightarrow }\left\{ \theta _{0}\left( t\right) \right\} _{t\in \left[ 0,T\right] }. \label{[]=>*.5}$$ We are now ready to finish the proof of the theorem, observing that $$\begin{aligned} &&\underset{n\rightarrow \infty }{\lim \inf }\vartheta \left( \mathbb{Q}^{\left( n\right) }\right) \\ &=&\underset{n\rightarrow \infty }{\lim \inf }\int\limits_{\Omega \times \left[ 0,T\right] }\left\{ h\left( h_{0}\left( \theta _{0}^{\left( n\right) }\left( t\right) \right) +\int\nolimits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}\left( \theta _{1}^{\left( n\right) }\left( t,x\right) \right) \nu \left( dx\right) \right) \right\} \dfrac{D_{t}^{\left( n\right) }}{D_{t}}d\left( \lambda \times \mathbb{Q}\right) .\end{aligned}$$ Let $\left\{ n_{k}\right\} _{k\in \mathbb{N}}\subset \mathbb{N}$ be a subsequence for which the limit inferior is attained. Using $\left( \ref{[]=>*.2}\right) $, $\left( \ref{[]=>*.3}\right) $, $\left( \ref{[]=>*.4}\right) $ and $\left( \ref{[]=>*.5}\right) $ we can pass to a sub-subsequence $\left\{ n_{k_{i}}\right\} _{i\in \mathbb{N}}\subset \mathbb{N}$ and, from Fatou's lemma together with the continuity of $h$, $h_{0}$ and $h_{1}$, it follows that $$\begin{aligned} &&\underset{n\rightarrow \infty }{\lim \inf }\ \vartheta \left( \mathbb{Q}^{\left( n\right) }\right) \\ &\geq &\int\limits_{\Omega \times \left[ 0,T\right] }\underset{i\rightarrow \infty }{\lim \inf }\left( \left\{ h\left( h_{0}\left( \theta _{0}^{\left( n_{k_{i}}\right) }\left( t\right) \right) +\int\limits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}\left( \theta _{1}^{\left( n_{k_{i}}\right) }\left( t,x\right) \right) \nu \left( dx\right) \right) \right\} \tfrac{D_{t}^{\left( n_{k_{i}}\right) }}{D_{t}}\right) d\left( \lambda \times \mathbb{Q}\right) \\ &\geq &\int\limits_{\Omega \times \left[ 0,T\right] }h\left( h_{0}\left( \theta _{0}\left( t\right) \right) +\int\nolimits_{\mathbb{R}_{0}}\delta \left( t,x\right) h_{1}\left( \theta _{1}\left( t,x\right) \right) \nu \left( dx\right) \right) d\left( \lambda \times \mathbb{Q}\right) \\ &=&\vartheta \left( \mathbb{Q}\right) .\end{aligned}$$ $\Box $ [99]{} Artzner, P.; Delbaen, F.; Eber, J.M. and Heath, D. 1997 Thinking coherently, RISK Magazine 10, pp 68-71. Artzner, P.; Delbaen, F.; Eber, J.M. and Heath, D. 1999 Coherent measures of risk, Math. Finance 9, pp 203-228. Barrieu, P. and El Karoui, N. 2009 Pricing, hedging and optimally designing derivatives via minimization of risk measures, in: Volume on Indifference Pricing (ed: Rene Carmona), Princeton University Press, 2009. Bion-Nadal, J. 2008 Dynamic risk measures: Time consistency and risk measures from BMO martingales, Finance and Stochastics 12, pp 219-244. Bion-Nadal, J. 2009 Time consistent dynamic risk processes, Stochastic Processes and their Applications 119, pp 633-654. Delbaen, F.
2002 Coherent risk measures on general probability spaces, in Advances in Finance and Stochastics, Essays in Honor of Dieter Sondermann, pp 1-37, Eds. K. Sandmann, Ph. Schönbucher. Berlin, Heidelberg, New York: Springer. Delbaen, F.; Peng, S. and Rosazza Gianin, E. 2010 Representation of the penalty term of dynamic concave utilities, Finance and Stochastics 14, pp 449-472. Föllmer, H. and Schied, A. 2002 Convex measures of risk and trading constraints, Finance and Stochastics 6, pp 429-447. Föllmer, H. and Schied, A. 2002 Robust preferences and convex risk measures, in Advances in Finance and Stochastics, Essays in Honor of Dieter Sondermann, pp 39-56, Eds. K. Sandmann, Ph. Schönbucher. Berlin, Heidelberg, New York: Springer. Föllmer, H. and Schied, A. 2004 Stochastic Finance. An Introduction in Discrete Time (2nd Ed.), de Gruyter Studies in Mathematics 27. Frittelli, M. and Rosazza Gianin, E. 2002 Putting order in risk measures, Journal of Banking & Finance 26, pp 1473-1486. Frittelli, M. and Rosazza Gianin, E. 2004 Dynamic convex risk measures, in Risk Measures for the 21st Century, pp 227-248, Ed. G. Szegö, Wiley. Heath, D. 2000 Back to the future. Plenary lecture at the First World Congress of the Bachelier Society, Paris. He, S.W.; Wang, J.G. and Yan, J.A. 1992 Semimartingale Theory and Stochastic Calculus, Beijing, Science Press. Hernández-Hernández, D. and Pérez-Hernández, L. 2011 Robust utility maximization for Lévy processes: Penalization and Solvability, arXiv 1206.0715. Jacod, J. and Shiryaev, A. 2003 Limit Theorems for Stochastic Processes (2nd Ed.), Springer. Krätschmer, V. 2005 Robust representation of convex risk measures by probability measures, Finance and Stochastics 9, pp 597-608. Schied, A. 2007 Optimal investments for risk- and ambiguity-averse preferences: a duality approach, Finance and Stochastics 11, pp 107-129.

[^1]: Centro de Investigación en Matemáticas, Apartado postal 402, Guanajuato, Gto. 36000, México. E-mail: dher@cimat.mx

[^2]: Departamento de Economía y Finanzas, Universidad de Guanajuato, DCEA Campus Guanajuato, C.P. 36250, Guanajuato, Gto. E-mail: lperezhernandez@yahoo.com
**New Penrose Limits and AdS/CFT**

$^1$ Dipartimento di Fisica, Università di Perugia, I.N.F.N. Sezione di Perugia, Via Pascoli, I-06123 Perugia, Italy

$^2$ NORDITA, Roslagstullsbacken 23, SE-106 91 Stockholm, Sweden

$^3$ The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark

grignani@pg.infn.it, harmark@nordita.org, andrea.marini@fisica.unipg.it, orselli@nbi.dk

**Abstract**

We find a new Penrose limit of $\mbox{AdS}_5 \times S^5$ giving the maximally supersymmetric pp-wave background with two explicit space-like isometries. This is an important missing piece in studying the AdS/CFT correspondence in certain subsectors. In particular, whereas the Penrose limit giving one space-like isometry is useful for the $SU(2)$ sector of ${\mathcal{N}}=4$ SYM, this new Penrose limit is instead useful for studying the $SU(2|3)$ and $SU(1,2|3)$ sectors. In addition to the new Penrose limit of $\mbox{AdS}_5 \times S^5$ we also find a new Penrose limit of $\mbox{AdS}_4 \times {\mathbb{C}}P^3$.

Introduction {#sec:intro}
============

AdS/CFT duality identifies ${\mathcal{N}}=4$ superconformal Yang-Mills (SYM) theory with gauge group $SU(N)$ and type IIB superstring theory on the $\mbox{AdS}_5\times S^5$ background [@Maldacena:1997re; @Gubser:1998bc; @Witten:1998qj]. The AdS/CFT correspondence relates gauge theory and string theory in different regimes. On the one hand, this makes it powerful, since it can be used to probe the strong coupling regime of either theory using the weak coupling limit of the other; on the other hand, it makes the correspondence hard to test directly, since it is not easy to find situations where approximate computations in both theories have an overlapping domain of validity. In [@Berenstein:2002jq] a way out of this difficulty was presented by introducing a Penrose limit of the $\mbox{AdS}_5\times S^5$ background. Taking the Penrose limit one gets the maximally supersymmetric pp-wave background [@Blau:2001ne; @Blau:2002dy], where type IIB string theory can be quantized [@Metsaev:2001bj; @Metsaev:2002re]. On the gauge theory side the Penrose limit corresponds to considering a certain sector of the operators. This enables one to compare directly the spectrum of operators in the planar limit of ${\mathcal{N}}=4$ SYM to the energy spectrum of quantum strings on the pp-wave. In [@Bertolini:2002nr] an alternative Penrose limit of $\mbox{AdS}_5\times S^5$ was found, also giving the maximally supersymmetric background but in a coordinate system with an explicit space-like isometry [@Michelson:2002wa; @Bertolini:2002nr]. As explained in [@Harmark:2006ta], having this explicit isometry makes it particularly well-suited to study the $SU(2)$ sector of ${\mathcal{N}}=4$ SYM. Building on the Penrose limit of [@Berenstein:2002jq], many very interesting results in matching gauge theory and string theory were found in the planar limit using the idea of integrability and the connection to spin chains [@Minahan:2002ve; @Beisert:2003tq; @Beisert:2003yb][^1], particularly by considering a near plane wave limit with curvature corrections to the pp-wave background [@Callan:2003xr; @Callan:2004uv]. A high point of this is the development of the Asymptotic Bethe Ansatz describing the dimension of infinitely long operators for any ’t Hooft coupling in the planar limit [@Staudacher:2004tk; @Beisert:2005tm; @Beisert:2006ez]. Going beyond the planar limit seems instead to be very difficult [@Kristjansen:2002bb].
New ideas are needed in order to further explore the AdS/CFT correspondence in the non-planar limit and its potential applications. Recently another example of an exact duality between ${\mathcal{N}}= 6$ superconformal Chern-Simons theory (ABJM theory) and type IIA string theory on $\mbox{AdS}_4 \times CP^3$ have been found [@Aharony:2008ug]. Also here certain Penrose limits and near plane wave limits have been explored [@Nishioka:2008gz; @Gaiotto:2008cg; @Grignani:2008is; @Astolfi:2008ji; @Astolfi:2009qh]. The difficulty of going beyond the planar limit, where integrability most likely is absent, makes it desirable to consider alternative approaches to match the spectrum of operators and string states. One of the cornerstones in comparing the operator spectrum to the string spectrum in a Penrose limit or near-plane wave limit is that in comparing the spectrum of operators one assumes that most of the operators of the gauge theory receive an infinitely large correction to the bare dimension in the large ’t Hooft coupling limit $\lambda \rightarrow \infty$. This is of course a built in feature of the Asymptotic Bethe Ansatz for ${\mathcal{N}}=4$ SYM. However, an alternative approach to this problem of taking the strong coupling limit of ${\mathcal{N}}=4$ SYM has been proposed in [@Harmark:2006di; @Harmark:2006ta; @Harmark:2006ie; @Harmark:2007et; @Harmark:2007px; @Harmark:2008gm] where a regime of AdS/CFT was found in which both gauge theory and string theory are reliable and the correspondence can be tested in a precise way. Applying the approach of [@Harmark:2006di; @Harmark:2006ta; @Harmark:2006ie; @Harmark:2007et; @Harmark:2007px; @Harmark:2008gm][^2] to match the spectrum of operators and string states in the $SU(2)$ sector uses in an essential way the alternative Penrose limit of [@Bertolini:2002nr] where the maximally supersymmetric pp-wave has an explicit isometry. This is because for this pp-wave background the string states having an energy just above the vacuum energy are the states dual to the operators in the $SU(2)$ sector of ${\mathcal{N}}=4$ SYM. However, as shown in [@Harmark:2007px] there are several other sectors of ${\mathcal{N}}=4$ SYM that one can explore as well, and these sectors are crucial for approaching non-perturbative physics of type IIB string theory in $\mbox{AdS}_5\times S^5$, such as D-branes and black holes. This means that there should be additional Penrose limits of $\mbox{AdS}_5\times S^5$ in addition to the ones of [@Blau:2002dy; @Berenstein:2002jq; @Bertolini:2002nr]. In this paper we address these issues by deriving a new Penrose limit of $\mbox{AdS}_5 \times S^5$ which leads to a new pp-wave background with two explicit space-like isometries. As for the two previously found Penrose limits [@Blau:2002dy; @Berenstein:2002jq; @Bertolini:2002nr] this leads to a pp-wave background where type IIB string theory can be quantized and the spectrum can be matched to the spectrum of operators of ${\mathcal{N}}=4$ SYM. Our analysis completes the study of all possible pp-wave backgrounds which can be obtained as Penrose limits of the $\mbox{AdS}_5 \times S^5$ geometry. It also represents a further step in the investigation of the matching of strongly coupled gauge theory and string theory in certain sectors which are relevant for describing non-perturbative physics of type IIB string theory on $\mbox{AdS}_5\times S^5$. 
In particular, the new Penrose limit is relevant for studying the $SU(1,2|3)$ sector, which is the maximally possible subsector of ${\mathcal{N}}=4$ SYM [@Harmark:2007px]. In addition to the new Penrose limit of $\mbox{AdS}_5\times S^5$ we also explore Penrose limits of $\mbox{AdS}_4 \times {\mathbb{C}}P^3$. Here two different classes of Penrose limits have been found, one in which there are no explicit space-like isometries [@Nishioka:2008gz; @Gaiotto:2008cg] and another in which there are two explicit space-like isometries [@Grignani:2008is; @Astolfi:2009qh] which makes it suitable for studying the $SU(2)\times SU(2)$ sector of ABJM theory. We find in this paper a new Penrose limit of the $\mbox{AdS}_4 \times {\mathbb{C}}P^3$ background giving a pp-wave background with one explicit space-like isometry. The new Penrose limit of $\mbox{AdS}_5\times S^5$ found in this paper is also relevant for studying the finite temperature behavior of AdS/CFT. It is conjectured that the confinement/deconfinement transition temperature of planar $\mathcal{N}=4$ SYM on $R\times S^3$ is dual to the Hagedorn temperature of type IIB string theory on $\mbox{AdS}_5 \times S^5$ [@Witten:1998zw; @Sundborg:1999ue; @Polyakov:2001af; @Aharony:2003sx]. Using the Penrose limit [@Bertolini:2002nr] this was shown quantitatively to be true [@Harmark:2006ta] by matching the confiment/deconfinement temperature of planar $\mathcal{N}=4$ SYM on $R\times S^3$ in a limit with R-charge chemical potentials to the Hagedorn temperature of type IIB string on the pp-wave background of [@Bertolini:2002nr][^3]. We furthermore expect that our results could help in understanding more generally the behavior of string theory above the Hagedorn temperature and to study the connection between gauge theory and black holes in $\mbox{AdS}_5 \times S^5$ [@Grignani:2009ua][^4]. Interesting related work in other less supersymmetric gauge theories can be found in Refs. [@Grignani:2007xz; @Larsen:2007bm; @Hamilton:2007he]. The paper is organized as follows. In Section \[sec:stringtheory\] we first review the Penrose limit of string theory that lead to pp-wave backgrounds with zero and one spatial isometry. Then, we find a new Penrose limit giving rise to a pp-wave background with two space-like isometries in which string theory can be quantized. In Section \[sec:stringrotspectra\] we obtain a general form for a pp-wave metric that reproduces all the pp-wave backgrounds analyzed in the previous section. We moreover show that string theory can be directly quantized on this background which we dub “[*rotated pp-wave background*]{} " and we compute the spectrum. In Section \[sec:decsectors\] we show that, after taking an appropriate limit, the spectrum of type IIB string theory on the rotated pp-wave background can be exactly matched to the spectrum of the dual gauge theory operators in certain decoupled sectors of ${\mathcal{N}}=4$ SYM. Finally, in Section \[sec:ads4\] we find a new Penrose limit of the ${\mbox{AdS}}_4\times {\mathbb{C}}P^3$ background of type IIA supergravity with one explicit space-like isometry. Penrose limits and pp-waves with explicit isometries {#sec:stringtheory} ==================================================== In this section we derive a Penrose limit of $\mbox{AdS}_5 \times S^5$ which results in a new pp-wave background with two space-like isometries. 
We then show how to obtain a general pp-wave background which, for appropriate choices of the parameters of the background, reproduces all the known pp-wave backgrounds which are obtained through a Penrose limit procedure on $\mbox{AdS}_5 \times S^5$. We begin the section by writing down a slightly generalized version of the previously found Penrose limits of $\mbox{AdS}_5 \times S^5$ with zero and one explicit space-like isometries [@Blau:2002dy; @Berenstein:2002jq; @Bertolini:2002nr]. In $\mbox{AdS}_5 \times S^5$, the Penrose limit consists in considering a particle in the center of $\mbox{AdS}_5 $ that is moving very rapidly on a geodesic of $S^5$. This means that the angular momentum along the direction in which the particle is moving is very large ($J \to \infty$). Then by taking the limit $R \to \infty$, where $R$ is the radius of $\mbox{AdS}_5$ and $S^5$, but such that the ratio $J/R^2$ remains fixed, the geometry of $\mbox{AdS}_5 \times S^5$ reduces to a plane-wave geometry. An important point to emphasize is that one can choose any light-like geodesic of $\mbox{AdS}_5 \times S^5$ for implementing the procedure. While the pp-wave background always corresponds to the maximally supersymmetric pp-wave background of type IIB supergravity [@Blau:2001ne], different choices of light-like geodesics can give this background in different coordinate systems [@Bertolini:2002nr]. Naively this should not matter, however, the different coordinate systems can correspond to different choices of lightcone time on the pp-wave background. And this corresponds moreover to different dictionaries between the physical quantities of the $\mbox{AdS}_5\times S^5$ background and of the maximally supersymmetric pp-wave background. Therefore, the different coordinate systems for the pp-wave background are connected to the fact that the different Penrose limits that we consider correspond to zooming in to different regimes of type IIB string theory on $\mbox{AdS}_5\times S^5$. This in turns corresponds to zooming in to different regimes of ${\mathcal{N}}=4$ SYM. Furthermore, as we discuss in section \[sec:decsectors\], the different Penrose limits correspond to different decoupling limits of ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}\times S^3$. In the literature the “canonical” coordinate system used for the maximally supersymmetric pp-wave background is that of [@Blau:2001ne; @Blau:2002dy; @Berenstein:2002jq] which we here dub the [*BMN pp-wave background*]{}. This coordinate system is such that the quadratic potential terms for the transverse directions are massive for all eight transverse directions. Another coordinate system was introduced in [@Michelson:2002wa; @Bertolini:2002nr] and we will refer to it as the [*one flat direction pp-wave background*]{} due to the presence of a space-like isometry in the pp-wave metric and since in this case the quadratic terms for the transverse directions have a massless direction. Here we find a new pp-wave background corresponding to a new coordinate system for the maximally supersymmetric pp-wave of type IIB supergravity. This new background is again obtained as a Penrose limit of $\mbox{AdS}_5 \times S^5$ with an appropriate choice of light-cone coordinates. The new pp-wave background differs from the other two because of the presence of two spacial isometries in the metric, namely two flat directions, corresponding to two massless directions in the potential terms for the transverse directions. Hence we call it the [*two flat directions pp-wave background*]{}. 
This new pp-wave background is important in the context of the AdS/CFT correspondence. In fact, as shown explicitly in Section \[sec:stringrotspectra\], string theory can be quantized on this background. Moreover, as discussed in Section \[sec:decsectors\], after taking a certain limit on the spectrum of type IIB string theory in this new background, we can complete the matching between the spectrum of anomalous dimensions of gauge theory operators in certain sectors of $\neqf$ SYM theory and the spectrum of the dual string theory states. We show below in Section \[sec:stringrotspectra\] that all the pp-wave backgrounds achievable through the Penrose limit are connected by a time-dependent coordinate transformation. This proves that mathematically they are all equivalent. The same is not true from the physical point of view, since the transformation involves time. Thus what changes from one pp-wave background to another is what we call time, and consequently what we call Hamiltonian. Therefore the physics is different when we consider the theory on different pp-wave backgrounds. It is also interesting to note which regimes of ${\mathcal{N}}=4$ SYM the different Penrose limits correspond to. We give these regimes for each of the three different limits below. To consider this, we record the following dictionary between strings on $\mbox{AdS}_5\times S^5$ and ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}\times S^3$. We have $$\frac{R^4}{l_s^4} = 4 \pi^2 \lambda {\,, \ \ }g_s = \frac{\pi \lambda}{N}$$ where $R$ is the radius of $\mbox{AdS}_5$ and $S^5$, $g_s$ and $l_s$ are the string coupling and string length, respectively, and $\lambda = {g_{\rm YM}}^2 N/(4\pi^2)$ is the ’t Hooft coupling of $SU(N)$ ${\mathcal{N}}=4$ SYM.[^5] The energy $E$ of type IIB string states on $\mbox{AdS}_5\times S^5$ is identified with the energy $E$ of the dual ${\mathcal{N}}=4$ SYM states on ${\mathbb{R}}\times S^3$, or equivalently, with the scaling dimension of the dual operators of ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}^4$. Similarly the angular momenta $J_{1,2,3}$ on $S^5$ for string states are identified with the three R-charges $J_{1,2,3}$ for states/operators of ${\mathcal{N}}=4$ SYM. Moreover the angular momenta $S_{1,2}$ for strings on $\mbox{AdS}_5$ are identified with the Cartan generators for the $SO(4)$ symmetry of the $S^3$ for the dual ${\mathcal{N}}=4$ SYM states on ${\mathbb{R}}\times S^3$, or equivalently, the $SO(4)$ symmetry of the ${\mathbb{R}}^4$ for the dual operators of ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}^4$. The string theory that we are interested in is type IIB string theory on $\mbox{AdS}_5 \times S^5$.
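As a rough numerical illustration of this dictionary (the values of $\lambda$ and $N$ below are our own, arbitrarily chosen, and serve only to show the orders of magnitude involved), one can evaluate the string-frame quantities directly:

```python
# Illustrative numbers only: radius in string units and string coupling from the
# dictionary R^4/l_s^4 = 4 pi^2 lambda and g_s = pi lambda / N.
import math

for lam, N in [(1, 1000), (100, 10**4)]:
    R_over_ls = (4 * math.pi**2 * lam) ** 0.25
    g_s = math.pi * lam / N
    print(f"lambda = {lam:>5}, N = {N:>6}:  R/l_s = {R_over_ls:.2f},  g_s = {g_s:.4f}")
```

In particular, at large $N$ with $\lambda$ fixed the string coupling is small while the radius in string units is controlled by $\lambda$ alone, which is the usual planar regime.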
The metric for this background is given by $$\label{adsmet} ds^2 = R^2 \left[ - \cosh^2 \rho dt^2 + d\rho^2 + \sinh^2 \rho d{\Omega'_3}^2 + d\theta^2 + \sin^2 \theta d\alpha^2 + \cos^2 \theta d\Omega_3^2 \right]\, ,$$ with the five-form Ramond-Ramond field strength $$\label{adsF5} F_{(5)} = 2 R^4 ( \cosh \rho \sinh^3 \rho dt d\rho d\Omega_3' + \sin \theta \cos^3 \theta d\theta d\alpha d\Omega_3 )\, .$$ We parameterize the two three-spheres as $$\begin{aligned} \label{3sph} d\Omega_3^2 &= d\psi^2 + \sin^2 \psi d\phi^2 + \cos^2 \psi d\chi^2\, , \\ \label{3sphAdS} d\Omega_3'^2 &= d\beta^2 + \sin^2 \beta d\gamma^2 + \cos^2 \beta d\xi^2\, .\end{aligned}$$ The three angular momenta on the five sphere $S^5$ are defined as $$\begin{aligned} \label{eq:JJJ} J_1= -i\partial_\chi\, , \quad J_2= -i\partial_\phi\, , \quad J_3= -i\partial_\alpha\, ,\end{aligned}$$ and the two angular momenta on the $S^3$ inside $\mbox{AdS}_5$ are defined as $$\begin{aligned} \label{eq:SS} S_1 = -i \partial_\gamma\, , \qquad S_2=-i\partial_\xi \, .\end{aligned}$$ We moreover define the quantity $J\equiv J_1 + \eta_1 J_2 + \eta_2 J_3 + \eta_3 S_1 + \eta_4 S_2$, where $\eta_1$, $\eta_2$, $\eta_3$, $\eta_4$ are some parameters that characterize the background. We will show that they play an important role in Section \[sec:decsectors\] where we compare the results we obtain on the string theory side with previous computations done in the dual gauge theory. The “no flat direction” Penrose limit ------------------------------------- In order to derive the new Penrose limit, we first review the Penrose limit giving rise to the [*BMN pp-wave* ]{}. We introduce new coordinates $\varphi_0,...,\varphi_4$ defined by $$\begin{aligned} \label{eq:noflatphi} \chi &= \varphi_0, \quad \phi = \eta_1 \varphi_0 + \varphi_1\, , \quad \alpha = \eta_2 \varphi_0 + \varphi_2\, , \quad \gamma = \eta_3 \varphi_0 + \varphi_3\, , \quad \xi = \eta_4 \varphi_0 + \varphi_4\,,\end{aligned}$$ and we define the light-cone coordinates as $$\begin{aligned} z^- = \frac{1}{2} \mu R^2 (t-\varphi_0)\, , \quad z^+ = \frac{1}{2\mu} (t+\varphi_0)\, . \label{lcc}\end{aligned}$$ By defining $r_1,...,r_4$ such that $$\begin{aligned} r_1= R \psi\, , \quad r_2 = R \theta\, ,\quad r_3 = R \rho \sin\beta\, ,\quad r_4= R \rho \cos\beta\, .\end{aligned}$$ we can parametrize the eight $z^i$ coordinates in the following way $$\begin{aligned} \label{coordinates} z^1+iz^2 = r_1e^{i\varphi_1}\, , \quad z^3+iz^4 = r_2e^{i\varphi_2}\, , \cr z^5+iz^6 = r_3e^{i\varphi_3}\, , \quad z^7+iz^8 = r_4e^{i\varphi_4}\, .\end{aligned}$$ Writing the background – in terms of the coordinate $z^\pm$ and $z^i$ and taking the Penrose limit by sending $R\to\infty$ while keeping $z^\pm$ and $z^i$ fixed, we obtain the following metric $$\label{eq:dsnoflat} \begin{split} ds^2=&-4dz^+dz^- + dz^i dz^i - \mu^2 \sum_{k=1}^{4} \left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right]\left(dz^+\right)^2 \\ &+ 2\mu \sum_{k=1}^{4}\eta_k \left[z^{2k-1}dz^{2k}- z^{2k}dz^{2k-1}\right]dz^+. \end{split}$$ and five-form field strength $$\begin{aligned} \label{eq:F5z} F_{(5)} = 2 \mu \,dz^+ \left(dz^1 dz^2 dz^3 dz^4 + dz^5 dz^6 dz^7 dz^8 \right)\, .\end{aligned}$$ We see that by setting the parameters $\eta_k$’s all to zero, we precisely recover the pp-wave background derived in  [@Blau:2002mw; @Berenstein:2002jq]. In this sense, the background – is a generalization of it. 
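As a small check (ours, not part of the original derivation), note where the $-4dz^{+}dz^{-}$ term of the pp-wave metric comes from: inverting the light-cone coordinates gives $$t=\mu z^{+}+\frac{z^{-}}{\mu R^{2}}\,,\qquad \varphi_{0}=\mu z^{+}-\frac{z^{-}}{\mu R^{2}}\,,\qquad R^{2}\left(-dt^{2}+d\varphi_{0}^{2}\right)=-4\,dz^{+}dz^{-}\,,$$ which holds exactly for any $R$; the remaining terms of the pp-wave metric then arise from expanding the trigonometric and hyperbolic factors of the $\mbox{AdS}_5\times S^5$ metric to second order in $1/R$ around the null geodesic $\rho=\theta=\psi=0$.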
Type IIB string theory can be quantized on this background and the light-cone Hamiltonian that one obtains is $$\begin{aligned} H_\textrm{lc} \sim E-J_1, \qquad p^+ \sim \frac{E+J_1}{R^2}\, .\end{aligned}$$ From the condition that $H_\textrm{lc}$ and $p^+$ should stay finite in the limit, we get that $J_1=-i\partial_{\varphi_0}$ must be large. On the other hand since $\varphi_1, ..., \varphi_4$ are all fixed in the limit $R\to \infty$, we deduce from , and that $J_2$, $J_3$, $S_1$ and $S_2$ are also fixed. We see from the above that the “no flat direction” Penrose limit corresponds to the following regime of type IIB string theory on $\mbox{AdS}_5\times S^5$ $$R \rightarrow \infty \ \mbox{with}\ E-J_1\ \mbox{fixed} , \quad \frac{E+J_1}{R^2}\ \mbox{fixed}, \quad \frac{J_1}{R^2} \ \mbox{fixed}, \quad g_s,l_s \ \mbox{fixed}$$ Translating this into ${\mathcal{N}}=4$ SYM language, it corresponds to the regime $$N \rightarrow \infty \ \mbox{with}\ E-J_1\ \mbox{fixed} , \quad \frac{E+J_1}{\sqrt{N}}\ \mbox{fixed}, \quad \frac{J_1}{\sqrt{N}} \ \mbox{fixed}, \quad {g_{\rm YM}}^2 \ \mbox{fixed}$$ The “one flat direction” Penrose limit -------------------------------------- Now we repeat an analogous procedure and show that, by a different choice of light-cone coordinates, we obtain a generalization of the pp-wave background derived in [@Bertolini:2002nr]. We define the coordinates $\varphi_0,...,\varphi_4$ in the following way $$\begin{aligned} \label{eq:oneflatphi} \chi = \varphi_0 -\varphi_1\, , \quad \phi = \varphi_0 + \varphi_1\, , \quad \alpha = \eta_2 \varphi_0 + \varphi_2\, , \quad \gamma = \eta_3\varphi_0 + \varphi_3\, , \quad \xi = \eta_4\varphi_0 + \varphi_4\, ,\end{aligned}$$ with the light-cone variables still given by eq.n . We moreover define $z^1$ and $z^2$ as $$\begin{aligned} z^1 = R\varphi_1\, , \quad z^2=R\left(\frac{\pi}{4}-\psi\right)\, ,\end{aligned}$$ while $z^3,...,z^8$ are defined as before (see Eq.) and $$\begin{aligned} r_2 = R \theta\, , \quad r_3 = R \rho \sin\beta\, ,\quad r_4= R \rho \cos\beta\, ,\end{aligned}$$ $$\begin{aligned} z^3+iz^4 =r_2 e^{i\varphi_2}\, , \quad z_5+iz_6 = r_3e^{i\varphi_3}\, , \quad z_7+iz_8 = r_4e^{i\varphi_4}\, .\end{aligned}$$ The Penrose limit is then the limit $R\to\infty$ keeping $z^\pm,z^i$ fixed. Plugging the coordinates $z^\pm, z^i$ into the background – and taking the limit described above the metric becomes $$\label{eq:dsoneflat} \begin{split} ds^2=&-4dz^+dz^- + dz^i dz^i - \mu^2 \sum_{k=2}^{4} \left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right]\left(dz^+\right)^2 \\ &+ 2\mu \sum_{k=2}^{4}\eta_k \left[z^{2k-1}dz^{2k}- z^{2k}dz^{2k-1}\right]dz^+ -4 \mu z^2 dz^+ dz^1. \end{split}$$ with the five-form given by . From we see that $z^1$ is an explicit isometry of the above pp-wave background and therefore we call this background [*one flat direction pp-wave background*]{}. As before we have that $\varphi_2,\varphi_3,\varphi_4$ are fixed in the Penrose limit which, using , means that $J_3$, $S_1$ and $S_2$ are fixed. But now the condition that $H_\textrm{lc}$, $p^+$ and $p^1$ have to remain finite in the limit tells us that the quantities $$E-J_1-J_2 , \quad \frac{E+J_1+J_2}{R^2}, \quad \frac{J_1+J_2}{R^2} , \quad \frac{J_1-J_2}{R} , \quad g_s,l_s$$ are all fixed when $R \to \infty$. This is the regime corresponding to the “one flat direction” Penrose limit of type IIB string theory on $\mbox{AdS}_5\times S^5$, as found in [@Bertolini:2002nr]. 
Translating this into ${\mathcal{N}}=4$ SYM language, it corresponds to the regime where [@Bertolini:2002nr] $$E-J_1-J_2 , \quad \frac{E+J_1+J_2}{\sqrt{N}}, \quad \frac{J_1+J_2}{\sqrt{N}} , \quad \frac{J_1-J_2}{N^{1/4}} , \quad {g_{\rm YM}}^2$$ are fixed for $N \to \infty$. The “two flat directions” Penrose limit --------------------------------------- We finally consider the Penrose limit that leads to a new pp-wave  with two flat directions. The variables $\varphi_0,$ $\varphi_1,$ $\varphi_2,$ $\varphi_3,$ $\varphi_4$ are now defined as $$\begin{gathered} \label{phi2fd} \chi = \varphi_0 - \sqrt{2}\varphi_1 - \varphi_2 \, , \qquad \phi = \varphi_0 + \sqrt{2}\varphi_1 - \varphi_2\, , \qquad \alpha = \varphi_0 + \varphi_2 \, ,{\nonumber}\\[2mm] \gamma = \eta_3 \varphi_0 + \varphi_3 \, , \qquad \xi = \eta_4 \varphi_0 + \varphi_4 \, ,\end{gathered}$$ whereas the light-cone coordinate are as usual given by . The coordinates $z^1$, $z^2$, $z^3$ and $z^4$ are defined as $$\begin{array}{lcl} z^1 = R \varphi_1 \, , & \phantom{qquad} & z^2 = \displaystyle{ \frac{R}{\sqrt{2}}} \left(\displaystyle{\frac{ \pi}{4}-\psi}\right) \, , \\[4mm] z^3 = R \varphi_2 \, , & & z^4 = R \left(\displaystyle{\frac{ \pi}{4}}-\theta \right) \, . \end{array}$$ while $z^5$, $z^6$, $z^7$, $z^8$ are again given by Eq.. More explicitly we have $$\begin{aligned} r_3 = R \rho \sin \beta \, , \qquad r_4 = R \rho \cos \beta\, ,\end{aligned}$$ $$\begin{aligned} z^5 + i z^6 = r_3 \displaystyle{ e^{i \varphi_3}} \, , \qquad z^7 + i z^8 = r_4 \displaystyle{ e^{i \varphi_4}}\, .\end{aligned}$$ Substituting the new coordinates in the background – and taking the Penrose limit we get the following pp-wave metric $$\label{eq:dstwoflat} \begin{split} ds^2&=-4dz^+dz^- + dz^i dz^i - \mu^2 \sum_{k=3,4} \left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right]\left(dz^+\right)^2 \\ &+ 2\mu \sum_{k=3,4}\eta_k\left[ z^{2k-1}dz^{2k}- z^{2k}dz^{2k-1}\right]dz^+ - 4\mu\left(z^2 dz^1 + z^4 dz^3\right)dz^+. \end{split}$$ and the five-form is defined in . This is a new pp-wave background and it has two explicit isometries, $z^1$ and $z^3$ We will therefore refer to it as [*two flat directions pp-wave background*]{}. In this case $\varphi_3,\varphi_4$ are fixed, thus, keeping in mind , we have that also the angular momenta $S_1$ and $S_2$ are fixed. In a similar fashion as before if we compute $H_\textrm{lc}$, $p^+$, $p^1$ and $p^3$ and request that they should stay finite in the Penrose limit we get that the quantities $$E-J_1-J_2-J_3 , \quad \frac{E+J_1+J_2+J_3}{R^2}, \quad \frac{J_1+J_2+J_3}{R^2}, \quad \frac{J_1 - J_2}{R} ,\quad \frac{J_3 -J_1 - J_2}{R}, \quad g_s,l_s$$ are fixed as $R$ goes to infinity. This is the regime corresponding to the “two flat directions” Penrose limit of type IIB string theory on $\mbox{AdS}_5\times S^5$. Translating this into ${\mathcal{N}}=4$ SYM it corresponds to the regime where $$E-J_1-J_2-J_3 , \quad \frac{E+J_1+J_2+J_3}{\sqrt{N}}, \quad \frac{J_1+J_2+J_3}{\sqrt{N}}, \quad \frac{J_1 - J_2}{N^{1/4}} ,\quad \frac{J_3 -J_1 - J_2}{N^{1/4}} , \quad {g_{\rm YM}}^2$$ are fixed for $N \rightarrow \infty$. Here $J_1-J_2$ and $J_3-J_1-J_2$ correspond to the two momenta for the two space-like isometries of the [*two flat directions pp-wave background*]{} . Type IIB string theory on the pp-wave backgrounds , (with five-form field strength given by ) can be easily quantized. The spectra in all these three cases are worked out in the next section. 
String theory spectrum on a rotated pp-wave background {#sec:stringrotspectra}
======================================================

In this section we obtain a pp-wave metric which depends on parameters introduced through a coordinate transformation on the maximally supersymmetric pp-wave background of [@Blau:2001ne]. For this reason, in practice, this metric describes an infinite set of pp-wave backgrounds (one for each point of the parameter space). We refer to them as *rotated pp-wave backgrounds*. Note that the backgrounds obtained in this way do not necessarily have any specific meaning in an AdS/CFT context. They will only have a meaning in the AdS/CFT context if we derive them from a Penrose limit of $\mbox{AdS}_5 \times S^5$. Despite this, the procedure that we are going to describe turns out to be very useful, because it allows one to obtain a general formula that contains all the physically interesting pp-wave backgrounds. In fact we will show that, by appropriately choosing the values of the parameters of the background, this general formula describes exactly the backgrounds studied in the previous section, which are indeed obtained by taking Penrose limits of the $\mbox{AdS}_5 \times S^5$ geometry. We can then proceed to find the spectra on these generic rotated backgrounds. An important result is that, by taking an appropriate limit on these spectra, we will show that one can reproduce the spectra found in [@Harmark:2007px] for the nine decoupled sectors of $\neqf$ SYM which contain scalars.

Coordinate transformation
-------------------------

We start from the simplest pp-wave background metric without flat directions $$\label{BMNmetric} ds^2=-4dx^+dx^- - \mu^2 x^ix^i\left(dx^+\right)^2+dx^idx^i\, ,$$ where $i=1,2,\dots,8$ and five-form field strength $$\label{fff} F_{(5)}=2\mu dx^{+}\left(dx^{1}dx^{2}dx^{3}dx^{4}+dx^{5}dx^{6}dx^{7}dx^{8}\right)\, .$$ We consider the following coordinate transformation $$\label{transfrot} \begin{split} x^- =z^- &+\frac{\mu}{2}\left(C_1 z^1z^2 + C_2z^3z^4 + C_3z^5z^6 + C_4z^7z^8\right)\, , \\[2mm] \left( \begin{array}{c} x^{2k-1} \\[2mm] x^{2k} \end{array} \right) &= \left( \begin{array}{cc} \cos(\eta_k \mu z^+) & -\sin(\eta_k \mu z^+) \\[2mm] \sin(\eta_k \mu z^+) & \cos(\eta_k \mu z^+) \end{array} \right) \left( \begin{array}{c} z^{2k-1} \\[2mm] z^{2k} \end{array} \right)\, , \end{split}$$ where $C_{k}$ and $\eta_{k}$, $k=1, 2, 3, 4$, are parameters. Note that the transformations for the transverse coordinates are rotations whose angles depend on the $\eta_k$ parameters, hence the name “[*rotated pp-wave*]{}”. The metric then becomes $$\label{rotmetric} \begin{split} ds^2=&-4dz^+dz^- + dz^i dz^i - \mu^2 \sum_{k=1}^{4} \left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right]\left(dz^+\right)^2 \\ &- 2\mu \sum_{k=1}^{4}\left[(C_k-\eta_k)z^{2k-1}dz^{2k}+(C_k+\eta_k)z^{2k}dz^{2k-1}\right]dz^+\, , \end{split}$$ while the five-form field strength is invariant under the coordinate transformation. It is straightforward to check that the metric contains all the backgrounds obtained in Section \[sec:stringtheory\]. In fact, for various values of the $C_k$ and $\eta_k$ parameters, we have the following possibilities:

  ------------------------------------------- --------------- ----------------------
  $C_1=C_2=C_3=C_4=0$                          $\Rightarrow$   no flat direction;
  $C_1=\eta_1=1$ and $C_2=C_3=C_4=0$           $\Rightarrow$   one flat direction;
  $C_1=\eta_1=C_2=\eta_2=1$ and $C_3=C_4=0$    $\Rightarrow$   two flat directions.
  ------------------------------------------- --------------- ----------------------
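The statement that the coordinate change is nothing but a shift of $z^-$ together with a time-dependent rotation can be verified mechanically. The following short sympy sketch (ours, purely illustrative and not part of the original derivation) pulls the BMN metric back through the transformation and checks that the result coincides with the rotated metric for arbitrary $C_k$ and $\eta_k$:

```python
# Minimal sympy sketch (illustrative only): verify that the coordinate change
# (transfrot) maps the BMN metric (BMNmetric) into the rotated metric (rotmetric).
import sympy as sp

mu = sp.symbols('mu', positive=True)
zp, zm = sp.symbols('z_plus z_minus')          # z^+, z^-
z = sp.symbols('z1:9')                          # z^1, ..., z^8
C = sp.symbols('C1:5')
eta = sp.symbols('eta1:5')

# x-coordinates as functions of the z-coordinates, Eq. (transfrot)
xp = zp
xm = zm + sp.Rational(1, 2) * mu * sum(C[k] * z[2*k] * z[2*k+1] for k in range(4))
x = []
for k in range(4):
    a = eta[k] * mu * zp
    x.append(sp.cos(a) * z[2*k] - sp.sin(a) * z[2*k+1])   # x^{2k-1}
    x.append(sp.sin(a) * z[2*k] + sp.cos(a) * z[2*k+1])   # x^{2k}

zcoords = (zp, zm) + z
xcoords = [xp, xm] + x
n = 10

# BMN metric in x-coordinates: ds^2 = -4 dx+ dx- - mu^2 x^i x^i (dx+)^2 + dx^i dx^i
G = sp.zeros(n, n)
G[0, 1] = G[1, 0] = -2
G[0, 0] = -mu**2 * sum(xi**2 for xi in x)
for i in range(2, n):
    G[i, i] = 1

# pull back: G'_{ab} = (dx^m/dz^a)(dx^n/dz^b) G_{mn}
J = sp.Matrix([[sp.diff(xc, zc) for zc in zcoords] for xc in xcoords])
Gp = (J.T * G * J).expand().applyfunc(sp.trigsimp)

# expected rotated metric, Eq. (rotmetric)
E = sp.zeros(n, n)
E[0, 1] = E[1, 0] = -2
E[0, 0] = -mu**2 * sum((1 - eta[k]**2) * (z[2*k]**2 + z[2*k+1]**2) for k in range(4))
for i in range(2, n):
    E[i, i] = 1
for k in range(4):
    E[0, 2 + 2*k + 1] = E[2 + 2*k + 1, 0] = -mu * (C[k] - eta[k]) * z[2*k]
    E[0, 2 + 2*k]     = E[2 + 2*k, 0]     = -mu * (C[k] + eta[k]) * z[2*k+1]

assert (Gp - E).applyfunc(sp.simplify) == sp.zeros(n, n)
print("coordinate change (transfrot) reproduces the rotated pp-wave metric (rotmetric)")
```

Setting the parameters as in the table above then reproduces, case by case, the backgrounds of Section \[sec:stringtheory\].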
String theory can be quantized on the general background and we now proceed to find the superstring spectrum. Bosonic sector -------------- We work in the light-cone gauge $z^+ = p^+ \tau$ with $l_s=1$. The light-cone Lagrangian density of the bosonic $\sigma$-model is given by $$\label{boslagr} \begin{split} \mathscr{L}_{lc}^{B}= &- \frac{1}{4\pi p^+}\left(\partial^{\alpha}z^i\partial_{\alpha}z^i+ f^2 \sum_{k=1}^{4}\left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right] \right. \\ &+\left. 2f \sum_{k=1}^{4}\left[(C_k-\eta_k)z^{2k-1}\dot{z}^{2k}+(C_k+\eta_k)z^{2k}\dot{z}^{2k-1}\right]\right)\, , \end{split}$$ where we have defined $f = \mu p^+$. The conjugate momenta are computed to be $$\Pi_{2k-1} = \frac{\dot{z}^{2k-1}-f\left(C_k + \eta_k \right) z^{2k}}{2\pi }\, ,~~~~~ \Pi_{2k} = \frac{\dot{z}^{2k}-f\left(C_k - \eta_k \right) z^{2k-1}}{2\pi }\, ,$$ and the bosonic light-cone Hamiltonian is given by $$H_{lc}^{B}= \frac{1}{4\pi p^+}\int_{0}^{2\pi}d\sigma \Bigg[ \dot{z}^i \dot{z}^i+ (z^i)'(z^i)' +f^2 \sum_{k=1}^{4}\left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right]\Bigg]\, .$$ In order to solve the equations of motion $$\begin{aligned} &\partial^{\alpha}\partial_{\alpha}z^{2k-1}+2f\eta_k \dot{z}^{2k} - f^2 \left(1-\eta_{k}^{2}\right) z^{2k-1}=0\label{moteq1}\, ,\\ &\partial^{\alpha}\partial_{\alpha}z^{2k}-2f\eta_k \dot{z}^{2k-1} - f^2 \left(1-\eta_{k}^{2}\right) z^{2k}=0\label{moteq2}\, ,\end{aligned}$$ it is useful to introduce four complex fields $$X^k = z^{2k-1}+ iz^{2k}\, ,$$ in terms of which the above equations read $$\begin{aligned} &\partial^{\alpha}\partial_{\alpha}X^{k}-2 i f\eta_k \dot{X}^{k} - f^2 \left(1-\eta_{k}^{2}\right) X^{k}=0\, ,\label{moteqd1}\\ &\partial^{\alpha}\partial_{\alpha}\bar{X}^{k}+2 i f\eta_k \dot{\bar{X}}^{k} - f^2 \left(1-\eta_{k}^{2}\right) \bar{X}^{k}=0\label{moteqd2}\, .\end{aligned}$$ One can see that a solution of the form $$X^k=e^{-i f \eta_k \tau} Y^k$$ solves the equations above if $Y^k$ satisfies the equation $$\partial^{\alpha}\partial_{\alpha}Y^{k} -f^2 Y^k=0\, .$$ Therefore for $Y^k$ and its conjugate $\bar{Y}^k$ we have the following mode expansions \[bosmodeex\] $$\begin{aligned} Y^k&=i \sum_{n=-\infty}^{+\infty} \frac{1}{\sqrt{\omega_n}}\left(a_{n}^{k}e^{-i (\omega_n \tau -n\sigma)}- \left(\tilde{a}_{n}^{k}\right)^\dagger e^{i (\omega_n \tau -n\sigma)}\right)\, , \\ \bar{Y}^k&=i \sum_{n=-\infty}^{+\infty} \frac{1}{\sqrt{\omega_n}}\left(\tilde{a}_{n}^{k}e^{-i (\omega_n \tau -n\sigma)}- \left(a_{n}^{k}\right)^\dagger e^{i (\omega_n \tau -n\sigma)}\right)\, .\end{aligned}$$ The bosonic Hamiltonian now reads $$\label{Hcomplexfield} H_{lc}^{B}= \frac{1}{4\pi p^+}\int_{0}^{2\pi}d\sigma \sum_{k=1}^{4}\left(\dot{\bar{X}}^k \dot{X}^k+ (\bar{X}^k)'(X^k)' +f^2 \left(1-\eta_{k}^{2}\right)\bar{X}^{k}X^{k}\right)\, .$$ Then we quantize the theory by imposing the canonical equal time commutation relations $$\label{etcr} \left[a_{n}^{k},a_{m}^{k'}\right]=0\, , \qquad \left[a_{n}^{k},(a_{m}^{k'})^{\dagger}\right]=\left[\tilde{a}_{n}^{k},(\tilde{a}_{m}^{k'})^{\dagger}\right]=\delta^{kk'}\delta_{nm}\, .$$ We obtain the following bosonic spectrum in this background $$\label{rotbosH} \begin{split} H_{lc}^{B}=& \frac{1}{ p^+}\sum_{n=-\infty}^{+\infty} \sum_{k=1}^2 \left[\left(\omega_n + \eta_k f\right) M_{n}^{(k)}+\left(\omega_n - \eta_k f\right) \tilde{M}_{n}^{(k)}\right.
\\ +&\left.\left(\omega_n + \eta_{(k+2)} f\right) N_{n}^{(k)}+\left(\omega_n - \eta_{(k+2)} f\right) \tilde{N}_{n}^{(k)}\right]\, , \end{split}$$ where $\omega_n = \sqrt{n^2 + f^2}$ for all $n\in \mathbb{Z}$ and the number operators are defined as $$M_{n}^{(k)}=a_{n}^{k\dagger}a_{n}^{k}\, , ~~ \tilde{M}_{n}^{(k)} =\tilde{a}_{n}^{k\dagger}\tilde{a}_{n}^{k} \, ,~~N_{n}^{(k)}=a_{n}^{(k+2)\dagger}a_{n}^{(k+2)}\, , ~~\tilde{N}_{n}^{(k)} =\tilde{a}_{n}^{(k+2)\dagger}\tilde{a}_{n}^{(k+2)}$$ for $k=1,2$. Fermionic sector ---------------- We now work out the fermionic part of the spectrum. The light-cone gauge and $\kappa$-symmetry gauge fixing condition are $$z^+ = p^+ \tau, \qquad \Gamma^{+}\theta^A=0\,$$ where $\theta^A$, with $A=1,2$, is a Majorana-Weyl spinor with $32$ components. The Green-Schwarz fermionic light-cone action is then given by [@Metsaev:2002re] $$\label{GSaction} S_{lc}^{F}= \frac{i}{4\pi p^+}\int d\tau d\sigma \left[ \left(\eta^{\alpha\beta}\delta_{AB}-\epsilon^{\alpha\beta}\left(\sigma_{3}\right)_{AB}\right)\partial_{\alpha}z^+ \bar{\theta}^A \Gamma_+ \left(\mathcal{D}_{\beta}\theta\right)^B\right]\, ,$$ with covariant derivative $$\mathcal{D}_{\alpha}=\partial_{\alpha}+\frac{1}{4}\partial_{\alpha}z^+ \left(\omega_{+\rho\sigma}\Gamma^{\rho \sigma}-\frac{1}{2\cdot 5!}F_{\lambda\nu\rho\sigma\kappa}\Gamma^{\lambda\nu\rho\sigma\kappa}i\sigma_2 \Gamma_+ \right)\, ,$$ where $\sigma_{k}$’s are the Pauli matrices and $\omega_{a,b,c}$ are the spin connections. The non-vanishing components of the five-form field strength are $F_{+1234}=F_{+5678}=2\mu$. We can write the action as $$\label{feract} \begin{split} S_{lc}^{F}=& \frac{i}{2\pi p^+ }\int d\tau d\sigma \Bigg\{{\left(S^1\right)^T} \left[\partial_{+}-\frac{f}{2}\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}\right]S^1\\ +& {\left(S^2\right)^T} \left[\partial_{-}-\frac{f}{2}\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}\right]S^2 -2f {\left(S^1\right)^T} \Pi S^2\Bigg\}\, . \end{split}$$ where $S^A$, $A=1,2$, is a eight component real spinor and we introduced the matrix $\Pi=\gamma^{1234}$, where $\gamma_i$ are $8\times 8$ Dirac matrices [^6]. Moreover, $\partial_{\pm}=\partial_{\tau}\pm\partial_{\sigma}$. The equations of motion are \[eqmotferm\] $$\begin{aligned} &\left(\partial_{+}-\frac{f}{2}\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}\right)S^{1}-f\Pi S^{2}=0\, ,\\ &\left(\partial_{-}-\frac{f}{2}\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}\right)S^{2}+f\Pi S^{1}=0\, .\end{aligned}$$ It is useful to observe that a field of the form $$S^{A}=e^{\displaystyle \frac{f}{2}\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}\tau}\Sigma^{A}$$ satisfies the above equations if the fields $\Sigma^{A}$ obey the equations of motion of the fermionic fields in the usual pp-wave background [@Metsaev:2001bj; @Metsaev:2002re]: $$\partial_{+}\Sigma^{1}-f\Pi \Sigma^{2}=0\, ,~~~~~~\partial_{-}\Sigma^{2}+f\Pi \Sigma^{1}=0\, ,$$ whose solutions are $$\begin{aligned} &\Sigma^{1}=c_0\, e^{-i f \tau}S_0 - \sum_{n>0}c_n e^{-i \omega_{n}\tau} \left(S_n e^{i n \sigma}+\frac{\omega_{n}-n}{f} S_{-n}e^{-i n \sigma} \right) +\textrm{h.c. },\\ &\Sigma^{2}=-c_0\, e^{-i f \tau}i\Pi S_0 - i \Pi\sum_{n>0}c_n e^{-i \omega_{n}\tau} \left(S_{-n} e^{-i n \sigma}-\frac{\omega_{n}-n}{f} S_{n}e^{i n \sigma} \right)+\textrm{h.c. },\end{aligned}$$ where, for all values of $n$, $\omega_{n}=\sqrt{n^2+f^2}$, while $c_n = \frac{1}{\sqrt{2}}[1+(\frac{\omega_{n}-n}{f})^{2}]^{-1/2}$. 
The fermionic conjugate momenta can be computed from the action $$\lambda^{A}=\frac{i}{2\pi}S^{A}\, ,$$ and the fermionic part of the Hamiltonian can be written in the form $$H_{lc}^{F}= \frac{i}{2\pi p^+ }\int^{2\pi}_{0}d\sigma \left({\left(S^1\right)^T}\dot{S^1}+{\left(S^2\right)^T}\dot{S^2}\right)\,$$ where we used the equations of motion . Now we quantize the theory imposing the canonical equal time anticommutation relations $$\left\{S_{n}^{a},\left(S_{m}^{b}\right)^{\dagger}\right\}=\delta^{ab}\delta_{nm}\,$$ and the fermionic Hamiltonian reads $$H_{lc}^{F}=\frac{1}{ p^+ }{\sum_{n=-\infty}^{+\infty}}S_{n}^{\dagger} \left(\omega_{n}+i\frac{f}{2}{\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}}\right)S_{n}\, .$$ The matrices $i\,\gamma^{2k-1,2k}$ are commuting matrices and have eigenvalues $\pm 1$, each with multiplicity four. Since they commute we can find a set of common eigenvectors. Choosing this set as basis we can write the fermionic spectrum as $$\label{rotferH} H_{lc}^{F}= \frac{1}{ p^+}{\sum_{n=-\infty}^{+\infty}} \sum_{b=1}^{8} \left(\omega_n + \frac{f}{2} d_b \right)F_{n}^{(b)}\, ,$$ where $F_{n}^{(b)}$ are the fermionic number operators defined by the relation $$F_{n}^{(b)}=\left(S_{n}^{b}\right)^{\dagger}S_{n}^{b}\,$$ and where we have defined the coefficients $d_b$ as the following combinations of the $\eta_k$ parameters $$\begin{array}{lll} d_1 = -\eta_{1}-\eta_{2}+\eta_{3}+\eta_{4} \, , \phantom{qquad} & d_5 = -\eta_{1}+\eta_{2}+\eta_{3}-\eta_{4} \, ,\\[1mm] d_2 = -\eta_{1}-\eta_{2}-\eta_{3}-\eta_{4} \, , & d_6 = \eta_{1}-\eta_{2}+\eta_{3}-\eta_{4} \, ,\\[1mm] d_3 = \eta_{1}+\eta_{2}+\eta_{3}+\eta_{4} \, , & d_7 = \eta_{1}-\eta_{2}-\eta_{3}+\eta_{4} \, ,\\[1mm] d_4 = \eta_{1}+\eta_{2}-\eta_{3}-\eta_{4} \, , & d_8 = -\eta_{1}+\eta_{2}-\eta_{3}+\eta_{4} \, . \end{array}$$ At this point we can write the total light-cone Hamiltonian, $H_{lc}$, of type IIB string theory on the [*rotated pp-wave s*]{} $$\label{eq:rotH} \begin{split} H_{lc}=&H_{lc}^{B} +H_{lc}^{F}= \frac{1}{ p^+}\sum_{n=-\infty}^{+\infty} \left\{\sum_{k=1}^2 \left[\left(\omega_n + \eta_k f\right) M_{n}^{(k)}+\left(\omega_n - \eta_k f\right) \tilde{M}_{n}^{(k)}\right]\right. \\ +&\left.\sum_{k=1}^2\left[\left(\omega_n + \eta_{(k+2)} f\right) N_{n}^{(k)}+\left(\omega_n - \eta_{(k+2)} f\right) \tilde{N}_{n}^{(k)}\right] + \sum_{b=1}^{8}\left(\omega_n + \frac{f}{2} d_b \right)F_{n}^{(b)}\right\}\, , \end{split}$$ and the level matching condition is $$\sum_{n=-\infty}^{+\infty}\left[\sum_{k=1}^2\left(M_{n}^{(k)}+\tilde{M}_{n}^{(k)} +N_{n}^{(k)}+\tilde{N}_{n}^{(k)}\right)+ \sum_{b=1}^{8}F_{n}^{(b)}\right]=0 \, .$$ Note that the spectrum does not depend on the $C_k$ parameters since they just represent a gauge choice, but only on the $\eta_k$ parameters. The decoupled sectors {#sec:decsectors} ===================== In this section we show that by taking a certain limit of the spectra , we can reproduce the spectrum of anomalous dimensions of gauge theory operators in the dual sectors of $\mathcal{N}=4$ SYM theory found in [@Harmark:2007px]. The procedure follows that of [@Harmark:2006ta] where the spectrum in the $SU(2)$ sector is matched. Here we generalize this to all sectors that include scalar fields on the gauge theory side. 
According to the AdS/CFT correspondence, the string light-cone Hamiltonian $H_{\rm lc}$ should be dual to $D-J$ on the gauge theory side, $$\frac{H_{\rm lc}}{\mu}\, \longleftrightarrow \, D-J \, ,$$ where $D$ is the dilatation operator and $J$ is the total charge defined by $J = n_1 S_1 + n_2 S_2 + n_3 J_1 + n_4 J_2 + n_5 J_3$ with the $n_i$ characterizing the decoupling limit giving a particular sector of ${\mathcal{N}}=4$ SYM [@Harmark:2007px]. As explained in more detail below, the decoupling limit on the gauge theory side consists of taking the limit $D-J \rightarrow 0$ and $\lambda\rightarrow 0$ keeping $(D-J)/\lambda$ fixed. On the string theory side, this decoupling limit corresponds to the limit $\mu \to \infty$, or equivalently $f \to \infty$. We now apply this limit to the string spectra. Recalling the definition of $\omega_n$, its expansion for $f \to \infty$ takes the form $$\omega_n=\sqrt{f^2 + n^2}\simeq f+\frac{n^2}{2f} +\mathcal{O}(f^{-2})\, .$$ In order for the spectra to be finite, the divergent term contained in the expansion of $\omega_n$ must cancel. In the bosonic part of the Hamiltonian we deal with terms of the kind $$\begin{aligned} \left(\omega_n + \eta_k f\right)M_{n}^{(k)} & \simeq \left[f\left(1+\eta_k \right) + \frac{n^2}{2f} +\mathcal{O}(f^{-2})\right] M_{n}^{(k)}\, ,\\ \left(\omega_n - \eta_k f\right)\tilde{M}_{n}^{(k)} & \simeq \left[f\left(1-\eta_k \right) + \frac{n^2}{2f} +\mathcal{O}(f^{-2})\right] \tilde{M}_{n}^{(k)}\, , \end{aligned}$$ and the analogous ones for $N_{n}^{(k)}$ and $\tilde{N}_{n}^{(k)}$. In the fermionic part of the Hamiltonian, instead, we have $$\left( \omega_n + \frac{f}{2} d_b\right)F_{n}^{(b)} \simeq \left[f\left(1+\frac{d_b}{2}\right) + \frac{n^2}{2f} +\mathcal{O}(f^{-2})\right]F_{n}^{(b)}\, .$$ The only terms that survive the limit $f \to \infty$ are those for which the coefficient of the linear part in $f$ vanishes. All the other terms are divergent and thus decouple in the large $f$ limit. The bosonic number operators survive only if the corresponding $\eta_k$ equals $\pm 1$, and the fermionic number operators only if the corresponding $d_b$ equals $-2$. In the following we want to show that by appropriately fixing the values of the parameters $\eta_k$, the string theory spectra that survive the limit $\mu \to \infty$ precisely reproduce the spectra of the dual gauge theory sectors. As an important consequence of the matching of the spectra, it follows that the Hagedorn temperature of the gauge theory also matches that of string theory in these sectors. This can also be used to verify the conjectured relation between the Hagedorn/deconfinement temperature of planar ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}\times S^3$ and the Hagedorn temperature of string theory on $\mbox{AdS}_5\times S^5$. Moreover, these results show that the decoupling limits [@Harmark:2006di; @Harmark:2006ta; @Harmark:2006ie; @Harmark:2007et; @Harmark:2007px] of thermal $SU(N)$ $\mathcal{N}=4$ SYM on ${\mathbb{R}}\times S^3$ provide a very useful and powerful tool to match gauge theory and string theory. On the gauge theory side the idea [@Harmark:2006di; @Harmark:2006ta; @Harmark:2006ie; @Harmark:2007et; @Harmark:2007px; @Harmark:2008gm] is to consider decoupling limits of weakly coupled ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}\times S^3$ with gauge group $SU(N)$.
The decoupling limit is defined by $$\label{limit2} \lambda \rightarrow 0 {\,, \ \ }J_i,\, N \ \mbox{fixed} {\,, \ \ }H_{\rm g.t.} \equiv \frac{E-J}{\lambda} \ \mbox{fixed}$$ where $\lambda={g_{\rm YM}}^2 N/4\pi^2$ is the ’t Hooft coupling of $\mathcal{N}=4$ SYM theory, $E$ is the energy of a state measured in units of the three sphere radius and $J\equiv n_1 S_1 + n_2 S_2 + n_3 J_1 + n_4 J_2 + n_5 J_3$ is the total charge with $n_i$, $i=1,\ldots,5$ being fixed numbers. $S_1$ and $S_2$ denote the two charges of the $SO(4)$ group of $S^3$ and $J_1$, $J_2$ and $J_3$ are the three R-charges. Here we only consider the gauge theory in the planar limit $N=\infty$. In terms of operators we have that the Hamiltonian is given by $H_{\rm g.t.} = (D-J)/\lambda$. $D$ is the dilatation operator of $\mathcal{N}=4$ SYM which, at weak ’t Hooft coupling, can be expanded as $$D = D_0 + \lambda D_2 + \lambda^{\frac{3}{2}}D_3 + \lambda^2D_4 + \ldots$$ where $D_0$ is the bare scaling dimension, $D_2$ is the one-loop part of the dilatation operator and so on. One can see that in the limit , the operators with $D_0>J$ decouple and only the ones with $D_0=J$ survive the limit. One thus gets the effective Hamiltonian $H_{\rm g.t.}=D_2$, namely only the one-loop part of the dilatation operator survive the limit  [@Harmark:2006di; @Harmark:2006ta; @Harmark:2006ie; @Harmark:2007et; @Harmark:2007px; @Harmark:2008gm]. Among the possible decoupling limits of $\mathcal{N}=4$ SYM theory found in [@Harmark:2007px], here we are interested only in the decoupled sectors that contain scalars. The presence of the scalars is in fact crucial in order to analyze the regime of the gauge theory which is related to the dual string theory. These sectors are the $SU(2)$, $SU(1|1)$, $SU(1|2)$, $SU(2|3)$, bosonic $SU(1,1)$, $SU(1,1|1)$, $SU(1,1|2)$, $SU(1,2|2)$ and $SU(1,2|3)$ sectors. **Sector** $(n_1,n_2,n_3,n_4,n_5)$ --------------- -------------------------------------------------------- $SU(2)$ (0,0,1,1,0) $SU(1,1)_{b}$ (1,0,1,0,0) $SU(1|1)$ $\left(\frac{2}{3},0,1,\frac{2}{3},\frac{2}{3}\right)$ $SU(1|2)$ $\left(\frac{1}{2},0,1,1,\frac{1}{2}\right)$ $SU(2|3)$ (0,0,1,1,1) $SU(1,1|1)$ $\left(1,0,1,\frac{1}{2},\frac{1}{2}\right)$ $SU(1,1|2)$ (1,0,1,1,0) $SU(1,2|2)$ (1,1,1,0,0) $SU(1,2|3)$ (1,1,1,1,1) : The table shows the nine decoupled sectors that contain at least one scalar: in the left column are listed the sectors that survive the decoupling limit for the corresponding choice of $n=(n_1,n_2,n_3,n_4,n_5)$ reported in the right column. $SU(1,1)_b$ is the bosonic $SU(1,1)$ sector.[]{data-label="tab:sectors"} For more details see Ref. [@Harmark:2007px]. The spectra for these nine different sectors all take the form [@Harmark:2007px] $$\label{eq:ABCspectrum2} H_{\rm g.t.} = \frac{2\pi^2}{J^2} \sum_{n\in \mathbb{Z}} n^2 \left( \sum_{i=1}^a M_n^{(i)} +\sum_{j=1}^b N_n^{(j)} + \sum_{\alpha=1}^c F_n^{(\alpha)} \right)$$ The cyclicity (zero momentum) constraint is $$\begin{aligned} \label{eq:ABCconstraint} P \equiv \sum_{n\in \mathbb{Z}} n \left( \sum_{i=1}^a M_n^{(i)} +\sum_{j=1}^b N_n^{(j)} + \sum_{\alpha=1}^c F_n^{(\alpha)} \right) = 0.\end{aligned}$$ Note that $F_n^{(\alpha)} \in \{0,1\}$ while $M_n^{(i)}, N_n^{(j)} \in \{0,1,2,...\}$. The numbers $a,b$ and $c$ are given in Tab. \[tab:abc\]. 
$SU(\cdot)$ $(2)$ $(1,1)_b$ $(1|1)$ $(1|2)$ $(2|3)$ $(1,1|1)$ $(1,1|2)$ $(1,2|2)$ $(1,2|3)$ ------------- ------- ----------- --------- --------- --------- ----------- ----------- ----------- ----------- $a$ 1 0 0 1 2 0 1 0 2 $b$ 0 1 0 0 0 1 1 2 2 $c$ 0 0 1 1 2 1 2 2 4 : The table shows how many number operators we have of each type ($a$ for scalars $M_n$, $b$ for derivatives $N_n$, and $c$ for fermions $F_n$) in each of the nine theories that contain at least one scalar. $SU(1,1)_b$ is the bosonic $SU(1,1)$ sector. \[tab:abc\] We want to show that there is a direct relation between the critical values of the numbers $(n_1,...,n_5)$ that characterize the various sectors on the gauge theory side and the parameters $\eta_1,...,\eta_4,$ that give the corresponding decoupled sectors on the string theory side. From table \[tab:sectors\], we see that all the nine sectors containing scalars have $n_3 = 1$. It is not hard to see that a suitable choice of $\eta_k$ parameters to match the string theory spectrum with the spectrum of the gauge theory side is the following $$\label{eq:etaasn} \eta_1 =n_4 \, ,\phantom{qquad} \eta_2 =n_5 \, , \phantom{qquad} \eta_3 =-n_1 \, ,\phantom{qquad} \eta_4 =n_2 \, .$$ Using the previous relations in the spectrum and taking the limit $f\to \infty$ we see that the string theory spectrum precisely matches the spectrum of the nine decoupled sectors of the gauge theory side. As an example we can consider the $SU(1,1|1)$ sector: in this case $n=\left(1,0,1,\frac{1}{2},\frac{1}{2}\right)$ (see Table \[tab:sectors\]) so using the relations we have that $\eta=\left(\frac{1}{2},\frac{1}{2},-1,0\right)$. Since the only $\eta_k$ equal to -1 is $\eta_3$ and the only $d_b$ equal to -2 is $d_1$ we have that only one bosonic and one fermionic number operator survive the limit $f \to \infty$. The string theory spectrum thus becomes $$\label{strsect2} \frac{H_{lc}}{\mu}\sim \frac{1}{2 \mu p^+ f} \sum_{n\in \mathbb{Z}} n^2 \left( N_n^{(1)} + F_n^{(1)} \right)\, ,$$ which, using the dictionary between gauge theory and string theory, can be written as $$\label{strsect} \frac{H_{lc}}{\mu}= \lambda D_2=\frac{2\pi^2\lambda}{J^2} \sum_{n\in \mathbb{Z}} n^2 \left( N_n^{(1)} + F_n^{(1)} \right)\, ,$$ where we used that $f=J/(2\pi\sqrt{\lambda})$. It is easy to check that is in accordance with the corresponding result in the gauge theory side which can be deduced from . We can repeat an analogous check for all the other decoupled sectors and we can show that the field content of the surviving spectrum is exactly the same as the one obtained on the gauge theory side. Using again Table \[tab:abc\], we can thus write the reduced spectrum for all the nine sectors on the string theory side at once. It is given by $$\frac{H_{lc}}{\mu}=\frac{1}{2 \mu p^+ f} \sum_{n\in \mathbb{Z}} n^2 \left( \sum_{i=1}^a M_n^{(i)} +\sum_{j=1}^b N_n^{(j)} + \sum_{\alpha=1}^c F_n^{(\alpha)} \right)$$ which indeed coincides with Eq. once we use the dictionary between gauge theory and string theory. New Penrose limit of ${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$ {#sec:ads4} ============================================================ In the above we have found a new Penrose limit of ${\mbox{AdS}}_5 \times S^5$ with two explicit space-like isometries in addition to the existing Penrose limits with zero and one space-like isometries [@Blau:2002dy; @Berenstein:2002jq; @Bertolini:2002nr]. 
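As a simple cross-check of the matching described in the previous section, the counting of surviving oscillator towers can be reproduced in a few lines: for each sector one maps $(n_1,\ldots,n_5)$ to the $\eta_k$ via (\[eq:etaasn\]), keeps one $M$-type tower for every $\eta_1,\eta_2$ equal to $\pm 1$, one $N$-type tower for every $\eta_3,\eta_4$ equal to $\pm 1$, and one fermionic tower for every $d_b=-2$, and then compares with Table \[tab:abc\]. The sketch below is purely illustrative (the function names and tolerances are ours):

```python
SECTORS = {                      # (n_1, ..., n_5) from Table tab:sectors
    "SU(2)":      (0, 0, 1, 1, 0),
    "SU(1,1)_b":  (1, 0, 1, 0, 0),
    "SU(1|1)":    (2/3, 0, 1, 2/3, 2/3),
    "SU(1|2)":    (1/2, 0, 1, 1, 1/2),
    "SU(2|3)":    (0, 0, 1, 1, 1),
    "SU(1,1|1)":  (1, 0, 1, 1/2, 1/2),
    "SU(1,1|2)":  (1, 0, 1, 1, 0),
    "SU(1,2|2)":  (1, 1, 1, 0, 0),
    "SU(1,2|3)":  (1, 1, 1, 1, 1),
}

def eta_from_n(n):
    n1, n2, n3, n4, n5 = n
    return (n4, n5, -n1, n2)                         # the identification eta_k <-> n_i above

def surviving_towers(eta, tol=1e-9):
    """Count the towers whose frequency stays finite as f -> infinity: one M-type
    tower per eta_1, eta_2 equal to +-1, one N-type tower per eta_3, eta_4 equal
    to +-1, and one fermionic tower per d_b = -2 (the d_b combinations listed earlier)."""
    e1, e2, e3, e4 = eta
    d = [-e1-e2+e3+e4, -e1-e2-e3-e4, e1+e2+e3+e4, e1+e2-e3-e4,
         -e1+e2+e3-e4,  e1-e2+e3-e4, e1-e2-e3+e4, -e1+e2-e3+e4]
    a = sum(abs(abs(e) - 1) < tol for e in (e1, e2))
    b = sum(abs(abs(e) - 1) < tol for e in (e3, e4))
    c = sum(abs(db + 2) < tol for db in d)
    return a, b, c

for name, n in SECTORS.items():
    print(name, surviving_towers(eta_from_n(n)))     # compare with (a, b, c) in Table tab:abc
```

Running this reproduces the values of $(a,b,c)$ in Table \[tab:abc\] for all nine sectors.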
A natural question is whether one can similarly find new Penrose limits of the ${\mbox{AdS}}_4\times {\mathbb{C}}P^3$ background of type IIA supergravity. The known Penrose limits for this background have either zero explicit space-like isometries [@Nishioka:2008gz; @Gaiotto:2008cg] or two space-like isometries [@Grignani:2008is; @Astolfi:2009qh]. In particular, the one with two space-like isometries of [@Grignani:2008is; @Astolfi:2009qh] is connected to studying the $SU(2) \times SU(2)$ sector of string theory on ${\mbox{AdS}}_4\times {\mathbb{C}}P^3$. We find in this section a new Penrose limit of the ${\mbox{AdS}}_4\times {\mathbb{C}}P^3$ background of type IIA supergravity with one explicit space-like isometry, $i.e.$ with one flat direction. Furthermore, we find the spectrum of type IIA string theory on this background by computing the spectrum for a general rotated pp-wave background that, for certain choices of parameters, corresponds both to the new pp-wave background with one explicit space-like isometry and to the two known backgrounds with zero and two explicit space-like isometries. The “one flat direction” Penrose limit -------------------------------------- In this section we present a new Penrose limit of ${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$, here called the “one flat direction” Penrose limit. The ${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$ metric is given by $$ds^2=R^2\left(\frac{1}{4}ds^2_{AdS_4}+ ds^2_{\CP^3}\right) \, ,$$ where $$\label{metricAdS4} ds^2_{AdS_4} = -\cosh^2\rho \, dt^2 +d\rho^2 +\sinh^2 \rho \, d\Omega_2^2 \, ,$$ and $$\label{metricCP3} \begin{split} ds^2_{\CP^3} & = d\theta^2+4\cos^2 \theta \sin^2 \theta \left(d\delta+\frac{\cos\theta_1}{4}d\vp_1- \frac{\cos\theta_2}{4}d\vp_2\right)^2 \\ &+\frac{1}{4}\cos^2 \theta\left(d\theta_1^2+\sin^2\theta_1 d\vp_1^2\right)+\frac{1}{4}\sin^2 \theta (d\theta_2^2+\sin^2\theta_2 d\vp_2^2)\, . \end{split}$$ We introduce the new variables $\chi$, $\xi$ and $\psi$ by $$2\delta = \chi + \frac{\vp_2}{2}\, ,\qquad \vp_2=\xi+b\chi\,, \qquad 2\theta = \psi+ \frac{\pi}{2}\, ,$$ where $b$ is a parameter. The coordinate transformation that defines the Penrose limit is $$\begin{split} &x^+ = \frac{t+\chi}{2} \,, \qquad x^- = R^2\frac{t-\chi}{8} \,, \qquad \rho= \frac{2r}{R} \, ,\qquad \psi=\frac{2 u_4}{R} \, ,\\ &\vp_1 = \frac{2\sqrt{2}\,x_1}{R} \, ,\qquad \theta_1=\frac{2\sqrt{2}\,y_1}{R}+\frac{\pi}{2} \, ,\qquad \theta_2=\frac{2\sqrt{2}\,z}{R}\, . \end{split}$$ Taking the limit $R \to \infty$ while keeping $x^\pm$, $r$, $u_4$, $x_1$, $y_1$, $z$ finite, the metric becomes $$\label{metric1fd} \begin{split} ds^2 = &-4dx^+ dx^- + {\sum_{i=1}^{4}}\left(du_i^2-u_i^2 {dx^+}^2\right) + \sum_{a=1}^{2}\left(dx_a^2+dy_a^2 \right) \\ &+b(1+b)\left(x_2^2+y_2^2\right){dx^+}^2- 2 y_1 dx_1 dx^+ +(1+2b)\left[x_2 dy_2 - y_2 dx_2\right]dx^+ \, , \end{split}$$ where $x_2+iy_2=z\,e^{i\xi}$. The metric describes exactly a pp-wave with a flat direction, namely $x_1$. 
Rotated s and the string spectrum --------------------------------- Let us start from the pp-wave metric found in [@Nishioka:2008gz] $$ds^2=-4d\tilde{x}^+d\tilde{x}^- -\left(\sum_{i=1}^4 \tilde{x}_i^2+\frac{1}{4}\sum_{i=5}^8\tilde{x}_i^2\right){d\tilde{x}^+}^2+\sum_{i=1}^8 d\tilde{x}_i^2,$$ We consider the following coordinate transformation $$\label{transfrot2} \begin{split} \tilde x^+ &=x^+ \, \\[2mm] \tilde x^- &=x^- + \sum_{a=1}^{2} C_a x_a y_a \, , \\[2mm] \tilde x_i &=u_i \, , \quad i=1,\dots,4\, ,\\[2mm] \left( \begin{array}{c} \tilde x_{3+2a} \\[2mm] \tilde x_{4+2a} \end{array} \right) &= \left( \begin{array}{cc} \cos(\eta_a x^+ ) & -\sin(\eta_a x^+ ) \\[2mm] \sin(\eta_a x^+ ) & \cos(\eta_a x^+ ) \end{array} \right) \left( \begin{array}{c} x_a \\[2mm] y_a \end{array} \right)\, , \quad a=1,2\, , \end{split}$$ where $C_{1}, C_{2}$ and $\eta_{1}, \eta_{2}$ are parameters. Under this tranformation the metric becomes $$\begin{aligned} \label{rotmet} ds^2 =& -4dx^+ dx^- + {\sum_{i=1}^{4}}\left(du_i^2-u_i^2 {dx^+ }^2\right) + \sum_{a=1}^{2}\Bigg[dx_a^2+dy_a^2+\left(\eta_a^2-\frac{1}{4}\right)\left(x_a^2+y_a^2\right){dx^+ }^2 {\nonumber}\\ & +2\left(\eta_a-2C_a\right)x_a dy_a dx^+ - 2\left(\eta_a+2C_a\right)y_a dx_a dx^+ \Bigg]\, .\end{aligned}$$ It is easy to see that if one chooses the $C_a$ and $\eta_a$ parameters so the terms $dx^+ {}^2$ and $dx_a dx^+ $ in the metric vanish, i.e. $$\eta_a=-\frac{1}{2}\, ,\qquad \quad C_a=\frac{1}{4} \, ,$$ then one gets the  with two flat directions found in [@Grignani:2008is] $$ ds^2 = -4dx^+ dx^- + {\sum_{i=1}^{4}}\left(du_i^2-u_i^2 {dx^+ }^2\right) + \sum_{a=1}^{2}\left[dx_a^2+dy_a^2 - 2 y_a dx_a dx^+ \right]$$ Eq.  also contains the pp-wave with one flat direction that we just obtained through a Penrose limit of the geometry for the following choice of parameters $$\begin{array}{lcl} \eta_1=- \displaystyle \frac{1}{2} \, , & \phantom{aaa}& \eta_2=b+ \displaystyle \frac{1}{2} \, , \\[2mm] C_1= \displaystyle \frac{1}{4} \, ,& & C_2=0 \, . \end{array}$$ ### Spectrum {#spectrum .unnumbered} Now we derive the string spectrum on the rotated pp-wave  . In the light-cone gauge $x^+ = c \tau$ the bosonic Lagrangian density is $$\label{penboslagr} \begin{split} &\mathscr{L}_{\rm lc}^{B}= - \frac{1}{4\pi c}\bigg\{\sum_{i=1}^4\left[\dot{u}_i^2 -u_i'^2-c^2u_i^2\right]+\sum_{a=1}^2\Big[\dot{x}_a^2+\dot{y}_a^2 -x_a'^2-y_a'^2 \\ &+c^2\left(\eta_a^2-\frac{1}{4}\right)\left(x_a^2+y_a^2\right)+2c\left(\eta_a-2 C_a\right)x_a \dot{y}_a -2c\left(\eta_a+2C_a\right)y_a \dot{x}_a\Big]\bigg\}\, . \end{split}$$ where $c$ is fixed by requiring that the conjugate momentum to $x^-$ is constant. The bosonic light-cone Hamiltonian is then given by $$\label{penbosham} \begin{split} c H^B_{\rm lc}=& \frac{1}{4\pi } \int_0^{2\pi} d\sigma \bigg\{\sum_{i=1}^4\left[\dot{u}_i^2 +u_i'^2+c^2u_i^2\right] \\ &+\sum_{a=1}^2\left[\dot{x}_a^2+\dot{y}_a^2 +x_a'^2+y_a'^2+c^2\left(\frac{1}{4}-\eta_a\right)\left(x_a^2+y_a^2\right)\right] \bigg\}\, . 
\end{split}$$ The mode expansion for the bosonic fields can be written as $$u_i (\tau,\sigma ) = \frac{i}{\sqrt{2}} \sum_{n\in {\mathbb{Z}}} \frac{1}{\sqrt{\Omega_n}} \Big[ \hat{a}^i_n e^{-i ( \Omega_n \tau - n \sigma ) } - (\hat{a}^i_n)^\dagger e^{i ( \Omega_n \tau - n \sigma ) } \Big] \, ,$$ $$\label{zmode} z_a(\tau,\sigma) = \, e^{-i c \eta_a \tau} \sum_{n \in {\mathbb{Z}}} \frac{1}{\sqrt{\omega_n}} \Big[ a_n^a e^{-i ( \omega_n \tau - n \sigma ) } - (\tilde{a}^a)^\dagger_n e^{i ( \omega_n \tau - n \sigma ) } \Big]\, ,$$ where $\Omega_n=\sqrt{c^2+n^2}$, $\omega_n=\sqrt{\frac{c^2}{4}+n^2}$ and we defined $z_a(\tau,\sigma)=x_a(\tau,\sigma)+iy_a(\tau,\sigma)$. The canonical commutation relations $[x_a(\tau,\sigma),p_{x_b}(\tau,\sigma')] = i\delta_{ab} \delta (\sigma-\sigma')$, $[y_a(\tau,\sigma),p_{y_b}(\tau,\sigma')] = i\delta_{ab}\delta (\sigma-\sigma')$ and $[u_i(\tau,\sigma),p_j(\tau,\sigma')] = i\delta_{ij} \delta (\sigma-\sigma')$ follows from $$\label{comrel} [a_m^a,(a_n^b)^\dagger] = \delta_{mn} \delta_{ab}{\,, \ \ }[\tilde{a}_m^a,(\tilde{a}_n^b)^\dagger] = \delta_{mn} \delta_{ab}{\,, \ \ }[\hat{a}^i_m,(\hat{a}^j_n)^\dagger] = \delta_{mn} \delta_{ij} \, .$$ Employing we obtain the bosonic spectrum $$\label{penspectrum} c H^B_{\rm lc} = \sum_{i=1}^4 \sum_{n\in {\mathbb{Z}}} \sqrt{n^2+c^2}\, \hat{N}^i_n+\sum_{a=1}^2\sum_{n\in {\mathbb{Z}}} \left\{ \left(\sqrt{\frac{c^2}{4}+n^2} + \eta_a c \right) M_n^a + \left(\sqrt{\frac{c^2}{4}+n^2}- \eta_a c\right) N_n^a \right\} \, ,$$ with the number operators $\hat{N}^i_n = (\hat{a}^i_n)^\dagger \hat{a}^i_n$, $M_n^a = (a^a)^\dagger_n a^a_n$ and $N_n^a = (\tilde{a}^a)^\dagger_n \tilde{a}_n^a$. Now we compute the fermionic part of the spectrum. We start from the type IIA superstring Lagrangian density on the   $$\label{penferlagr} \mathscr{L}^{F}= \frac{i \,c }{2} \, \bar{\theta} \Gamma_+ \left[\partial_\tau -\Gamma_{11} \partial_\sigma +\frac{c}{4}\left(-2\eta_1\Gamma_{56}-2\eta_2\Gamma_{78}+\Gamma_{11}\Gamma_4-3\Gamma_{123}\right)\right]\theta \, ,$$ where $\theta$ is a 32 component real spinor and we used the zehnbeins $$\begin{split} &e^+_{\phantom{+}+}=\frac{1}{2} \, , \qquad e^-_{\phantom{+}+}=\frac{1}{2}\left[\left({\sum_{i=1}^{4}}u_i^2\right) - \sum_{a=1}^{2} \left(\eta_a^2-\frac{1}{4}\right) \left(x_a^2+y_a^2\right)\right] \, ,\\ &e^-_{\phantom{+}-}=2\, , \qquad e^-_{\phantom{+}x_a}=\left(\eta_a+2C_a\right)y_a \, , \qquad e^-_{\phantom{+}y_a}=-\left(\eta_a-2C_a\right)x_a \, , \\ &e^i_{\phantom{+}u_i}=1 \, , \qquad e^5_{\phantom{+}x_1}=1\, , \qquad e^6_{\phantom{+}y_1}=1\, , \qquad e^7_{\phantom{+}x_2}=1\, , \qquad e^8_{\phantom{+}y_2}=1\, , \end{split}$$ where $i=1,2,3,4$, and the relevant components of the spin connection $$\omega_+^{\phantom{+}56}=-\eta_1\, , \qquad \omega_+^{\phantom{+}78}=-\eta_2\, .$$ Let us decompose $\theta=\theta_+ +\theta_-$ by writing $$\Gamma_{5678}\theta_\pm=\pm \theta_\pm \, ,$$ In terms of $\theta_\pm$ the light-cone gauge conditions are [@Astolfi:2009qh] $$\Gamma_- \theta_- =0\, , \qquad \Gamma_{4956}\theta_+=\theta_+\, .$$ Using the spinor conventions of Appendix \[AppendixA\] we can write the Lagrangian as $$\mathscr{L}^{F} = \mathscr{L}_+ +\mathscr{L}_- \, ,$$ with $\mathscr{L}_+$ and $\mathscr{L}_-$ given by $$\mathscr{L}_+=i \psi^* \dot{\psi} - \frac{i}{2}\left(\psi \psi' + \psi^* {\psi^*}'\right) +\frac{i\, c}{2}\Delta_1 \psi \gamma_{56} \psi^* + \frac{c}{2} \psi \psi^* \, ,$$ $$\mathscr{L}_-=i \chi^* \dot{\chi} - \frac{i}{2}\left(\chi \chi' + \chi^* {\chi^*}'\right) -\frac{i\, c}{2}\Delta_2 \chi 
\gamma_{56} \chi^* - c \chi \chi^* \, ,$$ where $\Delta_1=\eta_2-\eta_1$ and $\Delta_2=\eta_1+\eta_2$. The mode expansions for the 8 component spinors $\psi$ and $\chi$ are $$\psi_{\alpha} = \left( e^{- \frac{c}{2} \Delta_1 \gamma_{56}\tau} \right)_{\alpha \beta} \sum_{n\in Z} \left[ f^+_n d_{n,\alpha}e^{-i ( \omega_n \tau - n \sigma ) } - f^-_n d^\dagger_{n,\alpha} e^{i ( \omega_n \tau - n \sigma ) } \right]\, ,$$ $$\chi_{\alpha} = \left( e^{ \frac{c}{2} \Delta_2 \gamma_{56}\tau} \right)_{\alpha \beta} \sum_{n\in Z} \left[ - g^-_n b_{n,\beta} e^{-i ( \Omega_n \tau - n \sigma ) } + g^+_n b^\dagger_{n,\beta} e^{i ( \Omega_n \tau - n \sigma ) } \right] \, ,$$ with the constants $f^\pm_n$ and $g^\pm_n$ defined by $$f^\pm_n = \frac{\sqrt{\omega_n+n} \pm \sqrt{\omega_n-n}}{2\sqrt{\omega_n}} {\,, \ \ }g^\pm_n = \frac{\sqrt{\Omega_n+n} \pm \sqrt{\Omega_n-n}}{2\sqrt{\Omega_n}}$$ The fermionic Hamiltonian density is therefore $$\label{CH2F} c \mathcal{H}^{F}_{\rm lc} = \frac{i}{2} \left( \psi \psi' -\rho \rho' \right) + \frac{c}{2} \Delta_1 \psi \gamma_{56}\rho - \frac{i\, c}{2} \psi \rho + \frac{i}{2}\left(\chi \chi' - \lambda \lambda'\right) - \frac{i\,c}{2} \Delta_2 \chi \gamma_{56}\lambda + i c \chi \lambda \, ,$$ where the fermionic momenta are $$\rho = - i \psi^* {\,, \ \ }\lambda = - i \chi^* \, .$$ The fermionic spectrum can then be computed and reads $$\label{fermppwave} \begin{split} c H^{F}_{\rm lc} &= \sum_{n\in {\mathbb{Z}}} \Bigg[ \sum_{b=1,2}\left(\omega_n +\frac{c}{2} \Delta_1 \right)F_n^{(b)} + \sum_{b=3,4}\left(\omega_n -\frac{c}{2} \Delta_1 \right)F_n^{(b)} \\ &+ \sum_{b=5,6} \left( \Omega_n - \frac{c}{2}\Delta_2 \right) F_n^{(b)} + \sum_{b=7,8} \left( \Omega_n + \frac{c}{2}\Delta_2 \right) F_n^{(b)} \Bigg] \end{split}$$ with the number operators $F^{(b)}_n= d_{n,\alpha}^\dagger d_{n,\alpha}$ for $b=1,\ldots ,4$, and $F_n^{(b)} = b^\dagger_{n,\alpha} b_{n,\alpha}$ for $b=5,\ldots,8$. The level-matching condition, including also the bosonic part, is $$\label{levelmbf} \sum_{n\in {\mathbb{Z}}}n \left[\sum_{i=1}^4 \hat{N}^i_n+\sum_{a=1}^2 \left(M_n^a + N_n^a\right) +\sum_{b=1}^8 F^{(b)}_n\right] = 0$$ Acknowledgments {#acknowledgments .unnumbered} =============== GG and AM thank the Galielo Galilei Institute for Theoretical Physics for hospitality and the INFN for partial support during the completion of this work. The work of GG is supported in part by the MIUR-PRIN contract 2007-5ATT78. Gamma matrices and spinors {#AppendixA} ========================== We briefly review our conventions for the representations of Dirac matrices in ten dimensions and for Majorana-Weyl spinors. As usual, we shall use the mostly plus metric. 
Gamma matrices {#gamma-matrices .unnumbered} -------------- Let $I_n$ denote the $n \times n$ unit matrix, $\sigma_1,\, \sigma_2,\, \sigma_3$ the $2\times 2$ Pauli matrices $$\label{Paulimatr} \sigma_1 = {\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) }{\ \ ,\ \ \ \ }\sigma_2 = {\left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) } {\ \ ,\ \ \ \ }\sigma_3 = {\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) }\, ,$$ and $\epsilon$ the antisymmetric tensor of rank two $$\epsilon = i \sigma_2 ={\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right) } \, .$$ We can define the real $8 \times 8$ matrices $\gamma_1,...,\gamma_8$ as $$\label{gammadirac} \begin{array}{ll} \gamma_1 = \epsilon \times \epsilon \times \epsilon\, , \phantom{qquad} & \gamma_5 = \sigma_3 \times \epsilon \times I_2\, , \\[1mm] \gamma_2 = I_2 \times \sigma_1 \times \epsilon \, , & \gamma_6 =\epsilon \times I_2 \times \sigma_1 \, , \\[1mm] \gamma_3 = I_2 \times \sigma_3 \times \epsilon \, ,& \gamma_7 = \epsilon \times I_2 \times \sigma_3 \, , \\[1mm] \gamma_4 = \sigma_1 \times \epsilon \times I_2 \, , & \gamma_8 = I_2\times I_2 \times I_2\, . \end{array}$$ This should be read as $$\label{notgammamatr} \gamma_7 = \epsilon \times I_2 \times \sigma_3 = {\left( \begin{array}{cc} 0 & I_2 \times \sigma_3 \\ -I_2 \times \sigma_3 & 0 \end{array} \right) } {\,, \ \ }I_2 \times \sigma_3 = {\left( \begin{array}{cc} \sigma_3 & 0 \\ 0 & \sigma_3 \end{array} \right) }\, ,$$ and so on. It is easy to verify that the matrices $\gamma_1,...,\gamma_8$ obey the following relations $$\label{smallgamma9} \begin{split} &\gamma_i \gamma_j^T + \gamma_j \gamma_i^T = \gamma_i^T \gamma_j + \gamma_j^T \gamma_i = 2 \delta_{ij} I_8 {\ \ ,\ \ \ \ }i,j=1,...,8 \\[1mm] & \gamma_1 \gamma_2^T \gamma_3 \gamma_4^T \gamma_5 \gamma_6^T \gamma_7 \gamma_8^T = I_8 {\ \ ,\ \ \ \ }\gamma_1^T \gamma_2 \gamma_3^T \gamma_4 \gamma_5^T \gamma_6 \gamma_7^T \gamma_8 = - I_8\, . \end{split}$$ Now we introduce the $16 \times 16$ matrices $\hat{\gamma}_1,...,\hat{\gamma}_9$ defined as $$\label{gam9} \begin{split} &\hat{\gamma}_i = {\left( \begin{array}{cc} 0 & \gamma_i \\ \gamma_i^T & 0 \end{array} \right) }\, , \qquad i,j=1,...,8 \\[1mm] &\hat{\gamma}_{9} = \sigma_3 \times I_8 = {\left( \begin{array}{cc} I_8 & 0 \\ 0 & -I_8 \end{array} \right) }\, . \end{split}$$ The matrices $\hat{\gamma}_1,...,\hat{\gamma}_9$ are symmetric and real, and they obey $$\begin{split} \{ \hat{\gamma}_i, \hat{\gamma}_j \} &= 2 \delta_{ij} I_{16} \, , \qquad i,j=1,...,9 \\[1mm] &\hat{\gamma}_9 = \hat{\gamma}_1 \hat{\gamma}_2 \cdots \hat{\gamma}_8 \, . \end{split}$$ At this point we are ready to define the Dirac matrices in ten dimensions, which are the following $32 \times 32$ matrices: $$\begin{split} \Gamma_0 &= - \epsilon \times I_{16} = {\left( \begin{array}{cc} 0 & -I_{16} \\ I_{16} & 0 \end{array} \right) } \, , \\ \Gamma_i &= \sigma_1 \times \hat{\gamma} = {\left( \begin{array}{cc} 0 & \hat{\gamma}_i \\ \hat{\gamma}_i & 0 \end{array} \right) } \, , \quad i=1,...,9 \\ \Gamma_{11} &= \sigma_3 \times I_{16} = {\left( \begin{array}{cc} I_{16} & 0 \\ 0 & -I_{16} \end{array} \right) }\, . \end{split}$$ We see that these matrices are real and satisfy the relations $$\label{gam11} \begin{split} \{ \Gamma_a,\Gamma_b \} = 2 \eta_{ab} I_{32} \, ,& \quad a,b=0,1,...,9,11 \\[1mm] \Gamma_{11} = \Gamma^0 & \Gamma^1 \cdots \Gamma^9 \, . 
\end{split}$$ It is convenient to introduce the light-cone Dirac matrices $\Gamma_\pm$, given by $$\begin{split} \Gamma_\pm = &\Gamma_0 \pm \Gamma_9 \, , \\ \Gamma^\pm = - \frac{1}{2} \Gamma_\mp &= \frac{1}{2} ( \Gamma^0 \pm \Gamma^9 )\, . \end{split}$$ The raising and lowering of these indices are done according to a flat space metric with $\eta_{+-} = -2$.\ We then define $$\Gamma_{a_1 a_2 \cdots a_n} = \Gamma_{[a_1} \Gamma_{a_2} \cdots \Gamma_{a_n]} \, ,$$ and analogously the $16 \times 16$ matrices $$\hat{\gamma}_{i_1 \cdots i_n } = \hat{\gamma}_{[i_1} \hat{\gamma}_{i_2} \cdots \hat{\gamma}_{i_n]} \, ,$$ with $i_l = 1,...,8$. Since $\hat{\gamma}_i$ is symmetric we have that $$\hat{\gamma}_{ijkl}^T = \hat{\gamma}_{ijkl} \, ,$$ i.e. that $\hat{\gamma}_{ijkl}$ is also symmetric. Furthermore, we define the $8\times 8$ matrices $$\label{gammai1ik} \gamma_{i_1 \cdots i_{2k} } = \gamma_{[i_1} \gamma^T_{i_2}\cdots \gamma^T_{i_{2k}]} {\,, \ \ }\gamma_{i_1 i_2 \cdots i_{2k+1} } = \gamma^T_{[i_1} \gamma_{i_2} \cdots \gamma^T_{i_{2k+1}]} \, ,$$ with $i_l = 1,...,8$. In particular we call $\Pi$ the matrix $$\Pi \equiv \gamma_{1234} = \gamma_1 \gamma_2^T \gamma_3 \gamma_4^T \, ,$$ which has the following properties $$\label{piids1} \Pi^2 = I_8 {\,, \ \ }\Pi^T = \Pi {\,, \ \ }\Pi = \gamma_{5678} \, .$$ The last equation follows from . Finally, it is possible to show that $\Pi$ satisfies the relations $$\label{piids2} \Pi \gamma_{ij} = \gamma_{ij} \Pi = - \epsilon_{ijkl} \gamma^{kl} {\,, \ \ }\Pi \gamma_{i'j'} = \gamma_{i'j'} \Pi = - \epsilon_{i'j'k'l'} \gamma^{k'l'} \, ,$$ with $i,j=1,2,3,4$ and $i',j'=5,6,7,8$. Spinors for type IIB {#spinors-for-type-iib .unnumbered} -------------------- The spinors $\theta^A$ are 32-component Majorana-Weyl spinors. The Majorana condition imposes that the 32 components of $\theta^A$ are real. The Weyl condition is $$\label{weylcond} \Gamma_{11} \theta^A = \theta^A \, ,$$ for both $A=1,2$. Note here that we choose the two spinors to have the same chirality since we are considering type IIB string theory. Using we see that the Weyl condition means that only the first 16 components of $\theta^A$ are non-zero, whereas the last 16 components are zero. We therefore write $$\label{defpsi} \theta^A = {\left( \begin{array}{c} \psi^A \\ 0 \end{array} \right) } \, ,$$ where $\psi^A$, $A=1,2$, are two real 16 component spinors. The light-cone gauge $\Gamma_- \theta^A = 0$ is equivalent to $$\hat{\gamma}_9\psi^A = \psi^A \, ,$$ which resembles a Weyl condition for the transverse directions. Indeed, using , we see that the last 8 components of $\psi^A$ are zero. Thus, we write $$\label{defS} \psi^A = {\left( \begin{array}{c} S^A \\ 0 \end{array} \right) } \, ,$$ where $S^A$, $A=1,2$, are two real 8 component spinors. Spinors for type IIA {#spinors-for-type-iia .unnumbered} -------------------- For the type IIA GS string we have two Majorana-Weyl spinors $\theta^{1,2}$ with opposite chirality, $i.e.$ $\Gamma_{11} \theta^1 = \theta^1$ and $\Gamma_{11} \theta^2 = - \theta^2$. We collect these into a 32 component real spinor $\theta = \theta^1 + \theta^2$. 
We can then decompose $\theta$ in terms of eigenstates of $\Gamma_{5678}$ namely $\theta=\theta_+ +\theta_-$ with $\Gamma_{5678}\theta_{\pm}=\pm\theta_{\pm}$ so that, keeping into account the representation we chose for $\Gamma_{11}$, (\[gam11\]), $\theta_{\pm}$ has the following decomposition in terms of 16-component spinors $$\theta_\pm={\left( \begin{array}{c} \vartheta^1_{\pm} \\ \vartheta^2_\pm \end{array} \right) } \, ,$$ The gauge conditions that should be imposed to fix $\kappa$-symmetry are different on $\theta_+$ and on $\theta_-$ [@Astolfi:2009qh] and read $$\label{kappacond} \Gamma_{-} \theta_- =0~~~{\,, \ \ }~~~\Gamma_{4956}\theta_+=\theta_+$$ It is thus useful to rotate the $\theta_+$ spinor so as to impose also on the rotated spinor the same gauge condition we have on $\theta_-$. This is done by defining $\widetilde\theta_+$ according to $$\label{tildetheta} \theta_+=(I-\Gamma_{0456})\widetilde\theta_+$$ Again we have the decomposition in terms of spinors of opposite chirality $$\widetilde\theta_+={\left( \begin{array}{c} \widetilde\vartheta^1_{+} \\ \widetilde\vartheta^2_+ \end{array} \right) } \, ,$$ The gauge choice on $\widetilde\theta_+$ is thus $\Gamma_{-} \widetilde\theta_+ =0$. It is then useful to define also a rotated 16-component spinor $\hat\vartheta^2_+=\hat\gamma_4\widetilde\vartheta^2_+$ so that both $\widetilde\vartheta^{1}_+$ and $\hat\vartheta^2_+$ have the same eigenvalue +1 of $\hat\gamma_9$. This rotations make the quantization on this type IIA background very similar to that of the type IIB. We can now define the rescaled 8-component spinors $$\label{defS2}\widetilde\vartheta^{1}_+ = \frac{1}{\sqrt{c}}{\left( \begin{array}{c} S^1_+ \\ 0 \end{array} \right) }~~{\,, \ \ }~~~\hat\vartheta^2_+ = \frac{1}{\sqrt{c}}{\left( \begin{array}{c} S^2_+ \\ 0 \end{array} \right) } \, ,$$ In the main text we used then the 8-component complex spinors $$\psi=S^1_++i S^2_+~~~{\,, \ \ }~~~\psi^*=S^1_+-i S^2_+$$ Let us now turn to $\theta_-$. Again to have the same eigenvalue +1 of $\hat\gamma_9$ for the upper and the lower 16-component spinors, we perform a rotation of $\vartheta_-^2$ with $\hat\gamma_4$ according to $\hat\vartheta_-^2=\gamma_4\vartheta_-^2$. We can now define as before the rescaled 8-component spinors $$\label{defS3}\widetilde\vartheta^{1}_- = \frac{1}{\sqrt{c}}{\left( \begin{array}{c} S^1_- \\ 0 \end{array} \right) }~~{\,, \ \ }~~~\hat\vartheta^2_- = \frac{1}{\sqrt{c}}{\left( \begin{array}{c} S^2_- \\ 0 \end{array} \right) } \, ,$$ In the main text we then used then the 8-component complex spinors $$\chi=S^1_-+i S^2_-~~~{\,, \ \ }~~~\chi^*=S^1_--i S^2_-$$ [10]{} J. M. Maldacena, “[The large N limit of superconformal field theories and supergravity]{},” [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 231–252, [[arXiv:hep-th/9711200]{}](http://arxiv.org/abs/hep-th/9711200). S. S. Gubser, I. R. Klebanov, and A. M. Polyakov, “[Gauge theory correlators from non-critical string theory]{},” [[*Phys. Lett.*]{} [ **B428**]{} (1998) 105–114](http://dx.doi.org/10.1016/S0370-2693(98)00377-3), [[arXiv:hep-th/9802109]{}](http://arxiv.org/abs/hep-th/9802109). E. Witten, “[Anti-de Sitter space and holography]{},” [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 253–291, [[arXiv:hep-th/9802150]{}](http://arxiv.org/abs/hep-th/9802150). D. Berenstein, J. M. Maldacena, and H. Nastase, “Strings in flat space and pp waves from [${\mathcal{N}}= 4$]{} super [Yang Mills]{},” [*JHEP*]{} [**04**]{} (2002) 013, [[hep-th/0202021]{}](http://arxiv.org/abs/hep-th/0202021). M. Blau, J. 
Figueroa-O’Farrill, C. Hull, and G. Papadopoulos, “A new maximally supersymmetric background of [IIB]{} superstring theory,” [*JHEP*]{} [**01**]{} (2002) 047, [[hep-th/0110242]{}](http://arxiv.org/abs/hep-th/0110242). M. Blau, J. M. Figueroa-O’Farrill, C. Hull, and G. Papadopoulos, “[Penrose limits and maximal supersymmetry]{},” [*Class. Quant. Grav.*]{} [**19**]{} (2002) L87–L95, [[arXiv:hep-th/0201081]{}](http://arxiv.org/abs/hep-th/0201081). R. R. Metsaev, “[Type IIB Green-Schwarz superstring in plane wave Ramond- Ramond background]{},” [[*Nucl. Phys.*]{} [ **B625**]{} (2002) 70–96](http://dx.doi.org/10.1016/S0550-3213(02)00003-2), [[arXiv:hep-th/0112044]{}](http://arxiv.org/abs/hep-th/0112044). R. R. Metsaev and A. A. Tseytlin, “[Exactly solvable model of superstring in plane wave Ramond-Ramond background]{},” [[*Phys. Rev.*]{} [ **D65**]{} (2002) 126004](http://dx.doi.org/10.1103/PhysRevD.65.126004), [[arXiv:hep-th/0202109]{}](http://arxiv.org/abs/hep-th/0202109). M. Bertolini, J. de Boer, T. Harmark, E. Imeroni, and N. A. Obers, “Gauge theory description of compactified pp-waves,” [*JHEP*]{} [**01**]{} (2003) 016, [[hep-th/0209201]{}](http://arxiv.org/abs/hep-th/0209201). J. Michelson, “(twisted) toroidal compactification of pp-waves,” [*Phys. Rev.*]{} [**D66**]{} (2002) 066002, [[hep-th/0203140]{}](http://arxiv.org/abs/hep-th/0203140). T. Harmark and M. Orselli, “Matching the [Hagedorn]{} temperature in [AdS/CFT]{},” [*Phys. Rev.*]{} [**D74**]{} (2006) 126009, [[hep-th/0608115]{}](http://arxiv.org/abs/hep-th/0608115). J. A. Minahan and K. Zarembo, “The [Bethe-ansatz]{} for [${\mathcal{N}}= 4$]{} super [Yang-Mills]{},” [*JHEP*]{} [**03**]{} (2003) 013, [[hep-th/0212208]{}](http://arxiv.org/abs/hep-th/0212208). N. Beisert, C. Kristjansen, and M. Staudacher, “The dilatation operator of [${\mathcal{N}}= 4$]{} super [Yang-Mills]{} theory,” [*Nucl. Phys.*]{} [**B664**]{} (2003) 131–184, [[hep-th/0303060]{}](http://arxiv.org/abs/hep-th/0303060). N. Beisert and M. Staudacher, “[The N=4 SYM Integrable Super Spin Chain]{},” [[*Nucl. Phys.*]{} [**B670**]{} (2003) 439–463](http://dx.doi.org/10.1016/j.nuclphysb.2003.08.015), [[arXiv:hep-th/0307042]{}](http://arxiv.org/abs/hep-th/0307042). F. Berruto, G. Grignani, G. W. Semenoff, and P. Sodano, “[Chiral symmetry breaking on the lattice: A study of the strongly coupled lattice Schwinger model]{},” [[*Phys. Rev.*]{} [**D57**]{} (1998) 5070–5083](http://dx.doi.org/10.1103/PhysRevD.57.5070), [[arXiv:hep-lat/9710066]{}](http://arxiv.org/abs/hep-lat/9710066). F. Berruto, G. Grignani, G. W. Semenoff, and P. Sodano, “On the correspondence between the strongly coupled 2-flavor lattice [Schwinger]{} model and the [Heisenberg]{} antiferromagnetic chain,” [*Annals Phys.*]{} [**275**]{} (1999) 254–296, [[hep-th/9901142]{}](http://arxiv.org/abs/hep-th/9901142). J. Callan, Curtis G. [*et al.*]{}, “Quantizing string theory in [${\mbox{AdS}}_5 \times S^5$]{}: Beyond the pp- wave,” [[*Nucl. Phys.*]{} [**B673**]{} (2003) 3–40](http://dx.doi.org/10.1016/j.nuclphysb.2003.09.008), [[arXiv:hep-th/0307032]{}](http://arxiv.org/abs/hep-th/0307032). J. Callan, Curtis G., T. McLoughlin, and I. Swanson, “[Holography beyond the Penrose limit]{},” [[*Nucl. Phys.*]{} [**B694**]{} (2004) 115–169](http://dx.doi.org/10.1016/j.nuclphysb.2004.06.033), [[arXiv:hep-th/0404007]{}](http://arxiv.org/abs/hep-th/0404007). M. Staudacher, “The factorized [S]{}-matrix of [CFT/AdS]{},” [*JHEP*]{} [**05**]{} (2005) 054, [[hep-th/0412188]{}](http://arxiv.org/abs/hep-th/0412188). N. 
Beisert, “The [$\mathfrak{su}(2|2)$]{} dynamic [S]{}-matrix,” [[hep-th/0511082]{}](http://arxiv.org/abs/hep-th/0511082). N. Beisert, B. Eden, and M. Staudacher, “[Transcendentality and crossing]{},” [*J. Stat. Mech.*]{} [**0701**]{} (2007) P021, [[hep-th/0610251]{}](http://arxiv.org/abs/hep-th/0610251). C. Kristjansen, J. Plefka, G. W. Semenoff, and M. Staudacher, “[A new double-scaling limit of N = 4 super Yang-Mills theory and PP-wave strings]{},” [[*Nucl. Phys.*]{} [ **B643**]{} (2002) 3–30](http://dx.doi.org/10.1016/S0550-3213(02)00749-6), [[arXiv:hep-th/0205033]{}](http://arxiv.org/abs/hep-th/0205033). N. R. Constable [*et al.*]{}, “[PP-wave string interactions from perturbative Yang-Mills theory]{},” [*JHEP*]{} [**07**]{} (2002) 017, [[arXiv:hep-th/0205089]{}](http://arxiv.org/abs/hep-th/0205089). M. Spradlin and A. Volovich, “[Superstring interactions in a pp-wave background]{},” [[*Phys. Rev.*]{} [**D66**]{} (2002) 086004](http://dx.doi.org/10.1103/PhysRevD.66.086004), [[arXiv:hep-th/0204146]{}](http://arxiv.org/abs/hep-th/0204146). G. De Risi, G. Grignani, M. Orselli, and G. W. Semenoff, “[DLCQ]{} string spectrum from [${\mathcal{N}}= 2$]{} [SYM]{} theory,” [[*JHEP*]{} [**11**]{} (2004) 053](http://dx.doi.org/10.1088/1126-6708/2004/11/053), [[arXiv:hep-th/0409315]{}](http://arxiv.org/abs/hep-th/0409315). G. Grignani, M. Orselli, B. Ramadanovic, G. W. Semenoff, and D. Young, “[Divergence cancellation and loop corrections in string field theory on a plane wave background]{},” [*JHEP*]{} [**12**]{} (2005) 017, [[arXiv:hep-th/0508126]{}](http://arxiv.org/abs/hep-th/0508126). G. Grignani, M. Orselli, B. Ramadanovic, G. W. Semenoff, and D. Young, “[AdS/CFT vs. string loops]{},” [[*JHEP*]{} [**06**]{} (2006) 040](http://dx.doi.org/10.1088/1126-6708/2006/06/040), [[arXiv:hep-th/0605080]{}](http://arxiv.org/abs/hep-th/0605080). P. Y. Casteill, R. A. Janik, A. Jarosz, and C. Kristjansen, “[Quasilocality of joining/splitting strings from coherent states]{},” [[*JHEP*]{} [**12**]{} (2007) 069](http://dx.doi.org/10.1088/1126-6708/2007/12/069), [[arXiv:0710.4166 \[hep-th\]]{}](http://arxiv.org/abs/0710.4166). C. Kristjansen, M. Orselli, and K. Zoubos, “[Non-planar ABJM Theory and Integrability]{},” [[ *JHEP*]{} [**03**]{} (2009) 037](http://dx.doi.org/10.1088/1126-6708/2009/03/037), [[arXiv:0811.2150 \[hep-th\]]{}](http://arxiv.org/abs/0811.2150). O. Aharony, O. Bergman, D. L. Jafferis, and J. Maldacena, “[${\mathcal{N}}=6$]{} superconformal [Chern-Simons-matter]{} theories, [M2-branes]{} and their gravity duals,” [[arXiv:0806.1218 \[hep-th\]]{}](http://arxiv.org/abs/0806.1218). T. Nishioka and T. Takayanagi, “[On Type IIA Penrose Limit and N=6 Chern-Simons Theories]{},” [[arXiv:0806.3391 \[hep-th\]]{}](http://arxiv.org/abs/0806.3391). D. Gaiotto, S. Giombi, and X. Yin, “[Spin Chains in N=6 Superconformal Chern-Simons-Matter Theory]{},” [[*JHEP*]{} [**04**]{} (2009) 066](http://dx.doi.org/10.1088/1126-6708/2009/04/066), [[arXiv:0806.4589 \[hep-th\]]{}](http://arxiv.org/abs/0806.4589). G. Grignani, T. Harmark, and M. Orselli, “[The SU(2) x SU(2) sector in the string dual of N=6 superconformal Chern-Simons theory]{},” [[*Nucl. Phys.*]{} [**B810**]{} (2009) 115–134](http://dx.doi.org/10.1016/j.nuclphysb.2008.10.019), [[arXiv:0806.4959 \[hep-th\]]{}](http://arxiv.org/abs/0806.4959). D. Astolfi, V. G. M. Puletti, G. Grignani, T. Harmark, and M. Orselli, “[Finite-size corrections in the SU(2) $\times$ SU(2) sector of type IIA string theory on [${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$]{}]{},” [[*Nucl. 
Phys.*]{} [**B810**]{} (2009) 150–173](http://dx.doi.org/10.1016/j.nuclphysb.2008.10.020), [[arXiv:0807.1527 \[hep-th\]]{}](http://arxiv.org/abs/0807.1527). D. Astolfi, V. G. M. Puletti, G. Grignani, T. Harmark, and M. Orselli, “[Full Lagrangian and Hamiltonian for quantum strings on [${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$]{} in a near plane wave limit]{},” [[arXiv:0912.2257 \[hep-th\]]{}](http://arxiv.org/abs/0912.2257). T. Harmark and M. Orselli, “Quantum mechanical sectors in thermal [${\mathcal{N}}= 4$]{} super [Yang-Mills]{} on [${\mathbb{R}}\times S^3$]{},” [*Nucl. Phys.*]{} [**B757**]{} (2006) 117–145, [[hep-th/0605234]{}](http://arxiv.org/abs/hep-th/0605234). T. Harmark, K. R. Kristjansson, and M. Orselli, “Magnetic [Heisenberg-chain]{} / pp-wave correspondence,” [*JHEP*]{} [**02**]{} (2007) 085, [[hep-th/0611242]{}](http://arxiv.org/abs/hep-th/0611242). T. Harmark, K. R. Kristjansson, and M. Orselli, “[The [Hagedorn]{} temperature in a decoupled sector of [AdS/CFT]{}]{},” [*Fortsch. Phys.*]{} [**55**]{} (2007) 754–759, [[hep-th/0701088]{}](http://arxiv.org/abs/hep-th/0701088). T. Harmark, K. R. Kristjansson, and M. Orselli, “Decoupling limits of [${\mathcal{N}}=4$]{} super [Yang-Mills]{} on [${\mathbb{R}}\times S^3$]{},” [[*JHEP*]{} [**09**]{} (2007) 115](http://dx.doi.org/10.1088/1126-6708/2007/09/115), [[arXiv:0707.1621 \[hep-th\]]{}](http://arxiv.org/abs/0707.1621). T. Harmark, K. R. Kristjansson, and M. Orselli, “[Matching gauge theory and string theory in a decoupling limit of AdS/CFT]{},” [[arXiv:0806.3370 \[hep-th\]]{}](http://arxiv.org/abs/0806.3370). D. Astolfi, G. Grignani, T. Harmark, and M. Orselli, “[Finite-size corrections to the rotating string and the winding state]{},” [[*JHEP*]{} [**08**]{} (2008) 099](http://dx.doi.org/10.1088/1126-6708/2008/08/099), [[arXiv:0804.3301 \[hep-th\]]{}](http://arxiv.org/abs/0804.3301). E. Witten, “[Anti-de Sitter space, thermal phase transition, and confinement in gauge theories]{},” [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 505–532, [[arXiv:hep-th/9803131]{}](http://arxiv.org/abs/hep-th/9803131). B. Sundborg, “The [Hagedorn]{} transition, deconfinement and [${\mathcal{N}}= 4$ SYM]{} theory,” [*Nucl. Phys.*]{} [**B573**]{} (2000) 349–363, [[hep-th/9908001]{}](http://arxiv.org/abs/hep-th/9908001). A. M. Polyakov, “Gauge fields and space-time,” [*Int. J. Mod. Phys.*]{} [ **A17S1**]{} (2002) 119–136, [[hep-th/0110196]{}](http://arxiv.org/abs/hep-th/0110196). O. Aharony, J. Marsano, S. Minwalla, K. Papadodimas, and M. Van Raamsdonk, “[The Hagedorn / deconfinement phase transition in weakly coupled large N gauge theories]{},” [*Adv. Theor. Math. Phys.*]{} [**8**]{} (2004) 603–696, [[arXiv:hep-th/0310285]{}](http://arxiv.org/abs/hep-th/0310285). N. Deo, S. Jain, and C.-I. Tan, “String statistical mechanics above [Hagedorn]{} energy density,” [*Phys. Rev.*]{} [**D40**]{} (1989) 2626. R. C. Brower, J. McGreevy, and C. I. Tan, “[Stringy model for QCD at finite density and generalized Hagedorn temperature]{},” [[arXiv:hep-ph/9907258]{}](http://arxiv.org/abs/hep-ph/9907258). G. Grignani, M. Orselli, and G. W. Semenoff, “[Matrix strings in a B-field]{},” [*JHEP*]{} [**07**]{} (2001) 004, [[arXiv:hep-th/0104112]{}](http://arxiv.org/abs/hep-th/0104112). G. Grignani, M. Orselli, and G. W. Semenoff, “[The target space dependence of the Hagedorn temperature]{},” [*JHEP*]{} [**11**]{} (2001) 058, [[arXiv:hep-th/0110152]{}](http://arxiv.org/abs/hep-th/0110152). G. De Risi, G. Grignani, and M. 
Orselli, “[Space / time noncommutativity in string theories without background electric field]{},” [*JHEP*]{} [**12**]{} (2002) 031, [[arXiv:hep-th/0211056]{}](http://arxiv.org/abs/hep-th/0211056). G. Grignani, J. L. Karczmarek, and G. W. Semenoff, “[Hot Giant Loop Holography]{},” [[arXiv:0904.3750 \[hep-th\]]{}](http://arxiv.org/abs/0904.3750). T. Harmark and N. A. Obers, “[Thermodynamics of spinning branes and their dual field theories]{},” [*JHEP*]{} [**01**]{} (2000) 008, [[arXiv:hep-th/9910036]{}](http://arxiv.org/abs/hep-th/9910036). G. Grignani, L. Griguolo, N. Mori, and D. Seminara, “Thermodynamics of theories with sixteen supercharges in non-trivial vacua,” [[arXiv:0707.0052 \[hep-th\]]{}](http://arxiv.org/abs/arXiv:0707.0052 [hep-th]). K. J. Larsen and N. A. Obers, “[Phases of Thermal [${\mathcal{N}}=2$]{} [Quiver Gauge Theories]{}]{},” [ [ **JHEP0801**]{} (2008) 057](http://dx.doi.org/10.1088/1126-6708/2008/01/057), [[arXiv:0708.3199 \[hep-th\]]{}](http://arxiv.org/abs/0708.3199). A. Hamilton, J. Murugan, and A. Prinsloo, “[A note on the universality of the Hagedorn behavior of pp- wave strings]{},” [[*JHEP*]{} [**02**]{} (2008) 108](http://dx.doi.org/10.1088/1126-6708/2008/02/108), [[arXiv:0712.3059 \[hep-th\]]{}](http://arxiv.org/abs/0712.3059). M. Blau, J. M. Figueroa-O’Farrill, and G. Papadopoulos, “[Penrose limits, supergravity and brane dynamics]{},” [[*Class. Quant. Grav.*]{} [**19**]{} (2002) 4753](http://dx.doi.org/10.1088/0264-9381/19/18/310), [[arXiv:hep-th/0202111]{}](http://arxiv.org/abs/hep-th/0202111). [^1]: Early attempts in describing gauge theories in terms of spin chains can be found in [@Berruto:1997jv; @Berruto:1999ga]. [^2]: See also [@Astolfi:2008yw] for work on the winding state with the space-like isometry compactified. [^3]: For related computations of the Hagedorn temperature in the presence of background fields see for example Refs. [@Deo:1989bv]. [^4]: See also [@Harmark:1999xt] for a related study of black holes with R-charged chemical potentials. [^5]: The $4\pi^2$ factor is included in the ’t Hooft coupling for our convenience. [^6]: See Appendix \[AppendixA\] for our conventions on the spinors and the representation of the Dirac matrices.
--- abstract: 'In this paper, we consider femtocell CR networks, where femto base stations (FBS) are deployed to greatly improve network coverage and capacity. We investigate the problem of generic data multicast in femtocell networks. We reformulate the resulting MINLP problem into a simpler form, and derive upper and lower performance bounds. Then we consider three typical connection scenarios in the femtocell network, and develop optimal and near-optimal algorithms for the three scenarios. Second, we tackle the problem of streaming scalable videos in femtocell CR networks. A framework is developed to capture the key design issues and trade-offs with a stochastic programming problem formulation. In the case of a single FBS, we develop an optimum-achieving distributed algorithm, which is shown to be optimal also for the case of multiple non-interfering FBS’s. In the case of interfering FBS’s, we develop a greedy algorithm that can compute near-optimal solutions, and prove a closed-form lower bound on its performance.' author: - bibliography: - 'cr\_video\_femto.bib' - 'MyWork.bib' title: The Feasibility of Scalable Video Streaming over Femtocell Networks --- Introduction ============ Due to the use of open space as the transmission medium, the capacity of wireless networks is usually limited by interference. When a mobile user moves away from the base station, a considerably larger transmit power is needed to overcome attenuation, which causes interference to other users and deteriorates network capacity. To this end, femtocells provide an effective solution that brings network infrastructure closer to mobile users. A femtocell is a small (e.g., residential) cellular network, with a [*femto base station*]{} (FBS) connected to the owner’s broadband wireline network [@Chandrasekhar08; @Kim09; @Guvenc10]. The FBS serves approved users when they are within its coverage. Among the many benefits, femtocells are shown to be effective in improving network coverage and capacity [@Chandrasekhar08]. Due to the reduced distance, transmit power can be greatly reduced, leading to prolonged battery life, improved signal-to-interference-plus-noise ratio (SINR), and better spatial reuse of spectrum. Femtocells have received significant interest from the wireless industry. Although highly promising, many important problems should be addressed to fully harvest their potential, such as interference mitigation, resource allocation, synchronization, and QoS provisioning [@Chandrasekhar08; @Kim09]. It is also critical for the success of this technology to support important applications such as real-time video streaming in femtocell networks. In this paper, we first investigate the problem of data multicast in femtocell networks. It is not atypical that many users request the same content, as often observed in wireline networks. By allowing multiple users to share the same downlink multicast transmission, significant spectrum and power savings can be achieved. In particular, we adopt [*superposition coding*]{} (SC) and [*successive interference cancellation*]{} (SIC), two well-known PHY techniques, for data multicast in femtocell networks [@Goldsmith06]. With SC, a compound signal is transmitted, consisting of multiple signals (or, layers) from different senders or from the same sender. With SIC, a strong signal can first be decoded, by treating all other signals as noise. Then the decoder will reconstruct the signal from the decoded bits, and subtract the reconstructed signal from the compound signal. 
The next signal will be decoded from the residual, by treating the remaining signals as noise. And so forth. A special strength of the SC with SIC approach is that it enables simultaneous unicast transmissions (e.g., many-to-one or one-to-many). It has been shown that SC with SIC is more efficient than PHY techniques with orthogonal channels [@Goldsmith06; @Li09]. We adopt SC and SIC for the unique femtocell network environment, and investigate how to enable efficient data multicast from the femtocells to multiple users. We formulate a Mixed Integer Nonlinear Programming (MINLP) problem, which is NP-hard in general. The objective is to minimize the total BS power consumption. Then we reformulate the MINLP problem into a simpler form, and derive upper and lower performance bounds. We also derive a simple heuristic scheme that assigns users to the BS’s with a greedy approach. Finally, we consider three typical connection scenarios in the femtocell network, and develop optimal and near-optimal algorithms for the three scenarios. The proposed algorithms have low computational complexity, and are shown to outperform the heuristic scheme with considerable gains. Then, we investigate the problem of video streaming in femtocell cognitive radio (CR) networks. We consider a femtocell network consisting of a [*macro base station*]{} (MBS) and multiple FBS’s. The femtocell network is co-located with a primary network with multiple licensed channels. This is a challenging problem due to the stringent QoS requirements of real-time videos and, on the other hand, the new dimensions of network dynamics (i.e., channel availability) and uncertainties (i.e., spectrum sensing and errors) found in CR networks. We adopt Scalable Video Coding (SVC) in our system [@Hu10JSAC; @Hu10TW]. SVC encodes a video into multiple substreams, subsets of which can be decoded to provide different quality levels for the reconstructed video [@Wien07]. Such scalability is very useful for video streaming systems, especially in CR networks, to accommodate heterogeneous channel availabilities and dynamic network conditions. We consider H.264/SVC medium grain scalable (MGS) videos, since MGS can achieve better rate-distortion performance over Fine-Granularity-Scalability (FGS), although it only has Network Abstraction Layer (NAL) unit-based granularity [@Wien07]. The unique femtocell network architecture and the scalable video allow us to develop a framework that captures the key design issues and trade-offs, and to formulate a [*stochastic programming*]{} problem. It has been shown that the deployment of femtocells has a significant impact on the network performance [@Chandrasekhar08]. In this paper, we examine three deployment scenarios. In the case of a single FBS, we apply [*dual decomposition*]{} to develop a distributed algorithm that can compute the optimal solution. In the case of multiple non-interfering FBS’s, we show that the same distributed algorithm can be used to compute optimal solutions. In the case of multiple interfering FBS’s, we develop a greedy algorithm that can compute near-optimal solutions, and prove a closed-form lower bound for its performance based on an [*interference graph*]{} model. The proposed algorithms are evaluated with simulations, and are shown to outperform three alternative schemes with considerable gains. The remainder of this paper is organized as follows. The related work is discussed in Section \[sec:femto\_work\]. 
We investigate the problem of data multicast over femtocell networks in Section \[sec:femto\_mcast\_sic\]. The problem of streaming multiple MGS videos in a femtocell CR network is discussed in Section \[sec:femto\_cr\_video\]. Section \[sec:femto\_conc\] concludes this paper. Background and Related Work {#sec:femto_work} =========================== Femtocells have attracted considerable interest from both industry and academia. Technical and business challenges, requirements, and some preliminary solutions for femtocell networks are discussed in [@Chandrasekhar08]. Since FBS’s are deployed in a distributed manner and are able to spatially reuse the same channel, considerable research efforts have been made on interference analysis and mitigation [@Chandrasekhar09; @Lee10]. A distributed utility-based SINR adaptation scheme was presented in [@Chandrasekhar09] to alleviate cross-tier interference at the macrocell from co-channel femtocells. Lee, Oh and Lee [@Lee10] proposed a fractional frequency reuse scheme to mitigate inter-femtocell interference. Deploying femtocells by underlaying the macrocell has been shown to significantly improve indoor coverage and system capacity. However, interference mitigation in a two-tier heterogeneous network is a challenging problem. In [@Chu11], the interference from the macrocell and femtocells was mitigated by a spatial channel separation scheme with codeword-to-channel mapping. In [@Rangan10], the rate distribution in the macrocell was improved by subband partitioning, and modest gains were achieved by interference cancellation. In [@Bharucha09], the interference was controlled by denying the access of femtocell base stations in order to protect the transmission of a nearby macro base station. A novel algorithmic framework was presented in [@Madan10] for dynamic interference management to deliver QoS, fairness, and high system efficiency in LTE-A femtocell networks. Requiring no modification of existing macrocells, CR was shown to achieve considerable performance improvement when applied to interference mitigation [@Cheng11]. In [@Kaimaletu11], the orthogonal time-frequency blocks and transmission opportunities were allocated based on a safe/victim classification. SIC has high potential for sending or receiving multiple signals concurrently, which improves the transmission efficiency [@Hu11GC]. In [@Li09], the authors developed MAC and routing protocols that exploit SC and SIC to enable simultaneous unicast transmissions. Sen et al. investigated the possible throughput gains with SIC from a MAC layer perspective [@Sen10]. Power control for SIC was comprehensively investigated and widely applied to code division multiple access (CDMA) systems [@Jean09; @Park08; @Benvenuto07; @Agrawal05; @Andrews03]. Applying game theory, Jean and Jabbari proposed an uplink power control scheme under SIC in direct sequence-CDMA networks [@Jean09]. In [@Park08], the authors introduced an iterative two-stage SIC detection scheme for a multicode MIMO system and showed that the proposed scheme significantly outperformed the equal power allocation scheme. A scheme for joint power control and receiver optimization of CDMA transceivers was presented in [@Benvenuto07]. In [@Agrawal05; @Andrews03], the impact of imperfect channel estimation and imperfect interference cancellation on the capacity of CDMA systems was examined. The problem of video over CR networks has only been studied in a few recent papers [@Shiang08; @Ding09; @Hu12JSAC; @Luo11]. 
In our prior work, we investigated the problem of scalable video over infrastructure-based CR networks [@Hu10JSAC] and multi-hop CR networks [@Hu10TW]. Preliminary results on video over femtocell CR networks were presented in [@Hu11IDS]. Multicast in Femtocell Networks with Superposition Coding and Successive Interference Cancellation {#sec:femto_mcast_sic} ================================================================================================== In this section, we formulate a Mixed Integer Nonlinear Programming (MINLP) problem of data multicast in femtocell networks, which is NP-hard in general. Then we reformulate the MINLP problem into a simpler form, and derive upper and lower performance bounds. We also derive a simple heuristic scheme that assigns users to the BS’s with a greedy approach. Finally, we consider three typical connection scenarios in the femtocell network, and develop optimal and near-optimal algorithms for the three scenarios. The proposed algorithms have low computational complexity, and are shown to outperform the heuristic scheme with considerable gains. System Model and Problem Statement \[sec:mod3\] ----------------------------------------------- ### System Model Consider a femtocell network with an MBS (indexed $0$) and $M$ FBS’s (indexed from $1$ to $M$) deployed in the area. The $M$ FBS’s are connected to the MBS and the Internet via broadband wireline connections. Furthermore, we assume a spectrum band that is divided into two parts: one allocated to the MBS with bandwidth $B_0$, and the other allocated to the $M$ FBS’s. The bandwidth allocated to FBS $m$ is denoted by $B_m$. When there is no overlap between the coverages of two FBS’s, they can spatially reuse the same spectrum. Otherwise, the MBS allocates disjoint spectrum to the FBS’s with overlapping coverages. We assume the spectrum allocation is known a priori. There are $K$ mobile users in the femtocell network. Each user is equipped with one transceiver that can be tuned to one of the two available channels, i.e., connecting to a nearby FBS or to the MBS. The network is time slotted. We assume block-fading channels, where the channel condition is constant in each time slot [@Goldsmith06]. We focus on a multicast scenario, where the MBS and FBS’s multicast a data file to the $K$ users. The data file is divided into multiple packets of equal length, which are transmitted in sequence with the same modulation scheme. Once packet $l$ is successfully received and decoded at a user, the user requests packet $(l+1)$ in the next time slot. We adopt SC and SIC to transmit these packets [@Goldsmith06], as illustrated in Fig. \[fig:sic\]. In each time slot $t$, the compound signal has $L$ [*layers*]{} (or, levels), denoted as $D_1(t)$, $\cdots$, $D_L(t)$. Each level $D_i(t)$, $i=1, \cdots, L$, is a packet requested by some of the users in time slot $t$. A user that has successfully decoded $D_i(t)$, for all $i=1$, $\cdots$, $l-1$, is able to subtract these signals from the received compound signal and then decode $D_l(t)$, while the signals from $D_{l+1}(t)$ to $D_L(t)$ are treated as noise. ### Problem Statement For the SC and SIC scheme to work, the transmit powers for the levels should be carefully determined, such that there is a sufficiently high SNR for the levels to be decodable. It is also important to control the transmit powers of the BS’s to reduce interference and leverage frequency reuse. The annual power bill is a large part of a mobile operator’s costs [@Ulf10]. 
Minimizing BS power consumption is important to reduce not only the operator’s OPEX, but also the global CO$_2$ emission; an important step towards “green” communications. ![Superposition coding and successive interference cancellation.[]{data-label="fig:sic"}](superposition-coding.eps){width="4.5in"} Therefore, we focus on BS power allocation in this paper. The objective is to minimize the total power of all the BS’s, while guaranteeing a target rate $R_{tar}$ for each user. Recall that the data file is partitioned into equal-length packets. The target rate $R_{tar}$ ensures that a packet can be transmitted within a time slot, for given modulation and channel coding schemes. Define binary indicator $I_m^k$, for all $m$ and $k$, as: $$\label{eq:imk} I_m^k = \left\{ \begin{array}{ll} 1, & \mbox{if user $k$ connects to BS $m$} \\ 0, & \mbox{otherwise.} \end{array} \right.$$ Consider a general time slot $t$ when $L$ data packets, or levels, are requested. We formulate the optimal power allocation problem (termed OPT-Power) as follows. $$\begin{aligned} \mbox{minimize:} && \hspace{-0.2in} \sum_{m=0}^M \sum_{l=1}^L P_l^m \label{eq:ObjFun11} \\ \mbox{subject to:} && \hspace{-0.2in} B_m\log_2(1+\gamma_m^k I_m^k) \ge R_{tar}I_m^k, \mbox{ for all } k \label{eq:cntrate} \\ && \hspace{-0.2in} \sum_{m=0}^M I_m^k=1, \mbox{for all } k \label{eq:cnttransceiver} \\ && \hspace{-0.2in} P_l^m \ge 0, \mbox{ for all } l, m,\end{aligned}$$ where $P_l^m$ is the power of BS $m$ for transmitting the level $l$ packet; $\gamma_m^k$ is the SNR at user $k$ if it connects to BS $m$. Constraint (\[eq:cntrate\]) guarantees the minimum rate at each user. Constraint (\[eq:cnttransceiver\]) is due to the fact that each user is equipped with one transceiver, so it can only connect to one BS. Let $\mathcal{U}_l$ denote the set of users requesting the level $l$ packet. A user $k \in \mathcal{U}_l$ has decoded all the packets up to $D_{l-1}$. It subtracts the decoded signals from the received signal and treats signals $D_{l+1}, \cdots, D_L$ as noise. The SNR at user $k \in \mathcal{U}_l$, for $l=1, \cdots, L-1$, can be written as: $$\begin{aligned} \label{eq:SNR1} %\gamma_m^k=\frac{H_m^kP_l^m}{N_0+H_m^k\sum_{i=l+1}^LP_i^m} \gamma_m^k = H_m^k P_l^m / \left(N_0 + H_m^k \sum_{i=l+1}^L P_i^m \right),\end{aligned}$$ where $H_m^k$ is the random channel gain from BS $m$ to user $k$ and $N_0$ is the noise power. For user $k \in \mathcal{U}_L$ that requests the last packet $D_L$, the SNR is $$\begin{aligned} \label{eq:SNR2} %\gamma_m^k=\frac{H_m^kP_L^m}{N_0} \gamma_m^k = H_m^k P_L^m / N_0.\end{aligned}$$ The optimization variables in Problem OPT-Power consist of the binary variables $I_m^k$’s and the continuous variables $P_l^m$’s. It is an MINLP problem, which is NP-hard in general. In Section \[sec:alg\], we first reformulate the problem to a obtain a simpler form, and then develop effective algorithms for optimal and suboptimal solutions. Reformulation and Power Allocation \[sec:alg\] ---------------------------------------------- In this section, we reformulate Problem OPT-Power to obtain a simpler form, and derive an upper bound and a lower bound for the total BS power. The reformulation also leads to a simple heuristic algorithm. Finally, we introduce power allocation algorithms for three connection scenarios. 
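Before reformulating the problem, the structure of constraints (\[eq:cntrate\])–(\[eq:SNR2\]) can be illustrated with a small numerical feasibility check. The sketch below is illustrative only: the noise power, bandwidths, channel gains, and power values are made up, and the function names are ours.

```python
import numpy as np

def sic_snr(H_mk, P_m, l, N0=1e-3):
    """SNR of a user decoding level l from BS m under SIC, cf. (eq:SNR1)-(eq:SNR2):
    levels 1..l-1 have already been cancelled, levels l+1..L are treated as noise."""
    interference = H_mk * np.sum(P_m[l:])        # P_m[l:] are the levels above l (0-based array)
    return H_mk * P_m[l - 1] / (N0 + interference)

def rate_ok(B_m, H_mk, P_m, l, R_tar):
    """Per-user rate constraint (eq:cntrate) for a user assigned to BS m at level l."""
    return B_m * np.log2(1.0 + sic_snr(H_mk, P_m, l)) >= R_tar

# toy example with one BS, L = 3 levels and two users (all numbers are made up)
P_m = np.array([0.9, 0.4, 0.1])                  # P_1^m, P_2^m, P_3^m
print(rate_ok(B_m=1e6, H_mk=2e-3, P_m=P_m, l=1, R_tar=5e5),   # user requesting level 1
      rate_ok(B_m=1e6, H_mk=1e-3, P_m=P_m, l=3, R_tar=5e5))   # user requesting level 3
```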
### Problem Reformulation Due to the monotonic logarithm functions and the binary indicators $I_m^k$, constraint (\[eq:cntrate\]) can be rewritten as: $$\label{eq:StSNR} \gamma_m^k I_m^k \ge \Gamma_m^k I_m^k, \;\; m=0, 1, \cdots,M,$$ where $\Gamma_m^k = \Gamma_m =: 2^{R_{tar}/B_m} - 1$ is the minimum SNR requirement at user $k$ that connects to BS $m$. To further simplify the problem, define $Q_l^m = \sum_{i=l}^L P_i^m$, with $Q_{L+1}^m=0$. Then power $P_l^m$ is the difference $$\begin{aligned} \label{eq:QrepP} P_l^m=Q_l^m-Q_{l+1}^m.\end{aligned}$$ Problem OPT-Power can be reformulated as: $$\begin{aligned} \mbox{minimize} && \hspace{-0.2in} \sum_{m=0}^M Q_1^m \label{eq:ObjFun2} \\ %\mbox{subject to:} && \hspace{-0.2in} \frac{H_m^k(Q_l^m-Q_{l+1}^m)}{N_0+H_m^kQ_{l+1}^m}I_m^k\ge \Gamma_m I_m^k, \nonumber \\ \mbox{subject to:} && \hspace{-0.2in} H_m^k(Q_l^m \hspace{-0.025in} - \hspace{-0.025in} Q_{l+1}^m) / \hspace{-0.025in} \left( N_0 \hspace{-0.025in} + \hspace{-0.025in} H_m^k Q_{l+1}^m \right) I_m^k \ge \Gamma_m I_m^k, \nonumber \\ && \hspace{0.7in} \mbox{ for all } k \in \mathcal{U}_l, l=1, \cdots, L \label{eq:cntrate2} \\ && \hspace{-0.2in} Q_l^m \ge Q_{l+1}^m, l=1, \cdots, L \\ && \hspace{-0.2in} \sum_{m=0}^M I_m^k=1, \mbox{ for all } k. \end{aligned}$$ For $l\le L$, constraint (\[eq:cntrate2\]) can be rewritten as: $$\begin{aligned} \label{eq:QmlIneq} Q_l^m I_m^k \ge \left[ N_0 \Gamma_m / H_m^k + (1 + \Gamma_m) Q_{l+1}^m \right] I_m^k.\end{aligned}$$ Let $\mathcal{U}_l^m$ be the subset of users connecting to BS $m$ in $\mathcal{U}_l$. Since $Q_l^m \ge Q_{l+1}^m$, (\[eq:QmlIneq\]) can be rewritten as, $$\label{eq:QmlEqu} Q_l^m = \max \left\{ Q_{l+1}^m, \max_{k \in \mathcal{U}_l^m} \left[ N_0 \Gamma_m / H_m^k + (1 + \Gamma_m) Q_{l+1}^m \right] \right\}. % Q_l^m = \max \left\{ Q_{l+1}^m, \max_{k \in \mathcal{U}_l^m} \left[ N_0 \frac{\Gamma_m}{H_m^k} + (1 + \Gamma_m) Q_{l+1}^m \right] \right\}.$$ From (\[eq:QmlEqu\]), we define a function $Q_l^m = F_m(Q_{l+1}^m,\mathcal{U}_l^m)$ as: $$\begin{aligned} \label{eq:FmDef} %&& \hspace{-0.1in} F_m(Q_{l+1}^m,\mathcal{U}_l^m) %\label{eq:FmDef} \\ %&=&\hspace{-0.1in} = \left\{\begin{array}{l l} Q_{l+1}^m,& \mathcal{U}_l^m=\emptyset \\ \max_{k\in\mathcal{U}_l^m} \left\{ \frac{N_0\Gamma_m}{H_m^k}+(1+\Gamma_m)Q_{l+1}^m \right\}, & \hspace{-0.05in} \mathcal{U}_l^m \neq \emptyset. \end{array}\right. %\nonumber\end{aligned}$$ Obviously, $F_m(Q_{l+1}^m,\mathcal{U}_l^m)$ is non-decreasing with respect to $Q_{l+1}^m$. It follows that $$\begin{aligned} \label{eq:ObjFun3} Q_1^m &=& F_m(Q_2^m, \mathcal{U}_1^m) \; = \; F_m(F_m(Q_3^m,\mathcal{U}_2^m),\mathcal{U}_1^m) \nonumber \\ &=& F_m(\cdots (F_m(Q_{L+1}^m, \mathcal{U}_L^m), \mathcal{U}_{L-1}^m), \cdots, \mathcal{U}_1^m) \nonumber \\ &=& F_m(\cdots (F_m(0, \mathcal{U}_L^m), \mathcal{U}_{L-1}^m), \cdots, \mathcal{U}_1^m).\end{aligned}$$ If none of the subsets $\mathcal{U}_l^m$ ($l=1,\cdots,L$) is empty, we can expand the above recursive term using (\[eq:FmDef\]). It follows that $$\label{eq:FoldTerm} Q_1^m = N_0 \Gamma_m \sum_{l=1}^L (1 + \Gamma_m)^{c_l^m} \max_{k \in \mathcal{U}_l^m} \left\{1 / H_m^k \right\},$$ where the exponent $c_l^m$ is defined as $c_1^m=0$ and $c_{l+1}^m=c_l^m+1$. Otherwise, if a subset $\mathcal{U}_l^m = \emptyset$ for some $m$, we have that $Q_l^m=Q_{l+1}^m$, $\max_{k\in\mathcal{U}_l^m} \left\{1/H_m^k\right\}=\max_{k\in \emptyset} \left\{1/H_m^k\right\}=0$, and $c_l^m=c_{l-1}^m$. Eq. (\[eq:FoldTerm\]) still holds true. 
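As a quick numerical check of (\[eq:FoldTerm\]), the short sketch below runs the recursion (\[eq:ObjFun3\]) with the function $F_m$ in (\[eq:FmDef\]) and compares it with the closed form, where the exponent for level $l$ counts the non-empty user subsets below level $l$. The channel gains and the SNR target are made up for illustration only.

```python
# Sketch: verify the folded expression (eq:FoldTerm) against the recursion (eq:ObjFun3).
# Channel gains and the SNR target below are made-up illustration values.
N0, Gamma = 1e-13, 3.0                    # noise power and SNR requirement Gamma_m, assumed
# U[l-1] holds the values 1/H_m^k for the users in U_l^m; an empty list means U_l^m is empty
U = [[0.8e10, 1.2e10], [], [0.9e10]]      # levels l = 1, 2, 3

def F(Q_next, inv_gains):
    """One step of the recursion (eq:FmDef)."""
    if not inv_gains:
        return Q_next
    return max(N0 * Gamma * g + (1 + Gamma) * Q_next for g in inv_gains)

# Backward recursion (eq:ObjFun3), starting from Q_{L+1}^m = 0
Q1 = 0.0
for inv_gains in reversed(U):
    Q1 = F(Q1, inv_gains)

# Closed form (eq:FoldTerm): the exponent grows only after a non-empty level
closed, c = 0.0, 0
for inv_gains in U:
    closed += N0 * Gamma * (1 + Gamma) ** c * (max(inv_gains) if inv_gains else 0.0)
    if inv_gains:
        c += 1

print(Q1, closed)   # both evaluate to the same total power Q_1^m
```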
Finally, the objective function (\[eq:ObjFun2\]) can be rewritten as $$% \mbox{$\sum_{m=0}^M$} Q_1^m = \mbox{$\sum_{m=0}^M$} N_0 \Gamma_m \mbox{$\sum_{l=1}^L$} (1 + \Gamma_m)^{c_l^m} \max_{k \in \mathcal{U}_l^m} \left\{1 / H_m^k \right\}. \sum_{m=0}^M N_0 \Gamma_m \sum_{l=1}^L (1 + \Gamma_m)^{c_l^m} \max_{k \in \mathcal{U}_l^m} \left\{1 / H_m^k \right\}.$$ Since $(1+\Gamma_m)>0$, it can be seen that to minimize the total BS power, we need to keep the $c_l^m$’s as low as possible. ### Performance Bounds \[subsec:bounds\] The reformulation and simplification allow us to derive performance bounds for the total BS power consumption. First, we derive the upper bound for the objective function (\[eq:ObjFun2\]). Define a variable $$% \overline{G}_m = \max_{l\in\{1,\cdots,L\}} \left\{ \max_{k\in\mathcal{U}_l^m} \left\{\Gamma_m/H_m^k\right\} \right\}. \overline{G}_m = \max_{l\in\{1,\cdots,L\}} \max_{k\in\mathcal{U}_l^m} \left\{\Gamma_m/H_m^k\right\},$$ which corresponds to the user with the worst channel condition among all users that connect to BS $m$. It follows that: $$\begin{aligned} \sum_{m=0}^M Q_1^m &\hspace{-0.1in} =& \hspace{-0.1in} N_0 \sum_{m=0}^M \sum_{l=1}^L (1+\Gamma_m)^{c_l^m}\max_{k\in\mathcal{U}_l^m}\left\{\Gamma_m / H_m^k\right\} \nonumber \\ &\hspace{-0.1in} \le& \hspace{-0.1in} N_0 \sum_{m=0}^M \sum_{l=1}^L (1+\Gamma_m)^{c_l^m} \overline{G}_m \nonumber \\ &\hspace{-0.1in} \le& \hspace{-0.1in} N_0 \sum_{m=0}^M \overline{G}_m \sum_{l=1}^L (1+\Gamma_m)^{l-1} \nonumber \\ &\hspace{-0.1in} =& \hspace{-0.1in} N_0 \sum_{m=0}^M \overline{G}_m \left[ (1+\Gamma_m)^L-1 \right] / \Gamma_m. \label{eq:UBound}\end{aligned}$$ In (\[eq:UBound\]), the first inequality is from the definition of $\overline{G}_m$. The second inequality is from the definition of $c_{l+1}^m$. Specifically, $c_1^m=0$; when $\mathcal{U}_l^m \neq \emptyset$, we have $c_{l}^m=c_{l-1}^m+1$; when $\mathcal{U}_l^m = \emptyset$, we have $c_{l}^m=c_{l-1}^m$. It follows that $c_l^m\le l-1$. Therefore, (\[eq:UBound\]) is an upper bound on the objective function (\[eq:ObjFun2\]). Furthermore, by defining $\overline{G} = \max_{m\in\{0,\cdots,M\}} \left\{ \overline{G}_m \right\}$, and $\overline{\Gamma} = \max_{m\in\{0,\cdots,M\}} \left\{ \Gamma_m \right\}$, we can get a looser upper bound from \[eq:UBound\] as $$\sum_{m=0}^M Q_1^m \leq N_0 \overline{G} (M+1)\left[ (1+\overline{\Gamma})^L-1\right]/\overline{\Gamma}.$$ Next, we derive a lower bound for (\[eq:ObjFun2\]). Define $$\left\{ \begin{array}{l} \underline{G}^l = \min_{m\in\{0,\cdots,M\}} \max_{k\in\mathcal{U}_l^m} \left\{ \Gamma_m/ H_m^k \right\} \\ \underline{\Gamma} = \min_{m\in\{0,\cdots,M\}} \left\{ \Gamma_m \right\}. \end{array} \right.$$ We have that $$\begin{aligned} \sum_{m=0}^M Q_1^m &\hspace{-0.1in} =& \hspace{-0.1in} N_0 \sum_{m=0}^M \sum_{l=1}^L (1+\Gamma_m)^{c_l^m} \max_{k\in\mathcal{U}_l^m} \left\{ \Gamma_m / H_m^k \right\} \nonumber \\ &\hspace{-0.1in} \ge& \hspace{-0.1in} N_0 \sum_{m=0}^M \sum_{l=1}^L (1+\Gamma_m)^{c_l^m} \underline{G}^l \nonumber \\ &\hspace{-0.1in} \ge& \hspace{-0.1in} N_0 \sum_{l=1}^L \underline{G}^l \sum_{m=0}^M (1+\underline{\Gamma})^{c_l^m} \nonumber \\ &\hspace{-0.1in} \ge& \hspace{-0.1in} N_0 (M+1) \sum_{l=1}^L \underline{G}^l (1+\underline{\Gamma})^{\frac{\sum_{m=0}^Mc_l^m}{M+1}}\nonumber \\ &\hspace{-0.1in} \ge& \hspace{-0.1in} N_0 (M+1) \sum_{l=1}^L \underline{G}^l(1+\underline{\Gamma})^{\frac{l-1}{M+1}}. \label{eq:LBound}\end{aligned}$$ In (\[eq:LBound\]), the first inequality is from the definition of $\underline{G}^l$. 
The second inequality is due to the definition of $\underline{\Gamma}$. The third inequality is due to the fact that $(1+\Gamma)^{c_l^m}$ is a convex function. The fourth inequality is because each level must be transmitted by at least one BS. Thus for each level $l$, there is at least one $c_l^m=c_{l-1}^m+1$ for some $m$. It follows that the sum $\sum_{m=0}^M c_l^m$ is at least $l-1$. Therefore, (\[eq:LBound\]) provides a lower bound for (\[eq:ObjFun2\]). Furthermore, by defining $\underline{G} = \min_{l\in\{1,\cdots,L\}} \left\{ \underline{G}^l \right\}$, we can obtain a looser lower bound from (\[eq:LBound\]) as $$\sum_{m=0}^M Q_1^m \geq N_0 \underline{G} (M+1) \frac{(1+\underline{\Gamma})^{\frac{L}{M+1}}-1}{(1+\underline{\Gamma})^{\frac{1}{M+1}}-1}.$$

### A Simple Heuristic Scheme \[subsec:heuristic\]

We first describe a greedy heuristic algorithm that solves OPT-Power with suboptimal solutions. With this heuristic, each user compares the channel gains from the MBS and the FBS’s. It chooses the BS with the best channel condition to connect to; thus the values of the binary variables $I_m^k$ are determined. Once the binary variables are fixed, all the subsets $\mathcal{U}_l^m$’s are determined. Starting with $Q_{L+1}^m=0$, we can apply (\[eq:QmlEqu\]) iteratively to find the $Q_l^m$’s. Finally, the transmit powers $P_l^m$ can be computed using (\[eq:QrepP\]). With this approach, among the users requesting the level $l$ packet, it is likely that some of them connect to the MBS and the rest connect to some FBS’s, due to the random channel gains in each time slot. In this situation, both the MBS and the FBS have to transmit all the requested data packets. Such a situation is not optimal for minimizing the total power, as will be discussed in Section \[subsubsec:caseII\].

### Power Allocation Algorithms \[subsec:proposed\]

In the following, we develop three power allocation algorithms for three different connection scenarios with a more structured approach.

#### Case I–One Base Station

We first consider the simplest connection scenario, where all the $K$ users connect to the same BS (i.e., either the MBS or an FBS). Assume all the users connect to BS $m$. Then we have $I_m^k=1$ for all $k$, and all the subsets $\mathcal{U}_l^m$ are non-empty; $I_{m'}^k=0$ for all $k$ and all $m' \neq m$, and all the subsets $\mathcal{U}_l^{m'}$ are empty for $m' \neq m$. From (\[eq:FmDef\]), we can derive the optimal solution as: $$\begin{aligned}
\label{eq:OptCase1}
Q_l^{m\ast} &=& (1+\Gamma_m) Q_{l+1}^{m\ast} + \max_{k \in \mathcal{U}_l^m} \left\{ N_0\Gamma_m / H_m^k \right\} \nonumber \\
&=& N_0 \Gamma_m \sum_{i=l}^L (1+\Gamma_m)^{i-l} \max_{k \in \mathcal{U}_i^m} \left\{1/H_m^k\right\}, \;\; l=1,2,\cdots,L.\end{aligned}$$ Recalling that $Q_{L+1}^{m\ast} = Q_{L+1}^{m} = 0$, the optimal power allocation for Problem OPT-Power in this case is: $$P_l^{m'\ast}=\left\{ \begin{array}{ll} Q_l^{m\ast}-Q_{l+1}^{m\ast}, & m'=m, \mbox{ for all } l \\ 0, & m' \neq m, \mbox{ for all } l. \end{array} \right.$$

#### Case II–MBS and One FBS \[subsubsec:caseII\]

We next consider the case with one MBS and one FBS (i.e., $M=1$), where each user has two choices: connecting to either the FBS or the MBS. Recall that $\mathcal{U}_l^0$ and $\mathcal{U}_l^1$ are the subsets of users who connect to the MBS and the FBS, respectively, and who request the level $l$ packet.
Examining (\[eq:FoldTerm\]), we find that the total power of BS $m$ can be significantly reduced if one or more levels are not transmitted, since the exponent $c_l^m$ will not be increased in this case. Furthermore, consider the two choices: (i) not transmitting level $l$, and (ii) not transmitting level $l'>l$ from BS $m$. The first choice will yield larger power savings, since more exponents (i.e., $c_l^m, c_{l+1}^m, \cdots, c_{l'-1}^m$) will assume smaller values. Therefore, we should leave one of the two subsets empty whenever possible, i.e., either $\mathcal{U}_l^0=\emptyset$ or $\mathcal{U}_l^1=\emptyset$. According to this policy, all the users requesting the level $l$ packet will connect to the same BS. We only need to make the optimal connection decision for each subset of users requesting the same level of packet, rather than for each individual user. Since not transmitting a lower level packet yields more power savings for a BS, we calculate the power from the lowest to the highest level, and decide for each level whether its users connect to the MBS or the FBS. Define $G_l^0 = \max_{k\in\mathcal{U}_l} \left\{ 1/H_0^k \right\}$ and $G_l^1 = \max_{k\in\mathcal{U}_l} \left\{ 1/H_1^k \right\}$. The algorithm for solving Problem OPT-Power in this case is given in Table \[tab:Case2Algo\]. In Steps $2$–$10$, the decision on whether to connect to the MBS or the FBS is made by comparing the expected increments in the total power. The user subsets $\mathcal{U}_l^0$ and $\mathcal{U}_l^1$ are determined in Steps $4$ and $7$. In Steps $11$–$14$, the $Q_l^m$’s and the corresponding $P_l^m$’s are computed in the reverse order, based on the determined subsets $\mathcal{U}_l^0$ and $\mathcal{U}_l^1$. The computational complexity of this algorithm is $\mathcal{O}(L)$.

----- ---------------------------------------------------------------------------------------
1:    Initialize all $c_l^0$, $c_l^1$, $Q_{L+1}^0$ and $Q_{L+1}^1$ to zero;
2:    FOR $l=1$ TO $L$
3:    $\;\;$ IF ($\Gamma_0(1+\Gamma_0)^{c_l^0}G_l^0 \le \Gamma_1(1+\Gamma_1)^{c_l^1}G_l^1$)
4:    $\;\;\;\;$ Set $\mathcal{U}_l^0=\mathcal{U}_l$ and $\mathcal{U}_l^1=\emptyset$;
5:    $\;\;\;\;$ $c_l^0=c_l^0+1$;
6:    $\;\;$ ELSE
7:    $\;\;\;\;$ Set $\mathcal{U}_l^0=\emptyset$ and $\mathcal{U}_l^1=\mathcal{U}_l$;
8:    $\;\;\;\;$ $c_l^1=c_l^1+1$;
9:    $\;\;$ END IF
10:   END FOR
11:   FOR $l=L$ TO $1$
12:   $\;\;$ $Q_l^0=F_0(Q_{l+1}^0,\mathcal{U}_l^0)$ and $P_l^0=Q_l^0-Q_{l+1}^0$;
13:   $\;\;$ $Q_l^1=F_1(Q_{l+1}^1,\mathcal{U}_l^1)$ and $P_l^1=Q_l^1-Q_{l+1}^1$;
14:   END FOR
----- ---------------------------------------------------------------------------------------

: Power Allocation Algorithm For Case II \[tab:Case2Algo\]

#### Case III–MBS and Multiple FBS’s \[subsubsec:case3\]

Finally, we consider the general case with one MBS and multiple FBS’s in the network. Each user is able to connect to the MBS or a nearby FBS. Recall that we define $\mathcal{U}_l$ as the set of users requesting the level $l$ packet, and $\mathcal{U}_l^m$ as the subset of users in $\mathcal{U}_l$ that [*connect*]{} to BS $m$. These sets have the following properties. $$\label{eq:SetProp1} \left\{ \begin{array}{l} \bigcup_{m=0}^M \mathcal{U}_l^m = \mathcal{U}_l \nonumber \\ \mathcal{U}_l^m \bigcap \mathcal{U}_l^{m'} = \emptyset, \; \mbox{ for all } m' \neq m. \nonumber \end{array} \right.$$ The first property is due to the fact that each user must connect to the MBS or an FBS. The second property is because each user can connect to only one BS. The user subsets connecting to different BS’s do not overlap.
Therefore, $\mathcal{U}_l^m$’s is a [*partition*]{} of $\mathcal{U}_l$ with respect to $m$. In addition, we define $\mathcal{S}_l^m$ as the set of possible users that are [*covered*]{} by BS $m$ and request the level $l$ packet. These sets have the following properties. $$\label{eq:SetProp2} \left\{ \begin{array}{l} \bigcup_{m=1}^M \mathcal{S}_l^m = \mathcal{S}_l^0=\mathcal{U}_l \nonumber \\ \mathcal{S}_l^m \bigcap \mathcal{S}_l^0 = \mathcal{S}_l^m, \; \mbox{ for all } m \neq 0 \nonumber \\ \mathcal{S}_l^m \bigcap \mathcal{S}_l^{m'} = \emptyset, \; \mbox{ for all } m' \neq m \mbox{ and } m, m'\neq 0. \nonumber \end{array} \right.$$ The first property is because all users in each femtocell are covered by the MBS. The second property indicates that the users covered by FBS $m$ are a subset of the users covered by the MBS. The third property shows that the user subsets in different femtocells do not overlap. We can see that the $\mathcal{S}_l^m$’s, for $m=1,\cdots,M$, are also a partition of $\mathcal{U}_l$. Define $W_m(\mathcal{U})=\max_{k\in\mathcal{U}}\left\{ 1/H_m^k \right\}$, where $\mathcal{U}$ is the set of users and $m=0,\cdots,M$. If the set $\mathcal{U}$ is empty, we define $W_m(\emptyset)=0$. For example, consider Case II where $M=1$. We have $\mathcal{S}_l^0 = \mathcal{S}_l^1 = \mathcal{U}_l$, $W_0(\mathcal{U}_l)=G_l^0$, and $W_1(\mathcal{U}_l)=G_l^1$. The power allocation algorithm for Case III is presented in Table \[tab:Case3Algo\]. The algorithm iteratively picks users from the [*eligible*]{} subset $\mathcal{S}_l^m$ and assigns them to the [*allocated*]{} subset $\mathcal{U}_l^m$. In each step $l$, $\Psi$ is the subset of FBS’s that will transmit the level $l$ packet; the complementary set $\overline{\Psi}$ is the subset of FBS’s that will not transmit the level $l$ packet. The expected increment in total power for each partition is computed, and the partition with the smallest expected increment will be chosen. $\Delta_l^m$ is the power of BS $m$ for transmitting the level $l$ data packet. In Steps $6$–$15$, the MBS and FBS combination $\Psi$ is determined for transmitting the level $l$ packet, with the lowest power $\Delta_0$. In Steps $16$–$30$, elements in $\mathcal{S}_l^m$ are assigned to $\mathcal{U}_l^m$ according to $\Psi$. In Steps $31$–$35$, power sums $Q_l^m$ and the corresponding power allocations $P_l^m$ are calculated in the reverse order from the known $\mathcal{U}_l^m$’s. The complexity of the algorithm is $\mathcal{O}(ML)$. 
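To illustrate the per-level decision made in Steps $6$–$15$, the sketch below greedily grows the FBS set $\Psi$ for one level by comparing the expected power increments $\Gamma_m(1+\Gamma_m)^{c_l^m} W_m(\cdot)$. The coverage sets, channel gains, and SNR targets are made-up values, and the code is a simplified stand-in for, not a transcription of, Table \[tab:Case3Algo\].

```python
# Sketch of one level's BS selection in Case III (Steps 6-15 of Table tab:Case3Algo).
# Coverage sets S_l^m, inverse gains 1/H_m^k, and SNR targets are illustrative assumptions.
N0 = 1e-13
Gamma = {0: 3.0, 1: 3.0, 2: 3.0}       # SNR requirements per BS (0 is the MBS), assumed
c = {0: 0, 1: 0, 2: 0}                 # exponents c_l^m carried over from the lower levels
S = {1: {'u1', 'u2'}, 2: {'u3'}}       # S_l^m: users covered by FBS m that request this level
S[0] = S[1] | S[2]                     # the MBS covers every user
inv_H = {(0, 'u1'): 2.0e10, (0, 'u2'): 1.5e10, (0, 'u3'): 2.5e10,
         (1, 'u1'): 0.4e10, (1, 'u2'): 0.5e10, (2, 'u3'): 0.3e10}  # 1/H_m^k, assumed

def W(m, users):                       # W_m(U) = max_{k in U} 1/H_m^k, and 0 for an empty set
    return max((inv_H[(m, k)] for k in users), default=0.0)

def delta(m, users):                   # expected power increment of BS m for this level
    return N0 * Gamma[m] * (1 + Gamma[m]) ** c[m] * W(m, users)

fbss = set(S) - {0}
best_cost, psi = delta(0, S[0]), set()           # Psi empty: the MBS serves everyone
for m in sorted(fbss, key=lambda i: delta(i, S[i])):
    candidate = psi | {m}
    rest = set().union(*(S[i] for i in fbss - candidate))  # users left to the MBS
    cost = sum(delta(i, S[i]) for i in candidate) + delta(0, rest)
    if cost < best_cost:
        best_cost, psi = cost, candidate
print("FBSs transmitting this level:", psi, "expected power increment:", best_cost)
```

Once $\Psi$ is fixed for a level, the corresponding exponents are incremented and the user subsets are assigned, as in Steps $16$–$30$, before moving to the next level.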
----- ------------------------------------------------------------------------------------------------------------------------ 1: Initialize: $c_l^m=0$ and $Q_{L+1}^m=0$, for all $l$, $m$; 2: FOR $l=1$ TO $L$ 3: $\;\;\;$ FOR $m=0$ TO $M$ 4: $\;\;\;\;\;\;$ $\Delta_l^m=\Gamma_m(1+\Gamma_m)^{c_l^m}W_m(\mathcal{S}_l^m)$; 5: $\;\;\;$ END FOR 6: $\;\;\;$ Set $\Omega=\{1,\cdots,M\}$ and $\Psi=\emptyset$; 7: $\;\;\;$ WHILE ($\Omega \neq \emptyset$) 8: $\;\;\;\;\;\;$ $m'=\arg\min_{m\in\Omega} \Delta_l^m$; 9: $\;\;\;\;\;\;$ Compute $\Delta'=\Gamma_0(1+\Gamma_0)^{c_l^0}W_0(\bigcup_{m\in\overline{\Psi\cup m'}}\mathcal{S}_l^m)$; 10: $\;\;\;\;\;\;$ IF ($(\sum_{m\in\Psi\cup m'}\Delta_l^m + \Delta') < \Delta_0$) 11: $\;\;\;\;\;\;\;\;\;$ Add $m'$ to $\Psi$; 12: $\;\;\;\;\;\;\;\;\;$ $\Delta_0=\sum_{m\in\Psi}\Delta_l^m + \Delta'$; 13: $\;\;\;\;\;\;$ END IF 14: $\;\;\;\;\;\;$ Remove $m'$ from $\Omega$; 15: $\;\;\;$ END WHILE 16: $\;\;\;$ IF ($\Psi = \emptyset$) 17: $\;\;\;\;\;\;$ $\mathcal{U}_l^0=\mathcal{S}_l^0$; 18: $\;\;\;\;\;\;$ $c_l^0=c_l^0+1$; 19: $\;\;\;\;\;\;$ Set $\mathcal{U}_l^m = \emptyset$, for all $m \neq 0$; 20: $\;\;\;$ ELSE 21: $\;\;\;\;\;\;$ $\mathcal{U}_l^0=\bigcup_{m\in\overline{\Psi}}\mathcal{S}_l^m$; 22: $\;\;\;\;\;\;$ IF ($|\Psi|<M$) 23: $\;\;\;\;\;\;\;\;\;$ $c_l^0=c_l^0+1$; 24: $\;\;\;\;\;\;$ END IF 25: $\;\;\;\;\;\;$ FOR $m \in \Psi$ 26: $\;\;\;\;\;\;\;\;\;$ $c_l^m=c_l^m+1$; 27: $\;\;\;\;\;\;\;\;\;$ $\mathcal{U}_l^m=\mathcal{S}_l^m$; 28: $\;\;\;\;\;\;$ END FOR 29: $\;\;\;$ END IF 30: END FOR 31: FOR $l=L$ TO $1$ 32: $\;\;\;$ FOR $m=0$ TO $M$ 33: $\;\;\;\;\;\;$ $Q_l^m=F_m(Q_{l+1}^m,\mathcal{U}_l^m)$ and $P_l^m=Q_l^m-Q_{l+1}^m$; 34: $\;\;\;$ END FOR 35: END FOR ----- ------------------------------------------------------------------------------------------------------------------------ : Power Allocation Algorithm For Case III \[tab:Case3Algo\] Performance Evaluation \[sec:sim3\] ----------------------------------- We evaluate the performance of the proposed power allocation algorithms using MATLAB^TM^. Three scenarios corresponding to the three cases in Section \[sec:alg\] are simulated: (i) Case I: a single MBS; (ii) Case II: one MBS and one FBS; and (iii) Case III: one MBS and three FBS’s. Since we do not find any similar schemes in the literature, we made the following comparisons. First, we compare Cases I and II with respect to BS power consumption and interference footprint. In both cases, there are $K=8$ users and $L=4$ levels. In Case I, the MBS bandwidth is $B_0=2$ MHz. In Case II, the MBS and the FBS share the $2$ MHz total bandwidth; the MBS bandwidth is $B_0=1$ MHz and the FBS bandwidth is $B_1=1$ MHz. The target data rate $R_{tar}$ is set to $2$ Mbps. The channel gain from a base station to each user is exponentially distributed in each time slot. The interference footprints in the three dimensional space are plotted in Fig. \[fig:Case1VSCase2\]. The height $B$ of the cylinders indicates the spectrum used by a BS, while the radius $r$ is proportional to the BS transmit power. In Case I when only the MBS is used, the total BS power is $45.71$ dBm and the volume of the cylinder is $\pi r^2 B = 18,841$ MHz m$^2$. In Case II when both the MBS and FBS are used, the total BS power is $34.58$ dBm and the total volume of the two cylinders is $2,378$ MHz m$^2$. Using an additional FBS achieves a $11.13$ dB power saving and the interference footprint is reduced to $12.62$% of that in Case I. 
This simple comparison clearly demonstrates the advantages of femtocells achieved by bringing BS’s closer to users.

![Case I vs. Case II: interference footprints.[]{data-label="fig:Case1VSCase2"}](PlotCase1VSCase2_br.eps "fig:"){width="4.5in" height="3.0in"}

We next consider the more general Case III, using a femtocell network of one MBS and three FBS’s. The MBS bandwidth is $B_0=1$ MHz and each FBS is assigned bandwidth $B_m=1$ MHz, $m=1, 2, 3$. The target data rate is still $2$ Mbps. In Figs. \[fig:Case3Level\] and \[fig:Case3BW0\], we plot four curves, each obtained with: (i) the heuristic scheme described in Section \[subsec:heuristic\]; (ii) the proposed algorithm presented in Section \[subsubsec:case3\]; (iii) the upper bound; and (iv) the lower bound derived in Section \[subsec:bounds\]. Each point in the figures is the average of $10$ simulation runs. The $95\%$ confidence intervals are plotted as error bars, which are all negligible. In Fig. \[fig:Case3Level\], we examine the impact of the number of packet levels $L$ on the total BS transmit power. We increase $L$ from $2$ to $6$, and plot the total power of the base stations. As expected, the more packet levels, the larger the BS power consumption. Both the proposed and heuristic curves lie in between the upper and lower bound curves. When $L$ is increased from $2$ to $6$, the power consumption of the heuristic scheme is increased by $12.22$ dB, while the power consumption of the proposed algorithm is increased by $9.94$ dB. The power savings achieved by the proposed algorithm over the heuristic scheme range from $3.92$ dB to $6.45$ dB.

![Case III: impact of number of levels $L$.[]{data-label="fig:Case3Level"}](PlotCase3Level_largefont.eps){width="4.5in" height="3.0in"}

In Fig. \[fig:Case3BW0\], we show the impact of the BS bandwidths. The number of levels is $L=4$. We fix the total bandwidth at $2$ MHz, which is shared by the MBS and FBS’s. We increase the MBS bandwidth from $0.4$ MHz to $1.6$ MHz in steps of $0.2$ MHz, while decreasing the bandwidth of the FBS’s from $1.6$ MHz to $0.4$ MHz. We find that the total power consumption increases as $B_0$ gets larger. This is due to the fact that as the FBS bandwidth gets smaller, the FBS’s have to spend more power to meet the minimum data rate requirement. The curve produced by the proposed algorithm has a smaller slope than that of the heuristic scheme: the overall increase in the total power of the proposed algorithm is $4.86$ dB, while that of the heuristic scheme is $20.84$ dB. This implies that the proposed scheme is not very sensitive to the bandwidth allocation between the MBS and FBS’s. The proposed algorithm also achieves considerable power savings over the heuristic scheme. When $B_0=1.6$ MHz, the total power of the proposed algorithm is $20.75$ dB lower than that of the heuristic scheme.

![Case III: impact of MBS bandwidth $B_0$.[]{data-label="fig:Case3BW0"}](PlotCase3BW0_largefont.eps){width="4.5in" height="3.0in"}

Video over CR Femtocell Networks {#sec:femto_cr_video}
================================

In this section, we investigate the problem of video streaming in femtocell cognitive radio (CR) networks and formulate a [*stochastic programming*]{} problem to examine three deployment scenarios. In the case of a single FBS, we apply [*dual decomposition*]{} to develop an optimum-achieving distributed algorithm, which is shown to be optimal also for the case of multiple non-interfering FBS’s.
In the case of multiple interfering FBS’s, we develop a greedy algorithm that can compute near-optimal solutions, and prove a closed-form lower bound for its performance based on an [*interference graph*]{} model. The proposed algorithms are evaluated with simulations, and are shown to outperform three alternative schemes with considerable gains. System Model and Preliminaries \[sec:mod4\] ------------------------------------------- ### Spectrum and Network Model We consider a spectrum consisting of $(M+1)$ channels, including one common, unlicensed channel (indexed as channel $0$) and $M$ licensed channels (indexed as channels $1$ to $M$). The $M$ licensed channels are allocated to a primary network, and the common channel is exclusively used by all CR users. We assume all the channels follow a synchronized time slot structure [@Zhao07a]. The capacity of each licensed channel is $B_1$ Mbps, while the capacity of the common channel is $B_0$ Mbps. The channel states evolve independently, while the occupancy of each licensed channel follows a two-state discrete-time Markov process. The femtocell CR network is illustrated in Fig. \[fig:netmod2\]. There is an MBS and $N$ FBS’s deployed in the area to serve CR users. The $N$ FBS’s are connected to the MBS (and the Internet) via broadband wireline connections. Due to advances in antenna technology, it is possible to equip multiple antennas at the base stations. The MBS has one antenna that is always tuned to the common channel. Each FBS is equipped with multiple antennas (e.g., $M$) and is able to sense multiple licensed channels at the beginning of each time slot. There are $K_i$ CR users in femtocell $i$, $i=1,2,\cdots,N$, and $\sum_{i=1}^N K_i = K$. Each CR user has a software radio transceiver, which can be tuned to any of the $M$+1 channels. A CR user will either connect to a nearby FBS using one or more of the licensed channels or to the MBS via the common channel. ![A femtocell CR network with one MBS and four FBS’s.[]{data-label="fig:netmod2"}](network-mod-jsac-camready.eps){width="4.0in" height="2.7in"} Although the CR users are mobile, we assume constant topology during a time slot. If the topology is changed during a time slot, the video transmission will only be interrupted for the time slot, since the proposed algorithms are executed in every time slot for new channel assignment and schedule. ### Spectrum Sensing and Access The femtocell CR network is within the coverage of the infrastructure-based primary network. Both FBS’s and CR users sense the channels to identify spectrum opportunities in each time slot. Each time slot consists of (i) a [*sensing phase*]{}, when CR users and FBS’s sense licensed channels, (ii) a [*transmission phase*]{}, when CR users and FBS’s attempt to access licensed channels, and (iii) an [*acknowledgment phase*]{}, when acknowledgments (ACK) are returned to the source. Cooperative sensing policy is also adopted here. We also adopt a [*hypothesis test*]{} to detect channel availability. We assume that each CR user chooses one channel to sense in each time slot, since it only has one transceiver. The sensing results will be shared among CR users and FBS’s via the common channel in the sensing phase. Given $L$ sensing results on channel $m$, the availability of channel $m$, i.e., $P^A_m(\Theta_1^m,\cdots,\Theta_L^m)$, can be computed iteratively as follows. 
$$\begin{aligned}
\label{eq:Iteration2}
P^A_m(\Theta_1^m)&=&\left[1+\frac{\eta_m}{1-\eta_m}\times\frac{(\delta_1^m)^{1-\Theta_1^m}(1-\delta_1^m)^{\Theta_1^m}}{(\epsilon_1^m)^{\Theta_1^m}(1-\epsilon_1^m)^{1-\Theta_1^m}}\right]^{-1} \\
P^A_m(\vec{\Theta}_l^m)&=&P^A_m(\Theta_1^m,\Theta_2^m,\cdots,\Theta_l^m) \nonumber \\
&=&\left\{1+\left[\frac{1}{P^A_m(\Theta_1^m,\Theta_2^m,\cdots,\Theta_{l-1}^m)}-1\right]\times \right.\nonumber \\
&&\left. \frac{(\delta_l^m)^{1-\Theta_l^m}(1-\delta_l^m)^{\Theta_l^m}}{(\epsilon_l^m)^{\Theta_l^m}(1-\epsilon_l^m)^{1-\Theta_l^m}}\right\}^{-1}, \; l=2,\cdots,L.\end{aligned}$$

We adopt a probabilistic approach: based on sensing results $\vec{\Theta}_m$, we have $D_m(t)=0$ with probability $P^D_m(\vec{\Theta}_m)$ and $D_m(t)=1$ with probability $1-P^D_m(\vec{\Theta}_m)$. For primary user protection, the collision probability with primary users caused by CR users should be bounded. The probability $P^D_m(\vec{\Theta}_m)$ is determined as follows: $$\label{eq:PrDm2} P^D_m(\vec{\Theta}_m)=\min \left\{ \gamma_m / \left[ 1 - P^A_m(\vec{\Theta}_m) \right], 1 \right\}.$$ Let $\mathcal{A}(t) := \{m|D_m(t)=0\}$ be the set of available channels in time slot $t$. Then $G^t=\sum_{m\in \mathcal{A}(t)} P^A_m(\Theta_1^m)$ is the expected number of available channels. These channels will be accessed in the transmission phase of time slot $t$.

### Channel Model

Without loss of generality, we consider independent block fading channels, which are widely used in prior work [@Rappaport01]. The channel fading-gain process is piecewise constant on blocks of one time slot, and the fading gains in different time slots are independent. Let $f^{i,j}_X(x)$ denote the [*probability density function*]{} of the received SINR $X$ from a base station $i$ at CR user $j$. We assume the packet can be successfully decoded if the received SINR exceeds a threshold $H$. The packet loss probability from base station $i$ to CR user $j$ is $$\label{eq:PrFad} P_{i,j}=\Pr\{X \le H\} = \int_0^{H}f_X^{i,j}(x) dx=F_X^{i,j}(H),$$ where $F_X^{i,j}(H)$ is the cumulative distribution function of $X$. In the case of correlated fading channels, which can be modeled as a finite-state Markov process [@Zhang99], the packet loss probability in the next time slot can be estimated from the known state of the previous time slot and the transition probabilities. If the packet is successfully decoded, the CR user returns an ACK to the base station in the ACK phase. We assume ACKs are always successfully delivered.

### Video Performance Measure

We assume each active CR user receives a real-time video stream from either the MBS or an FBS. Without loss of generality, we adopt the MGS option of H.264/SVC for scalability, to accommodate the high variability of network bandwidth in CR networks. Due to the real-time constraint, each Group of Pictures (GOP) of a video stream must be delivered in the next $T$ time slots. With MGS, enhancement layer NAL units can be discarded from a quality scalable bit stream, and thus packet-based quality scalable coding is provided. Our approach is to encode the video according to the maximum rate the channels can support. During transmission, only part of the MGS video gets transmitted, as allowed by the currently available channel bandwidth. The video packets are transmitted in decreasing order of their significance in decoding.
When a truncated MGS video is received and decoded, the PSNR is computed by substituting the effective rate of the received MGS video into (\[eq:QuaMod\]) given below, thus the original video is not required. Without loss of generality, we assume that the last wireless hop is the bottleneck; video data is available at the MBS and FBS’s when they are scheduled to be transmitted. The quality of reconstructed MGS video can be modeled as [@Wien07]: $$\label{eq:QuaMod} W(R)=\alpha+\beta \times R,$$ where $W(R)$ is the average peak signal-to-noise ratio (PSNR) of the reconstructed video, $R$ is the received data rate, $\alpha$ and $\beta$ are constants depending on the video sequence and codec. We verified (\[eq:QuaMod\]) using an H.264/SVC codec and the [*Bus*]{}, [*Mobile*]{}, and [*Harbour*]{} test sequences. In Fig. \[fig:mgs-rd\], the markers are obtained by truncating the encoded video’s enhancement layer at different positions to obtain different effective rates, while the curves are computed using (\[eq:QuaMod\]). The curves fit well with measurements for the three sequences. It is worth noting that PSNR may not be a good measure of video quality as compared with alternative metrics such as MS-SSIM [@Wang04]. The main reason for choosing PSNR is that there is a closed-form model relating it to network level metrics–video rate. With the closed-form model, we can have a mathematical formulation of the scheduling/resource allocation problem, and derive effective algorithms. Should such closed-form models be available for MS-SSIM, it is possible to incorporate it into the optimization framework as well. ![Rate-distortion curves of three H.264/SVC MGS videos.[]{data-label="fig:mgs-rd"}](MGS-RD.eps){width="4.5in" height="3.0in"} MGS Video over Femtocell CR Networks \[sec:alg2\] ------------------------------------------------- In this section, we address the problem of resource allocation for MGS videos over femtocell CR networks. We first examine the case of a single FBS, and then the more general case of multiple non-interfering or interfering FBS’s. The algorithms for the single and non-interfering FBS cases are distributed ones and optimal. The algorithm for the interfering FBS case is a centralized one that can be executed at the MBS. To simplify notation, we omit the time slot index $t$ for most of the variables in this Section. For example, $x$ represents a variable for time slot $t$, $x^{-}$ represents the variable in time slot $(t-1)$, and $x^{+}$ represents the variable in time slot $(t+1)$. ### Case of Single FBS #### Formulation We first consider the case of a single FBS in the CR network, where the FBS can use all the $G$ available channels to stream videos to $K$ active CR users. Let $w_j$ be the PSNR of CR user $j$ at the beginning of time slot $t$ and $W_j$ the PSNR of CR user $j$ at the end of time slot $t$. In time slot $t$, $w_j$ is already known; $W_j$ is a random variable that depends on channel condition and primary user activity; and $w_j^{+}$ is a [*realization*]{} of $W_j$. Let $\xi_{0,j}$ and $\xi_{1,j}$ indicate the random packet losses from the MBS and FBS, respectively, to CR user $j$ in time slot $t$. That is, $\xi_{i,j}$ is $1$ with probability $\bar{P}_{i,j}=1-P_{i,j}$ and $0$ with probability $P_{i,j}$. Due to block fading channels, $P_{i,j}$’s do not change within the time slot. Let $\rho_{0,j}$ and $\rho_{1,j}$ be the portions of time slot $t$ when CR user $j$ receives video data from the MBS and FBS, respectively. 
The average PSNR is computed every $T$ time slots. We first have $W_j(0)=\alpha_j$, when $t=0$. In each time slot $t$, the CR user receives $\xi_{0,j} \rho_{0,j} B_0$ bits through the MBS, and $\xi_{1,j} \rho_{1,j} G B_1$ bits through the FBS (assuming that OFDM is used), which contribute an increase of $\beta (\xi_{0,j} \rho_{0,j} B_0 + \xi_{1,j} \rho_{1,j} G B_1) / T$ to the total PSNR in this $T$ time slot interval, according to (\[eq:QuaMod\]). Therefore we have the following recursive relationship: $W_j = W_j^{-} + \beta (\xi_{0,j} \rho_{0,j} B_0 + \xi_{1,j} \rho_{1,j} G B_1) / T$ = $W_j^{-} + \xi_{0,j} \rho_{0,j} R_{0,j} + \xi_{1,j} \rho_{1,j} G R_{1,j}$, where $R_{0,j}=\beta B_0/T$ and $R_{1,j}=\beta B_1/T$. For proportional fairness, we aim to maximize the sum of the logarithms of the PSNRs of all CR users [@Kelly98]. We formulate a [*multistage stochastic programming problem*]{} by maximizing the [*expectation*]{} of the logarithm-sum at time $T$. $$\begin{aligned} \label{eq:MultStage} \mbox{maximize:} && \hspace{-0.2in} \sum_{j=1}^K \mathbb{E}[\log(W_j(T))] \\ \mbox{subject to:} && \hspace{-0.2in} W_j=W_j^{-}+\xi_{0,j} \rho_{0,j} R_{0,j} + \xi_{1,j} \rho_{1,j} G R_{1,j}, %\nonumber \\ %&& \hspace{0.8in} \;\; j=1,\cdots,K, \; t=1,\cdots,T \nonumber \\ && \hspace{-0.2in} \sum_{j=1}^K \rho_{i,j} \leq 1, \;\; i=0,1, \; t=1,\cdots,T \nonumber \\ && \hspace{-0.2in} \rho_{i,j} \ge 0, \; i=0,1, \; j=1,\cdots,K, \; t=1,\cdots,T. \nonumber\end{aligned}$$ $R_{0,j}=\beta_j B_0/T$ and $R_{1,j}=\beta_jB_1/T$ are constants for the $j$-th MGS video. At the beginning of the last time slot $T$, a realization $\bm{\xi}_{[T-1]} = [ \vec{\xi}_1, \vec{\xi}_2, \cdots, \vec{\xi}_{T-1} ]$ is known, where $\vec{\xi}_t = [ \xi_{0,1}^t, \xi_{0,2}^t, \cdots, \xi_{0,K}^t, \xi_{1,1}^t, \cdots, \xi_{1,K}^t ]$, $t = 1, 2, \cdots, T-1$. It can be shown that the multistage stochastic programming problem (\[eq:MultStage\]) can be decomposed into $T$ serial sub-problems, each to be solved in a time slot $t$ [@Hu10TW]. $$\begin{aligned} \label{eq:SingStage} \mbox{maximize:} && \hspace{-0.2in} \sum_{j=1}^K \mathbb{E}\{\log(W_j)|\bm{\xi}_{[t-1]}\} \\ \mbox{subject to:} && %\nonumber \\ \hspace{-0.2in} W_j=W_j^{-}+\xi_{0,j} \rho_{0,j} R_{0,j} + \xi_{1,j} \rho_{1,j} G R_{1,j}, %\nonumber \\ %\;\; j=1,\cdots,K \nonumber \\ %&& \hspace{1.6in} j=1,\cdots,K \nonumber \\ && \hspace{-0.2in} \sum_{j=1}^K \rho_{i,j} \le 1, \;\; i=0,1 \nonumber \\ && \hspace{-0.2in} \rho_{i,j} \ge 0, \; i=0,1, \; j=1,\cdots,K, \nonumber\end{aligned}$$ where $\mathbb{E}\{\log(W_j)|\bm{\xi}_{[t-1]}\}$ denotes the [*conditional expectation*]{} of $\log(W_j)$ given realization $\bm{\xi}_{[t-1]}$. $W_j^{-}$ is known given the realization. When $t=1$, the conditional expectation becomes an unconditional expectation. Since a CR user has only one transceiver, it can operate on either one or more licensed channels (i.e., connecting to the FBS) or the common channel (i.e., connecting to the MBS), but not both simultaneously. Assume CR user $j$ operates on the common channel with probability $p_j$ and one or more licensed channels with probability $q_j$. We then rewrite problem (\[eq:SingStage\]) as $$\begin{aligned} \label{eq:ProbOpt1} \mbox{maximize:} && \hspace{-0.2in} \sum_{j=1}^K \left[ p_j \bar{P}_{0,j} \log(W_j^{-}+\rho_{0,j} R_{0,j}) \; + %q_j \bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j} G R_{1,j}) %\right] \\ %\right. \\ %&& \hspace{0.5in} \left. 
q_j \bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j} G R_{1,j}) \right] \nonumber \\ \mbox{subject to:} && \hspace{-0.2in} %\nonumber \\ \sum_{j=1}^K \rho_{i,j} \le 1, \;\; i=0,1 \nonumber \\ && \hspace{-0.2in} p_j + q_j = 1, \;\; j=1,\cdots,K \nonumber \\ && \hspace{-0.2in} \rho_{i,j}, \; p_j, \; q_j \ge 0, \;\; i=0,1, \; j=1,\cdots,K. \nonumber \end{aligned}$$ #### Properties In this section, we analyze the formulated problem (\[eq:ProbOpt1\]) and derive its properties. We have Lemmas 1, 2, and 3 and Theorem 1 and provide the proofs in the following. Problem (\[eq:ProbOpt1\]) is a convex optimization problem. First, it can be shown that the single term $ p_j \bar{P}_{0,j} \log(W_j^{-} + \rho_{0,j} R_{0,j}) + q_j \bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j} G R_{1,j}) $ is a concave function, because its [*Hessian matrix*]{} is negative semi-definite. Then, the objective function is concave since the sum of concave functions is also concave. Finally, all the constraints are linear. We conclude that problem (\[eq:ProbOpt1\]) is convex with a unique optimal solution. If $[\rho,p,q]$ is a feasible solution to problem (\[eq:ProbOpt1\]), then $[\rho,q,p]$ is also feasible. Since $[\rho,p,q]$ is feasible, we have $p + q =1$. Switching the two probabilities, we still have $q + p =1$. Therefore, the derived new solution is also feasible. Let the optimal solution be $[\rho^{\ast},p^{\ast},q^{\ast}]$. If $p_j^{\ast} \ge q_j^{\ast}$, then $\bar{P}_{0,j} \log(W_j^{-} + \rho_{0,j}^{\ast} R_{0,j})$ is greater than or equal to $\bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j}^{\ast} G R_{1,j})$. And vice versa. Assume $\bar{P}_{0,j} \log(W_j^{-} + \rho_{0,j}^{\ast} R_{0,j})$ is less than $\bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j}^{\ast} G R_{1,j})$. Since $p_j^{\ast} \ge q_j^{\ast}$, the sum of the product $p_j^{\ast} \bar{P}_{0,j} \log(W_j^{-} + \rho_{0,j}^{\ast} R_{0,j}) + q_j^{\ast} \bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j}^{\ast} G R_{1,j})$ is smaller than the sum of the product $q_j^{\ast} \bar{P}_{0,j} \log(W_j^{-} + \rho_{0,j}^{\ast} R_{0,j}) + p_j^{\ast} \bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j}^{\ast} G R_{1,j})$. Thus we can obtain an objective value larger than the optimum by switching the values of $p_j^{\ast}$ and $q_j^{\ast}$, which is still feasible according to Lemma 2. This conflicts with the assumption that $[\rho^{\ast},p^{\ast},q^{\ast}]$ is optimal. The reverse statement can be proved similarly. Let the optimal solution be $[\rho^{\ast},p^{\ast},q^{\ast}]$. If $p_j^{\ast} > q_j^{\ast}$, then we have $p_j^{\ast} = 1$ and $q_j^{\ast} = 0$. Otherwise, we have $p_j^{\ast} = 0$ and $q_j^{\ast} = 1$. If $p_j^{\ast} > q_j^{\ast}$, we have $\bar{P}_{0,j} \log(W_j^{-} + \rho_{0,j}^{\ast} R_{0,j}) \geq \bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j}^{\ast} G R_{1,j})$ according to Lemma 3. Since the objective function is linear with respect to $p_j$ and $q_j$, the optimal value can be achieved by setting $p_j$ to its maximum value 1 and $q_j$ to its minimum value 0. The reverse statement can be proved similarly. According to Theorem 1, a CR user is connected to either the MBS or the FBS for the [*entire*]{} duration of a time slot in the optimal solution. That is, it does not switch between base stations during a time slot under optimal scheduling. #### Distributed Solution Algorithm To solve problem (\[eq:ProbOpt1\]), we define non-negative [*dual variables*]{} ${\mathcal \lambda}=[\lambda_0, \lambda_1]$ for the two inequality constraints. 
The [*Lagrangian function*]{} is $$\begin{aligned} \label{eq:LagProbOpt1} \hspace{-0.2in} \mathcal{L}(p,\rho,\lambda) \hspace{-0.025in}&=&\hspace{-0.025in} \sum_{j=1}^K \left[ p_j \bar{P}_{0,j} \log(W_j^{-} + \rho_{0,j} R_{0,j}) + %\right. \nonumber \\ %&& \left. (1-p_j) \bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j} G R_{1,j}) \right] + \nonumber \\ && \lambda_0(1 - \sum_{j=1}^K \rho_{0,j})+ \lambda_1(1 - \sum_{j=1}^K \rho_{1,j}) \nonumber \\ &=& \hspace{-0.025in} \sum_{j=1}^K \mathcal{L}_j(p_j,\rho_{0,j},\rho_{1,j},\lambda_0,\lambda_1) \hspace{-0.025in} + \hspace{-0.025in} \lambda_0 \hspace{-0.025in} + \hspace{-0.025in} \lambda_1, \end{aligned}$$ where $$\begin{aligned} \hspace{0.1in} \mathcal{L}_j(p_j,\rho_{0,j},\rho_{1,j},\lambda_0,\lambda_1) = p_j \bar{P}_{0,j} \log(W_j^{-} + \rho_{0,j} R_{0,j}) + \nonumber \\ \hspace{0.2in} (1 - p_j) \bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j} G R_{1,j}) - \lambda_0 \rho_{0,j} - \lambda_1 \rho_{1,j}. \nonumber\end{aligned}$$ The corresponding problem can be decomposed into $K$ sub-problems and solved iteratively. In Step $\tau \geq 1$, for given $\lambda_0(\tau)$ and $\lambda_1(\tau)$ values, each CR user $j$ solves the following sub-problem using local information. $$\begin{aligned} \label{eq:ArgSubOpt1} %&& \hspace{-0.3in} [p_j^{\ast}(\tau), \rho_{0,j}^{\ast}(\tau), \rho_{1,j}^{\ast}(\tau)] %\nonumber \\ %&& \hspace{-0.5in} = \stackbin[p_j,\rho_{0,j},\rho_{1,j} \ge 0]{}{\arg\max} \mathcal{L}_j(p_j,\rho_{0,j},\rho_{1,j},\lambda_0(\tau),\lambda_1(\tau)).\end{aligned}$$ There is a unique optimal solution since the objective function in (\[eq:ArgSubOpt1\]) is concave. The CR users then exchange their solutions. The [*master dual problem*]{}, for given $p(\tau)$ and $\rho(\tau)$, is: $$\begin{aligned} \label{eq:MaterOpt1} %&& \hspace{-0.2in} \min_{\lambda\ge 0} \mathcal{L}(p(\tau),\rho(\tau),\lambda) %\nonumber \\ %&& \hspace{-0.4in} = \sum_{j=1}^K \mathcal{L}_j(p_j(\tau),\rho_{0,j}(\tau),\rho_{1,j}(\tau),\lambda_0,\lambda_1)+\lambda_0+\lambda_1. \end{aligned}$$ Since the Lagrangian function is differentiable, the [*gradient iteration*]{} approach can be used. $$\label{eq:IterOpt1} \lambda_i(\tau+1) = \left[ \lambda_i(\tau) - s \times \left( 1 - \sum_{j=1}^K \rho_{i,j}^{\ast}(\tau) \right) \right]^+, \; i=0,1,$$ where $s$ is a sufficiently small positive [*step size*]{} and $[\cdot]^+$ denotes the projection onto the nonnegative axis. The updated $\lambda_i(\tau+1)$ will again be used to solve the sub-problems, and so forth. Since the problem is convex, we have [*strong duality*]{}; the [*duality gap*]{} between the primal and dual problems is zero. The dual variables $\lambda(\tau)$ will converge to the optimal values as $\tau$ goes to infinity. Since the optimal solution to (\[eq:ArgSubOpt1\]) is unique, the primal variables $p(\tau)$ and $\rho_{i,j}(\tau)$ will also converge to their optimal values when $\tau$ is sufficiently large. The distributed solution procedure is presented in Table \[tab:Opt1\]. In the table, Steps 3–8 solve the sub-problem in (\[eq:ArgSubOpt1\]); Step 9 updates the dual variables. The threshold $\phi$ is a prescribed small value with $0 \leq \phi \ll 1$. The algorithm terminates when the dual variables are sufficiently close to the optimal values. 
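A compact numerical sketch of this primal-dual iteration is given below. The per-user data (success probabilities, accumulated PSNRs, and rate constants $R_{i,j}$) are made-up values, and the hard iteration cap is only a safeguard for the sketch; it mirrors the steps of Table \[tab:Opt1\] but is not the authors' implementation.

```python
# Sketch of the distributed primal-dual iteration for the single-FBS case (Table tab:Opt1).
# All problem data below are illustrative assumptions.
import math

K, G = 3, 4                                      # CR users and expected available channels
Pbar0, Pbar1 = [0.90, 0.80, 0.85], [0.95, 0.97, 0.90]   # success probabilities to MBS / FBS
Wm = [30.0, 32.0, 31.0]                          # W_j^-: PSNR accumulated before this slot
R0, R1 = [15.0] * K, [4.0] * K                   # R_{0,j} = beta_j B_0 / T, R_{1,j} = beta_j B_1 / T
lam = [1.0, 1.0]                                 # dual variables [lambda_0, lambda_1]
s, phi = 0.01, 1e-6                              # step size and stopping threshold

for _ in range(20000):                           # iteration cap: a safeguard for this sketch
    rho0, rho1 = [0.0] * K, [0.0] * K
    for j in range(K):
        r0 = max(Pbar0[j] / lam[0] - Wm[j] / R0[j], 0.0)             # Step 3
        r1 = max(Pbar1[j] / lam[1] - Wm[j] / (R1[j] * G), 0.0)
        mbs = Pbar0[j] * math.log(Wm[j] + r0 * R0[j]) - lam[0] * r0  # Step 4
        fbs = Pbar1[j] * math.log(Wm[j] + r1 * G * R1[j]) - lam[1] * r1
        if mbs > fbs:
            rho0[j] = r0                         # user j connects to the MBS (p_j = 1)
        else:
            rho1[j] = r1                         # user j connects to the FBS (q_j = 1)
    new = [max(lam[0] - s * (1.0 - sum(rho0)), 0.0),                 # Step 9, (eq:IterOpt1)
           max(lam[1] - s * (1.0 - sum(rho1)), 0.0)]
    done = sum((a - b) ** 2 for a, b in zip(new, lam)) <= phi        # Step 11
    lam = new
    if done:
        break

print("duals:", lam)
print("MBS time shares:", rho0)
print("FBS time shares:", rho1)
```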
----- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 1: Set $\tau=0$, $\lambda_0(0)$ and $\lambda_1(0)$ to some nonnegative value; 2: DO   % (each CR user $j$ executes Steps 3–8) 3: $\;\;$ $\rho_{0,j}(\tau) \hspace{-0.025in} = \hspace{-0.025in} \left[ \frac{\bar{P}_{0,j}}{\lambda_0(\tau)} \hspace{-0.025in} - \hspace{-0.025in} \frac{W_j^{-}}{R_{0,j}} \right]^+$, $\rho_{1,j}(\tau) \hspace{-0.025in} = \hspace{-0.025in} \left[ \frac{\bar{P}_{1,j}}{\lambda_1(\tau)} \hspace{-0.025in} - \hspace{-0.025in} \frac{W_j^{-}}{R_{1,j} G} \right]^+$; 4: $\;\;$ IF $\left[ \bar{P}_{0,j} \log(W_j^{-}+\rho_{0,j}(\tau) R_{0,j})-\lambda_0(\tau)\rho_{0,j}(\tau) \right] >$ $\;\;$ $\left[ \bar{P}_{1,j} \log(W_j^{-} + \rho_{1,j}(\tau) G R_{1,j}) - \lambda_1(\tau) \rho_{1,j}(\tau) \right]$ 5: $\;\;\;\;\;$ Set $p_j(\tau)=1$ and $\rho_{1,j}(\tau)=0$; 6: $\;\;$ ELSE 7: $\;\;\;\;\;$ Set $p_j(\tau)=0$ and $\rho_{0,j}(\tau)=0$; 8: $\;\;$ END IF 9: $\;\;$ MBS updates $\lambda_i(\tau+1)$ as in (\[eq:IterOpt1\]); 10: $\;\;$ $\tau=\tau+1$; 11: WHILE $\left( \sum_{i=0}^{1}(\lambda_i(\tau+1)-\lambda_i(\tau))^2 > \phi \right)$ ----- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ : Algorithm for the Case of Single FBS \[tab:Opt1\] ### Case of Multiple Non-interfering FBS’s \[subsec:mulnifbs\] We next consider the case of $N>1$ non-interfering FBS’s. The coverages of the FBS’s do not overlap with each other, as FBS 1 and 2 in Fig. \[fig:netmod2\]. Consequently, each FBS can use all the available licensed channels without interfering other FBS’s. Assume each CR user knows the nearest FBS and is associate with it. Let $\mathcal{U}_i$ denote the set of CR users associated with FBS $i$. The resource allocation problem becomes: $$\begin{aligned} \label{eq:ProbOptDM} \mbox{maximize:} && \hspace{-0.2in} \sum_{j=1}^K p_j \bar{P}_{0,j} \log(W_j^{-} + \rho_{0,j} R_{0,j}) + %\\ %&& \sum_{i=1}^N \sum_{j\in\mathcal{U}_i} q_j \bar{P}_{i,j} \log(W_j^{-} + \rho_{i,j} G R_{i,j}) \\ %\nonumber \\ \mbox{subject to:} && \hspace{-0.2in} \sum_{j=1}^K \rho_{0,j} \leq 1 \nonumber \\ && \hspace{-0.2in} \sum_{j\in\mathcal{U}_i} \rho_{i,j} \le 1, \;\; i=1,\cdots,N \nonumber \\ && \hspace{-0.2in} p_j + q_j = 1, \;\; j=1,\cdots,K \nonumber \\ && \hspace{-0.2in} \rho_{i,j}, \; p_j, \; q_j \ge 0, \;\; i=1,\cdots,N,\; j=1,\cdots,K. \nonumber\end{aligned}$$ Since all the available channels can be allocated to each FBS with spatial reuse, problem (\[eq:ProbOptDM\]) can be solved using the algorithm in Table \[tab:Opt1\] with some modified notation: $\rho_{1,j}(\tau)$ now becomes $\rho_{i,j}(\tau)$ and $\lambda_1(\tau)$ becomes $\lambda_i(\tau)$, $i=1, \cdots, N$. 
The dual variables are iteratively updated as $$\begin{aligned} && \hspace{-0.4in} \lambda_0(\tau+1)=\left[\lambda_0(\tau)-s \times \left( 1 - \sum_{j=1}^K \rho_{0,j}^{\ast}(\tau) \right) \right]^+ \label{eq:IterOptM0} \\ && \hspace{-0.4in} \lambda_i(\tau+1)=\left[\lambda_i(\tau)-s \times \left( 1 - \sum_{j\in \mathcal{U}_i} \rho_{i,j}^{\ast}(\tau) \right) \right]^+, %\nonumber \\ %&& \hspace{1.8in} \;\; i=1,\cdots,N. \label{eq:IterOptM1}\end{aligned}$$ The modified solution algorithm is presented in Table \[tab:OptDisM\]. As in the case of single FBS, the algorithm is jointly executed by the CR users and MBS, by iteratively updating the dual variables $\lambda_0(\tau)$ and $\lambda_i(\tau)$’s, and the resource allocations $\rho_{0,j}^{\ast}(\tau)$ and $\rho_{i,j}^{\ast}(\tau)$’s. It can be shown that the distributed algorithm can produce the optimal solution for problem (\[eq:ProbOptDM\]). ----- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1: Set $\tau=0$, and $\lambda_0(0)$ and $\lambda_i(0)$ to some nonnegative values, for all $i$; 2: DO   % (each CR user $j$ executes Steps 3–8) 3: $\;\;$ $\rho_{0,j}(\tau) \hspace{-0.025in} = \hspace{-0.025in} \left[ \frac{\bar{P}_{0,j}}{\lambda_0(\tau)} \hspace{-0.025in} - \hspace{-0.025in} \frac{W_j^{-}}{R_{0,j}} \right]^+$, $\rho_{i,j}(\tau) \hspace{-0.025in} = \hspace{-0.025in} \left[ \frac{\bar{P}_{i,j}}{\lambda_i(\tau)} \hspace{-0.025in} - \hspace{-0.025in} \frac{W_j^{-}}{R_{i,j} G} \right]^+$ ; 4: $\;\;$ IF $\left[ \bar{P}_{0,j} \log(W_j^{-}+\rho_{0,j}(\tau) R_{0,j})-\lambda_0(\tau)\rho_{0,j}(\tau) \right] >$ $\;\;$ $\left[ \bar{P}_{i,j} \log(W_j^{-} + \rho_{i,j}(\tau) G R_{i,j}) - \lambda_i(\tau) \rho_{i,j}(\tau) \right]$ 5: $\;\;\;\;\;$ Set $p_j(\tau)=1$ and $\rho_{i,j}(\tau)=0$; 6: $\;\;$ ELSE 7: $\;\;\;\;\;$ Set $p_j(\tau)=0$ and $\rho_{0,j}(\tau)=0$; 8: $\;\;$ END IF 9: $\;\;$ MBS updates $\lambda_i(\tau+1)$ as in (\[eq:IterOptM0\]) and (\[eq:IterOptM1\]); 10: $\;\;$ $\tau=\tau+1$; 11: WHILE $\left( \sum_{i=0}^{N}(\lambda_i(\tau+1)-\lambda_i(\tau))^2 > \phi \right)$ ----- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- : Algorithm for the Case of Multiple Non-Interfering FBS’s \[tab:OptDisM\] ### Case of Multiple Interfering FBS’s #### Formulation Finally, we consider the case of multiple interfering FBS’s. Assume that the coverages of some FBS’s overlap with each other, as FBS 3 and 4 in Fig. \[fig:netmod2\]. They cannot use the same channel simultaneously, but have to compete for the available channels in the transmission phase. Define [*channel allocation variables*]{} $c_{i,m}$ for time slot $t$ as: $$\label{eq:cimt} c_{i,m} = \left\{ \begin{array}{ll} 1, & \mbox{if channel $m$ is allocated to FBS $i$} \\ 0, & \mbox{otherwise.} \end{array} \right.$$ Given an allocation, the expected number of available channels for FBS $i$ is $G_i \hspace{-0.025in} = \hspace{-0.025in} \sum_{m\in\mathcal{A}(t)} c_{i,m} P_m^A$. 
We use [*interference graph*]{} to model the case of overlapping coverages, which is defined below. An [*interference graph*]{} $G_I=(V_I,E_I)$ is an undirected graph where each vertex represents an FBS and each edge indicates interference between the two end FBS’s. For the example given in Fig. \[fig:netmod2\], we can derive an interference graph as shown in Fig. \[fig:interferencegraph\]. FBS 3 and 4 cannot use the same channel simultaneously, as summarized in the following lemma. ![Interference graph for the femtocell CR network shown in Fig. \[fig:netmod2\].[]{data-label="fig:interferencegraph"}](interference-graph-jsac-camready.eps){width="4.0in"} If channel $m$ is allocated to FBS $i$, the neighboring vertices of FBS $i$ in the interference graph $G_I$, denoted as $\mathcal{R}(i)$, cannot use the same channel $m$ simultaneously. Further define index variables $d_i^k$ as $$\label{eq:dik} d_i^k = \left\{ \begin{array}{ll} 1, & \mbox{if FBS $i$ is an endpoint of link $k \in G_I$} \\ 0, & \mbox{otherwise.} \end{array} \right.$$ The interference constraint can be described as $\sum_{i=1}^N d_i^k c_{i,m} \le 1$, for $m=0,\cdots,M$, and for all link $k \in G_I$. We then have the following problem formulation. $$\begin{aligned} \label{eq:ProbOptM} \mbox{maximize:} && \hspace{-0.25in} \sum_{j=1}^K p_j \bar{P}_{0,j} \log(W_j^{-}+\rho_{0,j} R_{0,j}) + %\\ %&& \sum_{i=1}^N \sum_{j\in\mathcal{U}_i} q_j \bar{P}_{i,j} \log(W_j^{-} + \rho_{i,j} G_i R_{i,j}) \\ %\nonumber \\ \mbox{subject to:} && \hspace{-0.2in} \sum_{j=1}^K \rho_{0,j} \leq 1 \nonumber \\ && \hspace{-0.25in} \sum_{j\in\mathcal{U}_i} \rho_{i,j} \le 1, \;\; i=1,\cdots,N \nonumber \\ && \hspace{-0.25in} p_j + q_j = 1, \;\; j=1,\cdots,K \nonumber \\ && \hspace{-0.25in} G_i = \sum_{m\in\mathcal{A}(t)} c_{i,m} P_m^A, \;\; i=1,\cdots,N \nonumber \\ && \hspace{-0.25in} \sum_{i=1}^N d_i^k c_{i,m} \le 1, m=0,\cdots,M, \mbox{for link } k \in G_I, \nonumber \\ && \hspace{-0.25in} \rho_{i,j}, p_j, q_j, c_{i,m} \ge 0, \;\; %\nonumber \\ %&& i=1,\cdots,N,\; j=1,\cdots,K, \; m=0,\cdots,M. \nonumber\end{aligned}$$ #### Solution Algorithm \[subsubsec:ifbs\] The optimal solution to problem (\[eq:ProbOptM\]) depends on the channel allocation variables $c_{i,m}$. Problem (\[eq:ProbOptM\]) can be solved with the algorithm in Table \[tab:OptDisM\] if the $c_{i,m}$’s are known. Let $Q(\bm{c})$ be the suboptimal objective value for a given channel allocation $\bm{c}$, where $\bm{c}=[\vec{c}_1, \vec{c}_2, \cdots, \vec{c}_N]$ and $\vec{c}_i$ is a vector of elements $c_{i,m}$, for FBS $i$ and channels $m \in \mathcal{A}(t)$. If all the FBS’s are disjointedly distributed with no overlap, each FBS can use all the available channels. We have $c_{i,m}=1$ for all $i$ and $m \in \mathcal{A}(t)$, i.e., it is reduced to the case in Section \[subsec:mulnifbs\]. To solve problem (\[eq:ProbOptM\]), we first apply a [*greedy algorithm*]{} to allocate the available channels in $\mathcal{A}(t)$ to the FBS’s (i.e., to determine $\bm{c}$). We then apply the algorithm in Table \[tab:OptDisM\] with the computed $\bm{c}$ to obtain a near-optimal solution. Let $\bm{e}_{i,m}$ be a matrix with $1$ at position $\{i,m\}$ and $0$ at all other positions, representing the allocation of channel $m \in \mathcal{A}(t)$ to FBS $i$. The greedy channel allocation algorithm is given in Table \[tab:ChanAloc\], where the FBS-channel pair that can achieve the largest increase in $Q(\cdot)$ is chosen in each iteration. The worst case complexity of the greedy algorithm is $O(N^2 M^2)$. 
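The sketch below illustrates this greedy allocation. Since evaluating the true $Q(\bm{c})$ requires running the algorithm in Table \[tab:OptDisM\] for each candidate allocation, a simple concave stand-in utility is used here instead; the interference graph, availability probabilities, and utility are all illustrative assumptions.

```python
# Sketch of the greedy channel allocation in Table tab:ChanAloc, with a stand-in utility Q(.)
# in place of the objective value returned by the algorithm in Table tab:OptDisM.
import math

N = 3                                            # FBS's, indexed 1..N
A = [1, 2, 3]                                    # available licensed channels in this slot
PA = {1: 0.9, 2: 0.7, 3: 0.8}                    # P_m^A: availability probabilities, assumed
R = {1: {2}, 2: {1, 3}, 3: {2}}                  # interference graph: neighbors of each FBS

def Q(allocation):
    """Stand-in utility: concave in each FBS's expected number of channels G_i."""
    G = {i: sum(PA[m] for m in allocation[i]) for i in allocation}
    return sum(math.log(1.0 + G[i]) for i in allocation)

alloc = {i: set() for i in range(1, N + 1)}      # c_{i,m} kept as a set of channels per FBS
C = {(i, m) for i in range(1, N + 1) for m in A} # candidate FBS-channel pairs

def gain(i, m):                                  # marginal increase Q(c + e_{i,m}) - Q(c)
    trial = {k: set(v) for k, v in alloc.items()}
    trial[i].add(m)
    return Q(trial) - Q(alloc)

while C:                                         # Steps 2-7
    i, m = max(C, key=lambda pair: gain(*pair))  # Step 3: largest marginal gain
    alloc[i].add(m)                              # Step 4
    C.discard((i, m))                            # Step 5
    C -= {(j, m) for j in R[i]}                  # Step 6: interfering neighbors lose channel m
print(alloc)
```

With this interference graph, FBS $2$ conflicts with both of its neighbors, so a channel granted to FBS $2$ is withdrawn from FBS $1$ and FBS $3$ in Step 6, and vice versa.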
---- ------------------------------------------------------------------------------------------------------------------------ 1: Initialize $\bm{c}$ to a zero matrix, FBS set $\mathcal{N}=\{1,\cdots,N\}$, and FBS-channel set $\mathcal{C}=\mathcal{N}\times\mathcal{A}(t)$; 2: WHILE ($\mathcal{C}$ is not empty) 3: $\;\;$ Find FBS-channel pair $\{i',m'\}$, such that $\;\;\;\;\;\;$ $\{i',m'\} = \stackbin[\{i,m\} \in \mathcal{C}]{}{\arg\max} \{Q(\bm{c} + \bm{e}_{i,m}) - Q(\bm{c}) \}$; 4: $\;\;$ Set $\bm{c} = \bm{c} + \bm{e}_{i',m'}$; 5: $\;\;$ Remove $\{i',m'\}$ from $\mathcal{C}$; 6: $\;\;$ Remove $\mathcal{R}(i') \times m'$ from $\mathcal{C}$; 7: END WHILE ---- ------------------------------------------------------------------------------------------------------------------------ : Channel Allocation Algorithm for Case of Interfering FBS’s \[tab:ChanAloc\] #### Performance Lower Bound We next present a lower bound for the greedy algorithm. Let $e(l)$ be the $l$-th FBS-channel pair chosen in the greedy algorithm, and $\pi_l$ denote the sequence $\{e(1),e(2),\cdots,e(l)\}$. The increase in object value (\[eq:ProbOptM\]) due to the $l$-th allocated FBS-channel pair is denoted as $$\Delta_l := \Delta(\pi_l, \pi_{l-1}) = Q(\pi_l) - Q(\pi_{l-1}).$$ Since $Q(\pi_0)=Q(\emptyset)=0$, we have $$\begin{aligned} \sum_{l=1}^L \Delta_l &=& Q(\pi_L)-Q(\pi_{L-1})+ \cdots + Q(\pi_1) - Q(\pi_0) \nonumber \\ &=& Q(\pi_L) - Q(\pi_0) = Q(\pi_L). \nonumber\end{aligned}$$ For two FBS-channel pairs $e(l)$ and $e(l')$, we say $e(l)$ [*conflicts with*]{} $e(l')$ when there is an edge connecting the FBS in $e(l)$ and the FBS in $e(l')$ in the interference graph $G_I$, and the two FBS’s choose the same channel. Let $\Omega$ be the global optimal solution. We define $\omega_l$ as the subset of $\Omega$ that conflicts with allocation $e(l)$ but not with the previous allocations $\{e(1), e(2), \cdots, e(l-1)\}$. \[lemma1:5\] Assume the greedy algorithm in Table \[tab:ChanAloc\] stops in $L$ steps. The global optimal solution $\Omega$ can be partitioned into $L$ non-overlapping subsets $\omega_l$, $l=1,2,\cdots,L$. According to the definition of $\omega_l$, the $L$ subsets of the optimal solution $\Omega$ do not intersect with each other. Assume the statement is false, then the union of these $L$ subsets is not equal to the optimal set $\Omega$. Let the [*set difference*]{} be $\omega_{L+1} = \Omega \setminus (\cup_{l=1}^L \omega_l)$. By definition, $\omega_{L+1}$ does not conflict with the existing $L$ allocations $\{e(1), \cdots, e(L)\}$, meaning that the greedy algorithm can continue to at least the $(L+1)$-th step. This conflicts with the assumption that the greedy algorithm stops in $L$ steps. It follows that $\Omega = \cup_{l=1}^L \omega_l$. Let $\Delta(\pi_2,\pi_1)=Q(\pi_2)-Q(\pi_1)$ denote the difference between two feasible allocations $\pi_1$ and $\pi_2$. We next derive a lower bound on the performance of the greedy algorithm. We assume two properties for function $\Delta(\pi_2,\pi_1)$ in the following. Consider FBS-channel pair sets $\pi_1$, $\pi_2$, and $\sigma$, satisfying $\pi_1\subseteq\pi_2$ and $\sigma \cap \pi_2 =\emptyset$. We have $\Delta(\pi_2\cup\sigma, \pi_1\cup\sigma) \le \Delta(\pi_2,\pi_1)$. Consider FBS-channel pair sets $\pi$, $\sigma_1$, and $\sigma_2$ satisfying $\sigma_1 \cap \pi = \emptyset$, $\sigma_2 \cap \pi = \emptyset$, and $\sigma_1 \cap \sigma_2 = \emptyset$. We have $\Delta(\sigma_1 \cup \sigma_2 \cup \pi, \pi) \le \Delta(\sigma_1 \cup \pi,\pi) + \Delta(\sigma_2 \cup \pi, \pi)$. 
In Property 1, we have $\sigma \cap \pi_1 =\emptyset$ since $\pi_1\subseteq\pi_2$ and $\sigma \cap \pi_2 =\emptyset$. This property states that the incremental objective value does not increase as more channels are allocated, i.e., the marginal gain diminishes as the objective value grows. Property 2 states that the incremental objective value achieved by allocating multiple FBS-channel pair sets does not exceed the sum of the incremental objective values achieved by allocating each individual FBS-channel pair set. These properties hold for many resource allocation problems [@Kelly98]. Since we choose the maximum incremental allocation in each step of the greedy algorithm, we have Lemma \[lemma:step3\], which follows directly from Step 3 in Table \[tab:ChanAloc\].

\[lemma:step3\] For any FBS-channel pair $\sigma \in \omega_l$, we have $Q(\pi_{l-1} \cup \sigma) - Q(\pi_{l-1}) = \Delta(\pi_{l-1} \cup \sigma, \pi_{l-1}) \le \Delta_l$.

\[lemma:6\] Assume the greedy algorithm stops in $L$ steps. Then $$Q(\Omega)\le Q(\pi_L)+\sum_{l=1}^L \sum_{\sigma\in \omega_{l}} \Delta(\sigma \cup \pi_{l-1}, \pi_{l-1}).$$

The following inequalities hold according to the two properties of the $\Delta(\cdot, \cdot)$ function:
$$\begin{aligned}
\label{eq:IneqProof}
Q((\cup_{i=l+1}^L \omega_i)\cup \pi_l) &=& Q((\cup_{i=l+2}^L \omega_i)\cup \pi_l) + \Delta((\cup_{i=l+1}^L \omega_i)\cup \pi_l,(\cup_{i=l+2}^L \omega_i)\cup \pi_l) \nonumber \\
&\le& Q((\cup_{i=l+2}^L \omega_i)\cup \pi_l)+\Delta(\omega_{l+1}\cup\pi_l,\pi_l) \nonumber \\
&\le& Q((\cup_{i=l+2}^L \omega_i)\cup \pi_{l+1})+\Delta(\omega_{l+1}\cup\pi_l,\pi_l) \nonumber \\
&\le& Q((\cup_{i=l+2}^L \omega_i)\cup \pi_{l+1}) + \sum_{\sigma\in\omega_{l+1}} \Delta(\sigma \cup \pi_l,\pi_l). \end{aligned}$$
We have $\pi_0=\emptyset$ and $\omega_{L+1}=\emptyset$ (see Lemma \[lemma1:5\]), so that $Q((\cup_{i=1}^L \omega_i) \cup \pi_0) = Q(\Omega)$. Applying (\[eq:IneqProof\]) repeatedly from $l=0$ to $l=L-1$ then yields $Q(\Omega) \le Q(\pi_{L}) + \sum_{l=1}^L \sum_{\sigma\in\omega_{l}} \Delta(\sigma\cup\pi_{l-1},\pi_{l-1})$.

\[lemma:7\] The maximum size of $\omega_l$ is equal to the degree, in the interference graph $G_I$, of the FBS selected in the $l$-th step of the greedy algorithm, which is denoted as $D(l)$.

Once FBS $i$ is allocated channel $m$, the neighboring FBS’s in $G_I$, $\mathcal{R}(i)$, cannot use the same channel $m$ any more due to the interference constraint. The maximum number of FBS-channel pairs that conflict with the selected FBS-channel pair $\{i,m\}$, i.e., the maximum size of $\omega_l$, is equal to the degree of FBS $i$ in $G_I$.

Then we have Theorem \[th1:2\], which provides a lower bound on the objective value achieved by the greedy algorithm given in Table \[tab:ChanAloc\].

\[th1:2\] The greedy algorithm can achieve an objective value that is at least $\frac{1}{1+D_{max}}$ of the global optimum, where $D_{max}$ is the maximum node degree in the interference graph $G_I$ of the femtocell CR network.

According to Lemmas \[lemma:step3\], \[lemma:6\] and \[lemma:7\], we have:
$$\begin{aligned}
\label{eq:OptBound}
Q(\Omega) \le Q(\pi_L) + \sum_{l=1}^L D(l)\Delta_l = Q(\pi_L)+\bar{D} \sum_{l=1}^L \Delta_l = (1 + \bar{D}) Q(\pi_L), \end{aligned}$$
where $\bar{D}=\sum_{l=1}^L D(l)\Delta_l / \sum_{l=1}^L \Delta_l$. The second equality is due to the fact that $\sum_{l=1}^L \Delta_l = Q(\pi_L)$.
To further simplify the bound, we replace $D(l)$ with the maximum node degree $D_{max}$. We then have $\bar{D} \leq \sum_{l=1}^L D_{max} \Delta_l / \sum_{l=1}^L \Delta_l = D_{max}$ and $$\label{eq:lowerbd} \frac{1}{1 + D_{max}} Q(\Omega) \le Q(\pi_L) \le Q(\Omega),$$ which provides a lower bound on the performance of the greedy algorithm.

When there is a single FBS in the CR network, we have $D_{max}=0$ and $Q(\pi_L) = Q(\Omega)$ according to Theorem \[th1:2\], i.e., the proposed algorithm produces the optimal solution. In the case of multiple non-interfering FBS’s, we still have $D_{max}=0$ and can obtain the optimal solution using the proposed algorithm. For the femtocell CR network given in Fig. \[fig:netmod2\] (with interference graph shown in Fig. \[fig:interferencegraph\]), we have $D_{max}=1$ and the lower bound is half of the global optimum. Note that (\[eq:OptBound\]) provides a tighter bound for the optimum than (\[eq:lowerbd\]), but with higher complexity. These are interesting performance bounds since they bound the achievable video quality, an application layer performance measure, rather than lower layer metrics (e.g., bandwidth or time share).

Simulation Results \[sec:sim4\]
-------------------------------

We evaluate the performance of the proposed algorithms using MATLAB and the JSVM 9.13 video codec. Two scenarios are used in the simulations: a single FBS CR network and a CR network with interfering FBS’s. In every simulation, we compare the proposed algorithms with the following three alternative schemes:

- Heuristic 1 based on [*equal allocation*]{}: each CR user chooses the better channel (i.e., the common channel or a licensed channel) based on the channel conditions; time slots are equally allocated among active CR users;

- Heuristic 2 exploiting [*multiuser diversity*]{}: the MBS and each FBS choose the active CR user with the best channel condition; the entire time slot is allocated to the selected CR user.

- [*SCA-MAC*]{} proposed in [@Hsu07]: with this scheme, the successful transmission rate is evaluated based on the channel packet loss rate and the collision probability with primary users; the channel-user pair with the highest successful transmission probability is selected. We choose SCA-MAC because it adopts similar models and assumptions to those in this paper.

Once the channels are selected, the same distributed algorithm is used for scheduling video data in all three schemes. We adopt the Rayleigh block fading model, and the packet loss probability lies in the range \[0.004, 0.028\]. The frame rate is set to 30 fps and the GoP size is 16. The base layer mode is set to be AVC compatible. The motion search mode is set to Fast Search with search range 32. Each point in the figures presented in this section is the average of 10 simulation runs with different random seeds. We plot 95% confidence intervals in the figures, which are generally negligible.

### Case of Single FBS

In the first scenario, there are $M=8$ channels and the channel parameters $P_{01}^m$ and $P_{10}^m$ are set to 0.4 and 0.3, respectively, for all $m$. The maximum allowable collision probability $\gamma_m$ is set to 0.2 for all $m$. There is one FBS and three active CR users. Three Common Intermediate Format (CIF, 352$\times$288) video sequences are streamed to the CR users, i.e., [*Bus*]{} to CR user 1, [*Mobile*]{} to CR user 2, and [*Harbor*]{} to CR user 3. The delivery deadline is $T=10$.
Both probabilities of false alarm $\epsilon$ and miss detection $\delta$ are set to 0.3 for all the FBS’s and CR users, unless otherwise specified. First we investigate the convergence of the distributed algorithm. The traces of the two dual variables are plotted in Fig. \[fig:conv\]. To improve the convergence speed, the correlation in adjacent time slots can be exploited. In particular, we set the optimal values for the optimization variables in the previous time slot as the initialization values for the variables in the current time slot. By doing so, the convergence speed can be improved. It can be seen that both dual variables converge to their optimal values after 300 iterations. After convergence, the optimal solution for the primary problem can be obtained. ![Convergence of the two dual variables in the single FBS case.[]{data-label="fig:conv"}](convergence_plot2.eps){width="4.5in" height="3.0in"} Our proposed scheme achieves the best performance among the three algorithms, with up to 4.3 dB improvement over the two heuristic schemes and up to 2.5 dB over SCA-MAC. Such gains are significant with regard to video quality, since a 0.5 dB difference is distinguishable by human eyes. Compared to the two heuristic schemes and SCA-MAC, the video quality of our proposed scheme is well balanced among the three users, indicating better fairness performance. In Fig. \[fig:singleFBSmAll\], we examine the impact of the number of channels $M$ on received video quality. First, we validate the video quality measure used in our formulation by comparing the PSNR value computed using (\[eq:QuaMod\]) with that computed from real decoded video frames. The average PSNR for three received videos are plotted in the figure. It can be seen that the real PSNRs are very close to those predicted by (\[eq:QuaMod\]), with overlapping confidence intervals. This is also consistent with the results shown in Fig. \[fig:mgs-rd\]. Second, as expected, the more licensed channels, the more spectrum opportunities for CR users and the higher PSNR for received videos. SCA-MAC performs better than two heuristics, but is inferior to the proposed scheme. ![Single FBS: received video quality vs. number of channels (computed with (9) and measured by PSNR).[]{data-label="fig:singleFBSmAll"}](SingleFBS_ChanM_Plot_all.eps){width="4.5in" height="3.0in"} We also plot the MS-SSIM of the received videos at the three CR users in Fig. \[fig:singleFBSmSSIM\] [@Wang04]. Similar observations can be made from the MS-SSIM plot. All MS-SSIMs for the four curves are more than 0.97 and very close to 1. The proposed scheme still outperforms the other three schemes. In the remaining figures, we will use model predicted PSNR values, since the model (\[eq:QuaMod\]) is sufficient to predict the real video quality. ![Single FBS: received video quality vs. number of channels (measured by MS-SSIM).[]{data-label="fig:singleFBSmSSIM"}](SingleFBS_ChanM_Plot_SSIM.eps){width="4.5in" height="3.0in"} In Fig. \[fig:singleFBSeta\], we demonstrate the impact of channel utilization $\eta$ on received video quality. The average PSNRs achieved by the four schemes are plotted when $\eta$ is increased from 0.3 to 0.7. Intuitively, a smaller $\eta$ allows more spectrum opportunities for video transmission. This is illustrated in the figure where all the three curves decrease as $\eta$ gets larger. The performance of both heuristics are close and the proposed scheme achieves a gain about 3 dB over the heuristics and 2 dB over SCA-MAC. 
![Single FBS: received video quality vs. channel utilization.[]{data-label="fig:singleFBSeta"}](SingleFBS_ChanEta_Plot.eps){width="4.5in" height="3.0in"}

We also compare the MGS and FGS videos while keeping all other parameters identical. We find that MGS video achieves over 0.5 dB gain in video quality over FGS video. The results are omitted for brevity.

### Case of Interfering FBS’s

We next investigate the second scenario with three FBS’s, each of which has three active CR users. Each FBS streams three different videos to the corresponding CR users. The coverage areas of FBS 1 and FBS 2 overlap with each other, as do those of FBS 2 and FBS 3. In Fig. \[fig:multiFBSm\], we examine the impact of the number of channels $M$ on the received video quality. The average PSNRs of all the active CR users are plotted in the figure when we increase $M$ from 12 to 20 with step size 2. As mentioned before, more channels imply more transmission opportunities for video. In this scenario, heuristic 2 (with a multiuser diversity approach) outperforms heuristic 1 (with an equal allocation approach), but its PSNRs are still about 0.3 $\sim$ 0.5 dB lower than those of the proposed algorithm. The proposed scheme has up to 0.4 dB improvement over SCA-MAC. In Fig. \[fig:multiFBSm\], we also plot an upper bound on the optimal objective value, which is obtained as in (\[eq:OptBound\]). It can be seen that the performance of our proposed scheme is close to the optimal solution, since the gap between the upper bound and our scheme is generally small (about 0.5 dB).

![Interfering FBS’s: received video quality vs. number of channels.[]{data-label="fig:multiFBSm"}](MultiFBS_ChanM_Plot.eps){width="4.5in" height="3.0in"}

Next, we examine the impact of sensing errors on the received video quality. In Fig. \[fig:multiFBSepsilon\], we test five pairs of $\{\epsilon, \delta\}$ values: {0.2,0.48}, {0.24,0.38}, {0.3,0.3}, {0.38,0.24}, and {0.48,0.2}. It is interesting to see that the performance of all four schemes gets worse when the probability of either of the two sensing errors gets large. We can trade off between the false alarm and miss detection probabilities to find the optimal operating point for the spectrum sensors. Moreover, the dynamic range of the video quality is not large for the range of sensing errors simulated, compared to that in Fig. \[fig:multiFBSm\]. This is because both sensing errors are modeled and treated in the algorithms. Again, our proposed scheme outperforms the two heuristic schemes and SCA-MAC with considerable margins over the entire range.

![Interfering FBS’s: received video quality vs. sensing error probability.[]{data-label="fig:multiFBSepsilon"}](MultiFBS_SenseErr_Plot.eps){width="4.5in" height="3.0in"}

We also investigate the impact of the bandwidth of the common channel $B_0$. In this simulation, we fix $B_1$ at 0.3 Mbps and increase $B_0$ from 0.1 Mbps to 0.5 Mbps with step size 0.1 Mbps. The results are presented in Fig. \[fig:multiFBSCtrlBW\]. We notice that the average video quality increases rapidly as the common channel bandwidth is increased from 0.1 Mbps to 0.3 Mbps. Beyond 0.3 Mbps, the increase of the PSNR curves slows down and the curves flatten out. This implies that a very large bandwidth for the common channel is not necessary, since the gain from additional bandwidth diminishes as $B_0$ gets large. Again, the proposed scheme outperforms the other three schemes, and the gap between our scheme and the upper bound is small. ![Interfering FBS’s: received video quality vs.
bandwidth of the common channel.[]{data-label="fig:multiFBSCtrlBW"}](MultiFBS_CtrlBW_Plot.eps){width="4.5in" height="3.0in"}

Next, we stop the distributed algorithm after a fixed amount of time and evaluate the suboptimal solutions. In particular, we vary the duration of the time slots and let the distributed algorithm run for 5% of the time slot duration at the beginning of each time slot. The solution obtained this way is then used for the video data transmissions. The results are presented in Fig. \[fig:timedura\]. It can be seen that when the time slot is 5 ms, the algorithm does not converge after 5%$\times$5 = 0.25 ms, and the PSNR produced by the distributed algorithm is close to that of Heuristic 1 and lower than those of Heuristic 2 and SCA-MAC. When the time slot is sufficiently large, the algorithm can get closer to the optimum and the proposed algorithm produces better video quality than the two heuristic algorithms and SCA-MAC. Beyond 20 ms, the increase in PSNR is small since all the curves flatten out. Therefore the proposed algorithm can be useful even when there is no time for it to fully converge to the optimum.

![Video quality achieved by the algorithms when they are only executed for 5% of the time slot duration.[]{data-label="fig:timedura"}](MultiFBS_TimeDura_Plot.eps){width="4.5in" height="3.0in"}

During the simulations, we find that the collision rate with primary users is strictly kept below the prescribed collision tolerance $\gamma$. These results are omitted for brevity.

Conclusions {#sec:femto_conc}
===========

In this paper, we first investigated data multicast in femtocell networks consisting of an MBS and multiple FBS’s. We adopted SC and SIC for multicast data and investigated how to assign transmit powers to the packet levels. The objective was to minimize the total BS power consumption, while guaranteeing successful decoding of the multicast data at each user. We developed optimal and near-optimal algorithms with low computational complexity, as well as performance bounds. The algorithms were evaluated with simulations and shown to outperform a heuristic scheme with considerable gains. Next, we investigated the problem of streaming multiple MGS videos in a femtocell CR network. We formulated a multistage stochastic programming problem considering various design factors across multiple layers. We developed a distributed algorithm that produces optimal solutions in the case of non-interfering FBS’s, and a greedy algorithm for near-optimal solutions in the case of interfering FBS’s with a proven performance lower bound. The proposed algorithms were evaluated with simulations and shown to outperform three alternative schemes with considerable gains.
--- abstract: 'We have analyzed available optical data for Au in the mid-infrared range which is important for a precise prediction of the Casimir force. Significant variation of the data demonstrates genuine sample dependence of the dielectric function. We demonstrate that the Casimir force is largely determined by the material properties in the low frequency domain and argue that therefore the precise values of the Drude parameters are crucial for an accurate evaluation of the force. These parameters can be estimated by two different methods, either by fitting real and imaginary parts of the dielectric function at low frequencies, or via a Kramers-Kronig analysis based on the imaginary part of the dielectric function in the extended frequency range. Both methods lead to very similar results. We show that the variation of the Casimir force calculated with the use of different optical data can be as large as 5% and at any rate cannot be ignored. To have a reliable prediction of the force with a precision of 1%, one has to measure the optical properties of metallic films used for the force measurement.' address: - '$^1$Laboratoire Kastler Brossel, ENS, CNRS, UPMC, 4, place Jussieu, Case 74, 75252 Paris Cedex 05, France' - '$^2$MESA+ Research Institute, University of Twente, P.O. 217, 7500 AE Enschede, The Netherlands' author: - 'I. Pirozhenko$^1$, A. Lambrecht$^1$, and V. B. Svetovoy$^2$' title: Sample dependence of the Casimir force --- Introduction\[Sec1\] ==================== The Casimir force [@Cas48] between uncharged metallic plates attracts considerable attention as a macroscopic manifestation of the quantum vacuum [@Mil94; @Mos97; @Mil01; @Kar99; @Bor01]. With the development of microtechnologies, which routinely control the separation between bodies smaller than 1 $\mu m$, the force became a subject of systematic experimental investigation. Modern precision experiments have been performed using different techniques such as torsion pendulum [@Lam97], atomic force microscope (AFM) [@Moh98; @Har00], microelectromechanical systems (MEMS) [@Cha01; @Dec03a; @Dec03b; @Dec05; @Ian04; @Ian05] and different geometrical configurations: sphere-plate [@Lam97; @Har00; @Dec03b], plate-plate [@Bre02] and crossed cylinders [@Ede00]. The relative experimental precision of the most precise of these experiments is estimated to be about 0.5% for the recent MEMS measurement [@Dec05] and 1% for the AFM experiments [@Har00; @Cha01]. In order to come to a valuable comparison between the experiments and the theoretical predictions, one has to calculate the force with a precision comparable to the experimental accuracy. This is a real challenge to the theory because the force is material, surface, geometry and temperature dependent. Here we will only focus on the material dependence, which is easy to treat on a level of some percent precision but which will turn out difficult to tackle on a high level of precision since different uncontrolled factors are involved. In its original form, the Casimir force per unit surface [@Cas48] $$F_{c}\left( a\right) =-\frac{\pi ^{2}}{240}\frac{\hbar c}{L^{4}} \label{Fc}$$ was calculated between ideal metals. It depends only on the fundamental constants and the distance between the plates $L$. The force between real materials differs significantly from (\[Fc\]) for mirror separations smaller than 1 $\mu$m. 
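For orientation, Eq. (\[Fc\]) is easily evaluated numerically; the following minimal Python snippet (an illustration added here, not part of the original analysis) gives the ideal-metal pressure at a few separations.

```python
# Ideal-metal Casimir pressure, Eq. (Fc): F_c = -(pi^2/240) * hbar*c / L^4
from scipy.constants import hbar, c, pi

def ideal_casimir_pressure(L):
    """Force per unit area (Pa) between perfect mirrors at separation L (m)."""
    return -(pi**2 / 240.0) * hbar * c / L**4

for L_nm in (100, 300, 1000):
    print(f"L = {L_nm:4d} nm : F_c = {ideal_casimir_pressure(L_nm * 1e-9):.3e} Pa")
```

At $L=100$ nm this gives a pressure of roughly 13 Pa, and the steep $L^{-4}$ dependence explains why sub-micron separations are required in precision measurements.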
For mirrors of arbitrary material, which can be described by reflection coefficients, the force per unit area can be written as [@Lam00]: $$\begin{aligned} F&=& 2\sum_{\mu}\int \frac{\mathrm{d}^{2}\mathbf{k}}{4\pi ^{2}} \int_{0}^{\infty }\frac{\mathrm{d}\zeta }{2\pi } \hbar\kappa \frac{r_{\mu}\left[ i\zeta ,\mathbf{k} \right]^2 e^{-2\kappa L}}{1-r_{\mu} \left[ i\zeta ,\mathbf{k} \right]^2 e^{-2\kappa L}}\nonumber \\ &&\kappa=\sqrt{\mathbf{k}^{2}+ \frac{\zeta ^{2}}{c^2}} \label{Force}\end{aligned}$$ where $r_{\mu}=(r_s,r_p)$ denotes the reflection amplitude for a given polarization $\mu=s,\;p$ $$\begin{aligned} r_{s } &=&-\frac{\sqrt{\mathbf{k}^{2}+ \varepsilon \left( i\zeta \right)\frac{\zeta ^{2}}{c^2}}-c\kappa } {\sqrt{\mathbf{k}^{2}+ \varepsilon \left( i\zeta \right) \frac{\zeta ^{2}}{c^2}}+c\kappa } \nonumber \\ r_{p} &=&\frac{\sqrt{\mathbf{k}^{2}+ \varepsilon \left( i\zeta \right)\frac{\zeta ^{2}}{c^2}}-c\kappa \varepsilon \left( i\zeta \right) }{\sqrt{\mathbf{k}^{2}+ \varepsilon \left( i\zeta \right)\frac{\zeta ^{2}}{c^2}}+c\kappa \varepsilon \left( i\zeta \right) } \label{rThick}\end{aligned}$$ The force between dielectric materials had first been derived by Lifshitz [@Lif56; @LP9]. The material properties enter these formulas via the dielectric function $\varepsilon \left( i\zeta \right) $ at angular imaginary frequencies $\omega=i\zeta $, which is related to the physical quantity $\varepsilon ^{\prime \prime }\left( \omega \right)= \mathrm{Im}\left( \varepsilon \left( \omega \right)\right) $ with the help of the dispersion relation $$\varepsilon \left( i\zeta \right) -1=\frac{2}{\pi }\int\limits_{0}^{\infty } d\omega\frac{\omega \varepsilon ^{\prime \prime }\left( \omega \right) }{\omega ^{2}+\zeta ^{2}}. \label{K-K}$$ For metals $\varepsilon ^{\prime \prime }\left( \omega \right)$ is large at low frequencies, thus the main contribution to the integral in Eq. (\[K-K\]) comes from the low frequencies even if $\zeta $ corresponds to the visible frequency range. For this reason the low-frequency behavior of $\varepsilon(\omega)$ is of primary importance. The Casimir force is often calculated using the optical data taken from [@HB1], which provides real and imaginary parts of the dielectric function within some frequency range, typically between 0.1 and $10^4$ eV for the most commonly used metals, Au, Cu and Al, corresponding to a frequency interval $[1.519\cdot 10^{14},1.519 \cdot10^{19}]$ rad/s (1 eV=$1.519 \cdot10^{15}$ rad/s [^1]). When the two plates are separated by a distance $L$, one may introduce a characteristic imaginary frequency $\zeta_{\rm ch}=c/2L$ of electromagnetic field fluctuations in the gap. Fluctuations of frequency $\zeta \sim \zeta _{\rm ch}$ give the dominant contribution to the Casimir force. For example, for a plate separation of $L=100$ nm the characteristic imaginary frequency is $\zeta _{\rm ch}=0.988$ eV. Comparison with the frequency interval where optical data is available shows that the high frequency data exceeds the characteristic frequency by 3 orders of magnitude, which is sufficient for the calculation of the Casimir force. However, in the low frequency domain, optical data exists only down to frequencies which are one order of magnitude below the characteristic frequency, which is not sufficient to evaluate the Casimir force. Therefore for frequencies lower than the lowest tabulated frequency, $\omega _{\rm c}$, the data has to be extrapolated. 
This is typically done by a Drude dielectric function $$\varepsilon \left( \omega \right) =1-\frac{\omega _{\rm p}^{2}}{\omega \left( \omega +i\omega _{\tau }\right) }, \label{Drude}$$ which is determined by two parameters, the plasma frequency $\omega _{\rm p}$ and the relaxation frequency $\omega _{\tau }$. Different procedures to get the Drude parameters have been discussed in the literature. They may be estimated, for example, from information in solid state physics or extracted form the optical data at the lowest accessible frequencies. The exact values of the Drude parameters are very important for the precise evaluation of the force. Lambrecht and Reynaud [@Lam00] fixed the plasma frequency using the relation $$\omega _{\rm p}^{2}=\frac{Ne^{2}}{\varepsilon _{0}m_{e}^{\ast }}, \label{Omp}$$ where $N$ is the number of conduction electrons per unit volume, $e $ is the charge and $m_{e}^{\ast }$ is the effective mass of electron. The plasma frequency was evaluated using the bulk density of Au, assuming that each atom gives one conduction electron and that the effective mass coincides with the mass of the free electron. The optical data at the lowest frequencies were then used to estimate $\omega _{\tau }$ with the help of Eq. (\[Drude\]). In this way the plasma frequency $\omega _{\rm p}=9.0$ eV and the relaxation frequency $\omega _{\tau }=0.035$ eV have been found. This procedure was largely adopted in the following [@Har00; @Ede00; @Cha01; @Bre02; @Dec03a]. However, on the example of Cu, it was stressed in [@Lam00] that the optical data may vary from one reference to another and a different choice of parameters for the extrapolation procedure to low frequencies can influence the Casimir force significantly. Boström and Sernelius [@Bos00b] and Svetovoy and Lokhanin [@Sve00b] extracted the low-frequency optical data by fitting them with Eq. (\[Drude\]). For one set of data from Ref. [@HB2] the result [@Sve00b] was close to that found by the first approach, but using different sources for the optical data collected in Ref. [@HB2] an appreciable difference was found [@Sve00a; @Sve00b]. This difference was attributed to the defects in the metallic films which appear as the result of the deposition process. It was indicated that the density of the deposited films is typically smaller and the resistivity larger than the corresponding values for the bulk material. The dependence of optical properties of Au films on the details of the deposition process, annealing, voids in the films, and grain size was already discussed in the literature [@Sve03b]. In this paper we analyze the optical data for Au from several available sources, where the mid-infrared frequency range was investigated. The purpose is to establish the variation range of the Drude parameters and calculate the uncertainty of the Casimir force due to the variation of existing optical data. This uncertainty is of great importance in view of the recent precise Casimir force measurement [@Che04; @Dec05] which have been performed with high experimental accuracy. On the other hand, sophisticated theoretical calculations predict the Casimir force at the level of 1% or better. These results illustrate the considerable progress achieved in the field in only one decade. In order to assure a comparison between theory and experiment at the same level of precision, one has to make sure that the theoretical calculation considers precisely the same system investigated in the experiment. This is the key point we want to address in our paper. 
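To illustrate how the Drude parameters enter the force, the sketch below (an illustrative calculation, not the procedure used in this paper: the Drude form (\[Drude\]) is assumed here at *all* imaginary frequencies instead of being combined with tabulated data, and the integration grid and cutoffs are ad hoc choices) evaluates Eq. (\[Omp\]) for the bulk density of Au and then the force per unit area from Eqs. (\[Force\])-(\[rThick\]), expressed as a fraction of the ideal result (\[Fc\]).

```python
# Illustrative sketch: Casimir pressure of Eqs. (Force)-(rThick) for a pure
# Drude mirror, with omega_p from Eq. (Omp) and omega_tau = 0.035 eV assumed.
import numpy as np
from scipy.integrate import dblquad
from scipy.constants import hbar, c, pi, e, m_e, epsilon_0, N_A

# Eq. (Omp): one conduction electron per Au atom, bulk density 19.3 g/cm^3,
# molar mass 196.97 g/mol, free-electron mass.
N = 19.3e3 / 196.97e-3 * N_A                        # electrons per m^3
omega_p = np.sqrt(N * e**2 / (epsilon_0 * m_e))     # ~1.4e16 rad/s ~ 9.0 eV
omega_tau = 0.035 * e / hbar                        # 0.035 eV in rad/s

def eps(zeta):
    """Drude dielectric function at imaginary frequency zeta (rad/s), Eq. (Drude)."""
    return 1.0 + omega_p**2 / (zeta * (zeta + omega_tau))

def casimir_pressure(L):
    """Magnitude of the force per unit area between two thick Drude mirrors."""
    def integrand(kappa, zeta):                     # kappa >= zeta/c is the inner variable
        ez = eps(zeta)
        kt = np.sqrt(kappa**2 + (ez - 1.0) * (zeta / c)**2)
        ex = np.exp(-2.0 * kappa * L)
        total = 0.0
        for r2 in (((kappa - kt) / (kappa + kt))**2,          # s polarization
                   ((ez * kappa - kt) / (ez * kappa + kt))**2):  # p polarization
            total += r2 * ex / (1.0 - r2 * ex)
        return kappa**2 * total
    zc = c / (2.0 * L)                              # characteristic imaginary frequency
    val, _ = dblquad(integrand, 1e-6 * zc, 50.0 * zc,            # zeta range
                     lambda z: z / c, lambda z: 50.0 / (2 * L),  # kappa range
                     epsrel=1e-6)
    return hbar / (2.0 * pi**2) * val

L = 100e-9
F_ideal = pi**2 * hbar * c / (240.0 * L**4)
print(f"omega_p = {hbar * omega_p / e:.2f} eV")
print(f"reduction factor at L = 100 nm: {casimir_pressure(L) / F_ideal:.2f}")
```

With $\omega_{\rm p}\approx 9$ eV and $\omega_{\tau}=0.035$ eV the reduction with respect to ideal mirrors at $L=100$ nm should come out close to one half, in the neighborhood of the values obtained with the full data-based evaluation in Sec. \[Sec6\]; changing $\omega_{\rm p}$ by a few percent changes the result visibly, which is precisely the sensitivity discussed below.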
With our current investigation we find an intrinsic force uncertainty of the order of 5% coming from the fact that the Drude parameters are not precisely known. These parameters may vary from one sample to another, depending on many details of the preparation conditions. In order to assure a comparison at the level of 1% or better between theoretical predictions and experimental results for the Casimir force, the optical properties of the mirrors have to be measured in the experiment. The paper is organized as follows. In Sec. \[Sec2\] we explain and discuss the importance of the precise values of the Drude parameters. In Sec. \[Sec3\] the existing optical data for gold are reviewed and analyzed. The Drude parameters are extracted from the data by fitting both real and imaginary parts of the dielectric function at low frequencies in Sec. \[Sec4\]. In Section \[Sec5\] the Drude parameters are estimated by a different method using Kramers-Kroning analysis. The uncertainty in the Casimir force due to the sample dependence is evaluated in Sec. \[Sec6\] and we present our conclusions in Sec. \[Sec7\]. Importance of the values of the Drude parameters\[Sec2\] ======================================================== In Figure \[fig1\] (left) we present a typical plot of the imaginary part of the dielectric function, which comprises Palik’s Handbook data for gold [@HB1]. The solid line shows the actual data taken from two original sources: the points to the right of the arrow are those by Thèye [@The70] and to the left by Dold and Mecke [@Dol65]. No data is available for frequencies smaller than the cutoff frequency $\omega _{\rm c}$ ($0.125$ eV for this data set) and $\varepsilon ^{\prime \prime }\left( \omega \right) $ has to be extrapolated into the region $\omega <\omega _{\rm c}$. The dotted line shows the Drude extrapolation with the parameters $\omega _{\rm p}=9.0$ eV and $\omega _{\tau }=0.035$ eV obtained in Ref. [@Lam00]. One can separate three frequency regions in Fig. \[fig1\] (left panel). The region marked as [1]{} corresponds to the frequencies smaller than $\omega _{\rm c}$. The region [2]{} defining the Drude parameters extends from the cutoff frequency to the edge of the interband absorption $\omega _{0}$. The high energy domain $\omega>\omega _{0}$ is denoted by [3]{}. We may now deduce the dielectric function at imaginary frequencies (\[K-K\]) using the Kramers-Kronig relation $$\varepsilon \left( i\zeta \right) =1+\varepsilon _{1}\left( i\zeta \right) +\varepsilon _{2}\left( i\zeta \right) +\varepsilon _{3}\left( i\zeta \right) , \label{split}$$ where the indices 1, 2, and 3 indicate respectively the integration ranges $0\leq \omega <\omega _{\rm c}$, $\omega _{\rm c}\leq\omega <\omega _{0}$, and $\omega _{0}\leq \omega <\infty $. $\varepsilon _{1}$ can be derived using the Drude model (\[Drude\]) leading to $$\varepsilon _{1}\left( i\zeta \right) =\frac{2}{\pi }\frac{\omega _{p}^{2}}{\zeta ^{2}-\omega _{\tau }^{2}}\left[ \tan ^{-1}\left( \frac{\omega _{c}}{\omega _{\tau }}\right) -\frac{\omega _{\tau }}{\zeta }\tan ^{-1}\left( \frac{\omega _{c}}{\zeta }\right) \right] . \label{eps1}$$ The two other functions $\varepsilon _{2}$ and $\varepsilon _{3}$ have to be calculated numerically. The results for all three functions as well as for $ \varepsilon \left( i\zeta \right) $ are shown in Fig. \[fig1\] (right). One can clearly see that $\varepsilon _{1}\left( i\zeta \right) $ dominates the dielectric function at imaginary frequencies up to $\zeta \approx 5$ eV. 
$\varepsilon _{2}\left( i\zeta \right) $ gives a perceptible contribution to $\varepsilon \left( i\zeta \right)$, while $\varepsilon_{3}\left( i\zeta \right)$ produces minor contribution negligible for $\zeta<0.5$ eV. As mentioned in the Introduction, we may introduce a characteristic imaginary frequency $\zeta_{\rm ch}=c/2L$ of field fluctuations which give the dominant contribution to the Casimir force between two plates at a distance $L$. For a plate separation of $L=100$ nm the characteristic imaginary frequency is $\zeta _{\rm ch}=0.988$ eV. At this frequency the contributions of different frequency domains to $\varepsilon \left( i\zeta _{ch}\right) $ are $\varepsilon _{1}=68.42$, $\varepsilon _{2}=15.65$, and $\varepsilon _{3}=5.45$. This means that for all experimentally investigated situations, $L\gtrsim100$ nm, region [1]{}, corresponding to the extrapolated optical data, gives the main contribution to $\varepsilon \left( i\zeta \right)$. It is therefore important to know precisely the Drude parameters. Analysis of different optical data for gold\[Sec3\] =================================================== The optical properties of gold were extensively investigated in 50-70th. In many of those works the importance of sample preparation methods was recognized and carefully discussed. A complete bibliography of the publications up to 1981 can be found in Ref. [@Wea81]. Regrettably the contemporary studies of gold nanoclusters produce data inappropriate for our purposes. Among recent experiments let us mention the measurement of normal reflectance for evaporated gold films [@Sot03], which was performed in the wide wavelength range $0.3-50$ $\mu$m, but unfortunately does not permit to evaluate independently both real and imaginary parts of the dielectric function. In contrast, the use of new ellipsometric techniques [@An02; @Xia00] has produced data for the real and imaginary part of the dielectric function for energy intervals $1.5-4.5$ eV [@Wan98] and $1.5-3.5$ eV [@Ben99]. A significant amount of data in the interband absorption region (domain [3]{}) has been obtained by different methods under different conditions [@Pel69; @The70; @Joh72; @Gue75; @Asp80; @Wan98; @Ben99]. Though this frequency band is not very important for the Casimir force, it provides information on how the data may vary from one sample to another. On the contrary there are only a few sources where optical data was collected in the mid-infrared (domain 2) and from which the dielectric function can be extracted. The data available for $ \varepsilon ^{\prime }\left( \omega \right) $ and $ \varepsilon ^{\prime \prime }\left( \omega \right) $ in the range $\omega <1.5$ eV and interband absorption domain [3]{} are presented respectively in the left and right graph of Fig. \[fig2\]. These data sets demonstrate considerable variations of the dielectric function from one sample to another. Let us briefly discuss the sets of data [@HB1; @Wea81; @Mot64; @Pad61] used in our analysis and the corresponding samples. The commonly used Handbook of Optical Constants of Solids [@HB1] comprises the optical data covering the region from $0.125$ to $9184$ eV (dots in Fig. \[fig2\]). The experimental points are assembled from several sources. For $\omega<1$ eV they are reported by Dold and Mecke [@Dol65]. For higher frequencies up to $6$ eV they correspond to the Thèye data [@The70]. 
Dold and Mecke give little information about the sample preparation, reporting only that the films were evaporated onto a polished glass substrate and measured in air using an ellipsometric technique [@Dol65]. Annealing of the samples was not reported. Thèye [@The70] described her films very carefully. The samples were semitransparent Au films with a thickness of $100-250$ Å evaporated in ultrahigh vacuum on supersmooth fused silica. The substrate was kept in most cases at room temperature. After the deposition the films were annealed in the same vacuum at $ 100-150^{\circ }$ C. The structure of the films was investigated by X-ray and transmission-electron-microscopy methods. The dc resistivity of the films was found to be very sensitive to the preparation conditions. The errors in the optical characteristics of the films were estimated at the level of a few percent. The handbook [@Wea81] covers the optical data from $0.1$ eV to $28.6$ eV (marked with squares in Fig. 2). The data in the domain $\omega<4$ eV is provided by Weaver et al. [@Wea81]. The values of $\varepsilon(\omega)$ were found for an electropolished bulk Au(110) sample. Originally the reflectance was measured in a broad interval $0.1\leq \omega \leq 30$ eV and then the dielectric function was determined by a Kramers-Kronig analysis. Due to this indirect determination of $\varepsilon $, the recommended accuracy of these data is only 10%. The optical data of Motulevich and Shubin [@Mot64] for Au films is marked with circles in Fig. 2. In this paper the films were carefully described. Gold was evaporated on polished glass at a pressure of $\sim 10^{-6}$ Torr. The investigated films were $0.5-1\ \mu$m thick. The samples were annealed in the same vacuum at $400^{\circ }$ C for more than 3 hours. The optical constants $n$ and $k$ ($n+ik=\sqrt{\varepsilon }$) were measured by polarization methods in the spectral range $1-12\ \mu$m. The errors in $n$ and $k$ were estimated as 2-3% and 0.5-1%, respectively. Finally, the triangles represent the Padalka and Shklyarevskii data [@Pad61] for unannealed Au films evaporated onto glass. The variation of the data points from different sources cannot be explained by experimental errors. The observed deviation is the result of different preparation procedures and reflects a genuine difference between the samples. The deposition method, the type, temperature and quality of the substrate, and the deposition rate all influence the optical properties. When we speak about a precise comparison between theory and experiment for the Casimir force at the level of 1% or better, there is no such material as gold in general any more; there is only a gold sample prepared under definite conditions. Evaluation of the Drude parameters through extrapolation\[Sec4\]
================================================================ We will now use the available data in the mid-infrared region to extrapolate into the low frequency range. If the transition between inter- and intraband absorption in gold is sharp, the data below $\omega _{0}$ should be well described by the Drude function $$\varepsilon ^{\prime }\left( \omega \right) =1-\frac{\omega _{p}^{2}}{\omega ^{2}+\omega _{\tau }^{2} },\ \ \varepsilon ^{\prime \prime }\left( \omega \right) =\frac{\omega _{p}^{2}\omega _{\tau }}{\omega \left( \omega ^{2}+\omega _{\tau }^{2}\right) }. \label{ImDrude}$$
For $\omega \gg \omega _{\tau }$, the data on the log-log plot should fit straight lines with slopes $-2$ and $-3$ for $\varepsilon ^{\prime }$ and $\varepsilon ^{\prime\prime }$, respectively, shifted along the ordinate due to the variation of the parameters for different samples. The data points in the right graph of Fig. \[fig2\] are in general agreement with these expectations. The onset values for $\varepsilon ^{\prime\prime }$, $\ln(\omega_{\rm p}^2\omega_{\tau})$, vary more significantly due to the significant change in $\omega_{\tau}$ from sample to sample, but the Casimir force is in general not very sensitive to the relaxation parameter [@Lam00]. The onset values for $-\varepsilon ^{\prime }$, $\ln(\omega_{\rm p}^2)$, vary less, but this variation is more important for the Casimir force, which is particularly sensitive to the value of the plasma frequency $\omega_{\rm p}$. The Drude parameters can be found by fitting both $\varepsilon ^{\prime }$ and $ \varepsilon ^{\prime \prime }$ with the functions (\[ImDrude\]). This procedure is discussed below. The dielectric function at low frequencies, $\omega < \omega_{\rm c}$, is found by extrapolation of the optical data from the mid-infrared domain, $\omega_{\rm c}<\omega<\omega_0$. The real and imaginary parts of $\varepsilon $ follow from Eq. (\[ImDrude\]) with an additional polarization term ${\cal P}$ in $\varepsilon ^{\prime }$: $$\varepsilon ^{\prime }\left( \omega \right) ={\cal P}-\frac{\omega _{p}^{2}}{\omega ^{2}+\omega _{\tau }^{2}},\ \ \varepsilon ^{\prime \prime }\left( \omega \right) =\frac{\omega _{p}^{2}\omega _{\tau }}{\omega \left( \omega ^{2}+\omega _{\tau }^{2}\right) }. \label{DrudeRI}$$ The polarization term appears here for the following reason. The total dielectric function $\varepsilon =\varepsilon _{\left( c\right) }+\varepsilon _{\left( i\right) }$ includes contributions due to the conduction electrons, $\varepsilon _{\left( c\right) }$, and the interband transitions, $\varepsilon _{\left( i\right) }$. The polarization term consists of the atomic polarizability and the polarization due to the interband transitions $ \varepsilon _{\left( i\right) }^{\prime }$, $${\cal P}=1+\frac{N_{a}\alpha }{\varepsilon _{0}}+\varepsilon _{\left( i\right) }^{\prime }\left( \omega \right) , \label{polariz}$$ where $\alpha $ is the atomic polarizability and $N_{a}$ the concentration of atoms. If the transition from intra- to interband absorption is sharp, the polarization can be considered constant, because the interband transitions have a threshold behavior with an onset frequency $\omega _{0}$ and the Kramers-Kronig relation allows one to express $\varepsilon _{\left( i\right) }^{\prime }$ as $$\varepsilon _{\left( i\right) }^{\prime }\left( \omega \right) =\frac{2}{\pi }\int\limits_{\omega _{0}}^{\infty }dx\frac{x\varepsilon _{\left( i\right) }^{\prime \prime }\left( x\right) }{x^{2}-\omega ^{2}}. \label{KKi}$$ For $\omega \ll \omega _{0}$ this integral does not depend on $\omega $, leading to a constant $\varepsilon _{\left( i\right) }^{\prime }\left( \omega \right) $. In reality the situation is more complicated because the transition is not sharp and many factors can influence the transition region. We will assume here that ${\cal P}$ is a constant, but the fitting procedure will be shifted to frequencies where the transition tail is not very important. In practice Eq. (\[DrudeRI\]) can be applied for $\omega <1$ eV.
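A minimal sketch of such a joint fit is given below; since the tabulated data are not reproduced here, it is exercised on synthetic noisy Drude data, and the relative weighting of $\varepsilon'$ and $\varepsilon''$ is an illustrative choice rather than the $\chi^2$ prescription used for Table \[tab1\] below.

```python
# Illustrative joint fit of (omega_p, omega_tau, P) to eps' and eps'' data,
# Eq. (DrudeRI); exercised on synthetic data, not on the tabulated sources.
import numpy as np
from scipy.optimize import least_squares

def drude_re_im(omega, omega_p, omega_tau, P):
    """Eq. (DrudeRI): real and imaginary parts of the extrapolating function."""
    re = P - omega_p**2 / (omega**2 + omega_tau**2)
    im = omega_p**2 * omega_tau / (omega * (omega**2 + omega_tau**2))
    return re, im

def residuals(params, omega, eps_re, eps_im):
    re, im = drude_re_im(omega, *params)
    # relative residuals keep eps'', which spans orders of magnitude, from
    # dominating the joint fit
    return np.concatenate(((re - eps_re) / np.abs(eps_re),
                           (im - eps_im) / np.abs(eps_im)))

# synthetic "measured" data on a mid-infrared grid, 0.125-1 eV
rng = np.random.default_rng(1)
omega = np.linspace(0.125, 1.0, 40)                    # eV
re0, im0 = drude_re_im(omega, 8.4, 0.02, 7.0)          # assumed "true" sample
eps_re = re0 * (1.0 + 0.02 * rng.standard_normal(omega.size))
eps_im = im0 * (1.0 + 0.02 * rng.standard_normal(omega.size))

fit = least_squares(residuals, x0=[9.0, 0.035, 1.0], args=(omega, eps_re, eps_im))
omega_p, omega_tau, P = fit.x
print(f"omega_p = {omega_p:.2f} eV, omega_tau = {omega_tau*1e3:.0f} meV, P = {P:.1f}")
```

Even in this synthetic exercise, ${\cal P}$ is only weakly constrained, since it is a small additive term compared to $|\varepsilon'|$ in this frequency window; the same limitation shows up in the errors quoted below.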
Our purpose is now to establish the magnitude of the force change due to reasonable variation of the optical properties. To this end the available low-frequency data for $\varepsilon ^{\prime }\left( \omega \right) $ and $\varepsilon ^{\prime\prime }\left( \omega \right) $ presented in the left graph of Fig. \[fig2\] were fitted with Eq. (\[DrudeRI\]). The results together with the expected errors are collected in Table \[tab1\]. N $\ \ \ \omega _{p}$(eV) $\omega _{\tau }\cdot 10^{2}$(eV)     ${\cal P}$ --- ------------------------- ----------------------------------- ------------------ -------------------------------------------------------- 1 $7.50\pm 0.02$ $6.1\pm 0.07$ $-27.67\pm 5.79$ Palik, 66 points , $\ \cdot$ 2 $8.41\pm 0.002$ $2.0\pm 0.005$ $7.15\pm 0.035$ Weaver, 20 points,  $\blacksquare, \Box $ 3 $8.84\pm 0.03$ $4.2\pm 0.06$ $12.94\pm 16.81$ Motulevich, 11 points,  $\bullet, \circ$ 4 $6.85\pm 0.02$ $3.6\pm 0.05$ $-12.33\pm 9.13$ Padalka 11 points,  $\blacktriangledown,\triangledown$ : The Drude parameters found by fitting the available infrared data for $\varepsilon ^{\prime }\left( \omega \right)$ and $\varepsilon ^{\prime \prime }\left( \omega \right) $ with Eq. (\[DrudeRI\]). The error is statistical.[]{data-label="tab1"} The error in Table \[tab1\] is the statistical uncertainty. It was found using a $\chi ^{2}$ criterion for joint estimation of 3 parameters [@PatDat]. For a given parameter the error corresponds to the change $\Delta\chi ^{2}=1$ when two other parameters are kept constant. The parameter ${\cal P}$ enters (\[DrudeRI\]) as an additive constant and in the considered frequency range its value is smaller than 1% of $\varepsilon ^{\prime }\left( \omega \right)$ . That is why the present fitting procedure cannot resolve it with reasonable errors. As mentioned before, in the case of the Weaver data [@Wea81] the recommended precision in $\varepsilon^{\prime}$ and $\varepsilon^{\prime\prime}$ is 10% while Motulevich and Schubin reported 2-3% and 0.5-1% errors in $n$ and $k$. We did not take these errors explicitly into account as we do not know if they are of statistical or systematic nature or a combination of both. But to illustrate their possible influence let us just mention that if we interpret them as systematic errors, we can propagate the errors in $\varepsilon$ or $n,k$ to the values of $\omega_{\rm p}$ and $\omega_{\tau}$, leading to an additional error in $\omega_{\rm p}$ of about 5% for the Weaver data and 1% for the Motulevich data and twice as large in $\omega_{\tau}$. Significant variation of the plasma frequency, well above the errors, is a distinctive feature of the table. The bulk and annealed samples (rows 2 and 3) demonstrate larger values of $\omega _{\rm p}$. The rows 1 and 4 corresponding to the evaporated unannealed films give rise to considerably smaller plasma frequencies $\omega _{\rm p}$. Note that our calculations are in agreement with the one given by the authors [@Dol65; @Pad61] themselves. To have an idea of the quality of the fitting procedure, we show in Fig. \[fig5\] the experimental points and the best fitting curves for Dold and Mecke data [@Dol65; @HB1] (full circles and solid lines) and Motulevich and Shubin data [@Mot64] (open circles and dashed lines). Only 25% of the points from [@HB1] are shown for clarity. One can see that for $\varepsilon ^{\prime \prime }$ at high frequencies the dots lie above the solid line demonstrating presence of a wide transition between inter- and intraband absorption. 
Coincidence of the solid and dashed lines for $\varepsilon ^{\prime \prime }$ is accidental. The fits for $\varepsilon ^{\prime }$ are nearly perfect for both data sets. It is interesting to see on the same figure how well the parameters $\omega _{\rm p}=9.0$ eV, $\omega _{\tau }=0.035$ eV agree with the data in the mid-infrared range. The curves corresponding to this set of parameters are shown in Fig. \[fig5\] as dotted lines. One can see that the dotted line, which describes $\varepsilon ^{\prime \prime }$ is very close to the solid line. However, the dotted line for $ \varepsilon ^{\prime }$ does not describe well the handbook data (full circles). It agrees much better with Motulevich and Shubin data [@Mot64] (open circles). The reason for this is that $\omega _{\rm p}=9.0$ eV is the maximal plasma frequency for Au. Any real film may contain voids leading to smaller density of electrons and, therefore, to smaller $\omega _{\rm p}$. Motulevich and Shubin [@Mot64] annealed their films which reduced the number of defects and made the plasma frequency close to its maximum. A plasma frequency $\omega _{\rm p}=9.0$ eV was also reported in Ref. [@Ben66], where the authors checked the validity of the Drude theory by measuring reflectivity of carefully prepared gold films in ultrahigh vacuum in the spectral range $0.04<\omega<0.6$ eV. Therefore, this value is good if one disposes of well prepared samples. The Drude parameters from Kramers-Kronig analysis\[Sec5\] ========================================================= Because the values of the Drude parameters are crucial for a reliable prediction of the Casimir force, it is important to assess that different methods to determine the parameters give the same results. Alternatively to the extrapolation procedure of the previous section we will now discuss a procedure based on a Kramers-Kronig analysis. To this aim we will extrapolate only the imaginary part of the dielectric function to low frequencies $\omega<\omega_{\rm c}$. The dispersion relation between $\varepsilon^{\prime}$ and $\varepsilon^{\prime\prime}$ $$\label{KKrel} \varepsilon^{\prime}(\omega)-1=\frac{2}{\pi }P\int\limits_{0}^{\infty }dx\frac{x\varepsilon ^{\prime \prime }\left( x\right) }{x^{2}-\omega ^{2}}$$ can then be used to predict the behavior of $\varepsilon^{\prime}(\omega)$ and compare it with the one observed in the experiments. From this comparison the Drude parameters can be extracted. The low-frequency behavior of $\varepsilon^{\prime\prime}(\omega)$ is important for the prediction of $\varepsilon^{\prime}$ because for metals $\varepsilon^{\prime\prime}(\omega)\gg1$ in the low frequency range. Therefore, at $\omega<\omega_{\rm c}$ we are using $\varepsilon^{\prime\prime}(\omega)$ from Eq. (\[ImDrude\]). At higher frequencies the experimental data from different sources [@HB1; @Wea81; @Mot64; @Pad61] are used. The data in Refs. [@Mot64; @Pad61] must be extended to high frequencies starting from $\omega=1.25$ eV. We do this using the handbook data [@HB1]. Let us start from the data for bulk Au(110) [@Wea81]. This data set is given in the interval $0.1<\omega<30$ eV. Below $\omega=0.1$ eV we use the Drude model for $\varepsilon^{\prime\prime}$ and above $\omega=30$ eV the cubic extrapolation $C/\omega^3$. The Drude parameters are practically insensitive to the high frequency extrapolation. The data set was divided into overlapping segments containing 12 points. Each segment was fitted with a polynomial of forth order in frequency. 
The first segment, where $\varepsilon^{\prime\prime}(\omega)$ increases very fast, was fitted with a polynomial in $1/\omega$. Then, in the range of overlap (4 points) a new polynomial smoothly connecting the two segments was chosen. In this way we have fitted the experimental data with a function which is smooth up to the first derivative. The real part of the dielectric function $\varepsilon^{\prime}(\omega)$ is predicted by Eq. (\[KKrel\]) as a function of the Drude parameters $\omega_p$ and $\omega_{\tau}$. These parameters are chosen so as to minimize the difference between the observed and predicted values of $\varepsilon^{\prime}(\omega)$, leading to $\omega_{\rm p}=8.40$ eV and $\omega_{\tau}=0.020$ eV. These parameters are in reasonable agreement with the ones indicated in Tab. \[tab1\]. In Fig. \[fig6\] the experimental data (dots) and $|\varepsilon^{\prime}(\omega)|$ found from Eq. (\[KKrel\]) (solid line) are plotted, showing perfect agreement at low frequencies, while at high frequencies $\omega>2.6$ eV the agreement is not very good. This may be fixed by choosing an appropriate high frequency extrapolation. We do not give these details here as this extrapolation has practically no influence on the Drude parameters. When applying the same procedure to the handbook data [@HB1], we find $\omega_p=7.54$ eV and $\omega_{\tau}=0.051$ eV, again in agreement with the parameters indicated in Tab. \[tab1\]. Fig. \[fig7\] shows a plot of $\varepsilon^{\prime}(\omega)$ predicted with these parameters. At low frequencies the agreement with the experimental data is good, but it becomes worse where the intraband data [@Dol65] join the interband (high frequency) data [@The70]. These two data sets correspond to samples with different optical properties, and in this case the dispersion relation (\[KKrel\]) is not necessarily very well satisfied. In contrast with the previous case, the high frequency extrapolation cannot improve the situation; it influences the curve only marginally. Following the same procedure for the Motulevich and Shubin data [@Mot64], we find the Drude parameters $\omega_{\rm p}=8.81$ eV, $\omega_{\tau}=0.044$ eV, which are close to the values in Tab. \[tab1\]. The experimental data and the calculated function $|\varepsilon^{\prime}(\omega)|$ are shown in Fig. \[fig8\]. There is good agreement for frequencies $\omega<4$ eV, as the data in Ref. [@Mot64] match the Thèye data [@The70] very well. Deviations at higher frequencies are again quite sensitive to the high-frequency extrapolation, as already noted before. Similar calculations done for the Padalka and Shklyarevskii data [@Pad61] give the Drude parameters $\omega_{\rm p}=6.88$ eV and $\omega_{\tau}=0.033$ eV, producing good agreement only in the range $\omega<1.3$ eV because this data set matches the Thèye data [@The70] only poorly. The Kramers-Kronig analysis thus leads, for all 4 sets of experimental data, to essentially the same Drude parameters as the fitting procedure of the previous section. Experimental and calculated curves for $\varepsilon^{\prime}(\omega)$ are in very good agreement at low frequencies. At high frequencies the agreement is not as good, for two different reasons. First, at high frequencies the calculated curve is sensitive to the high-frequency extrapolation, and thus a better choice of this extrapolation can significantly reduce the high frequency deviations. The other reason is that one has to combine data from different sources to make a Kramers-Kronig analysis possible.
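The core of this procedure, namely predicting $\varepsilon'(\omega)$ from tabulated $\varepsilon''$ through the principal-value integral (\[KKrel\]), can be sketched as follows; this is a simplified stand-in for the segment-wise polynomial representation described above, checked here for consistency on a pure Drude model for which $\varepsilon'$ is known in closed form (the grid, cutoffs and pole treatment are illustrative choices).

```python
# Illustrative Kramers-Kronig reconstruction of eps'(omega) from eps''(omega),
# Eq. (KKrel), verified on a pure Drude model where the answer is known.
import numpy as np

omega_p, omega_tau = 8.4, 0.02                      # eV, illustrative parameters

def eps2(w):                                        # eps''(omega), Eq. (ImDrude)
    return omega_p**2 * omega_tau / (w * (w**2 + omega_tau**2))

def eps1_exact(w):                                  # eps'(omega) of the same Drude model
    return 1.0 - omega_p**2 / (w**2 + omega_tau**2)

def eps1_from_kk(w, x, e2):
    """Predict eps'(w) from tabulated eps''(x) via Eq. (KKrel).  The pole at
    x = w is removed with the standard subtraction, using
    P.V. int_0^inf dx/(x^2 - w^2) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        g = (x * e2 - w * eps2(w)) / (x**2 - w**2)
    g[~np.isfinite(g)] = 0.0                        # removable point if x hits w exactly
    return 1.0 + (2.0 / np.pi) * np.trapz(g, x)

x = np.logspace(-6, 3, 300000)                      # dense frequency grid (eV)
e2 = eps2(x)
for w in (0.2, 0.5, 1.0):                           # eV
    print(f"omega = {w} eV : KK {eps1_from_kk(w, x, e2):9.1f}   exact {eps1_exact(w):9.1f}")
```

In the actual analysis the tabulated $\varepsilon''$ of a given sample replaces the model function, and $\omega_{\rm p}$, $\omega_{\tau}$ in the extrapolated region $\omega<\omega_{\rm c}$ are adjusted until the predicted $\varepsilon'$ matches the measured one.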
Data sets from different sources do not always match each other well, as is the case, for example, for the Dold and Mecke data and the Thèye data. In such a case significant errors might be introduced into the dispersion relation. Indeed, the Kramers-Kronig analysis is a reliable tool only for data taken from the same sample.

Uncertainty in the Casimir force due to variation of optical properties\[Sec6\]
===============================================================================

We will now assess how the values of the Casimir force are influenced by the different values of the Drude parameters. As an example we consider as input the optical data for Au from [@HB1]. Instead of calculating the absolute value of the Casimir force, we will give the factor which measures the reduction of the Casimir force with respect to the ideal Casimir force between perfect mirrors, as introduced in [@Lam00], $$\label{eta} \eta_F=\frac{120 L^4}{c\pi^4}\int\limits_0^{\infty}d\kappa\,\kappa^2 \int\limits_0^\kappa d\zeta\sum_{\mu}\frac{r_{\mu}^2}{e^{2\kappa}-r_{\mu}^2}.$$ The dielectric function at imaginary frequencies $\varepsilon(i\zeta)$ is calculated using the Kramers-Kronig relation (\[K-K\]), and the integration region is divided into two parts, $$\label{imfreq} \int_0^{\infty}\frac{x\, \varepsilon''(x)}{x^2+\omega^2}dx\rightarrow \left\{\int_{0}^{x_c}+\int_{x_c}^{x_{max}}\right\}\frac{x\, \varepsilon''(x)}{x^2+\omega^2}dx=I_1+I_2.$$ We assume that for $x<x_{\rm c}$ the Drude model (\[ImDrude\]) is applicable. Then the integration in $I_1$ may be carried out explicitly, see (\[eps1\]). In $I_2$ we integrate from $x_{\rm c}=0.125$ eV to $x_{\rm max}=9000$ eV (corresponding to the range of available optical data in [@HB1]). For the calculation of the reduction factor (\[eta\]) the integration range was chosen as $10^{-4}-10^{3}$ eV. We also varied the integration range by half an order of magnitude, which changed the result by less than $0.1\%$. The results of the numerical integration are collected in Table \[Tab3\].

  ----------------------------------------------------------------- ------ ------ ------ ------ ------
  $\omega_{\rm p},\ \omega_{\tau}$ (eV) $\backslash$ $L$ ($\mu$m)      0.1    0.3    0.5    1.0    3.0
  1. $\omega_{\rm p}=7.50$, $\omega_{\tau}=0.061$                     0.43   0.66   0.75   0.85   0.93
  2. $\omega_{\rm p}=8.41$, $\omega_{\tau}=0.02$                      0.45   0.69   0.79   0.88   0.95
  3. $\omega_{\rm p}=8.84$, $\omega_{\tau}=0.0422$                    0.46   0.69   0.78   0.87   0.94
  4. $\omega_{\rm p}=6.85$, $\omega_{\tau}=0.0357$                    0.42   0.65   0.75   0.84   0.93
  5. $\omega_{\rm p}=9.00$, $\omega_{\tau}=0.035$                     0.47   0.71   0.79   0.88   0.95
  6. $\omega_{\rm p}=7.50\pm15\%$, $\omega_{\tau}=0.061$              0.45   0.68   0.77   0.86   0.94
                                                                      0.41   0.63   0.73   0.83   0.92
  7. $\omega_{\rm p}=7.50$, $\omega_{\tau}=0.061\pm30\%$              0.42   0.65   0.74   0.84   0.92
                                                                      0.44   0.67   0.76   0.86   0.93
  ----------------------------------------------------------------- ------ ------ ------ ------ ------

  : The reduction factors at different plate separations, calculated with the different pairs of values of the Drude parameters corresponding to the different data sets.
The last two rows show the variation of the reduction factor when either the plasma frequency or the relaxation parameter is varied.[]{data-label="Tab3"}

The first four rows of the table present the reduction factors for the four pairs of Drude parameters that were obtained by fitting the optical data from different sources. The next row shows the result obtained for $\omega_{\rm p}=9$ eV and $\omega_{\tau}=35$ meV. The last two rows show the variation of the reduction factor if the plasma frequency $\omega_{\rm p}$ or the relaxation parameter $\omega_{\tau}$ is varied by $\pm 15\%$ and $\pm 30\%$, respectively. The upper (lower) line corresponds here to the upper (lower) sign. The variation of the optical data and the associated Drude parameters introduces a variation in the Casimir force ranging from 5.5% at short distances (100 nm) to 1.5% at long distances (3 $\mu$m). The distance dependence is of course related to the fact that the material properties influence the Casimir force much more at short than at long plate separations. The strongest variation of 5.5% gives an indication of the genuine sample dependence of the Casimir force. For this reason it is necessary to measure the optical properties of the plates used in the Casimir force measurement if a precision of the order of 1% or better in the *comparison* between experimental values and theoretical predictions is aimed at. Incidentally, let us notice that the plasma frequency $\omega_{\rm p}=7.5$ eV, which is found here to best fit Palik’s handbook data [@HB1], is basically the same as the one proposed alternatively in [@Lam00] for Cu, which has very similar optical properties to Au as far as the Casimir force is concerned [@PRLComment]. For Cu, the variation of the plasma frequency from $\omega_{\rm p}=9$ eV to $\omega_p=7.5$ eV introduced a variation of the Casimir force of up to 5% [@Lam00]. In order to assess more quantitatively the role of the two Drude parameters, we show in the last two rows of Table \[Tab3\] the variation of the reduction factor when either the plasma frequency or the relaxation parameter is varied with the other parameter kept constant. One can see that an increase (decrease) of the relaxation parameter by $\delta\omega_{\tau}=30\%$ lowers (increases) the reduction factor $\eta_F$ at $L=0.1~\mu m$ by only $\delta\eta_F=1.6\%$. However, a $15\%$ variation of the plasma frequency leads to a $4.2\%$ change in the reduction factor. Thus the Casimir force is much more sensitive to the variation of the plasma frequency, basically because the plasma frequency determines the reflection quality of the plates (an infinite plasma frequency corresponds to perfectly reflecting mirrors). Conclusions\[Sec7\]
=================== In this paper we have performed the first systematic and detailed analysis of optical data for Casimir force measurements. We have studied the relative importance of the different frequency regions for the Casimir force as a function of the plate separation and established the critical role of the Drude parameters, in particular for short distance measurements. We have then analyzed and compared four different sets of optical data. For each set we have extracted the corresponding plasma frequency and relaxation parameter, either by fitting the real and imaginary parts of the dielectric function at low frequencies or by using a detailed Kramers-Kronig analysis. Both methods lead essentially to the same results.
The Kramers-Kronig analysis proves to be a powerful tool for the estimation of the low frequency Drude parameters for data coming from the same sample. A variation of the Casimir force of up to 5.5% is found for different optical data sets. This gives an intrinsic unknown parameter for Casimir force calculations and demonstrates the genuine sample dependence of the Casimir force. The numerical and analytical calculations of the Casimir force available today are in themselves very precise. In the same way, measurements of the Casimir force have achieved high accuracy over the last decade. In order to compare the achievements of theory and experiment at a level of 1% precision or better, the crucial point is to make sure that calculations and experiments are performed for the same physical sample. One therefore has to know the optical and material properties of the sample used in the experiment. These properties must be measured for frequencies as low as possible. In practice, the material properties have to be known over an interval of about 4 orders of magnitude around the characteristic frequency $\zeta_{\rm ch}=c/2L$. For a plate separation of $L=100$ nm this means an interval \[10 meV, 100 eV\]. If measurements at low frequencies are not possible, the low frequency Drude parameters should be extracted from the measured data by one of the two methods discussed here.

**Acknowledgements**

Part of this work was funded by the European Contract STRP 12142 NANOCASE. We wish to thank S. Reynaud and A. Krasnoperov for useful discussions.

References {#references .unnumbered}
==========

[99]{} Casimir H B G 1948 *Proc. K. Ned. Akad. Wet.* [**51**]{} 793. Milonni P W 1994 The Quantum Vacuum (Academic Press, San Diego). Mostepanenko V M and Trunov N N 1997 The Casimir Effect and its Applications (Clarendon Press, Oxford). Kardar M and Golestanian R 1999 *Rev. Mod. Phys.* [**71**]{} 1233. Milton K A 2001 The Casimir Effect (World Scientific, Singapore). Bordag M, Mohideen U, and Mostepanenko V M 2001 *Phys. Rep.* [**353**]{} 1. Lamoreaux S K 1997 *Phys. Rev. Lett.* [**78**]{} 5; 1998 [**81**]{} 5475. Mohideen U and Roy A 1998 *Phys. Rev. Lett.* [**81**]{} 4549; Roy A, Lin C-Y, and Mohideen U 1999 *Phys. Rev. D* [**60**]{} 111101(R). Harris B W, Chen F, and Mohideen U 2000 *Phys. Rev. A* [**62**]{} 052109. Chan H B, Aksyuk V A, Kleiman R N, Bishop D J, and Capasso F 2001 *Science* [**291**]{} 1941; 2001 *Phys. Rev. Lett.* [**87**]{} 211801. Decca R S, López D, Fischbach E, and Krause D E 2003 *Phys. Rev. Lett.* [**91**]{} 050402. Decca R S, Fischbach E, Klimchitskaya G L, Krause D E, López D, and Mostepanenko V M 2003 *Phys. Rev. D* [**68**]{} 116003. Decca R S, López D, Fischbach E, Klimchitskaya G L, Krause D E, and Mostepanenko V M 2005 *Ann. Phys.* **318** 37. Iannuzzi D, Lisanti M, and Capasso F 2004 *Proc. National Acad. Sci. USA* [**101**]{} 4019. Lisanti M, Iannuzzi D, and Capasso F 2005 *Proc. National Acad. Sci. USA* [**102**]{} 11989. Bressi G, Carugno G, Onofrio R, and Ruoso G 2002 *Phys. Rev. Lett.* [**88**]{} 041804. Ederth T 2000 *Phys. Rev. A* **62** 062104. Lambrecht A and Reynaud S 2000 *Eur. Phys. J. D* [**8**]{} 309. Lifshitz E M 1956 *Zh. Eksp. Teor. Fiz.* [**29**]{} 94 \[1956 *Sov. Phys. JETP* [**2**]{} 73\]. Lifshitz E M and Pitaevskii L P 1980 Statistical Physics, Part 2 (Pergamon Press, Oxford). Palik E D (ed) 1995 *Handbook of Optical Constants of Solids* (New York: Academic Press). Svetovoy V B and Lokhanin M V 2000 *Mod. Phys.
Lett. A* **15** 1437. Boström M and Sernelius B E 2000 *Phys. Rev. A* [**61**]{} 046101. Svetovoy V B and Lokhanin M V 2000 *Mod. Phys. Lett. A* **15** 1013. Zolotarev V M, Morozov V N, and Smirnova E V 1984 Optical Constants of Natural and Technical Media (Khimija, Leningrad) (in Russian). Svetovoy V B 2003 Proc. Quantum Field Theory Under External Conditions 76, Ed. Milton K A (Princeton, NJ: Rinton Press); arXiv: cond-mat/0401562. Chen F, Klimchitskaya G L, Mohideen U, and Mostepanenko V M 2004 *Phys. Rev. A* [**69**]{} 022117. Thèye M-L 1970 *Phys. Rev. B* [**2**]{} 3060. Dold B and Mecke R 1965 *Optik* [**22**]{} 435. Weaver J H, Krafka C, Lynch D W, and Koch E E 1981 Optical Properties of Metals, Part II, Physics Data No. 18-2 (Fachinformationszentrum Energie, Physik, Mathematik, Karlsruhe). Sotelo J, Ederth J, and Niklasson G 2003 *Phys. Rev. B* [**67**]{} 195106. An I, Park M-G, Bang K-Y, Oh H-K, and Kim H 2002 *Jpn. J. Appl. Phys.* [**41**]{} 3978. Xia G-Q et al. 2000 *Rev. Sci. Instrum.* [**71**]{} 2677. Yu Wang et al. 1998 *Thin Solid Films* [**313**]{} 232. Bendavid A, Martin P J, and Wieczorek L 1999 *Thin Solid Films* [**354**]{} 169. Pells G P and Shiga M 1969 *J. Phys. C* [**2**]{} 1835. Johnson P B and Christy R W 1972 *Phys. Rev. B* [**6**]{} 4370. Guerrisi M, Rosei R, and Winsemius P 1975 *Phys. Rev. B* [**12**]{} 557. Aspnes D E, Kinsbron E, and Bacon D D 1980 *Phys. Rev. B* [**21**]{} 3290. Motulevich G P and Shubin A A 1964 *Zh. Eksp. Teor. Fiz.* [**47**]{} 840 \[1965 *Soviet Phys. JETP* [**20**]{} 560\]. Padalka V G and Shklyarevskii I N 1961 *Opt. Spektroskopiya* [**11**]{} 527 \[1961 *Opt. Spectry. (USSR)* [**11**]{} 285\]. Bennett J M and Ashley E J 1965 *Appl. Opt.* [**4**]{} 221. Hagiwara K et al. 2002 *Phys. Rev. D* [**66**]{} 010001. Bennett H E and Bennett J M 1966 *in* Optical Properties and Electronic Structure of Metals and Alloys, edited by Abélès F (North-Holland Publ., Amsterdam). Lambrecht A and Reynaud S 2000 *Phys. Rev. Lett.* **84** 5672.

[^1]: A conversion factor of $1.537 \cdot 10^{15}$ rad/s was used, leading however to a negligible difference in the Casimir force (well below 1%).
--- abstract: 'We study strong-coupling lattice QCD with staggered-Wilson fermions, with emphasis on discrete symmetries and the possibility of their spontaneous breaking. We perform hopping parameter expansion and effective potential analyses in the strong-coupling limit. From the gap equations we find a nonzero pion condensate in some range of the mass parameter, which indicates the existence of a parity-broken phase in lattice QCD with staggered-Wilson fermions. We also find massless pions and PCAC relations around the second-order phase boundary. These results suggest that we can take a chiral limit by tuning the mass parameter in lattice QCD with staggered-Wilson fermions, as with the Wilson fermion.' author: - Tatsuhiro Misumi - 'Takashi Z. Nakano' - Taro Kimura - Akira Ohnishi title: | Strong-coupling Analysis of Parity Phase Structure\ in Staggered-Wilson Fermions ---

Introduction {#sec:Intro}
============

Since the dawn of lattice field theory [@Wil], the doubling problem of fermions has been a notorious obstacle to lattice simulations. Among several prescriptions for this problem, the Wilson fermion simply bypasses the no-go theorem [@NN] by introducing a species-splitting mass term into the naive lattice fermion. This Wilson term is regarded as one example of “flavored-mass terms”, which split the original 16 fermion species into several branches [@CKM1; @CKM12]. It has recently been shown in Refs. [@Adams1; @Adams2; @Hoel] that flavored-mass terms can also be constructed for staggered fermions [@KS; @Suss; @Sha]. The original purpose of introducing these terms was the establishment of the index theorem for staggered fermions [@Adams1]. A bonus here is that staggered fermions with the flavored-mass terms can be applied to lattice QCD simulations both as a Wilson-type fermion and as a kernel for the overlap construction. One possible advantage of these novel formulations, called staggered-Wilson and staggered-overlap, is the reduction of the matrix size of the Dirac operator, which would lead to a reduction of the numerical cost of lattice simulations. The numerical advantage of the staggered-overlap fermion has been demonstrated in [@PdF]. Further study is now required toward lattice QCD simulations with these lattice fermions. The purpose of this work is to reveal properties of staggered-Wilson fermions in terms of the parity-phase structure (Aoki phase) [@AokiP; @AokiU1; @CreutzW; @SS; @ACV; @Sharpe]. As is well known, the existence of the Aoki phase and the second-order phase boundary in Wilson-type fermions enables us to perform lattice QCD simulations by taking a chiral limit, since the critical behavior near the phase boundary reproduces massless pions and the PCAC relation. Besides, understanding the phase structure also gives practical information for the application of the overlap [@GW; @Neu] and domain-wall [@Kap; @FuSh] versions, both built on the Wilson-type kernel. Thus, in order to judge the applicability of these new lattice fermions, it is essential to investigate the Aoki phase in the staggered-Wilson fermions. The phase structure for the staggered-Wilson fermion was first studied by using the Gross-Neveu model in Refs. [@CKM2; @MCKNO], and the present paper presents a further investigation of this topic. In this paper, we investigate strong-coupling lattice QCD [@KaS] with emphasis on parity-phase structure for two types of staggered-Wilson fermions [@Adams2; @Hoel].
Firstly we discuss discrete symmetries of staggered-Wilson fermions, and show that physical parity and charge conjugation can be defined in both cases while hypercubic symmetry depends on types of staggered-Wilson fermions. Secondly, we perform hopping-parameter expansion and effective potential analysis for meson fields in the strong-coupling limit. For this purpose, we develop a method to derive the effective potential for lattice fermions with multiple-hopping terms. The gap equations show that pion condensate becomes non-zero in some range of a mass parameter, which indicates that parity-broken phase appears in this range. We also study meson masses around the second-order phase boundary, and find that massless pions and PCAC relations are reproduced. Lastly, we discuss parity-flavor symmetry breaking for 2-flavor cases. These results suggest that we can take a chiral limit by tuning a mass parameter in lattice QCD with staggered-Wilson fermions as with the Wilson fermion. This paper is organized as follows. In Sec. \[sec:SWF\], we review staggered flavored-mass terms and two types of staggered-Wilson fermions. In Sec. \[sec:Sym\], we study discrete symmetries of staggered-Wilson fermions. In Sec. \[sec:HPE\], we study hopping parameter expansion in lattice QCD with these fermions. In Sec. \[sec:EPA\], we investigate Aoki phase structure by effective potential analysis. In Sec. \[sec:Tf\], we discuss parity-flavor symmetry breaking in two-flavor cases. In Sec. \[sec:SD\], we devote ourselves to a summary and discussion. Staggered-Wilson fermions {#sec:SWF} ========================= Before looking into staggered-Wilson fermions, we review the Wilson fermion and its relatives. The Wilson term splits 16 species of naive fermions into 5 branches with 1, 4, 6, 4 and 1 fermions. We call this kind of species-splitting terms “flavored-mass terms". As shown in [@CKM1], there are 4 types of flavored-mass terms for naive fermion which satisfy $\gamma_{5}$ hermiticity. ($\gamma_{5}$ in the naive fermion is flavored such as $\gamma_{5}\otimes(\tau_{3}\otimes\tau_{3}\otimes\tau_{3}\otimes\tau_{3})$ in the spin-flavor representation.) All these terms with proper mass shifts lead to a second derivative term as $\sim a\int dx^{4} \bar{\psi}D^{2}_{\mu}\psi$ up to $\mathcal{O}(a^2)$ errors. Thus we can regard them as cousins of Wilson fermion. There are also non-trivial flavored-mass terms for staggered fermions, which split 4 tastes into branches and satisfy $\gamma_{5}$ hermiticity. Since $\gamma_{5}$ is expressed in spin-taste representation as $\gamma_{5}\otimes\gamma_{5}$ in this case, we only have two flavored-mass terms satisfying $\gamma_{5}$ hermiticity: ${{\bf 1}}\otimes\gamma_{5}$ and ${{\bf 1}}\otimes \sigma_{\mu\nu}$. (For larger discrete symmetry one needs to take a proper sum for $\mu,\nu$ in the latter case.) These spin-flavor representations translate into four- and two-hopping terms in the one-component staggered action up to $\mathcal{O}(a)$ errors. 
The first type is given by $$M_A= \epsilon\sum_{sym} \eta_{1}\eta_{2}\eta_{3}\eta_{4} C_{1}C_{2}C_{3}C_{4} = ({{\bf 1}}\otimes \gamma_{5}) + \mathcal{O}(a) \ , \label{AdamsM}$$ with $$\begin{aligned} (\epsilon)_{xy}&=(-1)^{x_{1}+...+x_{4}}\delta_{x,y} \ , \\ (\eta_{\mu})_{xy}&=(-1)^{x_{1}+...+x_{\mu-1}}\delta_{x,y} \ , \\ C_{\mu}&=(V_{\mu}+V_{\mu}^{\dag})/2 \ , \\ (V_{\mu})_{xy}&=U_{\mu,x}\delta_{y,x+\mu} \ .\end{aligned}$$ The second type is given by $$\begin{aligned} M_H&={i\over{\sqrt{3}}}(\eta_{12}C_{12}+\eta_{34}C_{34}+\eta_{13}C_{13}+\eta_{42}C_{42} +\eta_{14}C_{14}+\eta_{23}C_{23}) \ , \nonumber\\ &= [{{\bf 1}}\otimes (\sigma_{12}+\sigma_{34}+\sigma_{13}+\sigma_{42}+\sigma_{14}+\sigma_{23}) ] + \mathcal{O}(a) \ , \label{HoelM}\end{aligned}$$ with $$\begin{aligned} (\eta_{\mu\nu})_{xy}&=\epsilon_{\mu\nu}\eta_{\mu}\eta_{\nu}\delta_{x,y} \ , \\ (\epsilon_{\mu\nu})_{xy}&=(-1)^{x_{\mu}+x_{\nu}}\delta_{x,y} \ , \\ C_{\mu\nu}&=(C_{\mu}C_{\nu}+C_{\nu}C_{\mu})/2 \ .\end{aligned}$$ We refer to $M_A$ and $M_H$ as the Adams- [@Adams1] and Hoelbling-type [@Hoel], respectively. The former splits the 4 tastes into two branches with positive mass and the other two with negative mass. These two branches correspond to $+1$ and $-1$ eigenvalues of $\gamma_{5}$ in the taste space. The latter splits them into one with positive mass, two with zero mass and the other one with negative mass. We note that $M_{A}$ and $M_{H}$ are also derived from the flavored mass terms for naive fermions through spin-diagonalization as shown in [@CKM1]. Now we introduce a Wilson parameter $r=r\delta_{x,y}$ and shift mass as in Wilson fermions [@Hoel]. Then the Adams-type staggered-Wilson fermion action is given by $$\begin{aligned} S_{\rm A}\,&=\,\sum_{xy}\bar{\chi}_{x}[\eta_{\mu}D_{\mu} +r(1+M_A)+M]_{xy}\chi_{y} \ , \label{AdamsS} \\ D_{\mu}&={1\over{2}}(V_{\mu}-V^\dagger_{\mu}) \ .\end{aligned}$$ Here $M$ stands for the usual taste-singlet mass ($M=M\delta_{x,y}$). The Hoelbling-type fermion action is given by $$\begin{aligned} S_{\rm H}\,=\,\sum_{xy}\bar{\chi}_{x}[\eta_{\mu}D_{\mu} +r(2+M_H)+M]_{xy}\chi_{y} \ . \label{HoelS}\end{aligned}$$ It is obvious that these lattice fermions have possibility to be two- or one-flavor Wilson fermions. In lattice QCD simulation with these fermions, the mass parameter $M$ will be tuned to take a chiral limit as Wilson fermions. For some negative value of the mass parameter: $-1<M<0$ for Adams-type and $-2<M<0$ for Hoelbling-type, we obtain two-flavor and one-flavor overlap fermions respectively by using the overlap formula in principle. Discrete Symmetries {#sec:Sym} =================== In this section, we discuss discrete symmetry of staggered-Wilson fermions. A potential problem with staggered-Wilson fermions in lattice QCD is discrete symmetry breaking. As discussed in [@Adams2; @Hoel], the discrete symmetry possessed by the original staggered fermion is broken to its subgroup both in the Adams-type and Hoelbling-type actions. We below list all the staggered discrete symmetries (shift, axis reversal, rotation and charge conjugation), and look into their status in staggered-Wilson fermions. Shift transformation is given by $$\begin{aligned} \mathcal{S}_{\mu}:\,\,\chi_{x} \to \zeta_{\mu}(x)\chi_{x+\hat{\mu}}, \,\,\,\, \bar{\chi}_{x} \to \zeta_{\mu}(x)\bar{\chi}_{x+\hat{\mu}},\,\,\,\, U_{\nu,x} \to U_{\nu, x+\hat{\mu}} \ , \label{shift1}\end{aligned}$$ with $\zeta_{1}(x)=(-1)^{x_{2}+x_{3}+x_{4}}$, $\zeta_{2}(x)=(-1)^{x_{3}+x_{4}}$, $\zeta_{3}(x)=(-1)^{x_{4}}$ and $\zeta_{4}(x)=1$. 
It is obvious that this transformation flips the sign of both flavored-mass terms. The Adams type is invariant under the two-shift subgroup as $x \to x+\hat{1}\pm\hat{\mu}$ while the Hoelbling type is invariant under four-shift subgroup as $x\to x+\hat{1}\pm\hat{2}\pm\hat{3}\pm\hat{4}$. Note that these subgroups include the doubled shift $x\to x+2\hat{\mu}$ as their subgroup. The axis reversal transformation is given by, $$\begin{aligned} \mathcal{I}_{\mu}:\,\,\chi_{x} \to (-1)^{x_{\mu}}\chi_{Ix}, \,\,\,\, \bar{\chi}_{x} \to (-1)^{x_{\mu}}\bar{\chi}_{Ix},\,\,\,\, U_{\nu,x} \to U_{\nu, Ix} \ , \label{axis1}\end{aligned}$$ with $I=I^{\mu}$ is the axis reversal $x_{\mu}\to -x_{\mu}$, $x_{\rho}\to x_{\rho}$, $\rho\not= \mu$. It again flips the sign of the flavored-mass terms. The staggered rotational transformation is given by $$\begin{aligned} \mathcal{R}_{\rho\sigma}:\,\,\chi_{x} \to S_{R}(R^{-1}x)\chi_{R^{-1}x},\,\,\,\, \bar{\chi_{x}} \to S_{R}(R^{-1}x)\bar{\chi}_{R^{-1}x},\,\,\,\, U_{\nu, x} \to U_{\nu, Rx} \ , \label{rot1}\end{aligned}$$ where $R_{\rho\sigma}$ is the rotation $x_{\rho}\to x_{\sigma}$, $x_{\sigma}\to -x_{\rho}$, $x_{\tau}\to x_{\tau}$, $\tau \not= \rho, \sigma$ and $S_{R}(x)={1\over{2}}[1\pm\eta_{\rho}(x)\eta_{\sigma}(x)\mp\zeta_{\rho}(x)\zeta_{\sigma}(x) +\eta_{\rho}(x)\eta_{\sigma}(x)\zeta_{\rho}(x)\zeta_{\sigma}(x)]$ with $\rho$$\sigma$. It is notable that the Adams-type fermion keeps this staggered rotation symmetry while the Hoelbling type loses it. The staggered charge conjugation transformation is given by $$\mathcal{C}:\,\,\chi_{x}\to\epsilon_{x}\bar{\chi}_{x}^{T},\,\,\,\, \bar{\chi}_{x}\to-\epsilon_{x}\chi_{x}^{T},\,\,\,\, U_{\nu,x} \to U_{\nu,x}^{*} \ . \label{C0}$$ The Adams-type fermion keeps this symmetry while the Hoelbling type loses it. We next elucidate residual subgroups possessed by staggered-Wilson fermions, and discuss how to define physical discrete symmetries as Parity, Charge conjugation and Hypercubic symmetry. For this purpose we separate spin and flavor rotations in the above transformations. Here we utilize the momentum-space representation in [@GS; @DS]. In this representation we can define two set of clifford generators $\Gamma_{\mu}$ and $\Xi_{\mu}$, which operate on spinor and flavor spaces in the momentum-space field $\phi(p)$, respectively. (Details are shown in Appendix. \[SFS\].) Then the shift transformation translates into $$\mathcal{S}_{\mu}:\,\,\phi(p)\,\,\to\,\, \exp(ip_{\mu})\Xi_{\mu}\,\phi(p) \ . \label{shift2}$$ The axis reversal translates into $$\mathcal{I}_{\mu}:\phi(p)\,\,\to\,\,\Gamma_{\mu}\Gamma_{5}\Xi_{\mu}\Xi_{5}\,\phi(Ip) \ . \label{axis2}$$ The rotational transformation translates into $$\mathcal{R}_{\rho\sigma}:\,\,\phi(p)\,\,\to\,\,\exp({\pi\over{4}}\Gamma_{\rho}\Gamma_{\sigma}) \exp({\pi\over{4}}\Xi_{\rho}\Xi_{\sigma})\,\phi(R^{-1}p) \ . \label{rot2}$$ By using this representation we figure out residual discrete symmetries of staggered-Wilson fermions as follows. We first consider parity. Both staggered-Wilson fermions are invariant under $$\mathcal{I}_{s}\mathcal{S}_{4}\sim \exp(ip_{4})\Gamma_{1}\Gamma_{2}\Gamma_{3}\Gamma_{5}\,\phi(-{\bf p},p_{4})\sim \exp(ip_{4})\Gamma_{4}\,\phi(-{\bf p},p_{4}) \ , \label{parity}$$ with $\mathcal{I}_{s}\equiv \mathcal{I}_{1}\mathcal{I}_{2}\mathcal{I}_{3}$. This is essentially the continuum parity transformation [@GS]. 
In the continuum limit the phase factor disappears and it results in the continuum parity transformations $P : \psi(p)\to\gamma_{4}\psi(-{\bf p},p_{4})$ for the Dirac fermion. We thus conclude both staggered-Wilson fermions possess symmetry leading to physical parity symmetry $P$. We note the simple combination of $\mu$-shift and $\mu$-axis reversal (shifted-axis reversal:$\mathcal{S}_{\mu}\mathcal{I}_{\mu}$) is also a symmetry of both fermions. We next consider physical charge conjugation. In the case of Adams fermion the staggered charge conjugation symmetry $\mathcal{C}$ in Eq. (\[C0\]) remains intact. Thus, physical charge conjugation for the two-flavor branch can be formed in a similar way to usual staggered fermions as shown in [@GS] ($C\sim\mathcal{C}\mathcal{S}_{2}\mathcal{S}_{4} \mathcal{I}_{2}\mathcal{I}_{4}$). On the other hand, the Hoelbling type breaks $\mathcal{C}$. In this case, however, we can define another charge conjugation by combining $\mathcal{C}$ with rotation transformation as $$\mathcal{C}_{T} : \,\,\mathcal{R}_{21}\mathcal{R}_{13}^{2}\mathcal{C} \ . \label{RRRC}$$ The Hoelbling action is invariant under this transformation. By using this symmetry we can define physical charge conjugation $C$ for one-flavor branch as in the Adams type. We thus conclude that both staggered-Wilson fermions have proper charge conjugation symmetry. $N_f$ $\mathcal{S}$$\&$$\mathcal{I}$-subgroup $\mathcal{R}$-subgroup $P$ $C$ $SW_{4}$ ----------- ------- ------------------------------------------------------------------------------------------------------------------ ------------------------------------------------ ------------ ------------ ------------ -- Staggered $4$ $\mathcal{S}_{\mu}$,$\mathcal{I}_{\mu}$ $\mathcal{R}_{\mu\nu}$ $\bigcirc$ $\bigcirc$ $\bigcirc$ Adams $2$ $\mathcal{S}_{\mu}\mathcal{S}_{\nu}$, $\mathcal{S}_{\mu}\mathcal{I}_{\mu}$ $\mathcal{R}_{\mu\nu}$ $\bigcirc$ $\bigcirc$ $\bigcirc$ Hoelbling $1$ $\mathcal{S}_{\mu}\mathcal{S}_{\nu}\mathcal{S}_{\rho}\mathcal{S}_{\sigma}$, $\mathcal{S}_{\mu}\mathcal{I}_{\mu}$ $\mathcal{R}_{\mu\nu}\mathcal{R}_{\rho\sigma}$ $\bigcirc$ $\bigcirc$ $\times$ We lastly consider hypercubic symmetry. In staggered fermions, the rotation Eq. (\[rot1\]) and axis reversal Eq. (\[axis1\]) form hypercubic symmetry [@KilSha], which enhances to Euclidian Lorentz symmetry in the continuum limit. In the case of the Adams-type fermion, the action is invariant under the rotation Eq. (\[rot1\]) and the shifted-axis reversal $\mathcal{S}_{\mu}\mathcal{I}_{\mu}$. These two symmetries can form proper hypercubic symmetry $SW_{4}$ in this case. Thus we conclude that Adams fermion recovers Lorentz symmetry in the continuum. In the Hoelbling-type formulation, the action breaks the rotation symmetry into its subgroup called doubled rotation [@Hoel], which is given by $$\mathcal{R}_{\rho\sigma}\mathcal{R}_{\mu\nu}\sim \exp[{\pi\over{4}}(\Gamma_{\rho}\Gamma_{\sigma}+\Gamma_{\mu}\Gamma_{\nu})] \exp[{\pi\over{4}}(\Xi_{\rho}\Xi_{\sigma}+\Xi_{\mu}\Xi_{\nu})] \phi(R_{\rho\sigma}^{-1}R_{\mu\nu}^{-1}p) \ , \label{DR}$$ where $(\mu,\nu,\sigma,\rho)$ is any permutation of ($1,2,3,4$). It is also invariant under sequential transformations of ($\mu,\nu$ rotation), ($\nu,\mu$ rotation), ($\mu$ shift) and ($\nu$ shift) as $$\mathcal{S}_{\nu}\mathcal{S}_{\mu}\mathcal{R}_{\nu\mu}\mathcal{R}_{\mu\nu}\sim \exp(ip_{\mu}+ip_{\nu})\Gamma_{\mu}\Gamma_{\nu}\,\phi(\tilde{p}) \ , \label{SDR}$$ with $\tilde{p}_{\mu,\nu}=-p_{\mu,\nu}, \tilde{p}_{\tau}=p_{\tau}$, $\tau\not=\mu,\nu$. 
The loss of rotation symmetry indicates that we cannot define hypercubic symmetry in the Hoelbling fermion. It implies that it could not lead to a correct continuum theory, and we would need to tune parameters to restore Lorentz symmetry. Indeed the recent study on symmetries of staggered-Wilson fermions by Sharpe [@Steve] reports that recovery of Lorentz symmetry requires fine-tuning of coefficients in the gluonic sector in lattice QCD with Hoelbling fermion. To summarize, Adams fermion possesses physical parity, charge conjugation and hypercubic symmetries while Hoelbling fermion loses hypercubic as shown in Table. \[Table:sym\]. It seems that Hoelbling fermion cannot be straightforwardly applied to lattice QCD while Adams type can. We note that both staggered-Wilson fermions have proper parity symmetry, and we can discuss spontaneous parity symmetry breaking. Moreover, we may find some symptom due to Lorentz symmetry breaking in Hoelbling fermion in the parity-phase structure. This is another motivation to study parity-phase structure in strong-coupling lattice QCD. In the end of this section, we comment on special symmetries in staggered-Wilson fermions without mass shift. Hoelbling fermion without the mass shift ($\eta_{\mu}D_{\mu}+M_{H}$) possesses special charge conjugation symmetry ($\mathcal{C}_{T}': \chi\to\bar\chi,\, \bar{\chi}\to\chi$). This topic is beyond the scope of this work, but we note that it can do good to two flavors in the central branch. Use of the central branch is intensively discussed in [@Rev; @PdF]. Hopping Parameter Expansion {#sec:HPE} =========================== In this section we investigate parity-phase structure in lattice QCD with staggered-Wilson fermions in the framework of hopping parameter expansion (HPE) in the strong-coupling regime [@AokiP]. In the hopping parameter expansion, we treat a mass term as a leading action while we perturbatively treat hopping terms. We thus perform expansion by a hopping parameter which essentially corresponds to inverse of a mass parameter. By using self-consistent equations we derive one- and two-point functions, and calculate meson condensates and meson mass for two types of staggered-Wilson fermions. We for simplicity drop the flavor indices until we discuss the two-flavor case in details in Sec. \[sec:Tf\]. However it is easy to recover the flavor indices for the field $\chi_{f}$, the mass parameter $M_{f}$ and the condensate $\Sigma_{f}$ ($f=1,2,...$). Hoelbling type {#subsec:HPE-H} -------------- ![Feynman rules in hopping parameter expansion (HPE) with the Hoelbling-type staggered-Wilson fermion. $a$ and $b$ stand for the color indices. $\mathcal{W}_{\alpha\mu\beta\nu,x}^{(2)}$ is given in Table \[Table:U2\].[]{data-label="FR-H"}](HPE-Feynman-rule-Hoelbling.eps){height="6cm"} $\alpha$ $\beta$ $\mathcal{W}_{\alpha\mu\beta\nu,x}^{(2)}$ ---------- --------- --------------------------------------------------------------------- $+$ $+$ $U_{\mu,x}U_{\nu,x+\hat{\mu}}$ $-$ $-$ $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}-\hat{\nu}}^\dagger$ $+$ $-$ $U_{\mu,x}U_{\nu,x+\hat{\mu}-\hat{\nu}}^\dagger$ $-$ $+$ $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}}$ ![Feynman diagram for mesonic one-point functions in the $\mathcal{O}(K^{3})$ self-consistent equation of HPE with the Hoelbling fermion. Black circles stand for the leading one-point function $\langle \chi_x \bar{\chi}_x \rangle_0$ while white circles stand for $\langle \chi_x \bar{\chi}_x \rangle$ which include next-leading and higher hopping terms. 
By summing up higher contributions, we obtain the second equality.[]{data-label="One-H"}](HPEwHoelbling.eps){height="6cm"} We begin with the Hoelbling-type fermion, which contains two-hopping terms in the action. One reason that we start with Hoelbling type regardless of the lower rotation symmetry is that 2-hopping calculation can be a good exercise for 4-hopping case in Adams type. To perform the HPE for the Hoelbling-type fermion, we rewrite the action Eq. (\[HoelS\]) by redefining $\chi \rightarrow \sqrt{2K} \chi$ with the hopping parameter $K=1/[2(M+2r)]$, $$S = \sum_{x} {{\bar{\chi}}}_x \chi_x + 2K \sum_{x, y} {{\bar{\chi}}}_{x}(\eta_{\mu} D_\mu )_{xy}\chi_{y}+ 2 K r \sum_{x,y} {{\bar{\chi}}}_x (M_{H})_{xy} \chi_y \ , \label{HPE-Hoel}$$ where $M_{H}$ is given by Eq. (\[HoelM\]). The plaquette action is $1/g^2$ term and we can omit it in the strong-coupling limit. In this section, we derive one- and two-point functions by using a $\mathcal{O}(K^{3})$ self-consistent equation: Solving this equation leads to truncation of diagrams as the ladder approximation having all diagrams to $\mathcal{O}(K^3)$ are taken into account. More precisely, this approximation does not take account of all diagrams, but it successfully includes certain kinds of diagrams to all orders of $K$ thanks to a self-consistent approach. We thus expect that it works to figure out existence of Aoki phase. We note that this approximation especially works well for a small hopping parameter $K\ll1$. In Fig. \[FR-H\], we depict Feynman rules in the HPE for this fermion. The fundamental Feynman rules contain contributions from 0-hopping (mass term), 1-hopping (kinetic term) and 2-hopping (flavored-mass term) terms. First, by using these Feynman rules, we derive meson condensates from an one-point function of the meson operator $\mathcal{M}_{x}=\bar{\chi}_{x}\chi_{x}$ in the mean-field approximation. The one-point function is defined as $$\begin{aligned} \langle \chi_x^a \bar{\chi}_x^b \rangle \equiv& - \delta_{ab} \Sigma_x = \frac{\int \mathcal{D}[\chi,\bar{\chi},U] \chi_x^a \bar{\chi}_x^b\ e^{S} }{ \int \mathcal{D}[\chi,\bar{\chi},U] e^{S} }\ .\end{aligned}$$ Note that we use the partition function $Z=\int \mathcal{D}[\chi,\bar{\chi},U] e^{S}$, not $Z=\int \mathcal{D}[\chi,\bar{\chi},U] e^{-S}$, following the convention for the partition function in the strong-coupling analysis [@KaS]. The leading term in the hopping parameter expansion is given by $$\begin{aligned} \langle \chi_x^a \bar{\chi}_x^b \rangle_0 = \frac{\int \mathcal{D}[\chi,\bar{\chi},U]\, \chi_x^a \bar{\chi}_x^b\ e^{S_0}} {\int \mathcal{D}[\chi,\bar{\chi},U]\, e^{S_0}} = - \delta^{ab} \ ,\end{aligned}$$ where $S_0=\sum_x \bar{\chi}_x \chi_x$. By using the Feynman rules, we can evaluate the diagrams in Fig. \[One-H\]. 
$$\begin{aligned} \langle \chi_x^a \bar{\chi}_x^b \rangle & \equiv - \delta^{ab} \Sigma_x {\nonumber}\\ &= \langle \chi_x^a \bar{\chi}_x^b \rangle_0 {\nonumber}\\ & + \sum_{\pm \mu} (-1) (K \eta_{\mu,x})^2 \langle (\chi^a \bar{\chi})_x \rangle_0 U_{\mu,x} \langle (\chi \bar{\chi})_{x+\hat{\mu}} \rangle_0 U_{\mu,x}^\dagger \langle (\chi \bar{\chi}^b)_x \rangle_0 {\nonumber}\\ & + \sum_{\substack{\pm \mu,\pm \nu \\ (\mu \neq \nu)}} (-1) \left( 2K r i \eta_{\mu \nu,x} \displaystyle \frac {1}{2^3 \sqrt{3}} \right)^2 \langle (\chi^a \bar{\chi})_x \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)} \langle (\chi \bar{\chi})_{x+\hat{\mu}+\hat{\nu}} \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)\dagger} \langle (\chi \bar{\chi}^b)_x \rangle_0 {\nonumber}\\ & + \sum_{\pm \mu,\pm \nu} (-1) (K \eta_{\mu,x})^2 (-1) (K \eta_{\nu,x})^2 \langle (\chi^a \bar{\chi})_x \rangle_0 U_{\mu,x} \langle (\chi \bar{\chi})_{x+\hat{\mu}} \rangle_0 U_{\mu,x}^\dagger \langle (\chi \bar{\chi})_x \rangle_0 U_{\nu,x} {\nonumber}\\ & \times \langle (\chi \bar{\chi})_{x+\hat{\nu}} \rangle_0 U_{\nu,x}^\dagger \langle (\chi \bar{\chi}^b)_x \rangle_0 {\nonumber}\\ & + \sum_{\substack{\pm \mu,\pm \nu ,\pm \rho, \pm \sigma \\ (\mu \neq \nu, \rho \neq \sigma)}} (-1) \left( 2K r i \eta_{\mu \nu,x} \displaystyle \frac {1}{2^3 \sqrt{3}} \right)^2 (-1) \left( 2K r i \eta_{\rho \sigma,x} \displaystyle \frac {1}{2^3 \sqrt{3}} \right)^2 {\nonumber}\\ & \times \langle (\chi^a \bar{\chi})_x \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)} \langle (\chi \bar{\chi})_{x+\hat{\mu}+\hat{\nu}} \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)\dagger} \langle (\chi \bar{\chi})_x \rangle_0 \mathcal{W}_{\rho\sigma,x}^{(2)} \langle (\chi \bar{\chi})_{x+\hat{\rho}+\hat{\sigma}} \rangle_0 \mathcal{W}_{\rho\sigma,x}^{(2)\dagger} \langle (\chi \bar{\chi}^b)_x \rangle_0 {\nonumber}\\ & + \sum_{\substack{\pm \mu,\pm \nu ,\pm \rho \\ (\mu \neq \nu)}} (-1) \left( 2K r i \eta_{\mu \nu,x} \displaystyle \frac {1}{2^3 \sqrt{3}} \right)^2 (-1) \left( K \eta_{\rho,x} \right)^2 \langle (\chi^a \bar{\chi})_x \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)} {\nonumber}\\ & \times \langle (\chi \bar{\chi})_{x+\hat{\mu}+\hat{\nu}} \rangle_0 U_{\rho,x+\hat{\mu}+\hat{\nu}} \langle (\chi \bar{\chi})_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}} \rangle_0 U_{\rho,x+\hat{\mu}+\hat{\nu}}^\dagger \langle (\chi \bar{\chi})_{x+\hat{\mu}+\hat{\nu}} \rangle_0 \mathcal{W}_{\mu\nu,x}^{(2)\dagger} \langle (\chi \bar{\chi}^b)_x \rangle_0 {\nonumber}\\ & + \mathcal{O}(K^5) \ , \label{Eq:HPE-Hoelbling1-detail}\end{aligned}$$ where $(\chi \bar{\chi})_x$ stands for $\chi_x \bar{\chi}_x$ and $\mathcal{W}_{\mu\nu,x}^{(2)}=\mathcal{W}_{+\mu+\nu,x}^{(2)}$ in Table. \[Table:U2\]. Note that we consider only connected diagrams in Fig. \[One-H\]. By summing higher hopping terms, the one-point function is obtained as shown in Fig. \[One-H\], which is given by $$- \Sigma_x \equiv - \langle \mathcal{M}_x \rangle = - \langle \mathcal{M}_x \rangle_0 + 2K^2 \sum_\mu \Sigma_x \Sigma_{x+\hat{\mu}} - 2 \cdot \displaystyle \frac {1}{24} (Kr)^2 \sum_{\mu \neq \nu} \Sigma_x \Sigma_{x+\hat{\mu}+\hat{\nu}} \ . \label{Eq:HPE-Hoelbling1}$$ The equation contains terms to $\mathcal{O}(K^{2})$, and $\mathcal{O}(K^{3})$ diagrams are found to vanish due to cancellation between the diagrams. Here we solve it in a self-consistent way for condensate $\Sigma$ within mean-field approximation. We here assume $\Sigma_x=\sigma_x +i \epsilon_x \pi_x$ as the condensate. $\sigma$ and $\pi$ correspond to chiral and pion condensates, respectively. 
We substitute this form of $\Sigma_{x}$ in Eq. (\[Eq:HPE-Hoelbling1\]) and obtain a self-consistent equation $$- \left( \sigma +i \epsilon_x \pi \right)=-1 + 2K^2 \cdot 4 \left( \sigma^2 + \pi^2 \right) -2 \cdot \displaystyle \frac {1}{24} (Kr)^2 \cdot 4 \cdot 3 \left( \sigma +i \epsilon_x \pi \right)^2 \ , \label{HPE-HoelSelf1}$$ which yields $- \sigma = -1 + 16 K^2 \pi^2$ and $- i \pi = - 8K^2 \cdot 2 i \sigma \pi$. For simplicity, we have set $r=2\sqrt{2}$ to make the equation Eq. (\[HPE-HoelSelf1\]) simpler. Of course we can also discuss for other values of $r$ in general. Now we have two solutions depending on $\pi=0$ or $\pi\not=0$: For $\pi=0$, we have a trivial solution $\sigma=1$. For $\pi \neq 0$, we have a solution as $$\sigma = \displaystyle \frac{1}{16K^2},\,\,\,\,\,\,\,\,\,\,\,\,\,\, \pi = \pm \sqrt{ \displaystyle \frac{1}{16K^2} \left( 1- \displaystyle \frac{1}{16K^2} \right) } \ . \label{cond}$$ Nonzero pion condensate implies spontaneous parity breaking for the range $\mid{K}\mid > 1/4$. The sign of the pion condensate in Eq. (\[cond\]) reflects the $Z_2$ parity symmetry of the theory. Thus the parity-broken phase, if it exists, appears in a parameter range $-4\sqrt{2}-2<M<-4\sqrt{2}+2$ in the strong-coupling limit. We note that the critical hopping parameter $|K_c|=1/4$ is small, and we speculate that the $\mathcal{O}(K^{3})$ self-consistent equation is valid around the value. Next, we discuss the meson mass from a two-point function of the meson operator $\mathcal{S}(0,x)\equiv\langle\mathcal{M}_{0}\mathcal{M}_{x}\rangle$. From Fig. \[Two-H\], we derive the following $\mathcal{O}(K^{3})$ equation for a two-point function. ![Feynman diagram for mesonic two-point functions for $\mathcal{O}(K^{3})$ self-consistent equation with the Hoelbling fermion. []{data-label="Two-H"}](HPEwHoelbling-Mass.eps){height="4cm"} $$\begin{aligned} \mathcal{S}(0,x) = &\langle {{\bar{\chi}}}_0^a\chi_0^a {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ = &- \delta_{0x} N_c {\nonumber}\\ &-K^2 \langle {{\bar{\chi}}}_0^a \chi_0^a {{\bar{\chi}}}_0^c (\eta_{\mu,0})^2 \biggl[ U_{\mu,0}^{cd} \chi_{\hat{\mu}}^d {{\bar{\chi}}}_{\hat{\mu}}^e (U_{\mu,0}^{\dagger})^{ef} + (U_{\mu,-\hat{\mu}}^{\dagger})^{cd} \chi_{-\hat{\mu}}^d {{\bar{\chi}}}_{-\hat{\mu}}^e U_{\mu,-\hat{\mu}}^{ef} \biggr] \chi_0^f {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ & - \left( 2 K r i \displaystyle \frac{1}{2^3 \sqrt{3}} \right)^2 \langle {{\bar{\chi}}}_0^a \chi_0^a {{\bar{\chi}}}_0^c \sum_{\alpha,\beta=\pm} \sum_{\mu \neq \nu} (\eta_{\mu\nu,0})^2 \biggl[ (\mathcal{W}_{\alpha\mu\beta\nu,0}^{(2)})^{cd} \chi_{\alpha\hat{\mu}\beta\hat{\nu}}^d {{\bar{\chi}}}_{\alpha\hat{\mu}\beta\hat{\nu}}^e (\mathcal{W}_{\alpha\mu\beta\nu,0}^{(2) \dagger})^{ef} \biggr] \chi_0^f {{\bar{\chi}}}_x^b \chi_x^b \rangle \ , \label{HPE-HoelTwo} \end{aligned}$$ where $\mathcal{W}_{\alpha\mu\beta\nu,0}^{(2)}$ is defined in Table. \[Table:U2\]. Note that we consider only connected diagrams in Fig. \[Two-H\]. 
By integrating out the link variables in the strong-coupling limit, it is simplified as $$\begin{aligned} \mathcal{S}(0,x) \equiv \langle {{\bar{\chi}}}_0^a\chi_0^a {{\bar{\chi}}}_x^b \chi_x^b \rangle = - \delta_{0x} N_c &+ K^2 \sum_{\pm \mu} \langle \chi_{\hat{\mu}}^a {{\bar{\chi}}}_{\hat{\mu}}^a {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ & + \left( 2 K r i \displaystyle \frac{1}{2^3 \sqrt{3}} \right)^2 \sum_{\substack{\pm \mu,\pm \nu \\ (\mu \neq \nu)}} \langle \chi_{\hat{\mu}+\hat{\nu}}^a {{\bar{\chi}}}_{\hat{\mu}+\hat{\nu}}^a {{\bar{\chi}}}_x^b \chi_x^b \rangle \ . \label{Eq:HPE-Hoelbling2}\end{aligned}$$ Then the self-consistent equation for $\mathcal{S}$ is given in the momentum space as $$\begin{aligned} \mathcal{S}(p) &= - N_c + \biggl[- K^2 \sum_\mu \left( e^{-ip_\mu} + e^{ip_\mu} \right) {\nonumber}\\ &+ \left( 2 K r \displaystyle \frac{1}{2^3 \sqrt{3}} \right)^2 \sum_{\mu \neq \nu} \left( e^{-i(p_\mu+p_\nu)} + e^{i(p_\mu+p_\nu)} + e^{-i(p_\mu-p_\nu)} + e^{i(p_\mu-p_\nu)} \right)\biggr] \mathcal{S}(p) \ . \label{HPE-HoelSelf2}\end{aligned}$$ We finally obtain the meson propagator as $$\mathcal{S}(p) = N_c \biggl[ - 2 K^2 \sum_\mu \cos p_\mu + 4 \left( 2 K r \displaystyle \frac{1}{2^3 \sqrt{3}} \right)^2 \sum_{\mu \neq \nu} \cos p_\mu \cos p_\nu - 1 \biggr]^{-1} \ . \label{MP}$$ Here the pole of $\mathcal{S}(p)$ should give meson mass. Since $\chi$ is an one-component fermion, it may seem to be difficult to find the pion excitation from Eq. (\[MP\]). However, as we discussed, $\gamma_{5}$ in the staggered fermion is given by $\epsilon_{x}=(-1)^{x_{1}+...+x_{4}}$ and the pion operator is given by $\pi_{x}=\bar{\chi}_{x}i\epsilon_{x}\chi_{x}$. We therefore identify momentum of pion by measuring it from a shifted origin $p=(\pi,\pi,\pi,\pi)$. Here we set $p=(i m_\pi a + \pi, \pi,\pi,\pi)$ for $1/\mathcal{S}(p)=0$ in Eq. (\[MP\]). Then we derive the pion mass $m_\pi$ as $$\begin{aligned} \cosh(m_\pi a) &= 1 + \displaystyle \frac{1-16K^2}{6K^2} \ , \label{HPE-Hoelpi}\end{aligned}$$ where we again set $r=2\sqrt{2}$ for simplicity. In this result, the pion mass becomes zero at $|K|=1/4$, and tachyonic in the range $\mid{K}\mid > 1/4$. It implies that there occurs a phase transition between parity-symmetric and parity-broken phases at $|K|=1/4$, which is consistent with the result from the one-point function in Eq. (\[cond\]). We note that the massless pion at the phase boundary is consistent with the scenario of second-order transition. We can also derive the sigma meson mass by substituting $p=(i m_\pi a, 0, 0, 0)$ for $1/\mathcal{S}(p)=0$ in Eq. (\[MP\]) as $$\begin{aligned} \cosh(m_\sigma a) &= 1 + \displaystyle \frac{1}{2K^2} \ . \label{HPE-Hoelsigma}\end{aligned}$$ Adams type {#subsec:HPE-A} ---------- We investigate the parity-phase structure for the Adams-type staggered-fermion by using $\mathcal{O}(K^{3})$ self-consistent equations in hopping parameter expansion. The approach is basically parallel to that of Hoelbling type. We just need to consider Feynman diagrams for this case. The action Eq. (\[AdamsS\]) is rewritten by redefining $\chi \rightarrow \sqrt{2K} \chi$ with $K=1/[2(M+r)]$ as, $$S = \sum_{x} {{\bar{\chi}}}_x \chi_x + 2K \sum_{x, y} {{\bar{\chi}}}_{x}(\eta_{\mu} D_\mu )_{xy}\chi_{y}+ 2 K r \sum_{x,y} {{\bar{\chi}}}_{x} (M_{A})_{xy} \chi_{y} \ , \label{HPE-Adams}$$ where $M_{A}$ is given in Eq. (\[AdamsM\]). In Fig. \[FR-A\], the Feynman rules in the HPE for this fermion are depicted. ![Feynman rules for the HPE with the Adams fermion. 
$a$ and $b$ stand for the color indices. We show the concrete forms of $\mathcal{W}_{\alpha\mu\beta\nu\gamma\rho\delta\sigma,x}^{(4)}$ in Table \[Table:U4\] of Appendix \[AdamsEff\].[]{data-label="FR-A"}](HPE-Feynman-rule-Adams.eps){height="6cm"} First, we derive meson condensates from the one-point function $\mathcal{M}_{x}=\bar{\chi}_{x}\chi_{x}$. The equation for the one-point function is obtained as shown in Fig. \[One-A\], $$\begin{aligned} - \Sigma_x & \equiv - \langle \mathcal{M}_x \rangle {\nonumber}\\ & = - \langle \mathcal{M}_x \rangle_0 + 2K^2 \sum_\mu \Sigma_x \Sigma_{x+\hat{\mu}} - 2 \cdot \displaystyle \frac {1}{(4!)^2 \cdot 2^3} (Kr)^2 \sum_{\mu \neq \nu \neq \rho \neq \sigma} \Sigma_x \Sigma_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}+\hat{\sigma}} \ . \label{Eq:HPE-Adams1}\end{aligned}$$ ![Feynman diagram for mesonic one-point functions in the $\mathcal{O}(K^{3})$ self-consistent equations of HPE with the Adams fermion. There is a 4-hopping fundamental diagram, which is peculiar to this fermion.[]{data-label="One-A"}](HPEwAdams.eps){height="3cm"} We substitute $\Sigma_x=\sigma_x +i \epsilon_x \pi_x$ for $\Sigma_{x}$ in Eq. (\[Eq:HPE-Adams1\]) and obtain the self-consistent equation $$- \left( \sigma +i \epsilon_x \pi \right)= -1 + 2K^2 \cdot 4 \left( \sigma^2 + \pi^2 \right) - 2 \cdot \displaystyle \frac {1}{(4!)^2 \cdot 2^3} (Kr)^2 \cdot 4! \left( \sigma +i \epsilon_x \pi \right)^2 \ . \label{HPE-AdamsSelf1}$$ From this, we obtain $- \sigma = -1 + 16 K^2 \pi^2$ and $- i \pi = - 8K^2 \cdot 2 i \sigma \pi$. Here we have set $r=16\sqrt{3}$ to make the equation simple. We again have two solutions: For $\pi=0$, we have a trivial solution $\sigma=1$. For $\pi \neq 0$, we have a non-trivial solution as $$\sigma = \displaystyle \frac{1}{16K^2},\,\,\,\,\,\,\,\,\,\,\,\,\,\, \pi = \pm \sqrt{ \displaystyle \frac{1}{16K^2} \left( 1- \displaystyle \frac{1}{16K^2} \right) } \ . \label{condA}$$ It indicates that parity-broken phase appears in the range of the hopping parameter as $\mid{K}\mid > 1/4$ or equivalently $-16\sqrt{3}-2<M<-16\sqrt{3}+2$. Next, we discuss the meson mass from a two-point function of the meson operator $\mathcal{S}(0,x)\equiv\langle\mathcal{M}_{0}\mathcal{M}_{x} \rangle$. From Fig. \[Two-A\] we derive the following equation for two-point functions, ![Feynman diagram for mesonic two-point functions for $ \mathcal{O}(K^{3})$ self-consistent equations of HPE with the Adams fermion.[]{data-label="Two-A"}](HPEwAdams-Mass.eps){height="4cm"} $$\begin{aligned} \mathcal{S}(0,x) = &\langle {{\bar{\chi}}}_0^a\chi_0^a {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ =&- \delta_{0x} N_c {\nonumber}\\ &-K^2 \langle {{\bar{\chi}}}_0^a \chi_0^a {{\bar{\chi}}}_0^c (\eta_{\mu,0})^2 \biggl[ U_{\mu,0}^{cd} \chi_{\hat{\mu}}^d {{\bar{\chi}}}_{\hat{\mu}}^e (U_{\mu,0}^{\dagger})^{ef}+ (U_{\mu,-\hat{\mu}}^{\dagger})^{cd} \chi_{-\hat{\mu}}^d {{\bar{\chi}}}_{-\hat{\mu}}^e U_{\mu,-\hat{\mu}}^{ef}\biggr] \chi_0^f {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ & + \left( 2 K r \epsilon \eta_5 \displaystyle \frac{1}{4! 
\cdot 2^4} \right)^2 \langle {{\bar{\chi}}}_0^a \chi_0^a {{\bar{\chi}}}_0^c \sum_{\alpha,\beta,\gamma,\delta=\pm} \sum_{\mu \neq \nu \neq \rho \neq \sigma} \biggl[ \left( \mathcal{W}_{\alpha\mu\beta\nu\gamma\rho\delta\sigma,0}^{(4)} \right)^{cd} \chi_{\alpha\hat{\mu}\beta\hat{\nu}\gamma\hat{\rho}\delta\hat{\sigma}}^d {{\bar{\chi}}}_{\alpha\hat{\mu}\beta\hat{\nu}\gamma\hat{\rho}\delta\hat{\sigma}}^e {\nonumber}\\ & \times \left( \mathcal{W}_{\alpha\mu\beta\nu\gamma\rho\delta\sigma,0}^{(4) \dagger} \right)^{ef} \biggr] \chi_0^f {{\bar{\chi}}}_x^b \chi_x^b \rangle \ , \label{HPE-AdamsTwo} \end{aligned}$$ where $\mathcal{W}_{\alpha\mu\beta\nu\gamma\rho\delta\sigma,x}^{(4)}$ is defined in Table. \[Table:U4\] of Appendix \[AdamsEff\]. By integrating out the link variables in the strong-coupling limit, it is simplified as $$\begin{aligned} \mathcal{S}(0,x) \equiv \langle {{\bar{\chi}}}_0^a\chi_0^a {{\bar{\chi}}}_x^b \chi_x^b \rangle = &- \delta_{0x} N_c + K^2 \sum_{\pm \mu} \langle \chi_{\hat{\mu}}^a {{\bar{\chi}}}_{\hat{\mu}}^a {{\bar{\chi}}}_x^b \chi_x^b \rangle {\nonumber}\\ & - \left( 2 K r \displaystyle \frac{1}{4! \cdot 2^4} \right)^2 \sum_{\substack{\pm \mu, \pm \nu, \pm \rho, \pm \sigma \\ (\mu \neq \nu \neq \rho \neq \sigma)}} \langle \chi_{\hat{\mu}+\hat{\nu}+\hat{\rho} +\hat{\sigma}}^a {{\bar{\chi}}}_{\hat{\mu}+\hat{\nu}+\hat{\rho} +\hat{\sigma}}^a {{\bar{\chi}}}_x^b \chi_x^b \rangle \ . \label{Eq:HPE-Adams2}\end{aligned}$$ Then the self-consistent equation for $\mathcal{S}$ is given in the momentum space as $$\begin{aligned} \mathcal{S}(p) &= - N_c + \biggl[- K^2 \sum_\mu \left( e^{-ip_\mu} + e^{ip_\mu} \right) {\nonumber}\\ &+ \left( 2 K r \displaystyle \frac{1}{4! \cdot 2^4} \right)^2 \sum_{\alpha,\beta,\gamma,\delta=\pm} \sum_{\mu \neq \nu \neq \rho \neq \sigma} e^{+i(\alpha p_\mu + \beta p_\nu + \gamma p_\rho + \delta p_\sigma)} \biggr] \ . \label{HPE-AdamsSelf2}\end{aligned}$$ We finally obtain the meson propagator as $$S(p) = N_c \biggl[ - 2 K^2 \sum_\mu \cos p_\mu + 16 \left( 2 K r \displaystyle \frac{1}{4! \cdot 2^4} \right)^2 \sum_{\mu \neq \nu \neq \rho \neq \sigma} \cos p_\mu \cos p_\nu \cos p_\rho \cos p_\sigma- 1 \biggr]^{-1} \ . \label{MP2}$$ Here we set $p=(i m_\pi a + \pi, \pi,\pi,\pi)$ for $1/\mathcal{S}(p)=0$ in Eq. (\[MP2\]), which gives the pion mass $m_\pi$ as $$\begin{aligned} \cosh(m_\pi a) &= 1 + \displaystyle \frac{1-16K^2}{10K^2} \ , \label{HPE-Adamspi}\end{aligned}$$ where we again set $r=16\sqrt{3}$ for simplicity. Here the pion mass becomes zero at $\mid{K}\mid = 1/4$ and becomes tachyonic in the range $\mid{K}\mid > 1/4$. It suggests that there occurs a second-order phase transition between parity-symmetric and broken phases at $|K|=1/4$, which is consistent with Eq. (\[condA\]). We can also derive the sigma meson mass by substituting $p=(i m_\pi a, 0, 0, 0)$ for $1/\mathcal{S}(p)=0$ in Eq. (\[MP2\]) as $$\begin{aligned} \cosh(m_\sigma a) &= 1 + \displaystyle \frac{1}{6K^2} \ . \label{HPE-Asigma}\end{aligned}$$ Effective Potential Analysis {#sec:EPA} ============================ In the previous section, we have investigated the parity-phase structure in hopping parameter expansion. We found a strong sign of parity-broken phase for $\mid{K}\mid >1/4$. In order to judge whether the parity-broken phase is realized as a vacuum, the analysis of the gap solution in the hopping parameter expansion is not enough, and we need to investigate the effective potential for meson fields. 
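As a quick numerical summary of the hopping-parameter-expansion results above, the short script below (an illustrative sketch of ours, not part of the original analysis) evaluates the gap-equation solution of Eqs. (\[cond\]) and (\[condA\]) and the pion masses of Eqs. (\[HPE-Hoelpi\]) and (\[HPE-Adamspi\]); it makes explicit that the nontrivial pion condensate exists only for $|K|>1/4$ and that $m_{\pi}$ vanishes as $|K|\to 1/4$ from below, for both types of staggered-Wilson fermions.

```python
import numpy as np

def condensates(K):
    """O(K^3) gap-equation solutions, Eqs. (cond)/(condA): the nontrivial branch
    sigma = 1/(16 K^2), pi^2 = sigma (1 - sigma) is real only for |K| > 1/4;
    otherwise only the trivial solution (sigma, pi) = (1, 0) survives."""
    sigma = 1.0 / (16.0 * K**2)
    pi2 = sigma * (1.0 - sigma)
    return (sigma, np.sqrt(pi2)) if pi2 > 0.0 else (1.0, 0.0)

def mpi_a(K, fermion="hoelbling"):
    """Pion mass in lattice units, Eqs. (HPE-Hoelpi)/(HPE-Adamspi),
    valid in the parity-symmetric phase |K| < 1/4."""
    denom = {"hoelbling": 6.0, "adams": 10.0}[fermion]
    return np.arccosh(1.0 + (1.0 - 16.0 * K**2) / (denom * K**2))

for K in (0.20, 0.24, 0.249, 0.26, 0.30):
    sigma, pi = condensates(K)
    line = f"K = {K:5.3f}   sigma = {sigma:5.3f}   pi = {pi:5.3f}"
    if abs(K) < 0.25:
        line += f"   m_pi a = {mpi_a(K):5.3f} (Hoelbling), {mpi_a(K, 'adams'):5.3f} (Adams)"
    else:
        line += "   (parity-broken phase)"
    print(line)
```

The critical value $|K_{c}|=1/4$ found here will reappear as ${\hat{M}}^{2}_{c}=4$ in the effective potential analysis below.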
In this section, we consider the effective potential of meson fields for $SU(N)$ lattice gauge theory with staggered-Wilson fermions. In the strong-coupling limit and the large $N$ limit, effective action can be exactly derived by integrating the link variables [@KaS; @AokiP]. Then, by solving a saddle-point equation, we can investigate a vacuum and find meson condensations. In this section we again begin with the Hoelbling case as exercise, and go on to Adams fermion with better discrete symmetry. Hoelbling type {#hoelbling-type} -------------- In the strong-coupling limit we can drop plaquette action. Then the partition function for meson fields $\mathcal{M}_x=({{\bar{\chi}}}_x \chi_x)/N$ with the source $J_{x}$ is given by $$\begin{aligned} Z(J) &= \displaystyle \int {{\cal D}}\left[ \chi, {{\bar{\chi}}}, U \right] \exp \left[ N \sum_x J_x \mathcal{M}_x + S_F \right] \ . \label{ZJ}\end{aligned}$$ where $S_{F}$ stands for the fermion action. Here we have defined $\mathcal{M}_x$ with a $1/N$ factor to ensure it to have a certain large $N$ limit. In this case, $S_F$ is the Hoelbling-type staggered-Wilson action Eq. (\[HoelS\]). $N$ stands for the number of color. In the large $N$ limit, we can perform the link integral. We here consider the effective action up to $\mathcal{O}(\mathcal{M}^{3})$ for meson field $\mathcal{M}$. This order corresponds to the $\mathcal{O}(K^{3})$ self-consistent equation in the hopping parameter expansion. We develop a method to perform the link-variable integral with multi-hopping fermion action terms. In our method, we perform the link integral by introducing two kinds of link-variable measures. Now we formally rewrite the partition function as, $$\begin{aligned} Z(J) &= \displaystyle \int {{\cal D}}\left[ \chi, {{\bar{\chi}}}\right] \exp \left[ \sum_x N \left( J_x + {\hat{M}}\right) \mathcal{M}_x \right] \exp \left[ \sum_x N W (\Lambda) \right] \ ,\end{aligned}$$ where we define ${\hat{M}}=M+2r$ and the last term is expressed as, $$\begin{aligned} \exp \left[ \sum_x N W (\Lambda) \right] &= \prod_x Z_x \ , {\nonumber}\\ Z_x &= \displaystyle \int \left( \prod_{\mu \neq \nu} {{\cal D}}\left[ U_{\mu,x}, U_{\mu,x+\hat{\nu}} \right] \right) \exp \left[ - \left( {\mathrm{Tr}}(VE^\dagger) - {\mathrm{Tr}}(V^\dagger E) \right) \right] \ . \label{WZ}\end{aligned}$$ \[sec:EPA-H\] $\Lambda$, $E$ and $E^\dagger$ ($V$ and $V^\dagger$) are composites of the fermion field $\chi$ (link variables $U$), which we will explicitly show later. $W(\Lambda)$ is a function of $\Lambda$, which will be an essential part of effective potential of meson fields. Now we explain how the integral in Eq. (\[WZ\]) can be performed by using two types of the link measure. Let us consider two-dimensional cases in Fig. \[H-2link\] for simplicity. In this case, $U_{\mu,x}$ and $U_{\mu, x+\hat{\nu}} \ (\mu \neq \nu)$ form link variables in a square block. We can classify diagrams to $\mathcal{O}(\mathcal{M}^{3})$ into three types: (1) 1-link ($\mu$) + 1-link ($-\mu$) hoppings, (2) 2-link ($\mu,\nu$) + 2-link ($-\nu,-\mu$) hoppings, (3) 2-link ($\mu,\nu$) + 1-link ($-\nu$) + 1-link ($-\mu$) hoppings. The 1-link hopping comes from the usual staggered kinetic term while the 2-link hopping from the flavored-mass term. (1) and (2) are $\mathcal{O}(\mathcal{M}^{2})$ while (3) is $\mathcal{O}(\mathcal{M}^{3})$. Since one square block contains all the three diagrams, we can derive the effective potential by integrating link variables per each block. 
We note that $\mathcal{O}(\mathcal{M}^{3})$ diagrams cancel between one another, which is consistent with the HPE calculations. We can also avoid double-counting by adjusting factors for one-link and two-link terms as we will show in Eq. (\[Lambda\]). ![Link variables corresponding to the two kinds of measures in the partition function Eq. (\[WZ\]) in a 2 dimensional case.[]{data-label="H-2link"}](Hoelbling-2link.eps){height="5cm"} In this method, we need to define sets of link variables and fermion bilinears as $V$ and $E$ in Eq. (\[WZ\]) : $V$ and $E$ are matrices including components corresponding to $1$- and $2$-link terms. We call a space spanned by these matrices “hopping space". Here we define $a,b$ and $\alpha, \beta$ as color and hopping space indices respectively. We also denote ${\mathrm{Tr}}$ as trace for color and the hopping space. Explicit forms of $V$ and $E$ are given by $$\begin{aligned} V^{ab}_{\alpha \beta} &= \mathrm{diag} \left( V_1^{ab}, V_2^{ab}, V_3^{ab} \right) \ ,\end{aligned}$$ with $$\begin{aligned} V_1 &=\mathrm{diag} \left(U_{\mu,x} \right) \ {\nonumber}\\ &\equiv \mathrm{diag} \underbrace{ \left( U_{1,x}, U_{2,x}, \cdots, U_{4,x} \right) }_4 \ , \\ V_2 &=\mathrm{diag} \left( U_{\mu,x} U_{\nu,x+\hat{\mu}} \right) \ {\nonumber}\\ &\equiv \mathrm{diag} \underbrace{ \left(U_{1,x} U_{2,x+\hat{1}}, U_{1,x} U_{3,x+\hat{1}} , \cdots, U_{4,x} U_{3,x+\hat{4}}\right) }_{12} \ , \\ V_3 &=\mathrm{diag} \left( U_{\mu,x+\hat{\nu}} U_{\nu,x+\hat{\mu}}^\dagger \right) \ {\nonumber}\\ &\equiv \mathrm{diag} \underbrace{ \left(U_{1,x+\hat{2}} U_{2,x+\hat{1}}^\dagger, U_{1,x+\hat{3}} U_{3,x+\hat{1}}^\dagger , \cdots, U_{4,x+\hat{3}} U_{3,x+\hat{4}}^\dagger \right) }_{12} \ ,\end{aligned}$$ $$\begin{aligned} E^{ab}_{\alpha \beta} &= \mathrm{diag} \left( E_1^{ab}, E_2^{ab}, E_3^{ab} \right) \ ,\end{aligned}$$ and $$\begin{aligned} E_1 &= \mathrm{diag} \left( D_{1,\mu} \right) \ {\nonumber}\\ &\equiv \mathrm{diag} \underbrace{ \left( D_{1,1}, D_{1,2}, \cdots, D_{1,4} \right) }_4 \ , \\ E_i &= \mathrm{diag} \left( D_{i,\mu\nu} \right) \ {\nonumber}\\ &\equiv \mathrm{diag} \underbrace{ \left( D_{i,12}, D_{i,13}, \cdots, D_{i,43} \right) }_{12} , \ (i=2,3) \ ,\end{aligned}$$ where we define the operator $D$ as the fermion bilinears, $$\begin{aligned} \left( D_{1,\mu}^\dagger \right)^{ab} &= \displaystyle \frac {1}{2} \eta_{\mu,x} {{\bar{\chi}}}_x^a \chi_{x+\hat{\mu}}^b \ , \quad \left( D_{1,\mu} \right)^{ab} = \displaystyle \frac {1}{2} \eta_{\mu,x} {{\bar{\chi}}}_{x+\hat{\mu}}^a \chi_x^b \ , \\ \left( D_{2,\mu \nu}^\dagger \right)^{ab} &= \displaystyle \frac {ir}{2^3 \sqrt{3}} \eta_{\mu \nu,x} {{\bar{\chi}}}_x^a \chi_{x+\hat{\mu}+\hat{\nu}}^b \ , \quad \left( D_{2,\mu \nu} \right)^{ab} = \displaystyle \frac {ir}{2^3 \sqrt{3}} \eta_{\mu \nu,x} {{\bar{\chi}}}_{x+\hat{\mu}+\hat{\nu}}^a \chi_x^b \ , \\ \left( D_{3,\mu \nu}^\dagger \right)^{ab} &= \displaystyle \frac {ir}{2^3 \sqrt{3}} \eta_{\mu \nu,x+\hat{\nu}} {{\bar{\chi}}}_{x+\hat{\nu}}^a \chi_{x+\hat{\mu}}^b \ , \quad \left( D_{3,\mu \nu} \right)^{ab} = \displaystyle \frac {ir}{2^3 \sqrt{3}} \eta_{\mu \nu,x+\hat{\nu}} {{\bar{\chi}}}_{x+\hat{\mu}}^a \chi_{x+\hat{\nu}}^b \ .\end{aligned}$$ Here $V_{1}$ and $E_{1}$ are $4\times 4$ diagonal matrices while $V_{i}$ and $E_{i}$ $(i=2,3)$ are $12\times 12$ diagonal matrices. Now we have prepared to obtain $W(\Lambda)$. 
By using the relation $U^{\dagger}U=1$, we obtain the Schwinger-Dyson equation, $$\begin{aligned} \displaystyle \frac{\partial^2 Z_x}{\partial E^{ab}_{\alpha \beta} \partial \left( E^\dagger \right)^{bc}_{\beta \gamma}} &= - \delta_{ca} \delta^{\alpha \gamma} Z_x \ .\end{aligned}$$ $W$ should be a function of a gauge-invariant quantities as follows. $$\begin{aligned} \Lambda^{ab}_{\alpha \beta} &= \displaystyle \frac {1}{N^2} \left( E^\dagger E \right)^{ab}_{\alpha \beta} \ .\end{aligned}$$ We can solve the Schwinger-Dyson equation analytically and derive $W$ as a function of $\Lambda$, $$\begin{aligned} W(\Lambda) &= {\mathrm{Tr}}\left[ \left( 1-4 \Lambda \right)^{1/2} - 1 - \ln \left[ \displaystyle \frac {1+\left( 1-4 \Lambda \right)^{1/2}}{2} \right] \right] \ .\end{aligned}$$ We here perform trace for colors and hopping spaces. $$\begin{aligned} \sum_x W(\Lambda) &= - \sum_x \left[ \left( 1-4 \Lambda_x \right)^{1/2} - 1 - \ln \left[ \displaystyle \frac {1+ \left( 1-4 \Lambda_x \right)^{1/2}}{2} \right] \right] \ .\end{aligned}$$ Finally we obtain a concrete form of $\Lambda$ as $$\begin{aligned} \Lambda_x &= \displaystyle \frac {1}{8} \left[ \sum_\mu \mathcal{M}_x \mathcal{M}_{x+\hat{\mu}} + \displaystyle \frac {1}{3} \sum_{\mu \neq \nu} \mathcal{M}_{x+\hat{\mu}} \mathcal{M}_{x+\hat{\mu}+\hat{\nu}} \right] -\left( \displaystyle \frac {r}{2^3 \sqrt{3}} \right)^2 \sum_{\mu \neq \nu} \left( \mathcal{M}_x \mathcal{M}_{x+\hat{\mu}+\hat{\nu}} + \mathcal{M}_{x+\hat{\nu}} \mathcal{M}_{x+\hat{\mu}}\right) \ . \label{Lambda}\end{aligned}$$ The first and second terms correspond to contribution from $D_{1,\mu},D_{1,\mu}^\dagger$, and the third and forth terms correspond to contribution from $D_{i,\mu},D_{i,\mu}^\dagger \ (i=2,3)$. Now we again set $r=2\sqrt{2}$ to compare the result to that of the hopping parameter expansion in Sec. \[sec:HPE\]. We need to change the fermion measure to the meson-field measure as $$\begin{aligned} \displaystyle \int {{\cal D}}\left[ \chi, {{\bar{\chi}}}\right] &= \displaystyle \int {{\cal D}}\mathcal{M} \exp \left[ -N \sum_x \ln \mathcal{M}_x \right] \ .\end{aligned}$$ Then the effective partition function for the meson field is given by $$\begin{aligned} Z(J) &= \displaystyle \int {{\cal D}}\mathcal{M} \exp \left[ N \left( \sum_x J_x \mathcal{M}_x + S_{\mathrm{eff}}(\mathcal{M}) \right) \right] \ , \\ S_{\mathrm{eff}}(\mathcal{M}) &= \sum_x \left( {\hat{M}}\mathcal{M}_x - \ln \mathcal{M}_x \right) + \sum_x W(\Lambda) \ , \label{EAc1}\end{aligned}$$ where we denote ${\hat{M}}$ as the shifted mass parameter ${\hat{M}}= M+2r$. The partition function with $J=0$ in the large $N$ limit is reduced to the integrant for the saddle-point values of the meson fields, $$\begin{aligned} Z(J=0) &= \displaystyle \int {{\cal D}}\mathcal{M} \exp \left[ N S_{\mathrm{eff}}(\mathcal{M}) \right] \ {\nonumber}\\ & \sim \exp \left[ N S_{\mathrm{eff}}(\bar{\mathcal{M}}) \right], \quad \left( N \rightarrow \infty \right) \ .\end{aligned}$$ Now we consider pion condensate. For now, we consider only scalar $\sigma$ and pseudo-scalar $\pi$ fields as $$\begin{aligned} \bar{\mathcal{M}}_{x} &= \sigma + i \epsilon_x \pi \ , \\ &= \Sigma e^{i\epsilon_x \theta} \ .\end{aligned}$$ By substituting this form of the meson field into the Eq. 
(\[EAc1\]), we derive the effective action for the $\Sigma$ and $\theta$, $$\begin{aligned} S_{\mathrm{eff}}(\bar{\mathcal{M}}) &= {\hat{M}}\sum_x \Sigma \cos \theta - \sum_x \ln \Sigma \ {\nonumber}\\ &- \sum_x \left[ \left( 1- 4 \cdot 2 \Sigma^2 \sin^2 \theta \right)^{1/2} - \ln \left[ \displaystyle \frac{1+\left( 1- 4 \cdot 2 \Sigma^2 \sin^2 \theta \right)^{1/2}}{2} \right] \right] \ .\end{aligned}$$ We ignore the irrelevant constant. From the translational invariance, we factorize the 4-dimensional volume $V_4$ from the effective action as $S_{\mathrm{eff}}(\bar{M})=-V_4 V_\mathrm{eff}(\Sigma,\theta)$. Then the effective potential $V_{\mathrm{eff}}$ is given by $$\begin{aligned} V_{\mathrm{eff}}(\Sigma,\theta) &= - {\hat{M}}\Sigma \cos \theta + \ln \Sigma \ {\nonumber}\\ &+ \left[ \left( 1- 4 \cdot 2 \Sigma^2 \sin^2 \theta \right)^{1/2} - \ln \left[ \displaystyle \frac{1+\left( 1- 4 \cdot 2 \Sigma^2 \sin^2 \theta \right)^{1/2}}{2} \right] \right] \ . \label{effS1}\end{aligned}$$ Now let us look into the vacuum structure of this effective potential by solving the saddle-point condition, which are given by $$\begin{aligned} \displaystyle \frac {\partial V_{\mathrm{eff}}(\Sigma,\theta)}{\partial \Sigma} &= - {\hat{M}}\cos \theta + \displaystyle \frac {1}{\Sigma} - \displaystyle \frac {8 \Sigma \sin^2 \theta}{1+ \left( 1- 4 \cdot 2 \Sigma^2 \sin^2 \theta \right)^{1/2}} \quad =0 \ , \\ \displaystyle \frac {\partial V_{\mathrm{eff}}(\Sigma,\theta)}{\partial \theta} &= \Sigma \sin \theta \left[ {\hat{M}}- \displaystyle \frac {8 \Sigma \cos \theta}{1 + \left( 1- 4 \cdot 2 \Sigma^2 \sin^2 \theta \right)^{1/2}} \right] \quad =0 \ .\end{aligned}$$ Here we find two types of solutions for these equations depending on whether $\theta$ is zero or nonzero: For a trivial solution $\theta=0$, we have $\Sigma=1/{\hat{M}}$. For $\theta \neq 0$, the stationary conditions are written as $$\begin{aligned} &{\hat{M}}\Sigma - \cos \theta =0 \ , \\ &1 - \displaystyle \frac {8 \Sigma^2}{1+ \left( 1- 4 \cdot 2 \Sigma^2 \sin^2 \theta \right)^{1/2}} =0 \ .\end{aligned}$$ Then, we find a solution for $\theta\not= 0$ as $$\begin{aligned} \Sigma &= \bar{\Sigma}= \sqrt{ \displaystyle \frac {1}{8-{\hat{M}}^2}} \ , \\ \sin^2 \theta &= \sin^2 \bar{\theta}= \displaystyle \frac {2 ( 4-{\hat{M}}^2 ) }{8-{\hat{M}}^2} \ .\end{aligned}$$ Now we need to figure out which solution is realized as the vacuum of the theory by comparing the potentials for the two solutions. We easily show for ${\hat{M}}^2 < 4$, $$V_{\mathrm{eff}}(1/{\hat{M}},0)-V_{\mathrm{eff}}(\bar{\Sigma},\bar{\theta})>0 \ .$$ while $V_{\mathrm{eff}}(1/{\hat{M}},0)-V_{\mathrm{eff}}(\bar{\Sigma},\bar{\theta})<0$ for ${\hat{M}}^2 > 4$. Thus the vacuum of the strong-coupling QCD with the Hoelbling-type staggered-Wilson fermion is given by the following: For ${\hat{M}}^2 > 4$ or equivalently $M > -4\sqrt{2}+2,\,\, M<-4\sqrt{2}-2$, there is only the chiral condensate as $$\begin{aligned} \displaystyle \frac {1}{N} \langle {{\bar{\chi}}}\chi \rangle &= \Sigma \cos \theta \quad = \displaystyle \frac {1}{{\hat{M}}} \ , \\ \displaystyle \frac {1}{N} \langle {{\bar{\chi}}}i \epsilon_x \chi \rangle &= \Sigma \sin\theta \quad = 0 \ .\end{aligned}$$ For ${\hat{M}}^2 < 4$ or equivalently $-4\sqrt{2}-2<M<-4\sqrt{2}+2$, there is pion condensate which breaks parity symmetry spontaneously. 
$$\begin{aligned} \displaystyle \frac {1}{N} \langle {{\bar{\chi}}}\chi \rangle &= \bar{\Sigma} \cos \bar{\theta} \quad = \displaystyle \frac {{\hat{M}}}{8-{\hat{M}}^2} \ , \\ \displaystyle \frac {1}{N} \langle {{\bar{\chi}}}i \epsilon_x \chi \rangle &= \bar{\Sigma} \sin \bar{\theta} \quad = \pm \displaystyle \frac {\sqrt{ 2(4-{\hat{M}}^2)}}{8-{\hat{M}}^2} \ . \label{picond}\end{aligned}$$ The sign of the pion condensate Eq. (\[picond\]) reflects the $Z_2$ parity symmetry of the theory. The critical mass parameter $M_{c}=-4\sqrt{2}\pm 2$ and the range for the Aoki phase $-4\sqrt{2}-2<M<-4\sqrt{2}+2$ is consistent with those of the hopping parameter expansion shown below Eq. (\[cond\]). These results strongly suggest the existence of the parity-broken phase in the lattice QCD although it is just a strong-coupling limit. Figure \[trans1\] shows that the phase transition is second-order. ![The pion condensate undergoes second-order phase transition.[]{data-label="trans1"}](trans1.eps){height="7cm"} We can also derive mass spectrum of mesons by expanding the effective action Eq. (\[EAc1\]) up to the quadratic terms of the meson-excitation field $\Pi_x=\mathcal{M}_x-\bar{\mathcal{M}}_x$. Since we are interested in the chiral limit which is taken from the parity-symmetric phase to the critical line, we here concentrate on the pion mass in the parity-symmetric phase. For the parity-symmetric phase (${\hat{M}}^{2}>4$), the quadratic part of the effective action is given by $$\begin{aligned} S_\mathrm{eff}(\mathcal{M}) - S_\mathrm{eff}(\bar{\mathcal{M}}) &= \sum_{x,y} S_\mathrm{eff}^{(2)} (x,y) \Pi_x \Pi_y \nonumber\\ &= \int_{-\pi}^\pi \displaystyle \frac{d^4p}{(2\pi)^4} \Pi(-p) \mathcal{D} \Pi(p) \ ,\end{aligned}$$ where $\Pi(p)$ is the Fourier component of $\Pi_x$, and $$\begin{aligned} \mathcal{D} &= \displaystyle \frac {1}{2\Sigma^2} + \left[ \displaystyle \frac{1}{4} \sum_\mu \cos p_\mu - \displaystyle \frac{1}{24} \sum_{\mu \neq \nu} \left( \cos p_{\mu + \nu} + \cos p_{\mu - \nu} \right) \right] \ , \label{HpoD}\end{aligned}$$ with $p_{\mu\pm\nu}\equiv p_{\mu}\pm p_{\nu}$. Then we obtain the pion mass by solving $\mathcal{D}=0$ at $p=(i m_\pi a + \pi, \pi,\pi,\pi)$ as $$\cosh(m_{\pi}a) = 1+{2{\hat{M}}^{2}-8\over{3}} \ . \label{HpiM}$$ By using the definition $K=1/2{\hat{M}}$ with ${\hat{M}}=M+2r$ and $r=2\sqrt{2}$, we find $\cosh(m_{\pi}a)=1+(1-16K^{2})/6K^{2}$, which is consistent with the result of the hopping parameter expansion Eq. (\[HPE-Hoelpi\]): The pion mass becomes zero at the critical mass ${\hat{M}}^{2}=4$, which indicates there occurs a second-order phase transition between parity-symmetric and broken phases in the strong-coupling limit. By defining quark mass as $m_{q}a={\hat{M}}-{\hat{M}}_{c}$, we find PCAC relation near the critical mass as $$(m_{\pi}a)^{2} = {16\over{3}}m_{q}a+\mathcal{O}(a^{2}) \ . \label{HpiM2}$$ We can also study a case for non-zero spacial momenta by considering $p=(iEa+\pi,p_{1}a+\pi,p_{2}a+\pi,p_{3}a+\pi)$ in Eq. (\[HpoD\]). By using the pion mass Eq. (\[HpiM2\]) and re-normalizing the Dirac operator as $-{8\over{3}}\mathcal{D}\to\mathcal{D}$, we show that Eq. (\[HpoD\]) results in the Lorentz-covariant form up to $\mathcal{O}(a)$ discretization errors, $$\mathcal{D}=(E^{2}-{\bf p}^{2}-m_{\pi}^{2})a^{2}+\mathcal{O}(a^{3}) \ , \label{HLorentz}$$ with ${\bf p}^{2}=p_{1}^{2}+p_{2}^{2}+p_{3}^{2}$. As we discussed in Sec. \[sec:Sym\], we expected that we may find a sign of Lorentz symmetry breaking of the Hoelbling fermion in this study. 
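As an illustrative cross-check of Eqs. (\[HpoD\])–(\[HLorentz\]) we include the following small numerical sketch. It is ours, not part of the original derivation; all function and variable names (e.g. `pion_kernel`) are our own, and the kernel is simply Eq. (\[HpoD\]) with $\Sigma=1/\hat{M}$ transcribed into code. It checks that the closed-form pion mass is a zero of $\mathcal{D}$, that the PCAC coefficient approaches $16/3$, and that $-(8/3)\mathcal{D}$ reproduces the covariant form at small momenta.

```python
import numpy as np

# Sketch (ours): the pion kernel of Eq. (HpoD) with Sigma = 1/Mhat in the symmetric phase.
def pion_kernel(p, mhat):
    c = np.cos(p)                                   # p: length-4 array of (complex) lattice momenta
    s1 = c.sum()
    s2 = sum(np.cos(p[m] + p[n]) + np.cos(p[m] - p[n])
             for m in range(4) for n in range(4) if m != n)
    return 0.5 * mhat**2 + 0.25 * s1 - s2 / 24.0

mhat = 2.002                                        # slightly inside the parity-symmetric phase
m_pi = np.arccosh(1.0 + (2.0 * mhat**2 - 8.0) / 3.0)   # Eq. (HpiM), in lattice units

# (i) the closed-form pion mass is a zero of D at p = (i m_pi + pi, pi, pi, pi)
pole = np.array([np.pi + 1j * m_pi, np.pi, np.pi, np.pi])
print("D at the pole:", abs(pion_kernel(pole, mhat)))          # ~ 1e-16

# (ii) PCAC: (m_pi a)^2 / (m_q a) -> 16/3 as m_q a = Mhat - 2 -> 0
for mq in (1e-2, 1e-3, 1e-4):
    m = np.arccosh(1.0 + (2.0 * (2.0 + mq)**2 - 8.0) / 3.0)
    print(f"m_q a = {mq:.0e}: (m_pi a)^2 / (m_q a) = {m**2 / mq:.4f}")   # -> 16/3 = 5.3333

# (iii) covariant dispersion, Eq. (HLorentz): -(8/3) D ~ E^2 - p^2 - m_pi^2 for small momenta
E, pvec = 0.12, np.array([0.03, 0.04, 0.0])
p = np.array([np.pi + 1j * E, *(np.pi + pvec)])
print("-(8/3) D        =", (-8.0 / 3.0 * pion_kernel(p, mhat)).real)
print("E^2 - p^2 - m^2 =", E**2 - pvec @ pvec - m_pi**2)
```

The near-equality in check (iii) makes the Lorentz-covariant form of the pion dispersion relation explicit at this order.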
In the end, however, we do not find any pathology caused by the symmetry breaking in this strong-coupling study. We consider that this is because Lorentz symmetry breaking appears mainly in the gluon sector, as shown in [@Steve], and it is difficult to detect it in the meson sector in this limit. In future work, we may be able to find it by including higher-order corrections in $1/g^{2}$ and $1/N$. We here discuss the possibility of other condensates. For this purpose, we consider a general form of the meson field as $$\bar{\mathcal{M}}_{x}=\sigma+i\epsilon_{x}\pi+\sum_{\mu}(-1)^{x_{\mu}}v_{\mu} +\sum_{\mu}i\epsilon_{x}(-1)^{x_{\mu}}a_{\mu} +\sum_{\mu>\nu}(-1)^{x_{\mu}+x_{\nu}}t_{\mu\nu} \ , \label{GeM}$$ where we define the vector, axial-vector and tensor meson fields as $v_{\mu}$, $a_{\mu}$ and $t_{\mu\nu}$. We can easily show that there is no other condensate by substituting this general form, Eq. (\[GeM\]), into the meson action Eq. (\[EAc1\]). Thus we conclude that the vacuum we obtained is the true one. The results in this section suggest that the chiral limit can be taken in Hoelbling-fermion lattice QCD in a manner parallel to the Wilson fermion. However, we probably need to tune other parameters to restore Lorentz symmetry in lattice QCD with this fermion type. Therefore, what we can state here is only that, if we succeed in restoring Lorentz symmetry by parameter tuning, this fermion could be applied to lattice QCD in the same way as the Wilson fermion. Adams type {#sec:EPA-A} ---------- We next investigate the case of the Adams fermion. We again consider the effective potential up to $\mathcal{O}(\mathcal{M}^{3})$. The derivation is almost the same as in the Hoelbling case in Subsec. \[sec:EPA-H\]. The main difference between them is the length of the multi-link hoppings: the Adams-type fermion includes four-hopping terms, while the Hoelbling one has two-hopping terms. In Appendix \[AdamsEff\], we derive the effective potential for the Adams-type fermion; here we only summarize the results. In this case, we again set $r=16\sqrt{3}$ to match the result to that of the hopping parameter expansion in Sec. \[sec:HPE\]. We can derive the effective potential for the scalar and pseudo-scalar fields by assuming condensation of the form $\mathcal{M}_{x} = \Sigma e^{i\epsilon_{x}\theta}$. We note that the functional form of the effective potential is the same as Eq. (\[effS1\]). By solving the saddle-point equations, we find that the critical mass is given by ${\hat{M}}^{2}_{c}=4$, or equivalently $M_c=-16\sqrt{3}\pm2$ with ${\hat{M}}=M+r$ and $r=16\sqrt{3}$. The vacuum in this case is given by the following: For ${\hat{M}}^2 > 4$, i.e. $M > -16\sqrt{3}+2$ or $M<-16\sqrt{3}-2$, there is only the chiral condensate, $$\begin{aligned} \displaystyle \frac {1}{N} \langle {{\bar{\chi}}}\chi \rangle &= \Sigma \cos \theta \quad = \displaystyle \frac {1}{{\hat{M}}} \ , \\ \displaystyle \frac {1}{N} \langle {{\bar{\chi}}}i \epsilon_x \chi \rangle &= \Sigma \sin\theta \quad = 0 \ .\end{aligned}$$ For ${\hat{M}}^2 < 4$, i.e. $-16\sqrt{3}-2<M<-16\sqrt{3}+2$, there emerges a pion condensate which breaks parity symmetry spontaneously.
$$\begin{aligned} \displaystyle \frac {1}{N} \langle {{\bar{\chi}}}\chi \rangle &= \bar{\Sigma} \cos \bar{\theta} \quad = \displaystyle \frac {{\hat{M}}}{8-{\hat{M}}^2} \ , \\ \displaystyle \frac {1}{N} \langle {{\bar{\chi}}}i \epsilon_x \chi \rangle &= \bar{\Sigma} \sin \bar{\theta} \quad = \pm \displaystyle \frac {\sqrt{ 2(4-{\hat{M}}^2)}}{8-{\hat{M}}^2} \ .\end{aligned}$$ We note that the critical mass and the parameter range of the Aoki phase are consistent with those of the hopping parameter expansion shown below Eq. (\[condA\]). This result again supports the existence of the parity-broken phase in lattice QCD with the Adams fermion. The behavior of the pion condensate in this case is also given by Fig. \[trans1\], which shows that the phase transition is second-order. This is consistent with the second-order scenario, in which we can take a chiral limit by tuning the mass parameter. We derive the pion mass from the quadratic part of the effective potential in a way parallel to the Hoelbling type. The pion mass for this case is given by $$\cosh(m_{\pi}a) = 1+{2{\hat{M}}^{2}-8\over{5}} \ , \label{ApiM}$$ for the parity-symmetric phase (${\hat{M}}^{2}>4$). By using the definition $K=1/2{\hat{M}}$ with ${\hat{M}}=M+r$ and $r=16\sqrt{3}$, we find $\cosh(m_{\pi}a)=1+(1-16K^{2})/10K^{2}$, which is consistent with the result of the hopping parameter expansion Eq. (\[HPE-Adamspi\]): the pion mass becomes zero at the critical mass ${\hat{M}}^{2}=4$, which indicates that a second-order phase transition between the parity-symmetric and broken phases occurs in the strong-coupling limit. The PCAC relation also holds near the critical mass in this case: $$(m_{\pi}a)^{2} = {16\over{5}}m_{q}a+\mathcal{O}(a^{2}) \ . \label{ApiM2}$$ We can also show that the Lorentz-covariant dispersion relation is recovered in the continuum limit in the Adams-type formalism, $\mathcal{D}=(E^{2}-{\bf p}^{2}-m_{\pi}^{2})a^{2}+\mathcal{O}(a^{3})$. It is reasonable that the Lorentz-symmetric dispersion relation is recovered, since the Adams fermion has a discrete symmetry group sufficiently large for Lorentz symmetry restoration. We can also examine the possibility of other condensates in the same way: we can show that there are no other condensates by substituting the general form of the meson fields (\[GeM\]) into the mesonic action for this case. All these results indicate that, in the Adams-type staggered-Wilson formulation, we can take a chiral limit by tuning a mass parameter toward the second-order critical line from the parity-symmetric phase. Since the Adams fermion has sufficient discrete symmetry, it can be straightforwardly applied to lattice QCD in a manner parallel to Wilson fermions. Discussion on two-flavor case {#sec:Tf} ============================= ![Conjectured pion mass behaviors as functions of the mass parameter for two-flavor Hoelbling fermions and a single Adams fermion. In both cases the PCAC relation holds in the parity-symmetric phase.[]{data-label="Fig:mpi-HA"}](Aoki-Phase-Mass-Hoelbling-Adams-2flavor.eps){height="5cm"} In this section, we discuss parity-flavor breaking for two-flavor staggered-Wilson fermions. We first consider the two-flavor Hoelbling-type fermion action. In this case, except that the reduced discrete symmetry would require further parameter tuning, the situation is quite similar to that of the Wilson fermion [@AokiP]. We here assume that the mass parameters for the two flavors, $M_{f}$ $(f=1,2)$, are equal, which means there is an exact $SU(2)$ flavor symmetry.
The chiral and pion condensates are given by $$\begin{aligned} {1\over{N}}\langle \bar{\chi}_{f}\chi_{f} \rangle &= \Sigma_{f} \cos \theta_{f} = {1\over{\hat{M}_{f}}} \ , \label{2fout1} \\ {1\over{N}}\langle \bar{\chi}_{f}i\epsilon_{x}\chi_{f} \rangle &= \Sigma_{f} \sin \theta_{f} = 0 \ , \label{2fout2}\end{aligned}$$ for $\hat{M}_{f}^{2}\geq 4$ (parity-symmetric phase), while they are given by $$\begin{aligned} {1\over{N}}\langle \bar{\chi}_{f}\chi_{f} \rangle &= \bar{\Sigma}_{f} \cos \bar{\theta}_{f} = {\hat{M}_{f}\over{8-\hat{M}_{f}^{2}}} \ , \\ {1\over{N}}\langle \bar{\chi}_{f}i\epsilon_{x}\chi_{f} \rangle &= \bar{\Sigma}_{f} \sin \bar{\theta}_{f} = \pm {\sqrt{2(4-\hat{M}_{f}^{2})}\over{8-\hat{M}_{f}^{2}}} \ , \label{2fcon}\end{aligned}$$ for $\hat{M_{f}}^{2}< 4$ (Aoki phase). Here we have assumed that only the diagonal condensates in the flavor space ([*i.e.*]{} neutral condensates) can take finite values. Here we remind you of $\hat{M}_{f}=M_{f}+2r$ with $r=2\sqrt{2}$. We first look into the parity-symmetric phase. Although $SU(2)$ chiral symmetry is explicitly broken due to the flavored-mass term, three-massless pions appear on the second-order phase boundary due to divergence of correlation length as shown in Fig. \[Fig:mpi-HA\]. In the parity-broken phase, things depend on whether or not $\bar{\theta}_{1}$ and $\bar{\theta}_{2}$ have the same sign in Eq. (\[2fcon\]). For $\bar{\theta}_{1}=\bar{\theta}_{2}$, $$\begin{aligned} &\langle \bar{\chi}i\epsilon_{x}\chi \rangle\not= 0 \ , \nonumber\\ &\langle \bar{\chi}i\epsilon_{x}\tau_{i}\chi \rangle=0 \ , \,\,\,\,\,(i=1,2,3) \ , \end{aligned}$$ where $\tau_{i}$ is the Pauli matrix and $\chi$ stands a doublet $\chi=(\chi_{1},\chi_{2})^T$. For $\bar{\theta}_{1}=-\bar{\theta}_{2}$, $$\begin{aligned} &\langle \bar{\chi}i\epsilon_{x}\chi \rangle=0 \ , \nonumber\\ &\langle \bar{\chi}i\epsilon_{x}\tau_{1}\chi \rangle =0 \ , \nonumber\\ &\langle \bar{\chi}i\epsilon_{x}\tau_{2}\chi \rangle =0 \ , \nonumber\\ &\langle \bar{\chi}i\epsilon_{x}\tau_{3}\chi \rangle \not=0 \ . \label{2fVW}\end{aligned}$$ From Vafa-Witten’s theorem [@VW], we expect that the latter vacuum ($\bar{\theta}_{1}=-\bar{\theta}_{2}$) realizes [@ACV]. It is also possible to check this by studying next-leading-order calculation of $1/N$ or $1/g^{2}$ expansions. If the latter scenario realizes, the flavored pion condensate Eq. (\[2fVW\]) breaks $SU(2)$ flavor symmetry into its $U(1)$ subgroup as well as parity symmetry [^1]. Thus, in the parity-broken phase, we have two-massless pions as NG bosons associated with spontaneous breaking of the flavor symmetry. We summarize them in Fig. \[Fig:mpi-HA\]. This situation is qualitatively the same as the case of Wilson fermion [@AokiP] except possibility of further parameter tuning to recover Lorentz symmetry. The Adams-type staggered-Wilson fermion is more fascinating. It has two flavors for each branch in the first place. In this case there is no exact $SU(2)$ flavor symmetry due to taste-mixing in original staggered fermions. It means that in the parity-broken phase there is no massless excitation since there is no continuous symmetry to be broken as Fig. \[Fig:mpi-HA\]. However the number of massless pions in the chiral limit (on the boundary) depends on residual discrete flavor symmetry in the pion sector. If the discrete flavor symmetry is not sufficient to have a degenerate pion triplet, we have only one massless pion in the chiral limit although these three are expected to be degenerate in the continuum limit. 
If the symmetry is large enough, we have three massless pions on the phase boundary. The latter is quite a fascinating scenario because we could simulate two-flavor QCD with a single lattice fermion. It is possible to study this question by looking into the transfer matrix symmetry or the chiral Lagrangian potential. Recently, Ref. [@Steve] has reported that a classification of pion operators based on the transfer matrix symmetry indicates three degenerate pions. The Adams fermion could become a new standard lattice fermion in the near future. Summary and Discussion {#sec:SD} ====================== In this paper we have investigated strong-coupling lattice QCD with staggered-Wilson fermions, with emphasis on the parity-broken phase (Aoki phase) structure. We considered the hopping parameter expansion and an effective potential analysis in the strong-coupling limit. We have shown that the parity-broken phase and the second-order phase boundary exist for both Adams-type and Hoelbling-type staggered-Wilson fermions, which is consistent with the second-order scenario for a chiral limit. In Sec. \[sec:Sym\], we discuss and classify the discrete symmetries of the two types of staggered-Wilson fermions. We show that they are invariant under charge conjugation and the parity transformation, the latter of which is defined as a shift in the 4th direction followed by spatial axis reversal. We also discuss the reduced rotational symmetry of the Hoelbling fermion, which would require further parameter tuning as shown in [@Steve]. In Sec. \[sec:HPE\], we analyze staggered-Wilson fermions by using the hopping parameter expansion. From one-point functions of meson fields in the expansion, we find that the pion condensate becomes nonzero in some range of the hopping parameter. From two-point functions, we show that the squared pion mass becomes zero on the boundary and becomes negative in the parameter region with a nonzero pion condensate. These results suggest that there is a parity-broken phase and a second-order phase boundary. In Sec. \[sec:EPA\], we study the effective potential for meson fields in the strong-coupling and large-$N$ limits to elaborate the phase structure in detail. Here we develop a method to derive the effective potential for lattice fermion actions with multiple-hopping terms. The gap equations from the effective potential exhibit a nonzero pion condensate in the same parameter range as the hopping parameter expansion. From this analysis, we also show that the pion becomes massless on the second-order phase boundary, and the PCAC relation is reproduced around the boundary. If this property carries over into the weak-coupling regime, we can take a chiral limit by tuning a mass parameter in lattice QCD with staggered-Wilson fermions, as with the Wilson fermion. In Sec. \[sec:Tf\], we discuss the two-flavor cases. The situation for two-flavor Hoelbling-type fermions is similar to the original Wilson fermion except for the reduced rotational symmetry: three massless pions are expected to appear on the second-order critical lines, while two of them remain massless in the Aoki phase due to the flavored pion condensate. However, we probably need to worry about Lorentz symmetry breaking in this case; thus we cannot straightforwardly apply it to two-flavor lattice QCD. The Adams-type staggered-Wilson fermion contains two flavors in each branch. Although the taste-mixing breaks flavor symmetry at finite lattice spacing, it does not necessarily imply three non-degenerate pions.
Moreover, $SU(2)$ flavor symmetry should be recovered at least in the continuum limit, and three massless pions emerge if we take the chiral and continuum limits. In this case, there is no rotational symmetry breaking, and the hypercubic symmetry will be recovered in the continuum limit. We can thus perform two-flavor QCD simulations with the Adams-type staggered-Wilson fermion more efficiently than usual. All of these results show new possibilities for lattice fermion formulations. In particular, the Adams fermion can be straightforwardly applied to two-flavor lattice QCD since it does not require any other fine-tuning and automatically has two flavors. Taking into account the smaller numerical cost of staggered fermions, there is a possibility that it would be numerically better than Wilson fermions, especially as an overlap kernel [@PdF]. We finally note that the analysis here does not include contributions from some of the higher-hopping terms or higher meson fields. To confirm our results, we need to perform the same analysis with these higher contributions. In future work, we can also study the detailed meson mass spectrum and the possibility of other small condensates in the Aoki phase. TM is thankful to D. Adams, M. Creutz, M. Golterman, T. Izubuchi and S. Sharpe for fruitful discussions. We are thankful to P. de Forcrand for fruitful discussions. TK and TN are supported by Grants-in-Aid for the Japan Society for the Promotion of Science (JSPS) Research Fellows (Nos. 22-3314, 23-593). TM is supported by a Grant-in-Aid for the Japan Society for the Promotion of Science (JSPS) Postdoctoral Fellows for Research Abroad (24-8). This work is supported in part by the Grants-in-Aid for Scientific Research from JSPS (Nos. 09J01226, 10J03314, 11J00593, 23340067, 24340054, and 24540271), and by the Grant-in-Aid for the global COE program “The Next Generation of Physics, Spun from Universality and Emergence" from MEXT. This work is based on fruitful discussions in the YIPQS-HPCI workshop “New-Type of Fermions on the Lattice", Feb. 9-24, 2012 at the Yukawa Institute for Theoretical Physics. The authors are grateful to the organizers for giving them the chance to develop an interest in the present topics. spin and flavor separation {#SFS} ========================== From one staggered field we define 16 species fields in momentum space as $\phi(p)_{A}\equiv\chi(p+\pi_{A})$ $(-\pi/2\leq p_{\mu} <\pi/2)$, where the $\pi_{A}$ ($A=1,2,...,16$) are 4-dimensional vectors whose components take the values $0$ or $\pi$. For convenience, we here consider the 16-multiplet field $\phi(p)=(\phi(p)_{1}, \phi(p)_{2},\cdots, \phi(p)_{16})^{T}$. As this 16-multiplet field carries both the spinor (space-time) and the flavor (taste) indices, we can construct two sets of Clifford generators $\Gamma_{\mu}$ and $\Xi_{\mu}$, which operate on the spinor and flavor spaces of the momentum-space field $\phi(p)$. They satisfy the Clifford algebra $$\begin{aligned} \{ \Gamma_{\mu},\Gamma_{\nu} \}=2\delta_{\mu\nu} \ , \\ \{ \Xi_{\mu},\Xi_{\nu} \}=2\delta_{\mu\nu} \ , \\ \{ \Gamma_{\mu}, \Xi_{\nu} \}=0 \ .\end{aligned}$$ By using these definitions, the Dirac operator for the staggered fermion is given by $D_{st}=i\Gamma_{\mu}\sin p_{\mu}$ for the 16-multiplet $\phi(p)$ [^2].
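As a small self-contained illustration (ours; the explicit matrices below are one convenient tensor-product representation of eight mutually anticommuting generators and are not claimed to be the particular spin-taste basis used in the text), the three anticommutation relations above can be checked numerically by taking the first four generators as $\Gamma_\mu$ and the last four as $\Xi_\mu$:

```python
import numpy as np
from functools import reduce

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2, dtype=complex)

def kron(*factors):
    return reduce(np.kron, factors)

# Eight mutually anticommuting, Hermitian, involutory 16x16 matrices (Jordan-Wigner-type chain).
gens = [kron(s1, one, one, one), kron(s2, one, one, one),
        kron(s3, s1, one, one),  kron(s3, s2, one, one),
        kron(s3, s3, s1, one),   kron(s3, s3, s2, one),
        kron(s3, s3, s3, s1),    kron(s3, s3, s3, s2)]
Gamma, Xi = gens[:4], gens[4:]

def anticommutator(a, b):
    return a @ b + b @ a

ok = True
for m in range(4):
    for n in range(4):
        ok &= np.allclose(anticommutator(Gamma[m], Gamma[n]), 2 * (m == n) * np.eye(16))
        ok &= np.allclose(anticommutator(Xi[m], Xi[n]), 2 * (m == n) * np.eye(16))
        ok &= np.allclose(anticommutator(Gamma[m], Xi[n]), 0)
print("Clifford relations satisfied:", ok)

# Because the Gamma_mu anticommute and square to one, D_st(p)^dagger D_st(p) = sum_mu sin^2(p_mu) * 1,
# which vanishes exactly at the 16 corners p_mu in {0, pi}: the 16 species of the staggered field.
```

The final comment makes explicit why the free operator $D_{st}$ has its sixteen zeros at the corners of the Brillouin zone.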
Strong-coupling analysis for Adams-type {#AdamsEff} ======================================= $\alpha$ $\beta$ $\gamma$ $\delta$ $\mathcal{W}_{\alpha\mu\beta\nu\gamma\rho\delta\sigma,x}^{(4)}$ ---------- --------- ---------- ---------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- $+$ $+$ $+$ $+$ $U_{\mu,x} U_{\nu,x+\hat{\mu}} U_{\rho,x+\hat{\mu}+\hat{\nu}} U_{\sigma,x+\hat{\mu}+\hat{\nu}+\hat{\rho}}$ $-$ $-$ $-$ $-$ $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}-\hat{\nu}}^\dagger U_{\rho,x-\hat{\mu}-\hat{\nu}-\hat{\rho}}^\dagger U_{\sigma,x-\hat{\mu}-\hat{\nu}-\hat{\rho}-\hat{\sigma}}^\dagger$ $-$ $+$ $+$ $+$ $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}} U_{\rho,x-\hat{\mu}+\hat{\nu}} U_{\sigma,x-\hat{\mu}+\hat{\nu}+\hat{\rho}}$ $+$ $-$ $-$ $-$ $U_{\mu,x} U_{\nu,x+\hat{\mu}-\hat{\nu}}^\dagger U_{\rho,x+\hat{\mu}-\hat{\nu}-\hat{\rho}}^\dagger U_{\sigma,x+\hat{\mu}-\hat{\nu}-\hat{\rho}-\hat{\sigma}}^\dagger$ $+$ $-$ $+$ $+$ $U_{\mu,x} U_{\nu,x+\hat{\mu}-\hat{\nu}}^\dagger U_{\rho,x+\hat{\mu}-\hat{\nu}} U_{\sigma,x+\hat{\mu}-\hat{\nu}+\hat{\rho}}$ $-$ $+$ $-$ $-$ $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}} U_{\rho,x-\hat{\mu}+\hat{\nu}-\hat{\rho}}^\dagger U_{\sigma,x-\hat{\mu}+\hat{\nu}-\hat{\rho}-\hat{\sigma}}^\dagger$ $+$ $+$ $-$ $+$ $U_{\mu,x} U_{\nu,x+\hat{\mu}} U_{\rho,x+\hat{\mu}+\hat{\nu}-\hat{\rho}}^\dagger U_{\sigma,x+\hat{\mu}+\hat{\nu}-\hat{\rho}}$ $-$ $-$ $+$ $-$ $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}-\hat{\nu}}^\dagger U_{\rho,x-\hat{\mu}-\hat{\nu}} U_{\sigma,x-\hat{\mu}-\hat{\nu}+\hat{\rho}-\hat{\sigma}}^\dagger$ $+$ $+$ $+$ $-$ $U_{\mu,x} U_{\nu,x+\hat{\mu}} U_{\rho,x+\hat{\mu}+\hat{\nu}} U_{\sigma,x+\hat{\mu}+\hat{\nu}+\hat{\rho}-\hat{\sigma}}^\dagger$ $-$ $-$ $-$ $+$ $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}-\hat{\nu}}^\dagger U_{\rho,x-\hat{\mu}-\hat{\nu}-\hat{\rho}}^\dagger U_{\sigma,x-\hat{\mu}-\hat{\nu}-\hat{\rho}}$ $+$ $+$ $-$ $-$ $U_{\mu,x} U_{\nu,x+\hat{\mu}} U_{\rho,x+\hat{\mu}+\hat{\nu}-\hat{\rho}}^\dagger U_{\sigma,x+\hat{\mu}+\hat{\nu}-\hat{\rho}-\hat{\sigma}}^\dagger$ $+$ $-$ $+$ $-$ $U_{\mu,x} U_{\nu,x+\hat{\mu}-\hat{\nu}}^\dagger U_{\rho,x+\hat{\mu}-\hat{\nu}} U_{\sigma,x+\hat{\mu}-\hat{\nu}+\hat{\rho}-\hat{\sigma}}^\dagger$ $+$ $-$ $-$ $+$ $U_{\mu,x} U_{\nu,x+\hat{\mu}-\hat{\nu}}^\dagger U_{\rho,x+\hat{\mu}-\hat{\nu}-\hat{\rho}}^\dagger U_{\sigma,x+\hat{\mu}-\hat{\nu}-\hat{\rho}}$ $-$ $+$ $+$ $-$ $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}} U_{\rho,x-\hat{\mu}+\hat{\nu}} U_{\sigma,x-\hat{\mu}+\hat{\nu}+\hat{\rho}-\hat{\sigma}}^\dagger$ $-$ $-$ $+$ $+$ $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}-\hat{\nu}}^\dagger U_{\rho,x-\hat{\mu}-\hat{\nu}} U_{\sigma,x-\hat{\mu}-\hat{\nu}+\hat{\rho}}$ $-$ $+$ $-$ $+$ $U_{\mu,x-\hat{\mu}}^\dagger U_{\nu,x-\hat{\mu}} U_{\rho,x-\hat{\mu}+\hat{\nu}-\hat{\rho}}^\dagger U_{\sigma,x-\hat{\mu}+\hat{\nu}-\hat{\rho}}$ In this chapter, we show the derivation of the effective potential for the Adams-type fermion in the strong-coupling limit. To derive the effective potential for the Adams type, we replace Eq. (\[WZ\]) by Eq. (\[WZA\]) in the Adams type. 
$$\begin{aligned} &\exp \left[ \sum_x N W (\Lambda) \right] = \prod_x Z_x , {\nonumber}\\ Z_x &= \displaystyle \int \left( \prod_{\mu \neq \nu \neq \rho \neq \sigma} {{\cal D}}\left[ U_{\mu,x}, U_{\mu,x+\hat{\nu}}, U_{\rho, x+\hat{\mu}+\hat{\nu}}, U_{\sigma, x+\hat{\mu}+\hat{\nu}+\hat{\rho}} \right] \right) \exp \left[ - \left( {\mathrm{Tr}}(VE^\dagger) - {\mathrm{Tr}}(V^\dagger E) \right) \right] \ . \label{WZA}\end{aligned}$$ Here we represent the partition function as $4$ link integrals with $U_{\mu,x}$, $U_{\nu, x+\hat{\mu}}$, $U_{\rho, x+\hat{\mu}+\hat{\nu}}$, $U_{\sigma, x+\hat{\mu}+\hat{\nu}+\hat{\rho}}$. $V$ and $E$ in Eq. (\[WZA\]) are the matrices which include components corresponding to $1$-, $2$-, $3$-, and $4$-link terms. The components of $V$ and $E$ consist of link variables and fermion fields respectively. The concrete forms of the $V$ and $E$ for this case are given by $$\begin{aligned} V^{ab}_{\alpha \beta} &= \mathrm{diag} \left( V_1^{ab}, V_2^{ab}, \cdots, V_{11}^{ab} \right) \ ,\end{aligned}$$ with$$\begin{aligned} V_1 &=\mathrm{diag} \left(U_{\mu,x} \right) \ , \\ V_2 &=\mathrm{diag} \left( U_{\mu,x} U_{\nu,x+\hat{\mu}} U_{\rho,x+\hat{\mu}+\hat{\nu}} U_{\sigma,x+\hat{\mu}+\hat{\nu}+\hat{\rho}} \right) \ , \\ V_3 &=\mathrm{diag} \left( U_{\mu,x}^\dagger U_{\nu,x} U_{\rho,x+\hat{\nu}} U_{\sigma,x+\hat{\nu}+\hat{\rho}} \right) \ , \\ V_4 &=\mathrm{diag} \left( U_{\mu,x+\hat{\nu}} U_{\nu,x+\hat{\mu}}^\dagger U_{\rho,x+\hat{\mu}} U_{\sigma,x+\hat{\mu}+\hat{\rho}} \right) \ , \\ V_5 &=\mathrm{diag} \left( U_{\mu,x+\hat{\rho}} U_{\nu,x+\hat{\mu}+\hat{\rho}} U_{\rho,x+\hat{\mu}+\hat{\nu}}^\dagger U_{\sigma,x+\hat{\mu}+\hat{\nu}} \right) \ , \\ V_6 &=\mathrm{diag} \left( U_{\mu,x+\hat{\sigma}} U_{\nu,x+\hat{\mu}+\hat{\sigma}} U_{\rho,x+\hat{\mu}+\hat{\nu}+\hat{\sigma}} U_{\sigma,x+\hat{\mu}+\hat{\nu}+\hat{\rho}}^\dagger \right) \ , \\ V_7 &=\mathrm{diag} \left( U_{\mu,x+\hat{\rho}+\hat{\sigma}} U_{\nu,x+\hat{\mu}+\hat{\rho}+\hat{\sigma}} U_{\rho,x+\hat{\mu}+\hat{\nu}+\hat{\sigma}}^\dagger U_{\sigma,x+\hat{\mu}+\hat{\nu}}^\dagger \right) \ , \\ V_8 &=\mathrm{diag} \left( U_{\mu,x+\hat{\nu}+\hat{\sigma}} U_{\nu,x+\hat{\mu}+\hat{\sigma}}^\dagger U_{\rho,x+\hat{\mu}+\hat{\sigma}} U_{\sigma,x+\hat{\mu}+\hat{\rho}}^\dagger \right) \ , \\ V_9 &=\mathrm{diag} \left( U_{\mu,x+\hat{\nu}+\hat{\rho}} U_{\nu,x+\hat{\mu}+\hat{\rho}}^\dagger U_{\rho,x+\hat{\mu}}^\dagger U_{\sigma,x+\hat{\mu}} \right) \ , \\ V_{10} &=\mathrm{diag} \left( U_{\mu,x+\hat{\nu}}^\dagger U_{\nu,x}^\dagger U_{\rho,x} U_{\sigma,x+\hat{\rho}} \right) \ , \\ V_{11} &=\mathrm{diag} \left( U_{\mu,x+\hat{\rho}}^\dagger U_{\nu,x+\hat{\rho}} U_{\rho,x+\hat{\nu}}^\dagger U_{\sigma,x+\hat{\nu}} \right) \ ,\end{aligned}$$ $$\begin{aligned} E^{ab}_{\alpha \beta} &= \mathrm{diag} \left( E_1^{ab}, E_2^{ab}, \cdots, E_{11}^{ab} \right) \ ,\end{aligned}$$ and $$\begin{aligned} E_1 &= \mathrm{diag} \left( D_{1,\mu} \right) \ , \\ E_i &= \mathrm{diag} \left( D_{i,\mu\nu\rho\sigma} \right) , \ (i=2,3,\cdots,11) \ ,\end{aligned}$$ where we define the operator $D$ as the fermion bilinears, $$\begin{aligned} \left( D_{1,\mu}^\dagger \right)^{ab} &= \displaystyle \frac {1}{2} \eta_{\mu,x} {{\bar{\chi}}}_x^a \chi_{x+\hat{\mu}}^b \ , \quad \left( D_{1,\mu} \right)^{ab} = \displaystyle \frac {1}{2} \eta_{\mu,x} {{\bar{\chi}}}_{x+\hat{\mu}}^a \chi_x^b \ , \\ \left( D_{2,\mu \nu \rho \sigma}^\dagger \right)^{ab} &= - s {{\bar{\chi}}}_x^a \chi_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}+\hat{\sigma}}^b \ , \quad \left( D_{2,\mu \nu} \right)^{ab} = s 
{{\bar{\chi}}}_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}+\hat{\sigma}}^a \chi_x^b \ , \\ \left( D_{3,\mu \nu \rho \sigma}^\dagger \right)^{ab} &= - s_\mu {{\bar{\chi}}}_{x+\hat{\mu}}^a \chi_{x+\hat{\nu}+\hat{\rho}+\hat{\sigma}}^b \ , \quad \left( D_{3,\mu \nu} \right)^{ab} = s_\mu {{\bar{\chi}}}_{x+\hat{\nu}+\hat{\rho}+\hat{\sigma}}^a \chi_{x+\hat{\mu}}^b \ , \\ \left( D_{4,\mu \nu \rho \sigma}^\dagger \right)^{ab} &= - s_\nu {{\bar{\chi}}}_{x+\hat{\nu}}^a \chi_{x+\hat{\mu}+\hat{\rho}+\hat{\sigma}}^b \ , \quad \left( D_{4,\mu \nu} \right)^{ab} = s_\nu {{\bar{\chi}}}_{x+\hat{\mu}+\hat{\rho}+\hat{\sigma}}^a \chi_{x+\hat{\nu}}^b \ , \\ \left( D_{5,\mu \nu \rho \sigma}^\dagger \right)^{ab} &= - s_\rho {{\bar{\chi}}}_{x+\hat{\rho}}^a \chi_{x+\hat{\mu}+\hat{\nu}+\hat{\sigma}}^b \ , \quad \left( D_{5,\mu \nu} \right)^{ab} = s_\rho {{\bar{\chi}}}_{x+\hat{\mu}+\hat{\nu}+\hat{\sigma}}^a \chi_{x+\hat{\rho}}^b \ , \\ \left( D_{6,\mu \nu \rho \sigma}^\dagger \right)^{ab} &= - s_\sigma {{\bar{\chi}}}_{x+\hat{\sigma}}^a \chi_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}}^b \ , \quad \left( D_{6,\mu \nu} \right)^{ab} = s_\sigma {{\bar{\chi}}}_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}}^a \chi_{x+\hat{\sigma}}^b \ , \\ \left( D_{7,\mu \nu \rho \sigma}^\dagger \right)^{ab} &= - s_{\rho+\sigma} {{\bar{\chi}}}_{x+\hat{\rho}+\hat{\sigma}}^a \chi_{x+\hat{\mu}+\hat{\nu}}^b \ , \quad \left( D_{7,\mu \nu} \right)^{ab} = s_{\rho+\sigma} {{\bar{\chi}}}_{x+\hat{\mu}+\hat{\nu}}^a \chi_{x+\hat{\rho}+\hat{\sigma}}^b \ , \\ \left( D_{8,\mu \nu \rho \sigma}^\dagger \right)^{ab} &= - s_{\hat{\nu}+\hat{\sigma}} {{\bar{\chi}}}_{x+{\hat{\nu}+\hat{\sigma}}}^a \chi_{x+\hat{\mu}+\hat{\rho}}^b \ , \quad \left( D_{8,\mu \nu} \right)^{ab} = s_{\hat{\nu}+\hat{\sigma}} {{\bar{\chi}}}_{x+\hat{\mu}+\hat{\rho}}^a \chi_{x+{\hat{\nu}+\hat{\sigma}}}^b \ , \\ \left( D_{9,\mu \nu \rho \sigma}^\dagger \right)^{ab} &= - s_{\hat{\nu}+\hat{\rho}} {{\bar{\chi}}}_{x+\hat{\nu}+\hat{\rho}}^a \chi_{x+\hat{\mu}+\hat{\sigma}}^b \ , \quad \left( D_{9,\mu \nu} \right)^{ab} = s_{\hat{\nu}+\hat{\rho}} {{\bar{\chi}}}_{x+\hat{\mu}+\hat{\sigma}}^a \chi_{x+\hat{\nu}+\hat{\rho}}^b \ , \\ \left( D_{10,\mu \nu \rho \sigma}^\dagger \right)^{ab} &= - s_{\hat{\mu}+\hat{\nu}} {{\bar{\chi}}}_{x+\hat{\mu}+\hat{\nu}}^a \chi_{x+\hat{\rho}+\hat{\sigma}}^b \ , \quad \left( D_{10,\mu \nu} \right)^{ab} = s_{\hat{\mu}+\hat{\nu}} {{\bar{\chi}}}_{x+\hat{\rho}+\hat{\sigma}}^a \chi_{x+\hat{\mu}+\hat{\nu}}^b \ , \\ \left( D_{11,\mu \nu \rho \sigma}^\dagger \right)^{ab} &= - s_{\hat{\mu}+\hat{\rho}} {{\bar{\chi}}}_{x+\hat{\mu}+\hat{\rho}}^a \chi_{x+\hat{\nu}+\hat{\sigma}}^b \ , \quad \left( D_{11,\mu \nu} \right)^{ab} = s_{\hat{\mu}+\hat{\rho}} {{\bar{\chi}}}_{x+\hat{\nu}+\hat{\sigma}}^a \chi_{x+\hat{\mu}+\hat{\rho}}^b \ .\end{aligned}$$ Note that $s=r \left(\epsilon \eta_5 \right)_x / (4! \cdot 16), s_\mu=r \left(\epsilon \eta_5 \right)_{x+\hat{\mu}} / (4! \cdot 16),$ and $s_{\mu+\nu}=r \left(\epsilon \eta_5 \right)_{x+\hat{\mu}+\hat{\nu}} / (4! \cdot 16)$. Here $V_{1}$ and $E_{1}$ are $4\times 4$ diagonal matrices while $V_{i}$ and $E_{i}$ $(i=2,3,\cdots,11)$ are $24\times 24$ diagonal matrices. Here we note the situation of the cancellation between the diagrams crossing the different blocks is basically the same as the case for the Hoelbling type although there is difference between the 2-link and 4-link hoppings. We can derive $W$ as a function of $\Lambda$ by using the Schwinger-Dyson equation in a similar way to the Hoelbling type. 
$\Lambda$ is $$\begin{aligned} \Lambda_x &= \displaystyle \frac {1}{16} \biggl[ \sum_\mu \mathcal{M}_x \mathcal{M}_{x+\hat{\mu}} + \displaystyle \frac {1}{3} \sum_{\mu \neq \nu} \mathcal{M}_{x+\hat{\mu}} \mathcal{M}_{x+\hat{\mu}+\hat{\nu}} \ {\nonumber}\\ & + \displaystyle \frac {1}{6} \sum_{\mu \neq \nu \neq \rho} \mathcal{M}_{x+\hat{\mu}+\hat{\nu}} \mathcal{M}_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}} + \displaystyle \frac {1}{6} \sum_{\mu \neq \nu \neq \rho \neq \sigma} \mathcal{M}_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}} \mathcal{M}_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}+\hat{\sigma}} \biggr] \ {\nonumber}\\ &-\left( \displaystyle \frac {r}{4! \cdot 16} \right)^2 \sum_{\mu \neq \nu \neq \rho \neq \sigma} \left( 2 \mathcal{M}_x \mathcal{M}_{x+\hat{\mu}+\hat{\nu}+\hat{\rho}+\hat{\sigma}} +4 \mathcal{M}_{x+\hat{\mu}} \mathcal{M}_{x+\hat{\nu}+\hat{\rho}+\hat{\sigma}} +2 \mathcal{M}_{x+\hat{\mu}+\hat{\nu}} \mathcal{M}_{x+\hat{\rho}+\hat{\sigma}} \right) \ .\end{aligned}$$ Order parameter and zero eigenvalue of staggered-Wilson operator {#zeroeigen} ================================================================ We investigate the relation between the order parameter $\langle {{\bar{\chi}}}i \epsilon_x \tau_3 \chi \rangle$ for spontaneous symmetry breaking and zero eigenvalue of staggered-Wilson operator. In QCD, the chiral condensate $\langle \bar{\psi} \psi \rangle$ which is the order parameter for spontaneous breaking of chiral symmetry relates to the zero eigenvalue of Dirac operator. It is called Banks-Casher relation [@BCrel]. In Wilson fermion, the pion condensate $\langle \bar{\psi} i \gamma_5 \tau_3\psi \rangle$ which is the order parameter for spontaneous breaking of parity-flavor symmetry relates to the zero eigenvalue of Wilson operator. In staggered-Wilson fermion (two-flavor Hoelbling type), we derive the relation between $\langle {{\bar{\chi}}}i \epsilon_x \tau_3 \chi \rangle$ and zero eigenvalue as Wilson fermion. Then we add the external field $H$ for order parameter to the Hoelbling-type staggered-Wilson action Eq. 
(\[HoelS\]), $$\begin{aligned} S_{H}(H) &= {{\bar{\chi}}}\left[ D_{SW}(M) + i \epsilon_x \tau_3 H \right] \chi \ .\end{aligned}$$ The order parameter $\langle {{\bar{\chi}}}i \epsilon_x \tau_3 \chi \rangle$ is represented as, $$\begin{aligned} \lim_{H \rightarrow 0} \langle {{\bar{\chi}}}i \epsilon_x \tau_3 \chi \rangle &= -\lim_{H \rightarrow 0} \lim_{V \rightarrow \infty} \displaystyle \frac{1}{V} \mathrm{Tr} \left( i \epsilon_x \tau_3 \displaystyle \frac{1}{D_{SW}+ i\epsilon_x \tau_3 H} \right) \, {\nonumber}\\ &= -\lim_{H \rightarrow 0} \lim_{V \rightarrow \infty} \displaystyle \frac{1}{V} \mathrm{tr} \left[ i \epsilon_x \left( \displaystyle \frac{1}{D_{SW}+ i\epsilon_x H} - \displaystyle \frac{1}{D_{SW}- i\epsilon_x H} \right) \right] \, {\nonumber}\\ &= -\lim_{H \rightarrow 0} \lim_{V \rightarrow \infty} \displaystyle \frac{1}{V} \mathrm{tr} \left[ i \left( \displaystyle \frac{1}{H_{SW}+ i H} - \displaystyle \frac{1}{H_{SW}- i H} \right) \right] \, {\nonumber}\\ &= -i \lim_{H \rightarrow 0} \lim_{V \rightarrow \infty} \displaystyle \frac{1}{V} \sum_\lambda \langle \lambda \mid \left[ \displaystyle \frac{1}{\lambda+ i H} - \displaystyle \frac{1}{\lambda- i H} \right] \mid \lambda \rangle \, {\nonumber}\\ &= -i \lim_{H \rightarrow 0} \lim_{V \rightarrow \infty} \displaystyle \frac{1}{V} \int d\lambda \rho\left( \lambda \right) \left[ \displaystyle \frac{1}{\lambda+ i H} - \displaystyle \frac{1}{\lambda- i H} \right] \, {\nonumber}\\ &= - \displaystyle \frac{2 \pi \rho(0)}{V} \, {\nonumber}\\ &\equiv - \lim_{\epsilon \rightarrow 0} \lim_{H \rightarrow 0} \lim_{V \rightarrow \infty} \displaystyle \frac{2 \pi \rho(\epsilon)}{V} \ ,\end{aligned}$$ where $H_{SW}=\epsilon_x D_{SW}$ is the Hermitian operator. $\mathrm{Tr}$ means the traces for flavor, color and space while $\mathrm{tr}$ means the traces for color and space. $\lambda$ and $\mid{\lambda}\rangle$ are the eigenvalues and eigenstates and $\rho(\lambda)=\sum_{\lambda^{\prime}} \delta( \lambda - \lambda^\prime )$ is the density of the state. By this analysis, we find the order parameter $\langle {{\bar{\chi}}}i \epsilon_x \tau_3 \chi \rangle$ for spontaneous breaking of parity-flavor symmetry relates to the zero eigenvalue of the staggered-Wilson operator $H_{SW}$. Also, we can derive this relation for Adams fermion in the same way. [99]{} K. G. Wilson, Phys. Rev. D [**10**]{}, 2445 (1974). H. B. Nielsen and M. Ninomiya, Nucl. Phys. B [**185**]{}, 20 (1981); Nucl. Phys. B [**193**]{} 173 (1981); Phys. Lett. B [**105**]{} 219 (1981). M. Creutz, T. Kimura and T. Misumi, JHEP 1012, 041 (2010) \[[[arXiv:1011.0761]{}]{}\]. T. Kimura, M. Creutz and T. Misumi, PoS Lattice2011 (2011) 106 \[arXiv:1110.2482\]. J. B. Kogut and L. Susskind, Phys. Rev. D [**11**]{}, 395 (1975). L. Susskind, Phys. Rev. D [**16**]{}, 3031 (1977). H. S. Sharatchandra, H. J. Thun and P. Weisz, Nucl. Phys. B [**192**]{}, 205 (1981). D. H. Adams, Phys. Rev. Lett. [**104**]{}, 141602 (2010) \[[[arXiv:0912.2850]{}]{}\]. D. H. Adams, Phys. Lett. B [**699**]{}, 394 (2011) \[[[arXiv:1008.2833]{}]{}\]. C. Hoelbling, Phys. Lett. B [**696**]{}, 422 (2011) \[[[arXiv:1009.5362]{}]{}\]. P. de Forcrand, A. Kurkela and M. Panero, PoS Lattice2010 (2011) 080 \[[[arXiv:1102.1000]{}]{}\]: JHEP [**1204**]{} (2012) 142 \[[[arXiv:1202.1867]{}]{}\]. S. Aoki, Phys. Rev. D [**30**]{}, 2653 (1984). S. Aoki, Phys. Rev. D [**33**]{}, 2377 (1986); [**34**]{}, 3170 (1986); Phys. Rev. Lett. [**57**]{} 3136 (1986); Nucl. Phys. B [**314**]{}, 79 (1989). M. 
Creutz, (1996) \[[[arXiv:hep-lat/9608024]{}]{}\]. S. Sharpe and R. Singleton. Jr, Phys. Rev. D [**58**]{}, 074501 (1998) \[[[arXiv:hep-lat/9804028]{}]{}\]. Another interesting scenario is discussed in V. Azcoiti, G. Di Carlo, A. Vaquero, Phys. Rev. D [**79**]{}, 014509 (2009) \[[[arXiv:0809.2972]{}]{}\] and their other works. S. R. Sharpe, Phys. Rev. D [**79**]{}, 054503 (2009) \[[[arXiv:0811.0409]{}]{}\]. P. H. Ginsparg and K. G. Wilson, Phys. Rev. D [**25**]{}, 2649 (1982). N. Neuberger, Phys. Lett. B [**427**]{}, 353 (1998) \[[[arXiv:hep-lat/9801031]{}]{}\]. D. B. Kaplan, Phys. Lett. B [**288**]{}, 342 (1992) \[[[arXiv:hep-lat/9206013]{}]{}\]. V. Furman and Y. Shamir, Nucl. Phys. B [**439**]{}, 54 (1995) \[[[arXiv:hep-lat/9405004]{}]{}\]. M. Creutz, T. Kimura and T. Misumi, Phys. Rev. D [**83**]{}, 094506 (2011) \[[[arXiv:1101.4239]{}]{}\]. T. Misumi, M. Creutz, T. Kimura, T. Z Nakano and A. Ohnishi, PoS Lattice2011 (2011) 108 \[arXiv:1110.1231\]. N. Kawamoto and J. Smit, Nucl. Phys. B[**192**]{}, 100 (1981). C. van den Doel and J. Smit, Nucl. Phys. B [**228**]{}, 122 (1983). M. F. L. Golterman and J. Smit, Nucl. Phys. B [**245**]{}, 61 (1984). G. W. Kilcup and S. Sharpe, Nucl. Phys. B [**283**]{}, 493 (1987). S. Sharpe, Talk in YIPQS-HPCI workshop “New-Type of Fermions on the Lattice" (2012), http://www2.yukawa.kyoto-u.ac.jp/ws/2011/newtype/Talk-slides/sharpe-kyoto12-1.pdf\ and manuscript in preparation. T. Kimura, S. Komatsu, T. Misumi, T. Noumi, S. Torii and S. Aoki, JHEP [**01 (2012)**]{} 048 \[[[arXiv:1111.0402]{}]{}\]. C. Vaffa and E.  Witten, Phys. Rev. Lett [**53**]{}, 535 (1984). T. Banks and A. Casher, Nucl. Phys. B [**169**]{}, 103 (1980). [^1]: In Appendix \[zeroeigen\], we show the relation between an order parameter of the phase transition and zero eigenvalues of staggered-Wilson operator, as Wilson fermion. [^2]: The origin of the discrepancy between this form and the usual spin-taste representation is clearly elaborated in the reference, G. P. Lepage, \[arXiv:1111.2955\].
--- abstract: | A non-uniform hypergraph $H=(V,E)$ consists of a vertex set $V$ and an edge set $E\subseteq 2^V$; the edges in $E$ are not required to all have the same cardinality. The set of all cardinalities of edges in $H$ is denoted by $R(H)$, the set of edge types. For a fixed hypergraph $H$, the Turán density $\pi(H)$ is defined to be $\lim_{n\to\infty}\max_{G_n}h_n(G_n)$, where the maximum is taken over all $H$-free hypergraphs $G_n$ on $n$ vertices satisfying $R(G_n)\subseteq R(H)$, and $h_n(G_n)$, the so called Lubell function, is the expected number of edges in $G_n$ hit by a random full chain. This concept, which generalizes the Turán density of $k$-uniform hypergraphs, is motivated by recent work on extremal poset problems. The details connecting these two areas will be revealed in the end of this paper. Several properties of Turán density, such as supersaturation, blow-up, and suspension, are generalized from uniform hypergraphs to non-uniform hypergraphs. Other questions such as “Which hypergraphs are degenerate?" are more complicated and don’t appear to generalize well. In addition, we completely determine the Turán densities of $\{1,2\}$-hypergraphs. author: - 'Travis Johnston [^1]' - 'Linyuan Lu [^2]' title: 'Turán Problems on Non-uniform Hypergraphs' --- Introduction ============ A hypergraph $H$ is a pair $(V,E)$; $V$ is the vertex set, and $E\subseteq 2^V$ is the edge set. If all edges have the same cardinality $k$, then $H$ is a $k$-uniform hypergraph. Turán problems on $k$-uniform hypergraphs have been actively studied for many decades. However, Turán problems on non-uniform hypergraphs are rarely considered (see [@MZ; @Lemons] for two different treatments). Very recently, several groups of people have started actively studying extremal families of sets avoiding given sub-posets. Several new problems have been established. One of them is the diamond problem: [**The diamond conjecture: [@GriLu]**]{} Any family ${\mathcal{F}}$ of subsets of $[n]:=\{1,2,\ldots,n\}$ with no four sets $A,B,C,D$ satisfying $A\subseteq B\cap C$, $B\cup C \subseteq D$ can have at most $(2+o(1)){\binom{n}{\lfloor \frac{n}{2}\rfloor}}$ subsets. This conjecture, along with many other problems, motivates us to study Turán-type problems on non-uniform hypergraphs. The details of this connection will be given in the last section. We briefly review the history of Turán Problems on uniform hypergraphs. Given a positive integer $n$ and a $k$-uniform hypergraph $H$ on $n$ vertices (or $k$-graph, for short), the Turán number ${{\rm ex}}(n,H)$ is the maximum number of edges in a $k$-graph on $n$ vertices that does not contain $H$ as a subgraph; such a graph is called $H$-*free*. Katona et al. [@KNS] showed that $f(n,H)={{\rm ex}}(n,H)/{n\choose k}$ is a decreasing function of $n$. The limit $\displaystyle \pi(H)=\lim_{n\to \infty} f(n,H)$, which always exists, is called the [*Turán density*]{} of $H$. For $k=2$, the graph case, Erdős-Stone-Simonovits proved $\pi(G)=1-\frac{1}{\chi(G)-1}$ for any graph $G$ with chromatic number $\chi(G)\geq 3$. If $G$ is bipartite, then ${{\rm ex}}(n,G)=o(n^2)$. The magnitude of ${{\rm ex}}(n,G)$ is unknown for most bipartite graphs $G$. Let $K^r_k$ denote the complete $r$-graph on $k$ vertices. Turán determined the value of ${{\rm ex}}(n,K^2_{k})$ which implyies that $\pi(K^2_k)=1-\frac{1}{k-1}$ for all $k\geq 3$. However, no Turán density $\pi(K^r_k)$ is known for any $k>r\geq 3$. The most extensively studied case is when $k=4$ and $r=3$. 
Turán conjectured [@Tu] that $\pi(K_4^3)= 5/9$. Erdős [@Er81] offered \$500 for determining any $\pi(K^r_k)$ with $k>r\geq 3$ and \$1000 for answering it for all $k$ and $r$. The upper bounds for $\pi(K_4^3)$ have been sequentially improved: $0.6213$ (de Caen [@dC94]), $0.5936$ (Chung-Lu [@ChLu]), $0.56167$ (Razborov [@Razborov], using flag algebra method.) There are a few uniform hypergraphs whose Turán density has been determined: the Fano plane [@FS05; @KS05a], expanded triangles [@KS05b], $3$-books, $4$-books [@FMP], $F_5$ [@FrFu83], extended complete graphs [@Pik], etc. In particular, Baber [@baber] recently found the Turán density of many $3$-uniform hypergraphs using flag algebra methods. For a more complete survey of methods and results on uniform hypergraphs see Peter Keevash’s survey paper [@KeevashSurvey]. A non-uniform hypergraph $H=(V,E)$ consists of a vertex set $V$ and an edge set $E\subseteq 2^V$. Here the edges of $E$ could have different cardinalities. The set of all the cardinalities of edges in $H$ is denoted by $R(H)$, the set of edge types. For a fixed hypergraph $H$, the Turán density $\pi(H)$ is defined to be $\lim_{n\to\infty}\max_{G_n}h_n(G_n)$, where the maximum is taken over all $H$-free hypergraphs $G_n$ on $n$ vertices satisfying $R(G_n)\subseteq R(H)$. $h_n(G_n)$, the so called Lubell function, is the expected number of edges in $G_n$ hit by a random full chain. The Lubell function has been a very useful tool in extremal poset theory, in particular it has been used in the study of the diamond conjecture. In section 2, we show that our notion of $\pi(H)$ is well-defined and is consistent with the usual definition for uniform hypergraphs. We also give examples of Turán densities for several non-uniform hypergraphs. In section 3, we generalize the supersaturation Lemma to non-uniform hypergraphs. Then we prove that blowing-up will not affect the Turán density. Using various techniques, we determine the Turán density of every $\{1,2\}$-hypergraph in section 4. Remarkably, the Turán densities of $\{1,2\}$-hypergraphs are in the set $$\bigg\{1,\frac{9}{8}, \frac{5}{4}, \frac{3}{2}, \frac{5}{3}, \ldots, 2-\frac{1}{k},\ldots \bigg \}.$$ Among $r$-uniform hypergraphs, $r$-partite hypergraphs have the smallest possible Turán density. Erdős proved that any $r$-uniform hypergraph forbidding the complete $r$-uniform $r$-partite hypergraphs can have at most $O(n^{r-1/\delta})$ edges. We generalize this theorem to non-uniform hypergraphs. A hypergraph is degenerate if it has the smallest possible Turán density. For $r$-uniform hypergraphs, a hypergraph $H$ is degenerate if and only if $H$ is the subgraph of a blow-up of a single edge. Unlike the degenerate $r$-uniform hypergraphs, the degenerate non-uniform hypergraphs are not classified. For non-uniform hypergraphs, chains–one natural extension of a single edge–are degenerate. Additionally, every subgraph of a blow-up of a chain is also degenerate. However, we give an example of a degenerate, non-uniform hypergraph not contained in any blow-up of a chain. This leaves open the question of which non-uniform hypergraphs are degenerate. In section 6, we consider the suspension of hypergraphs. The suspension of a hypergraph $H$ is a new hypergraph, denoted by $S(H)$, with one additional vertex, $\ast$, added to every edge of $H$. In a hypergraph Turán problem workshop hosted by the AIM Research Conference Center in 2011, the following conjecture was posed: $\displaystyle \lim_{t\to\infty}\pi(S^t(K^{r}_n))=0$. 
We conjecture $\displaystyle\lim_{t\to\infty} \pi(S^t(H))=|R(H)|-1$ holds for any hypergraph $H$. Some partial results are proved. Finally in the last section, we will point out the relation between the Turán problems on hypergraphs and extremal poset problems. Non-uniform hypergraphs ======================= Notation -------- Recall that a hypergraph $H$ is a pair $(V,E)$ with the vertex set $V$ and edge set $E\subseteq 2^{V}$. Here we place no restriction on the cardinalities of edges. The set $R(H):=\{|F|\colon F\in E\}$ is called the set of its [*edge types*]{}. A hypergraph $H$ is $k$-uniform if $R(H)=\{k\}$. It is non-uniform if it has at least two edge types. For any $k\in R(H)$, the [*level hypergraph*]{} $H^k$ is the hypergraph consisting of all $k$-edges of $H$. A uniform hypergraph $H$ has only one (non-empty) level graph, i.e., $H$ itself. In general, a non-uniform hypergraph $H$ has $|R(H)|$ (non-empty) level hypergraphs. Throughout the paper, for any finite set $R$ of non-negative integers, we say, $G$ is an $R$-graph if $R(G)\subseteq R$. We write $G^R_n$ for a hypergraph on $n$ vertices with $R(G)\subseteq R$. We simplify it to $G$ if $n$ and $R$ are clear under context. Let $R$ be a fixed set of edge types. Let $H$ be an $R$-graph. The number of vertices in $H$ is denoted by $v(H):=|V(H)|$. Our goal is to measure the edge density of $H$ and be able to compare it (in a meaningful way) to the edge density of other $R$ graphs. The standard way to measure this density would be: $$\mu(H) = \frac{|E(H)|}{\sum_{k\in R(H)}\binom{v(H)}{k}}.$$ This density ranges from 0 to 1 (as one would expect)–a complete $R$-graph having a density of 1. Unfortunately, this is no longer a useful measure of density since the number of edges with maximum cardinality will dwarf the number of edges of all other sizes. Specifically, one could take $k$-uniform hypergraph (where $k=\max\{r:r\in R(H)\}$) on enough vertices and make its density as close to 1 he likes. The problem is that this $k$-uniform hypergraph is quite different from the complete $R$-graph (when $|R|>1$) with the same number of vertices. Instead, we use the Lubell function to measure the edge density. This is adapted from the use of the Lubell function studying families of subsets. For a non-uniform hypergraph $G$ on $n$ vertices, we define the Lubell function of $G$ as $$\label{eq:lubell} h_{n}(G):=\sum_{F\in E(G)}\frac{1}{\binom{n}{|F|}}=\sum_{k\in R(G)}\frac{|E(H^{k})|}{\binom{n}{k}}.$$ The Lubell function is the expected number of edges hit by a random full chain. Namely, pick a uniformly random permutation $\sigma$ on $n$ vertices; define a random full chain $C_\sigma$ by $$\{ \{\emptyset\}, \{\sigma(1)\}, \{\sigma(1), \sigma(2)\}, \cdots, [n]\}.$$ Let $X=|E(G)\cap C_{\sigma}|$, the number of edges hit by the random full chain. Then $$\label{eq:X} h_n(G)={{\rm E}}(X).$$ Given two hypergraphs $H_1$ and $H_2$, we say $H_1$ is a subgraph of $H_2$, denoted by $H_1\subseteq H_2$, if there exists a 1-1 map $f\colon V(H_1)\to V(H_2)$ so that $f(F)\in E(H_2)$ for any $F\in E(H_1)$. Whenever this occurs, we say the image $f(H_1)$ is an [*ordered copy*]{} of $H_2$, written as $H_1\stackrel{f}{\hookrightarrow} H_2$. A necessary condition for $H_1\subseteq H_2$ is $R(H_1)\subseteq R(H_2)$. Given a subset $K\subseteq V(H)$ and a subset $S\subseteq R(H)$, the [*induced subgraph*]{}, denoted by $H^{[S]}[K]$, is a hypergraph on $K$ with the edge set $\{F\in E(H)\colon F\subseteq K \mbox{ and } |F|\in S\}$. 
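As a quick sanity check of Eqs. (\[eq:lubell\]) and (\[eq:X\]), the following sketch (ours, not part of the paper; the small $\{2,3\}$-graph is an arbitrary example) estimates ${{\rm E}}(X)$ by sampling random full chains and compares it with the exact Lubell value:

```python
import random
from math import comb

random.seed(0)
n = 5
# an arbitrary small {2,3}-graph G on the vertex set {0,...,4}
edges = [frozenset(e) for e in ({0, 1}, {1, 2}, {3, 4}, {0, 1, 2}, {2, 3, 4})]

h_exact = sum(1 / comb(n, len(F)) for F in edges)            # the Lubell function h_n(G)

trials, hits = 100_000, 0
for _ in range(trials):
    perm = random.sample(range(n), n)                        # a uniformly random permutation
    chain = {frozenset(perm[:k]) for k in range(1, n + 1)}   # the random full chain (nonempty levels)
    hits += sum(F in chain for F in edges)                   # X = |E(G) cap C_sigma|
print(f"h_n(G) = {h_exact:.4f},  Monte-Carlo estimate of E(X) = {hits / trials:.4f}")
```

Both numbers come out close to $0.5$ for this example, as the identity $h_n(G)={{\rm E}}(X)$ predicts.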
When $S=R(H)$, we simply write $H[K]$ for $H^{[S]}[K]$. Given a positive integer $n$ and a subset $R\subseteq [n]$, the complete hypergraph $K^R_{n}$ is a hypergraph on $n$ vertices with edge set $\bigcup_{i\in R} \binom{[n]}{i}$. For example, $K^{\{k\}}_n$ is the complete $k$-uniform hypergraph. $K^{[k]}_n$ is the non-uniform hypergraph with all possible edges of cardinality at most $k$. (Two figures, omitted here, illustrate the complete hypergraph $K_{3}^{\{2,3\}}$ and a (tight) cycle $C_{6}^{\{2,3\}}$.) Given a family of hypergraphs ${\mathcal{H}}$ with a common set of edge-types $R$, we define $$\pi_n({\mathcal{H}}):= \max \left\{h_{n}(G)\colon v(G)=n, G\subseteq K^{R}_{n}, \text{ and } G \mbox{ contains no subgraph in }{\mathcal{H}}\right\}.$$ A hypergraph $G:=G_n^R$ is [*extremal*]{} with respect to the family ${\mathcal{H}}$ if 1. $G$ contains no subgraph in ${\mathcal{H}}$. 2. $h_n(G)=\pi_n({\mathcal{H}})$. The Turán density of ${\mathcal{H}}$ is defined to be $$\begin{aligned} \pi({\mathcal{H}}):&= \lim_{n\to \infty} \pi_n({\mathcal{H}}) \\ &= \lim_{n\to \infty} \max \left\{\sum_{F\in E(G)} \frac{1}{\binom{n}{|F|}}\colon v(G)=n, G\subseteq K^{R}_{n}, \text{ and } G \mbox{ contains no subgraph in }{\mathcal{H}}\right\}\end{aligned}$$ when the limit exists; we will soon show that this limit always exists. When ${\mathcal{H}}$ contains only one hypergraph $H$, then we write $\pi(H)$ instead of $\pi(\{H\})$. Throughout, we will consider $n$ growing to infinity, and $R$ to be a fixed set (not growing with $n$). Note that $\pi(\mathcal{H})$ agrees with the *usual* definition of $$\displaystyle \pi(\mathcal{H})=\lim_{n\to\infty}\frac{\text{ex}(\mathcal{H},n)}{\binom{n}{k}}$$ when $\mathcal{H}$ is a set of $k$-uniform hypergraphs. The following result is a direct generalization of the Katona-Nemetz-Simonovits theorem [@KNS]. For any family ${\mathcal{H}}$ of hypergraphs with a common edge-type $R$, $\pi(\mathcal{H})$ is well-defined, i.e., the limit $\displaystyle \lim_{n\to\infty}\pi_{n}({\mathcal{H}})$ exists. [**Proof:**]{} It suffices to show that $\pi_n({\mathcal{H}})$, viewed as a sequence in $n$, is decreasing. Write $R=\{k_1,k_2,...,k_{r}\}$. Let $G_{n}\subseteq K^{R}_{n}$ be a hypergraph with $v(G_{n})=n$ containing no subgraph in ${\mathcal{H}}$ and with Lubell value $h_n(G_{n})=\pi_n({\mathcal{H}})$. For any $\ell< n$, consider a random subset $S$ of the vertices of $G_{n}$ with size $|S|=\ell$. Let $G_{n}[S]$ be the induced subgraph of $G_{n}$ (whose vertex set is restricted to $S$). Clearly $$\pi_\ell({\mathcal{H}}) \geq \mathbb{E}(h_{\ell}(G_{n}[S])).$$ Write $E(G_{n})=E_{k_1}\bigcup E_{k_2}\bigcup ...\bigcup E_{k_r}$ where $E_{k_i}$ contains all the edges of size $k_{i}$.
Note that the expected number of edges of size $k_i$ in $G_{n}[S]$ is precisely $\frac{\binom{\ell}{k_i}}{\binom{n}{k_i}}|E_{k_i}|$. It follows that $$\begin{aligned} \pi_\ell({\mathcal{H}}) &\geq \mathbb{E}(h_{\ell}(G_{n}[S])) \\ &=\sum_{i=1}^{r} \frac{\mathbb{E}(|E_{k_i} \bigcap \binom{S}{k_i}|)}{\binom{\ell}{k_{i}}} \\ &=\sum_{i=1}^{r} \frac{ \frac{\binom{\ell}{k_{i}}}{\binom{n}{k_i}}|E_{k_i}|}{\binom{\ell}{k_{i}}} \\ &=\sum_{i=1}^{r} \frac{ |E_{k_i}| }{\binom{n}{k_i}} \\ &= h_{n}(G_{n})\\ &=\pi_n({\mathcal{H}}).\end{aligned}$$ The sequence $\pi_n({\mathcal{H}})$ is non-negative and decreasing; therefore it converges. For a fixed set $R:=\{k_1,k_2,\ldots, k_r\}$ (with $k_1<k_2<\cdots < k_r$), an [*$R$-flag*]{} is an $R$-graph containing exactly one edge of each size. The chain $C^R$ is a special $R$-flag with the edge set $${{\rm E}}(C^R)=\{[k_1], [k_2],\ldots, [k_r]\}.$$ \[p1\] For any hypergraph $H$, the following statements hold. 1. $|R(H)|-1\leq \pi_n(H)\leq |R(H)|$. 2. For subgraph $H'$ of $H$, we have $\pi(H')\leq \pi(H)- |R(H)|+|R(H')|$. 3. For any $R$-flag $L$ on $m$ vertices and any $n\geq m$, we have $\pi_n(L)=|R|-1$. [**Proof:**]{} Pick any maximal proper subset $R'$ of $R(H)$. Consider the complete graph $K_n^{R'}$. Since $K_n^{R'}$ misses one type of edge in $R(H)\setminus R'$, it does not contain $H$ as a subgraph. Thus $$\pi_n(H)\geq h_n(K_n^{R'})=|R'|=|R(H)|-1.$$ The upper bound is due to the fact $h_n(K_n^{R(H)})=|R(H)|$. Proof of item 2 is similar. Let $S=R(H')$ and $G^S_n$ be an extremal hypergraph for $\pi_n(H')$. Extend $G^S_n$ to $G^{R(H)}_n$ by adding all the edges with cardinalities in $R(H)\setminus S$. The resulting graph $G^{R(H)}_n$ is $H$-free. We have $$\pi_n(H)\geq \pi_n(G^{R(H)}_n)=\pi_{n}(G^S_n)+ |R(H)|-|S|= |R(H)|-|R(H')|+ \pi_n(H').$$ Taking the limit as $n$ goes to infinity, we have $$\pi(H)\geq |R(H)|-|R(H')|+ \pi(H').$$ Finally, for item 3, consider $L$-free hypergraph $G_n^R$. Pick a random $n$-permutation $\sigma$ uniformly. Let $X$ be the number of edges of $G^R_n$ hit by a random flag $\sigma(L)$. Note that each edge $F$ has probability $\frac{1}{{n\choose |F|}}$ of being hit by $\sigma(L)$. We have $$\label{eq:ef} {{\rm E}}(X)=\sum_{F\in E(G)}\frac{1}{{n\choose |F|}}=h_n(G).$$ Since $G^R_n$ is $L$-free, we have $X\leq r-1$. Taking the expectation, we have $$h_n(G^R_n)={{\rm E}}(X)\leq r-1.$$ Hence, $\pi_n(H)\leq r-1$. The result is followed after combining with item 1. $\square$ A hypergraph $H$ is called [*degenerate*]{} if $\pi(H)=|R(H)|-1$. By Proposition \[p1\], flags, and specifically chains, are degenerate hypergraphs. A necessary condition for $H$ to be degenerate is that every level hypergraph $H^{k_i}$ is $k_i$-partite. The following examples will show that the converse is not true. [**Example 1:**]{} The complete hypergraph $K^{\{1,2\}}_2$ has three edges $\{1\}$, $\{2\}$, and $\{1,2\}$. We claim $$\label{eq:k12} \pi(K^{\{1,2\}}_2)=\frac{5}{4}.$$ The lower bound is from the following construction. Partition $[n]$ into two parts $A$ and $B$ of nearly equal size. Consider the hypergraph $G$ with the edge set $$E(G)={A\choose 1} \cup \left({[n]\choose 2}\setminus {A\choose 2} \right).$$ It is easy to check $h_n(G)=\frac{5}{4}+O(\frac{1}{n})$ and that $G$ contains no copy of $K_{2}^{\{1,2\}}$. Now we prove the upper bound. Consider any $K^{\{1,2\}}_2$-free hypergraph $G$ of edge-type $\{1,2\}$ on $n$ vertices. Let $A$ be the set of all singleton edges. For any $x,y\in A$, $xy$ is not a 2-edge of $G$. 
We have $$\begin{aligned} h_n(G)&\leq \frac{|A|}{n} + 1-\frac{{|A|\choose 2}}{{n\choose 2}} \\ &= 1+ \frac{|A|}{n} -\frac{|A|^2}{n^2} + O\left(\frac{1}{n}\right) \\ &\leq 1+ \frac{1}{4} + O\left(\frac{1}{n}\right).\end{aligned}$$ In the last step, we use the fact that $f(x)=1+x-x^2$ has the maximum value $\frac{5}{4}$. Combining the upper and lower bounds we have $\pi(K^{\{1,2\}}_2)=\frac{5}{4}$. The argument is easily generalized to the complete graph $K^{\{1,k\}}_k$ (for $k>1$). We have $$\label{eq:k1k} \pi(K^{1,k}_k)=1+ \frac{k-1}{k^{\frac{k}{k-1}}}.$$ Let $H$ be a hypergraph. The [*suspension*]{} of $H$, denoted by $S(H)$, is a new hypergraph with the vertex set $V(H)\cup \{\ast\}$ and the edge set $\{F\cup \{\ast\}\colon F\in E(H)\}$. Here $\ast$ is a new vertex not in $H$. Let $H$ be a hypergraph. The $k$-degree of a vertex $x$, denoted $d_{k}(x)$, is the number of edges of size $k$ containing $x$. [**Example 2:**]{} Consider $H:=S(K^{\{1,2\}}_2)$. The edges of $H$ are $\{1,2\}$, $\{2,3\}$, $\{1,2,3\}$. We claim $$\pi(S(K^{\{1,2\}}_2))= \frac{5}{4}.$$ The lower bound is from the following construction. Partition $[n]$ into two parts $A$ and $B$ of nearly equal size. Consider the hypergraph $G$ with the edge set $E=E_{2}\bigcup E_{3}$ where $E_{2}=\binom{A}{2}\bigcup \binom{B}{2}$ and $E_{3}=\binom{[n]}{3}\setminus \left(\binom{A}{3} \bigcup \binom{B}{3}\right)$. It is easy to check $h_n(G)=\frac{5}{4}+O\left(\frac{1}{n}\right)$ and that $G$ is $H$-free. Now we prove the upper bound. Consider any $H$-free hypergraph $G$ of edge-type $\{2,3\}$ on $n$ vertices. Recall that $d_{2}(v)$ denotes the number of 2-edges that contain $v$. For each pair of 2-edges that intersect $v$ there is a unique 3-set containing those two pairs. This 3-set cannot appear in the edge set of $G$ since $G$ is $H$-free. We say that the edge is forbidden. Note that each 3-edge may be forbidden up to 3 times in this manner–depending on which of the three vertices we call $v$. Hence there are at least $\frac{1}{3} \sum_{v\in [n]} \binom{d_{2}(v)}{2}$ 3-edges which are not in $G$. Note that this is true for any $H$-free graph $G$ with number of vertices. Hence $$\begin{aligned} h_n(G) &\leq \frac{ \binom{n}{3}-\frac{1}{3}\sum_{v\in [n]} \binom{d_{2}(v)}{2} }{\binom{n}{3}} + \frac{ \frac{1}{2}\sum_{v\in [n]} d_{2}(v)}{\binom{n}{2}} \\ &= 1-\frac{\sum_{v\in [n]} d_2(v)^{2}}{6\binom{n}{3}} +\left(\frac{1}{6\binom{n}{3}}+\frac{1}{2\binom{n}{2}}\right)\sum_{v\in [n]} d_2(v) +1.\end{aligned}$$ Applying Cauchy-Schwarz Inequality and letting $m:=\sum_vd_2(v)$, we have $$\begin{aligned} h_n(G) &\leq \frac{-\left(\sum_{v\in [n]} d_2(v)\right)^2}{6n\binom{n}{3}} +\left(\frac{1}{6\binom{n}{3}}+\frac{1}{2\binom{n}{2}}\right)\sum_{v\in [n]} d_2(v) + 1 \\ &=-\frac{m^2}{n^4} + \frac{m}{n^2}+1 +O\left(\frac{1}{n}\right)\\ &\leq \frac{5}{4} +O\left(\frac{1}{n}\right).\end{aligned}$$ In the last step, we use the fact that $f(x)=1+x-x^2$ has the maximum value $\frac{5}{4}$. Taking the limit, we get $\pi(S(K_{2}^{\{1,2\}}))\leq \frac{5}{4}.$ We can generalize this construction, giving the following lower bound for $S^k(K_2^{\{1,2\}})$ (the $k$-th suspension of $K_2^{\{1,2\}}$). The details of the computation are omitted. $$\label{eq:sk122} \pi(S^k(K_2^{\{1,2\}}))\geq 1+ \frac{1}{2^{k+1}}.$$ \[conj:1\] For any $k\geq 2$, $\pi(S^k(K_2^{\{1,2\}}))= 1+ \frac{1}{2^{k+1}}.$ Supersaturation and Blowing-up ============================== Supersaturation Lemma [@ErSi] is an important tool for uniform hypergraphs. 
There is a natural generalization of the supersaturation lemma and of blowing-up to non-uniform hypergraphs. [**(Supersaturation)**]{} For any hypergraph $H$ and $a>0$ there are $b$, $n_0>0$ so that if $G$ is a hypergraph on $n>n_0$ vertices with $R(G)=R(H)$ and $h_n(G)>\pi(H)+a$ then $G$ contains at least $b{n\choose v(H)}$ copies of $H$. [**Proof:**]{} Let $R:=R(H)$ and $r:=|R|$. Since $\displaystyle \pi(H)=\lim_{n\to\infty}\pi_n(H)$, there is an $n_0$ so that for $m\geq n_0$, $\pi_m(H)\leq \pi(H)+\frac{a}{2}$. For any $n_0\leq m\leq n$, there must be at least $\frac{a}{2r}{n\choose m}$ $m$-sets $M\subset V(G)$ inducing an $R$-graph $G[M]$ with $h_m(G[M])>\pi(H)+\frac{a}{2}$. Otherwise, we have $$\sum_{M}h_m(G[M])\leq \left(\pi(H)+\frac{a}{2}\right){n\choose m} + \frac{a}{2r}{n\choose m}r=(\pi(H)+a){n\choose m}.$$ But we also have $$\begin{aligned} \sum_Mh_m(G[M]) &=\sum_M \sum_{\stackrel{F\in E(G)}{F\subseteq M}}\frac{1}{{m\choose |F|}} \\ &=\sum_{F\in E(G)}\sum_{M\supseteq F}\frac{1}{{m\choose |F|}} \\ &= \sum_{F\in E(G)} \frac{{n-|F| \choose m-|F|}}{{m\choose |F|}} \\ &=\sum_{F\in E(G)} \frac{{n\choose m}}{{n\choose |F|}}\\ &= {n\choose m}h_n(G).\end{aligned}$$ This contradicts the assumption that $h_n(G)>\pi(H)+a$. Since $m\geq n_0$, each of these $m$-sets contains a copy of $H$. Fixing $m=n_0$, the number of copies of $H$ in $G$ is at least $\frac{a}{2r}{n\choose m}/{{n-v(H)\choose m-v(H)}}=b {n\choose v(H)}$, where $b:=\frac{a}{2r}{m\choose v(H)}^{-1}$. $\square$ Supersaturation can be used to show that “blowing up” does not change the Turán density $\pi(H)$, just as in the uniform case. For any hypergraph $H$ on $n$ vertices and positive integers $s_1,s_2,\ldots, s_n$, the [*blowup*]{} of $H$ is a new hypergraph $(V,E)$, denoted by $H(s_1,s_2,\ldots, s_n)$, satisfying 1. $V:=\sqcup_{i=1}^n V_i$, where $|V_i|=s_i$. 2. $E=\cup_{F\in {{\rm E}}(H)} \prod_{i\in F} V_i$. When $s_1=s_2=\cdots=s_n=s$, we simply write it as $H(s)$. Consider the following simple example. Take $H$ to be the hypergraph with vertex set $[3]$ and edge set $E=\{\{1,2\}, \{1,2,3\}\}$, a chain. Consider the blow-ups $H(2,1,1)$ and $H(1,1,2)$ illustrated below. *(Figure: the chain $H$ and its blow-ups $H(2,1,1)$ and $H(1,1,2)$.)* In the blow-up $H(2,1,1)$ vertex 1 splits into vertices $v_{1,1}$ and $v_{1,2}$; vertex 2 becomes $v_2$ and vertex 3 becomes $v_3$. In the blow-up $H(1,1,2)$ vertex 3 splits into vertices $v_{3,1}$ and $v_{3,2}$; vertex 1 becomes $v_1$ and vertex 2 becomes $v_2$. [**(Blowing up)**]{} \[blowup\] Let $H$ be any hypergraph and let $s\geq 2$. Then $\pi(H(s))=\pi(H)$. [**Proof:**]{} Let $R:=R(H)$. 
By the supersaturation lemma, for any $a>0$ there is a $b>0$ and an $n_0$ so that any $R$-graph $G$ on $n\geq n_0$ vertices with $h_n(G)>\pi(H)+a$ contains at least $b{n \choose v(H)}$ copies of $H$. Consider an auxiliary $v(H)$-uniform hypergraph $U$ on the same vertex set as $G$ whose edges correspond to copies of $H$ in $G$. For any $S>0$ and $n$ sufficiently large, there is a copy of $K=K^{v(H)}_{v(H)}(S)$ in $U$. This follows from the fact that $\pi(K^{v(H)}_{v(H)}(S))=0$, since it is $v(H)$-partite, while $h_{n}(U)\geq b>0$. Now color each edge of $K$ by one of $v(H)!$ colors, each color corresponding to one of the $v(H)!$ possible orders in which the vertices of $H$ are mapped to the parts of $K$. The pigeon-hole principle gives us that one of the color classes contains at least $S^{v(H)}/v(H)!$ edges. For large enough $S$ there is a monochromatic copy of $K^{v(H)}_{v(H)}(s)$, which gives a copy of $H(s)$ in $G$. $\square$ [**(Squeeze Theorem)**]{} Let $H$ be any hypergraph. If there exists a hypergraph $H^{\prime}$ and an integer $s\geq 2$ such that $H^{\prime}\subseteq H\subseteq H^{\prime}(s)$ then $\pi(H)=\pi(H^{\prime})$. [**Proof:**]{} One need only observe that for any hypergraphs $H_{1}\subseteq H_{2}\subseteq H_{3}$ we have $\pi(H_{1})\leq \pi(H_{2})\leq \pi(H_{3})$. If $H_{3}=H_{1}(s)$ for some $s\geq 2$ then $\pi(H_{1})=\pi(H_{3})$ by the previous theorem. $\square$ Turán Densities of $\{1,2\}$-hypergraphs ======================================== In this section we will determine the Turán density for any hypergraph $H$ with $R(H)=\{1,2\}$. We begin with the following more general result. Let $H=H^{1}\cup H^{k}$ be a hypergraph with $R(H)=\{1,k\}$ and $E(H^{1})=V(H^{k})$. Then $$\pi(H) = \begin{cases} 1+\pi(H^{k}) & \text{if } \pi(H^{k})\geq 1-\frac{1}{k}; \\ 1+\left(\frac{1}{k(1-\pi(H^{k}))}\right)^{1/(k-1)}\left(1-\frac{1}{k}\right) & \text{otherwise.} \end{cases}$$ [**Proof:**]{} For each $n\in \mathbb{N}$, let $G_{n}$ be an $H$-free hypergraph on $n$ vertices with $h_{n}(G_{n})=\pi_{n}(H)$. Partition the vertices of $G_{n}$ into $X_{n}=\{v\in V(G_{n}):\{v\}\in E(G_{n})\}$ and $\bar{X_{n}}$ containing everything else. Say that $|X_{n}|=x_{n}n$ and $|\bar{X_{n}}|=(1-x_{n})n$ for some $x_{n}\in [0,1]$. Since $(x_{n})$ is a sequence in $[0,1]$, it has a convergent subsequence. Passing to this subsequence, we may assume that $x_{n}\to x\in [0,1]$. With the benefit of hindsight we know that $x>0$; however, we will not assume this in the upper bound portion of the proof. Since there is no copy of $H$ in $G_{n}$, it follows that $G_{n}[X_{n}]$ contains no copy of $H^{k}$. 
We have that $$\begin{aligned} \pi(H) &=\lim_{n\to\infty} h_{n}(G_{n}) \\ &=\lim_{n\to\infty} \sum_{F\in H^{1}}\frac{1}{\binom{n}{1}} + \sum_{F\in H^{k}} \frac{1}{\binom{n}{k}} \\ &\leq \lim_{n\to\infty} \frac{x_{n}n}{\binom{n}{1}} + \frac{\binom{n}{k}-(1-\pi_{x_{n}n}(H^{k}))\binom{x_{n}n}{k}}{\binom{n}{k}} \\ &=\lim_{n\to\infty} 1 + x_{n} - (1-\pi_{x_{n}n}(H^{k}))\frac{\binom{x_{n}n}{k}}{\binom{n}{k}} \\ &\leq \lim_{n\to\infty} \begin{cases} 1+\frac{1}{\sqrt{n}} & \text{if } x_{n}n \leq \sqrt{n}, \\ 1+ x_{n} - (1-\pi_{x_{n}n}(H^{k}))x_{n}^{k} & \text{if } x_{n}n>\sqrt{n}, \end{cases} \\ &\leq \max\{1, 1+x-(1-\pi(H^{k}))x^{k}\}.\end{aligned}$$ Let $f(x)=1+x-(1-\pi(H^{k}))x^{k}$ and then note that $$\pi(H)= \lim_{n\to\infty} h_{n}(G) \leq \max_{x\in [0,1]} f(x).$$ An easy calculus exercise shows that $f^{\prime\prime}(x)<0$ for all $x>0$, and $f^{\prime}(x)=0$ when $x=\left(\frac{1}{k(1-\pi(H^{k}))}\right)^{\frac{1}{k-1}}.$ If $\frac{1}{k(1-\pi(H^{k}))}\geq 1$ then $f^{\prime}(x)>0$ when $x\in [0,1)$ and hence $f(x)$ is maximized when $x=1$. Note that $f(1)=1+\pi(H^{k})$. If, on the other hand, $\frac{1}{k(1-\pi(H^{k}))}< 1$ it follows that $f(x)$ is maximized at $x=\left(\frac{1}{k(1-\pi(H^{k}))}\right)^{1/(k-1)}$. Together, this gives us $$\pi(H) \leq \begin{cases} 1+\pi(H^{k}) & \text{if } \pi(H^{k})\geq 1-\frac{1}{k}; \\ 1+\left(\frac{1}{k(1-\pi(H^{k}))}\right)^{1/(k-1)}\left(1-\frac{1}{k}\right) & \text{otherwise}. \end{cases}$$ To get equality, take $x$ that maximizes $f(x)$ as above. For any $n\in \mathbb{N}$ (thinking of $n\to \infty$) partition $[n]$ into two sets $X$ and $\bar{X}$ with $|X|=xn$ and $|\bar{X}|=(1-x)n$. Let $E(G^{1})=\{\{v\}:v\in X\}$ and let $g^{k}$ be a $k$-uniform graph on $xn$ vertices attaining $|E(g^{k})|=\text{ex}(xn,H^{k})$ and $g^{k}$ is $H^{k}$-free. Then $$E(G^{k})=\{F\in \binom{[n]}{k}:\text{either } F\in E(g^{k}) \text{ or } F\cap \bar{X}\neq \emptyset\}.$$ Then $G=G^{1}\cup G^{k}$ is $H$-free and (by choice of $x$) we have that $\displaystyle \lim_{n\to\infty}h_{n}(G)$ attains the upper bound of $\pi(H)$. $\square$ Let us now return to the task of determining $\pi(H)$ when $H=H^{1}\cup H^{2}$. Let $H=H^{1}\cup H^{2}$. If $H^{2}$ is not bipartite, then $$\pi(H)=1+\pi(H^{2})= 1+\left(1-\frac{1}{\chi(H^{2})-1}\right)=2-\frac{1}{\chi(H^{2})-1}.$$ [**Proof:**]{} First, $\pi(H)\geq 1+\pi(H^{2})$ since one can construct an $H$-free graph $G_{n}$ by letting $$E(G_{n})=\{\{v\}:v\in V(G_{n})\}\cup E(G^{\prime}_{n})$$ where $G^{\prime}_{n}$ attains $h_{n}(G^{\prime}_{n})=\pi_{n}(H^{2})$ and $G^{\prime}_{n}$ is $H^{2}$-free. Then $$\pi(H)\geq \lim_{n\to\infty} h_{n}(G_{n}) = \lim_{n\to\infty} 1+\pi_{n}(H^{2}) = 1+\pi(H^{2}).$$ To get the upper-bound, first add every missing $1$-edge into $H$, call the new graph $H^{\prime}$. Note that $\pi(H)\leq \pi(H^{\prime})$. Note that we didn’t change the edge set $H^{2}$. The Erdős-Stone-Simonovits theorem states that if $H^{2}$ is not bipartite, then $\pi(H^{2})=1-\frac{1}{\chi(H^{2})-1}$. Also, if $H^{2}$ is not bipartite, then $\chi(H^{2})\geq 3$. With the added vertices, taking $k=2$, we apply the previous theorem. Since $$\pi(H^{2})=1-\frac{1}{\chi(H^{2})-1}\geq 1-\frac{1}{2}$$ we may conclude that $\pi(H)\leq \pi(H^{\prime})=1+\pi(H^{2})$. $\square$ It remains to investigate the cases when $H^{2}$ is bipartite. Let $H=H^{1}\cup H^{2}$. If $H^{2}$ is bipartite and $K_{2}^{\{1,2\}}\subseteq H$ then $\pi(H)=\frac{5}{4}$. [**Proof:**]{} First, in example 1, we computed $\pi(K_{2}^{\{1,2\}})=\frac{5}{4}$. 
Second, $H$ must be contained in some blow-up of $K_{2}^{\{1,2\}}$ since $H^{2}$ is bipartite, i.e. there exists some $s\geq 2$ such that $H\subseteq K_{2}^{\{1,2\}}(s)$. So, by the squeeze theorem we have $$\frac{5}{4}=\pi(K_{2}^{\{1,2\}}) \leq \pi(H) \leq \pi(K_{2}^{\{1,2\}}(s))=\frac{5}{4}.$$ Hence $\pi(H)=\frac{5}{4}$ as claimed. $\square$ We will say that $H=H^{1}\cup H^{2}$ is a ***closed path*** (from $x_{1}$ to $x_{k}$) of length $k$ if $V(H)=\{x_{1}, x_{2},...,x_{k}\}$, $E(H^{1})=\{\{x_{1}\}, \{x_{k}\}\}$, and $E(H^{2})=\{ \{x_{i}, x_{i+1}\}:1\leq i\leq k-1\}$. We will denote a closed path of length $k$, or a closed $k$-path, by $\bar{P}_{k}$. *(Figure: a closed path of length $k$; the endpoints $x_1$ and $x_k$ carry singleton edges, and the $2$-edges form the path $x_1x_2\cdots x_k$.)* Let $H=H^{1}\cup H^{2}$. If $H^{2}$ is bipartite, $H$ does not contain a copy of $K_{2}^{\{1,2\}}$, and $H$ contains a closed path of length $2k$ for some $k$, then $\pi(H)=\frac{9}{8}$. [**Proof:**]{} First, we will give a construction giving us the lower bound. For any $n\in \mathbb{N}$ let $G_{n}$ have vertex set $[n]$. Partition the vertices of $G_{n}$ into two sets $X$ and $\bar{X}$ where $|X|=\frac{3n}{4}$ and $|\bar{X}|=\frac{n}{4}$. Let $$E(G)=\{\{x\}:x\in X\} \cup \{\{x,\bar{x}\}: x\in X \text{ and } \bar{x}\in \bar{X}\}.$$ It is clear that $G_{n}$ contains no closed paths of length $2k$ when $k\geq 1$: every path between two vertices of $X$ has an even number of edges, while a closed $2k$-path has $2k-1$ edges. Also, $$\begin{aligned} \lim_{n\to\infty} h_{n}(G_{n}) &= \lim_{n\to\infty} \frac{|X|}{\binom{n}{1}}+\frac{|X|\cdot |\bar{X}|}{\binom{n}{2}} \\ &= \lim_{n\to\infty} \frac{3}{4}+\frac{\frac{3}{16} n^{2}}{\binom{n}{2}} \\ &= \frac{3}{4}+\frac{3}{8} =\frac{9}{8}.\end{aligned}$$ Thus $\pi(H)\geq \frac{9}{8}$ for any $H$ containing a closed path of length $2k$ for some $k\geq 1$. Since $H^{2}$ is bipartite and $H$ does not contain a copy of $K_{2}^{\{1,2\}}$, $H$ is contained in a blow-up of a closed $4$-path. To see this, note that there is a bipartition of the vertices of $H$, $V(H)=A\cup B$, with respect to the $2$-edges in $H$. Furthermore, we can partition $A$ into $A_{1}\cup A_{2}$, where $A_{1}=\{v\in A:\{v\}\in E(H)\}$ and $A_{2}=A\setminus A_{1}$, and similarly partition $B$ into $B_{1}\cup B_{2}$ with $B_{1}=\{v\in B:\{v\}\in E(H)\}$. Then note that there are no edges from $A_{1}$ to $B_{1}$, since $H$ contains no copy of $K_{2}^{\{1,2\}}$. So $H\subseteq \bar{P}_{4}(s)$ with $s=\max\{|A_{1}|, |A_{2}|, |B_{1}|, |B_{2}|\}$, a blow-up of $\bar{P}_{4}$. Below is a graphical representation of $H$, illustrating that $H$ is contained in a blow-up of $\bar{P}_{4}$. 
*(Figure: the vertex classes $A_1$, $A_2$, $B_1$, $B_2$ of $H$; the filled vertices in $A_1$ and $B_1$ carry singleton edges, and all $2$-edges run between $A_1$ and $B_2$, between $B_2$ and $A_2$, or between $A_2$ and $B_1$.)* Since $\pi(H)\leq \pi(\bar{P}_{4}(s))=\pi(\bar{P}_{4})$ we need only show that $\pi(\bar{P}_{4})\leq \frac{9}{8}$. For each $n$, let $G_{n}$ be a $\bar{P}_{4}$-free hypergraph such that $h_{n}(G_{n})=\pi_{n}(\bar{P}_{4})$. Partition the vertices of $G_{n}$ as follows: $$\begin{aligned} X_{n} &= \{v:\{v\}\in E(G_{n})\}, \\ Y_{n} &= \{v: \{v\}\notin E(G_{n}) \text{ and } \exists x_{1}\neq x_{2}\in X_{n} \text{ with } \{x_{1}, v\}, \{x_{2}, v\} \in E(G_{n})\}, \\ Z_{n} &= V(G)\setminus (X_{n}\cup Y_{n}).\end{aligned}$$ Let us say that $|X_{n}|=xn$, $|Y_{n}|=yn$ and hence $|Z_{n}|=(1-x-y)n$. First, note that $E(G_{n})\cap \binom{Y_{n}}{2}=\emptyset$. Otherwise, since each vertex in $Y_{n}$ has at least 2 neighbors in $X_{n}$, $G_{n}$ would contain a closed path of length $4$. Also, each vertex in $Z_{n}$ has at most 1 neighbor in $X_{n}$. Finally, the $2$-edges inside $X_{n}$ form a graph with no path on three edges (such a path, together with the singletons at its endpoints, would be a closed $4$-path), so every component of this graph is a star or a triangle; it has at most $|X_{n}|$ edges, its contribution to $h_{n}(G_{n})$ is $o(1)$, and we suppress it below. It follows that $$\begin{aligned} \pi(\bar{P}_{4}) &=\lim_{n\to\infty} \pi_{n}(\bar{P}_{4}) \\ &= \lim_{n\to \infty} h_{n}(G_{n}) \\ &\leq \lim_{n\to\infty} \frac{|X_{n}|}{\binom{n}{1}} + \frac{|X_{n}|\cdot |Y_{n}|}{\binom{n}{2}} + \frac{|Y_{n}|\cdot |Z_{n}|}{\binom{n}{2}} + \frac{\binom{|Z_{n}|}{2}}{\binom{n}{2}} + \frac{|Z_{n}|}{\binom{n}{2}} \\ &\leq \lim_{n\to\infty} \frac{xn}{\binom{n}{1}} + \frac{xyn^{2}}{\binom{n}{2}} + \frac{y(1-x-y)n^{2}}{\binom{n}{2}} + \frac{\frac{(1-x-y)^{2}n^{2}}{2}}{\binom{n}{2}} + \frac{(1-x-y)n}{\binom{n}{2}} \\ &\leq \max_{\substack {0\leq x\leq 1 \\ 0\leq y\leq 1-x}} x + 2xy + 2y(1-x-y)+ (1-x-y)^{2} \\ &=\frac{9}{8}.\end{aligned}$$ The last inequality is an easy multivariate calculus exercise. One can also verify it with software, such as *Mathematica*, the syntax being: Maximize[{x^2-x-y^2+2*x*y+1, 0<=x<=1, 0<=y<=1-x}, {x,y}]. It may be of interest to note that the maximum value of the function is obtained when $x=\frac{3}{4}$ and $y=\frac{1}{4}$; in this case $Z_{n}$ is empty. Since our upper bound matches the lower bound, we have the desired result. $\square$ Let $H=H^{1}\cup H^{2}$. If $H^{2}$ is bipartite and $H$ does not contain a closed $2k$-path for any $k\geq 1$, then $\pi(H)=1$. [**Proof:**]{} First, since $|R(H)|=2$ we have, trivially, that $\pi(H)\geq 1$. Since $H$ contains no closed path of length $2k$ for any $k\geq 1$, it must be the case that $H$ is contained in a blow-up of a chain $C^{\{1,2\}}=\{\{x\}, \{x,y\}\}$. This is most clearly seen by again considering the previous illustration; the difference is that in this case $B_{1}$ (or $A_{1}$) is empty. 
*(Figure: the hypergraph $H$, drawn with classes $A_1$, $A_2$, $B_2$, alongside the hypergraph $K$ with edges $\{x\}$, $\{x,y\}$, $\{y,z\}$.)* It is clear that $H$ is contained in a blow-up of $K$, where $$K=\{\{x\}, \{x,y\}, \{y,z\}\}\subseteq C^{\{1,2\}}(2,1)=\{\{x\}, \{z\}, \{x, y\}, \{z, y\}\}.$$ It follows that $\pi(H)\leq \pi(C^{\{1,2\}})=1$. $\square$ The combination of these propositions completely determines $\pi(H)$ when $R(H)=\{1,2\}$. The results are summarized by the following theorem. \[t12\] For any hypergraph $H$ with $R(H)=\{1,2\}$, we have $$\pi(H) = \begin{cases} 2-\frac{1}{\chi(H^{2})-1} & \text{if } H^{2} \text{ is not bipartite}; \\ \frac{5}{4} & \text{if } H^{2} \text{ is bipartite and } \min \{k:\bar{P}_{2k}\subseteq H\}=1; \\ \frac{9}{8} & \text{if } H^{2} \text{ is bipartite and } \min \{k:\bar{P}_{2k}\subseteq H\}\geq 2; \\ 1 & \text{if } H^{2} \text{ is bipartite and } \bar{P}_{2k}\nsubseteq H \text{ for any }k\geq 1. \end{cases}$$ Degenerate hypergraphs ====================== Recall that a hypergraph $H$ is [*degenerate*]{} if $\pi(H)=|R(H)|-1$. For a $k$-uniform hypergraph $H$, $H$ is degenerate if and only if $H$ is $k$-partite. From Proposition \[p1\] and Theorem \[blowup\], we have the following proposition. Suppose $H$ is a degenerate hypergraph. Then the following properties hold. - Every subgraph of $H$ is degenerate. - Every blowup of $H$ is degenerate. - Any subgraph of the blowup of a flag is degenerate. Note that every flag is a subgraph of some blowup of a chain with the same edge type. Is every degenerate hypergraph a subgraph of some blowup of a chain? The answer is yes for uniform hypergraphs and $\{1,2\}$-hypergraphs. This follows from Theorem \[t12\], which completely determined $\pi(H)$ when $R(H)=\{1,2\}$, and from the fact that a $k$-uniform hypergraph is degenerate if and only if it is $k$-partite (a subgraph of a blowup of a single edge). However, in general the answer is no. We will show that the following hypergraph $H_1$ with edge set $E(H_1)=\{\{1,2\}, \{1,3\}, \{2,3,4\}\}$ is degenerate. *(Figure: $H_{1}$, a degenerate hypergraph not contained in the blowup of a chain.)* This result is a special case of the following theorem. Let $H$ be a hypergraph containing some $2$-edges. The $2$-subdivision of $H$ is a new hypergraph $H'$ obtained from $H$ by subdividing each $2$-edge simultaneously. Namely, if $H$ contains $t$ $2$-edges, add $t$ new vertices $x_1,x_2,\ldots,x_t$ to $H$ and for $i=1,2,\ldots, t$ replace the $2$-edge $\{u_{i},v_{i}\}$ with $\{u_i,x_i\}$ and $\{x_i,v_i\}$. \[t:sd\] Let $H'$ be the $2$-subdivision of $H$. If $H$ is degenerate, then so is $H'$. For example, $H_1$ can be viewed as the $2$-subdivision of the chain $C^{\{2,3\}}$, as the sketch below confirms. 
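The subdivision operation is easy to experiment with directly. The following minimal sketch is not part of the original text: the function name `two_subdivision` and the choice of vertex labels are ours. It builds the $2$-subdivision of a hypergraph given as a list of edges and checks that subdividing $C^{\{2,3\}}$ yields a relabeled copy of $H_1$.

```python
def two_subdivision(vertices, edges):
    """2-subdivision: replace every 2-edge {u, v} by {u, x} and {x, v} for a fresh vertex x."""
    new_vertices = list(vertices)
    new_edges = []
    fresh = max(vertices) + 1          # labels for the newly added vertices
    for e in edges:
        if len(e) == 2:
            u, v = sorted(e)
            x = fresh
            fresh += 1
            new_vertices.append(x)
            new_edges += [frozenset({u, x}), frozenset({x, v})]
        else:
            new_edges.append(frozenset(e))
    return new_vertices, new_edges

if __name__ == "__main__":
    # The chain C^{2,3} on [3] has edges {1,2} and {1,2,3}.
    verts, edges = two_subdivision([1, 2, 3], [frozenset({1, 2}), frozenset({1, 2, 3})])
    # Prints [[1, 2, 3], [1, 4], [2, 4]]: a relabeled copy of H_1
    # (send 4 -> 1, 1 -> 2, 2 -> 3, 3 -> 4 to recover {1,2}, {1,3}, {2,3,4}).
    print(sorted(sorted(e) for e in edges))
```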
Since any chain is degenerate, so is $H_1$. To prove this theorem, we need a Lemma on graphs, which has independent interest. Let $G$ be any simple graph. Then $G^{(2)}$, a variation of the square of $G$, will be defined as follows: - $V(G^{(2)}):=V(G)$, - $E(G^{(2)}):=\{ \{u,v\}| \exists w\in V(G) \text{ with } \{u,w\},\{v,w\}\in E(G)\}$. Note that an edge of $G$ may or may not be an edge of $G^{(2)}$. For example, if $G$ is the complete graph, then $G^{(2)}$ is also the complete graph. However, if $G$ is a complete bipartite graph with partite set $V_1\cup V_2$, then $G^{(2)}$ is the disjoint union of two complete graphs on $V_1$ and $V_2$. In this case, $G^{(2)}$ is the complement graph of $G$! We also note that $G^{(2)}$ is the empty graph if $G$ is a matching. Surprisingly, we have the following Lemma on the difference of the number of edges in $G$ and $G^{(2)}$. \[l:square\] For any simple graph $G$ on $n$ vertices, $$\label{eq:5.1} |E(G)|-|E(G^{(2)})| \leq \left\lfloor \frac{n}{2}\right\rfloor.$$ Furthermore, equality holds if and only if $G$ is the vertex-disjoint union of complete bipartite graphs of balanced part-size with at most one component having odd number of vertices, i.e. $$G= K_{t_1,t_1}\cup K_{t_2,t_2} \cup \cdots \cup K_{t_k,t_k} \cup K_{\lfloor \frac{n}{2}\rfloor-\sum_{i=1}^kt_i, \lceil \frac{n}{2}\rceil-\sum_{i=1}^kt_i},$$ for some positive integers $t_1,t_2,\ldots, t_k$ satisfying $\sum_{i=1}^kt_i= \lfloor \frac{n}{2}\rfloor$. First, we will show that Equation holds for any forest. Let $G$ be a forest. Since $G$ is a forest, if $\{a,b\}\in E(G^{(2)})$ then $a$ and $b$ have a unique common neighbor in $G$. Furthermore, given any vertex $c\in V(G)$, it follows that any pair of neighbors of $c$ is in $E(G^{(2)})$. Thus we have $$\begin{aligned} |E(G)|-|E(G^{(2)})| &= \frac{1}{2}\sum_{v\in V(G)} \deg(v) - \sum_{v\in V(G)} \binom{\deg(v)}{2} \\ &=\sum_{v\in V(G)} \frac{1}{2}\deg(v)-\binom{\deg(v)}{2} \\ &=\sum_{v\in V(G)}-\frac{1}{2}\deg(v)^{2} + \deg(v) \\ &\leq \sum_{v\in V(G)} \frac{1}{2} \\ &=\frac{n}{2}.\end{aligned}$$ The inequality above comes from the fact that $-\frac{1}{2}x^{2}+x \leq \frac{1}{2}$, attaining its maximum when $x=1$. Since $|E(G)|-|E(G^{(2)})|$ is an integer, we have that $$\label{eq:5.2} |E(G)|-|E(G^{(2)})|\leq \left\lfloor \frac{n}{2}\right\rfloor$$ as claimed. Now we will prove the statement $|E(G)|-|E(G^{(2)})|\leq\left\lfloor \frac{|V(G)|}{2}\right\rfloor$ for general graphs using induction on the number of vertices. It holds trivially for $n=1,2$. Assume that the statement holds for all graphs with at fewer than $n$ vertices. Consider a graph $G$ with $n$ vertices. If $G$ is a forest, then the statement holds. Otherwise, $G$ contains a cycle. Choose $C_g$ to be a minimal cycle in $G$, i.e. one with no chords. If $G=C_g$, then ${{\rm E}}(C_g)-{{\rm E}}(C_g^{(2)})=0$ if $g\not=4$ or $2$ if $g=4$. The statement holds. Now assume $V(C)\subsetneq V(G)$. Let $V_1:=V(C)=\{x_1,x_2,...,x_{g}\}$, where $x_{i}$ is adjacent to $x_{i+1}$, and let $V_2:=V(G)\setminus V_1=\{v_1,v_2,...,v_{n-g}\}$. The edges of $G$ can be partitioned into three parts: the induced graph $G[V_1]=C_g$, the induced graph $G[V_2]$, and the bipartite graph $G[V_1,V_2]$. Similarly, the edges of $G^{(2)}$ can be partitioned into three parts: $G^{(2)}[V_1]$, the induced graph $G^{(2)}[V_2]$, and the bipartite graph $G^{(2)}[V_1,V_2]$. Now we compare term by term. 1. Note $|E(G^{(2)}[V_1])|\geq |E(C_g^{(2)})|$, and $|E(C_g^{(2)})|=g$ if $g\neq 4$ or $2$. 
We have $$\label{eq:5.3} |E(G[V_1])|-|E(G^{(2)}[V_1])|\leq |E(C_g)|-|E(C_g^{(2)})|\leq \left\lfloor \frac{g}{2}\right\rfloor.$$ 2. By inductive hypothesis, we have $|E(G[V_2])|-|E((G[V_2])^{(2)})|\leq \left\lfloor \frac{n-g}{2}\right\rfloor$. Combining with the fact $|E(G^{(2)}[V_2])\geq |E((G[V_2])^{(2)})|$, we have $$\label{eq:5.4} |E(G[V_2]))|-|E(G^{(2)}[V_2])|\leq \left\lfloor \frac{n-g}{2}\right\rfloor.$$ 3. We claim $|E(G[V_1,V_2])| \leq |E(G^{(2)}[V_1,V_2])|$. We define a map $$f\colon E(G[V_1,V_2]) \to E(G^{(2)}[V_1,V_2])$$ as follows. For any edge $x_iv\in E(G)$ with $v\in V_2$ and $x_i\in V_1$, define $f(vx_i)=vx_{i+1}$ (with the convention $x_{g+1}=x_1$). Since $x_iv\in E(G)$ and $x_ix_{i+1}\in E(G)$, we have $vx_{i+1}\in E(G^{(2)})$. The map $f$ is well-defined. We also observe that $f$ is an injective map. Thus $$\label{eq:5.5} |E(G[V_1,V_2])| \leq |E(G^{(2)}[V_1,V_2])|.$$ Combining equations , , and , we get $$|E(G)|-|E(G^{(2)})|\leq \left\lfloor\frac{n-g}{2}\right\rfloor + \left\lfloor\frac{g}{2}\right\rfloor \leq \left\lfloor\frac{n}{2}\right\rfloor.$$ The inductive step is finished. Now we check when equality holds. It is straightforward to verify the sufficient condition; we omit the computation here. Now we prove the necessary condition. Assume that $G$ has $k+1$ connected components $G_1, G_2,\ldots, G_{k+1}$. Then we have $$|E(G)|-|E(G^{(2)})|\leq \sum_{i=1}^{k+1} (|E(G_i)|-|E(G^{(2)}_i)|) \leq \sum_{i=1}^{k+1} \left\lfloor \frac{|V(G_i)|}{2} \right\rfloor \leq \left\lfloor \frac{n}{2}\right\rfloor.$$ If equality holds, then all but possibly one component has an even number of vertices. It remains to show each component is a balanced complete bipartite graph. Without loss of generality, we assume $G$ is connected. If $G$ is a tree, then equality in Equation either forces the degree of every vertex to be $1$, or all the degrees are $1$ with a single exceptional vertex of degree $2$. Since $G$ is assumed to be connected, $G$ is either $P_2=K_{1,1}$ or $P_3=K_{1,2}$. Suppose that $G$ contains cycles, and the equalities hold in Equations , , and . First we show that $C_4$ is the only possible chordless cycle in $G$. Suppose not; let $C_g$ ($g\not=4$) be a cordless cycle. We have $|E(C_g)|-|E(C^{(2)}_g)|=0$; which contradicts the assumption that equality holds in Equation . Thus $G$ is a bipartite graph. Furthermore,the equality in forces each vertex $v$ to be connected to at least $2$ vertices of $C_4$. Hence $G$ is 2-connected. Now $G$ must be a complete bipartite graph. Otherwise, say $uv$ is a nonedge crossing the partite sets. Since $G$ is $2$-connected, there exists a cycle containing both $u$ and $v$. Let $C$ be such a cycle with minimum length; $C$ is cordless but not a $C_4$. Contradiction. Finally we show $G=K_{st}$ is balanced. Note that $$|E(G)|-|E(G^{(2)})|=st-{s\choose 2}-{t\choose 2}=\frac{n}{2}-\frac{(s-t)^2}{2}\leq \left\lfloor \frac{n}{2}\right\rfloor.$$ The equality holds only if $|s-t|\leq 1$. So $G$ is balanced. [**Proof of Theorem \[t:sd\]:**]{} We will prove by contradiction. Let $R:=R(H)=R(H')$ be the common set of edge types of $H$ and $H'$. Suppose that $H'$ is not degenerate, then $\pi(H')>|R|-1 +\epsilon$ for some $\epsilon>0$. Thus, there exists an $n_0$ satisfying $\pi_n(H')> |R|-1 +\epsilon/2$ for any $n\geq n_0$. Let $G_n^R$ be a $H'$-free hypergraph with $\pi_n(G)> |R|-1 +\epsilon/2$. Define a new hypergraph $G'_n$ over the same vertex set of $G$ with a new edge set $E(G'_n)=E(G_n)\setminus E(G_n^{2}) \cup E((G_n^{2})^{(2)})$. 
The hypergraph $G'_n$ is obtained for $G_n$ by replacing all $2$-edges by the edges in its square graph while keeping other type of edges. By Lemma \[l:square\], we have $$\label{eq:6} \pi_n(G'_n)\geq \pi_n(G) - \frac{\lfloor \frac{n}{2}\rfloor}{{n\choose 2}} \geq |R|-1 + \epsilon/2 -\frac{1}{n}.$$ Suppose that $H$ has $t$ $2$-edges. Since $H$ is degenerate, so is the blowup hypergraph $H(t+1)$. For sufficiently large $n$, $G'_n$ contains a subhypergraph $H(t+1)$. By the definition of $G'$, for every copy of $H\subseteq H(t+1)$ and every $2$-edge $u_iv_i$ (for $1\leq i\leq t$) of $H$, there exists a vertex $x_i:=x_i(u_i,v_i)$ satisfying $u_ix_i$ and $v_ix_i$ are $2$-edges of $G$. Our goal is to force that $x_1,x_2,\ldots, x_t$ are distinct from the vertices of $H$ and from each other. This can be done by a greedy algorithm. Suppose that the vertices of $H$ are listed by $y_1,y_2,y_3,\ldots,$ and so on. Each vertex has $y_i$ has $t+1$ copies in $H(t+1)$. For $i=1,2,3,\ldots$, select a vertex $y_i'$ from the $t+1$ copies of $y_i$ so that $y_i'$ is not the same vertex as $x_j(u_j,v_j)$ for some $2$-edge $u_jv_j$ where $u_j, v_j$ have been selected. This is always possible since $H$ has only $t$ $2$-edges. Thus, we found a copy of $H'$ as a subgraph of $G$. Contradiction! $\square$ It remains an open question to classify all non-degenerate hypergraphs. In the remainder of this section, we generalize the following theorem due to Erdős [@Erdos64] on the Turán density of complete $k$-partitite $k$-uniform hypergraphs. [**Theorem (Erdős [@Erdos64]):**]{} [ Let $K^{(k)}_k(s_1,\ldots, s_k)$ be the complete $k$-partite $k$-uniform hypergraph with partite sets of size $s_1,\ldots, s_k$. Then any $K^{(k)}_k (s_1, \ldots, s_k)$-free $r$-uniform hypergraph can have at most $O(n^{k-\delta})$ edges, where $\delta = \left(\prod^{k-1}_{i=1} s_i\right)^{-1}$. ]{} We have the following theorem. \[tflag\] Let $L(s_1,s_2,\ldots, s_{v(L)})$ be a blowup of a flag $L^R$, we have $$\pi_n(L(s_1,s_2,\ldots, s_{v(L)}))=r-1+O(n^{-\delta}),$$ where $\delta= \frac{\max\{s_i\colon 1\leq i\leq v(L)\}}{\prod_{i=1}^{v(L)}s_i}$. Using the concept of $H$-density, we can say a lot more about avoiding a blowup of any hypergraph $H$. Given two hypergraphs $H$ and $G$ with the same edge-type $R(H)=R(G)$, the [*density*]{} of $H$ in $G$, denoted by $\mu_H(G)$, is defined as the probability that a random injective map $f\colon V(H)\to V(G)$ satisfies $H\stackrel{f}{\hookrightarrow} G$ (i.e. $f$ maps $H$ to an ordered copy of $H$ in $G$). We have the following theorem. For a fixed hypergraph $H$ on $m$ vertices and $m$ positive integers $s_1,s_2,\ldots, s_m$, let $H(s_1,s_2,\ldots, s_m)$ be the blowup of $H$. For sufficiently large $n$ and any hypergraph $G$ on $n$ vertices with edge type $R(G)=R(H)$, if $G$ contains no subgraph $H(s_1,s_2,\ldots, s_m)$, then $$\mu_H(G)=O(n^{-\delta}),$$ where $\delta= \frac{\max\{s_i\colon 1\leq i\leq m\}}{\prod_{i=1}^{m}s_i}$. [**Proof:**]{} We will prove by contradiction. We assume $\mu_H(G)\geq C n^{-\delta}$ for some constant $C$ to be chosen later. By reordering the vertices of $H$, we can assume $s_1\leq s_2\leq \cdots \leq s_m$. Without loss of generality, we assume $n$ is divisible by $m$. Consider a random $m$-partition of $V(G)=V_1\cup V_2\cup \cdots \cup V_{m}$ where each part has size $\frac{n}{m}$. For any $m$-set $S$ of $V(G)$ , we say $S$ is a [*transversal*]{} (with respect to this partition of $V(G)$) if $S$ intersects each $V_i$ exactly once. 
The probability that $S$ is transversal is given by $\frac{(\frac{n}{m})^{m}}{{n\choose m}m!}$. We say, an ordered copy $f(H)$ is a [*transversal*]{} if $f(V(H))$ is a transversal. By the definition of $\mu_H(G)$, $G$ contains $\mu_H(G){n\choose m}m!$ ordered copies of $H$. Thus, the expected number of transversal ordered copies of $H$ is $$\mu_H(G)\left(\frac{n}{m}\right)^{m}\geq \frac{C}{m^m} n^{m-\delta}.$$ There exists a partition so that the number of crossing maximum chains in $E(H)$ is at least $Cm^{-\delta} n^{m-{\delta}}$. Now we fix this partition $[n]=V_1\cup \cdots \cup V_m$. For $t_i \in \{1, s_i\}$ with $i=1,2\ldots, m$, we would like to estimate the number of monochromatic (ordered) copies in $H'$, denoted by $f(t_1, t_2, \ldots, t_{m})$, of $H(t_1,\ldots, t_{m})$ so that the first $t_1$ vertices in $V_{\tau_1}$, the second $t_2$ vertices in $V_{\tau_2}$, and so on. [**Claim a:**]{} For $0\leq l\leq m-1$, we have $$f(s_{1},\ldots, s_{l}, 1,\ldots, 1)\geq \left(1+o(1)\right) \frac{\left(Cn^{-\delta}\right)^{\prod_{j=1}^l s_{j}}} {\prod_{j=1}^l \left(s_{j}! \right)^{\prod_{u=j+1}^ls_{u}}} \left(\frac{n}{m}\right)^{m -l+ \sum_{j=1}^ls_j}.$$ We prove claim (a) by induction on $l\in [0,m]$. For the initial case $l=0$, the claim is trivial since $f(1,1,\ldots, 1)$ counts the number of transversal ordered copies of $H$. We have $$f(1,1,\ldots,1)\geq Cn^{-\delta} \left(\frac{n}{m}\right)^{m}.$$ The statement holds for $l=0$. Now we assume claim (a) holds for $l>0$. Consider the case $l+1$, for some $l\geq 0$. For any $S\in {V_{1}\choose s_{1}}\times \cdots \times {V_{l}\choose s_{}}\times V_{l+2}\times \cdots \times V_{m}$, let $d_S$ be the number of vertices $v$ in $V_{l+1}$ such that the induced subgraph of $H'$ on $S\times \{v\}$ is $H(s_{1},\ldots, s_{l}, s_{l+1},\ldots, 1)$. We have $$\begin{aligned} \label{eq:si} f(s_1,\ldots, s_l,1,1, \ldots, 1) &= \sum_{S}d_S; \\ f(s_1,\ldots, s_l, s_{l+1},1,\ldots,1)&=\sum_{S} {d_S\choose s_{l+1}}. \label{eq:sil}\end{aligned}$$ Let $\bar d_l$ be the average of $d_S$. By equation and the inductive hypothesis, we have $$\label{eq:dl} \bar d_l\geq \frac{\sum_{S} d_S}{(\frac{n}{m})^{m -l-1+ \sum_{j=1}^ls_j}} \geq \left(1+o(1)\right) \frac{(\frac{n}{m})\left(Cn^{-\delta}\right)^{\prod_{j=1}^l s_j}} {\prod_{j=1}^l \left(s_j!\right)^{\prod_{u=j+1}^ls_u}}.$$ Applying the convex inequality, we have $$\begin{aligned} f(s_1,\ldots, s_l, s_{l+1},1,\ldots,1) &=\sum_{S} {d_S\choose s_{l+1}} \\ &\geq \left(\frac{n}{m}\right)^{m-1 +\sum_{j=1}^l(s_j-1)} {\bar d_l\choose s_{l+1}} \\ &=\left(1+O\left(\frac{1}{\bar d_l}\right)\right)\frac{\bar d_l^{s_{l+1}}}{s_{l+1}!} \left(\frac{n}{m}\right)^{m +\sum_{j=1}^{l+1}(s_j-1)}.\end{aligned}$$ Combining with equation , we get $$f(s_1,\ldots, s_l, s_{l+1},1,\ldots,1) \geq \left(1+o(1)\right) \frac{\left(Cn^{-\delta}\right)^{\prod_{j=1}^{l+1} s_j}} {\prod_{j=1}^{l+1} \left(s_j!\right)^{\prod_{u=j+1}^{l+1}s_u}} \left(\frac{n}{m}\right)^{m -l-1+ \sum_{j=1}^{l+1}s_j}.$$ In the last step, we used the assumption $s_{m}:=\max\{s_i\colon 1\leq i\leq m\}$. Applying claim (a) with $l=m-1$, we get $$\begin{aligned} \nonumber f(s_1,s_2,\ldots, s_{m-1},1) &\geq \left(1+o(1)\right) \frac{\left(Cn^{-\delta}\right)^{\prod_{j=1}^{m-1} s_j}} {\prod_{j=1}^{m-1} \left(s_j!\right)^{\prod_{u=j+1}^{m-1}s_u}} \left(\frac{n}{m}\right)^{1+ \sum_{j=1}^{m-1}s_j} \\ &=\left(1+o(1)\right) \frac{m^{-1}C^{\prod_{j=1}^{m-1} s_j} (\frac{n}{m})^{\sum_{j=1}^{m-1}s_j}} {\prod_{j=1}^{m-1} \left(s_j! \right)^{\prod_{u=j+1}^{m-1}s_u}}. 
\label{eq:lb}\end{aligned}$$ For any $S\in {V_1\choose s_1}\times \cdots \times {V_{m-1}\choose s_{m-1}}$, let $d_S$ be the number of vertices $v$ in $V_{l+1}$ such that the edges in the induced subgraph of $H'$ on $S\times \{v\}$ are monochromatic. Since $H'$ contains no monochromatic copy of $C^R(s_1,\ldots, s_{m})$, we have $d_S\leq s_{m}$. It implies $$\label{eq:ub} f(s_1,s_2,\ldots, s_{m-1},1)=\sum_S d_S\leq s_{m} \left(\frac{n}{m}\right)^{\sum_{j=1}^{m-1}s_j}.$$ Choosing $C$ so that $C>\left((ms_{m})^{\frac{1}{\prod_{u=1}^{m}s_u}}\right)\cdot \prod_{j=1}^{m-1} \left(s_j!\right)^{\frac{1}{\prod_{u=1}^{j}s_u}}$, equations and contradict each other. $\square$ [**Proof of Theorem \[tflag\]:**]{} Consider a hypergraph $G:=G^R_n$ with $$h_n(G)=r-1+Cn^{-\delta}.$$ Let $r=|R|$. It suffices to show that $\mu_H(G)\geq C'n^{-\delta}$. Given a random permutation $\sigma$, let $X$ be the number of edges on a random full chain $\sigma(L)$. By the definition of the Lubell function, we have $h_n(G)={{\rm E}}(X)$. Note $X$ only takes integer values $0,1,\ldots, r$. Since ${{\rm E}}(X)>r-1$, there is non-zero probability that $X=r$. In fact, we have $$\begin{aligned} {{\rm E}}(X)&=\sum_{i=0}^r i{{\rm Pr}}(X=i)\\ &\leq r{{\rm Pr}}(X=r) + (r-1)(1-{{\rm Pr}}(X=r))\\ &=r-1 + {{\rm Pr}}(X=r). \end{aligned}$$ Thus, we get $$\label{eq:xr} {{\rm Pr}}(X=r)\geq \frac{C}{n^{\delta}}.$$ Every flag $\sigma(L)$ contributes an equal share of the probability of the event that $X=r$, namely, $$\label{eq:pchain} \frac{|Aut(L)|}{{n\choose v(L)}v(L)!}.$$ Here $Aut(L)$ is the automorphism of $L$. Thus, the number of such flags is at least $$\label{eq:nchain} \frac{C}{|Aut(L)|n^{\delta}}{n\choose v(L)} v(L)!.$$ It follows that $\mu_H(G)\geq Cn^{-\delta}$. $\square$ Suspensions =========== The [*suspension*]{} of a hypergraph $H$, denoted $S(H)$, is the hypergraph with $V=V(H) \bigcup \{\ast\}$ where $\{\ast\}$ is an element not in $V(H)$, and edge set $E=\{F\bigcup \{\ast\}\colon F\in E(H)\}$. We write $S^{t}(H)$ to denote the hypergraph obtained by iterating the suspension operation $t$-times, i.e. $S^{2}(H)=S(S(H))$ and $S^{3}(H)=S(S(S(H)))$, etc. In this section we will investigate the relationship between $\pi(H)$ and $\pi(S(H))$ and look at limits such as $\lim_{t\to \infty}\pi(S^{t}(H))$. Given a graph $G$ with vertex set $v_1,...,v_n$ the [*link*]{} hypergraph $G^{v_i}$ is the hypergraph with vertex set $V(G)\setminus \{v_i\}$ and edge set $E=\{F\setminus \{v_i\}\colon v_i\in F \text{ and } F\in E(G)\}$. For any hypergraph $H$ we have that $\pi(S(H))\leq \pi(H)$. Let $G_{n}$ be a graph on $n$ vertices containing no copy of $S(H)$ such that $h_{n}(G_{n})=\pi_{n}(S(H))$. Say $V(G_{n})=\{v_1,v_2,...,v_{n}\}$. Note that for any $v_i\in V(G_n)$, we have that Lubell value of the corresponding link graph is $$h_{n-1}(G^{v_i}_{n})=\sum_{F\in G_n, v_i\in F} \frac{1}{\binom{n-1}{|F|-1}}.$$ Also, note that $G^{v_i}_n$ contains no copy of $H$. If it did, then $S(H)\subset S(G^{v_i}_n)\subseteq G_n$; but $S(H)$ is not contained in $G_n$. Thus $h_{n-1}(G^{v_i}_{n})\leq \pi_{n-1}(H)$. 
We then have the following: $$\begin{aligned} \pi_{n}(S(H)) &= h_n(G_n) \\ &=\sum_{F\in E(G_n)} \frac{1}{\binom{n}{|F|}} \\ &=\sum_{F\in E(G_n)} \frac{1}{|F|} \sum_{v_i\in F} \frac{1}{\binom{n}{|F|}} \\ &=\sum_{i=1}^{n} \sum_{F\in E(G_n), v_i\in F} \frac{1}{|F|}\cdot \frac{1}{\binom{n}{|F|}} \\ &=\sum_{i=1}^{n} \sum_{F\in E(G_n), v_i\in F} \frac{1}{n}\cdot \frac{1}{\binom{n-1}{|F|-1}} \\ &=\frac{1}{n} \sum_{i=1}^{n} h_{n-1}(G^{v_i}_{n}) \\ &\leq \frac{1}{n} \sum_{i=1}^{n} \pi_{n-1}(H) \\ &=\pi_{n-1}(H).\end{aligned}$$ Thus, for any $n$, $\pi_{n}(S(H))\leq \pi_{n-1}(H)$; taking the limit as $n\to\infty$ we get the result as claimed. If $H$ is degenerate, so is $S(H)$. For all $H$, $\displaystyle \lim_{t\to\infty}\pi(S^{t}(H))=|R(H)|-1$. To conclude our paper, we prove a special case of this conjecture. \[t8\] Suppose that $H$ is a subgraph of the blowup of a chain. Let $k_1$ be the minimum number in $R(H)$. Suppose $k_1\geq 2$, and $H'$ is a new hypergraph obtained by adding finitely many edges of type $k_1-1$ arbitrarily to $H$. Then $$\lim_{t\to\infty} \pi(S^{t}(H'))=|R(H')|-1.$$ Without loss of generality, we can assume that $H$ is a blowup of a chain and $V(H')=V(H)$. (This can be done by taking blowup of $H$ and adding more edges.) Suppose that $H$ has $v$ vertices and its edge type is $R(H):=\{k_1,k_2,\ldots, k_r\}$. Set $k_0:=k_1-1$ so that $R(H'):=\{k_0,k_1,\ldots, k_r\}$. For convenience, we write $R$ for $R(H)$ and $R'$ for $R(H')$, and $$\begin{aligned} R+t &:=\{k_1+t,k_2+t,\ldots, k_r+t\},\\ R'+t &:=\{k_0+t,k_1+t,\ldots, k_r+t\}.\end{aligned}$$ For any small $\epsilon>0$, let $n_0=\lfloor \epsilon^{-t}\rfloor$. For any $n\geq n_0$ and any hypergraph $G_n^{R+t}$ with $$\pi_n(G)>|R(H)|-1+\epsilon=r+\epsilon,$$ we will show $G$ contains a subhypergraph $S^t(H')$. Take a random permutation $\sigma\in S_n$ and let $X$ be the number of edges in $G$ hit by the random full chain $C_\sigma$: $$\emptyset \subset \{\sigma(1)\} \subset \{\sigma(1),\sigma(2)\} \cdots \subset \{\sigma(1), \sigma(2),\ldots, \sigma(i)\} \subset \cdots \subset [n].$$ We have $${{\rm E}}(X)=\pi_n(G)>r+\epsilon.$$ Since $X\leq r+1$, we have $${{\rm E}}(X)=\sum_{i=0}^{r+1}i {{\rm Pr}}(X=i)\leq (r+1){{\rm Pr}}(X=r+1) +r.$$ Thus, we get $$\label{eq:X=r+1} {{\rm Pr}}(X=r+1)\geq \frac{{{\rm E}}(X)-r}{r+1}>\frac{\epsilon}{r+1}.$$ Recall that the density $\mu_H(G)$ is the probability that a random injective map $f\colon V(H)\to V(G)$ such that $H\stackrel{f}{\hookrightarrow}G$. Applying to $H=C^{R'+t}$, we have $$\mu_{C^{R'+t}}(G)={{\rm Pr}}(X=r+1)>\frac{\epsilon}{r+1}.$$ Every copy of the chain $C^{R'+t}$ will pass through a set $A_1\in E^{k_1+t}(G)$. Let $\mu_{C^{R+t}, A_1}(G)$ be the conditional probability that a random injective map $f\colon V(C^{R+t}) \to V(G)$ satisfies $C^{R+t}\stackrel{f}{\hookrightarrow}G$ given that the chain $C^{R+t}$ passes through $A_1$. Let $d_-(A_1)$ be the number of sets $A_0$ satisfying $A_0\in E^{k_0+t}(G)$ and $A_0\subset A_1$. 
Then, we have $$\mu_{C^{R'+t}}(G)=\frac{1}{{n\choose k_1+t}}\sum_{A_1\in E^{k_1+t}(G)}\mu_{C^{R+t}, A_1}(G) \cdot \frac{d_-(A_1)}{k_1+t}.$$ Setting $\eta=\frac{\epsilon}{2(r+1)}$, define a family $${\mathcal{A}}=\{A_1\in E^{k_1+t}(G) \colon \mu_{C^{R+t}, A_1}(G)> \eta \mbox{ and } d_-(A_{1})> \eta (k_1+t)\}.$$ We claim $|{\mathcal{A}}|>\eta {n\choose k_1+t}.$ Otherwise, we have $$\begin{aligned} \mu_{C^{R'+t}}(G)&=\frac{1}{{n\choose k_1+t}}\sum_{A_1\in E^{k_1+t}(G)}\mu_{C^{R+t}, A_1}(G) \cdot \frac{d_-(A_1)}{k_1+t}\\ &= \frac{1}{{n\choose k_1+t}}\sum_{A_1\in{\mathcal{A}}}\mu_{C^{R+t}, A_1}(G) \cdot \frac{d_-(A_1)}{k_1+t} + \frac{1}{{n\choose k_1+t}}\sum_{A_1\not\in{\mathcal{A}}}\mu_{C^{R+t}, A_1}(G) \cdot \frac{d_-(A_1)}{k_1+t}\\ &\leq\eta+\eta=\frac{\epsilon}{r+1}.\end{aligned}$$ Contradiction! A $k_1$-configuration is a pair $(S,A_1)$ satisfying $A_1\in {\mathcal{A}}$, $S=A_1\setminus \{i_1,i_2,\ldots, i_{k_1}\}$, and $A_1\setminus \{i_j\}\in E^{k_0+t}(G)$ for any $1\leq j\leq k_1$. For any $A_1\in {\mathcal{A}}$, the number of $S$ such that $(S,A_1)$ forms a $k_1$-configuration is at least $${d_-(A_1)\choose k_1}\geq {\eta(k_1+t)\choose k_1} >\left(\frac{\eta}{2}\right)^{k_1} {k_1+t\choose k_1}.$$ In the above inequality, we use the assumption $t>\frac{2}{\eta}k_1$. By an averaging argument, there exists an $S$ so that the number of $k_1$-configurations $(S, \bullet)$ is at least $$\frac{|{\mathcal{A}}|\left(\frac{\eta}{2}\right)^{k_1} {k_1+t\choose k_1}} {{n\choose t}} \geq \frac{\eta^{k_1+1}}{2^{k_1}}{{n-t\choose k_1}}.$$ Now consider the link hypergraph $G^S$. The inequality above implies $$\mu_{C^R}(G^S)\geq \frac{\eta^{k_1+2}}{2^{k_1}}.$$ This implies that $G^S$ contains a blow-up of $C^R$. Thus $G^S$ has a subhypergraph $H$. By the definition of a $k_1$-configuration, this $H$ can be extended to $H'$ in $G^S$. In other words, $G$ contains $S^t(H')$. $\square$ Connections to extremal poset problems ====================================== As stated earlier, the Turán density of non-uniform hypergraphs is motivated by extremal subset/poset problems. Let ${\mathcal{B}}_n=(2^{[n]},\subseteq)$ be the $n$-dimensional Boolean lattice. Under the partial order $\subseteq$, any family ${\mathcal{F}}\subseteq 2^{[n]}$ can be viewed as a subposet of ${\mathcal{B}}_n$. For posets $P=(P,\le)$ and $P'=(P',\le')$, we say $P'$ is a [*weak subposet of $P$*]{} if there exists an injection $f\colon P'\to P$ that preserves the partial ordering, meaning that whenever $u\le' v$ in $P'$, we have $f(u)\le f(v)$ in $P$. If $P'$ is not a weak subposet of $P$, we say $P$ is $P'$-free. The following problems originate from Sperner’s theorem, which states that the largest antichain of ${\mathcal{B}}_n$ has size ${\binom{n}{\lfloor \frac{n}{2}\rfloor}}$. [**Extremal poset problems:**]{} Given a fixed poset $P$, what is the largest size of a $P$-free family ${\mathcal{F}}\subset {\mathcal{B}}_n$? Let ${{\rm La}}(n,P)$ be the largest size of a $P$-free family ${\mathcal{F}}\subseteq {\mathcal{B}}_n$. The value of ${{\rm La}}(n,P)$ is known for only a few posets $P$. Let ${\mathcal{P}}_k$ be the (poset) chain of size $k$. Then ${{\rm La}}(n,{\mathcal{P}}_2)={\binom{n}{\lfloor \frac{n}{2}\rfloor}}$ by Sperner’s theorem. Erdős [@Erdos45] proved that ${{\rm La}}(n,{\mathcal{P}}_k)=\Sigma(n,k-1)$, where $\Sigma(n,k)$ denotes the sum of the $k$ largest binomial coefficients $\binom{n}{i}$. De Bonis-Katona-Swanepoel [@DebKatSwa] proved ${{\rm La}}(n,{\mathcal{O}}_{4})=\Sigma(n,2)$. 
Here ${\mathcal{O}}_4$ is the butterfly poset ($A,B \; \subset \; C,D$), or the crown poset of size $4$. The asymptotic value of ${{\rm La}}(n, P)$ has been determined for various posets (see Table \[tab:1\]). Let $e(P)$ be the largest integer $k$ so that the family consisting of the $k$ middle layers of ${\mathcal{B}}_n$ is $P$-free. Griggs and Lu [@GriLu] first conjectured that $\lim_{n\to\infty}\frac{{{\rm La}}(n,P)}{{\binom{n}{\lfloor \frac{n}{2}\rfloor}}}$ exists and is an integer; this has gradually evolved into the following conjecture. \[conj:2\] For any fixed poset $P$, $\lim_{n\to\infty}\frac{{{\rm La}}(n,P)}{{\binom{n}{\lfloor \frac{n}{2}\rfloor}}}=e(P)$. We overload the notation $\pi(P)$ for the limit $\lim_{n\to\infty}\frac{{{\rm La}}(n,P)}{{\binom{n}{\lfloor \frac{n}{2}\rfloor}}}$, where $P$ is a poset. The conjecture is based on several previously known results, obtained by Katona and others [@CarKat; @DebKat; @DebKatSwa; @GriKat; @KNS; @KatTar; @Tha]. We summarize in Table \[tab:1\] the known posets $P$ for which the conjecture has been verified. *(Table \[tab:1\]: posets $P$ for which the conjecture has been verified, listed with their defining relations, Hasse diagrams, the value of $e(P)$, and references. The recoverable entries are: $A<B_1,\ldots,B_r$ with $e(P)=1$; the poset $\mathcal{N}$ given by $A<B$, $B>C$, $C<D$ with $e(P)=1$; the poset given by $A<B$, $B>C$, $C<D$, $A<D$ with $e(P)=2$; two posets given only by their Hasse diagrams, with $e(P)=2$ and $e(P)=k-1$ respectively; a family of posets with $e(P)=h(T)-1$; and a poset with $e(P)=1$.)* The list of posets in Table \[tab:1\] is far from complete. 
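To make the quantities ${{\rm La}}(n,P)$ and $e(P)$ concrete, here is a small brute-force check (not part of the original text; the function names are ours) of the simplest case, the chain ${\mathcal{P}}_2$: a ${\mathcal{P}}_2$-free family is an antichain, and Sperner's theorem predicts ${{\rm La}}(n,{\mathcal{P}}_2)={\binom{n}{\lfloor n/2\rfloor}}$. Enumerating all $2^{2^n}$ families is feasible only for very small $n$, so this is purely a sanity check of the definitions.

```python
from itertools import combinations
from math import comb

def all_subsets(n):
    """All subsets of {0, ..., n-1}, each represented as a frozenset."""
    return [frozenset(c) for r in range(n + 1) for c in combinations(range(n), r)]

def is_p2_free(family):
    """P_2-free means no two members are nested, i.e. the family is an antichain."""
    return all(not (a < b or b < a) for a, b in combinations(family, 2))

def la_p2(n):
    """Brute-force La(n, P_2): the maximum size of an antichain in the Boolean lattice B_n."""
    subsets = all_subsets(n)
    best = 0
    for mask in range(1 << len(subsets)):          # every family of subsets of [n]
        family = [s for i, s in enumerate(subsets) if mask >> i & 1]
        if is_p2_free(family):
            best = max(best, len(family))
    return best

if __name__ == "__main__":
    n = 3
    print(la_p2(n), comb(n, n // 2))   # both are 3, in line with Sperner's theorem
```

Consistently with the table, $e({\mathcal{P}}_2)=1$ here: the single middle layer is the extremal antichain.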
Let $\lambda_n(P)=\max\{h_n({\mathcal{F}})\colon {\mathcal{F}}\subseteq 2^{[n]} \mbox{ is } P\mbox{-free}\}.$ A poset $P$ is called [*uniform-L-bounded*]{} if $\lambda_n(P)\leq e(P)$ for all $n$. Griggs-Li [@GriLi; @GriLi2] proved ${{\rm La}}(n,P) =\sum_{i=\lfloor \frac{n-e(P)+1}{2}\rfloor}^{\lfloor \frac{n+e(P)-1}{2}\rfloor}{n\choose i}$ if $P$ is uniform-L-bounded. The uniform-L-bounded posets include ${\mathcal{P}}_k$ (for any $k\geq 1$), diamonds ${\mathcal{D}}_k$ (for $k\in [2^{m-1}-1, 2^m-{\binom{m}{\lfloor \frac{m}{2}\rfloor}}-1]$ where $m:=\lceil \log_2(k+2)\rceil$), harps ${\mathcal{H}}(l_1,l_2,\ldots,l_k)$ (for $l_1>l_2>\cdots>l_k$), and other posets. Notably, Griggs-Li [@GriLi2] provide a method to construct large uniform-L-bounded posets from smaller ones, so there are infinitely many posets $P$ for which $\pi(P)=e(P)$ holds. Although no counterexample to Conjecture \[conj:2\] has been found yet, some posets have resisted efforts to determine their $\pi$ value. The most studied, yet unsolved, poset is the diamond poset ${\mathcal{D}}_2$ (or ${\mathcal{B}}_2$, $Q_2$ in some papers) as shown in Figure \[fig:d2c6c10\]. Griggs and Lu first observed $\pi({\mathcal{D}}_2)\in [2,2.296]$. Axenovich, Manske, and Martin [@AxeManMar] came up with a new approach which improves the upper bound to $2.283$. Griggs, Li, and Lu [@GriLiLu] further improved the upper bound to $2\frac{3}{11}\approx 2.273$. Very recently, Kramer-Martin-Young [@KMY] proved $\pi({\mathcal{D}}_2)\leq 2.25$. While it seems to be hard to prove the conjecture $\pi({\mathcal{D}}_2)=2$, several groups of researchers have considered restricting the problem to three consecutive layers. Let ${{\rm La}}^{c}(n,P)$ be the largest size of a $P$-free family ${\mathcal{F}}\subseteq{\mathcal{B}}_n$ contained in $e(P)+1$ consecutive layers. Let $\pi^c(P)=\lim_{n\to\infty}\frac{{{\rm La}}^c(n,P)}{{\binom{n}{\lfloor \frac{n}{2}\rfloor}}}$, if the limit exists. Here is a weaker conjecture (restricted to consecutive layers). \[conj:3\] For any fixed poset $P$, $\pi^c(P)=e(P)$. *(Figure \[fig:d2c6c10\]: the Hasse diagrams of the diamond ${\mathcal{D}}_2$, the crown ${\mathcal{O}}_6$, and the crown ${\mathcal{O}}_{10}$.)* Axenovich-Manske-Martin [@AxeManMar] first proved $\pi^c({\mathcal{D}}_2)\leq 2.207$; it was recently improved to $2.1547$ (Manske-Shen [@MS]) and $2.15121$ (Balogh-Hu-Lidický-Liu [@BHLL]). We say a hypergraph $H$ [*represents*]{} a poset $P$ if the set of edges of $H$, ordered by inclusion, is isomorphic to $P$. For any fixed finite poset $P$, by the definition of $e(P)$, there exists a hypergraph $H\subseteq {\mathcal{B}}_{n_0}$ with $|R(H)|=e(P)+1$ representing a superposet of $P$. 
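The Lubell mass $h_n({\mathcal{F}})$ and the diamond containment discussed above can also be checked directly on small examples. The sketch below is ours, not from the paper; the helper names `lubell` and `contains_diamond` are illustrative. It computes $h_n({\mathcal{F}})=\sum_{F\in{\mathcal{F}}}1/\binom{n}{|F|}$ and tests whether a family contains ${\mathcal{D}}_2$ as a weak subposet, i.e. four distinct sets $A\subseteq B,C\subseteq D$.

```python
from itertools import combinations
from math import comb

def lubell(family, n):
    """Lubell mass h_n(F) = sum over F in the family of 1 / C(n, |F|)."""
    return sum(1 / comb(n, len(F)) for F in family)

def contains_diamond(family):
    """True iff the family contains D_2 as a weak subposet:
    distinct sets A, B, C, D with A <= B <= D and A <= C <= D."""
    fam = list(family)
    for A, D in combinations(fam, 2):
        lo, hi = (A, D) if A <= D else (D, A)
        if not lo <= hi:                 # A and D are incomparable
            continue
        middles = [M for M in fam if M not in (lo, hi) and lo <= M <= hi]
        if len(middles) >= 2:
            return True
    return False

if __name__ == "__main__":
    n = 4
    layer = lambda k: [frozenset(c) for c in combinations(range(n), k)]
    two_middle = layer(2) + layer(3)               # e(D_2) = 2 consecutive layers
    three_layers = layer(1) + layer(2) + layer(3)  # adding a third layer creates diamonds
    print(lubell(two_middle, n), contains_diamond(two_middle))       # 2.0 False
    print(lubell(three_layers, n), contains_diamond(three_layers))   # 3.0 True
```

The first family shows $\lambda_n({\mathcal{D}}_2)\geq 2$ on this small example, matching $e({\mathcal{D}}_2)=2$; the second shows how quickly diamonds appear once a third layer is added.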
\[t9\] Suppose that a hypergraph $H$ with $|R(H)|=e(P)+1$ represents a superposet of $P$. Then, for any integer $t\geq 0$, we have $$\pi^c(P)\leq \pi(S^t(H)).$$ [**Proof:**]{} Let $x:=\pi(S^t(H))$. For any $\epsilon>0$, there exists an $n_1$ so that $\pi_{n}(S^{t}(H)) \leq x+\epsilon$ for all $n\geq n_1$. We claim $$\pi^c(P)\leq x+2\epsilon.$$ Otherwise, for any sufficiently large $n$, there exists a family ${\mathcal{F}}\subset {\mathcal{B}}_n$ which lies in $e(P)+1$ consecutive layers with $|{\mathcal{F}}|>(x+2\epsilon){\binom{n}{\lfloor \frac{n}{2}\rfloor}}$. Let $k_0$ be the smallest size of edges in $S^{t}(H)$. Let $k_1$ be the integer such that ${\mathcal{F}}$ lies in the $k_1$-th through $(k_1+e(P))$-th layers. Since $e(P)\leq x < e(P)+1$, we have $$|{\mathcal{F}}|>(x+2\epsilon){\binom{n}{\lfloor \frac{n}{2}\rfloor}}\geq (e(P)+\epsilon){\binom{n}{\lfloor \frac{n}{2}\rfloor}}.$$ Note that any layer below $\frac{n}{2}-2\sqrt{n\ln n}$ can contribute at most $\frac{2^n}{n^2}$ sets, which is less than $\epsilon {\binom{n}{\lfloor \frac{n}{2}\rfloor}}$ for sufficiently large $n$. We get $$k_1\geq \frac{n}{2}-2\sqrt{n\ln n}.$$ Choose $n$ large enough so that $k_1\geq k_0$ and $n-k_1+k_0\geq n_1$. We observe that $$h_n({\mathcal{F}})\geq \frac{|{\mathcal{F}}|}{{\binom{n}{\lfloor \frac{n}{2}\rfloor}}}> x+2\epsilon.$$ By the property of the Lubell function, $h_n({\mathcal{F}})$ is the average of $h_{n+k_0-k_1}({\mathcal{F}}_S)$ over all $S\in {[n]\choose k_1-k_0}$, where ${\mathcal{F}}_S$ is the link hypergraph over $S$. Therefore, there exists a set $S\in {[n]\choose k_1-k_0}$ so that $h_{n+k_0-k_1}({\mathcal{F}}_S)>x+2\epsilon$. Thus, ${\mathcal{F}}_S$ contains a subhypergraph $S^{t}(H)$. In particular, ${\mathcal{F}}$ contains $P$ as a weak subposet. Thus, we have $$\pi^c(P)\leq x+2\epsilon.$$ Since this holds for any $\epsilon>0$, we have $\pi^c(P)\leq x.$ $\square$ Conjecture \[conj:2\] implies Conjecture \[conj:3\]. In particular, from Theorem \[t8\], we get a new family of posets $P$ for which $\pi^c(P)=e(P)$. A special example is the crown ${\mathcal{O}}_{2t}$, where $t=4$ or $t\geq 6$. The idea can be traced back to Conlon’s concept of [*$k$-representation*]{} of bipartite graphs [@Conlon]. Theorem \[t8\] can be viewed as a natural generalization of Conlon’s theorem. It is easy to generate more examples of posets in this family. However, a complete description of these posets is tedious; thus it is omitted here. Note that the complete hypergraph $K_2^{\{0,1,2\}}$ has $4$ edges $\emptyset$, $\{1\}$, $\{2\}$, $\{1,2\}$, which form the diamond poset ${\mathcal{D}}_2$. In particular, for any $t\geq 0$, we have $$\pi^c({\mathcal{D}}_2)\leq \pi(S^t(K_2^{\{0,1,2\}})).$$ This provides a possible way to improve the bounds on $\pi^c({\mathcal{D}}_2)$. [30]{} M. Axenovich, J. Manske, and R. Martin, [$Q_2$-free families in the Boolean lattice]{}, [*Order*]{} published online: 15 March 2011. R. Baber and J. Talbot, [New Turán densities for 3-graphs]{}, [*Elect. J. Combin.*]{}, (2012), P22, 21p. J. Balogh, P. Hu, B. Lidický, and H. Liu, Upper bounds on the size of 4- and 6-cycle-free subgraphs of the hypercube, arxiv:1201.0209 \[math.CO\]. B. Bukh, [Set families with a forbidden poset]{}, [*Elect. J. Combin.*]{} [**16**]{} (2009), R142, 11p. T. Carroll and G. O. H. Katona, Bounds on maximal families of sets not containing three sets with $A\cup B\subset C, A \not\subset B$, [*Order*]{} [**25**]{} (2008) 229–236. D. de Caen, The current status of Turán’s problem on hypergraphs. [*Extremal problems for finite sets*]{} (Visegrád, 1991), 187–197, Bolyai Soc. 
Math. Stud., 3, János Bolyai Math. Soc., Budapest, 1994. F. Chung, L. Lu, An upper bound for the Turán number $t_3(n,4)$. [*J. Combin. Theory Ser. A*]{} [**87**]{} (1999), no. 2, 381–389. D. Conlon, An extremal theorem in the hypercube. [*Electron. J. Combin.*]{} [**17**]{} (2010), \#R111. A. De Bonis and G. O. H. Katona, [Largest families without an $r$-fork]{}, [*Order*]{} [**24**]{} (2007), 181–191. A. De Bonis, G. O. H. Katona and K. J. Swanepoel, [Largest family without $A\cup B \subset C\cap D$]{}, [*J. Combin. Theory (Ser. A)*]{} [**111**]{} (2005), 331–336. P. Erdős, On a lemma of Littlewood and Offord, [*Bull. Amer. Math. Soc.*]{} [**51**]{} (1945), 898–902. P. Erdős, On extremal problems of graphs and generalized graphs, [*Israel J. Math.*]{} [**2**]{} (1964), 183–190. P. Erdős, On the combinatorial problems which I would like to see solved, [*Combinatorica*]{} [**1**]{} (1981), 25-42. P. Erdős, M. Simonovits, Supersaturated graphs and hypergraphs. [*Combinatorica*]{} [**3**]{} (1983), no. 2, 181–192. P. Frankl, Z. Füredi, A new generalization of the Erdős-Ko-Rado theorem, [*Combinatorica*]{} [**3**]{} (1983), 341-349. Z. Füredi and M. Simonovits, Triple systems not containing a Fano configuration, *Combin. Probab. Comput.* **14** (2005), 467–484. Z. Füredi, D. Mubayi and O. Pikhurko, Quadruple systems with independent neighborhoods, **115** (2008), 1552–1560. J. R. Griggs and G. O. H. Katona, [No four subsets forming an $N$]{}, [*J. Combinatorial Theory (Ser. A)*]{} [**115**]{} (2008), 677–685. J. R. Griggs and W.-T. Li, [The partition method for poset-free families]{}, accepted by Journal of Combinatorial Optimization. J. R. Griggs and W.-T. Li, [Uniformly L-bounded posets]{}, preprint (2011). J. R. Griggs and L. Lu, [On families of subsets with a forbidden subposet]{}, [*Combinatorics, Probability, and Computing*]{} [**18**]{} (2009), 731–748. J. R. Griggs, W.-T. Li, and L. Lu, Diamond-free Families, [*Journal of Combinatorial Theory Ser. A*]{}, [**119**]{} (2012) 310-322. Gy. Katona, T. Nemetz, M. Simonovits, On a problem of Turán in the theory of graphs. [*Mat. Lapok*]{} [**15**]{} (1964) 228–238. G. O. H. Katona and T. G. Tarján, [Extremal problems with excluded subgraphs in the $n$-cube]{}, in: M. Borowiecki, J. W. Kennedy, and M. M. Sysło (eds.) [**Graph Theory**]{}, Łagów, 1981, [*Lecture Notes in Math.*]{}, [**1018**]{} 84–93, Springer, Berlin Heidelberg New York Tokyo, 1983. P. Keevash, Hypergraph Turán Problems, [*Surveys in Combinatorics*]{}, Cambridge University Press, 2011, 80-140. P. Keevash and B. Sudakov, The Turán number of the Fano plane, *Combinatorica* **25** (2005), 561-574. L. Kramer, R. Martin, M. Young, On diamond-free subposets of the Boolean lattice, `http://arxiv.org/abs/1205.1501`. N. W. Lemons, [*Turán Problems for Hypergraphs*]{}, dissertation, Central European University, Budapest, Hungary, (2008). L. Lu, On crown-free families of subsets, arxiv:1206.6258v1 \[math.CO\]. Manske and Shen, Three Layer $Q_2$-Free Families in the Boolean Lattice, arxiv:1108.4373 \[math.CO\]. D. Mubayi and Y. Zhao, Non-Uniform Turán-Type problems, [*J. Comb. Th. A*]{}, [**111**]{} (2004), 106–110. O. Pikhurko, Exact computation of the hypergraph Turán function for expanded complete 2-graphs, Accepted by *J. 
Combin. Theory Ser. B* publication suspended for an indefinite time, see http://www.math.cmu.edu/$\sim$pikhurko/Copyright.html. A. A. Razborov, On 3-hypergraphs with forbidden 4-vertex configurations, *SIAM J. Disc. Math.* **24** (2010), 946–963. H. T. Thanh, An extremal problem with excluded subposets in the Boolean lattice, [*Order*]{} [**15**]{} (1998), 51–57. P. Turán, On an extremal problem in graph theory. *Mat. Fiz. Lapok* **48** (1941), 436–452. [^1]: University of South Carolina, Columbia, SC 29208, ([j.travis.johnston@gmail.com]{}). [^2]: University of South Carolina, Columbia, SC 29208, ([lu@math.sc.edu]{}). This author was supported in part by NSF grant DMS 1000475.
--- author: - 'Mark Bun [^1]' - 'Roi Livni [^2]' - 'Shay Moran [^3]' bibliography: - 'biblio.bib' title: | An Equivalence Between Private Classification\ and Online Prediction --- Introduction ============ This paper continues the study of the close relationship between differentially-private learning and online learning. #### Differentially-Private Learning. Statistical analyses and computer algorithms play significant roles in the decisions which shape modern society. The collection and analysis of individuals’ data drives computer programs which determine many critical outcomes, including the allocation of community resources, decisions to give loans, and school admissions. While data-driven and automated approaches have obvious benefits in terms of efficiency, they also raise the possibility of unintended negative impacts, especially against marginalized groups. This possibility highlights the need for [*responsible*]{} algorithms that obey relevant ethical requirements (see e.g. [@Oneil2016weapons]). [*Differential Privacy*]{} (DP) [@DworkMNS06] plays a key role in this context. Its initial (and primary) purpose was to provide a formal framework for ensuring individuals’ privacy in the statistical analysis of large datasets. But it has also found use in addressing other ethical issues such as [*algorithmic fairness*]{} (see, e.g. [@DworkHPRZ12; @cummings19fairness]). Many tasks which involve sensitive data arise in machine learning (e.g. in medical applications and in social networks). Consequently, a large body of practical and theoretical work has been dedicated to understand which learning tasks can be performed by DP learning algorithms. The simplest and most extensively studied model of learning is the private PAC model [@Valiant84; @KasiviswanathanLNRS11], which captures binary classification tasks under differential privacy. A partial list of works on this topic includes [@KasiviswanathanLNRS11; @BeimelBKN14; @BunNSV15; @FeldmanX15; @BeimelNS16; @BunDRS18; @Beimel19Pure; @AlonLMM19; @kaplan2019privately]. In this manuscript we make progress towards characterizing what tasks are DP PAC-learnable by demonstrating a qualitative equivalence with online-learnable tasks. #### Online Learning. Online learning is a well-studied branch of machine learning which addresses algorithms making real-time predictions on sequentially arriving data. Such tasks arise in contexts including recommendation systems and advertisement placement. The literature on this subject is vast and includes several books, e.g. [@cesabianchi06prediction; @Shalev-Shwartz12book; @Hazan16oco]. [*Online Prediction*]{}, or [*Prediction with Expert Advice*]{} is a basic setting within online learning. Let $\mathcal{H} = \{h:X\to \{\pm1\} \}$ be a class of predictors (also called experts) over a domain $X$. Consider an algorithm which observes examples $(x_1,y_1)\ldots (x_T,y_T)\in X\times\{\pm 1\}$ in a sequential manner. More specifically, in each time step $t$, the algorithm first observes the instance $x_t$, then predicts a label $\hat{y}_t\in\{\pm 1\}$, and finally learns whether its prediction was correct. The goal is to minimize the [*regret*]{}, namely the number of mistakes compared to the best expert in $\mathcal{H}$: $$\sum_{t=1}^T 1[y_t\neq \hat{y}_t] - \min_{h^*\in\mathcal{H}} \sum_{t=1}^T 1[y_t\neq h^*(x_t)].$$ In this context, a class $\mathcal H$ is said to be online learnable if for every $T$, there is an algorithm that achieves sublinear regret $o(T)$ against any sequence of $T$ examples. 
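To make the regret benchmark concrete, the following is a minimal illustrative sketch (a standard construction, not an algorithm from this paper) of the randomized weighted-majority learner for a finite class of experts; `experts`, `stream`, and the learning rate `lr` are placeholder names introduced here.

```python
import math
import random

def randomized_weighted_majority(experts, stream, lr):
    """Prediction with expert advice via multiplicative weights.

    experts: list of callables h(x) -> +1/-1; stream: iterable of (x, y) pairs;
    lr: learning rate.  Returns the realized regret against the best expert."""
    w = [1.0] * len(experts)
    alg_mistakes = 0
    expert_mistakes = [0] * len(experts)
    for x, y in stream:
        preds = [h(x) for h in experts]
        y_hat = random.choices(preds, weights=w, k=1)[0]  # predict like a weight-random expert
        alg_mistakes += int(y_hat != y)
        for i, p in enumerate(preds):
            if p != y:
                expert_mistakes[i] += 1
                w[i] *= math.exp(-lr)                     # penalize experts that erred
    return alg_mistakes - min(expert_mistakes)
```

With `lr` of order $\sqrt{\ln\lvert\mathcal{H}\rvert/T}$, the expected regret of this scheme is $O(\sqrt{T\ln\lvert\mathcal{H}\rvert})$, so every finite class is online learnable in the above sense; the Littlestone dimension, introduced next, plays the role of $\log\lvert\mathcal{H}\rvert$ for general classes.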
The [*Littlestone dimension*]{} is a combinatorial parameter associated to the class ${\mathcal{H}}$ which characterizes its online learnability [@Littlestone87online; @Bendavid09agnostic]: $\mathcal H$ is online learnable if and only if it has a finite Littlestone dimension $d<\infty$. Moreover, the best possible regret $R(T)$ for online learning of ${\mathcal{H}}$ satisfies $$\Omega (\sqrt{dT}) \leq R(T) \leq O(\sqrt{dT\log T}).$$ Furthermore, it is known that if one of the experts never errs (i.e. in the realizable mistake-bound model), then the optimal regret is exactly $d$. #### Stability. While at first glance it may seem that online learning and differentially-private learning have little to do with one another, a line of recent works has revealed a tight connection between the two [@Agarwal17dponline; @Abernathy17onlilnedp; @AlonLMM19; @bousquet2019passing; @NeelRW19; @Joseph2019TheRO; @Gonen19privateonline]. At a high level, this connection appears to boil down to the notion of stability, which plays a key role in both topics. On one hand, the definition of differential privacy is itself a form of stability; it requires robustness of the output distribution of an algorithm when its input undergoes small changes. On the other hand, stability also arises as a central motif in online learning paradigms such as [*Follow the Perturbed Leader*]{} [@Kalai02geometricalgorithms; @kalai05efficient] and [*Follow the Regularized Leader*]{} [@abernethy08competing; @Shalev07ftrl; @Hazan16oco]. In their monograph [@DworkR14], Dwork and Roth identified stability as a common factor of learning and differential privacy: [*“Differential privacy is enabled by stability and ensures stability… we observe a tantalizing moral equivalence between learnability, differential privacy, and stability.”*]{} This insight has found formal manifestations in several works. For example, Abernethy et al. used DP-inspired stability methodology to derive a unified framework for proving state-of-the-art bounds in online learning [@Abernathy17onlilnedp]. In the opposite direction, Agarwal and Singh showed that certain standard stabilization techniques in online learning imply differential privacy [@Agarwal17dponline]. Stability plays a key role in this work as well. Our main result, which shows that any class with a finite Littlestone dimension can be privately learned, hinges on the following form of stability: for $\eta > 0$ and $n\in\mathbb{N}$, a learning algorithm ${\mathcal{A}}$ is [*$(n,\eta)$-globally stable*]{}[^4] with respect to a distribution ${\mathcal{D}}$ over examples if there exists an hypothesis $h$ whose frequency as an output is at least $\eta$. Namely, $$\Pr_{S\sim {\mathcal{D}}^n}[{\mathcal{A}}(S) = h] \geq \eta.$$ Our argument follows by showing that every ${\mathcal{H}}$ can be learned by a globally-stable algorithm with parameters $\eta = \exp(-d), n=\exp(d)$, where $d$ is the Littlestone dimension of ${\mathcal{H}}$. As a corollary, we get an equivalence between global stability and differential privacy (which can be viewed as a form of local stability). That is, the existence of a globally stable learner for ${\mathcal{H}}$ is equivalent to the existence of a differentially-private learner for it (and both are equivalent to having a finite Littlestone dimension). #### Littlestone Classes. It is natural to ask which classes have finite Littlestone dimension. First, note that every finite class ${\mathcal{H}}$ has Littlestone dimension $d \leq \log\lvert {\mathcal{H}}\rvert$. 
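As a concrete illustration (not part of the paper's constructions), for a small finite class over a finite domain the Littlestone dimension can be computed by brute force from the mistake-tree definition recalled in the preliminaries, via the standard recursion $\operatorname{Ldim}(\mathcal{H}) = \max_x\bigl[1 + \min_b \operatorname{Ldim}(\{h\in\mathcal{H}: h(x)=b\})\bigr]$, taken over points $x$ on which $\mathcal{H}$ is not constant. The sketch below is exponential-time and only meant for tiny examples.

```python
def ldim(H):
    """Littlestone dimension of a finite class H, given as a frozenset of
    label tuples (one +/-1 entry per point of a finite domain)."""
    if len(H) <= 1:
        return 0 if H else -1            # a singleton class shatters only the depth-0 tree
    n_points = len(next(iter(H)))
    best = 0
    for i in range(n_points):
        pos = frozenset(h for h in H if h[i] == +1)
        neg = frozenset(h for h in H if h[i] == -1)
        if pos and neg:                  # point i can root a deeper shattered mistake tree
            best = max(best, 1 + min(ldim(pos), ldim(neg)))
    return best

# The four threshold functions on a 3-point ordered domain:
thresholds = frozenset([(-1, -1, -1), (1, -1, -1), (1, 1, -1), (1, 1, 1)])
print(ldim(thresholds))                  # prints 2 = log2(4), matching d <= log|H|
```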
There are also many natural and interesting infinite classes with finite Littlestone dimension. For example, let $X=\mathbb{F}^n$ be an $n$-dimensional vector space over a field $\mathbb{F}$ and let ${\mathcal{H}}\subseteq\{\pm 1\}^X$ consist of all (indicators of) affine subspaces of dimension $\leq d$. The Littlestone dimension of ${\mathcal{H}}$ is $d$. More generally, any class of hypotheses that can be described by polynomial *equalities* of constant degree has finite Littlestone dimension.[^5] This can be generalized even further to classes that are definable in [*stable theories*]{}. This (different, still) notion of stability is deep and well-explored in model theory. We refer the reader to [@Chase19modelmachine], Section 5.1 for more examples of stable theories and the Littlestone classes they correspond to. #### Organization. The rest of this manuscript is organized as follows. In Section \[sec:results\] we formally state our main results and discuss some implications. Section \[sec:overview\] overviews some of the main ideas in the proofs. Sections \[sec:preliminaries\] - \[sec:wrapping\] contain complete proofs, and the last section (Section \[sec:conc\]) concludes the paper with some suggestions for future work. Main Results {#sec:results} ------------ We next present our main results. We begin with the statements concerning the relationship between online learning and differentially-private learning. In Section \[sec:stability\] we present and discuss the notion of global stability, and finally in Section \[sec:boosting\] we discuss an implication for private boosting. Throughout this section some standard technical terms are used. For definitions of these terms we refer the reader to Section \[sec:preliminaries\]. \[thm:main\] Let ${\mathcal{H}}\subseteq\{\pm 1\}^X$ be a class with Littlestone dimension $d$, let ${\varepsilon},\delta \in (0, 1)$ be privacy parameters, and let $\alpha,\beta \in (0, 1/2)$ be accuracy parameters. For $$n = O\left(\frac{16^d \cdot d^2 \cdot (d + \log(1/\beta\delta))}{\alpha{\varepsilon}}\right) = O_d\left(\frac{\log(1/\beta\delta)}{\alpha{\varepsilon}}\right)$$ there exists an $({\varepsilon},\delta)$-DP learning algorithm such that for every realizable distribution ${\mathcal{D}}$, given an input sample $S\sim {\mathcal{D}}^n$, the output hypothesis $f={\mathcal{A}}(S)$ satisfies ${\operatorname{loss}}_{{\mathcal{D}}}(f)\leq \alpha$ with probability at least $1-\beta$, where the probability is taken over $S\sim {\mathcal{D}}^n$ as well as the internal randomness of ${\mathcal{A}}$. A similar result holds in the agnostic setting: \[thm:agnostic\] Let ${\mathcal{H}}\subseteq\{\pm 1\}^X$ be a class with Littlestone dimension $d$, let ${\varepsilon}$, and $\delta \in (0, 1)$ be privacy parameters, and let $\alpha,\beta \in (0, 1/2)$ be accuracy parameters. For $$n = O\left(\frac{16^d \cdot d^2 \cdot (d + \log(1/\beta\delta))}{\alpha\epsilon} +\frac{\textrm{VC}({\mathcal{H}})+\log 1/\beta}{\alpha^2\epsilon} \right)$$ there exists an $({\varepsilon},\delta)$-DP learning algorithm such that for every distribution ${\mathcal{D}}$, given an input sample $S\sim {\mathcal{D}}^n$, the output hypothesis $f={\mathcal{A}}(S)$ satisfies $${\operatorname{loss}}_{{\mathcal{D}}}(f)\leq \min_{h\in {\mathcal{H}}} {\operatorname{loss}}_{{\mathcal{D}}}(h)+ \alpha$$ with probability at least $1-\beta$, where the probability is taken over $S\sim {\mathcal{D}}^n$ as well as the internal randomness of ${\mathcal{A}}$. 
Theorem \[thm:agnostic\] follows from Theorem \[thm:main\] by Theorem 2.3 in [@alon2020closure], which provides a general mechanism to transform a learner in the realizable setting to a learner in the agnostic setting[^6]. We note that formally the transformation in [@alon2020closure] is stated for a constant ${\varepsilon}=O(1)$. Taking ${\varepsilon}=O(1)$ is without loss of generality as a standard “secrecy-of-the-sample” argument can be used to convert this learner into one which is $({\varepsilon}, \delta)$-differentially private by increasing the sample size by a factor of roughly $1/{\varepsilon}$ and running the algorithm on a random subsample. See [@KasiviswanathanLNRS11; @Vadhan17] for further details. \[thm:equivalence\] The following statements are equivalent for a class ${\mathcal{H}}\subseteq \{\pm 1\}^X$: 1. ${\mathcal{H}}$ is online learnable. 2. ${\mathcal{H}}$ is approximate differentially-privately PAC learnable. Theorem \[thm:equivalence\] is a corollary of Theorem \[thm:main\] (which gives $1\to 2$) and the result by Alon et al. [@AlonLMM19] (which gives $2\to 1$). We comment that a quantitative relation between the learning and regret rates is also implied: it is known that the optimal regret bound for ${\mathcal{H}}$ is $\tilde \Theta_d(\sqrt{T})$, where the $\tilde \Theta_d$ conceals a constant which depends on the Littlestone dimension of ${\mathcal{H}}$ [@Bendavid09agnostic]. Similarly, we get that the optimal sample complexity of agnostically privately learning ${\mathcal{H}}$ is $\Theta_d(\frac{\log({1}/(\beta\delta))}{\alpha^2{\varepsilon}})$. We remark however that the above equivalence is mostly interesting from a theoretical perspective, and should not be regarded as an efficient transformation between online and private learning. Indeed, the Littlestone dimension dependencies concealed by the $\tilde \Theta_d(\cdot)$ in the above bounds on the regret and sample complexities may be very different from one another. For example, there are classes for which the $\Theta_d(\frac{\log({1}/(\beta\delta))}{\alpha{\varepsilon}})$ bound hides a $\mathrm{poly}(\log^*(d))$ dependence, and the $\tilde \Theta_d(\sqrt{T})$ bound hides a $\Theta(d)$ dependence. One example which attains both of these dependencies is the class of thresholds over a linearly ordered domain of size $2^d$ [@AlonLMM19; @kaplan2019privately]. ### Global Stability {#sec:stability} Our proof of Theorem \[thm:main\], which establishes that every Littlestone class can be learned privately, hinges on an intermediate property which we call [*global stability*]{}: Let $n\in\mathbb{N}$ be a sample size and $\eta > 0$ be a global stability parameter. An algorithm ${\mathcal{A}}$ is $(n,\eta)$-globally-stable with respect to a distribution ${\mathcal{D}}$ if there exists an hypothesis $h$ such that $$\Pr_{S\sim{\mathcal{D}}^n}[A(S) = h] \geq \eta.$$ While global stability is a rather strong property, it holds automatically for learning algorithms using a finite hypothesis class. By an averaging argument, every learner using $n$ samples which produces a hypothesis in a finite hypothesis class ${\mathcal{H}}$ is $(n, 1/|{\mathcal{H}}|)$-globally-stable. The following proposition generalizes “Occam’s Razor” for finite hypothesis classes to show that global stability is enough to imply similar generalization bounds in the realizable setting. \[prop:gs\] Let ${\mathcal{H}}\subseteq\{\pm 1\}^X$ be a class, and assume that ${\mathcal{A}}$ is a learner for ${\mathcal{H}}$ (i.e. ${\operatorname{loss}}_S({\mathcal{A}}(S))=0$ for every realizable sample $S$). 
Let ${\mathcal{D}}$ be a realizable distribution such that ${\mathcal{A}}$ is $(n,\eta)$-globally-stable with respect to ${\mathcal{D}}$, and let $h$ be a hypothesis such that $\Pr_{S\sim{\mathcal{D}}^n}[A(S) = h] \geq \eta$, as guaranteed by the definition of global stability. Then, $${\operatorname{loss}}_{\mathcal{D}}(h) \leq \frac{\ln(1/\eta)}{n}.$$ Let $\alpha$ denote the loss of $h$, i.e. ${\operatorname{loss}}_{\mathcal{D}}(h) = \alpha$, and let $E_1$ denote the event that $h$ is consistent with the input sample $S$. Thus, $\Pr[E_1] = (1-\alpha)^n$. Let $E_2$ denote the event that ${\mathcal{A}}(S)=h$. By assumption, $\Pr[E_2]\geq \eta$. Now, since ${\mathcal{A}}$ is consistent we get that $E_2\subseteq E_1$, and hence that $\eta\leq(1-\alpha)^n$. This finishes the proof (using the fact that $1-\alpha \leq e^{-\alpha}$ and taking the logarithm of both sides). Another way to view global stability is in the context of *pseudo-deterministic algorithms* [@Gat11pseudo]. A pseudo-deterministic algorithm is a randomized algorithm which yields some fixed output with high probability. Thinking of a realizable distribution ${\mathcal{D}}$ as an instance on which PAC learning algorithm has oracle access, a globally-stable learner is one which is “weakly” pseudo-deterministic in that it produces some fixed output with probability bounded away from zero. A different model of pseudo-deterministic learning, in the context of learning from membership queries, was defined and studied by Oliveira and Santhanam [@OliveiraS18]. We prove Theorem \[thm:main\] by constructing, for a given Littlestone class ${\mathcal{H}}$, an algorithm ${\mathcal{A}}$ which is globally stable with respect to realizable distribution. ### Boosting for Approximate Differential Privacy {#sec:boosting} Our characterization of private learnability in terms of the Littlestone dimension has new consequences for boosting the privacy and accuracy guarantees of differentially-private learners. Specifically, it shows that the existence of a learner with weak (but non-trivial) privacy and accuracy guarantees implies the existence of a learner with any desired privacy and accuracy parameters — in particular, one with $\delta(n) = \exp(-\Omega(n))$. \[thm:adp-boost\] There exists a constant $c > 0$ for which the following holds. Suppose that for some sample size $n_0$ there is an $({\varepsilon}_0, \delta_0)$-differentially private learner $\cal W$ for a class ${\mathcal{H}}$ satisfying the guarantee $$\Pr_{S\sim {\mathcal{D}}^{n_0}}[{\operatorname{loss}}_{{\mathcal{D}}}({\cal W}(S)) > \alpha_0 ] < \beta_0$$ for ${\varepsilon}_0 = 0.1, \alpha_0 = \beta_0 = 1/16$, and $\delta_0 \le c/n_0^2\log n_0$. Then there exists a constant $C_{\mathcal{H}}$ such that for every $\alpha, \beta, {\varepsilon}, \delta \in (0, 1)$ there exists an $({\varepsilon}, \delta)$-differentially private learner for ${\mathcal{H}}$ with $$\Pr_{S\sim {\mathcal{D}}^{n}}[{\operatorname{loss}}_{{\mathcal{D}}}({{\mathcal{A}}}(S)) > \alpha] < \beta$$ whenever $n \ge C_{\mathcal{H}}\cdot \log(1/\beta\delta)/\alpha{\varepsilon}$. Given a weak learner $\cal W$ as in the statement of Theorem \[thm:adp-boost\], the results of [@AlonLMM19] imply that ${\operatorname{Ldim}}({\mathcal{H}})$ is finite. Hence Theorem \[thm:main\] allows us to construct a learner for ${\mathcal{H}}$ with arbitrarily small privacy and accuracy, yielding Theorem \[thm:adp-boost\]. 
The constant $C_{{\mathcal{H}}}$ in the last line of the theorem statement suppresses a factor depending on ${\operatorname{Ldim}}({\mathcal{H}})$. Prior to our work, it was open whether arbitrary learning algorithms satisfying approximate differential privacy could be boosted in this strong a manner. We remark, however, that in the case of *pure* differential privacy, such boosting can be done algorithmically and efficiently. Specifically, given an $({\varepsilon}_0, 0)$-differentially private weak learner as in the statement of Theorem \[thm:adp-boost\], one can first apply random sampling to improve the privacy guarantee to $(p{\varepsilon}_0, 0)$-differential privacy at the expense of increasing its sample complexity to roughly $n_0 /p$ for any $p \in (0, 1)$. The Boosting-for-People construction of Dwork, Rothblum, and Vadhan [@DworkRV10] (see also [@BunCS20]) then produces a strong learner by making roughly $T \approx \log(1/\beta)/\alpha^2$ calls to the weak learner. By composition of differential privacy, this gives an $({\varepsilon}, 0)$-differentially private strong learner with sample complexity roughly $n_0 \cdot \log(1/\beta)/\alpha^2{\varepsilon}$. What goes wrong if we try to apply this argument using an $({\varepsilon}_0, \delta_0)$-differentially private weak learner? Random sampling still gives a $(p{\varepsilon}_0, p\delta_0)$-differentially private weak learner with sample complexity $n_0 / p$. However, this is not sufficient to improve the $\delta$ parameter of the learner *as a function of the number of samples $n$*. Thus the strong learner one obtains using Boosting-for-People still at best guarantees $\delta(n) = \tilde{O}(1/n^2)$. Meanwhile, Theorem \[thm:adp-boost\] shows that the existence of a $(0.1, \tilde{O}(1/n^2))$-differentially private learner for a given class implies the existence of a $(0.1, \exp(-\Omega(n)))$-differentially private learner for that class. We leave it as an interesting open question to determine whether this kind of boosting for approximate differential privacy can be done algorithmically. Proof Overview {#sec:overview} ============== We next give an overview of the main arguments used in the proof of Theorem \[thm:main\]. The proof consists of two parts: (i) we first show that every class with a finite Littlestone dimension can be learned by a globally-stable algorithm, and (ii) we then show how to generically obtain a differentially-private learner from any globally-stable learner. Step 1: Finite Littlestone Dimension $\implies$ Globally-Stable Learning ------------------------------------------------------------------------ Let ${\mathcal{H}}$ be a concept class with Littlestone dimension $d$. Our goal is to design a globally-stable learning algorithm for ${\mathcal{H}}$ with stability parameter $\eta = \exp(-d)$ and sample complexity $n=\exp(d)$. We will sketch here a weaker variant of our construction which uses the same ideas but is simpler to describe. The property of ${\mathcal{H}}$ that we will use is that it can be learned online in the realizable setting with at most $d$ mistakes (see Section \[sec:online\] for a brief overview of this setting). Let ${\mathcal{D}}$ denote a realizable distribution with respect to which we wish to learn in a globally-stable manner. That is, ${\mathcal{D}}$ is a distribution over examples $(x,c(x))$ where $c\in{\mathcal{H}}$ is an unknown target concept. 
Let $\mathcal{A}$ be a learning algorithm that makes at most $d$ mistakes while learning an unknown concept from ${\mathcal{H}}$ in the online model. Consider applying $\mathcal{A}$ on a sequence $S=((x_1,c(x_1))\ldots (x_n,c(x_n)))\sim{\mathcal{D}}^n$, and denote by $M$ the random variable counting the number of mistakes $\mathcal{A}$ makes in this process. The mistake-bound guarantee on ${\mathcal{A}}$ ensures that $M\leq d$ always. Consequently, there is $0\leq i \leq d$ such that $$\Pr[M=i] \geq \frac{1}{d+1}.$$ Note that we can identify, with high probability, an $i$ such that $\Pr[M=i] \geq 1/2d$ by running ${\mathcal{A}}$ on $O(d)$ samples from ${\mathcal{D}}^n$. We next describe how to handle each of the $d+1$ possibilities for $i$. Let us first assume that $i=d$, namely that $$\Pr[M=d] \geq \frac{1}{2d}.$$ We claim that in this case we are done: indeed, after making $d$ mistakes it must be the case that ${\mathcal{A}}$ has completely identified the target concept $c$ (or else ${\mathcal{A}}$ could be presented with another example which forces it to make $d+1$ mistakes). Thus, in this case it holds with probability at least $1/2d$ that ${\mathcal{A}}(S)=c$ and we are done. Let us next assume that $i=d-1$, namely that $$\Pr[M=d-1] \geq \frac{1}{2d}.$$ The issue with applying the previous argument here is that before making the $d$’th mistake, ${\mathcal{A}}$ can output many different hypotheses (depending on the input sample $S$). We use the following idea: draw two samples $S_1,S_2 \sim {\mathcal{D}}^n$ independently, and set $f_1 = {\mathcal{A}}(S_1)$ and $f_2={\mathcal{A}}(S_2)$. Condition on the event that the number of mistakes made by ${\mathcal{A}}$ on each of $S_1,S_2$ is exactly $d-1$ (by assumption, this event occurs with probability at least $(1/2d)^2$) and consider the following two possibilities: - (i) $\Pr[f_1=f_2]\geq\frac{1}{4}$, - (ii) $\Pr[f_1=f_2] < \frac{1}{4}$. If (i) holds then using a simple calculation one can show that there is $h$ such that $\Pr[A(S) = h] \geq \frac{1}{(2d)^2}\cdot \frac{1}{4}$ and we are done. If (ii) holds then we apply the following [*“random contest”*]{} between $S_1,S_2$: 1. Pick $x$ such that $f_1(x)\neq f_2(x)$ and draw $y\sim\{\pm 1\}$ uniformly at random. 2. If $f_1(x)\neq y$ then the output is ${\mathcal{A}}(S_1 \circ (x,y))$, where $S_1\circ (x,y)$ denotes the sample obtained by appending $(x,y)$ to the end of $S_1$. In this case we say that $S_1$ “won the contest”. 3. Else, $f_2(x)\neq y$ and the output is ${\mathcal{A}}(S_2 \circ (x,y))$. In this case we say that $S_2$ “won the contest”. Note that adding the auxiliary example $(x,y)$ forces ${\mathcal{A}}$ to make exactly $d$ mistakes on $S_i\circ (x,y)$. Now, if $y\sim\{\pm 1\}$ satisfies $y = c(x) $ then by the mistake-bound argument it holds that ${\mathcal{A}}(S_i\circ (x,y))=c$. Therefore, since $\Pr_{y\sim\{\pm 1\}}[c(x)=y] = 1/2$, it follows that $$\Pr_{S_1,S_2, y}[{\mathcal{A}}(S_i\circ (x,y))=c] \geq \frac{1}{(2d)^2}\cdot \frac{3}{4}\cdot\frac{1}{2} =\Omega(1/d^2),$$ and we are done. Similar reasoning can be used by induction to handle the remaining cases (the next one would be that $\Pr[M=d-2] \geq \frac{1}{2d}$, and so on). The proof we present in Section \[sec:LSstable\] is based on a similar idea of performing “random contests,” although the construction becomes more complex to handle other issues, such as generalization, which were not addressed here. For more details we refer the reader to the complete argument in Section \[sec:LSstable\]. 
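As a schematic rendering of a single such contest (illustration only; `draw_sample`, `run_online_learner`, and `domain` are assumed helpers and not objects defined in the paper: the first returns a fresh sample $S\sim{\mathcal{D}}^n$ as a list of pairs, the second returns the hypothesis that the mistake-bound learner ${\mathcal{A}}$ maintains after processing a sample, and the domain is assumed enumerable so that a disagreement point can be found by inspection):

```python
import random

def random_contest(draw_sample, run_online_learner, domain):
    """One 'random contest' between two independent samples, as sketched above."""
    S1, S2 = draw_sample(), draw_sample()
    f1, f2 = run_online_learner(S1), run_online_learner(S2)
    disagreements = [x for x in domain if f1(x) != f2(x)]
    if not disagreements:
        return f1                      # case (i): the two runs already agree
    x = random.choice(disagreements)   # case (ii): stage a contest at a disagreement point
    y = random.choice([+1, -1])        # unbiased label; equals c(x) with probability 1/2
    if f1(x) != y:                     # S1 "wins" and absorbs the extra forced mistake
        return run_online_learner(S1 + [(x, y)])
    return run_online_learner(S2 + [(x, y)])
```

The full construction below iterates such contests recursively through the distributions ${\mathcal{D}}_k$ defined in Section \[sec:LSstable\].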
Step 2: Globally-Stable Learning $\implies$ Differentially-Private Learning --------------------------------------------------------------------------- Given a globally-stable learner ${\mathcal{A}}$ for a concept class ${\mathcal{H}}$, we can obtain a differentially-private learner using standard techniques in the literature on private learning and query release. If ${\mathcal{A}}$ is a $(\eta, m)$-globally stable learner with respect to a distribution ${\mathcal{D}}$, we obtain a differentially-private learner using roughly $m/\eta$ samples from that distribution as follows. We first run ${\mathcal{A}}$ on $k \approx 1/\eta$ independent samples, non-privately producing a list of $k$ hypotheses. We then apply a differentially-private “Stable Histograms” algorithm [@KorolovaKMN09; @BunNS16] to this list which allows us to privately publish a short list of hypotheses that appear with frequency $\Omega(\eta)$. Global stability of the learner ${\mathcal{A}}$ guarantees that with high probability, this list contains *some* hypothesis $h$ with small population loss. We can then apply a generic differentially-private learner (based on the exponential mechanism) on a fresh set of examples to identify such an accurate hypothesis from the short list. Preliminaries {#sec:preliminaries} ============= PAC Learning ------------ We use standard notation from statistical learning; see, e.g., [@Shalev14book]. Let $X$ be any “domain” set and consider the “label” set $Y=\{\pm 1\}$. A [*hypothesis*]{} is a function $h : X\to Y$, which we alternatively write as an element of $Y^X$. An [*example*]{} is a pair $(x, y) \in X\times Y$. A [*sample*]{} $S$ is a finite sequence of examples. Let ${\mathcal{D}}$ be a distribution over $X \times \{\pm 1\}$. The population loss of a hypothesis $h : X \to \{\pm 1\}$ is defined by $${\operatorname{loss}}_{{\mathcal{D}}}(h) = \Pr_{(x, y) \sim {\mathcal{D}}}[h(x) \ne y].$$ Let $S=\bigl((x_i,y_i)\bigr)_{i=1}^n$ be a sample. The empirical loss of $h$ with respect to $S$ is defined by $${\operatorname{loss}}_{S}(h) = \frac{1}{n}\sum_{i=1}^n1[h(x_i)\neq y_i].$$ Let ${\mathcal{H}}\subseteq Y^X$ be a [*hypothesis class*]{}. A sample $S$ is said to be [*realizable by ${\mathcal{H}}$*]{} if there is $h\in H$ such that ${\operatorname{loss}}_S(h)=0$. A distribution ${\mathcal{D}}$ is said to be [*realizable by ${\mathcal{H}}$*]{} if there is $h\in H$ such that ${\operatorname{loss}}_{\mathcal{D}}(h)=0$. A [*learning algorithm*]{} $A$ is a (possibly randomized) mapping taking input samples to output hypotheses. We also use the following notation: for samples $S,T$, let $S\circ T$ denote the combined sample obtained by appending $T$ to the end of $S$. Online Learning {#sec:online} --------------- #### Littlestone Dimension. The Littlestone dimension is a combinatorial parameter that captures mistake and regret bounds in online learning [@Littlestone87online; @Bendavid09agnostic].[^7] Its definition uses the notion of [*mistake trees*]{}. A mistake tree is a binary decision tree whose internal nodes are labeled by elements of $X$. Any root-to-leaf path in a mistake tree can be described as a sequence of examples $(x_1,y_1),...,(x_d,y_d)$, where $x_i$ is the label of the $i$’th internal node in the path, and $y_i=+1$ if the $(i+1)$’th node in the path is the right child of the $i$’th node and $y_i = -1$ otherwise. 
We say that a mistake tree $T$ is [*shattered*]{} by ${\mathcal{H}}$ if for any root-to-leaf path $(x_1,y_1),...,(x_d,y_d)$ in $T$ there is an $h\in {\mathcal{H}}$ such that $h(x_i)=y_i$ for all $i\leq d$ (see Figure \[fig:shatteredtree\]). The Littlestone dimension of ${\mathcal{H}}$, denoted ${\operatorname{Ldim}}({\mathcal{H}})$, is the depth of the largest complete tree that is shattered by ${\mathcal{H}}$. We say that ${\mathcal{H}}$ is a Littlestone class if it has finite Littlestone dimension. ![[]{data-label="fig:shatteredtree"}](shatteredtree.pdf) #### Mistake Bound and the Standard Optimal Algorithm (${\mathsf{SOA}}$). The simplest setting in which learnability is captured by the Littlestone dimension is called the [*mistake-bound model*]{} [@Littlestone87online]. Let ${\mathcal{H}}\subseteq \{\pm 1\}^X$ be a fixed hypothesis class known to the learner. The learning process takes place in a sequence of trials, where the order of events in each trial $t$ is as follows: - the learner receives an instance $x_t\in X$, - the learner responds with a prediction $\hat y_t\in \{\pm 1\}$, and - the learner is told whether or not the response was correct. We assume that the examples given to the learner are realizable in the following sense: For the entire sequence of trials, there is a hypothesis $h\in {\mathcal{H}}$ such that $y_t = h(x_t)$ for every instance $x_t$ and correct response $y_t$. An algorithm in this model learns ${\mathcal{H}}$ with mistake bound $M$ if for every realizable sequence of examples presented to the learner, it makes a total of at most $M$ incorrect predictions. Littlestone showed that the minimum mistake bound achievable by any online learner is exactly ${\operatorname{Ldim}}({\mathcal{H}})$ [@Littlestone87online]. Furthermore, he described an explicit algorithm, called the [*Standard Optimal Algorithm*]{} (${\mathsf{SOA}}$), which achieves this optimal mistake bound. [**Standard Optimal Algorithm (${\mathsf{SOA}}$)**]{}\ 1. Initialize ${\mathcal{H}}_1 = {\mathcal{H}}$. 2. For trials $t = 1, 2, \dots$: - For each $b \in \{\pm 1\}$ and $x \in X$, let ${\mathcal{H}}_t^b(x) = \{h \in {\mathcal{H}}_t : h(x) = b\}$. Define $h_t : X \to \{\pm 1\}$ by $h_t(x) = {\operatorname{argmax}}_b {\operatorname{Ldim}}({\mathcal{H}}_t^{b}(x))$. - Receive instance $x_t$. - Predict $\hat{y}_t = h_t(x_t)$. - Receive correct response $y_t$. - Update ${\mathcal{H}}_{t+1} = {\mathcal{H}}_t^{y_t}(x_t)$. #### Extending the ${\mathsf{SOA}}$ to non-realizable sequences. Our globally-stable learner for Littlestone classes will make use of an optimal online learner in the mistake bound model. For concreteness, we pick the ${\mathsf{SOA}}$ (any other optimal algorithm will also work). It will be convenient to extend the ${\mathsf{SOA}}$ to sequences which are not necessarily realizable by a hypothesis in ${\mathcal{H}}$. We will use the following simple extension of the ${\mathsf{SOA}}$ to non-realizable samples: \[def:soaext\] Consider a run of the ${\mathsf{SOA}}$ on examples $(x_1,y_1),\ldots, (x_m,y_m)$, and let $h_t$ denote the predictor used by the ${\mathsf{SOA}}$ after seeing the first $t$ examples (i.e., $h_t$ is the rule used by the ${\mathsf{SOA}}$ to predict in the $(t+1)$’st trial). Then, after observing both $x_{t+1},y_{t+1}$ do the following: - If the sequence $(x_1,y_1),\ldots, (x_{t+1},y_{t+1})$ is realizable by some $h\in{\mathcal{H}}$ then apply the usual update rule of the ${\mathsf{SOA}}$ to obtain $h_{t+1}$. 
- Else, set $h_{t+1}$ as follows: $h_{t+1}(x_{t+1}) = y_{t+1}$, and $h_{t+1}(x)=h_t(x)$ for every $x\neq x_{t+1}$. Thus, upon observing a non-realizable sequence, this update rule locally updates the maintained predictor $h_t$ to agree with the last example. Differential Privacy -------------------- We use standard definitions and notation from the differential privacy literature. For more background see, e.g., the surveys [@DworkR14; @Vadhan17]. For $a, b, {\varepsilon}, \delta \in [0, 1]$ let $a\approx_{{\varepsilon},\delta} b$ denote the statement $$a\leq e^{{\varepsilon}}b + \delta ~\text{ and }~ b\leq e^{\varepsilon}a + \delta.$$ We say that two probability distributions $p,q$ are [*$({\varepsilon},\delta)$-indistinguishable*]{} if $p(E) \approx_{{\varepsilon},\delta} q(E)$ for every event $E$. \[def:private\] A randomized algorithm $$A: (X\times \{\pm 1\})^n \to \{\pm 1\}^X$$ is $({\varepsilon},\delta)$-differentially-private if for every two samples $S,S'\in (X\times \{\pm 1\})^n$ that disagree on a single example, the output distributions $A(S)$ and $A(S')$ are $({\varepsilon},\delta)$-indistinguishable. We emphasize that $({\varepsilon}, \delta)$-indistinguishability must hold for every such pair of samples, even if they are not generated according to a (realizable) distribution. The parameters ${\varepsilon},\delta$ are usually treated as follows: ${\varepsilon}$ is a small constant (say $0.1$), and $\delta$ is negligible, $\delta = n^{-\omega(1)}$, where $n$ is the input sample size. The case of $\delta=0$ is also referred to as [*pure differential privacy*]{}. Thus, a class ${\mathcal{H}}$ is privately learnable if it is PAC learnable by an algorithm $A$ that is $({\varepsilon}(n),\delta(n))$-differentially private with ${\varepsilon}(n) \leq 0.1$ and $\delta(n) \leq n^{-\omega(1)}$. Globally-Stable Learning of Littlestone Classes {#sec:LSstable} =============================================== Theorem Statement ----------------- The following statement asserts that any class ${\mathcal{H}}$ with a bounded Littlestone dimension can be learned by a globally-stable algorithm. \[thm:littlestone-frequent\] Let ${\mathcal{H}}$ be a hypothesis class with Littlestone dimension $d\geq 1$, let $\alpha>0$, and set $$m = (8^{d+1}+1)\cdot\Bigl\lceil\frac{d}{\alpha}\Bigr\rceil.$$ Then there exists a randomized algorithm $G : (X \times \{\pm 1\})^m \to \{\pm 1\}^X$ with the following properties. Let ${\mathcal{D}}$ be a realizable distribution and let $S\sim {\mathcal{D}}^m$ be an input sample. Then there exists a hypothesis $f$ such that $$\Pr[G(S) = f] \geq \frac{1}{(d+1)2^{d+1}} \text{ and } {\operatorname{loss}}_{{\mathcal{D}}}(f) \leq \alpha.$$ The distributions ${\mathcal{D}}_k$ ----------------------------------- The algorithm $G$ is obtained by running the ${\mathsf{SOA}}$ on a sample drawn from a carefully tailored distribution. This distribution belongs to a family of distributions which we define next. Each of these distributions can be sampled from using black-box access to i.i.d. samples from ${\mathcal{D}}$. Recall that for a pair of samples $S,T$, we denote by $S\circ T$ the sample obtained by appending $T$ to the end of $S$. Define a sequence of distributions ${\mathcal{D}}_k$ for $k\geq 0$ as follows: [**Distributions ${\mathcal{D}}_k$**]{}\ Let $n$ denote an “auxiliary sample” size (to be fixed later) and let ${\mathcal{D}}$ denote the target realizable distribution over examples. 
The distributions ${\mathcal{D}}_k = {\mathcal{D}}_k({\mathcal{D}},n)$ are defined by induction on $k$ as follows: 1. ${\mathcal{D}}_0$: output the empty sample $\emptyset$ with probability 1. 2. Let $ k\ge 1 $. If there exists an $f$ such that $$\Pr_{S \sim {\mathcal{D}}_{k-1}, T\sim{\mathcal{D}}^n}[{\mathsf{SOA}}(S\circ T) = f] \geq 2^{-d},$$ or if ${\mathcal{D}}_{k-1}$ is undefined then ${\mathcal{D}}_k$ is undefined. 3. Else, ${\mathcal{D}}_k$ is defined recursively by the following process: - (i) Draw $S_0,S_1\sim {\mathcal{D}}_{k-1}$ and $T_0,T_1\sim{\mathcal{D}}^n$ independently. - (ii) Let $f_0={\mathsf{SOA}}(S_0\circ T_0)$, $f_1={\mathsf{SOA}}(S_1\circ T_1)$. - (iii) If $f_0=f_1$ then go back to step (i). - (iv) Else, pick $x\in \{x: f_0(x)\neq f_1(x)\}$ and sample $y\sim\{\pm 1\}$ uniformly. - (v) If $f_0(x)\neq y$ then output $S_0 \circ T_0\circ \bigl((x,y)\bigr)$ and else output $S_1\circ T_1\circ \bigl((x,y)\bigr)$. Please see Figure \[fig:Dk\] for an illustration of sampling $S\sim {\mathcal{D}}_k$ for $k=3$. We next observe some basic facts regarding these distributions. First, note that whenever ${\mathcal{D}}_k$ is well-defined, the process in Item 3 terminates with probability 1. Let $k$ be such that ${\mathcal{D}}_k$ is well-defined and consider a sample $S$ drawn from ${\mathcal{D}}_k$. The size of $S$ is $\lvert S\rvert = k\cdot(n + 1)$. Among these $k\cdot(n+1)$ examples there are $k\cdot n$ examples drawn from ${\mathcal{D}}$ and $k$ examples which are generated in Item 3(iv). We will refer to these $k$ examples as [*tournament examples*]{}. Note that during the generation of $S\sim {\mathcal{D}}_k$ there are examples drawn from ${\mathcal{D}}$ which do not actually appear in $S$. In fact, the number of such examples may be unbounded, depending on how many times Items 3(i)-3(iii) were repeated. In Section \[sec:monte\] we will define a “Monte-Carlo” variant of ${\mathcal{D}}_k$ in which the number of examples drawn from ${\mathcal{D}}$ is always bounded. This Monte-Carlo variant is what we actually use to define our globally-stable learning algorithm, but we introduce the simpler distributions ${\mathcal{D}}_k$ to clarify our analysis. The $k$ tournament examples satisfy the following important properties. \[obs:mistakebound\] Let $k$ be such that ${\mathcal{D}}_k$ is well-defined and consider running the ${\mathsf{SOA}}$ on the concatenated sample $S\circ T$, where $S\sim {\mathcal{D}}_k$ and $T\sim {\mathcal{D}}^n$. Then 1. Each tournament example forces a mistake on the ${\mathsf{SOA}}$. Consequently, the number of mistakes made by the ${\mathsf{SOA}}$ when run on $S\circ T$ is at least $k$. 2. ${\mathsf{SOA}}(S\circ T)$ is consistent with $T$. The first item follows directly from the definition of $x$ in Item 3(iv) and the definition of $S$ in Item 3(v). The second item clearly holds when $S\circ T$ is realizable by ${\mathcal{H}}$ (because the ${\mathsf{SOA}}$ is consistent). For non-realizable $S\circ T$, Item 2 holds by our extension of the ${\mathsf{SOA}}$ in Definition \[def:soaext\]. ![[]{data-label="fig:Dk"}](DrawingfromDk.pdf) ### The Existence of Frequent Hypotheses The following lemma is the main step in establishing global stability. \[lem:frequentdist\] There exists $k\leq d$ and an hypothesis $f:X\to\{\pm 1\}$ such that $$\Pr_{S\sim {\mathcal{D}}_k, T\sim{\mathcal{D}}^n}[{\mathsf{SOA}}(S\circ T) = f] \geq 2^{-d}.$$ Suppose for the sake of contradiction that this is not the case. 
In particular, this means that ${\mathcal{D}}_d$ is well-defined and that for every $f$: $$\label{eq:3} \Pr_{S\sim {\mathcal{D}}_d, T\sim{\mathcal{D}}^n}[{\mathsf{SOA}}(S\circ T) = f] < 2^{-d}.$$ We show that this cannot be the case when $f=c$ is the target concept (i.e., for $c\in{\mathcal{H}}$ which satisfies ${\operatorname{loss}}_{\mathcal{D}}(c)=0$). Towards this end, note that with probability $2^{-d}$ over $S\sim {\mathcal{D}}_d$ we have that all $d$ tournament examples are consistent with $c$. Indeed, this follows since in each tournament example $(x_i,y_i)$, the label $y_i$ is drawn independently of $x_i$ and of the sample constructed thus far. So, $y_i=c(x_i)$ with probability $1/2$ independently for each tournament example. Therefore, with probability $2^{-d}$ we have that $S\circ T$ is consistent with $c$ (because all examples in $S\circ T$ which are drawn from ${\mathcal{D}}$ are also consistent with $c$). Now, since each tournament example forces a mistake on the ${\mathsf{SOA}}$ (Observation \[obs:mistakebound\]), and since the ${\mathsf{SOA}}$ does not make more than $d$ mistakes on realizable samples, it follows that if all tournament examples in $S\sim {\mathcal{D}}_d$ are consistent with $c$ then ${\mathsf{SOA}}(S)={\mathsf{SOA}}(S\circ T)=c$. Thus, $$\Pr_{S\sim {\mathcal{D}}_d, T\sim{\mathcal{D}}^n}[{\mathsf{SOA}}(S\circ T) = c] \geq 2^{-d},$$ which contradicts Equation \[eq:3\] and finishes the proof. ### Generalization The next lemma shows that only hypotheses $f$ that generalize well satisfy the conclusion of Lemma \[lem:frequentdist\] (note the similarity of this proof with the proof of Proposition \[prop:gs\]): \[lem:gen\] Let $k$ be such that ${\mathcal{D}}_k$ is well-defined. Then every $f$ such that $$\Pr_{S\sim {\mathcal{D}}_k, T\sim{\mathcal{D}}^n}[{\mathsf{SOA}}(S\circ T) = f] \geq 2^{-d}$$ satisfies ${\operatorname{loss}}_{{\mathcal{D}}}(f) \le \frac{d}{n}$. Let $f$ be a hypothesis such that $\Pr_{S\sim {\mathcal{D}}_k, T \sim {\mathcal{D}}^n}[{\mathsf{SOA}}(S \circ T) = f] \geq 2^{-d}$ and let $\alpha={\operatorname{loss}}_{{\mathcal{D}}}(f)$. We will argue that $$\label{eq:4} 2^{-d} \leq (1-\alpha)^{n}.$$ Define the events $A,B$ as follows. 1. $A$ is the event that ${\mathsf{SOA}}(S\circ T) = f$. By assumption, $\Pr[A] \geq 2^{-d}$. 2. $B$ is the event that $f$ is consistent with $T$. Since $\lvert T\rvert = n$, we have that $\Pr[B] = (1-\alpha)^{n}$. Note that $A \subseteq B$: Indeed, ${\mathsf{SOA}}(S\circ T)$ is consistent with $T$ by the second item of Observation \[obs:mistakebound\]. Thus, whenever ${\mathsf{SOA}}(S\circ T)=f$, it must be the case that $f$ is consistent with $T$. Hence, $\Pr[A]\leq \Pr[B]$, which implies Inequality \[eq:4\] and finishes the proof (using the fact that $1-\alpha \leq 2^{-\alpha}$ and taking logarithms on both sides). The Algorithm $G$ ----------------- ### A Monte-Carlo Variant of ${\mathcal{D}}_k$ {#sec:monte} Consider the following first attempt at defining a globally-stable learner $G$: (i) draw $i\in\{0\ldots d\}$ uniformly at random, (ii) sample $S\sim{\mathcal{D}}_i$, and (iii) output ${\mathsf{SOA}}(S\circ T)$, where $T\sim {\mathcal{D}}^n$. The idea is that with probability $1/(d+1)$ the sampled $i$ will be equal to a number $k$ satisfying the conditions of Lemma \[lem:frequentdist\], and so the desired hypothesis $f$ guaranteed by this lemma (which also has low population loss by Lemma \[lem:gen\]) will be output with probability at least $2^{-d}/(d+1)$. 
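In code, this first attempt and the recursive sampler it relies on look roughly as follows (illustration only; `draw_batch` and `soa` are assumed black boxes, returning $n$ fresh i.i.d. examples from ${\mathcal{D}}$ and the hypothesis maintained by the extended ${\mathsf{SOA}}$ on a given sample, respectively, and the domain is assumed enumerable; the well-definedness check in Item 2 of the definition of ${\mathcal{D}}_k$ is not performed here).

```python
import random

def sample_Dk(k, draw_batch, soa, domain):
    """Sampler for the recursively defined distributions D_k (sketch)."""
    if k == 0:
        return []                                  # D_0 is the empty sample
    while True:                                    # retry loop of Items 3(i)-(iii)
        S0 = sample_Dk(k - 1, draw_batch, soa, domain)
        S1 = sample_Dk(k - 1, draw_batch, soa, domain)
        T0, T1 = draw_batch(), draw_batch()
        f0, f1 = soa(S0 + T0), soa(S1 + T1)
        disagreements = [x for x in domain if f0(x) != f1(x)]
        if not disagreements:
            continue                               # f0 = f1: redraw everything
        x = random.choice(disagreements)
        y = random.choice([+1, -1])                # the new tournament example
        winner = S0 + T0 if f0(x) != y else S1 + T1
        return winner + [(x, y)]

def first_attempt_G(d, draw_batch, soa, domain):
    """First-attempt learner: k uniform in {0,...,d}, S ~ D_k, output SOA(S o T)."""
    k = random.randint(0, d)
    return soa(sample_Dk(k, draw_batch, soa, domain) + draw_batch())
```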
The issue here is that sampling $S\sim {\mathcal{D}}_i$ may require an unbounded number of samples from the target distribution ${\mathcal{D}}$ (in fact, ${\mathcal{D}}_i$ may even be undefined). To circumvent this possibility, we define a Monte-Carlo variant of ${\mathcal{D}}_k$ in which the number of examples drawn from ${\mathcal{D}}$ is always bounded. [**The Distributions $\tilde {\mathcal{D}}_k$ (a Monte-Carlo variant of ${\mathcal{D}}_k$)**]{}\ 1. Let $n$ be the auxiliary sample size and $N$ be an upper bound on the number of examples drawn from ${\mathcal{D}}$. 2. $\tilde {\mathcal{D}}_0$: output the empty sample $\emptyset$ with probability 1. 3. For $k> 0$, define $\tilde {\mathcal{D}}_k$ recursively by the following process: - [**Throughout the process, if more than $N$ examples from ${\mathcal{D}}$ are drawn (including examples drawn in the recursive calls), then output “Fail”.**]{} - (i) Draw $S_0,S_1\sim \tilde {\mathcal{D}}_{k-1}$ and $T_0,T_1\sim{\mathcal{D}}^n$ independently. - (ii) Let $f_0={\mathsf{SOA}}(S_0\circ T_0)$, $f_1={\mathsf{SOA}}(S_1\circ T_1)$. - (iii) If $f_0=f_1$ then go back to step (i). - (iv) Else, pick $x\in \{x: f_0(x)\neq f_1(x)\}$ and sample $y\sim\{\pm 1\}$ uniformly. - (v) If $f_0(x)\neq y$ then output $S_0 \circ T_0\circ \bigl((x,y)\bigr)$ and else output $S_1\circ T_1\circ \bigl((x,y)\bigr)$. Note that $\tilde {\mathcal{D}}_k$ is well-defined for every $k$, even for $k$ such that ${\mathcal{D}}_k$ is undefined (however, for such $k$’s the probability of outputting “Fail” may be large). It remains to specify the upper bound $N$ on the number of examples drawn from ${\mathcal{D}}$ in $\tilde {\mathcal{D}}_k$. Towards this end, we prove the following bound on the expected number of examples from ${\mathcal{D}}$ that are drawn while generating $S\sim{\mathcal{D}}_k$: \[lem:avgsample\] Let $k$ be such that ${\mathcal{D}}_k$ is well-defined, and let $M_k$ denote the number of examples from ${\mathcal{D}}$ that are drawn in the process of generating $S\sim {\mathcal{D}}_k$. Then, $$\mathbb{E}[M_k] \leq 4^{k+1}\cdot n.$$ Note that $\mathbb{E}[M_0]=0$ as ${\mathcal{D}}_0$ deterministically produces the empty sample. We first show that for all $0 \le i < k$, $$\label{eq:1} \mathbb{E}[M_{i+1}] \leq 4\mathbb{E}[M_{i}] + 4n,$$ and then conclude the desired inequality by induction. To see why Inequality \[eq:1\] holds, let the random variable $R$ denote the number of times Item 3(i) was executed during the generation of $S\sim {\mathcal{D}}_{i+1}$. That is, $R$ is the number of times a pair $S_0,S_1\sim {\mathcal{D}}_i$ and a pair $T_0,T_1\sim {\mathcal{D}}^n$ were drawn. Observe that $R$ is distributed geometrically with success probability $\theta$, where: $$\begin{aligned} \theta &= 1 - \Pr_{S_0,S_1, T_0,T_1}\bigl[{\mathsf{SOA}}(S_0\circ T_0) = {\mathsf{SOA}}(S_1\circ T_1)\bigr] \\ &= 1 - \sum_{f}\Pr_{S, T}\bigl[{\mathsf{SOA}}(S\circ T) = f\bigr]^2\\ &\geq 1-2^{-d},\end{aligned}$$ where the last inequality follows because $i< k$ and hence ${\mathcal{D}}_{i+1}$ is well-defined, which implies that $\Pr_{S, T}\bigl[{\mathsf{SOA}}(S\circ T) = f\bigr]\leq 2^{-d}$ for all $f$. Now, the random variable $M_{i+1}$ can be expressed as follows: $$M_{i+1} = \sum_{j=1}^\infty M_{i+1}^{(j)},$$ where $$M_{i+1}^{(j)} = \begin{cases} 0 &\text{if } R < j,\\ \text{$\#$ of examples drawn from ${\mathcal{D}}$ in the $j$'th execution of Item 3(i)} &\text{if } R\geq j. \end{cases}$$ Thus, $\mathbb{E}[M_{i+1}] = \sum_{j=1}^\infty\mathbb{E}[M_{i+1}^{(j)}]$. 
We claim that $$\mathbb{E}[M_{i+1}^{(j)}] = (1-\theta)^{j-1}\cdot (2\mathbb{E}[M_i] + 2n).$$ Indeed, the probability that $R\geq j$ is $(1-\theta)^{j-1}$ and conditioned on $R\geq j$, in the $j$’th execution of Item 3(i) two samples from ${\mathcal{D}}_{i}$ are drawn and two samples from ${\mathcal{D}}^n$ are drawn. Thus $$\mathbb{E}[M_{i+1}] = \sum_{j=1}^\infty(1-\theta)^{j-1}\cdot (2\mathbb{E}[M_i] + 2n)= \frac{1}{\theta}\cdot (2\mathbb{E}[M_i] + 2n) \leq 4\mathbb{E}[M_i] + 4n,$$ where the last inequality is because $\theta \geq 1- 2^{-d}\geq 1/2$. This gives Inequality \[eq:1\]. Next, using that $\mathbb{E}[M_0]=0$, a simple induction gives $$\mathbb{E}[M_{i+1}]\leq (4+4^2+\ldots+ 4^{i+1})n \leq 4^{i+2}n,$$ and the lemma follows by taking $i+1=k$. ### Completing the Proof of Theorem \[thm:littlestone-frequent\] Our globally-stable learning algorithm $G$ is defined as follows. [**Algorithm $G$**]{}\ 1. Consider the distribution $\tilde {\mathcal{D}}_k$, where the auxiliary sample size is set to $n=\lceil \frac{d}{\alpha}\rceil$ and the sample complexity upper bound is set to $N=8^{d+1}\cdot n$. 2. Draw $k\in \{0,1,\ldots, d\}$ uniformly at random. 3. Output $h={\mathsf{SOA}}(S\circ T)$, where $T\sim {\mathcal{D}}^n$ and $S\sim \tilde {\mathcal{D}}_k$. First note that the sample complexity of $G$ is $\lvert S\rvert + \lvert T\rvert \leq N+n = (8^{d+1}+1)\cdot\lceil\frac{d}{\alpha}\rceil$, as required. It remains to show that there exists a hypothesis $f$ such that: $$\Pr[G(S) = f] \geq \frac{2^{-(d+1)}}{d+1} \text{ and } {\operatorname{loss}}_{{\mathcal{D}}}(f) \leq \alpha.$$ By Lemma \[lem:frequentdist\], there exists $k^*\leq d$ and $f^*$ such that $$\Pr_{S\sim {\mathcal{D}}_{k^*}, T\sim{\mathcal{D}}^n}[{\mathsf{SOA}}(S\circ T) = f^*] \geq 2^{-d}.$$ By Lemma \[lem:gen\], $${\operatorname{loss}}_{{\mathcal{D}}}(f^*) \leq \frac{d}{n} \leq \alpha.$$ We claim that $G$ outputs $f^*$ with probability at least $2^{-(d+1)}$. To see this, let $M_{k^*}$ denote the number of examples drawn from ${\mathcal{D}}$ during the generation of $S\sim {\mathcal{D}}_{k^*}$. Lemma \[lem:avgsample\] and an application of Markov’s inequality yield $$\begin{aligned} \Pr\bigl[M_{k^*} > 8^{d+1}\cdot n\bigr] &\leq \Pr\bigl[M_{k^*} > 2^{d+1}\cdot 4^{k^*+1}\cdot n\bigr] \tag{because $k^*\leq d$}\\ &\leq 2^{-(d+1)}. \tag{by Markov's inequality, since $\mathbb{E}[M_{k^*}]\leq 4^{k^*+1}\cdot n$}\end{aligned}$$ Therefore, $$\begin{aligned} \Pr_{S\sim \tilde {\mathcal{D}}_{k^*}, T\sim {\mathcal{D}}^n}[{\mathsf{SOA}}(S\circ T) = f^*] &= \Pr_{S\sim {\mathcal{D}}_{k^*}, T\sim {\mathcal{D}}^n}[{\mathsf{SOA}}(S\circ T) = f^* \text{ and } M_{k^*} \leq 8^{d+1}\cdot n] \\ &\geq 2^{-d} - 2^{-(d+1)} = 2^{-(d+1)}.\end{aligned}$$ Thus, since $k=k^*$ with probability $1/(d+1)$, it follows that $G$ outputs $f^*$ with probability at least $\frac{2^{-(d+1)}}{d+1}$ as required. Globally-Stable Learning Implies Private Learning {#sec:stableprivate} ================================================= In this section we prove that any globally-stable learning algorithm yields a differentially-private learning algorithm with finite sample complexity. Tools from Differential Privacy ------------------------------- We begin by stating a few standard tools from the differential privacy literature which underlie our construction of a learning algorithm. Let $X$ be a data domain and let $S \in X^n$. 
For an element $x \in X$, define ${\operatorname{freq}}_S(x) = \frac{1}{n} \cdot \#\{i \in [n] : x_i = x\}$, i.e., the fraction of the elements in $S$ which are equal to $x$. \[lem:histograms\] Let $X$ be any data domain. For $$n \ge O\left(\frac{\log(1/\eta\beta\delta)}{\eta {\varepsilon}}\right)$$ there exists an $({\varepsilon}, \delta)$-differentially private algorithm ${\mathsf{Hist}}$ which, with probability at least $1-\beta$, on input $S = (x_1, \dots, x_n)$ outputs a list $L \subseteq X$ and a sequence of estimates $a \in [0, 1]^{|L|}$ such that - Every $x$ with ${\operatorname{freq}}_S(x) \ge \eta$ appears in $L$ and - For every $x \in L$, the estimate $a_x$ satisfies $|a_x - {\operatorname{freq}}_S(x)| \le \eta$. Using the Exponential Mechanism of McSherry and Talwar [@McSherryT07], Kasiviswanathan et al. [@KasiviswanathanLNRS11] described a generic differentially-private learner based on approximate empirical risk minimization. \[lem:generic\] Let $H \subseteq \{\pm 1\}^X$ be a collection of hypotheses. For $$n = O\left(\frac{\log|H| +\log(1/\beta)}{\alpha {\varepsilon}}\right)$$ there exists an ${\varepsilon}$-differentially private algorithm ${\mathsf{GenericLearner}}: (X \times \{\pm 1\})^n \to H$ such that the following holds. Let ${\mathcal{D}}$ be a distribution over $(X \times \{\pm 1\})$ such that there exists $h^* \in H$ with $${\operatorname{loss}}_{{\mathcal{D}}}(h^*) \le \alpha.$$ Then on input $S \sim {\mathcal{D}}^n$, algorithm ${\mathsf{GenericLearner}}$ outputs, with probability at least $1-\beta$, a hypothesis $\hat{h} \in H$ such that $${\operatorname{loss}}_{\mathcal{D}}(\hat{h}) \le 2\alpha.$$ Our formulation of the guarantees of this algorithm differ slightly from those of [@KasiviswanathanLNRS11], so we give its standard proof for completeness. The algorithm ${\mathsf{GenericLearner}}(S)$ samples a hypothesis $h \in H$ with probability proportional to $\exp(-{\varepsilon}n {\operatorname{loss}}_S(h) / 2)$. This algorithm can be seen as an instantiation of the Exponential Mechanism [@McSherryT07]; the fact that changing one sample changes the value of ${\operatorname{loss}}_S(h)$ by at most $1$ implies that ${\mathsf{GenericLearner}}$ is ${\varepsilon}$-differentially private. We now argue that ${\mathsf{GenericLearner}}$ is an accurate learner. Let $E$ denote the event that the sample $S$ satisfies the following conditions: 1. For every $h \in H$ such that ${\operatorname{loss}}_{{\mathcal{D}}}(h) > 2\alpha$, it also holds that ${\operatorname{loss}}_{S}(h) > 5\alpha/3$, and 2. For the hypothesis $h^* \in H$ satisfying ${\operatorname{loss}}_{{\mathcal{D}}}(h^*) \le \alpha$, it also holds that ${\operatorname{loss}}_{S}(h^*) \le 4\alpha / 3$. We claim that $\Pr[E] \ge 1-\beta/2$ as long as $n \ge O(\log(|H|/\beta) / \alpha)$. To see this, let $h \in H$ be an arbitrary hypothesis with ${\operatorname{loss}}_D(h) > 2\alpha$. By a multiplicative Chernoff bound[^8] we have ${\operatorname{loss}}_S(h) > 7\alpha / 4$ with probability at least $1 - \beta/(4|H|)$ as long as $n \ge O(\log(|H|/\beta) / \alpha)$. Taking a union bound over all $h \in H$ shows that condition 1. holds with probability at least $1 - \beta/4$. Similarly, a multiplicative Chernoff bound ensures that condition 2 holds with probability at least $1 - \beta/4$, so $E$ holds with probability at least $1-\beta/2$. Now we show that conditioned on $E$, the algorithm ${\mathsf{GenericLearner}}(S)$ indeed produces a hypothesis $h$ with ${\operatorname{loss}}_D(\hat{h}) \le 2\alpha$. 
This follows the standard analysis of the accuracy guarantees of the Exponential Mechanism. Condition 2 of the definition of event $E$ guarantees that ${\operatorname{loss}}_S(h^*) \le 4\alpha / 3$. This ensures that the normalization factor in the definition of the Exponential Mechanism is at least $\exp(-2{\varepsilon}\alpha n /3)$. Hence by a union bound, $$\Pr[{\operatorname{loss}}_S(\hat{h}) > 5\alpha/3] \le |H| \cdot \frac{\exp(-5{\varepsilon}\alpha n / 6)}{\exp(-2{\varepsilon}\alpha n / 3)} = |H| e^{-{\varepsilon}\alpha n / 6}.$$ Taking $n \ge O(\log(|H|/\beta) / \alpha{\varepsilon})$ ensures that this probability is at most $\beta / 2$. Given that ${\operatorname{loss}}_S(\hat{h}) \le 5\alpha / 3$, Condition 1 of the definition of event $E$ ensures that ${\operatorname{loss}}_{{\mathcal{D}}}(\hat{h}) \le 2\alpha$. Thus, for $n$ sufficiently large as described, we have overall that ${\operatorname{loss}}_{{\mathcal{D}}}(\hat{h}) \le 2\alpha$ with probability at least $1- \beta$. Construction of a Private Learner --------------------------------- We now describe how to combine the Stable Histograms algorithm with the Generic Private Learner to convert any globally-stable learning algorithm into a differentially-private one. \[thm:selection\] Let ${\mathcal{H}}$ be a concept class over data domain $X$. Let $G : (X \times \{\pm 1\})^m \to \{\pm 1\}^X$ be a randomized algorithm such that, for ${\mathcal{D}}$ a realizable distribution and $S \sim {\mathcal{D}}^m$, there exists a hypothesis $h$ such that $\Pr[G(S) = h] \ge \eta$ and ${\operatorname{loss}}_{{\mathcal{D}}}(h) \le \alpha / 2$. Then for some $$n = O\left(\frac{m \cdot \log(1/\eta\beta\delta)}{\eta{\varepsilon}} + \frac{\log(1/\eta\beta)}{\alpha{\varepsilon}}\right)$$ there exists an $({\varepsilon}, \delta)$-differentially private algorithm $M: (X \times \{\pm 1\})^n \to \{\pm 1\}^X$ which, given $n$ i.i.d. samples from $\mathcal{D}$, produces a hypothesis $\hat{h}$ such that ${\operatorname{loss}}_{{\mathcal{D}}}(\hat{h}) \le \alpha$ with probability at least $1-\beta$. Theorem \[thm:selection\] is realized by the learning algorithm $M$ described below. Here, the parameter $$k = O\left(\frac{\log(1/\eta\beta\delta)}{\eta{\varepsilon}}\right)$$ is chosen so that Lemma \[lem:histograms\] guarantees Algorithm ${\mathsf{Hist}}$ succeeds with the stated accuracy parameters. The parameter $$n' = O\left(\frac{\log(1/\eta\beta)}{\alpha{\varepsilon}}\right)$$ is chosen so that Lemma \[lem:generic\] guarantees that ${\mathsf{GenericLearner}}$ succeeds on a list $L$ of size $|L| \le 2/\eta$ with the given accuracy and confidence parameters. [**Differentially-Private Learner $M$**]{}\ 1. Let $S_1, \dots, S_k$ each consist of $m$ i.i.d. samples from ${\mathcal{D}}$. Run $G$ on each batch of samples producing $h_1 = G(S_1), \dots, h_k = G(S_k)$. 2. Run the Stable Histogram algorithm ${\mathsf{Hist}}$ on input $H = (h_1, \dots, h_k)$ using privacy parameters $({\varepsilon}/2, \delta)$ and accuracy parameters $(\eta/8, \beta/3)$, producing a list $L$ of frequent hypotheses. 3. Let $S'$ consist of $n'$ i.i.d. samples from ${\mathcal{D}}$. Run ${\mathsf{GenericLearner}}(S')$ using the collection of hypotheses $L$ with privacy parameter ${\varepsilon}/ 2$ and accuracy parameters $(\alpha / 2, \beta/3)$ to output a hypothesis $\hat{h}$. 
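For orientation, here is a schematic rendering of $M$ (illustration only; `draw_examples` and `G` are assumed helpers, hypotheses are assumed to be returned in a hashable encoding that can be evaluated as `h[x]`, and the noisy-threshold step below is a simplified stand-in for the Stable Histograms algorithm of Lemma \[lem:histograms\] rather than its actual implementation).

```python
import math
import random
from collections import Counter

def laplace(scale):
    """Sample Laplace noise with the given scale (inverse-CDF method)."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_learner_M(draw_examples, G, m, k, n_prime, eta, epsilon):
    """Sketch of the learner M above.  draw_examples(t) returns t i.i.d.
    pairs (x, y) from D; G maps a sample to a hypothesis."""
    # Step 1: run the globally-stable learner on k independent samples of size m.
    hyps = [G(draw_examples(m)) for _ in range(k)]

    # Step 2 (simplified): keep hypotheses whose noisy empirical frequency is large.
    counts = Counter(hyps)
    L = [h for h, c in counts.items()
         if c / k + laplace(2.0 / (epsilon * k)) >= 3 * eta / 4]

    # Step 3: exponential mechanism over the short list L, on a fresh sample.
    S_prime = draw_examples(n_prime)
    def emp_loss(h):
        return sum(int(h[x] != y) for x, y in S_prime) / len(S_prime)
    weights = [math.exp(-epsilon * n_prime * emp_loss(h) / 2.0) for h in L]
    return random.choices(L, weights=weights, k=1)[0] if L else None
```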
We first argue that the algorithm $M$ is differentially private. The outcome $L$ of step 2 is generated in a $({\varepsilon}/2, \delta)$-differentially-private manner as it inherits its privacy guarantee from ${\mathsf{Hist}}$. For every fixed choice of the coin tosses of $G$ during the executions $G(S_1), \dots, G(S_k)$, a change to one entry of some $S_i$ changes at most one outcome $h_i \in H$. Differential privacy for step 2 follows by taking expectations over the coin tosses of all the executions of $G$, and for the algorithm as a whole by simple composition. We now argue that the algorithm is accurate. Using standard generalization arguments, we have that with probability at least $1-\beta/3$, $$\left|{\operatorname{freq}}_H(h) - \Pr_{S \sim {\mathcal{D}}^m}[G(S) = h]\right| \le \frac{\eta}{8}$$ for every $h \in \{\pm 1\}^X$ as long as $k \ge O(\log(1/\beta)/\eta)$. Let us condition on this event. Then by the accuracy of the algorithm ${\mathsf{Hist}}$, with probability at least $1-\beta/2$ it produces a list $L$ containing $h^*$ together with a sequence of estimates that are accurate to within additive error $\eta / 8$. In particular, $h^*$ appears in $L$ with an estimate $a_{h^*} \ge \eta - \eta/8 - \eta/8 \ge 3\eta / 4$. Now remove from $L$ every item $h$ with estimate $a_h < 3\eta / 4$. Since every estimate is accurate to within $\eta / 8$, this leaves a list with $|L| \le 2/\eta$ that contains $h^*$ with ${\operatorname{loss}}_{{\mathcal{D}}}(h^*) \le \alpha/2$. Hence, with probability at least $1-\beta/3$, step 3 succeeds in outputting a hypothesis $\hat{h}$ with ${\operatorname{loss}}_{{\mathcal{D}}}(\hat{h}) \le \alpha$. The total sample complexity of the algorithm is $k\cdot m + n'$ which matches the asserted bound. Wrapping up (Proof of Theorem \[thm:main\]) {#sec:wrapping} =========================================== We now combine Theorem \[thm:littlestone-frequent\] (finite Littlestone dimension $\implies$ global stability) with Theorem \[thm:selection\] (global stability $\implies$ private learnability) to prove Theorem \[thm:main\]. Let ${\mathcal{H}}$ be a hypothesis class with Littlestone dimension $d$ and let ${\mathcal{D}}$ be any realizable distribution. Then Theorem \[thm:littlestone-frequent\] guarantees, for $m = O(8^d \cdot d / \alpha)$, the existence of a randomized algorithm $G : (X \times \{\pm 1\})^m \to \{\pm 1\}^X$ and a hypothesis $f$ such that $$\Pr[G(S) = f] \ge \frac{1}{(d+1)2^{d+1}} \text{ and } {\operatorname{loss}}_{{\mathcal{D}}}(f) \le \alpha / 2.$$ Taking $\eta = 1/(d+1)2^{d+1}$, Theorem \[thm:selection\] gives an $({\varepsilon}, \delta)$-differentially private learner with sample complexity $$n = O\left(\frac{m \cdot \log(1/\eta\beta\delta)}{\eta{\varepsilon}} + \frac{\log(1/\eta\beta)}{\alpha{\varepsilon}}\right) = O\left(\frac{16^d \cdot d^2 \cdot (d + \log(1/\beta\delta))}{\alpha{\varepsilon}}\right).$$ Conclusion {#sec:conc} ========== We conclude this paper with a few suggestions for future work. 1. [**Sharper Quantitative Bounds.**]{} Our upper bound on the differentially-private sample complexity of a class ${\mathcal{H}}$ has an exponential dependence on the Littlestone dimension ${\operatorname{Ldim}}({\mathcal{H}})$, while the lower bound by [@AlonLMM19] depends on $\log^*({\operatorname{Ldim}}({\mathcal{H}}))$. The recent work by [@kaplan2019privately] shows that for some classes the lower bound is nearly tight (up to a polynomial factor). It would be interesting to determine whether the upper bound can be improved in general. 2. 
[**Characterizing Private Query Release.**]{} Another fundamental problem in differentially-private data analysis is the query release, or equivalently, data sanitization problem: Given a class ${\mathcal{H}}$ and a sensitive dataset $S$, output a synthetic dataset $\hat{S}$ such that $h(S) \approx h(\hat{S})$ for every $h \in {\mathcal{H}}$. Does finite Littlestone dimension characterize when this task is possible? Such a statement would follow if one could make our private learner for Littlestone classes *proper* [@bousquet2019passing]. 3. [**Oracle-Efficient Learning.**]{} Neel, Roth, and Wu [@NeelRW19] recently began a systematic study of oracle-efficient learning algorithms: Differentially-private algorithms which are computationally efficient when given oracle access to their non-private counterparts. The main open question left by their work is whether *every* privately learnable concept class can be learned in an oracle-efficient manner. Our characterization shows that this is possible if and only if Littlestone classes admit oracle-efficient learners. 4. [**General Loss Functions.**]{} It is natural to explore whether the equivalence between online and private learning extends beyond binary classification (which corresponds to the 0-1 loss) to regression and other real-valued losses. 5. [**Global Stability.**]{} It would be interesting to perform a thorough investigation of global stability and to explore potential connections to other forms of stability in learning theory, including uniform hypothesis stability [@Bousquet02stability], PAC-Bayes [@McAllester99PACB], local statistical stability [@Ligett19adaptive], and others. 6. [**Differentially-Private Boosting.**]{} Can the type of private boosting presented in Section \[sec:boosting\] be done algorithmically, and ideally, efficiently? Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank Amos Beimel and Uri Stemmer for pointing out, and helping to fix, a mistake in the derivation of \[thm:agnostic\] in a previous version. We also thank Yuval Dagan for providing useful comments and for insightful conversations. [^1]: Department of Computer Science, Boston University *mbun@bu.edu* [^2]: Department of Electrical Engineering, Tel Aviv University *rlivni@tauex.tau.ac.il* [^3]: Google AI Princeton *shaymoran1@gmail.com* [^4]: The word [*global*]{} highlights a difference with other forms of algorithmic stability. Indeed, previous forms of stability such as DP and [*uniform hypothesis stability*]{} [@Bousquet02stability] are local in the sense that they require output robustness subject to [*local*]{} changes in the input. However, the property required by global stability captures stability with respect to resampling the entire input. [^5]: Note that if one replaces “equalities” with “inequalities” then the Littlestone dimension may become unbounded while the VC dimension remains bounded. This is demonstrated, e.g., by halfspaces which are captured by polynomial inequalities of degree $1$. [^6]: Theorem 2.3 in [@alon2020closure] is based on a previous realizable-to-agnostic transformation from [@beimel2015learning] which applies to [*proper*]{} learners. Here we require the more general transformation from [@alon2020closure] as the learner implied by  may be improper. [^7]: It appears that the name “Littlestone dimension” was coined in [@Bendavid09agnostic]. 
[^8]: I.e., for independent random variables $Z_1, \dots, Z_n$ whose sum $Z$ satisfies $\operatorname*{\mathbb{E}}[Z] = \mu$, we have for every $\delta \in (0, 1)$ that $\Pr[Z \le (1-\delta)\mu] \le \exp(-\delta^2\mu / 2)$ and $\Pr[Z \ge (1 + \delta)\mu] \le \exp(-\delta^2\mu / 3)$.
1
--- abstract: 'We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.' address: - '$^1$Extreme Computing Research Center (ECRC), King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia.' - '$^2$Department of Computer Science, American University of Beirut (AUB), Beirut, Lebanon.' author: - Wajih Halim Boukaram$^1$ - George Turkiyyah$^2$ - Hatem Ltaief$^1$ - 'David E. Keyes$^1$' bibliography: - 'arxiv\_batch\_svd.bib' title: Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression --- Introduction {#sec:intro} ============ The singular value decomposition (SVD) is a factorization of a general $m \times n$ matrix $A$ of the form $$A = U \Sigma V^*.$$ $U$ is an $m \times m$ orthonormal matrix whose columns $U_i$ are called the left singular vectors. $\Sigma$ is an $m \times n$ diagonal matrix whose diagonal entries $\sigma_i$ are called the singular values and are sorted in decreasing order. $V$ is an $n \times n$ orthonormal matrix whose columns $V_i$ are called the right singular vectors. When $m > n$, we can compute a reduced form $A = \hat{U} \hat{\Sigma} V^*$ where $\hat{U}$ is an $m \times n$ matrix and $\hat{\Sigma}$ is an $n \times n$ diagonal matrix. One can easily obtain the full form from the reduced one by extending $\hat{U}$ with $(m - n)$ orthogonal vectors and $\hat{\Sigma}$ with an $(m - n)$ zero block row. Without any loss of generality, we will focus on the reduced SVD of real matrices in our discussions. The SVD of a matrix is a crucial component in many applications in signal processing and statistics as well as matrix compression, where truncating the $(n - k)$ singular values that are smaller than some threshold gives us a rank-$k$ approximation $\tilde{A}$ of the matrix $A$. This matrix is the unique minimizer of the function $f_k(B) = || A - B ||_F$. In the context of hierarchical matrix operations, effective compression relies on the ability to perform the computation of large batches of independent SVDs of small matrices of low numerical rank. Randomized methods [@halko2011finding] are well suited for computing a truncated SVD of these types of matrices and are built on three computational kernels: the QR factorization, matrix-matrix multiplications and SVDs of smaller $k \times k$ matrices. Motivated by this task, we discuss the implementation of high performance batched QR and SVD kernels on the GPU, focusing on the more challenging SVD tasks. The remainder of this paper is organized as follows. Section \[sec:background\] presents different algorithms used to compute the QR factorization and the SVD as well as some considerations when optimizing for GPUs. Section \[sec:batch\_qr\] discusses the batched QR factorization and compares its performance with existing libraries. 
Sections \[sec:registers\], \[sec:shared\] and \[sec:block\_global\] discuss the various implementations of the SVD based on the level of the memory hierarchy in which the matrices can reside. Specifically, Section \[sec:registers\] describes the implementation for very small matrix sizes that can fit in registers, Section \[sec:shared\] describes the implementation for matrices that can reside in shared memory, and Section \[sec:block\_global\] describes the block Jacobi implementation for larger matrix sizes that must reside in global memory. Section \[sec:randomized\] details the implementation of the batched randomized SVD routine. We then discuss some details of the application to hierarchical matrix compression in Section \[sec:application\]. We conclude and discuss future work in Section \[sec:conclusion\]. Background {#sec:background} ========== In this section we give a review of the most common algorithms used to compute the QR factorization and the SVD of a matrix as well as discuss some considerations when optimizing on the GPU. QR Factorization ---------------- The QR factorization decomposes an $m \times n$ matrix $A$ into the product of an orthogonal $m \times m$ matrix $Q$ and an upper triangular $m \times n$ matrix $R$ [@golub2013matrix]. We can also compute a reduced form of the decomposition where Q is an $m \times n$ matrix and R is $n \times n$ upper triangular. The most common QR algorithm is based on transforming $A$ into an upper triangular matrix using a series of orthogonal transformations generated using Householder reflectors. Other algorithms such as the Gram-Schmidt or Modified Gram-Schmidt can produce the QR factorization by orthogonalizing a column with all previous columns; however, these methods are less stable than the Householder orthogonalization and the orthogonality of the resulting $Q$ factor suffers with the condition number of the matrix. Another method is based on Givens rotations, where entries in the subdiagonal part of the matrix are zeroed out to form the triangular factor and the rotations are accumulated to form the orthogonal factor. This method is very stable and has more parallelism than the Householder method; however it is more expensive, doing about 50% more work, and it is more challenging to extract the parallelism efficiently on the GPU. For our implementation, we rely on the Householder method due to its numerical stability and simplicity. The method is described in pseudo-code in Algorithm \[alg:qr\]. \[t\] $[Q, R] = [I, A]$ $v = \text{house}(R(i))$ $R = (I - 2vv^T) R$ \[alg:qr:trailing\_update\] $Q = Q (I - 2vv^T)$ SVD Algorithms -------------- Most implementations of the SVD are based on the two-phase approach popularized by Trefethen et al. [@trefethen1997numerical], where the matrix $A$ first undergoes bidiagonalization of the form $A = Q_U B Q_V^T$ where $Q_U$ and $Q_V$ are orthonormal matrices and $B$ is a bidiagonal matrix. The matrix $B$ is then diagonalized using some variant of the QR algorithm, the divide and conquer method or a combination of both to produce a decomposition $B = U_B \Sigma V_B^T$. The complete SVD is then determined as $A = (Q_U U_B) \Sigma (Q_V V_B)^T$ during the backward transformation. These methods require significant algorithmic and programming effort to become robust and efficient while still suffering from a loss of relative accuracy [@demmel1992jacobi]. 
An alternative is the one-sided Jacobi method where all $n(n-1)/2$ pairs of columns are repeatedly orthogonalized in sweeps using plane rotations until all columns are mutually orthogonal. When the process converges (i.e., all columns are mutually orthogonal up to machine precision), the left singular vectors are the normalized columns of the modified matrix with the singular values as the norms of those columns. The right singular vectors can be computed either by accumulating the rotations or by solving a system of equations. Our application does not need the right vectors, so we omit the details of computing them. Algorithm \[alg:jacobi\] describes the one-sided Jacobi method. Since each pair of columns can be orthogonalized independently, the method is also easily parallelized. The simplicity and inherent parallelism of the method make it an attractive first choice for an implementation on the GPU. \[b\] $G = A_{ij}^T A_{ij}$ \[alg:jacobi:gram\] $R = rot(G)$ $A_{ij} = A_{ij} R$ \[alg:jacobi:rot\] GPU Optimization Considerations ------------------------------- GPU kernels are launched by specifying a grid configuration which lets us organize threads into blocks and blocks into a grid. Launching a GPU kernel causes a short stall (as much as 10 microseconds) as the kernel is prepared for execution. This kernel launch overhead prevents kernels that complete their work faster than the overhead from executing in parallel, essentially serializing them. To overcome this limitation when processing small workloads, the work is batched into a single kernel call when possible [@batchqr_haidar; @batch_haidar]. All operations can then be executed in parallel without incurring the kernel launch overhead, with the grid configuration used to determine thread work assignment. A warp is a group of threads (32 threads in current generation GPUs, such as the NVIDIA K40) within a block that executes a single instruction in lockstep, without requiring any explicit synchronization. The occupancy of a kernel tells us the ratio of active warps to the maximum number of warps that a multiprocessor can host. This metric is dependent on the amount of resources that a kernel uses, such as register and shared memory usage and kernel launch configuration, as well as the compute capability of the card ([@wilt2013cuda] for more details). While not a requirement for good performance [@volkov2010better], it is generally a good idea to aim for high occupancy. Memory on the GPU is organized into a hierarchy of memory spaces as shown in Figure \[fig:memory\_hierarchy\]. At the bottom, we have global memory which is accessible by all threads and is the most plentiful but the slowest memory. The next space of interest is the shared memory which is accessible only by threads within the same block and is configurable with the L1 cache to be at most 48KB per thread block on current generation GPUs. Shared memory is very fast and acts as a programmer controllable cache. Finally, we have the registers which are local to the threads. Registers are the fastest of all memory, but the total number of registers usable by a thread without performance implications is limited. If a kernel needs more registers than the limit, then registers are spilled to “local" memory, which is in the slow but cached global memory. Making good use of the faster memories and avoiding excessive accesses to the slower ones is key to good performance on the GPU. 
As such, it is common to use blocking techniques in many algorithms, where a block of data is brought in from global memory and processed in one of the faster memories. ![The memory hierarchy of a modern GPU.[]{data-label="fig:memory_hierarchy"}](memory.pdf){width="45.00000%"} Related Work ------------ Batched GPU routines for LU, Cholesky and QR factorizations have been developed in [@batchqr_haidar; @batch_haidar; @charara_batch_tdla] using a block recursive approach which increases data reuse and leads to very good performance for relatively large matrix sizes. GPU routines optimized for computing the QR decomposition of very tall and skinny matrices are presented in [@caqr_anderson] where they develop an efficient transpose matrix-vector computation that is employed with some minor changes in this work. GPU-CPU hybrid algorithms for batched SVD using Jacobi and bidiagonalization methods are introduced in [@kotas_svd] where pair generation for the Jacobi method and the solver phase of the bidiagonalization are handled on the CPU. The work in [@Kang2015] employs the power method to construct a rank-1 approximation for 2D filters in convolutional neural networks. Routines to handle the SVD of many matrices on GPUs are presented in [@badolato_2015] where each thread within a warp computes the SVD of a single matrix. Batched QR Decomposition {#sec:batch_qr} ======================== In this section, we discuss implementation details of our batched QR kernel and compare it with other implementations from the MAGMA 2.2 [@tnld10] and CUBLAS 8 [@nvidia-cublas] libraries. Implementation -------------- One benefit of the Householder algorithm is that the application of reflectors to the trailing matrix (line \[alg:qr:trailing\_update\] of the algorithm) can be blocked together and expressed as a matrix-matrix multiplication (Level 3 BLAS) instead of multiple matrix-vector multiplications (Level 2 BLAS). The increased arithmetic intensity typically allows performance to improve when the trailing matrix is large. However, for small matrix blocks, the overhead of generating the blocked reflectors from their vector form as well as the lower performance of the matrix-matrix multiplication for small matrices hinder performance. We can obtain better performance by applying multiple reflectors in their vector form and performing the transpose matrix-vector multiplication efficiently within a thread block [@caqr_anderson]. First, we perform the regular factorization on a column block $P$ (called a panel). The entire panel is stored in registers, with each thread storing one row of the panel, and the transpose matrix-vector product is computed using a series of reductions using shared memory and warp shuffles [@warp_shfl], which allow threads within a warp to read each other’s registers. Figure \[fig:register\_storage\_reduction\] shows the data layout for a theoretical warp of size 8 with 4 columns in registers and a warp reduction using shuffles. Once we factor the panel, we can apply the reflectors to the trailing sub-matrix in a separate kernel that is optimized for performing the core matrix-vector product in the update. In this second kernel, we load both the factored panel $P$ and a panel $M_i$ of the trailing sub-matrix $M$ to registers and apply the reflectors one at a time, updating the trailing panel in registers. Let us take an example of a $32 \times 8$ trailing panel $M_i$.
For each reflector, we compute the matrix-vector product $M_i^Tv$ by flattening the $32 \times 8$ product into a reduction of a 256 vector in shared memory that has been padded to avoid bank conflicts. The reduction can then be serialized until it reaches a size of 32, where a partial reduction to a vector of size 8 can take place in 2 steps. This final vector is the product $M_i^Tv$ which can then be quickly applied to the registers storing $M_i$. This process is repeated for each trailing panel within the same kernel to maximize the use of the reflectors which have been stored in registers. Figure \[fig:qr\_fig\] shows one step of a panel factorization and the application of its reflectors to the trailing submatrix. Since threads are limited to 1024 per block on current architectures, we use the approach developed in [@journals/concurrency/KurzakLDB10] to factorize larger matrices. We first factorize panels up to the thread block limit in a single kernel call. The panels below the first are then factorized by first loading the triangular factor into shared memory and then proceeding with the panel factorization as before, taking the triangular portion into consideration when computing reflectors and updates. To keep occupancy up for the small matrices on devices where the resident block limit could be reached before the thread limit, we assign multiple operations to a single thread block. For a batch of $N$ matrices of dimensions $m \times n$, kernels can be launched using $N/b$ thread blocks of size $m \times b$, where each thread block handles $b$ operations. ![Left: matrix rows allocated to thread registers in a warp. Right: parallel warp reduction using shuffles within registers.[]{data-label="fig:register_storage_reduction"}](reg_svd.pdf){width="0.6\linewidth"} ![One step of the QR factorization where a panel P is factored to produce a triangular factor R and reflectors V which are used to update the trailing sub-matrix M.[]{data-label="fig:qr_fig"}](qr_fig.pdf){width="65.00000%"} Performance ----------- Figures \[fig:batch\_qr\] and \[fig:batch\_qr\_rect\] show the performance of our batched QR for 1000 square and rectangular matrices with a panel width of $16$, tuned for the P100 GPU. We compare against the vendor implementation in CUBLAS as well as the high performance library MAGMA. We can see that our proposed version performs well for rectangular matrices with column size of 32 and starts losing ground against MAGMA for the larger square matrix sizes where the blocked algorithm starts to show its performance benefits. A nested implementation where our kernel can be used to factor relatively large panels in a blocked algorithm will likely show some additional performance improvements for the large square matrices, but we leave that as future work. [0.45]{} ![Comparing batched QR kernels for 1000 matrices of varying size on a P100 GPU in single and double precision.](qr_perf_1.pdf "fig:") [0.45]{} ![Comparing batched QR kernels for 1000 matrices of varying size on a P100 GPU in single and double precision.](qr_perf_2.pdf "fig:") Register Memory One-Sided Jacobi {#sec:registers} ================================ In this section we will discuss the first batched SVD kernel where the matrix data is hosted in registers and analyze the performance of the resulting kernel. Implementation -------------- In this implementation, to avoid repeated global memory accesses, we attempt to fit the matrix in register memory using the same layout as the panel in the QR factorization, i.e. 
one row per thread; however, the number of registers that a thread uses has an impact on occupancy, which can potentially lead to lower performance. In addition, once the register count exceeds the limit set by the GPU’s compute capability, the registers spill into “local” memory which resides in cached slow global memory. Since we store an entire matrix row in the registers of one thread, we use the serial one-sided Jacobi algorithm to compute the SVD where column pairs are processed by the threads one at a time. The bulk of the work lies in the computation of the Gram matrix $G = A_{ij}^T A_{ij}$ (line \[alg:jacobi:gram\] of Algorithm \[alg:jacobi\]) and in the update of the columns (line \[alg:jacobi:rot\]). Since the Gram matrix is symmetric, this boils down to three dot products which are executed as parallel reductions within the warp using warp shuffles. The computation of the $2 \times 2$ rotation matrix as well as the convergence test is performed redundantly in each thread. Finally, the column update is done in parallel by each thread on its own register data. As with the QR kernel, we keep occupancy up for the smaller matrix sizes by assigning multiple SVD operations to a single block of threads with each operation assigned to a warp to avoid unnecessary synchronizations. Performance {#subsec:reg_perf} ----------- We generate batches of 1000 test matrices with varying condition numbers using the `latms` LAPACK routine and calculate performance based on the total number of rotations needed for convergence. Figures \[fig:reg\_svd\_perf\] and \[fig:reg\_svd\_occupancy\] show the performance on a P100 GPU of the register-based batched SVD kernel and the effect increased register usage has on occupancy. Profiling the kernel, we see that the Gram matrix computation takes about 500 cycles, column rotations take about 240 cycles, and the redundantly computed convergence test and rotation matrices dominate at 1900 cycles. The fact that the redundant portion of the computation dominates means that it is preferable to assign as few threads as possible when processing column pairs. Due to the low occupancy for the larger matrix sizes and the register spills to local memory for matrices larger than $30 \times 30$, it is obvious that the register approach will not suffice for larger matrix sizes. This leads us to our next implementation based on the slower but more parallel-friendly shared memory. [.45]{} ![Performance of the batched register memory SVD on a P100 GPU for 1000 matrices of varying size in single and double precision arithmetics.](reg_perf_1.pdf "fig:") [.45]{} ![Performance of the batched register memory SVD on a P100 GPU for 1000 matrices of varying size in single and double precision arithmetics.](reg_perf_2.pdf "fig:") Shared Memory One-Sided Jacobi {#sec:shared} ============================== While the register based SVD performs well for very small matrix sizes, we need a kernel that can handle larger sizes and maintain reasonably high occupancy. This leads us to building a kernel based on shared memory, the next level of the GPU memory hierarchy. This section discusses the implementation details of this kernel and analyzes its performance when compared with the register kernel. Implementation -------------- In this version, the matrix is stored entirely in shared memory, which is limited to at most 48 KB per thread block on current generation GPUs.
Using the same thread assignment as the register based kernel would lead to very poor occupancy due to the high shared memory consumption, where potentially only a few warps will be active in a multiprocessor. Instead, we exploit the inherent parallelism of the one-sided Jacobi to assign a warp to a pair of columns, i.e., there are $n/2$ warps processing an $m \times n$ matrix stored in shared memory. There are a total of $n(n-1)/2$ pairs of columns, so we must generate all pairings in $n-1$ steps, with each step processing $n/2$ pairs in parallel. There are many ways of generating these pairs, including round robin, odd-even, and ring ordering [@parosbj_zhou; @ZHOU19971]. We implement the round robin ordering using shared memory to keep track of the column indexes of the pairs with the first warp in the block responsible for updating the index list after each step. Figure \[fig:round\_robin\] shows this ordering for a matrix with 8 columns. A small host-side sketch of this schedule, and of the column rotations it drives, is given at the end of this section. When the number of matrix rows exceeds the size of the warp, the thread-per-row assignment no longer allows us to use fast warp reductions, which would force us to use even more resources, as the reductions would now have to be done in shared memory. Instead, we assign multiple rows to a thread, serializing a portion of the reduction over those rows until warp reductions can be used. This follows our observation in Section \[subsec:reg\_perf\] that as few threads as possible should be assigned to process column pairs; it frees up valuable resources and increases the overall performance of the reduction. Row padding is used to keep the rows at multiples of the warp size, and column padding is used to keep the number of columns even. Kernels can then be launched using $32\times n/2$ threads to process each matrix. Figures \[fig:shared\_alloc\] and \[fig:shared\_warp\_reduction\] show examples of the thread allocation and reductions for an $8 \times 8$ matrix using a theoretical warp size of 4. ![Distribution of column pairs to warps at each step of a sweep.[]{data-label="fig:round_robin"}](pair_generation.pdf){width="0.6\linewidth"} [0.3]{} ![Shared memory kernel implementation details.](shared_alloc.pdf "fig:"){width="\linewidth"} [0.4]{} ![Shared memory kernel implementation details.](shared_warp_reduction.pdf "fig:"){width="\linewidth"} Performance {#performance-1} ----------- Figures \[fig:shared\_svd\_perf\] and \[fig:shared\_svd\_occupancy\] show the performance of the parallel shared SVD kernel compared to the serial register SVD kernel on a P100 GPU. We can see the improved growth in performance in the shared memory kernel due to the greater occupancy as well as the absence of any local memory transactions. Looking at the double precision occupancy, we notice two dips in occupancy at matrix sizes 22 and 32 as the number of resident blocks becomes limited by the registers/block limits of the device, dropping to 2 and then 1 resident block. Performance increases steadily from there as we increase the number of threads assigned to the operation until we reach a matrix size of $64 \times 64$ where we reach the block limit of 1024 threads. To handle larger sizes, we must use a blocked version of the algorithm or the randomized SVD as we see in Sections \[sec:block\_global\] and \[sec:randomized\], respectively.
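For reference, the following sketch (ours, not part of the GPU code) shows in plain NumPy what the shared memory kernel computes: one-sided Jacobi rotations of Algorithm \[alg:jacobi\] driven by the same round-robin pair schedule. The pairs of each step are processed serially on the host here, whereas the kernel assigns them to concurrent warps; the helper names are ours, and the sketch assumes an even number of columns and full column rank.

```python
import numpy as np

def round_robin_steps(n):
    """Yield the n-1 steps of the round-robin schedule; each step is a list
    of n/2 disjoint column pairs (n is assumed even)."""
    cols = list(range(n))
    for _ in range(n - 1):
        yield [(cols[i], cols[n - 1 - i]) for i in range(n // 2)]
        cols = [cols[0]] + [cols[-1]] + cols[1:-1]   # rotate all but the first index

def one_sided_jacobi(A, tol=1e-12, max_sweeps=30):
    """Host-side one-sided Jacobi returning left singular vectors and values."""
    A = np.array(A, dtype=np.float64)
    m, n = A.shape
    for _ in range(max_sweeps):
        off = 0.0
        for step in round_robin_steps(n):
            for i, j in step:                      # the kernel handles these pairs in parallel
                a = A[:, i] @ A[:, i]              # entries of the 2x2 Gram matrix
                b = A[:, j] @ A[:, j]
                c = A[:, i] @ A[:, j]
                off = max(off, abs(c) / np.sqrt(a * b))
                if abs(c) <= tol * np.sqrt(a * b):
                    continue                       # pair already orthogonal
                tau = (b - a) / (2.0 * c)          # standard Jacobi rotation
                t = (1.0 if tau >= 0 else -1.0) / (abs(tau) + np.sqrt(1.0 + tau * tau))
                cs = 1.0 / np.sqrt(1.0 + t * t)
                sn = cs * t
                Ai, Aj = A[:, i].copy(), A[:, j].copy()
                A[:, i] = cs * Ai - sn * Aj        # rotate the column pair
                A[:, j] = sn * Ai + cs * Aj
        if off < tol:                              # all columns mutually orthogonal
            break
    sigma = np.linalg.norm(A, axis=0)              # singular values are the column norms
    return A / sigma, sigma                        # normalized columns are the left vectors
```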
[0.45]{} ![Performance of the batched shared memory SVD on a P100 GPU for 1000 matrices of varying size in single and double precision arithmetics.](smem_perf_1.pdf "fig:") [0.45]{} ![Performance of the batched shared memory SVD on a P100 GPU for 1000 matrices of varying size in single and double precision arithmetics.](smem_perf_2.pdf "fig:") Global Memory One-Sided Block Jacobi {#sec:block_global} ==================================== When we can no longer store the entire matrix in shared memory, we have to operate on the matrix in the slower global memory. Instead of repeatedly reading and updating the columns one at a time, block algorithms that facilitate cache reuse have been developed [@bevcka1999block1; @bevcka1999block2; @bevcka2015new]. The main benefit of the block Jacobi algorithm is its high degree of parallelism; however, since we implement a batched routine for independent operations, we will use the serial block Jacobi algorithm for individual matrices and rely on the parallelism of the batch processing. The parallel version, where multiple blocks are processed simultaneously, can still be used when the batch size is very small, but we will focus on the serial version. In this section we will discuss the implementation details for two global memory block Jacobi algorithms that differ only in the way block columns are orthogonalized and compare their performance with parallel streamed calls to the cuSOLVER 8 [@nvidia-cusolver] library routines. Gram Matrix Block Jacobi SVD ---------------------------- The block Jacobi algorithm is very similar to the vector Algorithm \[alg:jacobi\], orthogonalizing pairs of blocks columns instead of vectors. The first method of orthogonalizing pairs of block columns is based on the SVD of their Gram matrix. During the $p$-th sweep, each pair of $m \times k$ block columns $A^{(p)}_i$ and $A^{(p)}_j$ is orthogonalized by forming a $2k \times 2k$ Gram matrix $G^{(p)}_{ij} = {[A^{(p)}_i A^{(p)}_j]}^T [A^{(p)}_i A^{(p)}_j] = {A^{(p)}_{ij}}^T A^{(p)}_{ij}$ and generating a block rotation matrix $U^{(p)}_{ij}$, computed as the left singular vectors of $G^{(p)}_{ij}$ (or equivalently its eigenvectors, since it is symmetric positive definite). Updating $A^{p+1}_{ij} = A^p_{ij} U^{(p)}_{ij}$ orthogonalizes the block columns, since we have $${A^{p+1}_{ij}}^T A^{p+1}_{ij} = {U^{(p)}_{ij}}^T {A^p_{ij}}^T A^p_{ij} U^{(p)}_{ij} = {U^{(p)}_{ij}}^T G^{(p)}_{ij} U^{(p)}_{ij} = \Lambda^{p}_{ij},$$ where $\Lambda^{p}_{ij}$ is a diagonal matrix of the singular values of $G^{(p)}_{ij}$. Orthogonalizing all pairs of block columns until the entire matrix is orthogonal will give us the left singular vectors as the normalized columns and the singular values as the corresponding column norms. If the right singular vectors are needed, we can accumulate the action of the block rotation matrices on the identity matrix. For our batched implementation, we use highly optimized batched `syrk` and `gemm` routines from MAGMA to compute $G$ and to apply the block rotations, while the SVD is computed by our shared memory batched kernel. Since different matrices will converge in different numbers of sweeps, we keep track of the convergence of each operation $l$ by computing the norm $e_l$ of the off-diagonal entries of $G$ scaled by its diagonal entries. 
While this term is an inexact approximation of the off-diagonal terms of the full matrix in each sweep, it is still a good indication of convergence and will cost us at most an extra cheap sweep, since the final sweep will not actually perform any rotations within the SVD of $G$. The entire batched operation will then converge when $e = \max e_l < \epsilon$, where $\epsilon$ is our convergence tolerance. This gives us the Gram matrix path of the batched block Jacobi Algorithm \[alg:block\_jacobi\] to compute the SVD of a batch of matrices in global memory. It is worth noting that the computation of the Gram matrix can be optimized by taking advantage of the special structure of $G$, but since the bulk of the computation is in the SVD of G, it will not result in any significant performance gains. Direct Block Jacobi SVD ----------------------- The Gram matrix method is an indirect way of orthogonalizing block columns and may fail to converge if the matrix is very ill-conditioned. Ill-conditioned matrices can be handled by directly orthogonalizing the columns using their SVD. Since the block columns are rectangular, we first compute their QR decomposition followed by the SVD of the triangular factor $R$. Overwriting the block column $A^p_{ij}$ by the orthogonal factor $Q$ and multiplying it by the left singular vectors of $R$ scaled by the singular values will give us the new block column $A^{p+1}_{ij}$: $$A^p_{ij} = Q^p_{ij} R^p_{ij} = \left( Q^p_{ij} U^p_{ij} \Sigma^p_{ij} \right) {V^p_{ij}}^T = A^{p+1}_{ij} {V^p_{ij}}^T.$$ If the right singular vectors are needed, we can accumulate the action of $V^p_{ij}$ on the identity matrix. For our batched implementation, we use the batch QR routine developed in Section \[sec:batch\_qr\] and `gemm` routines from MAGMA to multiply the orthogonal factor by the left singular vectors, while the SVD is computed by our shared memory batched kernel. The same convergence test used in the Gram matrix method can be used on the triangular factor, since the triangular factor should be close to a diagonal matrix if a pair of block columns are orthogonal. This gives us the direct path of the batched block Jacobi Algorithm \[alg:block\_jacobi\] to compute the SVD of a batch of matrices in global memory. \[t\] $e_l = 0$ $G = \text{batchSyrk}(A_{ij})$ $[A_{ij}, G] = \text{batchQR}(A_{ij})$ $e_l = \text{max}(e_l, \text{scaledOffdiag}(G))$ $U = \text{batchSvd}(G)$ $A_{ij} = \text{batchGemm}(A_{ij}, U)$ $e = \text{max}(e_l)$ Performance {#performance-2} ----------- Figures \[fig:block\_jacobi\_profile1\] and \[fig:block\_jacobi\_profile1\] show the profiling of the different computational kernels involved in the batched block algorithms with a block width of $32$, specifically percentages of total execution time for determining convergence and memory operations, matrix multiplications, QR decompositions and the SVD of the Gram matrix. For the Gram matrix approach, the SVD is the most costly phase, even for the larger operations, while the QR and SVD decompositions take almost the same time for the larger matrices in the direct approach. Figure \[fig:block\_jacobi\_perf\] shows the performance of the batched block Jacobi SVD of 200 matrices using both methods and Figure \[fig:osbjvscustream\] compares the performance of our batched SVD routine with a batched routine that uses the cuSOLVER SVD routine using 20 concurrent streams on a P100 GPU. 
Increasing the number of streams for cuSOLVER showed little to no performance benefits, highlighting the performance limitations of routines that are bound by kernel launch overhead. The matrices are generated randomly using the `latms` LAPACK routine with a condition number of $10^7$. The Gram matrix approach fails to converge in single precision for these types of matrices, whereas the direct approach always converges; however, the Gram matrix approach, when it is applicable, performs better for the larger matrices due to the strong performance of the matrix-matrix multiplications. The performance of the block algorithm can be improved by preprocessing the matrix using QR and LQ decompositions to decrease the number of sweeps required for convergence [@Oksa_2006] as well as by adaptively selecting pairs of block columns based on the computed off-diagonal norms of their Gram matrices. These changes are beyond the scope of this paper and will be the focus of future work. [0.45]{} ![Profile of the different phases of the block Jacobi SVD for 200 matrices of varying size on a P100 GPU in double precision. Single precision exhibits similar behavior.[]{data-label="fig:block_jacobi_profile"}](block_perf_1.pdf "fig:") [0.45]{} ![Profile of the different phases of the block Jacobi SVD for 200 matrices of varying size on a P100 GPU in double precision. Single precision exhibits similar behavior.[]{data-label="fig:block_jacobi_profile"}](block_perf_1b.pdf "fig:") [0.45]{} ![Batched block Jacobi performance for 200 matrices of varying size on a P100 GPU in single and double precision arithmetics.](block_perf_2.pdf "fig:") [0.45]{} ![Batched block Jacobi performance for 200 matrices of varying size on a P100 GPU in single and double precision arithmetics.](block_perf_3.pdf "fig:") Randomized SVD {#sec:randomized} ============== As mentioned in Section \[sec:intro\], we are often interested in a rank-$k$ approximation of a matrix $A \approx \tilde{U} \tilde{S} \tilde{V}^T$. We can compute this approximation by first determining the singular value decomposition of the full $m \times n$ matrix $A$ and then truncating the $n-k$ smallest singular values with their corresponding singular vectors; however, when the matrix has low numerical rank $k$, we can obtain the approximation using very fast randomization methods [@halko2011finding]. This section will discuss some details of the algorithm and compare its performance with the full SVD using our one-sided block Jacobi kernel. Implementation -------------- When the singular values of a matrix decay rapidly, we can compute an approximate SVD using a simple two-phase randomization method: 1. The first phase determines an approximate orthogonal basis $Q$ for the columns of $A$ ensuring that $A \approx QQ^T A$. When the numerical rank $k$ of $A$ is low, we can be sure that $Q$ has a small number of columns as well. In [@halko2011finding] we see that by drawing $k+p$ sample vectors $y = Aw$ from random input vectors $w$, we can obtain a reliable approximate basis for $A$ which can then be orthogonalized. This boils down to computing a matrix $Y = A \Omega$, where $\Omega$ is an $n \times (k + p)$ random Gaussian sampling matrix, and then computing the QR decomposition of $Y = Q R_y$, where $Q$ is the desired approximate orthogonal basis. 2. The second phase uses the fact that $A \approx QQ^T A$ to compute a matrix $B = Q^T A$ so that we now have $A \approx QB$. Forming the SVD of $B = U_B S V^T$, we finalize our approximation $A \approx QU_B S V^T = U S V^T$. These two phases are sketched in the code below.
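The sketch is ours and purely illustrative: it mirrors the two phases above with dense NumPy building blocks, whereas the batched GPU routine replaces each step with the corresponding batched kernel (matrix-matrix multiplication, QR, and the small SVD); the oversampling parameter $p$ is an assumption of the sketch.

```python
import numpy as np

def randomized_svd(A, k, p=8, rng=None):
    """Rank-k truncated SVD of A via random range sampling (sketch only)."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    # Phase 1: sample the range of A and orthogonalize the samples.
    Omega = rng.standard_normal((n, k + p))     # Gaussian sampling matrix
    Y = A @ Omega                               # m x (k+p) sample matrix
    Q, _ = np.linalg.qr(Y)                      # approximate orthogonal basis for col(A)
    # Phase 2: project onto the basis and decompose the small matrix B = Q^T A.
    B = Q.T @ A                                 # (k+p) x n
    U_B, S, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_B                                 # back-transform the left singular vectors
    return U[:, :k], S[:k], Vt[:k, :]
```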
For the wide $(k+p) \times n$ matrix $B$, we can first compute a QR decomposition of its transpose, followed by the SVD of the upper triangular factor. Algorithm \[alg:batch\_rsvd\] shows that the core computations for the randomized method are matrix-matrix multiplications, QR decompositions, and the singular value decompositions of small matrices. Using the batched routines from the previous sections, it is straightforward to form the required randomized batched SVD. More robust randomized SVD algorithms would employ randomized subspace iteration methods to obtain a better basis $Q$ for the columns of $A$ and rely on these same core kernels, but will not be further discussed here. \[t\] $[m, n] = size(A)$ $\Omega = \text{Rand}(n, k + p)$ $Y = \text{batchGemm}(A, \Omega)$ $[Q, R_y] = \text{batchQR}(Y)$ $B = \text{batchGemm}(Q^T, A)$ $[Q_B, R_B] = \text{batchQR}(B^T)$ $[U_R, S, V_R] = \text{batchSvd}(R_B^T)$ $U = \text{batchGemm}(Q, U_R)$ $V = \text{batchGemm}(Q_B, V_R)$ Performance {#performance-3} ----------- Figure \[fig:rsvd\_profile\] shows the profiling of the different kernels used in the randomized batched routine for determining the top 64 singular values and vectors of randomly generated low rank matrices using the `latms` LAPACK routine. The miscellaneous portion includes random number generation using the CURAND library’s default random number generator and a Gaussian distribution, batched transpose operations and memory operations. We can see that the performance of all kernels play almost equally important roles in the performance of the randomized routine as the matrix size grows while keeping the computed rank the same. Figure \[fig:rsvd\_perf\] shows the performance of the batched randomized SVD of 200 operations and Figure \[fig:rsvd\_vs\_osbj\] compares the runtimes of the direct block one-sided Jacobi routine with the randomized SVD on a P100 GPU for the same set of matrices, showing that significant time savings can be achieved even for relatively small blocks. ![Profile of the different phases of the batched randomized SVD for 200 matrices of varying size on a P100 GPU in double precision. Single precision exhibits similar behavior.[]{data-label="fig:rsvd_profile"}](rand_perf_1.pdf) [0.45]{} ![Batched randomized SVD performance for 200 matrices of varying size on a P100 GPU in single and double precision for the first 64 singular values and vectors.](rand_perf_2.pdf "fig:") [0.45]{} ![Batched randomized SVD performance for 200 matrices of varying size on a P100 GPU in single and double precision for the first 64 singular values and vectors.](rand_perf_3.pdf "fig:") Application to Hierarchical Matrix Compression {#sec:application} ============================================== As an application of the batched kernels presented, we consider the problem of compressing/recompressing hierarchical matrices. This is a problem of significant importance for building hierarchical matrix algorithms and in fact was our primary motivation for the development of the batched kernels. Hierarchical matrices [@hackbush_2000; @hackbush_h2_2000; @hackbush_1999] have received substantial attention in recent years because of their ability to store and perform algebraic operations in near linear complexity rather than the $O(n^2)$ and $O(n^3)$ that regular dense matrices require. 
The effectiveness of hierarchical matrices comes from the fact that they can approximate a matrix by a (quad)-tree of blocks where many of the blocks in the off-diagonal regions have a rapidly decaying spectrum and can therefore be well-approximated by numerically low rank representations. It is these low rank representations, at different levels of the hierarchical tree, that reduce the memory footprint and operations complexity of the associated matrix algorithms. Hackbusch [@hbook] shows that many of the large dense matrices that appear in scientific computing, such as from the discretization of integral operators, Schur complements of discretized PDE operators, and covariance matrices, can be well approximated by these hierarchical representations. Reviewing and analyzing hierarchical matrix algorithms is beyond the scope of this paper. Here we focus on the narrow task of compressing hierarchical matrices. This compression task may be viewed as a generalization of the well-known compression (i.e., low rank approximation) of large dense matrices to the case of hierarchical matrices. For large dense matrices, one way to perform the compression is to generate a single exact or approximate SVD ($U \Sigma V^T$) and truncate the spectrum $\Sigma$ to the desired tolerance, to produce a truncated or “compressed” representation $(\bar{U} \bar{\Sigma} \bar{V}^T)$. For hierarchical matrices, the equivalent operations involve *batched SVDs* on small blocks, with one batched kernel call per level of the tree in the hierarchical representation. The size of the batch in every such call is the number of nodes at the corresponding level in the tree. Compression algorithms with controllable accuracy are important practically, because it is often the case that the hierarchical matrices generated by analytical methods can be compressed with no significant loss of accuracy. Even more importantly, when performing matrix operations such as addition and multiplication, the apparent ranks of the blocks often grow, and the blocks have to be recompressed regularly during the operations to prevent superlinear growth in memory requirements. [0.45]{} ![The basis tree and matrix tree leaves of a simple $\mathcal{H}^2$-matrix.](basis_tree.pdf "fig:"){width="0.9\linewidth"} [0.45]{} ![The basis tree and matrix tree leaves of a simple $\mathcal{H}^2$-matrix.](hmatrix_fig.pdf "fig:"){width="0.8\linewidth"} $\mathcal{H}^2$-matrix representation ------------------------------------- For our application, we use the memory efficient $\mathcal{H}^2$ variant of hierarchical matrices which exhibits linear complexity in time and space for many of its core operations. In the $\mathcal{H}^2$-matrix format, a hierarchical matrix is actually represented by three trees: - Row and column basis trees $U$ and $V$ that organize the row and column indices of the matrix hierarchically. Each node represents a set of basis vectors for the row and column spaces of the blocks of $A$. Nodes at the leaves of the tree store these vectors explicitly, while inner nodes store only transfer matrices that allow us to implicitly represent basis vectors in terms of their children. A basis tree with this parent-child relationship of the nodes is called a nested basis.
For example, in a binary row basis tree $U$ with transfer matrices $E$, we can explicitly compute the basis vectors for a node $i$ with children $i_1$ and $i_2$ at level $l$ as: $$U^{l-1}_i = \begin{bmatrix} U^l_{i_1} & \\ & U^l_{i_2} \end{bmatrix} \begin{bmatrix} E^l_{i_1} \\ E^l_{i_2} \end{bmatrix}.$$ Figure \[fig:basis\_tree\] shows an example of a binary basis tree. - A matrix tree for the hierarchical blocking of $A$ formed by a dual traversal of the nodes of the two basis trees. A leaf is determined when the block is either small enough and stored as an $m \times m$ dense matrix, or when a low rank approximation of the block meets a specified accuracy tolerance. For the latter case, the node is stored as a $k_l \times k_l$ coupling matrix $S$ at each level $l$ of the tree, where $k_l$ is the rank at level $l$. The block $A_{ts}$ of the matrix, where $t$ is the index set of a node in the row basis tree $U$ and $s$ is the index set of a node in the column basis $V$, is then approximated as $A_{ts} \approx U_t S_{ts} V_s^T$. Figure \[fig:hmatrix\] shows the leaves of the matrix quadtree of a simple hierarchical matrix. For the case of symmetric matrices, the $U$ and $V$ trees are identical. Our numerical results below are from a symmetric covariance matrix. Compression ----------- The compression of a symmetric $\mathcal{H}^2$-matrix $A_H$, represented by the two trees $U$ (with its transfer matrices $E$) and $S$, involves generating a new optimal basis tree $\widetilde{U}$ (with its transfer matrices $\widetilde{E}$) in a truncation phase, and a new $\widetilde{S}$ that expresses the contents of the matrix blocks in this new basis in a projection phase. We present a version of the truncation algorithm that generates a memory efficient basis $[\widetilde{U}, \widetilde{E}]$ from a representation of the matrix in a given $[U, E]$ basis. More sophisticated algebraic compression algorithms that involve the use of $S$ in the truncation phase in order to generate a more efficient basis will be the subject of future work. The truncation phase computes the SVD of the nodes of the basis tree $U$ level by level, with all nodes in a level being processed in parallel to produce the new basis $\widetilde{U}$. We have an explicit representation of the basis vectors at the leaves, so we can compute the SVD of all leaf nodes in parallel with our batched kernels and truncate the singular vectors whose singular values are lower than our relative compression threshold $\epsilon$. Truncating the node to the relative threshold using the SVD will give us an approximation of the leaf such that $\frac{||U - \widetilde{U}||_F}{||U||_F} \le \epsilon$. With the new leaf nodes, we can compute projection matrices in a tree $T$, where each node $i$, $T^d_i = \widetilde{U^d_i}^T U^d_i$ and $d$ is the leaf level. Sweeping up the tree, we process the inner nodes while preserving the nested basis property. 
Using the parent-child relationship of a node $i$ with children $i_1$ and $i_2$ at level $l$, we have: $$U^{l-1}_i = \begin{bmatrix} U^l_{i_1} & \\ & U^l_{i_2} \end{bmatrix}\begin{bmatrix} E^l_{i_1}\\ E^l_{i_2} \end{bmatrix} \approx \begin{bmatrix} \widetilde{U}^l_{i_1} & \\ & \widetilde{U}^l_{i_2} \end{bmatrix}\begin{bmatrix} T^l_{i_1} E^l_{i_1}\\ T^l_{i_2} E^l_{i_2} \end{bmatrix} = \begin{bmatrix} \widetilde{U}^l_{i_1} & \\ & \widetilde{U}^l_{i_2} \end{bmatrix} TE_i.$$ After forming the $TE$ matrices using batched matrix-matrix multiplication, we compute their SVD $TE = QSW^T$ using the batched SVD kernel and truncate as we did for the leaves to form the truncated $\widetilde{TE}$ matrices as: $$\widetilde{TE}_i = \widetilde{Q}_i \left( \widetilde{S}_i \widetilde{W}_i^T \right) = \begin{bmatrix} \widetilde{E}^l_{i_1}\\ \widetilde{E}^l_{i_2} \end{bmatrix} T^{l-1}_i$$ where $\widetilde{E}^l$, the block rows of $\widetilde{Q}$, are the new transfer matrices at level $l$ of our compressed nested basis and $T^{l-1}$ are the projection matrices for level $(l-1)$. The key computations involved in this truncation phase consist then of one batched SVD involving the leaves of the tree, followed by a sequence of batched SVDs, one per level of the tree, involving the transfer matrices and data from the lower levels. The projection phase consists of transforming the coupling matrices in the matrix tree using the generated projection matrices of the truncation phase. For each coupling matrix $S_{ts}$, we compute a new coupling matrix $\widetilde{S}_{ts} = T_t S_{ts} T_s^T$ using batched matrix-matrix multiplications. This phase of the operation consumes much less time than the truncation phase on GPUs, because of substantial efficiencies in executing regular arithmetically intensive operations on them. Results ------- As an illustration of the effectiveness of the algebraic compression procedure, we generate covariance matrices of various sizes for a spatial Gaussian process with $n$ observation points placed on a random perturbation of a regular discretization of the unit square $[0,1] \times [0,1]$ and an isotropic exponential kernel with correlation length of $0.1$. Hierarchical representations of the formally dense $n \times n$ covariance matrices are formed analytically by first clustering the points in a KD-tree using a mean split, giving us the hierarchical index sets of the basis tree. The basis vectors and transfer nodes are generated using Chebyshev interpolation [@borm2007approximating]. The matrix tree is constructed using a dual traversal of the basis tree [@hackbush_2000; @hackbush_2003], and the coupling matrices are generated by evaluating the kernel at the interpolation points. The approximation error of the constructed matrix is then controlled by varying the number of interpolation points and by varying the leaf admissibility condition during the dual tree traversal. An approximation error of $10^{-7}$ has been used in the following tests and a relative truncation error $\epsilon=\frac{||A_H - \widetilde{A}_H||_F}{||A_H||_F} \le 10^{-7}$ has been used to maintain the accuracy of the compressed matrices. Figure \[fig:compression\] shows the memory consumption before and after compression of hierarchical covariance matrices with leaf size $64$ and initial rank $64$ (corresponding to an $8 \times 8$ Chebyshev grid).
The dense part remains untouched, while the low rank part of the representation sees a substantial decrease in memory consumption after compression with minimal loss of accuracy. Figure \[fig:compression\_time\] shows the expected asymptotic linear growth in time of the compression algorithm and shows the effect of using the randomized SVD with $32$ samples instead of the full SVD as computed by the shared memory kernel. Figure \[fig:compression\_time2\] shows another example where the admissibility condition is weakened to generate a coarser matrix tree with an increased rank of 121 (corresponding to an $11 \times 11$ Chebyshev grid) and the randomized SVD with $64$ samples also reduces compression time when compared to the full SVD using the direct block Jacobi kernels. [0.45]{} ![Compression results for sample covariance matrices generated from 2D spatial statistics on a P100 GPU in single and double precision, using a relative Frobenius norm threshold of $10^{-7}$ and initial rank 64.](compress_perf_1.pdf "fig:") [0.45]{} ![Compression results for sample covariance matrices generated from 2D spatial statistics on a P100 GPU in single and double precision, using a relative Frobenius norm threshold of $10^{-7}$ and initial rank 64.](compress_perf_2.pdf "fig:") ![Compression time for a coarser matrix tree with initial rank 121 comparing the randomized SVD with 64 samples and the full SVD.[]{data-label="fig:compression_time2"}](compress_perf_3.pdf) Conclusions and Future Work {#sec:conclusion} =========================== In this paper, we described the implementation of efficient batched kernels for the QR decomposition and randomized singular value decomposition of low rank matrices hosted on the GPU. Our batched QR kernel provides significant performance improvements for small matrices over existing state of the art libraries, and our batched SVD routines are the first of their kind on the GPU, with performance exceeding 800/400 GFLOP/s on a batch of 1000 matrices of size $512 \times 512$ in single/double precision. We illustrated the power of these kernels on a problem involving the algebraic compression of hierarchical matrices stored entirely in GPU memory, and demonstrated a high-performance compression algorithm yielding significant memory savings on practical problems. In the future, we plan to investigate alternatives to the one-sided Jacobi algorithm for the SVD of the small blocks in the randomized algorithm and improve the performance of the blocked algorithms using preconditioning and adaptive block column pair selection. We also plan to develop a suite of hierarchical matrix operations suited for execution on modern GPU and manycore architectures. Acknowledgments {#acknowledgments .unnumbered} =============== We thank the NVIDIA Corporation for providing access to the P100 GPU used in this work.
1
--- abstract: 'Multiplicity fluctuations in limited segments of momentum space are calculated for a classical pion gas within the statistical model. Results for the grand canonical, canonical, and micro-canonical ensemble are obtained, compared and discussed. We demonstrate that even in the large volume limit correlations between macroscopic subsystems due to energy and momentum conservation persist. Based on the micro-canonical formulation we make qualitative predictions for the rapidity and transverse momentum dependence of multiplicity fluctuations. The resulting effects are of similar magnitude as the predicted enhancement due to a phase transition from a quark-gluon plasma to a hadron gas phase, or due to the critical point of strongly interacting matter, and qualitatively agree with recently published preliminary multiplicity fluctuation data of the NA49 SPS experiment.' author: - Michael Hauer title: | Multiplicity Fluctuations in Limited Segments of Momentum Space\ in Statistical Models --- Introduction {#Intro} ============ The statistical model has been, for a long time, successfully applied to fit experimental data on mean hadron multiplicities in heavy ion collision experiments over a wide range of beam energies and system sizes. For recent reviews see [@FOCley; @FOBeca; @FOPBM; @FORafe]. So naturally the question arises whether the statistical model is able to describe event-by-event fluctuations of these observables as well. And indeed, a first comparison suggests that this might be possible for the sample of most central events. Global conservation laws, imposed on a statistical system, lead, even in the large volume limit, to suppressed fluctuations. The multiplicity distributions of charged hadrons recently reported [@NA49_fluc] by the NA49 SPS experiment are systematically narrower than a Poissonian reference distribution. This could be interpreted [@MCEvsData] as effects due to energy and charge conservation in a relativistic hadronic gas. Multiplicity fluctuations are usually quantified by the ratio of the variance of a multiplicity distribution to its mean value, the so-called scaled variance. In statistical models there is a qualitative difference in the properties of mean value and scaled variance. In the case of the mean multiplicity results obtained within the grand canonical ensemble (GCE), canonical ensemble (CE), and micro-canonical ensemble (MCE) approach each other in the large volume limit. One refers here to as the thermodynamic equivalence of these ensembles. It was recently found [@CEfluc1] that corresponding results for the scaled variance are different in different ensembles, and thus this observable is sensitive to conservation laws obeyed by a statistical system. The growing interest in the experimental and theoretical study of fluctuations in strong interactions (see e.g., reviews [@reviewfluc]) is motivated by expectations of anomalies in the vicinity of the onset of deconfinement [@ood] and in the case when the expanding system goes through the transition line between a quark-gluon plasma and a hadron gas phase [@phasetrans]. In particular, a critical point of strongly interacting matter may be accompanied by a characteristic power-law pattern in fluctuations [@critpoint]. A non-monotonic dependence of event-by-event fluctuations on system size and/or center of mass energy in heavy ion collisions would therefore give valuable insight into the phase diagram of strongly interacting matter. 
This holds provided the signal survives the subsequent evolution and hadronization of the system (see also [@recomb]). Therefore, in order to assess the discriminating power of proposed measures, for a recent review see [@reviewfluc2], one should first study properties of equilibrated sources [@MCEvsData; @res; @VolDep; @vdw] and quantify ‘baseline‘ (or thermal/statistical) fluctuations. Apart from being an important tool in an effort to study a possible critical behavior, the study of fluctuations within the statistical model also constitutes a further test of its validity. In this paper we make detailed predictions for the momentum space dependence of multiplicity fluctuations. We show that energy and momentum conservation lead to a non-trivial dependence of the scaled variance on the location and magnitude of the observed fraction of momentum space. These predictions can be tested against existing and future data from the heavy ion collision experiments at the CERN SPS and BNL RHIC facilities. The paper is organized as follows: In section \[model\] we briefly introduce our model. In section \[GCECE\] we consider multiplicity distributions in a limited region of momentum space in GCE and CE. For the MCE we follow, in section \[MCE\], the procedure of Ref.[@clt] and show how to calculate the width of the corresponding distributions in the large volume limit. We revisit the so-called ‘acceptance scaling‘ previously suggested as an approximate implementation of experimental acceptance in section \[Results\]. Technical details of the calculations are presented in the Appendix. Concluding remarks and a summary in sections \[Remarks\] and \[Summary\] close the paper. The Model {#model} ========= The ideal Boltzmann $\pi^+$ $\pi^-$ $\pi^0$ gas serves as the standard example throughout this paper, while the main subject of investigation is the multiplicity distribution $P(N_{\Omega})$ of particles with momenta inside a certain segment $\Omega$ of momentum space. Calculations are done for the three standard ensembles GCE, CE, and MCE. For the sake of argument we will assume that we only want to measure $P(N_{\Omega}^-)$, i.e. the probability distribution of negatively charged pions in a limited segment $\Omega$ of momentum space. Hence $\pi^-$ with momenta inside $\Omega$ are observed, while $\pi^-$ inside the complementary segment $\bar \Omega$ are not observed. $\pi^+$ and $\pi^0$ are never detected. In GCE and CE the presence of $\pi^0$ as a degree of freedom is of no relevance, while in MCE it constitutes a heat bath for the remaining system. For consistency we use the same system throughout this discussion. In order to keep the model simple, we assume a static homogeneous fireball. Our considerations therefore exclude collective motion, i.e. flow, and the resulting momentum spectra are purely thermal. We also omit resonance decay contributions in this work. The spectra presented in Fig. \[spectra\] are normalized to the total $\pi^-$ yield in GCE and CE. Thus they are the same in both ensembles. In MCE one expects in the large volume limit only small deviations from Boltzmann spectra. None of the forthcoming arguments are affected by this. In the following we will use the transverse momentum and rapidity spectra presented in Fig. \[spectra\] to construct bins $\Omega_i = \Delta {p_T}_i = \left[p_{T_i},p_{T_{i+1}} \right]$ (left), or $\Omega_i = \Delta y_i = \left[y_i,y_{i+1} \right]$ (right), as indicated by the drop-lines.
In section \[GCECE\] we calculate the multiplicity distributions $P(N_{\Omega})$ for arbitrary segments $\Omega$ for the ideal Boltzmann GCE and CE. To characterize the distribution one can calculate its (raw) moments $\langle N_{\Omega}^n \rangle$ from: $$\label{Moments} \langle N_{\Omega}^n \rangle ~=~ \sum \limits_{N_{\Omega}=0}^{\infty} ~N_{\Omega}^n~ P \left(N_{\Omega} \right)~.$$ A convenient measure for the width of a distribution is the scaled variance: $$\label{ScaledVar} \omega_{\Omega} ~\equiv~ \frac{\langle N_{\Omega}^2 \rangle - \langle N_{\Omega} \rangle^2}{\langle N_{\Omega} \rangle} ~.$$ In order to remove simple scaling effects, the bin sizes or segments are chosen such that each bin or segment contains the same fraction $q= \langle N_{\Omega} \rangle ~/~ \langle N_{4\pi} \rangle$ of the total yield (compare Eq.(\[ScaledVar\])). Here $\langle N_{\Omega} \rangle$ denotes the average particle number in the momentum space segment $\Omega$, and $\langle N_{4\pi} \rangle$ denotes the average total ($4\pi$ integrated) multiplicity. The effect of finite acceptance can approximately be taken into account by [@CEfluc1]: $$\label{accscaling} \omega_{q} ~=~ 1 + q \left(\omega_{4\pi}-1 \right)~,$$ where $\omega_{4\pi}$ assumes the ideal situation when all particles are detected, while $\omega_{q}$ assumes that particles are detected with probability $q$ regardless of their momentum. Hence Eq.(\[accscaling\]) holds when particles are assumed to be uncorrelated in momentum space. In the limit $q\rightarrow 0$ one observes a random distribution with $\omega_q \rightarrow 1$, i.e. a Poissonian, while when $q\rightarrow 1$ one sees the real distribution with width $\omega_q \rightarrow \omega_{4\pi}$. In this work we explicitly take correlations due to globally conserved charge (CE) and energy-momentum (MCE) into account and compare the results to Eq.(\[accscaling\]). Grand Canonical and Canonical Ensemble {#GCECE} ====================================== Grand Canonical Ensemble ------------------------ In the GCE both heat and charge bath are assumed to be infinite, and thus neither charge, energy nor momentum is conserved exactly. Temperature $T$ and charge chemical potential $\mu$ regulate average energy and charge density in a system of volume $V$. Usually it is said that charge, energy and momentum are conserved in the average sense and fluctuations about an equilibrium value are allowed. Apart from Bose and Fermi effects [@Qstats] particles are therefore uncorrelated in momentum space. However, this example serves as an illustration for the following CE and MCE calculations. We start by decomposing the Boltzmann single particle partition function $z^-\left(\phi_{N_{\Omega}}\right)$ of $\pi^-$ into two parts, $$\begin{aligned} z^-\left(\phi_{N_{\Omega}}\right)= z^-_{\Omega} \left(\phi_{N_{\Omega}} \right) + z^-_{\bar \Omega} &=& \frac{gV}{\left( 2\pi\right)^3} \int \limits_{\Omega} d^3 p ~ e^{-\frac{\varepsilon+\mu}{T}}~ e^{i \phi_{N_{\Omega}}} + \frac{gV}{\left( 2\pi\right)^3} \int \limits_{\bar \Omega} d^3 p ~ e^{-\frac{\varepsilon+\mu}{T}} , \end{aligned}$$ where the single particle energy is $\varepsilon = \sqrt{p^2+m^2}$, and $m$ and $g$ are the mass and degeneracy factor of $\pi^-$, respectively. Only for momentum states inside the momentum space region $\Omega$ do we additionally introduce a Wick-rotated fugacity $\exp \left(i \phi_{N_{\Omega}} \right)$.
For the positive and neutral pion (which we do not want to detect in our example) we write: $$\begin{aligned} z^+ ~=~ \frac{gV}{\left( 2\pi\right)^3} \int d^3 p ~ e^{-\frac{\varepsilon-\mu}{T}}~, \qquad \qquad \textrm{and} \qquad \qquad z^0 ~=~ \frac{gV}{\left( 2\pi\right)^3} \int d^3 p ~ e^{-\frac{\varepsilon}{T}}~.\end{aligned}$$ The value of the single particle partition function, for instance of the neutral pion, is given by: $$\label{aveN} z^0=\langle N^0 \rangle = \frac{gV}{2\pi^2} m^2 T K_2 \left( \frac{m}{T}\right).$$ For the sake of simplicity we assume equal masses for all pions. To obtain the GCE multiplicity distribution for $N_{\Omega}$ in a momentum space segment $\Omega$ we use the Fourier integral over the generalized GCE partition function $ \mathcal{Z} \left( \phi_{N_{\Omega}}\right)=\exp \left[ z^-_{\Omega} \left( \phi_{N_{\Omega}} \right) + z^-_{\bar{ \Omega}} + z^+ + z^0 \right] $, normalized by the GCE partition function: $$\begin{aligned} \label{GCEPDF} P_{gce} \left(N_{\Omega} \right) ~\equiv~ Z^{-1}_{gce} \times \int \limits_{-\pi}^{\pi} \frac{d\phi_{N_{\Omega}}}{2\pi} ~ e^{-iN_{\Omega} \phi_{N_{\Omega}} } ~ \mathcal{Z} \left( \phi_{N_{\Omega}}\right) ~=~ \frac{\left(z^-_{\Omega}\right)^{N_{\Omega}}}{N_{\Omega}!} \exp \left[- z^-_{\Omega} \right]~,\end{aligned}$$ where the system partition function is given by $ Z_{gce} \equiv \mathcal{Z} \left( \phi_{N_{\Omega}} = 0\right) $, and $z^-_{\Omega} = z^-_{\Omega} \left(\phi_{N_{\Omega}}=0 \right)$. Independent of the shape or size of $\Omega$ we find a Poissonian for the multiplicity distribution Eq.(\[GCEPDF\]). Thus, using Eq.(\[ScaledVar\]), one finds for the scaled variance $\omega^{gce}_{\Omega} = 1$, since $\langle N_{\Omega} \rangle = z^-_{\Omega}$, and $\langle N^2_{\Omega} \rangle = \langle N_{\Omega} \rangle^2 + \langle N_{\Omega} \rangle$. For Bose and Fermi statistics one does not expect a Poisson distribution and (in particular when the chemical potential is large) deviations from a Poissonian can be large. Thus one also expects deviations from Eq.(\[accscaling\]) when considering only finite acceptance. Canonical Ensemble ------------------ In the CE the heat bath is still assumed to be infinite, while we remove the charge bath and drop the chemical potential. Thus, we introduce a further Wick-rotated fugacity $\mu/T \rightarrow i \phi_Q $ into the single particle partition functions to account for global (however not in the momentum space segment $\Omega$) conservation of electric charge $Q$. Particles in $\Omega$ are therefore correlated, due to the condition of fixed net-charge, with a finite charge bath composed of $\pi^+$ and unobserved $\pi^-$.
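As an aside, the Bessel-function form of the single particle partition function, Eq.(\[aveN\]), is easily verified numerically. A minimal sketch follows; the values $m=140$ MeV, $T=160$ MeV and $g=V=1$ are illustrative assumptions, not values fixed by the text.

```python
# A small numerical check of Eq. (aveN): the momentum integral for the
# single particle partition function reduces to the Bessel form
# z^0 = g V / (2 pi^2) * m^2 * T * K_2(m/T).
# The values m = 140 MeV, T = 160 MeV, g = V = 1 are illustrative assumptions.
import numpy as np
from scipy import integrate, special

m, T, g, V = 140.0, 160.0, 1.0, 1.0

# direct integral: gV/(2 pi)^3 * int d^3p exp(-eps/T) = gV/(2 pi^2) * int dp p^2 exp(-eps/T)
z_direct = g*V/(2*np.pi**2) * integrate.quad(
    lambda p: p**2 * np.exp(-np.sqrt(p**2 + m**2)/T), 0, np.inf)[0]

# closed Bessel form
z_bessel = g*V/(2*np.pi**2) * m**2 * T * special.kn(2, m/T)

print(z_direct, z_bessel)   # the two numbers agree to numerical precision
```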
We again split the single particle partition function for $\pi^-$ into an observed, $z^-_{\Omega}\left(\phi_{N_{\Omega}},\phi_Q\right)$, and an unobserved part, $z^-_{\bar \Omega} \left(\phi_Q\right)$, $$\begin{aligned} z^-\left(\phi_{N_{\Omega}},\phi_Q\right) = z^-_{\Omega} \left(\phi_{N_{\Omega}},\phi_Q\right) + z^-_{\bar \Omega} \left(\phi_Q\right) = \frac{gV}{\left( 2\pi\right)^3} \int \limits_{\Omega} d^3 p ~ e^{-\frac{\varepsilon}{T}} e^{-i \phi_Q} e^{i \phi_{N_{\Omega}}} + \frac{gV}{\left( 2\pi\right)^3} \int \limits_{\bar \Omega} d^3 p ~ e^{-\frac{\varepsilon}{T}} e^{-i \phi_Q},\end{aligned}$$ while we do not want to measure $\pi^+$ and $\pi^0$, and thus: $$\begin{aligned} z^+ \left(\phi_Q\right) ~=~ \frac{gV}{\left( 2\pi\right)^3} \int d^3 p ~ e^{-\frac{\varepsilon}{T}} e^{+ i \phi_Q}~, \qquad \qquad \textrm{and} \qquad \qquad z^0 ~=~ \frac{gV}{\left( 2\pi\right)^3} \int d^3 p ~ e^{-\frac{\varepsilon}{T}} .\end{aligned}$$ The normalization of the CE multiplicity distribution is given by the CE system partition function $Z_{ce}$, i.e. the number of all micro states with fixed charge Q, $Z^{ce} = I_Q\left(2z \right) \exp(z^0)$, where $I_Q$ is the modified Bessel function. The multiplicity distribution of $N_{\Omega}$ in a momentum space segment $\Omega$, while charge $Q$ is globally conserved, can be obtained from Fourier integration of the generalized GCE partition function $\mathcal{Z} \left( \phi_{N_{\Omega}}, \phi_Q \right) = \exp \left[ z^-_{\Omega} \left( \phi_{N_{\Omega}}, \phi_Q \right)~+~ z^-_{\bar{ \Omega}} \left( \phi_Q \right) + z^+ \left( \phi_Q \right) + z^0\right] $, over both angles $\phi_Q$ and $\phi_{N_{\Omega}}$: $$\begin{aligned} P_{ce} \left(N_{\Omega}\right) &\equiv& Z_{ce}^{-1} \times \int \limits_{-\pi}^{\pi} \frac{d\phi_{N_{\Omega}}}{2\pi} \int \limits_{-\pi}^{\pi} \frac{d\phi_{Q}}{2\pi} ~ e^{-iN_{\Omega} \phi_{N_{\Omega}} } ~ e^{-i Q \phi_{ Q} } ~\mathcal{Z} \left( \phi_{N_{\Omega}}, \phi_Q \right) \\ &=& I_Q^{-1}\left(2z \right) \times \frac{\left(z^-_{\Omega}\right)^{N_{\Omega}}}{N_{\Omega}!} ~ \sum \limits_{a=0}^{\infty} ~ \frac{\left(z^-_{\bar \Omega}\right)^{a}}{a!} ~ \frac{z^{Q+N_{\Omega}+a}}{\left(Q+N_{\Omega}+a\right)!}~,\end{aligned}$$ where in CE $z^-_{\Omega} = z^-_{\Omega} \left(\phi_{N_{\Omega}}=\phi_Q=0 \right)$, $z^-_{\bar \Omega} = z^-_{\bar \Omega} \left(\phi_Q=0 \right)$, and $z= z^+\left(\phi_Q=0\right)=z^0$. For the respective first two moments one finds from Eq.(\[Moments\]): $$\begin{aligned} \langle N_{\Omega} \rangle = z^-_{\Omega } ~ \frac{I_{Q+1} \left(2z \right)}{I_Q \left(2z \right)}~, \qquad \textrm{and} \qquad \langle N^2_{\Omega} \rangle = \left( z^-_{\Omega }\right)^2 ~\frac{I_{Q+2} \left(2z \right)}{I_Q \left(2z \right)} + z^-_{\Omega }~ \frac{I_{Q+1} \left(2z\right)}{I_Q \left(2z \right)}~. \end{aligned}$$ Thus, we obtain the well known canonical suppression of yields [@CEyield; @CEfits; @CEtransport; @RateEqYield] and fluctuations [@CEfluc1; @RateEqFluc]. The result, however, is completely independent of the position of the segment $\Omega$. 
And therefore the scaled variance, Eq.(\[ScaledVar\]), takes the form: $$\begin{aligned} \omega^{ce}_{\Omega} = 1 + z^-_{\Omega} ~\left[ \frac{I_{Q+2} \left(2z\right)}{I_{Q+1} \left(2z \right)} -\frac{I_{Q+1} \left(2z\right)}{I_Q \left(2z \right)} \right]~, \qquad \textrm{and} \qquad \omega^{ce}_{4\pi} = 1 + z ~ \left[ \frac{I_{Q+2} \left(2z\right)}{I_{Q+1} \left(2z \right)} -\frac{I_{Q+1} \left(2z\right)}{I_Q \left(2z \right)} \right]~,\end{aligned}$$ where $\omega_{\Omega}$ is the width of $P_{ce}(N_{\Omega})$, i.e. the multiplicity distribution of $\pi^-$ with momenta inside $\Omega$, while $\omega_{4\pi}$ is the width of the corresponding distribution when $\Omega$ is extended to the full momentum space. It can immediately be seen that this formula is consistent with acceptance scaling, Eq.(\[accscaling\]), $\omega_{\Omega} ~=~ 1 + q \left(\omega_{4\pi} -1 \right)$, if $q \equiv z^-_{\Omega}/z$. Generally we find $\omega^{ce}_{4\pi} < \omega^{ce}_{\Omega} < \omega^{gce}=1$. In the limit of $z^-_{\Omega}/z \rightarrow 0$ we approach the Poisson limit of a ‘random‘ distribution with $\omega = 1$, i.e. the observed part of the system is embedded into a much larger charge bath and the GCE is a valid description. Micro-Canonical Ensemble {#MCE} ======================== For the MCE an analytical solution seems to be out of reach presently, so we use instead the asymptotic solution, applicable to large systems, derived in [@clt]. In order to avoid unnecessary repetition of calculations, we will only give a general outline here, and refer the reader for a detailed discussion to Ref.[@clt]. It should be mentioned that this method would also be applicable to systems of finite spatial extension, provided the average particle number in a given momentum space bin exceeds roughly $\langle N_{\Omega}\rangle \gtrsim 5$. In this work we confine ourselves to large systems and try to assess the general trends. The basic idea is to define the MCE multiplicity distribution in terms of a joint GCE distribution of multiplicity, charge, energy, momentum, etc. The MCE multiplicity distribution is then given by the (normalized) conditional probability in the GCE to find a number $N_{\Omega}$ of particles in a segment $\Omega$ of momentum space, while electric charge $Q$, energy $E$, and three momentum $\vec P$ are fixed. Therefore we will keep temperature and chemical potentials as parameters to describe our system. Effective temperature and effective chemical potential, i.e. Lagrange multipliers, can be determined by demanding that the GCE partition function is maximized for a certain equilibrium state $(Q,E,\vec P)$. This requirement is entirely consistent [@clt] with the usual textbook definitions of $T$ and $\mu$ in MCE and CE through differentiation of entropy and Helmholtz free energy with respect to conserved quantities. In principle we would have to treat all conservation laws on equal footing [@MCEmagic], and thus introduce Lagrange multipliers for momentum conservation as well. However here we are only interested in a static source, thus $\vec P = \vec 0$, and the relevant parameters are equal to zero. In the large volume limit energy, charge, and particle density in the MCE will correspond to GCE values. This is required by the thermodynamic equivalence of ensembles for mean quantities. MCE and CE partition functions are generally obtained from their GCE counterpart by multiplication with delta-functions, which pick out a set of micro states consistent with a particular conservation law.
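The canonical expressions above can also be evaluated directly. The following minimal sketch (with purely illustrative values of $z$, which are not taken from the text) shows the approach of $\omega^{ce}_{4\pi}$ to the large volume limit $0.5$ for a neutral system, and the exact acceptance scaling with $q \equiv z^-_{\Omega}/z$.

```python
# Illustrative evaluation of the CE expressions above (toy values of z only):
# the approach of omega_ce_4pi to the large volume limit 0.5 for a neutral
# (Q = 0) system, and the exact acceptance scaling
# omega_Omega = 1 + q * (omega_4pi - 1) with q = z_Omega / z.
from scipy.special import ive   # exponentially scaled modified Bessel I_nu

def omega_ce(z_obs, z, Q=0):
    # ratios of I_nu are computed with the scaled function ive,
    # so that large arguments do not overflow
    bracket = ive(Q + 2, 2*z)/ive(Q + 1, 2*z) - ive(Q + 1, 2*z)/ive(Q, 2*z)
    return 1.0 + z_obs * bracket

q = 1.0/9.0
for z in (1.0, 10.0, 100.0, 1000.0):      # z grows linearly with the volume
    w_4pi = omega_ce(z, z)                # full momentum space
    w_q   = omega_ce(q*z, z)              # segment containing a fraction q
    print(f"z = {z:6.0f}:  omega_4pi = {w_4pi:.4f}  omega_q = {w_q:.4f}  "
          f"1+q(omega_4pi-1) = {1 + q*(w_4pi - 1):.4f}")
```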
Here it will be of considerable advantage to use Fourier representations of delta-functions, similar to the treatment in Section \[GCECE\]. This method could be considered to be a Fourier spectral analysis of the generalized GCE partition function [@clt]. The normalized conditional probability distribution of multiplicity $N_{\Omega}$ can be defined by the ratio of the values of two partition functions: $$\label{MCEprob} P_{mce}(N_{\Omega})~\equiv~ \frac{\textrm{number of all states with $N_{\Omega}$, $Q$, $E$, and $\vec P = \vec 0$}}{\textrm{number of all states with $Q$, $E$, and $\vec P= \vec 0$} }~.$$ The real MCE partition function and our modified version are connected as $Z(V,N_{\Omega},Q,E,\vec P) \equiv \mathcal{Z}^{N_{\Omega},Q,E,\vec P}(V,T,\mu) e^{+E/T} e^{-Q\mu/T}$. In either case the normalization in Eq.(\[MCEprob\]) is given by the partition functions with fixed values of $Q,E,\vec P$, but arbitrary particle number $N_{\Omega}$, hence $Z(V,Q,E,\vec P) \equiv \sum_{N_{\Omega}=0}^{\infty} Z(V,N_{\Omega},Q,E,\vec P)$, or $\mathcal{Z}^{Q,E,\vec P}(V,T,\mu) \equiv \sum_{N_{\Omega}=0}^{\infty} \mathcal{Z}^{N_{\Omega},Q,E,\vec P}(V,T,\mu)$. However, when taking the ratio (\[MCEprob\]), the auxiliary parameters, chemical potential and temperature, drop out: $$\label{MCEprob2} P_{mce}(N_{\Omega})~ \equiv~ \frac{Z(V,N_{\Omega},Q,E,\vec P)}{Z(V,Q,E,\vec P)} ~=~ \frac{\mathcal{Z}^{N_{\Omega},Q,E,\vec P}(V,T,\mu)}{\mathcal{Z}^{Q,E,\vec P}(V,T,\mu)}~.$$ The main difference between the two versions of partition functions is that for $Z(V,N_{\Omega},Q,E,\vec P)$ one is confronted with a heavily oscillating (or even irregular) integrand, while for $\mathcal{Z}^{N_{\Omega},Q,E,\vec P}(V,T,\mu)$ the integrand becomes (for correctly chosen $T$, $\mu$) very smooth. Thus, introduction of $T$ and $\mu$ allows one to derive (and use) the asymptotic solution of Ref.[@clt]. We have a total number of 6 conserved ‘charges‘, and hence we need to solve the 6-dimensional Fourier integral for the numerator in Eq.(\[MCEprob2\])[^1]: $$\begin{aligned} \label{MCEInt} \mathcal{Z}^{N_{\Omega},Q,E, \vec P}&=& \int \limits_{-\pi}^{\pi} \frac{d \phi_{N_{\Omega}}}{2\pi} \int \limits_{-\pi}^{\pi} \frac{d \phi_Q}{2\pi} \int \limits_{-\infty}^{\infty} \frac{d \phi_E}{2\pi} \int \limits_{-\infty}^{\infty} \frac{d \phi_{P_x}}{2\pi} \int \limits_{-\infty}^{\infty} \frac{d \phi_{P_y}}{2\pi} \int \limits_{-\infty}^{\infty} \frac{d \phi_{P_z}}{2\pi} \nonumber \\ &\times& e^{-iN_{\Omega} \phi_{N_{\Omega}}}~ e^{-iQ\phi_Q} ~e^{- iE \phi_E} ~e^{-iP_x \phi_{P_x}}~ e^{-iP_y \phi_{P_y}}~ e^{-iP_z \phi_{P_z}} \nonumber \\ &\times& \exp \left[ V \sum_k \psi_k \left( \phi_{N_{\Omega}},\phi_Q, \phi_E, \phi_{P_x},\phi_{P_y},\phi_{P_z} \right) \right].\end{aligned}$$ The summation in (\[MCEInt\]) should be taken over the single particle partitions $V \psi_k=z_k$ of all considered particle species $k$. The Wick-rotated fugacities $\phi_{Q}$, etc. are related to the individual conservation laws. The distinction between the Kronecker delta-function (limits of integration $\left[-\pi,\pi \right]$) for discrete quantities and the Dirac delta-function (limits of integration $\left[-\infty,\infty \right]$) for continuous quantities is important here, however for deriving an asymptotic solution it will not be. To simplify (\[MCEInt\]) we change to shorthand notation for $\phi_j = (\phi_{N_{\Omega}},\phi_Q, \phi_E, \vec \phi_P)$ and the conserved ‘charge‘ vector $ Q^j = (N_{\Omega},Q,E,\vec P)$. We again split the single particle partition functions into two parts.
The first part counts the number of momentum states observable to our detector, while the second part counts momentum states invisible to our detector: $$\begin{aligned} \psi_{k} \left( \phi_j\right) &=& \frac{g_k}{\left( 2\pi\right)^3} \int \limits_{ \Omega} d^3 p ~ e^{-\frac{\varepsilon_k - q_k \mu}{T}} ~e^{i q_{k, \Omega}^j \phi_j}~ +~ \frac{g_k}{\left( 2\pi\right)^3} \int \limits_{ \bar \Omega} d^3 p~ e^{-\frac{\varepsilon_k-q_k \mu}{T}} ~ e^{i q_{k,\bar \Omega}^j \phi_j}~.\end{aligned}$$ For the ‘charge‘ vector of all measured particle species $k$ we write $q^j_{k,\Omega} = (1,q_k,\varepsilon_k,\vec p_k)$ for momenta inside $\Omega$, and $q^j_{k, \bar \Omega} = (0,q_k,\varepsilon_k,\vec p_k)$ for momenta outside of $\Omega$. For all unobserved particle species we write $q^j_{k,\Omega}=q^j_{k, \bar \Omega} =(0,q_k,\varepsilon_k,\vec p_k) $. Here $q_k$ is the electrical charge of particle species $k$, and $\varepsilon_k$ and $\vec p_k$ are its energy and momentum vector. In Ref.[@clt], where only multiplicity distributions in the full momentum space were considered, the general ‘charge‘ vector took the form $q^j_{k,4\pi} = (n_k,q_k,\varepsilon_k,\vec p_k)$, where $n_k$ is the multiplicity of this particle. For stable particles $n_k=1$ in case they are observed, and $n_k=0$ if they are not measured, while for unstable particles $n_k$ could also denote the number of measurable decay products. For large system volume the main contribution to the integral (\[MCEInt\]) comes from a small region around the origin [@VolDep]. Thus we proceed by Taylor expansion of the integrand of (\[MCEInt\]) around $\phi_j=\vec 0$. In this context $ \Psi \left( \phi_j\right) = \sum_k \psi_k \left( \phi_j\right) $ would be called the cumulant generating function (CGF). Cumulants (expansion terms) are defined by differentiation of the CGF at the origin: $$\label{kappa_n} \kappa_n^{j_1,j_2,\dots,j_n } ~\equiv~ \left(-i\right)^n\frac{\partial^n \Psi \left( \phi_j \right) }{\partial \phi_{j_1} \partial \phi_{j_2} \dots \partial \phi_{j_n} } \Bigg|_{\phi_j = \vec 0}~.$$ Generally, cumulants are tensors of rank $n$ with dimension defined by the number of conserved quantities. Here $\kappa_1$ is a 6 component vector, while $\kappa_2$ is a $6 \times 6$ matrix, etc. The parts of the integrand related to discrete quantities, i.e. $N_{\Omega}$ and $Q$, are now no longer $2\pi$ periodic (while in Eq.(\[MCEInt\]) they are), but superpositions of oscillating and decaying parts. Thus we extend the limits of integration to $\pm \infty$, which introduces a negligible error. Eq.(\[MCEInt\]) therefore simplifies to: $$\begin{aligned} \label{MCEIntApprox} \mathcal{Z}^{Q^j} &\simeq& \left[ \prod_{j=1}^{6} \int \limits_{-\infty}^{\infty} \frac{d\phi_j}{\left( 2\pi \right)} \right] ~\exp \Big[ -iQ^j\phi_j ~+~V \sum_{n=0}^{\infty} \frac{i^n}{n!} \; \kappa_n^{j_1,j_2,\dots,j_n } \; \phi_{j_1} \phi_{j_2} \dots \phi_{j_n} \Big] ~.\end{aligned}$$ Summation over repeated indices is implied.
Existence and finiteness of the first three cumulants provided, any such integral can be shown to converge to a multivariate normal distribution in the large volume limit: $$\label{MCE_MultNormal} \mathcal{Z}^{Q^j} ~\simeq~ Z_{gce} \frac{\exp \left(-\frac{\xi^j \; \xi_{ j}}{2} \right)}{\left(2\pi V \right)^{6/2} \det| \sigma| }~,$$ where $Z_{gce}\equiv \exp \left[V \kappa_0 \right]$ is the GCE partition function, $\kappa_0$ is the cumulant of $0^{th}$ order, $\xi^j=\left( Q^k - V \kappa_1^k \right) \left( \sigma^{-1}\right)_{k}^{\;\;j} V^{-1/2}~$ is a measure for the distance of a particular macro state $Q^k$ to the peak $V \kappa_1^k$ of the joint distribution, and $\sigma$ is the square root of the second rank tensor $\kappa_2$, see [@clt] for details. The normalization in Eq.(\[MCEprob2\]) can essentially be found in two ways. The first way would be to integrate the distribution (\[MCE\_MultNormal\]) over all possible values of multiplicity $N_{\Omega}$, while all other variables are set to their peak values, e.g. $Q=V\kappa_1^Q$, $E=V\kappa_1^E$, $\vec P = \vec 0$. The second and more practical way is to use an approximation similar to Eq.(\[MCE\_MultNormal\]) to describe the macro state $Q^j = (Q,E,\vec P)$. The normalization in Eq.(\[MCEprob2\]), $\mathcal{Z}^{E,Q,\vec P}$, is then given by the 5-dimensional integral, similar to Eq.(\[MCEInt\]), without the integration over $\phi_{N_{\Omega}}$. The 1-dimensional slice along $N_{\Omega}$, i.e. the conditional distribution of particle number $N_{\Omega}$, while charge, energy and momentum are fixed to $Q,E,\vec P = \vec 0$, can then be shown [@clt] to converge to a Gaussian in the large volume limit: $$\begin{aligned} \label{PMCE} P_{mce}(N_{\Omega} ) ~ \simeq~ \frac{1}{\left(2\pi ~\omega^{mce}_{\Omega} ~ \langle N_{\Omega} \rangle \right)^{1/2}} ~ \exp \left(- \frac{ \left( N_{\Omega} - \langle N_{\Omega} \rangle \right)^2}{2 ~ \omega^{mce}_{\Omega} ~ \langle N_{\Omega} \rangle } \right) ~. \end{aligned}$$ The scaled variance $\omega^{mce}_{\Omega}$ is given by the ratio of the two determinants of the two relevant second rank cumulants, $\kappa_2$ and $\tilde \kappa_2$, of the two partition functions $\mathcal{Z}^{N_{\Omega},E,Q,\vec P}$ and $\mathcal{Z}^{E,Q,\vec P}$, hence[^2]: $$\label{SimpleOmega} \omega^{mce}_{\Omega} = \frac{ \det | \kappa_2| }{ \kappa_1^{N_{\Omega}} \;\det| \tilde \kappa_2| }~.$$ The asymptotic ($V\rightarrow \infty$) scaled variance can therefore be written in the form of Eq.(28) in [@clt]. Considering only the asymptotic solution we need to investigate only the first two cumulants ($n=1,2$) in detail. We will first discuss the structure of $\kappa_1$ and $\kappa_2$, and then deduce a few properties of Eq.(\[SimpleOmega\]). The first order cumulant $\kappa_1$ of $\mathcal{Z}^{N_{\Omega},Q,E,\vec P}$ gives GCE expectation values for particle density $\kappa_1^{N_{\Omega}}$, charge density $\kappa_1^{Q}$, energy density $\kappa_1^{E}$, and expectation values of momentum $\kappa_1^{p_x}$, etc. Since we are only interested in a static source we find due to the antisymmetric momentum integral (see Appendix \[Calc\]) $\kappa_1^{p_x} = \kappa_1^{p_y} = \kappa_1^{p_z}=0$. The general form of the first cumulant $\kappa_1$ is then: $$\begin{aligned} \kappa_1 = \begin{pmatrix} \kappa_1^{N_{\Omega}}, & \kappa_1^{Q}, & \kappa_1^{E}, & 0, & 0, & 0 \end{pmatrix}~. 
\label{vector}\end{aligned}$$ The second cumulant $\kappa_2$ of $\mathcal{Z}^{N_{\Omega},Q,E,\vec P}$ contains information about correlations due to different conserved quantities. A detailed discussion of correlation terms only involving Abelian charges and/or energy, e.g. $\kappa_2^{Q,Q}$, $\kappa_2^{Q,E}$, and $\kappa_2^{E,E}$, can be found in [@clt]. Again, due to the antisymmetric nature of the momentum integral, all cumulant entries involving an odd order in one of the momenta, e.g. $\kappa_2^{E,p_x}$, $\kappa_2^{p_x,p_y}$, or $\kappa_2^{Q,p_x}$ are equal to zero. The general second order cumulant $\kappa_2$ thus reads: $$\begin{aligned} \kappa_2 = \begin{pmatrix} \kappa_2^{N_{\Omega},N_{\Omega}} & \kappa_2^{N_{\Omega},Q} & \kappa_2^{N_{\Omega},E} & \kappa_2^{N_{\Omega},p_x} & \kappa_2^{N_{\Omega},p_y} & \kappa_2^{N_{\Omega},p_z} \\ \kappa_2^{Q,N_{\Omega}} & \kappa_2^{Q,Q} & \kappa_2^{Q,E} & 0 & 0 & 0 \\ \kappa_2^{E,N_{\Omega}} & \kappa_2^{E,Q} & \kappa_2^{E,E} & 0 & 0 & 0 \\ \kappa_2^{p_x,N_{\Omega}} & 0 & 0 & \kappa_2^{p_x,p_x} & 0 & 0 \\ \kappa_2^{p_y,N_{\Omega}} & 0 & 0 & 0 & \kappa_2^{p_y,p_y} & 0 \\ \kappa_2^{p_z,N_{\Omega}} & 0 & 0 & 0 & 0 & \kappa_2^{p_z,p_z} \end{pmatrix}~. \label{matrix}\end{aligned}$$ Please note that by construction, Eq.(\[kappa\_n\]), the matrix (\[matrix\]) is symmetric, hence $\kappa_2^{N_{\Omega},Q} = \kappa_2^{Q,N_{\Omega}}$, etc. The second matrix $\tilde \kappa_2$, now related to the partition function $\mathcal{Z}^{Q,E,\vec P}$, is obtained from $\kappa_2$, Eq.(\[matrix\]), by crossing out the first row and first column. In the following we are going to make use of the fact that one can express the determinant of a matrix $A$ by: $$\label{calcdet} \det |A| ~=~ \sum \limits_{j=1}^n \left( -1\right)^{j+k} A_{j,k} M_{j,k} ~,$$ where $A_{j,k}$ is the matrix element $j,k$ of a general non-singular $n\times n$ matrix $A$, and $ M_{j,k}$ is its complementary minor. A simple consequence of Eq.(\[calcdet\]) is: $$\label{normdet} \det |\tilde \kappa_2| = \kappa_2^{p_x,p_x} ~\kappa_2^{p_y,p_y} ~\kappa_2^{p_z,p_z} \left[ \kappa_2^{E,E}~\kappa_2^{Q,Q}- \left(\kappa_2^{E,Q}\right)^2 \right] ~=~ \left( \kappa_2^{p_x,p_x} \right)^3 \det |\hat \kappa_2|,$$ where $\kappa_2^{p_x,p_x} =\kappa_2^{p_y,p_y} =\kappa_2^{p_z,p_z} $, due to spherical symmetry in momentum space, and $\hat \kappa_2$ is just a $2\times2$ matrix involving only terms containing $E$ and $Q$. In case correlations between particle number and conserved momenta are vanishing, i.e. $\kappa_2^{N_{4\pi},p_x} = 0$, or $\kappa_2^{N_{\Omega},p_x} = 0$, then, similarly to Eq.(\[normdet\]), the determinant of $\kappa_2$ factorizes into a product of correlation terms $(\kappa_2^{p_x,p_x})^3$ and the determinant of a $3\times3$ sub-matrix involving only terms containing $E$, $Q$, and $N$. Hence in taking the ratio Eq.(\[SimpleOmega\]) one notes, that in this case momentum conservation will not affect multiplicity fluctuations in the large volume limit [@clt]. In this work, however we do not necessarily find $ \kappa_2^{N_{\Omega},p_x} = 0 $, as we only integrate over a limited segment $\Omega$ of momentum space, and taking momentum conservation into account may affect the result. Finally it should be stressed that this procedure can be easily generalized to account for Bose or Fermi statistics. Also phenomenological phase space suppression (enhancement) factors $\gamma_q$ [@gammaQfirst] or $\gamma_s$ [@gammaSfirst] could be straightforwardly included. 
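The role of the momentum entries in Eq.(\[matrix\]) can be illustrated with a small numerical sketch. The cumulant values below are arbitrary toy numbers, not computed from the pion gas; the only point is that the determinant ratio of Eq.(\[SimpleOmega\]) is insensitive to momentum conservation when $\kappa_2^{N_{\Omega},p_z}=0$, and is reduced when it is not.

```python
# A toy numerical sketch of Eq. (SimpleOmega) and of the block structure of
# Eq. (matrix). The cumulant values below are arbitrary illustrative numbers,
# not computed from the pion gas; they only demonstrate that the momentum
# rows and columns cancel in the determinant ratio when kappa_2^{N,p} = 0,
# and reduce the scaled variance when they do not.
import numpy as np

# order of 'charges': (N_Omega, Q, E, p_x, p_y, p_z)
kNN, kNQ, kNE = 1.0, 0.3, 0.4
kQQ, kQE, kEE = 1.0, 0.2, 1.0
kpp = 0.8                      # kappa_2^{px,px} = kappa_2^{py,py} = kappa_2^{pz,pz}
kappa1_N = 1.0                 # kappa_1^{N_Omega}

def build_kappa2(kN_pz):
    k2 = np.diag([kNN, kQQ, kEE, kpp, kpp, kpp])
    k2[0, 1] = k2[1, 0] = kNQ
    k2[0, 2] = k2[2, 0] = kNE
    k2[1, 2] = k2[2, 1] = kQE
    k2[0, 5] = k2[5, 0] = kN_pz          # kappa_2^{N_Omega, p_z}
    return k2

def omega_mce(k2):
    # Eq. (SimpleOmega): kappa_2 tilde is kappa_2 with first row/column removed
    return np.linalg.det(k2) / (kappa1_N * np.linalg.det(k2[1:, 1:]))

k2 = build_kappa2(0.3)
k2_hat = k2[1:3, 1:3]                    # 2x2 block containing only Q and E
# Eq. (normdet): det(kappa_2 tilde) = kpp^3 * det(kappa_2 hat)
print(np.linalg.det(k2[1:, 1:]), kpp**3 * np.linalg.det(k2_hat))
print("omega with    kappa_2^{N,pz} = 0.3:", omega_mce(k2))
print("omega without N-p correlations:    ", omega_mce(build_kappa2(0.0)))
```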
However, without proper implementation of the effect of additional correlations due to resonance decay and collective motion, i.e. flow, it seems of little value to perform overly detailed calculations for experimentally measurable distributions. We thus return to the pion gas example from section \[GCECE\] and restrict the discussion to simple momentum space cuts in rapidity, transverse momentum, and azimuthal angle, see also the Appendix for details. Results {#Results} ======= Multiplicity fluctuations in the full momentum space ---------------------------------------------------- Let us first recall basic properties of multiplicity fluctuations of negative particles in the full momentum space ($4\pi$ fluctuations) in the three standard ensembles for the Boltzmann pion gas considered here. Multiplicity fluctuations in the CE are suppressed due to exact charge conservation. For a neutral ($Q=0$) system one finds in the large volume limit $\omega_{4\pi}^{ce} = 0.5$ [@CEfluc1]. Further suppression of fluctuations arises from additionally enforcing exact energy conservation in the MCE. Here one finds $\omega_{4\pi}^{mce} \approx 0.25$ for a Boltzmann pion gas at $T\approx 160MeV$. In the GCE, since no conservation laws are enforced, we always find a Poisson distribution with width $\omega_{4\pi}^{gce} =1$. Since charge conservation in the CE links the distribution of negatively charged particles to that of their positive counterparts, i.e. $P(N_-) = P(N_+-Q)$, the relative width of $P(N_-)$ increases (decreases) as we move the electric charge density to positive (negative) values [@CEfluc2]. This can easily be seen from Eq.(\[matrix\]) by crossing out all rows and columns containing energy and momentum and calculating the asymptotic scaled variance of negatively charged particles, $\omega^{ce}_{4\pi}$, from Eq.(\[SimpleOmega\]), $$\omega_{4\pi}^{ce}~=~ \frac{\kappa_2^{N_{4\pi},N_{4\pi}}\kappa_2^{Q,Q}-\left( \kappa_2^{N_{4\pi},Q} \right)^2}{\kappa_1^{N_{4\pi}}~ \kappa_2^{Q,Q}} ~=~ \frac{\exp \left( \frac{\mu}{T} \right)}{2\cosh \left( \frac{\mu}{T}\right)}~.$$ The same effect is present in the MCE, however the calculation is slightly longer. Results for $4\pi$ multiplicity fluctuations of negatively charged particles in a Boltzmann pion gas at $T=160MeV$ and different charge densities are summarized in Table \[table\]. Additionally, estimates based on the previously employed ‘uncorrelated particle‘ approach, Eq.(\[accscaling\]), for multiplicity fluctuations with limited acceptance are given.
  -------------------- ----------------------------------- ---------------------------------- ----------------------------------- ------------------------------------ ----------------------------------- ------------------------------------
                       $\quad \omega^{gce}_{4\pi} \quad$   $\quad \omega^{ce}_{4\pi} \quad$   $\quad \omega^{mce}_{4\pi} \quad$   $\quad \omega^{gce}_{q=1/9} \quad$   $\quad \omega^{ce}_{q=1/9} \quad$   $\quad \omega^{mce}_{q=1/9} \quad$
  $\mu=0$              $1$                                 $0.5$                              $0.235$                             $1$                                  $0.944$                             $0.915$
  $\mu=-\frac{m}{2}$   $1$                                 $0.294$                            $0.147$                             $1$                                  $0.922$                             $0.905$
  $\mu=+\frac{m}{2}$   $1$                                 $0.706$                            $0.353$                             $1$                                  $0.967$                             $0.928$
  -------------------- ----------------------------------- ---------------------------------- ----------------------------------- ------------------------------------ ----------------------------------- ------------------------------------

  : Multiplicity fluctuations of $\pi^-$ in a classical pion gas in the large volume limit in the three standard ensembles at $T=160MeV$ for different charge densities. The index ‘$4\pi$‘ denotes fluctuations in the full momentum space, while the index ‘$q=1/9$‘ assumes acceptance scaling, Eq.(\[accscaling\]). The ratio $n_-/n_{tot}$ equals $0.33$ for $\mu=0$, $0.48$ for $\mu=-m/2$, and $0.20$ for $\mu=+m/2$.[]{data-label="table"}

Despite the fact that $\omega_{4\pi}$ is very different in the GCE, CE, or MCE and also rather sensitive to the charge density, the estimates for limited acceptance ($q=1/9$) based on Eq.(\[accscaling\]) vary only by a few %. In order to decisively distinguish predictions for different ensembles a large value of $q$ would be needed. Multiplicity fluctuations in limited segments of momentum space --------------------------------------------------------------- In Section \[GCECE\] we have seen that in the Boltzmann CE multiplicity fluctuations observed in a limited segment of phase space are insensitive to the position of this segment. The dependence on the size of the segment can thus be taken into account by use of acceptance scaling Eq.(\[accscaling\]). To balance charge a particle can be produced or annihilated anywhere in momentum space, and due to an infinitely large heat and momentum bath in the CE no momentum state is essentially preferred. In the MCE this dependence is qualitatively different. When using the MCE formulation particles are correlated due to the constraints of exactly conserved energy and momentum, even in the large volume limit. Fluctuations in a macroscopic subsystem are strongly affected by correlations with the remainder of the system. In Fig. \[dodp\] we show the scaled variance of multiplicity fluctuations for negatively charged particles in finite bins in transverse momentum (left), and rapidity (right). The bins are constructed such that each bin contains on average the same fraction $q$ of the total average yield. The width of each bin is indicated by the bars. Calculations are done for two values of acceptance ($q=1/5$ and $q=1/9$). The dashed and dotted lines correspond to acceptance scaling Eq.(\[accscaling\]), while the markers are calculated from Eq.(\[SimpleOmega\]).
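As an aside, the CE and acceptance-scaled entries of Table \[table\] can be reproduced directly from the closed expression for $\omega^{ce}_{4\pi}$ given above together with Eq.(\[accscaling\]). A quick check follows; the pion mass $m=140$ MeV used here is an assumption (the value is not quoted explicitly in the text).

```python
# A quick check of the CE column of Table [table] from the closed expression
# omega_ce_4pi = exp(mu/T) / (2 cosh(mu/T)) given above, together with the
# acceptance-scaled values of Eq. (accscaling) for q = 1/9.
# A pion mass of m = 140 MeV is assumed here (not quoted explicitly in the text).
import numpy as np

T, m, q = 160.0, 140.0, 1.0/9.0
for mu in (0.0, -m/2, +m/2):
    w_ce = np.exp(mu/T) / (2.0*np.cosh(mu/T))
    print(f"mu = {mu:+6.1f} MeV:  omega_ce_4pi = {w_ce:.3f}  "
          f"omega_ce_q = {1 + q*(w_ce - 1):.3f}")
# -> 0.500/0.944, 0.294/0.922 and 0.706/0.967, as in the table
```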
One finds that multiplicity fluctuations in bins with high transverse momentum and high values of rapidity are, due to energy and momentum conservation, suppressed with respect to bins where individual particles carry less energy and momentum. An intuitive explanation would look like this: Let us consider an event with an unusually large (small) number of particles at the most forward rapidity bin. In this bin we would therefore find a macroscopic state with unusually large (small) observed longitudinal momentum $P^{obs}_z$ and energy $E^{obs}$. The remainder of the system therefore has to have rather large (small) momentum $-P_z^{obs}$ and rather small (large) energy $E-E^{obs}$. Since both probability distributions, for the observed and the unobserved subsystems, do not factorize into independent probability distributions, but are correlated, this macro state would be rather unlikely. Fluctuations about the mean $\langle N_y \rangle$ at forward (backward) rapidities should therefore be suppressed. On the other hand, modest multiplicity fluctuations in a high $p_T$ bin can induce stronger fluctuations in the lower $p_T$ bins, and fluctuations about $\langle N_{p_T} \rangle$ in a low $p_T$ bin are enhanced. Even when detecting only a fraction of about 10% of the total system these correlations can have a sizeable effect. Conservation laws ----------------- It seems worthwhile to consider individual conservation laws and their impact on multiplicity fluctuations in more detail. One of the main advantages of the analytical procedure presented here is certainly that one can easily ‘switch on‘ or ‘switch off‘ a particular conservation law. For illustrative purposes we show the result of $\omega^{mce}_{\Delta y}$ for the MCE without longitudinal momentum conservation in Fig.\[dodywopz\]. In comparing Fig.\[dodp\], right, to Fig.\[dodywopz\] it becomes obvious that the strong suppression of multiplicity fluctuations at forward rapidities cannot be accounted for by energy conservation alone, but has to be explained by combined energy and longitudinal momentum conservation. The relevant cumulant elements, which give information about the strength of correlations between particle number and a particular conserved quantity, are $\kappa_2^{N_{\Omega},Q}$, $\kappa_2^{N_{\Omega},E}$, $\kappa_2^{N_{\Omega},p_x}$, etc. Whenever an element vanishes, the corresponding conservation law has no impact on multiplicity fluctuations. For details of the calculations please see the Appendix. Since for fluctuations of charged particles $\kappa_2^{N_{\Omega},Q}$ and $\kappa_2^{N_{\Omega},E}$ are generally non-zero, we will focus only on the effects of momentum conservation. For multiplicity fluctuations in bins in transverse momentum, momentum conservation does not affect the result, see Appendix \[App\_pt\], and the suppression effect is a result of energy conservation alone. When considering cuts in rapidity one finds in general $\kappa_2^{N_y,p_z} \not= 0$, but $\kappa_2^{N_y,p_x} = \kappa_2^{N_y,p_y} = 0$, and only longitudinal momentum conservation needs to be taken into account, see Appendix \[App\_y\]. In considering the third idealized case, where our detector observes only a segment in azimuthal angle $\phi$, but all rapidities $y$ and transverse momenta $p_T$, both global $P_x$ and $P_y$ conservation lead to non-trivial modifications of Eq.(\[accscaling\]), see Appendix \[App\_phi\].
To understand the difference between the strong suppression of fluctuations at high transverse momentum and the rather modest suppression at high rapidity when momentum conservation is not enforced, one should compare the elements $\kappa_2^{N_{p_T},E}$ in Eq.(\[omega\_pt\]), and $\kappa_2^{N_{y},E}$ in Eq.(\[omega\_y\]), which measure in Boltzmann approximation the average energy density carried by particles in a bin $\Delta p_T$ or $\Delta y$, to the total average energy density $\langle E_-\rangle = \kappa_2^{N_{4\pi},E}$ carried by $\pi^-$. (All other elements in Eqs.(\[omega\_pt\]) and (\[omega\_y\]) do not depend on the location of the segment.) In case of kinematical cuts in $\Delta p_T$ the fraction $\kappa_2^{N_{p_T},E} / \langle E_- \rangle$ rises from about $5\%$ in the lowest to roughly $20\%$ in the highest $p_T$-bin. In contrast, for the central $y$-bin this ratio is about $10\%$, while in the most forward or backward bins it is roughly $12\%$. However in both cases the bins contain on average $q=1/9\approx 11\%$ of the total average $\pi^-$ yield. The effect of energy conservation is thus weaker for cuts in rapidity than for cuts in transverse momentum, see also Appendices \[App\_pt\] and \[App\_y\]. Charged systems --------------- In Figs. \[dodp\_chr\] the transverse momentum (left) and rapidity (right) dependence of the scaled variance is presented for two different values of charge density. Similar to the CE, in the MCE the effective size of the heat and charge bath matters. We find that in general MCE effects for negatively charged particles are stronger (weaker) when the electric charge density is negative (positive). In the limit of a strongly positively charged system, the $\pi^-$ subsystem could be considered as embedded in a large heat, charge, and momentum bath (provided by $\pi^+$ and $\pi^0$ particles) and MCE effects would cease. The GCE would here be the appropriate limit. In the opposite limit of a strongly negatively charged system, charge conservation essentially becomes equivalent to particle number conservation. This scenario might be more familiar from textbooks, where the CE is usually understood as the ensemble with fixed particle number. However, the same arguments as above also apply here, except that the effect would be much stronger, and $\omega^{mce}_{4\pi} = 0$. In general one would expect that suppression effects in bins of high transverse momentum or high values of rapidity are stronger the more abundant the analyzed particle species is. In the context of heavy ion collisions this implies that MCE effects should be stronger for positively charged particles than for negatively charged particles, due to the fact that the created system carries positive net-charge. Previous work suggests that the asymptotic values for the scaled variance are indeed reached rather quickly [@VolDep], and the above results are certainly applicable to large systems expected to be created in relativistic heavy ion collisions. Remarks and Conclusion {#Remarks} ====================== Some concluding remarks seem to be in order. Although it might seem inappropriate to use the MCE formulation of a hadron resonance gas model for calculation of multiplicity fluctuations in heavy ion collisions, as energy and volume cannot be assumed to be the same in all events, it should be stressed that the GCE and CE still imply a very particular type of heat (and momentum) bath, namely an infinite (and ideal) one. This assumption seems to us even less appropriate.
The MCE is also often understood as the ensemble with energy (and charge) conservation, but without momentum conservation. It is usually assumed that taking momentum conservation into account will not affect fluctuations in the large volume limit. We have shown [@clt] in a recent paper that this is indeed the case when one has information about all produced particles. However for calculations of multiplicity fluctuations in arbitrary finite subsystems in momentum space all kinematic conservation laws need to be taken into account. In a realistic heavy ion experiment it seems impossible to measure the entire final state of each collision. The observed subsystem could therefore be seen as effectively embedded into a (possibly much larger) heat, charge, and momentum bath. Sometimes it is therefore argued that, when investigating only a small part of a statistical system (canonical or micro-canonical), one can ignore correlations of the subsystem under investigation with the remaining system. This argument is often applied when considering yields and/or fluctuations in a limited segment of momentum space. More precisely, usually the GCE is thought to be the appropriate ensemble to model fluctuations of particle multiplicity or particle ratios found in some mid-rapidity interval [@GCEfluc]. In this work we have argued that this assumption should be checked carefully. The GCE is only the correct ensemble to choose if heat and charge bath are assumed to be infinite, while the observed subsystem remains finite. Based on our previous line of arguments, one would also expect that strong collective longitudinal and transverse flow would lead to a strong correlation of macroscopic subsystems. Longitudinal momentum conservation implies that when ‘observing‘ in an event a final state with a certain small (large) number of produced particles at very forward rapidity, a similarly small (large) number of particles should exist at backward rapidities. Particles in these bins carry substantial longitudinal momenta, and hence energy. Modest fluctuations in their numbers should therefore induce stronger fluctuations in the central rapidity region. The same line of arguments is applicable to the transverse momentum dependence. One would therefore expect a similar momentum space dependence of experimentally measured charged particle multiplicity fluctuations as shown in Figs. \[dodp\]. This argument is additionally supported by UrQMD simulations [@UrQMDfluc]. In transport calculations the produced systems stay far away from global or local equilibrium [@TransportEq] and other (dynamical) mechanisms might lead to similar effects. On the other hand, one could also infer from [@UrQMDfluc] that even in non-equilibrium systems correlations due to exactly enforced conservation laws determine the general trend, although transport simulations show, for instance, a very different dependence of multiplicity fluctuations on beam energy [@HSDfluc1; @HSDfluc2] than statistical equilibrium models. This should be the subject of further investigation. Finally, and most importantly, we want to stress that the recently presented preliminary NA49 analysis of multiplicity fluctuations in certain rapidity and transverse momentum windows [@BeniCoolData] shows qualitatively the very same trends as suggested by the MCE formulation of the statistical model.
Data, UrQMD simulations, and the statistical model exhibit suppressed multiplicity fluctuations when bins with high transverse momentum (or high values of rapidity) are compared to bins of the same mean multiplicity at lower transverse momentum (or lower values of rapidity). We are certainly tempted to interpret this rather unexpected common behavior as a manifestation of energy and momentum conservation effects. Summary {#Summary} ======= We have discussed the effect of momentum space cuts on multiplicity fluctuations in the framework of an ideal classical pion gas in the three standard ensembles, GCE, CE, and MCE. Only in the MCE do we expect a momentum space dependence of multiplicity fluctuations when comparing intervals of the same average multiplicity. We have shown that even in the thermodynamic limit energy-momentum conservation can leave a sizable effect in the fluctuation pattern. In a previous publication we have argued that despite the fact that one may expect event-by-event fluctuations of the thermal energy, i.e. the part of the total energy which goes into thermal particle production rather than collective expansion, these event-by-event fluctuations remain small compared to the energy fluctuations one would expect from grand canonical and canonical ensembles. In this work we have shown that energy and momentum conservation lead to a non-trivial momentum space dependence of the fluctuation pattern. This argument seems to be strongly supported by data. The above results become all the more interesting when compared to models which seek to describe effects beyond our considerations. In fact, our calculations suggest a suppression or enhancement of similar strength to that predicted as a signal for the critical point of strongly interacting matter, the onset of deconfinement, or generally a possible phase transition. One might also be tempted to argue that enhanced fluctuations around mid-rapidity, when compared to a more forward rapidity slice, should be interpreted as a signal of a phase transition from a quark gluon plasma to a hadron gas phase, expected to be first realized in the presumably hotter and denser central rapidity region. However in this case there should be a non-monotonic variation as the center of mass energy of the colliding nuclei is changed. This seems not to be supported by preliminary NA49 data. In summary, the above results should be treated as a prediction for general trends of multiplicity fluctuations in limited segments of momentum space. The existence of this general behavior should be further tested by current experiments. Observation of effects similar to those of Figs. \[dodp\] in experimental data would, in our opinion, strongly speak in favor of our hypothesis that fluctuations of extensive observables are indeed dominated by material and motional conservation laws. We would like to thank F. Becattini, V.V. Begun, M. Bleicher, E.L. Bratkovskaya, W. Broniowski, L. Ferroni, M.I. Gorenstein, M. Gaździcki, S. Häussler, V.P. Konchakovski, B. Lungwitz, and G. Torrieri for fruitful discussions. Globally Conserved Quantities {#Calc} ============================= Turning now to calculations of cumulants, Eq.(\[kappa\_n\]), we always employ coordinates most suitable to our problem.
The invariant phase space element is given by: $$\varepsilon ~\frac{dN}{d^3p} ~=~ \frac{dN}{m_T~ dm_T ~dy ~d \phi} ~=~ \frac{dN}{p_T ~dp_T ~dy ~d \phi} ~=~ \varepsilon ~\frac{g}{\left( 2\pi\right)^3} ~\exp \left( -\frac{\varepsilon-\mu}{T}\right)~,$$ where the single particle energy $\varepsilon = m_T \cosh y$, its longitudinal momentum $p_z = m_T \sinh y$, transverse mass $m_T^2 = p_T^2 + m^2 $, transverse momentum $p_T^2 = p_x^2 + p_y^2$, and rapidity $y = \tanh^{-1} \left( p_z/\varepsilon\right)$. Additionally we employ spherical coordinates: $$\frac{dN}{d\phi~ d\theta ~dp} ~=~ \sin \theta ~ p^2 ~ \frac{dN}{d^3p}~.$$ For clarity we explicitly consider here a few terms not given in [@clt]. The total energy density is given by the sum over individual contributions of all particle species $k$: $$\begin{aligned} \label{kappa_1_E} \kappa_1^E &=& \left(- i \frac{\partial }{\partial \phi_{E}}\right) \Psi\left( \phi_j \right) \Bigg|_{\phi_j=\vec 0} ~= \sum_k \int \limits_{0}^{+ \pi} ~d \theta \int \limits_{-\pi}^{+\pi} d \phi \int \limits_{0}^{\infty} ~dp ~\varepsilon_k~ \frac{dN_k}{d\phi ~d\theta ~ dp} \nonumber \\ &=& \sum_k \frac{g_k ~e^{\frac{q_k\mu}{T}}}{2\pi^2} ~m_k^3 ~T~ \left[K_1 \left(\frac{m_k}{T} \right) + 3~ \frac{T}{m_k}~ K_2 \left(\frac{m_k}{T} \right)\right]~= \sum_k \langle E_k \rangle~.\end{aligned}$$ The diagonal energy element $\kappa_2^{E,E}$ is given by: $$\begin{aligned} \kappa_2^{E,E} &= & \left(- i \frac{\partial }{\partial \phi_{E}}\right)^2 \Psi\left( \phi_j \right) \Bigg|_{\phi_j=\vec 0} ~= \sum_k \int \limits_{0}^{+ \pi} ~d \theta \int \limits_{-\pi}^{+\pi} d \phi \int \limits_{0}^{\infty}~dp ~\varepsilon_k^2~ \frac{dN_k}{d\phi ~d\theta ~ dp}\nonumber \\ &=& \sum_k \frac{g_k ~e^{\frac{q_k\mu}{T}}}{2\pi^2} ~m_k^4 ~T~ \left[K_0 \left(\frac{m_k}{T} \right) + 5~ \frac{T}{m_k} ~K_1 \left(\frac{m_k}{T} \right) + 12~\frac{T^2}{m_k^2}~ K_2 \left(\frac{m_k}{T} \right) \right] ~.\end{aligned}$$ Additionally we define the diagonal momentum correlation terms, with $p_z = p \cos \theta$: $$\begin{aligned} \label{kappa_p_p} \kappa_2^{p_z,p_z} &=& \left(- i \frac{\partial }{\partial \phi_{p_z}}\right)^2 ~\Psi\left( \phi_j \right) \Bigg|_{\phi_j=\vec 0}~= \sum_k \int \limits_{0}^{2\pi} d\phi ~ \int \limits_{0}^{\pi} d\theta ~ \int \limits_{0}^{\infty} dp ~p_z^2 ~\frac{dN_k}{d\phi ~d\theta ~ dp} \nonumber \\ &=&\sum_k \frac{g_k ~e^{\frac{q_k\mu}{T}}}{2\pi^2}~ m_k^4~T~ \Bigg[\frac{T}{m_k} ~K_1\left(\frac{m_k}{T} \right) +4~\frac{T^2}{m_k^2}~ K_2\left(\frac{m_k}{T} \right) \Bigg]~.\end{aligned}$$ Due to spherical symmetry in momentum space we find $\kappa_2^{p_x,p_x} = \kappa_2^{p_y,p_y} = \kappa_2^{p_z,p_z}$. Correlation terms of odd order in one of the momenta are identical to zero. As an example we find for correlations between energy and longitudinal momentum: $$\begin{aligned} \label{kappa_E_p} \kappa_2^{E,p_z} &=& \left(- i \frac{\partial }{\partial \phi_{E}}\right) \left(- i \frac{\partial } {\partial \phi_{p_z}}\right) \Psi\left( \phi_j \right) \Bigg|_{\phi_j=\vec 0} \nonumber \\ &=& \sum_k \int \limits_{0}^{2\pi} d\phi \int \limits_{0}^{\pi} d\theta \int \limits_{0}^{\infty} dp ~\varepsilon ~p_z ~\frac{dN_k}{d\phi ~d\theta ~ dp} =0, \end{aligned}$$ since the integral over the polar angle $\int_{0}^{\pi} \sin \theta \cos \theta ~d\theta=0$. Similarly we find $\kappa_2^{Q,p_x} = \kappa_2^{p_x,p_y} = 0$. Additionally $\kappa_1^{p_x}=0$, etc., since for a static source $\langle \vec P \rangle = \vec 0$.
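A short numerical cross-check of the closed Bessel form of Eq.(\[kappa\_1\_E\]) against the direct momentum integral may be useful at this point. The parameter values below ($m=140$ MeV, $T=160$ MeV, $g=1$, $\mu=0$) are illustrative assumptions only.

```python
# A numerical cross-check of the closed Bessel form of kappa_1^E,
# Eq. (kappa_1_E), for a single species. The values m = 140 MeV, T = 160 MeV,
# g = 1, mu = 0 are illustrative assumptions only.
import numpy as np
from scipy import integrate, special

m, T, g = 140.0, 160.0, 1.0
eps = lambda p: np.sqrt(p**2 + m**2)

# direct integral: g/(2 pi^2) * int_0^inf dp p^2 eps(p) exp(-eps(p)/T)
direct = g/(2*np.pi**2) * integrate.quad(
    lambda p: p**2 * eps(p) * np.exp(-eps(p)/T), 0, np.inf)[0]

# closed form: g/(2 pi^2) * m^3 * T * [ K_1(m/T) + 3 (T/m) K_2(m/T) ]
closed = g/(2*np.pi**2) * m**3 * T * (
    special.kn(1, m/T) + 3*(T/m)*special.kn(2, m/T))

print(direct, closed)   # the two numbers agree to numerical precision
```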
Transverse Momentum Segment {#App_pt} =========================== The average particle number density of $\pi^-$ in a segment of transverse momentum ${\Delta p_T}$ is given by Eq.(\[kappa\_n\]), i.e. the first derivative of the CGF with respect to $\phi_{N_{\Omega}}=\phi_{N_{p_T}}$ at the origin: $$\begin{aligned} \label{dNdpt} \kappa_1^{N_{p_T}} &=& \left( -i \frac {\partial} {\partial \phi_{N_{p_T}}} \right) \Psi\left( \phi_j \right) \Bigg|_{\phi_j=\vec 0} ~=~ \int \limits_{\Delta p_T} dp_T \int \limits_{0}^{2\pi} d\phi \int \limits_{-\infty}^{\infty} dy ~ \frac{dN}{dp_T ~dy ~ d\phi} \nonumber \\ &=& \frac{g~e^{-\frac{\mu}{T}}}{2\pi^2} \int \limits_{\Delta p_T} dp_T ~p_T \sqrt{p_T^2+m^2} ~K_1 \left( \frac{\sqrt{p_T^2+m^2}}{T}\right)~.\end{aligned}$$ Please note that $\kappa_1^{N_{p_T}} = \int_{\Delta p_T} dp_T ~dN/dp_T = \langle N_{p_T} \rangle$. Correlations of $\pi^-$ in a segment ${\Delta p_T}$ with globally conserved energy are given by double differentiation of $\Psi\left( \phi_j \right)$ with respect to $\phi_{N_{p_T}}$ and $\phi_E$, thus: $$\begin{aligned} \kappa_2^{N_{p_T},E} &=& \left(- i \frac{\partial } {\partial \phi_{N_{p_T}}}\right) \left(- i \frac{\partial } {\partial \phi_{E}}\right) \Psi\left( \phi_j \right) \Bigg|_{\phi_j=\vec 0}~= ~ \int \limits_{\Delta p_T} dp_T ~ \int \limits_{0}^{2\pi} d\phi ~ \int \limits_{-\infty}^{\infty} dy ~ \varepsilon ~ \frac{dN}{dp_T ~dy ~ d\phi} \nonumber \\ &=& \frac{g~e^{-\frac{\mu}{T}}}{2\pi^2} \! \!\int \limits_{\Delta p_T} dp_T ~ p_T \left(p_T^2+m^2 \right) \left[ K_0 \left( \frac{\sqrt{p_T^2+m^2}}{T} \right) + \frac{T}{\sqrt{p_T^2+m^2}} ~ K_1 \left( \frac{\sqrt{p_T^2+m^2}}{T} \right) \right] ~.\end{aligned}$$ Correlations between conserved momenta and particles in $\Delta p_T$, given by the elements $\kappa_2^{N_{p_T},p_x}$,$\kappa_2^{N_{p_T},p_y}$, and $\kappa_2^{N_{p_T},p_z} $ are identical to zero, due to symmetry in azimuthal angle $\phi$ for the first two, and due to an antisymmetric rapidity integral for the last. Therefore, all elements involving an odd order in one of the momenta in Eq.(\[matrix\]) are equal to zero. The determinant of Eq.(\[matrix\]) thus factorizes, similar to Eq.(\[normdet\]), into a product of $(\kappa_2^{p_x,p_x})^3$ and the determinant of a $3 \times 3$ sub-matrix involving only terms containing $N_{p_T},E,Q$. Hence momentum conservation drops out when calculating Eq.(\[SimpleOmega\]). However the strength of correlations between particle number $N_{p_T}$ and globally conserved energy $E$ will depend on the position of the segment $\Delta p_T$. Thus using Eqs.(\[SimpleOmega\]) and (\[matrix\]), one can express the width of the MCE multiplicity distribution (\[PMCE\]) by: $$\label{omega_pt} \omega^{mce}_{\Delta p_T}~=~ \frac{\kappa_2^{N_{p_T},N_{p_T}}}{\kappa_1^{N_{p_T}}} - \frac{1}{\kappa_1^{N_{p_T}} \det |\hat \kappa_2|}\Bigg[ \left( \kappa_2^{N_{p_T},Q} \right)^2 \kappa_2^{E,E}+ \left( \kappa_2^{N_{p_T},E} \right)^2 \kappa_2^{Q,Q} - 2 \kappa_2^{N_{p_T},E} \kappa_2^{N_{p_T},Q} \kappa_2^{E,Q} \Bigg]$$ In Boltzmann approximation, we find from Eq.(\[kappa\_n\]), $\kappa_2^{N_{p_T},N_{p_T}} = \kappa_2^{N_{p_T},Q} = \kappa_1^{N_{p_T}} = q \kappa_1^{N_{4\pi}}$, where we have defined the acceptance $q \equiv \kappa_1^{N_{p_T}} / \kappa_1^{N_{4\pi}} $. However, when observing a fraction $q$ of the particle density, one does not necessarily observe the same fraction $q$ of the energy density $\langle E_- \rangle$ carried by $\pi^-$, and thus $\kappa_2^{N_{p_T},E} \not= q \langle E_- \rangle$. 
Therefore depending on the location of $\Delta p_T$, our detector sees a larger (smaller) fraction of the total energy, which leads to smaller (larger) particle number fluctuations, see Fig. \[dodp\], left panel. One can easily verify that setting $\kappa_2^{N_{p_T},E} = q \langle E_- \rangle $ in Eq.(\[omega\_pt\]), leads to acceptance scaling, Eq.(\[accscaling\]), $\omega^{mce}_{\Delta p_T} = 1 + q \left(\omega^{mce}_{4\pi}-1 \right)$. Rapidity Segment {#App_y} ================ The average particle number density of $\pi^-$ in a rapidity interval ${\Delta y}$ is given by: $$\begin{aligned} \label{dNdy} \kappa_1^{N_y} &=& \left(- i \frac{\partial } {\partial \phi_{N_{y}}}\right) \Psi\left( \phi_j \right) \Bigg|_{\phi_j=\vec 0}~= ~\int \limits_{m}^{\infty} dm_T ~ \int \limits_{0}^{2\pi} d\phi ~ \int \limits_{\Delta y} dy ~ \frac{dN}{dm_T ~dy ~ d\phi} \nonumber \\ &=& \frac{g~e^{-\frac{\mu}{T}}}{\left(2\pi \right)^2}~ T^3~\int \limits_{\Delta y} dy~ \exp \left( -\frac{m }{T} \cosh \left(y \right) \right) \left[ \left(\frac{m}{T}\right)^2 + 2\frac{m}{T} \cosh^{-1} y + 2 \cosh^{-2} y \right]~.\end{aligned}$$ Please note that $\kappa_1^{N_{y}} = \int_{\Delta y} dy ~dN/dy = \langle N_{y} \rangle$. Correlations of particles in ${\Delta y}$ with globally conserved energy are given by: $$\begin{aligned} \kappa_2^{N_y,E} &=& \left(- i \frac{\partial } {\partial \phi_{N_{y}}}\right) \left(- i \frac{\partial } {\partial \phi_{E}}\right) \Psi\left( \phi_j \right) \Bigg|_{\phi_j=\vec 0}~= ~\int \limits_{m}^{\infty} dm_T ~ \int \limits_{0}^{2\pi} d\phi ~ \int \limits_{\Delta y} dy ~ \varepsilon ~ \frac{dN}{dm_T ~dy ~ d\phi} \nonumber \\ &=&\frac{g~e^{-\frac{\mu}{T}}}{\left(2\pi \right)^2} ~ T^4 ~ \int \limits_{\Delta y} dy ~ \cosh y ~\exp \left(- \frac{m}{T} \cosh y \right) \nonumber \\ && \times~ \left[\left(\frac{m}{T} \right)^3 + 3 \left( \frac{m}{T} \right)^2 \cosh^{-1} y+ 6 ~\frac{m}{T}~ \cosh^{-2} y + 6 \cosh^{-3} y \right]~.\end{aligned}$$ The correlation term of particles in ${\Delta y}$ with globally conserved longitudinal momentum $P_z$ reads: $$\begin{aligned} \kappa_2^{N_y,p_z} &=& \left(- i \frac{\partial } {\partial \phi_{N_{y}}}\right) \left(- i \frac{\partial } {\partial \phi_{p_z}}\right) \Psi\left( \phi_j \right) \Bigg|_{\phi_j=\vec 0}~= \int \limits_{0}^{2\pi} d\phi ~ \int \limits_{m}^{\infty} dm_T ~ \int \limits_{\Delta y} dy ~ p_z ~\frac{dN}{dm_T ~dy ~ d\phi} \nonumber \\ &=& \frac{g~e^{-\frac{\mu}{T}}}{\left(2\pi \right)^2} ~ T^4 ~ \int \limits_{\Delta y} dy ~ \sinh y ~ \exp \left(- \frac{m}{T} \cosh y \right) \nonumber \\ &&\times~ \left[\left(\frac{m}{T} \right)^3 + 3 \left( \frac{m}{T} \right)^2 \cosh^{-1} y + 6 ~\frac{m}{T}~ \cosh^{-2} y + 6 \cosh^{-3} y \right]~.\end{aligned}$$ Thus the element $\kappa_2^{N_y,p_z}$ in the matrix (\[matrix\]) is non-vanishing, and longitudinal momentum ($P_z$) conservation seems to affects correlations between particles in a segment $\Delta y$ and the remaining system. In contrast to that further elements are equal to zero, $\kappa_2^{N_y,p_x} =\kappa_2^{N_y,p_y} = 0$, and $P_x$ and $P_y$ conservation have no additional effect. 
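As a consistency check of Eq.(\[dNdy\]), the closed form can be compared with the direct $(m_T,y)$ integral of the Boltzmann spectrum. The sketch below is one way to do this; the values $m=140$ MeV, $T=160$ MeV, $g=1$, $\mu=0$ and the example interval $\Delta y = [0.5,1.5]$ are assumptions made only for this illustration.

```python
# An illustrative cross-check of the closed form of kappa_1^{N_y}, Eq. (dNdy),
# against the direct (m_T, y) integral of the Boltzmann spectrum.
# The values m = 140 MeV, T = 160 MeV, g = 1, mu = 0 and the interval
# Delta y = [0.5, 1.5] are assumptions made only for this example.
import numpy as np
from scipy import integrate

m, T, g = 140.0, 160.0, 1.0
y1, y2 = 0.5, 1.5

def dNdy(y):
    # integrand of Eq. (dNdy), i.e. dN/dy of pi^- for mu = 0
    c = np.cosh(y)
    return g/(2*np.pi)**2 * T**3 * np.exp(-(m/T)*c) * (
        (m/T)**2 + 2*(m/T)/c + 2/c**2)

closed = integrate.quad(dNdy, y1, y2)[0]

# direct: dN/(dmT dy) = g/(4 pi^2) * mT^2 * cosh(y) * exp(-mT cosh(y)/T)
direct = integrate.dblquad(
    lambda mT, y: g/(4*np.pi**2) * mT**2 * np.cosh(y)
                  * np.exp(-mT*np.cosh(y)/T),
    y1, y2, lambda y: m, lambda y: np.inf)[0]

print(closed, direct)   # the two numbers agree to numerical precision
```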
When momentum conservation is taken into account the scaled variance (\[SimpleOmega\]) can be calculated from Eq.(\[calcdet\]): $$\begin{aligned} \label{omega_y} \omega^{mce}_{\Delta y} = \frac{\kappa_2^{N_{y},N_{y}}}{\kappa_1^{N_{y}}} &-& \frac{1}{\kappa_1^{N_{y}} \kappa_2^{p_z,p_z} \det |\hat \kappa_2|}\Bigg[ \left( \kappa_2^{N_{y},Q} \right)^2 \kappa_2^{E,E} \kappa_2^{p_z,p_z}+ \left( \kappa_2^{N_{y},E} \right)^2 \kappa_2^{Q,Q} \kappa_2^{p_z,p_z} \nonumber \\ &+&\left( \kappa_2^{N_{y},p_z} \right)^2 \left[ \kappa_2^{Q,Q}\kappa_2^{E,E} - \left( \kappa_2^{E,Q} \right)^2 \right] - 2 \kappa_2^{p_z,p_z} \kappa_2^{N_{y},E} \kappa_2^{N_{y},Q} \kappa_2^{E,Q}~\Bigg]~. \end{aligned}$$ Similarly to the previous section, we find a large (small) $\kappa_2^{N_{y},p_z}$ leads to small (large) fluctuations, see Fig. \[dodp\], right panel. When intervals symmetric in rapidity are assumed, e.g. $\Delta y = \left[-y_1,y_1 \right]$, or $ \Delta y = \left[-y_2,-y_1\right] \cup \left[y_1,y_2 \right]$, correlations between particle number and momentum disappear, $\kappa_2^{N_{y},p_z}=0$, and Eq.(\[omega\_y\]) reduces to Eq.(\[omega\_pt\]), and momentum conservation does not play a role. Equally when disregarding longitudinal momentum conservation the same arguments as those of Appendix \[App\_pt\] apply and Eq.(\[omega\_pt\]) holds, however the effect is much weaker, see Fig. \[dodywopz\]. Azimuthal Angle Segment {#App_phi} ======================= Th average particle number in $\Delta \phi $, while integrating over all $p_T$ and $y$ is simply a fraction $q = \Delta \phi / 2 \pi$ of the total yield $\langle N_{4\pi}\rangle$. Therefore $\kappa_1^{N_{\phi}} = q \kappa_1^{N_{4\pi}}$. Equally, the energy carried by $\pi^-$ in this interval is $\kappa_2^{E,N_{\phi}}= q \langle E_-\rangle$. Due to symmetry around $y=0$, we find additionally $\kappa_2^{N_{\phi},p_z}= 0$. However for the transverse momenta $p_x= p_T \cos \phi$, and $p_y= p_T \sin \phi$ the correlation with $N_{\phi}$ is generally non-zero. $$\begin{aligned} \kappa_2^{N_{\phi},p_x} &=& \left(- i \frac{\partial } {\partial \phi_{N_{\phi}}}\right) \left(- i \frac{\partial } {\partial \phi_{p_x}}\right) \Psi\left( \phi_j \right) \Bigg|_{\phi_j=\vec 0}~=\frac{g}{\left(2\pi \right)^3} \int \limits_{\Delta \phi} d\phi ~ \int \limits_{0}^{\infty} dp_T ~ \int \limits_{-\infty}^{\infty} dy ~ p_x ~\frac{dN}{dp_T ~dy ~ d\phi} \nonumber \\ &=& \int \limits_{\Delta \phi}d\phi \cos \phi ~ ~\frac{2g~e^{-\frac{\mu}{T}}}{\left(2\pi \right)^3}~m^2~T~\sqrt{\frac{\pi}{2}~m~T} ~ K_{5/2} \left( \frac{m}{T} \right)~=~ \left( 2\pi \right)^{-1} \Big[ \sin \phi \Big]_{\Delta \phi} ~\langle p_T \rangle ~.\end{aligned}$$ Similarly we find $\kappa_2^{N_{\phi},p_y} = - \left( 2\pi \right)^{-1} \left[ \cos \phi \right]_{\Delta \phi} ~\langle p_T \rangle $. Unlike in the previous sections there is no particular dependence of the position of the interval $\Delta \phi$. However in general there is a dependence. 
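Here the bracket is understood as the difference of the endpoint values, summed over the sub-intervals of the acceptance (our reading of the notation):
$$\Big[ \sin \phi \Big]_{[\phi_1,\phi_2]} = \sin \phi_2 - \sin \phi_1~, \qquad \Big[ \cos \phi \Big]_{[\phi_1,\phi_2]} = \cos \phi_2 - \cos \phi_1~,$$
while for two opposite slices $\Delta \phi_B = \left[\phi_1,\phi_2\right] \cup \left[\phi_1+\pi,\phi_2+\pi \right]$, considered below,
$$\Big[ \sin \phi \Big]_{\Delta \phi_B} = \left( \sin \phi_2 - \sin \phi_1 \right) + \left( \sin (\phi_2+\pi) - \sin (\phi_1+\pi) \right) = 0~,$$
and similarly for the cosine bracket, so that both $\kappa_2^{N_{\phi},p_x}$ and $\kappa_2^{N_{\phi},p_y}$ vanish in that case.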
When momentum conservation is taken into account Eq.(\[SimpleOmega\]) can be calculated from Eq.(\[calcdet\]): $$\begin{aligned} \label{omega_phi} \omega^{mce}_{\Delta \phi} &=& \frac{\kappa_2^{N_{\phi},N_{\phi}}}{\kappa_1^{N_{\phi}}} - \frac{1}{\kappa_1^{N_{\phi}} \kappa_2^{p_x,p_x} \det |\hat \kappa_2|} \Bigg[ \left( \kappa_2^{N_{\phi},Q} \right)^2 \kappa_2^{E,E} \kappa_2^{p_x,p_x} + \left( \kappa_2^{N_{\phi},E} \right)^2 \kappa_2^{Q,Q} \kappa_2^{p_x,p_x} \nonumber \\ &+&\left( \left( \kappa_2^{N_{\phi},p_x} \right)^2 + \left( \kappa_2^{N_{\phi},p_y} \right)^2 \right) \left[ \kappa_2^{Q,Q}\kappa_2^{E,E} - \left( \kappa_2^{E,Q} \right)^2 \right] - 2 \kappa_2^{p_x,p_x} \kappa_2^{N_{\phi},E} \kappa_2^{N_{\phi},Q} \kappa_2^{E,Q}~\Bigg]~, \end{aligned}$$ where we have used $\kappa_2^{p_x,p_x}=\kappa_2^{p_y,p_y}$. As mentioned before there is no particular dependence of $\kappa_2^{N_{\phi},E}$ and $\kappa_2^{N_{\phi},Q}$ on the position of $\Delta \phi$. However we have a term $ \left( \kappa_2^{N_{\phi},p_x} \right)^2 + \left( \kappa_2^{N_{\phi},p_y} \right)^2 $. In case we assume a continuous interval $\Delta \phi_A = \left[\phi_1,\phi_2 \right]$ this terms reads: $$\left( \kappa_2^{N_{\phi},p_x} \right)^2 + \left( \kappa_2^{N_{\phi},p_y} \right)^2 = \frac{\langle p_T \rangle^2}{\left( 2\pi \right)^2} ~\left[ 1- \cos \left( \phi_1 - \phi_2 \right) \right]$$ This term is evidently positive, hence fluctuations are suppressed. One can easily verify that when one takes $\Delta \phi_B =\left[\phi_1,\phi_2\right] \cup \left[\phi_1+\pi,\phi_2+\pi \right]$, i.e. two opposite slices in azimuthal angle, the correlation disappears, $\kappa_2^{N_{\phi},p_x} =\kappa_2^{N_{\phi},p_y}= 0 $, and one returns to acceptance scaling, Eq.(\[accscaling\]). [100]{} J. Cleymans, H. Oeschler, K. Redlich and S. Wheaton, Phys. Rev.  C [**73**]{} (2006) 034905. F. Becattini, J. Manninen and M. Gaździcki, Phys. Rev.  C [**73**]{} (2006) 044905. A. Andronic, P. Braun-Munzinger and J. Stachel, Nucl. Phys.  A [**772**]{} (2006) 167. J. Letessier and J. Rafelski, arXiv:nucl-th/0504028. B. Lungwitz [*et al.*]{} \[NA49 Collaboration\], PoS C [**FRNC2006**]{} (2006) 024. V. V. Begun, M. Gaździcki, M. I. Gorenstein, M. Hauer, V. P. Konchakovski and B. Lungwitz, Phys. Rev.  C [**76**]{} (2007) 024902. V. V. Begun, M. Gaździcki, M. I. Gorenstein and O. S. Zozulya, Phys. Rev.  C [**70**]{}, 034901 (2004). H. Heiselberg, Phys. Rept.  [**351**]{} (2001) 161; S. Jeon and V. Koch, in [*Quark-Gluon Plasma*]{} 3, edited by R.C. Hwa and X.-N. Wang (World Scientific, Singapore, 2004), p.430. M. Gaździcki, M. I. Gorenstein and S. Mrowczynski, Phys. Lett.  B [**585**]{} (2004) 115; M. I. Gorenstein, M. Gaździcki and O. S. Zozulya, Phys. Lett.  B [**585**]{} (2004) 237. I. N. Mishustin, Phys. Rev. Lett.  [**82**]{} (1999) 4779; H. Heiselberg and A. D. Jackson, Phys. Rev.  C [**63**]{} (2001) 064904. M. A. Stephanov, K. Rajagopal and E. V. Shuryak, Phys. Rev. Lett.  [**81**]{} (1998) 4816; Phys. Rev.  D [**60**]{} (1999) 114028; M. Stephanov, Acta Phys. Polon.  B [**35**]{} (2004) 2939. S. Häussler, S. Scherer and M. Bleicher, arXiv:hep-ph/0702188. T. K. Nayak, arXiv:0706.2708 \[nucl-ex\]. V. V. Begun, M. I. Gorenstein, M. Hauer, V. P. Konchakovski and O. S. Zozulya, Phys. Rev.  C [**74**]{} (2006) 044903. F. Becattini, A. Keränen, L. Ferroni and T. Gabbriellini, Phys. Rev.  C [**72**]{} (2005) 064904. M. I. Gorenstein, M. Hauer and D. O. Nikolajenko, Phys. Rev.  C [**76**]{}, 024901 (2007). M. Hauer, V. V. Begun and M. I. 
Gorenstein, arXiv:0706.3290 \[nucl-th\]. V. V. Begun and M. I. Gorenstein, Phys. Rev.  C [**73**]{} (2006) 054904; V. V. Begun, M. I. Gorenstein, A. P. Kostyuk and O. S. Zozulya, J. Phys. G [**32**]{} (2006) 935. K. Redlich and L. Turko, Z. Phys.  C [**5**]{} (1980) 201; L. Turko, Phys. Lett.  B [**104**]{} (1981) 153; R. Hagedorn and K. Redlich, Z. Phys.  C [**27**]{} (1985) 541. J. Cleymans, K. Redlich and E. Suhonen, Z. Phys.  C [**51**]{} (1991) 137; A. Keränen and F. Becattini, Phys. Rev.  C [**65**]{} (2002) 044901; F. Becattini and U. W. Heinz, Z. Phys.  C [**76**]{}, 269 (1997). O. Fochler, S. Vogel, M. Bleicher, C. Greiner, P. Koch-Steinheimer and Z. Xu, Phys. Rev.  C [**74**]{}, 034902 (2006). C. M. Ko, V. Koch, Z. w. Lin, K. Redlich, M. A. Stephanov and X. N. Wang, Phys. Rev. Lett.  [**86**]{}, 5438 (2001). S. Jeon, V. Koch, K. Redlich and X. N. Wang, Nucl. Phys.  A [**697**]{}, 546 (2002). F. Becattini and L. Ferroni, Eur. Phys. J.  C [**35**]{} (2004) 243; Eur. Phys. J.  C [**38**]{} (2004) 225. J. Letessier, A. Tounsi and J. Rafelski, Phys. Lett.  B [**475**]{} (2000) 213; J. Rafelski and J. Letessier, Phys. Rev. Lett.  [**85**]{} (2000) 4695. P. Koch, B. Muller and J. Rafelski, Phys. Rept.  [**142**]{} (1986) 167. V. V. Begun, M. I. Gorenstein and O. S. Zozulya, Phys. Rev.  C [**72**]{} (2005) 014902. S. Jeon and V. Koch, Phys. Rev. Lett.  [**83**]{}, 5435 (1999); G. Torrieri, S. Jeon and J. Rafelski, Phys. Rev.  C [**74**]{} (2006) 024901; G. Torrieri, arXiv:nucl-th/0702062. B. Lungwitz and M. Bleicher, arXiv:0707.1788 \[nucl-th\]. E. L. Bratkovskaya, W. Cassing, C. Greiner, M. Effenberger, U. Mosel and A. Sibirtsev, Nucl. Phys.  A [**681**]{} (2001) 84; L. V. Bravina [*et al.*]{}, Phys. Rev.  C [**60**]{}, 024904 (1999). V. P. Konchakovski, M. I. Gorenstein and E. L. Bratkovskaya, Phys. Lett.  B [**651**]{} (2007) 114. V. P. Konchakovski, M. I. Gorenstein and E. L. Bratkovskaya, arXiv:0704.1831 \[nucl-th\]. B. Lungwitz [*et al.*]{} \[NA49 Collaboration\], arXiv:0709.1646 \[nucl-ex\]. [^1]: We drop in the following the argument $(V,T,\mu)$ to simplify the notation. [^2]: Please note, that in order to simplify formulas, the notation is slightly different from [@clt].
1
--- abstract: 'The profiles of the chromo-electric field generated by static quark-antiquark, $Q{\bar Q}$ and three-quark, $QQQ$ sources are calculated in Coulomb gauge. Using a variational ansatz for the ground state, we show that a flux tube-like structure emerges and combines to the “Y”-shape field profile for three static quarks. The properties of the chromo-electric field are, however, not expected to be the same as those of the full action density or the Wilson line and the differences are discussed.' author: - 'Patrick O. Bowman and Adam P. Szczepaniak' title: ' Chromo-Electric flux tubes ' --- Introduction ============ An intuitive picture of quark-gluon dynamics emerges in the Coulomb gauge, $\na\cdot \A^a = 0$ [@clee; @zw1; @zw2]. In this case QCD is represented as a many-body system of strongly interacting physical quarks, antiquarks and gluons. In particular the gluon degrees of freedom have only the two transverse polarizations and in the non-interacting limit reduce to the physical massless plane wave states. In the interacting theory gluonic states, just like any other colored objects, are expected to be non-propagating, [*i.e.*]{} confined on the hadronic scale. The non-propagating nature of colored states follows from the infrared enhanced dispersion relations which can be set up in the Coulomb gauge [@zw2; @as1; @as2; @as3]. In the Coulomb gauge the $A^0$ component of the 4-vector potential results in an instantaneous interaction (potential) between color charges. Unlike QED, where the corresponding potential is a function only of the relative distance between the electric charges, in QCD it is a functional of the transverse gluon components, ${\bf A}$ [@clee]. Thus the numerical value of the potential cannot be obtained without knowing the correct wave functional of the state and its dependence on the gluon coordinates. So in QCD the chromo-electric field is expected to be non-local and to depend on the global distribution of charges, which set up the gluon wave functional. Even though the exact solution to the general many-body problem is unavailable it is often possible to obtain good approximations if the dominant correlations can be identified. In Coulomb gauge QCD (in the Schrödinger field representation) the domain of the transverse gluon field, $\A$ is bounded and non-flat, and is referred to as the Gribov region. It is expected that the strong interaction between static charges originates from the long-range modes near the boundary of the Gribov region, the so called Gribov horizon. For example it has been recently shown that center vortices, when transformed to the Coulomb gauge, indeed reside on the Gribov horizon [@goz]. The curvature of the Gribov region contributes to matrix elements via the functional measure determined by the determinant of the Faddeev-Popov operator. This determinant prevents analytical calculations of functional integrals, however it has been shown that its effect can be approximated by imposing appropriate boundary conditions on the gluon wave functional [@as4; @hr]. This wave functional is in turn constrained by minimizing the expectation value of the energy density which leads to a set of coupled self-consistent Dyson equations [@as1; @sw]. Once the wave functional is determined it is possible to calculate the distribution of the chromo-electric field in the system. This is the main subject of this paper. 
In the following we study the chromo-electric field in the presence of the static quark-antiquark and three-quark systems, prototypes for a meson and a baryon respectively. Recent lattice computations indicate that the gluonic field near the static $Q-{\bar Q}$ state forms flux tubes. There are also indications that for the $QQQ$ state the fields arrange in the so called “Y”-shape [@tak; @latY], although some work supports the “$\Delta$”-shape [@latD]. String-like behavior has been observed in the chromo-electric field in Ref. [@sho] and the “Y”-shape interaction advocated in Ref. [@kuzsim]. A recent reevaluation of the center-vortex model also supports the “Y”-shape [@corn]. In the following section we summarize the relevant elements of the Coulomb gauge formalism and discuss the approximations used. This is followed by numerical results and outlook of future studies. There is a fundamental difference between lattice gauge flux tubes corresponding to the distribution for the action density and the chromo-electric field profiles. In the context of the potential energy of the sources, this difference was emphasized in Zwanziger, Greensite and Olejnik [@goz; @zwancon]. We discuss those in Section. IV. Chromo-electric Coulomb field in the presence of static charges =============================================================== The Coulomb gauge Hamiltonian ----------------------------- The Yang-Mills Coulomb gauge Hamiltonian in the Schrödinger representation, $H=H(\Pi, A)$ is given by $$H = \frac{1}{2} \int d\x \left[ \bm{\Pi}^a(\x) \cdot \bm{\Pi}^a(\x) + \B^a(\x)\cdot \B^a(\x) \right] + {\hat V}_C. \label{h}$$ The gluon field satisfies the Coulomb gauge condition, $\na \cdot \A^a(\x)=0$, for all color components $a=1\cdots N_c^2-1$. The conjugate momenta, $\bm{\Pi}^a(\x)= -i\partial/\partial {\A^a(\x)}$ obey the canonical commutation relation, $[\Pi^{i,a}(\x), A^{j,b}(\y)] = -i\delta_{ab} \delta^{ij}_T(\na_\x)\delta(\x-\y)$, with $\delta^{ij}_T(\na) = \delta_{ij} - \nabla_i \nabla_i/\na^2$. The canonical momenta also correspond to the negative of the transverse component of the chromo-electric field, $\bm{\Pi}^a(\x) = - \E^a_T(\x)$, $\na \cdot \E^a_T = 0$. The chromo-magnetic field, $\B$ contains linear and quadratic terms in $\A$. It will also be convenient to transform to the momentum space components of the fields by $$\A^a(\k) = \int d\x\A^a(\x) e^{-i\k\cdot x},$$ and similarly for $\bm{\Pi}^a(\k)$. The Coulomb potential ${\hat V}_C$ may be expressed in terms of the longitudinal component of the chromo-electric field, $${\hat V}_C = \frac{1}{2}\int d\x {\bf E}^a(\x) {\bf E}^a(\x),$$ with $${\bf E}^a(\x) = \int d\y d\z { \frac{\x - \y}{4\pi|\x - \y|^3}} \left[ \frac{g}{1 - \lambda}\right]^{ab}_{\y,\z} \rho^b(\z). \label{e}$$ Here $(1-\lambda)$ is the Faddeev-Popov (FP) operator which in the configuration-color space is determined by, $$[\lambda]^{ab}_{\x,\y} = \int {\frac{d\p}{(2\pi)^3}} {\frac{d\q}{(2\pi)^3}} e^{i\p\cdot\x} e^{-i\q\cdot\y} \lambda_{ab}(\p,\q),$$ where $$\lambda_{ab}(\p,\q) = ig f_{acb} {\frac{\A^c(\p-\q)\cdot \q}{\q^2}},$$ $f$ are the $SU(N_c)$ structure constants, and $g$ is the bare coupling. In [Eq. (\[e\])]{}, $\rho$ is the color charge density given by $$\rho^a(\x) = \psi^{\dag}(\x)T^a \psi(\x) + f_{abc} \A^b(\x)\cdot \bm{\Pi}^c(\x),$$ with the two terms representing the quark and the gluon contribution, respectively; the former is replaced by a $c$-number for static quarks. Without light flavors there is no other dependence on the quark degrees of freedom. 
The energy of the static $Q{\bar Q}$ or $QQQ$ systems measured with respect to the state with no sources is thus given by the Coulomb term and is determined by the expectation value of the longitudinal component of the chromo-electric field. It is the dependence of the chromo-electric field and the Coulomb interaction on the static vector potential (through $\lambda$) that produces the differences between QCD and QED. In QED the kernel in the bracket in [Eq. (\[e\])]{} reduces to $[\cdots] \to \delta(\y-\z)$ and the Abelian expression for the electric field emerges. In QCD the chromo-electric field and the Coulomb potential are enhanced due to long-wavelength transverse gluon modes on the Gribov horizon where the FP operator vanishes. The combination of two effects on the Gribov horizon: enhancement of $(1 - \lambda)^{-1}$ in the longitudinal electric field and vanishing of the functional norm, which is proportional to $\det(1-\lambda)$, leads to finite, albeit large, expectation values of the static interaction between color charges. In [Eq. (\[h\])]{} we have omitted the FP measure since, as mentioned earlier in Ref. [@as4], its effect can be approximately accounted for by imposing specific boundary conditions on the ground state wave functional. Since the chromo-electric field depends on the distribution of the transverse vector potential it is necessary to know the wave functional of the system. A self-consistent variational ansatz can be chosen in a Gaussian form, $$\Psi[A] = \exp\left( - \frac{1}{2} \int {\frac{d\p}{(2\pi)^3}} \omega(p) \A^a(\p)\cdot \A^a(-\p) \right). \label{varia}$$ The parameter $\omega(p)$ ($p\equiv |\p|$) is determined by minimizing the expectation value of the energy density of the vacuum ([*i.e.*]{} without sources). The boundary condition referred to above corresponds to setting $\omega(0) \equiv \mu$ to be finite, which plays the role of $\Lambda_{QCD}$, [*i.e*]{} it controls the position of the Landau pole. Minimizing the energy density of the vacuum leads to a set of coupled self-consistent integral equations: one for $\omega$, one for the expectation value of the inverse of the FP operator, $d(p)$, $$\begin{gathered} (2\pi)^3 \delta(\p-\q)\delta_{ab} d(p) \equiv \\ \int d\x d\y e^{-i\p\cdot\x} e^{i\q\cdot\y} \langle \Psi| \left[ \frac{g}{1 - \lambda}\right]^{ab}_{\x,\y} |\Psi \rangle / {\langle \Psi|\Psi \rangle }, \label{d}\end{gathered}$$ and one for the expectation value of the square of the inverse of the FP operator, which appears in the matrix elements of $V_C$, $$\begin{gathered} (2\pi)^3 \delta(\k-\q)\delta_{ab} f(p)d^2(p) \equiv \\ \int d\x d\y e^{-i\p\cdot\x} e^{i\q\cdot\y} \langle \Psi| \left[ \left(\frac{g}{1 - \lambda}\right)^2 \right]^{ab}_{\x,\y} |\Psi \rangle / {\langle \Psi|\Psi \rangle }. \label{ff}\end{gathered}$$ The approximation $f=1$ ignores the dispersion in the expectation value of the inverse of the FP operator, $$\Bigg\langle \left[ \frac{g}{1 - \lambda} \right]^2 \Bigg\rangle \to \Bigg\langle \frac{g}{1 - \lambda} \Bigg\rangle^2. \label{disp}$$ This approximation has been extensively used, [*e.g.*]{} in Refs. [@zw1; @zwan]. The three Dyson equations were analyzed in Ref. [@as1] where it was found that the solution of $\omega$ can be well approximated by the simple function $\omega(p) = \theta( \mu - p) \mu + \theta( p -\mu) p$. The renormalization scale $\mu$, being the only parameter in the theory, can constrained by the long range part of the Coulomb kernel $\langle V_C \rangle \propto fd^2$. 
We will discuss this more in the subsection below. The low momentum, $p<\mu$ dependence of $d(p)$ and of the Coulomb potential $V_C(p) = f(p)d(p)^2$ is well approximated by a power-law, $$d(p) = d(\mu) \left( \frac{\mu}{p}\right)^\alpha, f(p) = f(\mu) \left( \frac{\mu}{p}\right)^\beta \label{df}$$ with $\alpha \sim 0.5$ and $\beta \sim 1$. The exponents are bounded by $ 2\alpha + \beta \le 2$ and the upper limit corresponds to the linearly rising confining potential. At large momentum, $p >> \mu$, as expected from asymptotic freedom, both $d$ and $f$ are proportional to $1/\log^\gamma(p)$, with $\gamma = O(1)$. Adding static sources does not modify the parameters of the vacuum gluon distribution, [*e.g.*]{} $\omega(p)$. This is because the vacuum energy is an extensive quantity while sources contribute a finite amount to the total energy. Thus we can use the three functions $\omega$, $d$ and $f$ calculated in the absence of the sources to compute the expectation value of the chromo-electric field in the presence of static sources. The ansantz state obtained by applying quark sources to the variational vacuum of Eq. (\[varia\]) does not, however optimize the state with sources. The field lines in the $Q{\bar Q}$ and $QQQ$ systems ------------------------------------------------------ For a quark and an antiquark at positions $\x_q \equiv \R/2 = R{\hat \z}/2$ and $\x_{\bar q} = -\R/2 = -R{\hat \z}/2$, respectively, and the gluon field distributed according to $\Psi[\A]$, the expectation value of the square of the magnitude of the chromo-electric field measured at position $\x$ is given by $$\begin{gathered} \langle \E^2(\x,\R) \rangle = { \frac{C_F}{(4\pi)^2}} \sum_{\z_1=\pm \R/2} \sum_{\z_2 =\pm \R/2} \pm \int d\y_1 d\y_2 \\ \times{ \frac{ (\x - \y_1)\cdot (\x-\y_2)}{|\x - \y_1|^3 |\x - \y_2|^3}} E(\z_1,\y_1;\z_2,\y_2), \label{qq}\end{gathered}$$ where the $+ (-)$ sign is for the $\z_1 =(\ne) \z_2$ contributions, and $$E(\z_1,\y_1;\z_2,\y_2) \equiv \frac{\langle \Psi | \left[ \frac{g}{1 - \lambda}\right]_{\z_1,\y_1} \left[ \frac{g}{1 - \lambda}\right]_{\y_2,\z_2}|\Psi \rangle } { \langle \Psi|\Psi \rangle }. \label{ee}$$ The color factors leading to $C_F$ can be extracted from the expectation value in [Eq. (\[ee\])]{} (the ground state expectation value of the inverse of two FP operators is an identity in the adjoint representation). In the Abelian limit, $E(\z_1 \cdots \y_2) \to \delta(\y_1 - \z_1)\delta(\y_2 - \z_2)$ and Eq. (\[qq\]) gives the dipole field distribution, $\langle \E^2 \rangle_{QED}$. One should note that [Eq. (\[qq\])]{} contains the two self energies. These self energies are necessary to produce the correct asymptotic behavior at $x >> R$ for charge-neutral systems, (in QED and QCD) [*i.e*]{} $\E^2$ has to fall-off at least as $1/\x^4$ at large distances from the sources. The infrared, $|\x| \sim |\R| >> 1/\mu$ enhancement in QCD arises from the expectation value of the inverse of the FP operator. If $\langle \E^2(\x,\R) \rangle$ is integrated over $\x$ one obtains the expectation value of the Coulomb energy of the $Q{\bar Q}$ source. 
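To make the Abelian limit quoted above explicit (a consistency check rather than a new result): with $E(\z_1,\y_1;\z_2,\y_2) \to \delta(\y_1 - \z_1)\delta(\y_2 - \z_2)$ the integrals in [Eq. (\[qq\])]{} collapse, the four $(\z_1,\z_2)$ terms combine into a perfect square, and one finds
$$\langle \E^2(\x,\R) \rangle \to \frac{C_F}{(4\pi)^2} \left| \frac{\x - \R/2}{|\x - \R/2|^3} - \frac{\x + \R/2}{|\x + \R/2|^3} \right|^2,$$
i.e. the square of the field of a dipole with charges at $\pm \R/2$. The self-energy terms ($\z_1 = \z_2$) provide the two squares and the mutual term the cross product, which is what produces the required fall-off, at least as fast as $1/\x^4$ (in the Abelian case in fact $\sim R^2/\x^6$), at large distances from the sources.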
The mutual interaction energy is given by, $$\begin{aligned} V_C(\R) &= {1\over 2}\int d\x \langle \E^2(\x,\R) \rangle \nonumber \\ &\hspace{-5mm}= -C_F {{\langle \Psi | \left[ {g\over {1 - \lambda}} \left(-{1\over \na^2}\right) {g\over {1 - \lambda}}\right]_{{\R\over 2},-{\R\over 2}} |\Psi \rangle } /{ \langle \Psi|\Psi \rangle } }, \nonumber \\ &\hspace{-5mm} = -C_F \int {{d\p}\over {(2\pi)^3}} {{d^2(p) f(p)}\over {p^2}} e^{i\p\cdot \R}, \label{vc} \end{aligned}$$ and the net self-energy contribution is, $$\begin{aligned} \Sigma &= C_F {{\langle \Psi | \left[ {g\over {1 - \lambda}} \left(-{1\over \na^2}\right) {g\over {1 - \lambda}}\right]_{\pm {\R\over 2},\pm {\R\over 2}} |\Psi \rangle } /{ \langle \Psi|\Psi \rangle } }, \nonumber \\ &= C_F \int {{d\p} \over {(2\pi)^3}} {{d^2(p) f(p)}\over {p^2}}. \label{sigma} \end{aligned}$$ In lattice simulations it has been shown [@Greensite] that the Coulomb energy and the phenomenological static $Q\bar Q$ potential obtained from the Wilson loop are different. In particular it was found that the Coulomb potential string tension is about three times larger than the phenomenological string tension. This is in agreement with the “no confinement without Coulomb Confinement” scenario discussed by Zwanziger [@zwancon]. It is simple to understand the origin of the difference. Even if $|\Psi[\A]\rangle $ were the true vacuum state (without sources) of the Coulomb gauge QCD Hamiltonian (here we approximate it by a variational ansatz) the state $|Q{\bar Q},R\rangle \equiv Q(\R/2){\bar Q}(-\R/2) |\Psi[A]\rangle$ is no longer an eigenstate. For example ${\hat V}_C$ acting on $|Q{\bar Q},R\rangle$ excites any number of gluons and couples them to the quark sources. The Coulomb energy was defined as the expectation value, $V_C$ in $|Q{\bar Q},R\rangle$ minus the vacuum energy and it is therefore different from the phenomenological static potential energy which corresponds to the total energy (measured with respect to the vacuum) of the true eigenstate of the Hamiltonian with a $Q{\bar Q}$ pair. If one defines [@goz] $$\begin{aligned} G(R,T) &\equiv \langle Q{\bar Q},R|e^{-(H-E_0)T}|Q{\bar Q},R\rangle \nonumber \\ & = \sum_n |\langle Q {\bar Q},R,n|Q{\bar Q},R\rangle|^2 e^{-(E_n - E_0) T},\end{aligned}$$ then the Coulomb potential on the lattice can be calculated from $$V_C(R) = \lim_{T=0} -{d\over {dT}} \log(G(R,T)),$$ and the phenomenological potential from $$V(R) = \lim_{T=\infty} -{d\over {dT}} \log(G(R,T)).$$ Thus one should be comparing $V_C(R)$ in [Eq. (\[vc\])]{} to the lattice Coulomb potential and not to the phenomenological potential obtained from the Wilson loop. Finally, one could try to optimize the state with sources, [*e.g.*]{} by adding gluonic components. In this case terms in the Hamiltonian beyond the Coulomb term would contribute to the energy of the system and one could compare with the true (Wilson loop) static energy. In our previous studies, where we extracted numerical values for $\mu$ and the critical exponents $\alpha,\beta,\gamma$ ([*cf.*]{} [Eq. (\[df\])]{}) we have instead compared $V_C$ to the phenomenological, Wilson potential [@as1]. In what follows we will use the larger value of the string tension, to be in agreement with Ref. [@goz]. If the two exponents $\alpha$ and $\beta$, which determine the infrared behavior of $d(p)$ and $f(p)$ respectively, satisfy $2\alpha + \beta > 2$, then the self energy in [Eq. (\[sigma\])]{} is divergent and so is the [*rhs*]{} of [Eq. (\[vc\])]{}. 
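A small intermediate step used implicitly in the numerical estimates below: performing the angular integration in [Eq. (\[vc\])]{} and [Eq. (\[sigma\])]{}, $\int d\Omega_{\p}\, e^{i\p \cdot \R} = 4\pi j_0(pR)$ with $j_0(x)=\sin x /x$, gives the radial forms
$$V_C(R) = -\frac{C_F}{2\pi^2} \int_0^{\infty} dp\, d^2(p) f(p)\, j_0(pR), \qquad \Sigma = \frac{C_F}{2\pi^2} \int_0^{\infty} dp\, d^2(p) f(p),$$
so that the energy of the color-neutral pair measured with respect to the vacuum,
$$V_C(R) + \Sigma = \frac{C_F}{2\pi^2} \int_0^{\infty} dp\, d^2(p) f(p) \left[ 1 - j_0(pR) \right],$$
stays infrared finite. For the infrared part with $d(p) = d(\mu)\mu/p$ and $f(p)=1$, the large-$R$ limit follows from $\int_0^{\infty} dx\, (1 - \sin x/x)/x^2 = \pi/4$ and gives a linear rise with the slope $C_F d^2(\mu)\mu^2/(8\pi)$ quoted in [Eq. (\[bc\])]{} below, even though $V_C(R)$ and $\Sigma$ are separately infrared divergent in this case.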
This reflects the long-range behavior of the effective confining potential generated by self-interactions between the gluons that make up the Coulomb operator. For the colorless $Q\bar Q$ system the total energy which is the sum of $V_C$ and $\Sigma$, is finite as it should be. For a colored system, [*e.g.*]{} a quark-quark source, the sign of $V_C$ changes, there is no cancellation between the infrared singularities, and in the confined phase the system would be un-physical with infinite energy. The integral determining the self energy also becomes divergent in the UV, since for $p\to \infty$ the product $d^2(p)f(p)$ only falls-off logarithmically. Modulo these logarithmic corrections this UV divergence is the same as in the Abelian theory and can be removed by renormalizing the quark charge. It follows from translational invariance of the matrix element in [Eq. (\[ee\])]{}, that $E$ depends only on the relative coordinates, $\z_1 - \y_1$ and $\z_2 - \y_2$. We therefore introduce the momentum space representation, $$\begin{aligned} E(\z_1,\y_1;\z_2,\y_2) & = & \int {{d\p} \over {(2\pi)^3}} {{d\q} \over {(2\pi)^3}} e^{i\p \cdot (\z_1 - \y_1) -i \q\cdot (\z_2 - \y_2) } \nonumber \\ & & \qquad\times d(p) d(q) E(\p;\q), \label{ft}\end{aligned}$$ and define $F_\L(\l) \equiv E(\l+\L/2; \l-\L/2)$ with $\l \equiv (\p + \q)/2$ and $\L \equiv \p - \q$. The Dyson equation for $F$ can be derived in the rainbow-ladder approximation which, as shown in Ref. [@as1; @as2], sums up the dominant infrared and ultra-violet contributions to the expectation value of the inverse of two FP operators, $$F_\L(\l) = 1 + N_c \int {{d\k}\over {(2\pi)^3 }} {{\left[ (\k-\L/2)\delta_T(\k + \l)(\k+\L/2) \right] } \over {2\omega(\k + \l)}} {{d(\k - \L/2)}\over {(\k - \L/2)^2}} {{d(\k + \L/2)}\over {(\k + \L/2)^2}} F_\L(\k), \label{F}$$ It follows from [Eq. (\[ft\])]{} that $\L$ and $\l$ are conjugate to the [*center of mass*]{}, $\R \equiv [ (\z_1 - \y_1) + (\z_2 - \y_2) ]/2$ and the [*relative*]{}, $\r \equiv [ (\z_1 - \y_1) - (\z_2 - \y_2) ]$ coordinate respectively. The Dyson equation for $F_L$ is UV divergent if for $p/\mu >> 1$, and $d(p) \ge \log^{1/2}(p^2)$. This divergence can be removed by the Coulomb operator renormalization constant. The renormalized equation is obtained from the once-subtracted equation $F_\L(\l) - F_{\L_0}(\l_0)$. For example, if the subtraction is chosen at $|\l_0| = \mu$ and $\L_0 = {\bf 0}$, the renormalized coupling $F_0(\mu)$ can be fixed from the Coulomb potential. After integrating [Eq. (\[qq\])]{} (over $\x$) one obtains $\delta(\y_1 - \y_2)$ multiplying $E(\z_1, \cdots, \y_2)$. Therefore, it follows from [Eq. (\[ft\])]{} that $V_C(\R)$ is determined by $F_0(\l)$ and $F_0(\l) = F_0(l) = f(l)$ with $f$ defined in [Eq. (\[ff\])]{}. In [Eq. (\[F\])]{} $\L$ is a parameter, [*i.e.*]{} the Dyson equation does not involve self-consistency in $\L$. We have just shown that as $\L \to {\bf 0}$, $F_\L(\l)$ has a finite limit: it is given by $f$. For large $L=|\L|$ ($L/\mu >> 1$), due to asymptotic freedom, $F_\L$ is expected to vanish logarithmically, $F_\L \to d^2(L) \propto 1/\log(L^2)$. We do not attempt here to solve [Eq. 
(\[F\])]{}, instead we use a simple interpolation formula between the $L=0$ and $L\to \infty$ limits, $$F_\L(\l) = f(\l) \theta(\mu - |\l|) \theta(\mu -|{\L\over 2}|) + \left[ 1 - \theta(\mu - |\l|)\theta(\mu - |{\L\over 2}|) \right] \sim f\left( {\p + \q}\over 2 \right) \theta(\mu - p) \theta( \mu - q) + \left[ 1 - \theta(p) \theta(q) \right], \label{f}$$ [*i.e.*]{} in the term in the bracket we ignore the short distance logarithmic corrections. It is easy to show that if logarithmic corrections are ignored then the short-range, $p,q > \mu$ contribution to the energy density is the same as in the Abelian case. Since we are mainly interested in the long range behavior of the chromo-electric field, in the following we shall ignore contributions from the region $p,q>\mu$ all together. In the long-range approximation, $x, R >> 1/\mu$ the expectation value of $\E^2$ is then given by, $$\langle \E^2(\x,\R) \rangle = {{C_F}\over {(4\pi)^2}} \sum_{ij=1}^2 \xi^{Q\bar Q}_{ij} \int d\r f_L(r) {{\z_i - \x - \r/2} \over {|\z_i - \x - \r/2|}} \cdot {{\z_j - \x + \r/2} \over {|\z_j - \x + \r/2|}} d'_L(\z_i - \x - \r/2) d'_L(\z_j - \x + \r/2). \label{el}$$ $\xi^{Q\bar Q}_{ij} = 1$ for $i=j$ and $-1$ for $i\ne j$, $\z_{1,(2)} = (-)\R/2$, $$d'_L(r) \equiv {2\over {\pi}}\int^\mu_0 p dp j_1(rp) d(p), \label{dpl}$$ is the derivative of $d_L$ w.r.t. $r$, $$f_L(r) = {1\over {2\pi^2}} \int^\mu_0 dp p^2 f(p) j_0(pr), \label{fl}$$ and $j_0, j_1$ are Bessel’s functions. We note that the expression in [Eq. (\[el\])]{} is not necessarily positive. In the limit $f(p)=1$, the matrix element of the square of the inverse of the FP operator is approximated by the square of matrix elements (cf. [Eq. (\[disp\])]{}) and $\langle \E^2\rangle$ becomes positive. The expression for $\langle \E^2\rangle$ for the three quark system is derived by taking the expectation value of the Coulomb operator, ${\hat V}_C$ in a color-singlet state $\epsilon_{ijk}Q_i(\z_1) Q_j(\z_2) Q_k(\z_3) |\Psi[\A]\rangle$, which gives $$\langle \E^2(\x,\R_i) \rangle = {{C_F}\over {(4\pi)^2}} \sum_{ij=1}^3 \xi^{QQQ}_{ij} \int d\r f_L(r) {{ \z_i - \x - \r/2 } \over { |\z_i - \x - \r/2| } } \cdot {{ \z_j - \x + \r/2 } \over { |\z_j - \x + \r/2| }} d'_L(\z_i - \x - \r/2) d'_L(\z_j - \x + \r/2)$$ where $\xi^{QQQ}_{ij} = 1$ if $i = j$ and $\xi^{QQQ}_{ij} = -1/2$ if $i \ne j$. We note that the energy density for the $QQQ$ system comes from two-body correlations between the $QQ$ pairs. Numerical results =================== We first consider the simple approximation to the expectation value of the Coulomb kernel of [Eq. (\[disp\])]{} in which $f(p)=1$. If one wishes to have the confining potential grow linearly at large distances then it is necessary to set $\alpha = 1$, [*i.e.*]{} $d(p) \propto \mu/p$ for $p/\mu < 1$. In this case, assuming that the long-range behavior of the potential is of the form $V_C(r) = b_C r$, we obtain from [Eq. (\[vc\])]{}, $$b_C = C_F d^2(\mu)\mu^2/(8\pi). \label{bc}$$ We use the Coulomb string tension $b_C = 0.6 \mbox{ GeV}^2$. For the $Q\bar Q$ system the long-range contribution to the electric fields is then given by, $$\begin{gathered} \langle \E^2(\x,\R) \rangle = {{2b_C}\over {\pi^3}} \biggl[ {{(\R/2 - \x)} \over {|\R/2 - \x|^2}}\left(1-j_0(\mu|\R/2-\x|)\right) \\ + (\x \to -\x) \biggr]^2. \label{elnof}\end{gathered}$$ ![\[fig1\] $R^2 \langle \E^2(x) \rangle$ in units of $2b_C/\pi^3(\hbar c)^2 $ as a function of the distance $x$ along the $Q{\bar Q}$ axis. We employ the $f(p) = 1$ approximation. 
The quark and the antiquark are located at $R/2=5\mbox{ fm}$ and $-R/2 = -5\mbox{ fm}$ respectively. The renormalization scale $\mu=1.1\mbox{ GeV}$ is calculated from [Eq. (\[bc\])]{} using $d(\mu) = 3.5$ from Ref. [@as1]. The dashed line is the contribution from the two self energies, the dash-dotted line represents mutual interactions and the solid line is the total.](Fig1a.eps){width="3.in"} In Fig. 1 we show the Coulomb energy density as a function of position on the $Q{\bar Q}$ axis, $\x = x\hat{\R}$, for $R=|\R|=10\mbox{ fm}$. The small oscillations come from the sharp-cutoff introduced by the $\theta$-functions in [Eq. (\[f\])]{} which produces the Bessel’s functions in [Eq. (\[elnof\])]{}. For a smooth cutoff, [*e.g.*]{} with $\theta(\mu - p) \to \exp(-p/\mu)$ in [Eq. (\[elnof\])]{} one should replace $1 - j_0(\mu|\R/2-\x|)$ by $1 - \arctan(\mu|\R/2-\x|)/\mu|\R/2-\x|$. The cut-off is also responsible for the rapid variations near the quark positions, $\x = \pm R/2$. We note that for large separations between the quarks, $R >> 1/\mu$ and $x << R$, the Coulomb energy density behaves as expected from dimensional analysis, $$\langle \E^2(\x,R\mu\to \infty) \rangle \to {{32 b_C}\over {\pi^3R^2}}. \label{short}$$ which is consistent with linear confinement, [*i.e.*]{} if $\langle \E^2(\x,R\mu\to \infty) \rangle$ is integrated over $\x$ in the region $|\x|<R$ on obtains $V_C(R) \propto R$. At large distances $x >> R >> 1/\mu$ we obtain $$\langle \E^2(|\x|/R \to \infty,R\mu\to \infty) \rangle \to {{2 b_C R^2}\over {\pi^3\x^4}}. \label{inf}$$ If there were a finite correlation length one would expect $\langle \E^2(|\x|/R \to \infty,R\mu\to \infty)\rangle$ to fall-off exponentially with $|\x|$ [@DDS] and not as a power-law. The power-law behavior obtained in Eq. (\[inf\]) is again related to the difference between the $|Q{\bar Q}, R\rangle$ state used here, which is built by adding quark sources to the vacuum and the true ground state of the $Q{\bar Q}$ system as discussed in Sec. IIA. In other words the profile of the chromo-electric field distribution for such a state is not expected to agree with the profile of the flux-tube or action density. To illustrate this difference, in Fig. 2 we plot the energy density as a function of the magnitude of the distance transverse to the $Q\bar Q$ axis, $x_\perp = |\x_\perp|$, $\R\cdot \x = \R\cdot \x_\perp = 0$. ![\[fig1b\] $R^2 \langle \E^2(x) \rangle$ in units of $2b_C/\pi^3(\hbar c)^2 $ as a function of the distance $x$ transverse to the $Q{\bar Q}$ axis. The units and the setting are as in Fig. 1.](Fig1b.eps){width="3.in"} Finally, in Fig. 3, we show the contour plot of the energy density as a function of the position in the $xz$ plane with quark and antiquark on the $z$ axis at $R/2$ and $-R/2$ respectively. ![\[fig2\] $R^2 \langle \E^2(x) \rangle$ as a function of position in the $xz$ plane. The units and the same setting as in Fig. 1. ](Fig2.eps){width="3.in"} It is clear from Figs. 2 and 3 that a flux tube like structure emerges and from [Eq. (\[short\])]{} that it has the correct scaling as a function of the $Q{\bar Q}$ separation but, as discussed above it does not have a finite correlation length (large $x$ behavior). The field distribution for the $QQQ$ system in the $f_L(p) =1 $ approximation is equal to the sum of three terms each representing a contribution from a $QQ$ pair. 
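Before turning to the three-quark case, the asymptotic statements of Eqs. (\[short\]) and (\[inf\]) can be checked directly with a short numerical sketch of [Eq. (\[elnof\])]{}. The unit conversion via $\hbar c = 0.1973$ GeV fm and the evaluation points are our choices; the values of $b_C$ and $\mu$ are those used in Fig. 1.

```python
import numpy as np

# Minimal sketch of the profile of Eq. (elnof) in the f(p) = 1 approximation.
# b_C = 0.6 GeV^2 and mu = 1.1 GeV are the values quoted above; mu is converted to fm^-1
# so that positions can be given in fm (our unit choice).
hbarc = 0.1973
b_C, mu = 0.6, 1.1 / hbarc

def j0(x):
    """Spherical Bessel function j0(x) = sin(x)/x."""
    return np.sinc(x / np.pi)

def bracket_term(v):
    """One term of the bracket in Eq. (elnof): v (1 - j0(mu |v|)) / |v|^2."""
    d = np.linalg.norm(v)
    return v / d**2 * (1.0 - j0(mu * d))

def E2(x, R):
    """<E^2(x, R)> for the quark at +R/2 and the antiquark at -R/2 on the z axis (fm)."""
    half_R = np.array([0.0, 0.0, R / 2.0])
    vec = bracket_term(half_R - x) + bracket_term(half_R + x)
    return 2.0 * b_C / np.pi**3 * vec.dot(vec)

R = 10.0  # fm, the separation used in Fig. 1
print(E2(np.array([0.0, 0.0, 0.0]), R), 32 * b_C / (np.pi**3 * R**2))              # plateau, Eq. (short)
print(E2(np.array([0.0, 0.0, 200.0]), R), 2 * b_C * R**2 / (np.pi**3 * 200.0**4))  # tail, Eq. (inf)
```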
We place each of the three quarks in a corner of an equilateral triangle, $\z_i$, $i=1,\cdots3$ $$\begin{gathered} \langle \E^2(\x,\R_i) \rangle = \frac{C_F}{32\pi^2} \bigl[ (\D_1 - \D_2)^2 \\ + (\D_1 - \D_3)^2 + (\D_2 - \D_3)^2 \bigr],\end{gathered}$$ where $$\D_i = {{ \z_j - \x + \r/2 } \over { |\z_j - \x + \r/2| }} d'_L(\z_i - \x - \r/2).$$ The contour plot of of energy density in this case is shown in Fig. 4. Even though the field originates from the two-particle correlations the net field seems to form into a “Y”-shape structure. This structure has also recently been seen to emerge in Euclidean lattice simulations. ![\[fig3\] $R^2 \langle \E^2(x) \rangle$ as a function of position in the $xz$ plane. The units and the same setting as in Fig. 1. The upper panel show the total field distribution and the lower the distribution from mutual interaction (no self-energies) only. ](Fig3.eps){width="2.5in"} Finally, to study the effects of $f_L(p)$, in Fig. 5 we show the predictions for the $Q{\bar Q}$ field distribution given by [Eq. (\[vc\])]{} where we use $d(p)$ and $f(p)$ in the form given by [Eq. (\[df\])]{} with $\alpha = 1/2$ and $\beta=1$ and normalized such that $V(R) \to bR$ at large distances. Furthermore, to remove the oscillations introduced by the momentum space cutoff, we now cut the small $x$ region in coordinate space, by i) extending the upper limits of integration in Eqs. (\[dpl\]) and (\[fl\]) to infinity and ii) cutting off the position space functions at short distances, $$d'_L(r) = {2\over {\pi\alpha}} \sin(\pi\alpha/2) \Gamma(2 -\alpha) \theta(r\mu - 1) { {\mu^2} \over {(\mu r)^{2-\alpha}}}$$ $$f_L(r) = {1\over {2\pi^2}} \sin(\pi\beta/2) \Gamma(2-\beta) \theta(r \mu - 1) { {\mu^3} \over {(\mu r)^{3-\beta}}}.$$ Comparing Fig. 3 and Fig. 5 we observe a narrowing of the flux tube. This is to be expected as the action of $f(p)$ is to introduce additional gluonic correlations. That said, there is no major qualitative change in the field distribution. ![\[fig4\] $R^2 \langle \E^2(x) \rangle$ for $Q{\bar Q}$ from [Eq. (\[el\])]{} with $\alpha = 1/2$ and $\beta=1$. The units and the same setting as in Fig. 1, except that the contribution from $f(p)$ has been included.](Fig4.eps){width="2.5in"} Summary ======= We have calculated the distribution of the longitudinal chromo-electric field in the presence of static $Q{\bar Q}$ and $QQQ$ sources using a variational model for the ground state wave functional. Despite this wave functional having no string-like correlations a flux tube like picture does emerge. In particular the on-axis energy density of the $Q{\bar Q}$ system behaves as $b_c/R^2$ for large inter-quark separation, $R$ and the field falls off like $b_c R^2/x^4$ at large distances from the center of mass of the $Q{\bar Q}$ system, $x$. This is weaker than in the Abelian case ($\sim R^2/x^6$) and implies that moments of the average transverse spread of the tube, defined as proportional to $\langle |x_\perp|^n \E^2(z,x_\perp) \rangle$, are finite for $n<2$ only. Thus there is no finite correlation length for the longitudinal component of the chromo-electric field, as expected for the state which does not take into account screening of the Coulomb line by the transverse gluons (flux tube). This also leads to large Van der Waals forces, which is bothersome, but it is consistent with the scenario of “no confinement without Coulomb confinement” of Zwanziger. The Coulomb potential leads to a variational (stronger) upper bound to the true confining interaction. 
Similar behavior at large distances is also true for the three quark sources, except that here we find the emergence of the “Y”-shape junction. This is consistent with lattice simulations, but is remarkable in our case as it arises from two-body forces. It will be interesting to examine field distributions which include transverse field excitations. In that case the only lattice results available are for the potential, not for the field distributions. Finally we note that, since the mean field calculation provides a variational upper bound, the long range behavior of the field distribution falls-off more slowly than expected for the Van der Waals force. Certainly as the complete string develops this is expected to disappear and it would be interesting to build a string-like model for the ansatz ground state to verify this assertion. The authors wish to thank J. Greensite, H. Reinhardt, Y. Simonov, F. Steffen, H. Suganuma and D. Zwanziger for helpful feedback. This work was supported in part by the US Department of Energy under contract DE-FG0287ER40365. The numerical computations were performed on the AVIDD Linux Clusters at Indiana University funded in part by the National Science Foundation under grant CDA-9601632. [99]{} N. H. Christ and T. D. Lee, Phys. Rev. D [**22**]{}, 939 (1980) \[Phys. Scripta [**23**]{}, 970 (1981)\]. D. Zwanziger, Nucl. Phys. B [**485**]{}, 185 (1997) \[arXiv:hep-th/9603203\]. A. Cucchieri and D. Zwanziger, Phys. Rev. Lett.  [**78**]{}, 3814 (1997) \[arXiv:hep-th/9607224\]. A. P. Szczepaniak and E. S. Swanson, Phys. Rev. D [**65**]{}, 025012 (2002) \[arXiv:hep-ph/0107078\]. A. P. Szczepaniak and E. S. Swanson, Phys. Rev. D [**62**]{}, 094027 (2000) \[arXiv:hep-ph/0005083\]. A. P. Szczepaniak and E. S. Swanson, Phys. Rev. D [**55**]{}, 1578 (1997) \[arXiv:hep-ph/9609525\]. J. Greensite, S. Olejnik and D. Zwanziger, arXiv:hep-lat/0401003. L. Del Debbio, A. Di Giacomo and Y. A. Simonov, Phys. Lett. B [**332**]{}, 111 (1994) \[arXiv:hep-lat/9403016\]. A. P. Szczepaniak, arXiv:hep-ph/0306030. C. Feuchter and H. Reinhardt, arXiv:hep-th/0402106. A. R. Swift, Phys. Rev. D [**38**]{}, 668 (1988). T. T. Takahashi, H. Matsufuru, Y. Nemoto and H. Suganuma, Phys. Rev. Lett.  [**86**]{}, 18 (2001) \[arXiv:hep-lat/0006005\]; Phys. Rev. D [**65**]{}, 114509 (2002) \[arXiv:hep-lat/0204011\]; T. T. Takahashi and H. Suganuma, Phys. Rev. Lett.  [**90**]{}, 182001 (2003). V.G. Bornyakov [*et al.*]{} \[DIK Collaboration\], arXiv:hep-lat/0401026. C. Alexandrou, P. De Forcrand and A. Tsapalis, Phys. Rev. D [**65**]{}, 054503 (2002) \[arXiv:hep-lat/0107006\]. A. I. Shoshi, F. D. Steffen, H. G. Dosch and H. J. Pirner, Phys. Rev. D [**68**]{}, 074004 (2003) \[arXiv:hep-ph/0211287\]. D. S. Kuzmenko and Y. A. Simonov, Phys. Atom. Nucl.  [**66**]{}, 950 (2003) \[Yad. Fiz.  [**66**]{}, 983 (2003)\] \[arXiv:hep-ph/0202277\]. J. M. Cornwall, Phys. Rev. D [**69**]{}, 065013 (2004) \[arXiv:hep-th/0305101\]. D. Zwanziger, Phys. Rev. Lett.  [**90**]{}, 102001 (2003) \[arXiv:hep-lat/0209105\]. D. Zwanziger, arXiv:hep-ph/0312254. J. Greensite and S. Olejnik, Phys. Rev. D [**67**]{}, 094503 (2003) \[arXiv:hep-lat/0302018\].
1
--- abstract: | This paper describes a class of sequences that are in many ways similar to Fibonacci sequences: given $n$, sum the previous two terms and divide them by the largest possible power of $n$. The behavior of such sequences depends on $n$. We analyze these sequences for small $n$: 2, 3, 4, and 5. Surprisingly these behaviors are very different. We also talk about any $n$. Many statements about these sequences are difficult or impossible to prove, but they can be supported by probabilistic arguments, we have plenty of those in this paper. We also introduce ten new sequences. Most of the new sequences are also related to Fibonacci numbers proper, not just free Fibonacci numbers. author: - | Brandon Avila\ MIT - | Tanya Khovanova\ MIT title: Free Fibonacci Sequences --- Introduction {#sec:intro} ============ John Horton Conway likes playing with the Fibonacci sequence. Instead of summing the two previous terms, he sums them up and then adds a twist: some additional operation. John Conway discussed these sequences with the second author and this is how we got interested in them. The second author already wrote about one class of such sequences called subprime Fibonacci sequences jointly with Richard Guy and Julian Salazar [@GKS]. Here we discuss another variation called $n$-free Fibonacci sequences. An $n$-free Fibonacci sequence starts with any two integers: $a_1$ and $a_2$. After that it is defined by the recurrence $a_k = (a_{k-1} +a_{k-2})/n^i$, where $n^i$ is the largest power of $n$ that is a factor of $a_{k-1} +a_{k-2}$. It appears that many other people like twisting Fibonacci sequences. After we started working on this paper and made our calculations we checked, as everyone should, the On-Line Encyclopedia of Integer Sequences (OEIS [@OEIS]) and discovered that some $n$-free Fibonacci sequences were already submitted by three other people. Surprisingly, the first sequence submitted was the sequence of 7-free Fibonacci numbers (A078414) entered by Yasutoshi Kohmoto in Dec 2002. After that the sequence of 5-free Fibonacci numbers (A214684) was submitted by John W. Layman in Jul 2012. It followed by the sequence of 4-free Fibonacci numbers (A224382) submitted by Vladimir Shevelev in Apr 2013. As the reader will see very soon 2-free and 3-free Fibonacci numbers do not constitute new sequences. We filled the gap and submitted 6-free Fibonacci numbers (A232666) in Nov 2013. In Section \[sec:definitions\] we introduce useful facts about Fibonacci numbers. In Section \[sec:2free\] we show that all 2-free sequences end in a cycle of length 1. The 3-free Fibonacci sequences are much more complicated and we study them in Section \[sec:3free\]. All our computational experiments ended in a cycle of length 3. On the other hand, we show that 3-free sequences may contain arbitrary long increasing substrings. We prove this in Section \[sec:customized\]. Nevertheless, we give a probabilistic argument that a 3-free sequence should end in a cycle in Section \[sec:3free\]. The 4-free Fibonacci sequences are vastly different from 2-free and 3-free sequences (Section \[sec:4free\]). We did not find a sequence that ends in a cycle: all of them grow in our experiments. The proof that all of them grow seems intractable, but we supply a probabilistic argument that this is the case. Yet 5-free sequence bring something new (see Section \[sec:5free\]). They contain sequences that are never divided by 5 and provably grow indefinitely. At the same time 5-free sequences contain cycles too. 
We continue with Section \[sec:division-free\] where we find other numbers $n$ that provide examples of sequences that never need to be divided by $n$. Now we wonder where the cycles disappeared to and discuss their potential properties in Section \[sec:cycles\]. We finish with a discussion of our computational results. Section \[sec:growthdivfree\] explains why the average growth for some $n$-free sequences is close to the golden ratio and Section \[sec:growthomni\] explains the growth behavior for other values of $n$. Fibonacci Numbers and $n$-free Fibonacci Sequences {#sec:definitions} ================================================== Let us denote *Fibonacci numbers* by $F_k$. We assume that $F_0=0$ and $F_1=1$. The sequence is defined by the Fibonacci recurrence: $F_{n+1} = F_n + F_{n-1}$ (see A000045). We call an integer sequence $a_n$ *Fibonacci-like* if it satisfies the Fibonacci recurrence: $a_k = a_{k-1} + a_{k-2}$. A Fibonacci-like sequence is similar to the Fibonacci sequence, except that it starts with any two integers. The second most famous Fibonacci-like sequence is the sequence of *Lucas numbers* $L_i$ that starts with $L_0=2$ and $L_1=1$: 2, 1, 3, 4, 7, 11, $\ldots$ (see A000032). An *$n$-free Fibonacci* sequence starts with any two integers $a_1$ and $a_2$, and is defined by the recurrence $a_k = (a_{k-1} + a_{k-2})/n^i$, where $n^i$ is the largest power of $n$ that is a factor of $a_{k-1} + a_{k-2}$. To continue the tradition, we call the numbers in the $n$-free Fibonacci sequence that starts with $a_0=0$ and $a_1=1$ *$n$-free Fibonacci numbers*. In the future we will consider only sequences starting with two non-negative integers. It is not that we do not care about other starting pairs, but positive sequences cover all essential cases. Indeed, if we start with two negative numbers, we can multiply the sequence by $-1$ and get an all-positive sequence. If we start with numbers of different signs, the sequence eventually becomes a sequence of terms of the same sign. If we start with two zeros, we get an all-zero sequence. So we will consider only sequences that do not have two zeros at the beginning. Note that a non-negative sequence can have a zero only in one of the two starting positions, never later. The $n$-free Fibonacci sequence coincides with the Fibonacci-like sequence with the same beginning until the first occurrence of a multiple of $n$ in the Fibonacci-like sequence. Given a positive integer $m > 1$, the smallest positive index $k$ for which $m$ divides the $k$-th Fibonacci number $F_k$ is called the *entry point* of $m$ and is denoted by $Z(m)$ (see sequence A001177 of Fibonacci entry points). For example, $Z(10) = 15$ and the 10-free Fibonacci numbers coincide with the Fibonacci numbers for indices $< 15$. Now that all the preparation is done, let us take a closer look at the simplest $n$-free Fibonacci sequences: 2-free Fibonacci sequences. 2-free Fibonacci Sequences {#sec:2free} ========================== Consider some examples. The sequence that starts with 0, 1 continues as 1, 1, 1, .... The only two 2-free Fibonacci numbers are 0 and 1. The sequence eventually stabilizes, or in other words, turns into a cycle of length 1. Let us look at other starting points. The sequence that starts as 1, 2 continues as 3, 5, 1, 3, 1, 1, and stabilizes at 1. This sequence turns into the same cycle. The sequence that starts as 100, 220 continues as 5, 225, 115, 85, 25, 55, 5, 15, 5, 5, 5, and stabilizes at 5. It turns into a different cycle, but the length of the cycle is again equal to 1.
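The defining recurrence translates into a few lines of code. The following sketch (function names are ours) reproduces the examples above and the entry-point remark for $n=10$.

```python
def n_free_step(a, b, n):
    """Sum the two previous terms and strip the largest power of n dividing the sum."""
    s = a + b
    while s != 0 and s % n == 0:
        s //= n
    return s

def n_free_sequence(a1, a2, n, length):
    seq = [a1, a2]
    while len(seq) < length:
        seq.append(n_free_step(seq[-2], seq[-1], n))
    return seq

print(n_free_sequence(0, 1, 2, 8))       # [0, 1, 1, 1, 1, 1, 1, 1]: 2-free Fibonacci numbers
print(n_free_sequence(1, 2, 2, 9))       # [1, 2, 3, 5, 1, 3, 1, 1, 1]
print(n_free_sequence(100, 220, 2, 13))  # [100, 220, 5, 225, 115, 85, 25, 55, 5, 15, 5, 5, 5]
print(n_free_sequence(0, 1, 10, 17))     # agrees with the Fibonacci numbers up to the entry point Z(10) = 15
```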
Every 2-free Fibonacci sequence eventually turns into a cycle of length 1: $x$, $x$, $x$, $\ldots$, for an odd $x$. It is clear that after the second term all elements of the sequence are odd. Consider the maximum of the two consecutive terms of the sequence: $m_k = \max\{a_k,a_{k-1}\}$. If two consecutive terms $a_{k-1}$, $a_k$ of the sequence are odd and not equal to each other, then the maximum decreases: $m_{k+1} < m_k$. Hence, the sequence has to stabilize. It follows from the proof that for a sequence starting with $a_1$, $a_2$ the number of steps until the cycle is reached is not more than $\max\{a_1,a_2\}$. On the other hand, the subsequence before the cycle can be arbitrary long. It follows from the following lemma. For any two odd numbers $a_1$, $a_2$, a preceding odd number $a_0$ can be found so that $a_0$, $a_1$, and $a_2$ form a 2-free Fibonacci sequence. Pick a positive integer $k$ so that $2^k a_2 > a_1$ and set $a_0$ to be equal to $2^k a_2 - a_1$. There are many ways to build predecessors to a given 2-free Fibonacci sequence. The minimal such sequence is built when we choose the smallest power of 2 that still allows us to have positive members in the sequence. We explicitly build such an example starting with $a_1=3$, and $a_2=1$. Reversing the indexing direction we get: 1, 3, 1, 5, 3, 7, 5, 9, 1, $\ldots$, which is now sequence A233526. Next, we want to continue with 3-free Fibonacci sequences. Are they as simple as 2-free sequences? 3-free Fibonacci Sequences {#sec:3free} ========================== Let us look at 3-free Fibonacci sequences. Consider an example of 3-free Fibonacci numbers: 0, 1, 1, 2, 1, 1, 2, and so on. The sequence turns into a cycle of length 3. There are only 3 different 3-free Fibonacci numbers. We can multiply a 3-free sequence by a number not divisible by 3 to get another 3-free sequence. Thus, in general we can get cycles of the form $k$, $k$, $2k$, where $k$ is not divisible by 3. Any cycle of length 3 in a 3-free Fibonacci sequence is of the form $k$, $k$, $2k$. Consider the length 3 cycle $a$, $b$, $c$. From the definition of 3-free Fibonacci sequences, we know the following relations: $$\begin{aligned} a+b=3^xc \nonumber \\ b+c=3^ya \label{eq:b+c}\\ c+a=3^zb. \label{eq:c+a}\end{aligned}$$ Furthermore, no term in the sequence is divisible by 3. Then, by the pigeon-hole principle, at least two of the terms $a,b,c$ must be congruent modulo 3. Without loss of generality, take $a \equiv b$ (mod 3). Then $a+b \not\equiv 0$ (mod 3), so we have that $x=0$ and $a+b=c$. Now substitute for $c$ and add equations (\[eq:b+c\]) and (\[eq:c+a\]) to get that $a+b=3^{y-1}a+3^{z-1}b$. Since $3\nmid a+b$, either $y=1$ or $z=1$. If $y=1$, then $b=3^{z-1}b$, hence $z=1$. Similarly, $z=1$ implies $y=1$. In either case, $y=z=1$. Then we may solve for our initial variables to show that $a=b$ and $c=a+b$. Restated, $a=k$, $b=k$, and $c=2k$. The number $k$ in the cycle is the greatest common divisor of the sequence. Because of the Fibonacci additive property, if any number divides two or more elements of the sequence (excluding the first two, which may be divisible by 3), it must divide all numbers in the sequence. Thus, $k$ must divide every element. The least of these elements, then, can only be $k$ itself, making it the greatest common divisor. Will it be the case that all 3-free Fibonacci sequences end in cycles of length 3? 
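The backwards construction in the proof above is equally easy to automate. The sketch below (our naming) always picks the smallest admissible power of 2 and, starting from the pair 3, 1, recovers the beginning of A233526.

```python
def predecessor(a1, a2):
    """Smallest positive odd a0 such that a0, a1, a2 is a 2-free Fibonacci triple: a0 = 2^k a2 - a1."""
    k = 1
    while 2**k * a2 <= a1:
        k += 1
    return 2**k * a2 - a1

def extend_backwards(a1, a2, steps):
    seq = [a1, a2]
    for _ in range(steps):
        seq.insert(0, predecessor(seq[0], seq[1]))
    return seq

# Predecessors of the pair 3, 1, printed in forward order; reversing the list gives
# the beginning of A233526 quoted above: 1, 3, 1, 5, 3, 7, 5, 9, 1, ...
print(extend_backwards(3, 1, 7))  # [1, 9, 5, 7, 3, 5, 1, 3, 1]
```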
We will build suspense by delaying this discussion, meanwhile we have a lemma about the length of any potential cycle: \[thm:parity\] Any cycle in a 3-free Fibonacci sequence is of length $3n$ for some integer $n$. Begin with any 3-free Fibonacci sequence, and divide out the highest power of 2 in the GCD of all its elements. The resulting sequence is a 3-free Fibonacci sequence with at least one odd element. It is clear that dividing or multiplying any number by 3 does not change its parity. Thus, any sequence, regardless of how many factors of 3 are divided out from each term, will have the same underlying structure in its parity. Since we have reduced the sequence to the point where there exists at least one odd number, we know that the sequence reduced modulo 2 must be congruent to 1, 1, 0, 1, 1, 0. Only cycles of length $3n$ are permitted in this structure, and therefore permitted in 3-free sequences. We checked all the starting pairs of numbers from 1 to 1000, and all these sequences end in a 3-cycle. The 3-free Fibonacci sequences are in many ways similar to the notorious Collatz sequences [@La10], for which it is still not known if every sequence eventually cycles. We do not expect that it is easy to prove or disprove that ever 3-free Fibonacci sequence ends in a cycle. On the other hand, it is possible to make probabilistic arguments to support different claims about free Fibonacci sequences. Here is the base of the argument. Suppose we encounter a number in a sequence that is divisible by 3, then after removing all powers of 3, let us assume that the resulting number has the remainder 1 modulo 3 with probability $1/2$. If the sum of two consecutive terms in a sequence is large and divisible by 3, then we also assume that this number is divisible by $3^k$ with probability $1/3^{k-1}$. How often do we divide by 3 in a 3-free Fibonacci sequence? The following lemma is obvious. In a 3-free Fibonacci sequence the division happens for every term or for every other term. In other words, we can not have a subsequence of length 3 such that each term is the sum of the previous two terms. We want to study two polar cases first: stretches where we divide every term and stretches were we divide every other term. We will call a subsequence of a 3-free Fibonacci sequence where we divide at each step a *division-rich subsequence*. Conversely, we will call a subsequence of a 3-free Fibonacci sequence where we divide at every other step a *division-poor subsequence*. \[thm:rich\] There exist arbitrary long division-rich subsequences. The proof is done by explicit construction. Consider the definition of a division-rich subsequence. In this case, we divide by a power of 3 after every addition step, so that $3^{i_n} \cdot a_n = a_{n-1}+a_{n-2}$ for $i_n>0$. Equivalently, $a_{n-2}=3^{i_n} \cdot a_n - a_{n-1}$. Thus, by choosing $a_n$ and $a_{n-1}$, and selecting a sequence $\{i_m\}$ that satisfies this relationship, the sequence can easily be constructed backwards. Our only requirements are that every term of the sequence is positive, and every step contains a division, so it will suffice to construct a sequence $\{i_m\}$ such that $3^{i_n} \cdot a_n - a_{n-1}>0$ and $i_n>0$ for all $n$. As an example, let us begin with $a_n=1$, $a_{n-1}=1$. At each step let us choose the smallest possible power for $i_m$. Then $i_n = 1$ satisfies our inequality for the first step, and $a_{n-2}=2$. Continuing, $i_{n-1}=1$ satisfies the inequality for the next step, yielding $a_{n-3}=1$. 
Next, $i_{n-2}=1$ and $a_{n-4}=5$, followed by $i_{n-3}=2$ and $a_{n-5}=4$. Reading the sequence backwards we get: 1, 1, 2, 1, 5, 4, 11, 1, 32, 49, $\ldots$. This is now sequence A233525 in the OEIS [@OEIS]. When read forward, this sequence is a 3-free sequence containing eight divisions in a row. The process can be continued to arbitrarily many terms for arbitrarily many consecutive divisions. The growth bound for division-rich subsequences is estimated by the following lemma. For a division-rich subsequence: $\max\{a_{2k+1},a_{2k+2}\} \leq 2\max\{a_{2k-1},a_{2k}\}/3$. We can estimate that $a_{2k+1} \leq (a_{2k-1}+a_{2k})/3 \leq 2\max\{a_{2k-1},a_{2k}\}/3$ and $a_{2k+2} \leq (a_{2k}+a_{2k+1})/3 \leq 5\max\{a_{2k-1},a_{2k}\}/9 < 2\max\{a_{2k-1},a_{2k}\}/3$. So we can expect that with probability $1/2^n$ there would be a subsequence of length $2n$ in which the maximum of the next two terms does not exceed $2/3$ of the maximum of the previous two terms. Clearly the sequence cannot go down forever. We need to start with very large numbers to get a long stretch of a division-rich subsequence. \[thm:poor\] There exist arbitrarily long division-poor subsequences. This proof is more complicated than the previous one, so we will do it together with a proof of a more powerful theorem in the next Section \[sec:customized\]. If we index a division-poor subsequence in such a way that division happens on the odd terms, then all the even terms form an increasing subsequence: $a_{2k} > a_{2k-2}$. As every even term is the sum of the previous two terms we get: $a_{2k} = a_{2k-1} + a_{2k-2} > a_{2k-2}$. That means that both division-rich and division-poor subsequences cannot form a cycle. In particular, it means we can have sequences of arbitrary length without entering a cycle. We showed that there exist 3-free Fibonacci sequences that have long increasing subsequences. Still, we want to present a probabilistic argument that any 3-free Fibonacci sequence ends in a cycle. According to our probabilistic assumptions we divide either every term or every other term with the same probability $1/2$. So on average we divide every $1.5$ steps. But by how much do we divide on average? \[thm:average3\] On average we divide by $3^{3/2}$. Now we want to use the fact that when we divide by a power of 3 we on average divide by more than 3. If the number is large, we divide by 3 with probability $2/3$, by 9 with probability $2/9$, and so on. So the average division is by $$3^{2/3} \cdot 9^{2/9} \cdot 27^{2/27} \cdot \ldots.$$ We can say it differently. We divide by 3 with probability 1, additionally we divide by 3 more with probability $1/3$, and by 3 more with probability $1/9$, so the result is 3 to the power $$1+1/3+1/9+1/27 + 1/81 + \ldots = 3/2.$$ Since the above sum is equal to $3/2$, every time we divide, we on average divide by $3^{3/2}$. Notice that the average number we divide by is approximately 5.2, which is more than 5. Let us build a probabilistic sequence that captures some of the behavior of 3-free Fibonacci sequences. We start with two numbers $a_1$ and $a_2$ and flip a coin. If the coin turns heads we add the next number $a_3 = (a_1+a_2)/5$ to the sequence. If the coin turns tails we add two more terms: $a_3 = a_1+a_2$ and $a_4 = (a_2+a_3)/5$ to the sequence. Then repeat. We expect that this sequence on average grows faster than 3-free Fibonacci sequences, because we divide by a smaller number. Now we want to bound the maximum of the last two terms of this probabilistic sequence after two coin flips.
Let $M = \max\{a_1,a_2\}$. We have the following cases:

- After two heads, the sequence becomes: $a_1$, $a_2$, $(a_1+a_2)/5$, $(a_1+6a_2)/25$. The last two terms do not exceed $2M/5$.

- After head, tail, the sequence becomes: $a_1$, $a_2$, $(a_1+a_2)/5$, $(a_1+6a_2)/5$, $(2a_1+7a_2)/25$. The last two terms do not exceed $7M/5$.

- After tail, head, the sequence becomes: $a_1$, $a_2$, $a_1+a_2$, $(a_1+2a_2)/5$, $(6a_1+7a_2)/25$. The last two terms do not exceed $3M/5$.

- After two tails the sequence becomes: $a_1$, $a_2$, $a_1+a_2$, $(a_1+2a_2)/5$, $(6a_1+7a_2)/5$, $(7a_1+9a_2)/25$. The last two terms do not exceed $13M/5$.

As each event happens with the same probability $1/4$, the average growth factor per two coin flips is $(2\cdot 3 \cdot 7 \cdot 13)^{1/4}/5$, which is below 0.97. So the overall trend for this sequence is to decrease.

Based on our computational experiments and probabilistic discussions above we conjecture:

Any 3-free Fibonacci sequence ends in a cycle.

So 2-free Fibonacci sequences provably end in a cycle, 3-free sequences conjecturally end in a cycle. Will 4-free Fibonacci sequences end in cycles too? Before discussing 4-free Fibonacci sequences, we want to make a detour and prove the promised result that arbitrarily long division-poor sequences exist (see Lemma \[thm:poor\]) as a corollary to a much stronger and more general theorem.

Customized-division subsequences {#sec:customized}
================================

We promised to give a proof that there exist arbitrarily long division-poor sequences that are 3-free Fibonacci sequences. Now we want to prove a stronger statement. We want to allow any $n$ and show that we can build a customized $n$-free Fibonacci sequence that will have a division by a prescribed power of $n$ with the prescribed remainder.

Let us associate to any $n$-free Fibonacci sequence the list of numbers by which we divide at every step. We call this list a *signature*. For example, the 3-free sequence 5, 4, 1, 5, 2, 7, 1 has signature \*, \*, 9, 1, 3, 1, 9. We placed stars at the first two places, because we do not know the preceding members of the sequence and, hence, do not know the powers.

Given an $n$-free Fibonacci sequence with a given signature and a given set of remainders, we can build many other sequences with the same signature and the same set of remainders.

\[thm:adjustement\] Suppose an $n$-free Fibonacci sequence $s_1$ starts with $a_1$ and $a_2$ that are not divisible by $n$ and the product of the numbers that we divide by while calculating the first $k$ terms is strictly less than $n^m$. Consider an $n$-free Fibonacci sequence $s_2$ that starts with $b_1=a_1+ d_1 n^m$ and $b_2=a_2+ d_2 n^m$ for any integers $d_1$ and $d_2$. The first $k$ terms of both sequences have the same signature and the same set of remainders modulo $n$.

The initial terms of both sequences have the same remainders modulo $n^m$. Hence their sums have the same remainders. The first time we need to divide, we divide by the same power of $n$, say $n^{m_1}$, and the results will have the same remainders modulo $n^{m-m_1}$. The next time we divide, the results will have the same remainders modulo $n^{m-m_1-m_2}$. And so on. When we complete the subsequence, all the remainders will be the same modulo $n$.

This lemma allows us to find a positive sequence with a given signature if we already found a sequence that might not be all positive.
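To make this bookkeeping concrete, here is a small sketch of the signature computation (our own illustration; the helper names `strip` and `signature` are ours, not the authors'):

```python
def strip(n, s):
    """Divide all factors of n out of s; return (reduced term, power of n removed)."""
    d = 1
    while s != 0 and s % n == 0:
        s //= n
        d *= n
    return s, d

def signature(a, b, n, length):
    """Terms, signature (the divisor used at each step) and remainders modulo n
    of the n-free Fibonacci sequence starting with a, b."""
    terms, sig = [a, b], ['*', '*']
    for _ in range(length - 2):
        t, d = strip(n, terms[-1] + terms[-2])
        terms.append(t)
        sig.append(d)
    return terms, sig, [t % n for t in terms]

# The example from the text: the 3-free sequence 5, 4, 1, 5, 2, 7, 1
terms, sig, rem = signature(5, 4, 3, 7)
print(terms)   # [5, 4, 1, 5, 2, 7, 1]
print(sig)     # ['*', '*', 9, 1, 3, 1, 9]
```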
If there exists some finite $n$-free Fibonacci sequence with a given signature and a set of remainders, then there exists a sequence with the same signature and remainders such that every term is positive.

Adjust the initial terms according to Lemma \[thm:adjustement\].

We say that a finite sequence of remainders $r_i$ modulo $n$ and a finite signature of the same length *match* each other, if the signature has a positive power of $n$ in place $k$ if and only if $n$ divides $r_{k-2} + r_{k-1}$. We call a sequence of remainders modulo $n$ *legal* if $r_k \equiv r_{k-2} + r_{k-1} \pmod n$ unless $n$ divides $r_{k-2} + r_{k-1}$. Non-legal sequences cannot be sequences of remainders of an $n$-free Fibonacci sequence.

Given a legal finite sequence of remainders and a matching signature, there exists an $n$-free Fibonacci sequence with the given sequence of remainders and signature.

By Lemma \[thm:adjustement\] it is enough to find such a sequence that does not need to have positive terms. We can produce such a sequence by building it backwards, similar to what we did in Lemma \[thm:rich\].

There exist arbitrarily long division-poor 3-free Fibonacci subsequences.

Let us build an example of a division-poor 3-free sequence, where each division is by exactly 3. Begin with $a_n=1$, $a_{n-1}=1$, and build a sequence that is not necessarily all-positive. That means we will have $a_{n-2k}=3a_{n-2k+2}-a_{n-2k+1}$ and $a_{n-2k-1}=a_{n-2k+1}-a_{n-2k}$. Here are several terms: $-8$, 7, $-1$, 2, 1, 1. For a positive version we need to add $3^3$ to $-8$, as outlined in Lemma \[thm:adjustement\], since we had only two divisions by 3 and their product, 9, is less than $3^3$. The adjusted all-positive division-poor sequence is 19, 7, 26, 11, 37, 16.

4-free Fibonacci sequences {#sec:4free}
==========================

Consider the 4-free Fibonacci sequence starting with 0, 1. This sequence is A224382: 0, 1, 1, 2, 3, 5, 2, 7, 9, 1, 10, 11, 21, 2, 23, 25, $\ldots$. It seems that this sequence grows and does not cycle. In checking many other 4-free Fibonacci sequences, we still did not find any cycles. The behavior of 4-free sequences is completely different from the behavior of 3-free sequences. For 3-free sequences we expected that all of them cycle. Here it might be possible that none of them cycles. Before making any claims, let us see how these sequences behave modulo 4.

A 4-free Fibonacci sequence contains an odd number.

Suppose there exists a 4-free Fibonacci sequence containing only even numbers. Then, ignoring the initial terms, all the elements of the sequence equal 2 modulo 4, since no term past the initial ones can be divisible by 4. Therefore, we divide by a power of 4 every time, and the terms keep decreasing. This cannot last forever.

After the first occurrence of an odd number, a 4-free Fibonacci sequence cannot have two even numbers in a row.

Start with the first odd number. The steps that do not include division generate a parity pattern: odd, odd, even, odd, odd, even and so on. So there are no two even numbers in a row. That means we can get a multiple of 4 only after summing two odd numbers. We might get an even number after the division, but the next number must be odd again.

Let us analyze the 4-free Fibonacci sequences probabilistically, similar to what we did for $n=3$.

\[thm:average4\] An average division is by a factor of $4^{4/3} \approx 6.35$.

When we divide, we divide by 4 with probability 1, additionally with probability $1/4$ we divide by 4 more, and so on. So the result is 4 to the power $$1+1/4+1/16+\ldots = 4/3.$$

How often on average do we divide? Let us assume that we have already passed any stretch of all even numbers.
Each time we divide after that, the previous two numbers are odd. After the division, the remainder is 1, 2, or 3. So the following six cases describe what happens after the division:

If $a \equiv 1$ (mod 4) and $b \equiv 3$ (mod 4), then the following set of remainders might happen until the next division:

- 1;

- 2, 1, 3;

- 3, 2, 1, 3.

If $a \equiv 3$ (mod 4) and $b \equiv 1$ (mod 4), then the following set of remainders might happen until the next division:

- 3;

- 2, 3, 1;

- 1, 2, 3, 1.

\[thm:averagesteps\] The average number of steps between divisions is $8/3$.

We assume that during the division each remainder is generated with probability $1/3$, so the stretches of length 1, 3, and 4 between divisions are equally probable.

We want to build a probabilistic model that reflects some behavior of 4-free Fibonacci sequences. Let us denote by $x$ the average factor by which we divide. In this model, we simply divide by $x$ each time we need to divide. The sequence stops being an integer sequence, but we artificially assign a remainder mod 4 to every element of the sequence to see when we need to divide. We want to show that our model sequence grows with probability 1. First, we want to estimate the ratio of two consecutive numbers in the model sequence.

\[thm:ratio\] If $a$ and $b$ are two consecutive numbers in the model sequence, starting from index 3, then $b > a\frac{2+x}{(1+x)x}$ and $a > b\frac{1}{1+x}$.

Let $v$ be the element before $a$. Then $b \geq (a+v)/x > a/x$; applying the same bound one index earlier gives $xa > v$. Also $b \leq a+v < a + xa = a(1+x)$, which proves the second inequality; applying it one index earlier gives $v > a/(1+x)$. Plugging this back into $b \geq (a+v)/x$, we get $b > a(2+x)/((1+x)x)$.

Now we are ready to prove the theorem:

In our probabilistic model, a sequence grows with probability 1.

Suppose we have two consecutive numbers in the sequence $a$ and $b$ whose sum is divisible by 4. By Lemma \[thm:averagesteps\] we have 1, 3, or 4 terms until the following division with the same probability. That is, the following continuations until the next division are equally probable:

- $a$, $b$, $(a+b)/x$.

- $a$, $b$, $(a+b)/x$, $(a+(x+1)b)/x$, $(2a+(x+2)b)/x$.

- $a$, $b$, $(a+b)/x$, $(a+(x+1)b)/x$, $(2a+(x+2)b)/x$, $(3a+(2x+3)b)/x$.

If $b>a$, then the maximum of the last two terms is: $b$, $(2a+(x+2)b)/x$, and $(3a+(2x+3)b)/x$ correspondingly. Adjusting for the fact that $a > b\frac{1}{1+x}$ (see Lemma \[thm:ratio\]), the maximum of the last two terms is at least $b$, $b((x+2)+2/(x+1))/x$, and $b((2x+3)+3/(x+1))/x$ correspondingly. We want to estimate the ratio of the maximum of the last two terms to $\max\{a,b\}$. Counting probabilities we get the following lower bound for the ratio $$1^{1/3}\left(\frac{(x+2)+2/(x+1)}{x}\right)^{1/3}\left(\frac{(2x+3)+3/(x+1)}{x}\right)^{1/3} \approx 1.51023.$$

If $b \leq a$, then the maximum of the last two terms is at least $(a+b)/x$, $(2a+(x+2)b)/x$, or $(3a+(2x+3)b)/x$, respectively. Using $b > a\frac{2+x}{(1+x)x}$ from Lemma \[thm:ratio\], we get that the maximum of the last two terms is at least $a\frac{2+2x+x^2}{(1+x)x^2}$, $a\frac{3x^2+6x+4}{(1+x)x^2}$, or $a\frac{5x^2+10x+6}{(1+x)x^2}$, respectively. Counting probabilities, we get that the maximum is multiplied by at least $$\left(\frac{2+2x+x^2}{(1+x)x^2}\right)^{1/3}\left(\frac{3x^2+6x+4}{(1+x)x^2}\right)^{1/3}\left(\frac{5x^2+10x+6}{(1+x)x^2}\right)^{1/3} \approx 0.453822.$$

Notice that if we have 3 or 4 terms until the next division, then the last two terms before the division are in increasing order.
That means that the case $b \leq a$ is at most half as probable as the case $b > a$, so the average growth is at least the cube root of $1.51023^2\cdot 0.453822 \approx 1.03507$, which is greater than 1.

The result is greater than 1, which means that on average our model sequence grows. Therefore, extending the argument to 4-free sequences, we can say that 4-free sequences should not always cycle. Taking our computational experiments and our intuition into account we are comfortable with the following conjecture:

With probability 1 a 4-free Fibonacci sequence does not cycle.

So, conjecturally, 4-free Fibonacci sequences do not cycle. If $n$ grows, will it mean that $n$-free Fibonacci sequences for $n > 4$ will not cycle either?

5-free Fibonacci sequences {#sec:5free}
==========================

Let us look at the Lucas sequence modulo 5: 2, 1, 3, 4, 2, 1, $\ldots$, and see that no term is divisible by 5. Clearly, no term in the Lucas sequence will require that we factor out a power of 5, and the terms will grow indefinitely. Thus, the Lucas sequence is itself a 5-free Fibonacci sequence. This is something new. We do not need a probabilistic argument to show that there are 5-free Fibonacci sequences that do not cycle. On the other hand, it becomes quickly evident that the sequence of 5-free Fibonacci numbers 0, 1, 1, 2, 3, 1, 4, 1, 1, 2, $\ldots$ (see A214684) cycles. Some sequences cycle, and some clearly do not! But how often will we come upon a sequence that grows indefinitely?

To answer this question, let us look at the first terms of a few Fibonacci-like sequences modulo 5. Begin with $1, 1, \ldots$, to obtain the sequence

> 1, 1, 2, 3, 0,
> 3, 3, 1, 4, 0,
> 4, 4, 3, 2, 0,
> 2, 2, 4, 1, 0,

and so on. We write it like this for clarity: at the end of each line, the last term is divisible by 5. In particular, the lines above show that $Z(5)$—the entry point of 5—is 5. Furthermore, since 5 is prime, we could know beforehand that each of these lines would be the same length. We simply had to start with the line beginning $1, 1, \ldots$, and multiply each term by 2, then 3, then 4. Clearly no term in the line could become 0 after the multiplication (except, of course, 0 itself), since there are no two non-zero numbers that multiply to zero in a field. By the way, the sequence of Fibonacci entry points for primes is A001602.

There were exactly $5-1=4$ lines, and 5 elements (and therefore, 5 consecutive-element pairs) in each line, for a total of 20 pairs. But, excluding (0, 0), there are $5^2-1=24$ possible pairs! Thus, our 4 extra pairs must have gone into another cycle. This cycle could not contain any multiple of 5, and it therefore witnesses the existence of Fibonacci-like sequences with no multiples of 5, that is, of 5-free Fibonacci sequences that never cycle. There are many books and papers about Fibonacci and Lucas numbers; we mostly used [@HW; @Va89] for reference.

This argument shows what we already know: that a Fibonacci-like sequence that does not contain a multiple of 5 exists. We see that there is a strong connection between free Fibonacci sequences and proper Fibonacci sequences. Maybe we can study entry points of Fibonacci sequences to see if the story with 5 repeats for some other numbers.

Division-Free $n$-Free Fibonacci Sequences {#sec:division-free}
==========================================

Let us call an integer $n$ a *Fibonacci omni-factor* if any Fibonacci-like sequence contains a multiple of $n$. We just saw that 5 is the smallest integer that is not a Fibonacci omni-factor.
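Whether a given $n$ is an omni-factor is easy to test computationally; here is a minimal sketch (our own illustration, with a hypothetical function name), which simply checks whether every starting pair of remainders modulo $n$ eventually produces a zero:

```python
def is_omni_factor(n):
    """True iff every Fibonacci-like sequence modulo n contains a 0,
    i.e. every starting pair of remainders eventually hits a multiple of n."""
    for a in range(n):
        for b in range(n):
            x, y, seen = a, b, set()
            while (x, y) not in seen:
                seen.add((x, y))
                if x == 0 or y == 0:
                    break
                x, y = y, (x + y) % n
            else:
                return False   # the pair cycled without ever hitting 0
    return True

print([n for n in range(2, 16) if not is_omni_factor(n)])
# [5, 8, 10, 11, 12, 13, 15]: the smallest integers that are not omni-factors
```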
If a number $n$ is not a Fibonacci omni-factor, then there exists a Fibonacci-like sequence that is at the same time an $n$-free Fibonacci sequence.

Prime omni-factors can be found with the help of the following well-known lemma.

A prime $p$ is a Fibonacci omni-factor if and only if $Z(p) = p+1$.

Consider the section of the Fibonacci sequence modulo $p$ before the entry point and multiply this subsequence by any other remainder modulo $p$. A prime $p$ is not an omni-factor if there are fewer than $p^2-1$ total elements in all the lines of the Fibonacci-like sequences beginning with $k, k, \ldots$ modulo $p$. Then, as all lines are the same length as that beginning with 1, 1, $\ldots$ (that is, the start of the Fibonacci sequence proper), it will suffice to show that $(p-1)\cdot Z(p) < p^2-1$, or $Z(p) < p+1$. Since the pairs appearing in the lines, together with $(0,0)$, are all distinct, $Z(p)(p-1) + 1$ cannot be more than $p^2$. Hence we just proved a well-known fact:

For every prime $p$, $Z(p) \leq p+1$.

Examples of primes that are not omni-factors include 5, which divides $F_5$, 11, which divides $F_{10}$, and 13, which divides $F_7$. The corresponding sequence is now sequence A230359. Omni-factor primes are the primes $p$ such that any Fibonacci-like sequence contains a multiple of $p$. This sequence is A000057. By the way, it is not known whether the latter sequence is infinite [@CR].

The entry points can be defined not only for primes; see sequence A001177 of Fibonacci entry points. For a composite number $n$ the relationship between the entry point $Z(n)$ and the existence of Fibonacci-like sequences not divisible by $n$ is slightly more complicated. But it is possible to check computationally whether the sequences that start with zero and another number contain all possible pairs of remainders.

The sequence of Fibonacci omni-factors, that is, of numbers $n$ such that any Fibonacci-like sequence contains a multiple of $n$, is A064414. Correspondingly, the numbers $n$ such that there exists a Fibonacci-like sequence without multiples of $n$ form the complement of A064414. This is now sequence A230457. It starts as 5, 8, 10, 11, 12, 13, 15. The first composite number in the sequence is 8. The Lucas numbers again give a sequence that does not contain multiples of 8.

Lucas numbers provide examples for many numbers, namely for the numbers that do not divide any Lucas number. These numbers are represented by the sequence A064362. Thus, Lucas numbers provide examples for 10, 12, 13, 15 and so on in addition to 5 and 8. The smallest number for which the Lucas numbers do not provide an example is 11. For 11, we can start with 1 and 4, to get the sequence A000285: 1, 4, 5, 9, 14, 23, 37 and so on. This is a Fibonacci-like sequence that is never divisible by 11.

All non-omni-factors for which the Lucas numbers do not provide an example, that is, non-omni-factors that divide some Lucas number, are: 11, 18, 19, 22, 29, 31, 38, 41, 44, 46, 47, 54, 58, 59, 62, 71, 76, 79, 82, 94 and so on. This sequence is the intersection of the A230457 sequence with the factors of Lucas numbers, sequence A065156. It is now sequence A232658. Given that A064362 (numbers $n$ such that no Lucas number is a multiple of $n$) is the complement of A065156, the new sequence can be defined as the sequence A230457 from which the numbers from A064362 are removed.

Here we found many sequences that are Fibonacci-like and contain no multiples of some number $n$. They provide examples of infinitely growing $n$-free sequences in which the division never happens. Will we ever see more cycles?

Other Cycles {#sec:cycles}
============

For $n=2$ every sequence ends in a cycle of length 1.
For $n=3$ every sequence we checked ended with a cycle of length 3. For $n=4$ we did not find any cycles at all. So far we found an example of a 5-free Fibonacci sequence that never needs to be divided by 5. We also found a cycle: 1, 1, 2, 3, 1, 4. Are there other cycles?

We know that we can multiply a 5-free Fibonacci sequence by any number that is not divisible by 5 to get another 5-free Fibonacci sequence. Thus, we have many other cycles among 5-free Fibonacci sequences: 2, 2, 4, 6, 2, 8, and 3, 3, 6, 9, 3, 12, and so on. In general, we can multiply an $n$-free sequence by a number coprime with $n$ to get another $n$-free sequence. Also, a more subtle statement is true: if we multiply an $n$-free sequence by a number not necessarily coprime with $n$ and the result does not contain multiples of $n$, then the result of the multiplication is an $n$-free sequence.

If all the elements of an $n$-free sequence are divisible by a number $m$, we can divide the sequence by $m$ to get another $n$-free sequence. We would like to point out that $m$ does not need to be coprime with $n$. This warrants a definition. Call a cycle *primitive* if its terms are coprime. As we just explained, the following lemma is true.

Any cycle can be divided by an integer to become a primitive cycle.

All the primitive cycles we found so far contained only numbers below $n$. There is no reason why this property should hold for any cycle. For example, here is another primitive cycle of 5-free Fibonacci sequences: 4, 3, 7, 2, 9, 11, 4, 3.

Though we did not find any more cycles, in case they exist we can prove some of their properties. Take, for example, Lemma \[thm:parity\], where we used parity to prove that any 3-free Fibonacci cycle has length that is a multiple of 3. We can replace 3 by any odd number in Lemma \[thm:parity\] to get the following lemma.

The length of an $n$-free Fibonacci cycle is divisible by 3 for odd $n$.

It is not surprising that all the 5-free Fibonacci cycles that we found so far have length 6. There is no reason we should restrict ourselves to parity, that is, to remainders of Fibonacci numbers modulo 2. The Fibonacci sequence modulo $n$ is periodic. The period is called the *Pisano period* and is denoted by $\pi(n)$. The sequence of Pisano periods is A001175. Pisano periods for prime numbers are collected separately in A060305. For example, Fibonacci numbers modulo 3 form a cycle of length 8: 0, 1, 1, 2, 0, 2, 2, 1. We can generalize Lemma \[thm:parity\] to any prime number.

The length of an $n$-free Fibonacci cycle is divisible by $\pi(p)$, where $p$ is a prime factor of $n-1$.

Any cycle has the same length as a primitive cycle. Dividing by $n$ or by a power of $n$ does not change a remainder modulo $p$, as $n \equiv 1$ (mod $p$). All non-trivial Fibonacci cycles modulo $p$ are of the same length $\pi(p)$. So the length of the primitive $n$-free Fibonacci cycle has to be a multiple of $\pi(p)$.

For example, any 4-free Fibonacci cycle, if it exists, is of length $8k$ for some $k$. This is due to the fact that $4 \equiv 1$ (mod $3$) and $\pi(3) = 8$. The following theorem immediately follows.

Let $p_i$ be the prime factors of $n-1$. Then the cycles in $n$-free Fibonacci sequences are of a length divisible by $\operatorname{lcm}(\pi(p_i))$.

We studied cycles, but we actually do not expect many of them, as we expect the $n$-free Fibonacci sequences to grow faster for larger $n$. As the number $n$ grows, the multiples of $n$ are more spread apart in the Fibonacci sequence, which means that the division happens more rarely.
We think that the effect of the increase in the number by which we divide is less pronounced than the effect of the divisions being more spread apart. In the next section we look at computational results and make probabilistic arguments to show that for $n > 3$ cycles should appear very rarely.

Growth in Division-Free Sequences {#sec:growthdivfree}
=================================

We ran several experiments. In our first experiment we did the following: for each $n$ we built 10000 random $n$-free Fibonacci sequences of length 500. Namely, we picked the initial terms of each sequence as two random numbers between 1 and 1000. Then we averaged each term and found the best approximation for the exponential growth. We did this 5 times to confirm consistency of the exponents. That is, we approximated the $m$-th term of the $n$-free Fibonacci sequence as $g(n)^m$, where $g(n)$ is described by the following sequence starting from $n=4$: 1.32, 1.61, 1.42, 1.34, 1.61, 1.4, 1.61, 1.61, and so on. We did this for $n$ up to 50. Our experimental results showed that, for those $n$ for which division-free $n$-free Fibonacci sequences exist, the growth is the same: about 1.61. Can we explain this?

Let us take a closer look at the smallest such $n$: 5. Consider an arbitrary 5-free Fibonacci sequence. When we divide by a power of 5 at some point, we may cross over to a division-free sequence. If we ever get two consecutive remainders as in the Lucas numbers: 2, 1, 3, 4, 2, and so on, we will never divide by a power of 5 again. Notice that the Lucas numbers modulo 5 cycle. The cycle has length 4 and contains every non-zero remainder exactly once. That means that if the number in the sequence before a division has remainder $r$, then we cross over into a division-free sequence when the next number is congruent to exactly one out of the four possible remainders modulo 5: 1, 2, 3, or 4. Consider for example the sequence starting with 1, 6. It continues to 7, 13, 4, and from here we would never divide by 5 again.

Assume that the remainder after the division is chosen randomly with a uniform distribution. In this case, there is a 25% chance of entering the cycle with no multiples of 5, and a 75% chance of entering a sequence which will be divided by 5 again. Unless we enter into a cycle, as the number of these randomizations increases, it becomes more likely that the sequence will cross over into a division-free sequence.

We did not find many cycles. Moreover, all primitive cycles that we found had small numbers in them. Suppose that there are no primitive cycles with large numbers. Then, if we start with two large coprime numbers, there would be many potential divisions on the way to a cycle. Therefore, the probability of entering a division-free sequence will be very large. This probabilistic argument leads us to a conjecture.

If we pick the starting integers in the range from 1 to $N$, the probability that we end up in a division-free 5-free Fibonacci sequence tends to 1 as $N$ tends to $\infty$.

Let us remind you that division-free sequences are Fibonacci-like sequences. They grow like $\phi^n$, where $\phi$ is the golden ratio. It is not surprising that we get 1.61 as the growth ratio: a number close to the golden ratio, but slightly below it.

We explained why 1.61 is the exponent for $n=5$. What about other numbers? Let us start by looking at the proportion of pairs that generate sequences not containing zeros.
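These proportions are easy to reproduce; the following sketch (our own illustration, with a hypothetical function name) counts, for a given $n$, how many starting pairs of remainders produce a Fibonacci-like sequence modulo $n$ that contains a zero and how many do not:

```python
def pair_counts(n):
    """Count pairs (a, b) of remainders mod n whose Fibonacci-like sequence
    modulo n does / does not contain 0."""
    with_zero = without_zero = 0
    for a in range(n):
        for b in range(n):
            x, y, seen = a, b, set()
            hit = False
            while (x, y) not in seen:
                seen.add((x, y))
                if x == 0 or y == 0:
                    hit = True
                    break
                x, y = y, (x + y) % n
            if hit:
                with_zero += 1
            else:
                without_zero += 1
    return with_zero, without_zero

for n in (5, 8, 59):
    w, wo = pair_counts(n)
    print(n, w, wo, wo / (n - 1) ** 2)
# 5 gives 21 and 4, 8 gives 40 and 24, and for 59 the last ratio is about 0.034
```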
We submitted two new sequences to the OEIS:

- A232656 The number of pairs of numbers below $n$ that, when generating a Fibonacci-like sequence modulo $n$, contain zeros: 1, 4, 9, 16, 21, 36, 49, 40, 81, $\ldots$.

- A232357 The number of pairs of numbers below $n$ that, when generating a Fibonacci-like sequence modulo $n$, do not contain zero: 0, 0, 0, 0, 4, 0, 0, 24, 0, $\ldots$.

The sum of the two sequences is the sequence of squares $a(n) = n^2$: the total number of possible pairs of remainders modulo $n$. For our argument we are interested in the ratio A232357$(n)/(n-1)^2$: the proportion, among the $(n-1)^2$ pairs of non-zero remainders, of those that lead to division-free sequences. This is what we get starting from $n=3$:

> 0, 0, 0.25, 0, 0, 0.49, 0, 0.20, 0.20, 0.40, 0.58, 0, 0.18, 0.53, 0.56, 0.50, 0.11, 0.18, 0.72, 0.18, 0, 0.68, 0.18, 0.54, 0, 0.40, 0.57, 0.17, 0.067, 0.52, 0.57, 0.79, 0.17, 0.74, 0.53, 0.58, 0.52, 0.50, 0.55, 0.69, 0, 0.50, 0.17, 0.52, 0.70, 0.81, 0, 0.17, 0.52, 0.52, 0.52, 0.51, 0.86, 0.67, 0.52, 0.55, 0.034, 0.46, 0.78, 0.55, 0.75

There is no clear pattern. For example, it drops significantly for $n=59$. Let us not get upset yet: if we look at the exact number, we see that A232357$(59) = 116 = 2 \cdot 58$. Is it true that every time we divide the probability of getting into a division-free sequence is the same? Suppose the term of the $n$-free Fibonacci sequence before division has remainder $r$. The probability that we cross over to a division-free sequence is $f(r)/(n-1)$, where $f(r)$ is the number of possible remainders that follow $r$ in division-free sequences.

If $n$ is prime, then $f(r)$ is constant, for $r \neq 0$.

Consider the Fibonacci sequence modulo $n$ between 0 and the first entry point. Multiply this sequence by all non-zero remainders modulo $n$ and collect the set of all consecutive pairs we get. Every non-zero number appears in this set of pairs, and in particular appears as the first element of a pair, the same number of times. But these are exactly the pairs that lead to divisions. That means that in the set of pairs that lead to division-free sequences, each non-zero remainder also appears as the first element of a pair the same number of times; in other words, each non-zero remainder $r$ is followed by the same number of possible remainders, so $f(r)$ must be constant.

The previous lemma shows that the argument we provided for 5 works for every prime number that is not a Fibonacci omni-factor. Each time we divide, we cross over into a division-free sequence with the same probability. But what about composite numbers that are not omni-factors? We know that the cycles of Fibonacci-like sequences modulo a composite number might have different lengths. The number of different cycles modulo $n$ is A015134. Correspondingly, A015135 is the number of different period lengths.

Let us look at the smallest possible case of a composite non-omni-factor: $n=8$. Let us check all possible Fibonacci-like sequences modulo 8. The Fibonacci sequence itself gives the cycle 0, 1, 1, 2, 3, 5, 0, 5, 5, 2, 7, 1 of length 12, and multiplying it by 3 gives another cycle of length 12. Multiplying it by 2 or by 6 gives two cycles of length 6: 0, 2, 2, 4, 6, 2 and 0, 6, 6, 4, 2, 6. There is a cycle 0, 4, 4 of length 3, and there is a trivial cycle 0, 0 of length 1. We also have a cycle corresponding to Lucas numbers 1, 3, 4, 7, 3, 2, 5, 7, 4, 3, 7, 2, 1, 3, $\ldots$ of length 12. There is also another cycle, that is a multiple of this cycle: 1, 4, 5, 1, 6, 7, 5, 4, 1, 5, 6, 3, 1, 4, $\ldots$. These are two more cycles of length 12, and together with the previous ones they cover all 64 pairs of remainders. The two Lucas-type cycles contain 24 terms in total, so there is no way each of the 7 possible non-zero remainders participates in them the same number of times. Table \[tbl:divfreepairs\] shows, given a remainder, what the next remainder can be if we are inside a division-free sequence.
--- ------------
1   3, 4, 5, 6
2   1, 5
3   1, 2, 4, 7
4   1, 3, 5, 7
5   1, 4, 6, 7
6   3, 7
7   2, 3, 4, 5
--- ------------

: Remainders in division-free pairs.[]{data-label="tbl:divfreepairs"}

We see that numbers 1, 3, 4, 5, 7 correspond to 4 possibilities, and numbers 2 and 6 to two possibilities. That means the probability that we cross over to a division-free sequence depends on the previous remainder. But the important part is that it is never zero, because each remainder has at least two numbers that can follow it. We can assume that with each division we cross over to a division-free sequence with some probability that is bounded away from zero. From this assumption we get a conjecture.

For any $n$ such that a division-free sequence exists, if we pick the starting integers in the range from 1 to $N$, the probability that we eventually end up in a division-free $n$-free Fibonacci sequence tends to 1 as $N$ tends to $\infty$.

This conjecture explains why we get 1.61 as a growth estimate for non-omni-factors in our data. Now we want to explain other numbers in the next section.

Growth rates for omni-factors {#sec:growthomni}
=============================

Table \[tbl:exp\] shows the exponents that are not equal to 1.61. We call them the deviated exponents and they appear when $n$ is an omni-factor.

-------- ------ ------ ------ ----- ------ ------ ------ ------ ------
$n$        4      6      7     9     14     23     27     43     49
growth   1.32   1.42   1.34   1.4   1.49   1.48   1.53   1.54   1.56
-------- ------ ------ ------ ----- ------ ------ ------ ------ ------

: Deviated exponents.[]{data-label="tbl:exp"}

It is not surprising that these numbers are smaller than the golden ratio. Indeed, in every sequence we divide by a power of $n$ an infinite number of times. Moreover, we can extend the reasoning from Lemma \[thm:average3\] and Lemma \[thm:average4\] to see that each time we divide, we divide on average by $n^{n/(n-1)}$. The number we divide by grows with $n$, but the numbers in the second row of Table \[tbl:exp\] do not decrease. To explain this we need to see how often we divide.

If we start with a pair of remainders, what is the average number of steps we need to make to get to zero? If we take all the pairs from the same cycle, then the average would be about half of the length of the cycle. The total average is the sum of squares of cycle lengths divided by the total number of pairs and by 2. This is not an integer sequence, but we submitted the rounded numbers, and they form sequence A233248. To keep a record of the precise values we submitted the sum of the squares of cycle lengths as sequence A233246.

We can approximate the average number of steps to the next division as $Z(n)/2$, but this is not precise. For example, we saw before that for 4-free sequences the number of steps until the next division is 1, 3, or 4. So the average is 8/3. It is easy to calculate this number when $n$ is prime. After the division we get a number that is not divisible by $n$, and the previous number is not divisible by $n$. We can assume that all such pairs are equally probable and the average number of steps is $(Z(n)-1)/2$. For non-prime numbers we can calculate this explicitly keeping in mind that before the division both numbers are coprime with $n$. (We can argue probabilistically that eventually the elements of a sequence become coprime with $n$.)

If $a$ is the average number of steps until the next division, then the estimated average division per step is by $n^{n/((n-1)a)}$. We combined these numbers in Table \[tbl:growthexplanation\].
The third row gives the entry points, the fourth row the average number of steps until the next division, and the fifth row the calculated average division per step. The last row in the table needs a separate explanation: if $d$ is the average division per step, we took the recurrence defined by $x_n = (x_{n-1} + x_{n-2})/d$ and calculated its growth, which is the last row.

--------------------- ------ ------ ------ ------ -------- ------ -------- ------ --------
$n$                      4      6      7      9      14      23      27      43      49
experimental growth    1.32   1.42   1.34   1.4    1.49     1.48    1.53    1.54    1.56
$Z(n)$                   6     12      8     12      24      24      36      44      56
$a$                     8/3     6     7/2   45/8   154/13   23/2   459/26   43/2   441/16
average division       2.00   1.43   1.91   1.55    1.27    1.33    1.21    1.20    1.16
recurrence growth        1    1.26   1.03   1.19    1.36    1.32    1.41    1.42    1.46
--------------------- ------ ------ ------ ------ -------- ------ -------- ------ --------

: Estimated growth.[]{data-label="tbl:growthexplanation"}

The last line (the recurrence growth) and the fourth line (the average number of steps until division) are strongly correlated with the experimental growth.

[9]{}

*The On-Line Encyclopedia of Integer Sequences*, published electronically at <http://oeis.org>, 2012.

Paul Cubre and Jeremy Rouse, *Divisibility properties of the Fibonacci entry point*, arXiv:1212.6221, 2012.

Richard K. Guy, Tanya Khovanova, and Julian Salazar, *Conway’s subprime Fibonacci sequences*, arXiv:1207.5099, 2012.

G. H. Hardy and E. M. Wright, *An Introduction to the Theory of Numbers*, sixth edition, Oxford University Press, Oxford, 2008.

Jeffrey C. Lagarias, *The Ultimate Challenge: The $3x+1$ Problem*, Amer. Math. Soc., 2010.

S. Vajda, *Fibonacci & Lucas Numbers, and the Golden Section: Theory and Applications*, Ellis Horwood Ltd., Chichester, England, 1989.

Morgan Ward, *The characteristic number of a sequence of integers satisfying a linear recursion relation*, Trans. Amer. Math. Soc. 33 (1931), 153–165.

------------------------------------------------------------------------

2010 [*Mathematics Subject Classification*]{}: Primary 11B39; Secondary 11B50. *Keywords:* Fibonacci numbers, Lucas numbers, divisibility, entry points.

------------------------------------------------------------------------

(Mentions A000032, A000045, A000057, A000285, A001175, A001177, A015134, A015135, A001602, A060305, A064362, A064414, A065156, A078414, A214684, A224382. New sequences A230359, A230457, A232656, A232357, A232658, A232666, A233246, A233248, A233525, A233526)

------------------------------------------------------------------------
--- abstract: 'If $G$ is a complex simply connected semisimple algebraic group and if ${\lambda}$ is a dominant weight, we consider the compactification $X_{\lambda}\subset {\mathbb P}\big(\operatorname{End}({V({{\lambda}})})\big)$ obtained as the closure of the $G\times G$-orbit of the identity and we give necessary and sufficient conditions on the support of ${\lambda}$ so that $X_{\lambda}$ is normal; as well, we give necessary and sufficient conditions on the support of ${\lambda}$ so that $X_{\lambda}$ is smooth.' author: - 'Paolo Bravi, Jacopo Gandini, Andrea Maffei, Alessandro Ruzzi' title: 'Normality and non-normality of group compactifications in simple projective spaces' --- Introduction {#introduction .unnumbered} ============ Consider a semisimple simply connected algebraic group $G$ over an algebraically closed field ${\Bbbk}$ of characteristic zero. If ${\lambda}$ is a dominant weight (with respect to a fixed maximal torus $T$ and a fixed Borel subgroup $B\supset T$) and if ${V({{\lambda}})}$ is the simple $G$-module of highest weight ${\lambda}$, then $\operatorname{End}\big({V({{\lambda}})}\big)$ is a simple $G\times G$-module. Let $I_{\lambda}\in \operatorname{End}\big({V({{\lambda}})}\big)$ be the identity map and consider the variety $X_{\lambda}\subset {\mathbb P}\big(\operatorname{End}({V({{\lambda}})})\big)$ given by the closure of the $G\times G$-orbit of $[I_{\lambda}]$. In [@Ka], S. Kannan studied for which ${\lambda}$ this variety is projectively normal, and this happens precisely when ${\lambda}$ is minuscule. In [@Ti], D. Timashev studied the more general situation of a sum of irreducible representations, giving necessary and sufficient conditions for the normality and smoothness of these compactifications; however the conditions for normality are not completely explicit. In this paper we give an explicit characterization of the normality of $X_{\lambda}$, which allows us to simplify the conditions for the smoothness as well. To explain our results we need some notation. Let ${\Delta}$ be the set of simple roots (w.r.t. $T\subset B$) and identify ${\Delta}$ with the vertices of the Dynkin diagram. Define the support of ${\lambda}$ as the set $\operatorname{Supp}({\lambda})=\{{\alpha}\in {\Delta}{\, : \,}\langle {\lambda}, {\alpha}^\vee \rangle \neq 0 \}$. \[see Theorem \[teo:normalita\]\] The variety $X_{\lambda}$ is normal if and only if ${\lambda}$ satisfies the following property: - For every non-simply laced connected component $\Delta'$ of $\Delta$, if $\operatorname{Supp}({\lambda})\cap {\Delta}'$ contains a long root, then it also contains the short root which is adjacent to a long simple root. In particular, if the Dynkin diagram of $G$ is simply laced then $X_{\lambda}$ is normal, for all ${\lambda}$. In the paper we will prove the theorem in a more general form, for simple (i.e. with a unique closed orbit) linear projective compactifications of an adjoint group (see section \[ssez:XSigma\]). We will make use of the wonderful compactification of $G_\mathrm{ad}$, the adjoint group of $G$, and of the results on projective normality of these compactifications proved by S. Kannan in [@Ka]. These results hold in the more general case of a symmetric variety; however our method does not apply to this more general situation (see section \[ssez:simmetrichenormalita\]).
\[see Theorem \[smooth Xsigma\]\] The variety $X_{\lambda}$ is smooth if and only if $\lambda$ satisfies property $(\star)$ of Theorem A together with the following properties: - For every connected component $\Delta'$ of $\Delta$, $\operatorname{Supp}(\lambda)\cap \Delta'$ is connected and, in case it contains a unique element, then this element is an extreme of ${\Delta}'$; - $\operatorname{Supp}(\lambda)$ contains every simple root which is adjacent to three other simple roots and at least two of the latter; - Every connected component of ${\Delta}{\smallsetminus}\operatorname{Supp}(\lambda)$ is of type ${\mathsf A}$. Theorem B can be generalized to any simple and normal adjoint symmetric variety. Following a criterion of ${\mathbb Q}$-factoriality for spherical varieties given by M. Brion in [@Br2], properties i) and ii) characterize the $\mathbb{Q}$-factoriality of the normalization of $X_{\lambda}$ (see Proposition \[Q-fattorialita\]), while property iii) arises from a criterion of smoothness given by D. Timashev in [@Ti] in the case of a linear projective compactification of a reductive group. As a corollary of Theorem B, we get that $X_{\lambda}$ is smooth if and only if its normalization is smooth. The paper is organized as follows. In the first section we introduce the wonderful compactification of $G_\mathrm{ad}$ and the normalization of the variety $X_{\lambda}$. In the second section we prove Theorem A, and in the third section Theorem B. In the last section we discuss some possible generalizations of our results. Preliminaries {#sez:preliminari} ============= Notation {#ssez:notazioni} -------- Recall that $G$ is semisimple and simply connected. Fix a Borel subgroup $B\subset G$, a maximal torus $T\subset B$ and let $U$ denote the unipotent radical of $B$. Lie algebras of groups denoted by upper-case latin letters ($G,U,L,\ldots$) will be denoted by the corresponding lower-case german letter (${\mathfrak g}, \mathfrak u, \mathfrak l,\ldots$). Let $\Phi$ denote the set of roots of $G$ relatively to $T$ and ${\Delta}\subset \Phi$ the basis associated to the choice of $B$. For all ${\alpha}\in {\Delta}$ let $e_{\alpha}, {\alpha}^{{\vee}},f_{\alpha}$ be an $\mathfrak{sl}(2)$-triple of $T$-weights ${\alpha},0,-{\alpha}$. Let ${\Lambda}$ denote the weight lattice of $T$ and ${\Lambda}^+$ the subset of dominant weights. For all ${\alpha}\in{\Delta}$, denote by $\omega_{\alpha}$ the corresponding fundamental weight. If ${\lambda}\in {\Lambda}$, recall the definition of its *support*: $$\operatorname{Supp}({\lambda}) = \{{\alpha}\in {\Delta}{\, : \,}\langle {\lambda}, {\alpha}^\vee \rangle \neq 0 \}.$$ If $I \subset {\Delta}$, define its *border* $\partial{I}$, its *interior* $I^\circ$ and its *closure* ${\overline}{I}$ as follows: $$\partial{I} = \{ {\alpha}\in {\Delta}{\smallsetminus}I {\, : \,}{\exists\,}{\beta}\in I \mathrm{ \; such \, that \; } \langle {\beta},{\alpha}^\vee \rangle \neq 0\};$$ $$I^\circ = I {\smallsetminus}\partial{({\Delta}{\smallsetminus}I)};$$ $$\overline{I} = I \cup \partial{I}.$$ For ${\lambda}\in \Lambda$, denote by ${\mathcal L}_{\lambda}$ the line bundle on $G/B$ whose $T$-weight in the point fixed by $B$ is $-{\lambda}$. For ${\lambda}$ dominant, ${V({{\lambda}})} = {\Gamma}(G/B,{\mathcal L}_{\lambda})^*$ is an irreducible $G$-module of highest weight ${\lambda}$; when we deal with different groups we will use the notation ${V_G({{\lambda}})}$. 
Denote by ${\Pi({{\lambda}})}$ the set of weights occurring in ${V({{\lambda}})}$ and set ${\Pi^+({{\lambda}})}={\Pi({{\lambda}})}\cap {\Lambda}^+$. Let ${\lambda}\mapsto {\lambda}^*$ be the linear involution of ${\Lambda}$ defined by $({V({{\lambda}})})^*{\simeq}{V({{\lambda}^*})}$, for any dominant weight ${\lambda}$. The weight lattice ${\Lambda}$ is endowed with the dominance order ${\leqslant}$ defined as follows: $\mu {\leqslant}{\lambda}$ if and only if ${\lambda}- \mu \in {\mathbb N}{\Delta}$. If ${\beta}= \sum_{{\alpha}\in {\Delta}} n_{\alpha}{\alpha}\in {\mathbb Z}{\Delta}$, define its *support over* ${\Delta}$ (not to be confused with the previous one) as follows: $$\operatorname{Supp}_{\Delta}({\beta}) = \{ {\alpha}\in {\Delta}{\, : \,}n_{\alpha}\neq 0 \}.$$ We introduce also some notations about the multiplication of sections. Notice that, for all ${\lambda},\mu \in\Lambda$, ${\mathcal L}_{\lambda}\otimes {\mathcal L}_\mu ={\mathcal L}_{{\lambda}+\mu}$. Therefore, if ${\lambda},\mu$ are dominant weights and $n\in {\mathbb N}$, the multiplication of sections defines maps as follows: $$m_{{\lambda},\mu}:{V({{\lambda}})}\times {V({\mu})} {\rightarrow}{V({{\lambda}+\mu})} \; \text{ and } m_{\lambda}^n : {V({{\lambda}})} {\rightarrow}{V({n{\lambda}})}.$$ We will also write $uv$ for $m_{{\lambda},\mu}(u,v)$ and $u^n$ for $m^n_{\lambda}(u)$. Since $G/B$ is irreducible, $m_{{\lambda},\mu}$ and $m^n_{\lambda}$ induce the following maps at the level of projective spaces: $$\psi_{{\lambda},\mu}: {\mathbb P}({V({{\lambda}})}) \times {\mathbb P}({V({\mu})}) {\rightarrow}{\mathbb P}({V({{\lambda}+ \mu})}) \;{\text{ and }}\; \psi^n_{\lambda}: {\mathbb P}({V({{\lambda}})}) {\rightarrow}{\mathbb P}({V({n{\lambda}})}).$$ The following lemma is certainly well known; however we do not know any reference. \[lem:immersioni\] Let ${\lambda},\mu$ be dominant weights. 1. If $\operatorname{Supp}({\lambda}) \cap \operatorname{Supp}(\mu) = {\varnothing}$, then the map $\psi_{{\lambda},\mu}\colon {\mathbb P}({V({{\lambda}})}) \times {\mathbb P}({V({\mu})}) \to {\mathbb P}({V({{\lambda}+ \mu})})$ is a closed embedding. 2. For any $n>0$, the map $\psi_{\lambda}^n: {\mathbb P}({V({{\lambda}})}) {\rightarrow}{\mathbb P}({V({n{\lambda}})})$ is a closed embedding. $i)$. Fix highest weight vectors $v_{\lambda}\in {V({{\lambda}})}$, $v_\mu \in {V({\mu})}$ and $v_{{\lambda}+\mu}= v_{\lambda}v_\mu \in {V({{\lambda}+\mu})}$. If $V$ is irreducible, then ${\mathbb P}(V)$ has a unique closed orbit, namely the orbit of the highest weight vector. Consequently, since ${\mathbb P}(V({\lambda})) \times {\mathbb P}(V(\mu))$ has a unique closed orbit, in order to prove the claim it suffices to prove that $\psi_{{\lambda},\mu}$ is smooth in $x=([v_{\lambda}],[v_\mu])$ and that the inverse image of $[v_{{\lambda}+\mu}]$ is $x$. The second claim is clear for weight reasons. In order to prove that $\psi_{{\lambda},\mu}$ is smooth in $x$, consider $T$-stable complements $U \subset {V({{\lambda}})}$, $V \subset {V({\mu})}$ and $W\subset {V({{\lambda}+\mu})}$ of ${\Bbbk}\,v_{\lambda}$, ${\Bbbk}\,v_\mu$ and ${\Bbbk}\,v_{{\lambda}+\mu}$. So in a neighbourhood of $x$ the map $\psi_{{\lambda},\mu}$ can be described as $$\psi:U\times V {\longrightarrow}W \;\text{ where } \psi(u,v)= u v_\mu + v_{\lambda}v + u v.$$ The differential of $\psi_{{\lambda},\mu}$ in $x$ is then given by the differential of $\psi$ in $(0,0)$, thus it is described as follows: $$d\psi_x(u,v)=uv_\mu+v_{\lambda}v.$$ Suppose that $d\psi_x$ is not injective. 
Since it is $T$-equivariant, consider a maximal weight $\eta \in {\Pi({{\lambda}+\mu})}{\smallsetminus}\{{\lambda}+\mu\}$ such that there exists a couple of non-zero $T$-eigenvectors $(u,v)\in \ker d \psi_x$ with weights respectively $\eta - \mu$ and $\eta - {\lambda}$. Suppose that $\eta - \mu \in {\Pi({{\lambda}})}{\smallsetminus}\{{\lambda}\}$ is not maximal and take ${\alpha}\in {\Delta}$ such that $\eta - \mu + {\alpha}\in {\Pi({{\lambda}})}{\smallsetminus}\{{\lambda}\}$ and $e_{\alpha}u\neq 0$: then $$(e_{\alpha}u)v_\mu + v_{\lambda}(e_{\alpha}v) = e_{\alpha}(u v_\mu + v_{\lambda}v) = 0$$ and $\eta + {\alpha}\in {\Pi({{\lambda}+ \mu})}{\smallsetminus}\{{\lambda}+\mu\}$, against the maximality of $\eta$. Thus $\eta - \mu$ is maximal in ${\Pi({{\lambda}})} {\smallsetminus}\{{\lambda}\}$ and similarly $\eta - {\lambda}$ is maximal in ${\Pi({\mu})} {\smallsetminus}\{\mu\}$. Therefore, on one hand it must be $$\eta - \mu = {\lambda}- {\alpha}$$ with ${\alpha}\in \operatorname{Supp}({\lambda})$, while on the other hand it must be $$\eta - {\lambda}= \mu - {\beta}$$ with ${\beta}\in \operatorname{Supp}(\mu)$. Since $\operatorname{Supp}({\lambda}) \cap \operatorname{Supp}(\mu) = {\varnothing}$, this is impossible and shows that, if $(u, v) \in \ker d \psi_x$, then it must be $u = 0$ or $v = 0$. Suppose now that $(u,0) \in \ker d\psi_x$: then $u v_\mu = 0$ and by the irreducibility of $G/B$ also $u=0$. A similar argument applies if $u=0$. $ii).$ Suppose that $v,w \in V({\lambda})$ are such that $v^{n} = w^{n}$: then $v=tw$ for some $t\in {\Bbbk}$. Thus $\psi_{\lambda}^n$ is injective. Let us show now that $\psi_{\lambda}^n$ is smooth; it is enough to show it in $x = [v_{\lambda}]$ where $v_{\lambda}\in V({\lambda})$ is a highest weight vector. Let $V \subset V({\lambda})$ be the $T$-stable complement of ${\Bbbk}v_{\lambda}$, identified with the tangent space $T_{x} {\mathbb P}(V({\lambda}))$. If $v \in V$, the differential $d (\psi_{\lambda}^n)_{x}$ is described as follows $$d (\psi^n_{\lambda})_{x} (v) = n v_{\lambda}^{n-1} v.$$ Thus $d (\psi^n_{\lambda})_{x}$ is injective and $\psi_{\lambda}^n$ is smooth. The variety $X_{\lambda}$ {#ssez:Xlambda} ------------------------- If ${\lambda}$ is a dominant weight, denote by ${E({{\lambda}})}$ the $G\times G$-module $\operatorname{End}({V({{\lambda}})})$ and let $X_{\lambda}$ be the closure of the $G\times G$-orbit of $[I_{\lambda}] \in {\mathbb P}({E({{\lambda}})})$. More generally if ${\lambda}_1,\dots,{\lambda}_m$ are dominant weights we define $$X_{{\lambda}_1, \ldots, {\lambda}_m} = {\overline}{G\times G([I_{{\lambda}_1}], \ldots, [I_{{\lambda}_m}])} \subset {\mathbb P}({E({{\lambda}_1})}) \times \cdots \times {\mathbb P}({E({{\lambda}_m})}).$$ Since ${E({{\lambda}})}$ is an irreducible $G\times G$-module of highest weight $({\lambda},{\lambda}^*)$, as a consequence of Lemma \[lem:immersioni\] we get that if ${\lambda}$ and $\mu$ have non-intersecting supports and if $n\in {\mathbb N}$ then $$X_{{\lambda}+\mu}\simeq X_{{\lambda},\mu} \qquad \mathrm{ and } \qquad X_{n{\lambda}}{\simeq}X_{\lambda}.$$ As a consequence we get the following proposition: \[prp:supporto\]Let ${\lambda},\mu$ be dominant weights. Then $X_{\lambda}\simeq X_\mu$ as $G\times G$-varieties if and only if ${\lambda}$ and $\mu$ have the same support.
Moreover, if $\operatorname{Supp}({\lambda})=\{{\alpha}_1,\dots,{\alpha}_m\}$ then $$X_{\lambda}{\simeq}X_{\omega_{{\alpha}_1},\dots,\omega_{{\alpha}_m}}.$$ By the discussion above we have to prove only that the condition is necessary. This follows by noticing that if $X_{\lambda}$ and $X_\mu$ are $G\times G$-isomorphic then also their closed $G\times G$-orbits are isomorphic, which is equivalent to the fact that ${\lambda}$ and $\mu$ have the same support. The wonderful compactification of $G_{\mathrm{ad}}$ and the normalization of $X_{\lambda}$ {#ssez:meravigliosa} ------------------------------------------------------------------------------------------ When ${\lambda}$ is a regular weight (i.e. $\operatorname{Supp}({\lambda})={\Delta}$) the variety $X_{\lambda}$ is called the wonderful compactification of $G_\mathrm{ad}$ and it has been studied by C. De Concini and C. Procesi in [@CP]. We will denote this variety by $M$: it is smooth and the complement of its open orbit is the union of smooth prime divisors with normal crossings whose intersection is the closed orbit. The closed orbit of $M$ is isomorphic to $G/B \times G/B$ and the restriction of line bundles determines an embedding of $\operatorname{Pic}(M)$ into $\operatorname{Pic}(G/B\times G/B)$, that we identify with ${\Lambda}\times {\Lambda}$ as before; the image of this map is the set of weights of the form $({\lambda},{\lambda}^*)$. Therefore $\operatorname{Pic}(M)$ is identified with ${\Lambda}$ and we denote by ${\mathcal M}_{{\lambda}}$ a line bundle on $M$ whose restriction to $G/B \times G/B$ is isomorphic to ${\mathcal L}_{\lambda}\boxtimes {\mathcal L}_{{\lambda}^*}$. If $D\subset M$ is a $G\times G$-stable prime divisor then the line bundle defined by $D$ is of the form ${\mathcal M}_{{\alpha}_D}$, where ${\alpha}_D$ is a simple root. The map $D\mapsto {\alpha}_D$ defines a bijection between the set of $G\times G$-stable prime divisors and ${\Delta}$, and we denote by $M_{\alpha}$ the prime divisor which corresponds to a simple root ${\alpha}$. We denote by $s_{\alpha}$ a section of ${\mathcal M}_{\alpha}$ whose associated divisor is $M_{\alpha}$; notice that such a section is $G\times G$-invariant. More generally if $\nu = \sum_{{\alpha}\in {\Delta}} n_{{\alpha}}{\alpha}\in {\mathbb N}{\Delta}$, set $s^\nu = \prod_{{\alpha}\in {\Delta}} s_{{\alpha}} ^{n_{{\alpha}}} \in {\Gamma}(M,{\mathcal M}_\nu)$. Then, given any ${\lambda}\in {\Lambda}$, the multiplication by $s^\nu$ injects ${\Gamma}(M,{\mathcal M}_{{\lambda}- \nu})$ in ${\Gamma}(M,{\mathcal M}_{\lambda})$. If ${\lambda}$ is a dominant weight, the map $G_\mathrm{ad}{\longrightarrow}{\mathbb P}({E({{\lambda}})})$ extends to a map $q_{\lambda}:M{\longrightarrow}{\mathbb P}({E({{\lambda}})})$ (see [@CP]) whose image is $X_{\lambda}$ and such that ${\mathcal M}_{\lambda}=q_{\lambda}^*({\mathcal O}_{{\mathbb P}({E({{\lambda}})})}(1))$. If we pull back the homogeneous coordinates of ${\mathbb P}({E({{\lambda}})})$ to $M$, we get then a submodule of ${\Gamma}(M,{\mathcal M}_{\lambda})$ which is isomorphic to ${E({{\lambda}})}^*$; by abuse of notation we will denote this submodule by ${E({{\lambda}})}^*$. 
If ${\lambda}\in {\Lambda}$, in [@CP Theorem 8.3] the following decomposition of ${\Gamma}(M,{\mathcal M}_{\lambda})$ is given: $${\Gamma}(M,{\mathcal M}_{\lambda}) = \bigoplus_{\mu\in {\Lambda}^+ {\, : \,}\mu {\leqslant}{\lambda}} s^{{\lambda}-\mu}{E({\mu})}^*.$$ Consider the graded algebra $A({\lambda})=\bigoplus_{n=0}^\infty A_n({\lambda})$, where $A_n({\lambda})= {\Gamma}(M,{\mathcal M}_{n{\lambda}})$, and set $\widetilde{X}_{\lambda}= \operatorname{Proj}A({\lambda})$. We have then a commutative diagram as follows: $$\xymatrix{M \ar@{->>}^{p_{\lambda}}[r] \ar@{->>}_{q_{\lambda}}[dr] & \widetilde{X}_{\lambda}\ar@{->>}^{r_{\lambda}}[d]\\ & X_{\lambda}}$$ In [@Ka], it has been shown that $A({\lambda})$ is generated in degree $1$ and in [@DC] that $r=r_{\lambda}$ is the normalization of $X_{\lambda}$. Notice that the projective coordinate ring of $X_{\lambda}\subset {\mathbb P}({E({{\lambda}})})$ is given by the graded subalgebra $B({\lambda})=\bigoplus_{n=0}^\infty B_n({\lambda})$ of $A({\lambda})$ generated by ${E({{\lambda}})}^* \subset {\Gamma}(M,{\mathcal M}_{\lambda})$. The variety $X_\Sigma$ {#ssez:XSigma} ---------------------- We consider now a generalization of the variety $X_{\lambda}$. Let $\Sigma$ be a finite set of dominant weights and denote ${E({\Sigma})} = \bigoplus_{\mu \in \Sigma} {E({\mu})}$; let $x_\Sigma=[(I_\mu)_{\mu\in\Sigma}]\in {\mathbb P}({E({\Sigma})})$ and define $X_\Sigma$ as the closure of the $G\times G$-orbit of $x_\Sigma$ in ${\mathbb P}({E({\Sigma})})$. If $\Sigma = \{{\lambda}\}$, then we get the variety $X_{\lambda}$, while if ${\Sigma}= {\Pi^+({{\lambda}})}$ we get its normalization $\widetilde X _{\lambda}$. Notice that the diagonal action of $G$ fixes the point $x_\Sigma$ so we have a $G\times G$ equivariant map $G{\longrightarrow}X_\Sigma$ given by $g \longmapsto (g,1)x_\Sigma$. This map induces a map from $G_{\mathrm{ad}}$ to $X_\Sigma$ if and only if the action of the center of $G\times G$ on $E({\lambda})$ is the same for all ${\lambda}\in \Sigma$ or equivalently if ${\Sigma}$ is contained in a coset of ${\Lambda}$ modulo ${\mathbb Z}{\Delta}$. In this case we say that $X_\Sigma$ is a *semi-compactification* of $G_{\mathrm{ad}}$. If $G_{\mathrm{ad}}$ is a simple group and $\Sigma \neq \{ 0 \}$ then $X_\Sigma$ is a compactification of $G_{\mathrm{ad}}$, while if $G_{\mathrm{ad}}$ is not simple we can only say that it is a compactification of a group which is a quotient of $G_{\mathrm{ad}}$. We say that $\Sigma$ is *simple* if there exists ${\lambda}\in\Sigma$ such that $\Sigma \subset {\Pi^+({{\lambda}})}$ or equivalently if ${\Sigma}$ contains a unique maximal element with respect to the dominance order ${\leqslant}$. Notice also that if ${\lambda}\in \Sigma$ is such that for all $\mu \in \Sigma$ different from ${\lambda}$ the vector $\mu - {\lambda}$ is not in ${\mathbb Q}_{{\geqslant}0}[{\Delta}]$ then it is easy to construct a cocharacter $\chi : {\Bbbk}^*{\longrightarrow}G\times G$ such that $\lim_{t\to 0}\chi(t)x_\Sigma $ is the highest weight line in ${\mathbb P}(E({\lambda}))$. In particular $X_\Sigma$ is a simple $G\times G$ semi-compactification of $G_{\mathrm{ad}}$ if and only if $\Sigma$ is simple. By the description of the normalization of $X_{\lambda}$, if $\Sigma$ is simple and ${\lambda}\in {\Sigma}$ is the maximal element, then we get $$\xymatrix{\widetilde X_{\lambda}\ar[r]^{r} & X_\Sigma \ar[r] & X_{\lambda}}$$ In particular, it follows that $r=r_\Sigma:\widetilde X_{\lambda}{\longrightarrow}X_\Sigma$ is the normalization of $X_\Sigma$.
If ${\Sigma}$ is simple, denote $B(\Sigma)=\bigoplus_{n=0}^\infty B_n(\Sigma)$ the projective coordinate ring of $X_\Sigma\subset {\mathbb P}({E({\Sigma})})$: it is the subalgebra of $A({\lambda})$ generated by ${E({\Sigma})}^*\subset {\Gamma}(M,{\mathcal M}_{\lambda})$. The discussion above and the fact that in ${\mathbb P}(E({\lambda}))$ there is only one point fixed by the diagonal action of $G$ (the line of scalar matrices) proves that any $G\times G$ linear projective compactification of $G_{\mathrm{ad}}$ is of the form $X_\Sigma$. A projective $G\times G$-variety $X$ is said to be *linear* if there exists an equivariant embedding $X\subset {\mathbb P}(V)$ where $V$ is a finite dimensional rational $G\times G$-module. In particular as a consequence of Sumihiro’s Theorem (see for example [@KKLV Corollary 2.6]) all normal projective compactifications are linear. In this paper we study only linear compactifications. Normality {#sez:normalita} ========= In this section we determine for which simple $\Sigma$ the variety $X_\Sigma$ is normal, proving in particular Theorem A. In the following, by $\lambda$ we will always denote the maximal element of $\Sigma$. Let ${\varphi}_{\lambda}\in {E({{\lambda}})}^*$ be a highest weight vector and set $X_\Sigma^\circ \subset X_\Sigma$ the open affine subset defined by the non-vanishing of ${\varphi}_{\lambda}$. In particular, we set ${\widetilde}X_{\lambda}= X_{{\Pi^+({{\lambda}})}}$ and notice that ${\widetilde}X ^\circ_{\lambda}= r^{-1}(X_\Sigma^\circ)$. Notice that $X^\circ_\Sigma$ is $B\times B$-stable and, since it intersects the closed orbit, it intersects every orbit: therefore $X_\Sigma$ is normal if and only if $X_\Sigma^\circ$ is normal if and only if the restriction $r{\bigr|}_{{\widetilde}X ^\circ_{\lambda}} : {\widetilde}X^\circ_{\lambda}\to X^\circ_\Sigma$ is an isomorphism. Denote by $\bar B(\Sigma)$ the coordinate ring of $X^\circ_\Sigma$ and by $\bar A({\lambda})$ the coordinate ring of ${\widetilde}X^\circ_{\lambda}$; then we have $$\bar A({\lambda})= \{\frac{{\varphi}}{{\varphi}_{\lambda}^n}{\, : \,}{\varphi}\in A_n({\lambda})\} \supset \{\frac{{\varphi}}{{\varphi}_{\lambda}^n}{\, : \,}{\varphi}\in B_n(\Sigma)\} = \bar B(\Sigma)$$ and $X_\Sigma$ is normal if and only if $\bar A({\lambda})=\bar B (\Sigma)$. The rings $\bar A({\lambda})$ and $\bar B(\Sigma)$ are not $G\times G$-modules, however since $X^\circ_\Sigma$ is an open subset of $X_\Sigma$ we still have an action of the Lie algebra ${\mathfrak g}\oplus{\mathfrak g}$ on them. By [@Ka], $\bar A({\lambda})$ is generated by the elements of the form ${\varphi}/{\varphi}_{\lambda}$ with ${\varphi}\in A_1({\lambda})$. In particular we have the following lemma. \[lem:general-normality\] The variety $X_\Sigma$ is normal if and only if for all $\mu\in{\Lambda}^+$ such that $\mu{\leqslant}{\lambda}$ there exists $n>0$ such that $$s^{{\lambda}-\mu}{E({\mu+(n-1){\lambda}})}^* \subset B_n(\Sigma).$$ Let ${\varphi}_\mu \in s^{{\lambda}-\mu}{E({\mu})}^*$ be a highest weight vector and suppose that $X_\Sigma$ is normal. Then, by the descriptions of $\bar A({\lambda})$ and $\bar B(\Sigma)$, for every dominant weight $\mu{\leqslant}{\lambda}$ there exist $n>0$ and $\varphi\in B_n(\Sigma)$ such that ${{\varphi}}/{{\varphi}_{\lambda}^n} = {{\varphi}_\mu}/{{\varphi}_{\lambda}}$ or equivalently ${\varphi}={\varphi}_\mu {\varphi}_{\lambda}^{n-1}\in B_n(\Sigma)$. Since ${\varphi}$ is a highest weight vector of the module $s^{{\lambda}-\mu}{E({\mu+(n-1){\lambda}})}^*$ the claim follows. 
Conversely assume that for every dominant weight $\mu{\leqslant}{\lambda}$ there exists $n$ such that $$s^{{\lambda}-\mu}{E({\mu+(n-1){\lambda}})}^*\subset B_n(\Sigma);$$ in particular ${\varphi}={\varphi}_\mu {\varphi}_{\lambda}^{n-1}\in B_n(\Sigma)$. Let’s prove that ${\varphi}/{\varphi}_{\lambda}\in \bar B({\Sigma})$ for every ${\varphi}\in s^{{\lambda}-\mu}{E({\mu})}^*$; this implies the claim since $\bar A({\lambda})$ is generated in degree one. If ${\varphi}={\varphi}_\mu$ this is clear. Using the action of the Lie algebra ${\mathfrak g}\oplus{\mathfrak g}$ on $\bar B(\Sigma)$, let’s show that if ${\varphi}/{\varphi}_{\lambda}\in \bar B(\Sigma)$ then $f_{\alpha}({\varphi})/{\varphi}_{\lambda}\in \bar B(\Sigma)$: indeed we have $$\frac{f_{\alpha}({\varphi})}{{\varphi}_{\lambda}} = f_{\alpha}(\frac{{\varphi}}{{\varphi}_{\lambda}}) + \frac{{\varphi}}{{\varphi}_{\lambda}} \cdot \frac{f_{\alpha}({\varphi}_{\lambda})}{{\varphi}_{\lambda}}$$ and the claim follows since $f_{\alpha}({\varphi}_{\lambda})\in {E({{\lambda}})}^*\subset B_1(\Sigma)$. We can describe the set $B_n(\Sigma)$ more explicitly. Indeed, as in [@DC] or in [@Ka], it is possible to identify sections of a line bundle on $M$ with functions on $G$ and use the description of the multiplication of matrix coefficients. Recall that as a $G\times G$-module we have ${\Bbbk}[G]=\bigoplus_{{\lambda}\in {\Lambda}^+}{E({{\lambda}})}^* {\simeq}\bigoplus_{{\lambda}\in {\Lambda}^+}{V({{\lambda}})}^*\otimes {V({{\lambda}})}$. More explicitly, if $V$ is a representation of $G$, define $c_V:V^*\otimes V {\longrightarrow}{\Bbbk}[G]$ as usual by $c_V(\psi \otimes v)(g)= \langle \psi, gv\rangle$. If we multiply functions in ${\Bbbk}[G]$ of this type then we get $$c_{V}( \psi \otimes v) \cdot c_{W}(\chi \otimes w) = c_{V\otimes W} \big((\psi\otimes\chi)\otimes(v\otimes w)\big):$$ in particular we get that the image of the multiplication ${E({{\lambda}})}^* \otimes {E({\mu})}^* {\longrightarrow}{\Bbbk}[G]$ is the sum of all ${E({\nu})}^*$ with ${V({\nu})}\subset {V({{\lambda}})}\otimes {V({\mu})}$. As a consequence we obtain the following lemma: \[lem:coefficientimatriciali\] Let $\nu, \nu'$ be dominant weights; then the image of ${E({\nu})}^* \otimes {E({\nu'})}^*$ in ${\Gamma}(M, {\mathcal M}_{\nu+\nu'})$ via the multiplication map is $$\bigoplus_{{V({\mu})}\subset {V({\nu})}\otimes {V({\nu'})}}\!\!\!\!s^{\nu+\nu'-\mu}{E({\mu})}^*.$$ Indeed, let $\pi : G {\rightarrow}M$ be the map induced by the inclusion $G_\mathrm{ad}\subset M$. Then any line bundle on $G$ can be trivialized so that the image of $\pi^*:{E({\nu})}^*\subset{\Gamma}(M,{\mathcal M}_\nu){\longrightarrow}{\Bbbk}[G]$ is the image of $c_{{V({\nu})}}$, and the claim follows from the previous remarks. Together with Lemma \[lem:general-normality\], this gives the following \[prp:normalita\] The variety $X_\Sigma$ is normal if and only if, for every $\mu \in {\Lambda}^+$ such that $\mu {\leqslant}{\lambda}$, there exist $n>0$ and ${\lambda}_1,\dots,{\lambda}_n\in \Sigma$ such that $${V({\mu + (n-1){\lambda}})} \subset {V({{\lambda}_1})}\otimes \cdots \otimes {V({{\lambda}_n})} .$$ Remarks on tensor products {#ssez:prodottitensore} -------------------------- By Proposition \[prp:normalita\], in order to establish the normality (or the non-normality) of $X_\Sigma$, we need some results on tensor product decomposition.
\[lem:riduzionelevi\] Let ${\lambda}, \mu, \nu$ be dominant weights and let ${\Delta}'\subset {\Delta}$ be such that $\operatorname{Supp}_{\Delta}({\lambda}+ \mu - \nu) \subset {\Delta}'$; let $L \subset G$ be the standard Levi subgroup associated to ${\Delta}'$. If $\pi \in {\Lambda}^+$, denote by ${V_L({\pi})}$ the simple $L$-module of highest weight $\pi$. Then $${V({\nu})} \subset {V({{\lambda}})} \otimes {V({\mu})} \iff {V_L({\nu})} \subset {V_L({{\lambda}})} \otimes {V_L({\mu})}.$$ If $\mathfrak a$ is any Lie algebra, denote ${\mathfrak U}(\mathfrak a)$ the corresponding universal enveloping algebra. Suppose that ${V_L({\nu})} \subset {V_L({{\lambda}})} \otimes {V_L({\mu})}$; fix maximal vectors $v_{\lambda}\in {V_L({{\lambda}})}$ and $v_\mu \in {V_L({\mu})}$ for the Borel subgroup $B\cap L \subset L$ and fix $p \in {\mathfrak U}(\mathfrak l\cap\mathfrak u^-) \otimes {\mathfrak U}(\mathfrak l\cap\mathfrak u^-)$ such that $p\,(v_{\lambda}\otimes v_\mu) \in {V_L({{\lambda}})} \otimes {V_L({\mu})}$ is a maximal vector of weight $\nu$. Since ${V_L({{\lambda}})} \otimes {V_L({\mu})} \subset {V({{\lambda}})} \otimes {V({\mu})}$, we only need to prove that $p\,(v_{\lambda}\otimes v_\mu)$ is a maximal vector for $B$ too. If ${\alpha}\in {\Delta}'$ then we have $e_{\alpha}p\,(v_{\lambda}\otimes v_\mu) = 0$ by hypothesis. On the other hand, if ${\alpha}\in {\Delta}{\smallsetminus}{\Delta}'$, notice that $e_{\alpha}$ commutes with $p$, since by its definition $p$ is supported only on the $f_{\alpha}$’s with ${\alpha}\in {\Delta}'$. Since $v_{\lambda}\otimes v_\mu$ is a maximal vector for $B$, then we get $$e_{\alpha}p\,(v_{\lambda}\otimes v_\mu) = p\, e_{\alpha}(v_{\lambda}\otimes v_\mu) = 0;$$ thus $p\,(v_{\lambda}\otimes v_\mu)$ generates a simple $G$-module of highest weight $\nu$. Assume conversely that ${V({\nu})} \subset {V({{\lambda}})} \otimes {V({\mu})}$ and fix $p \in {\mathfrak U}(\mathfrak u^-) \otimes {\mathfrak U}(\mathfrak u^-)$ such that $p\,(v_{\lambda}\otimes v_\mu) \in {V({{\lambda}})} \otimes {V({\mu})}$ is a maximal vector of weight $\nu$. Since $\operatorname{Supp}_{\Delta}({\lambda}+ \mu - \nu) \subset {\Delta}'$, we may assume that the only $f_{\alpha}$’s appearing in $p$ are those with ${\alpha}\in {\Delta}'$; therefore $p\,(v_{\lambda}\otimes v_\mu) \in {V_L({{\lambda}})} \otimes {V_L({\mu})}$ and it generates a simple $L$-module of highest weight $\nu$. \[lem:traslazione\] Fix ${\lambda}, \mu, \nu \in {\Lambda}^+$ such that ${V({\nu})} \subset {V({{\lambda}})} \otimes {V({\mu})}$. Then, for any $\nu' \in {\Lambda}^+$, it also holds $${V({\nu + \nu'})} \subset {V({{\lambda}+ \nu'})} \otimes {V({\mu})}.$$ Fix a maximal vector $v_{\nu'} \in {V({\nu'})}$ and consider the $U$-equivariant map $$\begin{array}{cccc} \phi: & {V({{\lambda}})} \otimes {V({\mu})} & {\longrightarrow}& {V({{\lambda}+ \nu'})}\otimes {V({\mu})}\\ & w_1 \otimes w_2 & \longmapsto & m_{{\lambda},\nu'}(w_1,v_{\nu'}) \otimes w_2 \end{array}$$ The claim follows since, if $v_\nu \in {V({{\lambda}})}\otimes {V({\mu})}$ is a $U$-invariant vector of weight $\nu$, then $\phi(v_\nu) \in {V({{\lambda}+ \nu'})} \otimes {V({\mu})}$ is a $U$-invariant vector of weight $\nu + \nu'$. We now describe some more explicit results. When we deal with explicit irreducible root systems, unless otherwise stated, we always use the numbering of simple roots and fundamental weights of Bourbaki [@Bo]. 
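As a first illustration of Lemma \[lem:traslazione\], for $G$ of type ${\mathsf A}_1$ we have ${V({0})}\subset {V({\omega_1})}\otimes {V({\omega_1})}$, hence ${V({\nu'})}\subset {V({\omega_1+\nu'})}\otimes {V({\omega_1})}$ for every dominant weight $\nu'$; for $\nu'=n\omega_1$ this is the usual Clebsch–Gordan inclusion ${V({n\omega_1})}\subset {V({(n+1)\omega_1})}\otimes {V({\omega_1})}$.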
In order to describe the simple subsets ${\Sigma}\subset {\Lambda}^+$ which give rise to a non-normal variety $X_{\Sigma}$, we will make use of the following lemma. \[lem:BGno\] 1. Let $G$ be of type ${\sf B}_r$. Then, for any $n$, ${V({(n-1){\omega}_1})} \not \subset {V({{\omega}_1})}^{\otimes n}$. 2. Let $G$ be of type ${\sf G}_2$. Then, for any $n$, ${V({{\omega}_1+(n-1){\omega}_2})} \not \subset {V({{\omega}_2})}^{\otimes n}$. We consider only the first case; the second is similar. Fix a highest weight vector $v_1 \in {V({{\omega}_1})}$. If ${\alpha}$ is any simple root and if $1 {\leqslant}s {\leqslant}r$, notice that $f_{\alpha}$ acts non-trivially on $f_{{\alpha}_{s-1}}\cdots f_{{\alpha}_1}v_1$ if and only if ${\alpha}= {\alpha}_s$. The $T$-eigenspace of weight 0 in ${V({{\omega}_1})}$ is spanned by $v_0=f_{{\alpha}_r}\cdots f_{{\alpha}_1}v_1$, and similarly the $T$-eigenspace of weight $(n-1){\omega}_1$ in ${V({{\omega}_1})}^{\otimes n}$ is spanned by $v_1^{\otimes i-1} \otimes v_0 \otimes v_1^{\otimes n-i}$, where $1 {\leqslant}i {\leqslant}n$. Since the vectors $$e_{{\alpha}_r} (v_1^{\otimes i-1} \otimes v_0 \otimes v_1^{\otimes n-i}) = v_1^{\otimes i-1} \otimes (e_{{\alpha}_r} v_0) \otimes v_1^{\otimes n-i}$$ are linearly independent, there exists no maximal vector of weight $(n-1){\omega}_1$ in ${V({{\omega}_1})}^{\otimes n}$. Dual results will be needed to describe the subsets ${\Sigma}$ which give rise to a normal variety $X_{\Sigma}$, but first we need to introduce some further notation. If $\Phi$ is an irreducible root system and ${\Delta}$ is a basis for $\Phi$ we will denote by $\eta$ the highest root if $\Phi$ is simply laced or the highest short root if $\Phi$ is not simply laced. For the convenience of the reader we list the highest short root of every irreducible root system in Table \[tab:hsr\].

  type of $\Phi$    highest short root
  ----------------- ------------------------------------------------------------------------------------------------------------
  ${\mathsf A}_r$   ${\alpha}_1+\cdots+{\alpha}_r=\omega_1+\omega_r$
  ${\mathsf B}_r$   ${\alpha}_1+\cdots+{\alpha}_r=\omega_1$
  ${\mathsf C}_r$   ${\alpha}_1+2({\alpha}_2+\cdots+{\alpha}_{r-1})+{\alpha}_r=\omega_2$
  ${\mathsf D}_r$   ${\alpha}_1+2({\alpha}_2+\cdots+{\alpha}_{r-2})+{\alpha}_{r-1}+{\alpha}_r=\omega_2$
  ${\mathsf E}_6$   ${\alpha}_1+2{\alpha}_2+2{\alpha}_3+3{\alpha}_4+2{\alpha}_5+{\alpha}_6=\omega_2$
  ${\mathsf E}_7$   $2{\alpha}_1+2{\alpha}_2+3{\alpha}_3+4{\alpha}_4+3{\alpha}_5+2{\alpha}_6+{\alpha}_7=\omega_1$
  ${\mathsf E}_8$   $2{\alpha}_1+3{\alpha}_2+4{\alpha}_3+6{\alpha}_4+5{\alpha}_5+4{\alpha}_6+3{\alpha}_7+2{\alpha}_8=\omega_8$
  ${\mathsf F}_4$   ${\alpha}_1+2{\alpha}_2+3{\alpha}_3+2{\alpha}_4=\omega_4$
  ${\mathsf G}_2$   $2{\alpha}_1+{\alpha}_2=\omega_1$

  : []{data-label="tab:hsr"}

Recall the condition $(\star)$ defined in the introduction: a dominant weight ${\lambda}$ satisfies $(\star)$ if, for every non-simply laced connected component ${\Delta}'\subset {\Delta}$, whenever $\operatorname{Supp}({\lambda})\cap {\Delta}'$ contains a long root it also contains the short root which is adjacent to a long simple root. \[twin\] If ${\Delta}' \subset {\Delta}$ is a non-simply laced connected component, order the simple roots in ${\Delta}'= \{ {\alpha}_1, \ldots, {\alpha}_r\}$ starting from the extreme of the Dynkin diagram of ${\Delta}'$ which contains a long root and denote ${\alpha}_q$ the first short root in ${\Delta}'$.
If ${\lambda}$ is a dominant weight such that ${\alpha}_q\not \in \operatorname{Supp}({\lambda})$ and such that $\operatorname{Supp}({\lambda})\cap {\Delta}'$ contains a long root, denote ${\alpha}_p$ the last long root which occurs in $\operatorname{Supp}({\lambda})\cap {\Delta}'$; for instance, if ${\Delta}'$ is not of type ${\mathsf G}_2$, then the numbering is as follows: $$\begin{picture}(9000,1800)(2000,-900) \put(0,0){\multiput(0,0)(3600,0){2}{\circle*{150}}\thicklines\multiput(0,0)(2500,0){2}{\line(1,0){1100}}\multiput(1300,0)(400,0){3}{\line(1,0){200}}} \put(3600,0){\multiput(0,0)(3600,0){2}{\circle*{150}}\thicklines\multiput(0,0)(2500,0){2}{\line(1,0){1100}}\multiput(1300,0)(400,0){3}{\line(1,0){200}}} \put(7200,0){\multiput(0,0)(1800,0){2}{\circle*{150}}\thicklines\multiput(0,-60)(0,150){2}{\line(1,0){1800}}\multiput(1050,0)(-25,25){10}{\circle*{50}}\multiput(1050,0)(-25,-25){10}{\circle*{50}}} \put(9000,0){\multiput(0,0)(3600,0){2}{\circle*{150}}\thicklines\multiput(0,0)(2500,0){2}{\line(1,0){1100}}\multiput(1300,0)(400,0){3}{\line(1,0){200}}} \put(-150,-700){\tiny $\alpha_1$} \put(3450,-700){\tiny $\alpha_p$} \put(8850,-700){\tiny $\alpha_q$} \put(12450,-700){\tiny $\alpha_r$} \end{picture}$$ The *little brother* of ${\lambda}$ with respect to ${\Delta}'$ is the dominant weight $${\lambda}_{{\Delta}'}^{\mathrm{lb}}= {\lambda}- \sum_{i=p}^q {\alpha}_i = \left\{ \begin{array}{ll} {\lambda}-\omega_1+\omega_2 & \textrm{ if $G$ is of type $\sf{G}_2$} \\ {\lambda}+ {\omega}_{p-1} - {\omega}_{p} + {\omega}_{q+1} & \textrm{ otherwise} \end{array} \right.$$ where ${\omega}_i$ is the fundamental weight associated to ${\alpha}_i$ if $1{\leqslant}i {\leqslant}r$, while ${\omega}_0 = {\omega}_{r+1} = 0$. The set of the little brothers of ${\lambda}$ will be denoted by ${\mathrm{LB}}({\lambda})$; notice that ${\mathrm{LB}}({\lambda})$ is empty if and only if ${\lambda}$ satisfies condition $(\star)$ of Theorem A. For convenience, define $\overline {\mathrm{LB}}({\lambda})={\mathrm{LB}}({\lambda})\cup\{{\lambda}\}$, while if ${\Delta}$ is connected and non-simply laced set ${\lambda}^{\mathrm{lb}}= {\lambda}_{\Delta}^{\mathrm{lb}}$. \[lem:eta\] Assume $G$ to be simple and let ${\lambda}\in {\Lambda}^+{\smallsetminus}\{0\}$. Denote $\eta$ the highest root of $\Phi$ if the latter is simply laced or the highest short root otherwise. 1. If ${\lambda}$ satisfies the condition $(\star)$ then $${V({{\lambda}})} \subset {V({\eta})} \otimes {V({{\lambda}})}.$$ 2. If ${\lambda}$ does not satisfy the condition $(\star)$ and if ${\lambda}^{\mathrm{lb}}$ is the little brother of ${\lambda}$ then $${V({{\lambda}})} \subset {V({\eta})} \otimes {V({{\lambda}^{\mathrm{lb}}})}.$$ If ${\Delta}$ is simply laced, then ${V({\eta})}\simeq{\mathfrak g}$ is the adjoint representation: in this case the claim follows straightforwardly by considering the map ${\mathfrak g}\otimes {V({{\lambda}})} \to {V({{\lambda}})}$ induced by the ${\mathfrak g}$-module structure on ${V({{\lambda}})}$, which is non-zero since ${\lambda}$ is non-zero. Suppose now that ${\Delta}$ is not simply laced. If ${\lambda}$ satisfies condition $(\star)$, then by Lemma \[lem:traslazione\] it is enough to study the case ${\lambda}= \omega_{\alpha}$ where ${\alpha}$ is a short simple root: *Type ${\mathsf B}_r$*: ${V({\omega_r})}\subset{V({\omega_1})}\otimes {V({\omega_r})}$. *Type ${\mathsf C}_r$*: ${V({\omega_i})}\subset{V({\omega_2})}\otimes {V({\omega_i})}$, with $i<r$.
*Type ${\mathsf F}_4$*: ${V({\omega_3})}\subset{V({\omega_4})}\otimes {V({\omega_3})}$ and ${V({\omega_4})}\subset{V({\omega_4})}\otimes {V({\omega_4})}$. *Type ${\mathsf G}_2$*: ${V({\omega_1})}\subset{V({\omega_1})}\otimes {V({\omega_1})}$. If ${\lambda}$ does not satisfy condition $(\star)$, by Lemma \[lem:traslazione\] we can assume that ${\lambda}={\omega}_{\alpha}$ with ${\alpha}$ a long root: *Type ${\mathsf B}_r$*: ${V({\omega_i})}\subset{V({\omega_1})}\otimes {V({\omega_{i-1}})}$, if $1<i<r$, and ${V({\omega_1})}\subset{V({\omega_1})}\otimes {V({0})}$. *Type ${\mathsf C}_r$*: ${V({\omega_r})}\subset{V({\omega_2})}\otimes {V({\omega_{r-2}})}$. *Type ${\mathsf F}_4$*: ${V({\omega_1})}\subset{V({\omega_4})}\otimes {V({\omega_4})}$ and ${V({\omega_2})}\subset{V({\omega_4})}\otimes {V({\omega_1+\omega_4})}$. *Type ${\mathsf G}_2$*: ${V({\omega_2})}\subset{V({\omega_1})}\otimes {V({\omega_1})}$. The above mentioned inclusion relations for tensor products are essentially known: let us treat the case of type ${\mathsf C}_r$ with ${\lambda}=\omega_i$ and $i<r$, the other cases are easier or can be checked directly. Let $v_0$ be a highest weight vector of ${V({\omega_2})}$ and $w_0$ be a highest weight vector of ${V({\omega_i})}$. Let $f$ be the following product (in the universal enveloping algebra $\mathfrak U(\mathfrak u^-)$) $$f=f_{{\alpha}_i}\cdots f_{{\alpha}_1}\cdot f_{{\alpha}_{i+1}}\cdots f_{{\alpha}_{r-1}}\cdot f_{{\alpha}_r}\cdots f_{{\alpha}_2},$$ and consider all the factorizations $f = p\cdot q$ such that $p,q \in\mathfrak U(\mathfrak u^-)$. If ${\beta}_1,\ldots,{\beta}_j\in{\Delta}$, set $$\,^\mathrm r(f_{{\beta}_1}\cdots f_{{\beta}_j})=(-1)^j2^\delta f_{{\beta}_j}\cdots f_{{\beta}_1},$$ where $\delta$ equals 0 (resp. 1) if $\alpha_i$ occurs an even (resp. odd) number of times in $\{{\beta}_1,\ldots,{\beta}_j\}$. Then it is easy to check that the vector $$\sum_{p\cdot q=f} p.v_0 \otimes \,^\mathrm r\!q.w_0$$ is a $U$-invariant vector in ${V({\omega_2})}\otimes{V({\omega_i})}$ of $T$-weight $\omega_i$. If the Dynkin diagram of $G$ is not simply laced we will need some further properties of tensor products. If ${\Delta}$ is connected but not simply laced, we will denote by $\alpha_S$ the short simple root that is adjacent to a long simple root $\alpha_L$; moreover, we will denote the associated fundamental weights by $\omega_S$ and $\omega_L$. Finally, define $\zeta$ as the sum of all simple roots and notice that $\omega_S + \zeta$ is dominant. \[lem:zeta\] Let ${\lambda}$ be a non-zero dominant weight. 1. If $G$ is of type ${\mathsf F}_4$ or ${\mathsf C}_r$ ($r{\geqslant}3$) and if $\operatorname{Supp}({\lambda})$ contains a long root then $${V({{\lambda}+\omega_S})} \subset {V({\zeta + \omega_S})} \otimes {V({{\lambda}})}.$$ 2. If $G$ is of type ${\mathsf G}_2$ and if ${\lambda}$ does not satisfy $(\star)$ then $${V({{\lambda}+\omega_1})} \subset {V({\omega_2})} \otimes {V({{\lambda}^{\mathrm{lb}}})}.$$ 3. If $G$ is of type ${\mathsf G}_2$ and if ${\alpha}_S \in \operatorname{Supp}({\lambda})$ then $${V({{\lambda}+\omega_1})} \subset {V({\omega_2})} \otimes {V({{\lambda}})}.$$ By Lemma \[lem:traslazione\] it is enough to check the statements for ${\lambda}=\omega_{\alpha}$ with ${\alpha}$ a long root in the first two cases and ${\alpha}={\alpha}_S$ in the last case. *Type ${\mathsf C}_r$*: by Lemma \[lem:traslazione\] it is enough to check that ${V({\omega_{r-1}})}\subset {V({\omega_1})} \otimes {V({\omega_r})}$. 
*Type ${\mathsf F}_4$*: we have ${\lambda}=\omega_1$ or ${\lambda}=\omega_2$ and $\omega_S+\zeta=\omega_1+\omega_4$. *Type ${\mathsf G}_2$*: we have ${\lambda}=\omega_2$ and ${\lambda}^{\mathrm{lb}}=\omega_1$ in point (2) and ${\lambda}=\omega_1$ in point (3). Normality and non-normality of $X_\Sigma$ {#ssez:normalita} ----------------------------------------- We are now able to state the main theorem. \[teo:normalita\]Let $\Sigma$ be a simple set of dominant weights and let ${\lambda}$ be its maximal element. The variety $X_\Sigma$ is normal if and only if $\Sigma \supset {\mathrm{LB}}({\lambda})$. Theorem A stated in the introduction follows immediately by considering the case $\Sigma=\{{\lambda}\}$. The remaining part of this section will be devoted to the proof of Theorem \[teo:normalita\]. The general strategy will be based on Proposition \[prp:normalita\] and will proceed by induction on the dominance order of weights. The ingredients of this induction will be the results proved in section \[ssez:prodottitensore\] together with the description of the dominance order given by J. Stembridge in [@St]: the dominance order between dominant weights is generated by pairs which differ by the highest short root for a subsystem of the root system. If $K$ is a subset of ${\Delta}$, denote $\Phi_K\subset \Phi$ the associated root subsystem and, in case $K$ is connected, denote by $\eta_K$ the corresponding highest short root. Moreover, if ${\beta}= \sum_{{\alpha}\in{\Delta}}n_{\alpha}{\alpha}$, set ${\beta}|_K=\sum_{{\alpha}\in K}n_{\alpha}{\alpha}$. The result of [@St] that we will use is the following. \[lem:stembridge\] Let ${\lambda},\mu$ be two dominant weights with ${\lambda}>\mu$; set $I = \operatorname{Supp}_{\Delta}({\lambda}-\mu)$. Let $\Phi_K$ be an irreducible subsystem of $\Phi_I$ (where $K\subset I$). 1. If $\langle ({\lambda}-\mu){\bigr|}_K, {\alpha}^\vee \rangle {\geqslant}0$ for all ${\alpha}\in K \cap \operatorname{Supp}(\mu)$, then $\mu+\eta_K {\leqslant}{\lambda}$. 2. If in addition $\langle \mu + \eta_K, {\alpha}^\vee \rangle {\geqslant}0$ for all ${\alpha}\in I {\smallsetminus}K$, then $\mu + \eta_K \in {\Lambda}^+$. The next two lemmas are the main steps of our induction. \[lem:costruzioneK\] Suppose that $\Phi$ is irreducible; let ${\lambda},\mu\in{\Lambda}^+$ such that ${\lambda}>\mu$ and $\operatorname{Supp}_{\Delta}({\lambda}-\mu)={\Delta}$. Assume that either $\Phi$ is simply laced, or there exists a short root ${\alpha}\in \operatorname{Supp}({\lambda})$ such that $\langle {\lambda}-\mu,{\alpha}^{\vee}\rangle {\geqslant}0$, or ${\alpha}_S \notin \operatorname{Supp}(\mu)$. Then there exists a connected subset $K$ of ${\Delta}$ such that 1. $\mu+\eta_K{\leqslant}{\lambda}$; 2. $\mu+\eta_K\in {\Lambda}^+$; 3. $K\cap\operatorname{Supp}({\lambda})\neq {\varnothing}$. Set $K_1= \{{\alpha}\in {\Delta}{\, : \,}\langle{\lambda}-\mu,{\alpha}^{\vee}\rangle{\geqslant}0\}$. Since ${\lambda}>\mu$ we have that $K_1\cap\operatorname{Supp}({\lambda})$ is non-empty. Notice also that $\operatorname{Supp}(\mu)\supset {\Delta}{\smallsetminus}K_1$. Define $K$ as follows: - If $\Phi$ is simply laced, let $K$ be a connected component of $K_1$ which intersects $\operatorname{Supp}({\lambda})$. - If ${\alpha}\in\operatorname{Supp}({\lambda})$ is a short root such that $\langle {\lambda}-\mu, {\alpha}^{\vee}\rangle {\geqslant}0$ let $K$ be the connected component of $K_1$ containing ${\alpha}$. 
- If $\Phi$ is not simply laced and there does not exist a short root ${\alpha}$ as in b), let $K$ be a connected component of $K_1$ which intersects $\operatorname{Supp}({\lambda})$. Properties $i)$ and $iii)$ are then easily verified by Lemma \[lem:stembridge\](a) and by construction. To prove $ii)$ notice that, if $\Phi$ is not simply laced, by the construction of $K$ it follows that if ${\alpha}_L \in K$ then ${\alpha}_S \in K$ as well: indeed, $K$ is a connected component of $K_1$ and if there is no short root ${\alpha}$ as in b) then ${\alpha}_S \not \in \operatorname{Supp}(\mu)$ implies ${\alpha}_S \in K_1$. By the description of highest short roots in Table \[tab:hsr\] we deduce that, if ${\alpha}\in K {\smallsetminus}K^{\circ}$, then the respective coefficient in $\eta_K$ is $1$: hence $\langle \eta_K, {\alpha}^\vee \rangle = -1$ for all ${\alpha}\in \partial K$ and, since $\operatorname{Supp}(\mu)\supset {\Delta}{\smallsetminus}K_1 \supset \partial{K}$, we get $\mu + \eta_K \in {\Lambda}^+$. In order to proceed with the induction, in the next lemma we will need to consider the condition $(\star)$ also for a Levi subgroup of $G$. If $K\subset {\Delta}$ let $L_K$ be the associated standard Levi subgroup; we say that ${\lambda}\in{\Lambda}^+$ satisfies condition $(\star_K)$ if, for every non-simply laced connected component $K'$ of $K$ such that $\operatorname{Supp}({\lambda})\cap K'$ contains a long root, $\operatorname{Supp}({\lambda})\cap K'$ contains also the short root adjacent to a long root. Notice that if ${\lambda}$ satisfies $(\star)$ then it also satisfies $(\star_K)$ for all $K\subset {\Delta}$. Similarly we can also define the little brother of a dominant weight w.r.t. the Levi subgroup $L_K$: if $K'$ is a connected component of $K$ such that ${\lambda}$ does not satisfy $(\star_{K'})$, define the little brother ${\lambda}_{K'}^{\mathrm{lb}}$ w.r.t. $K'$ as in Definition \[twin\] and denote by ${\mathrm{LB}}_K({\lambda})$ the set of little brothers of ${\lambda}$ constructed in this way. Notice that if $K'$ is a connected component of $K$ such that ${\lambda}$ does not satisfy $(\star_{K'})$ and if ${\Delta}'$ is the connected component of ${\Delta}$ containing $K'$, then ${\lambda}$ does not satisfy $(\star_{{\Delta}'})$ as well and ${\lambda}_{K'}^{\mathrm{lb}}={\lambda}_{{\Delta}'}^{\mathrm{lb}}$. In particular ${\mathrm{LB}}_K({\lambda})\subset {\mathrm{LB}}({\lambda})$. \[lem:induzione\] Assume $G$ to be simple and let ${\lambda},\mu$ be two dominant weights such that ${\lambda}>\mu$ and $\operatorname{Supp}_{\Delta}({\lambda}-\mu)={\Delta}$. Then there exist $\mu'\in{\Lambda}^+$ and ${\lambda}'\in \overline {\mathrm{LB}}({\lambda})$ such that $\mu<\mu'{\leqslant}{\lambda}$ and $${V({\mu+{\lambda}})}\subset {V({\mu'})}\otimes {V({{\lambda}'})}.$$ Suppose first that either $\Phi$ is simply laced or ${\alpha}_S \notin \operatorname{Supp}(\mu)$ or there exists a short root ${\alpha}$ in $\operatorname{Supp}({\lambda})$ such that $\langle {\lambda}-\mu, {\alpha}^{\vee}\rangle {\geqslant}0$. Take $K$ as in Lemma \[lem:costruzioneK\] and set $\mu'=\mu+\eta_K$: then by Lemma \[lem:eta\] together with Lemma \[lem:traslazione\] we get $${V_{L_K}({\mu + {\lambda}})} \subset {V_{L_K}({\mu'})} \otimes {V_{L_K}({{\lambda}'})}$$ with ${\lambda}'\in\overline {\mathrm{LB}}_K({\lambda})$. The claim follows by Lemma \[lem:riduzionelevi\] together with the inclusion ${\mathrm{LB}}_K({\lambda})\subset {\mathrm{LB}}({\lambda})$. 
Suppose now that $\Phi$ is not simply laced, that ${\alpha}_S \in \operatorname{Supp}(\mu)$ and that there is no short root ${\alpha}\in \operatorname{Supp}({\lambda})$ such that $\langle {\lambda}-\mu, {\alpha}^{\vee}\rangle {\geqslant}0$. Since ${\lambda}>\mu$ there exists ${\alpha}\in \operatorname{Supp}({\lambda})$ such that $\langle {\lambda}-\mu , {\alpha}^\vee\rangle {\geqslant}0$: therefore, $\operatorname{Supp}({\lambda})$ contains at least one long root. Set $\mu'=\mu + \zeta$; notice that $\mu'{\leqslant}{\lambda}$ and that $\mu'$ is dominant. The claim then follows by Lemma \[lem:eta\] and by Lemma \[lem:traslazione\] if $\Phi$ is of type ${\mathsf B}$, while if $\Phi$ is of type ${\mathsf C}$, ${\mathsf F}_4$ or ${\mathsf G}_2$ it follows by Lemma \[lem:zeta\] and by Lemma \[lem:traslazione\]. We prove first that the condition is necessary. Assume that there exists a little brother $\mu = {\lambda}_{{\Delta}'}^{\mathrm{lb}}$ of ${\lambda}$ which is not in $\Sigma$. We prove that for every positive $n$ and for every choice of weights ${\lambda}_1,\dots,{\lambda}_n \in \Sigma$ the module ${V({\mu+(n-1){\lambda}})}$ is not contained in ${V({{\lambda}_1})}\otimes \cdots \otimes {V({{\lambda}_n})}$. We proceed by contradiction. Assume there exist weights ${\lambda}_1,\dots,{\lambda}_n$ as above and notice that any of them satisfies $\mu{\leqslant}{\lambda}_i{\leqslant}{\lambda}$: indeed, ${\lambda}-\mu=n{\lambda}-(\mu+(n-1){\lambda}){\geqslant}n{\lambda}-\sum{\lambda}_{i}{\geqslant}{\lambda}-{\lambda}_{i}$ for every $i$. Therefore $\operatorname{Supp}_\Delta (\sum {\lambda}_i -(\mu+(n-1){\lambda}) ) \subset \operatorname{Supp}_\Delta ({\lambda}- \mu)$. By Definition \[twin\] together with Lemma \[lem:riduzionelevi\], it is enough to analyse the case $G$ of type ${\mathsf B}_r$ and $\operatorname{Supp}({\lambda}) = \{{\alpha}_1\}$ or $G$ of type ${\mathsf G}_2$ and $\operatorname{Supp}({\lambda}) = \{{\alpha}_2\}$. We analyse these two cases separately. *Type ${\mathsf B}_r$*: we have ${\lambda}= a \omega_1$, $\mu=(a-1)\omega_1$ and $\mu+(n-1){\lambda}=(na-1)\omega_1$. If $a=1$ we notice that there are no dominant weights between ${\lambda}$ and $\mu$. So the only possibility is ${\lambda}_i={\lambda}=\omega_1$ for all $i$, and this is in contradiction with Lemma \[lem:BGno\]. If $a>1$, notice that there is only one dominant weight between ${\lambda}$ and $\mu$, namely $\nu={\lambda}-{\alpha}_1=(a-2)\omega_1+\omega_2$; hence for all $i$ we must have ${\lambda}_i={\lambda}$ or ${\lambda}_i=\nu$. Since $\sum {\lambda}_i {\geqslant}\mu+(n-1){\lambda}$, at most one ${\lambda}_i$ can be equal to $\nu$; therefore ${V({\mu+(n-1){\lambda}})}\subset {V({{\lambda}})}^{\otimes n}$ or ${V({\mu+(n-1){\lambda}})}\subset {V({\nu})} \otimes {V({{\lambda}})}^{\otimes (n-1)}$. In the first case we obtain $${V({(na-1)\omega_1})}={V({\mu+(n-1){\lambda}})}\subset {V({{\lambda}})}^{\otimes n}\subset {V({\omega_1})}^{\otimes na},$$ contradicting Lemma \[lem:BGno\]. In the second case we notice that ${V({\omega_2})} = {\Lambda}^2 {V({\omega_1})} \subset {V({\omega_1})}^{\otimes 2}$, hence ${V({\nu})}\subset {V({(a-2)\omega_1})}\otimes {V({\omega_2})} \subset {V({\omega_1})}^{\otimes a}$ and we can conclude as in the first case. *Type ${\mathsf G}_2$*: we have ${\lambda}= a \omega_2$, $\mu=\omega_1+(a-1)\omega_2$ and we proceed as in the previous case.
We now prove that the condition is sufficient, showing that for every dominant weight $\mu {\leqslant}{\lambda}$ there exist $n>0$ and weights ${\lambda}_1,\dots,{\lambda}_n \in \overline {\mathrm{LB}}({\lambda})$ such that ${V({\mu+(n-1){\lambda}})}\subset{V({{\lambda}_1})}\otimes \cdots\otimes {V({{\lambda}_n})}$. To do this, we proceed by decreasing induction with respect to the dominance order. If $\mu = {\lambda}$ then the claim is clear, so we assume $\mu < {\lambda}$. Let ${\lambda}- \mu = {\beta}_1+\dots+{\beta}_m$ where $\operatorname{Supp}_{\Delta}({\beta}_i)$ are the connected components of $\operatorname{Supp}_{\Delta}({\lambda}-\mu)$. Set $K= \operatorname{Supp}_{\Delta}({\beta}_1)$ and ${\beta}'={\beta}_2+\dots+{\beta}_m$. Notice that $\mu+{\beta}_1$ is dominant: indeed if ${\alpha}\not \in {\overline}{K}$ then $\langle \mu + \beta_1 , {\alpha}^\vee \rangle = \langle \mu , {\alpha}^\vee \rangle {\geqslant}0$, while if ${\alpha}\in {\overline}{K}$ then $\langle \mu + \beta_1 , {\alpha}^\vee \rangle = \langle {\lambda}- {\beta}', {\alpha}^\vee \rangle {\geqslant}\langle {\lambda}, {\alpha}^\vee \rangle {\geqslant}0$. Notice moreover that, if $\nu\in \overline {\mathrm{LB}}_K(\mu+{\beta}_1)$, then $\nu+{\beta}' \in \overline {\mathrm{LB}}({\lambda})$. By Lemma \[lem:induzione\] applied to the semisimple part of the Levi $L=L_K$ associated to $K$, there exists a weight $\mu'$ which is dominant with respect to $K$ such that $\mu < \mu'{\leqslant}\mu+{\beta}_1$ and there exists $\nu \in \overline {\mathrm{LB}}_K(\mu+{\beta}_1)$ which satisfy $${V_L({\mu+{\beta}_1+\mu})}\subset {V_L({\mu'})} \otimes {V_L({\nu})}.$$ By tensoring with ${V_L({{\beta}'})}$, which is a one dimensional representation, we get ${V_L({\mu+{\lambda}})}\subset {V_L({\mu'})} \otimes {V_L({{\lambda}'})}$ with ${\lambda}'=\nu+{\beta}'\in \overline {\mathrm{LB}}({\lambda})$. Since $\langle \mu',{\alpha}^\vee \rangle {\geqslant}\langle \mu+{\beta}_1,{\alpha}^\vee \rangle$ for every ${\alpha}\not \in K$, $\mu'$ is a dominant weight; by Lemma \[lem:riduzionelevi\] we get then ${V({\mu+{\lambda}})}\subset {V({\mu'})} \otimes {V({{\lambda}'})}$ and we may apply the induction on $\mu'{\leqslant}{\lambda}$. Therefore there exist weights ${\lambda}_1,\dots,{\lambda}_n \in \overline {\mathrm{LB}}({\lambda})$ such that ${V({\mu'+(n-1){\lambda}})}\subset{V({{\lambda}_1})}\otimes \cdots\otimes {V({{\lambda}_n})}$. Finally by Lemma \[lem:traslazione\] we conclude $${V({\mu+n{\lambda}})}\subset {V({\mu'+(n-1){\lambda}})}\otimes {V({{\lambda}'})} \subset {V({{\lambda}_1})}\otimes \cdots\otimes {V({{\lambda}_n})} \otimes {V({{\lambda}'})}.$$ Smoothness ========== In this section we will study the variety ${\widetilde}{X}_{\lambda}$; in particular we will give necessary and sufficient conditions on $\operatorname{Supp}({\lambda})$ for its $\mathbb{Q}$-factoriality and for its smoothness. Thanks to Lemma \[lem:immersioni\], we may assume that $G$ is a simple group. Indeed suppose ${\Delta}= \cup_{i=1}^n {\Delta}_i$ is the decomposition in connected components and write ${\lambda}= {\lambda}_1 + \ldots + {\lambda}_n$ with $\operatorname{Supp}({\lambda}_i)\subset {\Delta}_i$: correspondingly we get a decomposition $X_{\lambda}= X_{{\lambda}_1} \times \ldots \times X_{{\lambda}_n}$, and every $X_{{\lambda}_i}$ is an embedding of the corresponding simple factor of $G_\mathrm{ad}$ if ${\lambda}_i \neq 0$ or a point if ${\lambda}_i = 0$. From now on, we will therefore assume that $\Phi$ is an irreducible root system. 
By the Bruhat decomposition, the group $G_\mathrm{ad}$ has an open $B\times B^-$-orbit; therefore it is a spherical $G\times G$-homogeneous space. Following the general theory of spherical embeddings (see [@Kn]), its simple normal embeddings are classified by combinatorial data called the *colored cones*. Here we will skip an overview of this theory, and we will simply recall the definition of the colored cone in the particular case of a simple normal embedding of $G_\mathrm{ad}$. Recall that a normal variety $X$ is said to be ${\mathbb Q}$-*factorial* if, given any Weil divisor $D$ of $X$, there exists an integer $n\neq 0$ such that $nD$ is a Cartier divisor. In subsection \[sez smooth-notaz\], we will explicitly describe the colored cone of ${\widetilde}{X}_{\lambda}$; then in subsection \[sez Q-factoriality\] we will study $\mathbb{Q}$-factoriality of ${\widetilde}{X}_{\lambda}$ following [@Br2]. Finally, in subsection \[sezione smoothness\], we will use Theorem \[teo:normalita\] together with the description of the colored cone of ${\widetilde}{X}_{\lambda}$ to make more explicit the criterion of smoothness given in [@Ti] in the case of a linear projective compactification of a reductive group. The colored cone of ${\widetilde}{X}_{\lambda}$ {#sez smooth-notaz} ----------------------------------------------- Let $X$ be a simple normal compactification of $G_\mathrm{ad}$ and call $Y$ its unique closed orbit. Denote ${\mathcal D}(G_\mathrm{ad})$ the set of $B\times B^{-}$-stable prime divisors of $G_\mathrm{ad}$ and ${\mathcal D}(X)\subset {\mathcal D}(G_\mathrm{ad})$ the set of divisors whose closure in $X$ contains $Y$. Let ${\mathcal N}(X)$ be the set of $G\times G$-stable prime divisors of $X$, so that the set of $B\times B^{-}$-stable prime divisors of $X$ is identified with ${\mathcal D}(G_\mathrm{ad})\cup {\mathcal N}(X)$. Let $T_\mathrm{ad}\subset G_\mathrm{ad}$ be the image of $T$; then the character group ${\mathcal X}(T_\mathrm{ad})$ coincides with the root lattice $\mathbb{Z}{\Delta}$, while the cocharacter group ${\mathcal X}^\vee(T_\mathrm{ad})$ coincides with the coweight lattice ${\Lambda}^\vee$. If $V$ is a simple $G\times G$-module denote by $V^{(B\times B^{-})}$ the subset of $B\times B^{-}$-eigenvectors. Notice that ${\Bbbk}(G_\mathrm{ad})^{(B\times B^{-})}/{\Bbbk}^{*} \simeq \mathbb{Z}{\Delta}$ and define a natural map $\rho : {\mathcal D}(G_\mathrm{ad})\cup {\mathcal N}(X) \to {\Lambda}^\vee$ by associating to a $B\times B^-$-stable prime divisor $D$ of $X$ the cocharacter associated to the rational discrete valuation induced by $D$. If $D\in {\mathcal N}(X)$, then $\rho(D)$ is the opposite of a fundamental coweight, while if $D\in {\mathcal D}(G_\mathrm{ad})$, then $\rho(D)$ is a simple coroot; moreover, $\rho$ is injective and $\rho({\mathcal D}(G_\mathrm{ad}))=\Delta^\vee$ (see [@Ti § 7]). Let ${\mathcal C}(X)$ be the convex cone in ${\Lambda}^\vee_{\mathbb{Q}}$ generated by $\rho\big({\mathcal D}(X)\cup {\mathcal N}(X)\big)$; by the general theory of spherical embeddings we have that ${\mathcal C}(X)$ is generated by $\rho({\mathcal D}(X))$ together with the negative Weyl chamber of $\Phi$. The *colored cone* of $X$ is then the couple $\big({\mathcal C}(X),{\mathcal D}(X)\big)$: up to equivariant isomorphisms, it uniquely determines $X$ as a $G\times G$-compactification of $G_\mathrm{ad}$. In the case of the compactification $\widetilde{X}_{{\lambda}}$, we have $\rho({\mathcal D}({\widetilde}{X}_{{\lambda}})) = {\Delta}^\vee {\smallsetminus}\operatorname{Supp}(\lambda)^\vee$ (see [@Ti Theorem 7]).
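In particular, if ${\lambda}$ is regular, that is $\operatorname{Supp}({\lambda})={\Delta}$, then ${\mathcal D}({\widetilde}{X}_{\lambda})={\varnothing}$ and ${\mathcal C}({\widetilde}{X}_{\lambda})$ is just the negative Weyl chamber of $\Phi$; this is the case of the wonderful compactification of $G_\mathrm{ad}$.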
$\mathbb{Q}$-factoriality {#sez Q-factoriality} ------------------------- In order to give a necessary and sufficient condition for the $\mathbb{Q}$-factoriality of ${\widetilde}{X}_{\lambda}$ we need to determine the set of extremal rays of the associated cone ${\mathcal C}({\widetilde}{X}_{\lambda})$. If ${\alpha}\in {\Delta}{\smallsetminus}\operatorname{Supp}(\lambda)$, then ${\alpha}^\vee$ generates an extremal ray of ${\mathcal C}({\widetilde}{X}_{\lambda})$. If a simple coroot ${\alpha}^\vee \in {\mathcal C}({\widetilde}{X}_{\lambda})$ does not generate an extremal ray, then we can write $${\alpha}^\vee = \sum_{{\beta}\in {\Delta}{\smallsetminus}\{{\alpha}\}} a_{{\beta}}{\beta}^{\vee} - \sum_{{\beta}\in {\Delta}} b_{{\beta}}\omega_{{\beta}}^{\vee},$$ with $a_{\beta}, b_{\beta}{\geqslant}0$ for every ${\beta}$: this yields a contradiction since then we would have $\langle {\alpha},{\alpha}^\vee \rangle {\leqslant}0$. Recall that a convex cone is said to be *simplicial* if it is generated by linearly independent vectors; the following proposition is a particular case of a characterization of $\mathbb{Q}$-factoriality that M. Brion gave in [@Br2] in the general case of a spherical variety. We recall it in the case of interest to us. \[see [[@Br2 Proposition 4.2]]{}\] \[lem Q fattor\] The variety ${\widetilde}{X}_{\lambda}$ is $\mathbb{Q}$-factorial if and only if ${\mathcal C}({\widetilde}{X}_{\lambda})$ is simplicial. Therefore, since ${\mathcal C}({\widetilde}{X}_{\lambda})$ has maximal dimension, ${\widetilde}{X}_{\lambda}$ is $\mathbb{Q}$-factorial if and only if the number of extremal rays of the associated cone equals the rank of $G$. To describe such rays we need to introduce some more notation; the description will be slightly more complicated if $\Phi$ is of type ${\mathsf D}$ or ${\mathsf E}$. Denote ${\Delta}^e$ the set of extremal roots of ${\Delta}$ and set ${\Delta}{\smallsetminus}\operatorname{Supp}({\lambda}) = \bigcup_{i=1}^n I_i$ the decomposition into connected components. Denote $$I^e = \bigcup_{\substack{I_i \neq I_\mathsf{de} \\ I_i \cap {\Delta}^e \neq {\varnothing}}} I_i,$$ where $I_\mathsf{de}$ is defined as follows. If ${\Delta}$ is of type $\sf{D}$ or $\sf{E}$, denote ${\gamma}_\mathsf{de}$ the unique simple root which is adjacent to three other simple roots and, if it exists, denote $I_\mathsf{de} \subset {\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})$ the unique connected component such that ${\gamma}_\mathsf{de} \in I_\mathsf{de}$ and $|I_\mathsf{de} \cap {\Delta}^e| = 1$, otherwise define $I_\mathsf{de}$ to be the empty set. Denote $I^\ast_\mathsf{de} \subset I_\mathsf{de}$ the minimal connected subset such that ${\gamma}_\mathsf{de} \in I^\ast_\mathsf{de}$ and $I^\ast_\mathsf{de} \cap {\Delta}^e \neq {\varnothing}$, or define it to be the empty set otherwise. Finally define $$J({\lambda}) = \big({\Delta}{\smallsetminus}({\overline}{I^e} \cup I^\ast_\mathsf{de})\big) \cup \big({\Delta}^e {\smallsetminus}\operatorname{Supp}({\lambda})\big).$$ \[raggi estremali2\] The extremal rays of ${\mathcal C}({\widetilde}{X}_{\lambda})$ are generated by the simple coroots $\alpha^{\vee}$ with $\alpha\in {\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})$ and by the opposites of the fundamental coweights $-\omega_{\alpha}^{\vee}$ with $\alpha\in J({\lambda})$.
Recall that ${\mathcal C}({\widetilde}{X}_{\lambda})$ is generated by the simple coroots $\alpha^{\vee}$ with $\alpha\in {\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})$ together with the coweights $-\omega_{\alpha}^{\vee}$ with $\alpha\in {\Delta}$, and that every coroot $\alpha^{\vee}$ with $\alpha\in {\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})$ generates an extremal ray of ${\mathcal C}({\widetilde}{X}_{\lambda})$. A coweight $-\omega_{{\alpha}}^{\vee}$ does not generate an extremal ray if and only if it can be written as follows $$-\omega_{\alpha}^{\vee}=\sum_{{\beta}\in K} a_{{\beta}}{\beta}^{\vee} - \sum_{{\beta}\in H} b_{{\beta}} \omega_{{\beta}}^{\vee}$$ with $a_{\beta}>0 $ for every ${\beta}\in K$ and with $b_{\beta}>0$ for every ${\beta}\in H$, for suitable non-empty subsets $K\subset {\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})$ and $H \subset {\Delta}{\smallsetminus}\{{\alpha}\}$. Since the right-hand side of the equality pairs negatively with every simple root in $\partial{K}$, we get $\partial{K} = \{{\alpha}\}$. Notice that $K$ is connected. Indeed if $K' \subset K$ is a connected component then $\partial{K'} = \{{\alpha}\}$ and $\sum_{{\beta}\in K'} a_{{\beta}} \langle {\alpha}, {\beta}^{\vee} \rangle < 0$: therefore if $K$ contains two connected components we must have $$\sum_{{\beta}\in K} a_{{\beta}} \langle {\alpha}, {\beta}^{\vee} \rangle {\leqslant}-2.$$ On the other hand $\langle {\alpha}, \omega_{{\beta}}^{\vee} \rangle = 0$ for every ${\beta}\in H$; therefore, if $K$ is not connected, it follows that $$-1 = - \langle {\alpha}, \omega_{\alpha}^{\vee} \rangle = \sum_{{\beta}\in K} a_{{\beta}} \langle {\alpha}, {\beta}^{\vee} \rangle {\leqslant}-2.$$ Since $\partial{K}$ consists of a single root, $K$ contains an extreme of ${\Delta}$, thus we get $K\subset I^{e} \cup I_\mathsf{de}$. Suppose that ${\gamma}_\mathsf{de} \in K \subset I_\mathsf{de}$: then we get a contradiction since we would have $|\partial{K}|=2$. Therefore we get $K\subset I^{e} \cup (I^\ast_\mathsf{de} {\smallsetminus}\{{\gamma}_\mathsf{de}\})$ and ${\alpha}\in {\overline}{I^{e}} \cup I^\ast_\mathsf{de}$. Such a subset $K$ cannot exist if ${\alpha}\in {\Delta}^{e}{\smallsetminus}\operatorname{Supp}({\lambda})$, otherwise we would have $K = {\Delta}{\smallsetminus}\{{\alpha}\}$, which intersects $\operatorname{Supp}({\lambda})$. We then get that every $-{\omega}_{\alpha}^\vee$ with ${\alpha}\in J({\lambda})$ generates an extremal ray of ${\mathcal C}({\widetilde}{X}_{\lambda})$. Suppose conversely that ${\alpha}\not \in J({\lambda})$. Then we can construct a connected subset $K\subset I^{e} \cup (I^\ast_\mathsf{de} {\smallsetminus}\{{\gamma}_\mathsf{de}\})$ such that $\partial{K} = \{{\alpha}\}$. If ${\gamma}\in K\cap {\Delta}^e$, consider the fundamental coweight $({\omega}_{\gamma}^K)^\vee$ associated to ${\gamma}$ in the irreducible root subsystem associated to $K$: then we get $$({\omega}_{\gamma}^K)^\vee = \sum_{{\beta}\in K}a_{\beta}{\beta}^\vee = {\omega}_{\gamma}^\vee - m {\omega}^\vee_{\alpha},$$ where $a_{\beta}>0$ are rational coefficients and where $m>0$ is an integer. Therefore $-{\omega}_{\alpha}^\vee$ does not generate an extremal ray of ${\mathcal C}({\widetilde}{X}_{\lambda})$.
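For instance, let $G$ be of type ${\mathsf A}_2$ and ${\lambda}=\omega_1$. Then ${\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})=\{{\alpha}_2\}$ and $J({\lambda})=\{{\alpha}_2\}$, so the extremal rays of ${\mathcal C}({\widetilde}{X}_{\lambda})$ are generated by ${\alpha}_2^\vee$ and $-{\omega}^\vee_{{\alpha}_2}$; indeed $-{\omega}^\vee_{{\alpha}_1} = {\alpha}_2^\vee - 2{\omega}^\vee_{{\alpha}_2}$ does not generate an extremal ray. Since the number of extremal rays equals the rank of $G$, the cone is simplicial and ${\widetilde}{X}_{\omega_1}$ is $\mathbb{Q}$-factorial by Proposition \[lem Q fattor\].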
\[Q-fattorialita\] The variety ${\widetilde}{X}_{\lambda}$ is $\mathbb{Q}$-factorial if and only if the following conditions are fulfilled: - $\operatorname{Supp}({\lambda})$ is connected; - If $\operatorname{Supp}({\lambda})$ contains a unique element, then this element is an extreme of ${\Delta}$; - If ${\Delta}$ is of type ${\mathsf D}$ or ${\mathsf E}$, then $\operatorname{Supp}({\lambda})$ contains ${\gamma}_\mathsf{de}$ and at least two simple roots adjacent to ${\gamma}_\mathsf{de}$. By Proposition \[lem Q fattor\] together with Lemma \[raggi estremali2\] ${\widetilde}{X}_{\lambda}$ is $\mathbb{Q}$-factorial if and only if $|\operatorname{Supp}({\lambda})| = |J({\lambda})|$. Suppose that ${\widetilde}{X}_{\lambda}$ is $\mathbb{Q}$-factorial. Consider the dominant weight ${\lambda}' = \sum_{{\alpha}\not \in I^e \cup I^\ast_\mathsf{de}} {\omega}_{\alpha}$: then $J({\lambda}') = J({\lambda})$ and $$|{\Delta}| = |J({\lambda})| + |{\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})| {\geqslant}|J({\lambda}')| + |{\Delta}{\smallsetminus}\operatorname{Supp}({\lambda}')| {\geqslant}|{\Delta}|,$$ which implies $\operatorname{Supp}({\lambda}) = \operatorname{Supp}({\lambda}')$. This shows ${\Delta}{\smallsetminus}\operatorname{Supp}({\lambda}) = I^e \cup I^\ast_\mathsf{de}$, and we get the following decomposition of $J({\lambda})$: $$J({\lambda}) \cap \operatorname{Supp}({\lambda}) = {\Delta}{\smallsetminus}({\overline}{I^e}\cup I^\ast_\mathsf{de}), \qquad \quad J({\lambda}) {\smallsetminus}\operatorname{Supp}({\lambda}) = {\Delta}^e {\smallsetminus}\operatorname{Supp}({\lambda}).$$ If $I_\mathsf{de} \neq {\varnothing}$, set $I_\mathsf{de} \cap {\Delta}^e = \{{\alpha}_\mathsf{de}\}$. Define a surjective map $F: J({\lambda}) {\smallsetminus}\{{\alpha}_\mathsf{de}\} {\longrightarrow}\operatorname{Supp}({\lambda})$ as follows: $F$ is the identity on $J({\lambda}) \cap \operatorname{Supp}({\lambda})$, while if ${\alpha}\in J({\lambda}) {\smallsetminus}\operatorname{Supp}({\lambda})$ consider the connected component $K\subset {\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})$ containing ${\alpha}$ and define $F({\alpha})$ by the relation $\partial{K} = \{F({\alpha})\}$: since ${\alpha}\neq {\alpha}_\mathsf{de}$, it must be $|\partial{K}|=1$. Therefore $F$ is well defined and it is surjective since $\operatorname{Supp}({\lambda}) {\smallsetminus}J({\lambda}) = \partial{I^e}$. Therefore ${\Delta}{\smallsetminus}\operatorname{Supp}({\lambda}) = I^e$ and we get i). Being surjective, $F$ has to be injective as well; this easily implies both ii) and iii). Suppose conversely that $\operatorname{Supp}({\lambda})$ is connected, or equivalently that ${\Delta}{\smallsetminus}\operatorname{Supp}({\lambda}) = I^e$: then ii) and iii) imply $|{\Delta}^e {\smallsetminus}\operatorname{Supp}({\lambda})| = |\partial{I^e}|$. This shows that ${\widetilde}{X}_{\lambda}$ is $\mathbb{Q}$-factorial, since then $|J({\lambda})| + |{\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})| = |{\Delta}|$. \[cor:raggi estremali\] If ${\widetilde}{X}_{\lambda}$ is $\mathbb{Q}$-factorial, the extremal rays of ${\mathcal C}({\widetilde}{X}_{\lambda})$ are generated by: - the coroots $\alpha^{\vee}$ with $\alpha\in {\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})$, - the coweights $-\omega_{\alpha}^{\vee}$ with $\alpha\in \operatorname{Supp}({\lambda})^\circ \cup \big({\Delta}^e {\smallsetminus}\operatorname{Supp}({\lambda})\big)$. 
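As an illustration of Proposition \[Q-fattorialita\], let ${\Delta}$ be of type ${\mathsf D}_4$, so that ${\gamma}_\mathsf{de}={\alpha}_2$. Then ${\widetilde}{X}_{\omega_1}$ is not $\mathbb{Q}$-factorial (condition iii) fails) and ${\widetilde}{X}_{\omega_2}$ is not $\mathbb{Q}$-factorial (conditions ii) and iii) fail), while ${\widetilde}{X}_{\omega_1+\omega_2+\omega_3}$ is $\mathbb{Q}$-factorial.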
Smoothness {#sezione smoothness} ---------- Suppose that ${\Sigma}=\{{\lambda},{\lambda}_1,\ldots,{\lambda}_s\}$ is a simple set of dominant weights, where ${\lambda}$ is the maximal one. In this section we will prove the following generalization of Theorem B. \[smooth Xsigma\] The variety $X_{\Sigma}$ is smooth if and only if $X_{\lambda}$ is normal, $\mathbb{Q}$-factorial and every connected component of ${\Delta}{\smallsetminus}\operatorname{Supp}(\lambda)$ has type ${\mathsf A}$. $X_{\Sigma}$ is smooth if and only if $X_{\lambda}$ is smooth. To prove Theorem \[smooth Xsigma\], we will make use of a characterization of smoothness for arbitrary group compactifications given by D. Timashev in [@Ti]. For convenience, we will use a generalization of it which can be found in [@Ru] in the more general context of symmetric spaces. We recall it in the case of a simple group compactification. \[smooth-timashev\] The variety ${\widetilde}{X}_{{\lambda}}$ is smooth if and only if the following conditions are fulfilled: - All connected components of ${\Delta}{\smallsetminus}\operatorname{Supp}(\lambda)$ are of type ${\mathsf A}$ and there are no more than $|\operatorname{Supp}(\lambda)|$ of them. - The cone ${\mathcal C}({\widetilde}{X}_{\lambda})$ is simplicial and it is generated by a basis of the coweight lattice $\Lambda^{\vee}$. - One can enumerate the simple roots in order of their positions at Dynkin diagrams of connected components $I_k = \{\alpha_{1}^{k},\ldots,\alpha_{n_{k}}^{k}\}$ of ${\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})$ , $k=1,\ldots,n$, and partition the basis of the free semigroup ${\mathcal C}({\widetilde}{X}_{\lambda})^{\vee}\cap \mathbb{Z}{\Delta}$ into subsets $\{\pi_{1}^{k},\ldots,\pi_{n_{k}+1}^{k}\}$, $k=1,\ldots,p$, $p{\geqslant}n$, in such a way that $\langle\pi_{j}^{k}, (\alpha_{i}^{h})^\vee \rangle = \delta_{i,j}\delta_{h,k}$ and $\pi_{j}^{k} - \frac{j}{n_{k}+1} \pi_{n_{k}+1}^{k}$ is the $j$-th fundamental weight of the root system generated by $\{\alpha_{1}^{k},...,\alpha_{n_{k}}^{k}\}$ for all $j,k$. First, we prove that the conditions are necessary; since we only have to prove that $X_{\lambda}$ is normal, we may assume that ${\Delta}$ is non-simply laced. By Theorem \[smooth-timashev\] i), $\operatorname{Supp}({\lambda})$ contains at least one of the two simple roots $\alpha_S$, ${\alpha}_L$; suppose that $\operatorname{Supp}({\lambda})$ contains $\alpha_L$ but not ${\alpha}_S$. Denote $K =\{{\alpha}_1,\ldots,{\alpha}_l\} \subset {\Delta}{\smallsetminus}\operatorname{Supp}(\lambda)$ the connected component which contains ${\alpha}_S$ and number its simple roots starting from ${\alpha}_S$: therefore ${\alpha}_1 = {\alpha}_S$ and ${\alpha}_l \in {\Delta}^e$, moreover ${\overline}{K}$ is either of type ${\sf C}_{l+1}$ or of type $\sf{G}_2$. Set ${\omega}^\vee = (l+1)({\omega}^K_l)^\vee$, where $({\omega}^K_l)^\vee$ is the fundamental coweight associated to ${\alpha}_l$ in the root subsystem $\Phi_K$ associated to $K$; then $${\omega}^\vee = \sum_{i=1}^{l} i {\alpha}^\vee_{i} = (l+1) {\omega}^\vee_{{\alpha}_l} -m {\omega}_{{\alpha}_L}^\vee.$$ where $m = 2$ if ${\overline}{K}$ is of type ${\sf C}_{l+1}$ (with $l{\geqslant}1$) and $m=3$ if ${\overline}{K}$ is of type ${\sf G}_2$. If ${\overline}{K}$ is not of type ${\sf B}_2$, then ${\Delta}$ is either of type ${\sf C}_r$ (with $r>2$) or of type ${\sf F}_4$ or of type ${\sf G}_2$ and every simple coroot ${\beta}^\vee \in {\Delta}^\vee$ is a primitive element in $\Lambda^{\vee}$ (i.e. 
there does not exist $\pi^\vee\in {\Lambda}^\vee$ which satisfies $t\pi^\vee = {\beta}^\vee$ with $t>1$): therefore by Lemma \[raggi estremali2\] together with Theorem \[smooth-timashev\] ii) $\{{\alpha}^\vee_1, \ldots, {\alpha}^\vee_l, {\omega}^\vee_{{\alpha}_l} \}$ is part of a basis of ${\Lambda}^\vee$ and we get a contradiction since then the equality above would imply ${\omega}_{{\alpha}_L}^\vee\not \in {\Lambda}^\vee$. Otherwise ${\overline}{K}$ is of type ${\sf B}_2$, thus ${\Delta}$ is of type ${\sf B}_r$ and $\frac{1}{2}{\alpha}_S^\vee \in {\Lambda}^\vee$: then we get a contradiction since by Theorem \[smooth-timashev\] iii) there exists $\pi \in {\mathcal C}({\widetilde}{X}_{\lambda})^{\vee}\cap \mathbb{Z}{\Delta}$ such that $\langle \pi, {\alpha}_S^\vee \rangle = 1$. Let’s prove now that the conditions of Theorem \[smooth-timashev\] are verified if $X_{\lambda}$ is normal and $\mathbb{Q}$-factorial and every connected component of ${\Delta}{\smallsetminus}\operatorname{Supp}(\lambda)$ has type ${\mathsf A}$. Set $N = {\mathcal C}({\widetilde}{X}_{\lambda}) \cap {\Lambda}^\vee$, the monoid generated by the primitive elements of the extremal rays of ${\mathcal C}({\widetilde}{X}_{\lambda})$. To prove condition i), it is enough to notice as in Proposition \[Q-fattorialita\] that, since $\operatorname{Supp}({\lambda})$ is connected, we have ${\Delta}{\smallsetminus}\operatorname{Supp}({\lambda}) = I^e$ and the number of its connected components equals $|{\Delta}^e {\smallsetminus}\operatorname{Supp}({\lambda})| {\leqslant}|J({\lambda})| = |\operatorname{Supp}({\lambda})|$. To prove condition ii), let’s show that, if ${\beta}\in {\Delta}{\smallsetminus}J({\lambda}) = {\overline}{I^e}{\smallsetminus}{\Delta}^e$, then $-\omega_{{\beta}}^{\vee} \in N$. Denote $I =\{{\alpha}_1,\ldots,{\alpha}_l\} \subset {\Delta}{\smallsetminus}\operatorname{Supp}(\lambda)$ the connected component which contains ${\beta}$ in its closure and number its simple roots starting from the extreme of $I$ which is not an extreme of ${\Delta}$; therefore ${\alpha}_l \in {\Delta}^e$. Let $j$ be such that ${\beta}= {\alpha}_j$ or set $j = 0$ if ${\beta}\in \operatorname{Supp}({\lambda})$. Set $K=\{{\alpha}_{j+1},\ldots,{\alpha}_{l}\}$ and set ${\omega}^\vee = (l-j+1)({\omega}^K_l)^\vee$, where $({\omega}^K_l)^\vee$ is the fundamental coweight associated to ${\alpha}_l$ in the root subsystem $\Phi_K$ associated to $K$; then $${\omega}^\vee = \sum_{i=1}^{l-j} i {\alpha}^\vee_{j+i} = (l-j+1){\omega}^\vee_{{\alpha}_l} + \langle {\beta},{\alpha}_{j+1}^\vee \rangle {\omega}_{{\beta}}^\vee.$$ Since $X_{\lambda}$ is normal, by Theorem A we get $\langle {\beta},{\alpha}_{j+1}^\vee \rangle = -1$; therefore by Corollary \[cor:raggi estremali\] $-{\omega}_{{\beta}}^\vee \in N$. Finally, let’s show that condition iii) holds. Suppose that $K = \{{\alpha}_1, \ldots, {\alpha}_l \} \subset {\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})$ is a connected component, where the simple roots in $K$ are numbered starting from the extreme of $K$ which is not an extreme of ${\Delta}$, and define $$\pi_i^K = \left\{ \begin{array}{ll} ({\alpha}_i^\vee)^\ast & \textrm{ if $i{\leqslant}l$} \\ (-{\omega}^\vee_{{\alpha}_l})^\ast & \textrm{ if $i=l+1$} \end{array} \right.$$ where, if $\{v_1, \ldots, v_r\}$ is a basis of ${\Lambda}^\vee$, $\{v_1^\ast, \ldots, v_r^\ast\}$ denotes the dual basis of ${\Lambda}$. Therefore, if $\omega_{j}^{K}$ is the $j$-th fundamental weight of $\Phi_{K}$, we have $\omega_{j}^{K} = \pi^K_j - \frac{j}{l+1} \pi^K_{l+1}$.
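As an illustration of Theorem \[smooth Xsigma\], let $G$ be of type ${\mathsf A}_r$ and ${\lambda}=\omega_1$. Then $X_{\lambda}$ is normal by Theorem A, it is $\mathbb{Q}$-factorial by Proposition \[Q-fattorialita\], and ${\Delta}{\smallsetminus}\operatorname{Supp}({\lambda})=\{{\alpha}_2,\ldots,{\alpha}_r\}$ is of type ${\mathsf A}_{r-1}$: hence $X_{\omega_1}$ is smooth. This is consistent with the fact that $X_{\omega_1}$ is the closure of the orbit of the identity in ${\mathbb P}(\operatorname{End}({\Bbbk}^{r+1}))$, namely the whole projective space.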
Remarks and generalizations =========================== In this section we will consider the more general situation of compactifications of symmetric varieties. Let $G$ be as before and ${\sigma}:G\to G$ an involution of $G$. We denote by $H^\circ$ the subgroup of points fixed by ${\sigma}$ and by $H$ its normalizer. The notation is not completely coherent with those of previous sections: $G$ plays now the role that $G\times G$ played before, while $H^\circ$ has now the role that the diagonal of $G\times G$ had before. Let $\Omega^+$ be the set of dominant weights ${\lambda}$ such that ${V({{\lambda}})}$ has a non-zero vector fixed by $H^\circ$ and $\Omega$ the sublattice of ${\Lambda}$ generated by $\Omega^+$. The monoid $\Omega^+$ (resp. the lattice $\Omega$) is in a natural way the set of dominant weights (resp. the set of weights) of a (possibly non-reduced) root system ${\widetilde \Phi}$, which is called the [*restricted root system*]{}. For ${\lambda}\in \Omega^+$ we can consider the (unique) point $x_{\lambda}\in {\mathbb P}({V({{\lambda}})})$ fixed by $H$ and define $X_{\lambda}$ as the closure of the $G$-orbit of $x_{\lambda}$ in ${\mathbb P}({V({{\lambda}})})$. Proposition \[prp:supporto\] generalizes to this more general situation without any further comment. Normality of $X_{\lambda}$ and the closure of a maximal torus orbit. -------------------------------------------------------------------- Let $T\subset G$ be a maximal torus such that the dimension of $TH$ is maximal and let $Z_{\lambda}= {\overline}{T\,x_{\lambda}} \subset X_{\lambda}$. In [@Ru], it is proved that when $X_{\lambda}$ is normal then $Z_{\lambda}$ also is normal. The converse of this result does not hold in general. Indeed $Z_{\lambda}$ is always normal in the case of the $G\times G$-compactification of $G_\mathrm{ad}$. Generalization to symmetric varieties: normality {#ssez:simmetrichenormalita} ------------------------------------------------ The wonderful compactification has been defined in the more general situation of symmetric varieties and the description of the normalization of $X_{\lambda}$ generalizes thanks to the results contained in [@CM] and [@CDM] (which generalize [@Ka] and [@DC]). In particular, Lemma \[lem:general-normality\] holds here in general. However, in the case of symmetric varieties we do not have a clear description of the multiplication of sections as in Lemma \[lem:coefficientimatriciali\]. In particular, we have no analogue of Proposition \[prp:normalita\]. One may wonder whether the normality of $X_{\lambda}$ is equivalent to the analogous combinatorial condition on the weight ${\lambda}$, that is, ${\lambda}$ satisfies condition $(\star)$ w.r.t. the root system ${\widetilde \Phi}$; here is a counterexample. Let $G$ be of type ${\sf B}_2$ and let ${\sigma}$ be the involution of type B I: thus $G/H \simeq \mathrm{SO}(5)/\mathrm{S}\big(\mathrm{O}(3)\times \mathrm{O}(2)\big)$ and ${\widetilde\Delta}=2{\Delta}$. Consider ${\lambda}=2{\omega}_1\in{\Omega}^+$; then $X_{\lambda}$ is a normal embedding of $G/H$. Denote by ${\leqslant}_{\sigma}$ the dominance order w.r.t. the root system ${\widetilde \Phi}$ and suppose that $X_{\lambda}$ is normal. Then ${\lambda}$ satisfies > *for all $\mu\in{\Omega}^+$ such that $\mu{\leqslant}_{\sigma}{\lambda}$ there exists $n\in{\mathbb N}$ such that ${V({\mu+(n-1){\lambda}})}\subset > \mathrm S^n({V({{\lambda}})})$.* If one assumes that the multiplication map is as generic as possible, then also the converse is true. 
Generalization to symmetric varieties: smoothness ------------------------------------------------- In the setting of normal compactifications of symmetric varieties $G/H^\circ$, fix a maximal torus $T$ such that $TH^\circ$ has maximal dimension and a Borel subgroup $B\supset T$ such that $BH^\circ \subset G$ is dense. If $X$ is a simple normal compactification of $G/H$, denote ${\mathcal D}(X)$ the set of $B$-stable and not $G$-stable prime divisors of $X$ which contain the closed orbit. Denote $\rho : {\mathcal D}(X) \to {\Omega}^\vee$ the map defined by the evaluation of functions; by [@Vu Proposition 1] $\rho({\mathcal D}(X))$ is a basis of the restricted coroot system ${\widetilde}{\Phi}^\vee$. Since the map $\rho$ is not always injective, following the criterion of $\mathbb{Q}$-factoriality in [@Br2] in order to generalize Proposition \[Q-fattorialita\] we only need to assume that $\rho$ is injective on ${\mathcal D}(X)$, and the proof is the same. Such proposition is true also for compactifications of $G/H^\circ$, and not only of $G/H$, since $\mathbb Q$-factoriality concerns no integrality questions. Theorem \[smooth Xsigma\] also can be generalized to this setting with the same proof, but we do not have anymore the equivalence between property $(\star)$ and the normality of $X_{\lambda}$. Thus the theorem has to be reformulated as follows (recall that a simple normal spherical variety is always quasi-projective). \[smooth general\] A simple normal compactification $X$ of $G/H$ is smooth if and only if it is $\mathbb{Q}$-factorial, ${\Delta}{\smallsetminus}\rho({\mathcal D}(X))$ satisfies $(\star)$ and every connected component of $\rho({\mathcal D}(X))$ has type ${\mathsf A}$. [KKLV]{} N. Bourbaki, [*Éléments de mathématique. Fasc. XXXIV. Groupes et algèbres de Lie. Chapitres IV, V, VI*]{}, Actualités Scientifiques et Industrielles **1337**, Hermann Paris 1968. M. Brion, [*Variétés sphériques et théorie de Mori*]{}, Duke Math. J. **72** (1993) no. 2, 369–404. R. Chirivì, C. De Concini and A. Maffei, [*On normality of cones over symmetric varieties*]{}, Tohoku Math. J. (2) **58** (2006) no. 4, 599–616. R. Chiriv[ì]{} and A. Maffei, [ *Projective normality of complete symmetric varieties*]{}, Duke Math. J. **122** (2004), 93–123. C. De Concini, [*Normality and non normality of certain semigroups and orbit closures*]{}, Algebraic transformation groups and algebraic varieties, Encyclopaedia Math. Sci. **132**, Springer Berlin 2004, 15–35. C. De Concini and C. Procesi, [*Complete symmetric varieties*]{}, Invariant Theory, Lecture Notes in Math. **996**, Springer, Berlin, 1983, 1–44. S.S. Kannan, [*Projective normality of the wonderful compactification of semisimple adjoint groups*]{}, Math. Z. **239** (2002) 673–682. F. Knop, [*The Luna-Vust theory of spherical embeddings*]{}, Proceedings of the Hyderabad Conference on Algebraic Groups (Hyderabad, 1989), 225–249, Manoj Prakashan, Madras, 1991. F. Knop, H. Kraft, D. Luna and T. Vust, [*Local properties of algebraic group actions*]{}, Algebraische Transformationsgruppen und Invariantentheorie, DMV Sem. **13**, Birkhäuser, Basel, 1989, 63–75. A. Ruzzi, [*Smooth projective symmetric varieties with Picard number equal to one*]{}. To appear in Internat. J. Math. J.R. Stembridge, [*The partial order of dominant weights*]{}, Adv. Math. **136** (1998) no. 2, 340–364. D.A. Timashev, [*Equivariant compactifications of reductive groups*]{}, Sb. Math. **194** (2003) no. 3-4, 589–616. Th. 
Vust, [*Plongements d’espaces symétriques algébriques: une classification*]{}, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) **17** (1990), no. 2, 165–195.
--- abstract: 'We develop a method for estimating well-conditioned and sparse covariance and inverse covariance matrices from a sample of vectors drawn from a sub-gaussian distribution in high dimensional setting. The proposed estimators are obtained by minimizing the quadratic loss function and joint penalty of $\ell_1$ norm and variance of its eigenvalues. In contrast to some of the existing methods of covariance and inverse covariance matrix estimation, where often the interest is to estimate a sparse matrix, the proposed method is flexible in estimating both a sparse and well-conditioned covariance matrix simultaneously. The proposed estimators are optimal in the sense that they achieve the minimax rate of estimation in operator norm for the underlying class of covariance and inverse covariance matrices. We give a very fast algorithm for computation of these covariance and inverse covariance matrices which is easily scalable to large scale data analysis problems. The simulation study for varying sample sizes and variables shows that the proposed estimators performs better than several other estimators for various choices of structured covariance and inverse covariance matrices. We also use our proposed estimator for tumor tissues classification using gene expression data and compare its performance with some other classification methods.' author: - | Ashwini Maurya mauryaas@msu.edu\ Department of Statistics and Probability\ Michigan State University\ East Lansing, MI 48824, USA bibliography: - 'sample.bib' title: 'A Well-Conditioned and Sparse Estimation of Covariance and Inverse Covariance Matrices Using a Joint Penalty' --- Sparsity, Eigenvalue Penalty, Penalized Estimation Introduction ============ With the recent surge in data technology and storage capacity, today’s statisticians often encounter data sets where sample size $n$ is small and number of variables $p$ is very large: often hundreds, thousands and even million or more. Examples include gene expression data and web search problems \[@Clark1:7, @pass:21\]. For many of the high dimensional data problems, the choice of classical statistical methods becomes inappropriate for making valid inference. The recent developments in asymptotic theory deal with increasing $p$ as long as both $p$ and $n$ tend to infinity at some rate depending upon the parameters of interest. The estimation of covariance and inverse covariance matrix is a problem of primary interest in multivariate statistical analysis. Some of the applications include: **(i)** Principal component analysis (PCA) \[@Johnstone:14, @Zou:37\]:, where the goal is to project the data on “best" $k$-dimensional subspace, and where best means the projected data explains as much of the variation in original data without increasing $k$. **(ii)** Discriminant analysis \[@Mardia:19\]:, where the goal is to classify observations into different classes. Here estimates of covariance and inverse covariance matrices play an important role as the classifier is often a function of these entities. **(iii)** Regression analysis: If interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under mis-specification of the covariance structure. 
**(iv)** Gaussian graphical modeling \[@Mein:20, @Wainwright:27, @Yuan:34,@Yuan1:35\]:, the relationship structure among nodes can be inferred from inverse covariance matrix. A zero entry in the inverse covariance matrix implies conditional independence between the corresponding nodes.\ The estimation of large dimensional covariance matrix based on few sample observations is a difficult problem, especially when $n \asymp p$ (here $a_n \asymp b_n$ means that there exist positive constants $c$ and $C$ such that $c \le a_n/b_n \le C $). In these situations, the sample covariance matrix becomes unstable which explodes the estimation error. It is well known that the eigenvalues of sample covariance matrix are over-dispersed which means that the eigen-spectrum of sample covariance matrix is not a good estimator of its population counterpart \[@Marcenko:18, @Karoui1:16\]. To illustrate this point, consider $\Sigma_p=I_p$, so all the eigenvalues are $1$. A result from \[@Geman:12\] shows that if entries of $X_i$’s are i.i.d (let $X_i$’s have mean zero and variance 1) with a finite fourth moment and if $p/n \rightarrow \theta <1 $, then the largest sample eigenvalue $l_1$ satisfies: $$\begin{aligned} l_1~ \rightarrow ~(1+\sqrt{\theta})^2, ~~~~~~ a.s\end{aligned}$$ This suggests that $l_1$ is not a consistent estimator of the largest eigenvalue $\sigma_1$ of population covariance matrix. In particular if $n=p$ then $l_1$ tends to $4$ whereas $\sigma_1$ is $1$. This is also evident in the eigenvalue plot in Figure 2.1. The distribution of $l_1$ also depends on the underlying structure of the true covariance matrix. From Figure 2.1, it is evident that the smaller sample eigenvalues tend to underestimate the true eigenvalues for large $p$ and small $n$. For more discussion on this topic, see @Karoui1:16. To correct for this bias, a natural choice would be to shrink the sample eigenvalues towards some suitable constant to reduce the over-dispersion. For instance, @Stein:28 proposed an estimator of the form $\tilde{\Sigma}=\tilde{U} \Lambda (\tilde{\lambda}) \tilde{U}$, where $\Lambda (\tilde{\lambda})$ is a diagonal matrix with diagonal entries as transformed function of the sample eigenvalues and $\tilde{U}$ is the matrix of the eigenvectors. In another interesting paper @Ledoit:17 proposed an estimator that shrinks the sample covariance matrix towards the identity matrix. In another paper, @Karoui:15 proposed a non-parametric estimation of spectrum of eigenvalues and show that his estimator is consistent in the sense of weak convergence of distributions. The covariance matrix estimates based on eigen-spectrum shrinkage are well-conditioned in the sense that their eigenvalues are well bounded away from zero. These estimates are based on the shrinkage of the eigenvalues and therefore invariant under some orthogonal group i.e. the shrinkage estimators shrink the eigenvalues but eigenvectors remain unchanged. In other words, the basis (eigenvector) in which the data are given is not taken advantage of and therefore the methods rely on premise that one will be able to find a good estimate in any basis. In particular, it is reasonable to believe that the basis generating the data is somewhat nice. Often this translates into the assumption that the covariance matrix has particular structure that one should be able to take advantage of. In these situations, it becomes natural to perform certain form of regularization directly on the entries of the sample covariance matrix. 
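The over-dispersion described above is easy to reproduce numerically. The following minimal Python/NumPy sketch (ours, not part of the paper's simulations) draws data with $\Sigma_p=I_p$ and compares the extreme sample eigenvalues with the Geman limit $(1+\sqrt{\theta})^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 100                    # theta = p/n = 1, so the Geman limit is (1 + 1)^2 = 4
X = rng.standard_normal((n, p))    # true covariance is the identity; every true eigenvalue is 1
S = (X.T @ X) / n                  # sample covariance matrix
eig = np.linalg.eigvalsh(S)

theta = p / n
print("largest sample eigenvalue :", eig[-1])                  # close to 4, far above 1
print("smallest sample eigenvalue:", eig[0])                   # close to 0, far below 1
print("Geman limit (1+sqrt(theta))^2:", (1 + np.sqrt(theta)) ** 2)
```

The largest sample eigenvalue overshoots the true value while the smallest ones collapse towards zero, which is exactly the over-dispersion that the regularization discussed below is meant to correct.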
Much of the recent literature focuses on two broad classes of regularized covariance matrix estimation: i) one class relies on a natural ordering among variables, where one often assumes that variables far apart are weakly correlated, and ii) the other class makes no assumption on a natural ordering among variables. The first class includes the estimators based on banding and tapering \[@Bickel:3, @Cai1:6\]. These estimators are appropriate for a number of applications for ordered data (time series, spectroscopy, climate data). However, for many applications including gene expression data, prior knowledge of any canonical ordering is not available and searching over all permutations of possible orderings would not be feasible. In these situations, an $\ell_1$ penalized estimator becomes more appropriate since it yields a permutation-invariant estimate. To obtain a suitable estimate which is both well-conditioned and sparse, we introduce two regularization terms: **i)** an $\ell_1$ penalty on each of the off-diagonal elements of the matrix and, **ii)** a penalty proportional to the variance of the eigenvalues. The $\ell_1$ minimization problems are well studied in the covariance and inverse covariance matrix estimation literature \[@Freidman:11, @Banerjee:38, @Ravi1:24, @Bein:13, @Maurya:19 etc.\]. @Roth1:26 proposes an $\ell_1$ penalized log-likelihood estimator and shows that the estimator is consistent in Frobenius norm at the rate of $O_P\Big(\sqrt{\{(p+s)~log~p\}/{n}}\Big)$, as both $p$ and $n$ approach infinity. Here $s$ is the number of non-zero off-diagonal elements in the true covariance matrix. In another interesting paper, @Bein:13 propose an estimator of the covariance matrix as a penalized maximum likelihood estimator with a weighted lasso type penalty. In these optimization problems, the $\ell_1$ penalty results in a sparse and permutation-invariant estimator as compared to other $\ell_q, q \neq 1$ penalties. Another advantage is that the $\ell_1$ norm is a convex function, which makes it suitable for large scale optimization problems. A number of fast algorithms exist in the literature for covariance and inverse covariance matrix estimation \[@Freidman:11, @Roth:25\]. The eigenvalues variance penalty overcomes the over-dispersion in the sample covariance matrix so that the estimator remains well-conditioned. @Ledoit:17 proposed an estimator of the covariance matrix as a linear combination of the sample covariance and the identity matrix. Their estimator of the covariance matrix is well-conditioned but it is not sparse. @Roth:25 proposed an estimator of the covariance matrix based on a quadratic loss function and an $\ell_1$ penalty with a log-barrier on the determinant of the covariance matrix. The log-determinant barrier is a valid technique to achieve positive definiteness, but it is still unclear whether the iterative procedure proposed in @Roth:25 actually finds the right solution to the corresponding optimization problem. In another interesting paper, @Xue:31 proposed an estimator of the covariance matrix as a minimizer of a penalized quadratic loss function over the set of positive definite matrices. In their paper, the authors solve a positive definite constrained optimization problem and establish the consistency of the estimator. The resulting estimator is sparse and positive definite, but whether it overcomes the over-dispersion of the eigen-spectrum of the sample covariance matrix is hard to justify.
@Maurya:19 proposed a joint convex penalty as a function of the $\ell_1$ and trace norms (the trace norm is defined as the sum of the singular values of a matrix) for inverse covariance matrix estimation based on a penalized likelihood approach. In this paper, we propose the JPEN (Joint PENalty) estimators for covariance and inverse covariance matrix estimation and derive an explicit rate of convergence in both the operator and Frobenius norm. The JPEN estimators achieve the minimax rate of convergence under the operator norm for the underlying class of sparse covariance and inverse covariance matrices and hence are optimal. For more details see section $\S3$. One of the major advantages of the proposed estimators is that the associated algorithm is very fast, efficient and easily scalable to large scale data analysis problems. The rest of the paper is organized as follows. The next section highlights some background and the problem set-up for covariance and inverse covariance matrix estimation. In section 3, we describe the proposed estimators and establish their theoretical consistency. In section 4, we give an algorithm and compare its computational time with some other existing algorithms. Section 5 highlights the performance of the proposed estimators on simulated data, while an application of the proposed estimator to real life data is given in section 6. **Notation:** For a matrix $M$, let $\|M\|_1 $ denote its $\ell_1$ norm, defined as the sum of the absolute values of the entries of $M$, $\|M\|_F$ denote its Frobenius norm, defined as the square root of the sum of squares of the elements of $M$, $\|M\|$ denote its operator norm (also called spectral norm), defined as the largest absolute eigenvalue of $M$, $M^{-}$ denote the matrix $M$ with all diagonal elements set to zero, $M^{+}$ denote the matrix $M$ with all off-diagonal elements set to zero, $\sigma_i(M)$ denote the $i^{th}$ largest eigenvalue of $M$, $tr(M)$ denote its trace, $det(M)$ denote its determinant, $\sigma_{min}(M) $ and $\sigma_{max}(M)$ denote the minimum and maximum eigenvalues of $M$, $|M|$ denote its cardinality, and let $\text{sign}(M)$ be the matrix of signs of the elements of $M$. For any real $x$, let $\text{sign}(x) $ denote the sign of $x$, and let $|x|$ denote its absolute value. Background and Problem Set-up ============================= Let $X=(X_1, X_2, \cdots, X_p) $ be a zero-mean p-dimensional random vector. The focus of this paper is the estimation of the covariance matrix $\Sigma:=\mathbb{E}(XX^T)$ and its inverse $\Sigma^{-1}$ from a sample of independently and identically distributed data $\{ X^{(k)} \}^{n}_{k=1}$. In this section we describe the background and problem setup more precisely. The choice of loss function is very crucial in any optimization problem. An optimal estimator for a particular loss function may not be optimal for another choice of loss function. The recent literature in covariance and inverse covariance matrix estimation mostly focuses on estimation based on the likelihood function or a quadratic loss function \[@Freidman:11, @Banerjee:38, @Bickel:3, @Ravi1:24, @Roth:25, @Maurya:19\]. Maximum likelihood estimation requires a tractable probability distribution of the observations, whereas the quadratic loss function has no such requirement and is therefore fully non-parametric. The quadratic loss function is convex and, due to this analytical tractability, it is a widely applicable choice for many data analysis problems. Proposed Estimators ------------------- Let $S$ be the sample covariance matrix. Consider the following optimization problem.
$$\hat{\Sigma}_{\lambda,\gamma}=\operatorname*{arg\,min}_{\Sigma=\Sigma^T,tr(\Sigma)=tr(S)}~~\Big[ ||\Sigma-S||^2_2 + \lambda \|{\Sigma^-}\|_1 + \gamma\sum_{i=1}^{p} \big \{\sigma_i(\Sigma)-\bar{\sigma}_{\Sigma} \big \}^2\Big],$$ where $\bar{\sigma}_\Sigma$ is the mean of eigenvalues of $\Sigma$, $\lambda$ and $\gamma$ are some positive constants. Note that by penalty function $\|{\Sigma^-}\|_1$, we only penalize off-diagonal elements of $\Sigma$. The eigenvalues variance penalty term for eigen-spectrum shrinkage is chosen from the following points of interest: i) It is easy to interpret and ii) this choice of penalty function yields a very fast optimization algorithm. By constraint $tr(\Sigma)=tr(S)$, the total variation in $\hat{\Sigma}_{\lambda,\gamma}$ is same as that in sample covariance matrix $S$, however the eigenvalues of $\hat{\Sigma}_{\lambda,\gamma} $ are well-conditioned than those of $S$. From here onwards we suppress the dependence of $\lambda, \gamma $ on $\hat{\Sigma }$ and denote $\hat{\Sigma}_{\lambda,\gamma} $ by $\hat{\Sigma}$.\ \ For $\gamma=0$, the solution to (2.1) is the standard soft-thresholding estimator for quadratic loss function and its solution is given by (see $\S4$ for derivation of this estimator): $$\begin{aligned} \begin{split} \hat{\Sigma}_{ii}& =s_{ii} \\ \hat{\Sigma}_{ij}& =\text{sign}(s_{ij})\max\Big (|s_{ij}|-\frac{\lambda}{2},0\Big), ~~~~~~~~~~~~~i \neq j. \end{split}\end{aligned}$$ It is clear from this expression that a sufficiently large value of $\lambda$ will result in sparse covariance matrix estimate. But estimator $\hat{\Sigma}$ of (2.2) is not necesarily positive definite \[for more details here see @Xue:31\]. Moreover it is hard to say whether it overcomes the over-dispersion in the sample eigenvalues. The following eigenvalue plot (Figure (2.1)) illustrates this phenomenon for a neighbourhood type (see $\S5$ for details on description of neighborhood type of covariance matrix) covariance matrix. Here we simulated random vectors from multivariate normal distribution with sample size $n=50$ and number of covariates $~p=20$. ![*Comparison of Eigenvalues of Covariance Matrices* ](sam_ei_plot.pdf){width=".8\textwidth"} As is evident from Figure 2.1, eigenvalues of sample covariance matrix are over-dispersed as most of them are either too large or close to zero. Eigenvalues of the proposed Joint Penalty (JPEN) estimator and PDSCE (Positive Definite Sparse Covariance matrix Estimator (@Roth1:26) of the covariance matrix are well aligned with those of true covariance matrix. See $\S 5$ for detailed discussion. Another drawback of the estimator (2.2) is that the estimate can be negative definite.\ As argued earlier, to overcome the over-dispersion in eigen-spectrum of sample covariance matrix, we include eigenvalues variance penalty. To illustrate its advantage, consider $\lambda=0$. After some algebra, let $\hat{\Sigma}$ be the minimizer of (2.1), then it is given by: $$\hat{\Sigma}= (S+\gamma~t~I)/(1+\gamma),$$ where $I$ is the identity matrix, and $t=\sum_{i=1}^{p}S_{ii}/p$. After some algebra, conclude that for any $\gamma>0$: $$\begin{aligned} \sigma_{min} (\hat{\Sigma}) & = & \sigma_{min} (S+ \gamma~t~I)/(1+\gamma) \\ & \geq & \frac{\gamma~t}{1+\gamma}>0\end{aligned}$$ This means that the eigenvalues variance penalty improves $S$ to a positive definite estimator $\hat{\Sigma}$. However the estimator (2.3) is well-conditioned but need not be sparse. Sparsity can be achieved by imposing $\ell_1$ penalty on the entries of covariance matrix. 
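The two special cases above are easy to compute directly. The following Python/NumPy sketch (our own illustration; the function names are not from the paper) evaluates the soft-thresholding solution (2.2) obtained with $\gamma=0$ and the eigenvalue-shrinkage solution (2.3) obtained with $\lambda=0$, and checks their smallest eigenvalues on a toy data set:

```python
import numpy as np

def soft_threshold_cov(S, lam):
    """Minimizer of (2.1) for gamma = 0: soft-threshold the off-diagonal entries of S by lam/2."""
    Sig = np.sign(S) * np.maximum(np.abs(S) - lam / 2.0, 0.0)
    np.fill_diagonal(Sig, np.diag(S))      # diagonal entries are left untouched, as in (2.2)
    return Sig

def eigen_shrunk_cov(S, gamma):
    """Minimizer of (2.1) for lambda = 0: (S + gamma*t*I)/(1 + gamma) with t = tr(S)/p, as in (2.3)."""
    t = np.trace(S) / S.shape[0]
    return (S + gamma * t * np.eye(S.shape[0])) / (1.0 + gamma)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 200))         # n = 50 observations, p = 200 variables
S = np.cov(X, rowvar=False)
print(np.linalg.eigvalsh(soft_threshold_cov(S, 0.3)).min())   # can be negative
print(np.linalg.eigvalsh(eigen_shrunk_cov(S, 0.5)).min())     # bounded below by gamma*t/(1+gamma) > 0
```

The first print illustrates that thresholding alone need not produce a positive definite matrix, while the second confirms the lower bound derived above for the eigenvalue-shrunk estimate.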
Simulations have shown that, in general the minimizer of (2.1) is not positive definite for all values of $\lambda >0$ and $\gamma >0$. Here onwards we focus on correlation matrix estimation, and later generalize the method for covariance matrix estimation.\ To achieve both well-conditioned and sparse positive definite estimator we optimize the following objective function in $R$ over specific region of values of $(\lambda, \gamma)$ which depends upon sample correlation matrix $K$ and $\lambda,\gamma$. Here the condition $tr(\Sigma)=tr(S)$ reduces to $tr(R)=p$, and $t=1$. Consider the following optimization problem: $$\hat{R}_K=\operatorname*{arg\,min}_{R=R^T,tr(R)=p|(\lambda, \gamma) \in \hat{\mathscr{S}}^{K}_1}~~\Big[ ||R-K||^2_F + \lambda \|R^-\|_1 + \gamma\sum_{i=1}^{p} \big \{\sigma_i(R)-\bar{\sigma}_{R} \big \}^2\Big],$$ where $$\begin{aligned} \hat{\mathscr{S}}^{K}_1& = \Big \{(\lambda,\gamma): \lambda, \gamma >0, \lambda \asymp \gamma \asymp \sqrt{\frac{log~ p}{n}}, \forall \epsilon >0,\sigma_{min}\{(K+\gamma I)-\frac{\lambda}{2}*sign(K+\gamma I)\}>\epsilon \Big \},\end{aligned}$$ and $\bar{\sigma}_{R}$ is mean of the eigenvalues of $R$. For instance when $K$ is diagonal matrix, the set $\hat{\mathscr{S}}^{K}_1$ is given by: $\hat{\mathscr{S}}^{K}_1 = \Big \{(\lambda,\gamma): \lambda, \gamma >0, \lambda \asymp \gamma \asymp \sqrt{\frac{log~ p}{n}}, \forall \epsilon >0,\lambda <2(\gamma-\epsilon) \Big \}$. The minimization in (2.4) over $R$ is for fixed $ (\lambda, \gamma) \in \hat{\mathscr{S}}^{K}_1$. The proposed estimator of covariance matrix (based on regularized correlation matrix estimator $\hat{R}_K$) is given by $\hat{\Sigma}_K=({S^+})^{1/2}\hat{R}_K({S^+})^{1/2}$, where $S^+$ is the diagonal matrix of the diagonal elements of $S$. Furthermore Lemmas 3.1 and 3.2, respectively show that the objective function (2.4) is convex and estimator given in (2.4) is positive definite. Our Contribution ---------------- The main contributions are the following:\ **i)** The proposed estimators are both sparse and well-conditioned simultaneously. This approach allows to take advantage of a prior structure if known on the eigenvalues of the true covariance and the inverse covariance matrices.\ **ii)** We establish theoretical consistency of proposed estimators in both operator and Frobenius norm. The proposed JPEN estimators achieves the minimax rate of convergence in operator norm for the underlying class of sparse and well-conditioned covariance and inverse covariance matrices and therefore is optimal.\ **iii)** The proposed algorithm is very fast, efficient and easily scalable to large scale optimization problems. Analysis of JPEN Method ======================= **Def:** A random vector $X$ is said to have sub-gaussian distribution if for each $t \ge 0$ and $y \in \mathbb{R}^p $ with $\|y\|_2=1$, there exist $0< \tau < \infty $ such that $$\mathbb{P}\{|y^T(X-\mathbb{E}(X))|>t\} \le e^{-t^2/2\tau}$$ Although the JPEN estimators exists for any finite $2 \le n<p<\infty$, for theoretical consistency in operator norm we require $s~log~p=o(n)$ and for Frobenus norm we require $(p+s) ~log~p=o(n)$ where $s$ is the upper bound on the number of non-zero off-diagonal entries in true covariance matrix. 
For more details, see the remark after Theorem 3.1.\ Covariance Matrix Estimation ---------------------------- We make the following assumptions about the true covariance matrix $\Sigma_0$.\ **A0.** Let $X:=(X_1,X_2,\cdots,X_p)$ be a mean zero vector with covariance matrix $\Sigma_0$ such that each $X_i/ \sqrt{\Sigma_{0ii}}$ has subgaussian distribution with parameter $\tau$ as defined in (3.1).\ **A1.** With $ E=\{(i,j): \Sigma_{0ij} \neq 0, i \neq j \}, $ the $|E| \le s $ for some positive integer $s$.\ **A2.** There exists a finite positive real number $\bar{k} >0$ such that $ 1/\bar{k} \le \sigma_{min}(\Sigma_0) \le \sigma_{max}(\Sigma_0) \le \bar{k}$.\ Assumption A2 guarantees that the true covariance matrix $\Sigma_0$ is well-conditioned (i.e. all the eigenvalues are finite and positive). A well-conditioned means that \[@Ledoit:17)\] inverting the matrix does not explode the estimation error. Assumption A1 is more of a definition which says that the number of non-zero off diagonal elements are bounded by some positive integer. Theorem 3.1 gives the rate of convergence of the proposed correlation based covariance matrix estimator (2.4). The following Lemmas show that optimization problem in (2.4) is convex and the proposed JPEN estimator (2.4) is positive definite. The optimization problem in (2.4) is convex. The estimator given by (2.4) is positive definite for any $2 \le n < \infty $ and $p <\infty$. Let $(\lambda, \gamma) \in \hat{\mathscr{S}}^{K}_1$ and $\hat{\Sigma}_K$ be as defined in (2.4). Under Assumptions A0, A1, A2, $$\|\hat{R}_K-R_0\|_F=O_P \Big ( \sqrt{\frac{s~ log~p}{n}} \Big ) ~~~ \text{and} ~~~\|\hat{\Sigma}_K-\Sigma_0\|=O_P \Big ( \sqrt{\frac{(s+1) log~p}{n}} \Big ),$$ where $R_0$ is true correlation matrix. **Remark: 1.** The JPEN estimator $\hat{\Sigma}_K$ is minimiax optimal under the operator norm. In (@Cai2:40), the authors obtain the minimax rate of convergence in the operator norm of their covariance matrix estimator for the particular construction of parameter space $\mathscr{H}_0(c_{n,p}):=\Big \{ \Sigma : max_{1 \le i \le p}\sum_{i=1}^{p}I\{\sigma_{ij}\neq 0\} \leq c_{n,p} \Big \}$. They show that this rate in operator norm is $c_{n,p} \sqrt{log~p/n}$ which is same as that of $\hat{\Sigma}_K$ for $1 \leq c_{n,p}=\sqrt{s}$.\ [**2.**]{} @Bickel1:4 proved that under the assumption of $\sum_{j=1}|\sigma_{ij}|^q \leq c_0(p)$ for some $ 0 \leq q \leq 1$, the hard thresholding estimator of the sample covariance matrix for tuning parameter $\lambda \asymp \sqrt{(log~p)/n}$ is consistent in operator norm at a rate no worse than $ O_P\Big ( c_0(p) \sqrt{p}(\frac{log ~p}{n})^{(1-q)/2} \Big ) $ where $c_0(p)$ is the upper bound on the number of non-zero elements in each row. Here the truly sparse case corresponds to $q=0$. The rate of convergence of $\hat{\Sigma}_K$ is same as that of @Bickel1:4 except in the following cases:\ [**Case (i)**]{} The covariance matrix has all off diagonal elements zero except last row which has $ \sqrt{p}$ non-zero elements. Then $c_0(p)=\sqrt{p}$ and $ \sqrt{s}=\sqrt{2~\sqrt{p}-1}$. 
The opeartor norm rate of convergence for JPEN estimator is $O_P \Big ( \sqrt{\sqrt{p}~(log~p)/n} \Big )$ where as rate of Bickel and Levina’s estimator is $O_P \Big (\sqrt{p~(log~p)/n} \Big )$.\ [**Case (ii)**]{} When the true covariance matrix is tridiagonal, we have $c_0(p)=2$ and $s=2p-2$, the JPEN estimator has rate of $\sqrt{p~log~p/n}$ whereas the Bickel and Levina’s estimator has rate of $\sqrt{log~p/n}$.\ For the case $\sqrt{s} \asymp c_0(p)$ and JPEN has the same rate of convergence as that of Bickel and Levina’s estimator.\ **3.** The operator norm rate of convergence is much faster than Frobenius norm. This is due to the fact that Frobenius norm convergence is in terms of all eigenvalues of the covariance matrix whereas the operator norm gives the convergence of the estimators in terms of the largest eigenvalue.\ **4.** Our proposed estimator is applicable to estimate any non-negative definite covariance matrix.\ Note that the estimator $\hat{\Sigma}_K$ is obtained by regularization of sample correlation matrix in (2.4). In some application it is desirable to directly regularize the sample covariance matrix. The JPEN estimator of the covariance matrix based on regularization of sample covariance matrix is obtained by solving the following optimization problem: $$\hat{\Sigma}_S=\operatorname*{arg\,min}_{\Sigma=\Sigma^T,tr(\Sigma)=tr(S)|(\lambda, \gamma) \in \hat{\mathscr{S}}^{S}_1}~~\Big[ ||\Sigma-S||^2_F + \lambda \|\Sigma^-\|_1 + \gamma\sum_{i=1}^{p} \{\sigma_i(\Sigma)-\bar{\sigma}_{\Sigma}\}^2\Big],$$ where $$\begin{aligned} \hat{\mathscr{S}}^S_1& = \Big \{(\lambda,\gamma): \lambda,\gamma >0, \lambda \asymp \gamma \asymp \sqrt{\frac{log~ p}{n}}, \forall \epsilon >0, \sigma_{min}\{(S+\gamma t I)-\frac{\lambda}{2}*sign(S+\gamma t I)\}>\epsilon \},\end{aligned}$$ and $S$ is sample covariance matrix. The minimization in (3.3) over $\Sigma$ is for fixed $ (\lambda, \gamma) \in \hat{\mathscr{S}}^{S}_1$. The estimator $\hat{\Sigma}_S$ is positive definite and well-conditioned. Theorem 3.2 gives the rate of convergence of the estimator $\hat{\Sigma}_S$ in Frobenius norm. Let $(\lambda, \gamma) \in \hat{\mathscr{S}}^S_1$, and let $\hat{\Sigma}_S$ be as defined in (3.3). Under Assumptions A0, A1, A2, $$\|\hat{\Sigma}_S-\Sigma_0\|_F=O_P \Big ( \sqrt{\frac{(s+p) log~p}{n}} \Big )$$ As noted in @Roth1:26 the worst part of convergence here comes from estimating the diagonal entries. ### Weighted JPEN Estimator for the Covariance Matrix Estimation A modification of estimator $\hat{R}_{K}$ is obtained by adding positive weights to the term $(\sigma_i(R)-\bar{\sigma}_R)^2$. This leads to weighted eigenvalues variance penalty with larger weights amounting to greater shrinkage towards the center and vice versa. Note that the choice of the weights allows one to use any prior structure of the eigenvalues (if known) in estimating the covariance matrix. The weighted JPEN correlation matrix estimator $\hat{R}_A$ is given by : $$\hat{R}_A=\operatorname*{arg\,min}_{R=R^T,tr(R)=p|(\lambda, \gamma) \in \hat{\mathscr{S}}^{K,A}_1}~~\Big[ ||R-K||^2_F + \lambda \|R^-\|_1 + \gamma\sum_{i=1}^{p} a_i\{\sigma_i(R)-\bar{\sigma}_R\}^2\Big],$$ where $$\begin{aligned} \hat{\mathscr{S}}^{K,A}_1& = \Big \{(\lambda,\gamma): \lambda \asymp \gamma \asymp \sqrt{\frac{log~p}{n}}, \lambda \le \frac{(2~\sigma_{min}(K))(1+\gamma ~max(A_{ii})^{-1})}{(1+\gamma~ min(A_{ii}))^{-1}p}+ \frac{\gamma ~min(A_{ii})}{p} \Big \},\end{aligned}$$ and $A=\text{diag}(A_{11},A_{22},\cdots A_{pp})$ with $A_{ii}=a_i$. 
The proposed covariance matrix estimator is $\hat{\Sigma}_{K,A}=(S^{+})^{1/2}\hat{R}_A (S^{+})^{1/2}$. The optimization problem in (3.5) is convex and yields a positive definite estimator for each $(\lambda,\gamma) \in \hat{\mathscr{S}}^{K,A}_1$. A simple excercise shows that the estimator $\hat{\Sigma}_{K,A}$ has same rate of convergence as that of $\hat{\Sigma}_{S}$. Estimation of Inverse Covariance Matrix --------------------------------------- We extend the JPEN approach to estimate a well-conditioned and sparse inverse covariance matrix. Similar to the covariance matrix estimation, we first propose an estimator for inverse covariance matrix based on regularized inverse correlation matrix and discuss its rate of convergence in Frobenious and operator norm.\ **Notation:** We shall use $Z$ and $\Omega$ for inverse correlation and inverse covariance matrix respectively.\ **Assumptions:** We make the following assumptions about the true inverse covariance matrix $\Omega_0$. Let $\Sigma_0=\Omega_0^{-1}$.\ **B0.** Same as the assumption $A0$.\ **B1.** With $ H=\{(i,j): \Omega_{0ij} \neq 0, i \neq j \}$, the $|H| \le s $, for some positive integer $s$.\ **B2.** There exist $ 0< \bar{k} < \infty $ large enough such that $ (1/{\bar{k}}) \le \sigma_{min}(\Omega_0) \le \sigma_{max}(\Omega_0) \le \bar{k}$.\ Let $\hat{R}_K$ be a JPEN estimator for the true correlation matrix. By Lemma 3.2, $\hat{R}_K$ is positive definite. Define the JPEN estimator of inverse correlation matrix as the solution to the following optimization problem, $$\hat{Z}_{K}=\operatorname*{arg\,min}_{Z=Z^T,tr(Z)=tr(\hat{R}_K^{-1})|(\lambda,\gamma) \in \hat{\mathscr{S}}^{{K}}_2}\Big [ \|Z-\hat{R}_K^{-1} \|^2~+~\lambda\|Z^-\|_1~+~\gamma\sum_{i=1}^{p} \{\sigma_i(Z)- \bar{\sigma}(Z)\}^2 \Big ]$$ where $$\begin{aligned} \hat{\mathscr{S}}^{{K}}_2& = \Big \{(\lambda,\gamma): \lambda,\gamma >0, \lambda \asymp \gamma \asymp \sqrt{\frac{log~ p}{n}}, \forall \epsilon >0, \\ & ~~~~~~~~~~~~ \sigma_{min}\{(\hat{R}_K^{-1}+\gamma t_1 I)-\frac{\lambda}{2}*sign(\hat{R}_K^{-1}+\gamma t_1 I)\}>\epsilon \Big \},\end{aligned}$$ and $t_1$ is average of the diagonal elements of $\hat{R}_K^{-1}$. The minimization in (3.6) over $Z$ is for fixed $ (\lambda, \gamma) \in \hat{\mathscr{S}}^{{K}}_2$. The proposed JPEN estimator of inverse covariance matrix (based on regularized inverse correlation matrix estimator $\hat{Z}_{K}$) is given by $\hat{\Omega}_{{K}}=(S^+)^{-1/2}{\hat{Z}}_{{K}}(S^+)^{-1/2}$, where $S^+$ is a diagonal matrix of the diagonal elements of $S$. Moreover (3.6) is a convex optimization problem and $\hat{Z}_K$ is positive definite.\ Next we state the consistency of estimators $\hat{Z}_{{K}}$ and $\hat{\Omega}_{{K}}$. Under Assumptions B0, B1, B2 and for $(\lambda,\gamma) \in \hat{\mathscr{S}}^{{K}}_{2}$, $$\|\hat{Z}_{{K}}-R_0^{-1}\|_F=O_P \Big ( \sqrt{\frac{s~log~p}{n}} \Big ) ~~~\text{and} ~~~\|\hat{\Omega}_{{K}}-\Omega_0\|=O_P \Big ( \sqrt{\frac{(s+1)~log~p}{n}} \Big )$$ where $R_0^{-1}$ is the inverse of true correlation matrix. **Remark:1.** Note that the JPEN estimator $\hat{\Omega}_{{K}}$ achieves minimax rate of convergence for the class of covariance matrices satisfying assumption $B0$, $B1$, and $B2$ and therefore optimal. The similar rate is obtained in @Cai2:40 for their class of sparse inverse covariance matrices.\ Next we give another estimate of inverse covariance matrix based on $\hat{\Sigma}_{S}$. 
Consider the following optimization problem: $$\hat{\Omega}_S=\operatorname*{arg\,min}_{\Omega=\Omega^T,tr(\Omega)=tr(\hat{\Sigma}_{S}^{-1})|(\lambda, \gamma) \in \hat{\mathscr{S}}^{{S}}_2}~~\Big[ ||\Omega- \hat{\Sigma}_{S}^{-1}||^2_F + \lambda \|\Omega^-\|_1 + \gamma\sum_{i=1}^{p} \{\sigma_i(\Omega)-\bar{\sigma}_{\Omega}\}^2\Big],$$ where $$\begin{aligned} \hat{\mathscr{S}}^{{S}}_2&= \Big \{(\lambda,\gamma): \lambda,\gamma >0, \lambda \asymp \gamma \asymp \sqrt{\frac{log~ p}{n}}, ~\forall \epsilon >0, \\ & ~~~~~~~~~~~~\sigma_{min}\{(\hat{\Sigma}_S^{-1}+\gamma ~t_2~I)-\frac{\lambda}{2}*sign(\hat{\Sigma}_S^{-1}+\gamma t_2 I)\}>\epsilon \Big \}, $$ and $t_2$ is average of the diagonal elements of $\hat{\Sigma}_S $. The minimization in (3.8) over $\Omega$ is for fixed $ (\lambda, \gamma) \in \hat{\mathscr{S}}^{S}_2$. The estimator in (3.8) is positive definite and well-conditioned. The consistency result of the estimator $\hat{\Omega}_S$ is given in following theorem. Let $(\lambda, \gamma) \in \hat{\mathscr{S}}^{S}_2$ and let $\hat{\Omega}_S$ be as defined in (3.8). Under Assumptions B0, B1, and B2, $$\|\hat{\Omega}_S-\Omega_0\|_F=O_P \Big ( \sqrt{\frac{(s+p) log~p}{n}} \Big ).$$ ### Weighted JPEN Estimator for The Inverse Covariance Matrix Similar to weighted JPEN covariance matrix estimator $\hat{\Sigma}_{K,A}$, a weighted JPEN estimator of the inverse covariance matrix is obtained by adding positive weights $a_i $ to the term $(\sigma_i(Z)-1)^2$ in (3.8). The weighted JPEN estimator is $\hat{\Omega}_{{K},A}:=({S^{+}})^{-1/2}\hat{Z}_A ({S^{+}})^{-1/2}$, where $$\hat{Z}_A=\operatorname*{arg\,min}_{Z=Z^T,tr(Z)=tr(\hat{R}_K^{-1})|(\lambda, \gamma) \in \hat{\mathscr{S}}^{K,A}_2}~~\Big[ ||Z-\hat{R}_K^{-1}||^2_F + \lambda \|Z^-\|_1 + \gamma\sum_{i=1}^{p} a_i\{\sigma_i(Z)-1\}^2\Big],$$ with $$\begin{aligned} \hat{\mathscr{S}}^{{K},A}_2& = \Big \{(\lambda,\gamma): \lambda \asymp \gamma \asymp \sqrt{\frac{log~p}{n}}, \lambda \le \frac{(2~\sigma_{min}({R}_K^{-1}))(1+\gamma t_1 max(A_{ii})^{-1})}{(1+\gamma~ min(A_{ii})^{-1}p}+ \frac{\gamma min(A_{ii})}{p} \Big \},\end{aligned}$$ and $A=\text{diag}(A_{11},A_{22},\cdots A_{pp})$ with $A_{ii}=a_i$. The optimization problem in (3.10) is convex and yields a positive definite estimator for $(\lambda,\gamma) \in \hat{\mathscr{S}}^{K,A}_2$. A simple excercise shows that the estimator $\hat{Z}_{A}$ has similar rate of convergence as that of $\hat{Z}_K$. An Algorithm ============ Covariance Matrix Estimation: ----------------------------- The optimization problem (2.4) can be written as:\ $$\begin{aligned} \hat{R}_K=\operatorname*{arg\,min}_{R=R^T|(\lambda, \gamma) \in \hat{\mathscr{S}}^{K}_1 }~f(R),\end{aligned}$$ where $$\begin{aligned} f(R)=||R-K||^2_F + \lambda \|{R}^-\|_1 + \gamma\sum_{i=1}^{p} \{\sigma_i(R)-\bar{\sigma}(R)\}^2.\end{aligned}$$ Note that $\sum_{i=1}^{p} \{\sigma_i(R)-\bar{\sigma}(R)\}^2=tr(R^2)-2~tr(R)+p$, where we have used the constraint $tr(R)=p$. 
Therefore, $$\begin{aligned} f(R) & = & \|R-K\|_F^2+\lambda \|{R}^-\|_1 +\gamma~ tr(R^2)-2~ \gamma~tr(R) +p\\ & = & tr(R^2 (1+\gamma))-2tr\{R(K+\gamma I)\}+ tr(K^TK) +\lambda ~\|{R}^-\|_1 +p\\ & = & (1+\gamma)\{tr(R^2)-2/(1+\gamma) tr\{R(K+\gamma I)\}+ (1/(1+\gamma))tr(K^TK)\} \\ & &~~~~+\lambda ~\|{R}^-\|_1 +p \\ & = & (1+\gamma)\{\|R- (K+\gamma I)/(1+\gamma)\|^2_F+ (1/(1+\gamma))tr(K^TK)\} \\ & & ~~~~+\lambda ~\|{R}^-\|_1 +p.\end{aligned}$$ The solution of (4.1) is soft thresholding estimator and it is given by: $$\begin{aligned} \hat{R}_K =\frac{1}{1+\gamma}~\text{sign}(K)*\text{pmax}\{\text{abs}(K+\gamma ~I)-\frac{\lambda}{2},0\}\end{aligned}$$ with $(\hat{R}_{K})_{ii}=(K_{ii}+\gamma)/(1+\gamma)$, $pmax(A,b)_{ij}:=max(A_{ij},b)$ is elementwise max function for each entry of the matrix $A$. Note that for each $(\lambda,\gamma) \in \hat{\mathscr{S}}^{K}_1$, $\hat{R}_{K}$ is positive definite.\ \ [**Choice of $\lambda$ and $\gamma$:**]{} For a given value of $\gamma$, we can find the value of $\lambda $ satisfying: $$\begin{aligned} \sigma_{min}\{(K+\gamma I)-\frac{\lambda}{2}*sign(K+\gamma I)\}>0\end{aligned}$$ which can be simplified to $$\begin{aligned} \lambda < \frac{\sigma_{min}(K+\gamma I)}{C_{12}~\sigma_{max}(\text{sign}(K))}. \end{aligned}$$ For some $C_{12} \ge 0.5$. Such choice of $(\lambda,\gamma)\in \hat{\mathscr{S}}_1^{{K}}$, and the estimator $\hat{R}_{K}$ is positive definite. Smaller values of $C_{12}$ yeild a solution which is more sparse but may not be positive definite.\ \ [**Choice of weight matrix A:**]{} For optimization problem in (3.5), the weights are chosen in following way:\ Let $\mathscr{E}$ be the set of sorted diagonal elements of the sample covariance matrix $S$.\ **i)** Let $k$ be largest index of $\mathscr{E}$ such that $k^{th}$ elements of $\mathscr{E}$ is less than $1$. For $i \le k, ~a_i=\mathscr{E}_{i}$. For $k<i \le p,~ a_i=1/\mathscr{E}_{i}.$\ **ii)** $A=\text{diag}(a_1,a_2,\cdots,a_p),~\text{where}~ a_j=a_j/\sum_{i=1}^p a_i.$ Such choice of weights allows more shrinkage of extreme sample eigenvalues than the ones in center of eigen-spectrum. Inverse Covariance Matrix Estimation: ------------------------------------- To get an expression of inverse covariance matrix estimate, we replace $K$ by $\hat{R}_K^{-1}$ in (4.2), where $\hat{R}_K$ is a JPEN estimator of correlation matrix. We chose $(\lambda,\gamma) \in \hat{\mathscr{S}}^{{K}}_2$. For a given $\gamma$, we chose $\lambda>0$ satisfying:\ $$\begin{aligned} \sigma_{min}\{(\hat{R}_K^{-1}+\gamma t_1 I)-\frac{\lambda}{2}*sign(\hat{R}_K^{-1}+\gamma t_1 I)\}>0\end{aligned}$$ which can be simplified to $$\begin{aligned} \lambda < \frac{\sigma_{min}(\hat{R}_K^{-1}+\gamma t_1 I)}{C_{12}~\sigma_{max}(\text{sign}(\hat{R}_K^{-1}))}. \end{aligned}$$ Computational Complexity ------------------------ The JPEN estimator $\hat{\Sigma}_{K}$ has computational complexity of $O(p^2)$ as there are at most $3p^2$ multiplications for computing the estimator $\hat{\Sigma}_{K}$. The other existing algorithm Glasso ([@Freidman:11]), PDSCE ([@Roth1:26]) have computational complexity of $O(p^3)$. We compare the computational timing of our algorithm to some other existing algorithms Glasso (@Freidman:11), PDSCE (@Roth1:26). The exact timing of these algorithm also depends upon the implementation, platform etc. (we did our computations in $R$ on a AMD 2.8GHz processor). 
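Since the update (4.2) is available in closed form, the whole procedure amounts to one soft-thresholding pass over $K+\gamma I$ followed by a rescaling. The sketch below is our own Python/NumPy transcription of (4.2) together with a bound of the form (4.4); the authors' implementation is in R, and the grid values, the constant `C12` and the conservative use of the spectral norm of $\text{sign}(K)$ are our choices:

```python
import numpy as np

def jpen_correlation(K, lam, gam):
    """Closed-form JPEN solution (4.2): soft-threshold (K + gam*I) by lam/2 and rescale by 1/(1+gam)."""
    p = K.shape[0]
    M = K + gam * np.eye(p)
    R = np.sign(M) * np.maximum(np.abs(M) - lam / 2.0, 0.0) / (1.0 + gam)
    np.fill_diagonal(R, (np.diag(K) + gam) / (1.0 + gam))   # diagonal: (K_ii + gam)/(1 + gam)
    return R

def lambda_upper_bound(K, gam, C12=0.5):
    """Bound of the form (4.4) on lam for a given gam; spectral norm of sign(K) used to be conservative."""
    p = K.shape[0]
    smin = np.linalg.eigvalsh(K + gam * np.eye(p))[0]
    snorm_sign = np.abs(np.linalg.eigvalsh(np.sign(K))).max()
    return smin / (C12 * snorm_sign)

# illustrative usage on a sample correlation matrix
rng = np.random.default_rng(2)
X = rng.standard_normal((50, 100))
K = np.corrcoef(X, rowvar=False)
gam = np.sqrt(np.log(100) / 50)                 # gamma of order sqrt(log p / n)
lam = 0.9 * lambda_upper_bound(K, gam)          # stay strictly inside the bound
R_hat = jpen_correlation(K, lam, gam)
print(np.linalg.eigvalsh(R_hat)[0])             # smallest eigenvalue; (lam, gam) satisfies the condition above
```

For a covariance (rather than correlation) target, the same two functions can be applied to $\hat{R}_K^{-1}$ or to $S$ as described above, followed by the rescaling $\hat{\Sigma}_K=(S^{+})^{1/2}\hat{R}_K(S^{+})^{1/2}$.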
Following the approach of [@Bickel1:4], the optimal tuning parameter $(\lambda,\gamma)$ was obtained by minimizing the $5$-fold cross validation error $$(1/5) \sum_{i=1}^5\|\hat{\Sigma}_i^v-\Sigma_i^{-v}\|_1,$$ where $\hat{\Sigma}_i^v$ is the JPEN estimate of the covariance matrix based on $v=4n/5$ observations and $\Sigma_i^{-v}$ is the sample covariance matrix based on the remaining $(n/5)$ observations. Figure 4.1 illustrates the total computational time taken to estimate the covariance matrix by the $Glasso,~PDSCE$ and $JPEN$ algorithms for different values of $p$ for the Toeplitz type of covariance matrix on a log-log scale (see section $\S 5$ for the Toeplitz type of covariance matrix). ![image](timing.png){width="40.00000%"} Although the proposed method requires optimization over a grid of values of $(\lambda,\gamma) \in \hat{\mathscr{S}}^{K}_1$, our algorithm is very fast and easily scalable to large scale data analysis problems. Simulation Results ================== We compare the performance of the proposed method to other existing methods on simulated data for five types of covariance and inverse covariance matrices.\ **(i) Hub Graph:** Here the rows/columns of $\Sigma_0$ are partitioned into $J$ equally-sized disjoint groups: $ \{ V_1 \cup V_2~ \cup, ..., \cup~ V_J\} = \{1,2,...,p\},$ each group is associated with a $\bf pivotal$ row $k$. Let the size $|V_1| = s$. We set $\sigma_{0i,j}=\sigma_{0j,i}=\rho $ for $ i \in V_k $ and $\sigma_{0i,j}=\sigma_{0j,i}=0$ otherwise. In our experiment, $J=[p/s], k =1,s+1, 2s+1,...,$ and we always take $\rho= 1/(s + 1)$ with $J = 20$.\ **(ii) Neighborhood Graph:** We first uniformly sample $(y_1,y_2,...,y_n)$ from a unit square. We then set $\sigma_{0i,j}=\sigma_{0j,i}=\rho $ with probability ${(\sqrt{2\pi})}^{-1}exp( -4\|y_i-y_j\|^2)$. The remaining entries of $\Sigma_0$ are set to zero. The number of nonzero off-diagonal elements of each row or column is restricted to be smaller than $[1/\rho]$, where $\rho$ is set to 0.245.\ **(iii) Toeplitz Matrix:** We set $\sigma_{0i,j}=2$ for $i=j$; $\sigma_{0i,j}=|0.75|^{|i-j|}$ for $|i-j|=1,2$; and $\sigma_{0i,j}=0$ otherwise.\ **(iv) Block Diagonal Matrix:** In this setting $\Sigma_0$ is a block diagonal matrix with varying block size. For $p=500$ the number of blocks is 4 and for $p=1000$ the number of blocks is 6. Each block of the covariance matrix is taken to be a Toeplitz type matrix as in case (iii).\ **(v) Cov-I type Matrix:** In this setting, we first simulate a random sample $(y_1,y_2,...,y_p)$ from the standard normal distribution. Let $x_i=|y_i|^{3/2}*(1+1/p^{1+log(1+1/p^2)})$. Next we generate multivariate normal random vectors $\mathscr{Z}=(z_1,z_2,...,z_{5p})$ with mean vector zero and identity covariance matrix. Let $U$ be the matrix of eigenvectors of the sample covariance matrix of $\mathscr{Z}$. We take $\Sigma_0=UDU'$, where $D=\text{diag}(x_1,x_2,....x_p)$.
This is not a sparse setting but the covariance matrix has most of eigenvalues close to zero and hence allows us to compare the performance of various methods in a setting where most of eigenvalues are close to zero and widely spread as compared to structured covariance matrices in **(i)-(iv)**.\ [|r|p[1.6cm]{}|p[1.6cm]{}|p[1.6cm]{}|p[1.6cm]{}|]{}\ & &\ & p=500 & p=1000 & p=500 & p=1000\ Ledoit-Wolf & 1.54(0.102) & 2.96(0.0903) & 4.271(0.0394) & 2.18(0.11)\ Glasso & 0.322(0.0235) & 3.618(0.073) & 0.227(0.098) & 2.601(0.028)\ PDSCE & 3.622(0.231) & 4.968(0.017) & 1.806(0.21) & 2.15(0.01)\ BLThresh & 2.747(0.093) & 3.131(0.122) & 0.887(0.04) & 0.95(0.03)\ JPEN & 2.378(0.138) & 3.203(0.144) & 1.124(0.088) & 2.879(0.011)\ \ & &\ & p=500 & p=1000 & p=500 & p=1000\ Ledoit-Wolf & 2.13(0.103) & 2.43(0.043) & 1.07(0.165) & 3.47(0.0477)\ Glasso & 0.511(0.047) & 0.551(0.005) & 0.325(0.053) & 0.419(0.003)\ PDSCE & 0.735(0.106) & 0.686(0.006) & 0.36(0.035) & 0.448(0.002)\ BLThresh & 1.782(0.047) & 2.389(0.036) & 0.875(0.102)& 1.82(0.027)\ JPEN & 0.732(0.111) & 0.688(0.006) & 0.356(0.058) & 0.38(0.007)\ \ & &\ & p=500 & p=1000 & p=500 & p=1000\ Ledoit-Wolf & 1.36(0.054) & 2.89(0.028) & 1.1(0.0331) & 2.32(0.0262)\ Glasso & 0.608(0.054) & 0.63(0.005) & 0.428(0.047) & 0.419(0.038)\ PDSCE & 0.373(0.085) & 0.468(0.007) & 0.11(0.056) & 0.175(0.005)\ BLThresh & 1.526(0.074) & 2.902(0.033) & 0.870(0.028)& 1.7(0.026)\ JPEN & 0.454(0.0423) & 0.501(0.018) & 0.086(0.045) & 0.169(0.003)\ \ & &\ & p=500 & p=1000 & p=500 & p=1000\ Ledoit-Wolf & 1.526(0.074) & 2.902(0.033) & 1.967(0.041) & 2.344(0.028)\ Glasso & 2.351(0.156) & 3.58(0.079) & 1.78(0.087) & 2.626(0.019)\ PDSCE & 3.108(0.449) & 5.027(0.016) & 0.795(0.076) & 2.019(0.01)\ BLThresh & 0.858(0.040) & 1.206(0.059) & 0.703(0.039)& 1.293(0.018)\ JPEN & 2.517(0.214) & 3.205(0.16) & 1.182(0.084) & 2.919(0.011)\ \ & &\ & p=500 & p=1000 & p=500 & p=1000\ Ledoit-Wolf & 33.2(0.04) & 36.7(0.03) & 36.2(0.03) & 48.0(0.03)\ Glasso & 15.4(0.25) & 16.1(0.4) & 14.0(0.03) & 14.9(0.02)\ PDSCE & 16.5(0.05) & 16.33(0.04) & 16.9(0.03) & 17.5(0.02)\ BLThresh & 15.7(0.04) & 17.1(0.03) & 13.4(0.02) & 17.5(0.02)\ JPEN & 7.1(0.042) & 11.5(0.07) & 8.4(0.042) & 7.8(0.034)\ \[tab:addlabel\] We chose similar structure of $\Omega_0$ for simulations. For all these choices of covariance and inverse covariance matrices, we generate random vectors from multivariate normal distribution with varying $n$ and $p$. We chose $n=50,100$ and $p=500,1000$. We compare the performance of proposed covariance matrix estimator $\hat{\Sigma}_K$ to graphical lasso \[@Freidman:11\], PDSC Estimate \[@Roth1:26\], Bickel and Levina’s thresholding estimator (BLThresh) \[[@Bickel1:4]\] and Ledoit-Wolf \[[@Ledoit:17]\] estimate of covariance matrix. The JPEN estimate $\hat{\Sigma}_{K}$ was computed using R software(version 3.0.2). The graphical lasso estimate of the covariance matrix was computed using R package “glasso" (http://statweb.stanford.edu/ tibs/glasso/). 
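As a concrete illustration of one simulation setting, the sketch below (our own Python rendering, reusing the illustrative `jpen_correlation` helper sketched in the algorithm section above; the grid values are placeholders) builds the Toeplitz covariance of case **(iii)**, draws multivariate normal data and selects $(\lambda,\gamma)$ by the 5-fold cross-validation criterion of Section 4:

```python
import numpy as np

def toeplitz_cov(p):
    """Case (iii): 2 on the diagonal, 0.75^|i-j| for |i-j| in {1, 2}, and 0 elsewhere."""
    Sigma = 2.0 * np.eye(p)
    for k in (1, 2):
        Sigma += (0.75 ** k) * (np.eye(p, k=k) + np.eye(p, k=-k))
    return Sigma

def cv_error(X, lam, gam, folds=5, seed=0):
    """5-fold CV criterion: ell_1 distance between the JPEN fit and the held-out sample covariance."""
    n = X.shape[0]
    blocks = np.array_split(np.random.default_rng(seed).permutation(n), folds)
    err = 0.0
    for hold in blocks:
        train = np.setdiff1d(np.arange(n), hold)
        S_tr, S_ho = np.cov(X[train], rowvar=False), np.cov(X[hold], rowvar=False)
        d = np.sqrt(np.diag(S_tr))
        K = S_tr / np.outer(d, d)                                  # sample correlation matrix
        Sig_hat = np.outer(d, d) * jpen_correlation(K, lam, gam)   # (S^+)^{1/2} R_hat (S^+)^{1/2}
        err += np.abs(Sig_hat - S_ho).sum()
    return err / folds

p, n = 200, 50
X = np.random.default_rng(1).multivariate_normal(np.zeros(p), toeplitz_cov(p), size=n)
grid = [(l, g) for l in (0.05, 0.1, 0.2) for g in (0.1, 0.3)]      # placeholder grid
lam, gam = min(grid, key=lambda lg: cv_error(X, *lg))
```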
[|r|p[1.6cm]{}|p[1.6cm]{}|p[1.6cm]{}|p[1.6cm]{}|]{}\ & &\ & p=500 & p=1000 & p=500 & p=1000\ Glasoo & 4.144(0.523) & 1.202(0.042) & 0.168(0.136) & 1.524(0.028)\ PDSCE & 1.355(0.497) & 1.201(0.044) & 0.516(0.196) & 0.558(0.032)\ CLIME & 4.24(0.23) & 6.56(0.25) & 6.88(0.802) & 10.64(0.822)\ JPEN & 1.248(0.33) & 1.106(0.029) & 0.562(0.183) & 0.607(0.03)\ \ & &\ & p=500 & p=1000 & p=500 & p=1000\ Glasoo & 1.122(0.082) & 0.805(0.007) & 0.07(0.038) & 0.285(0.004)\ PDSCE & 0.717(0.108) & 0.702(0.007) & 0.358(0.046) & 0.356(0.005)\ CLIME & 10.5(0.329) & 10.6(0.219) & 6.98(0.237) & 10.8(0.243)\ JPEN & 0.684(0.051) & 0.669(0.003) & 0.34(0.024) & 0.337(0.002)\ \ & &\ & p=500 & p=1000 & p=500 & p=1000\ Glasoo & 1.597(0.109) & 0.879(0.013) & 1.29(0.847) & 0.428(0.007)\ PDSCE & 0.587(0.13) & 0.736(0.014) & 0.094(0.058) & 0.288(0.01)\ CLIME & 10.5(0.535) & 11.5(0.233) & 10.5(0.563) & 11.5(0.245)\ JPEN & 0.551(0.075) & 0.691(0.008) & 0.066(0.042) & 0.201(0.007)\ \ & &\ & p=500 & p=1000 & p=500 & p=1000\ Glasoo & 2.862(0.475) & 2.89(0.048) & 2.028(0.267) & 2.073(0.078)\ PDSCE & 1.223(0.5) & 1.238(0.065) & 0.49(0.269) & 0.473(0.061)\ CLIME & 4.91(0.22) & 7.597(0.34) & 5.27(1.14) & 8.154(1.168)\ JPEN & 1.151(0.333) & 2.718(0.032) & 0.607(0.196) & 2.569(0.057)\ \ & &\ & p=500 & p=1000 & p=500 & p=1000\ Glasoo & 54.0(0.19) & 190.(5.91) & 14.7(0.37) & 49.9(0.08)\ PDSCE & 28.8(0.19) & 45.8(0.32) & 16.9(0.04) & 26.9(0.08)\ CLIME & 59.8(0.82) & 207.5(3.44) & 15.4(0.03) & 53.7(0.69)\ JPEN & 26.3(0.36) & 7.0(0.07) & 15.7(0.08) & 23.5(0.3)\ \[tab:addlabel\] The Ledoit-Wolf estimate was obtained using code from (http://econ.uzh.ch/faculty/wolf/publications.html\#9). The PDSC estimate was obtained using PDSCE package (http://cran. r-project. org/web/ packages/PDSCE/index.html). The Bickel and Levina’s estimator was computed as per the algorithm given in their paper. For inverse covariance matrix performance comparison we include glasso, CLIME (@Cai1:6) and PDSCE. For each of covariance and inverse covariance matrix estimate, we calculate Average Relative Error (ARE) based on 50 iterations using following formula, $$\begin{aligned} ARE (\Sigma,\hat{\Sigma})= |log(f(S,\hat{\Sigma}))~-~log(f(S,\Sigma_0))|/|(log(f(S,\Sigma_0))|,\end{aligned}$$ ![*Heatmap of zeros identified in covariance matrix out of 50 realizations. White color is 50/50 zeros identified, black color is 0/50 zeros identified.* ](heatmap1.pdf){width="85.00000%"} where $ f(S,\cdot) $ is multivariate normal density given the sample covariance matrix $S$, $\Sigma_0$ is the true covariance, $\hat{\Sigma}$ is the estimate of $\Sigma_0$ based on one of the methods under consideration. Other choices of performance criteria are Kullback-Leibler used by @Yuan:34 and @Bickel1:4. The optimal values of tuning parameters were obtained over a grid of values by minimizing $5-$fold cross-validation as explained in $\S4$. The average relative error and their standard deviations (in percentage) for covariance and inverse covariance matrix estimates are given in Table 5.1 and Table 5.2, respectively. The numbers in the bracket are the standard errors of relative error based on the estimates using different methods. Among all the methods JPEN and PDSCE perform similar for most of choices of $n$ and $p$ for all five type of covariance matrices. This is due to the fact that both PDSCE and JPEN use quadratic optimization function with a different penalty function. 
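One plausible reading of the ARE criterion above, with $f(S,\cdot)$ the Gaussian log-density of the data summarized by $S$, can be coded as follows (a sketch under that assumption, not the authors' implementation):

```python
import numpy as np

def gauss_loglik(S, Sigma, n):
    """log f(S, Sigma): Gaussian log-likelihood of n zero-mean observations with sample covariance S."""
    p = S.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * n * (p * np.log(2.0 * np.pi) + logdet + np.trace(np.linalg.solve(Sigma, S)))

def are(S, Sigma_hat, Sigma0, n):
    """Average Relative Error for a single realization, per the formula in the text."""
    l_hat, l_0 = gauss_loglik(S, Sigma_hat, n), gauss_loglik(S, Sigma0, n)
    return abs(l_hat - l_0) / abs(l_0)
```

Averaging `are` over the simulated realizations yields the kind of summaries reported in Tables 5.1 and 5.2.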
The behavior of the Bickel and Levina estimator is quite good in the Toeplitz case, where it performs better than the other methods. For this type of covariance matrix, the entries away from the diagonal decay to zero and therefore soft-thresholding estimators like BLThresh perform better in this setting. However, for neighborhood and hub type covariance matrices, which are not necessarily of banded type, the Bickel and Levina estimator is not a natural choice, as it would fail to recover the underlying sparsity pattern. The performance of the Ledoit-Wolf estimator is not very encouraging for the Cov-I type matrix. The Ledoit-Wolf estimator shrinks the sample covariance matrix towards the identity and hence the eigenvalue estimates are strongly shrunk towards one. This is also visible in the eigenvalue plots in Figure 5.2 and Figure 5.3. For the Cov-I type covariance matrix, where most of the eigenvalues are close to zero and widely spread, the performance of the JPEN estimator is impressive. The eigenvalue plot in Figure 5.3 shows that, among all the methods, the JPEN estimates of the eigenvalues are the most consistent with the true eigenvalues. This clearly shows the advantage of the JPEN estimator of the covariance matrix when the true eigenvalues are dispersed or close to zero. The eigenvalue plot in Figure 5.2 shows that when the eigen-spectrum of the true covariance matrix is not highly dispersed, the JPEN and PDSCE estimates of the eigenvalues are almost the same. This phenomenon is also apparent in Figure 2.1. Also, the Ledoit-Wolf estimator heavily shrinks the eigenvalues towards the center and thus underestimates the true eigen-spectrum. ![*Eigenvalues plot for $n=100, p=50$ based on 50 realizations for neighborhood type of covariance matrix*](all_eig_nbd.pdf){width=".8\textwidth"} ![*Eigenvalues plot for $n=100, p=100$ based on 50 realizations for Cov-I type matrix*](log_log_plot1.pdf){width=".8\textwidth"} For the inverse covariance matrix, we compare the glasso, CLIME and PDSCE estimates with the proposed JPEN estimator. The JPEN estimator $\hat{\Omega}_{K}$ outperforms the other methods for most of the choices of $n$ and $p$ for all five types of inverse covariance matrices. Additional simulations (not included here) show that for $n \approx p$, all the underlying methods perform similarly and the estimates of their eigenvalues are also well aligned with the true values. However, in the high dimensional setting, for large $p$ and small $n$, their performance differs, as seen in the simulations of Table 5.1 and Table 5.2. Figure 5.1 shows the recovery of the non-zero and zero entries of the true covariance matrix by the JPEN estimator $\hat{\Sigma}_{K}$ based on 50 realizations. The estimator recovers the true zeros about 90% of the time for the Hub and Neighborhood types of covariance matrices. It also reflects the recovery of the true structure of the non-zero entries and the actual pattern among the rows/columns of the covariance matrix. To see the implication of the eigenvalue shrinkage penalty as compared to other methods, we plot (Figure 5.2) the eigenvalues of the estimated covariance matrices for $n=100$, $p=50$ for the neighborhood type of covariance matrix. The JPEN estimates of the eigen-spectrum are well aligned with the true ones, the closest being the PDSC estimates of the eigenvalues. Figure 5.3 shows the recovery of the eigenvalues based on the estimates using different methods for the Cov-I type covariance matrix. For this particular simulation, the eigenvalues are chosen differently than those described in (v) of $\S5$.
The eigenvalues of the true covariance matrix are taken to be very diverse, with maximum about $10^6$ and smallest eigenvalue about $10^{-6}$. For the Cov-I type of matrix, the JPEN estimates of the eigenvalues are better than those of the other methods. Colon Tumor Classification Example ================================== In this section, we compare the performance of the JPEN estimator of the inverse covariance matrix for tumor tissue classification using Linear Discriminant Analysis (LDA). The gene expression data (@Alan:2) consist of 40 tumorous and 22 non-tumorous adenocarcinoma tissues. After preprocessing, the data were reduced to a subset of 2,000 gene expression values with the largest minimal intensity over the 62 tissue samples (source: *http://genomics-pubs.princeton.edu/oncology /affydata/index.html*). In our analysis, we reduced the number of genes by selecting the $p$ most significant genes based on logistic regression. We obtain estimates of the inverse covariance matrix for $p=50,100,200$ and then use LDA to classify these tissues as either tumorous or non-tumorous (normal). We classify each test observation x to either class k = 0 or k = 1 using the LDA rule $$\begin{aligned} \delta_k(x)=\operatorname*{arg\,max}_k \Big \{ x^T\hat{\Omega}\hat{\mu_k}~-~\frac{1}{2} \hat{\mu_k}\hat{\Omega}\hat{\mu_k} ~+~log(\hat{\pi}_k) \Big\},\end{aligned}$$ where $\hat{\pi}_k$ is the proportion of class $k$ observations in the training data, $\hat{\mu}_k$ is the sample mean for class k on the training data, and $\hat{\Omega}:=\hat{\Sigma}^{-1}$ is an estimator of the inverse of the common covariance matrix computed on the training data. Tuning parameters $\lambda$ and $\gamma$ were chosen using 5-fold cross validation. To create training and test sets, we randomly split the data into a training and test set of sizes 42 and 20, respectively; following the approach used by Wang et al. (2007), the training set has 27 tumor samples and 15 non-tumor samples. We repeat the split at random 100 times and measure the average classification error. Method & p=50 & p=100 & p=200\ Logistic Regression & 21.0(0.84) & 19.31(0.89) & 21.5(0.85)\ SVM & 16.70(0.85) & 16.76(0.97) & 18.18(0.96)\ Naive Bayes & 13.3(0.75) & 14.33(0.85) & 14.63(0.75)\ Graphical Lasso & 10.9(1.3) & 9.4(0.89) & 9.8(0.90)\ Joint Penalty & **9.9(0.98)** & **8.9(0.93)** & **8.2(0.81)**\ Since we do not have a separate validation set, we do the 5-fold cross validation on the training data. At each split, we divide the training data into 5 subsets (folds), where 4 subsets are used to estimate the covariance matrix and one subset is used to measure the classifier’s performance. For each split, this procedure is repeated 5 times by taking one of the 5 subsets as validation data. An optimal combination of $\lambda $ and $\gamma$ is obtained by minimizing the $5$-fold cross validation error.\ The average classification errors with standard errors over the 100 splits are presented in Table 6.1. Since the sample size is less than the number of genes, we omit the inverse sample covariance matrix as it is not well defined and instead include the naive Bayes’ and support vector machine classifiers. Naive Bayes has been shown to perform better than the sample covariance matrix in high-dimensional settings (@Bickel2:39). Support Vector Machine (SVM) is another popular tool for high dimensional classification. Among all the methods, the covariance matrix based LDA classifiers perform far better than Naive Bayes, SVM and Logistic Regression.
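Applying the LDA rule above only requires the estimated precision matrix, the class means and the class proportions from the training split; a minimal Python sketch (ours, with hypothetical variable names) is:

```python
import numpy as np

def lda_classify(x, Omega_hat, means, priors):
    """LDA rule: assign x to the class k with the largest discriminant score."""
    scores = [x @ Omega_hat @ mu - 0.5 * mu @ Omega_hat @ mu + np.log(pi)
              for mu, pi in zip(means, priors)]
    return int(np.argmax(scores))

# hypothetical usage: Omega_hat from JPEN on the training data, means/priors estimated per class
# y_pred = [lda_classify(x, Omega_hat, [mu0, mu1], [pi0, pi1]) for x in X_test]
```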
For all the other classifiers, the classification performance deteriorates as $p$ increases. For larger $p$, i.e., when more genes are added to the data set, the classification performance of the JPEN based LDA classifier initially improves but eventually deteriorates for very large $p$. For $p=2000$, the classifier based on the inverse covariance matrix has an accuracy of $30\%$. This is due to the fact that, as the dimension of the covariance matrix increases, the estimator becomes much less informative. Summary ======= We have proposed and analyzed regularized estimation of large covariance and inverse covariance matrices using a joint penalty. The proposed JPEN estimators are optimal under the spectral norm for the underlying class of sparse and well-conditioned covariance and inverse covariance matrices. We also establish their theoretical consistency in Frobenius norm. One of their biggest advantages is that the optimization carries no computational burden and the resulting algorithm is very fast and easily scalable to large scale data analysis problems. The extensive simulations show that the proposed estimators perform well for a number of structured covariance and inverse covariance matrices. Also, when the eigenvalues of the underlying true covariance matrix are highly dispersed, JPEN outperforms the other methods (based on the simulation analysis). The JPEN estimator recovers the sparsity pattern of the true covariance matrix and provides a good approximation of the underlying eigen-spectrum, and hence we expect PCA to be one of the most important applications of the method. Although the proposed JPEN estimators of the covariance and inverse covariance matrices do not require any assumption on the structure of the true covariance and inverse covariance matrices respectively, any prior knowledge of the structure of the true covariance matrix might be helpful in choosing a suitable weight matrix and hence improving the estimation.\ Appendix A. {#appendix-a. .unnumbered} =========== [**Proof of Lemma 3.1**]{}\ Let $$f(R)=\|R-K\|^2+\lambda \|R^-\|_1+\gamma \sum_{i=1}^{p} \{\sigma_i(R)-\bar{\sigma}_R\}^2,$$ where $\bar{\sigma}_R$ is the mean of the eigenvalues of $R$. Due to the constraint $tr(R)=p$, we have $\bar{\sigma}_R=1$.
The third term of (.1) can be written as\ $$\sum_{i=1}^{p} \{\sigma_i(R)-\bar{\sigma}_R\}^2=tr(R^2)-2~tr(R)+p$$ We obtain, $$\begin{aligned} \begin{split} f(R)&= tr(R^2)-2~tr(RK)+tr(K^2)+\lambda\|R^-\|_1 + \gamma\{tr(R^2) -2~tr(R)+p\} \\ &= tr(R^2(1+\gamma))-2~tr(K+\gamma~I)+tr(K^2)+\lambda\|R^-\|_1 + p \\ &= (1+\gamma)\|R-(K+\gamma~I)/(1+\gamma)\|^2+tr(K^2)+\lambda \|R^-\|_1 + p \end{split}\end{aligned}$$ This is quadratic in $R$ with a $\ell_1$ penalty to the off-diagonal entries of $R$, therefore a convex function in $R$.\ \ [**Proof of Lemma 3.2**]{} The solution to (.2) satisfies: $$\begin{aligned} 2(R-(K+\gamma I))(1+\gamma)^{-1}+ \lambda \frac{\partial{\|R^-\|_1}}{\partial{R}}=0\end{aligned}$$ where $\frac{\partial{\|R^-\|_1}}{\partial{R}}$ is given by: $$\frac{\partial{\|R^-\|_1}}{\partial{R}} = \left\{ \begin{array}{lr} 1 & : if~~~R_{ij}>0 \\ -1 &: if ~~~ R_{ij} <0 \\ \tau \in (-1,1)& : if~~~ R_{ij}=0 \end{array} \right.$$ Note that $\|R^-\|_1$ has same value irrespective of of $R$, therefore the right hand side of (.2) is minimum if :\ $$\begin{aligned} \text{sign}(R)=\text{sign}(K+\gamma I)=\text{sign}(K)\end{aligned}$$ $\forall \epsilon >0$, using (.3), $\sigma_{min}\{ (K+\gamma I)-\frac{\lambda}{2} \text{sign}(K) \} >\epsilon $ gives a $(\lambda,\gamma) \in \hat{\mathscr{S}}^{K}_1$ and such a choice of $(\lambda,\gamma)$ gaurantees the estimator to be positive definite.\ [**Remark:**]{} Intuitively, a larger $\gamma$ shrinks the eigenvalues towards center which is 1, a larger $\gamma$ would result in positive definite estimator, whereas a larger $\lambda$ results in sparse estimate. A combination of $(\lambda, \gamma)$ results in a sparse and well-conditioned estimator. In particular case, when $K$ is diagonal matrix, the $\lambda<2*\gamma$.\ \ [**Proof of Theorem 3.1**]{} Define the function $Q(.)$ as following: $$Q(R)=f(R)-f(R_0)$$ where $R_0$ is the true correlation matrix and $R$ is any other correlation matrix. Let $R=UDU^T$ be eigenvalue decomposition of $R$, $D$ is diagonal matrix of eigenvalues and $U$ is matrix of eigenvectors. We have, $$\begin{aligned} \begin{split} Q(R)& = \|R-K\|_F^2+\lambda \|R^-\|_1+\gamma~ tr(D^2-2~D+p) \\ & ~~~-\|R_0-K\|_F^2 - \lambda\|R_0^-\|_1 -\gamma~ tr(D_0^2-2~D_0+p) \end{split}\end{aligned}$$ $R_0=U_0D_0U_0^T$ is eigenvalue decomposition of $R_0$. Let $ \Theta_n(M):=\{\Delta : \Delta=\Delta^T , ~ \|\Delta\|_2=Mr_n,~ 0< M < \infty ~\}$. The estimate $\hat{R}$ minimizes the $Q(R)$ or equivalently $ \hat{\Delta}=\hat{R}-R_0$ minimizes the $G(\Delta)=Q(R_0+\Delta)$. Note that $G(\Delta)$ is convex and if $\hat{\Delta}$ be its solution, then we have $G(\hat{\Delta}) \le G(0) = 0$. Therefore if we can show that $G(\Delta)$ is non-negative for $\Delta \in \Theta_n(M)$, this will imply that the $\hat{\Delta}$ lies within sphere of radius $Mr_n$. We require $r_n=o\Big (\sqrt{(p+s)~log~p/n}\Big )$. 
$$\begin{aligned} \|R-K\|_F^2-\|R_0-K\|_F^2 & =& tr(R^TR -2 R^TK+K^TK)-tr(R^T_0R_0-2R^T_0K+K^TK) \\ & = & tr(R^TR-R^T_0R_0)-2~tr((R-R_0)^TK) \\ & = & tr((R_0+\Delta)^T(R_0+\Delta)-R^T_0R_0)-2~tr(\Delta^TK)\\ & = & tr(\Delta^T\Delta)-2~tr(\Delta^T(K-R_0))\end{aligned}$$ Next, we bound the term involving $K$ in the above expression. We have $$\begin{aligned} |tr(\Delta^T(R_0-K))| & \le & \sum_{i\neq j} |\Delta_{ij}({R_0}_{ij}-K_{ij})| \\ & \le & \max_{i\neq j}(|{R_0}_{ij}-K_{ij}|) \|\Delta^-\|_1 \\ & \le& C_0 (1+\tau) \sqrt{\frac{log~p}{n}} \|\Delta^-\|_1 \le C_1 \sqrt{\frac{log~p}{n}} \|\Delta^-\|_1,\end{aligned}$$ which holds with high probability by a result (Lemma 1) from @Ravi1:24 on the tail inequality for the sample covariance matrix of sub-Gaussian random vectors, where $C_1=C_0 (1+\tau)$, $C_0 >0$. Next we obtain an upper bound on the terms involving $\gamma$ in the expression for $Q(R)$. We have $$\begin{aligned} \lefteqn{tr(D^2-2D) - tr(D_0^2-2D_0) } \\ & = & tr\{R^2-R_0^2\} - 2~tr\{R-R_0\} = tr((R_0+\Delta)^2)-tr(R^2_0) \\ & = & 2~tr(R_0 \Delta) +tr(\Delta^T\Delta) \le 2~\sqrt{s}\|\Delta\|_F+ \|\Delta\|^2_F, \end{aligned}$$ using the Cauchy–Schwarz inequality; the trace term $tr\{R-R_0\}=tr(\Delta)$ vanishes because $tr(R)=tr(R_0)=p$. To bound the term $ \lambda(\|R^-\|_1-\|R_0^-\|_1) =\lambda(\|\Delta^-+R_0^-\|_1-\|R_0^-\|_1) $, let $E$ be the index set as defined in Assumption A.2 of Theorem 3.2. Then using the triangle inequality, we obtain $$\begin{aligned} \lambda(\|\Delta^-+R_0^-\|_1-\|R_0^-\|_1) & = & \lambda(\|\Delta_E^-+R_{0}^-\|_1+\|\Delta_{\bar{E}}^-\|_1-\|R_0^-\|_1) \\ & \geq & \lambda(\|R_{0}^-\|_1-\|\Delta_E^-\|_1+\|\Delta_{\bar{E}}^-\|_1 -\|R_0^-\|_1) \\ & \geq & \lambda(\|\Delta_{\bar{E}}^-\|_1-\|\Delta_{E}^-\|_1)\end{aligned}$$ Let $ \lambda =(C_1/\epsilon)\sqrt{log~p/n}$ and $\gamma=(C_1/\epsilon_1)\sqrt{log~p/n}$, where $(\lambda, \gamma) \in \hat{\mathscr{S}}^{K}_1$. We obtain $$\begin{aligned} G(\Delta)& \geq & tr(\Delta^T\Delta)(1+\gamma)-2~C_1 \Big \{ \sqrt{\frac{log~p} {n}}(\|\Delta^{-}\|_1)+ \frac{1}{\epsilon_1}\sqrt{\frac{s~log~p} {n}} \|\Delta\|_F \Big \} \\ & & +\frac{C_1}{\epsilon} \sqrt{\frac{log~p}{n}} \big ( \|\Delta^-_{\bar{E}}\|_1 - \|\Delta^-_{E}\|_1 \big )\\ & \geq & \|\Delta\|_F^2(1+\gamma) -2C_1\sqrt{\frac{log~p}{n}} \big ( \|\Delta^-_{\bar{E}}\|_1 + \|\Delta^-_{{E}}\|_1 \big ) \\ & & +\frac{C_1}{\epsilon} \sqrt{\frac{log~p}{n}} \big ( \|\Delta^-_{\bar{E}}\|_1- \|\Delta^-_{E}\|_1 \big ) - \frac{2C_1}{\epsilon_1}\sqrt{\frac{s~log~p} {n}} \|\Delta\|_F.\end{aligned}$$ Also, because $\|\Delta^-_{E}\|_1=\sum_{(i,j) \in E, i \neq j } |\Delta_{ij}| \leq \sqrt{s} \|\Delta^-\|_F$, $$\begin{aligned} -2C_1\sqrt{\frac{log~p}{n}} \|\Delta^-_{\bar{E}}\|_1 +\frac{C_1}{\epsilon} \sqrt{\frac{log~p}{n}} {\|\Delta^-_{\bar{E}}\|}_1 & \geq & \sqrt{\frac{log~p}{n}} \|\Delta^-_{\bar{E}}\|_1 \big ( -2 C_1 + \frac{C_1}{\epsilon} \big ) \\ & \geq & 0\end{aligned}$$ for sufficiently small $\epsilon$. Therefore, $$\begin{aligned} G(\Delta) & \geq & \|\Delta\|_F^2 \big ( 1+\frac{C_1}{\epsilon_1}\sqrt{\frac{log~p}{n}} \big ) -C_1\sqrt{\frac{s~log~p}{n}} \|\Delta^+\|_F \{1+1/\epsilon+2/\epsilon_1\}\\ & \geq & \|\Delta\|_F^2\Big[1 +\frac{C_1}{\epsilon_1}\sqrt{\frac{log~p}{n}}-\frac{C_1}{M}\{1+1/\epsilon+2/\epsilon_1\} \Big] \\ & \geq & 0,\end{aligned}$$ for all sufficiently large $n$ and $M$, which proves the first part of the theorem.
To prove the operator norm consistency, we have $$\begin{aligned} \|\hat{\Sigma}_K-\Sigma_0\| & = & \|\hat{W}\hat{R}\hat{W}-WK W\| \\ & \le & \|\hat{W}-W\| \|\hat{R}-K\| \|\hat{W}-W\| \\ & & + \|\hat{W}-W\| (\|\hat{R}\|\|W\| +\|\hat{W}\| \|K\|) + \|\hat{R}-K\| \|\hat{W}\| \|W\|,\end{aligned}$$ using the sub-multiplicative norm property $\|AB\| \leq \|A\|\|B\|$. Since $\|K\|=O(1)$ and $\|\hat{R}-K\|_F=O(\sqrt{\frac{s~log~p}{n}})$, these together imply that $\|\hat{R}\|=O(1)$. Also, $$\begin{aligned} \|{\hat{W}}^2-W^2\| & = & \max_{\|x\|_2=1} \sum_{i=1}^p |({\hat{w}_i}^2-w_i^2)| x_i^2 \leq \max_{1 \leq i \leq p} |({\hat{w}_i}^2-w_i^2)| \sum_{i=1}^p x_i^2 \\ & = & \max_{1 \leq i \leq p} |({\hat{w}_i}^2-w_i^2)| = O \big ( \sqrt{\frac{log~p}{n}} \big ),\end{aligned}$$ which holds with high probability by a result (Lemma 1) from @Ravi1:24. Next we show that $\|\hat{W}-W\|\asymp \|\hat{W}^2-W^2\|$ (where A$\asymp$B means A=$O_P(B)$ and B=$O_P(A)$). We have $$\begin{aligned} \|\hat{W}-W\| & = & \max_{\|x\|_2=1} \sum_{i=1}^p |({\hat{w}_i}-w_i)| x_i^2 = \max_{\|x\|_2=1} \sum_{i=1}^p | \big (\frac{{{\hat{w}_i}}^2-w_i^2}{\hat{w}_i+w_i} \big ) | x_i^2 \\ & \asymp & \sum_{i=1}^p |({\hat{w}_i}^2-w_i^2)| x_i^2 = C_3 \|\hat{W}^2-W^2\|,\end{aligned}$$ where we have used the fact that the true standard deviations are well above zero, i.e., $\exists~ 0 < C_3 < \infty $ such that $1/C_3 \leq w^{-1}_i \leq C_3 ~\forall ~i=1,2,\cdots, p$, and that the sample standard deviations are all positive, i.e., $\hat{w}_i > 0 ~\forall ~i=1,2,\cdots,p.$ Now, since $\|\hat{W}^2-W^2\| \asymp \|\hat{W}-W\|$, it follows that $\|\hat{W}\|=O(1)$, and we have $\|\hat{\Sigma}_K-\Sigma_0\|^2=O \big(\frac{s~log~p}{n} +\frac{log~p}{n} \big )$. This completes the proof.\ \ **Proof of Theorem 3.2** Let $$\begin{aligned} f(\Sigma)=||\Sigma-S||^2_F + \lambda \|\Sigma^-\|_1 + \gamma\sum_{i=1}^{p} \{\sigma_i(\Sigma)-\bar{\sigma}_{\Sigma}\}^2.\end{aligned}$$ Similar to the proof of Theorem 3.1, define the function $Q_1(\cdot)$ as follows: $$Q_1(\Sigma)=f(\Sigma)-f(\Sigma_0)$$ where $\Sigma_0$ is the true covariance matrix and $\Sigma$ is any other covariance matrix. Let $\Sigma=UDU^T$ be the eigenvalue decomposition of $\Sigma$, where $D$ is the diagonal matrix of eigenvalues and $U$ is the matrix of eigenvectors. We have, $$\begin{aligned} \begin{split} Q_1(\Sigma)& = \|\Sigma-S\|_F^2+\lambda \|\Sigma^-\|_1+\gamma~ \{tr(D^2)-(tr(D))^2/p\} \\ & ~~~-\|\Sigma_0-S\|_F^2 - \lambda\|\Sigma_0^-\|_1 -\gamma~ \{tr(D_0^2)-(tr(D_0))^2/p\} \end{split}\end{aligned}$$ where $A=diag(a_1,a_2,\cdots,a_p)$ and $\Sigma_0=U_0D_0U_0^T$ is the eigenvalue decomposition of $\Sigma_0$. Write $\Delta=\Sigma-\Sigma_0$, and let $ \Theta_n(M):=\{\Delta : \Delta=\Delta^T , ~ \|\Delta\|_2=Mr_n,~ 0< M < \infty ~\}$. The estimate $\hat{\Sigma}$ minimizes $Q_1(\Sigma)$ or, equivalently, $ \hat{\Delta}=\hat{\Sigma}-\Sigma_0$ minimizes $G(\Delta)=Q_1(\Sigma_0+\Delta)$. Note that $G(\Delta)$ is convex, and if $\hat{\Delta}$ is its minimizer, then we have $G(\hat{\Delta}) \le G(0) = 0$. Therefore, if we can show that $G(\Delta)$ is non-negative for $\Delta \in \Theta_n(M)$, this will imply that $\hat{\Delta}$ lies within the sphere of radius $Mr_n$. We require $\sqrt{(p+s)~log~p}=o \Big (\sqrt{n} \Big ) $.
$$\begin{aligned} \|\Sigma-S\|_F^2-\|\Sigma_0-S\|_F^2 & =& tr(\Sigma ^T \Sigma -2 \Sigma^TS+S^TS)-tr(\Sigma_0^T\Sigma_0-2\Sigma_0^TS+S^TS) \\ & = & tr(\Sigma^T\Sigma-\Sigma_0^T\Sigma_0)-2~tr((\Sigma-\Sigma_0)S) \\ & = & tr((\Sigma_0+\Delta)^T(\Sigma_0+\Delta)-\Sigma_0^T\Sigma_0)-2~tr(\Delta^TS)\\ & = & tr(\Delta^T\Delta)-2~tr(\Delta^T(S-\Sigma_0))\end{aligned}$$ Next, we bound the term involving $S$ in the above expression. We have $$\begin{aligned} |tr(\Delta(\Sigma_0-S))| & \le & \sum_{i\neq j} |\Delta_{ij}({\Sigma_0}_{ij}-S_{ij})| + \sum_{i=1}^{p} |\Delta_{ii}({\Sigma_0}_{ii}-S_{ii})| \\ & \le & \max_{i\neq j}(|{\Sigma_0}_{ij}-S_{ij}|) \|\Delta^-\|_1 + \sqrt{p} \max_{1 \leq i \leq p} (|{\Sigma_0}_{ii}-S_{ii}|) \sqrt{\sum_{i=1}^{p} \Delta^2_{ii}} \\ & \le& C_0 (1+\tau) \max_{i}(\Sigma_{0ii}) \Big \{ \sqrt{\frac{log~p}{n}} \|\Delta^-\|_1+ \sqrt{\frac{p~log~p}{n}} \|\Delta^+\|_2 \Big \} \\ & \le & C_1 \Big \{ \sqrt{\frac{log~p}{n}} \|\Delta^-\|_1+ \sqrt{\frac{p~log~p}{n}} \|\Delta^+\|_2 \Big \},\end{aligned}$$ which holds with high probability by a result (Lemma 1) from @Ravi1:24, where $C_1=C_0 (1+\tau) \max_{i}(\Sigma_{0ii})$, $C_0 >0$, and $\Delta^+$ is the matrix $\Delta$ with all off-diagonal elements set to zero. Next we obtain an upper bound on the terms involving $\gamma$ in (3.7). We have $$\begin{aligned} \lefteqn{tr(D^2)-(tr(D))^2/p - tr(D_0^2)+(tr(D_0))^2/p } \\ & = & tr(\Sigma^2)-tr(\Sigma_0^2) - (tr(\Sigma))^2/p + (tr(\Sigma_0))^2/p \end{aligned}$$ $$\begin{aligned} \textbf{(i)}~~~\lefteqn{tr(\Sigma^2)-tr(\Sigma_0^2)} \\ & \leq & tr((\Sigma_0+\Delta)^2) -tr(\Sigma_0^2) \\ & = & tr(\Delta^2)+ 2~tr(\Delta\Sigma_0) \leq tr(\Delta^2) + C_1 \sqrt{s}\|\Delta\|_F\end{aligned}$$ $$\begin{aligned} \textbf{(ii)}~~~\lefteqn{(tr(\Sigma))^2 -(tr(\Sigma_0))^2}\\ & = & (tr(\Sigma_0+\Delta))^2 - (tr(\Sigma_0))^2 \\ & \leq & (tr(\Delta))^2 +2~tr(\Sigma_0)~tr(\Delta) \leq p~\| \Delta\|^2_F +2~\bar{k} p\sqrt{p} \|\Delta^+\|_F.\end{aligned}$$ Therefore the $\gamma$ term can be bounded by $2\|\Delta\|^2_F+(C_1\sqrt{s}+2\sqrt{p}\bar{k})\|\Delta\|_F$. We bound the term involving $\lambda$ as in the proof of Theorem 3.1. For $\lambda \asymp \gamma \asymp \sqrt{\frac{log~p}{n}}$, the rest of the proof follows very similarly to that of Theorem 3.1.\ \ [**Proof of Theorem 3.3.**]{} To bound the cross product term involving $\Delta$ and $\hat{R}_K^{-1}$, we have $$\begin{aligned} |tr((R_0^{-1}-\hat{R}_K^{-1})\Delta)| & = & |tr(R_0^{-1}(\hat{R}_K-R_0)\hat{R}_K^{-1}{\Delta})| \\ & \leq & \sigma_1(R_0^{-1})|tr((\hat{R}_K-R_0)\hat{R}_K^{-1} \Delta)| \\ & \leq & \bar{k}\sigma_1(\hat{R}_K^{-1})|tr((\hat{R}_K-R_0) \Delta)| \\ & \leq & \bar{k} \bar{k}_1|tr((\hat{R}_K-R_0)\Delta)|,\end{aligned}$$ where $\sigma_{min}(\hat{R}_K) \geq 1/\bar{k}_1 >0 $ is a positive lower bound on the eigenvalues of the JPEN estimate $\hat{R}_K$ of the correlation matrix $R_0$. Such a constant exists by Lemma 3.2. The rest of the proof closely follows that of Theorem 3.1.\ \ [**Proof of Theorem 3.4.**]{} We bound the term $tr((\hat{\Omega}_{S}-\Omega_0)\Delta)$ similarly to that in the proof of Theorem 3.3. The rest of the proof closely follows that of Theorem 3.2.
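To make the estimator concrete, here is a minimal sketch (our illustration, not the authors' code) of the closed form suggested by Lemmas 3.1–3.2: completing the square turns the correlation-matrix objective into $(1+\gamma)\|R-(K+\gamma I)/(1+\gamma)\|^2+\lambda\|R^-\|_1$ plus a constant, whose off-diagonal minimizer is an elementwise soft threshold. The tuning values below are arbitrary toy choices.

```python
import numpy as np

# Sketch of the JPEN correlation estimate implied by Lemmas 3.1-3.2:
# soft-threshold the off-diagonal entries of the sample correlation K,
# shrink by 1/(1+gamma), and keep the diagonal equal to one.
def jpen_correlation(K, lam, gam):
    """K: sample correlation matrix; lam, gam > 0 are tuning parameters."""
    R = np.sign(K) * np.maximum(np.abs(K) - lam / 2.0, 0.0) / (1.0 + gam)
    np.fill_diagonal(R, 1.0)   # diagonal of a correlation matrix stays at 1
    return R

# toy usage: shrink a noisy sample correlation matrix (p = 10, n = 50)
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
K = np.corrcoef(X, rowvar=False)
R_hat = jpen_correlation(K, lam=0.2, gam=0.5)
print(np.linalg.eigvalsh(R_hat).min())   # eigenvalues are pulled toward 1
```

For $(\lambda,\gamma)\in\hat{\mathscr{S}}^{K}_1$ the resulting matrix is positive definite by Lemma 3.2; the toy call above only illustrates the shrinkage behaviour.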
--- author: - 'Phoolendra K. Mishra and Kristopher L. Kuhlman' bibliography: - 'review\_paper.bib' title: 'Unconfined Aquifer Flow Theory - from Dupuit to present' --- Abstract ======== Analytic and semi-analytic solutions are often used by researchers and practitioners to estimate aquifer parameters from unconfined aquifer pumping tests. The non-linearities associated with unconfined (i.e., water table) aquifer tests make their analysis more complex than confined tests. Although analytical solutions for unconfined flow began in the mid-1800s with Dupuit, Thiem was possibly the first to use them to estimate aquifer parameters from pumping tests in the early 1900s. In the 1950s, Boulton developed the first transient well test solution specialized to unconfined flow. By the 1970s Neuman had developed solutions considering both primary transient storage mechanisms (confined storage and delayed yield) without non-physical fitting parameters. In the last decade, research into developing unconfined aquifer test solutions has mostly focused on explicitly coupling the aquifer with the linearized vadose zone. Despite the many advanced solution methods available, there still exists a need for realism to accurately simulate real-world aquifer tests. Introduction ============ Pumping tests are widely used to obtain estimates of hydraulic parameters characterizing flow and transport processes in the subsurface (e.g., @kruseman90 [@batu1998aquifer]). Hydraulic parameter estimates are often used in planning or engineering applications to predict flow and design of aquifer extraction or recharge systems. During a typical pumping test in a horizontally extensive aquifer, a well is pumped at constant volumetric flow rate and head observations are made through time at one or more locations. Pumping test data are presented as time-drawdown or distance-drawdown curves, which are fitted to idealized models to estimate aquifer hydraulic properties. For unconfined aquifers, properties of interest include hydraulic conductivity, specific storage, specific yield, and possibly unsaturated flow parameters. When estimating aquifer properties using pumping test drawdown data, one can use a variety of analytical solutions involving different conceptualizations and simplifying assumptions. Analytical solutions are impacted by their simplifying assumptions, which limit their applicability to characterize certain types of unconfined aquifers. This review presents the historical evolution of the scientific and engineering thought concerning groundwater flow towards a pumping well in unconfined aquifers (also referred to variously as gravity, phreatic, or water table aquifers) from the steady-state solutions of Dupuit to the coupled transient saturated-unsaturated solutions. Although it is sometimes necessary to use gridded numerical models in highly irregular or heterogeneous systems, here we limit our consideration to analytically derived solutions. Early Well Test Solutions ========================= Dupuit’s Steady-State Finite-Domain Solutions --------------------------------------------- [@dupuit1857] considered steady-state radial flow to a well pumping at constant volumetric flowrate $Q$ \[L$^3$/T\] in a horizontal homogeneous confined aquifer of thickness $b$ \[L\].
He used Darcy’s law [@darcy1856] to express the velocity of groundwater flow $u$ \[L/T\] in terms of radial hydraulic head gradient $\left(\partial h/\partial r\right)$ as $$\label{darcys-law} u=K\frac{\partial h}{\partial r},$$ where $K=kg/\nu$ is hydraulic conductivity \[L/T\], $k$ is formation permeability \[L$^2$\], $g$ is the gravitational constant \[L/T$^2$\], $\nu$ is fluid kinematic viscosity \[L$^2$/T\], $h=\psi+z$ is hydraulic head \[L\], $\psi$ is gage pressure head \[L\], and $z$ is elevation above an arbitrary datum \[L\]. Darcy derived a form equivalent to this expression for one-dimensional flow through sand-packed pipes. Dupuit was the first to apply Darcy’s law to converging flow by combining it with mass conservation $Q=\left(2\pi rb \right)u$ across a cylindrical shell concentric with the well, leading to $$\label{dupuit_1} Q=K\left( 2\pi rb\right)\frac{\partial h}{\partial r}.$$ Integrating between two radial distances $r_1$ and $r_2$ from the pumping well, Dupuit evaluated the confined steady-state head difference between the two points as $$\label{dupuit_confined} h(r_{2})-h(r_{1})=\frac{Q}{2\pi Kb}\log\left( \frac{r_2}{r_1}\right).$$ This is the solution for flow to a well at the center of a circular island, where a constant head condition is applied at the edge of the island ($r_2$). @dupuit1857 also derived a radial flow solution for unconfined aquifers by neglecting the vertical flow component. Following a similar approach to confined aquifers, @dupuit1857 estimated the steady-state head difference between two distances from the pumping well for unconfined aquifers as $$\label{dupuit_unconfined} h^{2}(r_{2}) - h^{2} (r_{1}) = \frac{Q}{\pi K} \log\left( \frac{r_2}{r_1}\right).$$ These two solutions are only strictly valid for finite domains; when applied to domains without a physical boundary at $r_2$, the outer radius essentially becomes a fitting parameter. The solutions are also used in radially infinite systems under pseudo-static conditions, when the shape of the water table does not change with time. The confined and unconfined solutions are equivalent when $b$ in the confined solution is taken as the average head $\left( h(r_1)+h(r_2)\right)/2$. In developing the unconfined solution, [@dupuit1857] used the following assumptions (now commonly called the Dupuit assumptions) in the context of unconfined aquifers: - the aquifer bottom is a horizontal plane; - groundwater flow toward the pumping wells is horizontal with no vertical hydraulic gradient component; - the horizontal component of the hydraulic gradient is constant with depth and equal to the water table slope; and - there is no seepage face at the borehole. These assumptions are one of the main approaches to simplifying the unconfined flow problem and making it analytically tractable. In the unconfined flow problem both the head and the location of the water table are unknowns; the Dupuit assumptions eliminate one of the unknowns. Historical Developments after Dupuit ------------------------------------ @narasimhan98 and @vries2007 give detailed historical accounts of groundwater hydrology and soil mechanics; only history relevant to well test analysis is given here. @forchheimer1886 first recognized that the Laplace equation $\nabla^2 h = 0$ governs two-dimensional steady confined groundwater flow (to which the confined Dupuit solution is a solution), allowing analogies to be drawn between groundwater flow and steady-state heat conduction, including the first application of conformal mapping to solve a groundwater flow problem.
@slichter1898 also arrived at the Laplace equation for groundwater flow, and was the first to account for a vertical flow component. Utilizing Dupuit’s assumptions, @forchheimer1898 developed the steady-state unconfined differential equation $\nabla^2 h^2=0$ (to which the unconfined Dupuit solution is a solution). @boussinesq1904 first gave the transient version of the confined groundwater flow equation $\alpha_s \nabla^2 h = \partial h/\partial t$ (where $\alpha_s=K/S_s$ is hydraulic diffusivity \[L$^2$/T\] and $S_s$ is specific storage \[1/L\]), based upon analogy with transient heat conduction. In Prague, @thiem1906 was possibly the first to use the confined steady-state solution for estimating $K$ from pumping tests with multiple observation wells [@simmons2008]. This equation (commonly called the Thiem equation) was tested in the 1930s both in the field (@wenzel1932recent performed a 48-hour pumping test with 80 observation wells in Grand Island, Nebraska) and in the laboratory (@wyckoff1932dupuitflow developed a 15-degree unconfined wedge sand tank to simulate converging flow). Both found the steady-state solution lacking in ability to consistently estimate aquifer parameters. @wenzel1942 developed several complex averaging approaches (e.g., the “limiting” and “gradient” formulas) to attempt to consistently estimate $K$ using steady-state confined equations for a finite system from transient unconfined data. @muskat1932partialpen considered partial-penetration effects in steady-state flow to wells, discussing the nature of errors associated with the assumption of uniform flux across the well screen in a partially penetrating well. Muskat’s textbook on porous media flow [@muskat1937book] summarized much of what was known in hydrology and petroleum reservoir engineering around the time of the next major advance in well test solutions by Theis. Confined Transient Flow ----------------------- [@theis1935] utilized the analogy between transient groundwater flow and heat conduction to develop an analytical solution for confined transient flow to a pumping well (see Figure \[fig:diagram\]). He initially applied his solution to unconfined flow, assuming instantaneous drainage due to water table movement. The analytical solution was based on a Green’s function heat conduction solution in an infinite axis-symmetric slab due to an instantaneous line heat source or sink [@carslaw1921]. With the aid of mathematician Clarence Lubin, Theis extended the heat conduction solution to a continuous source, motivated to better explain the results of pumping tests like the 1931 test in Grand Island. [@theis1935] gave an expression for drawdown due to pumping a well at rate $Q$ in a homogeneous, isotropic confined aquifer of infinite radial extent as an exponential integral $$\label{theis} s(r,t)=\frac{Q}{4\pi T}\int_{r^2 /(4 \alpha_s t)}^{\infty}\frac{e^{-u}}{u} \;\mathrm{d}u,$$ where $s=h_0(r)-h(t,r)$ is drawdown, $h_0$ is pre-test hydraulic head, $T=Kb$ is transmissivity, and $S=S_s b$ is storativity.
The Theis expression is a solution to the diffusion equation, with zero-drawdown initial and far-field conditions, $$s(r,t=0) = s(r\rightarrow \infty,t) = 0.$$ The pumping well was approximated by a line sink (zero radius), and the source term assigned there was based upon the Dupuit radial flux expression, $${\label{boulton_sink_well}} \lim_{r \rightarrow 0} r\frac{\partial s}{\partial r}=-\frac{Q}{2 \pi T}.$$ ![Unconfined well test diagram[]{data-label="fig:diagram"}](well-diagram.eps){width="60.00000%"} Although the transient governing equation was known through analogy with heat conduction, the transient storage mechanism (analogous to specific heat capacity) was not completely understood. Unconfined aquifer tests were known to experience slower drawdown than confined tests, due to water supplied by dewatering the zone near the water table, which is related to the formation specific yield (porosity less residual water). @muskat1934transient and [@hurst1934unsteady] derived solutions to confined transient radial flow problems for finite domains, but attributed transient storage solely to fluid compressibility. @jacob1940 derived the diffusion equation for groundwater flow in compressible elastic confined aquifers, using mass conservation and Darcy’s law, without recourse to analogy with heat conduction. @terzaghi1923 developed a one-dimensional consolidation theory which only considered the compressibility of the soil (in his case a clay), unknown at the time to most hydrologists [@batu1998aquifer]. @meinzer1928 studied regional drawdown in North Dakota, proposing the modern storage mechanism related to both aquifer compaction and the compressibility of water. @jacob1940 formally showed $S_s=\rho_w g(\beta_p + n\beta_w)$, where $\rho_w$ and $\beta_w$ are fluid density \[M/L$^3$\] and compressibility \[LT$^2$/M\], $n$ is dimensionless porosity, and $\beta_p$ is formation bulk compressibility. The axis-symmetric diffusion equation in radial coordinates is $$\label{diffusion} \frac{\partial ^2 s}{\partial r^2}+\frac{1}{r}\frac{\partial s}{\partial r}= \frac{1}{\alpha_s}\frac{\partial s}{\partial t}.$$ When deriving analytical expressions, the governing equation is commonly made dimensionless to simplify presentation of results. For flow to a pumping well, it is convenient to use $L_C = b$ as a characteristic length, $T_C = Sb^2/T$ as a characteristic time, and $H_C = Q/(4 \pi T)$ as a characteristic head. The dimensionless diffusion equation is $$\frac{\partial ^2 s_D}{\partial r_D^2}+\frac{1}{r_D}\frac{\partial s_D}{\partial r_D}=\frac{\partial s_D}{\partial t_D},$$ where $r_D=r/L_C$, $s_D=s/H_C$, and $t_D=t/T_C$ are scaled by characteristic quantities. The [@theis1935] solution was developed for field application to estimate aquifer hydraulic properties, but it saw limited use because it was difficult to compute the exponential integral for arbitrary inputs. [@wenzel1942] proposed a type-curve method that enabled graphical application of the [@theis1935] solution to field data. [@cooperjacob1946] suggested that for large values of $t_D$ ($t_D \geq 25$), the infinite integral in the [@theis1935] solution can be approximated as $$\label{JacobCooper} s_D(t_D,r_D) = \int_{r^2/(4\alpha_st)}^{\infty} \frac{e^{-u}}{u} \; \mathrm{d}u \approx \log_e \left(\frac{4 Tt}{r^2S}\right) - \gamma$$ where $\gamma \approx 0.57722$ is the Euler-Mascheroni constant. This leads to Jacob and Cooper’s straight-line simplification $$\Delta s \approx 2.3 \frac{Q}{4 \pi T}$$ where $\Delta s$ is the drawdown over one log-cycle (base 10) of time.
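As a quick numerical illustration of the Theis well function and the Cooper–Jacob approximation above, the short sketch below evaluates both; the pumping rate, transmissivity, storativity and distance are illustrative assumptions only, not values from any cited test.

```python
import numpy as np
from scipy.special import exp1   # exp1(u) = int_u^inf e^{-x}/x dx, the Theis well function

# Compare Theis drawdown with the Cooper-Jacob straight-line approximation.
Q, T, S, r = 1.0e-3, 5.0e-4, 1.0e-4, 10.0   # [m^3/s], [m^2/s], [-], [m] (assumed values)
gamma = 0.5772156649                         # Euler-Mascheroni constant

t = np.array([1.0e2, 1.0e3, 1.0e4, 1.0e5])   # times [s]
u = r**2 * S / (4.0 * T * t)
s_theis = Q / (4.0 * np.pi * T) * exp1(u)
s_jacob = Q / (4.0 * np.pi * T) * (np.log(4.0 * T * t / (r**2 * S)) - gamma)

for ti, st, sj in zip(t, s_theis, s_jacob):
    print(f"t = {ti:9.0f} s   Theis: {st:.4f} m   Cooper-Jacob: {sj:.4f} m")
```

The two columns agree more closely as $t_D$ grows, which is the regime in which the straight-line method is used.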
The intercept of the straight-line approximation is related to $S$ through the Cooper–Jacob expression above. This approximation made estimating hydraulic parameters much simpler at large $t_D$. @hantush1961 later extended Theis’ confined solution for partially penetrating wells. Observed Time-drawdown Curve ---------------------------- Before the time-dependent solution of @theis1935, distance drawdown was the diagnostic plot for aquifer test data. Detailed distance-drawdown plots require many observation locations (e.g., the 80 observation wells of @wenzel1936TheimTest). Re-analyzing results of the unconfined pumping test in Grand Island, [@wenzel1942] noticed that the [@theis1935] solution gave inconsistent estimates of $S_s$ and $K$, attributed to the delay in the yield of water from storage as the water table fell. The @theis1935 solution corresponds to the Dupuit assumptions for unconfined flow, and can only re-create a portion of observed unconfined time-drawdown profiles (either late or early). The effect of the water table must be taken into account through a boundary condition or source term in the governing equation to reproduce observed behavior in unconfined pumping tests. ![Drawdown data from Cape Cod [@moenchetal2001], observation well F377-037. Upper dashed curve is confined model of @hantush1961 with $S=S_sb$, lower dotted curve is same with $S=S_sb + S_y$. Solid curve is unconfined model of @neuman1974 using $S_y=0.23$.[]{data-label="fig:capecod"}](cape_cod_F377-037.eps){width="80.00000%"} [@walton1960] recognized three distinct segments characterizing different release mechanisms on the time-drawdown curve under water table conditions (see Figure \[fig:capecod\]). A log-log time-drawdown plot in an unconfined aquifer has a characteristic shape consisting of a steep early-time segment, a flatter intermediate segment and a steeper late-time segment. The early segment behaves like the @theis1935 solution with $S=S_s b$ (water release due to bulk medium relaxation), the late segment behaves like the @theis1935 solution with $S=S_s b + S_y$ [@gambolati1976transient] (water release due to water table drop), and the intermediate segment represents a transition between the two. Distance-drawdown plots from unconfined aquifer tests do not show a similar inflection or change in slope, and do not produce good estimates of storage parameters. Early Unconfined Well Test Solutions ==================================== Moving Water Table Solutions Without Confined Storage ----------------------------------------------------- The [@theis1935] solution for confined aquifers can only reproduce either the early or late segments of the unconfined time-drawdown curve (see Figure \[fig:capecod\]). [@boulton1954a] suggested it is theoretically unsound to use the @theis1935 solution for unconfined flow because it does not account for vertical flow to the pumping well. He proposed a new mechanism for flow towards a fully penetrating pumping well under unconfined conditions. His formulation assumed flow is governed by $\nabla^2 s = 0$, with transient effects incorporated through the water table boundary condition. He treated the water table (where $\psi=0$, located at $z=\xi$ above the base of the aquifer) as a moving material boundary subject to the condition $h\left( r,z=\xi,t\right)=z$.
He considered the water table without recharge to be comprised of a constant set of particles, leading to the kinematic boundary condition $$\label{dynamic} \frac{D}{Dt}\left(h - z \right) = 0,$$ which is a statement of conservation of mass for an incompressible fluid. @boulton1954a considered the Darcy velocity of the water table as $u_z=-\frac{K_z}{S_y}\frac{\partial h}{\partial z}$ and $u_r=-\frac{K_r}{S_y}\frac{\partial h}{\partial r}$, and expressed the total derivative as $$\label{material_derivative} \frac{D}{Dt}=\frac{\partial}{\partial t}- \frac{K_r}{S_y}\frac{\partial h}{\partial r}\frac{\partial}{\partial r}- \frac{K_z}{S_y}\frac{\partial h}{\partial z}\frac{\partial}{\partial z},$$ where $K_r$ and $K_z$ are radial and vertical hydraulic conductivity components. Using this total derivative, the kinematic boundary condition in terms of drawdown is $${\label{free_surface}} \frac{\partial s}{\partial t}-\frac{K_r}{S_y} \left( \frac{\partial s}{\partial r} \right)^2- \frac{K_z}{S_y} \left( \frac{\partial s}{\partial z} \right)^2=- \frac{K_z}{S_y}\frac{\partial s}{\partial z}.$$ @boulton1954a utilized the wellbore and far-field boundary conditions of @theis1935. He also considered that the aquifer rests on an impermeable flat horizontal boundary $\left. \partial h/\partial z \right|_{z=0}= 0$; this was also inferred by @theis1935 because of his two-dimensional radial flow assumption. [@dagan1967] extended Boulton’s water table solution to the partially penetrating case by replacing the wellbore boundary condition with $$\lim_{r \rightarrow 0} r\frac{\partial s}{\partial r}= \begin{cases}\frac{Q}{2\pi K (\ell - d)} & b-\ell < z < b-d \\ 0 & \text{otherwise}\end{cases},$$ where $\ell$ and $d$ are the upper and lower boundaries of the pumping well screen, as measured from the initial top of the aquifer. The two sources of non-linearity in the unconfined problem are: 1) the boundary condition is applied at the water table, the location of which is unknown *a priori*; 2) the boundary condition applied on the water table includes $s^2$ terms. In order to solve this non-linear problem both Boulton and Dagan linearized it by disregarding second order components in the free-surface boundary condition and forcing the free surface to stay at its initial position, yielding $$\label{boulton_linear_water_table} \frac{\partial s}{\partial t}=- \frac{K_z}{S_y}\frac{\partial s}{\partial z} \qquad\qquad z=h_0,$$ which now has no horizontal flux component after neglecting second-order terms. This linearized condition can be written in non-dimensional form as $$\label{dimensionless_MWT_bc} \frac{\partial s_D}{\partial t_D}=- K_D \sigma^{\ast} \frac{\partial s_D}{\partial z_D} \qquad\qquad z_D=1,$$ where $K_D=K_z/K_r$ is the dimensionless anisotropy ratio and $\sigma^{\ast}=S/S_y$ is the dimensionless storage ratio. Both the [@boulton1954a] and [@dagan1967] solutions reproduce the intermediate and late segments of the typical unconfined time-drawdown curve, but neither of them reproduces the early segment of the curve. Both are solutions to the Laplace equation, and therefore disregard confined aquifer storage, causing pressure pulses to propagate instantaneously through the saturated zone. Both solutions exhibit an instantaneous step-like increase in drawdown when pumping starts. Delayed Yield Unconfined Response --------------------------------- [@boulton1954b] extended Theis’ transient confined theory to include the effect of delayed yield due to movement of the water table in unconfined aquifers.
Boulton’s proposed solutions ([-@boulton1954b; -@boulton1963]) reproduce all three segments of the unconfined time-drawdown curve. In his formulation of delayed yield, he assumed that as the water table falls, water is released from storage (through drainage) gradually, rather than instantaneously as in the free-surface solutions of [@boulton1954a] and [@dagan1967]. This approach yielded an integro-differential flow equation in terms of vertically averaged drawdown $s^\ast$ as $$\label{boulton_solution} \frac{\partial ^2 s^\ast}{\partial r^2}+ \frac{1}{r}\frac{\partial s^\ast}{\partial r}= \left[\frac{S}{T}\frac{\partial s^\ast}{\partial t} \right]+ \left\lbrace\alpha S_y \int _0^t \frac{\partial s^\ast}{\partial \tau} e^{-\alpha (t-\tau )}\;\mathrm{d}\tau \right \rbrace$$ which Boulton linearized by treating $T$ as constant. The term in square brackets is instantaneous confined storage, the term in braces is a convolution integral representing storage released gradually since pumping began, due to water table decline. [@boulton1963] showed that the time when delayed yield effects become negligible is approximately equal to $\frac{1}{\alpha}$, leading to the term “delay index”. [@prickett1965] used this concept and, through analysis of a large amount of field drawdown data with the [@boulton1963] solution, established an empirical relationship between the delay index and physical aquifer properties. Prickett proposed a methodology for estimation of $S$, $S_y$, $K$, and $\alpha$ of unconfined aquifers by analyzing pumping tests with the [@boulton1963] solution. Although Boulton’s model was able to reproduce all three segments of the unconfined time-drawdown curve, it failed to explain the physical mechanism of the delayed yield process because of the non-physical nature of the “delay index” in the [@boulton1963] solution. [@streltsova1972a] developed an approximate solution for the decline of the water table and $s^\ast$ in fully penetrating pumping and observation wells. Like @boulton1954b, she treated the water table as a sharp material boundary, writing the two-dimensional depth-averaged flow equation as $$\label{streltsova_equation} \frac{\partial ^2 s^\ast}{\partial r^2}+ \frac{1}{r}\frac{\partial s^\ast}{\partial r}= \frac{S}{T}\left(\frac{\partial s^\ast}{\partial t}- \frac{\partial \xi}{\partial t} \right).$$ The rate of water table decline was assumed to be linearly proportional to the difference between the water table elevation $\xi$ and the vertically averaged head $b-s^\ast$, $$\label{streltsova_watertable} \frac{\partial \xi}{\partial t}=\frac{K_z}{S_yb_z} \left( s^\ast-b+\xi \right)$$ where $b_z=b/3$ is an effective aquifer thickness over which water table recharge is distributed into the deep aquifer. This relation can be viewed as an approximation to the zero-order linearized free-surface boundary condition of [@boulton1954a] and [@dagan1967]. Streltsova considered the initial condition $\xi (r,t=0)=b$ and used the same boundary condition at the pumping well and the outer boundary $(r\rightarrow \infty )$ used by [@theis1935] and [@boulton1963]. This equation has the solution [@streltsova1972b] $$\label{strelstsova_sol} \frac{\partial \xi}{\partial t} = -\alpha_T \int _0^t e^{-\alpha_T (t-\tau)} \frac{\partial s^\ast}{\partial \tau}\;\mathrm{d}\tau$$ where $\alpha_T = K_z/(S_yb_z)$. Substituting this expression into the depth-averaged flow equation produces the solution of [@boulton1954b; @boulton1963]; the two solutions are equivalent.
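The role of the delay index can be checked numerically. The sketch below is ours, not from the cited papers: it uses an assumed smooth drawdown history $s^\ast(t)$, discretizes Boulton's convolution term, and shows that once $t \gg 1/\alpha$ the term tracks $\partial s^\ast/\partial t$, consistent with the late-time behaviour in which storage acts like $S_s b + S_y$.

```python
import numpy as np

# Discrete check of Boulton's delayed-yield convolution
#   alpha * int_0^t (ds*/dtau) exp(-alpha (t - tau)) dtau
# against ds*/dt.  alpha and s*(t) below are assumed for illustration only.
alpha = 0.05                       # assumed "delay index" [1/time]
dt = 0.1
t = np.arange(0.0, 400.0 + dt, dt)
s_star = np.log1p(t)               # assumed vertically averaged drawdown history
ds = np.gradient(s_star, dt)

def boulton_term(i):
    """Riemann-sum approximation of the convolution up to time t[i]."""
    tau = t[: i + 1]
    return alpha * np.sum(ds[: i + 1] * np.exp(-alpha * (t[i] - tau))) * dt

for ti in (1.0 / alpha, 5.0 / alpha, 20.0 / alpha):
    i = int(round(ti / dt))
    print(f"t = {t[i]:6.1f}: convolution = {boulton_term(i):.4f}   ds*/dt = {ds[i]:.4f}")
```

The two printed quantities converge as $t$ grows well beyond $1/\alpha$, which is why $1/\alpha$ is interpreted as the time after which delayed yield becomes negligible.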
Boulton’s delayed yield theory (like that of Streltsova) does not account for flow in the unsaturated zone but instead treats the water table as a material boundary moving vertically downward under the influence of gravity. [@streltsova1973] used field data collected by [@meyer1962] to demonstrate that unsaturated flow had virtually no impact on the observed delayed process. Although Streltsova’s solution related Boulton’s delay index to physical aquifer properties, it was later found to be a function of $r$ [@neuman1975; @herrera1978]. The delayed yield solutions of Boulton and Streltsova do not account for vertical flow within the unconfined aquifer, owing to their simplifying assumptions; they cannot be extended to account for partially penetrating pumping and observation wells. Prickett’s pumping test in the vicinity of Lawrenceville, Illinois [@prickett1965] showed that specific storage in unconfined aquifers can be much greater than typically observed values in confined aquifers – possibly due to entrapped air bubbles or poorly consolidated shallow sediments. It is clear that the elastic properties of unconfined aquifers are too important to be disregarded. Delayed Water Table Unconfined Response --------------------------------------- Boulton’s ([-@boulton1954b; -@boulton1963]) models encountered conceptual difficulty explaining the physical mechanism of water release from storage in unconfined aquifers. [@neuman1972] presented a physically based mathematical model that treated the unconfined aquifer as compressible (like @boulton1954b [@boulton1963] and @streltsova1972a [@streltsova1972b]) and the water table as a moving material boundary (like @boulton1954a and @dagan1967). In Neuman’s approach the delayed aquifer response was caused by physical water table movement; he therefore proposed to replace the phrase “delayed yield” by “delayed water table response”. [@neuman1972] replaced the Laplace equation of [@boulton1954a] and [@dagan1967] by the diffusion equation; in dimensionless form it is $$\label{neuman_diffusion} \frac{\partial ^2 s_D}{\partial r_D^2}+ \frac{1}{r_D}\frac{\partial s_D}{\partial r_D}+ K_D\frac{\partial ^2s_D}{\partial z_D^2} = \frac{\partial s_D}{\partial t_D}.$$ Like @boulton1954a and @dagan1967, Neuman treated the water table as a moving material boundary, linearized it (using the same zero-order approximation as above), and treated the anisotropic aquifer as three-dimensional axis-symmetric. Like @dagan1967, @neuman1974 accounted for partial penetration. By including confined storage in the governing equation, Neuman was able to reproduce all three parts of the observed unconfined time-drawdown curve and produce parameter estimates (including the ability to estimate $K_z$) very similar to the delayed yield models. Compared to the delay index models, Neuman’s solution produced similar fits to data (often underestimating $S_y$, though), but [@neuman1975; @neuman1979] questioned the physical nature of Boulton’s delay index. He performed a regression fit between the @boulton1954b and @neuman1972 solutions, resulting in the relationship $$\label{alpha-regression} \alpha = \frac{K_z}{S_yb}\left[3.063-0.567 \log\left( \frac{K_Dr^2}{b^2}\right) \right]$$ demonstrating that $\alpha$ decreases linearly with $\log r$ and is therefore not a characteristic aquifer constant. When the logarithmic term is ignored, the relationship $\alpha=3K_z/(S_yb)$ proposed by [@streltsova1972a] is approximately recovered.
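A small numerical sketch of this regression (ours; the logarithm is taken as base 10 here, an assumption since the text does not give the base, and the aquifer properties are illustrative values only) shows $\alpha$ decreasing with distance and, when the log term is dropped, roughly recovering $3K_z/(S_y b)$.

```python
import math

# Evaluate Neuman's empirical fit for Boulton's delay index at a few distances.
def neuman_alpha(Kz, Sy, b, KD, r):
    # base-10 logarithm assumed; Kz [m/s], Sy [-], b [m], KD [-], r [m]
    return Kz / (Sy * b) * (3.063 - 0.567 * math.log10(KD * r**2 / b**2))

Kz, Sy, b, KD = 1.0e-5, 0.2, 20.0, 0.5   # assumed aquifer properties
for r in (5.0, 20.0, 80.0):
    print(f"r = {r:5.1f} m   alpha = {neuman_alpha(Kz, Sy, b, KD, r):.3e} 1/s")
print(f"3 Kz/(Sy b) = {3.0 * Kz / (Sy * b):.3e} 1/s   (log term ignored)")
```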
After comparative analysis of various methods for determination of specific yield, [@neuman1987] concluded that the water table response to pumping is a much faster phenomenon than drainage in the unsaturated zone above it. @malama2011 recently proposed an alternative linearization of the free-surface boundary condition, approximately including the effects of the neglected second-order terms, leading to the alternative water table boundary condition $$\label{malama} S_y \frac{\partial s}{\partial t} =- K_z \left( \frac{\partial s}{\partial z} + \beta \frac{\partial^2 s}{\partial z^2} \right) \qquad z=h_0$$ where $\beta$ is a linearization coefficient \[L\]. The parameter $\beta$ provides additional adjustment of the shape of the intermediate portion of the time-drawdown curve (beyond adjustments possible with $K_D$ and $\sigma^\ast$ alone), leading to improved estimates of $S_y$. When $\beta=0$, this condition simplifies to the linearized water table condition of Boulton and Dagan. Hybrid Water Table Boundary Condition ------------------------------------- The solution of [@neuman1972; @neuman1974] was accepted by many hydrologists “as the preferred model ostensibly because it appears to make the fewest simplifying assumptions” [@moenchetal2001]. Despite acceptance, [@nwankwor1984] and [@moench1995] pointed out that significant differences might exist between measured and model-predicted drawdowns, especially at locations near the water table, leading to significantly underestimated $S_y$ using Neuman’s models (e.g., see Figure \[fig:capecod\]). @moench1995 attributed the inability of Neuman’s models to give reasonable estimates of $S_y$ and to capture this observed behavior near the water table to the latter’s disregard of “gradual drainage”. In an attempt to resolve this problem, [@moench1995] replaced the instantaneous moving water table boundary condition used by Neuman with one containing a @boulton1954b delayed yield convolution integral, $$\label{moench_hybrid} \int _0^t\frac{\partial s}{\partial \tau} \sum _{m=1}^M \alpha _m e^{-\alpha _m (t-\tau )}\;\mathrm{d}\tau =- \frac{K_z}{S_y}\frac{\partial s}{\partial z}$$ This hybrid boundary condition ($M=1$ in @moench1995) included the convolution source term that @boulton1954b [@boulton1963] and @streltsova1972a [@streltsova1972b] used in their depth-averaged governing flow equations. In addition to this new boundary condition, [@moench1995] included a finite radius pumping well with wellbore storage, conceptually similar to how @papadopulosandcooper1967 modified the solution of @theis1935. In all other respects, his definition of the problem was similar to [@neuman1974]. Moench’s solution resulted in improved fits to experimental data and produced more realistic estimates of specific yield [@moenchetal2001], including the use of multiple delay parameters $\alpha_m$ [@moench2003]. @moenchetal2001 used this hybrid condition with $M = 3$ to estimate hydraulic parameters in the unconfined aquifer at Cape Cod. They showed that $M=3$ enabled a better fit to the observed drawdown data than obtained by $M<3$ or the model of [@neuman1974]. Similar to the parameter $\alpha $ in Boulton’s model, the physical meaning of $\alpha _m$ is not clear. Unconfined Solutions Considering Unsaturated Flow ================================================= As an alternative to linearizing the water table condition of @boulton1954a, the unsaturated zone can be explicitly included. The non-linearity of unsaturated flow is substituted for the non-linearity of the free-surface boundary condition. By considering the vadose zone, the water table is internal to the domain, rather than a boundary condition.
The model-data misfit in Figure \[fig:capecod\] at “late intermediate” time is one of the motivations for considering the mechanisms of delayed yield and the effects of the unsaturated zone. Unsaturated Flow Without Confined Aquifer Storage ------------------------------------------------- [@kroszynskidagan1975] were the first to account analytically for the effect of the unsaturated zone on aquifer drawdown. They extended the solution of [@dagan1967] by accounting for unsaturated flow above the water table. They used Richards’ equation for axis-symmetric unsaturated flow in a vadose zone of thickness $L$ $$\label{richards} K_r\frac{1}{r}\frac{\partial}{\partial r}\left( k(\psi )r\frac{\partial \sigma}{\partial r}\right)+ K_z\frac{\partial}{\partial z}\left( k(\psi )\frac{\partial \sigma}{\partial z}\right) = C(\psi)\frac{\partial \sigma }{\partial t} \qquad \xi < z <b+L$$ where $\sigma = b + \psi_a - h$ is unsaturated zone drawdown \[L\], $\psi _a$ is air-entry pressure head \[L\], $0 \leq k(\psi )\leq 1$ is dimensionless relative hydraulic conductivity, $C(\psi)=d\theta/d \psi$ is the moisture retention curve \[1/L\], and $\theta$ is dimensionless volumetric water content (see inset in Figure \[fig:diagram\]). They assumed flow in the underlying saturated zone was governed by the Laplace equation (like @dagan1967). The saturated and unsaturated flow equations were coupled through interface conditions at the water table expressing continuity of hydraulic heads and normal groundwater fluxes, $$\begin{aligned} s=\sigma\qquad \nabla s \cdot \textbf{n}=\nabla \sigma \cdot \textbf{n} \qquad z= \xi\end{aligned}$$ where $\textbf{n}$ is the unit vector perpendicular to the water table. To solve the unsaturated flow equation, @kroszynskidagan1975 linearized it by adopting the [@gardner1958] exponential model for the relative hydraulic conductivity, $k(\psi )=e^{\kappa_a (\psi -\psi_a)}$, where $\kappa_a$ is the sorptive number \[1/L\] (related to pore size). They adopted the same exponential form for the moisture capacity model, $\theta (\psi )=e^{\kappa_k (\psi -\psi_k)}$, where $\psi_k$ is the pressure at which $k(\psi)=1$, $\kappa_a=\kappa_k$, and $\psi_a=\psi_k$, leading to the simplified form $C(\psi)=S_y\kappa_a e^{\kappa_a (\psi -\psi_a)}$. In the limit as $\kappa_k=\kappa_a \rightarrow \infty$ their solution reduces to that of [@dagan1967], in which case the relationship between pressure head and water content is a step function. [@kroszynskidagan1975] took unsaturated flow above the water table into account but ignored the effects of confined aquifer storage, leading to early-time step-change behavior similar to @boulton1954a and @dagan1967. Increasingly Realistic Saturated-Unsaturated Well Test Models ------------------------------------------------------------- [@mathiasbutler2006] combined the confined aquifer flow equation with a one-dimensional linearized version of Richards’ equation for a vadose zone of finite thickness. Their water table was treated as a fixed boundary with known flow conditions, decoupling the unsaturated and saturated solutions at the water table. Although they only considered a one-dimensional unsaturated zone, they included the additional flexibility provided by different exponents $(\kappa_a \neq \kappa_k)$. [@mathiasbutler2006] did not consider a partially penetrating well, but they did note the possibility of accounting for it in principle by incorporating their uncoupled drainage function in the solution of @moench1997, which considers a partially penetrating well of finite radius.
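For reference, the following is a minimal sketch of the @gardner1958 exponential model used above (single-exponent case $\kappa_k=\kappa_a$, $\psi_k=\psi_a$), evaluated below the air-entry pressure where the model applies; the parameter values are assumptions for illustration only, not taken from any cited study.

```python
import numpy as np

# Gardner exponential model: k(psi) = exp(kappa_a (psi - psi_a)),
# C(psi) = S_y kappa_a exp(kappa_a (psi - psi_a)), for psi <= psi_a.
kappa_a = 2.0    # sorptive number [1/m]      (assumed)
psi_a = -0.1     # air-entry pressure head [m] (assumed)
S_y = 0.25       # specific yield [-]          (assumed)

psi = np.linspace(-2.0, psi_a, 5)                    # pressure heads [m]
k_rel = np.exp(kappa_a * (psi - psi_a))              # relative conductivity k(psi)
C = S_y * kappa_a * np.exp(kappa_a * (psi - psi_a))  # moisture capacity C(psi) [1/m]

for p_, k_, c_ in zip(psi, k_rel, C):
    print(f"psi = {p_:+.3f} m   k(psi) = {k_:.3f}   C(psi) = {c_:.3f} 1/m")
```

Larger $\kappa_a$ makes both curves drop off more sharply below the air-entry pressure, which is why the $\kappa_a\rightarrow\infty$ limit recovers the step-function (fully saturated/dry) behaviour of the Dagan solution.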
[@tartakovskyneuman2007] similarly combined the confined aquifer flow equation with the unsaturated zone, but using the original axis-symmetric form of Richards’ equation considered by @kroszynskidagan1975. Also like @kroszynskidagan1975, their unsaturated zone was characterized by a single exponent $\kappa_a=\kappa_k$ and reference pressure head $\psi_a=\psi_k$. Unlike @kroszynskidagan1975 and @mathiasbutler2006, [@tartakovskyneuman2007] assumed an infinitely thick unsaturated zone. [@tartakovskyneuman2007] demonstrated that flow in the unsaturated zone is not strictly vertical. Numerical simulations by @moench2008 showed that groundwater movement in the capillary fringe is more horizontal than vertical. [@mathiasbutler2006] and @moench2008 showed that using the same exponents and reference pressure heads for effective saturation and relative permeability decreases model flexibility and underestimates $S_y$. @moench2008 predicted that an extended form of @tartakovskyneuman2007 with two separate exponents, a finite unsaturated zone, and wellbore storage would likely produce more physically realistic estimates of $S_y$. [@mishraneuman2010] developed a new generalization of the solution of [@tartakovskyneuman2007] that characterized relative hydraulic conductivity and water content using $\kappa_a \neq \kappa_k$, $\psi_a \neq \psi_k$ and a finitely thick unsaturated zone. @mishraneuman2010 validated their solution against numerical simulations of drawdown in a synthetic aquifer with unsaturated properties given by the model of [@vangenuchten1980]. They also estimated aquifer parameters from Cape Cod drawdown data [@moenchetal2001], comparing estimated @vangenuchten1980 parameters with laboratory values [@maceetal1998]. [@mishraneuman2011] further extended their [-@mishraneuman2010] solution to include a finite-diameter pumping well with storage. @mishraneuman2010 [@mishraneuman2011] were the first to estimate non-exponential model unsaturated aquifer properties from pumping test data, by curve-fitting the exponential model to the [@vangenuchten1980] model. Analyzing pumping test data of @moenchetal2001 (Cape Cod, Massachusetts) and @nwankwor1984 [@nwankwor1992] (Borden, Canada), they estimated unsaturated flow parameters similar to laboratory-estimated values for the same soils. Future Challenges ================= The conceptualization of groundwater flow during unconfined pumping tests has been a challenging task that has spurred substantial theoretical research in the field of hydrogeology for decades. Unconfined flow to a well is non-linear in multiple ways, and the application of analytical solutions has required utilization of advanced mathematical tools. There are still many additional challenges to be addressed related to unconfined aquifer pumping tests, including: - Hysteretic effects of unsaturated flow. Different exponents and reference pressures are needed during drainage and recharge events, complicating simple superposition needed to handle multiple pumping wells, variable pumping rates, or analysis of recovery data. - Collecting different data types. Validation of existing models and motivating development of more realistic ones depends on more than just saturated zone head data. Other data types include vadose zone water content [@meyer1962], and hydrogeophysical data like microgravity [@damiata2006] or streaming potentials [@malama2009]. - Moving water table position.
All solutions since @boulton1954a assume the water table is fixed and horizontal, $\xi(r,t)=h_0$, during the entire test, even close to the pumping well where large drawdown is often observed. Iterative numerical solutions can accommodate this, but this has not been included in an analytical solution. - Physically realistic partial penetration. Well test solutions suffer from the complication related to the unknown distribution of flux across the well screen. Commonly, the flux distribution is simply assumed constant, but it is known that flux will be higher near the ends of the screened interval that are not coincident with the aquifer boundaries. - Dynamic water table boundary condition. A large increase in complexity comes from explicitly including unsaturated flow in unconfined solutions. The kinematic boundary condition expresses mass conservation due to water table decline. Including an analogous dynamic boundary condition based on a force balance (capillarity vs. gravity) may include sufficient effects of unsaturated flow, without the complexity associated with the complete unsaturated zone solution. - Heterogeneity. In real-world tests heterogeneity is present at multiple scales. Large-scale heterogeneity (e.g., faults or rivers) can sometimes be accounted for in analytical solutions using the method of images, or other types of superposition. A stochastic approach [@neuman2004type] could alternatively be developed to estimate the parameters of random unconfined aquifer property distributions. Despite advances in considering physically realistic unconfined flow, most real-world unconfined tests (e.g., @wenzel1942, @nwankwor1984 [@nwankwor1992], or @moenchetal2001) exhibit non-classical behavior that deviates from the early-intermediate-late behavior predicted by the models summarized here. We must continue to strive to include physically relevant processes and representatively linearize non-linear phenomena, to better understand, simulate and predict unconfined flow processes. Acknowledgments {#acknowledgments .unnumbered} =============== This research was partially funded by the Environmental Programs Directorate of the Los Alamos National Laboratory. Los Alamos National Laboratory is a multi-program laboratory managed and operated by Los Alamos National Security (LANS) Inc. for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC52-06NA25396. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.
--- abstract: 'We describe how to solve simultaneous [Padé]{}approximations over a power series ring ${{\mathsf{K}}}[[x]]$ for a field ${{\mathsf{K}}}$ using ${O^{\scriptscriptstyle \sim}\!}(n^{\omega - 1} d)$ operations in ${{\mathsf{K}}}$, where $d$ is the sought precision and $n$ is the number of power series to approximate. We develop two algorithms using different approaches. Both algorithms return a reduced sub-basis that generates the complete set of solutions to the input approximation problem that satisfy the given degree constraints. Our results are made possible by recent breakthroughs in fast computations of minimal approximant bases and Hermite [Padé]{}approximations.' author: - | Johan Rosenkilde, né Nielsen\ Technical University of Denmark\ Denmark\ jsrn@jsrn.dk - | Arne Storjohann\ University of Waterloo\ Canada\ astorjoh@uwaterloo.ca bibliography: - 'bibtex.bib' title: 'Algorithms for Simultaneous [Padé]{}Approximations [^1] ' --- Introduction {#sec:intro} ============ The Simultaneous [Padé]{}approximation problem concerns approximating several power series $S_1,\ldots,S_n \in {{\mathsf{K}}}[[x]]$ with rational functions $\frac {\sigma_1}\lambda,\ldots,\frac {\sigma_n} \lambda$, all sharing the same denominator $\lambda$. In other words, for some $d \in {\mathbb Z\xspace}_{\geq 0}$, we seek $\lambda \in {{\mathsf{K}}}[x]$ of low degree such that each of $${{\textnormal{rem}}}(\lambda S_1,\ x^d) , {{\textnormal{rem}}}(\lambda S_2,\ x^d),\ \ldots,\ {{\textnormal{rem}}}(\lambda S_n,\ x^d)$$ has low degree. The study of Simultaneous [Padé]{}approximations traces back to Hermite’s proof of the transcendence of $e$ [@hermite_sur_1878]. Solving Simultaneous [Padé]{}approximations has numerous applications, such as in coding theory, e.g. [@feng_generalization_1991; @schmidt_collaborative_2009]; or in distributed, reliable computation [@clement_pernet_high_2014]. Many algorithms have been developed for this problem, see e.g. [@beckermann_uniform_1992; @olesh_vector_2006; @sidorenko_linear_2011; @nielsen_generalised_2013] as well as the references therein. Usually one cares about the regime where $d \gg n$. Obtaining $O(n d^2)$ is classical through successive cancellation, see [@beckermann_uniform_1994] or [@feng_generalization_1991] for a Berlekamp–Massey-type variant. Using fast arithmetic, the previous best was ${O^{\scriptscriptstyle \sim}\!}(n^\omega d)$, where $\omega$ is the exponent for matrix multiplication, see \[ssec:cost\]. That can be done by computing a minimal approximant basis with e.g. [@giorgi_complexity_2003; @GuptaSarkarStorjohannValeriote11]; this approach traces back to [@barel_general_1992; @beckermann_uniform_1992]. Another possibility which achieves the same complexity is fast algorithms for solving structured linear systems, e.g. [@bostan_solving_2008]; see [@chowdhury_faster_2015] for a discussion of this approach. A common description is to require $\deg \lambda < N_0$ for some degree bound $N_0$, and similarly $\deg {{\textnormal{rem}}}(\lambda S_i,\, x^d) < N_i$ for $i = 1,\ldots,n$. The degree bounds could arise naturally from the application, or could be set such that a solution must exist. A natural generalisation is also to replace the $x^d$ moduli with arbitrary $g_1,\ldots,g_n \in {{\mathsf{K}}}[x]$.
Formally, for any field ${{\mathsf{K}}}$: \[prob:sim\_pade\] Given a tuple $({\bm{S}}, {\bm{g}}, {\bm{N}})$ where - ${\bm{S}} = (S_1,\ldots,S_n) \in {{\mathsf{K}}}[x]^n$ is a sequence of polynomials, - ${\bm{g}} = (g_1,\ldots,g_n) \in {{\mathsf{K}}}[x]^n$ is a sequence of moduli polynomials with $\deg S_i < \deg g_i$ for $i=1,\ldots,n$, - and ${\bm{N}} = (N_0,\ldots,N_n) \in {\mathbb Z\xspace}_{\geq 0}^{n+1}$ are degree bounds satisfying $1\leq N_0 \leq \max_i \deg g_i$ and $N_i \leq \deg g_i$ for $i=1,\ldots,n$, find, if it exists, a non-zero vector $(\lambda, \phi_1, \ldots, \phi_n)$ such that 1. $\lambda S_i \equiv \phi_i \mod g_i$ for $i = 1,\ldots, n$, and \[p1item1\] 2. $\deg \lambda < N_0$ and $\deg \phi_i < N_i$ for $i=1,\ldots,n$. We will call any vector $(\lambda, \phi_1, \ldots, \phi_n)$ as above *a solution* to a given Simultaneous [Padé]{}approximation problem. Note that if the $N_i$ are set too low, then it might be the case that no solution exists. \[ex:simpade\] Consider over ${\mathbb F_{2}\xspace}[x]$ that $g_1 = g_2 = g_3 = x^5$, and ${\bm{S}} = (S_1,S_2,S_3) = \left(x^{4} + x^{2} + 1,\,x^{4} + 1,\,x^{4} + x^{3} + 1\right)$, with degree bounds ${\bm{N}} = (5, 3, 4, 5)$. Then $\lambda_1 = x^4 + 1$ is a solution, since $\deg \lambda_1 < 5$ and $$\lambda_1 {\bm{S}} \equiv \left(x^{2} + 1,\ 1,\ x^{3} + 1\right) \mod x^5 \ .$$ $\lambda_2 = x^{3} + x$ is another solution, since $$\lambda_2 {\bm{S}} \equiv \left(x,\ x^{3} + x,\ x^4+x^3 + x\right) \mod x^5 \ .$$ These two solutions are linearly independent over ${\mathbb F_{2}\xspace}[x]$ and span all solutions. Several previous algorithms for solving \[prob:sim\_pade\] are more ambitious and produce an entire *basis* of solutions that satisfy the first output condition $\lambda S_i \equiv \phi_i \mod g_i$ for $i=1,\ldots,n$, including solutions that do not satisfy the degree bounds stipulated by the second output condition. Our algorithms are slightly more restricted in that we only return the sub-basis that generates the set of solutions that satisfy both output requirements of \[prob:sim\_pade\]. Formally: \[prob:sim\_pade\_basis\] Given an instance of \[prob:sim\_pade\], find a matrix $A \in {{\mathsf{K}}}[x]^{* \times (n+1)}$ such that: - Each row of $A$ is a solution to the instance. - All solutions are in the ${{\mathsf{K}}}[x]$-row space of $A$. - $A$ is $(-{\bm{N}})$-row reduced[^2]. The last condition ensures that $A$ is minimal, in a sense, according to the degree bounds ${\bm{N}}$, and that we can easily parametrise which linear combinations of the rows of $A$ are solutions. We recall the relevant definitions and lemmas in \[sec:preliminaries\]. We will call such a matrix $A$ a *solution basis*. In the complexities we report here, we cannot afford to compute $A$ explicitly. For example, if all $g_i = x^d$, the number of field elements required to explicitly write down all of the entries of $A$ could be $\Omega(n^2d)$. Instead, we remark that $A$ is completely given by the problem instance as well as the first column of $A$, containing the $\lambda$ polynomials.[^3] Our algorithms will therefore represent $A$ row-wise using the following compact representation. For a given instance of \[prob:sim\_pade\_basis\], a *solution specification* is a tuple $({\bm{\lambda}},{\bm{\delta}}) \in {{\mathsf{K}}}[x]^{k \times 1} \times {\mathbb Z\xspace}_{<0}^k$ such that the *completion* of ${\bm{\lambda}}$ is a solution basis, and where ${\bm{\delta}}$ are the $(-{\bm{N}})$-degrees of the rows of $A$. 
The *completion* of ${\bm{\lambda}} = (\lambda_1,\ldots,\lambda_k){^\top}$ is the matrix $$\begin{bmatrix} \lambda_1 & {{\textnormal{rem}}}(\lambda_1 S_1,\ g_1) & \ldots & {{\textnormal{rem}}}(\lambda_1 S_n,\ g_n) \\ \vdots & & \ddots & \vdots \\ \lambda_k & {{\textnormal{rem}}}(\lambda_k S_1,\ g_1) & \ldots & {{\textnormal{rem}}}(\lambda_k S_n,\ g_n) \\ \end{bmatrix} \ .$$ Note that ${\bm{\delta}}$ will consist of only negative numbers, since any solution ${\bm{v}}$ by definition has $\deg_{-{\bm{N}}} {\bm{v}} < 0$. A solution specification for the problem in \[ex:simpade\] is $$({\bm{\lambda}}, {\bm{\delta}}) = \big( [x^4 + 1,\ x^3 + x]{^\top},\ (-1, -1) \big) \ .$$ The completion of this is $$A = \begin{bmatrix} x^4 + 1 & x^{2} + 1 & 1 & x^{3} + 1 \\ x^3 + x & x & x^{3} + x & x^4+x^3 + x \end{bmatrix}$$ One can verify that $A$ is $(-{\bm{N}})$-row reduced. We present two algorithms for solving \[prob:sim\_pade\_basis\], both with complexity $O\big(n^{\omega-1}\, {{\mathsf{M}}}(d)\,(\log d)\,(\log d/n)^2\big)$, where $d = \max_i \deg g_i$ and ${{\mathsf{M}}}(d)$ is the cost of multiplying two polynomials of degree $d$, see \[ssec:cost\]. They both depend crucially on recent developments that allow computing minimal approximant bases of non-square matrices faster than for the square case [@zhou_efficient_2012; @jeannerod_computation_2016]. We remark that from the solution basis, one can also compute the expanded form of one or a few of the solutions in the same complexity, for instance if a single, expanded solution to the simultaneous [Padé]{}problem is needed. Our first algorithm in \[sec:dual\] assumes $g_i = x^d$ for all $i$ and some $d \in {\mathbb Z\xspace}_{\geq 0}$. It utilises a well-known duality between Simultaneous [Padé]{}approximations and Hermite [Padé]{}approximations, see e.g. [@beckermann_uniform_1992]. The Hermite [Padé]{}problem is immediately solvable by fast minimal approximant basis computation. A remaining step is to efficiently compute a single row of the adjoint of a matrix in Popov form, and this is done by combining partial linearisation [@GuptaSarkarStorjohannValeriote11] and high-order lifting [@storjohann_high-order_2003]. Our second algorithm in \[sec:intersect\] supports arbitrary $g_i$. The algorithm first solves $n$ single-sequence [Padé]{}approximations, each of $S_1,\ldots,S_n$. The solution bases for two problem instances can be combined by computing the intersection of their row spaces; this is handled by a minimal approximant basis computation. A solution basis of the full Simultaneous [Padé]{}problem is then obtained by structuring intersections along a binary tree. Before we describe our algorithms, we give some preliminary notation and definitions in \[sec:preliminaries\], and in \[sec:subroutines\] we describe some of the computational tools that we employ. Both our algorithms have been implemented in Sage v. 7.0 [@stein_sagemath_????] (though asymptotically slower alternatives to the computational tools are used). The source code can be downloaded from <http://jsrn.dk/code-for-articles>. Cost model {#ssec:cost} ---------- We count basic arithmetic operations in ${{\mathsf{K}}}$ on an algebraic RAM. We will state complexity results in terms of an exponent $\omega$ for matrix multiplication, and a function ${{\mathsf{M}}}(\cdot)$ that is a multiplication time for ${{\mathsf{K}}}[x]$ [@von_zur_gathen_modern_2012 Definition 8.26]. 
Then two $n\times n$ matrices over ${{\mathsf{K}}}$ can be multiplied in $O(n^{\omega})$ operations in ${{\mathsf{K}}}$, and two polynomials in ${{\mathsf{K}}}[x]$ of degree strictly less than $d$ can be multiplied in ${{\mathsf{M}}}(d)$ operations in ${{\mathsf{K}}}$. The best known algorithms allow $\omega < 2.38$ [@coppersmith_matrix_1990; @LeGall14], and we can always take ${{\mathsf{M}}}(d) \in O(d (\log d) (\operatorname{loglog}d))$ [@CantorKaltofen]. In this paper we assume that $\omega > 2$, and that ${{\mathsf{M}}}(d)$ is super-linear while ${{\mathsf{M}}}(d) \in O(d^{\omega-1})$. The assumption ${{\mathsf{M}}}(d) \in O(d^{\omega-1})$ simply stipulates that if fast matrix multiplication techniques are used then fast polynomial multiplication should be used also: for example, $n \, {{\mathsf{M}}}(nd) \in O(n^{\omega} \, {{\mathsf{M}}}(d))$. Preliminaries {#sec:preliminaries} ============= Here we gather together some definitions and results regarding row reduced bases, minimal approximant bases, and their shifted variants. For a matrix $A$ we denote by $A_{i,j}$ the entry in row $i$ and column $j$. For a matrix $A$ over ${{\mathsf{K}}}[x]$ we denote by ${{\textnormal{Row}}}(A)$ the ${{\mathsf{K}}}[x]$-linear row space of $A$. Degrees and shifted degrees --------------------------- The degree of a nonzero vector ${\bm{v}} \in {{\mathsf{K}}}[x]^{1 \times m}$ or matrix $A \in {{\mathsf{K}}}[x]^{n\times m}$ is denoted by $\deg {\bm{v}}$ or $\deg A$, and is the maximal degree of entries of ${\bm{v}}$ or $A$. If $A$ has no zero rows the [*row degrees*]{} of $A$, denoted by ${{\textnormal{rowdeg}}}\, A$, is the tuple $(d_1,\ldots,d_n)$ with $d_i = \deg {{\textnormal{row}}}(A,i)$. The (row-wise) [*leading matrix*]{} of $A$, denoted by ${\rm LM}(A) \in {{\mathsf{K}}}^{n \times m}$, has ${\rm LM}(A)_{i,j}$ equal to the coefficient of $x^{d_i}$ of $A_{i,j}$. Next we recall [@barel_general_1992; @zhou_efficient_2012; @jeannerod_computation_2016] the shifted variants of the notion of degree, row degrees, and leading matrix. For a [*shift*]{} ${\bm{s}} = (s_1,\ldots,s_n) \in {\mathbb Z\xspace}^n$, define the $n \times n$ diagonal matrix $x^{{\bm{s}}}$ by $$x^{{\bm{s}}} := \left [ \begin{array}{ccc} x^{s_1} & & \\ & \ddots & \\ & & x^{s_n} \end{array} \right ].$$ Then the [*${{\bm{s}}}$-degree*]{} of $v$, the [*${{\bm{s}}}$-row degrees*]{} of $A$, and the [*${\bm{s}}$-leading matrix*]{} of $A$, are defined by $\deg_{{\bm{s}}} v := \deg v x^{{\bm{s}}}$, ${{\textnormal{rowdeg}}}_{{\bm{s}}} A := {{\textnormal{rowdeg}}}\, Ax^{{\bm{s}}}$, and ${\rm LM}_{{\bm{s}}}(A) := {\rm LM}(Ax^{{\bm{s}}})$. Note that we pass over the ring of Laurent polynomials only for convenience; our algorithms will only compute with polynomials. As pointed out in [@jeannerod_computation_2016], up to negation the definition of ${{\bm{s}}}$-degree is equivalent to that used in [@BeckermannLabahnVillard06] and to the notion of [*defect*]{} in [@beckermann_uniform_1994]. For an instance $({\bm{S}}, {\bm{g}}, {\bm{N}})$ of \[prob:sim\_pade\], in the context of defining matrices, we will be using ${\bm{S}}$ and ${\bm{g}}$ as vectors, and by ${ \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}$ denote the diagonal matrix with the entries of ${\bm{g}}$ on its diagonal. Row reduced ----------- Although row reducedness can be defined for matrices of arbitrary shape and rank, it suffices here to consider the case of matrices of full row rank.
A matrix $R \in {{\mathsf{K}}}[x]^{n \times m}$ is [*row reduced*]{} if ${\rm LM}(R)$ has full row rank, and [*${\bm{s}}$-row reduced*]{} if ${\rm LM}_{{\bm{s}}}(R)$ has full row rank. Every $A \in {{\mathsf{K}}}[x]^{n \times m}$ of full row rank is left equivalent to a matrix $R \in {{\mathsf{K}}}[x]^{n \times m}$ that is ${{\bm{s}}}$-row reduced. The rows of $R$ give a basis for ${{\textnormal{Row}}}(A)$ that is minimal in the following sense: the list of ${{\bm{s}}}$-degrees of the rows of $R$, when sorted in non-decreasing order, will be lexicographically minimal. An important feature of row reduced matrices is the so-called “predictable degree”-property [@kailath_linear_1980 Theorem 6.3-13]: for any ${\bm{v}} \in {{\mathsf{K}}}[x]^{1 \times n}$, we have $$\deg_{{\bm{s}}}({\bm{v}} R) = \max_{i=1,\ldots,n}( \deg_{{\bm{s}}} {\rm row}(R,i) + \deg v_i ) \ .$$ A canonical ${\bm{s}}$-reduced basis is provided by the ${{\bm{s}}}$-Popov form. Although an ${{\bm{s}}}$-Popov form can be defined for a matrix of arbitrary shape and rank, it suffices here to consider the case of a non-singular matrix. The following definition is equivalent to [@jeannerod_computation_2016 Definition 1.2]. \[def:popov\] A non-singular matrix $R \in {{\mathsf{K}}}[x]^{n\times n}$ is in ${{\bm{s}}}$-Popov form if ${\rm LM}_{{\bm{s}}}(R)$ is unit lower triangular and the degrees of off-diagonal entries of $R$ are strictly less than the degree of the diagonal entry in the same column. Adjoints of row reduced matrices -------------------------------- For a non-singular matrix $A$ recall that the adjoint of $A$, denoted by ${\rm adj}(A)$, is equal to $(\det A)A^{-1}$, and that entry ${{{\textnormal{adj}}}(A)}_{i,j}{^\top}$ is equal to $(-1)^{i+j}$ times the determinant of the $(n-1) \times (n-1)$ sub-matrix that is obtained from $A$ by deleting row $i$ and column $j$. \[lem:adjointRowReduced\] Let $A \in {{\mathsf{K}}}[x]^{n \times n}$ be ${\bm{s}}$-row reduced. Then ${{\textnormal{adj}}}(A){^\top}$ is $(-{\bm{s}})$-row reduced with $${{\textnormal{rowdeg}}}_{(-{\bm{s}})} {{\textnormal{adj}}}(A){^\top}=(\eta - s - \eta_1,\ldots , \eta - s -\eta_n) \ ,$$ where ${\bm{\eta}} = {{\textnormal{rowdeg}}}_{{\bm{s}}} A$, $\eta = \sum_i \eta_i$ and $s = \sum_i s_i$. Since $A$ is ${\bm{s}}$-row reduced then $A x^{{\bm{s}}}$ is row reduced. Note that ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}(A x^{{\bm{s}}}){^\top}= (\det A x^{{\bm{s}}}) I_m$ with $\deg \det A x^{{\bm{s}}} = \eta$. It follows that row $i$ of ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}$ must have degree at least $\eta - \eta_i$ since $\eta_i$ is the degree of column $i$ of $(A x^{{\bm{s}}}){^\top}$. However, entries in row $i$ of ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}$ are minors of the matrix obtained from $A x^{{\bm{s}}}$ by removing row $i$, hence have degree at most $\eta - \eta_i$. It follows that the (row-wise) leading coefficient matrix of ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}$ is non-singular, hence ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}$ is row reduced. Since ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}= (\det x^{{\bm{s}}}) {{\textnormal{adj}}}(A){^\top}x^{-{\bm{s}}}$ we conclude that ${{\textnormal{adj}}}(A){^\top}$ is $(-{\bm{s}})$-row reduced with ${{\textnormal{rowdeg}}}_{(-{\bm{s}})} {{\textnormal{adj}}}(A) = (\eta - \eta_1 - s, \ldots, \eta - \eta_n - s)$. 
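As a small worked check of \[lem:adjointRowReduced\] (the matrix below is our own illustration, not taken from the rest of the paper), let ${\bm{s}} = (0,0)$ and $$A = \left [ \begin{array}{cc} x^2 & 1 \\ x & x+1 \end{array} \right ] , \qquad {{\textnormal{adj}}}(A){^\top}= \left [ \begin{array}{cc} x+1 & -x \\ -1 & x^2 \end{array} \right ] .$$ Then ${\bm{\eta}} = {{\textnormal{rowdeg}}}_{{\bm{s}}} A = (2,1)$, so $\eta = 3$ and $s = 0$, and ${\rm LM}_{{\bm{s}}}(A) = \left [ \begin{array}{cc} 1 & 0 \\ 1 & 1 \end{array} \right ]$ is non-singular, so $A$ is ${\bm{s}}$-row reduced. As the lemma predicts, ${{\textnormal{rowdeg}}}_{(-{\bm{s}})} {{\textnormal{adj}}}(A){^\top}= (1, 2) = (\eta - s - \eta_1, \eta - s - \eta_2)$, and ${\rm LM}_{(-{\bm{s}})}({{\textnormal{adj}}}(A){^\top}) = \left [ \begin{array}{cc} 1 & -1 \\ 0 & 1 \end{array} \right ]$ is non-singular, so ${{\textnormal{adj}}}(A){^\top}$ is indeed $(-{\bm{s}})$-row reduced.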
Minimal approximant bases ------------------------- We recall the standard notion of minimal approximant basis, sometimes known as order basis or $\sigma$-basis [@beckermann_uniform_1994]. For a matrix $A \in {{\mathsf{K}}}[x]^{n \times m}$ and order $d \in {\mathbb Z\xspace}_{\geq 0}$, an *order $d$ approximant* is a vector ${\bm{p}} \in {{\mathsf{K}}}[x]^{1 \times n}$ such that ${\bm{p}}A \equiv {\bm{0}} \mod x^d.$ An *approximant basis of order $d$* is then a matrix $F \in {{\mathsf{K}}}[x]^{n \times n}$ which is a basis of all order $d$ approximants. Such a basis always exists and has full rank $n$. For a shift ${\bm{s}} \in {\mathbb Z\xspace}^n$, $F$ is then an *${\bm{s}}$-minimal approximant basis* if it is ${\bm{s}}$-row reduced. Let ${{\ensuremath{\mathsf{MinBasis}}\xspace}}(d,A,{\bm{s}})$ be a function that returns $(F,{\bm{\delta}})$, where $F$ is an ${\bm{s}}$-minimal approximant basis of $A$ of order $d$, and ${\bm{\delta}} = {{\textnormal{rowdeg}}}_{{\bm{s}}} F$. The next lemma recalls a well known method of constructing minimal approximant bases recursively. Although the output of ${{\ensuremath{\mathsf{MinBasis}}\xspace}}$ may not be unique, the lemma holds for *any* ${\bm{s}}$-minimal approximant basis that ${{\ensuremath{\mathsf{MinBasis}}\xspace}}$ might return. \[lem:paderec\] Let $A = \left [ \begin{array}{c|c} A_1 & A_2 \end{array} \right ]$ over ${{\mathsf{K}}}[x]$. If $(F_1, {\bm{\delta}}_1) = {{\ensuremath{\mathsf{MinBasis}}\xspace}}(d,A_1,{\bm{s}})$ and $(F_2,{\bm{\delta}}_2) = {{\ensuremath{\mathsf{MinBasis}}\xspace}}(d,F_1A_2,{\bm{\delta}}_1)$, then $F_2F_1$ is an ${\bm{s}}$-minimal approximant basis of $A$ of order $d$ with ${\bm{\delta}}_2 = {{\textnormal{rowdeg}}}_{{\bm{s}}} F_2 F_1$. Sometimes only the [*negative part*]{} of an ${\bm{s}}$-minimal approximant bases is required, the submatrix of the approximant bases consisting of rows with negative ${\bm{s}}$-degree. Let function ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d,A,{\bm{s}})$ have the same output as ${{\ensuremath{\mathsf{MinBasis}}\xspace}}$, but with $F$ restricted to the negative part. \[lem:paderecprune\] \[lem:paderec\] still holds if ${{\ensuremath{\mathsf{MinBasis}}\xspace}}$ is replaced by ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}$, and “an ${\bm{s}}$-minimal” is replaced with “the negative part of an ${\bm{s}}$-minimal.” Using for example the algorithm `M-Basis` of [@giorgi_complexity_2003], it is easy to show that any order $d$ approximant basis $G$ for an $A$ of column dimension $m$ has $\det G = x^D$ for some $D \in {\mathbb Z\xspace}_{\geq 0}$ with $D \leq md$. Many problems of ${{\mathsf{K}}}[x]$ matrices or approximations reduce to the computation of (shifted) minimal approximant bases, see e.g. [@beckermann_uniform_1994; @giorgi_complexity_2003], often resulting in the best known asymptotic complexities for these problems. Direct solving of Simultaneous [Padé]{}approximations {#sec:direct_solve} ----------------------------------------------------- Let $({\bm{S}}, {\bm{g}}, {\bm{N}})$ be an instance of \[prob:sim\_pade\_basis\] of size $n$. We recall some known approaches for computing a solution specification using row reduction and minimal approximant basis computation. 
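As a baseline for the approaches recalled in this section, very small instances can be checked by brute force: since $N_i \leq \deg g_i$ forces $\phi_i = {{\textnormal{rem}}}(\lambda S_i,\ g_i)$, it suffices to enumerate all $\lambda$ with $\deg \lambda < N_0$ and keep those whose remainders meet the degree bounds. The following sketch (plain Python, written for this exposition; it is exponential in $N_0$ and unrelated to the Sage implementation mentioned earlier) does this for \[ex:simpade\], encoding ${\mathbb F_{2}\xspace}[x]$ polynomials as integer bit masks, and recovers exactly the ${\mathbb F_{2}\xspace}$-span of the two solutions listed there.

```python
# Brute-force validation of Problem [prob:sim_pade] on Example [ex:simpade].
# GF(2)[x] polynomials are encoded as Python ints: bit i holds the coefficient of x^i.

def deg(a):
    """Degree of a GF(2)[x] polynomial (deg 0 = -1 by convention)."""
    return a.bit_length() - 1

def mul(a, b):
    """Carry-less multiplication, i.e. the product in GF(2)[x]."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def rem(a, g):
    """Remainder of a modulo g in GF(2)[x]."""
    while deg(a) >= deg(g):
        a ^= g << (deg(a) - deg(g))
    return a

# S = (x^4+x^2+1, x^4+1, x^4+x^3+1), g_1 = g_2 = g_3 = x^5, degree bounds N = (5, 3, 4, 5).
S = [0b10101, 0b10001, 0b11001]
g = [0b100000, 0b100000, 0b100000]
N = [5, 3, 4, 5]

solutions = []
for lam in range(1, 2 ** N[0]):                      # all non-zero lambda with deg < N_0
    phi = [rem(mul(lam, Si), gi) for Si, gi in zip(S, g)]
    if all(deg(p) < Ni for p, Ni in zip(phi, N[1:])):
        solutions.append(lam)

# lambda_1 = x^4+1, lambda_2 = x^3+x and their sum are the only non-zero solutions.
assert set(solutions) == {0b10001, 0b01010, 0b11011}
```

Beyond toy sizes one of course uses the structured approaches recalled below.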
### Via reduced basis {#sec:direct_reduced_basis} Using the predictable degree property it is easy to show that if $R \in {{\mathsf{K}}}[x]^{(n+1) \times (n+1)}$ is an $(-{\bm{N}})$-reduced basis of $$A = \left [ \begin{array}{c|c} 1 & {\bm{S}} \\\hline & { \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}\end{array} \right] \in {{\mathsf{K}}}[x]^{(n+1) \times (n+1)},$$ then the sub-matrix of $R$ comprised of the rows with negative $(-{\bm{N}})$-degree form a solution basis. A solution specification $({\bm{\lambda}}, {\bm{\delta}})$ is then a subvector ${\bm{\lambda}}$ of the first column of $R$, with ${\bm{\delta}}$ the corresponding subtuple ${\bm{\delta}}$ of ${{\textnormal{rowdeg}}}_{(- {\bm{N}})} R$. Mulders and Storjohann [@mulders_lattice_2003] gave an iterative algorithm for performing row reduction by successive cancellation; it is similar to but faster than earlier algorithms [@kailath_linear_1980; @lenstra_factoring_1985]. Generically on input $F \in {{\mathsf{K}}}[x]^{m \times m}$ it has complexity $O(n^3 (\deg F)^2)$. Alekhnovich [@alekhnovich_linear_2005] gave what is essentially a Divide & Conquer variant of Mulders and Storjohann’s algorithm, with complexity ${O^{\scriptscriptstyle \sim}\!}(n^{\omega+1}\deg F)$. Nielsen remarked [@nielsen_generalised_2013] that these algorithms perform fewer iterations when applied to the matrix $A$ above, due to its low *orthogonality defect*: ${\rm OD}(F) = \sum{{\textnormal{rowdeg}}}F - \deg \det F$, resulting in $O(n^2(\deg A)^2)$ respectively ${O^{\scriptscriptstyle \sim}\!}(n^\omega \deg A)$. Nielsen also used the special shape of $A$ to give a variant of the Mulders–Storjohann algorithm that computes coefficients in the working matrix in a lazy manner with a resulting complexity $O(n \,\mathsf{P}(\deg A))$, where $\mathsf P(\deg A) = (\deg A)^2$ when the $g_i$ are all powers of $x$, and $\mathsf P(\deg A) = {{\mathsf{M}}}(\deg A)\deg A$ otherwise. Giorgi, et al. [@giorgi_complexity_2003] gave a reduction for performing row reduction by computing a minimal approximant basis. For the special matrix $A$, this essentially boils down to the approach described in the following section. When $n = 1$, the extended Euclidean algorithm on input $S_1$ and $g_1$ can solve the approximation problem by essentially computing the reduced basis of the $2 \times 2$ matrix $A$: each iteration corresponds to a reduced basis for a range of possible shifts [@sugiyama_further_1976; @justesen_complexity_1976; @gustavson_fast_1979]. The complexity of this is $O({{\mathsf{M}}}(\deg g_1) \log \deg g_1)$. ### Via minimal approximant basis First consider the special case when all $g_i = x^d$ for the same $d$. An approximant ${\bm{v}} = (\lambda, \phi_1, \ldots, \phi_n)$ of order $d$ of $$\begin{aligned} A &= \left [ \begin{array}{c} - {\bm{S}} \\ I \end{array} \right ] \in {{\mathsf{K}}}[x]^{(n+1) \times n}\end{aligned}$$ clearly satisfies $\lambda S_i \equiv \phi_i \mod x^d$ for $i = 1,\ldots,n$; conversely, any such vector ${\bm{v}}$ satisfying these congruences must be an approximant of $A$ of order $d$. So the negative part of a $(-{\bm{N}})$-minimal approximant basis of $A$ of order $d$ is a solution basis. In the general case we can reduce to a minimal approximant bases computation as shown by \[alg:simpadedirect\]. Correctness of the algorithm follows from the following result. 
\[thm:simpadedirect\] Corresponding to an instance $({\bm{S}}, {\bm{g}}, {\bm{N}})$ of \[prob:sim\_pade\_basis\] of size $n$, define a shift ${\bm{h}}$ and order $d$: - ${\bm{h}} := -({\bm{N}} \mid N_0 -1, \ldots, N_0-1) \in {\mathbb Z\xspace}^{2n+1}$ - $d := N_0 + \max_i \deg g_i -1$ If $G$ is the negative part of an ${\bm{h}}$-minimal approximant basis of $$H = \left [ \begin{array}{c} -{\bm{S}} \\ I \\ { \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}\end{array} \right ] \in {{\mathsf{K}}}[x]^{(2n+1) \times n}$$ of order $d$, then the submatrix of $G$ comprised of the first $n+1$ columns is a solution basis to the problem instance. An approximant ${\bm{v}} = (\lambda, \phi_1,\ldots,\phi_n,q_1,\ldots, q_n)$ of order $d$ of $H$ clearly satisfies $$\begin{aligned} \label{eq:lambda} \lambda S_i = \phi_i + q_ig_i \bmod x^d\end{aligned}$$ for $i=1,\ldots,n$; conversely, any such vector ${\bm{v}}$ satisfying these congruences must be an approximant of $H$ of order $d$. Now suppose ${\bm{v}}$ is an order $d$ approximant of $H$ with negative ${\bm{h}}$-degree, so $\deg \lambda \leq N_0-1$, $\deg \phi_i \leq N_i-1$, and $\deg q_i \leq N_0 - 2$. Since \[prob:sim\_pade\] specifies that $\deg S_i < \deg g_i$ and $N_i \leq \deg g_i$, both $\lambda S_i$ and $q_i g_i$ will have degree bounded by $N_0 + \deg g_i - 2$. Since \[prob:sim\_pade\] specifies that $N_0 \geq 1$, it follows that both the left and right hand sides of (\[eq:lambda\]) have degree bounded by $N_0+ \deg g_i -2$, which is strictly less than $d$. We conclude that $$\begin{aligned} \label{eq:lambda2} \lambda S_i = \phi_i + q_i g_i\end{aligned}$$ for $i=1,\ldots,n$. It follows that ${\bm{v}} H = 0$ so ${\bm{v}}$ is in the left kernel of $H$. Moreover, restricting ${\bm{v}}$ to its first $n+1$ entries gives $\bar{{\bm{v}}} := (\lambda, \phi_1,\ldots,\phi_n)$, a solution to the simultaneous [Padé]{}problem with $\deg_{- {\bm{N}}} \bar{{\bm{v}}} = \deg_{{\bm{h}}} {\bm{v}}$. Conversely, if $\bar{{\bm{v}}} = (\lambda, \phi_1,\ldots,\phi_n)$ is a solution to the simultaneous [Padé]{}problem, then the extension ${\bm{v}} = (\lambda, \phi_1,\ldots,\phi_n,q_1,\ldots,q_n)$ with $q_i = (\lambda S_i - \phi_i)/g_i \in {{\mathsf{K}}}[x]$ for $i=1,\ldots,n$ is an approximant of $H$ of order $d$ with $\deg_{{\bm{h}}} {\bm{v}} = \deg_{-{\bm{N}}} \bar{{\bm{v}}}$. Finally, consider that a left kernel basis for $H$ is given by $$K = \left[ \begin{array}{c|c} K_1 & K_2 \end{array} \right ] = \left [ \begin{array}{cc|c} 1 & {\bm{S}} & \\ & { \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}& -I \end{array} \right ].$$ We must have $G = M K$ for some polynomial matrix $M$ of full row rank. But then $M K_1$ also has full row rank with ${{\textnormal{rowdeg}}}_{-{\bm{N}}} MK_1 = {{\textnormal{rowdeg}}}_{{\bm{h}}} G$. 
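To make the extension step in this proof concrete, consider again \[ex:simpade\] (all numbers below are taken from that example). There $d = 5 + 5 - 1 = 9$ and ${\bm{h}} = -(5, 3, 4, 5, 4, 4, 4)$, and the solution $\bar{{\bm{v}}} = (x^4+1,\ x^2+1,\ 1,\ x^3+1)$ extends with $q_i = (\lambda S_i - \phi_i)/g_i$ computed from $$\lambda S_1 = (x^3+x)\,x^5 + (x^2+1) \,, \qquad \lambda S_2 = x^3\, x^5 + 1 \,, \qquad \lambda S_3 = (x^3+x^2)\,x^5 + (x^3+1) \,,$$ so that ${\bm{v}} = (x^4+1,\ x^2+1,\ 1,\ x^3+1,\ x^3+x,\ x^3,\ x^3+x^2)$ satisfies ${\bm{v}} H = 0$ exactly, hence is an approximant of $H$ of order $d$, and $\deg_{{\bm{h}}} {\bm{v}} = -1 = \deg_{-{\bm{N}}} \bar{{\bm{v}}}$.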
${\bm{h}} {\leftarrow}-( {\bm{N}} \mid N_0-1,\ldots,N_0-1) \in {\mathbb Z\xspace}^{2n+1}$ $d {\leftarrow}N_0 + \max_i \deg g_i - 1$ $H = \left[ \begin{array}{c} -{\bm{S}} \\ I \\ { \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}\end{array} \right]$ $(\left [ \begin{array}{c|c} {\bm{\lambda}} & \ast \end{array} \right ], {\bm{\delta}}) {\leftarrow}{{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, H, {\bm{h}})$ ${\textbf{return }}({\bm{\lambda}}, {\bm{\delta}})$ [$\mathsf{DirectSimPade}$]{} can be performed in time ${O^{\scriptscriptstyle \sim}\!}(n^{\omega} \deg H) = {O^{\scriptscriptstyle \sim}\!}(n^\omega \max_i \deg g_i)$ using the minimal approximant basis algorithm by Jeannerod, et al. [@jeannerod_computation_2016], see \[sec:subroutines\]. A closely related alternative to [$\mathsf{DirectSimPade}$]{} is the recent algorithm by Neiger [@neiger_fast_2016] for computing solutions to modular equations with general moduli $g_i$. This would give the complexity ${O^{\scriptscriptstyle \sim}\!}(n^{\omega-1} \sum_i \deg g_i) \subset {O^{\scriptscriptstyle \sim}\!}(n^\omega \max_i \deg g_i)$. All of the above solutions ignore the sparse, simple structure of the input matrices, which is why they do not obtain the improved complexity that we do here. Computational tools {#sec:subroutines} =================== The main computational tool we will use is the following very recent result from Jeannerod, Neiger, Schost and Villard [@jeannerod_computation_2016] on minimal approximant basis computation. \[thm:orderbasis\] There exists an algorithm ${{\ensuremath{\mathsf{PopovBasis}}\xspace}}(d,A, {\bm{s}})$ where the input is an order $d \in {\mathbb Z\xspace}_+$, a polynomial matrix $A \in {{\mathsf{K}}}[x]^{n \times m}$ of degree at most $d$, and shift ${\bm{s}} \in {\mathbb Z\xspace}^n$, and which returns $(F, {\bm{\delta}})$, where $F$ is an ${\bm{s}}$-minimal approximant basis of $A$ of order $d$, $F$ is in ${\bm{s}}$-Popov form, and ${\bm{\delta}} = {{\textnormal{rowdeg}}}_{{\bm{s}}} F$. [[$\mathsf{PopovBasis}$]{}]{}has complexity $O(n^{\omega-1}\, {{\mathsf{M}}}(\sigma)\, (\log \sigma) \, (\log \sigma /n)^2)$ operations in ${{\mathsf{K}}}$, where $\sigma = md$. Our next result says that we can quickly compute the first row of ${{\textnormal{adj}}}(F)$ if $F$ is a minimal approximant basis in Popov form. In particular, since $F$ is an approximant basis $\det F = x^D$ for some $D \leq \sigma$, where $\sigma = md$ from \[thm:orderbasis\]. \[thm:fastsolver\] Let $F \in {{\mathsf{K}}}[x]^{n \times n}$ be in Popov form and with $\det F = x^D$ for some $D \in {\mathbb Z\xspace}_{\geq 0}$. Then the first row of ${{\textnormal{adj}}}(F)$ can be computed in $O(n^{\omega-1}\, {{\mathsf{M}}}(D)\, (\log D) \, (\log D/n))$ operations in ${{\mathsf{K}}}$. Because $F$ is in ${\bm{s}}$-Popov form, $D$ is the sum of the column degrees of $F$. We consider two cases: $D \geq n$ and $D < n$. First suppose $D \geq n$. Partial linearisation [@GuptaSarkarStorjohannValeriote11 Corollary 2] can produce from $F$, with no operations in ${{\mathsf{K}}}$, a new matrix $G \in {{\mathsf{K}}}[x]^{\bar n \times \bar n}$ with dimension $\bar{n} < 2n$, $\deg G \leq \lceil D/n\rceil$, $\det G = \det F$, and such that $F^{{-1}}$ is equal to the principal $n \times n$ sub-matrix of $G^{{-1}}$. Let ${\bm{v}} \in {{\mathsf{K}}}[x]^{1 \times \bar{n}}$ be the first row of $x^DI_{\bar{n}}$. Then the first row of ${{\textnormal{adj}}}(F)$ will be the first $n$ entries of the first row of ${\bm{v}}G^{-1}$. 
High-order $X$-adic lifting [@storjohann_high-order_2003 Algorithm 5] using the modulus $X=(x-1)^{\lceil D/n \rceil}$ will compute ${\bm{v}}G^{-1}$ in $O\big(n^{\omega}\, {{\mathsf{M}}}(\lceil D/n \rceil) \,(\log \lceil D/n \rceil)\big)$ operations in ${{\mathsf{K}}}$ [@storjohann_high-order_2003 Corollary 16]. Since $D \geq n$ this cost estimate remains valid if we replace $\lceil D/n \rceil$ with $D/n$. Finally, from the super-linearity assumption on ${{\mathsf{M}}}(\cdot)$ we have $M(D/n) \leq (1/n) {{\mathsf{M}}}(D)$, thus matching our target cost. Now suppose $D < n$. In this case we can not directly appeal to the partial linearisation technique since the resulting $O(n^{\omega} \lceil D/n\rceil)$ may be asymptotically larger than our target cost. But $D < n$ means that $F$ has — possibly many — columns of degree 0; since $F$ is in Popov form, such columns have a 1 on the matrix’s diagonal and are 0 on the remaining entries. The following describes how to essentially ignore those columns. $D$ is then greater than or equal to the number of remaining columns, thus effectuating the gain from the partial linearisation. If $n-k$ is the number of such columns in $F$ that means we can find a permutation matrix $P$ such that $$\hat{F} := PFP{^\top}= \left [ \begin{array}{c|c} F_1 & \\\hline F_2 & I_{n-k} \end{array} \right ] \ ,$$ with each column of $F_1$ having degree strictly greater than zero. Let $i$ be the row index of the single 1 in the first column of $P{^\top}$. Since $F^{-1} = P{^\top}\hat{F}^{-1}P$, we have $$\label{first} {\rm row}({{\textnormal{adj}}}(F),1)P^{-1} = x^D\, {\rm row}(\hat{F}^{-1},i).$$ Considering that $$\hat{F}^{-1} = \left [ \begin{array}{c|c} F_1^{-1} & \\\hline -F_2F_1^{-1} & I_{n-k} \end{array} \right ],$$ it will suffice to compute the first $k$ entries of the vector on the right hand side of (\[first\]). If $i \leq k$ then let ${\bm{v}} \in {{\mathsf{K}}}[x]^{1 \times k}$ be row $i$ of $x^{D}I_k$. Otherwise, if $i>k$ then let ${\bm{v}}$ be row $i-k$ of $-x^{D}F_2$. Then in both cases, ${\bm{v}}F_1^{-1}$ will be equal to the first $k$ entries of the vector on the right hand side of (\[first\]). Like before, high-order lifting combined with partial linearisation will compute this vector in $O\big(k^{\omega}\, {{\mathsf{M}}}(\lceil D/k \rceil)\,(\log \lceil D/k \rceil) \big)$ operations in ${{\mathsf{K}}}$. Since $D\geq k$ the cost estimate remains valid if $\lceil D/k \rceil$ is replaced with $D/k$. Reduction to Hermite [PADÉ]{} {#sec:dual} ============================= In this section we present an algorithm for solving \[prob:sim\_pade\_basis\] when $g_1 = \ldots = g_n = x^d$ for some $d \in {\mathbb Z\xspace}_{\geq 0}$. The algorithm is based on the well-known duality between the Simultaneous [Padé]{}problem and the Hermite [Padé]{}problem, see for example [@beckermann_uniform_1992]. This duality, first observed in a special case [@Mahler68], and then later in the general case [@beckermann_recursiveness_1997], was exploited in [@beckermann_fraction-free_2009] to develop algorithms for the fraction free computation of Simultaneous [Padé]{}approximation. We begin with a technical lemma that is at the heart of this duality. \[lem:duality\] Let $\hat A, \hat B \in {{\mathsf{K}}}[x]^{(n+1)\times(n+1)}$ be as follows. 
$$\begin{aligned} \hat A &= \left [ \begin{array}{c|c} x^d & -{\bm{S}} \\\hline & I \end{array} \right ] && \hspace*{-1em}\hat B &= \left [ \begin{array}{c|cccc} 1 & \\{ \hline \\[\dimexpr-\normalbaselineskip+2pt] } {\bm{S}}{^\top}& x^d I \end{array} \right ] \end{aligned}$$ Then $\hat B$ is the adjoint of $\hat A{^\top}$. Furthermore, $\hat A{^\top}$ is an approximant basis for $\hat B$ of order $d$, and $\hat B{^\top}$ is an approximant basis of $\hat A$ of order $d$. Direct computation shows that $\hat A{^\top}\hat B = x^d I_{n+1} = \det \hat A{^\top}I_{n+1}$, so $\hat B$ is the adjoint of $\hat A{^\top}$. Let now $G$ be an approximant basis of $\hat B$. By the above computation the row space of $\hat A{^\top}$ must be a subset of the row space of $G$. But since $G \hat B = (x^dI_{n+1}) R$ for some $R \in {{\mathsf{K}}}[x]^{(n+1)\times(n+1)}$, then $\det G = x^d \det R$. Thus $x^d \mid \det G$. But $\det \hat A{^\top}= x^d$, so the row space of $\hat A{^\top}$ cannot be smaller than the row space of $G$. That is, $\hat A{^\top}$ is an approximant basis for $\hat B$ of order $d$. Taking the transpose through the argument shows that $\hat B{^\top}$ is an approximant basis of $\hat A$ of order $d$. \[thm:dualityMinbasis\] Let $A$ and $B$ be as follows. $$\begin{aligned} A &= \left [ \begin{array}{cccc} -{\bm{S}} \\{ \hline \\[\dimexpr-\normalbaselineskip+1pt] } I \end{array} \right ] \in {{\mathsf{K}}}[x]^{(n+1) \times n} &&& \hspace*{-1em}B &= \left [ \begin{array}{c} 1 \\ {\bm{S}} \end{array} \right] \in {{\mathsf{K}}}[x]^{(n+1) \times 1}\end{aligned}$$ If $G$ is an ${\bm{N}}$-minimal approximant basis of $B$ of order $d$ with shift ${\bm{N}} \in {\mathbb Z\xspace}_{\geq 0}^{n+1}$, then ${{\textnormal{adj}}}(G{^\top})$ is a $(-{\bm{N}})$-minimal approximant basis of $A$ of order $d$. Moreover, if ${\bm{\eta}} = {{\textnormal{rowdeg}}}_{{\bm{N}}} G$, then ${{\textnormal{rowdeg}}}_{-{\bm{N}}} {{\textnormal{adj}}}(G{^\top}) =(\eta - N - \eta_1,\ldots , \eta - N -\eta_{n+1})$, where $\eta = \sum_i \eta_i$ and $N = \sum_i N_i$. Introduce $\hat A$ and $\hat B$ as in \[lem:duality\]. Clearly $G$ is also an ${\bm{N}}$-minimal approximant basis of $\hat B$ of order $d$. Likewise, $\hat A$ and $A$ have the same minimal approximant bases for given order and shift. Assume, without loss of generality, that we have scaled $G$ such that $\det G$ is monic. Since $\hat A{^\top}$ is also an approximant basis for $\hat B$ of order $d$, then $\det G = \det \hat A{^\top}= x^d$. By definition $G\hat B = x^d R$ for some matrix $R \in {{\mathsf{K}}}[x]^{(n+1)\times(n+1)}$. That means $$\begin{aligned} x^{2d}((G\hat B){^\top})^{{-1}}&= x^{2d}((x^d R){^\top})^{{-1}}\ , & \textrm{so} \\ (x^d(G{^\top})^{{-1}})(x^d(\hat B{^\top})^{{-1}}) &= x^{d}(R{^\top})^{{-1}}\ , & \textrm{that is} \\ {{\textnormal{adj}}}(G{^\top}) \hat A &= x^d (R{^\top})^{{-1}}\ . \end{aligned}$$ Now $\det R = 1$ since $(x^d)^{n+1} \det R = \det(G\hat B) = x^{d+nd}$, so $(R{^\top})^{{-1}}= {{\textnormal{adj}}}(R{^\top}) \in {{\mathsf{K}}}[x]^{(n+1) \times (n+1)}$. Therefore ${{\textnormal{adj}}}(G{^\top})$ is an approximant basis of $\hat A$ of order $d$. The theorem now follows from \[lem:adjointRowReduced\] by noting that $G$ is ${\bm{N}}$-row reduced. We apply \[thm:dualityMinbasis\] to the problem of \[ex:simpade\] with shifts ${\bm{N}} = (5, 3, 4, 5)$.
We have $$\begin{aligned} A &= \left[\begin{array}{rrr} x^{4} + x^{2} + 1 & x^{4} + 1 & x^{4} + x^{3} + 1 \\ 1 & & \\ & 1 & \\ & & 1 \end{array}\right] \\ B &= \left[\begin{array}{r} 1 \\ x^4 + x^2 + 1 \\ x^{4} + 1 \\ x^{4} + x^{3} + 1 \end{array}\right] \end{aligned}$$ An ${\bm{N}}$-minimal approximant basis to order $d = 5$ of $B$ is $$\begin{aligned} G &= \left[\begin{array}{rrrr} x & 0 & x & 0 \\ 1 & x^{2} + 1 & 0 & 0 \\ 0 & 1 & x^{2} + 1 & 0 \\ 0 & x & x + 1 & 1 \end{array}\right] , \textrm{ and} \\ {{\textnormal{adj}}}(G){^\top}&= \left[\begin{array}{rrrr} x^{4} + 1 & x^{2} + 1 & 1 & x^{3} + 1 \\ x & x^{3} + x & x & x^{4} + x \\ x^{3} + x & x & x^{3} + x & x^{4} + x^{3} + x \\ 0 & 0 & 0 & x^{5} \end{array}\right] \ . \end{aligned}$$ ${{\textnormal{adj}}}(G){^\top}$ can be confirmed to be an $(-{\bm{N}})$-minimal approximant basis of $A$, since ${{\textnormal{adj}}}(G){^\top}A \equiv 0 \mod x^d$, and since the $(-{\bm{N}})$-leading coefficient matrix of ${{\textnormal{adj}}}(G){^\top}$ has full rank. Algorithm \[alg:simpadedual\] uses \[thm:dualityMinbasis\] to solve a Simultaneous [Padé]{}approximation by computing a minimal approximant basis of $B$ in Popov form. $B {\leftarrow}[ 1, S_1, \ldots, S_n ]^T \in {{\mathsf{K}}}[x]^{(n+1) \times 1}$ $G {\leftarrow}{{\ensuremath{\mathsf{PopovBasis}}\xspace}}(d, B, {\bm{N}})$ \[line:simpadedual:basis\] ${\bm{\eta}} {\leftarrow}{{\textnormal{rowdeg}}}_{{\bm{N}}} G$ $\hat{{\bm{\lambda}}} {\leftarrow}$ first column of ${{\textnormal{adj}}}(G{^\top})$ \[line:simpadedual:firstcol\] $\hat{ {\bm{\delta}}} {\leftarrow}(\eta - N -\eta_1, \ldots, \eta - N - \eta_{n+1})$, where $\eta = \sum_i \eta_i$ and $N = \sum_i N_i$ \[line:simpadedual:degrees\] $I {\leftarrow}\{ i \mid \hat{{\bm{\delta}}}_i < 0 \}$, and $k {\leftarrow}|I|$ $({\bm{\lambda}},\ {\bm{\delta}}) {\leftarrow}\big( \hat{{\bm{\lambda}}}_{i \in I},\ (\hat{{\bm{\delta}}}_i)_{i \in I} \big) \in {{\mathsf{K}}}[x]^{k \times 1} \times {\mathbb Z\xspace}^{k} $ ${\textbf{return }}({\bm{\lambda}}, {\bm{\delta}})$ \[alg:simpadedual\] is correct. The cost of the algorithm is $O(n^{\omega-1}\, {{\mathsf{M}}}(d) (\log d) (\log d/n)^2)$ operations in ${{\mathsf{K}}}$. Correctness follows from \[thm:dualityMinbasis\]. The complexity estimate is achieved if the algorithms supporting \[thm:orderbasis\] and \[thm:fastsolver\] are used for the computation in lines 2 and 4, respectively. A Divide & Conquer algorithm {#sec:intersect} ============================ Our second algorithm can handle the full generality of \[prob:sim\_pade\_basis\]. It works by first solving $n$ single [Padé]{}approximations, one for each of the $S_i$ individually, and then intersecting these solutions to form approximations of multiple $S_i$ simultaneously. The intersection is structured in a Divide & Conquer tree, and performed by computing minimal approximant bases. Let $({\bm{S}}, {\bm{g}}, {\bm{N}})$ be an instance of \[prob:sim\_pade\_basis\] of size $n$. The idea of the intersection algorithm is the following: consider that we have solution specifications for two different Simultaneous [Padé]{}problems, $({\bm{\lambda}}_1, {\bm{\delta}}_1)$ and $({\bm{\lambda}}_2, {\bm{\delta}}_2)$. 
We then compute an approximant basis $G$ of the following matrix: $$\label{eqn:intersect_R} R = \left[\begin{array}{@{}c|c@{}} 1 & 1 \\ \hline -{\bm{\lambda}}_1 & \\\hline & -{\bm{\lambda}}_2 \\ \end{array}\right]$$ $G$ then encodes the *intersection* of the ${{\mathsf{K}}}[x]$-linear combinations of the ${\bm{\lambda}}_1$ with the ${{\mathsf{K}}}[x]$-linear combinations of the ${\bm{\lambda}}_2$: any $\lambda \in {{\mathsf{K}}}[x]$ residing in both sets of polynomials will appear as the first entry of a vector in the row space of $G$. We compute $G$ as an ${\bm{r}}$-minimal approximant basis to high enough order, where ${\bm{r}}$ is selected carefully such that the ${\bm{r}}$-degree of any $(\lambda \mid \ldots) \in {{\textnormal{Row}}}(G)$ will equal the $(-{\bm{N}})$-degree of the completion of $\lambda$ according to the combined Simultaneous [Padé]{}problem, whenever this degree is negative. From those rows of $G$ with negative ${\bm{r}}$-degree we then get a solution specification for the combined problem. Consider again \[ex:simpade\]. We divide the problem into two sub-problems ${\bm{S}}_1 = (S_1, S_2)$, ${\bm{N}}_1 = (5, 3, 4)$, and ${\bm{S}}_2 = (S_3)$ and ${\bm{N}}_2 = (5, 5)$. Note that $N_{1,0} = N_{2,0} = 5$, since this is the degree bound on the sought $\lambda$ for the combined problem. The sub-problems have the following solution specifications and their completions: $$\begin{aligned} ({\bm{\lambda}}_1, {\bm{\delta}}_1) &= \big( [ x^{4} + 1,\ x^3 + x ]{^\top},\ ( -1, -1 ) \big) \\ A_1 &= \left(\begin{array}{rrr} x^{4} + 1 & x^{2} + 1 & 1 \\ x^{3} + x & x & x^{3} + x \end{array}\right) \\ ({\bm{\lambda}}_2, {\bm{\delta}}_2) &= \big( [ x^2,\ x^3 + x + 1 ]{^\top},\ ( -3, -2 ) \big) \\ A_2 &= \left(\begin{array}{rr} x^{2} & x^{2} \\ x^{3} + x + 1 & x + 1 \end{array}\right) \end{aligned}$$ We construct $R$ as in , and compute $G$, a minimal approximant basis of $R$ of order $7$ and with shifts ${\bm{r}}= (-5 \mid -1, -1 \mid -3, -2)$ (the $G$ below is actually in ${\bm{r}}$-Popov form): $$G = \left(\begin{array}{rrrrr} x^{8} & 0 & 0 & 0 & 0 \\ x^{3} + x + 1 & x^{4} + 1 & 1 & 0 & 1 \\ x^{3} + x^{2} + x + 1 & 1 & x + 1 & 1 & 1 \\ x^{4} + x^{3} + x + 1 & 1 & 1 & x^{2} & 1 \\ x^{4} + 1 & 1 & 0 & x + 1 & x + 1 \end{array}\right)$$ $G$ has ${\bm{r}}$-row degree $(3, 3, 0, -1, -1)$. Only rows 4 and 5 have negative ${\bm{r}}$-degree, and their first entries are the linearly independent solutions $x^4 + x^3 + x + 1$ and $x^4 + 1$. Both solutions complete into vectors with $(-{\bm{N}})$-degree -1. To prove the correctness of the above intuition, we will use \[alg:simpadedirect\] ([$\mathsf{DirectSimPade}$]{}). The following lemma says that to solve two simultaneous [Padé]{}approximations, one can compute a minimal approximant basis of one big matrix $A$ constructed essentially from two of the matrices employed in [$\mathsf{DirectSimPade}$]{}. Afterwards, \[lem:recursive\_R\_solves\] uses this to show that a minimal approximant basis of $R$ in provides the crucial information in a minimal approximant basis of $A$. 
${\textbf{return }}{\ensuremath{\mathsf{DirectSimPade}}\xspace}({\bm{S}}, {\bm{g}}, {\bm{N}})$ ${\bm{S}}_1, {\bm{g}}_1 {\leftarrow}$ the first ${\lceil n/2 \rceil}$ elements of ${\bm{S}}, {\bm{g}}$ ${\bm{S}}_2, {\bm{g}}_2 {\leftarrow}$ the last ${\lfloor n/2 \rfloor}$ elements of ${\bm{S}}, {\bm{g}}$ ${\bm{N}}_1 {\leftarrow}(N_0,N_1,\ldots,N_{{\lceil n/2 \rceil}})$ ${\bm{N}}_2 {\leftarrow}(N_0,N_{{\lceil n/2 \rceil}+1},\ldots,N_n)$ $({\bm{\lambda}}_1, {\bm{\delta}}_1) {\leftarrow}{\ensuremath{\mathsf{RecursiveSimPade}}\xspace}\big({\bm{S}}_1, {\bm{g}}_1, {\bm{N}}_1)$ $({\bm{\lambda}}_2, {\bm{\delta}}_2) {\leftarrow}{\ensuremath{\mathsf{RecursiveSimPade}}\xspace} \big({\bm{S}}_2, {\bm{g}}_2, {\bm{N}}_2)$ ${\bm{r}} {\leftarrow}(-N_0 \mid {\bm{\delta}}_1 \mid {\bm{\delta}}_2)$ $d {\leftarrow}N_0 + \max_i \deg g_i - 1$ \[line:recursive:choosed\] \[lineH\] $R {\leftarrow}\left[\begin{array}{c|c} 1 & 1\\ \hline -{\bm{\lambda}}_1 & \\\hline & -{\bm{\lambda}}_2 \end{array}\right]$ $(\left [\begin{array}{c|c} {\bm{\lambda}} & \ast \end{array} \right ], {\bm{\delta}}) {\leftarrow}{{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, R, {\bm{r}})$ \[calltoNegMin\] ${\textbf{return }}({\bm{\lambda}}, {\bm{\delta}})$ \[lem:recursive\_big\_matrix\] Let $({\bm{S}}_1, {\bm{g}}_1,{\bm{N}}_1)$ and $({\bm{S}}_2, {\bm{g}}_2,{\bm{N}}_2)$ be two instances of \[prob:sim\_pade\_basis\] of lengths $n_1, n_2$ respectively, and where ${\bm{N}}_1 = (N_0 \mid \grave{{\bm{N}}_1})$ and ${\bm{N}}_2 = (N_0 \mid \grave{{\bm{N}}_2})$. Let ${\bm{S}} = ({\bm{S}}_1 \mid {\bm{S}}_2)$, ${\bm{g}} = ({\bm{g}}_1 \mid {\bm{g}}_2)$ and ${\bm{N}} = (N_0 \mid \grave{{\bm{N}}_1} \mid \grave{{\bm{N}}_2})$ be the combined problem having length $n = n_1 + n_2$. Let ${\bm{h}}_i = (-{\bm{N}}_i \mid N_0 -1 \ldots N_0 -1) \in {\mathbb Z\xspace}^{2n_i+1}$ for $i=1,2$. Let $(F, {\bm{\delta}}) = {{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, A, {\bm{a}})$, where $A$ of dimension $(2n+3) \times (n+2)$ is given as: $$A = \left [ \begin{array}{c|c} A_1 & A_2 \end{array} \right ] = \left [ \begin{array}{cc|cc} & & 1 & 1 \\ -{\bm{S}}_1 & & -1 \\ I & & \\ { \ifx\null1 \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{1}} \fi} & & \\ & -{\bm{S}}_2 & & -1 \\ & I & \\ & { \ifx\null2 \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{2}} \fi} & \end{array} \right ] ,$$ with ${\bm{a}} = (- N_0 \mid {\bm{h}}_1 \mid {\bm{h}}_2)$ and $d = N_0 + \max_i \deg g_i - 1$. Then $({\bm{\lambda}}, {\bm{\delta}})$ is a solution specification to $({\bm{S}}, {\bm{g}}, {\bm{N}})$, where ${\bm{\lambda}}$ is the first column of $F$. Note that the matrix $A$ is right equivalent to the following matrix $B$: $$B := A \left [ \begin{array}{cccc} & & I & \\ & & & I \\ 1 & & {\bm{S}}_1 & \\ & 1 & & {\bm{S}}_2 \end{array} \right ] = \left [ \begin{array}{cc|cc} 1 & 1 & - {\bm{S}}_1 & -{\bm{S}}_2 \\ -1 & & & \\ & & I & \\ & & { \ifx\null1 \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{1}} \fi} & \\ & -1 & & \\ & & & I \\ & & & { \ifx\null2 \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{2}} \fi} \end{array} \right ].$$ Since $F$ is an ${{\bm{a}}}$-minimal approximant of $A$ of order $d$, then it will also be one for $B$. 
Let $P$ be the permutation matrix that produces the following matrix $C := P B$: $$\setlength{\arraycolsep}{.8\arraycolsep} C = P B = \left [ \begin{array}{cc|cc} 1 & 1 & - {\bm{S}}_1 & -{\bm{S}}_2 \\ & & I & \\ & & & I \\ & & { \ifx\null1 \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{1}} \fi} & \\ & & & { \ifx\null2 \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{2}} \fi} \\\hline -1 & & & \\ & -1 & & \end{array} \right ] = \left [ \begin{array}{cc|c} 1 & 1 & - {\bm{S}} \\ & & I \\ & & { \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}\\\hline -1 & & \\ & -1 & \end{array} \right ].$$ Define ${\bm{c}} := {\bm{a}} P^{-1}$, and note that ${\bm{c}} = ({\bm{h}} \mid -N_0, -N_0)$. Since $F = {{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, A, {\bm{a}})$, then $(FP^{-1}, {\bm{\delta}})$ is a valid output of ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, C, {\bm{c}})$. Furthermore, since the first column of $P$ is $(1, 0, \ldots, 0)$, the first column of $F$ will be equal to the first column of $FP^{{-1}}$. We are therefore finished if we can show that if $(F', {\bm{\delta}}')$ is any valid output of ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, C, {\bm{c}})$, then the first column of $F'$ together with ${\bm{\delta}}'$ form a solution specification to $({\bm{S}}, {\bm{g}}, {\bm{N}})$. Consider therefore such an $(F', {\bm{\delta}}')$. By the first two columns of $C$, we must have $F'_{*,1} \equiv F'_{*,2n+2} \equiv F'_{*,2n+3} \mod x^d$, where $F'_{*,i}$ denotes the $i$’th column of $F'$. Since each row of $F'$ have negative ${\bm{c}}$-degree, and since $N_0 < d$, then the congruences must lift to equalities. We can therefore write $F = [ G \mid F'_{*,1} \mid F'_{*,1} ]$ for some $G \in {{\mathsf{K}}}[x]^{k \times (2n+1)}$ for some $k$, and we have ${{\textnormal{rowdeg}}}_{{\bm{h}}} G = {{\textnormal{rowdeg}}}_{{\bm{c}}} F' = {\bm{\delta}}'$. By the last $n$ columns of $C$, we have $G H \equiv 0 \mod x^d$, where $$H = \left[\begin{array}{c} -{\bm{S}} \\ I \\ { \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}\end{array}\right] \ .$$ In fact, $(G, {\bm{\delta}}')$ is a valid output for ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, H, {\bm{h}})$: for $G$ has full row rank since $F'$ does; $G$ is ${\bm{h}}$-row reduced since $F'$ is ${\bm{c}}$-row reduced; and any negative ${\bm{h}}$-order $d$ approximant of $H$ must clearly be in the span of $G$ since $F'$ is a negative ${\bm{c}}$-minimal approximant basis of $C$. By the choice of $d$, then \[thm:simpadedirect\] therefore implies that the first column of $G$ together with ${\bm{\delta}}'$ form a solution specification to the problem $({\bm{S}}, {\bm{g}}, {\bm{N}})$. Since the first column of $G$ is also the first column of $F'$, this finishes the proof. \[lem:recursive\_R\_solves\] In the context of \[lem:recursive\_big\_matrix\], let $({\bm{\lambda}}_1, {\bm{\delta}}_1)$ and $({\bm{\lambda}}_2, {\bm{\delta}}_2)$ be solution specifications to the two sub-problems, and let ${\bm{r}} = (-N_0 \mid {\bm{\delta}}_1 \mid {\bm{\delta}}_2)$. If $([ {\bm{\lambda}} \mid * ], {\bm{\delta}}) = {{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, R, {\bm{r}})$, where ${\bm{\lambda}}$ is a column vector and $$R = \left[\begin{array}{c|c} 1 & 1\\ \hline -{\bm{\lambda}}_1 & \\\hline & -{\bm{\lambda}}_2 \end{array}\right] \ ,$$ then $({\bm{\lambda}}, {\bm{\delta}})$ is a solution specification for the combined problem. 
We will prove the lemma by using \[lem:paderecprune\] to relate valid outputs of ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, R, {\bm{r}})$ with valid outputs of ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, A, {\bm{a}})$ from \[lem:recursive\_big\_matrix\]. For $i=1,2$, since $({\bm{\lambda}}_i, {\bm{\delta}}_i)$ is a solution specification to the $i$’th problem, then by \[thm:simpadedirect\] there is some $G_i \in {{\mathsf{K}}}[x]^{k_i \times 2n_i+1}$ whose first column is ${\bm{\lambda}}_i$ and such that $G_i$ is a valid output of ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, H_i, {\bm{h}}_i)$, where $$H_i = \left [ \begin{array}{c} -{\bm{S}}_i \\ I \\ { \ifx\nulli \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{i}} \fi} \end{array} \right ] \in {{\mathsf{K}}}[x]^{(2n_i+1) \times n_i} ,$$ and ${\bm{h}}_i$ is as in \[lem:recursive\_big\_matrix\]. Note now that if $$F_1 := \left [\begin{array}{ccc} 1 & & \\ & G_1 \\ & & G_2 \end{array} \right ] \in {{\mathsf{K}}}[x]^{(k_1+k_2+1) \times (2n_1 + 2n_2 + 3)} ,$$ then $(F_1, {\bm{r}})$ is a valid output of ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, A_1, {\bm{a}})$: for ${{\textnormal{rowdeg}}}_{{\bm{a}}} F_1$ is clearly ${\bm{r}}$; $F_1$ has full row rank and is ${\bm{r}}$-row reduced; and the rows of $F_1$ must span all ${\bm{a}}$-order $d$ approximants of $A_1$, since the three column “parts” of $F_1$ correspond to the three row parts of $A_1$. . Note now that $F_1 A_2 = R$. Thus by \[lem:paderecprune\], if $(F_2, {\bm{\delta}}) = {{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, R, {\bm{r}})$, then $(F_2 F_1, {\bm{\delta}})$ is a valid output of ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, A, {\bm{a}})$. Note that by the shape of $F_1$ then the first column ${\bm{\lambda}}$ of $F_2 F_1$ is the first column of $F_2$. Thus ${\bm{\lambda}}, {\bm{\delta}}$ are exactly as stated in the lemma, and by \[lem:recursive\_big\_matrix\] they must be a solution specification to the combined problem. \[alg:recsimpade\] is correct. The cost of the algorithm is $O(n^{\omega-1}\, {{\mathsf{M}}}(d) (\log d) (\log d/n)^2)$, $d = \max_i \deg g_i$. Correctness follows from \[lem:recursive\_R\_solves\]. For complexity, note that the choice of order in \[line:recursive:choosed\] is bounded by $2\max_i \deg g_i$, i.e. twice the value of $d$ of this theorem. So if $T(n)$ is the cost \[alg:recsimpade\] for given $n$ and where the order will be bounded by $O(d)$, then we have the following recursion: $$T(n) = \left \{\begin{array}{ll} 2T(n/2) + P(n) & \textrm{if } n > 1 \\ O({{\mathsf{M}}}(d)\log d) & \textrm{if } n = 1 \textrm{ (see {\Fref{sec:direct_reduced_basis}})} \end{array}\right . \ ,$$ where $P(n)$ is the cost of line \[calltoNegMin\]. Using algorithm [$\mathsf{PopovBasis}$]{} for the computation of the negative part of the minimal approximant bases we can set $P(n)$ to the target cost. The recursion then implies $T(n) \in O(P(n))$. [**Acknowledgements.**]{} The authors would like to thank George Labahn for valuable discussions, and for making us aware of the Hermite–Simultaneous [Padé]{}duality. We would also like to thank Vincent Neiger for making preprints of [@jeannerod_computation_2016] available to us. The first author would like to thank the Digiteo Foundation for funding the research visit at Waterloo, during which most of the ideas of this paper were developed. [^1]: $\copyright$ Johan Rosenkilde, Arne Storjohann. This is the authors’ version of the work. It is posted here for your personal use. Not for redistribution. 
The definitive version was published in ISSAC ’16, http://dx.doi.org/10.1145/2930889.2930933. [^2]: The notions $(-{\bm{N}})$-degree, $\deg_{-{\bm{N}}}$ and $(-{\bm{N}})$-row reduced are recalled in \[sec:preliminaries\]. [^3]: The restriction $N_i \leq \deg g_i$ in \[prob:sim\_pade\] ensures that for a given $\lambda$, the only possibilities for the $\phi_i$ in a solution are ${{\textnormal{rem}}}(\lambda S_i, \ g_i)$. In particular, if we allowed $N_i > \deg g_i$ then $(0,\ldots, 0, g_i, 0, \ldots, 0)$ would be a solution which can not be directly reconstructed from its first element.
--- abstract: | We consider the Cauchy problem for the evolutive discrete $p$-Laplacian in infinite graphs, with initial data decaying at infinity. We prove optimal sup and gradient bounds for nonnegative solutions, when the initial data has finite mass, and also sharp evaluation for the confinement of mass, i.e., the effective speed of propagation. We provide estimates for some moments of the solution, defined using the distance from a given vertex. Our technique relies on suitable inequalities of Faber-Krahn type, and looks at the local theory of continuous nonlinear partial differential equations. As it is known, however, not all of this approach can have a direct counterpart in graphs. A basic tool here is a result connecting the supremum of the solution at a given positive time with the measure of its level sets at previous times. We also consider the case of slowly decaying initial data, where the total mass is infinite. address: - | Department of Basic and Applied Sciences for Engineering\ Sapienza University of Rome, Italy - | South Mathematical Institute of VSC RAS\ Vladikavkaz, Russian Federation author: - Daniele Andreucci - 'Anatoli F. Tedeev' bibliography: - 'paraboli.bib' - 'pubblicazioni\_andreucci.bib' title: | Asymptotic estimates for the $p$-Laplacian on infinite graphs\ with decaying initial data --- \[1994/06/01\] [^1] [^2] Introduction {#s:intro} ============ We consider nonnegative solutions to the Cauchy problem for discrete degenerate parabolic equations $$\begin{aligned} {2} \label{eq:pde} {\frac{\partial {u}}{\partial t}}(x,t) - \operatorname{\Delta}_{p} {u}(x,t) &= 0 \,, &\qquad& x\in V\,, t>0 \,, \\ \label{eq:init} {u}(x,0) &= {u}_{0}(x) \ge 0 \,, &\qquad& x\in V \,.\end{aligned}$$ Here $V$ is the set of vertices of the graph $G(V,E)$ with edge set $E\subset V\times V$ and weight ${\omega}$, and $$\operatorname{\Delta}_{p} u(x,t) = \frac{1}{{d_{{\omega}}}(x)} \sum_{y\in V} {\lvert{u}(y)-{u}(x)\rvert}^{p-2} ({u}(y)-{u}(x)) {\omega}(x,y) \,.$$ We assume that the graph $G$ is simple, undirected, infinite, connected with locally finite degree $${d_{{\omega}}}(x) = \sum_{y\sim x} {\omega}(x,y) \,,$$ where we write $y\sim x$ if and only if $\{x,y\}\in E$. Here the weight ${\omega}:V\times V\to [0,+\infty)$ is symmetric, i.e., ${\omega}(x,y)={\omega}(y,x)$, and is strictly positive if and only if $y\sim x$; then ${\omega}(x,x)=0$ for $x\in V$. We assume also that $p>2$ and that ${u}_{0}$ is nonnegative; further assumptions on ${u}_{0}$ will be stated below. We prove sharp sup bounds for large times of solutions corresponding to finite mass initial data; in order to prove the bound from below we find an optimal estimate for the effective speed of propagation of mass. We also determine the stabilization rate for data exhibiting slow decay ‘at infinity’, in a suitable sense. To the best of our knowledge such results are new in the framework of discrete nonlinear diffusion equations on graphs.\ We apply an alternative approach, more local than the one in [@Mugnolo:2013], [@Hua:Mugnolo:2015] where the global arguments of semigroup theory are extended to graphs, actually in a more general setting which is out of the scope of this paper. We comment below in the Introduction on the inherent difficulty and even partial unfeasibility of a local approach in graphs. It is therefore an interesting and not trivial problem to understand how much of this body of techniques can be used in this environment. 
This paper can be seen as a cross section of this effort; specifically we look at the interplay between spread of mass and sup estimates, following ideas coming from the theory of continuous partial differential equations, with the differences required by the discrete character of graphs. We recall the following notation: for any $R\in {{\boldsymbol}{N}}$, we let $$B_{R}(x_{0}) = \{ x\in V \mid d(x,x_{0}) \le R \} \,.$$ Here $d$ is the standard combinatorial distance in $G$ so that $d$ only takes integral values. For any $f:V\to {{\boldsymbol}{R}}$ we set for all $q\ge 1$, $U\subset V$ $$\begin{gathered} {{\lVertf\rVert}_{\ell^{q}(U)}}^{q} = \sum_{x\in U} {\lvertf(x)\rvert}^{q} {d_{{\omega}}}(x) \,, \quad {{\lVertf\rVert}_{\ell^{\infty}(U)}} = \sup_{x\in U} {\lvertf(x)\rvert} \,, \\ {\mu_{{\omega}}}(U) = \sum_{x\in U} {d_{{\omega}}}(x) \,.\end{gathered}$$ All the infinite sums in this paper are absolutely convergent. In the following we always assume, unless explicitly noted, that all balls are centered at a given fixed $x_{0}\in V$ and we write $B_{R}(x_{0})=B_{R}$. We denote generic constants depending on the parameters of the problem by $\gamma$ (large constants), $\gamma_{0}$ (small constants). We also set for all $A\subset V$ $$\chi_{A}(x) = 1 \,, \quad x\in A \,; \qquad \chi_{A}(x) = 0 \,, \quad x\not\in A \,.$$ \[d:fk\] We say that $G$ satisfies a global Faber-Krahn inequality for a given $p>1$ and function ${\Lambda_p}:(0,+\infty)\to(0,+\infty)$ if for any $v>0$ and any finite subset $U\subset V$ with ${\mu_{{\omega}}}(U)=v$ we have $$\label{eq:fk} {\Lambda_p}(v) \sum_{x\in U} {\lvertf(x)\rvert}^{p} {d_{{\omega}}}(x) \le \sum_{x,y\in (U)_{1}} {\lvertf(y)-f(x)\rvert}^{p} {\omega}(x,y) \,,$$ for all $f:V\to {{\boldsymbol}{R}}$ such that $f(x)=0$ if $x\not \in U$; here $$(U)_{1} = \{ x\in V \mid d(x,U) \le 1 \} \,.$$ We assume throughout that ${\Lambda_p}\in C(0,+\infty)$ is decreasing and that two suitable positive constants ${N}$, $\omega$ exists such that $$\begin{aligned} \label{eq:fkf_nd} v&\mapsto {\Lambda_p}(v)^{-1} v^{-\frac{p}{{N}}} \,, \quad v>0 \,, \quad \text{is nondecreasing;} \\ \label{eq:fkf_ni} v&\mapsto {\Lambda_p}(v)^{-1} v^{-\omega} \,, \quad v>0 \,, \quad \text{is nonincreasing.}\end{aligned}$$ An important class of functions in our approach is given by $$\label{eq:dcf_def} {\psi}_{r}(s) = s^{\frac{p-2}{r}} {\Lambda_p}(s^{-1}) \,, \qquad s>0 \,,$$ for each fixed $r\ge 1$. They, or more exactly their inverses, give the correct time-space scaling for the equation , see for example Theorems \[t:l1\] and \[p:bbl\] below. If we make the additional assumption that for some constant $c>0$ $$\label{eq:fkf_bbl} {\Lambda_p}(v) \ge c {\mathcal{R}}(v)^{-p} \,, \qquad v>0 \,,$$ where ${\mathcal{R}}:(0,+\infty)\to (0,+\infty)$ is such that ${\mu_{{\omega}}}(B_{{\mathcal{R}}(v)})=v$, we may connect ${\psi}_{1}$ to the measure of a ball in $G$. This in turn allows us to prove sharpness of our $\ell^{1}$–$\ell^{\infty}$ estimate. Property is rather natural. For instance it is known to hold for the explicit examples in Subsection \[s:examples\], to which we refer for implementations of our results in some concrete relevant cases. \[r:spdim\] The constant ${N}$ in has no intrinsic meaning in this paper, and it is employed here only with the purpose of making easier the comparison with the case of standard regular graphs ${{\boldsymbol}{Z}}^{{N}}$, where ${\Lambda_p}(v)=\gamma_{0}v^{-p/{N}}$, see Subsection \[s:examples\]. 
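For orientation, we note (as an elementary check, added only for illustration) that all of the above requirements hold in the model case ${\Lambda_p}(v)=\gamma_{0}v^{-p/{N}}$ recalled in Remark \[r:spdim\]: indeed $$ {\Lambda_p}(v)^{-1} v^{-\frac{p}{{N}}} = \gamma_{0}^{-1} \,, \qquad {\Lambda_p}(v)^{-1} v^{-\omega} = \gamma_{0}^{-1} v^{\frac{p}{{N}}-\omega} \,, \qquad v>0 \,,$$ so the first map is constant, hence nondecreasing, while the second is nonincreasing for every $\omega \ge p/{N}$; moreover, on ${{\boldsymbol}{Z}}^{{N}}$ with unit edge weights ${\mu_{{\omega}}}(B_{R})$ is bounded above and below by positive multiples of $R^{{N}}$ for every $R\ge 1$, so that ${\mathcal{R}}(v)$ is comparable to $v^{1/{N}}$ and ${\Lambda_p}(v)\ge c\,{\mathcal{R}}(v)^{-p}$ holds for a suitable $c>0$.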
\[r:fk\_below\] Let $x\in V$ and choose $U=\{x\}$, $f=\chi_{U}$ in , which then yields $$\label{eq:rem_fk} {\Lambda_p}({d_{{\omega}}}(x)) {d_{{\omega}}}(x) \le 2{d_{{\omega}}}(x) \,.$$ Since ${\Lambda_p}$ is decreasing by assumption we infer $$\label{eq:rem_fk_bound} {d_{{\omega}}}(x)\ge {\Lambda_p}^{(-1)}(2) \,.$$ A remark in this connection is perhaps in order: clearly according to its definition the Faber-Krahn function ${\Lambda_p}(v)$ is defined for uniformly positive $v$ according to , so that , should be assumed for such $v$. Aiming at a technically streamlined framework, we extend ${\Lambda_p}$ to all $v>0$, while easily preserving the latter assumptions. However one can check that for large times, ${\Lambda_p}$ is evaluated at large arguments in our results, which are thus independent of this extension. \[r:lp\_scale\] A consequence of is that any bound in $\ell^{q}(V)$ yields immediately a uniform pointwise bound: if ${v}\in\ell^{q}(V)$, $$\label{eq:lp_linf} {\lvert{v}(z)\rvert}^{q} \le {\lvert{v}(z)\rvert}^{q}\frac{{d_{{\omega}}}(z)}{{\Lambda_p}^{(-1)}(2)} \le \frac{1}{{\Lambda_p}^{(-1)}(2)} {{\lVert{v}\rVert}_{\ell^{q}(V)}}^{q} \,, \quad z\in V \,.$$ In turn this implies that $\ell^{p}(V)\subset\ell^{q}(V)$ if $p<q$, since $$\sum_{x\in V} {\lvertf(x)\rvert}^{q} {d_{{\omega}}}(x) \le M^{q-p} \sum_{x\in V} {\lvertf(x)\rvert}^{p} {d_{{\omega}}}(x) \,,$$ for a suitable $M$ as in . \[d:sol\] We say that ${u}\in L^{\infty}(0,T; \ell^{r}(V))$ is a solution to if ${u}(x)\in C^{1}([0,T])$ for every $x\in V$ and ${u}$ satisfies in the classical pointwise sense.\ A solution to – also is required to take the initial data prescribed by , for each $x\in V$. We refer the reader to [@Hua:Mugnolo:2015] for existence and uniqueness of solutions. To make this paper more self-contained however we sketch in Section \[s:prelim\] a proof of existence in Proposition \[p:ex\] (in $\ell^{q}$, $q>1$, see Theorem \[t:l1\] for $q=1$), and of uniqueness via comparison in Proposition \[p:compare\]. Our first two results are typical of the local approach we pursue. All solutions we consider below are nonnegative. \[p:linf\_meas\] Let ${u}:V\to {{\boldsymbol}{R}}$ be a solution to , with ${u}\in L^{\infty}(0,T;\ell^{r}(V))$ for some $r\ge 1$. Then for all $x\in V$, $0<t<T$ $$\label{eq:linf_meas_n} {u}(x,t) \le k \,,$$ provided $k>0$ satisfies for a suitable $\gamma_{0}(p,{N})$ $$\label{eq:linf_meas_nn} k^{-1} t^{-\frac{1}{p-2}} {\Lambda_p}\Big(\sup_{\frac{t}{4}<\tau<t}{\mu_{{\omega}}}(\{x\in V\mid {u}(x,\tau)> {k}/{2}\})\Big)^{-\frac{1}{p-2}} \le \gamma_{0} \,.$$ \[co:linf\_int\] Under the assumptions in Proposition \[p:linf\_meas\], we have $$\label{eq:linf_int_m} {u}(x,t) \le \gamma \sup_{0<\tau<t}{{\lVert{u}(\tau)\rVert}_{\ell^{r}(V)}} \big[{\psi}_{r}^{(-1)} \big( t^{-1} \sup_{0<\tau<t}{{\lVert{u}(\tau)\rVert}_{\ell^{r}(V)}}^{-(p-2)} \big) \big]^{\frac{1}{r}} \,,$$ for all $x\in V$, $0<t<T$. Here ${\psi}_{r}^{(-1)}$ is the inverse function of ${\psi}_{r}$ as defined in . \[r:dcf\] One can check easily using the fact that ${\Lambda_p}$ is nonincreasing that $$a\mapsto a{\psi}_{r}^{(-1)}(sa^{-(p-2)})^{\frac{1}{r}}$$ is nondecreasing in $a>0$ for each fixed $s>0$. Next Theorem follows directly from the estimates we stated above. Note that conservation of mass in was proved also in [@Hua:Mugnolo:2015], while the other estimates are new, as far as we know. \[t:l1\] Let ${u}_{0}\in\ell^{1}(V)$, ${u}_{0}\ge 0$. 
Then problem – has a unique solution satisfying for all $t>0$ $$\begin{aligned} \label{eq:l1_n} {{\lVert{u}(t)\rVert}_{\ell^{1}(V)}} &= {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} \,, \\ \label{eq:l1_nn} {{\lVert{u}(t)\rVert}_{\ell^{\infty}(V)}} &\le \gamma {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} {\psi}_{1}^{(-1)} \big( t^{-1} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{-(p-2)} \big) \,. \end{aligned}$$ In addition ${u}$ satisfies $$\begin{gathered} \label{eq:l1_nnn} \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \\ \le \gamma t^{\frac{1}{p}} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{2(p-1)}{p}} {\psi}_{1}^{(-1)}(t^{-1}{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{-(p-2)})^{\frac{p-2}{p}} \,. \end{gathered}$$ \[r:lp\_est\] We notice that one could exploit , to derive trivially a bound of the integral in . This is due of course to the fact that the $p$-laplacian in our setting is discrete, and it would not be possible in the framework of continuous partial differential equations. Such a bound however is not sharp, and for example could not be used in the proof of Theorem \[p:bbl\]. In other instances where optimality is not needed we exploit a device similar to the one just described, relying on Remark \[r:lp\_scale\]; see for example the proof of Lemma \[l:cacc2\]. So far our extension to graphs of methods and results of continuous differential equations has been successful. However, in the latter setting a standard device to prove optimality of the bound in relies on the property of finite speed of propagation (i.e., solutions with initially bounded support keep this feature for all $t>0$). In the setting of graphs this property strikingly fails, as shown in [@Hua:Mugnolo:2015]. As a technical but perhaps worthwile side remark, we note that all the main ingredients in the proof of finite speed of propagation (see [@Andreucci:Tedeev:1999], [@Andreucci:Tedeev:2000]) seem to be available in graphs too: embeddings as in [@Ostrovskii:2005], Caccioppoli inequalities as in Lemma \[l:cacc\] below, and of course iterative techniques as the one displayed in the proof of Proposition \[p:linf\_meas\]. The key exception in this regard is the fact that full localization via an infinite sequence of nested shrinking balls is clearly prohibited by the discrete metric at hand. This is a point of marked difference with the continuous setting. Still we can prove sharpness of our $\ell^{1}$–$\ell^{\infty}$ bound by means of the following result of confinement of mass. By the same argument we can estimate also a suitable moment of the solution, which is also a new result for nonlinear diffusion in graphs, see Section \[s:bbl\]. \[p:bbl\] Let ${u}_{0}\ge 0$ be finitely supported. 
Then for every $1>{\varepsilon}>0$ there exists a $\varGamma>0$ such that $$\label{eq:bbl_n} {{\lVert{u}(t)\rVert}_{\ell^{1}(B_{R})}} \ge (1-{\varepsilon}) {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} \,, \qquad t>0 \,,$$ provided $B_{{\lfloorR/2\rfloor}}$ contains the support of ${u}_{0}$, and $R$ is chosen so that $$\label{eq:bbl_nn} R \ge \varGamma t^{\frac{1}{p}} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{p-2}{p}} {\psi}_{1}^{(-1)}(t^{-1}{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{-(p-2)})^{\frac{p-2}{p}} \ge 8 \,.$$ In addition, provided $R$ is chosen as in , for ${\varepsilon}=1/2$, and $\alpha\in (0,1)$, $$\label{eq:bbl_p} \sum_{x\in V} d(x,x_{0})^{\alpha} {u}(x,t) {d_{{\omega}}}(x) \le \gamma R^{\alpha} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} \,, \qquad t>0 \,.$$ Next we exploit the estimate – in order to show that up to a change in the constant we can reverse the inequality in , proving at once the optimality of both results. \[p:bbl2\] Under the assumptions in Theorem \[p:bbl\], let in addition ${\Lambda_p}$ satisfy . Then $$\label{eq:bbl_nnn} {{\lVert{u}(t)\rVert}_{\ell^{\infty}(V)}} \ge \frac{{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}}{2{\mu_{{\omega}}}(B_{R})} \ge \gamma_{0} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} {\psi}_{1}^{(-1)} \big( t^{-1} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{-(p-2)} \big) \,,$$ where $R$ is as in , for ${\varepsilon}=1/2$. Clearly, owing to the comparison principle of Proposition \[p:compare\], results like those in and may be proved even dropping the assumption that ${u}_{0}$ is finitely supported; for the sake of brevity we omit the details. In order to state our last result we need to introduce the following function, which essentially gives the correct scaling between time and space in the case of slow decay initial data: for ${u}_{0}\in\ell^{q}(V)\setminus\ell^{1}(V)$ for some $q>1$ set $$\label{eq:decay_fn} {T}_{{u}_{0}}(R,x_{0}) = \Big[ \frac{{{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(x_{0}))}}}{{{\lVert{u}_{0}\rVert}_{\ell^{q}(V\setminus B_{R}(x_{0}))}}^{q}} \Big]^{\frac{p-2}{q-1}} \, {\Lambda_p}\bigg( \Big( \frac{ {{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(x_{0}))}} }{ {{\lVert{u}_{0}\rVert}_{\ell^{q}(V\setminus B_{R}(x_{0}))}} } \Big)^{\frac{q}{q-1}} \bigg)^{-1} \,,$$ for $R\in {{\boldsymbol}{N}}$, $x_{0}\in V$. Clearly for each fixed $x_{0}$ the function ${T}_{{u}_{0}}$ is nondecreasing in $R$ and ${T}_{{u}_{0}}(R,x_{0})\to +\infty$ as $R\to\infty$. Conversely, ${T}_{{u}_{0}}(0,x_{0})$ may be positive. However it can be easily seen that for any given ${\varepsilon}>0$ there exists $x_{0}$ such that ${T}_{{u}_{0}}(0,x_{0})<{\varepsilon}$. \[t:decay\] Let ${u}_{0}\in\ell^{q}(V)\setminus\ell^{1}(V)$ for some $q>1$. Then for all $t>0$, $x_{0}\in V$ $$\label{eq:decay_n} {{\lVert{u}(t)\rVert}_{\ell^{\infty}(V)}} \le \gamma {{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(x_{0}))}} {\psi}_{1}^{(-1)}\big(t^{-1} {{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(x_{0}))}}^{-(p-2)}\big) \,,$$ provided $R$ is chosen so that $$\label{eq:decay_nn} t\le {T}_{{u}_{0}}(R,x_{0}) \,,$$ the optimal choice being of course the minimum $R=R(t)$ such that holds true. Let us comment briefly on the existing literature on the non-linear $p$-Laplacian in graphs. The papers [@Mugnolo:2013], [@Hua:Mugnolo:2015], deal with the Cauchy problem applying techniques inspired from the theory of semigroups of continuous differential operators. They consider a more general variety of weighted graphs and operators than we do here, dealing e.g., with existence, uniqueness, time regularity, possible extinction in a finite time. 
However our results do not seem to be easily reached by this approach. We also quote [@Keller:Mugnolo:2016] where a connection between Cheeger constants and the eigenvalues of the $p$-laplacian is drawn in a very flexible setting.\ Boundary problems on finite subgraphs are also considered in several papers dealing with features like blow up or extinction; we quote only [@Chung:Choi:2014] and [@Chung:Park:2017]. The case of the discrete linear Laplacian where $p=2$ is more classical, also for its connections with probability theory (see e.g., [@Andres:etal:2013] and references therein), and is often attacked by means of suitable parallels with the theory of heat kernels in manifolds. We quote [@Coulhon:Grigoryan:1998], [@Barlow:Coulhon:Grigoryan:2001] where a connection is drawn between properties of heat kernels, of graphs and Faber-Krahn functions.\ In [@Lin:Wu:2017] heat kernels are used to study the blow up of solutions to the Cauchy problem for a semilinear equation on a possibly infinite graph. The subject of diffusion in graphs is popular also owing to its applicative interest. We refer the reader to [@Mugnolo:2013], [@Elmoataz:Toutain:Tenbrinck:2015] and to the references therein for more on this point. Finally we recall the papers [@Bakry:etal:1995a], [@Ostrovskii:2005] and books [@Chung:SGT], [@Grigoryan:AG] for basic information on functional analysis on graphs and manifolds. We mention that in our setting it is still valid the argument in [@Bonforte:Grillo:2007] showing that optimal decay rates imply suitable embeddings. Here we look essentially at the approach of [@DiBenedetto:Herrero:1989] and [@Andreucci:Tedeev:2015]. The paper is organized as follows: Section \[s:prelim\] is devoted to preliminary material. Proposition \[p:linf\_meas\] and its Corollary \[co:linf\_int\] are proved in Section \[s:linf\], while Section \[s:l1\] contains the proof of Theorem \[t:l1\] and Section \[s:bbl\] deals with Theorem \[p:bbl\] and Corollary \[p:bbl2\]. Finally Theorem \[t:decay\] is proved in Section \[s:decay\]. Examples {#s:examples} -------- 1\) As a first example we consider the case of the standard lattice $G={{\boldsymbol}{Z}}^{{N}}$, where one can take ${\Lambda_p}(v)=\gamma_{0}v^{-p/{N}}$, according to the results of [@Wang:Wang:1977], [@Ostrovskii:2005]. This is the case where comparison with the Cauchy problem for the continuous $p$-Laplacian is more straightforward. In this case $$\label{eq:dcf_euc} {\psi}_{r}(s) = \gamma_{0} s^{\frac{{N}(p-2)+pr}{{N}r}} \,, \qquad s>0 \,,$$ and for example estimate becomes $$\label{eq:l1_nn_grid} {{\lVert{u}(t)\rVert}_{\ell^{\infty}(V)}} \le \gamma {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{p}{{N}(p-2)+p}} t^{-\frac{{N}}{{N}(p-2)+p}} \,,$$ while the critical radius for expansion of mass in amounts to $$\label{eq:bbl_nn_grid} R\ge \gamma {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{p-2}{{N}(p-2)+p}} t^{\frac{1}{{N}(p-2)+p}} \,.$$ We remark that both results formally coincide with the corresponding ones for the continuous $p$-Laplacian in ${{{\boldsymbol}{R}}^{{N}}}$, see [@DiBenedetto:Herrero:1989].\ Next we apply Theorem \[t:decay\] to the following initial data: for $x=(x_{1},\dots,x_{{N}})\in{{\boldsymbol}{Z}}^{{N}}$ set ${u}_{0}(x)=({\lvertx_{1}\rvert}+\dots+{\lvertx_{{N}}\rvert})^{-\alpha}$ for a given $0<\alpha<{N}$. Let us write here $a(s)\simeq b(s)$ if $\gamma_{0}a(s)\le b(s)\le \gamma a(s)$ for two constants independent of $s$. 
One can see that $${{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(0))}} \simeq R^{{N}-\alpha} \,; \qquad {{\lVert{u}_{0}\rVert}_{\ell^{q}(V\setminus B_{R}(0))}} \simeq R^{{N}-\alpha q} \,,$$ for all $q>{N}/\alpha$. Therefore in this case $${T}_{{u}_{0}}(R,0) \simeq R^{\alpha(p-2)+p} \,,$$ and the estimate in essentially amounts to the decay rate $t^{-\alpha/(\alpha(p-2)+p)}$, which is the expected one in view of the results of [@Tedeev:1991]. 2\) One can treat also other examples of product graphs; for instance if $H$ is a finite connected graph we let $G=H\times {{\boldsymbol}{Z}}^{{N}}$ and recover results similar to the ones of the previous example. 3\) All examples where the Faber-Krahn function is estimated for $p=2$ yield also examples in our case of $p>2$, as it follows from applying Hölder’s inequality; see e.g., [@Coulhon:Grigoryan:1998], [@Barlow:Coulhon:Grigoryan:2001]. Preliminary material {#s:prelim} ==================== We use for $f:V\to {{\boldsymbol}{R}}$ the notation $${D}_{y}f(x) = f(y)-f(x) = - {D}_{x}f(y) \,, \qquad x\,,y\in V \,.$$ Caccioppoli type inequalities {#s:prelim_cacc} ----------------------------- \[l:monot\] Let $q>0$, $p>2$, $h\ge 0$, ${u}$, ${v}:V\to{{\boldsymbol}{R}}$. Then for all $x$, $y\in V$ $$\begin{gathered} \label{eq:monot_n} \big( {\lvert{D}_{y}{u}(x)\rvert}^{p-2} {D}_{y}{u}(x) - {\lvert{D}_{y}{v}(x)\rvert}^{p-2} {D}_{y}{v}(x) \big) {D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x)-h)_{+}}{\left({u}(x)-{v}(x)-h\right)_{+}}}}^{q} \\ \ge \gamma_{0} {\left| {D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x)-h)_{+}}{\left({u}(x)-{v}(x)-h\right)_{+}}}}^{\frac{q-1+p}{p}} \right|}^{p} \,. \end{gathered}$$ First we remark that we may assume $h=0$, by renaming $\tilde{v}={v}+h$. The corresponding version of clearly holds true if ${D}_{y}{u}(x)={D}_{y}{v}(x)$. If ${D}_{y}{u}(x)\not={D}_{y}{v}(x)$ the left hand side of with $h=0$ can be written as, on appealing also to a classical elementary result in monotone operators, see [@DiB:dpe], $$\begin{gathered} \label{eq:monot_i} \big( {\lvert{D}_{y}{u}(x)\rvert}^{p-2} {D}_{y}{u}(x) - {\lvert{D}_{y}{v}(x)\rvert}^{p-2} {D}_{y}{v}(x) \big) {D}_{y}({u}(x)-{v}(x)) \, \mathcal{A} \\ \ge \gamma_{0}(p) {\lvert{D}_{y}({u}(x)-{v}(x))\rvert}^{p} \, \mathcal{A} \,, \end{gathered}$$ where we define $$\mathcal{A} = \frac{ {D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x))_{+}}{\left({u}(x)-{v}(x)\right)_{+}}}}^{q} }{ {D}_{y}({u}(x)-{v}(x)) } \ge 0 \,.$$ On the other hand, we write the right hand side of with $h=0$ as $$\label{eq:monot_ii} {\lvert{D}_{y} ({u}(x)-{v}(x))\rvert}^{p} \, \mathcal{B} \,, \qquad \mathcal{B} := {\left| \frac{ {D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x))_{+}}{\left({u}(x)-{v}(x)\right)_{+}}}}^{\frac{q-1+p}{p}} }{ {D}_{y}({u}(x)-{v}(x)) } \right|}^{p} \,.$$ Therefore we have only to prove that $\mathcal{A}\ge \gamma_{0}\mathcal{B}$. Clearly in doing so we may assume without loss of generality that $${u}(y) - {v}(y) > {u}(x) - {v}(x) \,.$$ Hence it is left to prove that $$\begin{gathered} \label{eq:monot_iii} \big[ {u}(y)-{v}(y) - ({u}(x)-{v}(x)) \big]^{p-1} \big[ {{\ifthenelse{\equal{*}{*}}{({u}(y)-{v}(y))_{+}}{\left({u}(y)-{v}(y)\right)_{+}}}}^{q} - {{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x))_{+}}{\left({u}(x)-{v}(x)\right)_{+}}}}^{q} \big] \\ \ge \gamma_{0} \Big[ {{\ifthenelse{\equal{*}{*}}{({u}(y)-{v}(y))_{+}}{\left({u}(y)-{v}(y)\right)_{+}}}}^{\frac{q-1+p}{p}} - {{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x))_{+}}{\left({u}(x)-{v}(x)\right)_{+}}}}^{\frac{q-1+p}{p}} \Big]^{p} \,. 
\end{gathered}$$ Denote $$a = {u}(y) - {v}(y) \,, \qquad b = {u}(x) - {v}(x) \,.$$ If $b\le 0$, is obviously satisfied with $\gamma_{0}=1$. If $b>0$, by Hölder’s inequality we have $$\begin{gathered} \label{eq:monot_iv} \big[ a^{\frac{q-1+p}{p}} - b^{\frac{q-1+p}{p}} \big]^{p} = \Big[ \frac{q-1+p}{p} \int_{b}^{a} s^{\frac{q-1}{p}} {\,\textup{\textmd{d}}}s \Big]^{p} \\ \le \gamma(q,p) \Big[ \int_{b}^{a} s^{q-1} {\,\textup{\textmd{d}}}s \Big] \Big[ \int_{b}^{a} {\,\textup{\textmd{d}}}s \Big]^{p-1} \le \gamma(q,p) (a^{q}-b^{q}) (a-b)^{p-1} \,, \end{gathered}$$ proving and concluding the proof. In the following all radii of balls in $G$ will be assumed to be natural numbers. Let $R_{2}\ge R_{1}+1$, $R_{1}$, $R_{2}>0$; we define the cutoff function $\zeta$ in $B_{R_{2}}(x_{0})$ by means of $$\begin{aligned} {2} \zeta(x) &= 1 \,, &\qquad& x\in B_{R_{1}}(x_{0}) \,, \\ \zeta(x) &= \frac{ R_{2} - d(x,x_{0}) }{ R_{2} - R_{1} } \,, &\qquad& x\in B_{R_{2}}\setminus B_{R_{1}}(x_{0}) \,, \\ \zeta(x) &= 0 \,, &\qquad& x\not \in B_{R_{2}}(x_{0}) \,.\end{aligned}$$ The function $\zeta$ is chosen so that $${\lvert{D}_{y}\zeta(x)\rvert} = {\lvert \zeta(y) - \zeta(x) \rvert} \le \frac{1}{R_{2}-R_{1}} \,, \qquad x\sim y \,.$$ For $\tau_{1}>\tau_{2}>0$ we also define the standard nonnegative cutoff function $\eta\in C^{1}({{\boldsymbol}{R}})$ such that $$\eta(t) =1 \,, \,\,\, t\ge \tau_{1} \,; \quad \eta(t) = 0 \,, \,\,\, t\le \tau_{2} \,; \quad 0\le\eta'(t)\le \frac{2}{\tau_{1}-\tau_{2}} \,, \,\,\, t\in{{\boldsymbol}{R}}\,.$$ Our next Lemma is not used in the sequel; we present it here to substantiate our claim made in the Introduction that suitable local Caccioppoli type inequalities are available in the nonlinear setting, and also for its possible independent interest. The proof is somewhat more complex than in the continuous case. \[l:cacc\] Let ${u}$ be a solution of in $V\times (0,T)$, $x_{0}\in V$. Then for $T>\tau_{1}>\tau_{2}>0$, $R_{2}>R_{1}+1$, $R_{1}>0$, $h>k>0$, $1>\theta>0$ we have $$\label{eq:cacc_n} \begin{split} &\sup_{\tau_{1}<\tau<t} \sum_{x\in B_{R_{1}}(x_{0})} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta+1} \zeta(x)^{p} {d_{{\omega}}}(x) \\ &\quad+ \int_{\tau_{1}}^{t} \sum_{x\in B_{R_{1}}(x_{0}),y\in V} {\left| {D}_{y} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\frac{p+\theta-1}{p}} \right|}^{p} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \\ &\qquad\le \frac{\gamma}{\tau_{1}-\tau_{2}} \int_{\tau_{2}}^{t} \sum_{x\in B_{R_{2}}(x_{0})} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta+1} {d_{{\omega}}}(x) {\,\textup{\textmd{d}}}\tau + \gamma A^{\frac{1}{p}} B^{\frac{p-1}{p}} + \gamma A \,, \end{split}$$ where $$\begin{aligned} A&= \frac{1}{(R_{2}-R_{1})^{p}} \int_{\tau_{2}}^{t} \sum_{x\in B_{R_{2}}(x_{0})} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-k)_{+}}{\left({u}(x,\tau)-k\right)_{+}}}}^{p+\theta-1} {d_{{\omega}}}(x) {\,\textup{\textmd{d}}}\tau \,, \\ B&= h^{p} (h-k)^{\theta-1} \int_{\tau_{2}}^{t} {\mu_{{\omega}}}(B_{R_{2}}(x_{0})\cap\{2h\ge {u}(x,\tau)>h\}) {\,\textup{\textmd{d}}}\tau \,. \end{aligned}$$ \[r:caccio\] The term $A^{1/p}B^{(p-1)/p}$ in can be reduced to one containing only $A$ by means of Young’s and Chebychev’s inequalities.
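For the reader’s convenience we sketch one possible way to perform this reduction; the constants are not optimized. By Young’s inequality $A^{\frac{1}{p}}B^{\frac{p-1}{p}}\le \frac{1}{p}A+\frac{p-1}{p}B$. On the other hand, on the set $B_{R_{2}}(x_{0})\cap\{2h\ge {u}(x,\tau)>h\}$ one has ${u}(x,\tau)-k\ge h-k$, so that Chebychev’s inequality yields $$ {\mu_{{\omega}}}(B_{R_{2}}(x_{0})\cap\{2h\ge {u}(x,\tau)>h\}) \le (h-k)^{-(p+\theta-1)} \sum_{x\in B_{R_{2}}(x_{0})} ({u}(x,\tau)-k)_{+}^{p+\theta-1} {d_{{\omega}}}(x) \,, $$ whence $$ B \le \Big( \frac{h(R_{2}-R_{1})}{h-k} \Big)^{p} A \,, \qquad\text{and thus}\qquad A^{\frac{1}{p}} B^{\frac{p-1}{p}} \le \Big[ 1 + \Big( \frac{h(R_{2}-R_{1})}{h-k} \Big)^{p} \Big] A \,. $$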
We multiply against $\zeta(x)^{p}\eta(t)^{p}{{\ifthenelse{\equal{*}{*}}{({u}(x,t)-h)_{+}}{\left({u}(x,t)-h\right)_{+}}}}^{\theta}$ and apply the well known formula of integration by parts $$\begin{gathered} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x)\rvert}^{p-2} {D}_{y}{u}(x) f(x) {\omega}(x,y) \\= - \frac{1}{2} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x)\rvert}^{p-2} {D}_{y}{u}(x) {D}_{y}f(x) {\omega}(x,y) \,, \end{gathered}$$ where $f:V\to {{\boldsymbol}{R}}$ has finite support. Below we denote $B_{R}(x_{0})=B_{R}$ for simplicity of notation. We obtain $$\begin{gathered} \label{eq:cacc_i} J_{1}+J_{2} := \frac{1}{\theta+1} \sum_{x\in B_{R_{2}}} {{\ifthenelse{\equal{*}{*}}{({u}(x,t)-h)_{+}}{\left({u}(x,t)-h\right)_{+}}}}^{\theta+1} \zeta(x)^{p} \eta(t)^{p} {d_{{\omega}}}(x) \\ + \frac{1}{2} \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2} {D}_{y}{u}(x,\tau) {D}_{y}[ {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta} \zeta(x)^{p} ] {\omega}(x,y) \eta(\tau)^{p} {\,\textup{\textmd{d}}}\tau \\ = \frac{p}{\theta+1} \int_{0}^{t} \sum_{x\in B_{R_{2}}} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta+1} \zeta(x)^{p} \eta(\tau)^{p-1} \eta'(\tau) {d_{{\omega}}}(x) {\,\textup{\textmd{d}}}\tau =:J_{3} \,. \end{gathered}$$ We split $J_{2}$ according to the equality $${D}_{y}[ {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta} \zeta(x)^{p} ] = \zeta(y)^{p} {D}_{y} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta} + {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta} {D}_{y} \zeta(x)^{p} \,.$$ Next we appeal to Lemma \[l:monot\] with ${v}=0$ to get $$\label{eq:cacc_iii} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2} {D}_{y}{u}(x,\tau) {D}_{y}[ {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta} ] \ge \gamma_{0} {\left| {D}_{y} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\frac{p+\theta-1}{p}} \right|}^{p} \,.$$ Thus from we infer the bound $$\label{eq:cacc_iv} J_{1} + J_{21} + J_{22} \le J_{3} + J_{23} \,,$$ where $$\begin{aligned} J_{21} &= \gamma_{0} \int_{0}^{t} \sum_{x,y\in V} {\left| {D}_{y} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\frac{p+\theta-1}{p}} \right|}^{p} \zeta(y)^{p} {\omega}(x,y) \eta(\tau)^{p} {\,\textup{\textmd{d}}}\tau \,, \\ J_{22} &= \frac{1}{4} \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2} {D}_{y}{u}(x,\tau) {D}_{y}[ {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta} ] \zeta(y)^{p} {\omega}(x,y) \eta(\tau)^{p} {\,\textup{\textmd{d}}}\tau \,, \\ J_{23} &= \frac{1}{2} \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1} {\lvert{D}_{y}\zeta(x)^{p}\rvert} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta} \eta(\tau)^{p} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \,. \end{aligned}$$ The reason to preserve the fraction $J_{22}$ of $J_{2}$ (rather than treating it as in $J_{21}$) will become apparent presently. 
Let us introduce the functions $$\begin{gathered} H(x,y;r) = \max[ {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-r)_{+}}{\left({u}(x,\tau)-r\right)_{+}}}} , {{\ifthenelse{\equal{*}{*}}{({u}(y,\tau)-r)_{+}}{\left({u}(y,\tau)-r\right)_{+}}}} ] \,, \\ \chi_{x,y} = 1 \,, \quad \text{if $H(x,y;h)>0$;} \qquad \chi_{x,y} = 0 \,, \quad \text{if $H(x,y;h)=0$.} \end{gathered}$$ Note that $r>0$ is arbitrary in the definition of $H$ but we fix $r=h$ in the definition of $\chi_{x,y}$. Next we select $0<k<h$; by elementary calculations and Young’s inequality we get $$\begin{split} J_{23} &\le \frac{p}{2} \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y} {u}(x,\tau)\rvert}^{p-1} {\lvert{D}_{y} \zeta(x)\rvert} (\zeta(x)+\zeta(y))^{p-1} H(x,y;k)^{\theta} \chi_{x,y} {\omega}(x,y) \eta(\tau)^{p} {\,\textup{\textmd{d}}}\tau \\ &\le {\varepsilon}\int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p} (\zeta(x)^{p}+\zeta(y)^{p}) H(x,y;k)^{\theta-1} \chi_{x,y} {\omega}(x,y) \eta(\tau)^{p} {\,\textup{\textmd{d}}}\tau \\ &\quad+ \gamma {\varepsilon}^{1-p} \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}\zeta(x)\rvert}^{p} H(x,y;k)^{p+\theta-1} {\omega}(x,y) \eta(\tau)^{p} {\,\textup{\textmd{d}}}\tau =: J_{231} + J_{232} \,. \end{split}$$ We want to absorb partially the term $J_{231}$ into $J_{22}$, for a suitable choice of ${\varepsilon}$. To this end we observe that by a change of variables we have $$\begin{split} J_{22} &= \frac{1}{4} \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{x}{u}(y,\tau)\rvert}^{p-1} {\lvert{D}_{x}{{\ifthenelse{\equal{*}{*}}{({u}(y,\tau)-h)_{+}}{\left({u}(y,\tau)-h\right)_{+}}}}^{\theta}\rvert} \zeta(x)^{p} {\omega}(y,x) \eta(\tau)^{p} {\,\textup{\textmd{d}}}\tau \\ &= \frac{1}{4} \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1} {\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}\rvert} \zeta(x)^{p} {\omega}(x,y) \eta(\tau)^{p} {\,\textup{\textmd{d}}}\tau \\ &= \frac{1}{8} \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1} {\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}\rvert} (\zeta(x)^{p}+\zeta(y)^{p}) {\omega}(x,y) \eta(\tau)^{p} {\,\textup{\textmd{d}}}\tau \,. \end{split}$$ Then by elementary calculus $$\begin{gathered} \label{eq:cacc_j} \chi_{x,y} {\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}\rvert} \ge \chi_{x,y} \theta {\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}\rvert} H(x,y;h)^{\theta-1} \\ \ge \chi_{x,y} \theta {\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}\rvert} H(x,y;k)^{\theta-1} \,. \end{gathered}$$ Next we discriminate three cases in , aggregating equivalent symmetric cases: i) ${u}(x,\tau)>h$, ${u}(y,\tau)>h$. In this case clearly $${\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}\rvert} = {\lvert{D}_{y}{u}(x,\tau)\rvert} \,.$$ ii) ${u}(x,\tau)>2h$, $h\ge {u}(y,\tau)$. Then $${\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}\rvert} \ge \frac{{u}(x,\tau)}{2} \ge \frac{1}{2} {\lvert{D}_{y}{u}(x,\tau)\rvert} \,.$$ iii) $2h \ge {u}(x,\tau)>h\ge {u}(y,\tau)$. In this case $J_{22}$ does not offer any help. We rather bound directly this part of $J_{231}$ as shown below. 
Collecting the estimates above we see that, provided ${\varepsilon}\le 1/16$, $$\begin{gathered} J_{231} \le J_{22} + {\varepsilon}2^{p+2} h^{p} (h-k)^{\theta-1} \int_{\tau_{2}}^{t} {\mu_{{\omega}}}(B_{R_{2}}\cap\{2h\ge {u}(x,\tau)>h\}) {\,\textup{\textmd{d}}}\tau \,. \end{gathered}$$ Hence we have transformed into $$\label{eq:cacc_jj} J_{1}+J_{21} \le J_{3} + \gamma {\varepsilon}B + \gamma {\varepsilon}^{1-p} A \,,$$ where $A$ and $B$ are as in the statement. Finally we check whether the root ${\varepsilon}$ of ${\varepsilon}B={\varepsilon}^{1-p}A$ is less than $1/16$; on distinguishing the cases ${\varepsilon}\le 1/16$, ${\varepsilon}>1/16$ we get the inequality in . \[l:cacc2\] Let ${u}\in L^{\infty}(0,T;\ell^{q}(V))$, for a given $q> 1$, be a solution of in $V\times(0,T)$. Then for all $T>\tau_{1}>\tau_{2}>0$, $h\ge0$, we have for all $0<t<T$ $$\begin{gathered} \label{eq:cacc2_n} \sup_{\tau_{1}<\tau<t} \sum_{x\in V} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q} {d_{{\omega}}}(x) + \int_{\tau_{1}}^{t} \sum_{x,y\in V} {\left|{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\frac{p+q-2}{p}}\right|}^{p} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \\ \le \frac{\gamma}{\tau_{1}-\tau_{2}} \int_{\tau_{2}}^{t} \sum_{x\in V} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q} {d_{{\omega}}}(x) {\,\textup{\textmd{d}}}\tau \,. \end{gathered}$$ We have also, if condition is satisfied, $$\begin{gathered} \label{eq:cacc2_nn} \sup_{0<\tau<t} \sum_{x\in V} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q} {d_{{\omega}}}(x) + \int_{0}^{t} \sum_{x,y\in V} {\left|{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\frac{p+q-2}{p}}\right|}^{p} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \\ \le \gamma \int_{0}^{t} \sum_{x\in V} {{\ifthenelse{\equal{*}{*}}{({u}_{0}(x)-h)_{+}}{\left({u}_{0}(x)-h\right)_{+}}}}^{q} {d_{{\omega}}}(x) {\,\textup{\textmd{d}}}\tau \,. \end{gathered}$$ Let us prove ; the inequality is proved similarly. We multiply against $\zeta(x)\eta(t){{\ifthenelse{\equal{*}{*}}{({u}(x,t)-h)_{+}}{\left({u}(x,t)-h\right)_{+}}}}^{q-1}$; on integrating by parts as in the proof of Lemma \[l:cacc\] we obtain $$\label{eq:cacc2_i} \begin{split} &\frac{1}{q} \sum_{x\in V} \zeta(x) {{\ifthenelse{\equal{*}{*}}{({u}(x,t)-h)_{+}}{\left({u}(x,t)-h\right)_{+}}}}^{q} {d_{{\omega}}}(x) \eta(t) \\ &\quad+ \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2} {D}_{y}{u}(x,\tau) \zeta(y) {D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q-1} {\omega}(x,y) \eta(\tau) {\,\textup{\textmd{d}}}\tau \\ &\quad+ \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2} {D}_{y}{u}(x,\tau) {D}_{y}\zeta(x) {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q-1} {\omega}(x,y) \eta(\tau) {\,\textup{\textmd{d}}}\tau \\ &\qquad= \frac{1}{q} \int_{0}^{t} \sum_{x\in V} \zeta(x) {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q} {d_{{\omega}}}(x) \eta'(\tau) {\,\textup{\textmd{d}}}\tau \,. \end{split}$$ We estimate next the second integral in . 
The absolute value of the integrand is bounded from above by $$\begin{gathered} \frac{1}{R_{2}-R_{1}} \sum_{x,y\in B_{R_{2}+1}} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q-1} {\omega}(x,y) \\ \le \frac{1}{R_{2}-R_{1}} \sum_{x,y\in B_{R_{2}+1}} \big( {u}(x,\tau)^{p+q-2} + {u}(y,\tau)^{p-1} {u}(x,\tau)^{q-1} \big) {\omega}(x,y) \le \frac{C_{u}}{R_{2}-R_{1}} \,, \end{gathered}$$ where $C_{u}$ is independent of $R_{i}$. Owing to $p+q-2>q$ and to Remark \[r:lp\_scale\], to this end it is only left to observe that $$\begin{gathered} \sum_{x,y\in B_{R_{2}+1}} {u}(y,\tau)^{p-1} {u}(x,\tau)^{q-1} {\omega}(x,y) \\ \le \Big( \sum_{y\in V} {u}(y,\tau)^{(p-1)q} {d_{{\omega}}}(y) \Big)^{\frac{1}{q}} \Big( \sum_{x\in V} {u}(x,\tau)^{q} {d_{{\omega}}}(x) \Big)^{\frac{q-1}{q}} \,, \end{gathered}$$ and to use once more Remark \[r:lp\_scale\], since $(p-1)q>q$. The sought after estimates follows immediately upon applying Lemma \[l:monot\] with ${v}=0$ and then letting first $R_{2}\to\infty$ and then $R_{1}\to\infty$. \[r:prelim\_diff\] Lemma \[l:cacc2\] is still in force if ${u}$ is the difference of two solutions to . The proof is the same, when we start from the difference of the two equations and recall Lemma \[l:monot\]. Existence and comparison {#s:prelim_ex} ------------------------ \[p:ex\] Let ${u}_{0}\in\ell^{q}(V)$, $q>1$. Then – has a solution in $L^{\infty}(0,+\infty;\ell^{q}(V))$. If ${u}_{0}\ge 0$ then ${u}\ge 0$. Let ${u}_{0}\in\ell^{q}(V)$, $q>1$. Define for $n\ge 1$ ${u}_{n}$ as the solution to $$\begin{aligned} {2} \label{eq:exist_pde_n} {\frac{\partial {u}_{n}}{\partial t}}(x,t) &= \operatorname{\Delta}_{p}{u}_{n}(x,t) \,, &\qquad& x\in B_{n} \,, t>0 \,, \\ \label{eq:exist_init_n} {u}_{n}(x,0) &= {u}_{0}(x) \,, &\qquad& x\in B_{n} \,, \\ \label{eq:exist_dir_n} {u}_{n}(x,t) &= 0 \,, &\qquad& x\not\in B_{n} \,, t\ge 0 \,. \end{aligned}$$ In practice this is a finite system of ordinary differential equations, uniquely solvable in the class $C^{1}(0,T)$ at least as long as the solution stays bounded over $(0,T)$. In this connection, we rewrite , as $${u}_{n}(x,t)^{q-1} {\frac{\partial {u}_{n}}{\partial t}}(x,t) = {u}_{n}(x,t)^{q-1} \operatorname{\Delta}_{p}{u}_{n}(x,t) \,, \qquad x\in V \,, t>0 \,,$$ where we stress that the equality holds for all $x\in V$. In this Subsection we denote $s^{q-1}={\lverts\rvert}^{q-1}\textup{sign}(s)$ for all $s\in {{\boldsymbol}{R}}$. Thus, summing over $x\in V$ and integrating by parts both in $t$ and in $x$ (in the suitable sense) we see that the elliptic part of the equation yields a nonnegative contribution, so that $$\label{eq:exist_energy_n} \sum_{x\in V} {\lvert{u}_{n}(x,t)\rvert}^{q} {d_{{\omega}}}(x) \le \sum_{x\in B_{n}} {\lvert{u}_{0}(x)\rvert}^{q} {d_{{\omega}}}(x) \le {{\lVert{u}_{0}\rVert}_{\ell^{q}(V)}}^{q} \,.$$ In turn, as explained in Remark \[r:lp\_scale\], this implies stable sup bounds for ${u}_{n}$ which, together with the discrete character of the $p$-laplacian and with the equation , also give stable sup bounds for the time derivative $\partial {u}_{n}/\partial t$, for each fixed $x$. 
However $V$ is countable, so that this is enough to enable us to extract a subsequence, still denoted by ${u}_{n}$ such that $$\label{eq:exist_unif_conv} {u}_{n}(x,t) \to {u}(x,t) \,, \quad {\frac{\partial {u}_{n}}{\partial t}}(x,t) \to {\frac{\partial {u}}{\partial t}}(x,t)$$ for each $x\in V$, uniformly for $t\in [0,T]$, where we have made use of the equation again to obtain convergence for the time derivative. Finally owing to we have $$\label{eq:exist_energy_nj} \sum_{x\in V} {\lvert{u}(x,t)\rvert}^{q} {d_{{\omega}}}(x) \le {{\lVert{u}_{0}\rVert}_{\ell^{q}(V)}}^{q} \,, \qquad t>0 \,.$$ It is easily seen that ${u}\in L^{\infty}(0,+\infty;\ell^{q}(V))$ is a solution to –. If ${u}_{0}\ge 0$, we appeal to our next result to prove that ${u}\ge 0$. \[p:compare\] If ${u}_{1}$, ${u}_{2}\in L^{\infty}(0,T; \ell^{q}(V))$ solve – with ${u}_{01}$, ${u}_{02}\in\ell^{q}(V)$, ${u}_{01}\ge {u}_{02}$, then ${u}_{1}\ge{u}_{2}$. According to Remark \[r:lp\_scale\] and to Definition \[d:sol\], we may assume $q>1$. Define ${w}={u}_{2}-{u}_{1}$. Then ${w}$ does not solve , but we may still apply (with $h=0$) to it, see Remark \[r:prelim\_diff\]. This proves ${{\ifthenelse{\equal{*}{*}}{({w})_{+}}{\left({w}\right)_{+}}}}=0$ and thus the statement. Elementary inequalities {#s:prelim_elem} ----------------------- We record for future use two immediate consequences of , : $$\begin{aligned} 2 \label{eq:fkf_above} {\Lambda_p}(sa)^{-1} &\le s^{\omega} {\Lambda_p}(a)^{-1} \,, &\qquad& s\ge 1 \,, a>0 \,; \\ \label{eq:fkf_below} {\Lambda_p}(\sigma a)^{-1} &\le \sigma^{\frac{p}{{N}}} {\Lambda_p}(a)^{-1} \,, &\qquad& 0<\sigma\le 1 \,, a>0 \,.\end{aligned}$$ Also the following Lemma relies on and will be used in a context where it is important that $\nu<1/(p-1)$. \[l:dcf\] Let $\nu={N}(p-2)/[({N}(p-2)+p)(p-1)]$ and $b>0$. Then the function $$\tau\mapsto \tau^{\nu}{\psi}_{1}^{(-1)}(\tau^{-1}b)^{\frac{p-2}{p-1}} \,, \qquad \tau>0 \,,$$ is nondecreasing. Equivalently we show that $$r\mapsto r^{-\alpha}{\psi}_{1}^{(-1)}(r)^{p-2}$$ is nonincreasing for $\alpha=\nu(p-1)$. Set $s={\psi}_{1}^{(-1)}(r)$, so that by definition of ${\psi}_{1}$ $$r^{-\alpha}{\psi}_{1}^{(-1)}(r)^{p-2} = s^{(1-\alpha)(p-2)}{\Lambda_p}(s^{-1})^{-\alpha} = [s^{-\frac{p}{{N}}}{\Lambda_p}(s^{-1})]^{-\alpha} \,.$$ By assumption the latter quantity is indeed nonincreasing in $s$ which however is a nondecreasing function of $r$. \[l:bbl\] Under assumption we have that if $R$, $s>0$, $c\ge 1$ and $$\label{eq:bbl_m} R^{p} = cs {\psi}_{1}^{(-1)}(s^{-1})^{p-2} \,,$$ then $$\label{eq:bbl_mm} {\mu_{{\omega}}}(B_{{\lfloorR\rfloor}}) \le \gamma(c) \psi_{1}^{(-1)} (s^{-1})^{-1} \,.$$ Let $\tau>0$ be such that $s^{-1}={\psi}_{1}(\tau)=\tau^{p-2}{\Lambda_p}(\tau^{-1})$. Then $$c^{-1}R^{p} = {\Lambda_p}(\tau^{-1})^{-1} \,.$$ On the other hand, on setting $v={\mu_{{\omega}}}(B_{{\lfloorR\rfloor}})$ and invoking we get $$c^{-1}R^{p} \ge c^{-1}{\mathcal{R}}(v)^{p} \ge c^{-1} \gamma_{0} {\Lambda_p}(v)^{-1} \ge {\Lambda_p}((\gamma_{0}c^{-1})^{\frac{{N}}{p}}v)^{-1} \,,$$ where we also used . Since ${\Lambda_p}$ is nonincreasing, the result follows. Proofs of Proposition \[p:linf\_meas\] and Corollary \[co:linf\_int\] {#s:linf} ===================================================================== By assumption, and by Remark \[r:lp\_scale\], ${u}\in L^{\infty}(0,T;\ell^{q}(V))$ for some $q> 1$; then for all $k>0$ the cut function ${{\ifthenelse{\equal{*}{*}}{({u}(t)-k)_{+}}{\left({u}(t)-k\right)_{+}}}}$ is finitely supported. 
For given $0<\sigma_{1}<\sigma_{2}<1/2$, $k>0$, $0<t<T$ define the decreasing sequences $$\begin{gathered} k_{i} = k[ 1-\sigma_{2} + 2^{-i} (\sigma_{2}-\sigma_{1}) ] \,, \qquad i=0\,,1\,,2\,,\dots \\ t_{i} = \frac{t}{2} [ 1-\sigma_{2} + 2^{-i} (\sigma_{2}-\sigma_{1}) ] \,, \qquad i=0\,,1\,,2\,,\dots \end{gathered}$$ and let $f_{i}(x,\tau)={{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-k_{i})_{+}}{\left({u}(x,\tau)-k_{i}\right)_{+}}}}^{\nu}$, where $\nu=(p+q-2)/p$. Let also $$\begin{aligned} {m}_{i}(\tau) &= {\mu_{{\omega}}}( \{ x\in V \mid {u}(x,\tau)>k_{i} \} ) \,, \quad {M}_{i} = \sup_{t_{i}<\tau<t} {m}_{i}(\tau) \,, \\ {{D}}_{i}(\tau) &= \sum_{x,y\in V} {\lvert{D}_{y}f_{i}(x,\tau)\rvert}^{p} {\omega}(x,y) \,. \end{aligned}$$ Since $b:=q/\nu<p$, it follows from Faber-Krahn inequality and Hölder’s and Young’s inequalities that $$\begin{gathered} \label{eq:fkb} \sum_{x\in V} f_{i+1}(x,\tau)^{b} {d_{{\omega}}}(x) \le {m}_{i+1}(\tau)^{1-\frac{b}{p}} {\Lambda_p}({m}_{i+1}(\tau))^{-\frac{b}{p}} {{D}}_{i+1}(\tau)^{\frac{b}{p}} \\ \le {\varepsilon}^{\frac{p}{b}} {{D}}_{i+1}(\tau) + {\varepsilon}^{-\frac{p}{p-b}} {\Lambda_p}({m}_{i+1}(\tau))^{-\frac{b}{p-b}} {m}_{i+1}(\tau) \,. \end{gathered}$$ Here ${\varepsilon}>0$ is arbitrary and will be selected below. We integrate over $(t_{i+1},t)$ to find $$\label{eq:linf_i} \begin{split} &\int_{t_{i+1}}^{t} \sum_{x\in V} f_{i+1}(x,\tau)^{b} {d_{{\omega}}}(x) {\,\textup{\textmd{d}}}\tau \le {\varepsilon}^{\frac{p}{b}} \int_{t_{i+1}}^{t} {{D}}_{i+1}(\tau) {\,\textup{\textmd{d}}}\tau \\ &\qquad+ {\varepsilon}^{-\frac{p}{p-b}} \int_{t_{i+1}}^{t} {\Lambda_p}({m}_{i+1}(\tau))^{-\frac{b}{p-b}} {m}_{i+1}(\tau) {\,\textup{\textmd{d}}}\tau \\ &\quad\le {\varepsilon}^{\frac{p}{b}} \int_{t_{i+1}}^{t} {{D}}_{i+1}(\tau) {\,\textup{\textmd{d}}}\tau + {\varepsilon}^{-\frac{p}{p-b}} t {\Lambda_p}({M}_{i+1})^{-\frac{b}{p-b}} {M}_{i+1} \,. \end{split}$$ Next we invoke Lemma \[l:cacc2\] with $\tau_{1}=t_{i}$, $\tau_{2}=t_{i+1}$, $h=k_{i}$, to infer $$\label{eq:linf_ii} \begin{split} & L_{i}:= \sup_{t_{i}<\tau<t} \sum_{x\in V} f_{i}(x,\tau)^{b} {d_{{\omega}}}(x) + \int_{t_{i}}^{t} {{D}}_{i}(\tau) {\,\textup{\textmd{d}}}\tau \\ &\quad \le \frac{\gamma 2^{i}}{t(\sigma_{2}-\sigma_{1})} \int_{t_{i+1}}^{t} \sum_{x\in V} f_{i+1}(x,\tau)^{b} {d_{{\omega}}}(x) {\,\textup{\textmd{d}}}\tau \\ &\quad\le \frac{\gamma 2^{i}}{t(\sigma_{2}-\sigma_{1})} {\varepsilon}^{\frac{p}{b}} \int_{t_{i+1}}^{t} {{D}}_{i+1}(\tau) {\,\textup{\textmd{d}}}\tau \\ &\qquad + \frac{\gamma 2^{i}}{\sigma_{2}-\sigma_{1}} {\varepsilon}^{-\frac{p}{p-b}} {\Lambda_p}({M}_{i+1})^{-\frac{b}{p-b}} {M}_{i+1} \,, \end{split}$$ where the second inequality follows of course from . 
For a $\delta>0$ to be chosen, select ($\gamma$ denotes here the constant in ) $$\frac{\gamma 2^{i}}{t(\sigma_{2}-\sigma_{1})} {\varepsilon}^{\frac{p}{b}} = \delta \qquad \text{i.e.,} \qquad {\varepsilon}= \gamma_{0} \delta^{\frac{b}{p}} t^{\frac{b}{p}} (\sigma_{2}-\sigma_{1})^{\frac{b}{p}} 2^{-\frac{b}{p}i} \,.$$ On substituting this choice of ${\varepsilon}$ in we arrive at an estimate which can be successfully iterated, that is $$\label{eq:linf_iii} L_{i} \le \delta L_{i+1} + \frac{\gamma 2^{\frac{pi}{p-b}}}{(\sigma_{2}-\sigma_{1})^{\frac{p}{p-b}}} \delta^{-\frac{b}{p-b}} t^{-\frac{b}{p-b}} {\Lambda_p}({M}_{\infty})^{-\frac{b}{p-b}} {M}_{\infty} \,.$$ Here we set $$\begin{gathered} \label{eq:linf_ij} t_{\infty} = \lim_{i\to\infty} t_{i} = \frac{t}{2}(1-\sigma_{2}) \,, \qquad k_{\infty} = \lim_{i\to\infty} k_{i} = k(1-\sigma_{2}) \,, \\ \label{eq:linf_ijj} M_{\infty} = \sup_{t_{\infty}<\tau<t} {\mu_{{\omega}}}( \{ x\in V \mid {u}(x,\tau)>k_{\infty} \} ) \,. \end{gathered}$$ On iterating we infer $$L_{0} \le \delta^{j} L_{j} + \Big( \sum_{i=0}^{j} \delta^{i} 2^{\frac{pi}{p-b}} \Big) \frac{\gamma}{(\sigma_{2}-\sigma_{1})^{\frac{p}{p-b}}} t^{-\frac{b}{p-b}} {\Lambda_p}({M}_{\infty})^{-\frac{b}{p-b}} {M}_{\infty} \,,$$ which yields as $j\to\infty$, provided we select $\delta<2^{-p/(p-b)}$, $$\label{eq:linf_j} \begin{split} & \sup_{t(1-\sigma_{1})/2<\tau<t} \sum_{x\in V} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-k(1-\sigma_{1}))_{+}}{\left({u}(x,\tau)-k(1-\sigma_{1})\right)_{+}}}}^{q} {d_{{\omega}}}(x) \le L_{0} \\ &\quad \le \frac{\gamma}{(\sigma_{2}-\sigma_{1})^{\frac{q}{p-2}}} t^{-\frac{q}{p-2}} {\Lambda_p}({M}_{\infty})^{-\frac{q}{p-2}} {M}_{\infty} \,, \end{split}$$ for ${M}_{\infty}$ as in , owing also to $b/(p-b)=q/(p-2)$. The proof will be concluded by a second process of iteration, built on . Let $1/2>\sigma>0$ and $k>0$, and define the increasing sequences $$\begin{gathered} \tau_{n} = \frac{t}{2}(1-\sigma 2^{-n}) \,, \qquad h_{n} = k(1-\sigma 2^{-n}) \,, \\ \bar h_{n} = \frac{h_{n}+h_{n+1}}{2} = k(1-3\sigma 2^{-n-2}) \,, \qquad n\ge 0 \,, \end{gathered}$$ as well as the decreasing one $$Y_{n} = \sup_{\tau_{n}<\tau<t} {\mu_{{\omega}}}(\{x\in V\mid {u}(x,\tau)> h_{n}\}) \,.$$ Next we apply Chebychev’s inequality to find $$\label{eq:linf_jj} Y_{n+1} \le 2^{(n+2)q} \sigma^{-q} k^{-q} \sup_{\tau_{n+1}<\tau<t} \sum_{x\in V} {{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-\bar h_{n})_{+}}{\left({u}(x,\tau)-\bar h_{n}\right)_{+}}}} {d_{{\omega}}}(x) \,.$$ The right hand side of is then majorized by appealing to , where we select $$\sigma_{1} = 3\sigma 2^{-n-2} \,, \quad \sigma_{2} = \sigma 2^{-n} \,,$$ in order to obtain $$\label{eq:linf_jjj} Y_{n+1} \le \gamma \sigma^{-\frac{q(p-1)}{p-2}} 2^{\frac{n(p-2+q)}{p-2}} t^{-\frac{q}{p-2}} k^{-q} {\Lambda_p}(Y_{n})^{-\frac{q}{p-2}} Y_{n} \,.$$ In turn, on invoking our assumption , we transform into $$\label{eq:linf_k} Y_{n+1} \le \gamma \sigma^{-\frac{q(p-1)}{p-2}} 2^{\frac{n(p-2+q)}{p-2}} t^{-\frac{q}{p-2}} k^{-q} {\Lambda_p}(Y_{0})^{-\frac{q}{p-2}} Y_{0}^{-\frac{p}{{N}}\,\frac{q}{p-2}} Y_{n}^{1+\frac{p}{{N}}\,\frac{q}{p-2}} \,.$$ This inequality yields $Y_{n}\to0$ as $n\to\infty$ provided we choose $k$ so that (see [@LSU Lemma 5.6 Ch. II]) $$\label{eq:linf_kk} k^{-1} t^{-\frac{1}{p-2}} {\Lambda_p}(Y_{0})^{-\frac{1}{p-2}} \le \gamma_{0}(q,p,{N}) \,.$$ In this connection we may assume e.g., $\sigma=1/4$. 
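For the reader’s convenience we recall, in a standard formulation, the fast geometric convergence lemma invoked here (see [@LSU Lemma 5.6 Ch. II]): if the numbers $Y_{n}\ge 0$ satisfy $$ Y_{n+1} \le C b^{n} Y_{n}^{1+{\varepsilon}} \,, \qquad n\ge 0 \,, $$ for some $C>0$, ${\varepsilon}>0$, $b>1$, and if $Y_{0}\le C^{-\frac{1}{{\varepsilon}}} b^{-\frac{1}{{\varepsilon}^{2}}}$, then $Y_{n}\to 0$ as $n\to\infty$. In our case ${\varepsilon}=\frac{p}{{N}}\,\frac{q}{p-2}$ and $b=2^{\frac{p-2+q}{p-2}}$, and the smallness condition on $Y_{0}$ is guaranteed by the above choice of $k$ (with $\sigma=1/4$).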
The proof is concluded when we remark that $Y_{n}\to0$ immediately implies $${u}(x,t) \le k \,, \qquad x\in V \,.$$ \[r:cacc\] We note that the proof of Proposition \[p:linf\_meas\] makes use of the differential equation only thru inequality . This fact will be used below. We remark on using Chebychev’s inequality once more that in $$Y_{0} \le 2^{r}k^{-r} \sup_{\frac{t}{4}<\tau<t} \sum_{x\in V} {u}(x,\tau)^{r} {d_{{\omega}}}(x) \,.$$ Let us set $$E_{r} = \sup_{0<\tau<t} \sum_{x\in V} {u}(x,\tau)^{r} {d_{{\omega}}}(x) \,.$$ Then is certainly fulfilled if $$\label{eq:linf_int_i} k^{-1} t^{-\frac{1}{p-2}} {\Lambda_p}( k^{-r} E_{r} )^{-\frac{1}{p-2}} = \gamma_{0} \,,$$ where we also used . On the other hand, if we set $${\psi}_{r}(s) = s^{\frac{p-2}{r}} {\Lambda_p}(s^{-1}) \,, \qquad s>0 \,,$$ then amounts to $$\label{eq:linf_int_ii} k = E_{r}^{\frac{1}{r}} \big[{\psi}_{r}^{(-1)} \big(\gamma t^{-1}E_{r}^{-\frac{p-2}{r}}\big) \big]^{\frac{1}{r}} \le \gamma E_{r}^{\frac{1}{r}} \big[{\psi}_{r}^{(-1)} \big(t^{-1}E_{r}^{-\frac{p-2}{r}}\big) \big]^{\frac{1}{r}} \,,$$ where we have made use of the definition of ${\psi}_{r}$ and of . Proof of Theorem \[t:l1\] {#s:l1} ========================= Let ${u}_{0}\in\ell^{1}(V)$, ${u}_{0}\ge 0$. Then we have also ${u}_{0}\in\ell^{2}(V)$ as noted in the Introduction, and we may consider the solution ${u}\ge 0$ constructed according to Subsection \[s:prelim\_ex\]. First we bound the $\ell^{1}(V)$ norm of the solution from above. We multiply the equation against $\varTheta({u}(x,\tau))\zeta(x)$ where $\zeta$ is as in Section \[s:prelim\], $$\varTheta({u}) = \frac{ {{\ifthenelse{\equal{*}{*}}{({u}-h)_{+}}{\left({u}-h\right)_{+}}}} }{ {u}+{\varepsilon}} \,,$$ for any given $h>0$, ${\varepsilon}>0$, and integrate by parts. The purpose of the cut at level $h$ is to ease technically the argument. Since $\varTheta$ is a nondecreasing function, reasoning as in the proof of Lemma \[l:cacc2\] we easily obtain $$\begin{gathered} \sum_{x\in V} \int_{0}^{{u}(x,t)} \frac{ {{\ifthenelse{\equal{*}{*}}{(s-h)_{+}}{\left(s-h\right)_{+}}}} }{ s +{\varepsilon}} \zeta(x) {d_{{\omega}}}(x) \le \sum_{x\in V} \int_{0}^{{u}_{0}(x)} \frac{ {{\ifthenelse{\equal{*}{*}}{(s-h)_{+}}{\left(s-h\right)_{+}}}} }{ s +{\varepsilon}} {d_{{\omega}}}(x) + K_{1} \\ \le \sum_{x\in V} {u}_{0}(x) {d_{{\omega}}}(x) + K_{1} \,,\end{gathered}$$ where $$\begin{gathered} K_{1} = \frac{1}{R_{2}-R_{1}} \int_{0}^{t} \sum_{x\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1} \chi_{\{{u}(\tau)>h\}}(x) {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \\ \le \frac{h^{-1}}{R_{2}-R_{1}} \int_{0}^{t} \sum_{x\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1} {u}(x,\tau) {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \,.\end{gathered}$$ Then we may proceed as in the proof of Lemma \[l:cacc2\] with $q=2$ and let $R_{2}\to \infty$ and then $R_{1}\to \infty$ to make $K_{1}$ vanish. Finally we let first ${\varepsilon}\to 0$ and then $h\to 0$: on invoking the monotone convergence theorem we get $$\label{eq:l1_above} {{\lVert{u}(t)\rVert}_{\ell^{1}(V)}} \le {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} \,.$$ Therefore from Corollary \[co:linf\_int\] and Remark \[r:dcf\] we infer that is satisfied. In order to prove we proceed as follows. 
We multiply the equation against $\zeta(x)$ as above and integrate by parts obtaining $$\label{eq:l1_j} \sum_{x\in V} {u}(x,t) \zeta(x) {d_{{\omega}}}(x) + K_{2} = \sum_{x\in V} {u}_{0}(x) \zeta(x) {d_{{\omega}}}(x) \,,$$ where $$\label{eq:l1_entropy} \begin{split} {\lvertK_{2}\rvert} &= {\left| \int_{0}^{t} \sum_{x\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2} {D}_{y}{u}(x,\tau) {D}_{y}\zeta(x) {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \right|} \\ &\le \frac{1}{R_{2}-R_{1}} \int_{0}^{t} \sum_{x\in B_{R_{2}}+1} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \\ &\le \frac{2t}{(R_{2}-R_{1}){\Lambda_p}^{(-1)}(2)^{p-2}} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{p-1} \,. \end{split}$$ Here we reasoned as in (with $q=1$), exploiting $p>2$ and the already proved bound . Then we rewrite as $${{\lVert{u}(t)\rVert}_{\ell^{1}(V)}} + K_{2} \ge \sum_{x\in B_{R_{1}}} {u}_{0}(x) {d_{{\omega}}}(x) \,,$$ and let first $R_{2}\to \infty$ then $R_{1}\to\infty$ to obtain the converse to . Finally we prove the entropy estimate . First we invoke Hölder’s inequality to bound $$\label{eq:l1_k} \begin{split} &I:= \int_{0}^{t} \sum_{x,y\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \\ &\quad\le \Big( \int_{0}^{t} \sum_{x,y\in V} \tau^{-\delta(p-1)} ({u}(x,\tau)+{u}(y,\tau))^{(2-\theta)(p-1)} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \Big)^{\frac{1}{p}} \\ &\qquad\times \Big( \int_{0}^{t} \sum_{x,y\in V} \tau^{\delta} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p} ({u}(x,\tau)+{u}(y,\tau))^{\theta-2} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \Big)^{\frac{p-1}{p}} \\ &\quad=: K_{3}^{\frac{1}{p}} K_{4}^{\frac{p-1}{p}} \,. \end{split}$$ Here $\delta>0$ is to be chosen and we select $$\theta = \frac{2p-3}{p-1} \in(1,2) \,, \quad \text{so that} \quad (2-\theta)(p-1) = 1 \,.$$ Thus $$\label{eq:l1_kk} K_{3} \le 2 \int_{0}^{t} \tau^{-\delta(p-1)} {{\lVert{u}(\tau)\rVert}_{\ell^{1}(V)}} {\,\textup{\textmd{d}}}\tau \le \gamma {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} t^{1- \delta(p-1)} \,,$$ provided $$\label{eq:l1_kkk} \delta(p-1)<1 \,.$$ In order to bound $K_{4}$ we multiply the differential equation against $\tau^{\delta}{u}^{\theta-1}$ and integrate by parts. After dropping a positive contribution from the left hand side of the resulting equality we obtain $$\label{eq:l1_kj} \begin{split} K_{4} &\le \gamma \int_{0}^{t} \sum_{x\in V} \tau^{\delta -1} {u}(x,\tau)^{\frac{p-2}{p-1}+1} {d_{{\omega}}}(x) {\,\textup{\textmd{d}}}\tau \\ &\le \gamma {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} \int_{0}^{t} \tau^{\delta-1} {{\lVert{u}(\tau)\rVert}_{\ell^{\infty}(V)}}^{\frac{p-2}{p-1}} {\,\textup{\textmd{d}}}\tau \\ &\le \gamma {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{p-2}{p-1}+1} \int_{0}^{t} \tau^{\delta-1} {\psi}_{1}^{(-1)}(\tau^{-1}{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{2-p})^{\frac{p-2}{p-1}} {\,\textup{\textmd{d}}}\tau \,. \end{split}$$ Select now $\nu<\delta<1/(p-1)$, where $\nu$ is the constant defined in Lemma \[l:dcf\]. 
Accordingly, the last integral above is bounded by $$\begin{gathered} \label{eq:l1_kjj} \int_{0}^{t} \tau^{\delta-\nu-1} \tau^{\nu} {\psi}_{1}^{(-1)}(\tau^{-1}{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{2-p})^{\frac{p-2}{p-1}} {\,\textup{\textmd{d}}}\tau \\ \le t^{\nu} {\psi}_{1}^{(-1)}(t^{-1}{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{2-p})^{\frac{p-2}{p-1}} (\delta-\nu)^{-1} t^{\delta-\nu} \,.\end{gathered}$$ Collecting all the estimates in –, we finally arrive at $$\label{eq:l1_kkj} I \le \gamma t^{\frac{1}{p}} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{2(p-1)}{p}} {\psi}_{1}^{(-1)}(t^{-1}{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{2-p})^{\frac{p-2}{p}} \,.$$ Proof of Theorem \[p:bbl\] and Corollary \[p:bbl2\] {#s:bbl} =================================================== Let ${u}$ be as in the statement of Theorem \[p:bbl\]. For all $t>0$, $R\in{{\boldsymbol}{N}}$ we write $${{\lVert{u}(t)\rVert}_{\ell^{1}(V)}} = {{\lVert{u}(t)\rVert}_{\ell^{1}(B_{R})}} + {{\lVert{u}(t)\rVert}_{\ell^{1}(V\setminus B_{R})}} \,.$$ Here we denote for a fixed $x_{0}\in V$ $$B_{R} = B_{R}(x_{0}) \,, \quad {\lvertx\rvert} = d(x,x_{0}) \,, \quad x\in V \,.$$ For the sake of clarity let us denote by $\zeta_{R_{1},R_{2}}$ the cutoff function defined in Section \[s:prelim\]. Let $\rho>4R$, $R\ge 4$, $\rho$, $R\in {{\boldsymbol}{N}}$ and $\phi=1-\zeta_{R,2R}$. We use ${\lvertx\rvert}^{\alpha}\phi(x)\zeta_{\rho,2\rho}(x)$ as a testing function in , for a fixed $0<\alpha<1$. We obtain, assuming in addition that $R$ is so large as ${u}_{0}(x)=0$ for $x\not\in B_{R}$, $$\begin{gathered} \sum_{x\in V} {\lvertx\rvert}^{\alpha} \phi(x) \zeta_{\rho,2\rho}(x) {u}(x,t) {d_{{\omega}}}(x) \\ = - \int_{0}^{t} \sum_{x\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2} {D}_{y}{u}(x,\tau) {D}_{y}[\phi(x)\zeta_{\rho,2\rho}(x){\lvertx\rvert}^{\alpha}] {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \,. \end{gathered}$$ In last integral, the term originating from ${D}_{y} \zeta_{\rho,2\rho}$ is seen to become vanishingly small as $\rho\to\infty$, since $\alpha<1$, similarly to what we did to bound $K_{2}$ in Section \[s:l1\]. Thus in the limit $\rho\to\infty$ we get $$\begin{split} &\sum_{x\not\in B_{2R}} {\lvertx\rvert}^{\alpha} {u}(x,t) {d_{{\omega}}}(x) \\ &\quad\le - \int_{0}^{t} \sum_{x\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2} {D}_{y}{u}(x,\tau) {D}_{y}[\phi(x){\lvertx\rvert}^{\alpha}] {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \\ &\quad= \int_{0}^{t} \sum_{x\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2} {D}_{y}{u}(x,\tau) {D}_{y}\zeta_{R,2R}(x) {\lverty\rvert}^{\alpha} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \\ &\qquad- \int_{0}^{t} \sum_{x\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2} {D}_{y}{u}(x,\tau) {D}_{y}{\lvertx\rvert}^{\alpha} \phi(x) {\omega}(x,y) {\,\textup{\textmd{d}}}\tau =: Q_{1}+Q_{2} \,. 
\end{split}$$ Since if $x\sim y$, $x\not\in B_{R}$, $${\lvert{D}_{y}{\lvertx\rvert}^{\alpha}\rvert} \le \alpha \min({\lvertx\rvert},{\lverty\rvert})^{\alpha-1} \le \gamma R^{\alpha-1} \,,$$ we have $${\lvertQ_{1}\rvert} + {\lvertQ_{2}\rvert} \le \gamma R^{\alpha-1} \int_{0}^{t} \sum_{x\in V} {\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \,.$$ We bound the last integral by means of , concluding as follows: $$\begin{gathered} \label{eq:bbl_ik} \sum_{x\not\in B_{2R}} {u}(x,t) {d_{{\omega}}}(x) \le R^{-\alpha} \sum_{x\not\in B_{2R}} {\lvertx\rvert}^{\alpha} {u}(x,t) {d_{{\omega}}}(x) \\ \le \gamma R^{-1} t^{\frac{1}{p}} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{2(p-1)}{p}} {\psi}_{1}^{(-1)}(t^{-1}{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{-(p-2)})^{\frac{p-2}{p}} \le \gamma \varGamma^{-1} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} \,, \end{gathered}$$ where we have selected $$\label{eq:bbl_jk} R \ge R_{p}({u}_{0},t) := \varGamma t^{\frac{1}{p}} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{p-2}{p}} {\psi}_{1}^{(-1)}(t^{-1}{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{-(p-2)})^{\frac{p-2}{p}} \,,$$ for a $\varGamma>0$. This together with conservation of mass proves , upon an unessential redefinition of $R$. In order to prove we remark that from the argument above it follows that for $R$ as in , $$\begin{gathered} \sum_{x\in V} {\lvertx\rvert}^{\alpha} {u}(x,t) {d_{{\omega}}}(x) \le (2R)^{\alpha} \sum_{x\in B_{2R}} {u}(x,t) {d_{{\omega}}}(x) \\ + \sum_{x\not\in B_{2R}} {\lvertx\rvert}^{\alpha} {u}(x,t) {d_{{\omega}}}(x) \le \gamma R^{\alpha} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} \,, \end{gathered}$$ where we have used conservation of mass again. For a suitable choice of $\varGamma$, setting $R=2R_{p}$, $R_{p}$ as in , we have from $$\label{eq:bbl_ikk} {{\lVert{u}(t)\rVert}_{\ell^{\infty}(V)}} {\mu_{{\omega}}}(B_{R}(t)) \ge {{\lVert{u}(t)\rVert}_{\ell^{1}(B_{R}(t))}} \ge \frac{1}{2} {{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}} \,.$$ The statement in then follows, if is assumed, on invoking Lemma \[l:bbl\]. Proof of Theorem \[t:decay\] {#s:decay} ============================ We follow here ideas from [@Andreucci:1997], [@Andreucci:Cirmi:Leonardi:Tedeev:2001], [@Afanaseva:Tedeev:2004]. Let ${u}_{R}$ be the solution to with initial data $${u}_{R}(x,0) = {u}_{0}(x) \chi_{B_{R}(x_{0})}(x) \,, \qquad x\in V \,.$$ Then mass conservation and with $r=1$ imply $$\label{eq:decay_i} {{\lVert{u}_{R}(t)\rVert}_{\ell^{\infty}(V)}} \le \gamma m_{R} {\psi}_{1}^{(-1)}\big(t^{-1}m_{R}^{-(p-2)}\big) \,, \qquad t>0 \,,$$ where $$m_{R} = \sum_{x\in B_{R}(x_{0})} {u}_{0}(x) {d_{{\omega}}}(x) \,.$$ Let us also define ${w}_{R}={u}-{u}_{R}$; note that ${w}_{R}\ge 0$ by Proposition \[p:compare\]. 
In spite of the fact that ${w}_{R}$ does not solve we may still prove the following inequality for $h\ge 0$, $t>\tau_{1}>\tau_{2}>0$, also by appealing to Lemma \[l:monot\]: $$\begin{gathered} \label{eq:decay_ii} \sup_{\tau_{1}<\tau<t} \sum_{x\in V} {{\ifthenelse{\equal{*}{*}}{({w}_{R}(x,\tau)-h)_{+}}{\left({w}_{R}(x,\tau)-h\right)_{+}}}}^{q} {d_{{\omega}}}(x) + \int_{\tau_{1}}^{t} \sum_{x,y\in V} {\left|{D}_{y}{{\ifthenelse{\equal{*}{*}}{({w}_{R}(x,\tau)-h)_{+}}{\left({w}_{R}(x,\tau)-h\right)_{+}}}}^{\frac{p+q-2}{p}}\right|}^{p} {\omega}(x,y) {\,\textup{\textmd{d}}}\tau \\ \le \frac{\gamma}{\tau_{1}-\tau_{2}} \int_{\tau_{2}}^{t} \sum_{x\in V} {{\ifthenelse{\equal{*}{*}}{({w}_{R}(x,\tau)-h)_{+}}{\left({w}_{R}(x,\tau)-h\right)_{+}}}}^{q} {d_{{\omega}}}(x) {\,\textup{\textmd{d}}}\tau \,.\end{gathered}$$ As already observed in Remark \[r:cacc\] this is enough for us to apply Proposition \[p:linf\_meas\] and thus Corollary \[co:linf\_int\] to ${w}_{R}$, and get $$\label{eq:decay_iii} {{\lVert{w}_{R}\rVert}_{\ell^{\infty}(V)}} \le \gamma E_{q}^{\frac{1}{q}} \big[{\psi}_{q}^{(-1)} \big( t^{-1} E_{q}^{-\frac{p-2}{q}} \big) \big]^{\frac{1}{q}} \le \gamma E_{q0}^{\frac{1}{q}} \big[{\psi}_{q}^{(-1)} \big( t^{-1} E_{q0}^{-\frac{p-2}{q}} \big) \big]^{\frac{1}{q}} \,,$$ where by invoking a simple variant of with $h=0$ we find $$E_{q} := \sup_{0<\tau<t} \sum_{x\in V} {w}_{R}(x,\tau)^{q} \le E_{q0} := \sum_{x\not\in B_{R}(x_{0})} {u}_{0}(x)^{q} {d_{{\omega}}}(x) \,.$$ We use here also Remark \[r:dcf\]. Thus we have that for all $R>0$, since ${u}={u}_{R}+{w}_{R}$, $$\label{eq:decay_j} {{\lVert{u}(t)\rVert}_{\ell^{\infty}(V)}} \le \gamma \Big\{ m_{R} {\psi}_{1}^{(-1)}\big(t^{-1}m_{R}^{-(p-2)}\big) + E_{q0}^{\frac{1}{q}} \big[{\psi}_{q}^{(-1)} \big( t^{-1} E_{q0}^{-\frac{p-2}{q}} \big) \big]^{\frac{1}{q}} \Big\} \,.$$ The first term on the right hand side of is increasing in $R$, while the second one is decreasing. We aim at making them equal, but this is in general impossible in the discrete setting of graphs. We instead select $R$ as any number (optimally the minimum one) such that $$\label{eq:decay_kk} m_{R} {\psi}_{1}^{(-1)}\big(t^{-1}m_{R}^{-(p-2)}\big) \ge E_{q0}^{\frac{1}{q}} \big[{\psi}_{q}^{(-1)} \big( t^{-1} E_{q0}^{-\frac{p-2}{q}} \big) \big]^{\frac{1}{q}} \,.$$ Then is proved under assumption . We need to make explicit. First, we define $$X_{1} = {\psi}_{1}^{(-1)}(t^{-1}m_{R}^{-(p-2)}) \,, \qquad X_{q} = {\psi}_{q}^{(-1)}(t^{-1}E_{q0}^{-\frac{p-2}{q}}) \,,$$ so that from the definition of ${\psi}_{r}$, we get $$\label{eq:decay_k} X_{1} = t^{-\frac{1}{p-2}} m_{R}^{-1} {\Lambda_p}(X_{1}^{-1})^{-\frac{1}{p-2}} \,, \qquad X_{q}^{\frac{1}{q}} = t^{-\frac{1}{p-2}} E_{q0}^{-\frac{1}{q}} {\Lambda_p}(X_{q}^{-1})^{-\frac{1}{p-2}} \,.$$ Therefore can be written as $$\label{eq:decay_kkk} {\Lambda_p}(X_{1}^{-1}) \le {\Lambda_p}(X_{q}^{-1}) \,, \quad \text{that is} \quad X_{1} \le X_{q} \,.$$ We apply ${\psi}_{1}$ and write the last inequality in the form $$\begin{split} (tm_{R}^{p-2})^{-1} &\le {\psi}_{1}(X_{q}) = X_{q}^{p-2} {\Lambda_p}(X_{q}^{-1}) \\ &= X_{q}^{\frac{(p-2)(q-1)}{q}} {\psi}_{q}(X_{q}) = X_{q}^{\frac{(p-2)(q-1)}{q}} (tE_{q0}^{\frac{p-2}{q}})^{-1} \,. \end{split}$$ From here we immediately get, on recalling the definition of ${\psi}_{q}$, $$\frac{1}{t} \ge \Big[ \frac{E_{q0}}{m_{R}} \Big]^{\frac{p-2}{q-1}} {\Lambda_p}\Big( {m_{R}^{\frac{q}{q-1}}} {E_{q0}^{-\frac{1}{q-1}}} \Big) \,.$$ This amounts to concluding the proof. 
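For clarity, we note that with $m_{R}={{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(x_{0}))}}$ and $E_{q0}={{\lVert{u}_{0}\rVert}_{\ell^{q}(V\setminus B_{R}(x_{0}))}}^{q}$ a direct computation from the definition of ${T}_{{u}_{0}}$ gives $$ \Big[ \frac{E_{q0}}{m_{R}} \Big]^{\frac{p-2}{q-1}} {\Lambda_p}\Big( m_{R}^{\frac{q}{q-1}} E_{q0}^{-\frac{1}{q-1}} \Big) = {T}_{{u}_{0}}(R,x_{0})^{-1} \,, $$ so that the last inequality is equivalent to $t\le {T}_{{u}_{0}}(R,x_{0})$, as claimed.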
[^1]: The first author is a member of the Gruppo Nazionale per la Fisica Matematica (GNFM) of the Istituto Nazionale di Alta Matematica (INdAM). [^2]: The second author was supported by Sapienza Grant C26V17KBT3.
--- abstract: 'We discuss the stability properties of an autonomous system in loop quantum cosmology. The system is described by a self-interacting scalar field $\phi$ with positive potential $V$, coupled with a barotropic fluid in the Universe. With $\Gamma=VV''/V'^2$ considered as a function of $\lambda=V'/V$, the autonomous system is extended from three dimensions to four dimensions. We find that the dynamical behaviors of a subset, but not all, of the fixed points are independent of the form of the potential. Considering the higher-order derivatives of the potential, we get an infinite-dimensional autonomous system which can describe the dynamical behavior of the scalar field with a more general potential. We find that there is just one scalar-field-dominated scaling solution in the loop quantum cosmology scenario.' author: - Kui Xiao - 'Jian-Yang Zhu' title: Stability analysis of an autonomous system in loop quantum cosmology --- [^1] Introduction ============ The scalar field plays an important role in modern cosmology. Indeed, scalar-field cosmological models are of great importance in the study of the early Universe, especially in the investigation of inflation. The dynamical properties of scalar fields also make an interesting research topic for modern cosmological studies [@Copeland-IJMPD; @Coley-Book]. The dynamical behavior of a scalar field coupled with a barotropic fluid in a spatially flat Friedmann-Robertson-Walker universe has been studied by many authors (see [@Copeland-IJMPD; @Copeland-scalar; @Leon-Phase], and the first section of [@Coley-Book]). The phase-plane analysis of the cosmological autonomous system is a useful method for studying the dynamical behavior of a scalar field. One always considers the dynamical behavior of a scalar field with an exponential potential in classical cosmology [@Copeland-exponential; @Hao-PRD1; @Hao-PRD2] or modified cosmology [@Samart-Phantom; @Li-O(N)]. And, if one considers the dynamical behavior of a scalar field coupled with a barotropic fluid, the exponential potential is also the first choice [@Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom]. The exponential potential $V$ leads to the fact that the variable $\Gamma=VV''/V'^2$ equals 1 and that $\lambda=V'/V$ is also a constant. Then the autonomous system is always two dimensional in classical cosmology [@Copeland-exponential], and three dimensional in loop quantum cosmology (LQC) [@Samart-Phantom]. Although one can also consider a more complex case with $\lambda$ being a dynamically changing quantity [@Copeland-IJMPD; @Macorra-lambda; @Nunes-lambda], the fixed point is not a real one, and this method is not exact. Recently, Zhou *et al* [@Zhou-plb; @Fang-CQG] introduced a new method by which one can make $\Gamma$ a general function of $\lambda$. Then the autonomous system is extended from two dimensions to three dimensions in classical cosmology. They found that this method can help investigate many quintessence models with different potentials. The goal of this paper is to extend this method to the study of the dynamical behavior of a scalar field with a general potential coupled with a barotropic fluid in LQC. LQC [@Bojowald-Living; @Ashtekar-overview] is a canonical quantization of homogeneous spacetime based on the techniques used in loop quantum gravity (LQG) [@Rovelli-Book; @Thiemann-Book]. Owing to the homogeneity and isotropy of the spacetime, the phase space of LQC is simpler than that of LQG.
For example, the connection is determined by a single parameter $c$ and the triad is determined by $p$. Recently, it has been shown that the loop quantum effects can be very well described by an effective modified Friedmann dynamics. Two corrections of the effective LQC are always considered: the inverse volume correction and the holonomy correction. These modifications lead to many interesting results: the big bang can be replaced by the big bounce [@Ashtekar], the singularity can be avoided [@Singh], the inflation can be more likely to occur (e.g., see [@Bojowald; @Germani-inflation; @Copeland-superinflation; @Ashtekar-inflation; @Corichi-measure]), and more. But the inverse volume modification suffers from gauge dependence which cannot be cured and thus yields unphysical effects. In the effective LQC based on the holonomy modification, the Friedmann equation adds a $-\frac{\kappa}{3}\frac{\rho^2}{\rho_c}$ term, in which $\kappa=8\pi G$, to the right-hand side of the standard Friedmann equation [@Ashtekar-improve]. Since this correction comes with a negative sign, the Hubble parameter $H$, and then $\dot{a}$ will vanish when $\rho=\rho_c$, and the quantum bounce occurs. Moreover, for a universe with a large scalar factor, the inverse volume modification to the Friedmann equation can be neglected and only the holonomy modification is important. Based on the holonomy modification, the dynamical behavior of dark energy has recently been investigated by many authors [@Samart-Phantom; @Samart-dy; @Xiao-dynamical]. The attractor behavior of the scalar field in LQC has also been studied [@Copeland-superinflation; @Lidsey-attractor]. It was found that the dynamical properties of dark-energy models in LQC are significantly different from those in classical cosmology. In this paper, we examine the background dynamics of LQC dominated by a scalar field with a general positive potential coupled with a barotropic fluid. By considering $\Gamma$ as a function of $\lambda$, we investigate scalar fields with different potentials. Since the Friedmann equation has been modified by the quantum effect, the dynamical system will be very different from the one in classical cosmology, e.g., the number of dimensions of autonomous system will change to four in LQC. It must be pointed out that this method cannot be used to describe the dynamical behavior of scalar field with arbitrary potential. To overcome this problem, therefore, we should consider an infinite-dimensional autonomous system. The paper is organized as follows. In Sec. \[sec2\], we present the basic equations and the four dimensional dynamical system, and in Sec. \[sec3\], we discuss the properties of this system. In Sec. \[sec4\], we discuss the autonomous system in greater detail, as well as an infinite-dimensional autonomous system. We conclude the paper in the last section. The Appendix contains the analysis of the dynamical properties of one of the fixed points, $P_3$. Basic equations {#sec2} =============== We focus on the flat Friedmann-Robertson-Walker cosmology. The modified Friedmann equation in the effective LQC with holonomy correction can be written as [@Ashtekar-improve] $$\begin{aligned} H^2=\frac{1}{3}\rho\left(1-\frac{\rho}{\rho_c}\right),\label{Fri}\end{aligned}$$ in which $\rho$ is the total energy density and the natural unit $\kappa=8\pi G=1$ is adopted for simplicity. We consider a self-interacting scalar field $\phi$ with a positive potential $V(\phi)$ coupled with a barotropic fluid. 
Then the total energy density can be written as $\rho=\rho_\phi+\rho_\gamma$, with the energy density of scalar field $\rho_\phi=\frac12\dot{\phi}^2+V(\phi)$ and the energy density of barotropic fluid $\rho_\gamma$. We consider that the energy momenta of this field to be covariant conserved. Then one has $$\begin{aligned} &&\ddot{\phi}+3H\dot{\phi}+V'=0,\label{ddotphi}\\ &&\dot{\rho}_\gamma+3\gamma H\rho_\gamma=0, \label{dotrg}\end{aligned}$$ where $\gamma$ is an adiabatic index and satisfies $p_\gamma=(\gamma-1)\rho_\gamma$ with $p_\gamma$ being the pressure of the barotropic fluid, and the prime denotesthe differentiation with respect to the field $\phi$. Differentiating Eq. (\[Fri\]) and using Eqs. (\[ddotphi\]) and (\[dotrg\]), one can obtain $$\begin{aligned} \dot{H}=-\frac12\left(\dot{\phi}^2+\gamma\rho_\gamma\right)\left[1-\frac{2(\rho_\gamma+\rho_\phi)}{\rho_c}\right]. \label{Fri4}\end{aligned}$$ Equations (\[Fri\])-(\[dotrg\]) and (\[ddotphi\])-(\[Fri4\]) characterize a closed system which can determine the cosmic behavior. To analyze the dynamical behavior of the Universe, one can further introduce the following variables [@Copeland-exponential; @Samart-Phantom]: $$\begin{aligned} x\equiv\frac{\dot{\phi}}{\sqrt{6}H},\quad y\equiv\frac{\sqrt{V}}{\sqrt{3}H},\quad z\equiv\frac{\rho}{\rho_c},\quad \lambda\equiv\frac{V'}{V}, \label{new-v}\end{aligned}$$ where the $z$ term is a special variable in LQC \[see Eq. (\[Fri\])\]. In the LQC scenario, the total energy density $\rho$ should be less than or equal to the critical energy density $\rho_c$, and thus $0\leq z\leq 1$. Notice that, in the classical region, $z=0$ for $\rho\ll\rho_c$. Using these new variables, one can obtain $$\begin{aligned} &&\frac{\rho_\gamma}{3H^2}=\frac{1}{1-z}-x^2-y^2,\label{rg-xyz}\\ &&\frac{\dot{H}}{H^2}=-\left[3x^2+\frac{3\gamma}{2}\left(\frac{1}{1-z}-x^2-y^2\right)\right]\left(1-2z\right)\nonumber\\ \label{dH-HH}.\end{aligned}$$ Using the new variables (\[new-v\]), and considering Eqs. (\[rg-xyz\]) and (\[dH-HH\]), one can rewrite Eqs. (\[Fri\])-(\[dotrg\]) in the following forms: $$\begin{aligned} \frac{dx}{dN} &=&-3x-\frac{\sqrt{6}}{2}\lambda y^2+x\left[3x^2+\frac{3\gamma}{2}\left(\frac{1}{1-z}-x^2-y^2\right)\right]\nonumber\\ &&\times(1-2z),\label{x'}\\ \frac{dy}{dN}&=&\frac{\sqrt{6}}{2}\lambda x y+y\left[3x^2+\frac{3\gamma}{2} \left(\frac{1}{1-z}-x^2-y^2\right)\right]\nonumber\\ &&\times(1-2z),\label{y'}\\ \frac{dz}{dN} &=&-3\gamma z-3z\left(1-z\right)\left(2x^2-\gamma x^2-\gamma y^2\right),\label{z'}\\ \frac{d\lambda}{dN} &=&\sqrt{6}\lambda^2 x\left(\Gamma-1\right),\label{l}\end{aligned}$$ where $N=\ln a$ and $$\begin{aligned} \Gamma\equiv \frac{VV''}{V'^2}.\label{Gamma}\end{aligned}$$ Note that the potential $V(\phi)$ is positive in this paper, but one can also discuss a negative potential. Just as [@Heard-negative] has shown, the negative scalar potential could slow down the growth of the scale factor and cause the Universe to be in a collapsing phase. The dynamical behavior of the scalar field with the positive and negative potential in brane cosmology has been discussed by [@Copeland-scalar]. In this paper we are concerned only with an expanding universe, and both the Hubble parameter and the potential are positive. Differentiating $\lambda$ with respect to the scalar field $\phi$, we obtain the relationship between $\lambda$ and $\Gamma$, $$\begin{aligned} \frac{d\lambda^{-1}}{d\phi}=1-\Gamma. 
\label{l-G}\end{aligned}$$ If we consider only a special case of the potential, such as an exponential potential [@Copeland-exponential; @Hao-PRD1; @Hao-PRD2; @Samart-Phantom; @Li-O(N); @Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom], then $\lambda$ and $\Gamma$ are both constants. In this case, the four dimensional dynamical system, Eqs. (\[x’\])-(\[l\]), reduces to a three dimensional one, since $\lambda$ is a constant. (In the classical case, the $z$ term does not exist, and the dynamical system is then reduced from three dimensions to two.) The cost of this simplification is that the potential of the field is restricted. Recently, Zhou *et al.* [@Zhou-plb; @Fang-CQG] considered the potential parameter $\Gamma$ as a function of another potential parameter $\lambda$, which enables one to study the fixed points for a large number of potentials. We will follow this method in this section and the sections that follow to discuss the dynamical behavior of the scalar field in the LQC scenario, and we have $$\begin{aligned} \Gamma(\lambda)=f(\lambda)+1.\label{G-l}\end{aligned}$$ In this case, Eq. (\[G-l\]) can cover many scalar potentials. For completeness, we briefly review the discussion of [@Fang-CQG] in the following. From Eq. (\[l-G\]), one can obtain $$\begin{aligned} \frac{d\lambda}{\lambda f(\lambda)}=\frac{dV}{V}.\label{l-f-V}\end{aligned}$$ Integrating this relation yields $\lambda=\lambda(V)$, and one then has the following differential equation for the potential $$\begin{aligned} \frac{dV}{V\lambda(V)}=d\phi.\label{l-V}\end{aligned}$$ Then, Eqs. (\[l-f-V\]) and (\[l-V\]) provide a route for obtaining the potential $V=V(\phi)$. If we consider a concrete form of the potential (e.g., an exponential potential), the dynamical system becomes specialized (e.g., it reduces to three dimensions for the exponential potential, since $d\lambda/dN=0$). Such specialized systems are too restrictive if one hopes to distinguish the fixed points that are common to all scalar fields from those that are tied to particular potentials [@Fang-CQG]. If we consider a more general $\lambda$, then we can obtain more general stability properties of the scalar field in the LQC scenario. We will continue the discussion of this topic in Sec. \[sec4\]. In this case, Eq. (\[l\]) becomes $$\begin{aligned} \frac{d\lambda}{dN}&=&\sqrt{6}\lambda^2 xf(\lambda).\label{l'}\end{aligned}$$ Hereafter, Eqs. (\[x’\])-(\[z’\]) along with Eq. (\[l’\]) completely describe the dynamical system. We will discuss its stability in the following section. Properties of the autonomous system {#sec3} =================================== Obviously, the terms on the right-hand side of Eqs. (\[x’\])-(\[z’\]) and (\[l’\]) depend only on $x,y,z,\lambda$, and not on $N$ or other variables. Such a dynamical system is usually called an autonomous system. For simplicity, we define $ \frac{dx}{dN}=F_1(x,y,z,\lambda)\equiv F_1, \frac{dy}{dN}=F_2(x,y,z,\lambda)\equiv F_2, \frac{dz}{dN}=F_3(x,y,z,\lambda)\equiv F_3$, and $\frac{d\lambda}{dN}=F_4(x,y,z,\lambda)\equiv F_4.$ The fixed points $(x_c,y_c,z_c,\lambda_c)$ satisfy $F_i=0, i=1,2,3,4$. From Eq. (\[l’\]), it is straightforward to see that $x=0$, $\lambda=0$ or $f(\lambda)=0$ can make $F_4(x,y,z,\lambda)=0$. We must, however, be careful with the condition $\lambda^2f(\lambda)=0$: just as [@Fang-CQG] argued, it is possible that $\lambda^2 f(\lambda)\neq0$, and hence $\frac{d\lambda}{dN}\neq0$, even when $\lambda=0$ (if $f(\lambda)$ diverges sufficiently fast there).
Thus the necessary condition for the existence of fixed points with $x\neq 0$ is $\lambda^2f(\lambda)=0$. Taking into account these factors, we can easily obtain all the fixed points of the autonomous system described by Eqs. (\[x’\])-(\[z’\]) and (\[l’\]); they are listed in Table **I**.

  $P_1$: $(x_c,y_c,z_c,\lambda_c)=(0,0,0,0)$;  $\mathbf{M}^T=(0,-3\gamma,\frac32\gamma,-3+\frac32\gamma)$;  U, for all $\gamma$.

  $P_2$: $(x_c,y_c,z_c,\lambda_c)=(0,0,0,\lambda_*)$;  $\mathbf{M}^T=(0,\frac32\gamma,-3\gamma,-3+\frac32\gamma)$;  U, for all $\gamma$.

  $P_3$: $(x_c,y_c,z_c,\lambda_c)=(0,1,0,0)$;  $\mathbf{M}^T=(-3,-3\gamma,0,0)$;  S, for $\gamma=1$, $f_1(0)\geq0$; U, for $\gamma=\frac43$, if $f_1(0)\neq 0$; S, for $\gamma=\frac43$, if $f_1(0)=0$.

  $P_4$: $(x_c,y_c,z_c,\lambda_c)=(1,0,0,0)$;  $\mathbf{M}^T=(0,-6,0,6-3\gamma)$;  U, for all $\gamma$.

  $P_5$: $(x_c,y_c,z_c,\lambda_c)=(-1,0,0,0)$;  $\mathbf{M}^T=(0,-6,0,6-3\gamma)$;  U, for all $\gamma$.

  $P_6$: $(x_c,y_c,z_c,\lambda_c)=(0,0,0,\lambda_a)$;  $\mathbf{M}^T=(0,\frac32\gamma,-3\gamma,-3+\frac32\gamma)$;  U, for all $\gamma$.

  $P_7$: $(x_c,y_c,z_c,\lambda_c)=(1,0,0,\lambda_a)$;  $\mathbf{M}^T=\left(-6,6-3\gamma,\frac12\sqrt{6}\lambda_a+3,\sqrt{6}\lambda_a A\right)$;  U, for all $\gamma$.

  $P_8$: $(x_c,y_c,z_c,\lambda_c)=(-1,0,0,\lambda_a)$;  $\mathbf{M}^T=\left(-6,6-3\gamma,-\frac12\sqrt{6}\lambda_a+3,-\sqrt{6}\lambda_aA\right)$;  U, for all $\gamma$.

  $P_9$: $(x_c,y_c,z_c,\lambda_c)=(-\frac{\sqrt{6}}{6}\lambda_a,\sqrt{1-\frac{\lambda^2_a}{6}},0,\lambda_a)$;  $\mathbf{M}^T=\left(-\lambda_a^2,-3+\frac12\lambda_a^2,\lambda_a^2-3\gamma,-\lambda^3_af_1(\lambda_a)\right)$;  S, for $f_1(\lambda_a)>\lambda_a$ and $\lambda_a<3\gamma$; U, for $f_1(\lambda_a)<\lambda_a$ and/or $\lambda_a>3\gamma$.

  $P_{10}$: $(x_c,y_c,z_c,\lambda_c)=(-\sqrt{\frac32}\frac{\gamma}{\lambda_a},\sqrt{\frac{3}{2\lambda_a^2}\gamma(2-\gamma)},0,\lambda_a)$;  eigenvalues given in Eq. (\[p10\]);  U, for all $\gamma$.

Table **I**: The stability analysis of the autonomous system in LQC. The system is described by a self-interacting scalar field $\phi$ with positive potential $V$ coupled with a barotropic fluid $\rho_\gamma$. Explanation of the symbols used in this table: $P_{i}$ denotes a fixed point in the four dimensional phase space, labeled by the coordinates $(x_c,y_c,z_c,\lambda_c)$. $\lambda_*$ means that $\lambda$ can take any value. $\lambda_a$ is a value that makes $f(\lambda)=0$. $\mathbf{M}^T$ denotes the transpose of the vector of eigenvalues at the fixed point. $f_1(\Lambda)=\left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda=\Lambda}$ with $\Lambda=0, \lambda_a$. $A=\left[2f(\lambda_a)+\lambda_a\left( \left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda_a}\right)\right]$. U stands for unstable, and S stands for stable.
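As a quick numerical cross-check of the entries in Table **I**, one can evaluate the right-hand sides $F_1,\ldots,F_4$ and a finite-difference Jacobian at the tabulated points. The following is only a minimal sketch (Python/NumPy): the choice $f(\lambda)=\lambda_a^2-\lambda^2$ and the values of $\gamma$ and $\lambda_a$ are arbitrary illustrations, not part of the model.

```python
import numpy as np

gamma = 1.0          # adiabatic index (dust); an arbitrary illustrative choice
lam_a = 1.0          # assumed root of f; the f(lambda) below is purely illustrative

def f(lam):
    return lam_a**2 - lam**2          # hypothetical f(lambda) with f(lam_a) = 0

def rhs(s):
    """Right-hand sides (F1, F2, F3, F4) of Eqs. (x')-(z') and (l')."""
    x, y, z, lam = s
    bracket = 3*x**2 + 1.5*gamma*(1.0/(1.0 - z) - x**2 - y**2)
    F1 = -3*x - np.sqrt(6)/2*lam*y**2 + x*bracket*(1 - 2*z)
    F2 = np.sqrt(6)/2*lam*x*y + y*bracket*(1 - 2*z)
    F3 = -3*gamma*z - 3*z*(1 - z)*(2*x**2 - gamma*x**2 - gamma*y**2)
    F4 = np.sqrt(6)*lam**2*x*f(lam)
    return np.array([F1, F2, F3, F4])

def jacobian(s, h=1e-6):
    """Central finite-difference Jacobi matrix of the autonomous system at s."""
    J = np.zeros((4, 4))
    for j in range(4):
        sp, sm = np.array(s, float), np.array(s, float)
        sp[j] += h
        sm[j] -= h
        J[:, j] = (rhs(sp) - rhs(sm)) / (2*h)
    return J

for name, point in [("P1", (0, 0, 0, 0)), ("P3", (0, 1, 0, 0))]:
    eig = np.sort(np.linalg.eigvals(jacobian(point)).real)
    print(name, "max|F_i| =", np.max(np.abs(rhs(point))), " eigenvalues =", eig)
```

Running this reproduces, for instance, vanishing residuals $F_i=0$ at $P_1$ and $P_3$ and the eigenvalue set $(-3,-3\gamma,0,0)$ for $P_3$, up to the ordering of the eigenvalues.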
The properties of each fixed point are determined by the eigenvalues of the Jacobi matrix $$\begin{aligned} \mathcal{M}= \left. \begin{pmatrix} \frac{\partial F_1}{\partial x}&\frac{\partial F_1}{\partial y}&\frac{\partial F_1}{\partial z}&\frac{\partial F_1}{\partial \lambda}\\ \frac{\partial F_2}{\partial x}&\frac{\partial F_2}{\partial y}&\frac{\partial F_2}{\partial z}&\frac{\partial F_2}{\partial \lambda}\\ \frac{\partial F_3}{\partial x}&\frac{\partial F_3}{\partial y}&\frac{\partial F_3}{\partial z}&\frac{\partial F_3}{\partial \lambda}\\ \frac{\partial F_4}{\partial x}&\frac{\partial F_4}{\partial y}&\frac{\partial F_4}{\partial z}&\frac{\partial F_4}{\partial \lambda} \end{pmatrix} \right|_{(x_c,y_c,z_c,\lambda_c)}.\end{aligned}$$ According to Lyapunov’s linearization method, the stability of a linearized system is determined by the eigenvalues of the matrix $\mathcal{M}$ (see Chapter 3 of [@Slotine-book]). If all of the eigenvalues are strictly in the left-half complex plane, then the autonomous system is stable. If at least one eigenvalue is strictly in the right-half complex plane, then the system is unstable. If all of the eigenvalues are in the left-half complex plane, but at least one of them is on the $i\omega$ axis, then one cannot conclude anything definite about the stability from the linear approximation. By examining the eigenvalues of the matrix $\mathcal{M}$ for each fixed point shown in Table **I**, we find that points $P_{1,2,4-8,10}$ are unstable and point $P_{9}$ is stable only under some conditions. We cannot determine the stability properties of $P_{3}$ from the eigenvalues, and we will give the full analysis of $P_3$ in the Appendix. Some remarks on Table **I**: 1. Apparently, points $P_2$ and $P_6$ have the same eigenvalues, and the difference between these two points is just in the value of $\lambda$. As noted in the caption of Table **I**, $\lambda_*$ means that $\lambda$ can be any value, and $\lambda_a$ is just the value that makes $f(\lambda)=0$. Obviously, $\lambda_a$ is just a special value of $\lambda_*$, and point $P_6$ is a special case of point $P_2$. $P_6$ is connected with $f(\lambda)$, but $P_2$ is not. From now on, we do not consider separately the special case of $P_6$ when we discuss the properties of $P_2$. Hence the value of $\lambda_a$ is contained in our discussion of $\lambda_*$. 2. It is straightforward to check that, if $x_c=\lambda_c=0$, $y_c$ can be any value $y_*$ when it is greater than or equal to $1$. But, if $y_*>1$, then $0<z_c=1-1/y_*^2<1$, and this means that the fixed point is located in the quantum-dominated regions. Although the stability of this point in the quantum regions may depend on $f(\lambda)$, it is not necessary to analyze its dynamical properties, since it does not have any physical meaning. The reason is the following: If the Universe is stable, it will not evolve to the Universe we observe today. If the Universe is unstable, it will always be unstable. We will just focus on point $P_3$, which stays in the classical region. Then $y_c=y_*=1,z_c=1-1/y_*^2=0$, i.e., for the classical cosmology region, $\rho\ll\rho_c$. 3. Since the adiabatic index $\gamma$ satisfies $0<\gamma< 2$ (in particular, for radiation $\gamma=\frac43$ and for dust $\gamma=1$), all the terms that contain $\gamma$ do not change sign. A more general situation for $\gamma$ is $0\leq \gamma\leq 2$ [@Billyard-scaling]. If $\gamma=0$ or $\gamma=2$, the eigenvalues corresponding to points $P_{1,2,4,5,9}$ will have some zero elements and some negative ones.
To analyze the stability of these points, we need to resort to other, more complex methods, just as we do in the Appendix for the dynamical properties of point $P_3$. In this paper, we consider only a barotropic fluid corresponding to radiation or dust, so $\gamma\neq 0,2$. Notice that if one considers $\gamma=0$, the barotropic fluid describes dark energy. This is an interesting topic, but will not be considered here for the sake of simplicity. 4. $-\sqrt{6}<\lambda_a<\sqrt{6},\lambda_a\neq 0$ should hold for point $P_9$, hence $-3+\frac12\lambda^2_a<0$. 5. $\lambda_a>0$ should hold, since $y_c>0$ for point $P_{10}$. The eigenvalues of this point are $$\begin{aligned} \label{p10} \mathbf{M}=\begin{pmatrix} -3\gamma\\-3\lambda_a\gamma f_1(\lambda_a)\\-\frac32+\frac34\gamma+\frac{3}{4\lambda_a}\sqrt{(2-\gamma)(\lambda^2_a(2-\gamma)+8\gamma+24\gamma^2)}\\ -\frac32+\frac34\gamma-\frac{3}{4\lambda_a}\sqrt{(2-\gamma)(\lambda^2_a(2-\gamma)+8\gamma+24\gamma^2)} \end{pmatrix}.\nonumber\\\end{aligned}$$ Since we consider only $0<\gamma<2$ in this paper, it is easy to check that $(2-\gamma)(\lambda^2_a(2-\gamma)+8\gamma+24\gamma^2)>0$ is always satisfied. This point is therefore unstable, regardless of whether $f_1(\lambda_a)=\left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda=\lambda_a}$ is negative or positive, since $-\frac32+\frac34\gamma+\frac{3}{4\lambda_a}\sqrt{(2-\gamma)(\lambda^2_a(2-\gamma)+8\gamma+24\gamma^2)}$ is always positive. Based on Table **I** and the related remarks above, we have the following conclusions: 1. Points $P_{1,2}$: The related critical values, eigenvalues and stability properties do not depend on the specific form of the potential, since $\lambda_c=0$ or $\lambda$ can be any value $\lambda_*$. 2. Point $P_3$: The related stability properties depend on $f_1(0)=\left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda=0}$. 3. Points $P_{4,5}$: The related eigenvalues and stability properties do not depend on the form of the potential, but the critical values of these points should satisfy $\lambda^2 f(\lambda)=0$ since $x_c\neq 0$. 4. Point $P_6$: It is a special case of $P_2$, but $f(\lambda_a)=0$ should be satisfied. 5. Points $P_{7,8}$: Same as $P_6$, they would not exist if $f(\lambda_a)\neq 0$. 6. Points $P_{9,10}$: $f(\lambda_a)=0$ should hold. The fixed values and the eigenvalues of these two points depend on $f_1(\lambda_a)= \left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda=\lambda_a}$. Thus, only points $P_{1,2}$ are independent of $f(\lambda)$. Comparing the fixed points in LQC with the ones in classical cosmology (see Table **I** of [@Fang-CQG]), we can see that, even though the values of the coordinates $(x_{c},y_{c},\lambda_{c})$ are the same, the stability properties are very different. This is reasonable, because the quantum modification is considered, and the autonomous system in the LQC scenario is very different from the one in the classical scenario; e.g., the autonomous system is four dimensional in LQC but three dimensional in the classical scenario. Notice that all of the fixed points lie in the classical regions, and therefore the coordinates of the fixed points remain the same in going from the classical theory to LQC, as we also pointed out in an earlier paper [@Xiao-dynamical]. Now we focus on the late time attractors: point $P_{3}$ under the conditions of $\gamma=1, f_1(0)\geq 0$ and $\gamma=4/3, f_1(0)=0$, and point $P_{9}$ under the conditions of $\lambda_a^2<6, f_1(\lambda_a)>\lambda_a,\lambda_a<3\gamma$.
Obviously, these points are scalar-field dominated, since ${\rho_\gamma}=3H^2\left(1/(1-z_c)-x_c^2-y_c^2\right)=0$ \[see Eq. (\[rg-xyz\])\]. For point $P_{3}$, the effective adiabatic index $\gamma_\phi=(\rho_\phi+p_\phi)/\rho_\phi=0$, which means that the scalar field is an effective cosmological constant. For point $P_{9}$, $\gamma_\phi=\lambda^2_a/2$. This describes a scaling solution in which, as the universe evolves, the kinetic energy and the potential energy of the scalar field scale together. We can also see that no barotropic fluid is coupled to the scalar-field-dominated scaling solution. This is different from the dynamical behavior of a scalar field with the exponential potential $V=V_0 \exp(-\lambda\kappa\phi)$ in classical cosmology [@Copeland-exponential; @Hao-PRD1; @Hao-PRD2; @Samart-Phantom; @Li-O(N); @Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom], and also from the properties of the scalar field in brane cosmology [@Copeland-scalar], in which $\lambda=\rm{const.}$ (notice that the definition of $\lambda$ in [@Copeland-scalar] differs from the one in this paper) and $\Gamma$ is a function of $L(\rho(a))$ and $|V|$. In these models, the Universe may enter a stage dominated by a scalar field coupled with fluid when $\lambda,\gamma$ satisfy some conditions [@Copeland-exponential; @Copeland-scalar]. We have discussed the dynamical behavior of the scalar field by considering $\Gamma$ as a function of $\lambda$ in this and the preceding sections. But $\Gamma$ cannot always be treated as a function of $\lambda$. We need to consider a more general autonomous system, which we will introduce in the next section. Further discussion of the autonomous system {#sec4} =========================================== The dynamical behavior of the scalar field has been discussed by many authors (e.g., see [@Copeland-IJMPD; @Coley-Book; @Copeland-exponential; @Hao-PRD1; @Hao-PRD2; @Samart-Phantom; @Li-O(N); @Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom]). If one wants to get the potentials that yield the cosmological scaling solutions beyond the exponential potential, one can add a $\frac{d\phi}{dN}$ term into the autonomous system [@Nunes-scaling]. All of these methods deal with special cases of the dynamical behavior of scalar fields in backgrounds of some specific forms. By considering $\Gamma$ as a function of $\lambda$, one can treat potentials of more general forms and obtain the common fixed points of the general potential, as shown in [@Zhou-plb; @Fang-CQG] and in the two preceding sections. However, as is discussed in [@Fang-CQG], sometimes $\Gamma$ is not a function of $\lambda$, and then the dynamical behaviors of the scalar fields discussed above are still not general in the strict sense. For a more general discussion, we must consider the higher-order derivatives of the potential. We define $$\begin{aligned} &&{}^{(1)}\Gamma=\frac{VV_3}{V'^2}, \quad {}^{(2)}\Gamma =\frac{VV_4}{V'^2},\quad{}^{(3)}\Gamma=\frac{VV_5}{V'^2},\nonumber\\ &&\cdots\quad{}^{(n)}\Gamma=\frac{VV_{n+2}}{V'^2},\quad \cdots\end{aligned}$$ in which $V_{n}=\frac{d^n V}{d\phi^n},n=3,4,5,\cdots$.
Then we can get $$\begin{aligned} \frac{d\Gamma}{d N} &=&\sqrt{6}x\left[\Gamma\lambda+{}^{(1)}\Gamma-2\lambda\Gamma^2\right],\label{G'}\\ \frac{d\left({}^{(1)}\Gamma\right)}{dN} &=&\sqrt{6}x\left[{}^{(1)}\Gamma\lambda +{}^{(2)}\Gamma -2\lambda\Gamma\left({}^{(1)}\Gamma\right)\right],\label{G3'}\\ \frac{d\left({}^{(2)}\Gamma\right)}{dN} &=&\sqrt{6}x\left[ {}^{(2)}\Gamma\lambda+{}^{(3)}\Gamma -2 \lambda\Gamma\left({}^{(2)}\Gamma\right)\right],\label{G4'}\\ \frac{d\left({}^{(3)}\Gamma\right)}{dN} &=&\sqrt{6}x\left[{}^{(3)}\Gamma\lambda+{}^{(4)}\Gamma -2\lambda\Gamma\left({}^{(3)}\Gamma\right)\right],\label{G5'}\\ && \cdots\cdots\nonumber\\ \frac{d\left({}^{(n)}\Gamma\right)}{dN} &=&\sqrt{6}x\left[{}^{(n)}\Gamma\lambda+{}^{(n+1)}\Gamma -2\lambda\Gamma \left({}^{(n)}\Gamma\right)\right],\label{Gn'}\\ && \cdots\cdots\nonumber\end{aligned}$$ To discuss the dynamical behavior of a scalar field with a more general potential, e.g., when neither $\lambda$ nor $\Gamma$ is constant, we need to consider the dynamical system described by Eqs. (\[x’\])-(\[l\]) coupled with Eqs. (\[G’\])-(\[Gn’\]). It is easy to see that this dynamical system is also an autonomous one. We can now discuss the values of the fixed points of this autonomous system. Considering Eq. (\[l\]), we can see that the values of the fixed points should satisfy $x_c=0$, $\lambda_c=0$, or $\Gamma_c=1$. Then, we can get the fixed points of this infinite-dimensional autonomous system. 1. If $x_c=0$, considering Eqs. (\[x’\])-(\[z’\]), one can get $(y_c,z_c,\lambda_c)=(0,0,0)$ or $(y_c,z_c,\lambda_c)=(0,0,\lambda_*)$, and $\Gamma_c,{}^{(n)}\Gamma_c$ can be any values. 2. If $\lambda_c=0$, considering Eqs. (\[x’\])-(\[z’\]), one can see that the fixed points of $(x,y,z)$ are $(x_c,y_c,z_c)=(0,y_*,1-1/y_*^2)$ and $(x_c,y_c,z_c)=(\pm1,0,0)$. If $x_c=0$, $\Gamma_c$ and ${}^{(n)}\Gamma_c$ can be any values, and if $x_c=\pm 1$, ${}^{(n)}\Gamma_c=0$. 3. If $\Gamma_c=1$, considering Eqs. (\[x’\])-(\[z’\]), one can get that the fixed points of $(x,y,z,\lambda)$ are $(x_c,y_c,z_c,\lambda_c)=(0,0,0,\lambda_*)$ and $(x_c,y_c,z_c,\lambda_c)=(\pm 1,0,0,\lambda_*)$. And ${}^{(n)}\Gamma_c$ should satisfy ${}^{(n)}\Gamma_c=\lambda_*^n$. There are other fixed points, which will be discussed below. Based on the above analysis and Table **I**, one can find that points $P_{1-10}$ are just special cases of the fixed points of the infinite-dimensional autonomous system. Considering the definition of $\Gamma$ (see Eq. (\[Gamma\])), the simplest potential is an exponential potential when $\Gamma_c=1$. The properties of these fixed points have been discussed by many authors [@Copeland-exponential; @Hao-PRD1; @Hao-PRD2; @Samart-Phantom; @Li-O(N); @Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom]. If $x_c=0$ and $y_c=0$, this corresponds to a fluid-dominated universe, which we do not consider here. If $x_c=\pm 1$, $\Gamma_c=0$ and ${}^{(n)}\Gamma_c=0$, we do not need to consider the $\Gamma$ and the $^{(n)}\Gamma$ terms. Then the stability properties of these points are the same as those of points $P_{4,5}$ in Table **I**, and they are unstable points. The last case is $(x_c,y_c,z_c,\lambda_c)=(0,y_*,1-1/y_*^2,0)$, where $\Gamma,{}^{(n)}\Gamma$ can be any value. To analyze the dynamical properties of this autonomous system, we need to consider the ${}^{(n)}\Gamma_c$ terms. We will get an infinite series.
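Before turning to the truncation of this series, it may help to see the hierarchy for concrete potentials. The following is a minimal symbolic sketch (Python/SymPy); the exponential and quadratic potentials are used purely as illustrations of how the coefficients $\Gamma$ and ${}^{(n)}\Gamma$ behave.

```python
import sympy as sp

phi, V0, k, m = sp.symbols('phi V0 k m', positive=True)

def hierarchy(V, nmax=3):
    """Return lambda, Gamma and (1)Gamma ... (nmax)Gamma for a potential V(phi)."""
    Vp = sp.diff(V, phi)
    out = {'lambda': sp.simplify(Vp/V),                              # lambda = V'/V
           'Gamma': sp.simplify(V*sp.diff(V, phi, 2)/Vp**2)}         # Gamma = V V''/V'^2
    for n in range(1, nmax + 1):                                     # (n)Gamma = V V_{n+2}/V'^2
        out[f'({n})Gamma'] = sp.simplify(V*sp.diff(V, phi, n + 2)/Vp**2)
    return out

print(hierarchy(V0*sp.exp(-k*phi)))   # Gamma = 1 and (n)Gamma = (-k)^n = lambda^n
print(hierarchy(m**2*phi**2/2))       # Gamma = 1/2 and all (n)Gamma with n >= 1 vanish
```

For the exponential potential the coefficients satisfy ${}^{(n)}\Gamma=\lambda^n$, while for the quadratic potential all ${}^{(n)}\Gamma$ with $n\geq1$ vanish identically, consistent with the five dimensional truncated system quoted below.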
In order to solve this infinite series, we must truncate it by setting a sufficiently high-order ${}^{(M)}\Gamma$ to be a constant, for a positive integer $M$, so that $d\left({}^{(M)}\Gamma\right)/dN=0$. Thus we can get an $(M+4)$-dimensional autonomous system. One example is the quadratic potential $V=\frac12m^2\phi^2$, with some positive constant $m$, which gives a five dimensional autonomous system; another example is the polynomial (concave) potential $V=M^{4-n}\phi^n$ [@Lindle-potential], which gives an $(n+3)$-dimensional autonomous system. Following the method we used in the two preceding sections, we can get the dynamical behavior of such finite-dimensional systems. In the remainder of this section, we discuss whether this autonomous system has a scaling solution. If $x_c=0$, then $\Gamma_c\neq 0,{}^{(n)}\Gamma_c\neq 0$, and the stability of the fixed points may depend on the truncation. As an example, if we choose ${}^{(2)}\Gamma=0$, then we can get a six dimensional autonomous system. The eigenvalues for the fixed point $(x_c,y_c,z_c,\lambda_c,\Gamma_c,{}^{(1)}\Gamma_c)=(0,0,0,\lambda_b,\Gamma_*,{}^{(1)}\Gamma_*)$, where $\lambda_b=0$ or $\lambda_b=\lambda_*$, are $$\begin{aligned} \mathbf{M}^T=(0,0,0,\frac32\gamma,-3\gamma,-3+\frac32\gamma).\nonumber\end{aligned}$$ Obviously, this is an unstable point, and it has no scaling solution. The eigenvalues for the fixed point $(x_c,y_c,z_c,\lambda_c,\Gamma_c,{}^{(1)}\Gamma_c)=(0,1,0,0,\Gamma_*,{}^{(1)}\Gamma_*)$ are $$\begin{aligned} \mathbf{M}^T=(0,0,0,0,-3\gamma,-3-3\gamma).\nonumber\end{aligned}$$ According to the center manifold theorem (see Chapter 8 of [@Khalil-non], or [@DynamicalReduction]), there are two nonzero eigenvalues, and we need to reduce the dynamical system to two dimensions to get the stability properties of the autonomous system. This point may have a scaling solution, but a more complex mathematical method is needed; we discuss this problem in another paper [@Xiao-scaling]. We now discuss the last case. If $\Gamma_c=1$, we can consider an exponential potential. Then the autonomous system is reduced to three dimensions. It is easy to check that the values $(x_{ec},y_{ec},z_{ec})$ of the fixed points are just the values $(x_c,y_c,z_c)$ of points $P_{6-10}$ in Table **I**. We focus on the two special fixed points: $$\begin{aligned} && F_1: (x_{ec},y_{ec},z_{ec})=(-\lambda/\sqrt{6}, \sqrt{1-\lambda^2/6}, 0),\nonumber\\ &&F_2: (x_{ec},y_{ec},z_{ec})=(-\sqrt{3/2}\gamma/\lambda, \sqrt{3\gamma(2-\gamma)/(2\lambda^2)}, 0).\nonumber\end{aligned}$$ Using Lyapunov’s linearization method, we can find that $F_2$ is unstable and $F_1$ is stable if $\lambda<3\gamma$. It is easy to check that $\rho_\gamma=3H^2[1/(1-z_{ec})-x_{ec}^2-y_{ec}^2]=0$ when $(x_{ec},y_{ec},z_{ec})=(-\lambda/\sqrt{6}, \sqrt{1-\lambda^2/6}, 0)$. From the above analysis, we find that only the scalar-field-dominated scaling solution exists when we consider the autonomous system described by a self-interacting scalar field coupled with a barotropic fluid in the LQC scenario. Conclusions {#sec5} =========== The aim of this paper is two-fold. We discuss the dynamical behavior of the scalar field in the LQC scenario following [@Fang-CQG; @Zhou-plb]. To further analyze the dynamical properties of a scalar field with a more general potential, we introduce an infinite-dimensional autonomous system.
To discuss the dynamical properties of the scalar field in the LQC scenario, we take $\Gamma$ as a function of $\lambda$, and extend the autonomous system from three dimensions to four dimensions. We find that this extended autonomous system has more fixed points than the three dimensional one. We also find that, for some fixed points, the function $f(\lambda)$ affects either their values (e.g., for points $P_{4-10}$) or their stability properties (e.g., for points $P_{3,9}$). In other words, the dynamical properties of these points depend on the specific form of the potential. But some other fixed points, e.g., points $P_{1,2}$, are independent of the potential. The properties of these fixed points are shared by all scalar fields. We also find that there are two late time attractors, but the Universe is scalar-field dominated since $\rho_\gamma=0$ at these late time attractors. The method developed in [@Fang-CQG; @Zhou-plb] can describe the dynamical behavior of the scalar field with a potential of a more general form than, for example, an exponential potential [@Copeland-exponential; @Hao-PRD1; @Hao-PRD2; @Samart-Phantom; @Li-O(N); @Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom]. But it is not all-encompassing. If one wants to discuss the dynamical properties of a scalar field with an arbitrary potential, one needs to consider the higher-order derivatives of the potential $V(\phi)$. Hence the dynamical system is extended from four dimensions to infinitely many dimensions. This infinite-dimensional dynamical system is still autonomous, but it is impossible to obtain all of its dynamical behavior unless one considers $\Gamma_c=1$, which just gives an exponential potential. If one wants to study as much as possible of the dynamical properties of this infinite-dimensional autonomous system, one has to consider a truncation that sets ${}^{(M)}\Gamma=\rm{Const.}$ for some sufficiently large positive integer $M$. Then the infinite-dimensional system can be reduced to $(M+4)$ dimensions. We find that there is just the scalar-field-dominated scaling solution for this autonomous system. We only present the basic properties of this infinite-dimensional autonomous system in this paper, and will continue the discussion in [@Xiao-scaling]. We obtain only the scalar-field-dominated scaling solutions, whether we consider $\Gamma$ as a function of $\lambda$ or consider the higher-order derivatives of the potential. This conclusion is very different from that for the autonomous system described by a scalar field with an exponential potential [@Samart-Phantom]. This is an interesting problem that awaits further analysis. K. Xiao thanks Professor X. Li for his help with the center manifold theorem. This work was supported by the National Natural Science Foundation of China, under Grant No. 10875012 and the Fundamental Research Funds for the Central Universities. The stability properties of the point $P_3$ =========================================== In Sec. \[sec3\], we pointed out that it is impossible to determine the stability properties of a fixed point from the linearization if at least one of the eigenvalues of $\mathcal{M}$ is on the $i\omega$ axis with the rest being in the left-half complex plane. The fixed point $P_3$ is exactly such a point. In this appendix, we use the center manifold theorem (see Chapter 8 of [@Khalil-non], or [@DynamicalReduction]) to obtain the condition for the stability of $P_3$. The coordinates of $P_3$ are $(0,1,0,0)$ and the eigenvalues are $(-3,-3\gamma,0,0)$.
First, we translate $P_3$ to the origin: defining $\bar{y}=y-1$, the point becomes $P'_3$ with coordinates $(x_c=0,\bar{y}_c=0,z_c=0, \lambda_c=0)$. In this case, Eqs. (\[x’\])-(\[z’\]) and (\[l’\]) become $$\begin{aligned} \frac{dx}{dN}&=&-3\,x-\frac12\,\sqrt {6}\lambda\, \left( \bar{y}+1 \right) ^{2}+x \left[ 3\,{x}^ {2}+\frac32\gamma\, \left((1+z)\right.\right.\nonumber\\ &&\left.\left.-{x}^{2}-\left( \bar{y}+1 \right) ^{2} \right) \right]\left( 1-2z\right)\label{X'},\\ \frac{d \bar{y}}{dN}&=&\frac12\,\sqrt {6}\lambda\,x \left( \bar{y}+1 \right) + \left( \bar{y}+1 \right) \left[3\,{x}^{2}+\frac32\gamma\, \left((1+z)\right.\right.\nonumber\\ &&\left.\left.-{x}^{2}-\left( \bar{y}+1 \right) ^{2} \right) \right] \left( 1-2z \right)\label{Y'}, \\ \frac{dz}{dN}&=&-3\gamma z-3z\left(1-z\right)\left[2x^2-\gamma x^2-\gamma (\bar{y}+1)^2\right],\label{Z'}\\ \frac{d\lambda}{dN}&=& \sqrt {6}{\lambda}^{2}\left(f(0)+f_1(0)\lambda\right) x \label{L'},\end{aligned}$$ where we have used the fact that the variables $(x,\bar{y},z,\lambda)$ are small around the point $(x_c,\bar{y}_c,z_c,\lambda_c)=\left(0,0, 0,0\right)$, so that the following Taylor series $$\begin{aligned} &&\frac{1}{1-z}=1+{z}+\cdots,\nonumber\\ &&f(\lambda)=f(0)+f_1(0)\lambda+\cdots,\nonumber\end{aligned}$$ can be used, where $f_1(0)=\left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda=0}$. We can get the Jacobi matrix $\mathcal{M'}$ of the dynamical system Eqs. (\[X’\])-(\[L’\]) as $$\begin{aligned} \mathcal{M'}= \begin{pmatrix} -3& 0& 0&-\frac{\sqrt{6}} 2\\ 0&-3\gamma& \frac32\gamma & 0\\ 0&0& 0&0\\ 0&0&0&0 \end{pmatrix}.\end{aligned}$$ It is easy to obtain the eigenvalues and eigenvectors of $\mathcal{M}'$. Let $\mathcal{A}$ denote the column vector of eigenvalues, and let $\mathcal{S}$ denote the matrix whose columns are the corresponding eigenvectors; then we have $$\begin{aligned} \mathcal{A}=\begin{pmatrix} -3\\ -3\gamma\\ 0\\ 0 \end{pmatrix},\quad \mathcal{S}=\begin{pmatrix} 1& 0& -\frac{\sqrt{6}}{6}& 0\\ 0& 1& 0 & \frac{1}{2}\\ 0 & 0&0& 1\\ 0 & 0& 1 & 0 \end{pmatrix}.\end{aligned}$$ With the help of $\mathcal{S}$, we can transform $\mathcal{M}'$ into a block diagonal matrix $$\begin{aligned} \mathcal{S}^{-1} \mathcal{M}' \mathcal{S}=\begin{pmatrix} -3 & 0&0&0\\ 0& -3\gamma& 0& 0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix}=\begin{pmatrix} \mathcal{A}_1 &0\\ 0& \mathcal{A}_2 \end{pmatrix},\end{aligned}$$ where all eigenvalues of $\mathcal{A}_1$ have negative real parts, and all eigenvalues of $\mathcal{A}_2$ have zero real parts. Now we change the variables to be $$\begin{aligned} \begin{pmatrix} X\\Y\\Z\\ \bar{\lambda} \end{pmatrix}= \mathcal{S}^{-1}\begin{pmatrix} x\\ \bar{y}\\ {z}\\ \lambda \end{pmatrix}= \begin{pmatrix} x+\frac{\sqrt{6}}{6}\lambda\\ \bar{y}-\frac12z\\ \lambda \\z \end{pmatrix}.\end{aligned}$$ Then, we can rewrite the autonomous system in terms of the new variables: $$\begin{aligned} \begin{pmatrix}\label{A9} \frac{dX}{dN}\\ \frac{dY}{dN}\\ \frac{dZ}{dN}\\ \frac{d\bar{\lambda}}{dN} \end{pmatrix}= \begin{pmatrix} -3 & 0&0&0\\ 0&-3\gamma&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix} \begin{pmatrix} X\\Y\\Z\\ \bar{\lambda} \end{pmatrix}+ \begin{pmatrix} G_1\\G_2\\G_3\\G_4 \end{pmatrix},\nonumber\\\end{aligned}$$ where $G_i=G_i(X,Y,Z,\bar{\lambda}),(i=1,2,3,4)$ are functions of $X,Y,Z$, and $\bar{\lambda}$. It is easy to get $G_i$ by substituting the transformations $x=X-\frac{\sqrt{6}}{6}Z, \bar{y}=Y+\frac12\bar{\lambda}, z=\bar{\lambda}, \lambda=Z$ into the R.H.S. of Eqs. (\[X’\])-(\[L’\]).
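The block diagonalization above is easy to verify numerically. The following minimal sketch (Python/NumPy; the value $\gamma=1$ is chosen purely for illustration) reconstructs $\mathcal{M}'$ and $\mathcal{S}$ and checks that $\mathcal{S}^{-1}\mathcal{M}'\mathcal{S}=\mathrm{diag}(-3,-3\gamma,0,0)$.

```python
import numpy as np

gamma = 1.0                      # illustrative value of the adiabatic index
s6 = np.sqrt(6.0)

# Jacobi matrix M' of Eqs. (X')-(L') at the origin
Mp = np.array([[-3.0,      0.0,        0.0, -s6/2],
               [ 0.0, -3*gamma, 1.5*gamma,   0.0],
               [ 0.0,      0.0,        0.0,   0.0],
               [ 0.0,      0.0,        0.0,   0.0]])

# Matrix S whose columns are the eigenvectors listed in the text
S = np.array([[1.0, 0.0, -s6/6, 0.0],
              [0.0, 1.0,   0.0, 0.5],
              [0.0, 0.0,   0.0, 1.0],
              [0.0, 0.0,   1.0, 0.0]])

print(np.round(np.linalg.inv(S) @ Mp @ S, 12))   # gives diag(-3, -3*gamma, 0, 0)
```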
According to the center manifold theorem [@DynamicalReduction], there exists a $C^\infty$-center manifold $$\begin{aligned} W^c_{loc}&=&\left\{(X,Y,Z,\bar{\lambda}): X\equiv h_1(Z,\bar{\lambda}), Y\equiv h_2(Z,\bar{\lambda}),\right.\nonumber\\ &&\left.h_i(0,0)=0, J_{h_i}(0,0)=0\right\}\nonumber\end{aligned}$$ such that the dynamics of (\[A9\]) can be restricted to the center manifold. $J_{h_i}$ is the Jacobi matrix of $h_i$, and $h_1(Z,\bar{\lambda}), h_2(Z,\bar{\lambda})$ are $$\begin{aligned} h_1(Z,\bar{\lambda})=A_1 Z^2+A_2 Z \bar{\lambda}+A_3 \bar{\lambda}^2+\cdots,\label{h1}\\ h_2(Z,\bar{\lambda})=B_1 Z^2+B_2 Z \bar{\lambda}+B_3 \bar{\lambda}^2+\cdots.\label{h2}\end{aligned}$$ We keep only terms quadratic in $Z$ and $\bar{\lambda}$ in this appendix. Considering the center manifold theorem, we have $$\begin{aligned} \frac{dX}{dN}&=&\frac{\partial h_1(Z,\bar{\lambda})}{\partial Z}\frac{dZ}{dN}+\frac{\partial h_1(Z,\bar{\lambda})}{\partial\bar{\lambda}}\frac{d\bar{\lambda}}{dN},\label{X''}\\ \frac{dY}{dN}&=&\frac{\partial h_2(Z,\bar{\lambda})}{\partial Z}\frac{dZ}{dN}+\frac{\partial h_2(Z,\bar{\lambda})}{\partial\bar{\lambda}}\frac{d\bar{\lambda}}{dN}.\label{Y''}\end{aligned}$$ Inserting Eqs. (\[h1\]) and (\[h2\]) into $dX/dN,dY/dN$ in Eq. (\[A9\]) and Eqs. (\[X”\])-(\[Y”\]), and comparing the coefficients, we get $$\begin{aligned} &&A_1=0,\quad A_2=\frac{\sqrt{6}}{6},\quad A_3=0, \quad B_1=\frac{1}{12},\nonumber\\ && B_2=0, \quad B_3=\frac{1}{8}.\end{aligned}$$ Then, the dynamics near the origin is governed by the following equations, $$\begin{aligned} \frac{dZ}{dN}&=&-{Z}^{3}f_1(0),\label{ZF'}\\ \frac{d\bar{\lambda}}{dN}&=&-{Z}^{2}\bar{\lambda}+\gamma\,{Z}^{2}\bar{\lambda}-\frac32\,\gamma\,{\bar{\lambda}}^{3}.\label{LF'}\end{aligned}$$ We consider two different values of $\gamma$ to get the stability properties of this system, because a different $\gamma$ gives a different dynamical system. The first one to be considered is dust, which has $\gamma=1$. Then, we have $$\begin{aligned} \frac{dZ}{dN}&=&-{Z}^{3}f_1(0)\label{ZF''},\\ \frac{d\bar{\lambda}}{dN}&=&-\frac32\bar{\lambda}^3.\label{LF''}\end{aligned}$$ According to Lyapunov’s theorem, we can define a Lyapunov function to analyze the stability properties of a dynamical system. Different dynamical systems have different Lyapunov functions, and one dynamical system can also have different Lyapunov functions. But any Lyapunov function $U$ should satisfy $U(\mathbf{x})\geq 0$ in a neighborhood of the origin (Chapter 2 of [@Khalil-non]). Then we can define $$\begin{aligned} U_1=\frac12\left(Z^2+\bar{\lambda}^2\right).\end{aligned}$$ Using Eqs. (\[ZF”\]) and (\[LF”\]), we have $$\begin{aligned} \frac{dU_1}{dN}=-f_1(0)Z^4-\frac32\bar{\lambda}^4.\end{aligned}$$ According to Lyapunov’s stability theorems, the system is stable if $f_1(0)\geq 0$. Now we turn to considering radiation, which has $\gamma=\frac43$. Eqs. (\[ZF’\]) and (\[LF’\]) become $$\begin{aligned} \frac{dZ}{dN}&=&-{Z}^{3}f_1(0),\label{zc}\\ \frac{d\bar{\lambda}}{dN}&=&-2\bar{\lambda}^3+\frac13Z^2\bar{\lambda}.\label{lc}\end{aligned}$$ We need to consider three possible cases: (a) $f_1(0)\neq 0$, (b) $f_1(0)=0, Z(N=0)=0$, and (c) $f_1(0)=0, Z(N=0)\neq 0$, since these three cases lead to three different dynamical systems. If $f_1(0)\neq 0$, the Lyapunov function can be defined as $$\begin{aligned} U_2=\frac{1}{1+Z^2/(6A)+\bar{\lambda}^2},\end{aligned}$$ where $A=f_1(0)$ if $f_1(0)>0$, and $A=-f_1(0)$ if $f_1(0)<0$.
Then one can get $$\begin{aligned} \frac{dU_2}{dN}=\frac{12A^2\left[(Z^2-\bar{\lambda}^2)^2+5\bar{\lambda}^4 \right]}{\left[6A+6A\bar{\lambda}^2+Z^2\right]^2}>0.\end{aligned}$$ Then this point is an unstable one. If $f_1(0)=0$ and $Z(N=0)=0$, Eq. (\[lc\]) becomes $d\bar{\lambda}/{dN}=-2\bar{\lambda}^3$. Defining the Lyapunov function $$\begin{aligned} U_3=1+\bar{\lambda}^2,\end{aligned}$$ we then have $$\begin{aligned} \frac{dU_3}{dN}=-4\bar{\lambda}^4\leq 0.\end{aligned}$$ If $f_1(0)=0$ and $Z(N=0)\neq 0$, one can get $Z=C$ from Eq. (\[zc\]), with a non-zero constant $C$. Equation (\[lc\]) becomes $$\begin{aligned} \frac{d\bar{\lambda}}{dN}=-2\bar{\lambda}^3+\frac13C^2\bar{\lambda}.\end{aligned}$$ The Lyapunov function can be defined as $$\begin{aligned} U_4=\left(1-\frac{6}{C^2}\bar{\lambda}^2\right)^2.\end{aligned}$$ Then we have $$\begin{aligned} \frac{dU_4}{dN}=-\frac{8}{C^4}\bar{\lambda}^2\left(C^2-6\bar{\lambda}^2\right)^2\leq 0.\end{aligned}$$ Obviously, according to Lyapunov’s stability theorem, this point is stable as long as $f_1(0)=0$, regardless of whether $Z(N=0)= 0$ or $Z(N=0)\neq 0$. [99]{} E. J. Copeland, M. Sami, and S. Tsujikawa, Int. J. Mod. Phys. D **15**, 1753(2006). A. A. Coley, *Dynamical systems and cosmology*, Kluwer Academic Publishers, Dordrecht/Boston/London, 2003. E.J. Copeland, S. Mizuno, and M. Shaeri, Phys. Rev. D **79**, 103515(2009). G. Leon, P. Silveira and C. R. Fadragas, arXiv: 1009.0689. E.J. Copeland, A.R. Liddle, and D. Wands, Phys. Rev. D **57**, 4686(1998). J.G. Hao, and X.Z. Li, Phys. Rev. D **67**, 107303(2003). J.G. Hao, and X.Z. Li, Phys. Rev. D **70**, 043529(2004). D. Samart and B. Gumjudpai, Phys. Rev. D **76**, 043514(2007). X.Z. Li, and J.G. Hao, Phys. Rev. D **69**, 107303(2004). P. G. Ferreira, and M. Joyce, Phys. Rev. D **58**, 023503(1998). R.J. van den Hoogen, A.A. Coley, and D. Wands, Class. Quantum Grav. **16**, 1843(1999). A.P. Billyard, and A.A. Coley, Phys. Rev. D **61**, 083503(2000). X.Y. Fu, H.W. Yu, and P.X. Wu, Phys. Rev. D **78**, 063001(2008). A. de la Macorra, and G. Piccinelli, Phys. Rev. D **61**, 123503(2000). S.C.C. Ng, N.J. Nunes, and F. Rosati, Phys. Rev. D **64**, 083510(2001). S.Y. Zhou, Phys. Lett. B **660**, 7(2008). W. Fang, Y. Li, K. Zhang, and H.Q. Lu, Class. Quant. Grav. **26**, 155005(2009). M. Bojowald, Living Rev. Rel. **11**, 4(2008). A. Ashtekar, J. Phys. Conf. Ser. **189**, 012003(2009); Gen. Rel. Grav. **41**, 707(2009). C. Rovelli, *Quantum Gravity*, Cambridge university press, Cambridge, 2004. T. Thiemann, *Modern Canonical Quantum General Relativity*, Cambridge university press, Cambridge, 2007. A. Ashtekar, T. Pawlowski, and P. Singh, Phys. Rev. Lett. **96**, 141301(2006); Phys. Rev. D **73**, 124038(2006). P. Singh and A. Toporensky, Phys. Rev. D **69**, 104008(2004). G.V. Vereshchagin, J. Cosmol. Astropart. Phys. **0407**, 013(2004). G. Date and G.M. Hossain, Phys. Rev. Lett. **94**, 011302(2005). M. Bojowald, Phys. Rev. Lett. **89**, 261301(2002). M. Bojowald and K. Vandersloot, Phys. Rev. D **67**, 124023(2003). M. Bojowald, J.E. Lidsey, D.J. Mulryne, P. Singh, and R. Tavakol, Phys. Rev. D **70**, 043530(2004). C. Germani, W. Nelson, M. Sakellariadou, Phys. Rev. D **76**, 043529(2007). E.J. Copeland, D.J. Mulryne, N.J. Nunes, and M. Shaeri, Phys. Rev. D **77**, 023510(2008). A. Ashtekar, D. Sloan, Phys. Lett. B **694**, 108(2010). A. Corichi, A. Karami, *On the measure problem in slow roll inflation and loop quantum cosmology*, arXiv:1011.4249. A. Ashtekar, T. Pawlowski, and P. Singh, Phys. Rev.
D **74**, 084003(2006). H. Wei and S.N. Zhang, Phys. Rev. D **76**, 063005(2007). X.Y. Fu, H.W. Yu and P.X. Wu, Phys. Rev. D **78**, 063001(2008). S. Li and Y.G. Ma, Eur. Phys. J. C **68**, 227(2010). R. Lamon, A.J. Woehr, Phys. Rev. D **81**, 024026(2010). K. Xiao and J.Y. Zhu, Int. J. Mod. Phys. A **25**, 4993(2010). J.E. Lidsey, J. Cosmol. Astropart. Phys. **0412**, 007(2004). I.P.C. Heard and D. Wands, Class. Quantum Grav. **19**, 5435(2002). J.J.E. Slotine and W.P. Li, *Applied Nonlinear Control*, Englewood Cliffs, NJ: Prentice-Hall, 1991. A.P. Billyard, A.A. Coley, and R.J. van den Hoogen, Phys. Rev. D **58**, 123501(1998). A. Nunes, J.P. Mimoso, Phys. Lett. B **488**, 423(2000). H.K. Khalil, *Nonlinear Systems*, Englewood Cliffs, NJ: Prentice Hall, 1996. W.O. Bray, *Dynamical System Reduction: The center manifold*, URL: http://www.math.umaine.edu/$\sim$bray/\ Archive$\_$res. R. Kallosh, J. Kratochvil, A. Linde, E.V. Linder and M. Shmakova, J. Cosmol. Astropart. Phys. **0310**, 015(2003). A. D. Linde, Phys. Lett. B **259**, 38(1991); Phys. Rev. D **49**, 748(1994). K. Xiao and J.Y. Zhu, *Scaling solution in loop quantum cosmology*. [^1]: Author to whom correspondence should be addressed
1
--- abstract: 'Spatial heterogeneity in the elastic properties of soft random solids is examined via vulcanization theory. The spatial heterogeneity in the *structure* of soft random solids is a result of the fluctuations locked-in at their synthesis, which also brings heterogeneity in their *elastic properties*. Vulcanization theory studies semi-microscopic models of random-solid-forming systems, and applies replica field theory to deal with their quenched disorder and thermal fluctuations. The elastic deformations of soft random solids are argued to be described by the Goldstone sector of fluctuations contained in vulcanization theory, associated with a subtle form of spontaneous symmetry breaking that is associated with the liquid-to-random-solid transition. The resulting free energy of this Goldstone sector can be reinterpreted as arising from a phenomenological description of an elastic medium with quenched disorder. Through this comparison, we arrive at the statistics of the quenched disorder of the elasticity of soft random solids, in terms of residual stress and Lamé-coefficient fields. In particular, there are large residual stresses in the equilibrium reference state, and the disorder correlators involving the residual stress are found to be long-ranged and governed by a universal parameter that also gives the mean shear modulus.' author: - Xiaoming Mao - 'Paul M. Goldbart' - Xiangjun Xing - Annette Zippelius title: Soft random solids and their heterogeneous elasticity --- Introduction {#SEC:Intr} ============ Random solids, such as chemical gels, rubber, glasses and amorphous silica, are characterized by their [*structural*]{} heterogeneity, which results from the randomness locked-in at the time they are synthesized. The mean positions of the constituent particles exhibit no long-range order, and every particle inhabits a unique spatial environment. The [*elasticity*]{} of random solids also inherits heterogeneity from this locked-in randomness. For example, the Lamé coefficients and the residual stress vary from point to point throughout the elastic medium. The central goal of this paper is to develop a statistical characterization of random elastic media, via the mean values of the Lamé coefficients and the residual stress as well as the two-point spatial correlations amongst the fluctuations of these quantities, which we shall name as the disorder correlator. These mean values and correlations are to be thought of as averages taken over realizations of the sample fabrication for a given set of fabrication parameters. We expect these characteristic quantities to coincide with the volume-averages of their single-sample counterparts. Our focus will be on [*soft*]{} random solids. These are network media that include chemical gels [@Addad1996], which are formed by the permanent random chemical bonding of small molecules, as well as rubber [@Treloar1975], which is formed via the introduction of permanent random chemical cross-links between nearby monomers in melts or solutions of flexible long-chain polymers. Soft random solids are characterized by their *entropic elasticity*. These are media in which the shear modulus originates in the strong thermal fluctuations of the configurations of the constituent particles and is much smaller than the bulk modulus, which is energetic in nature and originates in the excluded-volume interactions between the particles. 
The concept of entropic elasticity forms the basis of the classical theory of rubber elasticity, developed long ago by Kuhn, Flory, Wall, Treloar and others (see Ref. [@Treloar1975]). As we discuss soft random solids we shall take chemical gels as our prototype media. When the density of the introduced links exceeds the percolation threshold, an infinite cluster of linked molecules forms, spanning the system, and the network acquires a thermodynamic rigidity with respect to shear deformations [^1]. This event is often called the gelation transition or the vulcanization transition [@Goldbart1996]. The [*geometrical*]{} or [*architectural*]{} aspects of the gelation/vulcanization transition can be well captured by the theory of percolation [@Stauffer1994]. However, to study the [*elasticity*]{} that emerges at the gelation/vulcanization transition, and especially its heterogeneity, one needs a theory that incorporates not only the geometrical aspects, but also the equilibrium thermal fluctuations of the particle positions and the strong, qualitative changes that they undergo at the gelation/vulcanization transition. In the setting of rubber elasticity, although the classical theory is successful in explaining the entropic nature of the shear rigidity of rubber, it is essentially based on a single-chain picture and, as such, is incapable of describing the consequences of the large-scale random structure of rubbery media, e.g., the random spatial variations in their local elastic parameters and the resulting nonaffinity of their local strain-response to macroscopic applied stresses. The general problem of heterogeneous elasticity and nonaffine deformations has been studied in the setting of flexible polymer networks [@Rubinstein1997; @Glatting1995; @Holzl1997; @Svaneborg2005], and also semi-flexible polymer networks [@Head2003; @Heussinger2007], glasses [@Wittmer2002], and granular materials [@Utter2008]. Particularly noteworthy is the recent investigation by DiDonna and Lubensky of the general relationship between the spatial correlations of the nonaffine deformations and those of the underlying quenched random elastic parameters [@DiDonna2005]. The mission of the present work is to develop a statistical characterization of the heterogeneous elasticity of soft random solids by starting from a semi-microscopic model and applying a body of techniques that we shall call vulcanization theory to it. In particular, we aim to obtain the mean values and disorder correlators of elastic parameters, such as the Lamé coefficients and the residual stress, in terms of the parameters of the semi-microscopic model, such as the density of cross-links, the excluded-volume interactions, etc. One of our key findings is that the disorder correlator of the residual stress is long ranged, as are all cross-disorder correlators between the residual stress and the Lamé coefficients. We also find that these disorder correlators are controlled by a universal scale parameter—independent of the microscopic details—that, moreover, controls the scale of the mean shear modulus. In addition, we characterize the nonaffinity of the deformations in terms of these parameters. The strategy we adopt for accomplishing our goals involves a handshaking between two different analytical schemes. [^2] The first scheme follows a well-trodden path.
We begin with a semi-microscopic model, the Randomly Linked Particle Model (RLPM) [@Mao2007; @Ulrich2006; @Broderix2002; @Mao2005], involving particle coordinates and quenched random interactions between them that represent the randomness that is locked-in at the instant of cross-linking. In order to account for this quenched randomness, as well as the thermal fluctuations in particle positions which are the origin of the entropic elasticity, we adopt the framework of vulcanization theory [@Goldbart1996]. This framework includes the use of the replica method to eliminate the quenched randomness, followed by a Hubbard-Stratonovich transformation to construct a field-theoretic representation in terms of an order parameter field—in this case, the random solidification order parameter. We analyze this representation at the stationary-point level of approximation, and then focus on the gapless excitations around the stationary point—the Goldstone fluctuations—observing that these excitations can be parameterized in terms of a set of replicated shear deformation fields [@Mao2007; @Ulrich2006; @Goldbart2004]. The second prong of our approach is less conventional. It begins with our introduction of a phenomenological model free energy of an elastic continuum, characterized by a nonlocal kernel of quenched random attractions between mass-points. To obtain statistical information about this quenched random kernel, we use the replica method to eliminate the randomness, and obtain a pure model of replicas of the deformation field with couplings controlled by the disorder-moments of the kernel; these disorder-moments are then treated as unknown quantities to be determined. This pure model has precisely the same structure as the Goldstone theory mentioned in the previous paragraph. Thus, by comparing the two models we can learn the moments of the quenched random kernel from the (already-computed) coupling functions of the Goldstone model. Then we analyze a particular realization of the phenomenological model having a fixed value of the quenched randomness. We observe that the natural reference state (i.e., the state of vanishing displacement field) of this model is not in fact an equilibrium state for any given realization of disorder, due to the random attractive interactions embodied by the kernel. We analyze how these attractions compete with the near-incompressibility of the medium to determine the displacement to the new, equilibrium configuration, which we shall term the relaxed state. (This process can be understood in the setting of a hypothetical, instant process of preparing a sample of rubber: the cross-links introduce attractions and random stresses, and the system then undergoes relaxation, including global contraction and local deformation.) We then explore shear deformations around this relaxed state, pass to the local limit, and arrive at the standard form of continuum elasticity theory, expressed in terms of the strain around the relaxed state, but with coefficients that are explicit functions of the quenched random kernel. Thus, using the information about the statistics of the quenched random kernel obtained via the comparison with the RLPM, we are able to infer statistical information about the elastic properties of the random elastic medium in the relaxed state, which is of experimental relevance. Why is it legitimate to identify the Goldstone theory arising from the microscopic model with the replica theory of the phenomenological model?
The reason is that, within the schemes that we have chosen to analyze them, both models describe shear deformations not of the equilibrium state of the system but, rather, of the system immediately after cross-linking has been done but before any cross-linking-induced relaxation has been allowed to occur. The equivalence between these two schemes is not based solely on the equality of free energies of the two models; it is also based on the identity of the physical meaning of the (replicated) deformation fields in the two theories, and thus the way that these fields couple to externally applied forces. Both schemes are descriptions of the elasticity of soft random solids displaying heterogeneous elastic properties, one from a semi-microscopic viewpoint, the other invoking phenomenological parameters. Thus, the equivalence of the two schemes provides the values of the phenomenological parameters as functions of the semi-microscopic parameters. Before concluding this introduction, let us emphasize that this work is a first attempt to [*derive*]{} the elastic heterogeneities of vulcanized matter. To date, the prevalent strategy in studies of disordered systems is to assume a particular structure for the quenched disorder on phenomenological grounds (such as Gaussian, short ranged, etc.) and explore the consequences. Assumptions about disorder structure are usually based on symmetry arguments and also the preference for simplicity, but otherwise lack theoretical substantiation. It is one of the main advantages of vulcanization theory that it can [*predict*]{} some generic properties of the disordered structure in vulcanized matter, as is shown in the present work, which can be used to support and sharpen the assumptions underlying more phenomenological theories. The classical theory of rubber elasticity, which was shown to be derivable from the saddle-point approximation of vulcanization theory, is known to fail to describe rubber elasticity in the intermediate and large deformation regimes [@Treloar1975]. While a recent study [@Xing2007] shows that long wave-length thermal elastic fluctuations account qualitatively for this failure, we expect, on general grounds, that elastic heterogeneities should play an equally important role. It would therefore be interesting to explore how the elastic heterogeneities discovered in the present work modify the macroscopic elasticity of rubbery materials. Such a program is left for a future work. The outline of this paper is as follows. In Section \[SEC:RLPM\], we analyze the semi-microscopic RLPM using the tools of vulcanization theory. Specifically, we use the replica method [@Mezard1987] to study a model network consisting of randomly linked particles, which exhibits a continuous phase transition from the liquid state to the random solid state, paying particular attention to the Goldstone fluctuations of the random solid state, which, as we have mentioned above, are related to the elastic shear deformations of the random solid state. In Section \[SEC:Phen\], we propose a nonlocal phenomenological model of a random elastic medium, and subsequently derive it from the RLPM, by identifying that this phenomenological model is the low energy theory (i.e., it captures the Goldstone fluctuations) of the RLPM in the random solid state. Through this correspondence we obtain information about the statistics of the quenched random nonlocal kernel.
In Section \[SEC:relaxation\], we study the relaxation of the phenomenological model to a stable state for any fixed randomness (i.e., any realization of disorder), due to random stresses and attractive interactions. We re-expand the free energy about this relaxed state to obtain the true elastic theory. This relaxed reference state is, however, still randomly stressed [@Alexander1998]; nevertheless, the stress in this state—the so-called *residual stress*—does satisfy the condition of mechanical equilibrium, viz., $\partial_i \Stre_{ij}(x)=0$. In its local limit, the proposed phenomenological model reproduces a version of Lagrangian continuum elasticity theory that features random Lamé coefficients and residual stresses. In this section, we also use the phenomenological model to explore the related issue of elastic heterogeneity, viz., the nonaffine way in which the medium responds to external stress. In Section \[SEC:Heterogeneity\], we arrive at predictions for the statistics of the quenched random elastic parameters that feature in the phenomenological model in the relaxed state, along with the statistics of nonaffine deformations. Thus we provide a first principles account of the heterogeneous elasticity of soft random solids. We conclude, in Section \[SEC:ConcDisc\], with a brief summary and discussion of our results. Semi-microscopic approach: The randomly linked particle model {#SEC:RLPM} ============================================================= Randomly linked particle model {#SEC:RLPMBasics} ------------------------------ The Randomly Linked Particle Model (RLPM) consists of $\PartNumb$ particles in a volume $\Volu$ in $\Dime$ dimensions. In order to study elasticity, including bulk deformations, $\Volu$ is allowed to fluctuate under a given pressure $\Pres$. The positions of the particles in this fluctuating volume are denoted by $\{\PartPosi_j\}_{j=1}^{\PartNumb}$. The particles in the RLPM interact via two types of interactions: a repulsive interaction $\VExc$ between all pairs of particles (either direct or mediated via a solvent); and an attractive interaction $\VLink$ between the pairs of particles that are chosen at random to be linked. We take the latter to be a soft link (as opposed to the usual hard constraint of vulcanization theory). Thus, the Hamiltonian can be written as $$\begin{aligned} \label{EQ:HRLPM} H_{\RealDiso} = \sum_{1\le i<j\le N}^{\PartNumb} \VExc (\PartPosi_i-\PartPosi_j)+ \sum_{e=1}^{\LinkNumb} \VLink \big(\vert \PartPosi_{i_e}-\PartPosi_{j_e}\vert\big) .\end{aligned}$$ The label $e$, which runs from $1$ to the total number of links $M$, indexes the links in a given realization of the quenched disorder, and specifies them via the quenched random variables $M$ and $\{i_{e},j_{e}\}_{e=1}^{M}$. We take $\VExc$ to be a strong, short-ranged repulsion, which serves to penalize density fluctuations and thus render the system nearly incompressible, as is appropriate for regular or polymeric liquids. As we shall describe in Section \[SEC:FTMF\], we address the interactions in Eq. (\[EQ:HRLPM\]) by eliminating the particle coordinates in favor of collective fields, which have the form of joint densities of the replicas of the particles. This continuum approach enables us to focus on the physics of random particle localization, particularly at lengthscales that are relevant for such localization, which (except when the density of links is extremely high) are long compared with the ranges of $\VExc$ and $\VLink$. 
With a focus on these longer lengthscales in mind, we see that it is adequate to replace the repulsive interaction $\VExc(\PartPosi)$ by the model Dirac delta-function excluded-volume interaction $\ExclVolu\,\delta(\PartPosi)$, characterized by the strength $\ExclVolu$ [@Gennes1979; @Deam1976; @Doi1986]. This procedure amounts to making a gradient expansion in real space (or, equivalently, a wave vector expansion in Fourier space) of $\VExc$ and retaining only the zeroth-order term; it gives for the strength $\ExclVolu$ the value ${\int}d\PartPosi\,\VExc(\PartPosi)$. Terms of higher order in the gradient expansion would have a non-negligible impact on the suppression of density fluctuations only at lengthscales comparable to or shorter than the range of $\VExc$, and fluctuation modes at such lengthscales are not the ones driven via random linking to the instability associated with random localization (and thus are not modes in need of stabilization via $\VExc$). We remark that the approximate interaction $\ExclVolu\,\delta(\PartPosi)$ is not, in practice, singular, and is instead regularized via a high wave vector cut-off. At our coarse-grained level of description, the particles of the RLPM can be identified with polymers or small molecules, and the soft links can be identified with molecular chains that bind the molecules to one another. The potential for the soft links can be modeled as Gaussian chains $$\begin{aligned} \label{EQ:VGC} \VLink^{(\textrm{GC})}(\vert r \vert) = \frac{\BoltCons T \vert r \vert^2}{2\LinkScal^2} ,\end{aligned}$$ i.e., a harmonic attraction, or a zero rest-length spring, of lengthscale $\LinkScal$ between the two particles. In making this coarse-graining one is assuming that microscopic details (e.g., the precise locations of the cross-links on the polymers, the internal conformational degrees of freedom of the polymers, and the effects of entanglement) do not play significant roles for the long-wavelength physics. In part, these assumptions are justified by studying more detailed models, in which the conformational degrees of freedom of the polymers are retained [@Goldbart1996]. However, we should point out that the precise form of $\VLink$ is not important for long-wavelength physics and, hence, for the elastic properties that we are aiming to investigate \[cf. the discussion at the end of Sec. \[SEC:FTMF\]\]. From the discussion above, the RLPM is a convenient minimal model of soft random solids, inasmuch as it adequately captures the necessary long-wavelength physics. It can be regarded as either a model of a chemical gel, or as a caricature of vulcanized rubber or other soft random solid. The RLPM can be viewed as a simplified version of vulcanization theory [@Castillo1994; @Goldbart2004], with microscopic details, such as polymer chain conformations, being ignored. Nevertheless, it is able to reproduce the same universality class as vulcanization theory at the liquid-to-random-solid transition. For the study of elasticity, we shall consider length-scales on which the system is a well-defined solid (i.e., scales longer than the localization length, as we shall see later in this paper). Both of these scales are much larger than the characteristic linear dimension of an individual polymer. 
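To make the model concrete, the energy of Eq. (\[EQ:HRLPM\]) with Gaussian-chain links, Eq. (\[EQ:VGC\]), can be written down in a few lines of code. The sketch below (Python/NumPy) is purely illustrative: the narrow Gaussian of strength $\nu$ and width $b$ stands in for the regularized delta-function repulsion, and the random positions and link list are arbitrary stand-ins rather than a configuration and link realization drawn from the actual ensembles discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 100, 3                # number of particles and spatial dimension (illustrative)
L = 10.0                     # linear box size, in units with k_B T = 1
a = 1.0                      # lengthscale of the Gaussian-chain links, Eq. (EQ:VGC)
nu, b = 5.0, 0.3             # strength and width of the regularized repulsion (assumed values)

x = rng.uniform(0.0, L, size=(N, d))                       # particle positions (stand-in)
M = rng.poisson(N)                                         # number of links (stand-in)
links = np.array([rng.choice(N, size=2, replace=False) for _ in range(M)])

def energy(x, links):
    """Energy of Eq. (EQ:HRLPM), with the delta-repulsion smeared into a narrow Gaussian."""
    diff = x[:, None, :] - x[None, :, :]
    r2 = (diff**2).sum(-1)
    iu = np.triu_indices(len(x), k=1)
    e_exc = nu*(2*np.pi*b**2)**(-d/2)*np.exp(-r2[iu]/(2*b**2)).sum()
    dr = x[links[:, 0]] - x[links[:, 1]]                   # zero-rest-length springs
    e_link = ((dr**2).sum(-1)/(2*a**2)).sum()
    return e_exc + e_link

print(energy(x, links))
```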
The RLPM is a model very much in the spirit of lattice percolation, except that it naturally allows for particle motion as well as particle connectivity, and is therefore suitable for the study of continuum elasticity and other issues associated with the (thermal or deformational) motion of the constituent entities. Equation (\[EQ:HRLPM\]) is a Hamiltonian for a given realization of quenched disorder $\RealDiso \equiv \{ i_e, j_e\}_{e=1}^{\LinkNumb}$, which describes the particular random instance of the linking of the particles. These links are the quenched disorder of the system, which are specified at synthesis and do not change with thermal fluctuations. This is because there is a wide separation between the timescale for the linked-particle system to reach thermal equilibrium and the much longer timescale required for the links themselves to break. Therefore, we treat the links as permanent. Later, we shall apply the replica technique [@Mezard1987] to average over these permanent random links. Replica statistical mechanics of the RLPM {#SEC:RLPMReplica} ----------------------------------------- For a given volume and a given realization of disorder $\RealDiso$ we can write the partition function $Z_{\RealDiso}$ for the RLPM as $$\begin{aligned} \label{EQ:partition} Z_{\RealDiso} (\Volu) &\equiv& \int _{\Volu} \prod_{i=1}^{\PartNumb} d \PartPosi_i \exp\Big(-\frac{H_{\RealDiso}}{{\BoltCons}T}\Big) \nonumber\\ &=& \PartitionLiquid(V) \Bigg\ltha \prod _{e=1}^{\LinkNumb} \LinkPote\big( \vert \PartPosi_{i_e} - \PartPosi_{j_e} \vert \big) \Bigg\rtha_{1}^{\HamiExcl},\end{aligned}$$ where $\HamiExcl \equiv \frac{\ExclVolu }{2}\sum_{i,j=1}^{\PartNumb} \delta (\PartPosi_i-\PartPosi_j)$ is the excluded-volume interaction part of the Hamiltonian, and $\PartitionLiquid(V) \equiv \int _{\Volu} \prod_{i=1}^{\PartNumb} d \PartPosi_i \exp\big(-\HamiExcl/{\BoltCons}T\big)$ is the partition function of the liquid in the absence of any links. The issue of the Gibbs factorial factor, which is normally introduced to compensate for the overcounting of identical configuration, is a genuinely subtle one in the context of random solids (for a discussion, see Ref. [@Goldbart1996]). However, our focus will be on observables such as order parameter rather than on free energies, and thus the omission of the Gibbs factor is of no consequence. The factor $$\begin{aligned} \label{EQ:DeltReal} \LinkPote\big( \vert \PartPosi_{i_e} - \PartPosi_{j_e} \vert \big) \equiv e^{ -\frac{\vert \PartPosi_{i_e} - \PartPosi_{j_e} \vert^2} {2 \LinkScal^2}}\end{aligned}$$ is associated with the link-induced attractive interaction term in the Hamiltonian. The average $\ltha \cdots \rtha_{1}^{\HamiExcl}$, taken with respect to a Boltzmann weight involving the excluded-volume interaction Hamiltonian $\HamiExcl$, is defined as $$\begin{aligned} \ltha \cdots \rtha_{1}^{\HamiExcl} \equiv \frac{1}{\PartitionLiquid(V)}\int _{\Volu} \prod_{i=1}^{\PartNumb} d \PartPosi_i \, e^{-\frac{\HamiExcl}{{\BoltCons}T}}\ldots\, .\end{aligned}$$ The corresponding Helmholtz free energy is then given by $$\begin{aligned} \HelmFreeEner _{\RealDiso} (\Volu) \equiv -\BoltCons T \ln Z_{\RealDiso} (\Volu).\end{aligned}$$ To perform the average of the free energy over the quenched disorder, we shall need to choose a probability distribution that assigns a sensible statistical weight $\DEDist (\{ i_e, j_e\}_{e=1}^{\LinkNumb})$ to each possible realization of the total number $\LinkNumb$ and location $\{i_e,j_e\}_{e=1}^{\LinkNumb}$ of the links. 
Following an elegant strategy due to Deam and Edwards [@Deam1976], we assume a version of the normalized link distribution as follows: $$\begin{aligned} \DEDist(\RealDiso) = \frac{ \Big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\Big)^{\LinkNumb} Z_{\RealDiso}(\Volu_0)} {\LinkNumb ! Z_1},\end{aligned}$$ where $\LinkDens$ is a parameter that controls the mean total number of links. We assume that the preparation state (i.e., the state in which the links are going to be introduced) is in a given volume $\Volu_0$. The $Z_{\RealDiso}(\Volu_0)$ factor is actually the partition function, as given in Eq. (\[EQ:partition\]), and can be regarded as probing the equilibrium correlations of the underlying unlinked liquid. The factor $\LinkPote_0=\big(2\pi \LinkScal^2 \big)^{d/2}$ is actually the $p=0$ value of the Fourier transform of the $\LinkPote$ function defined in Eq. (\[EQ:DeltReal\]), and we shall see later that these factors ensure that the (mean-field) critical point occurs at $\LinkDens_C =1$. The normalization factor $Z_1$ is defined to be $\sum_{\RealDiso} \big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\big)^{\LinkNumb} Z_{\RealDiso}(\Volu_0)/ \LinkNumb !$. The calculation for $Z_1$ is straightforward, and is given in Appendix \[APP:DisoAver\]. The Deam-Edwards distribution can be understood as arising from a realistic vulcanization process in which the links are introduced simultaneously and instantaneously into the liquid state in equilibrium. Specifically, it incorporates the notion that all pairs of particles that happen (at some particular instant) to be nearby are, with a certain probability controlled by the link density parameter $\LinkDens$, linked. Thus, the correlations of the link distribution reflect the correlations of the unlinked liquid, and it follows that realizations of links only acquire an appreciable statistical weight if they are compatible with some reasonably probable configuration of the unlinked liquid. The factor $\big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\big)^{\LinkNumb} /\LinkNumb !$ in the Deam-Edwards distribution introduces a Poissonian character to the total number $\LinkNumb$ of links. These links are envisioned to be the product of a Poisson chemical linking process. The factor $Z_{\RealDiso}(\Volu_0)$ assures that the probability of having a given random realization of links is proportional to the statistical weight for, in the unlinked liquid state, finding the to-be-linked pairs to be co-located in the liquid state to within the shape function $\exp \big(- \vert \PartPosi_{i_e}-\PartPosi_{j_e}\vert ^2 / 2\LinkScal^2\big)$. As a result of the Deam-Edwards distribution, the mean number of links per particle is given by $\lda \LinkNumb \rda/\PartNumb = \LinkDens/2$. Thus, $\LinkDens = 2 \lda \LinkNumb \rda/N$ is the *mean coordination number*, i.e., the average number of particles to which a certain particle is connected, [[the factor of $2$ results from the fact that each link is shared by two particles]{}]{}. For a detailed discussion of the Deam-Edwards distribution, see Ref. [@Deam1976; @Broderix2002]. 
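As a simple numerical illustration of these statistics, the following Python sketch (ours) draws a link realization in which, purely for illustration, the liquid-correlation weighting supplied by the factor $Z_{\RealDiso}(\Volu_0)$ is dropped, so that the Poisson-distributed links connect uniformly chosen pairs. The relation $\lda \LinkNumb \rda/\PartNumb = \LinkDens/2$ and the reading of $\LinkDens$ as the mean coordination number survive this simplification; the parameter values are illustrative.

```python
import numpy as np

# Toy sampler for the link statistics implied by the Deam-Edwards
# distribution.  For illustration the liquid-correlation factor
# Z_chi(V_0) is dropped, so the linked pairs are drawn uniformly and
# independently; the Poissonian total link number M and the relation
# <M>/N = eta2/2 survive this simplification.
N = 10_000        # number of particles
eta2 = 1.6        # link-density parameter (mean coordination number)
rng = np.random.default_rng(0)

M = rng.poisson(eta2 * N / 2)                # Poisson-distributed number of links
i = rng.integers(0, N, size=M)               # first particle of each link
j = (i + rng.integers(1, N, size=M)) % N     # second particle, distinct from the first

coordination = np.bincount(i, minlength=N) + np.bincount(j, minlength=N)
print("mean coordination number     :", coordination.mean())          # close to eta2
print("fraction of unlinked particles:", np.mean(coordination == 0))  # close to exp(-eta2)
```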
By using this distribution of the quenched disorder, we can perform the disorder average of the Helmholtz free energy via the replica technique, thus obtaining $$\begin{aligned} \label{EQ:HFReplica} \lda \HelmFreeEner \rda &\equiv& \sum_{\RealDiso} \DEDist(\RealDiso) \HelmFreeEner_{\RealDiso} (\Volu) \nonumber\\ &=& -\BoltCons T \sum_{\RealDiso} \DEDist(\RealDiso) \ln Z_{\RealDiso} (\Volu) \nonumber\\ &=& -\BoltCons T \lim _{n \to 0} \sum_{\RealDiso} \DEDist(\RealDiso) \frac{Z_{\RealDiso}(\Volu)^{n}-1}{n}.\end{aligned}$$ We now insert the Deam-Edwards distribution to get $$\begin{aligned} \label{EQ:HFReplicaDE} \lda \HelmFreeEner \rda &=& -\BoltCons T \lim _{n \to 0} \sum_{\RealDiso} \frac{ \Big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\Big)^{\LinkNumb} Z_{\RealDiso}(\Volu_0)}{\LinkNumb ! Z_1} \nonumber\\ &&\quad\times \frac{Z_{\RealDiso}(\Volu)^{n}-1}{n}.\end{aligned}$$ This disorder-averaged free energy differs from the form traditionally obtained via the replica technique, in that there is an extra replica $Z_{\RealDiso}(\Volu_0)$, which originates in the Deam-Edwards distribution. We shall call this extra replica the $0^{\textrm{th}}$ replica, and note that it represents the *preparation state* of the system. [^3] The summation over the realizations of the quenched disorder $\RealDiso$ can be performed, following the calculation in Appendix \[APP:DisoAver\]; thus we arrive at the form $$\begin{aligned} \label{EQ:HFEDisoAver} \lda \HelmFreeEner \rda &=& -\BoltCons T \lim _{n \to 0} \frac{1}{n}\Big( \frac{Z_{1+n}}{Z_1}-1 \Big),\end{aligned}$$ which can also be expressed as $$\begin{aligned} \label{EQ:HFEDisoAverDeri} \lda \HelmFreeEner \rda = -\BoltCons T \lim _{n \to 0} \frac{\partial}{\partial n} \ln Z_{1+n} \, ,\end{aligned}$$ where $$\begin{aligned} \label{EQ:ReplPart} Z_{1+n} &\equiv& \sum_{\RealDiso} \frac{\Big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\Big)^{\LinkNumb}} { \LinkNumb !} Z_{\RealDiso}(\Volu_0) Z_{\RealDiso}(\Volu)^n \nonumber\\ &=& \PartitionLiquid(\Volu_0)\PartitionLiquid(\Volu)^n \lthal \exp \Big(\frac{\LinkDens \Volu_0} {2\PartNumb \LinkPote_0} \sum_{i\ne j}^{\PartNumb} \ReplProd \LinkPote \big( \vert \PartPosi_{i}\REPa - \PartPosi_{j}\REPa \vert \big)\Big)\rthal_{1+n}^{\HamiExcl} .\end{aligned}$$ Notice that, here, the preparation state (i.e., $0^{\textrm{th}}$ replica) has a fixed volume $\Volu_0$ because, for convenience, we have assumed that the linking process was undertaken instantaneously in a liquid state of fixed volume and thus the pressure is fluctuating, whereas the measurement states (replicas $1$ through $n$) are put in a fixed-pressure $\Pres$ environment, the volume $\Volu$ of which is allowed to fluctuate. In the latter parts of the paper we shall set the pressure $\Pres$ to be the average pressure measured in the preparation state at volume $\Volu_0$. In particular, for a given volume of the liquid state in which the links are made, the average pressure is given by $$\begin{aligned} \Pres = - \frac{\partial \FLiquid (\Volu_0)}{\partial \Volu_0} \Big\vert_{T} \, ,\end{aligned}$$ where we have introduced the Helmholtz free energy of the unlinked liquid $\FLiquid (\Volu_0) \equiv - \BoltCons T \ln \PartitionLiquid (\Volu_0)$. We suppose that the excluded-volume interactions are so strong that the density fluctuations are suppressed, and the density of the unlinked liquid is just $\PartNumb/\Volu_0$. 
[^4] Thus, the mean-field value of the Helmholtz free energy in the unlinked liquid state is $$\begin{aligned} \label{EQ:FLiquid} \FLiquid (\Volu_0) &=& -\PartNumb \BoltCons T \ln \Volu_0 +\frac{\ExclVolu \PartNumb^2}{2\Volu_0}.\end{aligned}$$ Therefore, the mean pressure in the unlinked liquid state is given by $$\begin{aligned} \label{EQ:AverPres} \Pres = \frac{\PartNumb \BoltCons T}{\Volu_0} +\frac{\ExclVolu \PartNumb^2}{2\Volu_0^2},\end{aligned}$$ [[from which we can identify—by the standard way in which the second virial coefficient $B_2$ appears in the free energy or the equation of state of a fluid—that $B_2$ for the unlinked liquid is $\ExclVolu/(2\BoltCons T)$. Thus, without having actually performed a cluster expansion, we see that the Dirac delta-function interaction with coefficient $\ExclVolu$ indeed leads to a virial expansion with a suitable excluded volume, viz., $\ExclVolu/(\BoltCons T)$. ]{}]{} As mentioned above, we shall apply this pressure in the measurement states (described by the $1^{\textrm{st}}$ through $n^{\textrm{th}}$ replicas), and let their volumes $\Volu$ fluctuate, in order to obtain an elastic free energy that can describe volume variations. In particular, by choosing the pressure $p$ to be exactly the mean pressure of the liquid state, we shall obtain an elastic free energy that takes the *state right after linking*, which has the same volume $\Volu_0$ as the liquid state, as the elastic reference state. This issue of the state right after linking and the elastic reference state will be discussed in detail in Section \[SEC:PhenFE\]. In light of this construction of the pressure ensemble, we have the capability of learning about the bulk modulus of the system and of characterizing volume changes caused by linking, a process that has the effect of eliminating translational degrees of freedom. To establish an appropriate statistical mechanics for the fixed-pressure ensemble, we shall make the following Legendre transformation of the Helmholtz free energy, which leads to the Gibbs free energy $\GibbFreeEner (\Pres,T)$: \[EQ:LegeTran\] $$\begin{aligned} \Pres &=& -\frac{\partial \HelmFreeEner(\Volu,T)} {\partial \Volu}\big\vert_{T} \label{EQ:LegeTran1} \, , \\ \GibbFreeEner (\Pres,T) &=& \HelmFreeEner(\Volu,T) + \Pres \Volu \, . \label{EQ:LegeTran2}\end{aligned}$$ In Eq. (\[EQ:LegeTran2\]) the volume $\Volu$ takes the value (in terms of $\Pres$) that satisfies Eq. (\[EQ:LegeTran1\]), i.e., the volume that minimizes the Gibbs free energy at a given pressure $p$. In the following sections, we shall first calculate the disorder average of the Helmholtz free energy, and then make this Legendre transformation to obtain the disorder-averaged Gibbs free energy. This will allow us to explore the elasticity of the RLPM in detail. Field-theoretic description of the RLPM {#SEC:FTMF} --------------------------------------- We shall use field-theoretic methods to analyze the disorder-averaged free energy $\lda\HelmFreeEner\rda$ and, more specifically, the replicated partition function $Z_{1+n}$. To do this, we introduce a joint probability distribution for the particle density in the replicated space, i.e., the replicated density function $$\begin{aligned} \label{EQ:DensReal} \DensFunc(\REPX)\equiv \frac{1}{\PartNumb} \sum_{i=1}^{\PartNumb} \ReplProd \delta^{(d)}(x\REPa-\PartPosi_i\REPa) ,\end{aligned}$$ where $\REPX \equiv (x^{0},x^{1},\ldots,x^{n})$ is a short-hand for the $(1+n)$-replicated position $\Dime$-vector.
For convenience, we introduce a complete orthonormal basis set in replica space $\{\BASE\REPa\}_{\alpha=0}^{n}$, in terms of which a vector $\REPX$ can be expressed as $$\begin{aligned} \REPX=\ReplSum x\REPa \BASE\REPa .\end{aligned}$$ Note that the components $x\REPa$ are themselves $\Dime$-vectors. With this notation, the density function of a single replica $\alpha$ is given by $\DensFunc_{p\REPa\BASE\REPa}$, which is the Fourier transform of the $\DensFunc(\REPX)$ field, $\DensFunc_{\REPP}$, with momentum nonzero only in replica $\alpha$, corresponding to integrating over the normalized densities in other replicas in real space. The replicated partition function (\[EQ:ReplPart\]) can be written as a functional of the replicated density function $\DensFunc$ in momentum space as $$\begin{aligned} \label{EQ:ZQ} Z_{1+n} = \int_{\Volu_0} \prod_{i=1}^{\PartNumb} d\PartPosi_{i}^{0} \int_{\Volu} \ReplProd \prod_{j=1}^{\PartNumb} d\PartPosi_{j}\REPa \, e^{-\frac{H_{\DensFunc}\lbrack \DensFunc_{\REPP}\rbrack}{\BoltCons T}} ,\end{aligned}$$ with $$\begin{aligned} \label{EQ:HQ} H_{\DensFunc}\lbrack \DensFunc_{\REPP}\rbrack &\equiv& -\frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP}\DensFunc_{\REPP}\DensFunc_{-\REPP}\LinkPoteRepl_{\REPP}\nonumber\\ &&+\frac{\ExclVolu \PartNumb^2}{2\Volu_0} \sum_{p}\DensFunc_{p\BASE^{0}} \DensFunc_{-p\BASE^{0}} \nonumber\\ &&+\frac{\ExclVolu \PartNumb^2}{2\Volu} \sum_{p} \ReplSumOne \DensFunc_{p\BASE\REPa} \DensFunc_{-p\BASE\REPa} ,\end{aligned}$$ where the factor $$\begin{aligned} \LinkPoteRepl_{\REPP} = \big(\LinkPote_0\big)^{1+n} e^{-\LinkScal^2 \vert \REPP \vert^2/2}\end{aligned}$$ is the replicated version of the Fourier transform of the function $\LinkPote(x)$, defined in Eq. (\[EQ:DeltReal\]). [[The summation $\sum_{\REPP}$ denotes a summation over all momentum $\Dime$-vectors $p\REPa$, one for each replica, with $\REPP$ taking the values $\REPP=\ReplSum p\REPa \BASE\REPa$. The cartesian components of the $p\REPa$ take the values $2\pi m/L$, where $L$ is the linear size of the system and $m$ is any integer. Similarly, summations $\sum_{p}$ over $\Dime$-vectors $p$ include components having the values $2\pi m/L$.]{}]{} The first term on the right hand side of Eq. (\[EQ:HQ\]) arises from the attractive, link-originating, interaction part \[see Eq. (\[EQ:ReplPart\])\]; the next two terms represent the excluded-volume interaction in $\HamiExcl$, for the $0^{\textrm{th}}$ replica and for replicas $1$ through $n$, respectively. The excluded-volume interaction is taken to be very strong, and thus the density fluctuations in any single replica are heavily suppressed. This means that $\DensFunc_{p\REPa\BASE\REPa}$ are very small for all $p\REPa\ne 0$, and $\DensFunc_{p\REPa\BASE\REPa}\vert_{p\REPa=0}=1$, corresponding to a nearly homogeneous particle density. To manage this issue, we separate the replicated space into a Lower Replica Sector (LRS), in which the density fluctuations are suppressed, and a Higher Replica Sector (HRS), which captures the correlations between different replicas, and develops an instability at the liquid-to-random-solid transition. The definitions of the LRS and HRS are (in momentum space) as follows: if two or more components of a replicated momentum vector $\REPP \equiv (p^{0},p^{1},\ldots,p^{n})$ are non zero then $\REPP$ is an element of the HRS; on the other hand, if $\REPP$ has zero or only one component $p\REPa$ being nonzero, and all other $p\REPb=0$, then $\REPP$ is an element of the LRS. 
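The sector bookkeeping just described is elementary but worth stating explicitly. The following minimal sketch (the function name is ours) classifies a replicated momentum vector $\REPP=(p^{0},p^{1},\ldots,p^{n})$ according to how many of its replica components, each a $\Dime$-vector, are nonzero.

```python
import numpy as np

# A replicated momentum p_hat = (p^0, p^1, ..., p^n), stored as an
# array of shape (1+n, d), belongs to the HRS if two or more of its
# replica components are nonzero, and to the LRS otherwise.
def momentum_sector(p_hat, tol=1e-12):
    p_hat = np.asarray(p_hat, dtype=float)
    nonzero_replicas = np.sum(np.linalg.norm(p_hat, axis=1) > tol)
    return "HRS" if nonzero_replicas >= 2 else "LRS"

# Example with n = 2 measurement replicas in d = 3 dimensions:
print(momentum_sector(np.zeros((3, 3))))                       # LRS (in fact p_hat = 0)
print(momentum_sector([[0, 0, 0], [2.1, 0, 0], [0, 0, 0]]))    # LRS (one replica only)
print(momentum_sector([[1.0, 0, 0], [0, 0.5, 0], [0, 0, 0]]))  # HRS
```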
In addition, the LRS can be separated into a 1RS part, in which vectors $\REPP$ has exactly one component $p\REPa$ being nonzero, and a 0RS, which consists of only $\REPP=0$. With this separation we can rewrite the effective Hamiltonian as $$\begin{aligned} \label{EQ:HQHL} H_{\DensFunc}\lbrack \DensFunc_{\REPP}\rbrack &=& -\frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\DensFunc_{\REPP} \DensFunc_{-\REPP}\LinkPoteRepl_{\REPP}\nonumber\\ &&+\frac{ \PartNumb^2}{2\Volu_0} \sum_{p}\ExclVoluRN_0(p)\DensFunc_{p\BASE^{0}} \DensFunc_{-p\BASE^{0}} \nonumber\\ &&+\frac{ \PartNumb^2}{2\Volu} \sum_{p}\ExclVoluRN(p) \ReplSumOne \DensFunc_{p\BASE\REPa} \DensFunc_{-p\BASE\REPa} ,\end{aligned}$$ with the renormalized coefficients $$\begin{aligned} \label{EQ:ExclVoluRN} \frac{\ExclVoluRN_0(p) \PartNumb^2}{2\Volu_0} &\equiv&\frac{\ExclVolu \PartNumb^2}{2\Volu_0} -\frac{\PartNumb \LinkDens\BoltCons T\LinkPoteRepl_{\REPP}}{2\Volu^n\LinkPote_0} , \nonumber\\ \frac{\ExclVoluRN(p) \PartNumb^2}{2\Volu} &\equiv&\frac{\ExclVolu \PartNumb^2}{2\Volu} -\frac{\PartNumb \LinkDens\BoltCons T\LinkPoteRepl_{\REPP}}{2\Volu^n\LinkPote_0} .\end{aligned}$$ We suppose that $\frac{\ExclVolu N}{\BoltCons T \Volu}\gg \LinkDens$ (i.e., the excluded-volume repulsion is very strong, relative to the attractive effects of the links), so these coefficients $\frac{\ExclVoluRN(p) \PartNumb^2}{2\Volu}$ are always positive and large, relative to the energy-scale of the HRS that we are interested in. The interactions in Eq. (\[EQ:HQHL\]) can be decoupled using a Hubbard-Stratonovich (HS) transformation (for details see Appendix \[APP:HSTransformation\]). Thus, we arrive at a field-theoretic formulation of the replicated partition function, in terms of the order parameter field $\VOP$: $$\begin{aligned} \label{EQ:HS} Z_{1+n} = \int \mathcal{D}\VOP_{\hat{p}}\ReplProd\mathcal{D}\VOP_{p\BASE\REPa} e^{-\frac{H_{\VOP}\lbrack \VOP_{\hat{p}},\VOP_{p\BASE\REPa} \rbrack}{\BoltCons T}} ,\end{aligned}$$ where the effective Hamiltonian is given by $$\begin{aligned} \label{EQ:HEffcVOP} \! H_{\VOP}\lbrack \VOP_{\hat{p}},\VOP_{p\BASE\REPa} \rbrack &=& \frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\!\VOP_{\REPP} \VOP_{-\REPP}\LinkPoteRepl_{\REPP} +\frac{ \PartNumb^2}{2\Volu_0} \sum_{p}\ExclVoluRN_0(p)\VOP_{p\BASE^{0}} \VOP_{-p\BASE^{0}} \nonumber\\ && +\frac{\PartNumb^2}{2\Volu} \sum_{p} \ExclVoluRN(p) \ReplSumOne \VOP_{p\BASE\REPa} \VOP_{-p\BASE\REPa} \!-\! N\BoltCons T \ln \MyZzero \, .\end{aligned}$$ and $$\begin{aligned} \MyZzero &=& \int_{\Volu_0} \! d\PartPosi^{0} \! \int_{\Volu} \!\ReplProd \! d\PartPosi\REPa \exp\!\Big\lbrack \frac{ \LinkDens}{ \Volu^n \LinkPote_0} \!\sum_{\REPP\in HRS}\!\!\VOP_{\REPP}\,\LinkPoteRepl_{\REPP}e^{i\REPP \cdot \hat{c}} \nonumber\\ && \quad\quad + \frac{i \PartNumb}{\Volu_0\BoltCons T} \sum_{p}\ExclVoluRN_0(p)\VOP_{p\BASE^{0}}\,e^{ip^{0}c^{0}} +\frac{i \PartNumb}{\Volu\BoltCons T} \!\sum_{p} \ExclVoluRN(p)\! \ReplSumOne \VOP_{p\BASE\REPa} \,e^{ip\REPa c\REPa} \! \Big\rbrack .\end{aligned}$$ The form of this HS transformation \[see Appendix \[APP:HSTransformation\], especially Eq. 
(\[EQ:HSAver\])\] ensures that the mean value of the order parameter field $\VOP$ is related to the mean value of the replicated density function field $\DensFunc$ as $$\begin{aligned} &&\textrm{HRS:} \quad\quad \langle \DensFunc_{\REPP} \rangle_{H_{\DensFunc}} \label{EQ:HSrelationHRS} = \langle \VOP_{\REPP} \rangle_{H_{\VOP}} , \\ &&\textrm{LRS:} \quad\quad i\langle \DensFunc_{p\BASE\REPa} \rangle_{H_{\DensFunc}} = \langle \VOP_{p\BASE\REPa} \rangle_{H_{\VOP}} ,\end{aligned}$$ where the averages on either sides are defined via $$\begin{aligned} \langle \, \cdots \rangle_{H_{\DensFunc}} & \equiv \frac{1}{Z_{1+n}} \int_{\Volu_0} \prod_{i=1}^{\PartNumb} d\PartPosi_{i}^{0} \int_{\Volu} \ReplProdOne \prod_{j=1}^{\PartNumb} d\PartPosi_{j}\REPa \label{EQ:HQAver} \,\, e^{-\frac{H_{\DensFunc}\lbrack \DensFunc_{\REPP}\rbrack}{\BoltCons T}} \cdots , \\ \langle \, \cdots \rangle_{H_{\VOP}} & \equiv \frac{1}{Z_{1+n}} \int \mathcal{D}\VOP_{\hat{p}}\ReplProd\mathcal{D}\VOP_{p}\REPa \label{EQ:HVOPAver} \,\,e^{-\frac{H_{\VOP}\lbrack \VOP_{\hat{p}},\VOP_{p\BASE\REPa} \rbrack}{\BoltCons T}} \cdots . \end{aligned}$$ The leading-order terms in $H_{\VOP}\lbrack \VOP_{\hat{p}},\VOP_{p\BASE\REPa} \rbrack$ can be constructed by expanding the $\ln \MyZzero$ term in Eq. (\[EQ:HEffcVOP\]) in powers of the fields $\VOP_{\REPP}$ and $\VOP_{p\BASE\REPa}$, and thus we can obtain the leading-order terms in the Landau-Wilson effective Hamiltonian. To leading order this expansion gives $$\begin{aligned} \label{EQ:HQExpansion} H_{\VOP}\lbrack \VOP_{\hat{p}},\VOP_{p\BASE\REPa} \rbrack = \frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} && \sum_{\REPP\in HRS}\VOP_{\REPP}\VOP_{-\REPP}\LinkPoteRepl_{\REPP} \Bigg(1-\LinkDens \frac{\LinkPoteRepl_{\REPP}}{V^{n}\LinkPote_{0}}\Bigg) +\frac{\ExclVoluRN_0 \PartNumb^2}{2\Volu_0} \sum_{p}\VOP_{p\BASE^{0}} \VOP_{-p\BASE^{0}} \Bigg(1+\frac{\ExclVoluRN_0 \PartNumb}{\Volu_0\BoltCons T}\Bigg) \nonumber\\ \noalign{\medskip} +\frac{\ExclVoluRN \PartNumb^2}{2\Volu} &&\sum_{p} \ReplSumOne \VOP_{p\BASE\REPa} \VOP_{-p\BASE\REPa} \Bigg(1+\frac{\ExclVoluRN \PartNumb}{\Volu_0\BoltCons T}\Bigg) +O\big((\VOP_{\hat{p}})^3,(\VOP_{p\BASE\REPa})^3\big).\end{aligned}$$ For the LRS fields $\VOP_{p\BASE\REPa}$ we see the coefficients of the corresponding quadratic term are always positive (given that $\ExclVoluRN_0,\,\ExclVoluRN>0$, i.e., the excluded-volume repulsion is very strong), so this sector of the field theory does not undergo an instability. Furthermore, because these coefficients (the masses, in particle-physics language) are very large \[see Eq. (\[EQ:ExclVoluRN\])\], the fluctuations of these LRS fields $\VOP_{p\BASE\REPa}$ are heavily suppressed. For this reason, we ignore these fluctuations and, for all $\alpha =0,1,\ldots,n$, we take $$\begin{aligned} \label{EQ:LRScons} \VOP_{p\BASE\REPa}\vert_{p=0}&=& i , \nonumber\\ \VOP_{p\BASE\REPa}\vert_{p\ne 0}&=& 0 ,\end{aligned}$$ as a hard constraint. Having implemented this constraint, we arrive at the HRS Hamiltonian (the full form, not just the leading-order expansion): $$\begin{aligned} \label{EQ:HVOPHRS} H_{\VOP}\lbrack\VOP_{\REPP}\rbrack &=& \frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\VOP_{\REPP} \VOP_{-\REPP}\LinkPoteRepl_{\REPP}\nonumber\\ \noalign{\medskip} &&\quad - N\BoltCons T \ln \MyZzero ,\end{aligned}$$ where $$\begin{aligned} \label{EQ:Z_zero} \MyZzero&\equiv& \int_{\Volu_0} \! d\PartPosi^{0} \!\! 
\int_{\Volu} \!\ReplProdOne d\PartPosi\REPa \exp \Big\lbrack \frac{ \LinkDens}{ \Volu^n \LinkPote_0} \nonumber\\ \noalign{\medskip} &&\quad\quad\times \sum_{\REPP\in HRS}\VOP_{\REPP}\LinkPoteRepl_{\REPP}e^{i\REPP \cdot \hat{c}} \Big\rbrack .\end{aligned}$$ The Landau theory of the vulcanization transition [@Peng1998] can be recovered by making the expansion of this HRS Hamiltonian that keeps only the leading-order terms in the order-parameter $\VOP$ and the momentum $\REPP$. Up to an additive constant and an appropriate rescaling of the order parameter, this expansion reads $$\begin{aligned} \label{EQ:Landau} H_{\VOP}\lbrack\VOP_{\REPP}\rbrack &=& \frac{1}{2}\sum_{\REPP\in HRS} \big( r+ \vert \REPP \vert^2 \big) \VOP_{\REPP} \VOP_{-\REPP} \nonumber\\ && -\frac{v}{3!} \sum_{\REPP_1,\REPP_2 \in HRS} \VOP_{\REPP_1} \VOP_{\REPP_2} \VOP_{-\REPP_1-\REPP_2} \, ,\end{aligned}$$ where the potential of the links $\LinkPoteRepl_{\REPP}$ has been momentum-expanded. This is precisely the form of the Landau free energy that was constructed via *symmetry arguments* in Ref. [@Peng1998]. In the limit $n\to 0$, the coefficients become $$\begin{aligned} r &\propto& \LinkDens(1-\LinkDens) , \nonumber\\ v &\propto& (\LinkDens)^3 .\end{aligned}$$ It is straightforward to see that the $r$ term leads to an *instability* for the link density parameter $\LinkDens$ larger than the critical value $\LinkDens_C=1$, and the lowest unstable modes are long-wavelength modes (i.e., $\REPP \to 0$). \[One should, however, keep in mind that the component $\REPP=0$ itself, which is the 0RS, is excluded from this HRS-only field theory; see Eq. (\[EQ:LRScons\]).\] This instability corresponds to the *liquid-to-soft-random-solid transition*, because the liquid state corresponds to the $\VOP_{\REPP}=0$ (in the HRS) state, and becomes unstable when the link density parameter $\LinkDens$ exceeds $1$. [[ There are two points that we would like to discuss about this Hamiltonian. First, it can be seen from the expansion in Eq. (\[EQ:Landau\]) that near the critical point the exact form of interaction is irrelevant, because we have only kept terms to $\vert\REPP\vert^2$ in $\LinkPoteRepl_{\REPP}$, and this governs the long-distance physics. We have used the Gaussian-chain potential (\[EQ:VGC\]) in this calculation. It is clear that if we were to change to a different potential, such as a finite-rest-length spring $\VLink(\vert r \vert) = \frac{k}{2} \big(\vert r \vert -l \big)^2$, the long-distance (i.e., small-momentum) physics would be unchanged. Second, as we shall see in Sec. \[SEC:Heterogeneity\], neither do the statistics of the elastic modulus depend on the details of the interaction potential; instead, they only depend on the density of links. This results from the fact that the elasticity originates in the entropy of the *network*. The critical point for the vulcanization transition, and thus the entropic rigidity in the presence of thermal fluctuations, occurs at the same point as *connectivity percolation* does, rather than at the *rigidity percolation* critical point. The fact that we have a shear modulus scaling as $T$ results from the entropy of the network, and not from the factor of $T$ in the Gaussian-chain potential, is clear because if we were to change to another attractive potential, the same results would hold for the long-distance physics. 
]{}]{} Mean-field theory of the RLPM ----------------------------- To understand the physics of the order parameter in vulcanization theory, and thus obtain the form of the stationary value of the order parameter, we need to recall the property of the HS transformation, Eq. (\[EQ:HSrelationHRS\]), which relates the average of the order parameter field $\VOP$ to the average of the replicated density function $\DensFunc$. According to this relation, we have $$\begin{aligned} \langle \VOP_{\REPP} \rangle_{H_{\VOP}} = \Big\langle \frac{1}{N}\sum_{j=1}^{\PartNumb}e^{i\REPP \cdot \hat{c}_j}\Big\rangle_{H_{\DensFunc}} -\delta^{((1+n)d)}_{\REPP,0} .\end{aligned}$$ Here, the $\delta^{((1+n)d)}_{\REPP,0}$ removes the 0RS part of $\VOP_{\REPP}$. Equivalently, in real space we have $$\begin{aligned} \langle \VOP(\REPX)\rangle_{H_{\VOP}} \!\! =\!\Big\langle \frac{1}{N}\!\sum_{j=1}^{\PartNumb}\delta^{((1+n)d)}(\REPX-\hat{c}_j)\!\Big\rangle_{H_{\DensFunc}} \!\!\!-\frac{1}{\Volu_0 \Volu^n} ,\end{aligned}$$ which can be interpreted as $$\begin{aligned} \label{EQ:VOPinte} \langle \VOP(\REPX) \rangle_{H_{\VOP}} &=&\!\frac{1}{N}\!\sum_{j=1}^{\PartNumb} \big\lbrack \langle \delta^{(d)}(x^0-c^{0}_{j})\rangle \langle \delta^{(d)}(x^1-c^{1}_{j})\rangle \cdots \nonumber\\ &&\quad\times \langle \delta^{(d)}(x^n-c^{n}_{j})\rangle \big\rbrack - \frac{1}{\Volu_0 \Volu^n} \, .\end{aligned}$$ This average consists of the following steps: One first constructs *independent thermal averages in each replica* (denoted by $\langle\cdots\rangle$) for a common, given realization of the disorder $\RealDiso$; one then forms the product over all replicas; finally, one *averages over all realizations of disorder* (an average denoted by $\big\lbrack\cdots\big\rbrack$). This interpretation can be understood from the definition of $H_{\DensFunc}$ via $Z_{1+n}$, as in Eq. (\[EQ:ZQ\]). Recall that $Z_{1+n}$, as defined in Eq. (\[EQ:ReplPart\]), contains thermal averages of the $(1+n)$ replicas, represented by the factor $Z_{\RealDiso}(\Volu_0)\, Z_{\RealDiso}(\Volu)^n$, together with an overall disorder average. This validates the interpretation given in Eq. (\[EQ:VOPinte\]). For a strict proof, see Ref. [@Goldbart1996]. The structure of Eq. (\[EQ:VOPinte\]) allows us to relate the value of the order parameter $\VOP$ to measurements on the system. In the liquid state, the single-particle densities $\ltha \delta^{(d)}(x\REPa-c\REPa_{j}) \rtha$ in each replica are simply $1/\Volu$ (or $1/\Volu_0$ for the $0^{\textrm{th}}$ replica), and thus the order parameter $\VOP$ vanishes. In the soft random solid state, it is hypothesized that *a finite fraction $\LocaPart$ of the particles become localized around random positions*. This happens when the density of links exceeds the *percolation threshold*, and the particles that constitute the infinite, percolating cluster become localized. In the language of replica field theory, a localized particle remains near the same spatial position in each replica[^5]. This is because, as we discussed earlier, *each replica corresponds to a copy of the same disordered system but with independent thermal fluctuations*, and a particle that is localized as part of the percolating cluster fluctuates around a fixed mean position that is common to all replicas.
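This picture can be made concrete with a small synthetic example. In the following sketch (ours, with illustrative parameters) we fabricate a toy two-replica configuration in $d=1$: a fraction $\LocaPart$ of the particles share a common random mean position in both replicas and fluctuate about it with localization length $\LocaLeng$, while the remaining particles occupy independent uniform positions in each replica. The estimator $\frac{1}{N}\sum_{j}e^{ip(c^{0}_{j}-c^{1}_{j})}$ probes an HRS momentum of the form $\hat{p}=(p,-p)$, which respects the surviving symmetry of common translations; its magnitude is of order $1/\sqrt{N}$ for a liquid and close to $\LocaPart\,e^{-p^{2}\LocaLeng^{2}}$ when a fraction of the particles is localized.

```python
import numpy as np

# Toy two-replica configuration illustrating how the replica order
# parameter detects localization (illustrative parameters only).
rng = np.random.default_rng(2)
N, L, xi, Q = 200_000, 100.0, 1.5, 0.6
p = 2 * np.pi * 3 / L                         # a small nonzero wave vector

n_loc = int(Q * N)
z = rng.uniform(0, L, size=n_loc)             # common mean positions of localized particles
c0_loc = z + xi * rng.standard_normal(n_loc)  # replica 0 positions
c1_loc = z + xi * rng.standard_normal(n_loc)  # replica 1 positions (independent thermal noise)
c0_del = rng.uniform(0, L, size=N - n_loc)    # delocalized particles: independent
c1_del = rng.uniform(0, L, size=N - n_loc)    #   uniform positions in each replica

c0 = np.concatenate([c0_loc, c0_del])
c1 = np.concatenate([c1_loc, c1_del])
omega = np.mean(np.exp(1j * p * (c0 - c1)))   # estimator at p_hat = (p, -p)
print(abs(omega), Q * np.exp(-p**2 * xi**2))  # the two numbers agree closely
```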
According to these considerations, it is reasonable to hypothesize the following form for the stationary value of the order parameter in real space: $$\begin{aligned} \label{EQ:VOPAnsatzR} \!\!\!\!\VOPSP(\REPX)\!\!\!\!&=&\!\!\!\LocaPart \int \frac{dz}{\Volu_0}\int d\ISLL \DistISLL(\ISLL) \Big(\frac{\ISLL}{2\pi}\Big)^{\frac{(1+n)\Dime}{2}} \nonumber\\ && \!\!\times e^{-\frac{\ISLL}{2} \{ \vert x^0-z\vert^2 +\ReplSumOne\vert x\REPa-\Contraction z \vert^2 \}} \!\! -\!\frac{\LocaPart}{\Volu_0 \Volu^n} .\,\end{aligned}$$ To arrive at this form we have assumed that the fraction of particles that become localized is $\LocaPart$, and the localized single-particle density function is proportional to $e^{-\frac{\ISLL}{2}\vert x\REPa-\Contraction z \vert^2}$ in replica $\alpha$($=1,\ldots,n$), where $\Contraction z$ is the random position near to which the particle is localized, and $\Contraction$ corresponds to the uniform contraction of the entire volume in the measurement state with respect to the preparation state, due to linking \[see the discussion following Eq. (\[EQ:ContRLPM\])\]. Correspondingly, this particle was near the position $z$ in the preparation state, so, for replica $0$ the corresponding single-particle density function is proportional to $e^{-\frac{\ISLL}{2}\vert x^0-z \vert^2}$. For convenience of notation we define $\hat{z}\equiv (z,\Contraction z, \Contraction z,\ldots)$ as the mean position vector in the replicated space. The contraction $\Contraction$ is related to the change of volume as $$\begin{aligned} \label{EQ:VContraction} \frac{\Volu}{\Volu_0}=\Contraction^{d} .\end{aligned}$$ The localization of the particle is characterized by the localization length $\LocaLeng$, although for notational convenience we exchange this variable for the inverse square localization length $\ISLL\equiv 1/\LocaLeng^2$. Because the network is heterogeneous, the particles can have widely different localization lengths. This heterogeneity is characterized by the distribution $\DistISLL(\ISLL)$. We can also write this stationarity-point order parameter in momentum space: $$\begin{aligned} \label{EQ:VOPAnsatzM} \!\!\!\!(\!\VOPSP\!)_{\REPP}\!\!\!&=&\!\!\!\LocaPart \int \frac{dz}{\Volu_0}\int d\ISLL \DistISLL(\ISLL) \nonumber\\ \noalign{\medskip} && \!\!\times e^{-\frac{\vert\REPP\vert^2}{2\ISLL} \!-\! ip^0 \cdot z-i\ReplSumOne p\REPa \cdot (\Contraction z) } \!\!-\!\LocaPart\delta^{((1+n)d)}_{\REPP,0}\! .\,\,\,\end{aligned}$$ The parameters that characterize this order parameter, $\LocaPart$ and $\DistISLL(\ISLL)$, have been obtained by solving the stationarity condition for the Hamiltonian: $$\begin{aligned} \frac{\delta H_{\VOP}}{\delta \VOP_{\REPP}}=0 .\end{aligned}$$ The form of the order parameter $(\VOPSP)_{\REPP}$ given in Eq. (\[EQ:VOPAnsatzM\]) exactly solves the above stationarity condition, thus we arrive at self-consistency equations of $\LocaPart$ and $\DistISLL(\ISLL)$. In particular, the equation for $\LocaPart$ is $$\begin{aligned} \label{EQ:SPQ} 1-\LocaPart=e^{-\LinkDens \LocaPart}.\end{aligned}$$ For all values of $\LinkDens$, Eq. (\[EQ:SPQ\]) has a solution $\LocaPart=0$, corresponding to the liquid state. However, for $\LinkDens> 1$, an additional root appears, emerging continuously from $\LocaPart=0$ at $\LinkDens= 1$, and describing the equilibrium amorphous solid state. In Fig. \[FIG:ThetaGamma\] we show the dependence of the localized fraction $\LocaPart$ on the link density, which we characterize by $\LinkDens$. 
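As a check on this statement, the self-consistency equation (\[EQ:SPQ\]) is easily solved numerically. The following minimal sketch (ours) locates the nonzero root by bisection and reproduces the behavior shown in Fig. \[FIG:ThetaGamma\]: $\LocaPart=0$ for $\LinkDens\le 1$, and a localized fraction that grows continuously from zero for $\LinkDens>1$.

```python
import math

# Solve the mean-field self-consistency equation 1 - Q = exp(-eta2 * Q)
# for the localized fraction Q.  For eta2 <= 1 the only root is Q = 0
# (liquid); for eta2 > 1 a nonzero root appears continuously.
def localized_fraction(eta2, tol=1e-12):
    f = lambda q: 1.0 - q - math.exp(-eta2 * q)
    if eta2 <= 1.0:
        return 0.0
    lo, hi = 1e-12, 1.0            # f(lo) > 0 and f(hi) < 0 when eta2 > 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for eta2 in (0.5, 1.0, 1.2, 2.0, 3.0):
    print(f"eta2 = {eta2:3.1f}   Q = {localized_fraction(eta2):.4f}")
```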
[[The critical point $\LinkDens_C=1$ corresponds to mean coordination number $z$ of $1$, and this agrees with the classic work on the statistical properties of random graphs by Erd[ő]{}s and R[é]{}nyi [@Erdos1960].]{}]{} For a detailed discussion and for the stationary-point distribution of inverse square localization lengths $\DistISLL(\ISLL)$, see Refs. [@Goldbart1996; @Castillo1994]. The contraction $\Contraction$, which is relevant to the elasticity of the random solid state, can be investigated by inserting the form (\[EQ:VOPAnsatzM\]) of the order parameter into the Hamiltonian $H_{\VOP}$, Eq. (\[EQ:HVOPHRS\]), which yields the dependence of the Hamiltonian on the parameters $\LocaPart$, $\DistISLL(\ISLL)$ and $\Contraction$. Through a lengthy derivation, and by keeping terms to $O(n)$, we arrive at the following Hamiltonian for the stationary point (cf. Appendix \[APP:HSP\]): $$\begin{aligned} \label{EQ:HSP} \HVOPSP&=&\frac{\ExclVoluRN_0(0)\PartNumb^2}{2\Volu_0} +\frac{n\ExclVoluRN(0)\PartNumb^2}{2\Volu} -\PartNumb \BoltCons T \ln \Volu_0 -n\PartNumb \BoltCons T \ln \Volu + n\PartNumb \BoltCons T \Bigg\lbrace \UnivPara \Big\lbrack \frac{d}{2}\big(\ln(2\pi)+\Contraction^2\big)-\ln\Volu \Big\rbrack \nonumber\\ &&- \frac{\LinkDens \LocaPart^2}{2}\cdot\frac{d}{2}\int_{\ISLL_1,\ISLL_2} \ln\big( \frac{1}{\ISLL_1}+\frac{1}{\ISLL_2}+\LinkScal^2 \big) -e^{-\LinkDens \LocaPart}\frac{d}{2} \sum_{m=1}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!} \int_{\ISLL_1,\ldots,\ISLL_m} \ln \Big( \frac{\tISLL_1\cdots\tISLL_m}{\tISLL_1 +\cdots +\tISLL_m} \Big) \Bigg\rbrace .\end{aligned}$$ Here, the variable $\tISLL$ is defined as $\tISLL\equiv\big(\frac{1}{\ISLL}+\LinkScal^2\big)^{-1}$, where $\ISLL$ is the inverse square localization length $\ISLL\equiv 1/\LocaLeng^2$, and we have introduced the shorthand for the integrals $\int_{\ISLL}\equiv \int d\ISLL \DistISLL(\ISLL)$. The dimensionless factor $\UnivPara$ in Eq. (\[EQ:HSP\]) is given by $$\begin{aligned} \label{EQ:theta} \UnivPara\equiv -\frac{\LinkDens \LocaPart^2}{2}+\LinkDens \LocaPart -1+e^{-\LinkDens \LocaPart}.\end{aligned}$$ We shall see in Section \[SEC:Heterogeneity\] that $\UnivPara$ also controls the mean value of the shear modulus, as well as the amplitude of the disorder correlators that involve the residual stress fields. To obtain the disorder-averaged free energy, we shall make the stationary-point approximation: $$\begin{aligned} Z_{1+n}\simeq e^{-\frac{\HVOPSP}{\BoltCons T}} .\end{aligned}$$ Thus, we can obtain the Helmholtz free energy using Eq. (\[EQ:HFEDisoAver\]), arriving at the result $$\begin{aligned} \lda \HelmFreeEner_{\textrm{SP}} \rda \!&=&\! -\BoltCons T \lim _{n \to 0} \frac{1}{n} \Big( \frac{Z_{1+n}}{Z_1}-1 \Big) \nonumber\\ \!&=&\! -\PartNumb \BoltCons T \Big(1-\frac{\LinkDens}{2}+\UnivPara \Big) \ln\Volu -\PartNumb \BoltCons T \frac{\UnivPara d}{2}\big(\ln(2\pi)+\Contraction^2 \big) +\frac{\ExclVolu\PartNumb^2}{2\Volu} -\frac{\LinkDens \PartNumb \BoltCons}{2} \ln \LinkPote_0 \nonumber\\ &&+ \PartNumb \BoltCons T \frac{\LinkDens\LocaPart^2}{2}\frac{d}{2} \int_{\ISLL_1,\ISLL_2} \!\ln\Big( \frac{1}{\ISLL_1}+\frac{1}{\ISLL_2}+\frac{\LinkScal^2}{\BoltCons T} \Big)\! \!+\!\PartNumb \BoltCons T \frac{d}{2} e^{-\LinkDens \LocaPart} \sum_{m=1}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!} \!\int_{\ISLL_1,\ldots,\ISLL_m}\! \!\!\ln \Big( \frac{\tISLL_1\cdots\tISLL_m}{\tISLL_1 +\cdots +\tISLL_m} \Big) ,\end{aligned}$$ where we have used the mean-field value of $Z_1$, from Eq. 
(\[EQ:Z1SP\]), and we have also made an expansion for small $n$ of the renormalized excluded-volume parameter $\ExclVoluRN$, using $$\begin{aligned} \LinkPoteRepl_{0}=(\LinkPote_{0})^{1+n} =\LinkPote_{0}\big(1+n\ln \LinkPote_{0} +O(n^2)\big).\end{aligned}$$ In order to study elasticity, we shall need to know the disorder-averaged Gibbs free energy $\lda\GibbFreeEner\rda$, which is given by a Legendre transformation, Eq. (\[EQ:LegeTran\]): $$\begin{aligned} \lda\GibbFreeEner_{\textrm{SP}}\rda=\lda\HelmFreeEner_{\textrm{SP}}\rda+\Pres\Volu .\end{aligned}$$ [[We can insert the pressure $p$, given by Eq. (\[EQ:AverPres\]), and drop the slowly-varying $\ln\Volu$ term given that $\UnivPara\simeq\LinkDens/2-1$ provided the system is not close to the critical point $\LinkDens_C=1$. In the limit $\frac{\ExclVolu N}{\BoltCons T \Volu}\gg \LinkDens$ we arrive at]{}]{} $$\begin{aligned} \label{EQ:SPGIBBS} \lda \GibbFreeEner_{\textrm{SP}} \rda \!&\simeq&\! \frac{\ExclVolu\PartNumb^2}{2\Volu_0} \Big\lbrack 2+\Big(\frac{\Volu}{\Volu_0}-1\Big)^2\Big\rbrack -\PartNumb \BoltCons T \frac{\UnivPara d}{2}\big(\ln(2\pi)+\Contraction^2 \big) -\frac{\LinkDens \PartNumb \BoltCons}{2} \ln \LinkPote_0 \nonumber\\ &&+ \PartNumb \BoltCons T \frac{\LinkDens\LocaPart^2}{2}\frac{d}{2} \int_{\ISLL_1,\ISLL_2}\!\! \ln\Big( \frac{1}{\ISLL_1}+\frac{1}{\ISLL_2}+\frac{\LinkScal^2}{\BoltCons T} \Big) \!+\!\PartNumb \BoltCons T \frac{d}{2} e^{-\LinkDens \LocaPart} \!\sum_{m=1}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!} \!\int_{\ISLL_1,\ldots,\ISLL_2} \!\!\!\ln \Big( \frac{\tISLL_1\cdots\tISLL_m}{\tISLL_1 +\cdots +\tISLL_m} \Big).\,\,\end{aligned}$$ Using the relation (\[EQ:VContraction\]), we can obtain the stationary values of the contraction $\Contraction$ that minimizes the disorder-averaged Gibbs free energy by solving $$\begin{aligned} \frac{\delta \lda \GibbFreeEner_{\textrm{SP}} \rda }{ \delta \Contraction}=0 .\end{aligned}$$ [[In the limit $\frac{\ExclVolu N}{\BoltCons T \Volu}\gg \LinkDens$, the solution is $$\begin{aligned} \label{EQ:ContRLPM} \Contraction \simeq 1-\frac{ \UnivPara\Volu_0 \BoltCons T}{\ExclVolu \PartNumb d}.\end{aligned}$$ The limit $\frac{\ExclVolu N}{\BoltCons T \Volu}\gg \LinkDens$ is the same as the limit taken below Eq. (\[EQ:ExclVoluRN\]), indicating that the excluded-volume repulsion is much stronger than the attractive effects of the links.]{}]{} This contraction of the volume due to the introduction of links at a given pressure is a result of both the reduction of the total number of translational degrees of freedom, i.e., the change of the osmotic pressure, and the attractive interactions induced by the links. We shall see later that this contraction is consistent with a particular phenomenological model of a disordered elastic medium that we shall introduce. Goldstone fluctuations in the RLPM ---------------------------------- ### Spontaneous symmetry breaking {#SEC:SSB} To characterize the Goldstone modes of fluctuations associated with the random solid state, we shall first look at the pattern of symmetry breaking accompanying the transition to this state. The Hamiltonian (\[EQ:HVOPHRS\]) for the liquid-to-soft-random-solid transition has the symmetry of independent translations and rotations of each replica. 
The translational invariance of the Hamiltonian can be readily verified by making the transformation $$\begin{aligned} \hat{x} &\to& \hat{x}'=\hat{x}+\hat{a} , \nonumber\\ \VOP(\hat{x}) &\to& \VOP'(\hat{x}')=\VOP(\hat{x}) = \VOP(\hat{x}'-\hat{a}) ,\end{aligned}$$ where $\hat{a}\equiv(a^0,a^1,\ldots,a^n)$ represents a replicated translation. In momentum space this transformation reads $$\begin{aligned} \VOP_{\hat{p}} &\to & \VOP'_{\hat{p}}=e^{i\hat{p}\cdot\hat{a}}\, \VOP_{\hat{p}}\, .\end{aligned}$$ It is easy to check that, by inserting this transformed order parameter back into the Hamiltonian (\[EQ:HVOPHRS\]) and making a change of variables, the same Hamiltonian but for the field $\VOP '$ is recovered. Similarly, one can verify invariance under independent rotations $\hat{\Tens{O}}\equiv(\Tens{O}^{0},\Tens{O}^{1},\ldots,\Tens{O}^{n})$ with $$\begin{aligned} \VOP_{\hat{p}} \to \VOP'_{\hat{p}} = \VOP_{\hat{\Tens{O}}^{-1}\cdot\hat{p}} \, .\end{aligned}$$ The order parameter in the liquid state (i.e., $\VOP=0$) has the full symmetry of the Hamiltonian, and at the transition to the soft random solid state it is the symmetry of *relative* translations and rotations between different replicas that is spontaneously broken. However, the symmetry of *common* translations and rotations of all replicas are preserved, and this reflects the important notion that from the macroscopic perspective the system remains translationally and rotationally invariant, even in the random solid state. This entire pattern of symmetry breaking amounts to an unfamiliar but essentially conventional example of the Landau paradigm. This broken symmetry of relative translations and rotations between different replicas can be understood as a result of particle localization. Because a delocalized liquid particle can explore the whole volume via its thermal fluctuations, and in thermal equilibrium its positions in different replicas are uncorrelated, the liquid state is invariant, under separate translation and rotation of individual replica. On the contrary, for a localized particle, its positions in the various replicas are strongly correlated, and therefore the symmetries of relative translations and rotations are broken. It is straightforward to verify that the form of the random-solid-state order parameter, Eq. (\[EQ:VOPAnsatzR\]), correctly implements this pattern of symmetry breaking. To see this, we can use the complete orthonormal basis in replica space defined in Section \[SEC:FTMF\], and define an alternative basis involving a replica (almost) body-diagonal unit vector $$\begin{aligned} \BASEl \equiv \frac{1}{\sqrt{1+n\Contraction^2}} \big(\BASE^{0}+\Contraction\ReplSumOne\BASE\REPa\big) .\end{aligned}$$ Relative to $\BASEl$, we may decompose a $(1+n)d$ dimensional vector $\hat{x}$ into its longitudinal ($\lambda$) and transverse ($\tau$) components: $$\begin{aligned} \hat{x}=\hat{x}_{\lambda}+\hat{x}_{\tau}, \quad\, \hat{x}_{\lambda} = (\hat{x}\cdot\BASEl)\BASEl, \quad\, \hat{x}_{\tau} = \hat{x}-\hat{x}_{\lambda} \, .\end{aligned}$$ Note that $\hat{x}_{\lambda}$ and $\hat{x}_{\tau}$ are both $(1+n)d$-dimensional vectors, but $\hat{x}_{\lambda}$ has only $d$-degrees of freedom (given by $\hat{x}\cdot\BASEl$), and $\hat{x}_{\tau}$ has only $nd$-degrees of freedom. 
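A compact way to see this bookkeeping at work is the following numerical sketch (ours, with illustrative values of $n$, $\Dime$, and $\Contraction$), which builds the weights defining $\BASEl$, splits a replicated vector into $\hat{x}_{\lambda}$ and $\hat{x}_{\tau}$, and checks that the decomposition is orthogonal.

```python
import numpy as np

# Longitudinal/transverse decomposition in replica space.  The weights
# w_alpha are the components of eps_lambda on the replica basis:
# w = (1, lam, ..., lam) / sqrt(1 + n lam^2).
n, d = 2, 3                    # measurement replicas and spatial dimension
lam = 0.98                     # contraction factor (illustrative)
rng = np.random.default_rng(1)

w = np.array([1.0] + [lam] * n) / np.sqrt(1.0 + n * lam**2)

x_hat = rng.standard_normal((1 + n, d))        # a replicated d-vector
x_dot_eps = np.einsum("a,ai->i", w, x_hat)     # the d-vector x_hat . eps_lambda
x_lambda = np.einsum("a,i->ai", w, x_dot_eps)  # longitudinal part (d degrees of freedom)
x_tau = x_hat - x_lambda                       # transverse part (n*d degrees of freedom)

print(np.allclose(np.sum(x_hat**2), np.sum(x_lambda**2) + np.sum(x_tau**2)))  # True
print(np.allclose(np.einsum("a,ai->i", w, x_tau), 0.0))                       # True
```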
By this decomposition, the vector $\hat{z}=(z,\Contraction z, \Contraction z, \ldots )$, which characterizes the mean positions of a particle in the stationary-point state, can be written as $$\begin{aligned} \hat{z} = \sqrt{1+n\Contraction^2} \, z\, \BASEl ,\end{aligned}$$ which points purely in the $\BASEl$ direction. As a result, the stationary order parameter, Eq. (\[EQ:VOPAnsatzR\]), can be written as $$\begin{aligned} \label{EQ:OPInv} \VOPSP(\hat{x}) &=& \LocaPart \int \frac{dz}{\Volu_0} \int d\ISLL \DistISLL(\ISLL) \Big(\frac{\ISLL}{2\pi}\Big)^{\frac{(1+n)\Dime}{2}} \nonumber\\ && \quad\times e^{-\frac{\ISLL}{2} \vert \hat{x}_{\lambda}-\hat{z}_{\lambda}\vert^2 -\frac{\ISLL}{2} \vert \hat{x}_{\tau}\vert^2} - \frac{\LocaPart}{\Volu_0\Volu^n} \nonumber\\ &=& \LocaPart \int d\ISLL \DistISLL(\ISLL) \Big(\frac{\ISLL}{2\pi}\Big)^{\frac{(1+n)\Dime}{2}} \Big(\frac{2\pi}{\ISLL(1+n\Contraction^2)}\Big)^{\frac{d}{2}} \nonumber\\ && \quad\times e^{-\frac{\ISLL}{2} \vert \hat{x}_{\tau}\vert^2} - \frac{\LocaPart}{\Volu_0\Volu^n} ,\end{aligned}$$ where in the last line we have integrated out the $d$-dimensional vector $z$. It is evident that this value of order parameter does not depend on $\hat{x}_{\lambda}$, which means that it is *invariant under translations in the $\BASEl$ direction, corresponding to common translations and rotations of all replicas (albeit appropriately contracted by $\Contraction$ in replicas $1$ through $n$)*. This stationary order parameter is shown schematically in Fig. \[FIG:OPab\](a) for two replicas. The Gaussian-like form in the $\hat{x}_{\tau}$ direction indicates a condensation between different replicas. This is called a *molecular bound state* in Ref. [@Goldbart2004]. ### Goldstone fluctuations {#SEC:GSFT} With the pattern of continuous symmetry breaking just outlined, we can write down the form that the order parameter takes when it is subject to Goldstone fluctuations: $$\begin{aligned} \label{EQ:VOPGSR} \VOPGS(\REPX)&=&\LocaPart \int \frac{dz}{\Volu_0}\int d\ISLL\, \DistISLL(\ISLL) \Big(\frac{\ISLL}{2\pi}\Big)^{\frac{(1+n)\Dime}{2}} \nonumber\\ \noalign{\medskip} && \,\times e^{-\frac{\ISLL}{2} \{ \ReplSum\vert x\REPa-\DefoPosi\REPa(z) \vert^2 \}} -\frac{\LocaPart}{\Volu_0 \Volu^n} ,\end{aligned}$$ where $\DefoPosi^{0}(z)=z$. Therefore the Goldstone deformation of the order parameter is parameterized by the $n$ independent functions $\{\DefoPosi^{1}(z),\ldots,\DefoPosi^{n}(z)\}$. The stationary form of the order parameter, Eq. (\[EQ:VOPAnsatzR\]), describes a system in which the mean positions of the replicas of the thermally fluctuating particles are located at $(x^0,x^1,\ldots,x^n)=(z,\Contraction z,\ldots,\Contraction z)$. We shall refer to these positions as the *centers of the thermal cloud*. By comparing the undeformed order parameter (\[EQ:VOPAnsatzR\]) and the Goldstone-deformed one, Eq. (\[EQ:VOPGSR\]), we see that the Goldstone-deformed order parameter describes a system in which the mean positions of the replicas of the fluctuating particles are displaced from $(z,\Contraction z,\ldots,\Contraction z)$ to $(z, \DefoPosi^{1}(z),\ldots,\DefoPosi^{n}(z))$. Thus, $\DefoPosi\REPa(z)$ ($\alpha=1,2,\ldots,n$) represent the deformed mean positions in the measurement replicas. ![(a) Schematic plot of the value of the order parameter (brightness) at the stationary point, for the illustrative two replicas, labeled by $x^1$ and $x^2$. 
(b) Schematic plot of the value of the order parameter (brightness) for a Goldstone deformation of the stationary point for two replicas.[]{data-label="FIG:OPab"}](GoldstonePlot.eps){width=".48\textwidth"} We require that the deformations $\Contraction z \to \DefoPosi\REPa(z)$ be *pure shear* deformations. This constraint can be expressed as $\textrm{det}\big(\partial \DefoPosi_i\REPa(z)/\partial (\Contraction z)_j\big)=1$; it guarantees that the Goldstone fluctuation does not excite the LRS (i.e., each replica still has homogeneous density), which would be extremely energetically costly, owing to the large excluded-volume interaction. The 0RS has already been removed from the theory, and one can easily check that it remains zero in this Goldstone-deformed order parameter. The vanishing of the order parameter in the 1RS can be verified by taking the momentum-space Goldstone deformation and making a change of variables: $$\begin{aligned} (\VOPGS)_{\hat{q}}\big\vert_{\hat{q}=p\BASE\REPa} &=& \LocaPart \int \frac{dz}{\Volu_0}\int d\ISLL \DistISLL(\ISLL) e^{-\frac{\vert p \vert^2}{2\ISLL} -ip \, \DefoPosi\REPa(z)} -\LocaPart\delta^{((1+n)d)}_{\REPP,0} \nonumber\\ &=&\LocaPart \int \frac{d\DefoPosi\REPa}{\Volu} \frac{\Volu}{\Volu_0} \Big\vert\frac{\partial z}{\partial \DefoPosi\REPa}\Big\vert \int d\ISLL \DistISLL(\ISLL) e^{-\frac{\vert p \vert^2}{2\ISLL} -ip \, \DefoPosi\REPa(z)}-\LocaPart\delta^{((1+n)d)}_{\REPP,0}=0.\end{aligned}$$ This result indicates that the deformed state has the same local density as the (undeformed) stationary-point state. ![Example of a Goldstone-deformed state. The system of replicas $0$ through $n$ are shown. The mean positions of the replicas of a thermally fluctuating particle are displaced to $(z, \DefoPosi^{1}(z),\ldots,\DefoPosi^{n}(z))$ in this Goldstone-deformed state, which characterizes an $n$-fold replicated deformation field. Here, for simplicity, we only show spatially homogeneous deformations, and it is worth noticing that the volumes of the measurement replicas, i.e., replicas $1$ through $n$, are contracted by a factor $\Contraction^{\Dime}$.[]{data-label="FIG:ReplGoldstone"}](ReplGold.eps){width=".45\textwidth"} We ought to clarify the following point about this Goldstone deformation. As we have already mentioned, the symmetry that is broken at the transition is that of relative translations and rotations of the various replicas, the symmetry of common translations and rotations remaining intact. As a result, the Goldstone deformation should be constructed via $z$-dependent translations of the order parameter in the $\hat{x}_{\tau}$ direction, i.e., *the broken symmetry direction*. However, if we look at the deformation field defined by $\hat{\GSDF}\equiv\hat{\DefoPosi}-\hat{z}$, we find that $\hat{\GSDF}=(0,\DefoPosi^{1}(z)-\Contraction z, \DefoPosi^{2}(z)-\Contraction z , \ldots)$ is in fact *not* in the broken symmetry direction $\hat{x}_{\tau}$, because it has an $\hat{x}_{\lambda}$ component, viz., $$\begin{aligned} \hat{\GSDF}_{\lambda}=(\hat{\GSDF}\cdot\BASEl )\,\, \BASEl = \frac{\Contraction}{\sqrt{1+n\Contraction^2}} \ReplSumOne \GSDF\REPa(z) \,\, \BASEl .\end{aligned}$$ This $\hat{\GSDF}_{\lambda}$ component is actually redundant. 
This can be seen by decomposing the quadratic form $\vert \hat{x}-\hat{z}-\hat{\GSDF} \vert^2$ as $$\begin{aligned} \vert \hat{x}-\hat{z}-\hat{\GSDF} \vert^2 = \vert \hat{x}_{\lambda}- \hat{z}_{\lambda}-\hat{\GSDF}_{\lambda} \vert^2 + \vert \hat{x}_{\tau}-\hat{\GSDF}_{\tau} \vert^2 ,\end{aligned}$$ and noting that in the form of the order parameter (\[EQ:VOPGSR\]) one can change the integration variable (which is a $d$-dimensional vector) from $z$ to $$\begin{aligned} y\equiv z+\frac{1}{\sqrt{1+n\Contraction^2}} \,\hat{\GSDF}\cdot\BASEl ,\end{aligned}$$ so the longitudinal component of the $(1+n)\Dime$-dimensional vector $\hat{y}\equiv(y,\Contraction y,\ldots,\Contraction y)$ is $\hat{y}_{\lambda}=\hat{z}_{\lambda}+\hat{\GSDF}_{\lambda}$. The Jacobian of this change of variables is unity, provided that each deformation $z\to \GSDF\REPa(z)$ is a pure shear deformation [^6]. With this change of variables the Goldstone-deformed order parameter attains the form $$\begin{aligned} \label{EQ:VOPGSRTWO} \VOPGS(\hat{x}) &=& \LocaPart \int \frac{dy}{\Volu_0} \int d\ISLL \DistISLL(\ISLL) \Big(\frac{\ISLL}{2\pi}\Big)^{\frac{(1+n)\Dime}{2}} e^{-\frac{\ISLL}{2} \vert \hat{x}_{\lambda} - \hat{y}_{\lambda}\vert^2 -\frac{\ISLL}{2} \vert \hat{x}_{\tau} - \hat{\GSDF}'_{\tau}(y)\vert^2} - \frac{\LocaPart}{\Volu_0\Volu^n} \nonumber\\ &=& \LocaPart \int \frac{dy}{\Volu_0} \int d\ISLL \DistISLL(\ISLL) \Big(\frac{\ISLL}{2\pi}\Big)^{\frac{(1+n)\Dime}{2}} e^{-\frac{\ISLL}{2} \vert \hat{x} - \hat{y}-\hat{\GSDF}'_{\tau}(y) \vert^2} - \frac{\LocaPart}{\Volu_0\Volu^n} \nonumber\\ &\equiv& \VOPGS'(\hat{y}) ,\end{aligned}$$ where the transformed deformation field $\hat{\GSDF}'$ is defined via $\hat{\GSDF}'_{\tau}(y)=\hat{\GSDF}_{\tau}(z)$. With this change of variables, the order parameter field, $\VOPGS(\hat{x})$ of $\hat{x}$, is transformed to a new field, $\VOPGS'(\hat{y})$ of $\hat{y}$, which can be viewed as being the stationary point $\VOP_{SP}(\hat{y})$ but locally translated purely in the $\hat{y}_{\tau}$ directions, because the deformation is $\hat{\DefoPosi}'(y)=\hat{y}+\hat{\GSDF}'_{\tau}(y)$. Comparing with the deformation before the change of variables, $\hat{\DefoPosi}(z)=\hat{z}+\hat{\GSDF}(z)$, it is evident that the $\GSDF_{\lambda}$ component is actually removed, and the deformation field $\hat{\GSDF}$ only affects the $\hat{x}_{\tau}$ direction. Therefore, $\GSDF_{\lambda}$ is a redundant component in this field-theoretic description of the Goldstone fluctuation. Note that in these two representations of the Goldstone fluctuation, Eqs. (\[EQ:VOPGSR\]) and (\[EQ:VOPGSRTWO\]), the number of degrees of freedom of the deformation field, $\GSDF(z)$ or $\hat{\GSDF}'_{\tau}(y)$, is $nd$, because in $\GSDF(z)$ one has the constraint $\GSDF^{0}=0$. The reason we choose to adopt the form of Goldstone fluctuation given in Eq. (\[EQ:VOPGSR\]) is that in the true physical system that we are intending to describe, the preparation state (replica $0$) is not deformed. Although in the field theory the $1+n$ replicas feature symmetrically (apart from the contraction $\Contraction$), physically, one should only have Goldstone fluctuations that deform replicas $1$ through $n$, as these are the replicas associated with the measurement states, on which deformations are actually performed. Therefore, although Eqs. (\[EQ:VOPGSR\]) and (\[EQ:VOPGSRTWO\]) are mathematically equivalent, Eq. (\[EQ:VOPGSR\]) provides a better physical description of the Goldstone fluctuations [^7]. 
### Energetics of Goldstone deformations {#SEC:EnerGoldSton} To obtain the energy of a Goldstone deformed state, we take the momentum-space version of the Goldstone-deformed order parameter, Eq. (\[EQ:VOPGSR\]), $$\begin{aligned} \label{EQ:VOPGSM} (\VOPGS)_{\REPP}&=&\LocaPart \int \frac{dz}{\Volu_0}\int d\ISLL \DistISLL(\ISLL) \nonumber\\ && \quad\times e^{-\frac{\vert\REPP\vert^2}{2\ISLL} -i\REPP\cdot\hat{\DefoPosi}(z)} -\LocaPart \delta^{(1+n)d}_{\REPP,0} ,\end{aligned}$$ and insert it into the Hamiltonian (\[EQ:HVOPHRS\]). After a lengthy calculation (see Appendix \[APP:HGS\]), similar to the one for the stationary-point free energy, we arrive at the energy of the Goldstone deformed state: $$\begin{aligned} \label{EQ:HGS} \HVOPGF= \HVOPSP+H_{\VOP}^{\DefoScalPsi}.\end{aligned}$$ Here, we use the short-hand $$\begin{aligned} \!\!\!\!\DefoScalPsi(z_1,z_2)\!\!&\equiv&\!\!(\hat{\DefoPosi}(z_1)\!-\!\hat{\DefoPosi}(z_2))^2\!-\!(1+n)(z_1\!-\!z_2)^2 \nonumber\\ \!\!&=&\!\!\!\! \ReplSumOne\! \big((\DefoPosi\REPa(z_1)\!\!\!-\!\!\DefoPosi\REPa(z_2))^2\!-\!(z_1\!-\!z_2)^2\!\big)\end{aligned}$$ to denote the deformation. The term $\HVOPSP$ is the Hamiltonian at the stationary point, as given in Eq. (\[EQ:HSP\]), and the term $H_{\VOP}^{\DefoScalPsi}$ accounts for the increase in the energy due to Goldstone deformation, and is given by $$\begin{aligned} \label{EQ:GibbEnerGoldSton} \!\!\!\! H_{\VOP}^{\DefoScalPsi} \!\!\!\!&=&\!\!\frac{1}{2}\int dz_1 dz_2 \KernOne(z_1,z_2)\DefoScalPsi(z_1,z_2) \nonumber\\ &&\!\!-\frac{1}{8\BoltCons T}\int dz_1 dz_2 dz_3 dz_4 \KernTwo(z_1,z_2,z_3,z_4) \nonumber\\ && \! \times \DefoScalPsi(z_1,z_2)\DefoScalPsi(z_3,z_4) \!-\!\!n\PartNumb \BoltCons T \frac{\UnivPara d}{2}(\Contraction^2\!-\!1\!) , \,\end{aligned}$$ [[where we have kept terms to linear order in $n$ and quadratic order in $\DefoScalPsi^2$ in this expansion.]{}]{} The functions $\KernOne(z_1,z_2)$ and $\KernTwo(z_1,z_2,z_3,z_4)$ are bell-shaped functions of the distances $(z_1-z_2)$ and $(z_1-z_2)$, $(z_2-z_3)$, $(z_3-z_4)$, as shown in Fig. \[FIG:KK\]. They are independent of the centers of mass, $(z_1+z_2)/2$ for $\KernOne(z_1,z_2)$, and $(z_1+z_2+z_3+z_4)/4$ for $\KernTwo(z_1,z_2,z_3,z_4)$. The forms of $\KernOne(z_1,z_2)$ and $\KernTwo(z_1,z_2,z_3,z_4)$ are given in Appendix \[APP:HGS\]. In specific, $\KernOne(z_1,z_2)$ has the following schematic functional form $$\begin{aligned} \KernOne(z_1,z_2) = \int dr \, C(r) e^{-\frac{\vert z_1-z_2\vert^2}{2r^2}} ,\end{aligned}$$ which is a superposition of Gaussian distributions on the scale of $r$, where $r$ is a certain combination of localization lengths $\LocaLeng$, weighted by the distribution of those localization lengths. The kernel $\KernTwo(z_1,z_2,z_3,z_4)$ has a similar structure, except that it also contains delta-function factors such as $\delta(z_1-z_3)$, which should be associated with the scale of the short-distance cutoff of the theory, which is also on the scale of typical localization length. ![Diagrams for the functions $\KernOne(z_1,z_2)$ and $\KernTwo(z_1,z_2,z_3,z_4)$. In these diagrams, the straight lines represent delta functions, and the wavy lines represent Gaussian potentials between pairs of points, averaged over the distribution of localization lengths, as indicated schematically by the summation signs, which also indicate symmetrization over the arguments. The typical localization length provides the characteristic lengthscale for $\KernOne$ and $\KernTwo$. 
The full expression is listed in Appendix \[APP:HGS\].[]{data-label="FIG:KK"}](K1K2.eps){width=".4\textwidth"} The form of the energy increase due to Goldstone fluctuations, Eq. (\[EQ:GibbEnerGoldSton\]), can be understood intuitively as follows. This energy actually describes the elastic energy of replicated shear deformations of the system, as we have explained in Section \[SEC:GSFT\] (i.e., the Goldstone fluctuations in the RLPM are replicated shear deformations). Thus, it is evident that the first term in Eq. (\[EQ:GibbEnerGoldSton\]) represents a coupling of the deformations $\hat{\DefoPosi}(z_1)$ and $\hat{\DefoPosi}(z_2)$ at the points $z_1$ and $z_2$. The coupling function, $\KernOne(z_1,z_2)$, given in Eq. (\[EQ:KoneApp\]), has a magnitude controlled by the probability $\LocaPart$ for a particle to be localized, and a lengthscale controlled by the typical localization length; these two quantities are, in turn, determined by the link density parameter $\LinkDens$, which is the control parameter for the RLPM. In particular, the first term in $\KernOne(z_1,z_2)$ carries a factor $\LocaPart^2$, which is the probability for both of the two mass-points, $z_1$ and $z_2$, to be in the infinite cluster, and therefore to have an elastic interaction between their deformations. The second term in $\KernOne(z_1,z_2)$, which involves a summation over $m$ from $2$ to $\infty$, takes into account the interactions between these two points that are mediated via other mass points. The four-point term, with coupling function $\KernTwo(z_1,z_2,z_3,z_4)$, is a bit more complicated. It couples replicas of the shear deformation to one another and to themselves, also on the scale of the localization length. These two terms in the elastic energy embody the statistics of the elastic free energy of the soft random solid in the language of replicas, and thus encode information about the statistics of the quenched random elastic parameters. In Section \[SEC:Phen\] we shall uncover this statistical content via the introduction of a phenomenological model inspired by this elastic energy. In order to compare with the phenomenological elastic free energy that we shall discuss in Section \[SEC:Phen\], it is useful to make an alternative decomposition of the Hamiltonian to the one made in Eq. (\[EQ:HGS\]). That decomposition (\[EQ:HGS\]) was in terms of the stationary-point part and the part due to fluctuations. The alternative decomposition reads $$\begin{aligned} \label{EQ:HSepa} \HVOPGF= H_{\VOP}^{(0)}+H_{\VOP}^{(\DefoPosi)}.\end{aligned}$$ The relation between the two decompositions is given by $$\begin{aligned} H_{\VOP}^{(0)} &=& \HVOPSP-\ContEner(\Contraction) , \nonumber\\ H_{\VOP}^{(\DefoPosi)} &=& H_{\VOP}^{\DefoScalPsi} \,\,\,\, + \ContEner(\Contraction) ,\end{aligned}$$ where $\ContEner(\Contraction)$ accounts for the energy of the stationary point, measured with respect to the state right after linking, which is actually the elastic energy of the contraction $\Contraction$, i.e., $$\begin{aligned} \label{EQ:CE} \ContEner(\Contraction) = \frac{\ExclVolu \PartNumb^2}{2\Volu_0} \Big( \frac{\Volu}{\Volu_0}-1 \Big)^2 +\PartNumb \BoltCons T \frac{\UnivPara d}{2}(\Contraction^2-1).\end{aligned}$$ In Eq. (\[EQ:HSepa\]), we are separating the Hamiltonian into two parts. $H_{\VOP}^{(0)}$ gives energy of the state right after linking, which is a state that has not been allowed to contract after the links were made and thus has the same volume and shape as the liquid state. 
In the state right after linking, the mean positions of the replicas of the thermally fluctuating particle (i.e., the centers of the thermal cloud) are located at the positions $(x^0,x^1,\ldots,x^n)=(z,z,\ldots,z)$. The other part of the Hamiltonian is the elastic energy of the deformation away from this state, $H_{\VOP}^{(\DefoPosi)}$. This is different from the separation made in Eq. (\[EQ:HGS\]), in which one has the stationary point energy $\HVOPSP$ and the energy-increase due to Goldstone deformations $H_{\VOP}^{\DefoScalPsi}$. Phenomenological approach to the elasticity of soft random solids {#SEC:Phen} ================================================================= Phenomenological nonlocal elastic free energy {#SEC:PhenFE} --------------------------------------------- As discussed in Section \[SEC:Intr\], in the classical theory of rubber elasticity [@Treloar1975], rubbery materials are modeled as incompressible networks of entropic Gaussian chains, and the resulting elastic free energy density is given by $$\begin{aligned} f = \frac{\SheaModu}{2} \, \textrm{Tr} \,\, \DefoGrad^{\textrm{T}} \DefoGrad\end{aligned}$$ for spatially uniform deformations $r\to \DefoGrad \cdot r$. Incompressibility is incorporated via the constraint $\textrm{det} \DefoGrad = 1$. For the shear modulus $\SheaModu$, the classical theory gives the result $n_c \, \BoltCons \, T$, where $n_c$ is the density of effective chains in the network. The phenomenological model that we now discuss is in the same spirit as the classical theory of rubber elasticity. However, to account for the heterogeneity of the medium we need to introduce the additional feature of quenched randomness into the model, and thus the entropic Gaussian chains are allowed to be of heterogeneous length and density. Furthermore, the classical theory is a local elasticity theory, which is valid at length scales that are much longer than the effective chain length of the polymers. By contrast, our phenomenological model is a nonlocal theory, which explicitly takes into account the finite length of polymer chains, as well as their variations. Inspired by the form of the energy of Goldstone fluctuations determined from the RLPM in Section \[SEC:EnerGoldSton\], we choose the following elastic free energy $\FreeEnerPhen_{\NonLocaKern}$, associated with a deformation of the soft random solid state that maps the mass point at $z$ to the new location $\DefoPosi(z)$: $$\begin{aligned} \label{EQ:phenom_model} \FreeEnerPhen_{\NonLocaKern} \!\!&=& \! \frac{1}{2} \!\int\! dz_1\, dz_2\, \NonLocaKern (z_1,z_2) \big(\vert \DefoPosi(z_1)-\DefoPosi(z_2) \vert^2 \! -\vert z_1-z_2 \vert^2\big) \nonumber\\ \noalign{\medskip} && +\frac{\BulkModuZero}{2}\int dz \Big\lbrace\textrm{det}\Big(\frac{\partial\DefoPosi_i(z)}{\partial z_j}\Big)-1 \Big\rbrace^2 ,\end{aligned}$$ where $\NonLocaKern (z_1,z_2)$ is a *nonlocal harmonic attraction* that serves to pull the two mass points (i.e., coarse-grained volume-elements) at $z_1$ and $z_2$ towards one another. The kernel $\NonLocaKern (z_1,z_2)$ originates in the entropy of the molecular chains of the heterogeneous network, and we model it as zero rest-length springs having random spring coefficient. Notice that $\NonLocaKern (z_1,z_2)$ is a *coarse grained* consequence of many molecular chains and, more importantly, is an *entropic* effect and does not depend on the choice of precise form of microscopic attractive interactions. 
We take $\NonLocaKern (z_1,z_2)$ to be a quenched random function of the two positions, $z_1$ and $z_2$, symmetric under $z_1 \leftrightarrow z_2$. We assume that the disorder average of $\NonLocaKern (z_1,z_2)$ is $\NonLocaKernZero(z_{1}-z_{2})\equiv [\NonLocaKern(z_{1},z_{2})]$, i.e., is translationally invariant. Furthermore, we define the fluctuation part of $\NonLocaKern (z_1,z_2)$ to be $\NonLocaKernOne (z_1,z_2)\equiv \NonLocaKern (z_1,z_2) - \NonLocaKernZero(z_{1}-z_{2})$. In the following analysis, we assume that $\NonLocaKernOne \ll \NonLocaKernZero$ in order to make a necessary perturbative expansion. In the second term in Eq. (\[EQ:phenom\_model\]), the determinant of the deformation gradient tensor $\DefoGrad_{ij} (z)$\[$\equiv \partial \DefoPosi_i/\partial z_j$\] captures the change of the volume and, correspondingly, the parameter $\BulkModuZero$, which we take to be large, heavily penalizes density variations. This large $\BulkModuZero$ results from a competition between (i) repulsions (either direct or mediated via a solvent, e.g., excluded-volume), and (ii) intermolecular attractions and external pressure. In the discussion of elasticity that follows, we exploit the notions of a reference space and a target space for any deformation $\DefoPosi(z)$. The reference space, labeled by the $\Dime$-dimensional vector $z$, is the space *before* the deformation, whereas the target space, labeled by the $\Dime$-dimensional vector $\DefoPosi(z)$, is the space *after* the deformation. Disorder average of the phenomenological model via the replica method --------------------------------------------------------------------- To make a comparison with the RLPM, and thus to obtain information about the statistics of the nonlocal kernel $\NonLocaKern$ that characterizes the disorder present in the phenomenological model, we shall use the replica method to average the elastic free energy (\[EQ:phenom\_model\]) over the quenched disorder, whose statistics will be specified below. We follow a recipe similar to the one used in Section \[SEC:RLPMReplica\] \[see Eq. (\[EQ:HFReplica\])\]. The elastic free energy, Eq. (\[EQ:phenom\_model\]), contains the random ingredient $\NonLocaKern$. As with the RLPM, the physical quantity to be disorder-averaged is the free energy at a given pressure, but now with the deformations $\DefoPosi(z)$ as the thermally fluctuating field. Therefore, we need to take $\FreeEnerPhen$, defined in Eq. (\[EQ:phenom\_model\]), as the effective Hamiltonian, because it is the elastic energy for a given deformation field specified by $\DefoPosi(z)$, and then calculate the free energy at a given temperature, via the partition function $$\begin{aligned} Z_{\NonLocaKern} =\int \mathcal{D} \DefoPosi e^{-\FreeEnerPhen_{\NonLocaKern}(\DefoPosi(z))/\BoltCons T} ,\end{aligned}$$ with $\FreeEnerPhen$ depending on the quenched randomness through its kernel $\NonLocaKern$. The Gibbs free energy is related to this partition function via $\GibbFreeEner = -\BoltCons T \ln Z$, and $\GibbFreeEner$ is the quantity that should be averaged over the quenched disorder. Note that it is the Gibbs free energy, instead of the Helmholtz free energy, that is related to this partition function $Z$, because in the elastic energy $\FreeEnerPhen_{\NonLocaKern}$ one has a fixed pressure, which is accounted for by the $\BulkModuZero$ term, the volume being allowed to fluctuate. 
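The replica identity invoked in the next step, $\ln Z=\lim_{n\to 0}(Z^{n}-1)/n=\partial_{n}Z^{n}\vert_{n=0}$, is elementary but worth recording; a minimal sympy check (purely illustrative):

```python
import sympy as sp

Z, n = sp.symbols("Z n", positive=True)
print(sp.limit((Z**n - 1) / n, n, 0))   # log(Z): the replica limit
print(sp.diff(Z**n, n).subs(n, 0))      # log(Z): the equivalent derivative form
```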
The disorder average of the Gibbs free energy can be computed using the replica technique: $$\begin{aligned} \lda \GibbFreeEner \rda &=& -\BoltCons T \int \mathcal{D} \NonLocaKern\, \ProbG \ln Z_{\NonLocaKern} \nonumber\\ &=& -\BoltCons T\int \mathcal{D} \NonLocaKern\, \ProbG \lim_{n\to 0} \frac{Z_{\NonLocaKern}^n-1}{n} \nonumber\\ &=& -\BoltCons T \lim_{n\to 0} \frac{\partial}{\partial n}\Big\vert_{n \to 0} \lda Z_{\NonLocaKern}^n \rda ,\end{aligned}$$ where we have used $\lda\ldots\rda\equiv \int \mathcal{D} \NonLocaKern \, \ProbG \ldots$ once again to denote a disorder average, but this time over the values of the quenched random kernel $\NonLocaKern$, weighted by an as-yet unknown distribution $\ProbG$. In the present setting, we do not have a zeroth replica, as such a replica arises from the Deam-Edwards distribution of the links, and this is not the type of quenched disorder that we have in mind. Rather, in the present setting we regard the distribution of disorder $\ProbG$ as a physical quantity that is unknown but is to be determined through a comparison with the analysis of the Goldstone fluctuations of the RLPM. The replica partition function is then given by $$\begin{aligned} \label{EG:PhenZn} \mathbb{Z}_{n}&\equiv& \lda Z_{\NonLocaKern}^n \rda = \int \mathcal{D} \NonLocaKern \,\ProbG\, Z_{\NonLocaKern}^n \nonumber\\ &=& \int \mathcal{D} \NonLocaKern \,\ProbG \int \ReplProdOne \mathcal{D} \DefoPosi\REPa e^{-\ReplSumOne\FreeEnerPhen_{\NonLocaKern}(\DefoPosi\REPa(z))/\BoltCons T} \nonumber\\ &=& \int \ReplProdOne \mathcal{D} \DefoPosi\REPa\, e^{-\FreeEnerPhen_n/\BoltCons T} ,\end{aligned}$$ in which we functionally integrate over the configurations of the $n$-fold replicated displacement fields $\DefoPosi\REPa$. We have also introduced the effective pure Hamiltonian $\FreeEnerPhen_n$ that governs the replicated deformation fields: $$\begin{aligned} \label{EQ:PhenReplPartFunc} \FreeEnerPhen_n &\equiv& -\BoltCons T \ln \big\lda e^{-\ReplSumOne\FreeEnerPhen_{\NonLocaKern}(\DefoPosi\REPa(z))/\BoltCons T} \big\rda .\end{aligned}$$ The exponential and the logarithm in Eq. (\[EQ:PhenReplPartFunc\]) can jointly be expanded in terms of cumulants, and thus we arrive at the form $$\begin{aligned} \FreeEnerPhen_n &=& -\BoltCons T \Big\lbrace -\Big\lda \ReplSumOne\FreeEnerPhen_{\NonLocaKern}(\DefoPosi\REPa(z))/\BoltCons T \Big\rda_c +\frac{1}{2} \Big\lda \sum_{\alpha,\beta=1}^{n} \FreeEnerPhen_{\NonLocaKern}(\DefoPosi\REPa(z)) \FreeEnerPhen_{\NonLocaKern}(\DefoPosi\REPb(z))/\BoltCons T \Big\rda_c - \cdots \Big\rbrace ,\end{aligned}$$ where $\lda \ldots \rda_c$ are connected statistical moments (i.e., cumulants) associated with the probability distribution of the disorder $\ProbG$, and the omitted terms are $O[(\FreeEnerPhen/\BoltCons T)^3]$. The elastic energy $\FreeEnerPhen_{\NonLocaKern}$ for a given realization of disorder $\NonLocaKern$ and a given deformation field $\DefoPosi(z)$ is given in Eq. 
(\[EQ:phenom\_model\]); inserting this form for $\FreeEnerPhen_{\NonLocaKern}$ we obtain $$\begin{aligned} \label{EQ:FEPn} \FreeEnerPhen_n &=& \frac{\BulkModuZero}{2} \int _{z} \ReplSumOne \big(\vert\partial \DefoPosi\REPa \vert-1\big)^2 +\frac{1}{2}\int_{z_1,z_2} \lda \NonLocaKern(z_1,z_2) \rda_c \DefoScalPsi (z_1,z_2) \nonumber\\ \noalign{\medskip} && -\frac{1}{8\BoltCons T} \int_{z_1,z_2,z_3,z_4} \lda \NonLocaKern(z_1,z_2) \NonLocaKern(z_3,z_4) \rda_c \DefoScalPsi (z_1,z_2)\DefoScalPsi (z_3,z_4) + O(\DefoScalPsi^3) ,\end{aligned}$$ and we remind the reader of the definition of $\DefoScalPsi$, first given in Section \[SEC:EnerGoldSton\]: $$\begin{aligned} \DefoScalPsi\equiv\big(\hat{\DefoPosi}(z_1)-\hat{\DefoPosi}(z_2)\big)^2-(1+n)(z_1-z_2)^2 .\end{aligned}$$ Up to quadratic order in $\DefoScalPsi$, the effective pure Hamiltonian $\FreeEnerPhen_n$ of Eq. (\[EQ:FEPn\]) has *precisely the same form* as the energy of the Goldstone fluctuations (\[EQ:GibbEnerGoldSton\]), derived microscopically from the RLPM. Thus, the RLPM actually provides a *derivation* of the phenomenological model we proposed in Section \[SEC:PhenFE\], and *justifies*, from a microscopic perspective, the phenomenological elastic free energy (\[EQ:phenom\_model\]) with its quenched randomness. Therefore, the probability distribution $\ProbG$ of the quenched randomness in Eq. (\[EQ:phenom\_model\]) is contained in the RLPM. By comparing the two schemes, i.e., Eqs. (\[EQ:GibbEnerGoldSton\]) and (\[EQ:FEPn\]), we arrive at a statistical description of the quenched random kernel $\NonLocaKern$ in the phenomenological model (\[EQ:phenom\_model\]), as we shall now show. Comparing the Gibbs free energies of the RLPM and the phenomenological model ---------------------------------------------------------------------------- The RLPM is a semi-microscopic random network model, our analysis of which led to the disorder-averaged Gibbs free energy (\[EQ:HFEDisoAverDeri\],\[EQ:LegeTran\]): $$\begin{aligned} \label{EQ:GRLPM} \lda \GibbFreeEner \rda = -\BoltCons T \lim _{n \to 0} \frac{\partial}{\partial n} \ln Z_{1+n} + \Pres\Volu ,\end{aligned}$$ with $$\begin{aligned} \label{EQ:ZRLPMVOP} Z_{1+n}=\int \mathcal{D} \VOP \, e^{-H_{\VOP}/\BoltCons T} .\end{aligned}$$ [[ The functional integration here is over all possible configurations of the order-parameter field $\VOP$. By contrast, in the phenomenological model the replicated partition function $\mathbb{Z}_{n}$ involves a functional integration over the $n$-fold replicated deformation field $\DefoPosi\REPa$, as in Eq. (\[EG:PhenZn\]). To obtain the equivalence between the RLPM and the phenomenological model it is useful to proceed in two steps. First, we note that in the random solid state the Boltzmann weight in Eq. (\[EQ:ZRLPMVOP\]) \[i.e., $\exp\left(-H_{\VOP}/\BoltCons T\right)$\] is heavily concentrated near the stationary point $\VOP_{SP}$ and the Goldstone-deformed states $\VOPGS$, and decreases steeply for other sectors of fluctuations. This suggests that we parametrize the fluctuating field $\VOP$ in terms of an amplitude, which we take to be $\VOP_{SP}$, plus radial fluctuations around it, together with an appropriate set of generalized angular [Goldstone]{} variables, which are the $n$ independent $\Dime$-vector fields in $\hat{\DefoPosi}$, as, e.g., in Eq. (\[EQ:VOPGSM\]). We then make an exact change of functional integration variables, from $\VOP$ to these radial and angular variables, and this introduces a corresponding Jacobian factor. 
The second step is to recognize that the radial fluctuations are massive (i.e., they have restoring forces, in contrast with the angular fluctuations, which are massless). Thus, if we were to integrate these radial fluctuations we would obtain small corrections to the terms of the remaining, effective, angular-variable theory. We therefore elect to treat the radial variables at the classical level, which amounts to neglecting the radial fluctuations. Under this condition, the aforementioned Jacobian factor does not depend on the angular variables, and therefore it contributes only a constant multiplicative factor to the functional integral, which can be safely omitted. This procedure enables us to arrive at the following approximate form for the replica partition function of the RLPM, Eq. (\[EQ:ZRLPMVOP\]): ]{}]{} $$\begin{aligned} Z_{1+n}\approx e^{-\HVOPSP/\BoltCons T} \int \ReplProdOne \mathcal{D} \DefoPosi\REPa\, e^{-H_{\VOP}^{\DefoScalPsi}/\BoltCons T},\end{aligned}$$ with $\HVOPSP$ and $H_{\VOP}^{\DefoScalPsi}$ given in Eqs. (\[EQ:HSP\]) and (\[EQ:GibbEnerGoldSton\]). In our phenomenological model, introduced in Section \[SEC:PhenFE\], the disorder-averaged Gibbs free energy is given by $$\begin{aligned} \label{EQ:GPhen} \lda \GibbFreeEner \rda = -\BoltCons T \lim_{n\to 0} \frac{\partial}{\partial n}\Big\vert_{n \to 0} \mathbb{Z}_{n}\, ,\end{aligned}$$ with $$\begin{aligned} \mathbb{Z}_{n}=\int \ReplProdOne \mathcal{D} \DefoPosi\REPa\, e^{-\FreeEnerPhen_n/\BoltCons T},\end{aligned}$$ where $\FreeEnerPhen_n$ is given in Eq. (\[EQ:FEPn\]). The Gibbs free energies—for the RLPM and for the phenomenological model—are supposed to be equal, up to an additive constant, because they both capture the Gibbs free energy of a soft random solid system having elastic deformations. It is this equality that we shall now exploit to characterize, via the RLPM, the distribution of quenched disorder $\ProbG$ in the phenomenological model. Actually, we can directly identify the Hamiltonian $\FreeEnerPhen_n$ with $H_{\VOP}^{(\DefoPosi)}$, because the functional integration over the replicated deformation field $\DefoPosi\REPa$ is common to both the RLPM and the phenomenological model, in the sense that the deformation $z\to\DefoPosi\REPa(z)$, in both the RLPM and the phenomenological model, takes the *state right after linking* as the reference state. Therefore, we have the relation $$\begin{aligned} \label{EQ:Comp} \FreeEnerPhen_n = H_{\VOP}^{(\DefoPosi)} .\end{aligned}$$ Notice that, here, the RLPM Hamiltonian is $H_{\VOP}^{(\DefoPosi)}$, not $H_{\VOP}^{\DefoScalPsi}$, because it is $H_{\VOP}^{(\DefoPosi)}$ that is the energy measured from the *state right after linking*, which matches the definition of reference state in the phenomenological theory, whereas $H_{\VOP}^{\DefoScalPsi}$ is the energy measured from the *stationary point*, which differs from the *state right after linking* by the energy associated with the contraction $\ContEner(\Contraction)$, given in Eq. (\[EQ:CE\]). By the comparison stated Eq. 
(\[EQ:Comp\]), we arrive at the following determination of the quenched-disorder characteristics of the phenomenological model (LHS) in terms of the elastic properties of the RLPM (RHS): \[EQ:CompDA\] $$\begin{aligned} \lda \NonLocaKern(z_1,z_2) \rda_c &=& \KernOne(z_1,z_2), \label{EQ:KOne}\\ \lda \NonLocaKern(z_1,z_2) \, \NonLocaKern(z_3,z_4) \rda_c &=& \KernTwo(z_1,z_2,z_3,z_4), \label{EQ:KTwo}\\ \BulkModuZero &=& \ExclVolu \PartDens^2 , \label{EQ:BMZ}\end{aligned}$$ where $\PartDens \equiv \PartNumb / \Volu_0$ is the number-density of the particles in the preparation state. The functions $\KernOne(z_1,z_2)$ and $\KernTwo(z_1,z_2,z_3,z_4)$ are defined in Appendix \[APP:HGS\], and have been discussed in Section \[SEC:EnerGoldSton\]. Phenomenological model at fixed disorder: Relaxation, excitation, and deformation {#SEC:relaxation} ================================================================================= Relaxation to the stable state at fixed disorder {#SEC:subrelaxation} ------------------------------------------------ The free energy $\FreeEnerPhen$ provides a natural description of the heterogeneous elasticity of soft random solids. However, its stable state is not $\DefoPosi(z)=z$ (i.e., the state $\DefoPosi(z)=z$ does not satisfy the stationarity condition $\delta \FreeEnerPhen/\delta \DefoPosi(z)=0$). There are two reasons for this instability. First, the attraction $\NonLocaKern$ causes a small, spatially uniform, contraction \[the fractional volume change being $O(1/\BulkModuZero)$\]. Second, the randomness of $\NonLocaKern$ additionally destabilizes this contracted state, causing the adoption of a randomly deformed stable state. We denote this relaxation as $$\begin{aligned} \label{EQ:Defiv} z\to\tz\equiv\Contraction z + \RelaRand(z) ,\end{aligned}$$ in which $\Contraction$ describes the uniform contraction and $\RelaRand(z)$ describes the random local deformation. This relaxation process can be understood in the setting of the preparation of a sample of rubber via a hypothetical instantaneous cross-linking: cross-linking not only drives the liquid-to-random-solid transition but it also generates a uniform inward pressure, as well as introducing random stresses, as shown in Fig. \[FIG:Rela\]. As a result, immediately after cross-linking the state is not stable, but relaxes to a new stable state, determined by the particular realization of randomness created by the cross-linking. In the discussion that follows, we shall use the following nomenclature: $$\begin{aligned} \DefoPosi(z)\!\!&=&\!\!z \!\iff\! \textrm{the state right after linking,} \\ \DefoPosi(z)\!\!&=&\!\!\tz\equiv\Contraction z + \RelaRand(z) \!\iff\! \textrm{the relaxed state.}\,\end{aligned}$$ The state right after linking, here, is the same as the one just defined, following Eq. (\[EQ:HSepa\]), which has energy $H_{\VOP}^{(0)}$ in the RLPM, because they both describe the state that has undergone no deformation since being linked. ![Schematic plot of the relaxation process under a fixed pressure. (a) The liquid state with no linking. (b) The *state right after linking*. Cross-links are added to the system, and an infinite cluster is formed. This state is not stable, because of the inward pressure and local stresses. (c) The *relaxed state*. 
The system undergoes a uniform contraction and random local deformations that release the unbalanced stress introduced by cross-linking.[]{data-label="FIG:Rela"}](relaxation.eps){width=".45\textwidth"} By writing the relaxation as $z\to\tz \,[\,\equiv\Contraction z + \RelaRand(z)]$ we are making the approximations that the contraction $\Contraction$ is homogeneous and that the random deformations $\RelaRand(z)$ are pure shear, which means that any randomness in the *bulk* deformation is ignored. This can be understood by looking at the orders of magnitude of the deformations. The uniform contraction is of order $O(\NonLocaKernZero/\BulkModuZero)$, and the random local shear deformations are of order $O(\NonLocaKernOne/\NonLocaKernZero)$. The random local bulk deformation is, however, of order $O(\NonLocaKernOne/\BulkModuZero)$, and is thus much smaller than the other two deformations, given the assumptions that (i) the fluctuations of the shear modulus are much smaller than the mean value (the shear modulus corresponds to $\NonLocaKern$, as we shall see later), and (ii) the shear modulus is much smaller than the bulk modulus. With these assumptions, we can insert the form $\DefoPosi(z)=\tz(z)$ into the stationarity condition, and solve for the relaxed state, which is characterized by $\Contraction$ and $\RelaRand(z)$. As we have just discussed, for the contraction, only the homogeneous part is admitted, so the variational equation for $\Contraction$ assumes $\NonLocaKernOne=0$ and thus $\RelaRand(z)=0$; stationarity then requires $$\begin{aligned} \label{EQ:StabGlob} \frac{\partial \FreeEnerPhen}{\partial \Contraction} = 0 .\end{aligned}$$ Thus, for the present model, Eq. (\[EQ:phenom\_model\]), we have $$\begin{aligned} \label{EQ:StabCont} \!\!0\!\!&=&\!\!\frac{\partial }{\partial \Contraction} \Big\lbrace \frac{1}{2} \int dz_1 \, dz_2 \, \NonLocaKernZero (z_1,z_2) \, (\Contraction^2-1)\, \vert z_1-z_2 \vert ^2 \nonumber\\ &&\quad\quad +\frac{\BulkModuZero}{2}\int dz \big(\Contraction^{\Dime}-1 \big)^2 \Big\rbrace\nonumber\\ \noalign{\medskip} &=& \Volu_0 \big( \Dime \, \MeanSheaModu \, \Contraction + \BulkModuZero \, \big(\Contraction^{\Dime}-1 \big) \Dime \Contraction^{\Dime-1} \big) .\end{aligned}$$ By solving this equation to leading order in $\MeanSheaModu/\BulkModuZero$, we obtain $$\begin{aligned} \label{EQ:SoluCont} \Contraction \approx 1-({\MeanSheaModu}/{d\BulkModuZero}) ,\end{aligned}$$ where $$\label{EQ:MSheaModu} \MeanSheaModu\equiv \frac{1}{\Dime} \int dz_2\,(z_1-z_2)^2\,\NonLocaKernZero(z_1-z_2).$$ As we shall see below, $\MeanSheaModu$ is actually the mean shear modulus. The stationarity condition for the random local deformation $\RelaRand(z)$ reads $$\begin{aligned} \label{EQ:StabLoca} \frac{\delta \FreeEnerPhen}{\delta \RelaRand_a(z)} = 0 ,\end{aligned}$$ and for the present model, Eq. 
(\[EQ:phenom\_model\]), this condition becomes $$\begin{aligned} \label{EQ:RelaV} && 2(\Contraction z_a + \RelaRand_a(z))\int dz_2 \NonLocaKern(z,z_2) \nonumber\\ && -2\!\int dz_2\NonLocaKern(z,z_2)\big(\Contraction z_{2,a} + \RelaRand_a(z_2)\big) \nonumber\\ && -\BulkModuZeroP \partial_{a} \big(\partial_i \RelaRand_i(z)\big) =0 .\end{aligned}$$ Here, the last term, $\BulkModuZeroP \partial_{a} \big(\partial_i \RelaRand_i(z)\big)$, is associated with density variations, and arises from the variation of the second term in the elastic free energy (\[EQ:phenom\_model\]), which is $\frac{\BulkModuZero}{2}\int dz \big\lbrack\textrm{det}\big(\partial\DefoPosi_i(z)/\partial z_j\big)-1 \big\rbrack^2$. In the following discussion we shall call this the bulk term in the elastic free energy, and we have made the definition $\BulkModuZeroP\equiv\Contraction^{2d-2}\BulkModuZero$; see Appendix \[APP:Relaxation\] for the expansion. The stationarity equation (\[EQ:RelaV\]) for $\RelaRand(z)$ can be solved perturbatively, by assuming that $\NonLocaKernZero$ is of zeroth order and that $\NonLocaKernOne$ and $\RelaRand(z)$ are of first order; see Appendix \[APP:Relaxation\] for the explicit calculation. In momentum space, the result is $$\begin{aligned} \label{EQ:SoluV} \vec{\RelaRand}_{p} &=& \frac{\PPerpT \cdot \vec{\RandForc}_{p}} {2\DiffG_p} +\frac{\PLongT \cdot \vec{\RandForc}_{p}} {\BulkModuZeroP \vert p \vert^2 + 2\DiffG_p} ,\end{aligned}$$ where $p$ is a $\Dime$-dimensional momentum vector. The quantities $\vec{\RandForc}_{p}$ and $\DiffG_p$ are defined as $$\begin{aligned} \RandForc_{a,p}&\equiv& -2\Contraction \Big( i\frac{\partial}{\partial p_{a}} \NonLocaKernOne_{p,0} -i\frac{\partial}{\partial p'_{a}}\Big\vert_{p'=0} \NonLocaKernOne_{p,p'} \Big), \nonumber\\ \DiffG_p &\equiv& \NonLocaKernZero_{0}-\NonLocaKernZero_{p} .\end{aligned}$$ Notice that $\RandForc_{a,p}$ is actually the random force in the state that is contracted but not yet equilibrated for randomness. The definitions of the projection operators $\PLongT$ and $\PPerpT$ are, respectively, $$\begin{aligned} \label{EQ:DefiProjText} \PLong_{ij} &\equiv& p_i p_j /p^2 , \nonumber\\ \PPerp_{ij} &\equiv& \delta_{ij}-p_i p_j /p^2 .\end{aligned}$$ We use bold letters to denote $\Dime$-dimensional rank-$2$ tensors, and add an overhead arrow \[such as $\vec{\RelaRand}$\] to denote vectors, when needed. [[In the solution (\[EQ:SoluV\]), the second term is much smaller than the first term, due to the large bulk modulus $\BulkModuZeroP$. In the incompressible limit (i.e., $\BulkModuZero \to \infty$), we have $$\begin{aligned} \vec{\RelaRand}_{p}=\frac{\PPerpT \cdot \vec{\RandForc}_{p}} {2\DiffG_p} ,\end{aligned}$$ which is a purely transverse field, meaning that it satisfies $p_i\,\RelaRand_{i,p}=0$ or, equivalently, that $\partial_i\,\RelaRand_{i}(x)=0$, which is the only type of deformation that can occur in an incompressible medium.]{}]{} Excitation around the relaxed state at fixed disorder {#SEC:EFERS} ----------------------------------------------------- In order to obtain a description of the elasticity of the relaxed state $z\to\tz$, which is a stable state and thus relevant for experimental observations, we re-expand the phenomenological elastic free energy (\[EQ:phenom\_model\]) around the relaxed state. This amounts to taking the relaxed state $\tz(z)\,[\,\equiv\Contraction z+\RelaRand(z)]$ as the new reference state, and deriving the elastic free energy for deformations (i.e., excitations) relative to this state. 
To do this, we study the free energy for the following elastic deformation: $$\begin{aligned} z\to \DefoPosi(z)=\tz(z) + \deformation(z) ,\end{aligned}$$ where $\deformation(z)$ is a deformation away from the relaxed reference state. It will be convenient to make the following change of the independent variables: $$\begin{aligned} z\to\tz(z)\equiv\Contraction z+\RelaRand(z) ,\end{aligned}$$ which has the Jacobian determinant $$\begin{aligned} \label{EQ:zJaco} \Jaco (z) \equiv \Big\vert \frac{\partial \tz_i}{\partial z_j}\Big\vert \approx \Contraction^{\Dime} \big(1+\Contraction^{-1}\partial_j\RelaRand_j(z)\big).\end{aligned}$$ With this change of variables, the phenomenological elastic free energy is expressed as $$\begin{aligned} \label{EQ:REExpaEner} \FreeEnerPhen &=& \FreeEnerPhen_0 + \frac{1}{2} \int d\tz_1\, d\tz_2\, \Jaco(z_1)^{-1}\,\Jaco(z_2)^{-1} \tNonLocaKern (\tz_1,\tz_2)\Big( \big\vert \tz_1+\tu(\tz_1)-\tz_2-\tu(\tz_2)\big\vert^2 - \big\vert \tz_1-\tz_2\big\vert^2 \Big) \nonumber\\ \noalign{\medskip} &&+ \frac{\BulkModuZero}{2} \int d\tz\, \Jaco(z)^{-1} \Big\lbrace \Jaco(z) \textrm{det} \Big( \frac{\partial \tz_i+\tu_i(\tz)}{\partial \tz_j} \Big)-1 \Big\rbrace^2 ,\end{aligned}$$ where we have made the definitions $$\begin{aligned} \label{EQ:tGtu} \tNonLocaKern(\tz_1,\tz_2)&\equiv&\NonLocaKern(z(\tz_1),z(\tz_2)), \\ \tu(\tz)&\equiv&\deformation(z(\tz)), \\ \tDefoPosi(\tz)&\equiv& \tz+\tu(\tz),\end{aligned}$$ with $z(\tz)$ denoting the mapping of the mass point $\tz$ in relaxed state back to the mass point $z$ in the state right after linking, i.e., the inverse of the $\tz(z)$ mapping. The change of the free energy due to choosing a different reference state is defined as $$\begin{aligned} \!\!\FreeEnerPhen_0\!\equiv\frac{1}{2} \!\int\!\! dz_1\, dz_2 \, \NonLocaKern (z_1,z_2)\big(\vert \tz_1-\tz_2\vert^2 \!\!-\! \vert z_1-z_2\vert^2 \big) ,\end{aligned}$$ which is a constant for any given realization of the randomness. In order to obtain a direct description of the elastic energy relative to the relaxed state, we expand the quenched random nonlocal kernel in the relaxed state, $\tNonLocaKern(\tz_1,\tz_2)$, defined in Eq. (\[EQ:tGtu\]) as $$\begin{aligned} \label{EQ:tNonLocaKernText} \!\!\!\! \tNonLocaKern_{\tp_1,\tp_2} \!\! &\approx& \!\! \NonLocaKernZero_{\tp_1,\tp_2}+\NonLocaKernOne_{\tp_1,\tp_2} \nonumber\\ &&\!\! -i \big( \tp_1\cdot\!\vec{\RelaRand}_{(\tp_1+\tp_2)} \NonLocaKernZero_{\tp_2} \!+\!\tp_2\cdot\!\vec{\RelaRand}_{(\tp_1+\tp_2)} \NonLocaKernZero_{\tp_1} \big),\,\end{aligned}$$ where $\vec{\RelaRand}$ is the random local deformation field defined in Eq. (\[EQ:Defiv\]). We then expand the elastic free energy (\[EQ:REExpaEner\]) as a power series in the small deformation $\tu(\tz)$ away from the relaxed state $\tz$. The computation of this expansion is given in Appendix \[APP:ReExpandFreeEner\]. As we shall show in Section \[SEC:Heterogeneity\], the statistics of the quenched randomness present in this phenomenological theory can be determined via a comparison with the RLPM. Through this comparison, we find that, the lengthscale of the nonlocal kernel $\NonLocaKern$ is actually the typical localization length, which is small compared to the lengthscales on which our theory of elasticity applies, because the deformations in this theory are associated with Goldstone fluctuations in the RLPM, which feature lengthscales larger than the typical localization length. 
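As an aside, the leading-order Jacobian (\[EQ:zJaco\]) used in this change of variables can be checked symbolically; a minimal sympy sketch in $d=2$ (the dimension and the symbolic gradient entries are illustrative only):

```python
import sympy as sp

zeta, eps = sp.symbols("zeta epsilon", positive=True)
# Symbolic entries of the gradient of the random relaxation field v(z),
# scaled by a bookkeeping parameter eps to organize the expansion
v = sp.Matrix(2, 2, sp.symbols("v11 v12 v21 v22"))
J = (zeta * sp.eye(2) + eps * v).det()

# Compare with zeta^d (1 + zeta^{-1} div v) to first order in eps, for d = 2
approx = zeta**2 * (1 + eps * v.trace() / zeta)
print(sp.simplify(sp.series(J, eps, 0, 2).removeO() - approx))   # 0
```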
Given this separation of lengthscales, it is reasonable to make a *local expansion* of the elastic energy (\[EQ:REExpaEner\]) relative to the relaxed state in terms of the strain tensor $\StraTensT$. The resulting expression, which we shall call the local form of the elastic energy relative to the relaxed state, takes the form of Lagrangian elasticity. As will be seen in Section \[SEC:Corr\], the advantage of this local form of the elastic energy is that one can extract from it the *large-distance behavior* of the disorder correlators of the elastic parameters, which turn out to be *universal*. The calculation for this local expansion is given in Appendix \[APP:ReExpandFreeEner\]. The resulting local form of the elastic energy is $$\begin{aligned} \label{EQ:FELagr} \FreeEnerPhen &=& \int d\tz \big\lbrace \textrm{Tr}(\StreT(\tz)\cdot\tStraTensT(\tz)) \nonumber\\ && + \SheaModu(\tz) \textrm{Tr} \tStraTensT (\tz)^2 + \frac{\BulkModu(\tz)}{2} (\textrm{Tr} \tStraTensT (\tz))^2 \big\rbrace ,\end{aligned}$$ where the strain tensor relative to the relaxed state is defined as $$\begin{aligned} \tStraTens_{ij} (\tz) \equiv \frac{1}{2}\Big(\frac{\partial \tu_j}{\partial \tz_i}+\frac{\partial \tu_i}{\partial \tz_j} +\frac{\partial \tu_l}{\partial \tz_i}\frac{\partial \tu_l}{\partial \tz_j}\Big) ,\end{aligned}$$ and the heterogeneous elastic parameters, viz., the residual stress $\StreT$, the shear modulus $\SheaModu$, and the bulk modulus $\BulkModu$, are given in momentum space by \[EQ:RelaStre\] $$\begin{aligned} \Stre_{ij,\tp} &=& -\frac{\partial^2}{\partial \tq_i \partial \tq_j} \Big\vert_{q=0} \NonLocaKernOne_{\tp-\tq,\tp} + i \delta_{ij} \frac{i\tp\cdot\vec{\RandForc}_{\tp}}{\vert \tp \vert ^2}\label{EQ:Stress}\\ && \quad -\frac{\RandForc_{a,\tp}}{\vert \tp \vert ^2}\big( \tp_{i} \PPerp_{ja,\tp} + \tp_{j} \PPerp_{ia,\tp} \big) , \nonumber\\ \SheaModu_{\tp} &=& \MeanSheaModu \Volu_0 \delta_{\tp} - \frac{i\tp\cdot\vec{\RandForc}_{\tp}}{\vert \tp \vert ^2} , \label{EQ:Shear}\\ \BulkModu_{\tp} &=& \BulkModuZero \Volu_0 \delta_{\tp} + 2\Big\lbrace \frac{i\tp\cdot\vec{\RandForc}_{\tp}}{\vert \tp \vert ^2} -\MeanSheaModu \Volu_0 \delta_{\tp} \Big\rbrace . \label{EQ:Bulk}\end{aligned}$$ \[Eq:ThreeResults\] In the expression for $\Stre_{ij,\tp}$ we have kept terms only to leading order in the momentum $\tp$ (see Appendix \[APP:ReExpandFreeEner\] for the derivation) [^8]. It is worth mentioning that, to leading order in the momentum $\tp$, this residual stress satisfies the stability condition $\tp_i\,\Stre_{ij,\tp}=0$, because the reference state of this elastic free energy, the relaxed state, is a stable state. This will also be shown more directly in the final results in Section \[SEC:Corr\]. Nonaffine deformation at fixed disorder {#SEC:NAD} --------------------------------------- Because of the quenched disorder present in the elastic parameter $\NonLocaKern$ of our phenomenological model, Eq. (\[EQ:phenom\_model\]), upon the application of external stress, the system will respond by adopting a strain field that is nonaffine. This means that the strain tensor will be spatially inhomogeneous even though the applied stress is homogeneous. Such nonaffine deformations reflect the quenched randomness of the elasticity, and can be derived for a given realization of the disorder and a given macroscopic deformation imposed by external stress. Because the deformation is the quantity that is directly measurable in experiments, it is useful to derive the relationship between the nonaffine deformation and the quenched randomness in the elastic parameters. 
Then, by comparing with the RLPM, we shall obtain a statistical description of the nonaffine deformations, as we shall discuss in Section \[SEC:SNAD\]. To study nonaffine deformations, it is convenient to take the state right after linking \[i.e., the state $\DefoPosi(z)=z$\] as the reference state, and to re-derive the relaxation in the presence of *a given deformation* $\DefoGrad$. This is equivalent to applying the deformation $\DefoGrad$ to the relaxed state, and then letting the system further relax for this given deformation, as shown in Fig. \[FIG:RelaDefo\]. The relaxed state for this given deformation $\DefoGrad$, which we term the relaxed deformed state, is described by the deformation $z\to\tzL (z) $. We suppose that $$\begin{aligned} \label{EQ:tzl} \tzL (z) = \Contraction \DefoGrad \cdot z + \RelaRandL(z) .\end{aligned}$$ For simplicity, we assume that the deformation $\DefoGrad$ is pure shear (i.e., $\textrm{det}\,\DefoGrad=1$). ![Illustration of the *relaxed deformed state*. Theoretically, the relaxed deformed state can be reached via two routes: in the first, the upper route in this plot, one applies the deformation $\DefoGrad$ to the already-relaxed state, and lets the system re-relax while keeping the external deformation $\DefoGrad$, thereby reaching the relaxed deformed state; in the second route, the lower route in this plot, one deforms the system before relaxation is allowed, and then lets the system relax while maintaining the deformation $\DefoGrad$. These two routes reach the same final state, the relaxed deformed state, which characterizes the *nonaffine deformations* that the system undergoes under external deformation. For convenience of calculation, we use the lower route to determine the nonaffine deformations.[]{data-label="FIG:RelaDefo"}](deformation.eps){width=".45\textwidth"} Next, we use the two stationarity conditions, Eqs. (\[EQ:StabGlob\]) and (\[EQ:StabLoca\]), to solve for the relaxed deformed state. The condition (\[EQ:StabCont\]) for the homogeneous contraction $\Contraction$ is unchanged, so we still obtain $\Contraction \approx 1-({\MeanSheaModu}/{d\BulkModuZero})$. For the stability condition for the random local deformations $\RelaRandL$, we follow a similar expansion to the one given in Eqs. (\[EQ:DetPPExp\]) and (\[EQ:BulkTermExp\]), arriving at $$\begin{aligned} && 2(\Contraction \DefoGrad_{ai}z_i+(\RelaRandL)_{a}(z)) \int dz_2\, \NonLocaKern (z,z_2) \nonumber\\ && \quad -2 \int dz_2\, \NonLocaKern (z,z_2) \,(\Contraction \DefoGrad_{ai}z_{2,i}+(\RelaRandL)_{a}(z_2)\,) \nonumber\\ && \quad - \BulkModuZeroP \, \DefoGrad^{-1}_{ia}\DefoGrad^{-1}_{jb}\,\partial_i \partial_j (\RelaRandL)_{b}(z) =0 .\end{aligned}$$ As with the derivation given in Section \[SEC:relaxation\], we can solve this equation perturbatively, to leading order in $\NonLocaKernOne$ and $\RelaRandL$; see Appendix \[APP:DefoRela\] for details. The result is $$\begin{aligned} (\vec{\RelaRandL})_p = \Bigg\lbrace \frac{\PPerpL}{2 D_p} + \frac{\PLongL}{\BulkModuZeroP \trOne \vert p \vert ^2+2 D_p} \Bigg\rbrace \cdot (\vec{\RandForcL})_p \, ,\end{aligned}$$ where $\PPerpL$ and $\PLongL$, defined in Appendix \[APP:DefoRela\], are the *deformed* versions of the projection operators, and $\trOne$ is also defined in Appendix \[APP:DefoRela\]. 
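Before turning to the nonaffine deformation field, it may help to recall the algebra of the undeformed projectors of Eq. (\[EQ:DefiProjText\]); a minimal numpy sketch (with an arbitrary illustrative momentum) checking idempotency, mutual orthogonality, and transversality:

```python
import numpy as np

p = np.array([0.3, -1.2, 0.7])                    # illustrative d = 3 momentum vector
PL = np.outer(p, p) / np.dot(p, p)                # longitudinal projector  p_i p_j / p^2
PT = np.eye(3) - PL                               # transverse projector    delta_ij - p_i p_j / p^2

print(np.allclose(PL @ PL, PL), np.allclose(PT @ PT, PT))   # idempotent
print(np.allclose(PL @ PT, 0.0))                            # mutually orthogonal
print(np.allclose(PT @ p, 0.0))                             # transverse: annihilates p
print(np.allclose(PL + PT, np.eye(3)))                      # completeness
```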
In the literature the nonaffine deformations are often characterized by the nonaffine deformation field $\NADF$, which is defined in momentum space as $$\begin{aligned} \label{EQ:DefNADF} (\NADF) _p &\equiv& \DefoGrad^{-1} \cdot (\tzL)_p - \tz_p \, , \nonumber\\ &=& \DefoGrad^{-1} \cdot (\RelaRandL)_p - \RelaRand_p ,\end{aligned}$$ where $\tz$ denotes the relaxed state of the undeformed system (as discussed in Section \[SEC:relaxation\]), and $\tzL$ is the relaxed deformed state. Inserting the solution for $\RelaRandL$ into the expression for nonaffine deformation field, Eq. (\[EQ:DefNADF\]), we have $$\begin{aligned} \label{EQ:NADFp} (\NADF) _p &=& 2i\Contraction \Bigg\lbrace \frac{\BulkModuZeroP \vert p \vert ^2 }{(\BulkModuZeroP \vert p \vert ^2 \trOne + 2D_p)2D_p} \MetrTens^{-1} \nonumber\\ && \quad - \frac{\BulkModuZeroP \vert p \vert ^2 }{(\BulkModuZeroP \vert p \vert ^2 + 2D_p)2D_p} \IdenT \Bigg\rbrace \cdot \PLongT \cdot S_p \, ,\end{aligned}$$ where $$\begin{aligned} S_p \equiv \frac{\partial}{\partial p_{1,a}} \NonLocaKernOne_{p_1,0} -\frac{\partial}{\partial p_{2,a}}\Big\vert_{p_2=0} \NonLocaKernOne_{p_1,p_2} ,\end{aligned}$$ and $$\begin{aligned} \MetrTens \equiv \DefoGrad\Tran\DefoGrad\end{aligned}$$ In the incompressible limit, we have $$\begin{aligned} \label{EQ:NADFpInco} (\NADF) _p &\approx& 2i\Big\lbrace \frac{1 }{2D_p \trOne} \MetrTens^{-1} - \frac{1 }{2D_p} \IdenT \Big\rbrace \cdot \PLongT \cdot S_p \,\, .\end{aligned}$$ In Section \[SEC:SNAD\] we shall compute the mean value and disorder correlator of this nonaffine deformation field. Characterizing the physical elastic quenched disorder {#SEC:Heterogeneity} ===================================================== In Section \[SEC:EFERS\] we addressed three elastic parameter fields—the residual stress $\StreT$ and the Lamé coefficients $\SheaModu_{\tp}$ and $\BulkModu_{\tp}$—which characterize the elastic energy relative to the *relaxed state* and are therefore the physically relevant parameters for describing spatially heterogeneous elasticity. There, we showed how these parameters are determined, for a given realization of the quenched disorder, by $\NonLocaKern$, i.e., the random nonlocal kernel of the phenomenological model, giving the connection in Eqs. (\[EQ:RelaStre\]). In Section \[SEC:Phen\] we obtained a statistical characterization of $\NonLocaKern$ via a comparison with the semi-microscopic RLPM, giving the results in Eqs. (\[EQ:CompDA\]). Thus, we have the ingredients for constructing a statistical characterization of the physical position-dependent elastic parameters, as we do in the present section. Disorder averages of the elastic parameters {#SEC:DAEP} ------------------------------------------- Our first step is to determine the disorder average of the nonlocal kernel in the relaxed state $\tNonLocaKern$. To do this, we note that $\tNonLocaKern$ is related to $\NonLocaKern$ via Eq. (\[EQ:tGtu\]); the leading-order expansion of this relationship is given in Eq. (\[EQ:tNonLocaKernText\]). By taking the disorder average on both sides of Eq. (\[EQ:tNonLocaKernText\]), we find that only the first term on the RHS survives, because all other terms are linear in the fluctuation part of $\NonLocaKern$ and this vanishes upon disorder–averaging. 
Thus, we find that the disorder average of $\tNonLocaKern$ is given by $$\begin{aligned} \lda \tNonLocaKern(z_1,z_2) \rda = \lda \NonLocaKern(z_1,z_2) \rda = \KernOne(z_1,z_2) ,\end{aligned}$$ which means that the disorder average of $\tNonLocaKern$ is the same as the disorder average of $\NonLocaKern$, where we have dropped the tilde on $\tz$ because now we discuss elastic parameters in the relaxed reference state only. It is worth noting that, as expected, because $\KernOne(z_1,z_2)$ is independent of the center of mass coordinate $(z_1+z_2)/2$, the disorder average of the nonlocal kernel $\lda \tNonLocaKern(z_1,z_2) \rda$ is translationally and rotationally invariant, depending only on $\vert z_1-z_2\vert$. This is a consequence of the macroscopic translational and rotational invariance of the random solid state discussed in Section \[SEC:SSB\]. Second, we determine the disorder averages of the position-dependent elastic parameters in the local form of the elastic energy relative to the relaxed state, including the residual stress $\StreT$, the shear modulus $\SheaModu$, and the bulk modulus $\BulkModu$. For any given realization of the disorder, these elastic parameters are related to $\NonLocaKern$ and $\BulkModuZero$ via Eqs. (\[EQ:RelaStre\]). Thus, as with the nonlocal kernel, we obtain the disorder averages of these elastic parameters via the statistics of $\NonLocaKern$. The disorder average of the residual stress $\StreT$ is straightforwardly seen to vanish: $$\begin{aligned} \lda \Stre_{ij}(z) \rda =0 .\end{aligned}$$ Thus, the residual stress is a quenched random field with zero mean. As for the shear modulus $\SheaModu$, its disorder average is given by $$\begin{aligned} \lda \SheaModu(z) \rda &=& \MeanSheaModu = \frac{1}{\Dime} \int dz_2 \NonLocaKernZero(z-z_2) \vert z-z_2\vert^2 \nonumber\\ &=& \PartDens \BoltCons T \, \UnivPara ,\end{aligned}$$ with $\UnivPara$ given in Eq. (\[EQ:theta\]). This has been obtained in Ref. [@Ulrich2006]. This mean shear modulus is linear in temperature $T$, reflecting its entropic nature, a result that confirms this aspect of the classical theory of rubber elasticity. As for the disorder average of the bulk modulus $\BulkModu$, it is obtained via Eq. (\[EQ:BMZ\]), which gives $$\begin{aligned} \lda \BulkModu(z) \rda = \ExclVolu \PartDens^2 \, .\end{aligned}$$ As one might expect, the mean bulk modulus depends on the particle number density $\PartDens$ and the strength of the excluded-volume interaction $\ExclVolu$. The disorder averages of these three elastic parameters of the local form of the elastic energy relative to the relaxed state (viz., $\lda \Stre_{ij}(z) \rda$, $\lda \SheaModu(z) \rda$ and $\lda \BulkModu(z) \rda$) are all spatially homogeneous and isotropic; this is also a consequence of the macroscopic translational and rotational invariance of the random solid state discussed in Section \[SEC:SSB\]. Disorder correlators of the elastic parameters {#SEC:Corr} ---------------------------------------------- ### Disorder correlator of the nonlocal kernel {#SEC:CorrNK} The nonlocal kernel $\tNonLocaKern$ characterizes the quenched random nonlocal interactions in the relaxed state. Its statistics can be described via its moments. In Section \[SEC:DAEP\], we already determined the disorder average of $\tNonLocaKern$; in the present section we determine the disorder correlator of $\tNonLocaKern$. To do this, we use Eq. (\[EQ:tNonLocaKernText\]), which relates $\tNonLocaKern$ to any given random configuration of $\NonLocaKern$. 
Using the disorder correlator of $\NonLocaKern$, Eq. (\[EQ:KTwo\]), we then arrive at the disorder correlator of $\tNonLocaKern$, viz., $\lda\tNonLocaKern(z_1,z_2)\tNonLocaKern(z_3,z_4)\rda$. This is a combination of Gaussian and delta-function factors in the separations of the six pairs formed by the four points $\{z_1,z_2,z_3,z_4\}$. The derivation and the momentum-space expression of the result for this disorder correlator are given in Appendix \[APP:MM\]. To reveal the universal characteristics of the disorder correlator $\lda\tNonLocaKern(z_1,z_2)\tNonLocaKern(z_3,z_4)\rda$, we investigate its large-distance behavior. The nonlocal kernel itself describes a short-distance attractive interaction, because the disorder average $\lda\tNonLocaKern(z_1,z_2)\rda$ is a short-ranged function in $\vert z_1-z_2\vert$ characterized by the typical localization length. Thus, to extract the long-distance behavior of $\lda\tNonLocaKern(z_1,z_2)\tNonLocaKern(z_3,z_4)\rda$, we take the limit that the two pairs $\{z_1,z_2\}$ and $\{z_3,z_4\}$ are far apart from one another, but $z_1$ is near $z_2$, and $z_3$ is near $z_4$. This is precisely the construction of the local description of elasticity of the relaxed state introduced in Section \[SEC:EFERS\], featuring the quenched random residual stress and Lamé-coefficient fields. We shall now discuss the disorder correlators of these elastic parameters in the local description of elasticity. ### Disorder correlators of the elastic parameters in the local form of the elastic energy {#SEC:Local} The elastic parameters in the local form of the elastic energy, including the residual stress $\StreT$, the shear modulus $\SheaModu$, and the bulk modulus $\BulkModu$, are related to any given configuration $\NonLocaKern$ via Eq. (\[EQ:RelaStre\]). Using these relations and the disorder correlator of $\NonLocaKern$, Eq. (\[EQ:KTwo\]), we arrive at the disorder correlators of the elastic parameters. The details of this calculation are given in Appendix \[APP:CFLL\]; we summarize the results in Table \[TABLE:CorrTable\].

|                 | $\Stre_{kl,p^{\prime}}$  | $\SheaModu_{p^{\prime}}$ | $\BulkModu_{p^{\prime}}$ |
|----------------:|:------------------------:|:------------------------:|:------------------------:|
| $\Stre_{ij,p}$  | $\UnivPara A_{ijkl}$     | $-2\UnivPara\PPerp_{ij}$ | $4\UnivPara\PPerp_{ij}$  |
| $\SheaModu_{p}$ | $-2\UnivPara\PPerp_{kl}$ | $\Corr$                  | $-2\Corr$                |
| $\BulkModu_{p}$ | $4\UnivPara\PPerp_{kl}$  | $-2\Corr$                | $4\Corr$                 |

: Disorder correlators of the elastic parameters in the local form of the elastic energy, as derived in Appendix \[APP:CFLL\]; each entry gives the connected correlator of the row and column quantities.[]{data-label="TABLE:CorrTable"}

The correlation function $\lda\StreT\StreT\rda$ features the tensor $A_{ijkl}$, which is defined as $$\begin{aligned} A_{ijkl} \equiv 2 \PPerp_{ij} \PPerp_{kl} + \PPerp_{ik} \PPerp_{jl} + \PPerp_{il} \PPerp_{jk}\, ,\end{aligned}$$ where the projection operator $\PPerpT$ is defined in Section \[SEC:relaxation\]. The stability condition on the residual stress field $\StreT$ requires that its Fourier transform vanishes when contracted with the momentum $p$. 
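This transversality follows directly from the structure of $A_{ijkl}$ and can be confirmed numerically; a minimal numpy sketch (with an arbitrary illustrative momentum direction):

```python
import numpy as np

p = np.array([1.0, 2.0, -0.5])                    # illustrative momentum
PT = np.eye(3) - np.outer(p, p) / np.dot(p, p)    # transverse projector

# A_ijkl = 2 PT_ij PT_kl + PT_ik PT_jl + PT_il PT_jk
A = (2 * np.einsum("ij,kl->ijkl", PT, PT)
     + np.einsum("ik,jl->ijkl", PT, PT)
     + np.einsum("il,jk->ijkl", PT, PT))

# p_i A_ijkl = 0: the residual-stress correlator is transverse to the momentum
print(np.allclose(np.einsum("i,ijkl->jkl", p, A), 0.0))   # True
```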
It is straightforward to see that this feature is obeyed by the correlation function $\lda\StreT\StreT\rda$ given in Table \[TABLE:CorrTable\], owing to the structure of $A$. The parameters $\UnivPara$ and $\Corr$, on which the correlators in Table \[TABLE:CorrTable\] depend, are given by $$\begin{aligned} \UnivPara \!&\equiv&\! -\frac{\LinkDens \LocaPart^2}{2}+\LinkDens \LocaPart -1+e^{-\LinkDens \LocaPart} , \\ \Corr \!&\equiv&\! -\frac{3}{2}\LinkDens \LocaPart^2 \!+(\LinkDens \LocaPart)^2 +\LinkDens \LocaPart -1 + e^{-\LinkDens \LocaPart} .\end{aligned}$$ ![Plot of $\LocaPart$, $\UnivPara$ and $\Corr$ as functions of the link density parameter $\LinkDens$.[]{data-label="FIG:ThetaGamma"}](theta.eps){width=".45\textwidth"} The dependence of $\UnivPara$ and $\Corr$ on the link density $\LinkDens$ is shown in Fig. \[FIG:ThetaGamma\]. The asymptotic behaviors of $\UnivPara$ and $\Corr$ are as follows: $$\begin{aligned} \UnivPara = \left\{ \begin{array}{ll} \frac{2}{3} (\LinkDens-1)^3 , & \textrm{for} \quad\LinkDens\gtrsim 1 ; \\ \LinkDens/2 , & \textrm{for} \quad\LinkDens\gg 1 ; \end{array} \right.\end{aligned}$$ $$\begin{aligned} \Corr = \left\{ \begin{array}{ll} \frac{14}{3} (\LinkDens-1)^3 , & \textrm{for} \quad\LinkDens\gtrsim 1 ; \\ \LinkDens^2 , & \textrm{for} \quad\LinkDens\gg 1 , \end{array} \right.\end{aligned}$$ where $\LinkDens$ is equal to the mean coordination number of the particles. Although the connected disorder correlators of the elastic parameters increase with the density of links $\LinkDens$, it is worth noting that the *relative fluctuations* are decreasing functions of $\LinkDens$. For example, the relative fluctuation in the shear modulus, defined as $\lda \SheaModu \SheaModu\rda_c / \lda \SheaModu \rda^2$, scales as $$\begin{aligned} \frac{\lda \SheaModu \SheaModu\rda_c}{ (\lda \SheaModu \rda)^2} \sim \frac{\Corr}{\UnivPara^2} ,\end{aligned}$$ which is shown in Fig. \[FIG:RelaFluc\]. This is a decreasing function of $\LinkDens$, which means that the relative fluctuations of the shear modulus actually decrease as links are added. ![Plot of the function $\phi(\LinkDens)=\frac{\Corr}{\UnivPara^2}$, which characterizes the relative fluctuations of both the shear and bulk moduli, and also the connected correlator of the nonaffine deformations, as will be shown in Section \[SEC:SNAD\].[]{data-label="FIG:RelaFluc"}](phi.eps){width=".45\textwidth"} It is interesting to look at the real-space behavior of the disorder correlators of the elastic parameters in the local form of the elastic energy. First, it is easy to see that the disorder correlators $\lda \SheaModu(0)\,\SheaModu(r) \rda$, $\lda \BulkModu(0)\,\BulkModu(r) \rda$ and $\lda \SheaModu(0)\,\BulkModu(r) \rda$ are short-ranged in real space: more precisely, they are proportional to $\delta_s(r)$, i.e., to a Dirac delta-function that has been smoothed on the scale of the short-distance cutoff. This cutoff should be taken to be the typical localization length, in order to validate the Goldstone-fluctuation framework for elastic deformations, because the Goldstone fluctuations in the RLPM are long-wavelength, low-energy excitations of the random solid state, and these do not touch lengthscales shorter than the typical localization length. By contrast, entities involving the residual stress have more interesting spatial correlations: in three dimensions and at large lengthscales we find that $$\begin{aligned} \!\!\!\!\!\!\lda \Stre_{ij}(0) \, \Stre_{kl}(\vec{r})\rda_c \!\!\!&=&\!\! 
\frac{(\BoltCons T)^2 \PartDens \UnivPara}{\pi \vert \vec{r}\vert^3} B_{ijkl}\, , \\ \!\!\!\!\!\!\lda \Stre_{ij}(0) \, \SheaModu(\vec{r})\rda_c \!\!\!&=&\!\!\! - \frac{(\BoltCons T)^2 \PartDens \UnivPara}{\pi \vert \vec{r}\vert^3} \Big(\!\PLong_{ij}(\vec{r})\!-\!\PPerp_{ij}(\vec{r})\!\Big), \,\, \\ \!\!\!\!\!\!\lda \Stre_{ij}(0) \, \BulkModu(\vec{r})\rda_c \!\!\!&=&\!\! \!\frac{2(\BoltCons T)^2 \PartDens \UnivPara}{\pi \vert \vec{r}\vert^3} \Big(\!\PLong_{ij}(\vec{r})\!-\!\PPerp_{ij}(\vec{r})\!\Big) , \,\end{aligned}$$ where $\PLong_{ij}(\vec{r})$ and $\PPerp_{ij}(\vec{r})$ are, respectively, longitudinal and transverse projection operators in real space, which are given by $$\begin{aligned} \PLong_{ij}(\vec{r})\equiv \frac{r_i r_j}{\vert \vec{r}\vert^2}, \quad\quad \PPerp_{ij}(\vec{r})\equiv \delta_{ij}-\PLong_{ij}(\vec{r}) .\end{aligned}$$ The tensor $B_{ijkl}$ has a complicated structure comprising terms built from projection operators of $\vec{r}$, together with various index combinations, and also depends on the large-momentum cutoff, which can be identified with the inverse of the typical localization length [^9] . [[ As the above results show, the mean shear modulus and the long-distance behavior of the disorder correlators depend only on the link density $\LinkDens$, the temperature $T$, and the particle density $\PartDens$, and do not depend on the details of the link potential. This verifies the argument that the shear rigidity of the RLPM is a result of the network entropy.]{}]{} Statistics of nonaffine deformations {#SEC:SNAD} ------------------------------------ In this section we develop a statistical characterization of the nonaffine deformations of the soft random solid state. In Section \[SEC:NAD\] we discussed why soft random solids undergo nonaffine deformations in presence of a given shear deformation $\DefoGrad$, and explained how to characterize these deformations in terms of the nonaffine deformation field $\NADF$. The nonaffine deformation field is related to any given nonlocal random kernel $\NonLocaKern$ via Eq. (\[EQ:NADFp\]). It is straightforward to see that the disorder average of the nonaffine deformation field vanishes, i.e., $\lda \NADF \rda =0$: it is proportional to the fluctuation part of the quenched random nonlocal kernel $\NonLocaKernOne$. Next, we calculate the disorder correlator of the nonaffine deformations. For convenience, we take the incompressible limit, i.e., $\BulkModuZero\to\infty$, in which limit the nonaffine deformation field $\NADF$ is given by Eq. (\[EQ:NADFpInco\]). Using Eq. (\[EQ:NADFpInco\]), as well as the disorder correlator of the nonlocal kernel $\lda\NonLocaKern\NonLocaKern\rda_c$, the disorder correlator of the nonaffine deformation field $\NADF$ is found to be $$\begin{aligned} \label{EQ:CorrNADF} \lda (\NADF)_p \cdot (\NADF)_{-p} \rda_c &=& \frac{1}{\vert \vec{p} \vert^2} \frac{1}{\PartDens} \frac{\Corr}{\UnivPara^2} \Big(\frac{\trTwo}{\trOne^2}-1\Big) ,\end{aligned}$$ where $$\begin{aligned} \trOne &\equiv& \textrm{Tr}(\PLongT \MetrTens^{-1} ) , \\ \trTwo &\equiv& \textrm{Tr}(\PLongT \MetrTens^{-1} \MetrTens^{-1}) , \\ \MetrTens &\equiv& \DefoGrad\Tran\DefoGrad .\end{aligned}$$ The dependence of the connected disorder-correlator of the nonaffine deformation field $\lda \NADF \NADF \rda_c$ on the density of links comes through the factor $\phi\equiv \Corr/(\UnivPara^2)$, which is shown in Fig. \[FIG:RelaFluc\]. 
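The dependence of Eq. (\[EQ:CorrNADF\]) on the applied deformation enters entirely through the geometric factor $\big(\trTwo/\trOne^{2}-1\big)$. A minimal numpy sketch evaluating this factor (the shear amplitude and momentum direction below are illustrative only; it vanishes when no deformation is applied):

```python
import numpy as np

def geometric_factor(Lam, p):
    """(I2/I1^2 - 1), with I1 = Tr(P^L M^-1), I2 = Tr(P^L M^-1 M^-1), M = Lam^T Lam."""
    Minv = np.linalg.inv(Lam.T @ Lam)
    PL = np.outer(p, p) / np.dot(p, p)
    I1 = np.trace(PL @ Minv)
    I2 = np.trace(PL @ Minv @ Minv)
    return I2 / I1**2 - 1.0

p = np.array([1.0, 1.0, 0.0])
Lam_id = np.eye(3)                                  # no applied deformation
Lam_shear = np.eye(3); Lam_shear[0, 1] = 0.3        # simple shear, det = 1

print(geometric_factor(Lam_id, p))      # 0.0: no deformation, no nonaffine response
print(geometric_factor(Lam_shear, p))   # nonzero; the value depends on the shear and on p-hat
```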
It is evident that, as the density of links increases, the system has smaller relative fluctuations of its elasticity, i.e., the relative fluctuations in the elastic moduli decrease, and thus the nonaffine deformations also decrease, corresponding to the system becoming *less heterogeneous*. The disorder correlator of the nonaffine deformation field, Eq. (\[EQ:CorrNADF\]), is consistent with the disorder correlator of the nonaffine deformations given in Ref. [@DiDonna2005]. In Eq. (3.22) of Ref. [@DiDonna2005], the disorder correlator of the nonaffine deformation $u^{\prime}$ (which corresponds to $\RelaRandL$ in our notation) was found to depend on the random local elastic modulus $K_{ijkl}$ in momentum-space as $$\begin{aligned} \lda u^{\prime}(q) \, u^{\prime}(-q)\rda \propto \frac{\gamma^2}{q^2} \frac{\Delta^{K}(q)}{K^2} ,\end{aligned}$$ where $\gamma$ represents the appropriate components of the tensorial externally applied deformation (i.e., $\DefoGrad$ in our notation), $\Delta^{K}$ represents components of the variance of the elastic-modulus tensor, and $K$ represents components of the average of the elastic-modulus tensor. Consistency with Ref. [@DiDonna2005] is revealed by taking Eq. (\[EQ:CorrNADF\]), and recalling that $\lda\SheaModu\rda\sim\UnivPara$ and $\lda\SheaModu\SheaModu\rda_c\sim\Corr$, and therefore that our disorder correlator of the nonaffine deformation field can be written as $$\begin{aligned} \lda (\NADF)_p (\NADF)_{-p} \rda_c \propto \frac{1}{\vert p \vert^2} \frac{\lda\SheaModu\SheaModu\rda_c}{\lda\SheaModu\rda^2} ,\end{aligned}$$ which exhibits the same dependence on the mean and variance of the quenched random elastic modulus as Eq. (3.22) of Ref. [@DiDonna2005] does. By transforming the disorder correlator of the nonaffine deformation field (\[EQ:CorrNADF\]) back to real space, we find that the large-distance behavior of the disorder correlator of the nonaffine deformation field $\lda \NADF(0) \cdot \NADF(r) \rda_c$ is proportional to $\vert r\vert^{-1}$ in three dimensions, which is also long-ranged. Concluding remarks {#SEC:ConcDisc} ================== The heterogeneous elasticity of soft random solids has been investigated via a semi-microscopic approach. By starting with the Randomly Linked Particle Model (RLPM), which describes networks of particles randomly connected by soft links, and applying the concepts and techniques of vulcanization theory, we have established a field-theoretic description of the liquid-to-random-solid transition, and have analyzed the corresponding pattern of spontaneous symmetry breaking and the structure of the associated Goldstone fluctuations. We have identified these Goldstone fluctuations as being related to shear deformations of the random solid state and, via this identification, we have obtained a statistical characterization of the quenched randomness exhibited by the heterogeneous elasticity of soft random solids, which features a random nonlocal kernel describing attractive interactions between mass-points. The heterogeneous elasticity studied via the Goldstone fluctuations in the RLPM is a description of the elastic properties of the state right after linking (i.e., an elastic free energy that takes the state right after linking as its elastic reference state). 
We have shown that, after linking, the system relaxes to a stable state for any given realization of disorder (i.e., for any given heterogeneous configuration of the elastic parameters in the state right after linking), and this relaxed state, which is a state of mechanical equilibrium, is the state of experimental relevance. By solving for the relaxed state for any given realization of disorder, and expanding the elastic free energy for deformations relative to this relaxed state, we have obtained an elastic free energy that takes the relaxed state as the new elastic reference state. The statistics of the quenched randomness in this elastic free energy are then determined. The first statistical moments of the quenched random elastic parameters (i.e., the disorder averages of the elastic parameters) reveal the basic homogeneous macroscopic properties of the heterogeneous elastic medium. We have found that the disorder average of the nonlocal kernel of attractive interactions is characterized by the typical localization length of the RLPM, which is smaller than the lengthscale of the elastic deformations that we are considering. Thus, it is reasonable to make a local expansion of the elastic energy relative to the relaxed state. The resulting local form of the elastic energy is a version of Lagrangian elasticity, featuring heterogeneous (i.e., spatially randomly varying) residual stress and Lamé coefficients. The disorder average of the residual stress vanishes. The disorder average of the shear modulus is found to be proportional to temperature, reflecting the entropic nature of the shear rigidity of soft random solids. The disorder average of the bulk modulus depends on the particle number-density and the strength of the excluded-volume interaction. In particular, the disorder averages of these elastic parameters of the relaxed state are all translationally and rotationally invariant, reflecting the macroscopic translational and rotational invariance of the soft random solid state. The second statistical moments of the quenched random elastic parameters (i.e., the spatial correlations of these elastic parameters) characterize the fluctuations of the quenched randomness in the elastic properties. The disorder correlators of the elastic parameters that appear in the local form of the elastic energy (relative to the relaxed state) exhibit interesting universal behaviors. In particular, the disorder correlators involving the residual stress are found to be long-ranged and governed by a universal parameter that also determines the mean shear modulus, whereas the disorder correlators of the shear and bulk moduli are found to be short-ranged. Because of the heterogeneity present in the elasticity of soft random solids, upon the application of external stress the system responds by adopting a strain field that is nonaffine (i.e., a strain field characterized by an inhomogeneous deformation gradient). We have also obtained a statistical description of these nonaffine deformations. The disorder average of the nonaffine deformations vanishes, and their disorder correlator is also found to be long-ranged. So far, we have studied the first two statistical moments of the quenched random elastic parameters of soft random solids.
The entire probability distribution of the quenched random elastic parameters can also be explored using the formalism presented here, via the RLPM, and one can also progress beyond the local limit of the elasticity theory. This approach to the heterogeneous elasticity of soft random solids can also be applied to the setting of liquid crystal elastomers, in which the constituent polymers of the random network possess (or are capable of exhibiting) liquid-crystalline order [@Warner2003; @Lubensky2002; @Xing2008; @Clarke1998]. In liquid crystal elastomers, the strain field is coupled to the liquid-crystalline order, and this produces a rich collection of interesting phenomena, such as spontaneous sample-shape deformation upon changes of temperature and anomalously soft elastic modes. The interplay of the heterogeneity of the random network and the liquid-crystalline order has interesting consequences: e.g., it can give rise to a polydomain structure in the liquid-crystalline order. These interesting topics are reserved for future studies. We thank Tom Lubensky for stimulating discussions. This work was supported by National Science Foundation Grant No. DMR 06-05816 (X.M. and P.M.G.), American Chemical Society Grant No. PRF 44689-G7 (X.X.), Deutsche Forschungsgemeinschaft through SFB 602 (standard) (A.Z.), and the National Science Foundation Grant No. DMR 0804900 (X.M.). Disorder average with the Deam-Edwards distribution {#APP:DisoAver} =================================================== In this Appendix we calculate disorder averages weighted by the Deam-Edwards distribution, in particular, $Z_1$ and $Z_{1+n}$ of Section \[SEC:RLPMReplica\]. First, we calculate the factor $Z_1$, which is defined as $$\begin{aligned} \label{EQ:AZOne} Z_1 &\equiv& \sum_{\RealDiso} \frac{\left(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\right)^{\LinkNumb} Z_{\RealDiso}(\Volu_0)}{ \LinkNumb !}.\end{aligned}$$ The summation over the quenched disorder $\sum_{\RealDiso}$ includes two steps: a summation over the number of links $\LinkNumb$, and a summation over all possible ways of making these $\LinkNumb$ links, i.e., of assigning the $\LinkNumb$ links to different collections of pairs. Thus Eq.
(\[EQ:AZOne\]) can be written as $$\begin{aligned} Z_1 &=& \sum_{\LinkNumb} \frac{\Big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\Big)^{\LinkNumb} Z_{\RealDiso}(\Volu_0)}{ \LinkNumb !} \nonumber\\ &=& \sum_{\LinkNumb=0}^{\infty} \sum_{i_1\ne j_1}^{\PartNumb} \sum_{i_2\ne j_2}^{\PartNumb} \cdots \sum_{i_{\LinkNumb}\ne j_{\LinkNumb}}^{\PartNumb} \frac{\Big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\Big)^{\LinkNumb}} { \LinkNumb !} \PartitionLiquid(\Volu) \left\langle \prod_{e=1}^{\LinkNumb} \LinkPote \big( \vert \PartPosi_{i_e} - \PartPosi_{j_e} \vert \big) \right\rangle_{1}^{\HamiExcl} \nonumber\\ &=& \PartitionLiquid(\Volu)\lthal \sum_{\LinkNumb=0}^{\infty} \frac{\big(\frac{\LinkDens \Volu_0} {2\PartNumb \LinkPote_0}\big)^{\LinkNumb}}{ \LinkNumb !} \Big(\sum_{i\ne j}^{\PartNumb} \LinkPote \big( \vert \PartPosi_{i} - \PartPosi_{j} \vert \big)\Big)^{\LinkNumb}\rthal_{1}^{\HamiExcl} \nonumber\\ \noalign{\smallskip} &=& \PartitionLiquid(\Volu)\left\langle \exp\Big( \frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0} \sum_{i\ne j}^{\PartNumb} \LinkPote \big( \vert \PartPosi_{i} - \PartPosi_{j} \vert \big) \Big)\right\rangle_{1}^{\HamiExcl}.\end{aligned}$$ The mean-field approximation for $Z_1$ amounts to taking the number density of the unlinked liquid to be $\PartNumb/\Volu_0$, which is similar to the calculation that yields Eq. (\[EQ:FLiquid\]), and hence we arrive at $$\begin{aligned} \label{EQ:Z1SP} Z_1=\exp\big(\PartNumb\ln\Volu_0 -\frac{\ExclVolu\PartNumb^2}{2\Volu_0 \BoltCons T}+\frac{\PartNumb \LinkDens}{2} \big).\end{aligned}$$ Second, we calculate $Z_{1+n}$, which is defined as $$\begin{aligned} Z_{1+n}\equiv\sum_{\RealDiso} \frac{\big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\big)^{\LinkNumb}} { \LinkNumb !} Z_{\RealDiso}(\Volu_0) Z_{\RealDiso}(\Volu)^n .\end{aligned}$$ The factor $Z_{\RealDiso}(\Volu)^n$ can be written in terms of replicas as $$\begin{aligned} Z_{\RealDiso}(\Volu)^n &=& \int_{\Volu} \ReplProd \prod_{j=1}^{\PartNumb} d\PartPosi_{j}\REPa \, e^{-\ReplSum \HamiExcl \REPa/\BoltCons T} \nonumber\\ &&\quad\quad\times \ReplProd \prod_{e=1}^{\LinkNumb} \LinkPote \big( \vert \PartPosi_{i_e}\REPa - \PartPosi_{j_e}\REPb \vert \big) ,\end{aligned}$$ where $\HamiExcl \REPa$ is the part of the Hamiltonian $\HamiExcl$ (i.e., the excluded-volume interaction) for replica $\alpha$, as defined in Eq. (\[EQ:partition\]). 
We define the $\HamiExcl$ average for $1+n$ replicas as $$\begin{aligned} \ltha \cdots \rtha_{1+n}^{\HamiExcl} &\equiv& \frac{1}{\PartitionLiquid(V_0) \PartitionLiquid(V)^n}\nonumber\\ &&\quad\quad\times \int _{\Volu_0} \prod_{i=1}^{\PartNumb} d \PartPosi_i \REP0 \int _{\Volu} \ReplProdOne \prod_{i=1}^{\PartNumb} d \PartPosi_i \REPa \nonumber\\ &&\quad\quad\times e^{-\frac{\HamiExcl\REP0}{{\BoltCons}T} -\frac{\ReplSumOne \HamiExcl\REPa}{{\BoltCons}T}}\cdots .\end{aligned}$$ Using this notation we arrive at $$\begin{aligned} Z_{1+n}&=&\sum_{\LinkNumb=0}^{\infty} \sum_{i_1\ne j_1}^{\PartNumb} \sum_{i_2\ne j_2}^{\PartNumb} \cdots \sum_{i_{\LinkNumb}\ne j_{\LinkNumb}}^{\PartNumb} \frac{\big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\big)^{\LinkNumb}} { \LinkNumb !} \PartitionLiquid(\Volu_0)\PartitionLiquid(\Volu)^n \ltha \ReplProd \prod_{e=1}^{\LinkNumb} \LinkPote \big( \vert \PartPosi_{i_e}\REPa - \PartPosi_{j_e}\REPa \vert \big) \rtha_{1+n}^{\HamiExcl} \nonumber\\ &=& \PartitionLiquid(\Volu_0)\PartitionLiquid(\Volu)^n \lthal \sum_{\LinkNumb=0}^{\infty} \frac{\big(\frac{\LinkDens \Volu_0} {2\PartNumb \LinkPote_0}\big)^{\LinkNumb}}{ \LinkNumb !} \Big(\sum_{i\ne j}^{\PartNumb} \ReplProd \LinkPote \big( \vert \PartPosi_{i}\REPa - \PartPosi_{j}\REPa \vert \big)\Big)^{\LinkNumb}\rthal_{1+n}^{\HamiExcl} \nonumber\\ &=& \PartitionLiquid(\Volu_0)\PartitionLiquid(\Volu)^n \lthal \exp \Big(\frac{\LinkDens \Volu_0} {2\PartNumb \LinkPote_0} \sum_{i\ne j}^{\PartNumb} \ReplProd \LinkPote \big( \vert \PartPosi_{i}\REPa - \PartPosi_{j}\REPa \vert \big)\Big)\rthal_{1+n}^{\HamiExcl}\end{aligned}$$ Hubbard-Stratonovich Transformation {#APP:HSTransformation} =================================== The effective Hamiltonian, Eq. (\[EQ:HQHL\]), can be analyzed via a Hubbard-Stratonovich (HS) transformation—a field-theoretic tool that is often applied to strongly coupled models to decouple interactions and develop a convenient representation in terms of functional integrals [@Hubbard1959; @Stratonovich1957]. The version of the HS transformation that we use for the RLPM can be illustrated via the following simple example. Consider a statistical-mechanical system having the following partition function: $$\begin{aligned} Z(h) =\int dq\, e^{-H_0(q)}e^{Jq^2+hq} = Z_0\,\langle e^{Jq^2+hq} \rangle_{H_0(q)} ,\end{aligned}$$ where $H(q)\equiv H_0(q)-Jq^2-hq$ is the total Hamiltonian for the variable $q$, with $H_0(q)$ being the leading-order term and $Jq^2$ being considered as a perturbation. (Although it is just a simple quadratic term, we use it to illustrate the method.)  The factor $Z_0\equiv \int dq \, e^{-H_0(q)}$. The term $hq$ denotes the coupling to an external field, which generates the statistical moments of $q$ via $$\begin{aligned} \langle q \rangle_{H(q)}=\frac{\partial }{\partial h}\Big\vert_{h=0} \ln Z(h) .\end{aligned}$$ The $Jq^2$ term in the exponent can be decoupled using the following version of the HS transformation: $$\begin{aligned} Z&=&\Big(\frac{J}{\pi}\Big)^{1/2} Z_0 e^{-\frac{h^2}{4J}} \int d \omega \, e^{-J\omega^2+h \omega} \langle e^{2J\omega q} \rangle_{H_0(q)} \nonumber\\ &=&\Big(\frac{J}{\pi}\Big)^{1/2} \int d \omega\, e^{-\mathcal{H}(\omega)} ,\end{aligned}$$ with $$\begin{aligned} \label{EQ:Homega} \mathcal{H}(\omega)\equiv J\omega^2-\ln Z_0\,\langle e^{2J\omega q} \rangle_{H_0(q)}-h\omega +\frac{h^2}{4J} .\end{aligned}$$ In this form, the partition function is expressed as an integral over the variable $\omega$, and the quadratic term in the original variable $q$ is now decoupled. 
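As a sanity check of this toy example, the identity can be verified numerically for a concrete, purely illustrative choice of the unperturbed Hamiltonian; the sketch below (ours) takes $H_0(q)=q^2/2$ and $J<1/2$, so that all integrals converge, and compares the direct $q$-integral with the decoupled $\omega$-representation.

```python
import numpy as np
from scipy.integrate import quad, dblquad

J, h = 0.2, 0.3                        # illustrative values; J < 1/2 keeps the q-integral convergent
H0 = lambda q: 0.5 * q**2              # illustrative choice of H_0(q)

# direct evaluation: Z(h) = int dq exp(-H0(q) + J q^2 + h q)
Z_direct, _ = quad(lambda q: np.exp(-H0(q) + J*q**2 + h*q), -30, 30)

# decoupled form: Z = sqrt(J/pi) e^{-h^2/4J} int dw e^{-J w^2 + h w} int dq e^{-H0(q) + 2 J w q}
integrand = lambda q, w: np.exp(-J*w**2 + h*w - H0(q) + 2*J*w*q)
Z_hs, _ = dblquad(integrand, -30, 30, lambda w: -30, lambda w: 30)
Z_hs *= np.sqrt(J / np.pi) * np.exp(-h**2 / (4*J))

print(Z_direct, Z_hs)                  # the two evaluations agree
```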
If fluctuations with large $q$ only appear with very small probabilities, as governed by $H_0(q)$, one can expand the $\ln\langle e^{2J\omega q} \rangle_{H_0(q)}$ term as a power series in $q$. Thus, one can obtain an effective Hamiltonian $\mathcal{H}(\omega)$ via the low-order terms in $\omega$, which has the form of a Landau free energy, and is convenient to analyze. It is evident that the average of $\omega$, taken with the statistical weight defined by $\mathcal{H}(\omega)$, equals the average of $q$, taken with the statistical weight defined by $H(q)$: $$\begin{aligned} \label{EQ:HSAver} \langle \omega \rangle_{\mathcal{H}(\omega)} =\frac{\partial }{\partial h}\Big\vert_{h=0} \ln Z(h) =\langle q \rangle_{H(q)}.\end{aligned}$$ Thus, the statistical mechanics of $q$ can be examined by studying the statistical mechanics of $\omega$. In the Hamiltonian $\mathcal{H}(\omega)$, $q$ appears linearly; therefore, in cases in which $q$ is a variable that involves a summation over many particles, this method will allow us to decouple the problem into a single-particle one, as will be seen in the following application of the HS transformation to vulcanization theory. In the RLPM, the partition function we are going to decouple is Eq. (\[EQ:ZQ\]): $$\begin{aligned} Z_{1+n} = \int_{\Volu_0} \prod_{i=1}^{\PartNumb} d\PartPosi_{i}^{0} \int_{\Volu} \ReplProdOne \prod_{j=1}^{\PartNumb} d\PartPosi_{j}\REPa \, e^{-\frac{H_{\DensFunc}\lbrack \DensFunc_{\REPP}\rbrack}{\BoltCons T}} ,\end{aligned}$$ with $$\begin{aligned} H_{\DensFunc}\lbrack \DensFunc_{\REPP}\rbrack = -\frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\DensFunc_{\REPP} \DensFunc_{-\REPP}\LinkPoteRepl_{\REPP} +\frac{\ExclVoluRN_0 \PartNumb^2}{2\Volu_0} \sum_{p}\DensFunc_{p\BASE^{0}} \DensFunc_{-p\BASE^{0}} +\frac{\ExclVoluRN \PartNumb^2}{2\Volu} \sum_{p} \ReplSumOne \DensFunc_{p\BASE\REPa} \DensFunc_{-p\BASE\REPa} .\end{aligned}$$ The field $\DensFunc_{\REPP}=(1/N)\sum_{j=1}^{\PartNumb}e^{-i\REPP \cdot \hat{c}_j}$ is a complex field, so we apply the following equalities for the complex variables $q$ and $\omega$: \[EQ:HSComplex\] $$\begin{aligned} \!\!e^{-J\vert q \vert^2}\!\! &=&\!\! \frac{J}{\pi}\int\!\! d(\RealPart\, \omega)d(\ImagPart\, \omega) \label{EQ:HS1} e^{-J\vert \omega \vert^2+2iJ \RealPart\, q\omega^{*}} , \\ \!\!e^{+J\vert q \vert^2}\!\! &=&\!\! \frac{J}{\pi}\int \!\!d(\RealPart\, \omega)d(\ImagPart\, \omega) e^{-J\vert \omega \vert^2+2J \RealPart\, q\omega^{*}} , \label{EQ:HS2}\end{aligned}$$ noticing that the product $\RealPart \, (q\omega^{*}) = (\RealPart\, q)(\RealPart\,\omega)+ (\ImagPart\, q)(\ImagPart\,\omega)$. We use Eq. (\[EQ:HS1\]) for the HS transformation for the LRS fields, and Eq. (\[EQ:HS2\]) for the HS transformation for the HRS field, and hence arrive at the form (For more details, see Section V in [@Goldbart1996].) $$\begin{aligned} \label{EQ:HSPartition} Z_{1+n} = \int \mathcal{D}\VOP_{\hat{p}}\ReplProd\mathcal{D}\VOP_{p}\REPa e^{-\frac{H_{\VOP}\lbrack\VOP_{p}\REPa,\VOP_{\hat{p}}\rbrack}{\BoltCons T}} ,\end{aligned}$$ with $$\begin{aligned} H_{\VOP}\lbrack\VOP_{p}\REPa,\VOP_{\hat{p}}\rbrack = \frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \!\sum_{\REPP\in HRS}\!\!\VOP_{\REPP} \VOP_{-\REPP}\LinkPoteRepl_{\REPP} +\!\frac{\ExclVoluRN_0 \PartNumb^2}{2\Volu_0}\!\sum_{p}\VOP_{p\BASE^{0}} \VOP_{-p\BASE^{0}} +\!\frac{\ExclVoluRN \PartNumb^2}{2\Volu} \sum_{p}\!\ReplSumOne \VOP_{p\BASE\REPa} \VOP_{-p\BASE\REPa} \! - \! 
N\BoltCons T \ln \MyZzero,\end{aligned}$$ where the $N\BoltCons T \ln \MyZzero$ term is analog of the $\ln \langle e^{2J\omega q} \rangle_{H_0(q)}$ term in Eq. (\[EQ:Homega\]) and, using $\DensFunc_{-\REPP}=(1/\PartNumb)\sum_{j=1}^{\PartNumb} e^{i\REPP\cdot\hat{c}_j}$ and $\DensFunc_{-p}\REPa=(1/\PartNumb)\sum_{j=1}^{\PartNumb} e^{ip\cdot c_j\REPa}$, we have $$\begin{aligned} \PartNumb \ln \MyZzero &\equiv& \ln \Big\lbrace \int_{\Volu_0} \prod_{i=1}^{\PartNumb} d\PartPosi_{i}^{0} \int_{\Volu} \ReplProdOne \prod_{j=1}^{\PartNumb} d\PartPosi_{j}\REPa \exp\big\lbrack \frac{\PartNumb \LinkDens }{ \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\VOP_{\REPP} \frac{1}{\PartNumb}\sum_{j=1}^{\PartNumb} e^{i\REPP\cdot\hat{c}_j}\LinkPoteRepl_{\REPP} \nonumber\\ &&\quad\quad + \frac{i\ExclVoluRN_0 \PartNumb^2}{\Volu_0\BoltCons T} \sum_{p}\VOP_{p\BASE^{0}} \frac{1}{\PartNumb}\sum_{j=1}^{\PartNumb} e^{ip\cdot c_j^{0}} +\frac{i\ExclVoluRN \PartNumb^2}{\Volu\BoltCons T} \sum_{p} \ReplSumOne \VOP_{p\BASE\REPa} \frac{1}{\PartNumb}\sum_{j=1}^{\PartNumb} e^{ip\cdot c_j^{0}} \big\rbrack\Big\rbrace \nonumber\\ &=&\PartNumb \ln \Bigg\lbrace \int_{\Volu_0} \! d\PartPosi^{0} \!\! \int_{\Volu} \!\ReplProd d\PartPosi\REPa \exp\Big\lbrack \frac{ \LinkDens}{ \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\VOP_{\REPP}\LinkPoteRepl_{\REPP}e^{i\REPP \cdot \hat{c}} \nonumber\\ && \quad + \frac{i\ExclVoluRN_0 \PartNumb}{\Volu_0\BoltCons T} \sum_{p}\VOP_{p\BASE^{0}}e^{ip^{0}c^{0}} \!\!+\frac{i\ExclVoluRN \PartNumb}{\Volu\BoltCons T} \sum_{p} \ReplSumOne \VOP_{p\BASE\REPa} e^{ip\REPa c\REPa} \Big\rbrack\Bigg\rbrace . \end{aligned}$$ In this form it is evident that the $\PartNumb$ particles are actually *decoupled*. Notice that in Eq. (\[EQ:HSPartition\]) the functional integrals $\int \mathcal{D}\,\VOP_{\hat{p}}\ReplProd\mathcal{D}\VOP_{p}\REPa $ have carefully chosen prefactors \[as in Eq. (\[EQ:HSComplex\])\], to ensure that the integration is properly normalized. Hamiltonian of the stationary point {#APP:HSP} =================================== In this Appendix we calculate the value of the Hamiltonian at the stationary point by inserting the stationary point order parameter (\[EQ:VOPAnsatzM\]) into the Hamiltonian (\[EQ:HVOPHRS\]). 
The first term in the Hamiltonian is $$\begin{aligned} && \frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\VOP_{\REPP} \VOP_{-\REPP}\LinkPoteRepl_{\REPP} \nonumber\\ &=&\frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP} (\LinkPote)^{1+n} e^{-\frac{\LinkScal^2\vert\REPP\vert^2}{2\BoltCons T}} \big\lbrace \LocaPart \int \frac{dz_1}{\Volu_0}\int _{\ISLL_1} e^{-\frac{\vert\REPP\vert^2}{2\ISLL_1} -i \REPP \cdot \hat{z}_{\Contraction,1} } -\LocaPart\delta^{((1+n)d)}_{\REPP}\big\rbrace \nonumber\\ &&\quad\times \big\lbrace \LocaPart \int \frac{dz_2}{\Volu_0}\int _{\ISLL_2} e^{-\frac{\vert\REPP\vert^2}{2\ISLL_2} +i \REPP \cdot \hat{z}_{\Contraction,2} } -\LocaPart\delta^{((1+n)d)}_{\REPP}\big\rbrace \nonumber\\ &=&\frac{\PartNumb \LinkDens \BoltCons T\LocaPart^2}{2 \Volu^n \LinkPote_0} (\LinkPote)^{1+n}\nonumber\\ && +\frac{\PartNumb \LinkDens \BoltCons T\LocaPart^2}{2 \Volu^n \LinkPote_0} (\LinkPote)^{1+n} \sum_{\REPP} \int\frac{dz_1 dz_2}{\Volu_0^2}\int _{\ISLL_1,\ISLL_2} e^{\big(\frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\big)\vert\REPP\vert^2 -i\REPP\cdot (\hat{z}_{\Contraction,1}-\hat{z}_{\Contraction,2})} \nonumber\\ &=&\frac{\PartNumb \LinkDens \BoltCons T\LocaPart^2(\LinkPote)^{n}}{2 \Volu^n} +\frac{\PartNumb \LinkDens \BoltCons T\LocaPart^2(\LinkPote)^{n}}{2 \Volu^n } (1+n\Contraction^2)^{-d/2}\int _{\ISLL_1,\ISLL_2} \Big\lbrace 2\pi \Big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\Big) \Big\rbrace^{-nd/2} ,\end{aligned}$$ where $\hat{z}_{\Contraction}\equiv \{z,\Contraction z,\ldots,\Contraction z\}$. The sum $\sum_{\REPP\in HRS}$ can be changed into the $\sum_{\REPP}$ because the order parameter we have inserted vanishes for $\REPP\in\,$LRS. We have also changed momentum summation into an integral by using $\frac{1}{\Volu_0\Volu^n}\sum_{\REPP}=\int\frac{d^{(1+n)d}\REPP}{(2\pi)^{(1+n)d}}$. The free energy of the system is related to the $O(n)$ term of this Hamiltonian, as given by Eqs. (\[EQ:HFEDisoAver\], \[EQ:HFEDisoAverDeri\]). Thus, we make the small-$n$ expansion. It is straightforward to see that the $O(1)$ terms cancel, and that the leading-order term is given by $O(n)$ $$\begin{aligned} && \frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\VOP_{\REPP} \VOP_{-\REPP}\LinkPoteRepl_{\REPP} \nonumber\\ &=& n \frac{\PartNumb \LinkDens \BoltCons T\LocaPart^2}{2 \Volu^n} \Big\lbrace \ln \Volu -\frac{d}{2}\big(\ln(2\pi)+\Contraction^2\big) -\frac{d}{2} \int _{\ISLL_1,\ISLL_2} \Big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\Big) \Big\rbrace .\end{aligned}$$ Similarly, we can calculate the $\ln \MyZzero$ term. By inserting the saddle-point value of the order parameter into $\MyZzero$, and summing (or, more precisely, integrating) over momentum $\REPP$, we have $$\begin{aligned} \MyZzero\!&=&\!\int_{\Volu_0} \! d\PartPosi^{0} \!\! \int_{\Volu} \!\ReplProd d\PartPosi\REPa \exp\Big\lbrace \frac{ \LinkDens}{ \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\VOP_{\REPP}\LinkPoteRepl_{\REPP}e^{i\REPP \cdot \hat{c}} \Big\rbrace \nonumber\\ &=& e^{-\LinkDens \LocaPart (\LinkPote_0/\Volu)^n} \int d\hat{c} \exp \big\lbrace \LinkDens \LocaPart (\LinkPote_0)^n \int dz \int_{\ISLL} \big(\frac{\tISLL}{2\pi}\big)^{\frac{(1+n)d}{2}} e^{-\frac{\tISLL}{2}(\hat{z}_{\Contraction}-\hat{c})^2} \big\rbrace ,\end{aligned}$$ where $\tISLL\equiv\big(\frac{1}{\ISLL}+\LinkScal^2\big)^{-1}$. 
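The momentum sums appearing here and in the remaining appendices reduce, once converted to integrals, to standard Gaussian integrals of the form $\int d^{D}p\, e^{-\alpha \vert p\vert^{2}-i p\cdot x}=(\pi/\alpha)^{D/2}e^{-\vert x\vert^{2}/4\alpha}$, which factorize over components (the placement of the $2\pi$'s depends on the summation convention adopted). A quick one-dimensional numerical check, with generic illustrative values, reads:

```python
import numpy as np
from scipy.integrate import quad

alpha, x = 0.7, 1.3   # generic positive width and separation (illustrative values)

# real part of \int dp exp(-alpha p^2 - i p x); the imaginary part vanishes by symmetry
val, _ = quad(lambda p: np.cos(p * x) * np.exp(-alpha * p**2), -np.inf, np.inf)
closed_form = np.sqrt(np.pi / alpha) * np.exp(-x**2 / (4 * alpha))
print(val, closed_form)   # agree; the (1+n)d-dimensional version factorizes over components
```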
We then Taylor-expand the exponential (keeping all orders) and integrate out $\hat{c}$ to get $$\begin{aligned} \label{EQ:Z0intec} \MyZzero &=& e^{-\LinkDens \LocaPart (\LinkPote_0/\Volu)^n}\Volu_0\Volu^n \Big\lbrace 1+\LinkDens \LocaPart (\LinkPote_0/\Volu)^n +\frac{1}{\Volu_0\Volu^n} \sum_{m=2}^{\infty} \frac{\big(\LinkDens \LocaPart (\LinkPote_0)^n\big)^m}{m!} \int dz_1 \cdots dz_m \nonumber\\ &&\quad \times \int_{\ISLL_1,\ldots,\ISLL_m} \prod _{j=1}^{m} \Big(\frac{\tISLL_j}{2\pi}\Big)^{\frac{(1+n)d}{2}} \Big(\frac{2\pi}{\tISLL_1+\tISLL_m}\Big)^{\frac{(1+n)d}{2}} e^{-\frac{\tISLL_1\tISLL_2 (\hat{z}_{\Contraction,1}-\hat{z}_{\Contraction,2})^2+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)}} \Big\rbrace ,\end{aligned}$$ where in the exponential the terms following $(\hat{z}_{\Contraction,1}-\hat{z}_{\Contraction,2})^2$ include all pairs of the $m$ variables \[there are $m(m-1)/2$ such terms\]. By using $(\hat{z}_{\Contraction,1}-\hat{z}_{\Contraction,2})^2 =(1+n\Contraction^2)(z_1-z_2)^2$ \[recall that $\hat{z}_{\Contraction,1}$ is a $(1+n)d$-dimensional vector, and that $z_1$ is a $d$-dimensional vector\], the integration $\int dz_1 \cdots dz_m $ can be readily performed, and we thus obtain $$\begin{aligned} \label{EQ:Z0intez} \MyZzero&=& e^{-\LinkDens \LocaPart (\LinkPote_0/\Volu)^n}\Volu_0\Volu^n \Big\lbrace 1+\LinkDens \LocaPart (\LinkPote_0/\Volu)^n \nonumber\\ && \quad +\frac{1}{\Volu^n} \sum_{m=2}^{\infty} \frac{\big(\LinkDens \LocaPart (\LinkPote_0)^n\big)^m}{m!} \int_{\ISLL_1,\ldots,\ISLL_m} \prod _{j=1}^{m} \Big(\frac{\tISLL_j}{2\pi}\Big)^{\frac{nd}{2}} \Big(\frac{2\pi}{\tISLL_1+\tISLL_m}\Big)^{\frac{nd}{2}} (1+n\Contraction^2)^{\frac{(1-m)d}{2}} \Big\rbrace .\end{aligned}$$ From this, $-\ln \MyZzero$ can obtained by making the small-$n$ expansion, using the following equality: $$\begin{aligned} \ln(x+ny+O(n^2))=\ln(x(1+n(y/x)+O(n^2)))=\ln x + n (y/x) +O(n^2),\end{aligned}$$ so that we have $$\begin{aligned} -\ln \MyZzero &=& -\ln\Volu_0 +n \Big\lbrace -\ln \Volu +(\LinkDens \LocaPart + e^{-\LinkDens \LocaPart}-1) \Big(\frac{d}{2}\big(\ln(2\pi)+\Contraction^2\big)-\ln \Volu \Big) \nonumber\\ &&\quad -e^{-\LinkDens \LocaPart} \frac{d}{2} \sum_{m=1}^{\infty} \frac{\big(\LinkDens \LocaPart \big)^m}{m!} \ln \Big(\frac{\tISLL_1\cdots\tISLL_m}{\tISLL_1+\cdots+\tISLL_m}\Big) \Big\rbrace .\end{aligned}$$ Therefore, the small-$n$ expansion of the stationary-point Hamiltonian $\HVOPSP$ to $O(n)$ is given by $$\begin{aligned} \HVOPSP&=&\frac{\ExclVoluRN_0(0)\PartNumb^2}{2\Volu_0} +\frac{n\ExclVoluRN(0)\PartNumb^2}{2\Volu} -\PartNumb \BoltCons T \ln \Volu_0 -n\PartNumb \BoltCons T \ln \Volu +n\PartNumb \BoltCons T \Bigg\lbrace \UnivPara \big\lbrack \frac{d}{2}\big(\ln(2\pi)+\Contraction^2\big)-\ln\Volu \big\rbrack \nonumber\\ &&- \frac{\LinkDens \LocaPart^2}{2}\cdot\frac{d}{2}\int_{\ISLL_1,\ISLL_2} \ln\big( \frac{1}{\ISLL_1}+\frac{1}{\ISLL_2}+\frac{\LinkScal^2}{\BoltCons T} \big) -e^{-\LinkDens \LocaPart}\frac{d}{2} \sum_{m=1}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!} \int_{\ISLL_1,\ldots,\ISLL_2} \ln \Big( \frac{\tISLL_1\cdots\tISLL_m}{\tISLL_1 +\cdots +\tISLL_m} \Big) \Bigg\rbrace ,\end{aligned}$$ where the parameter $\UnivPara$ is defined via $$\begin{aligned} \label{EQ:UnivParaDef} \UnivPara\equiv -\frac{\LinkDens \LocaPart^2}{2}+\LinkDens \LocaPart +e^{-\LinkDens \LocaPart}-1.\end{aligned}$$ Hamiltonian of the Goldstone deformed order parameter {#APP:HGS} ===================================================== In this Appendix we calculate the value of the Hamiltonian for the Goldstone-deformed 
order parameter by inserting the Goldstone-deformed order parameter (\[EQ:VOPGSR\]) into the Hamiltonian (\[EQ:HVOPHRS\]), following a calculation similar to that in Appendix \[APP:HSP\]. To obtain a description of the elasticity, we shall expand the Hamiltonian for small deformations, specifically in a series in the small, scalar variable that characterizes the replicated deformation field, viz., $\DefoScalPsi(z_1,z_2)\equiv (\hat{\DefoPosi}(z_1)-\hat{\DefoPosi}(z_2))^2-(1+n)(z_1-z_2)^2$. The quadratic term in the Hamiltonian is given by $$\begin{aligned} && \frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\VOP_{\REPP} \VOP_{-\REPP}\LinkPoteRepl_{\REPP} \nonumber\\ &=&\frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP} (\LinkPote)^{1+n} e^{-\frac{\LinkScal^2\vert\REPP\vert^2}{2\BoltCons T}} \big\lbrace \LocaPart \int \frac{dz_1}{\Volu_0}\int _{\ISLL_1} e^{-\frac{\vert\REPP\vert^2}{2\ISLL_1} -i \REPP \cdot \hat{\DefoPosi}(z_1) } -\LocaPart\delta^{((1+n)d)}_{\REPP}\big\rbrace \nonumber\\ && \quad\times \big\lbrace \LocaPart \int \frac{dz_2}{\Volu_0}\int _{\ISLL_2} e^{-\frac{\vert\REPP\vert^2}{2\ISLL_2} +i \REPP \cdot \hat{\DefoPosi}(z_2) } -\LocaPart\delta^{((1+n)d)}_{\REPP}\big\rbrace \nonumber\\ &=&\frac{\PartNumb \LinkDens \BoltCons T\LocaPart^2(\LinkPote)^{n}}{2 \Volu^n} \nonumber\\ && +\frac{\PartNumb \LinkDens \BoltCons T\LocaPart^2(\LinkPote)^{n}}{2 \Volu^n } \Volu_0 \Volu^n \int\frac{dz_1 dz_2}{\Volu_0^2}\int _{\ISLL_1,\ISLL_2} \Big\lbrace 2\pi \Big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\Big) \Big\rbrace^{-\frac{(1+n)d}{2}} e^{-\frac{(\hat{\DefoPosi}(z_1)-\hat{\DefoPosi}(z_2))^2} {2\big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\big)}} .\end{aligned}$$ Next, we expand for small $\DefoScalPsi$, adopting the notation $\DefoScalPsi(z_1,z_2)\equiv (\hat{\DefoPosi}(z_1)-\hat{\DefoPosi}(z_2))^2-(1+n)(z_1-z_2)^2$. Note that $\DefoScalPsi$ is *not* related to the deformation relative to the stationary point, this stationary point being characterized by the mean positions of the replicas of the particle $\hat{z}_{\Contraction}=\{z,\Contraction z,\ldots,\Contraction z\}$. Instead, $\DefoScalPsi$ describes deformations relative to the state right after linking (i.e., prior to relaxation), this state being characterized by the mean positions of the replicas of the particle $\hat{z}=\{z,z,\ldots,z\}$, as discussed in Sections \[SEC:EnerGoldSton\] and \[SEC:relaxation\]. 
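The bracketed factor in the expansion displayed next is simply the Taylor expansion of the Gaussian weight in the small quantity $\DefoScalPsi$; writing $A$ for the combination $\frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+\frac{\LinkScal^2}{2\BoltCons T}$, a one-line symbolic check (ours) is:

```python
import sympy as sp

psi, A = sp.symbols('psi A', positive=True)
print(sp.series(sp.exp(-psi / (2*A)), psi, 0, 3))
# 1 - psi/(2*A) + psi**2/(8*A**2) + O(psi**3), i.e. 1 - psi/2A + (1/2)(psi/2A)^2
```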
The expansion of the quadratic term for small $\DefoScalPsi$ is given by $$\begin{aligned} && \frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\VOP_{\REPP} \VOP_{-\REPP}\LinkPoteRepl_{\REPP} \nonumber\\ &=&\frac{\PartNumb \LinkDens \BoltCons T\LocaPart^2(\LinkPote)^{n}}{2 \Volu^n} \nonumber\\ && +\frac{\PartNumb \LinkDens \BoltCons T\LocaPart^2(\LinkPote)^{n}}{2 } \int\frac{dz_1 dz_2}{\Volu_0}\int _{\ISLL_1,\ISLL_2} \Big\lbrace 2\pi \Big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\Big) \Big\rbrace^{-(1+n)d/2} e^{-\frac{(1+n)(z_1-z_2)^2} {2\big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\big)}} \nonumber\\ &&\times \Bigg\lbrace 1-\frac{\DefoScalPsi(z_1,z_2)}{2\big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\big)} +\frac{1}{2}\Big(\frac{\DefoScalPsi(z_1,z_2)} {2\big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\big)}\Big)^2 +O(\DefoScalPsi(z_1,z_2)^3) \Bigg\rbrace .\end{aligned}$$ The small-$n$ expansion on this quadratic term is then given by $$\begin{aligned} && \frac{\PartNumb \LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0} \sum_{\REPP\in HRS}\VOP_{\REPP} \VOP_{-\REPP}\LinkPoteRepl_{\REPP} \nonumber\\ &=&\frac{\PartNumb \LinkDens \BoltCons T\LocaPart^2}{2} \Bigg\lbrace n \Big\lbrace \ln \Volu -\frac{d}{2}\big(\ln(2\pi)+1\big) -\frac{d}{2}\int _{\ISLL_1,\ISLL_2} \ln \Big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\Big) \Big\rbrace \nonumber\\ &&\quad +\int\frac{dz_1 dz_2}{\Volu_0}\int _{\ISLL_1,\ISLL_2} \Big\lbrace 2\pi \Big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\Big) \Big\rbrace^{-d/2} e^{-\frac{(1+n)(z_1-z_2)^2} {2\big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\big)}} \nonumber\\ &&\quad\quad\times \Big\lbrace -\frac{\DefoScalPsi(z_1,z_2)}{2\big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\big)} +\frac{1}{2}\Big(\frac{\DefoScalPsi(z_1,z_2)} {2\big( \frac{1}{2\ISLL_1}+\frac{1}{2\ISLL_2}+ \frac{\LinkScal^2}{2\BoltCons T}\big)}\Big)^2 +O(\DefoScalPsi(z_1,z_2)^3) \Big\rbrace \Bigg\rbrace .\end{aligned}$$ The calculation for the $\ln \MyZzero$ term is similar to the above calculation of the quadratic term. The expansion in small quantity $\DefoScalPsi$ reads $$\begin{aligned} \MyZzero &=& e^{-\LinkDens \LocaPart (\LinkPote_0/\Volu)^n}\Volu_0\Volu^n \Big\lbrace 1+\LinkDens \LocaPart (\LinkPote_0/\Volu)^n \nonumber\\ &&\quad +\frac{1}{\Volu_0\Volu^n} \sum_{m=2}^{\infty} \frac{\big(\LinkDens \LocaPart (\LinkPote_0)^n\big)^m}{m!} \int dz_1 \cdots dz_m \int_{\ISLL_1,\ldots,\ISLL_m} \prod _{j=1}^{m} \Big(\frac{\tISLL_j}{2\pi}\Big)^{\frac{(1+n)d}{2}} \Big(\frac{2\pi}{\tISLL_1+\cdots+\tISLL_m}\Big)^{\frac{(1+n)d}{2}} e^{-\frac{\tISLL_1\tISLL_2(z_1-z_2)^2+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)}} \nonumber\\ &&\quad\times\big\lbrace 1-\frac{\tISLL_1\tISLL_2 \DefoScalPsi(z_1,z_2)+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)} +\frac{1}{2}\big(\frac{\tISLL_1\tISLL_2 \DefoScalPsi(z_1,z_2)+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)} \big)^2 \big\rbrace \Big\rbrace ,\end{aligned}$$ where the summations that we have abbreviated with $\cdots$ include all pairs formed by $\{z_1,\ldots,z_m\}$. Next, we expand for small $n$, keep terms to $O(n)$ in the $\ln \MyZzero$ term, [assuming that both $\DefoScalPsi$ and $\DefoScalPsi^2$ contain $O(n)$ terms. 
]{} After a tedious calculation we have $$\begin{aligned} \label{EQ:AppLnz} -\ln \MyZzero &=& -\LinkDens \LocaPart (\frac{\LinkPote}{\Volu})^n -\ln \Volu_0 -n\ln \Volu - \LinkDens \LocaPart \nonumber\\ && -n e^{-\LinkDens \LocaPart} \Big\lbrace (1-e^{\LinkDens \LocaPart})\ln \Volu +\LinkDens\LocaPart e^{\LinkDens \LocaPart} \ln \Volu_0 +\big(e^{\LinkDens \LocaPart}-1 -\LinkDens\LocaPart e^{\LinkDens \LocaPart}\big)\frac{d}{2} \big( \ln(2\pi)+1 \big) \nonumber\\ &&\quad\quad +\frac{d}{2}\sum_{m=1}^{\infty}\int_{\ISLL _1,\ldots\ISLL _m} \ln\Big(\frac{\tISLL_1\cdots\tISLL_m}{\tISLL_1+\cdots+\tISLL_m}\Big) \Big\rbrace \nonumber\\ && -e^{-\LinkDens\LocaPart} \frac{1}{\Volu_0} \sum_{m=2}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!}\int dz_1\cdots dz_m \int_{\ISLL_1,\ldots,\ISLL_m} \prod_{j=1}^{m} \Big(\frac{\tISLL_j}{2\pi}\Big)^{\frac{d}{2}} \Big(\frac{2\pi}{\tISLL_1+\cdots+\tISLL_m}\Big)^{\frac{d}{2}} e^{-\frac{\tISLL_1\tISLL_2(z_1-z_2)^2+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)}} \nonumber\\ &&\quad\quad\times \Bigg\lbrace -\frac{\tISLL_1\tISLL_2 \DefoScalPsi(z_1,z_2)+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)} +\frac{1}{2}\Big(\frac{\tISLL_1\tISLL_2 \DefoScalPsi(z_1,z_2)+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)} \Big)^2 \Bigg\rbrace .\end{aligned}$$ To further simplify the expression, first consider the $O(\DefoScalPsi)$ terms in the expansion, $-\frac{\tISLL_1\tISLL_2 \DefoScalPsi(z_1,z_2)+\cdots}{2(\tISLL_1+\cdots+\tISLL_m)}$. The first term has a factor of $\DefoScalPsi(z_1,z_2)\equiv (\hat{\DefoPosi}(z_1)-\hat{\DefoPosi}(z_2))^2-(1+n)(z_1-z_2)^2$, which only involves two variables $z_1$ and $z_2$, so we can integrate out the other $(m-2)$ variables, i.e., $z_3,\ldots,z_m$. (Of course, for $m=2$, no integrals are needed.)In total, there are $\frac{m(m-1)}{2}$ such terms (i.e., the number of pairs among $m$ variables). Thus, the $O(\DefoScalPsi)$ term in $-\ln \MyZzero$ is given by $$\begin{aligned} \label{EQ:FirsOrdeInte} && -e^{-\LinkDens\LocaPart} \frac{1}{\Volu_0} \sum_{m=2}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!}\int dz_1\cdots dz_m \int_{\ISLL_1,\ldots,\ISLL_m} \prod_{j=1}^{m} \Big(\frac{\tISLL_j}{2\pi}\Big)^{\frac{d}{2}} \Big(\frac{2\pi}{\tISLL_1+\cdots+\tISLL_m}\Big)^{\frac{d}{2}} \nonumber\\ && \quad\times e^{-\frac{\tISLL_1\tISLL_2(z_1-z_2)^2+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)}} \Bigg\lbrace -\frac{\tISLL_1\tISLL_2 \DefoScalPsi(z_1,z_2)+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)} \Bigg\rbrace \nonumber\\ &=& -e^{-\LinkDens\LocaPart} \frac{1}{\Volu_0} \sum_{m=2}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!}\int dz_1\cdots dz_m \int_{\ISLL_1,\ldots,\ISLL_m} \prod_{j=1}^{m} \Big(\frac{\tISLL_j}{2\pi}\Big)^{\frac{d}{2}} \int dc \nonumber\\ && \quad\times e^{-\frac{\tISLL_1}{2}(z_1-c)^2-\frac{\tISLL_2}{2}(z_2-c)^2-\cdots} \Bigg\lbrace -\frac{\tISLL_1\tISLL_2 \DefoScalPsi(z_1,z_2)+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)} \Bigg\rbrace \nonumber\\ &=& -e^{-\LinkDens\LocaPart} \frac{1}{\Volu_0} \sum_{m=2}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!} \frac{m(m-1)}{2} \int dz_1 dz_2 \int_{\ISLL_1,\ldots,\ISLL_m} \Big(\frac{\tISLL_1 \tISLL_2}{2\pi(\tISLL_1+\tISLL_2)}\Big)^{\frac{d}{2}} \nonumber\\ &&\quad\times e^{-\frac{\tISLL_1 \tISLL_2 (z_1-z_2)^2}{2(\tISLL_1+\tISLL_2)}} \Big\lbrace -\frac{\tISLL_1\tISLL_2 } {2(\tISLL_1+\cdots+\tISLL_m)} \Big\rbrace \DefoScalPsi(z_1,z_2) ,\end{aligned}$$ where in the last line here we have used the fact that $\{z_1,z_2,\ldots,z_m\}$ appear symmetrically, so that the $\frac{m(m-1)}{2}$ terms are identical. Similarly, for the $O(\DefoScalPsi^2)$ terms in the expansion in Eq. 
(\[EQ:AppLnz\]) there are terms involving two points, such as $\DefoScalPsi(z_1,z_2)^2$, three points, such as $\DefoScalPsi(z_1,z_2)\DefoScalPsi(z_1,z_3)$, and four points, such as $\DefoScalPsi(z_1,z_2)\DefoScalPsi(z_3,z_4)$. (Of course, for $m=3$ there are no four-point terms, and for $m=2$ there are no three or four points terms.)Thus, the $O(\DefoScalPsi^2)$ terms can be written as $$\begin{aligned} \label{EQ:QuadExpa} && \Big\lbrace\frac{\tISLL_1\tISLL_2 \DefoScalPsi(z_1,z_2)+\cdots} {2(\tISLL_1+\cdots+\tISLL_m)}\Big\rbrace ^2 \nonumber\\ &\to& \frac{1} {4(\tISLL_1+\cdots+\tISLL_m)^2} \Big\lbrace \frac{m(m-1)}{2}\tISLL_1^2\tISLL_2^2 \DefoScalPsi(z_1,z_2)^2 \nonumber\\ &&+ m(m-1)(m-2)\tISLL_1\tISLL_2^2\tISLL_3 \DefoScalPsi(z_1,z_2)\DefoScalPsi(z_2,z_3) \nonumber\\ &&+ \frac{m(m-1)(m-2)(m-3)}{4}\tISLL_1\tISLL_2\tISLL_3\tISLL_4 \DefoScalPsi(z_1,z_2)\DefoScalPsi(z_3,z_4) \Big\rbrace .\end{aligned}$$ Following a calculation similar to that in Eq. (\[EQ:FirsOrdeInte\]), we can integrate out the integration variables that are not present in $\DefoScalPsi$, and thus obtain the $O(\DefoScalPsi^2)$ term in $\ln \MyZzero$. Summing up the contributions from the quadratic term and the $\ln \MyZzero$ term, we arrive at the Hamiltonian of the Goldstone deformed state: $$\begin{aligned} H_{\VOP}^{(G)}= \HVOPSP+H_{\VOP}^{\DefoScalPsi},\end{aligned}$$ with $\HVOPSP$ being the Hamiltonian of the stationary point, and the increase of the Hamiltonian due to Goldstone deformation is given by $$\begin{aligned} \label{EQ:HPsi} H_{\VOP}^{\DefoScalPsi} &=& -n\PartNumb \BoltCons T \frac{\UnivPara d}{2}(\Contraction^2-1) + \frac{1}{2}\int dz_1 dz_2 \KernOne(z_1,z_2)\DefoScalPsi(z_1,z_2) \nonumber\\ && -\frac{1}{8\BoltCons T}\int dz_1 dz_2 dz_3 dz_4 \KernTwo(z_1,z_2,z_3,z_4) \DefoScalPsi(z_1,z_2)\DefoScalPsi(z_3,z_4) .\end{aligned}$$ The first term here, i.e., $-\PartNumb \BoltCons T \frac{\UnivPara d}{2}(\Contraction^2-1)$, is present due to the fact that the expansion variable $\DefoScalPsi(z_1,z_2)$ measures departures from the *state right after linking*, not the stationary point, as we have previously discussed. Note that $H_{\VOP}^{\DefoScalPsi}$ involves only the energy of shear deformation, because the Goldstone modes contain only pure shear deformation. The energy of volume variations is in the stationary-point Hamiltonian part, which contains the variable contraction parameter $\Contraction$. The kernels in Eq. 
(\[EQ:HPsi\]) are given by $$\begin{aligned} \label{EQ:KoneApp} \frac{1}{2}\KernOne(z_1,z_2) &=& \frac{\PartNumb \LinkDens \BoltCons T \LocaPart^2}{4 \Volu_0} \int_{\tISLL_1,\tISLL_2} \Big(2\pi\Big(\frac{1}{\tISLL_1}+\frac{1}{\tISLL_2}+\frac{\LinkScal^2}{\BoltCons T}\Big)\Big)^{-d/2} \Big(\frac{1}{\tISLL_1}+\frac{1}{\tISLL_2}+\frac{\LinkScal^2}{\BoltCons T}\Big)^{-1} e^{-\frac{(z_1-z_2)^2}{2\big(\frac{1}{\tISLL_1}+\frac{1}{\tISLL_2}+\frac{\LinkScal^2}{\BoltCons T}\big)}} \nonumber\\ && +\frac{\PartNumb \BoltCons T}{2 \Volu_0} e^{-\LinkDens \LocaPart} \sum_{m=2}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!}\frac{m(m-1)}{2}\int_{\tISLL_1,\ldots,\tISLL_m} \Big(\frac{\tISLL_1\tISLL_2}{2\pi(\tISLL_1+\tISLL_2)}\Big)^{d/2} \nonumber\\ && \quad\quad\times e^{-\frac{\tISLL_1\tISLL_2(z_1-z_2)^2}{2(\tISLL_1+\tISLL_2)}} \frac{\tISLL_1\tISLL_2}{\tISLL_1+\cdots+\tISLL_m} ,\end{aligned}$$ and $$\begin{aligned} \label{EQ:KtwoApp} && -\frac{1}{8\BoltCons T}\KernTwo(z_1,z_2,z_3,z_4) \nonumber\\ &=& \frac{\PartNumb \LinkDens \BoltCons T \LocaPart^2}{16 \Volu_0} \int_{\tISLL_1,\tISLL_2} \Big(2\pi\Big(\frac{1}{\tISLL_1}+\frac{1}{\tISLL_2}+\frac{\LinkScal^2}{\BoltCons T}\Big)\Big)^{-d/2} \Big(\frac{1}{\tISLL_1}+\frac{1}{\tISLL_2}+\frac{\LinkScal^2}{\BoltCons T}\Big)^{-1} \nonumber\\ && \quad\quad\times e^{-\frac{(z_1-z_2)^2}{2\big(\frac{1}{\tISLL_1}+\frac{1}{\tISLL_2}+\frac{\LinkScal^2}{\BoltCons T}\big)}} \delta^{(d)}(z_1-z_3) \delta^{(d)}(z_2-z_4) \nonumber\\ && -\frac{\PartNumb \BoltCons T}{8 \Volu_0} e^{-\LinkDens \LocaPart} \sum_{m=2}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!}\frac{m(m-1)}{2}\int_{\tISLL_1,\ldots,\tISLL_m} \Big(\frac{\tISLL_1\tISLL_2}{2\pi(\tISLL_1+\tISLL_2)}\Big)^{d/2} \nonumber\\ && \quad\quad\times e^{-\frac{\tISLL_1\tISLL_2(z_1-z_2)^2}{2(\tISLL_1+\tISLL_2)}} \frac{\tISLL_1^2\tISLL_2^2}{(\tISLL_1+\cdots+\tISLL_m)^2}\delta^{(d)}(z_1-z_3) \delta^{(d)}(z_2-z_4) \nonumber\\ && -\frac{\PartNumb \BoltCons T}{8 \Volu_0} e^{-\LinkDens \LocaPart} \sum_{m=3}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!}m(m-1)(m-2)\int_{\tISLL_1,\ldots,\tISLL_m} \Big(\frac{\tISLL_1\tISLL_2\tISLL_3}{4\pi^2(\tISLL_1+\tISLL_2+\tISLL_3)}\Big)^{d/2} \nonumber\\ && \quad\quad\times e^{-\frac{\tISLL_1\tISLL_2(z_1-z_2)^2+\tISLL_2\tISLL_3(z_2-z_3)^2+\tISLL_3\tISLL_1(z_3-z_1)^2} {2(\tISLL_1+\tISLL_2+\tISLL_3)}} \frac{\tISLL_1\tISLL_2^2\tISLL_3}{(\tISLL_1+\cdots+\tISLL_m)^2}\delta^{(d)}(z_2-z_4) \nonumber\\ && -\frac{\PartNumb \BoltCons T}{8 \Volu_0} e^{-\LinkDens \LocaPart} \sum_{m=3}^{\infty}\frac{(\LinkDens \LocaPart)^m}{m!}\frac{m(m-1)(m-2)(m-3)}{4}\int_{\tISLL_1,\ldots,\tISLL_m} \Big(\frac{\tISLL_1\tISLL_2\tISLL_3\tISLL_4}{8\pi^3(\tISLL_1+\tISLL_2+\tISLL_3+\tISLL_4)}\Big)^{d/2} \nonumber\\ && \quad\quad\times e^{-\frac{\tISLL_1\tISLL_2(z_1-z_2)^2+\tISLL_1\tISLL_3(z_1-z_3)^2+\tISLL_1\tISLL_4(z_1-z_4)^2 +\tISLL_2\tISLL_3(z_2-z_3)^2+\tISLL_2\tISLL_4(z_2-z_4)^2+\tISLL_3\tISLL_4(z_3-z_4)^2} {2(\tISLL_1+\tISLL_2+\tISLL_3+\tISLL_4)}} \frac{\tISLL_1\tISLL_2\tISLL_3\tISLL_4}{(\tISLL_1+\cdots+\tISLL_m)^2} .\end{aligned}$$ Strictly speaking, the kernel $\KernTwo$ should be symmetric under the exchanges of variables $z_1 \leftrightarrow z_2$ or $z_3 \leftrightarrow z_4$. Here, to save space, we have written the above non-symmetric form. 
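In practice this symmetrization is a mechanical averaging over the two index exchanges, written out explicitly in the equation that follows; a minimal functional sketch (ours, with the kernel represented as a Python callable) is:

```python
def symmetrize(K2):
    """Average a four-point kernel over the exchanges z1 <-> z2 and z3 <-> z4."""
    def K2_sym(z1, z2, z3, z4):
        return 0.25 * (K2(z1, z2, z3, z4) + K2(z1, z2, z4, z3)
                       + K2(z2, z1, z3, z4) + K2(z2, z1, z4, z3))
    return K2_sym
```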
The true (i.e., symmetric) form can be recovered by averaging: $$\begin{aligned} \label{EQ:KSym} \KernTwo(z_1,z_2,z_3,z_4) \to \frac{1}{4} \big( \KernTwo(z_1,z_2,z_3,z_4)+ \KernTwo(z_1,z_2,z_4,z_3)+ \KernTwo(z_2,z_1,z_3,z_4)+ \KernTwo(z_2,z_1,z_4,z_3) \big) .\end{aligned}$$ Relaxation of the phenomenological elastic free energy for a given realization of disorder {#APP:Relaxation} ========================================================================================== In this Appendix we solve the stationarity condition for the random local deformations $\RelaRand$. First, we need to calculate the variation of the bulk term, which can be expanded, to leading order in small $\RelaRand$, as [^10] $$\begin{aligned} \label{EQ:DetPPExp} \textrm{det}\Big(\frac{\partial\DefoPosi_i(z)}{\partial z_j}\Big) &=& \textrm{det} \Big( \Contraction \delta_{ij} + \partial_j \RelaRand_i(z) \Big) \nonumber\\ &=& \Contraction^{d} \textrm{det} \Big( \delta_{ij} + \Contraction^{-1}\partial_j \RelaRand_i(z) \Big) \nonumber\\ &\simeq& \Contraction^{d} \big(1+\Contraction^{-1} \partial_i \RelaRand_i(z)\big).\end{aligned}$$ Using this expansion, we have $$\begin{aligned} \label{EQ:BulkTermExp} && \Big\lbrace \textrm{det}\Big(\frac{\partial\DefoPosi_i(z)}{\partial z_j}\Big)-1 \Big\rbrace^2 \nonumber\\ &=& (\Contraction^{d}-1)^2 + 2(\Contraction^d-1)\Contraction^{d-1}\partial_i \RelaRand_i(z) \nonumber\\ && + \Contraction^{2d-2}\partial_i \RelaRand_i(z)\partial_j \RelaRand_j(z) .\end{aligned}$$ Thus, the stationarity equation reads $$\begin{aligned} 0 &=& 2(\Contraction z_a + \RelaRand_a(z))\int dz_2 \NonLocaKern(z,z_2) -2\!\int dz_2\NonLocaKern(z,z_2)(\Contraction z_{2,a} + \RelaRand_a(z_2)) -\BulkModuZeroP \partial_{a} (\partial_i \RelaRand_i(z)) ,\end{aligned}$$ where $$\begin{aligned} \BulkModuZeroP \equiv \BulkModuZero \Contraction^{2d-2} .\end{aligned}$$ We take the disorder average of the nonlocal kernel $\NonLocaKernZero$ to be a zeroth-order quantity and the fluctuation part $\NonLocaKernOne$ to be a first-order quantity; thus, $\RelaRand(z)$ is also of first order. The zeroth-order equation is then $$\begin{aligned} 0=2z_a\int dz_2 \NonLocaKernZero(z-z_2)-2\int dz_2 \NonLocaKernZero(z-z_2)z_{2,a}\, ,\end{aligned}$$ which is automatically satisfied, given that $\NonLocaKernZero(z-z_2)$ is even in $(z-z_2)$. The first-order equation reads $$\begin{aligned} \! 0\! = \Contraction z_a \! \int\! dz_2 \NonLocaKernOne(z,z_2) \!+ \RelaRand_a(z)\!\! \int\! dz_2 \NonLocaKernZero(z-z_2) \!- \!\!\int \!\! dz_2 \NonLocaKernOne(z,z_2)\Contraction z_{2,a} \!- \!\int \!\! dz_2 \NonLocaKernZero(z-z_2) \RelaRand_a(z_2) \!- \frac{\BulkModuZeroP}{2} \partial_{a} (\partial_i \RelaRand_i(z)) . \,\end{aligned}$$ We address this equation in momentum space.
We define the following Fourier transforms (on a specific finite volume—the volume of the state right after linking, viz., $\Volu_0$): $$\begin{aligned} \NonLocaKernZero_{p} &=& \int dx\, e^{-ipx} \NonLocaKernZero (x) , \nonumber\\ \NonLocaKernOne_{p_1,p_2} &=& \int dx_1 dx_2\, e^{-ip_1 x_1-ip_2 x_2} \NonLocaKernOne (x_1,x_2) ,\end{aligned}$$ so that the momentum-space stationarity equation becomes $$\begin{aligned} \label{mom-space-stat} 0 &=& i \Contraction \frac{\partial}{\partial p_{1,a}} \NonLocaKernOne_{p_1,0} -i \Contraction \frac{\partial}{\partial p_{2,a}} \Big\vert_{p_2=0} \NonLocaKernOne_{p_1,p_2} + (\NonLocaKernZero_{0}-\NonLocaKernZero_{p_1})\RelaRand_{a,p_1} + \frac{\BulkModuZeroP}{2}p_{1,a} p_{1,b}\RelaRand_{b,p_1} .\end{aligned}$$ Strictly speaking, the derivatives here should instead be understood as difference quotients, because we are using a finite-volume version of the Fourier transform; but for convenience we write them as derivatives. Equation (\[mom-space-stat\]) can be written in the tensorial form $$\begin{aligned} \label{EQ:OrdeOneMomeTens} \Big\lbrace 2\Big( \frac{\DiffG_p}{p^2}\Big) \IdenT +\BulkModuZeroP \PLongT \Big\rbrace \cdot \vert p \vert ^2 \vec{\RelaRand}(p) = \vec{\RandForc}(p) ,\end{aligned}$$ where $$\begin{aligned} \label{EQ:VectRelaRand} \RandForc_{a,p_1}\equiv -2\Contraction \Big( i\frac{\partial}{\partial p_{1,a}} \NonLocaKernOne_{p_1,0} -i\frac{\partial}{\partial p_{2,a}}\Big\vert_{p_2=0} \NonLocaKernOne_{p_1,p_2} \Big).\end{aligned}$$ This quantity $\RandForc_{a,p_1}$ is the random force in the state that has been contracted but not yet equilibrated with respect to the randomness, and $$\begin{aligned} \DiffG_p &\equiv& \NonLocaKernZero_{0}-\NonLocaKernZero_{p}.\end{aligned}$$ Furthermore, $\IdenT$ is the $\Dime$-dimensional identity matrix, and the projection operators in momentum space, $\PLong_{ij}$ and $\PPerp_{ij}$, are defined as $$\begin{aligned} \label{EQ:DefiProj} \PLong_{ij} &\equiv& (p_i\,p_j)/p^2 , \nonumber\\ \PPerp_{ij} &\equiv& \delta_{ij}-(p_i\,p_j)/p^2 .\end{aligned}$$ They satisfy the following relations: $$\begin{aligned} (\PLongT)^2 = \PLongT, \quad (\PPerpT)^2 = \PPerpT, \quad \PLongT \cdot \PPerpT =0 .\end{aligned}$$ In the following we shall use bold-face letters to denote rank-2 tensors, and letters with an overhead arrow (such as $\vec{\RelaRand}(p)$) to denote a vector, when needed. By this decomposition we arrive at the solution to Eq. (\[EQ:OrdeOneMomeTens\]): $$\begin{aligned} \label{EQ:SoluRelaRand} \vec{\RelaRand}_{p} &=& \frac{\PPerpT \cdot \vec{\RandForc}_{p}} {2\DiffG_p} +\frac{\PLongT \cdot \vec{\RandForc}_{p}} {\BulkModuZeroP \vert p \vert^2 + 2\DiffG_p} .\end{aligned}$$ Notice that the second term is much smaller than the first term, due to the large bulk modulus $\BulkModuZeroP$. In the incompressible limit (i.e., $\BulkModuZero \to \infty$), we have $$\begin{aligned} \vec{\RelaRand}_{p}=\frac{\PPerpT \cdot \vec{\RandForc}_{p}} {2\DiffG_p} ,\end{aligned}$$ which is a purely transverse field, meaning that it satisfies $p_i\,\RelaRand_{i,p}=0$ or, equivalently, $\partial_i\,\RelaRand_{i}(x)=0$, which is the only deformation allowed in an incompressible medium. Re-expanding the elastic energy around the equilibrium reference state {#APP:ReExpandFreeEner} ====================================================================== In this Appendix we re-expand the elastic energy for deformations relative to the relaxed state, $\tz=\Contraction z + \RelaRand(z)$, as discussed in Section \[SEC:EFERS\].
The small variable in this expansion is the deformation field $\tu(\tz)$ relative to the relaxed state. Furthermore, to obtain a continuum description of the elasticity, we adopt a notation involving the strain tensor $\StraTens_{ij}(x)$: $$\begin{aligned} \!\StraTens_{ij}(x)\!&\equiv&\! \frac{1}{2}(\DefoGrad_{ij}(x)\DefoGrad_{ij}(x)-\delta_{ij}) \nonumber\\ \!&=&\! \frac{1}{2}(\partial_i \deformation_j(x)\!+\!\partial_j \deformation_i(x) \!+\!\partial_i \deformation_l(x)\partial_j \deformation_l(x) ).\end{aligned}$$ where $\DefoGrad_{ij}(x)\equiv \partial \DefoPosi_i(x)/\partial x_j$ is the deformation gradient tensor. This strain tensor transforms as a tensor in the reference space labeled by $x$, and as a scalar in the target space labeled by $\DefoPosi$. Expanding the nonlocal kernel $\tNonLocaKern$ in the relaxed state {#SEC:NLKRR} ------------------------------------------------------------------ The definition of $\tNonLocaKern$, given in Section \[SEC:EFERS\], is $$\begin{aligned} \tNonLocaKern(\tz_1,\tz_2)&\equiv&\NonLocaKern(z(\tz_1),z(\tz_2)) .\end{aligned}$$ It can be expanded for small $\RelaRand$ to yield a direct expression for $\tNonLocaKern$. In momentum-space this reads $$\begin{aligned} \tNonLocaKern_{\tp_1,\tp_2} =& \int d\tz_1 d\tz_2 e^{-i\tp_1 \tz_1-i\tp_2 \tz_2} \tNonLocaKern(\tz_1,\tz_2) = \!\int d\tz_1 d\tz_2 e^{-i\tp_1 \tz_1-i\tp_2 \tz_2} \NonLocaKern(z(\tz_1),z(\tz_2)) \nonumber\\ =& \int dz_1 dz_2 \Jaco(z_1) \Jaco(z_2) e^{-i\tp_1 (\Contraction z_1+\RelaRand(z_1)) -i\tp_2 (\Contraction z_2+\RelaRand(z_2))} \NonLocaKern(z_1,z_2) ,\end{aligned}$$ where, in the first line, $z(\tz_1)$ is the mapping of a mass point $\tz_1$ in the relaxed state back to the position $z(\tz_1)$, at which it was located in the state right after linking. Inserting in the expressions for $\Contraction$ and $\RelaRand(z)$, given in Eqs. (\[EQ:SoluCont\], \[EQ:SoluV\]), and keeping terms to $O((1/\BulkModuZero)^0)$ \[which gives $\Contraction \simeq 1$ and $\Jaco(z)\simeq 1$\], we can expand $\RelaRand$ down from the exponent, and keep terms to first order in $\NonLocaKernOne$ (noting that $\RelaRand$ is the same order as $\NonLocaKernOne$), and thus arrive at $$\begin{aligned} \label{EQ:tNonLocaKern} \tNonLocaKern_{\tp_1,\tp_2} \simeq& \int dz_1 dz_2 \big(1-i\tp_1 \RelaRand(z_1) -i\tp_2 \RelaRand(z_2)\big) e^{-i\tp_1 z_1-i\tp_2 z_2} (\NonLocaKernZero(z_1,z_2)+\NonLocaKernOne(z_1,z_2)) \nonumber\\ \simeq& \,\,\NonLocaKernZero_{\tp_1,\tp_2}+\NonLocaKernOne_{\tp_1,\tp_2} -i \int dz_1 dz_2 \big(\tp_1 \RelaRand(z_1) +\tp_2 \RelaRand(z_2)\big) e^{-i\tp_1 z_1-i\tp_2 z_2} \NonLocaKernZero(z_1,z_2) \nonumber\\ =& \,\, \NonLocaKernZero_{\tp_1,\tp_2}+\NonLocaKernOne_{\tp_1,\tp_2} -i \big( \tp_1\cdot\RelaRand_{(\tp_1+\tp_2)} \NonLocaKernZero_{\tp_2} +\tp_2\cdot\RelaRand_{(\tp_1+\tp_2)} \NonLocaKernZero_{\tp_1} \big) .\end{aligned}$$ Local expansion of the harmonic attraction {#APP:REHT} ------------------------------------------ In this section we make a local expansion of the nonlocal term in the elastic free energy of the equilibrium reference state, Eq. (\[EQ:REExpaEner\]), i.e., the term $$\begin{aligned} \label{EQ:FNL} \FreeEnerPhen_{\textrm{nonlocal}} = & \, \frac{1}{2} \!\int\! 
d\tz_1 d\tz_2 \Jaco(z_1)^{-1}\Jaco(z_2)^{-1} \tNonLocaKern (\tz_1,\tz_2) \Big( \big\vert \tDefoPosi(\tz_1)-\tDefoPosi(\tz_2)\big\vert^2 - \big\vert \tz_1-\tz_2\big\vert^2 \Big) .\end{aligned}$$ For convenience of notation, we define the following change of variables: $$\begin{aligned} z&=&z_1 ,\nonumber\\ y&=&z_2-z_1 ,\nonumber\\ \NLM(z,y)&\equiv&\tNonLocaKern(z_1,z_2),\end{aligned}$$ so that the nonlocal kernel in the relaxed state, Eq. (\[EQ:tNonLocaKernText\]), can be written (in momentum space) as $$\begin{aligned} \label{EQ:tMinM} \tNLM_{\tp,\tq}&\simeq&\NLMZero_{\tp,\tq}+\NLMOne_{\tp,\tq} + \tg_{\tp,\tq}\big( \RandForc_{\tp} \cdot \PPerpT \cdot \tq \big) ,\end{aligned}$$ with the definition of $\tg_{\tp,\tq}$, and then its leading-order expansion in momentum, being given by $$\begin{aligned} \tg_{\tp,\tq}&\equiv&\frac{i(\NonLocaKernZero_{\tq}-\NonLocaKernZero_{\tp-\tq})} {2(\NonLocaKernZero_{0}-\NonLocaKernZero_{\tp})} \simeq \frac{i(\tp^2-2\tp\cdot\tq)}{2\tp^2} .\end{aligned}$$ The local expansion of Eq. (\[EQ:FNL\]) then becomes $$\begin{aligned} \FreeEnerPhen_{\textrm{nonlocal}} &=& \frac{1}{2} \int d\tz d\ty \, \tNLM (\tz,\ty) \Big( \big\vert \tDefoPosi(\tz)-\tDefoPosi(\tz+\ty)\big\vert^2 - \big\vert y \big\vert^2 \Big) \nonumber\\ &\simeq& \frac{1}{2} \int d\tz \big( \partial _i \tDefoPosi_l(\tz)\partial_j \tDefoPosi_l(\tz) -\delta_{ij} \big) \int d\ty \,\, \ty_{i}\,\ty_{j} \, \tNLM(\tz,\ty),\end{aligned}$$ where the factor $\Jaco(z_1)^{-1}\Jaco(z_2)^{-1}$ is ignored because its difference from unity is of $O(1/\BulkModuZero)$. Now it is straightforward to express the elastic energy $\FreeEnerPhen_{\textrm{nonlocal}}$ in the standard form of Lagrangian elasticity using the strain tensor $\tStraTens_{ij}(\tz)=\frac{1}{2} \big( \partial _i \tDefoPosi_l(\tz)\partial_j \tDefoPosi_l(\tz) -\delta_{ij} \big)$. The complete expression of the local form of the elastic energy for deformations relative to the relaxed state, including the contribution from the bulk term, will be calculated in Appendix \[APP:Lagr\]. Expansion of the bulk term -------------------------- The bulk term in the elastic free energy Eq. 
(\[EQ:REExpaEner\]) is given by $$\begin{aligned} \FreeEnerPhen_{\textrm{bulk}} \equiv \frac{\BulkModuZero}{2} \int d\tz \Jaco(z)^{-1} \Big\lbrace \Jaco(z)\, \textrm{det} \Big( \frac{\partial \tDefoPosi_i(\tz)}{\partial \tz_j} \Big)-1 \Big\rbrace^2 .\end{aligned}$$ The determinant in this equation can be expanded using the strain tensor $\tStraTensT$: $$\begin{aligned} \label{EQ:EXPDet} \textrm{det} \Big( \frac{\partial \tDefoPosi_i(\tz)}{\partial \tz_j}\Big) &=& \textrm{det} \big(\tDefoGradT(\tz)\big) = \big\lbrace \textrm{det} \big(\IdenT+2\tStraTensT(\tz)\big)\big\rbrace^{1/2} = e^{\frac{1}{2}\textrm{Tr}\, \ln \big(\IdenT+2\tStraTensT(\tz)\big)} \nonumber\\ &=& 1+ \textrm{Tr} \tStraTensT(\tz) -\textrm{Tr} \tStraTensT(\tz)^2 +\frac{1}{2} \big(\textrm{Tr} \tStraTensT(\tz)\big)^2 + O(\tStraTensT(\tz)^3).\end{aligned}$$ Thus, we have $$\begin{aligned} \FreeEnerPhen_{\textrm{bulk}} =&\, \frac{\BulkModuZero}{2} \int d\tz \Jaco(z)^{-1} \Big\lbrace \Jaco(z) \textrm{det} \Big( \frac{\partial \tDefoPosi_i(\tz)}{\partial \tz_j} \Big)-1 \Big\rbrace^2 \nonumber\\ \simeq&\, \frac{\BulkModuZero}{2} \int d\tz \Jaco(z)^{-1} \Big\lbrace (\Jaco(z)-1)^2 +2(\Jaco(z)-1)\Jaco(z) \textrm{Tr} \tStraTensT (\tz) \nonumber\\ & \quad\quad\quad - 2(\Jaco(z)-1)\Jaco(z) \textrm{Tr} \tStraTensT (\tz)^2 + (2\Jaco(z)-1)\Jaco(z) (\textrm{Tr} \tStraTensT (\tz))^2 \Big\rbrace .\end{aligned}$$ Inserting the solutions for $\Contraction$ and $\RelaRand$, given in Eqs. (\[EQ:SoluCont\], \[EQ:SoluV\]), into the Jacobian $\Jaco (z) \equiv \big\vert \frac{\partial \tz_i}{\partial z_j}\big\vert$, we arrive at $$\begin{aligned} \FreeEnerPhen_{\textrm{bulk}} = \int d\tz \big\lbrace \textrm{Tr}(\StreBulkT(\tz)\cdot\tStraTensT(\tz)) + \SheaModu(\tz) \textrm{Tr} \tStraTensT (\tz)^2 + \frac{\BulkModu(\tz)}{2} (\textrm{Tr} \tStraTensT (\tz))^2 \big\rbrace ,\end{aligned}$$ with the elastic parameters (in momentum space) being given by $$\begin{aligned} \StreBulk_{ij,p} &=&\delta_{ij}\Big\lbrace \frac{i\tp\cdot\vec{\RandForc}_{\tp}}{\tp^2} -\MeanSheaModu\Volu_0\delta_{\tp} \Big\rbrace , \\ \SheaModu_{\tp} &=& \MeanSheaModu\Volu_0\delta_{\tp} - \frac{i\tp\cdot\vec{\RandForc}_{\tp}}{\tp^2} , \\ \BulkModu_{\tp} &=& \BulkModuZero\Volu_0\delta_{\tp} + 2\Big\lbrace \frac{i\tp\cdot\vec{\RandForc}_{\tp}}{\tp^2} -\MeanSheaModu\Volu_0\delta_{\tp} \Big\rbrace .\end{aligned}$$ Local form of the elastic energy relative to the relaxed state {#APP:Lagr} -------------------------------------------------------------- Summing up the contributions from the nonlocal term $\FreeEnerPhen_{\textrm{nonlocal}}$ and the bulk term $\FreeEnerPhen_{\textrm{bulk}}$ to the elastic free energy (\[EQ:REExpaEner\]), we arrive at the local form of the elastic energy for deformations relative to the relaxed state: $$\begin{aligned} \FreeEnerPhen = \int d\tz \big\lbrace \textrm{Tr}(\StreT(\tz)\cdot\tStraTensT(\tz)) + \SheaModu(\tz) \textrm{Tr} \tStraTensT (\tz)^2 + \frac{\BulkModu(\tz)}{2} (\textrm{Tr} \tStraTensT (\tz))^2 \big\rbrace ,\end{aligned}$$ with the elastic parameters being given by $$\begin{aligned} \Stre_{ij,\tp} =&\, -\frac{\partial^2}{\partial \tq_i \partial \tq_j} \Big\vert_{q=0} \NonLocaKernOne_{\tp-\tq,\tp} + i \delta_{ij} \frac{i\tp\cdot\vec{\RandForc}_{\tp}}{\vert \tp \vert ^2} -\frac{\RandForc_{a,\tp}}{\vert \tp \vert ^2}\big( \tp_{i} \PPerp_{ja,\tp} + \tp_{j} \PPerp_{ia,\tp} \big) , \\ \SheaModu_{\tp} =&\, \MeanSheaModu \Volu_0 \delta_{\tp} - \frac{i\tp\cdot\vec{\RandForc}_{\tp}}{\vert \tp \vert ^2} , \\ \BulkModu_{\tp} =&\, \BulkModuZero \Volu_0 
\delta_{\tp} + 2\Big\lbrace \frac{i\tp\cdot\vec{\RandForc}_{\tp}}{\vert \tp \vert ^2} -\MeanSheaModu \Volu_0 \delta_{\tp} \Big\rbrace .\end{aligned}$$ Relaxation of the deformed state {#APP:DefoRela} ================================ In this section we solve the stationarity equation with a given macroscopic deformation $\DefoGrad$, as discussed in Section \[SEC:NAD\], and thus obtain information about nonaffine deformations. The stationarity condition is given by $$\begin{aligned} 2(\Contraction \DefoGrad_{ai}z_i+(\RelaRandL)_{a}(z)) \int dz_2 \NonLocaKern (z,z_2) -2 \int dz_2 \NonLocaKern (z,z_2) (\Contraction \DefoGrad_{ai}z_{2,i}+(\RelaRandL)_{a}(z_2)) - \BulkModuZeroP \DefoGrad^{-1}_{ia}\DefoGrad^{-1}_{jb}\partial_i \partial_j (\RelaRandL)_{b}(z) =0 .\end{aligned}$$ We take $\NonLocaKernZero$ to be of zeroth order, and $\NonLocaKernOne$ and $\RelaRandL(z)$ to be first-order quantities. Thus, the zeroth order equation reads $$\begin{aligned} 0=2\DefoGrad_{ai}z_i\int dz_2 \NonLocaKernZero(z-z_2)-2\DefoGrad_{ai}\int dz_2 \NonLocaKernZero(z-z_2)z_{2,i} ,\end{aligned}$$ which is already satisfied given $\NonLocaKernZero(z-z_2)$ is even in $(z-z_2)$. The first-order equation reads $$\begin{aligned} && \Contraction \DefoGrad_{ai}z_i \int dz_2 \NonLocaKernOne (z,z_2) +(\RelaRandL)_{a}(z) \int dz_2 \NonLocaKernZero (z,z_2) - \int dz_2 \NonLocaKernOne (z,z_2) \Contraction \DefoGrad_{ai}z_{2,i} - \int dz_2 \NonLocaKernZero (z,z_2) (\RelaRandL)_{a}(z_2) \nonumber\\ && - \frac{\BulkModuZeroP}{2} \DefoGrad^{-1}_{ia}\DefoGrad^{-1}_{jb}\partial_i \partial_j (\RelaRandL)_{b}(z) =0 .\end{aligned}$$ We can solve this equation in momentum space, where it is expressed as $$\begin{aligned} 0 &=& i \Contraction \DefoGrad_{ai} \frac{\partial}{\partial p_{1,i}} \NonLocaKernOne_{p_1,0} -i \Contraction \DefoGrad_{ai} \frac{\partial}{\partial p_{2,i}} \Big\vert_{p_2=0} \NonLocaKernOne_{p_1,p_2} + (\NonLocaKernZero_{0}-\NonLocaKernZero_{p_1})(\RelaRandL)_{a,p_1} + \frac{\BulkModuZeroP}{2} \DefoGrad^{-1}_{ia}\DefoGrad^{-1}_{jb} p_{1,i} p_{1,j}\RelaRand_{b,p_1} .\end{aligned}$$ Writing this equation in tensorial form, we have $$\begin{aligned} \label{EQ:DefoRelaOOne} \Bigg\lbrace 2\Big( \frac{D_p}{\vert p \vert ^2}\Big) \IdenT +\BulkModuZeroP \DefoGradT^{-T} \PLongT \DefoGradT^{-1} \Bigg\rbrace \cdot \vert p \vert ^2 (\Vect{\RelaRandL})_p = (\Vect{\RandForcL})_p ,\end{aligned}$$ where $\MetrTens=\DefoGrad^{T} \DefoGrad$ is the metric tensor, and $$\begin{aligned} D_p &\equiv& \NonLocaKernZero_{0}-\NonLocaKernZero_{p}, \\ (\RandForcL)_{a,p_1} &\equiv& -2\Contraction \Big( i \DefoGrad_{ai} \frac{\partial}{\partial p_{1,i}} \NonLocaKernOne_{p_1,0} -i \DefoGrad_{ai} \frac{\partial}{\partial p_{2,i}} \NonLocaKernOne_{p_1,p_2} \Big) .\end{aligned}$$ To solve Eq. (\[EQ:DefoRelaOOne\]), it is useful to define the $\DefoGradT$-deformed versions of the projection operators, i.e., $$\begin{aligned} \PLongL &\equiv& \frac{1}{\textrm{Tr}(\PLongT \MetrTens^{-1})} \DefoGradT^{-\textrm{T}} \PLongT \DefoGradT^{-1} , \\ \PPerpL &\equiv& \IdenT - \PLongL .\end{aligned}$$ It is straightforward to verify that they obey $$\begin{aligned} (\PLongL)^2 = \PLongL, \quad (\PPerpL)^2 = \PPerpL, \quad \PLongL \cdot \PPerpL =0 .\end{aligned}$$ By using these projection operators we can write Eq. (\[EQ:DefoRelaOOne\]) as $$\begin{aligned} \Big\lbrace \frac{2 D_p}{\vert p \vert ^2} \PPerpL +\Big(\frac{2 D_p}{\vert p \vert ^2}+\BulkModuZeroP \trOne \Big) \PLongL \Big\rbrace\! 
\cdot \vert p \vert ^2 (\Vect{\RelaRandL})_p = (\Vect{\RandForcL})_p ,\end{aligned}$$ where we have defined $$\begin{aligned} \trOne \equiv \textrm{Tr}(\PLongT\MetrTens^{-1}) .\end{aligned}$$ Thus, it is straightforward to arrive at the solution: $$\begin{aligned} (\Vect{\RelaRandL})_p = \Bigg\lbrace \frac{\PPerpL}{2 D_p} + \frac{\PLongL}{\BulkModuZeroP \trOne \vert p \vert ^2+2 D_p} \Bigg\rbrace \cdot (\Vect{\RandForcL})_p .\end{aligned}$$ Correlation functions of the elastic parameters in the equilibrium reference state ================================================================================== Correlation function of the non-local kernel in the equilibrium reference state {#APP:MM} ------------------------------------------------------------------------------- The nonlocal kernel in the equilibrium reference state $\tNonLocaKern$ is related to the nonlocal kernel in the state right after linking $\NonLocaKern$ via Eq. (\[EQ:tGtu\]); to leading-order in the small quantity $\RelaRand$ we have $$\begin{aligned} \tNonLocaKern_{\tp_1,\tp_2} \simeq \NonLocaKernZero_{\tp_1,\tp_2}+\NonLocaKernOne_{\tp_1,\tp_2} -i \big( \tp_1\cdot\vec{\RelaRand}_{(\tp_1+\tp_2)} \NonLocaKernZero_{\tp_2} +\tp_2\cdot\vec{\RelaRand}_{(\tp_1+\tp_2)} \NonLocaKernZero_{\tp_1} \big) .\end{aligned}$$ By using this relation, we can derive the correlation function of $\tNonLocaKern$ from the correlation function of $\NonLocaKern$ \[which is given in Eq. (\[EQ:KTwo\])\], and thus we arrive at the correlation function $$\begin{aligned} && \lda \tNLM_{p_1,q_1}\tNLM_{p_2,q_2} \rda_c \nonumber\\ &=& \delta_{p_1+p_2} \Big\lbrace -\frac{\PartNumb \LinkDens \LocaPart^2}{2\BoltCons T} \int_{\ISLL_1,\ISLL_2} \Big(\frac{1}{\ISLL_1}+\frac{1}{\ISLL_2}+\LinkScal^2\Big)^{-2} \Big( e^{-\frac{1}{2}\big(\frac{1}{\ISLL_1} +\frac{1}{\ISLL_2}+\LinkScal^2 \big)\vert q_1+q_2\vert^2} + e^{-\frac{1}{2}\big(\frac{1}{\ISLL_1} +\frac{1}{\ISLL_2}+\LinkScal^2 \big)\vert p_1-q_1+q_2\vert^2} \Big)/2 \nonumber\\ && \quad + \frac{\PartNumb}{(\BoltCons T)^2} e^{-\LinkDens \LocaPart} \sum_{m=2}^{\infty} \frac{(\LinkDens \LocaPart)^m}{m!} \frac{m(m-1)}{2}\int_{\ISLL_1,\ldots,\ISLL_m}\Big( \frac{\tISLL_1 \tISLL_2}{\tISLL_1+\cdots+\tISLL_m}\Big)^2 \Big( e^{-\frac{(\tISLL_1+\tISLL_2)\vert q_1+q_2\vert^2}{2\tISLL_1\tISLL_2}} + e^{-\frac{(\tISLL_1+\tISLL_2)\vert p_1-q_1+q_2\vert^2}{2\tISLL_1\tISLL_2}} \Big)/2 \nonumber\\ && \quad + \frac{\PartNumb}{(\BoltCons T)^2} e^{-\LinkDens \LocaPart} \sum_{m=3}^{\infty} \frac{(\LinkDens \LocaPart)^m}{m!} m(m-1)(m-2)\int_{\ISLL_1,\ldots,\ISLL_m} \frac{\tISLL_1 \tISLL_2^2 \tISLL_3}{(\tISLL_1+\cdots+\tISLL_m)^2} \Big( e^{-\frac{1}{2} \big(\frac{\vert p_1-q_1\vert^2}{\tISLL_1} +\frac{\vert p_1-q_1+q_2\vert^2}{\tISLL_2} +\frac{\vert q_2 \vert^2}{\tISLL_3}\big)} \nonumber\\ && \quad\quad + e^{-\frac{1}{2} \big(\frac{\vert q_1\vert^2}{\tISLL_1} +\frac{\vert q_1+q_2\vert^2}{\tISLL_2} +\frac{\vert q_2 \vert^2}{\tISLL_3}\big)} + e^{-\frac{1}{2} \big(\frac{\vert p_1-q_1\vert^2}{\tISLL_1} +\frac{\vert q_1+q_2\vert^2}{\tISLL_2} +\frac{\vert p_1+q_2 \vert^2}{\tISLL_3}\big)} +e^{-\frac{1}{2} \big(\frac{\vert q_1\vert^2}{\tISLL_1} +\frac{\vert p_1-q_1+q_2\vert^2}{\tISLL_2} +\frac{\vert p_1+q_2 \vert^2}{\tISLL_3}\big)} \Big)/4 \nonumber\\ && \quad + \frac{\PartNumb}{(\BoltCons T)^2} e^{-\LinkDens \LocaPart} \sum_{m=4}^{\infty} \frac{(\LinkDens \LocaPart)^m}{m!} \frac{m(m-1)(m-2)(m-3)}{4}\nonumber\\ &&\quad\times \int_{\ISLL_1,\ldots,\ISLL_m} \frac{\tISLL_1 \tISLL_2 \tISLL_3 \tISLL_4}{(\tISLL_1+\cdots+\tISLL_m)^2} e^{-\frac{1}{2} 
\big(\frac{\vert p_1-q_1\vert^2}{\tISLL_1} +\frac{\vert q_1\vert^2}{\tISLL_2} +\frac{\vert p_1+q_2 \vert^2}{\tISLL_3} +\frac{\vert q_2 \vert^2}{\tISLL_4}\big)}\Big\rbrace \nonumber\\ && +2i \delta_{p_1+p_2} q_1 \cdot \PPerpT_1 \cdot q_2 \Big\lbrace -\frac{\PartNumb \LinkDens \LocaPart^2}{2\BoltCons T} \int_{\ISLL_1,\ISLL_2} \Big(\frac{1}{\ISLL_1}+\frac{1}{\ISLL_2}+\LinkScal^2\Big)^{-1} \nonumber\\ && \quad\quad \times \Big\lbrack t_{p_1,q_1} \big( e^{-\frac{1}{2}\big(\frac{1}{\ISLL_1} +\frac{1}{\ISLL_2}+\LinkScal^2 \big)\vert q_2\vert^2} + e^{-\frac{1}{2}\big(\frac{1}{\ISLL_1} +\frac{1}{\ISLL_2}+\LinkScal^2 \big)\vert p_1+q_2\vert^2} \big)/2 \nonumber\\ &&\quad -t_{-p_1,q_2} \big( e^{-\frac{1}{2}\big(\frac{1}{\ISLL_1} +\frac{1}{\ISLL_2}+\LinkScal^2 \big)\vert q_1\vert^2} + e^{-\frac{1}{2}\big(\frac{1}{\ISLL_1} +\frac{1}{\ISLL_2}+\LinkScal^2 \big)\vert -p_1+q_1\vert^2} \big)/2 \Big\rbrack \nonumber\\ && \quad + \frac{\PartNumb}{(\BoltCons T)^2} e^{-\LinkDens \LocaPart} \sum_{m=2}^{\infty} \frac{(\LinkDens \LocaPart)^m}{m!} \frac{m(m-1)}{2}\int_{\ISLL_1,\ldots,\ISLL_m} \frac{\tISLL_1 \tISLL_2(\tISLL_1 + \tISLL_2)}{(\tISLL_1+\cdots+\tISLL_m)^2} \nonumber\\ && \quad\quad\times \Big\lbrack t_{p_1,q_1} \big(e^{-\frac{(\tISLL_1+\tISLL_2)\vert q_2\vert^2}{2\tISLL_1\tISLL_2}} + e^{-\frac{(\tISLL_1+\tISLL_2)\vert p_1+q_2\vert^2}{2\tISLL_1\tISLL_2}} \big)/2 - t_{-p_1,q_2} \big(e^{-\frac{(\tISLL_1+\tISLL_2)\vert q_1\vert^2}{2\tISLL_1\tISLL_2}} + e^{-\frac{(\tISLL_1+\tISLL_2)\vert -p_1+q_1\vert^2}{2\tISLL_1\tISLL_2}} \big)/2 \Big\rbrack \nonumber\\ && \quad + \frac{\PartNumb}{(\BoltCons T)^2} e^{-\LinkDens \LocaPart} \sum_{m=3}^{\infty} \frac{(\LinkDens \LocaPart)^m}{m!} m(m-1)(m-2)\int_{\ISLL_1,\ldots,\ISLL_m} \frac{\tISLL_1 \tISLL_2^2 \tISLL_3}{(\tISLL_1+\cdots+\tISLL_m)^2} \nonumber\\ && \quad\quad\times \Big\lbrack t_{p_1,q_1} \big( - e^{-\frac{1}{2} \big(\frac{1}{\tISLL_2} +\frac{1}{\tISLL_3}\vert q_2\vert^2 \big)} + e^{-\frac{1}{2} \big(\frac{1}{\tISLL_2} +\frac{1}{\tISLL_3}\vert p_1+q_2\vert^2 \big)}\big)/4 \nonumber\\ &&\quad - t_{-p_1,q_1} \big( - e^{-\frac{1}{2} \big(\frac{1}{\tISLL_2} +\frac{1}{\tISLL_3}\vert q_2\vert^2 \big)} + e^{-\frac{1}{2} \big(\frac{1}{\tISLL_2} +\frac{1}{\tISLL_3}\vert - p_1+q_2\vert^2 \big)}\big)/4 \Big\rbrack \Big\rbrace ,\end{aligned}$$ where we have used the notation $\NLM(z,y)\equiv\tNonLocaKern(z_1,z_2)$ defined in Appendix \[APP:REHT\]. Disorder correlators of the elastic parameters in the local form {#APP:CFLL} ---------------------------------------------------------------- In this appendix we calculate the disorder correlators of the quenched random elastic parameters in the local form of the elastic energy for deformations relative to the relaxed state. First, we calculate the disorder correlator of the residual stress $\lda \Stre\Stre \rda_c$. The residual stress $\StreT$ in the relaxed state is related to the nonlocal kernel $\NonLocaKern$ via Eq. (\[EQ:Stress\]). Thus, the correlator of the residual stress can be expressed as $$\begin{aligned} \label{EQ:SSNN} && \lda \Stre_{ij,p_1}\Stre_{kl,p_2} \rda_c \nonumber\\ \!\! &=& \!\!\frac{\partial}{\partial q_{1,i}}\Big\vert_{q_1=0} \frac{\partial}{\partial q_{2,j}}\Big\vert_{q_2=0} \lda N_{j,p_1,q_1} N_{l,p_2,q_2} \rda_c \nonumber\\ &&\!\! - \frac{2}{\vert p_1 \vert^2} \big(p_{1,k}\PPerpT_{bl}(p_1)+p_{1,l}\PPerpT_{bk}(p_1)+p_{1,b}\PPerpT_{kl}(p_1)\big) \frac{\partial}{\partial q_{1,i}}\Big\vert_{q_1=0} \lda N_{j,p_1,q_1} N_{l,p_2,0} \rda_c \nonumber\\ &&\!\! 
+ \frac{2}{\vert p_1 \vert^2} \big(p_{1,i}\PPerpT_{aj}(p_1)+p_{1,j}\PPerpT_{ai}(p_1)+p_{1,a}\PPerpT_{ij}(p_1)\big) \frac{\partial}{\partial q_{2,k}}\Big\vert_{q_2=0} \lda N_{j,p_1,0} N_{l,p_2,q_2} \rda_c \nonumber\\ &&\!\! - \frac{2}{(\vert p_1 \vert^2)^2} \big(p_{1,i}\PPerpT_{aj}(p_1)\!+p_{1,j}\PPerpT_{ai}(p_1)\!+p_{1,a}\PPerpT_{ij}(p_1)\big) \! \big(p_{1,k}\PPerpT_{bl}(p_1)\!+p_{1,l}\PPerpT_{bk}(p_1)\!+p_{1,b}\PPerpT_{kl}(p_1)\big) \lda N_{j,p_1,0} N_{l,p_2,0} \rda_c , \,\end{aligned}$$ where we have defined $N_{j,p,q} \equiv {\partial \NLM_{p,q}}/{\partial q_j}$, and the notation $\NLM(z,y)\equiv\tNonLocaKern(z_1,z_2)$ is defined in Appendix \[APP:REHT\]. We then insert the disorder correlator $\lda \NLM_{p_1,q_1}\NLM_{p_2,q_2} \rda _c$, given in Eq. (\[EQ:KTwo\]) (in the form of $\lda\NonLocaKern\NonLocaKern\rda$), into Eq. (\[EQ:SSNN\]). After a lengthy calculation, and making use of the identity $$\begin{aligned} && m\int_{\ISLL_1,\ldots,\tISLL_m}\frac{\tISLL_1^2}{(\tISLL_1+\cdots+\tISLL_m)^2} \nonumber\\ && + m(m-1)\int_{\ISLL_1,\ldots,\tISLL_m} \frac{\tISLL_1\,\tISLL_2}{(\tISLL_1+\cdots+\tISLL_m)^2}=1,\end{aligned}$$ we arrive at the correlator $$\begin{aligned} && \lda \Stre_{ij,p_1}\Stre_{kl,p_2}\rda_c \nonumber\\ &=& \delta_{p_1+p_2} \frac{\PartNumb \UnivPara}{(\BoltCons T)^2} (2\PPerpT_{ij}\PPerpT_{kl}+\PPerpT_{il}\PPerpT_{jk}+\PPerpT_{ik}\PPerpT_{jl}),\end{aligned}$$ where $\UnivPara\equiv -\frac{1}{2}{\LinkDens \LocaPart^2}+\LinkDens \LocaPart +e^{-\LinkDens \LocaPart}-1$ is given in Eq. (\[EQ:UnivParaDef\]). By following a similar scheme, we have also calculated the other disorder correlators and cross-correlators of the quenched random elastic parameters in the local form of elasticity of the relaxed state. Hence, we arrive at the correlators of the shear modulus and bulk modulus, which read $$\begin{aligned} \lda \SheaModu_{p_1}\SheaModu_{p_2} \rda_c =& \Corr \,\delta_{p_1+p_2} \PartNumb(\BoltCons T)^2 , \\ \lda \BulkModu_{p_1}\BulkModu_{p_2} \rda_c =& 4 \Corr\, \delta_{p_1+p_2} \PartNumb(\BoltCons T)^2 ,\end{aligned}$$ in which the dimensionless scale factor $\Corr$ is given by $$\begin{aligned} \label{EQ:AppDefiCorr} \Corr \equiv -\frac{3}{2}\LinkDens \LocaPart^2 + e^{\LinkDens \LocaPart}-1+\LinkDens \LocaPart+(\LinkDens \LocaPart)^2 .\end{aligned}$$ We also arrive at the cross-correlators, which are given by $$\begin{aligned} \lda \Stre_{ij,p_1} \SheaModu_{p_2}\rda_c &=& -2 \PartNumb (\BoltCons T)^2 \UnivPara\, \delta_{p_1+p_2}\PPerpT_{ij}(p_1), \\ \lda \Stre_{ij,p_1} \BulkModu_{p_2} \rda_c &=& 4 \PartNumb (\BoltCons T)^2 \UnivPara\, \delta_{p_1+p_2} \PPerpT_{ij}(p_1), \\ \lda \SheaModu_{p_1} \BulkModu_{p_2} \rda_c &=& -2 \PartNumb(\BoltCons T)^2\Corr\,\delta_{p_1+p_2};\end{aligned}$$ the scale factor $\UnivPara$ is defined in Eq. (\[EQ:UnivParaDef\]). P. Erd[ő]{}s, A. R[é]{}nyi, Magyar Tud. Akad. Mat. Kut. Int.
K[ö]{}zl. [**5**]{}, 17 (1960). , , , , ****, (). [^1]: This realization of rigidity should not be confused with *rigidity percolation*, which captures the rigidity of athermal (i.e., mechanical, rather than thermodynamic) networks [@Thorpe1999]. In the case of soft random solids, shear rigidity results from the entropy of thermal fluctuations of the positions of the constituent particles, and is proportional to the temperature [^2]: [[This handshaking is the analog of that between the Born-Huang expansion for crystals and the continuum theory of elasticity, or that between the Newtonian equations of motion for particles and the Navier-Stokes equations of hydrodynamics. ]{}]{} [^3]: [[In this preparation state, the temperature and the strength of the excluded-volume interaction can differ from those characterizing the measurement ensemble, and this has been discussed in Ref. [@Xing2004]]{}]{} [^4]: For (unswelled) rubbery materials this is an appropriate assumption. [^5]: Strictly speaking, the localized infinite cluster can undergo global translations or rotations from replica to replica, and this corresponds to a distinct equilibrium states of the field theory, which are connected by relative translations and/or rotations of the replicas. [^6]: This relation is correct to linear order in $\GSDF$ or linear order in $n$. [^7]: In earlier work (see Refs. [@Goldbart2004; @Mukhopadhyay2004]), the form of the Goldstone fluctuation was taken to be $ \VOPGS(\hat{x}) = \LocaPart \int d\ISLL \DistISLL(\ISLL) \big(\ISLL/2\pi\big)^{(1+n)\Dime/2} e^{-\frac{\ISLL}{2} \vert \hat{x}_{\tau} - \hat{\GSDF}_{\tau}(x_{\lambda})\vert^2} - \LocaPart/\big(\Volu_0\Volu^n\big)$. This form (which we shall call the old Goldstone deformation) differs from the Goldstone deformation that we are currently using (the new Goldstone deformation) in two ways: (i) In the old Goldstone deformation, the deformation field was taken to be a function of $x_{\lambda}$ \[and, as a result, $z$ can be integrated out, as in the stationary-point form (\[EQ:OPInv\])\]. However, in the new Goldstone deformation, the deformation is instead taken to be function of $z$. The new Goldstone deformation is more physical, in the sense that the deformation field should be defined in terms of the the *mean* positions $z$ during thermal fluctuations, not the *instantaneous* positions of the particles $\hat{x}$. In the new Goldstone deformation, it is clear that it is the mean positions that are deformed, $\hat{z} \to \hat{\DefoPosi}(z)$, but the shape of the thermal cloud, which corresponds to a massive mode, is left untouched. (The point was already made in Ref. [@Ulrich2006], but we make it again, here, for the sake of completeness.)(ii) The deformation field in the old Goldstone deformation lies in the $\hat{x}_{\tau}$ direction, whereas the new Goldstone deformation has a deformation field in replicas $1$ through $n$. This issue has already been discussed in Sec. \[SEC:GSFT\], as has been the point that these two representations are related by a change of variables; see Eq. (\[EQ:VOPGSRTWO\]). The new Goldstone structure is more physical, in the sense that the preparation state (replica $0$) cannot be deformed once the sample has been made. [^8]: To be consistent with the RLPM, we have used finite-volume versions of the Fourier transform and Kronecker delta function in momentum space; we shall take the continuum limit later on, in the final results in Section \[SEC:Local\]. Strictly speaking, the differentiations in Eq. 
(\[EQ:Stress\]) should be understood as the corresponding difference quotients. [^9]: The dependence on the large-momentum cutoff is a result of keeping only terms of leading order at small momentum $p$ in the calculation of the disorder correlators given in Table \[TABLE:CorrTable\]. This enables us to extract the small-momentum behavior in momentum-space, which corresponds to the large-distance behavior in real-space. [^10]: A similar expansion, but to higher order in $\RelaRand$, is performed in Eq. (\[EQ:EXPDet\]), in terms of the strain tensor $\StraTensT$.
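The footnote above points back to the determinant expansion of Eq. (\[EQ:EXPDet\]). That expansion is easy to verify numerically; the short Python sketch below (ours, not part of the original derivation) compares the exact $\textrm{det}(\IdenT+2\tStraTensT)^{1/2}$ against the quadratic-order truncation for small random symmetric strain tensors; the error should shrink as the cube of the strain.

```python
import numpy as np

def det_exact(eps):
    """Exact det(1 + 2*eps)**0.5 for a symmetric strain tensor eps."""
    return np.linalg.det(np.eye(eps.shape[0]) + 2.0 * eps) ** 0.5

def det_expanded(eps):
    """Quadratic truncation of Eq. (EQ:EXPDet): 1 + Tr e - Tr e^2 + (Tr e)^2 / 2."""
    tr, tr2 = np.trace(eps), np.trace(eps @ eps)
    return 1.0 + tr - tr2 + 0.5 * tr**2

rng = np.random.default_rng(0)
for scale in (1e-1, 1e-2, 1e-3):
    a = scale * rng.standard_normal((3, 3))
    eps = 0.5 * (a + a.T)                       # small symmetric strain
    err = abs(det_exact(eps) - det_expanded(eps))
    print(f"strain ~ {scale:.0e}:  |exact - truncated| = {err:.2e}")
# The error drops by roughly a factor of 1000 per factor of 10 in strain,
# i.e. it is O(strain^3), as expected for a second-order expansion.
```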
1
--- abstract: 'We present resolved *Herschel* images of a circumbinary debris disk in the 99 Herculis system. The primary is a late F-type star. The binary orbit is well characterised and we conclude that the disk is misaligned with the binary plane. Two different models can explain the observed structure. The first model is a ring of polar orbits that move in a plane perpendicular to the binary pericenter direction. We favour this interpretation because it includes the effect of secular perturbations and the disk can survive for Gyr timescales. The second model is a misaligned ring. Because there is an ambiguity in the orientation of the ring, which could be reflected in the sky plane, this ring either has near-polar orbits similar to the first model, or has a 30 degree misalignment. The misaligned ring, interpreted as the result of a recent collision, is shown to be implausible from constraints on the collisional and dynamical evolution. Because disk+star systems with separations similar to 99 Herculis should form coplanar, possible formation scenarios involve either a close stellar encounter or binary exchange in the presence of circumstellar and/or circumbinary disks. Discovery and characterisation of systems like 99 Herculis will help understand processes that result in planetary system misalignment around both single and multiple stars.' author: - | G. M. Kennedy[^1]$^1$, M. C. Wyatt$^1$, B. Sibthorpe$^2$, G. Duchêne$^{3,4}$, P. Kalas$^4$, B. C. Matthews$^{5,6}$, J. S. Greaves$^7$, K. Y. L. Su$^8$, M. P. Fitzgerald$^{9,10}$\ $^1$ Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK\ $^2$ UK Astronomy Technology Center, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK\ $^3$ Department of Astronomy, University of California, B-20 Hearst Field Annex, Berkeley, CA 94720-3411, USA\ $^4$ Laboratoire d’Astrophysique, Observatoire de Grenoble, Université J. Fourier, CNRS, France\ $^5$ Herzberg Institute of Astrophysics, National Research Council Canada, 5071 West Saanich Road., Victoria, BC, Canada, V9E 2E7, Canada\ $^6$ University of Victoria, Finnerty Road, Victoria, BC, V8W 3P6, Canada\ $^7$ School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews, Fife KY16 9SS, UK\ $^8$ Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA\ $^9$ Institute of Geophysics and Planetary Physics, Lawrence Livermore National Laboratory, L-413, 7000 East Avenue, Livermore, CA 94550, USA\ $^{10}$ Department of Physics and Astronomy, UCLA, Los Angeles, CA 90095-1547, USA title: '99 Herculis: Host to a Circumbinary Polar-ring Debris Disk' --- circumstellar matter — stars: individual: 99 Herculis, HD 165908, HIP 88745, GJ704AB Introduction {#s:intro} ============ The *Herschel* Key Program DEBRIS (Dust Emission via a Bias free Reconnaissance in the Infrared/Submillimeter) has observed a large sample of nearby stars to discover and characterise extrasolar analogues to the Solar System’s asteroid and Kuiper belts, collectively known as “debris disks.” The 3.5m *Herschel* mirror diameter provides 6-7” resolution at 70-100$\mu$m , and as a consequence our survey has resolved many disks around stars in the Solar neighbourhood for the first time .[^2] Here we present resolved images of the 99 Herculis circumbinary disk. This system is particularly interesting because unlike most debris disk+binary systems, the binary orbit is well characterised. 
The combination of a known orbit and resolved disk means we can compare their (different) inclinations and consider circumbinary particle dynamics and formation scenarios. This system is a first step toward building on the binary debris disk study of @2007ApJ...658.1289T. Their *Spitzer* study found that debris disks are as common in binary systems as in single systems, but tend not to have separations in the 3-30AU range. However, only some of their systems had detections at multiple wavelengths to constrain the disk location and none were resolved, making the true dust location uncertain. Resolved systems such as 99 Her remove this ambiguity, and provide crucial information on the disk location, stability and dynamics. This paper is laid out as follows. We first consider the stellar and orbital properties of the 99 Her system, including the possibility of a third component. Then we consider the *Herschel* image data and several different models that can explain it. Finally, we discuss the implications of these models for the formation of the system. 99 Herculis {#s:stprop} =========== The binary 99 Herculis (HD 165908, HIP 88745, GJ 704AB, ADS 11077) contains the 37$^{\rm th}$ closest F star primary within the volume limited Unbiased Nearby Stars sample [@2010MNRAS.403.1089P]. The Catalogue of Components of Doubles and Multiple systems [CCDM J18071+3034, @2002yCat.1274....0D] lists three components, but using Hipparcos proper motions @2010MNRAS.403.1089P find that the 93” distant C component is not bound to the system. The binary pair has been known since 1859, and consists of an F7V primary orbited by a K4V secondary. The primary is known to be metal poor with \[Fe/H\] $\approx -0.4$ [e.g. @1996yCat..33140191G; @2000MNRAS.316..514A; @2007PASJ...59..335T] and has an age consistent with the main-sequence . Binary Configuration {#ss:binary} -------------------- Parameter Symbol (unit) Value Uncertainty ---------------------------- ------------------------- --------- ------------- Semi-major axis a (”) 1.06 $0.02$ Eccentricity e 0.766 $0.004$ Inclination i ($^\circ$) 39 $2$ Ascending node $\Omega$ ($^\circ$) 41 $2$ Longitude of pericenter $\omega$ ($^\circ$) 116 $2$ Date of pericenter passage T (yr) 1997.62 $0.05$ Period P (yr) 56.3 $0.1$ Total mass $M_{\rm tot} (M_\odot)$ 1.4 $0.1$ : 99 Her orbital elements, system mass and 1$\sigma$ uncertainties. The ascending node $\Omega$ is measured anti-clockwise from North. The longitude of pericenter is measured anti-clockwise from the ascending node, and projected onto the sky plane has a position angle of 163$^\circ$ (i.e. is slightly different to 41+116 because the orbit is inclined).[]{data-label="tab:elem"} ![99 Her binary orbit as seen on the sky, with the line of nodes and pericenter indicated. North is up and East is left. The stellar orbits are shown over one (anti-clockwise) orbital period with black dots. Grey dots (primary is the larger grey dot) show the positions at the PACS observation epoch. Black dot sizes are scaled in an arbitrary way such that larger dots are closer to Earth. The arrows indicate the direction of motion and the scale bar indicates the binary semi-major axis of 1.06” (16.5AU).[]{data-label="fig:sys"}](systop.eps){width="50.00000%"} To interpret the *Herschel* observations requires an understanding of the binary configuration, which we address first. 
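As an illustration of how the elements in Table \[tab:elem\] map onto what is seen in Figure \[fig:sys\], the Python sketch below (our own check, not the code used to fit the orbit) solves Kepler's equation and projects the relative orbit onto the sky. With the tabulated elements, the 15.64 pc distance quoted below, and the late-April 2010 PACS epoch, it recovers a position angle near 314$^\circ$, a separation close to the observed 1.15", and, via Kepler's third law, a total mass near 1.4 $M_\odot$, consistent with the numbers given in the text.

```python
import numpy as np

# Elements from Table [tab:elem] (angles in degrees, a in arcsec, epochs in years)
a, e, inc, Omega, omega, T0, P = 1.06, 0.766, 39.0, 41.0, 116.0, 1997.62, 56.3
dist_pc = 15.64                                   # Hipparcos distance used in the text

def solve_kepler(M, ecc, tol=1e-12):
    """Solve M = E - ecc*sin(E) for the eccentric anomaly E (Newton iteration)."""
    E = np.pi
    for _ in range(100):
        dE = (E - ecc * np.sin(E) - M) / (1.0 - ecc * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def sky_position(epoch):
    """PA (deg E of N), separation (arcsec) and true separation (arcsec) of B wrt A."""
    M = 2.0 * np.pi * (((epoch - T0) / P) % 1.0)
    E = solve_kepler(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    r = a * (1.0 - e * np.cos(E))                 # true (unprojected) separation
    i, Om, w = np.radians([inc, Omega, omega])
    u = nu + w                                    # argument of latitude
    north = r * (np.cos(u) * np.cos(Om) - np.sin(u) * np.sin(Om) * np.cos(i))
    east = r * (np.cos(u) * np.sin(Om) + np.sin(u) * np.cos(Om) * np.cos(i))
    return np.degrees(np.arctan2(east, north)) % 360.0, np.hypot(east, north), r

pa, rho, r = sky_position(2010.33)                # late April 2010 (PACS epoch)
m_tot = (a * dist_pc) ** 3 / P**2                 # Kepler's third law, solar masses
print(f"PA = {pa:.0f} deg, rho = {rho:.2f} arcsec, "
      f"true separation = {r * dist_pc:.1f} AU, M_tot = {m_tot:.2f} Msun")
```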
Typically, the important orbital elements in fitting binary orbits are the semi-major axis, eccentricity, and period, which yield physical characteristics of the system (if the distance is known). Regular observations of 99 Her date back to 1859 and the binary has completed nearly three revolutions since being discovered. Additional observations since the previous orbital derivation , have allowed us to derive an updated orbit. Aside from a change of 180$^\circ$ in the ascending node [based on spectroscopic data @2006ApJS..162..207A], the orbital parameters have changed little; the main purpose of re-deriving the orbit is to quantify the uncertainties. The orbit was derived by fitting position angles (PAs) and separations ($\rho$) from the Washington Double Star catalogue [@2011yCat....102026M].[^3] We included three additional observations; a *Hubble Space Telescope* (HST) *Imaging Spectrograph* (STIS) acquisition image [epoch 2000.84, $PA=264 \pm 2^\circ$, $\rho=0.54 \pm 0.02$”, @2004ApJ...606..306B], an adaptive optics image taken using the Lick Observatory Shane 3m telescope with the IRCAL near-IR camera as an ongoing search for faint companions of stars in the UNS sample (epoch 2009.41, $PA=309 \pm 2^\circ$, $\rho=1.12 \pm 0.02$”), and a Keck II NIRC2 L’ speckle image taken to look for a third companion (see §\[s:third\], epoch 2011.57, $PA=317 \pm 1^\circ$, $\rho=1.20 \pm 0.014$”). For visual data we set uncertainties of 7$^\circ$ to PAs and 0.5” to separations, for *Hipparcos* data we used 5$^\circ$ and 0.1”, and for speckle observations without quoted uncertainties we used 2$^\circ$ and 0.04”. The resulting orbital elements, shown in Table \[tab:elem\], vary only slightly from those derived by .[^4] The best fit yields $\chi^2 = 190$ with 399 degrees of freedom. The fit is therefore reasonable, and most likely the $\chi^2$ per degrees of freedom is less than unity because the uncertainties assigned to the visual data are too conservative. If anything, the uncertainties quoted in Table \[tab:elem\] are therefore overestimated. However, visual data can have unknown systematic uncertainties due to individual observers and their equipment so we do not modify them [@2001AJ....122.3472H]. These data also allow us to derive a total system mass of 1.4$M_\odot$, where we have used a distance of 15.64pc [@2008yCat.1311....0F]. While the total mass is well constrained by visual observations, the mass ratio must be derived from either the differential luminosity of each star or radial velocities. We use the spectroscopic mass function derived by @2006ApJS..162..207A, yielding a mass ratio of 0.49, which has an uncertainty of about 10%. The mass ratio from the differential luminosity is 0.58, with a much larger (20%) uncertainty . Using the spectroscopic result, the primary (GJ 704A) has a mass of 0.94$M_\odot$, while the secondary (GJ 704B) is 0.46$M_\odot$. The system configuration is shown in Figure \[fig:sys\] and is inclined such that the primary is currently closer to Earth than the secondary. The position of the B component relative to A on the date it was observed by *Herschel* in late April 2010 was PA=314$^\circ$ at an observed separation of 1.15” (22.6AU when deprojected), indicated by grey circles in the Figure. A Third Component? {#s:third} ------------------ While the STIS images clearly resolve the binary, there is a possible third component with $PA \approx 284^\circ$ and $\rho \approx 0.27"$ that is about 2.4 times as faint as the B component. 
@2008AN....329...54S also report a third component (epoch 2005.8) at $PA \approx 50^\circ$ and $\rho \approx 0.228"$ (no magnitude is given). However, while they detected the secondary again in mid-2007, they do not report any detection of the putative tertiary [@2010AN....331..286S]. The detected positions are shown as star symbols in sky coordinates in Figure \[fig:pm\], which shows the motion of the 99 Her system. The system proper motion is $\mu_\alpha \cos \delta = -110.32$mas yr$^{-1}$, $\mu_\delta = 110.08$mas yr$^{-1}$ [@2007ASSL..350.....V], and accounts for the motion of the primary assuming the orbit derived in , which is very similar to ours. The small proper motion uncertainty means STIS and @2008AN....329...54S cannot have seen the same object if it is fixed on the sky. There is no clear sign of a third component in the residuals from fitting the orbit of the secondary. ![Motion of the 99 Her system (filled dots) in sky coordinates at three epochs. The epochs including the putative third component are enclosed in boxes. The arrow shows the direction of the system center of mass movement and the distance travelled in 5 years, and the grey lines show the path traced out by each star. Star symbols show the position of the third object observed in the STIS data in 2000 (dashed box) and by @2008AN....329...54S in 2005 (dotted box).[]{data-label="fig:pm"}](pm.eps){width="50.00000%"} ![Keck/NIRC2 adaptive optics image of 99 Her at 3.8$\mu$m, cropped to about 1.5” around the primary. North is up and East is left. The saturated A component is at the center of the frame.[]{data-label="fig:keck"}](her99b_fig_v2.eps){width="45.00000%"} To try and resolve this issue we obtained an adaptive optics image of 99 Her at L’ (3.8 $\mu$m) using the NIRC2 camera at Keck II on July 27, 2011, shown in Figure \[fig:keck\]. We adopted the narrow camera (10 mas/pixel) and used a five-point dither pattern with three images obtained at each position consisting of 25 coadds of 0.181 seconds integration. The cumulative integration time for the final co-registered and coadded image is 67.875 seconds. The core of the A component point-spread-function is highly saturated, which degrades the achievable astrometry. We estimate the position of 99 Her A by determining the geometric center of the first diffraction ring. The position of 99 Her B is taken from the centroid of the unsaturated core. The PA and separation are quoted above. There is no detection of the putative 99 Her C within 1.6” of the primary in the combined image if it is only a factor 2.4 fainter than the B component, because it would appear 20 times brighter than the brightest speckles. However, if it were closer to the primary than 0.2” it would currently be too close to detect. If the object was fixed on the sky near either the 2000 or 2005 locations, it would have been detected in the individual pointings of the five-point dither since each NIRC2 pointing has a field of view of $10''\times10''$. To be outside the field of view and still bound to the primary, the tertiary must have an apocenter larger than about 75AU (5”). An object in such an orbit would have a period of at least 200 years, so could not have been detected near the star in 2005 and be outside the NIRC2 field of view in 2011. The non-detections by @2010AN....331..286S and NIRC2 make the existence of the tertiary suspicious. 
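The fixed-background test mentioned above is easy to reproduce. The sketch below is our own simplified version of the argument behind Figure \[fig:pm\]: it uses only the quoted proper motion and the two reported positions of the candidate, and neglects the primary's few-tenths-of-an-arcsecond orbital wobble about the barycentre (which the figure does include). Propagating a stationary background source from the 2000.84 epoch to 2005.8 in the frame of the moving primary leaves it roughly 0.6" away from the reported 2005.8 detection, so the two detections cannot be the same object fixed on the sky.

```python
import numpy as np

# Proper motion of the system (mas/yr) and the two reported positions of the
# candidate "C" relative to the primary, as quoted in the text.
pm_east, pm_north = -110.32, 110.08          # mas/yr
t1, pa1, rho1 = 2000.84, 284.0, 0.27         # STIS
t2, pa2, rho2 = 2005.8, 50.0, 0.228          # Scardia et al.

def offsets(pa_deg, rho):
    """(East, North) offsets in arcsec for a given PA (deg E of N) and separation."""
    pa = np.radians(pa_deg)
    return rho * np.sin(pa), rho * np.cos(pa)

e1, n1 = offsets(pa1, rho1)                  # offset of C wrt A at the first epoch

# If C is a distant stationary background object, only A moves between the epochs
# (the primary's ~0.3" orbital wobble about the barycentre is neglected here).
dt = t2 - t1
e_pred = e1 - pm_east * dt / 1000.0
n_pred = n1 - pm_north * dt / 1000.0
rho_pred = np.hypot(e_pred, n_pred)
pa_pred = np.degrees(np.arctan2(e_pred, n_pred)) % 360.0

e2, n2 = offsets(pa2, rho2)
mismatch = np.hypot(e_pred - e2, n_pred - n2)
print(f"predicted 2005.8 position: PA = {pa_pred:.0f} deg, rho = {rho_pred:.2f} arcsec")
print(f"reported  2005.8 position: PA = {pa2:.0f} deg, rho = {rho2:.2f} arcsec")
print(f"offset between the two: {mismatch:.2f} arcsec -> not the same fixed object")
```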
It is implausible that the object was too close to, or behind the star both in 2007 and 2011, because at a semi-major axis of 0.23” (3.5AU) from the primary (similar to the projected separation) the orbital period is 7 years. Therefore, the object would be on opposite sides of the primary, and the two detections already rule out an edge-on orbit. Even assuming a circular orbit, such an object is unlikely to be dynamically stable, given the high eccentricity and small pericenter distance (4.1AU) of the known companion. A tertiary at this separation would be subject to both short term perturbations and possible close encounters. If the mutual inclination were high enough, it would also be subject to Kozai cycles due to the secondary that could result in a high eccentricity and further affect the orbital stability. While it may be worthy of further detection attempts, the existence of this component appears doubtful and we do not consider it further. IR and Sub-mm Data {#s:data} ================== Observations {#s:obs} ------------ ![image](pacs70.eps){width="33.00000%"} ![image](pacs100.eps){width="33.00000%"} ![image](pacsall160.eps){width="33.00000%"} *Herschel* Photodetector and Array Camera & Spectrometer data at 100 and 160$\mu$m were taken in April 2010 during routine DEBRIS observations. Subsequently, a Spectral & Photometric Imaging Receiver observation was triggered by the large PACS excess indicating the presence of a debris disk and a likely sub-mm detection. The disk was detected, but not resolved with SPIRE at 250 and 350$\mu$m. A 70$\mu$m PACS image was later obtained to better resolve the disk. Because every PACS observation includes the 160$\mu$m band, we have two images at this wavelength, which are combined to produce a single higher S/N image. All observations were taken in the standard scan-map modes for our survey; mini scan-maps for PACS data and small maps for SPIRE. Data were reduced using a near-standard pipeline with the Herschel Interactive Processing Environment [HIPE Version 7.0, @2010ASPC..434..139O]. We decrease the noise slightly by including some data taken as the telescope is accelerating and decelerating at the start and end of each scan leg. The high level of redundancy provided by PACS scan maps means that the pixel size used to generate maps can be smaller than the natural scale of 3.2”/pix at 70 and 100$\mu$m and 6.4”/pix at 160$\mu$m via an implementation of the “drizzle” method [@2002PASP..114..144F]. Our maps are generated at 1”/pix at 70 and 100$\mu$m and 2”/pix at 160$\mu$m. The benefit of better image sampling comes at the cost of correlated noise [@2002PASP..114..144F], which we discuss below. In addition to correlated noise, two characteristics of the PACS instrument combine to make interpretation of the data challenging. The PACS beam has significant power at large angular scales; about 10% of the energy lies beyond 1 arcminute and the beam extends to about 17 arcminutes (1000 arcsec). While this extent is not a problem in itself, it becomes problematic because PACS data are subject to fairly strong $1/f$ (low frequency) noise and must be high-pass filtered. The result is that a source will have a flux that is 10-20% too low because the “wings” of the source were filtered out. 
While this problem can be circumvented with aperture photometry using the appropriate aperture corrections derived from the full beam extent, the uncorrected apertures typically used for extended sources will result in underestimates of the source flux.[^5] Here, we correct the fluxes measured in apertures for 99 Her based on a comparison between PSF fitted and aperture corrected measurement of bright point sources in the DEBRIS survey with predictions from their stellar models [based on the calibration of @2008AJ....135.2245R]. These upward corrections are $16 \pm 5\%$, $19 \pm 5\%$, and $21 \pm 5\%$ at 70, 100, and 160$\mu$m respectively. These factors depend somewhat on the specifics of the data reduction, so are *not* universal. This method assumes that the correction for 99 Her is the same as for a point source, which is reasonable because the scale at which flux is lost due to filtering the large beam is much larger than the source extent. The corrected PACS measurement is consistent with MIPS 70$\mu$m, so we do not investigate this issue further. The beam extent and filtering is also important for resolved modelling because the stellar photospheric contribution to the image is decreased. Therefore, in generating a synthetic star+disk image to compare with a raw PACS observation, the stellar photospheric flux should be decreased by the appropriate factor noted above. Alternatively, the PACS image could be scaled up by the appropriate factor and the true photospheric flux used. Table \[tab:obs\] shows the measured star+disk flux density in each *Herschel* image. Uncertainties for PACS are derived empirically by measuring the standard deviation of the same sized apertures placed at random image locations with similar integration time to the center (i.e. regions with a similar noise level). The SPIRE observations of 99 Her are unresolved. The disk is detected with reasonable S/N at 250$\mu$m, marginally detected at 350$\mu$m, and not detected at 500$\mu$m. Fluxes are extracted with PSF fitting to minimise the contribution of background objects. Because all three bands are observed simultaneously (i.e. a single pointing), the PSF fitting implementation fits all three bands at once. A detection in at least one band means that all fluxes (or upper limits) are derived at the same sky position. Additional IR data exist for 99 Her, taken with the Multiband Imaging Photometer for *Spitzer* [MIPS, @2004ApJS..154...25R]. Only the star was detected at 24$\mu$m ($270.3 \pm 0.1$mJy), but this observation provides confirmation of the 99 Her stellar position in the PACS images relative to a background object 1.8 arcmin away to the SE ($PA=120^\circ$) that is visible at 24, 70, and 100$\mu$m. The presence of an excess at 70$\mu$m ($98 \pm 5$mJy compared to the photospheric value of 30mJy) was in fact reported by @2010ApJ...710L..26K. They did not note either the circumbinary nature or that the disk may be marginally resolved by MIPS at 70$\mu$m. Because our study focuses on the spatial structure, we use the higher resolution PACS data at 70$\mu$m, but include the MIPS data to model the SED. Basic image analysis {#s:basic} -------------------- Figure \[fig:pacs\] shows the *Herschel* PACS data. Compared to the beam size, the disk is clearly resolved at all three wavelengths. At 160$\mu$m the peak is offset about 5” East relative to both the 70 and 100$\mu$m images. However, the disk is still visible at 160$\mu$m as the lower contours match the 70 and 100$\mu$m images well. 
The 160$\mu$m peak is only 2-3$\sigma$ more significant than these contours. While such variations are possible due to noise, in this case the offset is the same in both 160$\mu$m images, so could be real. The fact that the peak extends over several pixels is not evidence that it is real, because the pixels in these maps are correlated (see below). If real, this component of the disk or background object cannot be very bright at SPIRE wavelengths because the measured fluxes appear consistent with a blackbody fit to the disk (see §\[s:sed\]). Based on an analysis of all DEBRIS maps (that have a constant depth), the chance of a 3$\sigma$ or brighter background source appearing within 10” of 99 Her at 160$\mu$m is about 5% (Thureau et al in prep). Given that the 160$\mu$m offset is only a 2-3$\sigma$ effect (i.e. could be a 2-3$\sigma$ background source superimposed on a smooth disk), the probability is actually higher because the number of background sources increases strongly with depth. These objects have typical temperatures of 20–40K , so could easily appear in only the 160$\mu$m image, particularly if the disk flux is decreasing at this wavelength. Band Flux (mJy) Uncertainty Method ---------- ------------ ------------- -------------- PACS70 93 10 15” aperture PACS100 87 10 15” aperture PACS160 80 15 17” aperture SPIRE250 44 6 PSF fit SPIRE350 22 7 PSF fit SPIRE500 4 8 PSF fit : *Herschel* photometry of 99 Her. The disk is not detected at 500$\mu$m and can be considered a 3$\sigma$ upper limit of 24mJy.[]{data-label="tab:obs"} We now analyse the PACS images using 2D Gaussian models to estimate the disk size, inclination, and position angle. A 2D Gaussian fits the star-subtracted PACS 100$\mu$m image fairly well, with major and minor full-width half-maxima of 17.7 and 12.8” at a position angle of $78^\circ$. Quadratically deconvolving from the 6.7” FWHM beam assuming a circular ring implies an inclination of $48^\circ$ from face-on and an estimated diameter of 250AU. Gaussian fitting to star-subtracted images at both 70 and 160$\mu$m yields similar results. As noted above, estimation of uncertainties in these parameters is non-trivial due to correlated noise, but made easier by the constant depth of our survey. By inserting the best fit Gaussian to the star-subtracted image of the 99 Her disk from the 100$\mu$m image into 438 other 100$\mu$m maps at high coverage positions offset from the intended target, we obtain a range of fit parameters for hundreds of different realisations of the same noise. This process yields an inclination of $45 \pm 5^\circ$ and PA of $75 \pm 8^\circ$. Repeating the process, but using the best fit Gaussian for the 70$\mu$m image yields an inclination of $44 \pm 6^\circ$ and PA of $68 \pm 9^\circ$. Though the inclination of the disk is similar to the binary, the position angle is significantly different from the binary line of nodes of $41 \pm 2 ^\circ$. This difference means that the disk and binary orbital planes are misaligned. As a check on the above approach, we can correct for the correlated noise directly. @2002PASP..114..144F show that for a map that has sufficiently many dithers (corresponding in our case to many timeline samples across each pixel), a noise “correction” factor of $r/\left(1-1/3r\right)$ can be derived, where $r$ is the ratio of natural to actual pixel scales and is 3.2 for our PACS maps. A correction factor of 3.6 for the measured pixel to pixel noise is therefore required when estimating the uncertainty on a fitted Gaussian. 
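The arithmetic behind the quoted disk size, inclination, and noise correction can be summarised in a few lines. The sketch below (ours; the 100$\mu$m fitted widths, beam size, distance, and pixel-scale ratio are taken from the text) performs the quadratic beam deconvolution and evaluates the @2002PASP..114..144F correction factor for $r=3.2$.

```python
import numpy as np

# Quadratic deconvolution of the 2D Gaussian fitted at 100 micron
fwhm_maj, fwhm_min = 17.7, 12.8      # fitted FWHM, arcsec
beam = 6.7                           # PACS 100 micron beam FWHM, arcsec
dist_pc = 15.64

maj = np.sqrt(fwhm_maj**2 - beam**2)         # deconvolved major axis, arcsec
mnr = np.sqrt(fwhm_min**2 - beam**2)         # deconvolved minor axis, arcsec
incl = np.degrees(np.arccos(mnr / maj))      # inclination, assuming a circular ring
diam = maj * dist_pc                         # diameter in AU

print(f"deconvolved FWHM: {maj:.1f} x {mnr:.1f} arcsec")
print(f"inclination ~ {incl:.0f} deg, ring diameter ~ {diam:.0f} AU")

# Noise correction factor for drizzled (correlated-noise) maps,
# r = natural / actual pixel scale (Fruchter & Hook 2002)
r = 3.2
corr = r / (1.0 - 1.0 / (3.0 * r))
print(f"correlated-noise correction factor ~ {corr:.1f}")
```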
Including this factor at 70$\mu$m and calculating the uncertainty by the standard $\Delta \chi^2$ method yields an inclination of $42 \pm 7^\circ$ and a PA of $68 \pm 9^\circ$. At 100$\mu$m the result is an inclination of $44 \pm 6^\circ$ and a PA of $76 \pm 8^\circ$. These results are therefore almost exactly the same as the empirical method used above and therefore lead to the same conclusion of misalignment. As will become apparent in §\[s:spatial\], there is reason to believe that the disk plane could be perpendicular to the binary pericenter direction. The projection of the binary pericenter direction on the sky plane has a PA of $163 \pm 2^\circ$, and a line perpendicular to this has a PA of $73 \pm 2^\circ$. Therefore, the observed disk position angle of about 72$^\circ$ is consistent with being at 90$^\circ$ to the binary pericenter direction. SED {#s:sed} === ![SED for the 99 Her system (both stars) showing the stellar and disk models (grey lines) and star+disk model (black line). The blackbody disk model is the solid grey line, and the physical grain model the dashed line. Photometric measurements are shown as black filled circles, and synthetic photometry of the stellar atmosphere as open circles ($U-B$, $B-V$, & $b-y$ colours, and $m1$ and $c1$ Stromgren indices were fitted but are not shown here). Black triangles mark upper limits from IRAS at 60 and 100$\mu$m.[]{data-label="fig:sed"}](F037AB.eps){width="50.00000%"} The combination of all photometry for 99 Her allows modelling of the spectral energy distribution (SED). The model is separated into two components; a stellar atmosphere and a disk. Due to being fairly bright ($V\sim5$mag) the system is saturated in the 2MASS catalogue. However, sufficient optical photometry for each individual star and the pair exists [@1993AJ....106..773H; @1997yCat.2215....0H; @1997ESASP1200.....P; @2006yCat.2168....0M], as well as infra-red measurements of the AB pair from AKARI and IRAS . These data were used to find the best fitting stellar models via $\chi^2$ minimisation. This method uses synthetic photometry over known bandpasses and has been validated against high S/N MIPS 24$\mu$m data for DEBRIS targets, showing that the photospheric fluxes are accurate to a few percent for AFG-type stars. The stellar luminosities ($L_{\star,A}=1.96L_\odot$, $L_{\star,B}=0.14L_\odot$) and IR fluxes of the individual components are consistent with the fit for the pair ($L_{\star,AB}=2.08L_\odot$). The fit for the AB pair is shown in Figure \[fig:sed\]. The spatial structure of the disk can be modelled with dust at a single radial distance of 120AU (i.e. thin compared to *Herschel’s* resolution, §\[s:spatial\]), so disk SED modelling can be decoupled from the resolved modelling once this radial distance is known. Because we have measurements of the disk emission at only five wavelengths, we cannot strongly constrain the grain properties and size distribution. We fit the data with a blackbody model, and then compare the data with several “realistic” grain models . In fitting a blackbody we account for inefficient grain emission at long wavelengths by including parameters $\lambda_0$ and $\beta$, where the blackbody is modified by a factor $\left( \lambda_0/\lambda \right)^\beta$ for wavelengths longer than $\lambda_0$. The best fitting model has a temperature of 49K and fractional luminosity $L_{\rm disk}/L_\star = 1.4 \times 10^{-5}$. 
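For completeness, the modified blackbody used in this fit is simply a Planck function suppressed by $(\lambda_0/\lambda)^\beta$ longward of $\lambda_0$. A minimal sketch of that functional form is given below (our own illustration, not the fitting code): it only evaluates the spectral shape at the *Herschel* bands for the 49 K best fit and the $\lambda_0=210\,\mu$m, $\beta=1$ values fixed just below, and the normalisation is arbitrary.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23     # SI constants

def planck_nu(wav_um, T):
    """Planck function B_nu(T) at wavelength wav_um (micron), SI units."""
    nu = C / (wav_um * 1e-6)
    return 2 * H * nu**3 / C**2 / (np.exp(H * nu / (KB * T)) - 1.0)

def modified_blackbody(wav_um, T, lam0=210.0, beta=1.0):
    """Planck function reduced by (lam0/lam)**beta beyond lam0 (shape only)."""
    wav_um = np.asarray(wav_um, dtype=float)
    bb = planck_nu(wav_um, T)
    return bb * np.where(wav_um > lam0, (lam0 / wav_um) ** beta, 1.0)

wavs = np.array([70.0, 100.0, 160.0, 250.0, 350.0, 500.0])
shape = modified_blackbody(wavs, 49.0)
print("relative disk spectrum (normalised to 70 micron):")
for w, s in zip(wavs, shape / shape[0]):
    print(f"  {w:5.0f} um : {s:.2f}")
```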
The SPIRE data are near the confusion limit of about 6mJy, so the parameters $\beta$ and $\lambda_0$ are unconstrained within reasonable limits by the data (based on previous sub-mm detections for other disks we fix them to $\lambda_0=210 \mu$m and $\beta=1$ in Figure \[fig:sed\] [@2007ApJ...663..365W]). Assuming that grains absorb and emit like blackbodies, the radial dust distance implied by 49K is 45AU. Because the disk is observed at a radius of 120AU (i.e. is warmer than expected for blackbodies at 120AU), the dust emission has at least some contribution from grains small enough to emit inefficiently in the 70-350$\mu$m wavelength range. Because the SED alone is consistent with a pure blackbody (i.e. with $\beta=0$), we cannot make such a statement without the resolved images. However, actually constraining the grain sizes is difficult because temperature and emission are also affected by composition. We fit the data by generating disk SEDs for grains at a range of semi-major axes and choosing the one with the lowest $\chi^2$. If the dust semi-major axis is different from the observed semi-major axis of 120AU the model parameters are changed and the model recalculated, thus iterating towards the best fit. We model the dust with a standard diameter ($D$) distribution $n(D) \propto D^{2-3q}$ where $q=1.9$ [equivalently $n(M) \propto M^{-q}$ where $M$ is mass @2003Icar..164..334O], with the minimum size set by the blowout limit for the specific composition used (about 1.4$\mu$m) and a maximum size of 10cm. The size distribution probably extends beyond 10cm, but objects larger than 10cm contribute negligibly to the emission because the size distribution slope means that smaller grains dominate. Preliminary tests found that icy grains provided a much better fit than silicates. To refine the grain model so the SED and resolved radius agree, we introduced small amounts of amorphous silicates to the initially icy model. The grains are therefore essentially ice mixed with a small fraction ($f_{\rm sil} = 1.5\%$) of silicate. The icy grain model is shown as a dotted line in Figure \[fig:sed\]. This model has a total dust surface area of 14AU$^2$ and a mass of order 10$M_\oplus$ if the size distribution is extrapolated up to 1000km size objects. The parameters of this model are degenerate for the data in hand; for example the size distribution could be shallower and the fraction of silicates higher (e.g. $q=1.83$ and $f_{\rm sil} = 4\%$). If we allow the minimum grain size to be larger than the blowout limit, the disk is well fit by amorphous silicate grains with $q=1.9$ and $D_{\rm bl} = 10\mu$m. The disk spectrum can even be fit with a single size population of 25$\mu$m icy grains. However, the predictions for the flux at millimeter wavelengths depend on the size distribution, with lower fluxes for steeper size distributions. Therefore, grain properties and size distribution can be further constrained in the future with deep (sub)mm continuum measurements. In summary, it is hard to constrain the grain sizes or properties. There is a difference in the required minimum grain size that depends on composition. Because icy grains are reflective at optical wavelengths, a detection of the disk in scattered light could constrain the albedo of particles, and therefore their composition.
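Before turning to the resolved modelling, the "warmer than a blackbody" statement above can be made concrete with the usual blackbody equilibrium-temperature relation, $T \simeq 278.3\,{\rm K}\,(L_*/L_\odot)^{1/4}(r/{\rm AU})^{-1/2}$. The short sketch below (ours, using the combined luminosity from the SED fit) shows that 49 K corresponds to a blackbody radius of roughly 45-47 AU, well inside the 120 AU ring, and that blackbody grains at 120 AU would only reach about 30 K.

```python
import numpy as np

L_star = 2.08          # combined luminosity from the SED fit, L_sun
T_disk = 49.0          # best-fit blackbody temperature, K
r_obs = 120.0          # ring radius from the resolved images, AU

def blackbody_temperature(r_au, lum=L_star):
    """Equilibrium temperature of a blackbody grain at r_au from the star(s)."""
    return 278.3 * lum**0.25 / np.sqrt(r_au)

def blackbody_radius(temp, lum=L_star):
    """Radius (AU) at which a blackbody grain reaches the given temperature."""
    return (278.3 / temp) ** 2 * np.sqrt(lum)

r_bb = blackbody_radius(T_disk)
print(f"blackbody radius for 49 K grains: {r_bb:.0f} AU")
print(f"observed ring radius           : {r_obs:.0f} AU "
      f"(ratio {r_obs / r_bb:.1f}) -> grains emit inefficiently / are small")
print(f"blackbody temperature at 120 AU: {blackbody_temperature(r_obs):.0f} K")
```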
Spatial structure {#s:spatial} ================= ![image](coplanar-models.eps){width="100.00000%"} The PACS images of the 99 Her disk are resolved, which allows modelling of the spatial distribution of grains that contribute to the observed emission at each wavelength. We compare synthetic images with the *Herschel* observations in several steps:

i) Generate a three dimensional distribution of surface area $\sigma(r,\theta,\phi)$, where the coordinates are centered on the primary star.

ii) Generate a radial distribution of grain emission properties. Because the SED can be modelled with blackbody grains at 49K and the spatial structure modelled with a narrow ring, there is no real need for a radial temperature dependence and the grain properties are only a function of wavelength: $P(\lambda) = B_\nu(49K,\lambda)$. Practically, we use a radial temperature dependence $T \propto r^{-1/2}$ centered on the primary, normalised so that the disk ring temperature is 49K. This approach ensures that temperature differences due to non-axisymmetries (negligible for 99 Her) are automatically taken into account.

iii) Generate a high resolution model as viewed from a specific direction. The emission in a single pixel of angular size $x$ from a single volume element in the three dimensional model $dV$ viewed along a vector $\mathcal{R}$ is $dF_\nu(\lambda,r,\theta,\phi) = P(\lambda) \sigma(r,\theta,\phi) dV$, where $dV=x^2 d^2 d\mathcal{R}$, so $d\mathcal{R}$ is the length of the volume element, and $d$ is the distance to the particles from Earth [@1999ApJ...527..918W]. The emission is derived by integrating along the line of sight $\mathcal{R}$ for each pixel in the synthetic image. The photospheric fluxes for each star (decreased by the factors noted in §\[s:obs\]) are placed in the relevant pixels at this step.

iv) Convolve the high resolution model with a high resolution telescope+instrument beam, for which we use interpolated and rotated PACS images of the star Arcturus.[^6]

v) Degrade the resolution to match the data.

vi) Generate a map of residuals, defined by $(observed-model)/uncertainty$, where the uncertainty is the pixel to pixel RMS for that observation. We compute the model $\chi^2$ from pixels in a square crop around the disk.

A minor consideration is that in the general circumbinary case the disk temperature is not axisymmetric because the disk orbits the center of mass, not the primary. An axisymmetric disk is therefore subject to a temperature asymmetry such that it will be slightly hotter, and therefore brighter, where the distance to the primary is smallest. This “binary offset” asymmetry will rotate with the primary, and will be most pronounced in the coplanar case. The result of this effect is similar to the offset caused by perturbations from an eccentric object [“pericenter glow” @1999ApJ...527..918W]. However, the pericenter glow is offset towards the pericenter of the perturbing object, so does not rotate unless the perturbing object’s pericenter precesses. The offset from the primary and the pericenter glow are completely independent effects. Therefore, if the pericenter glow effect is present, it will reinforce and cancel the binary offset effect, depending on the relative magnitude of each offset. The magnitude of the binary offset effect is negligibly small ($\lesssim$1%) because the disk radius is much larger than the binary separation. Because our model is centered on the system center of mass this effect is taken into account anyway.
We discuss the effect of the binary on pericenter glow in §\[s:dyn\]. To fit the data requires a handful of parameters, some are required for all models and some are model specific. The disk surface area, temperature, radius, width, and total opening angle are the five physical parameters for a ring. The sky position angle and inclination are two further parameters that set the orientation, but can be fixed if the disk plane is assumed to be aligned with the binary. In addition each observation has the stellar RA and Dec as parameters to allow for the 2” $1\sigma$ pointing accuracy of *Herschel*. The position at 160$\mu$m is tied to the 100$\mu$m position. There are therefore eleven possible parameters to fit for the resolved observations at 70, 100, and 160$\mu$m. We fix the disk temperature to 49K in all cases. From the basic analysis (§\[s:basic\]), a simple ring coplanar with the binary does not appear a viable option. To emphasise this point we show 70 and 100$\mu$m images of the best fitting coplanar model in Figure \[fig:copl\]. This model was generated by the steps outlined above, and the rightmost three panels are the results of steps iv (convolved model), iii (high resolution model), and vi (residuals). We fix the disk width to 20AU, the opening angle to 5$^\circ$, and the position angle and inclination to the binary plane, so there are six free parameters (surface area, radius, and two pairs of RA/Dec sky positions). While we include the 160$\mu$m data in the fitting, it does not constrain the fit strongly due to low S/N and always shows $\sim$2$\sigma$ residual structure due to the offset peak. For comparison with the models below, the $\chi^2$ value for all three PACS bands is 4278 with 3797 degrees of freedom. The positive and negative residuals (rightmost panels) show that the disk ansae in the model have the wrong orientation at both wavelengths. It is clear that any structure symmetric about the binary line of nodes will not be consistent with the observations because the position angle is significantly different. An alternative explanation for the misalignment between the observed position angle and the binary line of nodes could be that the dust does in fact lie in the binary plane, but that the particles are on eccentric orbits with common pericenter directions (i.e. the disk is elliptical and offset from the binary). In principle, the observations can constrain the eccentricity and pericenter direction. However, this model fails because the eccentricity needed to match the observed position angle is too extreme. In order to obtain an ellipse that lies in the binary orbital plane and has a position angle and aspect ratio similar to the observations requires eccentricities $\gtrsim$0.4. The eccentricity of these particles is so high that i) the ring is significantly offset from the star and ii) the ring has an extreme pericenter glow asymmetry at all wavelengths caused by particles residing at different stellocentric distances. Because the PACS 70$\mu$m image shows that the star lies very near the disk center, such a strong offset is ruled out. We now consider two relatively simple models that account for the misalignment between the disk and binary orbital planes. The first is based on the expected secular evolution of circumbinary particles, and the second is a simple misaligned ring where the disk position angle and inclination are free parameters. 
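To make steps i)-vi) above concrete, here is a deliberately stripped-down sketch of the line-of-sight calculation for an optically thin ring. This is our own illustration, not the code used for the fits: a single narrow ring of blackbody grains sampled with particles, a Gaussian stand-in for the Arcturus beam, and no stellar point sources, filtering corrections, or residual maps.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ring_image(npix=101, pixscale=1.0, radius=120.0, width=20.0, opening=5.0,
               incl=45.0, pa=72.0, temp=49.0, wav_um=70.0, dist_pc=15.64,
               nsamp=200000, seed=1):
    """Synthetic optically thin ring image (arbitrary flux units).

    Particles are drawn in the ring plane (radius/width in AU, opening in deg),
    tilted by `incl`, rotated to sky position angle `pa`, binned onto a pixel
    grid with a Planck-function weight for T(r) = temp*(r/radius)**-0.5, and
    smoothed by a Gaussian approximation to the beam.
    """
    rng = np.random.default_rng(seed)
    r = radius + width * (rng.random(nsamp) - 0.5)
    phi = 2.0 * np.pi * rng.random(nsamp)
    x, y = r * np.cos(phi), r * np.sin(phi)
    z = r * np.radians(opening) * (rng.random(nsamp) - 0.5)
    # incline about the x axis, then rotate in the sky plane to the chosen PA
    ci, si = np.cos(np.radians(incl)), np.sin(np.radians(incl))
    y, z = y * ci - z * si, y * si + z * ci
    cp, sp = np.cos(np.radians(pa)), np.sin(np.radians(pa))
    north, east = x * cp - y * sp, x * sp + y * cp
    # Planck weight at the observing wavelength for grains at T(r)
    h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
    nu = c / (wav_um * 1e-6)
    t_r = temp * (r / radius) ** -0.5
    weight = nu**3 / (np.exp(h * nu / (kb * t_r)) - 1.0)
    # bin onto the sky grid (arcsec) and convolve with a ~6" FWHM "beam"
    half = npix * pixscale / 2.0
    img, _, _ = np.histogram2d(north / dist_pc, east / dist_pc, bins=npix,
                               range=[[-half, half], [-half, half]], weights=weight)
    return gaussian_filter(img, sigma=6.0 / 2.355 / pixscale)

model = ring_image()
print(model.shape, "peak fraction of total flux:", round(model.max() / model.sum(), 4))
```

In the full modelling the stellar photospheres (scaled down by the filtering factors of §\[s:obs\]) would be added before the convolution, the measured PACS beam would replace the Gaussian, and the result would be compared to the data via the residual map of step vi).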
Secularly perturbed polar ring {#s:polar} ------------------------------ In this section we consider a ring inspired by the secular evolution of circumbinary particles. This approach ensures that the disk is stable over the stellar lifetime and encompasses the particle dynamics dictated by the binary. We first outline the dynamics of circumbinary particles, and then show the model for the 99 Her disk. ### Dynamics {#s:dyn} Particle dynamics are important for evolution and stability in the 99 Her system. A circumbinary disk will have its inner edge truncated, while circumstellar disks around either component can be truncated at their outer edges. In addition, secular perturbations lead to precession of test particles’ nodes coupled with inclination variations. We explore these dynamics using the *Swift HJS* integrator . In general, disk truncation allows us to place limits on possible locations for disk particles. However, in the case of 99 Her there is no evidence for disk components orbiting only one star, and the apparent circumbinary disk extent lies well beyond $\sim$30-60AU stability limit at any inclination [@1997AJ....113.1445W; @2011arXiv1108.4144D]. Circumbinary particles also undergo long-term dynamical evolution due to secular perturbations. Because the binary has a small mass ratio and high eccentricity, the dynamics are not well described by the circular restricted three-body problem, commonly applied in the case of debris disks perturbed by planets. Similar dynamics have previously been explored in the context of the HD 98800 system [@2008MNRAS.390.1377V; @2009MNRAS.394.1721V] and more generally [@2010MNRAS.401.1189F; @2011arXiv1108.4144D]. These studies show that the inclination ($i$) and line of nodes ($\Omega$) of circumbinary particles evolve due to perturbations from the binary. Depending on the binary eccentricity and particle inclination, $\Omega$ can circulate (increase or decrease continuously) or librate (oscillate about 90 or $270^\circ$). Particles with low inclinations stay on low inclination orbits, thus sweeping out a roughly disk or torus-like volume over long timescales. Higher inclination particles are subject to nodal libration and large inclination variations, thus sweeping out large parts of a sphere around the binary. Most importantly for 99 Her, the orbits of particles with $\Omega \approx 90^\circ$ (or $270^\circ$) and on near polar orbits will not change much due to secular evolution, thus sweeping out a polar ring. ![Secular evolution of circumbinary particles in inclination ($i$) and line of nodes ($\Omega$) space. Particles begin at dots and move along the curves due to perturbations from the binary. Particles that would appear reflected in the x axis duplicate the spatial distribution so are not shown. Crosses show the current location of particles in the two interpretations of the transient ring model (§\[s:ring\]). Over time these particles will sweep out curves similar to particles 4 and 11. The long term structure of the transient ring will therefore appear similar to either panel 4 or 11 in Figure \[fig:swift\], depending on which inclination is correct.[]{data-label="fig:sec"}](sec.eps){width="50.00000%"} Figure \[fig:sec\] shows the secular evolution of 23 particles on initially circular orbits in complex inclination space. All particles have initial nodes of 90$^\circ$ relative to the binary pericenter and inclinations spread evenly between 0 and 180$^\circ$ and are integrated for 1Gyr (i.e. 
there are no other significant effects on such long timescales). At 120AU, the time taken for a particle to complete one cycle of secular evolution (make a loop in figure \[fig:sec\]) varies in the range 2-7$\times 10^5$ years, with larger loops taking longer. These times will also scale with particle semi-major axis. Particles 1-12 are those with initial inclinations between 0-90$^\circ$ that are sufficient to describe the range of spatial structures as we cannot distinguish between pro/retrograde orbits. The particles can be split into two groups; those with low inclinations whose nodes circulate (1–3) and those with high inclinations whose nodes librate about 90$^\circ$ (4–12). The dividing line (separatrix) between these families for the binary eccentricity of 0.76 is $21^\circ$ when $\Omega=90^\circ$ [or $270^\circ$, @2010MNRAS.401.1189F]. While particles in the first group have $i<21^\circ$ when $\Omega=90^\circ$, their inclinations when $\Omega=0^\circ$ (or $180^\circ$) can be as high as $90^\circ$. Thus, particles near the separatrix will sweep out an entire spherical shell during their secular evolution. Similarly, particles near the separatrix but in the second group also sweep out a spherical shell, though the orbital evolution is different. ![image](swift.eps){width="100.00000%"} To visualise the structures swept out by these families of particles due to secular perturbations, Figure \[fig:swift\] shows the resulting debris structures for particles that follow each of the trajectories 1-12 from Figure \[fig:sec\] (left to right and down). The structures are oriented as they would be seen on the sky in the 99 Her system (i.e. have the same orientation with respect to the binary orbit shown in Figure \[fig:sys\]). Each structure was generated by taking the relevant particle at each time step and spawning 1000 additional particles spread randomly around the orbit. This process was repeated for every time step, thus building up the spatial density of a family of particles that follow a specific curve in Figure \[fig:sec\]. These structures are optically thin, which makes interpreting them somewhat difficult. We have included a scaled version of the binary orbit from Figure \[fig:sys\] in some panels in an attempt to make the orientations clearer. The first (top left) panel shows a circular orbit coplanar with the binary. The PA is the binary line of nodes, and Figure \[fig:copl\] shows why a disk in the plane of the binary is not a satisfactory match to the observations. The second and third panels are still symmetric about the binary orbital plane, but have a wider range of inclinations and are an even poorer match to the observations. Panel 3 shows that while particle inclinations are restricted for $\Omega=90,270^\circ$, they can be large for $\Omega=0,180^\circ$ and result in a “butterfly” structure when viewed down the binary pericenter direction. The remaining panels are for particles 4-12, whose nodes librate and for which the plane of symmetry is perpendicular to the binary pericenter direction. In panel 4 the range of nodes and inclinations is so large that a particle sweeps out nearly an entire spherical shell during a cycle of secular evolution (i.e. the particle is near the separatrix). This range decreases as the initial inclination nears a polar orbit, at which point the orbital elements do not evolve and the resulting structure appears in panel 12 as a simple ring. 
The key difference from the ring in panel 1 is that this ring’s position angle is perpendicular to the sky projection of the binary pericenter direction, and as noted in §\[s:basic\] is therefore similar to the observed PA in the PACS images. Secular perturbations from the binary also affect the long term evolution of particle eccentricities and pericenter longitudes. These effects are taken into account by our $n$-body approach. However, we noticed that the eccentricities imposed (“forced”) on the particles are lower than would be expected for a lower mass companion. Further $n$-body simulations of coplanar particles show that for 99 Her with a mass ratio of 0.49 the forced eccentricity at 120AU is about 0.03, but if the mass ratio were 0.05 the forced $e$ is 0.1. This lack of significant eccentricity forcing is visible by its absence in Figure \[fig:swift\], where the structures would be much broader if there were a large range of particle eccentricities. For example, if the mass of the secondary in the 99 Her system were significantly smaller, the model in panel 1 would become broader and offset from the binary center of mass, resulting in a small pericenter glow effect. This dependence suggests that a circumbinary disk’s structure may help constrain the binary mass ratio in cases where it is uncertain. However, we cannot apply this idea to make a better estimate of the 99 Her mass ratio because the PACS observations do not have enough resolution. In addition, at high inclinations the particle behaviour is more complicated, because polar particles switch between pro and retrograde orbits and do not follow simple circles in complex eccentricity space. ### Polar ring model ![image](swift-models.eps){width="100.00000%"} We now use the models from Figure \[fig:swift\] to fit the PACS observations. The model has only seven free parameters: the particle semi-major axis and initial inclination, the surface area of dust, and the same four RA/Dec positions. The dust temperature is fixed to 49K. Using a semi-major axis of 120AU, each panel was compared to the PACS images, setting the surface area in grains for each model to obtain the least residual structure. Of these we found that panel 9 was the best fit, shown in Figure \[fig:polar\]. These particles follow near-polar orbits so we call this model a “polar ring.” We find $\chi^2=3202$. In terms of $\chi^2$ the results for panels 8 and 10 are similar, but slightly higher. The uncertainty in the initial inclination is therefore about 10$^\circ$, and for the semi-major axis about 10AU. This model is much better than the coplanar model of Figure \[fig:copl\], with no overlapping residual structure at 70 and 100$\mu$m. The particles likely occupy a wider range of orbits than a single semi-major axis with some non-zero eccentricity, which may account for some minor (2$\sigma$) structure in the residuals at the disk ansae at 70$\mu$m. However, given that this model stems directly from the secular evolution, has very few free parameters, and accounts for the structure in all PACS images, we consider it a plausible explanation. Transient ring model {#s:ring} -------------------- A simple circular ring is a natural model to fit to the observations. This model has eight free parameters, with the width of the ring fixed at 20AU and the opening angle fixed to 5$^\circ$. As expected from the simple analysis in §\[s:basic\] the position angle of this ring is not aligned with the binary line of nodes, and is therefore misaligned with the binary orbit.
The interpretation depends on the orientation of the best fit. A misaligned ring with polar orbits and the correct line of nodes would be considered further evidence in favour of the above polar ring model. A ring with a non-polar orientation will be spread into a broad structure like one of the panels in Figure \[fig:swift\] by secular perturbations. The ring cannot be long-lived and could therefore be the aftermath of a recent collision, seen after the collision products have sheared into a ring, but before secular perturbations destroy the ring structure. Thus we call this model a “transient ring.” ![image](ring-models.eps){width="100.00000%"} This model is shown in Figure \[fig:ring\], and is a reasonable match to the PACS observations. However, the residuals at 70$\mu$m show that the ring produces a structure that is slightly too elliptical, compared to the more rectangular structure that is observed and reproduced by the polar ring. This model also has less emission at the stellar position than is observed. For this model $\chi^2= 3304$. The disk is inclined 53$^\circ$ from face-on and the PA is 81$^\circ$. The uncertainties are similar to those derived for the Gaussian fits in §\[s:basic\]. The minimum relative inclination between the disk and binary orbital planes is therefore 32 degrees, with a line of nodes with respect to the binary orbit of 139$^\circ$. However, the inclination between the disk and binary plane could also be 87$^\circ$ if the disk were mirrored in the sky plane, which means that the particles have near-polar orbits. These orbits are nearly the same as panel 12 of Figure \[fig:swift\] (the narrow polar ring) because the line of nodes with respect to the binary orbit is 276$^\circ$. These two interpretations correspond to two points in Figure \[fig:sec\], shown as crosses. Over time the particles would spread around to make two more lines similar to those drawn. The particles in the lower inclination case are close to the separatrix, and would therefore sweep out a near-spherical shell like panel 4 of Figure \[fig:swift\]. In this case, the long term evolution produces structures that have the wrong position angle and are a poor match to the observations. The higher inclination case is very nearly a polar ring and would look very similar to panel 11. Such a result is expected because we found above that the polar ring model works well, and argues in favour of the polar-ring interpretation. We can in fact improve this simple ring model by increasing the total disk opening angle (i.e. allowing a larger range of inclinations), which emulates the range of inclinations that result from the secular evolution. We find a best fit when the particle inclinations are 25$^\circ$ (total opening angle of 50$^\circ$), where $\chi^2=3210$. This model looks very similar to the preferred polar ring model above, but is not generated in the same way, and will therefore change somewhat due to secular perturbations over time because the disk is not perfectly polar. Discussion {#sec:disc} ========== We strongly favour the polar ring model as the best explanation of the disk structure surrounding 99 Her. The polar ring is stable for the stellar lifetime, and takes the secular dynamics into account. The transient ring model, where the disk orientation is not fixed, also finds that the disk particles can have polar orbits. However, because the ring could be mirrored in the sky plane and appear the same, the ring could be misaligned with the binary orbital plane by about 30$^\circ$. 
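The two alternative disk–binary relative inclinations quoted here (about 32$^\circ$ or 87$^\circ$) follow from the standard relation between two planes specified by an inclination and a line of nodes, combined with the mirror ambiguity in the sign of the disk inclination. The sketch below only illustrates that geometry; the binary orientation entries are placeholders (the actual binary elements are given earlier in the paper and are not restated in this section), so the printed numbers are not the paper's values.

``` python
import numpy as np

def mutual_inclination(i1, node1, i2, node2):
    """Angle between two planes given their inclinations and lines of nodes
    (degrees), using cos J = cos i1 cos i2 + sin i1 sin i2 cos(dOmega)."""
    i1, node1, i2, node2 = np.radians([i1, node1, i2, node2])
    cos_j = np.cos(i1) * np.cos(i2) + np.sin(i1) * np.sin(i2) * np.cos(node1 - node2)
    return np.degrees(np.arccos(np.clip(cos_j, -1.0, 1.0)))

i_disk, pa_disk = 53.0, 81.0        # transient-ring fit quoted in the text
i_bin, node_bin = 40.0, 40.0        # hypothetical placeholder binary elements

# Mirror ambiguity: the disk may be tilted towards or away from the observer,
# which is equivalent to flipping the sign of its inclination.
for i_d in (+i_disk, -i_disk):
    print(mutual_inclination(i_d, pa_disk, i_bin, node_bin))
```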
Based on $\chi^2$ and the residuals the polar ring is marginally preferable over the transient ring. However, given that a more realistic model with a range of particle radii and inclinations could improve the fit in each case, we do not assign much importance to the relatively small $\chi^2$ differences. Instead, we consider several constraints on the collisional evolution that argue against the transient ring interpretation. By considering the timescales for collisions and secular evolution, we can estimate the likelihood of observing the products of a collision as a transient ring before it is spread into a broader structure. Based on the observed radius and total cross-sectional area, the collisional lifetime of grains just above the blowout size is about a million years [@2007ApJ...658..569W]. The emission could last much longer if larger objects exist in the size distribution, and the lifetime scales with the maximum size as $\sqrt{D_{\rm max}/D_{\rm bl}}$, so it depends on the size of the largest fragment created in the collision. If the largest fragments are at least 1mm in size the lifetime is at least 50Myr, and we would expect the collisional cascade to be detectable for this length of time. The secular precession timescale is about 0.5Myr, and it is reasonable to assume that the ring structure would be erased by secular perturbations within 10 secular timescales. Thus, the collisional products would be observable as a ring for only 5Myr. Because the collision time is longer than the secular time, the collision products would spend at most a tenth of their lifetime appearing as a misaligned ring, and the remainder as a broader structure. That is, assuming such a collision did occur, we have less than a 1:10 chance of observing the collision products as a ring that looks like the *Herschel* observations.[^7] While 1:10 is not unreasonable, this estimate does not consider the object that must be destroyed to generate the observed dust or the plausibility of a 1mm maximum size. To produce the observed fractional luminosity, a parent body of at least 600km in diameter must be broken into blowout sized grains. With the more realistic assumption that the collision produced a range of grain sizes, the parent body must be larger, about 2000km if grains were as large as the 1mm assumed above (assuming $q=11/6$). Under the still more realistic assumption of a wide range of fragment sizes, up to 1km say, the parent body would need to be roughly Earth-sized. However, for such large fragments the dust lifetime would be 50Gyr and the chance of observing the structure as a ring would be very small (1:10,000). We can estimate the ability of collisions to smash large objects into small pieces by considering their strength and possible collision velocities. The specific energy needed for catastrophic disruption, where the largest collision product is half the original mass (i.e. very large), is roughly $10^{11}$erg/g for objects 2000km in size [@2009ApJ...691L.133S]. The energy needed to disrupt an object so that the collision products are all very small must be larger. The maximum collision energy possible for circular orbits is for a collision between two equal sized objects on pro and retrograde orbits. The collision energy assuming such an impact at twice the orbital velocity of a few km/s at 100AU is a few $10^{10}$erg/g. Therefore, only in the most optimistic (highest velocity) case is the collision energy sufficient to catastrophically disrupt 2000km objects.
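The timescale and impact-energy figures above are order-of-magnitude arithmetic and can be reproduced with a few lines. In the sketch below the blowout grain size ($\sim$0.4$\mu$m) and the total binary mass ($\sim$1.4M$_\odot$) are assumptions made only for the example, since neither number is restated in this section.

``` python
import numpy as np

# --- how long a collisional ring would remain visible ---
t_col_blowout = 1e6              # yr, collisional lifetime of blowout grains
d_max, d_blowout = 0.1, 4e-5     # cm: 1 mm largest fragment, ~0.4 um blowout (assumed)
t_col = t_col_blowout * np.sqrt(d_max / d_blowout)   # ~5e7 yr (the "50Myr")
t_ring = 10 * 0.5e6              # yr, ~10 secular precession timescales
print(t_col / 1e6, t_ring / t_col)                   # ~50 Myr and ~0.1 (the 1:10 chance)

# --- maximum specific impact energy for circular orbits at 100 AU ---
m_total = 1.4                    # Msun, assumed total binary mass
v_orb = 29.78 * np.sqrt(m_total / 100.0)             # km/s, ~3.5 ("a few km/s")
v_rel = 2.0 * v_orb              # head-on collision of pro- and retrograde orbits
q_impact = (v_rel * 1e5) ** 2 / 8.0                  # erg/g for equal masses, ~6e10
print(v_orb, q_impact)           # compare with Q*_D ~ 1e11 erg/g for 2000 km bodies
```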
In the event of a disruption, the lifetime of the collision products will be very long because the largest remnant is about 1000km in size. In the more realistic case where collision velocities are set by object eccentricities and inclinations, disruption of large objects at large semi-major axes is even more difficult. This difficulty, combined with the smaller amount of starlight intercepted at such distances, means that single collisions only produce a minor increase over the background level of dust [@2005AJ....130..269K]. These probability and collision arguments suggest that a single collision is an extremely unlikely explanation for the origin of the observed dust. The polar ring model does not have these issues. The secular evolution of particles in the 99 Her system means that particles on polar orbits suffer only minor changes in inclination and node (Fig \[fig:sec\]). These orbits are therefore stable over the stellar lifetime so the dust could be the steady state collision products of the polar planetesimal belt. Initial misalignment is therefore the only special requirement for the polar ring model. The excellent agreement between the PACS data and a simple model generated by particles on these orbits argues strongly in favour of this interpretation. The question is then shifted to one of the origin of the misalignment. Most binaries are thought to form through fragmentation and subsequent accretion during collapse of a molecular cloud [for a recent review see @2007prpl.conf..133G]. The resulting binary systems should be aligned with their protoplanetary disks when the separations are of order tens of AU [@2000MNRAS.317..773B]. Given the 16AU separation of the 99 Her system, it therefore seems that interactions during the subsequent phase of dynamical cluster evolution are a more likely cause of a misaligned disk. There are several ways that such a configuration could arise from interactions in a young stellar cluster. A close encounter in which a binary captures some material from the outer regions of a circumstellar disk hosted by another star seems possible. This “disk exchange” scenario requires an encounter where the binary pericenter is perpendicular to the circumstellar disk plane, and that the encounter distance and geometry capture material into orbits similar to those observed for the debris disk (e.g. most likely a prograde rather than retrograde encounter). An alternative scenario is a stellar exchange reaction, where a binary encounters a single star that harbours a circumstellar disk. During the exchange one of the binary components is captured by the originally single star, and the other leaves [e.g. @2011arXiv1109.2007M]. The post-encounter configuration is then a binary surrounded by a circumbinary disk. If the binary pericenter direction were perpendicular to the disk plane it could represent a young analogue of the 99 Her system. Such an encounter would require that the disk is not irreparably damaged by large stellar excursions during the exchange [@2011arXiv1109.2007M], but may also present a way to clear inner disk regions, thus providing a possible reason that the 99 Her disk resides at 120AU and not closer, where it could still be stable (see § \[s:dyn\]). Both scenarios require some degree of tuning; the encounters must happen with specific geometries to produce the observed relative binary and disk orientations.
However, differences in the surface brightness between the different models in Figure \[fig:swift\] mean there could be some selection bias towards more disk-like structures. The advantage of the disk exchange scenario is that the cross section for interaction at a distance of about 100AU is much higher than for stellar exchange, which would need to have an encounter distance similar to the binary semi-major axis. With a factor of about ten difference in the encounter impact parameter for each scenario, the close encounter is therefore about 100 times more likely than the exchange (ignoring other constraints on geometry, configuration etc.). In the absence of detailed simulations of encounter outcomes, some data exist to help distinguish between these two scenarios. The minimum inclination of the stellar pole for the 99 Her primary relative to the binary orbital plane is $20 \pm 10^\circ$ [@1994AJ....107..306H]. The stellar pole therefore differs from the binary orbital plane at 95% confidence, which is a hint that the system may be the result of an exchange. However, the scatter in inclination differences for binaries with separations similar to that of 99 Her is about 20$^\circ$ [@1994AJ....107..306H], which may indicate that systems with this separation are in fact aligned and the uncertainties were underestimated, or that this scatter is the intrinsic level of misalignment at these separations. Though 99 Herculis is the first clear case of misalignment between binary and disk planes, the GG Tauri system may show a similar signature. The GG Tau system consists of an Aa/Ab binary surrounded by a circumbinary ring, and a more distant Ba/Bb pair that may be bound. It is not clear if the inner binary is misaligned with the circumbinary disk, but misalignment is suggested because, if they were aligned, the ring’s inner edge would be too distant to be set by the binary. However, there could also be problems if they are misaligned, because the expected disk scale height due to perturbations from the binary may be inconsistent with observations. Though uncertain, the possible misalignment between the binary and ring planes shows that GG Tau could be a young analogue of 99 Her-like systems. Summary ======= We have modelled the resolved circumbinary debris disk in the 99 Her system. This disk is unusual because it appears misaligned with the binary plane. It can be explained as either an inclined transient ring due to a recent collision, or more likely a ring of polar orbits. The transient ring is shown to be implausible from collisional arguments. While the inclined ring cannot exist on long (secular) timescales, the polar ring can. There appear to be two possible formation scenarios for the polar ring model, which both invoke stellar encounters. The binary may have captured material from another star’s circumstellar disk, or a new binary may have formed in a stellar exchange where one of the systems already contained a circumstellar disk. While many binary and multiple systems are known to have debris disks, none are both resolved and have orbits characterised as well as 99 Herculis. Future efforts should characterise this system further to test our interpretation and attempt to find more examples. A sample of resolved circumbinary disks would test whether disk-binary misalignment is a common outcome of star formation and cluster evolution, with implications for planetary systems around both single and binary stars.
Acknowledgments {#acknowledgments .unnumbered} =============== We are grateful to the referee for a thorough reading of the manuscript, especially for noting that previous 99 Her visual orbits have the wrong ascending node. This research has made use of the Washington Double Star Catalog maintained at the U.S. Naval Observatory, and the SwiftVis $n$-body visualisation software developed by Mark Lewis. We also thank Herve Beust for use of the HJS code, and Paul Harvey for comments on a draft of this article. [59]{} natexlab\#1[\#1]{}href \#1\#2urllinklabel adsurllinklabel , H. A. & [Willmarth]{}, D. 2006, , 162, 207 [](http://adsabs.harvard.edu/abs/2006ApJS..162..207A) , S. J., [Caliskan]{}, H., [Kocer]{}, D., [Cay]{}, I. H., & [Gokmen Tektunali]{}, H. 2000, , 316, 514 [](http://adsabs.harvard.edu/abs/2000MNRAS.316..514A) , A. [et al.]{} 2010, , 518, L9 [](http://adsabs.harvard.edu/abs/2010A&A...518L...9A) , J. C., [Nelson]{}, R. P., [Lagrange]{}, A. M., [Papaloizou]{}, J. C. B., & [Mouillet]{}, D. 2001, , 370, 447 [](http://adsabs.harvard.edu/abs/2001A&A...370..447A) , M. R., [Bonnell]{}, I. A., [Clarke]{}, C. J., [Lubow]{}, S. H., [Ogilvie]{}, G. I., [Pringle]{}, J. E., & [Tout]{}, C. A. 2000, , 317, 773 [](http://adsabs.harvard.edu/abs/2000MNRAS.317..773B) , H. 2003, , 400, 1129 [](http://adsabs.harvard.edu/abs/2003A&A...400.1129B) , H. & [Dutrey]{}, A. 2005, , 439, 585 [](http://adsabs.harvard.edu/abs/2005A&A...439..585B) —. 2006, , 446, 137 [](http://adsabs.harvard.edu/abs/2006A&A...446..137B) , A. M., [McGrath]{}, E. J., [Lambert]{}, D. L., & [Cunha]{}, K. 2004, , 606, 306 [](http://adsabs.harvard.edu/abs/2004ApJ...606..306B) , L. J., [Wyatt]{}, M. C., [Duch[ê]{}ne]{}, G., [Sibthorpe]{}, B., [Kennedy]{}, G., [Matthews]{}, B. C., [Kalas]{}, P., [Greaves]{}, J., [Su]{}, K., & [Rieke]{}, G. 2011, , 417, 1715 [](http://adsabs.harvard.edu/abs/2011MNRAS.417.1715C) , J. & [Nys]{}, O. 2002, VizieR Online Data Catalog, 1274, 0 [](http://adsabs.harvard.edu/abs/2002yCat.1274....0D) , S. & [Blundell]{}, K. M. 2011, ArXiv e-prints, (1108.4144) [](http://adsabs.harvard.edu/abs/2011arXiv1108.4144D) , F. & [Laskar]{}, J. 2010, , 401, 1189 [](http://adsabs.harvard.edu/abs/2010MNRAS.401.1189F) , A. S. & [Hook]{}, R. N. 2002, , 114, 144 [](http://adsabs.harvard.edu/abs/2002PASP..114..144F) , S. P., [Kroupa]{}, P., [Goodman]{}, A., & [Burkert]{}, A. 2007, Protostars and Planets V, 133 [](http://adsabs.harvard.edu/abs/2007prpl.conf..133G) , R. G., [Carretta]{}, E., & [Castelli]{}, F. 1996, VizieR Online Data Catalog, 331, 40191 [](http://adsabs.harvard.edu/abs/1996yCat..33140191G) , M. J. [et al.]{} 2010, , 518, L3 [](http://adsabs.harvard.edu/abs/2010A&A...518L...3G) , S., [Dutrey]{}, A., & [Simon]{}, M. 1999, , 348, 570 [](http://adsabs.harvard.edu/abs/1999A&A...348..570G) , A. 1994, , 107, 306 [](http://adsabs.harvard.edu/abs/1994AJ....107..306H) , W. I., [Mason]{}, B. D., & [Worley]{}, C. E. 2001, , 122, 3472 [](http://adsabs.harvard.edu/abs/2001AJ....122.3472H) , B. & [Mermilliod]{}, M. 1997, VizieR Online Data Catalog, 2215, 0 [](http://adsabs.harvard.edu/abs/1997yCat.2215....0H) , T. J. & [McCarthy]{}, Jr., D. W. 1993, , 106, 773 [](http://adsabs.harvard.edu/abs/1993AJ....106..773H) , D. [et al.]{} 2010, , 514, A1 [](http://adsabs.harvard.edu/abs/2010A&A...514A...1I) , S. J. & [Bromley]{}, B. C. 2005, , 130, 269 [](http://adsabs.harvard.edu/abs/2005AJ....130..269K) , D. W., [Kim]{}, S., [Trilling]{}, D. E., [Larson]{}, H., [Cotera]{}, A., [Stapelfeldt]{}, K. 
R., [Wahhaj]{}, Z., [Fajardo-Acosta]{}, S., [Padgett]{}, D., & [Backman]{}, D. 2010, , 710, L26 [](http://adsabs.harvard.edu/abs/2010ApJ...710L..26K) , R. 2011, , 530, A126 [](http://adsabs.harvard.edu/abs/2011A&A...530A.126K) , Y. 1962, , 67, 591 [](http://adsabs.harvard.edu/abs/1962AJ.....67..591K) , A. & [Greenberg]{}, J. M. 1997, , 323, 566 [](http://adsabs.harvard.edu/abs/1997A&A...323..566L) , M. L. 1962, , 9, 719 [](http://adsabs.harvard.edu/abs/1962P&SS....9..719L) , B. D., [Wycoff]{}, G. L., [Hartkopf]{}, W. I., [Douglass]{}, G. G., & [Worley]{}, C. E. 2011, VizieR Online Data Catalog, 1, 2026 [](http://adsabs.harvard.edu/abs/2011yCat....102026M) , B. C. [et al.]{} 2010, , 518, L135 [](http://adsabs.harvard.edu/abs/2010A&A...518L.135M) , J. C. 2006, VizieR Online Data Catalog, 2168, 0 [](http://adsabs.harvard.edu/abs/2006yCat.2168....0M) , N. & [Goddi]{}, C. 2011, ArXiv e-prints, (1109.2007) [](http://adsabs.harvard.edu/abs/2011arXiv1109.2007M) , M. & [et al.]{} 1990, in IRAS Faint Source Catalogue, version 2.0 (1990), 0 [](http://adsabs.harvard.edu/abs/1990IRASF.C......0M) , B., [Mayor]{}, M., [Andersen]{}, J., [Holmberg]{}, J., [Pont]{}, F., [J[ø]{}rgensen]{}, B. R., [Olsen]{}, E. H., [Udry]{}, S., & [Mowlavi]{}, N. 2004, , 418, 989 [](http://adsabs.harvard.edu/abs/2004A&A...418..989N) , D. P. & [Greenberg]{}, R. 2003, , 164, 334 [](http://adsabs.harvard.edu/abs/2003Icar..164..334O) , S. 2010, in Astronomical Society of the Pacific Conference Series, Vol. 434, Astronomical Data Analysis Software and Systems XIX, ed. [Y. Mizumoto, K.-I. Morita, & M. Ohishi]{}, 139 [](http://adsabs.harvard.edu/abs/2010ASPC..434..139O) , M. A. C. & [ESA]{}, eds. 1997, ESA Special Publication, Vol. 1200, [The HIPPARCOS and TYCHO catalogues. Astrometric and photometric star catalogues derived from the ESA HIPPARCOS Space Astrometry Mission]{} [](http://adsabs.harvard.edu/abs/1997ESASP1200.....P) , N. M., [Greaves]{}, J. S., [Dent]{}, W. R. F., [Matthews]{}, B. C., [Holland]{}, W. S., [Wyatt]{}, M. C., & [Sibthorpe]{}, B. 2010, , 403, 1089 [](http://adsabs.harvard.edu/abs/2010MNRAS.403.1089P) , V., [Gueth]{}, F., [Hily-Blant]{}, P., [Schuster]{}, K.-F., & [Pety]{}, J. 2011, , 528, A81 [](http://adsabs.harvard.edu/abs/2011A&A...528A..81P) , G. L., [Riedinger]{}, J. R., [Passvogel]{}, T., [Crone]{}, G., [Doyle]{}, D., [Gageur]{}, U., [Heras]{}, A. M., [Jewell]{}, C., [Metcalfe]{}, L., [Ott]{}, S., & [Schmidt]{}, M. 2010, , 518, L1 [](http://adsabs.harvard.edu/abs/2010A&A...518L...1P) , A. [et al.]{} 2010, , 518, L2 [](http://adsabs.harvard.edu/abs/2010A&A...518L...2P) , G. H. [et al.]{} 2004, , 154, 25 [](http://adsabs.harvard.edu/abs/2004ApJS..154...25R) —. 2008, , 135, 2245 [](http://adsabs.harvard.edu/abs/2008AJ....135.2245R) , M., [Prieur]{}, J.-L., [Pansecchi]{}, L., [Argyle]{}, R. W., & [Sala]{}, M. 2010, Astronomische Nachrichten, 331, 286 [](http://adsabs.harvard.edu/abs/2010AN....331..286S) , M., [Prieur]{}, J.-L., [Pansecchi]{}, L., [Argyle]{}, R. W., [Sala]{}, M., [Basso]{}, S., [Ghigo]{}, M., [Koechlin]{}, L., & [Aristidi]{}, E. 2008, Astronomische Nachrichten, 329, 54 [](http://adsabs.harvard.edu/abs/2008AN....329...54S) , S. 1999, , 341, 121 [](http://adsabs.harvard.edu/abs/1999A&A...341..121S) , S. T. & [Leinhardt]{}, Z. M. 2009, , 691, L133 [](http://adsabs.harvard.edu/abs/2009ApJ...691L.133S) , Y. 2007, , 59, 335 [](http://adsabs.harvard.edu/abs/2007PASJ...59..335T) , D. E., [Stansberry]{}, J. A., [Stapelfeldt]{}, K. R., [Rieke]{}, G. H., [Su]{}, K. Y. L., [Gray]{}, R. 
O., [Corbally]{}, C. J., [Bryden]{}, G., [Chen]{}, C. H., [Boden]{}, A., & [Beichman]{}, C. A. 2007, , 658, 1289 [](http://adsabs.harvard.edu/abs/2007ApJ...658.1289T) , F., ed. 2007, Astrophysics and Space Science Library, Vol. 350, [Hipparcos, the New Reduction of the Raw Data]{} [](http://adsabs.harvard.edu/abs/2007ASSL..350.....V) , F. 2008, VizieR Online Data Catalog, 1311, 0 [](http://adsabs.harvard.edu/abs/2008yCat.1311....0F) , P. E. & [Evans]{}, N. W. 2008, , 390, 1377 [](http://adsabs.harvard.edu/abs/2008MNRAS.390.1377V) —. 2009, , 394, 1721 [](http://adsabs.harvard.edu/abs/2009MNRAS.394.1721V) , P. A. & [Holman]{}, M. J. 1997, , 113, 1445 [](http://adsabs.harvard.edu/abs/1997AJ....113.1445W) , M. C. & [Dent]{}, W. R. F. 2002, , 334, 589 [](http://adsabs.harvard.edu/abs/2002MNRAS.334..589W) , M. C., [Dermott]{}, S. F., [Telesco]{}, C. M., [Fisher]{}, R. S., [Grogan]{}, K., [Holmes]{}, E. K., & [Pi[ñ]{}a]{}, R. K. 1999, , 527, 918 [](http://adsabs.harvard.edu/abs/1999ApJ...527..918W) , M. C., [Smith]{}, R., [Greaves]{}, J. S., [Beichman]{}, C. A., [Bryden]{}, G., & [Lisse]{}, C. M. 2007, , 658, 569 [](http://adsabs.harvard.edu/abs/2007ApJ...658..569W) , M. C., [Smith]{}, R., [Su]{}, K. Y. L., [Rieke]{}, G. H., [Greaves]{}, J. S., [Beichman]{}, C. A., & [Bryden]{}, G. 2007, , 663, 365 [](http://adsabs.harvard.edu/abs/2007ApJ...663..365W) [^1]: Email: <gkennedy@ast.cam.ac.uk> [^2]: Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. [^3]: <http://ad.usno.navy.mil/wds/> [^4]: A figure showing the orbit is available in the WDS catalogue. [^5]: See http://herschel.esac.esa.int/twiki/bin/view/Public/WebHome for details regarding the PACS beam extent and calibration. [^6]: The observations are reduced in sky coordinates, so the spacecraft orientation is different for each observation and the PSF must be rotated accordingly. This rotation step could be avoided by reducing both in spacecraft coordinates. [^7]: Had we found that an eccentric ring could explain the data, the same argument applied to ring spreading by pericenter precession would apply, with the same 1:10 result. The particles’ pericenter directions are unlikely to be maintained through forcing by a third (circumbinary) body as for a standard pericenter glow model, because the perturbing body would be subject to the same pericenter precession.
1
--- abstract: 'We show existence of solutions to the least gradient problem on the plane for boundary data in $BV(\partial\Omega)$. We also provide an example of a function $f \in L^1(\partial\Omega) \backslash (C(\partial\Omega) \cup BV(\partial\Omega))$, for which the solution exists. We also show non-uniqueness of solutions even for smooth boundary data in the anisotropic case for a nonsmooth anisotropy. We additionally prove a regularity result valid also in higher dimensions.' address: 'W. Górny: Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland.' author: - Wojciech Górny title: 'Planar least gradient problem: existence, regularity and anisotropic case' --- Introduction ============ Many papers, including [@SWZ], [@MNT], [@MRL], [@GRS] describe the least gradient problem, i.e. the minimization problem $$\min \{ \int_\Omega |Du|, \quad u \in BV(\Omega), \quad u|_{\partial\Omega} = f \},$$ where we may impose certain conditions on $\Omega$, $f$ and use different approaches to the boundary condition. In [@SWZ] $f$ is assumed to be continuous and the boundary condition is in the sense of traces. They also impose a set of geometrical conditions on $\Omega$, which are satisfied by strictly convex sets; in fact, in dimension two they are equivalent to strict convexity. The authors of [@MNT] also add a positive weight. Another approach is presented in [@MRL], where the boundary datum belongs to $L^1(\partial\Omega)$, but the boundary condition is understood in a weaker sense. Throughout this paper $\Omega \subset \mathbb{R}^N$ shall be an open, bounded, strictly convex set with Lipschitz (or $C^1$) boundary. The boundary datum $f$ will belong to $L^1(\partial \Omega)$ or $BV(\partial\Omega)$. We consider the following minimization problem called the least gradient problem (for brevity denoted by LGP): $$\label{zagadnienie} \inf \{ \int_\Omega |Du|, \quad u \in BV(\Omega), \quad Tu = f \},$$ where $T$ denotes the trace operator $T: BV(\Omega) \rightarrow L^1(\partial\Omega)$. Even existence of solutions in this sense is not obvious, as the functional $$F(u) = \begin{cases} \int_\Omega |Du| & \text{if } u \in BV(\Omega) \text{ and } Tu = f, \\ +\infty & \text{otherwise} \end{cases}$$ is not lower semicontinuous with respect to $L^1$ convergence. In fact, in [@ST] the authors have given an example of a function $f$ without a solution to the corresponding least gradient problem. It was a characteristic function of a certain fat Cantor set. Let us note that it does not lie in $BV(\partial\Omega)$. There are two possible ways to deal with Problem \[zagadnienie\]. The first is the relaxation of the functional $F$. Such a reformulation and its relationship with the original statement are considered in [@MRL] and [@Maz]. Another way is to consider when Problem \[zagadnienie\] has a solution in the classical sense and what is its regularity. This paper uses the latter approach. The main result of the present paper is a sufficient condition for the existence of solutions of the least gradient problem on the plane. It is given in the following theorem, which will be later proved as Theorem \[tw:istnienie\]: Let $\Omega \subset \mathbb{R}^2$ be an open, bounded, strictly convex set with $C^1$ boundary. Then for every $f \in BV(\partial\Omega)$ there exists a solution of LGP for $f$. Obviously, this condition is not necessary; the construction given in [@SWZ] does not require the boundary data to have finite total variation.
We also provide an example of a function $f \in L^1(\partial\Omega) \backslash (C(\partial\Omega) \cup BV(\partial\Omega))$, for which the solution exists, see Example \[ex:cantor\]. Another result included in this article provides a certain regularity property. Theorem \[tw:rozklad\] asserts existence of a decomposition of a function of least gradient into a continuous and a locally constant function. It is not a property shared by all BV functions, see [@AFP Example 4.1]. Let $\Omega \subset \mathbb{R}^N$, where $N \leq 7$, be an open, bounded, strictly convex set with Lipschitz boundary. Suppose $u \in BV(\Omega)$ is a function of least gradient. Then there exist functions $u_c, u_j \in BV(\Omega)$ such that $u = u_c + u_j$ and $(Du)_c = Du_c$ and $(Du)_j = Du_j$, i.e. one can represent $u$ as a sum of a continuous function and a piecewise constant function. They are of least gradient in $\Omega$. Moreover this decomposition is unique up to an additive constant. The final chapter takes on the subject of anisotropy. As was proved in [@JMN], for an anisotropic norm $\phi$ on $\mathbb{R}^N$ smooth with respect to the Euclidean norm there is a unique solution to the anisotropic LGP. We consider $p-$norms on the plane for $p \in [1, \infty]$ to show that for $p = 1, \infty$, i.e. where the anisotropy is not smooth, the solutions need not be unique even for smooth boundary data (see Examples \[ex:l1\] and \[ex:linfty\]), whereas for $1 < p < \infty$, when the anisotropy is smooth, Theorem \[tw:anizotropia\] asserts that the only connected minimal surface with respect to the p-norm is a line segment, similarly to the isotropic solution. Let $\Omega \subset \mathbb{R}^2$ be an open convex set. Let the anisotropy be given by the function $\phi(x,Du) = \| Du \|_p$, where $1 < p < \infty$. Let $E$ be a $\phi-$minimal set with respect to $\Omega$, i.e. $\chi_E$ is a function of $\phi-$least gradient in $\Omega$. Then every connected component of $\partial E$ is a line segment. Preliminaries ============= Least gradient functions ------------------------ Now we shall briefly recall basic facts about least gradient functions. What we need most in this paper is the Miranda stability theorem and the relationship between functions of least gradient and minimal surfaces. For more information, see [@Giu]. We say that $u \in BV(\Omega)$ is a function of least gradient, if for every compactly supported $($equivalently: with trace zero$)$ $v \in BV(\Omega)$ we have $$\int_\Omega |Du| \leq \int_\Omega |D(u + v)|.$$ We say that $u \in BV(\Omega)$ is a solution of the least gradient problem in the sense of traces $($solution of LGP$)$ for given $f \in L^1(\partial\Omega)$, if $Tu = f$ and for every $v \in BV(\Omega)$ such that $Tv = 0$ we have $$\int_\Omega |Du| \leq \int_\Omega |D(u + v)|.$$ To underline the difference between the two notions, we recall a stability theorem by Miranda: $($[@Mir Theorem 3]$)$ Let $\Omega \subset \mathbb{R}^N$ be open. Suppose $\{ f_n \}$ is a sequence of least gradient functions in $\Omega$ convergent in $L^1_{loc}(\Omega)$ to $f$. Then $f$ is of least gradient in $\Omega$. An identical result for solutions of the least gradient problem is impossible, as the trace operator is not continuous in the $L^1$ topology. We need an additional assumption regarding traces. A correct formulation would be: \[stabilnosc\] Suppose $f, f_n \in L^1(\partial\Omega)$. Let $u_n$ be a solution of LGP for $f_n$, i.e. $Tu_n = f_n$. Let $f_n \rightarrow f$ in $L^1(\partial\Omega)$ and $u_n \rightarrow u$ in $L^1(\Omega)$.
Assume that also $Tu = f$. Then $u$ is a solution of LGP for $f$. To deal with regularity of solutions of LGP, it is convenient to consider the superlevel sets of $u$, i.e. sets of the form $\{ u > t \}$, and their boundaries $\partial \{ u > t \}$ for $t \in \mathbb{R}$. This is justified by the two subsequent results: \[lem:jednoznacznoscnadpoziomic\] Suppose $u_1, u_2 \in L^1(\Omega)$. Then $u_1 = u_2$ a.e. iff for every $t \in \mathbb{R}$ the superlevel sets of $u_1$ and $u_2$ are equal, i.e. $\{ u_1 > t \} = \{ u_2 > t \}$ up to a set of measure zero. \[twierdzeniezbgg\] $($[@BGG Theorem 1]$)$\ Suppose $\Omega \subset \mathbb{R}^N$ is open. Let $f$ be a function of least gradient in $\Omega$. Then the set $\partial \{ f > t \}$ is minimal in $\Omega$, i.e. $\chi_{\{ f > t \}}$ is of least gradient for every $t \in \mathbb{R}$. It follows from [@Giu Chapter 10] that in low dimensions $(N \leq 7)$ the boundary $\partial E$ of a minimal set $E$ is an analytical hypersurface $($after modification of $E$ on a set of measure zero$)$. Thus, as we modify each superlevel set of $u$ by a set of measure zero, from Lemma \[lem:jednoznacznoscnadpoziomic\] we deduce that the class of $u$ in $L^1(\Omega)$ does not change. After a change of representative we get that the boundary of each superlevel set of $u$ is a sum of analytical minimal surfaces; thus, we may from now on assume that we deal with such a representative. Also, several proofs are significantly simplified if we remember that in dimension two there is only one minimal surface: an interval. Sternberg-Williams-Ziemer construction -------------------------------------- In [@SWZ] the authors have shown existence and uniqueness of solutions of LGP for continuous boundary data and strictly convex $\Omega$ (or, to be more precise, the authors assume that $\partial \Omega$ has non-negative mean curvature and is not locally area-minimizing). The proof of existence is constructive and we shall briefly recall it. The main idea is reversing Theorem \[twierdzeniezbgg\] and constructing almost all level sets of the solution. According to Lemma \[lem:jednoznacznoscnadpoziomic\] this uniquely determines the solution. We fix the boundary data $g \in C(\partial \Omega)$. By the Tietze theorem it has an extension $G \in C(\mathbb{R}^n \backslash \Omega)$. We may also demand that $G \in BV(\mathbb{R}^n \backslash \overline{\Omega})$. Let $L_t = (\mathbb{R}^n \backslash \Omega) \cap \{ G \geq t \}$. Since $G \in BV(\mathbb{R}^n \backslash \overline{\Omega})$, for a.e. $t \in \mathbb{R}$ we have $P(L_t, \mathbb{R}^n \backslash \overline{\Omega}) < \infty$. Let $E_t$ be a set solving the following problems: $$\label{sternbergminimalnadlugosc} \min \{ P(E, \mathbb{R}^n): E \backslash \overline{\Omega} = L_t \backslash \overline{\Omega} \},$$ $$\max \{ |E|: E \text{ is a minimizer of \eqref{sternbergminimalnadlugosc}} \}.$$ Let us note that both of these problems have solutions; let $m \geq 0$ be the infimum in the first problem. Let $E_n$ be a sequence of sets such that $P(E_n, \Omega) \rightarrow m$. By compactness of the unit ball in $BV(\Omega)$ and lower semicontinuity of the total variation we obtain $\chi_{E_{n_k}} \rightarrow \chi_E$, where $$m \leq P(E, \Omega) \leq P(E_n, \Omega) \rightarrow m.$$ Let $M \leq |\Omega|$ be the supremum in the second problem. Take a sequence of sets $E_n$ such that $|E_n| \rightarrow M$.
Then on some subsequence $\chi_{E_{n_k}} \rightarrow \chi_E$, and thus $$M \geq |E| \geq |E_n| - |E_n \triangle E| = |E_n| - \|\chi_{E_n} - \chi_E \|_1 \rightarrow M - 0.$$ Then we can show existence of a set $T$ of full measure such that for every $t \in T$ we have $\partial E_t \cap \partial \Omega \subset g^{-1}(t)$ and for every $t,s \in T$, $s < t$ the inclusion $E_t \subset \subset E_s$ holds. This enables us to treat $E_t$ as superlevel sets of a certain function; we define it by the following formula: $$u(x) = \sup \{t \in T: x \in \overline{E_t \cap \Omega} \}.$$ It turns out that $u \in C(\overline{\Omega}) \cap BV(\Omega)$ and $u$ is a solution to LGP for $g$. Moreover $| \{ u \geq t \} \triangle (\overline{E_t \cap \Omega})| = 0$ for a.e. $t$. The uniqueness proof is based on a maximum principle. In the existence proof in chapter $4$ we are going to use a particularly simple case of the construction. Suppose $\Omega \subset \mathbb{R}^2$ and that $f \in C^1(\partial\Omega)$. Firstly, let us notice that we only have to construct the set $E_t$ for almost all $t$. Secondly, we recall that in dimension $2$ the only minimal surfaces are intervals; thus, to find the set $E_t$, let us fix $t$ and look at the preimage $f^{-1}(t)$. We connect its points with intervals with the sum of their lengths as small as possible. It can cause problems, for example if we take $t$ to be a global maximum of the function; thus, let us take $t$ to be a regular value (by Sard's theorem almost all values are regular), so the preimage $f^{-1}(t)$ is a manifold. In dimension $2$ this means that the preimage contains finitely many points, because $f$ is Lipschitz and $\partial\Omega$ is compact. As the derivative at every point $p \in f^{-1}(t)$ is nonzero, there is at least one interval in $\partial E_t$ ending at $p$. As is established later in Proposition \[slabazasadamaksimum2\], by minimality of $\partial E_t$ there can be at most one, so there is exactly one interval in $\partial E_t$ ending at every $p \in f^{-1}(t)$. A typical example for the construction, attributed to John Brothers, is to let $\Omega = B(0,1)$ and take the boundary data to be (in polar coordinates, for fixed $r = 1$) the function $f: [0, 2\pi) \rightarrow \mathbb{R}$ given by the formula $f(\theta) = \cos(2 \theta)$; see [@MRL Example 2.7] or [@SZ Example 3.6]. BV on a one-dimensional compact manifold ---------------------------------------- In the general case one may attempt to define BV spaces on compact manifolds using a partition of unity; such an approach is presented in [@AGM]. It is not necessary for us; it suffices to consider the one-dimensional case. Let us consider $\Omega \subset \mathbb{R}^2$ open, bounded with $C^1$ boundary. We may define on $\partial\Omega$ the Hausdorff measure and integrable functions $($which are approximately continuous a.e.$)$. We recall (see [@EG Chapter 5.10]) that the one-dimensional $BV$ space on the interval $(a,b) \subset \mathbb{R}$ may be described in the following way: $$f \in BV((a,b)) \Leftrightarrow \sum |f(x_{i})-f(x_{i-1})| \leq M < \infty$$ for every $a < x_0 < ... < x_n < b$, where $x_i$ are points of approximate continuity of $f$. The smallest such constant $M$ turns out to be the usual total variation of $f$. We may extend this definition to the case where we have a one-dimensional manifold diffeomorphic to an open interval if it is properly parametrized, i.e. all tangent vectors have length one. Repeating the proof from [@EG] we get that this definition coincides with the divergence definition.
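Only as a numerical illustration of the one-dimensional definition just recalled (and anticipating the extension to the closed curve $\partial\Omega$ below), the total variation of a boundary datum can be approximated by the partial sums $\sum |f(x_i) - f(x_{i-1})|$ over a fine partition; for the Brothers datum $f(\theta) = \cos(2\theta)$ this gives a value close to $8$. The sketch below is such an approximation and is not used anywhere in the proofs.

``` python
import numpy as np

def total_variation_on_circle(f, n=200000):
    """Approximate the total variation of f over the parametrized circle by the
    sum of |f(x_i) - f(x_{i-1})| over a fine partition, closing the partition
    around theta = 2*pi so that the whole closed curve is covered."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    values = f(theta)
    return float(np.sum(np.abs(np.diff(np.append(values, values[0])))))

# Brothers example: cos(2*theta) is piecewise monotone with four branches of
# variation 2 each, so its total variation on the circle equals 8.
print(total_variation_on_circle(lambda t: np.cos(2.0 * t)))  # approximately 8.0
```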
Then we extend it to the case of a one-dimensional compact connected manifold in the following way: We say that $f \in BV(\partial\Omega)$, if after removing from $\partial\Omega$ a point $p$ of approximate continuity of $f$ we have $f \in BV(\partial\Omega \backslash \{ p \})$. The norm is defined to be $$\| f \|_{BV(\partial\Omega)} = \| f \|_1 + \| f \|_{BV(\partial\Omega \backslash \{ p \})}.$$ This definition does not depend on the choice of $p$, as in dimension one the total variation on disjoint intervals is additive, thus for different points $p_1, p_2$ we get that $$\| f \|_{BV(\partial\Omega \backslash \{ p_1 \})} = \| f \|_{BV((p_1,p_2))} + \| f \|_{BV((p_2,p_1))} = \| f \|_{BV(\partial\Omega \backslash \{ p_2 \})},$$ where $(p_1,p_2)$ is an oriented arc from $p_1$ to $p_2$. Thus all local properties of $BV(\partial\Omega)$ hold; we shall recall the most important one for our considerations: \[stw:bvdim1\] Let $E \subset \partial\Omega$ be a set of finite perimeter, i.e. $\chi_E \in BV(\partial\Omega)$. If we take its representative to be the set of points of density $1$, then $\partial E = \partial^{*} E = \{ p_1, ..., p_{2n} \}$ and $P(E, \partial\Omega) = 2n$. Here $\partial^{*} E$ denotes the reduced boundary of $E$, i.e. the set where a measure-theoretical normal vector exists; see [@EG Chapter 5.7]. However, some global properties need not hold. For example, the decomposition theorem $f = f_{ac} + f_j + f_s$ does not hold; consider $\Omega = B(0,1)$, $f = \arg (z)$. The main reason is that $\pi_1(\partial\Omega) \neq 0$. Regularity of least gradient functions ====================================== In this section we are going to prove several regularity results about functions of least gradient, valid up to dimension $7$. We start with a weak form of the maximum principle and later prove a result on decomposition of a least gradient function into a continuous and a jump-type part; this decomposition holds not only at the level of derivatives, but also at the level of functions. We will extensively use the blow-up theorem, see [@EG Section 5.7.2]. \[tw:blowup\] For each $x \in \partial^{*} E$ define the set $E^r = \{ y \in \mathbb{R}^N: r(y-x) + x \in E \}$ and the hyperplane $H^{-}(x) = \{ y \in \mathbb{R}^N: \nu_E (x) \cdotp (y - x) \leq 0 \}$. Then $$\chi_{E^r} \rightarrow \chi_{H^{-}(x)}$$ in $L^1_{loc}(\mathbb{R}^N)$ as $r \rightarrow 0$. It turns out that on the plane Theorem \[twierdzeniezbgg\] may be improved to an analogue of the maximum principle for linear equations; geometrically speaking, the linear weak maximum principle states that every level set touches the boundary. \[slabazasadamaksimum\] $($weak maximum principle on the plane$)$\ Let $\Omega \subset \mathbb{R}^2$ be an open bounded set with Lipschitz boundary and suppose $u \in BV(\Omega)$ is a function of least gradient. Then for every $t \in \mathbb{R}$ the set $\partial \{ u > t \}$ is empty or it is a sum of intervals, pairwise disjoint in $\Omega$, such that every interval connects two points of $\partial \Omega$. By the argument from [@Giu Chapter 10] for every $t \in \mathbb{R}$ the set $\partial \{ u > t \}$ is a sum of intervals and $\partial \{ u > t \} = \partial^* \{ u > t \}$. Obviously $\partial \{ u > t \}$ is closed in $\Omega$. Suppose one of those intervals ends at $x \in \Omega$. Then the normal vector at $x$ is not well defined (the statement of Theorem \[tw:blowup\] does not hold), so $x \notin \partial^* \{ u > t \}$. Thus $x \notin \partial \{ u > t \}$, contradiction.
Similarly suppose two such intervals intersect at $x \in \Omega$. Then the measure-theoretic normal vector at $x$ has length smaller than $1$, depending on the angle between the two intervals. Thus $x \notin \partial^* \{ u > t \}$, contradiction. If we additionally assume that $\Omega$ is convex, then the union is disjoint also on $\partial\Omega$: \[slabazasadamaksimum2\] Let $\Omega \subset \mathbb{R}^2$ be an open, bounded, convex set with Lipschitz boundary and suppose $u \in BV(\Omega)$ is a function of least gradient. Then for every $t \in \mathbb{R}$ the set $\partial \{ u > t \}$ is empty or it is a sum of intervals, pairwise disjoint in $\overline{\Omega}$, such that every interval connects two points of $\partial \Omega$. Suppose that at least two intervals in $\partial E_t$ end at $x \in \partial\Omega$: $\overline{xy}$ and $\overline{xz}$. We have two possibilities: there are countably many intervals in $\partial E_t$ which end at $x$, with the other end lying in the arc $\overline{yz} \subset \partial\Omega$ which does not contain $x$; or there are finitely many. The first case is excluded by the monotonicity formula for minimal surfaces, see for example [@Sim Theorem 17.6, Remark 37.9], as from Theorem \[twierdzeniezbgg\] $E_t$ is a minimal set and only finitely many connected components of the boundary of a minimal set may intersect any compact subset of $\Omega$. In the second case we may without loss of generality assume that $\overline{xy}$ and $\overline{xz}$ are adjacent. Consider the function $\chi_{E_t}$. In the area enclosed by the intervals $\overline{xy}, \overline{xz}$ and the arc $\overline{yz} \subset \partial\Omega$ not containing $x$ we have $\chi_{E_t} = 1$ and $\chi_{E_t} = 0$ on the two sides of the triangle (or the opposite situation, which we handle similarly). Then $\chi_{E_t}$ is not a function of least gradient: the function $\widetilde{\chi_{E_t}} = \chi_{E_t} - \chi_{\Delta xyz}$ has strictly smaller total variation due to the triangle inequality. This contradicts Theorem \[twierdzeniezbgg\]. The result above is sharp. As the following example shows, we may not relax the assumption of convexity of $\Omega$. Denote by $\varphi$ the angular coordinate in the polar coordinates on the plane. Let $\Omega = B(0,1) \backslash (\{ \frac{\pi}{4} \leq \varphi \leq \frac{3\pi}{4} \} \cup \{ 0 \}) \subset \mathbb{R}^2$, i.e. the unit ball with one quarter removed. Take the boundary data $f \in L^1(\partial \Omega)$ to be $$f(x,y) = \begin{cases} 1 & \text{if } y \geq 0, \\ 0 & \text{if } y < 0. \end{cases}$$ Then the solution to the least gradient problem is the function (defined inside $\Omega$) $$u(x,y) = \begin{cases} 1 & \text{if } y \geq 0, \\ 0 & \text{if } y < 0, \end{cases}$$ in particular $\partial \{ u \geq 1 \}$ consists of two horizontal line segments whose closures intersect at the point $(0,0) \in \partial\Omega$. Note that in this example the set $\Omega$ is star-shaped, but it is not convex. In higher dimensions, we are going to need a result from [@SWZ] concerning minimal surfaces: \[stw:sternbergpowmin\] $($[@SWZ Theorem 2.2]$)$\ Suppose $E_1 \subset E_2$ and let $\partial E_1, \partial E_2$ be area-minimizing in an open set $U$. Further, suppose $x \in \partial E_1 \cap \partial E_2 \cap U$. Then $\partial E_1$ and $\partial E_2$ agree in some neighbourhood of $x$. $($weak maximum principle$)$\ Let $\Omega \subset \mathbb{R}^N$, where $N \leq 7$, and suppose $u \in BV(\Omega)$ is a function of least gradient.
Then for every $t \in \mathbb{R}$ the set $\partial \{ u > t \}$ is empty or it is a sum of minimal surfaces $S_{t,i}$, pairwise disjoint in $\Omega$, which satisfy $\partial S_{t,i} \subset \partial \Omega$. Let us notice that with only subtle changes the previous proof works also in the case $N \leq 7$, i.e. when boundaries of superlevel sets are minimal surfaces. From [@Giu Chapter 10] it follows that for $t \in \mathbb{R}$ the set $\partial \{ u > t \}$ is a sum of minimal surfaces $S_{t,i}$ and $\partial \{ u > t \} = \partial^* \{ u > t \}$. Obviously $\partial \{ u > t \}$ $($boundary in topology of $\Omega)$ is closed in $\Omega$, so $\partial S_{t,i} \cap \Omega = \emptyset$ $($boundary in topology of $\partial \{ u > t \})$; suppose otherwise. Let $x \in \partial S_{t,i} \cap \Omega$. Then at $x$ the blow-up theorem does not hold, so $x \notin \partial \{ u > t \}$, contradiction. Now suppose that $S_{t,i}$ and $S_{t,j}$ are not disjoint in $\Omega$. Then from Proposition \[stw:sternbergpowmin\] applied to $E_1 = E_2 = \{ u > t \}$ we get $S_{t,i} = S_{t,j}$. Let $E_1 \subset E_2$ and suppose that $E_1$ and $E_2$ are sets of locally bounded perimeter and let $x \in \partial^{*} E_1 \cap \partial^{*} E_2$. Then $\nu_{E_1}(x) = \nu_{E_2}(x)$. We are going to use the blow-up theorem (Theorem \[tw:blowup\]). First notice that the inclusion $E_1 \subset E_2$ implies $$E_1^r = \{ y \in \mathbb{R}^N: r(y-x) + x \in E_1 \} \subset \{ y \in \mathbb{R}^N: r(y-x) + x \in E_2 \} = E_2^r.$$ We keep the same notation as in Theorem \[tw:blowup\] and use it to obtain $$\chi_{H_1^-(x)} \leftarrow \chi_{E_1^r} \leq \chi_{E_2^r} \rightarrow \chi_{H_2^-(x)},$$ where the convergence holds in the $L^1_{loc}$ topology. Thus $H_1^-(x) = H_2^-(x)$, so $\nu_{E_1}(x) = \nu_{E_2}(x)$. \[stw:zbiorskokow\] For $u \in BV(\Omega)$ the structure of its jump set is as follows: $$J_u = \bigcup_{s,t \in \mathbb{Q}; s \neq t} (\partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \}).$$ Let $x \in J_u$. By definition of $J_u$ the normal vector at $x$ is well defined. The same applies to the trace values from both sides: let us denote them by $u^{-} (x) < u^{+} (x)$. But then there exist $s,t \in \mathbb{Q}$ such that $u^{-} (x) < s < t < u^{+} (x)$, so $x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \}$. On the other hand, let $x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \}$. From the previous proposition the normal vectors coincide, so the normal at $x$ does not depend on $t$ and we may define traces from both sides as $$u^{+}(x) = \sup \{ t: x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \} \};$$ $$u^{-}(x) = \inf \{ t: x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \} \}.$$ More precisely, the trace is uniquely determined up to a measure zero set by the mean integral property from [@EG Theorem 5.3.2]. But it holds for all $x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \}$; from the weak maximum principle this set divides $\Omega$ into two disjoint parts, $\Omega^+$ and $\Omega^-$. Let $\Omega^+$ be the part with greater values of $u$ in the neighbourhood of the cut. If $u^+(x) < \sup \{ t: x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \} \} =: s'$, then for sufficiently small neighbourhoods of $x$ we would have $u \geq s'$, so $\fint_{B(x,r) \cap \Omega^+} |u^+(x) - u(y)| \geq |u^+(x) - s'| > 0$, contradiction. The other cases are analogous. Suppose $u \in BV(\Omega)$ is a least gradient function.
Then $J_u = \bigcup_{k=1}^{\infty} S_k$, where $S_k$ are pairwise disjoint minimal surfaces. In addition, the trace of $u$ from both sides is constant along $S_k$; in particular the jump of $u$ is constant along $S_k$. We follow the characterisation of $J_u$ from Proposition \[stw:zbiorskokow\]. For every $t$ the set $\partial^{*} \{ u > t \}$ is a minimal surface. Proposition \[stw:sternbergpowmin\] ensures that if $\partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \} \neq \emptyset$, then their intersecting connected components $S_{s,i}$, $S_{t,j}$ coincide. In particular, the trace from both sides defined as above is constant along $S_{t,j}$. Thus connected components of $J_u$ coincide with connected components of $\partial^{*} \{ u > t \}$ for some $t$, so by the weak maximum principle they are minimal surfaces non-intersecting in $\Omega$ with boundary in $\partial \Omega$. As the area of each such surface is positive, there are at most countably many of them. \[tw:rozklad\] Let $\Omega \subset \mathbb{R}^N$, where $N \leq 7$, be an open, bounded, strictly convex set with Lipschitz boundary. Suppose $u \in BV(\Omega)$ is a function of least gradient. Then there exist functions $u_c, u_j \in BV(\Omega)$ such that $u = u_c + u_j$ and $(Du)_c = Du_c$ and $(Du)_j = Du_j$, i.e. one can represent $u$ as a sum of a continuous function and a piecewise constant function. They are of least gradient in $\Omega$. Moreover this decomposition is unique up to an additive constant. 1\. From the previous theorem $J_u = \bigcup_{k=1}^{\infty} S_k$, where $S_k$ are pairwise disjoint minimal surfaces with boundary in $\partial\Omega$. The jump along each of them has a constant value $a_k$. They divide $\Omega$ into open, pairwise disjoint sets $U_i$. 2\. We define $u_j$ in the following way: let us call any of the obtained sets $U_0$. Let us draw a graph such that the sets $U_i$ are its vertices. $U_i$ and $U_j$ are connected by an edge iff $\partial U_i \cap \partial U_j = S_k$, i.e. when they have a common part of their boundaries. To such an edge we ascribe a weight $a_k$. An example of such a construction is presented in the picture above. As $S_k$ are disjoint in $\Omega$ and do not touch the boundary, such a graph is a tree, i.e. it is connected and there is exactly one path connecting two given vertices. Thus, we define $u_j$ by the formula $$u_j(x) = \sum_{\text{path connecting } U_0 \text{ with } U_i} a_k, \text{ when } x \in U_i.$$ Such a function is well defined, as our graph has no cycles. It also does not depend on the choice of $U_0$ up to an additive constant (if we chose some $U_1$ instead, the function would change by a summand $\sum_{\text{path connecting } U_0 \text{ with } U_1} a_k)$. We see that $u_j \in L^1(\Omega)$ and that it is piecewise constant.\ 3. We notice that $Du_j = (Du)_j$, as $u_j$ is constant on each $U_i$, $J_{u_j} = J_u$ and the jumps along connected components of $J_u$ have the same magnitude. Thus we define $u_c = u - u_j$. We see that $(Du_c)_j = 0$.\ 4. The $u_c$, $u_j$ defined above are functions of least gradient. Suppose that $u_j$ is not a function of least gradient, i.e. there exists $v \in BV(\Omega)$ such that $\int_\Omega |Dv| < \int_\Omega |Du_j|$ and $Tu_j = Tv$.
Then we would get $$\int_\Omega |Du| \leq \int_\Omega |D(u_c + v)| \leq \int_\Omega |Du_c| + \int_\Omega |Dv| < \int_\Omega |Du_c| + \int_\Omega |Du_j| = \int_\Omega |Du|,$$ where the first inequality follows from $u$ being a function of least gradient, and the last equality from measures $Du_c$ and $Du_j$ being mutually singular. The proof for $u_c$ is analogous. 5\. The function $u_c$ is continuous. As $u_c$ is of least gradient, then if it isn’t continuous at $x \in \Omega$, then a certain set of the form $\partial \{ u_c > t \}$ passes through $x$; otherwise $u_c$ would be constant in the neighbourhood of $x$. But in that case $u_c$ has a jump along the whole connected component of $\partial \{ u_c > t \}$ containing $x$, which is impossible as $(Du_c)_j = 0$. 6\. What is left is to prove uniqueness of such a decomposition. Let $u = u_c^1 + u_j^1 = u_c^2 + u_j^2$. Changing the order of summands we obtain $$u_c^1 - u_c^2 = u_j^2 - u_j^1,$$ but the distributional derivative of the left hand side is a continuous measure, and the distributional derivative of the right hand side is supported on the set of zero measure with respect to $\mathcal{H}^{n-1}$, so both of them are zero measures. But the condition $Dv = 0$ implies $v = \text{const}$, so the functions $u_c^1$, $u_c^2$ differ by an additive constant. In this decomposition $u_c$ isn’t necessarily continuous up to the boundary. Let us use the complex numbers notation for the plane. We take $\Omega = B(1,1)$. Let the boundary values be given by the formula $f(z) = \arg(z)$. Then $u = u_c = \arg(z) \in BV(\Omega) \cap C^{\infty}(\Omega)$, but $u$ isn’t continuous at $0 \in \partial\Omega$. Existence of solutions on the plane =================================== We shall prove existence of solutions on the plane for boundary data in $BV(\partial\Omega)$. We are going to use approximations of the solution in strict topology. Proposition \[podciagzbiezny\] will ensure us that existence of convergent sequences of approximations in $L^1$ topology is not a problem; Theorem \[tw:scislazb\] will upgrade it to strict convergence. The Miranda stability theorem (Theorem \[stabilnosc\]) ends the proof. Later, we shall see an example of a discontinuous function $f$ of infinite total variation such that the solution to the LGP exists. \[podciagzbiezny\] Suppose $f_n \rightarrow f$ in $L^1(\partial\Omega)$. $u_n$ are solutions of LGP for $f_n$. Then $u_n$ has a convergent subsequence, i.e. $u_{n_k} \rightarrow u$ in $L^1(\Omega)$. As the trace operator is a surjection, by the Open Mapping Theorem it is open. Let us fix $\widetilde{f} \in BV(\Omega)$ such that $T\widetilde{f} = f$ and a sequence of positive numbers $\varepsilon_n \rightarrow 0$. Then by continuity and openness of $T$ the image of a ball $B(\widetilde{f}, \varepsilon_n)$ contains a smaller ball $B(T\widetilde{f}, \delta_n)$ for another sequence of positive numbers $\delta_n \rightarrow 0$. As $f_n \rightarrow f$ in $L^1(\partial\Omega)$, there exists a subsequence $f_{n_k}$ such that $f_{n_k} \in B(f, \delta_n) = B(T\widetilde{f}, \delta_n)$, so the set $T^{-1}(f_{n_k})$ is non-empty; there exists a preimage of $f_{n_k}$ by $T$ in $B(\widetilde{f}, \varepsilon_n)$. Let us call it $\widetilde{f_n}$. Obviously $\widetilde{f_n} \rightarrow \widetilde{f}$ in $BV(\Omega)$. Thus, after possibly passing to a subsequence, there exist functions $\widetilde{f_n}, \widetilde{f}$ such that $\widetilde{f_n} \rightarrow \widetilde{f}$ in $BV(\Omega)$ and $T\widetilde{f_n} = f_n, T\widetilde{f} = f$. 
Now we may proceed as in [@HKLS Proposition 3.3]. Let us estimate from above the norm $\|u_n - \widetilde{f_n}\|_{BV}$: $$\begin{gathered} \|u_n - \widetilde{f_n}\|_{BV} = \|u_n - \widetilde{f_n}\|_1 + \int_\Omega |D(u_n - \widetilde{f_n})| \leq (C + 1) \int_\Omega |D(u_n - \widetilde{f_n})| \leq \\ \leq (C + 1) (\int_\Omega |Du_n| + \int_\Omega |D\widetilde{f_n}|) \leq 2(C + 1) \int_\Omega |D\widetilde{f_n}| \leq M < \infty\end{gathered}$$ where the inequalities follow from the Poincaré inequality $($as $u_n - \widetilde{f_n}$ has trace zero$)$, the triangle inequality and the fact that $u_n$ is a solution of LGP for $f_n$. The common bound follows from the convergence of $\widetilde{f_n}$. Thus, by compactness of the unit ball of $BV(\Omega)$ in $L^1(\Omega)$ we get a convergent subsequence $u_{n_k} - \widetilde{f_{n_k}} \rightarrow v$ in $L^1(\Omega)$. But $\widetilde{f_n} \rightarrow \widetilde{f}$ in $BV(\Omega)$, so also in $L^1(\Omega)$; thus $u_{n_k} \rightarrow v + \widetilde{f} = u$ in $L^1(\Omega)$. We are going to need three lemmas. The first two are straightforward and their proofs can be found as a step in the proof of the co-area formula, see [@EG Section 5.5]. The third one is a convenient version of the Fatou lemma. \[lem:zb1\] Let $f_n \rightarrow f$ in $L^1(\Omega)$. Then there exists a subsequence $f_{n_k}$ such that $\chi_{\{ f_{n_k} \geq t \}} \rightarrow \chi_{\{ f \geq t \}}$ in $L^1(\Omega)$ for a.e. $t$. \[lem:zb2\] Suppose $\chi_{\{ f_{n} \geq t \}} \rightarrow \chi_{\{ f \geq t \}}$ in $L^1(\Omega)$ for a.e. $t$. Then $f_n \rightarrow f$ in $L^1_{loc}(\Omega)$. If additionally $f, f_n$ form a bounded family in $L^{\infty}(\Omega)$, then this convergence holds also in $L^1(\Omega)$. \[lem:zb3\] Suppose that $g, g_n \geq 0$. If additionally $g \leq \liminf g_n$ a.e. and $\lim \int_\Omega g_n \, dx = \int_\Omega g \, dx < \infty$, then $g_n \rightarrow g$ in $L^1(\Omega)$. Let $f_+ = \max(f, 0)$ and $f_- = \max(-f, 0)$. Let us note that $$\int_\Omega |g - g_n| = \int_\Omega (g - g_n)_+ + \int_\Omega (g - g_n)_-$$ and $$0 \leftarrow \int_\Omega (g - g_n) = \int_\Omega (g - g_n)_+ - \int_\Omega (g - g_n)_-,$$ so it suffices to prove that $\int_\Omega (g - g_n)_+ \rightarrow 0$ to show that $g_n \rightarrow g$ in $L^1(\Omega)$. Now let us see what happens to the (well defined) upper limit of the sequence $\int_\Omega (g - g_n)_+$: $$0 \leq \limsup \int_\Omega (g - g_n)_+ \leq \int_\Omega \limsup (g - g_n)_+ = \int_\Omega \limsup \max(g - g_n, 0) =$$ $$= \int_\Omega \max(g + \limsup (- g_n), 0) = \int_\Omega \max(g - \liminf g_n, 0) = \int_\Omega 0 = 0,$$ where the inequality follows from the reverse Fatou lemma: by definition $0 \leq (g - g_n)_+ \leq g$, and $g$ is integrable, so we can apply the Fatou lemma. To prove the equalities we use the fact that $\limsup (- g_n) = - \liminf g_n$ and the assumption that $g \leq \liminf g_n$ a.e. Thus $\int_\Omega (g - g_n)_+ \rightarrow 0$, so $g_n \rightarrow g$ in $L^1(\Omega)$. \[tw:scislazb\] Let $\Omega \subset \mathbb{R}^2$ be an open, bounded, strictly convex set with $C^1$ boundary and suppose $f \in BV(\partial\Omega)$. Let $f_n \rightarrow f$ strictly in $BV(\partial\Omega)$, where $f_n$ are smooth. Denote the unique solution of LGP for $f_n$ by $u_n$. Then on some subsequence $u_{n_k}$ we have strict convergence in $BV(\Omega)$ to a function $u \in BV(\Omega)$. In particular $Tu = f$. 1\. As we have $f_n \rightarrow f$ strictly in $BV(\partial\Omega)$, by definition we also have convergence in $L^1(\partial\Omega)$.
Thus, by Lemma \[lem:zb1\], after possibly passing to a subsequence we have convergence $\chi_{\{ f_n \geq t \}} \rightarrow \chi_{\{ f \geq t \}}$ for a.e. $t$. 2\. By the co-area formula $$\int_{\partial\Omega} |Df_n| = \int_{\mathbb{R}} P(E_t^n, \partial\Omega) \, dt \rightarrow \int_{\mathbb{R}} P(E_t, \partial\Omega) \, dt = \int_{\partial \Omega} |Df|,$$ and lower semicontinuity of the total variation gives us $P(E_t, \partial\Omega) \leq \liminf P(E_t^n, \partial\Omega) < \infty$ for a.e. $t$. We observe that the conditions of Lemma \[lem:zb3\] are fulfilled and we obtain convergence $P(E_t^n, \partial\Omega) \rightarrow P(E_t, \partial\Omega)$ in $L^1(\mathbb{R})$, so after possibly passing to a subsequence we have pointwise convergence for a.e. $t$. Consequently $\chi_{\{ f_n \geq t \}} \rightarrow \chi_{\{ f \geq t \}}$ strictly in $BV(\partial\Omega)$. 3\. As $\partial\Omega \in C^1$ and $f_n \in C^1(\partial\Omega)$, by the Sard theorem the set $\mathcal{T}$ of those $t$ which are regular values for all $f_n$ is of full measure. Recalling the Sternberg-Williams-Ziemer construction we get that for every $t \in \mathcal{T}$ every point of $\partial E_t^n \cap \partial\Omega$ is an end of at least one interval; according to Proposition \[slabazasadamaksimum2\] it is an end of exactly one interval. 4\. From now on it is necessary that we are in dimension $N = 2$. Let $t \in \mathcal{T}$. As $\partial\Omega$ is one-dimensional, $P(E_t^n, \partial\Omega) \in \mathbb{N}$ and $D \chi_{\{ f_n \geq t \}}$ is a sum $\sum_{i = 1}^M \pm \delta_{x_i}$. Furthermore, by Proposition \[stw:bvdim1\] there exists a representative of the set $E_t^n$ which is a sum of closed arcs between consecutive points $x_i$. By Lemma \[lem:jednoznacznoscnadpoziomic\] we can change all representatives of the sets $E_t^n$ without changing $f_n$ itself. We do the same for $E_t$. As $f_n$ are smooth functions, such a form of $E_t^n$ follows directly from their smoothness; this need not be the case for $E_t$. 5\. As $\chi_{\{ f_n \geq t \}} \rightarrow \chi_{\{ f \geq t \}}$ strictly, for sufficiently large $n$ we have $P(E_t^n, \partial\Omega) = P(E_t, \partial\Omega)$. What is more, their derivatives converge in the weak\* topology; but we have an exact representation of those derivatives. This gives us convergence $x_i^n \rightarrow x_i$ for every $i$. 6\. We apply the Sternberg-Williams-Ziemer construction to the sequence $f_n$. The set $\partial E_t^n$ is a sum of intervals, disjoint in $\Omega$, connecting certain pairs of points among $x_i^n$. By definition of $\mathcal{T}$ every point of $\partial E_t^n \cap \partial\Omega$ is an end of exactly one interval. Since the endpoints converge, $x_i^n \rightarrow x_i$, the corresponding intervals converge as well; this gives us convergence $\chi_{\{ u_n \geq t \}} \rightarrow \chi_{\{ u \geq t \}}$ in $L^1(\Omega)$ for a.e. $t$. Because of the continuity of the metric in $\mathbb{R}^2$ we get $P(E_t^n, \Omega) = \sum \| x_i^n - x_j^n \| \rightarrow \sum \| x_i - x_j \| = P(E_t, \Omega)$. 7\. Let us see that $P(E_t^n, \Omega) \leq P(\Omega, \mathbb{R}^N)$. Indeed, $\partial E_t^n$ is a sum of intervals, disjoint in $\Omega$, connecting certain pairs of points among $x_i^n$. If we chose a different connection between them, for example by drawing a full convex polygon with vertices in $x_i^n$, by minimality of $\partial E_t^n$ the polygon would have a larger perimeter. If we used arcs on $\partial\Omega$ instead, the perimeter would be even larger, as intervals are minimal surfaces in $\mathbb{R}^2$. 8\. Since the functions $\chi_{\{ u_n \geq t \}}$ converge in $L^1(\Omega)$ for a.e.
$t$ to $\chi_{\{ u \geq t \}}$, by Lemma \[lem:zb2\] we have convergence $u_n \rightarrow u$ in $L^1(\Omega)$. Furthermore, in step $6$ we proved convergence $P(E_t^n, \Omega) \rightarrow P(E_t, \Omega)$ for a.e. $t$, so by the dominated convergence theorem (by step $7$ this sequence is bounded) we have convergence $P(E_t^n, \Omega) \rightarrow P(E_t, \Omega)$ in $L^1(\mathbb{R})$. By the co-area formula $\int_\Omega |Du_n| \rightarrow \int_\Omega |Du|$, which gives that $u_n \rightarrow u$ strictly in $BV(\Omega)$. \[tw:istnienie\] Let $\Omega \subset \mathbb{R}^2$ be an open, bounded, strictly convex set with $C^1$ boundary. Then for every $f \in BV(\partial\Omega)$ there exists a solution of LGP for $f$. For each $f \in BV(\partial\Omega)$ we can find a sequence $f_n$ of class $C^{\infty}(\partial\Omega)$ strictly convergent to $f$. Let $u_n$ be solutions of LGP for $f_n$. Then after possibly passing to a subsequence we have that $u_n \rightarrow u$ strictly in $BV(\Omega)$; but the trace is continuous in the strict topology, so $Tu = f$. Thus by the Miranda stability theorem (Theorem \[stabilnosc\]) we get that $u$ is a solution of LGP for $f$. Take $\Omega = B(0,1)$. As we know from [@ST], when $f$ is a characteristic function of a certain fat Cantor set, the least gradient problem has no solution. Thus, we would expect that if we approximated the boundary function and constructed solutions of LGP for the approximation, then the trace of the limit would be incorrect. To settle this, let $f_n$ be the function of the $n-$th stage of the Cantor set construction. Then $u_n \rightarrow 0$ in $L^1(\Omega)$. Indeed, let $f_0(\theta) = \chi_{[0,1]}$. We construct $f_1$ by removing from the middle of $[0,1]$ an interval of length $2^{-2}$, i.e. $f_1 = \chi_{[0,3/8] \cup [5/8,1]}$. In the second stage we remove from the middle of both intervals an interval of length $2^{-4}$ and obtain $f_2 = \chi_{[0,5/32] \cup [7/32, 3/8] \cup [5/8,25/32] \cup [27/32, 1]}$. During the $n-$th stage of construction we remove an interval of length $2^{-2n}$ from the middle of each of the existing $2^{n-1}$ intervals. Let us compute the length of such intervals. Let $a_n$ be the length of an interval at the $n-$th stage of construction. Then $a_{n} = \frac{a_{n-1}}{2} - \frac{1}{2^{2n+1}}$. As $a_0 = 1$, we obtain the direct formula $a_n = \frac{2^n + 1}{2^{2n+1}}$. Now we take the fat Cantor set to be on the circle, i.e. the interval $[0,1]$ corresponds to angles measured in radians. On the rest of the circle we set the function $f$ to be $0$. Let us compare at every stage of construction the sum of the lengths of the red intervals and the green ones. After trigonometric considerations we have to check the following inequality: $$\label{ineq:cantor} \sqrt{1 - \cos(a_n)} + \sqrt{1 - \cos(\frac{1}{2^{2n}})} > 2 \sqrt{1 - \cos(a_{n+1})}.$$ Substituting $x = 2^{-n}$ and using the direct formula for $a_n$, it changes to the inequality $$g(x) = \sqrt{1 - \cos(\frac{x(x+1)}{2})} + \sqrt{1 - \cos(x^2)} - 2 \sqrt{1 - \cos(\frac{x(x+2)}{8})} > 0.$$ But $g$ satisfies $g(0) = 0$ and its derivative is positive on $(0,1)$, so $g > 0$ on $(0,1)$, and thus the inequality holds for all $n$. Hence, at every stage of construction the sides of the trapezoid are shorter than the bases. This means that the solution of LGP for $f_n$ takes the value $0$ on the trapezoid (as we minimize $P(E_t, B(0,1))$ for $t \in (0,1)$).
In the next stage of construction the value on this trapezoid will remain zero and we will make the same reasoning on two adjacent smaller trapezoids. From the construction of Cantor set the sequence $u_n$ is nonincreasing and for every point $x$ inside the circle at a sufficiently large stage of construction we would have $u_n(x) = 0$. Thus $u_n \rightarrow 0$ a.e.; but it is bounded from above by $1$, so the convergence holds also in $L^1(\Omega)$. \[ex:cantor\] Let us make a slight change to the previous example: consider another fat Cantor set. More precisely, take a set almost of full measure such that the inequality holds in the opposite direction; it is possible due to the triangle inequality. Thus at every stage of construction it is more efficient (minimizing lengths of level sets) to remove $2^{n-1}$ curvilinear triangles from the set $\{ u_n = 1 \}$ than to repeat the above construction, i.e. add trapezoids to the set $\{ u_n = 0 \}$. Thus at every stage of construction the set $\{ u_n = 1 \}$ will be a sum of trapezoids mentioned before, so the trace of $u$ equals $f$. Also $u_n \rightarrow u$ in $L^1(\Omega)$, as it converges a.e. Thus we obtained that there exists a solution to LGP for a certain discontinuous $f \notin BV(\partial\Omega)$. Anisotropic case ================ This section is devoted to the anisotropic least gradient problem. We discuss $l^p$ norms on the plane for $p \in [1, \infty]$. We prove a non-uniqueness result for $p = 1, \infty$ and discuss how the solutions look like for $p \in (1, \infty)$. We shall use the notation introduced in [@Maz]. A continuous function $\phi: \overline{\Omega} \times \mathbb{R}^n \rightarrow [0, \infty)$ is called a metric integrand, if it satisfies the following conditions:\ \ $(1)$ $\phi$ is convex with respect to the second variable for a.e. $x \in \overline{\Omega}$;\ $(2)$ $\phi$ is homogeneous with respect to the second variable, i.e. $$\forall x \in \overline{\Omega}, \quad \forall \xi \in \mathbb{R}^n, \quad \forall t \in \mathbb{R} \quad \phi(x, t \xi) = |t| \phi(x, \xi);$$ $(3)$ $\phi$ is bounded in $\Omega$, i.e. $$\exists \Gamma > 0 \quad \forall x \in \overline{\Omega}, \quad \forall \xi \in \mathbb{R}^n \quad 0 \leq \phi(x, \xi) \leq \Gamma |\xi|.$$ \ $(4)$ $\phi$ is elliptic in $\Omega$, i.e. $$\exists \lambda > 0 \quad \forall x \in \overline{\Omega}, \quad \forall \xi \in \mathbb{R}^n \quad \lambda |\xi| \leq \phi(x, \xi).$$ These conditions are sufficient for most of the cases considered in scientific work: they are satisfied for the classical LGP, i.e. $(\phi(x, \xi) = |\xi|)$, as well as for the $l_p$ norms, $p \in [1, \infty]$ and for weighted LGP considered in [@JMN]: a function $\phi(x, \xi) = g(x) |\xi|$, where $g \geq c > 0$. The polar function of $\phi$ is $\phi^0: \overline{\Omega} \times \mathbb{R}^N \rightarrow [0, \infty)$ defined as $$\phi^0 (x, \xi^*) = \sup \, \{ \langle \xi^*, \xi \rangle : \xi \in \mathbb{R}^N, \phi(x, \xi) \leq 1 \}.$$ Let $$\mathcal{K}_\phi(\Omega) = \{ \mathbf{z} \in X_\infty(\Omega) : \phi^0(x,\mathbf{z}(x)) \leq 1 \text{ for a.e. } x \in \Omega, \quad [\mathbf{z}, \nu] = 0 \}.$$ For a given function $u \in L^1(\Omega)$ we define its $\phi-$total variation in $\Omega$ by the formula (another notation used in the literature is $\int_\Omega \phi(x, Du)$): $$\int_\Omega |Du|_\phi = \sup \, \{ \int_\Omega u \, \mathrm{div} \, \mathbf{z} \, dx : \mathbf{z} \in \mathcal{K}_\phi(\Omega) \}.$$ If $\int_\Omega |Du|_\phi < \infty$, we say that $u \in BV_\phi(\Omega)$. 
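The polar function is easiest to grasp for the norms used later in this section. As a quick numerical illustration (this check is ours and is not part of the argument), for $\phi(x, \xi) = \| \xi \|_p$ the definition above gives $\phi^0(x, \cdot) = \| \cdot \|_q$ with $1/p + 1/q = 1$; the sketch below approximates the supremum in the definition of $\phi^0$ by sampling the $\phi$-unit sphere and compares the result with the dual norm.

```python
import numpy as np

# Illustrative check (not from the paper): for phi(xi) = ||xi||_p the polar
# function phi^0 should coincide with the dual norm ||.||_q, 1/p + 1/q = 1.
def lp_norm(xi, p):
    if np.isinf(p):
        return np.max(np.abs(xi), axis=-1)
    return np.sum(np.abs(xi) ** p, axis=-1) ** (1.0 / p)

def polar(xi_star, p, n_dirs=20000):
    # Sample directions, rescale them onto the phi-unit sphere and maximize the
    # inner product with xi_star, as in the definition of phi^0.
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=-1)
    dirs = dirs / lp_norm(dirs, p)[:, None]          # now phi(dirs) = 1
    return np.max(dirs @ xi_star)

rng = np.random.default_rng(0)
for p in [1.0, 1.5, 2.0, 3.0]:
    q = np.inf if p == 1.0 else p / (p - 1.0)
    xi_star = rng.normal(size=2)
    print(p, polar(xi_star, p), lp_norm(xi_star, q))  # the two values agree
```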
If $\phi$ is a metric integrand, by properties $(3)$ and $(4)$ we have that $\lambda \int_\Omega |Du| \leq \int_\Omega |Du|_\phi \leq \Gamma \int_\Omega |Du|$, so $BV_\phi(\Omega) = BV(\Omega)$. We also know ([@AB Chapter 3]) that when $\phi$ is continuous and elliptic in $\Omega$, then in the definition of $\mathcal{K}_\phi(\Omega)$ we can replace the condition $[\mathbf{z}, \nu] = 0$ with the demand that $\mathbf{z} \in C_c^1(\Omega)$, so we recover the classical definition. \[lem:scisaniz\] When $\phi$ is continuous and elliptic in $\Omega$, then similarly to the classical case ([@AB Chapter 4]) we recover lower semicontinuity of the $\phi-$total variation, the notion of the $\phi-$perimeter of a set and the co-area formula. We also recover the approximation by $C^\infty$ functions in the strict topology, even in the strong form proved by Giusti in [@Giu Corollaries 1.17, 2.10]: let $v \in BV_\phi(\Omega)$, $Tv = f$. Then there exists a sequence of $C^\infty$ functions $v_n$ such that $v_n \rightarrow v$ strictly in $BV_\phi(\Omega)$ and $Tv_n = f$. For an explicit use we shall need the following integral representation ([@AB], [@JMN]): \[stw:repcalkowa\] Let $\phi: \overline{\Omega} \times \mathbb{R}^N \rightarrow [0, \infty)$ be a metric integrand. Then we have an integral representation: $$\int_\Omega |Du|_\phi = \int_\Omega \phi(x, \nu^u(x)) \, |Du|,$$ where $\nu^u$ is the Radon-Nikodym derivative $\nu^u = \frac{d Du}{d |Du|}$. In particular, if $E \subset \Omega$ and $\partial E$ is sufficiently smooth (at least $C^1$), then we have a representation $$P_\phi(E, \Omega) = \int_\Omega \phi(x, \nu_E) \, d \mathcal{H}^{n-1},$$ where $\nu_E$ is the external normal to $E$. For $p \in [1, \infty)$ we define the $l^p$ norm of a vector on the plane by the formula $\| (x, y) \|_p = (|x|^p + |y|^p)^{1/p}$. For $p = \infty$ it is defined as $\| (x, y) \|_\infty = \sup(|x|, |y|)$. Let us note that $\| \cdotp \|_1 \geq \| \cdotp \|_2 \geq \| \cdotp \|_\infty$ and that the case $p = 2$ is isotropic. We aim to prove that for nonsmooth anisotropy the solutions need not be unique (and in general are not unique); to achieve this goal, we will study what minimal surfaces with respect to the $l^p$ norm look like. First, let us see an example in which the solution is unique: \[stw:uniqueness\] Let $\Omega \subset \mathbb{R}^2$ be an open, bounded, strictly convex set. Take $\phi(x, Du) = \| Du \|_1$. Let $f \in C(\partial \Omega)$. Denote by $u$ the solution to the isotropic LGP for $f$. Then, if the boundaries of superlevel sets of $u$ are parallel to the axes of the coordinate system, $u$ is the unique solution of the anisotropic LGP with respect to the $l^1$ norm. Let $v \in BV(\Omega)$, $Tv = f$. Then $$\int_\Omega |Dv|_1 \geq \int_\Omega |Dv|_2 \geq \int_\Omega |Du|_2.$$ By uniqueness of the solution to the Euclidean LGP the second inequality is strict whenever $u \neq v$. As the boundaries of superlevel sets of $u$ are parallel to the axes of the coordinate system, we have $\int_\Omega |Du|_1 = \int_\Omega |Du|_2$; it follows that $u$ is the unique solution to the anisotropic LGP. \[ex:uniqueness\] Let $\Omega = B(0,1)$. Take $\phi(x, Du) = \| Du \|_1$. Let $f(\theta) = \cos(2 \theta)$. We construct the isotropic solution $u$ using the Sternberg-Williams-Ziemer construction. We notice, as the figure below shows, that the boundaries of superlevel sets of $u$ are parallel to the axes of the coordinate system. By Proposition \[stw:uniqueness\] the solution to the anisotropic LGP is unique.
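To make the mechanism behind Proposition \[stw:uniqueness\] and Example \[ex:uniqueness\] concrete, the following small computation (ours, purely illustrative) uses the integral representation of Proposition \[stw:repcalkowa\] for a single straight chord: for an axis-parallel chord the $l^1$ and Euclidean perimeters coincide, while for a tilted chord the $l^1$ perimeter is strictly larger.

```python
import numpy as np

# For a straight chord with endpoints A, B the integral representation gives
# P_phi = length * phi(nu), with nu the unit normal of the chord.
def chord_perimeters(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    tangent = B - A
    length = np.hypot(*tangent)
    nu = np.array([tangent[1], -tangent[0]]) / length    # unit normal
    P_euclid = length * np.hypot(*nu)                     # l^2 anisotropy
    P_l1 = length * (abs(nu[0]) + abs(nu[1]))             # l^1 anisotropy
    return P_euclid, P_l1

# Axis-parallel chord: the two perimeters coincide.
print(chord_perimeters((-0.5, 0.3), (0.5, 0.3)))   # (1.0, 1.0)
# Tilted chord: the l^1 perimeter is strictly larger than the Euclidean one.
print(chord_perimeters((0.0, 0.0), (1.0, 1.0)))    # (1.414..., 2.0)
```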
\[stw:niejednoznacznosc\] Let $\Omega \subset \mathbb{R}^2$ be an open, bounded, strictly convex set. Take $\phi(x, Du) = \| Du \|_1$. Let $f \in C(\partial \Omega)$. Denote by $u$ the solution to isotropic LGP for $f$. Then, if for some $t$ the boundaries of superlevel sets of $u$ are not parallel to the axes of the coordinate system, then the solution to the anisotropic LGP with respect to the $l^1$ norm is not unique. 1\. Take $v \in C^1(\Omega)$ with trace $f$. Then the co-area formula reads $$\int_\Omega |Dv|_1 = \int_{\mathbb{R}} P_1 (E_t, \Omega) dt,$$ in particular $v$ is a solution to anisotropic LGP iff $P_1(E_t, \Omega)$ is minimal for a.e. $t$. As $v$ is smooth, $v |_{\partial\Omega} = f$, then by Sard theorem for a.e. $t$ the set $\{ v = t \}$ is a smooth manifold; as such, it is an at most countable sum of smooth curves disjoint in $\Omega$. 2\. We want to find the lower bound for $\int_\Omega |Dv|_1$. We shall find it for a larger class of functions: continuous functions, for which the sets $\{ v = t \}$ are at most countable sums of smooth curves disjoint in $\Omega$. We have to extend our class of functions, as we need to be able to eliminate closed curves from the disjoint sum: if there were any closed curves, then by setting $v = t$ in the open set enclosed by such curves we obtain a function with strictly smaller total variation, but not necessarily smooth. Thus we may assume that $\partial \{ v \geq t \}$ is a disjoint sum of open curves. Let us note that they must end in points $p \in f^{-1}(t) \subset \partial\Omega$. 3\. According to the co-area formula, it is sufficient to construct superlevel sets of $v$ such that $P_1(E_t, \Omega)$ is minimal; then $\int_\Omega |Dv|_1$ would be minimal as well. Let us suppose additionally that $\partial E_t$ does not contain any vertical intervals, i.e. we may represent a level set from the point $(x,y)$ to $(z,t)$ as a graph of a $C^1$ function $g$. Let us note that at the point $((s, g(s)))$ the Radon-Nikodym derivative $\nu^{\chi_{E_t}}$ is perpendicular to the level set, so it is a vector $(- \sin \theta, \cos \theta)$, where $g'(s) = \tan \theta$. Thus $\phi(x, \nu^{\chi_{E_t}}) = |\sin \theta| + |\cos \theta|$. As $|D \chi_{E_t}| = \mathcal{H}^{n-1}|_{\partial E_t}$, then, using the representation introduced by Proposition \[stw:repcalkowa\], we have to minimize the integral (we may assume that $x < z$): $$P(E_t, \Omega) = \int_\Omega \phi(x, \nu^{\chi_{E_t}}) |D \chi_{E_t}| = \int_{\partial E_t} (|\sin \theta| + |\cos \theta|) d\mathcal{H}^{n-1} =$$ $$= \int_{x}^{z} (|\sin \theta| + |\cos \theta|) \sqrt{1 + (\tan \theta)^2} dx = \int_{x}^{z} (|\sin \theta| + |\cos \theta|) \frac{1}{|\cos \theta|} dx =$$ $$= \int_{x}^{z} (1 + |\tan \theta|) dx = |z - x| + \int_{x}^{z} |g'| dx \geq |z - x| + |t - y|,$$ where the inequality becomes equality iff $g$ is monotone (remember we assumed it to be $C^1$). Thus there are multiple functions minimizing this integral. 4\. Now we allow $\partial E_t$ to contain vertical intervals. The difference is purely technical, as we have to divide our integral into two parts. 
Let us suppose that the (oriented) length of the $i-$th vertical interval equals $\lambda_i$; then we have $$\int_{\text{graph part of } \partial E_t} (|\sin \theta| + |\cos \theta|) dl + \int_{\text{vertical part of } \partial E_t} (1 + 0) dl = \int_{x}^{z} (1 + |g'|) dx + \sum_{i = 1}^{\infty} |\lambda_i| =$$ $$= \int_{x}^{z} |g'| dx + |z - x| + \sum_{i = 1}^{\infty} |\lambda_i| \geq |t - y - \sum_{i = 1}^{\infty} \lambda_i| + |z - x| + \sum_{i = 1}^{\infty} |\lambda_i| \geq |z - x| + |t - y|,$$ where the inequality becomes an equality iff $g$ is monotone (remember we assumed it to be $C^1$) and all the vertical intervals are oriented in the same direction as $g'$. Thus there are multiple functions minimizing this integral. We have proved that in a class containing all smooth functions the problem of minimizing the perimeter of a set $E_t$ does not have a unique solution. 5\. Let us denote by $u$ the solution to the Euclidean LGP. Let us notice that intervals are graphs of monotone functions, so an interval minimizes the above integral; thus, by the co-area formula, the value of $\int_\Omega |Dv|_1$ is bounded from below by $$\int_\Omega |Dv|_1 = \int_{\mathbb{R}} P_1(E_t, \Omega) \, dt \geq \int_{\mathbb{R}} P_1(\{ u > t \}, \Omega) \, dt,$$ so by Remark \[lem:scisaniz\] this inequality holds for all $v \in BV_1(\Omega)$ such that $Tv = f$. In particular, the Euclidean solution is also a solution to the anisotropic LGP. But if we choose $v$ such that its level sets $\{ v = t \}$ are monotone for almost all $t$, then its total variation is exactly the same (this is possible due to the non-parallelism assumption). Thus the solution to this anisotropic LGP is not unique. \[ex:l1\] Let $\Omega = B(0,1)$. Take $\phi(x, Du) = \| Du \|_1$. Let $f \in C^{\infty}(\partial\Omega)$ be given as $f = \cos(2 \theta - \pi/2)$. Then the solution to the anisotropic LGP is not unique. First, let us see that the Euclidean solution is a rotation of the function $u$ from Example \[ex:uniqueness\], so we may apply the procedure from Proposition \[stw:niejednoznacznosc\]. We observe that for fixed $t \in (0,1)$ its preimage contains points of the form $A_1 = (a,b)$, $A_2 = (b, a)$, $A_3 = (-a, -b)$, $A_4 = (-b, -a)$; then, applying the above calculation to the function $f$, we see that the two possible connections, $A_1 A_2, A_3 A_4$ and $A_1 A_4, A_2 A_3$, have anisotropic perimeters $4 |a - b|$ and $4 |a + b|$ respectively; we choose the former as the level set $E_t$. A similar calculation holds for $t \in (-1,0)$. But if we choose $v$ such that its level sets $\{ v = t \}$ are monotone for almost all $t$, then their perimeter (and, by the co-area formula, its total variation) stays exactly the same. Thus the solution to this anisotropic LGP is not unique; an example of a non-Euclidean solution is presented in the figure below. \[ex:linfty\] Now let $p = \infty$. If we make a similar calculation, we obtain that the perimeter of a level set connecting points $(x,y)$ and $(z,t)$ equals $$\int_{\partial E_t} \max(|\sin \theta|, |\cos \theta|) d\mathcal{H}^{n-1} = \int_{x}^{z} \max(|\sin \theta|, |\cos \theta|) \sqrt{1 + (\tan \theta)^2} dx =$$ $$= \int_{x}^{z} \max(|\sin \theta|, |\cos \theta|) \frac{1}{|\cos \theta|} dx = \int_{x}^{z} \max(1, |\tan \theta|) dx = \int_{x}^{z} \max(1, |g'|) dx \geq |z - x|,$$ where the inequality becomes an equality iff $|g'| \leq 1$; in other words, the angle between the level set and the $x$ coordinate axis is not greater than $\frac{\pi}{4}$.
Thus, if we take the function $f(\theta) = \cos(2 \theta)$, the solution is not unique; we apply this result for $t \in (-1,0)$ and then apply it again for $t \in (0,1)$ considering the level set as a function of $y$. A solution different from the Euclidean one is presented in the figure below. Nevertheless, it may still happen that the solution is unique: this is the case if we take $f$ such that the Euclidean solution has all level sets at an angle $\frac{\pi}{4}$ to the coordinate axes. For example, we can take $f(\theta) = \cos(2 \theta - \frac{\pi}{2})$. Now let $1 < p < \infty$. By [@JMN Theorems 1.1, 1.2] for continuous boundary data the anisotropic LGP has a unique solution, because the norm $\| \cdotp \|_p$ is a smooth function away from $(0,0)$. We will show that connected components of boundaries of superlevel sets of functions of $\phi-$least gradient are line segments, just as for the isotropic norm $\| \cdotp \|_2$; in fact, due to an anisotropic analogue of Theorem \[twierdzeniezbgg\] proved in [@Maz Theorem 3.19], it is enough to show that the boundaries of minimal sets are line segments. \[tw:anizotropia\] Let $\Omega \subset \mathbb{R}^2$ be an open convex set. Let the anisotropy be given by the function $\phi(x,Du) = \| Du \|_p$, where $1 < p < \infty$. Let $E$ be a $\phi-$minimal set with respect to $\Omega$, i.e. $\chi_E$ is a function of $\phi-$least gradient in $\Omega$. Then every connected component of $\partial E$ is a line segment. Let $(x,y),(z,t)$ be two points on the same connected component of $\partial E$. We have to minimize an integral analogous to the previous one (notation stays the same): $$L(x, g, g') = \int_{\partial E_t} (|\sin \theta|^p + |\cos \theta|^p)^{\frac{1}{p}} d\mathcal{H}^{n-1} = \int_{x}^{z} (|\sin \theta|^p + |\cos \theta|^p)^{\frac{1}{p}} \sqrt{1 + (\tan \theta)^2} dx =$$ $$= \int_{x}^{z} (|\sin \theta|^p + |\cos \theta|^p)^{\frac{1}{p}} \frac{1}{|\cos \theta|} dx = \int_{x}^{z} (1 + |\tan \theta|^p)^{\frac{1}{p}} dx = \int_{x}^{z} (1 + |g'|^p)^{\frac{1}{p}} dx.$$ The Euler$-$Lagrange equation for the functional $L$ takes the form $$0 = \frac{\partial L}{\partial g} = \frac{d}{dx} \left(\frac{\partial L}{\partial g'}\right) = \frac{d}{dx} \left(\mathrm{sgn}(g')\, |g'|^{p-1} (1 + |g'|^p)^{\frac{1}{p} - 1}\right),$$ hence $$\mathrm{sgn}(g')\, |g'|^{p-1} (1 + |g'|^p)^{\frac{1}{p} - 1} = \text{ const}.$$ Taking the absolute value and raising both sides to the power $\frac{p}{p-1}$ we obtain $$\frac{|g'|^p}{1 + |g'|^p} = \text{ const} = C,$$ thus $g' =$ const. Thus the anisotropic minimal surface connecting the points $(x,y)$ and $(z,t)$ is a line segment. This paper is based on my master’s thesis. My supervisor was Piotr Rybka, whom I would like to thank for many fruitful discussions on this paper. The author receives the WCNM scholarship. [99]{} M. Amar, G. Bellettini, *A notion of total variation depending on a metric with discontinuous coefficients*, Ann. Inst. Henri Poincaré Anal. Non Linéaire **11** (1994), pp. 91$-$133. L. Ambrosio, N. Fusco, D. Pallara, *Functions of bounded variation and free-discontinuity problems*, Oxford Mathematical Monographs, Oxford 2000. L. Ambrosio, R. Ghezzi, V. Magnani, *BV functions and sets of finite perimeter in sub-Riemannian manifolds*, Ann. Inst. Henri Poincaré (C) Non Linear Analysis **32** (2015), pp. 489$-$517. E. Bombieri, E. de Giorgi, E. Giusti, *Minimal cones and the Bernstein problem*, Invent. Math. **7** (1969), pp. 243$-$268. L. C. Evans, R. F.
Gariepy, *Measure theory and fine properties of functions*, CRC Press, Boca Raton 1992. E. Gagliardo, *Caratterizzazione delle tracce sulla frontiera relative ad alcune classi di funzioni in piú variabili*, Rend. Sem. Mat. Univ. Padova **27** (1957), pp. 284$-$305. E. Giusti, *Minimal surfaces and functions of bounded variation*, Birkhäuser, Basel 1984. W. Górny, P. Rybka, A. Sabra, *Special cases of the planar least gradient problem*, arXiv:1605.00035v2 (2016). H. Hakkarainen, R. Korte, P. Lahti, N. Shanmugalingam, *Stability and continuity of functions of least gradient*, Anal. Geom. Metr. Spaces **3** (2015), pp. 123$-$139. R. L. Jerrard, A. Moradifam, A. I. Nachman, *Existence and uniqueness of minimizers of general least gradient problems*, J. Reine Angew. Math., to appear (published online 2015). J. M. Mazon, *The Euler-Lagrange equation for the anisotropic least gradient problem*, Nonlinear Analysis: Real World Applications **31** (2016), pp. 452$-$472. M. Miranda, *Comportamento delle successioni convergenti di frontiere minimali*, Rend. Sem. Mat. Univ. Padova **38** (1967), pp. 238$-$257. A. Moradifam, A. Nachman, A. Tamasan, *Uniqueness of minimizers of weighted least gradient problems arising in conductivity imaging*, arXiv:1404.5992v1 (2014). J. M. Mazon, J. D. Rossi, S. S. de Leon, *Functions of least gradient and 1-harmonic functions*, Indiana Univ. Math. J. **63** (2014), pp. 1067$-$1084. L. Simon, *Lectures on geometric measure theory*, Proc. Centre Math. Analysis, ANU **3** (1983). G. Spradlin, A. Tamasan, *Not all traces on the circle come from functions of least gradient in the disk*, arXiv:1311.1494 (2014). P. Sternberg, G. Williams, W. P. Ziemer, *Existence, uniqueness, and regularity for functions of least gradient*, J. Reine Angew. Math. **430** (1992), pp. 35$-$60. P. Sternberg, W. P. Ziemer, *Generalized motion by curvature with a Dirichlet condition*, J. Differ. Equations **114**, no. 2 (1994), pp. 580$-$600.
---
abstract: 'Quasi-periodic signals have yielded important constraints on the masses of black holes in galactic X-ray binaries, and here we extend this to active galactic nuclei (AGN). We employ a wavelet technique to analyze 19 observations of 10 AGN obtained with the [*XMM-Newton*]{} EPIC-PN camera. We report the detection of a candidate 3.3 kilosecond quasi-period in 3C 273. If this period represents an orbital timescale originating near a last stable orbit of 3 $R_S$, it implies a central black hole mass of $7.3\times 10^6$ M$_\sun$. For a maximally rotating black hole with a last stable orbit of 0.6 $R_S$, a central black hole mass of $8.1\times 10^7$ M$_\sun$ is implied. Both of these estimates are substantially lower than previous reverberation mapping results which place the central black hole mass of 3C 273 at about $2.35\times 10^8$ M$_\sun$. Assuming that this reverberation mass is correct, the X-ray quasi-period would be caused by a higher order oscillatory mode of the accretion disk.'
author:
- 'C. Espaillat, J. Bregman, P. Hughes, and E. Lloyd-Davies'
title: 'Wavelet Analysis of AGN X-ray Time Series: A QPO in 3C 273?'
---

Introduction
============

Quasi-periodic oscillations (QPOs) are thought to originate in the inner accretion disk of a black hole or neutron star in an X-ray binary (XRB) system [@vdk00]. Consequently, QPOs have been used in galactic XRBs to place important constraints on the masses of the central black holes of these systems. Previous work has revealed that AGN and XRBs are alike: noise power spectra have shown that similar physical processes may underlie the X-ray variability in both [@ede99; @utt02; @mar03; @vau03; @mchar04; @mchar05]. Taking this resemblance into account and assuming that accretion onto a stellar-mass black hole is comparable to accretion onto a supermassive black hole, one would expect some AGN to exhibit QPOs similar to those observed in XRBs. In supermassive black holes ($10^6$-$10^9$ M$_\sun$), these QPOs would be at much lower frequencies than those we find in stellar-mass black holes ($\sim$ 10 M$_\sun$). Low-frequency QPOs (LF QPOs) in XRBs range from 50 mHz to 30 Hz; scaling from a $\sim 1$ Hz QPO in a 10 $M_\sun$ XRB, an LF QPO in an AGN would occur on timescales of days to months [@vau05], too long to be detectable for the AGN in our sample. On the other hand, high-frequency QPOs (HF QPOs) in XRBs have frequencies of $\geq$ 100 Hz and, assuming a $1/M_{BH}$ scaling of frequencies, $f_{HFQPO}\sim 3\times 10^{-3} (M_{BH}/ 10^{6} M_\sun)^{-1}$ Hz [@abram], corresponding to timescales greater than 400 s for AGN. While this parallel between AGN and XRBs seems promising, no claim of an X-ray quasi-period in an AGN has been found to be statistically robust. @vaub remark that a major source of false detections arises from assuming an inappropriate background noise power spectrum. X-ray variations of AGN have intrinsically red noise power spectra (i.e. the power spectra have a continuum resembling a power law with a steep slope; Press 1978); however, many purported QPOs in AGN are compared against an assumed background of white noise (i.e. Poisson photon noise or a flat spectrum). For example, in a $\sim$5 day ASCA observation of IRAS 18325-5926 the significance of the candidate periodicity was estimated with white noise [@iwa98]. After including red noise in the periodogram fitting, @vau found that the candidate periodicity was no longer significant at the 95$\%$ level.
@fiore also claimed high ($>99\%$) significance peaks in NGC 4151; however, after fitting the red noise and Poisson photon noise components of the spectrum, @vaub showed that the significances of the QPOs fell below the 95$\%$ confidence level. It is also difficult to constrain the significance of possible QPOs due to power spectrum effects [@vaub]. EXOSAT data of NGC 5548 were reported to have a significant period [@pap93], but @tag96 later showed that the significance of the candidate QPO was lower than previously reported once the uncertainties in modeling the spectrum were taken into consideration. This lack of statistically significant evidence for QPOs in AGN has led to questions of whether existing X-ray observations of AGN are sensitive enough to detect QPOs even if they are present [@vau05]. Here we use a different technique to search for significant periodic structures in the time variability data that have been collected for AGN with [*XMM-Newton*]{}. We use a wavelet transform technique, which can have certain advantages relative to periodograms and Fourier power spectra, the methods that have previously dominated the literature. The wavelet technique, which has become widely used in other branches of science, is particularly useful in identifying signals where the period or its amplitude changes with time. This technique is applied to the [*XMM-Newton*]{} data from 10 bright AGN, with special care taken to properly treat the noise characteristics and error analysis, and we find a candidate 3.3 ks quasi-period in 3C 273. In Section 2, we present our observations and data reduction steps. In Section 3, we provide an overview of the two wavelet techniques used in our analysis: the continuous wavelet transform and the cross-wavelet transform. The results of these two techniques as well as significance tests are presented. We also discuss structure function analysis for the AGN in our sample. In Section 4 we argue that this 3.3 ks quasi-period in 3C 273 is consistent with what we would expect from oscillations in the accretion disk around the supermassive black hole based on current black hole mass estimates.

Observations and Data Reduction {#obssection}
===============================

The 10 AGN in our sample were selected because they are bright and have [*XMM-Newton*]{} EPIC-PN camera observations which exceed 30 kiloseconds (ks). In total, we have 19 observations, and each observation’s ID, date, length, and average counts are listed in Table \[obslog\]. All observations are in the energy range 0.75 to 10 keV and most were taken in small window mode, which has a readout time of 6 milliseconds (ms). The only exception is NGC 4151 Observation ID (Obs. ID): 0112830201, which was taken in full frame mode with a readout time of 73.4 ms. Observation Data Files (ODFs) were obtained from the on-line [*XMM-Newton*]{} Science Archive and later reduced with the [*XMM-Newton*]{} Science Analysis Software (SAS, v. 7.0.0, 6.1.0, 5.4.1). Source light curves, with 5 s bins, were extracted for a circular region centered on the source ($\sim$20$^{\prime\prime}$). Background light curves were obtained from a nearby rectangular source-free region and subtracted from the source light curves. These rectangular background regions were larger than the source regions and were scaled down accordingly. Due to strong flaring, the last few kiloseconds of data are excluded from most observations.
The count rates for the target sources are orders of magnitude greater than the background count rate in the detection cell, so a rise in the background is unimportant. We removed these last few kiloseconds of data from the data stream just to be very cautious. We note that in the observation of 3C 273 with the claimed detection, including the periods with flaring does not change our results. Some of the observations in our sample are affected by pile-up. Pile-up occurs when more than one X-ray photon arrives in a pixel before the pixel is read out by the CCD, making it difficult to distinguish one high energy photon from two lower energy photons. Pile-up can also occur when photons striking adjacent pixels are confused with a single photon that deposits charge in more than one pixel. Depending on how many pixels are involved, this is called a single-, double-, triple-, or quadruple- pixel event. The SAS task EPATPLOT measures the pile-up in an observation and the results for our target with the highest count rate, MKN 421 (Table 1), are shown in Figure \[epatplot\]. When we compare the expected fractions of pixel events (solid lines) with those actually measured in the data (histograms) for the range 0.75 to 10 keV we see that a larger than expected fraction of double events (third histogram from top, dark blue in electronic edition) is measured as well as a larger fraction of triple and quadruple events (bottom two histograms), although to a lesser degree, while single events (second histogram from top) are lower than expected, indicating the presence of pile-up. Pile-up leads to a general reduction in the mean count rate as well as a reduction in the magnitude of variations. We will explore the influence of pile-up on our data in more detail when we discuss structure functions in Section \[sfpileupsection\]. Data Analysis and Results ========================= Wavelet Analysis {#waveletanalysis} ---------------- ### The Continuous Wavelet Transform The continuous wavelet transform (CWT) is the inner product of a dilated and translated mother wavelet and a time series $f(t)$, the idea being that the wavelet is applied as a band-pass filter to the time-series. The continuous wavelet transform maps the power of a particular frequency (i.e. dilation) at different times in translation-dilation space, giving an expansion of the signal in both time and frequency. Hence, the continuous wavelet transform not only tells us which frequencies exist in the signal, but also when they exist, allowing us to see whether a timescale varies in time. This is the wavelet technique’s advantage over Fourier transforms in detecting quasi-periods. In addition, the Fourier transform is not suited for detecting quasi-periods since non-periodic outbursts will spread power across the spectrum and windowing will cause power to appear at low frequencies, potentially obscuring quasi-periodic signals. Throughout this paper, we follow @hug98 and @kel03 and references within. In previous studies [@hug98; @kel03; @liu05; @kadler06] we have found the Morlet wavelet $$\psi_{Morlet} = \pi^{-1/4} e^{ik_{\psi}t} e^{-|t^{2}|/2},$$ with $k_{\psi}=6$ to be an excellent choice. The value of $k_{\psi}$ is a satisfactory compromise between a value small-enough that we have good resolution of temporal structures, and large-enough that the admissibility condition is satisfied, at least to machine accuracy [@far92]. 
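For concreteness, the sketch below (our own direct-summation implementation, not the analysis code used for the paper) shows how the transform just described can be evaluated numerically with the Morlet wavelet above; the translated, dilated wavelets and the inner product follow the expressions given in the next paragraph, and the toy signal is only meant to mimic the sinusoidal test case discussed there.

```python
import numpy as np

# Minimal continuous wavelet transform with the Morlet mother wavelet (k_psi = 6),
# evaluated by direct summation over translations t' and dilations l.
K_PSI = 6.0

def morlet(t):
    return np.pi ** -0.25 * np.exp(1j * K_PSI * t) * np.exp(-0.5 * t ** 2)

def cwt(signal, dt, dilations):
    times = np.arange(signal.size) * dt
    coeffs = np.empty((dilations.size, times.size), dtype=complex)
    for j, l in enumerate(dilations):
        for i, t_prime in enumerate(times):
            # inner product with the translated (t_prime) and dilated (l) wavelet
            psi = morlet((times - t_prime) / l) / np.sqrt(l)
            coeffs[j, i] = np.sum(signal * np.conj(psi)) * dt
    return coeffs

# Toy example in the spirit of the sinusoidal test signal discussed below:
# a 6 s period in one half of the series and a 3 s period in the other.
dt = 0.25
t = np.arange(0.0, 100.0, dt)
f = np.where(t < 50.0, np.sin(2 * np.pi * t / 6.0), np.sin(2 * np.pi * t / 3.0))
power = np.abs(cwt(f, dt, np.linspace(1.0, 10.0, 40))) ** 2
```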
The wavelet, being continuous and complex, permits a rendering in transform space that highlights temporally localized, periodic activity – oscillatory behavior in the real part and a smooth distribution of power in the modulus – and being progressive (zero power at negative frequency), is optimal for the study of causal signals. We have deliberately avoided any form of weighting, such as that introduced by @foster to allow for uneven sampling, or @johnson to rescale within the cone of influence, in order to facilitate our interpretation of the cross wavelet, and to allow the use of existing methods of significance analysis. From this mother wavelet, we generate a set of translated ($t'$) and dilated ($l$) wavelets $$\psi_{lt^{'}}(t)=\frac{1}{\sqrt{l}} \psi (\frac{t-t^{'}}{l}), l \in \Re^{+}, t\in \Re$$ and we then take the inner product with the signal $F(t)$ to obtain the wavelet coefficients $$\label{coefficients} \widetilde f (l,t^{'})= \int_{\Re}f(t)\psi^{*}_{lt'}(t) dt .$$ The wavelet coefficients are later mapped in wavelet space which has as coordinates translation and dilation, and so periodic behavior shows up as a pattern over all translations at a specific dilation. By way of example, Figure \[sinecwt\] shows the real part and the power of the continuous wavelet transform (second and bottom panel, respectively) for a sinusoidal signal of varying frequency (top panel). Here, the real part of the transform shows oscillatory behavior corresponding to the two periodicities of the sinusoidal signal at dilations of 3s and 6s with a break in translation at 50s corresponding to the time where the change in frequency occurs. The bottom panel in Figure \[sinecwt\] shows that the power of the continuous wavelet transform is concentrated at these two frequencies as well. The hatched area in both panels of Figure \[sinecwt\] represents the cone of influence: the region where edge effects become important. It arises because discontinuities at the beginning and end of a finite time series result in a decrease in the wavelet coefficient power. Also shown in the header of Figure \[sinecwt\] are the number of dilations used ($N_l$) and the ranges of dilations explored. We discuss $\alpha$ and the normalization of Figure \[sinecwt\] in the following section. ### Significance Tests {#sigtestsection} Significance tests can be created for the continuous wavelet transform and here we follow @tor98. First, one compares the wavelet power with that of an appropriate background spectrum. We use the univariate lag-1 auto-regressive \[AR(1)\] process given by $$\label{noiseeqn} x_{n}=\alpha x_{n-1} + z_{n}$$ where $\alpha$ is the assumed lag-1 autocorrelation and $z_{n}$ is a random deviate taken from white noise. Note that $\alpha = 0$ gives a white noise process. Throughout this paper, we will use “white noise” to refer to an AR(1) process with $\alpha = 0$. Red noise is sometimes used to refer to noise with $\alpha = 1$, however, throughout this paper we apply the term to any non-zero $\alpha$. The normalized discrete Fourier power spectrum of this process is $$\label{fouriereqn} P_{j}=\frac{1-\alpha^{2}}{1 + \alpha^{2} - 2 \alpha \cos(2 \pi \delta t /\tau_{j})}$$ where $\tau_{j}$ is the associated Fourier period for a scale $l_{j}$. We use the above two equations to model a white noise or red noise spectrum. 
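The two equations above translate directly into code. The sketch below (ours, not the authors' pipeline) generates an AR(1) realization, evaluates the corresponding normalized background spectrum, and scales it into a confidence level; the division by the number of degrees of freedom follows the $P_{j}\chi^{2}_{\nu}/\nu$ distribution used in the next subsection, and the example numbers (5 s bins, $\alpha = 0.14$, dilations of a few hundred seconds to $2.3\times10^{4}$ s) are those quoted later for 3C 273.

```python
import numpy as np
from scipy.stats import chi2

# AR(1) background model: realization, normalized theoretical spectrum,
# and the derived confidence level.
def ar1_series(n, alpha, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = alpha * x[i - 1] + rng.normal()   # x_n = alpha * x_{n-1} + z_n
    return x

def ar1_spectrum(alpha, dt, fourier_periods):
    # normalized discrete Fourier power spectrum of the AR(1) process
    return (1.0 - alpha ** 2) / (
        1.0 + alpha ** 2 - 2.0 * alpha * np.cos(2.0 * np.pi * dt / fourier_periods))

def confidence_level(alpha, dt, fourier_periods, q=0.95, dof=2):
    # background spectrum times the q-th percentile of chi^2_dof, per degree of freedom
    return ar1_spectrum(alpha, dt, fourier_periods) * chi2.ppf(q, dof) / dof

# e.g. the 95% red-noise level over the searched range of Fourier periods
periods = np.linspace(200.0, 2.3e4, 500)
level = confidence_level(alpha=0.14, dt=5.0, fourier_periods=periods)
```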
The global wavelet power spectrum (GWPS) is obtained by averaging in time $$\widetilde f_{G}^{2}(l_{j})=\frac{1}{N_{j}} \sum^{i'_{j}}_{i=i_{j}}|\widetilde f(l_{j},t'_{i})|^{2}.$$ Here, $i_{j}$ and $i'_{j}$ are the indices of the initial and final translations $t'_{i}$ outside of the cone of influence at a given scale $l_{j}$. $N_{j}$ is the number of translations $t'_{i}$ outside the cone of influence at that scale. Assuming a background spectrum given by Eqn. \[fouriereqn\] we estimate the autocorrelation coefficient ($\alpha$) by calculating the lag-1 and lag-2 autocorrelations, $\alpha_{1}$ and $\alpha_{2}$. The autocorrelation coefficient is then estimated as $\alpha = (\alpha_{1} + \sqrt{\alpha_{2}})/2$. The background spectrum $P_{j}$ then allows us to compute the confidence levels. It is assumed that the time series has a mean power spectrum given by Eqn. \[fouriereqn\], and so if a peak in the wavelet power spectrum is significantly above this background spectrum, then the peak can be assumed to be a true feature. If the values in the time series $f(t)$ are normally distributed, we expect the wavelet power $|\widetilde f|^{2}$ to be $\chi^{2}$ distributed with two degrees of freedom ($\chi^{2}_{2}$). The square of a normally distributed variable is $\chi^{2}$ distributed with one degree of freedom, and the second degree of freedom comes from the fact that both the real and imaginary parts of the complex $\widetilde f$ are normally distributed. For example, to determine the 95$\%$ confidence level, one multiplies the background spectrum (Eqn. \[fouriereqn\]) by the 95th percentile value for $\chi_{2}^{2}$. In Figure \[sinegwps\] we show the GWPS of a time series along with the 99$\%$ and 95$\%$ confidence levels for a red noise process and the 99$\%$ confidence level for a white noise process. The distribution for the local wavelet power spectrum is $$\frac{|\widetilde f(l_{j},t'_{i})|^2}{\sigma^{2}} \Rightarrow P_{j}\frac{\chi^{2}_{\nu}}{\nu}$$ where the arrow means “distributed as,” $\sigma^{2}$ is the variance, and $\nu$ is the number of degrees of freedom, which is two here. The indices on the scale $l$ are $j$=1,2,...,$J$ where $J$ is the number of scales, and the indices on the translation $t'$ are $i$=1,2,...,$N_{data}$. We evaluate this equation at each scale to get 95$\%$ confidence contour lines, and in this paper our continuous transforms are normalized to the 95$\%$ confidence level for the corresponding red noise process. Doing this allows one to see the strength of the wavelet coefficients relative to the 95$\%$ confidence level of a red noise process.

### The Cross-Wavelet Transform {#xwtsection}

Although the continuous wavelet transform is useful in examining how a time series varies in time and scale, it does not by itself tell us how the power is distributed in dilation over a range of scales when assigning a characteristic timescale. Since a quasi-periodic signal has no unique dilation, we use the cross-wavelet transform (XWT), which filters out noise and reveals the QPO more clearly. Here we use the XWT introduced by @kel03. After the continuous transform identifies that a periodic pattern exists in the data, the dilation that characterizes this period is obtained from the global wavelet power spectrum and is used to create a sinusoidal mock signal. The continuous wavelet transform coefficients of the data signal $f_{a}(t)$ are then multiplied by the complex conjugate of the continuous transform coefficients of a mock signal $f_{m}(t)$.
The results are mapped out in wavelet space and analyzed for a correlation. The cross-wavelet transform takes the form $$\widetilde f_{c} (l,t')= \widetilde f_a(l,t')\widetilde f^{*}_m(l,t')$$ where the continuous wavelet coefficients $\widetilde f_a$ and $\widetilde f_m$ are given by Equation \[coefficients\]. Figure \[sinecross\] shows the cross-wavelet for the same sinusoidal signal of varying frequency used in Figure \[sinecwt\]. The mock signal was calculated using the 6s period found in the wavelet power spectrum (see Fig. \[sinegwps\]) and as the concentrations in the real and power panels of Figure \[sinecross\] show, the cross-wavelet finds that this 6s period exists in the first half of the time series, illustrating the cross-wavelet’s ability to highlight a QPO. The reader may refer to @kel03 for a full review of the cross-wavelet technique used here. Structure Function Analysis {#sfanalysissection} --------------------------- Since the global wavelet power spectrum compares the observed signal to expected levels of red noise and white noise, we created structure functions (SFs) for each of our observations to see which noise process dominates the signal at different times. A structure function calculates the mean deviation of data points, providing an alternate method of quantifying time variations. Here we use a structure function of the first-order [@sim85]: $$SF(\delta t) = <[F(t) - F(t + \delta t)]^{2}>$$ where $F(t)$ is the flux at time $t$ and $\delta t$ is a time lag. The slope $\alpha$ of the SF curve in $log(SF)-log(\delta t)$ space depends on the noise processes underlying the signal, giving us an indication of the nature of the process of variation. If $\alpha = 1$ red noise dominates, and for flatter slopes of $\alpha = 0$ Poisson photon noise is significant. A plateau at short time lag is due to measurement noise. The transition from plateau to power-law in the structure function curve determines where the dominant underlying noise process changes in the object. The point of turnover from power-law to plateau at longer time lags corresponds to a maximum characteristic timescale. ### Effects of Pileup {#sfpileupsection} We measure the presence of pile-up in our observations by using the SAS task EPATPLOT and find that the majority of our sources show varying degrees of pile-up. For example, as previously shown in Section \[obssection\], MKN 421 Obs. ID: 0099280101 has a modest amount of pile-up (see Figure \[epatplot\]). In the structure function of this observation (left panel, Figure  \[structpileup\]), the flat portion of the structure function curve should have a value of $log(SF)=1$, which corresponds to the Poisson photon noise inherent in the photon statistics. However, here it falls below the Poisson photon noise level. To remove the pile-up we exclude the central core of the source in the event file since pile-up is more likely to occur here. For this subtracted data, the EPATPLOT output indicates that there is no pile-up and the SF curve is then at the expected value for Poisson photon noise (right panel, Figure \[structpileup\]). Pile-up affects the SF because it lowers the overall count rate and thereby Poisson photon noise is underreported. We correct for pile-up in the rest of our data by adding a fixed value to $log(SF)$, moving the flat part of the structure function curve up to 1. All of our observations had less than 5$\%$ pile-up except for PKS 2155-304 Obs. ID 124930301 (6.5$\%$) and both observations of MKN 421 ($\sim 10\%$). 
Overall, the percentage of pile-up in our sample increases with the number of counts, except for NGC 4151 Obs. ID 112830201, which is 5$\%$ piled-up and has an average of only 25 counts.

Results
-------

### Wavelet Analysis Results {#wavelet_results}

Of the observations that we analyzed, only one showed a quasi-period of interest (at 3.3 ks), and this occurred in an observation of 3C 273 (ID 126700301). The continuous wavelet transform result for this observation is shown in Figure \[3ccwt\] with the quasi-period circled in the real and power plots (second and third panel, respectively). One can see that the quasi-period appears in the last two-thirds of the observation. In the real plot, the concentrations match up with peaks in the light curve, and the power is concentrated near a translation (time) of $4.2\times10^{4}$ s. The wavelet is sampled with 220 dilations ($N_{l}$) ranging between $\sim 207.2$ s and $2.3\times10^{4}$ s. We note that the data in Figure \[3ccwt\] are binned from 5 s to 100 s for clarity and that we only show the first 56 ks due to background flaring at the end of the observation; including the periods with background flaring does not change our results. The $\alpha$ found from autocorrelation analysis for the unbinned data is 0.14 and this value is used to reach the conclusions in this paper. The 3.3 ks quasi-period is also evident in the Global Wavelet Power Spectrum (GWPS, Figure \[3cgwps\]), which is calculated by averaging the wavelet power spectra over all times. In searching for quasi-periodic behavior we excluded time scales above 25$\%$ of the time series length, where, using spectral methods, too few periods to provide a convincing result would be present, and where the cone of influence becomes important for the wavelet coefficients. On short time scales, experience has shown that sources often exhibit a broad distribution of power, with local maxima not well-separated from the mean power level. We selected a lower bound for our search by visual identification of such behavior in the GWPS, in conjunction with a concomitant change in behavior of the SF. The solid line in Figure \[3cgwps\] is the power spectrum of the signal, which is compared to the power spectrum of white and red noise random processes (broken lines). One can see that the 3.3 ks detection exceeds the expected levels of white and red noise at the 99$\%$ significance level, i.e. the peak power is higher than that of 99$\%$ of realizations of the noise random processes (the significance of this signal is 99.979$\%$ relative to red noise with $\alpha = 0.14$). The origins of the white and red noise power spectra were discussed in Section \[sigtestsection\]. The cross-wavelet analysis for 3C 273 (Fig. \[3ccross\]) supports the conclusion that a period of 3.3 ks is indeed present. Here, the XWT (see Section \[xwtsection\]) compares a mock sinusoidal signal with a period of 3282 s with the 3C 273 light curve. The concentration in the cross-wavelet transform shows that the 3.3 ks signal is present throughout the observation. As one can see by comparing the cross-wavelet signals in juxtaposed bands, the 3.3 ks periodicity can be traced over the entire interval. In the CWT (Figure \[3ccwt\]) the 3.3 ks signal is particularly strong at late times, and so, due to the limited dynamic range of the rendering, is not evident early in the time interval in that figure. This periodicity is not detected in the other three observations of this object. In the 58 ks (Obs. ID 159960101) and 60 ks (Obs.
ID 126700801) observations of 3C 273, there is a signal at 5000s, but it does not rise above the 99$\%$ red noise confidence level (Fig. \[gwpsall1\]). We note that a Fourier analysis of 3C 273 yielded a feature at 3.3 ks, but with a lower significance ($<$ 3$\sigma$) than is found with the wavelet technique. We performed Monte Carlo simulations in order to estimate the probability that the wavelet technique would claim a spurious detection. As a baseline, we created one thousand simulated light curves for Poisson photon noise (Fig. 11) to represent random observational errors i.e. photon counting statistics. The simulated light curves were 56 ks long with 5 s intervals and we multiplied the mean deviate $z_{n}$ by 40 to produce an average spread in the y-axis of 40 counts to resemble the 3C 273 light curve. Most of the false detections occur at timescales less than 2000s which corresponds to 3.6 $\%$ the length of the observation and supports our earlier point that one can select the lower limit to search for periodicities by visual identification of broad distributions of power on short time scales in the GWPS. On average, the wavelet technique claims a detection (at or above the significance level reported by the wavelet analysis for 3C 273) 0.4$\%$ of the time (Fig. \[hist\]). The Monte Carlo simulations suggest a significantly higher rate of false detections than is implied by the statistics based on the GWPS. However, they are consistent with the latter estimates [*within the margin of error*]{}, given that only 1000 realizations of a time series were generated. Better simulation statistics could be achieved by increasing the number of time series realizations by several orders of magnitude, but devoting time and resources to this is not warranted. Visual inspection of the simulated light curves reveals that they differ qualitatively from the actual time series: a better correspondence can be achieved with the addition of randomly distributed Gaussian-profile bursts of fixed, small amplitude. Evidently, the process under study is not strictly a stationary, first order one, and the formal statistical measures of significance should be regarded as only indicative of the high likelihood of a quasi-periodic phenomenon in this source. A more detailed analysis, allowing for nonstationary processes, is beyond the scope of this paper. While we have performed 19 independent experiments and found only 1 detection we point out that of our 19 data sets only 7 have average counts (Table 1) equal to or more than the observation in which we find the QPO. One cannot expect to see with equal likelihood, a periodicity of equal strength in these weaker AGN. We note that independently, the XWT finds evidence for power throughout the observation at 3.3 ks (Fig. 8). We measured the 3.3 ks signal strength across the time series from the power plot of Figure 8. The power of the 3.3 ks signal is $\sim$ 4000 times stronger than shorter and longer dilations, illustrating that the 3.3 ks period is well-constrained. We also ran the XWT on this time series with analyzing signals of 2.3 ks and 4.3 ks. The average power of these signals is $\sim$ 2 times less than the average power of the 3.3 ks signal. This demonstrates that the XWT is picking out a well-defined, persistent signal, and will not misleadingly suggest a signal where there is none. We did not find any significant detections for the other nine AGN in our sample. 
No features had significances that exceeded the 99$\%$ confidence levels for both white noise and red noise processes (see Figs. \[gwpsall1\], \[gwpsall2\]) and appeared at either too short (i.e. at timescales shorter than 3.6$\%$ the length of the observation) or too long (i.e. at timescales greater than half the length of the observation) a timescale. Some of the AGN in our sample have been studied before and previous reports of QPOs exist in the literature. We will discuss those results in more detail in Section \[discussionprevious\]. ### Structure Function Results {#sfanalysis} After correcting for pile-up, we subtract a constant level corresponding to Poisson photon noise from the structure functions (Figures \[sfminus1\], \[sfminus2\]). The slopes are measured by fitting a power-law to the SF curve using the least-squares method in $log(SF)-log(\delta t)$ space. Slopes are listed in Table \[sftbl\] along with the characteristic time-scales of variability, which were measured by identifying the times of turnover from plateau to power-law and vice versa in the SF curve. All of our structure functions have a flat plateau at short timescales corresponding to Poisson photon noise, most have a power-law portion, and some have a plateau at long timescales. We include light curves in Figures  \[lightcurves1\],  \[lightcurves2\] for comparison with the structure functions. The structure functions for all four observations of 3C 273 are shown in the first four panels of Figure \[sfminus1\]. The observation with the 3.3 ks quasi-period (upper left, Figure \[sfminus1\]) is dominated by whitish noise around 3000s, as inferred from its flat slope; however, the SF is unsuited to quantifying the autocorrelation coefficient precisely. Recall that the wavelet analysis finds an autocorrelation coefficient of $\alpha = 0.14$, relatively small, and consistent with a flattish structure function. We note that this observation also has the greatest excess of such noise above the photon noise, compared to the other three observations, consistent with this being a unique time series out of all those analyzed. Discussion ========== Mass Estimates of 3C 273 ------------------------ There are several mass estimates for 3C 273 obtained from different methods. One method is reverberation mapping whereby one uses the time lag of the emission-line light curve with respect to the continuum light curve to determine the light crossing size of the broad line region (BLR) and then assumes Keplerian conditions in the broad line region gas motion (i.e. $M_{BH}=v^{2}R_{BLR}/G$) [@pet00]. Reverberation mapping results based on the optical continuum (i.e. Balmer lines) place the mass of the central black hole in 3C 273 at $2.35^{+0.37}_{-0.33}\times 10^8$ M$_\sun$ [@kas00]. In a different study, @pian use $Hubble$ $Space$ $Telescope$ UV luminosities to find the broad line region size. To do so, they derive a relationship between $R_{BLR}$ and UV luminosity using the empirical relationship found by @kas00 between $R_{BLR}$ and the optical luminosity. @pian obtain a mass of $4.0^{+2}_{-2}\times 10^{8}M_\sun$ for 3C 273, consistent with the @kas00 value within errors. In another study, @paltani look at the strongest broad emission UV lines (Ly$\alpha$ and C IV ) in archival $International$ $Ultraviolet$ $Explorer$ observations and obtain a mass of $6.59^{+1.86}_{-0.9}\times10^9M_\sun$ for the central supermassive black hole in 3C 273. There are also mass estimates for 3C 273 that do not come from reverberation mapping. 
@lia03 find a black hole mass of $2\times 10^7$ M$_\sun$ by generalizing the Elliot-Shapiro relation to the Klein-Nishina regime for 3C 273’s gamma-ray flux obtained from EGRET. Another method is to use the @mclure correlation between host galaxy luminosity and black hole mass which obtains a mass of $1.6 \times 10^9$ M$_\sun$ with an uncertainty of 0.6 dex [@wang]. Underlying Physical Process for the QPO in 3C 273 ------------------------------------------------- If the 3.3 ks quasi-period in 3C 273 represents an orbital timescale originating near a last stable orbit, it implies a central black hole mass of $7.3\times 10^6$ M$_\sun$ for a non-rotating black hole or $8.1\times 10^7$ M$_\sun$ for a maximally rotating black hole. These numbers agree with the @lia03 mass estimate of $2\times 10^7$ M$_\sun$. However, these masses are substantially lower than those expected for supermassive black holes. The @pian estimate for the mass of the black hole in 3C 273 at $4.0 \times 10^8$ M$_\sun$ points to an orbital period of $\sim 200$ ks for a last stable orbit of 3$R_S$ and a period of $\sim$16 ks for 0.6$R_S$ for a rotating black hole. @paltani estimate a mass for 3C 273 of $6.59 \times 10^{9}$ M$_\sun$, which points to an orbital period of  3000 ks for a last stable orbit of 3$R_S$ and a period of  270 ks for 0.6$R_S$. The 3.3 ks quasi-period we find here is only about 2-20$\%$ of the @pian orbital timescale and 0.1-1$\%$ of the @paltani orbital timescale, suggesting that this X-ray quasi-period is not caused by dynamical motion in the inner accretion disk. Furthermore, the inverse scaling between frequency and black hole mass yields an expected period of $t \sim 300 M_{BH}/10^{6}M_\sun$ based on the representative of HF QPOs in XRBs, GRO J1655-40 [@orosz; @remi99; @abram]. Using either the @pian or @paltani mass estimates yields a period that is one to two orders of magnitude higher than what we observe. Previous work has suggested that oscillations can occur in the innermost region of relativistic accretion disks due to their instability against axisymmetric radial oscillations, possibly due to a magnetic field [@kato]. This has been proposed for X-ray binary systems, but the physical mechanism responsible for these oscillations can be applied to other accretion disk systems like AGN. @perez analyze modes of oscillation in terms of perturbations of the general relativistic equations of motion of perfect fluids within the Kerr metric. They look at the case of a thin accretion disk around a Kerr black hole in order to determine black hole mass and angular momentum for different trapped modes. We propose that a g-mode oscillation of $m\geq3$ is responsible for the 3.3 ks quasi-period in 3C 273. A g-mode (inertial) oscillation can be characterized as a restoring force that is dominated by the net gravitational-centrifugal force. These modes are the most relevant observationally since they appear to occupy the largest area of the disk and hence should be the most observable trapped modes [@perez]. Equation 5.4 of @perez shows that the frequency of a quasi-period should be observed at $$f=714(M_\sun/M)F(a) Hz$$ where $a$ is the angular momentum parameter and $M=M_{AGN}$. For $m=0$ and $a=0$, $F(0)=1$, while $F(a_{max})=3.443$ where $a_{max}=0.998$. This gives a mass that is too low. For $m=3$, $F(a_{max})\sim~59$ (see Figure 5 of @perez) and this gives a mass for 3C 273 of $1.4 \times 10^8 M_\sun$. @perez do not look at modes higher than 3. 
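The numbers quoted above can be reproduced with a short back-of-the-envelope calculation (our own arithmetic, using the standard Keplerian orbital period at the last stable orbit and Equation 5.4 of @perez with $F(a_{max})\sim59$ read off their Figure 5):

```python
import math

P = 3.3e3                                  # observed quasi-period [s]
f = 1.0 / P                                # corresponding frequency [Hz]

# g-mode relation (Eq. 5.4 of Perez et al.): f = 714 (Msun / M) F(a) Hz
M_gmode = 714.0 * 59.0 / f                 # m = 3, a ~ a_max  ->  ~1.4e8 Msun

# Orbital period at the last stable orbit of a non-rotating hole (r = 3 R_S = 6 GM/c^2)
GM_sun_over_c3 = 4.925e-6                  # GM_sun / c^3 in seconds
M_orbital = P / (2 * math.pi * 6 ** 1.5 * GM_sun_over_c3)   # ~7.3e6 Msun

print(f"g-mode mass ~ {M_gmode:.2e} Msun, last-stable-orbit mass ~ {M_orbital:.2e} Msun")
```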
Previously Reported QPOs for AGN in Our Sample {#discussionprevious} ---------------------------------------------- @fiore report a QPO in NGC 4151 around 5.8 ks with $>99\%$ significance based on three EXOSAT observations. @vaub reanalyzed these data sets and found that after fitting the red noise significance and Poisson photon noise components of the spectrum, the QPOs fall below the 95$\%$ threshold. Our [*XMM-Newton*]{} observations show a $\sim$4.8 ks feature. This appears in our 57 ks observation (Obs. ID: 112830201) with 96$\%$ red noise significance and in our 30 ks observation (Obs. ID: 112310101) with 99.4$\%$ red noise (see Fig. \[gwpsall2\]). Even though this signal rises above the 99$\%$ red noise level in this observation, we discount it because it appears as part of a larger power structure in the GWPS and is not a well-defined peak. For NGC 5548, @pap93 claim a 500s QPO in five out of eight EXOSAT observations. @tag96 reanalyzed the same data and found that one observation had detector problems. Also, in every case they found less than 95$\%$ significance by taking into account the uncertainties in modeling the spectrum. In our [*XMM-Newton*]{} data of NGC 5548, we report a 500s feature, but it has only a 93$\%$ red noise significance, and is seen in only one of our two observations (Obs. ID: 089960401, Fig. \[gwpsall2\]). For MKN 766, @boller claim a $\sim$4200s QPO in a 30 ks [*XMM-Newton*]{} observation. In our 128 ks observation, taken a year later, we see a signal at 4200s with 99.5$\%$ red noise significance, but it is dwarfed in the global wavelet power by a much stronger, wider broad peak (see Fig. \[gwpsall2\]), possibly due to a secular change in flux over the observation. We do not detect any significant feature for MCG-6-30-15 [@lee], MKN 421 or PKS 2155-304 [@osone], which have previously reported QPOs. To the best of our knowledge there are no published QPO claims for any of the other objects in our sample: 3C 273, IRAS 13349$+$2438, NGC 3516. @hal03 reported the discovery of a 2.08 day quasi-period in the NLS1 galaxy TonS180 with a 33 day observation taken with the $Extreme$ $Ultraviolet$ $Explorer$ (EUVE). @vau suggest that this periodogram is oversampled and so the significance is overestimated. Our wavelet analysis of this data shows a $\sim~$2 d period in the global wavelet power spectrum which rises above the 99$\%$ white noise level, but it has only 89.5$\%$ red noise significance (Fig. \[gwpsall2\]). Our wavelet analysis finds an $\alpha$ of 0.8 for this observation and the structure function shows that red noise dominates, implying that this 2 d feature should be compared to red noise significance and so is not significant. Summary & Conclusions ===================== We applied the wavelet analysis technique to [*XMM-Newton*]{} observations of 10 AGN and detected a candidate 3.3 ks period in 3C 273. The cross wavelet transform shows that the 3.3 ks signal is present throughout the entire observation. If the 3.3 ks quasi-period in 3C 273 represents an orbital timescale originating near a last stable orbit, it implies a central black hole mass of at least $7.3\times 10^6$ M$_\sun$ which does not agree with reverberation mapping mass estimates. @kas00 estimate the mass of the black hole in 3C 273 at $2.35\times 10^8$ M$_\sun$ and @paltani find a mass of $6.59 \times10^{9}$ M$_\sun$. This suggests that this X-ray quasi-period is not caused by dynamical motion in the inner accretion disk. 
We suggest that oscillations with modes of three or higher are occurring in the accretion disk of 3C 273, producing the detected 3.3 ks quasi-period. @perez shows that for $m\geq3$ and maximum angular momentum one can obtain a mass for 3C 273 of $1.4 \times 10^8$ M$_\sun$, consistent with the lower mass estimate obtained from reverberation mapping. We thank the anonymous referee for constructive comments that helped improve the paper. We thank Margo Aller, Robert Fender, Luis Ho, Jereon Homan, Jon Miller, and Simon Vaughan for useful discussions. JNB would like to acknowledge support from NASA for these activities, through the Long Term Space Astrophysics grant NAG5-10765. Abramowicz, M.A., Kluzniak, W., McClintock, J.E., Remillard, R. A. 2004, ApJ, 609, L63 Devereux, N., Ford, H., Tsvetanov, Z., & Jacoby, G. 2003, , 125, 1226 Boller, Th., et al. 2001, A&A, 365, L146 Edelson, R. & Nandra, K. 1999, , 514, 682 Farge, M. 1992, Ann. Rev. Fluid Mech., 24, 395 Fiore, F., et al. 1989, , 347, 171 Foster, G. 1996, , 112, 1709 Halpern, J.P., Leighly, K.M., & Marshall, H.L. 2003, , 585, 665 Hughes, P. A., Aller, H.D., & Aller, M.F. 1998, , 503, 662 Iwasawa, K., Fabian A.C., Brandt, W.N., et al. 1998, , 295, L20 Johnson, R. W. 2006, ArXiv Physics e-prints, arXiv:physics/0604211 Kadler, M., Hughes, P. A., Ros, E., Aller, M. F., & Aller, H. D. 2006, , 456, L1 Kaspi, S., Smith, P.S., Netzer, H., Maoz, D., Jannuzi, B. T., & Giveon, U. 2000, , 533, 631 Kato, S., & Fukue, J. 1980, , 32, 377 Kelly, B. C., Hughes, P.A., Aller, H.D., & Aller, M.F. 2003, , 591, 695 Lee, J.C., et al. 2000, , 318, 857 Liang, E. W. & Liu, H. T. 2003, , 340, 632 Liu, J.-F., Bregman, J. N., Lloyd-Davies, E., Irwin, J., Espaillat, C., & Seitzer, P. 2005, , 621, L17 Markowitz, A., Edelson, R., Vaughan, S., Uttley, P., George, I.M., Griffiths, R.E., Kaspi, S., Lawrence, A., McHardy, I., Nandra, K., Pounds, K., Reeves, J., Schurch, N., & Warwick, R. 2003, , 593, 96 McHardy, I.M., Papadakis, I.E., Uttley, P., Page, M.J., Mason, K.O. 2004, MNRAS, 348, 783 McHardy, I.M., Gunn, K.F., Uttley, P., & Goad, M.R. 2005, , 359, 1469 McLure, R.J., & Dunlop, J.S. 2001, MNRAS, 327, 199 Orosz, J.A., & Bailyn, C.D. 1997, , 477, 876 Osone, S., Teshima, M., & Mase, K., 2001, preprint(astro-ph/0106223) Paltani, S. & Tűrler, M., 2005, A&A, 435, 811 Papadakis I.E., Lawrence A., 1993, Nature, 361, 233 Perez, C.A., et al. 1997, , 476, 589 Peterson, B. M., Wandel, A. 2000, , 540, L13 Pian, E., Falamo, R., & Treves, A., 2005, , 361, 919 Press, W.H. 1978, Comments on Astrophysics, 7, 103 Remillard, R.A., Morgan, E.H., McClintock, J.E., Bailyn, C.D., & Orosz, J.A. 1999, , 522, 397 Remillard, R.A., Sobczak, G.J., Muno, M.P., & McClintock, J.E. 2002, , 564, 962 Simonetti, J.H., Cordes, J.M., & Heeschen, D.S. 1985, , 296, 46 Tagliaferri G., et al. 1996, , 465, 181 Torrence, C. & Compo, G. 1998, Bull. Am. Meteorol. Soc., 79, 61 Uttley, P., McHardy, I.M., Papadakis, I.E.,2002, MNRAS, 332, 231 van der Klis, M. 2000, , 38, 717 Vaughan, S., Fabian, A. C., & Nandra, K. 2003 , 339, 1237 Vaughan, S. 2005, A&A, 431, 391 Vaughan, S., & Uttley, P. 2005, , 362, 235 Vaughan, S., & Uttley, P. 2006, Advances in Space Research, 38, 1405 Wang, J., Luo, B., & Ho, L. 2004, , 615, L9 Woo, J. & Urry, C. M. 2002, , 579, 530 0.5in 0.5in 0.5in ![Effect of pile-up on structure functions of MKN 421 Obs. ID: 0099280101. The flat portion of the structure function curve affected by pile-up (left panel) falls below $log(SF)=1$, which corresponds to Poisson photon noise. 
In the right panel the central core of the source has been removed (i.e. pile-up is greatly reduced) and the SF curve is then at the expected value for Poisson photon noise (right panel).\[structpileup\]](fig5a.convert.eps "fig:") ![Right panel of Fig. \[structpileup\]; caption as above.\[structpileup\]](fig5b.convert.eps "fig:")

![Light curves for the 10 AGN in our sample. All observations are in the energy range 0.75 to 10 keV. Most observations are binned to 100s except for MKN 421, NGC 5548 (Obs ID: 089960401) and IRAS 13349+2438 which are binned to 50s.\[lightcurves1\]](fig14a.convert.eps "fig:") ![Continued; caption as above.\[lightcurves1\]](fig14b.convert.eps "fig:")

![Light curves for the 10 AGN in our sample (continued); caption as for Fig. \[lightcurves1\].\[lightcurves2\]](fig15a.convert.eps "fig:") ![Continued; caption as above.\[lightcurves2\]](fig15b.convert.eps "fig:")

[lcccc]{} Object & Obs. ID & Date & Duration (ks) & Avg. counts\
3C 273 & 126700301 & 2000-06-13 & 66 & 140\
3C 273 & 126700801 & 2000-06-17 & 60.6 & 120\
3C 273 & 136550101 & 2003-01-05 & 88.6 & 160\
3C 273 & 159960101 & 2003-07-07 & 58 & 230\
IRAS 13349+2438 & 096010101 & 2000-06-20 & 44.6 & 10\
M81 & 111800101 & 2001-04-22 & 130 & 20\
MCG-6-30-15 & 029740701 & 2001-08-02 & 127 & 70\
MCG-6-30-15 & 029740801 & 2001-08-04 & 125 & 120\
MKN 421 & 099280101 & 2000-05-25 & 32.5 & 740\
MKN 421 & 099280301 & 2000-11-13 & 46.6 & 1060\
MKN 766 & 109141301 & 2001-05-20 & 128.5 & 90\
NGC 3516 & 107460601 & 2001-04-10 & 129 & 20\
NGC 3516 & 107460701 & 2001-11-09 & 128 & 12\
NGC 4151 & 112310101 & 2000-12-21 & 30 & 20\
NGC 4151 & 112830201 & 2000-12-22 & 57 & 25\
NGC 5548 & 089960301 & 2001-07-09 & 93.4 & 75\
NGC 5548 & 089960401 & 2001-07-12 & 37 & 90\
PKS 2155-304 & 124930201 & 2000-05-31 & 59 & 280\
PKS 2155-304 & 124930301 & 2001-11-30 & 44.6 & 380\

[lcccc]{} Object & Obs. ID & SF slope(s) $\alpha$ & Turnover plateau$\rightarrow$power-law (s) & Turnover power-law$\rightarrow$plateau (s)\
3C 273 & 126700301 & 1.23 & $1.5\times10^4$ & -\
3C 273 & 126700801 & 2.09 & $1.5\times10^4$ & -\
3C 273 & 136550101 & - & - & -\
3C 273 & 159960101 & 1.68 & $2\times10^4$ & -\
IRAS 13349+2438 & 096010101 & 1.68 & 1000 & -\
M81 & 111800101 & 0.95 & 7000 & -\
MCG-6-30-15 & 029740701 & 0.82 & - & $10^4$\
MCG-6-30-15 & 029740801 & 1.11 & 200 & -\
MKN 421 & 099280101 & 1.25 & 400 & -\
MKN 421 & 099280301 & 1.13 & 350 & 7000\
MKN 766 & 109141301 & 0.67 & 100 & $3\times10^4$\
NGC 3516 & 107460601 & 1.18, 1.39 & 2000 & -\
NGC 3516 & 107460701 & 1.86 & $2\times10^4$ & -\
NGC 4151 & 112310101 & - & - & -\
NGC 4151 & 112830201 & 1.19 & 7000 & -\
NGC 5548 & 089960301 & 0.99, 1.64 & 1200 & -\
NGC 5548 & 089960401 & 0.097, 2.75 & 1000 & -\
PKS 2155-304 & 124930201 & 0.99, 0.59, 0.45, 1.69 & 2000 & -\
PKS 2155-304 & 124930301 & 1.59 & 1100 & -\
---
abstract: 'With the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as in surveillance, computational photography, medical imaging, and remote sensing. Recently, convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task. Existing CNN-based methods typically operate either on full-resolution or on progressively low-resolution representations. In the former case, spatially precise but contextually less robust results are achieved, while in the latter case, semantically reliable but spatially less accurate outputs are generated. In this paper, we present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network, and receiving strong contextual information from the low-resolution representations. The core of our approach is a multi-scale residual block containing several key elements: (a) parallel multi-resolution convolution streams for extracting multi-scale features, (b) information exchange across the multi-resolution streams, (c) spatial and channel attention mechanisms for capturing contextual information, and (d) attention-based multi-scale feature aggregation. In a nutshell, our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. Extensive experiments on five real image benchmark datasets demonstrate that our method, named MIRNet, achieves state-of-the-art results for a variety of image processing tasks, including image denoising, super-resolution and image enhancement.'
author:
- Syed Waqas Zamir
- Aditya Arora
- Salman Khan
- Munawar Hayat
- Fahad Shahbaz Khan
- 'Ming-Hsuan Yang'
- Ling Shao
bibliography:
- 'MIRNet.bib'
title: Learning Enriched Features for Real Image Restoration and Enhancement
---

Introduction
============

Image content is exponentially growing due to the ubiquitous presence of cameras on various devices. During image acquisition, degradations of different severities are often introduced, either because of the physical limitations of cameras or due to inappropriate lighting conditions. For instance, smartphone cameras come with a narrow aperture and have small sensors with limited dynamic range. Consequently, they frequently generate noisy and low-contrast images. Similarly, images captured under unsuitable lighting are either too dark or too bright. The art of recovering the original clean image from its corrupted measurements is studied under the image restoration task. It is an ill-posed inverse problem, due to the existence of many possible solutions.

Recently, deep learning models have made significant advancements for image restoration and enhancement, as they can learn strong (generalizable) priors from large-scale datasets. Existing CNNs typically follow one of two architecture designs: 1) an encoder-decoder, or 2) high-resolution (single-scale) feature processing. The encoder-decoder models [@ronneberger2015u; @kupyn2019deblurgan; @chen2018; @zhang2019kindling] first progressively map the input to a low-resolution representation, and then apply a gradual reverse mapping to the original resolution. Although these approaches learn a broad context by spatial-resolution reduction, on the downside, the fine spatial details are lost, making it extremely hard to recover them in the later stages.
On the other hand, the high-resolution (single-scale) networks [@dong2015image; @DnCNN; @zhang2020residual; @ignatov2017dslr] do not employ any downsampling operation, and thereby produce images with spatially more accurate details. However, these networks are less effective at encoding contextual information due to their limited receptive field.

Image restoration is a position-sensitive procedure, where pixel-to-pixel correspondence from the input image to the output image is needed. Therefore, it is important to remove only the undesired degraded image content, while carefully preserving the desired fine spatial details (such as true edges and texture). Such functionality for segregating the degraded content from the true signal can be better incorporated into CNNs with the help of large context, *e.g.*, by enlarging the receptive field. Towards this goal, we develop a new *multi-scale* approach that maintains the original high-resolution features along the network hierarchy, thus minimizing the loss of precise spatial details. Simultaneously, our model encodes multi-scale context by using *parallel convolution streams* that process features at lower spatial resolutions. The multi-resolution parallel branches operate in a manner that is complementary to the main high-resolution branch, thereby providing us with more precise and contextually enriched feature representations.

The main difference between our method and existing multi-scale image processing approaches is the way we aggregate contextual information. First, the existing methods [@tao2018scale; @nah2017; @gu2019self] process each scale in isolation, and exchange information only in a top-down manner. In contrast, we progressively fuse information across all the scales at each resolution level, allowing both top-down and bottom-up information exchange. Simultaneously, both fine-to-coarse and coarse-to-fine knowledge exchange is laterally performed on each stream by a new *selective kernel* fusion mechanism. Different from existing methods that employ a simple concatenation or averaging of features coming from multi-resolution branches, our fusion approach dynamically selects the useful set of kernels from each branch's representations using a self-attention approach. More importantly, the proposed fusion block combines features with varying receptive fields, while preserving their distinctive complementary characteristics.

Our main contributions in this work include:

- A novel feature extraction model that obtains a complementary set of features across multiple spatial scales, while maintaining the original high-resolution features to preserve precise spatial details.

- A regularly repeated mechanism for information exchange, where the features across multi-resolution branches are progressively fused together for improved representation learning.

- A new approach to fuse multi-scale features using a selective kernel network that dynamically combines variable receptive fields and faithfully preserves the original feature information at each spatial resolution.

- A recursive residual design that progressively breaks down the input signal in order to simplify the overall learning process, and allows the construction of very deep networks.

- Comprehensive experiments are performed on five real image benchmark datasets for different image processing tasks, including image denoising, super-resolution and image enhancement. Our method achieves state-of-the-art results on *all* five datasets.
Furthermore, we extensively evaluate our approach on practical challenges, such as generalization ability across datasets. Related Work ============ With the rapidly growing image media content, there is a pressing need to develop effective image restoration and enhancement algorithms. In this paper, we propose a new approach capable of performing image denoising, super-resolution and image enhancement. Different from existing works for these problems, our approach processes features at the original resolution in order to preserve spatial details, while effectively fuses contextual information from multiple parallel branches. Next, we briefly describe the representative methods for each of the studied problems. **Image denoising.** Classic denoising methods are mainly based on modifying transform coefficients [@yaroslavsky1996local; @donoho1995noising; @simoncelli1996noise] or averaging neighborhood pixels [@smith1997susan; @tomasi1998bilateral; @perona1990scale; @rudin1992nonlinear]. Although the classical methods perform well, the self-similarity [@efros1999texture] based algorithms, *e.g.*, NLM [@NLM] and BM3D [@BM3D], demonstrate promising denoising performance. Numerous patch-based algorithms that exploit redundancy (self-similarity) in images are later developed [@dong2012nonlocal; @WNNM; @mairal2009non; @hedjam2009markovian]. Recently, deep learning-based approaches [@MLP; @RIDNet; @Brooks2019; @Gharbi2016; @CBDNet; @N3Net; @DnCNN; @FFDNetPlus] make significant advances in image denoising, yielding favorable results than those of the hand-crafted methods. **Super-resolution (SR).** Prior to the deep-learning era, numerous SR algorithms have been proposed based on the sampling theory [@keys1981cubic; @irani1991improving], edge-guided interpolation [@allebach1996edge; @li2001new; @zhang2006edge], natural image priors [@kim2010single; @xiong2010robust], patch-exemplars [@chang2004super; @freedman2011image] and sparse representations [@yang2010image; @yang2008image]. Currently, deep-learning techniques are actively being explored, as they provide dramatically improved results over conventional algorithms. The data-driven SR approaches differ according to their architecture designs [@wang2019deep; @anwar2019deep]. Early methods [@dong2014learning; @dong2015image] take a low-resolution (LR) image as input and learn to directly generate its high-resolution (HR) version. In contrast to directly producing a latent HR image, recent SR networks [@VDSR; @tai2017memnet; @tai2017image; @hui2018fast] employ the residual learning framework [@He2016] to learn the high-frequency image detail, which is later added to the input LR image to produce the final super-resolved result. Other networks designed to perform SR include recursive learning [@kim2016deeply; @han2018image; @ahn2018fast], progressive reconstruction [@wang2015deep; @Lai2017], dense connections [@tong2017image; @wang2018esrgan; @zhang2020residual], attention mechanisms [@RCAN; @dai2019second; @zhang2019residual], multi-branch learning [@Lai2017; @EDSR; @dahl2017pixel; @li2018multi], and generative adversarial models [@wang2018esrgan; @park2018srfeat; @sajjadi2017enhancenet; @SRResNet]. **Image enhancement.** Oftentimes, cameras provide images that are less vivid and lack contrast. A number of factors contribute to the low quality of images, including unsuitable lighting conditions and physical limitations of camera devices. For image enhancement, histogram equalization is the most commonly used approach. 
However, it frequently produces under-enhanced or over-enhanced images. Motivated by the Retinex theory [@land1977retinex], several enhancement algorithms mimicking human vision have been proposed in the literature [@bertalmio2007; @palma2008perceptually; @jobson1997multiscale; @rizzi2004retinex]. Recently, CNNs have been successfully applied to general, as well as low-light, image enhancement problems. Notable works employ Retinex-inspired networks [@Shen2017; @wei2018deep; @zhang2019kindling], encoder-decoder networks [@chen2018encoder; @Lore2017; @ren2019low], and generative adversarial networks [@chen2018deep; @ignatov2018wespe; @deng2018aesthetic].

![Framework of the proposed network MIRNet that learns enriched feature representations for image restoration and enhancement. MIRNet is based on a recursive residual design. At the core of MIRNet is the multi-scale residual block (MRB), whose main branch is dedicated to maintaining spatially-precise high-resolution representations through the entire network, while the complementary set of parallel branches provides better contextualized features. It also allows information exchange across parallel streams via selective kernel feature fusion (SKFF) in order to consolidate the high-resolution features with the help of low-resolution features, and vice versa.[]{data-label="fig:framework"}](Images/framework.png "fig:"){width="\textwidth"}

Proposed Method
===============

In this section, we first present an overview of the proposed MIRNet for image restoration and enhancement, illustrated in Fig. \[fig:framework\].
We then provide details of the *multi-scale residual block*, which is the fundamental building block of our method, containing several key elements: **(a)** parallel multi-resolution convolution streams for extracting (fine-to-coarse) semantically-richer and (coarse-to-fine) spatially-precise feature representations, **(b)** information exchange across multi-resolution streams, **(c)** attention-based aggregation of features arriving from multiple streams, **(d)** dual-attention units to capture contextual information in both spatial and channel dimensions, and **(e)** residual resizing modules to perform downsampling and upsampling operations. **Overall Pipeline.** Given an image $\mathbf{I} \in \mathbb{R}^{H\times W \times 3}$, the network first applies a convolutional layer to extract low-level features $\mathbf{X_0} \in \mathbb{R}^{H\times W \times C}$. Next, the feature maps $\mathbf{X_0}$ pass through $N$ number of recursive residual groups (RRGs), yielding deep features $\mathbf{X_d} \in \mathbb{R}^{H\times W \times C}$. We note that each RRG contains several multi-scale residual blocks, which is described in Section \[sec:msrb\]. Next, we apply a convolution layer to deep features $\mathbf{X_d}$ and obtain a residual image $\mathbf{R} \in \mathbb{R}^{H\times W \times 3}$. Finally, the restored image is obtained as $\mathbf{\hat{I}} = \mathbf{I} + \mathbf{R}$. We optimize the proposed network using the Charbonnier loss [@charbonnier1994]: $$\label{Eq:loss} \mathcal{L}(\mathbf{\hat{I}},\mathbf{I}^*) = \sqrt{ {\|\mathbf{\hat{I}}-\mathbf{I}^*\|}^2 + {\varepsilon}^2 },$$ where $\mathbf{I}^*$ denotes the ground-truth image, and $\varepsilon$ is a constant which we empirically set to $10^{-3}$ for all the experiments. Multi-scale Residual Block (MRB) {#sec:msrb} -------------------------------- In order to encode context, existing CNNs [@ronneberger2015u; @newell2016stacked; @noh2015learning; @xiao2018simple; @badrinarayanan2017segnet; @peng2016recurrent] typically employ the following architecture design: **(a)** the receptive field of neurons is fixed in *each* layer/stage, **(b)** the spatial size of feature maps is *gradually* reduced to generate a semantically strong low-resolution representation, and **(c)** a high-resolution representation is *gradually* recovered from the low-resolution representation. However, it is well-understood in vision science that in the primate visual cortex, the sizes of the local receptive fields of neurons in the same region are different [@hubel1962receptive; @riesenhuber1999hierarchical; @serre2007robust; @hung2005fast]. Therefore, such a mechanism of collecting multi-scale spatial information in the same layer needs to be incorporated in CNNs [@huang2017multi; @hrnet; @fourure2017residual; @Szegedy2015]. In this paper, we propose the multi-scale residual block (MRB), as shown in Fig. \[fig:framework\]. It is capable of generating a spatially-precise output by maintaining high-resolution representations, while receiving rich contextual information from low-resolutions. The MRB consists of multiple (three in this paper) fully-convolutional streams connected in parallel. It allows information exchange across parallel streams in order to consolidate the high-resolution features with the help of low-resolution features, and vice versa. Next, we describe individual components of MRB. 
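Before describing those components, the overall pipeline and the Charbonnier loss defined above can be summarized in a schematic PyTorch-style sketch (our own simplification: `PlaceholderRRG` merely stands in for a recursive residual group built from the MRBs detailed next, and the loss is averaged per pixel as is common in practice):

```python
import torch
import torch.nn as nn

class PlaceholderRRG(nn.Module):
    """Stand-in for a recursive residual group (a stack of MRBs); internals omitted in this sketch."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv(x)

class MIRNetSketch(nn.Module):
    """Schematic overall pipeline: conv -> N RRGs -> conv -> global residual (I_hat = I + R)."""
    def __init__(self, channels=64, num_rrg=3):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)    # extract low-level features X_0
        self.body = nn.Sequential(*[PlaceholderRRG(channels) for _ in range(num_rrg)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)    # map deep features X_d to residual R

    def forward(self, img):
        return img + self.tail(self.body(self.head(img)))

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss sqrt((I_hat - I*)^2 + eps^2), averaged over pixels."""
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))
```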
**Selective kernel feature fusion (SKFF).** One fundamental property of neurons present in the visual cortex is to be able to change their receptive fields according to the stimulus [@li2019selective]. This mechanism of adaptively adjusting receptive fields can be incorporated in CNNs by using multi-scale feature generation (in the same layer) followed by feature aggregation and selection. The most commonly used approaches for feature aggregation include simple concatenation or summation. However, these choices provide limited expressive power to the network, as reported in [@li2019selective]. In MRB, we introduce a nonlinear procedure for fusing features coming from multiple resolutions using a self-attention mechanism. Motivated by [@li2019selective], we call it selective kernel feature fusion (SKFF). The SKFF module performs dynamic adjustment of receptive fields via two operations –[*Fuse* and *Select*, as illustrated in Fig. \[fig:skff\]]{}. The *fuse* operator generates global feature descriptors by combining the information from multi-resolution streams. The *select* operator uses these descriptors to recalibrate the feature maps (of different streams) followed by their aggregation. Next, we provide details of both operators for the three-stream case, but one can easily extend it to more streams. **(1) Fuse:** SKFF receives inputs from three parallel convolution streams carrying different scales of information. We first combine these multi-scale features using an element-wise sum as: $\mathbf{L = L_1 + L_2 + L_3}$. We then apply global average pooling (GAP) across the spatial dimension of $\mathbf{L} \in \mathbb{R}^{H\times W \times C}$ to compute channel-wise statistics $\mathbf{s} \in \mathbb{R}^{1\times 1 \times C}$. Next, we apply a channel-downscaling convolution layer to generate a compact feature representation $\mathbf{z} \in \mathbb{R}^{1\times 1 \times r}$, where $r=\frac{C}{8}$ for all our experiments. Finally, the feature vector $\mathbf{z}$ passes through three parallel channel-upscaling convolution layers (one for each resolution stream) and provides us with three feature descriptors $\mathbf{v_1}, \mathbf{v_2}$ and $\mathbf{v_3}$, each with dimensions $1\times1\times C$. **(2) Select:** this operator applies the softmax function to $\mathbf{v_1}, \mathbf{v_2}$ and $\mathbf{v_3}$, yielding attention activations $\mathbf{s_1}, \mathbf{s_2}$ and $\mathbf{s_3}$ that we use to adaptively recalibrate multi-scale feature maps $\mathbf{L_1}, \mathbf{L_2}$ and $\mathbf{L_3}$, respectively. The overall process of feature recalibration and aggregation is defined as: $\mathbf{U = s_1 \cdot L_1 + s_2\cdot L_2 + s_3 \cdot L_3}$. Note that the SKFF uses $\sim6\times$ fewer parameters than aggregation with concatenation but generates more favorable results (an ablation study is provided in experiments section). **Dual attention unit (DAU).** While the SKFF block fuses information across multi-resolution branches, we also need a mechanism to share information within a feature tensor, both along the spatial and the channel dimensions. Motivated by the advances of recent low-level vision methods [@RCAN; @RIDNet; @dai2019second; @zhang2019residual] based on the attention mechanisms [@hu2018squeeze; @wang2018non], we propose the dual attention unit (DAU) to extract features in the convolutional streams. The schematic of DAU is shown in Fig. \[fig:dau\]. The DAU suppresses less useful features and only allows more informative ones to pass further. 
This feature recalibration is achieved by using channel attention [@hu2018squeeze] and spatial attention [@woo2018cbam] mechanisms. **(1) Channel attention (CA)** branch exploits the inter-channel relationships of the convolutional feature maps by applying *squeeze* and *excitation* operations [@hu2018squeeze]. Given a feature map $\mathbf{M} \in \mathbb{R}^{H\times W \times C}$, the squeeze operation applies global average pooling across spatial dimensions to encode global context, thus yielding a feature descriptor $\mathbf{d} \in \mathbb{R}^{1\times 1 \times C}$. The excitation operator passes $\mathbf{d}$ through two convolutional layers followed by the sigmoid gating and generates activations $\mathbf{\hat{d}} \in \mathbb{R}^{1\times 1 \times C}$. Finally, the output of CA branch is obtained by rescaling $\mathbf{M}$ with the activations $\mathbf{\hat{d}}$. **(2) Spatial attention (SA)** branch is designed to exploit the inter-spatial dependencies of convolutional features. The goal of SA is to generate a spatial attention map and use it to recalibrate the incoming features $\mathbf{M}$. To generate the spatial attention map, the SA branch first independently applies global average pooling and max pooling operations on features $\mathbf{M}$ along the channel dimensions and concatenates the outputs to form a feature map $\mathbf{f} \in \mathbb{R}^{H\times W \times 2}$. The map $\mathbf{f}$ is passed through a convolution and sigmoid activation to obtain the spatial attention map $\mathbf{\hat{f}} \in \mathbb{R}^{H\times W \times 1}$, which we then use to rescale $\mathbf{M}$. **Residual resizing modules.** The proposed framework employs a recursive residual design (with skip connections) to ease the flow of information during the learning process. In order to maintain the residual nature of our architecture, we introduce residual resizing modules to perform downsampling (Fig. \[fig:downsample\]) and upsampling (Fig. \[fig:upsample\]) operations. In MRB, the size of feature maps remains constant along convolution streams. On the other hand, across streams the feature map size changes depending on the input resolution index $i$ and the output resolution index $j$. If $i<j$, the input feature tensor is downsampled, and if $i>j$, the feature map is upsampled. To perform $2\times$ downsampling (halving the spatial dimension and doubling the channel dimension), we apply the module in Fig. \[fig:downsample\] only once. For $4\times$ downsampling, the module is applied twice, consecutively. Similarly, one can perform $2\times$ and $4\times$ upsampling by applying the module in Fig. \[fig:upsample\] once and twice, respectively. Note in Fig. \[fig:downsample\], we integrate anti-aliasing downsampling [@zhang2019making] to improve the shift-equivariance of our network. [0.49]{} ![image](Images/downsample.png){width="\textwidth"} [0.49]{} ![image](Images/upsample.png){width="\textwidth"} Experiments =========== In this section, we perform qualitative and quantitative assessment of the results produced by our MIRNet and compare it with the previous best methods. Next, we describe the datasets, and then provide the implementation details. Finally, we report results for **(a)** image denoising, **(b)** super-resolution and **(c)** image enhancement on five real image datasets. The source code and trained models will be released publicly[^1]. Real Image Datasets ------------------- **Image denoising.** **(1) DND [@dnd]** consists of $50$ images captured with four consumer cameras. 
Since the images are of very high-resolution, the dataset providers extract $20$ crops of size $512\times512$ from each image, yielding $1000$ patches in total. All these patches are used for testing (as DND does not contain training or validation sets). The ground-truth noise-free images are not released publicly, therefore the image quality scores in terms of PSNR and SSIM can only be obtained through an online server [@dndwebsite]. **(2) SIDD [@sidd]** is particularly collected with smartphone cameras. Due to the small sensor and high-resolution, the noise levels in smartphone images are much higher than those of DSLRs. SIDD contains $320$ image pairs for training and $1280$ for validation. **Super-resolution.** **(1) RealSR [@RealSR]** contains real-world LR-HR image pairs of the same scene captured by adjusting the focal-length of the cameras. RealSR have both indoor and outdoor images taken with two cameras. The number of training image pairs for scale factors $\times2$, $\times3$ and $\times4$ are $183$, $234$ and $178$, respectively. For each scale factor, $30$ test images are also provided in RealSR. **Image enhancement.** **(1) LoL [@wei2018deep]** is created for low-light image enhancement problem. It provides 485 images for training and 15 for testing. Each image pair in LoL consists of a low-light input image and its corresponding well-exposed reference image. **(2) MIT-Adobe FiveK [@mit_fivek]** contains $5000$ images of various indoor and outdoor scenes captured with several DSLR cameras in different lighting conditions. The tonal attributes of all images are manually adjusted by five different trained photographers (labelled as experts A to E). Same as in [@hu2018exposure; @park2018distort; @wang2019underexposed], we also consider the enhanced images of expert C as the ground-truth. Moreover, the first 4500 images are used for training and the last 500 for testing. Implementation Details ---------------------- The proposed architecture is end-to-end trainable and requires no pre-training of sub-modules. We train three different networks for three different restoration tasks. The training parameters, common to all experiments, are the following. We use 3 RRGs, each of which further contains $2$ MRBs. The MRB consists of $3$ parallel streams with channel dimensions of $64, 128, 256$ at resolutions $1, \frac{1}{2}, \frac{1}{4}$, respectively. Each stream has $2$ DAUs. The models are trained with the Adam optimizer ($\beta_1 = 0.9$, and $\beta_2=0.999$) for $7\times10^5$ iterations. The initial learning rate is set to $2\times10^{-4}$. We employ the cosine annealing strategy [@loshchilov2016sgdr] to steadily decrease the learning rate from initial value to $10^{-6}$ during training. We extract patches of size $128\times128$ from training images. The batch size is set to $16$ and, for data augmentation, we perform horizontal and vertical flips. Image Denoising --------------- In this section, we demonstrate the effectiveness of the proposed MIRNet for image denoising. We train our network only on the training set of the SIDD [@sidd] and directly evaluate it on the test images of both SIDD and DND [@dnd] datasets. Quantitative comparisons in terms of PSNR and SSIM metrics are summarized in Table \[table:sidd\] and Table \[table:dnd\] for SIDD and DND, respectively. Both tables show that our MIRNet performs favourably against the data-driven, as well as conventional, denoising algorithms. 
Specifically, when compared to the recent best method VDN [@VDN], our algorithm demonstrates a performance gain of $0.44$ dB on SIDD and $0.50$ dB on DND. Furthermore, it is worth noting that CBDNet [@CBDNet] and RIDNet [@RIDNet] use additional training data, yet our method provides significantly better results. For instance, our method achieves an $8.94$ dB improvement over CBDNet [@CBDNet] on the SIDD dataset and $1.82$ dB on DND. In Fig. \[fig:dnd example\] and Fig. \[fig:sidd example\], we present visual comparisons of our results with those of other competing algorithms. It can be seen that our MIRNet is effective in removing real noise and produces perceptually-pleasing and sharp images. Moreover, it is capable of maintaining the spatial smoothness of the homogeneous regions without introducing artifacts. In contrast, most of the other methods either yield over-smooth images and thus sacrifice structural content and fine textural details, or produce images with chroma artifacts and blotchy texture.

![Denoising examples from SIDD [@sidd]. Our method effectively removes real noise from challenging images, while better recovering structural content and fine texture. Panels (left to right): Noisy, CBDNet [@CBDNet], RIDNet [@RIDNet], VDN [@VDN], MIRNet (Ours), Reference. PSNR (Noisy/CBDNet/RIDNet/VDN/MIRNet) for the two example crops: 18.25/28.84/35.57/36.39/**36.97** dB and 18.16/20.36/29.83/30.31/**31.36** dB.[]{data-label="fig:sidd example"}](Images/Denoising/SIDD/RGB/SSID_0324_noisy "fig:"){width=".16\textwidth"}

**Generalization capability.** The DND and SIDD datasets are acquired with different sets of cameras having different noise characteristics. Since the DND benchmark does not provide training data, setting a new state-of-the-art on DND with our SIDD-trained network indicates the good generalization capability of our approach.

Super-Resolution (SR)
---------------------

We compare our MIRNet against the state-of-the-art SR algorithms (VDSR [@VDSR], SRResNet [@SRResNet], RCAN [@RCAN], LP-KPN [@RealSR]) on the testing images of RealSR [@RealSR] for upscaling factors of $\times2$, $\times3$ and $\times4$. Note that all the benchmarked algorithms are trained on the RealSR [@RealSR] dataset for fair comparison. In the experiments, we also include bicubic interpolation [@keys1981cubic], which is the most commonly used method for generating super-resolved images. Here, we compute the PSNR and SSIM scores using the Y channel (in YCbCr color space), as is common practice in the SR literature [@RCAN; @RealSR; @wang2019deep; @anwar2019deep]. The results in Table \[table:realSR\] show that the bicubic interpolation provides the least accurate results, thereby indicating its low suitability for dealing with real images. Moreover, the same table shows that the recent method LP-KPN [@RealSR] provides a marginal improvement of only $\sim0.04$ dB over the previous best method RCAN [@RCAN].
In contrast, our method significantly advances the state of the art and consistently yields better image quality scores than other approaches for all three scaling factors. In particular, compared to LP-KPN [@RealSR], our method provides performance gains of $0.45$ dB, $0.74$ dB, and $0.22$ dB for scaling factors $\times2$, $\times3$ and $\times4$, respectively. The trend is similar for the SSIM metric as well. Visual comparisons in Fig. \[fig:sr example\] show that our MIRNet recovers content structures effectively. In contrast, VDSR [@VDSR], SRResNet [@SRResNet] and RCAN [@RCAN] reproduce results with noticeable artifacts. Furthermore, LP-KPN [@RealSR] is not able to preserve structures (see near the right edge of the crop). Several more examples are provided in Fig. \[fig:sr crop examples\] to further compare the image reproduction quality of our method against the previous best method [@RealSR]. It can be seen that LP-KPN [@RealSR] has a tendency to over-enhance the contrast (cols. 1, 3, 4), which in turn causes loss of detail near dark and highlight areas. In contrast, the proposed MIRNet successfully reconstructs structural patterns and edges (col. 2) and produces images that are natural (cols. 1, 4) and have better color reproduction (col. 5).

**Cross-camera generalization.** The RealSR [@RealSR] dataset consists of images taken with Canon and Nikon cameras at three scaling factors. To test the cross-camera generalizability of our method, we train the network on the training images of one camera and directly evaluate it on the test set of the other camera. Table \[table:realSR generalization\] compares the generalization of competing methods for four possible cases: (a) training and testing on Canon, (b) training on Canon, testing on Nikon, (c) training and testing on Nikon, and (d) training on Nikon, testing on Canon. It can be seen that, for all scales, LP-KPN [@RealSR] and RCAN [@RCAN] show comparable performance. In contrast, our MIRNet exhibits more promising generalization.

Image Enhancement
-----------------

In this section, we demonstrate the effectiveness of our algorithm by evaluating it for the image enhancement task. We report PSNR/SSIM values of our method and several other techniques in Table \[table:lol\] and Table \[table:fivek\] for the LoL [@wei2018deep] and MIT-Adobe FiveK [@mit_fivek] datasets, respectively. It can be seen that our MIRNet achieves significant improvements over previous approaches. Notably, when compared to the recent best methods, MIRNet obtains a $3.27$ dB performance gain over KinD [@zhang2019kindling] on the LoL dataset and a $0.69$ dB improvement over DeepUPE [@wang2019underexposed] on the MIT-Adobe FiveK dataset. We show visual results in Fig. \[Fig:qual\_lol\] and Fig. \[Fig:qual\_fivek\]. Compared to other techniques, our method generates enhanced images that are natural and vivid in appearance and have better global and local contrast.
[Figure: Visual comparison of low-light enhancement approaches on the LoL dataset [@wei2018deep]. Our method reproduces an image that is visually closer to the ground-truth in terms of brightness and global contrast. Panels: Input image, LIME [@guo2016lime], CRM [@ying2017bio], Retinex-Net [@wei2018deep], SRIE [@fu2016weighted], KinD [@zhang2019kindling], MIRNet (Ours), Ground-truth.]{data-label="Fig:qual_lol"}

[Figure: Visual results of image enhancement on the MIT-Adobe FiveK [@mit_fivek] dataset. Compared to the state-of-the-art, our MIRNet makes better color and contrast adjustments and produces an image that is vivid, natural and pleasant in appearance. Panels: Input image, HDRNet [@Gharbi2017], DPE [@chen2018deep], DeepUPE [@wang2019underexposed], MIRNet (Ours), Ground-truth.]{data-label="Fig:qual_fivek"}

Ablation Studies
================

In this section we study the impact of each of our architectural components and design choices on the final performance. All the ablation experiments are performed for the super-resolution task with a $\times3$ scale factor. Table \[table:ablation main\] shows that removing the skip connections causes the largest performance drop. Without skip connections, the network finds it difficult to converge and yields high training errors, and consequently low PSNR. Furthermore, we note that the information exchange among parallel convolution streams via SKFF is helpful and leads to improved performance. Similarly, DAU also has a positive influence on the final image quality. Next, we analyze the feature aggregation strategy in Table \[table:ablation aggregation\]. It shows that the proposed SKFF generates favorable results compared to summation and concatenation. Moreover, it can be seen that our SKFF uses $\sim6\times$ fewer parameters than concatenation. Finally, in Table \[table: ablation MRB\] we study how the number of convolutional streams and columns (DAU blocks) of the MRB affects the image restoration quality. We note that increasing the number of streams provides significant improvements, thereby justifying the importance of multi-scale feature processing. Moreover, increasing the number of columns yields better scores, thus indicating the significance of information exchange among parallel streams for feature consolidation. Additional ablation studies and qualitative results are provided in the supplementary material.
Concluding Remarks
==================

Conventional image restoration and enhancement pipelines either stick to full-resolution features along the network hierarchy or use an encoder-decoder architecture. The first approach helps retain precise spatial details, while the latter provides better contextualized representations. However, these methods can satisfy only one of these two requirements, even though real-world image restoration tasks demand a combination of both, conditioned on the given input sample. In this work, we propose a novel architecture whose main branch is dedicated to full-resolution processing and whose complementary set of parallel branches provides better contextualized features. We propose novel mechanisms to learn relationships between features within each branch as well as across multi-scale branches. Our feature fusion strategy ensures that the receptive field can be dynamically adapted without sacrificing the original feature details. Consistent achievement of state-of-the-art results on five datasets for three image restoration and enhancement tasks corroborates the effectiveness of our approach.

[^1]: <https://github.com/swz30/MIRNet>
--- abstract: 'We compute time-periodic and relative-periodic solutions of the free-surface Euler equations that take the form of overtaking collisions of unidirectional solitary waves of different amplitude on a periodic domain. As a starting guess, we superpose two Stokes waves offset by half the spatial period. Using an overdetermined shooting method, the background radiation generated by collisions of the Stokes waves is tuned to be identical before and after each collision. In some cases, the radiation is effectively eliminated in this procedure, yielding smooth soliton-like solutions that interact elastically forever. We find examples in which the larger wave subsumes the smaller wave each time they collide, and others in which the trailing wave bumps into the leading wave, transferring energy without fully merging. Similarities notwithstanding, these solutions are found quantitatively to lie outside of the Korteweg-de Vries regime. We conclude that quasi-periodic elastic collisions are not unique to integrable model water wave equations when the domain is periodic.' address: 'Dept of Mathematics, University of California, Berkeley, CA 94720-3840' author: - Jon Wilkening date: 'April 21, 2014' title: ' Relative-Periodic Elastic Collisions of Water Waves' --- =1 [^1] Introduction ============ A striking feature of multiple-soliton solutions of integrable model equations such as the Korteweg-deVries equation, the Benjamin-Ono equation, and the nonlinear Schrödinger equation is that they interact elastically, leading to time-periodic, relative-periodic, or quasi-periodic dynamics. By contrast, the interaction of solitary waves for the free-surface Euler equations is inelastic. However, it has been observed many times in the literature [@chan:street:70; @cooker:97; @maxworthy:76; @su:mirie; @mirie:su; @zou:su; @craig:guyenne:06; @milewski:11] that the residual radiation after a collision of such solitary waves can be remarkably small. In the present paper we explore the possibility of finding nearby time-periodic and relative-periodic solutions of the Euler equations using a collision of unidirectional Stokes waves as a starting guess. Such solutions demonstrate that recurrent elastic collisions of solitary waves in the spatially periodic case do not necessarily indicate that the underlying system is integrable. A relative-periodic solution is one that returns to a spatial phase shift of its initial condition at a later time. This only makes sense on a periodic domain, where the waves collide repeatedly at regular intervals in both time and space, with the locations of the collisions drifting steadily in time. They are special cases (with $N=2$) of quasi-periodic solutions, which have the form $u(x,t)=U(\vec\kappa x+\vec \omega t + \vec\alpha)$ with $U$ an $N$-periodic continuous function, i.e. $U\in C\big(\mathbb{T}^N\big)$, and $\vec\kappa$, $\vec\omega$, $\vec\alpha\in\mathbb{R}^N$. Throughout the manuscript, we will use the phrase “solitary waves” in a broad sense to describe waves that, most of the time, remain well-separated from one another and propagate with nearly constant speed and shape. “Stokes waves” will refer to periodic progressive solutions of the free-surface Euler equations of permanent form, or waves that began at $t=0$ as a linear superposition of such traveling waves. They comprise a special class of solitary waves. 
“Solitons” will refer specifically to superpositions of ${\operatorname}{sech}^2$ solutions of the KdV equation on the whole line, while “cnoidal solutions” will refer to their spatially periodic, multi-phase counterparts; see §\[sec:kdv\] for elaboration. It was found in [@water2] that decreasing the fluid depth causes standing waves to transition from large-scale symmetric sloshing behavior in deep water to pairs of counter-propagating solitary waves that collide repeatedly in shallow water. In the present work, we consider unidirectional waves of different amplitude that collide due to taller waves moving faster than shorter ones. We present two examples of solutions of this type: one where the resulting dynamics is fully time-periodic; and one where it is relative-periodic, returning to a spatial phase shift of the initial condition at a later time. Both examples exhibit behavior typical of collisions of KdV solitons. In the first, one wave is significantly larger than the other, and completely subsumes it during the interaction. In the second, the waves have similar amplitude, with the trailing wave bumping into the leading wave and transferring energy without fully merging. Despite these similarities, the amplitude of the waves in our examples are too large for the assumptions in the derivation of the KdV equation to hold. In particular, the larger wave in the first example is more than half the fluid depth in height, and there is significant vertical motion of the fluid when the waves pass by. A detailed comparison of the Euler and KdV equations for waves with these properties is carried out in §\[sec:kdv\]. A review of the literature on water wave collisions and the accuracy of the KdV model of water waves is also given in that section. Rather than compute such solutions by increasing the amplitude from the linearized regime via numerical continuation, as was done for counter-propagating waves in [@water2], we use collisions of right-moving Stokes waves as starting guesses. The goal is to minimally “tune” the background radiation generated by the Stokes collisions so that the amount coming out of each collision is identical to what went into it. In the first example of §\[sec:num\], we find that the tuned background radiation takes the form of a train of traveling waves of smaller wavelength moving to the right more slowly than either solitary wave. By contrast, in the counter-propagating case studied in [@water2], it consists of an array of smaller-wavelength standing waves oscillating rapidly relative to the time between collisions of the primary waves. In the second example of §\[sec:num\], the background radiation is essentially absent, which is to say that the optimized solution is free from high-frequency, low-amplitude disturbances in the trough, and closely resembles a relative-periodic cnoidal solution of KdV. We call the collisions in this solution “elastic” as they repeat forever, unchanged up to spatial translation, and there are no features to distinguish radiation from the waves themselves. This process of tuning parameters to minimize or eliminate small-amplitude oscillations in the wave troughs is reminiscent of Vanden-Broeck’s work [@vandenBroeck91] in which oscillations at infinity could be eliminated from solitary capillary-gravity waves by choosing the amplitude appropriately. 
To search for relative-periodic solutions, we use a variant of the overdetermined shooting method developed by the author and collaborators in previous work to study several related problems: time-periodic solutions of the Benjamin-Ono equation [@benj1; @benj2] and the vortex sheet with surface tension [@vtxs1; @vtxs2]; Hopf bifurcation and stability transitions in mode-locked lasers [@lasers]; cyclic steady-states in rolling treaded tires [@tires1]; self-similarity (or lack thereof) at the crests of large-amplitude standing water waves [@water1]; harmonic resonance and spontaneous nucleation of new branches of standing water waves at critical depths [@water2]; and three-dimensional standing water waves [@water3d]. The three approaches developed in these papers are the adjoint continuation method [@benj1; @lasers], a Newton-Krylov shooting method [@tires1], and a trust region shooting method [@water2] based on the Levenberg-Marquardt algorithm [@nocedal]. We adopt the latter method here to exploit an opportunity to consolidate the work in computing the Dirichlet-Neumann operator for many columns of the Jacobian simultaneously, in parallel. One computational novelty of this work is that we search directly for large-amplitude solutions of a nonlinear two-point boundary value problem, without using numerical continuation to get there. This is generally difficult. However, in the present case, numerical continuation is also difficult due to non-smooth bifurcation “curves” riddled with Cantor-like gaps [@plotnikov01], and the long simulation times that occur between collisions in the unidirectional case. Our shooting method has proven robust enough to succeed in finding time-periodic solutions, when they exist, with a poor starting guess. False positives are avoided by resolving the solutions spectrally to machine accuracy and overconstraining the minimization problem. Much of the challenge is in determining the form of the initial condition and the objective function to avoid wandering off in the wrong direction and falling into a nonzero local minimum before locking onto a nearby relative-periodic solution.

Equations of motion {#sec:eqm}
===================

The equations of motion of a free surface $\eta(x,t)$ evolving over an ideal fluid with velocity potential $\phi(x,y,t)$ may be written [@whitham74; @johnson97; @craik04; @craik05] $$\begin{aligned} \label{eq:ww} \eta_t &= \phi_y - \eta_x\phi_x, \\[-3pt] \notag \varphi_t &= P\left[\phi_y\eta_t - \frac{1}{2}\phi_x^2 - \frac{1}{2}\phi_y^2 - g\eta\right],\end{aligned}$$ where subscripts denote partial derivatives, $\varphi(x,t) = \phi(x,\eta(x,t), t)$ is the restriction of $\phi$ to the free surface, $g=1$ is the acceleration of gravity, $\rho=1$ is the fluid density, and $P$ is the projection $$Pf = f - \frac{1}{2\pi}\int_0^{2\pi} f(x)\,dx,$$ where we assume a $2\pi$-periodic domain.
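As an illustration of how (\[eq:ww\]) can be evaluated in practice, the following is a minimal pseudospectral sketch (in Python, not taken from the actual implementation) of the right-hand side on a uniform grid, assuming the surface velocity components $\phi_x$ and $\phi_y$ have already been obtained from $\varphi$ as described next; the function names and the spectral derivative of $\eta$ are illustrative conveniences.

```python
import numpy as np

def zero_mean(f):
    """Projection P: subtract the mean over the 2*pi-periodic domain."""
    return f - np.mean(f)

def euler_rhs(eta, phi_x, phi_y, g=1.0):
    """Right-hand sides of (eq:ww) for (eta, varphi) on a uniform grid in x.

    phi_x, phi_y are the surface velocity components, assumed already
    computed from varphi (e.g. via the Dirichlet-Neumann operator below).
    """
    N = eta.size
    k = np.arange(N//2 + 1)                      # integer wavenumbers on [0, 2*pi)
    eta_x = np.fft.irfft(1j*k*np.fft.rfft(eta), N)
    eta_t = phi_y - eta_x*phi_x
    varphi_t = zero_mean(phi_y*eta_t - 0.5*phi_x**2 - 0.5*phi_y**2 - g*eta)
    return eta_t, varphi_t
```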
The velocity components $u=\phi_x$ and $v=\phi_y$ at the free surface can be computed from $\varphi$ via $$\label{eq:uv:from:G} \begin{pmatrix} \phi_x \\ \phi_y \end{pmatrix} = \frac{1}{1+\eta'(x)^2}\begin{pmatrix} 1 & -\eta'(x) \\ \eta'(x) & 1 \end{pmatrix} \begin{pmatrix} \varphi'(x) \\ {\mathcal{G}}\varphi(x) \end{pmatrix},$$ where a prime denotes a derivative and ${\mathcal{G}}$ is the Dirichlet-Neumann operator [@craig:sulem:93] $$\label{eq:DNO:def} {\mathcal{G}}\varphi(x) = \sqrt{1+\eta'(x)^2}\,\, {\frac{\partial \phi}{\partial n}}(x+i\eta(x)) = \phi_y - \eta_x\phi_x$$ for the Laplace equation, with periodic boundary conditions in $x$, Dirichlet conditions ($\phi=\varphi$) on the upper boundary, and Neumann conditions ($\phi_y=0$) on the lower boundary, assumed flat. We have suppressed $t$ in the notation since time is frozen in the Laplace equation. We compute ${\mathcal{G}}\varphi$ using a boundary integral collocation method [@lh76; @baker:82; @krasny:86; @mercer:92; @baker10] and advance the solution in time using an 8th order Runge-Kutta scheme [@hairer:I] with 36th order filtering [@hou:li:07]. See [@water2] for details. Computation of relative-periodic solutions {#sec:method} ========================================== Traveling waves have the symmetry that $$\label{eq:init} \eta(x,0) \, \text{ is even}, \qquad \varphi(x,0) \, \text{ is odd.}$$ This remains true if $x$ is replaced by $x-\pi$. As a starting guess for a new class of time-periodic and relative-periodic solutions, we have in mind superposing two traveling waves, one centered at $x=0$ and the other at $x=\pi$. Doing so will preserve the property (\[eq:init\]), but the waves will now interact rather than remain pure traveling waves. A solution will be called *relative periodic* if there exists a time $T$ and phase shift $\theta$ such that $$\label{eq:ts:def} \eta(x,t+T) = \eta(x-\theta,t), \qquad\quad \varphi(x,t+T) = \varphi(x-\theta,t)$$ for all $t$ and $x$. Time-periodicity is obtained as a special case, with $\theta\in2\pi\mathbb{Z}$. We can save a factor of 2 in computational work by imposing the alternative condition $$\label{eq:even:odd} \eta(x+\theta/2,T/2) \, \text{ is even}, \qquad\quad \varphi(x+\theta/2,T/2) \, \text{ is odd.}$$ From this, it follows that $$\begin{aligned} \eta(x+\theta/2,T/2) &= \eta(-x+\theta/2,T/2) = \eta(x-\theta/2,-T/2), \\ \varphi(x+\theta/2,T/2) &= -\varphi(-x+\theta/2,T/2) = \varphi(x-\theta/2,-T/2).\end{aligned}$$ But then both sides of each equation in (\[eq:ts:def\]) agree at time $t=-T/2$. Thus, (\[eq:ts:def\]) holds for all time. In the context of traveling-standing waves in deep water [@trav:stand], it is natural to define $T$ as twice the value above, replacing all factors of $T/2$ by $T/4$. That way a pure standing wave returns to its original configuration in time $T$ instead of shifting in space by $\pi$ in time $T$. In the present work, we consider pairs of solitary waves moving to the right at different speeds, so it is more natural to define $T$ as the first (rather than the second) time there exists a $\theta$ such that (\[eq:ts:def\]) holds. Objective function {#sec:obj:fun} ------------------ We adapt the overdetermined shooting method of [@water1; @water2] to compute solutions of (\[eq:init\])–(\[eq:even:odd\]). This method employs the Levenberg-Marquardt method [@nocedal] with delayed Jacobian updates [@water2] to solve the nonlinear least squares problem described below. 
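For orientation, a bare-bones sketch of a single Levenberg-Marquardt step is given below, in the notation introduced in the next paragraphs; the production code of [@water2] uses a trust-region variant with delayed Jacobian updates, which is not reproduced here, and the routine name is illustrative.

```python
import numpy as np

def levenberg_marquardt_step(r, J, lam):
    """One damped Gauss-Newton step for f(c) = 0.5 * r(c)^T r(c).

    r   : residual vector, shape (m,)
    J   : Jacobian dr/dc, shape (m, n+1), with m > n+1 (overdetermined)
    lam : damping parameter; small lam gives a Gauss-Newton step, large lam
          a short step along the negative gradient -J^T r
    """
    A = J.T @ J + lam*np.eye(J.shape[1])
    return np.linalg.solve(A, -(J.T @ r))        # increment of the unknowns c
```

In practice the damping parameter is increased or decreased depending on whether the step reduces $f$ by roughly the amount predicted by the local quadratic model [@nocedal].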
For (\[eq:init\]), we build the symmetry into the initial conditions over which the shooting method is allowed to search: we choose an integer $n$ and consider initial conditions of the form $$\label{eq:init:trav} \hat\eta_k(0) = c_{2|k|-1}, \qquad\quad \hat\varphi_k(0) = \pm ic_{2|k|},$$ where $k\in\{\pm1,\pm2,\dots,\pm \frac{n}{2}\}$ and $\hat\eta_k(t)$, $\hat\varphi_k(t)$ are the Fourier modes of $\eta(x,t)$, $\varphi(x,t)$. The numbers $c_1,\dots,c_n$ are assumed real and all other Fourier modes (except $\hat\eta_0$) are zero. We set $\hat\eta_0$ to the fluid depth so that $y=0$ is a symmetry line corresponding to the bottom wall. This is convenient for computing the Dirichlet-Neumann operator [@water2]. In the formula for $\hat\varphi_k$, the minus sign is taken if $k<0$ so that $\hat\varphi_{-k} =\overline{\hat\varphi_k}$. We also solve for the period, $$\label{eq:T:theta} T=c_{n+1}. $$ The phase shift $\theta$ is taken as a prescribed parameter here. Alternatively, in a study of traveling-standing waves [@trav:stand], the author defines a traveling parameter $\beta$ and varies $\theta=c_{n+2}$ as part of the algorithm to obtain the desired value of $\beta$. This parameter $\beta$ is less meaningful for solitary wave collisions in shallow water, so we use $\theta$ itself as the traveling parameter in the present study. We also need to specify the amplitude of the wave. This can be done in various ways, e.g. by specifying the value of the energy, $$E = \frac{1}{2\pi}\int_0^{2\pi} {\textstyle}\frac{1}{2}\varphi{\mathcal{G}}\varphi + \frac{1}{2}g\eta^2\,dx,$$ by constraining a Fourier mode such as $\hat\eta_1(0)$, or by specifying the initial height of the wave at $x=0$: $$\eta(0,0) = \hat\eta_0 + \sum_{k=1}^{n/2} 2c_{2k-1}.$$ Thus, to enforce (\[eq:even:odd\]), we can minimize the objective function $$\label{eq:f} f(c) = \frac{1}{2} r(c)^Tr(c),$$ where $$\begin{aligned} \label{eq:r:def} r_1 = \big(\;\text{choose one:} \quad & E-a \quad,\quad \hat\eta_1(0)-a \quad,\quad \eta(0,0)-a \;\big), \\ \notag r_{2j} = {\operatorname{Im}}\{e^{ij\theta/2}\hat\eta_j(T/2)\}, \qquad &r_{2j+1} = {\operatorname{Re}}\{e^{ij\theta/2}\hat\varphi_j(T/2)\}, \qquad (1 \le j\le M/2).\end{aligned}$$ Here $a$ is the desired value of the chosen amplitude parameter. Alternatively, we can impose (\[eq:ts:def\]) directly by minimizing $$\label{eq:f1} \tilde f = \frac{1}{2}r_1^2 + \frac{1}{4\pi} \int_0^{2\pi} \left(\big[\eta(x,T)-\eta(x-\theta,0)\big]^2 + \big[\varphi(x,T)-\varphi(x-\theta,0)\big]^2\right)dx,$$ which also takes the form $\frac{1}{2}r^Tr$ if we define $r_1$ as above and $$\label{eq:r1:def} \begin{aligned} r_{4j-2}+ir_{4j-1} &= \sqrt{2}\left[ \hat\eta_j(T) - e^{-ij\theta}\hat\eta_j(0) \right], \\ r_{4j}+ir_{4j+1} &= \sqrt{2}\left[ \hat\varphi_j(T) - e^{-ij\theta}\hat\varphi_j(0) \right], \end{aligned} \qquad\quad (1\le j\le M/2).$$ Note that $f$ measures deviation from evenness and oddness of $\eta(x+\theta/2,T/2)$ and $\varphi(x+\theta/2,T/2)$, respectively, while $\tilde f$ measures deviation of $\eta(x+\theta,T)$ and $\varphi(x+\theta,T)$ from their initial states. In the first example of §\[sec:num\], we minimize $\tilde f$ directly, while in the second we minimize $f$ and check that $\tilde f$ is also small, as a means of validation. The number of equations, $m=M+1$ for $f$ and $m=2M+1$ for $\tilde f$, is generally larger than the number of unknowns, $n+1$, due to zero-padding of the initial conditions. 
This adds robustness to the shooting method and causes all Fourier modes varied by the algorithm, namely those in (\[eq:init:trav\]), to be well-resolved on the mesh.

Computation of the Jacobian
---------------------------

To compute the $k$th column of the Jacobian $J=\nabla_c r$, which is needed by the Levenberg-Marquardt method, we solve the linearized equations along with the nonlinear ones: $$\label{eq:q:qdot} {\frac{\partial }{\partial t}} \begin{pmatrix} q \\ \dot q \end{pmatrix} = \begin{pmatrix} F(q) \\ DF(q)\dot q \end{pmatrix}, \quad \begin{aligned} q(0) &= q_0 = (\eta_0,\varphi_0), \\ \dot q(0) &= \dot q_0 = \partial q_0/\partial c_k. \end{aligned}$$ Here $q=(\eta,\varphi)$, $\dot q=(\dot\eta,\dot\varphi)$, $F(q)$ is given in (\[eq:ww\]), $DF$ is its derivative (see [@water2] for explicit formulas), and a dot represents a variational derivative with respect to perturbation of the initial conditions, not a time derivative. To compute $\partial r_i/\partial c_k$ for $i\ge2$ and $k\le n$, one simply puts a dot over each Fourier mode on the right-hand side of (\[eq:r:def\]) or (\[eq:r1:def\]), including $\hat\eta_j(0)$ and $\hat\varphi_j(0)$ in (\[eq:r1:def\]). If $k=n+1$, then $c_k=T$ and $${\frac{\partial r_{2j}}{\partial T}} = {\operatorname{Im}}\{e^{ij\theta/2}(1/2)\partial_t\hat\eta_j(T/2)\}, \qquad {\frac{\partial (r_{4j}+ir_{4j+1})}{\partial T}} = \sqrt{2}\big[\partial_t\hat\varphi_j(T)\big]$$ in (\[eq:r:def\]) and (\[eq:r1:def\]), respectively, with similar formulas for $\partial(r_{4j-2}+ir_{4j-1})/\partial T$ and $\partial r_{2j+1}/\partial T$. The three possibilities for $r_1$ are handled as follows: $$\begin{aligned} &\text{case 1:} \quad {\frac{\partial r_1}{\partial c_k}} = \dot E = \frac{1}{2\pi}\int_0^{2\pi} \left[\dot\varphi\eta_t - \dot\eta\varphi_t \right]_{t=0}dx, \quad (k\le n), \qquad {\frac{\partial r_1}{\partial c_{n+1}}} = 0, \\ &\text{case 2:} \quad {\frac{\partial r_1}{\partial c_k}} = {{\dot\eta}^{\scriptscriptstyle\bm\wedge}}_1(0) = \delta_{k,1}, \quad (k\le n+1),\\ &\text{case 3:} \quad {\frac{\partial r_1}{\partial c_k}} = \dot\eta(0,0) = 2\delta_{k,\text{odd}}, \quad (k\le n), \qquad {\frac{\partial r_1}{\partial c_{n+1}}} = 0,\end{aligned}$$ where $\delta_{k,j}$ and $\delta_{k,\text{odd}}$ equal 1 if $k=j$ or $k$ is odd, respectively, and equal zero otherwise. The vectors $\dot q$ in (\[eq:q:qdot\]) are computed in batches, each initialized with a different initial perturbation, to consolidate the work in computing the Dirichlet-Neumann operator during each timestep. See [@water2; @trav:stand] for details.

Numerical results {#sec:num}
=================

As mentioned in the introduction, our idea is to use collisions of unidirectional Stokes (i.e. traveling) waves as starting guesses to find time-periodic and relative-periodic solutions of the Euler equations. We begin by computing traveling waves of varying wave height and record their periods. This is easily done in the framework of §\[sec:method\]. We set $\theta=\pi/64$ (or any other small number) and minimize $\tilde f$ in (\[eq:f1\]). The resulting “period” $T$ will give the wave speed via $c=\theta/T$. Below we report $T=2\pi/c$, i.e. $T$ is rescaled as if $\theta$ were $2\pi$. We control the amplitude by specifying $\hat\eta_1(0)$, which is the second option listed in §\[sec:method\] for defining the first component $r_1$ of the residual.
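For concreteness, a short sketch of how the residual (\[eq:r:def\]) might be assembled from the Fourier modes at $t=T/2$ is shown below, with the amplitude constraint supplied as a generic (value, target) pair so that any of the three options can be used; the array layout and names are illustrative rather than a description of the actual code.

```python
import numpy as np

def residual(eta_hat, phi_hat, theta, amp_value, amp_target, M):
    """Assemble r of (eq:r:def) from the modes eta_hat_j(T/2), phi_hat_j(T/2).

    eta_hat, phi_hat : complex arrays of length at least M//2 + 1 (M even)
    amp_value, amp_target : the chosen amplitude quantity and its target a
    """
    r = np.empty(M + 1)
    r[0] = amp_value - amp_target                 # r_1 in the text
    j = np.arange(1, M//2 + 1)
    rot = np.exp(1j*j*theta/2)
    r[2*j - 1] = (rot*eta_hat[j]).imag            # r_{2j}   = Im{e^{ij*theta/2} eta_hat_j}
    r[2*j]     = (rot*phi_hat[j]).real            # r_{2j+1} = Re{e^{ij*theta/2} phi_hat_j}
    return r
```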
A more conventional approach for computing traveling waves is to substitute $\eta(x-ct)$, $\varphi(x-ct)$ into (\[eq:ww\]) and solve the resulting stationary problem (or an equivalent integral equation) by Newton’s method [@chen80a; @chandler:93; @milewski:11]. Note that the wave speed $c$ here is unrelated to the vector $c$ of unknowns in (\[eq:init:trav\]).

![\[fig:bif:stokes\] Plots of wave height and first Fourier mode versus period for Stokes waves with wavelength $2\pi$ and fluid depth $h=0.05$. The temporal periods are $6T_A=137.843\approx 137.738 = 5T_C$.](figs/bif_stokes){width="3.3in"}

![\[fig:align:stokes\] Collision of two right-moving Stokes waves that nearly return to their initial configuration after the interaction. (left) Solutions A and C were combined via (\[eq:AandC\]) and evolved through one collision to $t=137.738$. (right) Through trial and error, we adjusted the amplitude of the smaller Stokes wave and the simulation time to obtain a nearly time-periodic solution. ](figs/align_stokes){width="\linewidth"}

With traveling waves in hand, our next goal is to collide two of them and search for a nearby time-periodic solution, with $\theta=0$. As shown in Figure \[fig:bif:stokes\], varying $\hat\eta_1(0)$ from 0 to $7.4\times 10^{-4}$ causes the period of a Stokes wave with wavelength $\lambda=2\pi$ and mean fluid depth $h=0.05$ to decrease from $T_O=28.1110$ to $T_A=22.9739$, and the wave height (vertical crest-to-trough distance) to increase from 0 to $0.02892$. Solution C is the closest among the Stokes waves we computed to satisfying $5T_C=6T_A$, where $p=5$ is the smallest integer satisfying $\frac{p+1}{p}T_A<T_O$. We then combine solution A with a spatial phase shift of solution C at $t=0$. The resulting initial conditions are $$\label{eq:AandC} \begin{aligned} \eta^{A+C}_0(x) &= h + \big[\eta^A_0(x)-h\big] + \big[\eta^C_0(x-\pi)-h\big], \\ \varphi^{A+C}_0(x) &= \varphi^A_0(x) + \varphi^C_0(x-\pi), \end{aligned}$$ where $h=0.05$ is the mean fluid depth. Plots of $\eta_0^A(x)$, $\eta_0^C(x-\pi)$, $\varphi_0^A(x)$ and $\varphi_0^C(x-\pi)$ are shown in Figure \[fig:AandC\]. If the waves did not interact, the combined solution would be time-periodic (to the extent that $5T_C=6T_A$, i.e. to about $0.076\%$). But the waves do interact. In addition to the complicated interaction that occurs when they collide, each slows the other down between collisions by introducing a negative gradient in the velocity potential between its own wave crests. Indeed, as shown in the right panel of Figure \[fig:AandC\], the velocity potential increases rapidly across a right-moving solitary wave and decreases elsewhere to achieve spatial periodicity. The decreasing velocity potential induces a background flow opposite to the direction of travel of the other wave. In the left panel of Figure \[fig:align:stokes\], we see that the net effect is that neither of the superposed waves has returned to its starting position at $t=5T_C$, and the smaller wave has experienced a greater net decrease in speed. However, as shown in the right panel, by adjusting the amplitude of the smaller wave (replacing solution C by B) and increasing $T$ slightly to $138.399$, we are able to hand-tune the Stokes waves to achieve $\tilde f\approx5.5\times10^{-8}$, where $\theta$ is set to zero in (\[eq:f1\]). Note that as $t$ varies from 0 to $T/10$ in the left panel of Figure \[fig:evol:kdv1\], the small wave advances by $\pi$ units to the right while the large wave advances by $1.2\pi$ units.
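On a uniform grid with an even number of points, the superposition (\[eq:AandC\]) amounts to adding the wave profiles after a circular shift of half the grid; a minimal sketch (illustrative names, assuming both waves are sampled on the same grid):

```python
import numpy as np

def superpose_stokes(eta_A, phi_A, eta_C, phi_C, h=0.05):
    """Combined initial condition (eq:AandC): the second wave is shifted by pi,
    i.e. by half of the (even-length) 2*pi-periodic grid."""
    shift = eta_A.size // 2
    eta0 = h + (eta_A - h) + np.roll(eta_C - h, shift)
    phi0 = phi_A + np.roll(phi_C, shift)
    return eta0, phi0
```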
The waves collide at $t=T/2$. This generates a small amount of radiation, which can be seen at $t=T$ in the right panel of Figure \[fig:align:stokes\]. Some radiation behind the large wave is present for all $t>0$, as shown in Figure \[fig:pov:kdv1\]. Before minimizing $\tilde f$, we advance the two Stokes waves to the time of the first collision, $t=T/2$. At this time, the larger solitary wave has traversed the domain 3 times and the smaller one 2.5 times, so their peaks lie on top of each other at $x=0$. The reason to do this is that when the waves merge, the combined wave is shorter, wider, and smoother than at any other time during the evolution. Quantitatively, the Fourier modes of $\hat\eta_k(t)$ and $\hat\varphi_k(t)$ decay below $10^{-15}$ for $k\ge600$ at $t=0$, and $k\ge200$ when $t=T/2$. Thus, the number of columns needed in the Jacobian is reduced by a factor of 3, and the problem becomes more overdetermined, hence more robust. For the calculation of a time-periodic solution, we let $t=0$ correspond to this merged state, which affects the time labels when comparing Figures \[fig:evol:kdv1\] and \[fig:evol:kdv2\]. As a final initialization step, we project onto the space of initial conditions satisfying (\[eq:init:trav\]) by zeroing out the imaginary parts of $\hat\eta_k(0)$ and the real parts of $\hat\varphi_k(0)$, which are already small. Surprisingly, this improves the time-periodicity of the initial guess in (\[eq:f1\]) to $\tilde f = 2.3\times 10^{-8}$. ![\[fig:evol:kdv1\] Evolution of two Stokes waves that collide repeatedly, at times $t\approx T/2+kT$, $k\ge0$. (left) Traveling solutions A and B in Figure \[fig:bif:stokes\] were initialized with wave crests at $x=0$ and $x=\pi$, respectively. The combined solution is approximately time-periodic, with period $T=138.399$. (right) The same solution, at later times, starting with the second collision ($t=3T/2$).](figs/evol_kdv1){width="\linewidth"} ![\[fig:pov:kdv1\] A different view of the solutions in Figure \[fig:evol:kdv1\] shows the generation of background waves. Shown here are the functions $\eta(x+8\pi t/T,t)$, which give the dynamics in a frame moving to the right fast enough to traverse the domain four times in time $T$. In a stationary frame, the smaller and larger solitary waves traverse the domain 5 and 6 times, respectively.](figs/pov_kdv1){height="2in"} We emphasize that our goal is to find *any* nearby time-periodic solution by adjusting the initial conditions to drive $\tilde f$ to zero. Energy will be conserved as the solution evolves from a given initial condition, but is only imposed as a constraint (in the form of a penalty) on the search for initial conditions when the first component of the residual in (\[eq:r:def\]) is set to $r_1=E-a$. In the present calculation, we use $r_1=\eta(0,0)-a$ instead. In the second example, presented below, we will constrain energy. In either case, projecting onto the space of initial conditions satisfying (\[eq:init:trav\]) can cause $r_1$ to increase, but it will decrease to zero in the course of minimizing $\tilde f$. This projection is essential for the symmetry arguments of §\[sec:obj:fun\] to work. ![\[fig:evol:kdv2\] Time-periodic solutions near the Stokes waves of Figure \[fig:evol:kdv1\]. (left) $h=0.05$, $\eta(0,0) = 0.0707148$, $T=138.387$, $\tilde f=4.26\times10^{-27}$. (right) $h=0.0503$, $\eta(0,0)=0.0707637$, $T=138.396$, $\tilde f=1.27\times 10^{-26}$. 
The background radiation was minimized by hand in the right panel by varying $h$ and $\eta(0,0)$.](figs/evol_kdv2){width="\linewidth"}

![\[fig:pov:kdv2\] Same as Figure \[fig:pov:kdv1\], but showing the time-periodic solutions of Figure \[fig:evol:kdv2\] instead of the Stokes waves of Figure \[fig:evol:kdv1\]. The Stokes waves generate new background radiation with each collision while the time-periodic solutions are synchronized with the background waves to avoid generating additional disturbances. ](figs/pov_kdv2){height="2in"}

We minimize $\tilde f$ subject to the constraint $\eta(0,0)=0.0707148$, the third case described in §\[sec:method\] for specifying the amplitude. This causes $\tilde f$ to decrease from $2.3\times 10^{-8}$ to $4.26\times 10^{-27}$ using $M=1200$ grid points and $N=1200$ time-steps (to $t=T$). The results are shown in the left panel of Figures \[fig:evol:kdv2\] and \[fig:pov:kdv2\]. The main difference between the Stokes collision and this nearby time-periodic solution is that the Stokes waves generate additional background ripples each time they collide while the time-periodic solution contains an equilibrium background wave configuration that does not grow in amplitude after the collision. While the background waves in the counter-propagating case (studied in [@water2]) look like small-amplitude standing waves, these background waves travel to the right, but slower than either solitary wave. After computing the $h=0.05$ time-periodic solution, we computed 10 other solutions with nearby values of $h$ and $\eta(0,0)$ to try to decrease the amplitude of the background radiation. The best solution we found (in the sense of small background radiation) is shown in the right panel of Figures \[fig:evol:kdv2\] and \[fig:pov:kdv2\], with $h=0.0503$ and $\eta(0,0)=0.0707637$. The amplitude of the background waves of this solution is comparable to that of the Stokes waves after two collisions.

Our second example is a relative-periodic solution in which the initial Stokes waves (the starting guess) are B and C in Figure \[fig:bif:stokes\] instead of A and C. As before, solution C is shifted by $\pi$ when the waves are combined initially, just as in (\[eq:AandC\]). Because the amplitude of the larger wave has been reduced, the difference in wave speeds is smaller, and it takes much longer for the waves to collide. If the waves did not interact, we would have $$\label{eq:cBcC} c_{B,0} = 0.23246089, \quad c_{C,0} = 0.22808499, \quad T_0 = \frac{2\pi}{c_{B,0}-c_{C,0}} = 1435.86,$$ where wave B crosses the domain $53.1230$ times in time $T_0$ while wave C crosses the domain $52.1230$ times. The subscript 0 indicates that the waves are assumed not to interact. Since the waves do interact, we have to evolve the solution numerically to obtain useful estimates of $T$ and $\theta$. We arbitrarily rounded $T_0$ to 1436 and made plots of the solution at times $\Delta t = T_0/1200$. We found that $\eta$ is nearly even (up to a spatial phase shift) for the first time at $463\Delta t=554.057$. This was our initial guess for $T/2$. The phase shift required to make $\eta(x+\theta/2,T/2)$ approximately even and $\varphi(x+\theta/2,T/2)$ approximately odd was found by graphically solving $\varphi(x,T/2)=0$. This gives the initial guess $\theta/2=2.54258$. This choice of $T$ and $\theta$ (with $\eta^{B+C}$ and $\varphi^{B+C}$ as initial conditions) yields $f=2.0\times10^{-11}$ and $\tilde f=1.5\times10^{-10}$.
We then minimize $f$ holding $E$ and $\theta$ constant, which gives $f=2.1\times10^{-29}$ and $\tilde f=3.0\times10^{-26}$. We note that $\tilde f$ is computed over $[0,T]$, twice the time over which the solution was optimized by minimizing $f$, and provides independent confirmation of the accuracy of the solution and the symmetry arguments of §\[sec:obj:fun\]. The results are plotted in Figure \[fig:evol:kdv3\]. We omit a plot of the initial guess (the collision of Stokes waves) as it is indistinguishable from the minimized solution. In fact, the relative change in the wave profile and velocity potential is about $0.35$ percent, $$\left(\frac{ \|\eta_\text{Stokes} - \eta_\text{periodic}\|^2 + \|\varphi_\text{Stokes} - \varphi_\text{periodic}\|^2}{ \|\eta_\text{Stokes} - h\|^2 + \|\varphi_\text{Stokes}\|^2} \right)^{1/2} \le 0.0035,$$ and $T/2$ changes even less, from 554.057 (Stokes) to 554.053 (periodic). By construction, $E$ and $\theta/2$ do not change at all. It was not necessary to evolve the Stokes waves to $T/2$, shift space by $\theta/2$, zero out Fourier modes that violate the symmetry condition (\[eq:init\]), and reset $t=0$ to correspond to this new initial state. Doing so increases the decay rate of the Fourier modes (slope of $\ln|\hat\eta_k|$ vs $k$) by a factor of 1.24 in this example, compared to 3.36 in the previous example, where it is definitely worthwhile. ![\[fig:evol:kdv3\] A relative-periodic solution found using a superposition of the Stokes waves labeled B and C in Figure \[fig:bif:stokes\] as a starting guess. Unlike the previous case, the waves do not fully merge at $t=T/2$. ](figs/evol_kdv3){width="\linewidth"} The large change from $T_0/2 = 717.93$ to $T/2=554.053$ is due to nonlinear interaction of the waves. There are two main factors contributing to this change in period. The first is that the waves do not fully combine when they collide. Instead, the trailing wave runs into the leading wave, passing on much of its amplitude and speed. The peaks remain separated by a distance of $d=0.52462$ at $t=T/2$, the transition point where the waves have the same amplitude. Thus, the peak separation changes by $\pi-d$ rather than $\pi$ in half a period. The second effect is that the larger wave slows down the smaller wave more than the smaller slows the larger. Recall from Fig. \[fig:AandC\] that each wave induces a negative potential gradient across the other wave that generates a background flow opposing its direction of travel. Quantitatively, when the waves are well separated, we find that the taller and smaller waves travel at speeds $c_B=0.231077=0.994049c_{B,0}$ and $c_C=0.226153=0.991531c_{C,0}$, respectively. The relative speed is then $(c_B-c_C) = 1.12526(c_{B,0}-c_{C,0})$. Thus, $$\label{eq:ineq} \frac{\pi-d}{c_B-c_C} < \frac{T}{2} < \frac{\pi-d}{c_{B,0}-c_{C,0}} < \frac{T_0}{2} = \frac{\pi}{c_{B,0}-c_{C,0}}, $$ with numerical values $531.5<554.1<598.0<717.9$. This means that both effects together have overestimated the correction needed to obtain $T$ from $T_0$. This is because the relative speed slows down as the waves approach each other, which is expected since the amplitude of the trailing wave decreases and the amplitude of the leading wave increases in this interaction regime. 
Indeed, the average speed of the waves is $$\label{eq:average:speed} \overline{c_B} = \frac{\theta/2 - d/2}{T/2} = 0.993388c_{B,0}, \qquad \overline{c_C} = \frac{\theta/2 + d/2 - \pi}{T/2} = 0.991737c_{C,0},$$ which are slightly smaller and larger, respectively, than their speeds when well separated. Note that $T/2$ in (\[eq:ineq\]) may be written $T/2=(\pi-d)/(\overline{c_B} - \overline{c_C})$. We used $\theta/2=2.54258+40\pi$ in (\[eq:average:speed\]) to account for the 20 times the waves cross the domain $(0,2\pi)$ in time $T/2$ in addition to the offset shown in Figure \[fig:evol:kdv3\]. Comparison with KdV {#sec:kdv} =================== In the previous section, we observed two types of overtaking collisions for the water wave: one in which the larger wave completely subsumes the smaller wave for a time, and one where the two waves remain distinct throughout the interaction. Similar behavior has of course been observed for the Korteweg-de Vries equation, which was part of our motivation for looking for such solutions. Lax [@lax:1968] classified overtaking collisions of two KdV solitons as bimodal, mixed, or unimodal. Unimodal and bimodal waves are analogous to the ones we computed above, while mixed mode collisions have the larger wave mostly subsume the smaller wave at the beginning and end of the interaction, but with a two-peaked structure re-emerging midway through the interaction. Lax showed that if $1<c_1/c_2<A=(3+\sqrt{5})/2$, the collision is bimodal; if $c_1/c_2>3$, the collision is unimodal; and if $A<c_1/c_2<3$, the collision is mixed. Here $c_1$ and $c_2$ are the wave speeds of the trailing and leading waves, respectively, at $t=-\infty$. Leveque [@leveque:87] has studied the asymptotic dynamics of the interaction of two solitons of nearly equal amplitude. Zou and Su [@zou:su] performed a computational study of overtaking water wave collisions, compared the results to KdV interactions, and found that the water wave collisions ceased to be elastic at third order. Craig *et. al.* [@craig:guyenne:06] also found that solitary water waves collide inelastically. This does not conflict with our results since we optimize the initial conditions to make the collision elastic. Head on collisions have been studied numerically by Su and Mirie [@su:mirie; @mirie:su], experimentally by Maxworthy [@maxworthy:76], and by a mixture of analysis and computation by Craig *et. al.* [@craig:guyenne:06]. Validation of KdV as a model of water waves has also been studied extensively. A formal derivation may be found in Ablowitz and Segur [@ablowitz:segur]. Rigorous justification has been given by Bona, Colin and Lannes [@bona:lannes], building on earlier work by Craig [@craig:kdv] as well as Schneider and Wayne [@schneider:wayne]. According to [@bona:lannes], some gaps still exist in the theory in the spatially periodic case. Experimental studies of the validity of KdV for describing surface waves have been performed by Zabusky and Galvin [@zabusky:galvin] as well as Hammack and Segur [@hammack:segur:74]. Recently, Ostrovsky and Stepanyants [@ostrovsky] have compared internal solitary waves in laboratory experiments to the predictions of various model equations, including KdV, and give a comprehensive overview of the literature on this subject [@ostrovsky]. Our objective in this section is to determine quantitatively whether the solutions of the water wave equations that we computed in §\[sec:num\] are well-approximated by the KdV equation. 
Following Ablowitz and Segur [@ablowitz:segur], we introduce a small parameter ${\varepsilon}$ and dimensionless variables $$\hat y = \frac{y}{h}, \qquad \hat x = \sqrt{{\varepsilon}}\frac{x}{h}, \qquad \hat t = \sqrt{\frac{{\varepsilon}g}{h}} t, \qquad \hat \eta = \frac{\eta}{{\varepsilon}h}, \qquad \hat \phi = \frac{\phi}{\sqrt{{\varepsilon}g h^3}},$$ where $h$ is the fluid depth. We assume the bottom boundary is at $y=-h$ rather than 0 in this derivation, so that $\hat y$ runs from $-1$ to ${\varepsilon}\hat\eta$. The Laplacian becomes $\Delta_{\varepsilon}= h^{-2}\big( {\varepsilon}\partial_{\hat{x}}^2 + \partial_{\hat{y}}^2 \big)$, which allows for $\hat\phi = \hat\phi_0 + {\varepsilon}\hat\phi_1 + {\varepsilon}^2\hat\phi_2 + \cdots$ to be computed order by order, with leading terms satisfying $$\hat\phi_{0,\hat y} = 0, \qquad \hat\phi_1 = -\frac{1}{2}(1+\hat y)^2\hat\phi_{0,\hat x\hat x}, \qquad \hat\phi_2 = \frac{1}{24}(1+\hat y)^4\hat\phi_{0,\hat x\hat x\hat x\hat x}.$$ Here we used $\Delta\phi=0$ and $\phi_y(x,-h)=0$. Note that $\hat\phi_0$ is independent of $\hat y$, and agrees with the velocity potential $\phi$ on the bottom boundary, up to rescaling: $$\hat\phi_0(\hat x,\hat t) = ({\varepsilon}gh^3)^{-1/2}\phi(x,-h,t).$$ From the equations of motion, $\eta_t = \phi_y - \eta_x\phi_x$ and $\phi_t + \frac{1}{2}\phi_x^2 + \frac{1}{2}\phi_y^2 + g\eta = 0$, one finds that $$\begin{aligned} \hat\eta_{\hat t} + \hat u_{\hat x} &= {\varepsilon}\big\{ {\textstyle}\frac{1}{6}\hat u_{\hat x\hat x\hat x} - (\hat\eta\hat u)_{\hat x} \big\} + O({\varepsilon}^2), \\ \hat u_{\hat t} + \hat\eta_{\hat x} &= {\varepsilon}\big\{ {\textstyle}\frac{1}{2} \hat u_{\hat x\hat x\hat t} - \frac{1}{2}\partial_{\hat x}(\hat u)^2 \big\} + O({\varepsilon}^2),\end{aligned}$$ where $\hat u(\hat x,\hat t) = \partial_{\hat x}\hat\phi_0(\hat x,\hat t)$. Expanding $\hat\eta=\hat\eta_0 + {\varepsilon}\hat\eta_1 +\cdots$, $\hat u=\hat u_0 + {\varepsilon}\hat u_1 +\cdots$, we find that $$\begin{aligned} \hat\eta_0 = f(\hat x - \hat t; \tau) + g(\hat x + \hat t; \tau), \\ \hat u_0 = f(\hat x - \hat t; \tau) - g(\hat x + \hat t; \tau), \end{aligned} \qquad \begin{aligned} 2f_\tau + 3ff_r + (1/3)f_{rrr} &= 0, \\ -2g_\tau + 3gg_l + (1/3)g_{lll} &= 0, \end{aligned}$$ where we have introduced characteristic coordinates $r=\hat x-\hat t$, $l = \hat x + \hat t$ as well as a slow time scale $\tau={\varepsilon}\hat t$ to eliminate secular growth in the solution with respect to $r$ and $l$ at first order in ${\varepsilon}$; see [@ablowitz:segur] for details. The notational conflict of $g(l,\tau)$ with the acceleration of gravity, $g$, is standard, and will not pose difficulty below. ![\[fig:kdv:cmp1\] Comparison of the solutions of the KdV and Euler equations, initialized identically with the superposition of Stokes waves labeled A and B in Figure \[fig:bif:stokes\]. The final time $T$ is set to $138.399$, as in Fig. \[fig:align:stokes\], when the Euler solution nearly returns to its initial configuration after a single overtaking collision. ](figs/kdv1){width="\linewidth"} In our case, the waves travel to the right, so we may set $g(l,\tau)=0$ in the formulas above. Returning to dimensional variables, we then have $$\eta(x,t) = h{\varepsilon}f\left(\sqrt{\varepsilon}\left(\frac{x}{h} - \sqrt{\frac{g}{h}} t \right),\sqrt{\frac{g}{h}}{\varepsilon}^{3/2}t\right),$$ which satisfies $$\label{eq:dim:kdv} \eta_t + \alpha \eta_x + \frac{3\sqrt{gh}}{2h}\eta\eta_x + \frac{1}{6}\sqrt{gh}\,h^2\eta_{xxx} = 0,$$ where $\alpha=\sqrt{gh}$. 
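To make this last step explicit, one can check directly from the scalings above that, with $r=\sqrt{{\varepsilon}}\,(x/h-\sqrt{g/h}\,t)$ and $\tau=\sqrt{g/h}\,{\varepsilon}^{3/2}t$, the chain rule gives
$$\eta_t = \sqrt{gh}\,{\varepsilon}^{3/2}\big({-f_r}+{\varepsilon}f_\tau\big), \qquad \eta_x = {\varepsilon}^{3/2}f_r, \qquad \eta_{xxx} = \frac{{\varepsilon}^{5/2}}{h^2}\,f_{rrr},$$
so the left-hand side of (\[eq:dim:kdv\]) with $\alpha=\sqrt{gh}$ reduces to
$$\frac{1}{2}\sqrt{gh}\,{\varepsilon}^{5/2}\Big[2f_\tau + 3ff_r + \tfrac{1}{3}f_{rrr}\Big],$$
which vanishes by the equation for $f$ above.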
Note that ${\varepsilon}$ drops out. For comparison with the results of §\[sec:num\], we will add $h$ to $\eta$ and set $\alpha=-\frac{1}{2}\sqrt{gh}$ instead. In Figure \[fig:kdv:cmp1\], we compare the solution of (\[eq:dim:kdv\]), with initial condition $\eta(x,0) = \eta_0^{A+B}(x)$, defined similarly to $\eta_0^{A+C}(x)$ in (\[eq:AandC\]), to the solution of the free-surface Euler equations shown in Figs. \[fig:align:stokes\] and \[fig:evol:kdv1\]. Shortly after the waves are set in motion, the KdV solution develops high-frequency oscillations behind the larger peak that travel left and quickly fill up the computational domain with radiation. The solution of the Euler equations remains much smoother. The large peak of the KdV solution also travels $3.4\%$ faster, on average, than the corresponding peak of the Euler solution, so that at $t=138.399$, when the taller Euler wave has traversed the domain 6 times, the taller KdV wave has traversed it $6.206$ times. For our purposes, these discrepancies are much too large for KdV to be a useful model, and we conclude that the first example in §\[sec:num\] is well outside of the KdV regime. In this comparison, timestepping the KdV equation was done with the 8 stage, 5th order implicit/explicit Runge-Kutta method of Kennedy and Carpenter [@carpenter]. Spatial derivatives were computed spectrally using the 36th order filter of Hou and Li [@hou:li:07]. We found that 2048 spatial grid points and 96000 timesteps was sufficient to reduce the error at $t=138.399$ below $5\times 10^{-6}$ near the larger peak and below $6\times 10^{-7}$ elsewhere, based on comparing the solution to one with 3072 grid points and 192000 timesteps. Our solutions of the Euler equations are much more accurate since there are no second or third spatial derivative terms present to make the equations stiff. Thus, we can use 8th or 15th order explicit timestepping rather than 5th order implicit/explicit timestepping. Monitoring energy conservation and performing mesh refinement studies suggests that we obtain 13–14 digits of accuracy in the solutions of the Euler equations, at which point roundoff error prevents further improvement in double-precision arithmetic. ![\[fig:kdv:cmp2\] Comparison of the solutions of the KdV and Euler equations, both initialized with the superposition of Stokes waves labeled B and C in Figure \[fig:bif:stokes\]. $T=1108.11$ here. ](figs/kdv2){width="\linewidth"} In Figure \[fig:kdv:cmp2\], we repeat this computation using initial conditions corresponding to the superposition of Stokes waves $\eta_0^{B+C}(x)$, which was used as a starting guess for the second example of §\[sec:num\]. This time the KdV solution does not develop visible high-frequency radiation in the wave troughs, and the solutions of KdV and Euler remain close to each other for much longer. However, the interaction time for a collision also increases, from $T=138.399$ in the first example to $T=1108.11$ here. In Fig. \[fig:kdv:cmp2\], by $t=T/6$, the taller KdV and Euler waves have visibly separated from each other, and by $t=T/2$, when the Euler waves have reached their minimum approach distance, the KdV solution is well ahead of the Euler solution. Thus, while there is good qualitative agreement between the KdV and Euler solutions, they do not agree quantitatively over the time interval of interest. From this point of view, the second example of §\[sec:num\] also lies outside of the KdV regime. 
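A minimal pseudospectral sketch of the KdV right-hand side and the smoothing filter used in this comparison is given below (Python, illustrative only; the assumed filter shape $\exp(-36(|k|/k_{\max})^{36})$ follows [@hou:li:07], and the implicit/explicit Runge-Kutta coefficients of [@carpenter] are not reproduced here):

```python
import numpy as np

g, h = 1.0, 0.05
alpha = -0.5*np.sqrt(g*h)      # sign convention used once h has been added to eta

def kdv_rhs(eta):
    """Pseudospectral right-hand side of (eq:dim:kdv), i.e. -alpha*eta_x
    - (3*sqrt(g*h)/(2*h))*eta*eta_x - (1/6)*sqrt(g*h)*h**2*eta_xxx."""
    N = eta.size
    k = np.arange(N//2 + 1)                       # integer wavenumbers on [0, 2*pi)
    eta_hat = np.fft.rfft(eta)
    eta_x   = np.fft.irfft(1j*k*eta_hat, N)
    eta_xxx = np.fft.irfft(-1j*k**3*eta_hat, N)
    return -(alpha*eta_x + 1.5*np.sqrt(g*h)/h*eta*eta_x
             + np.sqrt(g*h)*h**2/6.0*eta_xxx)

def smooth(eta):
    """36th-order smoothing filter, assumed of the form exp(-36*(|k|/kmax)**36)."""
    N = eta.size
    k = np.arange(N//2 + 1)
    rho = np.exp(-36.0*(k/k.max())**36)
    return np.fft.irfft(rho*np.fft.rfft(eta), N)
```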
An alternative measure of the agreement between KdV and Euler is to compare the solutions from §\[sec:num\] with nearby relative-periodic solutions of KdV. In other words, we wish to quantify how much the initial conditions and period have to be perturbed to convert a relative-periodic solution of the Euler equations into one for the KdV equations. Since we used a superposition of Stokes waves for the initial guess to find time-periodic and relative-periodic solutions of the Euler equations, we will use a similar superposition (of cnoidal waves) for KdV. The vertical crest-to-trough heights of the three Stokes waves considered in §\[sec:num\] are $$\label{eq:H:ABC} H_A = 0.028918699, \qquad H_B = 0.004973240, \qquad H_C = 0.002683648.$$ Well-known [@kdv:1895; @dingemans] periodic traveling wave solutions of (\[eq:dim:kdv\]) are given by $$\begin{gathered} \eta(x,t) = h - H + \frac{H}{m}\left(1 - \frac{E(m)}{K(m)}\right) + H{\operatorname}{cn}^2\left( 2K(m)\frac{x-ct}{\lambda}\bigg\vert m\right), \\ \lambda = \sqrt{\frac{16mh^3}{3H}}\,K(m), \qquad c = \left[1 - \frac{H}{2h} + \frac{H}{mh}\left(1 - \frac{3E(m)}{2K(m)}\right)\right]\sqrt{gh},\end{gathered}$$ where we added $h$ to $\eta$ to match the change in $\alpha$ from $\sqrt{gh}$ to $-\frac{1}{2}\sqrt{gh}$ in (\[eq:dim:kdv\]). Here $K(m)$ and $E(m)$ are the complete elliptic integrals of the first and second kind, respectively, and ${\operatorname}{cn}(z|m)$ is one of the Jacobi elliptic functions [@dingemans; @gradshteyn]. In our case $\lambda=2\pi$, $g=1$ and $h=0.05$. For each $H$ in (\[eq:H:ABC\]), we solve the $\lambda$ equation for $m$ using Mathematica [@mma], and then evaluate $\eta(x,0)$ on a uniform grid that is fine enough that its Fourier coefficients decay below machine roundoff. The values of $m' = 1-m$ are $$m'_A = 1.81924\times10^{-35}, \qquad m'_B = 1.98689\times10^{-14}, \qquad m'_C = 1.79643\times10^{-10}.$$ This approach requires extended precision arithmetic to compute $m$ and evaluate $\eta$, but the running time takes only a few seconds on a typical laptop. A periodized version of the simpler ${\operatorname}{sech}^2$ formula could be used for the first two waves, but decays too slowly for wave $C$ to be spatially periodic to roundoff accuracy. Once these cnoidal waves have been computed, we superpose their initial conditions to form $\eta_0^{A+B}$ and $\eta_0^{B+C}$, just as in §\[sec:num\]. It is well-known that a superposition of $N$ cnoidal waves retain this form when evolved via KdV, with $N$ amplitude and $N$ phase parameters governed by an ODE describing pole dynamics in the complex plane [@kruskal:pole; @airault:kdv; @deconinck:segur]. In the $N=2$ case, the solutions are relative-periodic. ![\[fig:kdv:cmp5\] Comparison of time-periodic solution found in §\[sec:num\] to nearby relative-periodic two phase cnoidal solution of KdV. The periods are $T=138.387$ and $113.079$, respectively. ](figs/kdv5){width=".98\linewidth"} ![\[fig:kdv:cmp4\] Comparison of relative-periodic solution found in §\[sec:num\] to nearby relative-periodic two phase cnoidal solution of KdV. The periods are $T=1108.11$ and $1068.73$, respectively. ](figs/kdv4){width=".98\linewidth"} Figures \[fig:kdv:cmp5\] and \[fig:kdv:cmp4\] compare the time-periodic and relative-periodic solutions of the Euler equations, computed in §\[sec:num\], to these cnoidal solutions of KdV. Since the periods are different, only the initial conditions are compared. In the larger-amplitude example, shown in Fig. 
\[fig:kdv:cmp5\], the Euler solution is not as flat in the wave trough as the cnoidal solution due to an additional oscillatory component (the “tuned” radiation). From the difference plot in the right panel, we see that the crest-to-trough amplitude of these higher frequency oscillations is roughly $6\times10^{-4}$, or $2.1\%$ of the wave height $H_A$. The Euler solution is time-periodic with period $T_\text{Euler}=138.387$ while the cnoidal solution is relative-periodic, returning to a spatial phase shift of its initial condition at $T_\text{KdV}=113.079$, which differs from $T_\text{Euler}$ by $18\%$. In the smaller-amplitude example, shown in Fig. \[fig:kdv:cmp4\], both solutions have smooth, flat wave troughs, and it is difficult to distinguish one from the other in the left panel. The crest-to-trough amplitude of the difference in the right panel is roughly $5.5\times10^{-5}$, or $1.1\%$ of $H_B$. The relative change in period is $(T_\text{Euler}-T_\text{KdV})/ T_\text{Euler} = 3.6\%$. While the left panels of Figures \[fig:kdv:cmp5\] and \[fig:kdv:cmp4\] show close agreement between relative-periodic solutions of the Euler and KdV equations at $t=0$, it should be noted that the wave amplitudes of the cnoidal solutions were chosen to minimize the discrepancy in these figures. The change in period by $18\%$ and $3.6\%$, respectively, is perhaps a better measure of agreement. ![\[fig:kdv:cmp7\] Comparison of KdV and Euler solutions, both initialized with a 2-phase cnoidal wave with peaks matching the heights of the Stokes waves labeled A and B (left) or B and C (right) in Fig. \[fig:bif:stokes\]. Here $T=138.387$ (left) and $T=1068.73$ (right). ](figs/kdv7){width=".98\linewidth"} A final comparison of the two equations is made in Fig. \[fig:kdv:cmp7\], where we evolve the Euler equations with the KdV initial conditions. This requires an initial condition for $\varphi(x)=\phi(x,\eta(x))$, where we have suppressed $t$ in the notation for this discussion since it is held fixed at 0. Based on the derivation presented above, we first solve $\phi_x(x,0) = \sqrt{g/h}[\eta(x)-h]$ for $\phi$ on the bottom boundary. We then use the approximation $$\varphi(x) \approx \phi(x,0) - \frac{\eta(x)^2}{2}\phi_{xx}(x,0) + \frac{\eta(x)^4}{24}\phi_{xxxx}(x,0)$$ to evaluate $\phi$ on the free surface. In the left panel of Figure \[fig:kdv:cmp7\], the larger wave grows and overturns before $t=T/400$ when evolved under the Euler equations, instead of traveling to the right when evolved via KdV. To handle wave breaking, we switched to an angle-arclength formulation of the free-surface Euler equations [@hls94; @vtxs1]. In the small-amplitude example in the right panel, the Euler solution develops visible radiation and falls slightly behind the KdV solution, although the phases are closer at $T/2$ than the result of evolving the Stokes waves under KdV in Figure \[fig:kdv:cmp2\]. We also tried evaluating $$\phi(x,y)=\sqrt{g/h} \sum_{k=1}^\infty 2k^{-1}\hat\eta_k \sin(kx)\cosh(ky)$$ at $y=\eta(x)$ to obtain the initial condition for $\varphi(x)$, where $\hat\eta_k$ are the Fourier modes of $\eta(x)$ at $t=0$, but the results were worse for the large-amplitude example — the wave breaks more rapidly — and were visually indistinguishable in the small-amplitude example from the results plotted in Fig. \[fig:kdv:cmp7\]. 
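The cnoidal-wave construction used in these comparisons, i.e., solving the $\lambda$ equation for $m$ and evaluating $\eta(x,0)$, is easy to reproduce in extended precision. The sketch below mirrors the Mathematica step described above using mpmath; the wave height is that of wave A from (\[eq:H:ABC\]), the starting guess uses $K(m)\approx\frac{1}{2}\ln[16/(1-m)]$ as $m\to1$, and the function and variable names are ours, not those of the codes used for the figures.

```python
from mpmath import mp, mpf, pi, sqrt, log, exp, ellipk, ellipe, ellipfun, findroot

mp.dps = 60      # extended precision: 1 - m is of order 1e-35 for the tallest wave

g, h, lam = mpf(1), mpf('0.05'), 2 * pi
H = mpf('0.028918699')                 # crest-to-trough height of wave A

# Solve  lam = sqrt(16*m*h^3/(3*H)) * K(m)  for m, working with log(1 - m).
def residual(log_m1):
    m = 1 - exp(log_m1)
    return sqrt(16 * m * h**3 / (3 * H)) * ellipk(m) - lam

# K(m) ~ (1/2)*log(16/(1-m)) as m -> 1 provides the starting guess.
guess = log(16) - 2 * lam * sqrt(3 * H / (16 * h**3))
m = 1 - exp(findroot(residual, guess))

K, E = ellipk(m), ellipe(m)
c = (1 - H / (2 * h) + (H / (m * h)) * (1 - 3 * E / (2 * K))) * sqrt(g * h)
cn = ellipfun('cn')

def eta0(x):
    """Cnoidal profile at t = 0, with h added as in the text's convention."""
    return h - H + (H / m) * (1 - E / K) + H * cn(2 * K * x / lam, m)**2
```

For wave A this procedure gives $1-m\approx1.8\times10^{-35}$, consistent with the value of $m'_A$ quoted above; the same few lines with $H_B$ or $H_C$ produce the other two cnoidal waves.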
In summary, the large-amplitude time-periodic solution of the Euler equations found in §\[sec:num\] is well outside of the KdV regime by any measure, and the small-amplitude relative-periodic solution is closer, but not close enough to achieve quantitative agreement over the entire time interval of interest. Conclusion ========== We have demonstrated that the small amount of background radiation produced when two Stokes waves interact in shallow water can often be tuned to obtain time-periodic and relative-periodic solutions of the free-surface Euler equations. Just as for the Korteweg-de Vries equation, the waves can fully merge when they collide or remain well-separated. However, the comparison is only qualitative as the waves are too large to be well-approximated by KdV theory. In future work, we will study the stability of these solutions using Floquet theory. Preliminary results suggest that the first example considered above is unstable to harmonic perturbations while the second example is stable. In the stable case, an interesting open question is whether the Stokes waves used as a starting guess for the minimization algorithm, which have the same energy as the relative-periodic solution found, might remain close to it forever, executing almost-periodic oscillations around it. Presumably $\theta$ would need to be varied slightly for this to be true, since $\theta$ is a free parameter that we selected by hand to obtain a small value of $\tilde f$ for the initial guess. Another open question is whether there are analogues for the Euler equations of $N$-phase quasi-periodic solutions of the KdV equation with $N\ge3$. We are confident that the methods of this paper could be used to construct degenerate cases of $N\ge3$ solitary water waves colliding elastically in a time-periodic or relative-periodic fashion, along the lines of what was done for the Benjamin-Ono equation in [@benj2]. Computing more general quasi-periodic dynamics of the form $\eta(x,t)=H(\vec\kappa x + \vec\omega t + \vec\alpha)$, $\varphi(x,t)=\Phi(\vec\kappa x + \vec\omega t + \vec\alpha)$ with $H,\Phi\in C(\mathbb{T}^N)$ and $\vec\kappa$, $\vec\omega$, $\vec\alpha\in\mathbb{R}^N$ seems possible in principle using a more sophisticated shooting method to determine $H$, $\Phi$ and $\vec\omega$. Existence of such solutions for the Euler equations would show that non-integrable equations can also support recurrent elastic collisions even if they cannot be represented as $N$-phase superpositions of elliptic functions. [10]{} Mark J. Ablowitz and Harvey Segur, *Solitons and the inverse scattering transform*, SIAM, Philadelphia, 1981. H. Airault, H. P. McKean, and J. Moser, *Rational and elliptic solutions of the [Korteweg-de Vries]{} equation and a related many-body problem*, Comm. Pure Appl. Math. **30** (1977), 95–148. D. M. Ambrose and J. Wilkening, *Global paths of time-periodic solutions of the [Benjamin]{}–[Ono]{} equation connecting pairs of traveling waves*, Comm. App. Math. and Comp. Sci. **4** (2009), no. 1, 177–215. [to3em]{}, *Computation of symmetric, time-periodic solutions of the vortex sheet with surface tension*, Proc. Nat. Acad. Sci. **107** (2010), no. 8, 3361–3366. [to3em]{}, *Computation of time-periodic solutions of the [Benjamin]{}–[Ono]{} equation*, J. Nonlinear Sci. **20** (2010), no. 3, 277–308. [to3em]{}, *Dependence of time-periodic votex sheets with surface tension on mean vortex sheet strength*, Procedia IUTAM **11** (2014), 15–22. G. R. Baker, D. I. Meiron, and S. A. 
Orszag, *Generalized vortex methods for free-surface flow problems*, J. Fluid Mech. **123** (1982), 477–501. G. R. Baker and C. Xie, *Singularities in the complex physical plane for deep water waves*, J. Fluid Mech. **685** (2011), 83–116. J. L. Bona, T. Colin, and D. Lannes, *Long wave approximations for water waves*, Arch. Rational Mech. Anal. **178** (2005), 373–410. R. K.-C. Chan and R. Street, *A computer study of finite amplitude water waves*, J. Comput. Phys. **6** (1970), 68–94. G. A. Chandler and I. G. Graham, *The computation of water waves modelled by [Nekrasov’s]{} equation*, SIAM J. Numer. Anal. **30** (1993), no. 4, 1041–1065. B. Chen and P. G. Saffman, *Numerical evidence for the existence of new types of gravity waves of permanent form on deep water*, Stud. Appl. Math. **62** (1980), 1–21. M. J. Cooker, P. D. Weidman, and D. S. Bale, *Reflection of a high-amplitude solitary wave at a vertical wall*, J. Fluid Mech. **342** (1997), 141–158. W. Craig, *An existence theory for water waves and the [Boussinesq]{} and [Korteweg-de Vries]{} scaling limits*, Comm. Partial Diff. Equations **10** (1985), 787–1003. W. Craig, P. Guyenne, J. Hammack, D. Henderson, and C. Sulem, *Solitary water wave interactions*, Phys. Fluids **18** (2006), 057106. W. Craig and C. Sulem, *Numerical simulation of gravity waves*, J. Comput. Phys. **108** (1993), 73–83. A. D. D. Craik, *The origins of water wave theory*, Ann. Rev. Fluid Mech. **36** (2004), 1–28. [to3em]{}, *George gabriel stokes on water wave theory*, Ann. Rev. Fluid Mech. **37** (2005), 23–42. B. Deconinck and H. Segur, *Pole dynamics for elliptic solutions of the [KdV]{} equation*, Mathematical Physics, Analysis and Geometry **3** (2000), 49–74. Maarten W. Dingemans, *Water wave propagation over uneven bottoms, part 2. non-linear wave propagation*, World Scientific, Singapore, 1997. S. Govindjee, T. Potter, and J. Wilkening, *Cyclic steady states of treaded rolling bodies*, Int. J. Numer. Meth. Engng. (2014), (accepted). I. S. Gradshteyn and I. M. Ryzhik, *Table of integrals, series and products*, 7th ed., Academic Press, Amsterdam, 2007. E. Hairer, S. P. Norsett, and G. Wanner, *Solving ordinary differential equations [I]{}: Nonstiff problems, 2nd edition*, Springer, Berlin, 2000. J. L. Hammack and H. Segur, *The [Korteweg-de Vries]{} equation and water waves. part 2. comparison with experiments*, J. Fluid Mech. **65** (1974), 289–314. T. Y. Hou and R. Li, *Computing nearly singular solutions using pseudo-spectral methods*, J. Comput. Phys. **226** (2007), 379–397. T. Y. Hou, J. S. Lowengrub, and M. J. Shelley, *Removing the stiffness from interfacial flows with surface tension*, J. Comput. Phys. **114** (1994), 312–338. R. S. Johnson, *A modern introduction to the mathematical theory of water waves*, Cambridge University Press, Cambridge, UK, 1997. C. A. Kennedy and M. H. Carpenter, *Additive [Runge]{}-[Kutta]{} schemes for convection-diffusion-reaction equations*, Appl. Numer. Math. **44** (2003), no. 1–2, 139–181. D. J. Korteweg and G. de Vries, *On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves*, Philosophical Magazine **39** (1895), 422–443. R. Krasny, *Desingularization of periodic vortex sheet roll-up*, J. Comput. Phys. **65** (1986), 292–313. M. D. Kruskal, *The [Korteweg-de Vries]{} equation and related evolution equations*, Nonlinear wave motion, Lectures in Appl. Math., vol. 15, American Mathematical Society, Providence, 1974, pp. 61–83. P. D. 
Lax, *Integrals of nonlinear equations of evolution and solitary waves*, Comm. Pure Appl. Math. **21** (1968), 467–490. R. J. LeVeque, *On the interaction of nearly equal solitons in the kdv equation*, SIAM J. Appl. Math. **47** (1987), 254–262. M. S. Longuet-Higgins and E. D. Cokelet, *The deformation of steep surface waves on water. [I]{}. a numerical method of computation*, Proc. Royal Soc. A **350** (1976), 1–26. T. Maxworthy, *Experiments on collisions between solitary waves*, J. Fluid Mech. **76** (1976), 177–185. G N Mercer and A J Roberts, *[Standing waves in deep water: Their stability and extreme form]{}*, Phys. Fluids A **4** (1992), no. 2, 259–269. P. A. Milewski, J.-M. Vanden-Broeck, and Z. Wang, *Dynamics of steep two-dimensional gravity–capillary solitary waves*, J. Fluid Mech. **664** (2010), 466–477. R. M. Mirie and C. H. Su, *Collisions between two solitary waves, [Part II]{}*, J. Fluid Mech. **115** (1982), 475–492. J. Nocedal and S. J. Wright, *Numerical optimization*, Springer, New York, 1999. L. A. Ostrovsky and Y. A. Stepanyants, *Internal solitons in laboratory experiments: Comparison with theoretical models*, Chaos **15** (2005), 037111:1–28. P. Plotnikov and J. Toland, *Nash-moser theory for standing water waves*, Arch. Rat. Mech. Anal. **159** (2001), 1–83. Chris H. Rycroft and Jon Wilkening, *Computation of three-dimensional standing water waves*, J. Comput. Phys. **255** (2013), 612–638. G. Schneider and C. E. Wayne, *The long-wave limit for the water wave problem. i. the case of zero surface tension*, Comm. Pure Appl. Math. **53** (2000), 1475–1535. C. H. Su and R. M. Mirie, *On head-on collisions between two solitary waves*, J. Fluid Mech. **98** (1980), 509–525. J.-M. Vanden-Broeck, *Elevation solitary waves with surface tension*, Phys. Fluids A **3** (1991), 1989–1993. G. B. Whitham, *Linear and nonlinear waves*, Wiley, New York, 1974. J. Wilkening, *Breakdown of self-similarity at the crests of large amplitude standing water waves*, Phys. Rev. Lett **107** (2011), 184501. [to3em]{}, *Traveling-standing water waves*, (2014), (in preparation). J. Wilkening and J. Yu, *Overdetermined shooting methods for computing standing water waves with spectral accuracy*, Comput. Sci. Disc. **5** (2012), 014017:1–38. M. O. Williams, J. Wilkening, E. Shlizerman, and J. N. Kutz, *Continuation of periodic solutions in the waveguide array mode-locked laser*, Physica D **240** (2011), no. 22, 1791–1804. , *Mathematica, version 8.0*, Champaign, IL, 2010. N. J. Zabusky and C. J. Galvin, *Shallow-water waves, the [Korteweg-de Vries]{} equation and solitons*, J. Fluid Mech. **47** (1970), 811–824. Q. Zou and C. H. Su, *Overtaking collisions between two solitary waves*, Phys. Fluids **29** (1986), 2113–2123. [^1]: This research was supported in part by the Director, Office of Science, Computational and Technology Research, U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the National Science Foundation through grant DMS-0955078.
--- abstract: | As exemplified by the Kuramoto model, large systems of coupled oscillators may undergo a transition to phase coherence with increasing coupling strength. It is shown that below the critical coupling strength for this transition such systems may be expected to exhibit ‘echo’ phenomena: a stimulation by two successive pulses separated by a time interval $\tau $ leads to the spontaneous formation of response pulses at a time $\tau$, $2\tau $, $3\tau \ldots$, after the second stimulus pulse. Analysis of this phenomenon, as well as illustrative numerical experiments, are presented. The theoretical significance and potential uses of echoes in such systems are discussed. author: - 'Edward Ott, John H. Platig, Thomas M. Antonsen and Michelle Girvan' title: Echo Phenomena in Large Systems of Coupled Oscillators --- [**Large systems consisting of many coupled oscillators for which the individual natural oscillator frequencies are different naturally occur in a wide variety of interesting applications. As shown by Kuramoto, such systems can undergo a type of dynamical phase transition such that as the coupling strength is raised past a critical value, global synchronous collective behavior results. In this paper we show that another interesting, potentially useful, behavior of these systems also occurs [*below*]{} the critical coupling strength. Namely, we demonstrate that these systems exhibit [*echo*]{} phenomena: If a stimulus pulse is applied at time $t=0$, followed by a second stimulus pulse at time $t=\tau $, then pulse echo responses can appear at $t=2\tau ,3\tau ,\ldots$. This phenomenon depends on both nonlinearity and memory inherent in the oscillator system, the latter being a consequence of the continuous spectrum of the linearized system.**]{} I. Introduction =============== Due to their occurrence in a wide variety of circumstances, systems consisting of a large number of coupled oscillators with different natural oscillation frequencies have been the subject of much scientific interest[@pikovsky01; @strogatz04]. Examples where the study of such systems is thought to be relevant are synchronous flashing of fireflies[@buck88] and chirping of crickets[@walker69], synchronous cardiac pacemaker cells[@michaels87], brain function[@singer93], coordination of oscillatory neurons governing circadian rhythms in mammals[@yamaguchi02], entrainment of coupled oscillatory chemically reacting cells[@kiss05], Josephson junction circuit arrays[@wiesenfeld95], etc. The globally-coupled, phase-oscillator model of Kuramoto[@kuramoto84; @acebron05] exemplifies the key generic feature of large systems of coupled oscillators. In particular, Kuramoto considered the case where the distribution function of oscillator frequencies was monotonically decreasing away from its peak value, and he showed that, as the coupling strength $K$ between the oscillators is increased through a critical coupling strength $K_c$, there is a transition to sustained global cooperative behavior. In this state $(K>K_c)$ a suitable average over the oscillator population (this average is often called the ‘order parameter’) exhibits steady macroscopic oscillatory behavior. For $K<K_c$ a stimulus may transiently induce macroscopic oscillations, but the amplitude of these coherent oscillations (i.e., the magnitude of the order parameter) decays exponentially to zero with increasing time[@acebron05]. 
In the present paper we consider the Kuramoto model in the parameter range $K<K_c$, and we demonstrate that ‘echo’ phenomena occur for this system. The basic echo phenomenon can be described as follows: A first stimulus is applied at time $t=0$, and the response to it dies away; next, a second stimulus is applied at a later time, $t=\tau $, and its response likewise dies away; then at time $t=2\tau $ (also possibly at $n\tau $, for $n=3,4,\ldots$) an echo response spontaneously builds up and then decays away. An illustrative example is shown in Fig. 1, which was obtained by ![Illustration of the echo phenomenon. Stimuli at times $t=0$ and $t=\tau $ lead to direct system responses which rapidly decay away followed by echo responses that can arise at times $2\tau $, $3\tau , \ldots $. The ‘response’ plotted on the vertical axis is the magnitude of the complex valued order parameter, Eq. (8). See Sec. IV for details of this computation.](fig1) numerical simulation (see Sec. IV for details). In order for this phenomenon to occur, the system must have two fundamental attributes, nonlinearity and memory. Nonlinearity is necessary because the response seen in Fig. 1 is not the same as the sum of the responses to each of the individual stimulus pulses in the absence of the other pulse (which is simply the decay that occurs immediately after the individual stimuli, without the echo). Memory is necessary in the sense that the system state after the decay of the second pulse must somehow encode knowledge of the previous history even though the global average of the system state, as represented by the order parameter, is approximately the same as before the two pulses were applied. Echo phenomena of this type, occurring in systems of many oscillators having a spread in their natural oscillation frequencies, have been known for a long time. The first example was the ‘spin echo’ discovered in 1950 by Hahn[@hahn50], where the distribution of frequencies resulted from the position dependence of the precession frequency of nuclear magnetic dipoles in an inhomogeneous magnetic field. \[The spin echo forms the basis for modern magnetic resonance imaging (MRI).\] Subsequently, echoes for cyclotron orbits of charged particles in a magnetic field have been studied for the cases in which the distribution in frequency was due to magnetic field inhomogeneity[@gould65], relativistic dependence of the particle mass on its energy[@ott70], and Doppler shifts of the cyclotron frequency[@porkolab68]. Another notable case is that of plasma waves, where the frequency distribution results from the Doppler shift of the wave frequency felt by charged particles with different streaming velocities[@oneil68]. Although echo phenomena are well-known in the above settings, they have so far not received attention in the context of the Kuramoto model and its many related situations. It is our purpose in the present paper to investigate that problem. Two possible motivations for our study of echoes in the Kuramoto model are that they provide increased basic understanding of the model and also that they may be of potential use as a basis for future diagnostic measurements of related systems (see Sec. V). In what follows, Sec. II will give a formulation of the model problem that will be analyzed in Sec. III and numerically simulated in Sec. IV, while Sec. V will provide a discussion of the implications of the results obtained. II. 
Formulation =============== We consider the basic Kuramoto model supplemented by the addition of independent $\delta $-correlated noise terms $n_i(t)$ and two impulsive stimuli, one at time $t=0$, and the other at time $t=\tau $, $$d\theta _i/dt=\omega _i+(K/N)\sum ^N_{j=1}\sin (\theta _j-\theta _i)-h(\theta _i)\Delta (t)+n_i(t) \ ,$$ $$\Delta (t)=\hat d_0\delta (t)+\hat d_1\delta (t-\tau ) \ ,$$ $$\langle n_i(t)n_j(t')\rangle =2\xi \delta _{ij}\delta (t-t') \ ,$$ $$h(\theta )=\sum _nh_ne^{in\theta } \ , \ \ h_n=h^*_{-n} \ , \ \ h_0=0 \ ,$$ where $h^*_{-n}$ denotes the complex conjugate of $h_{-n}$. In the above $\theta _i(t)$ represents the angular phase of oscillator $i$, where $i=1,2,\ldots ,N\gg 1$; and $\omega _i$ is the natural frequency of oscillator $i$ where we take $\omega _i$ for different oscillators (i.e., different $i$) to be distributed according to some given, time-independent distribution function $g(\omega )$, where $g(\omega )$ has an average frequency $\bar \omega =\int \omega g(\omega )d\omega $, is symmetric about $\omega =\bar \omega $, and monotonically decreases as $|\omega -\bar \omega |$ increases. To motivate the impulsive stimuli term, consider the example of a population of many fireflies, and imagine that the stimuli at $t=0$ and at $t=\tau $ are external flashes of light at those times, where the constants $\hat d_0$ and $\hat d_1$ in Eq. (2) represent the intensity of these flashes. We hypothesize that a firefly will be induced by a stimulus flash to move its flashing phase toward synchronism with the stimulus flash. Thus a firefly that has just recently flashed will reset its phase by retarding it, while a firefly that was close to flashing will advance its phase. The amount of advance or retardation is determined by the ‘reset function’, $h(\theta )$. Since the reset function $h(\theta )$ depends on properties of the fireflies, we do not specify it further. Let $\theta ^+_i$ and $\theta ^-_i$ represent the phases of oscillator $i$ just after and just before a stimulus flash at $t=0$ or $t=\tau$. Then we have from Eq. (1) that $$\int ^{\theta ^{+}_{i}}_{\theta _i^-} \frac{d\theta }{h(\theta )}=\hat d_p ; \ \ p=0,1 \ .$$ Letting $F(\theta )=\int ^\theta d\theta /h(\theta )$, we obtain $$\theta ^+_i=F^{-1}(\hat d_p+F(\theta ^-_i)) \ .$$ In our subsequent analysis in Sec. III, we will for convenience assume that $\hat d_p$ is small, in which case $(\theta ^+_i-\theta ^-_i)$ is small, and we can use the approximation, $$\theta _i^+\cong \theta ^-_i+\hat d_ph(\theta ^-_i); \ p=0,1 \ .$$ Following Kuramoto we introduce the complex valued order parameter $R(t)$, $$R(t)=\frac{1}{N} \sum ^N_{j=1} e^{i\theta _j(t)} \ ,$$ in terms of which Eq. (1) can be rewritten as $$d\theta _i/dt=\omega _i+K\,Im[e^{-i\theta _i}R(t)]-h(\theta _i)\Delta (t)+n_i(t) \ .$$ In our analysis in Sec. III we will take the limit $N\rightarrow \infty$, useful for approximating the situation where $N\gg 1$. In that limit it is appropriate to describe the system state by a continuous distribution function $f(\theta ,\omega ,t)$, where $$\int ^{2\pi }_0f(\theta ,\omega ,t)\frac{d\theta}{2\pi }=1 \ ,$$ and the fraction of oscillators with angles and natural frequencies in the ranges $(\theta , \theta +d\theta )$ and $(\omega ,\omega +d\omega )$ is $f(\theta ,\omega ,t)g(\omega )d\omega d\theta /2\pi $. 
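For finite $N$, Eqs. (1), (7) and (8) can also be iterated directly. The following minimal sketch indicates how: it uses an elementary Euler–Maruyama update with an illustrative time step of our own choosing, and parameter values that anticipate the choices of Sec. IV; it is not the production code behind the figures of Sec. IV.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters in the spirit of Sec. IV:
# Lorentzian g(omega) with mean 0 and width 1, K = 1 < K_c = 2, xi = 0.
N, K, Delta, xi = 100_000, 1.0, 1.0, 0.0
tau, d0, d1, dt = 50.0, 0.25, 0.25, 0.01

def h(theta):                                      # reset function used in Sec. IV
    return np.sin(theta) + np.sin(2.0 * theta)

def kick(theta, d):                                # impulsive stimulus, Eq. (7)
    return theta + d * h(theta)

omega = Delta * rng.standard_cauchy(N)             # Lorentzian natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)           # incoherent phases at t = 0^-

theta = kick(theta, d0)                            # stimulus at t = 0
times, order_param = [], []
for n in range(int(round(3 * tau / dt))):
    t = n * dt
    if abs(t - tau) < 0.5 * dt:                    # stimulus at t = tau
        theta = kick(theta, d1)
    R = np.exp(1j * theta).mean()                  # order parameter, Eq. (8)
    # Euler-Maruyama step of Eq. (1), written in the mean-field form K*Im[exp(-i*theta)*R]
    drift = omega + K * np.imag(np.exp(-1j * theta) * R)
    theta = theta + dt * drift + np.sqrt(2.0 * xi * dt) * rng.standard_normal(N)
    times.append(t)
    order_param.append(abs(R))
# |R(t)| decays after each stimulus and rebuilds near t = 2*tau (the echo of Fig. 1).
```

The simple first-order update and the fixed seed are assumptions made for brevity; any standard stochastic integrator for Eq. (1) can be substituted.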
The conservation of the number of oscillators then gives the time evolution equation for $f(\theta ,\omega ,t)$, $$\frac{\partial f}{\partial t}+\frac{\partial}{\partial \theta }\left\{ f\left[ \omega +KIm(R(t)e^{-i\theta })-h(\theta )\Delta (t)\right]\right\}=\xi \frac{\partial ^2f}{\partial \theta ^2} \ ,$$ $$R^*(t)=\int d\omega f_1(\omega ,t)g(\omega ) \ ,$$ where $R^*$ denotes the complex conjugate of $R$, $f(\omega ,\theta ,t)\equiv 1$ for $t<0$, and, in writing Eq. (12), $f_1$ represents the $e^{i\theta }$ component of the Fourier expansion of $f(\omega ,\theta ,t)$ in $\theta $, $$f(\omega ,\theta ,t)=\sum ^{+\infty}_{n=-\infty }f_n(\omega ,t)e^{in\theta } \ ,$$ with $f_0=1$, $f_n=f^*_{-n}$. As seen in Eq. (11), the effect of the noise term in Eq. (1) is to introduce diffusion in the phase angle $\theta $ whose strength is characterized by the phase diffusion coefficient $\xi $. In Sec. III we will solve Eqs. (11) and (12) for the case $d_p\ll 1$, thus demonstrating the echo phenomenon as described in Sec. I. In Sec. IV we will present numerical solutions of Eq. (1) for large $N$. III. Analysis ============= A. Amplitude expansion ---------------------- In order to proceed analytically we use a small amplitude expansion and obtain results to second order (i.e., up to quadratic in the small amplitude). This will be sufficient to obtain the echo phenomenon. We introduce a formal expansion parameter $\epsilon $, as follows, $$f=1+\epsilon f^{(1)}+\epsilon ^2f^{(2)}+\mathcal{O}(\epsilon ^3) \ ;$$ $\hat d_p=\epsilon d_p $ for $p=0$, $1$; $R=\epsilon R^{(1)}+\epsilon ^2R^{(2)}+\mathcal{O}(\epsilon ^3)$; $R^{(m)*}=\int gf_1^{(m)}d\omega $; where $f^{(m)}=\Sigma _nf_n^{(m)}\exp (in\theta )$. (Although we formally take $\epsilon \ll 1$, when we finally get our answers, the results will apply for $\epsilon =1$ and $d_p=\hat d_p$, if $\hat d_p\ll 1$.) B. Order $\epsilon $ -------------------- In linear order (i.e., $\mathcal{O}(\epsilon))$, by multiplying Eq. (11) by $\exp (-i\theta )d\theta $ and integrating over $\theta $, we have for the component of $f^{(1)}$ varying as $e^{i\theta }$, $$\frac{\partial f_1^{(1)}}{\partial t}+(i\omega +\xi )f_1^{(1)}=\frac{K}{2} R^{(1)*}+ih_1\Delta (t) \ , \ \ R^{(1)*}(t)=\int f^{(1)}gd\omega \ ,$$ where $f_1^{(1)}(\omega ,t)=0$ for $t<0$ and $R^{(1)*}$ is the complex conjugate of $R^{(1)}$. Due to the delta function term on the right hand side of Eq. (15), $ih_1d_0\delta (t)$, at the instant just after the first delta function (denoted $t=0^+$), $f_1^{(1)}$ jumps from zero just before the delta function (denoted $t=0^-$) to the value $f_1^{(1)}(\omega ,0^+)=ih_1d_0$. Making use of this observation, in Appendix I we solve Eq. (15) for $0<t<\tau $, with the result that, for $K<K_c$, $$f_1^{(1)}(\omega ,t)=A(\omega )e^{-(i\omega +\xi )t}+\ \ ({\rm a\ more\ rapidly\ exponentially\ decaying\ component)} \ ,$$ where $$A(\omega )=ih_1d_0/D[-(i\omega +\xi )] \ ,$$ $$D(s)=1-\frac{K}{2}\int ^{+\infty}_{-\infty} \frac{g(\omega )d\omega }{s+\xi +i\omega } \ , \ {\rm for} \ Re(s)>0 \ ,$$ and $D(s)$ for $Re(s)\leq 0$ is defined from Eq.(18) by analytic continuation. Since Eq. (16) applies for $0<t<\tau $, we have that just before the application of the second delta function stimulus $(t=\tau ^-)$, $$f_1^{(1)}(\omega ,\tau ^-)\cong A(\omega )e^{-(i\omega +\xi )\tau } \ ,$$ where we have neglected the second term on the right hand side of Eq. (16) on the basis that, due to its more rapid exponential decay, it is small compared to the first term. 
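A quick numerical check of Eqs. (16)–(19) helps make this structure concrete. In the sketch below we take $\xi =0$ and the weak-coupling limit $K\rightarrow 0$, so that $D\cong 1$ and $A(\omega )\cong ih_1d_0$; the numerical values of $h_1$ and $d_0$ and the Monte Carlo sampling of $g(\omega )$ are illustrative choices of ours.

```python
import numpy as np

# Check of Eqs. (16)-(19) with xi = 0 and K -> 0, where D ~ 1 and
# f_1^(1)(omega, t) = i*h1*d0*exp(-i*omega*t).  Values below are illustrative.
h1, d0, Delta, omega_bar = 0.5, 0.25, 1.0, 0.0

rng = np.random.default_rng(1)
M = 50_000
omega = omega_bar + Delta * rng.standard_cauchy(M)   # samples of a Lorentzian g(omega)

t = np.linspace(0.0, 10.0, 401)
R1_conj = np.array([(1j * h1 * d0 * np.exp(-1j * tt * omega)).mean() for tt in t])

# Every |f_1^(1)(omega, t)| stays equal to h1*d0, yet the frequency average
# |R^(1)(t)| decays (like h1*d0*exp(-Delta*t) for this Lorentzian) because of
# phase mixing in omega: the macroscopic response dies away while the
# microscopic state retains a memory of the kick.
```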
Solutions of $D(s)=0$ govern the stability of the state with $R^{(1)}=0$. Let $s=s_0$ denote the solutions of $D(s)=0$ with the largest real part. If $Re(s_0)<0$ the state $R^{(1)}=0$ is stable, and a perturbation away from $R^{(1)}=0$ decays to zero with increasing $t$ at the exponential rate $Re(s_0)$. If $Re(s_0)>0$, then the perturbation grows and $R^{(1)}$ eventually saturates into a sustained nonlinear state of coherent cooperative oscillatory behavior[@kuramoto84; @acebron05]. In general, $Re(s_0)$ is an increasing function of the coupling constant $K$, and $Re(s_0)\stackrel{>}{<}0$ for $K\stackrel{>}{<}K_c$, where $K_c$ is a critical value that depends on $\xi$ and $g(\omega )$. Throughout this paper we shall be considering only the case $K<K_c$ for which $Re(s_0)<0$. It is instructive to consider $\xi =0$. In that case, the first term in Eq. (16) is of constant magnitude in time, but, as time $t$ increases, it oscillates more and more rapidly as a function of $\omega $. Because of this increasingly rapid variation in $\omega $, the contribution of this term to $R^{(1)*}(t)=\int gf_1^{(1)}d\omega $ decays in time (see Appendix I), and it does so at the same time-asymptotic rate as the contribution from the second more rapidly exponentially decaying contribution in Eq. (16). Thus the order parameter magnitude decays away, but the distribution function $f_1^{(1)}$ can still have a component (the first term in Eq. (16)) due to the pulse that has not decayed away. A similar conclusion applies for $\xi >0$ provided that $\xi $ is substantially less than the damping for the second term in Eq. (16). This is the source of the ‘memory’ referred to in Sec. I. It is also worth noting that the first term in Eq. (16) can be thought of as the manifestation of the continuous spectrum of the Kuramoto problem, discussed in detail in Ref.  [@strogatz92]. Thus the echo phenomenon that we derive subsequently can be regarded as an observable macroscopic consequence of the continuous spectrum, where by ‘macroscopic’ we mean that the effect can be seen through monitoring of the order parameter without the necessity of other more detailed knowledge of the distribution function. It is also of interest to consider $f_n^{(1)}$ for $n\geq 2$. From Eq. (11) we obtain for $|n|\geq 2$ $$\frac{\partial f_n^{(1)}}{\partial t}+(in\omega +n^2\xi )f_n^{(1)}=inh_n\Delta (t) \ ,$$ which does not have any contribution from the order parameter, $R$. For $\tau >t>0$, Eq. (20) yields $$f_n^{(1)}(\omega ,t)=inh_nd_0\exp [-(in\omega +n^2\xi )t]\ ,$$ for $0 < t < \tau $, which, similar to the first term on the right hand side of Eq. (16), also oscillates increasingly more rapidly with $\omega $ as $t$ increases. At time $t=\tau ^-$ Eq. (21) yields $$f_n^{(1)}(\omega ,\tau ^-)=inh_nd_0\exp [-(in\omega +n^2\xi )\tau ] \ ,$$ for $|n|\geq 2$. C. Order $\epsilon ^2$ ---------------------- Now proceeding to $\mathcal{O}(\epsilon ^2)$ and again (as done in obtaining Eq. (15)) taking the $e^{i\theta }$ component of Eq. (11), we have $$\frac{\partial f_1^{(2)}}{\partial t}+(i\omega +\xi )f_1^{(2)}-\frac{1}{2}KR^{(2)*}=-i\left\{ \frac{K}{2i}f_2^{(1)}R^{(1)}-\Delta (t)\sum ^{+\infty}_{n=-\infty}h_{-(n-1)}f^{(1)}_n\right\}$$ where $R^{(1,2)*}(t)=\int ^{+\infty}_{-\infty}g(\omega )f_1^{(1,2)}(\omega ,t)d\omega $. The above equation is linear in $f_1^{(2)}$ and is driven by several inhomogeneous terms appearing on the right hand side of Eq. (23) that are quadratic in first order quantities. 
Since we are interested in the components of $f_1^{(2)}$ that result in echoes, and since, by our previous discussion, we expect that the echoes depend on the presence of [*both*]{} stimulus delta functions (i.e., the delta function $\delta (t)$ of strength $d_0$ and the delta function $\delta (t-\tau )$ of strength $d_1$), we are interested in the component of $f_1^{(2)}$ that is proportional to the product $d_0d_1$ for $t>\tau $. We denote this component $f^{(2)}_{1,e}$, where the subscript $e$ stands for ‘echo’. From Eq. (23) we see that for $t>\tau $, the $f^{(2)}_{1,e}$ component of $f_1^{(2)}$ satisfied the following initial value problem $$\frac{\partial f_{1,e}^{(2)}}{\partial t}+(i\omega +\xi )f^{(2)}_{1,e}-\frac{1}{2}KR_e^{(2)*}=0 \ ,$$ $$f^{(2)}_{1,e}(\omega ,\tau ^+)=id_1\sum ^{+\infty}_{n=-\infty}h_{-(n-1)}f^{(1)}_n(\omega ,\tau ^-) \ ,$$ $$R_e^{(2)*}(t)=\int ^{+\infty}_{-\infty} g(\omega )f^{(2)}_{1,e}(\omega ,t)d\omega \ .$$ Since $f^{(1)}_n(\omega ,\tau^- )$ is proportional to $d_0$ (see Eqs. (19) and (22)), we see that the solution of Eqs. (24)–(26) for $f_{1,e}^{(2)}$ and $R_e^{(2)}$ will indeed be proportional to $d_0d_1$ as desired. We solve Eqs. (24)–(26) by taking Laplace transforms, $$\hat f^{(2)}_{1,e}(\omega ,s)=\int ^\infty _\tau e^{-st}f_{1,e}^{(2)}(\omega ,t)dt \ ,$$ $$\hat R^{(2)}_{e*}(s)\equiv \int ^\infty _\tau e^{-st}R_e^{(2)*}(t)dt \ ,$$ in terms of which we obtain from Eq. (24) $$\hat f^{(2)}_{1,e}(\omega ,s)=\hat R^{(2)}_{e*}\frac{K/2}{s+\xi +i\omega }+\frac{f^{(2)}_{1,e}(\omega ,\tau ^+)e^{-s\tau }}{s+\xi +i\omega } \ .$$ Multiplying Eq. (29) by $g(\omega )d\omega $ and integrating from $\omega =-\infty$ to $\omega =+\infty$, then yields $$\hat R^{(2)}_{e*}(s)=\frac{e^{-s\tau }}{D(s)}\int ^{+\infty}_{-\infty}\frac{f^{(2)}_{1,e}(\omega ,\tau ^+)}{s+\xi +i\omega }g(\omega )d\omega \ .$$ To find $R_e^{(2)*}(t)$ we take the inverse Laplace transform, $$R_e^{(2)*}(t)=\frac{1}{2\pi i}\int ^{+i\infty +\eta }_{-i\infty+\eta }e^{st}\hat R_{e*}^{(2)}(s)ds \ , \eta >0 \ .$$ For the purposes of evaluating the integral (31), we recall that $D(s)=0$ has roots whose real parts correspond to the exponential decay rate of a response to an initial stimulus toward the $R=0$ state. Thus, as before in our discussion of the linear response (see Eq. (16)), any poles at the roots of $D(s)=0$ give contributions that we assume decay substantially faster with increasing $t>\tau $ than the diffusion induced exponential decay rate $\xi $. Since we are interested in echoes that we will find occur for $t=2\tau ,3\tau ,\ldots $, we neglect contributions to Eq. (31) from such poles. Thus it suffices to consider only the contribution to Eq. (31) from the pole at $s+\xi +i\omega =0$. Hence Eqs. (30) and (31) yield $$R_e^{(2)*}(t)\cong \int ^{+\infty}_{-\infty}e^{-(i\omega +\xi )(t-\tau )}\frac{f^{(2)}_{1,e}(\omega ,\tau ^+)}{D[-(i\omega +\xi )]} g(\omega )d\omega \ .$$ D. Echoes --------- In order to see how Eq. (32) results in echoes, we recall our previous results, Eqs. (25), (19) and (21) for $f_{1,e}^{(2)}(\omega ,\tau ^+)$, and combine them to obtain $$f^{(2)}_{1,e}(\omega ,\tau ^+)=d_0d_1h_2h_1^*\frac{\exp (i\omega \tau -\xi \tau )}{D^*[-(i\omega +\xi )]}-d_0d_1\sum _{|n|\geq 2}nh_nh^*_{n-1}\exp [-(in\omega +n^2\xi )\tau ] \ ,$$ where we have used $h_0=0$,  $h_n=h^*_n$, $f^{(1)}_{-1}=f_1^{(1)*}$, and the first term on the right side of Eq. (33) corresponds to $n=-1$ in Eq. (25). Putting Eq. (33) into Eq. 
(32), we see that we have an integral of a sum over terms with exponential time variations of the form $$\exp\{-i\omega [t-(1-n)\tau ]\}\exp \{ -\xi [t+(n^2-1)\tau ]\} \ .$$ Considering the first exponential in Eq. (34), we see that, for large values of $|t-(1-n)\tau |$, there is rapid oscillation of the integrand with $\omega $, and the integral can therefore be expected to be near zero. However, such rapid oscillation is absent near the times $t=(1-n)\tau $, at which a large value of $R_e^{(2)*}$ will occur. Since $t>\tau $, the relevant times occur for $n\leq -1$; e.g., for $n=-1$, we get an echo at $t=2\tau$; for $n=-2$, we get an echo at $t=3\tau $; etc. Therefore, we henceforth replace the summation over $|n|\geq 2$ in Eq. (33) by a summation from $n=-\infty$ to $n=-2$. E. Evaluation for Lorentzian frequency distribution functions ------------------------------------------------------------- We now consider the case of a Lorentzian frequency distribution, $$g(\omega )=g_L(\omega )\equiv \frac{1}{\pi }\frac{\Delta}{(\omega -\bar \omega )^2+\Delta ^2}=\frac{1}{2\pi i}\left\{ \frac{1}{\omega -(\bar \omega +i\Delta )}-\frac{1}{\omega -(\bar \omega -i\Delta )}\right\} \ .$$ The right-most expression for $g_L(\omega )$ makes clear that, when the previously real variable $\omega $ is analytically continued into the complex plane, the function $g_L(\omega )$ results from the sum of two pole contributions, one at $\omega =\bar \omega +i\Delta $, and one at $\omega =\bar \omega -i\Delta $. The quantity $\bar \omega $ represents the average frequency of the distribution, while $\Delta $ represents the width of the distribution. Consideration of the Lorentzian will be particularly useful to us because the integral (32) can be explicitly evaluated, and also because our numerical experiments in Sec. IV will be for the case of a Lorentzian frequency distribution function. As a first illustration we consider the $n=-1$ term which results in an echo at $t=2\tau $. We first evaluate $D(s)$ by inserting the pole-form for $g_L(\omega )$ into Eq. (18) and closing the integration path with a large semicircle of radius approaching infinity. This yields a single residue contribution to $D(s)$, $$D(s)=1-\frac{K}{2}[s+\xi +i(\bar \omega -i\Delta )]^{-1} \ .$$ Note that the solution of $D(s)=0$ occurs at $$s=-i\bar \omega -\left(\xi +\Delta -\frac{K}{2}\right) \ .$$ According to our previous assumptions, we require $K<K_c\equiv 2(\Delta +\xi )$ so that the $R=0$ state is stable, and $(\Delta -K/2)\tau - \xi \tau \gg 1$ so that we can neglect contributions from the pole at the root $D(s)=0$ in our approximation of (31) by (32). Using Eq. (36) and the $n=-1$ contribution to $f^{(2)}_{1,e}$ (i.e., the first term in (33)) in Eq. (32) we obtain for the echo term at $t=2\tau $ (denoted $R^{(2)*}_{2\tau }(t)$), $$R^{(2)*}_{2\tau }(t)=2ih^*_1h_2d_0d_1\Delta \int^{+\infty}_{-\infty} \frac{d\omega }{2\pi i}\cdot \frac{\exp[-i\omega (t-2\tau )-\xi t]}{[(\omega -\bar \omega )-i(\Delta -\frac{K}{2})][(\omega -\bar \omega )+i( \Delta -\frac{K}{2})]}\ .$$ For $t>2\tau $ $(t<2\tau )$ the integrand exponentially approaches zero as $Im(\omega )\rightarrow -\infty $ $(Im(\omega )\rightarrow +\infty)$, and we can therefore close the integration path with a large semicircle in the lower half $\omega $-plane (upper half $\omega $-plane). 
Thus the integral (38) is evaluated from the pole enclosed by the resulting path \[i.e., the pole $\omega =\bar \omega -i(\Delta -\frac{K}{2})$ for $t>2\tau $, and the pole $\omega =\bar \omega +i(\Delta -\frac{K}{2})$ for $t<2\tau $\], $$R^{(2)*}_{2\tau}(t)=\frac{h^*_1h_2d_0d_1\Delta }{\Delta -(K/2)}e^{-i\bar \omega (t-2\tau )-\xi t}e^{-(\Delta -\frac{K}{2})|t-2\tau |} \ .$$ From Eq. (39) we see that we obtain an echo that is approximately symmetric in shape about $t=2\tau $ (i.e., the envelope $\exp [-(\Delta -K/2)|t-2\tau |]$) for $\xi \ll (\Delta -\frac{1}{2}K)$. We can similarly evaluate the contribution $R^{(2)*}_{m\tau }(t)$ of echoes at $t=m\tau $ for $m=3,4,\ldots$. For example, the result for the echo at $t=3\tau $ is $$R^{(2)*}_{3\tau }=\frac{2h^*_2h_3d_0d_1\Delta }{\Delta -(K/4)}e^{-\xi (3\tau +t)}e^{-i\bar \omega (t-3\tau )}E(t-3\tau ) \ ,$$ $$E(t-3\tau )= \left\{ \begin{array}{ll} \exp [\Delta (t-3\tau )] \ , & {\rm for} \ t<3\tau \ , \\ \exp [-(\Delta -\frac{1}{2}K)(t-3\tau )] \ , & {\rm for} \ t>3\tau \ . \end{array} \right.$$ Thus, in the case $\xi =0$, the shape of the pulse envelope $E(t-3\tau )$ is asymmetric about $t=3\tau $, increasing at a more rapid exponential rate (namely, $\Delta )$ as $t$ increases toward $3\tau $ than the slower exponential rate of decrease (namely, $\Delta -(K/2)$) as $t$ increases away from $3\tau $. This is in contrast to the symmetrically shaped envelope $\exp [-(\Delta -\frac{1}{2}K)|t-2\tau |]$ for the echo at $t=2\tau $. In Appendix II we present an evaluation of $R^{(2)*}_{2\tau }(t)$ for the case of a Gaussian frequency distribution function, $$g(\omega )=g_G(\omega )\equiv [2\pi \Delta ^2]^{-1/2}\exp [-(\omega -\bar \omega )^2/(2\Delta ^2)] \ .$$ F. The small coupling limit --------------------------- We now consider a general frequency distribution function $g(\omega )$ but for the case where the coupling between oscillators is small. That is, $K\ll \Delta $, where $\Delta $ denotes the frequency width of $g(\omega )$ about its mean value $\omega =\bar \omega $. In this case a good approximation is provided by setting $K=0$. Thus $D[-(i\omega +\xi )]\cong 1$ and Eq. (33) yields $$f^{(2)}_{1,e}(\omega ,\tau ^+)=d_0d_1\sum ^\infty _{n=1}nh^*_nh_{n+1}\exp [-(-in\omega +n^2\xi )\tau ] \ ,$$ where we have replaced $n$ by $-n$ and used $h_n=h^*_{-n}$. Inserting Eq. (42) into Eq. (32) we obtain $$R_e^{(2)*}(t)=\sum ^\infty _{n=2}(n-1)d_0d_1h^*_{n-1}h_n\tilde g(t-n\tau )e^{-[((n-1)^{2}-1)\tau +t]\xi } \ ,$$ where $\tilde g(t)$ is defined by $$\tilde g(t)=\int ^{+\infty}_{-\infty}d\omega e^{-i\omega t}g(\omega ) \ .$$ Thus, for $K\ll \Delta $, the shape of the echoes at $t=2\tau ,3\tau ,\ldots$ is directly given by the Fourier transform (44) of the frequency distribution function $g(\omega )$. Another point is that with $K\rightarrow 0$, Eq. (1) shows that the oscillators do not interact, and the nonlinearity needed to produce the echo phenomenon comes entirely from the stimulus function $h(\theta )$. IV. Simulations =============== We have performed direct numerical simulations of the system (1) with a Lorentzian oscillator distribution (see Eq. (35)), $\bar \omega =0$, $\Delta =1$ (corresponding to $K_c=2$), $\hat d_0=\hat d_1$, $K=1$, $\tau =50$, and $\xi =0$. At $t=0^{-}$ we initialize each phase $\theta _i$ for $i=1,2,\ldots ,N$ randomly and independently with a uniform distribution in the interval $(0,2\pi )$. We then apply the mapping given by Eq. 
(7) with $\hat d_p=\hat d_0$ to each $\theta _i$ in order to simulate the effect of the delta function at $t=0$. Next we integrate Eq. (1) for each $i=1,2,\ldots ,N$ forward in time to $t=\tau ^{-}$, again apply the mapping Eq. (7) (but now with $\hat d_p=\hat d_1$), and we then continue the integration. At each time step we also calculate $R(t)$ using Eq. (8). Figure 2 shows results for $\hat d_0=\hat d_1=1/4$, and ![$|R(t)|$ versus $t$ for (a) $N=10^6$, (b) $N=10^5$, (c) $N=10^4$, and (d) $N=10^3$, showing the echo at $t\cong 2\tau $ and the increase of fluctuations at lower $N$.](fig2) ![$|R(t)|$ versus $t$ blown up around $t\cong 2\tau =200$, for $N=10^6$, $10^5$, $10^4$, $10^3$ (solid curves) showing the increase of fluctuations at lower $N$. The dotted curve is the theoretical result from Eq. (39) with $\xi =0$.](fig3) $$h(\theta )=\sin \theta +\sin 2\theta \ ,$$ for several different system sizes, $N=10^6$, $10^5$, and $10^4$. Figure 2(a–c) shows $|R(t)|$ versus $t$ for $0\leq t\leq 125$. The responses to the delta functions at $t=0$ and $\tau $, as well as the echo at time $t=2\tau $ are clearly illustrated. The effect of lower $N$ is to increase the fluctuations making the echo somewhat less distinct. We do not see any echo at $t=3\tau $. This is in agreement with Eq. (40), since $h_3=0$ for the $h(\theta )$ employed in these computations. Figure 3 shows a blow-up of the numerically computed echo around the time $t=2\tau $ for $N=10^6$, $10^5$, and $10^4$. Also, plotted in Fig. 3 as asterisks is the result from our theoretical calculation Eq. (39). Reasonable agreement between the theoretical and computed echo shapes is obtained, although the agreement is somewhat obscured by fluctuation effects at the smaller system sizes $(N)$. While our choice $\hat d_0=\hat d_1=1/4$ might be regarded as questionable for applicability of the small amplitude approximation $(\hat d_p\ll 1$, for $p=0,1$) employed by Eq. (7) and by our theory of Sec. III, we have nonetheless evidently obtained good agreement between the theory and numerical experiment. Figure 4 illustrates the effect of varying the driving amplitude for a network of size $N=10^4$. For $\hat d_0=\hat d_1=1/8$ (Fig. 4(a)) the echo is swamped by the noise and is not seen. For $\hat d_0=\hat d_1=1/4$ (Fig. 4(b), same as 2(a)) the echo seems to have appeared, but because of the noise, this conclusion is somewhat questionable. Finally, at the larger driving of $\hat d_0=\hat d_1=1/2$, the echo is clearly present. Figures 5(a) and 5(b) show the effect of changing $h(\theta )$. In particular, Fig. 5(a) shows numerical results for $\hat d_0=\hat d_1=1/4$, $N=10^5$, and $h(\theta )=\sin \theta $, with all other parameters the same as before. Since $h_2$ is now zero, Eq. (39) now predicts that there is no echo, in agreement with Fig. 5(a). Figure 5(b) shows numerical results for $\hat d_0=\hat d_1=1/4$, $N=10^5$, and $$h(\theta )=\sin \theta +\sin 2\theta +\sin 3\theta,$$ with all other parameters the same as before. Since $h_1$, $h_2$ and $h_3$ are all nonzero, Eqs. (39) and (40) now predict echoes at both $t\cong 2\tau $ and at $t\cong 3\tau $, and this is confirmed by Fig. 5(b). ![Simulation of $10^5$ oscillators for $\tau =100$, $\hat d_0=\hat d_1=1/3$, $K=1=\frac{1}{2}K_c$, $h(\theta )=\sin \theta $. In this case, no echo at $t=2\tau =200$ is observed.](fig4) ![Simulation of $10^5$ oscillators for $\tau =100$, $\hat d_0=\hat d_1=??$, $K=1=\frac{1}{2}K_c$, $h(\theta )=\sin \theta +\sin 2\theta +\sin 3\theta $. 
In this case, echoes are seen at $t=2\tau =200$ and at $t=3\tau =300$. The inset shows a blow-up of the numerical result for the echo shape at $t=3\tau $ with the theoretical result, Eq. (41), superposed (dotted curve).](fig5) Finally, we note that similar numerical experiments to all of the above have been repeated using a Gaussian $g(\omega )$, and these yield similar results (not shown). V. Discussion ============= Echo phenomena as used for MRI provide a powerful medical diagnostic tool. Echoes in plasmas have also been used as a basis for measuring velocity space diffusion of plasma particles[@jensen69]. Thus it is of interest to consider whether there are potential diagnostic measurement uses of echoes in the context of situations that can be described by the Kuramoto model and its variants. For example, we note that the amplitude of the echo varies exponentially with $\xi $, providing a possible means of determining the phase diffusion coefficient $\xi $. For example, the amplitude of the echo at $t=2\tau $ varies as $e^{-\xi \tau }$. Thus the log of the ratio of measurements of the echo amplitude using two different values of $\tau $, divided by the difference in the $\tau $ values, provides a potential means of estimating $\xi $. Also, as indicated by Eq. (43), if one can lower the coupling $K$ sufficiently, then echoes provide a potential way of determining the oscillator frequency distribution function $g(\omega )$. In particular, for low $K$ the distribution $g(\omega )$ is directly given by the inverse Fourier transform of the echo profile. On the other hand, we have seen from the simulations in Sec. IV that finite $N$ leads to noise-like behavior that may compromise such attempts. We also note that the Kuramoto model is an idealization, and application to any given situation may require modifications of the model and theory to more closely correspond to the situation at hand. We, nevertheless, feel that consideration of echoes for diagnostics may be of potential use. Furthermore, these phenomena are of theoretical interest from at least two points of view. First, as mentioned in Sec. IIIb, the memory required by the echo phenomenon can be thought of as leading to a macroscopically observable consequence of the continuous spectrum[@strogatz92] of the Kuramoto model. A second point of theoretical interest relates to the recent work in Ref.  [@ott]. In that paper it was shown for a general class of initial conditions that are on a certain manifold of the infinite dimensional state space of the Kuramoto system, that the future time evolution of the order parameter is determined by the current value of the order parameter. In particular, there is an ordinary differential equation describing the order parameter evolution. The echo phenomenon provides an example showing that, if initial conditions do not lie on the specified manifold of Ref.  [@ott], other behavior can occur. In particular, well after the second stimulus (at $t=\tau $) and well before the occurrence of the first echo (at $t=2\tau $), the order parameter is essentially zero, yet it does not remain zero as would be predicted for initial conditions on the manifold of Ref.[@ott] for $K<K_c$. This is discussed further in Appendix III. In conclusion, we hope that our work will stimulate experimental groups to investigate the type of situations we have addressed. This work was supported by ONR (N00014-07-1-0734) and NSF (PHY0456249). 
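As a concrete illustration of the diagnostic idea discussed in Sec. V, the following sketch synthesizes an echo profile from the $n=2$ term of Eq. (43) in the weak-coupling, noiseless limit and recovers the frequency distribution by inverting the Fourier transform (44). All numerical values (stimulus strengths, Fourier coefficients of $h$, and the Gaussian width) are assumed purely for the illustration.

```python
import numpy as np

# Weak-coupling, noiseless limit: near t = 2*tau the n = 2 term of Eq. (43) gives
#   R_e^(2)*(t) ~ d0*d1*conj(h1)*h2 * g~(t - 2*tau),
# so inverting the transform (44) of the echo profile estimates g(omega).
# All numbers below are assumed for this synthetic example.
d0, d1, h1, h2 = 0.25, 0.25, 0.5, 0.5
Delta, omega_bar = 1.0, 0.0

s = np.linspace(-20.0, 20.0, 4001)                               # time measured from 2*tau
g_tilde = np.exp(-1j * omega_bar * s - 0.5 * (Delta * s) ** 2)   # Eq. (44) for a Gaussian g
R_echo = d0 * d1 * np.conj(h1) * h2 * g_tilde                    # "measured" echo profile

omega = np.linspace(-5.0, 5.0, 201)
ds = s[1] - s[0]
kernel = np.exp(1j * np.outer(omega, s))                         # inverse of the transform (44)
g_est = np.real(kernel @ (R_echo / (d0 * d1 * np.conj(h1) * h2))) * ds / (2.0 * np.pi)
# g_est reproduces the Gaussian g(omega) of width Delta used to synthesize the echo.
```

In a real measurement the normalization $d_0d_1h_1^*h_2$ may not be known, but the shape and width of $g(\omega )$ are still recovered up to an overall constant; finite-$N$ fluctuations of the kind seen in Sec. IV set the practical noise floor for such an inversion.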
Appendix I: Linear Analysis {#appendix-i-linear-analysis .unnumbered} =========================== In this Appendix we solve Eq. (15) for $0<t<\tau $ to obtain the solution (16) and (17) for $K<K_c$. Taking the Laplace transform, $\hat u(s)=\int ^\infty _0u(t)e^{-st}dt$, Eq. (15) yields $$\hat f_1^{(1)}(\omega ,s) =\left( \frac{K}{2}\hat R_*^{(1)}(s)+ih_1d_0\right) /(s+\xi +i\omega )$$ where $\hat R_*^{(1)}(s)$ denotes the Laplace transform of $R^{(1)*}(t)$. Multiplying Eq. (45) by $g(\omega )d\omega $ and integrating from $\omega =-\infty$ to $\omega =+\infty$, we obtain $$\hat R_*^{(1)}(s)=ih_1d_0I(s)/D(s) \ ,$$ where $I(s)=\int ^{+\infty}_{-\infty}d\omega g(\omega )/(s+\xi +i\omega )$ and $D(s)=1-(K/2)I(s)$. Inserting Eq. (46) in (45) gives $$\hat f_1^{(1)}(\omega ,s)=ih_1d_0[D(s)(s+\xi +i\omega )]^{-1} \ .$$ As noted in Sec. IIIb, $\hat f_1^{(1)}(\omega ,s)$ has poles in $s$ at the zeros of $D(s)$ and at $s=-(i\omega +\xi )$. These yield time dependences of the inverse Laplace transform of $\hat f_1^{(1)}$ (see Eq. (27)) that vary as $e^{s_{0}t}$ and as $e^{-(i\omega +\xi )t}$, respectively, where $s_0$ denotes the root of $D(s)=0$ with the least negative real part. For $t\approx \tau $ and $-[Re(s_0)+\xi ]\tau \gg 1$, we can neglect the contributions from poles arising from roots of $D(s)=0$, and use only the contribution from the pole at $s=-(i\omega +\xi )$. From Eqs. (27) and (47) this yields $$f_1^{(1)}(\omega ,t)\cong ih_1d_0e^{-(i\omega +\xi )t}/D[-(i\omega +\xi )] \ ,$$ thus confirming Eqs. (16) and (17). Appendix II: Echo at $t=2\tau $ for Gaussian $g(\omega )$ {#appendix-ii-echo-at-t2tau-for-gaussian-gomega .unnumbered} ====================================================== We consider the case $g(\omega )=g_G(\omega )\equiv (2\pi \Delta ^2)^{-1/2}\exp [-(\omega -\bar \omega )^2/(2\Delta ^2)]$. Putting this expression for $g(\omega )$ and the $n=-1$ contribution to $f_{1,e}^{(2)}$ (i.e., the first term in Eq. (33)) into Eq. (32) we have, $$R^{(2)*}_{2\tau }(t)=\frac{h_1h_2d_0d_1}{\sqrt{2\pi \Delta ^2}}\int ^{+\infty}_{-\infty}d\omega \frac{\exp-\left\{ \frac{[(\omega -\bar \omega )+i\Delta ^2(t-2\tau )]^2}{2\Delta ^2}+\frac{\Delta ^2}{2}(t-2\tau )^2+i\bar \omega (t-2\tau )-\xi t\right\}}{D[-(i\omega +\xi )]D^*[-(i\omega +\xi )]} \ .$$ The collective damping rate is determined by the root of $D(s)=0$ with the least negative real part. Denote this root $s=s_0$ where $$s_0=-(i\bar \omega +\xi +\gamma _0)=-(i\omega _0+\xi ) \ ,\ \omega _0=\bar \omega -i\gamma _0 \ ,$$ where $\gamma _0>0$ is real. Letting $F(\omega )\equiv D[-(i\omega +\xi )]$, continuing this function from real $\omega $ into the complex $\omega $-plane, and expanding around $\omega =\omega _0$, we have $$F(\omega )=(\omega -\omega _0)\eta +\mathcal{O}[(\omega -\omega _0)^2] \ ,$$ where $\eta $ is a complex constant. Letting $F_*(\omega )$ denote the continuation of the function of the real variable $\omega $ in Eq. (49), $D^*[-(i\omega +\xi )]$, into the complex $\omega $-plane, we have that this function has a zero at $\omega =\omega ^*_0$, $$F_*(\omega )=(\omega -\omega ^*_0)\eta ^*+\mathcal{O}[(\omega -\omega ^*_0)^2] \ .$$ Considering the oscillatory $\omega $ variation in the numerator of the integrand of Eq. 
(49) to be rapid (valid for $K\ll \gamma _0$), we can approximate the integral by the saddle point method, where the saddle point is at $$\omega _{sp}=\bar \omega -i\Delta ^2(t-2\tau ) \ ,$$ and the steepest descent path through $\omega =\omega _{sp}$ runs along the horizontal line $Im(\omega )=-\Delta ^2(t-2\tau )$ from $Re(\omega )=-\infty$ to $Re(\omega )=+\infty$ (see Fig. 6). From Fig. 6(a) we see that for $\Delta ^2|t-2\tau |<\gamma _0$, the poles at $\omega =\bar \omega \pm i\gamma _0$ are not intercepted by the steepest descent path, while for $\Delta ^2|t-2\tau |>\gamma _0$ one of the poles is intercepted (e.g., Fig. 6(b)). In the case where a pole is intercepted, its contribution dominates the contribution from the saddle point by virtue of its time dependence, $e^{-\gamma _{0}|t-2\tau |}$, as opposed to the saddle point contribution time dependence, $e^{-\frac{1}{2}\Delta ^2(t-2\tau )^2}$. Thus we obtain $$R^{(2)*}_{2\tau }(t)\sim e^{-i\bar \omega (t-2\tau )-\xi t}\times \left\{ \begin{array}{ll} e^{-\frac{\Delta ^2}{2}(t-2\tau )^2} & {\rm for} \ |t-2\tau |<2\gamma _0/\Delta ^2 \ , \\ e^{-\gamma _0|t-2\tau |} & {\rm for} \ |t-2\tau | >2\gamma _0/\Delta ^2 \ . \end{array} \right.$$ Near $\gamma _0=\Delta ^2|t-2\tau |/2$, the pole is near the saddle point, and a uniform asymptotic expansion of the integral (49) is necessary to obtain the transition between the two forms in Eq. (53). ![(a) Steepest descent path (dashed) through the saddle point $\omega =\omega _{sp}$ for $\Delta ^2|t-2\tau |<\gamma _*$. (b) The steepest descent path (dashed) for $\Delta ^2(t-2\tau )<-\gamma _*$. The dominant poles at the roots of $F(\omega )=0$ and $F_*(\omega )=0$ are shown as crosses, where in (b) the steepest descent path has intercepted the pole $\omega =\omega _0$ resulting in a pole contribution to $R^{(2)*}_{2\tau }(t)$.](fig6) ![(a) Steepest descent path (dashed) through the saddle point $\omega =\omega _{sp}$ for $\Delta ^2|t-2\tau |<\gamma _0$. (b) The steepest descent path (dashed) for $\Delta ^2(t-2\tau )<-\gamma _0$. The dominant poles at the roots of $F(\omega )=0$ and $F_*(\omega )=0$ are shown as crosses, where in (b) the steepest descent path has intercepted the pole $\omega =\omega _0$ resulting in a pole contribution to $R^{(2)*}_{2\tau }(t)$.](fig7) Appendix III:  Further Discussion of Ref.[@ott] {#appendix-iiifurther-discussion-of-ref. .unnumbered} =============================================== In Ref.  [@ott], a broad class of noiseless (e.g., $\xi =0$ in Eqs. (3) and (11)) globally coupled systems of phase oscillators was studied. The simplest example of this class is the Kuramoto model. Reference [@ott] considered Lorentzian $g(\omega )$ and a special class of initial conditions. Referring to Eq. (12), these initial conditions are of the form, $$f_n(\omega ,0)= \alpha ^n(\omega ) \ , \ \ {\rm for } \ \ n\geq 0 \ , \\$$ and $f_n(\omega ,0)=f^*_{-n}(\omega ,0)\ , \ \ {\rm for} \ \ n\leq 0 $ , where $|\alpha (\omega )|<1$ for $\omega $ on the real axis, $\alpha (\omega )$ is analytic in ${\rm Im} (\omega )< 0$, and $|\alpha (\omega )|\rightarrow 0$ as ${\rm Im} (\omega )\rightarrow -\infty $. Under these conditions, Ref. [@ott] shows that the order parameters (or parameter), see Eq. (12), that describe the nonlinear, macroscopic time evolution of the given system satisfy a finite set of ordinary differential equations in time. 
Thus the order parameter dynamics is low dimensional, while the dynamics of the full system determining the evolution of the distribution function $f(\omega ,\theta ,t)$ is infinite dimensional[@ott]. For example, for the Kuramoto problem with the above conditions satisfied, Ref. [@ott] shows that $$dR/dt+\left( \Delta -\frac{1}{2}K\right) R + \frac{1}{2}K\Delta |R|^2R=0 \ ,$$ where we have taken $\bar \omega =0$ in Eq. (35). A consequence of Eq. (55) is that for $K<2\Delta \equiv K_c$, $|R(t)|$ [*decreases monotonically*]{} to zero. This behavior is not followed in the echo phenomena we discuss in the present paper. In particular, in Fig. 1, $|R(t)|$ is small between $t=\tau $ and $t=2\tau $, but then [*increases*]{} to form the echo in the vicinity of time $t=2\tau $. Referring to Eq. (34) and our subsequent discussion, we see that this is because there is a component of $f_1(\omega ,t)$ that varies as $\exp [-i\omega (t-2\tau )]$. Identifying $f_1(\omega ,0)$ in the linear problem with $\alpha (\omega )$ in the nonlinear problem \[Eq. (54)\] and considering $t_0$ as a new initial time (shifting time so that $t_0$ goes to $t=0$), we see that $\alpha (\omega )\sim \exp [-i\omega (t_0-2\tau )]$. If we take $t_0$ to be such that $\tau <t_0<2\tau $ and $|R(t_0)|$ is small, then $\alpha (\omega )$ does not satisfy the condition of Ref. [@ott] that $\alpha (\omega )\rightarrow 0$ as $Im(\omega )\rightarrow -\infty$. However, if $t_0>2\tau $, then it does. Thus the increase of $|R(t)|$ occurs only when the hypothesis under which Eq. (55) was derived does not hold. More generally, consider an initial condition for the original Kuramoto problem (without stimuli or noise) where $f_1(\omega ,0)$ is analytic on the real $\omega $-axis. Expressing $f_1(\omega ,0)$ as a Fourier integral transform, we have $$f_1(\omega ,0)=\int ^{+\infty}_{-\infty} e^{i\omega \eta }k(\eta ) d\eta \ ,$$ where $k(\eta )$ is the Fourier transform of $f_1(\omega ,0)$. Since $f_1(\omega ,0)$ is analytic in $\omega $, $k(\eta )$ decreases exponentially for sufficiently large $\eta $, $$|k(\eta )|<He^{-\beta \eta } \ , \ \ {\rm if}\ \ \eta >\eta _0 \ ,$$ for some set of positive constants $H,\beta ,\eta _0$. Using the Laplace transform technique (as in Appendix I), it can be shown that the solution to the linearized initial value Kuramoto problem contains a component of $f_1(\omega ,t)$ of the form $\exp (-i\omega t)f_1(\omega ,0)$, which we can express using Eq. (56) as $$\exp(-i\omega t)f_1(\omega ,0)=\int ^t_{-\infty}e^{-i\omega (t-\eta )}k(\eta )d\eta +\int ^\infty _te^{i\omega (\eta -t)}k(\eta )d\eta \ .$$ Setting $t=t_0$ and regarding $t=t_0$ as a new initial condition time, we note that the initial condition consists of two terms, namely the first and second integrals on the right hand side of Eq. (58). For $t_0>\eta _0$ sufficiently large, the second integral is smaller than the first by a factor of order $\exp (-\beta t_0)$. Furthermore, the first integral satisfies the condition $f_1(\omega ,t_0)\rightarrow 0$ as ${\rm Im}(\omega )\rightarrow -\infty$ \[because $(t_0-\eta )>0$ for the first integral\], while the second integral does not. Thus, if we choose to shift what we designate as the initial time to sufficiently large $t_0$, then aside from an exponentially small component of order $\exp (-\beta t_0)$, the initial condition obeys the requirement of Ref. [@ott] that $f_1(\omega ,t_0)$ goes to zero as ${\rm Im} (\omega )\rightarrow -\infty$. A. Pikovsky, M. Rosenblum and J.
Kurths, [*Synchronization: A Universal Concept in Nonlinear Science*]{} (Cambridge University Press, 2001). S. H. Strogatz, [*Sync: The Emerging Science of Spontaneous Order*]{} (Penguin Science Press, 2004). J. Buck, Q. Rev. Biology [**63**]{}, 265 (1988). T. J. Walker, Science [**166**]{}, 891 (1969). D. C. Michaels, Circulation Research [**61**]{}, 704 (1987). W. Singer, Ann. Rev. Physiology [**55**]{}, 349 (1993); R. Eckhorn et al., Biological Cymbernetics [**60**]{}, 121 (1988); C. M. Gray, Nature [**338**]{}, 334 (1989). S. Yamaguchi et al., Science [**302**]{}, 1408 (2002); T. M. Antonsen et al., arXiv:0711.4135. I. Z. Kiss, Y. Zhai and J. L. Hudson, Science [**296**]{}, 1676 (2005). K. Wiesenfeld and J. W. Swift, Phys. Rev. E [**51**]{}, 1020 (1995). Y. Kuramoto, [*Chemical Oscillations, Waves and Turbulence*]{} (Springer, 1984); and in [*International Symposium on Mathematical Problems in Theoretical Physics*]{}, [**39**]{}, edited by H. Araki (Springer-Verlag, Berlin, 1975). For reviews of the Kuramoto model see J. A. Acebron, et al., Rev. Mod. Phys. [**77**]{}, 137 (2005); S. H. Strogatz, Physica D [**143**]{}, 1 (2000); and E. Ott, [*Chaos in Dynamical Systems*]{}, second edition, chapter 6, section 6. (Cambridge University Press, 2002). E. L. Hahn, Phys. Rev. [**80**]{}, 580 (1950). R. W. Gould, Phys. Lett. [**19**]{}, 477 (1965); F. W. Crawford and R. S. Harp, J. Appl. Phys. [**37**]{}, 4405 (1966). E. Ott, J. Plasma Phys. [**4**]{}, 471 (1970). M. Porkolab and J. Sinnis, Phys. Rev. Lett. [**21**]{}, 1227 (1968). T. M. O’Neil and R. W. Gould, Phys. Fluids [**11**]{}, 134 (1968); J. H. Malmberg, C. B. Wharton, R. W. Gould and T. M. O’Neil, Phys. Fluids [**11**]{}, 1147 (1968). S. H. Strogatz, R. E. Mirollo and P. C. Matthews, Phys. Rev. Lett. [**68**]{}, 2730 (1992). T. H. Jensen, J. H. Malmberg and T. M. O’Neil, Phys. Fluids [**12**]{}, 1728 (1969); C. H. Su and C. Oberman, Phys. Rev. Lett. [**20**]{}, 427 (1968); T. M. O’Neil, Phys. Fluids [**11**]{}, 2420 (1968). E. Ott and T. M. Antonsen, Phys. Rev. Lett. (submitted).
--- abstract: 'We report the selective stabilization of the chiral rotational direction of bacterial vortices, emerging from a turbulent bacterial suspension, in achiral circular microwells sealed by an oil-water interface. This broken symmetry, originating from the intrinsic chirality of bacterial swimming near hydrodynamically different top and bottom surfaces, generates a chiral edge current of bacteria at the lateral boundary and grows stronger as the bacterial density increases. We demonstrate that the chiral edge current favors co-rotational configurations of interacting vortices, enhancing their ordering. The interplay between the intrinsic chirality of bacteria and the geometric properties of the boundary is a key feature of the pairing order transition of active turbulence.' author: - Kazusa Beppu - Ziane Izri - Tasuku Sato - Yoko Yamanishi - Yutaka Sumino - 'Yusuke T. Maeda' title: Edge Current and Pairing Order Transition in Chiral Bacterial Vortex --- Turbulent flows offer a rich variety of structures at large length scales and are usually obtained by driving flows out of equilibrium[@zhang] while overcoming viscous damping. A peculiar class of out-of-equilibrium fluids, driven at smaller scales, also presents turbulence-like structures called active turbulence [@ramaswamy; @marchetti]. For example, a dense bacterial suspension is driven out of equilibrium by the autonomous motion of the self-propelled bacteria suspended therein[@yeomans; @frey; @kokot; @Li]. The alignment between bacteria shapes the active turbulence patterns of collective swimming into vortices of similar size[@wioland1; @wioland2; @beppu; @hamby; @clement; @nishiguchi2]. However, this vortical order decays over distance, making it a long-standing issue to develop ordered dynamics at larger scales. Hence, growing attention is being paid to novel strategies for controlling active turbulence with simple geometric designs. ![**Chiral bacterial vortex.** (a) Experimental setup: a dense bacterial suspension confined in hydrophilic-treated PDMS microwells and sealed under an oil/water interface stabilized with a surfactant. (b) Ensemble of chiral bacterial vortices, in microwells with radius (top row) and (bottom row). Color map codes for the direction of the velocity field. Scale bar, . Schematic illustration of a bacterial vortex in a single microwell. “+” defines the positive sign of CCW rotation. CCW and CW occurrences are displayed in blue and red arrows, respectively.[]{data-label="fig1"}](figure1_9.jpg) Chirality, i.e. the non-equivalence of opposite handedness, is ubiquitous across scales[@bahr], and is commonly involved in active systems[@glotzer; @lowen; @levis; @lenz], either biological, such as bacteria[@whitesides; @haoran; @howard; @petroff], cytoskeletons and molecular motors[@tee; @frey2; @kim], or non-biological, consisting of self-propelled colloids[@jiang; @bechinger1; @irvine]. One of the effects of chirality is the non-equivalence of clockwise (CW) and counter-clockwise (CCW) rotations. As for bacteria, the broken mirror symmetry of flagellar rotation (CCW rotation around the tail-to-head direction during swimming) results in the opposite rotation of the cell body, which generates a net torque onto the solid surface the bacterium swims over and in turn bends its trajectory circularly[@whitesides].
Despite such intrinsic chirality in individual motion, active turbulence reported in the past showed CW and CCW global rotational directions have equal probability, indicating that mirror symmetry was recovered at the collective level[@wioland2; @beppu; @nishiguchi2; @hamby]. Can microscopic chirality of bacterial motion be transferred into the macroscopic order of collective swimming? Such question is a great challenge that would provide both fundamental understanding of active turbulence and technical applications for controlled material transport[@fabrizio; @aronson]. In this Letter, we report the chiral collective swimming of a dense bacterial suspension confined in an asymmetric (different top and bottom interfaces) but achiral (perfectly circular lateral interface) hydrodynamic boundary. Non-equivalence between CW and CCW collective swimming is enhanced as the bacterial density increases, and the selected CCW rotation with respect to the bottom-top direction, later referred to as “top view”, induces persistent edge current mostly observed on bacteria swimming near the lateral boundaries. Such edge current can alter the geometric constraints ruling the self-organization of bacterial vortices by suppressing the anti-rotational mode. The extended geometric rule, which is in excellent agreement with experiment, brings new understanding of chiral active matter in order to organize larger scale flow. ![**Chiral edge current.** (a) Color map of the orientation of collective motion in a chiral bacterial vortex ($R = \SI{50}{\micro\meter}$). The edge, defined as the area within from the lateral boundary, is separated from the rest of the microwell (the bulk) by a dashed white line. Scale bar, . (b) Normalized azimuthal velocity $v_{\theta}$ in microwells of various sizes. The blue circles indicate the edge current, and the green squares indicate the motion in bulk. $v_{\theta}$ is averaged over 10 s and plotted with error bars representing standard deviation.[]{data-label="fig2"}](figure2_8.jpg) A bacterial suspension of *Escherichia coli* was confined in microwells with the depth of , made of poly-dimethyl siloxane (PDMS) rendered hydrophilic with a polyethylene glycol coating, and sealed with an oil/water interface stabilized with a surfactant (Fig. \[fig1\](a))[@izri]. Top/bottom hydrodynamic asymmetry (later referred to as “asymmetric conditions”) consists in the solid bottom interface of the microwell and its top fluidic interface (Fig. S1 in [@supplement]). Bacteria in the suspension collectively move following the circular boundary of the microwell, but show a surprisingly selective CCW rotational direction (top view). The orientation map $\theta_i$ of the velocity field $\bm{v}(\bm{r}_i)$ obtained from particle image velocimetry (PIV) shows vortical structure maintained across the microwell of $R = \SI{20}{\micro\meter}$ ( Fig. \[fig1\](b)) . CCW-biased vortex (called “chiral bacterial vortex” hereafter) occurs without built-in chirality of the confinement, e.g. ratchets[@fabrizio; @aronson; @leonardo]. CCW rotation is strongly favored at 95% ($N = 145$, Fig. S3) probability in chiral bacterial vortex, while bacterial vortex rotates at equal probability in CCW or CW direction in water-in-oil droplets between *symmetric solid (top)/solid (bottom)* interfaces (symmetric conditions, Fig. S2). Flow reversal in vortex was not observed during our observations (Fig. S3) which emphasizes that chiral bacterial vortex is much more stable than the bacterial vortices in droplets[@hamby](Fig. S3). 
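As a note on the analysis, the rotational direction assigned to each microwell can be read off directly from the PIV velocity field. The following is a minimal sketch, not the analysis code used here, of one way to classify a confined vortex as CW or CCW from the spatially averaged normalized azimuthal velocity; the grid construction and the synthetic test field are illustrative assumptions.

```python
import numpy as np

def rotation_sense(vx, vy, x, y, center=(0.0, 0.0)):
    """Classify a confined vortex as CCW (+1) or CW (-1) from a PIV velocity
    field, using the sign of the spatially averaged normalized azimuthal
    velocity <v_theta/|v|>."""
    X, Y = np.meshgrid(x, y)
    dx, dy = X - center[0], Y - center[1]
    r = np.hypot(dx, dy)
    speed = np.hypot(vx, vy)
    mask = (r > 0) & (speed > 0)
    # CCW tangential unit vector is (-dy, dx)/r
    v_theta = (-dy * vx + dx * vy) / np.where(mask, r, 1.0)
    order = np.mean((v_theta / np.where(mask, speed, 1.0))[mask])
    return np.sign(order), order

# Illustrative check on a synthetic CCW solid-body rotation field.
x = y = np.linspace(-20.0, 20.0, 32)
X, Y = np.meshgrid(x, y)
sense, order = rotation_sense(-0.1 * Y, 0.1 * X, x, y)
print(sense, order)  # expected: 1.0 (CCW) with an order close to 1
```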
Chiral vortical structure is also observed in larger microwells ($R = \SI{35}{\micro\meter}$) but only within a distance of away from the circular boundary. This chiral motion with a coherent orientation near the lateral boundary, called edge current, is known to be a key feature in chiral many-body systems[@jiang; @bechinger1; @irvine]. We in turn examined the size-dependence of chiral vortices and the edge current in order to investigate the mechanism of such stability and selectivity in rotational direction. We define the tangential vector $\bm{t}(\theta_i)$ in CCW direction along the circular boundary and the azimuthal velocity $v_{\theta}(\bm{r}_i)=\bm{v}(\bm{r}_i)\cdot \bm{t}_i$. The orientation of the edge current along the boundary wall is analyzed by $\langle v_{\theta}(\bm{r}_i)/ |\bm{v}| \rangle$ where $\langle \cdot \rangle$ denotes the average over all possible site $i$ (Fig. \[fig2\](a)). Surprisingly, this edge current was maintained even in very large microwells ($R\geq$), the size of which is much larger than the critical size of a stable bacterial vortex in the bulk ($\approx\SI{35}{\micro\meter}$) (Fig. \[fig2\](b)). This persistence of edge current motivates us to further investigate its physical origin, by analyzing the interplay between the boundary and the intrinsic chirality of bacteria, which is the only element having chirality in the present system. With respect to the tail-to-head direction, the flagella of the bacterium rotate in the CCW direction. Torque balance then imposes a CW rotation of the body. Those two opposite rotations result in opposite frictions against the bottom interface, which ultimately converts into a CCW rotation (top view) of bacteria swimming near the top interface (CW rotation near bottom interface) (Fig. \[fig3\](a))[@whitesides]. Because of this CCW bias of their trajectories beneath the top, bacteria that collide with the lateral boundaries align with it and swim in a CCW rotation direction (CW rotation direction on the bottom). However, this effect alone does not determine net chirality because bacteria swim in opposite directions on the top and bottom interfaces of the microwell. In order to find the origin of net chirality in edge current, we recorded the trajectories of individual bacteria in dilute ($0.2\% v/v$) suspensions, such that interactions between bacteria do not affect their swimming. Fig. \[fig3\](b) presents the probability distribution function of the azimuthal velocities of individual bacteria, $P(v_{\theta})$, in a microwell ($R = \SI{35}{\micro\meter}$). Individual bacterial motion beneath the top fluidic interface shows visible nonequivalence between CW and CCW swimming, as more than half ($71\%$) of the tracked bacteria swim in CCW direction. By contrast, $58\%$ of individual bacteria onto the bottom solid interface swim in CW rotational direction, indicating weaker chiral bias. To compare those swimming chiralities, we define the chirality index $CI(v_{\theta})$ as antisymmetric part of $P(v_{\theta})$, i.e. $$CI(v_{\theta}) = P(v_{\theta}) - P(-v_{\theta}).$$ Positive chirality index means CCW rotation is dominant, and the larger the absolute value of the index, the more biased the rotational direction. For bacteria swimming near the top interface, chirality index is positive, while it is negative near the bottom interface (Fig. \[fig3\](c)). This tells that bacterial motion is CCW-biased near the top interface, and CW near the bottom interface. 
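For concreteness, the chirality index defined above can be evaluated from a histogram of the measured azimuthal velocities built on bins symmetric about $v_{\theta}=0$. The short sketch below is illustrative only and uses synthetic values rather than the measured data; the bin count and sample parameters are assumptions.

```python
import numpy as np

def chirality_index(v_theta, bins=41, v_max=None):
    """CI(v) = P(v) - P(-v), computed on a binning symmetric about v = 0,
    from a sample of azimuthal velocities (one value per tracked bacterium)."""
    v_theta = np.asarray(v_theta, dtype=float)
    if v_max is None:
        v_max = np.max(np.abs(v_theta))
    edges = np.linspace(-v_max, v_max, bins + 1)
    p, _ = np.histogram(v_theta, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, p - p[::-1]  # reversing p gives P(-v) on symmetric bins

# Illustrative check: a CCW-biased Gaussian sample yields a chirality index
# that is positive at positive azimuthal velocities.
rng = np.random.default_rng(1)
v_sample = rng.normal(loc=2.0, scale=8.0, size=5000)  # um/s, synthetic values
centers, ci = chirality_index(v_sample)
print("CCW fraction:", np.mean(v_sample > 0))
```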
In addition, near the top interface, $|CI(v_{\theta})|>0.05$ whereas near the bottom interface $|CI(v_{\theta})| < 0.02$. This indicates that bacterial swimming is more biased near the top (fluidic) interface than near the bottom (solid) interface. This difference in the amplitude of the bias at each interface is responsible for the chiral rotation of the whole bacterial vortex, and therefore is at the origin of chiral edge current (Fig. \[fig3\](d)). ![**Bacterial swimming in dilute ($0.2\% v/v$) suspension** (a) Schematic illustrations of chiral bacterial swimming near the top fluidic interface (left) and near the bottom solid interface (right). Near the top interface, individual bacterial swimming is CCW-biased (blue curved arrow) while near the bottom interface it is CW-biased (see Fig. S5). (b) Histograms of azimuthal velocity and their average (vertical dashed line) of single bacteria swimming near top interface (left, schematic illustration in insert, $N=5004$, average ) and bottom interface (right, schematic illustration in insert, $N=3702$, average ). Proportions of CCW and CW occurrences are displayed in blue and red at the top of each plot. (c) Corresponding chirality indices plotted against azimuthal velocity, near the top (blue) and bottom (red) interfaces. (d) Chiral bias of swimming near the top and bottom interfaces have opposite signs but different amplitudes, under asymmetric conditions.[]{data-label="fig3"}](figure3_7.jpg) ![**Chiral bacterial vortex is collective effect.** A dense ($20\% v/v$) bacterial suspension is confined under asymmetric conditions. Represented chirality indices against azimuthal velocity of (a) individual fluorescent bacteria (schematic illustration in insert) observed under confocal microscopy near the top (left) and bottom (right) interface, (b) fluid flow with tracer particles also observed under confocal microscopy, and (c) collective bacterial motion observed under bright field. Two microwell sizes were considered: $R = \SI{35}{\micro\meter}$ (small blue circles) and $R = \SI{50}{\micro\meter}$. In the larger microwells were considered two regions: the boundary layer within from the lateral boundary (larger blue diamonds), and the bulk that is more than away from the lateral wall (green disks). Sample size and average azimuthal velocity of each case can be found in Fig. S6.[]{data-label="fig4"}](figure4_7.jpg) As the density of self-propelled particles increases, their mutual interactions affect more significantly their global motion, which ultimately gives rise to collective behavior. As implied already, the rotational biases of handedness of individual bacteria ($71\%$ in top and $58\%$ in bottom) are too weak to account for the 95%-selective chirality in the bacterial vortex (Fig. \[fig1\](c)). To resolve this gap, we therefore characterized the structure of chiral bacterial vortices in much denser ($20\% v/v$) suspensions. Fig. \[fig4\](a) presents chirality index of individual bacteria near the top and bottom interfaces in a microwell ($R = \SI{35}{\micro\meter}$). Interestingly, both interfaces present dominantly CCW rotational direction, although the top interface is more strongly biased. This indicates that the effect of interactions between bacteria forces the rotational direction to be the same across the microwell. Intriguingly, chirality index of individual motion becomes larger as the bacterial density increases. Trajectories of tracer particles were also predominantly CCW, indicating that fluid flow also has CCW handedness. 
Moreover, chirality index of the fluid velocity (Fig. \[fig4\](b)) and the collective velocity (Fig. \[fig4\](c)) are comparable to that of individual motion. This suggests that a slight chiral bias in individual bacteria is amplified by collective bacterial interactions, which turns into a global vortex with a unidirectional rotation. When we analyzed the vortex flow near the lateral boundary of larger microwells ($R = \SI{50}{\micro\meter}$), individual motion at high density has a large positive chirality index, similarly to smaller microwells. However, near the center of larger microwells, chirality is null for individual and collective bacterial motion as well as for the fluid flow (Fig. \[fig4\](a) to (c)). The necessity to be close to the lateral boundaries to observe coherent chiral swimming emphasizes the importance of spatial constraint to amplify the rotational bias and turn it into chiral edge current. ![**Edge current favors co-rotational vortex pairing** (a) Schematic illustration and definition of relevant geometric parameters. (b) Illustration of the co-rotational vortex pairing (FMV pattern, left) and the anti-rotational pairing (AFMV pattern, right) with edge current. The edge current deviates the orientation angle of bacteria $\theta$ around the *tip*. In FMV pattern, bi-particle alignment and edge current deviate bacteria in the same direction, but in AFMV pattern, those effects compete. (c) Vorticity map of bacterial vortex pairs at various values of $\Delta$ with $R = \SI{19}{\micro\meter}$. (d) Order parameter $\Phi_{FMV}$ of FMV pairs of interacting bacterial vortices, against $\Delta/R$. Under no edge current (full grey circles), FMV to AFMV transition occurs at $\Delta/R\simeq\sqrt{2}$, while under CCW edge current (inverted black triangles) it occurs at a larger value of $\Delta/R\simeq1.9$.[]{data-label="fig5"}](figure5_8.jpg) The edge current plays a crucial role in the ordering of interacting bacterial vortices. When two bacterial vortices interact with one another via near-field interaction, they have either the same rotational direction (defined as ferromagnetic vortices, FMV) or opposite rotational directions (anti-ferromagnetic vortices, AFMV) in a geometry-dependent manner[@wioland2; @beppu; @nishiguchi2]. To reveal how a chiral edge current affects the ordering of interacting bacterial vortices, we construct a theoretical model of interacting bacterial vortices in doublets of overlapping identical circular boundaries (Fig. \[fig5\](a))[@beppu]. Two identical overlapping circular microwells, with a radius $R$ and an inter-center $\Delta$, offer the means for a systematic investigation of the pairing order transition[@beppu]. The ratio $\Delta/R = 2\cos\Psi$ is an important geometric parameter characterizing the pairing order transition from FMV to AFMV: if $\Delta$/R is small enough, the two vortices align in co-rotational direction and pair into a FMV pattern. Transition from FMV to AFMV occurs at a predicted value $\Delta_c/R = 2 \cos \SI{45}{\degree} = \sqrt{2}$ because that is the only configuration at which the two pairing patterns are equiprobable[@beppu]. How does edge current affect the previously established design principle? To answer this question, the orientation dynamics of bacteria with the heading angle $\theta$ is considered at the vicinity of the sharp areas of the boundaries (“*tip*”). 
Given the CCW fluid flow near boundary reorients the bacteria at the *tip*, the effective torque for reorientation $\bm{\tau_e} = - \partial U_e/\partial \theta$ has geometry-dependent potential $U_e = -2 \gamma_e \sin\theta\cos\Psi$ with $\gamma_e$ representing the ratio of fluid flow to bacterial swimming ($\gamma_e > 0$). This reorientation maintains the CCW rotation of the edge current along boundary in both FMV pairing (Fig. \[fig5\](b), left) and AFMV pairing (Fig. \[fig5\](b), right). In addition, the result of the bi-particular collision near the *tip* between bacteria coming from different circular parts[@beppu] also affects the vortex pairing. Bacterial collision is ruled by a polar alignment as a source of geometry-dependence[@vicsek; @supplement]. By considering the most probable configurations (general case detailed in [@supplement]), the orientation near the *tip* is decided by the potential of FMV pairing $U_p^{FMV} = 2\gamma_p\sin\Psi$ at $\theta = 0$ (Fig. \[fig5\](b), left) or of AFMV pairing $U_p^{AFMV} = 2\gamma_p\cos\Psi$ at $\theta = \pi/2$ (Fig. \[fig5\](b), right), with the strength of the polar alignment $\gamma_p>0$. The respective sums of the potentials coincide at the transition point, i.e. $U_p^{FMV} + U_e|_{\theta=0} = U_p^{AFMV} + U_e|_{\theta=\pi/2}$, which leads to $$\gamma_p \sin\Psi_c = (\gamma_p- \gamma_e) \cos\Psi_c,$$ where $\gamma_p- \gamma_e$ indicates the suppression of AFMV pairing by chiral edge current. Hence, $\Delta_c/R=2\cos\Psi_c$ at which FMV and AFMV pairings occur at equal probability is $$\label{chiral_geometric rule} \frac{\Delta_c}{R} = \frac{2}{\sqrt{1+(1-\gamma_e/\gamma_p)^2}}\simeq \sqrt{2} \Bigl( 1 + \frac{\gamma_e}{2\gamma_p}\Bigr).$$ FMV pattern is stabilized in $\Delta/R \geq \sqrt{2}$ and the relative strength of the edge current $\gamma_e/\gamma_p$ determines the shift of the transition point. To test those chirality effects, we examined the pairing of doublets of chiral vortices with $R = \SI{19}{\micro\meter}$ within the range of $0 \leq \Delta < 2R = \SI{38}{\micro\meter}$. FMV pattern of chiral vortices is dominant in $0 \leq \Delta/R \leq 1.9$ and exhibits CCW rotation (Fig. \[fig5\](c)). The pairing order transition is also analyzed by using the order parameter of FMV pairing $\Phi_{FMV}$ [@supplement]. $\Phi_{FMV}$, which reaches 1 for FMV while it goes down to 0 for AFMV, is defined as $\vert \langle \bm{p}_i \cdot \bm{u}_i \rangle \vert$ with the orientation of velocity $\bm{p}_i$ measured experimentally at site $i$ and the expected orientation of FMV pattern $\bm{u}_i$ calculated numerically at corresponding site. Under asymmetric condition, $\Phi_{FMV}$ shows a transition from $1$ to $0$ at $\Delta/R\simeq1.9$, while it occurs at $\Delta/R\simeq1.4$ under symmetric condition (absence of chiral edge current) (Fig. \[fig5\](d)). FMV pairing pattern appears to be favored in the presence of edge current. Moreover, according to Eq. with the experimental values $\gamma_p=0.5$ (obtained from independent experiment, Fig. S7) and $\Delta_c/R\simeq1.9$, the coefficient of edge current is $\gamma_e=0.3$ that is reasonably comparable to the ratio of fluid flow to bacterial swimming velocity (Figs.\[fig4\](a) and 4(b)). The excellent agreement with experiment means that chiral edge current affects the pairing order transition. In conclusion, we revealed that confining a dense bacterial suspension in microwells with asymmetric top/bottom interfaces but achiral circular lateral boundaries stabilizes a chiral vortex. 
Hydrodynamically different top/bottom interfaces give then rise to a subtle broken symmetry in bacterial swimming, that is amplified by collective bacterial motion and becomes an edge current persistent over larger length scale. We showed that it is possible to generate and control the break of mirror-symmetry without using built-in chirality, unlike most current experimental setup, e.g. with built-in chiral ratchet-shape[@fabrizio; @aronson; @leonardo]. Moreover, in present chiral bacterial vortex, rigid rod much longer than bacterial body can consistently rotate in CCW direction over multiple rounds at (Fig. S8). Thus, asymmetric hydrodynamic boundary offers simple and fast material transport, even without built-in chirality. Finally, the edge current favors co-rotational FMV pairing pattern of doublets of identical vortices. Even in triplets of identical overlapping circular microwells, the pairing order transition from FMV to (frustrated) AFMV patterns is also shifted to higher values (from $1.7$ to $1.9$) (Fig. S9), suggesting that chiral bacterial vortex has fewer limitation from geometric frustration. Although beyond the scope of this study, understanding how the nature of an interface affects the amplitude of the chiral flow it generates could give the means of a finer tuning of the global rotation of confined bacterial vortices. The finding of edge current in chiral bacterial vortex opens new directions for the tailoring of collective motion. Such asymmetric hydrodynamic boundaries are also involved in flocking and lane formation of active droplets[@stone]. The edge current observed in active droplets with similar collective effect may advance the generic understanding of chiral collective behavior[@glotzer; @tjhung]. Furthermore, stabilized co-rotational pairing order is a key to clarify how broad class of active matter amplifies microscopic chirality. As such constrained pairing order is also relevant to chiral spinners[@tierno], controlling chirality-induced order with simple geometric rule would verify the validity of geometric approach to chiral many-body systems. This work was supported by Grant-in-Aid for Scientific Research on Innovative Areas (JP18H05427 and JP19H05403) and Grant-in-Aid for Scientific Research (B) JP17KT0025 from MEXT. [45]{} J. Zhang and A. Libchaber Phys. Rev. Lett. **84**, 4361 (2000). S. Ramaswamy, Annu. Rev. Condens. Matt. Phys. **1**, 323 (2010). M. C. Marchetti, J. F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Rev. Mod. Phys. **85**, 1143 (2013). H.H. Wensink, J. Dunkel, S. Heidenreich, K. Drescher, R. E. Goldstein, H. Löwen, and J. M. Yeomans. Proc. Natl. Acad. Sci. U.S.A. **109**, 14308-14313 (2012). V. Bratanov, F. Jenko, and E. Frey, Proc. Natl. Acad. Sci. USA 112, 15048-15053 (2015). G. Kokot, S. Das, R.G. Winkler, G. Gompper, I.S. Aranson, and A. Snezhko, Proc. Natl. Acad. Sci. U.S.A. **114**, 12870-12875 (2017). H. Li, et al., Proc. Natl. Acad. Sci. U.S.A. **116**, 777-785 (2019). H. Wioland, F. G. Woodhouse, J. Dunkel, J. O. Kessler, and R. E. Goldstein, Phys. Rev. Lett. **110**, 268102 (2013). H. Wioland, F. G. Woodhouse, J. Dunkel, and R. E. Goldstein, Nat. Phys. **12**, 341-345 (2016). K. Beppu, Z. Izri, J. Gohya, K. Eto, M. Ichikawa, and Y. T. Maeda, Soft Matt. **13**, 5038-5043 (2017). A.E. Hamby, D.K. Vig, S. Safonova, and C.W. Wolgemuth, Sci. Adv. **4**, eaau0125 (2018). B. Vincenti, C. Ramos, M.L. Cordero, C. Douarche, R. Soto, and E. Clement, Nat. Commun. **10**, 5082 (2019). D. Nishiguchi, I. 
Aranson, A. Snezhko and A. Sokolov, Nat. Commun. **9**, 4486 (2018). H-S Kitzerow and C. Bahr, Chirality in Liquid Crystal, Springer, New York (2001). N.H. Nguyen, D. Klotsa, M. Engel, and S.C. Glotzer, Phys. Rev. Lett. **112**, 075701 (2014). H. Löwen, Eur. Phys. J. **225**, 2319-2331 (2016). B. Liebchen, and D. Levis. Phys. Rev. Lett. **119**, 058002 (2017). A. Maitra and M. Lenz, Nat. Commun. **10**, 920 (2019). W.R. DiLuzio, et al. Nature **435**, 1271-1274 (2005). H. Xu, J. Dauparas, D. Das, E. Lauga and Y. Wu, Nat. Commun. **10**, 1792 (2019). I. H. Riedel, K. Kruse, and J. A. Howard, Science **309**, 300-303 (2005). A.P. Petroff, X-L. Wu, and A. Libchaber, Phys. Rev. Lett. **114**, 158102 (2015). Y.H. Tee, et al. Nat. Cell Biol. **17**, 445-457 (2015). J. Denk, L. Huber, E. Reithmann, and E. Frey, Phys. Rev. Lett. 116, 178301 (2016). K. Kim, et al. Soft Matt. **14**, 3221-3231 (2018). H. Jiang, H. Ding, M. Pu and Z. Hou, Soft Matt. **13**, 836-841 (2017). N. Narinder, C. Bechinger, and J.R. Gomez-Solano, Phys. Rev. Lett. **121**, 078003 (2018). V. Soni, E.S. Bililign, S. Magkiriadou, S. Sacanna, D. Bartolo, M.J. Shelley and W.T.M. Irvine, Nat. Phys. **15**, 1188-1194 (2019). R. Di Leonardo, et al. Proc. Natl. Acad. Sci. U.S.A. **107**, 9541-9545 (2010). A. Sokolov, M.M. Apodaca, B.A. Grzybowski, and I.S. Aranson, Proc. Natl. Acad. Sci. U.S.A. **107**, 969-974 (2010). Z. Izri, D. Garenne, V. Noireaux, and Y.T. Maeda, ACS Synth. Biol. **8**, 1705-1712 (2019). See supplemental material for details of experimental methods, full theoretical details. N. Koumakis, A. Lepore, C. Maggi and R. Di Leonardo, Nat. Commun. **4**, 2588 (2013). T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet, Phys. Rev. Lett. **75**, 1226 (1995). A. Ortiz-Ambriz, C. Nisoli, C. Reichhardt, C.J.O. Reichhardt, and P. Tierno, Rev. Mod. Phys. **91**, 041003 (2019). S. Thutupalli, D. Geyer, R. Singh, R. Adhikari, and H.A. Stone, Proc. Natl. Acad. Sci. U.S.A. **115**, 5403-5408 (2018). E. Tjhung, M.E. Cates, and D. Marenduzzo, Proc. Natl. Acad. Sci. U.S.A. **114**, 4631-4636 (2017). $ $ Supplemental information for\ Edge Current and Pairing Order Transition in Chiral Bacterial Vortex {#supplemental-information-for-edge-current-and-pairing-order-transition-in-chiral-bacterial-vortex .unnumbered} ==================================================================== Materials and Methods ===================== Device microfabrication ----------------------- The device used in this experiment is a flow cell that contains an array of microwells of various radii $R$ ($\SI{20}{\micro\meter} \leq R \leq \SI{150}{\micro\meter}$) and three types of shape: single circular microwells, pairs and triplets of overlapping identical circular microwells. Their depth is in all the experiments[@beppu]. The flow cell is made of a cover glass slide (Matsunami, S1127, ) and a cover slip (Matsunami, C218181, ) separated by a double-sided adhesive tape (NICHI-BAN, NW-10, ). The microwells are fabricated using standard soft lithography techniques. Briefly, PDMS polymer and curing agent (Dow Corning, 98-0898, Sylgard184) at 90-to-10 mass ratio was spin-coated at 1000 rpm for 30 seconds to reach a thickness of , on a mold made of a photoresist (SU-8 3025, Microchem) pattern (thickness ) cured and developed through conventional photo-lithography on a silicon wafer (Matsuzaki, Ltd., $\phi$2-inch wafer). 
After a curing at for an hour, the PDMS film is cut around a single array of microwells and peeled off to be bonded on its unpatterned surface to the glass cover slide that has been exposed to air plasma beforehand (, corona discharge gun, Shinko Denso). This assemblage is then once again exposed to air plasma (, corona discharge gun), covered with a solution of polyethylene glycol-poly-L-lysine (PEG-PLL, Nanocs, PG2k-PLY), and left to rest for 30 minutes. Ungrafted PEG-PLL is then washed away with deionized water. PDMS microwells and glass cover slide are now treated hydrophilic, and PEG-PLL coating prevents non-specific adhesion of bacteria. Finally, the flow cell is completed with the glass cover slip (untreated) being attached to the glass cover slide with the adhesive spacer placed over the unpatterned areas of the PDMS sheet. It is used immediately after fabrication (Fig. \[fig.s1\] Step 1). ![**Schematic illustration of device fabrication.** Step 1: device construction. A thin PDMS layer patterned with microwells is transferred between the two glass slides of a flow cell and coated with PEG-PLL to avoid unspecific adhesion of bacteria. Step 2: bacterial suspension is injected from one side of the flow cell. Step 3: excess bacterial suspension is flushed out by a mixture of oil and surfactant. This mixture seals the microwells on their top side. Step 4: device is sealed at both ends with epoxy glue.[]{data-label="fig.s1"}](figureS1_3.jpg) Filling protocol ---------------- Once a flow cell is ready, it is filled with of a bacterial suspension (two volume fractions were available: dense at $20\% v/v$, dilute at $0.2\% v/v$). The PEG-PLL coating of the PDMS allows the bacterial suspension to reach the inside of the microwell and fill them properly. Once the flow cell is completely filled, oil (light mineral oil, Sigma Aldrich) with surfactant (SPAN80, Nacalai) at $2 wt\%$ is injected from the same side, while excess bacterial suspension over the microwells is flushed out and absorbed with filter paper from the other side of the flow cell. This seals the microwells under an oil/water interface. To suppress unwanted flow in the flow cell, both of its ends are sealed with epoxy glue (Huntsmann, Ltd.). The array of microwells is then ready to be observed under microscope (Fig. \[fig.s1\] Steps 2-4)[@izri]. Persistently straight-swimming bacterial strain ----------------------------------------------- We used bacterial strain *Escherichia coli* RP4979 that lacks tumbling ability. These bacteria swim smoothly without tumbling and show persistent straight motion in the bulk of a fluid. Highly motile bacteria were obtained by inoculating a single bacterial colony into of LB medium (NaCl , yeast extract , tryptone , pH7.2) and incubating it overnight at . The next day, of this overnight culture solution is transferred to of T-Broth (NaCl , trypton , pH7.2), and the inoculated cultures are incubated at and agitated at 150 rpm for about 6 hours. After reaching an optical density of $0.4$, the culture medium is centrifuged at 3000 rpm at room temperature for 10 minutes to concentrate the bacterial suspension density to $20-25\% v/v$. The straight swimming of bacteria was measured in a bulk fluid, and the fluctuation of the heading angle $\theta(t)$ was analyzed. Single bacteria are considered as self-propelled points particle, with a position $\bm{r}(t) = (\bm{x}(t), \bm{y}(t))$ and an orientation $\bm{d}(t)$(Fig.\[figs\_diffusion\](a)). 
For two-dimensional coordinates, the orientation of single bacteria is expressed by the unit vector along the long-axis of bacteria $\bm{d} = \bm{d}(\theta(t)) = (\cos\theta(t) , \sin\theta(t))$. Because the mean-square angle displacement (MSD) is given by $\langle \bigl[\bm{d}(\theta(t)) - \bm{d}(\theta(0))\bigr]^2 \rangle_t = 2(1-\exp[-D t])$ with time interval $\delta t$ and the angular diffusion coefficient $D$ that reflects the fluctuation of bacterial orientation at single cell level[@jain](Fig.\[figs\_diffusion\](b)). The obtained coefficient $D$ is 0.12 rad$^2$/sec, indicating that RP4979 bacteria persistently swim in one direction at a low density in bulk. ![**Straight swimming of single bacteria in bulk.** (a) Typical trajectory of swimming single bacteria in bulk. Scale bar is . (b) Mean square displacement of heading angle of single bacteria is plotted with time. The slope of this MSD curve is fitted with $2(1-\exp[-Dt])$, where $D$ reflects orientation fluctuation of smoothly swimming bacteria[@jain].[]{data-label="figs_diffusion"}](figureS_fluctuation.jpg) Bacterial density measurement and image velocimetry --------------------------------------------------- To measure bacterial density, we used a mixture - 99-to-1 ratio - of two genetically modified bacteria (strain RP4979) that constitutively express fluorescent protein (either YFP or dTomato). That fluorescent labeling allows the quantitative recording of the trajectories of individual bacterial swimming in either dilute (Fig. 3 in main text) or dense suspensions (Fig. 4(a) in main text). By tracking dTomato-expressing bacteria, we can record the individual trajectories of bacteria inside collective vortical motion. In the analysis of single bacteria and tracer particles in a suspension, they were tracked by means of a plugin of Particle Tracker 2D/3D in the Fiji(ImageJ) software. Bright-field optical imaging and video-microscopy were performed by using an inverted microscope (IX73, Olympus) with a CCD camera (DMK23G445, Imaging Source) that enables us to record bacterial collective motion at 30 frames per second. The velocity field of bacterial collective motion $\bm{v}(r,t)$ was analyzed by Particle Image Velocimetry(PIV) with Wiener filter method using PIVlab based on MATLAB software, and its grid size was $\times$. Acquired velocity fields were further smoothed by averaging over 1 sec. In addition, polystyrene tracer particles with red fluorescence (Molecular probes) were used to track the flow field (Fig. 4(b) in main text). The tracer particles were dispersed at a low density of $0.026\% v/v$, where individual particles could be tracked inside the bacterial suspension. Recording of the trajectories of red-labeled bacteria and red tracer particles was done with a confocal microscope (IX73 inverted microscope from Olympus, and confocal scanning unit CSU-X1 from Yokogawa Electric Cor. Ltd., iXon-Ultra EM-CCD camera from Andor Technologies) under red fluorescence channel. All the recordings were done at 30 frames per second. Experimental details ==================== Preponderance and stability of CCW bacterial vortex --------------------------------------------------- ![**Preponderance and stability of chiral bacterial vortex.** (a) A droplet of a dense bacterial suspension confined in a microwell ($R = \SI{20}{\micro\meter}$) between *asymmetric fluidic (top) and solid (bottom)* interfaces. (b) Control experiment for the effect of top/bottom interfaces on the directionality of the bacterial vortex. 
Droplets ($\SI{15}{\micro\meter}\leq R\leq\SI{35}{\micro\meter}$) of an emulsion of dense bacterial suspension in oil between *symmetric solid (top)/solid (bottom)* interfaces. (c) Distributions of vorticity averaged over 10 s and their respective average (vertical dashed line) under asymmetric interfaces ($N = 145$, average ) in a microwell ($R = \SI{20}{\micro\meter}$). Proportions of CCW and CW occurrences are displayed in blue and red at the top of each plot. (d) The distributions of vorticity and their respective average (vertical dashed line) under symmetric solid/solid interfaces ($N = 126$, average ). Under symmetric interfaces, CW and CCW rotations are observed with the same frequency. (e) Persistent dynamics of vorticity. The vorticity is always positive without sign change, meaning CCW rotation is stable over the range of our measurement (30 sec). (f) Dynamics of vorticity under solid/solid symmetric interfaces show frequent changes of the vorticity sign, indicating that switching between CW and CCW rotations occurs.[]{data-label="fig.s2"}](figureS_vorticity.jpg) The chiral bias of bacterial swimming can be affected by the nature of the top and bottom interfaces, i.e. whether the two interfaces are hydrodynamically equivalent or not, as seen in the main text. We compare this effect on chiral bacterial vortices (dense suspension) between asymmetric (Fig. \[fig.s2\](a)) and symmetric top/bottom interfaces (Fig. \[fig.s2\](b)). Fig. \[fig.s2\](c) shows the distribution of vorticity of an ensemble of chiral bacterial vortices. CCW rotation is favored between *asymmetric fluidic (top)/solid (bottom)* interfaces (later referred to as asymmetric conditions) and the probability of CCW rotation is 95%. We next tested the vortex rotation of bacterial collective motion under symmetric top/bottom interfaces. We confined a dense bacterial suspension in water-in-oil droplets between *symmetric solid (top)/solid (bottom)* interfaces (symmetric conditions). By examining the vorticity of bacterial collective motion in an emulsion of $\SI{15}{\micro\meter}\leq R\leq\SI{35}{\micro\meter}$, we found that a vortex rotated with equal probability in the CCW (52%) or CW (48%) direction (Fig. \[fig.s2\](d)). Thus, the selective chirality is observed only for asymmetric top/bottom interfaces. Moreover, such a preponderance in chirality is connected to the stability of the rotational direction of the vortical flow. We analyzed the dynamics of the vorticity for 30 sec in order to examine flow reversal in the chiral bacterial vortex. For the chiral bacterial vortex under asymmetric conditions, no sign reversal was observed over this typical observation time (Fig. \[fig.s2\](e)). In contrast, for the achiral bacterial vortex under symmetric conditions, the vorticity shows periodic changes with sign reversal, as reported in a previous study[@hamby] (Fig. \[fig.s2\](f)). Thus, the chiral bacterial vortex is highly selective (95% CCW rotation) and stable (no flow reversal over a few tens of seconds). Edge current is not observed in a microwell with symmetric top/bottom condition ------------------------------------------------------------------------------- In Figure 2 of the main text, we showed the CCW edge current in the chiral bacterial vortex. This edge current is also observed in larger microwells but only within a distance of away from the circular boundary.
To test whether this edge current is unique to the spatial confinement between *asymmetric fluidic (top)/solid (bottom)* interfaces, we also examined the bacterial vortex in the circular microwell between *symmetric solid (top)/solid (bottom)* interfaces (Fig. \[fig.edge\](a)). The solid interface was made of PEG-coated PDMS and the dense bacterial suspension (20% volume fraction) was confined in microwells. We define the tangential vector $\bm{t}(\theta_i)$ in CCW direction along the circular boundary and the azimuthal velocity $v_{\theta}(\bm{r}_i) = \bm{v}(\bm{r}_i)\cdot \bm{t}_i$. The edge, defined as the area within from the lateral boundary, is separated from the rest of the microwell (the bulk). The orientation of the edge current along the boundary wall is analyzed by $\langle v_{\theta}(\bm{r}_i)/ |\bm{v}| \rangle$ where $\langle \cdot \rangle$ denotes the average over all possible site $i$ and observation time of 10 sec (Fig. 2(a) in main text). However, the stable edge current in symmetric conditions is observed only in smaller microwells with $R\leq$(red open triangle: CW direction, blue open circle: CCW direction, in Fig. \[fig.edge\](b)). The threshold value is comparable to the critical size of a stable bacterial vortex in the bulk (green filled square in Fig. \[fig.edge\](b)), and this characteristic length is much shorter than the one for chiral edge current ($\approx \SI{100}{\micro\meter}$, see Fig 2(b) in main text). In addition, collective motion near boundary has no preference of either CW or CCW directions in symmetric conditions. Thus, the collective motion in symmetric conditions does not have persistent edge current in CCW rotation, indicating that chiral edge current is uniquely stabilized in asymmetric top/bottom conditions. ![**Weak edge current of bacteria in a microwell with symmetric condition.** (a) Schematic illustration of a dense bacterial suspension confined in a microwell between *symmetric solid (top)/solid (bottom)* interfaces. (b) The edge current in symmetric PDMS microwell. The edge, defined as the area within from the lateral boundary, is separated from the rest of the microwell (the bulk) as same as Figure 2 in main text. Normalized azimuthal velocity of the flow in microwells of various sizes. The blue circles (red circles) indicate the edge current in CCW direction (in CW direction), and the green squares indicate the vortical collective motion in bulk. Normalized azimuthal velocities averaged over 10 s are plotted with error bars that present standard deviations of their time series.[]{data-label="fig.edge"}](figureS_edge.jpg) Trajectory of single bacteria near solid interface -------------------------------------------------- Bacteria of the RP4979 strain swim persistently straight in the bulk of their environment due to their inability to tumble. However, in the vicinity of a solid interface, the torque generated by flagella rotation induces a CW (while looking from above, the bacteria being on top of the interface) deviation of the bacterial swimming that eventually leads to a circular trajectory. RP 4979 bacteria, in a dilute suspension, swimming on top of PDMS interface in the absence of lateral confinement show, as predicted, circular trajectories with a CW rotation direction (Fig. \[fig.s4\](a)(d)(g)). Such circular trajectory reflects the hydrodynamic interaction between the chiral rotation of flagella and boundaries. 
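The circular trajectories described above can be illustrated with a toy model of a swimmer moving at constant speed with a constant angular bias plus rotational noise. The sketch below is a simplified illustration, not the tracking or analysis code: the speed and bias are chosen arbitrarily, and only the angular noise strength $D = 0.12$ rad$^2$/s is taken from the measurement above.

```python
import numpy as np

def circling_trajectory(v0=20.0, omega_bias=-1.0, D_theta=0.12,
                        dt=0.05, n_steps=2000, seed=0):
    """Toy 2D swimmer: constant speed v0 [um/s], constant angular bias
    omega_bias [rad/s] (negative = CW when viewed from above) plus rotational
    noise of strength D_theta [rad^2/s]. Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    theta = np.empty(n_steps)
    pos = np.empty((n_steps, 2))
    theta[0], pos[0] = 0.0, (0.0, 0.0)
    for i in range(1, n_steps):
        theta[i] = (theta[i - 1] + omega_bias * dt
                    + np.sqrt(2 * D_theta * dt) * rng.normal())
        pos[i] = pos[i - 1] + v0 * dt * np.array([np.cos(theta[i]),
                                                  np.sin(theta[i])])
    return pos, theta

# A CW-biased walk (omega_bias < 0) traces approximately circular loops of
# radius ~ v0/|omega_bias|, as over the solid interface; flipping the sign
# gives CCW loops, as beneath the top fluidic interface.
pos_cw, _ = circling_trajectory(omega_bias=-1.0)
```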
A bacterial suspension was confined in PDMS microwells and sealed with an oil/water interface, which means that two interfaces were present in interaction with bacterial swimming. We thus tracked the swimming of individual bacteria in a dilute suspension, near the top oil/water interface, and the bottom PDMS interface of circular microwells (Fig. 3 in main text). Fig. \[fig.s4\](b)(e)(h) show the trajectory of bacteria that swim below the top oil/water interface, with top view. Their trajectories are curved towards the CCW direction. In addition, Fig. \[fig.s4\](c)(f)(i) shows the swimming trajectories near the bottom solid PDMS interface. Furthermore, we also analyzed the bacterial swimming, fluid flow, and collective motion (analyzed by PIV) at a dense ($20\% v/v$) bacterial suspension confined under asymmetric conditions. Figs. \[fig.s5\](a) and (b) show the distribution of average azimuthal velocity $P(v_{\theta})$ at higher density. The small fraction of bacteria expressing dTomato (red fluorescent protein) was mixed at 1% in dense bacterial suspension expressing YFP (yellow fluorescent protein) and the trajectory of individual bacteria was recorded by confocal microscopy. Fig. \[fig.s5\](a) (or \[fig.s5\](b)) is the histogram of the azimuthal velocity of bacteria in top fluidic interface (or bottom solid interface). In Figs. 4(a) and 4(b) in main text, the chirality index $CI(v_{\theta}) = P(v_{\theta}) - P(-v_{\theta})$ is presented by using those data. Figs. \[fig.s5\](c) and (d) show the distribution of average azimuthal velocity $P(v_{\theta})$ of tracer particle (, red fluorescence) at higher density. The trajectories of particles were recorded at the top (\[fig.s5\](c)) or the middle (\[fig.s5\](d)) in the microwell by using confocal microscopy. Finally, Figs. \[fig.s5\](e) is the distribution function of average azimuthal velocity $P(v_{\theta})$ from PIV analysis at higher density. One can find large fraction of bacteria (82-95%) shows CCW rotation near boundary. ![**Intrinsic bacterial chiral swimming near interfaces.** (a-c) Schematic illustrations of (a) CW rotation (top view) of bacterial swimming on a solid interface with open lateral boundaries, (b) CCW rotation of bacterial swimming below oil/water interface, with lateral solid PDMS wall, and (c) CW rotation of bacterial swimming on the solid PDMS interface, with lateral solid PDMS wall. Scale bar in (a) is , and scale bars in (b) and (c) are both . (d-f) Representative trajectories of individual bacterial swimming in corresponding conditions ((d): open lateral boundaries, (e) top oil/water interface, (f) bottom solid PDMS interface), in dilute bacterial suspensions. The start points of each trajectory are fixed at the center of the plot, and their end points are indicated by a black cross. Colors code for different trajectories. (g-i) Histogram of the mean angular velocities of the detected trajectories in the corresponding cases ((g): open lateral boundaries, (h) top oil/water interface, (i) bottom solid PDMS interface). Proportions of rotation directions are given in blue for CCW and red for CW.[]{data-label="fig.s4"}](figureS4_3.jpg) ![**Histograms of azimuthal velocities in dense bacterial suspension.** (a and b) individual bacteria swimming near (a) the top oil/water interface and (b) the bottom PDMS interface, (c and d) fluid flow with tracer particles near (c) the top oil/water interface and (d) the middle of PDMS microwell, and (e) PIV velocity field of collective bacterial motion observed under bright field. 
The blue bars correspond to the probability of CCW rotation while the red bars to CW rotation. We also analyzed collective motion in two different sizes of microwells: $R = \SI{35}{\micro\meter}$ (small blue circle) and $R=\SI{50}{\micro\meter}$. In the larger microwells, we considered two regions: the boundary layer within from the lateral boundary (larger blue diamond), and the bulk that is more than away from the lateral wall (green filled square).[]{data-label="fig.s5"}](figureS5_5.jpg) solid interface (Fig.\[fig.s4\](a)) Top fluidic interface (Fig.\[fig.s4\](b)) Bottom solid interface (Fig.\[fig.s4\](c)) -------------------------- ------------------------------------- ------------------------------------------- -------------------------------------------- spatial constraint Boundary-free Circular microwell Circular microwell sample size, N N = 264 N = 1469 N = 876 average angular velocity : Sample sizes and average velocities of histograms presented in Fig. \[fig.s4\] (PTV (bacteria)). \[table.s1\] [|c|c|c|c|ll]{} & **$R$= , boundary** & **$R$= , boundary** & **$R$ = , bulk** &\ PTV (bacteria) TOP & $N = 491$, $\langle v_{\theta}\rangle = \SI{4.24}{\micro\meter}$/s & $N = 784$, $\langle v_{\theta}\rangle = \SI{3.44}{\micro\meter}$/s & $N = 1129$, $\langle v_{\theta}\rangle = \SI{0.369}{\micro\meter}$/s &\ PTV (bacteria) BOTTOM & $N = 619$, $\langle v_{\theta}\rangle = \SI{2.35}{\micro\meter}$/s & $N = 1022$, $\langle v_{\theta}\rangle = \SI{2.16}{\micro\meter}$/s & $N = 1654$, $\langle v_{\theta}\rangle = \SI{-0.130}{\micro\meter}$/s &\ PTV (tracer particles) TOP & $N = 442$, $\langle v_{\theta}\rangle = \SI{2.86}{\micro\meter}$/s & $N = 406$, $\langle v_{\theta}\rangle = \SI{2.06}{\micro\meter}$/s & $N = 628$, $\langle v_{\theta}\rangle = \SI{0.557}{\micro\meter}$/s &\ PTV (tracer particles) MIDDLE & $N = 574$, $\langle v_{\theta}\rangle = \SI{3.12}{\micro\meter}$/s & $N = 736$, $\langle v_{\theta}\rangle = \SI{2.59}{\micro\meter}$/s & $N = 1379$, $\langle v_{\theta}\rangle = \SI{0.541}{\micro\meter}$/s &\ PIV & $N = 88536$, $\langle v_{\theta}\rangle = \SI{1.71}{\micro\meter}$/s & $N = 131240$, $\langle v_{\theta}\rangle = \SI{1.28}{\micro\meter}$/s & $N = 231200$, $\langle v_{\theta}\rangle = \SI{-0.048}{\micro\meter}$/s &\ \[table.s2\] The distribution of orientation angle in bacterial collective motion -------------------------------------------------------------------- To analyze the fluctuation of the heading angle of bacteria $\theta(t)$ at a higher density, we measured the trajectory of single bacteria inside dense bacterial suspension confined in a microwell of overlapping two circles. The swimming bacteria show angular fluctuation $D$ as shown in Fig.\[figs\_parameter\], but such fluctuation should be changed in the presence of orientation alignment due to collective motion. The strength of orientation interaction, which is defined as $\gamma_p$, is an important parameter for their collective motion. Indeed, the variance of the distribution function of heading angle $\sigma_{\theta}^2$ is closely related to both the coefficient of polar orientation interaction $\gamma_p$ and the fluctuation of bacterial heading angle $D$ as $\sigma_{\theta}^2=\frac{D}{2\gamma_p\sin\Psi}$ for FMV pattern ($\sigma_{\theta}^2=\frac{D}{2\gamma_p \cos\Psi}$ for AFMV pattern) because the polar orientation reorients the direction of bacterial swimming and in turn suppresses the angular fluctuation. 
The ratio between the angular fluctuation $D=0.12$ and the coefficient of polar alignment $\gamma_p$ is approximately equal to the variance of the distribution at the given geometric parameter $\Psi$. From the data for the straight swimming of bacteria in a bulk fluid (Fig.\[figs\_parameter\]), the angular diffusion coefficient $D$ is 0.12 rad$^2$/sec. In addition, the variance of angle distribution has 0.21. By using these values, we can estimate $\gamma_p$=$\frac{D}{2\sigma_{\theta}^2\sin\Psi}$=0.47. This value is used to examine the effect of chiral edge current for the pairing transition of FMV and AFMV, later. ![**Orientation distribution of bacteria at high density.** A bacterial population was enclosed in a microwell with the boundary shape of two identical overlapping circles ($\Delta/R$ = 1.58). The group of bacteria exhibited collective motion in an FMV pattern, and we observed single bacteria labeled with fluorescent dTomato protein that swim away from the tip and the distribution of its heading angle was measured. The variance of this probability distribution is used to estimate the coefficient $\gamma_p$ of the polar alignment as $\sigma^2=\frac{D}{2\gamma \sin \Psi}$.[]{data-label="figs_parameter"}](figureS_parameter.jpg) Active stirring of a micron-sized object in chiral bacterial vortex ------------------------------------------------------------------- ![**Rotation of rod-shaped object.** (a) Movement of a rigid rod in chiral bacterial vortex showed in (left) snapshots over $\SI{6}{\second}$. Red point indicates the head-end of the rod. Scale bar, . (b) Corresponding orientation angle increases linearly with time, exhibiting a constant CCW angular velocity (for comparison, dashed line has a slope of ). (c) Snapshots under bright field over of the random rotation and translation of a rigid rod in a dense bacterial suspension under quasi-two-dimensional confinement. Rod of interest is colored yellow in the image, and its head is arbitrarily defined by a red dot. Scale bar, . (d) Time evolution of the orientation angle of the marked rod. Blue color represents the CCW rotation and red color, the CW rotation. Both rotation directions have comparable angular velocities (black dashed line at ). Scale bar, . (e) Velocity field (black arrows) and vorticity map (red to blue colormap) obtained from the PIV analysis of the turbulent flow in a dense ($20\% v/v$) bacterial suspension under quasi-two-dimensional confinement between PDMS sheet (top) and a glass slide (bottom). Blue stands for CCW rotation and red stands for CW rotation. (f) Power spectrum of the previous velocity field. Peak is at which corresponds to a wavelength of . (g) Corresponding histogram of vorticity (Average vorticity ). Proportions of rotation directions are given in blue for CCW and red for CW.[]{data-label="fig.s3"}](figureS3_6.jpg) Collective motion of suspended self-propelled particles enhances transport properties in active fluids, which has been used conjunctly with built-in geometry to direct the motion of objects larger than the suspended particles (e.g. gear-shape[@fabrizio; @aronson] or ratchet-shape[@leonardo]). To further illustrate the transport properties chiral bacterial vortices, we confined a rigid rod ( length and thickness, made of SU-8 photoresist) that is much longer than bacterial body ( length and thickness). The rod consistently rotates in CCW direction over multiple rounds at , i.e. 6 times faster than previously known ratcheted gears in a bacterial suspension (Fig. \[fig.s3\]). 
As one would expect, the disordered vortices seen in a quasi-two-dimensional channel are also able to rotate rigid rods larger than bacteria (Fig. \[fig.s3\](c)). However, the direction of rod rotation switches stochastically between CW and CCW while the absolute value of the angular velocity remains constant ($\simeq\SI{0.7}{\radian\per\second}$) (Fig. \[fig.s3\](d)). At high concentration, bacteria of the RP4979 strain present a turbulent behavior characterized by a large number of dynamic and intermingled vortices under quasi-two-dimensional confinement (Fig. \[fig.s3\](e)). The PIV analysis of their collective motion reveals a widely distributed power spectrum with a peak at (Fig. \[fig.s3\](f)). This suggests that the vortices observed in this turbulent active flow have a size of typically or more. The PIV analysis also shows that the distribution of vorticity is even (symmetric about zero, Fig. \[fig.s3\](g)), which indicates that CW and CCW behaviors are statistically identical under symmetric, quasi-two-dimensional confinement. This equiprobability of CW and CCW leads to the stochastic change of rod rotation in the two-dimensional chamber (Fig. \[fig.s3\](d)). Therefore, the chiral bacterial vortex in the present setup offers simple and fast material transport, even without a built-in chirality such as a gear shape.

FMV pairing order in triplets of circular microwells
----------------------------------------------------

To determine the strength of the ordering induced by chiral bacterial vortices, we tested it against the rotational frustration observed in triplets of identical overlapping circular microwells. Here again, the microwell radius is denoted $R$, and the center-to-center distance is denoted $\Delta$ and is the same for all the pairs of microwells in the triplet. We used microwells with $R$ = and $0 \leq \Delta/R \leq 1.98$ (Figs. \[fig.s6\](a) and \[fig.s6\](b)). When we confine a dense ($20\%$ v/v) bacterial suspension in such microwells, under symmetric conditions (top and bottom interfaces are in PDMS), geometric frustration is responsible for a shift of the FMV-AFMV (frustrated order) transition point $\Delta/R$ from 1.3-1.4 (observed in doublets of overlapping microwells under symmetric conditions, Fig. \[fig.s6\](a) top) to 1.6-1.7 (Fig. \[fig.s6\](b) top). Under asymmetric conditions (top interface is in oil/water and bottom interface is in PDMS), the chiral edge current of the interacting vortices further shifts that transition point to 1.8-1.9, similarly to what is observed in doublets of microwells (Figs. \[fig.s6\](a) bottom and \[fig.s6\](b) bottom). This result indicates that the chirality of bacterial vortices can generate a larger unidirectional flow and overcome the geometric frustration of interacting triplet vortices. ![**Edge current favors co-rotational vortex pairing**. Vortex pairing is affected by geometric frustration and chiral edge current. Dense bacterial suspension confined in multiple circular identical overlapping microwells ($R = \SI{19}{\micro\meter}$) can present various pairings ((a) left and (b) left). In doublets of microwells ((a) top right and bottom right) no frustration is present and the effect of chirality shifts the transition point between FMV and AFMV pairing towards higher values of $\Delta/R$. In triplets of microwells (b), in the absence of chirality ((b) top right), FMV pairing is favored by geometric frustration and transits to AFMV pairing (frustrated order) at an increased value of $\Delta/R$.
When chirality is added to geometric frustration ((b) bottom right), FMV pairing is further favored, and the value of the transition point is further increased. The transition point from FMV to AFMV is $\Delta_c/R=1.9$ in both doublet and triplet circle microwells, which indicates that chiral edge current induces larger shift than the effect of frustration in triplet circle microwell.[]{data-label="fig.s6"}](figureS6_5.jpg) Order parameter of FMV pattern ------------------------------ To analyze the ordered pattern of ferromagnetic vortex (FMV) pattern of bacteria, we used the order parameter $\Phi_{FMV}$ according to our previous study[@beppu]. Before giving the definition of $\Phi_{FMV}$, we need to derive analytical form of the angular velocity of interacting vortices inside a overlapping circular microwell. For this aim, we firstly find the analytical form for angular velocity of single vortex, ${v}_{\theta}(r)$ inside a circle of the radius $R$. We assume the boundary condition at $r=R$ is ${v}_{\theta}(R)$=0, and ${v}_{\theta}(r)$ is proportional to $r$. The spatial distribution of vorticity inside the circle is given by $$\label{angvel1} \omega(r) = \begin{cases} \omega & (0 \leq r \leq R-s) \\ - \omega \Bigl[1- \frac{(R-s)^2}{R^2} \Bigr] & (R-s \leq r \leq R) \end{cases}$$ where $R-s$ is the position with maximum angular velocity and the size $s$ is estimated from experimental data. By solving , one can express the orthoradial velocity in a circular microwell $\bm{v}(r,\theta)$=$v_{\theta}(r)\bm{t}(\theta)$ as $$\label{angvel2} \bm{v}(r,\theta) = \begin{cases} \frac{\omega}{2} \Bigl[1- \frac{(R-s)^2}{R^2} \Bigr]r \bm{t}(\theta) & (0 \leq r \leq R-s) \\ \frac{\omega}{2} \Bigl(1- \frac{s}{R}\Bigr)^2\frac{R^2-r^2}{r} \bm{t}(\theta) & (R-s \leq r \leq R) \\ 0 & (r>R) \end{cases}$$ where $\bm{t}(\theta)=(-\sin \theta, \cos \theta)$ is the unit orthoradial vector at the angular position $\theta$. In the following section, this analytic formulation is used to define the order parameter of FMV pattern. Here we show the derivation of order parameter of ferromagnetic vortices (FMV) pattern used in Fig. 5(d). This order parameter reports the correlation of orientation field of velocity between the experimentally observed vortex pairing and numerically calculated FMV. For the numerical calculation of FMV pattern, the vortex confined in circular boundary is firstly considered: for each circle composing the doublet microwell, we set an index $j$, 1 stands for the left side and 2 for the right side. We define two sets of polar coordinates $(r_j, \theta_j)$; one for left circle is $(r_1, \theta_1)$ and the other for right circle is $(r_2, \theta_2)$. The origin of $j$ polar coordinates is set at the center of $j$ circle. We consider $\bm{t}_j(\theta_j)$ the orthoradial unit vector at the angular position $\theta_j$ centered on the center of the circle $j$ for $0\leq r_j \leq R$. In particular, we have $\bm{v}_j$$(r_j,\theta_j)=v_{\theta}(r_j)\bm{t}_j(\theta_j)$ where $v_{\theta}(r_j)$ is given by . ![**Orientation fields of FMV pattern.** (a) Orientation field of FMV pattern obtained from numerical method using Eq.. (b) Typical orientation field of interacting vortices in FMV pattern obtained in experiment.[]{data-label="fig.Order"}](figureS_FMV.jpg) We then construct velocity field for vortices showing FMV pattern in the doublet microwell. 
In addition to the boundary condition of a doublet circle that is characterized by $R$ and $\Delta$, the polar coordinates $(r, \theta)$ defines the internal space. The origin of polar coordinates is placed at the centroid of the doublet shape and the velocity field, $\bm{v}$$(r, \theta)$, is in turn considered as the superposition of two vortices in $j$=1 and 2 circles. Because two vortices in FMV pattern show same angular velocities of $\bm{t}_1(\theta)=\bm{t}_2(\theta)$, we can describe the velocity field as $$\label{angvel3} \bm{v}(r,\theta) = \sum_{j} \bm{v}_j(r_j, \theta_j) = \sum_{j} v_{\theta}(r_j) \bm{t}_j(\theta_j) .$$ The orientation field of an FMV pattern lies on the unit vector $\bm{u}(r,\theta)$ such that $$\label{angvel4} \bm{u}(r,\theta) =\frac{\bm{v}(r,\theta)}{|\bm{v}(r,\theta)|} .$$ By using the inner product of expected orientation map $\bm{u}(r,\theta)$ and the one measured experimentally $\bm{p}(r,\theta)$, the order parameter $\Phi_{FMV}$ is then defined as $$\Phi_{FMV}=\vert \langle \bm{p}(r,\theta)\cdot \bm{u}(r,\theta) \rangle \vert$$ where $\langle \cdot \rangle$ denotes the ensemble average over all sites in a doublet microwell. One can estimate the deviation of experimentally obtained velocity field from ideal FMV pattern because $\Phi_{FMV} = 1$ if it matches the FMV pattern, and $\Phi_{FMV} = 0$ if it becomes an AFMV pattern or a disordered turbulence. Theoretical analysis ==================== Geometric rule of bacterial vortex transition with no edge current (without chirality) -------------------------------------------------------------------------------------- ![**Boundary condition in theoretical model.** (a) The shape of the boundary conditions used in the theoretical model. Geometric parameters are shown in the figure. (b) The direction of movement and flow of particles along the wall near the tip. The horizontal right direction is defined as angle $\theta=0$, and the counterclockwise direction is a positive angle direction. The counterclockwise tangential direction of the left circular microwell (Red arrow) is an angle $\pi/2-\Psi$, and the clockwise tangential angle of the right circular microwell (Blue arrow) is $\pi/2+\Psi$.[]{data-label="fig.s7"}](figureS7_1.jpg) To explain the transition of ferromagnetic vortex (FMV) pattern and anti-ferromagnetic vortex (AFMV) patterns under geometric constraints, we construct theoretical model of orientational dynamics by considering the motion of self-propelled particles in a fluid. Bacteria is considered as a self-propelled point particle, with a position $\bm{r}_m(t) = (x_m(t), y_m(t))$ and an orientation $\bm{d}_m(t)$. For two-dimensional coordinates, the orientation of the particles is also expressed with the unit vector along the long-axis of bacteria $\bm{d}_m = \bm{d}(\theta_{m}) = (\cos\theta_m, \sin\theta_m)$. We impose the confinement of a circular microwell on the fluid and bacteria as shown in Fig.\[fig.s7\](a). The bacteria swim along the boundary wall at low noise limit and their heading angle $\theta_m$ is parallel to the tangential direction of boundary (Fig.\[fig.s7\](b)). The geometry of boundary shape is a pair of overlapping circular microwells with geometric parameter $\Delta/R$. The “tip” of this doublet is a point where two bacterial vortices intersect and bacteria from different circular parts of the microwell collide. The parameters $\Delta$ and $R$ are also rewritten with the elevation angle $\Psi$ as $\cos\Psi = \frac{\Delta}{2R}$ . 
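For reference, the relation between the overlap parameter and the tip geometry can be evaluated directly; the short sketch below only uses $\cos\Psi = \frac{\Delta}{2R}$ and the tangential heading angles $\pi/2 \mp \Psi$ of Fig.\[fig.s7\](b), with $\Delta/R$ values taken from the experiments described above.

```python
import numpy as np

def tip_geometry(delta_over_R):
    """Elevation angle Psi and the two tangential heading angles at the tip
    for a doublet microwell with overlap parameter Delta/R."""
    psi = np.arccos(delta_over_R / 2.0)   # cos(Psi) = Delta / (2R)
    ccw_left = np.pi / 2 - psi            # CCW tangent of the left circle at the tip
    cw_right = np.pi / 2 + psi            # CW tangent of the right circle at the tip
    return psi, ccw_left, cw_right

# Experimental values quoted in this Supplement
for d in (1.3, 1.4, 1.58, 1.98):
    psi, a_l, a_r = tip_geometry(d)
    print(f"Delta/R = {d:.2f}: Psi = {np.degrees(psi):5.1f} deg, "
          f"left tangent = {np.degrees(a_l):5.1f} deg, "
          f"right tangent = {np.degrees(a_r):5.1f} deg")
```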
The bacteria swim in their surrounding fluid and, when their density increases, their mutual interaction increases the alignment of their orientations. The evolution of $\bm{r}_m(t)$ is given by: $$\dot{\bm{r}}_m(t) = v_0 \bm{d}(\theta_m)$$ where particles move at a constant speed $v_0$. We next consider that the orientation angle of the bacteria $\theta(t)$ is determined by three distinct effects: mutual alignment of bacteria due to collision, hydrodynamic processes, and random rotational diffusion. Swimming bacteria exert a force on the fluid and in turn induce fluid advection $\bm{v}(\bm{r},t) \propto \bm{p}$, where $\bm{p}$ is a local polar vector defined by $$\label{polar} \bm{p}(\bm{r}) = \langle\bm{d}(\theta_m)\rangle_{\bm{r}}.$$ Collective motion occurs at higher density and generates an active fluid flow $\bm{v}=V_0 \bm{p}$, with $V_0<v_0$. Such hydrodynamic flow rotates bacteria through a velocity-gradient effect. The orientational dynamics combining local alignment and rotation of bacteria by the flow is $$\label{orientation_all} \dot{\theta}_m = - \gamma_p \underbrace{\sum_{\vert \bm{r}_{mn} \vert < \epsilon} \sin(\theta_m - \theta_n)}_{\rm polar \: alignment} - \gamma_n \underbrace{\sum_{\vert \bm{r}_{mn} \vert < \epsilon} \sin2(\theta_m - \theta_n)}_{\rm nematic \: alignment} + \gamma_a \underbrace{[\bm{d}_m \times (\bm{d}_m \cdot \nabla )\bm{v}]\cdot \bm{e}_z}_{\rm flow-induced \: alignment} + \eta_m(t),$$ where the first and the second terms govern the polar[@beppu][@wioland1] and nematic[@Li] alignments of bacteria due to mutual collision, and the third term describes the rotation by the velocity gradient of the fluid flow, which generates a torque on the bacteria[@hamby][@Doi]. The fourth term $\eta_m(t)$ represents the random fluctuation of the rotational direction, which satisfies $\langle \eta_m(t) \rangle$=0, $\langle \eta_m(t) \eta_n(t') \rangle$=$2D \delta_{mn}\delta(t-t')$ where $\delta_{mn}$ and $\delta(t)$ are the Kronecker delta symbol and the Dirac delta function, respectively. $D$ is the amplitude of the random noise that affects the orientation of bacteria. As shown in Fig. 5 in the main text, the geometrical feature of the boundary shape induces the transition from FMV to AFMV. However, it is not clear whether this geometry-induced transition arises from the polar interaction (first term in Eq.(\[orientation\_all\])), from the nematic interaction (second term in Eq.(\[orientation\_all\])), both of which depend on the orientation of the bacteria, or from the flow-induced reorientation by fluid advection (third term in Eq.(\[orientation\_all\])). What we need to consider is the interaction of bacteria from the left or right circles in a doublet microwell defined by the geometric parameter $\Psi$. At the tip where the two circles intersect, bacteria swimming in the left and right wells interact and become mutually oriented. If bacteria swim in the counterclockwise direction along the wall in the left microwell, their heading angle is $\pi/2 - \Psi$ at the tip, while bacteria swimming clockwise in the right microwell have an angle $\pi/2 + \Psi$ at the tip (Fig.\[fig.s8\](a)). In the following, we analyze how each term of the orientational dynamics is affected by the geometric shape of the boundary. We assume the bacteria flowing from the left well to be aligned initially in CCW rotation, and hence the fluid flow follows the same rotation direction. In the case of the AFMV (FMV) pattern, we consider that the right well has CW (CCW) bacterial motion and flow rotation. To gain insight into the coupling, we assume it to be weak.
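To make the model concrete, a minimal particle-level sketch of Eq. (\[orientation\_all\]) is given below. It is only an illustration under simplifying assumptions: the flow-induced term is dropped (no imposed flow field), the domain is a periodic box rather than a doublet microwell, and all parameter values are placeholders rather than fitted quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not fitted to the experiments)
N, L_box = 200, 20.0          # number of particles, box size
v0, eps = 1.0, 1.0            # swimming speed, interaction radius
gamma_p, gamma_n, D = 0.5, 0.1, 0.12
dt, steps = 0.05, 400

r = rng.uniform(0, L_box, size=(N, 2))        # positions
theta = rng.uniform(-np.pi, np.pi, size=N)    # heading angles

for _ in range(steps):
    # pairwise displacements with periodic boundaries
    d = r[:, None, :] - r[None, :, :]
    d -= L_box * np.round(d / L_box)
    neighbors = np.linalg.norm(d, axis=-1) < eps
    np.fill_diagonal(neighbors, False)

    dtheta = theta[:, None] - theta[None, :]
    polar = (np.sin(dtheta) * neighbors).sum(axis=1)        # polar alignment term
    nematic = (np.sin(2 * dtheta) * neighbors).sum(axis=1)  # nematic alignment term

    noise = np.sqrt(2 * D * dt) * rng.standard_normal(N)
    theta += dt * (-gamma_p * polar - gamma_n * nematic) + noise
    r = (r + dt * v0 * np.column_stack((np.cos(theta), np.sin(theta)))) % L_box

# global polar order parameter |<d>|
print("polar order:", abs(np.exp(1j * theta).mean()))
```

This particle-level picture is only a reference point; the analysis below proceeds instead through mean-field approximations of each term.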
In this way, we estimate the interaction between the vortices residing in different microwell by comparison with the swimming manner of bacteria and fluid flow of independent ones. In addition, we assume the fluid flow of interacting vortices to be the linear superposition of the independent ones. In this way, we can consider how each term in Eq.  affects the orientation of bacteria. We first consider the effects of polar alignment due to mutual collision (including near field hydrodynamic interaction) and show that polar interaction can account for both (1) the preference of AFMV and FMV depending on geometrical parameter $\Psi$ and (2) the transition point consistent with experimental result. Because the effect of nematic alignment due to mutual collision cannot explain experimental results, the geometric dependence in the pairing order transition is decided by polar alignment. We then consider the flow-induced alignment of bacteria near the tip in order to explain the effect of edge current with rotational chirality. Finally, by extending the model of polar alignment with this chiral edge current, we propose geometric rule to account for both the selection of FMV pattern and the shift of the pairing order transition. ### The effect of polar alignment explains geometry-dependent FMV-AFMV transition At first, we consider the effect of polar alignment described by $$\label{orientation_polar} \dot{\theta}_m = - \gamma_p \sum_{\vert \bm{r}_{mn} \vert < \epsilon} \sin(\theta_m - \theta_n) + \eta_m(t).$$ where $\bm{r}_{mn}=\bm{x}_m-\bm{x}_n$ and $\epsilon$ is the effective radius of particle interaction. Polar interaction is involved in collective swimming in a highly dense suspension of bacteria, and a condition where the bacteria are oriented along the long axis with distinguishing the head and tail is favorable. Suppose the high-density of bacteria, the distribution of bacterial particles is homogeneous in space, and the density fluctuation is negligible. The aim of this theoretical analysis is to clarify whether polar interaction can explain the transition between FMV and AFMV patterns depending on the boundary shape $\Delta/R=2\cos\Psi$, and whether it also agrees with the transition point seen in experiment. Therefore, what we should consider is the collective motion under the spatial constraint where bacteria move along the walls of the left or right circles in a doublet microwell defined by geometric constant $\Psi$, and then collide mutually at the tip. We first consider the AFMV pattern at which two bacterial vortices have opposite rotational direction. If bacteria swim in counterclockwise (CCW) along the wall in the left microwell, its heading angle is $\pi/2 - \Psi$ at the tip, while bacteria swimming clockwise (CW) in the right microwell have the angle $\pi/2 + \Psi$ at the tip (Fig.\[fig.s8\](b)). By taking mean field approximation[@beppu][@vicsek][@peruani], the orientational dynamics Eq. is rewritten by $$\label{orientation_polar2} \dot{\theta}_m = - \gamma_p \bigl(\sin(\theta_m - \pi/2 + \Psi) + \sin (\theta_m - \pi/2 - \Psi)\bigr) + \eta_m(t).$$ ![**Polar alignment at the tip.** (a) Diagram of bacterial polar interaction showing anti-rotational vortex pairing at the tip. Bacteria moving along the left wall collide with bacteria moving from right circle at the tip. This pattern of polar interaction corresponds to AFMV pattern. (b) Diagram of bacterial polar interaction for co-rotational vortex pairing at the tip. 
The impact angle of the bacteria changes, and collective motion is in turn induced in the x-axis direction. This pattern eventually leads to the FMV pattern.[]{data-label="fig.s8"}](figureS8_3.jpg) Then the dynamics of bacterial orientation in AFMV order is reduced to $$\dot{\theta}_m(t) = 2 \gamma_p \cos \theta_m \cos \Psi+ \eta_m(t).$$ Once the orientational dynamics is obtained, one can derive the Fokker-Planck equation for the probability distribution of particle orientation, $P_{AFMV}(\theta,t;\Psi)$, in the AFMV pattern as $$\label{polarFP_afmv} \frac{\partial P_{AFMV}}{\partial t} = D \frac{\partial^2 P_{AFMV}}{\partial \theta^2} - 2\gamma_p \frac{\partial}{\partial \theta} \Bigl[(\cos\theta \cos \Psi) P_{AFMV} \Bigr].$$ As for the FMV pattern, the bacteria swimming in CCW rotation from the left circle have an angle $\pi/2 - \Psi$ at the tip while the bacteria entering the right circle have the heading angle $- \pi/2 + \Psi$ at the tip (Fig.\[fig.s8\](b)). To construct the FMV pattern from Eq., we take the same mean-field approximation and obtain the orientational dynamics of the FMV pattern at the tip $$\label{orientation_polar3} \dot{\theta}_m = - \gamma_p \bigl(\sin(\theta_m - \pi/2 + \Psi) + \sin (\theta_m + \pi/2 - \Psi)\bigr) + \eta_m(t).$$ Then the dynamics of bacterial orientation in FMV order is reduced to $$\dot{\theta}_m(t) = - 2 \gamma_p \sin \theta_m \sin \Psi+ \eta_m(t).$$ The Fokker-Planck equation for the probability distribution of particle orientation, $P_{FMV}(\theta,t;\Psi)$, in the FMV pattern is derived as $$\label{polarFP_fmv} \frac{\partial P_{FMV}}{\partial t} = D \frac{\partial^2 P_{FMV}}{\partial \theta^2} + 2\gamma_p \frac{\partial}{\partial \theta} \Bigl[(\sin\theta \sin \Psi) P_{FMV} \Bigr].$$ Because we focus on the static pattern of confined bacterial vortices, the left-hand side of both Eq. and Eq. is set to $\frac{\partial P}{\partial t} = 0$. By solving Eq. and Eq. , one finds the probability distributions $P_{AFMV}(\theta;\Psi)$ and $P_{FMV}(\theta;\Psi)$ at the steady state, $$\label{soln_afmv} P_{AFMV}(\theta;\Psi)=\frac{\exp \bigl(\frac{2\gamma_p}{D}\sin \theta \cos \Psi \bigr)}{2 \pi I_0\bigl(\frac{2\gamma_p}{D}\cos \Psi \bigr)}.$$ and $$\label{soln_fmv} P_{FMV}(\theta;\Psi)=\frac{\exp \bigl(\frac{2\gamma_p}{D}\cos \theta \sin \Psi \bigr)}{2 \pi I_0\bigl(\frac{2\gamma_p}{D}\sin \Psi \bigr)}.$$ where $I_0(\cdot)$ is the modified Bessel function of the first kind of order zero. In particular, the probability of the AFMV pattern is maximal for orientations pointing in the vertical direction ($\theta = \pi/2$), while that of the FMV pattern is maximal in the horizontal direction ($\theta = 0$). The transition point $\Psi_c$ is defined by the condition $P_{FMV}(\theta = 0; \Psi_c) = P_{AFMV}(\theta = \pi/2; \Psi_c)$, meaning that the AFMV and FMV patterns occur with equal probability. By comparing Eqs. and , the unique transition point is obtained as $$\label{geometricrule} \sin \Psi_c = \cos \Psi_c,$$ which sets the transition at $\Psi_c = \pi/4$. Thus in the collective motion of self-propelled particles without chiral edge current, the geometric condition for the transition from FMV to AFMV is $$\label{geometricrule2} \Delta_c/R = 2 \cos \Psi_c = \sqrt{2}.$$ In addition, for $\Psi \geq \pi/4$, the probability of the FMV pattern $P_{FMV}(\theta = 0; \Psi)$ is always larger than that of the AFMV pattern $P_{AFMV}(\theta = \pi/2; \Psi)$, indicating that the FMV pattern is favored at $\Delta/R\leq \sqrt{2}$. This geometric dependence is in excellent agreement with the experimental result (Fig.
5(c) in the main text and Fig.\[fig.s6\](a)), $\Delta_c/R \approx 1.3-1.4$. This analysis suggests that polar alignment is the primary effect that can explain both the formation of the FMV and AFMV patterns and their transition at $\Delta_c/R=1.3-1.4$ observed in the experiment.

### Nematic alignment cannot explain the geometry-dependent FMV-AFMV transition

We next consider the effect of nematic alignment due to mutual collisions $$\label{orientation_nematic} \dot{\theta}_m = - \gamma_n \sum_{\vert \bm{r}_{mn} \vert < \epsilon} \sin2(\theta_m - \theta_n) + \eta_m(t).$$ We assume the left well has CCW chirality; bacteria swim in the CCW direction, and fluid flows in the CCW direction as well. Bacteria from the left well are considered to have the orientation angle $\theta=\pi/2-\Psi$ at the tip. In the case of the AFMV pattern, we assume bacteria from the CW right well have the orientation angle $\theta=\pi/2+\Psi$. By taking a mean-field approximation of the particle orientation, the dynamics of $\theta$ at time $t$ is given by $$\label{orientation_nematic-AFMV} \dot{\theta}_m = - \frac{ \gamma_n }{2} \left\{\sin2(\theta_m - (\pi/2-\Psi)) + \sin2(\theta_m - (\pi/2+\Psi))\right\} + \eta_m(t)=\gamma_n\cos 2\Psi \sin 2\theta_m + \eta_m(t)$$ where $\gamma_n$ is the coefficient of nematic alignment. In the case of the FMV pattern, the CCW right well has the orientation angle $\theta=-\pi/2+\Psi$. In this case, the dynamics of $\theta$ at time $t$ is also given by $$\label{orientation_nematic-FMV} \dot{\theta}_m = - \frac{\gamma_n}{2} \left\{\sin2(\theta_m - (\pi/2-\Psi)) + \sin2(\theta_m - (-\pi/2+\Psi))\right\} + \eta_m(t)=\gamma_n\cos 2\Psi \sin 2\theta_m + \eta_m(t).$$ Comparing Eqs. and , both AFMV and FMV follow the same dynamics of $\theta_m$, and one finds that the interaction term is identical for $P_{AFMV}$ and $P_{FMV}$. FMV and AFMV patterns can be obtained with the same probability for all geometric parameters under nematic alignment of bacteria. In other words, the nematic interaction at the tip does not create any preference between AFMV and FMV. This analytical result is not consistent with the experimentally observed geometric transition between FMV and AFMV patterns, suggesting that nematic alignment is not needed in the orientational dynamics to describe our observations.

### Chiral edge current as advection-induced alignment

![**The alignment due to velocity gradient.** (a) A particle with orientation angle $\theta_m$ is affected by the velocity gradient of the fluid. The velocity of the fluid is denoted by $\bm{v}=v_f \Theta(x)\bm{e}_y$. The calculation leads to the alignment of bacteria with the newly imposed flow direction, $\pi/2$. (b) When the imposed flow direction is given by $\theta_f$, bacteria are aligned with the direction $\theta_f$.[]{data-label="fig.fa"}](flow-align.jpg) Bacteria experience a torque due to the velocity gradient, as given by $$\label{orientation_adv} \dot{\theta}_m=\gamma_a [\bm{d}_m \times (\bm{d}_m \cdot \nabla )\bm{v}]\cdot \bm{e}_z+ \eta_m(t).$$ Assuming a step-like velocity field proportional to the step function $\Theta(x)$, we have $\bm{v}=v_f \Theta(x)\bm{e}_y$ as shown in Fig.\[fig.fa\]. A bacterium heading into this step experiences a torque, which can be calculated as $$\label{orientation_rot} [\bm{d}_m \times (\bm{d}_m \cdot \nabla )\bm{v}]\cdot \bm{e}_z=v_f\delta(x)(\cos^2 \theta)$$ A bacterium whose orientation angle $\theta_m$ heads into the step of the velocity field has a normal velocity $v_0 \cos\theta$.
Thus, the total change of the orientation $\Delta \theta$ due to the step-like flow fields can be obtained by $$\label{orientation_change} \Delta \theta = \int dt \dot{\theta}= \frac{\gamma_a v_f }{v_0}\int_{-\infty}^{\infty} \cos \theta(x) dx = -\frac{\gamma_a v_f }{v_0} \sin\bigl(\theta_m-\frac{\pi}{2}\bigr).$$ Thus, dynamics of the bacterial orientation under the effect of step-like flow field can be described as $$\label{orientation-flow} \dot{\theta}_m=-\gamma_e \sin\bigl(\theta_m-\frac{\pi}{2}\bigr)+ \eta_m(t),$$ where $\gamma_e = \frac{ \gamma_a v_f }{v_0 \Delta t}$. When the direction of flow $\theta_f$ is arbitrary set to $\bm{v}=v_f (\cos \theta_f \bm{e}_x+\sin \theta_f \bm{e}_y)$, the same argument applies and the dynamics of the bacteria orientation is denoted as $$\label{orientation_change2} \dot{\theta}_m=- \gamma_e \sin(\theta_m-\theta_f)+ \eta_m(t).$$ where $\gamma_e$ is the coefficient for the alignment along the boundary edge ($\gamma_e\geq0$). ![**The edge current of bacteria and fluid streaming near boundary.** Diagram of interaction and superposition of bacteria and vortical flow explaining edge current in CCW direction. (left) Bacteria moving along the left wall collide with the vortical flow on the right microwell. (right) bacteria along the wall from the right collide with the vortical flow on the left microwell. Collective motion can be described with the sum of these two polar alignment of bacteria and vortical flow in co-rotational vortex pair. []{data-label="fig.s9"}](figureS9_3.jpg) As we noted earlier, collective motion occurs at a higher density and it generates the active fluid flow along the polarized direction as $\bm{v}=V_0 \bm{p}$, and we assume the flow-like vortical rotation can be present at the steady state, with velocity $\bm{v}(\bm{r},t)$. At the tip, bacteria collide with the advection flowing at velocity $\bm{v}(\bm{r},t)$ as shown in Fig.\[fig.s7\]. The direction of the flow advection is also directed to the tangential direction of the boundary wall, which is considered to be a stable vortical flow. In Figure 4 in main text, we found that chiral bacterial vortices show strong CCW bias in both bacterial swimming and fluid flow at boundary region. This fact allows one to propose the mathematical description of edge current of bacteria as follows: we consider the orientational dynamics under fluid advection for two interacting vortices with rotational flow in CCW direction. Since the bacterial population is constrained by a quasi-two dimensional space, the hydrodynamic flow is considered to be two-dimensional. The fluid flow in the microwell can be described as a superposition of the vortical flows present in the left and right circles of the doublet. In addition, we assume that bacteria do not collide with each other while they are aligned with the advective flow driven by collective motion that appears in a neighboring microwell. Then, it can be approximated that bacterial and flow alignment is limited to two combinations: the first is the alignment of bacteria moving in the left microwell along the vortical flow in the right microwell, and the second is the alignment case for bacteria moving in the right microwell with the vortical flow present in the left microwell (Fig. \[fig.s9\]). As for edge current in CCW direction (Fig. \[fig.s7\](c)), the probability distribution functions $P_{L}(\theta,t;\Psi)$ and $P_{R}(\theta,t;\Psi)$ representing the angular distribution of bacteria in left and right are obtained, respectively (Fig. \[fig.s9\](a)). 
If the bacteria at the left side have the heading angle $\pi/2 - \Psi$ while the bacteria at the right side have the angle $-\pi/2 + \Psi$, the orientation probability distribution of bacteria can be obtained based on the equation describing orientational dynamics. The Fokker-Planck equations for probability of heading angle are $$\label{advFP_left_fmv} \frac{\partial P_{L}}{\partial t} = D \frac{\partial^2 P_{L}}{\partial \theta^2} + \gamma_e \frac{\partial}{\partial \theta} \Bigl[\cos(\theta - \Psi)P_{L} \Bigr],$$ and $$\label{advFP_right_fmv} \frac{\partial P_{R}}{\partial t} = D \frac{\partial^2 P_{R}}{\partial \theta^2} + \gamma_e \frac{\partial}{\partial \theta} \Bigl[\cos(\theta + \Psi)P_{R} \Bigr].$$ At this time, the bacteria swimming from the left microwell have an angle $\pi/2 - \Psi$, and the orientation changes due to the vortical flow of the right microwell. The vortical flow at the tip is directed along $- \pi/2 + \Psi$. Thereby, the change in orientation is given by $- \gamma_e \cos (\theta - \Psi)$. On the other hand, the bacteria entering the right microwell have an angle $- \pi/2 + \Psi$ at the tip, but in this case, the orientation is changed by being released from the flow at the angle $\pi/2 - \Psi$. Thus, the change in orientation is represented by $- \gamma_e \cos (\theta + \Psi)$. Instead of finding the general solutions of Eqs. (\[advFP\_left\_fmv\]) and (\[advFP\_right\_fmv\]), we focus on the symmetry of probability distribution: because the shape of the boundary condition is highly symmetric at the tip, one can expect only negligible difference between $P_L(\theta,t)$ and $P_R(\theta,t)$ at this point. This geometric argument allows one to define the probability distribution of orientation for edge current by $P_{edge}=(P_{L}+P_{R})/2$. Hence, Eqs. and can be reduced to $$\begin{aligned} \label{advFP_fmv} \frac{\partial P_{edge}}{\partial t} &=& D \frac{\partial^2 P_{edge}}{\partial \theta^2} + \frac{\partial}{\partial \theta} \Bigl(\gamma_e \Bigl[\cos(\theta + \Psi)+\cos(\theta - \Psi)\Bigr]P_{edge}\Bigr) \nonumber \\ &=& D \frac{\partial^2 P_{edge}}{\partial \theta^2} + \frac{\partial}{\partial \theta} \Bigl(\Bigl[2\gamma_e \cos\theta \cos\Psi \Bigr]P_{edge}\Bigr),\end{aligned}$$ where $P_{edge}(\theta,t;\Psi)$ is the probability distribution of particles at the tip with a heading $\theta$ at the time $t$ under CCW edge current. Eq.(\[advFP\_fmv\]) tells that the reorientation by the edge current has geometric dependence $\cos\Psi$. In addition, it tends to trap bacterial particle nearby boundary and rotate bacteria in CW direction (due to negative sign in right hand side). Due to such reorientation, bacteria keep swimming nearby boundary edge in CCW direction. In next section, in order to account for how chiral bacterial vortex favors FMV order and move the transition point, we consider the effect of particle reorientation due to this edge current in addition to polar alignment. Geometric rule of bacterial vortex transition with chiral edge current ---------------------------------------------------------------------- In this section, we theoretically explain that interacting chiral vortices favors FMV pattern in a wide range of geometric condition, by adding the effect of chiral edge current in CCW direction in the orientational dynamics. From the above analysis, one can propose the orientational dynamics of bacterial particle swimming close to boundary under the edge current in CCW direction and polar alignment at the tip position. 
Because the chiral symmetry breaking originates from bacterial swimming on the surface, the edge current should continuously form in the CCW direction regardless of whether the vortex pairing pattern is FMV or AFMV. When CCW-chiral collective motion remains near the wall, the heading angle tends to be downward, resulting in an edge current that continues along the wall as an FMV pattern. By taking a mean-field approximation, the change of bacterial orientation under the edge current with CCW rotation is described by $\dot{\theta}_m = - \gamma_e ( \cos(\theta_m - \Psi) + \cos(\theta_m + \Psi)) = - 2 \gamma_e \cos\theta_m\cos\Psi$, which corresponds to the second term on the right-hand side of the Fokker-Planck equation Eq.. In addition, as shown in Eq., the polar interaction $\sum_{\vert \bm{r}_{mn} \vert < \epsilon} \sin(\theta_m - \theta_n)$ can be retained as the primary alignment effect in the orientational dynamics. We thus write the orientational dynamics with an edge current of CCW chirality by taking a minimal Vicsek-style model: $$\dot{\theta}_m = - \gamma_p \sum_{\vert \bm{r}_{mn} \vert < \epsilon} \sin(\theta_m - \theta_n) - 2 \gamma_e \cos\theta_m\cos\Psi + \eta_m(t).$$ Then, one can write the Fokker-Planck equations for the AFMV or FMV pattern with chiral edge current as follows: for AFMV order with edge current: $$\begin{aligned} & &\frac{\partial P_{AFMV}}{\partial t}= D \frac{\partial^2 P_{AFMV}}{\partial \theta^2} - \frac{\partial}{\partial \theta} \Bigl(\bigl[2\gamma_p \cos \theta \cos \Psi - 2\gamma_e \cos \theta \cos \Psi \bigr] P_{AFMV}\Bigr),\end{aligned}$$ and for FMV order with edge current: $$\begin{aligned} & &\frac{\partial P_{FMV}}{\partial t}= D \frac{\partial^2 P_{FMV}}{\partial \theta^2} + \frac{\partial}{\partial \theta} \Bigl(\bigl[2\gamma_p \sin \theta \sin \Psi + 2 \gamma_e \cos \theta \cos \Psi \bigr] P_{FMV}\Bigr).\end{aligned}$$ As is apparent from these equations, the distribution function of the bacterial collective motion is affected by the edge current, is determined by the ratio of its reorientation rate to the rotational diffusion constant $D$, and deviates from the geometric rule $\Delta_c/R = \sqrt{2}$ that characterizes the FMV-AFMV transition in the absence of a chiral edge current. The probabilities of realizing the FMV and AFMV patterns are obtained as $$P_{AFMV}(\theta;\Psi)=A(\Psi) \exp \biggl[\frac{2}{D}((\gamma_p- \gamma_e)\sin\theta \cos\Psi)\biggr]$$ and $$P_{FMV}(\theta;\Psi)=B(\Psi) \exp \biggl[\frac{2}{D}(\gamma_p \cos\theta\sin\Psi - \gamma_e \sin\theta\cos\Psi)\biggr]$$ where $A(\Psi)$ and $B(\Psi)$ are normalization factors depending on $\Psi$. Evaluating $P_{FMV}(\theta;\Psi)$ at $\theta = 0$ and $P_{AFMV}(\theta;\Psi)$ at $\theta = \pi/2$, we can write $$P_{AFMV}(\theta=\pi/2;\Psi)=A(\Psi)\exp\biggl[\frac{2}{D}\bigl((\gamma_p - \gamma_e)\cos\Psi\bigr)\biggr]$$ and $$P_{FMV}(\theta=0;\Psi)=B(\Psi)\exp\biggl[\frac{2}{D}(\gamma_p \sin\Psi)\biggr].$$ $A(\Psi)$ and $B(\Psi)$ are not equal unless $\gamma_e = 0$. The transition point, defined by $P_{FMV}(\theta = 0; \Psi) = P_{AFMV}(\theta = \pi/2; \Psi)$, is then given by $$\label{transition_chiral} \log\biggl[\frac{A(\Psi)}{B(\Psi)}\biggr] + \frac{2(\gamma_p -\gamma_e)}{D} \cos\Psi = \frac{2\gamma_p}{D}\sin\Psi .$$ ![**Phase diagram of vortex patterns.** (a) Illustration of the chiral bias of vortex pairing by edge current.
In the configuration $\Psi =\pi/4$, that is $\Delta_c/R =2\cos(\pi/4)=\sqrt{2}$, CW and CCW rotations are equiprobable in both overlapping microwells in the absence of chirality effect (left). However, since the edge current creates a flow around the tip, the FMV pattern with CCW rotation is maintained, and the transition to the AFMV pattern shifts to the point of $\Delta_c/R\geq\sqrt{2}$ (right). (b) Phase diagram of FMV, chirality-induced FMV and AFMV is shown with geometric parameter $\Delta/R$ and the coefficient of chiral edge current $\gamma_e$. The dotted line indicates the transition point of chiral bacterial vortices and the horizontal broken line is the original transition point $\Delta/R=\sqrt{2}$ without chiral edge current.[]{data-label="fig.s10"}](figureS10_2.jpg) By calculating $\Psi$ satisfying the Eq. (\[transition\_chiral\]) numerically, the transition point to AFMV pattern can be obtained. FIG. \[fig.s10\] is the phase diagram showing the transition point $\Delta_c/R$ with the chiral edge current. If bacterial vortex does not have such edge current, the transition point is $\Delta_c/R=\sqrt{2}$. In other word, the chiral edge current in bacterial vortex shifts the transition from FMV to AFMV at a point deviating from $\sqrt{2}$. If the transition from FMV pattern occurs at $\Delta_c/R > \sqrt{2}$ at non-zero $\gamma_e$, that is classified as the chirality-induced FMV. Interestingly, one can find that the shift of transition point increases in proportion to the coefficient of chiral edge current $\gamma_e$ in the range where chirality is small. In order to understand why such a linear relation holds, Eq. (\[transition\_chiral\]) was solved and the approximate solution of the transition point from chiral-FMV to AFMV was determined. To analyze the shift of transition point, we suppose the geometric parameter $\Psi$ that is close to $\cos\Psi=\sin\Psi=1/\sqrt{2}$ because this condition allows one to get $A \simeq B$ and then the first term in Eq. can approximate $\log\bigl[\frac{A(\Psi)}{B(\Psi)}\bigr] \simeq 0$. By solving Eq.(\[transition\_chiral\]) with $\sin^2\Psi + \cos^2\Psi =1$, the transition point $\Delta_c/R$ is $$\label{chiral_rule} \frac{\Delta_c}{R} = \frac{2}{\sqrt{1 + \bigl(1 - \frac{ \gamma_e }{\gamma_p}\bigr)^2}}$$ where $\Delta_c/R$ is in the range of $0 \leq \Delta/R \leq 2$ by definition. When Eq. (\[chiral\_rule\]) is linearized for $ \gamma_e /\gamma_p\ll1$, the transition point is rewritten as $$\label{chiral_rule2} \frac{\Delta_c}{R} \approx \sqrt{2} \Bigl(1 + \frac{ \gamma_e }{2\gamma_p} \Bigr).$$ The obtained geometric relation Eq.(\[chiral\_rule2\]) means that the term related to chirality is added as a linear sum to the original expression of $\Delta_c/R=\sqrt{2}$ obtained when there is no chiral edge current. The shift of transition point is determined by the ratio between the effect of polar alignment $\gamma_p$ and the effect of edge current $ \gamma_e $. In addition, the transition point of chirality-induced FMV to AFMV is always larger than $\sqrt{2}$, suggesting that the edge current extends FMV order to a broader range of geometric conditions, in agreement with our experiments. [9]{} R. Jain and K.L. Sebastian, Journal of Chemical Physics **146**, 214102 (2017). M. Doi and S. F. Edwards, Theory of Polymer Dynamics (Oxford University Press.) p. 293 (1986). F. Peruani, A. Deutsch, M. Bär. Euro. Phys. J. 157, 111-122 (2008).
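As a closing numerical illustration of the theoretical analysis, the sketch below solves the transition condition Eq. (\[transition\_chiral\]) directly, computing the normalizations $A(\Psi)$ and $B(\Psi)$ by quadrature, and compares the result with Eq. (\[chiral\_rule\]) and its linearization Eq. (\[chiral\_rule2\]). The values $D=0.12$ and $\gamma_p=0.47$ are those estimated earlier in this Supplement; the values of $\gamma_e$ are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

D, gamma_p = 0.12, 0.47   # estimated earlier in this Supplement

def p_afmv(theta, psi, ge):
    return np.exp((2 / D) * (gamma_p - ge) * np.sin(theta) * np.cos(psi))

def p_fmv(theta, psi, ge):
    # the sign of the gamma_e term does not affect the normalization or the value at theta = 0
    return np.exp((2 / D) * (gamma_p * np.cos(theta) * np.sin(psi)
                             - ge * np.sin(theta) * np.cos(psi)))

def balance(psi, ge):
    """P_FMV(0; Psi) - P_AFMV(pi/2; Psi), with both distributions normalized."""
    A = 1.0 / quad(p_afmv, 0, 2 * np.pi, args=(psi, ge))[0]
    B = 1.0 / quad(p_fmv, 0, 2 * np.pi, args=(psi, ge))[0]
    return B * p_fmv(0.0, psi, ge) - A * p_afmv(np.pi / 2, psi, ge)

for ge in (0.0, 0.05, 0.1, 0.2):
    psi_c = brentq(balance, 1e-3, np.pi / 2 - 1e-3, args=(ge,))
    exact = 2 * np.cos(psi_c)                            # Delta_c / R at the transition
    approx = 2 / np.sqrt(1 + (1 - ge / gamma_p) ** 2)    # Eq. (chiral_rule)
    linear = np.sqrt(2) * (1 + ge / (2 * gamma_p))       # Eq. (chiral_rule2)
    print(f"gamma_e = {ge:.2f}: Delta_c/R = {exact:.3f} "
          f"(approx {approx:.3f}, linearized {linear:.3f})")
```

For $\gamma_e = 0$ this recovers $\Delta_c/R = \sqrt{2}$, and the shift grows roughly linearly for small $\gamma_e/\gamma_p$, as stated above.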
--- abstract: | We consider a Riemannian cylinder $\Omega$ endowed with a closed potential $1$-form $A$ and study the magnetic Laplacian $\Delta_A$ with magnetic Neumann boundary conditions associated with those data. We establish a sharp lower bound for the first eigenvalue and show that the equality characterizes the situation where the metric is a product. We then look at the case of a planar domain bounded by two closed curves and obtain an explicit lower bound in terms of the geometry of the domain. We finally discuss sharpness of this last estimate. *2000 Mathematics Subject Classification. 58J50, 35P15.* *Key words and phrases. Magnetic Laplacian, Eigenvalues, Upper and lower bounds, Zero magnetic field* author: - Bruno Colbois and Alessandro Savo bibliography: - 'CS.bib' title: 'Lower bounds for the first eigenvalue of the magnetic Laplacian\' --- Introduction {#intro} ============ Let $(\Omega,g)$ be a compact Riemannian manifold with boundary. Consider the trivial complex line bundle $\Omega\times\bf C$ over $\Omega$; its space of sections can be identified with $C^{\infty}(\Omega,\bf C)$, the space of smooth complex valued functions on $\Omega$. Given a smooth real 1-form $A$ on $\Omega$ we define a connection $\nabla^A$ on $C^{\infty}(\Omega,\bf C)$ as follows: $$\label{connection} \nabla^A_Xu=\nabla_Xu-iA(X)u$$ for all vector fields $X$ on $\Omega$ and for all $u\in C^{\infty}(\Omega,\bf C)$; here $\nabla$ is the Levi-Civita connection associated to the metric $g$ of $\Omega$. The operator $$\label{magnetic laplacian} \Delta_A=(\nabla^A)^{\star}\nabla ^A$$ is called the [*magnetic Laplacian*]{} associated to the magnetic potential $A$, and the smooth two-form $$B=dA$$ is the associated [*magnetic field*]{}. We will consider Neumann magnetic conditions, that is: $$\label{mneumann} \nabla^A_Nu=0\quad\text{on}\quad{\partial}\Omega,$$ where $N$ denotes the inner unit normal. Then, it is well-known that $\Delta_A$ is self-adjoint, and admits a discrete spectrum $$0\le {\lambda}_1(\Delta_A)\le {\lambda}_2(\Delta_A) \le ... \to \infty.$$ The above is a particular case of a more general situation, where $E\to M$ is a complex line bundle with a hermitian connection $\nabla^E$, and where the magnetic Laplacian is defined as $\Delta_E=(\nabla^E)^{\star}\nabla ^E$. The spectrum of the magnetic Laplacian is very much studied in analysis (see for example [@BDP] and the references therein) and in relation with physics. For *Dirichlet boundary conditions*, lower estimates of its fundamental tone have been worked out, in particular, when $\Omega$ is a planar domain and $B$ is the constant magnetic field; that is, when the function $\star B$ is constant on $\Omega$ (see for example a Faber-Krahn type inequality in [@Er1] and the recent [@LS] and the references therein, also for Neumann boundary conditions). The case when the potential $A$ is a closed $1$-form is particularly interesting from the physical point of view (Aharonov-Bohm effect), and also from the geometric point of view. For Dirichlet boundary conditions, there is a series of papers for domains with a pole, when the pole approaches the boundary (see [@AFNN; @NNT] and the references therein). Last but not least, there is an Aharonov-Bohm approach to the question of nodal and minimal partitions, see chapter 8 of [@BH]. 
For *Neumann boundary conditions*, we refer in particular to the paper [@HHHO], where the authors study the multiplicity and the nodal sets corresponding to the ground state ${\lambda}_1$ for non-simply connected planar domains with harmonic potential (see the discussion below). Let us also mention the recent article [@LLPP] (chapter 7) where the authors establish a *Cheeger type inequality* for ${\lambda}_1$; that is, they find a lower bound for ${\lambda}_1(\Delta_A)$ in terms of the geometry of $\Omega$ and the potential $A$. In the preprint [@ELMP], the authors approach the problem via the Bochner method. Finally, in a more general context (see [@BBC]) the authors establish a lower bound for ${\lambda}_1(\Delta_A)$ in terms of the *holonomy* of the vector bundle on which $\Delta_A$ acts. In both cases, implicitly, the flux of the potential $A$ plays a crucial role. [$\bullet\quad$]{}From now on we will denote by $\lambda_1(\Omega,A)$ the first eigenvalue of $\Delta_A$ on $(\Omega,g)$. Main lower bound ---------------- Our lower bound is partly inspired by the results in [@HHHO] for plane domains. First, recall that if $c$ is a closed parametrized curve (a loop), the quantity: $$\Phi^A_c=\dfrac{1}{2\pi}\oint_{c}A$$ is called the [*flux*]{} of $A$ across $c$. (We assume that $c$ is travelled once, and we will not specify the orientation of the loop, so that the flux will only be defined up to sign: this will not affect any of the statements, definitions or results which we will prove in this paper). Let then $\Omega$ be a fixed plane domain with one hole, and let $\Phi^A$ be the flux of the harmonic potential $A$ across the inner boundary curve. In Theorem 1.1 of [@HHHO] it is first remarked that $\lambda_1(\Omega, A)$ is positive if and only if $\Phi^A$ is not an integer (but see the precise statement in Section 2.1 below). Then, it is shown that $\lambda_1(\Omega,A)$ is maximal precisely when $\Phi^A$ is congruent to $\frac 12$ modulo integers. The proof relies on a delicate argument involving the nodal line of a first eigenfunction; in particular, the conclusion does not follow from a specific comparison argument, or from an explicit lower bound. In this paper we give a geometric lower bound of $\lambda_1(\Omega,A)$ when $\Omega$ is, more generally, a [*Riemannian cylinder*]{}, that is, a domain $(\Omega,g)$ diffeomorphic to $[0,1]\times{{\bf S}^{1}}$ endowed with a Riemannian metric $g$, and when $A$ is a closed potential $1$-form : hence, the magnetic field $B$ associated to $A$ is equal to $0$. The lower bound will depend on the geometry of $\Omega$ and, in an explicit way, on the flux of the potential $A$. Let us write $\partial\Omega=\Sigma_1\cup\Sigma_2$ where $$\Sigma_1=\{0\}\times{{\bf S}^{1}}, \quad \Sigma_2=\{1\}\times{{\bf S}^{1}}.$$ We will need to foliate the cylinder by the (regular) level curves of a smooth function $\psi$ and then we introduce the following family of functions. $$\begin{aligned}{\cal F}_{\Omega}=\{\psi:\Omega\to{{\bf R}}: \quad &\text{\it $\psi$ is constant on each boundary component}\\ &\text{\it and has no critical points inside $\Omega$}\} \end{aligned}$$ As $\Omega$ is a cylinder, we see that ${\cal F}_{\Omega}$ is not empty. 
If $\psi\in{\cal F}_{\Omega}$, we set: $$K=K_{\Omega,\psi}=\dfrac{\sup_{\Omega}{\lvert{\nabla\psi}\rvert}}{\inf_{\Omega}{\lvert{\nabla\psi}\rvert}}.$$ It is clear that, in the definition of the constant $K$, we can assume that the range of $\psi$ is the interval $[0,1]$, and that $\psi=0$ on $\Sigma_1$ and $\psi=1$ on $\Sigma_2$. Note that the level curves of the function $\psi$ are all smooth, closed and connected; moreover they are all homotopic to each other so that the flux of a closed $1$-form $A$ across any of them is the same, and will be denoted by $\Phi^A$. We say, briefly, that $\Omega$ is [*$K$-foliated by the level curves of $\psi$.*]{} We also denote by $d(\Phi^A,{\bf Z})$ the minimal distance between $\Phi^A$ and the set of integer $\bf Z$: $$d(\Phi^A,{\bf Z})^2=\min\Big\{(\Phi^A-k)^2: k\in\bf Z\Big\}.$$ Finally, we say that [*$\Omega$ is a Riemannian product*]{} if it is isometric to $[0,a]\times{{\bf S}^{1}}(R)$ for suitable positive constants $a,R$. \[main3\] a\) Let $(\Omega,g)$ be a Riemannian cylinder, and let $A$ be a closed $1$-form on $\Omega$. Assume that $\Omega$ is $K$-foliated by the level curves of the smooth function $\psi\in{\cal F}_{\Omega}$. Then: $$\label{cylinder} \lambda_1(\Omega,A)\geq\dfrac{4\pi^2}{KL^2}\cdot d(\Phi^A,{\bf Z})^2,$$ where $L$ is the maximum length of a level curve of $\psi$ and $\Phi^A$ is the flux of $A$ across any of the boundary components of $\Omega$. b\) Equality holds if and only if the cylinder $\Omega$ is a Riemannian product. [$\bullet\quad$]{}It is clear that we can also state the lower bound as follows: $$\lambda_1(\Omega,A)\geq\dfrac{4\pi^2}{\tilde K_{\Omega}}\cdot d(\Phi^A,{\bf Z})^2,$$ where $\tilde K_{\Omega}$ is an invariant depending only $\Omega$: $$\tilde K_{\Omega}=\inf_{\psi\in{\cal F}_{\Omega}}K_{\Omega,\psi}L_{\psi}^2\quad\text{and}\quad L_{\psi}=\sup_{r\in {\rm range}(\psi)}{\lvert{\psi^{-1}(r)}\rvert}.$$ It is is not always easy to estimate $K$. In Section \[estimate K\] we will show how to estimate $K$ in terms of the metric tensor. Note that $K\geq 1$; we will see that in many interesting situations (for example, for revolution cylinders, or for smooth embedded tubes around a closed curve) one has in fact $K=1$. Doubly connected planar domains ------------------------------- We now estimate the constant $K$ above when $\Omega$ is an annular region in the plane, bounded by the inner curve $\Sigma_1$ and the outer curve $\Sigma_2$. [$\bullet\quad$]{}We assume that the inner curve $\Sigma_1$ is convex. From each point $x\in\Sigma_1$, consider the ray $\gamma_x(t)=x+tN_x$, where $N_x$ is the exterior normal to $\Sigma_1$ at $x$ and $t\geq 0$. Let $Q(x)$ be the first intersection of $\gamma_x(t)$ with $\Sigma_2$, and let $$r(x)=d(x,Q(x)).$$ We say that $\Omega$ is [*starlike with respect to $\Sigma_1$*]{} if the map $x\to Q(x)$ is a bijection between $\Sigma_1$ and $\Sigma_2$; equivalently, if given any point $y\in\Sigma_2$, the geodesic segment which minimizes distance from $y$ to $\Sigma_1$ is entirely contained in $\Omega$. For $x\in\Sigma_1$, we denote by $\theta_x$ the angle between $\gamma'_x$ and the outer normal to $\Sigma_2$ at the point $Q(x)$, and we let $$m\doteq\min_{x\in\Sigma_1}{\cos\theta_x}.$$ Note that as $\Omega$ is starlike w.r.t. $\Sigma_1$, one has $\theta_x\in [0,\frac{\pi}2]$ and then $m\geq 0$. [$\bullet\quad$]{}To have a positive lower bound, we will assume that $m>0$ (that is, $\Omega$ is [*strictly*]{} starlike w.r.t. $\Sigma_1$). 
We also define $$\label{annulus} \twosystem {\beta=\min\{r(x): x\in\Sigma_1\}} {B=\max\{r(x): x\in\Sigma_1\}}$$ We then have the following result. \[main2\] Let $\Omega$ be an annulus in ${{\bf R}^{2}}$, which is strictly-starlike with respect to its inner (convex) boundary component $\Sigma_1$. Assume that $A$ is a closed potential having flux $\Phi^A$ around $\Sigma_1$. Then: $$\lambda_1(\Omega,A)\geq \dfrac{4\pi^2}{L^2} \dfrac{\beta m}{B} d(\Phi^A,{\bf Z})^2$$ where $\beta$ and $B$ are as in , and $L$ is the length of the outer boundary component. If $\Sigma_2$ is also convex, then $m\geq \beta/B$ and the lower bound takes the form: $$\lambda_1(\Omega,A)\geq \dfrac{4\pi^2}{L^2} \dfrac{\beta^2}{B^2} d(\Phi^A,{\bf Z})^2.$$ In section \[sharpness\], we will explain why we need to control $\dfrac{\beta}{B}$, $L$, and why we need to impose the starlike condition. If $\beta=B$ and $\Sigma_2$ is the circle of length $L$ we get the estimate $$\lambda_1(\Omega,A)\geq \dfrac{4\pi^2}{L^2} d(\Phi^A,{\bf Z})^2$$ which is the first eigenvalue of the magnetic Laplacian on the circle with potential $A$ (see section \[riemannian circle\]). If $\Sigma_2$ and $\Sigma_1$ are two concentric circles of respective lengths $L$ and $L_{\epsilon} \to L$, the domain is a thin annulus with $\lambda_1 \to \dfrac{4\pi^2}{L^2} d(\Phi^A,{\bf Z})^2$ which shows that our estimate is sharp. Our aim is to use these estimates on cylinders as a basis stone in order to study the same type of questions on compact surfaces of higher genus. Proof of the main theorem ========================= Preliminary facts and notation {#preliminary} ------------------------------ First, we recall the variational definition of the spectrum. Let $\Omega$ be a compact manifold with boundary and $\Delta_A$ the magnetic Laplacian with Neumann boundary conditions. One verifies that $$\int_{\Omega}(\Delta_Au)\bar u=\int_{\Omega}{\lvert{\nabla^Au}\rvert}^2,$$ and the associated quadratic form is then $$Q_A(u)=\int_{\Omega}{\lvert{\nabla^Au}\rvert}^2.$$ The usual variational characterization gives: $$\lambda_1(\Omega,A)= \min\Big\{ \frac{Q_A(u)}{\Vert u\Vert^2}:\ u\in C^{1}(\Omega,\mathbb C) / \{0\}\Big\}$$ The following proposition (which is well-known) expresses the [*gauge invariance*]{} of the spectrum of the magnetic Laplacian. \[basic facts\] a The spectrum of $\Delta_A$ is equal to the spectrum of $\Delta_{A+d\phi}$ for all smooth real valued functions $\phi$; in particular, when $A$ is exact, the spectrum of $\Delta_A$ reduces to that of the classical Laplace-Beltrami operator acting on functions (with Neumann boundary conditions if ${\partial}\Omega$ is not empty). b If $A$ is a closed $1$-form, then $A$ is gauge equivalent to a unique (harmonic) $1$-form $\tilde A$ satisfying $$\twosystem {d\tilde A=\delta\tilde A=0\quad\text{on}\quad \Omega} {\tilde A(N)=0\quad\text{on}\quad {\partial}\Omega}$$ The form $\tilde A$ is often called the [Coulomb gauge]{} of $A$. Note that $\tilde A$ is the harmonic representative of $A$ for the absolute boundary conditions. a This comes from the fact that $ \Delta_A e^{-i\phi}=e^{-i\phi} \Delta_{A+d\phi} $ hence $\Delta_A$ and $\Delta_{A+d\phi}$ are unitarily equivalent. b Consider a solution $\phi$ of the problem: $$\twosystem{\Delta\phi=\delta A \quad\text{on}\quad \Omega,} {{\dfrac{{\partial}\phi}{\bdN}}=A(N) \quad\text{on}\quad {\partial}\Omega.}$$ Then one checks that $\tilde A=A-d\phi$ is a Coulomb gauge of $A$. As $\phi$ is unique up to an additive constant, $d\phi$, hence $\tilde A$, is unique. 
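Gauge invariance and the role of the flux can also be observed numerically. The sketch below is only an illustration (it is not part of the paper): it discretizes $\Delta_A$ on a circle of length $L$ with Peierls-type link phases, checks that the spectrum is unchanged when $A$ is replaced by $A+d\phi$, and compares the lowest eigenvalue with $\frac{4\pi^2}{L^2}d(\Phi^A,{\bf Z})^2$.

```python
import numpy as np

def magnetic_laplacian_circle(link_integrals, h):
    """Discrete magnetic Laplacian on a circle, built from the link integrals
    alpha_j of the potential A over each grid edge (Peierls phases exp(-i alpha_j))."""
    n = len(link_integrals)
    H = np.zeros((n, n), dtype=complex)
    for j in range(n):
        k = (j + 1) % n
        H[j, j] += 1.0 / h**2
        H[k, k] += 1.0 / h**2
        H[j, k] -= np.exp(-1j * link_integrals[j]) / h**2
        H[k, j] -= np.exp(+1j * link_integrals[j]) / h**2
    return H

L, n, flux = 2 * np.pi, 400, 0.3
h = L / n
t = np.arange(n) * h

alpha = np.full(n, 2 * np.pi * flux / n)    # harmonic potential A with flux 0.3
phi = np.sin(3 * t)                         # an arbitrary gauge function phi
dphi = np.roll(phi, -1) - phi               # exact link integrals of d(phi)

ev = np.linalg.eigvalsh(magnetic_laplacian_circle(alpha, h))
ev_gauged = np.linalg.eigvalsh(magnetic_laplacian_circle(alpha + dphi, h))

d_flux = abs(flux - round(flux))            # d(Phi^A, Z)
print("lambda_1 with A         :", round(ev[0], 5))
print("lambda_1 with A + d(phi):", round(ev_gauged[0], 5))
print("4 pi^2/L^2 * d(Phi,Z)^2 :", round((2 * np.pi / L) ** 2 * d_flux ** 2, 5))
```

The two computed spectra coincide exactly (the two matrices are unitarily conjugate), and the lowest eigenvalue depends only on the flux modulo integers, in agreement with the discussion that follows.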
We now focus on the first eigenvalue. Clearly, if $A=0$, then $\lambda_1(\Omega,A)=0$ simply because $\Delta_A$ reduces to the usual Laplacian, which has first eigenvalue equal to zero and first eigenspace spanned by the constant functions. If $A$ is exact, then $\Delta_{A}$ is unitarily equivalent to $\Delta$, hence, again, $\lambda_1(\Omega,A)=0$. In fact one checks easily from the definition of the connection that, if $A=d\phi$ for some real-valued function $\phi$ then $ \nabla^{A}e^{i\phi}=0, $ which means that $u=e^{i\phi}$ is $\nabla^A$-parallel hence $\Delta_A$-harmonic. On the other hand, if the magnetic field $B=dA$ is non-zero then $\lambda_1(\Omega,A)>0$. It then remains to examine the case when $A$ is closed but not exact. The situation was clarified in [@Sh] for closed manifolds and in [@HHHO] for Neumann boundary conditions. \[shikegawa\]The following statements are equivalent: a\) $\lambda_1(\Omega,A)=0$; b\) $dA=0$ and $\Phi^A_c\in\bf Z$ for any closed curve $c$ in $\Omega$. Thus, the first eigenvalue vanishes if and only if $A$ is a closed form whose flux around every closed curve is an integer; equivalently, if $A$ has non-integral flux around at least one closed loop, then $\lambda_1(\Omega,A)>0$. Proof of the lower bound ------------------------ From now on we assume that $\Omega$ is a Riemannian cylinder. Fix a first eigenfunction $u$ associated to $\lambda_1(\Omega, A)$ and fix a level curve $$\Sigma_r=\{\psi=r\}, \quad\text{where $r\in [0,1]$.}$$ As $\psi$ has no critical points, $\Sigma_r$ is isometric to ${{\bf S}^{1}}(\frac{L_r}{2\pi})$, where $L_r$ is the length of $\Sigma_r$. The restriction of $A$ to $\Sigma_r$ is a closed $1$-form denoted by $\tilde A$; we use the restriction of $u$ to $\Sigma_r$ as a test-function for the first eigenvalue $\lambda_1(\Sigma_r,\tilde A)$ and obtain: $$\label{level} \lambda_1(\Sigma_r,\tilde A)\int_{\Sigma_r}{\lvert{u}\rvert}^2\leq\int_{\Sigma_r}{\lvert{\nabla^{\tilde A}u}\rvert}^2.$$ By the estimate on the eigenvalues of a circle done in Section \[sectioncircle\] below we see : $$\lambda_1(\Sigma_r,\tilde A)=\dfrac{4\pi^2}{L_r^2}d(\Phi^{\tilde A},{\bf Z})^2,$$ where $\Phi^{\tilde A}$ is the flux of $\tilde A$ across $\Sigma_r$. Now note that $\Phi^{\tilde A}=\Phi^{A}$, because $\tilde A$ is the restriction of $A$ to $\Sigma_r$; moreover $L_r\leq L$ by the definition of $L$. Therefore: $$\label{llower} \lambda_1(\Sigma_r,\tilde A)\geq \dfrac{4\pi^2}{L^2}d(\Phi^{ A},{\bf Z})^2$$ for all $r$. Let $X$ be a unit vector tangent to $\Sigma_r$. Then: $$\begin{aligned} \nabla^{\tilde A}_{X}u&=\nabla_{X}u-i\tilde A(X)u\\ &=\nabla_{X}u-iA(X)u\\ &=\nabla^A_{X}u. \end{aligned}$$ The consequence is that: $$\label{energy} {\lvert{\nabla^{\tilde A}u}\rvert}^2={\lvert{\nabla^{\tilde A}_{X}u}\rvert}^2={\lvert{\nabla^{A}_{X}u}\rvert}^2\leq {\lvert{\nabla^{A}u}\rvert}^2.$$ [$\bullet\quad$]{}[*Note that equality holds in iff $\nabla^A_{N}u=0$ where $N$ is a unit vector normal to the level curve $\Sigma_r$ (we could take $N=\nabla\psi/{\lvert{\nabla\psi}\rvert}$).*]{} For any fixed level curve $\Sigma_r=\{\psi=r\}$ we then have, taking into account , and : $$\dfrac{4\pi^2}{L^2}d(\Phi^{ A},{\bf Z})^2\int_{\psi=r}{\lvert{u}\rvert}^2\leq \int_{\psi=r}{\lvert{\nabla^Au}\rvert}^2.$$ Assume that $B_1\leq{\lvert{\nabla\psi}\rvert}\leq B_2$ for positive constants $B_1,B_2$. 
Then the above inequality implies: $$\dfrac{4\pi^2}{L^2}d(\Phi^{ A},{\bf Z})^2\cdot B_1\int_{\psi=r}\dfrac{{\lvert{u}\rvert}^2}{{\lvert{\nabla\psi}\rvert}}\leq B_2\int_{\psi=r}\dfrac{{\lvert{\nabla^Au}\rvert}^2}{{\lvert{\nabla\psi}\rvert}}.$$ We now integrate both sides from $r=0$ to $r=1$ and use the coarea formula. Conclude that $$\dfrac{4\pi^2}{L^2}d(\Phi^{ A},{\bf Z})^2\cdot B_1\int_{\Omega}{{\lvert{u}\rvert}^2}\leq B_2\int_{\Omega}{\lvert{\nabla^Au}\rvert}^2.$$ As $u$ is a first eigenfunction, one has: $$\int_{\Omega}{\lvert{\nabla^Au}\rvert}^2=\lambda_1(\Omega,A)\int_{\Omega}{\lvert{u}\rvert}^2.$$ Recalling that $K=\frac{B_2}{B_1}$ we finally obtain the estimate . Proof of the equality case {#equalitycase} -------------------------- If the cylinder $\Omega$ is a Riemannian product then it is obvious that we can take $K=1$ and then we have equality by Proposition \[cyl\] below. Now assume that we do have equality: we have to show that $\Omega$ is a Riemannian product. Going back to the proof, we must have the following facts. [**F1.**]{} [*All level curves of $\psi$ have the same length $L$*]{}. [**F2.**]{} [*${\lvert{\nabla\psi}\rvert}$ must be constant and, by renormalization, we can assume that it is everywhere equal to $1$.* ]{}Then, $\psi:\Omega\to [0,a]$ for some $a>0$ and we set $$N\doteq\nabla\psi.$$ [**F3.**]{} [*The eigenfunction $u$ on $\Omega$ restricts to an eigenfunction of the magnetic Laplacian of each level set $\Sigma_r=\{\psi=r\}$, with potential given by the restriction of $A$ to $\Sigma_r$.*]{} [**F4.**]{} [*One has $\nabla^A_Nu=0$ identically on $\Omega$.* ]{} ### First step: description of the metric $\Omega$ is isometric to the product $[0,a]\times {{\bf S}^{1}}(\frac{L}{2\pi})$ with metric $$\label{metric} g={{\begin{pmatrix}}{1&0\\}{0&\theta^2(r,t)\\}{\end{pmatrix}}}, \quad (r,t)\in [0,a]\times [0,L]$$ where $\theta(r,t)$ is positive and periodic of period $L$ in the variable $t$. Moreover $\theta(0,t)=1$ for all $t$. We first show that the integral curves of $N$ are geodesics; for this it is enough to show that $ \nabla_NN=0 $ on $\Omega$. Let $e_1(x)$ be a vector tangent to the level curve of $\psi$ passing through $x$. Then, we obtain a smooth vector field $e_1$ which, together with $N$, forms a global orthonormal frame. Now $${\langle{\nabla_NN},{N}\rangle}=\dfrac 12 N\cdot{\langle{N},{N}\rangle}=0.$$ On the other hand, as the Hessian is a symmetric tensor: $${\langle{\nabla_NN},{e_1}\rangle}=\nabla^2\psi(N,e_1)=\nabla^2\psi(e_1,N)={\langle{\nabla_{e_1}N},{N}\rangle}=\dfrac 12e_1\cdot{\langle{N},{N}\rangle}=0.$$ Hence $\nabla_NN=0$ as asserted. As each integral curve of $N=\nabla\psi$ is a geodesic meeting $\Sigma_1$ orthogonally, we see that $\psi$ is actually the distance function to $\Sigma_1$. We introduce coordinates on $\Omega$ as follows. For a fixed point $p\in\Omega$ consider the unique integral curve $\gamma$ of $N$ passing through $p$ and let $x\in\Sigma_1$ be the intersection of $\gamma$ with $\Sigma_1$ (note that $x$ is the foot of the unique geodesic which minimizes the distance from $p$ to $\Sigma_1$). Let $r$ be the distance of $p$ to $\Sigma_1$. We then have a map $ \Omega\to [0,a]\times\Sigma_1 $ which sends $p$ to $(r,x)$. Its inverse is the map $F: [0,a]\times\Sigma_1\to\Omega$ defined by $$F(r,x)=\exp_x(rN).$$ Note that $F$ is a diffeomeorphism; we call the pair $(r,x)$ the [*normal coordinates*]{} based on $\Sigma_1$. 
We introduce the arc-length $t$ on $\Sigma_1$ (with origin in any assigned point of $\Sigma_1$) and recall that $L$ is the length of $\Sigma_1$ (which is also the length of $\Sigma_2$). Let us compute the metric $g$ in normal coordinates. Since $N={\dfrac{{\partial}}{{\partial}r}}$ one sees that $g_{11}=1$ everywhere; for any fixed $r=r_0$ we have that $F(r_0,\cdot)$ maps $\Sigma_1$ diffeomorphically onto the level set $\{\psi=r_0\}$ so that ${\dfrac{{\partial}}{{\partial}r}}$ and ${\dfrac{{\partial}}{{\partial}t}}$ will be mapped onto orthogonal vectors, and indeed $g_{12}=0$. Setting $\theta(r,t)^2={\langle{{\dfrac{{\partial}}{{\partial}t}}},{{\dfrac{{\partial}}{{\partial}t}}}\rangle}$ one sees that the metric takes the form stated above. Finally note that $ \theta(0,t)=1 $ for all $t$, because $F(0,\cdot)$ is the identity.

### Second step : Gauge invariance

Let $\Omega$ be any Riemannian cylinder and $A=f(r,t)\,dr+h(r,t)\,dt$ a closed $1$-form on $\Omega$. Then, there exists a smooth function $\phi$ on $\Omega$ such that $$A+d\phi=H(t)\,dt$$ for a smooth function $H(t)$ depending only on $t$. Hence, by gauge invariance, we can assume from the start that $A=H(t)\,dt$.

Consider the function $ \phi(r,t)=-\int_0^rf(x,t)\,dx. $ Then: $$A+d\phi=\tilde h(r,t)\,dt$$ for some smooth function $\tilde h(r,t)$. As $A$ is closed, also $A+d\phi$ is closed, which implies that ${\dfrac{{\partial}\tilde h}{\bdr}}=0$, that is, $ \tilde h(r,t) $ does not depend on $r$; if we set $H(t)\doteq\tilde h(0,t)$ we get the assertion.

[$\bullet\quad$]{}We point out the following consequence. If $u=u(r,t)$ is an eigenfunction, we know from [**F4**]{} above that $\nabla^A_Nu=0$, where $N={\dfrac{{\partial}}{\bdr}}$. As $ \nabla^A_Nu={\dfrac{{\partial}u}{\bdr}}-iA({\dfrac{{\partial}}{\bdr}})u $ and $A=H(t)\,dt$ we obtain $A({\dfrac{{\partial}}{\bdr}})=0$ hence $ {\dfrac{{\partial}u}{\bdr}}=0 $ at all points of $\Omega$. This implies that $$\label{uoft} u=u(t)$$ depends only on $t$.

### Third step : spectrum of circles and Riemannian products {#sectioncircle}

In this section, we give an expression for the eigenfunctions of the magnetic Laplacian on a circle with a Riemannian metric $g$ and a closed potential $A$. Of course, we know that any metric $g$ on a circle is always isometric to the canonical metric $g_{\rm can}=\,dt^2$, where $t$ is arc-length. But our problem in this proof is to reconstruct the global metric of the cylinder and to show that it is a product, and we cannot suppose a priori that the restricted metric of each level set of $\psi$ is the canonical metric. The same is true for the restricted potential: we know that it is gauge equivalent to a potential of the type $a\,dt$ for a scalar $a$, but we cannot suppose a priori that it is of that form. We refer to Appendix \[riemannian circle\] for the complete proof of the following fact.

\[circle\] Let $(M,g)$ be the circle of length $L$ endowed with the metric $ g=\theta(t)^2\,dt^2 $ where $t\in [0,L]$ and $\theta(t)$ is a positive function, periodic of period $L$. Let $A=H(t)\,dt$. Then, the eigenvalues of the magnetic Laplacian with potential $A$ are: $$\lambda_k(M,A)=\dfrac{4\pi^2}{L^2}(k-\Phi^A)^2, \quad k\in\bf Z$$ with associated eigenfunctions $$u_k(t)=e^{i\phi(t)}e^{\frac{2\pi i (k-\Phi^A)}{L}s(t)}, \quad k\in\bf Z,$$ where $\phi(t)=\int_0^tH(\tau)\,d\tau$ and $s(t)=\int_0^t\theta(\tau)\,d\tau$.
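As a quick numerical sanity check of the eigenvalue formula (not needed for the proofs), one may discretize the magnetic Laplacian on the circle and compare its lowest eigenvalue with $\frac{4\pi^2}{L^2}d(\Phi^A,{\bf Z})^2$. The short sketch below treats the canonical metric with a constant potential of flux $\Phi^A$, to which the general case reduces by the gauge and arc-length substitutions carried out in Appendix \[riemannian circle\]; the grid size and the value of the flux are arbitrary sample choices.

```python
import numpy as np

# Finite-difference check of lambda_1(S^1, A) = (4 pi^2 / L^2) d(Phi, Z)^2
# for the canonical metric and the constant potential A = (2 pi Phi / L) dt.
# L, Phi and n are arbitrary sample values.
L, Phi, n = 2 * np.pi, 0.3, 400
h = L / n
a = 2 * np.pi * Phi / L                   # constant potential A(d/dt)
phase = np.exp(-1j * a * h)               # Peierls phase on each grid link

# discrete version of -(d/dt - i a)^2 with periodic boundary conditions
M = np.zeros((n, n), dtype=complex)
for j in range(n):
    M[j, j] = 2.0 / h**2
    M[j, (j + 1) % n] = -phase / h**2
    M[j, (j - 1) % n] = -np.conj(phase) / h**2

lam1 = np.linalg.eigvalsh(M).min()
dist = min(Phi % 1.0, 1.0 - Phi % 1.0)    # distance from Phi to the integers
print(lam1, (4 * np.pi**2 / L**2) * dist**2)   # the two values agree up to O(h^2)
```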
In particular, if the metric is the canonical one, that is, $g=dt^2$, and the potential $1$-form is harmonic, so that $A=\frac{2\pi \Phi^A}{L}dt$, then the eigenfunctions are simply: $$u_k(t)=e^{\frac{2\pi i k}{L}t}, \quad k\in\bf Z.$$ We remark that if the flux $\Phi^A$ is not congruent to $1/2$ modulo integers, then the eigenvalues are all simple. If the flux is congruent to $1/2$ modulo integers, then there are two consecutive integers $k,k+1$ such that $ \lambda_{k}=\lambda_{k+1}. $ Consequently, the lowest eigenvalue has multiplicity two, and the first eigenspace is spanned by $$e^{i\phi(t)}e^{\frac{\pi i}{L}s(t)}, \, e^{i\phi(t)}e^{-\frac{\pi i}{L}s(t)}.$$ The following proposition is an easy consequence (for a proof, see also Appendix \[riemannian circle\]).

\[cyl\] Consider the Riemannian product $\Omega=[0,a]\times{{\bf S}^{1}}(\frac{L}{2\pi})$, and let $A$ be a closed $1$-form on $\Omega$. Then, the spectrum of $\Delta_A$ is given by $$\dfrac{\pi^2 h^2}{a^2}+\dfrac{4\pi^2}{L^2}(k-\Phi^A)^2, \quad h, k\in{\bf Z}, h\geq 0.$$ In particular, $$\lambda_1(\Omega,A)=\dfrac{4\pi^2}{L^2}d(\Phi^A,{\bf Z})^2.$$

### Fourth step : a calculus lemma

In this section, we state a technical lemma which will allow us to conclude. The proof is conceptually simple, but perhaps tricky at some points; we have therefore placed it in Appendix \[technical lemma\].

\[calculus\] Let $s:[0,a]\times [0,L]\to {{\bf R}}$ be a smooth, non-negative function such that $$s(0,t)=t,\quad s(r,0)=0, \quad s(r,L)=L \quad\text{and}\quad {\dfrac{{\partial}s}{\bdt}}(r,t)\doteq\theta(r,t)>0.$$ Assume that there exist smooth functions $p(r),q(r)$ with $p(r)^2+q(r)^2>0$ such that $$p(r)\cos(\frac{\pi}{L}s(r,t))+q(r)\sin(\frac{\pi}{L}s(r,t))=F(t)$$ where $F(t)$ depends only on $t$. Then $p$ and $q$ are constant and $ {\dfrac{{\partial}s}{\bdr}}=0 $ so that $$s(r,t)=t$$ for all $(r,t)$.

### End of proof of the equality case

Assume that equality holds. Then, if $u$ is an eigenfunction, we know that $u=u(t)$ by the discussion above, and $u$ restricts to an eigenfunction on each level circle $\Sigma_r$ for the potential $A=H(t)\,dt$ above (see Fact 3 at the beginning of Section \[equalitycase\] and the second step above). We assume that $\Phi^A$ is congruent to $\frac 12$ modulo integers. This is the most difficult case; the other cases are simpler particular cases of the same argument, and we omit them.

Recall that each level set $\Sigma_r$ is a circle of length $L$ for all $r$, with metric $g=\theta(r,t)^2\,dt^2$. As the flux of $A$ is congruent to $\frac 12$ modulo integers, we see that there exist complex-valued functions $w_1(r),w_2(r)$ such that $$u(t)=e^{i\phi(t)}\Big(w_1(r)e^{\frac{\pi i}{L} s(r,t)}+w_2(r)e^{-\frac{\pi i}{L} s(r,t)}\Big),$$ which, setting $f(t)=e^{-i\phi(t)}u(t)$, we can rewrite as $$\label{rewrite} f(t)=w_1(r)e^{\frac{\pi i}{L} s(r,t)}+w_2(r)e^{-\frac{\pi i}{L} s(r,t)}.$$ Recall that here $\phi(t)=\int_0^tH(\tau)\,d\tau$ and $$s(r,t)=\int_0^t\theta(r,\tau)\,d\tau.$$ We take the real part on both sides of this identity and obtain smooth real-valued functions $F(t), p(r),q(r)$ such that $$F(t)=p(r)\cos({\frac{\pi}{L}}s(r,t))+q(r)\sin(\frac{\pi}{L} s(r,t)).$$ Since $\theta(0,t)=1$ for all $t$, we see $$s(0,t)=t.$$ Clearly $s(r,0)=0$; finally, $s(r,L)=\int_0^L\theta(r,\tau)\,d\tau=L$, which is the length of the level circle $\Sigma_r$. Thus, we can apply Lemma \[calculus\] and conclude that $s(r,t)=t$ for all $(r,t)$, that is, $$\theta(r,t)=1$$ for all $(r,t)$ and the metric is a Riemannian product.
It might happen that $p(r)=q(r)\equiv 0$. But then the real part of $f(t)$ is zero and we can work in an analogous way with the imaginary part of $f(t)$, which cannot vanish unless $u\equiv 0$.

General estimate of $K_{\Omega,\psi}$ {#estimate K}
-------------------------------------

We can estimate $K_{\Omega,\psi}$ for a Riemannian cylinder $\Omega=[0,a]\times{{\bf S}^{1}}$ if we know the explicit expression of the metric in the normal coordinates $(r,t)$, where $t\in [0,2\pi]$ is arc-length: $$g= \left( \begin{array}{cc} g_{11} & g_{12} \\ g_{21} & g_{22} \end{array} \right).$$ If $g^{ij}$ is the inverse matrix of $g_{ij}$, and if $\psi=\psi(r,t)$ one has: $${\lvert{\nabla\psi}\rvert}^2=g^{11}\Big({\dfrac{{\partial}\psi}{\bdr}}\Big)^2+2g^{12}{\dfrac{{\partial}\psi}{\bdr}}{\dfrac{{\partial}\psi}{\bdt}}+ g^{22}\Big({\dfrac{{\partial}\psi}{\bdt}}\Big)^2.$$ The function $\psi(r,t)=r$ belongs to ${\cal F}_{\Omega}$ and one has: $ {\lvert{\nabla\psi}\rvert}^2=g^{11}, $ which immediately implies that we can take $$K_{\Omega,\psi}\leq \dfrac{\sup_{\Omega}g^{11}}{\inf_{\Omega}g^{11}}.$$ Note in particular that if $\Omega$ is rotationally invariant, so that the metric can be put in the form: $$g=\left( \begin{array}{cc} 1 & 0 \\ 0 & \alpha(r)^2 \end{array} \right),$$ for some function $\alpha(r)$, then $K_{\Omega,\psi}=1$. The estimate becomes $$\label{simple} \lambda_1(\Omega,A)\geq\dfrac{4\pi^2}{L^2}\cdot d(\Phi^A,{\bf Z})^2,$$ where $L$ is the maximum length of a level curve $r={\rm const}$.

Yet more generally, one can fix a smooth closed curve $\gamma$ on a Riemannian surface $M$ and consider the tube of radius $R$ around $\gamma$: $$\Omega=\{x\in M: d(x,\gamma)\leq R\}.$$ It is well-known that if $R$ is sufficiently small (less than the injectivity radius of the normal exponential map) then $\Omega$ is a cylinder with smooth boundary which can be foliated by the level sets of $\psi$, the distance function to $\gamma$. Clearly ${\lvert{\nabla\psi}\rvert}=1$ and the estimate above holds as well.

A concrete example where we could estimate the width $R$ is the case of a compact surface $M$ of genus $\ge 2$ and curvature $-a^2\le K\le -b^2$, $a \ge b >0$. Let $\gamma$ be a simple closed geodesic. Then, using the Gauss-Bonnet theorem, one can show that $R$ is bounded below by an explicit positive constant $R=R(\gamma,a)$, hence the $R$-neighborhood of $\gamma$ is diffeomorphic to the product $S^1 \times (-1,1)$ (see for example [@CF]). If we take $\Omega$ as the Riemannian cylinder of width $R(\gamma,a)$ having one boundary component equal to $\gamma$ then we can foliate $\Omega$ with the level sets of the distance function to $\gamma$ and so $K=1$ and the same estimate holds, with $L$ given by the length of the other boundary component.

Proof of Theorem \[main2\]: plane annuli {#convex}
========================================

Let $\Omega$ be an annulus in ${{\bf R}^{2}}$, which is starlike with respect to its inner convex boundary component $\Sigma_1$. Assume that $A$ is a closed potential having flux $\Phi^A$ around $\Sigma_1$. Recall that we have to show: $$\label{annuliestimate} \lambda_1(\Omega,A)\geq \dfrac{4\pi^2}{L^2} \dfrac{\beta m}{B} d(\Phi^A,{\bf Z})^2$$ where $\beta, B$ and $m$ will be recalled below and $L$ is the length of the outer boundary component.
If we assume that $\Sigma_2$ is also convex, then we show that $m\geq \beta/B$ and the lower bound takes the form: $$\label{annuliestimatetwo} \lambda_1(\Omega,A)\geq \dfrac{4\pi^2}{L^2} \dfrac{\beta^2}{B^2} d(\Phi^A,{\bf Z})^2.$$ Before giving the proof let us recall the notation. For $x\in\Sigma_1$, the ray $\gamma_x$ is the geodesic segment $\gamma_x(t)=x+tN_x$, where $N_x$ is the exterior normal to $\Sigma_1$ at $x$ and $t\geq 0$. The ray $\gamma_x$ meets $\Sigma_2$ at a first point $Q(x)$, and we let $ r(x)=d(x,Q(x)). $ For $x\in\Sigma_1$, we denote by $\theta_x$ the angle between the ray $\gamma'_x$ and the outer normal to $\Sigma_2$ at the point $Q(x)$, and we let $$m\doteq\min_{x\in\Sigma_1}{\cos\theta_x}.$$ We assume that $\Omega$ is strictly starlike, that is, $m>0$; in particular $Q(x)$ is unique. Recall also that: $$\label{annulus} \beta=\min_{x\in\Sigma_1}r(x), \quad B=\max_{x\in\Sigma_1}r(x).$$

We construct a suitable smooth function $\psi$ and estimate the constant $K=K_{\Omega,\psi}$ with respect to the geometry of $\Omega$. The starlike assumption implies that each point in $\Omega$ belongs to a unique ray $\gamma_x$. Then we can define a function $\psi:\Omega\to [0,1]$ as follows: $$\psi=\threesystem {0\quad\text{on}\quad\Sigma_1} {1\quad\text{on}\quad\Sigma_2} {\text{linear on each ray from $\Sigma_1$ to $\Sigma_2$}.}$$ The two estimates above now follow from Theorem \[main3\] together with the following proposition.

\[estimate doubly convex\]

a\) At all points of $\Omega$ one has: $ \frac{1}{B}\leq{\lvert{\nabla\psi}\rvert}\leq\frac{1}{\beta m}. $ Therefore: $$K_{\Omega,\psi}=\dfrac{\sup_{\Omega}{\lvert{\nabla\psi}\rvert}}{\inf_{\Omega}{\lvert{\nabla\psi}\rvert}}\leq\dfrac{B}{\beta m}.$$

b\) One has $$\sup_{r\in [0,1]}{\lvert{\psi^{-1}(r)}\rvert}=L={\lvert{\Sigma_2}\rvert}.$$

c\) If $\Sigma_2$ is also convex, then $m\geq \beta/B$, hence we can take $K=B^2/\beta^2$.

The proof of Proposition \[estimate doubly convex\] depends on the following steps.

[**Step 1.**]{} [*On the ray $\gamma_x$ joining $x$ to $Q(x)$, consider the point $Q_t(x)$ at distance $t$ from $x$, and let $\theta_x(t)$ be the angle between $\gamma'_x$ and $\nabla\psi(Q_t(x))$. Then the function $$h(t)=\cos(\theta_x(t))$$ is non-increasing in $t$. As $\theta_x(r(x))=\theta_x$ we have in particular: $$\cos(\theta_x(t))\geq \cos(\theta_x)\geq m$$ for all $t\in [0,r(x)]$ and $x\in\Sigma_1$.*]{}

[**Step 2.**]{} [*The function $r\to{\lvert{\psi^{-1}(r)}\rvert}$ is non-decreasing in $r$.*]{}

[**Step 3.**]{} [*If $\Sigma_2$ is also convex we have $m\geq \beta/B$.*]{}

We will prove Steps 1-3 below.

[**Proof of Proposition \[estimate doubly convex\]**]{}. a) At any point of $\Omega$, let $\nabla^R\psi$ denote the radial part of $\nabla\psi$, which is the gradient of the restriction of $\psi$ to the ray passing through the given point. As such restriction is a linear function, one sees that $$\dfrac{1}{B}\leq{\lvert{\nabla^R\psi}\rvert}\leq \dfrac{1}{\beta}.$$ Since ${\lvert{\nabla\psi}\rvert}\geq{\lvert{\nabla^R\psi}\rvert}$ one gets immediately $${\lvert{\nabla\psi}\rvert}\geq\dfrac{1}{B}.$$ Note that $\theta_x(t)$, as defined above, is precisely the angle between $\nabla\psi$ and $\nabla^R\psi$, so that, using Step 1, $${\lvert{\nabla^R\psi}\rvert}={\lvert{\nabla\psi}\rvert}\cos\theta_x(t)\geq m{\lvert{\nabla\psi}\rvert}$$ hence: $${\lvert{\nabla\psi}\rvert}\leq \dfrac{1}{m}{\lvert{\nabla^R\psi}\rvert}\leq\dfrac{1}{\beta m},$$ as asserted. It is clear that b) and c) are immediate consequences of Steps 2-3.
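Before proving the three steps, we note that all the quantities entering the bound are elementary to compute for explicit domains. The following sketch evaluates $\beta$, $B$, $m$, $L$ and the resulting right-hand side of the first estimate for an eccentric circular annulus; the radii, the offset of the centres and the flux are arbitrary sample values, not data used elsewhere in the paper.

```python
import numpy as np

# Sample evaluation of the lower bound for an eccentric circular annulus:
# inner unit circle centred at the origin, outer circle of radius R0 centred
# at (c, 0).  The values of R0, c and Phi are arbitrary assumptions.
R0, c, Phi = 3.0, 0.5, 0.25
s = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
x = np.stack([np.cos(s), np.sin(s)], axis=1)      # points of Sigma_1
N = x.copy()                                      # outer normal of the unit circle

# first intersection Q(x) of the ray x + t N with the outer circle
d = x - np.array([c, 0.0])
b = np.einsum('ij,ij->i', d, N)
t = -b + np.sqrt(b**2 - (np.einsum('ij,ij->i', d, d) - R0**2))
Q = x + t[:, None] * N

beta, B = t.min(), t.max()                        # beta = min r(x), B = max r(x)
nu = (Q - np.array([c, 0.0])) / R0                # outer normal of Sigma_2 at Q(x)
m = np.einsum('ij,ij->i', N, nu).min()            # m = min cos(theta_x) > 0

L = 2.0 * np.pi * R0                              # length of the outer boundary
dist = min(Phi % 1.0, 1.0 - Phi % 1.0)
print(beta, B, m, (4 * np.pi**2 / L**2) * (beta * m / B) * dist**2)
```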
**Proof of Step 1.** We use a suitable parametrization of $\Omega$. Let $l$ be the length of $\Sigma_1$ and consider a parametrization $\gamma:[0,l]\to \Sigma_1$ by arc-length $s$ with origin at a given point in $\Sigma_1$. Let $N(s)$ be the outer normal vector to $\Sigma_1$ at the point $\gamma(s)$. Consider the set: $$\tilde\Omega=\{(t,s)\in [0,\infty)\times [0,l): t\leq \rho(s)\}$$ where we have set $\rho(s)=r(\gamma(s))$. The starlike property implies that the map $ \Phi:\tilde\Omega\to \Omega $ defined by $$\Phi(t,s)=\gamma(s)+tN(s)$$ is a diffeomorphism. Let us compute the Euclidean metric tensor in the coordinates $(t,s)$. Write $\gamma'(s)=T(s)$ for the unit tangent vector to $\gamma$ and observe that $N'(s)=k(s)T(s)$, where $k(s)$ is the curvature of $\Sigma_1$ which is everywhere non-negative because $\Sigma_1$ is convex. Then: $$\twosystem {d\Phi(\dfrac{{\partial}}{{\partial}t})=N(s)} {d\Phi(\dfrac{{\partial}}{{\partial}s})=(1+tk(s))T(s)}$$ If we set $\Theta(t,s)=1+t k(s)$ the metric tensor is: $$g={{\begin{pmatrix}}{1&0\\}{0&\Theta^2\\}{\end{pmatrix}}}$$ and an orthonormal basis is then $(e_1,e_2)$, where $$e_1=\dfrac{{\partial}}{{\partial}t}, \quad e_2=\dfrac{1}{\Theta}\dfrac{{\partial}}{{\partial}s}.$$ In these coordinates, our function $\psi$ is written: $$\psi(t,s)=\dfrac{t}{\rho(s)}.$$ Now $$\twosystem {{\langle{\nabla\psi},{e_1}\rangle}={\dfrac{{\partial}\psi}{\bdt}}=\dfrac{1}{\rho(s)}} {{\langle{\nabla\psi},{e_2}\rangle}=\dfrac{1}{\Theta}{\dfrac{{\partial}\psi}{\bds}}=-\dfrac{t\rho'(s)}{\Theta(t,s)\rho(s)^2}}.$$ It follows that $${\lvert{\nabla\psi}\rvert}^2=\dfrac{1}{\rho^2}+\dfrac{t^2\rho'^2}{\Theta^2\rho^4}= \dfrac{\Theta^2\rho^2+t^2\rho'^2}{\Theta^2\rho^4}.$$ Recall the radial gradient, which is the orthogonal projection of $\nabla\psi$ on the ray, whose direction is given by $e_1$. If we fix $x\in\Sigma_1$, we have $$\theta_x(t)=\text{angle between $\nabla\psi$ and $e_1$}$$ and we have to study the function $$h(t)=\cos\theta_x(t)=\dfrac{{\langle{\nabla\psi},{e_1}\rangle}}{{\lvert{\nabla\psi}\rvert}}=\dfrac{1}{\rho(s){\lvert{\nabla\psi}\rvert}}$$ for a fixed $s$. From the above expression of ${\lvert{\nabla\psi}\rvert}$ and a suitable manipulation we see $$h(t)^2=\dfrac{\Theta^2}{\Theta^2+t^2g^2}$$ where $g=\rho'(s)/\rho(s)$. Now $$\begin{aligned} \dfrac{d}{dt}\dfrac{\Theta^2}{\Theta^2+t^2g^2}&=\dfrac{2t\Theta g^2}{(\Theta^2+t^2g^2)^2} (t{\dfrac{{\partial}\Theta}{\bdt}}-\Theta)\\ \end{aligned}$$ As $\Theta(t,s)=1+tk(s)$ one sees that $t{\dfrac{{\partial}\Theta}{\bdt}}-\Theta=-1$ hence $$\dfrac{d}{dt}h(t)^2=-\dfrac{2t\Theta g^2}{(\Theta^2+t^2g^2)^2}\leq 0$$ Hence $h(t)^2$ is non-increasing and, as $h(t)$ is positive, it is itself non-increasing. [**Proof of Step 2.**]{} In the coordinates $(t,s)$ the curve $\psi^{-1}(r)$ is parametrized by $\alpha:[0,l]\to\tilde\Omega$ as follows: $$\alpha(u)=(r\rho(u),u)\quad u\in [0,l].$$ Then: $$\begin{aligned} {\lvert{\psi^{-1}(r)}\rvert}&=\int_0^l\sqrt{g(\alpha'(u),\alpha'(u))}\,du\\ &=\int_0^l\sqrt{r^2\rho'(u)^2+(1+rk(u)\rho(u))^2}\,du \end{aligned}$$ Convexity of $\Sigma_1$ implies that $k(u)\geq 0$ for all $u$; differentiating under the integral sign with respect to $r$ one sees that indeed $\frac{d}{dr}{\lvert{\psi^{-1}(r)}\rvert}\geq 0$ for all $r\in [0,1]$. [**Proof of Step 3.**]{} Let $T_x$ be the tangent line to $\Sigma_2$ at $Q(x)$ and $H(x)$ the point of $T_x$ closest to $x$. 
As $\Sigma_2$ is convex, $H(x)$ is not an interior point of $\Omega$, hence $$d(x,H(x))\geq\beta.$$ The triangle formed by $x$, $Q(x)$ and $H(x)$ has a right angle at $H(x)$, hence: $$r(x)\cos\theta_x=d(x,H(x)).$$ As $r(x)\leq B$ we conclude: $$B \cos\theta_x\geq \beta,$$ which gives the assertion.

Sharpness of the lower bound {#sharpness}
============================

An upper bound
--------------

In this short paragraph, we give a simple way to get an upper bound when the potential $A$ is *closed*. Then, we will use this in different kinds of examples, in order to show that the assumptions of Theorem \[main2\] are sharp. The geometric idea is the following: if we have a region $D \subset \Omega$ such that the first absolute cohomology group $H^1(D)$ is $0$, then we can estimate from above the spectrum of $\Delta_A$ in $\Omega$ in terms of the spectrum of the usual Laplacian on $D$. The reason is that the potential $A$ is $0$ on $D$ up to a gauge transformation; then, on $D$, $\Delta_A$ becomes the usual Laplacian and any eigenfunction of the Laplacian on $D$ may be extended by $0$ on $\Omega$ and thus used as a test function for the magnetic Laplacian on the whole of $\Omega$. Let us give the details.

Let $D$ be a closed subset of $\Omega$ such that, for some (small) $\delta>0$ one has $H^1(D^{\delta},{{\bf R}})=0$, where $D^{\delta}=\{p\in \Omega: {\rm dist}(p,D) < \delta\}$. This happens when $D^{\delta}$ has a retraction onto $D$. We write $$\partial D= (\partial D\cap \partial \Omega) \cup (\partial D \cap \Omega)=\partial^{\rm ext}D\cup\partial^{\rm int}D$$ and we denote by $(\nu_j(D))_{j=1}^{\infty}$ the spectrum of the Laplacian acting on functions, with the Neumann boundary condition on $\partial^{\rm ext}D$ (if non-empty) and the Dirichlet boundary condition on $\partial^{\rm int}D$.

\[upperharmonic\] Let $\Omega$ be a compact manifold with smooth boundary and $A$ a closed potential on $\Omega$. Assume that $D\subset \Omega$ is a compact subdomain such that $H^1(D,\textbf R)=H^1(D^{\delta},\textbf R)=0$ for some $\delta>0$. Then we have $$\lambda_k(\Omega,A) \le \nu_k(D)$$ for each $k\geq 1$.

**Proof.** We recall that for any function $\phi$ on $\Omega$, the operators $\Delta_A$ and $\Delta_{A+d\phi}$ are unitarily equivalent and have the same spectrum. As $A$ is closed and, by assumption, $H^1(D^{\delta},\textbf R)=0$, $A$ is exact on $D^{\delta}$ and there exists a function $\tilde \phi$ on $D^{\delta}$ such that $A+d\tilde\phi=0$ on $D^{\delta}$. We consider the restriction of $\tilde\phi$ to $D$ and extend it differentiably on $\Omega$ by using a partition of unity $(\chi_1,\chi_2)$ subordinate to $(D^{\delta},\Omega\setminus D)$. Then, setting $$\phi\doteq\chi_1 \tilde\phi$$ we see that $\phi$ is a smooth function on $\Omega$ which is equal to $\tilde\phi$ on $D$ so that, on $D$, one has $A+d\phi=0$. We consider the new potential $\tilde A=A+d\phi$ and observe that $\tilde A=0$ on $D$. Now consider an eigenfunction $f$ for the mixed problem on $D$ (Neumann boundary conditions on $\partial^{\rm ext}D$ and Dirichlet boundary conditions on $\partial^{\rm int}D$), and extend it by $0$ on $\Omega\setminus D$. As $\tilde A=0$ on $D$, we see that $${\lvert{\nabla^{\tilde A}f}\rvert}^2={\lvert{\nabla f}\rvert}^2,$$ and we get a test function having the same Rayleigh quotient as that of $f$.
Thanks to the usual min-max characterization of the spectrum, we obtain, for all $k$: $$\lambda_k(\Omega,A) = \lambda_k(\Omega,\tilde A)\le \nu_k(D).$$

Sharpness
---------

We will use Proposition \[upperharmonic\] to show the sharpness of the hypothesis in Theorem \[main2\]. Let us first show that we need to control the ratio $\frac{BL}{\beta}$.

\[Blarge\] In the first situation, we give an example where the ratio $\frac{BL}{\beta} \to \infty$ and the distance $\beta$ between the two components of the boundary is uniformly bounded from below. We want to show that $\lambda_1\to 0$. We consider the annulus $\Omega$ bounded by two concentric circles of radii $1$ and $R+1$, with $R\to \infty$. We have $B=\beta=R$ and $L \to \infty$. From the assumptions we get the existence of a point $x\in \Omega$ such that the ball $B(x,\frac{R}{2})$ of center $x$ and radius $\frac{R}{2}$ is contained in $\Omega$. Proposition \[upperharmonic\] implies that $\lambda_1(\Omega,A)$ is bounded from above by the first eigenvalue of the Dirichlet problem for the Laplacian of the ball, which is proportional to $\frac{1}{R^2}$ and tends to zero because $R\to\infty$.

\[example1\] Next, we construct an example to show that if the distance $\beta$ tends to $0$ and $B$ and $L$ are uniformly bounded from below and from above, then again $\lambda_1 \to 0$. We again use Proposition \[upperharmonic\]. Fix the rectangles: $$R_2 = [-4,4]\times [0,4], \quad R_{1,\epsilon}=[-3,3]\times [\epsilon,2]$$ and consider the region $\Omega_{\epsilon}$ given by the closure of $R_2\setminus R_{1,\epsilon}$. Note that $\Omega_{\epsilon}$ is a planar annulus whose boundary components are convex and get closer and closer as $\epsilon\to 0$.

![$\lambda_1 \to 0$ as $\epsilon \to 0$](PVP.jpg){width="70mm"}

We show that, for any closed potential $A$ one has: $$\label{small} \lim_{\epsilon\to 0}\lambda_1(\Omega_{\epsilon},A)=0.$$ Consider the simply connected region $D_{\epsilon}\subset\Omega_{\epsilon}$ given by the complement in $\Omega_{\epsilon}$ of the rectangle $[-1,1]\times [0,\epsilon]$. Now $D_{\epsilon}$ has trivial $1$-cohomology; by Proposition \[upperharmonic\], to prove the limit above it is enough to show that $$\label{enough} \lim_{\epsilon\to 0}\nu_1(D_{\epsilon})=0.$$ By the min-max principle: $$\nu_1(D_{\epsilon})=\inf\Big\{ \frac{\int_{D_{\epsilon}}\vert \nabla f\vert^2}{\int_{D_{\epsilon}}f^2} : f=0\,\,\text{on}\,\, {\partial}D_{\epsilon}^{\rm int} \Big\}$$ where $${\partial}D_{\epsilon}^{\rm int} =\{(x,y)\in\Omega_{\epsilon}:x=\pm 1, y\in [0,\epsilon]\}.$$ Define the test-function $f:D_{\epsilon}\to{{\bf R}}$ as follows. $$f=\threesystem{1\quad\text{on the complement of $[-2,2]\times[0,\epsilon]$}} {x-1\quad\text{on $[1,2]\times [0,\epsilon]$}} {-x-1\quad\text{on $[-2,-1]\times [0,\epsilon]$}}$$ One checks easily that, for all $\epsilon$: $$\int_{D_{\epsilon}}\vert \nabla f\vert^2=2\epsilon, \quad \int_{D_{\epsilon}}f^2\geq {\rm const}>0.$$ The claim then follows immediately by observing that the Rayleigh quotient of $f$ tends to $0$ as $\epsilon\to 0$.

\[example small\] In the example we constructed previously the two boundary components approach each other along a common set of positive measure (precisely, a segment of total length $6$). In the next example we sketch a construction showing that, in fact, this is not necessary. So, let us fix the outside curve $\Sigma_2$ and choose a family of inner convex curves $\Sigma_1$ such that $B$ is bounded below (say, $B\ge 1$) and $\beta \to 0$ (no other assumption is made).
Then, we want to show that $\lambda_1(\Omega,A)\to 0$. Fix points $x\in \Sigma_2$, $y\in \Sigma_1$ such that $d(x,y)=\beta$. We take $b=2\beta$ and introduce the balls of center $x$ and radius $b$ and $\sqrt b$, denoted by $B(x,b)$ and $B(x,\sqrt b)$, respectively. Then the set $D=\Omega\setminus (B(x,b)\cap\Omega)$ is simply connected so that, by Proposition \[upperharmonic\]: $$\lambda_1(\Omega,A)\leq \nu_1(D)$$ and it remains to show that $\nu_1(D)\to 0$ as $b\to 0$. Introduce the function $F(r)$ ( $r$ being the distance to $x$): $$F(r)=\threesystem{1\quad\text{on the complement of $B(x,\sqrt b)$}} {0\quad\text{on $B(x,b)$}} {\frac{-2}{\ln b} (\ln r -\ln b)\quad\text{on $B(x,\sqrt b)-B(x,b)$}}$$ and let $f$ be the restriction of $F$ to $D$. As $f=0$ on ${\partial}^{\rm int}D={\partial}B(x,b)\cap\Omega$, we see that $f$ is a test function for the eigenvalue $\nu_1(D)$. A straightforward calculation shows that, as $b\to 0$, we have $$\int_D \vert \nabla f\vert^2 \to 0;$$ on the other hand, as $B \ge 1$, the volume of $D$ is uniformly bounded from below, which implies that $$\int_D f^2 \ge C >0.$$ We conclude that the Rayleigh quotient of $f$ tends to $0$ as $b \to 0$, which shows the assertion. \[example2\] The following example shows that we need to impose some condition on the outer curve in order to get a positive lower bound as in Theorem \[main2\]. It is an easy and classical fact that, in order to create a small eigenvalue for the Neumann problem, it is sufficient to deform a domain locally, near a boundary point, as indicated by the mushroom-shaped region shown in the figure below. Up to a gauge transformation, we can suppose that the potential $A$ is locally $0$ in a neigborhood of the mushroom, and we have to estimate the first eigenvalue of the Laplacian with Dirichlet boundary condition at the basis of the mushroom (which is a segment of length $\epsilon$) and Neumann boundary condition on the remaining part of its boundary, as required by Proposition \[upperharmonic\]. ![A local deformation implying $\lambda_1 \to 0$](Deformation.jpg){width="70mm"} The only point is to take the value of the parameter $\epsilon$ much smaller than $\delta$ as $\delta \to 0$. Take for example $\epsilon =\delta^4$ and consider a function $u$ taking value $1$ in the square of size $\delta$ and passing linearly from $1$ to $0$ outside the rectangle of sizes $\epsilon,\delta$. The norm of the gradient of $u$ is $0$ on the square of size $\delta$ and $\frac{1}{\delta}$ in the rectangle of size $\delta,\epsilon$. Then the Rayleigh quotient is $$R(u) \le \frac{\frac{1}{\delta^2}\delta \epsilon}{\delta^2}=\frac{\epsilon}{\delta^3}$$ which tends to $0$ as $\delta \to 0$. Moreover, we can make such local deformation keeping the curvature of the boundary uniformly bounded in absolute value (see Example 2 in [@CGI]). Appendix ======== Spectrum of circles and Riemannian products {#riemannian circle} ------------------------------------------- We first prove Proposition \[circle\]. Let then $(M,g)$ be the circle of length $L$ with metric $g=\theta(t)^2dt^2$, where $t\in [0,L]$ and $\theta(t)$ is periodic of period $L$. Given the $1$-form $A=H(t)dt$ we first want to find the harmonic $1$-form $\omega$ which is cohomologous to $A$; that is, we look for a smooth function $\phi$ so that $ \omega=A+d\phi $ is harmonic. Now a unit tangent vector field to the circle is $$e_1=\dfrac{1}{\theta} \dfrac{d}{dt}.$$ Write $\omega=G(t)\,dt$. 
Then $$\delta\omega=-\dfrac 1{\theta}\Big(\dfrac{G}{\theta}\Big)'.$$ As any $1$-form on the circle is closed, we see that $\omega$ is harmonic iff $G(t)=c\theta(t)$ for a constant $c$. We look for $\phi$ and $c\in{{\bf R}}$ so that $$\phi'=-H+c\theta.$$ As $\phi$ must be periodic of period $L$, we must have $\int_0^L\phi'=0$. As the volume of $M$ is $L$, we also have $\int_0^L\theta=L$. This forces $$c=\dfrac{1}{L}\int_0^LH(t)\,dt.$$ On the other hand, as the curve $\gamma(t)=t$ parametrizes $M$ with velocity $\frac{d}{dt}$, one sees that the flux of $A$ across $M$ is given by $$\Phi^A=\dfrac{1}{2\pi}\int_0^LH(t)\,dt.$$ Therefore $ c=\frac{2\pi}{L} \Phi^A $ and a primitive could be $$\phi(t)=-\int_0^tH+c\int_0^t\theta.$$ Conclusion:

[$\bullet\quad$]{}[*The form $A=H(t)dt$ is cohomologous to the harmonic form $\omega=c\theta\, dt$ with $c=\frac{2\pi}{L} \Phi^A$*]{}.

We first compute the eigenvalues. By gauge invariance, we can use the potential $\omega$. In that case $$\Delta_{\omega}=-\nabla^{\omega}_{e_1}\nabla^{\omega}_{e_1}.$$ Now $$\nabla^{\omega}_{e_1}u=\dfrac{u'}{\theta}-icu$$ hence $$\nabla^{\omega}_{e_1}\nabla^{\omega}_{e_1}u=\dfrac{1}{\theta}\Big(\dfrac{u'}{\theta}-icu\Big)'-ic\Big(\dfrac{u'}{\theta}-icu\Big).$$ After some calculation, the eigenfunction equation $\Delta_{\omega}u=\lambda u$ takes the form: $$-u''+\dfrac{\theta'}{\theta}u'+2ic \theta u'+c^2\theta^2 u=\lambda\theta^2 u.$$ Recall the arc-length function $s(t)=\int_0^t\theta(\tau)\,d\tau$. We make the change of variables: $$u(t)=v(s(t)), \quad\text{that is}\quad v=u\circ s^{-1}.$$ Then: $$\twosystem {u'=v'(s)\theta} {u''=v''(s)\theta^2+v'(s)\theta'}$$ and the equation becomes: $$-v''+2ic v'+c^2v=\lambda v$$ with solutions: $$v_k(s)=e^{\frac{2\pi i k}{L}s}, \quad \lambda=\dfrac{4\pi^2}{L^2}(k-\Phi^A)^2, \quad k\in\bf Z.$$ Now gauge invariance says that $$\Delta_{A+d\phi}=e^{i\phi}\Delta_Ae^{-i\phi};$$ and $v_k$ is an eigenfunction of $\Delta_{A+d\phi}$ iff $e^{-i\phi}v_k$ is an eigenfunction of $\Delta_A$. Hence, the eigenfunctions of $\Delta_A$ (where $A=H(t)\,dt$) are $$u_k=e^{-i\phi}v_k,$$ where $\phi(t)=-\int_0^tH+c\, s(t)$ and $c=\frac{2\pi}{L}\Phi^A$. Explicitly: $$\label{eigenfunctions} u_k(t)=e^{i\int_0^tH}e^{\frac{2\pi i(k-\Phi^A)s(t)}{L}}$$ as asserted in Proposition \[circle\].

Let us now verify the last statement. If the metric is $g=dt^2$ then $\theta(t)=1$ and $s(t)=t$. If $A$ is a harmonic $1$-form then it has the expression $A=\frac{2\pi \Phi^A}{L}dt$. Taking into account the expression above for $u_k$, we indeed verify that $u_k(t)=e^{\frac{2\pi i k}{L}t}$.

[$\bullet\quad$]{}We now prove Proposition \[cyl\]. Here we assume that $\Omega$ is a Riemannian product $[0,a]\times{{\bf S}^{1}}(\frac{L}{2\pi})$ with coordinates $(r,t)$ and the canonical metric on the circle. We fix a closed potential $A$ on $\Omega$. By gauge invariance we can assume that $A$ is in the Coulomb gauge, and by what we said above we have easily $$A=\dfrac{2\pi \Phi^A}{L}\,dt.$$ Then $A$ restricts to zero on $[0,a]$; as $A(N)=0$ on ${\partial}\Omega$ the magnetic Neumann conditions reduce simply to ${\dfrac{{\partial}u}{\bdN}}=0$. At this point we apply a standard argument of separation of variables; if $\phi(r)$ is an eigenfunction of the usual Neumann Laplacian on $[0,a]$, and $v(t)$ is an eigenfunction of $\Delta_A$ on ${{\bf S}^{1}}(\frac{L}{2\pi})$, we see that the product $u(r,t)=\phi(r)v(t)$ is indeed an eigenfunction of $\Delta_A$ on $\Omega$.
As the set of eigenfunctions we obtain that way is a complete orthonormal system in $L^2(\Omega)$, we see that each eigenvalue of the product is the sum of an eigenvalue in the Neumann spectrum of $[0,a]$ and an eigenvalue in the magnetic spectrum of the circle, as computed before. We omit further details.

Proof of Lemma \[calculus\] {#technical lemma}
---------------------------

For simplicity of notation, we give the proof when $a=L=1$. This will not affect generality. Then, assume that $s : [0,1]\times [0,1]\to{{\bf R}}$ is smooth, non-negative and satisfies $$s(0,t)=t,\quad s(r,0)=0,\quad s(r,1)=1\quad\text{and}\quad {\dfrac{{\partial}s}{\bdt}}(r,t)\doteq \theta(r,t)>0.$$ Assume the identity $$\label{identity} F(t)=p(r)\cos(\pi s(r,t))+q(r)\sin(\pi s(r,t))$$ for real-valued functions $F(t),p(r),q(r)$, such that $p(r)^2+q(r)^2>0$. Then we must show: $$\label{sr} {\dfrac{{\partial}s}{\bdr}}=0$$ everywhere.

Differentiate with respect to $t$ and get: $$\label{fprime} F'(t)=-\pi p(r)\theta(r,t)\sin(\pi s)+\pi q(r)\theta(r,t)\cos(\pi s)$$ and we have the following matrix identity $$\begin{pmatrix} \cos(\pi s)&\sin(\pi s)\\ -\pi\theta\sin(\pi s)&\pi\theta\cos(\pi s) \end{pmatrix} \begin{pmatrix} p(r)\\ q(r) \end{pmatrix}= \begin{pmatrix} F(t)\\ F'(t) \end{pmatrix}.$$ We then see: $$p(r)=F(t)\cos(\pi s)-\dfrac{F'(t)}{\pi\theta}\sin(\pi s).$$ Set $t=0$ so that $s=0$ and $p(r)=F(0)\doteq p$ is constant; the previous identity becomes $$\label{p} p=F(t)\cos(\pi s)-\dfrac{F'(t)}{\pi\theta}\sin(\pi s).$$ Observe that: $$\label{changes} \twosystem {F'(0)=\pi q(r)\theta(r,0)} {F'(1)=-\pi q(r)\theta(r,1)}$$

[$\bullet\quad$]{}Assume $F'(0)=0$. Then, as $\theta(r,t)$ is positive one must have $q(r)=0$ for all $r$, hence $p\ne 0$ and $ F(t)=p\cos(\pi s), $ from which, differentiating with respect to $r$, one gets easily $ {\dfrac{{\partial}s}{\bdr}}=0 $ and we are finished.

[$\bullet\quad$]{}We now assume that $F'(0)\ne 0$: then we see from the relations above that $q$ is not identically zero and the smooth function $F':[0,1]\to{{\bf R}}$ changes sign. This implies that

[$\bullet\quad$]{}[*there exists $t_0\in (0,1)$ such that $F'(t_0)=0$.*]{}

Now, evaluating the expression for $p$ above at $t=t_0$ gives: $$p=F(t_0)\cos(\pi s(r,t_0))$$ for all $r$. Differentiate w.r.t. $r$ and get, for all $r\in [0,1]$: $$0=\sin(\pi s(r,t_0)){\dfrac{{\partial}s}{\bdr}}(r,t_0).$$ Since $s(r,t)$ is increasing in $t$, we have $$0<s(r,t_0)<s(r,1)=1.$$ Hence $\sin(\pi s(r,t_0))>0$ and we get $${\dfrac{{\partial}s}{\bdr}}(r,t_0)=0.$$ The original identity now reads: $$F(t)=p\cos(\pi s)+q(r)\sin(\pi s),$$ and then, differentiating w.r.t. $r$: $$0=-p\pi \sin(\pi s){\dfrac{{\partial}s}{\bdr}}+q'(r)\sin(\pi s)+\pi q(r)\cos(\pi s){\dfrac{{\partial}s}{\bdr}}.$$ Evaluating at $t=t_0$ we obtain $ 0=q'(r)\sin(\pi s(r,t_0)) $ which implies $$q'(r)=0$$ hence $q(r)=q$, a constant. We conclude that $$F(t)=p\cos(\pi s)+q\sin(\pi s)$$ for constants $p,q$. We differentiate the above w.r.t. $r$ and get: $$0=\Big(-\pi p \sin(\pi s)+\pi q\cos(\pi s)\Big){\dfrac{{\partial}s}{\bdr}}$$ for all $(r,t)\in [0,1]\times [0,1]$. Now, the expression inside parentheses is non-zero a.e. on the square. Then one must have ${\dfrac{{\partial}s}{\bdr}}=0$ everywhere and the final assertion follows.

Bruno Colbois Université de Neuchâtel, Institut de Mathématiques\ Rue Emile Argand 11\ CH-2000, Neuchâtel, Suisse bruno.colbois@unine.ch

Alessandro Savo Dipartimento SBAI, Sezione di Matematica\ Sapienza Università di Roma, Via Antonio Scarpa 16\ 00161 Roma, Italy alessandro.savo@sbai.uniroma1.it
---
abstract: 'During the epoch of large galaxy formation, thermal instability leads to the formation of a population of cool fragments which are embedded within a background of tenuous hot gas. The hot gas attains a quasi-hydrostatic equilibrium. Although the cool clouds are pressure confined by the hot gas, they fall into the galactic potential, and their motion is subject to drag from the hot gas. The release of gravitational energy due to the infall of the cool clouds is first converted into their kinetic energy, and is subsequently dissipated as heat. The cool clouds therefore represent a potentially significant energy source for the background hot gas, depending upon the ratio of thermal energy deposited within the clouds versus the hot gas. In this paper, we show that most of the dissipated energy is deposited in the tenuous hot halo gas, providing a source of internal energy to replenish losses in the hot gas through bremsstrahlung emission and conduction into the cool clouds. The heating from the motion of the cool clouds allows the multi-phase structure of the interstellar medium to be maintained.'
author:
- 'Stephen D. Murray'
- 'Douglas N. C. Lin'
title: 'Energy Dissipation in Multi-Phase Infalling Clouds in Galaxy Halos'
---

Introduction
============

The stellar velocity dispersion in the halos of galaxies similar to the Milky Way exceeds 100 km s$^{-1}$. The gravitational potential that binds the stars to their host galaxies is dominated by collisionless dark matter. According to the widely adopted cold dark matter (CDM) scenario, these normal galaxies are formed through the mergers of much smaller entities, dwarf galaxies. After violent relaxation, the dark matter is well mixed in phase space and attains an extended 3-D spatial distribution. In spiral galaxies, the formation and concentration of stars in extended, flattened, rotating disks require the detachment of ordinary matter from the dark-matter halos of the original host dwarf galaxies. The dominance of the dark-matter halo in the galactic potential at large radii, and the separation of the ordinary matter imply that, during the epoch of galactic buildup, the ordinary matter was primarily in the form of gas which dissipated a substantial fraction of its initial potential energy.

In a previous paper (Lin & Murray 2000), we considered the dynamical evolution of infalling gas in the halos of normal galaxies. We showed that for typical values ($\sim 10^6$ K) of the virial temperature, the cooling timescale increases with temperature, and the protogalactic clouds (hereafter PGC’s) are thermally unstable (Field 1965). Thermal instability leads to the rapid growth of perturbations and fragmentation of a PGC (Murray & Lin 1990). The result is that a two-phase medium develops during the initial cooling of the PGC, in which a population of warm fragmentary clouds (WFC’s) is confined by the pressure of hot, residual halo gas (RHG) (Burkert & Lin 2001). The RHG is cooled by radiative emission and conductive transport into the WFC’s (which are efficient radiators). In our earlier work, we assumed that the RHG is heated primarily by the release of the gravitational energy as the WFC’s fall into the central region of the halo potential, due both to their collective gravity and to that of the dark matter.
The WFC’s are unable to cool below 10$^4$ K until their density reaches a sufficiently high value that the WFC’s become self-shielded from external photo-dissociating UV radiation (Couchman & Rees 1986; Haiman, Rees, & Loeb 1997; Dong, Lin, & Murray 2003). In the above picture, the evolution of the WFC’s is similar to that of Lyman-$\alpha$ clouds and high velocity clouds (HVC’s). Both of those systems have been proposed as representatives of late-time accretion of material in an ongoing process of galaxy buildup by mergers [@MI94b; @Blitzetal99; @Manning99]. Because they evolve at an earlier time and closer to the centers of the parent galaxies, however, the WFC’s would evolve in an environment of higher pressures and UV fluxes, compared to either Lyman-$\alpha$ clouds or HVC’s. Their environment may, instead, more closely resemble that of cooling flows (e.g. Sarazin 1986; Loewenstein & Mathews 1987; Sarazin & White 1987, 1988), and many of our results may have relevance to those systems. Additionally, the Ly-$\alpha$ clouds have been proposed as being contained within dark matter “minihaloes” (e.g. Rees 1986; Ikeuchi 1986; Mo, Miralda-Escude, & Rees 1993), whereas the WFC’s are either pressure confined, or at most weakly self-gravitating.

In this paper, we verify our basic conjecture that most of the gravitational energy released by the infalling WFC’s is dissipated within the RHG. That process is crucial to the assumption that the RHG is in quasi-thermal equilibrium. Without this heating source, the background gas would gradually be depleted due to loss of thermal energy and precipitation into WFC’s. A reduction in the pressure of the background gas would also enable the WFC’s to expand and eventually eliminate the multi-phase structure of the gas. In order to simulate this process in detail, we adopt a 2-D numerical hydrodynamic scheme with a multi-phase medium.

The motion of clouds relative to, and their interaction with, an external medium has been studied by numerous authors. [@MI94a] examined ram-pressure stripping due to the supersonic motion of gas past clouds confined within minihalos, a very different situation from that described above for the evolution of the WFC’s. Tenorio-Tagle et al. (1986, 1987) examined the interactions of clouds hitting relatively high density galactic disks at high speeds. Again, that is a very different situation from the evolution of the WFC’s, which move slowly through a low density medium with a smooth density distribution. [@MWBL93] examined the loss of gas from a cloud due to the growth of Kelvin-Helmholtz instability for transsonic motions. As with the above studies, however, the energy transfer between the cloud and the background gas was not examined.

We proceed by briefly describing our method and the model parameters in §2. In §3, we analyze the results of our computations. Finally we discuss the implications of these results in §4.

Numerical Method and Model Parameters
=====================================

Equation of Motion
------------------

Following its collapse into the potential of the galactic dark matter halo, the RHG is shock-heated to the virial temperature of the potential, and rapidly attains a quasi-hydrostatic equilibrium. For computational simplicity, we adopt a Cartesian coordinate system in which the galactic potential $g$ is imposed in the $y$ direction. For a spherically symmetric potential, $y$ corresponds to the radial direction.
The equation of motion of the RHG becomes $${d V_{h x} \over d t} = - {1 \over \rho_h} {d P_t \over d x}$$ $${d V_{h y} \over d t} = - {1 \over \rho_h} {d P_t \over d y} -g$$ where $\rho_h$, $P_h$, $V_{h, x}$, and $V_{h,y}$ are respectively the density, pressure, and two velocity components of the RHG, $P_t = P_h + P_w$ is the total pressure, and $\rho_w$, $P_w$, $V_{w, x}$, and $V_{w,y}$ are respectively the density, pressure, and two velocity components of the WFC’s. The equation of motion for the WFC’s is similar, $${d V_{w x} \over d t} = - {1 \over \rho_w} {d P_t \over d x}$$ $${d V_{w y} \over d t} = - {1 \over \rho_w} {d P_t \over d y} -g+F_D,$$ where $F_D$ is a drag force term, which is a function of the speed and geometry of the WFC’s, and of their density contrast with the RHG.

Parameters for the Residual Halo Gas
------------------------------------

We consider four models. The parameters are listed in Table \[tab:models\], which lists, for each model, the value of $g$, the polytropic index for the cloud, $\gamma_w$, the density contrast between the cloud and the background, $D_\rho$, and the initial downward speed of the cloud, normalized to the sound speed of the background. In all cases, the RHG is initialized with the same temperature throughout, but thereafter evolves with a polytropic equation of state, in which $P_h = K_h \rho_h ^{\gamma_h}$ where $K_h$ is the adiabatic constant, and the polytropic index $\gamma_h=5/3$ for each model. We also assume that the RHG is initially in hydrostatic equilibrium, such that $$\rho_h=\rho_0{\rm e}^{-{g\over{C_h^2}}\left(y-y_0\right)}, \label{eq:den}$$ where $\rho_0$ is the density at a reference height $y_0$, and $C_h$ is the isothermal sound speed of the RHG. Because the RHG initially has the same temperature throughout, the magnitude of $K_h$ is a function of $y$, such that $$K_h = C_h^2 \rho(y)^{1-\gamma_h}.$$ The density scale height of the RHG is $$r_h = { C_h^2 \over g}.$$ In all models the velocities are normalized to $C_h=1$. The initial location of the WFC’s is set to be at $x_0=y_0=0$. The value of $g$ is uniform throughout the grid, justified by the fact that the computational domain represents a small fraction of a galaxy. For Models 1-3, we set $g=0.1$, so that $r_h = 10$, while in Model 4, $g=0.05$, giving $r_h = 20$. We set $\rho_0=1$. In these units, Equation (\[eq:den\]) reduces to $$\rho_h = {\rm e}^{-g\left(y - y_0\right)}.$$ The computational domain extends from -0.75 to 0.75 in $x$, and from -15 to 1 in $y$. At the base of the computational domain, $\rho_h /\rho_0 \sim 4.5$ for Models 1-3, and 2.1 for Model 4.

Parameters for the Warm Fragmentary Clouds
------------------------------------------

The density ratio at the launch point, $D_{\rho}\equiv \rho_w/\rho_h =10^{2}$ in Models 1-3, while $D_\rho = 25 $ in Model 4. The magnitude of $D_\rho$ would be constant throughout the simulation time if 1) the WFC’s retain their integrity, 2) $\gamma_h = \gamma_w$, and 3) there is no shock dissipation to modify $K_h$ and $K_w$. In general, however, $D_\rho$ is a function of $y$, depending on the equation of state for both the WFC’s and RHG. The value $D_{\rho}=100$ is selected to represent ionized clouds at temperatures of 10$^4$ K in a Milky Way-sized galaxy, for which the RHG is heated to the virial temperature of $\approx$10$^6$ K. The smaller value of $D_{\rho}$ for Model 4 would be appropriate for either cooler backgrounds or warmer clouds. The physical dimensions of the WFC’s are set by dynamical and thermal processes [@LM00].
Clouds below a minimum radius, $S_{min}$, are re-heated by conduction from the RHG. The maximum radius, $S_{max}$, is set by the point at which the clouds become self-gravitating. Such clouds have negative specific heats, and so are unstable to external heating. The lower limit upon the cloud size translates to [@LM00] $$S_{min,s}=1.6{\ T}^{3/2}_6{\ D}_{100}^{-1}{\ n}_{10}^{-1} {\ }\Lambda_{25}^{-1}{\ \rm pc},$$ in the limit of saturated conduction, where $T_6$ is the temperature of the RHG in units of 10$^6$ K, $D_{100}$ is the density contrast between the cloud and RHG in units of 100, n$_{10}$ is the number density of the cloud in units of 10 cm$^{-3}$, and $\Lambda_{25}$ is the cooling efficiency in units of 10$^{-25}$ ergs cm$^3$ s$^{-1}$, characteristic of low metallicity gas near 10$^4$ K [@DM72]. In the limit of unsaturated conduction, $$S_{min,u}=4{\rm T}^{7/4}_6{ \rm n}_{10}^{-1} {\ }\Lambda_{25}^{-1/2}{\ \rm pc}.$$ The maximum cloud size is set by the Bonnor-Ebert criterion for self-gravity to become important, $$S_{max}=350{\ T}_4\left({{nT}\over{10^5}}\right)^{-1/2}{\ \rm pc},$$ where $T_4$ is the temperature of the WFC’s in units of 10$^4$ K, and $nT$ is the pressure of the clouds (assumed to be in pressure equilibrium with the RHG). In the two-phase model discussed by [@LM00], the WFC’s are heated by UV emission from massive stars, in a self-regulated star formation process. For the parameters given above, the total column densities of the clouds range from $5\times10^{19}$ to $10^{22}$ cm$^{-2}$.

In Models 1-2 and 4, we adopt a polytropic equation of state for the WFC’s with a power index $\gamma_w = \gamma_h$. In Model 3 we attempt to maintain $\gamma_w\approx1$ throughout the evolution. This is done by allowing the cloud to cool, but turning cooling off below a set minimum temperature, which we select as $10^4$ K. The high cooling efficiency in this temperature regime ensures that the cloud temperature cannot significantly exceed the cutoff temperature. Cooling is not allowed to proceed in the background gas. Because we are using a single-fluid code (see below), zones where cloud and background gas mix are a concern. In order to prevent significant cooling of the background gas, cooling is turned off whenever the background gas exceeds a volume fraction of 0.5, as measured by the relative amounts of the two tracers initially placed within the cloud and background. Cooling is also not allowed whenever the temperature of a cell exceeds 0.2 times the initial temperature of the background gas.

The models with polytropic equations of state permit the use of dimensionless numbers, as used here, to scale the results to a wide range of systems. The presence of cooling in Model 3, however, introduces some dimensions into the problem. In that model, we take $T_h=10^6$ K, and $T_w=10^4$ K, appropriate, as discussed above, for an L$_\ast$ galaxy. The corresponding sound speeds are $C_h=130$ km s$^{-1}$, and $C_w=13$ km s$^{-1}$ (assuming ionized gas). The initial density of the cloud, $n_w=6$ cm$^{-3}$. A length dimension is not explicitly imposed upon the problem, because the heating and cooling are strictly local processes. Typical values can, however, be computed. For an isothermal potential, $g=V_c^2/R$, where $V_c$ is the circular speed and $R$ is the galactocentric radius. Taking $V_c=220$ km s$^{-1}$, the physical scale height of the gas is $$R_h={{C_h^2}\over g}=\left({130\over220}\right)^2R=350~R_{kpc}{\ \rm pc},$$ where $R_{kpc}$ is the galactocentric radius in kpc.
In code units, $R_h=10$ (see previous section), and so one unit of distance in the code corresponds to $l_{unit}=35$ $R_{kpc}$ pc in physical units. The cloud radius is initially set to be 0.2 in code units, or 7 $R_{kpc}$ pc in physical dimensions. The unit of time for Model 3 is given by the ratio $l_{unit}/C_h=0.26~R_{kpc}$ Myr.

In Models 1, 3, and 4, we assume that the WFC is initially at rest. The evolution of the WFC’s is, however, dynamic, with clouds continually colliding and merging, being disrupted by dynamical instabilities, being reheated by conduction from the RHG, cooling to form stars, and condensing out of the RHG. When the clouds form from the RHG, they would be expected to have typical speeds up to the sound speed of the RHG. In Model 2, therefore, we take the WFC to be initially falling in the $-y$ direction, at a velocity equal to $C_h$.

Due to their negative buoyancy, the WFC’s fall through the RHG in all models. As a WFC moves through the otherwise unperturbed background RHG, it experiences a drag force $F_D$. For WFC’s with sizes $S$ which are larger than the mean free path of particles in the RHG, $$F_D = {1\over2}C_D \pi S^2 \rho_h V_w ^2,$$ where $C_D$ is the drag coefficient, and $S$ is the cloud radius (Batchelor 2000). In flows with high Reynolds number, the turbulent wake behind the body provides an effective momentum transfer mechanism, dominating $C_D$. For example, the experimentally measured $C_D$ for a hard sphere in a nearly inviscid fluid is 0.165 (Whipple 1972). For compressible gas clouds, $C_D$ is probably closer to unity. When the drag force approximately balances gravity, the WFC’s attain a terminal speed $$V_t \approx \left( {8 D_{\rho} {S g} \over 3 C_D } \right)^{1/2}. \label{eq:vterm}$$ At the launch point, the size of the WFC is set to be $S (y_0) = 0.2$ in Models 1-3. If $C_D =1$ and the WFC preserves its integrity, $V_t = 2.3$, which would exceed the sound speed of the RHG. Once the Mach number of the WFC exceeds unity, however, shock dissipation would greatly increase the drag relative to the above estimate. Prior to the WFC achieving $V\approx1$, however, Rayleigh-Taylor instability causes it to break up into smaller pieces. For smaller fragments, the value of $V_t$ is reduced, as seen in Equation (\[eq:vterm\]). Due to shock dissipation, the sound speed in the RHG also becomes slightly larger than its initial value. Both of the above factors may prevent the falling WFC’s from attaining $V_t>C_h$. Because the WFC’s are pressure confined, however, their internal sound speed $C_w = D_\rho ^{-1/2} C_h =0.1 \ll V_t$, and so internal shock dissipation is likely to occur within the WFC’s. In order to examine the role of the relative magnitudes of the speeds, we choose, in Model 4, $S (y_0) =0.1$, $g=0.05$, and $D_\rho (y_0) = 25$ such that $V_t < C_h$ throughout the computational domain. Shock dissipation does not occur in the RHG, but it is present interior to the WFC.

In Model 3, we assume the same initial condition as Model 1, but adopt an effectively isothermal equation of state for the WFC’s by allowing the gas to cool to 10$^4$ K. The resulting energy drainage would lead to a greater dissipation rate within the WFC’s but it should not significantly modify the energy deposition rate into the RHG.

Numerical Method
----------------

The models discussed below are calculated using Cosmos, a multi-dimensional, chemo-radiation-hydrodynamics code developed at Lawrence Livermore National Laboratory (Anninos, Fragile & Murray 2003). For the current models, radiative emission is not included.
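Before describing the numerical setup further, we note that the characteristic numbers quoted above can be reproduced with the short script below: the terminal speeds of Models 1-3 and of Model 4 from Equation (\[eq:vterm\]), and the physical units adopted for Model 3. The script is purely illustrative, uses only values quoted in this section, and is not part of the Cosmos setup itself.

```python
import numpy as np

# Illustrative check of the quoted characteristic numbers (assumed values
# from the text; not part of the simulation code itself).
def v_terminal(D_rho, S, g, C_D=1.0):
    # Eq. (eq:vterm): drag (1/2) C_D pi S^2 rho_h V^2 balancing the weight
    # (4/3) pi S^3 rho_w g of a spherical cloud of density contrast D_rho.
    return np.sqrt(8.0 * D_rho * S * g / (3.0 * C_D))

print(v_terminal(100.0, 0.2, 0.1))    # Models 1-3: ~2.3, larger than C_h = 1
print(v_terminal(25.0, 0.1, 0.05))    # Model 4:    ~0.58, smaller than C_h = 1

# Physical units for Model 3 (C_h = 130 km/s, V_c = 220 km/s, R in kpc)
km_per_pc, s_per_Myr = 3.086e13, 3.156e13
R_kpc = 1.0
R_h = (130.0 / 220.0)**2 * R_kpc * 1.0e3         # scale height in pc, ~350 pc
l_unit = R_h / 10.0                              # code length unit, ~35 pc
t_unit = l_unit * km_per_pc / 130.0 / s_per_Myr  # code time unit, ~0.26 Myr
print(R_h, l_unit, t_unit)
```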
In order to maximize the resolution, the models are run in two dimensions. Because Cosmos runs on a Cartesian grid, this means that the clouds simulated are actually slices through infinite cylinders, rather than spheres. This limitation should not, however, significantly alter our conclusions, and allows us to run the simulations at significantly higher resolution than would be possible in three dimensions. The resolutions of the models are 300x3200 zones. The clouds are therefore resolved by 80 zones across their diameters. This is somewhat poorer than the resolution found necessary by Klein, McKee, & Colella (1994) for their study of shock-cloud interactions. Because the clouds in our models are not subject to extreme shocks, however, lower resolutions should be adequate, and reductions in resolution by a factor of two have not been found to have any effect upon our results.

Because we are concerned with energy transfer and dissipation, the form of the artificial viscosity used in the models might be expected to play a significant role. In order to examine that possibility, we have computed versions of Model 1 using both scalar and tensor forms of the artificial viscosity, with the coefficient varied by a factor of two, and both with and without linear artificial viscosity. The energy changes in the cloud and background were found to differ among the models by no more than 10%. We therefore conclude that the form of the artificial viscosity does not dominate our results. The lack of sensitivity is most likely due to the absence of strong shocks in the models.

The models are run with reflecting boundary conditions on all sides. This choice of boundaries serves to isolate the system, eliminating potential ambiguities in the interpretation of the energies of the two components.

Results of the Numerical Simulations
====================================

Model 1: Transsonic sedimentation of adiabatic clouds
-----------------------------------------------------

In Model 1, we adopt a polytropic equation of state for both the cloud and background. For the values of $D_\rho$, $S$, and $g$ of the model, $V_t \sim C_h$ during the descent. In Figure \[fig:mod1rho\], we show the evolution of the density of Model 1. The model is shown from time 0 to 16, at intervals of 2 (the horizontal sound crossing time in the RHG $\Delta x/C_h=1.5$). The WFC rapidly accelerates to a speed $\vert V_y\vert \approx C_h$, at which point the increasing drag causes it to achieve a terminal speed. The deceleration of the cloud as it approaches terminal speed leads to the growth of Rayleigh-Taylor instability, causing rapid breakup of the cloud. For an incompressible fluid, the Rayleigh-Taylor instability grows, in the linear regime, as ${\rm e}^{\omega t}$, where $$\omega^2={{2\pi g}\over \lambda}\left({{\rho_h-\rho_l} \over{\rho_h+\rho_l}}\right), \label{eq:rt}$$ $\lambda$ is the wavelength of the perturbation, and the subscripts $h$ and $l$ refer, respectively, to the heavy and light fluids (Chandrasekhar 1961, p. 428). For subsonic flows, the growth rate is similar for compressible fluids. Perturbations with the shortest wavelengths grow most rapidly, but saturate quickly when their amplitudes $A\approx\lambda$. As a result, wavelengths $\lambda\sim S$ lead most strongly to cloud breakup.
For such perturbations, the above relation gives $\omega\approx17$, in fair agreement with the rate of breakup observed in the cloud, though the latter is complicated by the additional growth of Kelvin-Helmholtz instability due to the flow of gas around the cloud (cf. Murray et al. 1993). Figure \[fig:mod1en\] shows the energy evolution of Model 1. Shown are the evolution of the total (internal, kinetic, and gravitational), the kinetic plus internal, kinetic, and internal energies. Values for the background gas are given by the solid curves, while those for the cloud are indicated by the dashed curves. The energies are in code units, and are plotted as changes relative to their initial values. The energies of the cloud and background are calculated as sums across the entire computational grid, with the contribution from each zone weighted by the fractional amount of the appropriate tracer present in each zone. This should minimize any confusion due to mixing of the cloud and background gas. The high order of the advection scheme also minimizes numerical diffusion (Anninos, Fragile, & Murray 2003). As can be seen in Figure \[fig:mod1en\], the total energy of the cloud decreases as it falls in the gravitational potential. The increase in $E_{Tot}$ at late times is due to the upward motion of cloud material entrained within the vortices that form behind the cloud. The kinetic energy of the cloud increases until it reaches a terminal infall speed at $t\approx10$, According to Equation (\[eq:vterm\]), the terminal speed of infalling clouds is an increasing function of their size. As a cloud breaks up into many smaller pieces, its kinetic energy decreases along with $V_t$. The internal energy of the cloud does not change significantly during its descent and breakup. The distance travelled before breakup is $\sim 30 S(y_0)$. The effective cross section of the cloud is $\sim 2 S(y_0)$, implying that the mass of the RHG that is encountered by the falling cloud is smaller than, but comparable to the mass of the cloud (Murray et al. 1993). During break up, the terminal velocity of the fragments $V_t\propto S^{1/2}$, in accordance with Equation (\[eq:vterm\]). The fragments therefore trail behind the remaining clouds. The kinetic, and especially the internal energy of the background gas are substantially increased by the end of the simulation. In this model, therefore, the majority of the energy released by the infall of the cloud is deposited into the internal energy of the background gas, primarily by the action of weak shock waves generated by the motion of the infalling cloud. This result supports the assumption of [@LM00] that the rate of energy deposition throughout the galaxy is directly proportional to the total infall rate of WFC’s throughout the system. Model 2: Supersonic impact of WFC’s ----------------------------------- In Model 1, the WFC attains a terminal speed which is a significant fraction of both $C_h$, and the value of $V_t$ predicted from Equation \[eq:vterm\]. It might also be expected that WFC’s which condense from the RHG would have initial speeds comparable to the sound speed of the RHG. In order to examine the possible effects of nonzero initial speeds upon the evolution, we consider in Model 2 an initial condition in which the WFC is already falling at the sound speed at the start of the numerical calculation. The density of Model 2 is shown in Figure \[fig:mod2rho\]. 
Due to the more rapid motion of the cloud, as compared to that of Model 1, the simulation is only carried out to $t=12$. The initial motion of the cloud can be seen to drive a weak shock ahead of it. Behind the shock, the leading edge of the cloud continues to move downwards at almost $C_h$, slowing down gradually until the very end of the simulation, when it rapidly decelerates as it breaks up, due to the combined action of Rayleigh-Taylor and Kelvin-Helmholtz instabilities. These results suggest that the infalling WFC’s quickly settle to $V_t$ irrespective of the initial conditions, as we have assumed previously (Lin & Murray 2000). The breakup of the cloud proceeds at nearly the same vertical height as in Model 1. The similarity arises because the models have the same gravitational acceleration and density contrast. Prior to breakup, the downward motion of the cloud is more rapid than the value of $V_t$ found for Model 1. The differences are due to the modification in the drag caused by the leading shock in Model 2. The energy evolution is shown in Figure \[fig:mod2en\]. The initial kinetic energy of the cloud, $E_{K,0}=6.3$, is almost entirely dissipated by $t=10$. Over the same time interval, the cloud is also able to penetrate to a greater depth than the cloud in Model 1, increasing the release of gravitational energy relative to that model. Together, these effects lead to a gain of internal energy for the background gas approximately a factor of two larger than seen in Model 1. However, the depth at which the cloud breaks up is similar in the two models. As in Model 1, the break up occurs when the cloud encounters a column of RHG that is comparable in mass to that of the cloud. Thereafter, the fragments’ rate of sedimentation is significantly reduced in accordance with Equation (\[eq:vterm\]). The asymptotic rate of the RHG’s internal energy increase in Model 2 is comparable to that in Model 1. Model 3: Efficient energy loss within the cool clouds ----------------------------------------------------- In Model 3, we approximate an isothermal equation of state for the cloud, as described above, in order to represent the limit in which cooling is highly efficient. The evolution of the density is shown in Figure \[fig:mod3rho\], while the energies are shown in Figure \[fig:mod3en\]. The isothermal behavior of the cloud leads to nonconservation of the total energy of the cloud plus background, and so we do not plot that here, focusing instead upon the kinetic and internal energies. As expected, cooling within mixed cells does lead to some cooling of the background gas, as well as some overcooling within the cloud, both of which can be seen in Figure \[fig:mod3en\]. The lack of heating within the cloud leads to additional compression relative to the previous models, reducing its breakup. Overall, however, the transfer of kinetic energy of the cloud to the internal energy of the background gas is very similar to the adiabatic models described above, indicating that efficient cooling within the clouds does not have a strong effect upon the energy deposition rate. Fragmentation of the cloud also occurs in Model 3. The efficient cooling enhances the density contrast between the cloud and the RHG, such that the cloud retains a smaller volume and cross section. Consequently, the cloud encounters a smaller gas mass along the path of its descent, and fragmentation occurs at a greater depth. On small wavelengths, the infalling cloud appears to be better preserved than in the previous models.
But on the scale of the cloud size, the cloud again fragments after encountering a column similar to its own mass, as above. Model 4: Subsonic Sedimentation of WFC’s ---------------------------------------- For Model 4, $D_\rho=25$, and $g=0.05$, such that $V_t$ is predicted by Equation \[eq:vterm\] to be subsonic. The evolution of the density of Model 4 is shown in Figure \[fig:mod4rho\]. The cloud rapidly reaches a terminal speed, $V_t\approx0.3$, smaller than predicted if $C_D=1$. As in Model 1, however, expansion of the cloud enhances the drag coefficient to $C_D>1$. The cloud therefore never achieves the terminal speed predicted for a hard sphere, even in the absence of strong supersonic dissipation. From Equation \[eq:rt\], $\omega\approx6$, and the cloud breaks up even more rapidly than the more dense clouds considered in Models 1-3, due to its reduced density contrast relative to those models. The downward displacement of the cloud in Model 4 is reduced by a factor of a few relative to that of Model 1. As a result, the gravitational energy released by the settling of the cloud, and dissipated into the background gas, is reduced by an order of magnitude relative to Model 1, as can be seen in Figure \[fig:mod4en\]. In the absence of trans/supersonic motion of the cloud through the background, shock dissipation cannot be a strong mechanism for the dissipation of energy due to motion of the cloud. The primary mechanism involves the wake of the cloud. In the simulations, the vortical motions behind the cloud dissipate energy on small scales, due to artificial and numerical viscosity. In three dimensions, the high Reynolds numbers would lead to the formation of turbulent wakes, which would lead to the dissipation of energy by viscous stress on sufficiently small length scales, leading to the same outcome as observed in Model 4. The observed outcome of the energy deposition is not, therefore, sensitive to the exact physical process responsible for it. Summary and Discussion ====================== In this paper, we examine the interactions of a two-phase medium in a passive gravitational potential. This situation represents the physical environment that occurs naturally in the context of galaxy formation, cooling flows, and during the transition of gas clouds from quasi-hydrostatic contraction to dynamical collapse. It is a natural consequence of thermal instability, which generally leads to the emergence of a population of relatively cool, dense clouds (warm, fragmentary clouds, or WFC’s) that are pressure confined by an ambient hot, tenuous gas (residual hot gas, or RHG). In such a state, the hot gas establishes a quasistatic equilibrium with the background gravitational potential, and the cold clouds settle into it under the action of their negative buoyancy. In the present investigation, we neglect the self-gravity of the gas, and consider the potential to be due to a time-invariant background distribution of dark matter or stars. Through a series of numerical simulations, we demonstrate the following evolutionary outcomes. 1\) During their descent, the WFC’s break up on the same timescale as is required for them to attain a terminal speed. 2\) Most of the energy released from the sedimentation of the WFC’s into the background gravitational potential is deposited into the RHG. These results provide justifications for the assumptions we made in our earlier model for the evolution of multi-phase medium during the epoch of galaxy formation (Lin & Murray 2000). 
They also resolve an outstanding conceptual issue with regard to the energy source needed for the persistence of the multi-phase medium. In particular, the RHG can achieve a thermal equilibrium, in which its heat loss via bremsstrahlung emission and conduction into the WFC’s is balanced by the release of energy from the infalling WFC’s. This equilibrium allows a multi-phase structure to be maintained in the system. The equilibrium is very dynamic. The WFC’s are continually formed by thermal instability within the hot gas. As they move within the hot gas, they break up, and are eventually re-heated by conduction from the hot gas. They may also cool and form stars, if local sources of UV radiation are lost. Additionally, the fragmentation of the WFC’s increases their surface area to volume ratio. This reduces their timescale for collisions and mergers, leading to the formation of larger WFC’s. A natural extension of the present investigation, therefore, is to consider the collisional equilibrium for a population of WFC’s. We shall investigate this in future work. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48. This work is partially supported by NASA through an astrophysical theory grant NAG5-12151. Anninos, P., Fragile, P. C. & Murray, S. D. 2003, , 147, 177 Batchelor, G. K. 2000, An Introduction to Fluid Dynamics, (Cambridge: Cambridge University Press), 331 Blitz, L., Spergel, D. N., Teuben, P. J., Hartmann, D., & Burton, W. B. 1999, , 514, 818 Burkert, A., & Lin, D. N. C. 2000, , 537, 270 Chandrasekhar, S. 1961, Hydrodynamic and Hydromagnetic Stability, (New York: Dover) Couchman, H. M. P., & Rees, M. J. 1986, MNRAS, 221, 53 Dalgarno, A., & McCray, R. A. 1972, , 10, 375 Dong, S., Lin, D. N. C., & Murray, S. D. 2003, , 596, 930 Field, G. B. 1965, , 142, 531 Haiman, Z., Rees, M. J., & Loeb, A. 1997, , 476, 458 Ikeuchi, S. 1986, , 118, 509 Klein, R. I., McKee, C. F., & Colella, P. 1994, , 420, 213 Lin, D. N. C., & Murray, S. D. 2000, , 540, 170 Loewenstein, M. & Mathews, W. G. 1987, , 319, 614 Manning, C. V. 1999, , 518, 226 Mo, H. J., Miralda-Escude, J., & Rees, M. J. 1993, MNRAS, 264, 75 Murakami, I. & Ikeuchi, S. 1994, , 420, 68 Murakami, I. & Ikeuchi, S. 1994, , 421, L79 Murray, S. D., & Lin, D. N. C. 1990, 363, 50 Murray, S. D., White, S. D. M., Blondin, J. M., & Lin, D. N. C. 1993, , 407, 588 Rees, M. J. 1986, MNRAS, 218, 25P Sarazin, C. L. 1986, RvMP, 58, 1 Sarazin, C. L. & White, R. E. 1987, , 320, 32 Sarazin, C. L. & White, R. E. 1988, , 331, 102 Tenorio-Tagle, G., Bodenheimer, P., Rozyczka, M., & Franco, J. 1986, A&A, 170, 107 Tenorio-Tagle, G., Bodenheimer, P., Rozyczka, M., & Franco, J. 1987, A&A, 179, 219 Whipple, F. L. 1972, in From Plasma to Planet, ed. A. Elvius, (London: Wiley) [crrrc]{} 1 & 0.10 & $5\over3$ & 100 & [-]{}0\ 2 & 0.10 & $5\over3$ & 100 & -1\ 3 & 0.10 & 1 & 100 & [-]{}0\ 4 & 0.05 & $5\over3$ & 25 & [-]{}0\ ![Density evolution of Model 1. The model is shown from t = 0 to 16, in intervals of 2, where the horizontal sound crossing time is 1.5.[]{data-label="fig:mod1rho"}](figure1.ps){width="6.0in"} ![Energy evolution of Model 1. Shown are the time evolution of the (a) total, (b) kinetic plus internal, (c) internal, and (d) kinetic energies. 
Data for the background gas are shown as the solid curves, while those for the dense cloud are shown as dashed curves.[]{data-label="fig:mod1en"}](figure2.ps){width="6.0in"} ![Density evolution of Model 2, displayed as in Figure \[fig:mod1rho\]. Due to the rapid motion of the cloud, the simulation is only carried out to $t=12$.[]{data-label="fig:mod2rho"}](figure3.ps){width="6.0in"} ![Energy evolution of Model 2, displayed as in Figure \[fig:mod1en\].[]{data-label="fig:mod2en"}](figure4.ps){width="6.0in"} ![Density evolution of Model 3, displayed as in Figure \[fig:mod1rho\].[]{data-label="fig:mod3rho"}](figure5.ps){width="6.0in"} ![Energy evolution of Model 3, displayed as in Figure \[fig:mod1en\].[]{data-label="fig:mod3en"}](figure6.ps){width="6.0in"} ![Density evolution of Model 4, displayed as in Figure \[fig:mod1rho\].[]{data-label="fig:mod4rho"}](figure7.ps){width="6.0in"} ![Energy evolution of Model 4, displayed as in Figure \[fig:mod1en\].[]{data-label="fig:mod4en"}](figure8.ps){width="6.0in"}
--- abstract: 'One of the main scientific objectives of the ongoing [[*Fermi*]{}]{} mission is unveiling the nature of the unidentified $\gamma$-ray sources (UGSs). Despite the large improvements of [[*Fermi*]{}]{} in the localization of $\gamma$-ray sources with respect to the past $\gamma$-ray missions, about one third of the [[*Fermi*]{}]{}-detected objects are still not associated with low-energy counterparts. Recently, using the Wide-field Infrared Survey Explorer ([[*WISE*]{}]{}) survey, we discovered that blazars, the rarest class of Active Galactic Nuclei and the largest population of $\gamma$-ray sources, can be recognized and separated from other extragalactic sources on the basis of their infrared (IR) colors. Based on this result, we designed an association method for the $\gamma$-ray sources to recognize if there is a blazar candidate within the positional uncertainty region of a generic $\gamma$-ray source. With this new IR diagnostic tool, we searched for $\gamma$-ray blazar candidates associated with the UGS sample of the second [[*Fermi*]{}]{} $\gamma$-ray catalog (2FGL). We found that our method associates at least one $\gamma$-ray blazar candidate as a counterpart to each of 156 out of 313 UGSs analyzed. These new low-energy candidates have the same IR properties as the blazars associated with $\gamma$-ray sources in the 2FGL catalog.' author: - 'F. Massaro, R. D’Abrusco, G. Tosti, M. Ajello, A. Paggi, D. Gasparrini.' title: 'Unidentified gamma-ray sources: hunting $\gamma$-ray blazars' --- Introduction {#sec:intro} ============ More than half of the $\gamma$-ray sources detected by the [*Compton*]{} Gamma-Ray Observatory (CGRO), and present in the third EGRET (3EG) catalog, were not associated with known counterparts seen at low energies [@hartman99]. Whatever the nature of the unidentified $\gamma$-ray sources (UGSs), these objects could provide a significant contribution to the isotropic gamma-ray background (IGRB) [e.g., @abdo10a]. Solving the puzzle of the origin of the UGSs, together with a better knowledge of other IGRB contributions estimated from known sources, is also crucial to constrain exotic high-energy physics phenomena, such as dark matter signatures, or new classes of sources. With the advent of the [[*Fermi*]{}]{} mission the localization of $\gamma$-ray sources has significantly improved with respect to the past $\gamma$-ray missions, thus simplifying the task of finding statistically probable counterparts at lower energies. New association methods have also been developed and applied, so that the number of UGSs has significantly decreased with respect to the 3EG catalog [@hartman99]; however, according to the second [[*Fermi*]{}]{} $\gamma$-ray catalog (2FGL), about one third of the detected gamma-ray sources in the energy range above 100 MeV are still unassociated [@abdo11]. It is worth noting that the most commonly detected sources in the $\gamma$-ray sky, since the epoch of CGRO, are blazars, one of the most enigmatic classes of Active Galactic Nuclei (AGNs) [e.g., @hartman99]. Within the 2FGL, there are 576 UGSs out of a total number of 1873 sources detected, while among the 1297 associated sources, $\sim$ 1000 have been associated with AGNs [@abdo11; @ackermann11a]. Blazar emission extends over the whole electromagnetic spectrum and is generally interpreted as non-thermal radiation arising from particles accelerated in relativistic jets closely aligned to the line of sight [@blandford78].
They come in two flavors: the BL Lac objects, whose optical spectra are featureless or show only absorption lines of galactic origin and weak, narrow emission lines, and the Flat Spectrum Radio Quasars, whose optical spectra show broad emission lines. In the following, we indicate the former as BZBs and the latter as BZQs, according to the ROMA-BZCAT[^1] nomenclature [@massaro09; @massaro10; @massaro11a]. The first step to improve our knowledge on the origin of the UGSs and of their associations with low-energy counterparts is to recognize those that could have a blazar within their $\gamma$-ray positional uncertainty regions. Recently, we developed a procedure to identify blazars using their infrared (IR) colors within the preliminary data release of the Wide-field Infrared Survey Explorer ([[*WISE*]{}]{}) survey [@wright10] [^2]. In particular, we discovered that the IR color space distribution of the extragalactic sources dominated by non-thermal emission, such as blazars, can be used to distinguish such sources from other classes of galaxies and/or AGNs and/or galactic sources [@massaro11b hereinafter Paper I]. We also found that $\gamma$-ray emitting blazars delineate a narrow, distinct region of the IR color-color plots, designated the [[*WISE*]{}]{} Gamma-ray blazar Strip ([[*WGS*]{}]{}) [@dabrusco12 hereinafter Paper II]. There is a peculiar correspondence between the IR and $\gamma$-ray spectral properties of the blazars detected in the 2FGL (Paper II). Then, on the basis of our previous investigation of these IR-$\gamma$-ray properties of blazars, we built a parametrization of the [[*WGS*]{}]{} to evaluate how many AGNs of Uncertain type (AGUs) have a counterpart associated with a $\gamma$-ray blazar candidate in the 2FGL [@massaro12a hereinafter Paper III]. In this paper, we present a new association method based on the IR colors of the $\gamma$-ray emitting blazars and the [[*WGS*]{}]{} parametrization. Then we apply this new association procedure to search for $\gamma$-ray blazar candidates within the $\gamma$-ray positional error regions of the UGSs. One of the main advantages of our method is that it reduces the number of potential counterparts for the UGSs and provides their positions with arcsec resolution, thus restricting the search regions for future follow-up observations necessary to confirm their blazar nature. Unfortunately, only a restricted number of UGSs falls within the portion of the sky currently covered by the IR observations of the [[*WISE*]{}]{} Preliminary Data Release, corresponding to $\sim$ 57% of the whole sky. Then, when the [[*WISE*]{}]{} survey is completely released in March 2012[^3], it will be possible to apply the method to the whole sky, even in regions not covered at radio, optical and X-ray frequencies, where the other methods for establishing counterpart associations for the 2FGL cannot be used. This paper is organized as follows: in Section \[sec:sample\] we describe the samples used in our investigation; in Section \[sec:method\] we illustrate the new association method; then, in Section \[sec:ugs\] we apply the new association technique to the UGSs and describe the subset of sources that have been associated with $\gamma$-ray blazar candidates. In Section \[sec:comparison\], we also compare our results with those found adopting different statistical approaches for a subsample of UGSs. Finally, conclusions are presented in Section \[sec:summary\].
The sample selection {#sec:sample} ==================== To build our association procedure we considered a sample of blazars selected from the combination of the ROMA-BZCAT [@massaro09; @massaro10] and the 2FGL [@abdo11], as described and used in Paper II and used in Paper III to parametrize the [[*WGS*]{}]{}, denoted the 2FB sample. It contains 284 $\gamma$-ray blazars (135 BZBs and 149 BZQs) that have optical and radio counterparts as reported in the ROMA-BZCAT, and also having a [[*WISE*]{}]{} counterpart within 2.4 $^{\prime\prime}$ radius (see Paper I and III). The blazars in the 2FB sample are detected by [[*WISE*]{}]{} with a signal to noise ratio higher than 7 in at least one band and do not have any upper limits in all the [[*WISE*]{}]{} bands. We excluded from our analysis all the blazars with a [[*Fermi*]{}]{} analysis flag, according to the 2FGL and the 2LAC [@abdo11; @ackermann11a]. The blazars of uncertain type (BZUs) have been excluded from our analysis, while the BL Lac candidates have been considered as BZBs. More details on the 2FB sample and the source selections are given in Papers II and III. Then, we applied our association procedure to the sample of the UGS defined as follows. The number of UGSs in the 2FGL is 576, but only 410 of these $\gamma$-ray sources lie in the region of the sky available in the [[*WISE*]{}]{} Preliminary Data Release. These sources can be analyzed according to our method based on the IR [[*WISE*]{}]{} colors. We adopted a more conservative selection restricting our sample to 313 UGSs out of 410, excluding sources with a [[*Fermi*]{}]{} analysis flag, since these sources might not be real and/or could be affected by analysis artifacts [see e.g. @abdo11 for more details]. The [[*WGS*]{}]{} association method {#sec:method} ==================================== In Paper III, working on the AGUs, we built the [[*WGS*]{}]{} parametrization to verify if the low-energy counterparts of the AGUs, associated in 2FGL, is consistent with the [[*WGS*]{}]{}, so being a $\gamma$-ray blazar candidate. With respect to the previous analysis, the following proposed association procedure aims at providing new $\gamma$-ray blazar candidates, possible counterparts of the UGSs, that lie within their $\gamma$-ray positional uncertainty regions, on the basis of our previous results on the IR-$\gamma$-ray blazar properties. In this Section, we report the basic details of our [[*WGS*]{}]{} parametrization together with the definition of different classes of $\gamma$-ray blazar candidates. Then we describe our new association procedure. The [[*WGS*]{}]{} parametrization {#sec:parameter} --------------------------------- In Paper II, we found that $\gamma$-ray emitting blazars (i.e., those in the 2FB sample) cover a narrow region in the 3D color space built with the [[*WISE*]{}]{} magnitudes delineating the so-called [[*WISE*]{}]{} Gamma-ray blazar Strip ([[*WGS*]{}]{}). In Paper III, using the 2FB sample, we presented the parametrization of the [[*WGS*]{}]{}  based on the [*strip parameter*]{} $s$. This parameter, ranging between 0 and 1, provides a measure of the distance between the [[*WGS*]{}]{} and the location of a [[*WISE*]{}]{} source in the three dimensional IR color parameter space. For example, sources with high values of $s$ (e.g., $\geq$ 0.50) are consistent with the [[*WGS*]{}]{}. 
We also distinguished between [[*WISE*]{}]{} sources that lie in the subregion of the [[*WGS*]{}]{} occupied by the BZBs and BZQs using the $s_b$ and $s_q$ parameters separately (Paper III). The IR color space has been built using the archival data available in the 2011 [[*WISE*]{}]{} Preliminary Data Release, in four different bands centered at 3.4, 4.6, 12, and 22 $\mu$m with an angular resolution of 6.1, 6.4, 6.5 & 12.0$^{\prime\prime}$, respectively, and achieving 5$\sigma$ point source sensitivities of 0.08, 0.11, 1 and 6 mJy. In addition, the absolute (radial) differences between [[*WISE*]{}]{} source-peaks and “true” astrometric positions anywhere on the sky are no larger than $\sim$ 0.50, 0.26, 0.26, and 1.4$^{\prime\prime}$ in the four [[*WISE*]{}]{} bands, respectively [@cutri11][^4]. $\gamma$-ray blazar candidate definition {#sec:association} ---------------------------------------- Based on the $s_b$ and $s_q$ distributions of all [[*WISE*]{}]{} sources in different random regions of the sky, at both high and low Galactic latitudes (Paper III), the critical thresholds of the $s$ parameters that define the classes listed below have been chosen, somewhat arbitrarily, on the basis of the following considerations: - [class A: [[*WISE*]{}]{} sources with 0.24 $<s_b<$ 1.00 and 0.38 $<s_q<$ 1.00;]{} - [class B: [[*WISE*]{}]{} sources with 0.24 $<s_b<$ 1.00 or 0.38 $<s_q<$ 1.00;]{} - [class C: [[*WISE*]{}]{} sources with 0.10 $<s_b<$ 0.24 and 0.14 $<s_q<$ 0.38.]{} All the [[*WISE*]{}]{} sources with $s_b<$0.10 or $s_q<$0.14 are considered [*outliers*]{} of the [[*WGS*]{}]{} and, for this reason, discarded. All the above thresholds are then used to select the [[*WISE*]{}]{} sources that are associated with the UGSs and that can be considered potential $\gamma$-ray blazar candidates. The same choice of thresholds has been adopted for the analysis of the $\gamma$-ray blazar content within the AGUs (Paper III). From the distributions of the $s_b$ and $s_q$ parameters for the generic IR [[*WISE*]{}]{} sources, we note that 99.9% of them have $s_b<$0.24 and $s_q<$0.38. In contrast, for the BZBs in the 2FB sample only 6 sources out of 135 have $s_b<$ 0.24, and in the case of the BZQs only 33 sources out of 149 show $s_q$ values lower than 0.38. We also note that 99.0% of the generic IR [[*WISE*]{}]{} sources have $s_b<$0.10 and only 2 BZBs are below this value, while 97.2% of the generic IR [[*WISE*]{}]{} sources together with only 5 BZQs out of 149 have $s_q<$0.14. The [[*WISE*]{}]{} objects of class A are the most probable blazar counterparts of the unidentified $\gamma$-ray sources, because their WISE colors are more consistent with the [[*WGS*]{}]{} in both the BZB and BZQ subregions than the colors of sources of class B or C. Based on the distributions of the $s_b$ and $s_q$ parameters for [[*WISE*]{}]{} sources in random regions of the sky, the sources of class A are, as expected, rarer than the sources belonging to the other two classes (see Section \[sec:ugs\] for more details). The association procedure {#sec:procedure} ------------------------- ![The position of the region of comparison (ROC) for a generic [[*Fermi*]{}]{} source, with respect to the searching region (SR) centered on the position reported in the 2FGL catalog.
The radius of both regions is $R=\theta_{999}$ and they are separated by 2.5$\sqrt{2}$ deg of distance.[]{data-label="fig:roi"}](./roi.pdf){height="6.8cm" width="9.5cm"} For each unidentified $\gamma$-ray source we defined the [*searching region*]{} (SR) corresponding to a circular region of radius $R$=$\theta_{999}$, centered on the position given in the 2FGL, where $\theta_{999}$ is the major axis of the elliptical source location region corresponding to the 99.9% level of confidence. In addition, we also considered a [*region of comparison*]{} (ROC) defined as a circular region of the same radius $R$, but lying at 2.5$\sqrt{2}$ deg angular distance from the 2FGL position. A schematic view of the locations of the SR and the ROC is shown in Figure \[fig:roi\]. Successively, for every unassociated gamma-ray source in the 2FGL catalog, we ranked all the [[*WISE*]{}]{} sources within its SR on the basis of the classification described above and we selected as $\gamma$-ray blazar candidates the positionally closest sources with the highest class. In our analysis we considered only sources of the [[*WISE*]{}]{} preliminary catalog detected in all the four [[*WISE*]{}]{} bands, without any upper limit. The ROCs are used to assess the association confidence that a [[*WISE*]{}]{} source in a random region in the sky, where no $\gamma$-ray source is located, has IR colors compatible with the WGS. To provide an estimate of the association confidence, we considered the distribution of the strip parameters $s_b$ and $s_q$ for all the [[*WISE*]{}]{} sources within each ROC associated to an UGS. For these [[*WISE*]{}]{} sources we estimated the confidence $\pi$ that a generic [[*WISE*]{}]{} source belongs to the same class as the $\gamma$-ray blazar candidate selected within the SR. Thus the $\pi$ value will be expressed as the ratio between the number of [[*WISE*]{}]{} sources of a particular class and the total number of [[*WISE*]{}]{} sources that lie in the ROC. Testing the association method with blazars {#sec:test} ------------------------------------------- We performed a test to evaluate the completeness of our association method searching for the $\gamma$-ray blazar candidates that are potential counterparts of the 2FB sample, and verifying whether our procedure correctly finds the same associations as in the 2FB sample. Assuming that the 284 blazars in the 2FB sample have been associated to the real low-energy counterparts, we run our association procedure considering the IR colors for all the [[*WISE*]{}]{} sources within the SRs for all these sources. We found that for the population of BZBs, consisting of 135 BL Lacs, our association procedure is able to recognize 123 sources as the 2FGL, 62 of class A, and 61 of class B. Within the remaining 12 BZBs, 3 objects are associated to WISE sources of higher class than the original 2FGL associated sources, while for 9 sources we only found outliers of the [[*WGS*]{}]{} within their SRs. For the BZQs, our method finds the same associations as in the 2FGL catalog for 124 of the sources, with 85 sources classified as class A, 32 classified as class B and 7 as class C. For the remaining 25 sources, we found 11 outliers and 14 $\gamma$-ray sources associated to a [[*WISE*]{}]{} source with higher classes. Our procedure re-associates 247 out of 284 $\gamma$-ray blazars of the 2FB sample in agreement with the 2FGL analysis, with a completeness of 87.0% (91.0% for the BZBs and 83.0% for the BZQs). 
We found that 7.1% are outliers of the [[*WGS*]{}]{}, but this number is to be expected because the [[*WGS*]{}]{} parametrization was built to require at least 90% of the 2FB sources inside each 2-dimensional [[*WGS*]{}]{} projection (see Paper III for more details). It is interesting to note that for 17 out of 284 $\gamma$-ray sources in the 2FGL our method finds a “better” $\gamma$-ray blazar candidate within the SR. These associations need to be verified with follow-up observations, for example in the X-rays, and a deeper analysis to check their reliability relative to the 2FGL association method will be performed in a forthcoming paper [@massaro12b]. Results {#sec:ugs} ======= The application of our association procedure to the 313 UGSs selected from the 2FGL (see Section \[sec:sample\] for more details) led to the association of 156 UGSs with a low-energy candidate $\gamma$-ray blazar counterpart within their SRs. According to our criteria (see Section \[sec:association\]), these 156 new associations consist of 44 sources of class A, 74 of class B and 38 of class C. Thus our procedure finds associations with likely $\gamma$-ray blazar candidates for 49.8% of the UGSs analyzed. We also list all the $\gamma$-ray blazar candidates of lower class for each UGS, if more than one is present within the SR. Among these 156 new associations, 86 sources (12 of class A, 43 of class B and 31 of class C) have only a single $\gamma$-ray blazar candidate within the SR. In Figure \[fig:strip\_pln1\] we show the [[*WISE*]{}]{} colors of the 156 $\gamma$-ray blazar candidates in comparison with those of the blazars in the 2FB sample for the \[3.4\]-\[4.6\]-\[12\] $\mu$m 2D projection of the [[*WGS*]{}]{}. ![The \[3.4\]-\[4.6\]-\[12\] $\mu$m 2D projection of the [[*WGS*]{}]{} is shown. Red dashed lines show the boundaries of the [[*WGS*]{}]{} used in our analysis (see Paper III for more details). The orange background filled circles are the blazars associated with the 2FGL constituting the 2FB sample while the black filled circles indicate the 156 $\gamma$-ray blazar candidates that have been associated by our procedure.[]{data-label="fig:strip_pln1"}](./strip_pln1.pdf){height="6.0cm" width="9.7cm"} By restricting our sample of UGSs only to those at high Galactic latitudes, i.e. $|b|>$15$^\circ$, we found a $\gamma$-ray blazar candidate for 72 UGSs (16 of class A, 29 of class B and 27 of class C); for 34 out of these 72, the low-energy counterpart associated with our method is unique. ![The distribution of the Galactic latitude for all the UGSs analyzed in comparison with that for the 156 associated by our procedure.[]{data-label="fig:glat_distrib"}](./glat_distrib.pdf){height="6.0cm" width="9.7cm"} In Figure \[fig:glat\_distrib\], we show the distribution of the Galactic latitude (i.e., sin $b$) for all the UGSs analyzed in comparison with those 156 associated by our method. At high Galactic latitude, the method seems to be less efficient given the ratio between the number of UGSs analyzed and those associated. This could be due to the non-uniform exposure of the archival [[*WISE*]{}]{} observations in the [[*WISE*]{}]{} Preliminary Data Release[^5], and will be re-analyzed once the whole [[*WISE*]{}]{} archive is available. In addition, we note that our association method could be more efficient at low Galactic latitudes where the blazar catalogs, such as the ROMA-BZCAT, are less complete [@massaro09].
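To make the selection logic of Section \[sec:method\] concrete, the following minimal sketch assigns the [[*WGS*]{}]{} class from the strip parameters and estimates the association confidence $\pi$ from the ROC counts. This is an illustrative sketch, not the actual pipeline: the $s_b$ and $s_q$ values are assumed to have been computed already (their definition is given in Paper III), and the input lists are assumed to contain only sources detected in all four [[*WISE*]{}]{} bands.

```python
def wgs_class(s_b, s_q):
    """Assign the WGS class from the strip parameters, using the
    thresholds of Section 3.2 (outliers are checked first and discarded)."""
    if s_b < 0.10 or s_q < 0.14:
        return "outlier"
    if s_b > 0.24 and s_q > 0.38:
        return "A"
    if s_b > 0.24 or s_q > 0.38:
        return "B"
    return "C"   # 0.10 < s_b < 0.24 and 0.14 < s_q < 0.38

def associate(sr_sources, roc_sources):
    """Pick the gamma-ray blazar candidate inside the searching region (SR)
    and estimate the association confidence pi from the region of comparison
    (ROC).  Each source is a tuple (separation_arcsec, s_b, s_q)."""
    ranked = [(wgs_class(sb, sq), sep, sb, sq) for sep, sb, sq in sr_sources]
    candidates = [r for r in ranked if r[0] in ("A", "B", "C")]
    if not candidates:
        return None
    # Highest class first ("A" < "B" < "C" alphabetically), then closest.
    best = sorted(candidates, key=lambda r: (r[0], r[1]))[0]
    same_class = sum(1 for _, sb, sq in roc_sources
                     if wgs_class(sb, sq) == best[0])
    pi = (same_class, len(roc_sources))   # quoted as e.g. "2/830" in Table 1
    return best, pi
```

In this sketch $\pi$ is returned as a pair (number of ROC sources of the candidate's class, total number of ROC sources), matching the way the confidence is quoted in Table \[tab:example\].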
We also remark that within the 313 regions of comparison chosen for the UGSs there are 55195 [[*WISE*]{}]{} sources, but only 49 of class A, 213 of class B and 129 of class C, all of them detected in all four [[*WISE*]{}]{} bands and with a signal-to-noise ratio higher than seven in at least one band, as for the blazars in the 2FB sample. The distributions of the $s_b$ and $s_q$ parameters for all the 55195 [[*WISE*]{}]{} sources within the 313 ROCs are shown in Figure \[fig:histogram\]. A blind search of all the possible $\gamma$-ray blazar candidates in the [[*WISE*]{}]{} archive on the basis of the [[*WGS*]{}]{} properties will be performed once it is completely available [@massaro12b]. However, the $s_b$ and $s_q$ distributions reported in Figure \[fig:histogram\] strongly suggest that the density of [[*WISE*]{}]{} blazar candidates is low over the sky. ![The distribution of the $s_b$ (black) and $s_q$ (red) parameters for all the 55195 [[*WISE*]{}]{} sources within all the ROCs defined for the 313 UGSs analyzed. The vertical lines correspond to the thresholds for the $s_b$ and $s_q$ parameters used to determine the blazar classes (see Section \[sec:association\]).[]{data-label="fig:histogram"}](./histogram.pdf){height="6.0cm" width="9.7cm"} In Table \[tab:example\] we show three examples of [[*WISE*]{}]{} sources that have been associated by our procedure with three UGSs. We report both the $s_b$ and $s_q$ values, the [[*WGS*]{}]{} class and the association confidence $\pi$. In this example, the source 2FGL J0038.8+6259 is associated with one class A [[*WISE*]{}]{} source, J003818.70+630605.2, which has been selected as the single $\gamma$-ray blazar candidate out of the 791 [[*WISE*]{}]{} sources within its SR. The corresponding association confidence $\pi$, expressed in terms of the number of sources with higher $s_b$ or $s_q$ values than J003818.70+630605.2 within the region of comparison, and estimated considering 830 [[*WISE*]{}]{} sources, is 2/830. Similarly, the source 2FGL J0616.6+2425 has been associated with the [[*WISE*]{}]{} source J061609.79+241911.0, which belongs to class B, with a low association confidence estimated from the 5465 [[*WISE*]{}]{} sources in its region of comparison. The source 2FGL J0312.8+2013 has a class C [[*WISE*]{}]{} source associated following our procedure, with a low confidence of finding a similar source in a region of comparison containing 512 [[*WISE*]{}]{} sources. Within the 313 UGSs analyzed there are 14 sources that have a variability index [@abdo11] higher than the value of 41.6, corresponding to a 99% confidence that the source is variable. It is worth noting that 13 out of these 14 variable UGSs have been successfully associated here with a $\gamma$-ray blazar candidate, strongly supporting their blazar nature. [|lrlcccc|]{} 2FGL & Sources & [[*WISE*]{}]{}  & $s_b$ & $s_q$ & class & $\pi$\ name & in SR & name & & &\ J0038.8+6259 & 791 & J003818.70+630605.2 & 0.89 & 0.99 & A & 2/830\ J0616.6+2425 & 6021 & J061623.95+241809.2 & — & 0.57 & B & 1/5465\ J0312.8+2013 & 453 & J031223.00+200749.5 & 0.19 & 0.15 & C & 1/512\ \[tab:example\] The entire list of the UGSs analyzed can be found in Table 2.
For each UGS, we report all the $\gamma$-ray blazar candidates with their IR colors (i.e., $c_{12}$ = \[3.4\]-\[4.6\] $\mu$m, $c_{23}$ = \[4.6\]-\[12\] $\mu$m and $c_{34}$ = \[12\]-\[22\] $\mu$m, together with their errors, $\sigma_{12}$, $\sigma_{23}$, $\sigma_{34}$, respectively), the distances in arc seconds between the $\gamma$-ray position and the selected [[*WISE*]{}]{} source, the $s_b$ and $s_q$ values, the class and the association confidence $\pi$ that there is a [[*WISE*]{}]{} source of the same class within the ROC (see Section \[sec:association\]). In addition, we found that there are 157 unidentified $\gamma$-ray sources that do not have clear $\gamma$-ray blazar counterpart within their SRs and are classified as outliers of the [[*WGS*]{}]{}. The lack of association for these sources could be due to a lower accuracy of the $\gamma$-ray position that might occur close to the Galactic plane or to the systematic uncertainties of the diffuse emission model used in the 2FGL analysis. The whole UGS sample will be reconsidered for associations with $\gamma$-ray blazar candidates when the all-sky [[*WISE*]{}]{} survey will be available. Assuming that all the 2FB blazar associations are correct, on the basis of our test (see Section \[sec:test\]), we can argue that within our sample we would expect about 41 ($\sim$ 13.0%) not recognized low-energy counterparts, for a total of 197 $\gamma$-ray blazar candidates within the 313 UGSs analyzed. Finally, it is worth stressing that our association procedure provides also interesting information on the sources that do not have a $\gamma$-ray blazar candidates in the SR. The absence of $\gamma$-ray blazar candidates selected according to our association procedure could direct to better use the follow-up resources for identifying other $\gamma$-ray source candidates. For example in the case of the unidentified $\gamma$-ray source: 2FGL J1446.8$-$4701 within the 1604 [[*WISE*]{}]{} sources that lie in its SR, we did not find any $\gamma$-ray blazar candidates. This source has been recently identified with the pulsar PSR 1446-4701 (see Public List of LAT-Detected Gamma-Ray Pulsars) [^6]. Comparison with other methods {#sec:comparison} ============================= We note that among the 313 UGSs analyzed, there are 70 sources that were also unidentified according to the investigation performed in the first Fermi $\gamma$-ray catalog (1FGL), and 48 of them have been associated with a $\gamma$-ray blazar candidates in our analysis. In particular, a recent analysis of the 1FGL unidentified $\gamma$-ray sources has been carried out using two different statistical approaches: the Classification Tree and the Logistic regression analyses [see @ackermann11b and references therein]. For 44 out of the 48 UGSs, that have been analyzed on the basis of the above statistical methods, it is also possible to perform a comparison with our results to verify if the 2FGL sources that we associated to a $\gamma$-ray blazar candidates have been also classified as AGNs following the Ackermann et al. (2011b) procedures. By comparing the results of our association method with those in Ackermann et al. (2011b), we found that 27 out of 44 UGSs that we associate to a $\gamma$-ray blazar candidate are also classified as AGNs, all of them with a probability higher than 71% and 18 of them higher than 80%. 
Among the remaining 17 out of 44 sources, 7 have been classified as pulsars, with a very low probability with respect to the whole sample; in particular, 3 of these pulsar candidates are classified with a probability lower than 41% and all of them lower than 71%, making these classifications less reliable than those of the AGNs. The last 10 UGSs did not have a classification in Ackermann et al. (2011b). Consequently, we emphasize that our results are in good agreement with the classification suggested previously by Ackermann et al. (2011b) consistent with the $\gamma$-ray blazar nature of the [[*WISE*]{}]{} candidates proposed in our analysis. Summary and Conclusions {#sec:summary} ======================= Recently, we discovered that blazars have peculiar mid-IR colors with respect to other galactic sources or different classes of AGNs. In particular, we found that within the 3-dimensional IR parameter space they delineate a distinct, well-defined, region known as [[*WISE*]{}]{} Blazar Strip (Paper I). Moreover, this distinction, mostly due to the non-thermal emission that dominates the IR radiation of blazars, appears to be more evident when considering those blazars selected on the basis of their $\gamma$-ray properties (Paper II) so defining the [[*WISE*]{}]{} Gamma-ray blazar Strip ([[*WGS*]{}]{}). Then, in Paper III, we built the [[*WGS*]{}]{} parametrization to test the consistency of the low energy counterpart of the AGUs, associated in 2FGL with the [[*WGS*]{}]{}. On the basis of these results, in the present work, we developed a new association method to search for blazar counterparts of $\gamma$-ray sources and we applied this method to the blazars of the 2FGL sample. We also provide new $\gamma$-ray blazar candidates, potential counterparts of the UGSs, that lie within their $\gamma$-ray positional error region, having the same mid-IR colors as the $\gamma$-ray blazars already associated. We also tested our new procedure [*a posteriori*]{} trying to re-associate all the blazars in the 2FB sample and we found that our results are in good agreement with different association procedures. The application of our association procedure to the UGSs has led to the selection of possible blazar counterparts for 156 of 313 UGSs analyzed. As also noted in Section \[sec:ugs\], our association procedure provides also interesting information on the sources that do not have a $\gamma$-ray blazar candidates in the SRs as the case of the unidentified $\gamma$-ray source: 2FGL J1446.8$-$4701, recently identified with the pulsar PSR 1446-4701. Several developments will be considered to improve our association procedure, such as taking into account not only the IR colors, correspondent to flux ratios, but also the IR fluxes as well as the IR-$\gamma$-ray spectral index correlation (Paper II) and the sky distribution of the $\gamma$-ray blazar candidates, once the whole [[*WISE*]{}]{} data archive will be released. Then, it will be also possible to calibrate our association procedure choosing the different thresholds for the $s$ parameters at different Galactic latitudes to take into account of the [[*WISE*]{}]{} background. Moreover, our association method is complementary to those adopted in the 2FGL catalog analysis, because it is based on different multifrequency information. For this reason, these methods could be in principle combined to increase the fraction of associated UGSs and the efficiency of the association. 
Further developments of this new association method will be investigated in a forthcoming paper [@massaro12b]. We thank the anonymous referee for the his/her comments. F. Massaro is grateful S. Digel for their fruitful discussions for all the comments helpful toward improving our presentation. We also thank to A. Cavaliere, D. Harris, J. Grindlay, J. Knodlseder, P. Giommi, N. Omodei, H.Smith and D. Thompson for their suggestions. The work at SAO and at Stanford University is supported in part by the NASA grant NNX10AD50G, NNH09ZDA001N and NNX10AD68G. R. D’Abrusco gratefully acknowledges the financial support of the US Virtual Astronomical Observatory, which is sponsored by the National Science Foundation and the National Aeronautics and Space Administration. F. Massaro acknowledges the Fondazione Angelo Della Riccia for the grant awarded him to support his research at SAO during 2011 and the Foundation BLANCEFLOR Boncompagni-Ludovisi, n’ee Bildt for the grant awarded him in 2010 to support his research. TOPCAT[^7] [@taylor2005] was used extensively in this work for the preparation and manipulation of the tabular data. Part of this work is based on archival data, software or on-line services provided by the ASI Science Data Center. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Abdo, A. A. et al. 2010a ApJ, 720, 435 Abdo, A. A. et al. 2010b ApJS 188 405 Abdo, A. A. et al. ApJS submitted http://arxiv.org/abs/1108.1435 Ackermann, M. et al. 2011a ApJ, 743, 171 Ackermann, M. et al. 2011b ApJ submitted http://arxiv.org/abs/1108.1202 Blandford, R. D. & Rees, M. J., 1978, Proc. “Pittsburgh Conference on BL Lac objects", 328 Cutri, R. M. 2011, wise.rept, 1 D’Abrusco, R., Massaro, F., Ajello, M., Grindlay, J. E., Smith, Howard A. & Tosti, G. 2012 ApJ accepted Hartman, R.C. et al., 1999 ApJS 123 Laurino, O. & D’Abrusco 2011 MNRAS in press Massaro, E., Giommi, P., Leto, C., Marchegiani, P., Maselli, A., Perri, M., Piranomonte, S., Sclavi, S. 2009 A&A, 495, 691 Massaro, E., Giommi, P., Leto, C., Marchegiani, P., Maselli, A., Perri, M., Piranomonte, S., Sclavi, S. 2010 [http://arxiv.org/abs/1006.0922]{} Massaro, E., Giommi, P., Leto, C., Marchegiani, P., Maselli, A., Perri, M., Piranomonte, S., “Multifrequency Catalogue of Blazars (3rd Edition)", ARACNE Editrice, Rome, Italy Massaro, F., D’Abrusco, R., Ajello, M., Grindlay, J. E. & Smith, H. A. 2011 ApJ, 740L, 48 Massaro, F., D’Abrusco, R., Ajello, Gasparrini, d., Tosti, G., M., Grindlay, J. E. & Smith, H. A. 2012 ApJ submitted Massaro, F., D’Abrusco, R., Tosti, G. & Ajello, M. 2012b in preparation Taylor, M. B. 2005, ASP Conf. Ser., 347, 29 Wright, E. L., et al. 
2010 AJ, 140, 1868 [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J0038.8+6259 & & & & & & & & & & &\ J003818.70+630605.2 & 443.08 & 1.15 & 0.03 & 2.59 & 0.03 & 2.52 & 0.03 & 0.89 & 0.99 & A & 2/830\ J003756.80+630459.2 & 492.33 & 0.67 & 0.04 & 2.09 & 0.06 & 2.36 & 0.12 & 0.40 & 0.00 & B & 3/830\ J003834.17+630621.7 & 413.68 & 0.95 & 0.06 & 2.55 & 0.09 & 2.69 & 0.16 & 0.22 & 0.19 & C & 0/830\ 2FGL J0158.6+8558 & & & & & & & & & & &\ J014847.32+860345.3 & 707.33 & 1.21 & 0.03 & 3.10 & 0.03 & 2.66 & 0.05 & 0.29 & 0.73 & A & 1/2439\ J015619.63+855634.6 & 173.64 & 1.11 & 0.04 & 3.49 & 0.04 & 2.71 & 0.05 & 0.00 & 0.70 & B & 0/2439\ J014935.28+860115.3 & 603.41 & 0.73 & 0.03 & 2.34 & 0.05 & 2.06 & 0.16 & 0.44 & 0.21 & B & 0/2439\ J015550.14+854745.1 & 644.48 & 1.15 & 0.04 & 2.51 & 0.06 & 2.20 & 0.19 & 0.26 & 0.37 & B & 0/2439\ J015248.81+855703.5 & 376.98 & 1.11 & 0.05 & 2.96 & 0.08 & 1.87 & 0.29 & 0.21 & 0.22 & C & 0/2439\ 2FGL J0222.7+6820 & & & & & & & & & & &\ J022151.83+682414.0 & 375.72 & 0.45 & 0.03 & 2.0 & 0.04 & 2.17 & 0.07 & 0.61 & 0.00 & B & 0/642\ 2FGL J0226.1+0943 & & & & & & & & & & &\ J022634.26+093844.4 & 445.67 & 1.1 & 0.06 & 2.78 & 0.12 & 2.24 & 0.37 & 0.18 & 0.21 & C & 0/529\ 2FGL J0227.7+2249 & & & & & & & & & & &\ J022744.34+224834.3 & 73.27 & 0.98 & 0.04 & 2.53 & 0.05 & 1.91 & 0.13 & 0.47 & 0.26 & B & 0/621\ 2FGL J0237.9+5238 & & & & & & & & & & &\ J023749.24+523932.8 & 118.26 & 1.0 & 0.15 & 3.0 & 0.15 & 2.35 & 0.28 & 0.12 & 0.15 & C & 0/1330\ 2FGL J0251.0+2557 & & & & & & & & & & &\ J025144.37+255233.4 & 648.24 & 1.11 & 0.06 & 3.19 & 0.09 & 2.43 & 0.24 & 0.16 & 0.27 & C & 1/1587\ 2FGL J0307.4+4915 & & & & & & & & & & &\ J030727.22+491510.3 & 13.52 & 0.41 & 0.03 & 2.0 & 0.06 & 2.26 & 0.15 & 0.35 & 0.00 & B & 0/593\ 2FGL J0308.7+5954 & & & & & & & & & & &\ J030939.63+594254.9 & 838.69 & 1.1 & 0.04 & 2.67 & 0.06 & 2.18 & 0.14 & 0.39 & 0.39 & A & 0/4242\ J030904.80+595515.7 & 163.43 & 0.72 & 0.04 & 2.04 & 0.06 & 2.08 & 0.11 & 0.46 & 0.00 & B & 0/4242\ J030949.27+595443.4 & 497.05 & 1.27 & 0.04 & 3.45 & 0.05 & 2.54 & 0.07 & 0.00 & 0.56 & B & 0/4242\ 2FGL J0312.5$-$0914 & & & & & & & & & & &\ J031210.66$-$090902.3 & 494.53 & 1.1 & 0.06 & 2.84 & 0.12 & 2.85 & 0.25 & 0.12 & 0.2 & C & 2/898\ J031152.05$-$092132.6 & 731.58 & 1.07 & 0.05 & 2.48 & 0.14 & 2.17 & 0.45 & 0.17 & 0.16 & C & 2/898\ 2FGL J0312.8+2013 & & & & & & & & & & &\ J031223.00+200749.5 & 505.2 & 1.01 & 0.05 & 2.43 & 0.14 & 2.29 & 0.4 & 0.19 & 0.15 & C & 1/512\ 2FGL J0318.0+0255 & & & & & & & & & & &\ J031820.31+025909.9 & 348.38 & 1.02 & 0.07 & 2.97 & 0.15 & 2.54 & 0.38 & 0.12 & 0.17 & C & 0/734\ 2FGL J0332.1+6309 & & & & & & & & & & &\ J033153.90+630814.1 & 114.51 & 0.98 & 0.04 & 2.4 & 0.05 & 1.85 & 0.15 & 0.39 & 0.18 & B & 0/1034\ 2FGL J0340.5+5307 & & & & & & & & & & &\ J034004.71+530127.6 & 476.7 & 1.13 & 0.05 & 2.75 & 0.07 & 2.34 & 0.17 & 0.34 & 0.34 & B & 0/2365\ 2FGL J0345.2$-$2356 & & & & & & & & & & &\ J034448.74$-$235653.1 & 373.57 & 1.18 & 0.07 & 3.16 & 0.11 & 2.24 & 0.37 & 0.11 & 0.19 & C & 0/622\ 2FGL J0404.0+3843 & & & & & & & & & & &\ J040520.96+384721.5 & 923.5 & 1.01 & 0.05 & 2.84 & 0.07 & 2.26 & 0.18 & 0.34 & 0.34 & B & 0/4315\ 2FGL J0404.6+5822 & & & & & & & & & & &\ J040400.59+582317.1 & 333.46 & 0.47 & 0.04 & 1.92 & 0.06 & 2.09 & 0.21 & 0.34 & 0.00 & B & 1/3481\ J040628.51+582709.4 & 892.88 & 0.57 & 0.03 & 2.07 & 0.04 & 2.27 & 0.08 & 0.55 & 0.14 & B & 1/3481\ 2FGL J0409.8-0357 & & 
& & & & & & & & &\ J040946.57-040003.5 & 144.21 & 0.9 & 0.03 & 2.42 & 0.04 & 1.88 & 0.12 & 0.52 & 0.22 & B & 0/549\ J041011.21-040000.0 & 319.59 & 0.99 & 0.06 & 2.91 & 0.13 & 2.31 & 0.37 & 0.14 & 0.19 & C & 1/549\ 2FGL J0414.9-0855 & & & & & & & & & & &\ J041457.01-085651.9 & 99.2 & 0.99 & 0.04 & 2.66 & 0.08 & 2.39 & 0.21 & 0.31 & 0.32 & B & 0/1182\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J0416.0$-$4355 & & & & & & & & & & &\ J041605.81$-$435514.6 & 47.18 & 1.13 & 0.04 & 2.94 & 0.04 & 2.37 & 0.08 & 0.58 & 0.58 & A & 0/2603\ J041442.89$-$434935.4 & 943.62 & 1.03 & 0.04 & 2.9 & 0.07 & 2.43 & 0.2 & 0.32 & 0.34 & B & 2/2603\ J041506.36$-$440958.9 & 1045.16 & 1.08 & 0.04 & 2.74 & 0.06 & 2.14 & 0.18 & 0.36 & 0.38 & B & 2/2603\ J041538.31$-$440412.8 & 568.43 & 1.01 & 0.05 & 2.6 & 0.1 & 2.49 & 0.3 & 0.21 & 0.23 & C & 1/2603\ J041709.78$-$435922.4 & 751.76 & 1.04 & 0.08 & 2.94 & 0.16 & 2.5 & 0.45 & 0.11 & 0.15 & C & 1/2603\ 2FGL J0420.9$-$3743 & & & & & & & & & & &\ J042025.09$-$374444.5 & 368.83 & 0.82 & 0.04 & 2.5 & 0.09 & 2.41 & 0.28 & 0.25 & 0.17 & B & 1/1249\ 2FGL J0428.0$-$3845 & & & & & & & & & & &\ J042721.64$-$390100.6 & 1039.57 & 1.09 & 0.04 & 2.76 & 0.05 & 2.62 & 0.11 & 0.42 & 0.48 & A & 0/2563\ 2FGL J0458.4+0654 & & & & & & & & & & &\ J045911.96+065932.7 & 775.68 & 1.14 & 0.06 & 3.02 & 0.12 & 2.6 & 0.28 & 0.16 & 0.23 & C & 0/1476\ 2FGL J0515.0$-$4411 & & & & & & & & & & &\ J051322.89$-$441947.8 & 1153.62 & 1.01 & 0.04 & 3.02 & 0.05 & 2.38 & 0.13 & 0.35 & 0.46 & A & 1/3932\ J051439.26$-$441348.7 & 258.38 & 1.11 & 0.06 & 3.28 & 0.09 & 2.48 & 0.21 & 0.16 & 0.28 & C & 3/3932\ J051524.23$-$441457.2 & 308.78 & 1.15 & 0.06 & 2.85 & 0.11 & 2.28 & 0.38 & 0.14 & 0.22 & C & 3/3932\ J051525.83$-$435632.7 & 962.23 & 1.02 & 0.06 & 2.96 & 0.1 & 2.23 & 0.32 & 0.21 & 0.22 & C & 3/3932\ J051432.40$-$435214.3 & 1221.78 & 1.16 & 0.06 & 3.09 & 0.12 & 2.74 & 0.29 & 0.11 & 0.21 & C & 3/3932\ 2FGL J0529.3+3821 & & & & & & & & & & &\ J053006.26+382649.0 & 604.83 & 0.98 & 0.03 & 2.92 & 0.03 & 2.74 & 0.04 & 0.56 & 0.7 & A & 0/1574\ J052939.38+382327.1 & 230.39 & 1.02 & 0.05 & 2.67 & 0.1 & 2.45 & 0.17 & 0.3 & 0.31 & B & 0/1574\ J052943.26+382049.3 & 231.47 & 1.1 & 0.05 & 2.84 & 0.07 & 2.14 & 0.19 & 0.31 & 0.34 & B & 0/1574\ J052941.52+382442.2 & 298.7 & 0.69 & 0.03 & 2.43 & 0.05 & 2.18 & 0.1 & 0.53 & 0.25 & B & 0/1574\ J053006.35+382532.8 & 566.64 & 0.89 & 0.05 & 2.29 & 0.09 & 2.2 & 0.28 & 0.27 & 0.15 & B & 0/1574\ J053018.78+382231.9 & 653.47 & 1.01 & 0.05 & 2.62 & 0.07 & 2.37 & 0.21 & 0.3 & 0.29 & B & 0/1574\ 2FGL J0540.1$-$7554 & & & & & & & & & & &\ J054003.36$-$754157.9 & 736.81 & 1.06 & 0.04 & 2.76 & 0.04 & 2.49 & 0.1 & 0.51 & 0.52 & A & 0/3412\ J054231.99$-$760139.4 & 677.11 & 1.0 & 0.04 & 2.84 & 0.08 & 2.42 & 0.23 & 0.29 & 0.31 & B & 1/3412\ J053926.22$-$760648.6 & 772.37 & 0.91 & 0.05 & 2.71 & 0.08 & 2.64 & 0.21 & 0.24 & 0.24 & B & 1/3412\ J054111.58$-$760247.1 & 557.16 & 1.06 & 0.05 & 2.82 & 0.08 & 2.81 & 0.2 & 0.18 & 0.26 & C & 0/3412\ 2FGL J0545.6+6018 & & & & & & & & & & &\ J054610.69+602331.6 & 368.05 & 1.11 & 0.05 & 2.85 & 0.07 & 2.5 & 0.16 & 0.32 & 0.35 & B & 0/449\ 2FGL J0555.9$-$4348 & & & & & & & & & & &\ J055601.91$-$433947.3 & 513.92 & 1.07 & 0.04 & 2.55 & 0.06 & 2.35 & 0.18 & 0.38 & 0.38 & A & 0/2668\ J055618.73$-$435146.1 & 338.08 & 0.93 & 0.04 & 2.51 & 0.05 & 2.13 & 0.14 & 0.46 & 0.33 & B & 0/2668\ J055432.27$-$435241.2 & 927.17 & 0.89 & 0.04 & 2.68 & 
0.07 & 2.12 & 0.26 & 0.32 & 0.21 & B & 0/2668\ J055531.86$-$435706.1 & 584.44 & 0.94 & 0.05 & 2.9 & 0.1 & 2.32 & 0.3 & 0.18 & 0.23 & C & 2/2668\ 2FGL J0600.8$-$1949 & & & & & & & & & & &\ J060120.34$-$200725.5 & 1130.75 & 1.03 & 0.03 & 2.64 & 0.04 & 2.09 & 0.08 & 0.62 & 0.63 & A & 0/4104\ J055931.95$-$195135.0 & 1131.41 & 1.13 & 0.06 & 3.1 & 0.1 & 2.22 & 0.35 & 0.16 & 0.21 & C & 1/4104\ 2FGL J0600.9+3839 & & & & & & & & & & &\ J060102.86+383829.6 & 68.68 & 0.97 & 0.04 & 2.54 & 0.08 & 2.38 & 0.19 & 0.33 & 0.28 & B & 0/613\ 2FGL J0602.7$-$4011 & & & & & & & & & & &\ J060237.09$-$401453.5 & 239.84 & 0.97 & 0.03 & 2.48 & 0.04 & 2.3 & 0.07 & 0.63 & 0.51 & A & 0/1127\ J060251.28$-$401845.1 & 472.64 & 0.88 & 0.04 & 2.44 & 0.04 & 1.82 & 0.11 & 0.53 & 0.22 & B & 0/1127\ J060254.77$-$402035.2 & 588.88 & 1.12 & 0.06 & 2.95 & 0.12 & 2.49 & 0.32 & 0.18 & 0.22 & C & 2/1127\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J0608.3+2037 & & & & & & & & & & &\ J060835.36+203604.1 & 237.43 & 1.03 & 0.03 & 2.83 & 0.03 & 2.22 & 0.05 & 0.76 & 0.77 & A & 0/1130\ J060831.76+203141.8 & 405.85 & 1.03 & 0.03 & 2.47 & 0.03 & 2.32 & 0.03 & 0.91 & 0.88 & A & 0/1130\ J060740.08+203959.2 & 579.55 & 1.11 & 0.04 & 2.86 & 0.04 & 2.37 & 0.09 & 0.57 & 0.58 & A & 0/1130\ J060803.38+204111.0 & 309.04 & 1.24 & 0.03 & 2.55 & 0.03 & 2.56 & 0.03 & 0.00 & 0.7 & B & 2/1130\ J060831.63+204122.5 & 260.5 & 1.19 & 0.04 & 2.82 & 0.09 & 2.74 & 0.12 & 0.18 & 0.32 & C & 0/1130\ 2FGL J0616.6+2425 & & & & & & & & & & &\ J061623.95+241809.2 & 458.21 & 0.98 & 0.03 & 3.38 & 0.04 & 2.8 & 0.04 & 0.00 & 0.56 & B & 1/5456\ J061609.79+241911.1 & 513.16 & 0.66 & 0.03 & 2.37 & 0.03 & 1.72 & 0.08 & 0.63 & 0.21 & B & 1/5456\ J061705.49+240850.4 & 1063.29 & 0.53 & 0.04 & 2.17 & 0.07 & 1.87 & 0.24 & 0.34 & 0.00 & B & 1/5456\ J061611.60+241917.2 & 491.67 & 0.94 & 0.04 & 2.15 & 0.06 & 2.8 & 0.12 & 0.23 & 0.2 & C & 1/5456\ J061600.61+242253.6 & 506.98 & 0.84 & 0.06 & 2.78 & 0.11 & 2.44 & 0.28 & 0.17 & 0.14 & C & 1/5456\ J061640.77+241151.1 & 806.78 & 0.98 & 0.06 & 2.78 & 0.09 & 1.84 & 0.41 & 0.18 & 0.16 & C & 1/5456\ J061650.09+244221.1 & 1042.8 & 1.2 & 0.08 & 3.03 & 0.1 & 2.44 & 0.24 & 0.11 & 0.23 & C & 1/5456\ 2FGL J0620.8$-$2556 & & & & & & & & & & &\ J062108.67$-$255758.1 & 222.84 & 0.96 & 0.05 & 2.7 & 0.09 & 2.08 & 0.35 & 0.23 & 0.21 & C & 1/3508\ 2FGL J0631.7+0428 & & & & & & & & & & &\ J063151.53+041903.3 & 581.96 & 0.93 & 0.03 & 2.76 & 0.03 & 2.46 & 0.06 & 0.69 & 0.67 & A & 1/9083\ J063155.40+041844.1 & 612.84 & 1.09 & 0.04 & 2.95 & 0.09 & 2.72 & 0.12 & 0.27 & 0.38 & A & 1/9083\ J063103.52+042739.6 & 624.47 & 0.71 & 0.04 & 2.36 & 0.07 & 1.88 & 0.38 & 0.28 & 0.14 & B & 1/9083\ J063239.59+043629.4 & 941.7 & 0.81 & 0.04 & 2.53 & 0.09 & 1.71 & 0.32 & 0.26 & 0.14 & B & 1/9083\ J063048.95+043714.4 & 986.14 & 0.7 & 0.03 & 1.99 & 0.09 & 2.02 & 0.3 & 0.3 & 0.00 & B & 1/9083\ J063147.96+045049.0 & 1332.46 & 0.62 & 0.04 & 2.26 & 0.08 & 2.35 & 0.14 & 0.35 & 0.00 & B & 1/9083\ J063314.27+042905.5 & 1333.84 & 0.6 & 0.03 & 2.1 & 0.12 & 1.83 & 0.31 & 0.26 & 0.07 & B & 1/9083\ 2FGL J0644.6+6034 & & & & & & & & & & &\ J064459.38+603132.1 & 223.54 & 1.01 & 0.04 & 2.69 & 0.06 & 2.52 & 0.15 & 0.38 & 0.4 & A & 0/877\ J064417.82+603932.2 & 347.48 & 1.2 & 0.06 & 3.19 & 0.1 & 2.74 & 0.22 & 0.1 & 0.26 & C & 0/877\ 2FGL J0647.7+0032 & & & & & & & & & & &\ J064712.98+003702.8 & 545.46 & 1.12 & 0.04 & 3.07 & 0.05 & 2.19 & 0.08 & 0.51 & 0.51 & A & 
0/2093\ J064718.24+003250.6 & 384.38 & 0.81 & 0.04 & 2.59 & 0.11 & 2.31 & 0.2 & 0.29 & 0.21 & B & 0/2093\ 2FGL J0713.5$-$0952 & & & & & & & & & & &\ J071337.47$-$101207.9 & 1182.86 & 0.98 & 0.04 & 2.82 & 0.05 & 2.23 & 0.1 & 0.49 & 0.49 & A & 1/11990\ J071400.29$-$095518.9 & 429.54 & 1.14 & 0.07 & 2.93 & 0.08 & 2.72 & 0.14 & 0.19 & 0.31 & C & 2/11990\ 2FGL J0723.9+2901 & & & & & & & & & & &\ J072354.83+285929.9 & 153.39 & 1.15 & 0.05 & 2.93 & 0.05 & 2.3 & 0.12 & 0.38 & 0.43 & A & 0/1215\ 2FGL J0725.8$-$0549 & & & & & & & & & & &\ J072536.62$-$055807.3 & 558.18 & 1.06 & 0.04 & 2.74 & 0.04 & 2.14 & 0.09 & 0.55 & 0.56 & A & 0/1094\ 2FGL J0737.1$-$3235 & & & & & & & & & & &\ J073738.91$-$323256.6 & 385.01 & 1.13 & 0.06 & 3.27 & 0.05 & 2.55 & 0.08 & 0.22 & 0.49 & B & 1/1985\ J073646.70$-$324215.2 & 516.85 & 0.63 & 0.04 & 2.28 & 0.07 & 1.72 & 0.31 & 0.27 & 0.1 & B & 1/1985\ 2FGL J0737.5$-$8246 & & & & & & & & & & &\ J073713.75$-$824812.8 & 111.61 & 1.02 & 0.06 & 3.08 & 0.11 & 2.24 & 0.36 & 0.16 & 0.2 & C & 0/916\ J073715.07$-$825511.4 & 525.59 & 0.89 & 0.05 & 2.89 & 0.07 & 2.0 & 0.23 & 0.23 & 0.24 & C & 0/916\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J0742.7$-$3113 & & & & & & & & & & &\ J074226.42$-$305720.1 & 990.78 & 1.09 & 0.03 & 2.73 & 0.03 & 2.22 & 0.03 & 0.9 & 0.91 & A & 1/3499\ J074345.64$-$310758.0 & 873.05 & 1.02 & 0.05 & 2.29 & 0.05 & 2.16 & 0.15 & 0.31 & 0.25 & B & 1/3499\ J074303.55$-$312057.4 & 522.58 & 1.2 & 0.1 & 2.78 & 0.08 & 2.0 & 0.29 & 0.1 & 0.18 & C & 2/3499\ 2FGL J0744.1$-$2523 & & & & & & & & & & &\ J074406.14$-$252154.1 & 142.98 & 1.27 & 0.03 & 2.55 & 0.03 & 1.98 & 0.05 & 0.00 & 0.44 & B & 0/630\ J074401.10$-$252203.9 & 180.53 & 1.26 & 0.06 & 2.77 & 0.04 & 2.66 & 0.08 & 0.00 & 0.5 & B & 0/630\ J074359.96$-$252220.6 & 183.53 & 0.93 & 0.05 & 2.86 & 0.06 & 2.53 & 0.18 & 0.27 & 0.33 & B & 0/630\ J074356.53$-$251909.8 & 351.78 & 1.1 & 0.09 & 2.87 & 0.1 & 2.69 & 0.21 & 0.14 & 0.22 & C & 0/630\ 2FGL J0745.5+7910 & & & & & & & & & & &\ J074502.17+791110.5 & 98.65 & 1.04 & 0.07 & 2.83 & 0.14 & 2.39 & 0.43 & 0.16 & 0.18 & C & 0/1343\ J074659.63+790356.7 & 469.93 & 1.14 & 0.05 & 2.67 & 0.1 & 2.72 & 0.23 & 0.17 & 0.24 & C & 0/1343\ 2FGL J0746.0$-$0222 & & & & & & & & & & &\ J074627.02$-$022549.4 & 391.79 & 0.65 & 0.04 & 2.08 & 0.07 & 2.18 & 0.27 & 0.29 & 0.00 & B & 1/1597\ J074534.78$-$021534.7 & 604.93 & 1.13 & 0.04 & 2.9 & 0.06 & 2.42 & 0.18 & 0.34 & 0.36 & B & 1/1597\ 2FGL J0748.5$-$2204 & & & & & & & & & & &\ J074726.63$-$220630.9 & 935.48 & 1.0 & 0.04 & 2.89 & 0.03 & 2.29 & 0.07 & 0.65 & 0.66 & A & 0/4863\ 2FGL J0753.2+1937 & & & & & & & & & & &\ J075217.84+193542.3 & 839.58 & 1.16 & 0.03 & 2.97 & 0.03 & 2.65 & 0.03 & 0.53 & 0.93 & A & 0/1565\ 2FGL J0753.2$-$1634 & & & & & & & & & & &\ J075313.37$-$160833.2 & 1560.13 & 1.29 & 0.05 & 2.99 & 0.06 & 2.59 & 0.13 & 0.00 & 0.39 & B & 6/15814\ J075249.51$-$160820.5 & 1618.57 & 0.92 & 0.04 & 2.95 & 0.05 & 2.64 & 0.09 & 0.23 & 0.45 & B & 6/15814\ J075213.04$-$160415.1 & 2032.41 & 1.11 & 0.06 & 2.75 & 0.11 & 2.86 & 0.25 & 0.13 & 0.2 & C & 0/15814\ 2FGL J0900.9+6736 & & & & & & & & & & &\ J090121.65+673955.5 & 228.51 & 0.9 & 0.05 & 2.67 & 0.11 & 2.34 & 0.31 & 0.23 & 0.21 & C & 2/1220\ 2FGL J1032.9$-$8401 & & & & & & & & & & &\ J103015.41$-$840308.2 & 274.15 & 0.95 & 0.04 & 2.6 & 0.05 & 2.04 & 0.16 & 0.41 & 0.32 & B & 0/1438\ 2FGL J1058.7$-$6621 & & & & & & & & & & &\ J105854.76$-$663412.8 & 736.6 & 0.93 & 
0.07 & 2.62 & 0.07 & 2.54 & 0.17 & 0.27 & 0.25 & B & 1/1609\ 2FGL J1104.7$-$6036 & & & & & & & & & & &\ J110500.45$-$603559.0 & 117.21 & 0.76 & 0.08 & 2.04 & 0.08 & 1.85 & 0.22 & 0.25 & 0.11 & B & 0/235\ 2FGL J1105.4$-$7622 & & & & & & & & & & &\ J110409.06$-$762719.3 & 413.02 & 0.78 & 0.02 & 2.59 & 0.02 & 2.07 & 0.03 & 1.21 & 0.73 & A & 0/1918\ J110632.74$-$762521.1 & 293.54 & 0.56 & 0.03 & 1.79 & 0.06 & 1.97 & 0.22 & 0.38 & 0.00 & B & 1/1918\ J110746.55$-$761517.6 & 644.16 & 0.65 & 0.03 & 1.75 & 0.04 & 2.25 & 0.07 & 0.46 & 0.00 & B & 1/1918\ 2FGL J1105.6$-$6114 & & & & & & & & & & &\ J110641.59$-$611126.7 & 486.49 & 1.13 & 0.1 & 2.9 & 0.14 & 2.69 & 0.25 & 0.11 & 0.18 & C & 0/1650\ J110709.52$-$611247.7 & 664.92 & 1.15 & 0.06 & 2.87 & 0.14 & 2.51 & 0.2 & 0.19 & 0.25 & C & 0/1650\ 2FGL J1207.3$-$5055 & & & & & & & & & & &\ J120715.61$-$504558.3 & 595.37 & 0.99 & 0.04 & 2.98 & 0.04 & 2.19 & 0.1 & 0.4 & 0.51 & A & 0/2542\ J120750.50$-$510314.6 & 523.02 & 1.12 & 0.07 & 2.8 & 0.12 & 2.72 & 0.25 & 0.15 & 0.18 & C & 2/2542\ 2FGL J1214.1$-$4410 & & & & & & & & & & &\ J121305.50$-$440807.3 & 667.39 & 0.85 & 0.03 & 2.63 & 0.03 & 2.46 & 0.04 & 0.82 & 0.66 & A & 0/1816\ J121442.80$-$441338.0 & 445.13 & 1.09 & 0.05 & 2.88 & 0.09 & 2.84 & 0.2 & 0.17 & 0.25 & C & 0/1816\ 2FGL J1236.1$-$6155 & & & & & & & & & & &\ J123550.23$-$614507.5 & 629.83 & 0.93 & 0.04 & 2.43 & 0.05 & 2.41 & 0.1 & 0.48 & 0.34 & B & 0/1868\ J123452.66$-$614702.4 & 739.59 & 0.9 & 0.04 & 2.29 & 0.03 & 2.1 & 0.03 & 0.87 & 0.36 & B & 0/1868\ J123500.79$-$615611.1 & 489.18 & 0.96 & 0.06 & 2.98 & 0.09 & 2.12 & 0.21 & 0.21 & 0.25 & C & 1/1868\ J123551.88$-$614523.2 & 611.92 & 1.18 & 0.04 & 2.55 & 0.09 & 2.07 & 0.22 & 0.16 & 0.28 & C & 1/1868\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J1243.9$-$6232 & & & & & & & & & & &\ J124252.40$-$623214.0 & 437.79 & 1.13 & 0.04 & 2.52 & 0.04 & 2.52 & 0.04 & 0.63 & 0.64 & A & 0/854\ J124322.29$-$623912.7 & 466.0 & 0.7 & 0.08 & 2.41 & 0.07 & 2.29 & 0.15 & 0.3 & 0.09 & B & 0/854\ J124438.34$-$623425.2 & 317.58 & 0.85 & 0.06 & 2.3 & 0.11 & 2.6 & 0.15 & 0.24 & 0.14 & C & 0/854\ 2FGL J1248.6$-$5510 & & & & & & & & & & &\ J124946.06$-$550758.3 & 602.9 & 1.03 & 0.05 & 2.7 & 0.07 & 2.45 & 0.15 & 0.36 & 0.36 & B & 2/4244\ 2FGL J1255.8$-$5828 & & & & & & & & & & &\ J125459.46$-$582009.4 & 636.52 & 1.32 & 0.05 & 2.82 & 0.04 & 2.19 & 0.06 & 0.00 & 0.53 & B & 0/12152\ J125357.08$-$583322.2 & 945.84 & 0.65 & 0.04 & 2.22 & 0.03 & 1.49 & 0.03 & 0.9 & 0.34 & B & 0/12152\ J125448.95$-$585010.1 & 1399.83 & 1.06 & 0.1 & 2.84 & 0.06 & 2.34 & 0.11 & 0.33 & 0.33 & B & 0/12152\ J125646.35$-$581306.0 & 1009.28 & 1.13 & 0.11 & 2.45 & 0.09 & 2.48 & 0.2 & 0.12 & 0.17 & C & 2/12152\ 2FGL J1317.2$-$6304 & & & & & & & & & & &\ J131818.06$-$630215.1 & 456.0 & 0.87 & 0.09 & 2.85 & 0.1 & 2.03 & 0.24 & 0.16 & 0.14 & C & 0/1382\ 2FGL J1320.1$-$5756 & & & & & & & & & & &\ J131938.49$-$575738.1 & 271.56 & 1.13 & 0.12 & 2.45 & 0.11 & 2.36 & 0.25 & 0.1 & 0.16 & C & 0/1294\ 2FGL J1324.4$-$5411 & & & & & & & & & & &\ J132530.49$-$542548.7 & 999.54 & 1.13 & 0.09 & 2.77 & 0.09 & 2.34 & 0.26 & 0.18 & 0.22 & C & 0/3899\ 2FGL J1339.2$-$2348 & & & & & & & & & & &\ J133901.75$-$240113.9 & 762.96 & 1.09 & 0.04 & 2.94 & 0.04 & 2.26 & 0.07 & 0.61 & 0.61 & A & 0/1337\ J133825.70$-$235150.2 & 678.03 & 1.09 & 0.04 & 3.19 & 0.06 & 2.98 & 0.09 & 0.00 & 0.38 & B & 0/1337\ 2FGL J1345.8$-$3356 & & & & & & & & & & &\ 
J134543.05$-$335643.3 & 94.27 & 0.85 & 0.04 & 2.35 & 0.08 & 2.02 & 0.3 & 0.28 & 0.13 & B & 0/2098\ J134515.46$-$334917.7 & 612.18 & 0.91 & 0.06 & 2.81 & 0.14 & 2.56 & 0.34 & 0.13 & 0.16 & C & 1/2098\ 2FGL J1347.0$-$2956 & & & & & & & & & & &\ J134706.88$-$295842.4 & 134.22 & 0.78 & 0.04 & 2.19 & 0.1 & 2.05 & 0.32 & 0.26 & 0.11 & B & 0/729\ J134742.26$-$300046.8 & 576.17 & 1.19 & 0.05 & 3.07 & 0.07 & 2.44 & 0.18 & 0.17 & 0.33 & C & 0/729\ 2FGL J1407.4$-$2948 & & & & & & & & & & &\ J140657.64$-$301718.0 & 1737.44 & 1.15 & 0.07 & 2.69 & 0.15 & 2.45 & 0.43 & 0.12 & 0.17 & C & 4/7776\ 2FGL J1414.1$-$5450 & & & & & & & & & & &\ J141236.97$-$543952.4 & 1009.62 & 0.99 & 0.04 & 2.32 & 0.05 & 2.32 & 0.1 & 0.46 & 0.35 & B & 1/4045\ J141349.58$-$544243.4 & 475.18 & 0.99 & 0.1 & 2.93 & 0.11 & 2.53 & 0.23 & 0.15 & 0.21 & C & 1/4045\ J141337.01$-$550126.7 & 736.43 & 0.93 & 0.08 & 2.4 & 0.11 & 2.42 & 0.21 & 0.22 & 0.16 & C & 1/4045\ 2FGL J1417.5$-$4404 & & & & & & & & & & &\ J141721.43$-$435342.9 & 668.44 & 1.02 & 0.07 & 2.88 & 0.09 & 2.48 & 0.22 & 0.23 & 0.26 & C & 1/2016\ 2FGL J1422.3$-$6841 & & & & & & & & & & &\ J142409.23$-$683715.6 & 660.57 & 0.68 & 0.04 & 2.1 & 0.04 & 1.83 & 0.1 & 0.53 & 0.22 & B & 0/1482\ 2FGL J1423.9$-$7842 & & & & & & & & & & &\ J142343.58$-$782934.3 & 801.85 & 1.04 & 0.06 & 2.82 & 0.12 & 2.91 & 0.24 & 0.11 & 0.19 & C & 2/2721\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J1517.2+3645 & & & & & & & & & & &\ J151649.27+365022.9 & 420.93 & 1.05 & 0.05 & 2.63 & 0.09 & 2.48 & 0.24 & 0.27 & 0.27 & B & 0/1352\ J151752.13+364125.5 & 508.82 & 1.04 & 0.04 & 3.01 & 0.06 & 2.19 & 0.19 & 0.34 & 0.36 & B & 0/1352\ 2FGL J1518.4$-$5233 & & & & & & & & & & &\ J151807.33$-$523431.0 & 184.75 & 0.66 & 0.07 & 2.31 & 0.05 & 1.41 & 0.17 & 0.26 & 0.13 & B & 0/979\ 2FGL J1543.7$-$0241 & & & & & & & & & & &\ J154315.19$-$022531.4 & 1052.36 & 1.06 & 0.08 & 2.85 & 0.18 & 2.58 & 0.43 & 0.11 & 0.14 & C & 0/3462\ 2FGL J1552.8$-$4824 & & & & & & & & & & &\ J155237.30$-$484152.9 & 1044.56 & 0.98 & 0.05 & 2.7 & 0.04 & 2.45 & 0.06 & 0.54 & 0.53 & A & 0/5499\ 2FGL J1553.5$-$0324 & & & & & & & & & & &\ J155341.52$-$031231.4 & 698.12 & 1.35 & 0.04 & 2.68 & 0.05 & 2.37 & 0.11 & 0.00 & 0.42 & B & 0/2108\ J155314.95$-$031204.8 & 778.98 & 0.95 & 0.05 & 2.59 & 0.09 & 2.21 & 0.24 & 0.27 & 0.24 & B & 0/2108\ J155306.36$-$032317.7 & 432.95 & 0.92 & 0.05 & 2.36 & 0.11 & 2.5 & 0.28 & 0.24 & 0.16 & C & 1/2108\ 2FGL J1612.0+1403 & & & & & & & & & & &\ J161148.58+135716.8 & 426.68 & 1.07 & 0.05 & 3.09 & 0.09 & 2.16 & 0.33 & 0.16 & 0.22 & C & 0/2763\ J161118.10+140328.9 & 616.83 & 1.12 & 0.06 & 3.14 & 0.09 & 2.59 & 0.21 & 0.17 & 0.28 & C & 0/2763\ 2FGL J1614.8+4703 & & & & & & & & & & &\ J161434.68+470420.3 & 178.15 & 1.09 & 0.03 & 3.15 & 0.03 & 2.1 & 0.04 & 0.51 & 0.82 & A & 0/3922\ J161541.22+471111.8 & 701.47 & 0.75 & 0.04 & 2.27 & 0.05 & 2.26 & 0.13 & 0.48 & 0.23 & B & 3/3922\ J161450.96+465954.1 & 200.29 & 1.18 & 0.05 & 2.8 & 0.09 & 2.45 & 0.24 & 0.21 & 0.28 & C & 1/3922\ J161513.04+471356.0 & 680.29 & 1.06 & 0.06 & 2.95 & 0.13 & 2.34 & 0.42 & 0.17 & 0.19 & C & 1/3922\ J161536.97+471711.1 & 959.75 & 1.24 & 0.06 & 2.83 & 0.11 & 2.28 & 0.36 & 0.1 & 0.21 & C & 1/3922\ 2FGL J1617.6$-$4219 & & & & & & & & & & &\ J161955.00$-$422815.1 & 1557.87 & 1.05 & 0.05 & 2.85 & 0.04 & 2.58 & 0.06 & 0.52 & 0.58 & A & 0/7942\ J161804.21$-$421209.0 & 524.8 & 0.97 & 0.05 & 2.22 & 0.05 & 2.23 & 0.09 & 0.38 & 0.27 & 
B & 0/7942\ 2FGL J1619.0$-$4650 & & & & & & & & & & &\ J161922.19$-$464331.6 & 472.97 & 0.66 & 0.04 & 2.14 & 0.04 & 1.88 & 0.05 & 0.69 & 0.29 & B & 5/17602\ J161926.88$-$463511.1 & 964.94 & 1.01 & 0.04 & 2.48 & 0.04 & 1.92 & 0.07 & 0.62 & 0.35 & B & 5/17602\ J161929.08$-$463501.3 & 980.05 & 0.54 & 0.04 & 2.35 & 0.05 & 2.02 & 0.09 & 0.32 & 0.00 & B & 5/17602\ 2FGL J1619.6$-$4509 & & & & & & & & & & &\ J161944.64$-$451147.4 & 145.23 & 0.6 & 0.07 & 2.24 & 0.06 & 2.25 & 0.12 & 0.35 & 0.07 & B & 1/2541\ 2FGL J1622.8$-$0314 & & & & & & & & & & &\ J162225.36$-$031439.0 & 414.55 & 1.18 & 0.06 & 3.18 & 0.1 & 2.66 & 0.2 & 0.11 & 0.27 & C & 0/1577\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J1623.2+4328 & & & & & & & & & & &\ J162324.12+432533.8 & 183.07 & 0.91 & 0.04 & 2.64 & 0.06 & 2.47 & 0.14 & 0.41 & 0.35 & B & 0/1983\ J162237.62+433801.8 & 720.08 & 1.13 & 0.05 & 2.92 & 0.08 & 2.25 & 0.24 & 0.29 & 0.3 & B & 0/1983\ 2FGL J1624.2$-$2124 & & & & & & & & & & &\ J162608.57$-$211709.1 & 1617.78 & 1.38 & 0.04 & 2.79 & 0.06 & 2.57 & 0.12 & 0.00 & 0.4 & B & 0/17861\ 2FGL J1627.8+3219 & & & & & & & & & & &\ J162800.41+322414.9 & 319.07 & 1.12 & 0.05 & 2.89 & 0.09 & 2.52 & 0.27 & 0.21 & 0.26 & C & 1/1045\ 2FGL J1630.2$-$4752 & & & & & & & & & & &\ J162957.80$-$475327.5 & 189.16 & 0.73 & 0.05 & 1.88 & 0.05 & 1.44 & 0.22 & 0.25 & 0.15 & B & 0/539\ 2FGL J1631.0$-$1050 & & & & & & & & & & &\ J163204.19$-$104411.5 & 1008.36 & 0.89 & 0.04 & 2.56 & 0.06 & 1.89 & 0.19 & 0.38 & 0.18 & B & 0/3687\ 2FGL J1641.8$-$5319 & & & & & & & & & & &\ J164059.69$-$532258.6 & 509.89 & 0.97 & 0.19 & 2.57 & 0.11 & 2.55 & 0.15 & 0.17 & 0.15 & C & 0/2000\ 2FGL J1643.3$-$4928 & & & & & & & & & & &\ J164153.54$-$491752.0 & 1067.61 & 0.7 & 0.03 & 2.29 & 0.03 & 2.1 & 0.02 & 1.01 & 0.42 & A & 1/3888\ 2FGL J1647.0+4351 & & & & & & & & & & &\ J164652.47+435821.7 & 434.78 & 0.97 & 0.04 & 2.86 & 0.07 & 2.58 & 0.16 & 0.31 & 0.36 & B & 0/2415\ J164720.15+434438.6 & 441.88 & 0.86 & 0.05 & 2.69 & 0.09 & 2.54 & 0.23 & 0.27 & 0.2 & B & 0/2415\ J164808.75+435530.4 & 734.82 & 0.86 & 0.05 & 2.73 & 0.1 & 2.23 & 0.34 & 0.19 & 0.17 & C & 3/2415\ 2FGL J1653.6$-$0159 & & & & & & & & & & &\ J165315.62$-$015822.3 & 324.68 & 1.02 & 0.04 & 2.37 & 0.04 & 1.84 & 0.12 & 0.45 & 0.25 & B & 0/335\ 2FGL J1656.4$-$0738 & & & & & & & & & & &\ J165639.14$-$073821.1 & 142.99 & 1.21 & 0.04 & 2.85 & 0.05 & 2.36 & 0.11 & 0.22 & 0.45 & B & 0/1591\ 2FGL J1657.5$-$4652 & & & & & & & & & & &\ J165708.90$-$464752.2 & 377.56 & 0.82 & 0.02 & 2.09 & 0.03 & 1.58 & 0.03 & 1.13 & 0.47 & A & 0/1009\ J165744.80$-$464635.7 & 370.78 & 0.59 & 0.05 & 1.72 & 0.09 & 1.91 & 0.18 & 0.26 & 0.00 & B & 0/1009\ 2FGL J1704.3+1235 & & & & & & & & & & &\ J170409.58+123422.0 & 170.99 & 0.74 & 0.04 & 2.12 & 0.07 & 1.57 & 0.41 & 0.24 & 0.12 & B & 0/1044\ J170418.39+123057.4 & 293.74 & 1.13 & 0.04 & 2.71 & 0.08 & 2.18 & 0.26 & 0.26 & 0.29 & B & 0/1044\ 2FGL J1704.9$-$4618 & & & & & & & & & & &\ J170503.47$-$462929.4 & 676.28 & 1.12 & 0.04 & 2.59 & 0.03 & 2.37 & 0.04 & 0.75 & 0.76 & A & 0/6392\ J170511.74$-$462809.5 & 608.53 & 0.57 & 0.03 & 2.2 & 0.03 & 2.03 & 0.03 & 0.91 & 0.00 & B & 5/6392\ J170410.40$-$462600.8 & 689.62 & 0.43 & 0.04 & 2.16 & 0.03 & 2.15 & 0.03 & 0.51 & 0.00 & B & 5/6392\ J170556.65$-$462409.7 & 690.38 & 0.53 & 0.03 & 1.74 & 0.03 & 1.97 & 0.05 & 0.71 & 0.00 & B & 5/6392\ 2FGL J1710.0$-$0323 & & & & & & & & & & &\ J170853.49$-$032323.4 & 1078.67 & 0.96 & 
0.05 & 2.56 & 0.11 & 2.59 & 0.25 & 0.22 & 0.19 & C & 0/6212\ J170911.00$-$033720.7 & 1160.96 & 1.01 & 0.07 & 2.77 & 0.16 & 2.81 & 0.31 & 0.11 & 0.15 & C & 0/6212\ J171053.82$-$030442.3 & 1344.04 & 1.12 & 0.06 & 2.87 & 0.09 & 2.78 & 0.16 & 0.18 & 0.29 & C & 0/6212\ 2FGL J1710.5$-$5020 & & & & & & & & & & &\ J171141.00$-$502817.2 & 803.32 & 0.98 & 0.06 & 2.56 & 0.05 & 1.98 & 0.11 & 0.41 & 0.3 & B & 0/2260\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J1726.6$-$3545 & & & & & & & & & & &\ J172706.54$-$354400.0 & 332.55 & 0.83 & 0.05 & 2.26 & 0.05 & 1.95 & 0.16 & 0.38 & 0.16 & B & 6/2747\ J172645.15$-$355126.8 & 340.31 & 0.78 & 0.04 & 1.97 & 0.04 & 2.01 & 0.06 & 0.52 & 0.00 & B & 6/2747\ J172743.40$-$355104.9 & 823.97 & 0.75 & 0.05 & 1.97 & 0.06 & 1.8 & 0.14 & 0.38 & 0.16 & B & 6/2747\ 2FGL J1727.6+0647 & & & & & & & & & & &\ J172644.95+063918.6 & 964.35 & 1.1 & 0.05 & 2.91 & 0.08 & 2.46 & 0.21 & 0.28 & 0.3 & B & 0/3011\ J172743.08+063729.0 & 610.69 & 0.88 & 0.06 & 2.79 & 0.13 & 2.16 & 0.41 & 0.15 & 0.16 & C & 0/3011\ 2FGL J1729.5$-$0854 & & & & & & & & & & &\ J172917.32$-$085503.3 & 214.0 & 1.3 & 0.04 & 2.73 & 0.06 & 2.63 & 0.11 & 0.00 & 0.43 & B & 1/4158\ 2FGL J1730.6$-$0353 & & & & & & & & & & &\ J173052.85$-$035247.1 & 223.9 & 1.27 & 0.04 & 2.94 & 0.04 & 2.13 & 0.1 & 0.00 & 0.49 & B & 0/1447\ 2FGL J1730.8+5427 & & & & & & & & & & &\ J173238.56+543233.4 & 952.59 & 1.09 & 0.04 & 2.66 & 0.05 & 2.53 & 0.14 & 0.44 & 0.42 & A & 0/5245\ J173145.57+540836.1 & 1246.33 & 1.12 & 0.05 & 2.74 & 0.09 & 2.27 & 0.32 & 0.24 & 0.26 & B & 3/5245\ J173018.81+543700.6 & 622.98 & 1.07 & 0.07 & 2.97 & 0.17 & 2.43 & 0.49 & 0.1 & 0.15 & C & 6/5245\ J172953.81+541836.0 & 768.47 & 1.06 & 0.06 & 2.8 & 0.11 & 2.58 & 0.3 & 0.18 & 0.22 & C & 6/5245\ J173014.69+540822.6 & 1223.9 & 1.0 & 0.06 & 2.9 & 0.12 & 2.1 & 0.44 & 0.17 & 0.17 & C & 6/5245\ 2FGL J1734.7$-$2533 & & & & & & & & & & &\ J173414.24$-$253645.5 & 456.0 & 0.73 & 0.03 & 2.21 & 0.03 & 1.42 & 0.03 & 0.81 & 0.38 & B & 3/2555\ 2FGL J1739.6$-$2726 & & & & & & & & & & &\ J173943.37$-$272858.7 & 181.4 & 0.52 & 0.05 & 2.11 & 0.03 & 1.61 & 0.03 & 0.72 & 0.00 & B & 11/4288\ 2FGL J1741.1$-$6750 & & & & & & & & & & &\ J174046.15$-$674325.0 & 471.78 & 1.15 & 0.05 & 2.86 & 0.07 & 2.24 & 0.2 & 0.29 & 0.34 & B & 0/2877\ J174059.77$-$680132.3 & 634.08 & 1.06 & 0.06 & 2.46 & 0.1 & 2.55 & 0.26 & 0.18 & 0.21 & C & 2/2877\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J1742.5$-$3323 & & & & & & & & & & &\ J174220.74$-$333005.4 & 414.44 & 0.55 & 0.11 & 2.26 & 0.08 & 2.14 & 0.06 & 0.28 & 0.07 & B & 13/2475\ 2FGL J1743.2$-$2304 & & & & & & & & & & &\ J174224.66$-$225942.4 & 710.92 & 1.02 & 0.06 & 2.56 & 0.04 & 1.88 & 0.04 & 0.58 & 0.35 & B & 0/3656\ 2FGL J1745.6+0203 & & & & & & & & & & &\ J174526.95+020532.7 & 208.91 & 1.06 & 0.04 & 2.63 & 0.04 & 2.19 & 0.08 & 0.57 & 0.57 & A & 0/4745\ J174647.05+020925.9 & 1066.59 & 1.07 & 0.04 & 2.61 & 0.06 & 2.27 & 0.16 & 0.38 & 0.39 & A & 0/4745\ J174507.82+015442.5 & 729.02 & 1.32 & 0.03 & 3.5 & 0.03 & 2.4 & 0.03 & 0.00 & 0.7 & B & 0/4745\ 2FGL J1746.5$-$3238 & & & & & & & & & & &\ J174609.56$-$323717.1 & 326.79 & 0.79 & 0.04 & 2.11 & 0.03 & 1.59 & 0.03 & 0.82 & 0.35 & B & 1/436\ 2FGL J1747.2$-$3507 & & & & & & & & & & &\ J174753.04$-$352154.3 & 975.52 & 0.75 & 0.03 & 2.35 & 0.03 & 
1.78 & 0.02 & 1.01 & 0.43 & A & 0/3389\ J174741.23$-$350334.3 & 385.87 & 0.72 & 0.04 & 1.97 & 0.03 & 1.34 & 0.03 & 0.87 & 0.36 & B & 6/3389\ 2FGL J1748.6$-$2913 & & & & & & & & & & &\ J174832.69$-$291040.8 & 210.12 & 0.48 & 0.05 & 1.74 & 0.06 & 2.09 & 0.09 & 0.36 & 0.00 & B & 0/518\ J174830.54$-$291822.6 & 292.64 & 0.68 & 0.04 & 2.0 & 0.03 & 1.49 & 0.05 & 0.67 & 0.28 & B & 0/518\ J174838.23$-$291609.7 & 137.48 & 1.16 & 0.2 & 2.78 & 0.11 & 2.77 & 0.1 & 0.1 & 0.17 & C & 0/518\ 2FGL J1749.1+0515 & & & & & & & & & & &\ J174850.00+050822.5 & 540.35 & 1.23 & 0.06 & 3.1 & 0.08 & 2.86 & 0.15 & 0.11 & 0.3 & C & 0/2912\ 2FGL J1754.1$-$2930 & & & & & & & & & & &\ J175438.09$-$291752.5 & 850.88 & 0.65 & 0.04 & 2.11 & 0.03 & 1.4 & 0.02 & 0.84 & 0.39 & A & 0/3242\ J175414.40$-$293326.6 & 187.81 & 0.83 & 0.12 & 2.08 & 0.07 & 1.34 & 0.06 & 0.26 & 0.15 & B & 9/3242\ J175304.57$-$292725.2 & 860.74 & 0.69 & 0.15 & 2.09 & 0.09 & 1.72 & 0.06 & 0.3 & 0.11 & B & 9/3242\ 2FGL J1759.2$-$3853 & & & & & & & & & & &\ J175903.29$-$384739.5 & 401.59 & 0.61 & 0.04 & 2.03 & 0.03 & 1.44 & 0.03 & 0.82 & 0.38 & A & 0/1855\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J1802.8$-$6706 & & & & & & & & & & &\ J180100.62$-$670503.7 & 647.12 & 1.0 & 0.04 & 3.12 & 0.06 & 2.47 & 0.13 & 0.00 & 0.41 & B & 1/3111\ 2FGL J1811.3$-$2421 & & & & & & & & & & &\ J181126.39$-$240459.9 & 1018.47 & 0.84 & 0.04 & 2.74 & 0.03 & 2.56 & 0.03 & 0.65 & 0.56 & A & 0/4178\ 2FGL J1813.6$-$2821 & & & & & & & & & & &\ J181345.24$-$283058.6 & 594.38 & 0.75 & 0.04 & 2.26 & 0.03 & 2.1 & 0.02 & 0.94 & 0.4 & A & 0/1731\ 2FGL J1819.3$-$1523 & & & & & & & & & & &\ J181947.65$-$152807.1 & 479.71 & 0.66 & 0.11 & 2.22 & 0.11 & 2.12 & 0.12 & 0.25 & 0.1 & B & 1/1363\ 2FGL J1821.8+0830 & & & & & & & & & & &\ J182134.70+084319.1 & 802.9 & 1.15 & 0.09 & 2.87 & 0.13 & 2.6 & 0.28 & 0.14 & 0.18 & C & 0/2585\ 2FGL J1824.5+1013 & & & & & & & & & & &\ J182448.39+100712.6 & 423.77 & 1.19 & 0.06 & 2.76 & 0.06 & 2.1 & 0.17 & 0.27 & 0.35 & B & 0/1066\ 2FGL J1827.6+1149 & & & & & & & & & & &\ J182721.63+114844.0 & 265.97 & 1.11 & 0.07 & 2.57 & 0.11 & 2.43 & 0.3 & 0.17 & 0.2 & C & 0/2202\ 2FGL J1831.2$-$1518 & & & & & & & & & & &\ J183033.83$-$151412.0 & 651.32 & 0.89 & 0.04 & 2.59 & 0.05 & 1.76 & 0.12 & 0.42 & 0.18 & B & 2/4390\ J183205.20$-$152327.9 & 782.97 & 0.85 & 0.07 & 2.23 & 0.06 & 1.54 & 0.12 & 0.31 & 0.15 & B & 2/4390\ J183050.58$-$153533.2 & 1104.95 & 0.46 & 0.04 & 1.85 & 0.03 & 1.71 & 0.03 & 0.74 & 0.23 & B & 2/4390\ 2FGL J1832.0$-$0200 & & & & & & & & & & &\ J183208.77$-$015414.2 & 421.64 & 0.72 & 0.04 & 2.18 & 0.05 & 2.04 & 0.12 & 0.44 & 0.18 & B & 0/1036\ 2FGL J1832.2$-$6502 & & & & & & & & & & &\ J183256.27$-$651006.5 & 535.98 & 1.17 & 0.06 & 2.91 & 0.12 & 2.77 & 0.26 & 0.12 & 0.22 & C & 0/1700\ 2FGL J1835.4+1036 & & & & & & & & & & &\ J183551.92+103056.8 & 510.44 & 1.04 & 0.04 & 2.94 & 0.04 & 2.55 & 0.07 & 0.52 & 0.61 & A & 0/2844\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J1835.4+1349 & & & & & & & & & & &\ J183522.00+135733.9 & 484.14 & 1.15 & 0.06 & 2.78 & 0.06 & 2.28 & 0.13 & 0.33 & 0.38 & A & 1/2344\ J183535.35+134848.9 & 141.16 & 0.78 & 0.04 & 2.26 & 0.05 & 2.02 & 0.15 & 0.44 & 0.19 & B & 1/2344\ J183539.22+135055.0 & 206.92 & 1.22 & 0.04 & 2.82 & 0.06 & 2.67 & 0.11 & 0.19 & 0.44 & B & 1/2344\ 
2FGL J1835.5$-$0649 & & & & & & & & & & &\ J183538.53$-$064854.2 & 71.66 & 0.61 & 0.04 & 1.74 & 0.04 & 2.12 & 0.09 & 0.55 & 0.00 & B & 0/622\ J183554.30$-$065518.0 & 427.7 & 0.74 & 0.05 & 1.88 & 0.05 & 1.96 & 0.09 & 0.39 & 0.00 & B & 0/622\ 2FGL J1837.9+3821 & & & & & & & & & & &\ J183656.31+382233.2 & 692.72 & 1.21 & 0.04 & 2.92 & 0.05 & 2.51 & 0.1 & 0.22 & 0.5 & B & 0/6850\ J183816.58+383708.9 & 992.13 & 1.33 & 0.03 & 2.93 & 0.04 & 2.52 & 0.08 & 0.00 & 0.59 & B & 0/6850\ J183812.98+380159.9 & 1170.32 & 0.99 & 0.05 & 2.42 & 0.09 & 2.33 & 0.28 & 0.27 & 0.21 & B & 0/6850\ J183753.23+384500.2 & 1429.91 & 1.34 & 0.04 & 2.98 & 0.04 & 2.2 & 0.08 & 0.00 & 0.55 & B & 0/6850\ J183742.09+381955.6 & 167.03 & 1.14 & 0.06 & 2.75 & 0.1 & 2.16 & 0.36 & 0.18 & 0.23 & C & 2/6850\ J183828.81+382705.1 & 534.57 & 0.99 & 0.05 & 2.45 & 0.12 & 2.6 & 0.3 & 0.2 & 0.17 & C & 2/6850\ J183836.32+382924.4 & 694.43 & 1.22 & 0.06 & 3.06 & 0.1 & 2.67 & 0.26 & 0.11 & 0.25 & C & 2/6850\ J183746.20+380846.5 & 750.76 & 1.04 & 0.07 & 2.85 & 0.13 & 2.69 & 0.3 & 0.14 & 0.18 & C & 2/6850\ J183642.07+381203.1 & 1016.29 & 1.05 & 0.07 & 3.06 & 0.13 & 2.17 & 0.47 & 0.12 & 0.16 & C & 2/6850\ 2FGL J1839.0$-$0102 & & & & & & & & & & &\ J183839.61$-$010614.0 & 435.0 & 0.85 & 0.04 & 2.69 & 0.03 & 2.21 & 0.04 & 0.79 & 0.48 & A & 1/1441\ 2FGL J1842.3$-$5839 & & & & & & & & & & &\ J184317.58$-$583752.0 & 452.64 & 1.25 & 0.05 & 2.91 & 0.07 & 2.72 & 0.13 & 0.00 & 0.38 & B & 1/883\ J184240.89$-$584439.7 & 340.91 & 0.95 & 0.07 & 2.99 & 0.11 & 2.91 & 0.21 & 0.11 & 0.16 & C & 0/883\ 2FGL J1844.3+1548 & & & & & & & & & & &\ J184425.36+154645.9 & 150.64 & 0.91 & 0.03 & 2.36 & 0.04 & 1.95 & 0.09 & 0.57 & 0.24 & B & 0/350\ 2FGL J1846.6$-$2519 & & & & & & & & & & &\ J184700.67$-$245940.2 & 1215.87 & 1.19 & 0.11 & 2.75 & 0.1 & 2.13 & 0.28 & 0.12 & 0.19 & C & 2/5081\ 2FGL J1847.2$-$0236 & & & & & & & & & & &\ J184633.39$-$023728.2 & 613.36 & 0.69 & 0.04 & 2.28 & 0.08 & 1.47 & 0.3 & 0.25 & 0.12 & B & 0/1203\ 2FGL J1857.6+0211 & & & & & & & & & & &\ J185756.07+020729.2 & 323.88 & 0.58 & 0.05 & 2.1 & 0.07 & 1.98 & 0.22 & 0.3 & 0.07 & B & 0/415\ 2FGL J1901.1+0427 & & & & & & & & & & &\ J190055.36+041949.3 & 517.32 & 0.95 & 0.04 & 2.57 & 0.04 & 1.94 & 0.07 & 0.59 & 0.34 & B & 0/1306\ 2FGL J1902.7$-$7053 & & & & & & & & & & &\ J190317.61$-$705539.6 & 199.8 & 0.93 & 0.06 & 2.8 & 0.14 & 2.43 & 0.36 & 0.16 & 0.16 & C & 1/1056\ 2FGL J1904.8$-$0705 & & & & & & & & & & &\ J190444.57$-$070740.0 & 169.32 & 1.1 & 0.12 & 2.9 & 0.1 & 2.48 & 0.17 & 0.18 & 0.23 & C & 0/1562\ 2FGL J1914.0+1436 & & & & & & & & & & &\ J191415.95+142839.3 & 484.89 & 0.77 & 0.03 & 2.07 & 0.03 & 1.28 & 0.04 & 0.7 & 0.36 & B & 0/2020\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J1917.0$-$3027 & & & & & & & & & & &\ J191637.70$-$303356.6 & 525.92 & 1.19 & 0.06 & 3.01 & 0.08 & 2.47 & 0.2 & 0.15 & 0.28 & C & 1/1040\ 2FGL J1923.4+2013 & & & & & & & & & & &\ J192142.39+201107.1 & 1491.67 & 1.08 & 0.12 & 2.61 & 0.07 & 2.29 & 0.13 & 0.28 & 0.27 & B & 7/14898\ J192540.72+201244.2 & 1870.96 & 0.95 & 0.05 & 2.64 & 0.04 & 1.88 & 0.06 & 0.55 & 0.34 & B & 7/14898\ J192501.65+204022.6 & 2080.26 & 0.95 & 0.06 & 2.28 & 0.04 & 2.01 & 0.08 & 0.43 & 0.22 & B & 7/14898\ 2FGL J1924.9$-$1036 & & & & & & & & & & &\ J192501.64$-$104315.3 & 409.6 & 1.36 & 0.07 & 3.31 & 0.06 & 2.65 & 0.09 & 0.00 & 0.4 & B & 0/1471\ 2FGL J1931.8+1325 & & & & & & & & & & &\ J193226.87+134708.5 & 1394.03 & 
1.08 & 0.13 & 2.54 & 0.09 & 2.43 & 0.2 & 0.17 & 0.2 & C & 2/8391\ 2FGL J1936.5$-$0855 & & & & & & & & & & &\ J193635.53$-$091142.4 & 943.64 & 1.27 & 0.05 & 2.91 & 0.05 & 2.46 & 0.1 & 0.00 & 0.44 & B & 0/5385\ J193718.37$-$090822.4 & 986.29 & 0.94 & 0.05 & 2.46 & 0.1 & 2.44 & 0.28 & 0.23 & 0.17 & C & 1/5385\ 2FGL J1944.3+7325 & & & & & & & & & & &\ J194738.74+732636.3 & 843.32 & 1.27 & 0.04 & 2.83 & 0.04 & 2.38 & 0.07 & 0.00 & 0.61 & B & 2/1991\ J194312.51+731730.7 & 559.24 & 0.95 & 0.05 & 2.88 & 0.1 & 2.39 & 0.29 & 0.19 & 0.25 & C & 0/1991\ 2FGL J1947.8$-$0739 & & & & & & & & & & &\ J194757.07$-$075000.1 & 614.49 & 1.09 & 0.06 & 2.69 & 0.13 & 2.44 & 0.48 & 0.14 & 0.17 & C & 1/4369\ 2FGL J1949.7+2405 & & & & & & & & & & &\ J195012.96+235508.8 & 712.46 & 0.93 & 0.04 & 2.58 & 0.03 & 2.32 & 0.04 & 0.75 & 0.6 & A & —\ J195009.38+235610.2 & 634.3 & 0.61 & 0.04 & 2.13 & 0.04 & 1.45 & 0.12 & 0.41 & 0.18 & B & —\ J194857.53+240407.6 & 651.18 & 0.74 & 0.03 & 2.18 & 0.03 & 2.44 & 0.04 & 0.78 & 0.23 & B & —\ J194845.97+240237.4 & 820.79 & 0.79 & 0.05 & 2.12 & 0.06 & 2.19 & 0.08 & 0.47 & 0.00 & B & —\ 2FGL J1950.3+1223 & & & & & & & & & & &\ J195014.42+123119.6 & 461.61 & 1.01 & 0.07 & 2.8 & 0.07 & 2.7 & 0.1 & 0.27 & 0.37 & B & 0/3119\ 2FGL J2006.2$-$0929 & & & & & & & & & & &\ J200802.67$-$093641.9 & 1617.39 & 0.81 & 0.05 & 2.75 & 0.1 & 2.4 & 0.24 & 0.19 & 0.14 & C & 2/15089\ J200615.96$-$095721.0 & 1669.93 & 1.16 & 0.05 & 2.7 & 0.08 & 2.3 & 0.24 & 0.23 & 0.3 & C & 2/15089\ J200728.94$-$090737.4 & 1688.58 & 1.24 & 0.06 & 2.82 & 0.1 & 2.31 & 0.28 & 0.12 & 0.25 & C & 2/15089\ 2FGL J2006.5$-$2256 & & & & & & & & & & &\ J200725.61$-$230605.9 & 906.51 & 1.05 & 0.05 & 2.91 & 0.09 & 2.39 & 0.21 & 0.28 & 0.29 & B & 0/2088\ 2FGL J2006.9$-$1734 & & & & & & & & & & &\ J200626.14$-$173418.6 & 429.99 & 1.04 & 0.05 & 2.78 & 0.08 & 2.37 & 0.24 & 0.28 & 0.29 & B & 0/2143\ J200623.81$-$173639.2 & 473.07 & 1.16 & 0.07 & 3.09 & 0.11 & 2.73 & 0.23 & 0.13 & 0.24 & C & 0/2143\ \[tab:main\] [|lccccccccccc|]{} Name & distance (arcsec) & $c_{12}$ & $\sigma_{12}$ & $c_{23}$ & $\sigma_{23}$ & $c_{34}$ & $\sigma_{34}$ & $s_b$ & $s_q$ & $class$ & $\pi$\ 2FGL J2009.2$-$1505 & & & & & & & & & & &\ J200901.29$-$151620.3 & 700.87 & 1.02 & 0.06 & 3.02 & 0.1 & 2.52 & 0.24 & 0.17 & 0.26 & C & 0/1969\ 2FGL J2031.4$-$1842 & & & & & & & & & & &\ J203142.40$-$182208.0 & 1219.37 & 0.89 & 0.04 & 2.39 & 0.09 & 2.31 & 0.26 & 0.29 & 0.18 & B & 0/3790\ J203159.63$-$183824.4 & 499.26 & 0.92 & 0.07 & 2.66 & 0.11 & 2.3 & 0.32 & 0.22 & 0.18 & C & 1/3790\ J203003.91$-$184441.6 & 1208.58 & 1.23 & 0.06 & 2.85 & 0.12 & 2.43 & 0.32 & 0.11 & 0.23 & C & 1/3790\ J203035.02$-$185910.1 & 1269.41 & 1.24 & 0.06 & 2.95 & 0.1 & 2.63 & 0.2 & 0.12 & 0.27 & C & 1/3790\ 2FGL J2044.4$-$4757 & & & & & & & & & & &\ J204444.65$-$481006.7 & 764.03 & 1.02 & 0.03 & 2.8 & 0.04 & 2.44 & 0.07 & 0.64 & 0.65 & A & 0/1279\ J204520.57$-$475648.5 & 515.06 & 1.39 & 0.04 & 2.89 & 0.06 & 2.41 & 0.14 & 0.00 & 0.38 & B & 0/1279\ 2FGL J2131.0$-$5417 & & & & & & & & & & &\ J213014.80$-$540748.6 & 728.48 & 1.09 & 0.05 & 2.6 & 0.13 & 2.64 & 0.32 & 0.16 & 0.18 & C & 1/2080\ 2FGL J2200.1$-$6931 & & & & & & & & & & &\ J215916.12$-$692856.1 & 324.01 & 1.11 & 0.05 & 2.8 & 0.09 & 2.3 & 0.28 & 0.25 & 0.26 & B & 0/910\ \[tab:main\] [^1]: http://www.asdc.asi.it/bzcat/ [^2]: http://wise2.ipac.caltech.edu/docs/release/prelim/ [^3]: http://wise2.ipac.caltech.edu/docs/release/allsky/ [^4]: http://wise2.ipac.caltech.edu/docs/release/prelim/expsup/sec2\_3g.html [^5]: 
http://wise2.ipac.caltech.edu/docs/release/prelim/figures/prelim\_3x3-w1-equ.jpg
--- abstract: 'The strong chromatic index of a graph $G$, denoted ${\chi_{s}'(G)}$, is the least number of colors needed to edge-color $G$ so that edges at distance at most two receive distinct colors. The strong list chromatic index, denoted ${\chi_{\ell,s}'(G)}$, is the least integer $k$ such that if arbitrary lists of size $k$ are assigned to each edge then $G$ can be edge-colored from those lists where edges at distance at most two receive distinct colors. We use the discharging method, the Combinatorial Nullstellensatz, and computation to show that if $G$ is a subcubic planar graph with $\operatorname{girth}(G) \geq 41$ then ${\chi_{\ell,s}'(G)}\leq 5$, answering a question of Borodin and Ivanova \[Precise upper bound for the strong edge chromatic number of sparse planar graphs, *Discuss. Math. Graph Theory*, 33(4), (2014) 759–770\]. We further show that if $G$ is a subcubic planar graph and $\operatorname{girth}(G) \geq 30$, then ${\chi_{s}'(G)}\leq 5$, improving a bound from the same paper. Finally, if $G$ is a planar graph with maximum degree at most four and $\operatorname{girth}(G) \geq 28$, then ${\chi_{s}'(G)}\leq 7$, improving a more general bound of Wang and Zhao from \[Odd graphs and its application on the strong edge coloring, `arXiv:1412.8358`\] in this case.' author: - 'Philip DeOrsey$^{1,6}$' - 'Jennifer Diemunsch$^{2,6}$' - 'Michael Ferrara$^{2,6,7}$' - 'Nathan Graber$^{2,6}$' - 'Stephen G. Hartke$^{3,6,8}$' - 'Sogol Jahanbekam$^{2,6}$' - 'Bernard Lidický$^{4,6,9}$' - 'Luke Nelsen$^{2,6}$' - 'Derrick Stolee$^{4,5,6}$' - 'Eric Sullivan$^{2,6}$' title: On the Strong Chromatic Index of Sparse Graphs --- Introduction ============ A [*proper edge-coloring*]{} of a graph $G$ is an assignment of colors to the edges so that incident edges receive distinct colors. A [*strong edge-coloring*]{} of a graph $G$ is an assignment of colors to the edges so that edges at distance at most two receive distinct colors. A proper edge-coloring is a decomposition of $G$ into matchings, while a strong edge-coloring is a decomposition of $G$ into *induced* matchings. Fouquet and Jolivet [@Fouquet83; @Fouquet84] defined the [*strong chromatic index*]{} of a graph $G$, denoted ${\chi_{s}'(G)}$, as the minimum integer $k$ such that $G$ has a strong edge-coloring using $k$ colors. Erdős and Nešetřil gave the following conjecture, which is still open, and provided an example to show that it would be sharp, if true. \[Erdos\] For every graph $G$, $\chi'_s(G) \leq \dfrac{5}{4}\Delta(G)^2$ when $\Delta(G)$ is even, and $\chi'_s(G) \leq \dfrac{1}{4}(5\Delta(G)^2-2\Delta(G)+1)$ when $\Delta(G)$ is odd. Towards this conjecture, Molloy and Reed [@MR] bounded ${\chi_{s}'(G)}$ away from the trivial upper bound of $2\Delta(G)(\Delta(G)-1)+1$ by showing that every graph $G$ with sufficiently large maximum degree satisfies ${\chi_{s}'(G)}\leq 1.998\Delta(G)^2$. Bruhn and Joos [@BJ] have announced an improvement, claiming ${\chi_{s}'(G)}\leq 1.93\Delta(G)^2$. The focus of this paper is the study of strong edge-colorings of *subcubic graphs*, those with maximum degree at most three, and *subquartic graphs*, those with maximum degree at most four. Faudree, Gyárfás, Schelp, and Tuza [@Faudree90] studied ${\chi_{s}'(G)}$ in the class of subcubic graphs, and gave the following conjectures. Let $G$ be a subcubic graph. (1) ${\chi_{s}'(G)}\leq 10$. (2) If $G$ is bipartite, then ${\chi_{s}'(G)}\leq 9$. \[con:bipartite\] (3) If $G$ is planar, then ${\chi_{s}'(G)}\leq 9$.
\[con:planar\] (4) If $G$ is bipartite and for each edge $xy \in E(G)$, $d(x) + d(y) \leq 5$, then ${\chi_{s}'(G)}\leq 6$. (5) If $G$ is bipartite and $C_4 \not \subset G$, then ${\chi_{s}'(G)}\leq 7$. (6) If $G$ is bipartite and its girth is large, then ${\chi_{s}'(G)}\leq 5$. \[con:girth\] \[con:Faud\] Several of these conjectures have been verified, including (1) by Andersen [@Andersen] and (2) by Steger and Yu [@StegerYu]. Quite recently, Kostochka, Li, Ruksasakchai, Santana, Wang, and Yu [@KLRSWY2014] announced an affirmative resolution to (3). This result is best possible since the prism, shown in Figure \[prism\], is a subcubic planar graph with ${\chi_{s}'(G)}= 9$. (Figure \[prism\]: the prism.) Several papers prove sharper bounds on the strong chromatic index of planar graphs with additional structure [@Fouquet84; @Hocquard11; @Hocquard13; @Hudak], generally by introducing conditions on maximum average degree or girth to ensure that the target graph is sufficiently sparse. For a graph $G$, the [*maximum average degree*]{} of $G$, denoted $\operatorname{mad}(G)$, is the maximum of average degrees over all subgraphs of $G$. Hocquard, Montassier, Raspaud, and Valicov [@Hocquard11; @Hocquard13] proved the following. \[thm:hocquard\] Let $G$ be a subcubic graph. 1. If $\operatorname{mad}(G) < \frac{7}{3}$, then ${\chi_{s}'(G)}\leq 6$. \[part1\] 2. If $\operatorname{mad}(G) < \frac{5}{2}$, then ${\chi_{s}'(G)}\leq 7$. \[part2\] 3. If $\operatorname{mad}(G) < \frac{8}{3}$, then ${\chi_{s}'(G)}\leq 8$. \[thm:Hoc\] (Figures \[fig:house\] and \[fig:diamond\].) Parts (\[part1\]) and (\[part2\]) of Theorem \[thm:hocquard\] are sharp by the graphs shown in Figures \[fig:house\] and \[fig:diamond\], respectively. An elementary application of Euler’s Formula (see [@west]) gives the following. If $G$ is a planar graph with girth $g$ then $\operatorname{mad}(G) < \frac{2g}{g-2}$. \[prop:mad\] Theorem \[thm:Hoc\] and Proposition \[prop:mad\] yield the following corollary. Let $G$ be a subcubic planar graph with girth $g$. 1. If $g \geq 14$, then ${\chi_{s}'(G)}\leq 6$. 2. If $g \geq 10$, then ${\chi_{s}'(G)}\leq 7$. 3. If $g \geq 8$, then ${\chi_{s}'(G)}\leq 8$. Note that no non-trivial sparsity condition on a graph $G$ with maximum degree $d$ will guarantee that ${\chi_{s}'(G)}< 2d - 1$ since any graph having two adjacent vertices of degree $d$ requires at least $2d-1$ colors to strongly edge-color the graph. We give sparsity conditions that imply a subcubic planar graph has strong chromatic index at most five and a subquartic planar graph has strong chromatic index at most seven. Previous work in this direction was initiated by Borodin and Ivanova [@BI], Chang, Montassier, Pěcher, and Raspaud [@CMPR], and most recently extended by Wang and Zhao [@WZ].
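The corollary above is obtained by straightforward arithmetic from Proposition \[prop:mad\] and Theorem \[thm:hocquard\]; the following small Python sketch (ours, purely illustrative — the helper name `mad_bound` does not come from the paper) checks that each girth threshold meets the corresponding maximum average degree threshold.

```python
from fractions import Fraction

def mad_bound(girth):
    # Proposition [prop:mad]: a planar graph with this girth satisfies mad(G) < 2g/(g-2).
    return Fraction(2 * girth, girth - 2)

# Girth thresholds of the corollary paired with the mad thresholds of Theorem [thm:hocquard].
for girth, mad_threshold in [(14, Fraction(7, 3)), (10, Fraction(5, 2)), (8, Fraction(8, 3))]:
    assert mad_bound(girth) <= mad_threshold
    print(girth, mad_bound(girth), "<=", mad_threshold)
```

Since Proposition \[prop:mad\] is a strict inequality, equality of the two bounds (for example $2\cdot 14/(14-2) = 7/3$) is already enough for Theorem \[thm:hocquard\] to apply.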
The current-best bounds are given by the following two results. \[thm:bi\] Let $G$ be a subcubic graph. 1. If $G$ has girth at least $9$ and $\operatorname{mad}(G) < 2 + \frac{2}{23}$, then $\chi_s'(G) \leq 5$. 2. If $G$ is planar and has girth at least $41$, then $\chi_s'(G) \leq 5$. \[thm:wz\] Fix $d \geq 4$ and let $G$ be a graph with $\Delta(G) \leq d$. 1. If $G$ has girth at least $2d-1$ and $\operatorname{mad}(G) < 2 + \frac{2}{6d-7}$, then $\chi_s'(G) \leq 2d-1$. 2. If $G$ is planar and has girth at least $10d-4$, then $\chi_s'(G) \leq 2d-1$. One barrier to proving sparsity conditions that imply ${\chi_{s}'(G)}\leq 5$ is that there exist graphs $G$ with $\operatorname{mad}(G) = 2$ and ${\chi_{s}'(G)}= 6$. Let $S_3$ be a triangle with pendant edges at each vertex, and let $S_4$ be a $4$-cycle with pendant edges at two adjacent vertices. For $k \geq 5$, let $S_k$ be a $k$-cycle with pendant edges at each vertex. Each of $S_3$, $S_4$ and $S_7$ has maximum average degree $2$ and strong chromatic index at least $6$; see Figure \[fig:S3S4S5\]. However, these graphs are 6-critical with respect to ${\chi_{s}'(G)}$, as the removal of any edge from $S_3$, $S_4$ or $S_7$ results in a graph that has a strong edge-coloring using five colors. Our main theorem demonstrates that if these few graphs are avoided, and the maximum average degree is not too large, then we can find a strong 5-edge-coloring, improving Theorem \[thm:bi\]. \[thm:sparse\] Let $G$ be a subcubic graph. 1. If $G$ does not contain $S_3$, $S_4$, or $S_7$ and $\operatorname{mad}(G) < 2 + \frac{1}{7}$, then ${\chi_{s}'(G)}\leq 5$. 2. If $G$ is planar and has girth at least $30$, then ${\chi_{s}'(G)}\leq 5$. The bound in Theorem \[thm:sparse\] is likely not sharp, but is close to optimal. The graph in Figure \[fig:thetaexample\] is subcubic, avoids $S_3$, $S_4$, and $S_7$, and satisfies both ${\chi_{s}'(G)}= 6$ and $\operatorname{mad}(G) = 2 + \frac{1}{6}$. (Figure \[fig:thetaexample\]: two $3$-vertices joined by three paths having three, three, and four internal vertices, each internal vertex carrying a pendant edge.) Using similar methods, we improve the bounds in Theorem \[thm:wz\] when $d = 4$. \[thm:sparse4\] Let $G$ be a subquartic graph. 1. If $G$ has girth at least $7$ and $\operatorname{mad}(G) < 2 + \frac{2}{13}$, then $\chi_s'(G) \leq 7$. 2. If $G$ is planar and has girth at least $28$, then $\chi_s'(G) \leq 7$. We also consider a list variation of the strong chromatic index of $G$, first introduced by Vu [@vu]. A [*strong list edge-coloring*]{} of a graph $G$ is an assignment of lists to $E(G)$ such that a strong edge-coloring can be chosen from the lists at each edge. The minimum $k$ such that a graph $G$ can be strongly list edge-colored using any lists of size at least $k$ on each edge is the [*strong list chromatic index*]{} of $G$, denoted ${\chi_{\ell,s}'(G)}$. Borodin and Ivanova [@BI] asked if there are sparsity conditions that imply ${\chi_{\ell,s}'(G)}\leq 2d-1$ for a planar graph $G$ with maximum degree $d$.
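The small graphs discussed above can be checked directly: a strong edge-coloring of $G$ is exactly a proper vertex coloring of the conflict graph whose vertices are the edges of $G$, with two edges adjacent when they are at distance at most two. The following Python sketch is ours and is not the authors' verification code; the graph encodings and function names are illustrative assumptions. It confirms by backtracking that $S_3$ has no strong $5$-edge-coloring and that the prism of Figure \[prism\] has no strong $8$-edge-coloring.

```python
from itertools import combinations

def conflicts(edges):
    """For each edge index, the indices of edges at distance at most two
    (sharing a vertex, or joined by some edge of the graph)."""
    conf = {i: set() for i in range(len(edges))}
    for i, j in combinations(range(len(edges)), 2):
        a, b = set(edges[i]), set(edges[j])
        if (a & b) or any(set(g) & a and set(g) & b for g in edges):
            conf[i].add(j)
            conf[j].add(i)
    return conf

def has_strong_coloring(edges, k):
    """Backtracking test: do k colors suffice for a strong edge-coloring?"""
    conf = conflicts(edges)
    order = sorted(conf, key=lambda e: -len(conf[e]))  # most constrained edges first
    color = {}

    def extend(i):
        if i == len(order):
            return True
        e = order[i]
        used = {color[f] for f in conf[e] if f in color}
        for c in range(k):
            if c not in used:
                color[e] = c
                if extend(i + 1):
                    return True
                del color[e]
        return False

    return extend(0)

# S_3: a triangle with a pendant edge at each vertex.
S3 = [(0, 1), (1, 2), (0, 2), (0, 3), (1, 4), (2, 5)]
# The prism: two triangles joined by a perfect matching.
prism = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (0, 3), (1, 4), (2, 5)]

print(has_strong_coloring(S3, 5), has_strong_coloring(S3, 6))        # False True
print(has_strong_coloring(prism, 8), has_strong_coloring(prism, 9))  # False True
```

Exhaustive search of this kind is only feasible for very small graphs; it is included here only to make the stated lower bounds easy to reproduce.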
We generalize the bounds in Theorem \[thm:bi\] to apply to list coloring. \[thm:mainlist\] Let $G$ be a subcubic graph. 1. If $G$ has girth at least $9$ and $\operatorname{mad}(G) < 2 + \frac{2}{23}$, then ${\chi_{\ell,s}'(G)}\leq 5$. 2. If $G$ is planar and has girth at least $41$, then ${\chi_{\ell,s}'(G)}\leq 5$. The proofs of Theorems \[thm:sparse\], \[thm:sparse4\], and \[thm:mainlist\] use the discharging method. We begin by proving Theorem \[thm:mainlist\] in Section \[sec:list\] as the proof is shorter and the one reducible configuration is used again in the proof of Theorem \[thm:sparse\] in Section \[sec:sparse\]. Preliminaries and Notation -------------------------- Throughout this paper we will only consider simple, finite, undirected graphs. We refer to [@west] for any undefined definitions and notation. A graph $G$ has vertex set $V(G)$, edge set $E(G)$, and maximum degree $\Delta(G)$. If a vertex $v$ has degree $j$ we refer to it as a [*$j$-vertex*]{}, and if $v$ has a neighbor that is a $j$-vertex, we say it is a [*$j$-neighbor*]{} of $v$. When $G$ is planar we let $F(G)$ denote the set of faces of $G$, and $\ell(f)$ denote the length of a face $f$. The [*girth*]{} of a graph $G$ is length of its shortest cycle. A graph $G$ is $\{a,b\}$-regular if for every $v$ in $G$, the degree of $v$ is either $a$ or $b$. Every graph $G$ with maximum degree $d$ is contained in a prescribed $\{1,d\}$-regular graph, denoted $\operatorname{ex}_d(G)$, the [*$d$-expansion*]{} of $G$. To construct $\operatorname{ex}_d(G)$, add $d-d(v)$ pendant edges to each vertex $v$ in $G$ where $d(v) \in \{2,\dots, d\}$. Additionally, let the [*contracted graph*]{} of $G$, denoted $\operatorname{ct}(G)$ be the graph obtained by deleting all 1-vertices of $G$. A vertex $v$ in $G$ is a *$2{{\ensuremath{^\perp}}}$-vertex* if $v$ is a 2-vertex in $\operatorname{ct}(G)$. Thus, for the remainder of the paper a vertex $v$ is a *$k^+$-vertex* in $G$ if it has degree at least $k$ in $\operatorname{ct}(G)$. We will make use of the discharging method for some of our results. For an introduction to this method, see the survey by Cranston and West [@CW]. We will directly use two standard results that can be proven using this method. Both of Theorems \[thm:bi\] and \[thm:wz\] rely on Lemmas \[lma:cw\] and \[lma:nrs\]. Let $G$ be a graph and $\operatorname{ct}(G$) be the contracted graph. An *$\ell$-thread* is a path $v_1\dots v_\ell$ in $\operatorname{ct}(G)$ where each $v_i$ is a $2{{\ensuremath{^\perp}}}$-vertex. \[lma:cw\] If $G$ is a graph with girth at least $\ell+1$ and $\operatorname{mad}(G) < 2 + \frac{2}{3\ell - 1}$, then $\operatorname{ct}(G)$ contains a 1-vertex or an $\ell$-thread. \[lma:nrs\] If $G$ is a planar graph with girth at least $5\ell + 1$, then $\operatorname{ct}(G)$ contains a 1-vertex or an $\ell$-thread. Strong List Edge-Coloring of Subcubic Graphs {#sec:list} ============================================ In this section, we prove Theorem \[thm:mainlist\]. Our proof uses the discharging method, wherein we assign an initial charge to the vertices and faces of a theoretical minimal counterexample. This initial charge is then disbursed according to a set of discharging rules in order to draw a contradiction to the existence of such a minimal counterexample. We will often make use of the following, which is another simple and well known application of Euler’s Formula. 
\[prop:sum\] In a planar graph $G$, $$\sum_{f \in F(G)} ( \ell(f) - 6 ) + \sum_{v \in V(G)} (2d(v) - 6) = -12.$$ We will also use the Combinatorial Nullstellensatz, which will be applied to show we can extend certain list colorings. \[CN\] Let $f$ be a polynomial of degree $t$ in $m$ variables over a field $\mathbb{F}$. If there is a monomial $\prod x_i^{t_i}$ in $f$ with $\sum t_i=t$ whose coefficient is nonzero in $\mathbb{F}$, then $f$ is nonzero at some point of $\prod S_i$, where each $S_i$ is a set of $t_i+1$ distinct values in $\mathbb{F}$. The second item of Theorem \[thm:mainlist\] follows from the following strengthened theorem. Let $G$ be a planar $\{1,3\}$-regular graph of girth at least $41$, and let $p \in V(G)$. Assign distinct colors to the edges incident to $p$ and let $L$ be a $5$-list-assignment to the remaining edges of $G$. There exists a strong edge-coloring $c$ where $c(e) \in L(e)$ for all $e \in E(G)$. For the sake of contradiction, select $G$, $p$, $c$, and $L$ as in the theorem statement, and assume there does not exist a strong edge-coloring of $E(G)$ using colors from $L$. In this selection, minimize $n(G)$. Note that $G$ is connected and $e(G) > 5$. We can further assume that $d(p) > 1$, since if $d(p)=1$ and $\{p'\}=N(p)$ then we can instead color the edges incident to $p'$. \[lma:cutedge\] There does not exist a cut-edge $uv$ such that $d(u) = d(v) = 3$. Suppose that $G$ contains a cut-edge $uv$ with $d(u) = d(v) = 3$. There are exactly two components in $G - uv$, call them $G_1$ and $G_2$, with $u \in V(G_1)$ and $v \in V(G_2)$. Without loss of generality, $p \in V(G_1)$. For each $i \in \{1,2\}$, let $G_i' = G_i + uv$. Since $d(v) = 3$, $n(G_1') < n(G)$. Thus there is a strong edge-coloring of $G_1'$ using the 5-list-assignment $L$. Next, color the other two edges incident to $v$ using colors distinct from those on the edges incident to $u$. Now, $G_2'$ is a subcubic planar graph of girth at least 41 with distinctly colored edges about the vertex $v$ and $n(G_2') < n(G)$. Thus, there is an extension of the coloring to $G_2'$. The colorings of $G_1'$ and $G_2'$ form a strong edge-coloring of $G$, a contradiction. Define a *$k$-caterpillar* to be a $k$-thread $v_1,\dots,v_k$ in $G$ where $p \notin \{v_1,\dots,v_k\}$. Figure \[8cat\] shows an $8$-caterpillar. (Figure \[8cat\]: an $8$-caterpillar $v_1,\dots,v_8$ with pendant vertices $v_1',\dots,v_8'$, endpoints $v_0$ and $v_9$, and remaining neighbors $v_0', u_0'$ of $v_0$ and $v_9', u_9'$ of $v_9$.) \[lma:caterpillar\] $G$ does not contain an $8$-caterpillar. We will show that if $G-p$ contains an 8-caterpillar, then $G$ has a strong edge $L$-coloring. If $v_1,\dots,v_8$ form an 8-caterpillar, then let $v_i'$ be the 1-vertex adjacent to $v_i$, and let $v_0$ and $v_9$ be the other neighbors of $v_1$ and $v_8$, respectively. For $i \in \{0,9\}$, let $v_i'$ and $u_i'$ be the neighbors of $v_i$ other than $v_1$ or $v_8$. By removing all edges incident to $v_2,\dots, v_7$ and to $v_1',\dots,v_8'$, as well as any isolated vertices that are produced, we obtain a graph $G'$ with fewer vertices than $G$, so we can strongly edge-color $G'$ with 5 colors.
We fix such a coloring of $G'$ and generate a contradiction by extending this coloring to a strong edge-coloring of $G$. Suppose that $c_1, \dots, c_6$ are the colors of the edges incident to the vertices $v_0$ and $v_9$, and assign variables $y_1, \dots, y_8$ to the pendant edges, and variables $x_1, \dots, x_7$ to the interior edges as shown in Figure \[fig:lcat\]. (Figure \[fig:lcat\]: the $8$-caterpillar with the interior edge $v_iv_{i+1}$ labeled $x_i$, the pendant edge at $v_i$ labeled $y_i$, the edge $v_0v_1$ colored $c_3$, the edge $v_8v_9$ colored $c_4$, and the remaining edges at $v_0$ and $v_9$ colored $c_1,c_2$ and $c_5,c_6$, respectively.) Identifying the conflicts between variables and colors produces the following polynomial, $$\begin{aligned} f(y_1, \dots, y_8,x_1,\dots, x_7) &= (y_2 - c_3)(x_2 - c_3)(y_7 - c_4)(x_6 - c_4) \\ &\quad\cdot \prod_{i=1}^3(x_1 - c_i)\prod_{i=1}^3(y_1 - c_i) \prod_{i=4}^6 (x_7 - c_i) \prod_{i=4}^6 (y_8 - c_i)\\ &\quad\cdot \prod_{j -i \in \{1,2\}}(x_i-x_j) \prod_{j-i = 1}(y_i - y_j)\prod_{i - j \in \{-1,0,1,2\}} (y_i - x_j). \end{aligned}$$ We will use the Combinatorial Nullstellensatz to show that there is an assignment of colors $\hat{c}_1,\dots, \hat{c}_8$ and $c_1',\dots, c_7'$ such that $f(\hat{c}_1,\dots,\hat{c}_8,c_1',\dots,c_7')\ne 0$. Such an assignment of colors would extend the coloring of $G'$ to a strong edge-coloring of $G$. If the coefficient of $$(x_{1}~ x_{2}~ x_{3}~ x_{4} ~x_{5} ~x_{6} ~x_{7} ~y_{1} ~y_{2} ~y_{3} ~y_{4} ~y_{5} ~y_{6} ~y_{7} ~y_{8})^4$$ is nonzero, then there are values from $L$ for $x_1,\ldots,x_7,y_1,\ldots,y_8$ such that $f$ is nonzero by Theorem \[CN\]. A computation in the Magma algebra system [@magma] shows that this monomial has coefficient $-2$, and thus there is a strong edge-coloring using the 5-list assignment[^1]. Thus, the 8-caterpillar does not exist in a vertex-minimal counterexample. Note that the proof in Lemma \[lma:caterpillar\] cannot be extended to exclude a 7-caterpillar in $G$, as there exists a 5-coloring of the external edges that does not extend to the caterpillar, even when the lists are all the same. To complete the proof, we apply a discharging argument to $\operatorname{ct}(G)$.[^2] First, observe that by Lemma \[lma:cutedge\], $\operatorname{ct}(G)$ is 2-connected and so every face is a simple cycle of length at least 41. Also observe that by Lemma \[lma:caterpillar\], $\operatorname{ct}(G)$ does not contain a path of length 8 where every vertex is of degree 2, unless one of those vertices is $p$. Assign charge $2d(v)-6$ to every vertex $v \neq p$, charge $\ell(f) - 6$ to every face $f$, and charge $2d(p)+5$ to $p$. By Proposition \[prop:sum\], the total amount of charge on $\operatorname{ct}(G)$ is $-1$. Apply the following discharging rules. 1. For every $v\in G-p$, if $v$ is a $2$–vertex, $v$ pulls charge 1 from each incident face. 2. If $p$ is a $2$–vertex, then $p$ gives charge $\frac{9}{2}$ to each incident face. Observe that every vertex has nonnegative charge after this discharging process. It remains to show that every face has nonnegative charge. Let $f$ be a face, and let $r_2$ be the number of 2–vertices on the boundary of $f$, not counting $p$, and consider two cases. **Case 1**: $d(p)=3$ or $p$ is not adjacent to $f$.
In this case, $p$ does not give charge to $f$, and therefore $f$ has charge $\ell(f) - r_2 - 6$ after discharging. Also, the boundary of $f$ does not contain a path of length 8 containing only vertices of degree 2, thus $r_2 \leq \left\lfloor \frac{7}{8}\ell(f)\right\rfloor$. Since $\ell(f) \geq 41$, we have $$\ell(f) - r_2 - 6 \geq \ell(f) - \left\lfloor \frac{7}{8}\ell(f)\right\rfloor - 6 \geq 0.$$ **Case 2**: $d(p)=2$ and $p$ is adjacent to $f$. By (R2), $p$ gives charge $\frac{9}{2}$ to $f$, so that $f$ has charge $\ell(f) - r_2 - \frac{3}{2}$ after discharging. The boundary of $f$ does not contain a path of length 8 containing only vertices of degree 2, except when using $p$, so $r_2 \leq \left\lfloor \frac{7}{8}\ell(f)\right\rfloor$. Since $\ell(f) \geq 41$, we have $$\ell(f) - r_2 - \frac{3}{2} \geq \ell(f) - \left\lfloor \frac{7}{8}\ell(f)\right\rfloor - \frac{3}{2} \geq 0.$$ Thus, all vertices and faces have nonnegative charge, contradicting Proposition \[prop:sum\]. The first item of Theorem \[thm:mainlist\] follows by similarly strengthening the statement to include a precolored vertex and using Lemmas \[lma:cw\], \[lma:cutedge\], and \[lma:caterpillar\]. Strong Edge-Coloring of Sparse Graphs {#sec:sparse} ===================================== In this section, we prove Theorems \[thm:sparse\] and \[thm:sparse4\]. Let $G$ be a graph with maximum degree $\Delta(G) \leq d$. For a vertex $v$ in $\operatorname{ct}(G)$ denote by $N_3(v)$ the set of $3^+$-vertices $u$ where $\operatorname{ct}(G)$ contains a path $P$ from $u$ to $v$ where all internal vertices of $P$ are $2{{\ensuremath{^\perp}}}$-vertices. For $u \in N_3(v)$, let $\mu(v,u)$ be the number of paths from $v$ to $u$ whose internal vertices have degree 2 in $\operatorname{ct}(G)$. For a 3-vertex $v$, let the *responsibility set*, denoted $\operatorname{Resp}(v)$, be the set of $2{{\ensuremath{^\perp}}}$-vertices that appear on the paths between $v$ and the vertices in $N_3(v)$. Let $D$ be a subgraph of $G$. We call $D$ a *$k$-reducible configuration* if there exists a subgraph $D'$ of $D$ such that any strong $k$-edge-coloring of $G-D'$ can be extended to a strong $k$-edge-coloring of $G$. One necessary property for the selection of $D'$ is that no two edges that remain in $G-D'$ can have distance at most two in $G$ but distance strictly larger than two in $G - D'$. In the next subsection we describe several reducible configurations. Reducible Configurations ------------------------ This subsection contains descriptions of four types of reducible configurations. Each configuration is described in terms of how it appears within $\operatorname{ct}(G)$ where $G$ is a graph with maximum degree $\Delta(G) \leq d$ for some $d \geq 3$. Let $t$ be a positive integer. The *$t$-caterpillar* is formed by two $3^+$-vertices $v_0$ and $v_{t+1}$ with a path $v_0v_1\dots v_tv_{t+1}$ where $v_i$ is a $2{{\ensuremath{^\perp}}}$-vertex for every $i \in \{1,\dots,t\}$. Let $t_1,\dots, t_k$ be nonnegative integers. A configuration $Y(t_1,\dots,t_k)$ is formed by a $k^+$-vertex $v$ and $k$ internally disjoint paths of lengths $t_1+1,\dots,t_k+1$ with $v$ as a common endpoint, where the internal vertices of the paths are $2{{\ensuremath{^\perp}}}$-vertices. We call such a configuration a *$Y$-type configuration about $v$*, see Figure \[fig:Y\].
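Before the remaining configuration types are described, the quantities $\operatorname{ct}(G)$, $N_3(v)$, $\mu(v,u)$, and $\operatorname{Resp}(v)$ can be made concrete with a short computational sketch. The code below is ours and purely illustrative: the example graph (a $\Theta$-shaped subcubic graph whose internal path vertices each carry a pendant edge), the helper names, and the simplifying assumption that $\operatorname{ct}(G)$ has no 1-vertices are not taken from the paper.

```python
from collections import defaultdict

def contracted(adj):
    """ct(G): delete the 1-vertices of G."""
    keep = {v for v, nbrs in adj.items() if len(nbrs) != 1}
    return {v: [u for u in adj[v] if u in keep] for v in keep}

def threads_from(ct, v):
    """Walk each thread of 2-vertices of ct(G) leaving the 3^+-vertex v.
    Assumes ct(G) has no 1-vertices.  Returns (far endpoint, internal vertices)."""
    found = []
    for start in ct[v]:
        internal, prev, cur = [], v, start
        while len(ct[cur]) == 2:
            internal.append(cur)
            nxt = ct[cur][0] if ct[cur][1] == prev else ct[cur][1]
            prev, cur = cur, nxt
        found.append((cur, internal))
    return found

def profile(ct, v):
    """N_3(v), the multiplicities mu(v, u), and Resp(v) for a 3^+-vertex v."""
    N3, mu, resp = set(), defaultdict(int), set()
    for u, internal in threads_from(ct, v):
        N3.add(u)
        mu[u] += 1
        resp.update(internal)
    return N3, dict(mu), resp

adj = defaultdict(list)
def add_edge(a, b):
    adj[a].append(b)
    adj[b].append(a)

# Two 3-vertices 'u' and 'v' joined by paths with 1, 2, and 3 internal vertices.
add_edge('u', 1); add_edge(1, 'v')
add_edge('u', 2); add_edge(2, 3); add_edge(3, 'v')
add_edge('u', 4); add_edge(4, 5); add_edge(5, 6); add_edge(6, 'v')
for w in (1, 2, 3, 4, 5, 6):   # pendant edges make the path vertices 2^perp-vertices
    add_edge(w, ('pendant', w))

ct = contracted(adj)
print(profile(ct, 'u'))   # N_3(u) = {'v'}, mu(u, v) = 3, Resp(u) = {1, 2, 3, 4, 5, 6}
```

Here $N_3(u)=\{v\}$ with $\mu(u,v)=3$ and $|\operatorname{Resp}(u)|=6$, matching a hand count.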
A configuration $H(t_1,t_2;r;s_1,s_2)$ is formed by two 3-vertices $u$ and $v$ and 5 internally disjoint paths of lengths $t_1+1$, $t_2+1$, $r+1$, $s_1+1$, and $s_2+1$, where the internal vertices of the paths are $2{{\ensuremath{^\perp}}}$-vertices. The paths of lengths $t_1+1$ and $t_2+1$ have $v$ as an endpoint, the path of length $r+1$ has $u$ and $v$ as endpoints and the paths of lengths $s_1+1$ and $s_2+1$ have $u$ as an endpoint. We call such configuration an *$H$-type configuration about $v$ and $u$*, see Figure \[fig:H\]. A configuration $\Phi(t,a_1,a_2,s)$ is formed by two 3-vertices $u$ and $v$ and 4 internally disjoint paths of lengths $t+1$, $a_1+1$, $a_2+1$, and $s+1$, where the internal vertices of the paths are $2{{\ensuremath{^\perp}}}$-vertices. The path of length $t+1$ has $v$ as an endpoint, the paths of lengths $a_1+1$ and $a_2+1$ have $u$ and $v$ as endpoints and the path of length $s+1$ has $u$ as an endpoint. We call such configuration a *$\Phi$-type configuration about $v$ and $u$*, see Figure \[fig:Phi\]. The reducibility of these configurations was verified using computer[^3], and in addition the 8-caterpillar is addressed in Lemma \[lma:caterpillar\]. Given the definition of a $2{{\ensuremath{^\perp}}}$-vertex, the vertices of degree two in these configurations may, or may not, be adjacent to some 1-vertices in $G$. We demonstrate the reducibility of the instances of these configurations wherein each vertex of degree 2 is adjacent to $d-2$ 1-vertices, as depicted in Figures \[fig:Y\]–\[fig:Phi\]. This suffices to address all other instances of these configurations that may occur. \[claim:red\_cat\] The following caterpillars with maximum degree $d$ are reducible: 1. (Borodin and Ivanova [@BI]) For $d = 3$, the $8$-caterpillar is 5-reducible. 2. (Wang and Zhao [@WZ]) For $d \geq 4$, the $(2d-2)$-caterpillar is $(2d-1)$-reducible. These caterpillars are likely the smallest that are reducible for each degree $d$. Thus, the bounds in Theorems \[thm:bi\] and \[thm:wz\] are best possible using only Lemma \[lma:nrs\]. To improve these bounds, we demonstrate larger reducible configurations and use a more complicated discharging argument. \[clm:Reducible\] The following configurations with maximum degree 3 are 5-reducible: 1. $Y(1,6,7)$, $Y(2,5,6)$ and $Y(3,4,5)$. 2. $H(7,7;0;3,7),\, H(7,7;0;4,6),\, H(7,7;0;5,5),\, H(6,7;0;3,7),\, H(6,7;0;4,6),\\ H(6,7;0;5,5),\, H(6,6;1;2,7),\, H(6,6;1;3,6),\, H(6,6;1;4,5),\, H(5,7;1;2,7),\\ H(5,7;1;3,6),\, H(5,7;1;4,5),\, H(4,7;2;1,7),\, H(4,7;2;2,6),\, H(4,7;2;3,5),\\ H(4,7;2;4,4),\, H(3,7;3;1,6),\, H(3,7;3;2,5) \text{ and } H(3,7;3;3,4).$ 3. $\Phi(7,0,7,1),\, \Phi(7,0,6,1),\, \Phi(6,0,7,1),\, \Phi(6,1,6,1),\, \Phi(7,1,5,1), \Phi(5,1,7,1),\\ \Phi(7,2,4,1),\, \Phi(4,2,7,1),\, \Phi(7,3,3,1),\, \Phi(3,3,7,1) \text{ and }\Phi(3,7,0,7).$ \[clm:Reducible4\] The following configurations with maximum degree 4 are $7$-reducible: $$Y(2,4,4),\ Y(1,5,5),\ Y(2,4,5),\ Y(3,4,4),\text{ and }Y(2,5,5).$$ Proof of Theorem \[thm:sparse\] ------------------------------- Among graphs $G$ with $\operatorname{mad}(G) < 2 + \frac{1}{7}$ not containing $S_3$, $S_4$, or $S_7$, with ${\chi_{s}'(G)}> 5$, select $G$ while minimizing the number of vertices in $\operatorname{ct}(G)$. Note that $e(G) > 5$ since ${\chi_{s}'(G)}> 5$, and let $n$ be the number of vertices in $\operatorname{ct}(G)$. 
By using the discharging method, we will show that $\operatorname{mad}(\operatorname{ct}(G)) \geq 2+\frac{1}{7}$, which is a contradiction, so no such minimal counterexample exists. Observe that $G$ does not contain any of the reducible configurations addressed in Claim \[clm:Reducible\]. We also have the following additional structure on $\operatorname{ct}(G)$. \[lma:cutedge2\] $\operatorname{ct}(G)$ is 2-connected. Suppose that $\operatorname{ct}(G)$ contains a cut-edge $uv$. In $G$, the vertices $u$ and $v$ have degree at least two. There are exactly two components, $G_1$ and $G_2$, in $G - uv$, with $u \in V(G_1)$ and $v \in V(G_2)$. Let $u_1,u_2$ be neighbors of $u$ in $G_1$ and $v_1,v_2$ be neighbors of $v$ in $G_2$; let $u_1 = u_2$ only when $u$ has a unique neighbor in $G_1$, and $v_1 = v_2$ only when $v$ has a unique neighbor in $G_2$. Let $G_1' = G_1 + \{ uv, vv_1, vv_2\}$ and $G_2' = G_2 + \{ uv, uu_1, uu_2\}$. If $G_1' = G$, then consider $G' = G - v_1 - v_2$. Since $n(G') < n(G)$ and $\operatorname{mad}(G') \leq \operatorname{mad}(G)$, there is a strong 5-edge-coloring $c$ of $G'$. Extend the coloring $c$ by choosing colors $c(vv_1)$ and $c(vv_2)$ from the colors not in $\{ c(uv), c(uu_1), c(uu_2)\}$, a contradiction. We similarly reach a contradiction when $G_2' = G$. Therefore, $n(G_i') < n(G)$ and $\operatorname{mad}(G_i') \leq \operatorname{mad}(G)$ for each $i \in \{1,2\}$. Thus, there exist strong 5-edge-colorings $c_1$ and $c_2$ of $G_1'$ and $G_2'$, respectively. For each coloring, the colors on the edges $uv, uu_1, uu_2, vv_1, vv_2$ are distinct. Let $\pi$ be a permutation of the five colors satisfying $\pi(c_2(e)) = c_1(e)$ for each edge $e \in \{uv, uu_1, uu_2, vv_1, vv_2\}$. Then, we extend the coloring $c_1$ of $G_1'$ to all of $G$ by assigning $c_1(e) = \pi(c_2(e))$ for all edges $e \in E(G_2')$. The coloring $c_1$ is a strong 5-edge-coloring of $G$, a contradiction. If $\operatorname{ct}(G)$ does not have any $3$-vertices, then $\operatorname{ct}(G)$ must be isomorphic to the cycle $C_n$. If $n \geq 9$, then $\operatorname{ex}_3(G)$ contains an 8-caterpillar. If $n \in \{5,6,8\}$, then $G$ is a subgraph of $S_5$, $S_6$, or $S_8$, each of which has a strong edge-coloring using five colors, found by computer search. When $n \in\{3,4,7\}$, $G$ does not contain $S_3$, $S_4$, or $S_7$, and every proper subgraph of these graphs has a strong edge-coloring using five colors, again found by computer search. Therefore, $\operatorname{ct}(G)$ is not isomorphic to a cycle, and hence for every $2{{\ensuremath{^\perp}}}$-vertex $u$ in $G$, $|N_3(u)| \geq 1$. If $G$ has some vertex $v$ such that $|N_3(v)|=1$, then $G$ must be a subgraph of $\Theta(t_1,t_2,t_3)$, which is the graph consisting of three internally disjoint $x-y$ paths of length $t_1+1, t_2+1$ and $t_3+1$, for some $0 \leq t_1 \leq t_2 \leq t_3$. If $t_3 \geq 8$, then $\operatorname{ex}_3(G)$ contains an 8-caterpillar, so we assume that $t_3 < 8$. Observe that if $\operatorname{mad}(\Theta(t_1,t_2,t_3)) < 2 + \frac{1}{7}$, then $t_1+t_2+t_3 \geq 13$. However, if $\Theta(t_1,t_2,t_3)$ does not contain a reducible $Y$-type configuration, then by Claim \[clm:Reducible\] the sequence $(t_1,t_2,t_3)$ is one of $(0,7,7)$, $(0,6,7)$, $(1,6,6)$, $(1,5,7)$, $(2,4,7)$, or $(3,3,7)$. In each of these cases, we have verified by computer that $\Theta(t_1,t_2,t_3)$ has a strong edge-coloring using five colors. Therefore, $|N_3(v)|\geq 2$ for every $v\in\operatorname{ct}(G)$. We proceed using discharging. Assign each vertex initial charge $d(v)$.
Note that the total charge on the graph is $2e(\operatorname{ct}(G))$, which is at most $\operatorname{mad}(G)n < (2+\frac{1}{7})n$. We shall distribute charge among the vertices of $\operatorname{ct}(G)$ and result with charge at least $2 + \frac{1}{7}$ on every vertex, giving a contradiction. Distribute charge among the vertices according to the following discharging rules, applied to each pair of vertices $u, v \in V(\operatorname{ct}(G))$: 1. \[2vtx\] If $u$ is a 2-vertex and $v \in N_3(u)$, then $v$ sends charge $\frac{1}{14}$ to $u$. 2. \[3vtx\] If $v$ is a 3-vertex with $|\operatorname{Resp}(v)|\leq 10$ and $u \in N_3(v)$, then (a) \[adj\] if $d(u,v)=1$ and $|\operatorname{Resp}(u)| = 14$, then $v$ sends charge $\frac{1}{7}$ to $u$; (b) \[near\] otherwise, if $d(u,v) \leq 4$, then $v$ sends charge $\frac{1}{14}$ to $u$. We will now verify the assertion that each vertex has final charge at least $2 + \frac{1}{7}$. If $v$ is a 2-vertex, then since $|N_3(v)| = 2$ the final charge on $v$ is $2 + \frac{1}{7}$ after by Rule R\[2vtx\]. Let $v$ be a 3-vertex. If $u \in N_3(v)$, then $d(u,v) \leq 8$ by Lemma \[lma:caterpillar\]. Claim \[clm:Reducible\] implies that $|\operatorname{Resp}(v)|\leq 14$. *Case : $|\operatorname{Resp}(v)| \in \{11, 12\}$.* In this case, $v$ only loses charge by Rule R\[2vtx\], so the final charge is at least $3 - \frac{12}{14} = 2 + \frac{1}{7}$. *Case : $|\operatorname{Resp}(v)| = 14$.* By Claim \[clm:Reducible\], the $Y$-type configuration about $v$ is $Y(0,7,7)$. Thus, some vertex $u_1 \in N_3(v)$ is at distance one from $v$. If $\mu(v,u_1) = 1$, then the $H$-type configuration about $v$ and $u_1$ is of the form $H(7,7;0;s_1,s_2)$; by Claim \[clm:Reducible\] $s_1+s_2 \leq 9$, $|\operatorname{Resp}(u_1)|\leq 9$, and $u_1$ sends charge $\frac{1}{7}$ to $v$ by Rule R\[adj\]. If $\mu(v,u_1) = 2$, then the $\Phi$-type configuration about $v$ and $u_1$ is of the form $\Phi(7,0,7,s)$; by Claim \[clm:Reducible\] $s = 0$, $|\operatorname{Resp}(u_1)|\leq 7$, and $u_1$ sends charge $\frac{1}{7}$ to $v$ by Rule R\[adj\]. *Case : $|\operatorname{Resp}(v)| = 13$.* By Claim \[clm:Reducible\], the $Y$-type configuration $Y(t_1,t_2,t_3)$ about $v$ is one of $Y(0,6,7)$, $Y(1,6,6)$, $Y(1,5,7)$, $Y(2,4,7)$, or $Y(3,3,7)$. We consider each case separately. *Case [.]{}: $(t_1,t_2,t_3) = (0,6,7).$* Let $u_1$ be the vertex in $N_3(v)$ at distance 1 from $v$. If $\mu(v,u_1) = 1$, then the $H$-type configuration about $v$ and $u_1$ is of the form $H(6,7;0;s_1,s_2)$; by Claim \[clm:Reducible\] $s_1+s_2 \leq 9$, $|\operatorname{Resp}(u_1)|\leq 9$, and $u_1$ sends charge $\frac{1}{14}$ to $v$ by Rule R\[near\]. If $\mu(v,u_1) = 2$, then the $\Phi$-type configuration about $v$ and $u_1$ is of the form $\Phi(6,0,7,s)$ or $\Phi(7,0,6,s)$; by Claim \[clm:Reducible\] $s = 0$, $|\operatorname{Resp}(u_1)|\leq 7$, and $u_1$ sends charge $\frac{1}{14}$ to $v$ by Rule R\[near\]. *Case [.]{}: $(t_1,t_2,t_3) = (1,6,6).$* Let $u_1$ be the vertex in $N_3(v)$ at distance 2 from $v$. If $\mu(v,u_1) = 1$, then the $H$-type configuration about $v$ and $u_1$ is of the form $H(6,6;1;s_1,s_2)$; by Claim \[clm:Reducible\] $s_1+s_2 \leq 8$, $|\operatorname{Resp}(u_1)|\leq 9$, and $u_1$ sends charge $\frac{1}{14}$ to $v$ by Rule R\[near\]. If $\mu(v,u_1) = 2$, then the $\Phi$-type configuration about $v$ and $u_1$ is of the form $\Phi(6,1,7,s)$ or $\Phi(7,1,6,s)$; by Claim \[clm:Reducible\] $s = 0$, $|\operatorname{Resp}(u_1)|\leq 8$, and $u_1$ sends charge $\frac{1}{14}$ to $v$ by Rule R\[near\]. 
*Case [.]{}: $(t_1,t_2,t_3) = (1,5,7).$* Let $u_1$ be the vertex in $N_3(v)$ at distance 2 from $v$. If $\mu(v,u_1) = 1$, then the $H$-type configuration about $v$ and $u_1$ is of the form $H(5,7;1;s_1,s_2)$; by Claim \[clm:Reducible\] $s_1+s_2 \leq 8$, $|\operatorname{Resp}(u_1)|\leq 9$, and $u_1$ sends charge $\frac{1}{14}$ to $v$ by Rule R\[near\]. If $\mu(v,u_1) = 2$, then the $\Phi$-type configuration about $v$ and $u_1$ is of the form $\Phi(5,1,7,s)$ or $\Phi(7,1,5,s)$; by Claim \[clm:Reducible\] $s = 0$, $|\operatorname{Resp}(u_1)|\leq 8$, and $u_1$ sends charge $\frac{1}{14}$ to $v$ by Rule R\[near\]. *Case [.]{}: $(t_1,t_2,t_3) = (2,4,7).$* Let $u_1$ be the vertex in $N_3(v)$ at distance 3 from $v$. If $\mu(v,u_1) = 1$, then the $H$-type configuration about $v$ and $u_1$ is of the form $H(4,7;2;s_1,s_2)$; by Claim \[clm:Reducible\] $s_1+s_2 \leq 7$, $|\operatorname{Resp}(u_1)|\leq 9$, and $u_1$ sends charge $\frac{1}{14}$ to $v$ by Rule R\[near\]. If $\mu(v,u_1) = 2$, then the $\Phi$-type configuration about $v$ and $u_1$ is of the form $\Phi(4,2,7,s)$ or $\Phi(7,2,4,s)$; by Claim \[clm:Reducible\] $s = 0$, $|\operatorname{Resp}(u_1)|\leq 8$, and $u_1$ sends charge $\frac{1}{14}$ to $v$ by Rule R\[near\]. *Case [.]{}: $(t_1,t_2,t_3) = (3,3,7).$* Let $u_1$ be the vertex in $N_3(v)$ at distance 4 from $v$. If $\mu(v,u_1) = 1$, then the $H$-type configuration about $v$ and $u_1$ is of the form $H(3,7;3;s_1,s_2)$; by Claim \[clm:Reducible\] $s_1+s_2 \leq 7$, $|\operatorname{Resp}(u_1)|\leq 10$, and $u_1$ sends charge $\frac{1}{14}$ to $v$ by Rule R\[near\]. If $\mu(v,u_1) = 2$, then the $\Phi$-type configuration about $v$ and $u_1$ is of the form $\Phi(3,3,7,s)$ or $\Phi(7,3,3,s)$; by Claim \[clm:Reducible\] $s = 0$, $|\operatorname{Resp}(u_1)|\leq 10$, and $u_1$ sends charge $\frac{1}{14}$ to $v$ by Rule R\[near\]. *Case : $|\operatorname{Resp}(v)| \leq 10$.* In this case, $v$ loses charge at most $\frac{10}{14}$ by Rule R\[2vtx\], so if it sends charge at most $\frac{1}{7}$ by Rule R\[3vtx\], then the final charge on $v$ is at least $2 + \frac{1}{7}$. Consider how much charge is sent by Rule R\[3vtx\]. *Case [.]{}: $v$ sends charge $\frac{3}{14}$ by Rule R\[3vtx\].* If $|\operatorname{Resp}(v)| \leq 9$, then the final charge on $v$ is at least $2 + \frac{1}{7}$, so assume that $|\operatorname{Resp}(v)| = 10$. If $v$ sends charge $\frac{1}{14}$ to each of three vertices in $N_3(v)$, then $d(v,u) \leq 4$ for each $u \in N_3(v)$ and hence $|\operatorname{Resp}(v)| < 10$. Thus, $v$ sends charge $\frac{1}{7}$ to some $u_1 \in N_3(v)$ and $\frac{1}{14}$ to some $u_2 \in N_3(v)$. Since $|\operatorname{Resp}(u_1)| = 14$, Claim \[clm:Reducible\] implies that the $Y$-type configuration about $u_1$ is of the form $Y(0,7,7)$. Since $v$ is adjacent to $u_1$, $d(v,u_2)\leq 4$, and $|\operatorname{Resp}(v)| = 10$, the $Y$-type configuration about $v$ is of the form $Y(0,3,7)$. If $\mu(v,u_1)= 1$, then the $H$-type configuration about $v$ and $u_1$ is of the form $H(3,7;0;7;7)$ which is reducible by Claim \[clm:Reducible\]. If $\mu(v,u_1)= 2$, then the $\Phi$-type configuration about $v$ and $u_1$ is of the form $\Phi(3,7,0,7)$ which is reducible by Claim \[clm:Reducible\]. *Case [.]{}: $v$ sends charge $\frac{2}{7}$ by Rule R\[3vtx\].* In this case, $v$ must send charge $\frac{1}{7}$ to at least one vertex $u_1$ in $N_3(v)$. 
If $v$ sends charge $\frac{1}{7}$ to another vertex $u_2$ in $N_3(v)$, then, as $G$ contains no $8$-caterpillar, $|\operatorname{Resp}(v)| \leq 7$ and hence the final charge on $v$ is at least $2 + \frac{3}{14}$. If $v$ sends charge $\frac{1}{14}$ to the other two vertices $u_2$ and $u_3$ in $N_3(v)$, then $|\operatorname{Resp}(v)| \leq 6$ and hence the final charge on $v$ is at least $2 + \frac{5}{14}$. *Case [.]{}: $v$ either sends charge $\frac{5}{14}$ or $\frac{3}{7}$ by Rule R\[3vtx\].* Suppose that $v$ sends charge $\frac{5}{14}$ by Rule R\[3vtx\]. Thus, $v$ must send charge $\frac{1}{7}$ to two of three vertices in $N_3(v)$, and $\frac{1}{14}$ to the third vertex. This implies that $|\operatorname{Resp}(v)|\le 3$ and hence the final charge on $v$ is at least $2 + \frac{3}{7}$. Similarly, if $v$ sends charge $\frac{3}{7}$ by Rule R\[3vtx\], then $|Resp(v)|=0$. Thus, the final charge on $v$ is $2+\frac{4}{7}$. In all cases, we verified that the final charge is at least $2 + \frac{1}{7}$, contradicting that the average degree of $\operatorname{ct}(G)$ is strictly less than $2 + \frac{1}{7}$. We note that it is possible to improve the bound $\operatorname{mad}(G) < 2 + \frac{1}{7}$ by a small amount. In particular, the discharging method used above essentially states that the average size of a responsibility set in $\operatorname{ct}(G)$ is at most 12. By careful analysis, we can find that a 3-vertex $v$ with $|\operatorname{Resp}(v)|\leq 11$ has some excess charge after the discharging argument that could be used to increase the charge on nearby vertices by a small fraction. We have verified using computation that for every 3-vertex $v$, there is at least one vertex $u \in N_3(v)$ where $|\operatorname{Resp}(u)| < 12$. Thus, it is impossible to have a minimal counterexample where all responsibility sets have size 12, and it is feasible to construct a discharging argument that will improve on the bound $\operatorname{mad}(G) < 2 + \frac{1}{7}$ by a small fraction. We do not do this explicitly as it requires significant detail without significant gain. In order to prove that $\operatorname{mad}(G) < 2 + \frac{1}{6}$ implies that $G$ can be strongly 5-edge-colored, then the proof will imply that the average size of a responsibility set is at most 10. This will require sending charge to all of the vertices with 11 or 12 vertices in the responsibility set, and also making sure that the charge comes from vertices with responsibility sets much smaller. Likely, larger reducible configurations will grant some improvement in this direction, but our algorithm is insufficient to effectively test reducibility for larger configurations.\ Proof of Theorem \[thm:sparse4\] -------------------------------- Note that the second item of Theorem \[thm:sparse4\] follows from the first by Proposition \[prop:mad\]. For the first item, we follow a similar discharging argument as in Theorem \[thm:sparse\]. The argument will be simpler as we will only discharge from $3^+$-vertices to $2{{\ensuremath{^\perp}}}$-vertices. Select a graph $G$ that satisfies the hypotheses and minimizes $n(G)$. Observe that $\operatorname{ct}(G)$ is 2-connected by an argument similar to Lemma \[lma:cutedge2\]. Since the $6$-caterpillar is $7$-reducible by Claim \[claim:red\_cat\], $\operatorname{ct}(G)$ does not contain a path of six $2{{\ensuremath{^\perp}}}$-vertices. Since $G$ has girth at least 7, $\operatorname{ct}(G)$ is not a cycle, so it contains at least one $3^+$-vertex. 
If $v$ is a $3^+$-vertex, then let $\operatorname{Resp}(v)$ be the set of $2{{\ensuremath{^\perp}}}$-vertices reachable from $v$ using only $2{{\ensuremath{^\perp}}}$-vertices. We consider $\operatorname{Resp}(v)$ to be a multiset, where the multiplicity of a vertex $u \in \operatorname{Resp}(v)$ is given by the number of paths from $v$ to $u$ using only $2{{\ensuremath{^\perp}}}$-vertices. Note that the multiplicity is either 1 or 2. Assign charge $d_{\operatorname{ct}(G)}(v)$ to each vertex $v \in V(\operatorname{ct}(G))$. Note that the average charge on each vertex is equal to the average degree of $G$. To discharge, let $\varepsilon = \frac{1}{13}$ and each $3^+$-vertex $v$ sends $\varepsilon m$ to each $2{{\ensuremath{^\perp}}}$-vertex in $\operatorname{Resp}(v)$ with multiplicity $m$. Thus, every $2{{\ensuremath{^\perp}}}$-vertex ends with charge $2 + \frac{2}{13}$. Suppose $d_{\operatorname{ct}(G)}(v) = 3$. Since $\operatorname{ct}(G)$ is 2-connected, all vertices in $\operatorname{Resp}(v)$ appear with multiplicity one. By Claim \[clm:Reducible4\], $|\operatorname{Resp}(v)| \leq 11$. Thus each $3$-vertex ends with charge at least $3 - \frac{11}{13} = 2 + \frac{2}{13}$. Suppose $d_{\operatorname{ct}(G)}(v) = 4$. Since the $(6,4)$-caterpillar is reducible, each path of $2{{\ensuremath{^\perp}}}$-vertices has length at most five, and hence $|\operatorname{Resp}(v)| \leq 20$, including multiplicity. Thus each $4$-vertex ends with charge at least $4 - \frac{20}{13} = 2 + \frac{6}{13} > 2 + \frac{2}{13}$. Therefore, every vertex ends with charge at least $2 + \frac{2}{13}$ and thus the average degree of $G$ is at least $2 + \frac{2}{13}$, a contradiction. [99]{} N. Alon. Combinatorial [N]{}ullstellensatz. **8** (1999), 7–29. L. D. Andersen, The strong chromatic index of a cubic graph is at most 10, *Discrete Math.* 108 (1992) 231–252. O.V. Borodin and A.O. Ivanova, Precise upper bound for the strong edge chromatic number of sparse planar graphs. *Discussiones Mathematicae Graph Theory*, 33(4) (2014) 759–770. W. Bosma, J. Cannon, and C. Playoust, The Magma algebra system. I. The user language. *J. Symbolic Comput.* **24** (1997) 235–265. H. Bruhn, F. Joos, A stronger bound for the strong chromatic index. *arXiv preprint* `arXiv:1504.02583`. J. Chang, M. Montassier, A. Pěche, and A. Raspaud, Strong chromatic index of planar graphs with large girth. *Discussiones Mathematicae Graph Theory*, 34(4), (2014) 723–733. D.W. Cranston and D.B. West, A Guide to the Discharging Method. *arXiv preprint* `arXiv:1306.4434`. P. Erdős, Problems and results in combinatorial analysis and graph theory, *Proceedings of the First Japan Conference on Graph Theory and Applications* (Hakone, 1986), **72** (1988), 81–92. R.J. Faudree, A. Gyárfas, R.H. Schelp, and Zs. Tuza. The strong chromatic index of graphs, **29** (1990) (B), 205–211. J.-L. Fouquet and J.-L. Jolivet, Strong edge-colorings of graphs and applications to multi-k-gons, **16** (1983) (A) 141–150. J.-L. Fouquet and J.-L. Jolivet, Strong edge-coloring of cubic planar graphs, in Progress in graph theory (Waterloo, Ont., 1982), Academic Press, Toronto, ON, 1984, pp. 247–264. H. Hocquard and P. Valicov, Strong edge colouring of subcubic graphs, **159** (2011), 1650–1657. H. Hocquard, M. Montassier, A. Raspaud, and P. Valicov, On strong edge-colouring of subcubic graphs **161** (2013), 2467–2479. D. Hudák, B. Lužar, R. Soták, and R. Škrekovski, Strong edge-coloring of planar graphs, *Discrete Math.* **324** (2014), 41–49. A.V. Kostochka, X. Li, W. 
Ruksasakchai, M. Santana, T. Wang, and G. Yu, Strong chromatic index of subcubic planar multigraphs, *in preparation*. M. Molloy and B. Reed. A bound on the strong chromatic index of a graph, *J. Combin. Theory, Ser. B* **69** (1997), 103–109. J. Nešetřil, A. Raspaud, and E. Sopena, Colorings and girth of oriented planar graphs, *Discrete Math.* 165/166 (1997) 519–530. A. Steger and M.-L. Yu, On induced matchings, *Discrete Math.* **120** (1993), 291–295. T. Wang and X. Zhao, Odd graphs and its application on the strong edge coloring. *arXiv preprint* `arXiv:1412.8358`. D. B. West. . Prentice Hall, Inc., Upper Saddle River, NJ, 1996. V. H. Vu, A General Upper Bound on the List Chromatic Number of Locally Sparse Graphs, *Comb. Probab. Comp.*, **11** (2002), 103–111. [^1]: All source code and data is available at <http://www.math.iastate.edu/dstolee/r/scindex.htm>. [^2]: Our discharging approach is similar to the proof of Lemma \[lma:nrs\] where $\ell = 8$, but some care is needed due to the precolored vertex $p$. [^3]: All source code and data is available at <http://www.math.iastate.edu/dstolee/r/scindex.htm>.
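The small computer checks invoked in the proofs above (strong 5-edge-colorability of the graphs $S_n$, $\Theta(t_1,t_2,t_3)$, and the reducible configurations) amount to a backtracking search in which every colour class must be an induced matching. The sketch below is only an illustration of that kind of check, written in Python; it is not the authors' released code (see the footnotes for the actual source), and the example graph $C_5$ is chosen here simply because all of its edges are pairwise within distance one, so its strong chromatic index is 5.

```python
def strong_edge_colorable(edges, k):
    """Backtracking test: can the graph (given as an edge list) be strongly
    k-edge-coloured, i.e. coloured so that any two edges within distance one
    receive distinct colours (each colour class is an induced matching)?"""
    edges = [frozenset(e) for e in edges]

    def conflict(e, f):
        # two edges conflict if they share a vertex or are joined by a third edge
        if e & f:
            return True
        return any((e & g) and (f & g) for g in edges if g not in (e, f))

    confl = {e: [f for f in edges if f != e and conflict(e, f)] for e in edges}
    colour = {}

    def extend(i):
        if i == len(edges):
            return True
        e = edges[i]
        used = {colour[f] for f in confl[e] if f in colour}
        for c in range(k):
            if c not in used:
                colour[e] = c
                if extend(i + 1):
                    return True
                del colour[e]
        return False

    return extend(0)

# Example: every pair of edges of C_5 is within distance one,
# so C_5 is strongly 5-edge-colourable but not strongly 4-edge-colourable.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(strong_edge_colorable(c5, 5), strong_edge_colorable(c5, 4))  # True False
```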
--- author: - | V.A. Baskov, A.V. Koltsov, A.I. L’vov, A.I. Lebedev, L.N. Pavlyuchenko, E.V. Rzhanov, S.S. Sidorin, G.A. Sokol\ P.N. Lebedev Physical Institute, Leninsky prospect 53, Moscow 119991, Russia\ Email: - | S.V. Afanasiev, A.I. Malakhov\ Joint Institute for Nuclear Research, Joliot-Curie 6, Dubna 141980, Moscow region, Russia - | A.S. Ignatov, V.G. Nedorezov\ Institute for Nuclear Research, 60-letiya Oktyabrya prospekt 7a, Moscow 117312, Russia title: 'Studies of eta-mesic nuclei at the LPI electron synchrotron' --- Introduction: $\eta$-mesic nuclei {#introduction-eta-mesic-nuclei .unnumbered} ================================= $\eta$-mesic nuclei, i.e. nuclear systems $_\eta A$ having the $\eta$-meson bound in a nuclear orbit by strong interaction with $A$ nucleons, have been predicted long ago [@haider86; @liu86] — soon after recognizing the attractive character of the $\eta N$ interaction at low energies [@bhalerao85]. Observations and investigations of these exotic systems would be very valuable for understanding meson-baryon interactions in free space and in nuclei and for studies of properties of hadrons in the dense nuclear matter. [r]{}[0.5]{} ![image](fig-eta-cs.eps){width="40.00000%"} The $\eta$-meson, together with pions and kaons, belongs to the SU(3) octet of pseudoscalar mesons and has, therefore, a similar $q\bar q$ space structure. In contrast to the pion, however, the pseudoscalar coupling of $\eta$ to the nucleon is empirically rather small [@tiator94]. Nevertheless, the amplitude of $\eta N$ $s$-wave scattering is not as small as that for $\pi N$ scattering because of the contribution of the $s$-wave resonance $S_{11}(1535)$, which is actually a chiral partner of the nucleon — the lowest-lying baryon with the opposite parity to the nucleon. This resonance has its mass slightly above the $\eta N$ threshold, $m_\eta+m_N = 1486$ MeV, and owing to its very strong coupling to the $\eta N$ channel \[with the branching ratio ${\rm Br}\;(S_{11}(1535)\to\eta N) \simeq 55\%$\] strongly enhances all interactions in this channel. A nice illustration of this feature is provided by Mainz data [@mcnicoll10] on the total cross section of $\eta$ photoproduction off protons. A huge near-threshold enhancement shown in Fig. \[eta-x-sect\] is just a manifestation of the $S_{11}(1535)$ resonance excited in the reaction $\gamma p\to S_{11}(1535)\to\eta p$. The $S_{11}(1535)$ resonance strongly contributes to the low-energy $\eta N$ scattering and, in particular, makes the threshold value of the $\eta N$ scattering amplitude (i.e. the $\eta N$ scattering length $a_{\eta N}$) positive. In the framework of a dynamical resonance model for the coupled channels $\pi N$, $\eta N$ and $\pi\pi N$, Bhalerao and Liu [@bhalerao85] found $$a_{\eta N} = 0.28 + i\,0.19~{\rm fm}. \label{a-etaN-BL}$$ The positive value of ${\rm Re\,}a_{\eta N}$ means an effective attraction between $\eta$ and $N$, so that one can expect that several nucleons could jointly bind $\eta$ to a nuclear orbit. The first-order static-limit on-shell optical potential of $\eta$ in the nuclear matter at zero energy $E_\eta^{\rm kin}=0$ is equal to $$U(r) = -\frac{2\pi\hbar^2}{m_\eta}\,a_{\eta N}\,\rho(r)\left(1 + \frac{m_\eta}{m_N}\right),$$ which gives \[together with Eq. (\[a-etaN-BL\])\] $U = -34 -i\;23$ MeV at normal nuclear matter density $\rho=\rho_0 = 0.17~\rm fm^{-3}$. The imaginary part of the potential describes a local absorption rate $\Gamma = -2\,{\rm Im}\,U$ of $\eta$ in the nuclear substance.
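As a numerical cross-check of the numbers quoted above (an illustration added here, not part of the original analysis), the optical potential can be evaluated directly from the scattering length; the short Python sketch below reproduces $U \approx -34 - i\,23$ MeV at $\rho_0 = 0.17~\rm fm^{-3}$.

```python
import math

# First-order "t-rho" estimate of the eta-nucleus optical potential,
# U = -(2*pi*hbar^2/m_eta) * (1 + m_eta/m_N) * a_etaN * rho,
# with the Bhalerao-Liu scattering length of Eq. (a-etaN-BL).
hbar_c = 197.327                 # MeV fm
m_eta, m_N = 547.86, 938.92      # MeV
a_etaN = 0.28 + 0.19j            # fm
rho0 = 0.17                      # fm^-3, normal nuclear matter density

U = -(2 * math.pi * hbar_c**2 / m_eta) * (1 + m_eta / m_N) * a_etaN * rho0
print(f"U = ({U.real:.1f} {U.imag:+.1f}i) MeV")   # ~ -34 - 23i MeV
print(f"absorption rate Gamma = {-2 * U.imag:.1f} MeV")
```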
With the above strength of the $\eta A$ potential, $\eta$-mesic nuclei $_\eta A$ are expected to exist for all $A\ge 10$ [@haider02; @haider09]. Actually, due to a sharp (cusp) energy dependence of the $\eta N$ scattering amplitude near threshold, Fermi motion of nucleons and $\eta$ reduces the optical potential \[especially its imaginary part\], and this makes $\eta$-mesic nuclei exist only for $A\ge 12$. For binding energies and widths of the lightest $\eta$-mesic nuclei Haider and Liu predicted [@haider02; @haider09] $$\begin{aligned} E_\eta = -1.19~{\rm MeV}, && \Gamma_\eta = 7.34~{\rm MeV} \quad {\rm for}~^{12}_{~\eta}{\rm C},\nonumber\\ E_\eta = -3.45~{\rm MeV}, && \Gamma_\eta = 10.76~{\rm MeV} \quad {\rm for}~^{16}_{~\eta}{\rm O},\nonumber\\ E_\eta = -6.39~{\rm MeV}, && \Gamma_\eta = 13.20~{\rm MeV} \quad {\rm for}~^{26}_{~\eta}{\rm Mg}. \label{eq:EB-Liu}\end{aligned}$$ Note, however, that a stronger $\eta N$ scattering amplitude was inferred in some other analyses. For example, using a $K$-matrix model for coupled channels $\pi N$, $\eta N$, $\gamma N$ and $\pi\pi N$, Green and Wycech [@green97; @green05] found from a fit to available data $$a_{\eta N} = (0.91\pm0.06) + i\,(0.27\pm0.02)~{\rm fm}.$$ With such a large strength of the $\eta N$ interaction, lighter $\eta$-mesic nuclei could also exist. As an example of different predictions for binding energies and widths of $\eta$-mesic nuclei we mention the very elaborate calculations [@oset02a; @oset02b; @oset02c], in which a model for meson-baryon interaction with dynamically generated resonances was built using a unitarized chiral perturbation theory for coupled channels $\pi N$, $\eta N$, $K\Lambda$, $K\Sigma$ and $\pi\pi N$ and then self-energies of all the particles in the nuclear matter were evaluated consistently. This approach leads to the $\eta N$ scattering length $a_{\eta N} = 0.264 + i\, 0.245~\rm fm$, close to that obtained in Eq. (\[a-etaN-BL\]). The resulting $\eta A$ potential is, however, found stronger owing to nonlinear dressing effects: $U = -54 -i\, 29$ MeV at normal nuclear density. Also stronger are the $\eta$-meson bindings found in [@oset02c]: $$\begin{aligned} E_\eta = -9.71~{\rm MeV}, && \Gamma_\eta = 35.0~{\rm MeV} \quad {\rm for}~^{12}_{~\eta}{\rm C},\nonumber\\ E_\eta = -12.57~{\rm MeV}, && \Gamma_\eta = 33.4~{\rm MeV} \quad {\rm for}~^{24}_{~\eta}{\rm Mg}. \label{eq:EB-Oset}\end{aligned}$$ Bindings with equally large widths arise also in calculations [@jido02; @nagahiro05; @jido08] that use a chiral doublet model and treat the $\eta A$ and $S_{11}(1535)A$ attraction as a result of partial restoration of chiral symmetry in the dense nuclear matter, leading to a reduction of the $S_{11}(1535){-}N$ mass gap. It is clear that experimental data on energies and widths of $\eta$-mesic nuclei are needed to test these and many other models and calculations. Signature for eta-mesic nuclei produced in photoreactions {#signature-for-eta-mesic-nuclei-produced-in-photoreactions .unnumbered} ========================================================= A mechanism of $\eta$-mesic nuclei formation and decay in the photoreaction $$\gamma + A \to N' + {}_{\eta}(A-1) \to N' + \pi + N + (A-2) \label{reac:piN}$$ is shown in Fig. \[diagrams-piN\]a. A fast nucleon $N'$ ejected forward at the first stage of the reaction, i.e. in the subprocess $$\gamma + N' \to N' + \eta_{\rm slow}, \label{reac:eta}$$ escapes the nucleus, whereas a slow $\eta$ is captured by the remaining $A-1$ nucleons to a bound state. At $E_\gamma \sim 800{-}900$ MeV, a minimal momentum transfer to $\eta$ in the reaction (\[reac:eta\]) is not large (less than $70~{\rm MeV}/c$). That is why the total cross section of $\eta$-mesic nuclei formation off light nuclei (like carbon or oxygen implied in the following) turns out to be a few $\mu$b [@kohno89; @lebedev89; @lebedev91; @lebedev95; @tryasuchev99; @tryasuchev01], i.e.
$\simeq 2{-}7\%$ of the total cross section $\sigma_{\gamma A}^\eta$ of inclusive $\eta$ photoproduction, with the exact value strongly dependent on the assumed strength of the optical potential $U$. ![a) $\eta$-mesic nuclei formation and decay with the emission of back-to-back $\pi N$ pairs. b) Background creation of back-to-back $\pi N$ pairs by unbound $\eta$.[]{data-label="diagrams-piN"}](fig-diagram-piN.eps){width="80.00000%"} Energies $E[{}_\eta(A-1)]$ of the produced $\eta$-mesic nuclei can, in principle, be determined through missing mass measurements in the reaction $(\gamma,p)$ using tagged photons $\gamma$ and a magnetic spectrometer for $N'=p$. Indirectly, the same energy $$E[{}_\eta(A-1)] = E_\eta + E_{A-1} = E_{\pi N} + E_{A-2} \label{eq:E-eta-(A-1)}$$ can also be found from the observed energy of a correlated back-to-back $\pi N$ pair produced at the second stage of the reaction (\[reac:piN\]), where the captured $\eta$ meson annihilates through the subprocess $$\eta N \to \pi N. \label{etaNpiN}$$ The energy excitation of $(A-2)$ in (\[eq:E-eta-(A-1)\]) is not a fixed value. It rather depends on whether an $s$-shell or $p$-shell nucleon $N$ is knocked out in the process (\[etaNpiN\]). Therefore the distribution of the experimental observable $E_{\pi N}$ accordingly has a bigger width than the width of the $\eta$-mesic nucleus. Neglecting binding and Fermi motion of nucleons and $\eta$, we have the following kinematical characteristics (energies, momenta and velocities) of the ejected correlated $\pi N$ pairs: $$\begin{aligned} && \sqrt{s} = E_\pi + E_N = m_\eta+m_N = 1486~{\rm MeV},\nonumber\\ && E_\pi^{\rm kin} = 313~{\rm MeV}, \quad E_N^{\rm kin} = 94~{\rm MeV}, \quad p_\pi = p_N = 431~{\rm MeV}/c,\nonumber\\ && \beta_\pi = 0.95, \quad \beta_N = 0.42. \label{kinema-piN}\end{aligned}$$ A simple simulation that takes into account the Fermi motion of nucleons and $\eta$ as well as binding of these particles reveals that fluctuations around these ideal parameters are substantial (see Fig. \[simulation-piN\]) \[specifically, we used in this simulation the $\eta$-meson binding energy of 10 MeV with the width 25 MeV; for nucleons, we assumed a Fermi-gas distribution with binding energies distributed between 5 and 30 MeV\]. In particular, the angle $\theta_{\pi N}$ between the emitted pion and nucleon may not be so close to $180^\circ$, and a subtraction of background events with $\theta_{\pi N} \ne 180^\circ$ used sometimes in practice should be done cautiously. A shift of the peak down from 1486 MeV in the distribution of the total energy $E_{\pi N}=E_\pi+E_N$ seen in Fig. \[simulation-piN\] is related to the binding of both the $\eta$-meson (by 10 MeV) and the nucleon (by 15 MeV). ![Simulation of $\pi N$ pairs emitted in $\eta$-mesic nuclei decays. Shown are distributions over kinetic energies of the particles, their total energy, velocities, and the $\pi N$ relative angle.[]{data-label="simulation-piN"}](fig-Tpi.ps "fig:"){width="33.00000%"} ![](fig-TN.ps "fig:"){width="33.00000%"} ![](fig-Etot.ps "fig:"){width="33.00000%"}
![](fig-betapi.ps "fig:"){width="33.00000%"} ![](fig-betaN.ps "fig:"){width="33.00000%"} ![](fig-costheta.ps "fig:"){width="33.00000%"} [r]{}[0.4]{} ![image](fig-piN-spectrum.eps){width="35.00000%"} Notice that $\pi N$ pairs with the characteristics (\[kinema-piN\]) do not necessarily originate from $\eta$-mesic nuclei decays. They can also be produced by slow etas in the background nonresonance process shown in Fig. \[diagrams-piN\]b. The resonance and nonresonance processes correspond to a resonance (Breit-Wigner) and nonresonance part of the full propagator \[i.e. the Green function $G({\bm r}_1, {\bm r}_2; E_\eta)$\] of the $\eta$-meson moving in the optical potential $U(r)$. Jointly, these parts generate a complicated spectrum of $E_\eta$ similar to that obtained in a toy model with a square-well potential [@lvov98; @sokol99]. Shown in Fig. \[spectral-function\] is the spectral function in that model, $$S(E_\eta) = \int\!\!\int \rho({\bm r}_1)\,\rho({\bm r}_2)\; |G({\bm r}_1, {\bm r}_2; E_\eta)|^2\, d{\bm r}_1\, d{\bm r}_2,$$ that characterizes the near-threshold energy distribution of the propagated etas as well as the near-threshold energy dependence of the yield of $\pi N$ pairs produced by these $\eta$. Bound states of the $\eta$-meson give pronounced peaks in the yield of the $\pi N$ pairs at subthreshold energies $E_\eta$. Generally, observation of a relatively narrow resonance peak in the spectrum of $E_\eta$ in the region $E_\eta < m_\eta$ is mandatory for claiming an observation of $\eta$-mesic nuclei at all. We refer to recent works by Haider and Liu [@haider10a; @haider10b] where a deeper and more elaborate consideration is given in relation to a recent experiment. Since $\eta$ is isoscalar, the $\pi N$ pairs produced in the subprocess (\[etaNpiN\]) have isospin $\frac12$ and hence the following isotopic contents \[for $\eta$-mesic nuclei with $A\gg 1$\]: $${\rm Br}\,(\pi N) =\cases{1/3&for $\pi^+n$,\cr 1/6&for $\pi^0p$,\cr 1/6&for $\pi^0n$,\cr 1/3&for $\pi^-p$.} \label{piN-modes}$$ From these, the channel $\pi^+n$ was chosen for detection in our experiment. Previous searches for $\eta$-mesic nuclei {#previous-searches-for-eta-mesic-nuclei .unnumbered} ========================================= Searches for $\eta$-mesic nuclei began very soon after their prediction [@haider86], followed by suggestions [@liu86; @kohno89; @lebedev89; @lebedev91; @kohno90] to seek these novel high-energy nuclear excitations in missing-mass experiments using the inclusive reactions $(\pi^+,p)$ and $(\gamma,p)$. The first two experiments along this line were done in 1988 at Brookhaven [@chrien88] and Los Alamos [@lieb88a; @lieb88b]. In both experiments, a $\pi^+$ beam was used and several targets (Li, C, O and Al) were examined.
The inclusive $(\pi^+,p)$ reaction \^+ + A  \_(A-1) + p was studied in [@chrien88] with a magnetic spectrometer, whereas the Los Alamos experiment had also an additional $4\pi$ BGO crystal ball for detecting charged paticles ejected in the subprocess (\[etaNpiN\]) of $\eta$-mesic nuclei decays to $\pi N$ pairs in coincidence with the forward proton $p$. The Brookhaven experiment did not find a theoretically expected signal [@liu86] — a relatively narrow peak of a predicted strength in the missing mass spectrum. The team working at Los Alamos did report a preliminary evidence for a wanted peak for the $^{16}$O target but this report was not confirmed (published) since then. It was recognized in the following that the above obtained negative or incomplete results do not necessarily mean that the predicted $\eta$-mesic nuclei do not exist. It was possible that the binding energies and especially the widths of the $\eta$ bound states were theoretically underestimated. This point of view was supported by many-body calculations [@chiang91] taking into account some effects disregarded in the first theoretical works [@haider86; @liu86], in particular — dressing, binding and collisional decays of the $S_{11}(1535)$ resonance in the dense nuclear matter. The analysis of [@chiang91] was later extended and revised [@oset02a; @oset02b; @oset02c] (in particular, dressing of mesons was also included) with the main conclusion survived that $\eta$-mesic nuclei widths are bigger than those found in [@haider86; @liu86]. The next experiment has been performed at the Lebedev Physical Institute in Moscow/Troitsk [@sokol99; @sokol00] (see also a summary in [@sokol08]). It was triggered [@sokol94; @lebedev95a] by a suggestion [@sokol91] to seek $\eta$-mesic nuclei through observing decay products of $\eta$-mesic nuclei, namely two correlated back-to-back particles, a pion and a nucleon, ejected in the process of annihilation of captured $\eta$-mesons in the nucleus, Eq. (\[etaNpiN\]). It was hoped that a background for the two very energetic particles (the pion and the nucleon) ejected in decays of $\eta$-mesic nuclei transversely to the beam would be lower than that for ejection of forward protons in the inclusive processes. Besides, it was hoped that background conditions in photon-induced reactions would be generally better than those in pion-induced ones. Studies of the reaction + \^[12]{}[C]{} (\_[ ]{}\^[11]{}[Be]{} [  or  ]{} \_[ ]{}\^[11]{}[C]{}) + N \^+ + n + X + N \[reac-C:piN\] done in the middle of 1990’s at the LPI electron synchrotron indeed showed a signal of an enhanced production of the correlated back-to-back $\pi^+n$ pairs ejected transversely to the photon beam when the photon energy exceeded the $\eta$-meson photoproduction threshold. Energy resolution of the experimental setup was, however, not sufficient to resolve a peak similar to that shown in Fig. \[spectral-function\] and to determine whether the observed correlated pairs were produced by bound or unbound intermediate etas. After the works [@sokol99; @sokol00] gaining and using information on the decay products became mandatory for experiment planning and data analysis in all further searches for $\eta$-mesic nuclei. In 2004 an evidence for the $\eta$-mesic nucleus $_\eta^3$He formed in the reaction + \^3[He]{} \_\^3[He]{} \^0 + p + X has been reported from Mainz [@pfeiffer04]. A resonance-like structure was observed in a contribution to the cross section from back-to-back $\pi^0p$ pairs found after a background subtraction. 
A later study [@pheron12] revealed, however, that the background has a rather complicated structure, so that the conclusions of Ref. [@pfeiffer04] cannot be confirmed. At the moment their statement is that the existence of the $\eta$-mesic nucleus $_\eta^3$He is not yet established. One more attempt to find $\eta$-mesic nuclei by detecting their $\pi^-p$ decay products has recently been done at the JINR nuclotron [@afanasiev11]. The reaction studied was d + \^[12]{}[C]{} (\_[ ]{}\^[11]{}[Be]{} [  or  ]{} \_[ ]{}\^[11]{}[C]{}) + N\_1 + N\_2 \^- + p + X + N\_1 + N\_2. \[reac:Dubna\] The measured effective mass spectra of the correlated back-to-back $\pi^-p$ pairs show a presence of resonance-like peaks lying slightly below the threshold energy $m_\eta+m_N=1486$ MeV. However, a consistent interpretation of these peaks was not yet obtained. To date the strongest evidence for the existence of $\eta$-mesic nuclei came from the precision COSY-GEM experiment [@budzanowski09]. Following ideas of the work [@hayano99] borrowed in turn from previous experience in studying deeply-bound pionic states in nuclei, the reaction p + \^[27]{} \^3[He]{} + \^[25]{}\_[ ]{}[Mg]{} \^3[He]{} + p + \^- + X \[reac:COSY-GEM\] of a recoilless formation of the $\eta$-mesic nuclei $^{25}_{~\eta}\rm Mg$ was explored and the mass of this $\eta$-mesic nucleus was determined through precision missing-mass measurements in $(p, {}^3{\rm He})$. A clear peak was found in the missing mass spectrum that corresponds to the binding energy $-13.13\pm 1.64$ MeV and the width $10.22\pm 2.98$ MeV of the formed $\eta$-mesic nucleus. An upper limit of $\approx 0.5$ nb was found for the cross section of the $\eta$-mesic nucleus formation. Recently Haider and Liu argued [@haider10a; @haider10b] that the observed peak in (\[reac:COSY-GEM\]) is shifted down from the genuine binding energy of $\eta$ because of interference of the resonance and nonresonance mechanisms of the reaction (similar to those shown in Fig. \[diagrams-piN\]). This very interesting effect signifies that the genuine $\eta$ binding in ${}_{~\eta}^{25}{\rm Mg}$ is  $\approx -8$ MeV with the width $\approx 19$ MeV. On the two-nucleon decay mode of $\eta$-mesic nuclei {#on-the-two-nucleon-decay-mode-of-eta-mesic-nuclei .unnumbered} ==================================================== [r]{}[0.4]{} ![image](fig-diagram-NN.eps){width="35.00000%"} ![image](fig-etaNN.ps){width="40.00000%"} The main novelty in our present research is exploring a new possibility for searching for $\eta$-mesic nuclei, namely through observation of their two-nucleon decay mode arising owing to the two-nucleon absorption of the captured $\eta$ in the nucleus, NN NN, \[2N-absorption\] see Fig. \[diagram-NN\]. Ejected in this process correlated back-to-back nucleons of the $NN$ pairs have very high energies ($E_N^{\rm kin}\simeq \frac12 m_\eta = 274$ MeV) and momenta ($p_N\simeq 770~{\rm MeV}/c$), so that they are to be visible (especially in coincidence) at the background of other particles emitted in photoreactions at $E_\gamma\sim 800$ MeV and thus should provide a bright signature for the $\eta$-mesic nucleus formation. The $NN$ pair production in decays of $\eta$ in the nuclear matter was considered among other channels by Chiang, Oset and Liu [@chiang91] in terms of the self-energy of $S_{11}(1535)$ that includes a contribution of $S_{11}(1535)N\to NN$. 
A more direct and rather transparent evaluation of this process has been done by Kulpa and Wycech [@kulpa98b] who used available experimental data on the inverse reactions $pp\to pp\eta$, $pn\to pn\eta$ and $pn\to d\eta$ and then converted them into the rate of (\[2N-absorption\]). In terms of the imaginary part $W_{NN}$ of the optical potential $U$, this rate was found to be proportional to $\rho^2$, being $W_{NN}=3.4$ MeV at central nuclear density. This is only about 15% of $W_N \sim 23$ MeV related with the absorption of $\eta$ by one nucleon. Nevertheless such a small fraction of $NN$ can be quite visible experimentally because of a specific isotopic contents of the $\pi N$ and $NN$ pairs. The matter is that $\gtrsim 90\%$ of these $NN$ pairs are proton plus neutron because the cross section of $pp\to pp\eta$ (and $nn\to nn\eta$) is by order or magnitude less than that of $pn\to pn\eta$ (plus $pn\to d\eta$), see Fig. \[fig-etaNN\] where pertinent Uppsala-Celsius [@calen96; @calen97; @calen98a; @calen98b] and COSY [@smyrski00; @moskal09] data are shown (and see also, e.g., [@baru03] for theoretical explanations). This difference can be traced to isospin factors and Fermi statistics signs in the dominating pion-exchange mechanism of the reaction $NN\to NN\eta$ shown in Fig. \[diagrams-NNeta\]. If the experimental setup detects one charged and one neutral particle from the pairs, it detects $\sim90\%$ of $NN$ and only $\sim33\%$ of $\pi N$. Then count rates of the setup would not be so different for $pn$ and $\pi^+n$ pairs. That seems to be exactly what we see in our experiment. ![Pion-exchange mechanism of $NN\to NN\eta$. Isospin factors, which accompany the $\pi NN$ coupling $g$ and the $\pi N\to\eta N$ amplitude $T$, and the Fermi-statistics signs (both shown in this Figure) jointly determine the big difference between the cross sections of $pp\to pp\eta$ and $pn\to pn\eta$ (plus $pn\to d\eta$). Antisymmetrization of the initial state and initial/final state interactions are not shown.[]{data-label="diagrams-NNeta"}](fig_pp.ps "fig:"){width="50.00000%" height="13ex"} ![Pion-exchange mechanism of $NN\to NN\eta$. Isospin factors, which accompany the $\pi NN$ coupling $g$ and the $\pi N\to\eta N$ amplitude $T$, and the Fermi-statistics signs (both shown in this Figure) jointly determine the big difference between the cross sections of $pp\to pp\eta$ and $pn\to pn\eta$ (plus $pn\to d\eta$). Antisymmetrization of the initial state and initial/final state interactions are not shown.[]{data-label="diagrams-NNeta"}](fig_pn.ps "fig:"){width="50.00000%" height="13ex"} ![Simulation of $NN$ pairs emitted in decays of $\eta$-mesic nuclei. Shown are distributions over the kinetic energy and velocity of one of the nucleons, the total energy of the pair and the relative angle.[]{data-label="simulation-NN"}](figNN-TN.ps "fig:"){width="36.00000%"} ![Simulation of $NN$ pairs emitted in decays of $\eta$-mesic nuclei. Shown are distributions over the kinetic energy and velocity of one of the nucleons, the total energy of the pair and the relative angle.[]{data-label="simulation-NN"}](figNN-Etot.ps "fig:"){width="36.00000%"}\ ![Simulation of $NN$ pairs emitted in decays of $\eta$-mesic nuclei. Shown are distributions over the kinetic energy and velocity of one of the nucleons, the total energy of the pair and the relative angle.[]{data-label="simulation-NN"}](figNN-bN.ps "fig:"){width="36.00000%"} ![Simulation of $NN$ pairs emitted in decays of $\eta$-mesic nuclei. 
Shown are distributions over the kinetic energy and velocity of one of the nucleons, the total energy of the pair and the relative angle.[]{data-label="simulation-NN"}](figNN-costheta.ps "fig:"){width="36.00000%"} Neglecting binding effects and effects of Fermi motion of nucleons and $\eta$, we have the following kinematical characteristics (energies, momenta, velocities) of the correlated $NN$ pairs ejected in $\eta$-mesic nuclei decays: $$E_{N_1}^{\rm kin} = E_{N_2}^{\rm kin} = \tfrac12 m_\eta = 274~{\rm MeV}, \quad p_{N_1} = p_{N_2} = 767~{\rm MeV}/c, \quad \beta_{N_1} = \beta_{N_2} = 0.63. \label{kinema-NN}$$ Actually, the Fermi motion and binding lead to fluctuations around these ideal parameters, as a simple simulation reveals, see Fig. \[simulation-NN\]. Note that the angular correlation in the $NN$ pairs is stronger than that in the $\pi N$ pairs — owing to the higher momenta of particles in $NN$. The first studies of the photoreaction $$\gamma + {}^{12}{\rm C} \to ({}_{~\eta}^{11}{\rm Be} ~~{\rm or}~~ {}_{~\eta}^{11}{\rm C}) + N \to p + n + X + N \label{reac:NN}$$ have recently been done at the LPI synchrotron and we report below on the obtained results. Experimental setup at LPI {#experimental-setup-at-lpi .unnumbered} ========================= [r]{}[0.3]{} ![image](fig-setup.eps){width="30.00000%"} Our experiment was performed at the bremsstrahlung photon beam of the 1.2-GeV electron synchrotron of the Lebedev Physical Institute. Photons were produced with an electron beam of intensity $I_e \simeq 10^{12} ~\rm s^{-1}$ and a duty factor of $\simeq 10\%$. The energy of the beam was usually $E_e = E_{\gamma\;\rm max} = 850~\rm MeV$ (i.e. above the $\eta$ photoproduction threshold off free nucleons, $E_{\eta\;\rm thr}=708~\rm MeV$); additional measurements of subthreshold backgrounds have been done at $E_e = E_{\gamma\;\rm max} = 650~\rm MeV$. The experimental setup included two time-of-flight arms (two scintillation telescopes — C and N arms) for detecting in coincidence charged and neutral particles (back-to-back pairs), see Fig. \[exp-setup\]. These arms were both positioned at $90^\circ{-}90^\circ$ with respect to the beam axis in order to minimize background. The C-arm used for detection of charged particles is a plastic TOF spectrometer for charged pions and protons. It consists of a start detector T1 ($20\times 20\times 2~\rm cm^3$), a stop detector T2 ($50\times 50\times 5~\rm cm^3$) and three energy-loss detectors $\Delta E_1$, $\Delta E_2$ and $\Delta E_3$ ($40\times 40\times 2~\rm cm^3$ each). A 4 mm lead (Pb) plate was used in some runs for TOF calibrations with ultrarelativistic electrons/positrons produced in the lead plate by high-energy photons emitted from the target owing to production and decays of neutral pions. The N-arm is a plastic TOF spectrometer for neutrons. It consists of a veto counter A ($50\times 50\times 2~\rm cm^3$) and four plastic blocks — the neutron detectors N1, N2, N3 and N4 ($50\times 50\times 10~\rm cm^3$ each). Again, a 4 mm lead plate was used in some runs for TOF calibrations. The efficiency of the N-arm for neutrons of energies above 50 MeV was $\approx 30\%$. In both arms each volume of the scintillator counters/blocks was viewed from the corners with 4 phototubes. The time-of-flight bases in the C and N arms were 1.4 m and the time resolution was $\simeq 200$ ps ($1\sigma$). The target was a carbon cylinder of 10 cm length along the beam axis. Its diameter was 4 cm, i.e. slightly more than the diameter of the collimated photon beam (3 cm).
The distance between the target and the start detector T1 was 0.7 m. Mostly, the setup was the same as in our previous work [@sokol00; @sokol08] but a few useful changes have been made: - $\Delta E_i$ detectors have been placed after the time-of-flight interval T1-T2. This enabled us to have a better $\pi^\pm/p$ separation and time resolution. - A transverse size of the start detector T1 was cut off according to required geometry. This reduced a background load of the C-arm. - A thickness of the start detector was also reduced in order to improve time resolution. - All unnecessary layers of absorbers used previously to suppress radiative backgrounds have been removed from the time-of-flight interval, with the effect of reducing the $e^+/e^-$ background created by photons from $\pi^0$ decays. [r]{}[0.4]{} ![image](fig-2dim-bb.eps){width="35.00000%"} General tests of the setup, including preliminary time calibrations of the arms, have been done in special runs, in which the quasifree reaction $\gamma p\to\pi^+n$ inside carbon nuclei was observed. In such runs the two arms of the setup have been moved to the angles $50^\circ{-}50^\circ$ where the high count rate enabled one to do the calibrations quickly. Lead convertors used in these runs provided reliable ultrarelativistic reference points $\beta=1$ for particle’s velocities $\beta_C$ and $\beta_N$ measured in the C- and N-arms. A two-dimensional $\beta_C{-}\beta_N$ plot on Fig. \[2-dim-bb\] illustrates this procedure. The calibration done provided a linear scale of velocities in the range $\beta = 0.6{-}1$ with errors of about 3% ($1\sigma$). We have checked the linearity of the scale by using cosmic rays and setting different distances between detectors. Results and comparison with simulations {#results-and-comparison-with-simulations .unnumbered} ======================================= Measurement runs have mostly been done in 2009 at two maximal beam energies: $E_{\gamma\;\rm max} = 650$ MeV and 850 MeV. The on-line trigger was a coincidence of particles in the C- and N-arms within a time gate of 50 ns. For further off-line analysis events were selected with an additional condition of sufficiently long ranges of the charged particles, E\_i &gt; E\_i\^[thr]{} \[eq:selection\] with experimentally adjusted thresholds $E_i^{\rm thr}$. In this way low-energy particles in the C-arm were rejected. A two-dimensional histogram in the variables $\Delta E{-}\beta_C$, where $\Delta E$ is the minimal energy loss in the $\Delta E_i$ detectors, E=\_i(E\_i), is shown in Fig. \[2-dim-bE\] for the beam energy $E_{\gamma\;\rm max} = 850$ MeV. Results of simulations using the Intra Nuclear Cascade (INC) model [@pschenichnov97] in the GEANT-3 framework are shown in Fig. \[2-dim-bE-INC\] for comparison. The INC model takes into account production of various mesons and baryon resonances, their free propagation in the nuclear matter, and then various $2\to 2$ collisional reactions including $\eta N\to\pi N$. This model successfully describes many photoreactions in wide kinematical ranges as was demonstrated, beyond [@pschenichnov97], in simulations of the GRAAL experiment at energies 500–1500 MeV [@ignatov08]. Binding effects for $\eta$ and reactions like $\eta NN\to NN$ were not included into the model, so one can try to find effects arising due to formation and decay of $\eta$-mesic nuclei through characteristic deviations of the model predictions from the experimental data. 
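The velocity windows used in the analysis below follow directly from the two-body kinematics quoted in Eqs. (\[kinema-piN\]) and (\[kinema-NN\]); a minimal Python sketch (an illustration added here, not the analysis code of the experiment) reproduces those reference values from the particle masses alone.

```python
import math

def back_to_back(sqrt_s, m1, m2):
    """Two-body kinematics of a system at rest with total energy sqrt_s
    decaying into masses m1, m2; returns (T1, T2, p, beta1, beta2)."""
    E1 = (sqrt_s**2 + m1**2 - m2**2) / (2 * sqrt_s)
    E2 = sqrt_s - E1
    p = math.sqrt(E1**2 - m1**2)
    return E1 - m1, E2 - m2, p, p / E1, p / E2

m_eta, m_N, m_pi = 547.86, 938.92, 139.57   # MeV (rounded PDG values)

# eta N -> pi N with both particles initially at rest, Eq. (kinema-piN):
print(back_to_back(m_eta + m_N, m_pi, m_N))     # ~ (313, 94, 431, 0.95, 0.42)

# eta absorbed by a nucleon pair at rest, eta NN -> NN, Eq. (kinema-NN):
print(back_to_back(m_eta + 2 * m_N, m_N, m_N))  # ~ (274, 274, 767, 0.63, 0.63)
```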
The simulation shows that the selection (\[eq:selection\]) of particles with sufficiently long ranges distinguishes very well protons (as particles with $\beta_C \leq 0.7$) and pions (as particles with $\beta_C \geq 0.7$): the overlap is less than 1%. ![Two-dimensional $\Delta E{-}\beta_C$ distribution, the INC model.[]{data-label="2-dim-bE-INC"}](fig-2dim-bE.eps){width="60.00000%"} ![Two-dimensional $\Delta E{-}\beta_C$ distribution, the INC model.[]{data-label="2-dim-bE-INC"}](fig-2dim-bE-INC.eps){width="60.00000%"} Considering one-dimensional spectra over $\beta_C$ of events selected according to the condition (\[eq:selection\]) of sufficiently long ranges and imposing the additional cut-off $0.3 <\beta_N < 0.7$ for neutron velocities, we find rather interesting structures in the spectra. Shown in Fig. \[bC\] are experimental data (blue areas) together with results of the INC simulation (pink hatched areas). Separately shown are INC predictions for the number of protons and charged pions in the C-arm. There is a qualitative agreement of the INC simulation with the experimental data for the case of the subthreshold beam energy, $E_e=650$ MeV. Meanwhile, in the case of $E_e=850$ MeV there is a clear excess of the experimentally observed events over the simulation results in two velocity regions closely corresponding to the kinematics of $\eta$-mesic nuclei decays with emission of $\pi N$ and $NN$ correlated pairs, Eqs. (\[kinema-piN\]) and (\[kinema-NN\]). Knowing from the INC simulations that the ”normal” (without $\eta$-mesic nuclei) dynamics of the considered reaction does not yield a sufficient amount of protons and pions with the velocities of about $\beta_C\sim 0.7$, we interpret the found anomaly at $\beta_C\sim 0.7$ as a result of production of low-energy $\eta$-mesons followed by their two-nucleon annihilation. The energy resolution of the experimental setup is not sufficient to say whether an essential part of these $\eta$-mesons is produced in the bound state, but theoretical arguments discussed in above make such a statement plausible. Concerning the excess of pions with $\beta_C\simeq 0.95$, this feature is in agreement with our measurements reported earlier [@sokol99; @sokol00; @sokol08]. It can be interpreted as an evidence for one-nucleon annihilation of produced low-energy $\eta$-mesons (bound or unbound). Electron/positron peaks shown in Fig. \[bC\] originate from calibration runs with the lead plate inserted. They were not included into simulations made. ![Velocity distribution of charged particles selected according to the criterion $\Delta E_i > 0$ (for all $i=1,2,3$) at $E_e=650$ and 850 MeV. A well visible excess of events over the INC simulation is seen at the right panel — in the case of the beam energy exceeding the $\eta$-photoproduction threshold — in both velocity regions corresponding to the expected velocities of the $\pi N$ and $NN$ decay products of $\eta$-mesic nuclei.[]{data-label="bC"}](fig-bC-650.eps){width="45.00000%"} The observed proton peak in the $\beta_C$ distribution is very unusual because it corresponds to $pn$ pairs with very high kinetic energies $T_p\sim T_n\sim 200{-}300$ MeV and transverse momenta $p_p\sim p_n\sim 400{-}800$ MeV/c. One should keep in mind that photons which produce such pairs have quite a modest energy $650~{\rm MeV} < E_\gamma < 850$ MeV. Ordinary photoproduction reactions do not give nucleons with such a high energy and momentum. 
Creation and annihilation of intermediate low-energy $\eta$-mesons seems to be the only explanation to these events. Assuming that the observed access events are mainly related with formation and isotropic decays of $\eta$-mesic nuclei with $A=11$, we can estimate their photoproduction cross section. The number of photons of the energies $E_\gamma = 650{-}850$ MeV that hit the carbon target in experimental runs was evaluated via comparison of the total yield of charged pions detected by a single C-arm of the setup with predictions of INC for that yield, thus giving the result $N_\gamma \simeq 1.36\times 10^{11}$. Taking into account the solid angle of the C-arm telescope ($\Omega_C = 0.027$ sr), efficiencies of detectors, a geometric efficiency of the $N$-arm of the setup ($\sim 18\%$ as found from theoretically expected angular distributions of particles of the correlated pairs), we arrived at the following cross section of $\eta$-mesic nucleus formation: (+ [ ]{}\^[12]{}[C]{}\_A + X) 10 b. \[x-section\] We write it as an upper limit because part of the observed events can be related with unbound etas. This number is consistent with available theoretical estimates (typically, a few $\mu$b). Conclusions {#conclusions .unnumbered} =========== The new obtained data confirm the main features of the $\pi N$ signal of formation and decay of $\eta$-mesic nuclei off the carbon target in the photoreaction found in our previous work. A new signature for formation and decay of $\eta$-mesic nuclei, the back-to-back $pn$ pairs, was explored. For the first time an experimental evidence was found that the yield of such pairs in the region of $\beta_C\sim 0.6{-}0.7$ is quite large and therefore is also suitable for searching for $\eta$–mesic nuclei. Assuming that the observed excess of events is related with $\eta$-mesic nuclei, an estimate of the total cross section of formation of $\eta$-nuclei in the photoreaction off carbon have been obtained, see Eq. (\[x-section\]). We have plans to carry out a more precise experiment, with a better energy resolution, at the deuteron beam of the JINR nuclotron. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported in part by the RFBR grants 08-02-00648-a and 10-02-01433-a. A nice work of the accelerator group of the LPI synchrotron and its leader G.G. Subbotin is highly appreciated. [00]{} Q. Haider and L.C. Liu, *Formation of an eta-mesic nucleus*, *Phys. Lett. B* [**172**]{} (1986) 257; *Erratum: Ibid.* [**174**]{} (1986) 465E. L.C. Liu and Q. Haider, *Signature for the existence of eta-mesic nuclei*, *Phys. Rev. C* [**34**]{} (1986) 1845. R.S. Bhalerao and L.C. Liu, *Off-shell model for threshold pionic $\eta$ production on a nucleon and for $\eta N$ scattering*, *Phys. Rev. Lett.* [**54**]{} (1985) 865. L. Tiator, C. Bennhold and S.S. Kamalov, *The $\eta NN$ coupling in eta photoproduction*, *Nucl. Phys. A* [**580**]{} (1994) 455. E.F. McNicoll, S. Prakhov, I.I. Strakovsky et al., *Experimental study of the $\gamma p \to \eta p$ reaction with the Crystal Ball detector at the Mainz Microtron (MAMI-C)* \[data tables are available from the Durham HEPDATA base, [http://durpdg.dur.ac.uk]{}\]. Q. Haider and L.C. Liu, *Dependence of calculated binding energies and widths of $\eta$-mesic nuclei on treatment of subthreshold $\eta$-nucleon interaction*, *Phys. Rev. C* [**66**]{} (2002) 045208. Q. Haider and L.C. Liu, *Eta-mesic nucleus: A new form of nuclear matter*, *Acta Phys. Pol. B Proc. Suppl.* [**2**]{} (2009) 121. A.M. Green and S. 
Wycech, *$\eta$-nucleon scattering length and effective range*, *Phys. Rev. C* [**55**]{} (1997) R2167. A.M. Green and S. Wycech, *$\eta$-nucleon scattering length and effective range uncertainties*, *Phys. Rev. C* [**71**]{} (2005) 014001. T. Inoue, E. Oset and M.J. Vicente Vacas, *Chiral unitary approach to $S$-wave meson baryon scattering in the strangeness $S=0$ sector*, *Phys. Rev. C* [**65**]{} (2002) 035204. I. Inoue and E. Oset, *$\eta$ in the nuclear medium within a chiral unitary approach*, *Nucl. Phys. A* [**710**]{} (2002) 354. C. García-Recio, T. Inoue, J. Nieves and E. Oset, *$\eta$ bound states in nuclei*, *Phys. Lett. B* [**550**]{} (2002) 47. D. Jido, H. Nagahiro and S. Hirenzaki, *Medium effects to the $N(1535)$ resonance and $\eta$ mesic nuclei*, *Phys. Rev. C* [**66**]{} (2002) 045202. H. Nagahiro, D. Jido and S. Hirenzaki, *Formation of mesic nuclei by $(\gamma,p)$ reactions*, *Nucl. Phys. A* [**761**]{} (2005) 92. D. Jido, E.E. Kolomeitsev, H. Nagahiro and S. Hirenzaki, *Level crossing of particle-hole and mesonic modes in eta-mesonic nuclei*, *Nucl. Phys. A* [**811**]{} (2008) 158. M. Kohno and H. Tanabe, *Low energy $\eta$ production in $(\pi^+,p)$ and $(\gamma,p)$ reactions on $^{12}$C*, *Phys. Lett. B* [**231**]{} (1989) 219. A.I. Lebedev and V.A. Tryasuchev, *Calcultion of the photoptoduction cross section of $\eta$-nuclei*, *Voprosy Atomnoi Nauki i Tekhniki, ser. Yad.-Fiz. Issled. (Kharkov)*, [**8/8**]{} (1989) 97 (in Russian). A.I. Lebedev and V.A. Tryasuchev, *Cross section for production of $\eta$ nuclei by photons*, *J. Phys. G: Nucl. Part. Phys.* [**17**]{} (1991) 1197. A.I. Lebedev and V.A. Tryasuchev, *Study of the photoproduction of eta mesic nuclei on the basis of a complex potential*, *Phys. Atom. Nucl.* [**58**]{} (1995) 586 \[*Yad. Fiz.* [**58**]{} (1995) 642 (in Russian)\]. V.A. Tryasuchev, *Photoproduction of light eta nuclei*, *Phys. Part. Nucl.* [**30**]{} (1999) 606 \[*Fiz. Elem. Chast. Atom. Yadra* [**30**]{} (1999) 1391 (in Russian)\]. V.A. Tryasuchev, *Theoretical analysis of the formation of $\eta$ mesic nuclei in $\gamma + A\to N + {}_\eta A'$ reactions*, *Phys. Atomic Nucl.* [**64**]{} (2001) 346 \[*Yad. Fiz.* [**64**]{} (2001) 396 (in Russian)\]. A.I. L’vov, *Production and decay of eta-mesic nuclei*, in Proc. of the *7th Int. Conf. ‘Mesons and Light Nuclei’, Czech Republic, 1998* (Mesons and Light Nuclei ’98, World Scientific, Eds. J. Adam, P. Bydžovský, J. Dobeš, R. Mach, J. Mareš and M. Sotona), pp. 469–472; E-print arXiv: nucl-th/9809054. G.A. Sokol, T.A. Aibergenov, A.V. Kravtsov, A.I. L’vov and L.N. Pavlyuchenko, *Search for $\eta$–mesic nuclei in photoproduction processes*, *Fizika B* [**8**]{} (1999) 85. Q. Haider and Lon-Chang Liu, *Eta-mesic nucleus and COSY-GEM data*, *Acta Phys.Polon. B* [**41**]{} (2010) 2231. Q. Haider and Lon-Chang Liu, *Interference and nuclear medium effects on the eta-mesic nuclear spectrum*, *J. Phys. G: Nucl. Part. Phys.* [**37**]{} (2010) 125104. M. Kohno and H. Tanabe, *Pion-induced $\eta$ production on nuclei*, *Nucl. Phys. A* [**519**]{} (1990) 755. R.E. Chrien, S. Bart, P. Pile et al., *Search for bound states of the $\eta$ meson in light nuclei*, *Phys. Rev. Lett.* [**60**]{} (1988) 2595. B.J. Lieb, in Proceedings of *International Conference on Nuclear Physics*, Sao Paulo, Brazil, 1988. B.J. Lieb, L.C. Liu, E. Cheung et al., *Search for nuclear bound states of the eta meson*, *Progress at LAMPF, January – December 1988. LA-11670-PR Progress Report*, pp. 52-55. H.C. Chiang, E. Oset and L.C. 
Liu, *Width of bound eta in nuclei*, *Phys. Rev. C* [**44**]{} (1991) 738. G.A. Sokol, T.A. Aibergenov, A.V. Koltsov et al., *Discovery of $\eta$-mesic nuclei*, *Part. Nucl. Lett.* [**102**]{} (2000) 71 \[*Pisma EChaYa* [No.5 \[102\]]{} (2000) 71 (in Russian)\]. G.A. Sokol and L.N. Pavlyuchenko, *Discovery and investigation of $\eta$-mesic nuclei in photoproduction processes*, *Phys. At. Nucl.* [**71**]{} (2008) 509 \[*Yad. Fiz.* [**71**]{} (2008) 532 (in Russian)\]. G.A. Sokol, V.L. Kashevarov, A.I. Lebedev and L.N. Pavlyuchenko, *Photoproduction of eta-nuclei*, in Proceedings of *International Conference on Mesons and Nuclei at Intermediate Energies*, Dubna, Russia, 1994 (Eds. M.Kh. Khankhasayev and Zh.B. Kurmanov, World Scientific, Singapore, 1995), p. 651–657; *Preprint LPI* No. 17 (1994). A.I. Lebedev and G.A. Sokol, *Search for $eta$-nuclei*, *Preprint LPI* No. 34 (1995). G.A. Sokol and V.A. Tryasuchev, *A possible method of observing eta nuclei*, *Bull. Lebedev Phys.Inst.*, No.4 (1991) 21 \[*Kratk. Soobshch. Fiz.* 4 (1991) 23 (in Russian)\]. M. Pfeiffer, J. Ahrens, J.R.M. Annand et al., *Photoproduction of $\eta$-mesic $^3$He*, *Phys. Rev. Lett.* [**92**]{} (2004) 252001; *Ibid.* [**94**]{} (2005) 049102. F. Pheron, J. Ahrens, J.R.M. Annand et al., *Coherent photoproduction of $\eta$-mesons off $^3$He – search for $\eta$-mesic nuclei*, *Phys. Lett. B* [**709**]{} (2012) 21. S.V. Afanasiev, A.S. Artiomov, R.N. Bekmirzaev et al., *Search results of $\eta$-mesic nuclei in the $d + \rm C$ reaction in JINR*, *Nucl. Phys. B (Proc. Suppl.)* [**209-210**]{} (2011) 255. A. Budzanowski, A. Chatterjee, P. Hawranek et al., *Search for $\eta$-mesic nuclei in a recoil-free transfer reaction*, *Phys. Rev. C* [**79**]{} (2009) 012201(R). R.S. Hayano, S. Hirenzaki and A. Gillitzer, *Formation of $\eta$-mesic nuclei using the recoilless $(d,{}^3\rm He)$ reaction*, *Eur. Phys. J. A* [**6**]{} (1999) 99. J. Kulpa and S. Wycech, *The absorptive $\rho^2$ terms in the $\eta$ optical potential*, *Acta Phys. Pol. B* [**29**]{} (1998) 3077. H. Calén, S. Carius, K. Fransson et al., *The $pp\to pp\eta$ reaction near the kinematical threshold*, *Phys. Lett. B* [**366**]{} (1996) 39. H. Calén, J. Dyring, K. Fransson et al., *Measument of the quasifree $p+n\to d+\eta$ reaction near threshold*, *Phys. Rev. Lett.* [**79**]{} (1997) 2642. H. Calén, J. Dyring, K. Fransson et al., *Threshold structure of the quasifree $p+n\to d+\eta$ reaction*, *Phys. Rev. Lett.* [**80**]{} (1998) 2069. H. Calén, J. Dyring, K. Fransson et al., *Measument of the quasifree $pn\to pn\eta$ reaction*, *Phys. Rev. C* [**58**]{} (1998) 2667. J. Smyrski, P. Wünster, J.T. Balewski et al., *Near-threshold $\eta$ meson production in proton-proton collisions*, *Phys. Lett. B* [**474**]{} (2000) 182. P. Moskal, R. Czyżykiewicz, H.-H. Adam et al., *Near-threshold production of the $\eta$-meson via the quasifree $pn\to pn\eta$ reaction*, *Phys. Rev. C* [**79**]{} (2009) 015208. V. Baru, A.M. Gasparyan, J. Haidenbauer, C. Hanhart, A.E. Kudryavtsev and J. Speth, *Production of $\eta$ mesons in nucleon-nucleon collisions*, *Phys. Rev. C* [**67**]{} (2003) 024002. A.S. Iljinov, I.A. Pschenichnov, N. Bianchi et al., *Extension of the intranuclear cascade model for photonuclear reactions at energies up to 10 GeV*, *Nucl. Phys. A* [**616**]{} (1997) 575. A. Ignatov, O. Bartalini, V. Bellini et al., *New experimental and simulated results on nuclear media effects in meson photoproduction off nuclei*, *Prog. Part. Nucl. Phys.* [**61**]{} (2008) 253.
1
--- abstract: 'Dissipationless collapses in Modified Newtonian Dynamics (MOND) are studied by using a new particle-mesh N-body code based on our numerical MOND potential solver. We found that low surface-density end-products have shallower inner density profile, flatter radial velocity-dispersion profile, and more radially anisotropic orbital distribution than high surface-density end-products. The projected density profiles of the final virialized systems are well described by Sersic profiles with index $m \lsim 4$, down to $m\sim 2$ for a deep-MOND collapse. Consistently with observations of elliptical galaxies, the MOND end-products, if interpreted in the context of Newtonian gravity, would appear to have little or no dark matter within the effective radius. However, we found impossible (under the assumption of constant mass-to-light ratio) to simultaneously place the resulting systems on the observed Kormendy, Faber-Jackson and Fundamental Plane relations of elliptical galaxies. Finally, the simulations provide strong evidence that phase mixing is less effective in MOND than in Newtonian gravity.' author: - Carlo Nipoti - Pasquale Londrillo - Luca Ciotti title: Dissipationless collapses in MOND --- Introduction ============ In Bekenstein & Milgrom’s (1984, hereafter BM) Lagrangian formulation of Milgrom’s (1983) Modified Newtonian Dynamics (MOND), the Poisson equation $$\nabla^2{\phi_{\rm N}}=4\pi G\rho \label{eqPoisson}$$ for the Newtonian gravitational potential ${\phi_{\rm N}}$ is replaced by the field equation $$\nabla\cdot\left[\mu\left({\Vert\nabla\phi\Vert\over{a_0}}\right) \nabla\phi\right] = 4\pi G \rho, \label{eqMOND}$$ where ${a_0}\simeq 1.2 \times 10^{-10} {\rm m\, s^{-2}}$ is a characteristic acceleration, $\Vert ...\Vert$ is the standard Euclidean norm, $\phi$ is the MOND gravitational potential produced by the density distribution $\rho$, and in finite mass systems $\nabla\phi\to 0$ for $\Vert{{\bf x}}\Vert\to\infty$. The MOND gravitational field ${{\bf g}}$ experienced by a test particle is $${{\bf g}}=-\nabla\phi, \label{eqgv}$$ and the function $\mu$ is such that $$\mu(y)\sim\cases{y&for $y\ll 1$,\cr 1&for $y\gg 1$;} \label{eqmulim}$$ throughout this paper we use $$\mu (y)={y\over\sqrt{1+y^2}}. \label{eqmu}$$ In the so-called ‘deep MOND regime’ (hereafter dMOND), describing low-acceleration systems ($\Vert\nabla\phi\Vert \ll{a_0}$), $\mu(y)=y$ and so equation (\[eqMOND\]) simplifies to $$\nabla\cdot\left({\Vert\nabla\phi\Vert}\nabla\phi\right) = 4\pi G {a_0}\rho. \label{eqdMOND}$$ The source term in equation (\[eqMOND\]) can be eliminated by using equation (\[eqPoisson\]), giving $$\mu\left(\Vert\nabla\phi\Vert\over{a_0}\right)\nabla\phi=\nabla{\phi_{\rm N}}+{{\bf S}}, \label{eqcurl}$$ where ${{\bf S}}={\rm curl\,}{{\bf h}}$ is a solenoidal field dependent on $\rho$ and in general different from zero. When ${{\bf S}}=0$ equation (\[eqcurl\]) reduces to Milgrom’s (1983) formulation and can be solved explicitly. Such reduction is possible for configurations with spherical, cylindrical or planar symmetry, which are special cases of a more general family of stratifications (BM; Brada & Milgrom 1995). Though the solenoidal field ${{\bf S}}$ has been shown to be small for some configurations (Brada & Milgrom 1995; Ciotti, Londrillo & Nipoti 2006, hereafter CLN), neglecting it when simulating time-dependent dynamical processes has dramatic effects such as non-conservation of total linear momentum (e.g. Felten 1984; see also Section \[secic\]). 
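For configurations where ${{\bf S}}=0$ (e.g. spherical symmetry), equation (\[eqcurl\]) reduces to the algebraic relation $\mu(g/{a_0})\,g=g_{\rm N}$ between the magnitudes of the MOND and Newtonian fields, which can be inverted in closed form for the $\mu$ of equation (\[eqmu\]). The following minimal sketch (the function name is ours, and this is an illustration rather than the potential solver used in the paper) shows the inversion and its two limits:

```python
import numpy as np

A0 = 1.2e-10  # m s^-2, the characteristic MOND acceleration

def mond_from_newton(g_newton, a0=A0):
    """Invert mu(g/a0)*g = g_N for mu(y) = y/sqrt(1+y^2).

    Valid only where the solenoidal field S vanishes (e.g. spherical
    symmetry); the general case requires solving the field equation.
    """
    x = np.asarray(g_newton, dtype=float) / a0
    y2 = 0.5 * x**2 * (1.0 + np.sqrt(1.0 + 4.0 / x**2))
    return a0 * np.sqrt(y2)

# Point mass: g_N = G M / r^2.  The ratio g_MOND/g_N tends to 1 in the
# Newtonian regime (g_N >> a0) and to sqrt(a0/g_N) in the dMOND regime.
G, Msun, kpc = 6.674e-11, 1.989e30, 3.086e19
gN = G * (1e11 * Msun) / (np.array([1.0, 10.0, 100.0]) * kpc)**2
print(mond_from_newton(gN) / gN)
```

In the deep-MOND limit the inversion gives $g\simeq\sqrt{g_{\rm N}\,{a_0}}$, which is the acceleration scale behind the virial relations discussed below.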
Nowadays several astronomical observational data appear consistent with the MOND hypothesis (see, e.g., Milgrom 2002; Sanders & McGaugh 2002). In addition, Bekenstein (2004) recently proposed a relativistic version of MOND (Tensor-Vector-Scalar theory, TeVeS), making it an interesting alternative to the cold dark matter paradigm. However, dynamical processes in MOND have been investigated very little so far, mainly due to difficulties posed by the non-linearity of equation (\[eqMOND\]). Here we recall the spherically symmetric simulations (in which ${{\bf S}}=0$) of gaseous collapse in MOND by Stachniewicz & Kutschera (2005) and Nusser & Pointecouteau (2006). The only genuine three-dimensional MOND N-body simulations (in which equation \[\[eqMOND\]\] is solved exactly) are those by Brada & Milgrom (1999, 2000), who studied the stability of disk galaxies and the external field effect, and those of Tiret & Combes (2007). Other attempts to study MOND dynamical processes have been conducted using three-dimensional N-body codes by arbitrarily setting ${{\bf S}}=0$: Christodoulou (1991) investigated disk stability, while Nusser (2002) and Knebe & Gibson (2004) explored cosmological structure formation[^1]. In this paper we present results of N-body simulations of dissipationless collapse in MOND. The simulations were performed with an original three-dimensional particle-mesh N-body code, based on the numerical MOND potential solver presented in CLN, which solves equation (\[eqMOND\]) exactly. These numerical experiments are interesting both from a purely dynamical point of view, allowing for the first time to explore the relaxation processes in MOND, and in the context of elliptical galaxy formation. In fact, the ability of dissipationless collapse at producing systems strikingly similar to real ellipticals is a remarkable success of Newtonian dynamics (e.g., van Albada 1982; Aguilar & Merritt 1990; Londrillo, Messina & Stiavelli 1991; Udry 1993; Trenti, Bertin & van Albada 2005; Nipoti, Londrillo, & Ciotti 2006, hereafter NLC06), while there have been no indications so far that MOND can work as well in this respect. Here we study the structural and kinematical properties of the end-products of MOND simulations, and we compare them with the observed scaling relations of elliptical galaxies: the Faber–Jackson (FJ) relation (Faber & Jackson 1976), the Kormendy (1977) relation, and the Fundamental Plane (FP) relation (Djorgovski & Davis 1987, Dressler et al. 1987). The paper is organized as follows. The main features of the new N-body code are presented in Section \[seccod\], while Section \[secsim\] describes the set-up and the analysis of the numerical simulations. The results are presented in Section \[secres\] and discussed in Section \[secdis\]. The N-body code {#seccod} =============== While most N-body codes for simulations in Newtonian gravity are based on the gridless multipole expansion treecode scheme (Barnes & Hut 1986; see also Dehnen 2002), the non-linearity of the MOND field equation (\[eqMOND\]) forces one to resort to other methods, such as the particle-mesh technique (see Hockney & Eastwood 1988). In this approach, particles are moved under the action of a gravitational field which is computed on a grid, with particle-mesh interpolation providing the link between the two representations. 
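Before describing the code, it may help to sketch the generic particle-mesh ingredient in one dimension: the quadratic-spline (triangular-shaped-cloud) kernel used for mass assignment and force interpolation on a uniform grid. This is only an illustration (the code presented below applies a quadratic spline per coordinate on a spherical grid), and the function name is ours.

```python
import numpy as np

def quadratic_spline_weights(x, h):
    """1D quadratic-spline (TSC) weights on a uniform grid of spacing h.

    Returns the index i of the nearest node and the weights attached to
    nodes (i-1, i, i+1); products of 1D kernels give the multi-dimensional
    assignment/interpolation weights.
    """
    i = int(np.rint(x / h))
    d = x / h - i                       # offset from nearest node, |d| <= 1/2
    w = np.array([0.5 * (0.5 - d)**2, 0.75 - d**2, 0.5 * (0.5 + d)**2])
    return i, w                         # the weights sum to 1

# mass assignment: spread each particle's mass onto nodes i-1, i, i+1 with w;
# force interpolation: read the grid force back with the same weights.
print(quadratic_spline_weights(0.3, 1.0))   # -> (0, [0.02, 0.66, 0.32])
```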
In our MOND particle-mesh N-body code, we adopt a spherical grid of coordinates ($r$, $\vartheta$, $\varphi$), made of $\Nr\times\Nth\times\Nph$ points, on which the MOND field equation is solved as in CLN. Particle-mesh interpolations are obtained with a quadratic spline in each coordinate, while time stepping is given by a classical leap-frog scheme (Hockney & Eastwood 1988). The time-step $\Dt$ is the same for all particles and is allowed to vary adaptively in time. In particular, according to the stability criterion for the leap-frog time integration, we adopt $\Dt=\eta/\sqrt{\max{|\nabla^2 \phi|}}$, where $\eta\lsim0.3$ is a dimensionless parameter. We found that $\eta=0.1$ assures good conservation of the total energy in the Newtonian cases (see Section \[secic\]). In the present version of the code, all the computations on the particles and the particle-mesh interpolations can be split among different processors, while the computations relative to the potential solver are not performed in parallel. The solution of equation (\[eqMOND\]) over the grid is then the bottleneck of the simulations: however, the iterative procedure on which the potential solver is based (see CLN) allows to adopt as seed solution at each time step the potential previously determined. The MOND potential solver can also solve the Poisson equation (obtained by imposing $\mu=1$ in equation \[eqMOND\]), so Newtonian simulations can be run with the same code. We exploited this property to test the code by running several Newtonian simulations of both equilibrium distributions and collapses, comparing the results with those of simulations (starting from the same initial conditions) performed with the FVFPS treecode (Londrillo, Nipoti & Ciotti 2003; Nipoti, Londrillo & Ciotti 2003). One of these tests is described in Section \[seckin\]. We also verified that the code reproduces the Newtonian and MOND conservation laws (see Section \[secic\]): note that the conservation laws in MOND present some peculiarities with respect to the Newtonian case, so we give here a brief discussion of the subject. As already stressed by BM, equation (\[eqMOND\]) is obtained from a variational principle applied to a Lagrangian with all the required symmetries, so energy, linear and angular momentum are conserved. Unfortunately, as also shown by BM, the total energy diverges even for finite mass systems, thus posing a computational challenge to code validation. We solved this problem by checking the volume-limited energy balance equation $$\begin{aligned} \label{eqbaltext} {d \over d t}\int_{V_0}\left[k+\rho\phi+{{a_0}^2\over 8 \pi G}\mathcal{F}\left({||\nabla\phi||\over {a_0}}\right)\right]\dxcube=\nonumber\\{1\over 4\pi G}\int_{\partial V_0}\mu{\partial \phi \over \partial t} <\nabla \phi,\hat{\bf n}> d a,\end{aligned}$$ which is derived in Appendix \[appetot\]. In equation (\[eqbaltext\]) $V_0$ is an arbitrary (but fixed) volume enclosing all the system mass, $k$ is the kinetic energy per unit volume, and $$\mathcal{F}(y)\equiv2\int_{y_0}^{y} \mu(\xi) \xi d \xi, \label{eqeffe}$$ where $y_0$ is an arbitrary constant; note that only finite quantities are involved. 
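A minimal sketch of the time integration just described (kick-drift-kick leap-frog with the global adaptive step $\Dt=\eta/\sqrt{\max|\nabla^2\phi|}$) is given below. The acceleration routine, which in the actual code comes from the grid solution of equation (\[eqMOND\]) plus particle-mesh interpolation, is left abstract here, and the snippet only checks the integrator on a harmonic oscillator; function names are illustrative.

```python
import numpy as np

def adaptive_dt(laplacian_phi, eta=0.1):
    """Global time step dt = eta / sqrt(max |nabla^2 phi|) over the grid."""
    return eta / np.sqrt(np.max(np.abs(laplacian_phi)))

def leapfrog_step(pos, vel, accel, dt):
    """One kick-drift-kick leap-frog step for all particles.

    accel(pos) stands for the particle-mesh force evaluation."""
    v_half = vel + 0.5 * dt * accel(pos)        # kick
    pos_new = pos + dt * v_half                 # drift
    v_new = v_half + 0.5 * dt * accel(pos_new)  # kick
    return pos_new, v_new

# sanity check on a harmonic oscillator: energy should stay near 0.5
x, v = np.array([1.0]), np.array([0.0])
for _ in range(2000):
    x, v = leapfrog_step(x, v, lambda q: -q, 0.05)
print(0.5 * v**2 + 0.5 * x**2)
```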
Another important relation between global quantities for a system at equilibrium (in MOND as in Newtonian gravity) is the virial theorem $$\label{eqvirtheo} 2 K + W=0,$$ where $K$ is the total kinetic energy and $W=\Tr W_{ij}$ is the trace of the Chandrasekhar potential energy tensor $$\label{eqwij} W_{ij}\equiv-\int \rho({{\bf x}}) x_i {\partial \phi({{\bf x}}) \over \partial x_j} \dxcube$$ (e.g., Binney & Tremaine 1987). Note that in MOND $K+W$ is [*not*]{} the total energy, and is not conserved. However, [*$W$ is conserved in the limit of dMOND*]{}, being $W=-(2/3)\sqrt{G{a_0}M_*^3}$ for [*all*]{} systems of finite total mass ${{M_*}}$ (see Appendix \[appw\] for the proof). As a consequence, in dMOND the virial theorem writes simply $\sgv^4=4G{{M_*}}{a_0}/9$, where $\sgv \equiv \sqrt{2 K / {{M_*}}}$ is the system virial velocity dispersion (this relation was proved for dMOND spherical systems by Gerhard & Spergel 1992; see also Milgrom 1984). In our simulations we also tested that equation (\[eqvirtheo\]) is satisfied at equilibrium, and that $W$ is conserved in the dMOND case (see Sections \[secic\] and \[secres\]). Numerical simulations {#secsim} ===================== ----------------------------------------------- --------------------------------------------- ${t_{\rm *n}}={r_*}^{3/2} (G {{M_*}})^{-1/2}$ ${t_{\rm *d}}={r_*}(G {{M_*}}{a_0})^{-1/4}$ ${v_{\rm *n}}=(G {{M_*}})^{1/2}{r_*}^{-1/2}$ ${v_{\rm *d}}=(G {{M_*}}{a_0})^{1/4}$ ${E_{\rm *n}}=G {{M_*}}^2{r_*}^{-1}$ ${E_{\rm *d}}=(G{a_0})^{1/2}{{M_*}}^{3/2}$ ----------------------------------------------- --------------------------------------------- : Time, velocity, and energy units for Newtonian and MOND (subscript n), and dMOND (subscript d) N-body simulations. The choice of appropriate scaling physical units is an important aspect of N-body simulations. This is especially true in the present case, in which we want to compare MOND and Newtonian simulations having the same initial conditions. As well known, due to the scale-free nature of Newtonian gravity, a Newtonian $N$-body simulation starting from a given initial condition describes in practice $\infty^2$ systems of arbitrary mass and size. Each of them is obtained by assigning specific values to the length and mass units, ${r_*}$ and ${{M_*}}$, in which the initial conditions are expressed. Also dMOND gravity is scale free, because ${a_0}$ appears only as a multiplicative factor in equation (\[eqdMOND\]), and so a simulation in dMOND gravity represents systems with arbitrary mass and size (though in principle the results apply only to systems with accelerations much smaller than ${a_0}$). MOND simulations can also be rescaled, but, due to the presence of the characteristic acceleration ${a_0}$ in the non-linear function $\mu$, each simulation describes only $\infty^1$ systems, because ${r_*}$ and ${{M_*}}$ cannot be chosen independently of each other. On the basis of the above discussion, we fix the physical units as follows (see Appendix \[appscal\] for a detailed description of the scaling procedure). Let the initial density distribution be characterized by a total mass ${{M_*}}$ and a characteristic radius ${r_*}$. We rescale the field equations so that the dimensionless source term is the same in Newtonian, MOND and dMOND simulations. We also require that the Second Law of Dynamics, when cast in dimensionless form, is independent of the specific force law considered, and this leads to fix the time unit. 
As a result, Newtonian and MOND simulations have the same time unit ${t_{\rm *n}}={r_*}^{3/2} (G {{M_*}})^{-1/2}$, while the natural time unit in dMOND simulations is ${t_{\rm *d}}={r_*}(G {{M_*}}{a_0})^{-1/4}$. Note that MOND simulations are characterized by the dimensionless parameter $\kappa=G {{M_*}}/{r_*}^2{a_0}$, and scaling of a specific simulation is allowed provided the value of $\kappa$ is maintained constant. So, simulations with lower $\kappa$ values describe lower surface-density, weaker acceleration systems; dMOND simulations represent the limit case $\kappa \ll 1$, while Newtonian ones describe the regime with $\kappa \gg 1$. With the time units fixed, the corresponding velocity and energy units are ${v_{\rm *n}}\equiv{r_*}/{t_{\rm *n}}$, ${v_{\rm *d}}\equiv{r_*}/{t_{\rm *d}}$, ${E_{\rm *n}}={{M_*}}{v_{\rm *n}}^2$, and ${E_{\rm *d}}={{M_*}}{v_{\rm *d}}^2$ (see Table 1 for a summary). Initial conditions and analysis of the simulations {#secic} -------------------------------------------------- We performed a set of five dissipationless-collapse N-body simulations, starting from the same phase-space configuration: the initial particle distribution follows the Plummer (1911) spherically symmetric density distribution $$\label{eqplum} \rho(r)={{3 {{M_*}}{r_*}^2 }\over 4 {\pi (r^2 +{r_*}^2)^{5/2}}},$$ where ${{M_*}}$ is the total mass and ${r_*}$ a characteristic radius. The choice of a Plummer sphere as initial condition is quite artificial, and not necessarily the most realistic to reproduce initial conditions in the cosmological context (e.g., Gunn & Gott 1972). We adopt such a distribution to adhere to other papers dealing with collisionless collapse (e.g., Londrillo et al. 1991; NLC06; see also Section \[secdis\], in which we present the results of a set of simulations starting from different initial conditions). The particles are at rest, so the initial virial ratio $2K/|W|=0$. What is different in each simulation is the adopted gravitational potential, which is Newtonian in simulation N, dMOND in simulation D, and MOND with acceleration ratio $\kappa$ in simulations M$\kappa$ ($\kappa$=1, 2, 4). For each simulation we define the dynamical time ${t_{\rm dyn}}$ as the time at which the virial ratio $2K/|W|$ reaches its maximum value. In particular, we find ${t_{\rm dyn}}\sim2{t_{\rm *d}}$ in simulation D, and ${t_{\rm dyn}}\sim2{t_{\rm *n}}$ in simulations N, M1, M2 and M4. We note that ${t_{\rm dyn}}\sim{G {{M_*}}^{5/2}(2|K+W|)^{-3/2}}$ in simulation N. Following NLC06, the particles are spatially distributed according to equation (\[eqplum\]) and then randomly shifted in position (up to ${r_*}/5$ in modulus). This artificial, small-scale “noise” is introduced to enhance the phase mixing at the beginning of the collapse, because the numerical noise is small, and the velocity dispersion is zero (see also Section 4.2). As such, these fluctuations are not intended to reproduce any physical clumpiness. All the simulations (realized with $N=10^6$ particles, and a grid with $\Nr=64$, $\Nth=16$ and $\Nph=32$) are evolved up $t=150{t_{\rm dyn}}$. In all cases the modulus of the center of mass position oscillates around zero with r.m.s $\lsim 0.1{r_*}$; similarly, the modulus of the total angular momentum oscillates around zero[^2] with r.m.s. $\lsim0.02$, in units of ${r_*}{{M_*}}{v_{\rm *n}}$ (simulations M$\kappa$ and N) and of ${r_*}{{M_*}}{v_{\rm *d}}$ (simulation D). 
$K+W$ in the Newtonian simulation and $W$ in the dMOND simulation are conserved to within $2\%$ and $0.6\%$, respectively. The volume-limited energy balance equation (\[eqbaltext\]) is conserved with an accuracy of $1\%$ in MOND simulations, independently of the adopted $V_0$. To estimate possible numerical effects, we reran one of the MOND collapse simulations (M1) using $N=2\times 10^6$, $\Nr=80$, $\Nth=24$, and $\Nph=48$: we found that the end-products of these two simulations do not differ significantly, as far as the properties relevant to the present work are concerned. The intrinsic and projected properties of the collapse end-products are determined as in NLC06. In particular, the position of the center of the system is determined using the iterative technique described by Power et al. (2003). Following Nipoti et al. (2002), we measure the axis ratios $c/a$ and $b/a$ of the inertia ellipsoid (where $a$, $b$ and $c$ are the major, intermediate and minor axis) of the final density distributions, their angle-averaged profile and half-mass radius $\rhalf$. We fitted the final angle-averaged density profiles with the $\gamma$-model (Dehnen 1993; Tremaine et al. 1994) $$\label{eqgamma} \rho (r)= {\rho_0 \rc^4 \over r^{\gamma} (\rc +r)^{4-\gamma}},$$ where the inner slope $\gamma$ and the break radius $\rc$ are free parameters, and the reference density $\rho_0$ is fixed by the total mass ${{M_*}}$. The fitting radial range is $0.06\,\lsim\, r/\rhalf\,\lsim\, 10$. In order to estimate the importance of projection effects, for each end-product we consider three orthogonal projections along the principal axes of the inertia tensor, measuring the ellipticity $\epsilon=1-\sbe/\sae$, the circularized projected density profile and the circularized effective radius [$\Re\equiv\sqrt{\sae\sbe}$]{} (where [$\sae$]{} and [$\sbe$]{} are the major and minor semi-axis of the effective isodensity ellipse). We fit (over the radial range $0.1 \, \lsim \, R/\Re \, \lsim \, 10$) the circularized projected density profiles of the end-products with the $R^{1/m}$ Sersic (1968) law: $$\label{eqser} I(R)=\Ie \, \exp\left\{-b(m)\left[ \left( \frac{R}{\Re} \right)^{1/m} -1 \right]\right\},$$ where $\Ie\equiv I(\Re)$ and $b(m)\simeq 2m-1/3+4/405m$ (Ciotti & Bertin 1999). In the fitting procedure $m$ is the only free parameter, because $\Re$ and $\Ie$ are determined by their measured values obtained by particle count. In addition, we measure the central velocity dispersion $\sgz$, obtained by averaging the projected velocity dispersion over the circularized surface density profile within an aperture of $\Re/8$. Some of these structural parameters are reported in Table 2 for the five simulations described above, as well as for three additional simulations, which start from different initial conditions (see Section \[secdis\]). Results {#secres} ======= In Newtonian gravity, collisionless systems reach virialization through violent relaxation in few dynamical times, as predicted by the theory (Lynden-Bell 1967) and confirmed by numerical simulations (e.g. van Albada 1982). On the other hand, due to the non linearity of the theory and the lack of numerical simulations, the details of relaxation processes and virialization in MOND are much less known. Thus, before discussing the specific properties of the collapse end-products we present a general overview of the time evolution of the virial quantities in our simulations, postponing to Section \[secphsp\] a more detailed description of the phase-space evolution. 
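As an illustration of the one-parameter Sersic fit described above (with $\Re$ and $\Ie$ held fixed at their measured values), a possible implementation is sketched below; this is not the fitting code actually used, the function names are ours, and in practice the fit is often performed on the surface-brightness profile rather than on $I(R)$ directly.

```python
import numpy as np
from scipy.optimize import curve_fit

def b_of_m(m):
    """Ciotti & Bertin (1999) approximation b(m) ~ 2m - 1/3 + 4/(405 m)."""
    return 2.0 * m - 1.0 / 3.0 + 4.0 / (405.0 * m)

def fit_sersic_index(R, I, Re, Ie):
    """Fit only the Sersic index m; Re and Ie are fixed beforehand."""
    model = lambda R, m: Ie * np.exp(-b_of_m(m) * ((R / Re)**(1.0 / m) - 1.0))
    (m,), cov = curve_fit(model, R, I, p0=[4.0])
    return m, np.sqrt(cov[0, 0])

# self-test on a noiseless de Vaucouleurs (m = 4) profile, 0.1 <= R/Re <= 10
R = np.logspace(-1, 1, 50)
I = np.exp(-b_of_m(4.0) * (R**0.25 - 1.0))
print(fit_sersic_index(R, I, Re=1.0, Ie=1.0))  # -> (4.0, ~0)
```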
In particular, in Fig. \[figtime\] we show the time evolution of $2 K/|W|$, $K$, $W$, and $K+W$ for simulations D, M1, and N. In the diagrams time is normalized to ${t_{\rm dyn}}$, so plots referring to different simulations are directly comparable (the values of ${t_{\rm dyn}}$ in time units for the five simulations are given in Section \[secic\]). In simulation N (right column) we find the well known behavior of Newtonian dissipationless collapses: $2 K/|W|$ has a peak, then oscillates, and eventually converges to the equilibrium value $2 K/|W|=1$; the total energy $K+W$ is nicely conserved during the collapse, though it presents a secular drift, a well known feature of time integration in N-body codes. The time evolution of the same quantities is significantly different in the dMOND simulation (left column). In particular, the virial ratio $2 K/|W|$ quickly becomes close to one, but is still oscillating at very late times because of the oscillations of $K$, while $W$ is constant as expected. As we show in Section \[secphsp\], these oscillations are related to a peculiar behavior of the system in phase space. Finally, simulation M1 (central column) represents an intermediate case between models N and D: the system starts as dMOND, but soon its core becomes concentrated enough to enter the Newtonian regime. After the initial phases of the collapse, Newtonian gravity acts effectively in damping the oscillations of the virial ratio. Overall, it is apparent that the system is in a “mixed” state, neither Newtonian ($K+W$ is not conserved) nor dMOND ($W$ is not constant). Properties of the collapse end-products --------------------------------------- ### Spatial and projected density profiles We found that none of the simulated systems, once virialized, is spherically symmetric. However, while the dMOND collapse end-product is triaxial ($c/a\sim0.2$, $b/a\sim0.4$), MOND and Newtonian end-products are oblate ($c/a \sim c/b \sim 0.5$). The ellipticity $\epsilon$ of the projected density distributions (measured for each of the principal projections) lies in the range $0.5-0.8$ in D, and $0 - 0.5$ in M1, M2, M4 and N. These values are consistent with those observed in real ellipticals, with the exception of $\epsilon_b$ in model D (see Table 2), which would correspond - if taken at face value - to an E8 galaxy. This result could simply be due to the procedure adopted to measure the ellipticity (see Section \[secic\]); however, we find it interesting that dMOND gravity could produce systems that would be unstable in Newtonian gravity. We remark that a similar result, in the different context of disk stability in MOND, has been obtained by Brada & Milgrom (1999). In order to describe the radial mass distribution of the final virialized systems, we fitted their angle-averaged density profiles with the $\gamma$-model (\[eqgamma\]) over the radial range $0.06\,\lsim\, r/\rhalf\,\lsim\, 10$. The best-fit $\gamma$ and $\rc$ for the final distribution of each simulation are reported in Table 2 together with their $1\sigma$ uncertainties (calculated from $\Delta \chi^2=2.30$ contours in the $\gamma-\rc$ plane). As is also apparent from Fig. \[fig3d\] (bottom), the Newtonian collapse produced the system with the steepest inner profile ($\gamma\sim1.7$), the dMOND end-product has an inner logarithmic slope close to zero, while MOND collapses led to intermediate cases, with $\gamma$ ranging from $\sim 1.2$ ($\kappa=1$) to $\sim1.5$ ($\kappa=4$).
We also note that the ratio $\rc/\rhalf$ (indicating the position of the knee in the density profile) increases systematically from dMOND to Newtonian simulations. The circularized projected density profiles of the end-products are analyzed as described in Section \[secic\]. The best-fit Sersic indices $m_a$, $m_b$ and $m_c$ (for projections along the axes $a$, $b$, and $c$, respectively) are reported in Table 2, together with the $1\sigma$ uncertainties corresponding to $\Delta \chi^2=1$; the relative uncertainties on the best-fit Sersic indices are in all cases smaller than 5 per cent and the average residuals between the data and the fits are typically $0.05 \lsim\langle\Delta{SB}\rangle \lsim 0.2$, where $SB\equiv-2.5 \log [I(R)/\Ie]$. The fitting radial range $0.1\,\lsim\, R/\Re\,\lsim\, 10$ is comparable with or larger than the typical ranges spanned by observations (e.g., see Bertin, Ciotti & Del Principe 2002). In agreement with previous investigations, we found that the Newtonian collapse produced a system well fitted by the de Vaucouleurs (1948) law. MOND collapses led to systems with Sersic index $m<4$, down to $m\sim2$ in the case of the dMOND collapse. Figure \[fig2d\] (bottom) shows the circularized (major-axis) projected density profiles for the end-products of simulations D, M1 and N together with their best-fit Sersic laws ($m=2.87$, $m=3.20$, and $m=4.21$, respectively), and the corresponding residuals. Curiously, NLC06 found that low-$m$ systems can be also obtained in Newtonian dissipationless collapses in the presence of a pre-existing dark-matter halo, with Sersic index value decreasing for increasing dark-to-luminous mass ratio. ### Kinematics {#seckin} We quantify the internal kinematics of the collapse end-products by measuring the angle-averaged radial and tangential components ($\sigma_r$ and $\sigma_{\rm t}$) of their velocity-dispersion tensor, and the anisotropy parameter $\beta(r) \equiv 1 -0.5\sigma^2_{\rm t}/\sigma^2_r$. These quantities are shown in Fig. \[fig3d\] for simulations D, M1, and N. We note that the $\sigma_r$ profile decreases more steeply in the Newtonian than in the MOND end-products, while it presents a hole in the inner regions of the dMOND system. In addition, the dMOND galaxy is radially anisotropic ($\beta\sim0.4$) even in the central regions, where models N and M1 are approximately isotropic ($\beta \sim 0.1$). All systems are strongly radially anisotropic for $r\gsim\rhalf$. For each model projection we computed the line-of-sight velocity dispersion $\sglos$, considering particles in a strip of width $\Re/4$ centered on the semi-major axis of the isophotal ellipse. The line-of-sight velocity-dispersion profiles (for the major-axis projection) are plotted in the top panels of Fig. \[fig2d\]. The Newtonian profile is very steep within $\Re$, while MOND and dMOND profiles are significantly flatter there. As well as $\sigma_r$, $\sglos$ decreases for decreasing radius in the inner region of model D. The kinematical properties of M2 and M4 are intermediate between those of M1 and of N: overall we find only weak dynamical non-homology among MOND end-products. The empty symbols in Fig. \[fig3d\] (right column) refer to a test Newtonian simulation run with the FVFPS treecode (with $4\times10^5$ particles). The structural and kinematical properties of the end-product of this simulation are clearly in good agreement with those of the end-product of simulation N (solid lines), which started from the same initial conditions. 
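For reference, the anisotropy profile defined above can be extracted from the particle data along the following lines. This is a sketch with illustrative function names; it assumes the centre found with the Power et al. (2003) procedure has already been subtracted, and it neglects mean streaming motions, a reasonable approximation for these non-rotating end-products.

```python
import numpy as np

def beta_profile(pos, vel, r_edges):
    """beta(r) = 1 - 0.5*sigma_t^2/sigma_r^2 in spherical radial shells.

    pos, vel : (N, 3) particle positions and velocities, centre at the origin.
    Mean streaming motions are neglected (no net rotation in these models).
    """
    r = np.linalg.norm(pos, axis=1)
    v_r = np.einsum('ij,ij->i', pos, vel) / r          # radial component
    v_t2 = np.einsum('ij,ij->i', vel, vel) - v_r**2    # tangential speed^2
    beta = np.full(len(r_edges) - 1, np.nan)
    for k in range(len(beta)):
        s = (r >= r_edges[k]) & (r < r_edges[k + 1])
        if s.sum() > 10:
            beta[k] = 1.0 - 0.5 * np.mean(v_t2[s]) / np.mean(v_r[s]**2)
    return beta

# isotropic test: random velocities should give beta ~ 0 in every shell
rng = np.random.default_rng(0)
pos = rng.normal(size=(100000, 3))
vel = rng.normal(size=(100000, 3))
print(beta_profile(pos, vel, np.array([0.0, 1.0, 2.0, 4.0])))
```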
Phase-space properties of MOND collapses {#secphsp} ---------------------------------------- To explore the phase-space evolution of the systems during the collapse and the following relaxation we consider time snapshots of the particles radial velocity ($\vr$) vs. radius as in Londrillo et al. (1991). In Fig. \[figphsp\] we plot five of these diagrams for simulations D, M1 and N: each plot shows the phase-space coordinates of 32000 particles randomly extracted from the corresponding simulation, and, as in Fig. \[figtime\], times are normalized to the dynamical time ${t_{\rm dyn}}$ (see Section \[secic\]). At time $t=0.5{t_{\rm dyn}}$ all particles are still collapsing in simulation N, while in MOND simulations a minority of particles have already crossed the center of mass, as revealed by the vertical distribution of points at $r\sim0$ in panels D and M1. At $t={t_{\rm dyn}}$ (time of the peak of $2 K/|W|$ in the three models), sharp shells in phase space are present, indicating that particles are moving in and out collectively and phase mixing has not taken place yet. At $t=4{t_{\rm dyn}}$ is already apparent that phase mixing is operating more efficiently in simulation N than in simulation M1, while there is very little phase mixing in the dMOND collapse. At significantly late times ($t=44{t_{\rm dyn}}$), when the three systems are almost virialized ($2 K/|W|\sim 1$; see Fig. \[figtime\]), phase mixing is complete in simulation N, but phase-space shells still survive in models M1 and D. Finally, the bottom panels show the phase-space diagrams at equilibrium ($t=150{t_{\rm dyn}}$), when phase mixing is completed also in the MOND and dMOND galaxies: note that the populated region in the ($r$,$\vr$) space is significantly different in MOND and in Newtonian gravity, consistently with the sharper decline of radial velocity dispersion in the Newtonian system. Thus, our results indicate that phase mixing is more effective in Newtonian gravity than in MOND[^3]. It is then interesting to estimate in physical units the phase-mixing timescales of MOND systems. From Table 1 it follows that ${t_{\rm *n}}\simeq4.7\,({r_*}/\kpc)^{3/2}({{M_*}}/10^{10} \Msun)^{-1/2}\Myr=29.8\kappa^{-3/4}({{M_*}}/10^{10} \Msun)^{1/4}\Myr$ for ${a_0}=1.2\times 10^{-10} {\rm m}\,{\rm s}^{-2}$. For example, in the case of model M1, adopting ${{M_*}}=10^{12} \Msun$ (and ${r_*}=\sqrt{G{{M_*}}/{a_0}}\simeq34\kpc$), shells in phase space are still apparent after $\sim8.3\Gyr$ ($\simeq44{t_{\rm dyn}}$). Simulation M1 might also be interpreted as representing a dwarf elliptical galaxy of, say, ${{M_*}}= 10^9 \Msun$ (and ${r_*}=\sqrt{G{{M_*}}/{a_0}}\simeq 1.1\kpc$). In this case $44{t_{\rm dyn}}\sim 1.5 \Gyr$. We conclude that in some MOND systems substructures in phase space can survive for significantly long times. In addition to the ($r$,$\vr$) diagram, another useful diagnostic to investigate phase-space properties of gravitational systems is the energy distribution $N(E)$ (i.e. the number of particles with energy per unit mass between $E$ and $E+dE$; e.g., Binney & Tremaine 1987; Trenti & Bertin 2005). Independently of the force law, the energy per unit mass of a particle orbiting at ${{\bf x}}$ with speed $v$ in a gravitational potential $\phi({{\bf x}})$ is $E=v^2/2+\phi({{\bf x}})$, and $E$ is constant if $\phi$ is time-independent. 
In Newtonian gravity $\phi$ is usually set to zero at infinity for finite-mass systems, so $E<0$ for bound particles; in MOND all particles are bound, independently of their velocity, because $\phi$ is confining, and all energies are admissible. This difference is reflected in Fig. \[fignde\], which plots the initial (top) and final (bottom) differential energy distributions for simulations D, M1, and N. Given that the particles are at rest at $t=0$, the initial $N(E)$ depends only on the structure of the gravitational potential, and is significantly different in the Newtonian and MOND cases. We also note that $N(E)$ is basically the same in models D and M1 at $t=0$, because model M1 is initially in dMOND regime. In accordance with previous studies, in the Newtonian case the final differential $N(E)$ is well represented by an exponential function over most of the populated energy range (Binney 1982; van Albada 1982; Ciotti 1991; Londrillo et al. 1991; NLC06). In contrast, in model D the final $N(E)$ decreases for increasing energy, qualitatively preserving its initial shape. In the case of simulation M1 it is apparent a dichotomy between a Newtonian part at lower energies (more bound particles), where $N(E)$ is exponential, and a dMOND part at higher energies, where the final $N(E)$ resembles the initial one. We interpret this result as another manifestation of a less effective phase-space reorganization in MOND than in Newtonian collapses. Comparison with the observed scaling relations of elliptical galaxies {#secsca} --------------------------------------------------------------------- It is not surprising that galaxy scaling relations represent an even stronger test for MOND than for Newtonian gravity, due to the absence of dark matter and the existence of the critical acceleration ${a_0}$ with a universal value in the former theory (e.g., see Milgrom 1984; Sanders 2000). For example, when interpreting the FP tilt in Newtonian gravity one can invoke a systematic and fine-tuned increase of the galaxy dark-to-luminous mass ratio with luminosity (e.g., Bender, Burstein & Faber 1992; Renzini & Ciotti 1993; Ciotti, Lanzoni & Renzini 1996), while in MOND the tilt should be related to the characteristic acceleration ${a_0}$. Note, however, that in MOND as well as in Newtonian gravity other important physical properties may help to explain the FP tilt, such as a systematic increase of radial orbital anisotropy with mass or a systematic structural weak homology (Bertin et al. 2002). Due to the relevance of the subject, we attempt here to derive some preliminary hints. In particular, for the first time, we can compare with the scaling relations of elliptical galaxies MOND models produced by a formation mechanism, yet as simple as the dissipationless collapse. In this Section we consider the end-products of simulations M1, M2, and M4. As already discussed in Section \[secsim\], each of the three systems corresponds to a family with constant ${{M_*}}/{r_*}^2$. This degeneracy is represented by the straight dotted lines in Fig. \[figfp\]a: all galaxies on the same dotted line have the same $\kappa$ value. This behavior is very different from the Newtonian case, in which the result of a N-body simulation can be placed anywhere in the space $\Re-{{M_*}}$, by arbitrarily choosing ${{M_*}}$ and ${r_*}$. For comparison with observations, the specific scaling laws represented in Fig. 
\[figfp\] (thick solid lines) are the near-infrared $z^*$-band Kormendy relation $\Re \propto {{M_*}}^{0.63}$ and FJ relation ${{M_*}}\propto\sigma_0^{3.92}$ (Bernardi et al. 2003a), and the edge-on FP relation in the same band $\log \Re = A \log \sigma_0 + B \log ({{M_*}}/ \Re^2) +const$ (with $A=1.49$, $B=- 0.75$; Bernardi et al. 2003b), under the assumption of luminosity-independent mass-to-light ratio. The physical properties of each model are determined as follows. First, for each model (identified by a value of $\kappa= G {{M_*}}/{r_*}^2{a_0}$) we measure the ratio $\Re/{r_*}$ (see Section \[secic\]), and so we obtain $\Re=\Re(\kappa,{{M_*}})$. This function, for fixed $\kappa$ and variable ${{M_*}}$, is a dotted line in Fig. \[figfp\]a. As apparent, only one pair $(\Re,{{M_*}})$ satisfies the Kormendy relation for each $\kappa$: in particular, we obtain that models M1, M2, and M4 have stellar masses $1.6\times 10^{12}$, $1.4\times 10^{11}$, and $10^{10}\Msun$, respectively (so lower $\kappa$ models correspond to higher mass systems). We are now in the position to obtain ${r_*}=\sqrt{G{{M_*}}/{a_0}\kappa}$ and ${v_{\rm *n}}=(\kappa{a_0}G{{M_*}})^{1/4}$, so we know the physical value of the projected central velocity dispersion, and we can place our models also in the FJ and FP planes. It is apparent that these two relations are not reproduced, in particular by massive galaxies. We note that this discrepancy cannot be fixed even when considering the mass interval allowed by the scatter in the Kormendy relation (thin solid lines in panel a), as revealed by the dotted lines in Fig. \[figfp\]bc, which are just the projections of the dotted lines in Fig. \[figfp\]a onto the planes of the edge-on FP[^4] and the FJ. Finally, Fig. \[figfp\]d plots the best-fit Sersic index $m$ of the models as a function of ${{M_*}}$. Observations show that elliptical galaxies are characterized by $m$ values increasing with size: $m\gsim 4$ for galaxies with $\Re\gsim3\kpc$ and $m\lsim 4$ for those with $\Re\lsim3\kpc$ (e.g. Caon, Capaccioli & d’Onofrio 1993). Our models behave in the opposite way, as $m$ decreases for increasing size (mass) of the system. So, while model M4 ($\Re\sim 1\kpc$, $m\sim3.4$) is consistent with observations, models M1 and M2 have significantly lower $m$ than real ellipticals of comparable size. However, this finding is not a peculiarity of MOND gravity: also in Newtonian gravity dissipationless collapse end-products with $m>4$ are obtained only for specific initial conditions (NLC06), while equal mass Newtonian mergings are able to produce high-$m$ systems (Nipoti et al. 2003). So far we have compared the results of our simulations with the scaling relations of high-surface brightness galaxies. However, it is well known that low surface-brightness hot stellar systems, such as dwarf ellipticals and dwarf spheroidals, have larger effective radii than predicted by the Kormendy relation (e.g., Bender et al. 1992; Capaccioli, Caon & d’Onofrio 1992; Graham & Guzmán 2003). In particular, dwarf ellipticals are characterized by effective surface densities comparable to those of the most luminous ellipticals, while their surface brightness profiles are characterized by Sersic indices smaller than $4$ (e.g. Caon et al. 1993; Trujillo, Graham, & Caon 2001). Dwarf spheroidals are the lowest surface-density stellar systems known, and typically have exponential ($m\sim1$) luminosity profiles (e.g. Mateo 1998). 
So, simulations M1 and D can be interpreted as modeling a dwarf elliptical and a dwarf spheroidal, respectively, and their end-products qualitatively reproduce the surface brightness profiles of the observed systems. As pointed out in Section \[seckin\], the velocity-dispersion profile of model D is rather flat, with a hole in the central regions: interestingly observations of dwarf spheroidals indicate that their velocity-dispersion profiles are also flat (e.g. Walker et al. 2006 and references therein). Discussion and conclusions {#secdis} ========================== In this paper we studied the dissipationless collapse in MOND by using a new three-dimensional particle-mesh N-body code, which solves the MOND field equation (\[eqMOND\]) exactly. For obvious computational reasons, we did not attempt a complete exploration of the parameter space, and we just presented results of a small set of numerical simulations, ranging from Newtonian to dMOND systems. The main results of the present study can be summarized as follows: - The intrinsic structural and kinematical properties of the MOND collapse end-products depend weakly on their characteristic surface density: lower surface-density systems have shallower inner density profile, flatter velocity-dispersion profile, and more radially anisotropic orbital distribution than higher surface-density systems. - The projected density profiles of the MOND collapse end-products are characterized by Sersic index $m$ lower than $4$, and decreasing for decreasing mean surface density. In particular, the end-product of the dMOND collapse, modeling a very low surface density system, is characterized by a Sersic index $m\sim 2$ and by a central hole in the projected velocity-dispersion profile. - We found impossible to satisfy simultaneously the observed Kormendy, Faber-Jackson and Fundamental Plane relations of elliptical galaxies with the MOND collapse end-products, under the assumption of a luminosity independent mass-to-light ratio. In other words, this point and the two points above show that, in the framework of dissipationless collapse, the presence of a characteristic acceleration is not sufficient to reproduce important observed properties of spheroids of different mass and surface density, such as their scaling relations and weak structural homology. - From a dynamical point of view we found that phase mixing is less effective (and stellar systems take longer to relax) in MOND than in Newtonian gravity. A natural question to ask is how the end-products of our simulations would be interpreted in the context of Newtonian gravity. Clearly, models D and N would represent dark-matter dominated and baryon dominated stellar systems, respectively. More interestingly, models M1, M2, and M4, once at equilibrium, would be characterized by a [*dividing radius*]{}, separating a baryon-dominated inner region (with accelerations higher than ${a_0}$) from a dark-matter dominated outer region (with accelerations lower than ${a_0}$). This radius is $\sim 1.1 \rhalf$ for M1, $\sim1.8 \rhalf$ for M2, and $\sim 2.7 \rhalf$ for M4. So, all these models would show little or no dark matter in their central regions. Remarkably, observational data indicate that there is at most as much dark matter as baryonic matter within the effective radius of ellipticals (e.g. Bertin et al. 1994; Cappellari et al. 2006 and references therein). The conclusions above have been drawn by considering only simulations starting from an inhomogeneous Plummer density distribution. 
To explore the dependence of these results on this specific choice, we ran also three simulations starting from a cold ($2K/|W|=0$), inhomogeneous and truncated density distribution $\rho(r)=C{{M_*}}/({r_*}^3 +r^3)$, where $C^{-1}\equiv 4\pi\ln(1+{r_{\rm t}}^3/{r_*}^3)/3$, ${{M_*}}$ is the total mass, and ${r_{\rm t}}=20{r_*}$ is the truncation radius. Inhomogeneities are introduced as described in Section \[secic\]. Note that in the external parts the new initial conditions are significantly flatter than a Plummer sphere. The three simulations are labeled $\Dprime$ (dMOND), $\Mprime$ (MOND with acceleration ratio[^5] $\kappa=20$) and $\Nprime$ (Newtonian). As in the case of Plummer initial conditions, also in these cases the final intrinsic and projected density distributions are well represented by $\gamma$-models and Sersic models, respectively. In analogy with model N, the Newtonian collapse $\Nprime$ produced the system with the steepest central density distribution (see Table 2). In addition, model $\Mprime$, when compared with the scaling laws of ellipticals (stars in Fig. \[figfp\]), follows the same trend as models M1, M2 and M4. The analysis of the time-evolution in phase-space of models $\Dprime$, $\Mprime$ and $\Nprime$ confirmed that mixing and relaxation processes are less effective in MOND than in Newtonian gravity. How do the presented results depend on the specific choice (equation \[eqmu\]) of the MOND interpolating function $\mu$? Recently, a few other interpolating functions have been proposed to better fit galactic rotation curves (Famaey & Binney 2005), and in the context of TeVeS (Bekenstein 2004; Zaho & Famaey 2006). It is reasonable to expect that the exact form of $\mu$ is not critical in a violent dynamical process such as dissipationless collapse. We verified that this is actually the case, by running an additional MOND simulation with the same initial conditions and parameter $\kappa$ as simulation M1, but adopting $\mu(y)=y/(1+y)$, as proposed by Famaey & Binney (2005). In fact, neither in the time-evolution nor in the structural and kinematical properties of the end-products we found significant differences between the two simulations. This result suggests that, in the context of structure formation in MOND, the crucial feature is the presence of a characteristic acceleration separating the two gravity regimes, while the details of the transition region are unimportant. Though the dissipationless collapse is a very simplistic model for galaxy formation, it is expected to describe reasonably well the last phase of “monolithic-like” galaxy formation, in which star formation is almost completed during the initial phases of the collapse. The importance of gas dissipation in the formation of elliptical galaxies is very well known, going back to the seminal works of Rees & Ostriker (1977) and White & Rees (1978; see also Ciotti, Lanzoni & Volonteri 2006, and references therein, for a discussion of the expected impact of gas dissipation on the scaling laws followed by elliptical galaxies). This aspect has been completely neglected in our exploration, and we are working on an hybrid (stars plus gas) version of the MOND code to explore quantitatively this issue. We also stress that the dissipationless collapse process catches the essence of violent relaxation, which is certainly relevant to the formation of spheroids even in more complicated scenarios, such as merging. 
For example, it is well known that in Newtonian dynamics systems with de Vaucouleurs profiles are produced by dissipationless merging of spheroids (e.g. White 1978) or disk galaxies (e.g. Barnes 1992) as well as by dissipationless collapses (van Albada 1982). Merging simulations in MOND have not been performed so far, and a relevant and still open question is how efficient merging is in MOND, in which the important effect of dark matter halos is missing, and galaxies are expected to collide at higher speed than in Newtonian gravity (Binney 2004; Sellwood 2004). Our results, indicating that relaxation takes longer in MOND than in Newtonian gravity, go in the direction of making merging time scales even longer in MOND; on the other hand, analytical estimates seem to indicate shorter dynamical friction time-scales in MOND than in Newtonian gravity (Ciotti & Binney 2004). So, the next application of our code will be the study of galaxy merging in MOND. We are grateful to James Binney and Scott Tremaine for helpful discussions and to the anonymous referee for useful comments. This work was partially supported by the MIUR grant CoFin2004. The volume-limited energy-balance equation in MOND {#appetot} ================================================== In this Appendix we derive a useful volume-limited integral relation representing energy conservation in MOND, well suited to test numerical simulations. The total (ordered and random) kinetic energy per unit volume of a continuous distribution with density $\rho$ and velocity field ${{\bf u}}$ is $$k={\rho\over 2}(||{{\bf u}}||^2+\Tr{\sigma^2_{ij}}),$$ where $\sigma^2_{ij}$ is the velocity-dispersion tensor. In the present case the energy balance equation is (e.g. Ciotti 2000) $${d \over d t}\int_{V(t)}k\dxcube=-\int_{V(t)}\rho {{\bf u}}\cdot\nabla\phi\dxcube, \label{eqbalone}$$ where the integral in the r.h.s. is the work per unit time done by mechanical forces. By application of the Reynolds transport theorem and using the mass continuity equation we obtain $${\partial \over \partial t}(k+\rho\phi)+{\nabla\cdot}[(k+\rho\phi){{\bf u}}]=\rho{\partial \phi \over \partial t}. \label{eqbala}$$ When $\phi$ is the MOND gravitational potential, $\rho$ can be eliminated using equation (\[eqMOND\]), so $$4 \pi G \rho{\partial \phi \over \partial t}={\nabla\cdot}(\mu \nabla\phi){\partial \phi \over \partial t}={\nabla\cdot}\left(\mu \nabla\phi {\partial \phi \over \partial t}\right)-{{a_0}^2\over 2}{\partial \over \partial t} \left[\mathcal{F}\left({||\nabla\phi||\over {a_0}}\right)\right],$$ where $\mathcal{F}$ is defined in equation (\[eqeffe\]). Thus, equation (\[eqbala\]) can be written as $${\partial \over \partial t}\left[k+\rho\phi+{{a_0}^2\over 8 \pi G}\mathcal{F}\left({||\nabla\phi||\over {a_0}}\right)\right]+{\nabla\cdot}\left[(k+\rho\phi){{\bf u}}-{\mu\nabla\phi\over 4\pi G}{\partial \phi \over \partial t}\right]=0. \label{eqbalafin}$$ By integration over a fixed control volume $V_0$ enclosing all the system mass one obtains equation (\[eqbaltext\]). The virial trace $W$ in deep-MOND systems of finite mass {#appw} ======================================================== Here we prove that $W=-(2/3)\sqrt{G{a_0}M_*^3}$ for any dMOND system of finite mass ${{M_*}}$. 
Eliminating $\rho$ from equation (\[eqwij\]) by using equation (\[eqdMOND\]), and considering the trace of the resulting expression one finds $$W=-{1 \over 4\pi G {a_0}}\int \Dcal[\phi] \nabla\cdot\left({\Vert\nabla\phi\Vert}\nabla\phi\right)\dxcube, \label{eqwdiv}$$ where we define the operator $\Dcal\equiv<{{\bf x}},\nabla>$. The remarkable fact behind the proof is that the integrand above can be written as the divergence of a vector field, so only contributions from $r \to \infty$ are important. We will then use the spherically symmetric asymptotic behavior of dMOND solutions for $r \to \infty$ (BM) $${{\bf g}}=-\nabla \phi \sim-{\sqrt{G {{M_*}}{a_0}}\over r} \hat{\bf e}_r \label{eqasym}$$ and Gauss theorem to evaluate $W$. [**Theorem**]{}. For a generic potential the following identity holds: $$\Dcal[\phi]\nabla \cdot({\Vert\nabla\phi\Vert}\nabla\phi)=\nabla \cdot \left( \Dcal[\phi]{\Vert\nabla\phi\Vert}\nabla \phi - {{{\bf x}}{\Vert\nabla\phi\Vert}^3 \over 3}\right). \label{eqtheo}$$ [*Proof*]{}. From standard vector analysis (e.g. Jackson 1999) it follows that $$\Dcal[\phi] \nabla\cdot\left({\Vert\nabla\phi\Vert}\nabla\phi\right)= \nabla\cdot\left(\Dcal[\phi]{\Vert\nabla\phi\Vert}\nabla\phi\right)- {\Vert\nabla\phi\Vert}<\nabla\phi,\nabla\Dcal[\phi]>, \label{eqxnabla}$$ and $${\Vert\nabla\phi\Vert}<\nabla\phi,\nabla\Dcal[\phi]>={{\nabla\cdot}\left( {{\bf x}}{\Vert\nabla\phi\Vert}^3\right)\over 3}. \label{eqdivthird}$$ Identity (\[eqdivthird\]) follows from the expansion $\nabla \Dcal[\phi]=\nabla \phi+\Dcal[\nabla \phi]$ as $${\Vert\nabla\phi\Vert}^3+{\Vert\nabla\phi\Vert}<\nabla\phi,\Dcal[\nabla\phi]>={\Vert\nabla\phi\Vert}^3+{ \Dcal\left[{\Vert\nabla\phi\Vert}^3\right]\over 3}={{\nabla\cdot}\left({{\bf x}}{\Vert\nabla\phi\Vert}^3\right)\over 3}.$$ Combining equations (\[eqxnabla\]) and (\[eqdivthird\]) completes the proof of equation (\[eqtheo\]). We now transform the volume integral (\[eqwdiv\]) in a surface integral over a sphere of radius $r$, and we consider the limit for $r \to \infty$ together with the asymptotic relation (\[eqasym\]), obtaining $$W=-{1 \over 4\pi G {a_0}} \lim_{r\to\infty} \int_{4\pi} {2 \over 3} r^3 g^2 d\Omega =-{2 \over 3}\sqrt{G{a_0}M_*^3}.$$ Scaling of the equations {#appscal} ======================== Given a generic density distribution $\rho$, and the mass and length units ${{M_*}}$ and ${r_*}$, we define the dimensionless quantities $\txv\equiv {{\bf x}}/{r_*}$, and $\trho\equiv\rho{r_*}^3/{{M_*}}$. From equation (\[eqgv\]) the equation of motion for a test particle can be written in dimensionless form as $$\label{eqmotion} {d^2 \txv \over d \tilde{t}^2}=-{{\phi_*}{t_*}^2 \over {r_*}^2}\tnabla \tphi,$$ where ${\phi_*}$ and ${t_*}$ are for the moment two unspecified scaling constants, $\tphi=\phi/{\phi_*}$ and $\tilde{t}=t/{t_*}$, and the dimensionless gradient operator is $\tnabla = {r_*}\nabla $. In all of our simulations we define ${t_*}\equiv{r_*}/\sqrt{{\phi_*}}$, so that the scaling factor in equation (\[eqmotion\]) is unity, while ${\phi_*}$ is specified case-by-case from the field equation as follows. In Newtonian gravity the Poisson equation (\[eqPoisson\]) can be written as $$\tnabla^2\tphi=4 \pi{G {{M_*}}\over {r_*}{\phi_*}} \trho:$$ we fix ${\phi_*}= G {{M_*}}/{r_*}$, so ${t_*}=\sqrt{{r_*}^3/G {{M_*}}}\equiv{t_{\rm *n}}$. 
The dMOND field equation (\[eqdMOND\]) in dimensionless form writes $$\tnabla\cdot( ||\tnabla\tphi||\tnabla\tphi)=4 \pi {G {{M_*}}{a_0}\over {\phi_*}^2} \trho,$$ so the natural choice is ${\phi_*}=\sqrt{G {{M_*}}{a_0}}$, and ${t_*}={r_*}(G{{M_*}}{a_0})^{-1/4}\equiv {t_{\rm *d}}$. Finally, the MOND field equation (\[eqMOND\]) in dimensionless form is $$\tnabla\cdot\left[\mu(\kappa ||\tnabla\tphi||) \tnabla\tphi\right]=4 \pi {G {{M_*}}\over {r_*}{\phi_*}} \trho ,$$ where $\kappa\equiv {\phi_*}/{r_*}{a_0}$. In this case, as in the Newtonian case, ${\phi_*}=G {{M_*}}/{r_*}$, so ${t_*}={t_{\rm *n}}$ and $\kappa= G {{M_*}}/{r_*}^2{a_0}$. Aguilar, L.A., & Merritt, D. 1990, ApJ, 354, 33 Barnes, J. 1992, ApJ, 393, 484 Barnes, J.E., & Hut, P. 1986, Nature, 324, 446 Bekenstein, J. 2004, Phys. Rev. D, 70, 083509 Bekenstein, J., & Milgrom, M. 1984, , 286, 7 (BM) Bender, R., Burstein, D., & Faber, S.M. 1992, ApJ, 399, 462 Bernardi M., et al. 2003a, AJ, 125, 1849 Bernardi M., et al. 2003b, AJ, 125, 1866 Bertin, G., Ciotti, L., & Del Principe, M. 2002, A&A, 386, 1491 Bertin, G., et al. 1994, A&A, 292, 381 Binney, J. 1982, MNRAS, 200, 951 Binney, J. 2004, in Ryder, S.D., Pisano, D.J., Walker, M.A., Freeman, K.C., eds, IAU Symp. 220, Dark Matter in Galaxies. Astron. Soc. Pac., San Francisco, p. 3 Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton University Press) Brada, R., & Milgrom, M. 1995, , 276, 453 Brada, R., & Milgrom, M. 1999, , 519, 590 Brada, R., & Milgrom, M. 2000, , 541, 556 Caon, N., Capaccioli, M., & D’Onofrio, M. 1993, MNRAS, 265, 1013 Capaccioli, M., Caon N., & D’Onofrio, M. 1992, MNRAS, 259, 323 Cappellari M., et al., 2006, MNRAS, 366, 1126 Christodoulou, D.M. 1991, ApJ, 372, 471 Ciotti, L. 1991, A&A, 249, 99 Ciotti, L. 2000, Lecture Notes on Stellar Dynamics (Pisa: Scuola Normale Superiore editor) Ciotti, L., & Bertin, G. 1999, A&A, 352, 447 Ciotti, L., & Binney, J. 2004, MNRAS, 351, 285 Ciotti, L., Lanzoni, B., & Renzini, A. 1996, MNRAS, 282, 1 Ciotti, L., Lanzoni, B., & Volonteri, M. 2006, , in press (astro-ph/0611328) Ciotti, L., Londrillo, P., & Nipoti, C. 2006, , 640, 741 (CLN) Ciotti, L., Nipoti, C., & Londrillo, P. 2007, in proceedings of the International Workshop “Collective phenomena in macroscopic systems” (Como, Italy, December 4-6, 2006) de Vaucouleurs, G. 1948, Ann. d’Astroph., 11, 247 Dehnen, W. 1993, MNRAS, 265, 250 Dehnen, W. 2002, Journal of Computational Physics, 179, 27 Djorgovski, S., & Davis, M. 1987, ApJ, 313, 59 Dressler, A., Lynden-Bell, D., Burstein, D., Davies, R.L., Faber, S.M., Terlevich, R., & Wegner, G. 1987, ApJ, 313, 42 Faber, S.M., & Jackson, R.E. 1976, ApJ, 204, 668 Famaey, B., & Binney, J. 2005, , 363, 603 Felten, J.E. 1984, , 286, 3 Gerhard, O.E., & Spergel, D.N. 1992, , 397, 38 Graham, A.W., & Guzmán, R. 2003, AJ, 125, 2936 Gunn, J.E., & Gott, J.R. 1972, ApJ, 176, 1 Hockney, R., & Eastwood, J. 1988, Computer Simulation Using Particles (Bristol: Hilger) Jackson, J.D. 1999, Classical Electrodynamics (New York: New John Wiley & Sons, Inc.) Knebe, A., & Gibson, B.K. 2004, , 347, 1055 Kormendy J., 1977, ApJ, 295, 73 Londrillo, P., Messina, A., & Stiavelli, M. 1991, MNRAS, 250, 54 Londrillo, P., Nipoti, C., & Ciotti, L. 2003, In “Computational astrophysics in Italy: methods and tools”, Roberto Capuzzo-Dolcetta ed., Mem. S.A.It. Supplement, vol. 1, p. 18 Lynden-Bell, D. 1967, , 136, 101 Mateo, M.L. 1998, , 36, 435 Milgrom, M. 1983, , 270, 365 Milgrom, M. 1984, , 287, 571 Milgrom, M. 2002, New. Astron. 
Rev., 46, 741 Nipoti, C., Londrillo, P., & Ciotti, L. 2002, MNRAS, 332, 901 Nipoti, C., Londrillo, P., & Ciotti, L. 2003, MNRAS, 342, 501 Nipoti, C., Londrillo, P., & Ciotti, L. 2006, MNRAS, 370, 681 (NLC06) Nusser, A. 2002, MNRAS, 331, 909 Nusser, A., & Pointecouteau, E. 2006, MNRAS, 366, 96 Plummer, H.C. 1911, MNRAS, 71, 460 Power, C., Navarro, J.F., Jenkins, A., Frenk, C.S., White, S.D.M., Springel, V., Stadel, J., & Quinn, T. 2003, MNRAS, 338, 14 Rees, M.J., & Ostriker, J.P. 1977, MNRAS, 179, 541 Renzini A., Ciotti L., 1993 ApJ, 416, L49 Sanders, R.H. 2000, , 313, 767 Sanders, R.H., & McGaugh, S.S. 2002, , 40, 263 Sellwood, J. 2004, in Ryder, S.D., Pisano, D.J., Walker, M.A., Freeman, K.C., eds, IAU Symp. 220, Dark Matter in Galaxies. Astron. Soc. Pac., San Francisco, p. 27 Sersic, J.L. 1968, Atlas de galaxias australes. Observatorio Astronomico, Cordoba Shen, S., Mo, H.J., White, S.D.M., Blanton, M.R., Kauffmann, G., Voges, W., Brinkmann, J., & Csabai, I. 2003, MNRAS, 343, 978 Stachniewicz, S., & Kutschera, M. 2005, MNRAS, 362, 89 Tiret, O., & Combes, F. 2007, preprint (astro-ph/0701011) Tremaine, S., Richstone, D.O., Yong-Ik, B., Dressler, A., Faber, S.M., Grillmair, C., Kormendy, J., & Laurer, T.R. 1994, AJ, 107, 634 Trenti, M., & Bertin, G. 2005, A&A, 429, 161 Trenti, M., Bertin, G., & van Albada, T.S. 2005, A&A, 433, 57 Trujillo, I., Graham, A.W., & Caon, N. 2001, MNRAS, 326, 869 Udry, S. 1993, A&A, 268, 35 van Albada, T.S. 1982, MNRAS, 201, 939 Walker, M.G., Mateo, M., Olszewski, E.W., Pal, J.K., Sen, B., & Woodroofe, M. 2006, , 642, L41 White, S.D.M 1978 MNRAS, 184, 185 White, S.D.M, & Rees, M.J., 1978, MNRAS, 183, 341 Zaho, H., & Famaey, B. 2006, ApJ, 638, L9 [^1]: Cosmological N-body simulations in the context of a relativistic MOND theory such as TeVeS have not been performed so far. [^2]: As an experiment we also ran a simulation, with the same initial conditions and parameter $\kappa$ as M1, in which the force was calculated from equation (\[eqcurl\]) imposing ${{\bf S}}=0$. In this simulation the linear and angular momentum are strongly not conserved: for instance, the center of mass is already displaced by $\sim7{r_*}$ after $\sim30{t_{\rm dyn}}$. [^3]: Ciotti, Nipoti & Londrillo (2007) found similar results in “ad hoc” numerical simulations in which the angular force components were frozen to zero, so that the evolution was driven by radial forces only. In fact, while phase mixing is less effective both in MOND and in Newtonian simulations with respect to the simulations here reported, the phase mixing time scale in MOND is still considerably longer than in Newtonian gravity. [^4]: In Fig. \[figfp\]b the dotted lines are nearly coincident because 1) the models are almost homologous, and 2) the variable in abscissa is independent of $\kappa$, being the FP coefficients $A\sim-2B$ in this case. [^5]: This value of $\kappa$ is not directly comparable with those in simulations M1, M2 and M4, because of the different role of ${r_*}$ in the corresponding initial distributions.
1
--- abstract: | We formulate the Hubbard model for the simple cubic lattice in the representation of interacting dimers applying the exact solution of the dimer problem. By eliminating from the considerations unoccupied dimer energy levels in the large $U$ limit (it is the only assumption) we analytically derive the Hubbard Hamiltonian for the dimer (analogous to the well-known $t-J$ model), as well as the Hubbard Hamiltonian for the crystal as a whole by means of the projection technique. Using this approach we can better visualize the complexity of the model, so deeply hidden in its original form. The resulting Hamiltonian is a mixture of many multiple ferromagnetic, antiferromagnetic and more exotic interactions competing with one another. The interplay between different competitive interactions has a decisive influence on the resulting thermodynamic properties of the model, depending on temperature, model parameters and the assumed average number of electrons per lattice site. A simplified form of the derived Hamiltonian can be obtained by using, in addition, a Taylor expansion with respect to $x=\frac{t}{U}$ ($t$ is the hopping integral between nearest neighbours, $U$ the Coulomb repulsion). As an example, we present the expansion including all terms proportional to $t$ and to $\frac{t^2}U$ and we reproduce the exact form of the Hubbard Hamiltonian in the limit $U\rightarrow \infty $. The nonperturbative approach presented in this paper can, in principle, be applied to clusters of any size, as well as to other types of model Hamiltonians. author: - | M. Matlak[^1], J. Aksamit, B. Grabiec\ Institute of Physics, Silesian University,\ 4 Uniwersytecka, PL-40-007 Katowice, Poland\ and\ W. Nolting\ Institute of Physics, Humboldt University,\ 110 Invalidenstr., D-10115 Berlin, Germany title: 'Hubbard Hamiltonian in the dimer representation. Large $U$ limit' --- Introduction ============ The single-band Hubbard model, Ref. \[1\], plays a role in solid state physics similar to that of the hydrogen atom in atomic physics. This explains the continuing interest in its properties. The Hubbard Hamiltonian reads $$H=\sum\limits_{i,j,\sigma }t_{i,j}c_{i,\sigma }^{+}c_{j,\sigma }+U\sum\limits_in_{i,\uparrow }n_{i,\downarrow }.$$ Here $c_{i,\sigma }$ $(c_{i,\sigma }^{+})$ are annihilation (creation) operators of an electron with spin $\sigma =\uparrow $, $\downarrow $ in the Wannier representation at the lattice site $\mathbf{R}_i$ and $n_{i,\sigma }=c_{i,\sigma }^{+}c_{i,\sigma }$. Moreover, $t_{i,j}$ is the hopping integral between different lattice sites $i$ and $j$ ($t_{i,i}=0)$ and $U$ is the intrasite Coulomb repulsion. The Bloch conduction band energy $\varepsilon _{\mathbf{k}}$ is given by $$\varepsilon _{\mathbf{k}}=\sum\limits_{i-j}t_{i,j}e^{-i\mathbf{k\cdot (R}_i-\mathbf{R}_j)}.$$ In the following we restrict ourselves to the simple cubic (sc) lattice and assume that $$t_{i,j}=\left\{ \begin{array}{ll} -t & i,j\mbox{-nearest neighbours} \\ 0 & \mbox{otherwise}. \end{array} \right.$$ Then $$\varepsilon _{\mathbf{k}}=-2t(\cos k_xa+\cos k_ya+\cos k_za).$$ The hopping parameter $t$ is simply related to the bandwidth $W$ of the Bloch band (2), e.g. $W=12t$ for the sc lattice. The interplay between the two model parameters, $W$ and $U,$ is decisive for the properties of the model, resulting in strong electron correlations that lead to band magnetism (see e.g. Refs \[2\], \[3\] for a review), insulator-to-metal transition (see e.g.
Refs \[3\], \[4\] and papers cited therein) and high-$T_c$ superconductivity (negative $U$-model, see e.g. Ref. \[5\]). The Hubbard model also very often plays the role of a submodel for many other, more complicated models (e.g. the Anderson model (see Ref. \[6\]), the s-f model (see Refs \[7,8\]) and so on). Especially interesting, but difficult to handle, are the properties of the Hubbard model in the regime of the model parameters where the bandwidth $W$ is comparable to the Coulomb repulsion $U$. In a large number of papers \[9-30\] the authors tried to solve this model using many sophisticated methods. An exact solution, however, does not exist to date. Many authors tried to change the situation in this field by introducing the expansion parameter $x=\frac{t}{U}$ $(x\ll 1)$. This idea (cf Refs \[31,32\]) consists in replacing the “difficult physics” connected with the model by “difficult mathematics” obtained by a laborious expansion with respect to $x.$ Different methods connected with this problem have been applied, e.g. the perturbation expansion (see Refs \[18\], \[21\]), canonical transformation (see Refs \[9\], \[12\], \[17\], \[22\], \[23\]) or ab initio derivations (see Refs \[25\], \[28-30\]). Most of the methods leading to the $t-J$ model (or generalized $t-J$ model) are also summarized in Refs \[27\], \[4\]. The goal of the present paper is to show that we can take another, nonperturbative way. In the first step we divide the crystal lattice into a set of interacting dimers. In other words, we can rewrite the Hubbard Hamiltonian (1) for the sc lattice (see Fig. 1) in the equivalent form [^2]: $$\begin{array}{ll} H= & \sum\limits_IH_I^d-t\sum\limits_{I,\sigma }(c_{I,2,\sigma }^{+}c_{I+1,1,\sigma }+c_{I+1,1,\sigma }^{+}c_{I,2,\sigma }) \\ & -t\sum\limits_{I\neq J,\sigma }(c_{I,1,\sigma }^{+}c_{J,1,\sigma }+c_{I,2,\sigma }^{+}c_{J,2,\sigma }) \end{array}$$ where $$H_I^d=-t\sum\limits_{\sigma} (c_{I,1,\sigma }^{+}c_{I,2,\sigma }+c_{I,2,\sigma }^{+}c_{I,1,\sigma })+U(n_{I,1,\uparrow }n_{I,1,\downarrow }+n_{I,2,\uparrow }n_{I,2,\downarrow }).$$ The indices $I$ and $J$ enumerate the dimers and $H_I^d$ is the dimer Hamiltonian. The second term in (4) describes the hopping between nearest dimers in the $z$-direction, the third one represents the hopping between nearest dimers (in the $(y,z)$-plane) and between different dimer planes (see Fig. 1). The equivalent form (4) of the Hubbard Hamiltonian is especially suitable because in the following we can apply the exact solution of the dimer problem (5). In the next step we express the construction operators $a_{\sigma}$ ($a_{\sigma}^{+}$) as linear combinations of the transition operators between different dimer states. This, in turn, allows us to find the exact dimer representation of the whole Hamiltonian (4). The space of the dimer eigenvectors consists, however, of two subspaces. One of them corresponds to the lowest lying energy levels, the second one contains the levels with energies which in the large $U$ limit take on large, positive values. In a reasonable temperature range these levels cannot be occupied by electrons and therefore we can exclude them from further considerations. It is interesting to note that this approach, without any additional assumptions, applied to the dimer Hamiltonian (5) produces the analogue of the well-known $t-J$ model (cf Refs \[31\], \[32\]). A similar approach can be applied to the Hamiltonian (4), describing the crystal as a whole.
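The bookkeeping behind the decomposition (4) can be verified with a few lines of code. The sketch below is an added illustration, not part of the paper; it assumes, consistently with the description of the second term of (4), that the dimers are vertical pairs of sites along the $z$-axis, and checks that every nearest-neighbour bond of a periodic sc lattice is counted exactly once: inside a dimer (the bond of (5)), between vertically adjacent dimers (second term of (4)), or between dimers with equal internal index (third term of (4)).

```python
# Added consistency check (not from the paper) of the dimer decomposition (4).
# Assumption: dimers are vertical pairs of sites along z (site 1: even z,
# site 2: odd z) on a periodic sc lattice of even linear size L.
import itertools

L = 4

def dimer_index(x, y, z):
    return (x, y, z // 2), 1 + (z % 2)   # (dimer label I, internal index alpha)

counts = {"intra-dimer": 0, "inter-dimer, z": 0, "inter-dimer, same alpha": 0}
for x, y, z in itertools.product(range(L), repeat=3):
    for dx, dy, dz in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:       # each bond once
        I1, a1 = dimer_index(x, y, z)
        I2, a2 = dimer_index((x + dx) % L, (y + dy) % L, (z + dz) % L)
        if I1 == I2:
            counts["intra-dimer"] += 1                         # the bond of (5)
        elif dz == 1:
            counts["inter-dimer, z"] += 1                      # second term of (4)
        else:
            assert a1 == a2                                    # third term of (4)
            counts["inter-dimer, same alpha"] += 1

N = L ** 3
print(counts)                          # N/2, N/2 and 2N bonds, respectively
print(sum(counts.values()) == 3 * N)   # all 3N nearest-neighbour bonds covered
```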
By eliminating from the considerations the unoccupied dimer energy levels in the large $U$ limit (it is the only assumption), with the use of the projection technique we can find in a straightforward way the final form of the Hamiltonian in this limit without using a perturbation expansion or a canonical transformation. The resulting Hamiltonian, obtained in this way, is very complicated. It, however, explicitly shows all possible magnetic, nonmagnetic or more complex competitive interaction processes very deeply hidden in the original Hamiltonian written in the site representation (1). It is the aim of this paper precisely to reveal these important but normally invisible elementary interactions. One important advantage might be that a given approach to the unsolvable Hubbard problem can be tested with respect to the types of neglected interaction processes. Besides, the new and straightforward method presented in this paper can easily be adapted to clusters of any size and also to other types of model Hamiltonians. The paper is organized as follows. In Sec. 2 we find the exact solution of the Hubbard dimer (5) and give the exact expressions for the annihilation operators $c_{I,1(2),\sigma }$ in the dimer representation. In Sec. 3 we derive the Hubbard dimer Hamiltonian (5) in the large $U$ limit. With the use of the projection technique onto the lowest lying dimer states we derive in Sec. 4 the Hubbard Hamiltonian for the crystal in this limit (the central formula of this paper). A simplified version of the derived Hamiltonian, obtained with the use of a Taylor expansion with respect to $x=\frac{t}{U}$ $(x\ll 1)$, and the case $U\rightarrow \infty $ are discussed in Sec. 5. Exact solution of the Hubbard dimer =================================== The eigenvalues and eigenvectors of the dimer Hamiltonian (5) can be found by using a standard procedure (cf Refs \[33-35\]). We start with the vectors $|n_{1,\uparrow },n_{1,\downarrow };n_{2,\uparrow },n_{2,\downarrow }\rangle $ $(n_{i,\sigma }=0,1;$ $i=1,2$; $\sigma =\uparrow ,\downarrow )$ which constitute the Fock basis of the single-dimer space of states: $$\begin{array}{lll} \begin{array}{l} \begin{array}{l} |0\rangle =|0,0;0,0\rangle , \end{array} \\ \\ \begin{array}{l} |11\rangle =|1,0;0,0\rangle , \\ |12\rangle =|0,1;0,0\rangle , \\ |13\rangle =|0,0;1,0\rangle , \\ |14\rangle =|0,0;0,1\rangle , \end{array} \end{array} & \begin{array}{l} |21\rangle =|1,1;0,0\rangle , \\ |22\rangle =|1,0;1,0\rangle , \\ |23\rangle =|1,0;0,1\rangle , \\ |24\rangle =|0,1;1,0\rangle , \\ |25\rangle =|0,1;0,1\rangle , \\ |26\rangle =|0,0;1,1\rangle , \end{array} & \begin{array}{l} \begin{array}{l} |31\rangle =|0,1;1,1\rangle , \\ |32\rangle =|1,0;1,1\rangle , \\ |33\rangle =|1,1;0,1\rangle , \\ |34\rangle =|1,1;1,0\rangle , \end{array} \\ \\ \begin{array}{l} |4\rangle =|1,1;1,1\rangle . \end{array} \end{array} \end{array}$$ Starting with the vectors (6) we easily get the eigenvalues $E_\alpha $ and the eigenvectors $|E_{\alpha} \rangle $ of the Hubbard dimer (5). Here, we only mention that the space of 16 eigenvectors $|E_{\alpha} \rangle $ can be divided into subspaces labelled by $n=\sum\limits_{i,\sigma }n_{i,\sigma }$. The subspaces belonging to $n=0$ and $n=4$ are 1-dimensional ($E_0=0,$ $|E_0\rangle =|0\rangle ;$ $E_4=2U,$ $|E_4\rangle =|4\rangle )$. There are, however, two 2-dimensional subspaces corresponding to $n=1$ (as e.g. $E_{11}=-t,$ $|E_{11}\rangle =\frac 1{\sqrt{2}}(|11\rangle +|13\rangle ),$ etc.)
and two 2-dimensional subspaces corresponding to $n=3$ (as e.g. $E_{31}=t+U,$ $|E_{31}\rangle =\frac 1{\sqrt{2}}(|31\rangle +|33\rangle ),$ etc.). The subspace belonging to $n=2$ consists of one 4-dimensional subspace (as e.g. $E_{21}=0,$ $|E_{21}\rangle =\frac 1{\sqrt{2}}(|23\rangle +|24\rangle ),$ etc.) and two 1-dimensional subspaces ($E_{25}=E_{26}=0,$ $|E_{25}\rangle =|22\rangle ,$ $|E_{26}\rangle =|25\rangle $). The complete set of the eigenvalues $E_{\alpha} $ and eigenvectors $|E_{\alpha} \rangle $ of the Hubbard dimer is given in Appendix A. It allows us to express the dimer Hamiltonian (5) in the equivalent form $$H^d=\sum\limits_{\alpha} E_{\alpha} |E_{\alpha} \rangle \langle E_{\alpha} |.$$ The next important step in our calculations is to express the annihilation operators $c_{1(2),\sigma }$ as linear combinations of transition operators between dimer states, $P_{\alpha ,\beta }=|E_{\alpha} \rangle \langle E_{\beta} |.$ This can easily be done by acting with the annihilation operators on the basis vectors (6) and using the relations reciprocal to (A.1). In this way we obtain the dimer representation of the annihilation (creation) operators, given in Appendix B. We can then insert the operators $c_{I,1(2),\sigma }$ $(c_{I,1(2),\sigma }^{+})$, prepared in this way, into the Hubbard Hamiltonian (4) to obtain the Hubbard model in the dimer representation for the sc lattice. This representation will be used later to derive the Hubbard Hamiltonian in the large $U$ limit (see Sec. 4). Hubbard dimer for large $U$ =========================== Looking at the eigenvalues (A.1) of the Hubbard dimer it is easy to see that for large $U$ $(U\gg t)$ the energies $E_{\alpha }=E_{22},E_{23},E_{31},E_{32},E_{33},E_{34}$ and $E_4$ take on large, positive values, producing in the partition function terms which can practically be neglected. This means that the corresponding levels are not occupied in any reasonable temperature range ($1$ eV $\sim 11604.5$ K) and can be excluded from our considerations.
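These statements about the dimer spectrum are easy to verify numerically. The following sketch is an added check, not part of the paper; it assumes only NumPy, builds the 16-state Fock basis (6) and the dimer Hamiltonian (5), compares the eigenvalues with the analytic set of Appendix A, and counts the seven levels of order $U$ that are excluded below.

```python
# Added numerical check (not part of the paper, assumes NumPy): exact
# diagonalization of the Hubbard dimer, Eq. (5), for U >> t.
import numpy as np

def fermion_ops(n_modes=4):
    """Annihilation operators in the occupation-number basis; mode order
    (1,up), (1,down), (2,up), (2,down), Jordan-Wigner sign convention."""
    dim = 2 ** n_modes
    ops = []
    for m in range(n_modes):
        c = np.zeros((dim, dim))
        for s in range(dim):
            if (s >> m) & 1:                                   # mode m occupied
                c[s ^ (1 << m), s] = (-1) ** bin(s & ((1 << m) - 1)).count("1")
        ops.append(c)
    return ops

t, U = 1.0, 20.0
c1u, c1d, c2u, c2d = fermion_ops()
n = lambda c: c.T @ c
H = (-t * (c1u.T @ c2u + c2u.T @ c1u + c1d.T @ c2d + c2d.T @ c1d)
     + U * (n(c1u) @ n(c1d) + n(c2u) @ n(c2d)))                # Eq. (5)

evals = np.sort(np.linalg.eigvalsh(H))
C = np.sqrt((U / 2) ** 2 + 4 * t ** 2)
analytic = np.sort([0.0,                                       # n = 0
                    -t, t, -t, t,                              # n = 1
                    0.0, U, U / 2 + C, U / 2 - C, 0.0, 0.0,    # n = 2
                    t + U, -t + U, t + U, -t + U,              # n = 3
                    2 * U])                                    # n = 4
print(np.allclose(evals, analytic))            # True
print(int(np.sum(evals > 0.4 * U)))            # 7 levels of order U
```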
Therefore the dimer Hamiltonian, given by (7), reduces to $$\overline{H}^d=-tP_{11,11}+tP_{12,12}-tP_{13,13}+tP_{14,14}+\left( -C+\frac U2\right) P_{24,24}.$$ To bring this expression into a compact (second quantization) form we introduce the Hubbard operators $$\begin{aligned} a_{i,\sigma } &=&c_{i,\sigma }(1-n_{i,-\sigma }), \\ b_{i,\sigma } &=&c_{i,\sigma }n_{i,-\sigma }\end{aligned}$$ and spin operators $$\begin{aligned} S_i^z &=&\frac 12(n_{i,\uparrow }-n_{i,\downarrow })=\frac 12(n_{i,\uparrow }^a-n_{i,\downarrow }^a), \\ S_i^{+} &=&c_{i,\uparrow }^{+}c_{i,\downarrow }=a_{i,\uparrow }^{+}a_{i,\downarrow }, \\ S_i^{-} &=&c_{i,\downarrow }^{+}c_{i,\uparrow }=a_{i,\downarrow }^{+}a_{i,\uparrow }\end{aligned}$$ where $n_{i,\sigma }^a=a_{i,\sigma }^{+}a_{i,\sigma }$ $(i=1,2;$ $\sigma =\uparrow $, $\downarrow ).$ With the use of (9)-(13) the Hamiltonian of the Hubbard dimer (8) for large $U$ can be presented in the form (we reintroduce the dimer index $I$ omitted earlier) $$\begin{aligned} \overline{H}_I^d &=&-t\sum\limits_{\sigma} [a_{I,1,\sigma }^{+}a_{I,2,\sigma }+a_{I,2,\sigma }^{+}a_{I,1,\sigma }] \nonumber \\ && \nonumber \\ &&+\frac{4t^2}{U\sqrt{1+(\frac{4t}U)^2}}[\overrightarrow{S}_{I,1}\cdot \overrightarrow{S}_{I,2}-\frac{n_{I,1}^an_{I,2}^a}4] \nonumber \\ && \nonumber \\ &&+\frac{t^2(1-\sqrt{1+(\frac{4t}U)^2})}{U(1+\sqrt{1+(\frac{4t}U)^2})\sqrt{1+(\frac{4t}U)^2}}[2(b_{I,1,\uparrow }^{+}a_{I,1,\downarrow }^{+}a_{I,2,\downarrow }b_{I,2,\uparrow } \nonumber \\ && \nonumber \\ &&+b_{I,2,\uparrow }^{+}a_{I,2,\downarrow }^{+}a_{I,1,\downarrow }b_{I,1,\uparrow }) \\ && \nonumber \\ &&+(1-n_{I,1}^a)n_{I,2}^b+(1-n_{I,2}^a)n_{I,1}^b-n_{I,1}^bn_{I,2}^b] \nonumber \\ && \nonumber \\ &&+\frac{t(1-\sqrt{1+(\frac{4t}U)^2})}{2\sqrt{1+(\frac{4t}U)^2}}\sum\limits_{\sigma} \sum\limits_{\alpha =1,2}[a_{I,\alpha ,\sigma }^{+}b_{I,\overline{\alpha },\sigma }+b_{I,\alpha ,\sigma }^{+}a_{I,\overline{\alpha },\sigma }] \nonumber\end{aligned}$$ where $n_{I,\alpha }^{a,b}=\sum\limits_{\sigma} n_{I,\alpha ,\sigma }^{a,b},$ $n_{I,\alpha ,\sigma }^b=b_{I,\alpha ,\sigma }^{+}b_{I,\alpha ,\sigma }=n_{I,\alpha ,\sigma }n_{I,\alpha ,-\sigma }$ $(\alpha =1,2)$, $\overline{\alpha }=1$ when $\alpha =2$ and $\overline{\alpha }=2$ when $\alpha =1$. It is very important to stress that the formula (14) has been obtained in a nonperturbative way, starting from the exact form of the dimer Hamiltonian (see (5) or (7)), and excluding from the considerations unoccupied dimer energy levels in the large $U$ limit. Let us note that using the Taylor expansion in (14) with respect to $x=\frac{t}{U}$ and retaining the terms proportional to $\frac{t^2}U$ we obtain $$\overline{H}_I^d=-t\sum\limits_\sigma [a_{I,1,\sigma }^{+}a_{I,2,\sigma }+a_{I,2,\sigma }^{+}a_{I,1,\sigma }]+\frac{4t^2}U[\overrightarrow{S}_{I,1}\cdot \overrightarrow{S}_{I,2}-\frac{n_{I,1}^an_{I,2}^a}4].$$ The formula (15) is the well-known $t-J$ model for the Hubbard dimer where the first part in (15), similarly to (14), represents the exact form of the dimer Hamiltonian (5) in the limit $U\rightarrow \infty $. The same result (14) can also be obtained applying a more general approach which will be used later to derive the Hubbard Hamiltonian for large $U$ in the case of a crystal.
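A short added remark (not part of the original derivation) may help to trace the origin of the $4t^2/U$ coupling in (15). The only $U$-dependent level retained in (8) is the two-electron singlet at $$E_{24}=\frac U2-C=\frac U2-\sqrt{\left(\frac U2\right)^2+4t^2}=-\frac{8t^2}{U\left(1+\sqrt{1+\left(\frac{4t}U\right)^2}\right)}\approx -\frac{4t^2}U ,$$ so in the two-electron sector this singlet lies $J\simeq 4t^2/U$ below the degenerate levels $E_{21}=E_{25}=E_{26}=0$; this is precisely the Heisenberg exchange appearing in the second term of (15).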
Let us note that after elimination of the unoccupied levels the subspace of the eigenvectors of the Hubbard dimer (A.1) for large $U$ consists of the following eigenvectors: $|E_0\rangle $, $|E_{11}\rangle $, $|E_{12}\rangle $, $|E_{13}\rangle $, $|E_{14}\rangle $, $|E_{21}\rangle $, $|E_{24}\rangle $, $|E_{25}\rangle $, and $|E_{26}\rangle $. It means that we can define a projection operator onto this subspace $$\begin{array}{ll} P_I= & P_{0,0}^{(I)}+P_{11,11}^{(I)}+P_{12,12}^{(I)}+P_{13,13}^{(I)}+P_{14,14}^{(I)} \\ & +P_{21,21}^{(I)}+P_{24,24}^{(I)}+P_{25,25}^{(I)}+P_{26,26}^{(I)} \end{array}$$ which, in the second quantization form, reads $$\begin{array}{ll} P_I= & 1-\frac 12(n_{I,1}^b+n_{I,2}^b)+\frac 14n_{I,1}^bn_{I,2}^b \\ & \\ & \begin{array}{ll} & +\frac 18(1-\frac 1{\sqrt{1+(\frac{4t}U)^2}})[4(\overrightarrow{S}% _{I,1}\cdot \overrightarrow{S}_{I,2}-\frac 14n_{I,1}^an_{I,2}^a) \\ & \\ & +2(b_{I,1,\uparrow }^{+}a_{I,1,\downarrow }^{+}a_{I,2,\downarrow }b_{I,2,\uparrow }+b_{I,2,\uparrow }^{+}a_{I,2,\downarrow }^{+}a_{I,1,\downarrow }b_{I,1,\uparrow }) \\ & \\ & +(1-n_{I,1}^a)n_{I,2}^b+(1-n_{I,2}^a)n_{I,1}^b-n_{I,1}^bn_{I,2}^b] \end{array} \\ & \\ & +\frac t{U\sqrt{1+(\frac{4t}U)^2}}\sum\limits_{\sigma} \sum\limits_{\alpha =1,2}(a_{I,\alpha ,\sigma }^{+}b_{I,\overline{\alpha },\sigma }+b_{I,\alpha ,\sigma }^{+}a_{I,\overline{\alpha },\sigma }). \end{array}$$ Now, the Hamiltonian (14) can also be obtained from (5) (or (7)) with the use of the projection operator (16) or (17) using the relation $$\overline{H}_I^d=P_IH_I^dP_I$$ and applying a straightforward but laborious algebraic calculation. Hubbard model for large $U$ =========================== The Hamiltonian of the whole crystal (4) can be expressed (similar to (7)) in the form $$H=\sum\limits_{\gamma} \overline{E}_{\gamma} |\overline{E}_{\gamma} \rangle \langle \overline{E}_{\gamma} |$$ with unknown energies $\overline{E}_{\gamma} $ and eigenvectors $|\overline{E% }_{\gamma} \rangle $. We can, however, expand the eigenvectors $|\overline{E}% _{\gamma} \rangle $ in the series of the dimer eigenvectors (see (A.1)) $$|\overline{E}_{\gamma} \rangle =\sum\limits_{\gamma _1,..,\gamma _M}c_{\gamma _{1,}.._{,}\gamma _M}^{\gamma} |E_{\gamma _1}\rangle ..|E_{\gamma _M}\rangle$$ assuming that the crystal consists of $M$ dimers. Using (19) and (20) we obtain $$H=\sum\limits_{\gamma} \overline{E}_{\gamma} \sum\limits_{\gamma _1,..,\gamma _M}\sum\limits_{\gamma _1^{^{,}},..,\gamma _M^{,}}c_{\gamma _{1,}.._{,}\gamma _M}^{\gamma} c_{\gamma _{1,}^{,}.._{,}\gamma _M^{,}}^{\gamma *}|E_{\gamma _1}\rangle ..|E_{\gamma _M}\rangle \langle E_{\gamma _1^{,}}|..\langle E_{\gamma _M^{,}}|.$$ It is clear that to obtain the Hubbard Hamiltonian for large $U$ we have to project (21) onto the subspace of the lowest lying dimer states with the use of the projection operator $$P=P_1P_2...P_M$$ where $P_{I}$ is given by (16) or (17). In analogy to (14) we denote the Hubbard Hamiltonian in the large $U$ limit by $\overline{H}$. Similar to (18) we write $$\overline{H}=PHP$$ and instead of the form (21) for the Hamiltonian $H$ we can use (4). 
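The projection step (18) can also be verified numerically. The sketch below is an added illustration (assuming NumPy); the operator construction repeats the one in the earlier sketch so that the snippet is self-contained. It keeps the nine low-lying dimer eigenstates, forms $P_IH_I^dP_I$, and checks that the nonzero eigenvalues of the result are exactly those of the reduced Hamiltonian (8).

```python
# Added numerical check (not from the paper) of the projection relation (18).
import numpy as np

def fermion_ops(n_modes=4):
    dim = 2 ** n_modes
    ops = []
    for m in range(n_modes):
        c = np.zeros((dim, dim))
        for s in range(dim):
            if (s >> m) & 1:
                c[s ^ (1 << m), s] = (-1) ** bin(s & ((1 << m) - 1)).count("1")
        ops.append(c)
    return ops

t, U = 1.0, 20.0
c1u, c1d, c2u, c2d = fermion_ops()
n = lambda c: c.T @ c
H = (-t * (c1u.T @ c2u + c2u.T @ c1u + c1d.T @ c2d + c2d.T @ c1d)
     + U * (n(c1u) @ n(c1d) + n(c2u) @ n(c2d)))

evals, vecs = np.linalg.eigh(H)
keep = evals < 0.4 * U                  # the nine retained low-lying levels
P = vecs[:, keep] @ vecs[:, keep].T     # projector P_I onto that subspace
Hbar = P @ H @ P                        # equation (18)

C = np.sqrt((U / 2) ** 2 + 4 * t ** 2)
target = sorted([-t, t, -t, t, U / 2 - C])           # nonzero levels of (8)
nonzero = sorted(e for e in np.linalg.eigvalsh(Hbar) if abs(e) > 1e-9)
print(np.allclose(nonzero, target))                  # True
```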
Taking into account that $P^2=P$ $(P_I^2=P_I,$ $\left[ P_I,P_J\right] =0)$ we obtain $$\begin{array}{ll} \overline{H}= & P[\sum\limits_I\overline{H}_I^d-t\sum\limits_{I,\sigma }\left( \overline{c}_{I,2,\sigma }^{+}\overline{c}_{I+1,1,\sigma }+\overline{% c}_{I+1,1,\sigma }^{+}\overline{c}_{I,2,\sigma }\right) \\ & -t\sum\limits_{I\neq J,\sigma }\left( \overline{c}_{I,1,\sigma }^{+}% \overline{c}_{J,1\sigma }+\overline{c}_{I,2,\sigma }^{+}\overline{c}% _{J,2,\sigma }\right) ]\equiv P\overline{\overline{H}} \end{array}$$ where $\overline{H}_I^d$ is given by (14) and $(\alpha =1,2)$ $$\overline{c}_{I,\alpha ,\sigma }=c_{I,\alpha ,\sigma }P_I,$$ $$\overline{c}_{I,\alpha ,\sigma }^{+}=P_Ic_{I,\alpha ,\sigma }^{+}.$$ Applying the projection operator (16) or (17) to (B.1) - (B.4) and introducing Hubbard- and spin operators (9) - (13) we obtain $$\overline{c}_{I,1,\uparrow }=\underline{\underline{a}}_{I,1,\uparrow }+\beta \left[ S_{I,2}^za_{I,1,\uparrow }+S_{I,2}^{-}a_{I,1,\downarrow }\right] -\delta \left[ S_{I,1}^za_{I,2,\uparrow }+S_{I,1}^{-}a_{I,2,\downarrow }\right] ,$$ $$\overline{c}_{I,1,\downarrow }=\underline{\underline{a}}_{I,1,\downarrow }-\beta \left[ S_{I,2}^za_{I,1,\downarrow }-S_{I,2}^{+}a_{I,1,\uparrow }\right] +\delta \left[ S_{I,1}^za_{I,2,\downarrow }-S_{I,1}^{+}a_{I,2,\uparrow }\right]$$ where the corresponding expressions for $\overline{c}_{I,2,\sigma }$ ($% \sigma =\uparrow ,\downarrow $) can easily be obtained by changing the internal dimer index $1\Leftrightarrow 2$ in (27) and (28). The new operators $\underline{\underline{a}}_{I,\alpha ,\sigma }\left( \alpha =1,2;\sigma =\uparrow ,\downarrow \right) $ are introduced to obtain a relatively compact form of (27) and (28). They are defined as follows $$\begin{array}{ll} \underline{\underline{a}}_{I,\alpha ,\sigma }= & \underline{a}_{I,\alpha ,\sigma }+\beta \left( \underline{b}_{I,\alpha ,\sigma }+a_{I,\alpha ,-\sigma }^{+}a_{I,\overline{\alpha },-\sigma }b_{I,\overline{\alpha }% ,\sigma }\right) \\ & +\delta \left[ \underline{b}_{I,\overline{\alpha },\sigma }+a_{I,\overline{% \alpha },-\sigma }^{+}a_{I,\alpha ,-\sigma }b_{I,\alpha ,\sigma }+a_{I,% \overline{\alpha },\sigma }\frac{n_{I,\alpha }^a}2\right] \end{array}$$ where $$\underline{a}_{I,\alpha ,\sigma }=a_{I,\alpha ,\sigma }\left( 1-\frac \beta 2n_{I,\overline{\alpha }}^a-\frac 12n_{I,\overline{\alpha }}^b\right) ,$$ $$\underline{b}_{I,\alpha ,\sigma }=b_{I,\alpha ,\sigma }\left( 1-n_{I,% \overline{\alpha }}^a-\frac 12n_{I,\overline{\alpha }}^b\right)$$ and $$\beta =\frac 14\left( 1-\frac 1{\sqrt{1+\left( \frac{4t}U\right) ^2}}\right) ,$$ $$\delta =\frac{t}{U}\frac 1{\sqrt{1+\left( \frac{4t}U\right) ^2}}.$$ To write down the explicit form of the Hubbard Hamiltonian $\overline{% \overline{H}}$ (see (24)) we have to insert (27) and (28) into $\overline{% \overline{H}}$. This operation leads, however, to a very complicated form of $\overline{\overline{H}},$ given in the Appendix C (a simplified form of this Hamiltonian is discussed in the next Section). 
Here again the only assumption made to derive $\overline{\overline{H}}$ was the reduction of the whole dimer space (A.1) to the subspace of the dimer eigenvectors $\left| E_0\right\rangle $, $\left| E_{11}\right\rangle $, $\left| E_{12}\right\rangle $, $\left| E_{13}\right\rangle $, $\left| E_{14}\right\rangle $, $\left| E_{21}\right\rangle $, $\left| E_{24}\right\rangle $, $\left| E_{25}\right\rangle $ and $\left| E_{26}\right\rangle $, corresponding to the lowest lying dimer energy levels, because in the large $U$ limit only these levels can be occupied. The Hamiltonian $\overline{\overline{H}}$, obtained in this way, contains many competing magnetic, nonmagnetic and more complex interactions. Among them we can find a direct antiferromagnetic interaction generated by the term $\stackrel{\rightarrow }{S}_{I,1}\cdot \stackrel{\rightarrow }{S}_{I,2}$ (Heisenberg exchange interaction) multiplied by a positive coupling constant. Such a term appears in $\overline{H}_I^d$ (see (C.6) and (14)). Inside $\overline{\overline{H}}$ (see (C.6) and (C.2) - (C.5)) a kind of ferromagnetic interaction between spins from different dimers, represented by (C.3), appears within the terms proportional to $\beta ^2$ and $\delta ^2$ (negative coupling constants). The antiferromagnetic interactions, however, appear again in terms proportional to $\beta \delta $. There are also many other, more exotic magnetic interactions, represented by (C.2), (C.4) and (C.5), entering into (C.6). The situation is, however, much more complicated when we consider the total Hamiltonian $\overline{H}$ (see (24)) in the large $U$ limit. $\overline{H}$ differs from $\overline{\overline{H}}$ by the multiplicative factor $P$ (a product of the projection operators $P_I$, see (22) and (17)), standing on the left. Inside each $P_I$ the mentioned antiferromagnetic interaction also appears (see (17)). In other words, the total Hamiltonian $\overline{H}$ we are interested in is actually a sum of products of many competitive ferromagnetic, antiferromagnetic and more exotic interactions. The thermodynamic properties of the system, described by the Hamiltonian $\overline{H}$ (24), are then a result of the competition between all of them. Which interaction wins such a competition certainly depends on temperature, the model parameters ($t,U$) and the average number of electrons per lattice site, which determines the chemical potential of the system. The formalism presented in this paper is also applicable to a more complicated decomposition of the Hubbard Hamiltonian (1) into a set of interacting clusters consisting, e.g., of one central atom and its $z$ nearest neighbours. We know, however (see e.g. Refs \[36\]-\[39\] and papers cited therein), that, unfortunately, the mathematical difficulties in this case grow exponentially with the size of the cluster. Taylor expansion ================ The complicated form of the Hubbard Hamiltonian $\overline{H}$ in the large $U$ limit (see (24), (C.6)), where $P$ is given by (22) (see also (17)), can be reduced considerably by applying a Taylor expansion with respect to the parameter $x=\frac{t}{U}\ll 1$. To do so we have to expand all the coefficients in (14), (17) and (C.6), including also $\beta $ and $\delta $ (see (32), (33)).
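The expansions of $\beta$ and $\delta$ can be checked symbolically. The short sketch below is an added example (assuming SymPy); it reproduces $\beta=2x^2+O(x^4)$ and $\delta=x-8x^3+O(x^5)$, showing that to the linear order used below only $\delta\simeq t/U$ survives, while $\beta$ first contributes at order $x^2$.

```python
# Added symbolic check (assumes SymPy) of the small-x behaviour of the
# coefficients (32) and (33), with x = t/U.
import sympy as sp

x = sp.symbols('x', positive=True)          # x = t/U
root = sp.sqrt(1 + (4 * x) ** 2)
beta = sp.Rational(1, 4) * (1 - 1 / root)   # Eq. (32)
delta = x / root                            # Eq. (33)

print(sp.series(beta, x, 0, 6))             # 2*x**2 - 24*x**4 + O(x**6)
print(sp.series(delta, x, 0, 6))            # x - 8*x**3 + 96*x**5 + O(x**6)
```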
Such an expansion can be performed to any power of $x$, however, the most simple form we obtain when we restrict ourselves to the linear approximation, resulting in the terms proportional to $t$ and $\frac{% t^2}U.$ The accuracy of this expansion can easily be verified assumming e.g. a typical value of the ratio $\frac{W}{U}=\frac 15$ (or less). Because the bandwidth of the conduction band for the sc lattice is $W=12t,$ it results in a small value of the expansion parameter $x=\frac tU=\frac 1{60}$ in this case. It, however, means that the linear approximation is quite reasonable because all higher terms in the expansion, proportional to $x^n$ ($% n=2,3,...),$ produce $60$ times smaller contribution. To present the results of the Taylor expansion including all the terms proportional to $t$ and $\frac{t^2}U$ let us first define several auxiliary quantities $$P^{(1)}=\prod\limits_{I=1}^MP_I^{(1)},$$ $$P^{(2)}=\frac tU\sum\limits_{I=1}^MP_1^{(1)}\cdot ...\cdot P_{I-1}^{(1)}P_I^{(2)}P_{I+1}^{(1)}\cdot ...\cdot P_M^{(1)}$$ where $$P_I^{(1)}=1-\frac 12\left( n_{I,1}^b+n_{I,2}^b\right) +\frac 14n_{I,1}^bn_{I,2}^b,$$ $$P_I^{(2)}=\sum\limits_{\sigma} \sum\limits_{\alpha =1}^2\left( a_{I,\alpha ,\sigma }^{+}b_{I,\overline{\alpha },\sigma }+b_{I,\alpha ,\sigma }^{+}a_{I,% \overline{\alpha },\sigma }\right)$$ and ($\alpha =1,$ $2$) $$\widetilde{a}_{I,\alpha ,\sigma }=a_{I,\alpha ,\sigma }\left( 1-\frac 12n_{I,% \overline{\alpha }}^b\right) ,$$ $$\widetilde{\widetilde{a}}_{I,\alpha ,\sigma }=\underline{b}_{I,\overline{% \alpha },\sigma }+a_{I,\overline{\alpha },-\sigma }^{+}a_{I,\alpha ,-\sigma }b_{I,\alpha ,\sigma }+a_{I,\overline{\alpha },\sigma }\frac{n_{I,\alpha }^a}% 2$$ where $\underline{b}_{I,\alpha ,\sigma }$ is given by (31). The Hamiltonian $\overline{\overline{H}}$ (C.6), including all the terms proportional to $t$ and $\frac{t^2}U$, takes on the form $$\overline{\overline{H}}=\overline{\overline{H}}^{(1)}+\overline{\overline{H}}% ^{(2)}$$ where $$\begin{aligned} \overline{\overline{H}}^{(1)} &=&-t\sum\limits_{I,\sigma }\left[ a_{I,1,\sigma }^{+}a_{I,2,\sigma }+a_{I,2,\sigma }^{+}a_{I,1,\sigma }\right] \nonumber \\[0.2cm] &&-t\sum\limits_{I,\sigma }\left[ \widetilde{a}_{I,2,\sigma }^{+}\widetilde{% a}_{I+1,1,\sigma }+\widetilde{a}_{I+1,1,\sigma }^{+}\widetilde{a}% _{I,2,\sigma }\right] \\[0.2cm] &&-t\sum\limits_{I\neq J,\sigma }\sum\limits_{\alpha =1}^2\widetilde{a}% _{I,\alpha ,\sigma }^{+}\widetilde{a}_{J,\alpha ,\sigma } \nonumber\end{aligned}$$ and $$\begin{aligned} \overline{\overline{H}}^{(2)} &=&\frac{4t^2}U\sum\limits_I[\overrightarrow{S% }_{I,1}\cdot \overrightarrow{S}_{I,2}-\frac 14n_{I,1}^an_{I,2}^a] \nonumber \\ &&-\frac{t^2}U\sum\limits_{I,\sigma }[\widetilde{a}_{I,2,\sigma }^{+}% \widetilde{\widetilde{a}}_{I+1,1,\sigma }+\widetilde{\widetilde{a}}% _{I,2,\sigma }^{+}\widetilde{a}_{I+1,1,\sigma } \nonumber \\ &&+\widetilde{a}_{I+1,1,\sigma }^{+}\widetilde{\widetilde{a}}_{I,2,\sigma }+% \widetilde{\widetilde{a}}_{I+1,1,\sigma }^{+}\widetilde{a}_{I,2,\sigma }] \nonumber \\ &&-\frac{t^2}U\sum\limits_{I\neq J,\sigma }\sum\limits_{\alpha =1}^2[% \widetilde{a}_{I,\alpha ,\sigma }^{+}\widetilde{\widetilde{a}}_{J,\alpha ,\sigma }+\widetilde{\widetilde{a}}_{I,\alpha ,\sigma }^{+}\widetilde{a}% _{J,\alpha ,\sigma }] \\ &&+\frac{2t^2}U\sum\limits_I[\overrightarrow{S}_{I,2}\cdot (\overrightarrow{% \underline{\underline{s}}}_{I,1;I+1,1}+\overrightarrow{\underline{s}}% _{I+1,1;I,1}) \nonumber \\ &&+\overrightarrow{S}_{I+1,1}\cdot (\overrightarrow{\underline{\underline{s}}% 
}_{I+1,2;I,2}+\overrightarrow{\underline{s}}_{I,2;I+1,2})] \nonumber \\ &&+\frac{2t^2}U\sum\limits_{I\neq J}[\overrightarrow{S}_{I,1}\cdot (% \overrightarrow{\underline{\underline{s}}}_{I,2;J,1}+\overrightarrow{% \underline{s}}_{J,1;I,2}) \nonumber \\ &&+\overrightarrow{S}_{I,2}\cdot (\overrightarrow{\underline{\underline{s}}}% _{I,1;J,2}+\overrightarrow{\underline{s}}_{J,2;I,1})]. \nonumber\end{aligned}$$ The operators $\underline{\underline{s}}_{I,\mu ;J,\nu }^{z,\pm }$ and $\underline{s}_{I,\mu ;J,\nu }^{z,\pm }$ in (42) retain the forms introduced in (C.1), but $\underline{\underline{a}}_{I,\mu ,\sigma }(\underline{\underline{a}}_{I,\mu ,\sigma }^{+})$ in (C.1) should now be replaced by $\widetilde{a}_{I,\mu ,\sigma }(\widetilde{a}_{I,\mu ,\sigma }^{+})$, defined by (38). The total Hamiltonian $\overline{H}$ in the large $U$ limit (24), including all terms proportional to $t$ and $\frac{t^2}U$, can thus be written in the form $$\overline{H}=P^{(1)}(\overline{\overline{H}}^{(1)}+\overline{\overline{H}}^{(2)})+P^{(2)}\overline{\overline{H}}^{(1)}=P^{(1)}\overline{\overline{H}}^{(1)}+(P^{(1)}\overline{\overline{H}}^{(2)}+P^{(2)}\overline{\overline{H}}^{(1)}).$$ The first part, $P^{(1)}\overline{\overline{H}}^{(1)}$, contains the terms proportional to $t$, whereas the term in the parentheses is proportional to $\frac{t^2}U$ (see (35) and (42)). In the expression for $\overline{\overline{H}}^{(2)}$ (see (42)) the first term describes the antiferromagnetic, Heisenberg intradimer interaction (see also the second term in (15)). The appearance of this term may suggest that such an interaction should also arise between different dimers. This is of course the case; however, because of the applied procedure such terms do not appear explicitly. In the last analysis our method treats all interactions, within and between dimers, on the same footing, i.e. all interactions are taken into account. The magnetic interdimer interactions, represented by the fourth and fifth terms in (42), have formally the same structure as the Heisenberg interactions, but instead of the scalar products of spin operators there are products of spin operators and “hopping spin” operators (defined in C.1). All the terms presented in (42) are correct because they originate from the exact decomposition of the Hubbard Hamiltonian (1) into a set of interacting dimers (4), where each dimer problem has been exactly solved (exact dimer representation of the construction operators after applying the projection procedure, given by (27) and (28)). The total Hamiltonian $\overline{H}$ (43) we are interested in is much more complicated than $\overline{\overline{H}}^{(1)}$ and $\overline{\overline{H}}^{(2)}$ alone (see (41)-(43)) because of the presence of the projection operators $P^{(1)}$ and $P^{(2)}$ (see (34)-(37)) in the expression (43). It is also interesting to see what happens in the special case of the limit $U\rightarrow \infty $. The second term in the parentheses of Eq. (43) vanishes in this limit (cf (35) and (42)).
Besides, each lattice site cannot be occupied by two electrons at the same time, which is equivalent to the assumption $(\alpha =1,2)$ $$n_{I,\alpha ,\sigma }^b=b_{I,\alpha ,\sigma }^{+}b_{I,\alpha ,\sigma }=n_{I,\alpha ,\sigma }n_{I,\alpha ,-\sigma }=0,$$ $$n_{I,\alpha }^b=\sum\limits_{\sigma} n_{I,\alpha ,\sigma }^b=0$$ and (cf (38)) $$\widetilde{a}_{I,\alpha ,\sigma }=a_{I,\alpha ,\sigma }.$$ The total Hamiltonian (43) is thus given by the simple formula $$\begin{array}{ll} \overline{H}=P^{(1)}\overline{\overline{H}}= & -t\sum\limits_{I,\sigma }\left[ a_{I,1,\sigma }^{+}a_{I,2,\sigma }+a_{I,2,\sigma }^{+}a_{I,1,\sigma }\right] \\ & \\ & -t\sum\limits_{I,\sigma }\left[ a_{I,2,\sigma }^{+}a_{I+1,1,\sigma }+a_{I+1,1,\sigma }^{+}a_{I,2,\sigma }\right] \\ & \\ & -t\sum\limits_{I\neq J,\sigma }\sum\limits_{\alpha =1}^2a_{I,\alpha ,\sigma }^{+}a_{J,\alpha ,\sigma }. \end{array}$$ Going back to the original lattice (cf (4), (5) and (1)) it is easy to see that the exact Hubbard Hamiltonian in the limit $U\rightarrow \infty $ takes on the form $$\overline{H}=-t\sum\limits_{i,j,\sigma }a_{i,\sigma }^{+}a_{j,\sigma }$$ where $i$ and $j$ (as before) number the lattice points and $a_{i,\sigma }$ $(a_{i,\sigma }^{+})$ are the Hubbard operators, defined by (9). Conclusions =========== Using a new, nonperturbative approach, based on the equivalent form of the Hubbard Hamiltonian represented by the collection of interacting dimers (4), where each dimer problem has been exactly solved, we have expressed the annihilation (creation) operators (B.1)-(B.4) as linear combinations of transition operators between different dimer states (dimer representation). This method made it possible to exclude from the considerations the unoccupied dimer energy levels in the large $U$ limit by means of the projection technique, resulting in the final Hamiltonian for the dimer itself (14), as well as for the crystal as a whole (see (23), (24) and (C.6)). It is important to stress that the elimination of the unoccupied dimer energy levels was the only assumption needed to derive the Hubbard Hamiltonian (24) in the large $U$ limit. Therefore we can be sure that, expanding (24) with respect to $x=\frac tU$ (Taylor expansion), we obtain all terms proportional to $x^n$ $(n=1,2,...).$ In other words, all the coefficients proportional to $x^n$ are easy to control, which is not always the case with other methods. The final form of the obtained Hamiltonian in the large $U$ limit (see (23), (24) and (C.6)) reveals the high degree of complexity of the model, deeply hidden in its original version, forming a mixture of many multiple ferromagnetic, antiferromagnetic and more complex interactions competing with one another. This fact seems to be decisive for our final conclusion. Because we are still dealing with approximate solutions of the model (an exact solution does not exist to date) it may happen that in this way we underestimate some important interactions and overestimate others. This is the reason why the resulting thermodynamic properties of the Hubbard model, obtained in an approximate way, depend so strongly on the quality of the applied approximations. We note in passing that the exact dimer solution may serve as a novel alloy analogy for the Hubbard model, which could be treated by the coherent potential approximation (CPA). It is well-known that the standard alloy analogy, based on the atomic limit, does not allow for ferromagnetism in the Hubbard model.
This may change by application of the dimer solution which already accounts for a restricted hopping of the band electrons. A corresponding study is in preparation. Another example of a dimer approach to Hubbard-like models is the bond operator theory as an extension of the slave bosonic and fermionic operators (see Refs \[40\], \[41\] and papers cited therein). [Appendix A]{} The exact solution of the dimer eigenvalue problem (5) reads: $$\begin{array}{ll} E_0=0; & |E_0\rangle =|0\rangle , \end{array}$$ $$\begin{array}{ll} E_{11}=-t; & |E_{11}\rangle =\frac{1}{\sqrt{2}}(|11\rangle +|13\rangle ), \\ E_{12}=t; & |E_{12}\rangle =\frac{1}{\sqrt{2}}(|11\rangle -|13\rangle ), \\ E_{13}=-t; & |E_{13}\rangle =\frac{1}{\sqrt{2}}(|12\rangle +|14\rangle ), \\ E_{14}=t; & |E_{14}\rangle =\frac{1}{\sqrt{2}}(|12\rangle -|14\rangle ), \end{array}$$ $$\begin{array}{ll} E_{21}=0; & |E_{21}\rangle =\frac 1{\sqrt{2}}(|23\rangle +|24\rangle ), \\ E_{22}=U; & |E_{22}\rangle =\frac 1{\sqrt{2}}(|21\rangle -|26\rangle ), \\ E_{23}=C+\frac U2; & |E_{23}\rangle =a_1(|21\rangle +|26\rangle )-a_2(|23\rangle -|24\rangle ), \\ E_{24}=-C+\frac U2; & |E_{24}\rangle =a_2(|21\rangle +|26\rangle )+a_1(|23\rangle -|24\rangle ), \\ E_{25}=0; & |E_{25}\rangle =|22\rangle , \\ E_{26}=0; & |E_{26}\rangle =|25\rangle , \end{array} \tag{A.1}$$ $$\begin{array}{ll} E_{31}=t+U; & |E_{31}\rangle =\frac 1{\sqrt{2}}(|31\rangle +|33\rangle ), \\ E_{32}=-t+U; & |E_{32}\rangle =\frac 1{\sqrt{2}}(|31\rangle -|33\rangle ), \\ E_{33}=t+U; & |E_{33}\rangle =\frac 1{\sqrt{2}}(|32\rangle +|34\rangle ), \\ E_{34}=-t+U; & |E_{34}\rangle =\frac 1{\sqrt{2}}(|32\rangle -|34\rangle ), \end{array}$$ $$\begin{array}{ll} E_4=2U; & |E_4\rangle =|4\rangle \end{array}$$ where $$C=\sqrt{{\left( \frac U2\right) }^2+4t^2}, \tag{A.2}$$ $$a_1=\frac 12\sqrt{1+\frac U{2C}}, \tag{A.3}$$ $$a_2=\frac 12\sqrt{1-\frac U{2C}}. 
\tag{A.4}$$ [Appendix B]{} The exact dimer representation of the construction operators is given by the following expressions: $$\begin{array}[t]{ll} c_{1,\uparrow }= & \frac 1{\sqrt{2}}\left( P_{0,11}+P_{0,12}\right) +\frac 1{% \sqrt{2}}P_{11,25}-\frac 1{\sqrt{2}}P_{12,25} \\ & +\frac 12\left( P_{13,21}+P_{13,22}\right) +\frac 1{\sqrt{2}}\left( bP_{13,23}+aP_{13,24}\right) \\ & -\frac 12\left( P_{14,21}-P_{14,22}\right) +\frac 1{\sqrt{2}}\left( aP_{14,23}-bP_{14,24}\right) \\ & +\frac 12\left( P_{21,33}-P_{21,34}\right) -\frac 12\left( P_{22,33}+P_{22,34}\right) \\ & +\frac 1{\sqrt{2}}\left( aP_{23,33}+bP_{23,34}\right) -\frac 1{\sqrt{2}% }\left( bP_{24,33}-aP_{24,34}\right) \\ & +\frac 1{\sqrt{2}}\left( P_{26,31}-P_{26,32}\right) +\frac 1{\sqrt{2}% }P_{31,4}+\frac 1{\sqrt{2}}P_{32,4}, \end{array} \tag{B.1}$$ $$\begin{array}[t]{ll} c_{1,\downarrow }= & \frac 1{\sqrt{2}}\left( P_{0,13}+P_{0,14}\right) +\frac 1{\sqrt{2}}P_{13,26}-\frac 1{\sqrt{2}}P_{14,26} \\ & +\frac 12\left( P_{11,21}-P_{11,22}\right) -\frac 1{\sqrt{2}}\left( bP_{11,23}+aP_{11,24}\right) \\ & -\frac 12\left( P_{12,21}+P_{12,22}\right) -\frac 1{\sqrt{2}}\left( aP_{12,23}-bP_{12,24}\right) \\ & -\frac 12\left( P_{21,31}-P_{21,32}\right) -\frac 12\left( P_{22,31}+P_{22,32}\right) \\ & +\frac 1{\sqrt{2}}\left( aP_{23,31}+bP_{23,32}\right) -\frac 1{\sqrt{2}% }\left( bP_{24,31}-aP_{24,32}\right) \\ & -\frac 1{\sqrt{2}}\left( P_{25,33}-P_{25,34}\right) -\frac 1{\sqrt{2}% }P_{33,4}-\frac 1{\sqrt{2}}P_{34,4}, \end{array} \tag{B.2}$$ $$\begin{array}[t]{ll} c_{2,\uparrow }= & \frac 1{\sqrt{2}}\left( P_{0,11}-P_{0,12}\right) -\frac 1{% \sqrt{2}}P_{11,25}-\frac 1{\sqrt{2}}P_{12,25} \\ & -\frac 12\left( P_{13,21}+P_{13,22}\right) +\frac 1{\sqrt{2}}\left( bP_{13,23}+aP_{13,24}\right) \\ & -\frac 12\left( P_{14,21}-P_{14,22}\right) -\frac 1{\sqrt{2}}\left( aP_{14,23}-bP_{14,24}\right) \\ & -\frac 12\left( P_{21,33}+P_{21,34}\right) +\frac 12\left( P_{22,33}-P_{22,34}\right) \\ & +\frac 1{\sqrt{2}}\left( aP_{23,33}-bP_{23,34}\right) -\frac 1{\sqrt{2}% }\left( bP_{24,33}+aP_{24,34}\right) \\ & -\frac 1{\sqrt{2}}\left( P_{26,31}+P_{26,32}\right) +\frac 1{\sqrt{2}% }P_{31,4}-\frac 1{\sqrt{2}}P_{32,4}, \end{array} \tag{B.3}$$ $$\begin{array}[t]{ll} c_{2,\downarrow }= & \frac 1{\sqrt{2}}\left( P_{0,13}-P_{0,14}\right) -\frac 1{\sqrt{2}}P_{13,26}-\frac 1{\sqrt{2}}P_{14,26} \\ & -\frac 12\left( P_{11,21}-P_{11,22}\right) -\frac 1{\sqrt{2}}\left( bP_{11,23}+aP_{11,24}\right) \\ & -\frac 12\left( P_{12,21}+P_{12,22}\right) +\frac 1{\sqrt{2}}\left( aP_{12,23}-bP_{12,24}\right) \\ & +\frac 12\left( P_{21,31}+P_{21,32}\right) +\frac 12\left( P_{22,31}-P_{22,32}\right) \\ & +\frac 1{\sqrt{2}}\left( aP_{23,31}-bP_{23,32}\right) -\frac 1{\sqrt{2}% }\left( bP_{24,31}+aP_{24,32}\right) \\ & +\frac 1{\sqrt{2}}\left( P_{25,33}+P_{25,34}\right) -\frac 1{\sqrt{2}% }P_{33,4}+\frac 1{\sqrt{2}}P_{34,4} \end{array} \tag{B.4}$$ where $$P_{\alpha ,\beta }=|E_{\alpha} \rangle \langle E_{\beta} | \tag{B.5}$$ and $$\begin{array}{l} a=a_1+a_2 \\ b=a_1-a_2. \end{array} \tag{B.6}$$ To obtain the annihilation operators of the $I-$th dimer, the index $I$ should be added $(c_{1(2),\sigma }\longrightarrow c_{I,1(2),\sigma }$, $% P_{\alpha ,\beta }\rightarrow P_{\alpha ,\beta }^{(I)})$ in (B.1)-(B.4). 
[Appendix C]{} To present the Hamiltonian $\overline{\overline{H}}$ in a compact form we first introduce the following operators $\left( \mu ,\nu =1,2\right) $ : $$\begin{array}{ll} s_{I,\mu ;J,\nu }^{+}= & a_{I,\mu ,\uparrow }^{+}a_{J,\nu ,\downarrow }, \\ & \\ s_{I,\mu ;J,\nu }^{-}= & a_{J,\nu ,\downarrow }^{+}a_{I,\mu ,\uparrow }, \\ & \\ n_{I,\mu ;J,\nu ;\sigma }= & a_{I,\mu ,\sigma }^{+}a_{J,\nu ,\sigma }, \\ & \\ n_{I,\mu ;J,\nu }= & \sum\limits_{\sigma} n_{I,\mu ;J,\nu ;\sigma }, \\ & \\ s_{I,\mu ;J,\nu }^z= & \frac 12\left( n_{I,\mu ;J,\nu ;\uparrow }-n_{I,\mu ;J,\nu ;\downarrow }\right) ; \\ & \end{array}$$ $$\begin{array}{ll} \underline{s}_{I,\mu ;J,\nu }^{+}= & \underline{\underline{a}}_{I,\mu ,\uparrow }^{+}a_{J,\nu ,\downarrow }, \\ & \\ \underline{s}_{I,\mu ;J,\nu }^{-}= & a_{J,\nu ,\downarrow }^{+}\underline{% \underline{a}}_{I,\mu ,\uparrow }, \\ & \\ \underline{n}_{I,\mu ;J,\nu ;\sigma }= & \underline{\underline{a}}_{I,\mu ,\sigma }^{+}a_{J,\nu ,\sigma }, \\ & \\ \underline{n}_{I,\mu ;J,\nu }= & \sum\limits_{\sigma} \underline{n}_{I,\mu ;J,\nu ;\sigma }, \\ & \\ \underline{s}_{I,\mu ;J,\nu }^z= & \frac 12\left( \underline{n}_{I,\mu ;J,\nu ;\uparrow }-\underline{n}_{I,\mu ;J,\nu ;\downarrow }\right) ; \\ & \end{array} \tag{C.1}$$ $$\begin{array}{ll} \underline{\underline{s}}_{I,\mu ;J,\nu }^{+}= & a_{I,\mu ,\uparrow }^{+}% \underline{\underline{a}}_{J,\nu ,\downarrow }, \\ & \\ \underline{\underline{s}}_{I,\mu ;J,\nu }^{-}= & \underline{\underline{a}}% _{J,\nu ,\downarrow }^{+}a_{I,\mu ,\uparrow }, \\ & \\ \underline{\underline{n}}_{I,\mu ;J,\nu ;\sigma }= & a_{I,\mu ,\sigma }^{+}% \underline{\underline{a}}_{J,\nu ,\sigma }, \\ & \\ \underline{\underline{n}}_{I,\mu ;J,\nu }= & \sum\limits_{\sigma} \underline{\underline{n}}_{I,\mu ;J,\nu ;\sigma }, \\ & \\ \underline{\underline{s}}_{I,\mu ;J,\nu }^z= & \frac 12\left( \underline{% \underline{n}}_{I,\mu ;J,\nu ;\uparrow }-\underline{\underline{n}}_{I,\mu ;J,\nu ;\downarrow }\right) . \end{array}$$ The operators $s_{I,\mu ;J,\nu }^{z,\pm }$, $\underline{s}_{I,\mu ;J,\nu }^{z,\pm }$, $\underline{\underline{s}}_{I,\mu ;J,\nu }^{z,\pm }$, are not strictly the spin operators, they, however, show some similarities to the true spin operators (as e.g. (11)-(13)) and therefore they can be called ”hopping spin operators”. Moreover, the following abreviations have to be used: $$Q\left( I,\mu ;J,\nu \right) =\overrightarrow{S}_{I,\mu }\cdot \left( \overrightarrow{\underline{\underline{s}}}_{I,\overline{\mu };J,\nu }+% \stackrel{\rightarrow }{\underline{s}}_{J,\nu ;I,\overline{\mu }}\right) , \tag{C.2}$$ $$R\left( I,\mu ;J,\nu \right) =\stackrel{\rightarrow }{S}_{I,\mu }\cdot \stackrel{\rightarrow }{S}_{J,\nu }n_{I,\overline{\mu };J,\overline{\nu }}, \tag{C.3}$$ $$R^{z,\pm }\left( I,\mu ;J,\nu \right) =\left( S_{I,\mu }^zS_{J,\nu }^{\pm }-S_{I,\mu }^{\pm }S_{J,\nu }^z\right) s_{I,\overline{\mu };J,\overline{\nu }% }^{\mp }, \tag{C.4}$$ $$R^{-,+}\left( I,\mu ;J,\nu \right) =\left( S_{I,\mu }^{-}S_{J,\nu }^{+}-S_{I,\mu }^{+}S_{J,\nu }^{-}\right) s_{I,\overline{\mu };J,\overline{% \nu }}^z. 
\tag{C.5}$$ The Hamiltonian $\overline{\overline{H}}$ (cf (24)) with the use of (C.1) - (C.5) takes on the form $$\begin{array}{ll} \overline{\overline{H}}= & \sum\limits_I\overline{H}_I^d-t\sum\limits_{I,% \sigma }\left[ \underline{\underline{a}}_{I,2,\sigma }^{+}\underline{% \underline{a}}_{I+1,1,\sigma }+\underline{\underline{a}}_{I+1,1,\sigma }^{+}% \underline{\underline{a}}_{I+1,2,\sigma }\right] \\ & \\ & -2t\beta \sum\limits_I\left[ Q\left( I,1;I+1,1\right) +Q\left( I+1,2;I,2\right) \right] \\ & \\ & +2t\delta \sum\limits_I\left[ Q\left( I,2;I+1,1\right) +Q\left( I+1,1;I,2\right) \right] \end{array}$$ $$\begin{array}{ll} -t\beta ^2\sum\limits_I & [R\left( I,1;I+1,2\right) +R^{z,-}\left( I,1;I+1,2\right) \\ & \\ & +R^{z,+}\left( I,1;I+1,2\right) +R^{-,+}\left( I,1;I+1,2\right) \\ & \\ & +R\left( I+1,2;I,1\right) +R^{z,-}\left( I+1,2;I,1\right) \\ & \\ & +R^{z,+}\left( I+1,2;I,1\right) +R^{-,+}\left( I+1,2;I,1\right) ] \end{array}$$ $$\begin{array}{ll} -t\delta ^2\sum\limits_I & [R\left( I,2;I+1,1\right) +R^{z,-}\left( I,2;I+1,1\right) \\ & \\ & +R^{z,+}\left( I,2;I+1,1\right) +R^{-,+}\left( I,2;I+1,1\right) \\ & \\ & +R\left( I+1,1;I,2\right) +R^{z,-}\left( I+1,1;I,2\right) \\ & \\ & +R^{z,+}\left( I+1,1;I,2\right) +R^{-,+}\left( I+1,1;I,2\right) ] \end{array}$$ $$\begin{array}{ll} +t\beta \delta \sum\limits_I\sum\limits_{\mu =1}^2 & [R\left( I,\mu ;I+1,\mu \right) +R^{z,-}\left( I,\mu ;I+1,\mu \right) \\ & \\ & +R^{z,+}\left( I,\mu ;I+1,\mu \right) +R^{-,+}\left( I,\mu ;I+1,\mu \right) \\ & \\ & +R\left( I+1,\mu ;I,\mu \right) +R^{z,-}\left( I+1,\mu ;I,\mu \right) \\ & \\ & +R^{z,+}\left( I+1,\mu ;I,\mu \right) +R^{-,+}\left( I+1,\mu ;I,\mu \right) ] \end{array} \tag{C.6}$$ $$\begin{array}{ll} & -t\sum\limits_{I\neq J,\sigma }\sum\limits_{\mu =1}^2\underline{% \underline{a}}_{I,\mu ,\sigma }^{+}\underline{\underline{a}}_{J,\mu ,\sigma } \\ & \\ & -2t\beta \sum\limits_{I\neq J}\sum\limits_{\mu =1}^2Q\left( I,\mu ;J,% \overline{\mu }\right) +2t\delta \sum\limits_{I\neq J}\sum\limits_{\mu =1}^2Q\left( I,\mu ;J,\mu \right) \end{array}$$ $$\begin{array}{ll} & -t\left( \beta ^2+\delta ^2\right) \sum\limits_{I\neq J}\sum\limits_{\mu =1}^2[R\left( I,\mu ;J,\mu \right) +R^{z,-}\left( I,\mu ;J,\mu \right) \\ & \\ & +R^{z,+}\left( I,\mu ;J,\mu \right) +R^{-,+}\left( I,\mu ;J,\mu \right) ] \\ & \\ & +2t\beta \delta \sum\limits_{I\neq J}\sum\limits_{\mu =1}^2[R\left( I,\mu ;J,\overline{\mu }\right) +R^{z,-}\left( I,\mu ;J,\overline{\mu }% \right) \\ & \\ & +R^{z,+}\left( I,\mu ;J,\overline{\mu }\right) +R^{-,+}\left( I,\mu ;J,% \overline{\mu }\right) ] \end{array}$$ where $\overline{H}_I^d$ is given by (14). It should also be noted that the operators $\underline{\underline{a}}_{I,\alpha ,\sigma }$ $(\underline{% \underline{a}}_{I,\alpha ,\sigma }^{+}),$ entering in (C.1) and (C.2), are in fact, dependent on $\beta $ and $\delta $ (see (29)). The decomposition of the Hamiltonian $\overline{\overline{H}}$ (C.6) according to the terms proportional to $\beta ,$ $\delta $, $\beta ^2$,$\delta ^2$ and $\beta \delta $ has thus only a formal character in order to keep the presentation of the Hamiltonian $\overline{\overline{H}}$ in a compact form. The formula (C.6) is the complete expression. [99]{} Hubbard J. 1963 *Proc. Roy. Soc. London* A **276** 238 ; 1964 A **281** 401. Herrmann T. and Nolting W. 1997 *J. Magn. Magn. Mater.* **170** 253. Fulde P. 1995 *Electron Correlations in Molecules and Solids* (Berlin: Springer). Gebhard F. 
1997 *The Mott Metal-Insulator Transitions* (Springer Tracts in Modern Physics 137 Berlin: Springer). Micnas R., Ranninger J. and Robaszkiewicz S. 1990 *Rev. Mod. Phys.* **62** 113. Anderson P. W. 1961 *Phys. Rev.* **124** 41. Nolting W. 1976 *Phys. Stat. Sol.* (b) **96** 11. Matlak M. and Nolting W. 1984 *Z. Physik* B **55** 103. Kohn W. 1964 *Phys. Rev.* **133** 171. Nagaoka Y. 1966 *Phys. Rev.* **147** 392. Roth L. 1966 *Phys. Rev.* **149** 306. Harris A. B. and Lange R. V. 1967 *Phys. Rev.* **157** 295. Penn D. 1968 *Phys. Lett.* A **26** 509. Carron L. G. and Pratt G. W. 1968 *Rev. Mod. Phys.* **40** 802. Langer W., Plischke M. and Mattis D. 1969 *Phys. Rev. Lett.* **23** 1448. Kaplan T. A. and Bari R. A. 1970 *J. Appl. Phys.* **41** 875. Sokoloff J. B. 1970 *Phys. Rev.* B **1** 1144. Klein D. J. and Seitz W. A. 1973 *Phys. Rev.* B **8** 2236. Visscher P. B. 1974 *Phys. Rev.* B **10** 943. Ogawa T., Kanda K. and Matsubara T. 1975 *Prog. Theor. Phys.* **53** 614. Florencio J., Jr. , and Chao K. A. 1975 *Phys. Rev. Lett.* **35** 741. Florencio J., Jr. , and Chao K. A. 1976 *Phys. Rev.* B **14** 3121. Chao K. A., Spałek J. and Oleś A. M. 1977 *J. Phys.* C **10** L271. Takahashi M. 1982 *J. Phys. Soc. Jpn.* **51** 3475. Irkhin V. Yu. and Katsnelson M. I. 1985 *J. Phys.* C **18** 4173. Shastry B. S., Krishnamurthy H. R. and Anderson P. W. 1990 *Phys. Rev.* B **41** 2375. Dagotto E. 1994 *Rev. Mod. Phys.* **66** 763. Irkhin V. Yu. and Irkhin Yu. P. 1994 *Phys. Stat. Sol.* (b) **183** 9. Eskes H. and Eder R. 1996 *Phys. Rev.* B **54** R14226. Irkhin V. Yu. 1998 *Phys. Rev.* B **57** 13375. Anderson P. W. 1959 *Phys. Rev.* **115** 2. Anderson P. W. 1963 *Solid State Physics*, edited by F.Seitz and D.Turnbull (Academic Press: New York) Vol. 14, p. 99. Cheng V. C. and Chen S. H. 1977 *Physica* B **85** 299. Matlak M. 1980 *Phys. Stat. Sol.* (b) **99** K 97. Lorentz B. 1983 *Phys. Stat. Sol.* (b) **119** 555. Falicov L. M. and Victor R. H. 1984 *Phys. Rev.* B **30** 1695. Callaway J., Chen D. P. and Tang R. 1987 *Phys Rev.* B **35** 3705. Callaway J., Chen D. P. and Zhang Y. 1987 *Phys. Rev.* B **36** 2084. Pastor G. M., Hirsch R. and Műhlschlegel B. 1996 *Phys. Rev.* B **53** 10382. Sachdev S., Bhatt R. N. 1990 *Phys. Rev.* B **41,** 9323. Park K., Sachdev S. 2001 *Phys. Rev.* B **64,** 1845510. [^1]: e-mail:matlak@us.edu.pl [^2]: The dimer Fourier transformation $c_{I,1,\sigma }=\frac 1{\sqrt{N}}\sum\limits_{\mathbf{k}}c_{\mathbf{k,}% \sigma }e^{i\mathbf{k\cdot R}_{I,1}},c_{I,2,\sigma }=\frac 1{\sqrt{N}% }\sum\limits_{\mathbf{k}}c_{\mathbf{k,}\sigma }e^{i\mathbf{k\cdot R}_{I,2}}$ where $N$ is the number of lattice points, applied to (4), gives the well known result $H=\sum\limits_{\mathbf{k,\sigma }}\varepsilon _{\mathbf{k}}n_{\mathbf{k,}% \sigma }+\frac UN\sum\limits_{\mathbf{k,k}^{\prime },\mathbf{q}}c_{\mathbf{% k+q,}\uparrow }^{+}c_{\mathbf{k,}\uparrow }c_{\mathbf{k}^{\prime }-\mathbf{q,% }\downarrow }^{+}c_{\mathbf{k}^{\prime }\mathbf{,}\downarrow }$ with $% \varepsilon _{\mathbf{k}}$ given by (3).
1
--- abstract: 'We consider spatially homogeneous, anisotropic cosmological models in $5D$ whose line element can be written as $dS^2 = {\cal{A}}(u, v)du dv - {\cal{B}}_{i j}(u, v)dx^{i}dx^{j}$, $(i, j = 1, 2, 3)$, where $u$ and $v$ are light-like coordinates. In the case where ${\cal{B}}_{i j}$ is diagonal, we construct three families of analytic solutions to the $5D$ vacuum field equations $R_{AB} = 0$ $ (A, B = 0, 1, 2, 3, 4)$. Among them, there is a family of self-similar homothetic solutions that contains, as a particular case, the so-called light-like Kasner universes. In this work we provide a detailed study of the different types of $4D$ scenarios that can be embedded in such universes. For the sake of generality of the discussion, and applicability of the results, in our analysis we consider the two versions of non-compactified $5D$ relativity in vogue, viz., braneworld theory and induced matter theory. We find a great variety of cosmological models in $4D$ which are anisotropic versions of the FRW ones. We obtain models on the brane with a non-vanishing cosmological term $\Lambda_{(4)}$, which inflate [*à la*]{} de Sitter without satisfying the classical false-vacuum equation of state. Using the symmetry of the solutions, we construct a class of non-static vacuum solutions on the brane. We also develop [*static*]{} pancake-like distributions where the matter is concentrated in a thin surface (near $z = 0$), similar to those proposed by Zel’dovich for the shape of the first collapsed objects in an expanding anisotropic universe. The solutions discussed here can be applied in a variety of physical situations.' author: - | J. Ponce de Leon[^1]\ Laboratory of Theoretical Physics, Department of Physics\ University of Puerto Rico, P.O. Box 23343, San Juan,\ PR 00931, USA title: '$4D$ spacetimes embedded in $5D$ light-like Kasner universes' --- PACS: 04.50.+h; 04.20.Cv [*Keywords:*]{} Cosmological models; Kasner universes; 5D Models; Braneworld theory; Induced matter theory; Exact solutions; General Relativity. Introduction ============ In recent years there has been an increased interest in theories that envision our spacetime as embedded in a universe with more than four large dimensions. There are several reasons that justify this interest, among them that extensions of four-dimensional general relativity to five and more dimensions seem to provide the best route to unification of gravity with the interactions of particle physics [@Davidson; @Owen]-[@particle; @physics]. In $5D$ there are two versions of relativity where the extra dimension is not assumed to be compactified. These are membrane theory [@algebraically] and space-time-matter (or induced matter) theory [@general]. They lead to a great variety of models both in the cosmological context and in the description of local self-gravitating objects (see, e.g., [@review], [@available]). Most of these models have been obtained in coordinates where the metric in $5D$ can be written as[^2] $$\label{General line element in 5D without off-diagonal terms} dS^2 = g_{\mu\nu}(x^{\rho}, \; \psi)dx^{\mu}dx^{\nu} + \epsilon \Phi^2(x^{\rho}, \; \psi)d\psi^2,$$ in such a way that our $4D$ spacetime can be recovered by going onto a hypersurface $\Sigma_{\psi}: \psi = \psi_{0} = $ constant, which is orthogonal to the $5D$ unit vector $$\label{unit vector n} {\hat{n}}^{A} = \frac{\delta^{A}_{4}}{\sqrt{\epsilon g_{44}}}, \;\;\;n_{A}n^{A} = \epsilon,$$ along the extra dimension, and $g_{\mu\nu}$ can be interpreted as the metric of the spacetime. 
In this framework, the effective equations for gravity in $4D$ are obtained from dimensional reduction of the Einstein field equations in $5D$. The reduction is based on Campbell’s theorem [@Campbell], [@Seahra] and consists in isolating the $4D$ part of the relevant $5D$ geometric quantities and using them to construct the $4D$ Einstein tensor ${^{(4)}G}_{\alpha \beta}$. The crucial result is that, even in the case where the energy-momentum tensor (EMT) in $5D$ is zero, to an observer confined to making physical measurements in our ordinary spacetime, and not aware of the extra dimension, the spacetime is not empty but contains (effective) matter whose EMT, ${^{(4)}T}_{\alpha\beta}$, is determined by the Einstein equations in $4D$, namely $$\begin{aligned} \label{4D Einstein with T and K} {^{(4)}G}_{\alpha\beta} = 8 \pi \;{^{(4)}T}_{\alpha\beta} = - \epsilon\left(K_{\alpha\lambda}K^{\lambda}_{\beta} - K_{\lambda}^{\lambda}K_{\alpha\beta}\right) + \frac{\epsilon}{2} g_{\alpha\beta}\left(K_{\lambda\rho}K^{\lambda\rho} - (K^{\lambda}_{\lambda})^2 \right) - \epsilon E_{\alpha\beta}, \end{aligned}$$ where $K_{\mu\nu}$ is the extrinsic curvature $$\label{extrinsic curvature} K_{\alpha\beta} = \frac{1}{2}{\cal{L}}_{\hat{n}}g_{\alpha\beta} = \frac{1}{2\Phi}\frac{\partial{g_{\alpha\beta}}}{\partial \psi};$$ $E_{\mu\nu}$ is the projection of the bulk Weyl tensor ${^{(5)}C}_{ABCD}$ orthogonal to ${\hat{n}}^A$, i.e., “parallel” to spacetime, viz., $$\label{Weyl Tensor} E_{\alpha\beta} = {^{(5)}C}_{\alpha A \beta B}{\hat{n}}^A{\hat{n}}^B = - \frac{1}{\Phi}\frac{\partial K_{\alpha\beta}}{\partial \psi} + K_{\alpha\rho}K^{\rho}_{\beta} - \epsilon \frac{\Phi_{\alpha;\beta}}{\Phi},$$ and $\Phi_{\alpha} \equiv \partial \Phi/\partial x^{\alpha}$. Before going on, it is worthwhile to emphasize that the above dimensional reduction of the field equations in $5D$ is a standard technique that leads to the same effective matter content in $4D$, i.e., ${^{(4)}T}_{\alpha\beta}$, regardless of whether the line element (\[General line element in 5D without off-diagonal terms\]) is interpreted within the context of brane theory with ${\bf Z}_2$ symmetry [@Shiromizu] or space-time-matter (STM) theory [@Wesson; @and; @JPdeL]. In this sense these two approaches to $5D$ relativity are mathematically equivalent. However, they differ as regards physical interpretation and motivation [@physical; @motivation]. In brane theory there is a singular hypersurface that defines spacetime, and the properties of matter on that hypersurface are, in general, [*not identical*]{} to those of the induced matter calculated in STM from the effective EMT defined by (\[4D Einstein with T and K\]). In the cosmological realm, nearly all models assume spatial homogeneity and isotropy, which means that the line element in $5D$ is taken to be an extended version of the conventional Friedmann-Robertson-Walker (FRW) metric in $4D$, namely $$\label{Usual cosmological metric in 5D} dS^2 = n^2(t, \psi)dt^2 - a^2(t, \psi)\gamma_{i j} dx^{i}dx^{j} + \epsilon \Phi^2(t, \psi) d\psi^2,\;\;\;i, j = 1, 2, 3,$$ where $\gamma_{i j}$ is a maximally symmetric $3$-dimensional metric, with curvature index $k = 0, \pm 1$. In these coordinates the full integration of the vacuum Einstein field equations in $5D$ requires the specification of [*two*]{} additional assumptions. One of them is usually an assumption of geometric nature, e.g., that $\Phi = 1$, or $n = 1$.
The second one is usually an equation of state for the matter quantities in $4D$ [@Binetruy], [@JPdeL-isotropicCosm]. Observations indicate that on large scales ($\gg 100$ Mpc) the universe is homogeneous and isotropic and well described by spatially-flat FRW cosmologies. However, there is no reason to expect that such features should hold at the early stages of the evolution of the universe. Rather, it is generally accepted that anisotropy could have played a significant role in the early universe and that it has been fading away in the course of cosmic evolution. In the framework of 4-dimensional spacetime, a prototype for anisotropic vacuum cosmologies is provided by the Kasner metric [@Kasner], which mimics the behavior of more general solutions near the singularity during some finite periods of time[^3]. Various higher dimensional extensions of the vacuum Kasner model have been discussed in the literature [@Kokarev], [@Paul]. In this paper we consider spatially homogeneous but anisotropic cosmological models whose metric in $5D$ has the form $$\label{light-like 5D metrics} dS^2 = {\cal{A}}(u, v)du dv - {\cal{B}}_{i j}(u, v)dx^{i}dx^{j},$$ where ${\cal{A}}$ and ${\cal{B}}$ are some functions of the “light-like" coordinates $u$ and $v$. These metrics are different from (\[Usual cosmological metric in 5D\]) in various aspects: (i) they do not contain the time or extra dimension in an explicit way; (ii) the hypersurfaces of constant $u$ or $v$ are three-dimensional instead of $4D$; (iii) [*[a priori]{}*]{} it is not clear how to define the $5D$ unit vector ${\hat{n}}^{A}$ along the extra dimension, which in turn is needed for defining the spacetime sections and for constructing the appropriate projected quantities in $4D$. Here, for the case where ${\cal{B}}_{i j}$ is diagonal we construct three families of analytic solutions to the $5D$ vacuum field equations $R_{AB} = 0$ $ (A, B = 0, 1, 2, 3, 4)$. The simplest one is a family of self-similar solutions[^4], which contains as a particular case the so-called light-like Kasner universes. From a physical point of view, self-similar homothetic models are interesting because they may serve as asymptotic regimes, i.e., near the initial cosmological singularity and at late times, for many homogeneous and inhomogeneous cosmological models [@Coley]. The other two families of solutions are obtained under the assumption that some of the metric coefficients are separable functions of their arguments. In view of their potential relevance to the “similarity hypothesis" [@Coley], in this work we focus our attention on the family of self-similar $5D$ spacetimes mentioned above. The main question under study is what kind of $4D$ scenarios can be embedded in such spacetimes. For the sake of generality of the discussion, and applicability of our results, in our analysis we consider both versions of non-compactified $5D$ relativity, viz., induced-matter and brane theory. Unfortunately, the expressions obtained in $4D$ as projections of the $5D$ solutions are quite complex and cumbersome. Therefore, to obtain manageable mathematical expressions in $4D$, in sections $3$, $4$ and $5$ we simplify the algebra (but not the physics) by restricting our discussion to the subset of light-like Kasner solutions. Our solutions generalize a number of isotropic cosmological models and give back previous ones in the literature (see e.g., [@JPdelW]-[@JPdeL-JCAP] and references therein). 
Although we are not discussing particular applications here, they could be useful in the study of generalizations of Mixmaster or Belinskii-Khalatnikov-Lifshitz oscillations in theories with a single extra dimension [@mixmaster], [@BKL], [@Halpern]-[@Henneaux]. They could also be applied to studying conjectures about isotropic Big Bang singularities in braneworlds [@Maartens]-[@Coley2]. Certain cosmological models, such as the cyclic universe model, also require an understanding of the behavior of Kasner-like solutions and are based on brane-type models [@Erickson]. The paper is organized as follows. In section $2$ we derive our self-similar solution (the other two families of solutions are presented in the Appendix) and introduce a timelike coordinate $t$ and a spacelike coordinate $\psi$ along the extra dimension. This is equivalent to introducing two additional degrees of freedom, which are expressed in terms of two functions of $t$ and $\psi$. We will see that, as in the familiar FRW picture (\[Usual cosmological metric in 5D\]), these two functions can be related to the specific choice for embedding $\Sigma_{\psi}$ in $5D$ and to the physics in $4D$. In section $3$, within the context of STM we show that the light-like Kasner solutions generate a great variety of cosmological models in $4D$, including the de Sitter, Milne and power-law FRW models. In section $4$, within the context of the braneworld paradigm we find that they embed $4D$ cosmological models with a non-vanishing cosmological term $\Lambda_{(4)}$, which in principle can be either constant or time-dependent. In the case of constant $\Lambda_{(4)}$ the $3D$ space exponentially inflates, regardless of the specific embedding. We also show that by virtue of the symmetry $(x, y, z) \leftrightarrow \psi$, they generate a class of non-static vacuum solutions on the brane. In section $5$, once again using symmetry properties, we demonstrate that the Kasner-like metric (\[Lightlike Kasner solution\]) can be used to generate static pancake-like distributions of matter in $4D$, similar to those proposed by Zel’dovich for the shape of the first collapsed objects in an expanding anisotropic universe [@Zel'dovich]. In section $6$ we present a summary of our results. Cosmological models in $5D$. Light-like coordinates =================================================== Solving the field equations. Part I ----------------------------------- In this work we obtain three families of solutions to the field equations $R_{A B} = 0$. However, to facilitate the discussion, in this subsection we present only one of them. Specifically, we present the family of solutions that we will use throughout the paper, which (as we will see in sections $3$-$5$) may be interpreted or used as $5D$ embeddings for a number of $4D$ universes. The derivation of the other two families of solutions, whose $4D$ interpretation is not discussed here, is deferred to the Appendix (Solving the field equations. Part II). To simplify the shape of the field equations let us momentarily denote ${\cal{B}}_{11} = e^{\lambda(u, v)}$, ${\cal{B}}_{22} = e^{\mu(u, v)}$ and ${\cal{B}}_{33} = e^{\sigma(u, v)}$. 
From $R_{xx} = 0$, $R_{yy} = 0$ $R_{zz} = 0$ we obtain the equations $$\begin{aligned} \label{Rxx, Ryy, Rzz} 4 \lambda_{u v} + \lambda_{u}\left( \sigma_{v} + 2 \lambda_{v} + \mu_{v}\right) + \lambda_{v}\left(\mu_{u} + \sigma_{u}\right) &=& 0, \nonumber \\ 4 \mu_{u v} + \mu_{u}\left( \lambda_{v} + 2 \mu_{v} + \sigma_{v}\right) + \mu_{v}\left(\sigma_{u} + \lambda_{u}\right) &=& 0, \nonumber \\ 4 \sigma_{u v} + \sigma_{u}\left( \mu_{v} + 2 \sigma_{v} + \lambda_{v}\right) + \sigma_{v}\left(\lambda_{u} + \mu_{u}\right) &=& 0.\end{aligned}$$ Here the subscripts $u$, $v$ indicate partial derivatives with respect to those arguments. The above equations show cyclic permutation symmetry, i.e., starting from any of them by means of the transformation $\lambda \rightarrow \mu \rightarrow \sigma \rightarrow \lambda$ we obtain the other two. $\bullet$ Self-similar solutions: First we solve the field equations under the assumption that the metric (\[light-like 5D metrics\]) possesses self-similar symmetry. This assumption is motivated by a number of studies suggesting that self-similar models play a significant role at asymptotic regimes [@Coley]. From a mathematical point of view, it means that by a suitable transformation of coordinates all the dimensionless quantities can be put in a form where they are functions only of a single variable (say $\zeta$) [@Sedov]. In our particular case, this implies that $\lambda = \lambda(\zeta)$, $\mu = \mu(\zeta)$, $\sigma = \sigma(\zeta)$, where $\zeta$ is some function of $u$ and $v$, viz., $$\label{z} \zeta = \zeta(u, v).$$ With this assumption the first equation in (\[Rxx, Ryy, Rzz\]) reduces to $$\label{equation for lambda in the self-similar solution} 2 \; \frac{\lambda_{\zeta \zeta}}{\lambda_{\zeta}} + \left(\lambda_{\zeta} + \mu_{\zeta} + \sigma_{\zeta}\right) + 2\; \frac{{\zeta}_{u v}}{{\zeta}_{u}{\zeta}_{v}} = 0.$$ The assumed symmetry requires $\left({{\zeta}_{u v}}/{{\zeta}_{u}{\zeta}_{v}}\right)$ to be some function of $\zeta$, say $ Z(\zeta) = \left({{\zeta}_{u v}}/{{\zeta}_{u}{\zeta}_{v}}\right)$. Integrating we get $$\label{lambdaz} \lambda_{\zeta} = 2 \alpha \; e^{- \left(\lambda + \mu + \sigma\right)/2} e^{- \int{Z(\zeta) d\zeta}} \equiv 2 \alpha \left(\frac{f_{\zeta}}{f}\right),$$ where $\alpha$ is an arbitrary constant of integration. Similar equations, with new constants, e.g., $\beta$ and $\gamma$, are obtained for $\mu$ and $\sigma$ by means of a cyclic transformation. What this means is that $$\label{lambdaz, muz, sigma z} \frac{\lambda_{\zeta}}{2 \alpha} = \frac{\mu_{\zeta}}{2 \beta} = \frac{\sigma_{\zeta}}{2 \gamma} = \frac{f_{\zeta}}{f},$$ which upon integration yields $$\label{elambda, emu, esigma} e^{\lambda} = C_{1}f^{2 \alpha}, \;\;\;e^{\mu} = C_{2} f^{2 \beta}, \;\;\;e^{\sigma} = C_{3} f^{2 \gamma},$$ where $C_{1}$, $C_{2}$, $C_{3}$ are constants of integration. A single differential equation for $f(\zeta) = f(u, v)$ can be easily obtained by substituting (\[elambda, emu, esigma\]) into any (\[Rxx, Ryy, Rzz\]), namely $$\label{equation for f} f f_{u v} + \left(a - 1\right) f_{u} f_{v} = 0.$$ [*Notation*]{}: Here and henceforth we denote $$\label{notation} a \equiv \alpha + \beta + \gamma, \;\;\;b \equiv \alpha^2 + \beta^2 + \gamma^2 - \alpha - \beta - \gamma, \;\;\;c \equiv \alpha \beta + \alpha\gamma + \beta \gamma,$$ where $\alpha, \beta, \gamma$ are arbitrary parameters. A simple integration gives $$\label{f(u, v)} f = \left[h(u) + g(v)\right]^{1/a},$$ where $h(u)$ and $g(v)$ are arbitrary functions of their arguments. 
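The solution (\[f(u, v)\]) is straightforward to verify. As a purely illustrative aid (not part of the original derivation), the following Python/SymPy sketch substitutes $f = \left[h(u) + g(v)\right]^{1/a}$ into (\[equation for f\]) and confirms that the residual vanishes for arbitrary differentiable $h$, $g$ and nonzero $a$.

```python
# Illustrative SymPy check: f = [h(u) + g(v)]^(1/a) solves
#     f f_{uv} + (a - 1) f_u f_v = 0
# for arbitrary differentiable h(u), g(v) and nonzero a.
import sympy as sp

u, v = sp.symbols('u v')
a = sp.Symbol('a', nonzero=True)
h = sp.Function('h')(u)
g = sp.Function('g')(v)

f = (h + g)**(1/a)
residual = f*sp.diff(f, u, v) + (a - 1)*sp.diff(f, u)*sp.diff(f, v)
print(sp.simplify(sp.powsimp(residual, force=True)))  # prints 0
```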
Clearly, in the present case ${\cal{B}}_{i j}$ are power-law type solutions of the similarity variable $\zeta = \left[h(u) + g(v)\right]$. To simplify the discussion, and eliminate spurious degrees of freedom, we now make the coordinate transformation $h(u) = c_{1} \bar{u}$, $g(v) = c_{2} \bar{v}$, where $c_{1}$ and $c_{2}$ are constants. In these new coordinates $${\cal{A}}(u, v)du dv \rightarrow \bar{{\cal{A}}}(\bar{u}, \bar{v})d\bar{u} d\bar{v},$$ where $\bar{{\cal{A}}}(\bar{u}, \bar{v}) = \left[{\cal{A}}(u, v)/h_{u}g_{v}\right]$ with $u$ and $v$ expressed in terms of $\bar{u}$, $\bar{v}$. Then relabeling the coordinates (dropping the overbars) the metric becomes $$dS^2 = {\cal{A}}(u, v) du d v - C_{1}\left[c_{1} u + c_{2}v\right]^{2\alpha/a}dx^2 - C_{2}\left[c_{1} u + c_{2}v\right]^{2\beta/a}dy^2 - C_{3}\left[c_{1} u + c_{2}v\right]^{2\gamma/a}dz^2.$$ For this metric the field equations $R_{u u} = 0$ and $R_{vv} = 0$ yield the equations $$\begin{aligned} \label{Ruu, Rvv} a^2 \left[c_{1} u + c_{2}v\right]{\cal{A}}_{u} &+& 2 c \; c_{1} {\cal{A}} = 0, \nonumber \\ a^2 \left[c_{1} u + c_{2}v\right]{\cal{A}}_{v} &+& 2 c \; c_{2} {\cal{A}} = 0,\end{aligned}$$ which have a unique solution given by ${\cal{A}} = C_{0} \left(c_{1} u + c_{2} v\right)^{- 2c/a^2}$, where $C_{0}$ is a constant of integration. Now, it is easy to verify that $R_{u v} = 0$ is identically satisfied. In summary, the final form of the self-similar solution is given by[^5] $$\label{new solution} {\cal{A}} = \left(c_{1} u + c_{2} v\right)^{- 2c/a^2}, \;\;\;{\cal{B}}_{11} = {\cal{A}}^{- \alpha a/c}, \;\;\;{\cal{B}}_{22} = {\cal{A}}^{- \beta a/c}, \;\;\;{\cal{B}}_{33} = {\cal{A}}^{- \gamma a/c}, \;\;\;{\cal{B}}_{i j} = 0, \;\;\;i \neq j.$$ We note that this solution admits a homothetic Killing vector in $5D$ for any values of $\alpha$, $\beta$ and $\gamma$, namely, $$\label{Lie derivative of the self-similar metric} {\cal{L}}_{{\zeta}}{g_{A B}} = 2 g_{A B}, \;\;\;\mbox{with}\;\;\; {\zeta}^C = \left[ {\eta}_{0} u,\; {\eta}_{0} v,\; (1 - \alpha {\eta}_{0}/a) x,\; (1 - \beta {\eta}_{0}/a) y,\; (1 - \gamma {\eta}_{0}/a) z \right],$$ where $g_{A B}$ is the metric (\[new solution\]), ${\cal{L}}_{\zeta}$ denotes the Lie derivative along the $5D$ vector ${\zeta}^C$ and ${\eta}_{0} \equiv a^2/(a^2 - c)$. In addition, by setting one of the constants equal to zero, say $c_{2} = 0$, and making the coordinate transformation $u^{- 2c/a^2}du \rightarrow d\bar{u} $, it reduces to $$\label{Lightlike Kasner solution} dS^2 = d\bar{u} dv -A\bar{u}^{p_{1}}dx^2 - B \bar{u}^{p_{2}}dy^2 - C \bar{u}^{p_{3}}dz^2,$$ where $A, B, C$ are constants with the appropriate units, and $p_{1}, p_{2}, p_{3}$ denote $$\label{introduction of p1, p2 and p3} p_{1} = \frac{2 \alpha (\alpha + \beta + \gamma)}{\alpha^2 + \beta^2 + \gamma^2}, \;\;\;p_{2} = \frac{2 \beta (\alpha + \beta + \gamma)}{\alpha^2 + \beta^2 + \gamma^2}, \;\;\;p_{3} = \frac{2 \gamma (\alpha + \beta + \gamma)}{\alpha^2 + \beta^2 + \gamma^2},$$ which satisfy the relation $\sum_{i = 1}^{3}\left(p_{i} - 1\right)^2 = 3$ for [*any*]{} values of $\alpha, \beta $ and $\gamma$. The metric (\[Lightlike Kasner solution\]) is usually called light-like Kasner solution. In this case the $5D$ homothetic vector is given by ${\zeta}^C = \left[\bar{u},\; v,\; (1 - p_{1}/2) x,\; (1 - p_{2}/2) y,\; (1 - p_{3}/2) z\right]$. 
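As a quick consistency check on (\[introduction of p1, p2 and p3\]), the relation $\sum_{i = 1}^{3}\left(p_{i} - 1\right)^2 = 3$ can be confirmed symbolically. The short SymPy sketch below is illustrative only; $\alpha$, $\beta$ and $\gamma$ are treated as generic real parameters.

```python
# Illustrative check: the light-like Kasner exponents satisfy
#     (p1 - 1)^2 + (p2 - 1)^2 + (p3 - 1)^2 = 3   for any alpha, beta, gamma.
import sympy as sp

al, be, ga = sp.symbols('alpha beta gamma', real=True)
S = al**2 + be**2 + ga**2          # alpha^2 + beta^2 + gamma^2
A = al + be + ga                   # a = alpha + beta + gamma

p1, p2, p3 = 2*al*A/S, 2*be*A/S, 2*ga*A/S
print(sp.simplify((p1 - 1)**2 + (p2 - 1)**2 + (p3 - 1)**2))  # prints 3
```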
Introducing the timelike and “extra" coordinates ------------------------------------------------ In order to be able to apply the standard dimensional reduction (\[4D Einstein with T and K\]) to metrics (\[light-like 5D metrics\]) one has to introduce coordinates that are adapted to the spacetime sections $\Sigma_{\psi}$. With this aim we make the coordinate transformation $$\label{definition of F and V} u = F(t, \psi), \;\;\;v = V(t, \psi),$$ where $t$ is assumed to be the timelike coordinate; $\psi$ the “extra" coordinate; $F$ and $V$ are, in principle, arbitrary differentiable functions of their arguments, except from the condition that the Jacobian of the transformation must be nonzero. With this transformation we obtain $$\label{dudv} du dv = \left(\dot{F}\dot{V} dt^2 + F' V' d\psi^2\right) + \left(\dot{F} V' + F' \dot{V}\right)dt d\psi,$$ where dots and primes denote derivatives with respect to $t$ and $\psi$, respectively. We can choose the coordinates $t, \psi$ in such a way that the $5D$ metric be diagonal. This requires $$\label{diagonal 5D metric} V' = - \frac{F' \dot{V}}{\dot{F}}.$$ As a consequence, the line element (\[light-like 5D metrics\]) becomes $$\label{light-like metric in t-psi coordinates} dS^2 = \bar{{\cal{A}}}(t, \psi)\dot{F}\dot{V}dt^2 - {\bar{{\cal{B}}}}_{i j}(t, \psi)dx^{i} dx^{j} - \bar{{\cal{A}}}(t, \psi)\frac{F'^2 \dot{V}}{\dot{F}} d\psi^2,$$ where $\bar{{\cal{A}}}(t, \psi) \equiv {\cal{A}}(F, V)$ and ${\bar{{\cal{B}}}}_{i j}(t, \psi) \equiv {{\cal{B}}}_{i j}(F, V)$. A couple of points should be noticed here. Firstly, that the physical requirement $g_{00} > 0$ demands $\psi$ to be spacelike. Secondly, that the line element (\[light-like metric in t-psi coordinates\]) contains two arbitrary functions, which are not present in the original solution (\[new solution\]). The question is, why? Is this a mathematical, or gauge, artifact? The answer to this question is that the arbitrary functions in (\[light-like metric in t-psi coordinates\]) are [*not*]{} gauge artifacts. They reflect the physical reality that there are many ways of embedding a $4D$ spacetime in $5D$ while satisfying the field equations. If we choose some particular embedding we obtain a differential constraint connecting $V$ and $F$, which allows us to obtain one of them in terms of the other, e.g., $V$ in terms of $F$. Then, the remaining unknown function, e.g. $F$, can be determined from the physics in $4D$. As an illustration of the former assertion, let us consider two common embeddings that arise from the choice of the coordinate/reference system. #### Gaussian normal coordinate system: A popular choice in the literature is to use the five degrees of coordinate freedom to set $g_{4 \mu} = 0$ and $g_{44} = - 1$. This is the so-called ‘Gaussian normal coordinate system’ based on $\Sigma_{\psi}$. Consequently, in such coordinates $\dot{V} = ({\dot{F}}/{\bar{{\cal{A}}} F'^2})$ and (\[diagonal 5D metric\]) becomes $V' = - (1/{\bar{{\cal{A}}} F'})$. 
Now the condition $(\partial \dot{V}/\partial \psi) = (\partial {V'}/\partial t)$ yields $$\label{obtaining V from F in Gaussian coord.} a^2\left(c_{1}F + c_{2}V\right) F'' - 2c c_{1} F'^2 = 0.$$ If $c_{2} \neq 0$, this equation gives $V(t, \psi)$ for any smooth function $F(t, \psi)$, and the metric (\[light-like metric in t-psi coordinates\]) becomes $$\label{the metric in Gaussian coordinates} dS^2 = \left(\frac{\dot{F}}{F'}\right)^2 dt^2 - {\bar{{\cal{B}}}}_{i j}(t, \psi)dx^{i} dx^{j} - d\psi^2.$$ If $c_{2} = 0$, then (\[obtaining V from F in Gaussian coord.\]) is an equation for $F$. Integrating it we find $$\label{F in Gaussian coordinates, c2 = 0} F = \left[l(t) + \psi h(t)\right]^{a^2/(a + b)},$$ where $l(t)$ and $h(t)$ are arbitrary differentiable functions. #### Synchronous reference system: The choice $g_{00} = 1$ is usual in cosmology: it corresponds to the so-called synchronous reference system where the coordinate $t$ is the proper time at each point. Thus, setting $\dot{V} = (1/{\bar{{\cal{A}}}}\dot{F})$, the line element (\[light-like metric in t-psi coordinates\]) becomes $$\label{cosmological homogeneous solution in synchronous coordinates } dS^2 = dt^2 - {\bar{{\cal{B}}}}_{i j}(t, \psi)dx^{i} dx^{j} - \left(\frac{F'}{\dot{F}}\right)^2d\psi^2.$$ In these coordinates (\[diagonal 5D metric\]) reduces to $V' = - (F'/{\bar{{\cal{A}}}}{\dot{F}}^2)$ and $(\partial \dot{V}/\partial \psi) = (\partial {V'}/\partial t)$ yields $$\label{obtaining V from F in Synchronous coord.} a^2\left(c_{1}F + c_{2}V\right) \ddot{F} - 2c c_{1} \dot{F}^2 = 0.$$ Thus, for $c_{2} = 0$ we get $$\label{F in the synchronous coordinates} F = \left[M(\psi) + t N(\psi)\right]^{a^2/(a + b)},$$ where $M$ and $N$ are arbitrary differentiable functions of $\psi$. For any other $c_{2} \neq 0$, we obtain $V$ from (\[obtaining V from F in Synchronous coord.\]) after choosing some smooth function $F(t, \psi)$. Thus, in principle the function $V$ can be determined if we know $F$. At this point the question arises of whether we can single out the function $F$ from “physical" considerations in $4D$. Further analysis of the field equations shows that if we assume an equation of state for the matter in $4D$, then we obtain an extra differential equation connecting $V$ and $F$, which in addition to (\[obtaining V from F in Gaussian coord.\]) or (\[obtaining V from F in Synchronous coord.\]), allows us to express the solution (\[light-like metric in t-psi coordinates\]) in terms of $t$ and $\psi$. This is what is required for the $4 + 1$ dimensional reduction of the $5D$ solutions. The general calculations are straightforward, but the equations are notationally cumbersome in both STM and braneworld theory. On the other hand, (\[F in Gaussian coordinates, c2 = 0\]) and (\[F in the synchronous coordinates\]) indicate that a great algebraic simplification is attained if $c_{2} = 0$. In fact, in this case we can re-scale the function $F$, as $F \rightarrow \bar{F}^{a^2/(a + b)}$, after which the solution (\[light-like metric in t-psi coordinates\]) with $c_{2} = 0$ reduces to $$\label{Kasner-like metric in terms of F-bar} dS^2 = \dot{\bar{F}}\dot{V} dt^2 - A {\bar{F}}^{p_{1}}dx^2 - B {\bar{F}}^{p_{2}}dy^2 - C {\bar{F}}^{p_{3}}dz^2 - \frac{{\bar{F}}'^2 \dot{V}}{\dot{\bar{F}}} d\psi^2,$$ where $p_{1}, p_{2}, p_{3}$ are the parameters introduced in (\[introduction of p1, p2 and p3\]). In sections $3$, $4$ and $5$ we use this line element, which we call Kasner-like, for illustrating the fact that physics in $4D$ determines $F$. 
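Before turning to the $4D$ interpretation, we note that the embedding profiles quoted above are easy to test symbolically. For instance, the following SymPy sketch (illustrative only) verifies that the Gaussian-coordinate profile (\[F in Gaussian coordinates, c2 = 0\]) satisfies (\[obtaining V from F in Gaussian coord.\]) with $c_{2} = 0$, i.e., $a^2 F F'' - 2 c F'^2 = 0$ (the constant $c_{1}$ cancels out), for arbitrary $l(t)$ and $h(t)$; the synchronous-coordinate profile (\[F in the synchronous coordinates\]) can be checked in exactly the same way.

```python
# Illustrative check of the Gaussian-coordinate embedding with c_2 = 0:
# F = [l(t) + psi*h(t)]^(a^2/(a+b)) should satisfy a^2 F F'' - 2 c F'^2 = 0,
# where primes denote psi-derivatives and a, b, c are the combinations of
# alpha, beta, gamma defined in the text.
import sympy as sp

t, psi = sp.symbols('t psi')
al, be, ga = sp.symbols('alpha beta gamma', positive=True)
a = al + be + ga
b = al**2 + be**2 + ga**2 - a
c = al*be + al*ga + be*ga

l = sp.Function('l')(t)
h = sp.Function('h')(t)
F = (l + psi*h)**(a**2/(a + b))

residual = a**2*F*sp.diff(F, psi, 2) - 2*c*sp.diff(F, psi)**2
print(sp.simplify(sp.powsimp(residual, force=True)))  # prints 0
```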
Cosmological models in $4D$. The STM approach ============================================= The aim of this section is to determine $F$ within the context of induced matter theory. To this end we assume an equation of state for the effective matter quantities. Our results show that the light-like Kasner metrics (\[Lightlike Kasner solution\]) can be used, or interpreted, as $5$-dimensional embeddings for a number of cosmological models in $4D$ that are spatially anisotropic extensions of the FRW ones. For the Kasner-like metric (\[Kasner-like metric in terms of F-bar\]), the components of the effective EMT induced on $\Sigma_{\psi}: \psi = \psi_{0} = $ constant are given by (in what follows we simplify the notation by omitting the bar over $F$ in (\[Kasner-like metric in terms of F-bar\]) and the index $^{(4)}$ in $^{(4)}T_{\mu\nu}$) $$\begin{aligned} \label{EMT} 8 \pi G T_{0}^{0} &=& \frac{a^2 c \dot{F}}{(a + b)^2 \dot{V} F^2}, \nonumber \\ 8\pi G T_{1}^{1} &=& \frac{a \left\{(\gamma + \beta)(a + b) F \left[\frac{\ddot{F}}{\dot{F}} - \frac{\ddot{V}}{\dot{V}}\right] - 2 c(\alpha - \beta - \gamma )\dot{F}\right\}}{2(a + b)^2 \dot{V} F^2}, \nonumber \\ 8\pi G T_{2}^{2} &=& \frac{a \left\{(\gamma + \alpha)(a + b) F \left[\frac{\ddot{F}}{\dot{F}} - \frac{\ddot{V}}{\dot{V}}\right] - 2 c(\beta - \alpha - \gamma )\dot{F}\right\}}{2(a + b)^2 \dot{V} F^2}, \nonumber \\ 8\pi G T_{3}^{3} &=& \frac{a \left\{(\alpha + \beta)(a + b) F \left[\frac{\ddot{F}}{\dot{F}} - \frac{\ddot{V}}{\dot{V}}\right] - 2 c(\gamma - \alpha - \beta )\dot{F}\right\}}{2(a + b)^2 \dot{V} F^2}.\end{aligned}$$ Certainly the specific shape of the EMT depends on the embedding. However, there are a number of relationships, between the components of the EMT, which are “embedding-independent". These are $$\begin{aligned} \label{rel between the stresses} (\gamma - \beta)T_{1}^{1} + (\alpha - \gamma)T_{2}^{2} + (\beta - \alpha)T_{3}^{3} = 0,\end{aligned}$$ and $$\begin{aligned} \label{rel between T0 and the stresses} (\alpha + \gamma)T_{1}^{1} - (\beta + \gamma)T_{2}^{2} &=& (\beta - \alpha)T_{0}^{0},\nonumber \\ (\alpha + \beta)T_{1}^{1} - (\beta + \gamma)T_{3}^{3} &=& (\gamma - \alpha)T_{0}^{0},\nonumber \\ (\alpha + \beta)T_{2}^{2} - (\alpha + \gamma)T_{3}^{3} &=& (\gamma - \beta)T_{0}^{0}.\end{aligned}$$ Let us notice some particular cases: (i) If two of the parameters are equal to each other (axial symmetry), say $\alpha = \beta$, then $T_{1}^{1} = T_{2}^{2}$; (ii) If $T_{1}^{1} = T_{2}^{2}$ but $\alpha \neq \beta$, then $T_{1}^{1} = T_{2}^{2} = T_{3}^{3} = - T_{0}^{0}$; (iii) If $\alpha = - \beta$, then $T_{3}^{3} = - T_{0}^{0}$; (iv) In the case of isotropic expansion $(\alpha = \beta = \gamma)$ then $T_{1}^{1} = T_{2}^{2} = T_{3}^{3}$ (but not necessarily $T_{1}^{1} = T_{2}^{2} = T_{3}^{3} = - T_{0}^{0}$). Perfect Fluid ------------- Let us consider the case where the effective EMT behaves like a perfect fluid. From (\[EMT\]) we find that $T_{1}^{1} = T_{2}^{2} = T_{3}^{3}$ requires $$\label{perfect fluid case} (a + b)\left(\frac{\ddot{F}}{\dot{F}} - \frac{\ddot{V}}{\dot{V}}\right) + 4 c \left(\frac{\dot{F}}{F}\right) = 0,$$ which implies $\dot{V} \propto \dot{F} F^{4 c/(a + b)}$. 
Substituting this into (\[diagonal 5D metric\]) and using $(\partial \dot{V}/\partial \psi) = (\partial {V'}/\partial t)$ we find that $F$ must satisfy the equation[^6] $$\label{equation for F, perfect fluid} (a + b)F \dot{F}' + 4 c \dot{F}F' = 0,$$ from which we get $$\label{F for perfect fluid} F = \left[f(t) + g(\psi)\right]^{(a + b)/(4 c + a + b)},$$ where $f(t)$ and $g(\psi)$ are arbitrary functions of their arguments. The effective energy density $\rho^{(eff)} \equiv T_{0}^{0}$ and pressure $p^{(eff)} \equiv - T_{1}^{1} = - T_{2}^{2} = - T_{3}^{3}$ are given by $$\label{density and pressure, perfect fluid} \rho^{(eff)} = p^{(eff)}, \;\;\;8\pi G \rho^{(eff)} = \frac{c a^2}{(a + b)^2 F^{2a^2/(a + b)}}.$$ Ultra-relativistic matter and radiation --------------------------------------- It is well-known that in the case of radiation as well as for ultra-relativistic matter (i.e., particles with finite rest mass moving close to the speed of light) the trace of the EMT vanishes identically. From (\[EMT\]) we find that $T = T_{0}^{0} + T_{1}^{1} + T_{2}^{2} + T_{3}^{3} = 0$ requires $$\label{ultra-relativistic matter} (a + b)\left(\frac{\ddot{F}}{\dot{F}} - \frac{\ddot{V}}{\dot{V}}\right) + 2 c \left(\frac{\dot{F}}{F}\right) = 0.$$ This equation is the analogue of (\[perfect fluid case\]). Following the same procedure as above we find $F = \left[f(t) + g(\psi)\right]^{(a + b)/(2 c + a + b)}$. Therefore, the solution for radiation-like matter resembles that of perfect fluid in the sense that the effective stresses $p_{x}^{(eff)} \equiv - T_{1}^{1}$, $p_{y}^{(eff)} \equiv - T_{2}^{2}$, $p_{z}^{(eff)} \equiv - T_{3}^{3}$ are proportional to the energy density, viz., $$\label{effective stresses, radiation-like solution} p_{x}^{(eff)} = n_{x}\rho^{(eff)}, \;\;\;p_{y}^{(eff)} = n_{y}\rho^{(eff)}, \;\;\;p_{z}^{(eff)} = n_{z}\rho^{(eff)},$$ where $n_{x}, n_{y}$ and $n_{z}$ are constants satisfying $n_{x} + n_{y} + n_{z} = 1$. If we average over the three spatial directions, this is equivalent to saying that the equation of state is ${\bar{p}}^{(eff)} = \rho^{(eff)}/3$, where ${\bar{p}}^{(eff)} \equiv - T_{i}^{i}/3$. Barotropic linear equation of state ------------------------------------ For the sake of generality, and in order to keep contact with isotropic FRW cosmologies, let us study the scenario where the effective matter is barotropic, that is, the ratio $\bar{p}^{(eff)}/\rho^{(eff)}$ is constant. Thus we set $$\label{barotropic equation of state} \bar{p}^{(eff)} = n \rho^{(eff)},\;\;\;n = \mbox{constant},$$ which for $n = 1$ and $n = 1/3$ recovers the above-discussed perfect fluid and radiation-like scenarios, respectively. Substituting into (\[EMT\]) we obtain an equation similar to (\[perfect fluid case\]) and (\[ultra-relativistic matter\]), but with the coefficient $(1 + 3n)$ in front of the term $c\dot{F}/F$. Consequently, $\dot{V} \propto \dot{F} F^{[c(1 + 3n)/(a + b)]}$. 
The condition $(\partial \dot{V}/\partial \psi) = (\partial {V'}/\partial t)$ then requires $$\label{F, barotropic case} F = \left[f(t) + g(\psi)\right]^{\frac{(a + b)}{a + b + (3n + 1)c}}.$$ Substituting this expression into (\[Kasner-like metric in terms of F-bar\]) and making the coordinate transformation $(df/dt)dt \rightarrow d\tilde{t}$, $(d g/d\psi)d\psi \rightarrow d\tilde{\psi}$, the line element in $5D$ can be written as $$\label{5D line element for the barotropic case} dS^2 = \frac{D d\tilde{t}^2}{\tilde{H}^{(3n + 1)c}} - A \tilde{H}^{2 a \alpha } dx^2 - B \tilde{H}^{2 a \beta } dy^2 - C \tilde{H}^{2 a \gamma } dz^2 - \frac{D d\tilde{\psi}^2}{\tilde{H}^{(3n + 1)c}},$$ where $\tilde{H} \equiv \left(\tilde{t} + E \tilde{\psi}\right)^{\frac{1}{a + b + (3n + 1)c}}$; $E$ is an arbitrary constant for $n = 1/3$, but $E = \pm 1$ for any other $n \neq 1/3$; $D$ is a positive constant introduced for dimensional considerations. ### Kasner universe in $5D$ We immediately note that the case where $E = 0$, which requires $n = 1/3$, gives back the well-known Kasner universe in $5D$. In fact, setting $\tilde{t} \propto \tau^{(a + b + 2c)/(a + b + c)}$ the line element (\[5D line element for the barotropic case\]) reduces to $$\label{Kasner solution in 5D} dS^2 = d\tau^2 - A \tau^{2q_{1}} dx^2 - B \tau^{2 q_{2}}dy^2 - C \tau^{2 q_{3}}dz^2 \pm D \tau^{2 q_{4}}d\psi^2,$$ where $$\label{defifition of the q's} q_{1} = \frac{\alpha a}{(a + b + c)}, \;\;\;q_{2} = \frac{\beta a}{(a + b + c)}, \;\;\;q_{3} = \frac{\gamma a}{(a + b + c)}, \;\;\; q_{4} = - \frac{c}{a + b + c},$$ satisfy $\Sigma_{i = 1}^4{q_{i}} = \Sigma_{i = 1}^4{q^2_{i}} = 1$, typical of the Kasner universe in $5D$. In order to avoid misunderstandings, it may be useful to reiterate our terminology: (i) the light-like Kasner metric is (\[Lightlike Kasner solution\]), which depends on the light-like variable $u$; (ii) by Kasner-like metric we refer to (\[Kasner-like metric in terms of F-bar\]), which depends on one arbitrary function of $t$ and $\psi$, and (iii) Kasner metric is the usual name given to (\[Kasner solution in 5D\]), which depends only on $\tau$. In general, for the $4$-dimensional interpretation of (\[5D line element for the barotropic case\]) we should notice that on every hypersurface $\tilde{\psi} = \tilde{\psi}_{0} = $ constant (${\psi} = {\psi}_{0} = $ constant) the proper time $\tau$ is given by $$\label{proper time} d\tau = \pm \; \frac{\sqrt{D} d\tilde{t}}{(\tilde{t} + E \tilde{\psi}_{0})^m}, \;\;\;\;m \equiv \frac{(3n + 1)c}{2 \left[a + b + (3n + 1)c\right]}.$$ Below we consider several cases. ### Anisotropic Milne universe If $m = 0$, then[^7] $$\label{Milne universe} n = - \frac{1}{3}.$$ In terms of the proper time $\tau = \sqrt{D}(\tilde{t} + E \tilde{\psi}_{0}) $, the metric induced on $4$-dimensional hypersurfaces $\Sigma_{\psi}$ can be written as $$\label{anisotropic milne universe } ds^2 = dS^2_{\Sigma_{|_{\psi}}} = d\tau^2 - \bar{A} \tau^{p_{1}}dx^2 - \bar{B} \tau^{p_{2}}dy^2 - \bar{C} \tau^{p_{3}}dz^2,$$ where $p_{i}$ are the parameters defined in (\[introduction of p1, p2 and p3\]) and $\bar{A}, \bar{B}$ and $\bar{C}$ are some new constants. 
In addition, $n_{i}$, the ratios of the anisotropic stresses to the energy density (\[effective stresses, radiation-like solution\]) are given by $$\label{the n's for Milne universe} n_{x} = \frac{(\alpha - \beta - \gamma)}{a}, \;\;\; n_{y} = \frac{(\beta - \alpha - \gamma)}{a}, \;\;\;n_{z} = \frac{(\gamma - \alpha - \beta)}{a}, \;\;\;8\pi G \rho^{(eff)} = \frac{a^2 c}{(a + b)^2 \tau^2}.$$ For $\alpha = \beta = \gamma $ we find $n_{x} = n_{y} = n_{z} = - 1/3$ and consequently we recover Milne’s universe, as expected. ### Anisotropic de Sitter universe If $m = 1$, then $$\label{m = 1} n = - \frac{1}{3} - \frac{2(\alpha^2 + \beta^2 + \gamma^2)}{3(\alpha \beta + \alpha \gamma + \beta \gamma)}.$$ From (\[proper time\]) we get $(\tilde{t} + E\tilde{\psi}_{0} ) \propto e^{\pm \tau/\sqrt{D}}$. Taking the negative sign, the induced metric in $4D$ can be expressed as $$\label{barotropic case, 4D metric} ds^2 = dS^2_{\Sigma_{|_{\psi}}} = d\tau^2 - \bar{A}e^{p_{1}\tau/\sqrt{D}}dx^2 - \bar{B}e^{p_{2}\tau/\sqrt{D}}dy^2 - \bar{C}e^{p_{3}\tau/\sqrt{D}}dz^2.$$ For this metric we find $$\label{matter quantities for the de Sitter-like solution} n_{x} = - \frac{(\beta^2 + \gamma^2 + \beta \gamma)}{c}, \;\;\;n_{y} = - \frac{(\alpha^2 + \gamma^2 + \alpha \gamma)}{c}, \;\;\;n_{z} = - \frac{(\alpha^2 + \beta^2 + \alpha\beta)}{c}, \;\;\;8 \pi G \rho^{(eff)} = \frac{a^2 c}{(a + b)^2 D}.$$ In the case of isotropic expansion $(\alpha = \beta = \gamma)$ these equations yield $n_{x} = n_{y} = n_{z} = n = - 1$ and (\[barotropic case, 4D metric\]) reduces to the familiar de Sitter metric with cosmological constant $\Lambda_{(4)} = 3/D$. An interesting conclusion here is that an anisotropic universe can enter a phase of exponential expansion (inflation), without satisfying the classical “false-vacuum" equation $p = - \rho$ (see (\[m = 1\])). ### Anisotropic power-law FRW universe For $m \neq 1$, from (\[proper time\]) we obtain $$\label{proper time, general case} (\tilde{t} + E \tilde{\psi}_{0}) = \left[\frac{(1 - m)}{\sqrt{D}}(\tau - \tau_{0})\right]^{1/(1 - m)},$$ where $\tau_{0}$ is a constant of integration. Thus, the induced metric in $4D$ becomes $$\label{non-empty Kasner-like universe} ds^2 = dS^2_{\Sigma_{|_{\psi}}} = d\tau^2 - \bar{A}\tau^{\alpha \kappa}dx^2 - \bar{B}\tau^{\beta \kappa}dy^2 - \bar{C}\tau^{\gamma \kappa}dz^2,$$ where we have set $\tau_{0} = 0$; $\bar{A}, \bar{B}, \bar{C}$ are constants with the appropriate units, and $\kappa$ is given by $$\label{definition of kappa} \kappa \equiv \frac{4 a}{2(a + b) + (3n + 1)c} = \frac{4(\alpha + \beta + \gamma)}{2(\alpha^2 + \beta^2 + \gamma^2) +(3n + 1)(\alpha \beta + \alpha \gamma + \beta \gamma)}.$$ We note that the denominator of $\kappa$ is non-zero because by assumption here $m \neq 1$, see (\[m = 1\]). The effective matter quantities are $$\label{matter quantities for the barotropic case} \bar{p}^{(eff)} = n \rho^{(eff)}, \;\;\;8 \pi G \rho^{(eff)} = \frac{\kappa^2 c}{4 \tau^2}.$$ We also find $$\label{the n's for the barotropic case} n_{x} = \frac{2\alpha +(3n - 1)(\beta + \gamma)}{2 a}, \;\;\;n_{y} = \frac{2\beta +(3n - 1)(\alpha + \gamma)}{2 a}, \;\;\;n_{z} = \frac{2\gamma +(3n - 1)(\alpha + \beta)}{2 a}.$$ We note that for $c = 0$, the space is empty $(\rho^{(eff)} = 0)$, and the line element (\[non-empty Kasner-like universe\]) yields the well-known Kasner solution in $4D$. Besides, for $n = 1$ and $n = 1/3$ the above expressions reduce to those obtained for perfect fluid and radiation-like matter discussed in sections $3.1$ and $3.2$, respectively. 
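The profiles and exponents quoted in this subsection can also be tested symbolically. The sketch below (illustrative only; it simply substitutes the quoted expressions) checks that (i) the barotropic profile (\[F, barotropic case\]) solves the analogue of (\[equation for F, perfect fluid\]) with coefficient $(3n + 1)$, which contains the perfect-fluid ($n = 1$) and radiation ($n = 1/3$) cases of sections $3.1$ and $3.2$, and that (ii) the exponents of the Kasner metric (\[Kasner solution in 5D\]) satisfy $\Sigma_{i = 1}^4{q_{i}} = \Sigma_{i = 1}^4{q^2_{i}} = 1$ for arbitrary $\alpha$, $\beta$ and $\gamma$.

```python
# Illustrative SymPy checks for subsection 3.3 (alpha, beta, gamma and the
# barotropic index n are treated as generic parameters).
import sympy as sp

t, psi, n = sp.symbols('t psi n')
al, be, ga = sp.symbols('alpha beta gamma', positive=True)
a = al + be + ga
b = al**2 + be**2 + ga**2 - a
c = al*be + al*ga + be*ga

# (i) F = [f(t) + g(psi)]^{(a+b)/[a+b+(3n+1)c]} solves
#     (a + b) F F_{t psi} + (3n + 1) c F_t F_psi = 0 .
f = sp.Function('f')(t)
g = sp.Function('g')(psi)
F = (f + g)**((a + b)/(a + b + (3*n + 1)*c))
res = (a + b)*F*sp.diff(F, t, psi) + (3*n + 1)*c*sp.diff(F, t)*sp.diff(F, psi)
print(sp.simplify(sp.powsimp(res, force=True)))          # prints 0

# (ii) the exponents q_i of the 5D Kasner metric obey the generalized
#      Kasner conditions sum(q_i) = sum(q_i^2) = 1.
d = a + b + c
q = [al*a/d, be*a/d, ga*a/d, -c/d]
print(sp.simplify(sum(q)), sp.simplify(sum(qi**2 for qi in q)))  # prints 1 1
```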
### Isotropic expansion: spatially flat FRW universe The above expressions evidence the fact that for anisotropic expansion the effective EMT behaves like a perfect fluid for $n = 1$. In contrast, isotropic expansion allows a perfect fluid for any value of $n$. In this case the $5D$ metric (\[5D line element for the barotropic case\]) can be written as (we omit the tilde over $t$ and $\psi$) $$\label{barotropic case with isotropic expansion} dS^2 = \frac{ D dt^2}{(t + E \psi)^{(3n + 1)/(3n + 2)}} - C (t + E \psi)^{2/(2 + 3n)}\left[dx^2 + dy^2 + dz^2\right] - \frac{ D d\psi^2}{(t + E \psi)^{(3n + 1)/(3n + 2)}}.$$ For $n \neq - 1$, on every hypersurface $\Sigma_{\psi}$ it reduces to $$\label{metric for the flat FRW model} ds^2 = d\tau^2 - C \tau^{4/3(n + 1)}\left[dx^2 + dy^2 + dz^2\right],$$ which is the familiar flat FRW model with perfect fluid $$\label{matter for FRW flat model} p = n \rho, \;\;\; 8\pi G \rho = \frac{4}{3(n + 1)^2 \tau^2}.$$ For $n = - 1$ we recover the de Sitter spacetime as shown in (\[barotropic case, 4D metric\]). To finish this section we would like to emphasize that although the metrics with $a = 0$ and $c = 0$ correspond to empty space (Ricci-flat in $4D$), they are different in nature. For $a = 0$ the spacetime is Minkowski (Riemann-flat) in $5D$ and $4D$, while for $c = 0$ the components of the Riemann tensor are nonzero in $5D$ and in the $4D$ subspace $\Sigma_{\psi}$. Cosmological models in $4D$. The braneworld approach ==================================================== The preceding discussion shows that, in the framework of STM, the Kasner-like metric (\[Kasner-like metric in terms of F-bar\]) embeds a large family of $4D$ cosmological models that are anisotropic versions of the FRW ones. However, one could argue that the effective matter quantities (\[EMT\]) do not have to satisfy the regular energy conditions [@Bronnikov], or any physically motivated equation of state, because they involve terms of geometric origin[^8]. In this section we will see that the $5D$ metric (\[Kasner-like metric in terms of F-bar\]) can be completely determined if one imposes an equation of state on the matter in the brane. Although the concept is the same as in section $3$, the physics here is [*different*]{}. Namely, in this approach the spacetime is a singular hypersurface and, for the $5D$ Kasner metrics under consideration, there is an effective non-vanishing cosmological term in $4D$ (the brane). As a consequence, the time evolution as well as the interpretation of the solutions in $4D$ is distinct from the one obtained, under similar conditions, in the framework of STM. The braneworld paradigm ----------------------- In order to make the paper self-consistent, and set the notation, we give a brief sketch of the technical details that we need in our discussion. In the simplest RS2 braneworld scenario our universe is identified with a [*fixed*]{} singular hypersurface ${\Sigma_{\psi_{b}}}$ (called [*brane*]{}) embedded in a $5$-dimensional bulk with ${\bf Z}_{2}$ symmetry with respect to the brane. The discontinuity of the extrinsic curvature across $\Sigma_{\psi_{b}}$ is related to the presence of matter on the brane, which is described by an EMT that we denote as $\tau_{\mu\nu}$. Thus, now the Einstein field equations in $5D$ are $G_{AB} = k_{(5)}^2 T_{AB}^{(brane)}$, where $k_{(5)}^2$ is a constant with the appropriate units and $T_{AB}^{(brane)} = \delta_{A}^{\mu}\delta_{B}^{\nu}\tau_{\mu\nu} \delta(\psi)/\Phi$. 
Israel’s boundary conditions [@Israel] relate the jump of $K_{\mu\nu}$ to $\tau_{\mu\nu}$, namely, $$({K_{\mu \nu}}_{|\Sigma^{+}_{\psi_{b}}} - {K_{\mu \nu}}_{|\Sigma^{-}_{\psi_{b}}}) = - \epsilon \frac{k_{(5)}^2}{2}(\tau_{\mu\nu} - \frac{1}{3}\tau g_{\mu\nu}).$$ Now, the assumed ${\bf{Z}}_{2}$ symmetry implies ${K_{\mu \nu}}_{|\Sigma^{+}_{\psi_{b}}} = - {K_{\mu \nu}}_{|\Sigma^{-}_{\psi_{b}}}$. Consequently, $$\label{emt on the brane in terms of K} \tau_{\mu\nu} = - \frac{2\epsilon}{k_{(5)}^2}\left(K_{\mu\nu} - g_{\mu\nu} K\right),$$ where the extrinsic curvature $K_{\mu\nu}$ has to be evaluated on ${\Sigma^{+}_{\psi_{b}}}$. From $G_{\nu 4} = 0$ it follows that $\tau^{\mu}_{\nu;\mu} = 0$. Thus $\tau_{\mu\nu}$ represents the total, vacuum plus matter, conserved energy-momentum tensor on the brane. It is usually separated into two parts [@Csaki], $$\label{decomposition of tau} \tau_{\mu\nu} = \sigma g_{\mu\nu} + {\cal{T}}_{\mu\nu},$$ where $\sigma$ is the tension of the brane, which is interpreted as the vacuum energy density, and ${\cal{T}}_{\mu\nu}$ represents the energy-momentum tensor of [*ordinary*]{} matter in $4D$. From (\[emt on the brane in terms of K\]) and (\[decomposition of tau\]) we get $$\label{K in terms of matter in the brane} K_{\mu\nu} = - \frac{\epsilon k_{(5)}^2}{2} \left({\cal{T}}_{\mu\nu} - \frac{1}{3}g_{\mu\nu}({\cal{T}} + \sigma)\right).$$ Substituting this expression into (\[4D Einstein with T and K\]) we obtain [@Shiromizu] $$\label{EMT in brane theory} ^{(4)}G_{\mu\nu} = {\Lambda}_{(4)}g_{\mu\nu} + 8\pi G {\cal{T}}_{\mu\nu} - \epsilon k_{(5)}^4 \Pi_{\mu\nu} - \epsilon E_{\mu\nu},$$ where $$\label{definition of lambda} \Lambda_{(4)} = - \epsilon \frac{k_{(5)}^4 \sigma^2}{12},$$ $$\label{effective gravitational coupling} 8 \pi G = - \epsilon \frac{k_{(5)}^4 \sigma}{6},$$ and $$\label{quadratic corrections} \Pi_{\mu\nu} = \frac{1}{4} {\cal{T}}_{\mu\alpha}{\cal{T}}^{\alpha}_{\nu} - \frac{1}{12}{\cal{T}} {\cal{T}}_{\mu\nu} - \frac{1}{8}g_{\mu\nu}{\cal{T}}_{\alpha\beta}{\cal{T}}^{\alpha\beta} + \frac{1}{24}g_{\mu\nu}{\cal{T}}^2.$$ All these four-dimensional quantities have to be evaluated on ${\Sigma^{+}_{\psi_{b}}}$. They contain two important features: they give a working definition of the fundamental quantities $\Lambda_{(4)}$ and $G$, and they contain higher-dimensional modifications to general relativity, namely local quadratic energy-momentum corrections via the tensor $\Pi_{\mu\nu}$ and the nonlocal effects from the free gravitational field in the bulk, transmitted by $E_{\mu\nu}$. Matter in the brane. Gaussian coordinates ----------------------------------------- In the braneworld literature the use of Gaussian coordinates is quite common. In these coordinates the function $F$ is given by (\[F in Gaussian coordinates, c2 = 0\]), which under the re-scaling $F \rightarrow \bar{F}^{a^2/(a + b)}$ becomes $\bar{F} = l(t) + \psi h(t)$. If we locate the brane at $\psi = 0$, then the metric of the bulk is given by: 1. For $\psi > 0$ $$\label{bulk metric for psi positive} dS^2_{(+)} = \frac{\left[\dot{l} + \psi \dot{h}\right]^2}{h^2}dt^2 - A \left[l(t) + \psi h(t)\right]^{p_{1}}dx^2 - B \left[l(t) + \psi h(t)\right]^{p_{2}}dy^2 - C \left[l(t) + \psi h(t)\right]^{p_{3}}dz^2 - d\psi^2.$$ 2. 
For $\psi < 0$ $$\label{bulk metric for psi negative} dS^2_{(-)} = \frac{\left[\dot{l} - \psi \dot{h}\right]^2}{h^2}dt^2 - A \left[l(t) - \psi h(t)\right]^{p_{1}}dx^2 - B \left[l(t) - \psi h(t)\right]^{p_{2}}dy^2 - C \left[l(t) - \psi h(t)\right]^{p_{3}}dz^2 - d\psi^2.$$ Using (\[extrinsic curvature\]) we calculate the non-vanishing components of $K_{\mu\nu} = {K_{\mu \nu}}_{|\Sigma^{+}_{\psi_{b}}}$. These are $$\label{extrinsic curvature in Gaussian coordinates} K_{00} = \frac{\dot{l}\dot{h}}{h^2}, \;\;\;K_{11} = - \frac{A \alpha a l^{(p_{1} - 1)}h}{(a + b)},\;\;\;K_{22} = - \frac{B \beta a l^{(p_{2} - 1)}h}{(a + b)}, \;\;\;K_{33} = - \frac{C \gamma a l^{(p_{3} - 1)}h}{(a + b)}.$$ We assume that the matter in the brane satisfies the equation of state $$\label{equation of state for ordinary matter} p = n \rho,$$ where $\rho = {\cal{T}}_{0}^{0}$, $p = (p_{x} + p_{y} + p_{z})/3$ and $p_{x} = - {\cal{T}}_{1}^{1}$, $p_{y} = - {\cal{T}}_{2}^{2}$, $p_{z} = - {\cal{T}}_{3}^{3}$. Using these expressions, from (\[emt on the brane in terms of K\]), with $\epsilon = - 1$, and (\[decomposition of tau\]) we obtain $$\begin{aligned} \label{ordinary matter quantities} k_{(5)}^2 \sigma &=& - \frac{2}{(1 + n)}\left[\frac{\dot{h}}{\dot{l}} + \frac{(2 + 3n) a^2}{3(a + b)}\left(\frac{h}{l}\right)\right], \nonumber \\ k_{(5)}^2 \rho &=& \frac{2}{(1 + n)}\left[\frac{\dot{h}}{\dot{l}} - \frac{a^2}{3 (a + b)}\left(\frac{h}{l}\right)\right], \;\;\;n \neq -1, \;\;\;\dot{l} \neq 0.\end{aligned}$$ We notice that in cosmological applications the metric function $g_{00}$ is subjected to the condition [@Binetruy], [@strength] $$\label{condition on the metric} {g_{00}}_{|_{brane}} = 1.$$ Thus $$\label{condition on h and l} h(t) = s \; \dot{l}(t), \;\;\;s = \pm 1.$$ Therefore, we have two equations for the three unknown $\sigma, \rho$ and $l(t)$. Taking the covariant divergence of (\[decomposition of tau\]), it follows that to conserve both the total brane energy-momentum tensor $\tau_{\mu\nu}$ and the matter energy-momentum tensor ${\cal{T}}_{\mu\nu}$, we must have $\sigma = \sigma_{0} = $ constant. Then, using (\[condition on h and l\]) we integrate the first equation in (\[ordinary matter quantities\]) and obtain the scale factor as[^9] $$\label{exponential expansion} l(t) = \left[C_{1} e^{- s(n + 1)k_{(5)}^2 \sigma_{0} t/2} + C_{2}\right]^{\eta}, \;\;\; \eta \equiv \frac{3(a + b)}{(2 + 3n)a^2 + 3(a + b)},$$ where $C_{1}$ and $C_{2}$ are constants of integration. We note that $\eta$ is positive for arbitrary values of $\alpha$, $\beta$, $\gamma$ and $n > -1$. Therefore, if we choose $s = - 1$ and set $C_{2} = 0$, then the “origin" $l = 0$ is located at $t = - \infty$. Thus, from (\[ordinary matter quantities\]) we find $\rho \propto \sigma = \sigma_{0}$ for all $t$, regardless of the value of $n$ (but $n \neq - 1$). The resulting metric on the brane is de Sitter-like, with different rates of exponential expansion in every direction, similar to the models discussed in (\[barotropic case, 4D metric\]). Non-Gaussian embeddings ----------------------- The question may arise of whether the simplicity of the above scenario is not a consequence of the simplifying assumption of Gaussian coordinates. In order to investigate this question, we consider here the embedding that arises from the choice[^10] $$\label{generalization of V dot} \dot{V} \propto \frac{\dot{F} F^{q c/(a + b)}}{F'^2},$$ where $q$ is some constant. 
With this choice the metric in $5D$ can be written as $$\label{metric for sigma constant without exponential expansion} dS^2 = \frac{ F^{q c/(a + b)} \dot{F}^2}{F'^2} dt^2 - A F^{p_{1}}dx^2 - B F^{p_{2}}dy^2 - C F^{p_{3}}dz^2 - F^{q c/(a + b)} d\psi^2.$$ Now, using $(\partial \dot{V}/\partial \psi) = (\partial {V'}/\partial t)$ we find $$\label{F generalized} F = \left[l(t) + \psi h(t)\right]^{(a + b)/(a + b - q c)}, \;\;\;\mbox{for}\;\;\; q \neq \frac{a + b}{c},$$ and $$\label{F for Generalization II} F = l(t)e^{\psi h(t)},\;\;\; \mbox{for}\;\;\; q = \frac{a + b}{c},$$ where $l(t)$ and $h(t)$ are arbitrary functions of integration. Again, if we locate the brane at $\psi = 0$, then the metric in the ${\bf{Z}}_{2}$-symmetric bulk is obtained by replacing $\psi \rightarrow |\psi|$ in (\[F generalized\]) and (\[F for Generalization II\]). Following the steps used in Section $(4.2)$ we find $$\label{equation for sigma in the generalized model} k_{(5)}^2 \sigma = - \frac{2 s }{n + 1}\left[\frac{\ddot{l}}{\dot{l}} + \frac{(2 + 3 n)a^2 + 3 q c}{3 (a + b - q c)}\;\left(\frac{\dot{l}}{l}\right)\right], \;\;\;\;q \neq \frac{a + b}{c}.$$ It is interesting to note that in the case where $q = (a + b)/c$, the equation for $\sigma$ can [*formally*]{} be obtained from (\[equation for sigma in the generalized model\]) by setting $q = 0$. Consequently, (\[F for Generalization II\]) yields models on the brane that are identical to those in Gaussian coordinates (\[ordinary matter quantities\]), although the metric in the $5D$ bulk is completely different in both cases. The conclusion emanating from (\[equation for sigma in the generalized model\]) is that, within the context of the $5$-dimensional Kasner spacetimes under consideration, Gaussian and non-Gaussian embeddings generate the same physics on the brane. In particular, the assumption of constant $\sigma$, which is equivalent to a constant cosmological term $\Lambda_{(4)}$, obliges the universe to expand in a de Sitter [*anisotropic*]{} form regardless of (the choice of) the embedding. This is quite analogous to the cosmological “no-hair" theorem/conjecture of general relativity. Vacuum solutions on the brane ----------------------------- Since the extra dimension is spacelike, the solutions to the field equations are invariant under the transformation $(x, y, z) \leftrightarrow \psi$. However, the physics in $4D$ crucially depends on how we choose our ordinary $3D$ space. In order to illustrate this, let us permute $\psi \leftrightarrow z$ in the solution given (\[metric for sigma constant without exponential expansion\]) and (\[F generalized\]). Also, to avoid misunderstanding we change $F(t, \psi) \rightarrow H(t, z)$. Using this notation, we find that the metric $$\label{vaccum solutions on the brane} dS^2 = \frac{ H^{q c/(a + b)} \dot{H}^2}{H_{z}^2} dt^2 - A H^{p_{1}}dx^2 - B H^{p_{2}}dy^2 - H^{q c/(a + b)} dz^2 \pm C H^{p_{3}}d\psi^2,$$ where $H_{z} \equiv \partial H/\partial z$ and $H = \left[l(t) + z h(t)\right]^{(a + b)/(a + b - q c)}$, is also a solution of the field equations $R_{AB} = 0$. Although (\[metric for sigma constant without exponential expansion\]) and (\[vaccum solutions on the brane\]) are diffeomorphic in $5D$, their interpretation in $4D$ is quite different. 
Specifically, unlike (\[metric for sigma constant without exponential expansion\]) in (\[vaccum solutions on the brane\]): (i) the extra dimension can be either spacelike or timelike, (ii) the spacetime slices $\Sigma_{\psi}$ are non-flat, and (iii) the metric of the spacetime is independent of $\psi$. As a consequence of the latter, the extrinsic curvature $K_{\mu \nu}$, defined by (\[extrinsic curvature\]), vanishes identically, which in turn, by virtue of (\[emt on the brane in terms of K\]), implies $\tau_{\mu\nu} = 0$, i.e., the spacetime (the brane) is devoid of matter $({\cal{T}}_{\mu\nu} = 0)$ and $\Lambda_{(4)} = 0$. Clearly, other $5D$ metrics with properties similar to (\[vaccum solutions on the brane\]) can be constructed from the solutions (\[the metric in Gaussian coordinates\])-(\[F in the synchronous coordinates\]) of section $2$ as well as from (\[F for perfect fluid\]) and (\[F, barotropic case\]) of section $3$. The conclusion here is that the spacetime part of the $5D$ Kasner-like metric (\[Kasner-like metric in terms of F-bar\]), after the transformation $\psi \leftrightarrow (x, y, z)$, can be interpreted as vacuum solutions in a braneworld ${\bf{Z}}_{2}-$symmetric scenario. Static embeddings ================== As we noted above, when a $5D$ metric is independent of $\psi$, the extra dimension can be either spacelike or timelike. This is a general feature of the $5D$ field equations [@XtraSym]. Therefore, after the transformation $\psi \leftrightarrow t$ the $5D$ metric still satisfies the field equations $R_{AB} = 0$. The interesting feature here is that after such a transformation the line element induced on $4D$ hypersurfaces $\Sigma_{\psi}$ is static, instead of dynamic as in sections $3$ and $4$. A simple $5D$ line element that illustrates this feature, in a quite general way, can be obtained from the Kasner-like metric (\[Kasner-like metric in terms of F-bar\]) in the synchronous reference system. In fact, making the transformations ${\psi \leftrightarrow z}$, $F(t, \psi) {\rightarrow} H(t, z)$; and $\psi \leftrightarrow t$, $H(t, z) \rightarrow W(\psi, z)$, from (\[cosmological homogeneous solution in synchronous coordinates \]) and (\[F in the synchronous coordinates\]) we obtain $$\label{static solution} dS^2 = C W^{p_{3}} dt^2 - A W^{p_{1}}dx^2 - B W^{p_{2}}dy^2 - \left(\frac{W_{z}}{W'}\right)^2 dz^2 + d\psi^2,$$ where (We recall the re-scaling of $F$ introduced at the end of section $2$.) $$\label{the function S} W = M(z) + \psi N(z).$$ This is a solution of the $5D$ equations $R_{AB} = 0$ for arbitrary functions $M(z)$ and $N(z)$. It explicitly depends on the extra dimension $\psi$, which now is timelike. At this point it is worthwhile to emphasize that in modern noncompactified $5D$ theories both spacelike and timelike extra dimensions are physically admissible [@epsilon]. Once again the choice of the functions $M(z)$ and $N(z)$ depends on the version of $5D$ relativity we use to evaluate the properties of the matter content in $4D$. Below we illustrate this by considering the induced matter approach, used in section $3$, and the braneworld paradigm used in section $4$. 
Static solutions with planar symmetry in conventional $4D$ general relativity ----------------------------------------------------------------------------- It is not difficult to show that the components of the effective EMT, induced on spacetime hypersurfaces $\Sigma_{\psi}:\psi = \psi_{0} =$ constant, for the metric (\[static solution\]) satisfy algebraic relations similar, but not identical[^11], to those in (\[rel between the stresses\]) and (\[rel between T0 and the stresses\]), which are independent of the specific choice of $M$ and $N$. We omit them here and present the case where the effective matter quantities satisfy the barotropic linear equation of state (\[barotropic equation of state\]). In such a case we find $$\label{M for the general static solution} M(z) = \bar{C} N(z)^{k} - \psi_{0}N(z), \;\;\;k \equiv - \frac{(\alpha^2 + \beta^2 + \gamma^2)[(3n + 1)(\alpha + \beta) + 2 \gamma]}{(\alpha \beta + \alpha \gamma + \beta \gamma)[(3n + 1)(\alpha + \beta - \gamma) + 4 \gamma]},$$ where $\bar{C}$ is a constant of integration. Thus, in the $5D$ solution (\[static solution\]): $W = \bar{C} N^k + (\psi - \psi_{0})N$, which implies that the metric induced in $4D$ is independent of the choice of the hypersurface $\Sigma_{\psi}$. The effective energy density in $4D$ is given by $$\label{static rho effective} 8 \pi G \rho^{(eff)} = \frac{2 a^2 \gamma c \; N^{2(1 - k)}}{\bar{C}^2 [(3n + 1)(\alpha + \beta) + 2\gamma](a + b)^2 },$$ and the stresses are $$\label{the static stresses} \frac{p_{x}^{(eff)}}{\rho^{(eff)}} = \frac{(3n - 1)\gamma - \alpha (3n + 1)}{2 \gamma}, \;\;\;\frac{p_{y}^{(eff)}}{\rho^{(eff)}} = \frac{(3n - 1)\gamma - \beta (3n + 1)}{2 \gamma}, \;\;\;\frac{p_{z}^{(eff)}}{\rho^{(eff)}} = \frac{(3n + 1)(\alpha + \beta) + 2\gamma}{2 \gamma}.$$ We note that for $k = 1$ these quantities are constants and $\rho^{(eff)} < 0$ for all values of $\alpha, \beta$ and $\gamma$. In what follows we assume $k \neq 1$. Since ${g_{33}}_{|_{\Sigma_{\psi}}} = - {\bar{C}}^2 k^2 N^{2(k - 2)}(dN/dz)^2$ we can make the coordinate transformation $N^{(k - 2)} dN \rightarrow d\bar{z}$, i.e., $N \sim {\bar{z}}^{1/(k - 1)}$. In terms of this new coordinate the static metric in $5D$ is generated by (henceforth we omit the bar over $z$) $$\label{S generating static solutions } W(\psi, z) = \bar{C}\left[z^{k/(k - 1)} + (\psi - \psi_{0}) z^{1/(k - 1)}\right].$$ The matter quantities induced in $4D$ decrease as $1/z^2$. Therefore, the above equations represent static “pancake-like" distributions where the matter is concentrated near the plane $z = 0$, while far from it $\rho \rightarrow 0$. Except for the singularity at $z = 0$, the matter distribution presents “reasonable" physical properties. Indeed, for every value of $n$, the “physical" conditions $\rho^{(eff)} > 0$ and $\rho \geq |p_{x, y, z}|$ are satisfied in a wide range of parameters $\alpha, \beta$ and $\gamma$. As an illustration, in the case of axial symmetry with respect to $z$, for $n = 0$ these conditions hold in the range $- 2\beta/3 < \gamma < - \beta/2$ ($\alpha = \beta > 0$) or $-\beta/2 < \gamma < - 2\beta/3$ $(\alpha = \beta < 0)$. For $n = 1/3$, they hold if $- 2 \beta < \gamma \leq - \beta$ ($\alpha = \beta > 0$) or $- \beta \leq \gamma < - 2\beta$ ($\alpha = \beta < 0$). A similar analysis can be extended to other values of $n$. A simpler solution can be obtained from the above expressions in the limiting case where $k = \infty$, which occurs for $(3n + 1)(\alpha + \beta - \gamma) + 4\gamma = 0$. 
In this case (\[S generating static solutions \]) simplifies to $W = \bar{C}\left[z + (\psi - \psi_{0})\right]$ and the matter quantities are obtained from (\[static rho effective\]), (\[the static stresses\]) by replacing $\gamma \rightarrow [(3n + 1)(\alpha + \beta)/3(n - 1)]$. In the case of axial symmetry, the line element becomes independent of the parameters $\alpha, \beta, \gamma$ and depends only on $n$. The effective matter quantities satisfy $\rho^{(eff)} > 0$ and $\rho \geq |p_{x, y, z}|$ for any $n$ in the range $- 1/3 \leq n < 1/3 $. Static solutions on the brane ----------------------------- We now proceed to use the braneworld technique for evaluating the matter quantities. If we locate the brane at $\psi = 0$, then the metric in the ${{\bf{Z}}_{2}}$-symmetric bulk is generated by $W = M(z) + |\psi| N(z)$. From (\[emt on the brane in terms of K\]), with $\epsilon = 1$, and (\[decomposition of tau\]) we obtain the components of ${\cal{T}}_{\mu\nu}$. Now the barotropic equation of state (\[equation of state for ordinary matter\]) yields a differential equation linking $M(z), N(z)$ and $\sigma$, which can be easily integrated for constant vacuum energy, $\sigma = \sigma_{0}$. Namely, we obtain $$\label{Static solutions on the brane, N(z)} N(z) = \frac{3 k_{(5)}^2\sigma_{0} (a + b)(1 + n) M(z)}{2(5 + 3n)[\gamma^2 + \gamma(\alpha + \beta)] + 4(2 + 3n)[\alpha^2 + \beta(\alpha + \beta)]} - E M(z)^{- \frac{a [(2 + 3n)(\alpha + \beta) + 3\gamma]}{(a + b)(2 + 3n)}},$$ where $E$ is a constant of integration. Using this expression we obtain $$\label{Static solutions on the brane, rho} k_{(5)}^2 \rho = \frac{6 E a \gamma}{(a + b)(2 + 3n) M(z)^{\tilde{k}}} + \frac{2 k_{(5)}^2 \sigma_{0}[\alpha^2 + \beta^2 - \gamma^2 + \alpha \beta - \gamma(\alpha + \beta)]}{\tilde{k}(a + b)(2 + 3n)},$$ where $$\label{Static solutions on the brane, definition of k tilde} \tilde{k} = \frac{(5 + 3n)[\gamma^2 + \gamma(\alpha + \beta)] + 2(2 + 3n)[\alpha^2 + \beta(\alpha + \beta)]}{(\alpha^2 + \beta^2 + \gamma^2)(2 + 3n)}.$$ We note that $\rho = $ constant for $\gamma = 0$. Therefore, in what follows we assume $\gamma > 0$. Since $M(z)$ is an arbitrary function, without loss of generality we can choose it as[^12] $$\label{choice of M(z)} M(z) \sim z^{2/\tilde{k}},$$ which is suggested by the decrease of the effective density discussed in section $5.1$. It is not difficult to see that $\rho$ is positive for a large number of values of $\alpha, \beta, \gamma$. The positivity of the first term is guaranteed by the constant of integration $E$. To illustrate the positivity of the second term, we once again consider the case with axial symmetry with respect to $z$. In this case we find $$\label{Static solutions on the brane, rho for axial symmetry} \lim_{z \rightarrow \pm \infty}{\rho} = \frac{2\sigma_{0} (3\beta + \gamma)(\beta - \gamma)} {\gamma (5 + 3n)(2\beta + \gamma) + 6\beta^2(3n + 2)},$$ which is positive for any $\beta > \gamma$ and $n \geq - 2/3$. The main conclusion from this section is that regardless of whether we use the braneworld paradigm or the induced matter approach, the basic picture in $4D$ is essentially the same. Namely that the $4D$ part of (\[static solution\]) represents static pancake-like distributions of matter. Summary ======= The vacuum Einstein field equations for the $5D$ FRW line element (\[Usual cosmological metric in 5D\]) allow complete integration in a number of cases. 
In particular for $\Phi = 1$, or $n = 1$, the $(t, \psi)$-component of the field equations provides a relation that leads to a set of first integrals [@Binetruy], [@JPdeL-isotropicCosm]. However, for the simplest anisotropic extension of (\[Usual cosmological metric in 5D\]), namely, the diagonal Bianchi type-I metric $$\label{Bianchi type I, conclusions} dS^2 = n^2(t, \psi)dt^2 - \sum_{i = 1}^{3}b_{i}(t, \psi) \left(dx^{i}\right)^2 + \epsilon \Phi^2(t, \psi)d\psi^2,$$ this procedure does not work. (For a discussion, and a new point of view in the context of braneworld, see [@Antonio].) Here we have pointed out that making the coordinate transformation $du \propto \left(n dt - \Phi d\psi\right)$, $dv \propto \left(n dt + \Phi d\psi\right)$, in (\[Bianchi type I, conclusions\]) with $\epsilon = - 1$, the field equations allow complete integration in several physical situations, viz., (\[new solution\]), (\[solution 2\]), (\[solution 3\]). The $4D$ interpretation of the $5D$ solutions requires the introduction of coordinates adapted to spacetime sections $\Sigma_{\psi}$. We introduced the $(t, \psi)$ coordinates by setting $u = F(t, \psi)$ and $v = V(t, \psi)$, and used a foliation of the $5D$ manifold such that $\Sigma_{\psi}$ is a hypersurface of the foliation that is orthogonal to the extra dimension with tangent ${\hat{n}}^{A} = \delta^{A}_{4}/\Phi$. From a mathematical point of view the functions $F$ and $V$ can be arbitrary, except for the fact that they have to satisfy (\[diagonal 5D metric\]). However, from a physical point of view, they are related to two important aspects of the construction of the spacetime, namely: (i) the choice of coordinates in $5D$, e.g., Gaussian normal coordinates adapted to $\Sigma_{\psi}$, and (ii) the formulation of physical conditions on the matter fields in $4D$, e.g., some equation of state. Our study shows that there is great freedom for embedding a $4D$ spacetime in an anisotropic $5D$ cosmological model. Similar results, but in a distinct context, have been found in [@Antonio]. To simplify the algebraic expressions, but not the physics, in sections $3$, $4$ and $5$ we have devoted our attention to the study of $4D$ spacetimes embedded in the light-like Kasner cosmological metric, which is a simplified version of (\[light-like metric in t-psi coordinates\]). We have seen that the simple one-variable line element (\[Lightlike Kasner solution\]) can accommodate a great variety of models in $4D$. Indeed, within the context of STM and braneworld theories, we have shown here that the Kasner-like metrics (\[Kasner-like metric in terms of F-bar\]) may be used or interpreted as embeddings for a large number of cosmological and static spacetimes in $4D$. Thus, apparently “different" astrophysical and cosmological scenarios in $4D$ might just be distinct versions of the same physics in $5D$ [@XtraSym]. This investigation can be extended, or generalized, in different ways. In particular, we have not fully examined the possible $4D$ interpretation of the self-similar homothetic solution (\[new solution\]). Neither have we investigated the solutions (\[solution 2\]) and (\[solution 3\]). An important future development here is the question of how these solutions can be applied in the generalizations of Mixmaster or Belinskii-Khalatnikov-Lifshitz oscillations, as well as other issues mentioned in section $1$, which appear in theories with one extra dimension. 
#### Acknowledgments: I wish to thank one of the anonymous referees for a careful reading of this manuscript as well as for helpful and constructive suggestions. Appendix: Solving the field equations. Part II {#appendix-solving-the-field-equations.-part-ii .unnumbered} ============================================== Here we show two more families of analytic solutions to the field equations (\[Rxx, Ryy, Rzz\]). With this aim we notice that these equations are greatly simplified if we introduce the function $$\label{definition of calV} {\cal{V}} = e^{\lambda + \mu + \sigma},$$ in terms of which (\[Rxx, Ryy, Rzz\]) become $$\begin{aligned} \label{Rxx, Ryy, Rzz in terms of calV} 4 \lambda_{u v} + \frac{\lambda_{u} {\cal{V}}_{v}}{{\cal{V}}} + \frac{\lambda_{v} {\cal{V}}_{u}}{{\cal{V}}} &=& 0, \nonumber \\ 4 \mu_{u v} + \frac{\mu_{u} {\cal{V}}_{v}}{{\cal{V}}} + \frac{\mu_{v} {\cal{V}}_{u}}{{\cal{V}}}&=& 0, \nonumber \\ 4 \sigma_{u v} + \frac{\sigma_{u} {\cal{V}}_{v}}{{\cal{V}}} + \frac{\sigma_{v} {\cal{V}}_{u}}{{\cal{V}}} &=& 0.\end{aligned}$$ Adding these equations and using (\[definition of calV\]) we obtain an equation for ${\cal{V}}$, namely, $$\label{equation for calV} 2 {\cal{V}}{\cal{V}}_{u v} - {\cal{V}}_{u} {\cal{V}}_{v} = 0,$$ whose general solution can be written as $$\label{solution for calV} {\cal{V}} = \left[\tilde{h}(u) + \tilde{g}(v)\right]^2,$$ where $\tilde{h}$ and $\tilde{g}$ are arbitrary functions of their arguments. Clearly, the self-similar solution discussed in section $2.1$ corresponds to the particular choice $$\label{the metric functions in terms of calV} e^{\lambda} \propto {\cal{V}}^{\alpha/a}, \;\;\;e^{\mu} \propto {\cal{V}}^{\beta/ a}, \;\;\;e^{\sigma} \propto {\cal{V}}^{\gamma/ a},$$ which satisfies (\[Rxx, Ryy, Rzz in terms of calV\]) and (\[equation for calV\]) identically. In what follows, as in section $2.1$, we introduce a new set of null coordinates $\tilde{u}$ and $\tilde{v}$ by the relations $\tilde{h}(u) = {\tilde{c}}_{1} \tilde{u}$ and $\tilde{g}(v) = {\tilde{c}}_{2} \tilde{v}$, where ${\tilde{c}}_{1}$ and ${\tilde{c}}_{2}$ are constants. In terms of these new coordinates ${\cal{V}} = \left({\tilde{c}}_{1} \tilde{u} + {\tilde{c}}_{2} \tilde{v}\right)^2$. Substituting this expression into the first of the equations (\[Rxx, Ryy, Rzz in terms of calV\]), and dropping the tilde characters, we obtain $$2 \left(c_{1} u + c_{2} v\right)\lambda_{u v} + c_{1} \lambda_{v} + c_{2} \lambda_{u} = 0.$$ A similar expression holds for $\mu$. The solutions below are obtained under the assumption that $e^{\lambda}$ and $e^{\mu}$ are separable functions of their arguments, in which case the above equation implies that they are proportional to $e^{\pm \left(c_{1} u - c_{2} v\right)}$. The metric function $e^{\sigma} = {\cal{V}}\; e^{- \left(\lambda + \mu\right)}$ automatically satisfies the third equation in (\[Rxx, Ryy, Rzz in terms of calV\]) and is non-separable. Consequently, there are two different families of solutions corresponding to whether $e^{\lambda} \propto e^{- \mu}$ or $e^{\lambda} \propto e^{ \mu}$. $\bullet$ In the case where $e^{\lambda} \propto e^{- \mu}$, the field equations $R_{uu} = 0$ and $R_{vv} = 0$ reduce to $$\label{separable 1} 2 {\cal{A}}_{u} - c_{1}\left(c_{1} u + c_{2} v\right){\cal{A}} = 0,\;\;\;\mbox{and}\;\;\; 2 {\cal{A}}_{v} - c_{2}\left(c_{1} u + c_{2} v\right){\cal{A}} = 0.$$ These equations completely determine the function ${\cal{A}}$ and assure the fulfillment of $R_{u v} = 0$.
The final form of the solution is given by $$\label{solution 2} dS^2 = C_{0} e^{[\left(c_{1} u + c_{2}v\right)^2/4]} d u d v - C_{1} e^{\left(c_{1} u - c_{2} v\right)} d x^2 - C_{2} e^{- \left(c_{1} u - c_{2} v\right)} d y^2 - \left(C_{1} C_{2}\right)^{- 1}\left(c_{1} u + c_{2} v\right)^2 d z^2.$$ $\bullet$ Following the same steps as above we find that when $e^{\lambda} \propto e^{\mu}$ the solution is $$\label{solution 3} dS^2 = C_{0} e^{[3 \left(c_{1} u + c_{2}v\right)^2/4]} e^{- 2\left(c_{1} u - c_{2} v\right)}d u d v - C \; e^{\left(c_{1} u - c_{2} v\right)} \left(d x^2 + d y^2\right) - C^{- 2} \left(c_{1} u + c_{2} v\right)^2 e^{- 2\left(c_{1} u - c_{2} v\right)} d z^2.$$ In the above line elements $\left(C, C_{0}, C_{1}, C_{2}\right)$ are constants of integration. We note that the resulting solutions are quite complicated even in the case where either $c_{1}$ or $c_{2}$ is set equal to zero. Although this is a great obstacle for the analytical interpretation of these metrics in $4D$, it allows us to appreciate the simplicity of the self-similar solutions discussed in the main text. [^1]: E-Mail: jpdel@ltp.upr.clu.edu, jpdel1@hotmail.com [^2]: Notation: $x^{\mu} = (x^0, x^1, x^2, x^3)$ are the coordinates in $4D$ and $\psi$ is the coordinate along the extra dimension. We use spacetime signature $(+, -, -, -)$, while $\epsilon = \pm 1$ allows for spacelike or timelike extra dimension, both of which are physically admissible; for a detailed discussion see, e.g., [@epsilon]. [^3]: Without entering into technical details: extrapolating backwards in time towards the singularity, one finds an infinite number of alternating quasi-periodic Kasner-like epochs with different expansion rates [@MTW]-[@BKL]. [^4]: In the traditional interpretation of Sedov, Taub and Zel’dovich [@Sedov], self-similarity means that all dimensionless quantities in the theory can be expressed as functions only of a single similarity variable, which is some combination of the independent coordinates. In this way the field equations become a system of ordinary, instead of partial, differential equations. [^5]: The proportionality coefficients $C_{0}$, $C_{1}$, $C_{2}$, $C_{3}$ can be set equal to unity without any loss of generality. [^6]: We note that (\[diagonal 5D metric\]) remains invariant under the re-scaling $F \rightarrow \bar{F}^{a^2/(a + b)}$. [^7]: We exclude $c = 0$ because it corresponds to empty space, i.e., $T_{\mu\nu} = 0$. [^8]: In fact, the effective EMT defined by (\[4D Einstein with T and K\]) contains a contribution, given by $E_{\alpha\beta}$, which is the spacetime projection of the $5D$ Weyl tensor and connects the physics in $4D$ with the geometry in $5D$. [^9]: From (\[effective gravitational coupling\]), with $\epsilon = - 1$, it follows that $\sigma$ must be positive in order to ensure $G > 0$. [^10]: This is a simple combination of $\dot{V} \propto \dot{F}F^{[c(1 + 3n)/(a + b)]}$ for the anisotropic FRW models considered in Section $3.3$, and $\dot{V} \propto \dot{F}/F'^2$ for the Gaussian embedding discussed in Section $4.2$. [^11]: For example, (\[rel between the stresses\]) is now replaced by $- (\alpha + \gamma)T_{1}^{1} + (\beta + \gamma)T_{2}^{2} + (\beta - \alpha)T_{3}^{3} = 0$. [^12]: We note that in the case under consideration $(\gamma \neq 0)$, $\tilde{k}$ never vanishes. In fact, for real parameters $\alpha$ and $\beta$ the quantity $[\alpha^2 + \beta(\alpha + \beta)]$ is always positive.
On the other hand, $[\gamma^2 + \gamma(\alpha + \beta)] = 0$ requires $\alpha + \beta + \gamma = 0$, i.e., $a = 0$, which corresponds to Minkowski space in $5D$ and $4D$.
1
--- abstract: | The objective of this paper is to correct a substantial, widespread error in parts of the quasielastic scattering literature. This error leads to entirely erroneous interpretations of quasielastic scattering spectra. The error, which is most prominent for interpreting spectra of dilute probe particles diffusing in complex fluids, arises from a valid calculation that is being invoked under circumstances in which its primary assumptions are incorrect. Quasielastic scattering from dilute probes yields the incoherent structure factor $g^{(1s)}(q,t) = \langle \exp(i q \Delta x(t)) \rangle$, with $q$ being the magnitude of the scattering vector ${\bf q}$ and $\Delta x(t)$ being the probe displacement parallel to ${\bf q}$ during a time interval $t$. The error is the claim that $g^{(1s)}(q,t)$ for dilute probe particles uniformly reduces to $\exp(- q^{2} \langle (\Delta x(t))^{2} \rangle/2 )$, regardless of the nature of the surrounding medium. If true, this claim would allow one to use quasielastic scattering to determine the time-dependent mean-square probe displacements in complex fluids. In reality, $g^{(1s)}(q,t)$ is determined by all even moments $\langle (\Delta x(t))^{2n} \rangle$, $n = 1, 2, 3,\ldots$, of the displacement distribution function $P(\Delta x,t)$. Only in the very special case of monodisperse probes in a simple Newtonian solvent is $g^{(1s)}(q,t)$ entirely determined by $\langle (\Delta x(t))^{2} \rangle$. Furthermore, the Langevin equation approach that ties $g^{(1s)}(q,t)$ to $\langle (\Delta x(t))^{2} \rangle$ *also requires with equal certainty* that $g^{(1s)}(q,t)$ relaxes as a simple exponential $\exp(- \Gamma t)$, $\Gamma$ being a time-independent constant. Contrariwise, *if the spectrum is not a simple exponential in time, $g^{(1s)}(q,t)$ is not determined by $\langle (\Delta x(t))^{2} \rangle$.* Several related subsidiary errors are discussed. author: - 'George D. J. Phillies' title: Interpretation of Quasielastic Scattering Spectra of Probe Species in Complex Fluids --- Introduction ============ Quasielastic scattering of light, x-rays, and neutrons has proven to be a powerful experimental tool for the study of complex fluids. For dilute macromolecule solutions, quasielastic scattering has extensive analytic applications for particle sizing[@phillies1990z]. With non-dilute solutions and more complex systems, quasielastic scattering reveals consequences of intermacromolecular and other interactions. In a substantial enhancement of the classical method, the fluid of interest is doped with trace concentrations of monodisperse probe particles. Probes have ranged from small molecules to micron-size colloidal particles. The diffusion of probe particles through the complex fluid is then observed. Early successful studies of probe diffusion in complex fluids using quasielastic light scattering were by Hallett and co-workers[@gray1974a; @turner1976a], who in 1974 and 1976 observed the diffusion of polystyrene spheres through hyaluronic acid and dextran solutions, and compared probe diffusion with the rheological properties of their solutions. The subsequent four decades have seen an enormous extension of this approach[@phillies2011a], including studies of probes in highly viscous simple liquids[@phillies1981z], polymer melts[@lin1986a], chemically cross-linked gels[@schmidt1989b], surfactant solutions[@phillies1993b], protein solutions[@ullmann1985a], and the interior of living cells[@lubyphelps1987a]. 
More recently, quasielastic x-ray scattering has been used to extend the range of distance scales over which diffusion can be observed[@dierker1995a]. Probe diffusion has also been studied by a series of other physical techniques, each technique being sensitive to a distinctive range of time and distance scales or to other features of probe motion. For example, fluorescence correlation spectroscopy[@magde1972a], which by varying the probe concentration can measure both the self diffusion coefficient and the mutual diffusion coefficient of the labeled species[@phillies1975a; @scalettar1989a], has in recent years been extensively used to measure tracer diffusion. Recent work using probe diffusion is sometimes termed *microrheology*, a term referring to a particular model[@mason1996a] for interpreting quasielastic scattering spectra. In some studies, probe diffusion has been viewed as being of interest because it is a path to measuring the viscoelastic properties of the solution. In other studies, probe diffusion has been viewed as being of interest because it measures solution properties that are not the same as the viscoelastic properties of the solution. A valuable complement to quasielastic scattering studies of probe diffusion is provided by measurements on probes subject to external driving forces. The overwhelming bulk of the literature on driven probe motion is provided by studies of capillary electrophoresis. The electrophoretic literature primarily emphasizes improving separations of different charged species. However, electrophoretic separations often use solutions of neutral polymers as the support medium. At the same time as these experiments perform separations, they also give information on the dynamics of the neutral polymers[@phillies2011a; @phillies2012e]. A substantial literature exists on buoyancy-driven probe motion in the ultracentrifuge[@laurent1963a; @ye1998a]. In a few experiments, magnetic[@hough1999a; @schmidt2000a] or optical[@amblard1996a] tweezers were used to examine oscillatory driven movements of probes. Tweezer experiments are particularly interesting because the experimenter can separately control two of the three: drive force, drive frequency, and particle displacement. An alternative complement to probe diffusion is provided by tracking probes in complex fluids in which the fluid itself is performing driven motion, e.g., shear[@tapadia2006a]. Quasielastic scattering of light and other photons is most commonly studied via correlation methods, in which the quantity measured directly is the intensity-intensity time correlation function $$S(q,t) = \langle I(q,\tau) I(q, \tau+t) \rangle. \label{eq:Sqtdef}$$ Here $q$ is the magnitude of the scattering vector, $I(q,\tau)$ and $I(q, \tau+t)$ are the scattering intensities over short time intervals near $\tau$ and $\tau+t$, and the brackets $\langle \cdots \rangle$ represent an average. Scattering is said to be due to scatterers within the medium, scatterers generally being represented mathematically by points whose locations are known. Scattering from extended bodies, such as high-molecular-weight polymer chains, is often treated by representing the scattering body as a series of scattering points whose relative positions are partly fixed.
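As a purely illustrative sketch of how the correlation function of eq \[eq:Sqtdef\] is estimated in practice, the fragment below builds a synthetic intensity trace (the trace, sampling interval, and correlation time are all assumed for the example, not taken from any experiment) and forms the lag-time average directly.

```python
# Illustrative software-correlator sketch (assumed synthetic data): estimate
# S(q,t) = <I(q,tau) I(q,tau+t)> from a sampled intensity trace at a single q.
import numpy as np

rng = np.random.default_rng(0)
dt, N = 1e-3, 100_000            # assumed sampling interval (s) and trace length
tau_c, mean_I = 0.05, 1000.0     # assumed correlation time (s) and mean count rate

# synthetic intensity: exponentially correlated fluctuations about the mean level
a = np.exp(-dt / tau_c)
fluct = np.empty(N)
fluct[0] = rng.normal()
for i in range(1, N):            # AR(1) process with correlation time tau_c
    fluct[i] = a * fluct[i - 1] + np.sqrt(1 - a**2) * rng.normal()
I = mean_I * (1.0 + 0.2 * fluct)

def intensity_correlation(I, max_lag):
    """Estimator of <I(tau) I(tau+t)> for lag indices 0..max_lag-1."""
    return np.array([np.mean(I[:len(I) - k] * I[k:]) for k in range(max_lag)])

S = intensity_correlation(I, max_lag=200)
g2 = S / np.mean(I)**2           # normalized intensity correlation function
print(g2[:5])
```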
If the volume being observed is much larger than the volumes over which particle positions and displacements are correlated, quasielastic scattering corresponds to the intermediate structure factor (or field correlation function) $g^{(1)}(q,t)$ via[@crosignani1975a] $$S(q,t) = A |g^{(1)}(q,t)|^{2} + B. \label{eq:Sqg1def}$$ In this equation $A$ and $B$ are constants determined by details of the experimental apparatus; these constants have no effect on the time dependence. Homodyne rather than heterodyne detection of the scattered light is assumed. The factorization of $S(q,t)$ into $g^{(1)}(q,t)$ is sometimes termed the “Gaussian approximation”. This Gaussian approximation is not related to the Gaussian approximation for the particle displacements as discussed below. The intermediate structure factor is in turn determined by the time-dependent positions of the scattering particles via $$g^{(1)}(q,t) = \left\langle \sum_{i=1}^{N} \sum_{j=1}^{N} \exp(\imath {\bf q} \cdot ({\bf r}_{i}(t+\tau) - {\bf r}_{j}(\tau) )) \right\rangle \label{eq:g1qgeneral}$$ In this equation, sums on $i$ and $j$ proceed separately over all $N$ particles in the system, while ${\bf r}_{i}(t+\tau)$ and ${\bf r}_{j}(\tau)$ are the locations of scatterers $i$ and $j$ at times $t+\tau$ and $\tau$, respectively. In applying eq \[eq:g1qgeneral\], two particularly interesting experimental circumstances are described as mutual diffusion measurements and as probe diffusion measurements. In a measurement on a binary solvent: scatterer system, the scattering particles may be concentrated or dilute. Quasielastic scattering on such a system measures the mutual diffusion coefficient, which describes the diffusion of the scatterers down a concentration gradient[@phillies1974a; @phillies1974b]. Tracer diffusion experiments examine ternary solvent: matrix : probe systems. In these systems the matrix component may be dilute or concentrated, is substantially responsible for the system’s rheological and other interesting properties, but is nearly optically inert. Conversely, the probe (tracer) component is dilute, has virtually no effect on rheological and other properties of the solvent: matrix system, but dominates scattering by the ternary mixture. If matrix scattering is not entirely negligible, there are established, reliable ways to isolate the probe scattering, based on spectral subtraction at the level of the field correlation function. Because probe particles very nearly do not interact with each other, the field correlation function for probe diffusion reduces (up to normalization constants) to the incoherent scattering function $$g^{(1s)}(q,t) = \langle \exp(\imath q \Delta x(t))\rangle. \label{eq:g1sandr}$$ with $\Delta x(t)$ being the component parallel to ${\bf q}$ of $\mathbf{\Delta r}(t) = {\bf r}_{i}(t+\tau) - {\bf r}_{i}(\tau)$. Probe motions perpendicular to $\mathbf{q}$ do not contribute to $g^{(1s)}(q,t)$. In moving from eq \[eq:g1qgeneral\] to eq \[eq:g1sandr\], terms of eq \[eq:g1qgeneral\] in which $i \neq j$ were taken to average to zero, because the relative positions of dilute probes are very nearly uncorrelated. An expression formally identical to eq \[eq:g1sandr\] describes diffusion measurements using pulsed-field-gradient nuclear magnetic resonance, though with this method $q$ has an entirely different meaning, namely in the simplest case $q = \gamma \delta g$, where $\gamma$ is the gyromagnetic ratio, $\delta$ is a pulse width, and $g = dB/dz$ is the field gradient. 
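To make eq \[eq:g1sandr\] concrete, the following short sketch evaluates the incoherent average $\langle \exp(\imath q \Delta x(t))\rangle$ over simulated probe displacements; for ordinary Brownian motion the result should agree with $\exp(-q^{2}Dt)$. The diffusion coefficient, scattering vector, and delay times are assumed values chosen only for illustration.

```python
# Illustrative evaluation of eq (g1sandr): average exp(i q Delta_x) over many probes.
# Parameter values are assumed for the example only.
import numpy as np

rng = np.random.default_rng(1)
D = 1.0e-12            # m^2/s, assumed probe diffusion coefficient
q = 2.0e7              # 1/m, assumed scattering vector magnitude
n_probes = 50_000

for t in (1e-4, 3e-4, 1e-3, 3e-3):                               # delay times in s
    dx = rng.normal(scale=np.sqrt(2 * D * t), size=n_probes)     # Delta_x along q
    g1s = np.mean(np.exp(1j * q * dx)).real                      # incoherent average
    print(f"t = {t:.0e} s   g1s = {g1s:.4f}   exp(-q^2 D t) = {np.exp(-q*q*D*t):.4f}")
```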
The averages in eqs \[eq:g1qgeneral\] and \[eq:g1sandr\] may formally be phrased as averages over displacement distribution functions such as $P(\Delta x, t)$, which gives the time-dependent probability that a scattering particle will displace through $\Delta x$ during time $t$. Two previous papers[@phillies2005a; @phillies2012a] examined how $g^{(1s)}(q,t)$ and $g^{(1)}(q,t)$ are actually related to the displacement distribution functions. The two prior papers were primarily concerned with establishing formal relationships between dynamic structure factors and probabilities for scatterer displacement. The significance of these relationships for the interpretation of experimental measurements was at most a secondary consideration. This paper focuses on interpreting experimental measurements. Section II of this paper presents the correct general relationship between $g^{(1s)}(q,t)$ and $P(\Delta x, t)$. Section III discusses the special case of probe particles in a purely Newtonian fluid. Section IV notes experimental findings bearing on the relative significance of Sections II and III. Section V considers paths for interpreting probe diffusion spectra. Section VI treats the determination of $P(\Delta x, t)$, relationships between $g^{(1s)}(q,t)$ and trapping/hopping behavior, and, closing on a positive note, cases in which quasielastic scattering from diffusing probes, correctly interpreted, has given valuable information about complex fluids and the objects diffusing in them. General Case\[sectiongeneralcase\] ================================== This section summarizes what $g^{(1s)}(q,t)$ and $g^{(1)}(q,t)$ actually reveal about particle displacements. Extended derivations have appeared previously in two earlier papers, refs [@phillies2005a] and [@phillies2012a], and are not repeated here. For probe diffusion, the intermediate structure factor is always determined by the displacement distribution function, namely the average in eq \[eq:g1sandr\] can be written as $$g^{(1s)}(q,t) = \int_{-\infty}^{\infty} d(\Delta x) \exp(i q \Delta x) P(\Delta x, t). \label{eq:g1sPDelta}$$ $P(\Delta x, t)$ is taken to be properly normalized. On taking a Taylor series expansion of the exponential in powers of $q$, reflection symmetry, namely $P(\Delta x, t) = P(-\Delta x, t)$, eliminates all terms odd in $q$. As a result, $g^{(1s)}(q,t)$ and its logarithm are necessarily power series in $q^{2}$. The coefficients of the $q^{2n}$ are generated by the even moments $\langle(\Delta x)^{2n}\rangle$ of $P(\Delta x, t)$. As shown previously[@phillies2005a], the lead terms of an expansion for $g^{(1s)}(q,t)$ are $$g^{(1s)}(q,t) = N \exp\left( - \frac{1}{2} q^2 \langle \Delta x(t)^{2} \rangle + \frac{1}{24} q^{4}( \langle \Delta x(t)^{4} \rangle - 3\langle \Delta x(t)^{2} \rangle^{2}) - {\cal O}(q^{6})\right). \label{eq:g1sanddisplacements}$$ All even moments $\langle(\Delta x)^{2n}\rangle$ are required for the complete expansion. It was shown early on that quasielastic scattering from a binary solvent: macromolecule system determines the mutual diffusion coefficient, not the self diffusion coefficient[@phillies1974a; @phillies1974b]. Theoretical approaches to computing $g^{(1)}(q,t)$ and the mutual diffusion coefficient of non-dilute colloid solutions have historically followed routes very different from those based on the displacement distribution function $P(\Delta x, t)$. Only very recently[@phillies2012a] was a solution for $g^{(1)}(q,t)$ in terms of displacement distribution functions obtained.
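A brief numerical illustration of eq \[eq:g1sanddisplacements\] follows; the non-Gaussian $P(\Delta x, t)$ used here is an assumed toy distribution (an equal-weight mixture of two zero-mean Gaussians), chosen only to show that the $q^{4}$ term is visible and that truncating the expansion is not the same as the exact integral of eq \[eq:g1sPDelta\].

```python
# Illustrative comparison (assumed toy P(dx,t)): exact g1s(q) versus the expansion of
# eq (g1sanddisplacements) truncated after the q^2 and q^4 terms.
import numpy as np

s1, s2 = 1.0, 3.0                      # widths of the two Gaussians (arbitrary units)
w1, w2 = 0.5, 0.5

m2 = w1 * s1**2 + w2 * s2**2           # <dx^2>
m4 = 3 * (w1 * s1**4 + w2 * s2**4)     # <dx^4> for the mixture

for q in (0.1, 0.3, 0.6, 1.0):
    exact = w1 * np.exp(-q**2 * s1**2 / 2) + w2 * np.exp(-q**2 * s2**2 / 2)
    q2_only = np.exp(-q**2 * m2 / 2)                              # Gaussian approximation
    with_q4 = np.exp(-q**2 * m2 / 2 + q**4 * (m4 - 3 * m2**2) / 24)
    print(f"q={q:.1f}  exact={exact:.4f}  q^2 only={q2_only:.4f}  with q^4={with_q4:.4f}")
```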
In this solution, the expansion of eq \[eq:g1qgeneral\] was shown to require averages over two different displacement distribution functions, namely $P(\Delta x, t)$ and a new distribution function $P_{2}(\Delta x, t, \mathbf{R}_{12})$. $P_{2}$ is a two-particle conditional displacement distribution function, in which $\Delta x$ is the displacement of particle $1$ during $(0, t)$ given that the vector $\mathbf{R}_{12}$ from particle $1$ to some particle $2$ at time $0$ has a given value. Special Case: Probes in Simple Newtonian Liquids \[sectionsimple\] =================================================================== The earliest quasielastic scattering experiments were performed on dilute suspensions of monodisperse scattering particles in simple Newtonian solvents. Cummins, et al.’s results on polystyrene spheres in water[@cummins1964a] are the archetype. The resulting spectra were interpreted by invoking a mechanical model for the motions of diffusing particles. The mechanical model was provided by the Langevin equation, which in one dimension is $$m\frac{d^{2} x(t) }{dt^{2}} = - f \frac{dx(t)}{dt} +{\cal F}_{x}(t). \label{eq:Langevin}$$ Here $x(t)$ is a coordinate of the diffusing particle, $m$ is the particle mass, $f$ is the particle’s drag coefficient, and ${\cal F}_{x}(t)$ is the random force, called *random* because in the Langevin model the values of ${\cal F}_{x}(t)$ at different instants in time are uncorrelated. Within the model, ${\cal F}_{x}$ cannot be predicted beyond stating that ${\cal F}_{x}$ has certain statistical properties. The canonical literature treatment of the Langevin model as applied to quasielastic light scattering is the volume by Berne and Pecora[@berne1976a], notably their Section 5.9. Berne and Pecora show that the Langevin model is appropriate for polystyrene spheres in water, on the time and distance scales observed by quasielastic light scattering. From the Langevin model and the requirement that the system remains in thermal equilibrium, a series of conclusions about the statistical properties of the particle motion follow. In particular: (i) : The mean-square value of ${\cal F}_{x}(t)$ must be consistent – the fluctuation-dissipation theorem – with the drag coefficient $f$ and the thermal energy $k_{B}T$. (ii) : The distribution $P(\Delta x)$ of particle displacements $\Delta x$ during a time interval $\Delta t$ is the same for all time intervals $(t, t+\Delta t)$. (iii) : Velocity correlations are evanescent. For time steps appreciably longer than $m/f$, which for Brownian particles is actually a quite short time, particle displacements in a series of time steps are very nearly independent from each other. Conclusion (ii) corresponds to the statement that $x(t)$ is the sum of a series of identically-distributed random variables. Conclusion (iii) corresponds to the independent statement that the time evolution of $x(t)$ is described by a Markoff process. In this very special case, the distribution of particle displacements is described by Doob’s Theorem[@doob1942a]. Doob’s theorem is closely related to the central limit theorem. Doob’s theorem treats random processes such as $\Delta x(t)$, while the central limit theorem treats random variables. For the Langevin model, Doob’s Theorem shows that the distribution of particle displacements is a Gaussian $$P(\Delta x) = \left(2\pi \langle \Delta x^{2} \rangle \right)^{-1/2} \exp(- (\Delta x(t))^{2}/ 2 \langle \Delta x^{2} \rangle ).
\label{eq:gaussianform}$$ For this special case, the incoherent scattering function reduces to $$g^{(1s)}(q,t) = \exp(- q^{2} \langle (\Delta x(t))^{2} \rangle/2). \label{eq:g1swrong}$$ Equation \[eq:g1swrong\] is quite accurate for the systems considered in Ref. [@berne1976a], namely highly dilute solutions of monodisperse objects in simple Newtonian solvents. However, Berne and Pecora[@berne1976a], especially their Appendix 5.A and Section 5.9 leading to their eq 5.9.6, also prove the other important consequence of the Langevin model and Doob’s theorem, namely that the Langevin model determines the exact value of $\langle (\Delta x(t))^{2} \rangle$. On the time and distance scales accessible to quasielastic scattering, the Langevin model requires $$\langle (\Delta x(t))^{2} \rangle = 2 D t. \label{eq:meansquare}$$ Here $k_{B}$ is Boltzmann’s constant and $T$ is the absolute temperature. $D = k_{B}T/f$ is the diffusion constant, a quantity *that does not depend on time*. Time independence of $D$ is required by the calculation, because $D$ results from a time integral over ($0 \leq t \leq \infty$). Equations \[eq:g1swrong\] and \[eq:meansquare\] come as a package; they are equally consequences of the Langevin model. Correspondingly, Berne and Pecora show for diffusing monodisperse Brownian particles that the Langevin model requires that the field correlation function is a simple exponential $$g^{(1s)}(q,t) = \exp(- q^{2} D t). \label{eq:g1ssimple}$$ For unclear reasons – the literature error noted in the introduction – Berne and Pecora’s entirely correct Chapter 5 is being misread as proving that eq \[eq:g1swrong\] is always correct, even when the time relaxation of $g^{(1s)}(q,t)$ is not a simple exponential. Berne and Pecora in fact prove exactly the opposite, namely the contrapositive of their result is that if the relaxation is not a single exponential, then the Langevin model must not be applicable to the system, and therefore invocation of the Langevin model prediction eq \[eq:g1swrong\] is incorrect. Berne and Pecora’s discussion refers purely and exclusively to the special case of particle motion that is described by the Langevin equation, in which case eqs \[eq:gaussianform\]-\[eq:g1ssimple\] are all correct. This special case corresponds to most of the experiments that were of interest at the time that ref [@berne1976a] was written, namely applications of quasielastic scattering for particle sizing[@dahneke1983]. For diffusing dilute colloids, the $t$ and $q$ dependences of $g^{(1s)}(q,t)$ are precisely as predicted by the Langevin model, in particular $g^{(1s)}(q,t) \sim \exp(- \Gamma t)$ with $\Gamma \propto q^{2}$. The quasielastic scattering spectrum only leads to the time-dependent mean-square particle displacement if the spectrum is a pure exponential in $q^{2} t$. In this special case, the mean-square displacement increases linearly with $t$, so that the short-time and long-time behaviors of $g^{(1s)}(q,t)$ are one and the same. If the decay of the field correlation function is not a simple exponential in $t q^{2}$, then eq \[eq:gaussianform\] and the Langevin model cannot possibly describe how the scattering particles move. If eq \[eq:gaussianform\] described the particle motions, then the spectrum would be a simple exponential. In systems in which the spectrum is more complex than a simple exponential, eq \[eq:g1swrong\] is invalid.
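A minimal numerical illustration of this special case is sketched below (assumed reduced units; the integration scheme and parameter values are mine, not Berne and Pecora's): an Euler-Maruyama integration of eq \[eq:Langevin\] gives, for $t \gg m/f$, a mean-square displacement close to $2Dt$ and an incoherent scattering function close to the single exponential of eq \[eq:g1ssimple\].

```python
# Illustrative Langevin integration (assumed reduced units): m dv/dt = -f v + F(t),
# with <F(t)F(t')> = 2 f kT delta(t-t').  For t >> m/f the mean-square displacement
# should be approximately 2 D t, D = kT/f, and g1s(q,t) approximately exp(-q^2 D t).
import numpy as np

rng = np.random.default_rng(2)
kT, m, f = 1.0, 1.0, 50.0        # momentum relaxation time m/f = 0.02
D = kT / f
dt, n_steps, n_part = 1e-3, 4000, 20_000

x = np.zeros(n_part)
v = np.zeros(n_part)
snapshots = {}
for step in range(1, n_steps + 1):
    F = np.sqrt(2 * f * kT / dt) * rng.normal(size=n_part)   # discretized random force
    v += (-f * v + F) * dt / m
    x += v * dt
    if step in (400, 1000, 2000, 4000):
        snapshots[step * dt] = x.copy()

q = 5.0
for t, xt in snapshots.items():
    msd = np.mean(xt**2)
    g1s = np.mean(np.cos(q * xt))            # real part of <exp(i q x)>
    print(f"t={t:.1f}  MSD={msd:.4f}  2Dt={2*D*t:.4f}  "
          f"g1s={g1s:.3f}  exp(-q^2 D t)={np.exp(-q*q*D*t):.3f}")
```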
That is, if $g^{(1s)}(q,t)$ is not a simple exponential in $t$, $\log(g^{(1s)}(q,t))$ does not reveal the mean-square displacement of the particles. Why does eq \[eq:g1sanddisplacements\] ever reduce to eq \[eq:g1swrong\]? If and only if $P(\Delta x, t)$ is a Gaussian in $\Delta x$, $P(\Delta x, t)$ is entirely characterized by its second moment $\langle \Delta x^{2} \rangle$. For a Gaussian displacement distribution function, the higher moments of $P(\Delta x, t)$ have values such that the coefficients of the higher-order terms ($q^{2n}$ for $n \geq 2$) of eq \[eq:g1sanddisplacements\] all vanish. For a Gaussian $P(\Delta x, t)$, the only non-zero part of eq \[eq:g1sanddisplacements\] is eq \[eq:g1swrong\]. This disappearance of the higher-order terms is unique to a Gaussian $P(\Delta x, t)$. For any other $P(\Delta x, t)$, the higher-order terms of eq \[eq:g1sanddisplacements\] do not vanish. Experimental Findings \[sectionexperiment\] =========================================== What do experiments say about $P(\Delta x, t)$ and $g^{(1s)}(q,t)$? There are systems in which the Langevin model is adequate, namely dilute monodisperse particles suspended in simple Newtonian fluids. The Langevin model provides the solid foundation for particle sizing via quasielastic scattering[@dahneke1983]. For probe diffusion in complex fluids, experiment provides a far more complex picture. Consider a few representative experiments that have determined $P(\Delta x, t)$ or $g^{(1s)}(q,t)$ for probe particles in complex fluids. On relatively long time scales, $P(\Delta x,t)$ is accessible via particle tracking methods. As examples, note experimental studies by Apgar, et al.[@apgar2000a], Tseng and Wirtz[@tseng2001a], and Xu, et al.[@xu2002a] on probe diffusion in glycerol, actin solutions and gels, and gliadin solutions. These authors used video recording and computer image analysis to track simultaneously the motions of large numbers of particles. They report $P(\Delta x, t)$ and $\langle \Delta x(t)^{2} \rangle$ in their systems. Probes in glycerol follow eqs \[eq:gaussianform\] and \[eq:meansquare\]. For probes in various complex fluids, $P(\Delta x, t)$ has decidedly non-Gaussian forms. In these systems, the mean-square displacement does not increase linearly in time. By direct measurement, eqs \[eq:gaussianform\] and \[eq:meansquare\] are not uniformly correct for probes in polymer solutions. Quasielastic scattering spectra of probes in polymer solutions are often markedly non-exponential. For polystyrene latex sphere probes in hydroxypropylcellulose: water, this author and Lacroix[@lacroix1997a] found stretched exponentials in time $$g^{(1s)}(q,t) = a \exp(- \theta t^{\beta}) \equiv a \exp(- (t/\tau)^{\beta}). \label{eq:gisqeext}$$ Here $\beta$ is a scaling exponent, while $\theta$ is a prefactor and $\tau$ is the corresponding time-scale parameter. A series of papers by Streletzky and collaborators on the same chemical system (most recently, ref. ) established by viewing a much wider range of delay times that $g^{(1s)}(q,t)$ is in fact a sum of two stretched exponentials in time. Finally, I note a very simple model system in which eqs \[eq:gaussianform\] and \[eq:g1swrong\] fail. The system is a dilute aqueous dispersion of polystyrene spheres, in which the spheres are of two different sizes. There are no sphere-sphere interactions. Each sphere individually performs Brownian motion as described by the Langevin equation.
Therefore, for each sphere in the mixture, $P(\Delta x, t)$ is a Gaussian in $\Delta x$ and $\langle (\Delta x)^{2} \rangle$ increases linearly in time. For the mixture as a whole, the mean-square displacement averaged over all the particles must therefore also increase linearly with time, at a rate determined by a weighted average of the diffusion coefficients of the two sphere species. However, the mixture’s field correlation function is a sum of two exponentials $$g^{(1s)}(q,t) = A_{1} \exp(-D_{1} q^{2} t) + A_{2} \exp(-D_{2} q^{2} t) \label{eq:doubleexponential}$$ where $A_{i}$ is the scattering amplitude of species $i$ and $D_{i}$ is the diffusion coefficient of species $i$. Correspondingly, the displacement distribution function for all the particles in solution is a weighted sum of two Gaussians, one Gaussian for each sphere size. Suppose one used eq \[eq:g1swrong\] to determine $\langle (\Delta x)^{2} \rangle$ of the sphere mixture. According to eq \[eq:g1swrong\] $$\langle (\Delta x)^{2} \rangle = - 2 \ln( g^{(1s)}(q,t))/q^{2}. \label{eq:meansquare2}$$ At short times, $g^{(1s)}(q,t)$ contains contributions from both species, and the $\langle (\Delta x)^{2} \rangle$ computed from eq \[eq:meansquare2\] is the weighted average of the mean-square displacements of the two species. At long times, in eq \[eq:doubleexponential\] the exponential corresponding to the more rapidly-moving species has decayed to zero, so the nominal mean-square displacement from eq \[eq:meansquare2\] is determined by scattering from the larger, more slowly-moving spheres. For this simple bidisperse sphere suspension, eq \[eq:meansquare2\] asserts that the slope of $\langle (\Delta x)^{2}\rangle$ depends on time, $\langle (\Delta x)^{2}\rangle$ increasing more rapidly at short times and more slowly at long times. The assertion is incorrect. In this system $\langle (\Delta x)^{2}\rangle$ increases linearly with $t$. At large times the smaller spheres continue to move, but they stop contributing to $g^{(1s)}(q,t)$ or to the nominal $\langle (\Delta x)^{2}\rangle$ from eq \[eq:meansquare2\]. Experiment thus demonstrates that neither eq \[eq:gaussianform\] nor eq \[eq:g1ssimple\] is generally valid for probe diffusion in complex fluids. Even in a Newtonian fluid, a model system in which $g^{(1s)}(q,t)$ does not decay exponentially in time does not follow eqs \[eq:g1swrong\] and \[eq:meansquare2\], a result that is exactly as required by Doob’s theorem. Interpretations of quasielastic scattering spectra for probes in complex fluids, based on the Gaussian approximation of eq \[eq:g1swrong\], are therefore incorrect. Interpretations of spectra of probes in complex fluids in terms of particle displacements are properly based on eq \[eq:g1sanddisplacements\], which correctly reflects the non-Gaussian displacement distribution function of real probes. Interpretations of Quasielastic Scattering Spectra \[sectioninterpretations\] ============================================================================= First, every single physical $g^{(1s)}$ viewed only as a function of $t$ corresponds to a system in which the mean-square particle displacement increases linearly with time. However, the correspondence is not unique. The same $g^{(1s)}(q,t)$ may also correspond to systems in which particle thermal motions are more complex. In consequence, from a $g^{(1s)}(q,t)$ measured over a full range of times and a single $q$ one cannot infer how the particle displacement depends on time.
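Returning briefly to the bidisperse-sphere example of the preceding section, the artifact is easy to reproduce numerically; the sketch below uses assumed diffusion coefficients, amplitudes, and number fractions chosen only for illustration.

```python
# Illustrative bidisperse-sphere calculation (assumed values): the spectrum of eq
# (doubleexponential), the nominal mean-square displacement obtained by forcing eq
# (g1swrong) onto it via eq (meansquare2), and the true ensemble-average mean-square
# displacement, which grows strictly linearly in time.
import numpy as np

q = 1.0e7                         # 1/m, assumed
D1, D2 = 5.0e-12, 0.5e-12         # m^2/s, the two sphere species
A1, A2 = 0.5, 0.5                 # scattering amplitudes (equal, for simplicity)
w1, w2 = 0.5, 0.5                 # number fractions (equal, for simplicity)

t = np.array([1e-3, 3e-3, 1e-2, 3e-2, 1e-1])    # s
g1s = A1 * np.exp(-D1 * q**2 * t) + A2 * np.exp(-D2 * q**2 * t)

nominal_msd = -2.0 * np.log(g1s) / q**2         # eq (meansquare2)
true_msd = 2.0 * (w1 * D1 + w2 * D2) * t        # number-weighted average, linear in t

for ti, nm, tm in zip(t, nominal_msd, true_msd):
    print(f"t={ti:.0e}s  nominal={nm:.3e} m^2  true={tm:.3e} m^2  ratio={nm/tm:.2f}")
```

The printed ratio falls steadily below one at long times, which is exactly the spurious "slowing down" discussed above.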
This result has a purely mathematical basis, namely that the field correlation function can always be represented via a Laplace transform as $$g^{(1s)}(q,t) = \int_{0}^{\infty} d\Gamma \, A(\Gamma) \exp(- \Gamma t). \label{eq:laplace}$$ Here $\Gamma$ is a relaxation rate and $A(\Gamma)$ is the contribution of relaxations having decay rate $\Gamma$ to $g^{(1s)}(q,t)$. So long as the system does not have relaxational modes that have negative amplitudes, $A(\Gamma)$ is everywhere positive or zero. In this case, there is always a system having the same $g^{(1s)}(q,t)$ as the system of interest, and in which $ \langle (\Delta x(t) )^{2} \rangle$ increases linearly in time. The system can be physically constructed as a solution of polystyrene spheres of many different sizes. The composition of the mixture is determined by $A(\Gamma)$: One adds to the mixture just enough polystyrene spheres having diffusion coefficient $\Gamma/q^{2}$ so that their contribution to the scattering spectrum is $A(\Gamma)$. For each sphere, $\langle (\Delta x(t) )^{2} \rangle$ increases linearly in time, so $\langle (\Delta x(t) )^{2} \rangle$ of the mixture also increases linearly in time. Thus, an arbitrary physically-acceptable (i.e., $A(\Gamma) > 0 \ \forall \ \Gamma$) form for $A(\Gamma)$ corresponds as one non-unique possibility to a system in which $\langle (\Delta x(t) )^{2} \rangle$ increases linearly in time. It has repeatedly been found that $g^{(1s)}(q,t)$ decays in time as the stretched exponential of eq \[eq:gisqeext\]. If one interpreted this time dependence by applying eq \[eq:g1swrong\], one would conclude $$\langle (\Delta x(t) )^{2} \rangle = 2 \theta t^{\beta}/q^{2}. \label{eq:subdiffusive}$$ In the common case $\beta < 1$, from eq \[eq:subdiffusive\] one would infer that the mean-square particle displacement increases less rapidly at large times than would be the case for simple diffusion, a behavior that has been termed ‘subdiffusive’ motion. The inference is incorrect. A more reasonable interpretation for $\beta <1$ is that diffusion in the complex fluid is created by modes having a range of relaxation times, some longer than others, the contribution of the slower modes to the spectrum becoming more important at longer times. It is not suggested that there cannot exist subdiffusive motion. Such motion has unambiguously been observed experimentally. Amblard, et al.,[@amblard1996a] studied probe motion in f-actin solutions, using magnetic tweezers and video microscopy. Small-bead motion was diffusive; larger-bead diffusion was subdiffusive with $\beta \approx 3/4$. The non-classical drag forces for diffusive motion and for driven motion are the same, in the sense that the displacement under an applied force and the mean-square displacement for diffusion have the same linear or sublinear time dependences, depending on the probe size. There are sometimes suggestions that probe behavior should approach Stokes’ Law and the Stokes-Einstein equation as the size of the probes is increased. Amblard, et al.’s experiments show precisely the opposite behavior. Deviations from simple diffusion are larger for the larger particles. Whenever particle motion is subdiffusive, the light scattering spectrum will not be a simple exponential. The scattering spectrum will follow eq \[eq:g1sanddisplacements\], showing that the relationship between $g^{(1s)}(q,t)$ and $\langle (\Delta x(t) )^{2} \rangle$ is a neither-way street.
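The sphere-mixture construction described above can be carried out explicitly; the sketch below approximates an assumed stretched-exponential spectrum by a non-negative, discretized $A(\Gamma)$ (non-negative least squares is used here purely as a convenient numerical device, and the $\Gamma$ grid, $\beta$, and $\tau$ are assumptions of the example).

```python
# Illustrative realization of eq (laplace): fit a stretched-exponential spectrum with a
# non-negative superposition of simple exponentials, i.e. a discretized A(Gamma).
import numpy as np
from scipy.optimize import nnls

beta, tau = 0.6, 1.0                     # assumed stretching exponent and time scale
t = np.logspace(-2, 2, 200)              # delay times (arbitrary units)
target = np.exp(-(t / tau)**beta)

Gamma = np.logspace(-3, 3, 60)           # trial relaxation rates
M = np.exp(-np.outer(t, Gamma))          # M[i, j] = exp(-Gamma_j * t_i)
A, resid = nnls(M, target)               # non-negative amplitudes A(Gamma_j)

print("max |fit - target| =", np.abs(M @ A - target).max())
print("nonzero components:", np.count_nonzero(A > 1e-6))
# Each component j corresponds to spheres with D_j = Gamma_j / q^2, whose mean-square
# displacement 2 D_j t is linear in t; hence the mixture's ensemble-average
# mean-square displacement is also linear in t, despite the stretched spectrum.
```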
Just as one cannot in general calculate $\langle (\Delta x(t) )^{2} \rangle$ from $g^{(1s)}(q,t)$, so also one cannot in general calculate $g^{(1s)}(q,t)$ from $\langle (\Delta x(t) )^{2} \rangle$, because all higher moments of $P(\Delta x)$ are needed in order to calculate $g^{(1s)}(q,t)$. Discussion\[sectiondiscussion\] =============================== The primary objective of this paper was to correct a widespread literature error, namely the false belief that $g^{(1s)}(q,t)$ can in general be used to determine the mean-square displacement of probes in complex fluids. The belief appears to have arisen from a misreading of Berne and Pecora’s excellent monograph, Chapter 5, in which Berne and Pecora discuss the motions of monodisperse probe particles in simple Newtonian fluids by using the Langevin model. The spectra of monodisperse Langevin-model particles are without exception single exponentials. The calculation is correct, but does not refer to non-Newtonian fluids or polydisperse probe systems. Having corrected this error, I turn to several subsidiary points of misinterpretation. The functional form of $P(\Delta x, t)$ can be inferred, at least approximately, from the angular dependence of $g^{(1s)}(q,t)$, namely as seen in eq \[eq:g1sPDelta\] the correlation function $g^{(1s)}(q,t)$ at fixed $t$ is the spatial Fourier transform of $P(\Delta x, t)$. If $g^{(1s)}(q,t)$ is determined sufficiently accurately over an adequate range of $q$, an inverse spatial Fourier transform can take the experimenter back from $g^{(1s)}(q,t)$ to $P(\Delta x, t)$. To the author’s knowledge, this inversion has only been done for the ubiquitous polystyrene spheres in distilled water, for which $g^{(1s)}$ is a simple exponential $\exp(- \Gamma t)$, while $\Gamma$ is found to be accurately linear in $q^{2}$, showing that $P(\Delta x, t)$ in this system has the Gaussian form of eq \[eq:gaussianform\]. Measurements of the $q$-dependence of $g^{(1s)}$ for probes in complex fluids are less common, though note, e. g., Streletzky and Phillies[@streletzky1998q]. These authors found spectra having multiple relaxational modes, some of which had relaxation rates that did not scale linearly in $q^{2}$, proving that the modes did not correspond to Gaussian displacement distribution functions. Particle tracking is sometimes used to generate the simplified $\langle \Delta x(t)^{2} \rangle$, rather than the full $P(\Delta x, t)$. The simplification is potentially hazardous, because if one has not determined $P(\Delta x, t)$ one does not know if the particle motion process corresponds to simple diffusion. Any physically reasonable $P(\Delta x, t)$ has some second moment $\langle \Delta x(t)^{2} \rangle$, but if the form of $P(\Delta x, t)$ is unknown, one cannot tell if it is meaningful to characterize $P(\Delta x, t)$ by its second moment or by the corresponding nominal diffusion coefficient $$D(t) = \langle \Delta x(t)^{2} \rangle/ 2 t. \label{eq:timedependentD}$$ The notion that a diffusive process can be characterized by $\langle \Delta x(t)^{2} \rangle$ corresponds to particle diffusion in simple Newtonian liquids, in which $D(t)$ as defined here is a constant independent of time. In a complex fluid, characterization of probe motion via measurement of the second moment $\langle \Delta x(t)^{2} \rangle$ must be expected to be inadequate. The assertion that the central limit theorem guarantees that $P(\Delta x, t)$ is a Gaussian in $\Delta x$ is sometimes described as the “Gaussian Approximation”.
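The inverse spatial Fourier transform mentioned above is straightforward to carry out numerically; the sketch below inverts an assumed analytic $g^{(1s)}(q)$ at fixed $t$ (a two-Gaussian form standing in for data on a $q$ grid) and recovers the corresponding non-Gaussian $P(\Delta x, t)$.

```python
# Illustrative inversion of eq (g1sPDelta) at fixed t: a numerical cosine transform of
# g1s(q) over a q grid returns P(Delta x, t).  The "measured" g1s is an assumed analytic
# two-Gaussian mixture, used only to stand in for data.
import numpy as np

s1, s2, w1, w2 = 1.0, 3.0, 0.5, 0.5          # assumed mixture parameters at fixed t
q = np.linspace(0.0, 20.0, 2001)
dq = q[1] - q[0]
g1s = w1 * np.exp(-q**2 * s1**2 / 2) + w2 * np.exp(-q**2 * s2**2 / 2)

def cos_transform(x):
    # P(dx) = (1/pi) * int_0^inf g1s(q) cos(q dx) dq, using P(dx) = P(-dx); trapezoid rule
    vals = g1s * np.cos(q * x)
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dq / np.pi

for x in np.linspace(-6, 6, 7):
    p_true = (w1 * np.exp(-x**2 / (2 * s1**2)) / np.sqrt(2 * np.pi * s1**2)
              + w2 * np.exp(-x**2 / (2 * s2**2)) / np.sqrt(2 * np.pi * s2**2))
    print(f"dx={x:+.1f}  recovered={cos_transform(x):.4f}  true={p_true:.4f}")
```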
Experiments such as those summarized above prove that this assertion is incorrect. The central limit theorem (for random variables) and Doob’s theorem (for random processes) are well-known. Where do their invocations go wrong? The central limit theorem and Doob’s theorem are statements about the sum of a large number of identically distributed, independent random processes. As applied to the diffusion problem, the displacement $\Delta x$ during experimentally accessible times can be expressed as the sum of a large number of far smaller steps $\delta x$, each taken during a far smaller time interval $\delta t$. If a random variable $\Delta x$ is the sum of a large number of identically distributed *independent* variables, i. e., if $\delta x(t)$ is a Markoff process, it must in general be the case that $P(\Delta x, t)$ is a Gaussian in $\Delta x$. This rationale fails because it refers to a sum of *independent* random processes. The process that generates $\delta x$ for probes in a viscoelastic fluid is not a Markoff process, because the system has memory. The central limit theorem and Doob’s theorem are therefore not applicable. For probes in complex fluids, the processes generating the steps $\delta x$ are highly correlated, because the “random” forces that determine the $\delta x(t)$ are controlled by the shear modulus $G(t)$, which in a complex fluid has a long correlation time. Correspondingly, the time correlation function of the random force $\langle {\cal F}_{x}(0) {\cal F}_{x}(t) \rangle$ is long-lived, not $\sim \delta(t)$, and in the Langevin equation the friction force $f \dot{x}(t)$ is replaced with a memory function $\sim \int ds \langle {\cal F}_{x}(0) {\cal F}_{x}(s) \rangle \dot{x}(t-s)$. An alternative to the central limit theorem is the *small-$q$ approximation*. The nominal idea in the small-$q$ approximation is that the rhs of eq \[eq:g1sanddisplacements\] is a power series in $q$. If one went to sufficiently small $q$, one might hope that the $q^{2}$ term in the exponential would become dominant, so that eq \[eq:g1swrong\] would approach being valid. This hope is not met. For the simplest case of a mixture of diffusing particles, $g^{(1s)}(q,t)$ is in fact a power series in $q^{2} t$. If one goes to smaller $q$, in order to keep fitting spectra equally accurately one needs to observe the same fractional degree of decay of $g^{(1s)}(q,t)$; one must therefore go out to longer times. At those longer times, the ($q^{4}$) terms are as significant as they were at larger $q$, smaller $t$, and the same $q^{2} t$. Said differently, the coefficients of the correct Taylor series (in $q$) expansion of $g^{(1s)}(q,t)$ are time-dependent. In order for the lead term of the expansion to be dominant, the expansion must be limited not only to small $q$ but also to small $t$. If $t$ is large, no matter how small $q$ has been made, the higher-order in $q$ terms are as important as the lower-order terms. Only at small $q$ and small $t$ is a single-term small-$q$ expansion valid. The valid small-$q$ expansion is $1-q^{2} \langle \Delta x(t)^{2} \rangle/2$, which only describes the leading slope of $g^{(1s)}(q,t)$ at small times. Consider spectra described by eq \[eq:gisqeext\]. The exponential in eq \[eq:g1sanddisplacements\] scales as $q^{2}$ or is a power series in $q^{2}$, so $\theta$ should also be a power series in $q^{2}$, perhaps simply by being linear in $q^{2}$.
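The failure of the small-$q$ approximation is easy to exhibit numerically; in the sketch below (assumed diffusion coefficients and weights), the spectrum of the bidisperse mixture depends on $q$ and $t$ only through $q^{2}t$, so the half-decay time scales as $q^{-2}$ and the ratio of the $q^{4}$ term to the $q^{2}$ term in the exponent of eq \[eq:g1sanddisplacements\] is the same at every $q$.

```python
# Illustrative check (assumed values): at the half-decay time of the bidisperse
# spectrum, the q^4 term of eq (g1sanddisplacements) is equally important at every q.
import numpy as np
from scipy.optimize import brentq

D1, D2, w1, w2 = 5.0, 0.5, 0.5, 0.5          # arbitrary units, equal weights
Dbar = w1 * D1 + w2 * D2

def spectrum(q, t):
    return w1 * np.exp(-D1 * q**2 * t) + w2 * np.exp(-D2 * q**2 * t)

for q in (0.1, 1.0, 10.0):
    t_half = brentq(lambda t: spectrum(q, t) - 0.5, 1e-12, 1e12)
    q2_term = q**2 * Dbar * t_half                               # (1/2) q^2 <dx^2>
    q4_term = q**4 * t_half**2 * w1 * w2 * (D1 - D2)**2 / 2.0    # (1/24) q^4 (<dx^4>-3<dx^2>^2)
    print(f"q={q:5.1f}  q^2*t_half={q*q*t_half:.4f}  q4/q2 ratio={q4_term/q2_term:.4f}")
```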
Indeed, for probes in some but not other water: hydroxypropylcellulose solutions, Streletzky[@streletzky1998q] confirmed experimentally $\theta \sim q^{2}$ over a wide range of $q$. If $\theta$ were replaced with $\tau^{-\beta}$, one would have $$\tau \sim q^{-2/\beta}. \label{eq:tauq}$$ $\beta$ is often in the range 0.5-1.0, so $\tau$ often depends on $q$ as $q^{-3\pm 1}$. If one interpreted $\tau$ to be a relaxation time, the $q$-dependence from eq \[eq:tauq\] would be strange indeed: The relaxation would occur more rapidly over large distances (small $q$) than over short distances (large $q$). This strange $q$-dependence is simply an artifact of the choice of parameterization of $g^{(1s)}(q,t)$, and the identification of $\tau$ as a relaxation time. In terms of eq \[eq:gisqeext\], $(\theta, \beta)$ provides a natural parameterization while $(\tau,\beta)$ is less transparent. If mean relaxation times are inferred from the spectral time moments $$\langle T_{n} \rangle = \int_{0}^{\infty} t^{n} g^{(1s)}(q,t) dt \label{eq:Tmoments}$$ of $g^{(1s)}(q,t)$, the choice of parameterizations in eq \[eq:gisqeext\] has no consequences. The two parameterizations of $g^{(1s)}(q,t)$ must lead to the same $\langle T_{n} \rangle$. Spectra of diffusing probes showing two relaxations on very different time scales are sometimes interpreted in terms of caging and hopping relaxations. The notion is that the medium supplies regions of low potential energy within which probes are free to move (“caging”). The regions are separated by barriers of high potential energy, across which probes only pass on rare occasion (“hopping”). The short time-scale relaxation is said to correspond to caging, while the long time-scale relaxation is said to correspond to hopping. Computer simulation studies by Luo and Phillies and Luo, et al., test the caging-hopping interpretation[@luo1995a; @luo1996a]. These simulations represented Brownian particles moving through a square lattice or a random glass of Lennard-Jones force centers. The force centers were immobile. Probe motions were generated via the Metropolis algorithm. These studies differed from some earlier work in that they determined not only time-dependent mean-square displacements and effective diffusion coefficients but also obtained $P(\Delta r,t)$ and $g^{(1s)}(q,t)$. By varying the nominal temperature, trapping, hopping, and hindered diffusion behaviors were obtained. At low temperatures, probe particles explored the volume of their traps; after a certain relaxation time $\langle r^{2}(t) \rangle$ ceased to increase. At high temperatures, $P(\Delta r, t)$ was nearly Gaussian, with $\langle r^{2}(t) \rangle$ increasing linearly in time even at short times. Luo, et al., evaluated $g^{(1s)}(q,t)$ for $q^{-1}$ extending from a small fraction of the size of a single potential energy minimum out to distances substantially larger than a typical distance between force centers. At low and high temperatures, $g^{(1s)}(q,t)$ showed nearly exponential relaxations, though at small $T$ and small $q$ the relaxation fell to a non-zero baseline. The baseline was non-zero because the particles were permanently trapped in small isolated volumes of the system. At intermediate temperatures, relaxations were single-exponential at large $q$ but double-exponential at small $q$.
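A minimal sketch in the spirit of such simulations, and emphatically not a reproduction of Luo et al.'s model, is given below: a probe makes Metropolis moves on a frozen two-dimensional Gaussian random potential, and its displacement statistics yield $P(\Delta x, t)$ and, via eq \[eq:g1sandr\], $g^{(1s)}(q,t)$. Lattice size, potential amplitude, temperature, and the wave vector are all assumed.

```python
# Illustrative Metropolis sketch (assumed parameters): a probe on a frozen random
# potential; displacement statistics give <dx^2>(t) and g1s(q,t).
import numpy as np

rng = np.random.default_rng(3)
L = 64                                      # periodic lattice
U = rng.normal(scale=1.0, size=(L, L))      # frozen random potential
kT = 0.4                                    # intermediate temperature (assumed)
n_walkers, n_steps = 500, 1000

moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
disp = np.zeros((n_walkers, n_steps + 1), dtype=int)    # unwrapped x-displacement

for w in range(n_walkers):
    i, j = rng.integers(0, L, size=2)
    dirs = rng.integers(0, 4, size=n_steps)
    us = rng.random(n_steps)
    x = 0
    for s in range(n_steps):
        di, dj = moves[dirs[s]]
        i2, j2 = (i + di) % L, (j + dj) % L
        dU = U[i2, j2] - U[i, j]
        if dU <= 0 or us[s] < np.exp(-dU / kT):         # Metropolis acceptance
            i, j, x = i2, j2, x + di
        disp[w, s + 1] = x

q = 2 * np.pi / 8.0
for s in (10, 100, 1000):
    dx = disp[:, s].astype(float)
    print(f"t={s:5d}  <dx^2>={np.mean(dx**2):7.2f}  g1s={np.mean(np.cos(q * dx)):.3f}")
```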
At the same intermediate temperatures, $P(\Delta r, t)$ was radically non-Gaussian, with local maxima and minima created by local potential energy minima, potential energy saddle points, and times required to traverse local energy maxima. Other, physically different, systems also give bimodal spectra. In contrast to Luo, et al.’s probes moving through a fixed matrix, in which relaxations are only bidisperse for some values of $q$, relaxations of dilute bidisperse suspensions are double-exponential at all $q$. An alternative model system in which monodisperse particles show several very different classes of relaxation behavior is provided by Glotzer, et al.’s[@glotzer2000a] computer simulations of three-dimensional glasses, in which one finds distinct long-lived populations of slow and fast-moving particles, with the immobile particles in clumps and the rapidly moving particles lying in thin ribbons. Thus, in order to distinguish between systems containing species with two different dynamic behaviors, and systems in which there is local trapping with escapes from the traps at longer times, it is experimentally necessary to study $g^{(1s)}(q,t)$ over a wide range of $q$. Observations at fixed $q$ of double-exponential relaxations do not reveal whether one is seeing trapping with hopping, or whether the system is in some sense dynamically bidisperse. Furthermore, in the cases in which $g^{(1s)}(q,t)$ was observed by Luo, et al., to be very nearly the sum of two exponentials, $P(\Delta r,t)$ on interesting distance scales had an elaborate dependence on $r$ with multiple maxima and deep minima. The interpretation that a biexponential $g^{(1s)}(q,t)$ must correspond to a $P(\Delta r,t)$ that is a sum of two Gaussians, each with a mean-square width increasing linearly in time, is categorically disproved by Luo, et al.’s measured forms for $P(\Delta r,t)$. Finally, the observation that quasielastic scattering does not determine the mean-square probe displacement certainly does not mean that probe diffusion is ineffective as an experimental technique. Probe diffusion measurements can indeed be used to obtain novel information about complex fluids. The richness of the revealed information corresponds to the depth with which models for probe motion are constructed. As a positive conclusion, two successful applications of probe diffusion are noted: \(i) A long-standing question in the study of surfactant solutions is the determination of the aggregation number $n$ of surfactant molecules in micelles. One of many approaches to this question has been to use quasielastic scattering to determine an effective hydrodynamic radius of the micelles. Perhaps after some hydrodynamic modeling to account for micelle shape, spherical micelles being the simplest case, the measured diffusion coefficient can be transformed to an apparent hydrodynamic radius $r_{H}$, to a hydrodynamic volume $V_{h}$, and (taking into account the surfactant density and molecular weight) finally to a nominal aggregation number. This procedure was criticized by Kratohvil[@kratohvil1980a], who noted that the hydrodynamic volume of the micelle might well include solvent molecules rather than being composed of pure surfactant. Probe diffusion experiments prove that Kratohvil was correct. The diffusion of probe particles through micellar solutions is retarded by hydrodynamic and direct interactions between the micelles and the probe particles. The degree of retardation is determined by the volume fraction of micelles in the solution.
By combining quasielastic scattering measurements on surfactant solutions and on surfactant-probe mixtures, quasielastic scattering has been used to determine the size, volume fraction, and thus number density of micelles in solution, leading to determinations of the micellar aggregation number and, independently, the (substantial) degree of hydration of micelles, as seen in studies by Phillies and collaborators[@phillies1993a]. \(ii) Diffusion of mesoscopic probe particles in polymer solutions is not Stokes-Einsteinian. $D$ is not determined by the macroscopic viscosity $\eta$. Therefore, one cannot use the Stokes-Einstein equation for sphere diffusion $$D = \frac{k_{B}T}{6 \pi \eta R} \label{eq:SEeq}$$ (where $k_{B}$ is Boltzmann’s constant, $T$ is the absolute temperature, $\eta$ is the solution viscosity, and $R$ is the sphere radius) to determine the size of probe particles in polymer solutions. However, by using probes of known size, Ullmann and Phillies[@ullmann1983a] were able to quantitate the degree of failure of the Stokes-Einstein equation for their polymer solutions, allowing them to measure the size of unknown probe particles in the same solutions. This approach permitted a quantitative study of the extent of polymer adsorption onto particles chosen for their ability to bind polymers in solution. [10.]{} G. D. J. Phillies, Analyt. Chem. **62**, 1049A (1990). F. R. Hallett and A. L. Gray, Biochim. Biophys. Acta **343**, 648 (1974). D. N. Turner and F. R. Hallett, Bioch. Biop. Acta **451**, 305 (1976). G. D. J. Phillies, *Phenomenology of Polymer Solution Dynamics*, (Cambridge U. P., Cambridge, 2011), Ch. 9. G. D. J. Phillies, J. Phys. Chem. [**85**]{}, 2838 (1981). T.-H. Lin, Makromol. Chem. **187**, 1189 (1986). C. F. Schmidt, M. Baermann, G. Isenberg and E. Sackmann, Macromolecules **22**, 3638 (1989). G. D. J. Phillies, J. Stott, and S. Z. Ren, J. Phys. Chem. **97**, 11563 (1993). K. Ullmann, G. S. Ullmann, and G. D. J. Phillies, J. Coll.Interf. Sci., **105**, 315 (1985). K. Luby-Phelps, P. E. Castle, D. L. Taylor, and F. Lanni, Proc. Natl. Acad. Sci. **84**, 4910 (1987). S. B. Dierker, R. Pindak, R. M. Fleming, I. K. Robinson, and L. Berman, Phys. Rev. Lett. **75**, 449 (1995). D. Magde, E. L. Elson, and W. W. Webb, Phys. Rev. Lett. **29**, 705 (1972). G. D. J. Phillies, Biopolymers **14**, 499 (1975). B. A. Scalettar, J. E. Hearst and M. P. Klein, Macromolecules **22**, 4550 (1989). T. G. Mason, H. Gang, and D. A. Weitz, J. Mol. Struct. [**383**]{}, 180 (1996) G. D. J. Phillies, Electrophoresis [**33**]{}, 1008 (2012). T. C. Laurent, I. Bjork, A. Pietruszkiewicz, and H. Persson, Biochim. Biophys.Acta **78**, 351 (1963); T. C. Laurent and H. Persson, Biochim. Biophys. Acta **83**, 141 (1964). X. Ye, P. Tong, and L. J. Fetters, Phys. Rev. Lett. **31**, 6534 (1998). L. A. Hough and H. D. Ou-Yang, J. Nanoparticle Research **1**, 495 (1999). F. G. Schmidt, B. Hinner and E. Sackmann, Phys. Rev. E **61**, 5646 (2000); F. G. Schmidt, B. Hinner, E. Sackmann and J. X. Tang, Phys. Rev. E **62**, 5509 (2000). F. Amblard, A. C. Maggs, B. Yurke, A. N. Pargellis and S. Leibler, Phys. Rev. Lett.**77**, 4470 (1996). P. Tapadia and S. Q. Wang, Phys. Rev. Lett. **96**, 016001 (2006). B. Crosignani, P. DiPorto and M. Bertoleotti, *Statistical Properties of Scattered Light* (New York, NY: Academic Press, 1975). G. D. J. Phillies, J. Chem. Phys. **60**, 976 (1974) G. D. J. Phillies, J. Chem. Phys. **60**, 983 (1974). G. D. J. Phillies, J. Chem. Phys. **122**, 224905 (2005). G. D. J. 
Phillies, J. Chem. Phys. **137**, 124901 (2012). H. Z. Cummins, N. Knable, and Y. Yeh, Phys. Rev. Lett. [**12**]{}, 150 (1964). J. L. Doob, Ann. Math. **43**, 351 (1942). B. J. Berne and R. Pecora.*Dynamic Light Scattering: With Applications in Chemistry, Biology and Physics.* (New York: Wiley, 1976). B. E. Dahneke, *Measurement of Suspended Particles by Quasi-Elastic Light Scattering*, Wiley: New York (1983). J. Apgar, Y. Tseng, E. Federov, M. B. Herwig, S. C. Almo, and D. Wirtz, Biophys. J. **79**, 1095 (2000). Y. Tseng and D. Wirtz., Biophys. J. **81**, 1643 (2001). J. Xu, Y. Tseng, C. J. Carriere, and D. Wirtz, Biomacromolecules **3**, 92 (2002). G. D. J. Phillies and M. Lacroix, J. Phys. Chem. B [**101**]{}, 39 (1997). G. D. J. Phillies, R. O’Connell, P. Whitford and K. A. Streletzky, J. Chem. Phys. **119**, 9903 (2003). K. A. Streletzky and G. D. J. Phillies, J Polym. Sci.: Part B Polym. Phys. **36**, 3087 (1998). L.-S. Luo, G. D. J.  Phillies, L. Colonna-Romano, and H. Gould, Phys. Rev. E [**51**]{}, 43 (1995). L.-S. Luo and G. D. J.  Phillies, J. Chem. Phys. [**105**]{}, 598 (1996). S. Glotzer, V. N. Novikoff, and T. B. Schroder, J. Chem. Phys. [**112**]{}, 509 (2000). J. P. Kratohvil, J. Colloid Interface Sci. [**75**]{}, 271 (1980). G. D. J. Phillies, J. Stott, and S. Z. Ren, J. Phys. Chem. [**97**]{}, 11563 (1993); K. Streletzky and G. D. J. Phillies, Langmuir [**11**]{}, 42 (1995); G. D. J. Phillies, R. H. Smith, K. Strang, and N. V. Sushkin, Langmuir, [**11**]{}, 3408 (1995); G. D. J. Phillies and J. Yambert, Langmuir [**12**]{}, 3431-3436 (1996); N. V. Sushkin, D. Clomenil, J. Ren, and G. D. J. Phillies, Langmuir [**15**]{}, 3492 (1999). G. S. Ullmann and G. D. J. Phillies, Macromolecules [**16**]{}, 1947 (1983).
1
--- abstract: 'The classical matrix-tree theorem discovered by G. Kirchhoff in 1847 relates the principal minor of the $n \times n$ Laplace matrix to a particular sum of monomials of matrix elements indexed by directed trees with $n$ vertices and a single sink. In this paper we consider a generalization of this statement: for any $k \ge n$ we define a degree $k$ polynomial $\det_{n,k}$ of matrix elements and prove that this polynomial applied to the Laplace matrix gives a sum of monomials indexed by acyclic graphs with $n$ vertices and $k$ edges.' address: 'National Research University Higher School of Economics, 119048, 6 Usacheva str., Moscow, Russia, and Independent University of Moscow, 119002, 11 B.Vlassievsky per., Moscow, Russia' author: - Yurii Burman title: 'Abstract matrix-tree theorem' --- Introduction and the main results ================================= Principal definitions {#SSec:Def} --------------------- Denote by $\Gamma_{n,k}$ the set of all directed graphs with $n$ vertices numbered $1 {, \dots ,} n$ and $k$ edges numbered $1 {, \dots ,} k$. We will write $e = [ab]$ if $e$ is an edge from vertex $a$ to vertex $b$; in particular, $[aa]$ means a loop attached to the vertex $a$. We will treat elements of $\Gamma_{n,k}$ as sequences of edges: $G = (e_1 {, \dots ,} e_k) \in \Gamma_{n,k}$ means a graph where the edge $e_\ell$ has number $\ell$, for all $\ell = 1 {, \dots ,} k$. By a slight abuse of notation $e \in G$ will mean that $e$ is an edge of $G$ (regardless of number). Let $G \in \Gamma_{n,k}$ and $e \in G$. By $G \setminus e$, $G/e$ and $G_e^{\vee}$ we will denote the graph $G$ with $e$ deleted, $e$ contracted and $e$ reversed, respectively. Note for correctness that since $G \setminus e \in \Gamma_{n,k-1}$, one has to change the edge numbering in $G$ after deleting $e$: namely, if $e$ bears number $s$ in $G$ then the numbers of the edges are preserved if they are less than $s$ and lowered by $1$ otherwise. For $G/e \in \Gamma_{n-1,k-1}$ the same renumbering is applied both to the edges and to the vertices. The contracted edge $e$ should not be a loop. A graph $H \in \Gamma_{n,m}$ is called a subgraph of $G \in \Gamma_{n,k}$ (notation $H \subseteq G$) if $H$ is obtained from $G$ by deletion of several (possibly zero) edges. Denote by $\Graph_{n,k}$ the vector space over $\Complex$ spanned by $\Gamma_{n,k}$. The direct sum $\Graph_n {\stackrel{\mbox{\scriptsize def}}{=}}\bigoplus_{k=0}^\infty \Graph_{n,k}$ bears the structure of an associative algebra: one defines a product of the graphs $G_1 = (e_1 {, \dots ,} e_{k_1}) \in \Gamma_{n,k_1}$ and $G_2 = (h_1 {, \dots ,} h_{k_2}) \in \Gamma_{n,k_2}$ as $G_1*G_2 {\stackrel{\mbox{\scriptsize def}}{=}}(e_1 {, \dots ,} e_{k_1}, h_1 {, \dots ,} h_{k_2}) \in \Gamma_{n,k_1+k_2}$; then $*$ is extended to the whole $\Graph_n$ as a bilinear operation. Note that $G_1*G_2 \ne G_2*G_1$ (the edges are the same but the edge numbering is different), so the algebra $\Graph_n$ is not commutative. We call a graph $G \in \Gamma_{n,k}$ [*strongly connected*]{} if any two of its vertices can be joined by a directed path. A graph is [*strongly semiconnected*]{} if each of its connected components (in the topological sense) is strongly connected; equivalently, if each of its edges is part of a directed cycle. A strongly semiconnected graph may contain isolated vertices (i.e.
vertices not incident to any edge); by $\SSC_{n,k}^{\{i_1 {, \dots ,} i_s\}}$ we denote the set of strongly semiconnected graphs $G \in \Gamma_{n,k}$ such that the vertices $i_1 {, \dots ,} i_s$, and only they, are isolated. By $\SSC_{n,k} {\stackrel{\mbox{\scriptsize def}}{=}}\bigcup_{I \subset \{1 {, \dots ,} n\}} \SSC_{n,k}^I$ we will denote the set of all strongly semiconnected graphs. We call a graph $G \in \Gamma_{n,k}$ [*acyclic*]{} if it contains no directed cycles. Recall that a vertex $a$ of the graph $G$ is called a [*sink*]{} if $G$ has no edges starting from $a$. Note that an isolated vertex is a sink but a vertex with a loop attached to it is not. We denote by $\AC_{n,k}^{\{i_1 {, \dots ,} i_s\}}$ the set of acyclic graphs $G \in \Gamma_{n,k}$ such that the vertices $i_1 {, \dots ,} i_s$, and only they, are sinks. By $\AC_{n,k} {\stackrel{\mbox{\scriptsize def}}{=}}\bigcup_{I \subset \{1 {, \dots ,} n\}} \AC_{n,k}^I$ we will denote the set of all acyclic graphs. \[Ex:SSC\] If a vertex of a strongly semiconnected graph $G \in \SSC_{n,k}^I$ is not isolated then there is at least one edge starting from it; so if $I = \{i_1 {, \dots ,} i_s\}$ and $\SSC_{n,k}^I \ne {\varnothing}$ then $k \ge n-s$. Let $k=n-s$. If $G \in \SSC_{n,k}^I$ then for any vertex $i \notin I$ there is exactly one edge $[i, \sigma(i)]$ starting at it and exactly one edge $[j, \sigma(j)] = [j,i]$ finishing at it (that is, $\sigma(j) = i$). Hence $\sigma$ is a bijection $\{1 {, \dots ,} n\} \setminus I \to \{1 {, \dots ,} n\} \setminus I$ (a permutation of $k = n-s$ points). Geometrically $G$ is a union of disjoint directed cycles passing through all vertices except $i_1 {, \dots ,} i_s$. \[Ex:Forest\] Let $n > k$; then any graph $G \in \Gamma_{n,k}$ contains at least $n-k$ connected components. If $G$ is acyclic then each of its connected components contains a sink. So for $I = \{i_1 {, \dots ,} i_s\}$ if $\AC_{n,k}^I \ne {\varnothing}$ then $k \ge n-s$. Let $k = n-s$. Then the elements of $\AC_{n,k}^I$ are forests of $s$ components, each component containing exactly one vertex $i_\ell \in I$ (for some $\ell = 1 {, \dots ,} s$), which is its only sink. This component is a tree, and each of its edges is directed towards the sink $i_\ell$. Determinants and minors ----------------------- Let $W = (w_{ij})$ be an $n \times n$-matrix; denote by $\langle W \vert: \Graph_{n,k} \to \Complex$ the linear functional acting on the basic element $G \in \Gamma_{n,k}$ as $$\langle W \mid G\rangle {\stackrel{\mbox{\scriptsize def}}{=}}\prod_{[ij] \in G} w_{ij}.$$ Note that $\langle W \mid G\rangle$ is independent of the edge numbering in $G$; in particular, $\langle W \mid G_1*G_2 - G_2*G_1\rangle = 0$ for all $G_1, G_2$. For a function $f: \bigcup_s \Gamma_{n,s} \to \Complex$ and a graph $G \in \Gamma_{n,k}$ introduce the notation $$\label{Eq:SumSubgr} \SumSub(f;G) {\stackrel{\mbox{\scriptsize def}}{=}}\sum_{H \subseteq G} f(H).$$ For a set of graphs $\mathfrak B \subset \Gamma_{n,k}$ denote $$\begin{aligned} &U(\mathfrak B) {\stackrel{\mbox{\scriptsize def}}{=}}\sum_{G \in \mathfrak B} G \in \Graph_{n,k}, \\ &X(\mathfrak B) {\stackrel{\mbox{\scriptsize def}}{=}}\sum_{G \in \mathfrak B} (-1)^{\beta_0(G)} G \in \Graph_{n,k}; \end{aligned}$$ $\beta_0(G)$ here means the $0$-th Betti number of $G$, i.e. the number of its connected components (in the topological sense).
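For readers who want to experiment with this notation, the following minimal Python sketch (an illustration added here, not part of the original text) evaluates the pairing $\langle W \mid G\rangle$ and the sign $(-1)^{\beta_0(G)}$ entering the definition of $X(\mathfrak B)$; the representation of a graph as a plain edge list with $1$-based vertices is an assumption of the sketch.

\begin{verbatim}
def monomial(W, G):
    """<W | G>: the product of w_{ij} over the directed edges [ij] of G.
    W is a dict {(i, j): value}; G is a list of edges (i, j), 1-based."""
    prod = 1
    for (i, j) in G:
        prod *= W[(i, j)]
    return prod

def beta0(n, G):
    """0-th Betti number: connected components, ignoring edge directions."""
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (i, j) in G:
        parent[find(i)] = find(j)
    return len({find(v) for v in range(1, n + 1)})

# Example: the directed 3-cycle contributes (-1)^{beta0} w_{12} w_{23} w_{31}
# to X of any set of graphs containing it.
W = {(1, 2): 2, (2, 3): 3, (3, 1): 5}
G = [(1, 2), (2, 3), (3, 1)]
print((-1) ** beta0(3, G) * monomial(W, G))   # -> -30
\end{verbatim}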
\[Df:Minors\] The element $$\det_{n,k}^I {\stackrel{\mbox{\scriptsize def}}{=}}\frac{(-1)^k}{k!} X(\SSC_{n,k}^I) \in \Graph_{n,k}$$ is called a [*universal diagonal $I$-minor*]{} of degree $k$; in particular, $\det_{n,k}^{\varnothing}$ is called a [*universal determinant*]{} of degree $k$. The element $$\det_{n,k}^{i/j} {\stackrel{\mbox{\scriptsize def}}{=}}\frac{(-1)^k}{k!} X(\{G \in \Gamma_{n,k} \mid ([ij])*G \in \SSC_{n,k+1}^{\varnothing}\})$$ is called a [*universal (codimension $1$) $(i,j)$-minor*]{} of degree $k$. \[Ex:Det\] Example \[Ex:SSC\] implies that if $I = \{i_1 {, \dots ,} i_s\}$ and $k < n-s$ then $\det_{n,k}^I = 0$. Let $k = n$ and $I = {\varnothing}$. By Example \[Ex:SSC\] the graphs $G \in \SSC_{n,n}^{\varnothing}$ are in one-to-one correspondence with permutations $\sigma$ of $\{1 {, \dots ,} n\}$. It is easy to see that $(-1)^{\beta_0(G)}$ is equal to $(-1)^n$ if $\sigma$ is even and to $-(-1)^n$ if it is odd. Geometrically $G$ is a union of disjoint directed cycles. If the order of vertices in all the cycles is fixed, then there are $n!$ ways to assign numbers $\{1 {, \dots ,} n\}$ to the edges; this implies the equality $$\langle W \mid \det_{n,n}^{\varnothing}\rangle = \sum_\sigma (-1)^{\text{parity of $\sigma$}} w_{1\sigma(1)} \dots w_{n \sigma(n)} = \det W$$ for any matrix $W = (w_{ij})$. Similarly, for any set $I = \{i_1 {, \dots ,} i_s\}$ the value $\nolinebreak[0]\langle W \mid \det_{n,n-s}^I\rangle$ is equal to the diagonal minor of the matrix $W$ obtained by deletion of the rows and the columns $i_1 {, \dots ,} i_s$. Also $\langle W \mid \det_{n,n-1}^{i/j}\rangle$ is equal to the codimension $1$ minor of $W$ obtained by deletion of the row $i$ and the column $j$. This explains the terminology of Definition \[Df:Minors\]. The elements $\det_{n,k}^I$ exhibit some properties one would expect from determinants and minors: \[Pp:DetProp\] 1. \[It:RowCol\] (generalized row and column expansion) $$\label{Eq:DetViaMinors} \det_{n,k}^{\varnothing}= \frac{1}{k} \sum_{i,j=1}^n ([ij]) * \det_{n,k-1}^{i/j}.$$ 2. \[It:PDer\] (partial derivative with respect to a diagonal matrix element) Let matrix elements $w_{ij}$, $i,j = 1 {, \dots ,} n$, of the matrix $W$ be independent (commuting) variables. Then for any $i = 1 {, \dots ,} n$ and any $m = 1 {, \dots ,} k$ one has $$\label{Eq:Deriv} \frac{\partial^m}{\partial w_{ii}^m} \langle W \mid \det_{n,k}^{\varnothing}\rangle = \langle W \mid \det_{n,k-m}^{\varnothing}+ \det^{\{i\}}_{n,k-m}\rangle.$$ See [@Epstein Lemma 86] for a formula similar to (\[Eq:Deriv\]) (with $m=1$ and a finite difference instead of a derivative). Main results {#SSec:Main} ------------ Let $G \in \Gamma_{n,k}$, $p \in \{1 {, \dots ,} k\}$ and $a,b \in \{1 {, \dots ,} n\}$. Denote by $R_{ab;p} G \in \Gamma_{n,k}$ the graph obtained from $G$ by replacement of its $p$-th edge by the edge $[ab]$ bearing the same number $p$. Consider now a linear operator $B_p: \Graph_{n,k} \to \Graph_{n,k}$ acting on every basic element $G \in \Gamma_{n,k}$ as follows: $$B_p(G) = \begin{cases} G, &\text{if the $p$-th edge of $G$ is not a loop},\\ -\sum_{b \ne a} R_{ab;p} G, &\text{if the $p$-th edge of $G$ is the loop $[aa]$}. \end{cases}$$ In particular, $B_p = 0$ if $n=1$ (and $k > 0$). \[Df:Laplace\] The product $\Lapl {\stackrel{\mbox{\scriptsize def}}{=}}B_1 \dots B_k: \Graph_{n,k} \to \Graph_{n,k}$ is called the [*Laplace operator*]{}. If $n = 1$ and $k > 0$ then $\Delta = 0$; also take $\Delta = \name{id}$ by definition if $k = 0$.
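As an illustration of Definition \[Df:Laplace\] (a sketch added here, not part of the original text), the operators $B_p$ and $\Lapl$ can be modelled on formal linear combinations of graphs stored as dictionaries from edge tuples to coefficients; the data layout and the $0$-based edge positions are assumptions of the sketch.

\begin{verbatim}
from collections import defaultdict

def B(p, combo, n):
    """Apply B_p (p is a 0-based edge position here) to a formal sum of
    graphs.  combo maps a graph, i.e. a tuple of directed edges (i, j)
    with vertices 1..n, to its coefficient."""
    out = defaultdict(int)
    for G, c in combo.items():
        a, b = G[p]
        if a != b:
            out[G] += c                       # p-th edge is not a loop
        else:
            for v in range(1, n + 1):         # loop [aa]: minus the sum of
                if v != a:                    # all replacements [av], v != a
                    out[G[:p] + ((a, v),) + G[p + 1:]] -= c
    return dict(out)

def laplace(combo, n, k):
    """The Laplace operator: the composition B_1 ... B_k."""
    for p in range(k):
        combo = B(p, combo, n)
    return combo

# Example: a single loop at vertex 1 in Gamma_{2,1} is sent to -([12]);
# for n = 1 the inner sum is empty, consistent with B_p = 0 in that case.
print(laplace({((1, 1),): 1}, n=2, k=1))      # -> {((1, 2),): -1}
\end{verbatim}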
The operators $B_p$, $p = 1 {, \dots ,} k$, are commuting idempotents: $B_p^2 = B_p$ and $B_p B_q = B_q B_p$ for all $p, q = 1 {, \dots ,} k$. Therefore, $\Lapl$ is an idempotent, too: $\Lapl^2 = \Lapl$. Let $W = (w_{ij})_{i,j=1}^n$ be an $n \times n$-matrix, as in Example \[Ex:Det\] and Proposition \[Pp:DetProp\]. Denote by $\widehat{W}$ the corresponding Laplace matrix, i.e. a matrix with nondiagonal elements $w_{ij}$ ($1 \le i \ne j \le n$) and diagonal elements $-\sum_{j \ne i} w_{ij}$ ($1 \le i \le n$). It follows from Definition \[Df:Laplace\] that $$\langle \widehat{W} \mid X \rangle = \langle W \mid \Lapl(X) \rangle$$ for any $X \in \Graph_{n,k}$. This equation explains the name “Laplace operator” for $\Lapl$. Note that since $\Lapl(X)$ is a sum of graphs containing no loops, one is free to change diagonal entries of $W$ in the right-hand side; in particular, one can use $\widehat{W}$ instead. The main results of this paper are the following two theorems: \[Th:Diag\] $$\label{Eq:LaplDiag} \Lapl(\det_{n,k}^I) = \frac{(-1)^n}{k!} U(\AC_{n,k}^I).$$ and \[Th:Codim1\] $$\label{Eq:LaplCodim1} \Lapl(\det_{n,k}^{i/j}) = \frac{(-1)^n}{k!} U(\AC_{n,k}^{\{i\}}).$$ Applying the functional $\langle \widehat{W}\vert\relax$ to equation (\[Eq:LaplDiag\]) with $k = n-s$ and to equation (\[Eq:LaplCodim1\]) with $k = n-1$, and using Examples \[Ex:Det\] and \[Ex:Forest\], one obtains \[Cr:Diag\] The diagonal minor of the Laplace matrix obtained by deletion of the rows and columns numbered $i_1 {, \dots ,} i_s$ is equal to $\frac{(-1)^{n-s}}{(n-s)!}\langle W \mid U(\AC_{n,n-s}^I)\rangle$, that is, to $(-1)^{n-s}$ times the sum of monomials $w_{a_1 b_1} \dots w_{a_{n-s} b_{n-s}}$ such that the graph $([a_1 b_1] {, \dots ,} [a_{n-s} b_{n-s}])$ is an $s$-component forest where every component contains exactly one vertex $i_\ell$ for some $\ell = 1 {, \dots ,} s$, and all the edges of the component are directed towards $i_\ell$. and \[Cr:Codim1\] The minor of the Laplace matrix obtained by deletion of its $i$-th row and its $j$-th column is equal to $(-1)^{n-1}$ times the sum of monomials $w_{a_1 b_1} \dots w_{a_{n-1} b_{n-1}}$ such that the graph $([a_1 b_1] {, \dots ,} [a_{n-1} b_{n-1}])$ is a tree with all the edges directed towards the vertex $i$. Corollaries \[Cr:Diag\] and \[Cr:Codim1\] are particular cases of the celebrated matrix-tree theorem first discovered by G. Kirchhoff [@Kirch] in 1847 (for symmetric matrices and diagonal minors of codimension $1$) and proved in its present form by W. Tutte [@TutteMTT]. Consider now the following functions on $\Gamma_{n,k}$: $$\begin{aligned} \sigma(G) &= \begin{cases} (-1)^{\beta_1(G)}, &G \in \Gamma_{n,k} \text{ is strongly semiconnected},\\ 0 &\text{otherwise}, \end{cases}\\ \text{and\hspace{3.5cm}}\\ \alpha(G) &= \begin{cases} (-1)^k, &G \in \Gamma_{n,k} \text{ is acyclic},\\ 0 &\text{otherwise}. \end{cases} \end{aligned}$$ Theorem \[Th:Diag\] follows from the two equivalent statements (see Section \[Sec:Proofs\] for details): \[Th:Direct\] $\SumSub(\alpha;G) = (-1)^k \sigma(G)$ for $G \in \Gamma_{n,k}$. $\SumSub(\sigma;G) = (-1)^k\alpha(G)$ for $G \in \Gamma_{n,k}$. Here $\SumSub$ is summation over subgraphs, as defined by (\[Eq:SumSubgr\]). These theorems are essentially [@Bernardi Proposition 6.16]. We will nevertheless give their proofs in Section \[Sec:Proofs\], thus answering a request for a direct proof expressed in [@Bernardi] (the original proof in [@Bernardi] is a specialization of a much more general identity).
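Theorem \[Th:Direct\] is easy to test by brute force on small examples. The following Python sketch (an illustration, not part of the original text; the edge-list representation and the particular test graph are assumptions) checks the identity $\SumSub(\alpha;G) = (-1)^k \sigma(G)$ for one strongly connected graph with $n=3$ vertices and $k=4$ edges.

\begin{verbatim}
from itertools import combinations

def is_acyclic(n, edges):
    """True iff the directed graph (vertices 1..n) has no directed cycle;
    a loop (i, i) counts as a cycle."""
    adj = {v: [] for v in range(1, n + 1)}
    for (i, j) in edges:
        adj[i].append(j)
    color = {v: 0 for v in range(1, n + 1)}   # 0 white, 1 grey, 2 black
    def dfs(v):
        color[v] = 1
        for w in adj[v]:
            if color[w] == 1 or (color[w] == 0 and dfs(w)):
                return True                   # found a directed cycle
        color[v] = 2
        return False
    return not any(color[v] == 0 and dfs(v) for v in range(1, n + 1))

def beta0(n, edges):
    """Connected components in the topological sense (directions ignored)."""
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (i, j) in edges:
        parent[find(i)] = find(j)
    return len({find(v) for v in range(1, n + 1)})

# A strongly connected graph with n = 3 vertices and k = 4 edges:
n, G = 3, [(1, 2), (2, 3), (3, 1), (1, 3)]
k = len(G)
lhs = sum((-1) ** m for m in range(k + 1)
          for H in combinations(G, m) if is_acyclic(n, list(H)))
beta1 = k - n + beta0(n, G)                   # first Betti number of G
assert lhs == (-1) ** k * (-1) ** beta1       # SumSub(alpha;G) = (-1)^k sigma(G)
\end{verbatim}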
A digression: undirected graphs and the universal Potts partition function -------------------------------------------------------------------------- Denote by $\Upsilon_{n,k}$ the set of all [*undirected*]{} graphs with $n$ vertices numbered $1 {, \dots ,} n$ and $k$ edges numbered $1 {, \dots ,} k$. Denote by $\lmod \,\cdot\,\rmod: \Gamma_{n,k} \to \Upsilon_{n,k}$ the “forgetful” map replacing every edge by its undirected version; the edge numbering is preserved. By $\Undir_{n,k}$ denote the vector space spanned by $\Upsilon_{n,k}$; then $\lmod \,\cdot\,\rmod$ is extended to the linear map $\Graph_{n,k} \to \Undir_{n,k}$. The notion of a subgraph and the notation $\SumSub$ (see (\[Eq:SumSubgr\])) for undirected graphs are similar to those for $\Graph_{n,k}$. One can also define the operators $B_p: \Undir_{n,k} \to \Undir_{n,k}$, $p = 1 {, \dots ,} k$, and the Laplace operator $\Delta: \Undir_{n,k} \to \Undir_{n,k}$ for undirected graphs exactly as in Definition \[Df:Laplace\]. For any $G \in \Undir_{n,k}$ consider the two-variable polynomial $$\label{Eq:DefPotts} Z_G(q,v) = \SumSub(q^{\beta_0(H)} v^{\#\text{of edges of $H$}};G),$$ called the [*Potts partition function*]{}. It is related [@Sokal Eq.(2.26)] to the Tutte polynomial $T_G$ of the graph $G$ as $$T_G(x,y) = (x-1)^{-\beta_0(G)} (y-1)^{-n} Z_G((x-1)(y-1), y-1).$$ Values of $Z_G$ at some points have a special combinatorial interpretation; in particular, \[Pp:SpecVal\] $$\begin{aligned} Z_G(-1,1) &= (-1)^{\beta_0(G)} 2^{\#\text{\upshape of loops of $G$}} \# \{\Phi \in \SSC_{n,k} \subset \Gamma_{n,k} \mid \lmod \Phi\rmod = G\}.\\ Z_G(-1,-1) &= (-1)^n \# \{\Phi \in \AC_{n,k} \subset \Gamma_{n,k} \mid \lmod \Phi\rmod = G\}. \end{aligned}$$ (Recall that by $\SSC_{n,k}$ and $\AC_{n,k}$ we denote the sets of all strongly semiconnected and acyclic graphs in $\Gamma_{n,k}$, respectively.) \[Cr:NumSSC\] For any graph $G \in \Upsilon_{n,k}$ one has $$\# \{\Phi \in \SSC_{n,k} \subset \Gamma_{n,k} \mid \lmod \Phi\rmod = G\} = (-1)^{\beta_0(G)} Z_{\widehat G}(-1,1)$$ where $\widehat G$ is the graph $G$ with all the loops deleted. The definition of the Potts partition function implies immediately that $Z_G(q,v) = (v+1)^{\#\text{of loops of $G$}} Z_{\widehat G}(q,v)$. Consider now the [*universal Potts partition function*]{} $${\mathcal Z}_{n,k}(q,v) {\stackrel{\mbox{\scriptsize def}}{=}}\sum_{G \in \Upsilon_{n,k}} Z_G(q,v)G \in \Undir_{n,k}$$ and its “shaved” version $$\widehat{{\mathcal Z}}_{n,k}(q,v) {\stackrel{\mbox{\scriptsize def}}{=}}\sum_{G \in \Upsilon_{n,k}} Z_{\widehat G}(q,v)G \in \Undir_{n,k}.$$ \[Pp:LaplTutte\] $$\Delta\widehat{{\mathcal Z}}_{n,k}(-1,1) = (-1)^k {\mathcal Z}_{n,k}(-1,-1).$$ Note that by Proposition \[Pp:SpecVal\] the right-hand side of the equality contains only graphs without loops, as does the left-hand side. Corollary \[Cr:NumSSC\] implies that $$\widehat{{\mathcal Z}}_{n,k}(-1,1) = (-1)^k k!\sum_{I \subset \{1 {, \dots ,} n\}} \lmod \det_{n,k}^I\rmod.$$ Apply now the Laplace operator $\Delta$ to both sides of the equality. Clearly, $\Delta$ commutes with the forgetful map: $\lmod \Delta(x)\rmod = \Delta(\lmod x\rmod)$ for any $x \in \Graph_{n,k}$. Therefore by Theorem \[Th:Diag\] and Proposition \[Pp:SpecVal\] $$\begin{aligned} \Delta\widehat{{\mathcal Z}}_{n,k}(-1,1) &= (-1)^k k!\sum_{I \subset \{1 {, \dots ,} n\}} \lmod \Delta \det_{n,k}^I\rmod = (-1)^{n+k} \sum_{I \subset \{1 {, \dots ,} n\}} \lmod U(\AC_{n,k}^I)\rmod \\ &= (-1)^k {\mathcal Z}_{n,k}(-1,-1).
\end{aligned}$$ Proposition \[Pp:LaplTutte\] admits several generalizations. The author is planning to write a separate paper considering the action of the Laplace operator on the universal Potts functions and their oriented-graph versions. An application: invariants of $3$-manifolds ------------------------------------------- Universal determinants have an application in $3$-dimensional topology, due to M. Polyak. We describe it briefly here; see [@Toronto] and the MSc. thesis [@Epstein] for detailed definitions, formulations and proofs. A [*chainmail graph*]{} is defined as a planar graph, possibly with loops but without parallel edges; the edges (including loops) are supplied with integer weights. We denote by $w_{ij} = w_{ji}$ the weight of the edge joining vertices $i$ and $j$; $w_{ii}$ is the weight of the loop attached to the vertex $i$. If the edge $[ij]$ is missing then $w_{ij} = 0$ by definition. There is a way (see [@Toronto]) to define for every chainmail graph $G$ a closed oriented $3$-manifold $\Manif(G)$; any closed oriented $3$-manifold is $\Manif(G)$ for some $G$ (which is not unique). To the chainmail graph $G$ with $n$ vertices one associates two $n \times n$-matrices: the adjacency matrix $W(G) = (w_{ij})$ and the Laplace (better to say, Schroedinger) matrix $L(G) = (l_{ij})$ where $l_{ij} {\stackrel{\mbox{\scriptsize def}}{=}}w_{ij}$ for $i \ne j$ and $l_{ii} {\stackrel{\mbox{\scriptsize def}}{=}}w_{ii} - \sum_{j \ne i} w_{ij}$. If all $w_{ii} = 0$ (such $G$ is called a balanced graph) then $L(G)$ is the usual (symmetric, degenerate) Laplace matrix $\widehat{W}$ from Section \[SSec:Main\]. The following results hold (see [@Toronto] and [@Epstein]): 1. The rank of the homology group $H_1(\Manif(G),\Integer)$ is equal to $\dim \name{Ker} L(G)$. 2. If $L(G)$ is nondegenerate (so that $\Manif(G)$ is a rational homology sphere and $H_1(\Manif(G),\Integer)$ is finite) then $$\label{Eq:3ManifDet} \lmod H_1(\Manif(G),\Integer)\rmod = \lmod \det L(G)\rmod = \lmod\langle L(G) \mid \det_{n,n}^{\varnothing}\rangle\rmod.$$ 3. If $L(G)$ is nondegenerate then $$\label{Eq:3ManifTheta} \langle W(G) \mid \Theta_n\rangle = 12 \det L(G) \bigl(\lambda_{CW}(\Manif(G)) - \frac{1}{4} \name{sign}(L(G))\bigr)$$ where $\lambda_{CW}$ is the Casson–Walker invariant [@Walker] of the rational homology sphere $\Manif(G)$, $\name{sign}$ is the signature of the symmetric matrix $L(G)$, and $\Theta_n$ is an element of $\Graph_{n,n+1} \oplus \Graph_{n,n-1}$ defined as $$\Theta_n \stackrel{\mathrm{def}}{=}\det_{n,n+1}^{\varnothing}- \sum_{1 \le i \ne j \le n} ([ij])*\det_{n,n-2}^{\{i,j\}} - \sum_{i=1}^n \det_{n,n-1}^{\{i\}}.$$ Conjecturally, (\[Eq:3ManifDet\]) and (\[Eq:3ManifTheta\]) begin a series of formulas for invariants of $3$-manifolds. See [@Toronto] for details. Applying $\Delta$ to the element $\Theta_n$ and using Theorem \[Th:Diag\] and Corollary \[Cr:Diag\] one obtains $\Delta \Theta_n = -2 U(\AC_{n,n-1})$. Therefore if $G$ is balanced then $\langle L(G) \mid \Theta_n\rangle$ is equal to $-2$ times the codimension $1$ diagonal minor of $L(G)$. The last assertion is [@Epstein Theorem 84]. Acknowledgements {#acknowledgements .unnumbered} ---------------- The research was inspired by numerous discussions with Prof. Michael Polyak (Haifa Technion, Israel), to whom the author wishes to express his most sincere gratitude. The research was funded by the Russian Academic Excellence Project ‘5-100’ and by the grant No. 15-01-0031 “Hurwitz numbers and graph isomorphism” of the Scientific Fund of the Higher School of Economics.
Proofs {#Sec:Proofs} ====== We start by proving Proposition \[Pp:DetProp\] (Section \[SSec:PrDetProp\]) and continue with Theorems \[Th:Direct\] and \[Th:Direct\]’ (Sections \[SSec:Equiv\] and \[SSec:PrDirect\]). Theorem \[Th:Diag\] will then follow from Theorem \[Th:Direct\]’ (Section \[SSec:DiagFromDirect\]), and Theorem \[Th:Codim1\], from Theorem \[Th:Diag\] and assertion \[It:RowCol\] of Proposition \[Pp:DetProp\] (Section \[SSec:PrCodim1\]). For two vertices $a, b \in G \in \Gamma_{n,k}$ we will write $a \succeq b$ if $G$ contains a directed path starting at $a$ and finishing at $b$; also $a \succeq a$ for any $a$ by definition. Proof of Proposition \[Pp:DetProp\] {#SSec:PrDetProp} ----------------------------------- Let $G$ be a strongly semiconnected graph, and let $[ij] \in G$ be its edge carrying number $1$. Since $G$ is strongly semiconnected, $j \succeq i$ in $G \setminus ([ij])$, and therefore $\beta_0(G \setminus ([ij])) = \beta_0(G)$. Then $G$ enters the left-hand side and the $(i,j)$-th term of the sum at the right-hand side of (\[Eq:DetViaMinors\]) with the same coefficient. Denote by $\SSC_{n,k}^{[i:q]}$ the set of all graphs $G \in \SSC_{n,k}^{\varnothing}$ having $q$ loops ($0 \le q \le k$) attached to vertex $i$. The graph $\hat G$ obtained from $G$ by deletion of all these loops (with the relevant renumbering of the remaining edges) belongs either to $\SSC_{n,k-q}^{[i:0]} \subset \SSC_{n,k-q}^{\varnothing}$ or, if $q > 0$, to $\SSC_{n,k-q}^{\{i\}}$. Vice versa, if $q > 0$ and $\hat G \in \SSC_{n,k-q}^{[i:0]} \cup \SSC_{n,k-q}^{\{i\}}$ then $G \in \SSC_{n,k}^{[i:q]}$. Deletion of a loop does not disconnect a graph, so $\beta_0(G) = \beta_0(\hat G)$. If $G \in \SSC_{n,k}^{[i:q]}$ then there are $\binom{k}{q}$ ways to assign numbers to the loops of $G$ attached to $i$. Since $\langle W \mid G\rangle$ does not depend on the edge numbering, one has for $q > 0$ $$\langle W \mid X(\SSC_{n,k}^{[i:q]})\rangle = \binom{k}{q} w_{ii}^q \langle W \mid X(\SSC_{n,k-q}^{[i:0]}) + X(\SSC_{n,k-q}^{\{i\}})\rangle,$$ so that $$\begin{aligned} \langle W \mid \det_{n,k}^{\varnothing}\rangle &= \frac{1}{k!}\sum_{q=0}^k \langle W \mid X(\SSC_{n,k}^{[i:q]})\rangle\nonumber \\ &= \frac{1}{k!} \langle W \mid X(\SSC_{n,k}^{[i:0]})\rangle + \sum_{q=1}^k \frac{w_{ii}^q}{q!(k-q)!} \langle W \mid X(\SSC_{n,k-q}^{[i:0]}) + X(\SSC_{n,k-q}^{\{i\}})\rangle\nonumber \\ &= \sum_{q=0}^k \frac{w_{ii}^q}{q!(k-q)!} \langle W \mid X(\SSC_{n,k-q}^{[i:0]}) + X(\SSC_{n,k-q}^{\{i\}})\rangle - \langle W \mid \det_{n,k}^{\{i\}}\rangle.\label{Eq:Develop} \end{aligned}$$ The expressions $\langle W \mid X(\SSC_{n,k-q}^{[i:0]}) + X(\SSC_{n,k-q}^{\{i\}})\rangle$ and $\langle W \mid \det_{n,k}^{\{i\}}\rangle$ do not contain $w_{ii}$. So, applying the operator $\frac{\partial^m}{\partial w_{ii}^m}$ to equation (\[Eq:Develop\]) and using the equation again with $k-m$ in place of $k$, one gets (\[Eq:Deriv\]). Theorems \[Th:Direct\] and \[Th:Direct\]’ are equivalent. {#SSec:Equiv} --------------------------------------------------------- Denote by $E(G)$ the set of edges of the graph $G \in \Gamma_{n,k}$. The functions $\alpha$ and $\sigma$ do not depend on the edge numbering; so the summation in the left-hand side of both theorems is performed over the set $2^{E(G)}$ of subsets of $E(G)$. The equivalence of the theorems is now a particular case of the Moebius inversion formula [@Rota]. Namely, for any finite set $X$ the Moebius function of the set $2^X$ partially ordered by inclusion is $\mu(S,T) = (-1)^{\#(S \setminus T)}$, where $S, T \subseteq X$.
Therefore one has $$\begin{aligned} \SumSub(&\sigma;G) = (-1)^k \alpha(G) \\ &\Longleftrightarrow \SumSub(\mu(G,H) (-1)^{\#\text{edges of H}} \alpha(H);G) = \sigma(G) \\ &\Longleftrightarrow \SumSub((-1)^{k-\#\text{edges of H}} (-1)^{\#\text{edges of H}}\alpha(H);G) = \sigma(G) \\ &\Longleftrightarrow \SumSub(\alpha(H);G) = (-1)^k \sigma(G). \end{aligned}$$ Proof of Theorem \[Th:Direct\] {#SSec:PrDirect} ------------------------------ To prove the theorem we use simultaneous induction on the number of vertices and the number of edges of the graph $G$. If $\mathcal R$ is some set of subgraphs of $G$ (different in different cases) and $\chi_{\mathcal R}$ is the characteristic function of this set then for convenience we will write $\SumSub(f,\mathcal R) {\stackrel{\mbox{\scriptsize def}}{=}}\SumSub(f \chi_{\mathcal R},G) = \sum_{H \in \mathcal R} f(H)$ for any function $f$ on the set of subgraphs. Consider now the following cases: ### $G$ is disconnected. Let $G = G_1 {\sqcup \dots \sqcup} G_m$ where the $G_i$ are the connected components. A subgraph $H \subset G$ is acyclic if and only if the intersection $H_i {\stackrel{\mbox{\scriptsize def}}{=}}H \cap G_i$ is acyclic for all $i$. Hence $\alpha(H) = \alpha(H_1) \dots \alpha(H_m)$, and therefore $\SumSub(\alpha,G) = \SumSub(\alpha, G_1) \dots \SumSub(\alpha, G_m)$. By the induction hypothesis $\SumSub(\alpha, G_i) = (-1)^{k_i}\sigma(G_i)$ where $k_i$ is the number of edges of $G_i$. So $$\SumSub(\alpha,G) = \SumSub(\alpha, G_1) \dots \SumSub(\alpha, G_m) = (-1)^{k_1 {+ \dots +} k_m} \sigma(G_1) \dots \sigma(G_m) = (-1)^k \sigma(G).$$ Now it will suffice to prove Theorem \[Th:Direct\] for connected graphs $G$. ### $G$ is connected and not strongly connected. {#SSec:NotSSC} In this case $G$ contains an edge $e$ which is not contained in any directed cycle. For such an $e$, if $H \subset G$ is acyclic and $e \notin H$ then $H \cup \{e\}$ is acyclic, too. The converse is true for any $e$: if an acyclic $H \subset G$ contains $e$ then $H \setminus \{e\}$ is acyclic. Therefore $$\SumSub(\alpha,G) = \sum_{\substack{H \subset G \setminus \{e\},\\ H \text{ is acyclic}}} \left( (-1)^{\#\text{edges of $H$}} + (-1)^{\#\text{edges of $H \cup \{e\}$}} \right) = 0 = \sigma(G).$$ So it will suffice to prove Theorem \[Th:Direct\] for strongly connected graphs $G$. ### $G$ is strongly connected and contains a crucial edge. {#SSec:SSCCrucial} We call an edge $e$ of a strongly connected graph $G$ crucial if $G \setminus \{e\}$ is not strongly connected. Suppose $e = [ab] \in G$ is a crucial edge. Denote by $\mathcal R_e^-$ (resp., $\mathcal R_e^+$) the set of all subgraphs $H \subset G$ such that $e \notin H$ (resp., $e \in H$). Let $H \in \mathcal R_e^-$ be acyclic. Since $G \setminus \{e\}$ is not strongly connected and contains one edge less than $G$, one has by Clause \[SSec:NotSSC\] above $$\label{Eq:SumNoE} \SumSub(\alpha,\mathcal R_e^-) = \SumSub(\alpha,G \setminus \{e\}) = 0.$$ Let now $H \in \mathcal R_e^+$ be acyclic; such $H$ contains no directed paths joining $b$ with $a$. Since $G$ is strongly connected and $G \setminus \{e\}$ is not, $G \setminus \{e\}$ does not contain a directed path joining $a$ with $b$ either. This means that any such path in $H$ necessarily contains $e$, and therefore the graph $H/e \subset G/e$ (obtained by contraction of the edge $e$) is acyclic. The converse is true for any $e$: if $e \in H$ and $H/e \subset G/e$ is acyclic then $H \subset G$ is acyclic, too.
The graph $G/e$ is strongly connected, contains one edge less (and one vertex less) than $G$, and $\beta_1(G/e) = \beta_1(G)$, so $\sigma(G/e) = \sigma(G)$. The graph $H/e$ contains one edge less than $H$, so $\alpha(H/e) = -\alpha(H)$. Now by the induction hypothesis $$\SumSub(\alpha,R_e^+) = -\SumSub(\alpha,G/e) = -(-1)^{k-1} \sigma(G/e) = (-1)^k \sigma(G),$$ and then (\[Eq:SumNoE\]) implies $$\SumSub(\alpha,G) = \SumSub(\alpha,R_e^-) + \SumSub(\alpha,R_e^+) = 0 + (-1)^k \sigma(G) = (-1)^k \sigma(G).$$ ### $G$ is strongly connected and contains no crucial edges. Let $e = [ab]\in G$ be an edge which is not a loop: $b \ne a$. Recall that $G_e^{\vee}$ denotes the graph obtained from $G$ by reversal of the edge $e$. Since $e$ is not a crucial edge, $G \setminus \{e\} = G_e^{\vee} \setminus \{e\}$ is strongly connected. So $G_e^{\vee}$ is strongly connected, too, implying $\sigma(G_e^{\vee}) = \sigma(G)$. \[Lm:Reversal\] If the graph $G$ is strongly connected and $e = [ab] \in G$ is not a crucial edge then $\SumSub(\alpha,G) = (-1)^k\sigma(G)$ if and only if $\SumSub(\alpha,G_e^{\vee}) = (-1)^k\sigma(G_e^{\vee}) = (-1)^k\sigma(G)$. Acyclic subgraphs $H \subset G$ are split into five classes: 1. \[It:NoEAtoB\] $e \notin H$, but $a \succeq b$ in $H$ (that is, $H$ contains a directed path joining $a$ with $b$). 2. \[It:NoEBtoA\] $e \notin H$, but $b \succeq a$ in $H$. 3. \[It:NoENoPath\] $e \notin H$, and both $a \not\succeq b$ and $b \not\succeq a$ in $H$. 4. \[It:EAtoB\] $e \in H$, and $a \succeq b$ in $H \setminus \{e\}$. 5. \[It:ENoPath\] $e \in H$, and $a \not\succeq b$ in $H \setminus \{e\}$. Obviously, $H \in {\mathrm{\ref{It:NoEAtoB}}}$ if and only if $H \cup \{e\} \in {\mathrm{\ref{It:EAtoB}}}$. The number of edges of $H \cup \{e\}$ is the number of edges of $H$ plus $1$, so $$\label{Eq:1Plus4} \SumSub(\alpha, {\mathrm{\ref{It:NoEAtoB}}} \cup {\mathrm{\ref{It:EAtoB}}}) = \sum_{H \in \mathrm{\ref{It:NoEAtoB}}} (-1)^{\#\text{ of edges of $H$}} \,(1-1) = 0.$$ Also, $H \in \mathrm{\ref{It:NoENoPath}}$ if and only if $H \cup \{e\} \in \mathrm{\ref{It:ENoPath}}$, and similarly to (\[Eq:1Plus4\]) one has $\SumSub(\alpha,{\mathrm{\ref{It:NoENoPath}}} \cup {\mathrm{\ref{It:ENoPath}}}) = 0$, and therefore $$\label{Eq:AllEq2} \SumSub(\alpha,G) = \SumSub(\alpha,{\mathrm{\ref{It:NoEAtoB}}} \cup {\mathrm{\ref{It:NoEBtoA}}} \cup {\mathrm{\ref{It:NoENoPath}}} \cup {\mathrm{\ref{It:EAtoB}}} \cup {\mathrm{\ref{It:ENoPath}}}) = \SumSub(\alpha,{\mathrm{\ref{It:NoEBtoA}}}).$$ As in Clause \[SSec:SSCCrucial\], if $H \in {\mathrm{\ref{It:ENoPath}}}$ then $H/e \subset G/e$ is acyclic, and vice versa, if $e \in H$ and $H/e \subset G/e$ is acyclic then $H \in {\mathrm{\ref{It:ENoPath}}}$. The graph $G/e$ is strongly connected, so by the induction hypothesis $\SumSub(\alpha,{\mathrm{\ref{It:ENoPath}}}) = -\SumSub(\alpha,G/e) = -(-1)^{k-1} \sigma(G/e) = (-1)^k \sigma(G)$, hence $\SumSub(\alpha,{\mathrm{\ref{It:NoENoPath}}}) = -(-1)^k\sigma(G)$. If $e \notin H$ and $H$ is acyclic, then $H$ is an acyclic subgraph of the strongly connected graph $G \setminus \{e\}$. The graph $G$ is strongly connected, too, so $e$ belongs to a cycle, and $\beta_1(G \setminus \{e\}) = \beta_1(G)-1$, which implies $\sigma(G \setminus \{e\}) = -\sigma(G)$.
The graph $G \setminus \{e\}$ contains $k-1 < k$ edges, so by the induction hypothesis $$\SumSub(\alpha, {\mathrm{\ref{It:NoEAtoB}}} \cup {\mathrm{\ref{It:NoEBtoA}}} \cup {\mathrm{\ref{It:NoENoPath}}}) = \SumSub(\alpha,G \setminus \{e\}) = (-1)^{k-1} \sigma(G \setminus \{e\}) = (-1)^k \sigma(G),$$ and therefore $$\label{Eq:Sum12} \SumSub(\alpha, {\mathrm{\ref{It:NoEAtoB}}} \cup {\mathrm{\ref{It:NoEBtoA}}}) = 2(-1)^k \sigma(G).$$ A subgraph $H \subset G$ of class \[It:NoEAtoB\] is at the same time a subgraph $H \subset G_e^{\vee}$ of class \[It:NoEBtoA\]. So, (\[Eq:AllEq2\]) applied to $G_e^{\vee}$ gives $\SumSub(\alpha, {\mathrm{\ref{It:NoEAtoB}}}) = \SumSub(\alpha, G_e^{\vee})$. It follows now from (\[Eq:AllEq2\]) and (\[Eq:Sum12\]) that $$\SumSub(\alpha,G) + \SumSub(\alpha,G_e^{\vee}) = 2(-1)^k \sigma(G) = (-1)^k (\sigma(G) + \sigma(G_e^{\vee})),$$ which proves the lemma. To complete the proof of Theorem \[Th:Direct\] let $a$ be a vertex of $G$, and let $e_1 {, \dots ,} e_m$ be the complete list of edges finishing at $a$. Consider the sequence of graphs $G_0 = G$, $G_1 = G_{e_1}^{\vee}$, $G_2 = (G_1)_{e_2}^{\vee}$, …, $G_m = (G_{m-1})_{e_m}^{\vee}$. The graphs $G_0$ and $G_1$ are strongly connected; the graph $G_m$ is not, because $a \not\succeq b$ for any $b \ne a$ in it. Take the maximal $\ell$ such that $G_\ell$ is strongly connected. Since $\ell < m$, the graph $G_{\ell+1}$ exists and is not strongly connected, and therefore $G_\ell \setminus \{e_{\ell+1}\} = G_{\ell+1} \setminus \{e_{\ell+1}\}$ is not strongly connected either. So, the edge $e_{\ell+1}$ is crucial for the graph $G_\ell$, and by Clause \[SSec:SSCCrucial\] one has $\SumSub(\alpha,G_\ell) = (-1)^k \sigma(G_\ell) = (-1)^k \sigma(G)$. The graphs $G_0=G {, \dots ,} G_\ell$ are strongly connected, so for any $i = 0 {, \dots ,} \ell-1$ the edge $e_{i+1}$ is not crucial for the graph $G_i$. Lemma \[Lm:Reversal\] now implies $$\begin{aligned} \SumSub(\alpha,G_{\ell-1}) = (-1)^k \sigma(G_{\ell-1}) &\Longrightarrow \SumSub(\alpha,G_{\ell-2}) = (-1)^k \sigma(G_{\ell-2})\\ &{\Longrightarrow \dots \Longrightarrow} \SumSub(\alpha,G) = (-1)^k \sigma(G). \end{aligned}$$ Theorem \[Th:Direct\] is proved. Theorem \[Th:Diag\] follows from Theorem \[Th:Direct\]’ {#SSec:DiagFromDirect} ------------------------------------------------------- Note first that the operation $B_i$, and hence $\Lapl$, preserves the sinks of the graph: if $\Lapl H = \sum_G x_G G$ and $x_G \ne 0$ then $G$ has the same sinks as $H$. Therefore if $I = \{i_1 {, \dots ,} i_s\}$ then $\Lapl(\det_{n,k}^I) = \sum_G x_G G$ where all the graphs $G$ in the right-hand side have the sinks $i_1 {, \dots ,} i_s$ and have no loops. Let $G$ be a graph with sinks $i_1 {, \dots ,} i_s$ and without loops, and let $\Phi \in \SSC_{n,k}^I$ (a strongly semiconnected graph with the isolated vertices $i_1 {, \dots ,} i_s$). Denote by $\widehat{\Phi}$ the graph $\Phi$ with the loops deleted. The contribution of $\Phi$ to $x_G$ is equal to $\frac{1}{k!} (-1)^{\beta_0(\Phi) - \#\text{ of loops in $\Phi$} + n}$ if $\widehat{\Phi} \subset G$ and is $0$ otherwise. The number of edges of $\widehat{\Phi}$ is $k - \#\text{ of loops of $\Phi$}$. The graph $\widehat{\Phi}$ is strongly semiconnected if and only if $\Phi$ is. The Euler characteristic of $\widehat{\Phi}$ is $$\beta_0(\widehat{\Phi}) - \beta_1(\widehat{\Phi}) = n - \#\text{ of edges of $\widehat{\Phi}$} = n-k + \#\text{ of loops of $\Phi$}$$ and $\beta_0(\widehat{\Phi}) = \beta_0(\Phi)$.
Therefore, the contribution of $\Phi$ to $x_G$ is $$(-1)^{n+k+\beta_0(\widehat{\Phi})+\#\text{ of edges of $\widehat{\Phi}$}}\frac{1}{k!} = (-1)^{k+\beta_1(\widehat{\Phi})}\frac{1}{k!}$$ if $\widehat{\Phi} \subset G$ is strongly semiconnected and $0$ otherwise. Summing up, $$x_G = \frac{(-1)^k}{k!}\SumSub(\sigma;G) = \frac{1}{k!} \alpha(G)$$ by Theorem \[Th:Direct\]’. This proves Theorem \[Th:Diag\]. Proof of Theorem \[Th:Codim1\] {#SSec:PrCodim1} ------------------------------ Note that $\det_{n,k}^{i/i} = \det_{n,k}^{\varnothing}+ \det_{n,k}^{\{i\}}$. Applying the operator $\Lapl$ to equation (\[Eq:DetViaMinors\]) (with $k+1$ in place of $k$) and using Theorem \[Th:Diag\] with $I = {\varnothing}$ and $I = \{i\}$ one obtains $$\begin{aligned} 0 &= \sum_{i,j=1}^n \Lapl(([ij]) * \det_{n,k}^{i/j}) = \sum_{i=1}^n \Lapl ([ii]) * \Lapl(\det_{n,k}^{\varnothing}+ \det_{n,k}^{\{i\}}) + \sum_{\substack{i,j=1\\i \ne j}}^n ([ij]) * \Lapl(\det_{n,k}^{i/j}) \\ &= \sum_{\substack{i,j=1\\i \ne j}}^n ([ij]) * (\Lapl(\det_{n,k}^{i/j}) - \Lapl(\det_{n,k}^{\{i\}})) = \sum_{\substack{i,j=1\\i \ne j}}^n ([ij]) * (\Lapl(\det_{n,k}^{i/j}) - \frac{(-1)^k}{k!} U(\AC_{n,k}^{\{i\}})). \end{aligned}$$ The $(i,j)$-th term of the identity above consists of graphs where the edge $[ij]$ carries the number $1$. Therefore different terms of the identity cannot cancel, so every single term is equal to $0$. [9]{} J. Awan and O. Bernardi, Tutte polynomials for directed graphs, arXiv:1610.01839v2. B. Epstein, A combinatorial invariant of $3$-manifolds via cycle-rooted trees, MSc. thesis (under the supervision of Prof. M. Polyak), Technion, Haifa, Israel, 2015. G. Kirchhoff, Über die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Verteilung galvanischer Ströme geführt wird, [*Ann. Phys. Chem.*]{}, [**72**]{} (1847), S. 497–508. M. Polyak, From $3$-manifolds to planar graphs and cycle-rooted trees, talk at [*Arnold’s legacy*]{} conference, Fields Institute, Toronto, 2014. G.-C. Rota, On the foundations of combinatorial theory I: Theory of Möbius functions, [*Z. Wahrsch. Verw. Gebiete*]{}, [**2**]{} (1964) pp. 340–368. A. Sokal, The multivariate Tutte polynomial (alias Potts model) for graphs and matroids, in: [*Surveys in Combinatorics 2005*]{}, Cambridge University Press, 2005. W. T. Tutte, The dissection of equilateral triangles into equilateral triangles, [*Math. Proc. Cambridge Phil. Soc.*]{}, [**44**]{} (1948) no. 4, pp. 463–482. K. Walker, [*An extension of Casson’s invariant*]{}, Annals of Mathematics Studies, [**126**]{}, Princeton University Press, 1992. D. J. A. Welsh and C. Merino, The Potts model and the Tutte polynomial, [*J. of Math. Physics*]{}, [**41**]{} (2000), no. 3, pp. 1127–1152.
1
--- address: 'Fachbereich Mathematik, Technische Universität Darmstadt, 64289 Darmstadt, Germany.' author: - 'Andrew R. Linshaw' title: Invariant theory and the Heisenberg vertex algebra --- [The invariant subalgebra $\cH^+$ of the Heisenberg vertex algebra $\cH$ under its automorphism group $\mathbb{Z}/2\mathbb{Z}$ was shown by Dong-Nagatomo to be a $\cW$-algebra of type $\cW(2,4)$. Similarly, the rank $n$ Heisenberg vertex algebra $\cH(n)$ has the orthogonal group $O(n)$ as its automorphism group, and we conjecture that $\cH(n)^{O(n)}$ is a $\cW$-algebra of type $\cW(2,4,6,\dots,n^2+3n)$. We prove our conjecture for $n=2$ and $n=3$, and we show that this conjecture implies that $\cH(n)^G$ is strongly finitely generated for any reductive group $G\subset O(n)$. ]{} Introduction ============ We call a vertex algebra $\cV$ [*strongly finitely generated*]{} if there exists a finite set of generators such that the collection of iterated Wick products of the generators and their derivatives spans $\cV$. This property has several important consequences, and in particular implies that both Zhu’s associative algebra $A(\cV)$ and Zhu’s commutative algebra $\cV / C_2(\cV)$ are finitely generated. By an [*invariant vertex algebra*]{}, we mean a subalgebra $\cV^G\subset \cV$, where $G$ is a group of automorphisms of $\cV$. It is our belief that if $\cV$ is a simple, strongly finitely generated vertex algebra, and $G$ is reductive, $\cV^G$ will be strongly finitely generated under fairly general circumstances. Isolated examples of this phenomenon have been known for some years (see for example [@BFH][@FKRW][@EFH][@DNI][@KWY]), although the first general results of this kind were obtained in [@LII], in the case where $\cV$ is the $\beta\gamma$-system $\cS(V)$, $bc$-system $\cE(V)$, or $bc\beta\gamma$-system $\cE(V)\otimes \cS(V)$, associated to $V = \mathbb{C}^n$. The strong finite generation property is a subtle and essentially “quantum” phenomenon, and is generally destroyed by passing to the classical limit before taking invariants. Often, $\cV$ admits a $G$-invariant filtration for which $gr(\cV)$ is a commutative algebra with a derivation (i.e., an abelian vertex algebra), and the classical limit $gr(\cV^G)$ is isomorphic to $(gr(\cV))^G$ as a commutative algebra. Unlike $\cV^G$, $gr(\cV^G)$ is generally not finitely generated as a vertex algebra, and a presentation will require both infinitely many generators and infinitely many relations. One of the most basic examples of an invariant vertex algebra was studied by Dong-Nagatomo in [@DNI]. Let $\cH$ denote the Heisenberg vertex algebra, which is generated by a field $\alpha$ satisfying the operator product expansion (OPE) relation $\alpha(z) \alpha(w)\sim (z-w)^{-2}$. Clearly the automorphism group $Aut(\cH)$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}$, and is generated by the involution $\theta$ sending $\alpha\mapsto -\alpha$. In [@DNI] it was shown that the invariant subalgebra $\cH^+$ under $\theta$ is a $\cW$-algebra of type $\cW(2,4)$, and in particular is strongly generated by the Virasoro element $L$ and an element $J$ of weight four. Using this result, the authors described the Zhu algebra of $\cH^+$, which is a commutative algebra on two generators, and they classified the irreducible modules of $\cH^+$. In [@DNII], Dong-Nagatomo considered a higher-rank analogue $\cH(n)^+$ of $\cH^+$.
In this notation, $\cH(n)$ is the rank $n$ Heisenberg algebra, which is just the tensor product of $n$ copies of $\cH$, and $\cH(n)^+$ is the invariant subalgebra under the $-1$ involution. Unlike the rank $1$ case, the Zhu algebra of $\cH(n)^+$ is nonabelian, and it is difficult to describe it completely by generators and relations. However, the authors obtained enough information about it to classify the irreducible modules of $\cH(n)^+$. This result is important for understanding the structure and representation theory of vertex algebras of the form $V_L^+$. Here $V_L$ is the lattice vertex algebra attached to some lattice $L$ of rank $n$, and $V_L^+$ is the invariant subalgebra under the $-1$ involution. In this paper, we study general invariant vertex algebras $\cH(n)^G$, where $G$ is an arbitrary reductive group of automorphisms of $\cH(n)$. In the case where the action of $G$ extends to $V_L$, an understanding of $\cH(n)^G$ is a necessary first step in studying $V_L^G$. The [*full*]{} automorphism group of $\cH(n)$ preserving a natural conformal structure is the orthogonal group $O(n)$. We begin by studying $\cH(n)^{O(n)}$, which coincides with $\cH^+$ in the case $n=1$. For an arbitrary reductive group $G\subset O(n)$, $\cH(n)^G$ is completely reducible as a module over $\cH(n)^{O(n)}$, and this module structure is an essential ingredient in our description. Our approach in this paper is quite parallel to our earlier study of invariant subalgebras of the $\beta\gamma$-system $\cS(V)$. The automorphism group of $\cS(V)$ preserving its conformal structure is $GL_n$, and $\cS(V)^{GL_n}$ is isomorphic to $\cW_{1+\infty,-n}$ by a theorem of Kac-Radul [@KR]. In [@LI], we studied $\cW_{1+\infty,-n}$ via classical invariant theory, and we use a similar method in Section \[secortho\] of this paper to study $\cH(n)^{O(n)}$. There are many parallels between these two vertex algebras; for example, they both have abelian Zhu algebras, which implies that their irreducible, admissible modules are all highest-weight modules. This observation is crucial in our description in Section \[secgeneral\] of $\cH(n)^G$ as a module over $\cH(n)^{O(n)}$, which uses essentially the same ideas as [@LII]. First of all, there is an $O(n)$-invariant filtration on $\cH(n)$ such that the associated graded object $gr(\cH(n))$ is isomorphic to $S=Sym \bigoplus_{j\geq 0} V_j$ as a commutative ring, where $V_j\cong \mathbb{C}^n$ as $O(n)$-modules. In fact, $\cH(n)\cong S$ as vector spaces, and we view $\cH(n)$, equipped with the Wick product, as a deformation of $S$. Using Weyl’s first and second fundamental theorems of invariant theory for the standard representation of $O(n)$, we obtain a natural (infinite) strong generating set $\{\omega_{a,b}|~0\leq a\leq b\}$ for $\cH(n)^{O(n)}$, as well as an infinite set of relations among these generators. A linear change of variables produces a slightly more economical set of strong generators $\{j^{2m}|~m\geq 0\}$, where $j^{2m}$ has weight $2m+2$. In fact, $\cH(n)^{O(n)}$ is generated as a vertex algebra by $\{j^0, j^2\}$ for all $n\geq 1$, although this is only a strong generating set in the case $n=1$. The relation of minimal weight among $\{j^{2m}|~m\geq 0\}$ and their derivatives occurs at weight $n^2+3n+2$, and we conjecture that it gives rise to a decoupling relation $$j^{n^2+3n} = P(j^0,j^2,\dots,j^{n^2+3n-2}).$$ Here $P$ is a normally ordered polynomial in $j^0, j^2,\dots,j^{n^2+3n-2}$, and their derivatives. 
An easy consequence of our conjecture is that higher decoupling relations of the form $j^{2r} = Q_{2r}(j^0,j^2,\dots,j^{n^2+3n-2})$ exist for all $r\geq\frac{1}{2} (n^2+3n)$. Hence $\cH(n)^{O(n)}$ has a minimal strong generating set $\{j^0, j^2, \dots, j^{n^2+3n-2}\}$, and in particular is a $\cW$-algebra of type $\cW(2,4,6,\dots, n^2+3n)$. By computer calculation, we prove our conjecture for $n=2$ and $n=3$, but we are unable to prove it in general. By a fundamental result of Dong-Li-Mason [@DLM], $\cH(n)$ has a decomposition of the form $$\cH(n) \cong \bigoplus_{\nu\in H} L(\nu)\otimes M^{\nu},$$ where $H$ indexes the irreducible, finite-dimensional representations $L(\nu)$ of $O(n)$, and the $M^{\nu}$’s are inequivalent, irreducible, admissible $\cH(n)^{O(n)}$-modules. Since the Zhu algebra of $\cH(n)^{O(n)}$ is abelian, each $M^{\nu}$ above is a highest-weight module. For any reductive group $G\subset O(n)$, $\cH(n)^G$ is also a direct sum of irreducible, highest-weight $\cH(n)^{O(n)}$-modules. Using a classical theorem of Weyl, we show that there is a finite set of irreducible $\cH(n)^{O(n)}$-submodules of $\cH(n)^G$ whose direct sum contains an (infinite) strong generating set for $\cH(n)^G$. Since $\cH(n)^{O(n)}$ is finitely generated, this shows that $\cH(n)^G$ is finitely generated as a vertex algebra. Finally, assuming our conjecture that $\cH(n)^{O(n)}$ is strongly finitely generated, we show that $\cH(n)^G$ is strongly finitely generated as well. Since our conjecture holds for $n=2$ and $n=3$, the strong finite generation of $\cH(2)^G$ and $\cH(3)^G$ for an arbitrary reductive $G$ is an immediate consequence. There is an application of these results to invariant subalgebras of affine vertex algebras which we develop in a separate paper [@LIII]. Let $\gg$ be a simple, finite-dimensional Lie algebra, and let $V_k(\gg)$ denote the universal affine vertex algebra at level $k$ associated to $\gg$. It is freely generated by vertex operators $X^{\xi}$, which are linear in $\xi\in\gg$ and satisfy the OPE relations $$X^{\xi}(z) X^{\eta}(w) \sim k\bra \xi,\eta\ket (z-w)^{-2} + X^{[\xi,\eta]}(w)(z-w)^{-1},$$ where $\bra ,\ket$ denotes the normalized Killing form $\frac{1}{2 h^{\vee}} \bra ,\ket_K$. Let $G$ be a reductive group of automorphisms of $V_k(\gg)$ for all $k\in \mathbb{C}$. In particular, $G$ acts on the weight-one subspace $V_k(\gg)[1]\cong \gg$, and $G$ preserves both the bracket and the bilinear form on $\gg$. Therefore $G$ lies in the orthogonal group $O(n)$ for $n= \text{dim}(\gg)$, so $G$ also acts on the Heisenberg algebra $\cH(n)$. As vector spaces, we have $V_k(\gg)^G \cong (Sym \oplus_{j\geq 0} V_j)^G\cong \cH(n)^G$, where $V_j\cong \mathbb{C}^n$ for all $j\geq 0$, and we regard $\cH(n)^G$ as a partial “abelianization” of $V_k(\gg)^G$. Invariant subalgebras of $V_k(\gg)$ are much more complicated and difficult to study than invariant subalgebras of $\cH(n)$, but for generic values of $k$, a strong generating set for $\cH(n)^G$ gives rise to a strong generating set for $V_k(\gg)^G$. Therefore the conjectured strong finite generation of $\cH(n)^{O(n)}$ has a far-reaching consequence; it implies that $V_k(\gg)^G$ is strongly finitely generated for generic values of $k$. Finally, since our conjecture holds for $n=3$, this statement is true in the case $\gg = \gs\gl_2$.
We will follow the formalism developed in [@LZ] and partly in [@LiI]. Let $V=V_0\oplus V_1$ be a super vector space over $\mathbb{C}$, and let $z,w$ be formal variables. By $QO(V)$, we mean the space of all linear maps $$V\ra V((z)):=\{\sum_{n\in\mathbb{Z}} v(n) z^{-n-1}| v(n)\in V,\ v(n)=0\ \text{for} \ n>>0 \}.$$ Each element $a\in QO(V)$ can be uniquely represented as a power series $$a=a(z):=\sum_{n\in\mathbb{Z}}a(n)z^{-n-1}\in End(V)[[z,z^{-1}]].$$ We refer to $a(n)$ as the $n$th Fourier mode of $a(z)$. Each $a\in QO(V)$ is of the shape $a=a_0+a_1$ where $a_i:V_j\ra V_{i+j}((z))$ for $i,j\in\mathbb{Z}/2\mathbb{Z}$, and we write $|a_i| = i$. On $QO(V)$ there is a set of nonassociative bilinear operations $\circ_n$, indexed by $n\in\mathbb{Z}$, which we call the $n$th circle products. For homogeneous $a,b\in QO(V)$, they are defined by $$a(w)\circ_n b(w)=Res_z a(z)b(w)~\iota_{|z|>|w|}(z-w)^n- (-1)^{|a||b|}Res_z b(w)a(z)~\iota_{|w|>|z|}(z-w)^n.$$ Here $\iota_{|z|>|w|}f(z,w)\in\mathbb{C}[[z,z^{-1},w,w^{-1}]]$ denotes the power series expansion of a rational function $f$ in the region $|z|>|w|$. We usually omit the symbol $\iota_{|z|>|w|}$ and just write $(z-w)^{-1}$ to mean the expansion in the region $|z|>|w|$, and write $-(w-z)^{-1}$ to mean the expansion in $|w|>|z|$. It is easy to check that $a(w)\circ_n b(w)$ above is a well-defined element of $QO(V)$. The nonnegative circle products are connected through the [*operator product expansion*]{} (OPE) formula. For $a,b\in QO(V)$, we have $$\label{opeform} a(z)b(w)=\sum_{n\geq 0}a(w)\circ_n b(w)~(z-w)^{-n-1}+:a(z)b(w):\ ,$$ which is often written as $a(z)b(w)\sim\sum_{n\geq 0}a(w)\circ_n b(w)~(z-w)^{-n-1}$, where $\sim$ means equal modulo the term $$:a(z)b(w):\ =a(z)_-b(w)\ +\ (-1)^{|a||b|} b(w)a(z)_+.$$ Here $a(z)_-=\sum_{n<0}a(n)z^{-n-1}$ and $a(z)_+=\sum_{n\geq 0}a(n)z^{-n-1}$. Note that $:a(w)b(w):$ is a well-defined element of $QO(V)$. It is called the [*Wick product*]{} of $a$ and $b$, and it coincides with $a\circ_{-1}b$. The other negative circle products are related to this by $$n!~a(z)\circ_{-n-1}b(z)=\ :(\partial^n a(z))b(z):\ ,$$ where $\partial$ denotes the formal differentiation operator $\frac{d}{dz}$. For $a_1(z),\dots ,a_k(z)\in QO(V)$, the $k$-fold iterated Wick product is defined to be $$\label{iteratedwick} :a_1(z)a_2(z)\cdots a_k(z):\ =\ :a_1(z)b(z):~,$$ where $b(z)=\ :a_2(z)\cdots a_k(z):\ $. We often omit the formal variable $z$ when no confusion can arise. The set $QO(V)$ is a nonassociative algebra with the operations $\circ_n$, which satisfy $1\circ_n a=\delta_{n,-1}a$ for all $n$, and $a\circ_n 1=\delta_{n,-1}a$ for $n\geq -1$. In particular, $1$ behaves as a unit with respect to $\circ_{-1}$. A linear subspace $\cA\subset QO(V)$ containing $1$ which is closed under the circle products will be called a [*quantum operator algebra*]{} (QOA). Note that $\cA$ is closed under $\partial$ since $\partial a=a\circ_{-2}1$. Many formal algebraic notions are immediately clear: a homomorphism is just a linear map that sends $1$ to $1$ and preserves all circle products; a module over $\cA$ is a vector space $M$ equipped with a homomorphism $\cA\rightarrow QO(M)$, etc. A subset $S=\{a_i|\ i\in I\}$ of $\cA$ is said to generate $\cA$ if every element $a\in\cA$ can be written as a linear combination of nonassociative words in the letters $a_i$, $\circ_n$, for $i\in I$ and $n\in\mathbb{Z}$. 
We say that $S$ [*strongly generates*]{} $\cA$ if every $a\in\cA$ can be written as a linear combination of words in the letters $a_i$, $\circ_n$ for $n<0$. Equivalently, $\cA$ is spanned by the collection $\{ :\partial^{k_1} a_{i_1}(z)\cdots \partial^{k_m} a_{i_m}(z):| ~i_1,\dots,i_m \in I,~ k_1,\dots,k_m \geq 0\}$. We say that $a,b\in QO(V)$ [*quantum commute*]{} if $(z-w)^N [a(z),b(w)]=0$ for some $N\geq 0$. Here $[,]$ denotes the super bracket. This condition implies that $a\circ_n b = 0$ for $n\geq N$, so (\[opeform\]) becomes a finite sum. A [*commutative quantum operator algebra*]{} (CQOA) is a QOA whose elements pairwise quantum commute. Finally, the notion of a CQOA is equivalent to the notion of a vertex algebra. Every CQOA $\cA$ is itself a faithful $\cA$-module, called the [*left regular module*]{}. Define $$\rho:\cA\rightarrow QO(\cA),\ \ \ \ a\mapsto\hat a,\ \ \ \ \hat a(\zeta)b=\sum_{n\in\mathbb{Z}} (a\circ_n b)~\zeta^{-n-1}.$$ Then $\rho$ is an injective QOA homomorphism, and the quadruple of structures $(\cA,\rho,1,\partial)$ is a vertex algebra in the sense of [@FLM]. Conversely, if $(V,Y,{\bf 1},D)$ is a vertex algebra, the collection $Y(V)\subset QO(V)$ is a CQOA. [*We will refer to a CQOA simply as a vertex algebra throughout the rest of this paper*]{}. The following are useful identities that measure the nonassociativity and noncommutativity of the Wick product, and the failure of the positive circle products to be derivations of the Wick product. Let $a,b,c$ be vertex operators in some vertex algebra $\cA$, and let $n > 0$. Then $$\label{vaidi} :(:ab:)c:-:abc:=\sum_{k\geq0}{1\over(k+1)!}\left(:(\partial^{k+1}a)(b\circ_k c): +(-1)^{|a||b|}:(\partial^{k+1}b)(a\circ_k c):\right),$$ $$\label{vaidii} :ab:-(-1)^{|a||b|}:ba:=\sum_{k\geq0}{(-1)^k\over(k+1)!}\partial^{k+1}(a\circ_kb),$$ $$\label{vaidiii} a\circ_n(:bc:)-:(a\circ_nb)c:-(-1)^{|a||b|}:b(a\circ_nc):= \sum_{k=1}^n\left(\begin{matrix} n\cr k\end{matrix} \right)(a\circ_{n-k}b)\circ_{k-1}c,$$ $$\label{vaidiv} (:ab:)\circ_n c=\sum_{k\geq0}{1\over k!}:(\partial^ka)(b\circ_{n+k}c): +(-1)^{|a||b|}\sum_{k\geq0}b\circ_{n-k-1}(a\circ_k c) .$$ Category $\cR$ ============== In [@LL] we considered a certain category $\cR$ of vertex algebras, together with a functor from $\cR$ to the category of supercommutative rings. Let $\cR$ be the category of vertex algebras $\cA$ equipped with a $\mathbb{Z}_{\geq 0}$-filtration $$\label{goodi} \cA_{(0)}\subset\cA_{(1)}\subset\cA_{(2)}\subset \cdots,\ \ \ \cA = \bigcup_{k\geq 0} \cA_{(k)}$$ such that $\cA_{(0)} = \mathbb{C}$, and for all $a\in \cA_{(k)}$, $b\in\cA_{(l)}$, we have $$\label{goodii} a\circ_n b\in\cA_{(k+l)},\ \ \ \text{for}\ n<0,$$ $$\label{goodiii} a\circ_n b\in\cA_{(k+l-1)},\ \ \ \text{for}\ n\geq 0.$$ Elements $a(z)\in\cA_{(d)}\setminus \cA_{(d-1)}$ are said to have degree $d$, and morphisms in $\cR$ are vertex algebra homomorphisms that preserve the filtration. Filtrations on vertex algebras satisfying (\[goodii\])-(\[goodiii\]) were introduced in [@LiII] and are known as [*good increasing filtrations*]{}. Setting $\cA_{(-1)} = \{0\}$, the associated graded object $gr(\cA) = \bigoplus_{k\geq 0}\cA_{(k)}/\cA_{(k-1)}$ is a $\mathbb{Z}_{\geq 0}$-graded associative, supercommutative algebra with a unit $1$ under a product induced by the Wick product on $\cA$. 
In general, there is no natural linear map $\cA\ra gr (\cA)$, but for each $r\geq 1$ we have the projection $$\label{proj} \phi_r: \cA_{(r)} \ra \cA_{(r)}/\cA_{(r-1)}\subset gr(\cA).$$ Moreover, $gr(\cA)$ has a derivation $\partial$ of degree zero (induced by the operator $\partial = \frac{d}{dz}$ on $\cA$), and for each $a\in\cA_{(d)}$ and $n\geq 0$, the operator $a\circ_n$ on $\cA$ induces a derivation of degree $d-k$ on $gr(\cA)$, which we also denote by $a\circ_n$. Here $$k = sup \{ j\geq 1|~ \cA_{(r)}\circ_n \cA_{(s)}\subset \cA_{(r+s-j)}~\forall r,s,n\geq 0\},$$ as in [@LL]. Finally, these derivations give $gr(\cA)$ the structure of a vertex Poisson algebra. The assignment $\cA\mapsto gr(\cA)$ is a functor from $\cR$ to the category of $\mathbb{Z}_{\geq 0}$-graded supercommutative rings with a differential $\partial$ of degree 0, which we will call $\partial$-rings. A $\partial$-ring is the same thing as an [*abelian*]{} vertex algebra, that is, a vertex algebra $\cV$ in which $[a(z),b(w)] = 0$ for all $a,b\in\cV$. A $\partial$-ring $A$ is said to be generated by a subset $\{a_i|~i\in I\}$ if $\{\partial^k a_i|~i\in I, k\geq 0\}$ generates $A$ as a graded ring. The key feature of $\cR$ is the following reconstruction property [@LL]: \[recon\] Let $\cA$ be a vertex algebra in $\cR$ and let $\{a_i|~i\in I\}$ be a set of generators for $gr(\cA)$ as a $\partial$-ring, where $a_i$ is homogeneous of degree $d_i$. If $a_i(z)\in\cA_{(d_i)}$ are vertex operators such that $\phi_{d_i}(a_i(z)) = a_i$, then $\cA$ is strongly generated as a vertex algebra by $\{a_i(z)|~i\in I\}$. As shown in [@LI], there is a similar reconstruction property for kernels of surjective morphisms in $\cR$. Let $f:\cA\ra \cB$ be a morphism in $\cR$ with kernel $\cJ$, such that $f$ maps $\cA_{(k)}$ onto $\cB_{(k)}$ for all $k\geq 0$. The kernel $J$ of the induced map $gr(f): gr(\cA)\ra gr(\cB)$ is a homogeneous $\partial$-ideal (i.e., $\partial J \subset J$). A set $\{a_i|~i\in I\}$ such that $a_i$ is homogeneous of degree $d_i$ is said to generate $J$ as a $\partial$-ideal if $\{\partial^k a_i|~i\in I,~k\geq 0\}$ generates $J$ as an ideal. \[idealrecon\] Let $\{a_i| i\in I\}$ be a generating set for $J$ as a $\partial$-ideal, where $a_i$ is homogeneous of degree $d_i$. Then there exist vertex operators $a_i(z)\in \cA_{(d_i)}$ with $\phi_{d_i}(a_i(z)) = a_i$, such that $\{a_i(z)|~i\in I\}$ generates $\cJ$ as a vertex algebra ideal. The structure of $\cH(n)^{O(n)}$ {#secortho} ================================ The ring of Laurent polynomials $\mathbb{C}[t,t^{-1}]$ may be regarded as an abelian Lie algebra. It has a central extension $\gh = \mathbb{C}[t,t^{-1}]\oplus \mathbb{C}\kappa$ with bracket $[t^n,t^m] = n \delta_{n+m,0} \kappa$, and $\mathbb{Z}$-gradation $deg(t^n) = n$, $deg(\kappa) = 0$. Let $\gh_{\geq 0} = \oplus_{n\geq 0} \gh_n$, and let $C$ be the one-dimensional $\gh_{\geq 0}$-module on which $t^n$ acts trivially for $n\geq 0$, and $\kappa$ acts by the identity. Define $V = U(\gh)\otimes_{U(\gh_{\geq 0})} C$, and let $\alpha(n)\in End(V)$ be the linear operator representing $t^n$ on $V$. Define $\alpha(z) = \sum_{n\in\mathbb{Z}} \alpha(n) z^{-n-1}$, which is easily seen to lie in $QO(V)$ and satisfy the OPE relation $$\alpha(z)\alpha(w)\sim (z-w)^{-2}.$$ The vertex algebra $\cH$ generated by $\alpha$ is known as the [*Heisenberg vertex algebra*]{}. The rank $n$ Heisenberg algebra $\cH(n)$ is just the tensor product of $n$ copies of $\cH$, with generators $\alpha^1,\dots,\alpha^n$. 
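As an aside (an illustration added here, not part of the original text), the mode commutation relations $[\alpha(m),\alpha(k)] = m\,\delta_{m+k,0}$ encoded in the OPE above can be checked in a standard polynomial (Fock-space) model of $\cH$, in which $\alpha(-m)$ acts by multiplication by $x_m$ and $\alpha(m)$ by $m\,\partial/\partial x_m$ for $m>0$; this realization is an assumption of the sketch and is not the induced-module construction used in the text.

\begin{verbatim}
import sympy as sp

# Polynomial Fock-space model (an assumed alternative realization):
# alpha(-m) = multiplication by x_m, alpha(m) = m * d/dx_m for m > 0,
# alpha(0) = 0.
N = 4
xs = sp.symbols(f'x1:{N+1}')                  # x_1, ..., x_N

def alpha(n, f):
    if n == 0:
        return sp.Integer(0)
    if n < 0:
        return sp.expand(xs[-n - 1] * f)
    return sp.expand(n * sp.diff(f, xs[n - 1]))

# Check [alpha(m), alpha(k)] = m * delta_{m+k,0} on a sample vector:
v = xs[0] ** 2 * xs[2] + 3 * xs[1]
for m in range(-3, 4):
    for k in range(-3, 4):
        lhs = alpha(m, alpha(k, v)) - alpha(k, alpha(m, v))
        rhs = (m if m + k == 0 else 0) * v
        assert sp.simplify(lhs - rhs) == 0
\end{verbatim}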
There is a natural conformal structure of central charge $n$ on $\cH(n)$, with Virasoro element $$\label{virasoro} L(z) = \frac{1}{2} \sum_{i=1}^n :\alpha^i(z) \alpha^i(z):,$$ under which each $\alpha^i$ is primary of weight one. The full automorphism group of $\cH(n)$ preserving $L(z)$ is easily seen to be the orthogonal group $O(n)$, which acts linearly on the vector space $U$ spanned by $\{\alpha^1,\dots,\alpha^n\}$. First, any conformal automorphism $\phi$ of $\cH(n)$ must lie in $GL(U)$ by weight considerations. Moreover, since $\alpha^i \circ_1 \alpha^j = \phi(\alpha^i)\circ_1 \phi(\alpha^j) = \delta_{i,j}$, $\phi$ must preserve the pairing $\bra,\ket$ on $U$ defined by $\bra \alpha^i,\alpha^j\ket = \delta_{i,j}$. We define a good increasing filtration on $\cH(n)$ as follows: $\cH(n)_{(r)}$ is spanned by the set $$\label{goodsv} \{:\partial^{k_1} \alpha^{i_1} \cdots \partial^{k_s}\alpha^{i_s} :| ~ k_j\geq 0,~ s \leq r\}.$$ Then $\cH(n)\cong gr(\cH(n))$ as linear spaces, and as commutative algebras we have $$\label{structureofgrs} gr(\cH(n))\cong Sym \bigoplus_{k\geq 0} V_k.$$ In this notation, $V_k$ is the linear span of $\{\alpha^{i}_k |~ i=1,\dots,n\}$, where $\alpha^i_k$ is the image of $\partial^k \alpha^i(z)$ in $gr(\cH(n))$ under the projection $\phi_1: \cH(n)_{(1)}\ra \cH(n)_{(1)}/\cH(n)_{(0)}\subset gr(\cH(n))$. The action of $O(n)$ on $\cH(n)$ preserves this filtration, and induces an action of $O(n)$ on $gr(\cH(n))$ by algebra automorphisms. For all $k\geq 0$ we have isomorphisms of $O(n)$-modules $V_k\cong \mathbb{C}^n$. Finally, for any reductive subgroup $G\subset O(n)$, $\cH(n)^G\cong gr(\cH(n)^G)$ as linear spaces, and $$\label{structureofgrsinv} gr(\cH(n)^G )\cong (gr(\cH(n))^G \cong (Sym \bigoplus_{k\geq 0} V_k)^G$$ as commutative algebras. In the case $G=O(n)$, the following classical theorem of Weyl [@W] describes the generators and relations of the ring $(Sym \bigoplus_{k\geq 0} V_k)^{O(n)}$: \[weylfft\] (Weyl) For $k\geq 0$, let $V_k$ be the copy of the standard $O(n)$-module $\mathbb{C}^n$ with orthonormal basis $\{x_{i,k}| ~i=1,\dots,n\}$. The invariant ring $(Sym \bigoplus_{k\geq 0} V_k )^{O(n)}$ is generated by the quadratics $$\label{weylgenerators} q_{a,b} = \sum_{i=1}^n x_{i,a} x_{i,b},\ \ \ \ \ \ 0\leq a\leq b.$$ For $a>b$, define $q_{a,b} = q_{b,a}$, and let $\{Q_{a,b}|\ a,b\geq 0\}$ be commuting indeterminates satisfying $Q_{a,b} = Q_{b,a}$ and no other algebraic relations. The kernel $I_n$ of the homomorphism $\mathbb{C}[Q_{a,b}]\ra (Sym \bigoplus_{k\geq 0} V_k)^{O(n)}$ sending $Q_{a,b}\mapsto q_{a,b}$ is generated by the $(n+1)\times (n+1)$ determinants $$\label{weylrel} d_{I,J} = \det \left[\begin{matrix} Q_{i_0,j_0} & \cdots & Q_{i_0,j_n} \cr \vdots & & \vdots \cr Q_{i_n,j_0} & \cdots & Q_{i_n,j_n} \end{matrix} \right].$$ In this notation, $I=(i_0,\dots, i_{n})$ and $J = (j_0,\dots, j_{n})$ are lists of integers satisfying $$\label{ijineq} 0\leq i_0<\cdots <i_n,\ \ \ \ \ \ 0\leq j_0<\cdots <j_n.$$ Since $Q_{a,b} = Q_{b,a}$, it is clear that $d_{I,J} = d_{J,I}$. 
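
The containment half of Theorem \[weylfft\], namely that each $d_{I,J}$ lies in the kernel $I_n$, amounts to the elementary fact that the matrix $\big(q_{i_r,j_s}\big)$ is a product of an $(n+1)\times n$ and an $n\times (n+1)$ matrix and is therefore singular. The following sympy sketch checks this symbolically for $n=2$, with index lists drawn from $\{0,\dots,4\}$; it is only an illustration (the cutoff $4$ and all names are ours), not a substitute for the theorem.

```python
import sympy as sp
from itertools import combinations

# Check (for n = 2, indices in {0,...,4}) that the (n+1)x(n+1) determinants
# d_{I,J} vanish under the substitution Q_{a,b} -> q_{a,b} = sum_i x_{i,a} x_{i,b}.
n = 2
x = {(i, a): sp.Symbol('x_%d_%d' % (i, a)) for i in range(1, n + 1) for a in range(5)}
q = lambda a, b: sum(x[(i, a)] * x[(i, b)] for i in range(1, n + 1))

for I in combinations(range(5), n + 1):
    for J in combinations(range(5), n + 1):
        M = sp.Matrix(n + 1, n + 1, lambda r, s: q(I[r], J[s]))
        # Gram-type matrix built from n+1 vectors in C^n, so its determinant vanishes.
        assert sp.expand(M.det()) == 0
print("all d_{I,J} with entries from {0,...,4} vanish under Q_{a,b} -> q_{a,b} for n = 2")
```
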
Under the projection $$\phi_2: (\cH(n)^{O(n)})_{(2)}\ra (\cH(n)^{O(n)})_{(2)}/(\cH(n)^{O(n)})_{(1)}\subset gr(\cH(n)^{O(n)}) \cong (Sym \bigoplus_{k\geq 0} V_k)^{O(n)},$$ the generators $q_{a,b}$ of $(Sym \bigoplus_{k\geq 0} V_k)^{O(n)}$ correspond to vertex operators $\omega_{a,b}$ given by $$\label{omegagen} \omega_{a,b} = \sum_{i=1}^n :\partial^a \alpha^i \partial^b \alpha^i:,\ \ \ \ \ \ 0\leq a\leq b.$$ By Lemma \[recon\], the set $\{\omega_{a,b}|~0\leq a \leq b\}$ is a strong generating set for $\cH(n)^{O(n)}$. Note that $\omega_{0,0} = 2L$, where $L$ is the Virasoro element (\[virasoro\]). The subspace $(\cH(n)^{O(n)})_{(2)}$ of degree at most 2 has a basis $\{1\} \cup \{\omega_{a,b}\}$, and for all $n\geq 0$, the operators $\omega_{a,b}\circ_n$ preserve this vector space. It follows that every term in the OPE formula for $\omega_{a,b}(z) \omega_{c,d}(w)$ is a linear combination of these generators, so they form a Lie conformal algebra. We calculate that for $a,b,c\geq 0$ and $0\leq m \leq a+b+c+1$, $$\label{bcalc} \omega_{a,b} \circ_m \partial^c \alpha^i = \lambda_{a,b,c,m} \partial^{a+b+c+1-m} \alpha^i$$ where $$\lambda_{a,b,c,m} = (-1)^b \frac{(b+c+1)!}{(b+c+1-m)!} + (-1)^a \frac{(a+c+1)!}{(a+c+1-m)!}.$$ It follows that for $m\leq a+b+c+1$ we have $$\label{opeformula} \omega_{a,b}\circ_m \omega_{c,d} = \lambda_{a,b,c,m} \omega_{a+b+c+1-m,d} + \lambda_{a,b,d,m}\omega_{c,a+b+d+1-m}.$$ In fact, there is a somewhat more economical set of strong generators for $\cH(n)^{O(n)}$. For each $m\geq 0$, let $A_m$ denote the vector space spanned by $\{\omega_{a,b}|~ a+b = m\}$, which is homogeneous of weight $m+2$. Clearly $\text{dim}(A_{2m}) = m+1 = \text{dim}(A_{2m+1})$ for $m\geq 0$. Moreover, $\partial(A_m)\subset A_{m+1}$, and we have $$\label{deca} \text{dim} \big(A_{2m} / \partial(A_{2m-1})\big) = 1,\ \ \ \ \ \ \text{dim} \big(A_{2m+1} / \partial(A_{2m})\big) = 0.$$ For $m\geq 0$, define $$\label{defofj} j^{2m} = \omega_{0,2m},$$ which is clearly not a total derivative. Hence $A_{2m}$ has a decomposition $$\label{decompofa} A_{2m} = \partial (A_{2m-1})\oplus \bra j^{2m}\ket = \partial^2 (A_{2m-2})\oplus \bra j^{2m}\ket ,$$ where $\bra j^{2m}\ket$ is the linear span of $j^{2m}$. Similarly, $A_{2m+1}$ has a decomposition $$\label{decompofai} A_{2m+1} = \partial^2(A_{2m-1})\oplus \bra \partial j^{2m}\ket = \partial^3 (A_{2m-2})\oplus \bra \partial j^{2m}\ket.$$ It is easy to see that $\{\partial^{2i} j^{2m-2i}|~ 0\leq i\leq m\}$ and $\{\partial^{2i+1} j^{2m-2i}|\ 0\leq i\leq m\}$ are bases of $A_{2m}$ and $A_{2m+1}$, respectively. Hence each $\omega_{a,b}\in A_{2m}$ and $\omega_{c,d}\in A_{2m+1}$ can be expressed uniquely in the form $$\label{lincomb} \omega_{a,b} =\sum_{i=0}^m \lambda_i \partial^{2i}j^{2m-2i},\ \ \ \ \ \ \omega_{c,d} =\sum_{i=0}^m \mu_i \partial^{2i+1}j^{2m-2i}$$ for constants $\lambda_i,\mu_i$, $i=0,\dots,m$. Hence $\{j^{2m}|\ m\geq 0\}$ is an alternative strong generating set for $\cH(n)^{O(n)}$, and it will be convenient to pass back and forth between the sets $\{j^{2m}|\ m\geq 0\}$ and $\{\omega_{a,b}|\ 0\leq a\leq b\}$. \[ordfingen\] $\cH(n)^{O(n)}$ is generated as a vertex algebra by $j^0$ and $j^2$. Let $\cJ$ denote the vertex subalgebra of $\cH(n)^{O(n)}$ generated by $j^0$ and $j^2$. We need to show that $j^{2m}\in\cJ$ for all $m\geq 2$. Specializing (\[opeformula\]) shows that $$\label{specialope} j^2\circ_1 j^{2k} = 4 \omega_{2,2k} + (4+4k) j^{2k+2}.$$ Moreover, it is easy to check that $\omega_{2,2k} \equiv - j^{2k+2}$ modulo second derivatives. 
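
As a quick consistency check on the two coefficients in (\[specialope\]), one can evaluate the formula for $\lambda_{a,b,c,m}$ in (\[bcalc\]) directly: by (\[opeformula\]), the coefficients of $\omega_{2,2k}$ and of $j^{2k+2}$ in $j^2\circ_1 j^{2k} = \omega_{0,2}\circ_1\omega_{0,2k}$ are $\lambda_{0,2,0,1}$ and $\lambda_{0,2,2k,1}$, respectively. The snippet below (illustrative only; the function name is ours) confirms that these equal $4$ and $4+4k$.

```python
from math import factorial

# lambda_{a,b,c,m} from (bcalc); used here only to confirm the two coefficients
# appearing in (specialope).
def lam(a, b, c, m):
    return ((-1) ** b * (factorial(b + c + 1) // factorial(b + c + 1 - m))
            + (-1) ** a * (factorial(a + c + 1) // factorial(a + c + 1 - m)))

assert lam(0, 2, 0, 1) == 4                        # coefficient of omega_{2,2k}
for k in range(10):
    assert lam(0, 2, 2 * k, 1) == 4 + 4 * k        # coefficient of j^{2k+2}
print("coefficients in (specialope) agree with the formula for lambda")
```
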
It follows that $j^2\circ_1j^{2k} \equiv (3+4k)j^{2k+2}$ modulo a linear combination of elements of the form $\partial^{2i} j^{2k+2-2i}$ for $1\leq i \leq k+1$. The claim then follows by induction on $k$. Consider the category of all vertex algebras with generators $\{J^{2m}|~m\geq 0\}$, which satisfy the same OPE relations as the generators $\{j^{2m}|~m\geq 0\}$ of $\cH(n)^{O(n)}$. Since the vector space with basis $\{1\}\cup \{\partial^lj^{2m}|~l,m\geq 0\}$ is closed under all nonnegative circle products, it forms a Lie conformal algebra. By Theorem 7.12 of [@BK], this category possesses a universal object $\cV_n$, which is [*freely*]{} generated by $\{J^{2m}|\ m\geq 0\}$. In other words, there are no nontrivial normally ordered polynomial relations among the generators and their derivatives in $\cV_n$. Then $\cH(n)^{O(n)}$ is a quotient of $\cV_n$ by an ideal $\cI_n$, and since $\cH(n)^{O(n)}$ is a simple vertex algebra, $\cI_n$ is a maximal ideal. Let $\pi_n: \cV_n\ra \cH(n)^{O(n)}$ denote the quotient map, which sends $J^{2m}\mapsto j^{2m}$. Using the formula (\[lincomb\]), which holds in $\cH(n)^{O(n)}$ for all $n$, we can define an alternative strong generating set $\{\Omega_{a,b}| ~0\leq a\leq b\}$ for $\cV_n$ by the same formula: for $a+b = 2m$ and $c+d = 2m+1$, $$\Omega_{a,b} =\sum_{i=0}^m \lambda_i \partial^{2i}J^{2m-2i},\ \ \ \ \ \ \Omega_{c,d} =\sum_{i=0}^m \mu_i \partial^{2i+1}J^{2m-2i}.$$ Clearly $\pi_n(\Omega_{a,b}) = \omega_{a,b}$. We will use the same notation $A_m$ to denote the linear span of $\{\Omega_{a,b}|~ a+b = m\}$, when no confusion can arise. Finally, $\cV_{n}$ has a good increasing filtration in which $(\cV_n)_{(2k)}$ is spanned by iterated Wick products of the generators $J^{2m}$ and their derivatives, of length at most $k$, and $(\cV_n)_{(2k+1)} = (\cV_n)_{(2k)}$. Equipped with this filtration, $\cV_n$ lies in the category $\cR$, and $\pi_n$ is a morphism in $\cR$. Since the vertex operators $J^{2m}$ satisfy the same OPE relations as the $j^{2m}$, $\cV_n$ is also generated as a vertex algebra by $J^0$ and $J^2$. Recall the variables $Q_{a,b}$ and $q_{a,b}$ appearing in Theorem \[weylfft\]. Since $\cV_n$ is freely generated by $\{J^{2m}|\ m\geq 0\}$, and $\{\Omega_{a,b}|\ 0\leq a\leq b\}$ and $\{\partial^k J^{2m}|\ k,m\geq 0\}$ form bases for the same space, we may identify $gr(\cV_n)$ with $\mathbb{C}[Q_{a,b}]$, and we identify $gr(\cH(n)^{O(n)})$ with $\mathbb{C}[q_{a,b}]/I_n$. Under this identification, $gr(\pi_{n}): gr(\cV_n) \ra gr(\cH(n)^{O(n)})$ is just the quotient map sending $Q_{a,b}\mapsto q_{a,b}$. Clearly the projection $\pi_{n}: \cV_n\ra \cH(n)^{O(n)}$ maps each filtered piece $(\cV_n)_{(k)}$ onto $(\cH(n)^{O(n)})_{(k)}$, so the hypotheses of Lemma \[idealrecon\] are satisfied. Since $I_{n} = Ker (gr(\pi_{n}))$ is generated by the determinants $d_{I,J}$, we can apply Lemma \[idealrecon\] to find vertex operators $D_{I,J}\in (\cV_{n})_{(2n+2)}$ satisfying $\phi_{2n+2}(D_{I,J}) = d_{I,J}$, such that $\{D_{I,J}\}$ generates $\cI_{n}$. Since $\Omega_{a,b}$ has weight $a+b+2$, $$\label{wtod} wt(D_{I,J}) = |I| + |J| +2n+2,\ \ \ \ \ \ |I| =\sum_{a=0}^{n} i_a,\ \ \ \ \ \ |J| =\sum_{a=0}^{n} j_a.$$ In general, the vertex operators $a_i(z)$ furnished by Lemma \[idealrecon\] satisfying $\phi_{d_i}(a_i(z)) = a_i$ which generate $\cI$ are not unique. 
However, in our case, $D_{I,J}$ is uniquely determined by the conditions $$\label{uniquedij} \phi_{2n+2}(D_{I,J}) = d_{I,J},\ \ \ \ \ \ \pi_{n}(D_{I,J}) = 0.$$ If $D'_{I,J}$ is another vertex operator satisfying (\[uniquedij\]), $D_{I,J} - D'_{I,J}$ lies in $(\cV_n)_{(2n)} \cap \cI_{n}$, and since there are no relations in $\cH(n)^{O(n)}$ of degree less than $2n+2$, we have $D_{I,J} - D'_{I,J}=0$. There is a distinguished element $D_0 = D_{(0,\dots,n),(0,\dots,n)}$ which is the unique element of $\cI_n$ of minimal weight $n^2+3n+2$. It is annihilated by all operators $J^{2m}(k)$ for $k>2m+1$, which lower the weight by $k-2m-1$. Given a homogeneous polynomial $p\in gr(\cV_{n})\cong \mathbb{C}[Q_{a,b}|~0\leq a\leq b]$ of degree $k$ in the variables $Q_{a,b}$, a [*normal ordering*]{} of $p$ will be a choice of normally ordered polynomial $P\in (\cV_{n})_{(2k)}$, obtained by replacing $Q_{a,b}$ by $\Omega_{a,b}$, and replacing ordinary products with iterated Wick products of the form (\[iteratedwick\]). Of course $P$ is not unique, but for any choice of $P$ we have $\phi_{2k}(P) = p$, where $\phi_{2k}: (\cV_{n})_{(2k)} \ra (\cV_{n})_{(2k)} /(\cV_{n})_{(2k-1)} \subset gr(\cV_{n})$ is the usual projection. For the rest of this section, $D^{2k}$, $E^{2k}$, $F^{2k}$, etc., will denote elements of $(\cV_{n})_{(2k)}$ which are homogeneous, normally ordered polynomials of degree $k$ in the vertex operators $\Omega_{a,b}$. Let $D_{I,J}^{2n+2}\in (\cV_{n})_{(2n+2)}$ be some normal ordering of $d_{I,J}$. Then $$\pi_{n}(D_{I,J}^{2n+2}) \in (\cH(n)^{O(n)})_{(2n)},$$ and $\phi_{2n}(\pi_n(D_{I,J}^{2n+2})) \in gr(\mathcal{H}(n)^{O(n)})$ can be expressed uniquely as a polynomial of degree $n$ in the variables $q_{a,b}$. Choose some normal ordering of the corresponding polynomial in the variables $\Omega_{a,b}$, and call this vertex operator $-D^{2n}_{I,J}$. Then $D^{2n+2}_{I,J} + D^{2n}_{I,J}$ has the property that $\pi_{n}(D^{2n+2}_{I,J} + D^{2n}_{I,J})\in (\cH(n)^{O(n)})_{(2n-2)}.$ Continuing this process, we arrive at a vertex operator $\sum_{k=1}^{n+1} D^{2k}_{I,J}$ in the kernel of $\pi_{n}$. We must have $$\label{decompofd} D_{I,J} = \sum_{k=1}^{n+1}D^{2k}_{I,J},$$ since $D_{I,J}$ is uniquely characterized by (\[uniquedij\]). In this decomposition, the term $D^2_{I,J}$ lies in the space $A_m$ spanned by $\{\Omega_{a,b}|~a+b=m\}$, for $m = |I| +|J|+2n$. Recall that for $m$ even, $A_m = \partial^2 A_{m-2} \oplus \bra J^{m}\ket$, and for $m$ odd, $A_m = \partial^3 A_{m-3} \oplus \bra \partial J^{m-1}\ket$. For $m$ even (respectively odd), define $pr_m: A_m\ra \bra J^m\ket$ (respectively $pr_m: A_m\ra \bra \partial J^{m-1}\ket$) to be the projection onto the second term. Define the [*remainder*]{} $$\label{defofrij} R_{I,J} = pr_m(D^2_{I,J}).$$ \[uniquenessofr\] Given $D_{I,J}\in\cI_n$ as above, suppose that $D_{I,J} = \sum_{k=1}^{n+1} D^{2k}_{I,J}$ and $D_{I,J} = \sum_{k=1}^{n+1} \tilde{D}^{2k}_{I,J}$ are two different decompositions of $D_{I,J}$ of the form (\[decompofd\]). Then $$D^2_{I,J} - \tilde{D}^2_{I,J} \in \partial^2 (A_{m-2}),$$ where $m = |I| +|J| +2 n$. In particular, $R_{I,J}$ is independent of the choice of decomposition of $D_{I,J}$. This is analogous to Corollary 4.8 of [@LI], and the proof is almost the same. First, we claim that for all $j,k,l,m\geq 0$, $\Omega_{j,k}\circ_0\Omega_{l,m}$ is a total derivative. In view of the decomposition (\[deca\]) which holds in $\cV_n$ as well as $\cH(n)^{O(n)}$, it suffices to show that $J^{2k}\circ_0 J^{2l}$ is a total derivative for all $k,l$. 
This is clear because $J^{2k}\circ_0 J^{2l}$ lies in $A_m$ for $m=2k+2l+1$, and $A_m = \partial(A_{m-1})$. Next, let $\mu = ~:a_1 \cdots a_m:$ be a normally ordered monomial in $(\cV_{n})_{(2m)}$, where each $a_i$ is one of the generators $\Omega_{a,b}$. Let $\tilde{\mu} = :a_{i_1} \cdots a_{i_m}:$, where $(i_1,\dots, i_m)$ is some permutation of $(1,\dots,m)$. We claim that for any decomposition $\mu - \tilde{\mu} = \sum_{k=1}^{m-1} E^{2k}$ of the difference $\mu - \tilde{\mu}\in (\cV_{n})_{(2m-2)}$, the term $E^2$ is a second derivative. To prove this statement, we proceed by induction on $m$. For $m=1$, there is nothing to prove since $\mu - \tilde{\mu} =0$. For $m=2$ and $\mu = ~:\Omega_{a,b}\Omega_{c,d}:$, we have $$\label{reari} \mu - \tilde{\mu} = ~:\Omega_{a,b}\Omega_{c,d}: ~- ~: \Omega_{c,d}\Omega_{a,b}:~ = \sum_{i\geq 0} \frac{(-1)^i}{(i+1)!}\partial^{i+1}(\Omega_{a,b}\circ_{i} \Omega_{c,d}),$$ by (\[vaidii\]). Since $\Omega_{a,b}\circ_{0} \Omega_{c,d}$ is already a total derivative, it follows that $\mu -\tilde{\mu}$ is a second derivative, as claimed. Next, we assume the result for $r\leq m-1$. Since the permutation group on $m$ letters is generated by the transpositions $(i,i+1)$ for $i=1,\dots,m-1$, we may assume without loss of generality that $$\tilde{\mu} = ~:a_1 \cdots a_{i-1} a_{i+1} a_i a_{i+2} \cdots a_m:.$$ If $i>1$, we have $\mu - \tilde{\mu} = ~:a_1 \cdots a_{i-1} f:$, where $f =~ :a_i \cdots a_m:~ -~: a_{i+1}a_i a_{i+2} \cdots a_m:$, which lies in $(\cV_{n})_{(2m-2i+2)}$. Since each term of $f$ has degree at least 2, it follows that $\mu - \tilde{\mu}$ can be expressed in the form $\sum_{k=i}^{m-1}E^{2k}$. Since $i>1$, there is no term of degree $2$. Given any rearrangement $\mu - \tilde{\mu}=\sum_{k=1}^{m-1}F^{2k}$, it follows from our inductive hypothesis that the term $F^2$ is a second derivative. Suppose next that $i=1$, so that $\tilde{\mu} = ~:a_2 a_1 a_3 \cdots a_m:$. Define $$\nu = \ :(:a_1 a_2:) a_3 \cdots a_m:,\ \ \ \ \ \ \tilde{\nu} = \ :(:a_2 a_1:) a_3 \cdots a_m:,$$ and note that $\nu - \tilde{\nu} = \ : (:a_1 a_2: -: a_2 a_1: )f:$, where $f = \ :a_3 \cdots a_m:$. By (\[reari\]), $:a_1 a_2: - : a_2 a_1:$ is homogeneous of degree 2, so $\nu - \tilde{\nu}$ is a linear combination of monomials of degree $2m-2$. By inductive assumption, any rearrangement $\nu - \tilde{\nu} =\sum_{k=1}^{m-1}F^{2k}$ has the property that $F^2$ is a second derivative. Next, by (\[vaidi\]), we have $$\label{diffi} \mu - \nu = - \sum_{k\geq 0} \frac{1}{(k+1)!} \bigg( :(\partial^{k+1} a_1) (a_2 \circ_k f): + : (\partial^{k+1} a_2)(a_1\circ_k f):\bigg).$$ Since the operators $\circ_k$ for $k\geq 0$ are homogeneous of degree $-2$, each term appearing in (\[diffi\]) has degree at most $2m-2$. Moreover, $$deg \big(:(\partial^{k+1} a_1) (a_2\circ_k f):\big) = 2+ deg(a_2 \circ_k f),\ \ \ deg \big(:(\partial^{k+1} a_2) (a_1\circ_k f):\big) = 2+ deg(a_1 \circ_k f),$$ so the only way to obtain terms of degree $2$ is for $a_2\circ_k f$ or $a_1\circ_k f$ to be a scalar. This can only happen if $k>0$, in which case we obtain either $\partial^{k+1} a_1$ or $\partial^{k+1} a_2$, which are second derivatives. By inductive assumption, any rearrangement of $\mu - \nu$ can contain only second derivatives in degree 2. Similarly, $\tilde{\mu} - \tilde{\nu}$ has degree at most $2m-2$, and any rearrangement of $\tilde{\mu} - \tilde{\nu}$ can only contain second derivatives in degree 2. Since $\mu - \tilde{\mu} = (\mu - \nu) + (\nu - \tilde{\nu}) + (\tilde{\nu} - \tilde{\mu})$, the claim follows.
An immediate consequence is the following statement. Let $E\in (\cV_{n})_{(2m)}$ be a vertex operator of degree $2m$, and choose a decomposition $$\label{dcnew} E = \sum_{k=1}^m E^{2k},$$ where $E^{2k}$ is a homogeneous, normally ordered polynomial of degree $k$ in the variables $\Omega_{a,b}$. If $E = \sum_{k=1}^m F^{2k}$ is any rearrangement of (\[dcnew\]), i.e., another decomposition of $E$ of the same form, then $E^{2}-F^2$ is a second derivative. Finally, specializing this to the case $E=D_{I,J}$ proves the lemma. Let $R_0$ denote the remainder of the element $D_0$. The condition $R_0 \neq 0$ is equivalent to the existence of a decoupling relation of the form $j^{n^2+3n} = P(j^0,j^2,\dots, j^{n^2+3n-2})$ in $\cH(n)^{O(n)}$. Let $D_0 = \sum_{k=1}^{n+1}D^{2k}_0$ be a decomposition of $D_0$ of the form (\[decompofd\]). If $R_0\neq 0$, we have $D^2_0 = \lambda J^{n^2+3n} + \partial^2 \omega$ for some $\lambda \neq 0$ and some $\omega\in A_{n^2+3n-2}$. Applying the projection $\pi_{n}:\cV_{n}\ra \cH(n)^{O(n)}$, since $\pi_{n}(D_0)=0$ we obtain $$j^{n^2+3n}= -\frac{1}{\lambda}\big( \partial^2 \pi_{n}(\omega) + \sum_{k=2}^{n+1} \pi_{n}(D^{2k}_0) \big),$$ which is a decoupling relation of the desired form. The converse follows from the fact that $D_0$ is the unique element of the ideal $\cI_{n}$ of weight $n^2+3n+2$, up to scalar multiples. \[mainconj\] For all $n\geq 1$, the remainder $R_0$ is nonzero. For $n=1$, it is easy to check that $R_0 = -\frac{5}{4} J^4$, and in the Appendix, we write down computer calculations that prove this conjecture in the cases $n=2$ and $n=3$. For $n=2$, we have $R_0 = \frac{149}{600} J^{10}$, and for $n=3$ we have $R_0 = -\frac{2419}{705600} J^{18}$. However, for $n>3$ it is difficult to calculate $R_0$ directly. Unlike the case of $\cW_{1+\infty,-n}$ where a similar remainder was shown to be nonzero in [@LI], there seems to be no nice recursive structure that allows us to proceed by induction on $n$. Even in the case $\cW_{1+\infty,-n}$, we still lack a conceptual explanation for this phenomenon, and we expect that such an explanation will be necessary to prove our conjecture for $\cH(n)^{O(n)}$. The next theorem shows that the strong finite generation of $\cH(n)^{O(n)}$ is an easy consequence of our conjecture. \[stronggen\] Suppose that Conjecture \[mainconj\] holds. Then for all $r\geq \frac{1}{2}(n^2+3n)$, there exists a decoupling relation $$\label{maindecoup} j^{2r} = Q_{2r}(j^0,j^2,\dots,j^{n^2+3n-2}),$$ where $Q_{2r}$ is some normally ordered polynomial in $j^0,j^2,\dots,j^{n^2+3n-2}$, and their derivatives. It follows that $\{j^0,j^2,\dots,j^{n^2+3n-2}\}$ is a minimal strong generating set for $\cH(n)^{O(n)}$. The decoupling relation $j^{n^2+3n} = P(j^0,j^2,\dots,j^{n^2+3n-2})$ given by Conjecture \[mainconj\] corresponds to an element $J^{n^2+3n} - P(J^0,J^2,\dots,J^{n^2+3n-2})\in \cI_{n}$. We need to show that for all $r\geq \frac{1}{2}(n^2+3n)$, there exists an element $J^{2r} - Q_{2r}(J^0,J^2,\dots,J^{n^2+3n-2})\in \cI_{n}$, so we assume inductively that $Q_{2r-2}$ exists. Choose a decomposition $$Q_{2r-2} = \sum_{k=1}^{d} Q^{2k}_{2r-2},$$ where $Q_{2r-2}^{2k}$ is a homogeneous normally ordered polynomial of degree $k$ in the vertex operators $J^0,\dots,J^{n^2+3n-2}$ and their derivatives. In particular, $$Q^2_{2r-2} = \sum_{i=0}^{\frac{1}{2}(n^2+3n-2)} c_i \partial^{2r-2i-2} J^{2i},$$ for constants $c_i$. We apply the operator $J^2 \circ_1$, which raises the weight by two. 
By (\[specialope\]), we have $J^2 \circ_1 J^{2r-2} \equiv (4r-1)J^{2r}$ modulo second derivatives. Moreover, using (\[vaidiii\]) and (\[specialope\]), we see that $J^2 \circ_1 \big(\sum_{k=1}^{d} Q^{2k}_{2r-2}\big)$ can be expressed in the form $\sum_{k=1}^{d} E^{2k}$ where each $E^{2k}$ is a normally ordered polynomial in $J^0,\dots,J^{n^2+3n}$ and their derivatives. If $J^{n^2+3n}$ or its derivatives appear in $E^{2k}$, we can use the element $J^{n^2+3n} -P(J^0,\dots, J^{n^2+3n-2})$ in $\cI_{n}$ to eliminate the variable $J^{n^2+3n}$ and any of its derivatives, modulo $\cI_{n}$. Hence $J^2 \circ_1 \big(\sum_{k=1}^{d} Q^{2k}_{2r-2}\big)$ can be expressed modulo $\cI_{n}$ in the form $\sum_{k=1}^{d'} F^{2k}$, where $d'\geq d$, and $F^{2k}$ is a normally ordered polynomial in $J^0,\dots, J^{n^2+3n-2}$ and their derivatives. It follows that $$\frac{1}{4r-1} J^2 \circ_1 \big(J^{2r-2} - Q_{2r-2}(J^0,\dots, J^{n^2+3n-2})\big)$$ can be expressed as an element of $\cI_{n}$ of the desired form. Representation theory of $\cH(n)^{O(n)}$ ======================================== The basic tool in studying the representation theory of vertex algebras is the [*Zhu functor*]{}, which was introduced by Zhu in [@Z]. Given a vertex algebra $\cW$ with weight grading $\cW = \bigoplus_{n\in\mathbb{Z}} \cW_n$, this functor attaches to $\cW$ an associative algebra $A(\cW)$, together with a surjective linear map $\pi_{Zh}:\cW\ra A(\cW)$. For $a\in \cW_{m}$ and $b\in\cW$, define $$\label{defzhu} a*b = Res_z \bigg (a(z) \frac{(z+1)^{m}}{z}b\bigg),$$ and extend $*$ by linearity to a bilinear operation $\cW\otimes \cW\ra \cW$. Let $O(\cW)$ denote the subspace of $\cW$ spanned by elements of the form $$\label{zhuideal} a\circ b = Res_z \bigg (a(z) \frac{(z+1)^{m}}{z^2}b\bigg)$$ where $a\in \cW_m$, and let $A(\cW)$ be the quotient $\cW/O(\cW)$, with projection $\pi_{Zh}:\cW\ra A(\cW)$. Then $O(\cW)$ is a two-sided ideal in $\cW$ under the product $*$, and $(A(\cW),*)$ is a unital, associative algebra. The assignment $\cW\mapsto A(\cW)$ is functorial, and if $\cI$ is a vertex algebra ideal of $\cW$, we have $A(\cW/\cI)\cong A(\cW)/ I$, where $I = \pi_{Zh}(\cI)$. A well-known formula asserts that for all $a\in \cW_m$ and $b\in \cW$, $$\label{zhucomm} a*b - b*a \equiv Res_z (1+z)^{m -1} a(z) b \ \ \ \ \text{mod} \ O(\cW).$$ A $\mathbb{Z}_{\geq 0}$-graded module $M = \bigoplus_{n\geq 0} M_n$ over $\cW$ is called [*admissible*]{} if for every $a\in\cW_m$, $a(n) M_k \subset M_{m+k -n-1}$, for all $n\in\mathbb{Z}$. Given $a\in\cW_m$, the Fourier mode $a(m-1)$ acts on each $M_k$. The subspace $M_0$ is then a module over $A(\cW)$ with action $[a]\mapsto a(m-1) \in End(M_0)$. In fact, $M\mapsto M_0$ provides a one-to-one correspondence between irreducible, admissible $\cW$-modules and irreducible $A(\cW)$-modules. If $A(\cW)$ is a commutative algebra, all its irreducible modules are one-dimensional, and the corresponding $\cW$-modules $M = \bigoplus_{n\geq 0} M_n$ are cyclic and generated by any nonzero $v\in M_0$. Accordingly, we call such a module a [*highest-weight module*]{} for $\cW$, and we call $v$ a [*highest-weight vector*]{}. Let $\cW$ be a vertex algebra which is strongly generated by a set of weight-homogeneous elements $\alpha_i$ of weights $w_i$, for $i$ in some index set $I$. Then $A(\cW)$ is generated by $\{ a_i = \pi_{Zh}(\alpha_i(z))|~i\in I\}$. Moreover, $A(\cW)$ inherits a filtration (but not a grading) by weight. \[abelianzhu\] For all $n\geq 1$, $A(\cH(n)^{O(n)})$ is a commutative algebra. 
In the case $n=1$, the Zhu algebra $A(\cH^+)$ is clearly abelian since it has two generators, one of which is central since it corresponds to the Virasoro element [@DNI]. For all $n\geq 1$, the circle products $j^{2l}\circ_k j^{2m}$ among the generators of $\cH(n)^{O(n)}$ are independent of $n$ for $0\leq k<2l+2m+3$. For $k = 2l+2m+3$, $j^{2l}\circ_k j^{2m}$ is a constant (which depends on $n$), and $j^{2l}\circ_k j^{2m} = 0$ for $k> 2l+2m+3$. Let $a^{2m}= \pi_{Zh}(j^{2m})$. It is clear from (\[zhucomm\]) that the commutator $[a^{2l},a^{2m}]$ depends only on $j^{2l}\circ_k j^{2m}$ for $0\leq k\leq 2l+2$, and therefore is not affected by the value of this constant. So this commutator is the same for all $n$, and in particular must vanish since it vanishes for $n=1$. Note that this argument is independent of Conjecture \[mainconj\]. All irreducible, admissible $\cH(n)^{O(n)}$-modules are highest-weight modules. Since $\cV_n$ has the same generators and OPE relations as $\cH(n)^{O(n)}$, the same argument shows that the Zhu algebra of $\cV_n$ is abelian. Since $\cV_n$ is freely generated by $J^0,J^2,\dots$, it follows that $A(\cV_n)$ is the polynomial algebra $\mathbb{C}[A^0,A^2,\cdots]$, where $A^{2m} = \pi_{Zh}(J^{2m})$. Moreover, $A(\cH(n)^{O(n)}) \cong \mathbb{C}[a^0,a^2,\dots] / I_n$, where $I_n = \pi_{Zh}(\cI_n)$, and we have a commutative diagram $$\label{commdiag} \begin{array}[c]{ccc} \cV_n &\stackrel{\pi_n}{\rightarrow}& \cH(n)^{O(n)} \\ \downarrow\scriptstyle{\pi_{Zh}}&&\downarrow\scriptstyle{\pi_{Zh}}\\ A(\cV_n) &\stackrel{A(\pi_n)}{\rightarrow}& A(\cH(n)^{O(n)}) \end{array} .$$ For $n=1$, it was shown by Dong-Nagatomo [@DNI] that $A(\cH^+) \cong \mathbb{C}[x,y] / I$, where $I$ is the ideal generated by the polynomials $$P=(y+x-4x^2)(70y+908x^2-515x+27),\ \ \ \ \ \ Q = (x-1)(x-\frac{1}{16})(x-\frac{9}{16})(y+x-4x^2).$$It follows that the irreducible, admissible $\cH^+$-modules are parametrized by the points on the variety $V(I)\subset \mathbb{C}^2$. Suppose now that Conjecture \[mainconj\] holds. Then $A(\cH(n)^{O(n)})$ is generated by $\{a^0, a^2,\dots,a^{n^2+3n-2}\}$, and it follows that $$A(\cH(n)^{O(n)}) \cong \mathbb{C}[a^0,a^2,\dots, a^{n^2+3n-2}]/ I_{n},$$ where $I_{n}$ is now regarded as an ideal inside $\mathbb{C}[a^0,a^2\dots, a^{n^2+3n-2}]$. The corresponding variety $V(I_{n})\subset \mathbb{C}^{\frac{1}{2}(n^2+3n)}$ then parametrizes the irreducible, admissible modules over $\cH(n)^{O(n)}$. The problem of classifying these modules is equivalent to giving a description of the ideal $I_n$. Invariant subalgebras of $\cH(n)$ under arbitrary reductive groups {#secgeneral} ================================================================== In this section, we study $\cH(n)^G$ for a general reductive group $G\subset O(n)$. Our approach is similar to our study of invariant subalgebras of ghost systems in [@LII]. By a fundamental result of Dong-Li-Mason [@DLM], $\cH(n)$ has a decomposition of the form $$\label{dlmdecomp} \cH(n) \cong \bigoplus_{\nu\in H} L(\nu)\otimes M^{\nu},$$ where $H$ indexes the irreducible, finite-dimensional representations $L(\nu)$ of $O(n)$, and the $M^{\nu}$’s are inequivalent, irreducible, highest-weight $\cH(n)^{O(n)}$-modules. The modules $M^{\nu}$ appearing in (\[dlmdecomp\]) have an integrality property; the eigenvalues of $\{j^{2m}(2m+1)|~m\geq 0\}$ on the highest-weight vectors $f_{\nu}$ are all integers. These modules therefore correspond to certain rational points on the variety $V(I_n)$, if it exists. 
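
As an aside, for $n=1$ the parametrizing variety $V(I)\subset \mathbb{C}^2$ recalled in the previous section is completely explicit: it consists of the one-dimensional component $y = 4x^2 - x$ together with finitely many isolated points. The following sympy sketch (an illustration of the parametrization only; the solver call and all names are ours and do not come from [@DNI]) lists those isolated points and verifies that they lie on $V(I)$.

```python
import sympy as sp

# The Dong-Nagatomo polynomials for n = 1, copied from the text above.
x, y = sp.symbols('x y')
P = (y + x - 4*x**2) * (70*y + 908*x**2 - 515*x + 27)
Q = (x - 1) * (x - sp.Rational(1, 16)) * (x - sp.Rational(9, 16)) * (y + x - 4*x**2)

# Points of V(I) off the curve y = 4x^2 - x: the second factor of P and the
# cubic factor of Q must vanish simultaneously.
extra = sp.solve([70*y + 908*x**2 - 515*x + 27,
                  (x - 1) * (x - sp.Rational(1, 16)) * (x - sp.Rational(9, 16))],
                 [x, y], dict=True)
for pt in extra:
    assert P.subs(pt) == 0 and Q.subs(pt) == 0
print(extra)
```
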
Using the decomposition (\[dlmdecomp\]), together with a classical theorem of Weyl, we show that $\cH(n)^G$ is finitely generated as a vertex algebra. This statement is analogous to Lemma 2 of [@LII], and is independent of Conjecture \[mainconj\]. We then prove some combinatorial properties of the modules $M^{\nu}$ appearing in (\[dlmdecomp\]), which are also independent of Conjecture \[mainconj\]. Next, we show that Conjecture \[mainconj\] implies that each module $M^{\nu}$ appearing in (\[dlmdecomp\]) possesses a certain finiteness property. Together with the finite generation of $\cH(n)^G$, this is enough to prove that $\cH(n)^G$ is strongly finitely generated. Since Conjecture \[mainconj\] holds for $n=2$ and $n=3$, the strong finite generation of $\cH(2)^G$ and $\cH(3)^G$ is an immediate consequence. \[ordfg\] For any reductive $G\subset O(n)$, $\cH(n)^G$ is finitely generated as a vertex algebra. Recall that $\cH(n)\cong gr(\cH(n))$ as linear spaces, and $$gr(\cH(n)^G )\cong gr(\cH(n))^G \cong (Sym \bigoplus_{k\geq 0} V_k)^G = R$$ as commutative algebras, where $V_k\cong \mathbb{C}^n$ as $O(n)$-modules. For all $p\geq 0$, there is an action of $GL_p$ on $\bigoplus_{k =0}^{p-1} V_k $ which commutes with the action of $G$. The natural inclusions $GL_p\hookrightarrow GL_q$ for $p<q$ sending $$M \ra \bigg[ \begin{matrix} M & 0 \cr 0 & I_{q-p} \end{matrix} \bigg]$$ induce an action of $GL_{\infty} = \lim_{p\ra \infty} GL_p$ on $\bigoplus_{k\geq 0} V_k$. We obtain an action of $GL_{\infty}$ on $Sym \bigoplus_{k\geq 0} V_k$ by algebra automorphisms, which commutes with the action of $G$. Hence $GL_{\infty}$ acts on $R$ as well. By a basic theorem of Weyl, $R$ is generated by the set of translates under $GL_{\infty}$ of any set of generators for $(Sym \bigoplus_{k = 0} ^{n-1} V_k)^G$ [@W]. Since $G$ is reductive, $(Sym \bigoplus_{k = 0} ^{n-1} V_k)^G$ is finitely generated. Hence there exists a finite set of homogeneous elements $\{f_1,\dots, f_k\}\subset R$ such that $\{ \sigma f_i|~ i=1,\dots,k,~ \sigma\in GL_{\infty}\}$ generates $R$. It follows from Lemma \[recon\] that the set of vertex operators $$\{(\sigma f_i)(z)\in \cH(n)^G|~i=1,\dots,k,~ \sigma\in GL_{\infty}\}$$ which correspond to $\sigma f_i$ under the linear isomorphism $\cH(n)^G\cong gr(\cH(n)^G) \cong R$ is a set of strong generators for $\cH(n)^G$. In the decomposition (\[dlmdecomp\]) of $\cH(n)$ as a bimodule over $O(n)$ and $\cH(n)^{O(n)}$, the $O(n)$-isotypic component of $\cH(n)$ of type $L(\nu)$ is isomorphic to $L(\nu)\otimes M^{\nu}$. Each $L(\nu)$ is a module over $G\subset O(n)$, and since $G$ is reductive, it has a decomposition $L(\nu) =\oplus_{\mu\in H^{\nu}} L(\nu)_{\mu}$. Here $\mu$ runs over a finite set $H^{\nu}$ of irreducible, finite-dimensional representations $L(\nu)_{\mu}$ of $G$, possibly with multiplicity. We thus obtain a refinement of (\[dlmdecomp\]): $$\label{decompref} \cH(n) \cong \bigoplus_{\nu\in H} \bigoplus_{\mu\in H^{\nu}} L(\nu)_{\mu} \otimes M^{\nu}.$$ Let $f_1(z),\dots,f_k(z)\in \cH(n)^G$ be the vertex operators corresponding to the polynomials $f_1, \dots,f_k$ under the linear isomorphism $\cH(n)^G\cong gr(\cH(n)^G) \cong R$. Clearly $f_1(z),\dots, f_k(z)$ must live in a finite direct sum $$\label{newsummation} \bigoplus_{j=1}^r L(\nu_j)\otimes M^{\nu_j}$$ of the modules appearing in (\[dlmdecomp\]).
By enlarging the collection $f_1(z),\dots,f_k(z)$ if necessary, we may assume without loss of generality that each $f_i(z)$ lives in a single representation of the form $L(\nu_j)\otimes M^{\nu_j}$. Moreover, we may assume that $f_i(z)$ lives in a trivial $G$-submodule $L(\nu_j)_{\mu_0} \otimes M^{\nu_j}$, where $\mu_0$ denotes the trivial, one-dimensional $G$-module. (In particular, $L(\nu_j)_{\mu_0}$ is one-dimensional). Since the actions of $GL_{\infty}$ and $O(n)$ on $\cH(n)$ commute, we may assume that $(\sigma f_i)(z)\in L(\nu_j)_{\mu_0}\otimes M^{\nu_j}$ for all $\sigma\in GL_{\infty}$. Since $\cH(n)^G$ is strongly generated by the set $\{ (\sigma f_i)(z)|~i=1,\dots,k,~ \sigma\in GL_{\infty}\}$, and each $M^{\nu_j}$ is an irreducible $\cH(n)^{O(n)}$-module, $\cH(n)^G$ is generated as an algebra over $\cH(n)^{O(n)}$ by $f_1(z),\dots,f_k(z)$. Finally, since $\cH(n)^{O(n)}$ is itself a finitely generated vertex algebra by Lemma \[ordfingen\], we conclude that $\cH(n)^G$ is finitely generated. Next, we need a fact about representations of associative algebras which can be found in [@LII]. Let $A$ be an associative $\mathbb{C}$-algebra (not necessarily unital), and let $W$ be a linear representation of $A$, via an algebra homomorphism $\rho: A\ra End(W)$. Regarding $A$ as a Lie algebra with commutator as bracket, let $\rho_{Lie}:A\ra End(W)$ denote the map $\rho$, regarded now as a Lie algebra homomorphism. There is an induced algebra homomorphism $U(A)\ra End(W)$, where $U(A)$ denotes the universal enveloping algebra of $A$. Given elements $a,b\in A$, we denote the product in $U(A)$ by $a*b$ to distinguish it from $ab\in A$. Given a monomial $\mu = a_1* \cdots * a_r\in U(A)$, let $\tilde{\mu} = a_1\cdots a_r$ be the corresponding element of $A$. Let $U(A)_+$ denote the augmentation ideal (i. e., the ideal generated by $A$), regarded as an associative algebra with no unit. The map $U(A)_+ \ra A$ sending $\mu\mapsto \tilde{\mu}$ is then an algebra homomorphism which makes the diagram $$\label{commutativediag} \begin{matrix} U(A)_+ & & \cr \downarrow & \searrow \cr A & \ra & End(W) \cr \end{matrix}$$ commute. Let $Sym(W)$ denote the symmetric algebra of $W$, whose $d$th graded component is denoted by $Sym^d(W)$. Clearly $\rho_{Lie}$ (but not $\rho$) can be extended to a Lie algebra homomorphism $\hat{\rho}_{Lie}: A\ra End(Sym(W))$, where $\hat{\rho}_{Lie}(a)$ acts by derivation on each $Sym^d(W)$: $$\hat{\rho}_{Lie}(a)( w_1\cdots w_d) = \sum_{i=1}^d w_1 \cdots \hat{\rho}_{Lie}(a)(w_i) \cdots w_d.$$ This extends to an algebra homomorphism $U(A)\ra End(Sym(W))$ which we also denote by $\hat{\rho}_{Lie}$, but there is no commutative diagram like (\[commutativediag\]) because the map $A\ra End(Sym(W))$ is not a map of associative algebras. In particular, the restrictions of $\hat{\rho}_{Lie}(\mu)$ and $\hat{\rho}_{Lie}(\tilde{\mu})$ to $Sym^d(W)$ are generally not the same for $d>1$. The following result appears in [@LII]. \[first\] Given $\mu \in U(A)$ and $d\geq 1$, define a linear map $\Phi^d_{\mu} \in End(Sym^d(W))$ by $$\label{mapmu} \Phi^d_{\mu} = \hat{\rho}_{Lie}(\mu) \big|_{Sym^d(W)}.$$ Let $E$ denote the subspace of $End(Sym^d(W))$ spanned by $\{\Phi^d_{\mu}|~\mu\in U(A)\}$. Note that $E$ has a filtration $$E_1\subset E_2\subset \cdots,\ \ \ \ \ \ E = \bigcup_{r\geq 1} E_r,$$ where $E_r$ is spanned by $\{\Phi^d_{\mu}|~ \mu \in U(A),~ deg(\mu) \leq r\}$. Then $E = E_d$. 
Given a monomial $\mu = a_1* \cdots * a_r \in U(A)$ of arbitrary degree $r>d$, we need to show that $\Phi^d_{\mu}$ can be expressed as a linear combination of elements of the form $\Phi^d_{\nu}$ where $\nu \in U(A)$ and $deg (\nu) \leq d$. Fix $p\leq d$, and let $Part^r_{p}$ denote the set of partitions $\phi$ of $\{1,\dots,r\}$ into $p$ disjoint, non-empty subsets $S^{\phi}_1,\dots, S^{\phi}_p$ whose union is $\{1,\dots,r\}$. Each subset $S^{\phi}_i$ is of the form $$S^{\phi}_i = \{i_1, \dots, i_{k_i} \},\ \ \ \ \ \ i_1<\cdots < i_{k_i}.$$ For $i=1,\dots, p$, let $m_i\in U(A)$ be the corresponding monomial $m_i = a_{i_1} *\cdots *a_{i_{k_i}}$. Let $J = (j_1,\dots,j_p)$ be an (ordered) subset of $\{1,\dots,d\}$. Define a linear map $g_{\phi}\in End(Sym^d(W))$ by $$\label{fphi} g_{\phi}(w_1 \cdots w_d) = \sum_J g_{\phi}^1(w_1) \cdots g_{\phi}^d (w_d),\ \ \ \ \ \ g_{\phi}^k (w_k) = \bigg\{ \begin{matrix} \hat{\rho}_{Lie}(m_i) (w_{j_i}) & k= j_i \cr & \cr w_k & k\neq j_i \end{matrix},$$ where the sum runs over all (ordered) $p$-element subsets $J$ as above. Note that we could replace $m_i\in U(A)$ with $\tilde{m_i}\in A$ in (\[fphi\]), since $\hat{\rho}_{Lie}(m_i)(w_{j_i}) = \hat{\rho}_{Lie}(\tilde{m}_i)(w_{j_i})$. We claim that for each $\phi\in Part^r_p$, $g_{\phi} \in E_d$. We proceed by induction on $p$. The case $p=1$ is trivial because $g_{\phi} = \hat{\rho}_{Lie}(a)$ as derivations on $Sym^d(W)$, where $a = a_1\cdots a_r$. Next, assume the result for all partitions $\psi\in Part^s_{q}$, for $q<p$ and $s\leq r$. Let $m_1,\dots,m_p\in U(A)$ be the monomials corresponding to $\phi$ as above, and define $m_{\phi} = \tilde{m}_1*\cdots * \tilde{m}_p \in U(A)$. By definition, $\Phi^d_{m_{\phi}} \in E_p\subset E_d$, and the leading term of $\Phi^d_{m_{\phi}}$ is $g_{\phi}$. The lower order terms are of the form $g_{\psi}$, where $\psi\in Part^p_q$ is a partition of $\{1,\dots, p\}$ into $q$ subsets, which each corresponds to a monomial in the variables $\tilde{m}_1,\dots,\tilde{m}_p$. By induction, each of these terms lies in $E_q$, and since $g_{\phi} \equiv \Phi^d_{m_{\phi}}$ modulo $E_q$, the claim is proved. Finally, using the derivation property of $A$ acting on $Sym^d(W)$, one checks easily that $$\label{triplesum} \Phi^d_{\mu} = \sum_{p=1}^d \sum_{\phi\in Part^r_p} g_{\phi}.$$ Since each $g_{\phi}$ lies in $E_d$ by the above claim, this completes the proof of the lemma. \[firstcor\] Let $f\in Sym^d(W)$, and let $M\subset Sym^d(W)$ be the cyclic $U(A)$-module generated by $f$. Then $\{\hat{\rho}_{Lie}(\mu)(f)|~ \mu\in U(A),~ deg(\mu)\leq d\}$ spans $M$. Let $\cL$ denote the Lie algebra generated by the Fourier modes $\{j^{2m}(k)|~k\in\mathbb{Z},~m\geq 0\}$ of the generators of $\cH(n)^{O(n)}$, and let $\cP\subset \cL$ be the subalgebra generated by the annihilation modes $\{j^{2m}(k)|~k\geq 0\}$. Note that $\cP$ has a decomposition $$\cP = \cP_- \oplus \cP_0\oplus \cP_+,$$ where $\cP_-$, $\cP_0$, and $\cP_+$ are the Lie algebras spanned by $\{j^{2m}(k)|~0\leq k< 2m+1\}$, $\{j^{2m}(2m+1)\}$, and $\{j^{2m}(k)|~k>2m+1\}$, respectively. Clearly $\cP$ preserves the filtration on $\cH(n)$, so each element of $\cP$ acts by a derivation of degree zero on $gr(\cH(n))$. Let $\cM$ be an irreducible, highest-weight $\cH(n)^{O(n)}$-submodule of $\cH(n)$ with generator $f(z)$, and let $\cM'$ denote the $\cP$-submodule of $\cM$ generated by $f(z)$. Since $f(z)$ has minimal weight among elements of $\cM$ and $\cP_+$ lowers weight, $f(z)$ is annihilated by $\cP_+$. 
Moreover, $\cP_0$ acts diagonalizably on $f(z)$, so $f(z)$ generates a one-dimensional $\cP_0\oplus \cP_+$-module. By the Poincare-Birkhoff-Witt theorem, $\cM'$ is a quotient of $$U(\cP)\otimes_{U(\cP_0\oplus \cP_+)} \mathbb{C} f(z),$$ and in particular is a cyclic $\cP_-$-module with generator $f(z)$. Suppose that $f(z)$ has degree $d$, that is, $f(z)\in \cH(n)_{(d)} \setminus \cH(n)_{(d-1)}$. Since each element of $\cP$ preserves the filtration on $\cH(n)$, and $\cM$ is irreducible, it is easy to see that the nonzero elements of $\cM'$ lie in $\cH(n)_{(d)} \setminus \cH(n)_{(d-1)}$. Therefore, the projection $\cH(n)_{(d)}\ra \cH(n)_{(d)}/\cH(n)_{(d-1)} \subset gr(\cH(n))$ restricts to an isomorphism of $\cP$-modules $$\label{isopmod} \cM'\cong gr(\cM')\subset gr(\cS(V)).$$ By Lemma \[firstcor\], we conclude that $\cM'$ is spanned by elements of the form $$\{j^{2l_1}(k_1)\cdots j^{2l_r}(k_r) f(z) |~ j^{2l_i}(k_i)\in \cP_-,~ r\leq d\}.$$ Next, we need a basic fact from linear algebra. Let $A = (A_{i,j})$, $i,j = 1,\dots,n$ be an $n\times n$-matrix, whose entries $A_{i,j}$ are all positive real numbers. We call $A$ [*totally increasing*]{} if $A_{i,j} \leq A_{i,j+1}$ and $A_{i,j}\leq A_{i+1,j}$ for all $i,j$, and for each $i,j = 1,\dots,n-1$, the $2\times 2$ matrix $$\bigg[ \begin{matrix} A_{i,j} & A_{i,j+1} \cr A_{i+1,j} & A_{i+1,j+1}\end{matrix} \bigg]$$ has positive determinant. \[totinc\] Any totally increasing matrix is nonsingular. Let $A$ be such a matrix. First, for $i=1,\dots,n$ we rescale the $i$th row by a factor of $1/A_{i,1}$. Next, for $j=2,\dots,n$, we rescale the $j$th column by a factor of $A_{1,1}/A_{1,j}$ to obtain $$\left[ \begin{matrix}1 & 1 & 1 & \cdot & \cdot & \cdot & 1 \cr 1 & \frac{A_{1,1} A_{2,2}}{A_{2,1}A_{1,2}} & \frac{A_{1,1}A_{2,3}}{A_{2,1}A_{1,3}} & \cdot & \cdot & \cdot & \frac{A_{1,1}A_{2,n}}{A_{2,1}A_{1,n}} \cr \vdots & \vdots & \vdots & & & & \vdots \cr 1 & \frac{A_{1,1}A_{n,2}}{A_{n,1}A_{1,2}} & \frac{A_{1,1} A_{n,3}}{A_{n,1}A_{1,3}} & \cdot & \cdot & \cdot & \frac{A_{1,1} A_{n,n}}{A_{n,1} A_{1,n}} \end{matrix}\right],$$ which is easily seen to be totally increasing. Finally, we subtract the first row from the $i$th row, for $i=2,\cdots,n$, obtaining $$\left[ \begin{matrix}1 & 1 & 1 & \cdot & \cdot & \cdot & 1 \cr 0 & \frac{A_{1,1} A_{2,2}}{A_{2,1}A_{1,2}} -1 & \frac{A_{1,1}A_{2,3}}{A_{2,1}A_{1,3}} -1 & \cdot & \cdot & \cdot & \frac{A_{1,1}A_{2,n}}{A_{2,1}A_{1,n}} -1 \cr \vdots & \vdots & \vdots & & & & \vdots \cr 0 & \frac{A_{1,1}A_{n,2}}{A_{n,1}A_{1,2}} -1 & \frac{A_{1,1} A_{n,3}}{A_{n,1}A_{1,3}} -1 & \cdot & \cdot & \cdot & \frac{A_{1,1} A_{n,n}}{A_{n,1} A_{1,n}} -1 \end{matrix} \right].$$ It is easy to check that the $(n-1)\times (n-1)$ matrix $$\left[ \begin{matrix} \frac{A_{1,1} A_{2,2}}{A_{2,1}A_{1,2}} -1 & \frac{A_{1,1}A_{2,3}}{A_{2,1}A_{1,3}} -1 & \cdot & \cdot & \cdot & \frac{A_{1,1}A_{2,n}}{A_{2,1}A_{1,n}} -1 \cr \vdots & \vdots & & & & \vdots \cr \frac{A_{1,1}A_{n,2}}{A_{n,1}A_{1,2}} -1 & \frac{A_{1,1} A_{n,3}}{A_{n,1}A_{1,3}} -1 & \cdot & \cdot & \cdot & \frac{A_{1,1} A_{n,n}}{A_{n,1} A_{1,n}} -1 \end{matrix} \right]$$ is totally increasing, so the claim follows by induction on $n$. This lemma allows us to prove a certain useful property of $\cH(n)$ as a module over $\cP$. For simplicity of notation, we take $n=1$, but the result we are going to prove holds for any $n$. In this case, $\cH = \cH(1)$ is generated by $\alpha(z)$. Recall from (\[structureofgrs\]) that $\alpha_j$ denotes the image of $\partial^j\alpha$ in $gr(\cH)$. 
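
Lemma \[totinc\] is elementary and easy to experiment with numerically. The following snippet checks both the defining conditions and the nonsingularity for the family $A_{i,j}=2^{ij}$, $1\leq i,j\leq n$; this family is our own choice, made only for illustration, and plays no role in the argument.

```python
import numpy as np

# Numerical illustration of Lemma [totinc] on the sample family A_{i,j} = 2^{ij}.
def totally_increasing(A):
    n = A.shape[0]
    rows = all(A[i, j] <= A[i, j + 1] for i in range(n) for j in range(n - 1))
    cols = all(A[i, j] <= A[i + 1, j] for i in range(n - 1) for j in range(n))
    minors = all(A[i, j] * A[i + 1, j + 1] - A[i, j + 1] * A[i + 1, j] > 0
                 for i in range(n - 1) for j in range(n - 1))
    return rows and cols and minors

for n in range(2, 7):
    A = np.array([[2.0 ** (i * j) for j in range(1, n + 1)] for i in range(1, n + 1)])
    assert totally_increasing(A)
    assert abs(np.linalg.det(A)) > 0      # nonsingular, as Lemma [totinc] predicts
print("A_{i,j} = 2^{ij} is totally increasing and nonsingular for n = 2,...,6")
```
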
Let $W\subset gr(\cH)$ be the vector space with basis $\{\alpha_j|~ j\geq 0\}$, and for each $m\geq 0$, let $W_m$ be the subspace with basis $\{\alpha_j|~ 0\leq j\leq m\}$. Let $\phi:W\ra W$ be a linear map of weight $w\geq 1$, such that $$\label{arbmap} \phi(\alpha_j) = c_j \alpha_{j+w},$$ for constants $c_j \in \mathbb{C}$. For example, the restriction $j^{2k}(2k-w+1)\big|_{W}$ of any $j^{2k}(2k-w+1)\in \cP$, is such a map. \[third\] Fix $w\geq 1$ and $m\geq 0$, and let $\phi$ be a linear map satisfying (\[arbmap\]). Then the restriction $\phi \big|_{W_m}$ can be expressed uniquely as a linear combination of the operators $j^{2k}(2k-w+1)\big|_{W_m}$ for $0\leq 2k+1-w \leq 2m+1$. Suppose first that $w$ is odd, and let $k_j = j+ \frac{1}{2}(w-1)$, for $j=0,\dots,m$. In this notation, we need to show that $\phi \big|_{W_m}$ can be expressed uniquely as a linear combination of the operators $j^{2k_j}(2j)\big|_{W_m}$ for $j=0,\dots,m$. Using (\[bcalc\]), we calculate $$\label{actionofp} j^{2k_j}(2j)(\alpha_i) = \lambda_{0,2k_j,i,2j} (\alpha_{i+w}) = \bigg(\frac{(2k_j+i+1)!}{(2k_j+i+1-2j)!} + \frac{(i+1)!}{(i+1-2j)!} \bigg)\alpha_{i+w}.$$ Let $M^w$ be the $(m+1)\times(m+1)$ matrix with entries $M^w_{i,j} = \lambda_{0,2k_j,i,2j}$, for $i,j = 0,\dots,m$. Let ${\bf c}$ be the column vector in $\mathbb{C}^{m+1}$ whose transpose is given by $(c_0,\dots,c_m)$. Given an arbitrary linear combination $$\psi = t_0 j^{2k_0}(0) + t_1 j^{2k_1}(2) + \cdots + t_{m} j^{2k_m}(2m)$$ of the operators $j^{2k_j}(2j)$ for $0\leq j\leq m$, let ${\bf t}$ be the column vector whose transpose is $(t_0,\dots, t_{m})$. Note that $\phi \big|_{W_m} = \psi \big|_{W_m}$ precisely when $M^w {\bf t} = {\bf c}$, so in order to prove the claim, it suffices to show that $M^w$ is invertible. It is easy to check using (\[actionofp\]) that $M^w$ is totally increasing, so this is immediate from Lemma \[totinc\]. Finally, if $w$ is even, the same argument shows that for $k_j = j+\frac{w}{2}$, $j=0,\dots,m$, $\phi$ can be expressed uniquely as a linear combination of the operators $j^{2k_j}(2j+1)$ for $j=0,\dots,m$. Since (\[bcalc\]) holds for any $n\geq 1$, it follows that the statement of Lemma \[third\] holds for any $n$. More precisely, let $W\subset gr(\cH(n))$ be the vector space with basis $\{\alpha^i_j|~i=1,\dots,n,~j\geq 0\}$, and let $W_m\subset W$ be the subspace with basis $\{\alpha^i_j|~i=1,\dots, n,~0\leq j\leq m\}$. Let $\phi:W\ra W$ be a linear map of weight $w\geq1$ taking $$\label{actiongencase} \alpha^i_j\mapsto c_j \alpha^i_{j+w},\ \ \ \ \ \ i=1,\dots,n,$$ where the constants $c_j$ are independent of $i$. For example, each $\phi = j^{2k}(2k-w+1)\big|_W$ satisfies (\[actiongencase\]). Then $\phi\big|_{W_m}$ can be expressed uniquely as a linear combination of $j^{2k}(2k-w+1)\big|_{W_m}$ for $0\leq 2k+1-w \leq 2m+1$. The next result is analogous to Lemma 7 of [@LII], and the proof is almost identical. \[fourth\] Let $\cM$ be an irreducible, highest-weight $\cH(n)^{O(n)}$-submodule of $\cH(n)$ with highest-weight vector $f(z)$ of degree $d$. Let $\cM'$ be the corresponding $\cP$-module generated by $f(z)$, and let $f$ be the image of $f(z)$ in $gr(\cH(n))$, which generates $M = gr(\cM')$ as a $\cP$-module. Fix $m$ so that $f\in Sym^d(W_m)$. 
Then $\cM'$ is spanned by $$\{j^{2l_1}(k_1) \cdots j^{2l_r}(k_r) f(z)|~j^{2l_i}(k_i)\in \cP_-,\ \ r\leq d,\ \ 0\leq k_i \leq 2m+1\}.$$ We may work with $M = gr(\cM')$ rather than $\cM'$, and for notational convenience, we do not distinguish between elements of $U(\cP_-)$ and their images in $End(Sym^d(W))$. As in the proof of Lemma \[first\], let $E$ denote the subspace of $End(Sym^d (W))$ spanned by $U(\cP_-)$, and let $E_r$ be the subspace spanned by elements of $U(\cP_-)$ of degree at most $r$. Let $\tilde{E}_r$ be the subspace of $E_r$ spanned by elements of $U(\cP_-)$ which only depend on $j^{2l}(k)$ for $k\leq 2m+1$. It is not true that $E_d = \tilde{E}_d$ as subspaces of $End(Sym^d(W))$, but it suffices to show that these spaces of endomorphisms coincide when restricted to $Sym^d(W_m)$. Since $E = E_d$, and hence is spanned by monomials $\mu = a_1*\cdots * a_r\in U(\cP_-)$ of degree $r\leq d$, we have $$\label{triplesumii} \mu = \sum_{p=1}^r \sum_{\phi\in Part^r_p} g_{\phi},$$ where each partition $\phi\in Part^r_p$ corresponds to a set of monomials $m_1,\dots,m_p$, and $g_{\phi}$ is given by (\[fphi\]). For $p=r$, there is only one partition $\phi_0$ of $\{1,\dots,r\}$ into disjoint, non-empty subsets, and $g_{\phi_0}$ is defined on monomials $w_1\cdots w_d\in Sym^d(W)$ by $$\label{fphii} g_{\phi_0}(w_1\cdots w_d) = \sum_J g_{\phi_0}^1(w_1) \cdots g_{\phi_0}^d (w_d),\ \ \ \ \ \ g_{\phi_0}^k (w_k) = \bigg\{ \begin{matrix} a_i (w_{j_i}) & k= j_i \cr & \cr w_k & k\neq j_i\end{matrix},$$ where the sum runs over all (ordered) $r$-element subsets $J \subset \{1,\dots,d\}$. By Lemma \[third\], the restriction of $a_i$ to $W_m$ coincides with a linear combination $S_i$ of the elements $j^{2l}(k)\big|_{W_m}$ for $k\leq 2m+1$. Replace each of the factors $a_i (w_{j_i})$ appearing in (\[fphii\]) with $S_i (w_{j_i})$, and let $Q = \prod_{i=1}^r S_i$, which lies in $U(\cP_-)$, and depends only on $j^{2l}(k)$ for $k\leq 2m+1$. Clearly the restriction of $Q$ to $Sym^d(W_m)$ agrees with the restriction of $\mu$ to $Sym^d(W_m)$, modulo terms lying in $E_{r-1}$. The lemma then follows by induction on $r$. As in [@LII], we may order the elements $j^{2l}(k)\in \cP_-$ as follows: $j^{2l_1}(k_1) > j^{2l_2}(k_2)$ if $l_1>l_2$, or $l_1=l_2$ and $k_1<k_2$. Then Lemma \[fourth\] can be strengthened as follows: $\cM'$ is spanned by elements of the form $j^{2l_1}(k_1)\cdots j^{2l_r}(k_r) f(z)$ with $$\label{shapealpha} j^{2l_i}(k_i)\in \cP_-,\ \ \ \ r\leq d,\ \ \ \ 0\leq k_i\leq 2m+1,\ \ \ \ j^{2l_1}(k_1)\geq \cdots \geq j^{2l_r}(k_r).$$ Up to this point, everything we have proven in this section is independent of Conjecture \[mainconj\]. Then next lemma is where this assumption will enter. We use the notation $\cH(n)^{O(n)}[k]$, $\cM[k]$, and $\cM'[k]$ to denote the homogeneous components of these spaces of conformal weight $k$. As in [@LII], we define the [*Wick ideal*]{} $\cM_{Wick}\subset \cM$ to be the subspace spanned by elements of the form $$:a(z) b(z):,\ \ \ \ \ a(z)\in \bigoplus_{k>0} \cH(n)^{O(n)}[k],\ \ \ \ \ b(z)\in \cM.$$ Despite the choice of terminology, $\cM_{Wick}$ is not a vertex algebra ideal. It is properly contained in the space $C_1(\cM)$ defined in [@LiIII], and in particular it does not contain all elements of the form $L\circ_0 b = \partial b$ for $b\in\cM$. \[fifth\] Let $\cM$ be an irreducible, highest-weight $\cH(n)^{O(n)}$-submodule of $\cH(n)$ with highest-weight vector $f(z)$. 
If Conjecture \[mainconj\] holds, any homogeneous element of $\cM$ of sufficiently high weight lies in the Wick ideal. In particular, $\cM / \cM_{Wick}$ is finite-dimensional. It suffices to show that $\cM'[k]$ lies in the Wick ideal for $k>>0$, where $\cM'$ is the $\cP$-module generated by $f(z)$. As usual, let $d$ be the degree of $f(z)$, and fix $m$ so that $f \in Sym^d(W_m)$. Recall that $\cM'$ is spanned by elements of the form $j^{2l_1}(k_1)\cdots j^{2l_r}(k_r) f(z)$ satisfying (\[shapealpha\]). Fix an element $\alpha(z)$ of this form of weight $K>>0$. Since each operator $j^{2l_i}(k_i)$ has weight $2l_i+1-k_i$, $k_i\leq 2m+1$, and $K>>0$, we may assume that $l_1>>\frac{1}{2}(n^2+3n)$. Then the decoupling relation (\[maindecoup\]) allows us to express $j^{2l_1}(z)$ as a normally ordered polynomial $Q_{l_1}(z)$ in the generators $$\label{genera} \partial^t j^{2l}(z),\ \ \ \ \ 0\leq l\leq \frac{1}{2}(n^2+3n-2),\ \ \ \ \ t\geq 0.$$ We claim that for any weight-homogeneous, normally ordered polynomial $Q(z)$ in the generators (\[genera\]) of sufficiently high weight, any element $c(z)\in \cM$, and any $k$ satisfying $0\leq k\leq 2m+1$, $Q(z)\circ_k c(z)$ lies in $\cM_{Wick}$. Specializing this to the case $Q(z) = Q_{l_1}(z)$, $c(z) = j^{2l_2}(k_2)\cdots j^{2l_r}(k_r) f(z)$, and $k=k_1$, proves the lemma. We may assume without loss of generality that $Q(z)=:a(z)b(z):$ where $a(z) = \partial^t j^{2l}(z)$ for some $0\leq l\leq \frac{1}{2}(n^2 +3n-2)$. Then using (\[vaidiv\]), and suppressing the formal variable $z$, we have $$\label{appvaid} Q\circ_k c = \big(:ab:\big) \circ_{k} c = \sum_{r\geq0}{1\over r!}:(\partial^r a)(b\circ_{k+r}c): +\sum_{r\geq 0}b\circ_{k-r-1}(a\circ_r c) .$$ Suppose first that $b = \lambda 1$ for some constant $\lambda$. Then $Q= \lambda \partial^t j^{2l}$, and since $wt(Q) >>0$, we have $t>>0$. Hence $Q\circ_k= \lambda(\partial^t j^{2l})\circ_k = 0$ as an operator (since this operator vanishes whenever $t>k$). So we may assume without loss of generality that $b$ is not a constant. We proceed by induction on $k$. For $k=0$, each term appearing in (\[appvaid\]) lies in $\cM_{Wick}$, so there is nothing to prove. For $k>0$, the only terms appearing in (\[appvaid\]) that need not lie in $\cM_{Wick}$ a priori, are those of the form $\sum_{r=0}^{k-1} b\circ_{k-r-1}(a\circ_r c)$. However, each of these terms is weight-homogeneous, and the weight of $a\circ_r c = \partial^t j^{2l} \circ_r c$ is bounded above by $wt(c) + n^2+3n+1$, since $\partial^t j^{2l} \circ_r c=0$ for $t>r$. So we may still assume that $wt(b)>>0$. By our inductive assumption, all these terms then lie in $\cM_{Wick}$. \[sixth\] Let $\cM$ be an irreducible, highest-weight $\cH(n)^{O(n)}$-submodule of $\cH(n)$. Given a subset $S\subset \cM$, let $\cM_S\subset \cM$ denote the subspace spanned by elements of the form $$:\omega_1(z)\cdots \omega_t(z) \alpha(z):,\ \ \ \ \ \omega_j(z)\in \cH(n)^{O(n)},\ \ \ \ \ \alpha(z)\in S.$$ If Conjecture \[mainconj\] holds, there exists a finite set $S\subset \cM$ such that $\cM = \cM_S$. Now we are ready to prove our main result. \[sfg\] Suppose that Conjecture \[mainconj\] holds. Then for any reductive group $G$ of automorphisms of $\cH(n)$ preserving the conformal structure (\[virasoro\]), $\cH(n)^G$ is strongly finitely generated. 
By Theorem \[ordfg\], we can find vertex operators $f_1(z),\dots, f_k(z)$ such that the corresponding polynomials $f_1,\dots, f_k\in gr(\cH(n))^G$, together with all $GL_{\infty}$ translates of $f_1,\dots, f_k$, generate the invariant ring $gr(\cH(n))^G$. As in the proof of Lemma \[ordfg\], we may assume that each $f_i(z)$ lies in an irreducible, highest-weight $\cH(n)^{O(n)}$-module $\cM_i$ of the form $L(\nu)_{\mu_0}\otimes M^{\nu}$, where $L(\nu)_{\mu_0}$ is a trivial, one-dimensional $G$-module. Furthermore, we may assume without loss of generality that $f_1(z),\dots, f_k(z)$ are highest-weight vectors for the action of $\cH(n)^{O(n)}$. Otherwise, we can replace these with the highest-weight vectors in the corresponding modules. For each $\cM_i$, choose a finite set $S_i \subset \cM_i$ such that $\cM_i = (\cM_i)_{S_i}$, using Corollary \[sixth\]. Define $$S=\{j^0(z),j^2(z),\dots, j^{n^2+3n-2}(z) \} \cup \big(\bigcup_{i=1}^k S_i \big).$$ Since $\{j^0(z),j^2(z),\dots, j^{n^2+3n-2}(z)\}$ strongly generates $\cH(n)^{O(n)}$ (assuming Conjecture \[mainconj\]), and the set $\bigcup_{i=1}^k \cM_i$ strongly generates $\cH(n)^G$, it is immediate that $S$ is a strong, finite generating set for $\cH(n)^G$. We include one more secondary result in this section, which is independent of Conjecture \[mainconj\]. Recall that $\cH(n)^{O(n)}$ is the quotient of $\cV_n$ by the ideal $\cI_n$, which is generated by the set $\{D_{I,J}\}$, where $I,J$ satisfy (\[ijineq\]). We will show that $\cI_n$ is finitely generated as a vertex algebra ideal. Define $$U_n = (\cV_{n})_{(2n+2)}\cap \cI_{n},$$ which is just the vector space spanned by $\{D_{I,J}\}$, where $I,J$ satisfy (\[ijineq\]). We have $D_{I,J} = D_{J,I}$, but there are no other linear relations among these elements. It is easy to see that $U_n$ is a module over the Lie algebra $\cP$ generated by $\{j^{2m}(k) = j^{2m}\circ_k |~k,m\geq 0\}$, since the action $\cP$ preserves both the filtration degree and the ideal $\cI_n$. Note that $\cP$ has an alternative generating set $\{\Omega_{a,b}\circ_{a+b+1-w}|~ 0\leq a\leq b,~ a+b+1-w\geq 0\}$, where $\Omega_{a,b}\circ_{a+b+1-w}$ is homogeneous of weight $w$. Recall that $gr(\cV_n)$ is the polynomial algebra generated by $\alpha^i_k$ for $i=1,\dots,n$ and $k\geq 0$. The action of $\cP$ by derivations of degree zero on $gr(\cV_{n})$ coming from the vertex Poisson algebra structure is independent of $n$, and is specified by (\[bcalc\]). Using this formula, it is easy to see that the action of $\cP$ on $U_n$ is by weighted derivation" in the following sense. Fix $I = (i_0,\dots,i_n)$ and $J = (j_0,\dots,j_n)$, and let $D_{I,J}$ be the corresponding element of $U_n$. Given $p = \Omega_{a,b}\circ_{a+b+1-w}\in \cP$, we have $$\label{paraction} p(D_{I,J}) = \sum_{r=0}^n c_r D_{I^r,J} + \sum_{r=0}^n d_r D_{I,J^r},$$ for lists $I^r = (i_0,\dots, i_{r-1}, i_r + w,i_{r+1},\dots, i_n)$ and $J^r = (j_0,\dots, j_{r-1}, j_r+ w,j_{r+1},\dots, j_n)$, and constants $c_r,d_r$. If $i_r + w$ appears elsewhere on the list $I^r$, $c_r = 0$, and if $j_r + w$ appears elsewhere on the list $J^r$, $d_r = 0$. Otherwise, $$\label{actioni} c_r = \pm \lambda_{a,b,i_r,t},\ \ \ \ \ \ d_r = \pm \lambda_{a,b,j_r,t},$$ where $t= a+b+1-w$, and the signs $\pm$ are the signs of the permutations transforming $I^r$ and $J^r$ into lists in increasing order, as in (\[ijineq\]). $\cI_n$ is generated as a vertex algebra ideal by the set of elements $D_{I,J} \in U_n$ for which $|I| + |J| \leq 2n^2+3n$. 
Let $\cI'_n$ denote the ideal in $\cV_n$ generated by $\{D_{I,J}|~ |I| + |J| \leq 2n^2+3n\}$. Since $U_n$ generates $\cI_n$, $\cI'_n$ is properly contained in $\cI_n$ if and only if there exists some $D_{I,J}\in \cI_n \setminus \cI'_n$. Suppose that $\cI_n \setminus \cI'_n$ is nonempty, and let $D_{I,J}$ be an element of this form of minimal weight $d$, lying in $\cI_n \setminus \cI'_n$. Note that elements $D_{I,J}$ satisfying $|I|+|J| \leq 2n^2+3n$ have weight at most $2n^2+5n+2$. This choice guarantees that all elements $D_{I,J}$ for which the union of $I$ and $J$ is $\{0,1,\dots,n,n+1,n+2,\dots,2n+1\}$, must lie in $\cI'_n$. We say that $D_{I,J}$ has a [*hole*]{} at some integer $k\geq 0$ if $k$ does not appear in either $I$ or $J$. Since $d = wt(D_{I,J}) >2n^2+5n+2$, it follows that $D_{I,J}$ has a hole for some $k$ satisfying $0\leq k\leq 2n+1$, and also that there is some $l>2n+1$ such $D_{I,J}$ does [*not*]{} have a hole at $l$. Without loss of generality, we may assume that $l$ appears in $I$. Let $w = l-k$, and fix an integer $m$ greater than all entries of both $I$ and $J$. By Lemma \[third\], we can choose an element $p\in \cP$ which is a linear combination of the operators $\Omega_{a,b} \circ_{a+b+1-w}$, for which $p(\alpha^i_k) = \alpha^i_l$ and $p(\alpha^i_r) = 0$ for all $i=1,\dots, n$ and all $r\neq k$ satisfying $0\leq r\leq m$. Let $I'$ be the list obtained from $I$ by replacing $l$ by $k$, and let $D_{I',J}$ be the corresponding element of $U_n$. Since $D_{I',J}$ has weight $d-w$, it lies in $\cI'_n$ by inductive assumption, and we clearly have $p(D_{I',J}) = D_{I,J}$. This shows that $D_{I,J}$ lies in $\cI'_n$ as well. Appendix ======== In this Appendix, we present the results of computer calculations that prove Conjecture \[mainconj\] in the cases $n=2$ and $n=3$. These calculations were done using Kris Thielemann’s OPE package for Mathematica [@T]. For $n=2$, define $$D^6_0 =:\Omega_{0,0} \Omega_{1,1} \Omega_{2,2}: - :\Omega_{0,2} \Omega_{0,2} \Omega_{1,1}: + 2 :\Omega_{0,1} \Omega_{0,2} \Omega_{1,2}: - :\Omega_{0,0} \Omega_{1,2}\Omega_{1,2}: - :\Omega_{0,1} \Omega_{0,1} \Omega_{2,2}:,$$ $$D^4_0 = - 1/30 :\Omega_{0,0} \Omega_{1,7}: + 3/4 :\Omega_{0,0} \Omega_{2,6}: - 5/6: \Omega_{0,1} \Omega_{1,6}: - 13/15 :\Omega_{0,1} \Omega_{2,5}:$$ $$+ 61/30 : \Omega_{0,2} \Omega_{1,5}: - 1/6 : \Omega_{0,2} \Omega_{2,4}: + 5/12 :\Omega_{1,1} \Omega_{0,6}: - 1/4 :\Omega_{1,1} \Omega_{2,4}:$$ $$- 1/10 :\Omega_{1,2} \Omega_{0,5}: + :\Omega_{1,2} \Omega_{1,4}: + :\Omega_{1,2} \Omega_{2,3}: + 1/12 :\Omega_{0,4} \Omega_{2,2}: - 7/6 :\Omega_{1,3} \Omega_{2,2}: ,$$ $$D^2_0 = \frac{149}{600} \Omega_{0,10} +\partial^2 \bigg(- \frac{1697}{10800} \Omega_{0,8} + \frac{1637}{18900} \Omega_{1,7} - \frac{33241}{37800} \Omega_{2,6} + \frac{16117}{9450} \Omega_{3,5}- \frac{4223}{3780} \Omega_{4,4}\bigg) .$$ A computer calculation shows that the corresponding normally ordered polynomial in the variables $\omega_{a,b}$ is identically zero, so $D^6_0 + D^4_0 + D^2_0$ lies in $Ker(\pi_2)$, and hence must coincide with $D_0$. It follows that $R_0 = \frac{149}{600} J^{10}$, and in particular is nonzero. 
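
As a small sanity check on the data above, every normally ordered monomial appearing in $D^6_0$, $D^4_0$ and $D^2_0$ must be homogeneous of weight $n^2+3n+2 = 12$, since $wt(\Omega_{a,b}) = a+b+2$ and $D_0$ has weight $12$ by (\[wtod\]). The short script below (ours; the index lists simply transcribe the terms displayed above) confirms this. It does not, of course, replace the OPE computation needed to verify that the corresponding normally ordered polynomial in the $\omega_{a,b}$ vanishes.

```python
# Weight check for the n = 2 decomposition D_0 = D^6_0 + D^4_0 + D^2_0:
# wt(Omega_{a,b}) = a + b + 2, and a k-th derivative adds k to the weight.
wt = lambda term, der=0: der + sum(a + b + 2 for (a, b) in term)

D6 = [[(0,0),(1,1),(2,2)], [(0,2),(0,2),(1,1)], [(0,1),(0,2),(1,2)],
      [(0,0),(1,2),(1,2)], [(0,1),(0,1),(2,2)]]
D4 = [[(0,0),(1,7)], [(0,0),(2,6)], [(0,1),(1,6)], [(0,1),(2,5)], [(0,2),(1,5)],
      [(0,2),(2,4)], [(1,1),(0,6)], [(1,1),(2,4)], [(1,2),(0,5)], [(1,2),(1,4)],
      [(1,2),(2,3)], [(0,4),(2,2)], [(1,3),(2,2)]]
D2 = [([(0,10)], 0), ([(0,8)], 2), ([(1,7)], 2), ([(2,6)], 2), ([(3,5)], 2), ([(4,4)], 2)]

assert all(wt(t) == 12 for t in D6 + D4)
assert all(wt(t, d) == 12 for (t, d) in D2)
print("every monomial in D^6_0, D^4_0, D^2_0 has weight n^2 + 3n + 2 = 12")
```
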
Similarly, in the case $n=3$, define $$D^8_0 = :\Omega_{0,3} \Omega_{0,3} \Omega_{1,2} \Omega_{1,2}: - 2 :\Omega_{0,2}\Omega_{0,3} \Omega_{1,2} \Omega_{1,3} : + :\Omega_{0,2}\Omega_{0,2} \Omega_{1,3} \Omega_{1,3}:$$ $$- :\Omega_{0,3} \Omega_{0,3} \Omega_{1,1} \Omega_{2,2}: + 2 :\Omega_{0,1} \Omega_{0,3} \Omega_{1,3} \Omega_{2,2}: - :\Omega_{0,0}\Omega_{1,3} \Omega_{1,3} \Omega_{2,2}:$$ $$+ 2 :\Omega_{0,2}\Omega_{0,3}\Omega_{1,1}\Omega_{2,3}: - 2 :\Omega_{0,1}\Omega_{0,3}\Omega_{1,2}\Omega_{2,3}: - 2 :\Omega_{0,1}\Omega_{0,2}\Omega_{1,3}\Omega_{2,3}:$$ $$+ 2 :\Omega_{0,0}\Omega_{1,2}\Omega_{1,3}\Omega_{2,3}: + :\Omega_{0,1}\Omega_{0,1}\Omega_{2,3}\Omega_{2,3}: - :\Omega_{0,0}\Omega_{1,1}\Omega_{2,3}\Omega_{2,3}:$$ $$- :\Omega_{0,2}\Omega_{0,2}\Omega_{1,1}\Omega_{3,3}: + 2 :\Omega_{0,1}\Omega_{0,2}\Omega_{1,2}\Omega_{3,3}: - :\Omega_{0,0}\Omega_{1,2}\Omega_{1,2}\Omega_{3,3}:$$ $$- :\Omega_{0,1}\Omega_{0,1}\Omega_{2,2}\Omega_{3,3}: + :\Omega_{0,0}\Omega_{1,1}\Omega_{2,2}\Omega_{3,3}:,$$ $$D^6_0 = 1/56 :\Omega_{2,10}\Omega_{1,1}\Omega_{0,0}: - 23/42 :\Omega_{3,9}\Omega_{1,1}\Omega_{0,0}: - 1/56 :\Omega_{2,10}\Omega_{0,1}\Omega_{0,1}: + 23/42 :\Omega_{3,9}\Omega_{0,1}\Omega_{0,1}:$$ $$+ 7/12 :\Omega_{2,9}\Omega_{1,2}\Omega_{0,0}: + 62/105 :\Omega_{3,8}\Omega_{1,2}\Omega_{0,0}:- 7/12 :\Omega_{2,9}\Omega_{0,2}\Omega_{0,1}:- 62/105 :\Omega_{3,8}\Omega_{0,2}\Omega_{0,1}:$$ $$- 139/105 :\Omega_{2,8}\Omega_{1,3}\Omega_{0,0}:+ 1/15 :\Omega_{37}\Omega_{13}\Omega_{00}: - 7/24 :\Omega_{1,9}\Omega_{2,2}\Omega_{0,0}: + 1/4 :\Omega_{3,7}\Omega_{2,2}\Omega_{0,0}:$$ $$+ 139/105 :\Omega_{2,8}\Omega_{0,3}\Omega_{0,1}: - 1/15 :\Omega_{3,7}\Omega_{0,3}\Omega_{0,1}: + 3/20 :\Omega_{2,8}\Omega_{1,2}\Omega_{0,1}: - 81/70 :\Omega_{3,7}\Omega_{1,2}\Omega_{0,1}:$$ $$+ 7/24 :\Omega_{1,9}\Omega_{0,2}\Omega_{0,2}: - 1/4 :\Omega_{3,7}\Omega_{0,2}\Omega_{0,2}: - 3/20 :\Omega_{2,8}\Omega_{0,2}\Omega_{1,1}: + 81/70 :\Omega_{3,7}\Omega_{0,2}\Omega_{1,1}:$$ $$+ 1/21 :\Omega_{1,8}\Omega_{2,3}\Omega_{0,0}: - 2/3 :\Omega_{2,7}\Omega_{2,3}\Omega_{0,0}: - 7/10 :\Omega_{3,6}\Omega_{2,3}\Omega_{0,0}: + 9/35 :\Omega_{2,7}\Omega_{1,3}\Omega_{0,1}:$$ $$+ 5/6 :\Omega_{3,6}\Omega_{1,3}\Omega_{0,1}: - 3/20 :\Omega_{1,8}\Omega_{2,2}\Omega_{0,1}: - 1/5 :\Omega_{3,6}\Omega_{2,2}\Omega_{0,1}: - 1/21 :\Omega_{1,8}\Omega_{0,3}\Omega_{0,2}:$$ $$+ 2/3 :\Omega_{2,7}\Omega_{0,3}\Omega_{0,2}: + 7/10 :\Omega_{3,6}\Omega_{0,3}\Omega_{0,2}:+ 3/20 :\Omega_{1,8}\Omega_{1,2}\Omega_{0,2}: + 1/5 :\Omega_{3,6}\Omega_{1,2}\Omega_{0,2}:$$ $$- 9/35 :\Omega_{2,7}\Omega_{0,3}\Omega_{1,1}: - 5/6 :\Omega_{3,6}\Omega_{0,3}\Omega_{1,1}: - 1/30 :\Omega_{1,7}\Omega_{3,3}\Omega_{0,0}: + 3/4 :\Omega_{2,6}\Omega_{3,3}\Omega_{0,0}:$$ $$+ 9/10 :\Omega_{1,7}\Omega_{2,3}\Omega_{0,1}: + 1/5 :\Omega_{2,6}\Omega_{2,3}\Omega_{0,1}: + 13/15 :\Omega_{3,5}\Omega_{2,3}\Omega_{0,1}:- 81/70 :\Omega_{1,7}\Omega_{1,3}\Omega_{0,2}:$$ $$+ 2/5 :\Omega_{2,6}\Omega_{1,3}\Omega_{0,2}: - 61/30 :\Omega_{3,5}\Omega_{1,3}\Omega_{0,2}:+ 3/40 :\Omega_{0,8}\Omega_{2,2}\Omega_{1,1}: - 1/2 :\Omega_{3,5}\Omega_{2,2}\Omega_{1,1}:$$ $$+ 1/30 :\Omega_{1,7}\Omega_{0,3}\Omega_{0,3}: - 3/4 :\Omega_{2,6}\Omega_{0,3}\Omega_{0,3}: + 9/35 :\Omega_{1,7}\Omega_{1,2}\Omega_{0,3}: - 3/5 :\Omega_{2,6}\Omega_{1,2}\Omega_{0,3}:$$ $$+ 7/6 :\Omega_{3,5}\Omega_{1,2}\Omega_{0,3}: - 3/40 :\Omega_{0,8}\Omega_{1,2}\Omega_{1,2}:+ 1/2 :\Omega_{3,5}\Omega_{1,2}\Omega_{1,2}: - 5/6 :\Omega_{1,6}\Omega_{3,3}\Omega_{0,1}:$$ $$- 13/15 :\Omega_{2,5}\Omega_{3,3}\Omega_{0,1}: - 3/5 :\Omega_{1,6}\Omega_{2,3}\Omega_{0,2}: + 1/6 
:\Omega_{3,4}\Omega_{2,3}\Omega_{0,2}: - 24/35 :\Omega_{0,7}\Omega_{2,3}\Omega_{1,1}:$$ $$+ 4/5 :\Omega_{2,5}\Omega_{2,3}\Omega_{1,1}: - 1/2 :\Omega_{3,4}\Omega_{2,3}\Omega_{1,1}: + 5/6 :\Omega_{1,6}\Omega_{1,3}\Omega_{0,3}: + 13/15 :\Omega_{2,5}\Omega_{1,3}\Omega_{0,3}:$$ $$+ 3/5 :\Omega_{1,6}\Omega_{2,2}\Omega_{0,3}: - 1/6 :\Omega_{3,4}\Omega_{2,2}\Omega_{0,3}: + 24/35 :\Omega_{0,7}\Omega_{1,3}\Omega_{1,2}: - 4/5 :\Omega_{2,5}\Omega_{1,3}\Omega_{1,2}:$$ $$+ 1/2 :\Omega_{3,4}\Omega_{1,3}\Omega_{1,2}: + 61/30 :\Omega_{1,5}\Omega_{3,3}\Omega_{0,2}: - 1/6 :\Omega_{2,4}\Omega_{3,3}\Omega_{0,2}: + 5/12 :\Omega_{3,3}\Omega_{0,6}\Omega_{1,1}:$$ $$- 1/4 :\Omega_{2,4}\Omega_{3,3}\Omega_{1,1}: - 61/30 :\Omega_{1,5}\Omega_{2,3}\Omega_{0,3}: + 1/6 :\Omega_{2,4}\Omega_{2,3}\Omega_{0,3}:- 1/10 :\Omega_{3,3}\Omega_{0,5}\Omega_{1,2}:$$ $$+ :\Omega_{3,3}\Omega_{1,4}\Omega_{1,2}: + 1/15 :\Omega_{0,6}\Omega_{2,3}\Omega_{1,2}: - 4/5 :\Omega_{1,5}\Omega_{2,3}\Omega_{1,2}:+ 1/12 :\Omega_{3,3}\Omega_{2,2}\Omega_{0,4}:$$ $$- 5/12 :\Omega_{0,6}\Omega_{1,3}\Omega_{1,3}:+ 1/4 :\Omega_{2,4}\Omega_{1,3}\Omega_{1,3}: - 1/15 :\Omega_{0,6}\Omega_{2,2}\Omega_{1,3}: + 4/5 :\Omega_{1,5}\Omega_{2,2}\Omega_{1,3}:$$ $$- 1/6 :\Omega_{3,3}\Omega_{2,2}\Omega_{1,3}: - 1/12 :\Omega_{2,3}\Omega_{2,3}\Omega_{0,4}: + 1/10 :\Omega_{2,3}\Omega_{0,5}\Omega_{1,3}:- :\Omega_{1,4}\Omega_{2,3}\Omega_{1,3}:$$ $$+ 1/6 :\Omega_{2,3}\Omega_{2,3}\Omega_{1,3}:,$$ $$D^4_0 = \frac{451}{114660} :\Omega_{0,0}\Omega_{1,15}: + \frac{6251}{102960} :\Omega_{0,0}\Omega_{2,14}: + \frac{10261}{69300} :\Omega_{0,0}\Omega_{3,13}: - \frac{1}{140} :\Omega_{0,0}\Omega_{6,10}:$$ $$+ \frac{137}{840} :\Omega_{0,0}\Omega_{7,9}: + \frac{4849}{22050} :\Omega_{0,0}\Omega_{8,8}: - \frac{451}{114660} :\Omega_{0,1}\Omega_{0,15}: + \frac{96}{2695} :\Omega_{0,1}\Omega_{1,14}:$$ $$- \frac{1857}{28600} :\Omega_{0,1}\Omega_{2,13}: - \frac{467}{2475} :\Omega_{0,1}\Omega_{3,12}:- \frac{1}{560} :\Omega_{0,1}\Omega_{5,10}: - \frac{347}{1680} :\Omega_{0,1}\Omega_{6,9}:$$ $$- \frac{643}{9800} :\Omega_{0,1}\Omega_{7,8}: + \frac{719}{15015} :\Omega_{0,2}\Omega_{0,14}: + \frac{24929}{1201200} :\Omega_{0,2}\Omega_{1,13}: + \frac{155}{1584} :\Omega_{0,2}\Omega_{2,12}:$$ $$- \frac{1427}{9900} :\Omega_{0,2}\Omega_{3,11}:+ \frac{203}{720} :\Omega_{0,2}\Omega_{5,9}: + \frac{209}{8400} :\Omega_{0,2}\Omega_{6,8}: - \frac{1}{10} :\Omega_{0,2}\Omega_{7,7}:$$ $$+ \frac{59}{1470} :\Omega_{1,1}\Omega_{0,14}: + \frac{811}{6300} :\Omega_{1,1}\Omega_{2,12}:+ \frac{191}{1260} :\Omega_{1,1}\Omega_{3,11}:+ \frac{1}{112} :\Omega_{1,1}\Omega_{4,10}:$$ $$- \frac{23}{105} :\Omega_{1,1}\Omega_{5,9}: + \frac{5}{48} :\Omega_{1,1}\Omega_{6,8}: + \frac{1809}{9800} :\Omega_{1,1}\Omega_{7,7}: - \frac{4271}{69300} :\Omega_{0,3}\Omega_{0,13}: +$$ $$\frac{118}{4725} :\Omega_{0,3}\Omega_{1,12}: + \frac{22669}{118800} :\Omega_{0,3}\Omega_{2,11}: + \frac{1279}{10800} :\Omega_{0,3}\Omega_{3,10}: + \frac{257}{6300} :\Omega_{0,3}\Omega_{5,8}:$$ $$+ \frac{1537}{4200} :\Omega_{0,3}\Omega_{6,7}: + \frac{2467}{46200} :\Omega_{1,2}\Omega_{0,13}: + \frac{2431}{25200} :\Omega_{1,2}\Omega_{1,12}:+ \frac{591}{2200} :\Omega_{1,2}\Omega_{2,11}:$$ $$+ \frac{3523}{6300} :\Omega_{1,2}\Omega_{3,10}: + \frac{5}{8} :\Omega_{1,2}\Omega_{4,9}: + \frac{2383}{8400} :\Omega_{1,2}\Omega_{5,8}: + \frac{3}{700} :\Omega_{1,2}\Omega_{6,7}:$$ $$+ \frac{7}{96} :\Omega_{0,4}\Omega_{2,10}: - \frac{367}{504} :\Omega_{0,4}\Omega_{3,9}: + \frac{1019}{18900} :\Omega_{1,3}\Omega_{0,12}: - \frac{11}{1260} :\Omega_{1,3}\Omega_{1,11}:$$ $$- \frac{4399}{3600} 
:\Omega_{1,3}\Omega_{2,10}: - \frac{5851}{7560} :\Omega_{1,3}\Omega_{3,9}: - \frac{139}{210} :\Omega_{1,3}\Omega_{4,8}: - \frac{2447}{4200} :\Omega_{1,3}\Omega_{5,7}:$$ $$+ \frac{1}{6} :\Omega_{1,3}\Omega_{6,6}: - \frac{155}{2376} :\Omega_{2,2}\Omega_{0,12}: - \frac{283}{1100} :\Omega_{2,2}\Omega_{1,11}: + \frac{271}{1080} :\Omega_{2,2}\Omega_{3,9}:$$ $$+ \frac{1}{48} :\Omega_{2,2}\Omega_{4,8}: + \frac{1}{10} :\Omega_{2,2}\Omega_{5,7}:+ \frac{7}{150} :\Omega_{2,2}\Omega_{6,6}: - \frac{17}{560} :\Omega_{0,5}\Omega_{1,10}:$$ $$+ \frac{83}{240} :\Omega_{0,5}\Omega_{2,9}: + \frac{799}{2100} :\Omega_{0,5}\Omega_{3,8}: + \frac{17}{56} :\Omega_{1,4}\Omega_{1,10}: + \frac{17}{16} :\Omega_{1,4}\Omega_{2,9}:$$ $$+ \frac{27}{35} :\Omega_{1,4}\Omega_{3,8}:- \frac{169}{5400} :\Omega_{2,3}\Omega_{0,11}: + \frac{1021}{3150} :\Omega_{2,3}\Omega_{1,10}: - \frac{163}{540} :\Omega_{2,3}\Omega_{2,9}:$$ $$- \frac{2539}{2520} :\Omega_{2,3}\Omega_{3,8}:- \frac{1}{4} :\Omega_{2,3}\Omega_{4,7}: - \frac{59}{100} :\Omega_{2,3}\Omega_{5,6}: + \frac{4691}{10080} :\Omega_{0,6}\Omega_{1,9}:$$ $$- \frac{39899}{25200} :\Omega_{0,6}\Omega_{2,8}: - \frac{1889}{5040} :\Omega_{0,6}\Omega_{3,7}: + \frac{1037}{1680} :\Omega_{1,5}\Omega_{0,10}: - \frac{22}{105} :\Omega_{1,5}\Omega_{1,9}:$$ $$- \frac{1}{15} :\Omega_{1,5}\Omega_{2,8}: - \frac{1369}{4200} :\Omega_{1,5}\Omega_{3,7}: - \frac{17}{336} :\Omega_{2,4}\Omega_{0,10}: - \frac{37}{96} :\Omega_{2,4}\Omega_{1,9}:$$ $$- \frac{13}{240} :\Omega_{2,4}\Omega_{2,8}: + \frac{419}{1680} :\Omega_{2,4}\Omega_{3,7}: + \frac{7}{400} :\Omega_{3,3}\Omega_{0,10}: + \frac{137}{1260} :\Omega_{3,3}\Omega_{1,9}:$$ $$+ \frac{3397}{5040} :\Omega_{3,3}\Omega_{2,8}: + \frac{7}{90} :\Omega_{3,3}\Omega_{3,7}: + \frac{19}{72} :\Omega_{3,3}\Omega_{4,6}: + \frac{937}{1800} :\Omega_{3,3}\Omega_{5,5}:$$ $$+ \frac{9697}{29400} :\Omega_{0,7}\Omega_{1,8}: + \frac{37}{60} :\Omega_{0,7}\Omega_{2,7}: + \frac{301}{450} :\Omega_{0,7}\Omega_{3,6}: - \frac{967}{5040} :\Omega_{1,6}\Omega_{0,9}:$$ $$- \frac{47}{240} :\Omega_{1,6}\Omega_{1,8}: + \frac{123}{700} :\Omega_{1,6}\Omega_{2,7}: + \frac{4}{5} :\Omega_{1,6}\Omega_{3,6}: - \frac{13}{360} :\Omega_{2,5}\Omega_{0,9}:$$ $$- \frac{4733}{8400} :\Omega_{2,5}\Omega_{1,8}: - \frac{4}{5} :\Omega_{2,5}\Omega_{2,7}: + \frac{17}{225} :\Omega_{2,5}\Omega_{3,6}: + \frac{11}{252} :\Omega_{3,4}\Omega_{0,9}:$$ $$+ \frac{27}{140} :\Omega_{3,4}\Omega_{1,8}: + \frac{163}{420} :\Omega_{3,4}\Omega_{2,7}: + \frac{59}{180} :\Omega_{3,4}\Omega_{3,6}: - \frac{1199}{2352} :\Omega_{0,8}\Omega_{1,7}:$$ $$+ \frac{11969}{16800} :\Omega_{0,8}\Omega_{2,6}: - \frac{4399}{6300} :\Omega_{0,8}\Omega_{3,5}: - \frac{45}{196} :\Omega_{1,7}\Omega_{1,7}: - \frac{97}{175} :\Omega_{1,7}\Omega_{2,6}:$$ $$- \frac{857}{4200} :\Omega_{1,7}\Omega_{3,5}: + \frac{1}{6} :\Omega_{2,6}\Omega_{2,6}: - \frac{103}{360} :\Omega_{2,6}\Omega_{3,5}: - \frac{127}{300} :\Omega_{3,5}\Omega_{3,5}:,$$ $$D^2_0 = -\frac{2419}{705600} \Omega_{0,18} + \frac{2356854113}{13722508800} \partial^2 \Omega_{0,16} - \frac{3876250811}{34306272000} \partial^2 \Omega_{1,15} + \frac{710040893}{4678128000} \partial^2 \Omega_{2,14}$$ $$- \frac{3598850419}{44108064000} \partial^2 \Omega_{3,13} + \frac{50867963}{212058000} \partial^2 \Omega_{4,12} + \frac{5617559}{75398400} \partial^2 \Omega_{5,11} - \frac{47629609}{62832000}\partial^2 \Omega_{6,10}$$ $$+ \frac{12154709537}{7916832000} \partial^2 \Omega_{7,9} - \frac{13317559687}{15833664000} \partial^2 \Omega_{8,8}.$$ A computer calculation shows that $D^8_0 + D^6_0 + D^4_0 + D^2_0$ lies in 
$Ker(\pi_3)$, so it must coincide with $D_0$. In particular, $R_0 = -\frac{2419}{705600} J^{18}$. [ABKS]{} B. Bakalov and V. Kac, *Field algebras*, Internat. Math. Res. Notices No. 3, 123-159 (2003). J. de Boer, L. Feher, and A. Honecker, *A class of $\cW$-algebras with infinitely generated classical limit*, Nucl. Phys. B420 (1994), 409-445. R. Borcherds, *Vertex operator algebras, Kac-Moody algebras and the monster*, Proc. Nat. Acad. Sci. USA 83 (1986) 3068-3071. C. Dong, H. Li, and G. Mason, *Compact automorphism groups of vertex operator algebras*, Internat. Math. Res. Notices 1996, no. 18, 913–921. C. Dong and K. Nagatomo, *Classification of irreducible modules for the vertex operator algebra $M(1)^+$*, J. Algebra 216 (1999) no. 1, 384-404. C. Dong and K. Nagatomo, *Classification of irreducible modules for the vertex operator algebra $M(1)^+$ II. Higher rank*, J. Algebra 240 (2001) no. 1, 289-325. W. Eholzer, L. Feher, and A. Honecker, *Ghost systems: a vertex algebra point of view*, Nuclear Phys. B 518 (1998), no. 3, 669–688. E. Frenkel and D. Ben-Zvi, *Vertex Algebras and Algebraic Curves*, Math. Surveys and Monographs, Vol. 88, American Math. Soc., 2001. I.B. Frenkel, Y.Z. Huang, and J. Lepowsky, *On axiomatic approaches to vertex operator algebras and modules*, Mem. Amer. Math. Soc. 104 (1993), no. 494, viii+64. E. Frenkel, V. Kac, A. Radul, and W. Wang, *$\cW_{1+\infty}$ and $\cW(\gg\gl_N)$ with central charge $N$*, Commun. Math. Phys. 170 (1995), 337-357. I.B. Frenkel, J. Lepowsky, and A. Meurman, *Vertex Operator Algebras and the Monster*, Academic Press, New York, 1988. V. Kac, *Vertex Algebras for Beginners*, University Lecture Series, Vol. 10. American Math. Soc., 1998. V. Kac and A. Radul, *Representation theory of the vertex algebra $\cW_{1+\infty}$*, Transformation Groups, Vol. 1 (1996) 41-70. V. Kac, W. Wang, and C. Yan, *Quasifinite representations of classical Lie subalgebras of $\cW_{1+\infty}$*, Advances in Mathematics, vol. 139 (1), (1998) 59–140. H. Li, *Local systems of vertex operators, vertex superalgebras and modules*, J. Pure Appl. Algebra 109 (1996), no. 2, 143–195. H. Li, *Vertex algebras and vertex Poisson algebras*, Commun. Contemp. Math. 6 (2004) 61-110. H. Li, Some finiteness properties of regular vertex operator algebras, J. Algebra 212, 495-514 (1999). B. Lian and A. Linshaw, *Howe pairs in the theory of vertex algebras*, J. Algebra 317, 111-152 (2007). B. Lian and G.J. Zuckerman, *Commutative quantum operator algebras*, J. Pure Appl. Algebra 100 (1995) no. 1-3, 117-139. A. Linshaw, *Invariant theory and the $\cW_{1+\infty}$ algebra with negative integral central charge*, J. Eur. Math. Soc., to appear. A. Linshaw, *A Hilbert theorem for vertex algebras*, Transformation Groups, Vol. 15, No. 2 (2010), 427-448. A. Linshaw, *Invariant subalgebras of affine vertex algebras*, arXiv:1011.2281. K. Thielemans, A Mathematica package for computing operator product expansions, Int. Jour. Mod. Phys. C2 (1991) p.787. H. Weyl, *The Classical Groups: Their Invariants and Representations*, Princeton University Press, 1946. Y. Zhu, Modular invariants of characters of vertex operators, J. Amer. Math. Soc. 9 (1996) 237-302.
--- abstract: 'The spin gauge field formalism has been used to explain the emergence of out of plane spin accumulation in two-dimensional spin orbit interaction (SOI) systems in the presence of an in-plane electric field. The adiabatic alignment of the charge carrier spins to the momentum dependent SOI field, which changes in time due to the electric field, can be mathematically captured by the addition of a gauge term in the Hamiltonian. This gauge term acts like an effective, electric field dependent magnetization. In this work we show that this effective magnetization can be generalized to systems which include additional discrete degrees of freedom to real spin, such as the pseudospin and/or valley degrees of freedom in emerging materials like molybdenum sulphide and silicene. We show that the generalized magnetization recovers key results from the Sundaram-Niu formalism as well as from the Kubo formula. We then use the generalized magnetization to study the exemplary system of a topological insulator thin film system where the presence of both a top as well as a bottom surface provides an additional discrete degree of freedom in addition to the real spin.' author: - Zhuo Bin Siu - 'Mansoor B. A. Jalil' - Seng Ghee Tan title: Gauge field in systems with spin orbit interactions and additional discrete degrees of freedom to real spin --- Introduction ============ In the Spin Hall Effect (SHE) [@SHE1; @SHE2; @SHE3; @SHE4; @SHE5], the passage of an in-plane electric field in a two-dimensional electron gas (2DEG) with spin orbit interactions (SOIs) leads to the appearance of an out of plane spin accumulation. Murakami [@Murakami] and Fujita [@Grp1; @Grp2; @Grp3; @Grp4; @SGTSciRep], and their respective coauthors, independently studied the SHE. They showed that the out of plane spin accumulation can be understood as the response of the charge carriers as their spins align adiabatically with the momentum dependent SOI field. The direction of the SOI field changes in time due to the change in the momentum of the charge carriers as they accelerate under the electric field. Mathematically, the electric field gives rise to an effective magnetization term in the Hamiltonian which we shall, for short, call the Murakami-Fujita (MF) potential. Many emerging material systems of interest in spintronics, for example silicene [@Sil1; @Sil2; @Sil3; @Sil4] and molybdenum sulphide [@Mo1; @Mo2; @Mo3], possess discrete degrees of freedom (DoFs) such as the pseudospin and / or valley degrees of freedom, in addition to their real spins. In this work, we show in the following sections that the MF potential can be readily extended to incorporate these additional degrees of freedom, which we shall for simplicity refer to collectively as pseudospin. The MF potential accounts for the effects of a constant, in-plane electric field for the purposes of calculating spin / charge currents and spin accumulations to first order in the electric field. We illustrate the application of the MF potential on a system with spin$\otimes$pseudospin degrees of freedom through the example of the topological insulator (TI) thin film system [@PRB80_205401; @PRB81_041307; @PRB81_115407]. Unlike a semi-infinite TI slab, a TI thin film has both a top and a bottom surface which, due to the finite thickness, couple to each other. 
The low energy effective Hamiltonian for the thin film can be written as $$H = v(\vec{k}\times\vec{\sigma})\cdot\hat{z} \tau_z + \lambda \tau_x + \vec{M}\cdot\vec{\sigma} \label{TIham1}$$ Besides the real spin, denoted as $\vec{\sigma}$, of the charge carriers there is another discrete degree of freedom $\vec{\tau}$ associated with whether the charge carriers are localized nearer the upper ( $|+\tau_z\rangle\langle +\tau_z|$ ) or lower ($|-\tau_z \rangle\langle -\tau_z|$) surface of the film. The $\tau_x$ term then represents the coupling between the two surfaces of the film due to the finite thickness. This paper is organized as follows. We first revisit the emergence of the MF potential in a spin 1/2 SOI system. We then generalize the MF potential to include other discrete DoFs, and provide three pieces of evidence to support our claim that the MF potential accounts for the effects of the electric field, in the sense that an effective Hamiltonian can be constructed by replacing the $\vec{E}\cdot\vec{r}$ term in the original Hamiltonian with the position-*independent* MF potential. We first show that taking the momentum derivative of the effective Hamiltonian in the Heisenberg equation of motion for the position operator reproduces the usual Berry curvature expression for the anomalous Hall velocity. As part of their paper on the microscopic origin of spin torque [@ChengRan], Cheng Ran and Niu Qian extended the original Sundaram-Niu wavepacket formalism [@ChioGoo1], which gave only the time variation of the position and momentum expectation values, to include the time variation of the spin expectation values as well. We show that Cheng and Niu’s expressions for the time evolution of the spin expectation values can be readily extended to incorporate the other discrete degrees of freedom present, and that the time evolution of these operators can be derived from applying the Heisenberg equation of motion to the MF potential. Finally, we show that the Kubo expression for non-equilibrium expectation values of spin$\otimes$pseudospin quantities can be interpreted as the first order time-independent perturbation theory response to the MF potential. We then move on to apply the MF potential formalism to study the exemplary system of a TI thin film subjected to an in-plane magnetization and electric field. We first illustrate the effects of the interlayer coupling on the in-plane spin accumulation and the dispersion relations. We then show that the direction of the out of plane spin accumulation resulting from an in-plane electric field can be explained in terms of how the direction of the momentum dependent in-plane SOI field rotates with the change in momentum direction resulting from the electric field. The anti-symmetry of the out of plane spin accumulation in $k$ space can be broken with the application of an out of plane electric field in order to yield a finite spin accumulation after integrating over the Fermi surface. Spin 1/2 systems {#spinhalf} ================ To familiarize the reader with the MF potential, we first review its appearance in spin 1/2 SOI systems without any additional discrete degrees of freedom. The Hamiltonian for a homogeneous 2DEG with SOI and an electric field $E_x$ in the $x$ direction can be generically written as $$H = \frac{p^2}{2m} + \vec{B}(\vec{k})\cdot\vec{\sigma} + E_x x$$ where $\vec{B}(\vec{k})$ represents a momentum dependent spin orbit interaction. 
We define a unitary transformation $U(\vec{k})$ which diagonalizes $\vec{B}\cdot\vec{\sigma}$ in spin space so that after the unitary transformation, we have $$UHU^\dagger = \frac{p^2}{2m} + |\vec{B}|\sigma_z + E_x (x - i U \partial_{k_x} U^\dagger).$$ Mathematically, the effect of $U$ can be interpreted as rotating the spin space coordinates so that in the rotated frame, the spin $z$ axis points in the direction of the SOI field $\vec{B}(\vec{k})$. The non-commutation between $x$ and the momentum dependent $U$ results in the appearance of the $-i E_x (U \partial_{k_x} U^\dagger)$ term, which acts as an effective magnetization $M'_i \tilde{\sigma}_i$ in the rotated frame, where the tilde on the $\tilde{\sigma}_i$ indicates that the index $i$ refers to the $i$th spin direction in the rotated frame. To determine what *lab* frame direction this effective magnetization points in, we perform the inverse unitary transformation $U^\dagger (-i U \partial_{k_x}U^\dagger) U = -i (\partial_{k_x}U^\dagger)U$. This expression can be evaluated without an explicit form for $U$. To do this, we first note that by definition $U\hat{b}\cdot\vec{\sigma}U^\dagger = \sigma_z$ where $\hat{b} = \vec{B}/|\vec{B}|$. Thus, $$\begin{aligned} && U^\dagger\sigma_zU = \hat{b}\cdot\vec{\sigma} \\ &\Rightarrow& (\partial_{k_x}U^\dagger)\sigma_z U + U^\dagger\sigma_z(\partial_{k_x}U) = \partial_{k_x}\hat{b}\cdot\vec{\sigma} \\ &\Rightarrow& [(\partial_{k_x}U^\dagger)U, \hat{b}\cdot\vec{\sigma}] = \partial_{k_x}\hat{b}\cdot\vec{\sigma} \\\end{aligned}$$ In going from the first to the second line, we differentiated the first line with respect to $k_x$ and then inserted $\mathbb{I}_\sigma=UU^\dagger$ in the appropriate places. From the last line, we use the fact that $[\vec{a}\cdot\vec{\sigma}, \vec{b}\cdot\vec{\sigma}] = i (\vec{a}\times\vec{b})\cdot\vec{\sigma}$ to conclude that $$-i E_x U\partial_{k_x}U^\dagger = E_x (\hat{b}\times\partial_{k_x}\hat{b})\cdot\vec{\sigma}.$$ This is the MF potential for a spin 1/2 system with an electric field in the $x$ direction. Notice that although $U$ is not unique, the lab frame direction of $-i U\partial_{k_x}U^\dagger$ is independent of the specific choice of $U$. $E_x(-i U\partial_{k_x}U^\dagger)$ can then be thought of as an electric field dependent effective magnetization which confers a spin accumulation in the $(\hat{b}\times\partial_{k_x}\hat{b})$ direction. Taking the specific example of the Rashba SOI, where $\vec{B} = \alpha( p_y, -p_x)$, both $\vec{B}$ and $\partial_{k_x}\vec{B}$ lie in the $xy$ plane. $E_x(\hat{b}\times\partial_{k_x}\hat{b})$ thus points in the out of plane spin direction, and confers an out of plane spin accumulation to the charge carriers. Physically, the origin of the $\hat{b}\times(E_x \partial_{k_x}\hat{b})$ term can be explained by assuming that the spins of the charge carriers adiabatically follow the direction of the SOI field. As shown in Fig. \[gA2SOIfield\], $\vec{B}(\vec{k})\cdot\vec{\sigma}$ associates each point in $k$ space with a SOI field pointing in the $\hat{b}(\vec{k})$ direction. Assume that the electric field is initially switched off and consider a carrier with a definite $\vec{k}$. As the electric field is switched on, the field causes the charge carrier to accelerate in the direction of the field so that the momentum changes and the carrier traces out a trajectory in $k$ space. 
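The geometric statement above, that for the Rashba field $\hat{b}\times\partial_{k_x}\hat{b}$ points out of the plane, is easy to verify numerically. The minimal sketch below is not part of the original text; the Rashba coupling and the $k$ point are arbitrary illustrative values. It evaluates $\hat{b}$ and its finite-difference derivative along $k_x$:

```python
import numpy as np

def b_hat(kx, ky, alpha=1.0):
    """Unit vector of the Rashba SOI field B(k) = alpha*(k_y, -k_x, 0)."""
    b = np.array([alpha * ky, -alpha * kx, 0.0])
    return b / np.linalg.norm(b)

kx, ky, dk = 0.3, 0.7, 1e-6                      # arbitrary k point away from k = 0
db_dkx = (b_hat(kx + dk, ky) - b_hat(kx - dk, ky)) / (2 * dk)
mf_direction = np.cross(b_hat(kx, ky), db_dkx)   # direction of the effective field for E along x
print(mf_direction)                              # only the z component is (numerically) nonzero
```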
Returning to the physical picture, we assume that the electric field is weak enough so that the spin of the carrier rotates along with the direction of the SOI field as it successively moves through different $k$ points. The resulting rotation of the spin can be thought of as being due to an effective magnetic field pointing along the $\hat{b}\times\partial_t \hat{b} = E_x \hat{b}\times\partial_{k_x} \hat{b}$ direction, which both provides the torque necessary to rotate the spin and confers a spin accumulation in the direction of the torque. ![ The arrows at each point in $k$ space indicate the direction of the Rashba SOI field there. The application of the electric field causes the momentum of the charge carrier to trace out the trajectory in $k$ space indicated by the dotted line. The spin of the charge carrier adiabatically follows the direction of the SOI field at each point in $k$ space. The rotation of the spin can be thought of as being due to an effective magnetization which both creates the torque necessary to rotate the spin as shown in the inset, as well as confers an out of plane spin accumulation. []{data-label="gA2SOIfield"}](A2SOIfield.eps) We now proceed to a general description of the MF potential generalized to include other discrete degrees of freedom. The Murakami-Fujita potential ============================= Consider now a generic Hamiltonian $$H_0 = b_i(\vec{k})\kappa_i \label{H0}$$ where the $\kappa_i$s are finite sized matrices representing the discrete degrees of freedom. For example, in a spin 1/2 system with SOC, the 4 $\kappa_i$s are the Pauli matrices and the identity matrix. In the TI thin film Hamiltonian Eq. \[TIham1\] the $\kappa_i$s represent the 16 $\vec{\sigma}\otimes\vec{\tau}$ matrices. In order to write the Hamiltonian Eq. \[H0\] as a numerical matrix, we need to express the matrix elements in terms of a basis. For example, for a spin 1/2 system it is common to adopt the usual representation of the Pauli matrices so that, for instance, $H_0 = \vec{b}\cdot\vec{\sigma} \simeq \begin{pmatrix} b_z & b_x - i b_y \\ b_x + i b_y & -b_z \end{pmatrix}$. The numerical matrix on the rightmost side of the equal sign is written in the $|\pm z \rangle$ basis. We refer to the basis in which $H_0$ is written as a ‘numerical matrix’ as the ‘laboratory frame’, with basis states $|\lambda_i\rangle$ ($\lambda$ for *l*aboratory). Label now the $i$th eigenstate of $H_0$ by $|\epsilon_i\rangle$. We assume that the laboratory basis is fixed, i.e. it has no dependence on any parameter in the Hamiltonian so that, for instance, $\partial_{k_x} |\lambda_i\rangle = 0$. Instead of using the laboratory basis, we can also expand our states and operators in terms of the eigenbasis, and convert between the two bases through the unitary transformation $U$. Defining $U$ so that $UH_0 U^\dagger$ is diagonal, we have $$U = \sum_{i,j} |\lambda_i \rangle \langle \epsilon_i|\lambda _j\rangle \langle \lambda_j|,$$ i.e. the $i,j$th matrix element in the numerical representation of $U$ is $\langle \epsilon_i|\lambda_j\rangle$. Notice that since an arbitrary phase factor can be introduced via $|\tilde{\epsilon}_i \rangle = |\epsilon_i\rangle\exp(i\phi_i)$, the values of the matrix elements $\langle \tilde{\epsilon}_i | \lambda_j \rangle$ will vary with the phases of the $|\epsilon_i\rangle$s. Now consider adding a perturbative electric field. 
The Hamiltonian then becomes $H = H_0 + E_x x$, and we have $UHU^\dagger = UH_0U^\dagger + E_x ( x + i U\partial_{k_x} U^\dagger)$ where, in this rotated frame, $H_0$ is diagonal, and we have an additional $i U\partial_{k_x}U^\dagger$ term. In order to determine the lab frame spin$\otimes$pseudospin ‘direction’ in which this contribution points, we transform the $i U\partial_{k_x}U^\dagger$ piece *without the diagonal elements* back to the laboratory frame. The reason for the removal of the diagonal elements will become apparent later. With the diagonal elements still in place, we have $U^\dagger (i U\partial_k U^\dagger) U = -i U^\dagger\partial_k U$. (We have dropped the suffix $x$ from $k_x$ and $E_x$ for notational simplicity.) We stress that $-i U^\dagger\partial_k U$ has the same numerical matrix elements in the laboratory frame regardless of the phases of the $\langle \lambda_i | \epsilon_j \rangle$. This is because $$\begin{aligned} && -i U^\dagger \partial_k U \\ &=& -i |\lambda_a \rangle \langle \lambda_a|\epsilon_b \rangle \langle \partial_k \epsilon_b|\lambda_c \rangle \langle \lambda_c | \\ &=& -i |\epsilon_a \rangle \langle \partial_k \epsilon_a| \end{aligned}$$ The second line gives the numerical values of the laboratory frame $ac$th matrix elements, and the third line the simplification using a resolution of identity. Notice that we have the combination $|\epsilon_b \rangle \langle \partial_k \epsilon_b|$ with the same state index $b$ occurring together so that any phase factor $\exp(i \phi_b)$ introduced in $|\epsilon_b\rangle \rightarrow |\epsilon_b \rangle\exp(i \phi_b)$ cancels out. Returning now to the diagonal elements of the rotated frame $i U\partial_k U^\dagger$, we see that they correspond to $i |\epsilon_i \rangle \langle \epsilon_i|\partial_k |\epsilon_i\rangle\langle \epsilon_i|$. Subtracting them off from $-i U^\dagger\partial_k U$ gives the MF potential $H_{MF}$ where $$H_{MF} = -i \sum_{a \neq b} |\epsilon_a \rangle \langle \partial_k \epsilon_a|\epsilon_b \rangle \langle \epsilon_b| E.$$ We argue that, at least for the purposes of calculating currents and spin$\otimes$pseudospin accumulations, the effects of the electric field $E_i$ to first order in $E$ can be incorporated by replacing $E_i x_i$ with $H_{MF}$, so that the effective Hamiltonian reads $$H' = H_0 + H_{MF} = b_i(\vec{k})\kappa_i - i \sum_{a \neq b} |\epsilon_a \rangle \langle \partial_{k_i} \epsilon_a|\epsilon_b \rangle \langle \epsilon_b| {E_i}. \label{Heff}$$ In order to support our claim, we list three examples where the use of $H'$ recovers well-known results. Anomalous velocity ------------------ In the presence of an electric field, charge carriers can acquire an anomalous velocity [@AV1; @AV2; @AV3] proportional to the Berry curvature [@AV4; @AV5; @ChioGoo1; @ChioGooRev]. We show that this result can be recovered via the Heisenberg equation of motion on Eq. \[Heff\]. Under the Heisenberg equation of motion, we have $\partial_t \vec{r} = -i ( [\vec{r}, H_0] + [\vec{r}, H_{MF}] ) = \nabla_{\vec{k}} (H_0 + H_{MF})$. The first term is the usual velocity. We shall show that the second gives the usual Berry curvature anomalous contribution to the velocity. 
Taking the expectation value of the second term with respect to the $i$th eigenstate of $H_0$, we have $$\begin{aligned} && -i \langle \epsilon_i | [x_b, H_{MF}] |\epsilon_i \rangle \\ &=& -i E_b \langle \epsilon_i| \partial_{k_b} \Big( \sum_{\substack{jk \\ j\neq k}} |\epsilon_j \rangle \langle \partial_{k_a}\epsilon_j| \epsilon_k \rangle\langle\epsilon_k| \Big) |\epsilon_i \rangle \\ &=&-i E_b \sum_j (\langle \epsilon_i |\partial_{k_b} \epsilon_j \rangle \langle \partial_{k_a}\epsilon_j|\epsilon_i\rangle + \langle \partial_{k_a} \epsilon_i | \epsilon_j\rangle\langle \partial_{k_b} \epsilon_j|\epsilon_i \rangle) \\ &=& 2 E_b \mathrm{Im} ( \langle \partial_{k_a} \epsilon_i|\partial_{k_b} \epsilon_i \rangle).\end{aligned}$$ The third line is simply the expansion of the $\partial_{k_b}$ differential. Notice that the requirement that $j \neq k$, stemming from the removal of the diagonal terms of the rotated frame $-i U^\dagger \partial_k U$, results in the absence of the terms $\langle \partial_{k_a}\partial_{k_b} \epsilon_i|\epsilon_i\rangle$ and $\langle \partial_{k_b} \epsilon_i|\partial_{k_a} \epsilon_i\rangle$ due to the $\partial_{k_b}$ acting on the second and third terms in the big bracket in the second line. The last line is the usual Berry curvature term for the anomalous velocity. Spin and other discrete DoFs ---------------------------- As part of their paper explaining the microscopic origin of spin torque, Cheng and Niu extended the original Sundaram-Niu formalism, which described only the time evolution of position and momentum, to cover the time evolution of spin 1/2 as well. Their formalism can be easily extended to cover the time evolution of operators with finite discrete spectra. We describe the extension in the appendix, and simply state the end result here. For a state $$|\psi\rangle = \sum_i |\epsilon_i \rangle \eta_i$$ where the summation $i$ runs over the discrete DoFs (e.g. spin up / down for spin 1/2, and the upper / lower surfaces for a TI thin film) and the continuous quantum numbers (e.g. $\vec{k}$ in SOI systems), and the $\eta_i$s are the weightages of the $i$th basis state, we show in the appendix that, for an operator $O$ in the discrete DoFs, $$d_t \langle \psi | O | \psi \rangle = 2E_a \mathrm{Re} \big( \eta_i^* \langle \epsilon_i|\partial_{k_a} \epsilon_j \rangle \langle \epsilon_j |O|\epsilon_k \rangle \eta_k \big).$$ It is straightforward to show that this expression is $-i \langle \psi |[ O, H_{MF}] |\psi\rangle$. Recovery of the Kubo formula ---------------------------- Treating $H_{MF}$ as a perturbation to $H_0$ and applying standard non-degenerate time-independent perturbation theory to the $i$th eigenstate of $H_0$, $|\epsilon_i\rangle$, the first order correction to $|\epsilon_i\rangle$, which we denote as $|\epsilon_i^{(1)}\rangle$, reads $$|\epsilon_i^{(1)}\rangle = \sum_j |\epsilon_j \rangle \frac{ \langle \epsilon_j|H_{MF}|\epsilon_i\rangle}{E_i - E_j}$$ so that the correction to the expectation value of an observable $O$ for state $|\epsilon_i\rangle$ to first order in $\vec{E}$, $\delta \langle i| O | i \rangle$, is $$\begin{aligned} && \delta \langle i| O|i \rangle \\ \nonumber &=& 2\mathrm{Re} (\langle \epsilon_i|O|\epsilon_i^{(1)}\rangle ) \\ \nonumber &=& 2\mathrm{Re} \sum_j \frac{ \langle \epsilon_i |O| \epsilon_j \rangle \langle \epsilon_j |H_{MF}|\epsilon_i\rangle}{E_i - E_j}. 
\label{c1oe}\end{aligned}$$ However, since $$\begin{aligned} && \partial_k \langle \epsilon_i | H_0 |\epsilon_i \rangle = 0 \\ &\Rightarrow& \langle \partial_k \epsilon_i |\epsilon_j \rangle (E_i-E_j) = \langle \epsilon_i|\partial_k H_0 |\epsilon_j \rangle \\ &\Rightarrow& \langle \partial_k \epsilon_i|\epsilon_j \rangle = \frac{ \langle \epsilon_i| \partial_k H_0 |\epsilon_j \rangle}{E_i - E_j},\end{aligned}$$ we can rewrite $$\begin{aligned} H_{MF} &=& -i E_i \sum_{a \neq b} |\epsilon_a \rangle \langle \partial_{k_i} \epsilon_a|\epsilon_b \rangle \langle \epsilon_b| \\ &=&-i E_i \sum_{a \neq b} |\epsilon_a\rangle \frac{ \langle \epsilon_a| \partial_{k_i} H_0 |\epsilon_b \rangle}{E_a - E_b} \langle \epsilon_b|.\end{aligned}$$ A common form of the Kubo formula is $$\delta \langle O \rangle \propto \sum_{\vec{k}} \sum_{a \neq b} \frac{n(E_a)-n(E_b)}{(E_a-E_b)^2} \mathrm{Im} \big( \langle a|O|b\rangle \langle b| (\partial_k H_0)|a \rangle \big) \label{kuboPaper}.$$ Substituting this back into Eq. \[c1oe\] gives a result similar to the Kubo expression for the change in an expectation value of $O$ under an electric field: $$\delta \langle i| O|i \rangle = 2 \mathrm{Im} \sum_j \vec{E}\cdot\frac{ \langle \epsilon_i |O| \epsilon_j \rangle \langle \epsilon_j |\nabla_{\vec{k}} H_0 |\epsilon_i\rangle}{(E_i - E_j)^2}. \label{c2oe}$$ Our result Eq. \[c2oe\] corresponds to Eq. \[kuboPaper\] with the occupancy factor $n$ set to 1 for the $i$th state we are interested in and 0 for the other states, and without a second summation over all states. Having established the link between the MF potential and the Kubo formula, we now proceed to use Eq. \[c2oe\] to study the exemplary system of a topological insulator thin film. TI thin films ============= The effective Hamiltonian for the surface states of a TI thin film of infinite dimensions along the $x$ and $y$ directions, and small finite thickness along the $z$ direction, can be written as $$H = (\vec{k}\times\vec{\sigma})\cdot\hat{z}\tau_z + M_y\sigma_y + \lambda \tau_x. \label{tfHam0}$$ We use units where $e=\hbar=v_f=1$. We first highlight the influence of the inter-surface coupling term $\lambda$ on the energy spectrum. Consider the limit where $\lambda \rightarrow 0$, $M_y \neq 0$. In this case, the upper and lower surfaces may be considered separately, and the energy spectrum consists of two Dirac cones. The states localized near the upper surface have $\langle \tau_z \rangle = +1$, while the states localized near the lower surface have the opposite sign of $\langle \tau_z \rangle$. The $M_y\sigma_y$ term, however, has the same sign for both the upper and lower surfaces. The Dirac points for the Dirac cones for the upper surface states and the lower surface states are hence displaced in opposite directions in $k$ space. ![ Panel (a) shows the dispersion relations for the two values of $\lambda$ indicated in the legend at $k_y = 0$ for $\vec{M} = 0.5\hat{y}$. The three horizontal dotted lines correspond to the energies $E = 0.5,\ 1$ and $1.5$ at which the EECs in panels (b) to (d) are plotted, respectively. Panels (c) and (d) show the EECs and the in-plane spin accumulation directions at the two values of $\lambda$ indicated by different colors in panel (b). 
The inset of panel (d) shows a zoomed in view of the EECs near the intersection of the two Fermi ‘circles’, showing that the inter-surface coupling leads to a breaking away of the lens shaped region where the two circles overlap into separate EEC curves.[]{data-label="glambdaComb2"}](lambdaComb2.eps) We now turn on the inter-surface coupling. Fig. \[glambdaComb2\] shows the dispersion relations and equal energy contours (EECs) at two values of $\lambda < |\vec{M}|$. At these small (relative to $|\vec{M}|$) values of $\lambda$, the two Dirac cones corresponding to the surface states localized at the upper and lower surfaces of the thin film are still distinctly evident. At low values of energy where the two cones do not overlap (panel (b)), the EECs consist of two almost circular curves that correspond to the cross sections of the two Dirac cones. As the energy increases and the two almost-circular cross sections begin to almost touch each other, the inter-surface coupling pushes the EECs outwards in $k$-space so that the cross sections link up with each other and form a single curve (panel (c)). A further increase in energy causes the two Dirac cones to overlap with each other; the anti-crossing of the energy levels due to the inter-surface coupling then causes the $k$ space lens-shaped region where the Dirac cones overlap to break away from the outer perimeter of the overlapping ‘circles’ and form a second closed curve. Despite the distortions of the EECs from the perfectly circular profiles in the absence of inter-surface coupling, the directions of the in-plane spin accumulation along the EECs in the presence of inter-surface coupling still roughly follow those of the original Dirac cones. Returning now to panel (a) of the figure, it is evident that as the inter-surface coupling increases, the energy of the lowest energy particle (highest energy hole) band at $\vec{k}=0$ decreases (increases). ![ Panel (a) shows the dispersion relations for progressively larger values of $\lambda$ indicated by the colors in the legend relative to the fixed value of $\vec{M} = 0.5\hat{y}$. Panels (b) and (c) show the EECs (in solid lines) and in-plane spin accumulation directions for $\lambda = 0.7$ at the two values of energies (0.5 and 1.5 respectively) indicated by the horizontal dotted lines in panel (a). The two dotted circles in panels (b) and (c) are indicative of the Fermi circles for the $\pm(\vec{k}\times\vec{\sigma})\cdot\hat{z}$ Dirac cones which provide rough indications of the in-plane spin accumulation directions at the $k$ space points on the EECs. []{data-label="gbigLambdaComb1"}](bigLambdaComb1.eps) Fig. \[gbigLambdaComb1\] shows the dispersion relations and the EECs as $\lambda$ increases further relative to $|\vec{M}|$. As $\lambda$ is increased from 0, the energy of the lowest energy particle band at $\vec{k}=0$ is pushed downwards and that of the highest energy hole band pushed upwards until the two bands touch each other when $\lambda = |\vec{M}|$. At this point ($\lambda=0.5$ in panel (a)) we no longer have the two well-resolved Dirac cones with two separate Dirac points seen for $\lambda = 0.2$ in panel (a) of the figure. A further increase in $\lambda$ leads to a bandgap opening up between the particle and hole bands. Panels (b) and (c) of the figure show the EECs at two values of energy for $\lambda > |\vec{M}|$. The in-plane spin accumulations at various $k$ space points can still be roughly understood as the spin accumulations of two overlapping circular cross sections of perfect Dirac cones. 
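The EECs and in-plane spin directions discussed above can be reproduced, at least qualitatively, with a short numerical sketch. The snippet below is not part of the original text; the parameter values roughly follow those quoted in the figure captions, while the grid spacing and the energy tolerance are arbitrary choices. It builds the $4\times4$ matrix of Eq. \[tfHam0\] in the spin$\otimes$surface ordering and collects the $k$ points whose eigenenergies lie close to a chosen $E$, together with the in-plane spin expectation values:

```python
import numpy as np

# Pauli matrices for the real spin (sigma) and surface (tau) degrees of freedom.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

def h_film(kx, ky, lam, My):
    """4x4 matrix of (k x sigma).z tau_z + M_y sigma_y + lam tau_x, ordering spin (x) surface."""
    soi = kx * np.kron(sy, sz) - ky * np.kron(sx, sz)   # (k x sigma).z = kx*sigma_y - ky*sigma_x, times tau_z
    return soi + My * np.kron(sy, s0) + lam * np.kron(s0, sx)

lam, My, E = 0.2, 0.5, 1.0                  # illustrative values, roughly as in the captions
kvals = np.linspace(-2.0, 2.0, 201)

eec_points = []
for kx in kvals:
    for ky in kvals:
        evals, evecs = np.linalg.eigh(h_film(kx, ky, lam, My))
        for n, en in enumerate(evals):
            if abs(en - E) < 2e-2:          # crude equal-energy-contour extraction
                v = evecs[:, n]
                sx_avg = float(np.real(v.conj() @ np.kron(sx, s0) @ v))
                sy_avg = float(np.real(v.conj() @ np.kron(sy, s0) @ v))
                eec_points.append((kx, ky, sx_avg, sy_avg))
print(len(eec_points), "k points found near the E =", E, "contour")
```

Plotting the collected $(k_x, k_y)$ pairs and the attached $(\langle\sigma_x\rangle, \langle\sigma_y\rangle)$ arrows reproduces the kind of EEC pictures described in the figures, up to resolution of the grid.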
![ The EECs, in-plane spin accumulation directions and out of plane spin accumulation at two representative values of energy in the $\lambda < |\vec{M}|$ regime ( (a) and (b) ) and $\lambda > |\vec{M}|$ regime ( (c) and (d) ) for electric fields applied in the $x$ ( (a) and (c) ) and $y$ ( (b) and (d) ) directions. The sizes of the circles on the EECs are indicative of the magnitudes of the out of plane spin accumulations due to the electric field (the sizes of the circles are *not* scaled linearly to the spin accumulation magnitudes), with green (red) circles indicating out of plane spin accumulations in the negative (positive) $z$ direction. The two dotted circles in the left panels are indicative of the Fermi circles for the $\pm(\vec{k}\times\vec{\sigma})\cdot\hat{z}$ Dirac cones which provide rough indications of the in-plane spin accumulation directions at the $k$ space points on the EECs. $E=1.5$ for all panels; $\lambda = 0.1, \vec{M} = \hat{y}$ for (a) and (b) and $\lambda = 0.7, \vec{M} = 0.5 \hat{y}$ for (c) and (d). []{data-label="gspinZpoln"}](spinZpoln.eps) We now turn our attention to the out of plane spin accumulation generated by an electrical field, which we calculate using Eq. \[c2oe\]. Fig. \[gspinZpoln\] shows the out of plane spin $z$ accumulation generated at various $k$ space points on the EECs of a TI thin film with $\lambda < |\vec{M}|$ (panels (a) and (b)), and $\lambda > |\vec{M}|$ (panels (c) and (d)), for electrical fields applied in the $x$ direction (panels (a) and (c)) perpendicular to the magnetization, and the $y$ direction (panels (b) and (d)) parallel to the magnetization. The sign of the resulting spin $z$ accumulation can be understood in terms of how the applied electric field changes the direction of the SOI field experienced by the charge carriers. We noted in our earlier discussion in Sect. \[spinhalf\] that each point on the EECs may be associated with the Fermi circle of either the $+(\vec{k}\times\vec{\sigma})\cdot\hat{z}$ Dirac cone, or the $-(\vec{k}\times\vec{\sigma})\cdot\hat{z}$ cone. This is also indicated on the left panels of Fig. \[gspinZpoln\] where the two Fermi circles are indicated by dotted circles of different colors. Consider now the $k$ space region denoted in the inset of panel (b). The inset shows the spin accumulations at two points in $k$ space, with the red (blue) arrows denoting the spin accumulation direction for a point on the + (-) Fermi circle. The passage of an electric field in the $y$ direction causes $\langle p_y \rangle$ to increase while $\langle p_x \rangle$ remains constant, so that the SOI field $\pm (\vec{k}\times\hat{z})$ as well as the spin accumulation rotates in opposite directions for the $\pm$ Fermi circles. Reminiscent of our earlier discussion on spin 1/2 systems, this rotation indicates the existence of an out of plane effective magnetic field which in turn imparts an out of plane spin accumulation. Applying the same argument to most of the other $k$-space points on the EECs in the figure explains the *sign* of the out of plane spin accumulation there. The *magnitude* of the spin $z$ accumulation depends on how much relative change in the SOI field direction the application of the electric field leads to. 
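For readers who wish to reproduce such results, the sketch below evaluates Eq. \[c2oe\] directly from the eigenstates of $H_0$. It is a minimal illustration rather than the authors' code: it reuses the matrices defined in the previous snippet, the field strength, $k$ point and band index are arbitrary illustrative choices, and the expression assumes a non-degenerate spectrum at the chosen $k$ point, as in the perturbative derivation above.

```python
def delta_O(kx, ky, lam, My, O, dH, E_field=1e-3, band=2):
    """First-order change of <O> for one eigenstate (Eq. c2oe), field along the direction of dH = dH0/dk.

    `band` labels the eigenstates in ascending energy order; the spectrum at (kx, ky)
    is assumed to be non-degenerate.
    """
    evals, evecs = np.linalg.eigh(h_film(kx, ky, lam, My))
    vi = evecs[:, band]
    total = 0.0
    for j in range(len(evals)):
        if j == band:
            continue
        vj = evecs[:, j]
        term = (vi.conj() @ O @ vj) * (vj.conj() @ dH @ vi)
        total += np.imag(term) / (evals[band] - evals[j]) ** 2
    return 2.0 * E_field * total

# sigma_z accumulation from a field along y: dH0/dky = -sigma_x tau_z for the Hamiltonian above.
Oz = np.kron(sz, s0)
dH_dky = -np.kron(sx, sz)
print(delta_O(0.8, 0.3, 0.2, 0.5, Oz, dH_dky))   # arbitrary k point, for illustration only
```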
For example, in the right panels of Fig. \[gspinZpoln\], the largest spin $z$ accumulations are on those EEC segments where the in-plane spin accumulations are in the $\pm y$ directions, so that the small increment in the SOI field in the $\pm x$ directions due to the $y$ electric field is a large increment compared to other $k$ space points on the EECs where the spin accumulations already have large $x$ components. The out of plane spin $z$ accumulations in the preceding figures are antisymmetrically distributed in $k$ space on the EECs. This antisymmetry results in there being no net out of plane spin accumulation in $k$ space after integrating over the entire Fermi surface. In order to break the antisymmetry, we now add a term $E_z \tau_z$ to the Hamiltonian Eq. \[tfHam0\] so that the Hamiltonian now reads $$H = (\vec{k}\times\vec{\sigma})\cdot\hat{z}\tau_z + \vec{M}\cdot\vec{\sigma} + \lambda \tau_x + E_z\tau_z. \label{tfHam1}$$ The $E_z\tau_z$ term introduces an asymmetry between the upper and lower surfaces. This asymmetry may physically result from the fact that in experimentally grown TI thin films the bottom surface of the film is in contact with the usually non-ferromagnetic substrate, while the upper surface is either in contact with the vacuum (for $\vec{M}\cdot\vec{\sigma}$ being due to magnetic doping [@PRL102_156603; @Sci329_659; @NatPhy7_32]) or with a FM layer (for $\vec{M}\cdot\vec{\sigma}$ being due to the proximity effect with a FM layer [@PRL104_146802; @PRL110_186807; @PRB88_081407]). ![ Panels (a) and (b) show the EECs at various energies in the (a) absence and (b) presence of $E_z$ for $\lambda < |\vec{M}|$. Panels (c) and (d) show the EECs at various energies in the (c) absence and (d) presence of $E_z$ for $\lambda > |\vec{M}|$. ( $\vec{M} = 0.5\ \hat{y}$, $\lambda = 0.2$ in (a) and (b); $\vec{M} = 0.2 \hat{y}$ in (c) and (d). $E_z = 0.1$ in (b) and (d). ) []{data-label="gEZeec"}](EZeec.eps) Fig. \[gEZeec\] compares the EECs in both the $\lambda < |\vec{M}|$ and the $\lambda > |\vec{M}|$ regimes in the presence and absence of the $E_z \tau_z$ term. The asymmetry between the upper and lower surfaces of the TI film due to the $E_z\tau_z$ term results in the states stemming from the Dirac cones corresponding to the two surfaces being shifted in opposite directions along the energy axis. The dispersion relations become ‘tilted’, and the EECs at a given value of energy become asymmetrical in $k$ space. This asymmetry then results in a net out of plane spin accumulation after integrating over all the $k$ space points spanned by the EECs. Evidently, the spin accumulation increases with the magnitude of $\vec{M}$ and $E_z$. What is perhaps more interesting is the variation of $\langle \sigma_z (E) \rangle$, the out of plane spin accumulation integrated over the EECs at a given value of $E$, with the inter-surface coupling $\lambda$ for a fixed value of $\vec{M}$ and $E_z$. Fig. \[gjyzEz\] shows this variation. ![ Panel (a) shows $\log |\langle \sigma_z(E) \rangle|$, the logarithm of the out of plane spin accumulation integrated over the EEC at energy $E$, as a function of $E$ and $\lambda$ at $E_z = 1.1$ and $\vec{M} = 0.5\hat{y}$ for an electrical field in the $y$ direction. The dotted line in panel (a) corresponds to the value of $\lambda$ in panel (b), which shows the dispersion relation at $k_y = 0$ and $\lambda = 0.8$. The ‘tilting’ of the dispersion relations due to $E_z$ is evident from the plot. 
The two dotted lines in panel (b) in turn correspond to the energies at which the EECs and the out of plane spin accumulation at each $k$ space point are plotted in panel (c) for $E = 2$ and panel (d) for $E = 1.7$, respectively. []{data-label="gjyzEz"}](jyzEz.eps) Panel (a) of Fig. \[gjyzEz\] shows the (*logarithm* of the) integrated spin accumulation $\langle \sigma_z(E) \rangle$. $\langle \sigma_z(E) \rangle$ is *symmetrical* (as opposed to antisymmetrical) about $E = 0$. (The asymmetries present at small $\lambda$ and large $|E|$ are numerical artifacts.) The values of $\lambda$ plotted span the range from smaller than $|\vec{M}|$ to larger than $|\vec{M}|$. The patch of vanishing $\langle \sigma_z (E) \rangle$ centered around $E=0$ for $\lambda > |\vec{M}| = 0.5$ corresponds to the bandgap opened up by large $\lambda$, where no propagating states exist. For a given value of $\lambda$, there exists a value of $|E|$ at which $\langle \sigma_z (|E|) \rangle$ peaks. Panel (b) shows the dispersion relation at $k_y=0$ at $\lambda = 0.8$. At this value of $\lambda$, $\langle \sigma_z(E) \rangle$ peaks at around $E = 1.4$. This corresponds to the energy just below the lower horizontal dotted line in panel (b), where the tilted Dirac ‘cones’ touch and begin to intersect each other. The intersection of the two Dirac ‘cones’ results in the emergence of the smaller elliptical EEC curves enclosed within the larger EEC ellipses in panels (c) and (d) of the figure at the energies of the two horizontal dotted lines in panel (b). The out of plane spin accumulation is asymmetrical across the smaller EEC ellipses so there is a net out of plane spin accumulation. These smaller EEC ellipses may be thought of as comprising the $k$ points with small values of $|\vec{k}|$, so that the in-plane spin accumulation direction is dominated by the $\vec{M}\cdot\vec{\sigma}$ term in the Hamiltonian rather than the SOI $(\vec{k}\times\vec{\sigma})\cdot\hat{z}\tau_z$ term. Due to the small $|\vec{k}|$ in the smaller ellipses, the same $\delta k_y$ caused by an electrical field, and the resulting change in the SOI field direction, has a far larger impact on the in-plane spin accumulation direction in the smaller EEC ellipses than in the larger ones at a given value of energy. Comparing panels (c) and (d), the relative impact of the same $\delta k_y$ increases with decreasing $|\vec{k}|$ of the smaller EEC ellipses. The out of plane spin accumulation thus peaks near the energy value at which the smaller EEC ellipses emerge, where the two Dirac cones begin to intersect each other in panel (b). (There is a tradeoff between the $k$-space perimeter of the EECs over which the out of plane spin accumulation is integrated, and the maximum magnitude of the spin accumulation, as the smaller $k$ space ellipses decrease in size, so that the peak value of $\langle \sigma_z (E) \rangle$ occurs slightly above the energy value where the two Dirac cones intersect.) Conclusion ========== In the first half of this work, we introduced the Murakami-Fujita potential firstly for a spin half system and then more generally for systems with real spin coupled to other discrete degrees of freedom. We argued that the effects of a constant electric field can, to first order in the field, be modeled by replacing the electric potential $\vec{E}\cdot\vec{r}$ by the Murakami-Fujita potential. 
We showed that the anomalous velocity and Cheng’s extension of the Sundaram-Niu formalism can be recovered from the Heisenberg equation of motion on the MF potential, and that the result of the Kubo equation for the non-equilibrium distribution of an observable can be recovered by treating the MF potential as a perturbation and then using standard time-independent non-degenerate perturbation theory. This formalism can be readily applied to emerging material systems of interest to spintronics with pseudospin and / or valley degrees of freedom. As an example, we applied our formalism to study the exemplary system of a three-dimensional topological insulator thin film, where the presence of both a top and a bottom surface provides an additional discrete degree of freedom in addition to the real spin. We showed, similar to the case where the inter-layer coupling is absent, that the direction of the out of plane spin accumulation due to the application of an in-plane electric field can be predicted from the direction of the torque needed to change the direction of the spin accumulation, which depends on the momentum-dependent SOI field. The application of an out of plane electric field is necessary in order to break the antisymmetry of the spin accumulation. Acknowledgments =============== The authors acknowledge the Singapore National Research Foundation for support under NRF Award Nos. NRF-CRP9-2011-01 and NRF-CRP12-2013-01, and MOE under Grant No. R263000B10112. Appendix ======== The starting point of Cheng and Niu’s extension [@ChengRan] of the original Sundaram-Niu formalism [@ChioGoo1] to include the time evolution of spin is to construct the Lagrangian from $$L = i \langle u| d_t u\rangle - \langle u |H|u \rangle + ...$$ where the $...$ denotes the other quantities appearing in Eq. 2.18 of Ref.  (like $\vec{k}\cdot\dot{\vec{r}}_c$, etc.) which do not affect the spin evolution. The $H=H_0 + H'$ that appears above consists of the unperturbed Hamiltonian $H_0$ and the perturbation $H'$. In Ref. , the perturbation is an external magnetization. Here, we shall be concerned with a perturbing electric field modeled as $\vec{E}\cdot\vec{r}$. We write $|u\rangle = \sum |\psi_i\rangle \eta_i$ where the $|\psi_i\rangle$ are the eigenstates of $H_0$ and $i$ is an index denoting the discrete DoFs. Now $$\begin{aligned} i d_t |u\rangle &=& i (\partial_t + \dot{\vec{k}}\cdot\nabla_{\vec{k}}) (|\psi_i\rangle \eta_i) \\ &=& i \big(\dot{\vec{k}}\cdot(\nabla_{\vec{k}} |\psi_i\rangle) \eta_i + \eta_i (\partial_t|\psi_i\rangle) + |\psi_i\rangle \dot{\eta}_i\big)\end{aligned}$$ so that $$\begin{aligned} L &=& i \langle u| d_t u\rangle - \langle u |H|u \rangle + ... \\ &\approx& i \langle u| d_t u\rangle - \langle u |(H_0 + H' \big|_{\vec{r} = 0} )|u \rangle + ... 
\\ &=& i \big(\eta_i^*\dot{\eta}_i + \dot{\vec{k}}\cdot( \eta_j^* \langle \psi_j|\nabla_{\vec{k}}\psi_i\rangle \eta_i) \big) + \\ && \eta_j^* \langle \psi_j |H_0 + H' \big|_{\vec{r} = 0} |\psi_i \rangle \eta_i - \epsilon_i \eta_i^*\eta_i + ....\end{aligned}$$ Varying with respect to $\eta_i$ and taking complex conjugate give $$\begin{aligned} \dot{\eta}_i^* &=& \eta_j^* \big( \dot{\vec{k}}\cdot \langle \psi_j|\nabla_{\vec{k}}\psi_i\rangle - i \langle \psi_j|H_0+H' \big|_{\vec{r} = 0} |\psi_i \rangle \big) +i\epsilon_i \eta_i^*, \\ \dot{\eta}_i &=& \big( -\dot{\vec{k}}\cdot \langle \psi_i|\nabla_{\vec{k}}\psi_j\rangle + i\langle \psi_i|H_0+H' \big|_{\vec{r} = 0} |\psi_j \rangle\big)\eta_j - i\epsilon_i \eta_i\end{aligned}$$ so that for an operator $\sigma$ we have $$\begin{aligned} \mathrm{d}_t \langle \sigma \rangle &=& \mathrm{d}_t \langle (\psi_i|\sigma|\psi_j\rangle \eta_i^*\eta_j) \nonumber \\ &=& \sigma_{ij} \Big( \eta_a^* \big( \dot{\vec{k}}\cdot \langle \psi_a|\nabla_{\vec{k}}\psi_i\rangle -i \langle \psi_a|H_0|\psi_i \rangle)\eta_j + \\ && \eta_i^*\big( -\dot{\vec{k}}\cdot \langle \psi_j|\nabla_{\vec{k}}\psi_b\rangle + i \langle \psi_j|H_0+H' \big|_{\vec{r} = 0} |\psi_b \rangle\big)\eta_b \\ &&+ i(\epsilon_{i} \eta_i^*\eta_j - \epsilon_{j}\eta_i^*\eta_j) \Big) \nonumber \\ &=& 2\mathrm{Re} (\sigma_{ij} \dot{\vec{k}} \cdot \langle \psi_a|\nabla_{\vec{k}}\psi_i\rangle \eta_a^*\eta_j) \\ && - i \langle [ H_0+H' \big|_{\vec{r} = 0} , \sigma] \rangle + i(\epsilon_{i}-\epsilon_{j}) \eta_i^*\eta_j.\end{aligned}$$ where in going from the 3rd to the 4th line we’ve made use of the fact that $\sigma$ being Hermitian gives $\sigma_{ij} = \sigma_{ji}^*$. [99]{} M.I. Dyakanov and V.I. Perel, Phys. Lett. A. **35**, 459 (1971). A.A. Bakun *et al*, Pis’ma Zh. Eksp. Teor. Fiz. **40**, 464 (1984). J.E. Hirsch, Phys. Rev. Lett. **83**, 1834 (1999). Y.K. Kato *et al*, Science **306**, 1910 (2004). T. Kimura *et al*, Phys. Rev. Lett. **98**, 156601 (2007). S. Mukami,N. Nagaosa and S.C. Zhang, Science **301**, 1348 (2003). T. Fujita, M.B.A. Jalil and S.G. Tan, J. Phys. Soc. Jpn. **78**, 104714 (2009). T. Fujita, M.B.A. Jalil and S.G. Tan, New J. Phys. **12**, 013016 (2010). S.G. Tan and M.B.A. Jalil, J. Phys. Soc. Jpn. **82**, 094714 (2013). T. Fujita *et al*, J. Appl. Phys. **110**, 121301 (2011). S.G. Tan *et al*, Sci. Rep. **5**, 18409 (2015). S. Cahangirov *et al*, Phys. Rev. Lett. **102**, 236804 (2009). C.-C. Liu, W. Feng and Y. Yao, Phys. Rev. Lettt. **107**, 076802 (2011). P. Vogt, Phys. Rev. Lett. **108**, 155501 (2012). L. Tao *et al*, Nature Naotech. **10**, 227 (2016). R. Ganatra and Q. Zhang, ACS Nano **8**, 4074 (2014). B. Radisavljevic *et al*, Nature Nanotech. **6**, 147 (2011). H. Wang *et al*, Nano Lett. **102**, 12, 4674 (2012). J. Linder, T. Yokoyama and A. Sudbø, Phys. Rev. B **80**, 205401 (2009). C.-X. Liu *et al*, Phys. Rev. B **81**, 041307 (2010). H.-Z. Lu *et al*, Phys. Rev. B **81**, 115407 (2010). R. Karplus and J.M. Luttinger, Phys. Rev. **96**, 1154 (1954). W. Kohn and J.M. Luttinger, Phys. Rev. **108**, 590 (1957). E.N. Adams and E.I. Blount, J. Phys. Chem. Solids **10**, 286 (1959). M.-C. Chang and Q. Niu, Phys. Rev. Lett. **75**, 1348 (1995). M.-C. Chang and Q. Niu, Phys. Rev. B **53**, 7010 (1996). G. Sundaram and Q. Niu, Phys. Rev. B **59**, 14915 (1999). D. Xiao, M.-C. Chang and Q. Niu, Rev. Mod. Phys. **82**, 1959 (2010). R. Cheng and Q. Niu, Phys. Rev. B **88**, 024422 (2013). Q. Liu *et al*, Phys. Rev. Lett. **102**, 156603 (2009). Y.L. 
Chen *et al*, Science **329**, 659 (2010). L.A. Wray *et al*, Nature Phys. **7**, 32 (2010). I. Garate and M. Franz, Phys. Rev. Lett. **104**, 146802 (2010). P. Wei *et al*, Phys. Rev. Lett. **110**, 186807 (2013). Q.I. Yang *et al*, Phys. Rev. B **88**, 081407(R).
1
--- abstract: 'We introduce a symmetrization technique which can be used as an extra step in some continuous-variable quantum key distribution protocols. By randomizing the data in phase space, one can dramatically simplify the security analysis of the protocols, in particular in the case of collective attacks. The main application of this procedure concerns protocols with postselection, for which security was established only against Gaussian attacks until now. Here, we prove that under some experimentally verifiable conditions, Gaussian attacks are optimal among all collective attacks.' author: - Anthony Leverrier title: 'A symmetrization technique for continuous-variable quantum key distribution' --- Quantum key distribution (QKD) is the art of distilling a secret key among distant parties, Alice and Bob, in an untrusted environment. The remarkable feature of QKD is that it is secure in an information theoretic sense [@SBC08]. QKD protocols come in two flavors depending on the type of quantum measurement they use: either a photon counting measurement for *discrete-variable* protocols or a homodyne detection for *continuous-variable* (CV) QKD. While the security of the former is now rather well understood (with the notable exceptions of the differential phase shift [@IWY02] and coherent one-way [@SBG05] protocols), security of CV protocols has been more elusive (see [@WPG11] for a recent review). This is mainly due to the fact that the infinite dimensional Hilbert space required to describe these protocols makes the analysis quite challenging. Among all CVQKD schemes, the protocol GG02 is certainly the easiest one to analyze [@GG02]. In this protocol, Alice sends $n$ coherent states $|\alpha_k\rangle = |x_{2k} + i x_{2k+1}\rangle$ to Bob who measures the states he receives either with a homodyne detection (thus randomly choosing one quadrature to measure for each state) or a heterodyne detection (in which case, Bob measures both quadratures at the same time). Alice’s modulation is Gaussian, meaning that $x_k$ is a centered normal random variable with a given variance. In the case of a heterodyne detection for instance [@WLB04], Bob obtains a classical vector ${\bf y} = [y_1,\cdots, y_{2n}]$ which is correlated to Alice’s vector ${\bf x}= [x_1,\cdots, x_{2n}]$. In this paper, we use bold font to refer to vectors. Then, using parameter estimation, reconciliation and privacy amplification, they can extract a secret key. For this specific protocol, Gaussian attacks (where the action of the eavesdropper can be modeled by a Gaussian quantum channel between Alice and Bob) are known to be optimal among collective attacks [@GC06; @NGA06; @PBL08]. Using de Finetti theorem and conditioned upon an extra verification step [@RC09], these collective attacks are actually optimal in general in the asymptotic limit. The only step which is currently missing in this security analysis is a tight reduction from coherent to collective attacks in the finite-size regime [@LGG10]. The security status of other CVQKD protocols is far less advanced. In particular, not much is known for protocols using postselection [@SRL02; @LKL04; @LSS05]. In these protocols, the idea is that Alice and Bob will only use some data to distill the secret key and discard the rest. More precisely, they only keep the data compatible with a positive key rate. 
This method, inspired by advantage distillation techniques, certainly makes the protocol more robust against imperfections such as losses or noise in the channel, and potentially gives the best practical CVQKD protocol (see Refs. [@LSS05; @LRH06] for experimental implementations). It is therefore of considerable importance to be able to assess its security. Unfortunately, there is currently no full security proof for this scheme, not even against collective attacks. In fact, the only result available so far is an analysis in the case of Gaussian attacks [@HL07; @SAA07]. This is, however, far from being sufficient for two reasons: first, Gaussian attacks are not believed to be optimal against this protocol; second, one can never prove in practice that a given quantum channel is indeed Gaussian. The problem is that the only tool currently available to establish the security of a CV protocol against collective attacks, namely Gaussian optimality [@WGC06], does not seem to help much for protocols with postselection (see, however, a recent approach along those lines in Ref. [@WSR11]). In this paper, we introduce a new proof technique based on a symmetrization procedure that allows us to make some progress concerning the security analysis of CVQKD with postselection. In particular, we will show how this symmetrization allows us, under some verifiable conditions, to consider that the quantum channel is indeed Gaussian, even though the physical channel may actually be non-Gaussian. This then means that checking the security against Gaussian attacks (which can be done with present tools) is indeed sufficient to get full security against collective attacks, and in fact against arbitrary attacks in the asymptotic limit thanks to de Finetti theorem [@RC09]. A symmetrized protocol ====================== The usual technique to prove the security of a Prepare and Measure (PM) protocol such as GG02, where Alice sends coherent states to Bob who measures them, is to consider an equivalent Entanglement-Based (EB) protocol. In the latter, Alice prepares two-mode squeezed vacuum states of which she measures one mode with a heterodyne detection and sends the second mode to Bob. Interestingly, before Alice and Bob measure their respective modes, they share a bipartite state $\rho_{AB}$. In this paper, we will restrict our attention to collective attacks, meaning that at the end of the protocol, Alice and Bob share $n$ copies of that state, that is, $\rho_{AB}^{\otimes n}$. In general, one cannot perform a perfect tomography of this state, simply because it lives in an infinite dimensional Hilbert space. In the case of protocols without postselection, this is not a problem since the secret key rate can be safely computed from the Gaussian state with the same first two moments as $\rho_{AB}$ [@GC06; @NGA06]. This is remarkable because one only needs to compute the covariance matrix of the state instead of its whole density matrix. Unfortunately, this approach fails in the case of protocols with postselection. Indeed, one would then need to compute the covariance matrix of the state *given it was postselected*. In principle, one could do this analysis with the experimental data obtained from the EB version of the protocol; but one cannot directly reconstruct this covariance matrix from the data observed in the actual PM version of the protocol. Indeed, the probabilistic map corresponding to a successful postselection is too complicated and one cannot expect to analyze its effect on general non-Gaussian states.
For this reason, our only hope for a security proof seems to be to somehow enforce the Gaussianity of the state $\rho_{AB}$ Alice and Bob will use in their protocol. The idea is therefore to add an extra step to the usual protocol, that will make the state $\rho_{AB}^{\otimes n}$ more Gaussian. Let us note $\mathcal{S}$ the quantum map induced by this symmetrization. It is now clear that if one had $\mathcal{S}\left(\rho_{AB}^{\otimes n}\right) = \rho_G^{\otimes n}$ where $\rho_G$ is the bipartite Gaussian state with the same covariance matrix as $\rho_{AB}$, then the security of the symmetrized protocol against general collective attacks would be identical to the security of the original protocol against Gaussian attacks. One could then compute the secret key rate, simply from the transmission and excess noise of the quantum channel, exactly as in the case of protocols without postselection. The symmetrization we introduce below will not induce an exact Gaussification, but only an approximate one. However, the quality of the Gaussification, characterized by the fidelity between $\mathcal{S}\left(\rho_{AB}^{\otimes n}\right)$ and $\rho_G^{\otimes n}$ will increase with $n$, and tend to 1 if some experimentally verifiable conditions (on the moment of order 4 of $\rho_{AB}$) are met. The symmetrization we consider here was introduced in [@LKG09] where it was argued that it corresponds to the natural symmetry for protocols using a Gaussian modulation of coherent states. In the EB scenario, before they both perform their heterodyne measurements, Alice and Bob would apply random conjugate passive linear transformations over their $n$ modes. Once this is done, they apply the usual postselection protocol. This symmetrization can also be used in the PM scenario, and crucially, one can simply apply it to the *classical* data ${\bf x}$ and ${\bf y}$ of Alice and Bob. More concretely, in the PM scenario, Alice and Bob follow the standard scenario of sending coherent states and performing a heterodyne detection for Bob. Then, Alice draws a random transformation with the Haar measure on the group $K(n):=O(2n,\mathbb{R}) \cap Sp(2n,\mathbb{R})$ (isomorphic to the unitary group $U(n)$), that is the transformations corresponding to linear passive transformations in phase-space. She informs Bob of her choice of transformation (over the authenticated classical channel), and both parties apply this transformation to their respective $2n$-vectors ${\bf x}$ and ${\bf y}$. Equivalence between the EB and PM symmetrized protocols ======================================================= In order to study the security of the symmetrized PM protocol, one needs to show that its equivalent EB protocol corresponds to the one symmetrized through the application of random conjugate passive transformations in phase-space. It is useful to introduce three different distributions that can be used to describe the two scenarios. In order to simplify the exposition, let us first consider the analysis of a generic CVQKD protocol in the case of collective attacks, meaning that the protocol is entirely described by a single use of the quantum channel. First, the PM protocol is naturally described by a joint probability distribution $P(x_1, x_{2}, y_1, y_{2})$ where $x_1,x_2$ (resp. $y_1,y_2$) refers to Alice’s (resp. Bob’s) measurement results. The EB scenario is characterized by the bipartite state $\rho$ shared by Alice and Bob before their respective measurements. 
In the context of a CV protocol, it is natural to describe this state by its Wigner function $W(x_1,x_2,y_1,y_2)$ where index 1 (resp. 2) refers to the first (resp. second) quadrature of Alice or Bob’s mode. Alternatively, since we restrict ourselves to protocols where both Alice and Bob perform a heterodyne detection, we can also consider the convenient characterization in terms of the $Q$-function $Q(x_1,x_2,y_1,y_2)$ of the state $\rho$. This $Q$-function corresponds to the probability distribution sampled from when measuring the state with heterodyne detection [@leo97] and is given by: $$Q(x_1, x_2, y_1,y_2) = \frac{1}{\pi^2} \langle \alpha_A, \alpha_B |\rho|\alpha_A, \alpha_B \rangle,$$ where $|\alpha_{A (B)}\rangle$ is a coherent state centered on $\alpha_{A(B)} = x_{A (B)} + i p_{A (B)}$ in Alice’s (Bob’s) phase space. Interestingly, if we denote $W_0$ the Wigner function of the vacuum, then $Q$ simply corresponds to the convolution of $W$ and $W_0$: $Q= W \star W_0$. The relation between the two probability distributions $P$ and $Q$ can also be made explicit: if Alice measures one mode of the two-mode squeezed vacuum with a heterodyne detection and obtains outcomes $x_1$, $x_2$, she projects the second mode on the coherent state $|\gamma (x_1 - i x_2)\rangle$ where the factor $\gamma=\sqrt{2(V-1)/(V+1)}$ depends on the variance $V$ of the two-mode squeezed vacuum [@GCW03][^1]. This means that we have the one-to-one relation: $$Q(x_1, x_2, y_1,y_2) = P(\gamma x_1, -\gamma x_2, y_1, y_2).$$ We now consider general $n$-mode states. Because of the correspondence above, applying a random transformation $R\otimes R$ with $R$ drawn with the Haar measure on $K(n)$ on the classical data represented by the distribution $P$ is equivalent to a symmetrization in phase-space corresponding to the application of random conjugate passive linear transformations over the $n$ modes of Alice and Bob. Noting $\mathcal{G}$ the group of passive linear transformations in phase-space, the state obtained after the symmetrization of $n$ copies of the state $\rho$ (i.e., for a collective attack) is: $$\label{symm_state} \mathcal{S}(\rho^{\otimes n}) := \int_{U \in \mathcal{G}} (U \otimes U^*) \, \rho^{\otimes n} \, (U \otimes U^*)^\dagger \mathrm{d}U,$$ where $\mathrm{d} U$ is the Haar measure over the group $\mathcal{G}$. Sketch of the security proof ============================ The rest of the paper consists in analyzing the state $\mathcal{S}(\rho^{\otimes n})$, and in particular proving that it becomes approximately Gaussian under conditions on the second and fourth moment of $P(x_1,x_2,y_1,y_2)$ which are usually met in practical implementations of a CVQKD protocol. Since the three distributions $W$, $Q$ and $P$, equivalently describe the protocol, we choose here to work with $P$, which is the one directly observable in the practical implementation of the protocol. Our proof will consist of two steps. First, we show that the distribution $P$ describing the state after the symmetrization tends to an explicit limiting function, where the convergence speed is $O(1/\sqrt{n})$. While one could in principle stop at this point and directly compute the secret key rate that can be extracted from this state, we will focus on the experimentally relevant scenario where the quantum channel behaves approximately as a Gaussian channel. In this case, we can bound the distance between the limiting distribution describing the whole protocol and a Gaussian identically and independently distributed (i.i.d.) 
function (thus corresponding to a collective Gaussian attack for which the key rate is already known) with an error term of order $O(1/\sqrt{n})$. In such a practical scenario, one can therefore bound the distance between the actual state and the state corresponding to a Gaussian attack by an arbitrarily small quantity. Taking $n$ large enough, the secret key rate of the symmetrized protocol is therefore identical to the secret key rate against collective Gaussian attacks. Convergence to a limiting distribution ====================================== To simplify the notations, we write $\tilde{P}$ the distribution corresponding to the state $\mathcal{S}(\rho^{\otimes n})$. The following lemma from [@LG10b] (proven in Appendix \[proof-three-variables\] for completeness) shows that this function only depends on 3 variables, instead of $4n$ for the non-symmetrized scenario. \[three-variables\] For $n \geq 2$, the symmetrized distribution $$\tilde{P}({\bf x}, {\bf y}) = \int_{K(n)} P(R{\bf x}, R{\bf y}) \mathrm{d} R,$$ where $\mathrm{d}R$ refers to the Haar measure on $K(n)$, only depends on $||{\bf x}||^2, ||{\bf y}||^2, {\bf x} \cdot {\bf y}$. Let us note $X_i = x_i^2, Y_i = y_i^2, Z_i = x_i y_i$ and $X^n = \sum_{i=1}^n X_i, Y^n = \sum_{i=1}^n Y_i, Z^n = \sum_{i=1}^n Z_i$. Because $\tilde{P}({\bf x}, {\bf y})$ only depends on $X^n=||{\bf x}||^2$, $Y^n=||{\bf y}||^2$ and $Z^n={\bf x} \cdot {\bf y}$, it actually corresponds to the probability distribution $P_v$ of the vector $V^n=[X^n, Y^n, Z^n]$: $$\tilde{P}({\bf x}, {\bf y}) \mathrm{d}{\bf x} \mathrm{d}{\bf y} = P_v(V^n) \mathrm{d} V^n.$$ According to the central limit theorem, since the vectors $V_i$ are i.i.d. (which follows from the collective attack assumption), the distribution $P_v$ converges to a Gaussian distribution as $n$ tends to infinity, and one can use a multidimensional version of the Berry-Esseen theorem to bound the distance between the two distributions. Noting $P_G$ the Gaussian distribution with the same first two moments as $P_v$, one can prove that the variational distance $\Delta$ between the two distributions is of the form (see Appendix \[BE\] for a more precise bound): $$\Delta := \iiint_{\mathbb{R}^3} \left|P_v(V)-P_G(V) \right| \mathrm{d}V= O\left(\frac{1}{\sqrt{n}}\right).$$ The scaling law in $O(1/\sqrt{n})$ is generic for the Berry-Esseen theorem and the constant factor depends on the covariance matrix of $[X,Y,Z]$, that is, on the moment of order 4 of the measurement results $(x,y)$. In general, one could compute the secret key rate corresponding to a state described by the distribution $P_G$. Here, we will restrict ourselves to a very concrete scenario, that of a quantum channel acting like a Gaussian channel (like an optical fiber typically [^2]). We insist that this is not a new assumption since one can always compute the covariance matrix $\Sigma$ of the distribution $P_G$ above. However, in general, the quantum channel will be such that $\Sigma$ will be close (up to sampling errors) to the covariance matrix obtained for a Gaussian channel. The limiting distribution $P_G$ is close to an i.i.d. Gaussian distribution in typical implementations ====================================================================================================== In practice, the coherent states are sent through an optical fiber acting as a Gaussian quantum channel, meaning that the data obtained by Alice and Bob follow a Gaussian distribution.
In general, this does not simplify the security analysis since observing variables that *look* Gaussian does not mean that they indeed *are* Gaussian. Here, we will show that *looking* Gaussian is sufficient for the bound obtained through Berry-Esseen theorem to be useful. Let us consider the case where $$(x_i, y_i) \sim \mathcal{N}\left( \begin{bmatrix} 0\\ 0\\ \end{bmatrix}, \begin{bmatrix} \langle x_i^2 \rangle & \langle x_i y_i \rangle \\ \langle x_i y_i \rangle & \langle y_i^2 \rangle \\ \end{bmatrix} \right).$$ Then, applying the symmetrization and using the results of the Berry-Esseen theorem, one obtains that the symmetrized (normalized) distribution $P_v$ tends to a Gaussian distribution with covariance matrix $\Sigma_G$: $$\Sigma_G = \left[ \begin{smallmatrix} 3\langle x_i^2 \rangle^2 & \langle x_i^2 \rangle \langle y_i^2 \rangle + 2 \langle x_i y_i \rangle^2 & 3 \langle x_i^2 \rangle \langle x_i y_i \rangle \\ \langle x_i^2 \rangle \langle y_i^2 \rangle + 2 \langle x_i y_i \rangle^2 & 3\langle y_i^2 \rangle^2& 3 \langle y_i^2 \rangle \langle x_i y_i \rangle\\ 3 \langle x_i^2 \rangle \langle x_i y_i \rangle & 3 \langle y_i^2 \rangle \langle x_i y_i \rangle & \langle x_i^2 y_i^2 \rangle\\ \end{smallmatrix}\right].$$ In a practical protocol, however, Alice and Bob only have access to a finite-precision estimate of the covariance matrix: the matrix $\Sigma_{\mathrm{est}}$ that they measure, and that they use in the Berry-Esseen theorem, will slightly differ from the ideal matrix $\Sigma_G$ above. The typical estimation error is of the order of $1/\sqrt{m}$ if $m$ samples are used in the procedure. Assuming that the estimation is performed with a (small) constant fraction of the total samples $n$, the typical error will be on the order of $1/\sqrt{n}$, which is comparable to the error term of the Berry-Esseen theorem. This then implies that the variational distance between the two final distributions is also of that order (see Appendix \[finite\] for details). Security analysis of a CV QKD protocol with postselection ========================================================= Using the results above, the distribution $P$ (or equivalently the $Q$-function) of the state describing the symmetrized version of the QKD protocol is $1/\sqrt{n}$-close in variational distance to a Gaussian distribution. Moreover, in a practical scenario, this Gaussian distribution corresponds to that of an i.i.d. Gaussian state. If one can make the error $1/\sqrt{n}$ small enough, then the security of the symmetrized protocol against collective attacks is equivalent to that of the usual (non-symmetrized) protocol against Gaussian attacks. In particular, the secret key rate for the symmetrized protocol is equal to the secret key rate against Gaussian attacks [@HL07; @SAA07]. Although the variational distance between the $Q$-functions is a weaker criterion than the usual trace distance between the states, one can argue that this distance makes sense when considering CV QKD protocols. Indeed, if two states have $2\epsilon$-close $Q$-functions (for the variational distance), it means that the probability to successfully distinguish them using heterodyne detection is bounded by $1/2+\epsilon$. Sampling from the Haar measure on $K(n)$ for $n$ large might be quite impractical. Methods to achieve it in complexity $O(n^2)$ are known [@LG11; @JKL11].
Fortunately, for our purpose, it is sufficient to sample from a different measure on $K(n)$ provided that the symmetrized state can be made arbitrarily close to the state $\mathcal{S}\left( \rho^{\otimes n}\right)$ of Eq. \[symm\_state\]. This can be achieved by means of quantum $k$-designs as presented in Appendix \[design\]. In particular, it is reasonable to conjecture that this can be done in complexity $O(n \log n)$, which would be compatible with a practical implementation. One could wonder why the symmetrization has to be *active* here in contrast to protocols such as GG02. The difference is that the postselection, performed along the quadrature axes, introduces some privileged directions in phase space. Consequently one needs to actively symmetrize the protocol to make it invariant under rotations in phase space. Conclusion ========== In this paper, we provided a first step towards a general security proof of CVQKD protocols with postselection. Until now, its security was only established in the very restricted case of Gaussian attacks, which are very unlikely to be optimal. Thanks to an active symmetrization of the protocol (performed on the classical data of Alice and Bob), one can show that collective attacks and actually arbitrary attacks in the asymptotic limit basically reduce to Gaussian attacks. The present solution is still not very practical since one would need to randomly sample from the unitary group in a very large dimension. Two possible approaches should be considered: either looking at a much smaller set of transformations for which the sampling can be performed efficiently, or improving the bounds derived here, possibly combining the symmetrization technique with some de Finetti-type arguments. It seems almost clear, at any rate, that the symmetrization technique introduced here will be required for any further advance in the study of the security of CV QKD with postselection. Acknowledgments =============== I thank Antonio Acín, Frédéric Grosshans and Philippe Grangier for fruitful discussions. I acknowledge support from the European Union under the ERC Starting grant PERCENT. Proof of Lemma $[1]$ (Lemma \[1\] from [@LG10b]) {#proof-three-variables} ================================================ Since the probability distribution $P$ is being randomized under the action of the group $K(n) = O(2n,\mathbb{R}) \cap Sp(2n,\mathbb{R})$ to give $\tilde{P}$ defined as $$\tilde{P}({\bf x}, {\bf y}) = \int_{K(n)} P(R{\bf x}, R{\bf y}) \mathrm{d} R$$ where $\mathrm{d}R$ refers to the Haar measure on $K(n)$, one has: $$\tilde{P}(R{\bf x}, R{\bf y}) = \tilde{P}({\bf x}, {\bf y})$$ for any ${\bf x}, {\bf y} \in \mathbb{R}^{2n}$ and $R \in K(n)$. We want to show that any function $\tilde{P}: \mathbb{R}^{2n} \times \mathbb{R}^{2n} \to \mathbb{R}$, such that $\tilde{P}(R{\bf x}, R{\bf y}) = \tilde{P}({\bf x}, {\bf y})$ for any transformation $R \in K(n)$ only depends on the three following parameters: $||{\bf x}||^2, ||{\bf y}||^2, {\bf x} \cdot {\bf y}$. Given any four vectors ${\bf x}, {\bf x'}, {\bf y}, {\bf y'} \in \mathbb{R}^{2n}$ such that $||{\bf x}||^2=||{\bf x'}||^2, ||{\bf y}||^2=||{\bf y'}||^2, {\bf x} \cdot {\bf y}={\bf x'} \cdot {\bf y'}$, it is sufficient to exhibit a transformation $R\in K(n)$ such that ${\bf x'}=R {\bf x}$ and ${\bf y'}= R {\bf y}$ to prove Lemma $[1]$.
A transformation $R \in K(n)$ can be described as a symplectic map: $$\label{symplectic} R = R(X,Y) \equiv \begin{bmatrix} X & Y \\ -Y & X \\ \end{bmatrix}$$ where the matrices $X$ and $Y$ are such that [@ADMS95]: $$\begin{aligned} X^T X+Y^T Y &=& X X^T + Y Y^T =1\\ X^T Y &,& X Y^T \quad \mathrm{symmetric}.\end{aligned}$$ Note that this matrix is written for reordered vectors of the form $[x_1, x_3, \cdots, x_{2n-1},x_2, x_4, \cdots, x_{2n}]$, that is, one first writes the $n$ $q$-quadratures then the $n$ $p$-quadratures for all the vectors. Let us introduce the following vectors ${\bf a}, {\bf a'}, {\bf b}, {\bf b'} \in \mathbb{C}^n$ defined as $$\begin{aligned} a_k = x_{2k-1} + i x_{2k} &,& a'_k = x'_{2k-1}+ i x'_{2k}\\ b_k = y_{2k-1} + i y_{2k} &,& b'_k = y'_{2k-1}+ i y'_{2k}.\end{aligned}$$ Then, the conditions read: $$\label{condition2} \left\{ \begin{array}{ccc} ||{\bf a}||^2=||{\bf a'}||^2\\ ||{\bf b}||^2=||{\bf b'}||^2\\ \mathrm{Re}\langle {\bf a} | {\bf b} \rangle = \mathrm{Re}\langle {\bf a'} | {\bf b'} \rangle \end{array} \right. ,$$ where $\mathrm{Re}(x)$ refers to the real part of $x$. For our purpose, it is therefore sufficient to prove that there exists a unitary transformation $U \in U(n)$ such that $U {\bf a} ={\bf a'}$ and $U {\bf b} = {\bf b'}$. Indeed, one can split $U$ into real and imaginary parts: $U = X - iY$, and it is easy to check that $R=R(X,Y)$ is such that $R{\bf x} ={\bf x'}$ and $R{\bf y} = {\bf y'}$. Let us introduce the following notations: $A \equiv ||{\bf a}||^2=||{\bf a'}||^2, B \equiv ||{\bf b}||^2=||{\bf b'}||^2$ and $C \equiv \mathrm{Re}\langle {\bf a}| {\bf b} \rangle = \mathrm{Re}\langle {\bf a'}| {\bf b'} \rangle$. Consider first the case where ${\bf a}$ and ${\bf b}$ are colinear. This means that ${\bf b} = C/A {\bf a}$ and $C = \pm \sqrt{AB}$. Using the Cauchy-Schwarz inequality, $|C| = |{\bf a'} \cdot {\bf b'}| \leq ||{\bf a'}|| \cdot||{\bf b'}|| = \sqrt{AB}$ with equality if and only if ${\bf a'}$ and ${\bf b'}$ are colinear. This means that ${\bf a'}$ and ${\bf b'}$ are colinear and that ${\bf b'} = (C/A) \, {\bf a'}$. Because $||{\bf a}|| = ||{\bf a'}||$, the reflection $U$ across the mediator hyperplane of ${\bf a}$ and ${\bf a'}$ is a unitary transformation that maps ${\bf a}$ to ${\bf a'}$. This reflection also maps ${\bf b}$ to ${\bf b'}$. This ends the proof in the case where ${\bf a}$ and ${\bf b}$ are colinear. Let us now consider the general case where ${\bf a}$ and ${\bf b}$ are not colinear. It is clear that ${\bf a'}$ and ${\bf b'}$ cannot be colinear either. We take two bases $({\bf a}, {\bf b}, {\bf f_3}, \cdots, {\bf f_n})$ and $({\bf a'}, {\bf b'}, {\bf f_3'}, \cdots, {\bf f_n'})$ of $\mathbbm{C}^n$ and use the Gram-Schmidt process to obtain two orthonormal bases $\mathcal{B}=({\bf e_1}, \cdots, {\bf e_n})$ and $\mathcal{B}'=({\bf e_1'}, \cdots, {\bf e_n'})$. Note that vectors ${\bf e_1}, {\bf e_2}, {\bf e_1'}$ and ${\bf e_2'}$ are given by: $$\begin{aligned} {\bf e_1} = \frac{{\bf a}}{\sqrt{A}} &,& {\bf e_2} = \frac{{\bf b} - \langle {\bf e_1}|{\bf b}\rangle {\bf e_1}}{||{\bf b} - \langle {\bf e_1}|{\bf b}\rangle {\bf e_1} ||}\\ {\bf e_1'} = \frac{{\bf a'}}{\sqrt{A}} &,& {\bf e_2'} = \frac{{\bf b'} - \langle {\bf e_1'}|{\bf b'}\rangle {\bf e_1'}}{||{\bf b'} - \langle {\bf e_1'}|{\bf b'}\rangle {\bf e_1'} ||}.\end{aligned}$$ Let us call $U$ the unitary operator mapping $\mathcal{B}$ to $\mathcal{B}'$. It is easy to see that $U$ maps ${\bf a}$ and ${\bf b}$ to ${\bf a'}$ and ${\bf b'}$, respectively. This concludes our proof.
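As a purely illustrative aside (ours, not part of the original text), the correspondence $K(n)\simeq U(n)$ used in the proof above is easy to exercise numerically. The following sketch, assuming only Python and numpy, draws a Haar-distributed unitary via a QR decomposition, builds the associated matrix $R(X,Y)$ of Eq. \[symplectic\], and checks both that $R\in O(2n,\mathbb{R}) \cap Sp(2n,\mathbb{R})$ and that the three quantities of Lemma \[three-variables\] are preserved; all function and variable names are our own.

```python
# Illustrative sketch (not from the paper): sample R in K(n) = O(2n,R) cap Sp(2n,R)
# from a Haar-random unitary U = X - iY and check the invariants of Lemma 1.
# Vectors are ordered (q_1,...,q_n, p_1,...,p_n), as in Eq. (symplectic).
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed U(n) matrix via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))          # fix the phases column by column

def R_from_U(U):
    """Orthogonal symplectic matrix R(X, Y) with U = X - iY."""
    X, Y = U.real, -U.imag
    return np.block([[X, Y], [-Y, X]])

rng = np.random.default_rng(0)
n = 5
R = R_from_U(haar_unitary(n, rng))

# R is orthogonal and symplectic, hence an element of K(n).
Omega = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(R.T @ R, np.eye(2 * n))
assert np.allclose(R.T @ Omega @ R, Omega)

# R preserves ||x||^2, ||y||^2 and x.y, the three quantities of Lemma 1.
x, y = rng.standard_normal(2 * n), rng.standard_normal(2 * n)
xr, yr = R @ x, R @ y
assert np.allclose([x @ x, y @ y, x @ y], [xr @ xr, yr @ yr, xr @ yr])
```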
Distance between the symmetrized distribution and a Gaussian distribution: Berry-Esseen theorem {#BE} =============================================================================================== We use a multidimensional local version of the Berry-Esseen theorem due to Zitikis. More precisely, Theorem 1.2 of [@Zit93] reads: \[berry-esseen\] Let $V_1, \cdots, V_n$ be a sequence of independent and identically distributed $d$-variate random vectors, let $S_n = \tfrac1n \sum_{i=1}^n V_i$. Let ${\bf \mu}$ and $\Sigma$ be the first two moments of $V_1$, and let $\lambda_{\mathrm{min}}$ be the least eigenvalue of $\Sigma$. Let $G$ be a Gaussian random vector with zero mean and covariance matrix $\mathbbm{1}_d$. Let $\mathcal{C}$ be the class of all convex Borel sets. Then there exists a universal constant $c\geq 0$ such that $$\begin{gathered} \sup_{C \in \mathcal{C}} \left|P(\sqrt{n} (S_n-\mu)\Sigma^{-1/2}\in \mathcal{C}) - P(G \in \mathcal{C}) \right|\\ \leq c \sqrt{d} \lambda_{\mathrm{min}}^{-3/2}\mathbb{E}\left(||V_1||^3\right)/\sqrt{n},\end{gathered}$$ where $||.||$ refers to the Euclidean norm. First, we note that the quantity $$\sup_{C \in \mathcal{C}} \left|P({\bf x} \in \mathcal{C}) - P({\bf y} \in \mathcal{C}) \right|$$ corresponds to the variational distance between the distributions of ${\bf x}$ and ${\bf y}$. In the case of CVQKD, we need to consider tridimensional random vectors $V_i=[X_i,Y_i,Z_i]$ and one can immediately estimate ${\bf \mu}$ and $\Sigma$ from the experimental data. Using the notations of the main text and applying Theorem \[berry-esseen\] gives $$\begin{gathered} \label{bound} \iiint_{\mathbb{R}^3} \left|P_v(V)-P_G(V) \right| \mathrm{d}V\\ \leq c \sqrt{3} \lambda_{\mathrm{min}}^{-3/2}\mathbb{E}\left(||X_1^2+Y_1^2+Z_1^2||^{3/2}\right)/\sqrt{n},\end{gathered}$$ where $P_G$ corresponds to the multivariate Gaussian distribution with the same first two moments as $P_v$. Error due to the finite estimation sample {#finite} ========================================= Let us assume that the variables $(x_k,y_k)$ follow a centered bivariate normal distribution, as typical in experimental implementations of CVQKD. During the parameter estimation, Alice and Bob need to estimate the fourth moments of $(x_k,y_k)$ given by the covariance matrix $\Sigma$: $$\Sigma = \left[ \begin{smallmatrix} \langle x_i^4 \rangle & \langle x_i^2 y_i^2 \rangle & \langle x_i^3 y_i \rangle \\ \langle x_i^2 y_i^2 \rangle & \langle y_i^4 \rangle & \langle x_i y_i^3 \rangle\\ \langle x_i^3 y_i \rangle & \langle x_i y_i^3 \rangle & \langle x_i^2 y_i^2 \rangle\\ \end{smallmatrix}\right]$$ Alternatively, one can describe this matrix by the vector $V_i = [\langle x_i^4 \rangle, \langle x_i^3 y_i \rangle, \langle x_i^2 y_i^2 \rangle, \langle x_i y_i^3 \rangle, \langle y_i^4 \rangle]$. In order to estimate this vector, one uses the estimator $\bar{V}^m$ defined as $$\bar{V}^m:= \frac{1}{m} \sum_{i_1,\cdots, i_m} V_i,$$ where $i_1,\cdots, i_m$ are $m$ randomly chosen indices among $\{1,\cdots,n\}$. Using the multivariate version of the Central Limit Theorem, one obtains that the estimator $\bar{V}^m$ converges to the true value and that the error follows a normal distribution.
More precisely, let us note $\Sigma_G$ the true covariance matrix and $\Sigma_{\mathrm{est}}$ the estimated matrix, i.e., $$\Sigma_G = \left[ \begin{smallmatrix} 3\langle x_i^2 \rangle^2 & \langle x_i^2 \rangle \langle y_i^2 \rangle + 2 \langle x_i y_i \rangle^2 & 3 \langle x_i^2 \rangle \langle x_i y_i \rangle \\ \langle x_i^2 \rangle \langle y_i^2 \rangle + 2 \langle x_i y_i \rangle^2 & 3\langle y_i^2 \rangle^2& 3 \langle y_i^2 \rangle \langle x_i y_i \rangle\\ 3 \langle x_i^2 \rangle \langle x_i y_i \rangle & 3 \langle y_i^2 \rangle \langle x_i y_i \rangle & \langle x_i^2 y_i^2 \rangle\\ \end{smallmatrix}\right],$$ and $$\Sigma_\mathrm{est} = \frac{1}{m} \sum_{i_1, \cdots, i_m}\left[ \begin{smallmatrix} x_i^4 & x_i^2 y_i^2 & x_i^3 y_i \\ x_i^2 y_i^2 & y_i^4 & x_i y_i^3 \\ x_i^3 y_i & x_i y_i^3 & x_i^2 y_i^2\\ \end{smallmatrix}\right].$$ Then, the Central Limit Theorem asserts that the random matrix $\sqrt{m} (\Sigma_\mathrm{est}-\Sigma_G)$ converges in distribution to a centered multivariate normal distribution with a covariance matrix depending on moments of order 8 of $(x_k,y_k)$. We do not write this matrix out explicitly here as it is rather cumbersome, but it is straightforward to compute it. The result of this analysis is that the error in estimating the correct covariance matrix $\Sigma_G$ scales as $1/\sqrt{m}$ where $m$ is the number of samples used. In order to obtain an error of the same order of magnitude as the one due to the Berry-Esseen theorem, one should use a constant fraction of the data for the parameter estimation. In that case, the covariance matrix would be estimated with a precision $1/\sqrt{n}$. We now prove that this error for the covariance matrices translates into an error (computed with respect to the variational distance) of the same magnitude for the probability distributions. Let us first consider the case of univariate normal distributions, that is, two normal distributions $\mathcal{N}(0,\sigma_1^2)$ and $\mathcal{N}(0,\sigma_2^2)$ with $\sigma_2 > \sigma_1$. Let us note $g_1$ and $g_2$ their respective density function. One has: $$\begin{aligned} && \int_{-\infty}^{\infty} |g_1(x) - g_2(x)| \mathrm{d}x \nonumber \\ &&= 2 \mathrm{erf} \left(\sigma_2 \sqrt{\frac{\ln (\sigma_2/\sigma_1)}{\sigma_2^2-\sigma_1^2}} \right)-2 \mathrm{erf} \left(\sigma_1 \sqrt{\frac{\ln (\sigma_2/\sigma_1)}{\sigma_2^2-\sigma_1^2}} \right) \nonumber\end{aligned}$$ where $\mathrm{erf}(x) = 2/\sqrt{\pi} \int_0^x e^{-t^2}\mathrm{d}t$ is the error function. In particular, if one has $\sigma_2 = \sigma_1 + \delta$ where $\delta=O(1/\sqrt{n})$ is a small error, then a first order expansion of the expression above gives $$\int_{-\infty}^{\infty} |g_1(x) - g_2(x)| \mathrm{d}x = \delta \sqrt{\frac{8}{e \pi \sigma_1^2}} + o\left(\frac{1}{\sqrt{n}}\right).$$ One can extend this analysis to the case of multivariate normal distributions with covariance matrices differing by an error of the order of $1/\sqrt{n}$, and one would get that the variational distance between the two distributions is also of the order of $1/\sqrt{n}$. For readability, we omit the precise bound here but it can be computed in a straightforward manner. Approximate symmetrization using efficient construction of quantum $k$-designs {#design} ============================================================================== Sampling from the group $K(n)$ is equivalent to sampling from the unitary group $U(n)$. Indeed, any map in $K(n)$ can be described by its action on the annihilation operators on the $n$ modes. Let us note $a_i$ (resp.
$b_i$) the annihilation operator on the $i^\mathrm{th}$ mode of Alice (resp. Bob). A map $U \in K(n)$ is described by the unitary $u_{i,j}$ that transforms $a_i$ into $\sum_{j=1}^n u_{i,j} a_j$. Let us consider a general state $\rho \in \left(\mathcal{H}_A\otimes \mathcal{H}_B\right)^{\otimes n}$, $$\rho = \sum_{{\bf i}^a, {\bf i}^b, {\bf j}^a,{\bf j}^b} \lambda_{{\bf i}^a{\bf i}^b {\bf j}^a {\bf j}^b} |{\bf i}^a, {\bf i}^b\rangle \!\langle {\bf j}^a, {\bf j}^b|,$$ where ${\bf i}^a = (i_1^a, i_2^a, \cdots, i_n^a)$ describes the photon distribution in Alice’s $n$ modes (and similarly for Bob with ${\bf i}^b$). The state $ |{\bf i}^a, {\bf i}^b\rangle = \frac{1}{\sqrt{{\bf i}^a! {\bf i}^b !}} \prod_{k=1}^{n} \left( a_k^{\dagger}\right)^{i_k^a} \left( b_k^{\dagger}\right)^{i_k^b} |00\rangle$ is transformed under the action of $U\otimes U^*$ into $$\begin{gathered} U\otimes U^* |{\bf i}^a, {\bf i}^b\rangle= \\ \frac{1}{\sqrt{{\bf i}^a! {\bf i}^b !}} \prod_{k=1}^{n} \left( \sum_{l=1}^n u_{k,l} a_l^{\dagger}\right)^{i_k^a} \left(\sum_{l=1}^n u_{k,l}^* b_l^{\dagger}\right)^{i_k^b} |00\rangle.\end{gathered}$$ Let us fix a maximal photon number $N$ together with the projector $\Pi_{N}$ on the subspace of $\mathcal{H}_A\otimes \mathcal{H}_B$ spanned by Fock states $ |{\bf i}^a, {\bf i}^b\rangle$ with a total photon number less than or equal to $N$. For some given first four moments of $\rho$, there exists a constant $c_\epsilon$ such that taking $N= c_\epsilon n$ leads to $$\left\| \rho - \Pi_{N} \rho \Pi_{N}^\dagger \right\|_\mathrm{tr} \leq \epsilon.$$ Noting $\rho_{N} =\Pi_{N} \rho \Pi_{N}^\dagger$, one observes that $(U \otimes U^*) \rho_{N} (U \otimes U^*)^\dagger$ is a polynomial of degree $N$ in $u_{i,j}$. Consider an approximate $N$-design $\nu$, then for any $k \leq N$, one has $$\begin{gathered} \left\| \int_{\mathrm{Haar}} (U \otimes U^*) \, \rho^{\otimes n} \, (U \otimes U^*)^\dagger \mathrm{d}U \right. \\ \left. - \sum_\nu (U(\nu) \otimes U^*(\nu) ) \, \rho^{\otimes n} \, (U(\nu) \otimes U^*(\nu) )^\dagger \right\|_\mathrm{tr} \leq \epsilon_\mathrm{design}.\end{gathered}$$ Hence, it is sufficient to use an approximate $N$-design instead of the Haar measure on the unitary group $U(n)$ in order to symmetrize the state $\rho_{N}$, up to some arbitrarily small error $\epsilon_\mathrm{design}$. Interestingly, efficient constructions of such approximate $N$-designs are known [@HL09]. In these constructions, the number of unitaries in the design scales as $n^{O(N)}$ which means that one needs $O( n \log n)$ bits of randomness to draw one such unitary (since $N$ is proportional to the number of modes). Unfortunately, the construction provided in [@HL09] only works in the regime where $N= O(n/ \log n)$, which is close to, but not exactly the regime of interest here. Nevertheless, the existence of this construction gives hope that one could construct an approximate $N$-design also in the regime where $N = O(n)$. If this were true, then one could efficiently (in time $O(n \log n)$) perform the symmetrization studied in the main text.
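Returning briefly to the moment estimation of Appendix \[finite\], the Gaussian fourth-moment identities underlying $\Sigma_G$, and the $O(1/\sqrt{m})$ behaviour of their empirical estimates, can be checked numerically. The following short sketch is ours (not part of the original analysis); it assumes Python with numpy, and the chosen variances and correlation are arbitrary placeholders.

```python
# Illustrative check (not from the paper) of the Gaussian fourth-moment identities
# behind Sigma_G and of the 1/sqrt(m) error of their empirical estimates.
import numpy as np

rng = np.random.default_rng(1)
sx2, sy2, c = 1.0, 1.5, 0.6                    # <x^2>, <y^2>, <xy> (arbitrary values)
cov = np.array([[sx2, c], [c, sy2]])
exact = np.array([3 * sx2**2,                  # <x^4>
                  sx2 * sy2 + 2 * c**2,        # <x^2 y^2>
                  3 * sx2 * c])                # <x^3 y>   (Isserlis / Wick identities)

for m in (10**4, 10**6):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=m).T
    est = np.array([np.mean(x**4), np.mean(x**2 * y**2), np.mean(x**3 * y)])
    print(m, np.abs(est - exact))              # errors shrink roughly like 1/sqrt(m)
```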
[^1]: In the PM scenario, Alice’s modulation has a variance $V-1$. [^2]: Other experimental setups can be considered in practice, for instance free-space CVQKD [@HEB10]. In that case, it would be interesting to see whether the quantum channel behaves approximately like a Gaussian channel.
1
--- abstract: 'Let $X$ be a Calabi–Yau threefold fibred over ${{\mathbb P}}^1$ by non-constant semi-stable K3 surfaces and reaching the Arakelov–Yau bound. In \[STZ\], X. Sun, Sh.-L. Tan, and K. Zuo proved that $X$ is modular in a certain sense. In particular, the base curve is a modular curve. In their result they distinguish the rigid and the non-rigid cases. In \[SY\] and \[Y\] rigid examples were constructed. In this paper we construct explicit examples in non-rigid cases. Moreover, we prove for our threefolds that the “interesting” part of their $L$-series is attached to an automorphic form, and hence that they are modular in yet another sense.' address: - 'Institute of Mathematics, Hebrew University of Jerusalem, Givat-Ram, Jerusalem 91904, Israel' - 'Department of Mathematics, Queen’s University, Kingston, Ontario, Canada K7L 3N6' author: - Ron Livné - Noriko Yui date: 'Version of June 16, 2005' title: 'The modularity of certain non-rigid Calabi–Yau threefolds' --- [^1] [^2] Introduction ============ Let $X$ be an algebraic threefold and let $f:X\to {{\mathbb P}}^1$ be a non-isotrivial morphism whose fibers are semi-stable K3 surfaces. Let $S\subset {{\mathbb P}}^1$ be the finite set of points above which $f$ is non-smooth, and assume that the monodromy at each point of $S$ is non-trivial. Jost and Zuo \[JZ\] proved the Arakelov–Yau type inequality: $$\mbox{deg} f_* \omega_{X/{{\mathbb P}}^1}\leq \mbox{deg}\, \Omega_{{{\mathbb P}}^1}^1 (\mbox{log} S).$$ Let $\Delta\subset X$ be the pull-back of $S$. Let $\omega_{X/{{{\mathbb P}}}^1}$ be the canonical sheaf. The Kodaira–Spencer maps $\theta^{2,0}$ and $\theta^{1,1}$ are maps of sheaves fitting into the following diagram: $$f_*\Omega^2_{X/{{{\mathbb P}}}^1}(\mbox{log}\,\Delta) \overset{\theta^{2,0}}\to R^1f_*\Omega^1_{X/{{{\mathbb P}}}^1}(\mbox{log}\,\Delta) \otimes \Omega^1_{{{{\mathbb P}}}^1}(\mbox{log}\,S) \overset{\theta^{1,1}}\to R^2\, f_*{{\mathcal O}}_{X/{{{\mathbb P}}}^1}\otimes\Omega^1_{{{{\mathbb P}}}^1} (\mbox{log}\,S)^{{\otimes}2}.$$ The iterated Kodaira–Spencer map of $f$ is defined to be the map $\theta^{1,1}\theta^{2,0}$. It is known (see \[STZ\]) that when the (iterated) Kodaira–Spencer map is $0$, one actually has the stronger inequality $$\mbox{deg} f_* \omega_{X/{{\mathbb P}}^1}\leq \frac{1}{2}\mbox{deg}\, \Omega_{{{\mathbb P}}^1}^1 (\mbox{log} S).$$ Assume from now on that $X$ is a Calabi–Yau threefold. Then the triviality of the canonical bundle implies that $\mbox{deg}\, f_* \omega_{X/{{\mathbb P}}^1}=2$ (see (\[deg\]) below). Recently X. Sun, S-L. Tan and K. Zuo \[STZ\] considered Calabi–Yau threefolds for which the Arakelov–Yau inequality becomes equality. Thus $S$ consists of $6$ points when the Kodaira–Spencer map is $0$ and of $4$ points otherwise. As a consequence of the main theorem of \[STZ\], the following results were obtained. [**Theorem 1**]{} (Corollary 0.4 in \[STZ\]): *(i) If the iterated Kodaira–Spencer map $\theta^{1,1}\theta^{2,0}$ of $f$ is non-zero, then $X$ is rigid (i.e., $h^{2,1}=0$) and birational to the Nikulin–Kummer construction of a symmetric square of a family of elliptic curves $f: E\to {{\mathbb P}}^1$.
After passing to a double cover $E^{\prime}\to E$ (if necessary), the family $g^{\prime}: E^{\prime}\to {{\mathbb P}}^1$ is one of the six modular families of elliptic curves on Beauville’s list (\[B\]).* \(ii) If the iterated Kodaira–Spencer map $\theta^{1,1}\theta^{2,0}$ of $f$ is zero, then $X$ is a non-rigid Calabi–Yau threefold (i.e., $h^{2,1}\neq 0$), the general fibers have Picard number at least $18$, and ${{\mathbb P}}^1\setminus S\simeq \mathfrak H/\Gamma$ where $\Gamma$ is a congruence subgroup of $PSL(2,{{\mathbb Z}})$ of index $24$. [**Remark**]{}: The base curve ${{\mathbb P}}^1\setminus S$ is a modular variety of genus zero, i.e., $\mathfrak H/\Gamma$ where $\Gamma$ is a torsion-free genus zero congruence subgroup of $PSL(2,{{\mathbb Z}})$ of index $12$ in case (i), and of index $24$ in case (ii). In the paper of Sun, Tan and Zuo \[STZ\], the word “modularity” refers to this fact. The third cohomology of each of the six rigid Calabi–Yau threefolds in Theorem 1 (i) arises from a weight $4$ modular form. In the articles of Saito and Yui \[SY\] and of Verrill in Yui \[Y\], these forms were explicitly determined. Saito and Yui use geometric structures, while Verrill uses a point-counting method, to obtain the results. More precisely, the following was proved for the natural models over ${{\mathbb Q}}$ of these six rigid threefolds: [**Theorem 2**]{} (Saito and Yui \[SY\] and Verrill in Yui \[Y\]): [*For each of the six rigid Calabi–Yau threefolds over ${{\mathbb Q}}$, the L-series of the third cohomology coincides with the L-series arising from the cusp form of weight $4$ of one variable on the modular group in Beauville’s list. Beauville’s list and the corresponding cusp forms are given in Table 1.*]{} $$\begin{array}{|c|c|c|} \hline \hline \mbox{Group} & \mbox{Number of components} & \mbox{Cusp forms} \\ \Gamma & \mbox{of singular fibers} & \mbox{of weight $4$} \\ \hline \Gamma(3) & 3,3,3,3 & \eta(q^3)^8 \\ \hline \Gamma_1(4)\cap \Gamma(2) & 4,4,2,2 & \eta(q^2)^4\eta(q^4)^4 \\ \hline \Gamma_1(5) & 5,5,1,1 & \eta(q)^4\eta(q^5)^4 \\ \hline \Gamma_1(6) & 6,3,2,1 & \eta(q)^2\eta(q^2)^2\eta(q^3)^2\eta(q^6)^2 \\ \hline \Gamma_0(8)\cap\Gamma_1(4) & 8,2,1,1& \eta(q^4)^{16}\eta(q^2)^{-4}\eta(q^8)^{-4} \\ \hline \Gamma_0(9)\cap\Gamma_1(3) & 9,1,1,1 & \eta(q^3)^8 \\ \hline \hline \end{array}$$ Table 1: Rigid Calabi–Yau threefolds and cusp forms Here $\eta(q)$ denotes the Dedekind eta-function: $\eta(q)=q^{1/24}\prod_{n\geq 1}(1-q^n)$ with $q=e^{2\pi i\tau}$. It might be helpful to recall the six rigid Calabi–Yau threefolds in Theorem 2. These six rigid Calabi–Yau threefolds are obtained (by Schoen \[S\]; see also Beauville \[B\]) as the self-fiber products of stable families of elliptic curves admitting exactly four singular fibers. The base curve is a rational modular curve and corresponds to the torsion-free genus zero congruence subgroups $\Gamma$ of $PSL(2,{{\mathbb Z}})$ in Table 1. Note that the $4$-tuples of natural numbers appearing in the second column add up to $12$, which is the index of the modular group $\Gamma$ in $PSL(2,{{\mathbb Z}})$. In \[STZ\] the authors indicate one example for the non-rigid extremal case. It is related to $\Gamma(4)$, which is a torsion-free congruence subgroup of genus $0$ and index $24$ in $PSL(2,{{\mathbb Z}})$. The list of torsion-free congruence subgroups of genus $0$ and index 24 in $PSL(2,{{\mathbb Z}})$ is known (see Sebbar \[Se\], and Table 2 below).
In this paper we will show that most of them give rise to a similar collection of examples. In each of these cases we will compute the interesting part of the $L$-series of the third cohomology of an appropriate natural model over ${{\mathbb Q}}$ in terms of automorphic forms. This paper is organized as follows. In Section 2, we use work of Sebbar \[Se\] to determine the groups $\Gamma$ corresponding to case (ii) of Theorem 2. These are subgroups of $PSL(2,{{\mathbb Z}})$, and associated to each $PSL(2,{{\mathbb Z}})$-conjugacy class there is a natural elliptic fibration over the base curve, defined over ${{\mathbb Q}}$. The total spaces of these fibrations are elliptic modular surfaces in the sense of Shioda \[Sh1\]. Moreover, each is an extremal K3 surface (namely, its Picard number is 20, the maximum possible). We explain the relation between the motive of their transcendental cycles and specific CM forms of weight $3$, using a result of Livné on orthogonal rank 2 motives in \[L2\]. Section 3 contains our main results: we construct our examples, verify the required properties, and analyze the interesting part of their cohomology. (See the final Remarks 8 (2) for the other parts.) Then in Section 4 we give explicit formulas for the weight $3$ cusp forms and defining equations for the elliptic fibrations of Section 2. The paper is supplemented by the article of Hulek and Verrill \[HV\] where they treat Kummer varieties, one of which is the case associated to the modular group $\Gamma_1(7)$. This case differs from the examples considered in our paper, the main difference being that the $2$-torsion points do not decompose into four sections, leading to non-semi-stable fibrations. But it still gives rise to a Calabi–Yau threefold (Theorem 2.2 of Hulek and Verrill \[HV\]), and the modularity question can still be considered, and this is exactly what Hulek and Verrill deal with in the supplement \[HV\] to this article. Extremal congruence K3 surfaces =============================== The torsion-free genus zero congruence subgroups of $PSL(2,{{\mathbb Z}})$ of index $24$ were classified in Sebbar \[Se\]. There are precisely nine conjugacy classes of such congruence subgroups. The second column in the following Table 2 gives the complete list of the torsion-free congruence subgroups of $PSL(2,{{\mathbb Z}})$ of index $24$ up to conjugacy. Each has precisely 6 cusps. The third column in the table gives the widths of these cusps. $$\begin{array}{|c|c|c|} \hline\hline \#& \mbox{The group $\Gamma$} & \mbox{Widths of the cusps} \\ \hline 1& \Gamma(4) & 4,4,4,4,4,4 \\ \hline 2& \Gamma_0(3)\cap \Gamma(2) & 6,6,6,2,2,2 \\ \hline 3& \Gamma_1(7) & 7,7,7,1,1,1 \\ \hline 4& \Gamma_1(8) & 8,8,4,2,1,1 \\ \hline 5& \Gamma_0(8)\cap \Gamma(2) & 8,8,2,2,2,2 \\ \hline 6& \Gamma_1(8;4,1,2) & 8,4,4,4,2,2 \\ \hline 7& \Gamma_0(12) & 12,4,3,3,1,1 \\ \hline 8& \Gamma_0(16) & 16,4,1,1,1,1 \\ \hline 9& \Gamma_1(16;16,2,2) & 16,2,2,2,1,1 \\ \hline \hline \end{array}$$ Table 2: Torsion-free congruence subgroups of index 24.
Here $$\Gamma_1(8;4,1,2):=\{\pm \begin{pmatrix} 1+4a & 2b \\ 4c & 1+4d \end{pmatrix},\, a\equiv c\pmod 2\}$$ and $$\Gamma_1(16;16,2,2):=\{\pm \begin{pmatrix} 1+4a & b \\ 8c & 1+4d \end{pmatrix}, \, a\equiv c\pmod 2\}.$$ [**Remark 3**]{}: [*If we are interested in conjugacy as Fuchsian groups (in $PSL(2,{{\mathbb R}})$), Examples \#1, \#5, and \#8 are conjugate (use the matrix $\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix})$, Examples \#2 and \#7 are conjugate (use the same matrix), and Examples \#4, \#6, and \#9 are conjugate (use the same matrix as well as $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix})$.*]{} [**Proposition 4**]{}: *Let $\Gamma$ be one of the congruence subgroups in Table 2. Then $\Gamma$ has an explicit congruence lift $\tilde{\Gamma}$ to $SL(2,{{\mathbb Z}})$ with the following properties:* \(1) $\tilde{\Gamma}$ has no elliptic elements. In particular $-{\rm Id}$ is not in $\tilde{\Gamma}$. \(2) $\tilde{\Gamma}$ contains no element of trace $-2$. [**Proof**]{}: We let $\tilde{\Gamma}$ be the subgroup of $SL(2,{{\mathbb Z}})$ consisting of the elements above $\Gamma$. Indeed, the lifts $\tilde{\Gamma}$ are sometimes the same as the groups $\Gamma$ themselves. In fact, for the cases \#1, \#2, \#3, \#4 and \#5, the lifts are the same and respectively given by: $\Gamma(4)$, $\Gamma_0(3)\cap\Gamma(2)$, $\Gamma_1(7)$, $\Gamma_1(8)$ and $\Gamma_0(8)\cap\Gamma(2)$. In the cases \#6, \#7, \#8 and \#9, the lifts are not unique, and we choose respectively the following lifts: $\Gamma_1^{\prime}(8;4,1,2)$, $\Gamma_0(12)\cap\Gamma_1(3)$, $\Gamma_0(16)\cap\Gamma_1(4)$ and $\Gamma_1^{\prime}(16;16,2,2)$. Here $\Gamma_1^{\prime}(8;4,1,2)$ and $\Gamma_1^{\prime}(16;16,2,2)$ are defined in the same way as $\Gamma_1(8;4,1,2)$ and $\Gamma_1(16;16,2,2)$ but without the $\pm$. Note that the widths of the cusps are not affected by taking a lift as $-{\rm Id}$ is the only difference. (We should remark that the lifts are not unique; for instance, in the case \#7, there are four lifts, but only one has no elements of trace $-2$, which is the one given above.) In fact, Proposition 4 has also been obtained by A. Sebbar in his unpublished note. Proposition 4 will pave the way to the definition of elliptic modular surfaces, which we will discuss next. [**Elliptic Modular Surfaces**]{}: In \[Sh1\] Shioda has shown how to associate to any subgroup $G$ of $SL(2,{{\mathbb Z}})$ of finite index and not containing $-$Id an elliptic fibration $E(G)$, called the elliptic modular surface associated to $G$, over the modular curve $X(G)=\overline{G\backslash{\frak{H}}}$. (Shioda’s construction requires that the modular group be a subgroup of $SL(2,{{\mathbb Z}})$ (rather than a subgroup of $PSL(2,{{\mathbb Z}})$) that contains no element of order $2$. This is the reason we consider a lift $\tilde{\Gamma}$ of $\Gamma$ in our discussion.) [**Remark 5**]{}: [*It follows from Kodaira’s theory that when $G$ contains no elliptic elements and no elements of trace $-$2, all the singular fibers are above the cusps and are of type $I_n$, where $n$ is the width of the cusp. On the other hand, elements of trace $-2$ give rise to $I_n^*$-fibers above the cusps.*]{} For the $\tilde{\Gamma}$’s of Proposition 4 the modular curve $X(\tilde{\Gamma})$ has genus $0$, and, since the sum of the widths of the cusps in Table 2 is always 24, each $E(\tilde{\Gamma})$ is an extremal K3 surface. The space $S_3(\tilde{\Gamma})$ of cusp forms of weight 3 for $\tilde{\Gamma}$ is therefore one-dimensional.
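For the reader's convenience we spell out the standard computation behind the extremality claim; this routine check via the Shioda–Tate formula is not part of the original text. All singular fibers are of type $I_{n_v}$ with the $n_v$ given by Table 2 and $\sum_v n_v = 24$, so the Euler number of $E(\tilde{\Gamma})$ is $24$ and each singular fiber contributes $n_v$ components. The Shioda–Tate formula then gives $$\rho\bigl(E(\tilde{\Gamma})\bigr) \;=\; 2 + \sum_{v}(n_v - 1) + \operatorname{rank}\mathrm{MW} \;=\; 2 + (24 - 6) + \operatorname{rank}\mathrm{MW} \;=\; 20 + \operatorname{rank}\mathrm{MW},$$ and since $\rho \leq h^{1,1} = 20$ for a K3 surface, the Mordell–Weil rank is $0$ and $\rho = 20$, i.e., $E(\tilde{\Gamma})$ is extremal.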
Up to a square, the discriminant of the intersection form on the rank 2 motive of the transcendental cycles $T: = T(E(\tilde{\Gamma})) = H^2(E(\tilde{\Gamma}),{{\mathbb Q}})/{\rm NS} (E(\tilde{\Gamma}))$ is given by $$\label{delta} \delta = \delta_k = -1,-3,-7,-2,-1,-2,-3,-1,-2$$ in cases $k = 1,\dots,9$ respectively. To see this one computes the discriminant of the (known) lattice $NS(E(\tilde{\Gamma}))$ and passes to the orthogonal complement in $H^2(E(\tilde{\Gamma}),{{\mathbb Q}})$. For details, see e. g. Besser-Livné \[BS\]. By \[L2\] it follows that the normalized newform $g_{3,\Gamma}$ generating $S_3(\tilde{\Gamma})$ has CM by ${{\mathbb Q}}(\sqrt{\delta}\,)$. To each of our 9 examples there is a naturally associated moduli problem of classifying (generalized) elliptic curves with a given level structure (Katz and Mazur \[KM\]). Each of these moduli problems refines the respective moduli problem $Y_1(M)$, of classifying elliptic curves with a point of order $M$, where $M$ is as above. Since $M\geq 4$, these moduli problems are all represented by universal families $E(\tilde{\Gamma})/X(\tilde{\Gamma})/{{\mathbb Z}}[1/M]$ (see Katz–Mazur \[KM\]). The geometric fibers are geometrically connected in all these examples, and their compactified fibers over ${{\mathbb C}}$ are the corresponding elliptic modular surfaces above. We shall now compute the $L$-series $L(T,s)$ of the transcendental cycles. By the Eichler–Shimura Isomorphism, this is the parabolic cohomology $$\tilde{H}: = \tilde{H}{\vphantom{H}}^1( X(\tilde{\Gamma})\times_{{{\mathbb Z}}[1/M]}\overline{{{\mathbb Q}}}, R^1(E(\tilde{\Gamma})\rightarrow X(\tilde{\Gamma}))).$$ Moreover, Deligne proved (\[D\]) the Eichler–Shimura congruence relation $$\text{Frob}_p + \text{Frob}'_p = T_p \qquad\text{for any $p\not|M$},$$ where $T_p$ is the $p$-th Hecke operator on $S_3(\tilde{\Gamma})$. This is the same as the $p$-th Fourier coefficient of the normalized newform $g_{3,\Gamma}$. Summarizing, we proved $$L(T(E(\tilde{\Gamma})),s) = L(g_{3,\Gamma},s).$$ Explicit Weierstrass equations for the elliptic fibrations $E(\tilde{\Gamma})/X(\tilde{\Gamma})/{{\mathbb Z}}[1/M]$ will be given in Section 4 below. [**Remarks**]{}: *(1) The list in Table 2 exhausts all of the families of semi-stable elliptic $K3$ surfaces with exactly six singular fibers, which correspond to torsion-free genus zero [*congruence*]{} subgroups of $PSL_2({{\mathbb Z}})$ of index $24$. The $6$-tuples of natural numbers appearing in the third column add up to $24$. Therefore, the number of such $6$-tuples is a priori finite. That this list is complete was proved by Sebbar \[Se\].* \(2) Miranda and Persson \[MP\] studied all possible configurations of $I_n$ fibers on elliptic $K3$ surfaces. In the case of exactly six singular fibers, they obtained $112$ possible configurations including the above nine cases. All these $K3$ surfaces have the maximal possible Picard number $20$. It should be emphasized that exactly nine of these configurations correspond to genus zero congruence subgroups of $PSL_2({{\mathbb Z}})$ of index $24$. \(3) The theory of Miranda and Persson has been extended to prove the uniqueness (over ${{\mathbb C}}$) of K3 surfaces having each of these types of singular fibers by Artal-Bartolo, Tokunaga and Zhang \[BHZ\]. Confer also the article of Shimada and Zhang \[SZ\] for a useful table of extremal elliptic $K3$ surfaces.
The non-rigid examples ====================== Let $Y=E(\tilde{\Gamma})$ be one of the K3 surfaces of the previous Section, and let $g_Y = g_{3,\Gamma}$ denote the corresponding cusp form of weight $3$. If $\tilde{\Gamma}$ contains $\Gamma_1(M)$ (in Table 2 this happens in cases \#3, \#4, \#7, \#8, and \#9), and $M = M_Y$ is the maximal possible, then $M$ is the [*level*]{} of $g_Y$, and the [*newtype*]{} of $g_Y$ is the Dirichlet character $$\label{new} \epsilon = \epsilon_Y,$$ of conductor $M$, so that $g_Y$ is in $S_3(\Gamma_0(M), \epsilon_Y)$. Notice that $\epsilon$ is odd (namely $\epsilon(-1) = -1$). Moreover, since the coefficients of $g_Y$ are integers, $\epsilon$ must be quadratic. (We will determine $\epsilon$ for our examples in Proposition 10 below.) Let $E$ be an elliptic curve. We view the product $Y\times E$ as a family of abelian surfaces over $X(\tilde{\Gamma})$. The fiber $A_t = A_{\Gamma,t}$ over each point $t\in X(\tilde{\Gamma})$ is the product of the fiber $E_{\Gamma,t}$ of $E(\tilde{\Gamma})$ with $E$. Then we have the following [**Theorem 6**]{}: *(1) The product $Y\times E$ has the Hodge numbers $$h^{0,3}(Y\times E)=1,\, h^{1,0}(Y\times E) = 1\,\,\mbox{and}\,\, B_3(Y\times E)=44$$ (so that $Y\times E$ is not a Calabi–Yau threefold).* \(2) The motive $T(Y\times E)=T(Y)\times H^1(E)$ is a submotive of $H^3(Y\times E)$. If $E$ and $Y$ are defined over ${{\mathbb Q}}$, this submotive is modular, in the sense that its $L$-series is associated to a cusp form $g_Y$ on $GL(2,{{\mathbb Q}}(\sqrt{\delta}\,))$. Let $g_E$ be the cusp form of weight $2$ associated to $E$ by Wiles et al. (\[W\]). Let $A(p)$ (respectively $B(p)$) be the $p$th Fourier coefficient of $g_E$ (respectively of $g_Y$), and let $\epsilon_Y$ be the newtype character of $g_Y$ (see Section 3). Then for any good prime $p$, the local Euler factor $L_p(s)$ of the $L$-series $L(T(Y\times E),s) = L(g_E\otimes g_Y,s)$ is $$1 - A(p)B(p)p^{-s} + (B(p)^2+\epsilon_Y(p)pA(p)^2-2p^2 \epsilon_Y(p))p^{1-2s} - A(p)B(p)\epsilon_Y(p)p^{3-3s} + p^{6-4s}.$$ [**Proof**]{}: The statements about the Hodge and Betti numbers follow from the Künneth formula. Since $T(Y)$ is a factor of $H^2(Y)$, it follows again from the Künneth formula that $T(Y)\times H^1(E)$ is a factor of $H^3(Y\times E)$. For the second part, we know that $g_Y$ is a CM form. Hence it is induced from an algebraic Hecke character $\chi = \chi_Y$ of the imaginary quadratic field $F = K_i$. Let $\chi_G$ be the compatible system of $1$-dimensional $\ell$-adic representations of $G_F = \text{Gal}(\overline{{\mathbb Q}}/F)$ corresponding to $\chi$. Then the $2$-dimensional Galois representation associated to $T(Y)$ is ind$_{G_F}^{G_{{\mathbb Q}}}\chi_G$. Hence we obtain the $4$-dimensional Galois representation $$\rho_E \otimes \text{ind}_{G_F}^{G_{{\mathbb Q}}} \chi_G \simeq \text{ind}_{G_F}^{G_{{\mathbb Q}}}(\chi_G \otimes \text{Res}^{G_{{\mathbb Q}}}_{G_F}\rho_E),$$ where $\rho_E$ is the Galois representation associated to $H^1(E)$. The operations of restricting $\rho_E$ to $G_F$ and of twisting by characters have automorphic analogs. Let $\pi_E$ be the automorphic representation associated to $E$.
Then $\pi' = \chi\otimes$ Res$^{{\mathbb Q}}_F\pi_E$ makes sense as an automorphic cuspidal irreducible representation of $GL(2,F)$, and we have the characterizing relationship $$L(\pi',s) = L(\text{ind}_F^{{\mathbb Q}}\pi',s) = L(\pi_E \otimes \text{ind}_F^{{\mathbb Q}}\chi,s) = L(g_E\otimes g_Y,s).$$ For the last part, write the $p$th Euler factors of $g_E$ and $g_Y$ respectively as $$(1-\alpha_p p^{-s})(1-\alpha'_p p^{-s}) = 1-A(p) p^{-s} + p^{1-2s} \qquad\text{and}$$ $$(1-\beta_p p^{-s})(1-\beta'_p p^{-s}) = 1 - B(p) p^{-s} +\epsilon_Y(p) p^{2-2s}.$$ Then the Euler factor $L_p$ is defined as $$L_p(s) =(1-\alpha_p\beta_p p^{-s}) (1-\alpha'_p\beta_p p^{-s}) (1-\alpha_p\beta_p' p^{-s}) (1-\alpha_p'\beta_p' p^{-s}),$$ and the claim follows by a direct calculation. [**Remark**]{}: [*If a K3 surface has Picard number $19$ or $18$, the modularity question for the product $Y\times E$ is still open. However, if the Picard number is 19, one knows at least that the rank $3$ motive $T(Y)$ of the transcendental cycles is self-dual and orthogonal via the cup product. (For explicit examples of K3 surfaces with Picard number 19, see e.g. Besser and Livné \[BL\].) Thus one can use a result of Tate to lift each $\ell$-adic representation to the associated spin cover, which is the multiplicative group of some quaternion algebra over ${{\mathbb Q}}$. If the spin representation is modular (which should always be the case), then it is associated to a cusp form $h$ of weight $2$ on $GL(2)$, so that Symm$^2 h$ realizes $T(Y)$. Let $g_E$ be again the weight $2$ cusp form associated with $E$. It follows, by work of Gelbart and Jacquet, that $T(Y)$ is realized by an automorphic representation on $GL(3,{{\mathbb Q}})$. Hence, by the work of Kim and Shahidi (\[KS\]), $T(Y)\times H^1(E)$ is realized by an automorphic form on $GL(6,{{\mathbb Q}})$. In particular, $L(\text{Symm}^2h \otimes g_E,s)$ has the expected analytic properties.*]{} To construct our promised examples, let $X= X_\Gamma{\rightarrow}X(\tilde{\Gamma})$ be the associated Kummer family, in which we divide each fiber $A_t$ of $Y\times E$ by $\pm 1$ and then blow up the locus of points of order $2$. We now have the following [**Theorem 7 **]{}: [*In the Examples \#1, \#2, \#5, and \#6 of Table 2 the resulting $X$ is a smooth Calabi–Yau threefold. It is non-rigid, and the given fibration $f:X{\rightarrow}X(\tilde{\Gamma})$ is semi-stable, with vanishing (iterated) Kodaira–Spencer mapping. We have $$\deg{f_*\omega_{X/{{\mathbb P}}^1}} = 2 = \frac{1}{2}\,\mbox{deg}\,\Omega^1_{{{\mathbb P}}^1} (\mbox{log}\,S).$$ In other words, $X$ reaches the (stronger) Arakelov–Yau bound.*]{} [**Remark**]{}: [*For the first case in Table 2 ($\tilde{\Gamma} = \Gamma(4)$) this example is indicated in \[STZ\].*]{} [**Proof**]{}: The Examples we chose are those in which $\tilde{\Gamma}$ is a subgroup of $\Gamma(2)$. (This is because otherwise the points of order $2$ of the fibers coincide over the cusps.) Thus the $2$-torsion points (of $E_{\Gamma,t}$ and hence of $A_t$) are distinct for [*all*]{} $t\in X(\tilde{\Gamma})$. It follows that the locus $A[2]$ of $2$-torsion points is smooth, and hence so is the blow-up $X$. We have $H^i(X) = H^i(Y\times E)^{<\pm 1>}$. But $\pm 1$ acts as $\pm 1$ on both the non-trivial holomorphic $1$-form $\omega_1$ of $E$ and on the non-trivial holomorphic $2$-form $\omega_2$ of $Y$. Hence $\omega_1\wedge \omega_2$ descends to a holomorphic $3$-form $\omega_3$ on $X$.
Its divisor can only be supported on the proper transform ${\mathcal{F}}$ of $A[2]$; however $\mathcal{F}$ intersects each fiber $f^{-1}(t)$ in sixteen $(-2)$-curves, which do not contribute to the canonical class, so that $\omega_3$ is indeed nowhere-vanishing. The Künneth formula gives that $$H^1(X) = H^1(Y)^{<\pm 1>} \oplus H^1(E)^{<\pm 1> } = 0, \qquad\text{and}$$ $$H^{2,0}(X) = H^{2,0}(Y \times E)^{<\pm 1>} = H^{2,0}(Y)^{<\pm 1>} = 0.$$ Thus $X$ is indeed a smooth Calabi–Yau threefold. It is non-rigid, because $T(Y\times E)$ descends to $X$ and each of its Hodge pieces $H^{p,q}(T(Y\times E))$ is $1$-dimensional. To compute the monodromy around each singular fiber, we notice that for a generic fiber $X_t = f^{-1}(t)$ the Kummer structure gives a canonical decomposition $$H^2(X_t,{{\mathbb Q}}) = (H^2(A_t,{{\mathbb Q}}) \oplus {{\mathbb Q}}A_t[2])^{<\pm 1>}.$$ Our examples were chosen so that the monodromy action on $A_t[2]= E_{\Gamma,t}[2] \times E_t[2]$ is trivial. Moreover, in the Künneth decomposition $$H^2(A_t) = H^2(E) \oplus (H^1(E)\otimes H^1(E_{\Gamma,t})) \oplus H^2(E_{\Gamma,t})$$ the monodromy action is trivial on the first and last factors, is trivial on $H^1(E)$ and is unipotent on $H^1(E_{\Gamma,t})$ around each singular fiber of $f$ (namely the cusps of $\Gamma$). Thus the monodromy of the fibration $f$ is unipotent as well. To compute the Kodaira–Spencer map $\Theta(f)$ for our $f$ we embed it into the Kodaira–Spencer map for $Y\times E \rightarrow X(\tilde{\Gamma})$. This map is the cup product with the Kodaira–Spencer class $\Theta$ which itself is $\Theta_{Y/X(\tilde{\Gamma})} \otimes \Theta_{X(\tilde{\Gamma}) \times E/X(\tilde{\Gamma})}$. Since the Kodaira–Spencer class of a trivial fibration vanishes, it follows that $\Theta(f) = 0$. Our examples all have $6$ singular fibers. Hence $$\frac{1}{2}\deg \Omega^1_{{{\mathbb P}}^1}(\log S) = \frac{1}{2} \deg{{\mathcal O}}_{{{\mathbb P}}^1}(-2+6) = 2.$$ On the other hand, since $X$ is a Calabi–Yau variety we have $$\omega_{X/{{\mathbb P}}^1} = \omega_X\otimes (f^*\omega_{{{\mathbb P}}^1})^{-1} = (f^*\omega_{{{\mathbb P}}^1})^{-1}.$$ Hence $$\label{deg} f_*\omega_{X/{{\mathbb P}}^1} = f_*f^*(\omega_{{{\mathbb P}}^1})^{-1}= (f_*f^*\omega_{{{\mathbb P}}^1})^{-1}= \omega_{{{\mathbb P}}^1}^{-1},$$ whose degree is $2$ as well, concluding the proof of Theorem 7. [**Remarks 8**]{}: *(1) In the other cases in Table 2 the monodromy on the $2$-torsion points $A_t[2]$ is non-trivial, and the calculation gives that the monodromy of $f$ around the cusps is not unipotent. We know by Remark 3 that the groups \#1, \#5 and \#8 are in the same $PSL(2,{{\mathbb R}})$-conjugacy class. However, this group theoretic property does not guarantee isomorphisms of the corresponding Calabi–Yau threefolds, since the fiber structures are not preserved. Similarly, the groups \#4, \#6 and \#9 are $PSL(2,{{\mathbb R}})$-conjugate, but the geometric structures are different (as the fibers over the cusps are different). The same applies to Examples \#2 and \#7. Therefore, Examples \#4, \#7, \#8 and \#9 are not covered by our examples. Also we do not know how to construct examples corresponding to Example \#3 in Table 2, for which $\tilde{\Gamma} = \Gamma_1(7)$. We also do not know whether our examples are the only ones.* \(2) It is an interesting exercise to compute the full $L$-series of our examples.
The results are as follows: Let $N_+$ (respectively $N_-$) be the motive of algebraic cycles on $Y$ invariant (respectively anti-invariant) by $\pm 1$ acting on the elliptic fibrations of $Y$. Let $n_{\pm}$ be the rank of $N_\pm$. Then $n_+ + n_- = 20$, and if we let $\chi_{\delta^{\prime}}$ denote the quadratic character cut by ${{\mathbb Q}}(\sqrt{\delta^{\prime}}\,)$ (not necessarily the same quadratic field pre-determined by the modular group corresponding to the surface), then $N_+= {{\mathbb Z}}(1)^{n^{\prime}_+}\oplus{{\mathbb Z}}(\chi_{\delta^{\prime}}(1))^{n^{\prime\prime}_+}$ and $N_- = {{\mathbb Z}}(1)^{n^{\prime}_-}\oplus {{\mathbb Z}}(\chi_\delta^{\prime}(1))^{n^{\prime\prime}_-}$. Here $n^{\prime}_{+}$ (resp. $n^{\prime\prime}_+$) denotes the number of cycles defined over ${{\mathbb Q}}$ (resp. ${{\mathbb Q}}(\delta^{\prime})$) and similarly for $n^{\prime}_-$ (resp. $n^{\prime\prime}_-$). We have $n_{\pm}= (n^{\prime}+n^{\prime\prime})_{\pm}$. Then we have $$\begin{aligned} L(H^0,s) & = & L({{\mathbb Z}},s) = \zeta(s) \\ L(H^1,s) & = & 1 \\ L(H^2,s) &= & L(H^2({{{\mathbb P}}}^1\times{{{\mathbb P}}}^1),s)L({{\mathbb Z}}(1),s)L(N_+,s)\\ &=& \zeta(s-1)^{16}\zeta(s-1)^{1+n_+^{\prime}} L({{\mathbb Q}}\otimes\chi_{\delta^{\prime}},s-1)^{n_+^{\prime\prime}}\\ L(H^3,s) & = & L(T(Y)\otimes H^1(E),s)L(N_-\times H^1(E),s) \\ \quad & = & L(g_3\otimes g_2,s)L(E, s-1)^{n_-^{\prime}} \prod_{\delta^{\prime}}L(E\otimes\chi_{{\delta}^{\prime}},s-1)^{n_-^{\prime\prime}}\end{aligned}$$ (The higher cohomologies are determined by Poincaré duality.) [**Lemma 9 **]{}: In cases \#1, \#2, \#5, and \#6 in Table 2, we have $n_+=14$ and $n_-=6$ (so $n_+-n_-=2+6=8$). Furthermore, we have $$\begin{aligned} (n_+^{\prime},n_+^{\prime\prime})=\begin{cases} (12,2)\quad &\mbox{for \#1} \\ (14,0)\quad &\mbox{for \#2} \\ (13,1)\quad &\mbox{for \#5,\, \#6}\end{cases}\end{aligned}$$ and $$\begin{aligned} (n_-^{\prime},n_-^{\prime\prime})=\begin{cases} (3,3)\quad &\mbox{for \#1} \\ (6,0)\quad &\mbox{for \#2} \\ (5,1)\quad &\mbox{for \#5,\,\#6}\end{cases}\end{aligned}$$ In case \#3, $n_+= 11$ and $n_-=9$. (For the last case, confer the article of Hulek and Verrill \[HV\] for more detailed discussion.) For the computations of $n_+$ and $n_-$, confer Proposition 2.4 of Hulek and Verrill \[HV\]. The Proof of Lemma 9 will be given at the end of Section 4. Explicit Formulas ================= We shall now give explicit formulas for the weight $3$ forms $g_Y = g_{3,\Gamma}$ for the examples in Table 2. We will denote the weight $3$ form in the $i$th case by $h_i$. By Remark 3 it suffices to compute $h_i$ for $i=8,7,3,4$, and then $h_8(\tau) = h_5(\tau/2) = h_1(\tau/4) $, $h_7(\tau) = h_2(\tau/2)$, and $h_6(\tau) = h_9(\tau/2) = h_4(\tau/2 -1/2)$. Two kinds of formulas suggest themselves for the $h_i$’s: as a product of $\eta$-functions or as inverse Mellin transforms of the Dirichlet series attached to Hecke characters. The second method is always possible since the $g_Y$’s are CM forms. In \[M\] Martin determined which modular forms on $\Gamma_1(N)$ can be expressed as a product of $\eta$-functions. This applies to cases \#8, \#7, \#3, and \#4 in Table 2. Hence the same is also true for the \#6 and \#9 cases. For cases \#3, \#7, and \#8 the corresponding spaces of cusp forms of weight $3$ are $1$-dimensional, hence the conditions in \[M\] are satisfied and Martin gives the corresponding forms as $h_3= \eta(q)^3\eta(q^7)^3$ and $h_7= \eta(q^2)^3\eta(q^6)^3$. 
The modular form in case \#1 is classically known to be $h_1= \eta(q)^6$, which implies $h_8= \eta(q^4)^6$. Lastly $h_2= \eta(q)^3\eta(q^3)^3$, and $h_5= \eta(q^2)^6$. For \#4, we have $h_4(q)=\eta(q)^2\eta(q^2)\eta(q^4)\eta(q^8)^2$. We will prove the following [**Proposition 10**]{}: *Let $\chi_i$ be the Hecke character for which $L(h_i,s) = L(\chi_i,s)$ (so that the inverse Mellin transform of $L(\chi_i,s)$ is $h_i$). Let $a_p(h_i)$ be the $p$th Fourier coefficient of $h_i$, and let $K_i = {{\mathbb Q}}(\sqrt{\delta_i})$. Then we have the following:* \(1) The infinite component of $\chi_i:{{\mathbb A}}_{K_i}^\times \rightarrow {{\mathbb C}}^\times$ is given by $\chi_{i,\infty}(z) = z^{-2}$. Moreover $\chi_i$ is the unique such Hecke character of conductor $c_i{{\mathcal O}}_{K_i}$, where $c_i = 2,2,1,1 \in {{\mathcal O}}_{K_i}$ for $i = 8,7,3,4$ respectively. \(2) For each rational prime $p$ which is prime to the level of the corresponding $\Gamma$, we have $a_p(h_i) = 0$ if $p$ is inert in $K_i$. Otherwise, there are $a$, $b$, which are integers in case \#8 and half integers in the three other cases, so that $p = a^2+d_i b^2$, where $d_i = 4, 3, 7, 2$ for $i = 8,7,3,4$ respectively. Then $a$ and $b$ are unique up to signs, and $a_p(h_i) = 2(a^2-d_ib^2)$. \(3) The newtype of $h_i$ (see (\[new\])) is the character defining $K_i$, namely $p\mapsto \left(\frac{\delta_i}{p}\right)$. [**Proof: **]{} See e.g. \[L2\] for the generalities (in particular regarding the $\infty$-component of $\chi_i$), as well as the following formula: the conductors of $\chi_i$ and of $h_i$ are related by $$\text{cond}(h_i) = \text{Nm}^{K_i}_{{{\mathbb Q}}} \text{cond}(\chi_i) \text{Disc}(K_i).$$ Since the level of $h_i$ is respectively $M=16$, $12$, $7$, and $8$ in cases \#8, \#7, \#3, and \#4 of Table 2, we get the asserted value of the $c_i$’s, and since all the fields $K_i$ involved have class number $1$ we have $$\label{dec} {{\mathbb A}}_{K_i}^\times = (K_i^\times \times U_i \times {{\mathbb C}}^\times)/\mu(K_i)$$ where $U_i$ is the maximal compact subgroup $\hat{{{\mathcal O}}}\vphantom{{{\mathcal O}}}_{K_i}^\times$ of the finite idèles of $K_i$, and $\mu(K_i)$ is the group of roots of unity of $K_i$, acting diagonally (we view ${{\mathbb C}}$ as the infinite completion of $K_i$). The existence and the uniqueness of $\chi_i$ are then verified in each case by a straightforward calculation (compare \[L1\]). For the second part, the vanishing of $a_p(h_i)$ for $p$ inert in $K_i$ is a general property of CM forms. For a split $p$ (prime to $\text{cond}(h_i)$), write $p = a^2 + d_i b^2 = \text{Nm}^{K_i}_{{{\mathbb Q}}} \pi$. Here $\pi$ is a prime element of ${{\mathcal O}}_{K_i}$, so $a$ and $b$ are a priori half integers. We verify that, up to multiplying $\pi$ by a unit, we can guarantee that $a$ and $b$ are integers for $i=8$. In all cases, the $a$’s and the $b$’s are unique up to signs. Next one verifies that $\pi\equiv \pm 1 \pmod{c_i{{\mathcal O}}_{K_i}}$. Let $\wp$ be the ideal generated by $\pi$ (notice that changing the sign of $b$ replaces $\wp$ by its conjugate). Let ${\operatorname{tr}}$ denote the trace from $K_i$ to ${{\mathbb Q}}$. By the general theory, we have that $$a_p(h_i) = {\operatorname{tr}}\chi_\wp(\pi) = {\operatorname{tr}}\chi_\infty(\pi)^{-1} = {\operatorname{tr}}\pi^2 = 2(a^2-d_ib^2),$$ where the second equality holds since the finite components of $\chi$ other than the one at $\wp$ take the value $1$ at $\pi$.
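As a quick numerical illustration of part (2), consider case \#8, where $K_8 = {{\mathbb Q}}(\sqrt{-1})$ and $d_8 = 4$. The expansion $$h_8 = \eta(q^4)^6 = q - 6q^5 + 9q^9 + 10q^{13} - 30q^{17} + \cdots$$ gives $a_5 = -6$, $a_{13} = 10$ and $a_{17} = -30$, in agreement with $2(a^2-4b^2)$ for $5 = 1^2+4\cdot 1^2$, $13 = 3^2+4\cdot 1^2$ and $17 = 1^2+4\cdot 2^2$, while $a_p = 0$ for the inert primes $p = 3, 7, 11 \equiv 3 \pmod 4$.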
For the third part we notice that the restriction of $\chi_i$ to $U_i$ in the decomposition (\[dec\]) above gives a Dirichlet character $\epsilon'_i$ on ${{\mathcal O}}_{K_i}$ of conductor $c_i{{\mathcal O}}_{K_i}$. The newtype Dirichlet character $\epsilon_i$ (on ${{\mathbb Z}}$) is then the product of $\chi_{K_i}$ by the restriction of $\epsilon'_i$ to ${{\mathbb Z}}$. However, the conductors $c_i{{\mathcal O}}_{K_i}$ of $\chi_i$ are all $1$ or $2$, and the only character of ${{\mathbb Z}}$ of conductor $1$ or $2$ is trivial. Hence the newtype character of $h_i$ is $\chi_{K_i}$, concluding the proof of Proposition 10. [**Defining equations for extremal K3 surfaces**]{}: We now discuss how to determine defining equations for the extremal K3 surfaces in Theorem 7. This problem has been getting considerable attention lately; for instance, Shioda \[Sh3\] and (independently and by a different method) Y. Iron \[I\] have determined a defining equation for the semi-stable elliptic K3 surface with singular fibers of type $I_1,I_1,I_1,I_1,I_1,I_{19}$ whose existence was established in Miranda and Persson \[MP\] (this is given as $\#1$ in their list). As we shall see, our examples can be determined by a more classical method. There are several cases where defining equations can be found in the literature; e.g., Example \#1 in Table 2 is the classical Jacobi quartic corresponding to $\Gamma(4)$, $$\label{sig} y^2=(1-\sigma^2x^2)(1-x^2\sigma^{-2})$$ where $\sigma$ is a parameter for $X(4)$. A Legendre form is given by $$\label{leg} Y^2=X(X-1)(X-\lambda)\quad\text{with $\lambda=\frac{1}{4}(\sigma+\sigma^{-1})^2$}$$ (see e.g. Shioda \[Sh2\]). One checks that the singular fibers, all of type $I_4$, occur at the cusps $\sigma = 0,\infty,\pm 1,\pm\sqrt{-1}$. Moreover the $j$-invariant is given by $$j = 2^4\frac{(1+14 \sigma^4 + \sigma^8)^3} {\sigma^4(\sigma^4-1)^4}.$$ For the remaining cases, we can find defining equations using a method due to Tate. Since we could not find Tate’s method in the literature, we sketch it here. (Actually, we found out after completing the paper that there are several papers dealing with this exact issue, e.g., Kubert \[K\], whose arguments were reproduced in Howe–Leprévost–Poonen \[HLP\]. Also, the paper of Billing and Mahler \[BM\] dealt with the same problem.) [**A method of Tate to calculate $E_1(N)$** ]{}: Let $Y_1(N)$ be the modular curve, and let $E_1^0(N)\to Y_1(N)$, with $N\geq 4$, be the universal family of elliptic curves having a point (or section) $P=P_N$ of order $N$. Tate’s method gives a defining equation for this family over ${{\mathbb Z}}[1/N]$. We start with the general Weierstrass equation: $$E:\quad y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6.$$ Let $P=(x,y)\in E$ be a rational point and assume that $P, 2P, 3P\neq 0$. Changing coordinates, we may put $P$ at $(x,y)=(0,0)$. So we may assume $a_6=0$. Since $P$ does not have order $2$, the tangent line at $(0,0)$ cannot be the $y$-axis (i.e., $x=0$), which implies that $a_3$ cannot vanish. We can therefore change coordinates again to obtain $a_4=0$ and the equation takes the form: $y^2+a_1xy+a_3y=x^3+a_2x^2$. By making a dilation, we furthermore get $a_2=a_3$. Therefore, $E$ has a Weierstrass equation of the form: $$(*)\quad y^2+axy+by=x^3+bx^2\quad \text{with $b\neq 0$}.$$ To get a defining equation for $E_1^0(N)$, we need to find the relations on $a$ and $b$ that arise if $P$ has order $N$.
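To make the last two normalizations explicit (a routine verification, in the notation just introduced): starting from $y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x$ with $a_3\neq 0$, the shear $y\mapsto y+\frac{a_4}{a_3}x$ removes the coefficient $a_4$ (at the cost of modifying $a_1$ and $a_2$). In the resulting equation $y^2+a_1xy+a_3y=x^3+a_2x^2$ one has $a_2\neq 0$: otherwise the tangent line $y=0$ at $P$ would meet $E$ with multiplicity $3$ there, making $P$ an inflection point and forcing $3P=0$. Finally, the dilation $(x,y)\mapsto (u^2x,u^3y)$ replaces the coefficients by $$(a_1,a_2,a_3)\longmapsto \left(\frac{a_1}{u},\ \frac{a_2}{u^2},\ \frac{a_3}{u^3}\right),$$ so the choice $u=a_3/a_2$ gives $a_2=a_3$, as claimed.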
The coordinates of $P, -P, 2P, -2P$ are easily checked to be $$P=(0,0),\, -P=(0,-b),\, 2P=(-b, (a-1)b),\, -2P=(-b,0).$$ We will also determine the coordinates of $3P$ and of $4P$. At $-2P$ the tangent line is: $$y=\frac{b}{1-a}(x+b).$$ Substituting this into the equation (\*) we get $$4P=\left(\frac{b}{1-a}+\frac{b^2}{(1-a)^2},\quad \frac{b^2}{1-a} (1+\frac{b}{(1-a)^2} + \frac{1}{1-a})\right).$$ Likewise, the line $x+y+b=0$ intersects $E$ at $-P$, $-2P$ and $3P$, giving $3P=(1-a,a-1-b)$. We will give Weierstrass equations for $E_1(N)$ when $N=4$, $6$, $8$, or $7$: $\boxed{E_1(4)\,\,}$ Here the condition $2P=-2P$ forces $(a-1)b=0$, hence $a=1$, giving the equation $$y^2+xy+ty = x^3 + tx^2$$ (we replaced $b$ by $t$). Here $X_1(4)$ is the $t$-line. By a direct calculation or from Shioda’s result (see also Remark 5), we see that the singular fibers are over the three cusps, of types $I_1^*$, $I_1$, and $I_4$. $\boxed{E_1(6)\,\,}$ The equation $x(4P)=x(-2P)$ readily gives $b= -(a-1)(a-2)$, giving us the equation for $E_1(6)$: $$E_1(6):\quad y^2+axy-(a-1)(a-2)y = x^3-(a-1)(a-2)x^2.$$ Here $a$ is a parameter (Hauptmodul) on $X_1(6)$. There are four cusps, and as before one gets from Remark 5 or by a direct computation that the types are $I_1$, $I_2$, $I_3$, and $I_6$, matching the widths of the cusps given in the third column of Table 2. $\boxed{E_1(8)\,\,}$ The equation $y(4P)=y(-4P)$ is equivalent to $ax(4P)+b = -2y(4P)$. Expanding, cancelling $b$, and clearing denominators gives $$ab(1-a) + (1-a)^2 = -2b\left((1-a)(2-a)+b\right).$$ Substituting $b=k(a-1)$ gives $$(a,b) = \left(\frac{-2k^2+4k-1}{k}, \quad -2k^2+3k-1\right).$$ Thus $k$ is a parameter on $X_1(8)$ and $E_1(8)$ (Example \#4) is given by $$\label{E18} y^2+\frac{-2k^2+4k-1}{k}xy +(-2k^2+3k-1)y = x^3+(-2k^2+3k-1)x^2.$$ The fibers above the cusps are found as before to have types $I_1$, $I_1$, $I_2$, $I_4$, $I_8$, and $I_8$. $\boxed{E_1(7)\,\,}$ In a similar way one gets the equation for $E_1(7)$ (Example \#3); the result is (see for instance Silverman \[S\], Example 13.4) $$y^2+(1+t-t^2)xy+(t^2-t^3)y=x^3+(t^2-t^3)x^2.$$ Three singular fibers have type $I_1$, and three have type $I_7$. $\boxed{E(6,2)\,\,}$ Returning to our cases, we now handle Example \#2 in Table 2, corresponding to $\tilde{\Gamma} = \Gamma_0(3)\cap\Gamma(2)$, whose associated modular curve is $$X(6,2) = X_1(6) \times_{X_1(2)} X(2),$$ by pulling $E_1(6)$ back to $X(6,2)$. To do this we cannot use Tate’s method directly, since the moduli problems associated to $Y(2)$ and to $Y_1(2)$ are not representable. However $X(2)$ is the Legendre $\lambda$-line, and any elliptic curve with $\Gamma(2)$-level structure can always be written in Legendre form $$y^2 = x(x-1)(x-\lambda) = x(x^2+(-1-\lambda)x+\lambda).$$ Likewise, given an elliptic curve $E$ with a point $P$ of order $2$ (over ${{\mathbb Z}}[\frac{1}{2}]$), write $E$ in Weierstrass form $y^2= x(x^2+cx+d)$ where $P=(0,0)$. This form is unique up to homothety, and hence $c^2/d$ is a parameter on $X_0(2)$. The natural map $X(2) \rightarrow X_0(2)$ is therefore given by $u = \frac{(1+\lambda)^2}{\lambda}$.
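Before equating parameters, let us sketch where the expression for the $X_1(6)$-side parameter in the next display comes from (the same formula is recorded again in the Remark on $X_0(12)$ below). Completing the square in the equation for $E_1(6)$ and moving the $2$-torsion point $3P$, whose $x$-coordinate is $1-a$, to the origin puts the curve in the form $$w^2 = v\Bigl(v^2+\frac{4-3a^2}{4}\,v+(a-1)^3\Bigr),$$ so that on the $X_1(6)$ side the parameter $c^2/d$ of $X_0(2)$ becomes $\frac{(4-3a^2)^2}{16(a-1)^3}$.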
Hence the fibred product for $E(6,2)$ above is given by equating $$\frac{(1+\lambda)^2}{\lambda} = \frac{(4-3a^2)^2}{16(a-1)^3}.$$ A computation gives that $$\xi = \frac{32(a-1)^3}{(4-3a^2)(a-2)^2}\left( \lambda + 1-\frac{(4-3a^2)^2}{32(a-1)^3}\right)$$ is a parameter on $X(6,2)$ such that the map to $X_1(6)$ is given by $$a=\frac{2\xi^2-10}{\xi^2-9}.$$ Under a base change of ramification index $b$ an $I_a$ fiber pulls back to an $I_{ab}$ fiber (and an $I_a^*$ fiber pulls back to an $I_{ab}^*$ fiber if $a\geq 1$, and $b$ is odd). From this, or again by Remark 5, the fiber types are as expected from Table 2: three $I_6$ and three $I_2$ fibers. [**Remark**]{}: [*Even though we do not need the following example here, we mention that a parameter for $X_0(12)$ can be computed in the same way via the natural map $X_0(12)\rightarrow X_0(6) = X_1(6)$. By pull-back this will give a family of elliptic curves over $X_0(12)$, which will turn out to be the $\Gamma_0(12)$ case (Example \#7) in Table 2. For this we let $t$ be the parameter for $X_0(4) = X_1(4)$ as before, and let $a$ be the parameter from before on $X_0(6)$. Then $X_0(12)$ is the fibred product $$X_1(4)\times_{X_1(2)} X_1(6).$$ To compute the natural “forgetting” maps of $X_1(4)$ and of $X_1(6)$ to $X_1(2)$ we again bring both $E_1(4)$ and $E_1(6)$ to the form $y^2=x(x^2+ax+b)$: $$E_1(4): w^2 = v(v^2+(\frac{1}{4}-2b)v+ b^2)$$ (here we completed the square and set $w=y+(x+b)/2$ and $v=x+b$). $$E_1(6): w^2 = v(v^2+\frac{4-3a^2} {4}v+(a-1)^3).$$ By the above, the fibred product is given by equating $$\frac{(\frac{1}{4}-2b)^2}{b^2} = \frac{(4-3a^2)^2}{16(a-1)^3}.$$ Thus $a-1$ is a square, say $a=u^2+1$, where $u$ is a parameter on $X_0(12)$ and the pulled-back family is given by $$y^2+(u^2+1)xy-u^2(u^2-1)y = x^3-u^2(u^2-1)x^2.$$*]{} One again routinely verifies that the bad fibers are as expected. (Notice, however, that the pull-back of the universal family from $X_1(4)$ has $I_a^*$ fibers!) $\boxed{E(8,2)\,\,}$ Next we handle Example \#5 in Table 2, whose associated modular curve is $X(8,2)$. As was explained in Remark 3, we can take as a parameter the same $\sigma$ as for $X(4)$ above. However, to get the right family, we will divide the universal elliptic curve by a section $s$ of order 2. This changes the type of the singular fiber $E(4)_c$ at a cusp $c$ from $I_4$ to $I_8$ if $s$ meets $E(4)_c$ at the same component as the identity section, and to type $I_2$ otherwise. The singular fibers obtained in this way are as expected from Table 2. To get the new family, recall that if an elliptic curve is given in Weierstrass form $$y^2 = x(x^2+ax+b),$$ then the quotient by the two-torsion point $(0,0)$ is given by the similar equation $$\label{two} y^2 = x(x^2-2ax+a^2-4b).$$ In particular, for a curve given in Legendre form $y^2=x(x-1)(x-\lambda)$ the resulting quotient is $y^2 = x(x^2+2(1+\lambda)x+(1-\lambda)^2)$. Applying this to the Legendre form (\[leg\]) of the Jacobi quartic gives the quotient family in the form $$y^2 = x(x^2+ (2+\frac{1}{2}( \sigma+\sigma^{-1})^2)x + \frac{1}{16}(\sigma-\sigma^{-1})^4).$$ As before one sees that the singular fibers have types $I_8$, $I_8$, $I_2$, $I_2$, $I_2$, and $I_2$. $\boxed{E(8;4,1,2)\,\,}$ To handle Example \#6 in Table 2 we proceed in the same way, dividing the family $E_1(8)$ by its section of order $2$. The cusps of $\Gamma_1(8)$ are $N\backslash G/N$, where $N$ is the upper unipotent subgroup of $G=SL(2,{{\mathbb Z}}/8{{\mathbb Z}})/<\pm{\operatorname{Id}}>$.
Explicitly the cusps are $\begin{bmatrix} 0\\1 \end{bmatrix} $, $\begin{bmatrix} 0\\3 \end{bmatrix} $, $\begin{bmatrix} 1\\2 \end{bmatrix} $, $\begin{bmatrix} 1\\4 \end{bmatrix} $, $\begin{bmatrix} 1\\0 \end{bmatrix} $, and $\begin{bmatrix} 3\\0 \end{bmatrix} $. The corresponding widths are 1, 1, 2, 4, 8, and 8 respectively. We identify the torsion sections of the universal family $E_1(8){\rightarrow}X_1(8)$ with the subgroup $\begin{pmatrix} *\\0 \end{pmatrix} $ of $({{\mathbb Z}}/8{{\mathbb Z}})^2$. Let $s = \begin{pmatrix} 4\\0 \end{pmatrix} $ be the section of order $2$ of this elliptic fibration. Then $s$ belongs to the connected component of the $0$-section at a cusp $\begin{bmatrix} a\\b \end{bmatrix} $ if and only if it is in the subgroup generated by $\begin{pmatrix} a\\b \end{pmatrix} $. This happens at the first four cusps above but not at the last two. The Kodaira types of the fibers of the quotient family $E_1(8)/(s){\rightarrow}X_1(8)$ (which is our $E(\Gamma)$ of Example \#6) are accordingly multiplied by $2$ at the first four cusps, and divided by $2$ at the last two cusps. This results in fiber types $I_8$, $I_4$, $I_4$, $I_4$, $I_2$, and $I_2$, in agreement with Table 2. Starting with our equation (\[E18\]) for $E_1(8)$ we set $$Y = 8k^3\, y \left(y+ \frac{-2k^2+4k-1}{2k}x + \frac{-2k^2+3k-1}{2} \right) \qquad\text{and}\qquad X=4k^2(x-k+k^2).$$ A straightforward computation then gives for $E_1(8)$ the form $$Y^2 = X(X^2 +(8k^4-16k^3+16k^2-8k+1)X + (2k(k-1))^4).$$ By formula (\[two\]), the quotient family $E_1(8)/(s)$ is given by $$y^2 = x(x^2 -2(8k^4-16k^3+16k^2-8k+1)x + (8k^2-8k+1)(2k-1)^4).$$ [**Proof of Lemma 9**]{}: We will use the arguments due to Klaus Hulek and Matthias Schütt for the calculation of $n_+$ and $n_-$. The Galois action on $NS(Y)$ is computed as follows. Tensoring with ${{\mathbb Q}}$, $NS(Y)\otimes{{\mathbb Q}}$ has for basis the (classes of the) general fiber, the $0$-section and those components of the singular fibers which do not meet the identity component (the section). The Galois action clearly preserves the fiber class and the $0$-section. The action of $\pm 1$ on each fiber of type $I_n$ is given as follows. A fiber $I_n$ contributes $n-1$ to the cohomology. If we enumerate the components $e_1, e_2,\cdots, e_{n-1}$ cyclically, then $e_j$ will be sent to $e_{-j}$. If $n$ is even, the $n/2$ cycles $e_{n/2}$ and $e_j+e_{-j}\, (1\leq j< n/2)$ are fixed and contribute to $n_{+}$, while the cycles $e_j-e_{-j}\,(1\leq j< n/2)$ contribute to $n_{-}$. If $n$ is odd, $(n-1)/2$ cycles are fixed, contributing to $n_+$, and equally $(n-1)/2$ cycles contribute to $n_-$. Further, the fields of definition of the components $e_j$ will determine $n_{\pm}^{\prime}$ and $n_{\pm}^{\prime\prime}$. In Example \#1, the cusps (singularities) are $t=0, \pm 1,\,\infty$ and $\pm\sqrt{-1}$. Put $i=\sqrt{-1}$. Then $N_+$ is spanned by the zero-section, the fiber and the following divisors: $e_{t,2},\, e_{t,1}+e_{t,3}$ where $t=0, \pm 1, \pm i,\infty$. When $t\in{{\mathbb Q}}$ or $t=\infty$, these divisors are defined over ${{\mathbb Q}}$, giving $10$ divisors out of $14$. Over $t=\pm i$, we see that $e_{i,2}+e_{-i,2}$ and $(e_{i,1}+e_{i,3})+(e_{-i,1}+e_{-i,3})$ are fixed by complex conjugation, so that these are defined over ${{\mathbb Q}}$, contributing to $n^{\prime}_+$. Hence, as Galois representations, we get $$N_+={{\mathbb Z}}(1)^{12}\oplus {{\mathbb Z}}(\chi_{i}(1))^2$$ so that $n^{\prime}_+=12$, and $n^{\prime\prime}_+=2$.
On the other hand, the space $N_-$ is simply spanned by $e_{t,1}-e_{t,3}$ for the six cusps $t$. Over $t=\pm 1$, both are defined over ${{\mathbb Q}}$, contributing to $n^{\prime}_-$. Over $t=0,\infty$, $e_{t,1}$ and $e_{t,3}$ are conjugate, so the difference is not fixed under complex conjugation, and hence it contributes to $n^{\prime\prime}_-$. Over $t=\pm i$, we have two divisors $(e_{i,1}-e_{i,3})\pm (e_{-i,1}-e_{-i,3})$. One of these is fixed by complex conjugation, while the other is not. Thus, as a Galois representation, $$N_-={{\mathbb Z}}(1)^3\oplus {{\mathbb Z}}(\chi_{i}(1))^3$$ and reading off the ranks, we get $n^{\prime}_-=3$ and $n^{\prime\prime}_-=3$. In Example \#2, the cusps are all defined over ${{\mathbb Q}}$, and the torsion sections meet all the components of the fibers (this can be seen either from the moduli viewpoint or from the equations in the previous section). Then $N_+$ is spanned by the zero-section, the fiber and the divisors $e_{t,1}+e_{t,5},\, e_{t,2}+e_{t,4}$ and $e_{t,3}$ for $I_6$ type singular fibers and $e_{t,1}$ for $I_2$ type singular fibers. Thus we compute that $n_+=3+3+3+1+1+1+1+1=14$. The space $N_-$ is spanned by $e_{t,1}-e_{t,5},\, e_{t,2}-e_{t,4}$ for $I_6$ type singular fibers. Thus we have $n_-=2+2+2+0+0+0=6.$ Since all these divisors are defined over ${{\mathbb Q}}$, all these algebraic cycles are also defined over ${{\mathbb Q}}$, and we have $$N_+={{\mathbb Z}}(1)^{14}\quad\mbox{and}\quad N_-={{\mathbb Z}}(1)^6.$$ (In particular, this implies that $n_{\pm}^{\prime\prime}=0$ in this case.) For the other two cases, \#5 and \#6, we use the above argument to compute $n_{\pm}$. In fact, for Example \#5 (resp. \#6), $$n_+=4+4+1+1+1+1+1+1=14 \quad(\mbox{resp.}\,4+2+2+2+1+1+1+1=14),$$ and $$n_-=3+3=6\quad\mbox{for both cases.}$$ Thus $n_+ = 14$ and $n_-=6$. However, for these examples, not all algebraic cycles are defined over ${{\mathbb Q}}$. In fact, we use the fact that each of these $K3$ surfaces is realized as a quadratic base change of a rational modular elliptic surface (see Top and Yui \[TY\] for a detailed argument). In the case of Example \#5, this surface is obtained as a pull-back of a rational elliptic modular surface with $4$ singular fibers of type $I_4$ over $\infty$, $I_4$ over $0$, $I_2$ over $1$ and $I_2$ over $-1$. All cusps of the pull-back over $\infty,\, 0$ and $1$ are defined over ${{\mathbb Q}}$. However, the two cusps of the pull-back above $-1$ are defined only over ${{\mathbb Q}}(\sqrt{-1})$. Put $\sqrt{-1}=i$. Then the divisor $e_{i,1}+e_{-i,1}$ is invariant under complex conjugation, while the divisor $e_{i,1}-e_{-i,1}$ is not. Thus, we get $$N_+={{\mathbb Z}}(1)^{13}\oplus{{\mathbb Z}}(\chi_i(1))$$ so that $(n_+^{\prime}, n_+^{\prime\prime})=(13,1).$ On the other hand, all algebraic cycles spanning $N_-$ are defined over ${{\mathbb Q}}$ so that $N_-={{\mathbb Z}}(1)^6$ and $n_-=n_-^{\prime}=6$. For Example \#6, the cusps are $t=0,\, \infty,\, \pm 1$ and $\pm\sqrt{2}$. But the pull-backs of the components $e_{0,1}$ and $e_{0,3}$ are conjugate over ${{\mathbb Q}}(i)$. This gives $$N_+={{\mathbb Z}}(1)^{13}\oplus {{\mathbb Z}}(\chi_{2}(1))$$ so that $n_+^{\prime}=13$ and $n_+^{\prime\prime}=1$. While $$N_-={{\mathbb Z}}(1)^5\oplus {{\mathbb Z}}(\chi_i(1))$$ so that $n_-^{\prime}=5$ and $n_-^{\prime\prime}=1$. [**Remark.**]{} [*For \#3, the singular fibers are of type $I_7$ and $I_1$ ($3$ copies each). Hulek and Verrill \[HV\] compute that $n_+=11$ and $n_-=9$, and show that all cycles are defined over ${{\mathbb Q}}$.
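(As a quick consistency check, the counting rule described in the proof of Lemma 9 gives this directly: each $I_7$ fiber contributes $(7-1)/2=3$ cycles to each of $N_+$ and $N_-$, the $I_1$ fibers contribute nothing, and together with the zero-section and the general fiber one gets $n_+=3\cdot 3+2=11$ and $n_-=3\cdot 3=9$.)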
This example does not admit semi-stable fibrations, but still gives rise to a non-rigid Calabi–Yau threefold defined over ${{\mathbb Q}}$, and one can still look into the modularity question for the $L$-series associated to the third cohomology group. This is exactly what is done in the article of Hulek and Verrill \[HV\] supplementing this paper.*]{} [**Acknowledgments.**]{} We thank the Fields Institute at Toronto, Canada, for its hospitality. Most of the work described above was done while the authors were members and participants in the Automorphic Forms Thematic Program there during the spring term of 2003. The second author thanks B. van Geemen, I. Nakamura, A. Sebbar, W. Stein and K. Ueno for helpful conversations and correspondences. After the article was posted on arXiv on April 29, 2003, we received feedback from several colleagues, F. Beukers, J. Stienstra and J. Top. We thank them for their interest and comments. Finally, the authors are indebted to Matthias Schütt and Klaus Hulek for their numerous correspondences to clarify the discussions in Remarks 8 and in the proof of Lemma 9, in particular, the calculation of $n_+$ and $n_-$ for our examples.
\[BTZ\] Bartolo, E.A., Tokunaga, H., and Zhang, D.-Q., [*Miranda–Persson’s problem on extremal elliptic K3 surfaces*]{}, Pacific J. Math. [**202**]{} (2002), 37–72.
\[B\] Beauville, A., [*Les familles stables de courbes elliptiques sur ${{\mathbb P}}^1$ admettant quatre fibres singulières*]{}, C. R. Acad. Sc. Paris, [**294**]{} (1982), 657–660.
\[BL\] Besser, A., and Livné, R., [*Universal Kummer families over Shimura curves*]{}, in preparation.
\[BM\] Billing, G., and Mahler, K., [*On exceptional points on cubic curves*]{}, J. London Math. Soc. [**15**]{} (1940), 32–43.
\[D\] Deligne, P., [*Formes modulaires et représentations $\ell$–adiques*]{}, Sém. Bourbaki, fév 1969, exp. [**355**]{}, 1–33, Berlin-Heidelberg-New York, Springer (1977).
\[HLP\] Howe, E.W., Leprévost, F., and Poonen, B., [*Large torsion subgroups of split Jacobians of curves of genus two or three*]{}, Forum Math. [**12**]{} (2000), no. 3, 315–364.
\[HV\] Hulek, K. and Verrill, H., [*On the motive of Kummer varieties associated to $\Gamma_1(7)$*]{} (supplement to the paper [*The modularity of certain non-rigid Calabi–Yau threefolds*]{} by R. Livné and N. Yui).
\[I\] Iron, Y., [*An explicit presentation of a K3 surface that realizes $[1,1,1,1,1,19]$*]{}, MSc thesis, Hebrew University of Jerusalem, 2003.
\[JZ\] Jost, J. and Zuo, K., [*Arakelov type inequalities for Hodge bundles over algebraic varieties*]{}, J. Algebraic Geometry [**11**]{} (2002), 535–546.
\[KM\] Katz, N., and Mazur, B., [*Arithmetic Moduli of Elliptic Curves*]{}, Annals of Math. Studies, [**108**]{}, Princeton University Press, 1985.
\[KS\] Kim, Henry H., and Shahidi, F., [*Functorial products of $GL_2\times GL_3$ and the symmetric cube for $GL_2$*]{}. With an appendix by C. Bushnell and G. Henniart, Ann. of Math. (2) [**155**]{} (2002), no. 3, 837–893.
\[K\] Kubert, D., [*Universal bounds on the torsion of elliptic curves*]{}, Proc. London Math. Soc. (3) [**33**]{} (1976), no. 2, 193–237.
\[L1\] Livné, R., [*On the conductor of mod $\ell$ Galois representations coming from modular forms*]{}, J. Number Theory [**31**]{} (1989), no. 2, 133–141.
\[L2\] Livné, R., [*Motivic orthogonal two-dimensional representations of $\mbox{Gal}(\bar{{{\mathbb Q}}}/{{\mathbb Q}})$*]{}, Israel J. Math. [**92**]{} (1995), no. 1-3, 149–156.
\[M\] Martin, Y., [*Multiplicative $\eta$-quotients*]{}, Trans. Amer. Math. Soc. (1996), 4825–4856.
\[MP\] Miranda, R., and Persson, U., [*Configurations of $I_n$ fibers on elliptic $K3$ surfaces*]{}, Math. Z. [**201**]{} (1989), 339–361.
\[S\] Schoen, C., [*On fiber products of rational elliptic surfaces with section*]{}, Math. Z. [**197**]{} (1988), 177–199.
\[Se\] Sebbar, A., [*Classification of torsion-free genus zero congruence groups*]{}, Proc. Amer. Math. Soc. [**129**]{}, No. 9 (2001), 2517–2527.
\[SY\] Saito, M.-H., and Yui, N., [*The modularity conjecture for rigid Calabi–Yau threefolds over ${{\mathbb Q}}$*]{}, Kyoto J. Math. [**41**]{}, no. 2 (2001), 403–419.
\[SZ\] Shimada, I., and Zhang, De-Qi, [*Classification of extremal elliptic $K3$ surfaces and fundamental groups of open $K3$ surfaces*]{}, Nagoya Math. J. [**161**]{} (2001), 23–54.
\[Sh1\] Shioda, T., [*Elliptic modular surfaces*]{}, J. Math. Soc. Japan [**24**]{}, no. 1 (1972), 20–59.
\[Sh2\] Shioda, T., [*On rational points of the generic elliptic curve with level $N$ structure over the field of modular functions of level $N$*]{}, J. Math. Soc. Japan [**25**]{} (1973), 144–157.
\[Sh3\] Shioda, T., [*Discrete moduli and integral points*]{}, in preparation.
\[S\] Silverman, J., [*The Arithmetic of Elliptic Curves*]{}, Graduate Texts in Mathematics [**106**]{}, Springer–Verlag, New York 1986.
\[STZ\] Sun, X., Tan, S.-L. and Zuo, K., [*Families of $K3$ surfaces over curves satisfying the equality of Arakelov–Yau’s type and modularity*]{}, Math. Res. Lett. [**10**]{} (2003), no. 2-3, 323–342.
\[TY\] Top, J., and Yui, N., [*Explicit equations of some elliptic modular surfaces*]{}, Rocky Mountain J. of Math. (to appear).
\[W\] Wiles, A., [*Modular elliptic curves and Fermat’s last theorem*]{}, Ann. of Math. [**141**]{} (1995), 443–551. Taylor, R., and Wiles, A., [*Ring-theoretic properties of certain Hecke algebras*]{}, Ann. of Math. [**141**]{} (1995), 553–572.
\[Y\] Yui, N., [*Update on the modularity of Calabi–Yau varieties*]{}, with appendix by Verrill, H., [*The $L$-series of rigid Calabi–Yau threefolds from fiber products of elliptic curves*]{}, in [*Calabi–Yau Varieties and Mirror Symmetry*]{}, Fields Inst. Commun. [**38**]{} (2001), 307–362, Amer. Math. Soc.
[^1]: Ron Livné was partially supported by an Israel-USA BSF Research Grant.
[^2]: Noriko Yui was partially supported by a Discovery Grant from NSERC, Canada.
--- abstract: 'We provide complete characterizations, on Banach spaces with cotype 2, of those linear operators which happen to be weakly mixing or strongly mixing transformations with respect to some nondegenerate Gaussian measure. These characterizations involve two families of small subsets of the circle: the countable sets, and the so-called *sets of uniqueness* for Fourier-Stieltjes series. The most interesting part, the sufficient conditions for weak and strong mixing, is valid on an arbitrary (complex, separable) Fréchet space.' address: - 'Clermont Université, Université Blaise Pascal, Laboratoire de Mathématiques, BP 10448, F-63000 Clermont-Ferrand - CNRS, UMR 6620, Laboratoire de Mathématiques, F-63177 Aubière.' - 'Laboratoire de Mathématiques de Lens, Université d’Artois, Rue Jean Souvraz S. P. 18, 62307 Lens.' author: - Frédéric Bayart - Étienne Matheron title: | Mixing operators\ and small subsets of the circle --- Introduction ============ A basic problem in topological dynamics is to determine whether a given continuous map $T:X\to X$ acting on a topological space $X$ admits an ergodic probability measure. One may also ask for stronger ergodicity properties such as weak mixing or strong mixing, and put additional constraints on the measure $\mu$, for example that $\mu$ should have no discrete part, or that it should belong to some natural class of measures related to the structure of the underlying space $X$. Especially significant is the requirement that $\mu$ should have *full support* ( $\mu (V)>0$ for every open set $V\neq\emptyset$) since in this case any ergodicity property implies its topological counterpart. There is, of course, a huge literature on these matters since the classical work of Oxtoby and Ulam ([@OU]). In recent years, the above problem has received a lot of attention in the specific setting of *linear* dynamics, when the transformation $T$ is a continuous linear operator acting on a topological vector space $X$ ([@F], [@BG2], [@BG3], [@BoGE], [@Sophie]). The main reason is that people working in linear dynamics are mostly interested in studying *hypercyclic* operators, operators having dense orbits. When the space $X$ is second-countable, it is very easy to see that if a continuous map $T:X\to X$ happens to be ergodic with respect to some Borel probability measure $\mu$ with full support, then almost every $x\in X$ (relative to $\mu$) has a dense $T$-orbit. (In fact, one can say more: it follows from Birkhoff’s ergodic theorem that almost all $T$-orbits visit every non-empty open set along a set of integers having positive lower density. In the linear setting, an operator having at least one orbit with that property is said to be *frequently hypercyclic*. This notion was introduced in [@BG3] and extensively studied since then; see the books [@BM] and [@GP] for more information). Hence, to find an ergodic measure with full support is an efficient way of showing that a given operator is hypercyclic, which comes as a measure-theoretic counterpart to the more traditional Baire category approach. Throughout the paper, we shall restrict ourselves to the best understood infinite-dimensional measures, the so-called *Gaussian* measures. Moreover, the underlying topological vector space $X$ will always be a complex separable Fréchet space. (The reason for considering *complex* spaces only will become clear in the next few lines). 
In this setting, a Borel probability measure on $X$ is Gaussian if and only if it is the distribution of an almost surely convergent random series of the form $\xi=\sum_0^\infty g_n x_n$, where $(x_n)\subset X$ and $(g_n)$ is a sequence of independent, standard complex Gaussian variables. Given any property (P) relative to measure-preserving transformations, we shall say that an operator $T\in\mathfrak L(X)$ has property (P) *in the Gaussian sense* if there exists some Gaussian probability measure $\mu$ on $X$ with full support with respect to which $T$ has (P). The problem of determining which operators are ergodic in the Gaussian sense was investigated by E. Flytzanis ([@F]), in a Hilbert space setting. The fundamental idea of [@F] is that one has to look at the *$\TT$-eigenvectors* of the operator, the eigenvectors associated with eigenvalues of modulus $1$: roughly speaking, ergodicity is equivalent to the existence of “sufficiently many $\TT$-eigenvectors and eigenvalues". This is of course to be compared with the now classical eigenvalue criterion for hypercyclicity found by G. Godefroy and J. Shapiro ([@GS]), which says in essence that an operator having enough eigenvalues inside and outside the unit circle must be hypercyclic. The importance of the $\TT$-eigenvectors is easy to explain. Indeed, it is almost trivial that if $T\in\mathfrak L(X)$ is an operator whose $\TT$-eigenvectors span a dense subspace of $X$, then $T$ admits an invariant Gaussian measure with full support: choose a sequence of $\TT$-eigenvectors $(x_n)_{n\geq 0}$ (say $T(x_n)=\lambda_n x_n$) with dense linear span such that $\sum_0^\infty\Vert x_n\Vert<\infty$ for every continuous semi-norm $\Vert\,\cdot\,\Vert$ on $X$, and let $\mu$ be the distribution of the random variable $\xi=\sum_0^\infty g_n x_n$. That $\mu$ is $T$-invariant follows from the linearity of $T$ and the rotational invariance of the Gaussian variables $g_n$ ($\mu\circ T^{-1}\sim\sum_0^\infty g_n T(x_n)=\sum_0^\infty (\lambda_ng_n)\, x_n\sim\sum_0^\infty g_n x_n=\mu$). However, this particular measure $\mu$ cannot be ergodic ([@Sophie]). Building on Flytzanis’ ideas, the first named author and S. Grivaux came rather close to characterizing the weak and strong mixing properties for Banach space operators in terms of the $\TT$-eigenvectors ([@BG3], [@BG2]). However, this was not quite the end of the story because the sufficient conditions for weak or strong mixing found in [@BG3] and [@BG2] depend on some geometrical property of the underlying Banach space, or on some “regularity" property of the $\TT$-eigenvectors (see the remark just after Corollary \[eigvectfield\]). In the present paper, our aim is to show that in fact, these assumptions can be completely removed. Thus, we intend to establish “optimal" sufficient conditions for weak and strong mixing in terms of the $\TT$-eigenvectors which are valid on an arbitrary Fréchet space. These conditions turn out to be also necessary when the underlying space $X$ is a Banach space with *cotype 2*, and hence we get complete characterizations of weak and strong mixing in this case. We shall in fact consider some more general notions of “mixing", but our main concerns are really the weak and strong mixing properties. At this point, we should recall the definitions. 
A measure-preserving transformation $T:(X,\mathfrak B,\mu)\to(X,\mathfrak B,\mu)$ is *weakly mixing* (with respect to $\mu$) if $$\frac{1}{N}\sum_{n=0}^{N-1} \vert \mu(A\cap T^{-n}(B))-\mu (A)\mu(B)\vert\xrightarrow{N\to\infty} 0$$ for any measurable sets $A,B\subset X$; and $T$ is *strongly mixing* if $$\mu(A\cap T^{-n}(B))\xrightarrow{n\to\infty} \mu(A)\mu(B)$$ for any $A,B\in\mathfrak B$. (Ergodicity can be defined exactly as weak mixing, but removing the absolute value in the Cesàro mean). According to the “spectral viewpoint" on ergodic theory, weakly mixing transformations are closely related to *continuous* measures on the circle $\TT$, and strongly mixing transformations are related to *Rajchman* measures, i.e. measures whose Fourier coefficients vanish at infinity. Without going into any detail at this point, we just recall that, by a classical result of Wiener (see [@Ktz]), continuous measures on $\TT$ are characterized by the behaviour of their Fourier coefficients: a measure $\sigma$ is continuous if and only if $$\frac{1}{N}\sum_{n=0}^{N-1} \vert \widehat\sigma (n)\vert\xrightarrow{N\to\infty} 0\, .$$ Wiener’s lemma is usually stated with symmetric Cesàro means, but this turns out to be equivalent. Likewise, by the so-called *Rajchman’s lemma*, a measure $\sigma$ is Rajchman if and only if $\widehat\sigma (n)\to 0$ as $n\to +\infty$ (that is, a one-sided limit is enough). Especially important for us will be the corresponding families of “small" sets of the circle; that is, the sets which are annihilated by every positive measure in the family under consideration (continuous measures, or Rajchman measures). Obviously, a Borel set $D\subset\TT$ is small for continuous measures if and only if it is countable. The small sets for Rajchman measures are the so-called *sets of extended uniqueness* or *sets of uniqueness for Fourier-Stieltjes series*, which have been extensively studied since the beginning of the 20th century (see [@KL]). The family of all sets of extended uniqueness is usually denoted by $\mathcal U_0$. Our main results can now be summarized as follows. \[WS\] Let $X$ be a complex separable Fréchet space, and let $T\in\mathfrak L(X)$. 1. Assume that the $\TT$-eigenvectors are *perfectly spanning*, in the following sense: for any countable set $D\subset \TT$, the linear span of $\bigcup_{\lambda\in\TT\setminus D}\ker (T-\lambda)$ is dense in $X$. Then $T$ is weakly mixing in the Gaussian sense. 2. Assume that the $\TT$-eigenvectors are *$\mathcal U_0$-perfectly spanning*, in the following sense: for any Borel set of extended uniqueness $D\subset \TT$, the linear span of $\bigcup_{\lambda\in\TT\setminus D}\ker (T-\lambda)$ is dense in $X$. Then $T$ is strongly mixing in the Gaussian sense. 3. In [(1)]{} and [(2)]{}, the converse implications are true if $X$ is a Banach space with [cotype 2]{}. Some remarks are in order regarding the scope and the “history" of these results. 1. When $X$ is a Hilbert space, (1) is stated in [@F] (with some additional assumptions on the operator $T$) and a detailed proof is given in [@BG3] (without these additional assumptions). The definition of “perfectly spanning" used in [@BG3] is formally stronger than the above one, but the two notions are in fact equivalent ([@Sophie]). 2. It is shown in [@Sophie] that under the assumption of (1), the operator $T$ is frequently hypercyclic. The proof is rather complicated, and it is not clear that it could be modified to get weak mixing in the Gaussian sense. 
However, some of the ideas of [@Sophie] will be crucial for us. In particular, sub-section \[Sophiesection\] owes a lot to [@Sophie]. 3. In the weak mixing case, (3) is proved in [@BG2 Theorem 4.1]. 4. It seems unnecessary to recall here the definition of cotype (see any book on Banach space theory, e.g. [@AK]). Suffice it to say that this is a geometrical property of the space, and that Hilbert space has cotype $2$ as well as $L^p$ spaces for $p\in [1,2]$ (but *not* for $p>2$). 5. As observed in [@BG2 Example 4.2], (3) does not hold on an arbitrary Banach space $X$. Indeed, let $X:=\mathcal C_0([0,2\pi])=\{ f\in\mathcal C([0,2\pi]);\; f(0)=0\}$ and let $V:L^2(0,2\pi)\to X$ be the Volterra operator, $Vf(t)=\int_0^t f(s)\, ds$. There is a unique operator $T:X\to X$ such that $TV=VM_\phi$, where $M_\phi:L^2(0,2\pi)\to L^2(0,2\pi)$ is the multiplication operator associated with the function $\phi (t)=e^{it}$. The operator $T$ is given by the formula $$\label{Kal} Tf(t)=\phi(t)f(t)-\int_0^t \phi'(s)f(s)\, ds\, .$$ It is easy to check that $T$ has no eigenvalues. On the other hand, $T$ is strongly mixing with respect to the Wiener measure on $\mathcal C_0([0,2\pi])$. As it turns out, ergodicity and weak mixing in the Gaussian sense are in fact equivalent (see e.g. [@G], or [@BG2 Theorem 4.1]). Hence, from Theorem \[WS\] we immediately get the following result. (A Gaussian measure $\mu$ is *nontrivial* if $\mu\neq\delta_0$). \[characexistergod\] For a linear operator $T$ acting on a Banach space $X$ with cotype 2, the following are equivalent: - (a) $T$ admits a nontrivial ergodic Gaussian measure; - (b) there exists a closed, $T$-invariant subspace $Z\neq \{ 0\}$ such that $$\overline{\rm span}\, \bigcup_{\lambda\in\TT\setminus D}\ker (T_{\vert Z}-\lambda)=Z$$ for every countable set $D\subset\TT$. In this case, $T$ admits an ergodic Gaussian measure with support $Z$, for any such subspace $Z$. If $T$ admits an ergodic Gaussian measure $\mu\neq\delta_0$, then $Z:={\rm supp}(\mu)$ is a non-zero $T$-invariant subspace, and $Z$ satisfies (b) by Theorem \[WS\] (3). The converse follows from Theorem \[WS\] (1). For concrete applications, it is useful to formulate Theorem \[WS\] in terms of *$\TT$-eigenvector fields* for the operator $T$. A $\TT$-eigenvector field for $T$ is a map $E:\Lambda\to X$ defined on some set $\Lambda\subset\TT$, such that $$TE(\lambda)=\lambda E(\lambda)$$ for every $\lambda\in\Lambda$. (The terminology is not perfectly accurate: strictly speaking, $E(\lambda)$ is perhaps not a $\TT$-eigenvector because it is allowed to be $0$). Recall also that a closed set $\Lambda\subset\TT$ is *perfect* if it has no isolated points or, equivalently, if $V\cap\Lambda$ is uncountable for any open set $V\subset\TT$ such that $V\cap\Lambda\neq\emptyset$. Analogously, a closed set $\Lambda\subset \TT$ is said to be *$\mathcal U_0$-perfect* if $V\cap \Lambda$ is not a set of extended uniqueness for any open set $V$ such that $V\cap\Lambda\neq\emptyset$. (For example, any nontrivial closed arc is $\mathcal U_0$-perfect). \[eigvectfield\] Let $X$ be a separable complex Fréchet space, and let $T\in\mathfrak L(X)$. Assume that one has at hand a family of *continuous* $\TT$-eigenvector fields $(E_i)_{i\in I}$ for $T$, where $E_i:\Lambda_i\to X$ is defined on some closed set $\Lambda_i\subset\TT$, such that ${\rm span}\left(\bigcup_{i\in I} E_i(\Lambda_i)\right)$ is dense in $X$. 1. If each $\Lambda_i$ is a perfect set, then $T$ is weakly mixing in the Gaussian sense. 2.
If each $\Lambda_i$ is $\mathcal U_0$-perfect, then $T$ is strongly mixing in the Gaussian sense. This follows immediately from Theorem \[WS\]. Indeed, if $\Lambda\subset\TT$ is a perfect set then $\Lambda\setminus D$ is dense in $\Lambda$ for any countable set $D$, whereas if $\Lambda$ is $\mathcal U_0$-perfect then $\Lambda\setminus D$ is dense in $\Lambda$ for any $\mathcal U_0$-set $D$. Since the $\TT$-eigenvector fields $E_i$ are assumed to be continuous, it follows that the $\TT$-eigenvectors of $T$ are perfectly spanning in case (i), and $\mathcal U_0$-perfectly spanning in case (ii). Several results of this kind are proved in [@BG2] and in [@BM Chapter 5], all of them being based on an interplay between the geometry of the (Banach) space $X$ and the regularity of the $\TT$-eigenvector fields $E_i$. For example, it is shown that if $X$ has *type 2*, then continuity of the $E_i$ is enough, whereas if the $E_i$ are Lipschitz and defined on (nontrivial) closed arcs then no assumption on $X$ is needed. “Intermediate" cases involve the [type]{} of the Banach space $X$ and H" older conditions on the $E_i$. What Corollary \[eigvectfield\] says is that continuity of the $E_i$ is *always* enough, regardless of the underlying space $X$. We also point out that the assumption in (i), i.e. the existence of $\TT$-eigenvector fields with the required spanning property defined on perfect sets, is in fact equivalent to the perfect spanning property ([@Sophie]). Likewise, the assumption in (ii) is equivalent to the $\mathcal U_0$-perfect spanning property (see Proposition \[perfect\]). In order to illustrate our results, two examples are worth presenting immediately. Other examples will be reviewed in section \[final\]. Let $\mathbf w=(w_n)_{n\geq 1}$ be a bounded sequence of nonzero complex numbers, and let $B_{\bf w}$ be the associated *weighted backward shift* acting on $X_p=\ell^p(\NN)$, $1\leq p<\infty$ or $X_\infty=c_0(\NN)$; that is, $B_{\bf w}(x_0,x_1,x_2,\dots )=(w_1x_1,w_2x_2,\dots ).$ Solving the equation $B_{\mathbf w}(x)=\lambda x$, it is easy to check that $B_{\mathbf w}$ has eigenvalues of modulus 1 if and only if $$\label{shift} \hbox{the sequence $\displaystyle\left(\frac{1}{w_0\cdots w_n}\right)_{n\geq 0}$ is in $X_p$}$$ (we have put $w_0:=1$). In this case the formula $$E(\lambda):=\sum_{n=0}^\infty \frac{\lambda^n}{w_0\cdots w_n}\, e_n$$ defines a continuous $\TT$-eigenvector field $E:\TT\to X_p$ such that $\overline{\rm span}\, E(\TT)=X_p$. Hence $B_{\mathbf w}$ is strongly mixing in the Gaussian sense. This is known since [@BG2] if $p<\infty$, but it appears to be new for weighted shifts on $c_0(\NN)$. The converse is true if $p\leq 2$ ( (\[shift\]) is satisfied if $B_{\mathbf w}$ is strongly mixing in the Gaussian sense) since in this case $X_p$ has cotype 2, but the case $p>2$ is not covered by Theorem \[WS\]. However, it turns out that the converse does hold true for any $p<\infty$. In fact, (\[shift\]) is satisfied as soon as the weighted shift $B_{\mathbf w}$ is frequently hypercyclic ([@BR]). As shown in [@BG2], this breaks down completely when $p=\infty$: there is a frequently hypercyclic weighted shift $B_{\mathbf w}$ on $c_0(\NN)$ whose weight sequence satisfies $w_1\cdots w_n=1$ for infinitely many $n$. Such a weighted shift does not admit any (nontrivial) invariant Gaussian measure. Let us also recall that, in contrast with the ergodic properties, the hypercyclicity of $B_{\bf w}$ does not depend on $p$: by a well known result of H. 
Salas ([@Sal]), $B_{\mathbf w}$ is hypercyclic on $X_p$ for any $p$ if and only if $\sup_{n\geq 1} \vert w_1\cdots w_n\vert=\infty$. Likewise, $B_{\mathbf w}$ is strongly mixing in the topological sense (on any $X_p$) iff $\vert w_1\cdots w_n\vert\to\infty$. Hence, we see that strong mixing in the topological sense turns out to be equivalent to strong mixing in the Gaussian sense for weighted shifts on $c_0(\NN)$. Let $T$ be the operator defined by formula (\[Kal\]), but acting on $L^2(0,2\pi)$. It is straightforward to check that for any $t\in (0,2\pi)$, the function $f_t=\mathbf 1_{(0,t)}$ is an eigenvector for $T$ with associated eigenvalue $\lambda =e^{it}$. Moreover the map $E:\TT\setminus\{ \mathbf 1\}\to L^2(0,2\pi)$ defined by $E(e^{it})=f_t$ is clearly continuous. Now, let $\Lambda$ be an arbitrary compact subset of $\TT\setminus\{ \mathbf 1\}$. Let us denote by $H_\Lambda$ the closed linear span of $E(\Lambda)$, and let $T_\Lambda$ be the restriction of $T$ to $H_\Lambda$. By a result of G. Kalisch ([@Kal], see also [@BG1 Lemma 2.12]), the point spectrum of $T_\Lambda$ is exactly equal to $\Lambda$. By Corollary \[eigvectfield\] and Theorem \[WS\] (3), it follows that the operator $T_\Lambda$ is weakly mixing in the Gaussian sense if and only if $\Lambda$ is a perfect set, and strongly mixing iff $\Lambda$ is $\mathcal U_0$-perfect. Hence, any perfect $\mathcal U_0$-set $\Lambda$ gives rise to a very simple example of a weakly mixing transformation which is not strongly mixing. This could be of some interest since the classical concrete examples are arguably more complicated (see the one given in [@P section 4.5]). Regarding the difference between weak and strong mixing, it is also worth pointing out that there exist Hilbert space operators which are weakly mixing in the Gaussian sense but not even strongly mixing in the [topological]{} sense. Indeed, in the beautiful paper [@BadG], C. Badea and S. Grivaux are able to construct a weakly mixing operator (in the Gaussian sense) which is *partially power-bounded*, $\sup_{n\in I} \Vert T^n\Vert<\infty$ for some infinite set $I\subset\NN$. This line of investigations was pursued even much further in [@BadG2] and [@EG]. We have deliberately stressed the formal analogy between weak and strong mixing in the statement of Theorem \[WS\]. In view of this analogy, it should not come as a surprise that Theorem \[WS\] can be deduced from some more general results dealing with abstract notions of “mixing". (In order not to make this introduction exceedingly long, these results will be described in the next section). In particular, (1) and (2) are formal consequences of Theorem \[abstract\] below. However, even though the proof of Theorem \[abstract\] is “conceptually" simple, the technical details make it rather long. This would be exactly the same for the strong mixing case (i.e. part (2) of Theorem \[WS\]), but in the weak mixing case it is possible to give a technically much simpler and hence much shorter proof. For the sake of readability, it seems desirable to present this proof separately. But since there is no point in repeating identical arguments, we shall follow the abstract approach as long as this does not appear to be artificial. The paper is organized as follows. In section 2, we present our abstract results. In section 3, we review some basic facts concerning Gaussian measures and we outline the strategy for proving the abstract results and hence Theorem \[WS\]. 
Apart from some details in the presentation and the level of generality, this follows the scheme described in [@BG3], [@BG2] and [@BM]. In section 4, we prove part (1) of Theorem \[WS\] (the sufficient condition for weak mixing). The abstract results are proved in sections \[proofabstract1\] and \[proofabstract2\]. Section \[final\] contains some additional examples and miscellaneous remarks. In particular, we briefly discuss the “continuous” analogues of our results (i.e. the case of $1$-parameter semigroups), and we show that for a large class of strongly mixing weighted shifts, the set of hypercyclic vectors turns out to be rather small, namely Haar-null in the sense of Christensen. We conclude the paper with some possibly interesting questions. [**Notation and conventions.**]{} The set of natural numbers is denoted either by $\NN$ or by $\ZZ_+$. We denote by $\mathcal M(\TT)$ the space of all complex measures on $\TT$, endowed with the total variation norm. The Fourier transform of a measure $\sigma\in\mathcal M(\TT)$ is denoted either by $\widehat\sigma$ or by $\mathcal F(\sigma)$. As a rule, all measurable spaces $(\Omega,\mathfrak A)$ are standard Borel, and all measure spaces $(\Omega,\mathfrak A, m)$ are sigma-finite. All Hilbert spaces $\mathcal H$ are (complex) separable and infinite-dimensional. The scalar product $\langle u,v\rangle_{\mathcal H}$ is linear with respect to $u$ and conjugate-linear with respect to $v$. Abstract results ================ $\mathbf S$-mixing ------------------ It is well known (and easy to check) that the definitions of ergodicity, weak and strong mixing can be reformulated as follows. Let $(X,\mathfrak B,\mu)$ be a probability space, and set $$L^2_0(\mu):=\left\{ f\in L^2(\mu);\; \int_X f\, d\mu=0\right\} .$$ Then, a measure-preserving transformation $T:(X,\mathfrak B,\mu)\to(X,\mathfrak B,\mu)$ is ergodic with respect to $\mu$ if and only if $$\frac{1}{N}\sum_{n=0}^{N-1} \langle f\circ T^n,g\rangle_{L^2(\mu)}\xrightarrow{N\to\infty} 0$$ for any $f,g\in L^2_0(\mu)$. The transformation $T$ is weakly mixing iff $$\frac{1}{N}\sum_{n=0}^{N-1} \left\vert \langle f\circ T^n,g\rangle_{L^2(\mu)}\right\vert\xrightarrow{N\to\infty} 0$$ for any $f,g\in L^2_0(\mu)$, and $T$ is strongly mixing iff $$\langle f\circ T^n,g\rangle_{L^2(\mu)}\xrightarrow{n\to\infty} 0\, .$$ Now, let us denote by $V_T:L^2(\mu)\to L^2(\mu)$ the Koopman operator associated with a measure-preserving transformation $T:(X,\mathfrak B,\mu)\to(X,\mathfrak B,\mu)$, i.e. the isometry defined by $$V_Tf=f\circ T\, .$$ For any $f,g\in L^2(\mu)$, there is a uniquely defined complex measure $\sigma_{f,g}=\sigma_{f,g}^T$ on $\TT$ such that $$\widehat\sigma_{f,g} (n)=\left\{ \begin{matrix} \langle V_T^n f,g\rangle_{L^2(\mu)}&{\rm if}&n\geq 0\\ \langle V_T^{*\vert n\vert} f,g\rangle_{L^2(\mu)}&{\rm if}&n< 0 \end{matrix} \right.$$ (When $f=g$, this follows from Bochner’s theorem because in this case the right-hand side defines a positive-definite function on $\ZZ$; and then one can use a “polarization” argument). We denote by $\Sigma(T,\mu)$ the collection of all measures $\sigma_{f,g}$, $f,g\in L^2_0(\mu)$, and, forgetting the measure $\mu$, we refer to $\Sigma(T,\mu)$ as “the spectral measure of $T$”. With these notations, we see that $T$ is weakly mixing with respect to $\mu$ iff all measures $\sigma\in \Sigma(T,\mu)$ are continuous (by Wiener’s lemma), and that $T$ is strongly mixing iff all measures $\sigma\in \Sigma(T,\mu)$ are Rajchman (by Rajchman’s lemma).
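(Let us recall, for the reader's convenience, the two classical facts just invoked: a measure $\sigma\in\mathcal M(\TT)$ is *Rajchman* if $\widehat\sigma (n)\to 0$ as $\vert n\vert\to\infty$; and Wiener's lemma asserts that $\frac{1}{2N+1}\sum_{n=-N}^{N}\vert\widehat\sigma (n)\vert^2\to\sum_{\lambda\in\TT}\vert\sigma (\{\lambda\})\vert^2$ as $N\to\infty$, so that $\sigma$ is continuous if and only if the Cesàro means of $\bigl(\vert\widehat\sigma (n)\vert^2\bigr)$ tend to $0$.)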
Likewise, $T$ is ergodic iff $\sigma (\{ \mathbf 1\})=0$ for every $\sigma\in \Sigma(T,\mu)$. More generally, given any family of measures $\mathcal B\subset\mathcal M(\TT)$, one may say that $T$ is *$\mathcal B$-mixing* with respect to $\mu$ if the spectral measure of $T$ lies in $\mathcal B$, i.e. all measures $\sigma\in\Sigma(T,\mu)$ are in $\mathcal B$. We shall in fact consider a more specific case which seems to be the most natural one for our concerns. Let us denote by $\mathcal F_+:\mathcal M(\TT)\to \ell^\infty(\ZZ_+)$ the positive part of the Fourier transformation, i.e. $\mathcal F_+(\sigma)= \widehat\sigma_{\vert \ZZ_+}$. Given any family $\mathbf S\subset\ell^\infty(\ZZ_+)$, we say that a measure $\sigma\in\mathcal M(\TT)$ is *$\mathbf S$-continuous* if $\mathcal F_+(\sigma)\in\mathbf S$. A measure-preserving transformation $T:(X,\mu)\to (X,\mu)$ is *$\mathbf S$-mixing* with respect to $\mu$ if every measure $\sigma \in\Sigma (T,\mu)$ is $\mathbf S$-continuous. Thus, strong mixing is just $\mathbf S$-mixing for the family $\mathbf S=c_0(\ZZ_+)$, weak mixing is $\mathbf S$-mixing for the family $\mathbf S$ of all sequences $(a_n)\in\ell^\infty(\ZZ_+)$ such that $\vert a_n\vert\to 0$ in the Cesàro sense, and ergodicity corresponds to the family $\mathbf S$ of all $a\in\ell^\infty (\ZZ_+)$ tending to $0$ in the Cesàro sense. In what follows, these families will be denoted by $\mathbf S_{\rm strong}$, $\mathbf S_{\rm weak}$ and $\mathbf S_{\rm erg}$, respectively. Small subsets of the circle --------------------------- Given a family of measures $\mathcal B\subset\mathcal M(\TT)$, it is quite natural in harmonic analysis to try to say something about the *$\mathcal B$-small* subsets of $\TT$, i.e. the sets $D\subset\TT$ that are annihilated by all positive measures $\sigma\in\mathcal B$. By this we mean that for any such measure $\sigma$, one can find a Borel set $\widetilde D$ (possibly depending on $\sigma$) such that $D\subset\widetilde D$ and $\sigma (\widetilde D)=0$. When the family $\mathcal B$ has the form $\mathcal B=\mathcal F_+^{-1}(\mathbf S)$ for some $\mathbf S\subset\ell^\infty (\ZZ_+)$, we call these sets *$\mathbf S$-small*. To avoid trivialities concerning $\mathcal B$-small sets, the family $\mathcal B$ under consideration should contain nonzero [positive]{} measures, and in fact it is desirable that it should be *hereditary* with respect to absolute continuity; that is, any measure absolutely continuous with respect to some $\sigma\in\mathcal B$ is again in $\mathcal B$. The following simple lemma shows how to achieve this for families of the form $\mathcal F_{+}^{-1}(\mathbf S)$. Let us say that a family $\mathbf S\subset\ell^\infty(\ZZ_+)$ is *translation-invariant* if it is invariant under both the forward and the backward shift on $\ell^\infty(\ZZ_+)$. \[hereditary\] If $\mathbf S$ is a translation-invariant linear subspace of $\ell^\infty(\ZZ_+)$ such that $\mathcal F_{+}^{-1}(\mathbf S)$ is norm-closed in $\mathcal M(\TT)$, then $\mathcal F_{+}^{-1}(\mathbf S)$ is hereditary with respect to absolute continuity. If $\sigma\in\mathcal F_+^{-1}(\mathbf S)$ then $P\sigma$ is in $\mathcal F_+^{-1}(\mathbf S)$ for any trigonometric polynomial $P$, by translation-invariance. So the result follows by approximation. We shall also make use of the following well known result concerning *$\mathcal B$-perfect* sets.
By definition, a set $\Lambda\subset \TT$ is $\mathcal B$-perfect if $V\cap\Lambda$ is not $\mathcal B$-small for any open set $V\subset\TT$ such that $V\cap\Lambda\neq\emptyset$. \[Bperfect\] Let $\mathcal B$ be a norm-closed linear subspace of $\mathcal M(\TT)$, and assume that $\mathcal B$ is hereditary with respect to absolute continuity. For a closed set $\Lambda\subset\TT$, the following are equivalent: - $\Lambda$ is $\mathcal B$-perfect; - $\Lambda$ is the support of some probability measure $\sigma\in\mathcal B$. That (b) implies (a) is clear (without any assumption on $\mathcal B$). Conversely, assume that $\Lambda$ is $\mathcal B$-perfect. Let $(W_n)_{n\geq 1}$ be a countable basis of open sets for $\Lambda$, with $W_n\neq \emptyset$. Since $\mathcal B$ is hereditary, one can find for each $n$ a probability measure $\sigma_n\in\mathcal B$ such that ${\rm supp}(\sigma_n)\subset\Lambda$ and $\sigma_n (W_n)>0$. Then the probability measure $\sigma=\sum_1^\infty 2^{-n}\sigma_n$ is in $\mathcal B$ and ${\rm supp}(\sigma)=\Lambda$. The results ----------- To state our results we need two more definitions. \[defBperfspan\] Let $T$ be an operator acting on a complex separable Fréchet space $X$, and let $\mathcal B\subset\mathcal M(\TT)$. We say that the $\TT$-eigenvectors of $T$ are *$\mathcal B$-perfectly spanning* if, for any Borel $\mathcal B$-small set $D\subset \TT$, the linear span of $\bigcup_{\lambda\in\TT\setminus D}\ker (T-\lambda)$ is dense in $X$. When $\mathcal B$ has the form $\mathcal F_+^{-1}(\mathbf S)$, the terminology *$\mathbf S$-perfectly spanning* is used. Thus, perfect spanning is $\mathcal B$-perfect spanning for the family of continuous measures, and $\mathcal U_0$-perfect spanning is $\mathcal B$-perfect spanning for the family of Rajchman measures. At some places, we will encounter sets which are *analytic* but perhaps non Borel. Recall that a set $D$ in some Polish space $Z$ is [analytic]{} if one can find a Borel relation $B\subset Z\times E$ (where $E$ is Polish) such that $z\in D\Leftrightarrow\exists u\;:\; B(z,u)$. If the spanning property of the above definition holds for every analytic $\mathcal B$-small set $D$, we say that the $\TT$-eigenvectors of $T$ are [$\mathcal B$-perfectly spanning]{} *for analytic sets*. We shall say that a family $\mathbf S\subset\ell^\infty(\ZZ_+)$ is *$c_0$-like* if it has the form $$\mathbf S=\{ a\in\ell^\infty(\ZZ_+);\; \lim_{n\to\infty} \Phi_n(a)=0\}\,$$ where $(\Phi_n)_{n\geq 0}$ is a uniformly bounded sequence of $w^*$-$\,$continuous semi-norms on $\ell^\infty(\ZZ_+)$. (By “uniformly bounded", we mean that $\Phi_n(a)\leq C\, \Vert a\Vert_\infty$ for all $n$ and some finite constant $C$). For example, the families $\mathbf S_{\rm strong}$, $\mathbf S_{\rm weak}$ and $\mathbf S_{\rm erg}$ are $c_0$-like: just put $\Phi_n(a)=\vert a_n\vert$ in the strong mixing case, $\Phi_n(a)=\frac{1}{n}\sum_{k=0}^{n-1}\vert a_k\vert$ in the weak mixing case, and $\Phi_n(a)=\left\vert \frac{1}{n}\sum_{k=0}^{n-1} a_k\right\vert$ in the ergodic case. Our main result is the following theorem, from which (1) and (2) in Theorem \[WS\] follow immediately. Recall that a family $\mathbf S\subset \ell^\infty(\ZZ_+)$ is an *ideal* if it is a linear subspace and $ua\in \mathbf S$ for any $(u,a)\in\ell^\infty(\ZZ_+)\times\mathbf S$. \[abstract\] Let $X$ be a separable complex Fréchet space, and let $T\in\mathfrak L(X)$. Let also $\mathbf S\subset \ell^\infty (\ZZ_+)$. 
Assume that $\mathbf S$ is a translation-invariant and $c_0$-like [ideal]{}, and that any $\mathbf S$-continuous measure is continuous. If the $\TT$-eigenvectors of $T$ are $\mathbf S$-perfectly spanning, then $T$ is $\mathbf S$-mixing in the Gaussian sense. This theorem cannot be applied to the ergodic case, for two reasons: the family $\mathbf S_{\rm erg}$ is not an ideal of $\ell^\infty (\ZZ_+)$, and $\mathbf S_{\rm erg}$-continuous measures need not be continuous (a measure $\sigma$ is $\mathbf S_{\rm erg}$-continuous iff $\sigma(\{ \mathbf 1\})=0$). Theorem \[abstract\] will be proved in section \[proofabstract1\]. The following much simpler converse result (which corresponds to (3) in Theorem \[WS\]) will be proved in section \[proofabstract2\]. \[converse\] Let $X$ be a separable complex Banach space, and assume that $X$ has cotype 2. Let also $\mathbf S$ be an arbitrary subset of $\ell^\infty(\ZZ_+)$. If $T\in\mathfrak L(X)$ is $\mathbf S$-mixing in the Gaussian sense, then the $\TT$-eigenvectors of $T$ are $\mathbf S$-perfectly spanning for analytic sets. Two “trivial” cases are worth mentioning. If we take $\mathbf S=\mathbf S_{\rm erg}$, then $\mathcal F_+^{-1}(\mathbf S)=\{ \sigma\in\mathcal M(\TT);\; \sigma (\{\mathbf 1\})=0\}$ and hence a set $D\subset\TT$ is $\mathbf S$-small if and only if $D\subset\{\mathbf 1\}$. If we take $\mathbf S=\ell^\infty (\ZZ_+)$, then $\mathcal F_+^{-1}(\mathbf S)=\mathcal M(\TT)$ and hence the only $\mathbf S$-small set is $D=\emptyset$. So, assuming that $X$ has cotype $2$, we get the following (known) results: if $T$ admits an invariant Gaussian measure with full support, then the $\TT$-eigenvectors of $T$ span a dense subspace of $X$; and if $T$ is ergodic in the Gaussian sense, then ${\overline{\rm span}}\,\left(\bigcup_{\lambda\in\TT\setminus\{ \mathbf 1\}}\ker(T-\lambda)\right)=X$. As already pointed out, much more is true in the ergodic case: it is proved in [@BG2 Theorem 4.1] that if $T$ is ergodic in the Gaussian sense, then in fact the $\TT$-eigenvectors are perfectly spanning (provided that $X$ has cotype 2). Our last result (to be proved also in section \[proofabstract2\]) says that when the space $X$ is well-behaved, the assumptions in Theorem \[abstract\] can be relaxed: it is no longer necessary to assume that the family $\mathbf S$ is $c_0$-like, nor that $\mathbf S$-continuous measures are continuous. However, the price to pay is that one has to use the strengthened version of $\mathbf S$-perfect spanning. \[abstracteasy\] Let $X$ be a separable complex Banach space, and assume that $X$ has *type 2*. Let also $\mathbf S$ be a norm-closed, translation-invariant ideal of $\ell^\infty(\ZZ_+)$, and let $T\in\mathfrak L(X)$. If the $\TT$-eigenvectors of $T$ are $\mathbf S$-perfectly spanning for analytic sets, then $T$ is $\mathbf S$-mixing in the Gaussian sense. Since a Hilbert space has both type 2 and cotype 2, we immediately deduce \[Smix=span\] Let $\mathbf S$ be a norm-closed, translation-invariant ideal of $\ell^\infty(\ZZ_+)$. For Hilbert space operators, $\mathbf S$-mixing in the Gaussian sense is equivalent to the $\mathbf S$-perfect spanning property of $\TT$-eigenvectors (for analytic sets). The reader may wonder why analytic sets appear in Theorem \[abstracteasy\], whereas only Borel sets are needed in Theorem \[abstract\].
The reason is that the family of $\mathbf S$-small sets has a quite special structural property (the so-called *covering property*) when the family $\mathbf S$ is $c_0$-like: any analytic $\mathbf S$-small set can be covered by a sequence of *closed* $\mathbf S$-small sets (see [@MZ]). In particular, any analytic $\mathbf S$-small set is contained in a Borel $\mathbf S$-small set and hence the two notions of $\mathbf S$-perfect spanning are equivalent. The covering property is of course trivially satisfied in the weak mixing case, i.e. for the family of countable sets. In the strong mixing case, i.e. for the sets of extended uniqueness, this is a remarkable result due to G. Debs and J. Saint Raymond ([@DSR]). The proof of Theorem \[abstracteasy\] turns out to be considerably simpler than that of Theorem \[abstract\]. Roughly speaking, the reason is the following. In the case of Theorem \[abstract\], where no assumption is made on $X$, we will have to be extremely careful to ensure that some “integral” operators of a certain kind are *gamma-radonifying* (see sub-section \[gamrad\]). On the other hand, such integral operators are [automatically]{} gamma-radonifying when $X$ is a Banach space with type 2. So the main technical difficulty just disappears, and the proof becomes rather easy. Background ========== Throughout this section, $X$ is a separable complex Fréchet space. Gaussian measures and gamma-radonifying operators {#gamrad} ------------------------------------------------- This sub-section is devoted to a brief review of the basic facts that we shall need concerning Gaussian measures. For a reasonably self-contained exposition specifically tailored to linear dynamics, we refer to [@BM]; and for in-depth studies of Gaussian measures, to the books [@Bo] and [@CTV]. By a *Gaussian measure* on $X$, we mean a Borel probability measure $\mu$ on $X$ which is the distribution of a random variable of the form $$\sum_{n=0}^\infty g_n x_n\,$$ where $(g_n)$ is a standard complex Gaussian sequence defined on some probability space $(\Omega,\mathfrak A,\mathbb P)$ and $(x_n)$ is a sequence in $X$ such that the series $\sum g_n x_n$ is almost surely convergent. We recall that $(g_n)$ is a standard complex Gaussian sequence if the $g_n$ are independent complex-valued random variables with distribution $\gamma_\sigma\otimes\gamma_\sigma$, where $\gamma_\sigma=\frac{1}{\sqrt{2\pi}\sigma}e^{-t^2/2\sigma^2}\, dt$ is the usual Gaussian distribution with mean $0$ and variance $\sigma^2=1/2$. This implies in particular that $\mathbb Eg_n=0$ and $\mathbb E\vert g_n\vert^2=1$. “Our” definition of a Gaussian measure is not the usual one. However, it is indeed equivalent to the classical definition: each continuous linear functional $x^*$ has a complex symmetric Gaussian distribution when considered as a random variable on the probability space $(X,\mathfrak B(X),\mu)$ (where $\mathfrak B(X)$ is the Borel sigma-algebra of $X$). It is well known that for a Fréchet space valued Gaussian series $\sum g_n x_n$, almost sure convergence is equivalent to convergence in the $L^p$ sense, for any $p\in [1,\infty[$. It follows at once that if $\mu\sim \sum_{n\geq 0} g_n x_n$ is a Gaussian measure on $X$, then $\int_X \Vert x\Vert^2\, d\mu(x)<\infty$ for every continuous semi-norm $\Vert\,\cdot\,\Vert$ on $X$. In particular, any continuous linear functional $x^*\in X^*$ is in $L^2(\mu)$ when considered as a random variable on $(X,\mu)$. We also note that we consider only *centred* Gaussian measures $\mu$, i.e. $\int_X x\, d\mu(x)=0$.
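To illustrate the definition, here is a standard example (recorded only for concreteness): if $X$ is a separable Hilbert space with orthonormal basis $(e_n)_{n\geq 0}$ and $(a_n)_{n\geq 0}$ is a sequence of complex numbers with $\sum_n\vert a_n\vert^2<\infty$, then the Gaussian series $\sum_n g_na_ne_n$ converges almost surely, and its distribution is a centred Gaussian measure on $X$ whose support is the closed linear span of $\{ e_n;\; a_n\neq 0\}$; in particular, this measure has full support as soon as all the $a_n$ are nonzero.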
Gaussian measures correspond in a canonical way to *gamma-radonifying operators*. Let $\mathcal H$ be a Hilbert space. An operator $K:\mathcal H\to X$ is said to be gamma-radonifying if for some (equivalently, for any) orthonormal basis $(e_n)_{n\geq 0}$ of $\mathcal H$, the Gaussian series $\sum g_n K(e_n)$ is almost surely convergent. By rotational invariance of the Gaussian variables $g_n$, the distribution of the random variable $\sum_{n\geq 0} g_n K(e_n)$ does not depend on the orthonormal basis $(e_n)$, so the operator $K$ gives rise to a Gaussian measure which may be denoted by $\mu_K$: $$\mu_K\sim \sum g_n K(e_n)\, .$$ Conversely, it is not hard to show that any Gaussian measure $\mu\sim\sum g_n x_n$ is induced by some gamma-radonifying operator. The following examples are worth keeping in mind: when $X$ is a Hilbert space, an operator $K:\mathcal H\to X$ is gamma-radonifying if and only if it is a *Hilbert-Schmidt* operator; and for an arbitrary $X$, the operator $K$ is gamma-radonifying as soon as $\sum_0^\infty \Vert K(e_n)\Vert<\infty$, for some orthonormal basis of $\mathcal H$ and every continuous semi-norm $\Vert\,\cdot\,\Vert$ on $X$. This follows at once from the equivalence between almost sure convergence and $L^2$ or $L^1$ convergence for Gaussian series. We note that the support of a Gaussian measure $\mu\sim\sum_{n\geq 0} g_n x_n$ is the closed linear span of the $x_n$. Therefore, a Gaussian measure $\mu=\mu_K$ has full support if and only if the gamma-radonifying operator $K:\mathcal H\to X$ has dense range. If $K:\mathcal H\to X$ is a continuous linear operator from some Hilbert space $\mathcal H$ into $X$, we consider its adjoint $K^*$ as an operator from $X^*$ into $\mathcal H$. Hence, $K^*:X^*\to \mathcal H$ is a *conjugate-linear* operator. If $K$ is gamma-radonifying and $\mu=\mu_K$ is the associated Gaussian measure, then we have the following fundamental identity: $$\langle x^*, y^*\rangle_{L^2(\mu)}=\langle K^*y^*, K^*x^*\rangle_{\mathcal H}$$ for any $x^*,y^*\in X^*$. The proof is a simple computation using the orthogonality of the Gaussian variables $g_n$. Let us also recall the definition of the *Fourier transform* of a Borel probability measure $\mu$ on $X$: this is the complex-valued function defined on the dual space $X^*$ by $$\widehat\mu (x^*)=\int_X e^{-i{\rm Re}\, \langle x^*, x\rangle}d\mu (x)\, .$$ One very important property of the Fourier transform is that it uniquely determines the measure: if $\mu_1$ and $\mu_2$ have the same Fourier transform, then $\mu_1=\mu_2$. If $\mu\sim \sum_{n\geq 0} g_n x_n$ is a Gaussian measure, a simple computation shows that $\widehat\mu (x^*)=\exp\left(-\frac{1}{4} \sum_{0}^\infty \vert\langle x^*, x_n\rangle\vert^2\right)$ for any $x^*\in X^*$. It follows that if $K:\mathcal H\to X$ is a gamma-radonifying operator then $$\widehat{\mu_K}(x^*)=e^{-\frac14{\Vert K^*x^*\Vert^2}}\, .$$ In particular, two gamma-radonifying operators $K_1$ and $K_2$ define the same Gaussian measure if and only if $$\Vert K_1^*x^*\Vert= \Vert K^*_2x^*\Vert$$ for every $x^*\in X^*$. Finally, let us say a few words about type 2 and cotype 2 Banach spaces. A Banach space $X$ has *type 2* if $$\mathbb E\left\Vert \sum_ng_n x_n\right\Vert\leq C\left( \sum_n \Vert x_n\Vert^2\right)^{1/2}$$ for some finite constant $C$ and any finite sequence $(x_n)$ in $X$; and $X$ has *cotype 2* if the reverse inequality holds. 
Thus, type 2 makes the convergence of Gaussian series easier, whereas on a cotype 2 space the convergence of such a series has strong implications. This is apparent in the following proposition, which contains all the results that we shall need concerning these notions. (See (\[defke\]) below for the definition of the operators $K_E$). \[typecotype\] Let $X$ be a separable Banach space. 1. If $X$ has type 2 then any operator $K_E:L^2(\Omega,m)\to X$ is gamma-radonifying. 2. If $X$ has cotype 2 then any gamma-radonifying operator $K:L^2(\Omega, m)\to X$ has the form $K_E$, for some vector field $E:\Omega\to X$. These results are nontrivial (see [@BM]), but quite easy to use. The general procedure --------------------- Most of what is needed for the proof of Theorem \[abstract\] is summarized in Proposition \[background\] below, whose statement requires us to introduce some terminology. Recall that all measure spaces under consideration are *sigma-finite*, and that all $L^2$ spaces are separable. Let $(\Omega ,\mathfrak A,m)$ be a measure space. By an ($X$-valued) *vector field* on $(\Omega, \mathfrak A,m)$ we mean any measurable map $E:\Omega\to X$ which is in $L^2(\Omega,m, X)$. Such a vector field gives rise to an operator $K_E:L^2(\Omega,m)\to X$ defined as follows: $$\label{defke} K_E f=\int_\Omega f(\omega) E(\omega)\, dm(\omega)\, .$$ It is easy to check that the operator $K_E$ has dense range if and only if the vector field $E$ is *$m$-spanning* in the following sense: for any measurable set $\Delta\subset \Omega$ with $m(\Delta)=0$, the linear span of $\{ E(\omega);\; \omega\in\Omega\setminus\Delta\}$ is dense in $X$. This happens in particular if $\Omega$ is a topological space, $m$ is a Borel measure with full support, and the vector field $E$ is continuous with ${\overline{\rm span}}\, E(\Omega)=X$. We also note that the operator $K_E$ is always compact, and that it is a Hilbert-Schmidt operator if $X$ is a Hilbert space (because $E$ is in $L^2(\Omega,m,X)$). Moreover, the adjoint operator $K_E^*:X^*\to L^2(\Omega, m)$ is given by the formula $$K_E^*x^*=\overline{\langle x^*, E(\,\cdot\,)\rangle}\, .$$ The next definition will be crucial for our purpose. Let $T\in\mathfrak L(X)$. By a *$\TT$-eigenfield* for $T$ on $(\Omega ,\mathfrak A, m)$ we mean a pair of maps $(E,\phi)$ where - $E:\Omega \to X$ is a vector field; - $\phi:\Omega\to \TT$ is measurable; - $TE(\omega)=\phi (\omega) E(\omega)$ for every $\omega\in \Omega$. For example, if $E:\Lambda\to X$ is a continuous $\TT$-eigenvector field for $T$ defined on some compact set $\Lambda\subset\TT$ and if we put $\phi (\lambda)=\lambda$, then $(E,\phi)$ is a $\TT$-eigenfield for $T$ on $(\Lambda, \mathfrak B(\Lambda), m)$ for any Borel probability measure $m$ on $\Lambda$. The key fact about $\TT$-eigenfields is the following: if $(E,\phi)$ is a $\TT$-eigenfield for $T$ on $(\Omega,m)$ and if $M_\phi:L^2(\Omega, m)\to L^2(\Omega, m)$ is the (unitary) multiplication operator associated with $\phi$, then the *intertwining equation* $$TK_E=K_E M_\phi$$ holds. The proof is immediate. \[background\] Let $\mathbf S$ be a norm-closed ideal of $\ell^\infty (\ZZ_+)$, and let $T\in\mathcal L(X)$. Assume that one can find a $\TT$-eigenfield $(E,\phi)$ for $T$ defined on some measure space $(\Omega, m)$, such that - the operator $K_E : L^2(\Omega, m)\to X$ is gamma-radonifying; - the vector field $E$ is $m$-spanning; - for any $f\in L^1(\Omega, m)$, the image measure $\sigma_f=(f\, m)\circ\phi^{-1}$ is $\mathbf S$-continuous.
Then $T$ is $\mathbf S$-mixing in the Gaussian sense. More precisely, $T$ is $\mathbf S$-mixing with respect to $\mu_{K_E}$. As mentioned above, this proposition is essentially all that is needed to understand the proof of Theorem \[abstract\]. It will follow at once from the next two lemmas, which will also be used in the proof of Proposition \[converse\]. \[back1\] Let $T\in\mathfrak L(X)$, and let $K:\mathcal H\to X$ be gamma-radonifying. Let also $\mathbf S$ be a norm-closed ideal of $\ell^\infty (\ZZ_+)$. - The measure $\mu=\mu_{K}$ is $T$-invariant if and only if one can find an operator $M:\mathcal H\to\mathcal H$ such that $\mathcal H_{K}:=\mathcal H\ominus \ker(K)$ is $M^*$-invariant, $M^{*}$ is an isometry on $\mathcal H_K$ and $TK=KM$. - The operator $T$ is $\mathbf S$-mixing with respect to $\mu$ if and only if $M^{*}$ is *$\mathbf S$-mixing on $\mathcal H_{K}$*, i.e. the sequence $\left(\langle M^{*n}u,v\rangle_{\mathcal H}\right)_{n\geq 0}$ is in $\mathbf S$ for any $u,v\in \mathcal H_{K}$. Since $T$ is a linear operator, the image measure $\mu_{K}\circ T^{-1}$ is equal to $\mu_{TK}$. Taking the Fourier transforms, it follows that $\mu_K$ is $T$-invariant if and only if $$\Vert K^*(x^*)\Vert=\Vert K^*(T^*x^*)\Vert$$ for every $x^*\in X^*$. This means exactly that one can find an isometry $V:{\rm ran}(K^*)\to{\rm ran}(K^*)$ such that $K^*T^*=VK^*$. Since ${\overline{{\rm ran}(K^*)}}=\mathcal H\ominus \ker(K)$, this proves (1). As for (2), the basic idea is very simple. Recall that $T$ is $\mathbf S$-mixing with respect to $\mu$ if and only if $$\label{L_0} \mathcal F_+({\sigma_{f,g}})\in\mathbf S$$ for any $f,g\in L^2_0(\mu)$. When $f=x^*$ and $g=y^*$ are continuous linear functionals on $X$ (recall that $\mu$ is centred, so that $X^*\subset L^2_0(\mu)$) we have $\widehat\sigma_{f,g}(n)=\langle f\circ T^n, g\rangle_{L^2(\mu)}=\langle K^*T^{*n}(x^*), K^*y^*\rangle_{\mathcal H}=\langle M^{*n}(K^*x^*), K^*y^*\rangle_{\mathcal H}$ for every $n\geq 0$. Since $M^*$ is power-bounded on $\mathcal H_K$ and $\mathcal H_K={\overline{{\rm ran}(K^*)}}$, it follows that the $\mathbf S$-mixing property of $M^*$ on $\mathcal H_K$ is equivalent to the validity of (\[L\_0\]) for all continuous linear functionals $f,g$. So the point is to pass from linear functionals to arbitrary $f,g\in L^2_0(\mu)$. In the “classical” cases (weak and strong mixing), this can be done in at least two ways: either by reproducing a rather elementary argument of R. Rudnicki ([@R pp 108–109], see also [@GE] or [@BM Theorem 5.24]), or by the more abstract approach of [@BG3], which relies on the theory of *Fock spaces*. In the more general case we are considering, one possible proof would consist in merely copying out pp 5105–5108 of [@BG3]. The fact that $\mathbf S$ is norm-closed would be needed for the approximation argument on p. 5105, and the ideal property would be used on p. 5108 in the following way: if $(a_n)\in\mathbf S$ and if a sequence $(b_n)$ satisfies $\vert b_n\vert\leq C\, \vert a_n\vert$ for all $n\in\ZZ_+$ (and some finite constant $C$), then $(b_n)\in\mathbf S$. We outline a more direct approach, which is in fact exactly the same as in [@BG3] but without any algebraic apparatus. In what follows, we denote by ${\rm Re}(X^*)$ the set of all continuous, real-valued, real-linear functionals on $X$. For any $u\in{\rm Re}(X^*)$, we denote by $u^*$ the unique complex-linear functional with real part $u$, which is given by the formula $\langle u^*,x\rangle=u(x)-iu(ix)$.
First, we note that if (\[L\_0\]) holds for all continuous linear functionals, then it holds for all $f,g\in{\rm Re}(X^*)$. Indeed, using the invariance of $\mu$ under the transformation $x\mapsto ix$, it is easily checked that $\langle u,v\rangle_{L^2(\mu)}=\frac{1}{2}\,{\rm Re}\left(\langle u^*,v^*\rangle_{L^2(\mu)}\right)$ for any $u,v\in{\rm Re}(X^*)$. Applying this with $u:=f\circ T^n$, $n\geq0$ and $v:=g$ and since $\mathbf S$, being an ideal, is closed under taking real parts, it follows that if $f,g\in{\rm Re}(X^*)$ and $\mathcal F_+({\sigma_{f^*,g^*}})\in\mathbf S$ then $\mathcal F_+({\sigma_{f,g}})\in\mathbf S$. Now, let us denote by $H_k$, $k\geq 0$ the classical real *Hermite polynomials*, i.e. the orthogonal polynomials associated with the standard Gaussian distribution $\gamma=\frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}dt$ on $\RR$. For every $k\geq 0$, set $$\mathcal H_k:=\overline{\rm span}^{L^2(\mu)}\{ H_k(f);\; f\in \mathcal S\}\, ,$$ where $\mathcal S$ is the set of all $f\in{\rm Re}(X^*)$ such that $\Vert f\Vert_{L^2(\mu)}=1$. It is well known that $L^2(\mu)$ is the orthogonal direct sum of the subspaces $\mathcal H_k$, $k\geq 0$ (this is the so-called *Wiener chaos decomposition*) and hence that $L^2_0(\mu)=\bigoplus_{k\geq 1} \mathcal H_k$. Moreover, it is also well known that $$\langle H_k(u),H_k(v)\rangle_{L^2(\mu)}=\langle u,v\rangle_{L^2(\mu)}^k$$ for any $u,v\in\mathcal S$ and every $k\geq 0$ (see [@D Chapter 9], where the proofs are given in a Hilbert space setting but work equally well on a Fréchet space). Taking $u:=f\circ T^n$, $n\geq 0$ and $v:=g$, it follows that $\mathcal F_+(\sigma_{H_k(f),H_k(g)})=\left[ \mathcal F_+(\sigma_{f,g})\right]^k$ for any $f,g\in\mathcal S$ and all $k\geq 0$. Since $\mathbf S$ is a closed ideal in $\ell^\infty(\ZZ_+)$ and since the map $(f,g)\mapsto \mathcal F_+({\sigma_{f,g}})$ is continuous from $L^2(\mu)\times L^2(\mu)$ into $\ell^\infty(\ZZ_+)$ we conclude that (\[L\_0\]) does hold true for any $f,g\in L^2_0(\mu)$ as soon as it holds for linear functionals. It is apparent from the above proof that the implication “$T$ is $\mathbf S$-mixing implies $M^*$ is $\mathbf S$-mixing on $\mathcal H_K$" requires no assumption on the family $\mathbf S$. This will be used in the proof of Proposition \[converse\]. \[back2\] Let $M=M_{\phi}$ be a unitary multiplication operator on $\mathcal H=L^{2}(\Omega,m)$ associated with a measurable function $\phi:\Omega\to\TT$. Let also $\mathcal H_1\subset \mathcal H$ be a closed $M^*$-$\,$invariant subspace, and let us denote by $\mathcal H_1\cdot \mathcal H$ the set of all $f\in L^1(m)$ that can be written as $f=h_1h$, where $h_1\in \mathcal H_1$ and $h\in\mathcal H$. Finally, let $\mathbf S$ be an arbitrary subset of $\ell^\infty (\ZZ_+)$. Consider the following assertions: - $M^{*}$ is $\mathbf S$-mixing on $\mathcal H_1$; - for any $f\in\mathcal H_1\cdot\mathcal H$, the image measure $\sigma_f =(fm)\circ\phi^{-1}$ is $\mathbf S$-continuous; - $\mathbf 1_{\{ \phi \in D\}}h\perp \mathcal H_1$, for any $\mathbf S$-small analytic set $D\subset \TT$ and every $h\in\mathcal H$. Then [(i)]{} and [(ii)]{} are equivalent and imply [(iii)]{}. We note that (iii) makes sense because $\phi^{-1}(D)$ is $m$-measurable for any analytic $D\subset\TT$ (see [@K]). 
A straightforward computation shows that for any $u,v\in L^2(\Omega, m)$, the Fourier coefficients of $\sigma_{u\bar v}$ are given by $$\widehat\sigma_{u\bar v} (n)=\langle M^{*n} u,v\rangle_{\mathcal H}\, .$$ Moreover, since $\mathcal H_1$ is $M^*$-invariant, we have $$\langle M^{*n} h_1,h\rangle_{\mathcal H}=\langle M^{*n} h_1,\pi_{1}h\rangle_{\mathcal H}\hskip 1cm (n\geq 0)$$ for any $h_1\in\mathcal H_1$ and every $h\in \mathcal H$, where $\pi_1$ is the orthogonal projection onto $\mathcal H_1$. It follows that $M^*$ is $\mathbf S$-mixing on $\mathcal H_1$ if and only if $\mathcal F_+({\sigma_f})\in\mathbf S$ for any $f=h_1\bar h\in\mathcal H_1\cdot\mathcal H$. In other words, (i) and (ii) are equivalent. For any analytic set $D\subset\TT$ and $h,h_1\in\mathcal H$, we have $$\langle h_1,\mathbf 1_{\{\phi\in D\}} h\rangle_{\mathcal H}=\sigma_{h_1\bar h} (D)\, .$$ Hence, (iii) says exactly that $\sigma_f(D)=0$ for any $f\in\mathcal H_1\cdot\mathcal H$ and every $\mathbf S$-small analytic set $D\subset \TT$. From this, it is clear that (ii) implies (iii). Properties (i), (ii), (iii) are in fact equivalent in the strong mixing case, i.e. $\mathbf S=c_0(\ZZ_+)$. Indeed, by a famous theorem of R. Lyons ([@L], see also [@KL]), a positive measure $\sigma$ is Rajchman *if and only if* $\sigma (D)=0$ for every Borel set of extended uniqueness $D\subset\TT$. From this, it follows that if (iii) holds then $\sigma_f$ is Rajchman for every nonnegative $f\in\mathcal H\cdot \mathcal H_1$, and it is easy to check that this implies (ii). Let $M_\phi :L^2(\Omega ,m)\to L^2(\Omega, m)$ be the unitary multiplication operator associated with $\phi$. Since $TK_E=K_E M_\phi\, ,$ the Gaussian measure $\mu=\mu_{K_E}$ is $T$-invariant by Lemma \[back1\]. Moreover, $\mu$ has full support since $K_E$ has dense range (by (b)), and $T$ is $\mathbf S$-mixing with respect to $\mu$ by Lemmas \[back1\] and \[back2\]. An examination of the proof reveals that assumption (c) in Proposition \[background\] is a little bit stronger than what is actually needed: since $K_E^*:X^*\to L^2(\Omega,m)$ is given by $K_E^*x^*=\overline{\langle x^*, E(\,\cdot\,)\rangle}$, it is enough to assume that the measure $(fm)\circ\phi^{-1}$ is $\mathbf S$-continuous for every $f\in L^1(\Omega,m)$ of the form $f=\langle x^*, E(\,\cdot\,)\rangle\,\overline{\langle y^*, E(\,\cdot\, )\rangle}$, where $x^*,y^*\in X^*$. However, the proposition is easier to remember as stated. Somewhat ironically, it will follow from Theorem \[abstract\] that the seemingly crucial gamma-radonifying assumption (a) in Proposition \[background\] is in fact not necessary to ensure $\mathbf S$-mixing in the Gaussian sense. Indeed, it is easily checked that if one can find a $\TT$-eigenfield for $T$ satisfying (b) and (c), then the $\TT$-eigenvectors of $T$ are $\mathbf S$-perfectly spanning. In fact, the proof of Theorem \[abstract\] essentially consists in showing that if there exists a $\TT$-eigenfield satisfying (b) and (c), then it is possible to construct one satisfying (b), (c) *and* (a). An “exhaustion” lemma {#Sophiesection} --------------------- Besides Proposition \[background\], the following lemma will also be needed in the proof of Theorems \[abstract\] and \[abstracteasy\].
Finally, put $$\mathbf V:=\{ (x,\lambda)\in X\times \TT;\; x\neq 0\;{\rm and}\; T(x)=\lambda x\}\, .$$ Then, one can find a closed subset $\mathbf Z$ of $\mathbf V$ with the following properties: - for any (relatively) open set $O\subset\mathbf V$ such that $O\cap\mathbf Z\neq\emptyset$, the set $\pi_2(O\cap\mathbf Z):=\{ \lambda\in\TT;\;\exists x\in X\;:\; (x,\lambda)\in O\cap\mathbf Z\}$ is not $\mathcal B$-small; - the linear span of $\pi_1(\mathbf Z):=\{ x\in X;\; \exists\lambda\in\TT\;:\; (x,\lambda)\in\mathbf Z\}$ is dense in $X$. Let us denote by $\mathbf O$ the union of all relatively open sets $O\subset \mathbf V$ such that $\pi_2(O)$ is $\mathcal B$-small, and set $\mathbf Z:=\mathbf V\setminus \mathbf O$. Then $\mathbf Z$ is closed in $\mathbf V$ and satisfies the first required property (by its very definition). Moreover, by the Lindelöf property and since any countable union of $\mathcal B$-small sets is $\mathcal B$-small, the set $D:=\pi_2(\mathbf O)$ is $\mathcal B$-small; and $D$ is an analytic set because $\mathbf O$ is Borel in $X\times \TT$. By assumption on $T$, the linear span of $\bigcup_{\lambda\in\TT\setminus D}\ker (T-\lambda)$ is dense in $X$. Now, any $\TT$-eigenvector $x$ for $T$ is in $\pi_1(\mathbf V)$, and if the associated eigenvalue $\lambda$ is not in $D$ then $(x,\lambda)\in\mathbf Z$ by the definition of $D$. It follows that $\ker (T-\lambda)\setminus\{ 0\}$ is contained in $\pi_1(\mathbf Z)$ for any $\lambda\in\TT\setminus D$, and hence that $\overline{\rm span}\,( \pi_1(\mathbf Z))=X$. Despite its simplicity, Lemma \[exhaust\] will be quite useful for us. We illustrate it by proving the following result. \[perfect\] Let $\mathcal B\subset \mathcal M(\TT)$ be hereditary with respect to absolute continuity, and let $T\in\mathfrak L(X)$. The following are equivalent: 1. the $\TT$-eigenvectors of $T$ are $\mathcal B$-perfectly spanning for analytic sets; 2. one can find a countable family of [continuous]{} $\TT$-eigenvector fields $(E_i)_{i\in I}$ for $T$, where $E_i:\Lambda_i\to X$ is defined on some $\mathcal B$-perfect set $\Lambda_i\subset\TT$, such that ${\rm span}\left(\bigcup_{i\in I} E_i(\Lambda_i)\right)$ is dense in $X$. Since the $E_i$ are assumed to be continuous in (ii), it is plain that (ii) implies (i). Conversely, assume that (i) holds true, and let $\mathbf Z$ be the set given by Lemma \[exhaust\]. Choose a countable dense set $\{ (x_i,\lambda_i);\; i\in\NN\}\subset\mathbf Z$, and let $({\varepsilon}_i)_{i\in\NN}$ be any sequence of positive numbers tending to $0$. Let also $d$ be a compatible metric on $X$ and put $B_i:=\{ x\in X;\; d(x,x_i)<{\varepsilon}_i\}$. By the definition of $\mathbf Z$ and since $\mathcal B$ is hereditary with respect to absolute continuity, one can find, for each $i\in\NN$, a probability measure $\sigma_i\in\mathcal B$ such that $${\rm supp}(\sigma_i)\subset\{\lambda\in\TT;\; \exists x\in B_i\; :\; T(x)=\lambda x\}\, .$$ Put $\Lambda_i:={\rm supp}(\sigma_i)$, so that $\Lambda_i$ is $\mathcal B$-perfect. The set $$A_i=\{ (x,\lambda)\in B_i\times\Lambda_i;\; T(x)=\lambda x\}$$ is closed in the Polish space $B_i\times\Lambda_i$ and projects onto $\Lambda_i$. By the Jankov–von Neumann uniformization theorem (see [@K 18.A]), one can find a universally measurable map $E_i:\Lambda_i\to B_i$ such that $(E_i(\lambda), \lambda)\in A_i$ for every $\lambda\in \Lambda_i$.
In other words we have a universally measurable $\TT$-eigenvector field $E_i:\Lambda_i\to X$ such that $d(E_i(\lambda),x_i)< {\varepsilon}_i$ for every $\lambda\in\Lambda_i$. By Lusin’s theorem on measurable functions (see [@K 17.D]), one can find a closed set $\Lambda'_i$ of positive $\sigma_i$-measure such that $(E_i)_{\vert \Lambda'_i}$ is continuous. So, upon replacing the measure $\sigma_i$ with its restriction $\sigma'_i$ to $\Lambda'_i$ (which is still in $\mathcal B$ since it is absolutely continuous with respect to $\sigma_i$) and $\Lambda_i$ with ${\rm supp}(\sigma_i')$, we may assume that in fact each $E_i$ is *continuous*. Since ${\rm span}\left(\bigcup_{i\in \NN} E_i(\Lambda_i)\right)$ is dense in $X$, the proof is complete. When $\mathcal B$ is the family of continuous measures, the equivalence of (i) and (ii) is proved in [@Sophie Proposition 4.2]. The weak mixing case {#WMcase} ==================== In this section, we concentrate on part (1) of Theorem \[WS\]. So we assume that the $\TT$-eigenvectors of our operator $T\in\mathfrak L(X)$ are perfectly spanning, and we want to show that $T$ is weakly mixing in the Gaussian sense. For simplicity, we assume that $X$ is a *Banach* space in order to avoid working with sequences of continuous semi-norms. Why things may not be obvious ----------------------------- Before embarking on the proof, let us briefly explain why it could go wrong. In view of Proposition \[background\], we need to construct a $\TT$-eigenfield $(E,\phi)$ for $T$ on some measure space $(\Omega ,m)$ such that - the operator $K_E : L^2(\Omega, m)\to X$ is gamma-radonifying; - the vector field $E$ is $m$-spanning; - for any $f\in L^1(\Omega, m)$, the image measure $\sigma_f=(f\, m)\circ\phi^{-1}$ is continuous. To achieve (a) and (b), the most brutal way is to choose some total sequence $(x_n)_{n\in\NN}\subset B_X$ consisting of $\TT$-eigenvectors for $T$, say $T(x_n)=\lambda_n x_n$, to take $\Omega=\NN$ with the counting measure $m$, and to define $(E,\phi)$ by $E(n)=2^{-n}x_n$ and $\phi (n)=\lambda_n$. Then the vector field $E$ is obviously $m$-spanning, and the operator $K_E$ is gamma-radonifying since $K_E(e_n)=2^{-n}x_n$ and hence $\sum_0^\infty\Vert K_E(e_n)\Vert<\infty$. (Here, of course, $(e_n)$ is the canonical basis of $L^2(\Omega,m)=\ell^2(\NN)$). However, the image measure $\sigma =m\circ\phi^{-1}$ is purely atomic, so (c) is certainly not satisfied. (In fact, by [@Sophie Theorem 5.1], $T$ will never be ergodic with respect to any measure induced by a random variable of the form $\xi=\sum_0^\infty \chi_n x_n$, where $(x_n)$ is a sequence of $\TT$-eigenvectors for $T$ and $(\chi_n)$ is a sequence of independent, rotation-invariant centred random variables). To get (c) we must take a more complicated measure space $(\Omega, m)$ and avoid atoms; but then it will be harder to show that an operator defined on $L^2(\Omega, m)$ is gamma-radonifying. Thus, conditions (a) and (c) go in somewhat opposite directions, and we have to find a kind of balance between them. In [@BG2] and [@BM], the measure space $(\Omega,m)$ was an open arc of $\mathbb T$ (with the Lebesgue measure), and the difficulty was partially settled by requiring some regularity on the $\TT$-eigenvector fields and combining this with the geometrical properties of the underlying Banach space. In the present paper, we will allow more freedom on the measure space, and this will enable us to get rid of any assumption on $X$. The main part of the proof is divided into two steps. 
We shall first explain how to produce gamma-radonifying operators of the form $K_E$ in a reasonably flexible way, and then we show how this can be used to construct suitable $\TT$-eigenfields for $T$ under the perfectly spanning assumption. Once this has been done, the conclusion follows easily. How to construct gamma-radonifying operators {#bordel01} -------------------------------------------- The first part of our program relies essentially on the following observation. Let $G$ be a compact metrizable abelian group with normalized Haar measure $m_G$ and dual group $\Gamma$, and let $E:G\to X$ be a vector field on $(G, m_G)$. Then the operator $K_E:L^2(G,m_G)\to X$ is gamma-radonifying as soon as $$\sum_{\gamma\in \Gamma} \Vert\widehat E(\gamma )\Vert <\infty\, ,$$ where $\widehat E$ is the Fourier transform of $E$. This is indeed obvious since the characters of $G$ form an orthonormal basis $(e_\gamma)_{\gamma\in \Gamma}$ of $L^2(G,m_G)$ and $K_E(e_\gamma)=\widehat E(\gamma )$ for every $\gamma\in\Gamma$. The compact group we shall use is the usual *Cantor group* $G:=\{ 0,1\}^\NN$, where addition is performed modulo 2. The dual group $\Gamma$ is identified with the set of all finite subsets of $\NN$, which we denote by $\rm FIN$. A set $I\in {\rm FIN}$ corresponds to the character $\gamma_I\in\Gamma$ defined by the formula $$\gamma_I(\omega)=\prod_{i\in I} {\varepsilon}_i(\omega)\, ,$$ where ${\varepsilon}_i(\omega_0, \omega_1, \dots )=(-1)^{\omega_i}$. An empty product is declared to be equal to $1$, so that $\gamma_\emptyset=\mathbf 1$. It is common knowledge that any “sufficiently regular” function has an absolutely convergent Fourier series. In our setting, regularity will be quantified as follows. Let us denote by $d$ the usual ultrametric distance on $G$, $$d(\omega, \omega')=2^{-n(\omega,\omega')}\, ,$$ where $n(\omega,\omega')$ is the first $i$ such that $\omega_i\neq \omega'_i$. We shall say that a map $E:G\to X$ is *super-Lipschitz* if it is $\alpha$-Hölderian for some $\alpha >1$, i.e. $$\Vert E(\omega)-E(\omega')\Vert \leq C\, d(\omega,\omega')^\alpha$$ for any $\omega, \omega'\in G$ and some finite constant $C$. (Of course, there are no nonconstant super-Lipschitz maps on $\TT$; but life is different on the Cantor group). The following lemma is the kind of result that we need. \[halfkey0\] If $E: G\to X$ is super-Lipschitz, then $E$ has an absolutely convergent Fourier series. Assume that $E$ is $\alpha$-Hölderian, $\alpha >1$, with constant $C$. The key point is the following fact: for any $n\in\NN$, if $I\in{\rm FIN}$ satisfies $I\ni n$, then $\Vert\widehat E(\gamma_I)\Vert\leq (C/2)\times 2^{-\alpha n}\, .$ Indeed, setting $\mathcal F_n:=\{ I\in{\rm FIN};\; I\neq\emptyset\;{\rm and}\;\max I=n\}$ (which has cardinality $2^n$), this yields that $$\sum_{I\in\mathcal F_n} \Vert \widehat E(\gamma_I)\Vert\leq 2^n\times (C/2)\times 2^{-\alpha n}=(C/2)\times 2^{-\beta n}$$ for all $n\geq 0$ (where $\beta=\alpha-1>0$), and the result follows. To prove the fact, write $I=J\cup\{ n\}$, where $J\in {\rm FIN}$ and $n\not\in J$.
Then $$\langle \gamma_I,\omega\rangle = {\varepsilon}_n(\omega) \langle \gamma_J,\omega\rangle\, ,$$ and hence $$\begin{aligned} \widehat E(\gamma_I)&=&\int_{\{\omega;\; {\varepsilon}_n(\omega)=1\}}\langle \gamma_J,\omega\rangle\, E(\omega)\, dm_G(\omega) - \int_{\{\omega;\; {\varepsilon}_n(\omega)=-1\}}\langle \gamma_J,\omega\rangle\, E(\omega)\, dm_G(\omega)\\ &=&\int_{\{\omega;\; {\varepsilon}_n(\omega)=1\}} \langle \gamma_J,\omega\rangle\, \left[ E(\omega)-E(\omega+\delta^n)\right] dm_G(\omega) ,\end{aligned}$$ where $\delta^n\in G$ is the sequence which is $0$ everywhere except at the $n$-th coordinate. By assumption on $E$, we know that $\Vert E(\omega)-E(\omega+\delta^n)\Vert\leq C\times 2^{-\alpha n}$; and since the set $\{ {\varepsilon}_n=1\}$ has measure $1/2$, this concludes the proof. Construction of suitable $\TT$-eigenfields {#bordel02} ------------------------------------------ The second part of our program is achieved by the following lemma. \[superkey0\] Put $\mathbf V :=\{ (x,\lambda)\in X\times \TT;\; x\neq 0\;{\rm and}\; T(x)=\lambda x\}$, and let $\bf Z\subset \mathbf V$. Assume that for any (relatively) open set $O\subset\mathbf V$ such that $O\cap\mathbf Z\neq\emptyset$, the set $\{ \lambda\in \TT;\;\exists x\, :\, (x,\lambda)\in O\cap\mathbf Z\}$ is uncountable. Then, for any $(x_0,\lambda_0)$ in $\bf Z$ and ${\varepsilon}>0$, one can construct a $\TT$-eigenfield $(E,\phi)$ for $T$ on the Cantor group $(G, m_G)$ such that: $\Vert E(\omega )-x_0\Vert<{\varepsilon}$ for all $\omega\in G$; $E:G\to X$ is super-Lipschitz; and $\phi:G\to\TT$ is a homeomorphic embedding. This is a standard Cantor-like construction. Let us denote by $\mathcal S$ the set of all finite $0$-$1$ sequences (including the empty sequence $\emptyset$). We write $\vert s\vert$ for the length of a sequence $s\in\mathcal S$, and if $i\in\{ 0,1\}$ we denote by $si$ the sequence “$s$ followed by $i$”. Put $V_\emptyset:=\TT$ and choose an open set $U_\emptyset\subset X$ with $x_0\in U_\emptyset$ and $\diam (U_\emptyset)<{\varepsilon}$. Let us also fix an arbitrary Hölder exponent $\alpha >1$. We construct inductively two sequences of open sets $(U_s)_{s\in\mathcal S}$ in $X$ and $(V_s)_{s\in\mathcal S}$ in $\TT$, such that the following properties hold for all $s\in\mathcal S$ and $i\in\{ 0,1\}$. 1. $\overline{U_{si}}\subset U_s$ and $\overline{V_{si}}\subset V_s$; 2. $\overline{U_{s0}}\cap\overline{U_{s1}}=\emptyset$ and $\overline{V_{s0}}\cap\overline{V_{s1}}=\emptyset$; 3. $\diam(U_s)\leq 2^{-\alpha\vert s\vert}$ and $\diam (V_{si})\leq\frac 12\, \diam (V_s)$; 4. $(U_s\times V_s)\cap\mathbf Z\neq \emptyset$. The inductive step is easy. Assume that $U_s$ and $V_s$ have been constructed for some $s$. Since $(U_s\times V_s)\cap\mathbf Z\neq \emptyset$, we know that the set $\{ \lambda\in \TT;\;\exists x\, :\, (x,\lambda)\in (U_s\times V_s)\cap \mathbf Z\}$ is uncountable, and hence contains at least 2 distinct points $\lambda^0, \lambda^1$. Pick $x^i\in U_{s}$ such that $(x^i,\lambda^i)\in\mathbf Z$. Then $x^0\neq x^1$ because $T(x^i)=\lambda^i x^i$ and $x^i\neq 0$. Thus, one may choose small enough open sets $U_{si}\ni x^i$ and $V_{si}\ni \lambda^i$ to ensure (i),$\,\dots $, (iv) at steps $s0$ and $s1$. For any $\omega\in G$ and $n\in\NN$, let us denote by $\omega_{\vert n}$ the finite sequence $(\omega_0,\dots ,\omega_n)\in\mathcal S$.
By (i), (ii), (iii), the intersection $\bigcap_{n\geq 0} U_{\omega_{\vert n}}$ is a single point $\{ x_\omega\}$, and the intersection $\bigcap_{n\geq 0} V_{\omega_{\vert n}}$ is a single point $\{ \lambda_\omega\}$. Moreover, the map $\phi:\omega\mapsto \lambda_{\omega}$ is a homeomorphic embedding, and the map $E:\omega\mapsto x_\omega$ is $\alpha$-Hölderian and hence super-Lipschitz (by (iii)). Finally, it follows easily from the continuity of $T$ that $T(x_\omega)=\lambda_\omega x_\omega$ for every $\omega\in G$. In other words, $(E,\phi)$ is a $\TT$-eigenfield for $T$. Since $\Vert E(\omega)-x_0\Vert\leq \diam(U_\emptyset)<{\varepsilon}$ for all $\omega\in G$, this concludes the proof. The proof {#proofWM} --------- We now just have to put the pieces together. Let us recall what we are trying to do: we are given an operator $T\in\mathfrak L(X)$ whose $\TT$-eigenvectors are perfectly spanning, and we want to show that $T$ is weakly mixing in the Gaussian sense. Let $\mathbf V :=\{ (x,\lambda)\in X\times \TT;\; x\neq 0\;{\rm and}\; T(x)=\lambda x\}$. Applying Lemma \[exhaust\] to the family of continuous measures, we get a closed set $\mathbf Z\subset \mathbf V$ with the following properties: - $\mathbf Z$ satisfies the assumption of Lemma \[superkey0\], i.e. for any (relatively) open set $O\subset \mathbf V$ such that $O\cap \mathbf Z\neq \emptyset$, the set $\{ \lambda\in \TT;\;\exists x\, :\, (x,\lambda)\in O\cap\mathbf Z\}$ is uncountable; - ${\rm span}\left( \pi_1(\mathbf Z)\right)$ is dense in $X$. Let us fix a countable dense set $\{ (x_i,\lambda_i);\; i\in\NN\}\subset\mathbf Z$, and let us apply Lemma \[superkey0\] to each point $(x_i,\lambda_i)$, with ${\varepsilon}_i=2^{-i}$. Taking Lemma \[halfkey0\] into account and since ${\varepsilon}_i\to 0$, this gives a sequence of $\TT$-eigenfields $(E_i,\phi_i)$ defined on $(\Omega_i, m_i):=(G,m_G)$, such that - each operator $K_{E_i} :L^2(\Omega_i, m_i)\to X$ is gamma-radonifying; - each $E_i$ is continuous and $\overline{\rm span}\left(\bigcup_{i\in\NN} E_i(\Omega_i)\right)=X$; - each $\phi_i$ is one-to-one. Now, let $(\Omega, m)$ be the “disjoint union” of the measure spaces $(\Omega_i,m_i)$ (so that $L^2(\Omega, m)$ is the $\ell^2$-direct sum $\oplus_i L^2(\Omega_i, m_i)$). Choose a sequence of small positive numbers $(\alpha_i)_{i\in\NN}$, and define a $\TT$-eigenfield $(E,\phi)$ on $(\Omega, m)$ as follows: $E(\omega_i)=\alpha_i E_i(\omega_i)$ and $\phi(\omega_i)=\phi_i(\omega_i)$ for each $i$ and every $\omega_i\in\Omega_i$. If the $\alpha_i$ are small enough, then $E$ is indeed in $L^2(\Omega, m)$ and (using property (1)) the operator $K_E:L^2(\Omega, m)\to X$ is gamma-radonifying. By (2) and since $m$ has full support, the vector field $E$ is $m$-spanning and hence the operator $K_E$ has dense range. Moreover, since $TK_{E_i}=K_{E_i}M_{\phi_i}$ for all $i$, the intertwining equation $TK_E=K_EM_\phi$ holds. Finally, since $(\Omega_i,m_i)$ is atomless, it follows from (3) that each measure $m_{i}\circ\phi_i^{-1}$ is continuous, and hence that $\sigma=m\circ\phi^{-1}$ is continuous as well. By Proposition \[background\], we conclude that $T$ is weakly mixing in the Gaussian sense. Proof of the abstract results (1) {#proofabstract1} ================================= In this section, we prove Theorem \[abstract\].
The proof runs along exactly the same lines as that of the weak mixing case: we first explain how to produce gamma-radonifying operators of the form $K_E$ in a flexible way, then we show how this can be used to construct suitable $\TT$-eigenfields for $T$ under the $\mathbf S$-perfectly spanning assumption, and the conclusion follows easily. However, the first two steps are technically more involved than the corresponding ones from the weak mixing case, and the difficulty is no less important if one is interested in the strong mixing case only. Roughly speaking, the main reason is that it is much harder for a measure to be Rajchman, or more generally to be $\mathbf S$-continuous, than to be merely continuous. To avoid artificial complications, we shall first assume that the underlying space $X$ is a Banach space (which allows us to dispense with sequences of semi-norms), and we will indicate only at the very end of the proof how it can be adapted in the Fréchet space case (this is really a matter of very minor changes). Thus, in most of this section, $T$ is a linear operator acting on a complex separable Banach space $X$. We are also given a $c_0$-like, translation-invariant ideal $\mathbf S\subset\ell^\infty (\ZZ_+)$ such that every $\mathbf S$-continuous measure is continuous. We are assuming that the $\TT$-eigenvectors of $T$ are $\mathbf S$-perfectly spanning, and our aim is to show that $T$ is $\mathbf S$-mixing in the Gaussian sense. How to construct gamma-radonifying operators {#bordel1} -------------------------------------------- Just as in the weak mixing case, the guiding idea of the first part of our program is the observation that we made in sub-section \[bordel01\], namely that if $E:\Omega\to X$ is a vector field defined on a compact abelian group $\Omega$ (with Haar measure $m$ and dual group $\Gamma$), then the operator $K_E:L^2(\Omega,m)\to X$ is gamma-radonifying as soon as $$\sum_{\gamma\in \Gamma} \Vert\widehat E(\gamma )\Vert <\infty\, .$$ Unfortunately, we are not able to use this observation as stated. Instead of compact groups, we will be forced to use some slightly more complicated objects $(\Omega ,m)$, due to the structure of the inductive construction that will be performed in the *second part* of our program. Let us denote by $\mathfrak Q$ the set of all finite sequences of integers $\bar q=(q_1,\dots ,q_l)$ with $q_s\geq 2$ for all $s$. For any $\bar q=(q_1,\dots ,q_l)\in\mathfrak Q$, we put $$\Omega(\bar q):=\bigsqcup_{s=1}^l\Omega(q_s)\, ,$$ where $\Omega (q)=\{ \xi\in\TT;\; \xi^{q}=\bf 1\}$ is the group of all $q$-th roots of $\bf 1$, and the symbol $\sqcup$ stands for a “disjoint union”. The following notation will be useful: for any $\bar q=(q_1,\dots ,q_l)\in\mathfrak Q$, we set $$l(\bar q)=l\quad{\rm and}\quad w(\bar q) =q_1+\dots +q_l\, .$$ In particular, $\Omega(\bar q)$ has cardinality $w( \bar q)$. We endow each finite group $\Omega (q)$ with its normalized Haar measure $m_q$ (i.e. the normalized counting measure), and each $\Omega(\bar q)=\Omega (q_1,\dots ,q_l)$ with the probability measure $$m_{\bar q}=\frac{1}{l(\bar q)}\sum_{s=1}^{l(\bar q)} m_{q_s}\, .$$ Here, of course, $\Omega (q_s)$ is considered as a subset of $\Omega(\bar q)$, so that the measures $m_{q_s}$ are disjointly supported.
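To fix ideas, here is a small illustrative example (given only for concreteness). If $\bar q=(2,3)$, then $l(\bar q)=2$ and $w(\bar q)=5$, and $\Omega(\bar q)$ is the $5$-point set obtained as the disjoint union of $\Omega (2)=\{ 1,-1\}$ and $\Omega (3)=\{ 1, e^{2i\pi /3}, e^{4i\pi /3}\}$; the measure $m_{\bar q}$ gives mass $\frac 12\times\frac 12=\frac 14$ to each of the two points of $\Omega (2)$ and mass $\frac 12\times\frac 13=\frac 16$ to each of the three points of $\Omega (3)$, so that the total mass is indeed $1$.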
For any infinite sequence $\mathbf q=(\bar q_n)_{n\geq 1}\in\mathfrak Q^\NN$, we put $$\Omega (\mathbf q):=\prod_{n=1}^\infty \Omega (\bar q_n)\, ,$$ and we endow $\Omega (\mathbf q)$ with the product measure $$m_{\mathbf q}=\bigotimes_{n=1}^\infty m_{\bar q_n}\, .$$ Given $\mathbf q=(\bar q_n)_{n\geq 1}\in \mathfrak Q^\NN$, it is not difficult to describe a “canonical" orthonormal basis for $L^2(\Omega (\mathbf q),m_{\mathbf q})$. However, we have to be careful with the notation. For each sequence $\bar q=(q_1,\dots ,q_l)\in\mathfrak Q$, let us denote by $\Gamma (\bar q)$ the disjoint union $\bigsqcup_{s=1}^l \Gamma(q_s)$, where $\Gamma (q)$ is the dual group of $\Omega (q)$, $\Gamma(q)=\ZZ_q$. For any $\gamma_s\in \Gamma (q_s)\subset\Gamma (\bar q)$, define a function $e_{\gamma_s}:\Omega (\bar q)\to \CC$ by $$e_{\gamma_s} (\xi)=\left\{ \begin{array}{cl} \sqrt{l(\bar q)}\, \langle \gamma_s, \xi\rangle & {\rm if\;\;}\xi\in\Omega (q_s),\\ 0&{\rm otherwise.} \end{array} \right.$$ Since the characters of $\Omega (q)$ form an orthonormal basis of $L^2(\Omega (q), m_q)$ for every positive integer $q$, it is clear that the $e_{\gamma_s}$, $\gamma_{s}\in\Gamma (\bar q)$ form an orthonormal basis of $L^2(\Omega (\bar q), m_{\bar q})$ for every $\bar q\in \mathfrak Q$. Now, write each $\bar q_n$ as $\bar q_n=(q_{n,1}, \dots ,q_{n,l_n})$, and let us denote by $\Gamma (\mathbf q)$ the set of all finite sequences of the form $\gamma=(\gamma_{s_1},\dots ,\gamma_{s_N})$ with $\gamma_{s_n}\in\Gamma (q_{n,s_n})\subset \Gamma (\bar q_n)$ for all $n$ and $\gamma_{s_N}\neq 0$. For any $\gamma=(\gamma_{s_1},\dots ,\gamma_{s_N})\in\Gamma (\mathbf q)$, we define $e_\gamma :\Omega (\mathbf q)\to \CC$ as expected: $$e_\gamma (\omega)=\prod_{n=1}^N e_{\gamma_{s_n}}(\omega _n)\, .$$ In other words, $e_\gamma=e_{\gamma_{s_1}}\otimes\cdots\otimes e_{\gamma_{s_N}}$. We also include the empty sequence $\emptyset$ in $\Gamma (\mathbf q)$ and we put $e_\emptyset=\mathbf 1$. The following lemma is essentially obvious: The family $(e_\gamma)_{\gamma\in\Gamma (\mathbf q)}$ is an orthonormal basis of $L^2(\Omega (\mathbf q), m_{\bf q})$, for any $\mathbf q\in\mathfrak Q^\NN$. We note that if $\mathbf q=(\bar q_n)_{n\geq 1}\in\mathfrak Q^\NN$ and each sequence $\bar q_n$ has length $1$, $\bar q_n=(q_n)$ for some $q_n\geq 2$, then $(\Omega (\mathbf q), m_{\mathbf q})$ is just the compact group $\prod_{n\geq 1} \Omega (q_n)$ with its normalized Haar measure, and $\Gamma (\mathbf q)$ “is" the character group of $\Omega (\mathbf q)$. Moreover, if $E:\Omega(\mathbf q)\to X$ is a vector field on $(\Omega (\mathbf q), m_{\mathbf q})$ then $K_E(e_\gamma)$ is the Fourier coefficient $\widehat E(\gamma )$, for any $\gamma\in\Gamma (\mathbf q)$. Accordingly, we shall use the following notation for an *arbitrary* $\mathbf q\in\mathfrak Q^\NN$: given a vector field $E:\Omega(\mathbf q)\to X$ on $(\Omega (\mathbf q), m_{\mathbf q})$, we set $$\widehat E(\gamma )=K_E(e_\gamma)$$ for every $\gamma\in\Gamma (\mathbf q)$. Next, we introduce a partially defined “metric" on every $\Omega (\mathbf q)$, as follows. Write $\mathbf q=(\bar q_n)_{n\geq 1}$ and $\bar q_n=(q_{n,1},\dots , q_{n,l_n})$. If $\omega=(\omega_n)$ and $\omega'=(\omega'_n)$ are distinct elements of $\Omega (\bf q)$, let us denote by $n(\omega, \omega')$ the smallest $n$ such that $\omega_n\neq \omega'_n$. 
If $\omega_n$ and $\omega'_n$ are in the same $\Omega (q_{n(\omega,\omega'), s})$, we denote this $s$ by $s(\omega,\omega')$ and we put $$d_{\mathbf q}(\omega,\omega'):=\frac{1}{w(\bar q_1) \cdots w(\bar q_{n(\omega,\omega')-1})}\times \frac{1}{l_{n(\omega,\omega')}^{1/4}\, q_{n(\omega,\omega'), s(\omega,\omega')}}\, \cdot$$ (The value of an empty product is declared to be $1$). Otherwise, $d_{\mathbf q} (\omega,\omega')$ is not defined. Let $\mathbf q\in\mathfrak Q^\NN$. We shall say that a map $E:\Omega(\mathbf q)\to X$ is *super-Lipschitz* if $$\Vert E(\omega)-E(\omega')\Vert\leq C\, d_{\mathbf q} (\omega,\omega')^2$$ for some finite constant $C$, whenever $d_{\mathbf q}(\omega,\omega')$ is defined. The terminology is arguably not very convincing, since super-Lipschitz maps need not be continuous. Moreover, the definition of $d_{\bf q}$ may look rather special, mainly because of the strange term $l_{n(\omega,\omega')}^{1/4}$. We note, however, that when all sequences $\bar q_n$ have length $1$, say $\bar q_n=(q_n)$, then $d_{\mathbf q}$ is a quite natural true (ultrametric) distance on the Cantor-like group $\Omega(\mathbf q)=\prod_{n\geq 1} \Omega (q_n)$: $$d_{\mathbf q}(\omega,\omega')=\frac{1}{q_1\cdots q_{n(\omega, \omega')}}\, \cdot$$ In this case, the terminology “super-Lipschitz" seems adequate (forgetting that we allowed arbitrary Hölder exponents $\alpha >1$ in the weak mixing case). More importantly, the following lemma is exactly what is needed to carry out the first part of our program. \[halfkey\] Let $\mathbf q\in\mathfrak Q^\NN$. If $E:\Omega(\mathbf q)\to X$ is a super-Lipschitz vector field on $(\Omega (\mathbf q), m_{\mathbf q})$, then $$\displaystyle\sum_{\gamma\in\Gamma(\mathbf q)} \Vert \widehat E(\gamma)\Vert<\infty\, .$$ Throughout the proof we put $\mathbf q=(\bar q_n)_{n\geq 1}$, and each $\bar q_n$ is written as $\bar q_n=(q_{n,1},\dots ,q_{n, l_n})$. For notational simplicity, we write $\Gamma$ instead of $\Gamma (\mathbf q)$. We denote by $\vert\gamma\vert$ the length of a sequence $\gamma\in\Gamma$. That is, $\vert\emptyset\vert=0$ and $\vert \gamma\vert =N$ if $\gamma=(\gamma_{s_1}, \dots ,\gamma_{s_N})$ with $\gamma_{s_n}\in \Gamma (q_{n,s_n})$ for all $n$ and $\gamma_{s_N}\neq 0$. Then $\Gamma$ is partitioned as $$\Gamma=\bigcup_{N=0}^\infty \Gamma_N\,$$ where $\Gamma_N=\{\gamma\in\Gamma;\; \vert\gamma\vert=N\}$. For any $\gamma=(\gamma_{s_1},\dots ,\gamma_{s_N})\in\Gamma_N$, we put $s(\gamma)=s_N$. In other words, $s(\gamma)$ is the unique $s\in\{ 1,\dots ,l_N\}$ such that the $N$-th coordinate of $\gamma$ belongs to $\Gamma(q_{N, s})$. If $E:\Omega(\mathbf q)\to X$ is super-Lipschitz with constant $C$, then $$\Vert\widehat E(\gamma)\Vert\leq \frac{C}{l_N}\,\sqrt{l_1\cdots l_{N-1}} \prod_{n\leq N-1} w(\bar q_n)^{-2}\times \frac{1}{q_{N,s(\gamma)}^2}$$ for every $\gamma\in \Gamma_N$, $N\geq 1$. Let us fix $\gamma=(\gamma_{s_1},\dots ,\gamma_{s_N})\in\Gamma_N$, so that $s(\gamma)=s_N$. For notational simplicity (again), we put $q_\gamma=q_{N, s_N}=q_{N,s(\gamma)}$. For any $\xi\in\Omega (q_{\gamma})$ and $\omega\in\Omega(\mathbf q)$ with $\omega_N\in\Omega (q_{\gamma})$, let us denote by $\xi\omega$ the element $\omega'$ of $\Omega(\mathbf q)$ defined by $\omega'_n=\omega_n$ if $n\neq N$ and $\omega'_N=\xi\omega_N$.
Then $$\begin{aligned} \widehat E(\gamma)&=&\int_{\{\omega;\;\omega_N\in\Omega(q_{\gamma})\}} e_\gamma (\omega)\, E(\omega)\, dm_{\bf q}(\omega)\\ &=&\sum_{\xi\in\Omega(q_{\gamma})}\int_{\{\omega;\; \omega_N=\xi\}}e_\gamma (\omega)\, E(\omega)\, dm_{\bf q}(\omega)\\ &=&\sum_{\xi\in\Omega(q_{\gamma})} \int_{\{\omega;\; \omega_N=\mathbf 1_{\Omega (q_\gamma)}\}} e_\gamma( \xi\omega)\, E(\xi\omega)\, dm_{\bf q}(\omega)\\ &=&\int_{\{\omega;\;\omega_N=\mathbf 1_{\Omega (q_\gamma)}\}} e_\gamma (\omega)\times\left(\sum_{\xi\in\Omega(q_{\gamma})} {\langle\gamma_{s_N}, \xi\rangle}E(\xi\omega)\right) dm_{\bf q}(\omega).\end{aligned}$$ Now, $\gamma_{s_N}$ is assumed to be a non-trivial character of $\Omega(q_{\gamma})$. So we have $$\sum_{\xi\in\Omega (q_{\gamma})} {\langle\gamma_{s_N}, \xi\rangle}=0\, ,$$ and it follows that $$\widehat E(\gamma)=\int_{\{\omega;\;\omega_N=\mathbf 1_{\Omega (q_\gamma)}\}} e_\gamma (\omega)\times\left[\sum_{\xi\in\Omega(q_{\gamma})} {\langle\gamma_{s_N}, \xi\rangle}\, \Bigl(E(\xi\omega)-E(\omega)\Bigr)\right] dm_{\mathbf q}(\omega)\, .$$ By the definition of a super-Lipschitz map, since $\vert e_\gamma (\omega)\vert\leq \sqrt{l_1\cdots l_N}$ and since the set $\{\omega;\; \omega_N=\mathbf 1_{\Omega(q_\gamma)}\}$ has $m_{\bf q}$-measure $1/(l_N q_{\gamma})$, we conclude that $$\begin{aligned} \Vert \widehat E(\gamma)\Vert&\leq &\sqrt{l_1\cdots l_N}\times \frac{1}{l_N}\times C\prod_{n\leq N-1} w( \bar q_n)^{-2}\times \frac{1}{l_N^{1/2}\, q_{N,s(\gamma)}^2}\, , \end{aligned}$$ which is the required estimate. It is now easy to conclude the proof of the lemma. For each $N\geq 1$ and every $s\in\{ 1,\dots ,l_N\}$, the set $\Gamma_{N,s}=\{ \gamma\in\Gamma_N;\; s(\gamma)=s\}$ has cardinality $$\vert\Gamma_{N,s}\vert=\prod_{n\leq N-1} w( \bar q_n)\times q_{N,s}\, .$$ By the above fact and since $w(\bar q_n)=q_{n,1}+\dots +q_{n,l_n}\geq 2l_n$ for all $n$, it follows that $$\begin{aligned} \sum_{\gamma\in\Gamma_{N, s}} \Vert \widehat E(\gamma)\Vert&\leq&\frac{C}{l_N}\,\sqrt{l_1\cdots l_{N-1}}\,\prod_{n\leq N-1} w(\bar q_n)^{-1}\times \frac{1}{q_{N,s}}\\ &\leq& \frac{C}{l_N}\, 2^{-N}\end{aligned}$$ for each $s\in\{ 1,\dots ,l_N\}$. Hence, we get $\sum\limits_{\gamma\in\Gamma_{N}} \Vert \widehat E(\gamma)\Vert\leq C\, 2^{-N}$ for every $N\geq 1$, and the result follows. Construction of suitable $\TT$-eigenfields {#bordel2} ------------------------------------------ We now turn to the second part of our program, which is the most technical one. Our aim is to prove the following lemma. \[superkey\] Put $\mathbf V :=\{ (x,\lambda)\in X\times \TT;\; x\neq 0\;{\rm and}\; T(x)=\lambda x\}$, and let $\mathbf Z$ be a closed subset of $\mathbf V$. Assume that for any (relatively) open set $O\subset\mathbf V$ such that $O\cap\mathbf Z\neq\emptyset$, the set $\{ \lambda\in \TT;\;\exists x\, :\, (x,\lambda)\in O\cap\mathbf Z\}$ is not $\mathbf S$-small. Then, for any $(x_0,\lambda_0)$ in $\mathbf Z$ and ${\varepsilon}>0$, one can construct a $\TT$-eigenfield $(E,\phi)$ for $T$ on some $(\Omega(\mathbf q), m_{\mathbf q})$ such that (1) $\Vert E(\omega )-x_0\Vert<{\varepsilon}$ for all $\omega\in \Omega(\mathbf q)$; (2) $E:\Omega(\mathbf q)\to X$ is super-Lipschitz; (3) $\phi:\Omega(\mathbf q)\to\TT$ is a homeomorphic embedding, and the image measure $\sigma=m_{\bf q}\circ\phi^{-1}$ is $\mathbf S$-continuous. Throughout the proof, we denote (as usual) by $\mathcal M(\TT)$ the space of all complex measures on $\TT$ endowed with the total variation norm $\Vert\hskip 0.6mm\cdot\hskip 0.6mm\Vert$.
We recall that $\mathcal M(\TT)$ is the dual space of $\mathcal C(\TT)$ (the space of all continuous complex-valued functions on $\TT$), so we can use the $w^*$ topology on $\mathcal M(\TT)$. Since our family $\mathbf S$ is $c_0$-like, we may fix a uniformly bounded sequence of $w^*$-$\,$continuous semi-norms $(\Phi_k)_{k\geq 0}$ on $\ell^\infty (\ZZ_+)$ such that $$\mathbf S=\left\{ a\in\ell^\infty (\ZZ_+);\; \Phi_k(a)\xrightarrow{k\to\infty} 0\right\}\, .$$ Without loss of generality, we may assume that $\Phi_k(a)\leq \Vert a\Vert_\infty$ for all $k$ and every $a\in\ell^\infty (\ZZ_+)$. Moreover, upon replacing $\Phi_k(a)$ by $\max(\vert a_k\vert, \Phi_k(a))$, we may also assume that $\Phi_k(a)\geq \vert a_k\vert$ for every $a\in\ell^\infty(\ZZ_+)$. Finally, to avoid notational heaviness, we denote by $\widehat\sigma$ the *positive part* of the Fourier transform of a measure $\sigma\in\mathcal M(\TT)$; that is, we write $\widehat\sigma$ instead of $\widehat\sigma_{\vert \ZZ_+}$ or $\mathcal F_+(\sigma)$. We then have $$\vert\widehat\sigma(k)\vert\leq \Phi_k(\widehat\sigma)\leq \Vert\widehat\sigma\Vert_\infty\leq \Vert \sigma\Vert$$ for all $k$ and every $\sigma\in\mathcal M(\TT)$. In particular, if a bounded sequence $(\sigma_n)\subset\mathcal M(\TT)$ is such that the sequence $(\widehat\sigma_n)$ is Cauchy with respect to each semi-norm $\Phi_k$, then $(\sigma_n)$ is $w^*$ convergent in $\mathcal M(\TT)$. Let us introduce some terminology. By an *admissible sequence of open sets in $\TT$*, we shall mean a finite sequence of open sets $(V_i)_{i\in I}\subset\TT$ such that the $V_i$ have pairwise disjoint closures. An admissible sequence $(W_j)_{j\in J}$ is *finer* than an admissible sequence $(V_i)_{i\in I}$ if the following hold: - for every $j\in J$, one can find $i\in I$ such that ${\overline{W_j}}\subset V_i$; then $j$ is called a *successor* of $i$; - all $i\in I$ have the same number of successors $j\in J$. In this situation, we write $i\prec j$ when $j\in J$ is a successor of $i\in I$. We define in the same way admissible sequences of open sets $(U_i)_{i\in I}$ in $X$ and the corresponding refinement relation. An *admissible pair* is a pair $(\sigma, (V_i)_{i\in I})$ where $(V_i)_{i\in I}$ is an admissible sequence of open sets in $\TT$ and $\sigma$ is a positive $\mathbf S$-continuous measure such that - $\supp(\sigma)\subset\bigcup_{i\in I} V_i$; - all open sets $V_i$ have the same $\sigma$-measure. An admissible pair $(\sigma, (V_i)_{i\in I})$ is *normalized* if $\sigma$ is a probability measure. **Fact 1.** Let $(\sigma, (V_i)_{i\in I})$ be a normalized admissible pair. Given $N\in\NN$ and $\eta >0$, one can find two admissible sequences of open sets $(V_j')_{j\in J}$ and $(W_j)_{j\in J}$ (with the same index set $J$) and an $\mathbf S$-continuous probability measure $\nu$ such that - $(V_j')_{j\in J}$ is finer than $(V_i)_{i\in I}$ and $\diam(V_j')<\eta$ for all $j\in J$; - $(\nu, (W_j)_{j\in J})$ is an admissible pair finer than $(\sigma, (V_i)_{i\in I})$; - $\supp(\nu)\subset\supp(\sigma)$; - $\supp(\sigma)\cap V_j'\neq\emptyset$ for all $j\in J$; - ${\bigcup_j {\overline{V_j'}}}\cap{\bigcup_j{\overline{W_j}}}=\emptyset$; - $\Vert\nu-\sigma\Vert<\eta$; - whenever $\sigma'$ is a probability measure such that $(\sigma', (V'_j)_{j\in J})$ is an admissible pair, it follows that $\Phi_k(\widehat{\sigma}'-\widehat\nu)<\eta$ for all $k\leq N$. Let $\eta'$ and $\eta''$ be small positive numbers to be chosen later.
First, we note that since the measure $\sigma$ is *continuous* by assumption on $\mathbf S$, one can partition each $\supp(\sigma)\cap V_i$, $i\in I$, into Borel sets $A_{i,1},\dots ,A_{i, K_i}$ with $\diam(A_{i,s})<\eta'$ and $\sigma(A_{i,1})=\dots =\sigma(A_{i,K_i})$. This can be done as follows. Split $\supp(\sigma)\cap V_i$ into finitely many Borel sets $B_1,\dots, B_N$ with diameters less than $\eta'/2$ and positive $\sigma$-measure. Using the continuity of $\sigma$, choose a Borel set $B'_1\subset B_1$ such that $\sigma (B'_1)>0$ and $\sigma (B'_1)/\sigma (V_i)$ is a *rational* number. Then choose a Borel set $B'_2$ such that $B_1\setminus B'_1\subset B'_2\subset (B_1\setminus B'_1)\cup B_2$ and $\sigma (B'_2)/\sigma (V_i)$ is a positive rational number, and so on. This gives a partition of $\supp(\sigma)\cap V_i$ into pairwise disjoint Borel sets $B'_1,\dots ,B'_N$ with diameter less than $\eta'$ such that $\sigma (B'_k)/\sigma (V_i)$ is a positive rational number for each $k$, say $\sigma (B'_k)=\frac{p_k}{q}\, \sigma (V_i)$ where $p_k, q\in\NN$ (the same $q$ for all $k$). Finally, use again the continuity of $\sigma$ to split each $B'_k$ into $p_k$ Borel sets $B''_{k,1},\dots ,B''_{k,p_k}$ with $\sigma (B''_{k,s})=\frac{\sigma(V_i)}{q}$, and relabel the collection of all these sets $B''_{k,s}$ as $A_{i,1},\dots ,A_{i, K_i}$. That all the splittings can indeed be made follows from the ($1$-dimensional case of the) classical *Liapounoff convexity theorem*; see [@Rudin Theorem 5.5]. Next, given any positive integer $m_i$, one can partition further each $A_{i,s}$ into $m_i$ Borel sets $B$ with the same $\sigma$-measure. Taking $m_i:=\prod_{j\neq i} K_j$, we then have the same number of Borel sets $B$ inside each open set $V_i$. Thus, we may in fact assume from the beginning that we have the same number of sets $A_{i,s}$ inside each $V_i$. We denote this number by $K$, and we put $J:=I\times\{ 1,\dots ,K\}$. We note that $\sigma(A_{i,s})=\frac{1}{\vert J\vert}$ for all $(i,s)\in J$. Since the measure $\sigma$ is regular and continuous, one can pick, for each $(i,s)\in J$, a compact set $C_{i,s}\subset A_{i,s}$ such that $0<\sigma(A_{i,s}\setminus C_{i,s})<\eta''$, and then a point $a_{i,s}\in (A_{i,s}\setminus C_{i,s})\cap\supp(\sigma)$. Then we may choose open sets $W_{i,s}\supset C_{i,s}$ with pairwise disjoint closures contained in $V_i$, and open sets $V_{i,s}'\ni a_{i,s}$ with pairwise disjoint closures contained in $V_i$ and $\diam(V'_{i,s})<\eta'$, in such a way that ${\bigcup_{j\in J} {\overline{V_j'}}}\cap{\bigcup_{j\in J}{\overline{W_j}}}=\emptyset$. If $\eta''$ is small enough, then the probability measure $$\nu =\frac 1{\vert J\vert}\, \sum_{(i,s)\in J}\frac{\sigma_{\vert C_{i,s}}}{\sigma (C_{i,s})}$$ satisfies $\Vert\sigma-\nu\Vert <\eta$. Moreover, $(\nu ,(W_j)_{j\in J})$ is an admissible pair finer than $(\sigma, (V_i)_{i\in I})$. Let us denote by $\omega_f$ the modulus of continuity of a function $f\in\mathcal C(\TT)$: $$\omega_f(\delta)=\sup\{ \vert f(u)-f(v)\vert;\; \vert u-v\vert<\delta\}\, .$$ Since the sets $C_{i,s}$, $(i,s)\in J$ form a partition of $\supp(\nu)$ with $\nu(C_{i,s})=\frac{1}{\vert J\vert}$, and since $\vert z-a_{i,s}\vert\leq \diam(A_{i,s})<\eta'$ for all $z\in C_{i,s}$, we have $$\left\vert\int_\TT f\, d\nu-\frac{1}{\vert J\vert}\sum_{(i,s)\in J} f(a_{i,s})\right\vert\leq \omega_f(\eta')$$ for any $f\in\mathcal C(\TT)$.
Similarly, if $\sigma'$ is any probability measure with $\supp(\sigma')\subset\bigcup_{j\in J} V'_j$ and $\sigma'(V_j')=\frac{1}{\vert J\vert}$ for all $j\in J$, then $$\left\vert\int_\TT f\, d\sigma'-\frac{1}{\vert J\vert}\sum_{(i,s)\in J} f(a_{i,s})\right\vert\leq \omega_f(\eta')\, .$$ Hence, we see that $\sigma'$ is close to $\nu$ in the $w^*$ topology of $\mathcal M(\TT)$ if $\eta'$ is sufficiently small. Since each semi-norm $\Phi_k$ is $w^*$-$\,$continuous and since the Fourier transformation $\mathcal F_+:\mathcal M(\TT)\to\ell^\infty(\ZZ_+)$ is $(w^*,w^*)\,$-$\,$continuous on bounded sets, it follows that if $\eta'$ is small enough then, for any $\sigma'$ such that $(\sigma', (V_j')_{j\in J})$ is an admissible pair, we do have $\Phi_k(\widehat{\sigma}'-\widehat\nu)<\eta$ for all $k\leq N$. When considering two sequences of open sets $(V_j')_{j\in J}$ and $(W_j)_{j\in J}$ both finer than a given sequence $(V_i)_{i\in I}$ and with the same index set $J$, we will always assume that the corresponding “extension" relations $\prec$ on $I\times J$ are in fact the same. At this point, we need to introduce some more terminology. By a *convenient triple*, we mean a triple $(\sigma, (U_i)_{i\in I}, (V_i)_{i\in I})$, where $(U_i)\subset X$ and $(V_i)\subset\TT$ are admissible sequences of open sets (with the same index set $I$), and $\sigma$ is a positive $\mathbf S$-continuous measure such that the pair $(\sigma , (V_i)_{i\in I})$ is admissible and the following holds: for each $i\in I$ one can find a closed set $F_i\subset X$ such that $$F_i\subset U_i\setminus\{ 0\}\;\;{\rm and}\;\;\supp (\sigma)\subset\bigcup_{i\in I}\{ \lambda\in V_i;\; \exists x\in F_i\, :\, (x,\lambda)\in \mathbf Z\}\, .$$ There is a natural notion of refinement for convenient triples, which is defined exactly as for admissible pairs. Moreover, we shall say that two convenient triples $(\sigma, (U_i), (V_i))$ and $(\sigma', (U_{i'}), (V_{i'}))$ are *disjoint* if $\bigcup_i {\overline{U_i}}\cap \bigcup_{i'} {\overline{U_{i'}}}=\emptyset$ and $\bigcup_i {\overline{V_i}}\cap \bigcup_{i'} {\overline{V_{i'}}}=\emptyset$. The following fact is the key point in order to prove Lemma \[superkey\]: it is the basic inductive step towards the construction of the measure space $(\Omega(\mathbf q),m_{\mathbf q})$ and the map $\phi$. The technical difficulty comes essentially from condition (c), which is here to ensure (3) in Lemma \[superkey\]. **Fact 2.** Let $(\sigma, (U_i)_{i\in I}, (V_i)_{i\in I})$ be a normalized convenient triple, and let $\eta >0$. Then one can find a positive integer $l=l(\eta)$ and a finite sequence of pairwise disjoint normalized convenient triples $(\sigma'_1, (U_j')_{j\in J_1}, (V_j')_{j\in J_1}), \dots,(\sigma'_l, (U_j')_{j\in J_l}, (V_j')_{j\in J_l})$ finer than $(\sigma, (U_i),(V_i))$ such that - (a) $\diam(V_j')<\eta$ for all $s\in\{ 1,\dots ,l\}$ and every $j\in J_s$; - (b) $\diam(U_j')<\frac{\eta}{l(\eta)^{1/2}\vert J_s\vert^2}$ for all $s$ and every $j\in J_s$; - (c) If we put $$\sigma':=\frac{1}{l}\sum_{s=1}^l \sigma'_s$$ then $\sup_{k\geq 0}\,\Phi_k(\widehat{\sigma}'-\widehat\sigma)\leq\eta$. Put $\nu_0=\sigma=\sigma'_0$, $J_0=I$ and $W_j=V_j$, $j\in J_0$. Let also $\eta^*$ be a positive number to be chosen later but depending only on $\eta$, and let $l$ be the smallest integer such that $l>1/\eta^*$. Since $\nu_0$ is $\mathbf S$-continuous, one can find $N_0\in\NN$ such that $\Phi_k(\widehat{\nu}_0)<\eta^*$ for all $k>N_0$.
Applying Fact 1 to the pair $(\sigma, (V_i)_{i\in I})=(\nu_0, (V_j)_{j\in J_0})$, we find an admissible sequence of open sets $(V'_j)_{j\in J_1}$ and a normalized admissible pair $(\nu_1, (W_j)_{j\in J_1})$ such that - $(V_j')_{j\in J_1}$ is finer than $(W_j)_{j\in J_{0}}$ and $\diam(V_j')<\eta^*$ for all $j\in J_1$; - $(\nu_1, (W_{j})_{j\in J_1})$ is finer than $(\nu_0, (W_{j})_{j\in J_0})$; - $\supp(\nu_1)\subset\supp(\nu_0)$; - $\supp(\nu_0)\cap V_{j}'\neq\emptyset$ for all $j\in J_1$; - ${\bigcup_{j\in J_1} {\overline{V'_{j}}}}\cap{\bigcup_{j\in J_1}{\overline{W_{j}}}}=\emptyset$; - $\Vert\nu_1-\nu_{0}\Vert<\eta^*/l$; - whenever $\sigma'$ is a probability measure such that $(\sigma', (V'_j)_{j\in J_1})$ is an admissible pair, it follows that $\Phi_k(\widehat{\sigma}'-\widehat{\nu}_1)<\eta^*$ for all $k\leq N_{0}$. Each $j\in J_1$ has a unique “predecessor" $i\in J_0$. In what follows, we denote this predecessor by $j^-$. Since $V'_j\cap\supp(\nu_0)\neq\emptyset$ and since $(\nu_0, (U_i)_{i\in J_0}, (V_i)_{i\in J_0})$ is a convenient triple, one can pick, for each $j\in J_1$, a point $\lambda_j\in V'_j\cap\supp(\nu_0)$ and a point $x_j\in U_{j^-}$ such that $(x_j,\lambda_j)\in \mathbf Z$. The points $x_j$ are pairwise distinct because $x_j\neq 0$ and $T(x_j)=\lambda_j x_j$. Using again the fact that $(\nu_0, (U_i)_{i\in J_0}, (V_i)_{i\in J_0})$ is a convenient triple, one can find closed sets $F_{i,0}\subset X$ with $F_{i,0}\subset U_i\setminus\{ 0\}$ such that $$\supp (\nu_0)\subset\bigcup_{i\in I}\{ \lambda\in V_i;\; \exists x\in F_{i,0}\, :\, (x,\lambda)\in \mathbf Z\}\, .$$ Since $\mathbf Z$ is closed in $(X\setminus\{ 0\})\times \TT$, the set $$F_{j}=\left\{ x\in F_{j^-, 0};\,\exists \lambda\in \supp(\nu_1)\cap {\overline{W_j}}\, :\, (x,\lambda)\in \mathbf Z\right\}$$ is closed in $X$, for each $j\in J_1$. Moreover, the sets $F_{j}$ are pairwise disjoint and we have $\bigcup_{j\in J_1}F_{j}\cap\{ x_j;\; j\in J_1\}=\emptyset$, because $\bigcup_{j\in J_1}{\overline{V_j'}}\cap\bigcup_{j\in J_1} {\overline{W_j}}=\emptyset$. It follows that one can find open sets $U_j\subset X$, $j\in J_1$ with pairwise disjoint closures and open sets $U_{j}'\subset X$ with pairwise disjoint closures, such that - $\diam(U_j')<\frac{\eta^*}{l^{1/2}\vert J_1\vert^2}$; - ${\overline{U_{j}}}\subset U_{j^-}$ and ${\overline{U'_j}}\subset U_{j^-}$; - $x_j\in U'_j$ and $F_{j}\subset U_{j}$; - $\bigcup_{j\in J_1} {\overline{U_{j}}}\cap\bigcup_{j\in J_1}{\overline{U'_{j}}}=\emptyset$. We note that $(\nu_1, (U_{j})_{j\in J_1}, (W_j)_{j\in J_1})$ is a convenient triple, by the very definition of the closed sets $F_{j}$. Now, we use the assumption on $\mathbf Z$: since $\mathbf Z\cap (U_j'\times V'_j)\neq\emptyset$, the set $$A_j=\{ \lambda\in V'_j;\;\exists x\in U_j'\,:\, (x,\lambda)\in \mathbf Z\}$$ is not $\mathbf S$-small, for any $j\in J_1$. Moreover, $A_j$ is an *analytic set* because $\mathbf Z$ is a Borel subset of $X\times\TT$; in particular, $A_j$ is universally measurable (see [@K]), and hence it contains a *compact* set which is not $\mathbf S$-small.
Since the family of $\mathbf S$-continuous measures is hereditary with respect to absolute continuity, it follows that one can find, for each $j\in J_1$, an $\mathbf S$-continuous probability measure $\widetilde\sigma_j$ such that $$\supp(\widetilde\sigma_j)\subset \{ \lambda\in V'_j;\;\exists x\in U_j'\,:\, (x,\lambda)\in \mathbf Z\}\, .$$ Moreover, since $U'_j\setminus\{ 0\}$ is an $F_\sigma$ set in $X$, we may in fact assume that there is a closed set $F_j'\subset X$ with $F_j'\subset U_j'\setminus\{ 0\}$ such that $$\supp(\widetilde\sigma_j)\subset \{ \lambda\in V'_j;\;\exists x\in F_j'\,:\, (x,\lambda)\in \mathbf Z\}\, .$$ If we put $$\sigma'_1:=\frac{1}{\vert J_1\vert}\,\sum_{j\in J_1} \widetilde\sigma_j\, ,$$ it follows that $(\sigma'_1, (U_j')_{j\in J_1}, (V_j')_{j\in J_1})$ is a convenient triple. In particular, the pair $(\sigma'_1, (V'_j)_{j\in J_1})$ is admissible, so that $\Phi_k(\widehat\sigma'_1-\widehat{\nu}_1)<\eta^*$ for all $k\leq N_0$. Let us summarize what we have done up to now. Starting with the convenient triple $(\sigma, (U_i)_{i\in I}, (V_i)_{i\in I})=(\nu_0, (U_j)_{j\in J_0}, (W_j)_{j\in J_0})$ and a positive integer $N_0$ such that $\Phi_k(\widehat\nu_0)<\eta^*$ for all $k>N_0$, we have found two convenient triples $(\nu_1, (U_{j})_{j\in J_1}, (W_j)_{j\in J_1})$ and $(\sigma_1', (U'_j)_{j\in J_1}, (V'_j)_{j\in J_1})$ both finer than $(\sigma, (U_i), (V_i))=(\nu_0, (U_j)_{j\in J_0}, (W_j)_{j\in J_0})$, such that - $\bigcup_j {\overline{V'_j}}\cap\bigcup_j {\overline{W_j}}=\emptyset$; - $\Vert\nu_1-\nu_0\Vert<\eta^*/l$; - $\Phi_k(\widehat\sigma'_1-\widehat\nu_1)<\eta^*$ for all $k\leq N_0$. Now, since $\nu_1$ and $\sigma'_1$ are $\mathbf S$-continuous, we can choose $N_1>N_0$ such that $\Phi_k(\widehat{\nu}_1 )<\eta^*$ and $\Phi_k(\widehat{\sigma}'_1)<\eta^*$ for all $k>N_1$, and we repeat the whole procedure with $(\nu_1, (U_{j})_{j\in J_1}, (W_j)_{j\in J_1})$ in place of $(\nu_0, (U_j)_{j\in J_0}, (W_j)_{j\in J_0})$. This produces two new normalized convenient triples $(\nu_2, (U_{j})_{j\in J_2}, (W_j)_{j\in J_2})$ and $(\sigma_2', (U'_j)_{j\in J_2}, (V'_j)_{j\in J_2})$. Then we start again with $\nu_2$ and a positive integer $N_2>N_1$ witnessing that $\nu_2$ and $\sigma'_2$ are $\mathbf S$-continuous, and so on. After $l$ steps, we will have constructed positive integers $N_0<N_1<\dots <N_l$ and, for each $s\in\{1,\dots ,l\}$, two normalized convenient triples $(\nu_s, (U_{j})_{j\in J_s}, (W_j)_{j\in J_s})$ and $(\sigma_s', (U'_j)_{j\in J_s}, (V'_j)_{j\in J_s})$, such that the following properties hold: - both triples $(\sigma'_s, (U'_j)_{j\in J_s}, (V_j')_{j\in J_s})$ and $(\nu_s, (U_j)_{j\in J_s}, (W_j)_{j\in J_{s}})$ are finer than $(\nu_{s-1}, (U_j)_{j\in J_{s-1}}, (W_j)_{j\in J_{s-1}})$; - $\diam (U'_j)<\frac{\eta^*}{l^{1/2}\vert J_s\vert^2}$ and $\diam(V_j')<\eta^*$; - ${\bigcup_{j\in J_s} {\overline{V'_{j}}}}\cap{\bigcup_{j\in J_s}{\overline{W_{j}}}}=\emptyset$; - $\Vert\nu_s-\nu_{s-1}\Vert<\eta^*/l$; - $\Phi_k(\widehat\sigma'_s-\widehat\nu_{s})<\eta^*$ for all $k\leq N_{s-1}$; - $\Phi_k(\widehat\nu_s)<\eta^*$ and $\Phi_k(\widehat\sigma'_s)<\eta^*$ for all $k>N_s$.
First, we note that $\Vert{\nu_s}-\sigma\Vert=\Vert\nu_s-\nu_0\Vert<s\eta^*/l\leq \eta^*$ for every $s\in\{ 1,\dots ,l\}$. Since $\Phi_k(\widehat{\nu}_s-\widehat\sigma)\leq \Vert{\nu_s}-\sigma\Vert$, it follows that $$\Phi_k(\widehat{\sigma}'-\widehat\sigma)\leq \eta^*+\frac{1}{l}\,\sum_{s=1}^l\Phi_k(\widehat{\sigma}'_s-\widehat{\nu}_s)$$ for all $k\geq 0$. If $k\leq N_0$ then $\Phi_k(\widehat\sigma'_s-\widehat\nu_s)<\eta^*$ for every $s\in\{ 1,\dots, l\}$, and hence $$\Phi_k(\widehat\sigma'-\widehat\sigma)\leq 2\eta^*\, .$$ If $k>N_0$, let us denote by $s(k)$ the largest $s\in\{ 0,\dots ,l\}$ such that $k>N_s$. Then $\Phi_k(\widehat\sigma'_s-\widehat\nu_s)<\eta^*$ if $s>s(k)+1$ because $k\leq N_{s(k)+1}\leq N_{s-1}$; and $\Phi_k(\widehat\sigma'_s-\widehat\nu_s)\leq\Phi_k(\widehat\sigma'_s) +\Phi_k(\widehat\nu_s)<2\eta^*$ if $s\leq s(k)$, because $k >N_s$. So we get $$\begin{aligned} \Phi_k(\widehat\sigma'-\widehat\sigma)&\leq&\eta^*+ \frac{1}{l} \left(\sum_{s\leq s(k)} 2\eta^*+\Vert \widehat\sigma'_{s(k)+1}-\widehat\nu_{s(k)+1}\Vert_\infty+\sum_{s>s(k)+1}\eta^*\right)\\ &\leq&4\eta^*+\frac{2}{l}\, \cdot\end{aligned}$$ Taking $\eta^*=\eta/6$, this concludes the proof of Fact 2. We can now start the actual proof of Lemma \[superkey\]. Recall that $\mathfrak Q$ is the set of all finite sequences of integers $\bar q=(q_1,\dots ,q_l)$ with $q_s\geq 2$ for all $s$. We denote by $\mathfrak Q^{<\NN}$ the set of all finite sequences $\mathbf s=(\bar q_1,\dots ,\bar q_n)$ with $\bar q_j\in\mathfrak Q$, plus the empty sequence $\emptyset$. We denote by $\vert \mathbf s\vert$ the length of a sequence $\mathbf s\in \mathfrak Q^{<\NN}$ (the empty sequence $\emptyset$ has length $0$), and by $\prec$ the natural extension ordering on $\mathfrak Q^{<\NN}$. If $\mathbf s=(\bar q_1,\dots ,\bar q_n)\in\mathfrak Q^{<\NN}$ we put $$\Omega (\mathbf s):=\prod_{j=1}^n \Omega( \bar q_j)\, ,$$ and we denote by $m_{\mathbf s}$ the product measure $\otimes_{j=1}^n m_{\bar q_j}$. We also put $\Omega(\emptyset)=\{\emptyset\}$, and $m_\emptyset =\delta_{\emptyset}$. Let us fix $(x_0,\lambda_0)\in \mathbf Z$ and ${\varepsilon}>0$. Put $V_\emptyset:=\TT$, and choose an open set $U_\emptyset\subset X\setminus\{ 0\}$ such that $\diam(U_\emptyset)<{\varepsilon}$ and $(x_0,\lambda_0)\in U_\emptyset\times V_\emptyset$. Let us also pick an open set $U'_\emptyset\subset X$ such that $x_0\in U'_\emptyset$ and ${\overline{U'_\emptyset}}\subset U_\emptyset$. By assumption, one can find an $\mathbf S$-continuous probability measure $\sigma_0$ such that $\supp(\sigma_0)\subset\{ \lambda\in V_\emptyset;\;\exists x\in U'_\emptyset\, :\, (x,\lambda)\in \mathbf Z\}\, .$ Then $(\sigma_0, U_\emptyset, V_\emptyset)$ is a convenient triple. We construct by induction a sequence $(\mathbf s_n)_{n\geq 0}\subset \mathfrak Q^{<\NN}$, a sequence of $\mathbf S$-continuous probability measures $(\sigma_n)_{n\geq 0}$ and, for each $n\geq 0$, two sequences of open sets $(U_\xi)_{\xi\in\Omega (\mathbf s_n)}\subset X$ and $(V_\xi)_{\xi\in\Omega(\mathbf s_n)}\subset\TT$. If $n\geq 1$, we write $\mathbf s_n=(\bar q_1,\dots ,\bar q_n)$ and $\bar q_j=(q_{j,1},\dots ,q_{j, l_j})$. If $\xi=(\xi_1,\dots ,\xi_{n-1})\in\Omega(\mathbf s_{n-1})$ and $\tau\in\Omega (\bar q_n)$, we denote by $\xi\tau$ the sequence $(\xi_1,\dots ,\xi_{n-1},\tau)\in\Omega (\mathbf s_n)$ (if $n=1$ then $\xi\tau=\emptyset\tau=\tau$). Finally, we put ${\varepsilon}_0=1$ and $${\varepsilon}_n:=\frac{1}{w(\bar q_1)\cdots w(\bar q_n)}$$ if $n\geq 1$.
The following requirements have to be fulfilled for all $n\geq 0$. (i) $\vert \mathbf s_n\vert=n$, and $\mathbf s_{n-1}\prec \mathbf s_n$ if $n\geq 1$. (ii) If $n\geq 1$ and $\xi\in\Omega(\mathbf s_{n-1})$, then - ${\overline{U_{\xi\tau}}}\subset U_\xi$ and ${\overline{V_{\xi\tau}}}\subset V_\xi$ for every $\tau\in\Omega (\bar q_n)$; - ${\overline{U_{\xi\tau}}}\cap {\overline{U_{\xi\tau'}}}=\emptyset$ and ${\overline{V_{\xi\tau}}}\cap{\overline{V_{\xi\tau}}}=\emptyset$ if $\tau\neq \tau'$; - $\diam(U_{\xi\tau})\leq\frac 12\diam (U_\xi)$ and $\diam(V_{\xi\tau})\leq\frac 12\diam (V_\xi)$. (iii) If $n\geq 1$ and $\xi\in\Omega(\mathbf s_{n-1})$, then $$\diam (U_{\xi\tau})<\frac{{\varepsilon}_{n-1}}{l_n^{1/2} \, q_{n,s}^2}$$ for all $s\in\{ 1,\dots ,l_{n}\}$ and every $\tau\in\Omega(q_{n,s})$. (iv) One can find closed sets $F_\xi\subset X$, $\xi\in\Omega (\mathbf s_n)$ with $F_\xi\subset U_\xi\setminus\{ 0\}$ and such that $\supp(\sigma_n)\subset \bigcup_{\xi\in\Omega (\mathbf s_n)} \{\lambda\in V_\xi;\;\exists x\in F_\xi\, :\, (x,\lambda)\in \mathbf Z\}$. In particular: $$\supp(\sigma_n)\subset \bigcup_{\xi\in\Omega (\mathbf s_n)} \{\lambda\in V_\xi;\;\exists x\in U_\xi\, :\, (x,\lambda)\in \mathbf Z\}\, .$$ (v) If $i\leq n$ and $\xi\in\Omega (\mathbf s_i)$, then $\sigma_n(V_\xi)=m_{\mathbf s_i}(\{ \xi\}).$ (vi) If $n\geq 1$ then $\sup_{k\geq 0}\,\Phi_k(\widehat\sigma_{n}-\widehat\sigma_{n-1})<2^{-n}$. Since $(\sigma_0, U_\emptyset, V_\emptyset)$ is a convenient triple, condition (iv) is satisfied for $n=0$; and the other conditions are (trivially) satisfied as well. Applying Fact 2 with the convenient triple $(\sigma_0, U_\emptyset, V_\emptyset)$ and a small enough $\eta>0$, we get a positive integer $l$ and a finite sequence of pairwise disjoint convenient triples $(\sigma'_1, (U_j')_{j\in J_1}, (V_j')_{j\in J_1}), \dots,$ $(\sigma'_l, (U_j')_{j\in J_l}, (V_j')_{j\in J_l})$ finer than $(\sigma_0, U_\emptyset, V_\emptyset)$ such that - $\diam(U_j')<\frac 12\diam(U_\emptyset)$ and $\diam(V'_j)<\frac 12\diam(V_\emptyset)$ for all $s\in\{ 1,\dots ,l\}$ and every $j\in J_s$; - $\diam(U_j')<\frac{{\varepsilon}_0}{l^{1/2}\vert J_s\vert^2}$ for all $s$ and every $j\in J_s$; - $\sup_{k\geq 0}\,\Phi_k(\widehat{\sigma}'-\widehat\sigma_0)\leq 2^{-1}$, where $\sigma'=\frac{1}{l}\sum_{s=1}^l \sigma'_s\, . $ If we put $q_{1,s}:=\vert J_s\vert$, $s\in\{ 1,\dots ,l\}$ and $\bar q_1=(q_{1,1},\dots ,q_{1,l})$, we may enumerate in the obvious way the open sets $U'_j$, $V'_j$ as $U_{\xi}$, $V_\xi$, $\xi\in\Omega (\bar q_1)$. Then (i),$\,\dots$,(vi) are clearly satisfied for $n=1$ with $\sigma_1=\sigma'$. The general inductive step is very much the same. Assume that everything has been constructed up to some stage $n\geq 1$. Then, for every $\tilde\xi\in\Omega (\mathbf s_{n-1})$, the triple $\mathcal T_{\tilde\xi}=((\sigma_n)_{\vert V_{\tilde\xi}}, (U_{\tilde\xi\tau})_{\tau\in\Omega (\bar q_n)}, (V_{\tilde\xi\tau})_{\tau\in\Omega (\bar q_n)})$ is a (non-normalized) convenient triple. Given $\eta>0$, it is not hard to see (by examining the proof) that we can apply Fact 2 simultaneously to all triples $\mathcal T_{\tilde\xi}$, $\tilde\xi\in\Omega (\mathbf s_{n-1})$, with the same $l=l(\eta)$ and the same index set $J$. As above, we may take $J=\Omega(\bar q_n)\times \Omega (\bar q)$, for some $\bar q=(q_1,\dots ,q_l)$. This gives positive $\mathbf S$-continuous measures $\sigma_{\tilde\xi}$ and open sets $U_{\tilde\xi\tau'}$, $V_{\tilde\xi\tau'}$, $\tau'\in\Omega (\bar q_n)\times\Omega(\bar q)$. 
Then, if $\eta$ is small enough, conditions (i),$\,\dots$,(vi) will be met at stage $n+1$ with $\mathbf s_{n+1}=\mathbf s_n\bar q$, $\sigma_{n+1}=\sum_{\tilde\xi}\sigma_{\tilde\xi}$ and the open sets $U_\xi,V_\xi$ for $\xi\in\Omega(\mathbf s_{n+1})=\Omega(\mathbf s_{n-1})\times\Omega(\bar q_n)\times \Omega (\bar q)$. Let us denote by $\mathbf q$ the “limit" of the increasing sequence $(\mathbf s_n)$, that is, the infinite sequence $(\bar q_n)_{n\geq 1}\in\mathfrak Q^\NN$. It follows from (ii) that for any $\omega=(\omega_n)_{n\geq 1}\in\Omega (\mathbf q)$, the intersection $\bigcap_{n\geq 1} U_{\omega_{\vert n}}$ is a single point $\{ E(\omega )\}$, the intersection $\bigcap_{n\geq 1} V_{\omega_{\vert n}}$ is a single point $\{ \phi(\omega)\} $, and the maps $\phi :\Omega (\mathbf q)\to\TT$ and $E:\Omega (\mathbf q)\to X$ are homeomorphic embeddings. Moreover, condition (iii) says exactly that $E$ is super-Lipschitz. And since $x_0\in U_\emptyset$ and $\diam (U_\emptyset)<{\varepsilon}$, we have $\Vert E(\omega)-x_0\Vert<{\varepsilon}$ for every $\omega\in \Omega(\mathbf q)$. For each $\xi\in\bigcup_{n\geq 0}\Omega (\mathbf s_n)$, let us pick a point $\lambda_\xi\in V_\xi\cap\supp(\sigma_n)$, where $n$ is the length of $\xi$, $\xi\in\Omega (\mathbf s_n)$. By (iv), one can find $x_\xi\in U_\xi$ such that $(x_\xi,\lambda_\xi)\in \mathbf Z$. In particular, we have $T(x_\xi)=\lambda_\xi x_\xi$ for every $\xi\in\bigcup_{n\geq 0}\Omega (\mathbf s_n)$. Since $x_{\omega_{\vert n}}\to E(\omega)$ and $\lambda_{\omega_{\vert n}}\to\phi(\omega)$ as $n\to\infty$, it follows that $TE(\omega )=\phi(\omega)\, E(\omega)$ for every $\omega\in\Omega (\mathbf q)$. In other words, $(E,\phi)$ is a $\TT$-eigenfield for $T$. By (vi), the sequence $(\sigma_n)$ converges $w^*$ to an $\mathbf S$-continuous probability measure $\sigma$. To conclude the proof, it remains to show that $\sigma$ is the image measure of the measure $m_{\mathbf q}$ on $\Omega(\mathbf q)$ under the embedding $\phi :\Omega (\mathbf q)\to\TT$. That is, we want to check that $$\int_{\Omega (\mathbf q)} f\circ \phi (\omega)\, dm_{\mathbf q}(\omega)=\int_\TT f\, d\sigma$$ for any $f\in\mathcal C(\TT)$. Let us fix $f$. For each $\xi\in\bigcup_{n\geq 0}\Omega (\mathbf s_n)$, let us (again) pick a point $\lambda_\xi\in V_\xi$. Then $\lambda_{\omega_{\vert n}}\to \phi(\omega)$ as $n\to\infty$, for every $\omega\in\Omega (\mathbf q)$. Setting $\Omega_\xi:=\{ \omega\in\Omega (\mathbf q);\; \xi\subset \omega\}$ and using Lebesgue’s theorem, it follows that $$\begin{aligned} \int_{\Omega(\mathbf q)} f\circ\phi (\omega)\, dm_{\mathbf q}(\omega)&=&\lim_{n\to\infty} \sum_{\xi\in\Omega (\mathbf s_n)} f(\lambda_\xi)\times m_{\mathbf q} (\Omega_\xi)\\ &=&\lim_{n\to\infty}\,\sum_{\xi\in\Omega (\mathbf s_n)} f(\lambda_\xi)\times m_{\mathbf s_n} (\{\xi\})\, .\end{aligned}$$ On the other hand, by the definition of $\sigma$ and since $\diam (V_\xi)\to 0$ as $\vert \xi\vert\to\infty$, we also have $$\begin{aligned} \int_\TT f\, d\sigma&=&\lim_{n\to\infty} \int_\TT f\, d\sigma_n\\ &=&\lim_{n\to\infty}\sum_{\xi\in\Omega (\mathbf s_n)} f(\lambda_\xi)\times \sigma_n(V_\xi)\, .\end{aligned}$$ By condition (v), this concludes the proof. The proof {#realproof} --------- Assume that the $\TT$-eigenvectors are $\mathbf S$-perfectly spanning, and let us show that $T$ is $\mathbf S$-mixing in the Gaussian sense.
We recall that, since $\mathbf S$ is $c_0$-like, the $\TT$-eigenvectors are in fact $\mathbf S$-perfectly spanning for analytic sets (see Remark 1 just after Corollary \[Smix=span\]). Using Lemma \[halfkey\], Lemma \[superkey\] and proceeding exactly as in sub-section \[proofWM\], we find a sequence of $\TT$-eigenfields $(E_i,\phi_i)$ defined on some $\Omega (\mathbf{q}_i)$, such that - (1) each operator $K_{E_i} :L^2(\Omega ({\mathbf q_i}), m_{{\mathbf q_i}})\to X$ is gamma-radonifying; - (2) each $E_i$ is continuous and $\overline{\rm span}\left(\bigcup_{i\in\NN} {\rm ran}\, (E_i)\right)=X$; - (3) each $\phi_i$ is a homeomorphic embedding, and $\sigma_i=m_{{\mathbf q_i}}\circ\phi_i^{-1}$ is $\mathbf S$-continuous. Put $(\Omega_i, m_i):=(\Omega(\mathbf q_i), m_{\mathbf q_i})$ and let $(\Omega, m)$ be the disjoint union of the measure spaces $(\Omega_i,m_i)$. Choose a sequence of small positive numbers $(\alpha_i)_{i\in\NN}$, and define a $\TT$-eigenfield $(E,\phi)$ on $(\Omega, m)$ as expected: $E(\omega_i)=\alpha_i E_i(\omega_i)$ and $\phi(\omega_i)=\phi_i(\omega_i)$ for each $i$ and every $\omega_i\in\Omega_i$. By (1), the operator $K_E:L^2(\Omega, m)\to X$ is gamma-radonifying if the $\alpha_i$ are small enough. By (2) and since $m$ has full support, the vector field $E$ is $m$-spanning and hence the operator $K_E$ has dense range. The intertwining equation $TK_E=K_EM_\phi$ holds by the definition of $E$. Finally, it follows from (3) that the measure $(f_im_{i})\circ\phi_i^{-1}$ is $\mathbf S$-continuous for each $i$ and every $f_i\in L^1(\Omega_i, m_i)$. Hence, the measure $\sigma_f=(fm)\circ\phi^{-1}$ is $\mathbf S$-continuous for every $f\in L^1(\Omega, m)$. By Proposition \[background\], this shows that $T$ is $\mathbf S$-mixing in the Gaussian sense. Fréchet spaces -------------- The above proof can be reproduced almost word for word in the Fréchet space setting. More precisely, the following changes should be made. - Modify the definition of “super-Lipschitz": a map $E:\Omega(\mathbf q)\to X$ is super-Lipschitz if $E:\Omega(\mathbf q)\to (X,\Vert\hskip 0.6mm\cdot\hskip 0.6mm\Vert)$ is super-Lipschitz for any continuous semi-norm $\Vert\hskip 0.6mm\cdot\hskip 0.6mm\Vert$ on $X$. - In Lemma \[halfkey\], add “for any continuous semi-norm $\Vert\hskip 0.6mm\cdot\hskip 0.6mm\Vert$ on $X$". - In Lemma \[superkey\], fix a nondecreasing sequence of semi-norms $(\Vert\hskip 0.6mm\cdot\hskip 0.6mm\Vert_i)_{i\in\NN}$ generating the topology of $X$, and put $$\Vert x\Vert=\sum_{i\in\NN}2^{-i} \min(\Vert x\Vert_i, 1)\, .$$ (Of course this is not even a semi-norm, but the notation is convenient anyway). Then perform exactly the same construction. - Do the same when starting the proof of Theorem \[abstract\]. Proof of the abstract results (2) {#proofabstract2} ================================= Proof of Theorem \[abstracteasy\] --------------------------------- Let the Banach space $X$ have type 2. Let $T\in\mathfrak L(X)$ and assume that the $\TT$-eigenvectors of $T$ are $\mathbf S$-perfectly spanning for analytic sets. By Lemma \[hereditary\] and Proposition \[perfect\], one can find a countable family of *continuous* $\TT$-eigenvector fields $(E_i)_{i\in I}$ for $T$, where $E_i:\Lambda_i\to X$ is defined on some $\mathbf S$-perfect set $\Lambda_i\subset\TT$, such that ${\rm span}\left(\bigcup_{i\in I} E_i(\Lambda_i)\right)$ is dense in $X$. By Lemma \[Bperfect\], each $\Lambda_i$ is the support of some $\mathbf S$-continuous probability measure $\sigma_i$.
If we put $\phi_i(\lambda)=\lambda$, this gives a continuous $\TT$-eigenfield $(E_i,\phi_i)$ for $T$ on the measure space $(\Lambda_i,\sigma_i)$ such that the image measure $\sigma_i\circ \phi_i^{-1}=\sigma_i$ is $\mathbf S$-continuous; and by Proposition \[typecotype\], the operator $K_{E_i}$ is gamma-radonifying because $X$ has type 2. So the proof can be completed exactly as in sub-section \[realproof\] above. Proof of Proposition \[converse\] --------------------------------- Let the Banach space $X$ have cotype 2, assume that $T\in\mathfrak L(X)$ is $\mathbf S$-mixing with respect to some Gaussian measure $\mu$ with full support, and let us show that the $\TT$-eigenvectors of $T$ are $\mathbf S$-perfectly spanning for analytic sets. We start with the following fact, which follows rather easily from Lemma \[back1\] (1) (see the beginning of the proof of Theorem 5.46 in [@BM]). One can find a gamma-radonifying operator $K:\mathcal H\to X$ such that $\mu=\mu_K$ and a *unitary* operator $M:\mathcal H\to\mathcal H$ such that $TK=KM$. By the spectral theorem, we may assume that $\mathcal H=L^2(\Omega ,m)$ for some measure space $(\Omega, m)$ and that $M$ is a multiplication operator, $M=M_\phi$ for some measurable map $\phi :\Omega\to\TT$. Then we use the cotype 2 assumption on $X$: by Proposition \[typecotype\], the gamma-radonifying operator $K:L^2(\Omega, m)\to X$ is in fact given by a vector field; that is, $K=K_E$ for some vector field $E:(\Omega,m)\to X$. Now, let $D\subset \TT$ be an analytic $\mathbf S$-small set. By the remark just after Lemma \[back1\] and by Lemma \[back2\] (with $\mathcal H_1=\mathcal H\ominus\ker(K_E)$), we know that $\mathbf 1_{\{ \phi\in D\}}h\in\ker (K_E)$ for any $h\in L^2(\Omega, m)$. In other words, we have $$\int_{\{\phi\in D\}} h(\omega) E(\omega)\, dm(\omega)=0$$ for every $h\in L^2(\Omega, m)$. It follows at once that $E$ is almost everywhere $0$ on the set $\{ \phi\in D\}$; and since $E$ is $m$-spanning (because $\mu=\mu_{K_E}$ has full support), this implies that the linear span of $\{ E(\omega);\; \omega\in \Omega\setminus\phi^{-1}(D)\}$ is dense in $X$. By the very definition of a $\TT$-eigenfield, it follows that the linear span of $\bigcup_{\lambda\in\TT\setminus D} \ker(T-\lambda)$ is dense in $X$, and we conclude that the $\TT$-eigenvectors of $T$ are $\mathbf S$-perfectly spanning. Miscellaneous remarks {#final} ===================== Examples -------- Theorem \[WS\] is a theoretical statement. However, it is extremely useful for giving concrete examples of mixing operators, including new ones (e.g. the backward shift operators on $c_0$, or the translation semigroups in sub-section \[Semigroups\] below). In this sub-section, we review several examples that were already known to be mixing in the Gaussian sense, either by an application of the results of [@BG3], [@BG2] or [@BM], or by using an ad-hoc argument. What we want to point out is that Theorem \[WS\] now makes the proofs completely straightforward. In all cases, Theorem \[WS\] is used through the following immediate consequence. \[mainbis\] Let $T$ be an operator acting on a complex separable Fréchet space $X$. Assume that one can find a $\TT$-eigenfield $(E,\phi)$ for $T$ on some topological measure space $(\Omega, m)$ such that the measure $m$ has full support, $E$ is continuous with $\overline{\rm span}\, E(\Omega)=X$, and $(fm)\circ\phi^{-1}$ is a Rajchman measure for every $f\in L^1(m)$. Then $T$ is strongly mixing in the Gaussian sense.
If $D\subset\TT$ is a Borel set of extended uniqueness, then $m(\phi^{-1}(D))=0$ because $(fm)\circ\phi^{-1}(D)=0$ for every $f\in L^1(m)$. Since $m$ has full support, it follows that $\Omega\setminus\phi^{-1}(D)$ is dense in $\Omega$ and hence that the linear span of $E(\Omega\setminus\phi^{-1}(D))$ is dense in $X$ because $E$ is assumed to be continuous. Since $E(\Omega\setminus \phi^{-1}(D))\subset \bigcup_{\lambda\in\TT\setminus D}\ker(T-\lambda)$, this shows that the $\TT$-eigenvectors of $T$ are $\mathcal U_0$-perfectly spanning. When the measure $m$ is finite, the third condition in Proposition \[mainbis\] just means that $m\circ\phi^{-1}$ is a Rajchman measure. ### Weighted shifts again Weighted backward shifts on $X_p=\ell^p(\NN)$, $1\leq p<\infty$, or $X_\infty=c_0(\NN)$ have already been considered in the introduction (Example 1): a weighted shift $B_{\mathbf w}$ is strongly mixing in the Gaussian sense as soon as the sequence $\left(\frac{1}{w_0\cdots w_n}\right)_{n\geq 0}$ is in $X_p$, due to the existence of a continuous $\TT$-eigenvector field $E:\TT\to X_p$ such that $\overline{\rm span}\, E(\TT)=X_p$, namely $$E(\lambda):=\sum_{n=0}^\infty \frac{\lambda^n}{w_0\cdots w_n}\, e_n\, .$$ Here, we would just like to point out one curious fact. If $p<\infty$, the operator $K_E:L^2(\TT)\to X_p$ turns out to be gamma-radonifying (see [@BM]) and hence there is a very natural explicit ergodic Gaussian measure for $B_{\mathbf w}$, namely the distribution of the random variable $\xi=\sum_0^\infty \frac{g_n}{w_0\cdots w_n}\, e_n$. As shown in [@BG2 Example 3.13], this need not be true when $p=\infty$, $X=c_0$; that is, the weight sequence can be chosen in such a way that $K_E$ is not gamma-radonifying. So there is no “obvious" Gaussian measure in this case. ### Composition operators Let $\alpha:\DD\to\DD$ be an automorphism of the unit disk $\DD$ without fixed points in $\DD$, and let $C_\alpha$ be the associated composition operator acting on $X=H^p(\DD)$, $1\leq p<\infty$: $$C_\alpha (f)=f\circ\alpha\, .$$ As shown in [@BG3], there is a natural continuous $\TT$-eigenfield $(E,\phi)$ for $C_\alpha$, defined on $\Omega=\RR_+$ in the parabolic case and $\Omega=[0,2\pi)$ in the hyperbolic case (endowed with Lebesgue measure $m$) such that ${\overline{\rm span}}\, E(\Omega)= X$ and the image measure $m\circ\phi^{-1}$ is absolutely continuous with respect to Lebesgue measure on $\TT$. By Proposition \[mainbis\], it follows that $C_\alpha$ is strongly mixing in the Gaussian sense. This was shown in [@BG2], with a longer proof because the authors had to play with the regularity of the vector field $E$ and the geometry of the space $X$ to show that $K_E$ is gamma-radonifying; but the proof of [@BG2] is also more informative since it provides an explicit Gaussian measure for $C_\alpha$. ### Operators with analytic $\TT$-eigenvector fields The following consequence of Proposition \[mainbis\] is worth stating explicitly. Let $X$ be a complex separable Fréchet space, and let $T\in\mathfrak L(X)$. Assume that $T$ is not a scalar multiple of the identity and that one can find a map $E:U\to X$ defined on some connected open set $U\subset \CC$ such that $E$ is holomorphic or “anti-holomorphic" (i.e. $s\mapsto E(\bar s)$ is holomorphic), each $E(s)$ is an eigenvector for $T$, the associated eigenvalue has modulus 1 for at least one $s\in U$, and $\overline{\rm span}\, E(U)=X$. Then $T$ is strongly mixing in the Gaussian sense. Let us denote by $\phi(s)$ the eigenvalue associated with $E(s)$.
Using the Hahn-Banach theorem, it is easily seen that $\phi :U\to\CC$ is holomorphic or anti-holomorphic. Moreover, $\phi$ is non-constant since $T$ is not scalar. By the inverse function theorem, it follows that $\phi (U)\cap \TT$ contains a non trivial arc $\Lambda$ such that the restriction of $\phi$ to $\Omega:=\phi^{-1}(\Lambda)$ is a homeomorphism from $\Omega$ onto $\Lambda$. By the Hahn-Banach theorem and the identity principle for holomorphic (or anti-holomorphic) functions, the linear span of $E(\Omega)$ is dense in $X$. Hence, if we denote by $m$ the image of the Lebesgue measure on $\Lambda$ by $\phi^{-1}$, the $\TT$-eigenfield $(E,\phi)$ restricted to $\Omega$ satisfies the assumptions of Proposition \[mainbis\]. One could also have used Theorem \[WS\] directly. Indeed, if $D\subset\TT$ is a Borel $\mathcal U_0$-set, then $\phi^{-1}(\TT\setminus D)$ is an uncountable Borel set (because $\phi(U)$ contains a nontrivial arc by the open mapping theorem), and as such it contains an uncountable compact set $K$. Then $\overline{\rm{span}}\, E(K)=X$ by the identity principle, and since $E(K)\subset\bigcup_{\lambda\in\TT\setminus D}\ker(T-\lambda)$, it follows that the $\TT$-eigenvectors of $T$ are $\mathcal U_0$-perfectly spanning. This lemma may be applied, for example, in the following two cases. 1. $T=M_\phi^*$, where $M_\phi$ is a (non-scalar) multiplication operator on some reproducing kernel Hilbert space of analytic functions on a connected open set $U\subset\CC$, and $\phi(U)\cap\TT\neq\emptyset$. 2. $T$ is a (non-scalar) operator on the space of entire functions $H(\CC)$ commuting with all translation operators. In the first case one may take $E(s):=k_s$, the reproducing kernel at $s\in U$ (which depends anti-holomorphically on $s$ and satisfies $M_\phi^*(k_s)=\overline{\phi (s)}\, k_s$). In the second case the assumptions of the lemma are satisfied with $E(s)=e_s$, where $e_s(z)=e^{sz}$ (denoting by $\tau_z$ the operator of translation by $z$, use the relation $\tau_ze_s=e_s(z)e_s$ to show that $Te_s=(Te_s(0))\times e_s$). Non Gaussian measures --------------------- It is natural to ask whether one gets a more general notion of mixing for an operator $T\in\mathfrak L(X)$ by requiring only that $T$ should be mixing with respect to *some* probability measure $\mu$ on $X$ with full support. The following remark (essentially contained in [@R], and also in [@BG3]) provides a partial answer. Let $X$ be a separable Banach space, and assume that $X$ has *type $2$*. Let also $\mu$ be a centred Borel probability measure on $X$ such that $\int_X\Vert x\Vert^2d\mu (x)<\infty$. Then there is a unique Gaussian measure $\nu$ on $X$ such that $$\Vert x^*\Vert_{L^2(\nu)}=\Vert x^*\Vert_{L^2(\mu)}$$ for every $x^*\in X^*$. Moreover, - if $\mu$ has full support then so does $\nu$; - if $\mu$ is $T$-invariant for some $T\in\mathfrak L(X)$, then so is $\nu$; - if $T\in\mathfrak L(X)$ is weakly mixing or strongly mixing with respect to $\mu$, then the same is true with respect to $\nu$. Put $\mathcal H:=L^2(\mu)$. By assumption on $\mu$, there is a well defined (conjugate-linear) “inclusion" operator $J:X^*\to \mathcal H$, namely $J(x^*)=\overline{x^*}$ considered as an element of $L^2(\mu)$. Moreover, it follows from Lebesgue’s theorem and the $w^*$-$\,$metrizability of $B_{X^*}$ that $J$ is $(w^*,w^*)\,$-$\,$continuous on $B_{X^*}$. Hence, $J$ is the adjoint of a bounded operator $K:\mathcal H\to X$. 
By definition, this means that $K^*x^*=\overline{x^*}$ considered as an element of $L^2(\mu)$, so we have in particular $$\Vert K^*x^*\Vert_{\mathcal H}=\Vert x^*\Vert_{L^2(\mu)}\, .$$ It is fairly easy to show that the “inclusion" operator $K^*=J:X^*\to L^2(\mu)$ is *absolutely $2$-summing* (see [@AK] for the definition). Since $X$ has type 2, it follows that $K$ is gamma-radonifying (see [@CTV], or [@BM Proposition 5.19]). If we denote by $\nu=\mu_K$ the associated Gaussian measure, then $\Vert x^*\Vert_{L^2(\nu)}=\Vert x^*\Vert_{L^2(\mu)}$ for every $x^*\in X^*$ by the very definition of $\nu$. Moreover, if $\nu'$ is another Gaussian measure with the same properties, then $\nu$ and $\nu'$ have the same Fourier transform and hence $\nu=\nu'$. To prove (1), assume that $\mu$ has full support. Then the operator $K^*=J:X^*\to \mathcal H$ is one-to-one because a continuous function on $X$ is $0$ in $L^2(\mu)$ if and only if it is identically $0$. It follows that $K$ has dense range, and hence that $\nu=\mu_K$ has full support. To prove (2), assume that $\mu$ is $T$-invariant. Then $\Vert T^*x^*\Vert_{L^2(\mu)}=\Vert x^*\Vert_{L^2(\mu)}$ and hence $\Vert K^*(T^*x^*)\Vert_{\mathcal H}=\Vert K^*(x^*)\Vert_{\mathcal H}$ for every $x^*\in X^*$. By the proof of Lemma \[back1\], this shows that $\nu$ is $T$-invariant. Finally, (3) follows from the following observations: (i) weak mixing or strong mixing of $T$ with respect to $\mu$ is characterized by a certain behaviour (B) of the sequence $(\langle f\circ T^n,g\rangle_{L^2(\mu)})_{n\geq 0}$, for any $f,g\in L^2_0(\mu)$; (ii) when specialized to linear functionals, this gives that the sequence $(\langle T^{*n}x^*,y^*\rangle_{L^2(\mu)})$ satisfies (B) for any $x^*,y^*\in X^*$; (iii) by the definition of $\nu$, this means that $(\langle T^{*n}x^*,y^*\rangle_{L^2(\nu)})$ satisfies (B) for any $x^*,y^*$; (iv) since $\nu$ is a Gaussian measure, this is enough to ensure weak mixing or strong mixing of $T$ with respect to $\nu$, by Rudnicki [@R] or Bayart-Grivaux [@BG3]. When $X$ is a Hilbert space, it follows from this remark (and from Theorem \[WS\]) that an operator $T\in\mathfrak L(X)$ is strongly mixing with respect to some centred probability measure $\mu$ on $X$ with full support such that $\int_X\Vert x\Vert^2d\mu (x)<\infty$ if and only if the $\TT$-eigenvectors of $T$ are $\mathcal U_0$-perfectly spanning, in which case the measure $\mu$ can be assumed to be Gaussian. It would be interesting to know if this remains true without any a priori assumption on the measure $\mu$. A characterization of $\mathcal U_0$-sets {#bizarre} ----------------------------------------- Our results easily yield the following curious characterization of sets of extended uniqueness. Let $D$ be an analytic subset of $\TT$. Then $D$ is a set of extended uniqueness if and only if the following holds: for every Hilbert space operator $T$ which is strongly mixing in the Gaussian sense, the linear span of $\bigcup_{\lambda\in\TT\setminus D}\ker(T-\lambda)$ is dense in the underlying Hilbert space. The “only if" part follows immediately from Proposition \[converse\]. Conversely, assume that $D\not\in\mathcal U_0$. Then, since $D$ is universally measurable, it contains a compact set $\Lambda$ which is the support of some Rajchman probability measure; that is, $\Lambda$ is $\mathcal U_0$-perfect. Let $T_\Lambda:H_\Lambda\to H_\Lambda$ be the Kalisch operator from Example 2 in the introduction.
Then $T_\Lambda$ is strongly mixing in the Gaussian sense, but ${\rm span}\left(\bigcup_{\lambda\in\TT\setminus D}\ker(T_\Lambda-\lambda)\right)$ is certainly not dense in $H_\Lambda$ since it is $\{ 0\}$ (recall that $\sigma_p(T_\Lambda)=\Lambda$ and $\Lambda\subset D$). The non-ergodicity index ------------------------ Loosely speaking, Corollary \[characexistergod\] is a kind of “perfect set theorem" for ergodicity. This can be made precise as follows. Let $T\in\mathfrak L(X)$ (where $X$ is a Banach space with cotype 2), and consider the following “derivation" on closed, $T$-invariant subspaces of $X$: for any such subspace $E$, set $$\mathcal D_T(E):=\bigcap_{D}\overline{\rm span}\, \bigcup_{\lambda\in\TT\setminus D}\ker (T_{\vert E}-\lambda)\, ,$$ where the intersection ranges over all countable sets $D\subset\TT$. By transfinite induction, one defines the iterates $\mathcal D_T^\alpha(X)$ for every ordinal $\alpha$, in the obvious way: $$\displaylines{ \mathcal D_T^{\alpha+1}(X)=\mathcal D_T[\mathcal D_T^\alpha(X)]\, ,\cr \mathcal D_T^\lambda(X)=\displaystyle\bigcap_{\alpha <\lambda} \mathcal D_T^\alpha(X)\;\;\;\;\hbox{($\lambda$ limit).} }$$ Since $X$ is a Polish space, the process must stabilize at some countable ordinal $\alpha(T)$ and hence $\mathcal D_T^\infty (X):=\bigcap_{\alpha} \mathcal D_T^\alpha (X)$ is well-defined and is a fixed point of $\mathcal D_T$ (in fact, the largest fixed point). Then, Corollary \[characexistergod\] says that $T$ admits a nontrivial ergodic Gaussian measure iff $\mathcal D_T^\infty(X)\neq\{ 0\}$, in which case one can find an ergodic measure with support $\mathcal D_T^\infty(X)$. The subspace $\mathcal D_T^\infty (X)$ is the “perfect kernel" associated with the derivation $\mathcal D_T$, a canonical witness of the ergodicity of $T$. Let us say that $T$ is *totally non-ergodic* if it does not admit any nontrivial ergodic Gaussian measure. Then the ordinal $\alpha(T)$ may be called the “non-ergodicity index" of $T$. It is quite natural to wonder whether this index can be arbitrarily large: given any countable ordinal $\alpha$, is it possible to construct a totally non-ergodic operator $T$ with $\alpha (T)>\alpha$? The scope of the abstract results --------------------------------- Some comments are in order regarding the assumptions made on the family $\mathbf S$ in Theorems \[abstract\] and \[abstracteasy\]. The main trouble with Theorem \[abstract\] is that we have no idea of how to prove it without assuming that the family $\mathbf S$ is $c_0$-like. Perhaps unexpectedly, what makes the definition of a $c_0$-like family very restrictive is the uniform boundedness assumption of the sequence of semi-norms $(\Phi_n)$. For example, any growth condition of the form $$a_n=o({\varepsilon}_n)\, ,$$ where $\bar{\varepsilon}=({\varepsilon}_n)$ is a sequence of positive numbers tending to $0$ and satisfying ${\varepsilon}_{n\pm k}\leq C_k \, {\varepsilon}_n$, defines a translation-invariant ideal $\mathbf S_{\bar{\varepsilon}}\subset\ell^\infty(\ZZ_+)$ which has the correct form *except* for this uniform boundedness condition (just put $\Phi_n(a)=\frac1{{\varepsilon}_n}\, \vert a_n\vert$). In fact, in this case one cannot hope for positive results of any kind: as observed by V. Devinck ([@Vincent]), it follows from a result of C. Badea and V. Müller ([@BadMull]) that $\mathbf S_{\bar{\varepsilon}}\,$-mixing operators just do not exist at all (at least on a Hilbert space).
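For instance (a minimal illustration of the previous point, with a choice of $\bar{\varepsilon}$ that is ours and not taken from the references just quoted): let ${\varepsilon}_n=\frac{1}{n+1}$, so that one may take $C_k=k+1$. The semi-norms above then become $$\Phi_n(a)=(n+1)\,\vert a_n\vert\, ,\qquad a\in\ell^\infty(\ZZ_+)\, ,$$ and $\sup_{n\geq 0}\Phi_n(\mathbf 1)=\sup_{n\geq 0}\,(n+1)=\infty$, so this sequence of $w^*$-$\,$continuous semi-norms is certainly not uniformly bounded on $\ell^\infty(\ZZ_+)$.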
In the same spirit, it can be shown that it is impossible to find any $c_0$-like description for $\mathcal B=L^1(m)$, the family of all measures absolutely continuous with respect to Lebesgue measure on $\TT$ (see [@MZ]). One may also note that $\mathcal FL^1(m)$ is not an ideal of $\ell^\infty(\ZZ)$ (see [@JPK Chapter 5, Proposition 6]). As a more extreme example, the well-studied notion of *mild mixing* (see [@Aa]) does not fit at all into the framework. Indeed, in this case the relevant family of measures $\mathcal B$ is the following: a measure $\sigma$ is in $\mathcal B$ iff it annihilates every *weak Dirichlet set* (a Borel set $D\subset \TT$ is weak Dirichlet if, for any measure $\mu$ supported on $D$, one can find an increasing sequence of integers $(n_k)$ such that $z^{n_k}\to\mathbf 1$ in $L^2(\mu)$). By a result of S. Kahane ([@SK]), the family $\mathcal B$ is extremely complicated, namely *non-Borel* in $(\mathcal M(\TT),w^*)$. So there is absolutely no hope of finding a $c_0$-like description for $\mathcal B$. Now, if one is interested in Hilbert spaces only, Theorem \[abstracteasy\] is arguably rather general since the $c_0$-like property is not required. However, that the family $\mathbf S$ should be *norm-closed* in $\ell^\infty(\ZZ_+)$ is already quite a restrictive condition: indeed, this implies that the strongest mixing property that can be reached is, precisely, strong mixing. In particular, Theorem \[abstracteasy\] cannot be applied to any “summability" condition on the Fourier coefficients. On the other hand, the following example can be handled: let $\mathcal F$ be any translation-invariant filter on $\ZZ_+$, and take as $\mathbf S$ the family of all sequences $(a_n)\in\ell^\infty (\ZZ_+)$ tending to $0$ along $\mathcal F$. Perhaps more importantly, there are quite natural (norm-closed) families $\mathbf S$ which are not ideals of $\ell^\infty (\ZZ_+)$. The most irritating example is the ergodic case. As already observed, a set $D\subset \TT$ is $\mathbf S_{\rm erg}$-small if and only if $D\subset\{ \mathbf 1\}$. Hence, if Theorem \[abstracteasy\] could be applied in this case, the result would read as follows: *if $T\in\mathfrak L(X)$ and if ${\rm span}\left(\bigcup_{\lambda\in\TT\setminus\{\mathbf 1\}} \ker(T-\lambda)\right)$ is dense in $X$, then $T$ is ergodic in the Gaussian sense.* But this is clearly not true since for example $T=-\mathrm{id}$ satisfies the assumption and is not even hypercyclic. In spite of that, it might happen that an operator $T\in\mathfrak L(X)$ is ergodic in the Gaussian sense as soon as *it is hypercyclic* and the $\TT$-eigenvectors span $X$; but the proof of such a result would require at least one new idea. This is, of course, related to the following well-known open problem: is it true that every *chaotic* operator (i.e. a hypercyclic operator with a dense set of periodic points) is frequently hypercyclic?
The only “difference" is that, according to the (point-)spectral mapping theorem for $C_0$-semigroups, the unimodular eigenvalues of the single operator $T$ from the discrete case should be replaced with the purely imaginary eigenvalues of the semigroup generator in the continuous case. Hence, the continuous analogue of Theorem \[WS\] reads as follows. \[THMSG\] Let $X$ be a separable Banach space and let $\mathcal T=(T_t)_{t\geq 0}$ be a $C_0$-semigroup in $\mathfrak L(X)$ with infinitesimal generator $A$. 1. If the linear span of $\bigcup_{\theta\in\RR\setminus D}\ker (A-i\theta)$ is dense in $X$ for any countable set $D\subset\RR$, then $\mathcal T$ is weakly mixing in the Gaussian sense. 2. If the linear span of $\bigcup_{\theta\in\RR\setminus D}\ker (A-i\theta)$ is dense in $X$ for any $\mathcal U_0$-set $D\subset\RR$, then $\mathcal T$ is strongly mixing in the Gaussian sense. Since the proof would essentially be a matter of changing the notation, we shall not give any detail. We note, however, that the semigroup case is obviously of some interest in view of its connections with partial differential equations. See [@BK], [@Las] or [@R2] for more on these matters. In another direction, a perhaps ambitious program would be to consider linear representations of more general semigroups; in other words, to establish results like Theorem \[WS\] for semigroups of operators $(T_\gamma)_{\gamma\in\Gamma}$ with $\Gamma$ no longer equal to $\NN$ or $\RR_+$. There is no difficulty in defining ergodicity or strong mixing in this setting, and the spectral approach makes sense if $\Gamma$ is a locally compact abelian group. However, it is not clear what the correct “perfect spanning" property should be. ### Translation semigroups To illustrate Theorem \[THMSG\], let us give a new and very simple example of a strongly mixing $C_0$-semigroup, that cannot be reached by applying the results of [@BG3], [@BG2] or [@BM]. Let $\rho:\mathbb R_+\to(0,\infty)$ be a locally bounded positive function on $\RR_+$, and let $$\mathcal C_{0}(\RR_+,\rho):=\left\{f\in \mathcal C(\mathbb R_+); \lim_{x\to+\infty} f(x)\rho(x)=0\right\}$$ endowed with its natural norm, $\|f\|=\|f \rho\|_\infty$. Moreover, assume that $\rho$ is an *admissible weight* in the sense of [@DSW], which means that $$C(t):=\sup_{x\in\RR_+}\frac{\rho(x)}{\rho(x+t)}<\infty$$ for any $t\geq 0$ and $C(t)$ is locally bounded on $\RR_+$. Then the *translation semigroup* $\mathcal T=(T_t)_{t\geq 0}$ defined by $$T_tf(x)= f(x+t)$$ is a $C_0$-semigroup on $\mathcal C_{0}(\RR_+,\rho)$. It is proved in [@BBCP] that $\mathcal T$ is strongly mixing in the topological sense if and only if $\rho(x)\xrightarrow{x\to\infty} 0$ as $x\to\infty$. We now show that this is in fact equivalent to strong mixing in the Gaussian sense. The translation semigroup $\mathcal T$ is strongly mixing in the Gaussian sense on $\mathcal C_{0}(\RR_+,\rho)$ if and only if $\rho(x)\to 0$ as $x\to+\infty$. Assume that $\rho(x)\to 0$ as $x\to+\infty$. The infinitesimal generator of $\mathcal T$ is the derivation operator (denoted by $A$), whose domain includes all $\mathcal C^1$ functions on $\RR_+$ with a bounded and uniformly continuous derivative. For any $\theta\in\RR$, the function $e_{\theta}(x)=e^{i\theta x}$ is in $\ker(A-i\theta)$, and the map $\theta\mapsto e_{\theta}$ is clearly continuous from $\RR$ into $\mathcal C_{0}(\RR_+,\rho)$. 
Moreover, it follows easily from (the Hahn-Banach theorem and) the injectivity of the Fourier transformation that the linear span of the functions $e_{\theta}$ is dense in $\mathcal C_{0}(\RR_+,\rho)$. By Theorem \[THMSG\], we conclude that $\mathcal T$ is strongly mixing in the Gaussian sense. One may also consider the weighted $L^p$ spaces $L^p(\RR_+,\rho)$ ($1\leq p<\infty$) defined by the condition $$\int_0^\infty \vert f(x)\vert^p\, \rho(x)\, dx<\infty\, .$$ With exactly the same proof as above, one gets that the translation semigroup is strongly mixing in the Gaussian sense on $L^p(\RR_+,\rho)$ as soon as $$\int_0^\infty \rho (x)\, dx<\infty\, .$$ The size of the set of hypercyclic vectors ------------------------------------------ It is well known that if $T$ is a hypercyclic operator acting on a separable Fréchet space $X$, then $HC(T)$ (the set of hypercyclic vectors for $T$) is a dense $G_\delta$ subset of $X$. Moreover, as observed in the introduction, if $T$ happens to be ergodic with respect to some probability measure $\mu$ with full support, then $\mu$-almost every $x\in X$ is a hypercyclic vector for $T$. Thus, $HC(T)$ is large both in the Baire category sense and in a measure-theoretic sense. Now, there are many other natural notions of “largeness" in analysis. A quite popular one is that of *prevalence*, which is discussed at length in [@HSY]. In a Polish abelian group $G$, a set is prevalent if its complement $A$ is Haar-null in the sense of Christensen [@Chr], i.e. one can find a Borel probability measure $\nu$ on $G$ such that $\nu (A+g)=0$ for every $g\in G$. Some results concerning prevalence and hypercyclicity are proved in [@BMM], and much more spectacular results regarding the size of the set of hypercyclic vectors are to be found in [@GR]. For various reasons, it is not incongruous to expect that if an operator $T$ is ergodic in the Gaussian sense, then $HC(T)$ is *not* prevalent, and is even Haar-null. We are not able to prove this, but this is indeed true for a large class of ergodic weighted shifts, as shown by the following result. \[PROPHAARNULL\] Let $X$ be a Banach space, and let $T\in\mathcal L(X)$. Assume that one can find $u\in X$ such that $$\sum_{n=0}^\infty \frac1{\|T^n (u)\|}<\infty\, .$$ Then $HC(T)$ is Haar-null. Considering only the real-linear structure of $X$, we may assume that $X$ is a real Banach space. An efficient way of proving that a set $A\subset X$ is Haar-null is to exhibit some finite-dimensional subspace $V$ of $X$ such that $$\forall x\in X\;\forall^{a.e.} v\in V\; :\; x+v\not\in A\, ,$$ where $\forall^{a.e.}$ refers to Lebesgue measure on $V$. In the terminology of [@HSY], such a subspace $V$ is called a *probe* for $A$. We show that the one-dimensional subspace $V=\RR u$ is a probe for $HC(T)$. So let $x\in X$ be arbitrary, set $\Lambda:=\{\lambda\in\RR;\ x+\lambda u\in HC(T)\}$, and let us prove that $\Lambda$ has Lebesgue measure $0$. For any $\lambda\in \Lambda$, one can find arbitrarily large $n\in\NN$ such that $$\|T^n (x)+\lambda T^n (u)\|\leq 1,$$ and hence, by the reverse triangle inequality, such that $$\left\vert\, \vert\lambda\vert-({\|T^n(x)\|}/{\|T^n (u)\|})\,\right\vert\leq \frac1{\|T^n (u)\|}\,\cdot$$ Putting $a_n:= {\|T^n(x)\|}/{\|T^n (u)\|}$, it follows that $$|\lambda|\in\bigcap_{N\in\NN}\, \bigcup_{n\geq N}\left[a_n-\frac1{\|T^n (u)\|},a_n+\frac1{\|T^n (u)\|}\right]$$ for every $\lambda\in\Lambda$. In particular, the Lebesgue measure of $\Lambda$ is not greater than $$\inf_{N\in\NN} \; 2 \sum_{n\geq N}\frac1{\|T^n u\|}\, ,$$ which is a complicated way to write $0$.
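For a concrete instance of the probe computation above (our illustration, with a hypothetical choice of $T$ and $u$ that is not taken from the text), one may take $T=2B$, twice the unweighted backward shift on $\ell^2(\NN)$, and $u=\sum_{k\geq 0}2^{-k/2}e_k$, so that $\|T^n (u)\|=\sqrt2\, 2^{n/2}$ (up to truncation). The following sketch evaluates the final tail bound numerically and shows it shrinking geometrically to $0$.

```python
# A numerical sanity check (ours, not from the paper) of the estimate
# inf_N 2 * sum_{n >= N} 1 / ||T^n(u)||  for T = 2B on l^2(N) and
# u = sum_k 2^{-k/2} e_k, truncated to K coordinates.
import math

K = 200
u = [2.0 ** (-k / 2.0) for k in range(K)]          # coordinates of u

def norm_Tn_u(n):
    # (2B)^n u has coordinates 2^n * u[k + n] for k = 0, ..., K - n - 1
    return math.sqrt(sum((2.0 ** n * u[k + n]) ** 2 for k in range(K - n)))

for N in (0, 5, 10, 20):
    tail = 2.0 * sum(1.0 / norm_Tn_u(n) for n in range(N, 60))
    print(N, tail)      # the bound decreases roughly like 2^(-N/2)
```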
Let $B_{\mathbf w}$ be a weighted backward shift on $X_p=\ell^p(\NN)$, $1\leq p<\infty$ or $X_\infty =c_0(\NN)$. Assume that the weight sequence $\mathbf w=(w_n)_{n\geq 1}$ satisfies $$\sum_{n=1}^\infty \frac{1}{\vert w_1\cdots w_n\vert^{p/(p+1)}}<\infty\, .$$ Then $HC(B_{\mathbf w})$ is Haar-null. This holds for example if $\liminf\limits_{n\to\infty} \vert w_n\vert>1$. Set $w_0:=1$ and denote by $(e_k)_{k\geq 0}$ the canonical basis of $X_p$. If $p<\infty$, consider the vector $$u:=\sum_{k=0}^\infty \frac{1}{\vert w_0\cdots w_k\vert^{1/(p+1)}}\, e_k\, .$$ If $p=\infty$, put $$u:=\sum_{k=0}^\infty \frac{A_k}{w_0\cdots w_k}\, e_k\, ,$$ where $(A_k)$ is any sequence of positive numbers such that $A_k=o(\vert w_0\cdots w_k\vert)$ and $\sum_0^\infty 1/A_k<\infty$. Thus, we see that if $B$ is the usual (unweighted) backward shift, then $HC(\lambda B)$ is Haar-null in $X_p$ for any $p\in[1,\infty]$ if $\vert\lambda\vert>1$. Some questions -------------- To conclude the paper, we list a few “natural" questions, most of which have already been raised. 1. Is Theorem \[abstract\] true without assuming that the family $\mathbf S$ is $c_0$-like? 2. Is Theorem \[abstracteasy\] true for $L^1(m)$-mixing, or in the mild mixing case? 3. Let $T$ be a hypercyclic operator on $X$ whose $\TT$-eigenvectors span a dense subspace of $X$. Is $T$ ergodic in the Gaussian sense? Is $T$ frequently hypercyclic? 4. \[truc\] Let $T\in\mathcal L(X)$, and assume that for any set $D\subset\TT$ *with empty interior*, the linear span of $\bigcup_{\lambda\in\TT\setminus D}\ker(T-\lambda)$ is dense in $X$. Is it possible to find a countable family of continuous $\TT$-eigenvector fields $(E_i)_{i\in I}$, where each $E_i$ is defined on some nontrivial closed arc $\Lambda_i\subset\TT$, such that ${\rm span}\left(\bigcup_{i\in I} E_i(\Lambda_i)\right)$ is dense in $X$? 5. Let $H$ be a Hilbert space, and let $T\in\mathfrak L(H)$ be mixing with respect to some Borel measure on $H$ with full support. Is $T$ mixing in the Gaussian sense? 6. On which Banach spaces is it possible to find ergodic operators (in the Gaussian sense) with no eigenvalues? The space $X$ should of course not have cotype 2, but this is not enough: for example, if $X$ is hereditarily indecomposable then there are no frequently hypercyclic operators at all on it, and hence no ergodic operators either ([@Shk]). In fact, the question splits into two separate problems: (i) on which Banach spaces is it possible to find ergodic operators? (ii) what can be said if the space does not have cotype 2? We refer to [@GRosetc] for general results regarding the first problem. 7. Is there an ergodic weighted shift on $c_0(\NN)$ with no unimodular eigenvalues? 8. Let $T\in\mathfrak L(X)$ be ergodic in the Gaussian sense. Are the ergodic measures with full support dense in the set of all $T$-invariant measures (endowed with the usual Prokhorov topology)? See [@Sig] for a positive answer in a completely different situation, and [@CS] for more in that direction. 9. Is it true that if $T$ is an ergodic operator, then $HC(T)$ is Haar-null? Is this true at least for weighted shifts? J. Aaronson, *An introduction to infinite ergodic theory*. Mathematical Surveys and Monographs [**50**]{}, Amer. Math. Soc. (1997). F. Albiac and N. Kalton, *Topics in Banach space theory.* Graduate Texts in Mathematics [**233**]{}, Springer (2006). C. Badea and S. Grivaux, [*Unimodular eigenvalues, uniformly distributed sequences and linear dynamics*]{}. Adv. Math. [**211**]{} (2007), 766–793. C. Badea and S.
Grivaux, [*Size of the peripherical point spectrum under power or resolvent growth conditions*]{}. J. Funct. Anal. [**247**]{} (2007), 302–329. C. Badea and V. Müller, [*On weak orbits of operators*]{}. Topology Appl. [**156**]{} (2009), 1381–1385. F. Bayart and S. Grivaux, [*Hypercyclicity and unimodular point spectrum*]{}. J. Funct. Anal. [**226**]{} (2005), 281–300. F. Bayart and S. Grivaux, [*Frequently hypercyclic operators*]{}. Trans. Amer. Math. Soc. [**358**]{} (2006), no. 11, 5083–5117. F. Bayart and S. Grivaux, [*Invariant Gaussian measures for operators on Banach spaces and linear dynamics*]{}. Proc. Lond. Math. Soc. [**94**]{} (2007), 181–210. F. Bayart and É. Matheron, *Dynamics of linear operators*. Cambridge Tracts in Mathematics [**179**]{}, Cambridge University Press (2009). F. Bayart, É. Matheron and P. Moreau, [*Small sets and hypercyclic vectors*]{}. Comment. Math. Univ. Carolinae [**49**]{} (2008), 53–65. F. Bayart and I. Z. Rusza, [*Difference sets and frequently hypercyclic weighted shifts*]{}. Preprint (2011). T. Bermúdez, A. Bonilla, J. Conejero and A. Peris, [*Hypercyclic, topologically mixing and chaotic semigroups on Banach spaces*]{}. Studia Math. [**170**]{} (2005), 57–75. V. I. Bogachev *Gaussian measures.* Mathematical Surveys and Monographs [**62**]{}, Amer. Math. Soc. (1998). A. Bonilla and K. -G. Grosse-Erdmann, [*On a theorem of Godefroy and Shapiro*]{}. Integral Equations operator Theory [**56**]{} (2006), 151–162. P. Brunovský and J. Komornik, [*Ergodicity and exactness of the shift on $C[0,\infty]$ and the semiflow of a first order partial differential equation.*]{} J. Math. Anal. Appl. [**104**]{} (1984), no. 1, 235–245. S. A. Chobanyan, V. I. Tarieladze and N. N. Vakhania, *Probability distributions on Banach spaces.* Mathematics and its Applications (Soviet Series) [**14**]{}, Springer (1987). J. P. R. Christensen, [*On sets of [H]{}aar measure zero in Polish abelian groups*]{}. Israel J. Math. [**13**]{} (1972), 255–260 Y. Coudène and B. Schapira, [*Generic measures for hyperbolic flows on non-compact spaces*]{}. Israel J. Math. [**179**]{} (2010), 157–172. G. Da Prato, *An introduction to infinite-dimensional analysis*. Universitext, Springer (2006). M. De la Rosa, L. Frerick, S. Grivaux and Alfredo Peris, [*Frequent hypercyclicity, chaos, and unconditional Schauder decompositions.*]{} Israel J. Math. (to appear). G. Debs and J. Saint Raymond, [*Ensembles boréliens d’unicité et d’unicité au sens large*]{}. Ann. Inst. Fourier, Grenoble [**37**]{} (1987), no. 3, 217–239. W. Desch, W. Schappacher and G. F. Webb, [*Hypercyclic and chaotic semigroups of linear operators*]{}. Ergodic Theory Dynam. Systems [**17**]{} (1997), 793–819. V. Devinck, [*Strongly mixing operators on Hilbert spaces and speed of mixing*]{}. Preprint (2011). T. Eisner and S. Grivaux, [*Hilbertian Jamison sequences and rigid dynamical systems*]{}. Preprint (2011). E. Flytzanis, [*Unimodular eigenvalues and linear chaos in Hilbert spaces*]{}. Geom. Funct. Anal. [**5**]{} (1995), 1–13. E. Glasner, *Ergodic theory via joinings*. Mathematical Surveys and Monographs [**101**]{}, Amer. Math. Soc. (2003). G. Godefroy and J. Shapiro, [*Operators with dense, invariant, cyclic vector manifolds*]{}. J. Funct. Anal. [**98**]{} (1991), 229–269. S. Grivaux, [*A new class of frequently hypercyclic operators*]{}. Indiana Univ. Math. J. (2011), [to appear]{}. S. Grivaux and M. Roginskaya, [*On Read’s type operators on Hilbert spaces*]{}. Int. Math. Res. Not. IMRN (2008), Art. ID rnn083, 42 pp. K. 
-G Grosse-Erdmann, [*Dynamics of linear operators.*]{} Topics in operator theory and complex analysis 41–84, University of Malaga (2007). K. -G Grosse-Erdmann and A. Peris, *Linear chaos.* Universitext, Springer (to appear). B. R. Hunt, T. Sauer and J. A. Yorke, [*Prevalence: a translation invariant “almost every" on infinite-dimensional spaces*]{}. Bull. Amer. Math. Soc. [**27**]{} (1992), no. 2, 217–238. J. P. Kahane, *Some random series of functions*. Cambridge Studies in Advanced Mathematics [**5**]{}, Cambridge University Press (1985). S. Kahane, [*On the complexity of sums of Dirichlet measures*]{}. Ann. Inst. Fourier, Grenoble [**43**]{} (1993), no. 1, 111–123. G. Kalisch, [*On operators on separable Banach spaces with arbitrary prescribed point spectrum*]{}. Proc. Amer. Math. Soc. [**34**]{} (1972) 207–208. Y. Katznelson, *An introduction to harmonic analysis.* Dover (1976). A. S. Kechris, *Classical descriptive set theory*. Graduate Texts in Mathematics [**156**]{}, Springer (1995). A. S. Kechris and A. Louveau, *Descriptive set theory and the structure of sets of uniqueness*. London Math. Soc. Lecture Notes Series [**128**]{}, Cambridge University Press (1987). A. Lasota, [*Invariant measures and a linear model of turbulence.*]{} Rend. Sem. Mat. Univ. Padova [**61**]{} (1979), 39–48. R. Lyons, [*Fourier–Stieltjes coefficients and asymptotic distribution modulo $1$.*]{} Ann. Math. [**122**]{} (1985) 155–170. É. Matheron and M. Zelený, [*Rudin-like sets and hereditary families of compact sets*]{}. Fund. Math. [**185**]{} (2005), 97–116. J. C. Oxtoby and S. M. Ulam, [*On the existence of a measure invariant under a transformation*]{}. Ann. Math [**40**]{} (1939), 560–566. K. Petersen, *Ergodic theory*. Cambridge Studies in Advanced Mathematics [**2**]{}, Cambridge University Press (1983). W. Rudin, *Functional analysis*. McGraw-Hill (1973). R. Rudnicki, [*Gaussian measure-preserving linear transformations*]{}. Univ. Iagel Acta Math. [**30**]{} (1993), 105–112. R. Rudnicki, [*Chaos for some infinite-dimensional dynamical systems.*]{} Math. Methods Appl. Sci. [**27**]{} (2004), no. 6, 723–738. H. Salas, [*Hypercyclic weighted shifts*]{}. Trans. Amer. Math. Soc. [**347**]{} (1995), no. 3, 993–1004. S. Shkarin, [*On the spectrum of frequently hypercyclic operators*]{}. Proc. Amer. Math. Soc. [**137**]{} (2009) no. 1, 123–134. K. Sigmund, [*Generic properties of invariant measures for axiom A diffeomorphisms*]{}. Inventiones Math. [**11**]{} (1970), 99–109.
--- abstract: | In 1990, J.-L. Krivine introduced the notion of storage operators. They are $\l$-terms which simulate call-by-value within the call-by-name strategy, and they can be used in order to model assignment instructions. J.-L. Krivine has shown that there is a very simple second order type in the $AF2$ type system for storage operators, using the Gödel translation of classical to intuitionistic logic.\ In order to model the control operators, J.-L. Krivine has extended the system $AF2$ to classical logic. In his system the property of unicity of integer representation is lost, but he has shown that storage operators typable in the system $AF2$ can be used to find the values of classical integers.\ In this paper, we present a new classical type system based on a logical system called mixed logic. We prove that in this system we can characterize, by types, the storage operators and the control operators. We also present a similar result in M. Parigot’s $\l \m$-calculus. --- **Mixed Logic and Storage Operators\ ** **Karim NOUR\ LAMA - Equipe de Logique, Université de Chambéry\ 73376 Le Bourget du Lac\ e-mail nour@univ-savoie.fr\ ** Introduction ============ In 1990, J.-L. Krivine introduced the notion of storage operators (see \[4\]). They are closed $\l$-terms which allow, for a given data type (the type of integers, for example), to simulate “call by value” in $\l$-calculus within a “call by name” context (the head reduction), and they can be used in order to model assignment instructions. J.-L. Krivine has shown that the formula $\q x \{ N$\*$[x] \f \neg\neg N[x] \}$ is a specification for storage operators for Church integers, where $N[x]$ is the type of integers in the $AF2$ type system, and the operation $*$ is the simple Gödel translation from classical to intuitionistic logic which associates to every formula $F$ the formula $F$\* obtained by replacing in $F$ every atomic formula by its negation (see \[3\]).\ The latter result suggests several questions: - Why do we need a Gödel translation? - Why do we need the type $N$\*$[x]$, which characterizes a class larger than the integers? In order to model the control operators, J.-L. Krivine has extended the system $AF2$ to classical logic (see \[6\]). His method is very simple: it consists of adding a new constant, denoted by $C$, with the declaration $C : \q X \{ \neg \neg X \f X \}$, which axiomatizes classical logic over intuitionistic logic. For the constant $C$, he adds a new reduction rule: $(C t t_1 ... t_n) \f (t \quad \l x (x \quad t_1 ... t_n))$, which is a particular case of a rule given by Felleisen for the control operator (see \[1\]). In this system the property of unicity of integer representation is lost, but J.-L. Krivine has shown that storage operators typable in the intuitionistic system $AF2$ can be used to find the values of classical integers [^1] (see \[6\]).\ The latter result also suggests several questions: - What is the relation between classical integers and the type $N$\*$[x]$? - Why do we need intuitionistic logic to model the assignment instructions and classical logic to model the control operators? In this paper, we present a new classical type system based on a logical system called mixed logic. This system essentially allows one to distinguish between classical proofs and intuitionistic proofs.
We prove that, in this system, we can characterize, by types, the storage operators and the control operators. These results give some answers to the previous questions.\ At the end, we present (without proof) a similar result in M. Parigot’s $\l \m$-calculus.\ **Acknowledgement. We wish to thank J.-L. Krivine and C. Paulin for helpful discussions. We are also grateful to R. David and N. Bernard for their numerous corrections and suggestions.** Pure and typed $\l$-calculus ============================ - Let $t,u,u_1,...,u_n$ be $\l$-terms; the application of $t$ to $u$ is denoted by $(t)u$. In the same way we write $(t)u_1...u_n$ instead of $(...((t)u_1)...)u_n$. - $Fv(t)$ is the set of free variables of a $\l$-term $t$. - The $\b$-reduction (resp. $\b$-equivalence) relation is denoted by $u \f\sb{\b} v$ (resp. $u \simeq\sb{\b} v$). - The notation $\s(t)$ represents the result of applying the simultaneous substitution $\s$ to the free variables of $t$, after a suitable renaming of the bound variables of $t$. - We denote by $(u)^n v$ the $\l$-term $(u)...(u)v$ where $u$ occurs $n$ times, and by $\sou{u}$ the sequence of $\l$-terms $u_1,...,u_n$. If $\sou{u} = u_1,...,u_n$, $n \geq 0$, we denote by $(t)\sou{u}$ the $\l$-term $(t)u_1...u_n$. - Let us recall that a $\l$-term $t$ either has a head redex \[i.e. $t=\l x_1 ...\l x_n (\l x u) v v_1 ... v_m$, the head redex being $(\l x u) v$\], or is in head normal form \[i.e. $t=\l x_1 ...\l x_n (x) v_1 ... v_m$\]. The notation $u \p v$ means that $v$ is obtained from $u$ by some head reductions. If $u \p v$, we denote by $h(u,v)$ the length of the head reduction between $u$ and $v$. (see \[3\])\ 1) If $u \p v$, then, for any substitution $\s$, $\s(u) \p \s(v)$, and $h(\s(u),\s(v))=h(u,v)$.\ 2) If $u \p v$, then, for every sequence of $\l$-terms $\sou{w}$, there is a $w$ such that $(u)\sou{w} \p w$, $(v)\sou{w} \p w$, and $h((u)\sou{w},w)=h((v)\sou{w},w)+h(u,v)$. **Remark. Lemma 2.1 shows that to make the head reduction of $\s(u)$ (resp. of $(u)\sou{w}$) it is equivalent (same result and same number of steps) to first make some steps in the head reduction of $u$, and then make the head reduction of $\s(v)$ (resp. of $(v)\sou{w}$). $\Box$** - The types will be formulas of second order predicate logic over a given language. The logical connectives are $\perp$ (for absurdity), $\f$, and $\q$. There are individual (or first order) variables denoted by $x,y,z,...,$ and predicate (or second order) variables denoted by $X,Y,Z,....$ - We do not suppose that the language has a special constant for equality. Instead, we define the formula $u=v$ (where $u,v$ are terms) to be $\q Y(Y(u) \f Y(v))$ where $Y$ is a unary predicate variable. Such a formula will be called an equation. We write $a \approx b$ if $a=b$ is a consequence of a given set of equations. - The formula $F_1 \f (F_2 \f(...\f (F_n \f G)...))$ is also denoted by $F_1,F_2,...,F_n \f G$. For every formula $A$, we denote by $\neg A$ the formula $A \f \perp$. If $\sou{v} = v_1,...,v_n$ is a sequence of variables, we denote by $\q \sou{v} A$ the formula $\q v_1...\q v_n A$. - Let $t$ be a $\l$-term, $A$ a type, $\G = x_1 : A_1 ,..., x_n : A_n$ a context, and $E$ a set of equations. We define by means of the following rules the notion “$t$ is of type $A$ in $\G$ with respect to $E$”; this notion is denoted by $\G\v_{AF2} t:A$: <!-- --> - \(1) $\G\v_{AF2} x_i:A_i$ $1\leq i\leq n$. - \(2) If $\G,x:A \v_{AF2} t:B$, then $\G\v_{AF2} \l xt:A \f B$. - \(3) If $\G\v_{AF2} u:A \f B$, and $\G\v_{AF2} v:A$, then $\G\v_{AF2} (u)v:B$.
- \(4) If $\G\v_{AF2} t:A$, and $x$ is not free in $\G$, then $\G\v_{AF2} t:\q xA$. - \(5) If $\G\v_{AF2} t:\q xA$, then, for every term $u$, $\G\v_{AF2} t:A[u/x]$. - \(6) If $\G\v_{AF2} t:A$, and $X$ is not free in $\G$, then $\G\v_{AF2} t:\q XA$. - \(7) If $\G\v_{AF2} t:\q XA$, then, for every formulas $G$, $\G\v_{AF2} t:A[G/X]$. - \(8) If $\G\v_{AF2} t:A[u/x]$, and $u \approx v$, then $\G\v_{AF2} t:A[v/x]$. This typed $\l$-calculus system is called $AF2$ (for Arithmétique Fonctionnelle du second ordre). (see \[2\]) The $AF2$ type system has the following properties :\ 1) Type is preserved during reduction.\ 2) Typable $\l$-terms are strongly normalizable. We present now a syntaxical property of system $AF2$ that we will use afterwards. (see \[8\]) If in the typing we go from $\G\v_{AF2} t:A$ to $\G\v_{AF2} t:B$, then we may assume that we begin by the $\q$-elimination rules, then by the equationnal rule, and finally by the $\q$-introduction rules. - We define on the set of types the two binary relations $\lhd$ and $\approx$ as the least reflexive and transitive binary relations such that : - - $\q xA \lhd A[u/x]$, if $u$ is a term of language ; - - $\q XA \lhd A[F/X]$, if $F$ is a formula of language ; - - $A \approx B$ if and only if $A=C[u/x]$, $B=C[v/x]$, and $u \approx v$. Pure and typed $\l C$-calculus ============================== The $C2$ type system -------------------- We present in this section the J-L. Krivine’s classical type system. - We add a constant $C$ to the pure $\l$-calculus and we denote by $\L C$ the set of new terms also called $\l C$-terms. We consider the following rules of reduction, called rules of head $C$-reduction. - 1\) $(\l x u) t t_1 ... t_n \f (u[t / x]) t_1 ... t_n$ for every $u, t, t_1,...,t_n \in \L C$. - 2\) $(C) t t_1 ... t_n \f (t) \l x (x)t_1 ... t_n$ for every $ t, t_1,...,t_n \in \L C$, $x$ being a $\l$-variable not appearing in $t_1,...,t_n$. <!-- --> - For any $\l C$-terms $t,t'$, we shall write $t \p_C t'$ if $t'$ is obtained from $t$ by applying these rules finitely many times. We say that $t'$ is obtained from $t$ by head $C$-reduction. - A $\l C$-term $t$ is said $\b$-normal if and only if $t$ does not contain a $\b$-redex. - A $\l C$-term $t$ is said $C$-solvable if and only if $t \p_C (f)t_1,...,t_n$ where $f$ is a variable. It is easy to prove that : if $t \p_C t'$, then, for any substitution $\s$, $\s (t) \p_C \s (t')$. - We add to the $AF2$ type system the new following rule : \(0) $\G \v C : \q X \{ \neg \neg X \f X \}$ This rule axiomatizes the classical logic over the intuitionistic logic. We call $C2$ the new type system, and we write $\G \v_{C2} t : A$ if $t$ is of type $A$ in the context $\G$. It is clear that $\G \v_{C2} t : A$ if and only if $\G , C : \q X \{ \neg \neg X \f X \} \v_{AF2} t : A$. (see \[6\])\ 1) If $\G \v_{C2} t:A$, and $t \f_{\b} t'$, then $\G \v_{C2} t':A$.\ 2) If $\G \v_{C2} t:\perp$, and $t \p_C t'$, then $\G \v_{C2} t':\perp$.\ 3) If $A$ is an atomic type, and $\G \v_{C2} t:A$, then $t$ is $C$-solvable. The $M2$ type system -------------------- In this section, we present the system $M2$. This system allows essentialy to distinguish between classical proofs and intuitionistic proofs\ We assume that for every integer $n$, there is a countable set of special $n$-ary second order variables denoted by $X_C,Y_C,Z_C$...., and called classical variables.\ Let $X$ be an $n$-ary predicate variable or predicate symbol. 
A type $A$ is said to be ending with $X$ if and only if $A$ is obtained by the following rules : - - $X(t_1,...,t_n)$ ends with $X$; - - If $B$ ends with $X$, then $A \f B$ ends with $X$ for every type $A$ ; - - If $A$ ends with $X$, then $\q vA$ ends with $X$ for every variable $v$.\ A type $A$ is said to be a classical type if and only if $A$ ends with $\perp$ or a classical variable.\ We add to the $AF2$ type system the new following rules : - (0$'$) $\G\v C : \q X_C \{ \neg \neg X_C \f X_C \}$ - (6$'$) If $\G\v t:A$, and $X_C$ has no free occurence in $\G$, then $\G\v t: \q X_C A$. - (7$'$) If $\G\v t: \q X_C A$, and $G$ is a classical type, then $\G\v t:A[G/ X_C]$. We call $M2$ the new type system, and we write $\G \v_{M2} t:A$ if $t$ is of type $A$ in the context $\G$.\ We extend the definition of $\lhd$ by : $\q X_C A \lhd A[G / X_C]$ if $G$ is a classical type. If $A$ is a classical type and $A \lhd B$ (or $A \approx B$), then $B$ is a classical type. **Proof Easy. $\Box$** The logical properties of $M2$ ------------------------------ We denote by $LAF2$, $LC2$, and $LM2$ the underlying logic systems of respectively $AF2$, $C2$, and $M2$ type systems.\ With each classical variable $X_C$, we associate a special variable $X^{\ast}$ of $AF2$ having the same arity as $X_C$. For each formula $A$ of $LM2$, we define the formula $A$\* of $LAF2$ in the following way : - - If $A=D(t_1,...,t_n)$ where $D$ is a predicate symbol or a predicate variable, then $A$\*=$A$ ; - - If $A=X_C(t_1,...,t_n)$, then $A$\*$=\neg X^{\ast}(t_1,...,t_n)$ ; - - If $A=B \f C$, then $A$\*$=B$\*$ \f C$\* ; - - If $A=\q xB$, then $A$\*=$\q xB$\*. - - If $A=\q XB$, then $A$\*=$\q XB$\*. - - If $A=\q X_C B$, then $A$\*=$\q X^{\ast} B$\*. $A$\* is called the Gődel translation of $A$. If $G$ is a classical type of $LM2$, then $\v_{LAF2} \neg \neg G$\*$ \equi G$\*. **Proof It is easy to prove that $\v_{LAF2} G$\*$ \f \neg \neg G$\*.\ We prove $\v_{LAF2} \neg \neg G$\*$ \f G$\* by induction on $G$.** - - If $G = \perp$, then $G$\*=$\perp$, and $\v_{LAF2} ((\perp \f \perp) \f \perp) \f \perp$. - - If $G = X_C (t_1,...,t_n)$, then $G$\*=$\neg X^{\ast}(t_1,...,t_n)$, and $\v_{LAF2} \neg \neg \neg X^{\ast}(t_1,...,t_n) \f \neg X^{\ast}(t_1,...,t_n)$. - - If $G = A \f B$, then $B$ is a classical type and $G$\* = $A$\* $\f$ $B$\*. By the induction hypothesis, we have $\v_{LAF2} \neg \neg B$\*$ \f B$\*. Since $\v_{LAF2} \neg \neg (A$\*$\f B$\*) $\f$ $(\neg \neg A$\*$\f \neg \neg B$\*), we check easily that $\v_{LAF2} \neg \neg (A$\* $\f B$\*) $\f (A$\* $\f B$\*). - - If $G = \q vG'$ where $v=x$ or $v=X$, then $G'$ is a classical type and $G$\*=$\q vG'$\*. By the induction hypothesis, we have $\v_{LAF2} \neg \neg G'$\*$ \f G'$\*. Since $\v_{LAF2} \neg \neg \q vG'$\* $\f$ $\q v \neg \neg G'$\*, we check easily that $\v_{LAF2} \neg \neg \q vG'$\* $\f \q vG'$\*. - - If $G = \q X_C G'$, then $G'$ is a classical type and $G$\*=$\q X^{\ast} G'$\*. By the induction hypothesis, we have $\v_{LAF2} \neg \neg G'$\*$ \f G'$\*. Since $\v_{LAF2} \neg \neg \q X^{\ast} G'$\* $\f$ $\q X^{\ast} \neg \neg G'$\*, we check easily that $\v_{LAF2} \neg \neg \q X^{\ast} G'$\* $\f \q X^{\ast} G'$\*. $\Box$ Let $A,G$ be formulas of $LM2$, $t$ a term, $x$ a first order variable, and $X$ a second order variable. We have :\ 1) $(A[t/x])$\*$= A$\*$[t/x]$.\ 2) $(A[G/X])$\*$=A$\*$[G$\*$/X]$. **Proof By induction on $A$. 
$\Box$** Let $A$ be a formula of $LM2$, $G$ a classical type, and $X_C$ a classical variable.\ $\v_{LAF2} (A[G/X_C])$\*$ \equi A$\*$[\neg G$\*$/X_C]$. **Proof By induction on $A$.** - - If $A = D(t_1,...,t_n)$ where $D$ is a predicate variable or a predicate symbol, then $A$\*=$A$, and $\v_{LAF2} A \equi A$. - - If $A = X_C (t_1,...,t_n)$, then $A$\*=$\neg X^{\ast}(t_1,...,t_n)$, and, by Lemma 3.2, $\v_{LAF2} \neg \neg G$\*$ \equi G$\*. - - If $A = B \f C$, then $A$\* = $B$\* $\f$ $C$\*. By the induction hypothesis, we have $\v_{LAF2} (B[G/X_C])$\*$ \equi B$\*$[\neg G$\*$/X_C]$ and $\v_{LAF2} (C[G/X_C])$\*$ \equi C$\*$[\neg G$\*$/X_C]$. Therefore $\v_{LAF2} \{ (B[G/X_C])$\*$ \f (C[G/X_C])$\*$\} \equi \{ B$\*$[\neg G$\*$/X_C] \f C$\*$[\neg G$\*$/X_C] \}$. - - If $A = \q vA'$, where $v=x$ or $v=X$, then $A$\*=$\q vA'$\*. By the induction hypothesis, we have $\v_{LAF2} (A'[G/X_C])$\*$ \equi A'$\*$[\neg G$\*$/X_C]$. Therefore $\v_{LAF2} (\q vA'[G/X_C])$\*$ \equi \q vA'$\*$[\neg G$\*$/X_C]$. - - If $A = \q Y_C A'$, then $A$\*=$\q Y^{\ast} A'$\*. By the induction hypothesis, we have $\v_{LAF2} (A'[G/X_C])$\* $\equi A'$\*$[\neg G$\*$/X_C]$. Therefore $\v_{LAF2} (\q Y_C A'[G/X_C])$\*$ \equi$ $ (\q Y_C A')$\*$[\neg G$\*$/X_C]$. $\Box$ If $A_1,...,A_n \v_{LM2} A$, then $A_1$\*$,...,A_n$\* $\v_{LAF2} A$\*. **Proof By induction on the proof of $A$ and using Lemmas 3.2, 3.3, and 3.4. $\Box$** Let $A,A_1,...,A_n$ be formulas of $LAF2$.\ $A_1,...,A_n \v_{LM2} A$ if and only if $A_1,...,A_n \v_{LAF2} A$. **Proof We use Theorem 3.2. $\Box$\ With each predicate variable $X$ of $C2$, we associate a classical variable $X_C$ having the same arity as $X$. For each formula $A$ of $LC2$, we define the formula $A^C$ of $M2$ in the following way:** - - If $A=D(t_1,...,t_n)$ where $D$ is a constant symbol, then $A^C=A$ ; - - If $A=X(t_1,...,t_n)$ where $X$ is a predicate symbol, then $A^C=X_C(t_1,...,t_n)$ ; - - If $A=B \f C$, then $A^C=B^C \f C^C$ ; - - If $A=\q xB$, then $A^C=\q xB^C$ ; - - If $A=\q XB$, then $A^C=\q X_CB^C$. $A^C$ is called the classical translation of $A$. Let $A_1,...,A_n,A$ be formulas of $LC2$.\ $A_1,...,A_n \v_{LC2} A$ if and only if $A_1^C,...,A_n^C \v_{LM2} A^C$. **Proof By induction on the proof of $A$. $\Box$** Properties of $M2$ type system ============================== By Corollary 3.1, a formula is provable in the system $LAF2$ if and only if it is provable in the system $LC2$. This result is no longer valid if we decorate the proofs with terms. We will give some conditions on the formulas in order to obtain such a result.\ We define two sets of types of the $AF2$ type system: $\O^+$ (set of $\q$-positive types), and $\O^-$ (set of $\q$-negative types) in the following way: - - If $A$ is an atomic type, then $A \in \O^+$, and $A \in \O^-$ ; - - If $T \in \O^+$, and $T' \in \O^-$, then, $T' \f T \in \O^+$, and $T \f T' \in \O^-$ ; - - If $T \in \O^+$, then $\q x T \in \O^+$ ; - - If $T \in \O^-$, then $\q x T \in \O^-$ ; - - If $T \in \O^+$, then $\q X T \in \O^+$ ; - - If $T \in \O^-$, and $X$ has no free occurrence in $T$, then $\q X T \in \O^-$. 1\) If $A \in \O^+$ (resp. $A \in \O^-$) and $A \approx B$, then $B \in \O^+$ (resp. $B \in \O^-$).\ 2) If $A \in \O^-$ and $A \lhd B \f C$, then $B \in \O^+$ and $C \in \O^-$. **Proof Easy.
$\Box$** Let $A_1,...,A_n$ be $\q$-negative types, $A$ a $\q$-positive type of $AF2$ which does not end with $\perp$, $B_1,...,B_m$ classical types, and $t$ a $\b$-normal $\l C$-term.\ If $\G = x_1:A_1,...,x_n:A_n,y_1:B_1,...,y_m:B_m \v_{M2} t:A$, then $t$ is a normal $\l$-term, and $x_1:A_1,...,x_n:A_n \v_{AF2} t:A$. **Proof We argue by induction on $t$.** - - If $t$ is a variable, we have two cases : - - If $t=x_i$ $1 \leq i \leq n$, this is clear. - - If $t=y_j$ $1 \leq j \leq m$, then $A=\q \sou{v} B$ where $B_j \lhd B'_j$ and $B'_j \approx B$. Therefore, by Lemma 3.1, $A$ is a classical type. A contradiction. - - If $t=\l x u$, then $\G,x:E \v_{M2} u:F$, and $A=\q \sou{v}( E' \f F')$ where $E \approx E'$, $F \approx F'$ and $\sou{v}$ does not appear in $\G$. First, by Lemma 4.1, $E \in \O^-$ and $F \in \O^+$, and then, by the induction hypothesis, $u$ is a normal $\l$-term, and $x_1:A_1,...,x_n:A_n,x:E \v_{AF2} u:F$. Therefore $t$ is a normal $\l$-term, and $x_1:A_1,...,x_n:A_n \v_{AF2} t:A$. - - If $t=(x)u_1 ... u_r$ $r \geq 1$, we have two cases : - - If $t=x_i$ $1 \leq i \leq n$, then $A_i \lhd B_1 \f C_1$, $C'_i \lhd B_{i+1} \f C_{i+1}$ $1 \leq i \leq r-1$, $C'_r \lhd D$, $A = \q vD'$, where $C'_i \approx C_i$ $1 \leq i \leq r$, $D' \approx D$, and $\G \v_{M2} u_i:B_i$ $1 \leq i \leq r$. Since $A_i$ is a $\q$-negative types, we prove (by induction and using Lemma 4.1) that for all $1 \leq i \leq r$ $B_i$ is a $\q$-positive types. By the induction hypothesis we have $u_i$ is a normal $\l$-term, and $x_1:A_1,...,x_n:A_n \v_{AF2} u_i:B_i$. Therefore $t$ is a normal $\l$-term, and $x_1:A_1,...,x_n:A_n \v_{AF2} t:A$. - - If $t=y_j$ $1 \leq j \leq m$, then $B_j \lhd B_1 \f C_1$, $C'_i \lhd B_{i+1} \f C_{i+1}$ $1 \leq i \leq r-1$, $C'_r \lhd D$, $A = \q vD'$, where $C'_i \approx C_i$ $1 \leq i \leq r$, $D' \approx D$, and $\G \v_{M2} u_i:B_i$ $1 \leq i \leq r$. Therefore, by Lemma 3.1, $A$ is a classical type. A contradiction. - - If $t=(C)uu_1 ... u_r$ $r \geq 0$, then there is a classical type $E$ such that $\G \v_{M2} u:\neg \neg E$, $E \lhd B_1 \f C_1$, $C'_i \lhd B_{i+1} \f C_{i+1}$ $1 \leq i \leq r-1$, $C'_r \lhd D$, $A = \q vD'$, where $C'_i \approx C_i$ $1 \leq i \leq r$, $D' \approx D$, and $\G \v_{M2} u_i:B_i$ $1 \leq i \leq r$. Therefore, by Lemma 3.1, $A$ is a classical type. A contradiction. $\Box$ Let $A$ be a $\q$-positive type of $AF2$ and $t$ a $\b$-normal $\l C$-term.\ If $\v_{M2} t:A$, then $t$ is a normal $\l$-term, and $\v_{AF2} t:A$. **Proof We use Theorem 4.1. $\Box$\ As for relation betwen the systems $C2$ and $M2$, we have the following result.** Let $A_1,...,A_n,A$ be types of $C2$, and $t$ a $\l C$-term.\ $A_1,...,A_n \v_{C2} t:A$ if and only if $A_1^C,...,A_n^C \v_{M2} t:A^C$. **Proof By induction on the typing of $t$. $\Box$** The integers ============ - Each data type can be defined by a second order formula. For example, the type of integers is the formula : $N[x]= \q X \{ X(0), \q y(X(y) \f X(sy)) \f X(x) \}$ where $X$ is a unary predicate variable, $0$ is a constant symbol for zero, and $s$ is a unary function symbol for successor. The formula $N[x]$ means semantically that $x$ is an integer if and only if $x$ belongs to each set $X$ containing $0$ and closed under the successor function $s$.\ The $\l$-term $\so{0} = \l x \l fx$ is of type $N[0]$ and represents zero.\ The $\l$-term $\so{s} = \l n\l x\l f(f)((n)x)f$ is of type $\q y(N[y] \f N[s(y)])$ and represents the successor function. 
- A set of equations $E$ is said to be adequate with the type of integers if and only if : - - $s(a) \not \approx 0$ ; - - If $s(a) \approx s(b)$ , then so is $a \approx b$. In the rest of the paper, we assume that all sets of equations are adequate with the type of integers. - For each integer $n$, we define the Church integer $\so{n}$ by $\so{n} = \l x\l f(f)^n x$. The integers in $AF2$ --------------------- The system $AF2$ has the property of the unicity of integers representation. (see \[2\]) Let $n$ be an integer. If $\v_{AF2} t :N[s^n (0)]$, then $t \simeq\sb{\b} \so{n}$. The propositional trace $N=\q X \{ X,(X \f X) \f X \}$ of $N[x]$ also defines the integers. (see \[2\]) If $\v_{AF2} t :N$, then, for a certain $n$, $t \simeq\sb{\b} \so{n}$. **Remark A very important property of data type is the following (we express it for the type of integers) : in order to get a program for a function $f : N \f N$ it is sufficient to prove $\v \q x ( N[x] \f N[f(x)] )$. For example a proof of $\v \q x ( N[x] \f N[p(x)] )$ from the equations $p(0)=0$, $p(s(x))=x$ gives a $\l$-term for the predecessor in Church intergers (see \[2\]). $\Box$** The integers in $C2$ -------------------- The situation in system $C2$ is more complex. In fact, in this system the property of unicity of integers representation is lost and we have only one operational characterization of these integers.\ Let $n$ be an integer. A classical integer of value $n$ is a closed $\l C$-term $\th_n$ such that $\v_{C2} \th_n :N[s^n(0)]$. (see \[6\] and \[12\]) Let $n$ be an integer, and $\th_n$ a classical integer of value $n$. - - if $n=0$, then, for every distinct variables $x,g,y$ : $(\th_n) x g y \p_C (x) y$ ; - - if $n \not = 0$, then there is $m \geq 1$ and a mapping $I : \{0,...,m \} \f N$, such that for every distinct variables $x,g,x_0,x_1,...,x_m$ : - $(\th_n) x g x_0 \p_C (g) t_1 x_{r_0}$ ; - $(t_i) x_i \p_C (g) t_{i+1} x_{r_i}$ $1\leq i\leq m$ ; - $(t_m) x_m \p_C (x) x_{r_m}$ ; where $I(0)=n$, $I(r_m)=0$, and $I(i+1)=I(r_i)-1$ $0\leq i\leq m-1$. We will generalize this result.\ Let $O$ be a particular unary predicate symbol. The typed system $C2_O$ is the typed system $C2$ where we replace the rules (2) and (7) by : - $(2_O)$ If $\G,x:A \v_{C2_O} t:B$, $A$ and $B$ are not ending with $O$, then $\G \v_{C2_O} \l xt:A \f B$. - $(7_O)$ If $\G\v_{C2_O} t:\q X A$, and $G$ is not ending with $O$, then $\G \v_{C2_O} t:A[G/X]$. We define on the types of $C2_O$ a binary relation $\lhd_O$ as the least reflexive and transitive binary relation such that : - $\q xA \lhd_O A[u/x]$ if $u$ is a term of language ; - $\q XA \lhd_O A[G/X]$ if $G$ is a type which is not ending with $O$. a\) If $\G \v_{C2_O} t:\perp$, and $t \p_C t'$, then $\G \v_{C2_O} t':\perp$.\ b) If $\G \v_{C2_O} t:A$, and $A$ is an atomic type, then $t$ is $C$-solvable. **Proof a) It is enough to do the proof for one step of reduction. We have two cases :** - - If $t=(\l xu)vv_1...v_m$, then $t'=(u[v/x])v_1...v_m$, $\G,x:F \v_{C2_O} u:G$, $F$ and $G$ are not ending with $O$, $G'\lhd_O F_1 \f G_1$, $G'_j \lhd_O F_{j+1} \f G_{j+1}$ $1 \leq j \leq m-1$, $G_m \approx \perp$, $G_j \approx G'_j$ $1 \leq j \leq m-1$, $\G \v_{C2_O} v:F$, and $\G \v_{C2_O} v_j:F_j$ $1 \leq j \leq m$. It is easy to check that $\G \v_{C2_O} u[v/x]:G$, then $\G \v_{C2_O} t':\perp$. 
- - If $t=(C)vv_1...v_m$, then $t'=(v)\l x(x)v_1...v_m$, and there is a type $A$ which is not ending with $O$ such that : $A'\lhd_O F_1 \f G_1$, $G'_j \lhd_O F_{j+1} \f G_{j+1}$ $1 \leq j \leq m-1$, $G_m \approx \perp$, $A \approx A'$, $G_j \approx G'_j$ $1 \leq j \leq m$, $\G \v_{C2_O} v:\neg \neg A$, and $\G \v_{C2_O} v_j:F_j$ $1 \leq j \leq m$. It is easy to check that $\G,x:A \v_{C2_O} (x)v_1...v_m:\perp$, but $A$ is not ending with $O$, then $\G \v_{C2_O} \l x(x)v_1...v_m:\neg A$, and $\G \v_{C2_O} t':\perp$. b\) Indeed, a typing of $C2_O$ may be seen as a typing of $C2$. $\Box$ a\) If $\G \v_{C2_O} t:O(a)$, and $t \p_C t'$, then $t=t'$.\ b) If $\G=y_1:A_1,...,y_n:A_n,x_1:O(a_1),...,x_m:O(a_m) \v_{C2_O} t:O(a)$, and all $A_i$ $1 \leq i\leq n$ are not ending with $O$, then $t$ is one of $x_i$, and $a_i \approx a$ $1 \leq i \leq n$. **Proof a) It is enough to do the proof for one step of reduction. We have two cases :** - - If $t=(\l xu)vv_1...v_m$, then $t'=(u[v/x])v_1...v_m$, $\G,x:F \v_{C2_O} u:G$, $F$ and $G$ are not ending with $O$, $G'\lhd_O F_1 \f G_1$, $G'_j \lhd_O F_{j+1} \f G_{j+1}$ $1 \leq j \leq m-1$, $G_m \approx O(a)$, $G_j \approx G'_j$ $1 \leq j \leq m-1$, $\G \v_{C2_O} v:F$, and $\G \v_{C2_O} v_j:F_j$ $1 \leq j \leq m$. Therefore $G_j$ $1 \leq j \leq m$ is not ending with $O$, which is impossible since $G_m \approx O(a)$. - - If $t=(C)vv_1...v_m$, then $t'=(v)\l x(x)v_1...v_m$, and there is a type $A$ which is not ending with $O$ such that : $A'\lhd_O F_1 \f G_1$, $G'_j \lhd_O F_{j+1} \f G_{j+1}$ $1 \leq j \leq m-1$, $G_m \approx O(a)$, $A \approx A'$, $G_j \approx G'_j$ $1 \leq j \leq m$, $\G \v_{C2_O} v:\neg \neg A$, and $\G \v_{C2_O} v_j:F_j$ $1 \leq j \leq m$. $A$ is not ending with $O$, therefore $G_j$ $1 \leq j \leq m$ is not ending with $O$, which is impossible since $G_m \approx O(a)$. b\) By Lemma 5.1, we have $t \p_C (f)t_1...t_r$, and, by a), $t=(f)t_1...t_r$. Therefore $\G \v_{C2_O} (f)t_1...t_r:O(a)$. - - If $f=x_i$ $1 \leq i \leq m$, then $r=0$, $t=x_i$, and $O(a_i) \approx O(a)$, then $a_i \approx a$. - - If $f=y_j$ $1 \leq j \leq k$, then $A_j \lhd_O F_1 \f G_1$, $G'_k \lhd_O F_{k+1} \f G_{k+1}$ $1 \leq k \leq r-1$, $G_r \approx O(a)$, $G_k \approx G'_k$ $1 \leq k \leq r$, and $\G\v_{C2_O} t_k:F_k$ $1 \leq k \leq r$. Since $A_j$ is not ending with $O$, then $G_k$ $1 \leq k \leq r$ is not ending with $O$, which is impossible since $Cr \approx O(a)$. $\Box$ Let $V$ be the set of variables of $\l C$-calculus.\ Let $P$ be an infinite set of constants called stack constants [^2].\ We define a set of $\l C$-terms $\L CP$ by : - - If $x \in V$, then $x \in \L CP$ ; - - If $t \in \L CP$, and $x \in V$, then $\l xt \in \L CP$ ; - - If $t \in \L CP$, and $u \in \L CP \bigcup P$, then $(t)u \in \L CP$. In other words, $t \in \L CP$ if and only if the stack constants are in argument positions in $t$.\ Let $\s$ be a function defined on $V \bigcup P$ such that : - - If $x \in V$, then $\s (x) \in \L CP$ ; - - If $p \in P$, then $\s (p)=\sou{t}=t_1,...,t_n$, $n \geq 0$, $t_i \in \L CP \bigcup P$ $1 \leq i \leq n$. We define $\s(t)$ for all $t \in \L CP$ by : - - $\s ((u)v)=(\s (u))\s (v)$ if $v \not \in P$ ; - - $\s (\l xu)=\l x \s (u)$ ; - - $\s ((t)p)=(t)\sou{t}$ if $\s (p)=\sou{t}$. 
$\s$ is said to be a $P$-substitution.\ We consider, on the set $\L CP$, the following rules of reduction : - 1\) $(\l xu)tt_1...t_n \f (u[t/x])t_1...t_n$ for all $u,t \in \L CP$ and $t_1,...,t_n \in \L CP \bigcup P$ ; - 2\) $(C)tt_1...t_n \f (t)\l x(x)t_1...t_n$ for all $t \in \L CP$ and $t_1,...,t_n \in \L CP \bigcup P$, and $x$ being $\l$-variable not appearing in $t_1,...,t_n$. For any $t,t' \in \L CP$, we shall write $t \rhd_C t'$, if $t'$ is obtained from $t$ by applying these rules finitely many times. If $t \rhd_C t'$, then $\s (t) \rhd_C \s (t')$ for all $P$-substitution $\s$. **Proof Easy. $\Box$** Let $t\in \L CP$ such that the stack constants of $t$ are among $p_1,...,p_m$.\ If $t \p_C t'$, and $\G=\G',p_1:O(a_1),...,p_m:O(a_m) \v_{C2_O} t:\perp$, then $t' \in \L CP$ and $t \rhd_C t'$. **Proof It is enough to do the proof for one step of reduction. We have two cases :** - - If $t=(\l xu)vv_1...v_m$, then, $t'=(u[v/x])v_1...v_m$, $\G,x:F \v_{C2_O} u:G$, $F$ and $G$ is not ending with $O$, and $\G \v_{C2_O} v:F$. Therefore $u,v \in \L CP$, and so $t' \in \L CP$ and $t \rhd_C t'$. - - If $t=(C)vv_1...v_m$, then, $t'=(v)\l x(x)v_1...v_m$, and there is a type $A$ which is not ending with $O$ such that $\G \v_{C2_O} v:\neg \neg A$. Therefore $v \in \L CP$, and so $t' \in \L CP$ and $t \rhd_C t'$. $\Box$ Let $n$ be an integer, $\th_n$ a classical integer of value $n$, and $x,g$ two distinct variables. - - If $n=0$, then for every stack constant $p$, we have : $(\th_n)xgp \p_C (x)p$. - - If $n \not = 0$, then there is $m \geq 1$, and a mapping $IÊ:Ê\{0,...,m\}\f N$, such that for all distinct stack constants $p_0,p_1,...,p_m$, we have : - $(\th_n)xgp_0 \p_C (g)t_1 p_{r_0}$ ; - $(t_i)p_i \p_C (g)t_{i+1}p_{r_i}$ $1 \leq i \leq m-1$ ; - $(t_m)p_m \p_C (x)p_{r_m}$ where $I(0)=n$, $I(r_m)=0$, and $I(i+1)=I(r_i)-1$ $0 \leq i \leq m-1$. **Proof We denote, in this proof, the term $s^i(0)$ by $i$.\ If $\v_{C2}\th_n:N[n]$, then $\v_{C2_O} \th_n: [ O(0) \f \perp ], \q y \{ [ O(y) \f \perp ] \f [O(sy) \f \perp ] \}, O(n) \f \perp $, then $\G_1= x:O(0) \f \perp, g:\q y \{ [ O(y) \f \perp ] \f [O(sy) \f \perp ] \}, p_0:O(n) \v_{C2_O} (\th_n)xgp_0:\perp$, therefore, by Lemma 5.1, $(\th_n)xgp_0$ is $C$-solvable, and three cases may be seen :** - - If $(\th_n)xgp_0 \p_C (p_0)t_1...t_r$, then $r=0$, and there is a term $a$, such that $O(a) \approx \perp$. This is impossible. - - If $(\th_n)xgp_0 \p_C (x)t_1...t_r$, then $r=1$, and $\G_1 \v_{C2_O} t_1:O(0)$. Therefore, by Lemma 5.2, $t_1=p_0$, and so $n=0$. - - If $(\th_n)xgp_0 \p_C (g)t_1...t_r$, then $r=2$, $\G_1 \v_{C2_O} t_1:O(a) \f \perp$, $\G_1 \v_{C2_O} t_2:O(s(a'))$, and $a \approx a'$. By Lemma 5.2, we have $t_2=p_0$, and $s(a') \approx n$, then $a \approx n-1$. Therefore $(\th_n)xgp_0 \p_C (g)t_1p_0$, and $\G_1 \v_{C2_O} t_1:O(n-1) \f \perp$. Let $I(0)=n$. We prove that : if $\G_i=g:\q y \{ [ O(y) \f \perp ] \f [ O(sy) \f \perp ] \} , x:O(0) \f \perp, p_0:O(I(0)),...., p_i:O(I(i)) \v_{C2_O} (t_i)p_i:\perp$, then :\ $(t_i)p_i \p_C (g)t_{i+1}p_{r_i}$, and $\G_i \v_{C2_O} t_{i+1}:O(I(r_i)-1) \f \perp$\ or\ $(t_i)p_i \p_C (x)p_{r_i}$, and $I(r_i)=0$.\ $\G_i \v_{C2_O} (t_i)p_i:\perp$, therefore, by Lemma 5.1, $(t_i)p_i$ est $C$-solvable, and three cases may be seen : - - If $(t_i)p_i \p_C (p_j)u_1...u_r$ $0 \leq j \leq i$, then $r=0$, and there is a term $a$, such that $O(a) \approx \perp$. This is impossible. - - If $(t_i)p_i \p_C (x)u_1...u_r$, then $r=1$, and $\G_i \v_{C2_O} u_1:O(0)$. Therefore, by Lemma 5.2, $u_1=p_{r_i}$, and $I(r_i)=0$. 
- - If $(t_i)p_i \p_C (g)u_1...u_r$, then $r=2$, $\G_i \v_{C2_O} u_1:O(a) \f \perp$, $\G_i \v_{C2_O} u_2:O(s(a'))$, and $a \approx a'$. By Lemma 5.2, we have $u_2=p_{r_i}$, and $s(a') \approx I(r_i)$, then $a \approx I(r_i)-1$. Therefore $(t_i)p_i \p_C (g)t_{i+1} p_{r_i}$, and $\G_i \v_{C2_O} t_{i+1}:O(I(r_i)-1) \f \perp$. Let $I(i+1)=I(r_i)-1$. This construction always terminates. Indeed, if not, the $\l C$-term $(((\th_n)\l xx)\l xx)p_0$ is not $C$-solvable. This is impossible, since $p_0:\perp \v_{C2} (((\th_n)\l xx)\l xx)p_0:\perp$. $\Box$ Let $n$ be an integer, $\th_n$ a classical integer of value $n$, and $x,g$ two distinct variables. - - If $n=0$, then, for every stack constant $p$, we have : $(\th_n)xgp \rhd_C (x)p$. - - If $n \not = 0$, then there is $m \geq 1$, and a mapping $IÊ:Ê\{0,...,m\}\f N$, such that for all distinct stack constants $p_0,p_1,...,p_m$, we have : - $(\th_n)xgp_0 \rhd_C (g)t_1 p_{r_0}$ ; - $(t_i)p_i \rhd_C (g)t_{i+1}p_{r_i}$ $1 \leq i \leq m-1$ ; - $(t_m)p_m \rhd_C (x)p_{r_m}$ where $I(0)=n$, $I(r_m)=0$, and $I(i+1)=I(r_i)-1$ $0 \leq i \leq m-1$. **Proof We use Lemma 5.4. $\Box$** Let $n$ be an integer, and $\th_n$ a classical integer of value $n$. - - If $n=0$, then, for every $\l C-terms$ $a,F,\sou{u}$, we have : $(\th_n)aF\sou{u} \p_C (a)\sou{u}$. - - If $n \not = 0$, then there is $m \geq 1$, and a mapping $IÊ:Ê\{0,...,m\}\f N$, such that for all $\l C-terms$ $a,F,\sou{u_0},\sou{u_1},...,\sou{u_m}$, we have : - $(\th_n)aF\sou{u_0} \p_C (g)t_1 \sou{u_{r_0}}$ ; - $(t_i)\sou{u_i} \p_C (g)t_{i+1}\sou{u_{r_i}}$ $1 \leq i \leq m-1$ ; - $(t_m)\sou{u_m} \p_C (a)\sou{u_{r_m}}$ where $I(0)=n$, $I(r_m)=0$, and $I(i+1)=I(r_i)-1$ $0 \leq i \leq m-1$. **Proof We use Lemma 5.3. $\Box$** The integers in $M2$ -------------------- According to the results of section 4, we can obtain some results concerning the integers in the system $M2$. Let $n$ be an integer. If $\v_{M2} t :N[s^n (0)]$, then, $t \simeq\sb{\b} \so{n}$. **Proof We use Theorem 4.1. $\Box$\ Let $n$ be an integer. By Theorem 4.2, a classical integer of value $n$ is a closed $\l C$-term $\th_n$ such that $\v_{M2} \th_n :N^C[s^n(0)]$.** Let $n$ be an integer, $\th_n$ a classical integer of value $n$, and $x,g$ two distinct variables. - - If $n=0$, then, for every stack constant $p$, we have : $(\th_n)xgp \rhd_C (x)p$. - - If $n \not = 0$, then there is $m \geq 1$, and a mapping $IÊ:Ê\{0,...,m\}\f N$, such that for all distinct stack constants $p_0,p_1,...,p_m$, we have : - $(\th_n)xgp_0 \rhd_C (g)t_1 p_{r_0}$ ; - $(t_i)p_i \rhd_C (g)t_{i+1}p_{r_i}$ $1 \leq i \leq m-1$ ; - $(t_m)p_m \rhd_C (x)p_{r_m}$ where $I(0)=n$, $I(r_m)=0$, and $I(i+1)=I(r_i)-1$ $0 \leq i \leq m-1$. **Proof We use Theorem 4.2. $\Box$** Storage operators ================= Storage operators for Church integers ------------------------------------- Let $T$ be a closed $\l$-term. We say that $T$ is a storage operator for Church integers if and only if for every $n \geq 0$, there is a $\l$-term $\t_n \simeq\sb{\b} \so{n}$, such that for every $\l$-term $\th_n \simeq\sb{\b} \so{n}$, there is a substitution $\s$, such that $(T)\th_n f \p (f)\s(\t_n)$.\ **Examples If we take :\ $T_1 = \l n((n)\d)G$ where $G = \l x\l y(x)\l z(y)(\so{s})z$ and $\d = \l f(f)\so{0}$\ $T_2 = \l n\l f(((n)f)F)\so{0}$ where $F = \l x\l y(x)(\so{s})y$,\ then it is easy to check that : for every $\th_n \simeq\sb{\b} \so{n}$, $(T_i)\th_n f \p (f)(\so{s})^n \so{0}$ ($i=1$ or $2$) (see \[3\] and \[8\]).\ Therefore $T_1$ and $T_2$ are storage operators for Church integers. 
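As an executable illustration (ours, not taken from the paper) of the example $T_2$, the sketch below transcribes the Church integers $\so{n} = \l x\l f(f)^n x$, the successor $\so{s}$ and the operator $T_2=\l n\l f(((n)f)F)\so{0}$ with $F = \l x\l y(x)(\so{s})y$ into Python. Python evaluates call-by-value, so the sketch cannot exhibit the head-reduction behaviour that storage operators are designed for; it only checks the extensional fact that $(T_2)\th_n f$ hands $f$ a term of the form $(\so{s})^n\so{0}$. The helpers `value`, `church` and the test term `theta` are our own illustrative choices.

```python
# Church integers in the paper's convention: n = \x \f (f)^n x
zero = lambda x: lambda f: x                      # \x \f x
suc  = lambda n: lambda x: lambda f: f(n(x)(f))   # \n \x \f (f)((n)x)f
value = lambda n: n(0)(lambda m: m + 1)           # read back an ordinary int

def church(k):
    """The Church integer of value k, built by k applications of suc."""
    n = zero
    for _ in range(k):
        n = suc(n)
    return n

F  = lambda x: lambda y: x(suc(y))                # F  = \x \y (x)(s)y
T2 = lambda n: lambda f: n(f)(F)(zero)            # T2 = \n \f (((n) f) F) 0

# theta is beta-equivalent to the Church integer 3, but written differently.
theta = (lambda p: suc(suc(p)))(suc(zero))

# T2 passes its continuation f the term (s)(s)(s)0, whose value is 3.
assert T2(theta)(value) == 3
assert value(T2(theta)(lambda v: v)) == 3
assert T2(church(0))(value) == 0
```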
$\Box$\ It is a remarkable fact that we can give simple types to storage operators for Church integers. We first define the simple Gődel translation $F$\* of a formula $F$ : it is obtained by replacing in the formula $F$, each atomic formula $A$ by $\neg A$. For example :** $N$\*$[x]=\q X \{\neg X(0),\q y(\neg X(y) \f \neg X(sy)) \f \neg X(x) \}$ It is well known that, if $F$ is provable in classical logic, then $F$\* is provable in intuitionistic logic.\ We can check that $\v_{AF2} T_1,T_2 : \q x \{N$\*$[x] \f\neg\neg N[x] \}$. And, in general, we have the following Theorem : (see \[3\] and \[10\]) If $\v_{AF2} T: \q x\{N$\*$[x] \f\neg\neg N[x]\}$, then $T$ is a storage operator for Church integers. Storage operators for classical integers ---------------------------------------- The storage operators play an important role in classical type systems. Indeed, they can be used to find the value of a classical integer. (see \[6\] and \[7\]) If $\v_{AF2} T: \q x\{N$\*$[x] \f \neg\neg N[x]\}$, then for every $n \geq 0$, there is a $\l$-term $\t_n \simeq\sb{\b} \so{n}$, such that for every classical integer $\th_n$ of value $n$, there is a substitution $\s$, such that $(T)\th_n f \p_C (f)\s(\t_n)$. If $\v_{AF2} T: \q x\{N$\*$[x] \f \neg\neg N[x]\}$, then for every $n \geq 0$ and for every classical integer $\th_n$ of value $n$, there is a $\l$-term $\t_n$, such that $(T)\th_n \l xx \p_C \t_n \f\sb{\b} \so{n}$. **Proof We use Theorem 6.2. $\Box$\ **Remark. Theorem 6.2 cannot be generalized for the system $C2$. Indeed, let $T=\l \n \l f (f) (C)(T_i)\n$ ($i=1$ or $2$).\ $\n:N$\*$[x] , f:\neg N[x] \v_{C2} (T_i)\n:\neg\neg N[x] \Longrightarrow$\ $\n:N$\*$[x] , f:\neg N[x] \v_{C2} (C)(T_i)\n: N[x] \Longrightarrow$\ $\n:N$\*$[x] , f:\neg N[x] \v_{C2} (f)(C)(T_i)\n: \perp \Longrightarrow$\ $\v_{C2} T: \q x\{N$\*$[x] \f \neg\neg N[x]\}$\ Since for every $\l C$-term $\th$, $(T)\th f \p_C (f) (C)(T_i)\th$, then it is easy to check that there is not a $\l C$-term $\t_n \simeq\sb{\b} \so{n}$ such that for every classical integer $\th_n$ of value $n$, there is a substitution $\s$, such that $(T)\th_n f \p_C (f)\s(\t_n)$. $\Box$\ We will see that in system $M2$ we have a similar result to Theorem 6.2.\ Let $T$ be a closed $\l C$-term. We say that $T$ is a storage operator for classical integers if and only if for every $n \geq 0$, there is a $\l C$-term $\t_n \simeq\sb{\b} \so{n}$, such that for every classical integers $\th_n$ of value $n$, there is a substitution $\s$, such that $(T)\th_n f \p_C (f)\s(\t_n)$.**** If $\v_{M2} T: \q x \{ N^C[x] \f \neg\neg N[x] \}$, then $T$ is a storage operator for classical integers. The type system $M$ is the subsystem of $M2$ where we only have propositional variables and constants (predicate variables or predicate symbols of arity 0). So, first order variable, function symbols, and finite sets of equations are useless. The rules for typed are $0'$) 1), 2), 3), 6), $6'$), 7) and $7'$) restricted to propositional variables. With each predicate variable (resp. predicate symbol) $X$, we associate a predicate variable (resp. a predicate symbol) $X^{\di}$ of $M$ type system. For each formula $A$ of $M2$, we define the formula $A^{\di}$ of $F_C$ obtained by forgetting in $A$ the first order part. If $\G=x_1:A_1,...,x_n:A_n$ is a context of $M2$, then we denote by $\G^{\di}$ the context $x_1:A_1^{\di},...,x_n:A_n^{\di}$ of $M$. 
We write $\G\v_M t:A$ if $t$ is typable in $M$ of type $A$ in the context $\G$.\ We have obviously the following property : if $\G \v_{M2} t:A$, then $\G^{\di} \v_M t:A^{\di}$.\ Theorem 6.3 is a consequence of the following Theorem. If $\v_M T: N^C \f \neg\neg N$, then for every $n \geq 0$, there is an $m \geq 0$ and a $\l C$-term $\t_m \simeq\sb{\b} \so{m}$, such that for every classical integer $\th_n$ of value $n$, there is a substitution $\s$, such that $(T)\th_n f \p_C (f)\s(\t_m)$. Indeed, if $\v_{M2} T: \q x \{ N^C[x] \f \neg\neg N[x] \}$, then $\v_M T: N^C \f \neg\neg N$. Therefore for every $n \geq 0$, there is an $m \geq 0$ and $\t_m \simeq\sb{\b} \so{m}$, such that for every classical integer $\th_n$ of value $n$, there is a substitution $\s$, such that $(T)\th_n f \p_C (f)\s(\t_m)$. We have $\v_{M2} \so{n} : N^C[s^n(0)]$, then $ f:\neg N[s^n(0)] \v_{M2} (T) \so{n} f :\perp$, therefore $ f:\neg N[s^n(0)]\v_{M2} (f)\so{m} :\perp$ and $\v_{M2} \so{m} : N[s^n(0)]$. Therefore $n=m$. and $T$ is a storage operator for classical integers. $\Box$\ In order to prove Theorem 6.4, we shall need some Lemmas. If $\G,\n:N^C\v_M(\n)\sou{d}:\perp$, then $\sou{d}=a,b,d_1,...,d_r$ and there is a classical type $F$, such that : $\G,\n:N^C\v_M a:F$ ; $\G,\n:N^C\v_M b:F \f F$ ; $F \lhd E_1 \f F_1$, $F_i \lhd E_{i+1} \f F_{i+1}$ $1 \leq i \leq r-1$ ; $F_r \lhd \perp$ ; and $\G,\n:N^C\v_M c_i : E_i$ $1 \leq i \leq r$. **Proof We use Theorem 2.2. $\Box$** If $F$ is a classical type and $\G,x:F \v_M (x)\sou{d}:\perp$, then $\sou{d}=d_1,...,d_r$ ; $F \lhd E_1 \f F_1$ ; $F_i \lhd E_{i+1} \f F_{i+1}$ $1 \leq i \leq r-1$ ; $F_r \lhd \perp$ ; and $\G,x:F \v_M c_i : E_i$ $1 \leq i \leq r$. **Proof We use Theorem 2.2. $\Box$** Let $t$ be a $\b$-normal $\l C$-term, and $A_1,...,A_n$ a sequence of classical types.\ If $x_1:A_1,...,x_n:A_n \v_M t:N$, then there is an $m \geq 0$ such that $t = \so{m}$. **Proof We use Theorems 4.1 and 5.2. $\Box$\ Let $\n$ and $f$ be two fixed variables.\ We denote by $x_{n,a,b,\sou{c}}$ (where $n$ is an integer, $a,b$ two $\l$-terms, and $\sou{c}$ a finite sequence of $\l$-terms) a variable which does not appear in $a,b,\sou{c}$.** Let $n$ be an integer. There is an integer $m$ and a finite sequence of head reductions $\{ U_i \p_C V_i \}_{1\leq i\leq r}$ such that :\ 1) $U_1 = (T)\n f$ and $V_r = (f)\t_m$ where $\t_m \simeq\sb{\b}\so{m}$ ;\ 2) $V_i = (\n) a b \sou{c}$ or $V_i = (x_{l,a,b,\sou{c}}) \sou{d}$ $0 \leq l \leq n-1$;\ 3) If $V_i = (\n)a b \sou{c}$, then $U_{i+1} = (a)\sou{c}$ if $n=0$ and $U_{i+1} = ((b)x_{n-1,a,b,\sou{c}})\sou{c}$ if $n \neq 0$ ;\ 4) If $V_i = (x_{l,a,b,\sou{c}})\sou{d}$ $0 \leq l \leq n-1$, then $U_{i+1} = (a)\sou{d}$ if $l=0$ and $U_{i+1} = ((b)x_{l-1,a,b,\sou{d}})\sou{d}$ if $l \neq 0$. 
**Proof A good context $\G$ is a context of the form $\n:N^C, f:\neg N, x_{n_1,a_1,b_1,\sou{c_1}} : F_1 ,..., x_{n_p,a_p,b_p,\sou{c_p}} : F_p$ where $F_i$ is a classical type, $0 \leq n_i \leq n-1$, and $1 \leq i \leq p$.\ We will prove that there is an integer $m$ and a finite sequence of head reductions $\{ U_i \p_C V_i \}_{1\leq i\leq r}$ such that we have 1), 2), 3), 4), and there is a good context $\G$ such that $\G \v_M V_i :\perp$ $1 \leq i \leq r$.\ We have $\v_M T: N^C \f \neg\neg N$, then $\n:N^C,f:\neg N \v_M(T)\n f:\perp$, and by Lemmas 6.1 and 6.2, $(T)\n f \p_C V_1$ where $V_1 = (f)\t$ or $V_1 = (\n)ab\sou{c}$.\ Assume that we have the head reduction $U_k \p_C V_k$ and $V_k \neq (f)\t$.** - - If $V_k = (\n)a b \sou{c}$, then, by the induction hypothesis, there is a good context $\G$ such that $\G \v_M (\n)a b \sou{c} :\perp$. By Lemma 6.1, there is a classical type $F$, such that $\G \v_Ma:F$ ; $\G \v_Mb:F \f F$ ; $\sou{c}=c_1,...,c_s$ ; $F \lhd E_1 \f F_1$ ; $F_i \lhd E_{i+1} \f F_{i+1}$ $1 \leq i \leq s-1$ ; $F_s \lhd \perp$ ; and $\G \v_Mc_i : E_i$ $1 \leq i \leq s$. - - If $n=0$, let $U_{k+1} = (a)\sou{c}$. We have $\G \v_M U_{k+1}:\perp$. - - If $n \neq 0$, let $U_{k+1} = ((b)x_{n-1,a,b,\sou{c}})\sou{c}$. The variable $x_{n-1,a,b,\sou{c}}$ is not used before. Indeed, if it is, we check easily that the $\l C$-term $(T)\so{n} f$ is not solvable; but that is impossible because $f:\neg N \v_M(T)\so{n} f :\perp$. Therefore $\G' = \G ,x_{n-1,a,b,\sou{c}}:F$ is a good context and $\G' \v_M U_{k+1} :\perp$. - - If $V_k = (x_{l,a,b,\sou{c}}) \sou{d}$, then, by the induction hypothesis, there is a good context $\G$ such that $\G \v_M(x_{l,a,b,\sou{c}}) \sou{d} :\perp$. Then there is a classical type $F$ such that $x_{l,a,b,\sou{c}} : F$ is in the context $\G$. By Lemma 6.2, $\sou{d}=d_1,...,d_s$ ; $F \lhd E_1 \f F_1$ ; $F_i \lhd E_{i+1} \f F_{i+1}$ $1 \leq i \leq s-1$ ; $F_s \lhd \perp$ ; and $\G \v_M d_i : E_i$ $1 \leq i \leq s$. - - If $l=0$, let $U_{k+1}=(a)\sou{d}$. We have $\G \v_M U_{k+1}:\perp$. - - If $l \neq 0$, let $U_{k+1} = ((b)x_{l-1,a,b,\sou{d}})\sou{d}$. The variable $x_{l-1,a,b,\sou{d}}$ is not used before. Indeed, if it is, we check that the $\l C$-term $(T)\so{n} f$ is not solvable; but this is impossible because $f:\neg N \v_M (T)\so{n} f :\perp$. Then $\G' = \G ,x_{l-1,a,b,\sou{d}}:F$ is a good context and $\G' \v_M U_{k+1} :\perp$. Therefore there is a good context $\G'$ such that $\G' \v_M U_{k+1} :\perp$. Then, by Lemmas 6.1 and 6.2, $U_{k+1} \p_C V_{k+1}$ where $V_{k+1} = (f)\t$ or $V_{k+1} = (\n)ab\sou{c}$ or $V_{k+1} = (x_{l,a,b,\sou{c}})\sou{d}$ $0 \leq l \leq n-1$.\ This construction always terminates. Indeed, if not, we check that the $\l C$-term $(T)\so{n} f$ is not solvable; but this is impossible because $f:\neg N \v_M (T)\so{n} f :\perp$.\ Therefore there is $r \geq 0$ and a good context $\G$ such that $\G \v_M V_{r} = (f)\t :\perp$, and $\G \v_M \t :N$. Therefore, by Lemma 6.3, there is an $m \geq 0$ such that $\t \simeq\sb{\b}\so{m}$. $\Box$\ Let $T$ be a $\l C$-term such that $\v_M T: N^C \f \neg\neg N$.
By Theorem 6.5, there is an integer $s$ and a finite sequence of head reductions $\{ U_i \p_C V_i \}_{1\leq i\leq r}$ such that :\ 1) $U_1 = (T)\n f$ and $V_r = (f)\t_s$ where $\t_s \simeq\sb{\b}\so{s}$;\ 2) $V_i = (\n) a b \sou{c}$ or $V_i = (x_{l,a,b,\sou{c}}) \sou{d}$ $0 \leq l \leq n-1$;\ 3) If $V_i = (\n)a b \sou{c}$, then $U_{i+1} = (a)\sou{c}$ if $n=0$ and $U_{i+1} = ((b)x_{n-1,a,b,\sou{c}})\sou{c}$ if $n \neq 0$ ;\ 4) If $V_i = (x_{l,a,b,\sou{c}})\sou{d}$ $0 \leq l \leq n-1$, then $U_{i+1} = (a)\sou{d}$ if $l=0$ and $U_{i+1} = ((b)x_{l-1,a,b,\sou{d}})\sou{d}$ if $l \neq 0$.\ Let $\th_n$ be a classical integer of value $n$, and $x,g$ two distinct variables. By Theorem 5.6 we have :\ If $n=0$, then for every stack constant $p$, we have : $(\th_n)xgp \rhd_C (x)p$.\ If $n \not = 0$, then there is $m \geq 1$, and a mapping $I : \{0,...,m\}\f N$, such that for all distinct stack constants $p_0,p_1,...,p_m$, we have :\ $(\th_n)xgp_0 \rhd_C (g)t_1 p_{r_0}$ ;\ $(t_i)p_i \rhd_C (g)t_{i+1}p_{r_i}$ $1 \leq i \leq m-1$ ;\ $(t_m)p_m \rhd_C (x)p_{r_m}$\ where $I(0)=n$, $I(r_m)=0$, and $I(i+1)=I(r_i)-1$ $0 \leq i \leq m-1$.\ If $n = 0$, then $(T)\th_n f \p_C (f)\t[\th_n / \n]$. **Proof We prove by induction that for every $1 \leq i \leq r$, we have $(T)\th_n f \p_C V_i [\th_n / \n]$.\ For $i=1$, $(T)\th_n f = \{ (T)\n f \} [\th_n / \n] = U_1 [\th_n / \n] \p_C V_1 [\th_n / \n]$.\ Assume it is true for $i$, and prove it for $i+1$.\ $(T)\th_n f \p_C V_i [\th_n / \n] = \{ (\n)ab \sou{c} \} [\th_n / \n] = \{ (\th_n)ab\sou{c} \} [\th_n / \n] = \{ (\th_n)xgp \} [a / x , b / g , \sou{c} / p ] [\th_n / \n]$. Since $(\th_n)xgp \p_C (x)p$, then $(T)\th_n f \p_C \{ (a)\sou{c} \} [\th_n / \n] = U_{i+1} [\th_n / \n] \p_C V_{i+1}[\th_n / \n]$.\ So, for $i=r$, we have $(T)\th_n f \p_C V_r [\th_n / \n] = \{ (f)\t \} [\th_n / \n] = (f)\t[\th_n / \n]$. $\Box$\ We assume now that $n \geq 1$.\ A $k-\l C$-term is a $\l C$-term of the form $V_k[\t_1 / y_1 ] ... [\t_p / y_p ] [\th_n / \n ]$ such that :\ - $Fv(V_k) \subseteq \{ \n , f , y_1 , ... , y_p \}$\ - for every $1 \leq i \leq p$, $y_i = x_{n_i,a_i,b_i,\sou{c}_i}$ and $\t_i =t_{m_i} [a_i / x , b_i / g , \sou{d_0} / p_0 ,...,\sou{d_{m_i-1}} / p_{m_i-1}]$ where $I(m_i)=n_i$\ - for every $0 \leq k \leq m_i-1$, there is $1 \leq l \leq r$ such that $U_l = (a_i)\sou{d_k}$ if $I(k)=0$ and $U_l = (b_i)x_{I(k)-1,a_i,b_i,\sou{d_k}}\sou{d_k}$ if $I(k) > 0$.\ To simplify, a $k- \l C$-term is denoted by $V_k []$.** Let $1 \leq i \leq r-1$ and $V_i []$ an $i-\l C$-term. If $(T)\th_n f \p_C V_i []$, then there is $1 \leq j \leq r$ and a $j-\l C$-term $V_j []$ such that $V_i [] \p_C V_j []$ and either $V_i [] \not = V_j []$ or $i < j$. **Proof There are only two possibilities. 1) $V_i = (\n)ab\sou{c}$ ; 2) $V_i = (x_{\a,a,b,\sou{c}})\sou{d}$.\ We now examine each of these cases.\ 1) If $V_i = (\n)ab\sou{c}$, then $V_i [] = \{ (\th_n)ab\sou{c} \}[] = \{ (\th_n)xgp_0 \} [a / x , b / g ,\sou{c} / p_0] []$. Since $(\th_n)xgp_0 \rhd_C (g)t_1 p_{r_0}=(g)t_1 p_0$, then $V_i [] \p_C \{ (b)t_1[a / x, b / g , \sou{c} / p_0 ] \sou{c} \} [] =$\ $\{ (b) x_{n-1,a,b,\sou{c}} \sou{c} \} [t_1[a / x, b / g , \sou{c} / p_0 ] / x_{n-1,a,b,\sou{c}}] [] = U_{i+1} [] \p_C V_{i+1} []$. Let $j=i+1$.
We have $i < j$ and $I(1)=I(r_0)-1=I(0)-1=n-1$.\ 2) If $V_i = (x_{\a,a,b,\sou{c}})\sou{d}$, then $V_i []=\{ (t_{\b}[a / x, b / g , \sou{d_0} / p_0 ,...,\sou{d_{\b-1}} / p_{\b-1}])\sou{d} \} []$ where $I(\b)=\a$.\ If $I(\b)=\a \not =0$, then $U_{i+1}=(b)x_{\a-1,a,b,\sou{d}}\sou{d}=(b)x_{I(\b)-1,a,b,\sou{d}}\sou{d}$, and if $I(\b)=\a =0$, then $U_{i+1}= (a)\sou{d}$.\ We consider the following two cases.** - - If $\b < m$, then $(t_{\b})p_{\b} \rhd_C (g)t_{{\b}+1}p_{r_{\b}}$, so that\ $V_i [] \p_C \{ (g)t_{{\b}+1}p_{r_{\b}} \}[a / x, b / g , \sou{d_0} / p_0 ,...,\sou{d_{\b-1}} / p_{\b-1} , \sou{d} / p_{\b}] []$ =\ $\{ (b)t_{{\b}+1} \sou{d_{r_{\b}}} \}[a / x, b / g , \sou{d_0} / p_0 ,...,\sou{d_{\b-1}} / p_{\b-1},\sou{d} / p_{\b}] []$.\ Since $\b \not = m$, then $I(r_\b) \not = 0$. By the hypothesis there is $1 \leq j \leq r$ such that $U_j = (b)x_{I(r_{\b})-1,a,b,\sou{d_{r_{\b}}}}\sou{d_{r_{\b}}}$. Therefore\ $V_i [] \p_C U_j [t_{{\b}+1}[a / x, b / g , \sou{d_0} / p_0 ,...,\sou{d_{\b-1}} / p_{\b-1},\sou{d} / p_{\b}]/ x_{I(r_{\b})-1,a,b,\sou{d_{r_{\b}}}}][] = U_j [] \p_C V_j []$.\ If $V_i [] = V_j []$, then the head $C$-reduction $(t_{\b})p_{\b} \rhd_C (g)t_{{\b}+1}p_{r_{\b}}$ must be an identity, in other words $(t_{\b})p_{\b} = (g)t_{{\b}+1}p_{r_{\b}}$ and therefore $\b = r_{\b}$. And so $j = i+1 > i$. - - If $\b = m$, then $(t_{\b})p_{\b} = (t_m)p_m \rhd_C (x)p_{r_m}$, so that\ $V_i [] \p_C \{ (x)p_{r_m} \}[a / x, b / g , \sou{d_0} / p_0 ,...,\sou{d_{m-1}} / p_{m-1}][] = \{ (a)\sou{d_{r_m}} \}[]$.\ Since $I(r_m)=0$, then by the hypothesis there is $1 \leq j \leq r$ such that $U_j = (a)\sou{d_{r_m}}$. Therefore $V_i [] \p_C U_j [] \p_C V_j []$.\ If $V_i [] = V_j []$, then the head $C$-reduction $(t_m)p_m \rhd_C (x)p_{r_m}$ must be an identity, in other words $(t_m)p_m = (x)p_{r_m}$ and therefore $m = r_m$. And so $j = i+1 > i$. $\Box$ There is a substitution $\s$ such that $(T)\th_n f \p_C (f)\s(\t)$. **Proof $(T)\th_n f = \{ (T)\n f \} [\th_n / \n] = U_1 [\th_n / \n] \p_C V_1 [\th_n / \n]$. By Lemma 6.5 we obtain a sequence $V_{i_1} []$ , $V_{i_2} []$ , ... , $V_{i_k} []$ , ... such that $(T)\th_n f \p_C V_{i_s} []$ and if $V_{i_s} [] \not = V_{i_{s+1}} []$ then $i_s \leq i_{s+1}$. This sequence is necessarily finite, indeed $f:\neg N \v_M (T)\th_n f :\perp$. If $V_{i_s} [] = V_{i_{s+1}} [] = ... = V_{i_{s+\a}} []$, then $i_s < i_{s+1} < ... < i_{s+\a}$ and $\a \leq r$. Therefore there is $s$ such that $V_{i_s} = (f)\t$, then $(T)\th_n f \p_C V_{i_s} [] = \{ (f)\t \} [] = (f)\t[]$. $\Box$\ Then, by Lemma 6.4 and Corollary 6.2, $T$ is a storage operator for classical integers.** General Theorem --------------- In this subsection, we give (without proof) a generalization of Theorem 6.3.\ Let $T$ be a closed $\l C$-term, and $D,E$ two closed types of $AF2$ type system. We say that $T$ is a storage operator for the pair of types $(D,E)$ iff for every $\l$-term $t$ such that $\v_{AF2} t:D$, there is a $\l$-term $\t'_t$ and a $\l C$-term $\t_t$, such that $\t'_t \simeq\sb{\b} \t_t$, $\v_{AF2} \t'_t:E$, and for every $\l C$-term $\th_t$ such that $\v_{C2} \th_t:D$, there is a substitution $\s$, such that $(T)\th_t f \p_C (f)\s(\t_t)$. Let $D,E$ be two $\q$-positive closed types of $AF2$ type system, such that $E$ does not contain $\perp$. If $\v_{M2} T: D^C \f \neg\neg E$, then $T$ is a storage operator for the pair $(D,E)$.
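Before turning to the operational characterizations below, it may help to see the behaviour that these theorems capture in an executable form. The following Haskell sketch is purely illustrative: it is not one of Krivine's terms $T_1,T_2$, nor a term of $AF2$, $C2$ or $M2$; it only mimics, at the level of values, the specification $(T)\th_n f \p_C (f)\s(\t_n)$, namely that a storage operator rebuilds the canonical numeral of its argument before handing it to the continuation $f$.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church numerals: n s z = s (s (... (s z)))  (n applications of s).
newtype Church = Church { runChurch :: forall a. (a -> a) -> a -> a }

zero :: Church
zero = Church (\_ z -> z)

suc :: Church -> Church
suc n = Church (\s z -> s (runChurch n s z))

-- A term beta-equivalent to the numeral 2 but not syntactically canonical,
-- standing in (very loosely) for a non-canonical term of value 2.
theta2 :: Church
theta2 = (\x -> x) (suc (suc zero))

-- "store n f" first rebuilds the canonical numeral (n applied to the real
-- successor and zero) and only then calls the continuation f, mimicking
-- (T) theta f  head-reducing to  (f) tau_n  with  tau_n  beta-equivalent to n.
store :: Church -> (Church -> r) -> r
store n f = f (runChurch n suc zero)

main :: IO ()
main = print (runChurch (store theta2 id) (+ 1) (0 :: Int))  -- prints 2
```

Under lazy (call-by-name-like) evaluation, `theta2` stands in for a non-canonical term of value $2$, and `store` forces it into the canonical Church numeral before the continuation ever sees it; the actual definitions and typings used in this paper are exactly the ones given above.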
Operational characterization of $\l C$-terms of type $\q X_C \{\perp \f X_C \}$ and $\q X_C \{ \neg \neg X_C \f X_C \}$ ======================================================================================================================= Let **A** (for Abort) be the $\l C$-term $\l x(C)\l yx$.\ **Behaviour of A :** $(A)tt_1...t_n \p_C ((C)\l yt) t_1...t_n \p_C (\l yt)\l x(x)t_1...t_n \p_C t$.\ **Typing of A :** $x:\perp \v_{M2} \l yx:\neg \neg X_C \Longrightarrow x:\perp \v_{M2} (C)\l yx:X_C \Longrightarrow \v_{M2} A :\q X_C \{ \perp \f X_C \}$. If $\v_{M2} T:\q X_C \{ \perp \f X_C \}$, then for every integer $n$, and for all $\l C$-terms $t,t_1,...,t_n$, $(T)t t_1...t_n \p_C t$. **Proof. Let $O_1,...,O_n$ be new predicate symbols of arity 0 different from $\perp$. Let $A=O_1,...,O_n \f \perp$. If $\v_{M2} T:\q X_C \{ \perp \f X_C \}$, then $\v_{M2} T:\perp \f A$, and $\G = x:\perp,x_1:O_1,...,x_n:O_n \v_{M2} (T)xx_1...x_n:\perp$. Therefore $(T)xx_1...x_n \p_C (f)u_1...u_r$ and $\G \v_{M2} (f)u_1...u_r:\perp$.** - - If $f=x_i$ $1 \leq i \leq n$, then $r=0$, and $O_i=\perp$. A contradiction. - - If $f=x$, then $r=0$, and $(T)xx_1...x_n \p_C x$, therefore, for every integer $n$, and for all $\l C$-terms $t,t_1,...,t_n$, $(T)t t_1...t_n \p_C t$. $\Box$ The constant $C$ satisfies the following relations :\ $(C)t t_1...t_n \p_C (t)U$ and\ $(U)y \p_C (y)t_1...t_n$ where $y$ is a new variable.\ Let $C'=\l x(C)\l d(x)\l y(x)\l z(d)y$.\ $x:\neg \neg X_C,y:X_C,z:X_C,d:\neg X_C \v_{M2} (d)y:\perp \Longrightarrow$\ $x:\neg \neg X_C,y:X_C,d:\neg X_C \v_{M2} (x)\l z(d)y:\perp \Longrightarrow$\ $x:\neg \neg X_C,d:\neg X_C \v_{M2} (x)\l y(x)\l z(d)y:\perp \Longrightarrow$\ $x:\neg \neg X_C \v_{M2} (C)\l d(x)\l y(x)\l z(d)y:X_C \Longrightarrow$\ $\v_{M2} C': \q X_C \{\neg \neg X_C \f X_C \}$.\ The $\l C$-term $C'$ satisfies the following relations :\ $(C')t t_1...t_n \p_C (t)U$,\ $(U)y \p_C (t)V$, and\ $(V)z \p_C (y)t_1...t_n$ where $y,z$ are new variables.\ In general, we have the following characterization. If $\v_{M2} T: \q X_C \{\neg \neg X_C \f X_C \}$, then there is an integer $m$, such that, for every integer $n$, and for all $\l C$-terms $t,t_1,...,t_n$ : - $(T)t t_1...t_n \p_C (t)V_1$, - $(V_i)y_i \p_C (t)V_{i+1}$ $1 \leq i\leq m-1$, and - $(V_m)y_m \p_C (y_j)t_1...t_n$ for some $1 \leq j \leq m$, where $y_1,...,y_m$ are new variables. **Proof Let $O$ be a new predicate symbol of arity 0 different from $\perp$. We define, as in section 3, the system $M2_O$, and we check easily that this system has the same results as Lemmas 5.1, 5.2, 5.3 and 5.4.\ Let $p$ be a stack constant and $A=O \f \perp$. If $\v_{M2} T: \q X_C \{\neg \neg X_C \f X_C \}$, then $\v_{M2_O} T: \neg \neg A \f A$, and $\G=x:\neg \neg A, p:O \v_{M2_O} (T)xp:\perp$. Therefore $(T)xp \p_C (f)u_1...u_r$, and $\G \v_{M2_O} (f)u_1...u_r:\perp$.** - - If $f=p$, then $r=0$, and $O=\perp$. A contradiction. - - If $f=x$, then $(T)xp \rhd_C (x)U_1$, and $\G \v_{M2_O} U_1:\neg A$. We prove (by induction) that if $\G,y_1:A,...,y_{i-1}:A \v_{M2_O} U_i:\neg A$, then \[$(U_i)y_i \rhd_C (x)U_{i+1}$, and $\G,y_1:A,...,y_i:A \v_{M2_O} U_{i+1}:\neg A$\] or \[$(U_i)y_i \rhd_C (y_j)p$ $1 \leq j \leq i$\].\ The sequence $(U_i)_{i \geq 0}$ is not infinite. Indeed, if it is, the $\l C$-term $((T)\l x(x)z)p$ is not $C$-solvable; but this is impossible, because $z:A,p:O \v_{M2} ((T)\l x(x)z)p:\perp$.\ To obtain the Theorem, we replace the constant $p$ by the sequence $\sou{t}=t_1,...,t_n$ and we put $V_i = U_i [\sou{t} / p]$.
$\Box$ The $\l\m$-calculus =================== In this section, we give a similar version to Theorem 6.3 in M. Parigot’s $\l \m$-calculus. Pure and typed $\l\m$-calculus ------------------------------ $\l\m$-calculus has two distinct alphabets of variables : the set of $\l$-variables $x,y,z,...$, and the set of $\m$-variables $\a,\b,\g$,.... Terms are defined by the following grammar : $t$ $:=$ $x$ $\mid$ $\l xt$ $\mid$ $(t)t$ $\mid$ $\m\a[\b]t$ Terms of $\l\m$-calculus are called $\l\m$-terms.\ The reduction relation of $\l\m$-calculus is induced by five different notions of reduction :\ **The computation rules** - ($C_1$) $(\l xu)v \f u[v/x]$ - ($C_2$) $(\m\a u)v \f \m\a u[v/$\*$\a]$ - where $u[\sou{v}/$\*$\a]$ is obtained from $u$ by replacing inductively each subterm of the form $[\a]w$ by $[\a](w)\sou{v}$. **The simplification rules** - ($S_1$) $[\a]\m\b u \f u[\a/\b]$ - ($S_2$) $\m\a [\a]u \f u$, if $\a$ has no free occurrence in $u$ - ($S_3$) $\m\a u \f \l x \m\a u[x/$\*$\a]$, if $u$ contains a subterm of the form $[\a]\l yw$. (see \[18\]) In $\l\m$-calculus, reduction is confluent. The notation $u \p_{\m} v$ means that $v$ is obtained from $u$ by some head reductions.\ The head equivalence relation is denoted by : $u \sim_{\m} v$ if and only if there is a $w$, such that $u \p_{\m} w$ and $v \p_{\m} w$.\ Proofs are written in a natural deduction system with several conclusions, presented with sequents. One deals with sequents such that :\ - Formulas to the left of $\v$ are labelled with $\l$-variables ;\ - Formulas to the right of $\v$ are labelled with $\m$-variables, except one formula which is labelled with a $\l\m$-term ;\ - Distinct formulas never have the same label.\ The right and the left parts of the sequents are considered as sets and therefore contraction of formulas is done implicitly.\ Let $t$ be a $\l\m$-term, $A$ a type, $\G = x_1:A_1,...,x_n:A_n$, and $\D = \a_1:B_1,...,\a_m:B_m$. We define by means of the following rules the notion “$t$ is of type $A$ in $\G$ and $\D$”. This notion is denoted by $\G\v_{FD2} t:A,\D$. - The rules (1),...,(8) of $AF2$ type system. - \(9) If $\G\v_{FD2} t:A,\b:B,\D$, then $\G\v_{FD2}\m\b [\a]t:B,\a:A,\D$. Weakenings are included in the rules (2) and (9).\ As in typed $\l$-calculus, one can define $\neg A$ as $A \f \perp$ and use the previous rules with the following special interpretation of naming for $\perp$ : for $\a$ a $\m$-variable, $\a : \perp$ is not mentioned.\ **Example.** Let **C** $=\l x \m \a [\ph](x)\l y \m \b [\a] y$.\ $x:\neg \neg X,y:X \v_{FD2} y:X \Longrightarrow$\ $x:\neg \neg X,y:X \v_{FD2} \m \b [\a]y:\perp,\a:X \Longrightarrow$\ $x:\neg \neg X \v_{FD2} \l y \m \b [\a]y:\neg X,\a:X \Longrightarrow$\ $x:\neg \neg X \v_{FD2} \m \a [\ph] (x) \l y \m \b [\a]y: X \Longrightarrow$\ $\v_{FD2}$ **C** $: \q X \{\neg \neg X \f X \}$. (see \[18\] and \[20\]) The $FD2$ type system has the following properties :\ 1) Type is preserved during reduction.\ 2) Typable $\l\m$-terms are strongly normalizable. Classical integers ------------------ Let $n$ be an integer.
A classical integer of value $n$ is a closed $\l\m$-term $\th_n$ such that $\v_{FD2} \th_n :N[s^n(0)]$.\ Let $x$ and $f$ be fixed variables, and $N_{x,f}$ be the set of $\l\m$-terms defined by the following grammar : $u$ $:=$ $x$ $\mid$ $(f)u$ $\mid$ $\m\a[\b]x$ $\mid$ $\m\a[\b]u$ We define, for each $u \in N\sb{x,f}$, the set $rep(u)$, which is intuitively the set of integers potentially represented by $u$ : - - $rep(x) = \{ 0 \}$ - - $rep((f)u) = \{ n+1 : n \in rep(u) \}$ - - $rep(\m\a[\b]u)= \bigcap rep(v)$ for each subterm $[\a]v$ of $[\b]u$ The following Theorem characterizes the normal forms of classical integers. (see \[19\]) The normal classical integers of value $n$ are exactly the $\l\m$-terms of the form $\l x\l f u$ with $u\in N_{x,f}$ without free $\m$-variable and such that $rep(u)=\{n\}$. General Theorem --------------- In order to define, in this framework, the equivalent of system $M2$, the proof of $\neg \neg A \f A$ should not be allowed for all formulas $A$, and thus we should prevent the occurrence of some formulas on the right. Hence the following definition.\ Let $t$ be a $\l\m$-term, $A$ a type, $\G = x_1:A_1,...,x_n:A_n$, and $\D = \a_1:B_1,...,\a_m:B_m$ where $B_i$ $1 \leq i \leq m$ is a classical type. We define by means of the following rules the notion “$t$ is of type $A$ in $\G$ and $\D$”, this notion is denoted by $\G\v_{M2} t:A,\D$. - The rules of $DL2$ type system. - (6$'$) If $\G\v t:A, \D$, and $X_C$ has no free occurrence in $\G$, then $\G\v t: \q X_C A, \D$. - (7$'$) If $\G\v t: \q X_C A, \D$, and $G$ is a classical type, then $\G\v t:A[G/ X_C], \D$. Let $T$ be a closed $\l\m$-term. We say that $T$ is a storage operator for classical integers if and only if for every $n \geq 0$, there is a $\l \m$-term $\t_n \simeq\sb{\b} \so{n}$, such that for every classical integer $\th_n$ of value $n$, there is a substitution $\s$, such that $(T)\th_n f \sim_{\m} \m \a [\a] (f)\s(\t_n)$. If $\v_{M2} T: \q x \{ N^C[x] \f \neg\neg N[x] \}$, then $T$ is a storage operator for classical integers. [99]{} M. Felleisen [*The Calculi of $\l_v -CS$ conversion: a syntactic theory of control and state in imperative higher order programming.*]{}\ Ph. D. dissertation, Indiana University, 1987. J.L. Krivine [*Lambda-calcul, types et modèles*]{}\ Masson, Paris 1990. J.L. Krivine [*Opérateurs de mise en mémoire et traduction de Gödel*]{}\ Archive for Mathematical Logic 30, 1990, pp. 241-267. J.L. Krivine [*Lambda-calcul, évaluation paresseuse et mise en mémoire*]{}\ Theoretical Informatics and Applications, Vol. 25, 1, p. 67-84, 1991. J.L. Krivine [*Mise en mémoire (preuve générale)*]{}\ Manuscript, 1993. J.L. Krivine [*Classical logic, storage operators and 2nd order lambda-calculus*]{}\ Ann. Pure and Applied Logic 68 (1994) p. 53-78. J.L. Krivine [*A general storage theorem for integers in call-by-name $\l$-calculus*]{}\ Th. Comp. Sc. (to appear). K. Nour [*Opérateurs de mise en mémoire en lambda-calcul pur et typé*]{}\ Thèse de Doctorat, Université de Chambéry, 1993. K. Nour and R. David [*Storage operators and directed $\l$-calculus*]{}\ Journal of Symbolic Logic, vol. 60, n. 4, p. 1054-1086, 1995. K. Nour [*Une preuve syntaxique d’un Théorème de J.L. Krivine sur les opérateurs de mise en mémoire*]{}\ C.R. Acad. Sci Paris, t. 318, Série I, p. 201-204, 1994. K. Nour [*Opérateurs de mise en mémoire et types $\q$-positifs*]{}\ Theoretical Informatics and Applications (to appear). K.
Nour [*Entiers intuitionnistes et entiers classiques en $\l C$-calcul*]{}\ Theoretical Informatics and Applications, vol. 29, n. 4, p. 293-313, 1995. K. Nour [*Quelques résultats sur le $\l C$-calcul*]{}\ C.R. Acad. Sci Paris, t. 320, Série I, p. 259-262, 1995. K. Nour [*A general type for storage operators*]{}\ Mathematical Logic Quarterly, 41, p. 505-514, 1995. K. Nour [*La valeur d’un entier classique en $\l\m$-calcul*]{}\ Submitted to Archive for Mathematical Logic. K. Nour [*Caractérisation opérationnelle des entiers classiques en $\l C$-calcul*]{}\ C.R. Acad. Sci Paris, t. 320, Série I, p. 1431-1434, 1995. M. Parigot [*Free deduction : an analysis of computations in classical logic*]{}\ Proc. Russian Conference on Logic Programming, St Petersburg (Russia), 1991, Springer LNCS 592, pp. 361-380. M. Parigot [*$\l\m$-calculus : an algorithmic interpretation of classical natural deduction*]{}\ Proc. International Conference on Logic Programming and Automated Reasoning, St Petersburg (Russia), 1992, Springer LNCS 624, pp. 190-201. M. Parigot [*Classical proofs as programs*]{}\ To appear in Proc. 3rd Kurt Gödel Colloquium KGC’93, Springer Lecture Notes in Computer Science. M. Parigot [*Strong normalization for second order classical deduction*]{}\ To appear in Proc. LICS 1993. [^1]: The idea of using storage operators in classical logic is due to M. Parigot (see \[19\]) [^2]: The notion of stack constants is taken from a manuscript of J-L. Krivine
--- abstract: 'In this paper, we classify Gorenstein stable log surfaces with $(K_X+\Lambda)^2=p_g(X,\Lambda)-1$.' address: 'Yau Mathematical Sciences Center, Tsinghua University, Beijing 100084' author: - Jingshan Chen title: ' Gorenstein stable log surfaces with $(K_X+\Lambda)^2=p_g(X,\Lambda)-1$' --- Introduction ============ KSBA stable (log) surfaces are the two-dimensional analogues of stable (pointed) curves. They are the fundamental objects in compactifying the moduli spaces of smooth surfaces of general type. In general, stable (log) surfaces are difficult to classify. We may first focus on Gorenstein ones, i.e. $K_X$ (resp. $K_X+\Lambda$) being Cartier. In [@LR13], Liu and Rollenske give several inequalities for the invariants of Gorenstein stable log surfaces. One important inequality among them is the stable log Noether inequality $(K_X+\Lambda)^2\ge p_g(X,\Lambda)-2$ (see [@LR13 Thm 4.1]). This can be rephrased as $\Delta(X,K_X+\Lambda)\ge 0 $, where $\Delta$ is Fujita’s $\Delta$-genus. Gorenstein stable log surfaces with $\Delta(X,K_X+\Lambda)=0$ have been classified in [@LR13]. Normal Gorenstein stable log surfaces with $\Delta(X,K_X+\Lambda)=1$ have been classified in [@Chen18]. Here we continue to classify non-normal Gorenstein stable log surfaces with $\Delta(X,K_X+\Lambda)=1$. The main result is as follows. \[Main Theorem \] Let $(X,\Lambda)$ be a Gorenstein stable log surface with $\Delta(X,K_X+\Lambda)=1$ which is not normal. If $X$ is irreducible, let $\pi\colon \bar{X}\to X$ be the normalization map. Then - either $\Delta(\bar{X},\pi^*(K_X+\Lambda))=1$ and $(X,\Lambda)$ is as in Thm \[delta1,1\], - or $\Delta(\bar{X},\pi^*(K_X+\Lambda))=0$ and $(\bar{X},\bar{\Lambda}+\bar{D})$ is as in Thm \[delta1,0\]. If $X$ is reducible, write $X=\bigcup X_i$, where $X_i$ is an irreducible component. Then $\Delta(X_i,(K_X+\Lambda)|_{X_i})=1$ or $0$ for each component $X_i$, $X$ has a unique minimal connected component $U$ such that $\Delta(U,(K_X+\Lambda)|_{U})=1$, $X\setminus U$ is composed with several trees $T_j$ with $\Delta(T_j,(K_X+\Lambda)|_{T_j})=0$ and $X$ is glued by $U$ and $T_j$ along lines. Moreover, $U$ is one of the followings: - $X=U$ is a string of log surfaces $X_i$ with $\Delta(X_i,(K_X+\Lambda)|_{X_i})=1$ and $|(K_{X}+\Lambda)|_{X_i}|$ composed with a pencil of elliptic curves. The connecting curves are all fibers. Moreover, in this case $|K_X+\Lambda|$ is composed with a pencil of elliptic curves. - $U$ is a string of surfaces glued along lines. The end surfaces $X_i$ of the string $U$ are non-normal with $\Delta(X_i,(K_X+\Lambda)|_{X_i})=1$. - $U$ is composed with a single irreducible log surface $X_k$ with $\Delta(X_k,(K_X+\Lambda)|_{X_k})=1$ and $(K_{X}+\Lambda)|_{X_k}$ very ample. Moreover, in this case $K_X+\Lambda$ is very ample. - $U=X_j\cup X_k $ where $X_j$, $X_k$ are two Gorenstein log surfaces with $\Delta(X_j,(K_X+\Lambda)|_{X_j})=\Delta(X_k,(K_X+\Lambda)|_{X_k})=0$. All the connecting curves of $X$ are lines except $X_j\cap X_k$. - $U$ is a cycle of log surface $X_i$ with $\Delta(X_i,(K_X+\Lambda)|_{X_i})=0$. All the connecting curves of $X$ are lines. Now we give a brief account of each section. In \[prelim\], we recall some definitions and facts about Gorenstein stable log surfaces. In \[zero-Delta-genus\], we recall the definition of Fujita’s $\Delta$-genus and include some results about normal Gorenstein stable log surfaces $(X,\Lambda)$ with $\Delta(X,K_X+\Lambda)=0$ or $1$. 
In \[non-normal\], we include some results about non-normal Gorenstein stable log surfaces. In \[nnorm\], we deal with the case that $X$ is non-normal and irreducible. In \[reducible-stable\], we deal with the case that $X$ is reducible. Finally we describe Gorenstein stable surfaces with $K_X^2= p_g-1$. Acknowledgements: {#acknowledgements .unnumbered} ----------------- I am grateful to Prof. Jinxing Cai and Prof. Wenfei Liu for their instructions. I would also like to thank the anonymous referee for helpful advice and suggestions. Notations and conventions ------------------------- We work exclusively with schemes of finite type over the complex numbers $\mathbb{C}$. - A surface is a connected reduced projective scheme of pure dimension two. - By abuse of notation, we sometimes do not distinguish a Cartier divisor $D$ and its associated invertible sheaf $\mathcal{O}_X(D)$. - $\Sigma_d$ denotes a Hirzebruch surface, which admits a $\mathbb{P}^1$ fibration over $\mathbb{P}^1$. We denote by $\Gamma$ a fiber. It has a unique 1-section $\Delta_0$ whose self-intersection is $-d$. - We use ’$\equiv$’ to denote the linear equivalence relation of divisors. - If $D$ is a Cartier divisor on $X$, then we denote by $\Phi_{|D|}\colon X\dashrightarrow \mathbb{P}:=|D|^*$ the rational map defined by the linear system $|D|$. - A line $l$ on a variety $X$ with respect to $\mathcal{O}_X(1)$ is a rational curve such that $l\cdot \mathcal{O}_X(1)=1$. Preliminaries {#prelim} ============= Let $X$ be a demi-normal surface, i.e. $X$ satisfies $S_2$ and has at worst ordinary double points at its generic points of codimension 1. Denote $\pi\colon \bar X \to X$ as the normalisation map of $X$. The conductor ideal $ \mathrm{\mathcal{H}om}_{\mathcal{O}_X}(\pi_*\mathcal{O}_{\bar{X}}, \mathcal{O}_X)$ is an ideal sheaf both on $X$ and $\bar{X}$ and hence defines subschemes $D\subset X \text{ and } \bar D\subset \bar X$, both reduced and of pure codimension 1; we often refer to $D$ as the non-normal locus of $X$. Let $\Lambda$ be a reduced curve on $X$ whose support does not contain any irreducible component of $D$. Then the strict transform $\bar \Lambda$ in the normalization is well defined. We have $\pi^*(K_X+\Lambda)=K_{\bar{X}}+\bar D+\bar \Lambda$ and $(K_X+\Lambda)^2 = (K_{\bar X}+\bar D+\bar \Lambda)^2$. \[defin: slc\] We call a pair $(X, \Lambda)$ as above a *log surface*; $\Lambda$ is called the (reduced) boundary. A log surface $(X,\Lambda)$ is said to have *semi-log-canonical (slc)* singularities if it satisfies the following conditions: 1. $K_X + \Lambda$ is $\mathbb{Q}$-Cartier, that is, $m(K_X+\Lambda)$ is Cartier for some $m\in\mathbb{Z}^{>0}$; the minimal such $m$ is called the (global) index of $(X,\Lambda)$. 2. The pair $(\bar X, \bar \Lambda+\bar D)$ has log-canonical singularities. The pair $(X,\Lambda)$ is called a stable log surface if in addition $K_X+\Lambda$ is ample. A stable surface is a stable log surface with empty boundary. By abuse of notation we say $(X, \Lambda)$ is a Gorenstein stable log surface if the index is equal to one, i.e. $K_X+\Lambda$ is an ample Cartier divisor. Gorenstein slc singularities and semi-resolutions ------------------------------------------------- Normalizing a demi-normal surface loses all information on the gluing in codimension one. Often it is better to work on a simpler but still non-normal surface. A surface $X$ is called semi-smooth if every singularity of $X$ is either a double normal crossing point or a pinch point [^1]. The normalization of a semi-smooth surface is smooth.
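For the reader's convenience we recall the standard analytic local models (classical facts, not specific to this paper): a double normal crossing point and a pinch point are locally analytically isomorphic to $$\{xy=0\}\subset \mathbb{C}^3 \qquad\text{and}\qquad \{x^2-zy^2=0\}\subset \mathbb{C}^3,$$ respectively. In both cases the normalization is smooth and the preimage of the non-normal locus is a smooth curve mapping $2:1$ onto it, unramified in the first case and branched at one point over the pinch point in the second.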
A morphism of demi-normal surfaces $f\colon Y\rightarrow X$ is called a semi-resolution if the following conditions are satisfied: 1. $Y$ is semi-smooth; 2. $f$ is an isomorphism over the semi-smooth open subscheme of $X$; 3. $f$ maps the singular locus of $Y$ birationally onto the non-normal locus of $X$. A semi-resolution $f\colon Y\rightarrow X$ is called minimal if no $(-1)$-curve is contracted by $f$, that is, there is no exceptional curve $E$ such that $E^2 =K_Y\cdot E = -1$. Semi-resolutions always exist and one can also incorporate a boundary [@KollarSMMP 10.5]. \[rem: classification of sings\] Semi-log-canonical surface singularities have been classified in terms of their resolution graphs, at least for reduced boundary [@KSB88]. Let $x\in (X, \Lambda)$ be a Gorenstein slc singularity with minimal log semi-resolution $f\colon Y\to X$. Then it is one of the following (see [@Kollar-Mori Ch. 4], [@KollarSMMP Sect. 3.3], [@kollar12] and [@LR13]): Gorenstein lc singularities, $\Lambda=0$ : In this case $x\in X$ is a canonical singularity, or a simple elliptic, respectively cusp, singularity. For the latter the resolution graph is a smooth elliptic curve, a nodal rational curve, or a cycle of smooth rational curves (see also [@Lau77] and [@Reid97 Ch. 4]). Gorenstein lc singularities, $\Lambda\neq 0$ : Since the boundary is reduced, $\Lambda$ has at most nodes. If $\Lambda$ is smooth so is $X$ because of the Gorenstein assumption. If $\Lambda$ has a node at $x$ then $x$ is a smooth point of $X$ or $(X, \Lambda)$ is a general hyperplane section of a cyclic quotient singularity. In the minimal log resolution the dual graph of the exceptional curves is $$\bullet \ {-}\ c_1 \ -\ \cdots \ - \ c_n \ {-} \ \bullet \qquad (c_i\geq1),$$ where $c_i$ represents a smooth rational curve of self-intersection $-c_i$ and each $\bullet$ represents a (local) component of the strict transform of $\Lambda$. If $c_i=1$ for some $i$ then $n=1$ and $\Lambda$ is a normal crossing divisor in a smooth surface. non-normal Gorenstein slc singularities, $\Lambda=0$ : We describe the dual graph of the $f$-exceptional divisors over $x$: analytically locally $X$ consists of $k$ irreducible components, on each component we have a resolution graph as in the previous item, and these are glued together where the components intersect. In total we have a cycle of smooth rational curves. non-normal Gorenstein slc singularities, $\Lambda\neq 0$ : The difference from the previous case is that the local components are now glued in a chain and the ends of the chain intersect the strict transform of the boundary. In this case $X$ itself might not even be $\mathbb{Q}$-Gorenstein. Normal Gorenstein stable log surfaces with small $\Delta$-genus {#zero-Delta-genus} =============================================================== Let $X$ be a variety and $\mathcal{L}$ be an ample line bundle on it. Fujita introduced several invariants for such polarized varieties. One important invariant among them is the $\Delta$-genus $\Delta(X,\mathcal{L}):=\mathcal{L}^{\dim X}-h^0(X,\mathcal{L})+\dim X$. \[gepg-2\] Let $(X,\Lambda)$ be a normal irreducible Gorenstein stable log surface. Then $\Delta(X,K_X+\Lambda)\ge 0$.
Moreover if ’=’ holds, then $(X,\Lambda)$ is one of the followings: - $X$ is $\mathbb{P}^{2}$, $\mathcal{O}_X(K_X+\Lambda) =\mathcal{O}_{\mathbb{P}^{2}}(1)$ and $\Lambda\in|\mathcal{O}_{\mathbb{P}^{2}}(4)|$; - $X$ is $\mathbb{P}^{2}$, $\mathcal{O}_X(K_X+\Lambda)=\mathcal{O}_{\mathbb{P}^{2}}(2)$ and $\Lambda\in|\mathcal{O}_{\mathbb{P}^{2}}(5)|$; - $X$ is $\Sigma_d$, $K_X+\Lambda\equiv \Delta_0+\frac{N+d-1}{2}\Gamma$ and $\Lambda\in|3\Delta_0+\frac{N+3d+3}{2}\Gamma|$; ($N-d-3\ge0$ is an even number); - $X$ is a singular quadric $C_2$ in $\mathbb{P}^{3}$, $\mathcal{O}_X(K_X+\Lambda)=\mathcal{O}_{C_2}(1)=\mathcal{O}_{\mathbb{P}^{2}}(1)|_{C_2}$ and $\Lambda\in|\mathcal{O}_{C_2}(3)|$; - $X$ is a cone $C_{N-1}\hookrightarrow \mathbb{P}^{N}$, $\mathcal{O}_X(K_X+\Lambda)=\mathcal{O}_{\mathbb{P}^{N}}(1)|_{C_{N-1}}$ and the proper transformation $\bar{\Lambda}$ in the minimal resolution $\Sigma_{N-1}$ is linearly equivalent to $2\Delta_0+2N\Gamma$. ($N>3$) \[delta-genus-one\] Let $(X,\Lambda)$ be a normal Gorenstein stable log surface with $(K_X+\Lambda)^2= p_g(X,\Lambda)-1$. Then $(X,\Lambda)$ is one of the followings: - $X$ is a double cover of $\mathbb{P}^2$. $\Phi_{|K_X+\Lambda|}\colon X\to \mathbb{P}^2$ is the double covering map. The branch curve $B\in |\mathcal{O}_{\mathbb{P}^2}(2k)|$ is a reduced curve which admits curve singularities of lc double-covering type, and $\Lambda\in |{\Phi_{|K_X+\Lambda|}}^*\mathcal{O}_{\mathbb{P}^2}(4-k)|$. ($p_g(X,\Lambda)=3$, and $k=2,3,4$) - $X$ is a quadric in $\mathbb{P}^3$, and $\Lambda\in |\mathcal{O}_{\mathbb{P}^3}(4)|_X|$. ($p_g(X,\Lambda)=6$) - $X$ is $\mathbb{P}^2$ blown up at $k$ points (possible infinitely near), and $\Lambda\in |-2K_X|$. ($p_g(X,\Lambda)=10-k$, and $k=0,1,...,7$) - $X$ is a cone over an elliptic curve of degree $N$ in $\mathbb{P}^{N-1}$, and $\Lambda\in |-2K_X|$. ($p_g(X,\Lambda)=N$) - $X$ is obtained from $\tilde{X}$ by contracting a $(-n)$ curve $G$, where $n=p_g(X,\Lambda)-1$ and $\tilde{X}$ is an elliptic surface (possible singular) with $G$ as the rational zero section. Every elliptic fiber of $\tilde{X}$ is irreducible. $\Lambda$ is the image of a sum of two different elliptic fibers which admit at worst $A_n$ type singularities. - $X$ is a (possibly singular) Del Pezzo surface of degree 1, namely $X$ has at most canonical singularities and elliptic singularities, $-K_X$ is ample and $K_X^2=1$. The curve $\Lambda$ belongs to the system $|-2K_X|$, and $p_a(\Lambda)=2$. $p_g(X,\Lambda)=2$. - $\Lambda=0$, $|K_X+\Lambda|$ is composed with a pencil of genus $2$ curves. $X$ is canonically embedded as a hypersurface of degree 10 in the smooth locus of $\mathbb{P}(1,1,2,5)$. $p_g(X)=2$. In Theorem 1.1(1) and Theorem 4.3(i), $k=1,2,3,4$ should be corrected by $k=2,3,4$. The case $k=1$ is excluded as $\Phi_{|K_X+\Lambda|}$ would be an embedding. Non-normal Gorenstein stable log surfaces {#non-normal} ========================================= Let $(X,\Lambda)$ be a Gorenstein stable log surface which is non-normal. $D$ is the non-normal locus of $X$. Let $C\subset D$ be a subcurve of the non-normal locus of a Gorenstein stable log surface $(X,\Lambda)$ and $\nu\colon:\tilde{X}\to X$ be the partial normalization of $X$ along $C$. Denote $\tilde{C}$ and $\tilde{\Lambda}$ as the proper transformation of $C$ and $\Lambda$. Then $(\tilde{X},\tilde{\Lambda}+\tilde{C})$ is a Gorenstein stable log surface. We first notice that $\nu^*(K_X+\Lambda)=K_{\tilde{X}}+\tilde{\Lambda}+\tilde{C}$ is ample and Cartier. 
Second it is easy to verify that $(\tilde{X},\tilde{\Lambda}+\tilde{C})$ still has Gorenstein slc singularities only by the classification of Gorenstein slc singularities. Therefore $(\tilde{X},\tilde{\Lambda}+\tilde{C})$ is a Gorenstein stable log surface as well. The map $\nu|_{\tilde C}\colon \tilde{C} \to C$ is generically a double cover and thus induces a rational involution $\tau$ on $\tilde{C}$. \[prop: descend section\] If $(X, \Lambda)$ is a non-normal Gorenstein stable log surface and $\nu$, $C$, $\tilde{C}$, $\tau$ are defined as above, then $\nu^*H^0(X, K_X+\Lambda)\subset H^0(\tilde{X}, \nu^*(K_X+\Lambda)))$ is the subspace of those sections $s$ whose restriction to $\tilde{C}$ is $\tau$-anti-invariant. \[separateNonnormalCurve\] It is easy to see that $\nu^*H^0(X, K_X+\Lambda)$ can not separate $\tilde{C}$. Actually, $\Phi_{\nu^*H^0(X, K_X+\Lambda)}|_{\tilde{C}}\circ \tau=\Phi_{\nu^*H^0(X, K_X+\Lambda)}|_{\tilde{C}}$ on $\tilde{C}$. \[fibersection\] Let $(X,\Lambda)$ be a reducible Gorenstein stable log surface such that $X=X_1\cup X_2$ and $C:=X_1\cap X_2$ is the connecting curve. Write $(K_X+\Lambda)|_{X_i}:=(\pi^*(K_X+\Lambda))|_{X_i}=K_{X_i}+C+\Lambda|_{X_i}$. Let $\mathcal{R}_{X_i\to C}\colon H^0(X_i,(K_X+\Lambda)|_{X_i})\to H^0(C,(K_X+\Lambda)|_C)$ be the restriction map. Then we have the following fiber product diagram of vector spaces $$\xymatrix{ H^0(X, K_X+\Lambda) \ar[r]\ar[d] & H^0(X_1,(K_X+\Lambda)|_{X_1})\ar[d]^{\mathcal{R}_{X_1\to C}}\\ H^0(X_2,(K_X+\Lambda)|_{X_2})\ar[r]^{-\mathcal{R}_{X_2\to C}}& H^0(C, (K_X+\Lambda)|_C) }.$$ Moreover, denote $r_{X_i\to C}((K_X+\Lambda)|_{X_i}):=\dim\mathcal{R}_{X_i\to C}(H^0(X_i,(K_X+\Lambda)|_{X_i}))$. We have $$\begin{aligned} h^0(X, K_X+\Lambda)\le& h^0(X_1,(K_X+\Lambda)|_{X_1})+h^0(X_2,(K_X+\Lambda)|_{X_2})\\ &-\max\{r_{X_1\to C}((K_X+\Lambda)|_{X_1}),r_{X_2\to C}((K_X+\Lambda)|_{X_2})\}.\end{aligned}$$ irreducible Non-normal Gorenstein stable surfaces with $\Delta(X,K_X+\Lambda)=1$ {#nnorm} ================================================================================ Let $(X,\Lambda)$ be an irreducible non-normal Gorenstein stable log surface and $\bar{X}$, $\pi$, $D$, $\bar{D}$, $\tau$ defined as before. Since $H^0(X,K_X+\Lambda)\cong \pi^*H^0(X,K_X+\Lambda) \subset H^0(\bar X,\pi^*(K_X+\Lambda))$, we have $\Delta(X,K_X+\Lambda)\ge \Delta(\bar{X},\pi^*(K_X+\Lambda))$, and ’=’ holds if and only if $\pi^*H^0(X,K_X+\Lambda) = H^0(\bar X,\pi^*(K_X+\Lambda))$. \[nonnorm&irred\] Let $(X,\Lambda)$ be an irreducible non-normal Gorenstein stable log surface. Then $\Delta(X,K_X+\Lambda)\ge 1$. We only need to show that the case $\Delta(X,K_X+\Lambda)=\Delta(\bar{X},\pi^*(K_X+\Lambda))=0$ does not occur. First, $h^0(X,K_X+\Lambda)=h^0(\bar{X},\pi^*(K_X+\Lambda))$ implies every section in $H^0(\bar{X},\pi^*(K_X+\Lambda))$ is $\tau$-anti-invariant restricting to $\bar{D}$. Hence $H^0(\bar{X},\pi^*(K_X+\Lambda))$ can not separate points of $\bar{D}$ by Remark \[separateNonnormalCurve\]. However, $\Delta(\bar{X},\pi^*(K_X+\Lambda))=0$ implies $\pi^*(K_X+\Lambda)$ is very ample by Cor \[gepg-2\], therefore it will separate points of $\bar{D}$, a contradiction. \[delta1,1\] Let $(X,\Lambda)$ be an irreducible non-normal Gorenstein stable log surface as before. Assume $\Delta(X,K_X+\Lambda)=\Delta(\bar{X},\pi^*(K_X+\Lambda))=1$. Then - either $X$ is a double cover of $\mathbb{P}^2$ induced by $\Phi_{|K_{X}+\Lambda|}$. The branched curve is $2C+B\in |\mathcal{O}_{\mathbb{P}^2}(2m+2k)|$, where $C,B$ are reduced curves of degree $m,2k$. 
$\Lambda\in |\Phi_{|K_{X}+\Lambda|}^*\mathcal{O}_{\mathbb{P}^2}(4-k-m)|$. $k=2,3$. $0<m\le 4-k$; ($p_g(X,\Lambda)=3$) - or $\Lambda=0$, $K_X^2=(K_{\bar{X}}+\bar D)^2=1$. $(\bar{X},\bar{D})$ is a normal Gorenstein stable log surface as in Thm \[delta-genus-one\] (6). $\bar D$ is a 2 section with $p_a(\bar D)=2$. $\tau$ is induced from the double covering $\bar D \to \mathbb{P}^1$. ($p_g(X)=2$) We see that $\pi^*H^0(X,K_X+\Lambda)=H^0(\bar{X},\pi^*(K_X+\Lambda))$ and $(\bar{X},\bar D+\bar{\Lambda})$ is a stable log surface as in Thm \[delta-genus-one\]. First $\Phi_{|\pi^*(K_X+\Lambda)|}$ is not an embedding by Remark \[separateNonnormalCurve\]. Second $\Phi_{|\pi^*(K_X+\Lambda)|}$ is not composed with a pencil of genus 2 curves since $\bar D+\bar{\Lambda}\not= 0$. If $\Phi_{|\pi^*(K_X+\Lambda)|}$ is composed with a pencil of elliptic curves, $\bar{D}+\bar{\Lambda}$ is either a 2-section or a sum of two elliptic fibers of the fibration map. For the former case, we see that $\bar{\Lambda}=0$ and $\bar{D}$ is a genus 2 curve by Remark \[separateNonnormalCurve\]. Moreover, $(K_{\bar X}+\bar D)^2=1$. $\tau$ is induced from the double map $\bar{D} \to \mathbb{P}^1$. For the latter case, we see that either $\bar{D}=F_1+F_2$ and $\bar{\Lambda}=0$, or $\bar{D}=F_1$ and $\bar{\Lambda}\not=0$, where $F_i$ is a fiber of the fibration map. Both cases can be excluded since the $F_1\cap F_2$ or $F_1\cap\bar{\Lambda}$ on $\bar{X}$ can not be glued into a Gorenstein slc singularity on $X$. If $\bar{X}$ is a double cover of $\mathbb{P}^2$, then $\bar{D}+\bar{\Lambda}\in |\Phi_{|K_{\bar{X}}+\bar{D}+\bar{\Lambda}|}^*\mathcal{O}_{\mathbb{P}^2}(4-k)|$, $k=2,3$. Remark \[separateNonnormalCurve\] indicates $\bar{D}=\Phi_{|K_{\bar{X}}+\bar{D}+\bar{\Lambda}|}^{-1}(C)$, where $C$ is a curve on $\mathbb{P}^2$. Denote $m$ as the degree of $C$. Then $\bar{\Lambda}\in |\Phi_{|K_{\bar{X}}+\bar{D}+\bar{\Lambda}|}^*\mathcal{O}_{\mathbb{P}^2}(4-k-m)|$. We have the following commutative diagram: $$\xymatrix{ \bar{D}\ar@{^{(}->}[r]\ar[d] & \bar{X}\ar[d]^{\pi}\ar[rr]^{\Phi_{|K_{\bar{X}}+\bar{D}+\bar{\Lambda}|}} && \mathbb{P}^2\\ D\ar@{^{(}->}[r] & X \ar[rru]_{\Phi_{|K_{X}+\Lambda|}} }.$$ We see that $X$ is also a double cover of $\mathbb{P}^2$ induced by $\Phi_{|K_{X}+\Lambda|}$. The branch curve is $2C+B\in |\mathcal{O}_{\mathbb{P}^2}(2m+2k)|$. $\Lambda\in |\Phi_{|K_{X}+\Lambda|}^*\mathcal{O}_{\mathbb{P}^2}(4-k-m)|$. Next we consider those irreducible non-normal Gorenstein stable log surfaces $(X,\Lambda)$ with $\Delta(X,K_X+\Lambda)=1$ and $\Delta(\bar{X},\pi^*(K_X+\Lambda))=0$. We first notice that $\pi^*H^0(X,K_X+\Lambda)\subset H^0(\bar{X},\pi^*(K_X+\Lambda))$ is of codimension one. Hence it has a base point $c\in \mathbb{P}(H^0(\bar{X},\pi^*(K_X+\Lambda)))$. We have the following commutative diagram: $$\xymatrix{ \bar{X} \ar@{^{(}->}[rrr]^<(0.3){\Phi_{|\pi^*(K_X+\Lambda)|}}\ar[d]^{\pi} & & & \mathbb{P}(H^0(\bar{X},\pi^*(K_X+\Lambda)))\ar@{-->}[d]^{pr_c}\\ X\ar@{-->}[rrr]^<(0.3){\Phi_{|K_X+\Lambda|}}& & & \mathbb{P}(\pi^*H^0(X,K_X+\Lambda)) }.$$ We regard $\Phi_{|\pi^*(K_X+\Lambda)|}$ as an inclusion. To describe $pr_c|_{\bar{X}}$, we use the following theorem (see [@Ber06 Thm 2.1]): \[linearsectionthm\] Let $X$ be a non-degenerate irreducible subvariety of $\mathbb{P}^r$ and $L$ be a linear subspace of $\mathbb{P}^r$ of dimension $s\le r$ such that $L\cap X$ is a 0-dimensional scheme $\zeta$. Then $\mathrm{length}\, \zeta \le \Delta(X,\mathcal{O}(1))+s+1$. 
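In the applications below, besides the case of a line treated in the next corollary, we will also use the case $s=2$: if $\Delta(X,\mathcal{O}(1))=0$, a plane meeting $X$ in a $0$-dimensional scheme does so in a scheme of length at most $$\Delta(X,\mathcal{O}(1))+s+1=0+2+1=3.$$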
\[lineIntersecting\] Let $X$ be a non-degenerate irreducible subvariety of $\mathbb{P}^r$ with $\Delta(X,\mathcal{O}(1))=0$ and let $L$ be a line intersecting $X$ along a 0-dimensional scheme $\zeta$. Then $\mathrm{length}\, \zeta\le 2$. Let $X$ be a non-degenerate irreducible subvariety of $\mathbb{P}^N$ with $\Delta(X,\mathcal{O}(1))=0$ and $c\in \mathbb{P}^N$ be a point outside $X$. Denote by $pr_c\colon \mathbb{P}^N\dashrightarrow \mathbb{P}^{N-1}$ the projection from $c$. Assume $pr_c|_X$ is birational and is not an isomorphism. Denote by $W$ the image of $pr_c|_X$. Then $W$ is non-normal and $pr_c|_X\colon X\to W$ is a normalization map. The non-normal locus $D$ of $W$ is a line in $\mathbb{P}^{N-1}$. The pre-image $\bar{D}:= pr_c|_X^{-1}(D)$ is described as follows: - if $X$ is $\mathbb{P}^2$ embedded in $\mathbb{P}^5$, then $\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(1)|$. - if $X$ is a cone $C_{N-1}\hookrightarrow \mathbb{P}^N$, then $\bar{D}$ is a sum of two rulings. - if $X$ is $\Sigma_d\hookrightarrow \mathbb{P}^N$, either $\bar{D}\in |\Delta_0+\Gamma|$, $N=d+3$, or $\bar{D}=\Delta_0$, $N=d+5$. $pr_c|_X\colon X\to W$ is a normalization map by Cor \[lineIntersecting\]. We show that $D$ is a line in $\mathbb{P}^{N-1}$ by contradiction. Assume $D$ is not a line; then there are two points $p,q\in D$ such that the line $L_{p,q}$ passing through $p,q$ is not contained in $D$. Let $H_{p,q}\subset \mathbb{P}^N$ be the pre-image of $L_{p,q}$ with respect to $pr_c$. Then $H_{p,q}\cap X$ has length at least $4$, which contradicts Thm \[linearsectionthm\]. Therefore $D$ is a line in $\mathbb{P}^{N-1}$. Finally, we describe $\bar{D}$. For the case where $X$ is a cone $C_{N-1}$, $\bar{D}$ must be a sum of two rulings. This can be shown by considering the plane $H_{v, L}\subset \mathbb{P}^{N}$ containing $v,L$, where $v$ is the vertex of $C_{N-1}$ and $L$ is a secant line of $X$ passing through $c$. For the case where $X$ is $\mathbb{P}^2$ embedded in $\mathbb{P}^5$, we have $\bar{D}\cdot \mathcal{O}_{X}(1)=2$ since $D$ is a line in $\mathbb{P}^{N-1}$. Hence $\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(1)|$. For the case where $X$ is $\Sigma_d\hookrightarrow \mathbb{P}^N$, similarly to the former case we have $\bar{D}\cdot \mathcal{O}_{X}(1)=\bar{D}\cdot(\Delta_0+\frac{N+d-1}{2}\Gamma)=2$. Hence we have either $\bar{D}\in |\Delta_0+\Gamma|$, $N=d+3$, or $\bar{D}=\Delta_0$, $N=d+5$. We first consider the cases where $c\in \bar{X}$. Case I) $\bar{X}$ is $\mathbb{P}^2$ and $\pi^*(K_X+\Lambda)=\mathcal{O}_{\mathbb{P}^2}(1)$. Then $\bar{\Lambda}+\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(4)|$ and $pr_c|_{\bar{X}}$ is a fibration over $\mathbb{P}^1$. By the classification of Gorenstein slc singularities, we see that $\bar{\Lambda}\cdot\bar{D}$ is even. Hence we have either $\bar{\Lambda},\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(2)|$ or $\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(4)|$ and $\bar{\Lambda}=0$. Examples will be given later. We note further that $c\not\in\bar{D}$, otherwise it cannot be glued into a Gorenstein slc singularity. Case II) $\bar{X}$ is $C_{N-1}$ and $c$ is the vertex. We see that $\bar{\Lambda}$ is two rulings and $\bar{D}\in \mathcal{O}_{C_{N-1}}(2)$. Case III) $\bar{X}$ is $C_{N-1}$ and $c$ is not the vertex. We show that this does not occur. By Cor \[lineIntersecting\] $pr_c|_{\bar{X}}$ is an isomorphism outside the ruling $l$ passing through $c$. Hence $\bar{D}$ must be $l$. However this is impossible as the vertex of $C_{N-1}$ would not be glued into a Gorenstein slc singularity.
Case IV) $\bar{X}$ is $\Sigma_{d}$. We show that this does not occur either. As in Case III), we see that $\bar{D}$ should be a ruling $\Gamma$ passing through $c$. Hence $\bar{D}\cdot \bar{\Lambda}=3$, which is impossible. Case V) $\bar{X}$ is $\mathbb{P}^2$ embedded in $\mathbb{P}^5$. We show that this does not occur. By Cor \[lineIntersecting\], $pr_c|_{\bar{X}}$ is an isomorphism outside $c$, which is impossible. Next we consider the cases where $c\not\in \bar{X}$. We see $pr_c|_{\bar{X}}$ is a morphism of degree at most two by Cor \[lineIntersecting\] and it contracts no curve on $\bar{X}$. Case VI) $pr_c|_{\bar{X}}$ is a double cover. In this case $\bar{X}$ is a quadric in $\mathbb{P}^3$. Denote $pr_c|_{\bar{X}}$ by $\delta$. The branch curve $B\in |\mathcal{O}_{\mathbb{P}^2}(2)|$. $K_{\bar{X}}+\bar{D}+\bar{\Lambda}=\mathcal{O}_{\bar{X}}(1)=\delta^*\mathcal{O}_{\mathbb{P}^2}(1)$. Hence $\bar{D}+\bar{\Lambda}\in |\delta^*\mathcal{O}_{\mathbb{P}^2}(3)|=|\mathcal{O}_{\bar{X}}(3)|$. $\bar{D}$ is the pre-image of a curve $C$ on $\mathbb{P}^2$ under $pr_c|_{\bar{X}}$. Denote by $m$ the degree of $C$; then $m\le 3$. We have the following commutative diagram: $$\xymatrix{ \bar{D}\ar@{^{(}->}[r] &\bar{X}\ar[d]^{\pi}\ar[drr]^{pr_c|_{\bar{X}}}\ar@{^{(}->}[rr]^{\Phi_{|K_{\bar{X}}+\bar{D}+\bar{\Lambda}|}} && \mathbb{P}^3 \ar@{-->}[d]^{pr_c}\\ & X \ar[rr]_{\Phi_{|K_{X}+\Lambda|}} && \mathbb{P}^2 }.$$ We see that $X$ is also a double cover of $\mathbb{P}^2$ induced by $\Phi_{|K_{X}+\Lambda|}$. The branch curve is $2C+B$. $\Lambda\in |\Phi_{|K_{X}+\Lambda|}^*\mathcal{O}_{\mathbb{P}^2}(3-m)|$. If $pr_c|_{\bar{X}}$ is birational, it must be a normalisation map. Its image must be $X$ and $\pi=pr_c|_{\bar{X}}$. Case VII) $\bar{X}$ is $\mathbb{P}^2$ embedded in $\mathbb{P}^5$. The non-normal locus $D$ is a line in $\mathbb{P}^5$ and $\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(1)|$. $\bar{\Lambda}\in |\mathcal{O}_{\mathbb{P}^2}(4)|$. Case VIII) $\bar{X}$ is $C_{N-1}$, $N>3$. $\bar{D}$ is two rulings. $\bar{\Lambda}$ is linearly equivalent to $2H_{\infty}$ where $H_{\infty}$ is a hyperplane. Case IX) $\bar{X}$ is $\Sigma_{d}$ embedded in $\mathbb{P}^N$, $N>3$. Either $N=d+3$, $\bar{D}\in |\Delta_0+\Gamma|$ and $\bar{\Lambda}\in |2\Delta_0+(2d+2)\Gamma|$; or $N=d+5$, $\bar{D}=\Delta_0$ and $\bar{\Lambda}\in |2\Delta_0+(2d+4)\Gamma|$. We summarize the above results in the following theorem: \[delta1,0\] Let $(X,\Lambda)$ be an irreducible non-normal Gorenstein stable log surface as before. Assume $\Delta(X,K_X+\Lambda)=1$ and $\Delta(\bar{X},\pi^*(K_X+\Lambda))=0$. Then there are the following possibilities: - $\bar{X}$ is $\mathbb{P}^2$. $\bar{\Lambda}\in |\mathcal{O}_{\mathbb{P}^2}(2)|$ and $\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(2)|$. Moreover, $c\not\in \bar{D}$. ($p_g(X,\Lambda)=2$) - $\bar{X}$ is $\mathbb{P}^2$. $\bar{\Lambda}=0$ and $\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(4)|$. Moreover, $c\not\in \bar{D}$. ($p_g(X,\Lambda)=2$) - $\bar{X}$ is $C_{N-1}$. $\bar{\Lambda}$ is two rulings and $\bar{D}\in \mathcal{O}_{C_{N-1}}(2)$. $c$ is the vertex of $C_{N-1}$. ($p_g(X,\Lambda)=N$) - $\bar{X}$ is a quadric in $\mathbb{P}^3$. $\bar{D}\in |\mathcal{O}_{\bar{X}}(m)|$ and $\bar{\Lambda} \in |\mathcal{O}_{\bar{X}}(3-m)|$. $X$ is a double cover of $\mathbb{P}^2$ induced by $\Phi_{|K_{X}+\Lambda|}$. The branch curve is $2C+B$, where $C$,$B$ are reduced curves of degree $m$, $2$. $\Lambda\in |\Phi_{|K_{X}+\Lambda|}^*\mathcal{O}_{\mathbb{P}^2}(3-m)|$. $0<m \le 3$.
($p_g(X,\Lambda)=3$) - $X$ is the projection image of a Veronese embedding of $\mathbb{P}^2$. $\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(1)|$ and $\bar{\Lambda}\in |\mathcal{O}_{\mathbb{P}^2}(4)|$. ($p_g(X,\Lambda)=5$) - $X$ is the projection image of $C_{N-1}\subset\mathbb{P}^N$, $N>3$. $\bar{D}$ is a sum of two lines passing the vertex, and $\bar{\Lambda}\in \mathcal{O}_{C_{N-1}}(2)$. ($p_g(X,\Lambda)=N$) - $X$ is the projection image of $\Sigma_d$ embedded in $\mathbb{P}^N$, $N>3$. $D\subset X$ is a line. We have either $N=d+3$, $\bar{D}\in |\Delta_0+\Gamma|$ and $\bar{\Lambda}\in |2\Delta_0+(2d+2)\Gamma|$, or $N=d+5$, $\bar{D}=\Delta_0$ and $\bar{\Lambda}\in |2\Delta_0+(2d+4)\Gamma|$. ($p_g(X,\Lambda)=N$) Let $\bar{D}$ be a smooth quadric in $\mathbb{P}^2$ defined by $x^2+y^2+z^2=0$. $\tau$ acts on $\bar{D}$ by $x\mapsto -x, y \mapsto -y, z\mapsto z$. Let $\bar{\Lambda}=L_{x=0}+L_{y=0}$, where $L_{x=0}$, $L_{y=0}$ are two lines. Gluing $\mathbb{P}^2$ along $\bar{D}$ by $\tau$ we get a non-normal Gorenstein stable log surface $(X,\Lambda)$ with $(K_X+\Lambda)^2=p_g(X,\Lambda)-1=1$. Let $\bar{D}$ be a smooth quartic in $\mathbb{P}^2$ defined by $x^4+y^4+z^4=0$. $\tau$ acts on $\bar{D}$ by $x\mapsto -x, y \mapsto -y, z\mapsto z$. Gluing $\mathbb{P}^2$ along $\bar{D}$ by $\tau$ we get a non-normal Gorenstein stable surface $X$ with $K_X^2=p_g-1=1$. \[irrnonnormalstabledelta1\] Let $X$ be an irreducible non-normal Gorenstein stable surface with $\Delta(X,K_X)=1$. Then $X$ is one of the followings: - $X$ is a double cover of $\mathbb{P}^2$. The branch curve is $2C+B$, where $C$,$B$ are reduced curves of degree $4-k$, $2k$. $k=2,3$. ($p_g(X)=3$) - $X$ is obtained from a log surface $(\bar{X},\bar{D})$ by gluing the 2-section $\bar{D}$. $(\bar{X},\bar{D})$ is a normal Gorenstein stable log surface as in Thm \[delta-genus-one\] (6). ($p_g(X)=2$) - $\bar{X}$ is $\mathbb{P}^2$. $\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(4)|$. ($p_g(X)=2$) - $\bar{X}$ is a quadric in $\mathbb{P}^3$. $\bar{D}\in |\mathcal{O}_{\bar{X}}(3)|$. ($p_g(X)=3$) reducible Gorenstein stable log surfaces with $\Delta(X,K_X+\Lambda)=1$ {#reducible-stable} ======================================================================= In this section we consider reducible Gorenstein stable log surfaces. They are glued by irreducible ones along some connecting curves. \[restsecions\] Let $X$ be a connected $S_2$ scheme of pure dimension, $\mathcal{L}$ be an invertible sheaf such that $\dim \mathrm{Bs} |\mathcal{L}|<\dim X-1$ and $C$ be a subscheme of codimension 1. Then $$\begin{aligned} r_{X\to C}(\mathcal{L})=\dim <\Phi_{|\mathcal{L}|}(C)>+1,\end{aligned}$$ where $<\Phi_{|\mathcal{L}|}(C)>$ is the projective subspace of $|\mathcal{L}|^*$ spanned by $\Phi_{|\mathcal{L}|}(C)$. Denote $\mathbb{P}:=|\mathcal{L}|^*$. We have the following commutative diagram: $$\xymatrix{ H^0(\mathbb{P},\mathcal{O}_{\mathbb{P}}(1))\ar[rr]^<(0.2){\mathcal{R}_{\mathbb{P}\to \Phi_{|\mathcal{L}|}(C)}}\ar[d]^{\cong}_{\Phi_{|\mathcal{L}|}^*} & & H^0(\Phi_{|\mathcal{L}|}(C),\mathcal{O}_{\mathbb{P}}(1)|_{\Phi_{|\mathcal{L}|}(C)})\ar@{^{(}->}[d]_{\Phi_{|\mathcal{L}|}^*} \\ H^0(X,\mathcal{L})\ar[rr]^{\mathcal{R}_{X\to C}}& & H^0(C,\mathcal{L}|_C) }.$$ Therefore $r_{X\to C}(\mathcal{L})=r_{\mathbb{P}\to \Phi_{|\mathcal{L}|}(C)}(\mathcal{O}_{\mathbb{P}}(1))=\dim <\Phi_{|\mathcal{L}|}(C)>+1$. 
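Returning to the first gluing example above (gluing $\mathbb{P}^2$ along the conic $\bar{D}=\{x^2+y^2+z^2=0\}$ by $\tau$), the claimed invariants can be checked directly; the following is only a sketch, using Prop \[prop: descend section\]. One has $$\pi^*(K_X+\Lambda)=K_{\mathbb{P}^2}+\bar{D}+\bar{\Lambda}=\mathcal{O}_{\mathbb{P}^2}(-3+2+2)=\mathcal{O}_{\mathbb{P}^2}(1), \qquad (K_X+\Lambda)^2=1,$$ and since $\tau^*x=-x$, $\tau^*y=-y$, $\tau^*z=z$, the sections of $\mathcal{O}_{\mathbb{P}^2}(1)$ whose restriction to $\bar{D}$ is $\tau$-anti-invariant are exactly those in $\langle x,y\rangle$. Hence $p_g(X,\Lambda)=2$ and $\Delta(X,K_X+\Lambda)=1-2+2=1$, consistently with the first case of Thm \[delta1,0\].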
Still we call a curve $C$ on a demi-normal scheme $X$ a *[line]{} if the proper transformation $\bar{C}$ is a line on the normalization $\bar{X}$ of $X$.* \[restdim\] Let $(X,\Lambda)$ be an irreducible Gorenstein stable log surface with a reduced curve $C$ on it. Then - if $X$ is normal and $\Delta(X,K_X+\Lambda)=0$, then $r_{X\to C}(K_X+\Lambda)\ge2$. Moreover ’=’ holds if and only if $C$ is a line on $X$. - if $X$ is normal and $\Delta(X,K_X+\Lambda)=1$, then $r_{X\to C}(K_X+\Lambda)\ge1$. Moreover, ’=’ holds if and only if $|K_X+\Lambda|$ is composed with a pencil and $C$ is a fiber on $X$. If $|K_X+\Lambda|$ is not composed with a pencil, then $r_{X\to C}(K_X+\Lambda)\ge2$ and ’=’ holds if and only if $\Phi_{|K_X+\Lambda|}(C)$ is a line in $|K_X+\Lambda|^*$. - if $X$ is non-normal and $\Delta(X,K_X+\Lambda)=1$, then $r_{X\to C}(K_X+\Lambda)\ge1$. Moreover, ’=’ holds if and only if $|K_X+\Lambda|$ has a base point, and $C$ is a line passing through the base point. \(i) and (ii) follows from Thm \[gepg-2\], Thm \[delta-genus-one\] and Lemma \[restsecions\]. To prove (iii), we note that $H^0(X,K_X+\Lambda)\cong \pi^* H^0(X,K_X+\Lambda)$, which corresponds to the space of hyperplane sections passing through the base point $c$ of $\pi^* H^0(X,K_X+\Lambda)$ in $ \mathbb{P}:=|\pi^*(K_X+\Lambda)|^*$. Denote $pr_c\colon \mathbb{P}\dashrightarrow \mathbb{P}':=|K_X+\Lambda|^*$ as the projection from the point $c$. We have the following commutative diagram: $$\xymatrix{ \bar{C}\ar@{^{(}->}[r]\ar[d] & \bar{X}\ar@{^{(}->}[rr]^{\Phi_{|\pi^*(K_X+\Lambda)|}}\ar@{-->}[rrd]^{\Phi'}\ar[d]_{\pi}& & \mathbb{P}\ar@{-->}[d]^{pr_c}\\ C\ar@{^{(}->}[r] &X\ar@{-->}[rr]_{\Phi_{|K_X+\Lambda|}} &&\mathbb{P}', }$$ where $\Phi'$ is a map defined by the partial linear system $\pi^*H^0(X,K_X+\Lambda)$. We regard $\Phi_{|\pi^*(K_X+\Lambda)|}$ as an inclusion. By Lemma \[restsecions\], $r_{X\to C}(K_X+\Lambda)\ge1$. If ’=’ holds, $\Phi_{|K_X+\Lambda|}(C)$ is a point. Hence $pr_c(\bar{C})$ is a point, which implies $\bar{C}$ is a line passing $c$. Hence $C$ is a line passing through $\pi(c)$, which is the base point of $|K_X+\Lambda|$. \[2comps\] Let $(X,\Lambda)$ be a connected Gorenstein stable log surface. Assume $X=X_1\cup X_2$, where $X_1$ is connected and $X_2$ is irreducible. Denote $C:=X_1\cap X_2$ as the connecting curve of $X_1$ and $X_2$. Then: - $\Delta(X,K_X+\Lambda)\ge \Delta(X_1,(K_X+\Lambda)|_{X_1})$. - if ’=’ holds and $\Delta(X_2,(K_X+\Lambda)|_{X_2})\le 1$, then - either $X_2$ is non-normal and $\Delta(X_2,(K_X+\Lambda)|_{X_2})=1$. $r_{X_2\to C}((K_X+\Lambda)|_{X_2})=1$. $C$ is a line on $X_2$ passing through the base point of $|(K_X+\Lambda)|_{X_2}|$. Moreover, $r_{X_1\to C}((K_X+\Lambda)|_{X_1})\le 1$. - $X_2$ is normal and $\Delta(X_2,(K_X+\Lambda)|_{X_2})=0$, $|(K_X+\Lambda)|_{X_2}|$ is very ample. $r_{X_2\to C}((K_X+\Lambda)|_{X_2})=2$. $C$ is a line on $X_2$. Moreover, $r_{X_1\to C}((K_X+\Lambda)|_{X_1})\le 2$; - or $X_2$ is normal and $\Delta(X_2,(K_X+\Lambda)|_{X_2})=1$, $|(K_X+\Lambda)|_{X_2}|$ is composed with a pencil of elliptic curves. $r_{X_2\to C}((K_X+\Lambda)|_{X_2})=1$. $C$ is a fiber on $X_2$. Moreover, $r_{X_1\to C}((K_X+\Lambda)|_{X_1})\le 1$. $(K_X+\Lambda)^2=(K_X+\Lambda)|_{X_1}^2+(K_X+\Lambda)|_{X_2}^2$ together with Cor \[fibersection\] gives $$\label{deltaIneq} \begin{split} \Delta(X,K_X+\Lambda) & \ge \Delta(X_1,(K_X+\Lambda)|_{X_1})+\Delta(X_2,(K_X+\Lambda)|_{X_2})\\ &+\max\{r_{X_1\to C}((K_X+\Lambda)|_{X_1}),r_{X_2\to C}((K_X+\Lambda)|_{X_2})\}-2. 
\end{split}$$ By Lemma \[restdim\] we see that $\Delta(X_2,(K_X+\Lambda)|_{X_2})+r_{X_2\to C}((K_X+\Lambda)|_{X_2})\ge 2$. Thus $\Delta(X,K_X+\Lambda) \ge \Delta(X_1,(K_X+\Lambda)|_{X_1})$. If ’=’ holds, then $$\begin{aligned} \max\{r_{X_1\to C}((K_X+\Lambda)|_{X_1}),r_{X_2\to C}((K_X+\Lambda)|_{X_2})\}= 2- \Delta(X_2,(K_X+\Lambda)|_{X_2}). \end{aligned}$$ Therefore, by Lemma \[restdim\], $r_{X_2\to C}((K_X+\Lambda)|_{X_2})=2$, if $\Delta(X_2,(K_X+\Lambda)|_{X_2})=0$. $r_{X_2\to C}((K_X+\Lambda)|_{X_2})=1$, if $\Delta(X_2,(K_X+\Lambda)|_{X_2})=1$. Then other statements of (ii) follow from Lemma \[restdim\]. We then have some corollaries. \[genus-like\] Let $(X,\Lambda)$ be a connected Gorenstein stable log surface. Assume $Y\subset X$ is a connected subsurface. Then we have $\Delta(X,K_X+\Lambda)\ge \Delta(Y,(K_X+\Lambda)|_{Y})$. If $X_i\subset X$ is an irreducible surface connected to $Y$, $\Delta(Y,(K_X+\Lambda)|_{Y})\le \Delta(Y\cup X_i,(K_X+\Lambda)|_{Y\cup X_i})$ by Lemma \[2comps\]. Then by induction hypothesis, we have $\Delta(Y,(K_X+\Lambda)|_{Y})\le \Delta(X,K_X+\Lambda)$. Let $(X,\Lambda)$ be a reducible Gorenstein stable log surface. Write $X=\bigcup X_i$. We say that $\Phi:=\Phi_{K_X+\Lambda}$ separates $X_i$ and $X_j$, if $\Phi(X_i\setminus X_i\cap X_j)\cap \Phi(X_j\setminus X_i\cap X_j)=\emptyset$. \[globalsection\] Let $(X,\Lambda)$ be a connected reducible Gorenstein stable log surface with $X=X_1\cup X_2$ such that $X_1$ is connected and $X_2$ is irreducible. Then - We have the following commutative diagram: $$\xymatrix{ 0 \ar[r] & \ker \mathcal{R}_{X\to X_1} \ar[r]\ar[d]^{\cong} & H^0(X,K_X+\Lambda) \ar[r]^{\mathcal{R}_{X\to X_1}}\ar[d]^{\mathcal{R}_{X\to X_2}} &H^0(X_1,(K_X+\Lambda)|_{X_1})\ar[d]^{\mathcal{R}_{X_1\to X_1\cap X_2}}\\ 0 \ar[r] & \ker\mathcal{R}_{X_2\to X_1\cap X_2} \ar[r] & H^0(X_2,(K_X+\Lambda)|_{X_2}) \ar[r]^{\mathcal{R}_{X_2\to X_1\cap X_2}} &H^0(X_1\cap X_2,(K_X+\Lambda)|_{X_1\cap X_2}) . }$$ - If $\mathrm{im}\, \mathcal{R}_{X_1\to X_1\cap X_2}\subset \mathrm{im} \, \mathcal{R}_{X_2\to X_1\cap X_2}$, then $\mathcal{R}_{X\to X_1}$ is surjective. - If $(K_X+\Lambda)|_{X_2}$ is very ample and $X_1\cap X_2$ is a line on $X_2$, then $\mathcal{R}_{X\to X_1}$ is surjective and $\Phi_{|K_X+\Lambda)|}$ separates $X_1$ and $X_2$. - If each $(K_X+\Lambda)|_{X_i}$ is very ample and $X_1\cap X_2$ is a line on each $X_i$, then $K_X+\Lambda$ is very ample. \(i) and (ii) is obvious by chasing the diagram. For (iii), $X_1\cap X_2$ is a line implies $\mathcal{R}_{X_2\to X_1\cap X_2}$ is surjective. Then any section in $H^0(X_1,(K_X+\Lambda)|_{X_1})$ can be extended into a section in $H^0(X,K_X+\Lambda)$. Therefore $\mathcal{R}_{X\to X_1}$ is surjective. Next $\ker \mathcal{R}_{X_2\to X_1\cap X_2}$ is nontrivial and its base part is the line $X_1\cap X_2$. Then $\ker \mathcal{R}_{X\to X_1}$ is nontrivial and its base part is $X_1$. Therefore $\Phi_{|K_X+\Lambda)|}$ separates $X_1$ and $X_2$. For (iv) we see that $\mathcal{R}_{X\to X_i}$ is surjective and the kernel is nontrivial by (ii). Then we have plenty of sections to separate points and tangents, which implies $K_X+\Lambda$ is very ample. \[equaldelta\] Let $(X,\Lambda)$ be a log surface as in Lemma \[2comps\]. We assume further $X_1$ is irreducible and $\Delta(X,K_X+\Lambda)= \Delta(X_i,(K_X+\Lambda)|_{X_i})\le 1$ for $i=1,2$. Then - if $\Delta(X,K_X+\Lambda)=0$, then $X_1\cap X_2$ is a line on each $X_i$. Moreover, $K_X+\Lambda$ is very ample. 
- if $\Delta(X,K_X+\Lambda)=1$, then - either $X_1$ and $X_2$ are both normal. Each $|(K_X+\Lambda)|_{X_i}|$ is composed with a pencil. $X_1\cap X_2$ is a fiber on each $X_i$. $|K_X+\Lambda|$ is composed with a pencil as well. - or $X_1$ and $X_2$ are both non-normal. The base points of $|(K_X+\Lambda)|_{X_i}|$ coincide and give the unique base point of $|K_X+\Lambda|$. $X_1\cap X_2$ is a line passing through the base point of $|(K_X+\Lambda)|_{X_i}|$ on each $X_i$. \(i) follows from Lemma \[2comps\] and Lemma \[globalsection\]. For (ii), applying Lemma \[2comps\] we see that either $X_i$ is non-normal or $(K_X+\Lambda)|_{X_i}$ is composed with a pencil. It is easy to see that either $X_1$, $X_2$ are both non-normal or $(K_X+\Lambda)|_{X_1}$, $(K_X+\Lambda)|_{X_2}$ are both composed with a pencil of elliptic curves, since the geometric genus of $X_1\cap X_2$ is the same whether it is computed on $X_1$ or on $X_2$. If $(K_X+\Lambda)|_{X_1}$, $(K_X+\Lambda)|_{X_2}$ are both composed with a pencil of elliptic curves, then by Lemma \[2comps\] each $(K_X+\Lambda)|_{X_i}$ is composed with a pencil and $X_1\cap X_2$ is a fiber on each $X_i$. Therefore $|K_X+\Lambda|$ is composed with a pencil as well. If $X_1$ and $X_2$ are both non-normal, then by Lemma \[restdim\], $X_1\cap X_2$ is a line passing through the base point of $|(K_X+\Lambda)|_{X_i}|$ on each $X_i$. These two base points must coincide, otherwise no nontrivial section in $H^0(X_i,(K_X+\Lambda)|_{X_i})$ can be glued into a section of $H^0(X,K_X+\Lambda)$. \[decrease1\] Let $(X,\Lambda)$ be a connected Gorenstein stable log surface which has two irreducible components $X_1$, $X_2$. Assume further $\Delta(X_1,(K_X+\Lambda)|_{X_1})=\Delta(X_2,(K_X+\Lambda)|_{X_2})=0$ and $\Delta(X,K_X+\Lambda)=1$. Then $$r_{X_1\to X_1\cap X_2}((K_X+\Lambda)|_{X_1})=r_{X_2\to X_1\cap X_2}((K_X+\Lambda)|_{X_2})=3.$$ By (\[deltaIneq\]), we have $\max\{r_{X_1\to X_1\cap X_2}((K_X+\Lambda)|_{X_1}),r_{X_2\to X_1\cap X_2}((K_X+\Lambda)|_{X_2})\}\le 3$. Moreover, $r_{X_i\to X_1\cap X_2}((K_X+\Lambda)|_{X_i})\ge 2$ by Lemma \[restdim\]. Thus there must be one $i$ with $r_{X_i\to X_1\cap X_2}((K_X+\Lambda)|_{X_i})=3$. Now we suppose, for a contradiction, that $r_{X_1\to X_1\cap X_2}((K_X+\Lambda)|_{X_1})=2$ and $r_{X_2\to X_1\cap X_2}((K_X+\Lambda)|_{X_2})=3$. Then $X_1\cap X_2$ is a line on $X_1$ and not a line on $X_2$. Thus nonzero elements of $\mathrm{im}\,\mathcal{R}_{X_i\to X_1\cap X_2}$ have different degrees. Hence only those sections in $H^0(X_i,(K_X+\Lambda)|_{X_i})$ vanishing on $X_1\cap X_2$ can be glued together. Therefore $$\begin{aligned} p_g(X,\Lambda)&=\dim\ker \mathcal{R}_{X_1\to X_1\cap X_2}+\dim \ker \mathcal{R}_{X_2\to X_1\cap X_2}\\ &\le p_g(X_1,(K_X+\Lambda)|_{X_1})-2+p_g(X_2,(K_X+\Lambda)|_{X_2})-2\\ &\le (K_X+\Lambda)^2, \end{aligned}$$ a contradiction. This completes the proof. \[diffdelta\] Let $(X,\Lambda)$ be a connected Gorenstein stable log surface. Assume $X=X_1\cup X_2$, where $X_i$ is irreducible. Assume further $\Delta(X,K_X+\Lambda)= \Delta(X_1,(K_X+\Lambda)|_{X_1})=1$ and $\Delta(X_2,(K_X+\Lambda)|_{X_2})=0$. Then - either each $(K_X+\Lambda)|_{X_i}$ is very ample and $X_1\cap X_2$ is a line on each $X_i$. Moreover, $K_X+\Lambda$ is very ample; - or $X_1$ is non-normal. $X_1\cap X_2$ is a line on each $X_i$. First by Lemma \[2comps\], $X_1\cap X_2$ will be a line on $X_2$ and $r_{X_1\to X_1\cap X_2}((K_X+\Lambda)|_{X_1})=1$ or $2$. For the case $X_1$ is normal, we claim that $r_{X_1\to X_1\cap X_2}((K_X+\Lambda)|_{X_1})\not =1$.
Otherwise, by Lemma \[restdim\], $|(K_X+\Lambda)|_{X_1}|$ is composed with a pencil of elliptic curves and the connecting curve $X_1\cap X_2$ will be an elliptic fiber on $X_1$. However, on $X_1$ the connecting curve $X_1\cap X_2$ has geometric genus $0$. This is impossible. Therefore, $r_{X_1\to X_1\cap X_2}((K_X+\Lambda)|_{X_1})=2$, $(K_X+\Lambda)|_{X_1}$ is not composed with a pencil, and the log canonical image of $X_1\cap X_2$ is a line. Next we show that $X_1$ is not a double cover of $\mathbb{P}^2$. Otherwise, on $X_1$, $X_1\cap X_2$ is a double cover of $\mathbb{P}^1$. Then nonzero sections in the image of $\mathcal{R}_{X_i\to X_1\cap X_2}$ have different degrees. Hence only those sections of $H^0(X_i, (K_X+\Lambda)|_{X_i})$ vanishing on $X_1\cap X_2$ can be glued together. Therefore, $$\begin{aligned} p_g(X,\Lambda)&=\dim\ker \mathcal{R}_{X_1\to X_1\cap X_2}+\dim \ker \mathcal{R}_{X_2\to X_1\cap X_2}\\ &=p_g(X_1,(K_X+\Lambda)|_{X_1})+p_g(X_2,(K_X+\Lambda)|_{X_2})-4\\ &< (K_X+\Lambda)^2, \end{aligned}$$ a contradiction. Therefore $(K_X+\Lambda)|_{X_1}$ is very ample. $X_1\cap X_2$ is a line on $X_1$. The other statements follow. For the non-normal case, we take a partial normalization $\nu\colon \tilde{X}\to X$ along the non-normal locus on $X_1$. Then $\tilde{X}=\tilde{X}_1\cup X_2$, where $\tilde{X}_1$ is the normalization of $X_1$. It is easy to check that $\Delta(\tilde{X},\nu^*(K_X+\Lambda))=0$. Then we see that $X_1\cap X_2$ is a line on each $X_i$. This completes the proof. Let $X_1$ be a $\mathbb{P}^2$ blown up at $k$ distinct points as in Thm \[delta-genus-one\] (3). Each $E_i$ is a line w.r.t. $-K_{X_1}$. For a general $\Lambda_1=E_1+...+E_k+B\in |-2K_{X_1}|$, the curve $\Lambda_1$ is nodal and $E_i$ intersects $B$ at 3 distinct points. Let $(X_2,\Lambda_2)$ be a log surface as in Thm \[gepg-2\](iii) such that $\Lambda_2=\Gamma_1+...+\Gamma_s+D$, where $\Gamma_i$ is a ruling and it intersects $D$ at 3 distinct points. Then we can glue $X_1$, $X_2$ along $E_i$, $\Gamma_j$ to obtain a log surface as in Thm \[diffdelta\] (i). The following theorem and corollary were first obtained in [@LR13]. \[thm: log noether\] Let $(X,\Lambda)$ be a connected Gorenstein stable log surface. Then $$\Delta(X,K_X+\Lambda)\ge 0.$$ This follows directly from Cor \[gepg-2\], Lemma \[nonnorm&irred\] and Lemma \[2comps\]. \[cor: nonnormal equality\] Let $(X, \Lambda)$ be a Gorenstein stable log surface such that $\Delta(X,K_X+\Lambda)=0$. Write $X=\bigcup X_i$, where $X_i$ is an irreducible component. Then 1. $\Delta(X_i, (K_X+\Lambda)|_{X_i})=0$. 2. $K_X+\Lambda$ is very ample. It defines an embedding $\phi\colon X\hookrightarrow \mathbb{P}=|K_X+\Lambda|^*$; 3. if $X_{i}\cap X_{j}\not = \emptyset $, then $X_{i}\cap X_{j}$ is a line on both $X_i$ and $X_j$; 4. $X$ is a tree of $X_i$ glued along lines. In particular, $\Lambda\neq 0$. Finally, we are able to classify Gorenstein stable log surfaces with $\Delta(X,K_X+\Lambda)=1$. \[nonnormdelta-1\] Let $(X,\Lambda)$ be a reducible Gorenstein stable log surface with $\Delta(X,K_X+\Lambda)=1$. Write $X=\bigcup X_i$, where $X_i$ is an irreducible component. Then $X$ has a unique minimal connected component $U$ such that $\Delta(U,(K_X+\Lambda)|_{U})=1$, $X\setminus U$ is composed with several trees $T_j$ with $\Delta(T_j,(K_X+\Lambda)|_{T_j})=0$, and $X$ is obtained by gluing $U$ and the $T_j$ along lines.
$U$ is one of the followings: - $X=U$ is a string of log surfaces $X_i$ with $\Delta(X_i,(K_X+\Lambda)|_{X_i})=1$ and $|(K_{X}+\Lambda)|_{X_i}|$ composed with a pencil of elliptic curves. The connecting curves are all fibers. Moreover, in this case $|K_X+\Lambda|$ is composed with a pencil of elliptic curves. - $U$ is a string of surfaces glued along lines whose end surfaces $X_i$ of the string $U$ are non-normal with $\Delta(X_i,(K_X+\Lambda)|_{X_i})=1$. Moreover, if $U$ is reducible, $|K_X+\Lambda|$ has a base point $c\in X$ and all the connecting curves of $U$ pass through $c$. - $U$ is composed with a single irreducible log surface $X_k$ with $\Delta(X_k,(K_X+\Lambda)|_{X_k})=1$ and $|(K_{X}+\Lambda)|_{X_k}|$ very ample. Moreover, in this case $K_X+\Lambda$ is very ample. - $U=X_j\cup X_k $ where $X_j$, $X_k$ are two Gorenstein log surfaces with $\Delta(X_j,(K_X+\Lambda)|_{X_j})=\Delta(X_k,(K_X+\Lambda)|_{X_k})=0$. All the connecting curves of $X$ are lines except $X_j\cap X_k$. - $U$ is a cycle of log surface $X_i$ with $\Delta(X_i,(K_X+\Lambda)|_{X_i})=0$. All the connecting curves of $X$ are lines. First by Cor \[genus-like\] we see each component $X_i$ has $\Delta(X_i,(K_X+\Lambda)|_{X_i})=1$ or $0$. Second, once we obtain a connected component $U\subset X$ such that $\Delta(U,(K_X+\Lambda)|_{U})=1$ and $\Delta(X_i,(K_X+\Lambda)|_{X_i})=0$ for each $X_i\subset X\setminus U$, we see that $X\setminus U$ will be composed with several trees of log surfaces $T_j$ with $\Delta(T_j,(K_X+\Lambda)|_{T_j})=0$ by Lemma \[2comps\] and induction hypothesis. Therefore it remains to describe $U$. We distinguish between two cases whether there is an irreducible component $X_i$ such that $\Delta(X_i,(K_X+\Lambda)|_{X_i})=1$. Case 1. There is a component $X_i$ such that $\Delta(X_i,(K_X+\Lambda)|_{X_i})=1$. Then by Cor \[equaldelta\] and Cor \[diffdelta\], either $(K_X+\Lambda)|_{X_i}$ is composed with a pencil of elliptic curves, $X_i$ is non-normal, or $(K_X+\Lambda)|_{X_i}$ is very ample. If $(K_X+\Lambda)|_{X_i}$ is composed with a pencil of elliptic curves, then by Cor \[equaldelta\] and Cor \[diffdelta\], for each irreducible components $X_j$ connected to $X_i$, $(K_X+\Lambda)|_{X_j}$ is composed with a pencil of elliptic curves as well, $X_i\cap X_j$ is a fiber and $\Delta(X_j,(K_X+\Lambda)|_{X_j})=1$. Therefore, inductively, every $(K_X+\Lambda)|_{X_k}$ is composed with a pencil of elliptic curves. Each $X_i$ is connected to at most two other components, as there are at most two fibers as connecting curves which pass through the base point of $|(K_X+\Lambda)|_{X_i}|$ on $X_i$. Thus $X=U$ is a string of such surfaces glued along fibers. If $(K_X+\Lambda)|_{X_i}$ is very ample. Then by Cor \[equaldelta\] and Cor \[diffdelta\], every other irreducible component $X_j$ connected to $X_i$ has $\Delta(X_j,(K_X+\Lambda)|_{X_j})=0$ and the connecting curves of them are lines. It is easy to check $U=X_i$. Moreover, $K_X+\Lambda$ is very ample by Lemma \[globalsection\]. If $X_i$ is non-normal, then by Lemma \[2comps\] other component $X_j$ is either non-normal or has $\Delta(X_j,(K_X+\Lambda)|_{X_j})=0$ and $X$ is a tree of such surfaces. Let $\nu\colon \tilde{X}\to X$ be a partial normalization along the non-normal curves on $X_j$. It is easy to check that $\Delta(\bar{X}, \nu^*(K_X+\Lambda))=0$. $\nu^*H^0(X,K_X+\Lambda)\subset H^0(\bar{X}, \nu^*(K_X+\Lambda))$ has a base point $c\in \mathbb{P}:=|\nu^*(K_X+\Lambda)|^*$. 
We see that $c$ is also the base point of $|(K_X+\Lambda)|_{X_j}|$ for any non-normal $X_j$. If there is only one non-normal surface $X_i$, then $U=X_i$. If there are two non-normal surfaces $X_i$, $X_j$, then $U$ is the unique string of surfaces contained in $X$ with $X_i$, $X_j$ as the ends. If there are more than two non-normal surfaces, we claim that they lie on a unique minimal string of surfaces, which is $U$. Otherwise, there are three non-normal surfaces contained in a fork of surfaces $Y\subset X$. By Lemma \[2comps\], we see that the base point $c$ lies on each $X_k\subset Y$. However, in the central surface of $Y$, there are three connecting curves passing through $c$, which is impossible. Hence the claim is true. Case 2. Each irreducible component $X_i$ has $\Delta(X_i,(K_X+\Lambda)|_{X_i})=0$. If there are two irreducible components $X_j$, $X_{k}$ such that $\Delta(X_j\cup X_k,(K_X+\Lambda)|_{X_j\cup X_k})=1$, then by Lemma \[decrease1\], $X_j\cap X_k$ is neither a line on $X_j$ nor on $X_k$, and $r_{X_j\to X_j\cap X_k}((K_X+\Lambda)|_{X_j})=r_{X_k\to X_j\cap X_k}((K_X+\Lambda)|_{X_k})=3$. It is easy to see that $X$ is a tree of log surfaces. Ungluing $X$ along $X_j\cap X_{k}$, there will be two connected trees of surfaces $V_1\supset X_j$, $V_2\supset X_{k}$. We claim $\Delta(V_1,(K_X+\Lambda)|_{V_1})=\Delta(V_2,(K_X+\Lambda)|_{V_2})=0$. Otherwise, assume $\Delta(V_1,(K_X+\Lambda)|_{V_1})=1$; then $\Delta(V_1\cup X_{k},(K_X+\Lambda)|_{V_1\cup X_{k}})=1$ implies that $ V_1\cap X_{k}=X_j\cap X_{k}$ is a line on $X_k$, a contradiction. Hence $U=X_j\cup X_k$ and $X$ is a tree of log surfaces whose connecting curves are lines except $X_j\cap X_k$. Finally we consider the case that all the connecting curves are lines. It is easy to see that $X$ is not a tree, otherwise $\Delta(X,K_X+\Lambda)=0$. Hence there is a minimal cycle $U$ of log surfaces such that $\Delta(U,(K_X+\Lambda)|_U)=1$. Let $X$ be a connected reducible Gorenstein stable surface with $K_X^2= p_g-1$. Then $X$ is a union of two $\mathbb{P}^2$ glued along a quartic curve. $X$ is a double cover of $\mathbb{P}^2$ and it can be deformed into a smooth one. ($K_X^2=2$) If $X\setminus U$ is not empty, then there is a leaf log surface $X_i$ such that $\Delta(X_i,K_X|_{X_i})=0$. However, $K_X|_{X_i}=K_{X_i}+\bar{D_i}$ and the connecting curve $\bar{D_i}$ is a line, which is impossible. Hence $X=U$. A similar discussion tells us that $X$ is not a string of $X_i$ such that $|K_X|_{X_i}|$ is composed with a pencil. Therefore $X$ is either a string of surfaces containing non-normal ones or a union of two irreducible components whose connecting curve is not a line. We show that the first case does not occur. Otherwise $X=U$ is a string of surfaces glued along lines passing through $c\in X$, where $c$ is the base point of $|K_X|$. However, in this case $c$ would not be a Gorenstein slc singularity, a contradiction. Let $X=X_1\cup X_2$ with $\Delta(X_i, K_X|_{X_i})=0$ and $r_{X_i\to X_1\cap X_2}(K_X|_{X_i})=3$. Then $X_i$ is $\mathbb{P}^2$, $\Sigma_d$ or $C_{n-1}$. However, in these cases $\ker \mathcal{R}_{X_i\to X_1\cap X_2}=0$ (otherwise $K_X|_{X_i}=K_{X_i}+X_1\cap X_2\ge X_1\cap X_2$ as divisors, which is impossible), thus $h^0(X_i,K_X|_{X_i})=r_{X_i\to X_1\cap X_2}(K_X|_{X_i})=3$. Hence $X_i$ is $\mathbb{P}^2$, and the connecting curve $X_1\cap X_2$ is a curve of degree 4. Let $X$ be a connected Gorenstein stable surface with $K_X^2= p_g-1$.
If $X$ is irreducible, then it is one of the followings: - $X$ is a double cover of $\mathbb{P}^2$. The branch curve is $2C+B$, where $C$,$B$ are reduced curves of degree $4-k$, $2k$. $k=2,3$. ($p_g(X)=3$) - $X$ is obtained from a log surface $(\bar{X},\bar{D})$ by gluing the 2-section $\bar{D}$. $(\bar{X},\bar{D})$ is a normal Gorenstein stable log surface as in Thm \[delta-genus-one\] (6). ($p_g(X)=2$) - $\bar{X}$ is $\mathbb{P}^2$. $\bar{D}\in |\mathcal{O}_{\mathbb{P}^2}(4)|$. ($p_g(X)=2$) - $\bar{X}$ is a quadric in $\mathbb{P}^3$. $\bar{D}\in |\mathcal{O}_{\bar{X}}(3)|$. ($p_g(X)=3$) If it is reducible, then $X$ is a union of two $\mathbb{P}^2$ glued along a quartic curve. ($p_g(X)=3$) We have confirmed the conjecture in [@LR13] that there are no Gorenstein stable surfaces with $K_X^2= p_g-1\ge 3$. [plain]{} Valery Alexeev. Moduli spaces [$M\sb{g,n}(W)$]{} for surfaces. In [*Higher-dimensional complex varieties (Trento, 1994)*]{}, pages 1–22. de Gruyter, August 1996. Valery Alexeev. Higher-dimensional analogues of stable curves. In [*International [C]{}ongress of [M]{}athematicians. [V]{}ol. [II]{}*]{}, pages 515–536. Eur. Math. Soc., Z[ü]{}rich, 2006. Wolf P. Barth, Klaus Hulek, Chris A. M. Peters, and Antonius [Van de Ven]{}. , volume 4 of [*Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge.*]{} Springer-Verlag, Berlin, second edition, 2004. Marieamelie Bertin. On singular varieties having an extremal secant line\[J\]. , 2006, 34(3): 893-909. Jingshan Chen. Normal Gorenstein stable log surfaces with $(K_X+\Lambda)^2 = p_g(X,\Lambda)- 1$, [*Communications in Algebra*]{} (2018), DOI: 10.1080/00927872.2018.1435794 Marco Franciosi, Rita Pardini, and Sönke Rollenske. Gorenstein stable surfaces with $K_X^2 = 1$ and $p_g>0$. (2015). Marco Franciosi, Rita Pardini, and Sönke Rollenske. Log-canonical pairs and Gorenstein stable surfaces with $K_X^2=1$. 151.8(2015):1529-1542. Takao Fujita. On the structure of polarized varieties with $\Delta$-genera zero. J.fac.sci.univ.tokyo Sect.ia Math 22(1975):103-115. Takao Fujita. Classification theories of polarized varieties. Cambridge University Press, 1990. La Nave, Gabriele. Explicit stable models of elliptic surfaces with sections. (2002). J[á]{}nos Koll[á]{}r and Shigefumi [M]{}ori. , volume 134 of [ *Cambridge Tracts in Mathematics*]{}. Cambridge University Press, Cambridge, 1998. With the collaboration of C. H. Clemens and A. Corti, Translated from the 1998 Japanese original. Janós Kollár. Moduli of varieties of general type. In G. Farkas and I. Morrison, editors, [*Handbook of Moduli: Volume II*]{}, volume 24 of [*Advanced Lectures in Mathematics*]{}, pages 131–158. International Press, 2012. J[á]{}nos Koll[á]{}r. , volume 200 of [ *Cambridge Tracts in Mathematics*]{}. Cambridge University Press, Cambridge, 2013. With a collaboration of S[á]{}ndor Kov[á]{}cs. J[á]{}nos Koll[á]{}r. . 2014. book in preparation. J[á]{}nos Koll[á]{}r and Nick Shepherd-Barron. Threefolds and deformations of surface singularities. , 91(2):299–338, 1988. Janós Kollár, K. E. Smith, and A. Corti. Rational and nearly rational varieties. Cambridge University Press Cambridge (2004):vi,235. Henry B. Laufer. On minimally elliptic singularities. , 99(6):1257–1295, 1977. Wenfei Liu and S[ö]{}nke Rollenske. Geography of Gorenstein stable log surfaces\[J\]. , 2013, 368(4). Masayoshi Nagata. *On rational surfaces I*. Memoirs of the College of Science University of Kyoto, 33,(1960):351-370. Miles Reid. Chapters on algebraic surfaces. 
In [*Complex algebraic geometry*]{}, volume 3 of [*IAS/Park City Math. Ser.*]{}, pages 3–159. Amer. Math. Soc., Providence, RI, 1997. Fumio Sakai. Semistable curves on algebraic surfaces and logarithmic pluricanonical maps. , 254(2):89–120, 1980. Shuichiro Tsunoda and De-Qi Zhang. Noether’s inequality for noncomplete algebraic surfaces of general type. , 28(1):21–38, 1992. [^1]: A local model for the pinch point in $\mathbb{A}^3$ is given by the equation $x^2+yz^2=0$.
--- abstract: 'A scattering problem (or more precisely, a transmission-reflection problem) of linearized excitations in the presence of a dark soliton is considered in a one-dimensional nonlinear Schrödinger system with a general nonlinearity: $ \mathrm{i}\partial_t \phi = -\partial_x^2 \phi + F(|\phi|^2)\phi $. If the system is interpreted as a Bose-Einstein condensate, the linearized excitation is a Bogoliubov phonon, and the linearized equation is the Bogoliubov equation. We exactly prove that the perfect transmission of the zero-energy phonon is suppressed at a critical state determined by Barashenkov’s stability criterion \[Phys. Rev. Lett. 77, (1996) 1193.\], and near the critical state, the energy-dependence of the reflection coefficient shows a saddle-node type scaling law. The analytical results are well supported by numerical calculation for cubic-quintic nonlinearity. Our result gives an exact example of scaling laws of saddle-node bifurcation in time-reversible Hamiltonian systems. As a by-product of the proof, we also give all exact zero-energy solutions of the Bogoliubov equation and their finite energy extension.' address: 'Department of Basic Science, The University of Tokyo, Tokyo 153-8902, Japan' author: - 'Daisuke A. Takahashi' bibliography: - 'genNLS.bib' title: 'Soliton-phonon scattering problem in 1D nonlinear Schrödinger systems with general nonlinearity' --- Nonlinear Schrödinger equation ,Bose-Einstein condensate ,Bogoliubov equation ,saddle-node bifurcation ,universal scaling laws ,cubic-quintic nonlinear Schrödinger equation Introduction ============ In this paper, we solve a scattering problem of linearized excitations in the presence of a dark soliton in one-dimensional(1D) nonlinear Schrödinger (NLS) equation with a general nonlinearity: $$\begin{aligned} \mathrm{i}\partial_t\phi = -\partial_x^2\phi+F(|\phi|^2)\phi, \label{eq:intro001} \end{aligned}$$ and discuss the physical and mathematical significance of our results. For a schematic picture, see Fig. \[introfigure\]. The precise mathematical definition of the problem will be given in Sec. \[sec:fundamental\]. If we regard the system as a Bose-Einstein condensate(BEC), the linearized excitation is a Bogoliubov phonon, so the problem can be also called a soliton-phonon scattering problem, as this paper entitled.\ The NLS equation (\[eq:intro001\]) has a great number of applications in nonlinear optics, superconductors, magnetism, BECs, and so on. Particularly, much attention has been focused on the experimental realizations of BECs in laser-cooled ultracold atoms for more than a decade, because of high-controllability of the system parameters. By using elongated laser beams, low-dimensional systems are realized, and a dark soliton can be created via the phase imprinting method[@BurgerPhaseImprinting]. The Bogoliubov theory is also well confirmed[@Andrews; @Stamper; @Steinhauer].\ It is known that 1D NLS with a *cubic* nonlinearity is completely integrable[@ZakharovShabat; @ZakharovShabat2]. Because of integrability, the linearized equation is also solved exactly[@ChenChenHuang; @Kovrizhin], and the phonon excitations are shown to be completely reflectionless against a soliton for *any* excitation energy. Thus, the problem is trivial in this case. However, when the nonlinear term is generalized, the phonon has a finite reflection coefficient in general. 
It is worthy to note that the soliton decay dynamics in the laser-trapped quasi-1D BEC has been well explained by the *quintic* term, which appears as a second-order perturbation of the trapping effect[@Muryshev; @SinhaChernyKovrizhinBrand; @KhaykovichMalomed], and yields the frictional force between thermal excitation clouds and solitons[@Muryshev; @SinhaChernyKovrizhinBrand]. Thus, knowing the scattering properties between solitons and linearized excitations is essential to understand and control the transport of solitons, that is, the transport of stable wave packets. We also mention that the theory of nonpolynomial NLS equation is formulated to describe the confinement effect[@SalasnichParolaReatto; @MateoDelgado; @MateoDelgadoAnnPhys; @Salasnich]. The quintic NLS also appears in an effective mean-field description of the Tonks-Girardeau gas[@Kolomeisky].\ The NLS equation with an integrability breaking factor is also interesting from the viewpoint of an infinite-dimensional dynamical system and the bifurcation theory. When the potential barrier is added in the cubic NLS equation, there exist stable and unstable stationary supercurrent-flowing solutions[@BaratoffBlackburnSchwartz; @Hakim], if the condensate velocity does not exceed a certain critical value. Near the critical point, which separates the stable branch and the unstable branch, it is known that many physical quantities obey saddle-node type scaling, such as an emission period of dark solitons[@Hakim; @PhamBrachet], an eigenvalue of a growing mode for the unstable solution[@PhamBrachet], and a transmission coefficient of linearized excitations[@Kovrizhin; @Kagan; @DanshitaYokoshiKurihara]. It is quite nontrivial that the time-reversible Hamiltonian system exhibits the scaling behaviors of saddle-node type, since this bifurcation is normally understood to emerge in time-irreversible phenomena. However, it is not easy to prove these properties analytically or exactly, because of the infinite dimensional character of the system.\ On the other hand, as another way to break the integrability, one can consider the generalization of the nonlinearity, that is what we will consider in the present paper. When the nonlinear term includes a competing interaction, the dark soliton is no longer always stable. One typical example of an unstable dark soliton is a “bubble” in a cubic-quintic NLS (CQNLS) system[@BarashenkovMakhankov; @BarashenkovPanova]. (See also [@JYang].) The most general criterion for the stability of the dark soliton has been shown by Barashenkov[@Barashenkov], and the critical velocity of the soliton is determined by $ \partial P/\partial v=0 $, where $ v $ is a velocity of the soliton and $ P $ is a renormalized momentum. The existence of the critical velocity lower than the sound velocity and the separation of stable and unstable regions are similar to the phenomena of superflows against a potential barrier. Therefore, we can expect some scaling behavior near the critical state. Furthermore, in the present case, the preserved translational symmetry of the fundamental equation makes it possible to access the problem analytically.\ In this paper, we solve the scattering problem of linearized phonon excitations, and exactly show the following: (i) At the critical state determined by Barashenkov’s criterion[@Barashenkov], the transparency of the zero-energy phonon is suppressed, and only partial transmission occurs. 
(ii) Near the critical state, the energy-dependence of the reflection coefficient of low-energy phonons shows saddle-node scaling behavior, regarding the renormalized momentum as a parameter of a normal form of saddle-node bifurcation. The obtained analytical results are well confirmed by comparison with the numerical results of CQNLS equation. Our result gives an exact example of scaling laws of saddle-node bifurcation in time-reversible Hamiltonian systems. The proof is based on the exact low-energy expansion of the solution of the linearized equation. Since the exact zero-energy solutions given in this paper are quite general, we believe that our method will also be useful to derive other low-energy physical properties.\ The organization of the paper is as follows. In Sec. \[sec:fundamental\], we introduce fundamental equations and see the fundamental properties. The definition of the transmission-reflection problem is also given. In Sec. \[sec:mainresult\], we give a main result and verify it by numerical study of CQNLS equation. Sections \[sec:proof1\] and \[sec:proof2\] are devoted to the proof of main results. Discussions, future perspectives, and conclusions are given in Sec. \[sec:summary\]. Some mathematically technical formulae are treated in Appendices. ![\[introfigure\]A schematic picture of the problem that we consider in this paper. $ p $ represents a half of the velocity of the dark soliton. The problem is always considered in the comoving frame of the soliton. $ \mathrm{e}^{\mathrm{i}k_1x} $ is an incident wave of a linearized excitation, $ t\mathrm{e}^{\mathrm{i}k_1x} $ is a transmitted wave, and $ r\mathrm{e}^{\mathrm{i}k_2x} $ is a reflected wave. For more detailed definitions of each quantity, see Sec. \[sec:fundamental\].](introfiguretex2img.eps) Fundamental Equations and Definition of the Problem {#sec:fundamental} =================================================== Fundamental equations --------------------- We begin with the NLS equation with a general nonlinearity $$\begin{aligned} \mathrm{i}\partial_t\phi=-\partial_x^2\phi+F(|\phi|^2)\phi. \label{eq:nls} \end{aligned}$$ Here, $ F(\rho) $ is a real-valued function such that $ F(0)=0 $. The energy functional (Hamiltonian) which yields this equation is $$\begin{aligned} H = \int\!\mathrm{d}x \left(|\partial_x\phi|^2+U(|\phi|^2)\right), \end{aligned}$$ where $$\begin{aligned} U(\rho) = \int_0^\rho\mathrm{d}\rho' F(\rho'). \end{aligned}$$ Letting $ \phi=\phi+\delta\phi $ in Eq. (\[eq:nls\]), and discarding higher order terms of $ \delta\phi $, one obtains the following linearized equation: $$\begin{aligned} \mathrm{i}\partial_t\delta\phi=\left[-\partial_x^2+ F(|\phi|^2)+|\phi|^2F'(|\phi|^2) \right]\delta\phi+\phi^2F'(|\phi|^2)\delta\phi^*. \end{aligned}$$ Writing $ \delta\phi=u $ and $ -\delta\phi^*=v $, one obtains $$\begin{aligned} \mathrm{i}\partial_t\begin{pmatrix}u\\v\end{pmatrix}=\mathcal{L}\begin{pmatrix}u\\v\end{pmatrix}, \label{eq:tdbogo} \end{aligned}$$ where $ \mathcal{L} $ is a $ 2\times2 $ matrix operator whose components are $$\begin{aligned} \mathcal{L}_{11}^{}&=-\mathcal{L}_{22}^{}=-\partial_x^2+F(|\phi|^2)+|\phi|^2F'(|\phi|^2), \\ \mathcal{L}_{12}^{}&=-\mathcal{L}_{21}^*=-\phi^2F'(|\phi|^2). \end{aligned}$$ We use the notation $ (u,v) $ since it is commonly used by condensed matter physicists. If we interpret the system as BEC, this equation is the Bogoliubov equation which describes the Bogoliubov phonon (or Bogoliubov quasiparticle) [@Bogoliubov]. 
(As a review or a textbook, see, e.g., [@FetterWalecka; @DalfovoGiorginiPitaevskiiStringari; @PethickSmith].) For this reason, henceforth, we call $ \phi $ the condensate wavefunction or the order parameter, and $ (u,v) $ the Bogoliubov (quasiparticle) wavefunction, though the NLS equation itself has more applications in various fields.\ Henceforth we mainly consider the stationary (i.e., time-independent) problem. The stationary NLS equation with chemical potential $ \mu $ is obtained by setting $ \phi(x,t)=\phi(x)\mathrm{e}^{-\mathrm{i}\mu t} $: $$\begin{aligned} (-\mu-\partial_x^2+F(|\phi|^2))\phi=0. \label{eq:nls2} \end{aligned}$$ As will be seen, the value of $ \mu $ is fixed by the asymptotic form of $ \phi $ . The stationary Bogoliubov equation with eigenenergy $ \epsilon $ is obtained by setting $ u(x,t)=u(x)\mathrm{e}^{-\mathrm{i}(\epsilon+\mu)t} $ and $ v(x,t)=v(x)\mathrm{e}^{-\mathrm{i}(\epsilon-\mu)t} $: $$\begin{aligned} \epsilon\begin{pmatrix}u\\v\end{pmatrix}=\mathcal{L}_\mu\begin{pmatrix}u\\v\end{pmatrix} \label{eq:bogos} \end{aligned}$$ with $$\begin{aligned} \mathcal{L}_\mu:= \mathcal{L}+\begin{pmatrix}-\mu &0 \\ 0& \mu \end{pmatrix}. \label{eq:bogos2} \end{aligned}$$ Bogoliubov phonons in a uniform condensate {#subsec:uniform} ------------------------------------------ Let us derive the dispersion relation (the energy-momentum relation) of Bogoliubov phonons when the condensate is flowing uniformly: $ \phi(x) = \sqrt{\rho_\infty}\mathrm{e}^{\mathrm{i}(px+\varphi)} $. In order for this $ \phi(x) $ to be the solution of Eq. (\[eq:nls2\]), the chemical potential must be $$\begin{aligned} \mu=p^2+F(\rho_\infty). \label{eq:cp} \end{aligned}$$ The four solutions of Bogoliubov equation (\[eq:bogos\]) are given by $$\begin{aligned} w_i(x,\varphi):= \begin{pmatrix} \bar{u}_i\, \mathrm{e}^{\mathrm{i}(px+\varphi)} \\ \bar{v}_i\, \mathrm{e}^{-\mathrm{i}(px+\varphi)} \end{pmatrix} \mathrm{e}^{\mathrm{i}k_i x}, \label{eq:bogouniform} \end{aligned}$$ where $ i=1,\,2,\,3, \text{ and }4 $, and the wavenumber $ k_i $s are the roots of the following quartic equation: $$\begin{aligned} (\epsilon-2kp)^2=k^2(k^2+2\rho_\infty F'(\rho_\infty)). \label{eq:uniformdisp} \end{aligned}$$ Equation (\[eq:uniformdisp\]) gives the dispersion relation, and from this dispersion one can see that a half of the Landau’s critical velocity (or a half of the sound wave velocity) is given by $$\begin{aligned} p_{\text{L}}=\sqrt{\frac{\rho_\infty F'(\rho_\infty)}{2}}. \label{eq:defofpl} \end{aligned}$$ (Note that the sound wave velocity is not $ p_{\text{L}} $ but $ 2p_{\text{L}} $; see the next subsection.) Since $ p_{\text{L}} $ must be real, in order for the uniform condensate to be stable, $ F'(\rho_\infty)>0 $ must hold. The coefficients $ \bar{u}_i $ and $ \bar{v}_i $ can be, e.g., chosen as follows: $$\begin{aligned} \bar{u}_{i}&=\sqrt{1+\frac{\epsilon-2p k_i^{}+k_i^2}{2p_{\text{L}}^2}}, \label{eq:ucoeff}\\ \bar{v}_{i}&=\sqrt{1-\frac{\epsilon-2p k_i^{}-k_i^2}{2p_{\text{L}}^2}}. \label{eq:vcoeff} \end{aligned}$$ When $ \epsilon>0 $ and $ -p_{\text{L}}<p<p_{\text{L}} $, the quartic equation (\[eq:uniformdisp\]) has one real positive root, one real negative root, and two complex roots conjugate to each other. We call a real positive (negative) root $ k_1 \ (k_2) $, and a complex root with positive (negative) imaginary part $ k_3 \ (k_4) $. 
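(The root classification is easy to reproduce numerically. The following is a minimal sketch, not part of the original analysis; the function name and the sample parameter values are ours. It solves the quartic (\[eq:uniformdisp\]), written with $ 2\rho_\infty F'(\rho_\infty)=4p_{\text{L}}^2 $ as $ k^4+4(p_{\text{L}}^2-p^2)k^2+4\epsilon p k-\epsilon^2=0 $, and sorts the roots into $ k_1,\dots,k_4 $ according to the convention above.)

```python
import numpy as np

def dispersion_roots(eps, p, pL):
    """Return (k1, k2, k3, k4): the roots of
    (eps - 2*k*p)**2 = k**2*(k**2 + 4*pL**2),
    with k1 > 0 and k2 < 0 real, Im k3 > 0 and Im k4 < 0,
    assuming eps > 0 and |p| < pL as in the text."""
    # Quartic coefficients, highest degree first:
    # k^4 + 4*(pL^2 - p^2)*k^2 + 4*eps*p*k - eps^2 = 0
    roots = np.roots([1.0, 0.0, 4.0*(pL**2 - p**2), 4.0*eps*p, -eps**2])
    tol = 1e-10
    real = sorted(r.real for r in roots if abs(r.imag) < tol)
    cplx = sorted((r for r in roots if abs(r.imag) >= tol), key=lambda r: r.imag)
    k2, k1 = real   # negative and positive real roots
    k4, k3 = cplx   # roots with negative and positive imaginary part
    return k1, k2, k3, k4

# Example: condensate velocity p = 0.3*pL, small excitation energy
k1, k2, k3, k4 = dispersion_roots(eps=0.05, p=0.3, pL=1.0)
```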
The low-energy expansions of them are given by $$\begin{aligned} k_1 &= \frac{\epsilon}{2(p+p_{\text{L}})}+O(\epsilon^3), \label{eq:kexpand1} \\ k_2 &= \frac{\epsilon}{2(p-p_{\text{L}})}+O(\epsilon^3), \label{eq:kexpand2} \\ k_3 &= 2\mathrm{i}\sqrt{p_{\text{L}}^2-p^2}+\frac{p\epsilon}{2(p_{\text{L}}^2-p^2)}+O(\epsilon^2), \label{eq:kexpand3} \\ k_4 &= -2\mathrm{i}\sqrt{p_{\text{L}}^2-p^2}+\frac{p\epsilon}{2(p_{\text{L}}^2-p^2)}+O(\epsilon^2). \label{eq:kexpand4} \end{aligned}$$ $ w_1 $ and $ w_2 $ are plane wave solutions propagating in the positive and negative directions, respectively. $ w_3 $ and $ w_4 $ are exponentially divergent unphysical solutions. Dark soliton solution in comoving frame --------------------------------------- Let us consider the dark soliton solution of stationary NLS Eq. (\[eq:nls2\]) in the comoving frame of the soliton. In this coordinate, the soliton is static but the surrounding condensate is flowing. Let us seek the solution with the asymptotic form $$\begin{aligned} \phi(x\rightarrow \pm\infty)=\sqrt{\rho_\infty}\mathrm{e}^{\mathrm{i}(px\pm\frac{\delta}{2})}. \label{eq:nlsdsasym} \end{aligned}$$ It should be noted that the velocity of the soliton is not $ -p $ but $ -2p $, because the Galilean covariance of NLS equation leads to the following property: $$\begin{aligned} \begin{split} &\text{$ \phi(x,t) $ is a solution.} \\ \leftrightarrow \quad & \text{$\tilde{\phi}(x,t,\alpha)=\phi(x+2\alpha t,t)\mathrm{e}^{-\mathrm{i}\alpha x}\mathrm{e}^{-\mathrm{i}\alpha^2t}$ is a solution.} \end{split}\label{eq:galilei} \end{aligned}$$ So, if one has the solution in the form $ \phi(x,t)=\mathrm{e}^{-\mathrm{i}\mu t}\mathrm{e}^{\mathrm{i}px}f(x) $, the corresponding soliton-moving solution is given by $ \tilde{\phi}(x,t,p)=\mathrm{e}^{-\mathrm{i}(\mu-p^2)t}f(x+2pt) $. However, for brevity, we sometimes call $ p $ “velocity”, ignoring the difference of twice factor.\ From the conservation laws of mass and momentum, one can immediately find two integration constants: $$\begin{aligned} j&=\frac{\phi^*\phi_x-\phi\phi_x^*}{2\mathrm{i}}, \\ j_m &= |\phi_x|^2+\mu|\phi|^2-U(|\phi|^2). \label{eq:genjm} \end{aligned}$$ Here $ j $ is a mass current density and $ j_m $ is a momentum current density. Let us write the density and the phase of the condensate as $ \phi=\sqrt{\rho}\mathrm{e}^{\mathrm{i}S} $. Taking account of the asymptotic form (\[eq:nlsdsasym\]), the chemical potential $ \mu $ becomes the same as (\[eq:cp\]), and the above constants are determined as $$\begin{aligned} j&=\rho_\infty p, \label{eq:dsj} \\ j_m&=2\rho_\infty p^2+\rho_\infty F(\rho_\infty)-U(\rho_\infty). \label{eq:dsjm} \end{aligned}$$ The conservation laws are then rewritten as $$\begin{aligned} S_x &= \frac{j}{\rho} =\frac{\rho_\infty p}{\rho} \ \leftrightarrow \ S = p\int_0^x\frac{\rho_\infty\mathrm{d}x}{\rho}, \label{eq:phasecond} \\ \frac{(\rho_x)^2}{4} &= -p^2(\rho_\infty-\rho)^2+\rho\left[ U(\rho)-U(\rho_\infty)-(\rho-\rho_\infty)U'(\rho_\infty) \right]. \label{eq:momconsrv} \end{aligned}$$ Thus, one can at least obtain the formal solution $$\begin{aligned} \pm 2(x-x_0) = \int\!\!\frac{\mathrm{d}\rho}{\sqrt{\text{R.H.S. of Eq. (\ref{eq:momconsrv})}}}, \end{aligned}$$ even though it is not easy in general to carry out this integration and obtain the solution in closed form “$ \rho(x)=\dots $”. Henceforth we do not need this formal solution, but we assume the existence of a dark soliton solution which has no singularity and satisfies the asymptotic condition (\[eq:nlsdsasym\]).\ From Eq. 
(\[eq:phasecond\]), The phase shift $ \delta $ in Eq. (\[eq:nlsdsasym\]) can be written down explicitly: $$\begin{aligned} \delta = p\int_{-\infty}^\infty\!\!\mathrm{d}x\left( \frac{\rho_\infty}{\rho}-1 \right). \end{aligned}$$ We also introduce the symbol for the particle number of the dark soliton for later convenience: $$\begin{aligned} N := \int_{-\infty}^\infty\!\!\mathrm{d}x\left( \rho-\rho_\infty \right)<0. \end{aligned}$$ Barashenkov’s criterion ----------------------- The stability of the soliton is described by the following renormalized momentum[@Barashenkov]: $$\begin{aligned} P = \int_{-\infty}^\infty\mathrm{d}x\left( \frac{\tilde{\phi}^*\tilde{\phi}_x-\tilde{\phi}\tilde{\phi}^*_x}{2\mathrm{i}} \right)\left( 1-\frac{\rho_\infty}{|\tilde{\phi}|^2} \right) \end{aligned}$$ Here $ \tilde{\phi}(x,t) $ is the dark soliton solution in the frame where the surrounding condensate is at rest and the soliton is moving. Writing the soliton velocity $ v $, the stability criterion for the dark solitons is expressed by $ \partial P/\partial v<0 $.\ We can rewrite the above integral by the density profile $ \rho(x) $: $$\begin{aligned} \begin{split} P &= -p\int_{-\infty}^\infty\mathrm{d}x\left( \rho_\infty-\rho \right)\left( \frac{\rho_\infty}{\rho}-1 \right) \\ &= -p N-\rho_\infty \delta. \end{split} \end{aligned}$$ Here remember that the soliton velocity is given by $ v=-2p $, as stated in the preceding subsection. The stability condition is rewritten as $ \partial (-P)/\partial p <0 $. Definition of the scattering problem {#sec:defofsp} ------------------------------------ In this subsection, we define the transmission and reflection problem of Bogoliubov phonons shown in Fig. \[introfigure\]. Since the linearized equation does not satisfy simple particle number conservation, we must define transmission and reflection coefficients via the conservation of excitation energy. The conservation of excitation energy corresponds to the constancy of the following Wronskian:[@Kagan; @DanshitaYokoshiKurihara]: $$\begin{aligned} W = u^*\partial_xu-u\partial_xu^*+v^*\partial_xv-v\partial_xv^*. \end{aligned}$$ Let us assume that the asymptotic form of the condensate wavefunction $ \phi(x) $ is given by Eq. (\[eq:nlsdsasym\]). In this situation, sufficiently far from the origin, the Bogoliubov equations have the plane wave (and exponentially decaying/diverging) solutions given by Eq. (\[eq:bogouniform\]) with $ \varphi=\pm\frac{\delta}{2} $.\ The solution of the scattering problem is defined by the one that has the following asymptotic form[@Kovrizhin; @Kagan; @DanshitaYokoshiKurihara]: $$\begin{aligned} \begin{pmatrix}u \\ v \end{pmatrix} \rightarrow \begin{cases} w_1\left(x,-\frac{\delta}{2}\right)+r\, w_2\left(x,-\frac{\delta}{2}\right) & (x\rightarrow-\infty) \\ t\, w_1\left(x,+\frac{\delta}{2}\right) & (x\rightarrow+\infty). \end{cases} \label{eq:uvasymptotic} \end{aligned}$$ Here the exponentially decaying waves, which are $ w_4(x,-\frac{\delta}{2}) $ in $ x\rightarrow -\infty $ and $ w_3(x,+\frac{\delta}{2}) $ in $ x\rightarrow+\infty $, can be also included. However, they are irrelevant in the definition of transmission and reflection coefficients. The calculation of $ W $ shows that $$\begin{aligned} \frac{W(+\infty)}{2\mathrm{i}}=&|t|^2\left[ (k_{1}+p)|\bar{u}_{1}|^2+(k_{1}-p)|\bar{v}_{1}|^2 \right]\!, \\ \begin{split} \frac{W(-\infty)}{2\mathrm{i}}=&(k_{1}+p)|\bar{u}_{1}|^2+(k_{1}-p)|\bar{v}_{1}|^2 \\ &\ +|r|^2\left[ (k_{2}+p)|\bar{u}_{2}|^2+(k_{2}-p)|\bar{v}_{2}|^2 \right]. 
\end{split} \end{aligned}$$ Since $ W(+\infty)=W(-\infty) $, the transmission coefficient $ T $ and the reflection coefficient $ R $ are naturally defined as $$\begin{aligned} T&=|t|^2, \label{eq:tc} \\ R&=\frac{(-k_{2}-p)|\bar{u}_{2}|^2+(-k_{2}+p)|\bar{v}_{2}|^2}{(k_{1}+p)|\bar{u}_{1}|^2+(k_{1}-p)|\bar{v}_{1}|^2}|r|^2. \label{eq:defofR} \end{aligned}$$ By this definition, $ T+R=1 $ always holds. If one chooses the normalization $ \bar{u}_i $ and $ \bar{v}_i $ as Eq. (\[eq:ucoeff\]) and (\[eq:vcoeff\]), one can show $$\begin{aligned} \frac{(-k_{2}-p)|\bar{u}_{2}|^2+(-k_{2}+p)|\bar{v}_{2}|^2}{(k_{1}+p)|\bar{u}_{1}|^2+(k_{1}-p)|\bar{v}_{1}|^2} = 1-\frac{pp_{\text{L}}\epsilon^2}{2(p^2-p_{\text{L}}^2)^3}+O(\epsilon^4). \end{aligned}$$ So, if one is only interested in the leading order, one can approximate $ R \simeq |r|^2 $. Summary of Main Result and Numerical Verification {#sec:mainresult} ================================================= In this section we present the main results of this paper and verify them by numerical study of CQNLS equation. The proof will be given in Secs. \[sec:proof1\] and \[sec:proof2\]. Main result {#subsec:mainresult} ----------- In the scattering problem of linearized excitations defined in Subsec. \[sec:defofsp\], the amplitude of the reflected component $ r $ in Eq. (\[eq:uvasymptotic\]) is given by the following Padé approximant-like form: $$\begin{aligned} r = \frac{- \mathrm{i}\epsilon(d+d_1P_p)+O(\epsilon^2)}{a P_p - \mathrm{i}\epsilon(b+b_1P_p)+O(\epsilon^2)}. \label{eq:mainr} \end{aligned}$$ Here $ X_p:= \partial X/\partial p $ and $$\begin{aligned} a &= 4p_{\text{L}}\rho_\infty, \\ b &= (N+pN_p)^2+(p_{\text{L}}N_p)^2, \\ d &= (N+pN_p)^2-(p_{\text{L}}N_p)^2, \\ b_1 &= N-\frac{p_{\text{L}}^2+p^2}{p_{\text{L}}^2-p^2}\widetilde{N}, \\ d_1 &= N+\widetilde{N} \end{aligned}$$ with $$\begin{aligned} \widetilde{N}:=p \frac{\partial N}{\partial p}-\rho_\infty\frac{\partial N}{\partial \rho_\infty}. \end{aligned}$$ From (\[eq:mainr\]), the energy dependence of the reflection coefficient $ R $ (\[eq:defofR\]) becomes $$\begin{aligned} R = \begin{cases} \displaystyle \left( \frac{d+d_1P_p}{aP_p} \right)^2\epsilon^2+O(\epsilon^4) & (P_p\ne0) \\ \displaystyle \left( \frac{d}{b} \right)^2+O(\epsilon^2) & (P_p=0). \end{cases} \label{eq:mainrR} \end{aligned}$$ When $ P_p \ne0$, the zero-energy phonon transmits perfectly: $ \lim_{\epsilon\rightarrow0}R=0 $. On the other hand, when the soliton velocity reaches a critical value, i.e. $ P_p=0 $, this perfect transmission disappears. We note that the rational form expression (\[eq:mainr\]) makes it possible to unify the description of low-energy behaviors in both critical and non-critical cases. If we use a simple Taylor series, the singular behavior at the critical velocity state cannot be expressed.\ The above (\[eq:mainrR\]) is one good result valid for any soliton velocity, even if it is far from the critical state. However, when the velocity comes close to the critical value, i.e., $ P_p $ comes close to zero, we can derive a more powerful scaling law as below. Let us assume that coefficients of $ \epsilon^n \ (n\ge2) $ in (\[eq:mainr\]) are all finite in the limit $ P_p\rightarrow0 $. and take the limit $ \epsilon\rightarrow 0 $ and $ P_p\rightarrow0 $ with a constraint $ \epsilon/P_p=\text{fix} $. 
We then obtain $$\begin{aligned} r \rightarrow \frac{-\mathrm{i}d(\epsilon/P_p)}{a-\mathrm{i}b(\epsilon/P_p)}, \end{aligned}$$ and in the same limit, the *universal* form of reflection coefficient $$\begin{aligned} \lim_{\substack{\epsilon\rightarrow0,\, P_p\rightarrow0,\\ \epsilon/P_p:\text{fix}}} R = \frac{d^2(\epsilon/P_p)^2}{a^2+b^2 (\epsilon/P_p)^2} \label{eq:scaledR} \end{aligned}$$ follows. Here, the values of $ a, b, $ and $ d $ in the critical state must be substituted when we use Eq. (\[eq:scaledR\]). We remark that Eq. (\[eq:scaledR\]) contains only $ p $-derivatives, whereas the expression before taking the scaling limit contains $ \rho_\infty $-derivatives in addition to $ p $-derivatives.\ Let $ p_c $ be a critical velocity of the dark soliton, i.e., $ P'(p_c)=0 $. The expansion of $ P $ near $ p=p_c $ gives $$\begin{aligned} P(p) &\simeq P(p_c)+\frac{1}{2}P''(p_c)(p-p_c)^2+\dotsb, \\ \rightarrow |P_p| &= |P'(p)| \simeq |2P''(p_c)(P(p)-P(p_c))|^{1/2}. \end{aligned}$$ Therefore we obtain $$\begin{aligned} \frac{\epsilon}{|P_p|} \simeq \frac{\epsilon}{|2P''(p_c)(P(p)-P(p_c))|^{1/2}}. \label{eq:energyscale} \end{aligned}$$ This is an expected scaling behavior from the normal form of saddle-node bifurcation[@GuckenheimerHolmes; @PhamBrachet], if we regard the renormalized momentum $ P $ as a parameter of normal form. Comparison with Numerical Results in CQNLS System {#subsec:cqnls} ------------------------------------------------- In this subsection, we numerically verify the analytical results of the preceding subsection in the CQNLS system. We first derive the expressions for the dark soliton solution and the renormalized momentum in Subsec. \[subsubsec:darksolitoncqnls\], and solve the scattering problems of linearized excitations for (i) the purely cubic case in Subsec. \[sec:purecubic\], (ii) the purely quintic case in Subsec. \[sec:purequintic\], and (iii) the case where a non-trivial critical velocity exists in Subsec. \[sec:criticalcase\]. ### Dark soliton solution and renormalized momentum {#subsubsec:darksolitoncqnls} In the CQNLS system, the nonlinear term is defined by $$\begin{aligned} U(\rho) &= a_1\rho^2+a_2\rho^3, \\ F(\rho)&=U'(\rho) = 2a_1\rho+3a_2\rho^2, \end{aligned}$$ and the NLS equation (\[eq:nls\]) has the cubic-quintic nonlinearity: $$\begin{aligned} \mathrm{i}\partial_t\phi=-\partial_x^2\phi+2a_1|\phi|^2\phi+3a_2|\phi|^4\phi. \end{aligned}$$ The stationary linearized equation, i.e., the stationary Bogoliubov equation (\[eq:bogos\]) is given by $$\begin{aligned} \!\!\!\!\!\!&\begin{pmatrix}-\partial_x^2-\mu+4a_1|\phi|^2+9a_2|\phi|^4 & -2a_1\phi^2-6a_2|\phi|^2\phi^2 \\ 2a_1\phi^{*2}+6a_2|\phi|^2\phi^{*2} & \!\! \partial_x^2+\mu-4a_1|\phi|^2-9a_2|\phi|^4 \end{pmatrix}\begin{pmatrix}u \\ v \end{pmatrix} \nonumber \\ &=\epsilon\begin{pmatrix}u\\v\end{pmatrix}. \label{eq:bogocqnls} \end{aligned}$$ It is known that a bubble and unstable dark solitons appear when $ a_1<0 $ and $ a_2>0 $[@BarashenkovMakhankov; @BarashenkovPanova]. This case is considered in Subsec. \[sec:criticalcase\]. As shown in [@BarashenkovPanova], when the soliton velocity is smaller than the critical value, a small perturbation induces “nucleation dynamics”, and the soliton cannot preserve its shape any more. So, this instability is not convective but absolute.\ The Landau velocity (\[eq:defofpl\]) is given by $$\begin{aligned} p_{\text{L}} = \sqrt{\rho_\infty(a_1+3a_2\rho_\infty)}. 
\end{aligned}$$ The necessary condition $ a_1+3a_2\rho_\infty>0 $ follows for a uniform condensate to be stable. The dark soliton solution is given by $$\begin{aligned} \phi(x,p,\rho_\infty) =\mathrm{e}^{\mathrm{i}px}\frac{\kappa\rho_0+\mathrm{i}p(\rho_\infty-\rho_0)\tanh\kappa x}{\sqrt{\rho_0(\kappa^2-a_2(\rho_\infty-\rho_0)^2\tanh^2\kappa x)}} \label{eq:dscqnls} \end{aligned}$$ with $$\begin{aligned} \kappa &=\sqrt{p_{\text{L}}^2-p^2}, \\ \rho_0 &=\rho(x=0)= \frac{-(2a_2\rho_\infty+a_1)+\sqrt{(2a_2\rho_\infty+a_1)^2+4a_2p^2}}{2a_2}. \label{eq:rho0cqnls} \end{aligned}$$ See \[app:cqnls\] for a detailed derivation. Since $ \kappa $ and $ \rho_0 $ are the functions of $ (p,\rho_\infty) $, the dark soliton solution has two parameters $ (p,\rho_\infty) $. From (\[eq:rho0cqnls\]), $$\begin{aligned} \lim_{p\rightarrow0}\rho_0=\begin{cases} 0 & (2a_2\rho_\infty+a_1>0) \\ \frac{1}{2a_2}|2a_2\rho_\infty+a_1| & (2a_2\rho_\infty+a_1<0), \end{cases} \end{aligned}$$ so the bubble (= a non-topological dark soliton) appears when $ \rho_\infty<-a_1/(2a_2) $. A particle number of soliton $ N $ and a phase difference $ \delta $ are calculated as $$\begin{aligned} N &= -\frac{2}{\sqrt{a_2}}\tanh^{-1}\left[ \frac{\sqrt{a_2}(\rho_\infty-\rho_0)}{\kappa} \right],\label{eq:cqnlsN} \\ \delta &= 2 \tan^{-1}\left[ \frac{p(\rho_\infty-\rho_0)}{\rho_0\kappa} \right]. \label{eq:cqnlsdelta} \end{aligned}$$ From them we can calculate the renormalized momentum $ -P=pN+\rho_\infty \delta $. An example is shown in Fig. \[fig:renP\]. The case where the unstable region exists is analyzed in Subsec. \[sec:criticalcase\] in detail. ![\[fig:renP\](Color online) Plot of renormalized momentum $ -P/\rho_\infty=\delta+pN/\rho_\infty $ in CQNLS. Here we set $ (a_1,a_2)=(-1,1) $. The values of $ \rho_\infty $ of each curve are set, from top to bottom, $ \rho_\infty=0.55,\, 0.52,\, 0.502,\, 0.5,\, 0.498,\, 0.48,\, \text{ and } 0.45 $, respectively. The dark solitons are stable in the regions of the solid lines, while unstable in the regions of the dashed lines. The critical points are marked by black dots. The unstable soliton appears when $ \rho_\infty<-a_1/(2a_2)=0.5 $. ](renP102.eps) ### Purely cubic case {#sec:purecubic} As a first example, let us consider the case $ a_2=0 $, i.e., the nonlinearity is purely cubic. As mentioned in the Introduction, the NLS equation is integrable in this case and the Bogoliubov phonons are reflectionless for any energy. Let us see that our analytical result Eq. (\[eq:mainrR\]) is consistent with these known facts.\ Without loss of generality, we can set $ a_1=1 $. The dark soliton solution (\[eq:dscqnls\]) is then reduced to $$\begin{aligned} \phi = \mathrm{e}^{\mathrm{i}px}\left( p+\mathrm{i}\kappa\tanh \kappa x \right), \quad \kappa = \sqrt{\rho_\infty-p^2}. \end{aligned}$$ The exact solution of the linearized equation, i.e., the Bogoliubov equation is given by the squared Jost solution[@ChenChenHuang; @Kovrizhin]: $$\begin{aligned} u &= \mathrm{e}^{\mathrm{i}(k_j+p)x}\left( \mathrm{i}\kappa\tanh \kappa x+\frac{k_j}{2}+\frac{\epsilon}{2k_j} \right)^2, \\ v &= \mathrm{e}^{\mathrm{i}(k_j-p)x}\left( \mathrm{i}\kappa\tanh \kappa x+\frac{k_j}{2}-\frac{\epsilon}{2k_j} \right)^2, \end{aligned}$$ where $ k_j $s are given by the roots of the dispersion relation Eq. (\[eq:uniformdisp\]) with $ F'(\rho_\infty)=2 $.\ From the above explicit expression, it is obvious that the phonons are reflectionless. Therefore, the coefficient of $ \epsilon^2 $ in Eq. (\[eq:mainrR\]) must vanish. Let us check it. 
For the cubic case, it follows that $$\begin{aligned} N=-2\kappa,\quad \delta = 2 \tan^{-1}\frac{\kappa}{p}, \end{aligned}$$ by taking the limit $ a_2\rightarrow0 $ of Eqs. (\[eq:cqnlsN\]) and (\[eq:cqnlsdelta\]). With the use of them, we can show $$\begin{gathered} P_p = -N-pN_p-\rho_\infty \delta_{p}=4\kappa, \\ d=4(\kappa^2-3p^2),\quad d_1=\frac{-\kappa^2+3p^2}{\kappa}. \end{gathered}$$ Thus we obtain $ d+d_1P_p=0 $, as expected. We also note that the soliton is always stable since $ -P_p<0 $ for all velocities.\ It is also possible to discuss the reflection properties when the quintic term is small by expanding Eqs (\[eq:cqnlsN\]) and (\[eq:cqnlsdelta\]) with respect to $ a_2 $, but the expression is not so simple. In this case, one can derive an approximate formula valid not only for small energy but for arbitrary energy by the method given in Refs. [@Muryshev; @SinhaChernyKovrizhinBrand]. ### Purely quintic case {#sec:purequintic} Next, we treat the purely (self-defocusing) quintic case. As already mentioned, the quintic NLS equation is known to describe the dynamics of the Tonks-Girardeau gas[@Kolomeisky].\ Without loss of generality, we can set $ a_1=0 $ and $ a_2=1 $. Though we can also set $ \rho_\infty=1 $, we keep it for a moment because we need the differentiation of $ \rho_\infty $ to calculate the reflection coefficient (\[eq:mainrR\]). Eqs. (\[eq:cqnlsN\]) and (\[eq:cqnlsdelta\]) are reduced to $$\begin{aligned} N &= -\tanh^{-1}\left[ \frac{\sqrt{3(1-y^2)}}{2} \right], \\ \delta &= 2 \tan^{-1}\left[ \frac{1-3y^2+\sqrt{1+3y^2}}{3y\sqrt{1-y^2}} \right] \\ \text{with}\quad y &:= \frac{p}{p_{\text{L}}} = \frac{1}{\sqrt{3}}\frac{p}{\rho_\infty}. \end{aligned}$$ From them one can plot the renormalized momentum $ -P/\rho_\infty=\sqrt{3}yN+\delta $ and can show that $ -P_p<0 $ always holds, i.e., the soliton is always stable. One can also obtain the coefficient of $ \epsilon^2 $ in Eq. (\[eq:mainrR\]) as follows: $$\begin{aligned} -\frac{d+d_1P_p}{aP_p} &= \frac{1}{4\sqrt{3}\rho_\infty^2}\frac{\gamma\tanh^{-1}\gamma}{\gamma+2(1-\gamma^2)\tanh^{-1}\gamma}, \label{eq:quinticcoeff} \\ \gamma&:=\frac{\sqrt{3(1-y^2)}}{2}. \end{aligned}$$ ![\[fig:quinticRC\](Color online) Energy-dependence of reflection coefficient $ R $ of linearized excitations for various soliton velocities in the purely quintic system. Here we set $ (a_1,a_2)=(0,1) $ and $ \rho_\infty=1 $. The Landau velocity is given by $ p_{\text{L}}=\sqrt{3}\rho_\infty=\sqrt{3} $. Parabolic curves represent the theoretical approximate expression (\[eq:mainrR\]) with (\[eq:quinticcoeff\]).](quinticRC002.eps) The energy-dependence of the reflection coefficient $ R $ of linearized excitations is obtained by solving the Bogoliubov equation (\[eq:bogocqnls\]) numerically, and the results are shown in Fig. \[fig:quinticRC\]. We can verify that the expression (\[eq:mainrR\]) with (\[eq:quinticcoeff\]) is valid for low-energy region. From this figure we can also see that the soliton with zero velocity is the strongest scatterer. It is intuitively clear since the shape of the soliton becomes shallower and wider if the velocity of the soliton increases. However, this intuitive understanding is not always correct, as the integrable cubic case in Subsec. \[sec:purecubic\] and the instability-induced anomaly in Subsec. \[sec:criticalcase\] illustrate. 
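(For completeness, the low-energy estimate plotted as the parabolic curves in Fig. \[fig:quinticRC\] can be evaluated directly. The snippet below is a minimal sketch of Eq. (\[eq:mainrR\]) with the coefficient (\[eq:quinticcoeff\]) for the purely quintic case $ (a_1,a_2)=(0,1) $; it is not taken from the original computation, and the function name is ours.)

```python
import numpy as np

def quintic_R_low_energy(eps, p, rho_inf=1.0):
    """Leading-order reflection coefficient R ~ C**2 * eps**2 for the purely
    quintic nonlinearity (a1, a2) = (0, 1), using the coefficient of
    Eq. (quinticcoeff). Valid for small eps and |p| < pL = sqrt(3)*rho_inf."""
    pL = np.sqrt(3.0) * rho_inf
    y = p / pL
    gamma = np.sqrt(3.0 * (1.0 - y**2)) / 2.0
    atg = np.arctanh(gamma)
    C = gamma * atg / (gamma + 2.0*(1.0 - gamma**2)*atg) / (4.0*np.sqrt(3.0)*rho_inf**2)
    return (C * eps)**2

# The soliton at rest scatters more strongly than a moving one:
print(quintic_R_low_energy(eps=0.1, p=0.0))                  # largest low-energy R
print(quintic_R_low_energy(eps=0.1, p=0.5 * np.sqrt(3.0)))   # p = 0.5*pL
```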
### The case with $ a_1<0 $ and $ a_2>0 $ {#sec:criticalcase} Finally, we consider the case with $ a_1<0 $ and $ a_2>0 $, which is most interesting from the viewpoint of critical phenomena, since the soliton can become unstable and the reflection coefficient can show the singular and scaling behavior.\ If both $ a_1 $ and $ a_2 $ are nonzero, we can set $ |a_1|=|a_2|=1 $ without loss of generality by the following scale transformation: $$\begin{gathered} \bar{x}=\frac{x}{\xi},\ \bar{t}=\frac{t}{\xi^2},\ \bar{\phi}(\bar{x},\bar{t})=\frac{1}{\eta}\phi(x,t), \\ \xi=\frac{\sqrt{|a_2|}}{|a_1|},\ \eta=\sqrt{\frac{|a_1|}{|a_2|}}. \end{gathered}$$ So we performed numerical calculations by setting $ (a_1,a_2)=(-1,1)$. Note that $ \rho_\infty $ cannot be normalized to be unity if we choose $ \xi $ and $ \eta $ as the above. Another choice of $ \eta $ is possible to normalize $ \rho_\infty=1 $, but in this case either $ a_1 $ or $ a_2 $ cannot be normalized.\ Using the dark soliton solution (\[eq:dscqnls\]), we numerically solved the stationary Bogoliubov equation (\[eq:bogocqnls\]), and constructed the solution with the asymptotic form (\[eq:uvasymptotic\]). Figure \[fig:RC\] shows the reflection coefficient with $ \rho_\infty=0.45 $ for various soliton velocities. We can observe that the zero-energy phonon transmits perfectly, unless the soliton velocity is equal to the critical one. When the soliton velocity comes close to the critical value, the slope of the reflection coefficient becomes very steep, and at the critical state, the perfect transmission eventually vanishes. The approximate expression (\[eq:mainrR\]) is good for sufficiently low excitation energy. Figure \[fig:scaledR\] shows the scaling behavior of reflection coefficient $ R $. Here, based on Eq. (\[eq:energyscale\]), the horizontal axis is chosen to be the scaled energy $ \tilde{\epsilon}=\epsilon/\sqrt{2P''(p_c)(P(p)-P(p_c))} $. We can see that if the soliton velocity is close to the critical one, numerically calculated points are well fitted to the universal curve (\[eq:scaledR\]). Thus, the theoretical results are well confirmed in this example. ![\[fig:RC\](Color online) Energy-dependence of reflection coefficient $ R $ of linearized excitations for various soliton velocities. Here we set $ (a_1,a_2)=(-1,1) $ and $ \rho_\infty=0.45 $. The critical velocity of the dark soliton is given by $ p_c = (0.206597\dots)\times p_{\text{L}}$. (See the lowest curve of Fig. \[fig:renP\].) A reflection coefficient of zero-energy phonon at the critical state is given by $ (d/b)^2\simeq 0.5718 $. Parabolic curves represent the theoretical approximate expression (\[eq:mainrR\]).](RC103.eps) ![\[fig:scaledR\](Color online) Scaling behavior of reflection coefficient $ R $. Here we set $ (a_1,a_2)=(-1,1) $ and $ \rho_\infty=0.45 $. $ \tilde{\epsilon}:=\epsilon/\sqrt{2P''(p_c)(P(p)-P(p_c))} $ is a scaled excitation energy. “Theory” represents the universal form of reflection coefficient (\[eq:scaledR\]).](scaledR002.eps) Proof – Step 1: Exact Zero-Energy Solutions {#sec:proof1} =========================================== In this and the next section, we prove the main result. This section is particularly devoted to the construction of exact zero-energy solutions. As an important tool, parameter derivatives are introduced. Parameter derivative {#subsec:paradera} -------------------- As seen in the asymptotic form (\[eq:nlsdsasym\]) or in the example of the CQNLS system in Subsec. 
\[subsec:cqnls\], the dark soliton solution has two parameters, i.e., $ (p,\rho_\infty) $. So we can consider two kinds of parameter derivatives:[^1] $ \partial_p\phi $ and $ \partial_{\rho_\infty}\phi $. We can use arbitrary coordinates to “label” the two-dimensional parameter space $ (\alpha,\beta)=(\alpha(p,\rho_\infty), \beta(p,\rho_\infty)) $, unless the Jacobian of coordinate transformation is singular. Obviously, the final result must not depend on the choice of coordinates. In order to make the story general, we always use these general parameter derivatives, and henceforth, we write the parameter derivative simply by the subscript, i.e., $ \phi_\alpha := \partial_\alpha\phi $ and $ \phi_\beta:=\partial_\beta\phi $. We also introduce the following symbol: $$\begin{aligned} [A,B]_{\alpha\beta}:=A_\alpha B_\beta-A_\beta B_\alpha. \end{aligned}$$ Note that the ratio $$\begin{aligned} \frac{[A,B]_{\alpha\beta}}{[C,D]_{\alpha\beta}} \label{eq:invratio} \end{aligned}$$ has a coordinate-free meaning, in other words, it is invariant under coordinate transformations of parameter space. We often construct coordinate-free solutions in such a ratio form.\ An immediate application of parameter derivatives is that one can obtain a particular solution of zero-energy Bogoliubov equation (\[eq:bogos\]). By differentiation of the stationary NLS eq. (\[eq:nls2\]), one obtains $$\begin{aligned} \mathcal{L}_\mu \begin{pmatrix} \phi_\alpha \\ -\phi^*_\alpha \end{pmatrix} = \mu_\alpha \begin{pmatrix} \phi \\ \phi^* \end{pmatrix}. \label{eq:1storder} \end{aligned}$$ The same expression follows by replacement $ \alpha\rightarrow\beta $. Thus, taking the difference of double parameter derivatives, one can obtain the following zero-energy solution: $$\begin{aligned} \mathcal{L}_\mu \begin{pmatrix} [\mu,\phi]_{\alpha\beta} \\ -[\mu,\phi^*]_{\alpha\beta} \end{pmatrix} = 0. \end{aligned}$$ It must be emphasized that this solution exists even when a localized potential barrier is added, in other words, when the fundamental equation loses a translational symmetry. What we only need is two kinds of parameter derivatives. So, this solution is not a symmetry-originated zero-mode. (For a symmetry consideration, see \[app:symmetry\].)\ Some technical (but crucially important) identities are derived in \[app:idnty\]. Equation (\[eq:1storder\]) will be used again in the process of energy expansions. Density fluctuation and phase fluctuation {#subsec:fg} ----------------------------------------- Here we introduce notations for the linearized density fluctuations and phase fluctuations, and rewrite the Bogoliubov equation with respect to these variables. They are convenient for both calculations and physical interpretations. 
Density fluctuation and phase fluctuation {#subsec:fg}
-----------------------------------------

Here we introduce notation for the linearized density and phase fluctuations, and rewrite the Bogoliubov equation in terms of these variables. They are convenient both for calculations and for physical interpretation. Through the symbols $ (u,v)=(\delta\phi,-\delta\phi^*) $, the density and phase fluctuations are expressed as $$\begin{aligned} \delta \rho &= \delta(\phi\phi^*)=\delta\phi\phi^*+\phi\delta\phi^*=u\phi^*-v\phi, \\ \delta S &= \delta\left( \frac{1}{2\mathrm{i}}\log\frac{\phi}{\phi^*} \right)=\frac{1}{2\mathrm{i}}\left( \frac{\delta\phi}{\phi}-\frac{\delta\phi^*}{\phi^*} \right)=\frac{1}{2\mathrm{i}}\left( \frac{u}{\phi}+\frac{v}{\phi^*} \right). \end{aligned}$$ Therefore, if one defines $ f $ and $ g $ as $$\begin{aligned} \begin{cases} \displaystyle f=\frac{1}{2\mathrm{i}}\left( \frac{u}{\phi}+\frac{v}{\phi^*} \right) \\ g=u\phi^*-v\phi \end{cases} \quad\leftrightarrow\quad \begin{cases}\displaystyle u=\phi\left( \mathrm{i}f+\frac{g}{2\rho} \right) \\ \displaystyle v=\phi^*\left( \mathrm{i}f-\frac{g}{2\rho} \right), \end{cases} \end{aligned}$$ then $ f $ has the meaning of the phase fluctuation, and $ g $ the density fluctuation. The stationary Bogoliubov equation (\[eq:bogos\]) is rewritten as follows: $$\begin{aligned} \left( \rho f'+\frac{jg}{\rho} \right)'&=\frac{\mathrm{i}\epsilon}{2}g, \label{eq:bogofg1}\\ \left( g'-\frac{\rho'}{\rho}g \right)'-2\rho F'(\rho)g-4jf'&=-2\mathrm{i}\epsilon\rho f \label{eq:bogofg2}. \end{aligned}$$ For $ \epsilon=0 $, Eq. (\[eq:bogofg1\]) can be integrated once, yielding the integration constant $$\begin{aligned} \rho f'+\frac{jg}{\rho}=\text{const.} \label{eq:zeroBogoconst} \end{aligned}$$

Exact zero-energy solutions {#subsec:zerosol}
---------------------------

Let us derive all four linearly independent solutions of the zero-energy Bogoliubov equation (\[eq:bogos\]), or equivalently, Eqs. (\[eq:bogofg1\]) and (\[eq:bogofg2\]) with $ \epsilon=0 $. From global phase symmetry and translational symmetry, we can immediately find two zero-modes: $ (u,v)=(\mathrm{i}\phi,\mathrm{i}\phi^*) $ and $ (\phi^{}_x,-\phi^*_x) $. In addition, we already have a third solution from Subsec. \[subsec:paradera\]. It is well known that if we have $ n-1 $ linearly independent solutions of an $ n $-th order linear differential equation, the remaining one can be obtained by reduction of order, so the fourth solution can also be found. See \[app:roo\] for the detailed calculation.\
Thus, all four zero-energy solutions are given by $$\begin{aligned} \begin{pmatrix}f_1 \\ g_1 \end{pmatrix}&=\begin{pmatrix}1 \\ 0 \end{pmatrix}, \\ \begin{pmatrix}f_2 \\ g_2 \end{pmatrix}&=\frac{1}{[\mu,j]_{\alpha\beta}}\begin{pmatrix}[\mu,S]_{\alpha\beta} \\ [\mu,\rho]_{\alpha\beta} \end{pmatrix},\\ \begin{pmatrix}f_3 \\ g_3 \end{pmatrix}&=\begin{pmatrix}S_x-p \\ \rho_x \end{pmatrix}=\begin{pmatrix}p(\rho_\infty-\rho)/\rho \\ \rho_x \end{pmatrix}, \label{eq:f3g3} \\ \begin{pmatrix}f_4 \\ g_4 \end{pmatrix}&= -\frac{\rho g_2}{\rho_\infty-\rho}\begin{pmatrix}0 \\ 1\end{pmatrix}+\left[\rho_\infty \int_0^x\!\!\frac{ g_2\mathrm{d}x}{(\rho_\infty-\rho)^2} \right]\begin{pmatrix} f_3 \\ g_3 \end{pmatrix}.
\label{eq:zerosol4} \end{aligned}$$ For reference, we also write down the same solutions in $ (u,v) $ notation: $$\begin{aligned} \begin{pmatrix}u_1 \\ v_1 \end{pmatrix}&=\begin{pmatrix}\mathrm{i}\phi \\ \mathrm{i}\phi^* \end{pmatrix}, \\ \begin{pmatrix}u_2 \\ v_2 \end{pmatrix}&=\frac{1}{[\mu,j]_{\alpha\beta}}\begin{pmatrix} [\mu,\phi]_{\alpha\beta} \\ -[\mu,\phi^*]_{\alpha\beta} \end{pmatrix},\\ \begin{pmatrix}u_3 \\ v_3 \end{pmatrix}&=\begin{pmatrix}\phi_x-\mathrm{i}p\phi \\ -\phi_x^*-\mathrm{i}p\phi^* \end{pmatrix}, \label{eq:u3v3}\\ \begin{pmatrix}u_4 \\ v_4 \end{pmatrix}&= \frac{g_2}{2(\rho_\infty-\rho)}\begin{pmatrix}-\phi \\ \phi^*\end{pmatrix}+\left[\rho_\infty \int_0^x\!\!\frac{ g_2\mathrm{d}x}{(\rho_\infty-\rho)^2} \right]\begin{pmatrix} u_3 \\ v_3 \end{pmatrix} \end{aligned}$$ Here $ g_2 = [\mu,\rho]_{\alpha\beta}/[\mu,j]_{\alpha\beta} = \mathrm{i}(u_1v_2-u_2v_1) $.\
$ (f_4,g_4) $ is chosen so that the integration constant (\[eq:zeroBogoconst\]) vanishes, so only $ (f_2,g_2) $ contributes to this constant: $$\begin{aligned} \rho f_2'+\frac{jg_2}{\rho}&=1, \\ \rho f_i'+\frac{jg_i}{\rho}&=0, \quad (i=\text{1, 3, and 4}). \end{aligned}$$ If we assume that the density $ \rho(x) $ is an even function, the parities and asymptotic behaviors of the four solutions are summarized as follows: $$\begin{aligned} (f_1,g_1)\, &: \ (\text{even},\text{odd}), \quad \text{bounded.} \label{eq:zerosolparity1} \\ (f_2,g_2)\, &: \ (\text{odd},\text{even}), \quad \text{linearly divergent.} \\ (f_3,g_3)\, &: \ (\text{even},\text{odd}), \quad \text{exponentially decreasing.} \\ (f_4,g_4)\, &: \ (\text{odd},\text{even}), \quad \text{exponentially increasing.} \label{eq:zerosolparity4} \end{aligned}$$ Keeping (\[eq:zerosolparity1\])–(\[eq:zerosolparity4\]) in mind will help in understanding the construction of the solution in the next section. More detailed forms of the asymptotes are given in \[sec:asymofzero\].\
Deriving all the solutions, including the divergent ones, might seem to be only of academic interest, but they are necessary for the exact energy expansion performed in the next section.\
We note that the four solutions can also be obtained without using parameter derivatives, as shown in \[app:zerosol\]. However, those expressions are almost useless for solving a scattering problem.

Proof – Step 2: Low-Energy Expansion {#sec:proof2}
====================================

In this section, with the use of the zero-energy solutions derived in the preceding section, we construct finite-energy solutions up to second order by an exact energy expansion method. Using them, we solve the scattering problem of linearized excitations and prove the main result.

Exact low-energy expansion
--------------------------

We construct the finite-energy solution by the energy expansion: $$\begin{aligned} \begin{pmatrix}u \\ v \end{pmatrix} = \sum_{n=0}^\infty \epsilon^n \begin{pmatrix} u^{(n)} \\ v^{(n)} \end{pmatrix}, \label{eq:expansion} \end{aligned}$$ or equivalently, $ (f,g) = \sum_{n=0}^\infty \epsilon^n (f^{(n)},g^{(n)}) $. An expansion using only the asymptotic forms was used in Ref. [@KatoNishiwakiFujita], and the completely exact expansion was first used in Ref. [@TakahashiKato] to discuss the tunneling behavior of Bogoliubov phonons in the presence of potential walls.
In order to prove the singular behavior at the critical point, the exact expansion is essentially necessary [@TakahashiKato; @TakahashiKatoConf].\
The substitution of (\[eq:expansion\]) into the Bogoliubov equation (\[eq:bogos\]) yields $$\begin{aligned} \mathcal{L}_\mu\begin{pmatrix}u^{(n)} \\ v^{(n)} \end{pmatrix} = \begin{pmatrix}u^{(n-1)} \\ v^{(n-1)} \end{pmatrix}, \quad (n=1,2,\dots), \label{eq:expansion2} \end{aligned}$$ or, in the $ (f,g) $ representation, $$\begin{aligned} \left( \rho \left(f^{(n)}\right)'+\frac{jg^{(n)}}{\rho} \right)'&=\frac{\mathrm{i}}{2}g^{(n-1)}, \label{eq:recurf} \\ \left( \left(g^{(n)}\right)'-\frac{\rho'}{\rho}g^{(n)} \right)'-2\rho F'(\rho)g^{(n)}-4j\left(f^{(n)}\right)' &=-2\mathrm{i}\rho f^{(n-1)}. \label{eq:recurg} \end{aligned}$$ This is equivalent to the zero-energy Bogoliubov equation with an inhomogeneous term $ (u^{(n-1)},v^{(n-1)}) $. Since we already know all four solutions of the homogeneous equation, we can solve it by variation of parameters. Since we want a solution of the form (\[eq:uvasymptotic\]), in the process of the energy expansion we must cancel the exponentially divergent terms and retain the power-law divergent terms, because $ \mathrm{e}^{\mathrm{i}k_ix}=1+\mathrm{i}k_ix-k_i^2x^2/2+\dotsb $ and $ k_1,k_2 \propto \epsilon $ at low energy. (See Eqs. (\[eq:kexpand1\]) and (\[eq:kexpand2\]).)

First order solutions
---------------------

The first order solutions can, in fact, be calculated directly without the use of variation of parameters. When we choose $ (u^{(0)},v^{(0)})=(u_1,v_1)=(\mathrm{i}\phi,\mathrm{i}\phi^*) $, Eq. (\[eq:1storder\]) gives the particular solution for $ (u^{(1)},v^{(1)}) $, that is, $$\begin{aligned} \begin{pmatrix}\tilde{u}_{\text{A}}^{(\text{finite }\epsilon)} \\ \tilde{v}_{\text{A}}^{(\text{finite }\epsilon)} \end{pmatrix} &:= \begin{pmatrix} \mathrm{i}\phi \\ \mathrm{i}\phi^* \end{pmatrix} + \frac{\mathrm{i}\epsilon}{\mu_\alpha}\begin{pmatrix} \phi_\alpha \\ -\phi_\alpha^* \end{pmatrix}+O(\epsilon^2) \intertext{and} \begin{pmatrix}\tilde{u}_{\text{B}}^{(\text{finite }\epsilon)} \\ \tilde{v}_{\text{B}}^{(\text{finite }\epsilon)} \end{pmatrix} &:= \begin{pmatrix} \mathrm{i}\phi \\ \mathrm{i}\phi^* \end{pmatrix} + \frac{\mathrm{i}\epsilon}{\mu_\beta}\begin{pmatrix} \phi_\beta \\ -\phi_\beta^* \end{pmatrix}+O(\epsilon^2) \end{aligned}$$ give two first order approximations to the finite-energy solution. From the Galilean symmetry of the NLS equation, we can also obtain the first order solution when we set $ (u^{(0)},v^{(0)})=(\phi_x,-\phi^*_x) $. (See \[app:symmetry\].) It is given by $$\begin{aligned} \begin{pmatrix}\tilde{u}_3^{(\text{finite }\epsilon)} \\ \tilde{v}_3^{(\text{finite }\epsilon)} \end{pmatrix} &:= \begin{pmatrix} \phi_x \\ -\phi^*_x \end{pmatrix} + \frac{\mathrm{i}\epsilon x}{2}\begin{pmatrix} \mathrm{i}\phi \\ \mathrm{i}\phi^* \end{pmatrix}+O(\epsilon^2).
\label{eq:1stu3v3} \end{aligned}$$ We also write them down in $ (f,g) $ notation: $$\begin{aligned} \begin{pmatrix}\tilde{f}_{\text{A}}^{(\text{finite }\epsilon)} \\ \tilde{g}_{\text{A}}^{(\text{finite }\epsilon)} \end{pmatrix} &:= \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \frac{\mathrm{i}\epsilon}{\mu_\alpha}\begin{pmatrix} S_\alpha \\ \rho_\alpha \end{pmatrix}+O(\epsilon^2), \label{eq:firsta}\\ \begin{pmatrix}\tilde{f}_{\text{B}}^{(\text{finite }\epsilon)} \\ \tilde{g}_{\text{B}}^{(\text{finite }\epsilon)} \end{pmatrix} &:= \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \frac{\mathrm{i}\epsilon}{\mu_\beta}\begin{pmatrix} S_\beta \\ \rho_\beta \end{pmatrix}+O(\epsilon^2), \label{eq:firstb}\\ \begin{pmatrix}\tilde{f}_3^{(\text{finite }\epsilon)} \\ \tilde{g}_3^{(\text{finite }\epsilon)} \end{pmatrix} &:= \begin{pmatrix} S_x \\ \rho_x \end{pmatrix} + \frac{\mathrm{i}\epsilon x}{2}\begin{pmatrix} 1 \\ 0 \end{pmatrix}+O(\epsilon^2). \label{eq:first3} \end{aligned}$$

Second order calculation – Identification of bounded solutions {#subsec:boundedsols}
--------------------------------------------------------------

Let us now calculate the second order. Before beginning, we outline the calculation. It is important to notice that only two of the four solutions are bounded, i.e., propagating waves for finite positive energy $ \epsilon $, as seen in Subsec. \[subsec:uniform\]. This means that, in the energy expansion, two of the four solutions are power-law divergent and the remaining two must be exponentially divergent. On the other hand, we have *three* linearly divergent solutions (\[eq:firsta\])–(\[eq:first3\]) from the previous subsection, so one of the three must be redundant. In fact, the solutions calculated from the variation of parameters show that all three diverge exponentially; the parity of the second-order exponentially divergent term is $ (f^{(2)},g^{(2)})=(\text{even},\text{odd}) $, so it cannot be canceled by adding the homogeneous solution $ (f_4,g_4) $. In order to eliminate this exponential divergence, we must choose special linear combinations of the three. Through this process, we obtain *two* power-law divergent solutions, as expected.\
Now, let us construct the solution of Eqs. (\[eq:recurf\]) and (\[eq:recurg\]) with $ n=2 $ by the above-mentioned method. In accordance with the method of variation of parameters, let the particular solution be $ (f^{(2)},g^{(2)})=\sum_{i=1,2,3,4}c_i(f_i,g_i) $, where the $ c_i $ are functions of $ x $. After a little calculation, the equations for the $ c_i $ are summarized as follows: $$\begin{aligned} 2\mathrm{i}\begin{pmatrix}c_1' \\ c_2' \end{pmatrix} &= \begin{pmatrix} f_2 & -g_2 \\ -f_1 & g_1 \end{pmatrix}\begin{pmatrix} g^{(1)} \\ f^{(1)} \end{pmatrix}, \label{eq:vopc12}\\ 2\mathrm{i}p\begin{pmatrix}c_3' \\ c_4' \end{pmatrix} &= \begin{pmatrix} f_4 & -g_4 \\ -f_3 & g_3 \end{pmatrix}\begin{pmatrix} g^{(1)} \\ f^{(1)} \end{pmatrix}. \label{eq:vopc34} \end{aligned}$$ Let us assume that $ f^{(1)} $ is a linearly divergent odd function and that $ g^{(1)} $ is a bounded even function. From the asymptotic behaviors (\[eq:zerosolparity1\])–(\[eq:zerosolparity4\]) and \[sec:asymofzero\], we can see that $ c_1(f_1,g_1),\,c_2(f_2,g_2) $, and $ c_3(f_3,g_3) $ are always power-law divergent functions. $ c_4 $ is an asymptotically constant function, so the product $ c_4(f_4,g_4) $ diverges exponentially in general. If, however, $ c_4 $ decays exponentially, $ c_4(f_4,g_4) $ becomes a power-law divergent function.
Therefore, in order for the particular solution to be a power-law divergent function, $ \lim_{|x|\rightarrow\infty}c_4=0 $, or equivalently, $$\begin{aligned} \int_{-\infty}^{\infty}c_4'\mathrm{d}x=0 \quad\leftrightarrow\quad \int_{-\infty}^{\infty} \left(f_3 g^{(1)}-g_3 f^{(1)}\right)\mathrm{d}x=0 \label{eq:nondiverge} \end{aligned}$$ must hold.\ Let us construct the solution that satisfies the above condition (\[eq:nondiverge\]). Consider the following linear combination: $$\begin{aligned} \begin{split} \begin{pmatrix}f_{\text{A}}^{(\text{finite }\epsilon)} \\ g_{\text{A}}^{(\text{finite }\epsilon)} \end{pmatrix} &:= \begin{pmatrix}\tilde{f}_{\text{A}}^{(\text{finite }\epsilon)} \\ \tilde{g}_{\text{A}}^{(\text{finite }\epsilon)} \end{pmatrix}+2\xi_{\text{A}}\begin{pmatrix}\tilde{f}_3^{(\text{finite }\epsilon)} \\ \tilde{g}_3^{(\text{finite }\epsilon)} \end{pmatrix} \\ &=\begin{pmatrix} 1+2\xi_{\text{A}} S_x \\ 2\xi_{\text{A}} \rho_x \end{pmatrix} + \frac{\mathrm{i}\epsilon}{\mu_\alpha}\begin{pmatrix} S_\alpha+\mu_\alpha\xi_{\text{A}}x \\ \rho_\alpha \end{pmatrix}+O(\epsilon^2). \end{split} \end{aligned}$$ $ \xi_{\text{A}} $ which satisfies Eq. (\[eq:nondiverge\]) is determined as $$\begin{aligned} &\int_{-\infty}^\infty\left[ f_3\rho_\alpha-g_3(S_\alpha+\mu_\alpha\xi_{\text{A}}x) \right]\mathrm{d}x=0 \\ \leftrightarrow \ & \xi_{\text{A}} = \frac{\rho_\infty\delta_\alpha+p N_\alpha}{N \mu_\alpha}. \end{aligned}$$ The same calculation can be performed with respect to the other parameter $ \beta $, so $ (f_{\text{B}}^{(\text{finite }\epsilon)},g_{\text{B}}^{(\text{finite }\epsilon)}) $ and $ \xi_{\text{B}} $ are defined in the same manner.\ Thus, we have two non-exponentially-divergent solutions $ (f_{\text{A}}^{(\text{finite }\epsilon)},g_{\text{A}}^{(\text{finite }\epsilon)}) $ and $ (f_{\text{B}}^{(\text{finite }\epsilon)},g_{\text{B}}^{(\text{finite }\epsilon)}) $. Furthermore, we can get coordinate-free solutions by making the linear combinations of these two. They are defined as follows: $$\begin{aligned} \begin{pmatrix} f_1^{(\text{finite }\epsilon)} \\ g_1^{(\text{finite }\epsilon)} \end{pmatrix} &:= \frac{1}{\xi_{\text{A}}-\xi_{\text{B}}}\left[ \xi_{\text{A}}\begin{pmatrix}f_{\text{B}}^{(\text{finite }\epsilon)} \\ g_{\text{B}}^{(\text{finite }\epsilon)} \end{pmatrix}-\xi_{\text{B}}\begin{pmatrix}f_{\text{A}}^{(\text{finite }\epsilon)} \\ g_{\text{A}}^{(\text{finite }\epsilon)} \end{pmatrix} \right], \\ \begin{pmatrix} f_3^{(\text{finite }\epsilon)} \\ g_3^{(\text{finite }\epsilon)} \end{pmatrix} &:= \frac{1}{2(\xi_{\text{A}}-\xi_{\text{B}})}\left[ \begin{pmatrix}f_{\text{A}}^{(\text{finite }\epsilon)} \\ g_{\text{A}}^{(\text{finite }\epsilon)} \end{pmatrix}-\begin{pmatrix}f_{\text{B}}^{(\text{finite }\epsilon)} \\ g_{\text{B}}^{(\text{finite }\epsilon)} \end{pmatrix} \right]-p\begin{pmatrix} f_1^{(\text{finite }\epsilon)} \\ g_1^{(\text{finite }\epsilon)} \end{pmatrix}. 
\end{aligned}$$ If we write $$\begin{aligned} \begin{pmatrix} f_i^{(\text{finite }\epsilon)} \\ g_i^{(\text{finite }\epsilon)} \end{pmatrix} = \sum_{n=0}^\infty \epsilon^n \begin{pmatrix} f_i^{(n)} \\ g_i^{(n)} \end{pmatrix} \quad (i=\text{1 or 3}), \end{aligned}$$ then the zeroth and first order terms are given by $$\begin{aligned} \begin{pmatrix} f_1^{(0)} \\ g_1^{(0)} \end{pmatrix} &= \begin{pmatrix} f_1 \\ g_1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \\ \begin{pmatrix} f_1^{(1)} \\ g_1^{(1)} \end{pmatrix} &= \frac{\mathrm{i}}{\rho_\infty[\delta,\mu]_{\alpha\beta}+p[N,\mu]_{\alpha\beta}}\begin{pmatrix} \rho_\infty[\delta,S]_{\alpha\beta}+p[N,S]_{\alpha\beta} \\ \rho_\infty[\delta,\rho]_{\alpha\beta}+p[N,\rho]_{\alpha\beta} \end{pmatrix}, \label{eq:f11g11}\\ \begin{pmatrix} f_3^{(0)} \\ g_3^{(0)} \end{pmatrix} &= \begin{pmatrix} f_3 \\ g_3 \end{pmatrix} = \begin{pmatrix} S_x-p \\ \rho_x \end{pmatrix}, \\ \begin{pmatrix} f_3^{(1)} \\ g_3^{(1)} \end{pmatrix} &= \frac{\mathrm{i}x}{2}\begin{pmatrix}1 \\ 0 \end{pmatrix}-\frac{\mathrm{i}N[\mu,j]_{\alpha\beta}}{2(\rho_\infty[\delta,\mu]_{\alpha\beta}+p[N,\mu]_{\alpha\beta})}\begin{pmatrix} f_2 \\ g_2 \end{pmatrix}-p\begin{pmatrix} f_1^{(1)} \\ g_1^{(1)} \end{pmatrix}\label{eq:f31g31}, \end{aligned}$$ and the second order terms are, from Eqs. (\[eq:vopc12\]) and (\[eq:vopc34\]), given by $$\begin{aligned} \begin{split} \begin{pmatrix} f_i^{(2)} \\ g_i^{(2)} \end{pmatrix} &= \left[ \frac{1}{2\mathrm{i}}\int_0^x\left(f_2g_i^{(1)}-g_2f_i^{(1)}\right)\mathrm{d}x \right]\begin{pmatrix}f_1 \\ g_1 \end{pmatrix}\\ &\quad+\left[ \frac{-1}{2\mathrm{i}}\int_0^xg_i^{(1)}\mathrm{d}x \right]\begin{pmatrix}f_2 \\ g_2 \end{pmatrix} \\ &\qquad+\left[ \frac{1}{2\mathrm{i}p}\int_0^x\left(f_4g_i^{(1)}-g_4f_i^{(1)}\right)\mathrm{d}x \right]\begin{pmatrix}f_3 \\ g_3 \end{pmatrix} \\ &\qquad\quad+\left[ \frac{-1}{2\mathrm{i}p}\int_0^x\left(f_3g_i^{(1)}-g_3f_i^{(1)}\right)\mathrm{d}x \right]\begin{pmatrix}f_4 \\ g_4 \end{pmatrix} \end{split}\label{eq:2ndbound} \end{aligned}$$ with $ i=\text{1 or 3} $.\ *Every bounded solution for finite positive energy $ \epsilon $ must be constructed as a linear combination of $ (f_1^{(\mathrm{finite \ }\epsilon)}, g_1^{(\mathrm{finite \ }\epsilon)}) $ and $ (f_3^{(\mathrm{finite \ }\epsilon)}, g_3^{(\mathrm{finite \ }\epsilon)}) $*, so must the solution of the scattering problem (\[eq:uvasymptotic\]). Our remaining work is to calculate their asymptotic behavior. Second order calculation – Asymptotics {#subsec:2ndasym} -------------------------------------- The evaluation of asymptotic form of the second order term (\[eq:2ndbound\]) is tedious but straightforward. All calculations can be carried out by using the expressions in \[sec:asymofzero\]. For brevity, we introduce the following symbols for the asymptotic forms of $ f^{(1)}_i $ and $ \int_0^x g^{(1)}_i\mathrm{d}x $ $ (i=1\text{ or }3.)$: $$\begin{aligned} f^{(1)}_i &\rightarrow \mathrm{i}(l_{i1}x+l_{i0}\operatorname{sgn}x), \label{eq:f1asym} \\ \int_0^xg^{(1)}_i\mathrm{d}x &\rightarrow \mathrm{i}(m_{i1}x+m_{i0}\operatorname{sgn}x). \label{eq:g1asym} \end{aligned}$$ Explicit expressions for $ l_{ij}s $ and $ m_{ij} $s are given in \[app:asymptote\]. 
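In a numerical implementation, the constants $ l_{i1}, l_{i0} $ (and similarly $ m_{i1}, m_{i0} $) can be read off from a computed first-order solution by a linear fit in the asymptotic region. The following Python sketch illustrates the idea on a mock profile with a known asymptote; the profile, the decay rate, and the cut $ x_{\rm cut} $ are placeholders, not output of the actual Bogoliubov problem.

```python
import numpy as np

# Mock first-order solution with asymptote i*(l1*x + l0*sgn(x)) plus an
# exponentially decaying correction (placeholder values).
l1_true, l0_true, kappa = 0.7, -0.2, 1.3
x = np.linspace(-40.0, 40.0, 4001)
f1 = 1j * (l1_true * x + l0_true * np.sign(x)) \
     + 0.5j * np.exp(-2.0 * kappa * np.abs(x))

# Least-squares fit of Im f1 = l1*x + l0*sgn(x) in the region |x| > x_cut
x_cut = 10.0
mask = np.abs(x) > x_cut
basis = np.column_stack([x[mask], np.sign(x[mask])])
l1_fit, l0_fit = np.linalg.lstsq(basis, f1[mask].imag, rcond=None)[0]
print(l1_fit, l0_fit)   # recovers l1_true and l0_true
```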
The asymptotic form up to second order is given by $$\begin{aligned} \begin{split} &\begin{pmatrix} f_i^{(\text{finite }\epsilon)} \\ g_i^{(\text{finite }\epsilon)} \end{pmatrix} \rightarrow \begin{pmatrix} \delta_{i1} \\ 0 \end{pmatrix} + \mathrm{i}\epsilon \begin{pmatrix} l_{i1}x+l_{i0}\operatorname{sgn}x \\ m_{i1} \end{pmatrix} \\ &\quad-\frac{\epsilon^2}{2\kappa^2}\begin{pmatrix}\frac{1}{\rho_\infty}\Bigl[ (p_{\text{L}}^2m_{i1}-jl_{i1})\frac{x^2}{2}+(p_{\text{L}}^2m_{i0}-jl_{i0})|x|\Bigr]\!+\!c_{i0}\! \\ (\rho_\infty l_{i1}-pm_{i1})x+(\rho_\infty l_{i0}-pm_{i0})\operatorname{sgn}x \end{pmatrix}+O(\epsilon^3). \end{split} \end{aligned}$$ Here $ c_{i0} $ is a certain constant whose form is not important here.\ Let $ (u^{(\text{finite }\epsilon)}_i,v^{(\text{finite }\epsilon)}_i) $ be the counterpart of $ (f^{(\text{finite }\epsilon)}_i,g^{(\text{finite }\epsilon)}_i) $ in $ (u,v) $ notation. If we write them in plane wave form $$\begin{aligned} \begin{pmatrix} u^{(\text{finite }\epsilon)}_1 \\ v^{(\text{finite }\epsilon)}_1 \end{pmatrix} &\rightarrow \frac{\mathrm{i}\sqrt{\rho_\infty}}{2pp_{\text{L}}}\left[ C_1^{\pm}w_1\left( x,\pm\tfrac{\delta}{2} \right)+C_2^{\pm}w_2\left( x,\pm\tfrac{\delta}{2} \right) \right] \quad (x\rightarrow\pm\infty) \\ \intertext{and} \begin{pmatrix} u^{(\text{finite }\epsilon)}_3 \\ v^{(\text{finite }\epsilon)}_3 \end{pmatrix} &\rightarrow \frac{\mathrm{i}\sqrt{\rho_\infty}}{2pp_{\text{L}}}\left[ D_1^{\pm}w_1\left( x,\pm\tfrac{\delta}{2} \right)+D_2^{\pm}w_2\left( x,\pm\tfrac{\delta}{2} \right) \right] \quad (x\rightarrow\pm\infty), \end{aligned}$$ then the energy dependence of coefficients can be obtained as follows: $$\begin{aligned} C_1^\pm &= p(p+p_{\text{L}}+2\kappa^2l_{11})\pm\mathrm{i}\epsilon p_{\text{L}}(p-p_{\text{L}})l_{10}+O(\epsilon^2), \label{eq:C1} \\ C_2^\pm &= p(-p+p_{\text{L}}-2\kappa^2l_{11})\pm\mathrm{i}\epsilon p_{\text{L}}(p+p_{\text{L}})l_{10}+O(\epsilon^2), \\ D_1^\pm &= 2p\kappa^2 l_{31}\pm\mathrm{i}\epsilon p_{\text{L}}\left( \frac{p_{\text{L}}N}{4\rho_\infty}+(p-p_{\text{L}})l_{30} \right)+O(\epsilon^2), \\ D_2^\pm &= -2p\kappa^2 l_{31}\pm\mathrm{i}\epsilon p_{\text{L}}\left( -\frac{p_{\text{L}}N}{4\rho_\infty}+(p+p_{\text{L}})l_{30} \right)+O(\epsilon^2). \label{eq:D2} \end{aligned}$$ The striking feature is that the zeroth order of $ D_1^\pm $ and $ D_2^\pm $ vanishes when the soliton velocity becomes the critical value, see Eq. (\[eq:l31\]). This is an immediate cause of the singular behavior of the reflection coefficient.\ A solution of the scattering problem (\[eq:uvasymptotic\]) is constructed as follows: $$\begin{aligned} \begin{pmatrix}u \\ v \end{pmatrix}=D_2^+\begin{pmatrix} u^{(\text{finite }\epsilon)}_1 \\ v^{(\text{finite }\epsilon)}_1 \end{pmatrix}-C_2^+\begin{pmatrix} u^{(\text{finite }\epsilon)}_3 \\ v^{(\text{finite }\epsilon)}_3 \end{pmatrix}, \end{aligned}$$ and coefficients $ t $ and $ r $ are given by $$\begin{aligned} t&=\frac{C_2^+D_1^+-D_2^+C_1^+}{C_2^+D_1^--D_2^+C_1^-}, \\ r&=\frac{C_2^+D_2^--D_2^+C_2^-}{C_2^+D_1^--D_2^+C_1^-}. 
\end{aligned}$$ Finally, moving to the particular coordinate system $ (\alpha,\beta)=(p,\rho_\infty) $ and using the expressions given in \[app:asymptote\], we obtain the main result $$\begin{aligned} -\frac{2\rho_\infty^2\delta_p}{p^2p_{\text{L}}^2}(C_2^+D_1^--D_2^+C_1^-) &= a P_p-\mathrm{i}\epsilon\left( b+b_1P_p \right)+O(\epsilon^2) \label{eq:final1}\\ \intertext{and} -\frac{2\rho_\infty^2\delta_p}{p^2p_{\text{L}}^2}(C_2^+D_2^--D_2^+C_2^-) &= -\mathrm{i}\epsilon\left( d+d_1P_p \right)+O(\epsilon^2) \label{eq:final2} \end{aligned}$$ with $$\begin{aligned} a &= 4p_{\text{L}}\rho_\infty, \\ b &= (N+pN_p)^2+(p_{\text{L}}N_p)^2, \\ b_1 &= N-\left( 1+\frac{2p^2}{\kappa^2} \right)[N,j]_{p\rho_\infty}, \\ d &= (N+pN_p)^2-(p_{\text{L}}N_p)^2, \\ d_1 &= N+[N,j]_{p\rho_\infty}. \label{eq:final7} \end{aligned}$$ This gives the result in Subsec. \[subsec:mainresult\].

Discussions and Concluding Remarks {#sec:summary}
==================================

In this final section, we give some discussion and future perspectives.

Local density fluctuation at the critical point
-----------------------------------------------

In the system with a potential wall, the emergence of a zero-energy local density fluctuation was a key to the destabilization of superflow[@TakahashiKato; @TakahashiKatoConf]. In the present case of solitons, an amplification of the zero-energy local density fluctuation also occurs at the critical point, but its mathematical structure differs slightly. Let us examine this in detail.\
Before beginning, one should recall that $ f $ and $ g $ have the meaning of phase and density fluctuations. (See Subsec. \[subsec:fg\] again.) So, the solution $ (u,v)\propto(\phi,\phi^*) \,\leftrightarrow\, (f,g)\propto(1,0) $ has no density fluctuation. If another solution, e.g., a parameter-derivative solution, is added, a non-zero density fluctuation arises.\
For a superflow state against the potential barrier[@BaratoffBlackburnSchwartz; @Hakim; @PhamBrachet], it is known[@TakahashiKato; @TakahashiKatoConf] that $$\begin{aligned} \lim_{\epsilon\rightarrow0}\begin{pmatrix} u \\ v \end{pmatrix} &= \begin{pmatrix} \phi \\ \phi^* \end{pmatrix} & \text{(for non-critical states)}. \\ \lim_{\epsilon\rightarrow0}\begin{pmatrix} u \\ v \end{pmatrix} &= \begin{pmatrix} \phi \\ \phi^* \end{pmatrix}+c \dfrac{\partial }{\partial \varphi}\begin{pmatrix} \phi \\ -\phi^* \end{pmatrix} & \text{(for a critical state)}. \end{aligned}$$ Here $ \varphi $ is a Josephson phase difference and $ c $ is a certain constant. Thus, the density fluctuation represents the anomaly of the critical point.\
In the case of solitons, however, because of spontaneous translational symmetry breaking, the density-fluctuating zero mode $ (f_3,g_3) $ (or equivalently, $ (u_3,v_3) $) always exists, and this mode indeed contributes to the solution of the scattering problem: $$\begin{aligned} \lim_{\epsilon\rightarrow0}\begin{pmatrix} u \\ v \end{pmatrix} = c_1 \begin{pmatrix} u_1 \\ v_1 \end{pmatrix} +c_3 \begin{pmatrix} u_3 \\ v_3 \end{pmatrix} \quad \text{(for non-critical states).} \end{aligned}$$ Here $ c_1=\lim_{\epsilon\rightarrow0}(-C_2^+) $ and $ c_3=\lim_{\epsilon\rightarrow0}D_2^+ $ are constants. (See Subsec. \[subsec:2ndasym\] for more detailed expressions.) Therefore, the local density fluctuation always exists regardless of whether the soliton is stable or unstable. However, as the soliton velocity approaches the critical one, the ratio $ c_3/c_1 $ grows, and it diverges at the critical point.
That is to say, at the critical velocity state, $$\begin{aligned} \lim_{\epsilon\rightarrow0}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u_3 \\ v_3 \end{pmatrix} \quad \text{(for a critical state)} \end{aligned}$$ holds. Thus, we can say that the amplification of the local density fluctuation also plays a key role in the destabilization of solitons.

Conclusions and Future perspectives
-----------------------------------

In this paper, we have solved the scattering problem of linearized excitations (Bogoliubov phonons) against a dark soliton in a generalized NLS system. We have shown exactly that the perfect transmission of a zero-energy phonon vanishes when the soliton velocity reaches the critical value, and that near the critical velocity state the reflection coefficient obeys a saddle-node-type universal scaling law. Our result is of fundamental importance because it provides an exact example of saddle-node scaling in infinite-dimensional time-reversible Hamiltonian systems. Through the proof, we have also obtained the exact zero-energy solutions and their finite-energy generalizations. In their derivation, the use of two kinds of parameter derivatives has played an important role. This method will also be useful for elucidating the low-energy physics of other systems, such as higher-dimensional or multi-component systems.\
Although we have given an example of the scaling law, the derivation of a normal form of the saddle-node bifurcation remains unsolved. A similar problem exists for the supercurrent-flowing system in the presence of an obstacle[@Hakim; @PhamBrachet]. Compared to the problem of solitons treated in this paper, the system with an obstacle seems a little more difficult, because all four zero-energy solutions of the linearized equation have not yet been obtained, except at the critical velocity state[@TakahashiKato]. These issues are left as future work.\
In this paper we have treated the soliton-phonon scattering problem. One interesting generalization is the multi-soliton scattering process. It was shown that the presence of two solitons makes the reflection coefficient of phonons non-trivial even in the integrable cubic case[@KolbyshovaSadreev]. The soliton collision problem in the non-integrable generalized KdV equation was recently investigated[@MartelMerleInventMath; @MartelMerleAnnMath]. A similar problem for dark solitons in non-integrable NLS systems is also important for understanding the generic solitonic character of dark solitons.\
The emergence of a singularity in the scattering properties at a critical state separating the stable and unstable branches is expected to be a universal feature of more general systems, because emergent or amplified zero modes can affect the transmission properties of low-energy modes. Since the scattering problem of linearized excitations is easier and more analytically tractable than the existence proof of an unstable mode or the construction of a Lyapunov function, it will be useful for “conjecturing” the criterion for the stability of solitons, even though it is not a direct proof of the stability itself. For example, to the best of the present author’s knowledge, the stability criterion for dark solitons in multi-component NLS systems is an untouched problem. In ultracold atoms, a binary mixture of Rb atoms has been realized[@HoShenoy; @Myatt], and it is known that the corresponding coupled NLS equation becomes integrable only in the Manakov case.
Also, Bose condensates with a spin degree of freedom have been created in optical traps[@StamperKurn], and the Wadati group has studied spin-1 solitons at the integrable point[@TsuchidaWadati; @IedaMiyakawaWadati; @UchiyamaIedaWadati]. When the coupling constant deviates from the integrable one, it is no longer ensured that a dark soliton is always stable. So it may be an interesting problem to study the stability of solitons through the scattering problem of linearized excitations.

#### Acknowledgment

The author is grateful to Y. Kato and M. Kunimi for helpful comments. This work was supported by a Grant-in-Aid for JSPS Fellows (No. 22-10058).

Identities for parameter derivatives {#app:idnty}
====================================

Here we derive a few identities for parameter derivatives. The NLS equation (\[eq:nls2\]) expressed in terms of the density $ \rho(x) $ is given by $$\begin{aligned} \rho_{xx}=2\rho\left( \frac{\rho_x^2}{4\rho^2}+\frac{j^2}{\rho^2}-\mu+F(\rho) \right). \end{aligned}$$ From the parameter derivative of Eq. (\[eq:momconsrv\]) and the above equation, one obtains $$\begin{aligned} \rho_\alpha\rho_{xx}-\rho_{\alpha x}\rho_x = 2(\rho_\infty-\rho)(2p j_\alpha-\mu_\alpha \rho). \label{eq:syspar} \end{aligned}$$ Here recall that $ j=\rho_\infty p $ and $ \mu = p^2+F(\rho_\infty) $. The same expression follows by the replacement $ \alpha\rightarrow\beta $. From these, the important identity $$\begin{aligned} g_3g_2'-g_2g_3' = -4p(\rho_\infty-\rho) \label{eq:wsimplify} \end{aligned}$$ follows, where $ g_2 $ and $ g_3 $ are defined in Subsec. \[subsec:zerosol\].\
Dividing both sides of (\[eq:syspar\]) by $ \rho $ and integrating from $ -\infty $ to $ +\infty $, we obtain $$\begin{aligned} 2j_\alpha\delta+\mu_\alpha N=-\frac{\partial }{\partial \alpha}\int_{-\infty}^\infty\frac{(\rho_x)^2\mathrm{d}x}{2\rho}. \end{aligned}$$ Since the same expression holds for $ \beta $, we obtain the second important identity $$\begin{aligned} \begin{split} & (2j_\alpha\delta+\mu_\alpha N)_\beta=(2j_\beta\delta+\mu_\beta N)_\alpha \\ \leftrightarrow\quad & 2[j,\delta]_{\alpha\beta}+[\mu,N]_{\alpha\beta}=0. \label{eq:idntydev} \end{split} \end{aligned}$$

Derivation of the fourth zero-energy solution {#app:roo}
=============================================

Let us derive the expression for the fourth zero-energy solution $ (f_4,g_4) $. Eliminating $ f $ from Eqs. (\[eq:bogofg1\]) and (\[eq:bogofg2\]), one obtains a third-order differential equation for $ g $: $$\begin{aligned} g'''+A g'+ B g=0. \end{aligned}$$ Here $ A $ and $ B $ are functions whose explicit forms are not important here. Knowing two solutions $ g_2 $ and $ g_3 $, the remaining one is obtained by reduction of order: $$\begin{aligned} \tilde{g}_4 &:= g_2 \int_0^x\frac{g_3\mathrm{d}x}{w^2}-g_3\int_0^x\frac{g_2\mathrm{d}x}{w^2} \intertext{with} w &= g^{}_3g_2'-g^{}_2g_3'. \end{aligned}$$ Since $ w $ is given by (\[eq:wsimplify\]), the above expression simplifies to $$\begin{aligned} 16p^2\tilde{g}_4 = \left[ \frac{1}{\rho_\infty-\rho}-\frac{1}{\rho_\infty-\rho_0} \right]g_2- g_3\int_0^x\frac{g_2\mathrm{d}x}{(\rho_\infty-\rho)^2}. \end{aligned}$$ Here $ \rho_0:=\rho(0) $. The solution which cancels the constant (\[eq:zeroBogoconst\]) is given by the following linear combination with $ g_2 $: $$\begin{aligned} g_4:=-\frac{\rho_0}{\rho_\infty-\rho_0}g_2-16p^2\rho_\infty \tilde{g}_4, \end{aligned}$$ and $ f_4 $ is calculated from $ f_4' = -jg_4/\rho^2 $.
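As a minimal check of the reduction-of-order quadrature (an illustration only, not the actual $ g $-equation), the following Python sketch applies the same formula to the toy equation $ g'''=0 $, for which $ g_2=1 $ and $ g_3=x $ are two known solutions; the quadrature then returns a third, independent solution.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Toy equation g''' = 0 (A = B = 0) with known solutions g2 = 1 and g3 = x.
x = np.linspace(-5.0, 5.0, 2001)
g2 = np.ones_like(x)
g3 = x.copy()
dg2 = np.gradient(g2, x)
dg3 = np.gradient(g3, x)
w = g3 * dg2 - g2 * dg3          # here w = -1 everywhere

def cumint(y):
    """Integral from 0 to x (x = 0 is the central grid point)."""
    I = cumulative_trapezoid(y, x, initial=0.0)
    return I - I[len(x) // 2]

g4 = g2 * cumint(g3 / w**2) - g3 * cumint(g2 / w**2)
# g4 = -x^2/2, i.e. a third linearly independent solution of g''' = 0
print(np.max(np.abs(g4 + 0.5 * x**2)))   # ~0 up to quadrature error
```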
Asymptotics of zero-energy solutions {#sec:asymofzero} ==================================== Let us write the asymptotic form of $ \rho(x) $ as $$\begin{aligned} \rho(x) = \rho_\infty - \rho_\infty^{(1)}\mathrm{e}^{-2\kappa|x|}+\dotsb, \end{aligned}$$ then a half of the healing length $ \kappa $ is given by $$\begin{aligned} \kappa=\sqrt{p_{\text{L}}^2-p^2}. \end{aligned}$$ From (\[eq:nlsdsasym\]) or (\[eq:phasecond\]), an asymptotic form of the phase $ S(x) $ is given by $$\begin{aligned} S(x) \rightarrow px+(\operatorname{sgn}x)\frac{\delta}{2}. \end{aligned}$$ Using the invariant nature of the ratio (\[eq:invratio\]), one can show $$\begin{aligned} \frac{[\mu,p]_{\alpha\beta}}{[\mu,j]_{\alpha\beta}} &= \frac{[\mu,p]_{p\rho_\infty}}{[\mu,j]_{p\rho_\infty}} = \frac{p_{\text{L}}^2}{\rho_\infty \kappa^2}, \\ \frac{[\mu,\rho_\infty]_{\alpha\beta}}{[\mu,j]_{\alpha\beta}} &= \frac{[\mu,\rho_\infty]_{p\rho_\infty}}{[\mu,j]_{p\rho_\infty}} = -\frac{p}{\kappa^2}. \end{aligned}$$ With the use of the above relations, the asymptotic forms of zero energy solutions are evaluated as follows: $$\begin{aligned} f_2 & \rightarrow \frac{p_{\text{L}}^2}{\rho_\infty \kappa^2}x+(\operatorname{sgn}x)\frac{1}{2}\frac{[\mu,\delta]_{\alpha\beta}}{[\mu,j]_{\alpha\beta}}, \\ g_2 & \rightarrow -\frac{p}{\kappa^2}, \\ f_3 & \rightarrow \frac{p\rho_\infty^{(1)}}{\rho_\infty}\mathrm{e}^{-2\kappa|x|}, \\ g_3 & \rightarrow (\operatorname{sgn}x) 2\kappa \rho_\infty^{(1)}\mathrm{e}^{-2\kappa|x|}, \\ f_4 & \rightarrow -(\operatorname{sgn}x)\frac{p^2}{4\rho_\infty^{(1)}\kappa^3}\mathrm{e}^{2\kappa|x|}, \\ g_4 & \rightarrow \frac{j}{2\rho_\infty^{(1)}\kappa^2}\mathrm{e}^{2\kappa|x|}. \end{aligned}$$ Symmetry consideration on zero-modes {#app:symmetry} ==================================== In this appendix we consider how a symmetry plays a role in finding a solution of a linearized equation. Particularly, we emphasize that the information we can obtain from a Galilean symmetry is not about the zero energy solution but about the first order correction of a finite energy solution.\ Let $ \phi(x,t) $ be a solution of time-dependent NLS equation (\[eq:nls\]) and $ \tilde{\phi}(x,t,\alpha) $ be a family of solutions with continuous parameter $ \alpha $ such that $ \tilde{\phi}(x,t,0)=\phi(x,t) $. Here we assume that $ \alpha $ is free from any system parameter. (The parameter derivative introduced in Subsec. \[subsec:paradera\] is a more general concept, because it can depend on the system parameters which appear, e.g., in a boundary condition (\[eq:nlsdsasym\]).) Differentiation of NLS equation (\[eq:nls\]) with respect to $ \alpha $ immediately yields $$\begin{aligned} (\mathrm{i}\partial_t-\mathcal{L})\begin{pmatrix} \phi_\alpha \\ -\phi^*_\alpha \end{pmatrix} = 0 \end{aligned}$$ with a definition $ \phi_\alpha:= [\partial_\alpha\tilde{\phi}]_{\alpha=0} $. Thus, $ \phi_\alpha $ is a solution of a *time-dependent* linearized equation (\[eq:tdbogo\]) in the presence of the condensate wavefunction $ \phi(x,t) $. Particularly, if one sets $ \tilde{\phi}(x,t,\alpha)=\phi(x,t)\mathrm{e}^{\mathrm{i}\alpha} $, $ \tilde{\phi}(x,t,\alpha)=\phi(x+\alpha,t) $, and Eq. 
(\[eq:galilei\]), which represent a global phase symmetry, a translational symmetry, and a Galilean symmetry, respectively, one obtains the following particular solutions: $$\begin{aligned} (\mathrm{i}\partial_t-\mathcal{L})\begin{pmatrix} \mathrm{i}\phi \\ \mathrm{i}\phi^* \end{pmatrix} &= 0, \\ (\mathrm{i}\partial_t-\mathcal{L})\begin{pmatrix} \phi_x \\ -\phi^*_x \end{pmatrix} &= 0, \\ (\mathrm{i}\partial_t-\mathcal{L})\begin{pmatrix} 2t\phi_x-\mathrm{i}x\phi \\ -2t\phi^*_x-\mathrm{i}x\phi^* \end{pmatrix} &= 0. \end{aligned}$$ It is a result for a time-dependent equation. In order to interpret these results to a *stationary* problem, let us set $ \phi(x,t)=\phi(x)\mathrm{e}^{-\mathrm{i}\mu t} $. From the assumption stated above, $ \mu $ does not depend on $ \alpha $. (On the other hand, $ \mu $ can depend on the parameter in Subsec. \[subsec:paradera\].) The above equations are rewritten as $$\begin{aligned} \mathcal{L}_\mu\begin{pmatrix} \mathrm{i}\phi \\ \mathrm{i}\phi^* \end{pmatrix} &= 0, \label{eq:appsymzero1} \\ \mathcal{L}_\mu\begin{pmatrix} \phi_x \\ -\phi^*_x \end{pmatrix} &= 0, \label{eq:appsymzero2} \\ 2\mathrm{i}\begin{pmatrix} \phi_x \\ -\phi^*_x \end{pmatrix}-2t\mathcal{L}_\mu\begin{pmatrix} \phi_x \\ -\phi^*_x \end{pmatrix}+\mathcal{L}_\mu\begin{pmatrix} \mathrm{i}x\phi \\ \mathrm{i}x\phi^* \end{pmatrix} &=0. \label{eq:appsymzero3} \end{aligned}$$ Here $ \mathcal{L}_\mu $ defined by Eqs. (\[eq:bogos\]) and (\[eq:bogos2\]) is a differential operator of the stationary Bogoliubov equation. (\[eq:appsymzero1\]) and (\[eq:appsymzero2\]) immediately give zero-energy solutions, whereas we need a consideration on Eq. (\[eq:appsymzero3\]). Since this equation must hold for any time $ t $, both coefficients of $ t^0 $ and $ t^1 $ must vanish. The equation for the $ t^1 $-coefficient reproduces (\[eq:appsymzero2\]). From the $ t^0 $-coefficient, one obtains $$\begin{aligned} -\frac{1}{2}\mathcal{L}_\mu\begin{pmatrix} x\phi \\ x\phi^* \end{pmatrix} = \begin{pmatrix} \phi_x \\ -\phi^*_x \end{pmatrix}. \end{aligned}$$ It represents Eq. (\[eq:expansion2\]) with $ n=1 $ and $ (u^{(0)},v^{(0)})=(\phi_x,-\phi^*_x) $. Therefore, it gives the first order solution of (\[eq:1stu3v3\]). Thus, a Galilean symmetry gives us a piece of information about first order solutions, and gives no new information about zero-energy solutions. A linearly divergent zero-energy solution, which exists even when the system does not have a translational or a Galilean symmetry, can be obtained by two kinds of parameter-derivatives stated in Subsec. \[subsec:paradera\].\ We further note that Eq. (\[eq:1stu3v3\]) itself does not describe a physically meaningful non-divergent solution. As we show in Subsec. \[subsec:boundedsols\], we must construct a linear combination of solutions so that the second order term is free from exponential divergence. A correct first order solution with non-divergent character is given by (\[eq:f31g31\]). Formulae for calculation of asymptotes {#app:asymptote} ====================================== Let us derive some formulae for $ l_{ij} $s and $ m_{ij} $s defined by Eqs. (\[eq:f1asym\]) and (\[eq:g1asym\]). 
Using the relations $$\begin{aligned} [X,j]_{\alpha\beta} &= \rho_\infty[X,p]_{\alpha\beta}+p[X,\rho_\infty]_{\alpha\beta}, \label{eq:xj}\\ [X,\mu]_{\alpha\beta} &= 2p[X,p]_{\alpha\beta}+F'(\rho_\infty)[X,\rho_\infty]_{\alpha\beta}, \label{eq:xmu} \end{aligned}$$ one can show $$\begin{aligned} \rho_\infty[X,\mu]_{\alpha\beta}-2p[X,j]_{\alpha\beta}&=2\kappa^2[X,\rho_\infty]_{\alpha\beta}. \label{eq:xjxmu} \end{aligned}$$ Here remember that $ p_{\text{L}} $ and $ F'(\rho_\infty) $ are related to each other by (\[eq:defofpl\]), and $ \kappa^2=p_{\text{L}}^2-p^2 $. Using it and (\[eq:idntydev\]), one obtains $$\begin{aligned} \rho_\infty[\delta,\mu]_{\alpha\beta}+p[N,\mu]_{\alpha\beta}&=2\kappa^2[\delta,\rho_\infty]_{\alpha\beta}. \label{eq:deltamu} \end{aligned}$$ From (\[eq:f11g11\]), (\[eq:f31g31\]), and (\[eq:deltamu\]), we can obtain $$\begin{aligned} l_{11} &= \frac{\rho_\infty[\delta,p]_{\alpha\beta}+p[N,p]_{\alpha\beta}}{2\kappa^2[\delta,\rho_\infty]_{\alpha\beta}}, \\ m_{11} &= \frac{\rho_\infty[\delta,\rho_\infty]_{\alpha\beta}+p[N,\rho_\infty]_{\alpha\beta}}{2\kappa^2[\delta,\rho_\infty]_{\alpha\beta}}, \\ l_{10} &= \frac{p[N,\delta]_{\alpha\beta}}{4\kappa^2[\delta,\rho_\infty]_{\alpha\beta}}, \\ m_{10} &= \frac{\rho_\infty[\delta,N]_{\alpha\beta}}{4\kappa^2[\delta,\rho_\infty]_{\alpha\beta}}, \\ l_{31} &= \frac{1}{2}-\frac{N[\mu,p]_{\alpha\beta}}{4\kappa^2[\delta,\rho_\infty]_{\alpha\beta}}-pl_{11}, \\ m_{31} &= -\frac{N[\mu,\rho_\infty]_{\alpha\beta}}{4\kappa^2[\delta,\rho_\infty]_{\alpha\beta}}-pm_{11}, \\ l_{30} &= -\frac{N[\mu,\delta]_{\alpha\beta}}{8\kappa^2[\delta,\rho_\infty]_{\alpha\beta}}-pl_{10}, \\ m_{30} &= -\frac{N[\mu,N]_{\alpha\beta}}{8\kappa^2[\delta,\rho_\infty]_{\alpha\beta}}-pm_{10}. \end{aligned}$$ Further, using them and (\[eq:xj\]), (\[eq:xmu\]), and (\[eq:xjxmu\]), $$\begin{aligned} 2p l_{11}+F'(\rho_\infty)m_{11} &= 1, \label{eq:l11m11} \\ \rho_\infty l_{11}+pm_{11}&=-\frac{1}{2}\frac{[N,\rho_\infty]_{\alpha\beta}}{[\delta,\rho_\infty]_{\alpha\beta}}, \\ 2p l_{10}+F'(\rho_\infty)m_{10} &= -\frac{1}{2}\frac{[N,\delta]_{\alpha\beta}}{[\delta,\rho_\infty]_{\alpha\beta}}, \\ \rho_\infty l_{10}+pm_{10}&=0, \label{eq:l10m10} \\ 2p l_{31}+F'(\rho_\infty)m_{31} &= 0, \label{eq:l31m31}\\ \rho_\infty l_{31}+pm_{31}&=-\frac{1}{2}\frac{[P,\rho_\infty]_{\alpha\beta}}{[\delta,\rho_\infty]_{\alpha\beta}}, \\ 2p l_{30}+F'(\rho_\infty)m_{30} &=\frac{1}{2}\frac{[pN,\delta]_{\alpha\beta}}{[\delta,\rho_\infty]_{\alpha\beta}}, \\ \rho_\infty l_{30}+pm_{30}&=\frac{N}{4}. \label{eq:l30m30} \end{aligned}$$ We can eliminate $ m_{ij} $s by Eqs. (\[eq:l11m11\]), (\[eq:l10m10\]), (\[eq:l31m31\]), and (\[eq:l30m30\]), and the expressions (\[eq:C1\])–(\[eq:D2\]) are derived in that way.\ Now we consider the particular coordinate $ (\alpha,\beta)=(p,\rho_\infty) $. $ l_{ij} $s are then given by $$\begin{aligned} l_{11}&=-\frac{j\delta_p+p_{\text{L}}^2N_p}{2\rho_\infty\kappa^2\delta_p}, \label{eq:l11} \\ l_{10}&=\frac{p[N,\delta]_{p\rho_\infty}}{4\kappa^2\delta_p}, \\ l_{31}&=-\frac{p_{\text{L}}^2 P_p}{2\rho_\infty\kappa^2\delta_p}, \label{eq:l31}\\ l_{30}&=-\frac{N[\mu,\delta]_{p\rho_\infty}}{8\kappa^2\delta_p}-pl_{10}. \end{aligned}$$ From $ P = -pN-\rho_\infty\delta $ and (\[eq:idntydev\]), we obtain $$\begin{aligned} \delta_p &= -\frac{N+pN_p+P_p}{\rho_\infty}, \\ \delta_{\rho_\infty} &= -\frac{pN-\kappa^2N_p+pP_p+jN_{\rho_\infty}}{\rho_\infty^2}. \label{eq:deltarhoinfty} \end{aligned}$$ Derivatives of $ \delta $ are eliminated by using them. With the use of Eqs. 
(\[eq:l11\])–(\[eq:deltarhoinfty\]), we can obtain the final result (\[eq:final1\])–(\[eq:final7\]). Zero-energy solutions without using parameter derivatives {#app:zerosol} ========================================================= We can write down the general zero-energy solution without using parameter derivatives: $$\begin{aligned} \begin{pmatrix}u\\ v \end{pmatrix}&=\sum_{i=1,2,3,4} c_i \begin{pmatrix}u_i^{} \\ -u_i^* \end{pmatrix}, \\ u_1 &= \mathrm{i}\phi, \\ u_2 &= \phi_x, \\ u_3 &= \mathrm{i}\phi\int\!\frac{\mathrm{d}x}{\rho}+4\mathrm{i}j^2\phi\int\!\frac{\mathrm{d}x}{\rho\rho_x^2}-4j\phi_x\int\!\frac{\mathrm{d}x}{\rho_x^2}, \\ u_4 &= \mathrm{i}j\phi\int\!\frac{\mathrm{d}x}{\rho_x^2}-\phi_x\int\!\frac{\rho\mathrm{d}x}{\rho_x^2}. \end{aligned}$$ These expressions, however, have a fatal flaw; Namely, they have artificial singularities at the origin (or more precisely, the points where $ \rho'(x)=0 $.) It means that these expressions do not give *global* solutions which continuously connect the solutions from $ x=-\infty $ to $ x=+\infty $, instead, only give *local* solutions which satisfy the differential equation at each point. For this reason they are not so useful in scattering problems, which are equivalent to the determination of global behavior of given solutions. On the other hand, the parameter-derivative solutions given in Subsec. \[subsec:zerosol\] have no singularities if the density $ \rho(x) $ does not cross the value $ \rho_\infty $. Stationary solutions of CQNLS equation {#app:cqnls} ====================================== In the CQNLS system, the equation for momentum conservation (\[eq:genjm\]) becomes $$\begin{aligned} \frac{(\rho_x)^2}{4}=-j^2+j_m\rho-\mu\rho^2+a_1\rho^3+a_2\rho^4. \label{eq:cqdiff} \end{aligned}$$ Here we want general stationary solutions, so we do not concentrate on the solution with the asymptotic form (\[eq:nlsdsasym\]), and therefore $ \mu, j, $ and $ j_m $ need not be given by (\[eq:cp\]), (\[eq:dsj\]), and (\[eq:dsjm\]), respectively. If the right hand side of (\[eq:cqdiff\]) is factored as $ a_2\prod_{i=1}^4(\rho-\rho_i) $, the solution of this differential equation is given by the following cross-ratio form: $$\begin{aligned} \operatorname{sn}^2\left( \kappa x | m \right)&=\frac{(\rho(x)-\rho_1)(\rho_3-\rho_2)}{(\rho(x)-\rho_2)(\rho_3-\rho_1)} \\ \intertext{with} \kappa&=\sqrt{a_2(\rho_4-\rho_2)(\rho_3-\rho_1)}, \label{eq:kappaapp} \\ m&=\frac{(\rho_4-\rho_1)(\rho_3-\rho_2)}{(\rho_4-\rho_2)(\rho_3-\rho_1)}, \end{aligned}$$ or equivalently, $$\begin{aligned} \rho(x)=\frac{\rho_2(\rho_3-\rho_1)-\rho_1(\rho_3-\rho_2)\operatorname{sn}^2\left( \kappa x | m \right)}{(\rho_3-\rho_1)-(\rho_3-\rho_2)\operatorname{sn}^2\left( \kappa x | m \right)}. \label{eq:rhoapp} \end{aligned}$$ Here we use Mathematica’s definition for Jacobi elliptic functions.[^2]\ For the case of the dark soliton solution, $ \rho_3=\rho_4=\rho_\infty $ holds, so $ m=1 $ follows and the sn function reduces to the tanh function. $ \rho_1 $ and $ \rho_0:=\rho_2 $ are given by the roots of $$\begin{aligned} a_2\rho^2+(2a_2\rho_\infty+a_1)\rho-p^2=0. \label{eq:quadracqnls} \end{aligned}$$ $ \rho_0 $ (\[eq:rho0cqnls\]) is the root of Eq. (\[eq:quadracqnls\]) with a plus sign. $ \rho_1 $ can be eliminated in two ways; From Eq. (\[eq:kappaapp\]) or from the fact that $ \rho_0 $ and $ \rho_1 $ solve Eq. (\[eq:quadracqnls\]), $$\begin{aligned} \rho_\infty-\rho_1=\frac{\kappa^2}{a_2(\rho_\infty-\rho_0)}, \qquad \rho_0\rho_1=-\frac{p^2}{a_2}. 
\label{eq:eliminaterho1} \end{aligned}$$ The expression (\[eq:rhoapp\]) is then rewritten as $$\begin{aligned} \rho(x)=\rho_\infty-\frac{\kappa^2(\rho_\infty-\rho_0)\operatorname{sech}^2\kappa x}{\kappa^2-a_2(\rho_\infty-\rho_0)^2\tanh^2\kappa x}. \end{aligned}$$ The following expressions are also useful for calculation of the phase shift $ \delta $ and the particle number of soliton $ N $: $$\begin{aligned} S_x=\frac{j}{\rho}&= p+\frac{(\rho_0\kappa)[ p(\rho_\infty-\rho_0)\tanh\kappa x]'}{(\rho_0\kappa)^2+[ p(\rho_\infty-\rho_0)\tanh\kappa x]^2}, \label{eq:Sxinapp} \\ \rho-\rho_\infty &= -\frac{1}{\sqrt{a_2}}\frac{\kappa [\!\sqrt{a_2}(\rho_\infty-\rho_0)\tanh\kappa x]'}{\kappa^2-[\!\sqrt{a_2}(\rho_\infty-\rho_0)\tanh\kappa x ]^2}, \end{aligned}$$ where Eq. (\[eq:Sxinapp\]) is obtained by eliminating both $ a_2 $ and $ \rho_1 $ with the use of (\[eq:eliminaterho1\]). Here we can use the formulae $$\begin{aligned} \frac{ay'}{a^2+y^2} &= \Bigl(\tan^{-1}\frac{y}{a}\Bigr)', \\ \frac{ay'}{a^2-y^2} &= \Bigl(\tanh^{-1}\frac{y}{a}\Bigr)'. \end{aligned}$$ Thus the content in Subsec. \[subsec:cqnls\] is reproduced.\ [^1]: Here we do not consider the derivative with respect to the parameters included in the definition of the nonlinear term, e.g., $ a_1 $ and $ a_2 $ in the CQNLS equation, because these parameter-derivatives do not yield a solution of the linearized equation. [^2]: The Wolfram Functions Site, http://functions.wolfram.com/
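As a quick cross-check of the CQNLS appendix expressions (illustrative only; the parameter values below are placeholders chosen so that a dark soliton exists), the sech/tanh profile can be verified numerically against the factored momentum-conservation law $ (\rho_x)^2/4 = a_2(\rho-\rho_1)(\rho-\rho_0)(\rho-\rho_\infty)^2 $.

```python
import numpy as np

a1, a2 = -1.0, 1.0                 # CQNLS coefficients (illustrative)
rho_inf, p = 0.45, 0.1             # background density and soliton velocity

# rho_0: root of a2*rho^2 + (2*a2*rho_inf + a1)*rho - p^2 = 0 with the plus sign
b = 2.0 * a2 * rho_inf + a1
rho0 = (-b + np.sqrt(b**2 + 4.0 * a2 * p**2)) / (2.0 * a2)
rho1 = -p**2 / (a2 * rho0)                           # from rho0*rho1 = -p^2/a2
kappa = np.sqrt(a2 * (rho_inf - rho0) * (rho_inf - rho1))

x = np.linspace(-15.0, 15.0, 4001)
th, sh = np.tanh(kappa * x), 1.0 / np.cosh(kappa * x)
rho = rho_inf - kappa**2 * (rho_inf - rho0) * sh**2 / (
    kappa**2 - a2 * (rho_inf - rho0)**2 * th**2)

lhs = np.gradient(rho, x)**2 / 4.0
rhs = a2 * (rho - rho1) * (rho - rho0) * (rho - rho_inf)**2
print(np.max(np.abs(lhs - rhs)))    # small; limited only by the grid spacing
```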
--- abstract: 'New sensitive CO(2-1) observations of the 30 Doradus region in the Large Magellanic Cloud are presented. We identify a chain of three newly discovered molecular clouds, which we name KN1, KN2 and KN3, lying within 2–14pc in projection from the young massive cluster R136 in 30 Doradus. Excited H$_2$ 2.12$\mu$m emission is spatially coincident with the molecular clouds, but ionized Br$\gamma$ emission is not. We interpret these observations as the tails of pillar-like structures whose ionized heads point towards R136. Based on infrared photometry, we identify a new generation of stars forming within this structure.' author: - 'Venu M. Kalari' - 'M[ó]{}nica Rubio' - 'Bruce G. Elmegreen' - 'Viviana V. Guzm[á]{}n' - 'Cinthya N. Herrera' - Hans Zinnecker title: | Pillars of creation amongst destruction:\ Star formation in molecular clouds near R136 in 30 Doradus ---

Introduction {#sec:intro}
============

30 Doradus is a giant H[II]{} region in the Large Magellanic Cloud (LMC). The LMC is a Local Group dwarf galaxy that lies at a distance of 50kpc (Pietrzy[ń]{}ski et al. 2013), and has a mean stellar metallicity ($Z$) half that of the Sun (Rolleston et al. 2002). 30 Doradus hosts the young massive cluster (YMC) R136. R136 is a $\sim$1.5-3Myr YMC that encloses a total cluster mass in excess of 10$^5$$M_{\odot}$ within 10pc (Selman & Melnick 2013). The cluster contains roughly 200 massive stars ($>$8$M_{\odot}$) within a central region less than 6pc across, whose radiation and mechanical feedback profoundly impact the surrounding medium (Schneider et al. 2017, subm.). R136 is the most massive YMC in our local neighbourhood that can be adequately resolved spatially (at 50kpc, the nominal distance to R136, 1$\arcsec$$\approx$0.25pc), enabling us to observe individual objects at the star and molecular clump scale. This makes R136 an ideal laboratory to examine how feedback from massive stars affects further star formation (e.g. Dale et al. 2012). The mechanical and radiation output from R136 has created a central cavity by sweeping up the surrounding molecular clouds (labelled as clouds 6 and 10 in Fig.1), which extend up to 100pc along the northeast-southwest axis (Pellegrini et al. 2010). We adopt the cloud nomenclature of Johansson et al. (1998). Brightly illuminated arcs delineate the interfaces between the cold gas and the ionizing radiation, where subsequent generations of stars are thought to have been triggered (Walborn et al. 2002). Studies at optical (De Marchi et al. 2011; Kalari et al. 2014), near-infrared (nIR; Rubio et al. 1998; Brandner et al. 2001), mid-infrared (mIR; Whitney et al. 2008; Gruendl & Chu 2009; Walborn et al. 2013), far-infrared (fIR; Seale et al. 2014) and sub-millimeter (Johansson et al. 1998; Indebetouw et al. 2013) wavelengths have identified evidence for active star formation throughout the 30 Doradus nebula, consistent with the idea of multiple star formation episodes. We focus on the stapler nebula, which lies 2-14pc from R136 in projection (see Fig.1). The stapler nebula is the H[II]{} region including and surrounding the stapler-shaped dark cloud that is seen in silhouette in the optical near R136. The nebula spans an area of 1.1$'\times$0.35$'$ centred on $\alpha$=05$^h$38$^m$40$^s$, $\delta\,=\,-$69$^{\rm \circ}$05$'$36$''$ and is elongated with a position angle of 35$^{\rm \circ}$. A candidate young stellar object (YSO) has been reported at the edge of the elongated dark cloud by Walborn et al. (2013; marked as S5 in that paper).
The YSO is close to, but not coincident with, a region of high density ($n>$10$^{6}$cm$^{-3}$) reported by Rubio et al. (2009) using CS line observations. Known infrared excess objects, some of which are thought to be disc/envelope-bearing young stellar objects (YSOs), are dotted along the edge of the dark cloud according to Rubio et al. (1998; their Figure 3). The literature evidence for dense molecular gas and YSOs in the stapler nebula, lying near R136, indicates that star formation may be ongoing and deserves further study. In this paper we discuss the properties of the molecular clouds in the stapler nebula, and examine whether new stars are being formed in these clouds. This paper is organised as follows. In Section 2 we describe the data used in this study. The results from the analysis of the CO(2-1) line observations are presented in Section 3. We discuss the results obtained from nIR emission line images of the stapler nebula in Section 4. Based on archival infrared photometry, we identify YSOs within the stapler nebula in Section 5. The picture obtained from our results is described in Section 6. In Section 7, a brief summary of our paper is presented along with future work arising from our results.

![image](30dorfigcocloseupcssd.jpg){width="49.50000%"} ![image](30dorcocube.pdf){width="49.50000%"} ![image](30dorfigcoh2.pdf){width="49.50000%"} ![image](30dorfigcobrg1.pdf){width="49.50000%"} \[fig:COa\]

Data
====

[CO(2-1)]{} observations
------------------------

We conducted a deep CO(2-1) survey centred on the 30 Doradus Nebula using the Swedish-ESO Submillimetre Telescope (SEST) between March 1997 and January 2001. SEST was a 15m radio telescope located at La Silla, Chile. The angular resolution at the CO(2-1) frequency of 230GHz is 23$\arcsec$, which corresponds to a projected size of 5.6pc at the distance of the LMC. Observations were conducted in position-switching mode using a reference point free of CO(2-1) emission (at $\alpha$=05$^{h}$37$^{m}$54$^s$, $\delta$=$-69^{\rm \circ}$04$'24''$) for sky subtraction. The backend narrow-band high-resolution spectrometer (HRS) was used with a bandwidth of 80MHz and a frequency resolution of 41.7kHz, which translates to a velocity resolution of 0.054kms$^{-1}$ at the frequency of CO(2-1). The data were reduced using the GILDAS software [^1], with linear or third-order polynomials used for baseline fitting. The resultant spectra were smoothed to a velocity resolution of 0.25kms$^{-1}$. The rms noise achieved in a single channel is 0.07K after 240s of integration. We mapped the 30 Doradus region with 10$\arcsec$ spacings and detected CO(2-1) emission across 30 Doradus, including the stapler nebula, where CO(1-0) emission had not previously been detected (Pineda et al. 2009; Johansson et al. 1998). In Fig.1, the CO(2-1) contours are overlaid on a [*Hubble Space Telescope*]{} (HST) three-colour optical $BVI$H$\alpha$ image of 30 Doradus, where the position of the CO(2-1) emission with respect to the R136 cluster and its ionized surroundings can be visualized. Our observations represent a five-fold increase in sensitivity at twice the spatial resolution of previous CO(1-0) observations across the 30 Doradus nebula (see Pineda et al. 2009). Higher angular resolution observations of the 30 Doradus nebula are presented in Indebetouw et al. (2013), Anderson et al. (2014) and Nayak et al. (2016), but those data do not cover the region studied here.
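The unit conversions quoted above are straightforward to reproduce; the short Python check below (illustrative arithmetic only, using the CO(2-1) rest frequency of 230.538GHz) recovers the 0.054kms$^{-1}$ channel width and the $\sim$5.6pc projected beam size.

```python
import numpy as np

c_kms = 2.998e5            # speed of light [km/s]
nu_CO21 = 230.538e9        # CO(2-1) rest frequency [Hz]
dnu = 41.7e3               # HRS channel width [Hz]
print("channel width [km/s]:", c_kms * dnu / nu_CO21)                 # ~0.054

arcsec_rad = np.pi / (180.0 * 3600.0)
d_lmc_pc = 50.0e3          # adopted LMC distance [pc]
print("23 arcsec beam at 50 kpc [pc]:", 23.0 * arcsec_rad * d_lmc_pc)  # ~5.6
```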
Near-infrared emission line imaging
-----------------------------------

We obtained nIR imaging of the 30 Doradus region in the H$_2$ 2.12$\mu$m narrowband filter and the $K$s broadband filter using the ISAAC (Infrared Spectrometer and Array Camera) imager mounted on the 8m Melipal (UT3) telescope of the Very Large Telescope (VLT) situated at Paranal, Chile (program ID 078.C-0487A). Toward the R136 region, the mosaic covered a 5$\arcmin$$\times$5$\arcmin$ field. The average seeing measured from the images is $\sim$0.8$''$–1.1$''$. Observations were taken in an ABBA sequence with the sky image 30$'$ from the stapler nebula at $\alpha$=05$^h$39$^m$01$^s$, $\delta\,=\,-$69$^{\rm \circ}$42$'$36$''$, in a region free of nebulosity within the ISAAC field of view, ensuring adequate sky subtraction. The total integration time on source was 1 hour for the narrowband filter and 100s for the $K$s broadband filter. Data were reduced using the ISAAC pipelines, with flux calibration carried out using Persson (1998) standards. Astrometric calibration was refined using 2MASS. Bright stars are saturated in the emission line images and have a non-linear CCD response, meaning they cannot be completely subtracted. This leads to circular bright residuals (and in some cases vertical bleeding) in the emission line images. We excluded these regions from further analysis by masking them. We used the Br$\gamma$ 2.165$\mu$m narrowband flux-calibrated image from Yeh et al. (2015). The image was obtained using the NOAO Extremely Wide Field Infrared Imager (NEWFIRM) mounted on the 4m Victor Blanco telescope located at the Cerro Tololo Inter-American Observatory, Chile. The final Gaussian-convolved resolution of the Br$\gamma$ narrowband image is 1$\arcsec$, comparable to that of the VLT H$_2$ 2.12$\mu$m narrowband image.

Archival photometry
-------------------

mIR photometry of point sources in the stapler nebula was estimated by Gruendl & Chu (2009) from images at 3.6, 4.5, 5.8 and 8.0$\mu$m taken using the [*Spitzer*]{} space telescope IRAC (Infrared Array Camera) as part of the [*Spitzer*]{} legacy program SAGE (Spitzer Survey of the Large Magellanic Cloud: Surveying the Agents of a Galaxy’s Evolution; Meixner et al. 2006). The full width at half maximum (FWHM) of the images is 1.6$\arcsec$, 1.7$\arcsec$, 1.7$\arcsec$ and 2$\arcsec$, respectively. Alternative photometry of point sources from the same images is also presented by Whitney et al. (2008), but Whitney et al. (2008) miss a significant fraction of point sources in the 30 Doradus region, as their study was aimed at detecting reliable sources throughout the LMC via pipeline analysis. Gruendl & Chu (2009) detect objects in the dense and nebulous surroundings using detailed aperture photometry (see Section 6.3 of Gruendl & Chu (2009) for a comparison). Photometry in the fIR at 100 and 160$\mu$m, and at 250 and 350$\mu$m, of point-like and extended sources in the stapler nebula is given in Seale et al. (2014), using images taken by the [*Herschel*]{} space telescope PACS (Photoconductor Array Camera and Spectrometer) and SPIRE (Spectral and Photometric Imaging Receiver) instruments, respectively, as part of the [*Herschel*]{} large program HERITAGE (HERschel Inventory of The Agents of Galaxy Evolution; Meixner et al. 2013). The FWHM for these images are 8$\arcsec$, 12$\arcsec$, 18$\arcsec$ and 25$\arcsec$, respectively. We utilize the photometry from Gruendl & Chu (2009) and Seale et al. (2014) in this study.
Molecular clouds ================ Figure \[fig:CO\] shows the CO(2-1) integrated line emission over the velocity interval 235–270 kms$^{-1}$ as contours, superimposed on the HST composite $BVI$H$\alpha$ image. Strong CO(2-1) emission from Cloud 10 (northeast region of the map) and Cloud 6 (southwest region of the map) previously reported by Johansson et al. (1998) is seen along the northeast-southwest axis. We observe previously undetected CO(2-1) emission originating from the region located between Clouds 6 and 10, close to R136 (see Fig.\[fig:COa\]). The emission extends along the southeast-northwest direction in projection. The distance of the emission from R136 in projection is between 2pc (emission is located to the north of R136) to 14pc away (in the northwest direction from R136). This CO emission is approximately five times weaker than the CO emission observed in Clouds 6 and 10. This emission is coincident spatially to the stapler nebula in the optical image of Fig.\[fig:COa\]a. The emission is resolved in CO(2-1) velocity as a chain of small and weak clouds (Fig.\[fig:COa\]a,b). We define the stapler region by the extent of the CO(2-1) emission, which goes beyond the visible stapler shaped dark cloud in the optical. This boundary is marked in Fig.1 with a dashed rectangle. We named the CO clouds Knots (KN), as they form a chain separated in velocity, as demonstrated by the position velocity slice across the stapler nebula, and the CO(2-1) spectra of each individual cloud shown in Fig.\[fig:spectra\]. By analysing the radial velocities and spatial distribution we found that the KN clouds are composed of three clouds we name KN1, KN2, and KN3 in order of decreasing Right Ascension (labelled in Fig.\[fig:COa\]a). ![[*Top*]{}: The stapler nebula blown up from the HST mosaic in Fig.1, with the stapler region is outlined with the dashed white box. The outermost CO(2-1) contours integrated over the 235–240kms$^{-1}$, and 245–250kms$^{-1}$ are shown in magenta and red respectively, with the second outermost contour of the 240–245kms$^{-1}$ also shown. The solid white line marks the position of the slice shown in the middle panel. [*Middle*]{}: Position velocity slice of the CO(2-1) cube in linear scale along the direction of the slit given by the solid white line in the top panel. The slit cuts along the centre of the stapler nebula. [*Bottom*]{}: CO(2-1) spectra of each cloud extracted from the region bounded by the contours shown in top panel. The dashed lines for each cloud is it’s $V_{\rm{lsr}}$ given in Table 1. []{data-label="fig:spectra"}](30dorfigcocloseupcss.jpg "fig:"){width="49.50000%"} ![[*Top*]{}: The stapler nebula blown up from the HST mosaic in Fig.1, with the stapler region is outlined with the dashed white box. The outermost CO(2-1) contours integrated over the 235–240kms$^{-1}$, and 245–250kms$^{-1}$ are shown in magenta and red respectively, with the second outermost contour of the 240–245kms$^{-1}$ also shown. The solid white line marks the position of the slice shown in the middle panel. [*Middle*]{}: Position velocity slice of the CO(2-1) cube in linear scale along the direction of the slit given by the solid white line in the top panel. The slit cuts along the centre of the stapler nebula. [*Bottom*]{}: CO(2-1) spectra of each cloud extracted from the region bounded by the contours shown in top panel. The dashed lines for each cloud is it’s $V_{\rm{lsr}}$ given in Table 1. 
[]{data-label="fig:spectra"}](pv2.jpg "fig:"){width="49.50000%"} ![[*Top*]{}: The stapler nebula blown up from the HST mosaic in Fig.1, with the stapler region outlined with the dashed white box. The outermost CO(2-1) contours integrated over the 235–240kms$^{-1}$, and 245–250kms$^{-1}$ are shown in magenta and red respectively, with the second outermost contour of the 240–245kms$^{-1}$ also shown. The solid white line marks the position of the slice shown in the middle panel. [*Middle*]{}: Position velocity slice of the CO(2-1) cube in linear scale along the direction of the slit given by the solid white line in the top panel. The slit cuts along the centre of the stapler nebula. [*Bottom*]{}: CO(2-1) spectra of each cloud extracted from the region bounded by the contours shown in the top panel. The dashed line for each cloud is its $V_{\rm{lsr}}$ given in Table 1. []{data-label="fig:spectra"}](spectra.pdf "fig:"){width="49.50000%"} Physical properties ------------------- After identifying each cloud, we computed the central velocity ($V_{\rm lsr}$) in the local standard of rest frame, and the velocity width ($\sigma_{v}$), by fitting a Gaussian profile to the total cloud spectrum. The major and minor axis sizes of the profiles, in conjunction with the rms size of the beam, were used to compute the deconvolved radius ($r$). Given that the uncertainties on the Gaussian fits of the CO spectra of each cloud are around 30%, we estimate the uncertainties on $r$ to be 15%. ### CO luminosity and mass The CO cloud luminosity is computed as: $$L_{\rm CO}\, [K{\rm kms}^{-1}{\rm pc}^{2}] = \rm{D}^2 \int_{\Omega} \int_v T_{\rm mb}(\nu) \, d\nu \,d\Omega$$ where D is the distance to the source in pc (adopted as 50kpc), $T_{\rm mb}$ the main beam temperature, which is the antenna temperature corrected for the efficiency of the antenna ($T_{\rm mb} = T_{\textrm{A}}/\eta$), and $\Omega$ is the solid angle subtended by the source. The H$_2$ mass of the clouds can be calculated from the observed CO(1-0) luminosity assuming a linear conversion between the velocity integrated CO emission ($I_{\textrm{CO}}$) and the H$_2$ column density ($N_{{\rm H}_2}$); $$N_{\rm H{_2}} = X_{\textrm{CO}}\, [cm^{-2}(K{\rm kms^{-1}})^{-1}]\,\, I_{\textrm{CO}}\,[K{\rm kms^{-1}}],$$ where $X_{\textrm{CO}}$ is the CO-to-H$_2$ conversion factor (Bolatto et al. 2013; Roman-Duval et al. 2014). The total mass of H$_2$ ($M_{{\rm H}_2}$) is, $$M_{\textrm{H}_2} \,[M_\odot] = \alpha_{\textrm{CO}} {\rm D}^2\, [{\rm Mpc}] S_{\textrm{CO}},$$ where $$\alpha_{\textrm{CO}}\, [M_\odot {\rm Mpc}^{-2} ({\it Jy}{\rm kms}^{-1})^{-1}] = X_{\textrm{CO}} \frac{m_{\textrm{H}_2} c^2}{2 k \nu^2},$$ and the flux density $S_{\textrm{CO}}$ is, $$S_{\textrm{CO}}\,\, [{\it Jy}{\rm kms}^{-1}] = \int S_{\nu} \, dv.$$ The molecular gas mass is multiplied by 1.36 to include the helium contribution. This method is calibrated for the $J$=1$\rightarrow$0 transition. We use a ratio between the CO $J$=2$\rightarrow$1 and $J$=1$\rightarrow$0 lines of 0.87, found by Johansson et al. (1998) for 30 Doradus Cloud 10 (the north-eastern cloud; see Fig.1), to estimate the CO(1-0) luminosity. The conversion factor depends on both metallicity and the ambient radiation field intensity (Maloney 1988). As a consequence of strong radiation fields and poor self-shielding in low metallicity environments, the CO molecule is photo-dissociated as it does not self-shield like the H$_2$ molecule.
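As an illustration of the conversion described above, the following Python sketch turns a CO(2-1) luminosity into an H$_2$ (plus helium) mass. It is not the exact calculation behind Table 1: the physical constants, the handling of the 0.87 line ratio and the 1.36 helium factor are spelled out here as assumptions, and small differences in these choices move the result at the level of a few tens of per cent. The default $X_{\textrm{CO}}$ is the LMC value adopted in the next paragraph.

```python
PC_CM   = 3.0857e18        # parsec in cm
M_H2_G  = 2.0 * 1.6735e-24 # mass of an H2 molecule in g
M_SUN_G = 1.989e33         # solar mass in g

def h2_mass_from_co21(L_co21, X_co=8.8e20, r21=0.87, helium=1.36):
    """H2 (+He) mass in Msun from a CO(2-1) luminosity in K km/s pc^2.

    L_co21 : CO(2-1) luminosity [K km/s pc^2]
    X_co   : CO-to-H2 conversion factor [cm^-2 (K km/s)^-1]
    r21    : assumed CO(2-1)/CO(1-0) line ratio
    helium : multiplicative correction for helium
    """
    L_co10 = L_co21 / r21                       # scale to the J=1-0 line
    # N(H2) = X_co * I_co, integrated over the cloud area gives
    # M = X_co * L_co * (pc in cm)^2 * m(H2) / Msun
    alpha = X_co * PC_CM**2 * M_H2_G / M_SUN_G  # Msun per (K km/s pc^2)
    return helium * alpha * L_co10

# e.g. the CO(2-1) luminosity of 30Dor-KN1 from Table 1
print(f"{h2_mass_from_co21(154.2):.2e} Msun")
```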
Therefore, there is less CO compared to the H$_2$ abundance in the Magellanic Clouds than in the Galaxy. This translates into higher values of $X_{\textrm{CO}}$ in the LMC compared to the Galaxy. In general the conversion factor increases with higher radiation fields and decreases with higher metallicities. We adopt the median conversion factor in the LMC compiled from the literature by Bolatto et al. (2013) of $$X_{\textrm{CO}} = 8.8 \pm 0.3 \times 10^{20} \,[\textrm{cm}^{-2} \textrm{(K kms}^{-1})^{-1}].$$ The adopted $X_{\textrm{CO}}$ factor is 3.8 times larger than the canonical Galactic $X_{\textrm{CO}}$ of Bolatto et al. (2013). Our adopted value is similar to that found by Herrera et al. (2013) when comparing molecular and dust mass estimates in the LMC N11 region; but higher than the value of $6 \times 10^{20} \,\textrm{cm}^{-2} \textrm{(K kms}^{-1})^{-1}$ reported by Roman-Duval et al. (2014). The resulting cloud masses if we adopted the Roman-Duval et al. (2014) $X_{\textrm{CO}}$ would be reduced by $\sim20$%. ### Virial mass The virial mass ($M_{\rm{vir}}$) was computed assuming that each cloud is spherical, is in virial equilibrium and has a density ($\rho$) profile of the form $\rho \propto r^{-1}$. The virial mass is given by $$M_{\textrm{vir}} \,[M_\odot] = 190 \sigma_{v}^2\,[{\rm kms}^{-1}]\, r\, [{\rm pc}] \label{eq:virial_mass2}$$ according to MacLaren et al. (1988). The results of our analysis for each cloud are given in Table 1. From Table \[tab:clouds\_properties\] we see that the virial masses are a factor 3-6 larger than the masses derived from the integrated CO emission for the resolved molecular clouds in 30 Doradus. Therefore, the conversion factor between the H$_2$ column density and the CO intensity is, on average, 4.5 times the Galactic value. This translates into a conversion factor $X_{\textrm{CO}} = 1.0 \pm 0.4 \times 10^{21}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$. [@israel97] estimated the H$_2$ column densities towards CO clouds in the LMC and SMC from far-infrared surface brightness, and derived, in units of $10^{21}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$, $X_{\textrm{CO}} = 12 \pm 2$ and $X_{\textrm{CO}} = 1.3 \pm 0.2$ for the SMC and LMC, respectively. [@israel03] found, in units of $10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$, $X_{\textrm{CO}} = 4.8 \pm 1.0$ and $X_{\textrm{CO}} = 4.3 \pm 0.6$ for the SMC and LMC, respectively. [@garay02] derived $X_{\textrm{CO}} = 6.4 \times 10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ for the Complex-37 in the LMC. [@johansson98] also derived conversion factors close to the canonical value for the Galaxy to a factor of a few higher, although he found these values using CO($1-0$) observations whose sensitivity is lower than our CO($2-1$) data. There is a small difference between the conversion factor for 30 Doradus found in this work and the previously mentioned values found by other studies in the Magellanic clouds. However, one would expect to find a different conversion factor in the clouds of 30 Doradus than in the rest of the clouds of the LMC due to the extreme conditions in the environment, specially the strong radiation fields that photo-dissociate the CO molecule leaving large H$_2$ envelopes untraced by CO. Analysis -------- ### Larson’s Laws Molecular clouds in virial equilibrium follow the empirical power law relation $\sigma_{v} \propto r^{\alpha}$ (Larson 1981). 
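For reference, the sketch below evaluates the MacLaren et al. (1988) virial relation above, together with the canonical Galactic size–linewidth relation discussed in the following paragraph, for the cloud parameters of Table 1. It is a simple illustration of the arithmetic; small differences from the tabulated virial masses reflect rounding of $\sigma_{v}$ and $r$.

```python
def virial_mass(sigma_v, r):
    """MacLaren et al. (1988) virial mass for a rho ~ 1/r sphere.

    sigma_v : velocity dispersion [km/s]
    r       : deconvolved cloud radius [pc]
    returns : virial mass [Msun]
    """
    return 190.0 * sigma_v**2 * r

def larson_sigma(r):
    """Canonical Galactic size-linewidth relation, sigma_v = 0.72 r^0.5."""
    return 0.72 * r**0.5

# Table 1 values for the KN clouds: (sigma_v [km/s], r [pc])
clouds = {"KN1": (4.0, 3.93), "KN2": (4.4, 3.35), "KN3": (5.0, 2.96)}
for name, (sig, r) in clouds.items():
    print(name,
          f"M_vir ~ {virial_mass(sig, r):.1e} Msun,",
          f"Galactic relation predicts sigma_v ~ {larson_sigma(r):.1f} km/s")
```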
$\alpha$ is generally agreed to be between 0.4–0.5 based on numerous molecular cloud surveys of the Milky Way, and external normal and dwarf Galaxies (Bolatto et al. 2008; Heyer et al. 2009). The observed value of $\alpha$ is oft explained by turbulence (McKee & Ostriker 2007; Lombardi et al. 2010). The velocity dispersion is considered to be a measure of the internal dynamics within the clouds, because the observed line profiles, averaged over a cloud, have Gaussian shapes and the line widths are broader than the thermal line widths. It is a reasonable assumption to make that the line profiles are produced by turbulent motions of the gas inside the clouds (Solomon et al. 1987). Figure\[fig:slw\]a displays the position of the KN clouds in the $\sigma_{v}$–$r$ diagram. Also shown are results from Heyer et al. (2009) summarising the canonical relation found for Galactic clouds of $\sigma_{v} = 0.72\,r^{0.5}$; and clouds in the 30 Doradus region (from Pineda et al. 2009; having a resolution of 43$\arcsec$; and from Nayak et al. (2016) having a resolution of 2$\arcsec$), and in the LMC (excluding 30 Doradus) from Wong et al. (2011) whose study had a spatial resolution of 45$\arcsec$. Note that the Pineda et al. (2009) and Nayak et al. (2016) clouds do not cover the central region of 30 Doradus near R136, and there are no spatial overlaps between the KN clouds and the clouds they identify. The molecular clouds associated with the stapler nebula lie above the canonical $\sigma_{v}$–$r$ relation for Galactic clouds. Interestingly, the other detected clouds in 30Doradus from Pineda et al. (2009) and Nayak et al. (2016) also lie above the canonical relation. Although the departure from the relation is not at the same scale as the KN clouds, this might still indicate that the observed $\sigma_{v}$–$r$ relation might be a function of distance from R136, and a more global property of 30Doradus. The position of the KN molecular clouds in the $\sigma_{v}$–$r$ diagram implies either of two scenarios; the clouds are collapsing or expanding, or the observed line widths are the manifestation of external pressures that keep the clouds in equilibrium. Since collapse velocities are generally only $\sim40$% larger than equilibrium velocity dispersions for a self-gravitating cloud, the observed large linewidths for the sizes of the KN clouds are probably not the result of collapse (Ballesteros-Paredes et al. 2011). Neither are they likely to result from expansion because there is no obvious shell or hole structure that usually accompanies expansion. ----------- ------------------- -------------------------------- --------------- ------------------ --------------------- --------------- --------------------- --------------------- Name R.A. Dec. 
$V_{\rm lsr}$ $\sigma_{\rm v}$ $L_{\rm CO}$ $r$ $M_{\rm H_2}$ $M_{\rm vir}$ (J2000) (J2000) (kms$^{-1}$) (kms$^{-1}$) (Kkms$^{-1}$pc$^2$) (pc) (10$^3\,M_{\odot}$) (10$^3\,M_{\odot}$) 30Dor-KN1 5$^h$38$^m$45$^s$ $-$69$^{\rm \circ}$06$'$00$''$ 237.2$\pm$0.1 4.0$\pm$0.7 154.2$\pm$21.3 3.93$\pm$0.58 2.7$\pm$0.2 10.9$\pm$3.3 30Dor-KN2 5$^h$38$^m$40$^s$ $-$69$^{\rm \circ}$05$'$30$''$ 244.4$\pm$0.2 4.4$\pm$0.1 243.6$\pm$3.7 3.35$\pm$0.5 4.2$\pm$0.3 11.25$\pm$1.8 30Dor-KN3 5$^h$38$^m$36$^s$ $-$69$^{\rm \circ}$05$'$30$''$ 250.0$\pm$0.2 5.0$\pm$0.3 320.8$\pm$44.3 2.96$\pm$0.44 5.3$\pm$0.4 12.83$\pm$2.5 ----------- ------------------- -------------------------------- --------------- ------------------ --------------------- --------------- --------------------- --------------------- \ We examine whether the observed large $\sigma_{v}$ are the manifestation of external pressures necessary to keep the clouds in equilibrium. In Fig.\[fig:slw\]b, we plot the mass surface density ($\Sigma_{\rm H_2}$) against the ${\sigma_v}^2/r$ value of the KN clouds, along with those in the Milky Way from Heyer et al. (2009); in 30 Dor from Pineda et al. (2009) and Nayak et al. (2016); and in the LMC from Wong et al. (2011). Isolated virial clouds confined by self-gravity follow a linear relation in the ${\sigma_v}^2/r$ vs. $\Sigma_{\rm H_2}$ plot (for e.g. see Heyer et al. 2009). The values of clouds from the literature fall along this expectation. However, the KN clouds alone depart from the expected relation, and are likely confined by external pressure and not in virial equilibrium. Following the simplifying assumptions of Field et al. (2011), we plot isobars of external pressure (in terms of $P/k_{\rm B}$) according to their prescription. The lines reflect the external pressure necessary to confine clouds for a given ${\sigma_v}^2/r$ assuming clouds with a centrally concentrated internal density structure approximated by hydrostatic equilibrium. From Fig.\[fig:slw\]b, we see that external pressures of $\sim\,10^6$cm$^{-3}$K are necessary to keep the KN clouds confined. These values are in general agreement with Chevance et al. (2016), who report that the stapler nebula is located in a region with gas pressure $\sim$ 0.85-1.2$\times 10^6$cm$^{-3}$K, with the peak found in KN2 (see Fig. 15 in Chevance et al. 2016). ### Variation in properties as a function of distance from R136 The $V_{\rm lsr}$, and $M_{{\rm H}_2}$ of each cloud are plotted as a function of projected distance from R136 in Fig.\[fig:distanced\]. The velocity of the clouds increases as a function of projected distance from R136. The velocity of the KN3 cloud loosely matches the radial velocities of the stars within R136[^2]. The KN1 cloud is closest in projection to R136 and is blue shifted with respect to the mean velocity of stars in the cluster. This suggests the KN1 cloud, (and likely KN2 and KN3 clouds as suggested by the lack of background stars) lie slightly in front of the cluster. The clouds appear to be moving away from the cluster as function of projected distance from it (Fig.\[fig:distanced\]a). The $M_{{\rm H}_2}$ of each cloud also increases as the projected distance from R136 increases (Fig.\[fig:distanced\]b). The KN1 cloud mass is approximately 2 times lower than the KN3 cloud. 
Under the assumption that the molecular clouds detected in CO(2-1) were initially all of similar densities, and photoionization from R136 alone is evaporating the molecular cloud, then KN1 must be closer to R136 (considering it as the only source of external photoionization) because the ionizing flux decreases to the inverse square with distance. Line of sight distances ----------------------- Our main results concerning the detection of cold molecular gas near R136 suffers from possible projection effects. Although the cold molecular gas detected in the CO(2-1) observations lies within 2-14pc in projection of the R136 cluster, the actual distance may likely be further in the line of sight direction allowing for the clouds to possibly survive photoionisation. Chevance et al. (2016) analyse the physical distance of the CO gas in 30 Doradus to the stars, by comparing the incident radiation field on the gas modelled against fIR observations in fine structure lines of the emitted radiation field measured from the known massive star population. By comparing the luminosity of the photodissociation region (which forms the interface between the photoionizing radiation from the stars and the gas) against their predictions, they are able to constrain the line of sight distance of the photodissociation regions from R136 with uncertainties of 4pc. Based on their results (Figure 20 in Chevance et al. 2016), the stapler nebula lies less than 20pc away in the line of sight direction from R136, which itself lies at the centre of a sphere of about 6pc in radius. This distance agrees well with the line of sight distance measured from line ratios of ionized lines in optical spectra by Pellegrini et al. (2010). From their Fig.12, we find that the distance of the KN clouds is less than 20pc away in our line of sight from R136. We also consider that if the CO gas is close to R136, and coincident with the dust, there is likely to be a gradient (reflecting the gradient in the projected $V_{\rm lsr}$) in the dust temperature, and the total fIR luminosity arising from the photodissociation region. Such a gradient is visible in both the dust temperature maps (Guzman 2010), and also in the total fIR luminosity which peaks at KN2, and decreases towards KN3 (see Fig.1 of Chevance of et al. 2016). Therefore, although our observations are unable to resolve the line of sight distances to the KN clouds from R136, based on corroboration from multiple independent sources in the literature, we find that the line of sight distance to the KN clouds from R136 is $\lesssim$20pc. The molecular clouds in the stapler nebula lie between 2-14pc in projected distance, and $\lesssim$20pc in the line of sight distance from R136. Near-infrared emission line imaging =================================== The H$_2$ 2.12$\mu$m emission line image is shown in Fig.\[fig:COa\]c, with the CO(2-1) contours overlaid. Strong H$_2$ emission is spatially coincident with the CO(2-1) emission of the molecular clouds, and shares similar morphology. The H$_2$ emission is clumpy, with numerous knots and a reticulated pattern. In contrast, detected ionized gas (Br$\gamma$) at the position of the CO(2-1) molecular clouds is weak and diffuse (see Fig.\[fig:COa\]d). This diffuse Br$\gamma$ emission is associated with filaments and arc-like structures of ionized gas vivid in H$\alpha$ (brown in Fig.\[fig:COa\]a). 
This convinces us that the strong H$_2$ nIR emission is the warmer component of the cold molecular gas traced by the CO detections, whereas the diffuse Br$\gamma$ emission lies slightly beyond the ionized surface of the molecular cloud although no clear demarcation is noted. The Br$\gamma$ morphology is not spatially coincident with the H$_2$ and CO(2-1) emission. The H$_2$/Br$\gamma$ ratio can be used to disentangle shock/collisionally excited H$_2$ from fluorescence excitation. This is because the shocks and collisional excitation affect primarily the H$_2$ gas, leaving the ratio of H$_2$/Br$\gamma$ above unity, whereas fluorescence acts on both the molecular and ionised gas leading to a ratio below unity. Using the absolute flux ratio, we find that the H$_2$/Br$\gamma$ ratio never exceeds 0.5 at an angular resolution of 1$\arcsec$ in the stapler nebula, agreeing with the findings of Yeh et al. (2015). This indicates that the excited nIR H$_2$ is primarily excited by the ultraviolet (UV) radiation from R136 acting on the surfaces of the molecular gas, with the filamentary Br$\gamma$ arising from the same source. The KN clouds therefore must lie in front of us given the morphology of the clumped H$_2$ emission, and lack of background stars. We note that it is possible that shock excited emission is prevalent on smaller scales (a few tenths of a parsec, or $\lesssim$0.5$\arcsec$ at the distance to 30 Doradus) caused by outflows from massive protostars residing within the KN molecular clouds, but our current angular resolution limitations in the nIR narrowband images ($\sim$1$\arcsec$) prevent us from examining the same in detail. Future high angular resolution integral field unit (IFU) nIR spectroscopy would help towards constraining this further, as shocked gas will likely be offset in velocity. A picture of ionization fronts emerges, with the photodissociation region extincted from our line of sight by the cold molecular gas. These structures could resemble the dense “pillars” or structures observed in the galaxy in regions such as M16 and NGC3603 (Sankrit & Hester 2000), but are smaller in scale at $\sim$0.1–0.3pc, and in the 30 Doradus Nebula by Pellegrini et al. (2010). From the observed emission line imaging and CO(2-1) observations it appears that the clouds are being ionised on the backside. We are viewing the KN molecular clouds face on, and they are likely the tail of pillar-like structures (we refer the reader to Pound 1998 for a description of pillar morphology; or to Fig.7) with the ionized head pointing towards R136. The observed velocity line widths of the CO(2-1) line are then likely caused due to the velocity gradient between the head and tail of pillars (e.g. Pound 1998). Following the Bertoldi (1989) analytical theory of photoevaporating clouds, during photoevaporation clouds form a cometary structure similar to pillars. Neutral gas at the head is pushed back by the ionization front stripping the outer envelope. The difference in velocity between the slowly moving head to the tail with respect to ionizing stars develops a velocity gradient, that is manifested as velocity line width seen in Fig.\[fig:slw\], also providing a natural explanation for the observed molecular cloud properties. The observed velocity line widths is in a similar range to those observed in M16 using CO observations (Pound 1998). Our results are well supported by the work presented in Chevance et al. (2016). 
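Operationally, the H$_2$/Br$\gamma$ diagnostic described above amounts to dividing the two flux-calibrated, resolution-matched narrowband images pixel by pixel. The sketch below illustrates the idea; the file names and the 3$\sigma$ detection threshold are placeholders of our own, not the actual products or cuts used for the measurement quoted above.

```python
import numpy as np
from astropy.io import fits

# Placeholder file names: flux-calibrated, continuum-subtracted images that
# have already been registered and convolved to a common 1" resolution.
h2  = fits.getdata("h2_2.12um_matched.fits")
brg = fits.getdata("brgamma_2.165um_matched.fits")

# Only form the ratio where both lines are detected above an (illustrative)
# 3-sigma threshold estimated crudely from the image scatter.
sig_h2, sig_brg = 3.0 * np.nanstd(h2), 3.0 * np.nanstd(brg)
mask = (h2 > sig_h2) & (brg > sig_brg)

ratio = np.full_like(h2, np.nan, dtype=float)
ratio[mask] = h2[mask] / brg[mask]

# Ratios well above unity point to shock/collisional excitation of H2;
# ratios below unity (as found here, <0.5) indicate UV fluorescence.
print("maximum H2/Brgamma ratio:", np.nanmax(ratio))
```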
The stapler nebula is prominent in forbidden mIR and fIR line emission from \[S[II]{}\], \[C[II]{}\], and \[O[I]{}\] (Fig. 1 in Chevance et al. 2016), which are key tracers of photodissociation regions. Another interesting morphological observation can be made from examining the mIR imaging in Figure \[fig:ysothumb\]e. The 8$\mu$m image from [*Spitzer*]{} covers the 8.6$\mu$m Polycyclic Aromatic Hydrocarbon emission (PAH). PAHs are considered to be released from the dust exposed to UV photons in photodissociation regions, and are altered/destroyed in these energetic environments. The northern boundary of the KN2 and KN3 cloud is bright in 8$\mu$m suggesting the PAH emission occurs at this interface where the UV photons ionise the molecular cloud. In contrast, the KN1 cloud is being ionised from the south directly facing towards R136 following the strong 8$\mu$m emission. In contrast, the 24$\mu$m imaging (Fig.\[fig:ysothumb\]f) is faint across the KN2 and KN3 cloud region, but displays a wispy structure at KN1. The 24$\mu$m emission is emitted from intermediate to larger size grains. This emission is coincident with early type stars to its immediate east, whose measured median reddening law $R_V$ is greater than the canonical average for 30 Doradus and stands $\sim$4.5. This is indicative of selective evaporation of small grains leading to a larger reddening law (e.g. De Marchi et al. 2016), and coincident with the strong 24$\mu$m emission region. Candidate massive young stellar objects ======================================= Identification of YSO --------------------- Our aim is to detect any on-going star formation through the identification of YSOs using archival infrared photometry. The youngest YSOs (Class 0 objects) will only be visible at $\lambda>$10$\mu$m, while older Class I/II sources will be visible at shorter ($>$2$\mu$m) wavelengths. Using [*Spitzer*]{} 3.6-24$\mu$m, and [*Herschel*]{} 100-350$\mu$m photometry (details in Section 2.3; and images are presented in Fig.\[fig:ysothumb\]), we classify point or point-like sources based on either the spectral energy distribution (SED) slope in the 3.6-24$\mu$m wavelength range, or by examining the parameters derived from fitting modified blackbody and YSO models across the entire wavelength range. We require detection in at least three bands in each catalogue to classify the object as a source. We then check for any counterparts with the coordinates of the detection in the [*Herschel*]{} 100$\mu$m within a crossmatch radius of 6.7$''$ (which is the convolution kernel at 100$\mu$m) and the [*Spitzer*]{} coordinates. We find a known [*Spitzer*]{} source in KN2 (KN2-A, or S5 in Walborn et al. 2013), a [*Herschel*]{} source without a [*Spitzer*]{} counterpart in KN2 visible at 100-350$\mu$m (KN2-B), and one [*Herschel*]{} source with a [*Spitzer*]{} counterpart within 0.9$''$ in KN3 (KN3-A). The thumbnails of these sources in the [*Spitzer*]{} and [*Herschel*]{} bands are shown in Fig.\[fig:ysothumb\]. For SED fitting, we utilize three methods to derive the properties of the source. First, we fit the photometry of each source with the YSO models of Robitalle (2017). The model grid covers 20000 radiative transfer YSO models covering a mass range of 0.1–50$M_{\odot}$. It should be noted that the SED fits are not intended to provide accurate parameters, but as a crude guide to the nature of each sources and that these models are limited in nature compared to the available free parameters (Offner et al. 2012). 
Secondly, we use a modified blackbody fit to the far-IR [*Herschel*]{} sources to constrain the temperature of the emitting source, $T_{\rm b}$. The observed flux can be reproduced as a blackbody with frequency flux density $F_{\nu}$ as $$F_{\nu} =B_{\nu}(T_{\rm b})(1-e{^{-\tau_{\nu}}})\Omega.$$ Here, $B_{\nu}(T_{\rm b})$ is the blackbody emission at $T_{\rm b}$, and $\tau_{\nu}$ is the optical depth constrained by a power law at frequency $\nu$ by $\tau_{\nu} \propto \nu^{\beta}$. $\beta$ = 1.5 in the LMC (Galliano et al. 2011). Thirdly, only for the [*Spitzer*]{} sources we employ the slope of the SED fit ($\alpha_{\rm SED}$), where $$\alpha_{\rm{SED}}\,=\,\frac{d\,{\rm{log}}(\lambda F_{\lambda})}{d\,{\rm{log}}\lambda}.$$ We consider only the flux between the wavelengths 3.6–24.0$\mu$m which classify well Class I/II sources (e.g. Greene et al. 1994). The results of our analysis are presented in Table 2, and described in the following subsections. ![image](yoss4.jpeg){width="2.1\columnwidth"} Description of YSOs in the clouds --------------------------------- ### [KN1]{} cloud KN1 contains no identified mid or far infrared source. ### [KN2]{} cloud KN2 is extremely promising as a future site of high mass star formation. KN2-B is a [*Herschel*]{} source detected in 100–350$\mu$m imaging (marked in Fig.\[fig:ysothumb\]). There exists no near counterpart mIR source. The nearest mIR source is KN2-A which lies to its immediate south west, and at the very edge of the CO emission. This source angularly coincides with the dense region observed in CS to the south of KN2 by Rubio et al. (2009). CS is a tracer of dense gas with densities $n$, upwards of $10^5$cm$^{-3}$. KN2-B is classified as a high-mass YSO based on the [*Herschel*]{} classification scheme devised by Seale et al. (2014) for YSOs in the Magellanic Clouds. Following their three classification criteria (see Sec. 4.4 of that paper for further details), we find KN2-B meets all three as it is (i) the dominant source of fIR photometry and is clearly defined in at least 3 [*Herschel*]{} bands at 3$\sigma$ above the background (ii) It is not identified as a background galaxy/interloper (iii) it has 24$\mu$m emission as seen in Fig.\[fig:ysothumb\]. The source has marked PolyAromaticHydrocarbon (PAH) emission from the 8.6$\mu$m feature. We achieve an excellent fit for KN2-B with the SED models of Robitalle et al. (2017). The best fit has a $\chi^2$/datapoint$<$3 and is shown in Fig.\[fig:sed\]. The parameters of the best fit model were a stellar temperature of 18500$\pm$1000 K, which translates to a stellar mass exceeding 20$M_{\odot}$, with the $\dot M_{\rm env}/M_{\ast}$ from the model fit suggestive of a Class 0 massive YSO (Fig.\[fig:sed\]).The temperature estimated from the blackbody fit is 28.8$\pm$4K, with the integrated logarithm of luminosity log$L/L_{\odot}$=4.59 which further indicates a Class 0 classification (Andr[é]{} et al. 2010). The luminosity of the source is much higher than expected for starless clouds heated by the ambient radiation field, considering which it is likely a true high-mass YSO (Seale et al. 2014). The estimated extinction from the SED fit (of $A_V$=6.9) suggests that the candidate protostar is shielded from external photoionization. Moreover, the external heating at wavelengths longwards of 100$\mu$m does not affect significantly the SED (Pavlyuchenkov et al. 2012). 
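The modified blackbody and SED-slope diagnostics used here and in the following subsections can be written down compactly. The Python sketch below is a schematic illustration only (it is not the fitting code behind Table 2): the cgs constants, the simple least-squares slope, and the assumption that the input fluxes are $F_{\lambda}$ in consistent units are ours.

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # Planck constant, c, k_B in cgs

def planck_nu(nu, T):
    """Planck function B_nu(T) in cgs units."""
    return 2.0 * H * nu**3 / C**2 / (np.exp(H * nu / (KB * T)) - 1.0)

def modified_blackbody(nu, T, tau0, nu0, beta=1.5, omega=1.0):
    """F_nu = B_nu(T) (1 - exp(-tau_nu)) Omega, with tau_nu = tau0 (nu/nu0)^beta."""
    tau = tau0 * (nu / nu0) ** beta
    return planck_nu(nu, T) * (1.0 - np.exp(-tau)) * omega

def alpha_sed(wavelengths_um, f_lambda):
    """Slope of log(lambda F_lambda) versus log(lambda) from broadband photometry.

    f_lambda is assumed to be F_lambda in any consistent units.
    """
    lam = np.asarray(wavelengths_um, dtype=float)
    lam_flam = lam * np.asarray(f_lambda, dtype=float)
    slope, _ = np.polyfit(np.log10(lam), np.log10(lam_flam), 1)
    return slope

# purely illustrative fluxes in the 3.6-8.0 micron IRAC bands
print(alpha_sed([3.6, 4.5, 5.8, 8.0], [1.0, 1.4, 2.1, 3.3]))
```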
Finally, as noted in Sec.5.1, the parameters estimated from the SED fits with the Robitalle (2017) models may be ambiguous due to the limited range of parameters considered, but they do provide a crude guide to source classification. We also stress that KN2-B is classified as a high-mass YSOs based on the independent schemes of Seale et al. (2014) based on source intensity and morphology; and the SED models of Robitalle et al. (2017) based on the flux distribution of the source in the fIR. The age of KN2-B from the model fit is $\lesssim$0.1Myr, and is smaller by an order of magnitude than the crossing timescale of the cloud (the crossing timescale is the length over the velocity) of 0.66Myr. KN2-A is likely a Class II source based on its [*Spitzer*]{} colours. The $\alpha_{\rm SED}$ slope derived from the 3.6-8$\mu$m photometry is 1.22$\pm$0.2, which falls into the Class II category (Greene et al. 1994). Our classification as a Class II YSO from the SED slope agrees with the Gruendl & Chu (2009) classification based on the position of the KN2-A in [*Spitzer*]{} colour-colour and colour-magnitude diagrams. We note the there is no 24$\mu$m counterpart to KN2-A, and it also does not have a [*Herschel*]{} counterpart according to the procedure followed Seale et al. (2014), who cross match [*Spitzer*]{} and [*Herschel*]{} catalogues in the LMC. They adopt a cross match radius of 0.5$\times$FWHM to cross match the two catalogues in crowded regions. If we still consider that some of the fIR flux from KN2-A is assigned towards KN2-B, we find that $\gtrsim$5% of the fIR flux of KN2-B is required to change the classification of KN2-A from Class II to Class I. There is no high mass Class II YSOs population on the northern side. ID Wavelength($\mu$m) R.A. (J2000) Dec. (J2000) Class $\alpha_{\rm SED}$ Comments ------- -------------------- --------------------- ---------------------------------- --------- -------------------- ---------------------------------------------------------------- KN2-A 1.2–8 05$^h$38$^m$39$^s$7 $-$69$^{\rm \circ}$05$'$38.1$''$ II 1.22$\pm$0.2 S5 in Walborn et al. (2013) KN2-B 100-350 05$^h$38$^m$39$^s$1 $-$69$^{\rm \circ}$05$'$33.8$''$ 0 – $T_{\rm b}$=29$\pm$4K; point like in 24$\mu$m imaging KN3-A 3.6-500 05$^h$38$^m$36$^s$3 $-$69$^{\rm \circ}$05$'$24$''$ Dust(?) 2.65$\pm$0.7 $T_{\rm b}$=31.5$\pm$7K; extended in 24$\mu$m imaging 0.9$''$ between [*Herschel*]{} 100$\mu$m& [*Spitzer*]{} source \ ![SED of KN2-B with best-fit models from Robitalle (2017), overlaid with [*Herschel*]{} 100-350$\mu$m photometry and corresponding error bars. The best fit model (with $\chi^2$/datapoint$<$3) with a stellar temperature of $\sim$18500K is shown as a solid black line. Other models with $\chi^2$/datapoint$<$5 are also shown as solid grey lines.[]{data-label="fig:sed"}](sedA-crop.pdf){width="\columnwidth"} ### KN3 KN3-A is a bright [*Herschel*]{} source and is extended in 100–350$\mu$m imaging. The [*Herschel*]{} detection lies within 0.9$''$ of a Gruendl & Chu (2009) [*Spitzer*]{} source. We consider the mIR and fIR detection to be of the same source, as they lie well within the FWHM of the [*Herschel*]{} and [*Spitzer*]{} imaging. Gruendl & Chu (2009) classified the source as likely originating from dust, and we see that their exists no point-like or otherwise 24$\mu$m counterpart. The whole SED fits rather poorly with the models of Robitalle et al. (2017), with the best fits achieved when discarding the 4.5$\mu$m photometry of $\chi^2$/datapoint $>$100. 
Examination of the 24$\mu$m imaging reveals no distinct point-like source (Fig.\[fig:ysothumb\]; right dashed circle). Given the flux at long wavelengths, and the visible extended emission in the imaging we cannot classify the source as a YSO conclusively. A modified blackbody fit suggests a peak temperature of 31.5$\pm$6K, with the integrated total luminosity slightly less than 10$^4$$L_{\odot}$. This lies at the boundary of the classifying scheme between interstellar starless clouds heated by the ambient radiation field (Seale et al. 2014), and YSOs. The ambiguous classification, and lack of any clear point-like source in the nIR and fIR images suggests that this source is likely a starless dust cloud, probably heated by the ambient ionizing field. A comparison of the multi-wavelength images in Fig.\[fig:ysothumb\] demonstrates the differences between KN2-B and KN3-A. At 24$\mu$m, diffuse emission in star-forming regions is likely to originate from heated gas from inner regions of protostars (Chambers et al. 2009). In KN2-B, the 24$\mu$m emission is clearly detected at 3$\sigma$ level with respect to the immediate surroundings. In KN3-A, 24$\mu$m emission above the 3$\sigma$ threshold is not detected. At 100$\mu$m, and 160$\mu$m, both sources appear as point sources. But, at longer wavelengths, the KN3-A source is extended or disappears (see the 250$\mu$m image), suggesting that it is more likely to be dust, rather than a protostellar candidate when compared to the nature of the source detected at the position of KN2-B in the 250$\mu$m image. A picture of the stapler nebula in R136 ======================================= From our deep CO(2-1) observations, we identified three molecular clouds separated in velocity we name KN1-3 lying within 14pc in projection; and $\lesssim$20pc in the line of sight from the YMC R136. These clouds lie north of R136 and stretch for 14pc in projection from southeast to northwest. This axis is perpendicular to the previously identified giant molecular clouds by Johansson et al. (1998). The $V_{\rm lsr}$ of the clouds ranges from 237kms$^{-1}$ for the cloud directly above R136 in projection to 250kms$^{-1}$ for the furthest cloud, which is similar to the mean radial velocity of the stellar population in R136. These clouds display relatively large linewidths for their radii, lying above the predicted relation for clouds in virial equilibrium according to Larson’s first law. By plotting ${\sigma_{v}}^2$/$r$ of the KN clouds as a function of their $\Sigma_{\rm H_{2}}$, we show that KN clouds depart significantly from the relation for virialized, isolated clouds confined by self-gravity. The KN clouds are most likely confined by external pressures upwards of $\sim$10$^6$cm$^{-3}$K in this scenario. Nonetheless, the results from our CO(2-1) line observations show that there are three molecular clouds lying near R136, whose $V_{\rm lsr}$ increase as a function of projected distance from the cluster. The resolved clouds are not in virial equilibrium, but are highly turbulent requiring external pressure to confine them. Br$\gamma$ and H$_2$ nIR emission line imaging reveal that the CO clouds coincide with the dense H$_2$ clumpy structures, while Br$\gamma$ is diffuse and exhibits little spatial coincidence with the CO clouds. The lack of background stars at optical wavelengths (Fig.1) suggests that these clouds lie in front of the cluster. Based on the H$_2$/Br$\gamma$ ratio, we suggest that these clouds are UV-heated by stellar radiation emanating from R136. 
These results suggest that both the H$_2$ nIR emission and the CO emission arise from molecular clouds near R136. The molecular clouds lie in front of R136 from our perspective. The backside of the molecular clouds is being ionised by R136, leading to diffuse Br$\gamma$ emission and weak H$\alpha$ emission. The excited H$_2$ emission coincides with the peak of the CO emission, further giving weight to our view of photodissociated molecular clouds near R136. Finally, a search for signs of on-going star formation yields interesting results. KN1 shows no signs of active star formation. In fIR [*Herschel*]{} 100-350$\mu$m photometry, we identify a likely Class 0/I object KN2-B corresponding to the peak of the molecular cloud KN2. It is known from previous CS observations that the densities near this source exceed $n>$10$^6$cm$^{-3}$. Near this source (within 5$''$) lies a Class II object KN2-A visible in [*Spitzer*]{} imaging, which has been classified as a YSO by Gruendl & Chu (2009) and Walborn et al. (2013). Towards KN3, we detect a [*Herschel*]{} 100-160$\mu$m source (KN3-A) close to a [*Spitzer*]{} source identified by Gruendl & Chu (2009). However, inspection of both the [*Spitzer*]{} and [*Herschel*]{} imaging suggests KN3-A is likely dusty in nature, which agrees with the Gruendl & Chu (2009) classification. The complete scenario can be visualised in the toy model shown in Fig.\[fig:model\] from the observer’s viewpoint. The natal molecular cloud of R136 has an arc-like structure, protruding in front of the cluster along our line of sight, lying directly above it and stretching towards the northwest in projection. The structure may have been carved out from an initially spherical cloud. After the formation of the cluster, the natal molecular cloud has been steadily ionised, giving rise to excited H$_2$ emission. The excited boundary lies on the backside of the molecular cloud from the observer’s viewpoint. The radiation from R136 is eroding the molecular clouds, possibly leading to the formation of structures similar to the “pillars of creation" observed in galactic regions such as M16. The observer may be visualising the tail of such pillars face on. The observed high velocity line widths could then be explained as a manifestation of the expected velocity differences between the head and tail of a photoevaporating cloud. Alternatively, the results also support the picture that the KN clouds may be confined by external pressures. High angular resolution IFU spectroscopy of the clouds, which will reveal their gas kinematics and ionization structure (McLeod et al. 2015), can help differentiate between the two proposed scenarios. Massive star formation is occurring in the middle molecular cloud or pillar, while the remaining two molecular clouds show no active signs of massive star formation. Summary and future work ======================= Our results present a picture of three CO molecular clouds separated in velocity lying $\lesssim$20pc in front of R136, and between 2–14pc away in projection from the cluster. We appear to view the tail of pillar-like structures whose ionised heads are pointing towards R136. The observed high line widths of the molecular clouds for their sizes with respect to the canonical $\sigma_{v}$-$r$ relation are likely due to the gas being pushed away from the head of the pillar-like structures by the ionizing radiation produced by the massive stars in R136, or due to external pressure confining the clouds.
A massive YSO (KN2-B) is detected inside the KN2 molecular cloud, indicating active star formation. These results suggest that 1.5–3Myr R136 cluster is in the process eroding the KN molecular clouds. During this process a new generation of stars is able to form from the reservoir of cold molecular gas that is in the process of being destroyed by stellar feedback in structures that may be similar to the pillars of creation seen in Galactic regions such as M16 and NGC3603 from a different perspective. We have demonstrated that with sensitive CO(2-1) observations, molecular clouds of $\sim$10$^4$$M_{\odot}$ exist surprisingly close to the YMC R136. This observation challenges the results of most simulations of feedback from massive stars in YMC (see Dale 2015 for a review), where feedback is able to successfully expel gas close to YMCs within a few Myrs. Future high angular resolution sensitive observations of multiple molecular lines using the current generation of interferometers (for e.g. using the Atacama Large Millimeter/submillimeter Array (ALMA) which can achieve a resolution comparable to optical/nIR imaging in CO lines) may reveal in exquisite detail the structure, and densities of these clouds. Combined with observations of the photodissociation region from fine structure lines (Chevance et al. 2016), one can gather enough information about the densities, pressures, and radiation field acting on the KN clouds and the H[II]{} region to predict if the clouds are dense and massive enough to survive photoionisation and form a new massive stellar cluster, or whether star formation will be abruptly terminated due to feedback. In a broader sense, sensitive radio observations in CO lines may reveal previously undetected molecular clouds lying close to YMCs, that may nurture a new generation of star formation. The authors thank Jes[ús]{} Ma[í]{}z Apell[á]{}niz for kindly providing us with the [*HST*]{} mosaic, and sharing with us a pictorial etymology of the stapler nebula. We thank Sherry Yeh for generously providing her Br$\gamma$ image, and Hugo Saldano for his help in visualizing the radio data. The anonymous referee is thanked for providing constructive comments, and in helping to improve the overall impact of the paper. V.M.K. acknowledges support from the FONDECYT-CHILE Fellowship grant N$^{\rm o}$3116017. M.R. acknowledges support from CONICYT (CHILE) through FONDECYT grant N$^{\rm o}$1140839, and partial support through BASALPFB-06. This work made use of the CLASS software, and the Python packages ApLpy, Numpy, Scipy and Matplotlib for analysis and presentation. Anderson, C. N., Meier, D. S., Ott, J., et al. 2014, , 793, 37 Andr[é]{}, P., Men’shchikov, A., Bontemps, S., et al. 2010, , 518, L102 Ballesteros-Paredes, J., Hartmann, L. W., V[á]{}zquez-Semadeni, E., Heitsch, F., & Zamora-Avil[é]{}s, M. A. 2011, , 411, 65 Bertoldi, F. 1989, , 346, 735 Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, , 51, 207 Bolatto, A. D., Leroy, A. K., Rosolowsky, E., Walter, F., & Blitz, L. 2008, , 686, 948-965 Brandner, W., Grebel, E. K., Barb[á]{}, R. H., Walborn, N. R., & Moneti, A. 2001, , 122, 858 Chambers, E. T., Jackson, J. M., Rathborne, J. M., & Simon, R. 2009, , 181, 360 Chevance, M., Madden, S. C., Lebouteiller, V., et al. 2016, , 590, A36 Dale, J. E. 2015, , 68, 1 Dale, J. E., Ercolano, B., & Bonnell, I. A. 2012, , 427, 2852 De Marchi, G., Panagia, N., Romaniello, M., et al. 2011, , 740, 11 Evans, C. J., Kennedy, M. B., Dufton, P. L., et al. 2015, , 574, A13 Field, G. B., Blackman, E. 
G., & Keto, E. R. 2011, , 416, 710 Galliano, F., Hony, S., Bernard, J.-P., et al. 2011, , 536, A88 Greene, T. P., Wilking, B. A., Andre, P., Young, E. T., & Lada, C. J. 1994, , 434, 614 Gruendl, R. A., & Chu, Y.-H. 2009, , 184, 172 Guzman, V. V. 2010, MSc. Thesis, Universidad de Chile Herrera, C. N., Rubio, M., Bolatto, A. D., et al. 2013, , 554, A91 Heyer, M., Krawczyk, C., Duval, J., & Jackson, J. M. 2009, , 699, 1092 Indebetouw, R., Brogan, C., Chen, C.-H. R., et al. 2013, , 774, 73 Johansson, L. E. B., Greve, A., Booth, R. S., et al. 1998, , 331, 857 Kalari, V. M., Vink, J. S., Dufton, P. L., et al. 2014, , 564, L7 Larson, R. B. 1981, , 194, 809 Lombardi, M., Alves, J., & Lada, C. J. 2010, , 519, L7 MacLaren, I., Richardson, K. M., & Wolfendale, A. W. 1988, , 333, 821 Maloney, P. 1988, , 334, 761 McKee, C. F., & Ostriker, E. C. 2007, , 45, 565 McLeod, A. F., Dale, J. E., Ginsburg, A., et al. 2015, , 450, 1057 Meixner, M., Panuzzo, P., Roman-Duval, J., et al. 2013, , 146, 62 Meixner, M., Gordon, K. D., Indebetouw, R., et al. 2006, , 132, 2268 Nayak, O., Meixner, M., Indebetouw, R., et al. 2016, , 831, 32 Offner, S. S. R., Robitaille, T. P., Hansen, C. E., McKee, C. F., & Klein, R. I. 2012, , 753, 98 Pavlyuchenkov, Y. N., Wiebe, D. S., Akimkin, V. V., Khramtsova, M. S., & Henning, T. 2012, , 421, 2430 Pellegrini, E. W., Baldwin, J. A., & Ferland, G. J. 2010, , 191, 160 Persson, S. E., Murphy, D. C., Krzeminski, W., Roth, M., & Rieke, M. J. 1998, , 116, 2475 Pietrzy[ń]{}ski, G., Graczyk, D., Gieren, W., et al. 2013, , 495, 76 Pineda, J. L., Ott, J., Klein, U., et al. 2009, , 703, 736 Pound, M. W. 1998, , 493, L113 Robitaille, T. P. 2017, , 600, A11 Rolleston, W. R. J., Trundle, C., & Dufton, P. L. 2002, , 396, 53 Roman-Duval, J., Gordon, K. D., Meixner, M., et al. 2014, , 797, 86 Rubio, M., Paron, S., & Dubner, G. 2009, , 505, 177 Rubio, M., Barb[á]{}, R. H., Walborn, N. R., et al. 1998, , 116, 1708 Salim, D. M., Federrath, C., & Kewley, L. J. 2015, , 806, L36 Sankrit, R., & Hester, J. J. 2000, , 535, 847 Seale, J. P., Meixner, M., Sewi[ł]{}o, M., et al. 2014, , 148, 124 Selman, F. J., & Melnick, J. 2013, , 552, A94 Solomon, P. M., Rivolo, A. R., Barrett, J., & Yahil, A. 1987, , 319, 730 Walborn, N. R., Barb[á]{}, R. H., & Sewi[ł]{}o, M. M. 2013, , 145, 98 Walborn, N. R., Ma[í]{}z-Apell[á]{}niz, J., & Barb[á]{}, R. H. 2002, , 124, 1601 Whitney, B. A., Sewilo, M., Indebetouw, R., et al. 2008, , 136, 18 Wong, T., Hughes, A., Ott, J., et al. 2011, , 197, 16 Yeh, S. C. C., Seaquist, E. R., Matzner, C. D., & Pellegrini, E. W. 2015, , 807, 117 [^1]: http://www.iram.fr/IRAMFR/GILDAS [^2]: The mean radial velocity of the stars translated to local standard of rest frame is $\approx$255$\pm$5kms$^{-1}$ (Evans et al. 2015)
--- abstract: 'The wide spread of location-based social networks brings about a huge volume of user check-in data, which facilitates the recommendation of points of interest (POIs). Recent advances on distributed representation shed light on learning low dimensional dense vectors to alleviate the data sparsity problem. Current studies on representation learning for POI recommendation embed both users and POIs in a common latent space, and users’ preference is inferred based on the distance/similarity between a user and a POI. Such an approach is not in accordance with the semantics of users and POIs as they are inherently different objects. In this paper, we present a novel spatiotemporal aware (STA) representation, which models the spatial and temporal information as *a relationship connecting users and POIs*. Our model generalizes the recent advances in knowledge graph embedding. The basic idea is that the embedding of a $<$time, location$>$ pair corresponds to a translation from embeddings of users to POIs. Since the POI embedding should be close to the user embedding plus the relationship vector, the recommendation can be performed by selecting the top-*k* POIs similar to the translated POI, which are all of the same type of objects. We conduct extensive experiments on two real-world datasets. The results demonstrate that our STA model achieves the state-of-the-art performance in terms of high recommendation accuracy, robustness to data sparsity and effectiveness in handling cold start problem.' author: - | Bei Liu[$~^{1}$]{}, Tieyun Qian[$~^{1}$]{}, Bing Liu[$~^{2}$]{}, Liang Hong[$~^{3}$]{}, Zhenni You[$~^{1}$]{}, Yuxiang Li[$~^{1}$]{}\ [$~^{1}$]{} State Key Laboratory of Software Engineering, Wuhan University, China\ [$~^{2}$]{} Department of Computer Science, University of Illinois at Chicago, USA\ [$~^{3}$]{} School of Information Management, Wuhan University, China\ {qty, beiliu}@whu.edu.cn, liub@cs.uic.edu, {hong, znyou, liyux}@whu.edu.cn\ title: 'Learning Spatiotemporal-Aware Representation for POI Recommendation' --- Introduction ============ Location-based social networks (LBSN), such as Foursquare, Yelp, and Facebook Places, are becoming pervasive in our daily lives. Users on LBSN like to share their experiences with their friends for points of interest (POIs), e.g., restaurants and museums. The providers of location-based services have collected a huge amount of users’ check-in data, which facilitates the recommendation of POIs to unvisited users. The POI recommendation is of high value to both the users and companies, and thus has attracted much attention from researchers in recent years [@Cheng_tist16; @Zhu_kdd15; @Chen_aaai15; @Gao_aaai15]. Most existing studies mainly focused on leveraging spatial information due to the well-known strong correlation between users’ activities and geographical distance  [@Zheng_www09; @Ye_gis10; @Cho_kdd11]. For example, Ye et al.  proposed a Bayesian collaborative filtering (CF) algorithm to explore the geographical influence. Cheng et al.   captured the geographical influence by modeling the probability of a user’s check-in on a location as a multi-center Gaussian model and then combined it into a generalized matrix factorization model. Lian et al.   adopted a weighted matrix factorization framework to incorporate the spatial clustering phenomenon. Similar to the geo-spatial information, time is another important factor in POI recommendation. 
Ye et al. found the periodic temporal property that people usually go to restaurants at around noon and visit clubs at night. Yuan et al. developed a CF based model to integrate temporal cyclic patterns. Cheng et al. [@Cheng_ijcai13] explored the temporal sequential patterns for personalized POI recommendation by using the transition probability of two successive check-ins of a user. Existing studies have exploited spatial or temporal influences mainly using CF [@Ye_sigir11; @Yuan_sigir13] and Markov transition approaches [@Cheng_ijcai13]. Due to the sparsity of users’ check-in records, it is hard to find similar users or to calculate transition probabilities. Although matrix factorization (MF) methods are effective in dealing with the sparsity of the user-POI matrix [@Cheng_aaai12; @Lian_kdd14], they do not consider the current location of the user. More importantly, while time and location together play a critical role in determining users’ activities in LBSNs, little work has modeled their joint effects. Considering only one factor will deteriorate the predictive accuracy. For instance, a student may go to a school cafeteria or to a food court in a mall at lunch time depending on whether he/she is on campus or outside. A system should thus not recommend the same restaurant to a user at the same time regardless of his/her location. This example shows the ineffectiveness of using one type of information while ignoring the other. However, taking both time and location into consideration aggravates the data sparsity problem. In this paper, we propose a novel spatiotemporal aware (STA) model, which captures the joint effects of spatial and temporal information. Our model has the following distinct characteristics. - STA takes location and time as a whole to determine the users’ choice of POIs. - STA embeds a spatiotemporal pair $<$time, location$>$ as *a relationship* connecting users and POIs. By considering time and location simultaneously, our model can be successfully applied to real-time POI recommendation. Furthermore, the distributed representations learned by STA are very effective in alleviating the data sparsity problem. Two recent works [@Feng_ijcai15; @Xie_cikm16] also exploited the power of distributed representation for alleviating data sparsity. The personalized ranking metric embedding (PRME) by Feng et al. projected each POI and each user into a latent space, and then recommended a POI $v$ to a user $u$ at location $l$ based on the Euclidean distance between the POI and the user $\parallel\vec{u}-\vec{v}\parallel^2$ and that between the POI and the location $\parallel\vec{l}-\vec{v}\parallel^2$. Xie et al. proposed a graph based embedding model (GE) by embedding graphs into a shared low dimensional space, and then computed the similarity between a user $u$’s query $q$ at current time $t$ and location $l$ and a POI $v$ using an inner product, $S(q,v)=\vec{u}^T\cdot\vec{v}+\vec{t}^T\cdot\vec{v}+\vec{l}^T\cdot\vec{v}$. While PRME, and especially GE, show significant improvements over many other baselines, these two methods have the drawback that they embed both users and POIs in a common latent space, and users’ preference is inferred based on the distance/similarity between a user and a POI. Such an approach is unnatural since users and POIs are inherently different objects. In contrast, our STA model generalizes recent advances in knowledge graph embedding [@Lin_aaai15].
A user $u$ reaches an interested POI $v_q$ via an edge $tl$ denoting the $<$time, location$>$ pair, i.e., $\vec{u} + \vec{tl} \approx \vec{v_q}$. With this transformation, we can do recommendation for $u$ by selecting the top-*k* POIs similar to POI $v_q$, which are all of the same type of objects with similar semantics. Problem Definition and Preliminary ================================== Definition 1. (***POI***) A POI *v* is defined as a unique identifier representing one specific position (e.g., a cafe or a hotel), and *V* is a set of POIs, i.e., $V=\{v|v=(pid, position)\}$. Definition 2. (***Check-in Activity***) A check-in activity is a quadruple (u, t, l, v), which means a user *u* visits a POI *v* at location *l* at time *t*. Definition 3. (***Spatiotemporal pattern***) A spatiotemporal pattern, denoted as *tl*, is a combination of a time slot *t* and a location *l*, like $<$11 a.m., Los Angeles$>$. For ease of presentation, we summarize the notations in Table \[tbl:note\]. The POI recommendation problem investigated in this paper has the same settings as that in [@Xie_cikm16]. The formal problem definition is as follows. [c|c]{} Variable & Interpretation\ *u*, *v* & the user *u* and POI *v*\ *t* & the time slot discretized from the timestamp\ *l* & the location mapped from (longitude, latitude)\ *tl* & the spatiotemporal pattern $<$t, l$>$\ $\vec{u}$,$\vec{tl}$,$\vec{v}$ & embeddings of *u*, (*t,l*), and *v*\ $u_q$, $t_q$, $l_q$ & query user $u_q$, his/her current time $t_q$ and location $l_q$\ $v_q$ & the potential POI that query user $u_q$ is interested in\ **Problem Definition** (Location-based Recommendation) Given a dataset $D=\{d|d=(u,t,l,v)\}$ recording a set of users’ activities, and a query $q=(u_q, t_q, l_q)$, we aim to recommend the top-*k* POIs in *V* that the query user $u_q$ would be interested in. **Preliminary - KG Embedding** A knowledge graph (KG) is a directed graph whose nodes and edges describe *entities* and their *relations* of the form (*head, relation, tail*), denoted as (*h, r, t*). The goal of knowledge graph embedding is to learn a continuous vector space in which the embeddings of entities and relations preserve certain information of the graph. Bordes et al. presented a simple yet effective approach, TransE, to learn vector embeddings for both entities and relations in a KG. The basic idea is that the relationship between entities corresponds to a translation of the entity embeddings, namely, $\vec{h}+\vec{r}\approx\vec{t}$ when (*h*, *r*, *t*) exists in the graph. Later, a model named TransH [@Wang_aaai14] was proposed to enable an entity to have distinct representations when it is involved in different relations. Both TransE and TransH project all entities and relations into the same space. However, some entities may have multiple aspects, and relations may focus on different aspects of these entities. Such entities are close in the entity space when they are similar, but they should be far away from each other in the relation space if they are strongly different in some specific aspects. To address this issue, Lin et al. presented a TransR model to project the two entities *h* and *t* of (*h,r,t*) into the *r*-relation space as $h_r$ and $t_r$ with the projection operation $M_r$, such that $\vec{h_r}+\vec{r}\approx\vec{t_r}$ holds in the relation space. Our Proposed Framework ====================== We seek to learn the representations with the following characteristics.
- Spatiotemporal awareness - Location and time together play a crucial role when a user selects a POI; they should not be separated into individual factors.

- Semantics consistency - All the POIs, whether the query user’s POI of interest $v_q$ or the existing POIs $v \in V$, should come from a consistent semantic space.

In order to satisfy the first requirement, we combine each time slot and location into a spatiotemporal pattern $<$*t*, *l*$>$, and convert the quadruples $(u,t,l,v) \in D$ into triples ($u$, $<$$t$, $l$$>$, $v$) in $D'$. We then learn representations for users, spatiotemporal patterns, and POIs from the converted set $D'$ to meet the second condition, using the translation technique originating from knowledge graph embedding.

STA model
---------

For the location-based recommendation problem, we focus on the connections between users and POIs corresponding to the spatiotemporal relations. Intuitively, if a POI *v* is often visited by similar users in location *l* at time *t*, the probability of a query user $u_q$ visiting *v* under the same spatiotemporal relation will be high. On the other hand, users that are similar in the entity space may visit different POIs under distinct temporal and geographic conditions. In order to capture the strong correlations of users and POIs to the spatiotemporal patterns, we generalize the TransR technique  [@Lin_aaai15] to fit the POI recommendation task. The basic idea is that a user $u$ reaches a POI of interest $v_q$ via a translation edge $tl$, i.e., $\vec{u} + \vec{tl} \approx \vec{v_q}$. Fig. \[fig:relation\] illustrates the impacts of *tl* patterns.

![Impacts of spatiotemporal patterns[]{data-label="fig:relation"}](relation3.eps){width="68.00000%" height="4.2cm"}

In Fig. \[fig:relation\], suppose $u_1$, $u_q$, and $u_2$ are three university students, $u_1$ and $u_q$ taking the same courses, and $u_2$ and $u_q$ sharing a dormitory. Given two patterns $tl_1=<12 a.m., campus>$ and $tl_2=<8 p.m., dormitory>$, the query user $u_q$ will be translated to two POIs $v_{q1}$ and $v_{q2}$; hence we should recommend for $u_q$ the POI $v_1$ in the lower-left sub-figure and $v_2$ in the lower-right sub-figure, which are the closest neighbors of $v_{q1}$ and $v_{q2}$, respectively. The different recommendation results $v_1$ and $v_2$ are caused by the effects of the different spatiotemporal relations $tl_1$ and $tl_2$.

We now introduce the details of STA. For each triple ($u$, $<$$t$, $l$$>$, $v$) in $D'$, the user $u$, the spatiotemporal pair $<$$t$, $l$$>$ ($tl$ in short), and the POI $v$ correspond to the head entity *h*, the relationship edge *r*, and the tail entity *t* in TransR, respectively. Their embeddings are set as $\vec{u}$, $\vec{v}$ $\in$ $\Re^d$, and $\vec{tl}$ $\in$ $\Re^m$. For each spatiotemporal pair $tl$, we set a projection matrix $M_{tl} \in \Re^{d\times m}$ to project a user embedding $\vec{u}$ and a POI embedding $\vec{v}$ in the original entity space to $\vec{u_{tl}}=\vec{u}M_{tl}$ and $\vec{v_{tl}}=\vec{v}M_{tl}$ in the relation space, such that $\vec{u_{tl}} + \vec{tl} \approx \vec{v_{tl}}$. This indicates that a POI embedding $\vec{v_{tl}}$ should be the nearest neighbor of $\vec{u_{tl}} + \vec{tl}$. Hence the score function can be defined as:
$$\begin{aligned}
\label{score} \small
\nonumber s_{tl}(u, v) = \parallel \vec{u_{tl}} + \vec{tl} - \vec{v_{tl}} \parallel _2^2\\
\nonumber s.t. \parallel \vec{u} \parallel _2 \leq 1, \parallel \vec{v} \parallel _2 \leq 1, \parallel \vec{tl} \parallel _2 \leq 1, \\
\parallel \vec{u_{tl}} \parallel _2 \leq 1, \parallel \vec{v_{tl}} \parallel _2 \leq 1\end{aligned}$$
Given the score function defined in Eq. \[score\] for a triple ($u$, $tl$, $v$), the entire objective function for training is as follows:
$$\label{obj} \small
L = \sum_{(u,tl,v) \in T} \sum_{(u',tl,v') \in T'} max(0, s_{tl}(u, v) + \gamma - s_{tl}(u', v')),$$
where max(*a*, *b*) returns the maximum of *a* and *b*, $\gamma$ is the margin, and *T* and *T’* are the sets of correct and corrupted triples, respectively. The corrupted triples are generated by replacing the head and tail entities in correct triples using the same sampling method as that in [@Wang_aaai14].

We adopt stochastic gradient descent (SGD) in mini-batch mode to minimize the objective function in Eq. \[obj\]. A small set of triples is sampled from the training data. For each such triple, we sample its corresponding incorrect triples. All the correct and incorrect triples are put into a mini-batch. We compute the gradient and update the parameters after each mini-batch. When the iteration reaches a predefined number, we obtain the embeddings for all users, POIs, and spatiotemporal patterns.

Recommendation Using STA
------------------------

Once we have learned the embeddings, given a query user $u_q$ with query time $t_q$ and location $l_q$, i.e., *q* = ($u_q$, $t_q$, $l_q$), we first combine $t_q$ and $l_q$ into a spatiotemporal pattern $tl_q$, and then obtain the potential POI $v_q$ using Eq. \[querypoi\]:
$$\label{querypoi} \small
\vec{v_q} = \vec{u_q} M_{tl} + \vec{tl_q}$$
The learned POI embedding $v_q$ naturally reflects the user’s preference, because it encodes the user’s past activities in $\vec{u_q}$. It also captures the geographic and temporal influence in $\vec{tl_q}$. For each POI $v \in V$, we compute its distance to the POI $v_q$ in the normed linear space as defined in Eq. \[simpoi\], and then select the *k* POIs with the smallest ranking scores as recommendations:
$$\label{simpoi} \small
d(v, v_q) = \parallel \vec{v} M_{tl} - \vec{v_q} \parallel_1$$
We would like to emphasize how our computation of $v_q$ and our recommendation procedure differ from those in  [@Lin_aaai15; @Xie_cikm16]. First, we find an explicit POI $v_q$ directly in the latent space through the translation of the user’s embedding by the embedding of the spatiotemporal pattern, while the other methods compute an implicit $v_q$ by its distance/similarity to the user $u_q$. Second, since the embeddings of the POIs in *V* come from the same space, we can choose the POIs that are the closest neighbors of $v_q$ in this space. This indicates that our recommended POIs are semantically consistent with the query user’s POI of interest $v_q$.

Dealing with Cold Start POIs
----------------------------

Consider cold start POIs, which contain geographic and content information such as tags but do not have any check-ins [@Xie_cikm16]. We can simply extend our model to include the POI-POI relationship through the translation of content patterns. We call this model STA-C. The rationale is that, if two POIs share a common tag or location, there will be a high degree of similarity between them, and their vector representations should be close to each other.
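To make the scoring and recommendation steps above concrete, the following is a minimal numpy sketch of the recommendation step of Eqs. \[querypoi\] and \[simpoi\]; the function name `recommend_top_k`, the toy dimensions, and the plain argsort ranking are illustrative assumptions rather than part of the model specification.

```python
import numpy as np

def recommend_top_k(u_q, tl_q, M_tl, poi_emb, k=10):
    """Rank POIs for the query (u_q, tl_q) following Eqs. [querypoi] and [simpoi].

    u_q     -- learned embedding of the query user, shape (d,)
    tl_q    -- learned embedding of the query <time, location> pair, shape (m,)
    M_tl    -- projection matrix of that spatiotemporal pair, shape (d, m)
    poi_emb -- embeddings of all POIs in V, shape (|V|, d)
    """
    # translate the projected user by the spatiotemporal pattern to obtain
    # the embedding of the POI the user is likely interested in (Eq. querypoi)
    v_q = u_q @ M_tl + tl_q
    # project every candidate POI into the same relation space and rank by
    # L1 distance to v_q; a smaller distance means a better recommendation (Eq. simpoi)
    dist = np.abs(poi_emb @ M_tl - v_q).sum(axis=1)
    return np.argsort(dist)[:k]

# toy usage with random embeddings (d = m = 100 as in our experiments)
d, m, n_pois = 100, 100, 5000
top_k = recommend_top_k(np.random.randn(d), np.random.randn(m),
                        np.random.randn(d, m), np.random.randn(n_pois, d))
```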
Based on the observation that POIs sharing a common tag or location should have close vector representations, we define the STA-C score function as follows:
$$\label{scorecold} \small
\begin{aligned}
\begin{split}
s_{tlw}(u, v, s) & = s_{tl}(u, v) + s_{wl}(v, s) \\
& = \parallel \vec{u_{tl}} + \vec{tl} - \vec{v_{tl}} \parallel _2^2 + \parallel \vec{v_{wl}} + \vec{wl} - \vec{s_{wl}} \parallel _2^2,
\end{split}
\end{aligned}$$
where *s* is a POI sharing at least one $<$word, location$>$ pair with POI *v*, and the objective function for cold start POIs is defined as:
$$\label{objcold} \small
\begin{aligned}
\begin{split}
LC = & \sum_{(u,tl,v) \in T} \sum_{(u',tl,v') \in T'} max(0, s_{tl}(u, v) + \gamma - s_{tl}(u', v')) + \\
& \sum_{(v,wl,s) \in W} \sum_{(v',wl,s') \in W'} max(0, s_{wl}(v, s) + \gamma - s_{wl}(v', s'))
\end{split}
\end{aligned}$$
We once again use stochastic gradient descent to minimize the objective function *LC* in Eq. \[objcold\]. The only difference is the sampling procedure. For STA-C, since we have two types of edges, we sample the triples (u, tl, v) and (v, wl, s) and their corresponding incorrect triples alternately to update the model.

Our STA-C model, proposed for dealing with cold start POIs, can also be applied to the normal POI recommendation problem. However, it requires that the POIs contain content information. For recommendation on datasets like Gowalla, which lack content information, STA-C is not applicable; hence we only treat it as an extended model. Note also that it is STA-C that uses the same information as GE. Our standard STA model, on the other hand, uses less information than GE because it does not include the contents of POIs.

Experimental Evaluation
=======================

In this section, we first introduce the experimental setup and then compare our experimental results with those of the baselines. Finally, we show the performance of our method in addressing the data sparsity and cold start problems.

Experimental Setup
------------------

**Datasets** We evaluate our methods on two real-life LBSN datasets: Foursquare and Gowalla. A number of researchers have conducted experiments on data collected from these two social networks  [@Yuan_sigir13; @Chen_aaai15; @Gao_aaai15; @Xie_cikm16; @Yin_tkde16]. However, many of these datasets were collected from different regions or over different time spans. For a fair comparison with GE, we use the publicly available version [^1] provided by the authors of  [@Xie_cikm16]. The two datasets differ in scale, e.g., in geographic range and in the numbers of users, POIs, and check-ins; hence they are suitable for examining the performance of algorithms on various data types. Their statistics are listed in Table \[tbl:dataset\].

[ccc]{} & **Foursquare** & **Gowalla**\
\# of users & 114,508 & 107,092\
\# of POIs & 62,462 & 1,280,969\
\# of check-ins & 1,434,668 & 6,442,892\
\# of time slots & 24 & 24\
\# of locations & 5,846 & 200\
\# of $<$t, l$>$ patterns & 28,868 & 3,636\

Each check-in is stored as user-ID, POI-ID, POI-location in the form of latitude and longitude, check-in timestamp, and POI-content (only for Foursquare). In order to get the spatiotemporal patterns $<$t, l$>$ in Table \[tbl:dataset\], we use the same discretization method as that in  [@Xie_cikm16], i.e., dividing time into 24 time slots which correspond to the 24 hours of the day, and dividing the whole geographical space into a set of regions according to 5,846 administrative divisions (for Foursquare) and 200 regions clustered by a standard *k*-means method (for Gowalla).
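As an illustration of this discretization, the following is a minimal sketch of how raw check-ins could be mapped to $<$t, l$>$ patterns; the function name `build_tl_patterns` is hypothetical, the hour-of-day conversion assumes POSIX timestamps in UTC, and scikit-learn’s k-means is used only as a stand-in for the clustering step on Gowalla.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_tl_patterns(timestamps, coords, n_regions=200):
    """Map check-ins to <time slot, region> pairs.

    timestamps -- array of POSIX timestamps of the check-ins
    coords     -- array of shape (n, 2) with (latitude, longitude)
    """
    # map each timestamp to one of 24 hourly time slots (UTC hours assumed)
    hours = ((np.asarray(timestamps) // 3600) % 24).astype(int)
    # cluster the coordinates into regions with standard k-means
    # (the Gowalla setting; Foursquare uses administrative divisions instead)
    regions = KMeans(n_clusters=n_regions, n_init=10).fit_predict(coords)
    # a spatiotemporal pattern is the pair (time slot, region id)
    return list(zip(hours, regions))
```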
We finally get 28,868 and 3,636 $<$t, l$>$ pairs on Foursquare and Gowalla, respectively.

**Baselines - {GE, STA-E, STA-H}** We use GE, the state-of-the-art location-based recommendation approach in  [@Xie_cikm16], as our baseline. GE adopts a graph-based embedding framework. It learns the embeddings based on the POI-POI, POI-Time, POI-Location, and POI-Words graphs. By integrating the sequential, geographical, temporal cyclic, and semantic effects into a shared space, GE effectively overcomes the data sparsity problem and achieves the best performance so far. We do not compare our method with other existing approaches because GE has already significantly outperformed a number of baselines including JIM  [@Yin_cikm15joint], PRME  [@Feng_ijcai15], and Geo-SAGE  [@Wang_kdd15Geo]. We thus only show our improvements over GE. Also note that although we choose the TransR technique in knowledge graph embedding to materialize our STA model, the essence of our proposed framework is the translation of $<$time, location$>$ pairs in the embedding space. This indicates that we do not rely on a specific translation model. Hence we can also use TransE  [@Bordes_ml14] and TransH  [@Wang_aaai14] to realize STA. We denote the resulting methods as the STA-E and STA-H baselines, respectively.

**Settings** We first organize the quadruples (u, v, t, l) in each dataset by users to get each user’s profile $D_u$. We then rank the records in $D_u$ according to the check-in timestamps, and finally divide these ordered records into two parts: the first 80% as the training data and the remaining 20% as the test data. Moreover, the last 10% of the check-in records in the training data are used as a validation set for tuning the hyper-parameters. We use accuracy@*k* (*k* = {1, 5, 10, 15, 20}) as our evaluation metric. All these settings, as well as the computation of accuracy@*k*, are the same as those in  [@Xie_cikm16]. We use the default settings of the original TransR  [@Lin_aaai15] as the parameter settings for our STA model. Specifically, we set the learning rate $\lambda = 0.0001$, the margin $\gamma = 2$, the mini-batch size $B=4800$, and the embedding dimensions $m = d = 100$, and we traverse over all the training data for 1000 rounds.

Comparison with baselines
-------------------------

For a fair comparison, we implement GE using the same LINE software provided by the authors of  [@Tang_www15] on our data divisions. All the parameters for GE are the same as those in  [@Xie_cikm16]. We find a slight difference (less than 1% in accuracy) between the original results and those of our implemented GE. This is understandable and acceptable considering the randomness in sampling negative edges in LINE and in initializing the cluster centers of the regions. All parameters for STA-E and STA-H use the default settings in  [@Bordes_ml14] and  [@Wang_aaai14]. We present the comparison results on Foursquare and Gowalla in Fig. \[fig:comp\] (a) and (b), respectively.

![image](comp.eps){width="82.00000%" height="5.2cm"}

From Fig. \[fig:comp\] (a), it is clear that all our proposed STA-style models significantly outperform GE. For instance, the accuracy@1 for STA, STA-H, and STA-E is 0.307, 0.280, and 0.255, respectively, much better than the 0.225 for GE. Similar results can be observed in Fig. \[fig:comp\] (b) on the Gowalla dataset. This clearly demonstrates the effectiveness of our translation-based framework. While STA shows drastic improvement over GE for all *k*s on Foursquare, the trend is not as obvious on Gowalla when *k* = 15, 20.
This is because there is a much smaller number of relations in Gowalla than in Foursquare. As shown in Table  \[tbl:dataset\], Gowalla only has 3,636 relation patterns ($<$t, l$>$ pairs) while Foursquare has 28,868 pairs. Hence the learnt embeddings for entities and relations are worse than those on Foursquare, which leads to less accurate results when *k* is large. Besides the improvement over GE, STA outperforms STA-H and STA-E as well. The reason is that TransR can differentiate the entities in the transformed relation space. Nevertheless, we see a less significant enhancement of STA over STA-H on Gowalla. This also conforms to the characteristics of the data: the graph of Gowalla is much larger but has fewer *tl* relation edges than that of Foursquare, and the advantage of TransR over TransE is not obvious on such a dataset.

Effects of Model Parameters
---------------------------

The effects of the embedding dimension *d* on Foursquare and Gowalla are shown in Table \[tab:dimf\] and Table \[tab:dimw\], respectively.

  *d*   *k*=1   *k*=5   *k*=10   *k*=15   *k*=20
  ----- ------- ------- -------- -------- --------
  70    0.281   0.376   0.409    0.433    0.451
  80    0.294   0.384   0.417    0.445    0.462
  90    0.300   0.390   0.425    0.459    0.476
  100   0.307   0.393   0.434    0.461    0.483
  110   0.311   0.407   0.439    0.463    0.486
  120   0.312   0.407   0.439    0.464    0.486

  : Effects of Dimensionality on Foursquare \[tab:dimf\]

  *d*   *k*=1   *k*=5   *k*=10   *k*=15   *k*=20
  ----- ------- ------- -------- -------- --------
  70    0.355   0.432   0.474    0.503    0.527
  80    0.358   0.436   0.478    0.508    0.530
  90    0.359   0.439   0.482    0.509    0.535
  100   0.361   0.445   0.486    0.511    0.539
  110   0.361   0.445   0.488    0.513    0.540
  120   0.361   0.445   0.488    0.513    0.540

  : Effects of Dimensionality on Gowalla \[tab:dimw\]

We can see that the experimental results are not very sensitive to the dimension *d*. With an increasing dimension, the accuracy on Gowalla is almost unchanged, i.e., the improvement is less than 1% in nearly all cases. The accuracy on Foursquare improves slightly with a larger dimension *d* and finally becomes stable. To investigate the effects of the time interval, we divide timestamps by three methods, i.e., splitting time into 24, 7, and 2 time slots, corresponding to daily, weekly, and weekday/weekend patterns, respectively. Figure \[fig:time\] shows the effects of the various time intervals.

![image](time.eps){width="82.00000%" height="5.5cm"}

We observe that the impact of the daily patterns is the most significant on both datasets. In addition, the results for different patterns vary widely, suggesting that a good strategy for dividing time slots is important.

Sensitivity to Data Sparsity
----------------------------

To investigate the sensitivity of STA and GE to data sparsity, we conduct extensive experiments to evaluate their performance on the two datasets with reduced training data. More precisely, we keep the test set unchanged and reduce the training data randomly by a ratio of 5% to 20% in steps of 5%. Due to space limitations, we only present the results for reducing the training data by 20% in Table \[tab:20less\]. The trends for the other ratios are all alike. We have the following important observations for Table \[tab:20less\].

- With the reduction of training data, the accuracy values for STA and GE both decrease. However, STA always achieves the best results at different *k* values on both datasets.

- The reduction in accuracy of our STA model is much smaller than that of GE. For instance, the accuracy@1 of GE decreases from 0.225 to 0.154, a 31.69% drop. In contrast, our STA model only has a 20.00% change.
This strongly suggests that our model is more robust to data sparsity.

- The decline in accuracy on Foursquare is more obvious than that on Gowalla. The reason may be that Foursquare is much sparser in users’ check-ins than Gowalla; hence reducing the training data has a greater impact on Foursquare.

Test for Cold Start Problem
---------------------------

In this experiment, we further compare the effectiveness of our extended STA-C model with GE in addressing the cold-start problem. The cold start POIs are defined as those visited by fewer than 5 users  [@Yin_tkde16]. To test the performance of cold start POI recommendation, we select users who have at least one cold-start check-in as test users. For each test user, we choose his/her check-in records associated with cold-start POIs as test data and the remainder as training data. Since there is no content information for POIs in Gowalla, we conduct experiments, just as GE did, only on Foursquare. The results are shown in Fig. \[fig:cold\].

![Test for Cold Start Problem on Foursquare[]{data-label="fig:cold"}](cold1.eps){width="56.00000%" height="3.6cm"}

From Fig. \[fig:cold\], it is clear that our proposed STA-C model consistently beats GE when recommending cold start POIs. The superior performance of the STA-C model is due to the translation of the content and geography information *wl* from an ordinary POI *v* to a cold start POI $v_c$. As long as there is an existing *v* sharing one $<$word, location$>$ pair with $v_c$, our STA-C model can obtain a translation for $v_c$. In contrast, GE utilizes the bipartite POI-Word and POI-Location graphs. The weight of an edge in the graph is calculated from the TF-IDF value of the word or the frequency of a location, and the edge weight is proportional to the probability of the edge being sampled. Since there are few check-in records for cold start POIs, a $v_c$-word or $v_c$-location edge has an extremely small chance of being selected and updated. Consequently, the learnt embedding for $v_c$ will be poor, which further deteriorates the recommendation accuracy.

Conclusion
==========

We present a novel spatiotemporal aware model, STA, for learning representations of users, spatiotemporal patterns, and POIs. The basic idea is to capture the geographic and temporal effects using a $<$time, location$>$ pair and to model it as a translation connecting users and POIs. We realize STA using a knowledge graph embedding technique. Our method has two distinct advantages. 1) We learn a joint representation for spatiotemporal patterns whose components contribute together to a user’s choice of POIs. 2) The translation mechanism keeps the learnt POI embeddings in the same semantic space as that of the query POI. We conduct extensive experiments on two real-life datasets. Our results show that STA achieves state-of-the-art recommendation accuracy. It also significantly outperforms the baselines in terms of effectiveness in addressing both the data sparsity and cold start problems.

Acknowledgments {#acknowledge .unnumbered}
===============

The work described in this paper has been supported in part by the NSFC project (61572376).

Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. A semantic matching energy function for learning with multi-relational data. , 94(2):233–259, 2014.

Xuefeng Chen, Yifeng Zeng, Gao Cong, Shengchao Qin, Yanping Xiang, and Yuanshun Dai. On information coverage for location category based point-of-interest recommendation. In [*Proc. of AAAI*]{}, pages 37–43, 2015.
Chen Cheng, Haiqin Yang, Irwin King, and Michael R Lyu. Fused matrix factorization with geographical and social influence in location-based social networks. In [*Proc. of AAAI*]{}, pages 17–23, 2012.

Chen Cheng, Haiqin Yang, Michael R. Lyu, and Irwin King. Where you like to go next: Successive point-of-interest recommendation. In [*Proceedings of IJCAI*]{}, pages 2605–2611, 2013.

Chen Cheng, Haiqin Yang, Irwin King, and Michael R Lyu. A unified point-of-interest recommendation framework in location-based social networks. , 8(1):1–21, 10 2016.

Eunjoon Cho, Seth A. Myers, and Jure Leskovec. Friendship and mobility: user movement in location-based social networks. In [*Proceedings of SIGKDD*]{}, pages 1082–1090, 2011.

Shanshan Feng, Xutao Li, Yifeng Zeng, Gao Cong, Yeow Meng Chee, and Quan Yuan. Personalized ranking metric embedding for next new poi recommendation. In [*Proc. of 24th IJCAI*]{}, pages 2069–2075, 2015.

Huiji Gao, Jiliang Tang, Xia Hu, and Huan Liu. Content-aware point of interest recommendation on location-based social networks. In [*Proceedings of 29th AAAI*]{}, pages 1721–1727, 2015.

Defu Lian, Cong Zhao, Xing Xie, Guangzhong Sun, Enhong Chen, and Yong Rui. Geomf: joint geographical modeling and matrix factorization for point-of-interest recommendation. In [*Proceedings of SIGKDD*]{}, pages 831–840, 2014.

Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. Learning entity and relation embeddings for knowledge graph completion. In [*Proc. of 29th AAAI*]{}, pages 2181–2187, 2015.

Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In [*Proceedings of WWW*]{}, pages 1067–1077, 2015.

Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Knowledge graph embedding by translating on hyperplanes. In [*Proc. of 28th AAAI*]{}, pages 1112–1119, 2014.

Weiqing Wang, Hongzhi Yin, Ling Chen, Yizhou Sun, Shazia Sadiq, and Xiaofang Zhou. Geo-sage: A geographical sparse additive generative model for spatial item recommendation. In [*Proceedings of SIGKDD*]{}, pages 1255–1264, 2015.

Min Xie, Hongzhi Yin, Hao Wang, Fanjiang Xu, Weitong Chen, and Sen Wang. Learning graph-based poi embedding for location-based recommendation. In [*Proc. of CIKM*]{}, pages 15–24, 2016.

Mao Ye, Peifeng Yin, and Wang Chien Lee. Location recommendation for location-based social networks. In [*Proc. of ACM SIGSPATIAL*]{}, pages 458–461, 2010.

Mao Ye, Krzysztof Janowicz, and Wang Chien Lee. What you are is when you are: the temporal dimension of feature types in location-based social networks. In [*Proc. of ACM SIGSPATIAL*]{}, pages 102–111, 2011.

Mao Ye, Peifeng Yin, Wang-Chien Lee, and Dik-Lun Lee. Exploiting geographical influence for collaborative point-of-interest recommendation. In [*Proceedings of SIGIR*]{}, pages 325–334, 2011.

Hongzhi Yin, Xiaofang Zhou, Yingxia Shao, Hao Wang, and Shazia Sadiq. Joint modeling of user check-in behaviors for point-of-interest recommendation. In [*Proceedings of CIKM*]{}, pages 1631–1640, 2015.

Hongzhi Yin, Xiaofang Zhou, Bin Cui, Hao Wang, Kai Zheng, and Nguyen Quoc Viet Hung. Adapting to user interest drift for poi recommendation. , 28(10):2566–2581, 2016.

Quan Yuan, Gao Cong, Zongyang Ma, Aixin Sun, and Nadia Magnenat Thalmann. Time-aware point-of-interest recommendation. In [*Proc. of SIGIR*]{}, pages 363–372, 2013.

Yu Zheng, Lizhu Zhang, Xing Xie, and Wei-Ying Ma. Mining interesting locations and travel sequences from gps trajectories. In [*Proc. of WWW*]{}, pages 791–800, 2009.
Wen-Yuan Zhu, Wen-Chih Peng, Ling-Jyh Chen, Kai Zheng, and Xiaofang Zhou. Modeling user mobility for location promotion in location-based social networks. In [*Proceedings of SIGKDD*]{}, pages 1573–1582, 2015.

[^1]: https://sites.google.com/site/dbhongzhi
--- --- [**Deep supervised learning using local errors** ]{}\ Hesham Mostafa$^1$, Vishwajith Ramesh$^2$, and Gert Cauwenberghs$^{1,2}$\ $^{1}$Institute for Neural Computation,$^{2}$Department of Bioengineering\ UC San Diego, California, USA\ E-mail: hmmostafa@ucsd.edu Abstract {#abstract .unnumbered} ======== Error backpropagation is a highly effective mechanism for learning high-quality hierarchical features in deep networks. Updating the features or weights in one layer, however, requires waiting for the propagation of error signals from higher layers. Learning using delayed and non-local errors makes it hard to reconcile backpropagation with the learning mechanisms observed in biological neural networks as it requires the neurons to maintain a memory of the input long enough until the higher-layer errors arrive. In this paper, we propose an alternative learning mechanism where errors are generated locally in each layer using fixed, random auxiliary classifiers. Lower layers could thus be trained independently of higher layers and training could either proceed layer by layer, or simultaneously in all layers using local error information. We address biological plausibility concerns such as weight symmetry requirements and show that the proposed learning mechanism based on fixed, broad, and random tuning of each neuron to the classification categories outperforms the biologically-motivated feedback alignment learning technique on the MNIST, CIFAR10, and SVHN datasets, approaching the performance of standard backpropagation. Our approach highlights a potential biological mechanism for the supervised, or task-dependent, learning of feature hierarchies. In addition, we show that it is well suited for learning deep networks in custom hardware where it can drastically reduce memory traffic and data communication overheads. Introduction ============ Gradient descent training techniques [@Bottou91] have been remarkably successful in training a broad range of network architectures. This success is often attributed to the use of deep architectures with many non-linearity stages [@Ba_Caruna14] where backpropagation is used to calculate the direction of weight updates in deep layers. In convolutional networks in particular, multiple cascaded convolutional layers allow simple, lower-level, features to be successively composed into more complex features, allowing networks to obtain highly complex and relevant features from the top convolutional layers [@Razavian_etal14]. Deep convolutional neural networks trained using backpropagation are thus achieving record performance in a variety of large-scale machine vision tasks [@Krizhevsky_etal12; @Simonyan_Zisserman14; @LeCun_etal15; @He_etal16; @Zagoruyko_Komodakis16; @Huang_etal16]. For deep convolutional networks trained in a supervised setting, the training objective is typically the minimization of classification error at the top network layer. This objective is sometimes augmented by auxiliary objectives defined using the outputs of intermediate classifiers in the network [@Szegedy_etal14; @Lee_etal15]. These auxiliary objectives provide additional sources of error to deeper layers. Training, however, involves error signals that must propagate backwards from the top layer. Standard backpropagation is biologically unrealistic for several reasons: the need to buffer network states until errors arrive from the top layer; weight symmetry in the forward and backward passes; and the need to precisely interleave the forward and backward passes. 
Several biologically-motivated learning mechanisms have been proposed to explain how circuits in the brain are able to learn complex, hierarchical representations. One broad class of these proposals is based on contrastive learning in energy-based models [@Seung_etal03; @Bengio_Fischer15; @Scellier_Bengio17]. In these models, the network is trained to minimize the discrepancy between its equilibrium points when running freely and when observables clamp the values of some units in the network. Weight symmetry is required, though: each synaptic connection from one neuron to another assumes a matching synaptic connection of identical strength in the reverse direction. In [@Lillicrap_etal16; @Baldi_etal16], weight symmetry is avoided by using an independent set of fixed random weights to backpropagate errors between the network layers. However, like standard backpropagation, the error signals are non-local. Instead of backpropagating errors layer by layer through the random feedback connections, the networks in [@Nokland_etal16; @Neftci_etal17a] directly use a fixed random projection of the top layer error as the error signal in deep layers. Although this permits a single global error signal communicated in common to all layers, it still incurs substantial wait times and memory requirements for the weight updates, as a forward pass through the entire network has to be completed before the error signal is available, which requires deep layers to hold their states for the duration of the full forward pass.

We propose a learning approach where the weights in any given layer are trained based on local errors that are generated solely from neural state variables in that layer. These errors are generated directly from the training labels using a classifier with fixed random weights and no hidden layers, whose input is the neural activations of the layer being trained. Instead of minimizing a global objective function, training thus minimizes many local objective functions. As such, this approach compromises one of the core tenets of standard backpropagation: the adjustment of all parameters in concert to minimize a unified objective. Nevertheless, training with local errors still allows a deep network to compose the features learned by lower layers into more complex features in higher layers. This is evidenced below by the improvement in accuracy of the random local classifiers in deeper layers. Training with local errors thus retains the hierarchical composition of features, one of the key strengths of deep networks.

To implement weight updates based on backpropagation in a biologically inspired network, the pre- or post-synaptic neurons need to buffer the past activity of the pre-synaptic neurons and reproduce this past activity in sync with the corresponding errors arriving from top layers in order to update the weights. This is incompatible with biologically motivated synaptic weight update rules that are typically triggered by pre-synaptic events and depend on the relative timing of pre- and post-synaptic spikes and/or state variables in the post-synaptic neuron. Our learning mechanism bypasses biological implausibility arguments against standard backpropagation by generating errors locally in each layer using fixed random projections. Weight updates could thus be carried out while the synaptic currents in post-synaptic neurons (the neurons receiving the local error signal) still retain a memory of recent pre-synaptic activity.
Weight symmetry in the forward and backward passes in standard backpropagation learning is another biologically unrealistic aspect. In our case, the weight symmetry requirement arises in the one-step error backpropagation from the output of the local random classifier to the neurons in the layer being trained. Similar to ref. [@Lillicrap_etal16], we experimented with relaxing this symmetry requirement by using a different set of random, fixed weights to map the classifier error to the error at the layer being trained. We analyze the implications of the proposed learning approach for the design of custom hardware devices for learning the parameters of deep networks. In the proposed learning approach, there is no explicit backward pass as errors are locally generated and can be used to directly update the weights. We show that our approach drastically reduces memory traffic compared to standard backpropagation in the typical situation when the network weights and activations can not all fit into the compute device memory. We achieve this reduction even despite an increased number of parameters in the network due to the addition of the random local classifier weights in each layer. These weights, however, are fixed allowing them to be generated on the fly using pseudo-random number generators (PRNGs). Only the negligibly small random seeds of the PRNGs for each layer need to be stored. We discuss related work in section \[sec:related\_work\]. We describe the proposed learning mechanism in section \[sec:model\] and quantitatively assess the hardware-related computational and memory access benefits compared to standard learning with global objective functions in section \[sec:hw\]. We present the results of applying the proposed learning method to standard supervised learning benchmarks in section \[sec:results\] and compare our learning method’s performance to that of the feedback alignment technique [@Lillicrap_etal16] . We present our conclusions and further discussion on the biological plausibility of the proposed learning mechanism in section \[sec:conclusions\_discussion\]. Related Work {#sec:related_work} ============ Training of deep convolutional networks is currently dominated by approaches where all weights are simultaneously trained to minimize a global objective. This is typically done in a purely supervised setting where the training objective is the classification loss at the top layer. To ameliorate the problem of exploding/vanishing errors in deep layers [@Hochreiter_etal01], auxiliary classifiers are sometimes added to provide additional error information to deep layers [@Szegedy_etal14; @Lee_etal15]. Unlike our training approach, however, training still involves backpropagating errors across the entire network and simultaneous adjustments of all weights. Several learning mechanisms have been traditionally used to pre-train a deep network layer-by-layer using local error signals in order to learn the probability distribution of the input layer activations, or in order to minimize local reconstruction errors [@Hinton_etal06; @Hinton_Salakhutdinov06; @Bengio_etal07; @Vincent_etal08; @Erhan_etal10]. These mechanisms, however, are unsupervised and the networks need to be augmented by a classifier layer, typically added on top of the deepest layer. The network weights are then fine-tuned using standard backpropagation to minimize the error at the classifier layer. 
Supervised layer-wise training has been pursued in [@Bengio_etal07], with auxiliary classifiers that are co-trained, unlike the random fixed auxiliary classifiers proposed here. The supervised layer-wise training is used only as a pre-training step, and results are reported after full network fine-tuning using backpropagation from the top classifier layer. Some approaches forego the fine-tuning step and keep the network fixed after the unsupervised layer-wise training phase, and only train the top classifier layer or an SVM on the learned features [@Lee_etal09; @Ranzato_etal07; @Kavukcuoglu_etal10]. Local learning in [@Ranzato_etal07; @Kavukcuoglu_etal10] involves an iterative procedure for learning sparse codes that is computationally demanding. The network architectures in [@Lee_etal09; @Ranzato_etal07; @Kavukcuoglu_etal10] fail to yield intermediate classification results from the intermediate layers. Moreover, their applicability to datasets that are more complex than MNIST is unclear since labels are not used to guide the learning of features. In more complex learning scenarios with an abundance of possible features, these networks could very well learn few label-relevant features, thereby compromising the performance of the top classifier.

Instead of layer-wise pre-training, several recent approaches train the whole network using a hybrid objective that contains supervised and unsupervised error terms [@Zhao_etal15]. In some of these network configurations, the unsupervised error terms are local to each layer [@Zhang_etal16]. The supervised error term, however, requires backpropagating errors through the whole network. This requirement is avoided in the training approach in [@Ranzato_Szummer08] used to learn to extract compact feature vectors from documents: training proceeds layer by layer where the error in each layer is a combination of a reconstruction error and a supervised error coming from a local classifier. The local auxiliary decoder and classifier pathways still require training, however. Other approaches also make use of a combination of supervised (label-dependent) and unsupervised error signals to train Boltzmann machines as discriminative models [@Larochelle_Bengio08; @Goodfellow_etal13a]. Learning in [@Goodfellow_etal13a], however, is more computationally demanding than our approach as it involves several iterations to approach the mean-field equilibrium point of the network, and errors are still backpropagated through multiple layers. In [@Larochelle_Bengio08], multi-layer networks are not considered and only a single layer RBM is used.

In refs. [@Lillicrap_etal16; @Nokland_etal16; @Baldi_etal16; @Neftci_etal17a], the backpropagation scheme is modified to use random fixed weights in the backward path. This relaxes one of the biologically unrealistic requirements of backpropagation, which is weight symmetry between the forward and backward pathways. Errors are still non-local, however, as they are generated by the top layer. A learning mechanism that is able to generate error signals locally is the synthetic gradients mechanism [@Jaderberg_etal16; @Czarnecki_etal17] in which errors are generated by dedicated error modules in each layer based only on the layer’s activity and the label. The parameters of these dedicated error modules are themselves updated based on errors arriving from higher layers in order to make the error modules better predictors of the true, globally-derived, error signal.
Our approach generates errors in a different manner through the use of a local classifier, and each layer receives no error information from the layer above.

Methods {#sec:model}
=======

We train multi-layer networks, with either convolutional or fully connected layers, based on local errors generated by random classifiers. Consider a fully connected $i^{th}$ hidden layer in a network whose activation vector is denoted by ${\bf y}^i\in R^N$ receiving an input ${\bf x}^i \in R^M$: $${\bf y}^i = f({\bf W}^i{\bf x}^i + {\bf b}^i)$$ $${\bf y}{^i}' = f'({\bf W}^i{\bf x}^i + {\bf b}^i)$$ where ${\bf W}^i$ is the $N \times M$ weight matrix of layer $i$, ${\bf b}^i \in R^N$ is the bias vector, and $f$ is the neuron’s activation function. In all the networks we train, we use Rectified Linear Units (ReLUs) [@Nair_Hinton10], i.e., $f(x)=max(x,0)$, with corresponding derivatives $f'(x)=H(x)$ where $H(\cdot)$ is the Heaviside step function. We pre-define for this hidden layer a fixed random classifier matrix ${\bf M}^i$, which is a $C \times N$ matrix where $C$ is the number of classification categories. The random matrix, ${\bf M}^i$, is used to convert the layer activation vector, ${\bf y}^i$, to a category score vector ${\bf s}^i \in R^C$ where ${\bf s}^i = {\bf M}^i{\bf y}^i$. Since this is a supervised learning setting, the correct input category $t$ is known during training, which allows the layer to generate a scalar loss or error signal, $E(t,{\bf s}^i)$. $E$ could be, for example, the cross-entropy loss or the square hinge loss. This error is then backpropagated in order to calculate the weight and bias updates, $\Delta {\bf W}^i$ and $\Delta {\bf b}^i$: $$\begin{aligned} {\bf e_s}^i &= \frac{dE}{d{\bf s}^i} \\ {\bf e_y}^i &= {\bf K^i}{\bf e_s^i} \odot {\bf y}{^i}' \\ \Delta{\bf W}^i &= -\eta {\bf e_y}^i {\times} {\bf x}^i \\ \Delta{\bf b}^i &= -\eta {\bf e_y}^i\end{aligned}$$ where $\odot$ is the element-wise multiplication operator, $\times$ is the outer product operator, and $\eta$ is the learning rate. ${\bf K}^i$ is the $N \times C$ matrix used to backpropagate the classifier error to the layer being trained. If we set ${\bf K}^i = {\bf M}{^i}^T$, then the weight and bias updates execute exact gradient descent to minimize the random classifier error, $E$. In that case, training of each layer is equivalent to training a network with one hidden layer where only the hidden layer’s input weights and biases are trainable, while the output weights ${\bf M}^i$ are fixed. The learning scheme is illustrated in Fig. \[fig:local\]. For convolutional layers, the learning scheme remains unchanged. The post-activation feature maps tensor is simply flattened to yield a 1D vector before multiplying by the random classifier matrix ${\bf M}$.

![Supervised learning in a multi-layer network using local errors. Biases are omitted for clarity. Red arrows indicate the error pathways. Hidden layer $i$ is trained using local errors generated by a classifier with random fixed weights ${\bf M}^i$. The errors are randomly projected back using the matrix ${\bf K}^i$, and multiplied element-wise with the layer’s activation derivative to yield the error signal ${\bf e_y}^i$ which is then used to update the weights.[]{data-label="fig:local"}](local_learning_biological){width="50.00000%"}

We also use dropout [@Srivastava_etal14] in this training setting to minimize overfitting. All incoming/outgoing weights to/from a dropped neuron are not updated in the iteration in which the neuron is dropped.
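As a concrete illustration of the update equations above, the following is a minimal numpy sketch of a single local-error update for one fully connected layer; the cross-entropy loss is used as the example choice of $E$, a plain SGD step replaces ADAM, and the function and variable names are illustrative assumptions rather than the implementation used in our experiments.

```python
import numpy as np

def local_error_update(W, b, M, K, x, t, lr=1e-4):
    """One weight update of layer i using its local random classifier.

    W, b -- trainable weights/bias of the layer (N x M_in, N)
    M    -- fixed random classifier matrix (C x N)
    K    -- fixed feedback matrix (N x C); K = M.T gives exact gradients
    x    -- input vector to the layer (M_in,)
    t    -- integer class label of the current example
    """
    z = W @ x + b
    y = np.maximum(z, 0)                 # ReLU activation
    s = M @ y                            # local classifier scores
    # cross-entropy loss on the local classifier: e_s = softmax(s) - onehot(t)
    p = np.exp(s - s.max()); p /= p.sum()
    e_s = p.copy(); e_s[t] -= 1.0
    # one-step backpropagation through the fixed local classifier
    e_y = (K @ e_s) * (z > 0)            # multiply by the ReLU derivative
    W -= lr * np.outer(e_y, x)           # weight update, local to this layer
    b -= lr * e_y
    return y                             # forward activation feeds the next layer
```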
In some networks, we use batch normalization [@Ioffe_Szegedy15] before the layer’s non-linearity. The layer’s learnable parameters will then include a scaling factor (one for each neuron in a fully connected layer, or one for each feature map in a convolutional layer) that is also trained using local errors. For a fully connected layer, the input to the local classifier is taken after the dropout mask is applied (if dropout is used). For a convolutional layer, the input to the layer’s local classifier is taken after pooling and after applying the dropout mask. In all experiments, we initialize the fixed random classifier weights, as well as the trainable weights in the main network, from a uniform, zero-mean, distribution whose max/min values depend on the number of neurons in the source and target layers according to the scheme in [@Glorot_Bengio10]. We compare our approach to the feedback alignment training method [@Lillicrap_etal16] in which random fixed weights are used to backpropagate the error layer-by-layer from the top layer. The layer’s activation derivative is still used when backpropagating errors. In the presence of max-pooling layers, errors only backpropagate through the winner(max) neuron in each pooling window. When using feedback alignment training in the presence of dropout, a neuron that is dropped during the forward pass is also dropped during the backward pass. When using convolutional layers, we use fixed random filters that we convolve with the errors of one convolutional layer to yield the errors at the outputs of the previous/lower convolutional layer. We also use batch normalization when training using feedback alignment. The extra scaling parameters introduced by batch normalization are trained using the randomly backpropagated errors arriving at the batch-normalized layer’s output. All experiments in this paper were carried out using Theano [@Bastien_etal12; @Bergstra_etal10], and all parameters were optimized using ADAM [@Kingma_Ba14]. Hardware Implications of Learning Using Local Errors {#sec:hw} ==================================================== [0.62]{} ![Memory traffic and number of MAC operations for different training methods. Arrows between compute device and external memory indicate memory traffic while green arrows indicate data buffered and reused by the compute device. Each computation stage is executed a number of times given by the enclosing repeat block. () Standard backpropagation learning. () Training all layers simultaneously using local errors. Note that there is no backward pass as weights are updated during the forward pass.[]{data-label="fig:traffic"}](traffic_horizontal "fig:"){width="\textwidth"} \[fig:traffic\_a\] [0.42]{} ![Memory traffic and number of MAC operations for different training methods. Arrows between compute device and external memory indicate memory traffic while green arrows indicate data buffered and reused by the compute device. Each computation stage is executed a number of times given by the enclosing repeat block. () Standard backpropagation learning. () Training all layers simultaneously using local errors. 
Note that there is no backward pass as weights are updated during the forward pass.[]{data-label="fig:traffic"}](traffic_allnetwork_horizontal "fig:"){width="\textwidth"} \[fig:traffic\_b\] Standard learning techniques based on backpropagating errors through the whole network require the hardware executing the learning algorithm to store the activation values and activation derivatives of all network layers in order to calculate weight updates and backpropagate errors once errors are available from the top layer. This imposes several communication and memory access overheads if learning is executed on hardware whose memory can not accommodate all the network weights and activations. For large scale convolutional networks, this practically includes all CPU and GPU devices where on-chip memory is limited to few tens of MBytes, while state of the art deep convolutional networks typically require several hundred MBytes to several GBytes in order to store the network weights and mini-batch activations [@Rhu_etal16]. Data thus has to be continuously shuttled between the compute device and external memory. This is the case even in custom accelerators developed to accelerate just the inference (feed-forward) phase [@Himavathi_etal07; @Cavigelli_etal15; @Chen_etal16; @Han_etal16; @Ardakani_etal16; @Aimar_etal17; @Jouppi_etal17], where a complete forward pass through a large-scale convolutional network can not be executed completely on the accelerator without having to access external memory to store intermediate activations and to load weights. Improvements in memory bandwidth significantly lag improvements in computing elements speed [@Wulf_McKee95]. Reducing memory traffic in a compute intensive task such as learning deep networks thus improves performance as it relaxes the requirements on the memory bandwidth and latency needed to keep the compute elements occupied, allowing either the compute elements to run at higher frequencies or the external memory to run at lower frequencies. Moreover, energy needed to drive off-chip traffic from/to external memory as well as memory read/write energy often contribute significantly to the overall energy consumption [@Vogelsang10; @Lefurgy_etal03]. Reducing memory traffic can thus have significant impact on the overall energy consumption of the learning hardware. In this section, we analyze the savings in memory traffic volume obtained using the learning approach based on local errors that we propose in this paper. Consider a neural network with $L$ layers. $P^i$ and $A^i$ are the parameters and the mini-batch activations of layer $i$, respectively. $\lvert P^i\rvert$ and $\lvert A^i\rvert$ are the number of elements in $P^i$ and $A^i$. A neuron in layer $i$ has a fanout of $R^i$, i.e, a neuron in layer $i$ projects to $R^i$ neurons in layer $i+1$. In convolutional layers, we ignore any border effects which might cause the neurons at the borders of the feature maps to project to fewer neurons than neurons away from the borders. We divide the training data set into $N_{b}$ mini-batches and train the network for $N_{e}$ epochs. Each weight and each neuron activation takes up one memory word (which we assume is 32 bits). Figure \[fig:traffic\_a\] illustrates the data traffic and the number of MAC operations needed during standard backpropagation training. The data traffic in Fig. 
\[fig:traffic\_a\] assumes the compute device has enough on-board memory to buffer the output activations of one layer in order to use these activations to calculate the next layer’s activations. We also assume the compute device does not need the parameters of any layer to be streamed in more than once during each forward pass and during each backward pass. These assumptions would hold true if the accelerator has at least $\max_i (\lvert P^i\rvert + \lvert A^i\rvert)$ words of on-board memory. During the forward pass, the activations of all layers have to be streamed out to external memory so they can be used in the backward pass. The number of MAC operations needed to calculate the activations of layer $i$ is $R^{i-1}\lvert A^{i-1} \rvert$. During the backward pass, the compute device buffers the back-propagated errors of one layer and uses them to calculate the errors at the preceding layer. $R^{i}\lvert A^{i} \rvert$ MAC operations are needed to calculate the weight updates for $P^{i+1}$. An additional $R^{i}\lvert A^{i} \rvert$ MAC operations are needed to backpropagate the errors from layer $i+1$ to layer $i$. We ignore the special case of the input layer where errors do not need to be backpropagated. We also ignore the MAC operations needed to calculate the error at the top layer.

Figure \[fig:traffic\_b\] illustrates the case when learning is done using errors generated by random local classifiers. As in standard backpropagation, $R^{i-1}\lvert A^{i-1} \rvert$ MAC operations are needed to calculate the activations of layer $i$. To calculate the local classifier output, $C \lvert A^i \rvert$ MAC operations are needed, where $C$ is the number of classification classes. Note that the random classifier weights can be generated on the fly using a PRNG, and thus only require the PRNG seed (whose size can be 32 bits for 32-bit weights) to be stored. To backpropagate the local classifier error to obtain the error at layer $i$, an additional $C \lvert A^i \rvert$ MAC operations are needed, and $R^{i-1}\lvert A^{i-1} \rvert$ MAC operations are needed to update the parameters of layer $i$, $P^i$, based on the layer’s error.

Table \[table:traffic\_macs\] summarizes the number of MAC operations and the memory read/write volume required by the two training methods. Learning using local errors has a decisive advantage when it comes to memory traffic, as it requires drastically fewer read and write operations compared to standard backpropagation. The reduction in the number of MAC operations is less clear-cut, as it depends on the number of classification classes, $C$, and the fanout of the neurons in the network, $R^i$. Learning using local errors reduces the MAC operation count if $L\times C < 0.5\sum_i R^i$. This condition is easily satisfied when the number of classes is small, and it was satisfied by all the networks presented in this paper.
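As a rough illustration of these counts, the sketch below evaluates the expressions summarized in Table \[table:traffic\_macs\] for a small hypothetical network; all layer sizes, fanouts, and the class count are made-up numbers used only to show how the comparison, including the condition $L\times C < 0.5\sum_i R^i$, can be checked.

```python
def traffic_and_macs(P, A, R, C, Ne, Nb):
    """Evaluate the expressions of Table [traffic_macs] for per-layer
    parameter counts P, mini-batch activation counts A, and fanouts R."""
    bp = (Ne * Nb * sum(2 * p + a for p, a in zip(P, A)),   # memory reads
          Ne * Nb * sum(p + a for p, a in zip(P, A)),        # memory writes
          Ne * Nb * sum(3 * r * a for r, a in zip(R, A)))    # MAC operations
    local = (Ne * Nb * sum(P),                               # memory reads
             Ne * Nb * sum(P),                               # memory writes
             Ne * Nb * sum((2 * r + 2 * C) * a for r, a in zip(R, A)))
    return bp, local

# hypothetical 3-layer network with C = 10 classes
P, A, R = [300_000, 200_000, 10_000], [64_000, 32_000, 640], [500, 300, 10]
backprop, local_err = traffic_and_macs(P, A, R, C=10, Ne=100, Nb=500)
print(backprop, local_err)
print("local MACs lower:", len(P) * 10 < 0.5 * sum(R))  # L*C < 0.5*sum(R_i)
```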
Training method Memory read(words) Memory write (words) MAC operations ----------------------------- ----------------------------------------------------------------------- --------------------------------------------------------------------- ---------------------------------------------------- Standard backpropagation $N_{e}N_{b} \sum\limits_i (2\lvert P^i \rvert + \lvert A^i \rvert )$ $N_{e}N_{b} \sum\limits_i (\lvert P^i \rvert + \lvert A^i \rvert)$ $N_{e}N_{b} \sum\limits_i 3R^{i}\lvert A^i \rvert$ Learning using local errors $N_e N_b \sum\limits_i \lvert P^i \rvert $ $N_e N_b \sum\limits_i \lvert P^i \rvert$ $N_eN_b\sum\limits_i(2R^i + 2C)\lvert A^i \rvert$ : Memory traffic and number of MAC operations for different learning methods[]{data-label="table:traffic_macs"} Results {#sec:results} ======= MNIST ----- We first validate the performance of our training approach on the MNIST hand-written digit recognition task. We used the standard split of 50,000/10,000/10,000 examples for training/validation/testing respectively. The validation set was added to the training set after choosing the hyper-parameters. We use a network with 3 fully connected hidden layers with $1000$ neurons per layer and train the weights in the entire network using local errors. As a baseline, we also train a 2-hidden layers network and a 3-hidden layers network using standard backpropagation where each hidden layer also has $1000$ neurons. Dropout was used in all networks to reduce overfitting. We first used fixed symmetric random weights in the forward and backward pathways in the local error loops, i.e, ${\bf K}^i = {\bf M}{^i}^T$ in all layers. The results are shown in Fig. \[fig:mnist\_symmetric\]. The local classifier errors improve for the second and third hidden layers compared to the first hidden layer implying that the network is able to make use of depth to obtain better accuracy. The local classifier errors in the second and third layers are similar implying that the network is unable to make use of the increased depth beyond two hidden layers for this simple dataset. This observation is also valid for standard backpropagation where accuracy does not improve when going from two hidden layers to three hidden layers. When training using local errors, we also ran experiments where the local classifier weights were trainable parameters. This had no effect on accuracy as shown in Table \[tab:mnist\]. [0.48]{} ![() MNIST test set errors obtained from three networks: a 3-layer network trained using local errors and symmetric local feedback weights (${\bf K}^i = {\bf M}{^i}^T$ ) where the errors for the three random local classifiers are shown, a network with two hidden layers trained using standard backpropagation, and a network with three hidden layers trained using standard backpropagation. The networks were trained for 100 epochs. Each line is the average of 20 training trials. () Same as () except that in the network trained using local errors, sign-concordant local feedback weights with independent and random magnitudes were used. 
[]{data-label="fig:mnist_results"}](mnist_symmetric "fig:"){width="\textwidth"} \[fig:mnist\_symmetric\] [0.48]{} ![() MNIST test set errors obtained from three networks: a 3-layer network trained using local errors and symmetric local feedback weights (${\bf K}^i = {\bf M}{^i}^T$ ) where the errors for the three random local classifiers are shown, a network with two hidden layers trained using standard backpropagation, and a network with three hidden layers trained using standard backpropagation. The networks were trained for 100 epochs. Each line is the average of 20 training trials. () Same as () except that in the network trained using local errors, sign-concordant local feedback weights with independent and random magnitudes were used. []{data-label="fig:mnist_results"}](mnist_signconcordant "fig:"){width="\textwidth"} \[fig:mnist\_concordant\] [lp[35mm]{}p[35mm]{}p[35mm]{}]{} & Local error learning (symmetric feedback weights) & Local error learning (sign-concordant feedback weights) & Local error learning (trainable local classifier)\ Test error & & &\ \ & Learning using feedback alignment & 2-layer network trained using backprop & 3-layer network trained using backprop\ Test error & $1.70 \pm 0.087 \%$ & $1.26 \pm 0.068 \%$ & $1.27 \pm 0.050 \%$\ \[tab:mnist\] Next, to lessen concern of biological implausibility of exact symmetry in feedforward and feedback weights, we relaxed the weight symmetry requirement in the local error loops and initialized the error feedback weights ${\bf K}^i$ randomly and independently of ${\bf M}{^i}$, except we then modified the sign of the weights in ${\bf K}^i$ so that $sign({\bf K}^i) = sign({\bf M}{^i}^T)$. The signs of the feedback weights in the local error loops thus match the signs of the feedforward weights (both are fixed and have independent magnitudes). This is the ’sign-concordant feedback weights’ case shown in Fig. \[fig:mnist\_concordant\]. Performance deteriorates slightly in this case compared to symmetric feedforward and feedback local classifier weights. When we relax the symmetry requirement further and choose ${\bf K}^i$ to be random and completely independent of ${\bf M}{^i}$, the network failed to learn and error rates stayed at near-chance level. We also experimented with training based on feedback alignment where errors from the top layer are backpropagated using random fixed weights. The network’s performance using feedback alignment is worse than learning using local errors (using either symmetric or sign-concordant weights) as shown in Table \[tab:mnist\]. It is important to note that in feedback alignment, the feedforward weights eventually ’align’ with the random weights used to backpropagate errors [@Lillicrap_etal16] enabling the network to learn. When learning using random fixed local classifiers, and if we choose random error feedback weights, the classifier weights are fixed and thus can not align with the random weights used in the one-step backpropagation. Reliable error information, however, can still reach the layer being trained if the signs of the random backpropagation weights, ${\bf K}^i$, match the signs of the fixed local classifier weights ${\bf M}^i$. This is in-line with previous investigations into the importance of weight symmetry in backpropagation that argue for the importance of sign-concordance between forward and backward weights [@Liao_etal16]. CIFAR10 ------- We trained a convolutional network with three convolutional layers followed by two fully connected layers on the CIFAR10 dataset. 
We used a similar network as ref. [@Srivastava_etal14]. The convolutional layers used a $5\times 5$ kernel, a stride of $1$, and had $96$, $128$, and $256$ feature maps going from the bottom upwards. Max-pooling with a pooling window of $3 \times 3$ and stride $2$ was applied after each convolutional layer. The two fully connected layers on top had $2,048$ neurons each. All layers were batch-normalized and dropout was applied after the input layer, after each max-pooling layer, and after each fully connected layer. The 32$\times$32$\times$3 CIFAR10 color images were pre-processed using global contrast normalization followed by ZCA whitening. The training set of 50,000 images was used for training/validation and we report errors on the 10,000 images test set. Unlike the MNIST dataset, standard backpropagation significantly outperforms training using local errors as shown in Fig. \[fig:cifar10\_results\] and Table \[tab:cifar10\]. Performance of local error learning deteriorates slightly when using sign-concordant local feedback weights instead of symmetric local feedback weights. Performance does not improve for the local classifier in the second fully connected layer compared to the classifier in the first fully connected layer. We trained a variant of the network using only one fully connected layer using standard backpropagation. As shown in Table \[tab:cifar10\], the improvement in performance of the network trained using standard backpropagation is minimal when going from one to two fully connected layers. This implies that the second fully connected layer is largely superfluous and local error learning is thus unable to capitalize on it. Unlike for the MNIST dataset, allowing the local classifier parameters to be trainable improves performance significantly. As was the case for the MNIST dataset, training using feedback alignment leads to significantly worse performance than learning using local errors. [0.48]{} ![() CIFAR10 test set errors obtained from two convolutional networks having the architecture described in the main text: one network was trained using local errors and symmetric local feedback weights (${\bf K}^i = {\bf M}{^i}^T$ ), where the errors for the random local classifiers in all layers are shown; the other network with identical architecture was trained using standard backpropagation. () Same as () except that in the network trained using local errors, sign-concordant local feedback weights with independent and random magnitudes were used. []{data-label="fig:cifar10_results"}](cifar10_symmetric "fig:"){width="\textwidth"} \[fig:cifar10\_symmetric\] [0.48]{} ![() CIFAR10 test set errors obtained from two convolutional networks having the architecture described in the main text: one network was trained using local errors and symmetric local feedback weights (${\bf K}^i = {\bf M}{^i}^T$ ), where the errors for the random local classifiers in all layers are shown; the other network with identical architecture was trained using standard backpropagation. () Same as () except that in the network trained using local errors, sign-concordant local feedback weights with independent and random magnitudes were used. 
[]{data-label="fig:cifar10_results"}](cifar10_signconcordant "fig:"){width="\textwidth"} \[fig:cifar10\_concordant\] [lp[40mm]{}p[40mm]{}p[40mm]{}]{} & Local error learning (symmetric feedback weights) & Local error learning (sign-concordant feedback weights) & Local error learning (trainable local classifier)\ Test error & & &\ \ & Learning using feedback alignment & Learning using standard backpropagation (two FC layers) & Learning using standard backpropagation (one FC layer)\ Test error & $20.87 \pm 0.34 \%$ & $12.47 \pm 0.25 \%$ & $12.72 \pm 0.21 \%$\ \[tab:cifar10\] Our feedback alignment results are better than those previously reported in refs. [@Baldi_etal16; @Liao_etal16; @Nokland_etal16]. This is due to our use of a bigger network that is well-regularized using dropout. Using a well-regularized network is particularly crucial when investigating alternatives to standard backpropagation as poorly-regularized learning can make a worse learning algorithm seem better, simply because it better regularizes the learning problem compared to a superior algorithm that overfits on the training data. Strong regularization is also a potential reason why we see that exact gradient descent (standard backpropagation) is clearly superior, unlike previous investigations that report better performance when using various approximations to standard backpropagation [@Liao_etal16], where this better performance can be due to the better regularization introduced by the approximate learning algorithms. SVHN ---- We trained an identical network to the one used for the CIFAR10 dataset on the SVHN dataset. The SVHN dataset is a dataset of 32$\times$32$\times$3 color images. We used the training/validation/testing split of 598388/6000/26032 images respectively that was previously used in refs. [@Srivastava_etal14; @Goodfellow_etal13; @Sermanet_etal12]. The validation set was added to the training set after choosing the hyper-parameters (learning rate and dropout probabilities). The images were preprocessed using the local contrast normalization technique from ref. [@Jarrett_etal09]. [0.48]{} ![() SVHN test set errors obtained from two convolutional networks: one network was trained using local errors and symmetric local feedback weights (${\bf K}^i = {{\bf M}^i}^T$) where the errors for the random local classifiers in all layers are shown. The other network with identical architecture was trained using standard backpropagation . () Same as () except that in the network trained using local errors, sign-concordant local feedback weights with independent and random magnitudes were used. []{data-label="fig:svhn_results"}](svhn_symmetric "fig:"){width="\textwidth"} \[fig:svhn\_symmetric\] [0.48]{} ![() SVHN test set errors obtained from two convolutional networks: one network was trained using local errors and symmetric local feedback weights (${\bf K}^i = {{\bf M}^i}^T$) where the errors for the random local classifiers in all layers are shown. The other network with identical architecture was trained using standard backpropagation . () Same as () except that in the network trained using local errors, sign-concordant local feedback weights with independent and random magnitudes were used. []{data-label="fig:svhn_results"}](svhn_signconcordant "fig:"){width="\textwidth"} \[fig:svhn\_concordant\] Figure \[fig:svhn\_results\] shows the test error curves for the case of the symmetric local feedback weights and the case of the sign-concordant local feedback weights. The test error trends in Fig. 
\[fig:svhn\_results\] and Table \[tab:svhn\] are similar to those observed for CIFAR10. The performance of standard backpropagation is clearly superior, followed by learning using local errors generated by trainable local classifiers. Learning using local errors generated by fixed random classifiers lags behind (both when using symmetric feedback weights or sign-concordant feedback weights) but it still outperforms learning using feedback alignment. [lp[40mm]{}p[40mm]{}p[40mm]{}]{} & Local error learning (symmetric feedback weights) & Local error learning (sign-concordant feedback weights) & Local error learning (trainable local classifier)\ Test error & & &\ \ & Learning using feedback alignment & Learning using standard backpropagation &\ Test error & $3.74 \pm 0.077 \%$ & $2.39 \pm 0.037 \%$ &\ \[tab:svhn\] Conclusions and Discussion {#sec:conclusions_discussion} ========================== Weight symmetry between the forward and backward passes and delayed error generation are two of the most biologically unrealistic aspects of backpropagation. Recent investigations have shown that the weight symmetry requirement can be relaxed allowing learning to proceed with random feedback weights [@Lillicrap_etal16; @Baldi_etal16; @Nokland_etal16; @Neftci_etal17a]. These investigations, however, do not address the problem of local learning and require the network to maintain its state until errors arrive from higher layers. Local errors have often been used to augment the top layer errors [@Szegedy_etal14; @Lee_etal15]. However, until now, relatively little work has been done on supervised learning using exclusively local errors, and none that we know of investigated local error generation using fixed random classifiers. Our results show that learning using local errors generated using random classifiers, while falling short of the performance of standard backpropagation, significantly outperforms learning using feedback alignment techniques [@Lillicrap_etal16; @Baldi_etal16]. This holds true even when relaxing the weight symmetry requirement in the local feedback loop and using random fixed feedback weights that are sign-aligned with the random fixed classifier weights in the local learning loop. Maintaining sign-alignment is problematic in the feedback alignment technique as the sign of the feedback weights have to dynamically track the sign of the feedforward weights during training [@Liao_etal16] which introduces a dynamic dependency between the two sets of weights. In our case, since both sets of weights are fixed, this dependency need only be enforced initially. Our CIFAR10 and SVHN results indicate that locally generated errors allow a convolutional layer to learn good features that are then used by the subsequent layer to learn even more informative features as evidenced by the increased accuracy of the local classifiers in higher layers. In the end, however, our approach solves many small optimization problems where each problem involves only the weights of one layer. We therefore lose one of the core advantages of standard backpropagation learning using a global objective function: the high probability of finding a good minimum in the parameter space when the dimensionality of this parameter space is large, i.e, when it includes all the network parameters [@Choromanska_etal15; @Im_etal16]. It was thus expected that classification performance will suffer compared to learning using standard backpropagation and a global objective function. 
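To make the per-layer update concrete, the following sketch illustrates one local-error update of a single fully connected layer. It is our own illustration rather than the code used for the experiments: it assumes ReLU activations and a squared-error local loss for simplicity, and all dimensions, variable names, and the learning rate are placeholders. The fixed random classifier `M` produces the local error, which is fed back through the fixed sign-concordant matrix `K`; only the weights of the layer itself are updated and no error signal is sent to earlier layers.

```python
import numpy as np

rng = np.random.RandomState(0)

# Illustrative sizes and learning rate (placeholders, not the settings used in the paper)
n_in, n_hidden, n_classes = 784, 1000, 10
lr = 0.01

# Trainable feedforward weights of the layer being trained
W = rng.randn(n_hidden, n_in) * np.sqrt(2.0 / n_in)

# Fixed random local classifier M and sign-concordant feedback K:
# K carries the signs of M^T but independent random magnitudes.
M = rng.randn(n_classes, n_hidden) / np.sqrt(n_hidden)
K = np.sign(M.T) * np.abs(rng.randn(n_hidden, n_classes)) / np.sqrt(n_hidden)


def local_update(W, x, target, lr):
    """One local-error update of a single layer (squared-error local loss assumed)."""
    pre = W @ x                          # pre-activations of the layer
    a = np.maximum(pre, 0.0)             # ReLU activations
    scores = M @ a                       # output of the fixed random local classifier
    err = scores - target                # local error; nothing arrives from higher layers
    delta = (K @ err) * (pre > 0.0)      # one-step feedback through fixed K, gated by ReLU'
    W = W - lr * np.outer(delta, x)      # only this layer's weights are updated
    return W, float(0.5 * np.sum(err ** 2))


# Example: a single update on a random input/target pair
x = rng.randn(n_in)
target = np.zeros(n_classes)
target[3] = 1.0
W, local_loss = local_update(W, x, target, lr)
print(local_loss)
```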
Single cell measurements in monkey area IT indicate broad tuning to a range of categories [@Kiani_etal07; @Sigala_Logothetis02]. This broad category tuning is realized in the proposed training scheme through the random local classifier weights that define how a neuron contributes to the score of each classification category. During training, the actual tuning properties of each neuron change to be in-line with the pre-defined fixed tuning defined by the random classifier weights, as this is the only way to minimize the local classifier error. Our error generation mechanism has several biologically attractive aspects: 1. It involves only two synaptic projections allowing errors to be generated quickly and weight updates to be carried out before input-induced changes in the states of the neurons have decayed. This avoids the common and unrealistic input buffering requirement encountered in standard backpropagation and feedback alignment techniques. 2. Error generation involves random projections that do not have to be learned. This makes the error generation loop particularly simple and removes any potential problematic interactions between learning the auxiliary classifier weights and learning the main network weights. 3. Strict weight symmetry is not required in the error pathway, only sign-alignment between two sets of fixed random weights is needed. The use of fixed random local classifier weights allows us to sidestep one of the main hardware-related issues of using auxiliary local classifiers: the need to store the local classifier weights. Especially in large convolutional layers, storing the local classifier weights could be prohibitively expensive in terms of memory resources. Since the local classifier weights need to be accessed in a fixed order during each training iteration in order to calculate the classifier outputs, they can be cheaply, quickly, and reproducibly generated on the fly using a PRNG and a small seed. We have shown that this approach allows us to obtain a learning mechanism that drastically reduces memory traffic compared to standard backpropagation. During inference, the random classifier weights in each layer (which are compactly stored in a small seed) can be used to generate a classification decision during the evaluation of each layer. Thus, if needed, a series of classification decisions can be obtained, one from each layer, at a small computational cost and virtually no memory cost. The decisions from bottom layers, even though less accurate than the decisions from higher layers, can be used in situations where response time is critical. This allows the network to be dynamically truncated where higher layers are not evaluated and the final decision taken from intermediate layers. This feature of the proposed networks enables a dynamical trade-off between accuracy and energy consumption/computational load where only as many layers as allowed by the energy budget, or response time constraint, are evaluated. [10]{} L[é]{}on Bottou. Stochastic gradient learning in neural networks. , 91(8), 1991. Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In [*Advances in neural information processing systems*]{}, pages 2654–2662, 2014. A.S. Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. Cnn features off-the-shelf: an astounding baseline for recognition. In [*Proceedings of the IEEE conference on computer vision and pattern recognition workshops*]{}, pages 806–813, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 
Imagenet classification with deep convolutional neural networks. In [*Advances in neural information processing systems*]{}, pages 1097–1105, 2012. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. , 2014. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. , 521(7553):436–444, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 770–778, 2016. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. , 2016. Gao Huang, Zhuang Liu, K.Q. Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. , 2016. C. [Szegedy]{}, W. [Liu]{}, Y. [Jia]{}, P. [Sermanet]{}, S. [Reed]{}, D. [Anguelov]{}, D. [Erhan]{}, V. [Vanhoucke]{}, and A. [Rabinovich]{}. . , September 2014. Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In [*Artificial Intelligence and Statistics*]{}, pages 562–570, 2015. Xiaohui Xie and S.H. Seung. Equivalence of backpropagation and contrastive hebbian learning in a layered network. , 15(2):441–454, 2003. Yoshua Bengio and Asja Fischer. Early inference in energy-based models approximates back-propagation. , 2015. Benjamin Scellier and Yoshua Bengio. Equilibrium propagation: Bridging the gap between energy-based models and backpropagation. , 11, 2017. T.P. Lillicrap, Daniel Cownden, D.B. Tweed, and C.J. Akerman. Random synaptic feedback weights support error backpropagation for deep learning. , 7, 2016. Pierre Baldi, Peter Sadowski, and Zhiqin Lu. Learning in the machine: Random backpropagation and the learning channel. , 2016. Arild N[ø]{}kland. Direct feedback alignment provides learning in deep neural networks. In [*Advances in Neural Information Processing Systems*]{}, pages 1037–1045, 2016. E.O. Neftci, Charles Augustine, Somnath Paul, and Georgios Detorakis. Event-driven random back-propagation: Enabling neuromorphic deep learning machines. , 11, 2017. S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001. G.E. Hinton, S. Osindero, and Y.W. Teh. A fast learning algorithm for deep belief nets. , 18(7):1527–1554, 2006. G.E. Hinton and R.R. Salakhutdinov. Reducing the dimensionality of data with neural networks. , 313(5786):504–507, 2006. Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In [*Advances in neural information processing systems*]{}, pages 153–160, 2007. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In [*Proceedings of the 25th international conference on Machine learning*]{}, pages 1096–1103. ACM, 2008. D. Erhan, Y. Bengio, A. Courville, P.A. Manzagol, P. Vincent, and S. Bengio. Why does unsupervised pre-training help deep learning? , 11:625–660, 2010. Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In [*Proceedings of the 26th Annual International Conference on Machine Learning*]{}, ICML ’09, pages 609–616, New York, NY, USA, 2009. ACM. Marc’Aurelio Ranzato, F.J. Huang, Y-Lan Boureau, and Yann LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. 
In [*Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on*]{}, pages 1–8. IEEE, 2007. Koray Kavukcuoglu, Marc’Aurelio Ranzato, and Yann LeCun. Fast inference in sparse coding algorithms with applications to object recognition. , 2010. J.J. Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders. , 2015. Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In [*International Conference on Machine Learning*]{}, pages 612–621, 2016. Marc’Aurelio Ranzato and Martin Szummer. Semi-supervised learning of compact document representations with deep networks. In [*Proceedings of the 25th international conference on Machine learning*]{}, pages 792–799. ACM, 2008. Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann machines. In [*Proceedings of the 25th international conference on Machine learning*]{}, pages 536–543. ACM, 2008. Ian Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Multi-prediction deep boltzmann machines. In [*Advances in Neural Information Processing Systems*]{}, pages 548–556, 2013. Max Jaderberg, W.M. Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. , 2016. W.M. Czarnecki, Grzegorz [Ś]{}wirszcz, Max Jaderberg, Simon Osindero, Oriol Vinyals, and Koray Kavukcuoglu. Understanding synthetic gradients and decoupled neural interfaces. , 2017. Vinod Nair and G.E. Hinton. Rectified linear units improve restricted [Boltzmann]{} machines. In [*Proceedings of the 27th International Conference on Machine Learning (ICML-10)*]{}, pages 807–814, 2010. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. , 15(1):1929–1958, 2014. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In [*International Conference on Machine Learning*]{}, pages 448–456, 2015. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In [*Aistats*]{}, volume 9, pages 249–256, 2010. Fr[é]{}d[é]{}ric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, and Yoshua Bengio. Theano: new features and speed improvements. , 2012. James Bergstra, Olivier Breuleux, Fr[é]{}d[é]{}ric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a [CPU]{} and [GPU]{} math expression compiler. In [*Proceedings of the Python for scientific computing conference (SciPy)*]{}, volume 4, page 3. Austin, TX, 2010. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. , 2014. Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, and S.W. Keckler. vdnn: Virtualized deep neural networks for scalable, memory-efficient neural network design. In [*Microarchitecture (MICRO), 2016 49th Annual IEEE/ACM International Symposium on*]{}, pages 1–13. IEEE, 2016. S. Himavathi, D. Anitha, and A. Muthuramalingam. Feedforward neural network implementation in [FPGA]{} using layer multiplexing for effective resource utilization. , 18(3):880–888, 2007. Lukas Cavigelli, David Gschwend, Christoph Mayer, Samuel Willi, Beat Muheim, and Luca Benini. 
Origami: A convolutional network accelerator. In [*Proc. 25th Great Lakes Symposium on VLSI*]{}, pages 199–204. ACM, 2015. Yu-Hsin Chen, Tushar Krishna, Joel Emer, and Vivienne Sze. : An energy-efficient reconfigurable accelerator for deep convolutional neural networks. In [*2016 IEEE International Solid-State Circuits Conference (ISSCC)*]{}, pages 262–263. IEEE, 2016. Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. : Efficient inference engine on compressed deep neural network. In [*Proc. 43rd International Symposium on Computer Architecture*]{}, ISCA ’16, pages 243–254. IEEE Press, 2016. Arash Ardakani, Fran[ç]{}ois Leduc-Primeau, Naoya Onizawa, Takahiro Hanyu, and W.J. Gross. implementation of deep neural networks using integral stochastic computing. In [*Turbo Codes and Iterative Information Processing (ISTC), 2016 9th International Symposium on*]{}, pages 216–220. IEEE, 2016. Alessandro Aimar, Hesham Mostafa, Enrico Calabrese, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Iulia-Alexandra Lungu, Moritz B. Milde, Federico Corradi, Alejandro Linares-Barranco, Shih-Chii Liu, and Tobi Delbruck. Nullhop: A flexible convolutional neural network accelerator based on sparse representations of feature maps. , 2017. N.P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. , 2017. W.A. Wulf and S.A. McKee. Hitting the memory wall: implications of the obvious. , 23(1):20–24, 1995. Thomas Vogelsang. Understanding the energy consumption of dynamic random access memories. In [*Proceedings of the 2010 43rd Annual IEEE/ACM International Symposium on Microarchitecture*]{}, pages 363–374. IEEE Computer Society, 2010. Charles Lefurgy, Karthick Rajamani, Freeman Rawson, Wes Felter, Michael Kistler, and T.W. Keller. Energy management for commercial servers. , 36(12):39–48, 2003. Qianli Liao, J.Z. Leibo, and T.A. Poggio. How important is weight symmetry in backpropagation? In [*AAAI*]{}, pages 1837–1844, 2016. I.J. Goodfellow, David Warde-Farley, Mehdi Mirza, A.C. Courville, and Yoshua Bengio. Maxout networks. , 28:1319–1327, 2013. Pierre Sermanet, Soumith Chintala, , and Yann LeCun. Convolutional neural networks applied to house numbers digit classification. In [*Pattern Recognition (ICPR), 2012 21st International Conference on*]{}, pages 3288–3291. IEEE, 2012. Kevin Jarrett, Koray Kavukcuoglu, Yann LeCun, et al. What is the best multi-stage architecture for object recognition? In [*Computer Vision, 2009 IEEE 12th International Conference on*]{}, pages 2146–2153. IEEE, 2009. Anna Choromanska, Mikael Henaff, Michael Mathieu, G.B. Arous, and Yann LeCun. The loss surfaces of multilayer networks. In [*Artificial Intelligence and Statistics*]{}, pages 192–204, 2015. D.J. Im, Michael Tao, and Kristin Branson. An empirical analysis of deep network loss surfaces. , 2016. Roozbeh Kiani, Hossein Esteky, Koorosh Mirpour, and Keiji Tanaka. Object category structure in response patterns of neuronal population in monkey inferior temporal cortex. , 97(6):4296–4309, 2007. Natasha Sigala and N.K. Logothetis. Visual categorization shapes feature selectivity in the primate temporal cortex. , 415(6869):318–320, 2002.
1
--- abstract: | Particle Metropolis-Hastings (PMH) allows for Bayesian parameter inference in nonlinear state space models by combining Markov chain Monte Carlo (MCMC) and particle filtering. The latter is used to estimate the intractable likelihood. In its original formulation, PMH makes use of a marginal MCMC proposal for the parameters, typically a Gaussian random walk. However, this can lead to a poor exploration of the parameter space and an inefficient use of the generated particles. We propose a number of alternative versions of PMH that incorporate gradient and Hessian information about the posterior into the proposal. This information is essentially obtained as a byproduct of the likelihood estimation. Indeed, we show how to estimate the required information using a fixed-lag particle smoother, with a computational cost growing linearly in the number of particles. We conclude that the proposed methods can: (i) decrease the length of the burn-in phase, (ii) increase the mixing of the Markov chain at the stationary phase, and (iii) make the proposal distribution scale invariant, which simplifies tuning. author: - 'Johan Dahlin, Fredrik Lindsten and Thomas B. Schön[^1]' bibliography: - 'dahlin.bib' title: 'Particle Metropolis-Hastings using gradient and Hessian information[^2]' --- Introduction ============ We are interested in Bayesian parameter inference in nonlinear state space models (SSM) of the form $$\begin{aligned} x_{t}|x_{t-1} \sim f_{\theta}(x_{t}|x_{t-1}), \quad y_{t}|x_t \sim g_{\theta}(y_{t}|x_t), \label{eq:SSM}\end{aligned}$$ where the latent states and the measurements are denoted by $\mathbf{x} = x_{0:T} \triangleq \{x_t\}_{t=0}^T$ and $\mathbf{y} = y_{1:T} \triangleq \{y_t\}_{t=1}^T$, respectively. Here, $f_{\theta}(\cdot)$ and $g_{\theta}(\cdot)$ denote the transition and observation kernels, respectively, parametrised by the unknown static parameter vector $\theta \in \Theta \subset \mathbb{R}^d$. The initial state is distributed according to some distribution $\mu(x_0)$ which, for notational simplicity, is assumed to be independent of $\theta$. The aim of Bayesian parameter inference (in SSMs) is to compute the *parameter posterior distribution* $$\begin{aligned} p(\theta | \mathbf{y}) = \frac{ p_{\theta}(\mathbf{y}) p(\theta)}{p(\mathbf{y})}, \label{eq:parameterPosterior}\end{aligned}$$ where $p(\theta)$ denotes the prior of $\theta$ and $p_{\theta}(\mathbf{y})$ denotes the likelihood, which for an SSM can be expressed as $$\begin{aligned} p_{\theta}(\mathbf{y}) = p_{\theta}(y_1) \prod_{t=2}^T p_{\theta}(y_t|y_{1:t-1}). \label{eq:likeFunc}\end{aligned}$$ The one-step-ahead predictor $p_{\theta}(y_t|y_{1:t-1})$, and thus also the likelihood function, is in general not analytically tractable. However, unbiased estimators of the likelihood can be constructed using sequential Monte Carlo (SMC) [@DoucetJohansen2011; @DelMoral2004] and these can be used as *plug-in estimators*. This is especially useful in the Metropolis-Hastings (MH) algorithm that can be used for estimating the parameter posterior in . This combination of MH and SMC is known as the particle Metropolis-Hastings (PMH) algorithm [@AndrieuDoucetHolenstein2010]. The MH acceptance probability depends on the intractable likelihood, which in PMH is estimated using SMC (see Section \[sec:overview\]). Despite the apparent approximation, this results in an algorithm that targets the correct posterior distribution [@AndrieuDoucetHolenstein2010]. The original PMH algorithm makes use of a marginal proposal for $\theta$, i.e.
only the current parameter is used when proposing a new parameter. The theoretical properties of the marginal PMH algorithm have been analysed in @AndrieuVihola2012 [@PittSilvaGiordaniKohn2012; @DoucetPittKohn2012] and it has been applied for a number of interesting applications in, e.g., economics, social network analysis and ecology [@FluryShephard2011; @Everitt2012; @GolightlyWilkinson2011]. In this paper, we show that information such as the gradient and the Hessian about the posterior can be included in the construction of the PMH proposal. This idea is first suggested by @DoucetJacobJohansen2011 in the discussions following @GirolamiCalderhead2011. In two previous proceedings, we have applied and extended this idea with gradient information [@DahlinLindstenSchon2013a] and also using Hessian information [@DahlinLindstenSchon2014a]. The present article builds upon and extends this preliminary work. A PMH method using gradient information similar to @DahlinLindstenSchon2013a has recently been proposed by @NemethFearnhead2014. In the context of MH sampling, it has been recognised that the gradient and Hessian can be used to construct efficient proposal distributions. In the Metropolis adjusted Langevin algorithm (MALA) [@RobertsStramer2003], a drift term is added to the proposal in the direction of the gradient, which intuitively guides the Markov chain to regions of high posterior probability. In the manifold MALA (mMALA) [@GirolamiCalderhead2011], the Hessian (or some other appropriate metric tensor) is also included to scale the proposal to take the curvature of the log-posterior into account. Drawing parallels with the optimisation literature, mMALA shares some properties with Newton-type optimisation algorithms (where MALA is more similar to a steepest ascent method). In particular, scaling the proposal with the Hessian can considerably simplify the tedious tuning of the method since it removes the need for running costly pilot runs, which are commonly used to tune the covariance matrices of the random walk MH and the MALA. In our problem, i.e. for inference in a nonlinear SSM , the gradient and Hessian cannot be computed analytically. However, in analogue with the intractable likelihood, these quantities can be estimated using SMC algorithms, see e.g. @PoyiadjisDoucetSingh2011 [@DoucetJacobRubenthaler2013]. This provides us with the tools necessary to construct PMH algorithms in the flavour of the MALA and the mMALA, resulting in the two methods proposed in this paper, PMH1 and PMH2, respectively. In particular, we make use of a fixed-lag (FL) particle smoother [@KitagawaSato2001] to estimate the gradient and Hessian. The motivation for this is that this smoother only makes use of the weighted particles computed by the particle filter. Consequently, we obtain this information as a *byproduct* of the likelihood computation in the PMH algorithm. This results in only a small computational overhead for the proposed methods when compared to the marginal method. Finally, we provide numerical experiments to illustrate the benefits of using the gradient and Hessian and the accuracy of the FL smoother. We demonstrate some interesting properties of the proposed algorithms, in particular that they enjoy (i) a shorter burn-in compared with the marginal algorithm, (ii) a better mixing of the Markov chain in the stationary phase, and (iii) a simplified tuning of the step length(s), especially when the target distribution is non-isotropic. 
Particle Metropolis-Hastings {#sec:overview} ============================ In this section, we review the PMH algorithm and show how the random variables used to compute the likelihood estimator can be incorporated in the proposal construction. We also outline the idea of how this can be used to construct the proposed PMH1 and PMH2 algorithms. MH sampling with unbiased likelihoods {#sec:PMH} ------------------------------------- The MH algorithm (see, e.g. @RobertCasella2004) is a member of the MCMC family for sampling from a target distribution $\pi(\theta)$ by simulating a carefully constructed Markov chain on $\Theta$. The chain is constructed in such a way that it admits the target as its unique stationary distribution. The algorithm consists of two steps: (i) a new parameter $\theta''$ is sampled from a proposal distribution $q(\theta''|\theta')$ given the current state $\theta'$ and (ii) the current parameter is changed to $\theta''$ with probability $\alpha(\theta',\theta'')$, otherwise the chain remains at the current state. The acceptance probability is given by $$\begin{aligned} \alpha(\theta',\theta'') = 1 \wedge \frac{ \pi(\theta'') }{\pi(\theta')} \frac{ q(\theta' | \theta'')}{q(\theta'' | \theta')}, \label{eq:MHacceptprob}\end{aligned}$$ where we use the notation $a \wedge b \triangleq \min\{a,b\}$. In this paper, we have the parameter posterior distribution as the target distribution, i.e. $\pi(\theta) = p(\theta|\mathbf{y})$. This implies that the acceptance probability will depend explicitly on the intractable likelihood $p_\theta(\mathbf{y})$, preventing direct application of the MH algorithm to this problem. However, this difficulty can be circumvented by using a *pseudo-marginal* approach [@Beaumont2003; @AndrieuRoberts2009]. Assume that there exists an unbiased, non-negative estimator of the likelihood $\widehat{p}_{\theta}(\mathbf{y}|u)$. We introduce explicitly the random variable $u \in \mathsf{U}$ used to construct this estimator, and we let $m_{\theta}(u)$ denote the probability density of $u$ on $\mathsf{U}$. The pseudo-marginal method is then a standard MH algorithm operating in a non-standard extended space $\Theta \times \mathsf{U}$, with the *extended target* $$\begin{aligned} \pi( \theta, u | \mathbf{y} ) = \frac{ \widehat{p}_{\theta}( \mathbf{y} | u ) m_{\theta}(u) p(\theta) } { p(\mathbf{y}) } = \frac{ \widehat{p}_{\theta}( \mathbf{y} | u ) m_{\theta}(u) p(\theta | \mathbf{y}) } { p_{\theta}(\mathbf{y}) },\end{aligned}$$ and proposal distribution $m_{\theta''}(u'') q(\theta'' | \theta')$. Since the likelihood estimator is unbiased, $\mathbb{E}_{u|\theta}[\widehat{p}_{\theta}(\mathbf{y}|u)] = p_{\theta}( \mathbf{y} )$, it follows that the extended target admits $p(\theta | \mathbf{y})$ as a marginal. Hence, by simulating from the extended target $\pi( \theta, u | \mathbf{y} ) $ we obtain samples from the original target distribution $p(\theta | \mathbf{y})$ as a byproduct. If the likelihood is estimated by using SMC (see Section \[sec:SMC\]) we obtain the PMH algorithm. The random variable $u$ then corresponds to all the weighted particles generated by the SMC algorithm. However, these random variables carry useful information, not only about the likelihood, but also about the geometry of the posterior distribution. We suggest to incorporate this information into the proposal construction. 
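For concreteness, the sketch below shows a single iteration of the resulting pseudo-marginal sampler. It is a schematic of ours rather than an excerpt from our implementation: `estimate_log_lik` stands for the log of an unbiased SMC likelihood estimate, `propose` and `log_q` for a proposal of choice and its density, and the acceptance ratio is formed from the likelihood estimates, the prior, and the proposal densities, as derived next. Note that the stored likelihood estimate is reused whenever a candidate is rejected; the likelihood is never re-estimated at the current point.

```python
import numpy as np


def pmh_step(theta, log_lik, estimate_log_lik, log_prior, propose, log_q, rng):
    """One iteration of pseudo-marginal (particle) Metropolis-Hastings.

    estimate_log_lik(theta) -- log of an unbiased likelihood estimate (e.g. one SMC run)
    log_prior(theta)        -- log p(theta)
    propose(theta, rng)     -- draw a candidate given the current theta
    log_q(to, frm)          -- log q(to | frm)
    """
    theta_prop = propose(theta, rng)
    log_lik_prop = estimate_log_lik(theta_prop)        # fresh estimate at the candidate
    log_alpha = (log_lik_prop + log_prior(theta_prop) + log_q(theta, theta_prop)
                 - log_lik - log_prior(theta) - log_q(theta_prop, theta))
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return theta_prop, log_lik_prop                # accept: store the new estimate
    return theta, log_lik                              # reject: reuse the stored estimate


# Minimal usage with a random-walk proposal and a synthetic log-likelihood
# standing in for the SMC estimator (illustration only)
rng = np.random.RandomState(1)
propose = lambda th, r: th + 0.1 * r.randn(*np.shape(th))
log_q = lambda to, frm: 0.0                            # symmetric proposal: terms cancel
log_prior = lambda th: 0.0                             # flat prior, for illustration
estimate_log_lik = lambda th: -0.5 * float(np.sum((np.asarray(th) - 1.0) ** 2))

theta = np.zeros(1)
log_lik = estimate_log_lik(theta)
for _ in range(500):
    theta, log_lik = pmh_step(theta, log_lik, estimate_log_lik, log_prior,
                              propose, log_q, rng)
print(theta)
```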
With $(\theta', u')$ being the current state of the Markov chain we simulate $\theta'' \sim q(\cdot | \theta', u')$ and $u'' \sim m_{\theta''}(\cdot)$, using some proposal $q$ (see Section \[sec:constructPMH2\]). It follows that the (standard) MH acceptance probability for the extended target is given by $$\begin{aligned} \nonumber \alpha(\theta'',u'', \theta', u') &= 1 \wedge \frac{ \widehat{p}_{\theta''}(\mathbf{y} | u'') m_{\theta''}(u'') p(\theta'')}{ \widehat{p}_{\theta'}(\mathbf{y} | u') m_{\theta'}(u') p(\theta') } \frac{ m_{\theta'}(u') q(\theta' | \theta'', u'') }{ m_{\theta''}(u'') q(\theta'' | \theta', u' )} \\ &=1 \wedge \frac{ \widehat{p}_{\theta''}(\mathbf{y} | u'') p(\theta'')}{ \widehat{p}_{\theta'}(\mathbf{y} | u') p(\theta') } \frac{ q(\theta' | \theta'', u'') }{ q(\theta'' | \theta', u' )}. % &= 1 % \wedge % \frac{ \widehat{p}_{\theta''}(\mathbf{y} | u'') }{ \widehat{p}_{\theta'}(\mathbf{y} | u') } % \frac{ p(\theta'') }{ p(\theta') } % \frac{ q(\theta' | \theta'', u'') }{ q(\theta'' | \theta', u' )}. \label{eq:PMHaprob}\end{aligned}$$ Note that $ q(\theta'' | \theta', u' )$ may depend on the auxiliary variable $u'$ in a (formally) arbitrary way. In particular, in Section \[sec:SMC\] we propose a construction making use of *biased* estimates of the gradient and Hessian of the log-posterior. Nevertheless, expression still defines a correct MH acceptance probability for the extended target, ensuring the validity of our approach. Note also that the aforementioned proposal construction opens up for a wide range of adapted proposals, possibly different from the ones considered in this work. Constructing PMH1 and PMH2 {#sec:constructPMH2} -------------------------- We now turn to the construction of a proposal that makes use of the gradient and Hessian of the log-posterior. Following @RobertCasella2004, we do this by a Laplace approximation of the parameter posterior around the current state $\theta'$. Hence, consider a second order Taylor expansion of $\log p(\theta''|\mathbf{y})$ at $\theta'$: $$\begin{aligned} \log p(\theta''|\mathbf{y}) &\approx \log p(\theta'|\mathbf{y}) + (\theta'' - \theta')^{\top} \Big[ \Dtheta \log p(\theta|\mathbf{y}) \Big]_{\theta=\theta'} \\ &+ \frac{1}{2} (\theta'' - \theta')^{\top} \Big[ \DDtheta \log p(\theta|\mathbf{y}) \Big]_{\theta=\theta'} (\theta'' - \theta').\end{aligned}$$ Taking the exponential of both sides and completing the square, we obtain $$\begin{aligned} p(\theta''| \mathbf{y}) &\approx \textsf{N} \Big( \theta'';\theta' + \mathsf{I}^{-1}_T(\theta') \mathsf{S}_T(\theta'), \mathsf{I}^{-1}_T(\theta') \Big),\end{aligned}$$ where we have introduced $\mathsf{S}_T(\theta') = \Dtheta \log p(\theta|\mathbf{y})|_{\theta=\theta'}$ and $\mathsf{I}_T(\theta') = - \DDtheta \log p(\theta|\mathbf{y})|_{\theta=\theta'}$, for the gradient and the negative Hessian of the log-posterior, respectively. Here, we assume for now that the negative Hessian is positive definite; see Section \[sec:regularisation\] for further discussion on this matter. As pointed out above, these quantities cannot be computed in closed form, but they can be estimated from the random variable $u'$ (see Section \[sec:SMC\]). 
This suggests three different versions of the PMH algorithm, each resulting from a specific choice of the proposal: $$\begin{aligned} q(\theta''|\theta',u') = \begin{dcases} \textsf{N}\left( \theta', \Gamma \right), & \text{[PMH0]} \\ \textsf{N}\left( \theta' + {\textstyle \frac{1}{2}} \Gamma \widehat{\mathsf{S}}_T(\theta'|u'), \Gamma \right), & \text{[PMH1]} \\ \textsf{N}\left( \theta' + \widehat{\textsf{G}}(\theta'|u'), \widehat{\textsf{H}}(\theta'|u') \right). & \text{[PMH2]} \end{dcases} \label{eq:2orderproposal}\end{aligned}$$ Here, we use the notation $\widehat{\textsf{G}}(\theta|u) = \frac{1}{2} \Gamma\, \widehat{\mathsf{I}}^{-1}_T(\theta|u) \, \widehat{\mathsf{S}}_T(\theta|u)$ and $\widehat{\textsf{H}}(\theta|u) = \Gamma\, \widehat{\mathsf{I}}^{-1}_T(\theta|u)$ for the natural gradient and scaled inverse Hessian, respectively. Furthermore, $\Gamma$ denotes a scaling matrix that controls the step lengths of the proposal. For PMH0 and PMH1, $\Gamma$ can be chosen as the inverse of an estimate of the posterior covariance matrix. However, computing this estimate typically requires costly and tedious trial runs. For PMH2, the curvature of the problem is captured by the Hessian matrix, i.e. a single step length can by used which can significantly simplify the tuning. It is also possible to choose different step lengths for the drift term and for the covariance matrix of the proposal. The final PMH2 algorithm is presented in Algorithm \[alg:PMH2order\]. It makes use of Algorithm \[alg:SMCfull\], described in Section \[sec:SMC\], to estimate the quantities needed for computing the proposal and the acceptance probability. Clearly, PMH0 and PMH1 are special cases obtained by using the corresponding proposal from in the algorithm. Note that, while the algorithm make explicit reference to the auxiliary variable $u$, it only depends on this variable through the estimates $\widehat{p}_{\theta'}(\mathbf{y})$, $\widehat{\mathsf{S}}_T(\theta')$ and $\widehat{\mathsf{I}}_T(\theta')$. <span style="font-variant:small-caps;">Inputs:</span> Algorithm \[alg:SMCfull\]. $M>0$ (no. MCMC steps), $\theta_0$ (initial parameters), $\gamma$ (step length).\ <span style="font-variant:small-caps;">Output:</span> $\theta=\{\theta_1,\ldots,\theta_M\}$ (samples from the posterior). Run Algorithm \[alg:SMCfull\] to obtain $\widehat{p}_{\theta_0}(\mathbf{y})$, $\widehat{\mathsf{S}}_T(\theta_0)$ and $\widehat{\mathsf{I}}_T(\theta_0)$. Sample $\theta' \sim q(\theta'|\theta_{k-1},u_{k-1})$ by , $\widehat{\mathsf{S}}_T(\theta_{k-1})$ and $\widehat{\mathsf{I}}_T(\theta_{k-1})$. Run Algorithm \[alg:SMCfull\] to obtain $\widehat{p}_{\theta'}(\mathbf{y})$, $\widehat{\mathsf{S}}_T(\theta')$ and $\widehat{\mathsf{I}}_T(\theta')$. Sample $\omega_k$ uniformly over $[0,1]$. $\theta_k \leftarrow \theta'$. $\{ \widehat{p}_{\theta_k}(\mathbf{y}), \widehat{\mathsf{S}}_T(\theta_{k}), \widehat{\mathsf{I}}_T(\theta_{k}) \} \leftarrow \{ \widehat{p}_{\theta'}(\mathbf{y}) , \widehat{\mathsf{S}}_T(\theta') , \widehat{\mathsf{I}}_T(\theta') \}$. $\theta_k \leftarrow \theta_{k-1}$. $\{ \widehat{p}_{\theta_k}(\mathbf{y}), \widehat{\mathsf{S}}_T(\theta_{k}), \widehat{\mathsf{I}}_T(\theta_{k}) \} \leftarrow \{ \widehat{p}_{\theta_{k-1}}(\mathbf{y}) , \widehat{\mathsf{S}}_T(\theta_{k-1}) , \widehat{\mathsf{I}}_T(\theta_{k-1}) \}$. \[alg:PMH2order\] Properties of the PMH1 and PMH2 proposals ----------------------------------------- In the sequel, we use a single step size $\Gamma = \gamma^2 I_d$ for all the parameters in the (standard) proposal. 
This is done to illustrate the advantage of adding the Hessian information, which rescales the step lengths according to the local curvature. Hence, it allows for taking larger steps when the curvature is small and vice verse. This property of PMH2 makes the algorithm scale-free in the same manner as a Newton algorithm in optimisation [@NocedalWright2006 Chapter 3]. That is, the proposal is invariant to affine transformations of the parameters. Note that, since the local information is used, this is different from scaling the proposal in PMH0 with the posterior covariance matrix estimated from a pilot run, as this only takes the geometry at the mode of the posterior into account. Some analyses of the statistical properties are available for PMH0 [@SherlockThieryRobetsRosenthal2013], MH using a random walk [@RobertsGelmanGilks1997] and MALA [@RobertsRosenthal1998]. It is known from these analyses that adding the gradient into the proposal can increase the mixing of the Markov chain. Note that these results are obtained under somewhat strict assumptions. Also, we know from numerical experiments [@GirolamiCalderhead2011] that there are further benefits of also taking the local curvature into account. Estimation of the likelihood, gradient, and Hessian {#sec:SMC} =================================================== In this section, we show how to estimate the likelihood together with the gradient and Hessian using SMC methods. Auxiliary particle filter ------------------------- An auxiliary particle filter (APF) [@PittShephard1999] can be used to approximate the sequence of joint smoothing distributions (JSDs) $p_{\theta}(x_{1:t}|y_{1:t})$ for $t = 1$ to $T$. The APF makes use of a particle system consisting of $N$ weighted particles $\{x_{1:t}\pIdx{i},w_t\pIdx{i}\}_{i=1}^N$ to approximate the JSD at time $t$ by $$\begin{aligned} \widehat{p}_{\theta}(\dd x_{1:t} | y_{1:t} ) \triangleq \sum_{i=1}^N \frac { w_t\pIdx{i} } { \sum_{k=1}^N w_t\pIdx{k} } \delta_{x_{1:t}\pIdx{i}} (\dn x_{1:t}). \label{eq:empericalfiltering}\end{aligned}$$ Here, $\delta_z(\dn x_{1:t})$ denotes the Dirac measure placed at $z$. The particle system is propagated from $t-1$ to $t$ by first sampling an *ancestor index* $a_t\pIdx{i}$, with $$\begin{aligned} \mathbb{P}( a_t\pIdx{i} = j ) = \nu_{t-1}^{(j)} \left[ \sum_{k=1}^N \nu\pIdx{k}_{t-1} \right]^{-1}, \quad i,j = 1,\dots,N, \label{eq:APFresampling} \end{aligned}$$ where $\nu\pIdx{i}_{t-1}$ denotes the resampling weights. Given the ancestor index, a new particle is sampled according to $$\begin{aligned} \quad x_t\pIdx{i} \sim R_{\theta} \Big( x_t | x^{a_t\pIdx{i}}_{1:t-1}, y_t \Big), \quad i = 1,\dots,N. \label{eq:APFpropagation} \end{aligned}$$ Finally, we append the obtained sample to the trajectory by $x_{1:t}\pIdx{i}=\{x_{1:t-1}^{a_t\pIdx{i}},x_t\pIdx{i}\}$ and compute a new importance weight by $$\begin{aligned} w_{t}\pIdx{i} &\triangleq \frac {w_{t-1}^{a\pIdx{i}_t}} {\nu_{t-1}^{a\pIdx{i}_t}} \frac { g_{\theta} \Big( y_t \Big| x_t\pIdx{i} \Big) f_{\theta} \Big( x_t\pIdx{i} \Big| x_{t-1}^{a\pIdx{i}_t} \Big) } { R_{\theta} \Big( x_t\pIdx{i} \Big| x_{1:t-1}^{a\pIdx{i}_t},y_t \Big) }, \quad i = 1,\dots,N. \label{eq:APFweights}\end{aligned}$$Hence, the empirical approximations of the smoothing distributions can be computed sequentially for $t=1$ to $T$ by repeating –. Note that the random variables $u$ appearing in the extended target of the PMH algorithm correspond to all the random variables generated by the APF, i.e. 
all the particles and ancestor indices, $$\begin{aligned} % u= (\{x\pIdx{i}_{0}\}_{i=1}^N, \{x\pIdx{i}_{t}, a\pIdx{i}_{t}\}_{i=1}^N, t = 1,\,\dots,\,T ). \\ u= \bigg( \Big\{x\pIdx{i}_{t}, a\pIdx{i}_{t} \Big\}_{i=1}^N, t = 1,\,\dots,\,T \bigg).\end{aligned}$$ In this article, we make use of two important special cases of the APF: the bootstrap particle filter (bPF) [@GordonSalmondSmith1993] and the fully adapted particle filter (faPF) [@PittShephard1999]. For the bPF, we select the proposal kernel $R_{\theta}(x_t|x_{1:t-1},y_t) = f_{\theta}(x_t|x_{t-1})$ and the auxiliary weights $\nu_t = w_t = g_{\theta}(y_t|x_t)$. The faPF is obtained by $R_{\theta}(x_t|x_{1:t-1},y_t) = p_{\theta}(x_t|y_t,x_{t-1})$ and $\nu_t = p_{\theta}(y_{t+1}|x_t)$, resulting in the weights $w_t \equiv 1$. Note, that the faPF can only be used in models for which these quantities are available in closed-form. Estimation of the likelihood ---------------------------- The likelihood for the SSM in can be estimated using by inserting estimated one-step predictors $p_{\theta}(y_t|y_{1:t-1})$ obtained from the APF. The resulting likelihood estimator is given by $$\begin{aligned} \widehat{p}_{\theta}( \mathbf{y} | u ) = \frac{1}{N^T} \sum_{i=1}^N w_{T}\pIdx{i} \left\{ \prod_{t=1}^{T-1} \sum_{i=1}^N \nu_{t}\pIdx{i} \right\}. \label{eq:EstLikelihood}\end{aligned}$$ It is known that this likelihood estimator is unbiased for any number of particles, see e.g. [@PittSilvaGiordaniKohn2012] and Proposition 7.4.1 in [@DelMoral2004]. As discussed in Section \[sec:PMH\], this is exactly the property that is needed in order to obtain $p(\theta | \mathbf{y})$ as the unique stationary distribution for the Markov chain generated by the PMH algorithm. Consequently, PMH will target the correct distribution for any number of particles $N\geq 1$. However, the variance in the likelihood estimate is connected with the acceptance rate and the mixing of the Markov chain. Therefore it is important to determine the number of particles that balances a reasonable acceptance rate with a reasonable computational cost. This problem is studied for PMH0 in @PittSilvaGiordaniKohn2012 [@DoucetPittKohn2012]. Estimation of the gradient -------------------------- As we shall see below, the gradient of the log-posterior can be estimated by solving a smoothing problem. The APF can be used directly to address this problem, since the particles $\{x_{1:T}^{(i)}, w_T^{(i)}\}_{i=1}^N$ provide an approximation of the JSD at time $T$ according to (see also @PoyiadjisDoucetSingh2011). However, this method can give estimates with high variance due to *particle degeneracy*. Instead, we make use of the FL smoother [@KitagawaSato2001] which has the same linear computational cost, but smaller problems with *particle degeneracy* than the APF. Alternative algorithms for estimating this information are also available [@DelMoralDoucetSingh2010; @PoyiadjisDoucetSingh2011]. The gradient of the parameter log-posterior is given by $$\begin{aligned} \mathsf{S}_T(\theta) = \Dtheta \log p(\theta) + \Dtheta \log p_{\theta}(\mathbf{y}), \label{eq:deffirst order}\end{aligned}$$ where it is assumed that the gradient of the log-prior $\Dtheta \log p(\theta)$ can be calculated explicitly. 
The gradient of the log-likelihood $\Dtheta \log p_{\theta}(\mathbf{y})$ can, using *Fisher’s identity* [@CappeMoulinesRyden2005], be expressed as $$\begin{aligned} \Dtheta \log p_{\theta}(\mathbf{y}) &= %\dint %\Dtheta \log p(\mathbf{x},\mathbf{y}) %p(\mathbf{x}|\mathbf{y}) %\dd \mathbf{x} \nonumber %\\ %&= \mathbb{E}_{\theta} \left[ \Dtheta \log p_{\theta}(\mathbf{x},\mathbf{y}) \Big| \mathbf{y} \right], \label{eq:FishersIdentity} \end{aligned}$$ where for an SSM we can write the gradient of the complete data log-likelihood as $$\begin{aligned} \Dtheta \log p_{\theta}(\mathbf{x},\mathbf{y}) &= \sum_{t=1}^T \xi_{\theta}(x_t,x_{t-1}), \text{ where} \label{eq:jointdistSSM} \\ \xi_{\theta}(x_t,x_{t-1}) &= \Dtheta \log f_{\theta}(x_t|x_{t-1}) + \Dtheta \log g_{\theta}(y_t|x_{t}). \nonumber\end{aligned}$$ Combining with Fisher’s identity yields $$\begin{aligned} \Dtheta \log p_{\theta}(\mathbf{y}) &= \sum_{t=1}^{T} \dint \xi_{\theta}(x_{t}, x_{t-1}) p_{\theta}(x_{t-1:t}|\mathbf{y}) \dd x_{t-1:t},\end{aligned}$$ which depends on the (intractable) two-step smoothing distribution $p_{\theta}(x_{t-1:t}|\mathbf{y})$. To approximate this quantity we use the FL smoother which relies on the assumption that there is a decaying influence of future observations $y_{t+\Delta:T}$ on the state $x_t$. This means that $$\begin{aligned} p_{\theta}(x_{t-1:t}|\mathbf{y}) \approx p_{\theta}(x_{t-1:t}|y_{1:\kappa_t}),\end{aligned}$$ holds for some large enough $\kappa_t=\min\{t+\Delta,T\}$. Here, $\Delta$ denotes a pre-determined lag decided by the user, which depends on the forgetting properties of the model. By marginalisation of the empirical smoothing distribution $\widehat{p}_{\theta}(x_{1:\kappa_t}|y_{1:\kappa_t})$ over $x_{1:t-2}$ and $x_{t+1:\kappa_t}$, we obtain the approximation $$\begin{aligned} \widehat{p}_{\theta}^\Delta(\dn x_{t-1:t}| \mathbf{y}) \triangleq \sum_{i=1}^N w_{\kappa_t}\pIdx{i} \delta_{\tilde{x}_{\kappa_t,t-1:t}\pIdx{i}} (\dn x_{t-1:t}). \label{eq:FL2step}\end{aligned}$$ Here, we use the notation $\tilde{x}_{\kappa_t,t}^{(i)}$ to denote the ancestor at time $t$ of particle $x_{\kappa_t}^{(i)}$ and $\tilde{x}_{\kappa_t,t-1:t}^{(i)} = \{ \tilde{x}_{\kappa_t,t-1}^{(i)}, \tilde{x}_{\kappa_t,t}^{(i)} \}$. Inserting – into provides an estimator of , $$\begin{aligned} \widehat{\mathsf{S}}_T(\theta|u) &= \nabla \log p(\theta) + \sum_{t=1}^{T} \sum_{i=1}^N w_{\kappa_t}\pIdx{i} \xi_{\theta} \Big( \tilde{x}_{\kappa_t,t}\pIdx{i}, \tilde{x}_{\kappa_t,t-1}\pIdx{i} \Big), \label{eq:FisherScoreParticleApproximation}\end{aligned}$$ which is used in the proposal distributions in . Estimation of the Hessian ------------------------- The negative Hessian of the parameter log-posterior can be written as $$\begin{aligned} \mathsf{I}_T(\theta) = -\DDtheta \log p(\theta) - \DDtheta \log p_{\theta}( \mathbf{y} ), \label{eq:defSecondOrder}\end{aligned}$$ where it is assumed that the Hessian of the log-prior $\DDtheta \log p(\theta)$ can be calculated analytically. The negative Hessian of the log-likelihood, also known as the *observed information matrix*, can using *Louis’ identity* [@CappeMoulinesRyden2005] be expressed as $$\begin{aligned} - \DDtheta \log p_{\theta}(\mathbf{y}) &= \Dtheta \log p_{\theta}(\mathbf{y})^2 - \mathbb{E}_{\theta} \Big[ \DDtheta \log p_{\theta}(\mathbf{x},\mathbf{y}) \Big| \mathbf{y} \Big] \nonumber \\ &-\mathbb{E}_{\theta} \Big[ \Dtheta \log p_{\theta}(\mathbf{x},\mathbf{y})^2 \Big| \mathbf{y} \Big]. 
\label{eq:LouisIdentity}\end{aligned}$$ Here, we have introduced the notation $v^2=vv^{\top}$ for a vector $v$. From this, we can construct an estimator of using the estimator of the gradient in , of the form $$\begin{aligned} \widehat{\mathsf{I}}_T(\theta|u) = -\DDtheta \log p(\theta) + \widehat{\mathsf{S}}_T(\theta|u)^2 - \widehat{\mathsf{I}}^{(1)}_T(\theta|u) - \widehat{\mathsf{I}}^{(2)}_T(\theta|u), \label{eq:LouisIdentityEst}\end{aligned}$$ where we introduce $\mathsf{I}^{(1)}_T(\theta)= \mathbb{E}_{\theta} \left[ \DDtheta \log p_{\theta}(\mathbf{x},\mathbf{y}) | \mathbf{y} \right]$ and ${\mathsf{I}^{(2)}_T(\theta)= \mathbb{E}_{\theta} \left[ \Dtheta \log p_{\theta}(\mathbf{x},\mathbf{y})^2| \mathbf{y} \right]}$. We obtain the estimator of the first term analogously to as $$\begin{aligned} \widehat{\mathsf{I}}^{(1)}_{T}(\theta|u) &= \sum_{t=1}^T \sum_{i=1}^N w_{\kappa_t}\pIdx{i} \zeta_{\theta} \Big( \tilde{x}_{\kappa_t,t}\pIdx{i}, \tilde{x}_{\kappa_t,t-1}\pIdx{i} \Big), \text{ where} \label{eq:LouisIdentityEstTerm1Final} \\ \zeta_{\theta}(x_t, x_{t-1}) &= \DDtheta \log f_{\theta}(x_t|x_{t-1}) + \DDtheta \log g_{\theta}(y_t|x_{t}). \nonumber\end{aligned}$$ The estimator of the second term needs a bit more work and we start by rewriting the last term in as $$\begin{aligned} &\sum_{t=1}^T \sum_{s=1}^T \mathbb{E}_{\theta} \left[ \xi_{\theta}(x_t,x_{t-1}) \xi_{\theta}(x_s,x_{s-1})^{\top} \Big| \mathbf{y} \right] \nonumber \\ &= \sum_{t=1}^T \bigg\{ \mathbb{E}_{\theta} \left[ \xi_{\theta}(x_t,x_{t-1})^2 \Big| \mathbf{y} \right] \nonumber \\ &+ \sum_{s=1}^{t-1} \mathbb{E}_{\theta} \left[ \big( \xi_{\theta}(x_t,x_{t-1}),\xi_{\theta}(x_s,x_{s-1}) \big)^{\dagger} \Big| \mathbf{y} \right] \bigg\}, \label{eq:LouisIdentityEstTerm2}\end{aligned}$$ where we have introduced the operator $(a,b)^{\dagger}=ab^{\top}+ba^{\top}$ for brevity. Consider the last term appearing in this expression, we can rewrite it as $$\begin{aligned} &\sum_{s=1}^{t-1} \mathbb{E}_{\theta} \left[ \xi_{\theta}(x_t,x_{t-1}) \xi_{\theta}(x_s,x_{s-1})^{\top} \Big| \mathbf{y} \right] \\ &= \mathbb{E}_{\theta} \Bigg[ \xi_{\theta}(x_t,x_{t-1}) \underbrace{\left\{ \sum_{s=1}^{t-1} \mathbb{E}_{\theta} \left[ \xi_{\theta}(x_s,x_{s-1}) \big| x_{t-1}, y_{1:t-1} \right] \right\}^{\top}} _{\triangleq \alpha_{\theta}(x_{t-1})^{\top}} \Big| \mathbf{y} \Bigg].\end{aligned}$$ From this, we see that can be written as an additive functional of the form $$\begin{aligned} \sum_{t=1}^T \mathbb{E}_{\theta} \left[ (\xi_{\theta}(x_t,x_{t-1}))^2 + \big(( \xi_{\theta}(x_t,x_{t-1}),\alpha_{\theta}(x_{t-1}) \big)^{\dagger} \Big| \mathbf{y} \right],\end{aligned}$$ which can be estimated using the FL smoother as before. However, for this we need to compute the quantities $\alpha_{\theta}(x_{t-1})$. One option is to make use of a type of fixed-lag approximation for $\alpha_{\theta}(x_{t-1})$, by assuming that $x_s$ and $x_t$ are conditionally independent given $y_{1:\kappa_t}$, whenever $|s-t| > \Delta$. This approach has previously been used by @DoucetJacobRubenthaler2013. Alternatively, we can use a filter approximation according to $$\begin{aligned} \widehat{\alpha}_{\theta} \Big( x_{t}\pIdx{i} \Big) = \widehat{\alpha}_{\theta} \Big( x_{t-1}^{a_{t}\pIdx{i}} \Big) + \xi_{\theta} \Big( x_t\pIdx{i},x_{t-1}^{a_{t}\pIdx{i}} \Big), \label{eq:alphaEst}\end{aligned}$$ for $i=1, \ldots, N$. Note that this approach suffers from the same particle degeneracy as the APF. 
However, this only affects a small number of terms and in our experience this approximation works sufficiently well to give estimates with reasonable variance. The resulting estimator using is $$\begin{aligned} \widehat{\mathsf{I}}^{(2)}_{T}(\theta|u) &= \sum_{t=1}^T \sum_{i=1}^N w_{\kappa_t}\pIdx{i} \eta_{\theta} \Big( \tilde{x}_{\kappa_t,t}\pIdx{i}, \tilde{x}_{\kappa_t,t-1}\pIdx{i} \Big), \label{eq:LouisIdentityEstTerm2Final} \text{ where} \\ \eta_{\theta}(x_t, x_{t-1}) &= \xi_{\theta}(x_t, x_{t-1})^2 + \big( \xi_{\theta}(x_t, x_{t-1}),\widehat{\alpha}_{\theta}(x_{t-1}) \big)^{\dagger}. \nonumber\end{aligned}$$ Hence, the Hessian can be estimated using by inserting the estimators from , and . Regularisation of the estimate of the Hessian {#sec:regularisation} --------------------------------------------- The PMH2 proposal relies on the assumption that the observed information matrix is positive definite (PD). The estimator given in does not always satisfy this, especially when the Markov chain is located far from the posterior mode. Typically, the amount of information is limited in such regions and this results in that the curvature is difficult to estimate. To cope with this issue, one alternative is to regularize the Hessian by adding a diagonal matrix to shift the eigenvalues to be positive. The diagonal matrix can e.g. be selected such that $$\begin{aligned} \Delta \widehat{I}_T = \max \Big\{0, - 2 \lambda_{\min} \big( \widehat{I}_T \big) \Big\} I_d, \label{eq:Hessianregularization}\end{aligned}$$ where $\lambda_{\min}(\widehat{I}_T)$ denotes the smallest eigenvalue of $\widehat{I}_T(\theta|u)$. In this article, we make use of this method for handling non–PD estimates of the negative Hessian for the PMH2 algorithm. This heuristic is common for Newton-type optimisation algorithms [@NocedalWright2006 Chapter 3.4]. Note, that there are other solutions available for ensuring positive definiteness that only shifts the negative eigenvalues, see [@NocedalWright2006 Chapter 3]. We emphasise that this type of regularization keeps the Markov chain invariant, i.e. still targets the correct posterior distribution (recall Section \[sec:PMH\]). Another alternative is to replace the estimate of the negative Hessian with the inverse sample covariance matrix calculated using the trace of Markov chain when the estimate is not PD. This can be seen as a hybrid between the PMH2 algorithm and a *pre-conditioned PMH1 algorithm*. This resembles some other adaptive MH algorithms [@AndrieuThoms2008] in which the same procedure is used to adapt the covariance matrix of a random walk proposal. For this, we can make use of the last $L$ iterations of the MH algorithm after that the Markov chain has reached stationarity. During the burn-in phase, non–PD estimates can be handled using a regularization approach or by rejecting the proposed parameter. In this article, we refer to this method for handling non–PD estimates of the negative Hessian as the *hybrid PMH2 algorithm*, where we use the latter alternative during the burn-in phase. Note that this pre-conditioning can also be applied to the PMH0 and PMH1 algorithm, we return to this in Section \[sec:results:earth\]. <span style="font-variant:small-caps;">Inputs:</span> $\mathbf{y}$ (data), $R(\cdot)$ (propagation kernel), $\nu(\cdot)$ (weight function), $N > 0$ (no. particles), $0 < \Delta \leq T$ (lag).\ <span style="font-variant:small-caps;">Outputs:</span> $\widehat{p}_{\theta}(\mathbf {y})$ (est. of the likelihood), $\widehat{\mathsf{S}}_T(\theta)$ (est. 
of the gradient), $\widehat{\mathsf{I}}_T(\theta)$ (est. of the negative Hessian). Initialise each particle $x_0\pIdx{i}$. Resample and propagate each particle using . Calculate the weights for each particle using . Compute $\widehat{p}_{\theta}(\mathbf{y})$ by . Compute $\widehat{\mathsf{S}}_T(\theta)$ and $\widehat{\mathsf{I}}_T(\theta)$ by and , respectively. Regularize $\widehat{\mathsf{I}}_T(\theta)$ by adding $\Delta\widehat{\mathsf{I}}_T$ computed by . Replace $\widehat{\mathsf{I}}_T(\theta)$ by the inverse covariance matrix computed using the $L$ final samples of the Markov chain during the burn-in. \[alg:SMCfull\] Resulting SMC algorithm ----------------------- In Algorithm \[alg:SMCfull\], we present the complete procedure that combines the APF with the FL smoother to compute the estimates needed for the PMH2 proposal . Note that the two different methods to handle non–PD estimates of the negative Hessian matrix result in the *standard* and *hybrid* PMH2 algorithms, respectively. We end this section by briefly discussing the statistical properties of the estimates of the gradient and Hessian obtained from the FL smoother. From @OlssonCappeDoucMoulines2008, we know that the FL smoother gives biased estimates of the gradient and Hessian for any number of particles. Remember that this does not affect the invariance of the Markov chain (recall Section \[sec:PMH\]). The main advantage of the FL smoother over the APF (which gives a consistent estimate) is that the former enjoys a smaller variance than the APF, i.e. we obtain a favourable bias-variance trade-off for a certain choice of lag $\Delta$. Note that too small a lag gives a large bias in the estimate and too large a lag gives a large variance in the estimate; we return to this choice in Section \[sec:results\]. Numerical illustrations {#sec:results} ======================= In this section, we provide illustrations of the properties of the FL smoother and the different proposed algorithms. The source code in Python and the data used for some of the numerical illustrations are available for download at: <http://liu.johandahlin.com/>. Estimation of the log-likelihood and the gradient {#sec:results:flsmoother} ------------------------------------------------- We begin by illustrating the use of the FL smoother for estimating the log-likelihood and the gradient. Here, we consider a linear Gaussian state space (LGSS) model given by $$\begin{aligned} x_{t+1}|x_{t} &\sim \textsf{N} \Big( x_{t+1}; \phi x_{t}, \sigma_v^2 \Big), \\ y_{t} |x_{t} &\sim \textsf{N} \Big( y_{t}; x_{t}, \sigma_e^2 \Big).\end{aligned}$$ \[eq:lgss\] We generate two data realisations of length $T=100$ using parameters $\theta^{(1)} = \{\phi,\sigma_v^2,\sigma_e^2\} = \{0.5,1.0,0.1^2\}$ and $\theta^{(2)} = \{0.5,1.0,1.0\}$ with a known initial zero state. We use the lag $\Delta = 5$ and run the PFs with systematic resampling [@CarpenterCliffordFearnhead1999]. ![The log $L_1$-error in the log-likelihood estimates and the estimates of the gradient with respect to $\phi$ in the LGSS model with $\sigma_e=0.1$ (left) and $\sigma_e=1$ (right). The bPF (black) and faPF (red) are evaluated by $1 \thinspace 000$ MC iterations using a fixed data set with $T=100$.[]{data-label="fig:score-lgss-n-paper"}](lgss-score-npart.pdf){width="\columnwidth"} For this model, we can compute the true values of the log-likelihood and the gradient by running an RTS smoother [@RauchTungStriebel1965].
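As a concrete reference for these comparisons, the exact log-likelihood of the LGSS model can also be evaluated with a standard Kalman filter. The sketch below simulates data from the model and computes this exact log-likelihood; it is a minimal NumPy illustration with function names of our own choosing, not the code released with the paper.

```python
import numpy as np

def simulate_lgss(phi, sigma_v, sigma_e, T, seed=0):
    """Simulate the LGSS model with a known zero initial state."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T + 1)
    y = np.zeros(T)
    for t in range(T):
        x[t + 1] = phi * x[t] + sigma_v * rng.standard_normal()
        y[t] = x[t + 1] + sigma_e * rng.standard_normal()  # y_t observes the latent state at time t
    return x, y

def kalman_log_likelihood(y, phi, sigma_v, sigma_e):
    """Exact log-likelihood of the scalar LGSS model via the Kalman filter."""
    m, P, ll = 0.0, 0.0, 0.0        # filtering mean/variance for the known zero state
    for yt in y:
        m_pred = phi * m            # predict the next latent state
        P_pred = phi**2 * P + sigma_v**2
        S = P_pred + sigma_e**2     # innovation variance
        v = yt - m_pred             # innovation
        ll += -0.5 * (np.log(2.0 * np.pi * S) + v**2 / S)
        K = P_pred / S              # Kalman gain
        m, P = m_pred + K * v, (1.0 - K) * P_pred
    return ll

x, y = simulate_lgss(0.5, 1.0, 0.1, T=100)
print(kalman_log_likelihood(y, 0.5, 1.0, 0.1))
```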
In Figure \[fig:score-lgss-n-paper\], we present boxplots of the $L_1$-errors in the estimated log-likelihood and the gradient of the log-posterior with respect to $\phi$, evaluated at the true parameters. When $\sigma_e=0.1$, we observe that the faPF has a large advantage over the bPF for all choices of $N$. When $\sigma_e=1.0$, we get smaller difference in the error of the gradient estimates, but the log-likelihood estimates are still better for the faPF. Similar results are also obtained for the gradient with respect to $\sigma_v$. ![The log $L_1$-error in the estimates of the gradient with respect to $\phi$ in the LGSS model with $\sigma_e=0.1$ (left) and $\sigma_e=1$ (right). The bPF (black) and faPF (red) are evaluated by $1 \thinspace 000$ Monte Carlo iterations using a fixed data set with $T=100$.[]{data-label="fig:score-lgss-lag-paper"}](lgss-score-lag.pdf){width="\columnwidth"} In Figure \[fig:score-lgss-lag-paper\], we present the error in the gradient estimates with respect to $\phi$ using a varying lag $\Delta$ and a varying number of particles $N$. The results are obtained by $1 \thinspace 000$ Monte Carlo runs on a single data set generated from the previously discussed LGSS model with $T=100$. We conclude again that faPF is preferable when available. The results are largely robust to the lag, as long as this is chosen large enough when using the faPF. A lag of about $12$ seems to be a good choice for this model when $T=100$ and when using the faPF with systematic resampling. Burn-in and scale-invariance ---------------------------- ![The trace plots of the first $50$ steps using PMH0 (black), PMH1 (red) and PMH2 (blue). The dotted lines show the *true* parameters of the LGSS model. The gray contours show the log-posterior.[]{data-label="fig:lgss-scaleinvariance"}](lgss-scaleinvariance-paper.pdf){width="\columnwidth"} Consider the problem of inferring $\{\theta_1,\theta_2\}=\{\phi,\sigma_v\}$ in the LGSS model . We simulate a single data set with parameters $\theta^{(1)}$ (as defined in the previous section) of length $T=250$. We use an uniform parameter prior over $|\phi|<1,\sigma_v > 0$ and initialise in $\theta_0=\{0.1,2\}$. We use faPF with systematic resampling, $N=100$ and $\Delta=12$. Here, we use the standard version of Algorithm \[alg:SMCfull\] to adjust the estimate of the Hessian in the cases when it is not PD, resulting in the PMH2 algorithm. We adjust the step lengths $\gamma$ to give an acceptance rate during a pilot run of between $0.7$ and $0.8$ in the stationary phase. We obtain $\gamma=\{0.04,0.065,1.0\}$ for PMH$\{0,1,2\}$, respectively. Note that a single step length is used for each proposal to simplify the tuning. Of course, different step lengths can be used for each parameter, and we could also use different step lengths during the burn-in and the stationary phase of the algorithm using the approach discussed in Section \[sec:constructPMH2\]. As previously mentioned, the PMH2 algorithm avoids this (potentially difficult and time-consuming) procedure, by taking the local geometric information into account. In the left column of Figure \[fig:lgss-scaleinvariance\], we present the first $50$ iterations of the Markov chain from the three different algorithms. We note that the added information in the proposals of PMH1 and PMH2 aids the Markov chain in the burn-in phase. This results in that the Markov chains for the proposed algorithms reach the mode of the posterior quicker than the random walk used in PMH0. 
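The qualitative difference between the three proposals can be made concrete with the following sketch of how a candidate parameter might be drawn in each case. This is a generic Langevin/Newton-type construction written purely for illustration, with our own function name and scalings; it is not a restatement of the exact proposal definitions given earlier in the paper.

```python
import numpy as np

def propose(theta, gamma, grad=None, neg_hessian=None, rng=None):
    """Draw a candidate: random walk (PMH0), gradient-informed (PMH1),
    or gradient- and curvature-informed (PMH2-style)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta, dtype=float)
    d = theta.size
    if grad is None:                       # PMH0: isotropic random walk
        drift, cov = np.zeros(d), np.eye(d)
    elif neg_hessian is None:              # PMH1: drift along the estimated gradient
        drift, cov = 0.5 * grad, np.eye(d)
    else:                                  # PMH2: rescale drift and noise by the curvature
        cov = np.linalg.inv(neg_hessian)   # assumed regularised to be positive definite
        drift = 0.5 * cov @ grad
    noise = np.linalg.cholesky(cov) @ rng.standard_normal(d)
    return theta + gamma**2 * drift + gamma * noise
```

In this form, PMH0 ignores the local geometry, PMH1 tilts the random walk along the estimated gradient, and PMH2 additionally rescales both drift and noise by the inverse of the (regularised) negative Hessian, which is what produces the scale invariance illustrated next.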
To illustrate the scale invariance of the PMH2 algorithm, we reparametrise the LGSS model by $\{\theta_3,\theta_4\}=\{\phi,\sigma_v/10\}$. We keep the same settings as for the previous parametrisation and rerun the algorithms. From this run we obtain the middle column in Figure \[fig:lgss-scaleinvariance\]. We see clearly that the PMH1 algorithm does not perform well and gets stuck at the initial parameter value. The reason is that the second component of the gradient is increased by a factor 10 for the rescaled model. Since we still use the same step length, this will cause the PMH1 algorithm to overshoot the region of high posterior probability when proposing new values, and these will therefore never be accepted. Finally, to improve the performance we recalibrate the three algorithms on the new parametrisation using the same procedure as before. We then obtain the new step lengths $\{0.005,0.0075,1.0\}$. The resulting Markov chains are presented in the right column of Figure \[fig:lgss-scaleinvariance\]. Despite the new step lengths, PMH0 and PMH1 continue to struggle. The reason is that the step lengths are limited by the small posterior variance in the $\theta_4$-parameter, resulting in a very slow progression in the $\theta_3$-direction. Again, for PMH2, the added Hessian information is used to rescale the proposal in each dimension, resulting in a more efficient exploration of the posterior than for PMH0 and PMH1. The mixing of the Markov chains at stationarity ----------------------------------------------- We continue by investigating the mixing of the Markov chains at stationarity using an estimate of the integrated autocorrelation time (IACT) given by $$\begin{aligned} \widehat{\textsf{IACT}}(\theta_{1:M}) = 1 + 2 \sum_{k=1}^{K} \widehat{\rho}_k (\theta_{1:M}),\end{aligned}$$ where $\widehat{\rho}_k(\theta_{1:M})$ denotes the empirical autocorrelation at lag $k$ of $\theta_{1:M}$ (after the burn-in has been discarded). A low value of the IACT indicates that we obtain many uncorrelated samples from the target distribution, implying that the chain is mixing well. Here, $K$ is determined as the first index for which the empirical autocorrelation satisfies $|\widehat{\rho}_K(\theta_{1:M})| < 2/\sqrt{M}$, i.e. when the coefficient is statistically insignificant. We return to the LGSS model in with the original parameterisation $\{\theta_1,\theta_2\}=\{\phi,\sigma_v\}$ using the same settings as before. A total of $25$ data sets are generated using the parameters $\theta^{(1)}$ and the algorithms are initialised at the true parameter values to avoid a long burn-in phase. The step sizes are determined using a series of pilot runs on the first generated dataset to minimise the total IACT for each algorithm. This is done to make a fair comparison between the different algorithms at their near *optimal* performance. The resulting step sizes are obtained as $\{0.08, 0.075, 1.50\}$.

| Proposal | SMC alg.   | Acc. rate | IACT $\phi$: Median | IQR | IACT $\sigma_v$: Median | IQR |
|----------|------------|-----------|---------------------|-----|-------------------------|-----|
| PMH0     | bPF(500)   | 0.02      | 257                 | 146 | 265                     | 371 |
| PMH0     | bPF(1000)  | 0.06      | 83                  | 129 | 79                      | 118 |
| PMH0     | bPF(2000)  | 0.15      | 29                  | 23  | 15                      | 24  |
| PMH0     | faPF(50)   | 0.37      | 9                   | 8   | 8                       | 5   |
| PMH0     | faPF(100)  | 0.38      | 9                   | 6   | 7                       | 4   |
| PMH0     | faPF(200)  | 0.38      | 7                   | 6   | 7                       | 4   |
| PMH1     | bPF(500)   | 0.02      | 187                 | 271 | 203                     | 347 |
| PMH1     | bPF(1000)  | 0.10      | 64                  | 85  | 49                      | 72  |
| PMH1     | bPF(2000)  | 0.22      | 23                  | 16  | 12                      | 24  |
| PMH1     | faPF(50)   | 0.58      | **3**               | 2   | **3**                   | 1   |
| PMH1     | faPF(100)  | 0.59      | 4                   | 2   | **3**                   | 1   |
| PMH1     | faPF(200)  | 0.58      | **3**               | 1   | **3**                   | 1   |
| PMH2     | bPF(500)   | 0.03      | 170                 | 211 | 164                     | 190 |
| PMH2     | bPF(1000)  | 0.10      | 59                  | 73  | 65                      | 80  |
| PMH2     | bPF(2000)  | 0.24      | 13                  | 10  | 19                      | 17  |
| PMH2     | faPF(50)   | 0.66      | **3**               | 1   | 4                       | 2   |
| PMH2     | faPF(100)  | 0.66      | **3**               | 1   | 5                       | 2   |
| PMH2     | faPF(200)  | 0.66      | **3**               | 1   | 4                       | 2   |

: Median and IQR for the acceptance rate and IACT using different SMC algorithms. The values are computed using $25$ different data sets from the LGSS model.[]{data-label="tbl:pmh-lgss"}

Finally, we estimate the mixing in each of the $25$ simulated data sets during $M=30 \thinspace 000$ MCMC iterations (discarding the first $10 \thinspace 000$ iterations as burn-in). The results are presented in Table \[tbl:pmh-lgss\], where the median and interquartile range (IQR; the distance between the $25\%$ and $75\%$ quartiles) are presented for each PMH algorithm. Here, we present the results for the standard version of Algorithm \[alg:SMCfull\]. We see that the added information decreases the IACT by a factor of about $2$ for PMH1 and PMH2 compared with PMH0. We conclude that the extra information brought by the gradient and the Hessian improves the mixing of the Markov chains in this model, which results in a more efficient exploration of the posterior. Note that, for this parametrisation of the LGSS model, the posterior is quite isotropic (which can also be seen in the left column of Figure \[fig:lgss-scaleinvariance\]). Hence, the conditions are in fact rather favourable for PMH0 and PMH1. Parameter inference in a Poisson count model {#sec:results:earth} -------------------------------------------- In this section, we analyse the annual number of major earthquakes[^3] (over $7$ on the Richter scale) during the period from year $1900$ to $2014$. Following @Langrock2011, we model the data using $$\begin{aligned} x_{t+1}|x_{t} &\sim \mathsf{N} \Big( x_{t+1}; \phi x_t, \sigma^2 \Big), \\ y_{t}|x_{t} &\sim \mathsf{P} \Big( y_t; \beta \exp(x_t) \Big),\end{aligned}$$ \[eq:earthquakemodel\] with parameters $\theta=\{\phi,\sigma,\beta\}$ and uniform priors over $|\phi| < 1$, $\sigma > 0$ and $\beta >0$. Here, $\mathsf{P}(\lambda)$ denotes a Poisson distribution with parameter $\lambda$. We repeat the procedure from the previous subsection and obtain the step lengths $\{0.06,0.006,0.85\}$. Here, we use $M = 30 \thinspace 000$ MCMC iterations (discarding the first $10 \thinspace 000$ iterations as burn-in), the bPF with systematic resampling, $\Delta=12$, $\theta_0=\{0.5, 0.5, 18\}$ and $L=2 \thinspace 500$. In this model, the estimate of the negative Hessian is often non–PD (during about half of the iterations) and the choice of regularisation is therefore important. To explore the properties of the regularisation, we apply both the standard and hybrid version of the PMH2 algorithm discussed in Section \[sec:regularisation\]. We compare these methods to standard and pre-conditioned versions of the PMH0 and PMH1 algorithms, using the sample posterior covariance matrix calculated in the same manner as for the hybrid PMH2 algorithm.
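The eigenvalue-shift regularisation of Section \[sec:regularisation\], on which the standard PMH2 algorithm relies, amounts to only a few lines of code. The following is a minimal NumPy sketch with a function name of our own choosing, not the released implementation.

```python
import numpy as np

def regularise_neg_hessian(neg_hessian):
    """Shift the spectrum so that the estimated negative Hessian becomes PD.

    Adds max(0, -2 * lambda_min) * I_d to the (symmetric) estimate, following
    the eigenvalue-shift rule described in the regularisation section.
    """
    lam_min = np.min(np.linalg.eigvalsh(neg_hessian))
    shift = max(0.0, -2.0 * lam_min)
    return neg_hessian + shift * np.eye(neg_hessian.shape[0])
```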
| Proposal | Version   | SMC alg.   | Acc. rate | IACT $\phi$: Median | IQR | IACT $\sigma$: Median | IQR | IACT $\beta$: Median | IQR  |
|----------|-----------|------------|-----------|---------------------|-----|-----------------------|-----|----------------------|------|
| PMH0     | Standard  | bPF(500)   | 0.26      | 497                 | 712 | 16                    | 3   | 2639                 | 1163 |
| PMH0     | Standard  | bPF(1000)  | 0.30      | 89                  | 150 | 15                    | 3   | 2680                 | 438  |
| PMH0     | Pre-cond. | bPF(500)   | 0.43      | 35                  | 17  | 16                    | 1   | 107                  | 105  |
| PMH0     | Pre-cond. | bPF(1000)  | 0.45      | 38                  | 28  | 16                    | 2   | 129                  | 131  |
| PMH1     | Standard  | bPF(500)   | 0.76      | 665                 | 442 | 277                   | 162 | 2651                 | 364  |
| PMH1     | Standard  | bPF(1000)  | 0.82      | 490                 | 134 | 205                   | 30  | 2875                 | 1007 |
| PMH1     | Pre-cond. | bPF(500)   | 0.62      | 266                 | 187 | **9**                 | 3   | 1728                 | 1638 |
| PMH1     | Pre-cond. | bPF(1000)  | 0.70      | 98                  | 209 | **9**                 | 3   | 1480                 | 1732 |
| PMH2     | Standard  | bPF(500)   | 0.24      | 91                  | 17  | 53                    | 14  | 222                  | 37   |
| PMH2     | Standard  | bPF(1000)  | 0.28      | 60                  | 14  | 47                    | 17  | 139                  | 59   |
| PMH2     | Hybrid    | bPF(500)   | 0.45      | 20                  | 3   | 17                    | 4   | 30                   | 15   |
| PMH2     | Hybrid    | bPF(1000)  | 0.49      | **17**              | 4   | 18                    | 3   | **23**               | 5    |

: Acceptance rates and IACT values for each parameter and algorithm in the earthquake count model.[]{data-label="tbl:pmh-earth"}

In Table \[tbl:pmh-earth\], we present the resulting acceptance rates and IACT values for each parameter and algorithm. We note the large decrease in IACT for $\beta$ when using the Hessian information, where the hybrid PMH2 seems to perform better than the standard version for this model. The improved mixing by using PMH2 is due to the scale invariance property, as the parameter $\beta$ is at least an order of magnitude larger than $\phi$ and $\sigma$ (cf. Figure \[fig:lgss-scaleinvariance\]). Note that a reparameterisation or using separate step lengths for the parameters could possibly have helped in improving the mixing in $\beta$ for the standard versions of PMH0 and PMH1. Using the standard and hybrid versions of PMH2 decreases the overall computational cost by a factor of about $100$ for a specific number of effective samples. The poor performance of the pre-conditioned algorithms is probably due to the fact that the sample posterior covariance matrix does not fully capture the geometry of the posterior distribution. ![Part of the trace (left) and posterior estimates (right) for the $\beta$ parameter in the earthquake count model using standard versions of PMH0 (black), PMH1 (red) and hybrid version of PMH2 (blue). Dotted lines indicate the posterior means.[]{data-label="fig:earthquake-beta-post"}](earthquake-beta-post.pdf){width="\columnwidth"} In Figure \[fig:earthquake-beta-post\], we present the trace and posterior estimates for $\beta$ using the standard versions of PMH0 and PMH1 as well as hybrid PMH2. The posterior estimates are obtained by pooling the $10$ parallel Markov chains after the burn-ins have been discarded. We see that the traces behave rather differently with hybrid PMH2 exploring the space well compared with the other methods. Using the parameter posterior estimate, we can compute point estimates for the parameters of the model. The posterior mean for hybrid PMH2 is obtained as $\{0.88, 0.15, 16.58\}$ with standard deviations $\{0.07,0.03,2\}$. The parameter estimate is comparable to the estimate $\{0.88,0.15,17.65\}$ obtained by a maximum likelihood-based method using the same data and model in @Dahlin2014 [Example 4.9]. Robustness in the lag and step size ----------------------------------- The PMH2 algorithm requires a number of parameters to be selected by the user for each parameter inference problem. It is therefore interesting to discuss the robustness of the method with respect to these parameters. In the previous illustrations, we have seen that the number of particles $N$ is an important factor in determining the mixing.
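The IACT values reported in these tables can be estimated directly from a stored trace. The sketch below implements the truncated estimator defined above, cutting the sum at the first lag whose empirical autocorrelation is statistically insignificant; the function name and the NumPy-based implementation are our own illustration.

```python
import numpy as np

def iact(chain):
    """Estimate the IACT of a one-dimensional trace (burn-in already removed)."""
    x = np.asarray(chain, dtype=float)
    M = x.size
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[M - 1:]
    acf = acf / acf[0]                       # normalised empirical autocorrelation
    total = 1.0
    for rho in acf[1:]:
        if abs(rho) < 2.0 / np.sqrt(M):      # truncation rule: first insignificant lag
            break
        total += 2.0 * rho
    return total
```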
![The IACT for $\phi$ (black), $\sigma$ (red) and $\beta$ (blue) for varying step sizes $\gamma$ (upper) and lag $\Delta$ (lower). The values are computed as the median of $10$ runs using standard PMH2 with the same data.[]{data-label="fig:earthquake-sensitivity"}](earth-sensitivity.pdf){width="\columnwidth"} Two other important parameters are the step length $\gamma$ and the lag in the FL-smoother $\Delta$. To illustrate the impact of these quantities on the IACT, we return to the Earthquake model in using the standard PMH2 algorithm with the same settings but with $M= 15 \thinspace 000$ (discarding the first $5 \thinspace 000$ iterations as burn-in) and $N=1 \thinspace 500$. In Figure \[fig:earthquake-sensitivity\], we present the IACT for the three parameters in the model when varying $\gamma$ and $\Delta$, keeping everything else fixed. The standard PMH2 algorithm seems to be rather robust to both the choice of $\Delta$ and $\gamma$ after a certain threshold. Recall the discussion in Section \[sec:results:flsmoother\] for the FL smoother. We conclude that a suitable standard choice for the step length could be $\gamma=1$, which can be fine tuned if the performance is not good enough. This recommendation is also common in the literature concerning Newton-type algorithms. Discussion and future work ========================== Adding the gradient and Hessian information to the PMH proposal can have beneficial results including: (i) a shorter burn-in phase, (ii) a better mixing of the Markov chain, and (iii) scale-invariance of the proposal which simplifies tuning. The latter point is true in particular for PMH2, since this method takes the local curvature of the posterior into account, effectively making the method invariant to affine transformations. It is common to distinguish between two phases of MCMC algorithms: the burn-in and stationary phases. We have seen empirically that the proposed methods can improve upon the original PMH0 during both of these phases but the *best* choices for the step lengths can differ between these two phases. Typically, a smaller step length is preferred during burn-in and a larger during stationarity (the opposite holds for PMH0). The reason for this is that during burn-in, the (natural) gradient information will heavily skew the proposal in a direction of increasing posterior probability. That is, the methods tend to be *aggressive* and propose large steps to make rapid progression toward regions of high posterior probability. While this is intuitively appealing, the problem is that we require the Markov chains to be reversible at all times. The reverse of these large steps can have very low probability which prevents them from being accepted. One interesting direction for future work is therefore to pursue adaptive algorithms (see e.g. @AndrieuThoms2008 [@PetersHosackHayes2010; @PittSilvaGiordaniKohn2012]), to automatically tune the step lengths during the different phases of the algorithms. Another interesting possibility is to relax the reversibility requirement during burn-in; see [@DiaconisHolmesNeal2000] for a related reference. This would cause the methods to behave like optimisation procedures during the initial phase, but transition into samplers during the second phase. Finally, another very interesting direction for future work is to extend the proposed methods to develop a particle-version of the manifold Hamiltonian Monte Carlo (mHMC) algorithm [@DuaneKennedyPendletonRoweth1987; @Neal2010; @GirolamiCalderhead2011]. 
The reason for this is motivated by the large improvement in mixing seen by e.g. @Neal2010 [@GirolamiCalderhead2011] for high dimensional problems in *vanilla* MH sampling. [^1]: This work was supported by: Learning of complex dynamical systems (Contract number: 637-2014-466) and Probabilistic modeling of dynamical systems (Contract number: 621-2013-5524) and CADICS, a Linnaeus Center, all funded by the Swedish Research Council. JD is with the Division of Automatic Control, Link[ö]{}ping University, Link[ö]{}ping, Sweden. E-mail: ` johan.dahlin@liu.se`. FL is with the Department of Engineering, University of Cambridge, Cambridge, United Kingdom. E-mail: `fredrik.lindsten@eng.cam.ac.uk`. TS is with Division of Systems and Control, Uppsala University, Uppsala, Sweden. E-mail: `thomas.schon@it.uu.se`. [^2]: The final publication is available at Springer via: <http://dx.doi.org/10.1007/s11222-014-9510-0> [^3]: The data is obtained from the Earthquake Data Base System of the U.S. Geological Survey, which can be accessed at <http://earthquake.usgs.gov/earthquakes/eqarchives/>.
--- abstract: 'We prove that certain acyclic cluster algebras over the complex numbers are the coordinate rings of holomorphic symplectic manifolds. We also show that the corresponding quantum cluster algebras have no non-trivial prime ideals. This allows us to give evidence for a generalization of the conjectured variant of the orbit method for quantized coordinate rings and their classical limits.' author: - Sebastian Zwicknagl title: Poisson and quantum geometry of acyclic cluster algebras --- Introduction ============ In the present paper we investigate the Poisson geometry associated with cluster algebras over the complex numbers defined by acyclic quivers, and relate them to the ideal theory of the corresponding quantum cluster algebras. Our main motivation is the following conjectural analogue of Kirillov’s Orbit method for quantized coordinate rings, which has been an open problem for roughly twenty years (see e.g. [@brown-goodearl] or [@Yak-spec Section 4.3] and [@Soi] for the case of compact quantum groups). Let $G$ be a semisimple complex algebraic group and $\CC[G]$ its coordinate ring, while $\CC_q[G]$ denotes the corresponding quantized coordinate ring. It has been conjectured that there exists a homeomorphism between the space of primitive ideals in $R_q[G]$ and the symplectic leaves of the standard Poisson structure on $G$. For an excellent introduction to this conjecture we refer the reader to Goodearl’s paper [@Goo1]. The conjecture appears extremely difficult to prove and it is only known to be true in the cases of $G=SL_2, SL_3$. The coordinate rings $\CC[G]$ are known to have an upper cluster algebra structure ([@BFZ]) while the quantized coordinate rings are conjectured to have a quantum (upper) cluster algebra structure ([@bz-qclust Conjecture 10.10]). Indeed, it follows from recent results of Geiß, Leclerc and Schröer ([@GLSq Section 12.4]) that $\CC_q[SL_n]$ has a quantum cluster algebra structure. Cluster algebras are nowadays very well-established, hence we do not recall any of the definitions here, and refer the reader to the literature, resp. our Section \[se:Cluster Algebras\]. Most importantly for our purposes, a cluster algebra over $\CC$ is defined by a combinatorial datum in a field of fractions $\CC(x_1,\ldots, x_n)$. We will denote this [*initial seed*]{} by $({\bf x}, B)$ where ${\bf x}=(x_1,\ldots,x_n)$ and $B$ is an integer $m\times n$-matrix with $m\le n$ such that its principal $m\times m$ submatrix is skew-symmetrizable. The cluster variables $x_{m+1},\ldots, x_n$ are the frozen variables which we will call coefficients. A quantum cluster algebra is given by a [*quantum seed*]{} $({\bf x}, B,\Lambda)$, where $B$ is as above and $\Lambda$ is a skew-symmetric $n\times n$-matrix such that $(B,\Lambda)$ is a [*compatible pair*]{} (see Section \[se:Cluster Algebras\] for details). The set ${\bf x}=(x_1,\ldots, x_n)$ now lives in the skew-field of fractions $\CC_\Lambda(x_1,\ldots,x_n)$ defined by $\Lambda$. A compatible pair also defines a compatible Poisson structure in the sense of [@GSV] on the cluster algebra given by $({\bf x},B)$. It is well-known that the conjectured quantum cluster algebra structures on the rings $R_q[G]$ and the standard Poisson structure on $\CC[G]$ arise from such a compatible pair. Therefore, we would like to suggest the following conjecture. \[conj: homeo cluster\] Let $(B,\Lambda)$ be a compatible pair and let $\AA$ and $\AA_q$ be a cluster, resp. quantum cluster algebra defined by $({\bf x},B)$, resp.
$({\bf x},B,\Lambda)$. Suppose further that $\AA$ and $\AA_q$ are Noetherian and that $\AA$ is the coordinate ring of the affine variety $X$. Then, there exists a homeomorphism between the space of primitive ideals of $\AA_q$ and the symplectic leaves on $X$ defined by $\Lambda$. In light of Conjecture \[conj: homeo cluster\], we may think of quantum affine space and quantum tori as cluster algebras where all cluster variables are frozen. In this case the corresponding homeomorphism is well known and easy to construct. The other extreme case are cluster algebras without coefficients and here the class that is usually easiest to study are the acyclic cluster algebras. For example, it is known that such a cluster algebra is always Noetherian and the coordinate ring of an affine variety (see [@BFZ] and [@bz-qclust] for the classical and quantum versions). It is our main objective to give evidence for Conjecture \[conj: homeo cluster\] by proving it in this very specific case. It is an immediate consequence of the following two main results. \[th:Classical-intro\] Let $\AA$ be a cluster algebra with initial seed $({\bf x}, B)$ defined by an acyclic quiver where $B$ is invertible satisfying , and suppose that it is the coordinate ring of an affine variety $X$ and that $(B,\Lambda)$ is a compatible pairs. Then $X$ has the structure of a symplectic manifold, whose symplectic form is the corresponding Poisson bivector. \[th:quantum-intro\] Let $\AA_q$ be a quantum cluster algebra with quantum seed $( {\bf x}, B,\Lambda)$ satisfying the assumptions of Theorem \[th:Classical-intro\]. Then $\{0\}$ is the only proper two sided prime ideal in $\AA_q$. Our approach, is similar to that of [@ZW; @tpc], however all the proofs are self-contained and much easier, as our set-up is less general. The main idea is to study the intersection of ideals with the polynomial ring generated by a given cluster–in this case the acyclic seed. We are able to derive rather strong conditions that Poisson prime ideals–resp. two-sided prime ideals in the quantum case– must satisfy and are able to show that no non-trivial ideals satisfying them exist. A straightforward argument, then allows us to conclude that the variety $X$ is a symplectic manifold. We should also remark that we do not know whether any acyclic cluster algebras exist that do not satisfy the assumptions made in Theorem \[th:Classical-intro\]. The paper is organized as follows. We first briefly recall the definitions of cluster algebras and compatible Poisson structures, compatible pairs and quantum cluster algebras in Section \[se:Cluster Algebras\]. Thereafter, we continue with some technical key propositions (Section \[se:key propositions\]) and discuss in Section \[se:Poisson and symp geom\] the symplectic geometry of acyclic cluster algebras. The proof of Conjecture \[conj: homeo cluster\] in our specific case is completed in Section \[se:ideals in qca\] by proving Theorem \[th:quantum-intro\]. Cluster Algebras {#se:Cluster Algebras} ================ Cluster algebras {#se:Cluster Algebras-def} ---------------- In this section, we will review the definitions and some basic results on cluster algebras, or more precisely, on cluster algebras of geometric type over the field of complex numbers $\CC$. Denote by $\mathfrak{F}=\CC(x_1,\ldots, x_n)$ the field of fractions in $n$ indeterminates. 
Recall that a $m \times m$-integer matrix $B'$ is called skew-symmetrizable if there exists a $m \times m$-diagonal matrix $D$ with positive integer entries such that $B' \cdot D$ is skew-symmetric. Now, let $B$ be a $m\times n$-integer matrix such that its principal $m\times m$-submatrix is skew-symmetrizable. We call the tuple $(x_1,\ldots,x_n, B)$ the [*initial seed*]{} of the cluster algebra and $ (x_1,\ldots x_m)$ a cluster, while ${\bf x}=(x_1,\ldots x_n)$ is called an extended cluster. The cluster variables $x_{m+1},\ldots,x_n$ are called [*coefficients*]{}. We will now construct more clusters, $(y_1,\ldots, y_m)$ and extended clusters ${\bf y}=(y_1,\ldots, y_n)$, which are transcendence bases of $\mathfrak{F}$, and the corresponding seeds $({\bf y}, \tilde B)$ in the following way. Define for each real number $r$ the numbers $r^+={\rm max}(r,0)$ and $r^-={\rm min}(r,0)$. Given a skew-symmetrizable integer $m \times n$-matrix $B$, we define for each $1\le i\le m$ the [*exchange polynomial*]{} $$P_i = \prod_{k=1}^n x_k^{b_{ik}^+}+ \prod_{k=1}^n x_k^{-b_{ik}^-}\ .$$ We can now define the new cluster variable $x_i'\in\mathfrak{F}$ via the equation $$\label{eq:exchange} x_ix_i'=P_i\ .$$ This allows us to refer to the matrix $B$ as the [*exchange matrix*]{} of the cluster $(x_1,\ldots,x_n)$, and to the relations defined by Equation \[eq:exchange\] for $i=1,\ldots,m$ as [*exchange relations*]{}. We obtain that $(x_1,x_2,\ldots, \hat x_i,x_i',x_{i+1},\ldots, x_n)$ is a transcendence basis of $\mathfrak{F}$. We next construct the new exchange matrix $B_i=B'=(b_{ij}')$, associated to the new (extended) cluster $${\bf x}_i=(x_1,x_2,\ldots, \hat x_i,x_i',x_{i+1},\ldots, x_n)$$ via its coefficients $b_{ij}'$ as follows: $\bullet$ $b_{ij}' = -b_{ij}$ if $j \le n$ and $i = k$ or $j = k$, $\bullet$ $b_{ij}' = b_{ij} + \frac{|b_{ik} |b_{kj} + b_{ik} |b_{kj} |}{2}$ if $j \le n$ and $i \ne k$ and $j \ne k$, $\bullet$ $b_{ij}'=b_{ij}$ otherwise. This algorithm is called [*matrix mutation*]{}. Note that $B_i$ is again skew-symmetrizable (see e.g. [@FZI]). The process of obtaining a new seed is called [*cluster mutation*]{}. The set of seeds obtained from a given seed $({\bf x},B)$ is called the mutation equivalence class of $({\bf x},B)$. The cluster algebra $\mathfrak{A}\subset \mathfrak{F}$ corresponding to an initial seed $(x_1,\ldots, x_n,B)$ is the subalgebra of $\mathfrak{F}$, generated by the elements of all the clusters in the mutation equivalence class of $({\bf x},B)$ . We refer to the elements of the clusters as the [*cluster variables*]{}. Notice that the coefficients, resp. frozen variables $x_{m+1},\ldots, x_n$ will never be mutated. Of course, that explains their name. We have the following fact, motivating the definition of cluster algebras in the study of total positivity phenomena and canonical bases. [@FZI Section 3](Laurent phenomenon) Let $\mathfrak{A}$ be a cluster algebra with initial extended cluster $(x_1,\ldots, x_n)$. Any cluster variable $x$ can be expressed uniquely as a Laurent polynomial in the variables $x_1,\ldots, x_n$ with integer coefficients. Moreover, it has been conjectured for all cluster algebras, and proven in many cases (see e.g. [@MSW] and [@FST],[@FT]) that the coefficients of these polynomials are positive. Finally, we recall the definition of the lower bound of a cluster algebra $\AA$ corresponding to a seed $({\bf x}, B)$. 
Denote by $y_i$ for $1\le i\le m$ the cluster variables obtained from ${\bf x}$ through mutation at $i$; i.e., they satisfy the relation $x_iy_i=P_i$. [@BFZ Definition 1.10] \[def:lower bounds\] Let $\AA$ be a cluster algebra and let $({\bf x}, B)$ be a seed. The lower bound $ \mathfrak{L}_B \subset \AA$ associated with $({\bf x}, B)$ is the algebra generated by the set $\{x_1,\ldots x_n,y_1\ldots, y_m\}$. Upper cluster algebras {#se:upper cluster algebras} ---------------------- Berenstein, Fomin and Zelevinsky introduced the related concept of upper cluster algebras in [@BFZ]. Let $\mathfrak{A} \subset \mathfrak{F}$ be a cluster algebra with initial cluster $(x_1, \ldots, x_n, B)$ and let, as above, $y_1, \ldots, y_m$ be the cluster variables obtained by mutation in the directions $1, \ldots, m$, respectively. \(a) The upper bound $\UU_{{\bf x},B} ( \mathfrak{A})$ is defined as $$\UU_{{\bf x},B} ( \mathfrak{A}) = \bigcap_{j = 1}^m \CC [x_1^{\pm 1}, \ldots x_{j - 1}^{\pm 1}, x_j, y_j, x_{j + 1}^{\pm 1}, \ldots, x_m^{\pm 1}, x_{m+1},\ldots,x_n] \ .$$ \(b) The upper cluster algebra $\UU ( \mathfrak{A})$ is defined as $$\UU ( \mathfrak{A})=\bigcap_{({\bf x'},B')}\UU_{\bf x'} ( \mathfrak{A})\ ,$$ where the intersection is over all seeds $({\bf x}',B')$ in the mutation equivalence class of $({\bf x},B)$. Observe that each cluster algebra is contained in its upper cluster algebra (see [@BFZ]). Poisson structures {#se:poissonstructure} ------------------ Cluster algebras are closely related to Poisson algebras. In this section we recall some of the related notions and results. Let $k$ be a field of charactieristic $0$. A Poisson algebra is a pair $(A,\{\cdot,\cdot\})$ consisting of a commutative $k$-algebra $A$ and a bilinear map $\{\cdot,\cdot\}:A\otimes A\to A$, satisfying for all $a,b,c\in A$: 1. skew-symmetry: $\{a,b\}=-\{b,a\}$ 2. Jacobi identity: $\{a,\{b,c\}\}+\{c,\{a,b\}\}+\{b,\{c,a\}\}=0$, 3. Leibniz rule: $a\{b,c\}=\{a,b\}c+b\{a,c\}$. If there is no room for confusion we will refer to a Poisson algebra $(A,\{\cdot,\cdot\})$ simply as $A$. A [*Poisson Ideal*]{} $\II$ in a Poisson algebra $A$ is an ideal such that $\{\II,A\}\subset \II$, and if $k$ is of characteristic zero, then a Poisson prime ideal is a prime ideal which is also Poisson. Gekhtman, Shapiro and Vainshtein showed in [@GSV] that one can associate Poisson structures to cluster algebras in the following way. Let $\mathfrak{A} \subset \CC[x_1^{\pm 1}, \ldots, x_n^{\pm 1}] \subset \mathfrak{F}$ be a cluster algebra. A Poisson structure $\{\cdot, \cdot\}$ on $\CC [x_1, \ldots, x_n]$ is called log-canonical if $\{ x_i,x_j\}=\lambda_{ij} x_ix_j$ with $\lambda_{ij}\in \CC$ for all $1\le i,j\le n$. The Poisson structure can be naturally extended to $\mathfrak{F}$ by using the identity $0=\{f\cdot f^{-1},g\}$ for all $f,g\in\CC [x_1, \ldots, x_n]$. We thus obtain that $\{f^{-1},g\}=-f^{-2}\{f,g\}$ for all $f,g\in \mathfrak{F}$. We call $\Lambda=\left( \lambda_{ij}\right)_{i,j=1}^n$ the [*coefficient matrix*]{} of the Poisson structure. We say that a Poisson structure on $\mathfrak{F}$ is compatible with $\mathfrak{A}$ if it is log-canonical with respect to each cluster $(y_1,\ldots, y_n)$; i.e., it is log canonical on $\CC[y_1, \ldots, y_n]$. \[re:Class Poisson\] A classification of Poisson structures compatible with cluster algebras was obtained by Gekhtman, Shapiro and Vainshtein in [@GSV Theorem 1.4]. 
It is easy to see from their description that if $n$ is even, then the cluster algebra has an admissible Poisson structure of maximal rank. We will refer to the cluster algebra $\AA$ defined by the initial seeed $({\bf x},B)$ together with the compatible Poisson structure defined by the coefficient matrix $\Lambda$ with respect to the cluster ${\bf x}$ as the [*Poisson cluster algebra*]{} defined by the [*Poisson seed*]{} $({\bf x},B,\Lambda)$. It is not obvious under which conditions a Poisson seed $({\bf x},B,\Lambda)$ would yield a Poisson bracket $\{\cdot,\cdot\}_\Lambda$ on $\mathfrak{F}$ such that $\{\AA,\AA\}_\Lambda\subset \AA$. We have, however, the following fact. Let $({\bf x},B,\Lambda)$ be a Poisson seed and $\AA$ the corresponding cluster algebra. Then $\Lambda$ defines a Poisson algebra structure on the upper bound $\UU_{{\bf x},B}(\AA)$ and the upper cluster algebra $\UU(\AA)$. Denote as above by $\{\cdot,\cdot\}_\Lambda$ the Poisson bracket on $\mathfrak{F}$ by $\Lambda$. Observe that the algebras $\CC[x_1^{\pm 1},\ldots x_{i-1}^{\pm 1}, x_i,y_i, x_{i+1}^{\pm 1}, \ldots, x_n^{\pm 1}]$ are Poisson subalgebras of the Poisson algebra $\CC[x_1^{\pm 1},\ldots x_n^{\pm 1}]$ for each $1\le i\le m$, as $\{x_i,y_i\}_\Lambda=\{x_i,x_i^{-1}P_i\}_\Lambda\in \CC[x_1,\ldots, x_n]$. If $A$ is a Poisson algebra and $\{B_i\subset A:i\in I\}$ is a family of Poisson subalgebras, then $\bigcap_{i\in I} B_i$ is a Poisson algebra, as well. The assertion follows. Compatible Pairs and Their Mutation {#se:Compatible Pairs and Mut} ----------------------------------- Section \[se:Compatible Pairs and Mut\] is dedicated to compatible pairs and their mutation. Compatible pairs yield important examples of Poisson brackets which are compatible with a given cluster algebra structure, and as we shall see below, they are also integral in defining quantum cluster algebras. Note that our definition is slightly different from the original one in [@bz-qclust]. Let, as above, $m\le n$. Consider a pair consisting of a skew-symmetrizable $m\times n$-integer matrix $B$ with rows labeled by the interval $[1,m]=\{1,\ldots, m\}$ and columns labeled by $[1,n]$ together with a skew-symmetrizable $n\times n$-integer matrix $\Lambda$ with rows and columns labeled by $[1,n]$. \[def:compa pair\] Let $B$ and $\Lambda$ be as above. We say that the pair $(B,\Lambda)$ is compatible if the coefficients $d_{ij}$ of the $m\times n$-matrix $D=B\cdot \Lambda$ satisfy $d_{ij}=d_i\delta_{ij}$ for some positive integers $d_i$ ($i\in [1,m]$). This means that $D=B\cdot \Lambda$ is a $m\times n$ matrix where the only non-zero entries are positive integers on the diagonal of the principal $m\times m$-submatrix. The following fact is obvious. \[le:full rank\] Let $(B,\Lambda)$ be a compatible pair. Then $B\cdot \Lambda$ has full rank. Let $(B,\Lambda)$ be a compatible pair and let $k\in [1,m]$. We define for $\varepsilon\in \{+1,-1\}$ a $n\times n$ matrix $E_{k,\varepsilon}$ via - $(E_{k,\varepsilon})_{ij}=\delta_{ij}$ if $j\ne k$, - $(E_{k,\varepsilon})_{ij}= -1$ if $i=j= k$, - $(E_{k,\varepsilon})_{ij}= max(0,-\varepsilon b_{ik})$ if $i\ne j= k$. Similarly, we define a $m\times m$ matrix $F_{k,\varepsilon}$ via - $(F_{k,\varepsilon})_{ij}=\delta_{ij}$ if $i\ne k$, - $(F_{k,\varepsilon})_{ij}= -1$ if $i=j= k$, - $(F_{k,\varepsilon})_{ij}= max(0,\varepsilon b_{kj})$ if $i= k\ne j$. 
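Before turning to the mutation of the pair itself, we note that the exchange-matrix mutation of Section \[se:Cluster Algebras-def\] and the compatibility test of Definition \[def:compa pair\] are easy to experiment with numerically. The following Python/NumPy sketch, with function names of our own choosing, is purely illustrative and not part of the results.

```python
import numpy as np

def mutate_exchange_matrix(B, k):
    """Mutate the m x n exchange matrix B in direction k (0-based, k < m)."""
    B = np.asarray(B, dtype=int)
    Bp = B.copy()
    m, n = B.shape
    for i in range(m):
        for j in range(n):
            if i == k or j == k:
                Bp[i, j] = -B[i, j]
            else:
                # exact integer division: the numerator is always even
                Bp[i, j] = B[i, j] + (abs(B[i, k]) * B[k, j] + B[i, k] * abs(B[k, j])) // 2
    return Bp

def is_compatible(B, Lam):
    """Check Definition [def:compa pair]: D = B @ Lam vanishes off the diagonal
    of its principal block and has positive diagonal entries there."""
    D = np.asarray(B) @ np.asarray(Lam)
    m, n = D.shape
    diag = np.eye(m, n, dtype=bool)
    return bool(np.all(D[~diag] == 0) and np.all(D[diag] > 0))
```

For instance, for $B=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$ and $\Lambda=\mu B^{-1}$ with $\mu\in\ZZ_{>0}$ one finds $B\Lambda=\mu\,\mathrm{Id}$, so the pair passes this test, in line with the example of compatible pairs given below.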
We define a new pair $(B_k,\Lambda_k)$ as $$\label{eq:mutation matrix and Poisson} B_k= F^T_{k,\varepsilon} B E_{k,\varepsilon}^T \ , \quad \Lambda_k=E_{k,\varepsilon} \Lambda E_{k,\varepsilon}^T\ ,$$ where $X^T$ denotes the transpose of $X$. We chose this rather non-straightforward way of defining $E_{k\varepsilon}$ and $F_{k,\varepsilon}$ in order to show how our definition relates to that of [@bz-qclust]. We will not need it in what follows. The motivation for the definition is the following fact. [@bz-qclust Prop. 3.4] \[pr:comp under mutation\] The pair $(B_k,\Lambda_k)$ is compatible. Moreover, $\Lambda_k$ is independent of the choice of the sign $\varepsilon$. The following fact is clear. Let $\AA$ be a cluster algebra given by an initial seed $({\bf x}, B)$ where $B$ is a $m\times n$-matrix. If $(B,\Lambda)$ is a compatible pair, then $\Lambda$ defines a compatible Poisson bracket on $\mathfrak{F}$ and on $\UU(\AA)$. \[ex:acyclic\] If $m=n$ (i.e. there are no coefficients/frozen variables) and $B$ has full rank, then $(B, \mu B^{-1})$ is a compatible pair for all $\mu\in \ZZ_{> 0}$ such that $\mu B^{-1}$ is an integer matrix. It follows from [@GSV Theorem 1.4] that in this case all compatible Poisson brackets arise in this way. Recall that double Bruhat cells in complex semisimple connected and simply connected algebraic groups have a natural structure of an upper cluster algebra (see [@BFZ]). Berenstein and Zelevinsky showed that the standard Poisson structure is given by compatible pairs relative to this upper cluster algebra structure (see [@bz-qclust Section 8]). Quantum Cluster Algebras {#se: q-cluster algebra spec} ------------------------ In this section we recall the the definition of a quantum cluster algebra, introduced by Berenstein and Zelevinsky in [@bz-qclust]. We define, for each skew-symmetric $n\times n$-integer matrix $\Lambda$, the skew-polynomial ring $\CC_\Lambda^t[x_1,\ldots, x_n]$ to be the $\CC[t^{\pm 1}]$-algebra generated by $x_1,\ldots, x_n$ subject to the relations $$x_ix_j=t^{\lambda_{ij}} x_jx_i\ .$$ Analogously, the quantum torus $H_\Lambda^t=\CC_\Lambda^t[x_1^{\pm 1},\ldots, x_n^{\pm 1} ]$ is defined as the localization of $\CC_\Lambda^t[x_1,\ldots, x_n]$ at the monoid generated by ${x_1,\ldots, x_n}$, which is an Ore set. The quantum torus is clearly contained in the skew-field of fractions $\mathcal{F}_\Lambda$ of $\CC_\Lambda^t[x_1,\ldots, x_n]$, and the Laurent monomials define a lattice $L\subset H_\Lambda^t\subset \mathcal{F}_\Lambda$ isomorphic to $\ZZ^n$. Denote for each ${\bf e}=(e_1,\ldots e_n)\in \ZZ^n$ by $x^{\bf e}$ the monomial $x_1^{e_1}\ldots x_n^{e_n}$. We need the notion of a toric frame in order to define the quantum cluster algebra. A toric frame in $ \mathfrak F$ is a mapping $M:\ZZ^n\to \mathfrak F-\{0\}$ of the form $$M(c)=\phi(X^{\eta(c)})\ ,$$ where $\phi$ is a $\QQ(\frac{1}{2})$-algebra automorphism of $\mathfrak F$ and $\eta: \ZZ^n\to L$ an isomorphism of lattices. Since a toric frame $M$ is determined uniquely by the images of the standard basis vectors $\phi(X^{\eta(e_1)})$,…, $\phi(X^{\eta(e_n)})$ of $\ZZ^n$, we can associate to each toric frame a skew commutative $n\times n$-integer matrix $\Lambda_M$. We can now define the quantized version of a seed. [@bz-qclust Definition 4.5] A quantum seed is a pair $(M,B)$ where - $M$ is a toric frame in $\mathfrak F$. - $B$ is a $n\times m$-integer matrix with rows labeled by $[1,m]$ and columns labeled by $[1,n]$. - The pair $(B,\Lambda_M)$ is compatible. 
Now we define the seed mutation in direction of an exchangeable index $k\in [1,m]$. For each $\varepsilon\in \{ 1,-1\}$ we define a mapping $M_k: \ZZ^n\to \mathfrak F$ via $$M_k(c)=\sum_{p=0}^{c_k} \binom{c_k}{p}_{q^{d_k}{2}} M(E_\varepsilon c +\varepsilon p b^k)\ , \quad M_k(-c)=M_k(c)^{-1}\ ,$$ where we use the well-known $q$-binomial coefficients (see e.g. [@bz-qclust Equation 4.11]), and the matrix $E_{k,\varepsilon}$ defined in Section \[se:Compatible Pairs and Mut\]. Define $B_k$ to be obtained from $B$ by the standard matrix mutation in direction $k$, as in Section \[se:Cluster Algebras-def\]. One obtains the following fact. [@bz-qclust Prop. 4.7] (a) The map $M_k$ is a toric frame, independent of the choice of sign $\varepsilon$. \(b) The pair $(B_k,\Lambda_{M_k})$ is a quantum seed. Now, given an [*initial quantum seed*]{} $(B, \Lambda_M) $ denote, in a slight abuse of notation, by $X_1=M(e_1),\ldots, X_r=M(e_r)$, which we refer to as the [*cluster variables*]{} associated to the quantum seed $(M,B)$. Here our nomenclature differs slightly from [@bz-qclust], since there one considers the coefficients not to be cluster variables. We now define the seed mutation $$X_k'=M(-e_k+\sum_{b_{ik}>0} b_{ik}e_i)+ M(-e_k-\sum_{b_{ik}<0} b_{ik}e_i)\ .$$ We obtain that $X_k'=M_k(e_k)$ (see [@bz-qclust Prop. 4.9]). We say that two quantum seeds $(M,B)$ and $(M',B')$ are mutation-equivalent if they can be obtained from one another by a sequence of mutations. Since mutations are involutive (see [@bz-qclust Prop 4.10]), the quantum seeds in $\mathfrak{F}$ can be grouped in equivalence classes, defined by the relation of mutation equivalence. The quantum cluster algebra generated by a seed $(M,B)\subset \mathfrak F$ is the $\CC[t^{\pm 1}]$-subalgebra generated by the cluster variables associated to the seeds in an equivalence class. There are definitions of quantum lower bounds, upper bounds and quantum upper cluster algebras (see [@bz-qclust Sections 5 and 7]), analogous to the classical case. Intersections of Ideals with Clusters {#se:key propositions} ===================================== In the present chapter we consider the intersection between Poisson ideals in a cluster algebra and individual clusters. Moreover, we prove quantum analogues of the propositions whenever available. \[pr:TPP super toric\] Let ${\bf x}$ be a cluster, $rank(\Lambda)=n$ and $\II$ be a non-zero Poisson ideal. Then the ideal $\II$ contains a monomial in $x^m\in \CC[{\bf x}]$. Notice first that $\II_{\bf x}=\II\cap \CC[x_1,\ldots, x_n]\ne 0$. Indeed, let $0\ne f\in \II$. We can express $f$ as a Laurent polynomial in the variables $x_1,\ldots, x_n$; i.e., $f=x_{1}^{-c_1}\ldots x_n^{-c_n} g$ where $c_1,\ldots, c_n\in \ZZ_{\ge 0}$ and $0\ne g\in\CC[x_1,\ldots, x_n]$. Clearly, $g= x_{1}^{c_1}\ldots x_n^{c_n}f \in \II_{\bf x}$. We complete the proof by contradiction. Let $f=\sum_{{\bf w}\in \ZZ^n} c_{\bf w} x^{\bf w}\in\II_{\bf x}$ We assume that $f$ has the smallest number of nonzero summands such that no monomial term $c_{\bf w} x^{\bf w}$ with $c_{\bf w}\neq 0$ is contained in $\II$. It must therefore have at least two monomial terms. Assume, as above that $c_{\bf w},c_{{\bf w}'}\ne 0$ and denote by ${\bf v}$ the difference ${\bf v}={\bf w}-{\bf w}'$. Since $\Lambda$ has full rank, there exists $i\in [1,n]$ such that $\{x_i,x^{\bf v}\}\ne 0$. Therefore, $\{x_i, x^{\bf w}\}=cx_ix^{\bf w}\ne dx_ix^{{\bf w}'}\{x_i, x^{{\bf w}'}\}$ for some $c,d\in \CC$. 
Note that $\{x_i,f\}=\sum_{{\bf w}\in \ZZ^n} c_{\bf w} \lambda_{\bf w}x^{\bf w}x_i$ for certain $ \lambda_{\bf w}\in\ZZ$. Clearly, $cx_if-\{x_i,f\}\in \II$ and $$cx_if-\{x_i,f\}=(c-c)c_{\bf w} x^{\bf w}x_i+(c-d)c_{{\bf w}'} x^{{\bf w}'}x_i+\ldots\ .$$ Hence, $cx_if-\{x_i,f\}\ne 0$ and it has fewer monomial summands than $f$ which contradicts our assumption. Therefore, $\II$ contains a monomial. The proposition is proved. The following fact is an obvious corollary of Proposition \[pr:TPP super toric\]. \[th:intersection\] Let $\AA$ be the cluster algebra defined by a Poisson seed $({\bf x}, B,\Lambda)$ with $rank(\Lambda)=n$. If $\II\subset \AA$ is a non-zero Poisson prime ideal, then $\II$ contains a cluster variable $x_i$. We also have the following quantum version of Proposition \[pr:TPP super toric\] which is proved analogously to the classical case. \[pr:TPP super toric q\] Let $({\bf x},B,\Lambda)$ be a quantum seed, $rank(\Lambda)=n$ and $\II$ a non-zero two-sided ideal. Then the ideal $\II$ contains a monomial in $x^m\in \CC_\Lambda[{\bf x}]$. We do not have a quantum version of Proposition \[th:intersection\] because we do not know whether prime ideals in quantum cluster algebras are completely prime. Poisson ideals in acyclic cluster algebras {#se:Poisson and symp geom} ========================================== In this section we recall results from our previous paper [@ZW; @tpc]. As the proofs are rather short, we shall include them for convenience. Recall e.g. from [@BFZ] that acyclic cluster algebras associated with an acyclic quiver and with trivial coefficients correspond, up to a reordering of the variables of the acyclic seed, to cluster algebras defined by a seed $({\bf x},B)$ where $B$ is a skew-symmetric $n\times n$-matrix with $b_{ij}>0$ if $i<j$. Berenstein, Fomin and Zelevinsky proved in [@BFZ] that such a cluster algebra $\AA$ is equal to both its lower and upper bounds. Thus, it is Noetherian and, if $B$ has full rank, a Poisson algebra with the Poisson brackets given by compatible pairs $(B,\Lambda)$ with $\Lambda=\mu B^{-1}$ for certain $\mu\in \ZZ$ (see Example \[ex:acyclic\]). In order for $B$ to have full rank we have to assume that $n=2k$ is even. Let $P_i=m_i^+ +m_i^-$ where $m_i^+$ and $m_i^-$ denote the monomial terms in the exchange polynomial. Then $\{y_i,x_i\}=\mu_1m_i^+ +\mu_2m_i^-$ for some $\mu_1,\mu_2\in \ZZ$. We, additionally, want to require that $\mu_1\ne \mu_2$. To assure this, we assume that $$\label{eq:poisson gen} \sum_{j=1}^n (b^{-1})_{ij}\left(max(b_{ij},0)+min(b_{ij},0)\right)\ne 0$$ for all $i\in [1,n]$. We have the following result. [@ZW; @tpc] \[th:acyclic\] Let $\AA$ be an acyclic cluster algebra over $\CC$ with trivial coefficients of even rank $n=2k$, given by a seed $(x_1,\ldots, x_{n}, B)$ where $B$ is a skew-symmetric $n\times n$-integer matrix satisfying $b_{ij}>0$ if $i<j$ and suppose that $B$ and $B^{-1}$ satisfy Equation \[eq:poisson gen\] for each $i\in[1,n]$. Then, the Poisson cluster algebra defined by a compatible pair $(B,\Lambda)$ where $\Lambda=\mu B^{-1}$ with $0\ne \mu\in \ZZ$ contains no non-trivial Poisson prime ideals. Suppose that there exists a non-trivial Poisson prime ideal $\II$. Then, $\II\cap {\bf x}$ is nonempty by Proposition \[pr:TPP super toric\], hence $\II\cap {\bf x}=\{x_{i_1},\ldots, x_{i_j}\}$ for some $1\le i_1\le i_2\le \ldots\le i_j\le 2k$. 
Additionally, observe that $P_{i_1}=m_{i_1}^++m_{i_1}^-$ has to be contained in $\II$, as well as $$\{y_{i_1},x_{i_1}\}=\mu_1m_{i_1}^+ +\mu_2m_{i_1}^-\ .$$ By our assumption, we have $\mu_1\ne \mu_2$, and therefore $m_{i_1}^-\in \II$. Hence, $x_h\in\II$ for some $h\in[1, i_{1}-1]$ or $1\in \II$. But $b_{i,h}<0$ implies that $h<i$ and we obtain the desired contradiction. The theorem is proved. The theorem has the following corollary which was also independently proved by Muller very recently [@Mu; @1], though in more generality. \[co:acyclic smooth\] Let $\AA$ be as in Theorem \[th:acyclic\]. Then, the variety $X$ defined by $\AA=\CC[X]$ is smooth. The singular subset is contained in a Poisson ideal of co-dimension greater or equal to one by a result of Polishchuk ([@Pol]). It is well known that the Poisson ideal must be contained in a proper Poisson prime ideal (see also [@ZW; @tpc]). The assertion follows. The assumption that the cluster algebra has even rank is very important. Indeed, Muller has recently shown that the variety corresponding to the cluster algebra of type $A_3$ has a singularity ([@Mu Section 6.2]). Symplectic Structure -------------------- ### Symplectic geometry of Poisson varieties In this section we will recall some well-known properties of the symplectic structure on Poisson varieties. Our discussion follows along the lines of [@brown-goodearl Part III.5]. If $A$ is a Poisson algebra over a field $k$, then each $a\in A$ defines a derivation $X_a$ on $A$ via $$X_a(b)=\{a,b\}\ .$$ This derivation is called the [*Hamiltonian vectorfield*]{} of $a$ on $A$. Now suppose that $A$ is the coordinate ring of an affine complex variety $Y$. We will associate to the Poisson bracket the [*Poisson bivector*]{} $u\in \Lambda^2 T(Y)$ where $T(Y)$ denotes the tangent bundle of $Y$. Let $p\in Y$ be a point and $\mm_p\subset A$ the corresponding maximal ideal. Let $\alpha,\beta\in \mm_p/\mm_p^2$ be elements of the cotangent space and let $f,g$ be lifts of $\alpha$ and $\beta$, respectively. We define $u_p\in \Lambda^2 T_p(Y)$ $$u_p(\alpha,\beta)=\{f,g\}(p)\ .$$ Note that $u_p$ is a well-defined skew-symmetric form. Indeed, if $I\subset A$ is an ideal and $b\in \II^2$, then for all $a\in A$ $$\{a,b\}\in \II$$ by the Leibniz rule. The form $u_P$ may be degenerate, indeed if it is non-degenerate at every point $p\in Y$, then we call $u_P$ symplectic and, moreover, if $Y$ is connected, then $Y$ is smooth and a (holomorphic) symplectic manifold. Define $$N(p)=\{\alpha \in T^*_p Y:u_p(\alpha,\cdot)=0\}\ ,$$ and $H(p)\subset T_p Y$ its orthogonal complement. The space $H(P)$ is the tangent space of the linear span of the Hamiltonian vectorfields at $p$. Recall that by the Theorem of Frobenius, a Poisson variety $Y$ decomposes as a disjoint union of symplectic leaves, maximal symplectic submanifolds. The tangent space of the symplectic leaf at the point $p$ is $H(p)$. ### The Main Theorem Corollary \[co:acyclic smooth\] implies that $X$ is smooth, hence it has the structure of a complex manifold. We have the following result. \[th:symplectic leaves\] Let $\AA$ be an acyclic cluster algebra as defined above, $X$ an affine variety such that $\AA=\CC[X]$ and let $\Lambda$ define a compatible Poisson bracket. Then, $X$ is a holomorphic symplectic manifold. First, let $p\in X$ be a generic point, by which we mean that $x_i(p)\ne 0$ for all $i=[1,\ldots,n]$. Set $x_i(p)=p_i$. 
It is easy to see that the Hamiltonian vectorfield $X_{x_i}$ at $p$ evaluates in the local coordinates $(x_1,\ldots, x_n)$ as $$X_{x_i}(p)= (\lambda_{1i} p_i,\ldots, \lambda_{ni} p_i)\ ,$$ where $\Lambda=(\lambda_{ij})_{i,j=1}^n$. Since $\Lambda$ is non-degenerate, we obtain that the Hamiltonian vectorfields span the tangent space $T_p X$ at $p$. It remains to consider the case when $p_i=0$ for some $i\in[1,n]$. Suppose that $p\in X$ is such that $p_i=0$ and $p_j\ne 0$ for all $j<i$. We have to show that the symplectic leaf containing $p$ is not contained in the hyper-surface $x_i=0$. We may assume, employing induction, that if $p_1,\ldots, p_i\ne 0$ then the symplectic leaf at $p$ has full rank. We now claim that $\{x_i,y_i\}(p)\ne 0$. Indeed, suppose that $\{x_i,y_i\}(p)=(\mu_1 m^+ +\mu_2 m^-)(p)=0$. Since $p_i=0$ implies that $(m^++m^-)(p)=0$, we would conclude as in the proof of Theorem \[th:acyclic\] that $m^+(p)=m^-(p)=0$, but that is a contradiction to our assumption that $p_j\ne 0$ for all $j<i$. Denote by $u\in \Lambda^2 T(Y)$ the Poisson bivector. We obtain that $u_p(\frac{\delta}{\delta x_i},\cdot)\ne 0$, hence the symplectic leaf containing $p$ is not tangent to the hypersurface $x_i=0$ at $p$. It must contain a point in an analytic neighborhood of $p$ at which $x_i(p)\ne 0$ and $x_j(p)\ne 0$ for all $j<i$ by our assumption. We obtain the desired contradiction and, hence, every symplectic leaf has dimension $n$. But the manifold $X$ is connected and, therefore, cannot be a union of disjoint open submanifolds of equal dimension, hence $X$ contains only one symplectic leaf and the theorem is proved. This result can be easily generalized to acyclic cluster algebras with invertible coefficients (using Remark \[re:Class Poisson\]). This would imply that in the set-up of locally acyclic cluster algebras it should be easy to show that the spectrum of a locally acyclic cluster algebra is a holomorphic symplectic manifold. Ideals in Acyclic Quantum Cluster Algebras {#se:ideals in qca} ========================================== \[th:ideals in qclalg\] Let $\AA_q$ be a quantum cluster algebra with quantum seed $({\bf x}, B,\Lambda)$ satisfying Equation \[eq:poisson gen\]. Then, $\AA_q$ does not contain any non-trivial proper prime ideals. Let $\II$ be a prime ideal in $\AA_q$. We obtain from Proposition \[pr:TPP super toric q\] that $\II$ contains a monomial $x^{\bf v}$ with ${\bf v}\in\ZZ_{\ge 0}^n$. It is easy to observe that we can choose ${\bf v}$ to be minimal with respect to the lexicographic order on $\ZZ^n_{\ge 0}$. Recall that the lexicographic ordering defines ${\bf u}<{\bf w}$ for ${\bf u},{\bf w}\in \ZZ^n$ if and only if there exists $i\in [1,n]$ such that $u_i<w_i$ and $u_j=w_j$ if $j>i$. Recall that this defines a total ordering on $\ZZ^n_{\ge0}$. There exists some $i\in[1,n]$ such that $v_i\ne 0$ and $v_k=0$ for all $k>i$, and we write $x^{\bf v}=x^{\bf v'} x_i^{v_i}$ with ${\bf v'}={\bf v}-v_i{\bf e}_i$. Recall that an ideal $\II$ in a non-commutative ring $R$ is prime if $arb\in \II$ for all $r\in R$ implies that $a\in \II$ or $b\in \II$. Now, since $\AA_q$ is acyclic, we know from [@bz-qclust Theorem 7.3 and 7.5] that it is isomorphic to its lower bound. Hence, employing the notation of Definition \[def:lower bounds\], each element $a\in \AA_q$ can be written as $a=\sum_{p=1}^r c_p x^{\bf w}y_i^h y^{\bf w'}$ where $w_kw'_k=w_i h =0$ and $c_p\in\CC[q^{\pm 1}]$ for all $p\in[1,r]$.
Observe that $y_k$ and $x_\ell$ skew-commute if $k\ne \ell$, as they are cluster variables in the cluster ${\bf x}_k$. We now compute that $$x^{\bf v'} ax_i^{j}=\sum_{p=1}^r c_p q^{\lambda_a} x^{\bf w} y_i^h x^{\bf v'} x_i^{v_i} y^{w'}= \sum_{p=1}^r c_p q^{\lambda_p} x^{\bf w} y_i^h x^{\bf v} y^{w'} \in \II\ ,$$ for all $a\in\AA_q$ and certain $\lambda_p\in\ZZ$. Hence, since $\II$ is a prime ideal, $x^{\bf v'}\in \II$ or $x_{i}^{v_i}\in\II$. But ${\bf v'}<{\bf v}$ and $v_i{\bf e}_i\le {\bf v}$, and therefore, $ x_i^{v_i}=x^{\bf v}\in\II$. But we have assumed in Equation \[eq:poisson gen\] that $x_iy_i$ and $y_ix_i$, are not linearly dependent, hence $x_i^{v_i} y_i$ and $y_i x_i^{v_i}$ are not linearly dependent. As in the proof of Theorem \[th:acyclic\], we now argue that $x_i^{v_i-1 }m_i^{+}\in \II\ $, where $P_i=m_i^++m_i^-$ (see Section \[se:Poisson and symp geom\]). But this is a contradiction to our minimality assumption. The theorem is proved. Theorem \[th:ideals in qclalg\] and Theorem \[th:symplectic leaves\] have the following immediate corollary. Let $({\bf x}, B,\Lambda)$ be an acyclic Poisson or quantum seed, as defined above. Then the space of primitive ideals in the quantum cluster algebra (one point corresponding to the $0$ ideal) and the space of symplectic leaves (also just one point) are homeomorphic. [x\*x\*x]{} A. Berenstein, S. Fomin and A.  Zelevinsky, Cluster Algebras III: Upper Bounds and Double Bruhat Cells. Duke Math . J.  126 (2005), no.  1, 1–52. A.  Berenstein and A.  Zelevinsky, Quantum cluster algebras, Adv. Math.  195 (2005), no. 2, 405–455. 3429–3472. K. Brown and K. Goodearl, Lectures on algebraic quantum groups, Birkhäuser, 2002. S. Fomin, M. Shapiro and D. Thurston, Cluster algebras and triangulated surfaces. I. Cluster complexes. Acta Math.  201 (2008), no. 1, S. Fomin and D. Thurston, Cluster algebras and triangulated surfaces. Part II: Lambda lengths, http://www.math.lsa.umich.edu/ fomin/papers.html. S. Fomin and A. Zelevinsky, Cluster Algebras I: Foundations, J. Amer. Math. Soc.  15 (2002), no. 2, 497–529. C. Geiß, B. Leclerc and J.  Schröer, Cluster Structures on Quantum Coordinate Rings, preprint, arxiv:1104.0531 M. Gekhtman, M. Shapiro and A. Vainshtein, Cluster Algebras and Poisson Geometry, Mosc. Math. J. 3 (2003), no. 3, 899–934. K. Goodearl, Semiclassical limits of quantized Coordinate Rings, Advances in Ring Theory (D. V.  Huynh and S.  Lopez-Permouth, Eds.), Basel (2009) Birkäuser, pp.  165-204. G. Muller, The Weil-Petersson form on an acyclic cluster variety, Int. Math. Res. Not., doi:10.1093/imrn/rnr155. G. Muller, Locally acyclic cluster algebras, preprint arxiv:arXiv:1111.4468. G. Musiker, R. Schiffler and L. Williams, Positivity for cluster algebras from surfaces, Advances in Mathematics, Vol. 227, Issue 6 (2011), 2241–2308. A. Polishchuk, Algebraic geometry of Poisson brackets, Algebraic geometry, 7. J. Math. Sci. (New York) 84 (1997), no. 5, 1413–1444. Y. Soibelman, Orbit Method for the Algebras of Functions of Quantum Groups and Coherent States. I., Int. Math. Res. Not. 6 (1993), 151–163. M. Yakimov, On the Spectra of Quantum Groups, preprint, arxiv:1106.3821. S. Zwicknagl, Toric Poisson Ideals in Cluster Algebras, preprint, arxiv:1009.2936.
--- abstract: 'We establish the global well-posedness of overdamped dynamical density functional theory (DDFT): a nonlinear, nonlocal integro-partial differential equation used in statistical mechanical models of colloidal flow and other applications including nonlinear reaction-diffusion systems and opinion dynamics. With no-flux boundary conditions, we determine the well-posedness of the full nonlocal equations including two-body hydrodynamic interactions (HI) through the theory of Fredholm operators. Principally, this is done by rewriting the dynamics for the density $\varrho$ as a nonlocal Smoluchowski equation with a non-constant diffusion tensor $\bm{D}$ dependent on the diagonal part ($\bm{Z}_1$) of the HI tensor, and an effective drift $\bm{A}[\vec{a}]$ dependent on the off-diagonal part ($\bm{Z}_2$). We derive a scheme to uniquely construct the mean colloid flux $\vec{a}(\vec{r},t)$ in terms of eigenvectors of $\bm{D}$, show that the stationary density $\varrho(\vec{r})$ is independent of the HI tensors, and prove exponentially fast convergence to equilibrium. The stability of the equilibria $\varrho(\vec{r})$ is studied by considering the bounded (nonlocal) perturbation of the differential (local) part of the linearised operator. We show that the spectral properties of the full nonlocal operator with no-flux boundary conditions can differ considerably from those with periodic boundary conditions. We showcase our results by using the numerical methods available in the pseudo-spectral collocation scheme 2DChebClass.' author: - '$^\dagger$,' - '[^1] ,' - '[^2]' bibliography: - 'bibYear3.bib' title: '****' --- dynamic density functional theory (DDFT), colloids, overdamped limit, hydrodynamic interactions, nonlocal-differential PDEs, interacting particle systems, McKean-Vlasov equation, phase transitions, bifurcation theory. 60F10; 60J75; 62P10; 92C37 [^3]

Introduction {#intro}
============

For suspended particles in a viscous fluid, the Navier-Stokes equations are not sufficient to model flows on a spatial scale comparable with the size of the individual particles. Instead, one requires a computationally tractable model that captures meso/macro-scale dynamics whilst also including physical effects driven by particle-level interactions. Dynamic density functional theories (DDFTs) are excellent candidates for modelling such systems [@marconi1999dynamic; @ArcherEvans04]. They are typically applied in condensed matter physics in the colloidal particle regime with particles of typical diameters 1nm$-$1$\mu$m. Recent advances have allowed the inclusion of inertia [@MarconiTarazonaCecconiMelchionna07; @archer2009dynamical], multiple species [@Archer05; @RothRauscherArcher09; @GNK13; @LichtnerArcherKlapp12], hydrodynamic interactions (HI) [@rex2009dynamical; @Rauscher10; @goddard2012general; @goddard2012unification], background flows [@RauscherDominguezKrugerPenna07], temperature gradients [@WittkowskiLowenBrand12; @anero2013functional], hard spheres [@Rosenfeld89; @RothEvansLangKahl02; @roth2010fundamental; @StopperMaroltRothHansen-Goos15], confined geometries [@goddard2016dynamical; @zimmermann2016flow], arbitrarily shaped particles [@WittkowskiLowen11], and active microswimmers [@menzel2016dynamical; @hoell2017dynamical]. For equilibrium fluids, there is a rigorous mathematical framework proving the existence of nontrivial fluid densities, different from those found by classical fluid dynamical formalisms, by taking into account both many body effects and external force fields.
This is commonly known as (classical) density functional theory (DFT) [@Mermin:1965lo]. It is able to predict effects driven by the microscale, e.g., the non-smooth droplet profiles which are formed at the gas-liquid-solid trijunction in contact line problems [@berim2009simple] and the coexistence of multiple fluid films at critical values of the chemical potential energy in droplet spreading [@pereira2012equilibrium]. It has been used to resolve the paradox of stress and pressure singularities normally found in classical moving contact line problems [@sibley2013contact]. What is more, DFT agrees well with molecular dynamics simulations; see, e.g., [@Lutsko10] and references therein. These advancements motivate more mathematical analysis, in particular, on the well-posedness of the underlying equations being used and on the number and structure of equilibrium states. As a non-equilibrium extension to DFT for classical fluids, dynamic DFT (DDFT) has been applied to a wide range of problems: polymeric solutions [@PennaDzubiellaTarazona03], spinodal decomposition [@ArcherEvans04], phase separation [@Archer05], granular dynamics [@MarconiMelchionna07; @MarconiTarazonaCecconi07], nucleation [@vanTeeffelenLikosLowen08], liquid crystals [@WittkowskiLowenBrand10], and evaporating films [@ArcherRobbinsThiele10]. Recently, a stochastic version of DDFT has been derived [@Lutsko12], which allows the study of energy barrier crossings, such as in nucleation. A crucial point is that the computational complexity of DDFT is (essentially) constant in the number of particles, which allows the treatment of macroscopically large systems, whilst retaining microscopic information. Furthermore, due to the universality of the underlying nonlinear, nonlocal partial differential equations, DDFT may be considered as a generalisation of a wider class of such models used in the continuum modelling of many natural phenomena consisting of complex, many body, multi-agent interparticle effects including: pattern formation [@camazine2003self], the flocking of birds, cell proliferation, the self organising of morphogenetic and bacterial species [@canizo2010collective; @carrillo2009double], nonlocal reaction-diffusion equations [@al2018dynamical] and even consensus modelling in opinion dynamics[@chazelle2017well]. Many of these applications are often described as systems of interacting (Brownian) particles and, in the case of hard particle viscous suspensions, bath-mediated HI effects may be included. The HI are forces on the colloids mediated by the bath flow, generated by the motion of the colloidal particles. This in turn produces a nontrivial particle–fluid–particle hydrodynamic phenomenon, the inclusion of which has been shown to have substantial effects on the physics of many systems; for example, they have been found to be the underlying mechanism for the increased viscosity of suspensions compared to a pure bath [@Einstein06], the blurring of laning that arises in driven flow [@WysockiLowen11], the migration of molecules away from a wall [@HodaKumar07], and are particularly complex in confined systems [@happel2012low; @LaugaSquires05], and for active particles and microswimmers, which result in additional HI [@HuberKoehlerYang11]. Mathematically, the inter-particle forces and HI can be described through the hydrodynamic fields $\varrho$ and $\vec{v}$, the one-body density and one-body velocity fields, respectively. 
These fields, inherent to a continuum description of a collection of particles, are derived by considering successive moments (density, velocity, heat flux, …) of the underlying kinetic system [@gorban2014hilbert]. In particular, for systems of interacting Newtonian particles, when the momenta are non-negligible, the evolution of the phase space density $f(\vec{r}^N,\vec{p}^N, t)$ for a system of $N$ colloids determining the probability of finding the system in the state $(\vec{r}^N,\vec{p}^N)$ at time $t$ is described by the $N$-body Fokker-Planck equation and the dynamics of the hydrodynamic fields are defined by obtaining closed equations for $\{\varrho, \varrho\times \vec{v}\} := \int \mathrm{d}\vec{r}^{N-1}\,\mathrm{d}\vec{p}^N\, \{1, \vec{p}/m\}f(\vec{r}^N,\vec{p}^N, t)$, where $m$ is the particle mass. Here, $\vec{r}^N$ and $\vec{p}^N$ denote the $3N$-dimensional position and momentum vectors of all $N$ particles. The inclusion of HI leads to a much richer hierarchy of fluid equations compared to systems without HI; compare e.g. [@goddard2012unification] and [@archer2009dynamical]. In particular, see e.g. [@goddard2012unification], by integration over all but one particle position, the one-body Fokker-Planck equation may be obtained. If, in addition, two-body HI and interparticle interactions are assumed and the inertia of the colloids is considered small, a high friction limit $\gamma\to \infty$ may be taken [@goddard2012overdamped]. The result is that the velocity distribution converges to a Maxwellian, and one can eliminate the momentum variable through an adiabatic elimination process that is based on multiscale analysis [@pavliotis2008multiscale]. The final one-body Smoluchowski equation for $\varrho$ is a novel, nonlinear, nonlocal PDE shown to be independent of the unknown kinetic pressure term $\int \mathrm{d}\vec{r}\,\mathrm{d}\vec{p}\, m^{-2} \vec{p}\otimes\vec{p} f(\vec{r},\vec{p}, t)$, which normally persists at $\gamma = O(1)$ (see [@goddard2012overdamped], Theorem 4.1). Existence, uniqueness and global asymptotic stability of the novel Smoluchowski equation in this overdamped limit have, until this work, remained unproven. It is the inclusion of HI that provides richness through additional nonlinearities in both the dissipation and convection terms. The inclusion of HI is interesting from both physical and mathematical standpoints. Physically, as above, the HI give rise to a much more complex evolution in the density. Mathematically, the convergence to equilibrium will depend inherently on the spectral properties of the effective diffusion tensor and effective drift vector arising from the HI. What is more, since the full $N$-body Fokker-Planck equation is a PDE in a very high dimensional phase space, well-posed nonlinear, nonlocal PDEs governing the evolution of the one-particle distribution function, valid in the mean field limit, describing the flow of nonhomogeneous fluids are desirable for computational reasons. The equations studied in this paper are related to the McKean-Vlasov equation [@chayes2010mckean], a nonlinear nonlocal PDE of Fokker-Planck type that arises in the mean-field limit of weakly interacting diffusions. The novelty of the present problem lies in the space dependent diffusion tensor and nonlinear, nonlocal boundary conditions.
Additionally, the problem that we study in this paper may in general not be written as a gradient flow, with the exception of the modelling assumption that the off-diagonal elements of the friction tensor $\bm{\Gamma}$ are zero. This choice is equivalent to setting $\bm{Z}_2$ to zero, and would be physically relevant for a diffuse system of particles with a strong hydrodynamic interaction with a wall but weak inter-particle hydrodynamic interactions [@goddard2016dynamical].

Description of the Model.
-------------------------

In this work we analyse the overdamped partial differential equation (PDE) associated to a system of interacting stochastic differential equations (SDEs) on an open, bounded subset $U$ of $\mathbb{R}^d$, of the following form, governing the positions $\vec{r}_i$ and momenta $\vec{p}_i$ of $i = 1,\dots, N$ colloidal particles immersed in a bath of many more, much smaller and much lighter particles: $$\begin{aligned} \frac{\mathrm{d}\vec{r}_i}{\mathrm{d}t} &= \frac{1}{m}\vec{p}_i,\label{eq:SDE_overdamped_pos}\\ \frac{\mathrm{d}\vec{p}_i}{\mathrm{d}t} &= -\nabla_{\vec{r}_i} V(\vec{r}^N,t)-\sum_{j=1}^{N}\boldsymbol{\Gamma}_{ij}(\vec{r}^N)\vec{p}_j + \sum_{j=1}^N\boldsymbol{B}_{ij}(\vec{r}^N)\vec{f}_{j}(t) \label{eq:SDE_overdamped_mom}\end{aligned}$$ where $\vec{r}^N = (\vec{r}_1,\cdots,\vec{r}_N)$, $\boldsymbol{B} = \left( mk_BT\boldsymbol{\Gamma}\right)^{1/2}$, $\boldsymbol{\Gamma} = \gamma (\bm{1} + \tilde{\boldsymbol{\Gamma}})$ (where the tilde denotes the nondimensional tensor and $\bm{1}$ is the $3N\times 3N$ identity matrix), $V$ is a potential, $k_B$, $\mathrm{T}$, $\gamma$ are Boltzmann’s constant, temperature and friction, respectively, and $\vec{f}_i(t) = (\zeta^x_i(t),\zeta^y_i(t),\zeta^z_i(t))^\top$ is a Gaussian white noise term with mean and correlation given by $\langle \zeta_i^a(t)\rangle = 0$ and $\langle\zeta_i^a(t)\,\zeta_j^b(t') \rangle = 2\delta_{ij}\delta^{ab}\delta(t-t')$. In $d = 3$ dimensions, the friction tensor $\boldsymbol{\Gamma}$ comprises $N^2$ positive definite $3\times 3$ mobility matrices $\boldsymbol{\Gamma}_{ij}$ for the colloidal particles. These couple the momenta of the colloidal particles to HI forces on the same particles, mediated by fluid flows in the bath. Typically, in the underdamped limit with dense suspensions, the HI may be short range lubrication forces, whereas in disperse systems in the overdamped limit, the HI are taken to be the long range forces given by the Rotne-Prager-Yamakawa tensor [@rotne1969variational]. However, we do not make any such assumptions on the form of the tensors here. We have described a general set of coupled Langevin equations with spatially-dependent friction tensor $\boldsymbol{\Gamma}(\vec{r}^N)$. As we will see, the dynamics tend towards an equilibrium given by the Gibbs probability measure, which we will show to be independent of the friction tensor. Instead of computing the trajectories of individual particles we consider the evolution of the density of particles $\varrho(\vec{r},t)$ given by the Smoluchowski equation in the high friction limit $\gamma\to \infty$, $$\begin{aligned} \label{eq:mk-eq} \qquad \partial_{t}\varrho(\vec{r},t) =-\tfrac{k_B\mathrm{T}}{m\gamma} \nabla_{\vec{r}}\cdot\vec{a}(\vec{r},[\varrho], t) \qquad \text{ for } \vec{r} \in U,\,t\in [0,T]\end{aligned}$$ where $\vec{a}(\vec{r},[\varrho], t)$ is the flux, $[\varrho]$ denotes functional dependence, $U\subseteq\mathbb{R}^d$ and $T<\infty$.
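The interacting Langevin system above can also be simulated directly, which provides a useful sanity check against the one-body equation. The following is a minimal Euler–Maruyama sketch in $d = 1$; the confining potential, the pairwise interaction and the Gaussian kernels used to build $\tilde{\boldsymbol{\Gamma}}$ are illustrative assumptions only, and are not the choices made in this paper (which instead works at the level of the PDE, e.g. via 2DChebClass).

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the interacting Langevin system above, in d = 1.
# The potential gradients and the HI kernels below are illustrative assumptions only.
rng = np.random.default_rng(0)
N, m, kT, gamma = 16, 1.0, 1.0, 10.0
dt, nsteps = 1.0e-4, 10_000

def grad_V(r):
    # assumed potential: harmonic confinement plus a soft Gaussian pair repulsion
    diff = r[:, None] - r[None, :]
    return r + np.sum(diff * np.exp(-diff**2), axis=1)

def Gamma(r):
    # Gamma = gamma * (1 + tilde_Gamma) with toy Gaussian kernels playing the roles
    # of the diagonal and off-diagonal HI blocks
    diff = r[:, None] - r[None, :]
    z1 = 0.10 * np.exp(-diff**2); np.fill_diagonal(z1, 0.0)
    z2 = 0.05 * np.exp(-diff**2); np.fill_diagonal(z2, 0.0)
    return gamma * (np.eye(N) + np.diag(z1.sum(axis=1)) + z2)

r, p = rng.normal(size=N), np.zeros(N)
for _ in range(nsteps):
    G = Gamma(r)
    B = np.linalg.cholesky(m * kT * G)          # one admissible square root of m kT Gamma
    f = rng.normal(size=N) * np.sqrt(2.0 / dt)  # discretised white noise with <f f'> = 2 delta
    p = p + dt * (-grad_V(r) - G @ p + B @ f)
    r = r + dt * p / m
```

In the high-friction regime (large $\gamma$), histograms of particle positions generated by such a simulation can be compared with solutions of the one-body equation above.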
Equation was derived rigorously as a solvability condition of the corresponding Vlasov-Fokker-Planck equation for the one-body density in position and momentum space $f(\vec{r},\vec{p},t)$ by writing $f$ as a Hilbert expansion in a small nondimensional parameter $\epsilon\propto\gamma^{-1}$ [@goddard2012overdamped]. Therein, $\epsilon$ has units length, and therefore a problem specific length scale must be introduced to make it truly nondimensional. We are interested in global existence, uniqueness, positivity and regularity of the weak solution to when $\vec{a}(\vec{r},t)$ is given by the integral equation $$\begin{aligned} \vec{a}(\vec{r},t) + \boldsymbol{H}[\vec{a},\varrho](\vec{r},t)+\frac{\varrho(\vec{r},t)}{k_BT}\bm{D}(\vec{r},[\varrho],t)\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho](\vec{r},t) =0,\label{eq:eqn_for_a}\end{aligned}$$ $$\begin{aligned} \boldsymbol{H}[\vec{a},\varrho](\vec{r},t):= \varrho(\vec{r},t)\bm{D}(\vec{r},[\varrho],t)\int_U\mathrm{d}\vec{r}'\, g(\vec{r},\vec{r}')\boldsymbol{Z}_2(\vec{r},\vec{r}')\vec{a}(\vec{r}',t),\label{eq:eqn_for_H}\end{aligned}$$ $$\begin{aligned} &\frac{\varrho(\vec{r},t)}{k_BT}\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho](\vec{r},t) := [\nabla_{\vec{r}}+\tfrac{1}{k_B\mathrm{T}}\Big(\nabla_{\vec{r}}V_1(\vec{r},t)\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad+\int_U\mathrm{d}\vec{r}'\varrho(\vec{r}',t)g(\vec{r},\vec{r}')\nabla_{\vec{r}}V_{2}(\vec{r},\vec{r}')\Big) ]\varrho(\vec{r},t),\label{eq:eqn_for_J}\end{aligned}$$ where to ease notation we have suppressed $[\varrho]$ in the argument of $\vec{a}$ and $\mathcal{F}$ is the free energy functional which will be defined in Section \[subsec:free\_energy\_framework\]. The functions $V_1$ and $V_2$ are the external and (two body) interparticle potentials respectively. Additionally, the non-constant diffusion tensor $$\begin{aligned} \label{eq:def_diffusion_tensor} \boldsymbol{D}(\vec{r},[\varrho],t):=\frac{k_{\text{B} }\mathrm{T}}{m \gamma}\Big[\boldsymbol{1}+\int\mathrm{d}\vec{r}'g(\vec{r},\vec{r}')\varrho(\vec{r}',t)\boldsymbol{Z}_1(\vec{r},\vec{r}')\Big]^{-1}\end{aligned}$$ will be considered; this is interesting from a physical point of view. It has been previously shown (see [@goddard2012overdamped]) that for $\boldsymbol{Z}_1$ being positive definite, $\boldsymbol{D}$ is also positive definite and therefore has positive, finite eigenvalues. The term $g(\vec{r},\vec{r}')$ (regarded as known) is the correlation function defined by the two-body density $\varrho^{(2)}(\vec{r},\vec{r}',t) =g(\vec{r},\vec{r}')\varrho(\vec{r},t)\varrho(\vec{r}',t)$ and the operator $\boldsymbol{H}[\cdot]$ describes terms corresponding to HI. We note that if $\boldsymbol{D}$ were positive semidefinite, a zero eigenvalue of $\boldsymbol{D}$ is permitted, which physically-speaking would amount to the colloidal system possessing a zero diffusion rate in some subset of $U$ with nonzero measure. Such systems are interesting (for example, in many biological systems the physical domain $U$ could be a substrate including cuts, voids or interior walls) but are not considered in this paper. Throughout this work the largest and smallest eigenvalues of $\boldsymbol{D}$ will be denoted $\mu_{\max}$ and $\mu_{\min}$, respectively. 
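The non-constant diffusion tensor just defined is straightforward to evaluate numerically once $g$, $\boldsymbol{Z}_1$ and a density $\varrho$ are prescribed. The sketch below does this by simple quadrature on a two-dimensional grid and extracts $\mu_{\min}$ and $\mu_{\max}$; the kernel, correlation function and density used are illustrative assumptions, not choices made in this paper.

```python
import numpy as np

# Sketch: quadrature evaluation of D(r,[rho]) = [I + int dr' g(r,r') rho(r') Z1(r,r')]^{-1}
# on a grid in U = [0,1]^2, with k_B T / (m gamma) set to one. All kernels are assumed forms.
n = 20
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel()])
w = (x[1] - x[0]) ** 2                                     # uniform quadrature weight

rho = np.exp(-8.0 * np.sum((pts - 0.5) ** 2, axis=1))      # assumed density
rho /= rho.sum() * w                                       # normalise: int rho = 1

def Z1(r, rp):
    # assumed positive definite 2x2 kernel (smooth, so g = 1 is admissible here)
    return 0.3 * np.exp(-np.sum((r - rp) ** 2)) * np.eye(2)

def D_at(r):
    integral = sum(w * rho_k * Z1(r, rp) for rp, rho_k in zip(pts, rho))  # g = 1 assumed
    return np.linalg.inv(np.eye(2) + integral)

eigs = np.array([np.linalg.eigvalsh(D_at(r)) for r in pts])
mu_min, mu_max = eigs.min(), eigs.max()                    # the extreme eigenvalues of D
print(mu_min, mu_max)
```

Since $\boldsymbol{Z}_1$ above is positive definite, the computed eigenvalues are strictly positive, consistent with the positive definiteness of $\boldsymbol{D}$ noted in the text.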
Furthermore, for two-body HI, $\boldsymbol{Z}_1$, $\boldsymbol{Z}_2$ are the diagonal and off-diagonal blocks respectively of the translational component of the grand resistance matrix originating in the classical theory of low Reynolds number hydrodynamics between suspended particles [@happel2012low], [@jeffrey1984calculation], related to the friction tensor by $$\begin{aligned} \tilde{\boldsymbol{\Gamma}}_{ij}(\vec{r}^N) = \delta_{ij}\sum_{l\neq i}\boldsymbol{Z}_{1}(\vec{r}_i,\vec{r}_l)+(1-\delta_{ij})\boldsymbol{Z}_{2}(\vec{r}_i,\vec{r}_j).\end{aligned}$$ In $d = 3$ dimensions, and for the particular case $N=2$ (where $N$ is the number of particles in the system), $\bm{\Gamma}\in \mathbb{R}^{6\times 6}$ and $\bm{\Gamma}_{ij}$ may be seen as equivalent to the second-rank tensor of the translational part of the resistance matrix as found in [@jeffrey1984calculation] used to model lubrication forces. It should be noted however that the definition of those resistance matrices are formalism dependent, that is, the individual entries are scalar functions arising from the solution of Stokes equations for two-body lubrication interactions using multipole methods. Conversely, $\bm{\Gamma}_{ij}$ are general tensors, independent of the type of HI under consideration, and are therefore a more general representation of hydrodynamic phenomena of colloidal suspensions. Additionally, $\bm{\Gamma}_{ij}$ may be used to model not just lubrication forces between particles but also long range forces, wall effects and more. In the case of inter-particle HI, the diagonal blocks $\bm{\Gamma}_{ii}$ each represent the force exerted on the fluid due to the motion of particle $i$, which is simply the sum of all the pairwise HI from the perspective of particle $i$. The off-diagonal blocks $\bm{\Gamma}_{ij}$ represent the force on particle $i$ due to the motion of particle $j$. The stationary equations for the equilibrium density $\varrho(\vec{r})$ and equilibrium flux $\vec{a}(\vec{r})$ are given by $$\begin{aligned} &\qquad\qquad\qquad\qquad\nabla_{\vec{r}}\cdot \vec{a}(\vec{r}) = 0,\label{eq:div_a_0}\\ &\vec{a}(\vec{r}) + \boldsymbol{H}[\vec{a},\varrho](\vec{r})+\frac{\varrho(\vec{r},t)}{k_BT}\bm{D}(\vec{r},[\varrho],t)\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho](\vec{r}) =0.\label{eq:eqn_for_a_in_equilibrium} \end{aligned}$$ Note that given a finite flux vector $\vec{a}$ solving -, it is not obvious that $\varrho$ is necessarily a minimiser of the free energy $\mathcal{F}-\int_U\mathrm{d}\vec{r}\,\mu_{c}\varrho$ (where $\mu_{c}$ is the chemical potential of the species). However, for the particular choice $\vec{a}\equiv \vec{0}$ (which is a natural and physically realistic solution), $\varrho$ is necessarily a minimiser of $\mathcal{F}-\int_U\mathrm{d}\vec{r}\,\mu_{c}\varrho$, and we will show that under reasonable assumptions these are indeed the only fixed points of the system. Previous well-posedness studies of similar nonlinear, nonlocal PDEs focused on periodic boundary conditions; see, e.g., [@chazelle2017well; @greg_mckean_vlasov_torus]. In contrast, we are interested in the well-posedness of , - subject to no-flux boundary conditions. This choice admits the nontrivial effect of the two body forces generated by the potential $V_2$ interacting with density on the boundary of the physical domain. We also seek to understand the asymptotic stability of stationary states. 
The motivation for this choice of boundary condition is physical; it corresponds to a closed system of particles in which the particle number is conserved over time. It is clear that most applications of such equations will be in confined systems, rather than a periodic domain and, as such, no-flux boundary conditions are natural. We note that the choice of boundary condition is expected to have significant effects on the dynamics, including the form of the bifurcation diagram. Free Energy Framework. {#subsec:free_energy_framework} ---------------------- Related to the system -, we define the free energy functional $\mathcal{F}:P^+_{\text{ac}}(U)\to \mathbb{R}$ where $P^+_{\text{ac}}$ is the set of strictly positive definite absolutely continuous probability measures on $U$. We define $$\begin{aligned} \label{eq:def_of_F} \mathcal{F}[\varrho]&:=\int_U\mathrm{d}\vec{r}\,\varrho(\vec{r},t)\log\varrho(\vec{r},t)+\int_U\mathrm{d}\vec{r}\,\varrho(\vec{r},t)\,\Big[V_1(\vec{r},t)+\tfrac{1}{2}(gV_2)\star\varrho \Big],\end{aligned}$$ where $\star$ denotes convolution in space. Here we assume the probability measure $\varrho$ has density with respect to the Lebesgue measure. Additionally we define the probability measure on $U$ $$\begin{aligned} \label{eq:def_of_measure} \mu(\mathrm{d}\vec{r}) = \mathrm{d}\vec{r}\,Z^{-1}e^{-\tfrac{(V_1+(gV_2)\star\varrho)}{k_BT}}\end{aligned}$$ where $Z = \int_U \mathrm{d}\vec{r}\,e^{-\tfrac{(V_1+(gV_2)\star\varrho)}{k_BT}}$ and $\varrho$ (when it exists) satisfies the nonlinear equation $$\varrho = Z^{-1}e^{-\tfrac{(V_1+(gV_2)\star\varrho)}{k_BT}}.$$ The existence of a probability density $\varrho$, and therefore a probability measure $\mu$ in , is obtained by Lemma \[thm:exis\_fix\_point\]. The functional $\mathcal{F}$ gives rise to the density minimising the free energy associated to the system - as $\gamma \to \infty$, which will be shown in Theorem \[thm:association \_of\_free\_energy\]. To make the connection between the free energy functional $\mathcal{F}$ in and the theory of non-uniform classical fluids, one may consider the Helmholtz free energy functional, which is the central energy functional of DFT [@evans1979nature] $$\begin{aligned} \label{eq:ddft-helmholtz_func} \mathcal{F}_{H}[\varrho] = \int_U\mathrm{d}\vec{r}\,\varrho(\vec{r},t)V_1(\vec{r},t)+ k_BT\int_U\mathrm{d}\vec{r}\,\varrho(\vec{r},t)[\log(\Lambda^3\varrho(\vec{r},t))-1]+\mathcal{F}_\text{ex}[\varrho]\end{aligned}$$ where $\mathcal{F}_{\text{ex}}$ is the excess over ideal gas term and $\Lambda$ the de Broglie wavelength, which turns out to be superfluous. The term $\mathcal{F}_{\text{ex}}$ is not in general known, the exception being for one dimensional hard rods [@percus1976equilibrium]. Using the free energy functional $\mathcal{F}_H$, the corresponding Euler-Lagrange equation is $$\begin{aligned} \label{eq:chemical_potential_euler_lagrange_eqn} \mu_{c}=V_1(\vec{r}) + k_BT [\log (\Lambda^3\varrho(\vec{r}))-1]+\tfrac{\delta\mathcal{F}_{\text{ex}}}{\delta\rho}[\varrho]\end{aligned}$$ where $\mu_{c}$ is the chemical potential which is constant at equilibrium. Note that $\mu_c$ should not be confused with the measure $\mu$ defined in . 
After taking the gradient of and multiplying by $\varrho$ we obtain $$\begin{aligned} 0=\varrho(\vec{r})\nabla_{\vec{r}}\tfrac{\delta\mathcal{F}_H}{\delta\rho}[\varrho]=k_BT\nabla_{\vec{r}}\varrho+\varrho(\vec{r})\nabla_{\vec{r}}\Big(V_1(\vec{r})+\tfrac{\delta\mathcal{F}_{\text{ex}}}{\delta\rho}[\varrho]\Big).\end{aligned}$$ At equilibrium, the sum rule holds (see, e.g. [@goddard2012unification]) $$\begin{aligned} \label{eq:excess_free_energy_equilibrium_sum} \varrho(\vec{r})\nabla_{\vec{r}}\tfrac{\delta\mathcal{F}_{\text{ex}}}{\delta\varrho}[\varrho]=\sum_{n=2}^N\int\mathrm{d}\vec{r}^{n}\nabla_{\vec{r}}V_n(\vec{r}^n)\varrho_n(\vec{r}^n),\end{aligned}$$ where $\varrho_n(\vec{r}^n)$ is the standard $n$-particle configuration distribution function in equilibrium. Limiting the particle interactions to two-body, for example with the approximation $\varrho_2(\vec{r},\vec{r}') = \varrho(\vec{r})\varrho(\vec{r}')g(\vec{r},\vec{r}',[\varrho])$, we take the first term in the above series to obtain the equality $\nabla_{\vec{r}}\mathcal{F}_H[\varrho]= \nabla_{\vec{r}}\mathcal{F}[\varrho]$. In this way we see that the density minimising $\mathcal{F}_H$ will minimise $\mathcal{F}$. When $\boldsymbol{Z}_2\equiv 0 $, and by using the adiabatic approximation that holds out of equilibrium, we note that the PDE simplifies to (cf. [@rex2009dynamical]) $$\begin{aligned} \label{eq:ddft-eq} \partial_{t}\varrho = \nabla_{\vec{r}}\cdot \left[\boldsymbol{D}(\vec{r},t)\varrho(\vec{r},t)\,\nabla_{\vec{r}}\tfrac{\delta \mathcal{F}}{\delta \varrho}[\varrho]\right].\end{aligned}$$ From we conclude that the dynamics under the choice $\bm{Z}_2 \equiv 0$ has a gradient flow structure. When $\bm{Z}_2$ is not necessarily zero, one cannot in general write the full dynamics as a gradient flow and, hence, the inclusion of HI introduces a novel perturbation away from the classical theory of gradient flow structure. Additionally, one sees how the free energy functional gives rise to the concept of a local pressure variation by the term inside the divergence of . In particular, the term $\tfrac{k_\text{B}\mathrm{T}}{m}\varrho(\vec{r},t)\,\nabla_{\vec{r}}\tfrac{\delta \mathcal{F}}{\delta \varrho}[\varrho]$ represents the spatial variation of the energy available to change particle configurations per unit volume at fixed particle number; in other words, it is an analogue of a local pressure gradient for the particle density. We will show that $\mathcal{F}[\varrho]$ is associated to the PDE even when $\bm{Z}_2 \neq 0$, that is $\partial_t \varrho = 0$ implies $\varrho$ is a critical point of $\mathcal{F}$. (A simple quadrature evaluation of $\mathcal{F}$ is sketched after the figure below.)

[0.48]{} ![The bifurcation diagrams for (a) $V_2(x,y) = x\cdot y$ and (b) $V_2(x,y) = -\cos\left(\frac{2\pi(x-y)}{\mathrm{L}}\right)$ in Section \[subsec:numericalexperiments\]: the solid [blue]{} line denotes the stable branch of solutions while the dotted [red]{} line denotes the unstable branch of solutions. In (a) the stationary density $e^{-x^2}/Z$ changes stability at the critical interaction energy $\kappa_2 = \kappa_{2\sharp}= -2.4$ and the new stable density is asymmetric, adhering to one wall (Figure \[fig:bif\_fig\_right\]). In (b), in the absence of a confining potential, the uniform density becomes unstable at the critical interaction energy $\kappa_2 = \kappa_{2\sharp} = 0.4$ and the density may become multi-modal (Figure \[fig:bif\_stable\_equilibria\_V2\_cosine\]).[]{data-label="fig:bifurcation_diagram"}](bifurcation_diagram_for_Phi2.pdf "fig:"){width="\textwidth"}   [0.48]{} ![Panel (b), $V_2(x,y) = -\cos\left(\frac{2\pi(x-y)}{\mathrm{L}}\right)$; see the shared caption above.](bifurcation_diagram_for_Phi2_cosine.pdf "fig:"){width="\textwidth"}
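As referenced above, the free energy functional can be evaluated by quadrature once the potentials are fixed. The following sketch does this in one dimension for the example pair potential $V_2(x,y) = x\cdot y$ used in the figure, with $g\equiv 1$; the confining potential and the trial density are illustrative assumptions.

```python
import numpy as np

# Sketch: quadrature evaluation of F[rho] = int rho log rho + int rho (V1 + 0.5 (g V2) * rho)
# on a 1D grid, with g = 1 and the example pair potential V2(x, y) = x * y from the figure.
n = 400
x = np.linspace(-2.0, 2.0, n)

V1 = 0.5 * x**2                                            # assumed confining potential
rho = np.exp(-x**2)                                        # assumed trial density
rho /= np.trapz(rho, x)                                    # normalise: int rho dx = 1

# ((g V2) * rho)(x_i) = int dy V2(x_i, y) rho(y) = x_i * int dy y rho(y)
conv = np.array([np.trapz(xi * x * rho, x) for xi in x])

F = np.trapz(rho * np.log(rho), x) + np.trapz(rho * (V1 + 0.5 * conv), x)
print(F)
```

Evaluating $\mathcal{F}$ along a computed trajectory of the density gives a direct numerical check of the dissipation property in the gradient-flow case $\bm{Z}_2\equiv 0$.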
Description of Main Results and Organisation of the Paper.
----------------------------------------------------------

### Main Results {#main-results .unnumbered}

The main results of this work are threefold.

1. We establish existence and uniqueness of weak solutions to DDFTs including two-body HI governed by equations , - with no-flux boundary conditions.

2. We derive *a priori* convergence estimates of the density $\varrho(\vec{r},t)$ to equilibrium in $L^2$ and relative entropy.

3. We study the stability of equilibrium states and construct bifurcation diagrams for two numerical applications.

These results are of particular interest for physical applications of colloidal systems where conservation of mass is either a desirable or necessary property of the system. Additionally, the stability theorem contrasts with simpler linear stability analyses of similar systems of gradient flow structure with periodic boundary conditions [@martzel2001mean], [@greg_mckean_vlasov_torus], which may be tackled by means of Fourier analysis.

### Organisation of the Paper {#organisation-of-the-paper .unnumbered}

The paper is organised as follows: in Section \[sec:preliminaries\] we present the boundary and initial conditions, introduce the main notation, nondimensionalise the main equations, state the stationary equation for the density, define the weak formulation of the Smoluchowski equation including full HI and provide a list of assumptions. In Section \[sec:statement\_of\_main\_results\] we state the main results of the present work in a precise manner. In Section \[sec:ex\_uni\_full\_HI\] we provide an existence and uniqueness theorem for the flux $\vec{a}$ when full HI are included. In Section \[sec:char\_stationary\_sol\] we characterise solutions of the stationary problem and convergence to equilibrium in $L^2$ as $t \to \infty$. In Section \[sec:global\_asymptotic\_stability\] we obtain results on the global asymptotic stability of the stationary densities by showing that the free energy is a continuous functional for all two-body interaction strengths. Additionally we prove an H-theorem for the equilibria, provide *a priori* convergence estimates in relative entropy, derive an asymptotic expansion of the equilibria for small interaction energy and perform a spectral analysis of the linearised nonlocal Smoluchowski operator. In Section \[sec:bifurcation\_theory\] we provide necessary and sufficient conditions for phase transitions in generalised DDFT-like systems with no-flux boundary conditions. In Section \[sec:manufactured\_bif\] we construct the bifurcation diagram for some example problems.
In Section \[sec:existence\_uniqueness\_with\_partial\_HI\] we obtain an existence and uniqueness theorem for the Smoluchowski equation with non-constant diffusion tensor and effective drift vector dependent on the two-body HI tensors $\bm{Z}_1$ and $\bm{Z}_2$. In Section \[sec:discussion\] we present our concluding remarks and state some open problems. In Appendix \[sec:classical\_paraboliv\_pde\] we provide some technical results that are used in the proof of Theorem \[thm:exis\_uniq\_weak\_sol\_rho\]. Finally in Appendix \[app:nomenclature\] we provide a list of nomenclature. Preliminaries {#sec:preliminaries} ============= In this section we specify the nonlinear boundary conditions and initial data for the DDFT . We also nondimensionalise the governing equations and provide the assumptions on the regularity of the potentials, correlation function, diffusion tensor and initial data. Boundary Conditions. {#subsec:boundary_conditions} -------------------- When $U = \mathbb{R}^d$ we take $$\begin{aligned} \label{bc:density_and_flux_decaying} \left\{\begin{aligned} \varrho(\vec{r},t)\to 0 \\ \vec{a}(\vec{r},t)\to \vec{0} \end{aligned}\right. \quad \text{ as } \quad |\vec{r}|\to \infty,\end{aligned}$$ where we require $V_1$ to be growing at least quadratically as $\vec{r}\to \infty$. Physically-speaking this prevents the density from running out to infinity. When $U\subset \mathbb{R}^d$ is open and bounded we impose that the total mass of the system $M$ remains constant, in particular we have $$\begin{aligned} \label{bc:mass_preserving} \vec{a}(\vec{r},t)\cdot\vec{n}\bigg|_{\partial U\times [0,T]} = 0.\end{aligned}$$ The boundary condition may be viewed as a [*nonlinear*]{} Robin condition imposing the flux through the boundary $\partial U$ is zero for all time $t\in [0,T]$. If $\varrho$ is a number density then $\int \mathrm{d}\vec{r}\, \varrho = N$ for all time, however for the analysis in Section \[sec:ex\_uni\_full\_HI\] and onwards we will assume $\varrho$ is a probability density so that $\int \mathrm{d}\vec{r}\, \varrho = 1$. The rescaling between number and probability densities is discussed in the following section. Initial Conditions. ------------------- We will assume that the initial data has finite free energy and is consistent with the imposed boundary conditions. For example, one could prescribe initial data $(\varrho_0,\vec{a}_0)^\top$ such that $$\begin{aligned} \label{eq:initial_data_for_a_and_rho} \frac{\delta\mathcal{F}}{\delta\varrho}[\varrho_0](\vec{r}) = \mu_{c},\qquad \vec{a}_0 = \vec{0}.\end{aligned}$$ where $\mu_{c}$ is the chemical potential, constant at equilibrium. It is straightforward to check that $(\varrho_0,\vec{a}_0)^\top$ is an equilibrium point of the system . Commonly, one then drives the system out of equilibrium via a time-dependent external potential. In principle $\mu_{c}$ may be given and the equations , are well defined. In practice, for complicated particle configurations, $\mu_{c}$ is not known but can be computed by minimising the free energy along with the additional constraint $\int_U\mathrm{d}\vec{r}\, \varrho_0(\vec{r})=N$, where $N$ is the (expected) number of particles for a finite system and $\varrho_0$ is a number density. Note that $\mu_{c}$ is a potential, so by raising it one may force more particles into the system. We will assume that $\mu_c$ is constant to fix the number of particles. 
To ensure $\varrho$ (and $\varrho_0$) is a probability density one may rescale $\varrho/N = \tilde{\varrho}$, $N g = \tilde{g}$ and $\vec{a}/N^2 = \tilde{\vec{a}}$, where the tilde denotes the new variable, so that $\int_U\mathrm{d}\vec{r}\, \varrho_0(\vec{r})=1$ and equations - become independent of $N$. This provides a method of converting back to the number density which is typically used in numerical modelling of finite colloidal systems [@goddard2016dynamical], [@goddard2012unification], [@goddard2012general]. Throughout, however, since we will frequently use the integral of the density, we will assume $\varrho$ and $\varrho_0$ are probability densities to ease notation. With this, one has three equations for the three unknowns $\mu_{c}$, $\varrho_0$, $\vec{a}_0$, and the initial density $\varrho_0$ can be computed. For the rest of the paper it is convenient to work in dimensionless units.

Evolution Equations.
--------------------

We now nondimensionalise the governing equations. Let $\mathrm{L}$, $\tau$, $\text{U}$ be characteristic length, time and velocity scales, respectively. Then, nondimensionalising $$\begin{aligned} \vec{r}\sim \mathrm{L} \tilde{\vec{r}},\quad t\sim\tau\tilde{t}, \quad \mathrm{U} = \tfrac{\mathrm{L}}{\tau}, \quad \varrho\sim\tfrac{1}{\mathrm{L}^d}\tilde{\varrho},\quad \mathcal{F}\sim k_BT \tilde{\mathcal{F}},\quad \vec{a}\sim \mathrm{A}\tilde{ \vec{a}},\end{aligned}$$ where $d$ is the physical dimension and $\mathrm{A}$ is a characteristic flux scale, the system becomes (after dropping tildes) $$\begin{aligned} \partial_t\varrho(\vec{r},t) = -\tfrac{1}{Fr}\times \tfrac{\tau^{-1}}{\gamma}\times \mathrm{A}\times \mathrm{L}^{d+1} \nabla_{\vec{r}}\cdot \vec{a}(\vec{r},t),\end{aligned}$$ where we have defined the Froude number $Fr = m\mathrm{U}^2/(k_BT)$. By choosing $Fr = 1$, $\tau = \gamma^{-1}$ and $\mathrm{A} = 1/\mathrm{L}^d$ we simplify the system of equations to the following boundary value problem.\ \[prop:non\_dimensional\_time\_evolving\_flux\_eqn\] The non-dimensional one-body density $\varrho(\vec{r},t)$ and flux $\vec{a}(\vec{r},t)$ evolve according to the boundary value problem $$\begin{aligned} \label{eq:evolution_eqn_for_a_dimensionless} \begin{cases} &\qquad\qquad\partial_t\varrho = -\nabla_{\vec{r}}\cdot \vec{a}(\vec{r},t), \\ &\vec{a}(\vec{r},t) + \bm{H}[\vec{a},\varrho] +\varrho(\vec{r},t)\bm{D}(\vec{r},[\varrho],t)\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]=0, \\ &\qquad [\bm{H}[\vec{a},\varrho] +\varrho(\vec{r},t)\bm{D}(\vec{r},[\varrho],t)\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]]\cdot\vec{n}\big|_{\partial U}=0. \end{cases}\end{aligned}$$ We note that when the off-diagonal HI tensor $\bm{Z}_{2}=0$, by using the definitions of $\mathcal{F}$ and $\bm{D}$, the evolution equations in may be written as a nonlinear Smoluchowski equation (such as ) with non-constant diffusion coefficient.
However, we observe that even when $\bm{Z}_2 \neq 0$ the dynamics may be recast into a Smoluchowski equation for $\varrho$ under an effective drift vector dependent on $\bm{Z}_2$.\ \[prop:non\_dimensional\_time\_evolving\_rho\_eqn\] The non-dimensional one-body density $\varrho(\vec{r},t)$ evolves according to the boundary value problem $$\begin{aligned} \label{eq:evolution_eqn_for_rho_dimensionless} \begin{cases} &\qquad\partial_t\varrho=\,\nabla_{\vec{r}}\cdot\left[Pe^{-1}\bm{D}\nabla_{\vec{r}}\varrho+\varrho\,\bm{D}\left(\nabla_{\vec{r}} (\kappa _1V_1+\kappa _2\,(gV_2)\star\varrho) + \bm{A}[\bm{a}]\right)\right], \\ &\qquad \qquad \qquad \qquad \qquad \Pi [\varrho]\cdot\vec{n}\big|_{\partial U} = 0,\\ &\qquad\Pi[\varrho]:= \bm{D}\,\left(\nabla_{\vec{r}}\varrho + \varrho\, \nabla_{\vec{r}}(\kappa_1 V_1(\vec{r},t)+ \kappa_2 (gV_2)\star \varrho)+\bm{A}[\bm{a}]\right), \end{cases}\end{aligned}$$ where $\bm{A}[\bm{a}]$ is an effective background flow induced by the hydrodynamic interactions defined by $$\begin{aligned} \label{eq:def_of_V1_eff} \bm{A}[\vec{a}]:=\int_U\mathrm{d}\vec{r}'\, g(\vec{r},\vec{r}') \bm{Z}_2(\vec{r},\vec{r}')\vec{a}(\vec{r'},t),\end{aligned}$$ $\kappa_1$, $\kappa_2$ are non-dimensional constants measuring the strength of confining and interaction potentials respectively, $Pe=\mathrm{L}\text{U}/\alpha$ is the P[é]{}clet number measuring the ratio of advection rates to diffusive rates and $\alpha = k_B\mathrm{T}/(m\gamma)$. Corollary \[prop:non\_dimensional\_time\_evolving\_rho\_eqn\] is the general formulation of the nondimensional equations - when $\bm{Z}_2\neq0$, including a non-constant diffusion coefficient and an effective drift. Throughout this paper, to study the intermediate regime of equally strong advection and diffusion, we set $Pe = 1$. Additionally, we redefine the two-body potential to absorb the correlation function $g$ to ease notation, $V_2(\vec{r},\vec{r}'):= g(\vec{r},\vec{r}') V_2(\vec{r},\vec{r}')$. In practice, there are many choices for $g$, for example the hard sphere approximation takes $g(|\vec{r}- \vec{r}'|) = 0$ for $|\vec{r}-\vec{r}'| < 1$ and unity otherwise. Alternatively $g$ may be obtained numerically from microscopic dynamics. We consolidate the choices for equations , in Section \[subsec:assumptions\_definitions\]. The effective drift $\bm{A}[\vec{a}]$, dependent on $\bm{Z}_2$ and $\vec{a}$, may be determined once $\vec{a}(\vec{r},t)$ is solved from the second equation in . Note that the evolution equation in may be viewed as a generalised McKean-Vlasov equation with a non-constant diffusion tensor and confining potential. In particular the McKean-Vlasov equation may be recovered in the special case $\bm{Z}_1 = \bm{Z}_2 = V_1 = 0$, see for example [@greg_mckean_vlasov_torus], [@chayes2010mckean]. We will use Corollary \[prop:non\_dimensional\_time\_evolving\_rho\_eqn\], which writes the full dynamics including full HI, to obtain our results on weak solutions for $\varrho(\vec{r},t)$ (see Theorem \[thm:eigenfn\_expansion\_of\_flux\], Section \[sec:ex\_uni\_full\_HI\] and Theorem \[thm:existence\_and\_uniqueness\], Section \[sec:existence\_uniqueness\_with\_partial\_HI\]). We continue to the next section by stating the stationary boundary value problem for equilibrium states $\varrho(\vec{r})$. Stationary Equations. 
--------------------- For general $\bm{Z}_2$ we will show in Theorem \[th:association \_of\_free\_energy\] that the stationary density $\varrho(\vec{r})$ satisfies $$\begin{aligned} \label{eq:stationary_eqn_for_rho_dimensionless} \begin{cases} &\qquad 0=\,\nabla_{\vec{r}}\cdot[\bm{D}\nabla_{\vec{r}}\varrho+\varrho\,\bm{D}\nabla_{\vec{r}}(\kappa _1V_1+\kappa _2\,V_2\star\varrho)], \\ &\qquad \qquad \qquad \qquad \qquad \Pi [\varrho]\cdot\vec{n}\big|_{\partial U} = 0,\\ &\qquad\Pi[\varrho]:= \bm{D}\,(\nabla_{\vec{r}}\varrho + \varrho\, \nabla_{\vec{r}}(\kappa_1 V_1(\vec{r})+ \kappa_2 \,V_2\star \varrho)). \end{cases}\end{aligned}$$ We now discuss regularity on the potentials and diffusion tensor.

Assumptions & Definitions. {#subsec:assumptions_definitions}
--------------------------

Typically, for long range HI the $\boldsymbol{Z}_i$ exhibit singularities at the origin (particle centres), so the correlation function $g$ is a necessary inclusion and provides a way of smoothing $\boldsymbol{D}$; we assume $g \in L^\infty(U)$. For $\varrho \geq 0$ the diffusion tensor $\boldsymbol{D}$, as a convolution with the density, will then be a weakly differentiable function. For the existence and uniqueness theory in Appendix \[sec:classical\_paraboliv\_pde\] and Section \[sec:existence\_uniqueness\_with\_partial\_HI\] we require the first derivatives of $\bm{D}_{ij}$ to be bounded in $L^\infty(U)$ so that all coefficients of the PDE are uniformly bounded. Out of equilibrium, we will suppress the time dependence of $\bm{D}$ and $V_1$ simply to ease notation. However, at equilibrium $\bm{D}$ and $V_1$ are assumed to be independent of time, as is indeed required for equilibrium states of the density and flux to be well defined. We note that $\bm{D}$ is positive definite and symmetric, as has been rigorously shown in [@goddard2012overdamped]. In summary we have the following notational choices and assumptions for the evolution problem .

#### **[Notation]{}**

Throughout we ease notation on the two-body interaction potential.

- The two-body interaction potential is redefined to absorb the correlation function $g$ $$\begin{aligned} \label{ass:V2_redef} V_2\stackrel{\text{redef}}{:=} g V_2. \tag{N1}\end{aligned}$$

For the dynamics we assume:

#### **[Assumptions D]{}**

- The diffusion tensor $\bm{D}$ is symmetric, positive definite, and the first derivatives of $\bm{D}_{ij}$ are bounded in $L^\infty(U)$ $$\begin{aligned} \label{ass:D_pos_def_weak_diffable} \bm{D}_{ij}\in W^{1,\infty}(U). \tag{D1}\end{aligned}$$

- The diagonal and off-diagonal blocks of the HI tensors are uniformly bounded in the sense $$\begin{aligned} \label{ass:Z2_uniformly_bd} \|gZ_2\|_{L^\infty(U)}<\infty, \quad \|gZ_1\|_{L^\infty(U)}<\infty\tag{D2}\end{aligned}$$

- The initial data $\varrho_0$ is a non-negative, square-integrable, absolutely continuous probability density $$\begin{aligned} \label{ass:rho_in_L2_P_ac} \varrho_0\in P_{ac}(U)\cap L^2(U). \tag{D3}\end{aligned}$$

- The potentials each have two bounded derivatives $$\begin{aligned} \label{ass:V1_V2_in_W_1_inf} V_1,V_2\in W^{2,\infty}(U). \tag{D4}\end{aligned}$$

The functions $V_1$ and $V_2$ are the confining and two-body interaction potentials respectively, the former having explicit time dependence ($V_1 = V_1(\vec{r},t)$) only when we intend to drive - and out of equilibrium, and $V_1 = V_1(\vec{r})$ when we are concerned with the equilibrium properties of - and . 
This distinction will be important for the H Theorem and equilibrium theory in Section \[sec:global\_asymptotic\_stability\]. For the equilibrium problem we will assume: #### **[Assumptions E]{}** - The potentials have first order weak derivatives in $L^2(U)$ $$\begin{aligned} \label{ass:V1_V2_in_H_1} V_1,V_2\in H^{1}(U). \tag{E1}\end{aligned}$$ In particular, Assumption will permit us to establish smooth stationary densities. Note that typical inter-particle potentials, such as Morse or Coulomb, are unbounded as the particle separation goes to zero. This is once again mitigated by the choice of $g$, which we recall has been absorbed into $V_2$ by assumption . In general we admit non-convex $V_1$ and $V_2$, for example multi-well potentials, except for in the convergence result of Theorem \[thm:rel\_entropy\_convergence\] where $V_1$ must be convex in order to invoke a log-Sobolev inequality on the measure $\mu$ given by . The assumption that $\varrho_0\in P_{\text{ac}}(U)$ is included in order to cover a wider set of physically relevant scenarios. In particular we permit initial data such that $\varrho_0|_{A}=0$ for some $A\subset U$ where $A$ is non-empty. Physically speaking this system could correspond to, at time $t=0$, a box partitioned into closed regions with at least one region containing no particles. Then, instantaneously as soon as $t>0$, the partition is removed allowing the particles to move freely. At the end of Section \[sec:existence\_uniqueness\_with\_partial\_HI\] we will show by simple application of Harnack’s inequality, that we obtain strictly positive densities $\varrho(\vec{r},t_1)>0$ after an arbitrarily small time $t_1>0$. Principally this is provided by the property , since $\bm{D}$ is positive definite, the diffusion of density in the system is everywhere propagating in $U$. Additionally, by the positive definite property in we may uniquely define the square root of $\bm{D}$ denoted $\bm{D}^{1/2}$ such that $$\bm{D}^{1/2}\bm{D}^{1/2}=\bm{D}$$ for every $\vec{r}\in U$, $t\in [0,T]$ and each $\varrho$. We also define the eigenvalues $\mu_i\in \mathbb{R}^+$ of $\bm{D}$ and eigenvectors $\vec{e}_i(\vec{r},[\varrho],t)\in L^2(U)$ such that $$\begin{aligned} \label{eq:mu_i_eigenvalue_eqn_D} \bm{D}\vec{e}_i = \mu_i\vec{e}_i.\end{aligned}$$ Note that $\{\vec{e}_i\}_{i=1}^d$ forms an orthonormal basis of $\mathbb{R}^d$ (since $\bm{D}$ is a bounded, symmetric operator) for $i = 1,\cdots, d$ such that $$\begin{aligned} \label{eq:orthogonality_of_ei} \langle\vec{e}_i(\vec{r},[\varrho],t),\vec{e}_j(\vec{r},[\varrho],t)\rangle = \int_U\mathrm{d}\vec{r}\, \vec{e}_i(\vec{r},[\varrho],t)\cdot \vec{e}_j(\vec{r},[\varrho],t) = \delta_{ij}.\end{aligned}$$ We continue to the next section by defining a weak formulation of the dynamics .\ Weak Formulation. ----------------- We provide the weak formulation of the full dynamics including HI for $\bm{Z}_1, \bm{Z}_2$ not necessarily zero.\ Let $\vec{a}(\vec{r},t)$ be a given flux. We say $\varrho\in L^2([0,T]; H^1(U))\cap L^\infty([0,T]; L^2(U))$ and $\partial_t\varrho\in L^2([0,T]; H^{-1}(U))$ is a weak solution to if for every $\eta\in L^2([0,T]; H^1(U))$ $$\begin{aligned} \label{eq:weak_formulation_including_V1eff} \int_0^T\mathrm{d}t\, \langle \partial_t \varrho(t), \, \eta(t) \rangle +\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta\cdot \bm{D}\,[\nabla_{\vec{r}}\varrho +\varrho\,\nabla_{\vec{r}}(\kappa _1V_1+\kappa _2\,V_2\star\varrho + \bm{A}[\vec{a}])]=0\end{aligned}$$ where $\varrho_0 = \varrho(0)$. 
Here, $\bm{A}[\vec{a}]$ is the effective drift induced by $\bm{Z}_2$ and is defined by equation . It will be shown in the following sections (in particular Corollary \[cor:a\_is\_zero\_at\_equilibrium\]) that $\bm{A}[\vec{a}] \to \vec{0}$ as $t \to \infty$. We now state our main results in a precise manner.\ Statement of Main Results {#sec:statement_of_main_results} ========================= Our main results concern existence, uniqueness and convergence to equilibrium of the density of colloids $\varrho$ and flux $\vec{a}$ on $U$ a compact subset of $\mathbb{R}^d$. The first result concerns existence of the flux $\vec{a}(\vec{r},t)$ with non-zero hydrodynamic interactions, the convergence of $\vec{a}(\vec{r},t)$ to zero at equilibrium and existence and uniqueness of fixed points of .\ \[them:exis\_uni\_of\_flux\] Let $\bm{Z}_1,\bm{Z}_2$ be real, symmetric and $\mu_{\max}\|g\bm{Z}_2\|_{L^\infty(U)}<1$. Then\ 1. There exists a unique $\vec{a}(\vec{r},t)\in L^2(U)$ solving the evolution equation for each $\varrho(\vec{r},t)$. In particular $$\begin{aligned} \vec{a}(\vec{r},t) = \sum_{n=1}^d\delta_n\sum_{i=1}^d\frac{\psi_i}{\phi_n-\mu_i^{-1}} \vec{e}_i(\vec{r},[\varrho],t)\end{aligned}$$ where $\vec{e}_i(\vec{r},[\varrho],t)$ are eigenvectors of the diffusion tensor $\bm{D}(\vec{r},[\varrho],t)$ and $\delta_n$, $\phi_n$, $\psi_i$, $\mu_i^{-1}\in \mathbb{R}$. 2. In addition, every stationary density $\varrho(\vec{r})$ and stationary flux $\vec{a}(\vec{r})$ are independent of the HI tensors and satisfy $$\begin{aligned} \varrho(\vec{r})\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho(\vec{r})] = \vec{0}, \quad \vec{a}(\vec{r}) = \vec{0}\end{aligned}$$ and consequently $\varrho(\vec{r})$ minimises the free energy $\mathcal{F}[\varrho](r)-\mu_{c} \int_U\mathrm{d}\vec{r}\,\varrho$, where $\mu_{c}$ is the chemical potential. 3. If, in addition, $|\kappa_2|< \|V_2\|_{L^\infty(U)}^{-1}$ then $(\vec{a}_\star(\vec{r}),\varrho_\star) = (\vec{0}, \varrho_\infty)$ are the unique fixed points of and $\varrho_\infty(\vec{r})$ is given by the self-consistency equation $$\begin{aligned} \varrho_\infty(\vec{r}) = \frac{e^{-(\kappa_1 V_1(\vec{r})+\kappa_2 V_2\star \varrho_\infty)}}{Z(\varrho_\infty)}\end{aligned}$$ for $Z(\varrho_\infty) = \int_U\mathrm{d}\vec{r}\,e^{-(\kappa_1 V_1(\vec{r})+\kappa_2 V_2\star \varrho_\infty)}$. For the evolution system we present the following second main result of the paper.\ \[thm:exis\_uniq\_weak\_sol\_rho\] Let $\bm{Z}_1,\bm{Z}_2$ be real, symmetric and $\mu_{\max}\|g\bm{Z}_2\|_{L^{\infty}(U)}<1$, where $\mu_{\max}$ is the largest eigenvalue of $\bm{D}$, with $\varrho_0 \in C^\infty(U)$, $\varrho\geq 0$ and $\int_U \mathrm{d}\vec{r} \varrho_0(\vec{r})=1$. Then there exists a unique weak solution $\varrho\in L^\infty([0,T];L^2(U))\cap L^2([0,T]; H^1(U))$, with $\partial_t\varrho\in L^2([0,T]; H^{-1}(U))$ for , in the sense , and the following energy estimate holds $$\begin{aligned} \|\varrho\|_{L^{\infty}([0,T];L^2(U))}+\|\varrho\|_{L^{2}([0,T];H^1(U))}+\|\partial_t\varrho\|_{L^2([0,T]; H^{-1}(U))}\leq C(T) \|\varrho_{0}\|_ {L^2(U)},\end{aligned}$$ where $C(T)$ is a constant dependent on $T$, $U$ and $\mu_{\max}$. The existence and uniqueness is proved in Theorem \[thm:existence\_and\_uniqueness\], whilst the bound is shown in Lemma \[lem:weak\_conv\_results\]. 
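The self-consistency equation in part 3 of the first theorem above can be approximated by a simple Picard (fixed-point) iteration, which converges when $|\kappa_2|$ is small, in line with the smallness condition stated there. A minimal one-dimensional sketch follows; the potentials and parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: Picard iteration for rho = exp(-(k1*V1 + k2*(V2 conv rho))) / Z on a 1D grid.
# V1, V2 and the parameter values are assumed for illustration only.
n = 400
x = np.linspace(-2.0, 2.0, n)
kappa1, kappa2 = 1.0, 0.2

V1 = 0.5 * x**2                                            # assumed confining potential
def V2_conv(rho):                                          # (V2 * rho)(x) with V2(x, y) = x * y
    return x * np.trapz(x * rho, x)

rho = np.ones(n) / (x[-1] - x[0])                          # uniform initial guess
for _ in range(500):
    new = np.exp(-(kappa1 * V1 + kappa2 * V2_conv(rho)))
    new /= np.trapz(new, x)                                # divide by Z(rho)
    if np.max(np.abs(new - rho)) < 1e-12:
        rho = new
        break
    rho = new
```

For larger $|\kappa_2|$ the iteration may fail to converge, or may converge to different fixed points depending on the initial guess, consistent with the bifurcation analysis described in the results that follow.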
Furthermore, we prove existence and uniqueness of the stationary density, and exponentially fast convergence in relative entropy.\ Let $\bm{Z}_1,\bm{Z}_2$ be real, symmetric and $\varrho$ be a solution to the DDFT with smooth initial data and smooth $V_1$, $V_2$. Then there exists a stationary density $\varrho(\vec{r},t) = \varrho_0(\vec{r})$. If $|\kappa_2| \leq 1/4\times \|V_2\|_{L^\infty}^{-1}$ then the stationary solution is unique and is denoted by $\varrho_\infty$. The proof of this result is standard, see [@dressler1987stationary]. The third main result of this paper concerns *a priori* estimates for exponential convergence of the density to stationarity.\ Let $\bm{Z}_1,\bm{Z}_2$ be real, symmetric and $\varrho$ be a solution to the DDFT with smooth initial data and smooth $V_1$, $V_2$. If $|\kappa_2| \leq 1/4\times \|V_2\|_{L^\infty}^{-1}$ then\

1. [**Convergence in $L^2(U)$:**]{} For $\kappa_1=0$ (in the absence of a confining potential) if $$\begin{aligned} \kappa_2^2<\frac{\mu_{\min}c^{-2}_{pw}\|\nabla_{\vec{r}}V_2\|^{-2}_{L^\infty(U)}}{2(1+e)\mu_{\max}},\end{aligned}$$ where $\mu_{\min}$ and $\mu_{\max}$ are the smallest and largest eigenvalues of the diffusion tensor $\bm{D}$, then $\varrho\to \varrho_\infty$ in $L^2(U)$ exponentially fast as $t\to \infty$. For $\kappa_1\neq 0$ the convergence criterion is modified to $$\begin{aligned} \mu_{\max}(\kappa_1^2\|\nabla_{\vec{r}}V_1\|_{L^\infty(U)}^2+2\kappa_2^2(1+e)\|\nabla_{\vec{r}}V_2\|_{L^\infty(U)}^2)<\frac{\mu_{\min}}{c_{pw}^2}.\end{aligned}$$

2. [**Convergence in Relative Entropy:**]{} For any fixed confining potential $V_1$ such that the measure $\mu'(\mathrm{d}\vec{r}) = \mathrm{d}\vec{r}\,e^{-\kappa_1V_1}/Z$ satisfies a log-Sobolev inequality and provided $$\begin{aligned} \kappa_2^2<\frac{c^{-1}_{ls}}{2\|\nabla V_2\|^2_{L^{\infty}(U)}}\end{aligned}$$ then the measure $\mu$ in satisfies a log-Sobolev inequality and $\mathcal{H}(\varrho |\varrho_\infty) \to 0$ exponentially fast as $t\to \infty$ where $$\begin{aligned} \mathcal{H}(\varrho |\varrho_\infty) = \int_U\mathrm{d}\vec{r}\, \varrho \log \left(\tfrac{\varrho}{\varrho_\infty}\right)\end{aligned}$$ denotes the relative entropy.

For part 1, see Theorem \[lem:exp\_conv\_in\_L2\] and Proposition \[prop:general\_kappa1\_convergence\]. Theorem \[thm:rel\_entropy\_convergence\] gives the result for part 2. The log-Sobolev inequality for $\mu$ is established by the Holley-Stroock perturbation lemma [@holley1986logarithmic]. The constants $c_{pw}$, $c_{ls}$ are the Poincar[é]{}-Wirtinger and log-Sobolev constants respectively. Nowhere do we assume parity of the two-body potential, nor that $V_2$ has zero mean. Additionally the optimal $c_{pw}$ is the inverse square root of the smallest non-zero eigenvalue of the Laplacian on the domain $U$ with no-flux boundary conditions. We have the following conditions for the existence of bifurcating branches of steady states $\varrho(\vec{r})$.\ Fix $\kappa_1$ and let $\kappa_2\in(-\infty,\infty)$. Let $\mathcal{L}_{1} = \mathcal{A}_{\kappa_2}+\kappa_2\mathcal{B}$ denote the linearised operator to the stationary problem with eigenvalues $\lambda(\kappa_2)$ and eigenfunctions $w^{(\kappa_2)}(\vec{r})$. Denote by $\mathcal{A}_{\kappa_2}$, $\mathcal{B}$ the local and nonlocal parts of $\mathcal{L}_{1}$ respectively. Denote by $\gamma_k^{(\kappa_2)}$ the eigenvalues of $\mathcal{A}_{\kappa_2}$ with eigenvectors $v^{(\kappa_2)}$. 
If the solution $\kappa_2^\star(\lambda)$ of the equation $\lambda = \lambda_{k^\star}(\kappa_2^\star) $ exists, then it is unique and is given by the nonlinear equation $$\begin{aligned} \kappa_2^\star(\lambda) = \left(\sum_{i=1}^\infty\tfrac{\theta_i^{(\kappa_2)}\gamma _i^{(\kappa_2)}\beta_i^{(\kappa_2)}}{\lambda-\gamma_i^{(\kappa_2)}}\right)^{-1}. \end{aligned}$$ As a corollary we can determine the necessary condition on the interaction strength for a bifurcation of stable equilibrium densities solving .\ Provided that the spectral gap of $\mathcal{A}_{\kappa_2}$ is sufficiently large, that is, $$\begin{aligned} |\kappa_2|<\frac{\min_{i\neq j\in\mathbb{N}}|\gamma_i^{(\kappa_2)}-\gamma_{j}^{(\kappa_2)}|}{2\|\mathcal{B}\|}\end{aligned}$$ then $\lambda(\kappa_2)\in\mathbb{R}$ and the point of critical stability $\kappa_{2_\sharp}$ occurs at the solution of the nonlinear equation $$\begin{aligned} \kappa_{2_{\sharp}} = -\left(\sum_{i=1}^\infty\theta_i^{(\kappa_2)}\beta_i^{(\kappa_2)}\right)^{-1},\end{aligned}$$ where $\theta_i$, $\beta_i$ are coefficients of the two-body potential expanded in the orthonormal basis of eigenvectors $\{v_k^{(\kappa_{2_\sharp})}\}_{k = 1}^\infty$. The proofs of these results are given by Theorem \[thm:epsilon\_as \_a\_fn\_of\_lambda\] and the discussion immediately following it. We also obtain the following theorem for existence of bifurcations for the stationary equation .\ Let $\kappa_2\in (-\infty,\infty)$ and let $\{\beta_{n}^{-1}\}_{n=1}^{\infty}$ be the eigenvalues of $\mathcal{R}$ with eigenfunctions $\{u_n\}_{n=1}^{\infty}$ where $$\begin{aligned} \mathcal{R}[u_n] = -\varrho_{\kappa_2}(\vec{r})\int_U\mathrm{d}\vec{r}'\, V_2(\vec{r},\vec{r}')u_n(\vec{r}')\end{aligned}$$ and $\varrho_{\kappa_2}$ is a stationary solution to . If $|\kappa_2|\geq |\beta_1|$ then $\varrho_{\kappa_2}$ is unstable with respect to $\{u_n\}_{n=1}^{\infty}$ with $(\beta_1,u_1)$ a bifurcation point of where $\beta_1$ is the smallest eigenvalue of $\mathcal{R}^{-1}$ and $u_1$ is the eigenfunction of $\mathcal{R}$ associated to $\beta_1^{-1}$. There exists $\varrho_\ast>0$ such that $\mathcal{F}[\varrho_\ast]<\mathcal{F}[\varrho_{\kappa_2}]$. We now give our arguments for Theorem \[them:exis\_uni\_of\_flux\].\

Existence & Uniqueness of Flux With Full HI {#sec:ex_uni_full_HI}
===========================================

We return to the full formulation of the overdamped DDFT with HI. The contraction condition $\mu_{\max}\|g\bm{Z}_2\|_{L^{\infty}(U)}<1$ can be seen as a sufficient condition for the invertibility of an operator closely related to the positive definite grand friction tensor $\boldsymbol{\Gamma}$. 
Note that the flux equation in may be written more generally as $$\begin{aligned} \label{eq:matrix_operator_friction_eqn} (\bm{1}+ \mathcal{Z}_1^\varrho + \mathcal{Z}_2^\varrho)[\vec{a}(\vec{r},t)]=-\varrho(\vec{r},t)\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\end{aligned}$$ where the actions of the integral operators $\bm{1}+ \mathcal{Z}_1^\varrho$ and $\mathcal{Z}_2^\varrho$ are defined by $$\begin{aligned} (\bm{1}+ \mathcal{Z}_1^\varrho)[\vec{a}_1]&=\vec{a}_1(\vec{r},t)+\int_U\mathrm{d}\vec{r}' g(\vec{r},\vec{r}')\varrho(\vec{r}',t)\boldsymbol{Z}_1(\vec{r},\vec{r}')\times \vec{a}_1(\vec{r},t),\label{eq:friction_like_system_1plusZ1}\\ \mathcal{Z}_2^\varrho[\vec{a}_2]&=\varrho(\vec{r},t)\int_U\mathrm{d}\vec{r}'\, g(\vec{r},\vec{r}')\bm{Z}_2(\vec{r},\vec{r}')\vec{a}_2(\vec{r}',t).\label{eq:friction_like_system_Z2}\end{aligned}$$ Notice how the integral-matrix operators in - resemble the operators in the first row of the grand resistance matrix for a two particle system from classical hydrodynamics [@happel2012low]. The following lemma establishes a solvability condition for the flux equation in equation . \[thm:cond\_converg\_fred\_det\] Let $\bm{1}+ \mathcal{Z}_1^\varrho$ and $\mathcal{Z}_2^\varrho$ be bounded linear operators. Suppose $\mathcal{A}_\varrho:=(\bm{1}+ \mathcal{Z}_1^\varrho)^{-1}\mathcal{Z}_2^\varrho$ is compact in $L^2(U,\varrho^{-1}(\vec{r},t))$. If $\mu_{\max}\|g\bm{Z}_2\|_{L^\infty(U)}<1$ then the matrix integral operator $\bm{1}+ \mathcal{Z}_1^\varrho + \mathcal{Z}_2^\varrho $ is invertible and the system is well-posed. Since $\bm{1}+ \mathcal{Z}_1^\varrho$ is positive definite it is invertible, therefore may be rewritten $$\begin{aligned} \label{eq:matrix_integral_op_eqn_rewrite} (\bm{1} + (\bm{1}+ \mathcal{Z}_1^\varrho )^{-1}\mathcal{Z}_2^\varrho) [\vec{a}(\vec{r},t)] = -(\bm{1}+ \mathcal{Z}_1^\varrho)^{-1} \varrho(\vec{r},t)\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho].\end{aligned}$$ We note that $\mathcal{A}^\varrho$ is a trace-class operator and that the left hand side of is an operator of the form $\bm{1}-\lambda\mathcal{A}_\varrho$. By classical theory [@fredholm1900nouvelle], [@lax2014functional] we have the identity $$\begin{aligned} \label{eq:det(1+A)_in_terms_of_exp} \det(\bm{1}-\lambda\mathcal{A}_\varrho) = \exp\Big\{-\sum_{n=1}^\infty\tfrac{\text{Tr}(\mathcal{A}_\varrho^n)}{n}\lambda^n \Big\}.\end{aligned}$$ When $\lambda = -1$ we recover the determinant for the Fredholm operator on the left hand side of . Particularly, since for our consideration $|\lambda|=1$, the convergence of the infinite summation inside the argument of the exponential in will depend on the size of $\text{Tr}\mathcal{A}_\varrho$, and when $\lambda = -1$, the summand is an alternating sequence so we demand absolute convergence for the sum in to converge. We obtain results in $L^2(U, \varrho^{-1})$. By definition of the trace we have $$\begin{aligned} \text{Tr}\mathcal{A}^n_\varrho = \sum_{k=1}^d\langle \mathcal{A}^n_\varrho \vec{e}_k(\vec{r}), \vec{e}_k(\vec{r})\rangle_{L^2(U,\varrho^{-1})},\end{aligned}$$ where $\{\vec{e}_k\}_{k}$ are vectors such that their components form an orthonormal basis of $L^2(U,\varrho^{-1})$, in particular we choose the eigenvectors of the diffusion tensor $\bm{D}$. 
Since $\mathcal{A}$ is an integro-matrix operator the inner product is given by, for $n=1$ $$\begin{aligned} \text{Tr}\mathcal{A}_\varrho &= \sum_{k=1}^d\langle \mathcal{A}_\varrho \vec{e}_k(\vec{r}), \vec{e}_k(\vec{r})\rangle_{L^2(U,\varrho^{-1})}\\ &=\sum_{k=1}^d\int_U\mathrm{d}\vec{r}\,\vec{e}_{k}(\vec{r})\cdot\bm{D}(\vec{r})\int_U\mathrm{d}\vec{r}'\,g(\vec{r},\vec{r}')\bm{Z}_2(\vec{r},\vec{r}')\vec{e}_{k}(\vec{r}')\\ &\leq \sum_{k=1}^d\mu_{\max}\|g \bm{Z}_2\|_{L^\infty(U)}\int_U\mathrm{d}\vec{r}\,\vec{e}_{k}(\vec{r})\cdot\int_U\mathrm{d}\vec{r}'\,\vec{e}_{k}(\vec{r}')\leq d\mu_{\max}\|g \bm{Z}_2\|_{L^\infty(U)}|U|\end{aligned}$$ where we have used $\int_U\mathrm{d}\vec{r}'\,\vec{e}_{k}(\vec{r}')\leq \|\vec{e}_k\|_{L^1(U)}\leq |U|^{1/2}\|\vec{e}_k\|_{L^2(U)} = |U|^{1/2}$ by orthonormality of the basis. Now for $n=2$ we have $$\begin{aligned} \text{Tr}\mathcal{A}^2_\varrho &= \sum_{k=1}^d\langle \mathcal{A}^2_\varrho \vec{e}_k(\vec{r}), \vec{e}_k(\vec{r})\rangle_{L^2(U,\varrho^{-1})}\\ &=\sum_{k=1}^d\int_U\mathrm{d}\vec{r}\,\vec{e}_{k}(\vec{r})\cdot\varrho(\vec{r})^{-1}\int_U\mathrm{d}\vec{r}_1\,\varrho(\vec{r})\bm{D}(\vec{r})g(\vec{r},\vec{r}_1)\bm{Z}_2(\vec{r},\vec{r}_1)\\ &\quad \times\int_U\mathrm{d}\vec{r}_2\,\varrho(\vec{r}_1)\bm{D}(\vec{r}_1)g(\vec{r}_1,\vec{r}_2)\bm{Z}_2(\vec{r}_1,\vec{r}_2)\vec{e}_{k}(\vec{r}_2)\\ &\leq \sum_{k=1}^d\mu_{\max}^2\|g\bm{Z}_2\|_{L^\infty(U)}^2\int_U\mathrm{d}\vec{r}\,\vec{e}_{k}(\vec{r})\cdot\int_U\mathrm{d}\vec{r}_2\,\vec{e}_{k}(\vec{r}_2)\\ &\leq d\mu_{\max}^2\|g\bm{Z}_2\|_{L^\infty(U)}^2|U|,\end{aligned}$$ where we have used the fact that $\int_U\mathrm{d}\vec{r}_1\,\varrho(\vec{r}_1,t) = 1$ for $t\geq 0$ by the no-flux boundary condition (see Section \[subsec:boundary\_conditions\]). Iterating this argument one may obtain $$\begin{aligned} \text{Tr}\mathcal{A}^n_\varrho &= \sum_{k=1}^d\langle \mathcal{A}^n_\varrho \vec{e}_k(\vec{r}), \vec{e}_k(\vec{r})\rangle_{L^2(U,\varrho^{-1})}\\ &\leq \mu_{\max}^n\|g\bm{Z}_2\|_{L^\infty(U)}^n\sum_{k=1}^d\int_U\mathrm{d}\vec{r}\,\vec{e}_{k}(\vec{r})\cdot\int_U\mathrm{d}\vec{r}_n\,\vec{e}_{k}(\vec{r}_n)\\ & = |U|d\times \mu_{\max}^n\|g\bm{Z}_2\|_{L^{\infty}(U)}^n.\end{aligned}$$ We observe that absolute convergence of requires $\mu_{\max}\|g\bm{Z}_2\|<1$. In particular, the sum of the absolute values of the terms is given by $$\begin{aligned} \sum_{n=1}^\infty\Big|\tfrac{\text{Tr}(\mathcal{A}_\varrho^n)}{n}\Big| \leq d|U|\sum_{n=1}^\infty\tfrac{\mu_{\max}^n\|g\bm{Z}_2\|_{L^\infty(U)}^n}{n} = d|U|\log\left(\frac{1}{1-\mu_{\max}\|g\bm{Z}_2\|_{L^\infty(U)}}\right).\end{aligned}$$ Thus for $\mu_{\max}||g \bm{Z}_2||_{L^\infty(U)}<1$ the logarithm is finite and the determinant is positive, otherwise for the boundary case $\mu_{\max}||g \bm{Z}_2||_{L^\infty(U)} =1$ it may vanish, thus making $(\bm{I} + \mathcal{Z}_1^\varrho + \mathcal{Z}_2^\varrho)$ singular. We now provide a scheme for computing solutions of equation for each time dependent $\varrho(\vec{r},t)$. The existence and uniqueness of $\varrho(\vec{r},t)$ is given in Section \[sec:existence\_uniqueness\_with\_partial\_HI\]. First we establish that $\bm{1}+\mathcal{Z}^\varrho_1-\lambda\mathcal{Z}^\varrho_2$ is a compact self-adjoint operator in $L^2(U,\varrho^{-1})$.\ \[lem:flux\_operator\_is\_compact\_self\_adjoint\] Let $\lambda\in (-\infty,\infty)$ and assumption hold. Then $\bm{1}+\mathcal{Z}^\varrho_1-\lambda\mathcal{Z}^\varrho_2$ is a compact and self-adjoint operator. 
We let $\vec{a}\in L^1(U)$ and calculate $\|(\bm{1}+\mathcal{Z}^\varrho_1-\lambda\mathcal{Z}^\varrho_2)[\vec{a}]\|_{L^1(U)}$. In particular we have $$\begin{aligned} \|(\bm{1}+\mathcal{Z}^\varrho_1-\lambda\mathcal{Z}^\varrho_2)[\vec{a}]\|_{L^1(U)} &= \int\mathrm{d}\vec{r}\,\Big|(\bm{1}+\mathcal{Z}^\varrho_1-\lambda\mathcal{Z}^\varrho_2)[\vec{a}]\Big|\nonumber\\ &\leq \int\mathrm{d}\vec{r}\,\Big|(\bm{1}+\mathcal{Z}^\varrho_1)\vec{a}\Big|+|\lambda|\Big|\mathcal{Z}^\varrho_2[\vec{a}]\Big|\nonumber\\ &\leq (1+\|g \bm{Z}_1\|_{L^\infty(U)}+|\lambda|\|g\bm{Z}_2\|_{L^\infty(U)})\|\vec{a}\|_{L^1(U)}<\infty.\end{aligned}$$ Hence $\text{Im}(\bm{1}+\mathcal{Z}^\varrho_1-\lambda\mathcal{Z}^\varrho_2)$ is bounded in $\mathbb{R}^3$. Now by Heine–Borel, the closure of $\text{Im}(\bm{1}+\mathcal{Z}^\varrho_1-\lambda\mathcal{Z}^\varrho_2)$ is compact and hence $\bm{1}+\mathcal{Z}^\varrho_1-\lambda\mathcal{Z}^\varrho_2$ is a compact operator. We now show that $\bm{1}+\mathcal{Z}^\varrho_1-\lambda\mathcal{Z}^\varrho_2$ is self-adjoint. The local part $\bm{1}+\mathcal{Z}^\varrho_1$ is a real, symmetric matrix and it is therefore self-adjoint, and in particular self-adjoint in $L^2(U,\varrho^{-1})$ . All that remains is to study the nonlocal part $\mathcal{Z}^\varrho_2$. By direct calculation we see that for $\vec{b}\in L^2(U)$ $$\begin{aligned} \langle \vec{b},\mathcal{Z}_2^\varrho[\vec{a}]\rangle_{L^2(U,\varrho^{-1})} &= \int_U\mathrm{d}\vec{r}\,\vec{b}(\vec{r})\cdot \int_U\mathrm{d}\vec{r}'\,g(\vec{r},\vec{r}')\bm{Z}_2(\vec{r},\vec{r}')\vec{a}(\vec{r}')\\ &=\int_U\mathrm{d}\vec{r'}\,\vec{b}(\vec{r})^\top \int_U\mathrm{d}\vec{r}\,g(\vec{r},\vec{r}')\bm{Z}_2(\vec{r},\vec{r}')^\top\vec{a}(\vec{r}')\\ &=\int_U\mathrm{d}\vec{r'}\,\int_U\mathrm{d}\vec{r}\,\left(g(\vec{r},\vec{r}')\bm{Z}_2(\vec{r},\vec{r}') \vec{b}(\vec{r})\right)^\top \vec{a}(\vec{r}')\\ &=\int_U\mathrm{d}\vec{r'} \vec{a}(\vec{r}')\cdot\int_U\mathrm{d}\vec{r}\,g(\vec{r},\vec{r}')\bm{Z}_2(\vec{r},\vec{r}') \vec{b}(\vec{r})\\ &=\langle \mathcal{Z}_2^\varrho[\vec{b}],\vec{a}\rangle_{L^2(U,\varrho^{-1})}\end{aligned}$$ where we have used the symmetry of $\bm{Z}_2$, and on the last line used Fubini’s theorem to interchange the order of the integration between the $\vec{r}'$ and $\vec{r}$ variables. Hence the lemma is proved. Since we have now established that $\bm{1}+\mathcal{Z}^\varrho_1-\lambda\mathcal{Z}^\varrho_2$ is a compact and self-adjoint operator we may use its eigenvectors as a complete basis of $\mathbb{R}^3$ to expand the flux $\vec{a}(\vec{r},t)$.\ \[thm:eigenfn\_expansion\_of\_flux\] Let $\bm{Z}_2$ be symmetric and real and $\mu_{\max}\|g\bm{Z}_2\|_{L^\infty(U)}<1$ and let $\vec{e}_i(\vec{r},[\varrho],t)$ and $\mu_i^{-1}$ be the eigenvectors and eigenvalues of $\bm{D}^{-1}(\vec{r},[\varrho],t)$ where $[\cdot]$ denotes functional dependence and $i = 1,\cdots, d$. 
Then there is a unique $\vec{a}(\vec{r},t)\in L^2(U)$ solving given by the eigenfunction expansion $$\begin{aligned} \vec{a}(\vec{r},t) = \sum_{n=1}^d\delta_n\vec{w}_{n}(\vec{r},t)\label{eq:a_expanded_in_wn}.\end{aligned}$$ Here, $\vec{w}_n$ are eigenfunctions of $(\bm{1}+\mathcal{Z}_1^\varrho -\lambda\mathcal{Z}_2^\varrho)$ obtained by a second expansion in $\vec{e}_i(\vec{r},[\varrho],t)$ of the form $$\begin{aligned} \label{eq:wn_expanded_in_es} \vec{w}_n(\vec{r},t) = \sum_{i=1}^d\frac{\psi_i}{\mu_i^{-1}-\phi_n} \vec{e}_i(\vec{r},[\varrho],t).\end{aligned}$$ Additionally, the expansion coefficients $\delta_n$ are given by the formula $$\begin{aligned} \label{eq:eqn_for_deltans} \delta_n = \frac{1}{\phi_n}\sum_{i=1}^d \frac{\psi_i}{\phi_n-\mu_i^{-1}}\int_U\mathrm{d}\vec{r}\, \varrho(\vec{r},t)\vec{e}_i(\vec{r},[\varrho],t)\cdot\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\end{aligned}$$ where $\{\phi_n\}_{n=1}^d$ are the discrete set of eigenvalues of $(\bm{1}+\mathcal{Z}_1^\varrho -\lambda\mathcal{Z}_2^\varrho)$ given by roots of the equation $\lambda(\phi_n) = -1$, where the function $\lambda(\cdot)$ is defined by $$\begin{aligned} \lambda(\phi_n) := \left[\sum_{l=1}^d\frac{\eta_l \,\psi_l}{\mu_l^{-1}-\phi_n}\right]^{-1}.\end{aligned}$$ Finally, $\psi_k$ and $\eta_l$ are the expansion coefficients defined by $$\begin{aligned} \varrho(\vec{r},t)g(\vec{r},\vec{r}')\bm{Z}_2(\vec{r},\vec{r}') = \sum_{k=1}^d\sum_{l=1}^d \psi_k\eta_l \vec{e}_k(\vec{r},[\varrho],t)\otimes\vec{e}_l(\vec{r}',[\varrho],t)\label{eq:rho_g_Z2_expanded_in_e}\end{aligned}$$ and each $\varrho$ is obtained from the continuity equation and no-flux condition $$\begin{aligned} \partial_t \varrho &= -\nabla_{\vec{r}}\cdot \vec{a},\\ 0 & =\Pi[\varrho]\cdot \vec{n}|_{\partial U}.\end{aligned}$$ The scalars $\mu_i^{-1}$, $\psi_k$, $\eta_l$ (and by proxy $\delta_n$) each have functional dependence on $\varrho$ since they are obtained by integrals involving $\vec{e}_j(\vec{r},[\varrho],t)$, for $i,j,k,l = 1,\cdots, d$. The eigenvalues $\phi_n$ are so called ‘moving eigenvalues’ of $(\bm{1}+\mathcal{Z}_1^\varrho -\lambda\mathcal{Z}_2^\varrho)$ (cf. [@davidson2006spectral]). If $\bm{Z}_2 = 0$ then $\phi_i = \mu_i^{-1}$ for each $i=1,\cdots, d$. In general, for $\bm{Z}_2 \neq 0$, an eigenvalue of $\bm{D}^{-1}$ may also be an eigenvalue of $(\bm{1}+\mathcal{Z}_1^\varrho -\lambda\mathcal{Z}_2^\varrho)$ and this occurs on the line $\lambda = 0$. Since $\bm{Z}_2$ is symmetric it can be diagonalised, and therefore the kernel of the operator $\mathcal{Z}_2$ can be decomposed into a finite (of length $d$) sum of products of continuous functions and has at most $d$ eigenvalues. The equation $\lambda(\phi_n) = -1$ may be rearranged into a characteristic polynomial equation in $\phi_n$ with coefficients dependent on $\eta_l$, $\psi_l$ and $\mu_l$ and since $(\bm{1}+\mathcal{Z}_1^\varrho -\lambda\mathcal{Z}_2^\varrho)$ is assumed to be real and symmetric, each $\phi_n\in\mathbb{R}$. Finally, the condition $\mu_{max}\|g\bm{Z}_2\|_{L^\infty(U)}<1$ ensures $\phi_n \neq 0$ for any $n\in \mathbb{N}$. We consider the more general operator $(\bm{1}+\mathcal{Z}_1^\varrho-\lambda\mathcal{Z}_2^\varrho)$ where $\lambda \in \mathbb{R}$. One may think of this operator as a nonlocal matrix operator where $(\bm{1}+\mathcal{Z}_1^\varrho)$ is the local part and $\mathcal{Z}_2^\varrho$ is the nonlocal part. 
Here $\lambda$ is a perturbation parameter measuring the distance of the full operator $(\bm{1}+\mathcal{Z}_1^\varrho-\lambda\mathcal{Z}_2^\varrho)$ from locality. Since $\bm{Z}_1$ and $\bm{Z}_2$ are real and symmetric and $\lambda\in \mathbb{R}$ , $\bm{1}+\mathcal{Z}_1^\varrho-\lambda\mathcal{Z}_2^\varrho$ coincides with its adjoint in $L^2(U,\varrho^{-1})$. For the homogeneous adjoint equation $$\begin{aligned} \label{eq:adjoint_eqn} (\bm{1}+\mathcal{Z}_1^\varrho-\lambda\mathcal{Z}_2^\varrho)^\dagger\bm{z} = 0\end{aligned}$$ we know from Lemma \[thm:cond\_converg\_fred\_det\] that when $\mu_{\max}\|g\bm{Z}_2\|_{L^\infty(U)}<1$ there is no $\lambda\in \mathbb{R}$ satisfying $\det((\bm{1}+\mathcal{Z}_1^\varrho-\lambda\mathcal{Z}_2^\varrho)^\dagger) = 0$ and therefore the only solution to the homogeneous adjoint equation is $\bm{z} = \vec{0}$. Therefore by the Fredholm alternative there is a unique solution to . Now consider the eigenvalue problem $$\begin{aligned} \label{eq:eigenvalue_problem_(1+lambdaA)} (1+\mathcal{Z}_1^\varrho-\lambda \mathcal{Z}_2^\varrho)[\vec{w}_n(\vec{r},t)] = \phi_n\vec{w}_n(\vec{r},t)\end{aligned}$$ for eigenvalues $\phi_n\in\mathbb{R}$ and eigenvectors $\vec{w}_n\in \mathbb{R}^d$. We write $$\begin{aligned} \label{eq:wn_expanded_in_ejs} \vec{w}_n = \sum_{j=1}^d\alpha_{j,n}\vec{e}_j(\vec{r},[\varrho],t).\end{aligned}$$ By inserting into we obtain $$\begin{aligned} \label{eq:eigenvalue_problem_expanded} (\bm{1}+\mathcal{Z}_1^\varrho)\sum_{j=1}^d\alpha_{j,n}\vec{e}_j(\vec{r},[\varrho],t) - \lambda \mathcal{Z}_2^\varrho\left[ \sum_{j=1}^d\alpha_{j,n}\vec{e}_j(\vec{r},[\varrho],t)\right] = \phi_n\sum_{j=1}^d\alpha_{j,n}\vec{e}_j(\vec{r},[\varrho],t).\end{aligned}$$ Now by inserting the expansion into we obtain $$\begin{aligned} \sum_{j=1}^d\alpha_{j,n}(\mu_j^{-1}-\phi_n)\vec{e}_{j}(\vec{r},[\varrho],t) - \lambda\sum_{k,l=1}^d\psi_k\,\eta_l\int_U\mathrm{d}\vec{r}'\vec{e}_k(\vec{r},[\varrho],t)\otimes \vec{e}_l(\vec{r}',[\varrho],t)\vec{w}_n(\vec{r}',t) = 0.\end{aligned}$$ Taking the inner product of this equation with $\vec{e}_i(\vec{r},[\varrho],t)$ and integrating we obtain $$\begin{aligned} \alpha_{i,n}(\mu^{-1}_i-\phi_n) -\lambda\,\psi_i\sum_{l=1}^d\eta_l\int_U\mathrm{d}\vec{r}'\vec{e}_l(\vec{r},[\varrho],t)\cdot \vec{w}_n(\vec{r}',t)=0,\end{aligned}$$ which may be rearranged to obtain $$\begin{aligned} \label{eq:separtion_eqn_for_lambda} \lambda = \frac{\alpha_{i,n}(\mu_i^{-1}-\phi_n)}{\psi_i\,\sum_{l=1}^d\eta_l\int_U\mathrm{d}\vec{r}'\vec{e}_l(\vec{r},[\varrho],t)\cdot \vec{w}_n(\vec{r}',t)}.\end{aligned}$$ Since both the left hand side of and $\sum_{l=1}^d\eta_l\int_U\mathrm{d}\vec{r}'\vec{e}_l(\vec{r},[\varrho],t)\cdot \vec{w}_n(\vec{r}',t)$ are independent of the index $i$ it must be that $$\begin{aligned} \frac{\alpha_{i,n}(\mu_i^{-1}-\phi_n)}{\psi_i} = K\end{aligned}$$ for some constant $K$ for which, without loss of generality, we choose $K=1$. With this we obtain an expression for the coefficients $\alpha_{i,n}$ $$\begin{aligned} \label{eq:eqn_for_alpha_in} \alpha_{i,n} = \frac{\psi_i}{\mu_i^{-1}-\phi_n}.\end{aligned}$$ We may also obtain a scheme to determine the $\phi_n$. 
In particular by and we have $$\begin{aligned} \lambda &= \left(\sum_{l=1}^d\eta_l\int_U\mathrm{d}\vec{r}'\vec{e}_l(\vec{r},[\varrho],t)\cdot \vec{w}_n(\vec{r}',t)\right)^{-1}\nonumber\\ & = \left(\sum_{l=1}^d\eta_l\int_U\mathrm{d}\vec{r}'\vec{e}_l(\vec{r},[\varrho],t)\cdot \sum_{j=1}^d\frac{\psi_j}{\mu_j^{-1}-\phi_n}\vec{e}_j(\vec{r}',[\varrho],t)\right)^{-1} = \left(\sum_{l=1}^d\frac{\eta_l\,\psi_l}{\mu_l^{-1}-\phi_n}\right)^{-1}\end{aligned}$$ hence we have that the eigenvalues of $(\bm{1}+\mathcal{Z}_1^\varrho+\mathcal{Z}_2^\varrho)$ are given by the roots of the equation $\lambda(\phi_n) = -1$. We now return to the inhomogeneous problem and expand $\vec{a}(\vec{r},t)$ in eigenfunctions $\vec{w}_n(\vec{r},t)$. We propose an expansion of the form and insert into to obtain $$\begin{aligned} \sum_{n=1}^d \delta_n\phi_n\vec{w}_n(\vec{r},t) = -\varrho(\vec{r},t)\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho].\end{aligned}$$ Now by taking the inner product with some $\vec{w}_k(\vec{r},t)$ and integrating we obtain $$\begin{aligned} \delta_k\phi_k = -\int_U\mathrm{d}\vec{r}\,\varrho(\vec{r},t)\vec{w}_k(\vec{r},t)\cdot \nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho].\end{aligned}$$ By inserting the definition of $\vec{w}_k$ from we deduce $$\begin{aligned} \label{eq:eqn_for_delta_k_coeffs} \delta_k\phi_k = \sum_{i=1}^d \frac{\psi_i}{\phi_k-\mu_i^{-1}}\int_U\mathrm{d}\vec{r}\, \vec{e}_i(\vec{r},[\varrho],t)\cdot\varrho(\vec{r},t)\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho].\end{aligned}$$ Now we would like to divide through by $\phi_k$ but must check that no $\phi_k$ is zero for each $k = 1,\cdots, d$. This is a consequence of the condition $\mu_{\max}\|g \bm{Z}_2\|<1$. In particular, using properties of the determinant, we have that $$\begin{aligned} \det(\bm{1}+\mathcal{Z}_1^\varrho-\lambda\mathcal{Z}_2^\varrho) = \det(\bm{1}+\mathcal{Z}_1^\varrho) \times\det(\bm{1}-\lambda(\bm{1}+\mathcal{Z}_1^\varrho)^{-1}\mathcal{Z}_2^\varrho).\end{aligned}$$ Now since $\bm{D}$ is positive definite, so is $\bm{1}+\mathcal{Z}_1^\varrho$ and therefore $\det(\bm{1}+\mathcal{Z}_1^\varrho) >0$ because the determinant is simply the product of its (strictly positive) eigenvalues. Additionally, since $\mu_{\max}\|g \bm{Z}_2\|<1$, we have by Lemma \[thm:cond\_converg\_fred\_det\] $\det(\bm{1}-\lambda(\bm{1}+\mathcal{Z}_1^\varrho)^{-1}\mathcal{Z}_2^\varrho)>0$ therefore $\det(\bm{1}+\mathcal{Z}_1^\varrho-\lambda\mathcal{Z}_2^\varrho)>0$ and $\phi_k \neq 0$ for all $k\in \mathbb{N}$. We may now divide by $\phi_k$ to obtain . Finally $\vec{a}(\vec{r},t)\in L^2(U)$ may be seen by squaring , integrating over $\mathrm{d}\vec{r}$ and using . Theorem \[thm:eigenfn\_expansion\_of\_flux\] provides a scheme for computing the unique flux $\vec{a}(\vec{r},t)$, given $\varrho$ satisfying $\partial_t\varrho = -\nabla_{\vec{r}}\cdot \vec{a}$ over time. We now use this result to show that the free energy functional $\mathcal{F}[\varrho]$ may be associated to the full system even when $\bm{Z}_2 \neq 0$. In particular, that $\varrho(\vec{r},t)$ solving implies $\varrho$ is a critical point of the free energy $\mathcal{F}[\varrho]$.\ \[thm:association \_of\_free\_energy\] Let $\mu_{\max}\|g\bm{Z}_2\|_{L^\infty(U)}<1$ $V_1 = V_1(\vec{r})$ be a time independent confining potential so that $\varrho(\vec{r})$ is a stationary density to the system then $\varrho(\vec{r})$ is a critical point of $\mathcal{F}[\varrho]$. Let $\varrho(\vec{r},t) = \varrho(\vec{r})$ be a stationary density. 
Then by equation one has $$\begin{aligned} (\bm{1}+(\bm{1}+\mathcal{Z}_1^\varrho)^{-1}\mathcal{Z}_2^\varrho)[\vec{a}(\vec{r})]&=-\varrho(\vec{r})\bm{D}(\vec{r},[\varrho],t)\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho],\label{eq:stationary_eqn_for_a_dimensionless}\\ \nabla_{\vec{r}}\cdot \vec{a}(\vec{r})&=0.\label{eq:incomp_a_at_equilibrium}\end{aligned}$$ We have that for each $\lambda$, the operator $\bm{1}+\mathcal{Z}_1^\varrho-\lambda\mathcal{Z}_2^\varrho$ is compact self-adjoint in $L^2(U,\varrho^{-1}(\vec{r},t))$ (by Lemma \[lem:flux\_operator\_is\_compact\_self\_adjoint\]). We also have that $\bm{1}+\mathcal{Z}_1^\varrho-\lambda\mathcal{Z}_2^\varrho$ is positive definite for $\mu_{\max}\|g Z_2\|_{L^\infty(U)}<1$. In particular, $\phi_n\neq 0$ for every $n = 1,\cdots d $ and $\phi_n(\lambda)$ is continuous function of $\lambda$ such that $\phi_n(0) = \mu_{n}^{-1}>0$ for each $n$. Hence we may invert $\bm{1}+\mathcal{Z}_1^\varrho+\mathcal{Z}_2^\varrho$ given $\mu_{\max}\|g Z_2\|_{L^\infty(U)}<1$. With this, by using equations , we have $$\begin{aligned} \label{eq:div_stationary_eqn_in_terms_of_F} 0 = \nabla_{\vec{r}}\cdot \vec{a} = \nabla_{\vec{r}}\cdot \left(\varrho(\vec{r})(\bm{1}+\mathcal{Z}_1^\varrho+\mathcal{Z}_2^\varrho)^{-1}\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\right). \end{aligned}$$ Now, assuming $\varrho$ is stationary we see that $$\begin{aligned} 0 = \Big\langle \frac{\delta\mathcal{F}}{\delta\varrho}[\varrho],\partial_t\varrho\Big\rangle = -\int_U\mathrm{d}\vec{r}\, \frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\nabla_{\vec{r}}\cdot \vec{a} = \int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\cdot \vec{a}\end{aligned}$$ where we have used the no-flux boundary condition. Now since $(\bm{1}+\mathcal{Z}_1^\varrho+\mathcal{Z}_2^\varrho)^{-1}$ is strictly positive definite and self-adjoint in $L^2(U,\varrho^{-1}(\vec{r},t))$ it possesses a unique strictly positive definite self-adjoint square root in $L^2(U,\varrho^{-1}(\vec{r},t))$ (see [@wouk1966note]). We define $\mathcal{X}_\varrho = (\bm{1}+\mathcal{Z}_1^\varrho+\mathcal{Z}_2^\varrho)^{-1}$ and $\mathcal{X}_\varrho^{1/2}\mathcal{X}_\varrho^{1/2} = \mathcal{X}_\varrho$. Then we find $$\begin{aligned} 0 &= \int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\cdot \vec{a} = \int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\cdot \mathcal{X}_\varrho \left[\varrho\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\right]\nonumber\\ &= \Big\langle \varrho \nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho],\mathcal{X}_\varrho \left[\varrho\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\right]\Big\rangle_{L^2(U,\varrho^{-1})}\nonumber\\ &=\Big\langle \mathcal{X}_\varrho^{1/2} \left[\varrho\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\right],\mathcal{X}_\varrho^{1/2} \left[\varrho\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\right]\Big\rangle_{L^2(U,\varrho^{-1})}\nonumber\\ &=\Big\|\mathcal{X}_\varrho^{1/2} \left[\varrho\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\right]\Big\|_{L^2(U,\varrho^{-1})}^2\label{eq:X_sqrt_energy_argument}\end{aligned}$$ where we have used the self-adjoint property of $\mathcal{X}_\varrho^{1/2} $. 
From the above we deduce, since the integrand in the last line of is non-negative, that the stationary density $\varrho(\vec{r})$ satisfies $$\begin{aligned} \label{eq:grad_F_rho_is_zero} \varrho(\vec{r})\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho(\vec{r})] = \vec{0}.\end{aligned}$$ Therefore we obtain that $\varrho$ is a critical point of the free energy $\mathcal{F}[\varrho]$. Let $\mu_{\max}\|g\bm{Z}_2\|_{L^\infty(U)}<1$ and let $V_1 = V_1(\vec{r})$ be a time-independent confining potential. If $\varrho$ is a stationary density then it is a critical point of the free energy $\mathcal{F}[\varrho]-\int\mathrm{d}\vec{r}\mu_c\varrho$. From Theorem \[thm:association \_of\_free\_energy\] we obtain the following two corollaries. In particular, the proof shows rigorously how the diffusion tensor decouples from the stationary density.\ \[cor:stationary\_density\_independent\_of\_D\] Let $\bm{Z}_1$, $\bm{Z}_2$ be real and symmetric. Then the stationary density $\varrho$ is independent of $\bm{Z}_1$, $\bm{Z}_2$ and, as a consequence, of $\bm{D}$. Additionally since holds in equilibrium even when $\bm{Z}_2 \neq 0$ and the condition $\mu_{\max}\|g\bm{Z}_2\|_{L^\infty(U)}<1$ implies that the operator $(\bm{1}+(\bm{1}+\mathcal{Z}_1^\varrho)^{-1}\mathcal{Z}_2^\varrho)$ has no zero eigenvalue, the homogeneous problem $(\bm{1}+(\bm{1}+\mathcal{Z}_1^\varrho)^{-1}\mathcal{Z}_2^\varrho)[\vec{a}] = \vec{0}$ (i.e. at equilibrium) must have only the trivial solution $\vec{a}(\vec{r}) = \vec{0}$. In addition by equation , at equilibrium one has $$\begin{aligned} \label{eq:stationary_eqn_rigorously_derived} \nabla_{\vec{r}}\cdot \left(\varrho(\vec{r})\bm{D}(\vec{r},[\varrho])\nabla_{\vec{r}}\frac{\delta\mathcal{F}}{\delta\varrho}[\varrho]\right)=0\end{aligned}$$ where $\bm{D}(\vec{r},[\varrho])$ is the long-time limit of the diffusion tensor.\ \[cor:a\_is\_zero\_at\_equilibrium\] Let $\bm{Z}_1$, $\bm{Z}_2$ be real and symmetric. Then $\vec{a}(\vec{r}) = \vec{0}$ is the unique stationary flux. In particular, there do not exist stationary densities which are advected by a non-zero flux, hence the only stationary states are equilibrium states. We remark that Corollary \[cor:stationary\_density\_independent\_of\_D\] is related to the well-known result that for finite dimensional reversible diffusions, i.e. Langevin dynamics of the form $dX_t = - D(X_t) \nabla V(X_t) \, dt + \nabla \cdot D(X_t) \, dt + \sqrt{2 D(X_t)} \, dW_t$ for an arbitrary strictly positive definite mobility matrix $D$, $V$ a confining potential and Wiener process $W_t$, the invariant measure $\mu(dx) = \frac{1}{Z} e^{-V(x)} \, dx$ is independent of $D$. We refer to [@pavliotis2014stochastic Sec 4.6]. To our knowledge, this is the first instance where such a result is proved in the context of DDFT. In the following Sections \[sec:char\_stationary\_sol\], \[sec:global\_asymptotic\_stability\], \[sec:bifurcation\_theory\], we consider the global asymptotic stability of the stationary equation (equivalently ) which, as we have shown in Corollary \[cor:stationary\_density\_independent\_of\_D\], is the equation determining the equilibrium density of the dynamics driven to equilibrium by the HI tensors $\bm{Z}_1,\bm{Z}_2$.\ \[rem:final\_assumptions\] Out of equilibrium, the effective drift is augmented by $\bm{A}[\vec{a}]$ (as defined in ), the flow induced by the HI. 
In order to simplify the presentation of the calculations needed for the proofs of several results presented later on (Theorem \[lem:exp\_conv\_in\_L2\], Proposition \[prop:general\_kappa1\_convergence\], all results in Sections \[sec:existence\_uniqueness\_with\_partial\_HI\] and \[sec:classical\_paraboliv\_pde\]) we suppress $\bm{A}[\vec{a}]$ because it may be trivially included as a linear contribution which is bounded in $L^1(U)$: $$\begin{aligned} \|\bm{A}[\vec{a}]\|_{L^1(U)} = \int_U\mathrm{d}\vec{r}\,\Big| \int_U\mathrm{d}\vec{r}'\, \bm{Z}_2(\vec{r},\vec{r}')\vec{a}(\vec{r}',t)\Big|\leq \|\bm{Z}_2\|_{L^\infty(U)}\|\vec{a}\|_{L^1(U)}<\infty\end{aligned}$$ where we have used and the fact that, by Theorem \[thm:eigenfn\_expansion\_of\_flux\], $\|\vec{a}\|_{L^1(U)}^2\leq |U|\|\vec{a}\|^2_{L^2(U)}<\infty$. Hence all the coefficients of remain uniformly bounded and the existence and uniqueness results of Section \[sec:existence\_uniqueness\_with\_partial\_HI\] may be easily obtained with $\bm{A}[\vec{a}]$ included. Additionally, since we have shown that at equilibrium $\bm{A}[\vec{a}]= \vec{0}$ uniquely, the results of Sections \[sec:global\_asymptotic\_stability\], \[sec:bifurcation\_theory\] hold for the dynamics tending to equilibrium including the effects of the HI. Given this remark, we now discuss the existence of stationary solutions to .\ Characterisation of Stationary Solutions {#sec:char_stationary_sol} ======================================== We now define the stationary problem. \[sec:stationary\_problem\] We seek classical solutions $\varrho\in C^2(\bar{U})$ of $$\begin{aligned} \nabla_{\vec{r}}\cdot \left[\bm{D}\left( \nabla_{\vec{r}}\varrho+\varrho\nabla_{\vec{r}}[\kappa_1 V_1+\kappa_2V_2\star\varrho]\right) \right] &= 0 \qquad \vec{r}\in U, \label{eq:stationary_problem}\\ \Pi[\varrho]\cdot \vec{n} &= 0 \qquad \vec{r} \text{ on } \partial U, \label{eq:stationary_problem_bc}\end{aligned}$$ where $$\begin{aligned} \Pi[\varrho]:=\bm{D}\left( \nabla_{\vec{r}}\varrho+\varrho\nabla_{\vec{r}}[\kappa_1 V_1+\kappa_2V_2\star\varrho]\right).\end{aligned}$$ The proof of existence and uniqueness for the stationary problem is based on a fixed point argument for the nonlinear map defined by integrating equation . In particular we find the stationary distribution satisfies the nonlinear map (the self-consistency equation) $$\begin{aligned} \label{eq:self_cons_eq} \varrho(\vec{r}) = \frac{e^{-(\kappa_1V_1(\vec{r})+ \kappa_2V_2\star\varrho(\vec{r}))}}{Z},\end{aligned}$$ where $Z = \int_U\mathrm{d}\vec{r}\, \exp\{-(\kappa_1V_1(\vec{r})+ \kappa_2V_2\star\varrho(\vec{r}))\}$. Note that the stationary distribution is independent of the diffusion tensor (see Corollary \[cor:stationary\_density\_independent\_of\_D\]). We now present our first result concerning the existence and uniqueness of the solutions to the self-consistency equation.\ \[thm:exis\_fix\_point\] The stationary equation with boundary condition has a smooth, non-negative solution with $\|\varrho\|_{L^1(U)}=1$. When the interaction energy is sufficiently small, $| \kappa_2 | \leq 1/4\times \|V_2\|_{L^\infty}^{-1}$, the solution is unique. The proof follows Dressler et al. [@dressler1987stationary]. The main idea is to show that the right hand side of equation is a contraction map on $C^2(U)$, and for sufficiently small interaction energy $\kappa_2$, $\varrho_\infty\in L^1(U)$ is the unique invariant measure, which is a non-negative function with unit mass. Consider the stationary problem such that Assumption holds. Then we have that 1. 
There exists a weak solution $\varrho\in H^1(U)\cap P_{ac}(U)$ to as a fixed point of the equation . 2. Any weak solution $\varrho\in H^1(U)\cap P_{ac}(U)$ is smooth and strictly positive, that is $\varrho\in C^{\infty}(\bar{U})\cap P^+_{ac}(U)$. The proof is similar to [@greg_mckean_vlasov_torus Theorem 2.3] but one must check the conclusions of the theorem hold with no flux boundary conditions and a confining potential $V_1$. This result is similar to arguments in [@tamura1984asymptotic] but here we consider a compact domain $U$. The weak formulation of is $$\begin{aligned} \label{eq:weak_form_of_stationary_eqn} -\int_U\mathrm{d}\vec{r}\,\nabla_{\vec{r}}\eta\cdot \bm{D}\nabla_{\vec{r}}\varrho -\kappa_1 \int_U\mathrm{d}\vec{r}\,\nabla_{\vec{r}}\eta\cdot \varrho\bm{D}\nabla_{\vec{r}}V_1 - \kappa_2 \int_U\mathrm{d}\vec{r}\,\nabla_{\vec{r}}\eta\cdot \varrho\bm{D}\nabla_{\vec{r}}V_2\star \varrho = 0,\end{aligned}$$ for $\eta \in H^1(U)$ where we have used the no-flux boundary condition in on $\varrho$ and we seek solutions $\varrho\in H^1(U)\cap P_{ac}(U)$. Now define $F:P_{ac}(U)\to P_{ac}(U)$ by $$\begin{aligned} \label{eq:def_of_T_rho} F \varrho = \frac{1}{Z(\varrho,\kappa_2)}e^{-(\kappa_1V_1+ \kappa_2V_2\star \varrho)}, \quad Z(\varrho,\kappa_2) = \int_U\mathrm{d}\vec{r}\,e^{-(\kappa_1V_1+ \kappa_2V_2\star \varrho)}.\end{aligned}$$ By we see that $$\begin{aligned} \label{eq:T_on_E} \|F\varrho\|_{L^2(U)}^2 \leq \frac{1}{|U|} e^{4(|\kappa_1|\|V_1\|_{L^\infty(U)}+|\kappa_2|\|V_2\|_{L^\infty(U)})} =: E_0,\end{aligned}$$ and therefore we seek solutions to in the set $E := \{\varrho\in L^2(U) \,:\, \|\varrho\|_{L^2(U)}^2 \leq E_0\}$. Note that $E$ is a closed, convex subset of $L^2(U)$ and therefore we may redefine $T$ to act on $E$. Additionally we see that for $\varrho\in E$ $$\begin{aligned} \|F\varrho\|_{H^1(U)}^2 &= \|F\varrho\|_{L^2(U)}^2 + \|\nabla_{\vec{r}}T\varrho\|_{L^2(U)}^2\nonumber\\ &\leq E_0\left(1+2|\kappa_1|^2\|\nabla V_1\|_{L^2(U)}^2+|\kappa_2|^2\|\nabla V_2\|_{L^2(U)}^2E_0\right),\label{eq:H1_estimate_of_rho}\end{aligned}$$ where we have used that $\varrho\in L^1(U)$ by Lemma \[thm:exis\_fix\_point\] and $V_1,V_2\in H^1(U)$. Similarly to [@greg_mckean_vlasov_torus Theorem 2.3] we have by that $F(E)\subset E$ and by $F(E)$ is uniformly bounded in $H^1(U)$. Therefore by Rellich’s compactness theorem, $F(E)$ is relatively compact in $L^2(U)$, and therefore in $E$, since $E$ is closed. We may show using similar calculations to [@dressler1987stationary Theorem 1] that the non-linear map in is Lipschitz continuous in $E$, and by Schauder fixed point theorem there exists $\varrho\in E$ solving which by is in $H^1(U)$. By inserting the expression for $F\varrho$ into we obtain (1). Also note that solutions $\varrho\in E$ to are bounded below by $E_0^{-1}/|U|^2$ giving positivity of solutions. We now show that every weak solution in $\varrho\in H^1(U)\cap P_{ac}(U)$ is a fixed point of the nonlinear map in . 
Consider the frozen weak formulation $$\begin{aligned} \label{eq:weak_form_of_stationary_eqn_frozen} -\int_U\mathrm{d}\vec{r}\,\nabla_{\vec{r}}\eta\cdot \bm{D}\nabla_{\vec{r}}\theta -\kappa_1 \int_U\mathrm{d}\vec{r}\,\nabla_{\vec{r}}\eta\cdot \bm{D}\nabla_{\vec{r}}V_1\theta - \kappa_2 \int_U\mathrm{d}\vec{r}\,\nabla_{\vec{r}}\eta\cdot \bm{D}\nabla_{\vec{r}}V_2\star \varrho\, \theta = 0.\end{aligned}$$ This is the weak formulation of the PDE (for the unknown function $\theta$) $$\begin{aligned} \nabla_{\vec{r}}\cdot\left(\bm{D}\nabla_{\vec{r}}\theta + \theta\bm{D}(\nabla_{\vec{r}}V_1+\nabla_{\vec{r}}V_2\star\varrho)\right) = 0, \quad \text{ s.t. } \nabla_{\vec{r}}\left((F\varrho)^{-1}\theta\right)\cdot\vec{n}|_{\partial U} = 0.\end{aligned}$$ We note that we may rewrite the weak formulation as $$\begin{aligned} \label{eq:weak_form_of_stationary_eqn_frozen_rewrite} -\int_U\mathrm{d}\vec{r}\,\nabla_{\vec{r}}\eta\cdot \bm{D}\nabla_{\vec{r}}h\,F\varrho = 0\end{aligned}$$ for every $\eta\in H^1(U)$ and where $h = \theta/(F\varrho)$. This holds true for any $\eta$, in particular $\eta = h$ hence we find $$\begin{aligned} -\int_U\mathrm{d}\vec{r}\,\Big| (F\varrho)^{1/2}\bm{D}^{1/2}\nabla_{\vec{r}}h\Big|^2 = 0\end{aligned}$$ where we have used that $\bm{D}$ is positive definite by and $F\varrho$ is strictly positive. All in all we obtain $\nabla_{\vec{r}}h = 0$ a.e. and hence $\theta = F\varrho$ up to normalisation. But if $F\varrho$ is a probability density we must have $\theta \equiv F\varrho$ and we conclude that since $\varrho = F\varrho$, any weak solution $\varrho\in H^1(U)\cap P^+_{ac}(U)$ of must be such that $\varrho = F\varrho$. The regularity of $\varrho$ follows from the same bootstrapping argument of [@greg_mckean_vlasov_torus Theorem 2.3]. We can also obtain an estimate on the rate of convergence to the equilibrium density in $L^2(U)$ as $t\to \infty$ with the following theorem. In order to forgo additional assumptions on the initial data $\varrho_0$ we restrict ourselves to the case where the equilibrium density is unique and given by $\varrho_\infty$.\ \[lem:exp\_conv\_in\_L2\] Let $\varrho\in C^1([0,\infty];C^2(U)) $ be a solution of with initial data $\varrho_0\in L^2(U)$ a probability density. For $\kappa_1=0$, if $$\begin{aligned} \kappa_2^2 < \min \Big\{ \frac{\mu_{\min}c_{pw}^{-2}\|\nabla_{\vec{r}}V_2\|^{-2}_{L^\infty}}{2(1+e)\mu_{\max}}, \frac{1}{4\|V_2\|_{L^\infty}} \Big\},\end{aligned}$$ where $c_{pw}$ is a Poincar[' e]{}$-$Wirtinger constant on the domain $U$ and $\mu_{\max}$ and $\mu_{\min}$ are the largest and smallest eigenvalues of $\bm{D}$, then $\varrho\to \varrho_\infty$ in $L^2(U)$ exponentially as $t\to \infty$. In particular the convergence in $L^2(U)$ is given by $$\begin{aligned} \|\varrho(\cdot, t)-\varrho_\infty(\cdot)\|^2_{L^2(U)}\leq \|\varrho_0(\cdot)-\varrho_\infty(\cdot)\|_{L^2(U)}^2 e^{-r_{\kappa_2}t}\end{aligned}$$ as $t \to \infty$ where $r_{\kappa_2} = \mu_{\min}c^{-2}_{pw}- 2\mu_{\max}|\kappa_2|^2(e+1)\|\nabla_{\vec{r}}V_2\|_{L^{\infty}(U)}^2$ is the rate of convergence. 
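Before giving the proof, the following minimal numerical sketch illustrates the statement on a one-dimensional toy problem: the stationary density is computed by Picard iteration of the self-consistency map , and the decay rate $r_{\kappa_2}$ is then evaluated from the constants in the theorem. The potentials, the value of $c_{pw}$ and the eigenvalue bounds of $\bm{D}$ used below are illustrative assumptions rather than quantities computed from the full model.

```python
# Minimal sketch (illustrative assumptions throughout): Picard iteration of the
# self-consistency map rho -> exp(-(k1 V1 + k2 V2*rho))/Z on U = [0, 1], followed by
# evaluation of the L^2 decay rate r_{kappa_2} appearing in the theorem.
import numpy as np

r = np.linspace(0.0, 1.0, 400)
h = r[1] - r[0]
k1, k2 = 0.0, 0.2                                  # kappa_1 = 0 case; assumed value of kappa_2
V1 = np.zeros_like(r)                              # no confining potential
V2 = lambda x: 0.5 * np.cos(2.0 * np.pi * x)       # hypothetical bounded two-body potential V2(r - r')

def conv_V2(rho):
    # (V2 * rho)(r) = \int_U V2(r - r') rho(r') dr', by simple quadrature
    return np.array([np.sum(V2(ri - r) * rho) * h for ri in r])

rho = np.ones_like(r)                              # initial guess: the uniform probability density
for _ in range(200):                               # Picard / fixed-point iteration
    weight = np.exp(-(k1 * V1 + k2 * conv_V2(rho)))
    rho_new = weight / (np.sum(weight) * h)        # normalise so that ||rho||_{L^1(U)} = 1
    converged = np.max(np.abs(rho_new - rho)) < 1e-12
    rho = rho_new
    if converged:
        break

# Rate from the theorem: r_{k2} = mu_min c_pw^{-2} - 2 mu_max |k2|^2 (e + 1) ||grad V2||_inf^2.
mu_min = mu_max = 1.0                              # D taken as the identity (assumed)
c_pw = 1.0 / np.pi                                 # Poincare-Wirtinger constant of U = [0, 1]
grad_V2_inf = np.pi                                # sup |V2'| for the model potential above
rate = mu_min / c_pw**2 - 2.0 * mu_max * k2**2 * (np.e + 1.0) * grad_V2_inf**2
print("mass of the computed stationary density:", np.sum(rho) * h)
print("predicted L^2 decay rate r_{kappa_2}:", rate)
```

For these values the smallness condition of the theorem holds and $r_{\kappa_2}>0$, consistent with exponential decay. We now turn to the proof.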
Let $\psi = \varrho-\varrho_\infty$, then the evolution equation for $\psi$ may be written $$\begin{aligned} \label{eq:exp_conv_in_l2_bound_1} \partial_t\psi - \nabla_{\vec{r}}\cdot[\bm{D}\,\nabla_{\vec{r}}\psi] = \kappa_2 \nabla_{\vec{r}}\cdot[\bm{D}\,(\varrho_\infty\nabla_\vec{r}V_2\star \psi +\psi\nabla_{\vec{r}}V_2\star \varrho)].\end{aligned}$$ Multiplying by $\psi$, integrating and using the boundary condition $\Pi[\psi]\cdot\vec{n} = 0$ on $\partial U\times [0,T]$ we obtain $$\begin{aligned} &\tfrac{1}{2}\der[]{t}\|\psi (t)\|^2_{L^2(U)} +\|\bm{D}^{1/2}\nabla_{\vec{r}}\psi \|_{L^2(U)}^2 \nonumber\\ &\leq \int_U\mathrm{d}\vec{r}\,|\bm{D}^{1/2}\nabla_{\vec{r}}\psi |\times |\kappa_2 \bm{D}^{1/2}(\varrho_\infty\nabla_\vec{r}V_2\star \psi +\psi\nabla_{\vec{r}}V_2\star \varrho)|. \end{aligned}$$ Using H[ö]{}lder’s inequality on the right hand side this becomes $$\begin{aligned} &\tfrac{1}{2}\der[]{t}\|\psi (t)\|^2_{L^2(U)} +\|\bm{D}^{1/2}\nabla_{\vec{r}}\psi \|_{L^2(U)}^2 \nonumber\\ &\leq \|\bm{D}^{1/2}\nabla_{\vec{r}}\psi \|_{L^2(U)}\times \|\kappa_2 \bm{D}^{1/2}(\varrho_\infty\nabla_\vec{r}V_2\star \psi +\psi\nabla_{\vec{r}}V_2\star \varrho)\|_{L^2(U)}. \end{aligned}$$ Now using Young’s inequality twice on the right hand side we obtain $$\begin{aligned} &\tfrac{1}{2}\der[]{t}\|\psi (t)\|^2_{L^2(U)} +\|\bm{D}^{1/2}\nabla_{\vec{r}}\psi \|_{L^2(U)}^2\nonumber\\ & \leq \tfrac{1}{2}\|\bm{D}^{1/2}\nabla_{\vec{r}}\psi \|_{L^2(U)}^2 + \tfrac{1}{2} \|\bm{D}^{1/2}(\varrho_\infty\nabla_\vec{r}V_2\star \psi +\psi\nabla_{\vec{r}}V_2\star \varrho)\|_{L^2(U)}^2 \nonumber \\ & \leq \tfrac{1}{2}\|\bm{D}^{1/2}\nabla_{\vec{r}}\psi \|_{L^2(U)}^2 + |\kappa_2|^2 \|\varrho_\infty\bm{D}^{1/2}\nabla_{\vec{r}}V_2\star \psi \|^2_{L^2(U)} + |\kappa_2|^2 \|\psi\bm{D}^{1/2}\nabla_{\vec{r}}V_2\star \varrho \|^2_{L^2(U)}.\label{eq:l2_trend_inq_1}\end{aligned}$$ From the positive definiteness and boundedness of the diffusion tensor, we have $\mu_{\min}\leq \|\bm{D}\|_{L^\infty(U)}\leq \mu_{\max}$. We also have the following bounds in terms of $\|\psi\|_{L^2(U)}^2$ $$\begin{aligned} &\|\psi\bm{D}^{1/2}\nabla_{\vec{r}}V_2\star \varrho \|^2_{L^2(U)}\leq\mu_{\max}\|\nabla_{\vec{r}}V_2\|_{L^\infty(U)}^2\|\psi\|_{L^2(U)}^2\label{eq:l2_trend_formula_1a}\\ &\|\varrho_\infty\bm{D}^{1/2}\nabla_{\vec{r}}V_2\star \psi \|^2_{L^2(U)}\leq |U|\mu_{\max}\|\varrho_\infty\|_{L^2(U)}^2\|\nabla_{\vec{r}}V_2\|^2_{L^\infty(U)}\|\psi \|_{L^2(U)}^2\label{eq:l2_trend_formula_1b}\end{aligned}$$ where $|U|$ denotes the size of $U$ and in we have used that $\nabla V_2 \star \varrho \leq \| \nabla V_2\|_{L^\infty(U)} \| \varrho \|_{L^1(U)}$ and the fact that $\varrho$ is a probability density with $\| \varrho \|_{L^1} =1$ (see Corollary \[cor:L\_1\_varrho\_is\_1\]). To obtain we use that $$\|\varrho_\infty\bm{D}^{1/2}\nabla_{\vec{r}}V_2\star \psi \|^2_{L^2(U)}\leq \mu_{\max}\|\varrho_\infty\|_{L^2(U)}^2\|\nabla_{\vec{r}}V_2\|^2_{L^\infty(U)} \int_U \mathrm{d}\vec{r}\, \Big| \rho_\infty(\vec{r}) \int \mathrm{d}\vec{r} ' \, \psi(\vec{r}') \Big|^2.$$ We then note that, by Hölder’s inequality, $ \int \mathrm{d}\vec{r} ' \, \psi(\vec{r}') \leq \|\psi\|_{L^2}\|1\|_{L^2(U)} = |U|^{1/2} \|\psi\|_{L^2}$, which gives the result. 
For it remains to bound the stationary distribution $\varrho_\infty$, which is not known explicitly, in $L^2(U)$; to do this we observe that by the self-consistency equation $$\begin{aligned} \|\varrho_\infty\|_{L^2(U)}^2\leq \frac{|U|\times e^{2|\kappa_2|\|V_2\|_{L^\infty}}}{|U|^2\times e^{-2|\kappa_2|\|V_2\|_{L^\infty}}}.\label{eq:l2_trend_formula_2}\end{aligned}$$ Using , and the bounds on $\bm{D}$, inequality becomes $$\begin{aligned} \tfrac{1}{2}\der[]{t}\|\psi (t)\|^2_{L^2(U)}&\leq -\tfrac{\mu_{\min}}{2}\|\nabla_{\vec{r}}\psi \|_{L^2(U)}^2\nonumber\\ &\quad+ \mu_{\max}|\kappa_2|^2(e^{4|\kappa_2|\|V_2\|_{L^\infty}}+1)\|\nabla_{\vec{r}}V_2\|_{L^{\infty}(U)}^2\|\psi \|^2_{L^2(U)}. \end{aligned}$$ Now since $\psi$ has mean zero we may use the Poincar[é]{}–Wirtinger inequality to write $$\begin{aligned} \der[]{t}\|\psi (t)\|^2_{L^2(U)}&\leq -\mu_{\min}c^{-2}_{pw}\|\psi \|_{L^2(U)}^2\nonumber\\ &\quad+ 2\mu_{\max}|\kappa_2|^2(e^{4|\kappa_2|\|V_2\|_{L^\infty}}+1)\|\nabla_{\vec{r}}V_2\|_{L^{\infty}(U)}^2\|\psi \|^2_{L^2(U)}.\end{aligned}$$ Finally, by Gr[ö]{}nwall's lemma [@evans2002partial], we obtain $$\begin{aligned} &\|\psi (t)\|^2_{L^2(U)}\nonumber\\ &\leq \|\psi(0)\|_{L^2(U)}^2\exp\left\lbrace -(\mu_{\min}c^{-2}_{pw}- 2\mu_{\max}|\kappa_2|^2(e^{4|\kappa_2|\|V_2\|_{L^\infty}}+1)\|\nabla_{\vec{r}}V_2\|_{L^{\infty}(U)}^2)t\right\rbrace. \label{eq:l2_trend_gronwall_1}\end{aligned}$$ Therefore, for any stationary density $\varrho_\ast$, a sufficient condition for exponential convergence $\varrho\to\varrho_\ast$ in $L^2(U)$ as $t\to\infty$ is $$\label{eq:gen_kappa_2_ineq_for_exp_convergence} \mu_{\min}c^{-2}_{pw}- 2\mu_{\max}|\kappa_2|^2(e^{4|\kappa_2|\|V_2\|_{L^\infty}}+1)\|\nabla_{\vec{r}}V_2\|_{L^{\infty}(U)}^2>0.$$ It will now be seen that, under the assumption that $\varrho_\infty$ is the unique stationary density with $|\kappa_2| \leq \|V_2\|_{L^\infty}^{-1}/4$, we may obtain an explicit condition for $|\kappa_2|$. In particular becomes $$\begin{aligned} \|\psi (t)\|^2_{L^2(U)}\leq \|\psi(0)\|_{L^2(U)}^2 \exp\left\lbrace -(\mu_{\min}c^{-2}_{pw}- 2\mu_{\max}|\kappa_2|^2(e+1)\|\nabla_{\vec{r}}V_2\|_{L^{\infty}(U)}^2)t\right\rbrace. \end{aligned}$$\[eq:l2\_trend\_gronwall\_2\] Then to ensure the argument in the exponential remains negative, we require $$\begin{aligned} |\kappa_2|^2<\frac{\mu_{\min}c_{pw}^{-2}\|\nabla_{\vec{r}}V_2\|^{-2}_{L^\infty}}{2(1+e)\mu_{\max}}.\end{aligned}$$ This completes the proof of the theorem. We remark that $\psi\in \left\lbrace u\in H^1(U)\, | \,\int_U\mathrm{d}\vec{r}\,u = 0 \right\rbrace$, therefore, we may determine that the sharpest value of $c_{pw}$ coincides with the Poincar[é]{} constant as found by Steklov [@kuznetsov2015sharp], equal to $\nu_1^{-1/2}$ where $\nu_1$ is the smallest non-zero eigenvalue of the problem $$\begin{aligned} \Delta u &= -\nu u \quad \text{ in } U,\\ \partial_{\vec{n}}u & = 0 \qquad \,\,\,\, \text{ on } \partial U.\end{aligned}$$ Here $\partial_{\vec{n}}$ is the directional derivative along the unit vector $\vec{n}$ pointing out of the domain $U$. Additionally Payne and Weinberger [@payne1960optimal] proved that for convex domains in $\mathbb{R}^n$ one has $c_{pw}\leq \tfrac{\mathrm{diam}(U)}{\pi}$. One may obtain a similar convergence result including a confining potential as given by the following corollary.\ \[prop:general\_kappa1\_convergence\] Let $\kappa_1\neq 0$ and let $\varrho\in C^1([0,\infty];C^2(U))$ be a solution of with initial data $\varrho_0\in L^2(U)$ a probability density. 
Then the exponential convergence $\varrho\to\varrho_\infty$ in $L^2$ criterion is modified to $$\begin{aligned} \mu_{\max}\kappa_1^2\|\nabla_{\vec{r}}V_1\|_{L^\infty(U)}^2<r_{\kappa_2}\end{aligned}$$ along with $| \kappa_2 | \leq 1/4\times \|V_2\|_{L^\infty(U)}^{-1}$. In particular the convergence in $L^2$ is given by $$\begin{aligned} \|\varrho(\cdot, t)-\varrho_\infty(\cdot)\|^2_{L^2(U)}\leq \|\varrho_0(\cdot)-\varrho_\infty(\cdot)\|_{L^2(U)}^2 e^{-(r_{\kappa_2}-\mu_{\max}\kappa_1^2\|\nabla_{\vec{r}}V_1\|_{L^\infty(U)}^2)t}.\end{aligned}$$ Since the inclusion of an external field is linear in the PDEs , the proof is similar to Lemma \[lem:exp\_conv\_in\_L2\], the only additional term to bound in the evolution equation for $\psi$, first occurring at , being $$\begin{aligned} \kappa_1^2\|\psi\boldsymbol{D}^{1/2}\nabla_{\vec{r}}V_1\|_{L^2(U)}^2 \leq \kappa_1^2\mu_{\max}\|\nabla V_1\|_{L^\infty(U)}^2\|\psi\|_{L^2(U)}^2.\end{aligned}$$ The remainder of the calculations to derive a Gr[ö]{}nwall type inequality including this term are similar. Global Asymptotic Stability {#sec:global_asymptotic_stability} =========================== In this section we study the stability properties of stationary states. We start by showing the free energy is a strictly convex functional, provided $\kappa_2$ is sufficiently small, and that $\mathcal{F}$ is bounded below. Recall the free energy functional $\mathcal{F}:P^+_{\text{ac}}(U)\to \mathbb{R}$ is given by $$\begin{aligned} \mathcal{F}[\varrho]&:=\int_U\mathrm{d}\vec{r}\,\varrho(\vec{r})\log\varrho(\vec{r})+\kappa_1\int_U\mathrm{d}\vec{r}\,V_1(\vec{r})\varrho(\vec{r})+\frac{\kappa_2}{2}\int_U\mathrm{d}\vec{r}\,\varrho(\vec{r})V_2\star\varrho(\vec{r}). \nonumber\end{aligned}$$\ \[prop:F\_is\_strictly\_convex\] For $|\kappa_2|\in [0,\|V_2\|_{L^\infty(U)}^{-1})$ the free energy functional $\mathcal{F}$ is strictly convex. Additionally there exists a positive constant $B_0<\infty$ such that $\mathcal{F}[\varrho] \geq -B_0$ for every $\varrho\in P_{\text{ac}}^+$. Suppose $\varrho_1$ and $\varrho_2$ satisfy with $\Pi[\varrho_1]\cdot\vec{n} = \Pi[\varrho_2]\cdot\vec{n} = 0$ on $\partial U$ for all $t\in [0,\infty)$. Letting $\zeta = \varrho_2-\varrho_1$ and $\varrho_s = (1-s)\varrho_1+s\varrho_2$ we compute $\tfrac{\mathrm{d}^2}{\mathrm{d}s^2}\mathcal{F}[\varrho_s]$ by direct calculation $$\begin{aligned} \frac{\mathrm{d}^2}{\mathrm{d}s^2}\mathcal{F}[\varrho_s] &= \frac{\mathrm{d}}{\mathrm{d}s}\frac{\mathrm{d}}{\mathrm{d}s}\left[\int_U\mathrm{d}\vec{r}\,\varrho_s\log\varrho_s +\kappa_1 \int_U\mathrm{d}\vec{r}\,\varrho_s V_1 +\frac{\kappa_2}{2} \int_U\mathrm{d}\vec{r}\,\varrho_s V_2\star\varrho_s\right]\nonumber\\ &= \frac{\mathrm{d}}{\mathrm{d}s}\left[\int_U\mathrm{d}\vec{r}\,\zeta \log\varrho_s +\zeta \right.\nonumber\\ & \qquad \left. + \kappa_1\int_U\mathrm{d}\vec{r}\,\zeta V_1 + \frac{\kappa_2}{2}\int_U\mathrm{d}\vec{r}\,\zeta V_2\star\varrho_s + \frac{\kappa_2}{2}\int_U\mathrm{d}\vec{r}\,\varrho_s V_2\star\zeta\right]\nonumber\\ &=\int_U\mathrm{d}\vec{r}\,\frac{\zeta^2}{\varrho_s} + \kappa_2\int_U\mathrm{d}\vec{r}\,\zeta V_2\star \zeta.\end{aligned}$$ Now using the measure $\mathrm{d}\mu = \varrho_s\mathrm{d}\vec{r}$ we have, by Jensen's inequality, $$\begin{aligned} \int_U\mathrm{d}\vec{r}\,\frac{\zeta^2}{\varrho_s} = \int_U\mathrm{d}\mu\,\frac{\zeta^2}{\varrho_s^2} \geq \left(\int_U\mathrm{d}\vec{r}\,|\zeta|\right)^2.\end{aligned}$$ We also have that $V_2$ is bounded below by the negative of its essential supremum from . 
Combining these facts we find $$\begin{aligned} \label{eq:convexity_condition_of_F} \frac{\mathrm{d}^2}{\mathrm{d}s^2}\mathcal{F}[\varrho_s] \geq (1-|\kappa_2|\|V_2\|_{L^\infty(U)})\left(\int_U\mathrm{d}\vec{r}\,|\zeta|\right)^2\end{aligned}$$ and we therefore find that, for $\kappa_2$ such that $|\kappa_2|< \|V_2\|_{L^\infty(U)}^{-1}$, the free energy functional $\mathcal{F}$ is strictly convex. Now let $\varrho\in P^+_{\text{ac}}$ and observe that $$\begin{aligned} \mathcal{F}[\varrho]\geq -\Big|\int_U\mathrm{d}\vec{r}\,\varrho\log\varrho\Big| -|\kappa_1|\|V_1\|_{L^\infty(U)}-\tfrac{|\kappa_2|}{2}\|V_2\|_{L^\infty(U)}.\end{aligned}$$ The entropy density $\varrho\log\varrho$ is continuous and bounded below on $U$ by $-1/e$, so that $\int_U\mathrm{d}\vec{r}\,\varrho\log\varrho\geq -|U|/e$, and therefore we have that $$\begin{aligned} \mathcal{F}[\varrho]\geq -\tfrac{|U|}{e}-|\kappa_1|\|V_1\|_{L^\infty(U)}-\tfrac{|\kappa_2|}{2}\|V_2\|_{L^\infty(U)}>-\infty,\end{aligned}$$ where we have used the assumptions on the potentials in . Hence $\mathcal{F}[\cdot]$ is bounded below. Note that the convexity condition in Proposition \[prop:F\_is\_strictly\_convex\] holds independently of the confining potential $V_1$. We therefore have the following Corollary for the total free energy $\mathcal{F}-\int_U\mathrm{d}\vec{r}\,\mu_{c}\varrho$.\ \[cor:the\_free\_energy\_is\_convex\_and\_bounded\_below\] The total free energy $\mathcal{F}-\int\mathrm{d}\vec{r}\,\mu_{c}\varrho$ is strictly convex for $|\kappa_2|\in [0,\|V_2\|_{L^\infty(U)}^{-1})$ and bounded below. We now provide a useful Lemma which will be used eventually to show that $\mathcal{F}$ always has a minimiser, for any $\kappa_2$ (see Lemma \[lem:minimisers\_always\_exist\]).\ \[lem:can\_pick\_a\_smaller\_free\_energy\_density\] Let $V_1$, $V_2$ satisfy the assumptions . Then there exists a positive constant $B_0$ such that for every $\varrho\in P_{ac}(U)$ with $\|\varrho\|_{L^\infty(U)}>B_0$ there exists some $\varrho^\dagger \in P_{ac}(U)$ with $\|\varrho^{\dagger}\|_{L^\infty(U)}\leq B_0$ such that $$\begin{aligned} \mathcal{F}(\varrho^\dagger)<\mathcal{F}(\varrho).\end{aligned}$$ For a proof see [@greg_mckean_vlasov_torus Lemma 2.5] or [@chayes2010mckean Lemma 2.1]; the only modification required is to include $V_1$, which by assumption is bounded below, and the proof follows a similar argument. We now show that minimisers of $\mathcal{F}$ exist for all $\kappa_2$. First we define the integral operator $\mathcal{R}$ which will be useful for the following calculations.\ \[def:def\_of\_R\_op\] Let $\mathcal{R}:L^1(U)\to L^1(U)$ be given by $$\begin{aligned} \label{eq:def_of_R_operator} \mathcal{R}u = -\varrho V_2\star u. \end{aligned}$$ We note that $\mathcal{R}$ is a compact (since $V_2$ is uniformly bounded in $L^\infty(U)$) self-adjoint operator in $L^2(U,\varrho^{-1})$. We label its eigenvalues $\{\beta_n^{-1}\}_{n=1}^\infty$ and eigenfunctions $\{u_n\}_{n=1}^\infty$ satisfying $$\begin{aligned} \label{eq:eigenvalue_problem_for_R} \mathcal{R}u_n = \beta_n^{-1}u_n.\end{aligned}$$ \[lem:minimisers\_always\_exist\] Let $\kappa_2\in (-\infty,\infty)$ and let $V_1$, $V_2$ satisfy the assumptions . Then there exists a $\varrho\in P_{ac}(U)$ that minimises $\mathcal{F}$. Since $\mathcal{F}$ is bounded below there exists a minimising sequence $\{\varrho_j\}_{j=1}^\infty \subset P_{ac}(U)$ such that $\mathcal{F}(\varrho_{j+1})\leq\mathcal{F}(\varrho_{j})$ and $\mathcal{F}(\varrho_j)\to \inf_{\varrho\in P_{ac}(U)}\mathcal{F}[\varrho]$. 
Therefore, by Lemma \[lem:can\_pick\_a\_smaller\_free\_energy\_density\] $\{\varrho_j\}_{j = 1}^\infty$ may be chosen such that $\|\varrho_j\|_{L^2}(U)\leq \|\varrho_j\|^2_{L^\infty(U)}|U|$. Now by the Eberlein-Smuljan theorem, since $\{\varrho_j\}_{j = 1}^\infty$ is bounded, there exists a subsequence (which we will denote again by $\{\varrho_j\}_{j = 1}^\infty$) such that $\varrho_j \rightharpoonup \varrho_\ast$ weakly in $L^2$ to some $\varrho_\ast$. Therefore $\lim_{j\to \infty}\int_U\mathrm{d}\vec{r}\,\eta(\varrho_j-\varrho_\ast) = 0$ for every $\eta\in L^2(U)$, so in particular for $\eta = 1$ we obtain $\lim_{j\to \infty}\int_U\mathrm{d}\vec{r}\, \varrho_j = 1 = \int_U\mathrm{d}\vec{r}\, \varrho_\ast$. Additionally we note that $|\varrho_j| \rightharpoonup |\varrho_\ast|$ in $L^2(U)$, and therefore $\|\varrho_\ast\|_{L^1(U)} = 1$, which is enough to show that $\varrho_\ast\geq 0 $ a.e. by standard arguments (see, for example, the proof of Corollary \[cor:L\_1\_varrho\_is\_1\]). We define $\Lambda:P_{ac}\to \mathbb R$ such that $$\begin{aligned} \Lambda(z) := \int_U\mathrm{d}\vec{r}\,zV_2\star z.\end{aligned}$$ Now let $\varrho_{\beta_{n}}\in L^1(U)$ be a solution to , which is known to exist by Lemma \[thm:exis\_fix\_point\]. Note that $\varrho_{\beta_{n}}$ need not be a minimiser of $\mathcal{F}$ and may be an inflection point or local maximum. Additionally since $\varrho_{\beta_{n}}\in L^1(U)$ solves , we have that $\varrho_{\beta_{n}}>e^{-(|\kappa_1|\|V_1\|_{L^\infty(U)}+|\beta_{n}|\|V_2\|_{L^\infty(U)})}/Z>0$ (where $Z$ is a normalisation constant) and therefore there exists $\delta\in \mathbb{R}^+$ such that $\varrho_{\beta_{n}}>\delta$ for every $\vec{r}\in U$. Now we estimate the interaction energy difference by $$\begin{aligned} |\Lambda(\varrho_j)-\Lambda(\varrho_\ast)| &\leq \sum_{n=1}^N|\beta_n^{-1}|\Big|\langle \varrho_j,w_n\rangle_{L^2(U,\varrho^{-1}_{\beta_{n}})}-\langle \varrho_\ast,w_n\rangle_{L^2(U,\varrho^{-1}_{\beta_{n}})}\Big| + 2 |\beta_N^{-1}|\delta^{-1}B_0\nonumber\\ & \leq 2\delta^{-1} B_0 \sum_{n=1}^N \langle\varrho_j -\varrho_\ast, w_n\rangle_{L^2(U)} + 2 |\beta_N^{-1}|\delta^{-1} B_0\end{aligned}$$ where we have used the fact that the integrand of $\Lambda(z)$ is equal to $\mathcal{R}$ acting on $z \in P_{ac}$. Additionally we have used that $\mathcal{R}$ is self-adjoint in $L^2(U,\varrho^{-1}_{\beta_{n}})$, to write $\mathcal{R}$ as a projection onto its eigenvectors $\{w_n\}_{n=1}^{\infty}$ and bounded the tail of the infinite sum using Bessel’s inequality. Now since $\mathcal{R}$ is self-adjoint in $L^2(U, \varrho_{\beta_n}^{-1})$ we have that (after reordering) $|\beta_n^{-1}|\to 0$ as $n \to \infty$ so the second term may be made arbitrarily small. The first term may be made arbitrarily small by taking the limit $j \to \infty$ inside the finite sum and using that $\varrho_j\rightharpoonup \varrho_{\ast}$ weakly in $L^2(U)$. This shows that $\Lambda(\cdot)$ is continuous in $\varrho$. Additionally, for the external energy, we have $$\begin{aligned} \Big|\int_U\mathrm{d}\vec{r}\, V_1(\vec{r})\varrho_j(\vec{r})-\int_U\mathrm{d}\vec{r}\, V_1(\vec{r})\varrho_\ast(\vec{r})\Big| & = \Big|\int_U\mathrm{d}\vec{r}\, V_1(\vec{r})(\varrho_j(\vec{r})-\varrho_\ast(\vec{r}))\Big| \nonumber\\ & \leq \Big|\int_U\mathrm{d}\vec{r}\, V_1(\vec{r})(\varrho_j(\vec{r})-\varrho_\ast(\vec{r}))\Big| \to 0\end{aligned}$$ as $j\to \infty$. The lower semicontinuity of the entropy term in follows from standard results [@jost1998calculus Lemma 4.3.1]. 
Therefore the free energy $\mathcal{F}[\varrho]$ has a minimiser $\varrho$ over $P_{ac}(U)$. We may refine this result to show that minimisers are attained in $P^+_{ac}(U)$ with the following lemma.\ \[lem:minimisers\_are\_positive\] Let $\varrho\in P_{ac}(U)\backslash P^+_{ac}(U)$. Then there exists $\varrho^\dagger\in P^+_{ac}(U)$ such that $\mathcal{F}[\varrho^\dagger]<\mathcal{F}[\varrho]$. The proof is similar to [@greg_mckean_vlasov_torus Lemma 2.6]. One must show that the potential energy for a $P^+_{ac}(U)$ density may be bounded by the potential energy of a $ P_{ac}(U)$ density. We let $\epsilon>0$ and define the competition state $\varrho_\epsilon$ such that $$\begin{aligned} \varrho_\epsilon(\vec{r}) = \frac{(\varrho(\vec{r})+\epsilon\mathbb{I}_{\mathbb{B}_0}(\vec{r}))}{1+\epsilon |\mathbb{B}_0|}\end{aligned}$$ where $\mathbb{B}_0 = \{\vec{r}\in U \, :\, \varrho(\vec{r}) = 0\}$ and since by assumption $\varrho \notin P^+_{ac}(U)$ one has $|\mathbb{B}_0|>0$ and $\varrho_\epsilon\in P^+_{ac}(U)$. Then we obtain that $$\begin{aligned} \int_U\mathrm{d}\vec{r}\, V_1 \varrho_\epsilon \leq \int_U\mathrm{d}\vec{r}\, V_1 \varrho + \epsilon |\mathbb{B}_0|.\end{aligned}$$ Using this bound, together with the result [@greg_mckean_vlasov_torus Lemma 2.6] we obtain the required result. Exponential Convergence to Equilibrium in Relative Entropy. ----------------------------------------------------------- In this section we derive an H-theorem which guarantees that the time evolution of the dynamics converges to the equilibrium distribution given by the self-consistency equation. First consider the time derivative of the integral of the free energy $$\begin{aligned} &\der[]{t} \mathcal{F}[\varrho] = \int_U \mathrm{d}\vec{r}\, \partial_t \varrho\, \tfrac{\delta\mathcal{F}[\varrho] }{\delta \varrho} = \int_U \mathrm{d}\vec{r}\,\nabla\cdot\left[\bm{D}(\vec{r},t)\varrho(\vec{r},t)\nabla_{\vec{r}}\tfrac{\delta\mathcal{F}[\varrho] }{\delta \varrho} \right]\tfrac{\delta\mathcal{F}[\varrho] }{\delta \varrho} \nonumber\\ &=-\int_U \mathrm{d}\vec{r}\,\Big|\bm{D}(\vec{r},t)^{1/2}\varrho(\vec{r},t)^{1/2}\nabla_{\vec{r}}\tfrac{\delta\mathcal{F}[\varrho] }{\delta \varrho} \Big|^2,\end{aligned}$$ where we have integrated by parts and used the boundary condition $\Pi\varrho \cdot\vec{n}\big|_{\partial_U}= 0$ or $\varrho\to 0$ as $|\vec{r}|\to\infty$ for bounded and unbounded domains respectively. Here we see that as long as both $\bm{D}(\vec{r},t)$ and $\varrho(\vec{r},t)$ remain positive definite then the free energy is monotonically decreasing in time. Indeed the diffusion tensor $\bm{D}$ is positive definite as proven in [@goddard2012overdamped] and we will show strict positivity of $\varrho(\vec{r},t)$ in Section \[subsec:strict\_pos\_rho\]. We now introduce the relative entropy functional $$\begin{aligned} \label{eq:def_of_rel_entropy_func} \mathcal{H}[\varrho |\varrho_\infty]:=\int_{U}\mathrm{d}\vec{r}\, \varrho \log\left(\frac{\varrho}{\varrho_\infty}\right),\end{aligned}$$ and obtain the following theorem for convergence to equilibrium in relative entropy. \[thm:rel\_entropy\_convergence\] Let $V_1$ be convex, $|\kappa_2|<\tfrac{1}{4}\|V_2\|_{L^\infty(U)}^{-1}$ and $\varrho\in C^1([0,\infty];C^2(U))$ be a classical solution to equation . 
If $\kappa_2^2 < \frac{c_{ls}^{-1}}{2 \| \nabla V_2 \|_{L^\infty(U)}^2}$ then $\varrho$ is exponentially stable in relative entropy and it holds that $$\begin{aligned} \mathcal{H}[\varrho |\varrho_\infty]\leq \mathcal{H}[\varrho_0|\varrho_\infty] e^{-\tfrac{1}{2}(c_{ls}^{-1}-2|\kappa_2|^2 \|\nabla V_2\|_{L^\infty(U)}^2)t},\end{aligned}$$ where $c_{ls}>0$ is the log-Sobolev constant for the measure $\mu$. By direct calculation we find $$\begin{aligned} \label{eq:H_time_der} \frac{\mathrm{d} \mathcal{H}[\varrho |\varrho_\infty]}{\mathrm{d}t} &= \int_U\mathrm{d}\vec{r}\, \partial_t\left( \varrho\log\left(\frac{\varrho}{\varrho_\infty}\right)\right) = \int_U\mathrm{d}\vec{r}\, \partial_t\varrho \log\left(\frac{\varrho}{\varrho_\infty}\right) + \int_U\mathrm{d}\vec{r}\,\partial_t \varrho\nonumber\\ & = \int_U\mathrm{d}\vec{r}\, \partial_t\varrho \log\left(\frac{\varrho}{\varrho_\infty}\right) + \frac{\mathrm{d}M}{\mathrm{d}t} = - \int_U\mathrm{d}\vec{r}\, \varrho\nabla \frac{\delta \mathcal{F}[\varrho]}{\delta \varrho}\cdot \nabla \log\left(\frac{\varrho}{\varrho_\infty}\right) + 0 \nonumber\\ &= - \int_U\mathrm{d}\vec{r}\, \varrho \left(\nabla \log\varrho + \kappa_1 \nabla V_1 +\kappa_2\nabla V_2\star \varrho\right)\cdot \nabla \log\left(\frac{\varrho}{\varrho_\infty}\right)\nonumber\\ & = - \int_U\mathrm{d}\vec{r}\, \varrho \left(\nabla \log\varrho +\kappa_2\nabla V_2\star \varrho-\left( \nabla \log \varrho_\infty +\kappa_2\nabla V_2\star \varrho_\infty\right)\right)\cdot \nabla \log\left(\frac{\varrho}{\varrho_\infty}\right)\nonumber\\ & =- \int_U\mathrm{d}\vec{r}\, \varrho \left(\nabla \log\left(\frac{\varrho}{\varrho_\infty}\right) +\kappa_2\nabla V_2\star (\varrho-\varrho_\infty)\right)\cdot \nabla \log\left(\frac{\varrho}{\varrho_\infty}\right)\end{aligned}$$ where we have used the no-flux boundary condition and the self-consistency equation $\nabla \log\varrho_\infty + \kappa_1 \nabla V_1 +\kappa_2\nabla V_2\star \varrho_\infty = 0$. Note that the contribution from the $V_1$ term is constant, independent of $\rho$, and so cancels after using the self-consistency equation. 
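As an aside, the identity just derived can be checked directly on discretised test densities; the sketch below (hypothetical densities and a hypothetical bounded potential on $U=[0,1]$, both chosen purely for illustration) evaluates the relative entropy and the formal dissipation on the right-hand side, which is negative for small $\kappa_2$.

```python
# Minimal sketch (hypothetical test data throughout): evaluate the relative entropy
# H[rho | rho_inf] and the formal dissipation identity derived above on a grid over
# U = [0, 1], checking that the dissipation is negative for a small kappa_2.
import numpy as np

r = np.linspace(0.0, 1.0, 800)
h = r[1] - r[0]
k2 = 0.05                                        # small interaction strength (assumed)
V2 = lambda x: 0.5 * np.cos(2.0 * np.pi * x)     # hypothetical bounded two-body potential

def normalise(f):
    return f / (np.sum(f) * h)

rho_inf = normalise(np.exp(-0.3 * np.cos(2.0 * np.pi * r)))          # stand-in stationary density
rho = normalise(np.exp(-0.3 * np.cos(2.0 * np.pi * r)
                       + 0.2 * np.sin(4.0 * np.pi * r)))             # perturbed test state

H = np.sum(rho * np.log(rho / rho_inf)) * h      # relative entropy H[rho | rho_inf] >= 0

grad_log = np.gradient(np.log(rho / rho_inf), h)                     # d/dr log(rho / rho_inf)
conv = np.array([np.sum(V2(ri - r) * (rho - rho_inf)) * h for ri in r])  # V2 * (rho - rho_inf)
grad_conv = np.gradient(conv, h)

dissipation = -np.sum(rho * (grad_log + k2 * grad_conv) * grad_log) * h  # formal dH/dt
print(f"H[rho|rho_inf] = {H:.4e},  formal dH/dt = {dissipation:.4e}")
```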
Continuing by expanding out the integrand and using Hölder’s inequality we obtain $$\begin{aligned} \frac{\mathrm{d} \mathcal{H}[\varrho |\varrho_\infty]}{\mathrm{d}t} &= - \int_U\mathrm{d}\vec{r}\, \varrho \Big| \nabla \log\left(\frac{\varrho}{\varrho_\infty}\right) \Big|^2 + \kappa_2 \int_U\mathrm{d}\vec{r}\, \varrho \nabla\log \left(\frac{\varrho}{\varrho_\infty}\right) \cdot \nabla V_2\star (\varrho- \varrho_\infty)\nonumber\\ &\leq -\int_U\mathrm{d}\vec{r}\, \varrho \Big| \nabla \log\left(\frac{\varrho}{\varrho_\infty}\right)\Big|^2\nonumber\\ &\qquad + \left[\int_U\mathrm{d}\vec{r}\, \varrho \Big|\nabla\log \left(\frac{\varrho}{\varrho_\infty}\right)\Big|^2\right]^{1/2} \times \left(\kappa_2^2 \int_U\mathrm{d}\vec{r}\, \varrho |\nabla V_2\star (\varrho-\varrho_\infty)|^2\right)^{1/2}.\end{aligned}$$ Now, by Young’s inequality, $$\begin{aligned} \frac{\mathrm{d} \mathcal{H}[\varrho |\varrho_\infty]}{\mathrm{d}t} \leq -\tfrac{1}{2}\int_U\mathrm{d}\vec{r}\, \varrho \Big| \nabla \log\left(\frac{\varrho}{\varrho_\infty}\right)\Big|^2 +\tfrac{\kappa_2^2}{2} \int_U\mathrm{d}\vec{r}\, \varrho |\nabla V_2\star (\varrho-\varrho_\infty)|^2\end{aligned}$$ and we may estimate the second term on the right hand side (in particular using that $\int_U \rho = 1$ from Corollary \[cor:L\_1\_varrho\_is\_1\]), giving $$\begin{aligned} \label{eq:rel_entropy_proof_first_bound} \frac{\mathrm{d} \mathcal{H}[\varrho |\varrho_\infty]}{\mathrm{d}t} \leq -\tfrac{1}{2}\int_U\mathrm{d}\vec{r}\, \varrho \Big| \nabla \log\left(\frac{\varrho}{\varrho_\infty}\right)\Big|^2 + \tfrac{\kappa_2^2}{2}\|\nabla V_2\|_{L^\infty(U)}^2\|\varrho-\varrho_\infty \|_{L^1(U)}^2.\end{aligned}$$ We bound the first term as follows. Since $V_1$ is convex, we have $$\begin{aligned} \nabla_{\vec{r}}^2V_1\geq \theta_1>0\end{aligned}$$ for some $\theta_1\in \mathbb{R}^+$. Now by the Bakry–Émery criterion (see [@menz2014poincare Sec 3, Theorem 3.1], and [@malrieu2001logarithmic]) the measure $\mu'(\mathrm{d}\vec{r}) = \mathrm{d}\vec{r}\,e^{-\kappa_1 V_1}/Z$, where $Z$ is a normalisation constant, satisfies a log-Sobolev inequality (LSI) with constant $c_{ls}'$ such that $$\begin{aligned} \frac{1}{c_{ls}'}\geq \theta_1\kappa_1.\end{aligned}$$ However since $V_2$ is not in general a convex function, we cannot use the Bakry–Émery criterion for $\mu$ as defined in . 
However we may deduce an LSI using the Holley–Stroock perturbation lemma [@menz2014poincare Sec 3, Theorem 3.2] since $V_1+V_2\star\varrho_\infty$ is a bounded perturbation of $V_1$, in particular $$\begin{aligned} \Big|V_1+V_2\star\varrho_\infty\Big| \leq \Big|V_1\Big|+ \|V_2\|_{L^\infty}\|\varrho_\infty\|_{L^1(U)} <\infty.\end{aligned}$$ Therefore $\mu$ as defined in with $\varrho = \varrho_\infty$ (after appropriate nondimensionalisation) is unique and satisfies an LSI with constant $$\begin{aligned} c_{ls}^{-1} \geq \exp\left(-\kappa_1\kappa_2\,\text{Osc}\left[V_2\star \varrho_\infty\right]\right)\frac{1}{c_{ls}'}\end{aligned}$$ where $$\begin{aligned} \text{Osc}\left[V_2\star \varrho_\infty\right] = \sup V_2\star \varrho_\infty - \inf V_2\star \varrho_\infty.\end{aligned}$$ The constant $c_{ls}$ is such that for each $f:U\to \mathbb{R}^+$ one has $$\begin{aligned} \label{eq:LSI} \int_U f^2\log f^2\mathrm{d}\mu -\int_U f^2 \log \left(\int_U f^2 \mathrm{d}\mu\right)\mathrm{d}\mu \leq c_{ls}\int_U |\nabla f|^2\mathrm{d}\mu = c_{ls} \int_U f^2 | \nabla \log f^2 |^2 \mathrm{d} \mu.\end{aligned}$$ We let $f = \sqrt{\varrho/\varrho_\infty}$ and $\mathrm{d}\mu = \varrho_\infty \mathrm{d}\vec{r}$ and the second term on the left hand side of is zero (since, again, $\int_U \rho = 1$). Hence this shows that $$\mathcal{H}[\varrho |\varrho_\infty] = \int_U f^2\log f^2\mathrm{d}\mu \leq c_{ls} \int_U\mathrm{d}\vec{r}\, \varrho \Big| \nabla \log\left(\frac{\varrho}{\varrho_\infty}\right)\Big|^2.$$ We combine the LSI with Pinsker’s inequality [@bolley2005weighted] to deduce $$\begin{aligned} \frac{\mathrm{d} \mathcal{H}[\varrho |\varrho_\infty]}{\mathrm{d}t}\leq -\tfrac{1}{2}(c_{ls}^{-1}-2 \kappa_2^2\|\nabla V_2\|_{L^\infty(U)}^2) \mathcal{H}[\varrho|\varrho_\infty]. $$ Thus we obtain, by Grönwall’s inequality, $$\begin{aligned} \mathcal{H}[\varrho |\varrho_\infty]\leq \mathcal{H}[\varrho_0|\varrho_\infty] \exp[ -\tfrac{1}{2}(c_{ls}^{-1}-2\kappa_2^2 \|\nabla V_2\|_{L^\infty(U)}^2) t ]\end{aligned}$$ and the theorem is proved. The constant $c_{ls}$ is not known explicitly but may be estimated in terms of the convexity of $V_1$, $V_2$ and the curvature of $U$ [@chen1997estimates]. We now consider asymptotic expansions of the steady states for small interaction energy $\kappa_2$.\ Asymptotic Expansion of the Steady States For Weak Interactions. ---------------------------------------------------------------- We begin this section by recalling that steady states satisfy the self-consistency equation $$\begin{aligned} \label{eq:self_consis} \varrho = \frac{e^{-(\kappa_1 V_1+\kappa_2 V_2\star \varrho)}}{Z},\end{aligned}$$ where $Z = \int_U \mathrm{d}\vec{r}\,e^{-(\kappa_1V_1+\kappa_2V_2\star\varrho)}$. We know from Lemma \[thm:exis\_fix\_point\] that for sufficiently weak interactions, i.e. $|\kappa_2|<\tfrac{1}{4}\|V_2\|_{L^\infty(U)}^{-1}$, the stationary distribution is unique; equivalently, the nonlinear equation has a unique fixed point. Let $\kappa_2 \ll 1$; then the stationary solution $\varrho(\vec{r}) = \varrho_\infty(\vec{r})$ has the form $$\begin{aligned} \varrho(\vec{r}) = \frac{e^{-\kappa_1 V_1(\vec{r})}}{Z(\varrho)}(1 + O(\kappa_2)),\end{aligned}$$ where the first order correction may be obtained explicitly as follows. 
Recall the stationary equation for $\varrho$: $$\begin{aligned} \nabla_{\vec{r}} \cdot [\bm{D} ( \nabla_{\vec{r}}\varrho+\kappa_1\varrho\,\nabla_{\vec{r}} V_1(\vec{r}) +\kappa_2\varrho\,\nabla_{\vec{r}} V_2\star \varrho)]=0 & \text{ on } U, \label{eq:stat_pde_for_rho}\\ \bm{D}(\nabla_{\vec{r}}\varrho+\kappa_1\varrho\,\nabla_{\vec{r}}V_1 +\kappa_2\varrho\,\nabla_{\vec{r}} V_2\star \varrho)\cdot\vec{n}=0 & \text{ on } \partial_U. \label{eq:stat_pde_for_rho_bc}\end{aligned}$$ Fix $\kappa_1=1$ and insert the perturbation expansion $$\begin{aligned} \varrho(\vec{r}) = \sum_{k=0}^\infty \kappa_2^k \varrho_k(\vec{r}).\end{aligned}$$ We find at the first order of $\kappa_2$ $$\begin{aligned} \mathcal{L}_0\varrho_0:=\nabla_{\vec{r}} \cdot (\bm{D}\nabla_{\vec{r}}\varrho_0+\bm{D}\left(\varrho_0\nabla_{\vec{r}}V_1)\right) = 0 & \text{ on } U,\label{eq:fredholm_1d_eqn}\\ \bm{D}(\nabla_{\vec{r}}\varrho_0+\varrho_0\nabla_{\vec{r}}V_1) \cdot \vec{n} =0 & \text{ on } \partial_U,\label{eq:fredholm_1d_eqn_bc}\end{aligned}$$ from which we deduce $$\begin{aligned} \varrho_0(\vec{r}) = \frac{e^{-V_1(\vec{r})}}{Z_0}\end{aligned}$$ for $Z_0 = \int_{U}\mathrm{d}\vec{r}\,e^{-V_1(\vec{r})}$. Note that $\mathcal{L}_0$ is self-adjoint in the space $L^2(U,\varrho_0^{-1})$. We may also show that the resolvent of $\mathcal{L}_0$ is compact in $L^2(U,\varrho^{-1}_0)$.\ \[lem:L0\_has\_compact\_resolvent\] The operator $\mathcal{L}_0$ has a compact resolvent in $L^2(U,\varrho_0^{-1})$. We let $\phi\in C^2(U)$, by direct calculation we have that $$\begin{aligned} \mathcal{L}_0\phi = [\nabla_{\vec{r}}\cdot \bm{D}]\cdot \nabla_{\vec{r}}\phi + \text{Tr}\left[\bm{D}\nabla_{\vec{r}}^2\phi \right] + [\bm{D}\nabla_{\vec{r}}V_1]\cdot \nabla_{\vec{r}}\phi + \phi\left[[\nabla_{\vec{r}}\cdot \bm{D}]\cdot \nabla_{\vec{r}}V_1+\text{Tr}[\bm{D}\nabla_{\vec{r}}^2V_1]\right] \end{aligned}$$ then we have that $$\begin{aligned} \|\mathcal{L}_0\phi\|_{L^2(U,\varrho_0^{-1})}^2 &= \int_U\mathrm{d}\vec{r}\, \Big|\nabla_{\vec{r}} \cdot (\bm{D}\nabla_{\vec{r}}\phi+\bm{D}\left(\phi\nabla_{\vec{r}}V_1)\right)\Big|^2\varrho_0^{-1}\nonumber\\ &\leq C(U;\bm{D};V_1)\sum_{n = 0}^2\sup_{\vec{r}\in U}\Big|\phi^{(n)}(\vec{r})\Big|<\infty\end{aligned}$$ where the constant $C(U;\bm{D};V_1)$ is dependent on $U$, the diffusion tensor $\bm{D}$ and the first weak derivatives of its entries (bounded in $L^{\infty}(U)$ by ), and the confining potential $V_1$ and its first two weak derivatives (bounded in $L^{\infty}(U)$ by ). Therefore there exists $C\in \mathbb{R}^+$ such that $\|\mathcal{L}_0\|_{L^2(U,\varrho^{-1}_0)}<C$ and the spectrum of $\mathcal{L}_0$ is bounded. Now let $z\in \rho(\mathcal{L}_0)$ with $|z|>C$, where $\rho(\cdot)$ denotes the resolvent set, then we may write the resolvent $R(z;\mathcal{L}_0)$ of the operator $\mathcal{L}_0$ as $$\begin{aligned} R(z;\mathcal{L}_0) = -z^{-1}\sum_{k = 0}^\infty z^{-k} \mathcal{L}_0^k.\end{aligned}$$ We now show that $R$ is compact. First consider the sequence $(R^N)_{N\geq 1}$ defined by $$\begin{aligned} R^N(z;\mathcal{L}_0) := -z^{-1}\sum_{k = 0}^N z^{-k} \mathcal{L}_0^k,\end{aligned}$$ then let $(\phi_j)_{j\geq 1}$ be a sequence in $C^2(U)$. 
We have that $(\phi_j)_{j\geq 1}$ is a bounded sequence in $C^2(U)$ and $$\begin{aligned} \|R^N(z;\mathcal{L}_0)[\phi_j]\|_{L^2(U,\varrho_0^{-1})} &\leq |z|^{-1}\sum_{k = 0}^N |z|^{-k} \|\mathcal{L}_0^k[\phi_j]\|_{L^2(U,\varrho_0^{-1})}\nonumber\\ &\leq |z|^{-1}\sum_{k = 0}^N |z|^{-k} C^{K}.\end{aligned}$$ Hence, as long as $|z|>C$ then, $\|R^N(z;\mathcal{L}_0)[\phi_j]\|_{L^2(U,\varrho_0^{-1})}$ converges for all $N$ and $\text{Im}\left(R^N\right)$ is relatively compact in $L^2(U,\varrho^{-1}_0)$. It is then a standard result that the limit of a sequence of compact operators is compact, hence $R$ is compact. Thus we have a complete set of orthonormal basis functions $\{v^{(0)}_k \}_{k=0}^{\infty}$ and corresponding eigenvalues $\{\gamma^{(0)}_n\}_{n\geq 1}$. Note that $v^{(0)}_0 = \varrho_0$ and $\gamma_0^{(0)} = 0$. At the next order of $\kappa_2$ we obtain $$\begin{aligned} \label{eq:lin_stab_pert_equilib_order_eps} \mathcal{L}_0\varrho_1+f(\varrho_0) = 0,\end{aligned}$$ where $$\begin{aligned} f(\varrho_0):=-\nabla_{\vec{r}} \cdot (\bm{D}\varrho_0\,\nabla_{\vec{r}}V_2\star \varrho_0),\end{aligned}$$ subject to $$\begin{aligned} \bm{D}(\nabla_{\vec{r}}\varrho_1+\varrho_1\,\nabla_{\vec{r}}V_1 +\varrho_0\,\nabla_{\vec{r}} V_2\star \varrho)\cdot\vec{n}=0 & \text{ on } \partial_U.\end{aligned}$$ The solvability condition for then becomes $$\begin{aligned} \label{eq:solvability_condition_V2_star_rho0_dot_n} 0=\langle f(\varrho_0),\,v_0^{(0)}\rangle_{L^2(U,\varrho_0^{-1})} & =\int_U\mathrm{d}\vec{r}\,\nabla_{\vec{r}} \cdot (\varrho_0\,\bm{D}\nabla_{\vec{r}}V_2\star \varrho_0) = \int_{\partial U}\mathrm{d}S\,\vec{n}\cdot\varrho_0\,\bm{D}\nabla_{\vec{r}}V_2\star \varrho_0.\end{aligned}$$ If the solvability condition is satisfied then, by the Fredholm alternative, there exists a solution to . We may then write $\varrho_1$ in an eigenfunction expansion $$\begin{aligned} \varrho_1(\vec{r}) = \sum_{j=0}^{\infty}\alpha_j v_{j}^{(0)} \quad \text{ where } \quad \alpha_j = -\frac{1}{\gamma_j\|v_j^{(0)}\|^2_{L^2_{\varrho_0^{-1}}}}\langle f(\varrho_0),\,v_j^{(0)} \rangle_{L^2_{\varrho_0^{-1}}}.\end{aligned}$$ This yields that $$\begin{aligned} \varrho(\vec{r}) = \frac{e^{-V_1(\vec{r})}}{Z_0} + \kappa_2 \sum_{j=0}^{\infty} \frac{\langle \nabla_{\vec{r}} \cdot (\bm{D} \varrho_0 \,\nabla_{\vec{r}} V_2 \star \varrho_0),\,v_j^{(0)} \rangle_{L^2_{\varrho_0^{-1}}} v_{j}^{(0)} (\vec{r})}{\gamma_j\|v_j^{(0)}\|^2_{L^2_{\varrho_0^{-1}}}} + O(\kappa_2^2).\end{aligned}$$\ We now consider a linear stability analysis of the equilibrium density solving .\ Linear Stability Analysis. -------------------------- We first investigate the spectrum of the linearised operator $\mathcal{L}_1$ in terms of the eigenspace of its local part. We determine a scheme for computing the eigenvalues of $\mathcal{L}_1$ explicitly. Writing $\varrho = \varrho +\epsilon\,\omega + O(\epsilon^2)$ where $\epsilon \ll 1$ is an arbitrary parameter and not equal to $\kappa_2$, we obtain\ #### $O(\epsilon^{0})$: $$\begin{aligned} \mathcal{L}\varrho = 0 \end{aligned}$$ where we have set $\varrho = \varrho_\infty$ (the unique stationary state) to ease notation and $$\begin{aligned} \mathcal{L}\varrho = \nabla\cdot (\bm{D}\,\nabla \varrho) +\kappa_1\nabla\cdot(\bm{D}\,\varrho\nabla V_1) +\kappa_2\nabla\cdot (\varrho\bm{D}\,\nabla V_2\star \varrho). 
\end{aligned}$$ #### $O(\epsilon^{1})$: $$\begin{aligned} \label{eq:dynamical_eqn_for_omega} \dot{\omega} = \mathcal{L}_1w\end{aligned}$$ where $$\begin{aligned} \label{eq:def_of_linearised_L} \mathcal{L}_1\omega:=\nabla\cdot (\bm{D}\,\nabla \omega) +\kappa_1\nabla\cdot(\bm{D}\,\omega\nabla V_1) +\kappa_2\nabla\cdot (\varrho\bm{D}\,\nabla V_2\star \omega) + \kappa_2\nabla\cdot (\omega\bm{D}\,\nabla V_2\star \varrho).\end{aligned}$$ We remark that the operator $\mathcal{L}_1$ is different to the one found in the linear stability analysis of [@greg_mckean_vlasov_torus Sec 3.3] due to the difference in boundary conditions. Perturbations must be mean zero, that is $\int_U\mathrm{d}\vec{r}\,\omega=0$, which may be determined by observing that $$\begin{aligned} 1 = \int\mathrm{d}\vec{r}\,\varrho + \epsilon \int\mathrm{d}\vec{r}\,w + O(\epsilon^2).\end{aligned}$$ Equally, all higher order perturbations must have mean zero. Physically speaking this is a compatibility condition with the no-flux boundary condition in to ensure that perturbations do not change the mass of the system. Additionally by linearising the self-consistency equation we find that mean zero perturbations $w$ satisfy the integral equation $$\begin{aligned} \label{eq:int_eqn_for_omega} w = -\varrho_\infty \kappa_2 V_2\star w.\end{aligned}$$ We linearise the nonlinear boundary condition to find that $$\begin{aligned} \Pi_1[\omega]\cdot \vec{n}|_{\partial_U}:= 0 \end{aligned}$$ where $$\begin{aligned} \label{bc:linearised_bc_for_omega} \Pi_1[\omega] = \bm{D}\,(\nabla_{\vec{r}}\omega + \omega\nabla_{\vec{r}}(\kappa_1 V_1(\vec{r},t)+ \kappa_2 \,V_2\star \varrho) + \kappa_2 \varrho\,\nabla_{\vec{r}}V_2\star \omega).\end{aligned}$$ We note that if any such $\omega$ exist for , then trivially holds, and equation is underdetermined. In order to properly determine $\omega$ we let $\omega \in L^2_{c}(U,\varrho^{-1})$ where $$\begin{aligned} L^2_{c}(U,\varrho^{-1}) := \Big\{ u \in L^2(U,\varrho^{-1}) \, : \, \nabla_{\vec{r}}(\varrho^{-1}u)\cdot \vec{n}|_{\partial U} = 0\Big\}.\end{aligned}$$ The choice $\omega\in L^2_{c}(U,\varrho^{-1})$ preserves the boundary condition $\Pi_1[\varrho]\cdot \vec{n} |_{\partial U} =0$ and we will show in Lemma \[lem:self\_adjoint\_A\] that it is the most general restriction to ensure that the local part of $\mathcal{L}_1$ is self-adjoint in $L^2(U,\varrho^{-1})$. With this we write $$\begin{aligned} \mathcal{L}_{1} &= \mathcal{A}_{ \kappa_2} + { \kappa_2}\mathcal{B},\label{eq:def_of_L1_op}\\ \mathcal{A}_{\kappa_2}w &: =\nabla_{\vec{r}}\cdot[ \bm{D} (\nabla_{\vec{r}}w+ w \nabla_{\vec{r}}\varphi_{\kappa_2} )] ,\label{eq:def_of_A_op}\\ \mathcal{B}w&: = \nabla_{\vec{r}}\cdot\left(\varrho\bm{D}\,\nabla_{\vec{r}}V_2\star w\right),\label{eq:def_of_B_op}\\ \varphi_{\kappa_2} &:= \kappa_1 V_1+{\kappa_2} V_2\star \rho.\label{eq:def_of_varphi_k2}\end{aligned}$$ Here, $\mathcal{A}_{\kappa_2}$ and $\mathcal{B}$ are the local and nonlocal parts of $\mathcal{L}_{1}$, respectively. Note however that $\mathcal{A}_{\kappa_2}\neq \mathcal{L}_0$ by definition since $\kappa_2$ is no longer small. All operators $\mathcal{A}_{\kappa_2}$, $\mathcal{B}$, $\mathcal{L}_{1}$ are maps $H^2(U, \varrho_\infty^{-1})\to L^2(U)$. We now show that $A_{\kappa_2}$ is a self-adjoint operator in the space $L^2_{c}(U,\varrho^{-1})$. 
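Before giving the proof, we note that the decomposition is straightforward to realise numerically. The following minimal sketch (finite differences on a uniform one-dimensional grid, with an assumed quadratic $V_1$, Gaussian $V_2$ and a damped Picard stand-in for $\varrho_\infty$; these choices are purely illustrative assumptions and this is not the spectrally accurate scheme used for the figures below) assembles a discrete $\mathcal{A}_{\kappa_2}$ in divergence form with zero boundary fluxes, checks its symmetry in the $\varrho^{-1}$-weighted inner product, and computes its leading eigenvalues.

```python
import numpy as np

# Sketch: discretise A_k2 w = d/dx[ rho D d/dx(w/rho) ] on U = [-1, 1] with zero flux
# at the end faces, and check the weighted self-adjointness asserted by the lemma below.
N = 200
x = np.linspace(-1.0, 1.0, N)              # cell centres
h = x[1] - x[0]
kappa1, kappa2 = 1.0, 1.0
V1 = x**2                                  # assumed confining potential
D = 1.0 + 0.1 * np.cos(np.pi * x)          # assumed positive diffusion coefficient
K = np.exp(-(x[:, None] - x[None, :])**2)  # assumed two-body kernel V2(x - y)

# crude stand-in for rho_infty: damped Picard iteration of rho ~ exp(-(k1 V1 + k2 V2*rho))
rho = np.exp(-kappa1 * V1); rho /= rho.sum() * h
for _ in range(200):
    new = np.exp(-(kappa1 * V1 + kappa2 * h * (K @ rho))); new /= new.sum() * h
    rho = 0.5 * rho + 0.5 * new

# divergence-form matrix A = -G^T diag(rho*D at faces) G diag(1/rho);
# only interior faces appear, so the no-flux condition is built in.
G = np.diff(np.eye(N), axis=0) / h                    # forward differences at faces
c = 0.5 * (rho[:-1] * D[:-1] + rho[1:] * D[1:])       # rho*D evaluated at faces
A = -G.T @ np.diag(c) @ G @ np.diag(1.0 / rho)

W = np.diag(1.0 / rho)                                # weight of L^2(U, rho^{-1})
print("weighted symmetry defect:", np.max(np.abs(W @ A - (W @ A).T)))  # ~ machine precision

# eigenvalues of A in the weighted space via a symmetric similarity transform
S = np.diag(rho**-0.5) @ A @ np.diag(rho**0.5)
gam = np.linalg.eigvalsh(S)
print("least negative eigenvalues:", gam[-4:])
```

The zero eigenvalue corresponds to the eigenfunction $\varrho$ itself, since $\mathcal{A}_{\kappa_2}\varrho = \nabla_{\vec{r}}\cdot[\varrho\bm{D}\nabla_{\vec{r}}1] = 0$, and the remaining eigenvalues are strictly negative, consistent with the behaviour reported in Table \[tab:eigenstat\_table\].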
[0.485]{} ![Plots of a) The eigenfunctions of $A_{\kappa_2}$ in $L^2([-1,1],\varrho_\infty^{-1})$ as computed with pseudospectral methods for $\kappa_2 = 1$ and $N=100$ spectral points, b) the inner product between pairs of eigenfunctions showing orthogonality of the $\{v_k^{(\kappa_2)}\}$, c) the eigenfunction expansion of $V_2(r) = 1/2(-\tanh((r-1/2)/.05)+\tanh((r+1/2)/.05))$ and d) the absolute error between the expansion $V_{2e}$ and $V_{e}$. The $L^2$ error between $V_2$ and its expansion in eigenfunctions $V_{2e}$ was found to be 5.761e-9. (Panels: eigenfuns_k2_1.pdf, inner_prod_eigenfunctions_on_interval.pdf, V2_expanded.pdf, error_V2_and_expanded.pdf.)[]{data-label="fig:eigenfunction_figs"}](eigenfuns_k2_1.pdf "fig:"){width="\textwidth"} \ \[lem:self\_adjoint\_A\] $\mathcal{A}_{\kappa_2}$ is self-adjoint in $L^2_{c}(U,\varrho^{-1})$. First note that from and we have that $\nabla_{\vec{r}} \varphi_{\kappa_2} = \rho \nabla_{\vec{r}} \rho^{-1}$ and so $\mathcal{A}_{\kappa_2}w = \nabla_{\vec{r}} \cdot [ \rho \bm{D} \nabla_{\vec{r}} ( \rho^{-1} w )]$. 
Let $u\in L^2_c(U,\varrho^{-1})$; then $$\begin{aligned} \langle u, \, \mathcal{A}_{\kappa_2} w\rangle_{L^2(U, \varrho^{-1})} &= \int_U\mathrm{d}\vec{r}\, \varrho^{-1} u \mathcal{A}_{\kappa_2} w \nonumber\\ & = \int_U\mathrm{d}\vec{r}\, \varrho^{-1} u \nabla_{\vec{r}} \cdot [ \rho \bm{D} \nabla_{\vec{r}} ( \rho^{-1} w )] \nonumber \\ &=\int_{\partial U}\mathrm{d}S\vec{n}\cdot u\bm{D}\nabla_{\vec{r}}(\varrho^{-1}w)-\int_U \mathrm{d}\vec{r}\, \nabla_{\vec{r}}\left[ \varrho^{-1}u \right]\cdot\left[\varrho \bm{D}\nabla_{\vec{r}}\left(\varrho^{-1}w\right) \right] \nonumber\\ &= -\int_{\partial U}\mathrm{d}S\vec{n}\cdot w\bm{D}\nabla_{\vec{r}}(\varrho^{-1}u) + \int_U \mathrm{d}\vec{r}\, \nabla_{\vec{r}} \cdot [ \rho \bm{D} \nabla_{\vec{r}} ( \rho^{-1} u )] \varrho^{-1}w\nonumber\\ &= \langle \mathcal{A}_{\kappa_2} u, \, w\rangle_{L^2(U, \varrho^{-1})} \nonumber\end{aligned}$$ where we have integrated by parts twice and used that $\bm{D}$ is symmetric and the fact that $u, w\in L^2_c(U,\varrho^{-1})$ to eliminate the boundary terms. We have established that $\mathcal{A}_{\kappa_2}$ is self-adjoint in $L^2_c(U,\varrho^{-1})$. Additionally we observe that $\mathcal{A}_{\kappa_2}$ has a compact resolvent in $L^2_c(U,\varrho^{-1})$ by a similar result to Lemma \[lem:L0\_has\_compact\_resolvent\]. The spectral theorem therefore provides a complete basis of orthonormal eigenfunctions $v_k^{({\kappa_2})}$ spanning $L^2_c(U,\varrho^{-1})$ with corresponding eigenvalues $\gamma_k^{({\kappa_2})}$ such that $$\begin{aligned} \label{eq:A_k2_eigenvalue_problem} \mathcal{A}_{\kappa_2} v_{k}^{({\kappa_2})} = \gamma_k^{({\kappa_2})}v_{k}^{(\kappa_2)}.\end{aligned}$$ We note that the operator $\mathcal{B}$ as defined in is, in general, not self-adjoint. From now on we assume that the eigenfunctions $\{v_{k}^{({\kappa_2})}\}_{k=1}^\infty$ are normalised to form an orthonormal basis. [0.485]{} ![Moving eigenvalues for various differential nonlocal operators. (Panels: davidson_dodds_example_1.pdf, davidson_dodds_example_2.pdf, tanh_pot_epsilon_lambda.pdf, gaussian_pot_epsilon_lambda.pdf.)[]{data-label="fig:lambda_paths"}](davidson_dodds_example_1.pdf "fig:"){width="\textwidth"} The stability of the equilibrium density will depend on the spectrum of the operator $\mathcal{L}_1$ so that perturbations evolving according to either grow or decay. We now study the spectrum of $\mathcal{L}_1$. We fix $\kappa_1$ and consider $\kappa_2$, not necessarily small, as a perturbation parameter from the differential part of $\mathcal{L}_1$. The following theorem establishes the parametrisation of the eigenvalues $\lambda$ by $\kappa_2$.\ \[thm:epsilon\_as \_a\_fn\_of\_lambda\] Suppose that $\lambda \neq \gamma_k^{(\kappa_2)}$ for all $k \in \mathbb{N}$. 
If the solution $\kappa_2^\star(\lambda)$ of the equation $\lambda = \lambda_{k^\star}(\kappa_2^\star) $ exists, then it is given by $$\begin{aligned} \label{eq:kappa2_parametrisation} \kappa_2^\star(\lambda) = \left(\sum_{i=0}^\infty\tfrac{\theta_i^{(\kappa_2)}\gamma _i^{(\kappa_2)}\beta_i^{(\kappa_2)}}{\lambda-\gamma_i^{(\kappa_2)}}\right)^{-1},\end{aligned}$$ where $\theta_j^{(\kappa_2)}$ and $\beta_j^{(\kappa_2)}$ are given by $$\begin{aligned} \label{eq:expansion_coeffs_of_V2} \theta_k^{(\kappa_2)}\beta_l^{(\kappa_2)} = \int_U\mathrm{d}\vec{r}\,v_l^{(\kappa_2)}(\vec{r})V_2\star v_{k}^{(\kappa_2)}(\vec{r}).\end{aligned}$$ Let $$\begin{aligned} V_2(\vec{r}-\vec{r}') &= \varrho^{-1}(\vec{r}) \varrho^{-1}(\vec{r'}) \sum_{j,k=0}^\infty \beta_j^{(\kappa_2)} v_j^{(\kappa_2)}(\vec{r})\theta_k^{(\kappa_2)} v_k^{(\kappa_2)}(\vec{r}'),\nonumber\\ w(\vec{r}) &= \sum_{i=0}^\infty \alpha_iv_i^{(\kappa_2)}(\vec{r}).\end{aligned}$$ Inserting these expressions into the eigenvalue problem $\mathcal{L}_1 w = \lambda w$ we find $$\begin{aligned} \sum_{i=1}^\infty \alpha_i(\gamma^{(\kappa_2)}_i-\lambda)v_i^{(\kappa_2)}(\vec{r}) + \kappa_2\nabla_{\vec{r}}\cdot\left(\varrho\bm{D} \nabla_{\vec{r}} V_2\star w\right) =0.\end{aligned}$$ Multiplying this equation by $v^{(\kappa_2)}_n$ and integrating against the weight function $\varrho^{-1}$ we obtain $$\begin{aligned} 0 &= \alpha_n(\gamma^{(\kappa_2)}_n-\lambda)+\kappa_2\int_U\mathrm{d}\vec{r}\, \varrho^{-1}(\vec{r}) v^{(\kappa_2)}_n(\vec{r})\nabla_{\vec{r}}\cdot\left(\varrho(\vec{r}) \bm{D} \nabla_{\vec{r}} V_2\star w\right) \nonumber\\ & =\alpha_n(\gamma^{(\kappa_2)}_n-\lambda) - \kappa_2 \int_U\mathrm{d}\vec{r}\, \left[\nabla_{\vec{r}}v^{(\kappa_2)}_n(\vec{r}) + v^{(\kappa_2)}_n(\vec{r})\nabla_{\vec{r}}\varphi_{\kappa_2} \right] \cdot \bm{D}\, \nabla_{\vec{r}} V_2\star w,\end{aligned}$$ where there is no boundary term since $\nabla_{\vec{r}} V_2\star w = -\kappa_2^{-1}\nabla_{\vec{r}}(\varrho^{-1}w)$ is zero on the boundary of $U$ because $w\in L^2_c(U,\varrho^{-1})$. Continuing by integrating by parts we find $$\begin{aligned} 0 &= \alpha_n(\gamma^{(\kappa_2)}_n-\lambda)+\kappa_2\int_U\mathrm{d}\vec{r}\,\nabla_{\vec{r}} \cdot [\bm{D} ( \nabla_{\vec{r}}v^{(\kappa_2)}_n(\vec{r}) + v^{(\kappa_2)}_n(\vec{r})\nabla_{\vec{r}}\varphi_{\kappa_2})] V_2\star w\nonumber\\ &= \alpha_n(\gamma^{(\kappa_2)}_n-\lambda)-\kappa_2\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\cdot(\varrho\bm{D}\nabla_{\vec{r}}(\varrho^{-1}v^{(\kappa_2)}_n(\vec{r}))) V_2\star w\nonumber\\ &= \alpha_n(\gamma^{(\kappa_2)}_n-\lambda)+\kappa_2\gamma^{(\kappa_2)}_n\int_U\mathrm{d}\vec{r}\, v^{(\kappa_2)}_n(\vec{r}) V_2\star w\end{aligned}$$ where we have used $\nabla_{\vec{r}}(\varrho^{-1}v_n^{\kappa_2}) = 0$ on $\partial_U$ to eliminate the boundary term, and in the last line used the fact that $v_n^{\kappa_2}$ is an eigenfunction of $\mathcal{A}_{\kappa_2}$. Inserting the expansion for $V_2$ and using the orthonormality of the $v_i^{(\kappa_2)}$ gives $$\begin{aligned} \kappa_2 = \frac{(\lambda-\gamma^{(\kappa_2)}_n)\alpha_n}{\gamma_n^{(\kappa_2)}\theta_n^{(\kappa_2)}\sum_{j=0}^\infty \int_U\mathrm{d}\vec{r}'\, \varrho_\infty^{-1}(\vec{r}') \beta_j^{\kappa_2} v_j^{(\kappa_2)}(\vec{r}') w(\vec{r}')}.\end{aligned}$$ This holds for all $V_2$ and, in particular, for all $\theta_n^{(\kappa_2)}\neq 0$ so it be must be the case that $$\begin{aligned} \frac{(\lambda-\gamma^{(\kappa_2)}_n)\alpha_n}{\gamma_n^{(\kappa_2)}\theta_n^{(\kappa_2)}}=K,\end{aligned}$$ for some constant $K$, independent of $n$. 
Without loss of generality we can take $K=1$. Hence we have $$\begin{aligned} w(x) = \sum_{i=0}^\infty \tfrac{\gamma_n^{(\kappa_2)}\theta_n^{(\kappa_2)}}{\lambda-\gamma^{(\kappa_2)}_n}v_n^{(\kappa_2)}(x)\end{aligned}$$ and it follows that $$\begin{aligned} \label{eq:lem_lambda_eqn_fin_line} \kappa_2 = \left( \sum_{j=0}^\infty \int_U\mathrm{d}\vec{r}'\, \varrho^{-1}(\vec{r}') \beta_j^{(\kappa_2)} v_j^{(\kappa_2)}(\vec{r}') w(\vec{r}')\right)^{-1} =\left(\sum_{i=0}^\infty\tfrac{\theta_i^{(\kappa_2)}\gamma^{(\kappa_2)} _i\beta_i^{(\kappa_2)}}{\lambda-\gamma^{(\kappa_2)}_i}\right)^{-1}.\end{aligned}$$ Hence the theorem is proved. $k$ $-\gamma_k^{(\epsilon)}\cdot 10^3$ $\nabla_{\vec{r}}(\varrho^{-1}v_k^{(\kappa_2)})\cdot\vec{n}\big|_{x = -1}$ $\nabla_{\vec{r}}(\varrho^{-1}v_k^{(\kappa_2)})\cdot\vec{n}\big|_{x = 1}$ ----- ------------------------------------ ---------------------------------------------------------------------------- --------------------------------------------------------------------------- 1 0.042470931917315 -0.326629834290770e-10 -0.049430114879555e-10 2 0.161343622578368 0.177791115163473e-11 -0.638172005587933e-11 3 0.359053918066979 -0.532729416136135e-11 -0.705155659306588e-11 4 0.635777464488092 0.040225600628219e-10 -0.10593000196636e-10 5 0.991543834922971 -0.143929312912405e-11 -0.982251087217608e-11 6 1.426361079097481 -0.421040979858844e-11 0.332564751050043e-11 7 1.940232081076136 0.130651045537888e-11 0.966037681231493e-11 8 2.533158075256826 -0.417399448338074e-11 -0.401360089043934e-11 9 3.205139660109592 -0.019828583219805e-11 -0.919929559744713e-11 10 3.956177153938488 -0.687472301308389e-11 0.423840601914807e-11 : The first 10 eigenvalues $-\gamma_k^{(\kappa_2)}\cdot 10^3$ and boundary condition values of the corresponding eigenvectors $v_{k^{(\kappa_2)}}$ for $\kappa_2 = .05$.[]{data-label="tab:eigenstat_table"} The expression for $\kappa_2$ allows the paths of the eigenvalues $\lambda_k(\kappa_2)$ to be computed. For practical purposes, it may be sufficient to use a truncation of the series or, if $w(\vec{r})$ can be computed explicitly, the first expression in can be used. As shown in [@kato2013perturbation Section II-5.1], the eigenvalues of $\mathcal{L}_1$ will remain real as long as $$\begin{aligned} \label{eq:kappa2_bound_spectral_gap} |\kappa_2|<\frac{\min_{i,j\in\mathbb{N}}|\gamma_i^{(\kappa_2)}-\gamma_{j}^{(\kappa_2)}|}{2\|\mathcal{B}\|}.\end{aligned}$$ We see from that the point of critical stability (if it exists) $\kappa_{2\sharp}$ occurs at $$\begin{aligned} \label{eq:critical_kappa_2} \kappa_{2\sharp} = -\left(\sum_{i=0}^\infty\theta_i^{(\kappa_2)}\beta_i^{(\kappa_2)}\right)^{-1}\end{aligned}$$ and is independent of $\gamma_n^{(\kappa_2)}$ (the eigenvalues of the local operator $A_{\kappa_2}$). The critical point of stability will have implicit dependence on $\bm{D}$, $V_1$ and $V_2$ through . As long as $\kappa_2$ remains sufficiently small, Lemma \[thm:epsilon\_as \_a\_fn\_of\_lambda\] provides a nonlinear map to compute $\kappa_2$ parametrised by $\lambda$ therefore permitting the paths of the moving eigenvalues to be calculated. In particular by fixing $\lambda\in\mathbb{R}$ we have the iterative problem $$\begin{aligned} \label{eq:iteration_for_the_mving_lambdas} \begin{cases} \frac{1}{\kappa_2^{l+1}} =\sum_{i=1}^\infty\tfrac{\theta_i^{(\kappa_2^l)}\gamma^{(\kappa_2^l)} _i\beta_i^{(\kappa_2^l)}}{\lambda-\gamma^{(\kappa_2^l)}_i},\\ \gamma_{n}^{(\kappa_2^0)} = \gamma_{n}^{(0)}. 
\end{cases}\end{aligned}$$ We note that the eigenvalues $\lambda_{k}^{(\kappa_2)}$ are implicitly dependent on the diffusion tensor $\bm{D}$ and confining potential $V_1$. Figure \[fig:eigenfuns\_k2\_1\] shows typical eigenfunctions $v_k^{(\kappa_2)}$ of the local part of the linearised operator $\mathcal{L}$. Figure \[fig:inner\_prod\_eigenfunctions\_on\_interval\] shows the pairwise $L^2_c(U,\varrho^{-1})$ inner product of the $v_k^{(\kappa_2)}$ demonstrating orthogonality of the basis functions. Figure \[fig:V2\_expanded\] shows the expansion of the two-body function $V_2$ (here a Morse like potential) in terms of the eigenfunctions $v_k$ meanwhile Figure \[fig:error\_V2\_and\_expanded\] shows the error between the expansion and $V_2$. We also demonstrate the accuracy of the collocation scheme in computing eigenvalues and eigenfunctions of $\mathcal{A}_{\kappa_2}$ in Table \[tab:eigenstat\_table\]. In particular, $\mathcal{A}_{\kappa_2}$ is composed of dense first and second order differentiation matrices and the value $\nabla_{\vec{r}}(\varrho^{-1}v_k^{(\kappa_2)})\cdot\vec{n}$ is very small on the boundary using only 100 collocation points. In Figures \[fig:tanh\_pot\_moving\_eigenvals\], \[fig:gaussian\_pot\_epsilon\_lambda\], we plot various paths $\kappa_2^{\star}(\lambda)$ as solutions to the equation $\lambda = \lambda_{k^\star}(\kappa_2^*)$ for $k$ the wave number by numerically solving for different two-body potentials. We also reproduce figures from Davidson & Dodds [@davidson2006spectral] in Figures \[fig:davidson\_dodds\_example\_1\], \[fig:davidson\_dodds\_example\_2\], verifying our numerical procedure for computing the spectra of similar nonlocal differential operators. Note however that operators in [@davidson2006spectral] do not contain convolution type integral operators, and, with Dirichlet boundary conditions, their spectra differ substantially from those considered here (for example Figures \[fig:tanh\_pot\_moving\_eigenvals\], \[fig:gaussian\_pot\_epsilon\_lambda\]). The intersection through the $\lambda$ axis in each Figure \[fig:davidson\_dodds\_example\_1\]–\[fig:gaussian\_pot\_epsilon\_lambda\] gives the local eigenvalues $\gamma^{(0)}_k$ for the corresponding nonlocal differential operator. Note that it is not necessary for $\gamma^{(0)}_k$ to lie on the moving path for every $k$. The numerical solution of involves both a truncation of the infinite series and a numerical tolerance for the zeros of the nonlinear function $f(\kappa_2) = \kappa_2-\kappa_2(\lambda_k)$. Note that $\mathcal{L}$ is self-adjoint in $L^2_c(U, \varrho^{-1})$ (with real eigenvalues) only for $\kappa_2 = 0$. The $\lambda$’s are otherwise complex and the curves plotted show when the paths drop to the real plane. When $|\kappa_2|$ is sufficiently large, that is when is violated, the $\lambda$’s have non-zero imaginary part.\ \ We now investigate the spectrum of the linearised operator $\mathcal{L}$ in terms of the eigenspace of its nonlocal part. We determine necessary conditions for bifurcations. Bifurcation Theory {#sec:bifurcation_theory} ================== We now provide our first result of the section which relates the stability of equilibrium density to the two-body interaction potential.\ \[thm:stability\_of\_rho\_inf\] Let $\kappa_2\in(-\infty,\infty)$ and suppose $\varrho$ is a solution to the self-consistency equation . 
Let $\mathcal{R}$ be given by $$\begin{aligned} \label{eq:R_int_operator_defn} \mathcal{R}w = -\varrho V_2\star w, \end{aligned}$$ where $w\in L^2(U,\varrho^{-1})$ is mean zero. If $\mathcal{R}$ is positive definite and $\kappa_2<\beta_1$ where $\beta_1$ is the smallest eigenvalue of $\mathcal{R}^{-1}$, then equilibrium densities formed from repulsive two–body kernels $V_2$ are stable. Conversely if $\mathcal{R}$ is negative definite and $\kappa_2>\beta_1$ where $\beta_1$ is the largest eigenvalue of $\mathcal{R}^{-1}$, then equilibrium densities formed from attractive two–body kernels $V_2$ are stable. We observe that $\mathcal{L}_1$ is self-adjoint in $L^2(U,\varrho^{-1})$ only when there is no interaction ($\kappa_2 = 0$). We may however expand the eigenfunctions of $\mathcal{L}_1$ in the eigenfunctions of $\mathcal{R}$, $\{u_n\}_{n = 1}^\infty$ which form an orthonormal basis of $L^2(U,\varrho^{-1})$ (see Definition \[def:def\_of\_R\_op\]). We write $w_n = \sum_{i = 1}\alpha_{n_i}u_i$. By the definition of the eigenvalue problem for $\mathcal{L}_1$ $$\begin{aligned} \mathcal{L}_1w_n = \lambda_nw_n.\end{aligned}$$ Now inserting the expansion in $u_i$’s we obtain $$\begin{aligned} \lambda_n\sum_{i = 1}\alpha_{n_i}u_i &=\mathcal{L}_1\sum_{i = 1}\alpha_{n_i}u_i \\ &=\left[\mathcal{A}_{\kappa_2}+\kappa_2\mathcal{B}\right] \sum_{i = 1}\alpha_{n_i}u_i\\ &=\sum_{i = 1}\alpha_{n_i}\left\lbrace\mathcal{A}_{\kappa_2}u_i+\kappa_2\mathcal{B}u_i \right\rbrace \\ &=\sum_{i = 1}\alpha_{n_i}\left\lbrace \nabla_{\vec{r}}\cdot \left(\bm{D}\varrho\left(\nabla_{\vec{r}}(\varrho^{-1}u_i)\right) \right)-\kappa_2\nabla_{\vec{r}}\cdot \left(\bm{D}\varrho\left(\nabla_{\vec{r}}(\varrho^{-1}\mathcal{R}u_i)\right) \right)\right\rbrace \\ &=\sum_{i = 1}\alpha_{n_i}\left\lbrace 1-\frac{\kappa_2}{\beta_i}\right\rbrace \nabla_{\vec{r}}\cdot \left(\bm{D}\varrho\left(\nabla_{\vec{r}}(\varrho^{-1}u_i)\right)\right),\end{aligned}$$ where we have used the definitions , and and that each $u_i$ is an eigenfunction of $\mathcal{R}$. Now by multiplying by $\varrho^{-1}u_j$ and integrating we obtain $$\begin{aligned} \lambda_n\alpha_{n_j}\|u_j\|_{L^2(U,\varrho^{-1})}^2 = \sum_{i = 1}\alpha_{n_i} \left\lbrace 1-\frac{\kappa_2}{\beta_i}\right\rbrace\int_{U}\mathrm{d}\vec{r}\,\varrho^{-1}u_j \nabla_{\vec{r}}\cdot \left(\bm{D}\varrho\left(\nabla_{\vec{r}}(\varrho^{-1}u_i)\right)\right).\end{aligned}$$ Now by integrating by parts, using Gauss’s theorem and the condition that $\nabla_{\vec{r}}(\varrho^{-1}u_i)$ is zero on the boundary of $U$, we obtain $$\begin{aligned} \lambda_n =\alpha_{n_j}^{-1} \sum_{i = 1}\alpha_{n_i}\left(\frac{\kappa_2}{\beta_i}-1\right)\int_U\mathrm{d}\vec{r}\, \Big|\varrho^{1/2}\bm{D}^{1/2}\nabla_{\vec{r}}\left(\varrho^{-1}u_i\right)\Big|^2 \end{aligned}$$ for every $j = 1,\cdots $. Hence, a bifurcation from the equilibrium density $\varrho$ may occur when $\kappa_2$ coincides with $\beta_j$, for some $j = 1,\cdots $ and perturbations $w_n$ are linear combinations of $u_j$. 
To ensure $\varrho$ is stable one must have, for every $j\in \mathbb{N}$ $$\begin{aligned} \begin{cases} \kappa_2<\beta_j \quad \text{ if } \mathcal{R} \text{ is positive definite,}\\ \beta_j<\kappa_2 \quad \text{ if } \mathcal{R} \text{ is negative definite.} \end{cases}\end{aligned}$$ Now by the spectral theorem, the $\{\beta_n^{-1}\}_{n\geq 1}$ are discrete, countable and may be ordered such that $|\beta_n^{-1}|\to 0$. Therefore to ensure the stability of $\varrho$ we require $\kappa_2<\beta_1$ if $\mathcal{R}$ is positive definite and $\beta_1<\kappa_2$ if $\mathcal{R}$ is negative definite. This completes the proof of the theorem. We now relate theorem \[thm:stability\_of\_rho\_inf\] to the H-stability result in [@greg_mckean_vlasov_torus].\ \ \[rem:estimate\_betans\] We remark on the consistency with the H-stability condition of [@greg_mckean_vlasov_torus] with periodic boundary conditions, the equilibrium density may bifurcate if the interaction kernel has a negative Fourier mode. In the present work, the distribution of the eigenvalues of the operator $\mathcal{R}$ determines whether the equilibrium density is stable with respect to $\{u_j\}_{j = 1}^{\infty}$. In particular, if $\mathcal{R}$ has a negative eigenvalue then equilibrium densities formed from repulsive $V_2$ may become unstable. We may obtain an estimate for the eigenvalues $\beta_n^{-1}$ in terms of $V_2$ and $\varrho$ in the following way, by the eigenvalue problem we have $$\begin{aligned} |\beta_n^{-1}| &= |\beta_n^{-1}|\langle u_n,u_n\rangle_{L^{2}(U,\varrho^{-1})} = |-\langle u_n,V_2\star u_n\rangle_{L^{2}(U)}| \nonumber\\ &\leq \|V_2\|_{L^\infty(U)} \|u_n\|^2_{L^1(U)}= \|V_2\|_{L^\infty(U)}\|\varrho^{1/2}(\varrho^{-1/2})u_n\|_{L^1}^2\nonumber\\ &\leq \|V_2\|_{L^\infty(U)}\|\varrho\|_{L^1(U)}^2\|(\varrho^{-1/2})u_n\|_{L^2(U)}^2= \|V_2\|_{L^\infty(U)},\end{aligned}$$ where we have used the Cauchy-Schwarz inequality and the fact that the $\{u_n\}_{n = 1}^\infty$ are orthonormal in $L^2(U,\varrho^{-1})$. From this we obtain the lower bound $\|V_2\|_{L^\infty(U)}^{-1}\leq |\beta_n|$, this lower bound shows that the bifurcation point coincides with the boundary of the interval in which free energy $\mathcal{F}$ is convex (c.f. Proposition \[prop:F\_is\_strictly\_convex\]). \[thm:necessary\_bifurcation\_conditions\] Let $\{\beta_{n}^{-1}\}_{n=1}^{\infty}$ be the ordered eigenvalues of $\mathcal{R}$. If $|\kappa_2|\geq |\beta_1|$ then $(\beta_1,w_1)$ is a bifurcation point of where $w_1$ is the eigenfunction of $\mathcal{R}$ associated to $\beta_1^{-1}$ and there exists $0<\varrho_\ast\neq \varrho_{\infty}$ solving . Let $\varrho_{\kappa_2}$ denote the solution to for a given $\kappa_2$ which is known to exist by Theorem \[thm:exis\_fix\_point\]. Since $\varrho_{\kappa_2}$ is continuous in $\kappa_2$ and $\mathcal{F}[\varrho]$ is continuous in $\varrho$, then $\mathcal{F}$ is continuous in $\kappa_2$. By Lemma \[lem:minimisers\_always\_exist\] we know that a minimiser of $\mathcal{F}$ exists for each $\kappa_2$ and by Lemma \[lem:minimisers\_are\_positive\] the minimiser is strictly positive. Given $|\kappa_2|\geq \|V_2\|_{L^\infty(U)}^{-1}$ then by Proposition \[prop:F\_is\_strictly\_convex\], $\mathcal{F}$ is no longer convex and $\varrho_{\kappa_2}$ is either an inflection point or a local maximum of $\mathcal{F}$. Hence $\varrho_{\kappa_2}$ is unstable and by Lemma \[lem:minimisers\_always\_exist\] there exists $\varrho_{\ast}$ such that $\mathcal{F}[\varrho_\ast]<\mathcal{F}[\varrho_{\kappa_2}]$. 
Additionally by the self-adjointness and compactness of $\mathcal{R}$, one has that $\beta_n^{-1}\to 0$ as $n\to \infty$ and hence $\beta_n\to \infty$ as $n\to \infty$ and $\beta_1$ is the smallest of the $\{\beta_{n}\}_{n=1}^{\infty}$. If $\mathcal{R}$ is positive definite, there are no negative $\beta_n$ and the only solution to is $u_n\equiv 0$ and $\varrho_{\kappa_2}$ will be stable for all $\kappa_2<\beta_1$. Similarly, if $\mathcal{R}$ is negative definite, there are no positive $\beta_n$ and the only solution to is $u_n\equiv 0$ and $\varrho_{\kappa_2}$ will be stable for all $\kappa_2>\beta_1$. If $\mathcal{R}$ is indefinite, by Remark \[rem:estimate\_betans\], for $|\beta_n|<\|V_2\|^{-1}_{L^\infty(U)}$ there are no solutions (other than $w_n\equiv 0$) to $\mathcal{R}[w_n] = \beta_n^{-1}w_n$, and once again for $|\kappa_2|<|\beta_1|$, $\varrho_{\kappa_2} = \varrho_{\infty}$ is stable. For $\kappa_2\geq \|V_2\|^{-1}_{L^\infty(U)}$ there are infinitely many non-trivial solutions to $\mathcal{R}[w_n] = \beta_n^{-1}w_n$ and $\kappa_2 = \beta_1$ is the first. Hence if $|\kappa_2|\geq|\beta_1|$ then the unique stationary density $\varrho_\infty$ is unstable and by Lemma \[lem:minimisers\_always\_exist\] there must exist $\varrho_{\ast}$ such that $\mathcal{F}[\varrho_\ast]<\mathcal{F}[\varrho_{\kappa_2}]$. We define the $\mathcal{W}:L^2(U)\to \mathbb{R}$ transform such that $$\begin{aligned} \mathcal{W}[f](n) = \int_U\mathrm{d}\vec{r}'\varrho^{-1}_{\beta_n}w_n(\vec{r})f(r)\end{aligned}$$ where $\varrho_{\beta_n}$ solves with $\kappa_2 = \beta_n$. With this we may plot the bifurcation diagram for the stability of the unique equilibrium state $\varrho = \varrho_\infty$, see for example Figure \[fig:bifurcation\_diagram\]. Application To Nonlinear Diffusion Equations {#sec:manufactured_bif} ============================================ In this section we consider sufficient conditions for bifurcations under particular forms of nonlocal operators. We will show that, by use of numerical examples, there may be more than one stationary solution under additional assumptions on the two-body potential by making use of the bifurcation theory developed in Section \[sec:bifurcation\_theory\]. We fix $\kappa_1$ and consider boundary value problems where the nonlocal term is not of convolution type. Let $V_2(\vec{r},\vec{r}')$ be a two-body function and consider $$\begin{aligned} \begin{cases}\label{eq:bif_stationary_eqn} \mathcal{P}[\varrho]:=\nabla \cdot \Big[ \bm{D} \Big(\nabla\varrho + \varrho \kappa_1 \nabla V_1+\kappa_2 \varrho \nabla \int_U\mathrm{d}\vec{r}' V_2(\vec{r},\vec{r}') \varrho(\vec{r'}) \Big) \Big]= 0 & \text{ in } U,\\ \Omega [\varrho]\cdot\vec{n} := \bm{D} \Big(\nabla\varrho + \varrho \kappa_1 \nabla V_1+\kappa_2 \varrho \nabla \int_U\mathrm{d}\vec{r}' V_2(\vec{r},\vec{r}') \varrho(\vec{r'}) \Big) \cdot \vec{n}= 0 & \text{ on } \partial U. \end{cases}\end{aligned}$$ Solutions of $\mathcal{P}\varrho=0$ with $\Omega [\varrho]\cdot\vec{n} = 0 $ on the boundary are denoted by $\varrho=\varrho_{\kappa_2}$ and satisfy the self-consistency equation $$\begin{aligned} \label{eq:bifur_self_con_eqn} \varrho_{\kappa_2} = \frac{e^{-(\kappa_1V_1+\kappa_2\int_U\mathrm{d}\vec{r}'\,V_2(\vec{r},\vec{r}')\varrho_{\kappa_2}(\vec{r}'))}}{Z}.\end{aligned}$$ The linear stability of the steady state may be studied implicitly by examining the properties of the linearised self-consistency map. 
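A density satisfying the self-consistency equation can also be computed directly by damped fixed-point (Picard) iteration on the self-consistency map. The following is a minimal sketch, not the scheme used for the figures in this section; the one-dimensional grid, Riemann-sum quadrature, damping parameter and the helper name `self_consistent_density` are all illustrative assumptions.

```python
import numpy as np

def self_consistent_density(x, V1, V2, kappa1, kappa2,
                            damping=0.5, tol=1e-12, max_iter=10_000):
    """Damped Picard iteration for rho = exp(-(k1 V1 + k2 int V2(r,r') rho(r') dr')) / Z."""
    h = x[1] - x[0]
    K = V2(x[:, None], x[None, :])          # kernel matrix V2(r, r')
    rho = np.exp(-kappa1 * V1(x))
    rho /= rho.sum() * h                    # normalise to unit mass
    for _ in range(max_iter):
        potential = kappa1 * V1(x) + kappa2 * h * (K @ rho)
        new = np.exp(-potential)
        new /= new.sum() * h
        if np.max(np.abs(new - rho)) < tol:
            return new
        rho = (1 - damping) * rho + damping * new
    return rho

# usage with the potentials of the numerical experiments below (V1 = x^2, V2(x, y) = x*y)
x = np.linspace(-0.5, 0.5, 400)
rho = self_consistent_density(x, V1=lambda r: r**2, V2=lambda r, s: r * s,
                              kappa1=1.0, kappa2=-2.0)
print("mass:", rho.sum() * (x[1] - x[0]))   # = 1 up to quadrature error
```

For $|\kappa_2|$ below the uniqueness threshold of Lemma \[thm:exis\_fix\_point\] the iteration converges to the unique fixed point; close to a bifurcation it can become sensitive to the initial guess, which is one practical signal of the loss of stability studied next.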
By linearising equation , writing $\varrho_{\kappa_2} = \phi_0+\epsilon\phi_1$ for some small $\epsilon$, we obtain the original nonlinear problem $$\begin{aligned} \label{eq:bifur_self_con_eqn_O_0} \phi_{0} = \frac{e^{-(\kappa_1V_1+\kappa_2\int_U\mathrm{d}\vec{r}'\,V_2(\vec{r},\vec{r}')\phi_0(\vec{r}'))}}{Z_0} \quad \text{ s.t } \quad \Omega[\phi_{0}]\cdot \vec{n}=0\end{aligned}$$ where $Z_0 = \int_U\mathrm{d}\vec{r}\, e^{-(\kappa_1V_1+\kappa_2\int_U\mathrm{d}\vec{r}'\,V_2(\vec{r},\vec{r}')\phi_0(\vec{r}'))}$, along with the linearised equation $$\begin{aligned} \label{eq:rho_pert_series_u1} \phi_1 = - \kappa_2\,\phi_0\int_U\mathrm{d}\vec{r}'\,V_2(\vec{r},\vec{r}')\phi_1(\vec{r}') \quad \text{ s.t } \quad \int_U\mathrm{d}\vec{r}\, \phi_1(\vec{r}) = 0.\end{aligned}$$ The integral condition in comes from the fact that higher order perturbations to $\phi_0$ must possess zero mean to preserve the mass in the system. We define the linear operator $\mathcal{T}$ in $L^1(U)$ by $$\begin{aligned} \label{eq:def_of_T_operator} \mathcal{T}\phi(\vec{r}) := \phi_0(\vec{r})\int_U\mathrm{d}\vec{r}'\,V_2(\vec{r},\vec{r}')\phi(\vec{r}').\end{aligned}$$ We also define the mapping $\mathcal{G}:L^1(U)\times \mathbb{R} \to L^1(U)$ by $$\begin{aligned} \mathcal{G}(\phi,\kappa) := \phi-f(\phi,\kappa)\end{aligned}$$ where $f(\phi,\kappa) := \tfrac{e^{-(\kappa_1V_1+\kappa\int_U\mathrm{d}\vec{r}'\,V_2(\vec{r},\vec{r}')\phi(\vec{r}'))}}{\int_U\mathrm{d}\vec{r}e^{-(\kappa_1V_1+\kappa\int_U\mathrm{d}\vec{r}'\,V_2(\vec{r},\vec{r}')\phi(\vec{r}'))}}$. To construct the bifurcation diagram, we will use the following result from [@tamura1984asymptotic Tamura (1984)], or [@greg_mckean_vlasov_torus Carrillo et al. 2019], which is a direct consequence of the Crandall-Rabinowitz theorem; see, e.g., [@crandall1971bifurcation].\ \[thm:tamura\_bifurcations\] Let $V_2(x,y) = V_2(y,x)$. Also let $(\psi_0,\mu_0)$ be a fixed point in $L^1(U)\times \mathbb{R}$ such that: 1. $\mathcal{G}(\psi_0,\mu_0)=0$, 2. $\mu_0^{-1}$ is an eigenvalue of $\mathcal{T}$, 3. $\int_U\mathrm{d}\vec{r}\,V_2(\vec{r},\vec{r}')\psi_0(\vec{r})=0$, 4. $\dim \{\phi\in L^1(U)\,:\, \phi = \mu_0 \mathcal{T}\phi\}=1$. Then $(\psi_0,\mu_0)$ is a bifurcation point of $\mathcal{G}=0$. That is, for any neighbourhood $B$ of $(\psi_0,\mu_0)$ in $L^1(U)\times \mathbb{R}$ there exists $(\psi_1,\mu_1)\in B$ such that $\psi_1\neq \psi_0$ and $\mathcal{G}(\psi_1,\mu_1)=0$. \[thm:bifurcations\] The proof relies on checking the conditions of the Crandall-Rabinowitz Theorem and is equivalent to Tamura’s proof [@tamura1984asymptotic]. [0.48]{} ![Stable densities bifurcating from (a). $\psi_0 = \exp\{-\kappa_1V_1(x)\}/Z$ and (b). $\psi_0 = \frac{N}{2\mathrm{L}}$ where $2L$ is the length of the interval, which solve for different two-body functions (a). $V_2(x,y) = xy$ and (b). $V_2(x,y) = -\cos\left(\frac{2\pi(x-y)}{\mathrm{L}}\right)$ . Insets show the shape of perturbation function. (Panels: xy_bifurcation_perturb_2.pdf, stable_equilibria_V2_cosine.pdf.)[]{data-label="fig:bifurcations"}](xy_bifurcation_perturb_2.pdf "fig:"){width="\textwidth"} Note that $\psi_0$ is, by construction, the background density given by $\psi_0 = \tfrac{e^{-V_1(\vec{r})}}{\int_U\mathrm{d}\vec{r}e^{-V_1(\vec{r})}}$. Theorem \[thm:tamura\_bifurcations\] presents sufficient conditions to permit bifurcations from $\psi_0$ with stationary equations of the form . In particular it will be sufficient that the two-body potential satisfies the normality condition (condition 3. of Theorem \[thm:tamura\_bifurcations\]). Then bifurcations occur at discrete eigenvalues of the nonlocal operator $\mathcal{T}$ as defined in . We remark that these conditions are consistent with Theorem \[thm:necessary\_bifurcation\_conditions\]. Numerical Experiments. {#subsec:numericalexperiments} ---------------------- In this section we compute the branches of solutions that may evolve in the DDFT-like example considered in Section \[sec:manufactured\_bif\] with nonlinear, nonlocal boundary conditions. We show that, for simple interaction kernels, symmetry-breaking systems may be constructed quite easily given sufficiently high interaction strength. For the numerical examples presented here, $\varrho$ is a number density and hence $\int_U\mathrm{d}\vec{r}\,\varrho = N$. We consider numerical solutions to $$\begin{aligned} \begin{cases}\label{eq:bif_evolve_eqn} \partial_t \varrho = \nabla \cdot \Big[ \bm{D} \Big(\nabla\varrho + \varrho \kappa_1 \nabla V_1+\kappa_2 \varrho \nabla \int_U\mathrm{d}\vec{r}' V_2(\vec{r},\vec{r}') \varrho(\vec{r'}) \Big) \Big] & \text{ in } U,\\ \Omega [\varrho]\cdot\vec{n} := \bm{D} \Big(\nabla\varrho + \varrho \kappa_1 \nabla V_1+\kappa_2 \varrho \nabla \int_U\mathrm{d}\vec{r}' V_2(\vec{r},\vec{r}') \varrho(\vec{r'}) \Big) \cdot \vec{n}= 0 & \text{ on } \partial U, \\ \varrho(\vec{r},0) = \tfrac{e^{-(\kappa_1V_1(\vec{r})+\kappa_2\int_U\mathrm{d}\vec{r}'\,V_2(\vec{r},\vec{r}')\varrho(\vec{r}',0))}}{Z} \quad &\text{ at } t=0. \end{cases}\end{aligned}$$\ The nonlocal terms in , both in the evolution equation and the boundary condition, mean that numerical implementations require efficient and accurate quadrature. We demonstrate that the pseudo-spectral collocation scheme 2DChebClass [@DDFTCode] computes such solutions efficiently and accurately. For a more detailed explanation of pseudospectral methods for DDFT problems, particularly the efficient computation of convolution integrals, see [@nold2017pseudospectral]. Some numerical experiments were performed by solving in 1D with the choice $V_2 = xy$ and $V_1 = \kappa_1 x^2$ on $U = [-1/2,1/2]$. Under this choice of confining and two-body potentials the normality condition (3) of Theorem \[thm:tamura\_bifurcations\] holds. Additionally, for $|\kappa_2|$ sufficiently small, the unique stationary density is $\psi_0 = e^{-\kappa_1V_1}/Z$. Upon increasing $|\kappa_2|$ and perturbing with a mean zero function $\eta(x,\theta)$ the stability of $\psi_0$ breaks and transitions to non-symmetric equilibria may be observed. The asymmetry of the equilibria depends on the sign of $\eta$ as seen in Figure \[fig:bifurcations\]. Figure \[fig:bifurcations\] shows long time numerical solutions to the IBVP subject to a mean zero perturbation for different interaction strengths $\kappa_2$, with $V_2$ fixed. In Figure \[fig:bif\_fig\_right\], the symmetric solution $\psi_0 = \exp(-\kappa_1 V_1)/Z$ was shown to be unstable as the interaction strength $\kappa_2$ was made increasingly negative. 
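The qualitative behaviour just described can be reproduced in outline with a very crude explicit scheme; the sketch below (uniform grid, constant scalar diffusion, forward-Euler time stepping with zero end-face fluxes, and illustrative parameter values) is an assumption-laden stand-in for the 2DChebClass computations used to produce the figures, included only to make the perturbation experiment concrete.

```python
import numpy as np

def evolve(rho0, x, V1, V2, kappa1, kappa2, D=1.0, dt=2e-5, t_end=2.0):
    """Forward-Euler, flux-form step of d_t rho = d_x[D(d_x rho + rho d_x(k1 V1 + k2 V2*rho))]."""
    h = x[1] - x[0]
    K = V2(x[:, None], x[None, :])
    rho = rho0.copy()
    for _ in range(int(t_end / dt)):
        mu = kappa1 * V1(x) + kappa2 * h * (K @ rho)               # mean-field potential
        rho_face = 0.5 * (rho[:-1] + rho[1:])
        J = -D * (np.diff(rho) / h + rho_face * np.diff(mu) / h)   # flux at interior faces
        div = np.zeros_like(rho)
        div[1:-1] = (J[1:] - J[:-1]) / h                           # zero flux at the two end faces
        div[0], div[-1] = J[0] / h, -J[-1] / h
        rho = rho - dt * div
    return rho

x = np.linspace(-0.5, 0.5, 100); h = x[1] - x[0]
psi0 = np.exp(-x**2); psi0 /= psi0.sum() * h                       # symmetric background state
eta = np.sin(2 * np.pi * x)                                        # mean-zero perturbation
rho_T = evolve(psi0 + 0.01 * eta, x, V1=lambda r: r**2,
               V2=lambda r, s: r * s, kappa1=1.0, kappa2=-3.0)
print("mass:", rho_T.sum() * h, "  first moment:", (x * rho_T).sum() * h)
```

A non-zero first moment at the final time indicates that the perturbed density has drifted towards one boundary, while for $|\kappa_2|$ below the critical value the first moment decays back towards zero.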
In particular by perturbing with a sinusoidal function with positive or negative sign, the stationary density can be shown to adhere to one boundary, thereby bifurcating from the previously symmetric solution $\psi_0$. The skewness of the density is controlled by the sign of the perturbation function $\eta$, and $\eta$ lies in the span of the eigenfunctions of $\mathcal{T}$; hence densities which adhere to the left boundary may be obtained by changing the sign of $\eta$. We predict the stable and symmetric branch to bifurcate at the critical interaction energy $\kappa_2 = -2.4$ (to 1 decimal place) which is the negative inverse of the smallest eigenvalue of $\psi_0^{-1}\mathcal{T}$ in $U = [-1/2,1/2]$. This is verified in Figure \[fig:bifurcation\_diagram\] and the transition between a stable symmetric density and a stable nonsymmetric one is observed in Figure \[fig:bif\_fig\_right\] for the curves labelled $\kappa_2 = -2$ and $\kappa_2 = -3$. In Figure \[fig:bif\_stable\_equilibria\_V2\_cosine\], we see how the uniform density may become unstable. Here $\psi_0 = N/(2\mathrm{L})$ where $2L$ is the length of the interval. We perturb with eigenvectors of $\mathcal{T}$ at increasing interaction strengths. The critical strength was $\kappa_{2\sharp} = 0.4$ (to 1 decimal place), the negative inverse of the smallest eigenvalue of $\psi_0^{-1}\mathcal{T}$. This is verified in Figure \[fig:bifurcation\_diagram\] and the transition between a stable uniform density and a stable multi-modal one is observed in Figure \[fig:bif\_stable\_equilibria\_V2\_cosine\] for the curves labelled $\kappa_2 = 0$ and $\kappa_2 = 0.5$. Existence & Uniqueness of Weak Solutions to Density with Full HI {#sec:existence_uniqueness_with_partial_HI} ================================================================ In this section we determine the existence and uniqueness of the weak density $\varrho(\vec{r},t)$ solving in the sense . To ease notation we suppress $\bm{A}[\vec{a}]$ as it may be trivially added (see Remark \[rem:final\_assumptions\]). We begin by determining some useful results: first, that $\varrho(\vec{r},t)$ is bounded above in $L^1(U)$ for all time by initial data $\varrho_0$ and second, the $L^1(U)$ norm of $\varrho$ is unity for all time and $\varrho(\vec{r},t)$ is non-negative. We will strengthen the non-negativity to strict positivity of $\varrho(\vec{r},t)$ in Section \[subsec:strict\_pos\_rho\]. The results in this section are analogous to those in [@chazelle2017well], [@greg_mckean_vlasov_torus] with the difference that the boundary conditions we consider are no-flux and the diffusion tensor is non-constant. Useful Results. --------------- We first introduce a smooth approximation of the absolute value function. \[def:def\_of\_chi\_approx\_abs\] Let $\epsilon>0$ and define the convex $C^2$ approximation of $|\cdot|$ by $$\begin{aligned} \chi_\epsilon(\psi) = \begin{cases} |\psi| \quad \text{ for } \quad |\psi|>\epsilon,\\ -\tfrac{\psi^4}{8 \epsilon^3 }+\tfrac{3 \psi^2}{4 \epsilon}+ \tfrac{3 \epsilon}{8} \quad \text{ for } \quad |\psi|\leq \epsilon. \end{cases}\end{aligned}$$ We now present our first result concerning the boundedness of the $L^1$ norm of $\varrho$ in terms of the initial data $\varrho_0$.\ If $\varrho\in C^1([0,\infty);C^2(U))$ is a solution of with $\varrho_0\in L^1(U)$ then $\|\varrho(t)\|_{L^1(U)}\leq \|\varrho_0\|_{L^1(U)}$ for all time $t\geq 0$. 
Multiplying by $\chi_\epsilon'(\varrho)$, integrating and using the divergence theorem and chain rule, we have $$\begin{aligned} &\der[]{t}\int_U\mathrm{d}\vec{r}\,\chi_\epsilon(\varrho)+\|\bm{D}^{1/2}\nabla_{\vec{r}}\varrho\,[\chi_\epsilon''(\varrho)]^{1/2}\|_{L^2(U)}^2\nonumber\\ &=-\int\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\varrho\, \chi_\epsilon''(\varrho)\cdot[\varrho\,\bm{D}(\vec{r})\nabla_{\vec{r}}(\kappa _1V_1(\vec{r})+\kappa _2[V_2\star\varrho](\vec{r}) )].\end{aligned}$$ Now by Hölder’s inequality and then Young’s inequality $$\begin{aligned} &\der[]{t}\int_U\mathrm{d}\vec{r}\,\chi_\epsilon(\varrho)+\|\bm{D}^{1/2}\nabla_{\vec{r}}\varrho\,[\chi_\epsilon''(\varrho)]^{1/2}\|_{L^2(U)}^2\nonumber\\ &\leq \|\bm{D}^{1/2}\nabla_{\vec{r}}\varrho [\chi_\epsilon''(\varrho)]^{1/2}\|_{L^2(U)} \times \| [\chi_\epsilon''(\varrho)]^{1/2}\varrho \,\bm{D}^{1/2}\nabla_{\vec{r}}(\kappa _1V_1+\kappa _2[V_2\star \varrho] )\|_{L^2(U)}\nonumber\\ &\leq \tfrac{1}{2}\|\bm{D}^{1/2}\nabla_{\vec{r}}\varrho [\chi_\epsilon''(\varrho)]^{1/2}\|_{L^2(U)}^2 +\tfrac{1}{2}\| [\chi_\epsilon''(\varrho)]^{1/2}\varrho\, \bm{D}^{1/2}\nabla_{\vec{r}}(\kappa _1V_1+\kappa _2[V_2\star \varrho] )\|_{L^2(U)}^2.\end{aligned}$$ Note there are no boundary terms due to the condition $\Pi[\varrho]\cdot\vec{n}=0$ on $\partial U$. Altogether this implies the inequality $$\begin{aligned} &\der[]{t}\int_U\mathrm{d}\vec{r}\,\chi_\epsilon(\varrho)+\tfrac{1}{2}\|\bm{D}^{1/2}\nabla_{\vec{r}}\varrho\,\chi_\epsilon''(\varrho)^{1/2}\|_{L^2(U)}^2\nonumber\\ &\leq \tfrac{1}{2} \| [\chi_\epsilon''(\varrho)]^{1/2}\varrho\, \bm{D}^{1/2}\nabla_{\vec{r}}(\kappa _1V_1+\kappa _2[V_2\star\varrho] )\|_{L^2(U)}^2\nonumber\\ &\leq \tfrac{1}{2}\| \bm{D}^{1/2}\nabla_{\vec{r}}(\kappa _1V_1+\kappa _2[V_2\star\varrho] )\|_{L^\infty}^2 \| [\chi_\epsilon''(\varrho)]^{1/2} \varrho \|_{L^2}^2\nonumber\\ &\leq c_0 \| [\chi_\epsilon''(\varrho)]^{1/2} \varrho \|_{L^2}^2 (1+\|\varrho\|_{L^1(U)}^2)\label{eq:ddt_chi_rho_bound}\end{aligned}$$ for the constant $c_0 = 2\mu_{\max} \max\{|\kappa_1|^2 \|\nabla_{\vec{r}}V_1\|_{L^\infty}^2,|\kappa_2|^2 \|V_2\|_{L^\infty}^2\}$. It is an elementary calculation to show that $$\begin{aligned} \varrho^2\chi_\epsilon''(\varrho) = \tfrac{3\varrho^2}{2\epsilon} - \tfrac{3\varrho^4}{2\epsilon^3}\end{aligned}$$ for $\varrho\leq \epsilon$. With this, and the fact that $\chi_\epsilon''(\varrho) = 0$ for $\varrho>\epsilon$, we have $$\begin{aligned} \| [\chi_\epsilon''(\varrho)]^{1/2} \varrho \|_{L^2}^2 &= \int_U \mathrm{d}\vec{r}\, \varrho^2 \chi_\epsilon''(\varrho) \mathbb{I}_{\varrho\leq \epsilon} + \int_U \mathrm{d}\vec{r}\, \varrho^2 \chi_\epsilon''(\varrho) \mathbb{I}_{\varrho > \epsilon} \nonumber \\ & = \int_U \mathrm{d}\vec{r}\, \frac{3 \varrho^2(\epsilon^2 - \varrho^2)}{2 \epsilon^3} \mathbb{I}_{\varrho\leq \epsilon} \leq \int_U \mathrm{d}\vec{r}\, \frac{3 \epsilon}{2}\mathbb{I}_{\varrho\leq \epsilon} \leq c_1\epsilon \label{eq:sqrt_chi''_rho_L2}\end{aligned}$$ for some constant $c_1$ dependent on $U$. Applying Grönwall’s lemma to $\eta(\cdot)$, a non-negative, absolutely continuous function on $[0,T]$ which satisfies, for a.e. $t$, $$\begin{aligned} \eta'(t)\leq \phi(t)\eta(t) + \psi(t)\end{aligned}$$ where $\phi$, $\psi$ are non-negative and integrable functions on $[0,T]$, gives $$\begin{aligned} \label{eq:statement_gronwall} \eta(t)\leq e^{\int_0^t\mathrm{d}s\,\phi(s)}\Big[ \eta(0)+ \int_0^t\mathrm{d}s\, \psi(s)\Big].\end{aligned}$$ Observe that $\|\varrho\|_{L^1(U)}\leq \int_U\mathrm{d}\vec{r}\,\chi_\epsilon(\varrho)$. 
Using this with , and with $\eta(t) = \phi(t) =c_1\epsilon \int_U\mathrm{d}\vec{r}\,\chi_\epsilon(\varrho)$ and $\psi(t) = c_1\epsilon$ we obtain $$\begin{aligned} \int_U\mathrm{d}\vec{r}\,\chi_\epsilon(\varrho)\leq \left(\int_U\mathrm{d}\vec{r}\,\chi_\epsilon(\varrho_0)+c_1\epsilon \,t\right)\, e^{c_1\epsilon\int_0^t\mathrm{d}s\,\int_U\mathrm{d}\vec{r}\,\chi_\epsilon(\varrho(\vec{r},s))}.\end{aligned}$$ Now since $\varrho$ is assumed to be continuous in time on $[0,\infty)$ the integral in the exponential is finite. Therefore taking $\epsilon \to 0$ one obtains $$\begin{aligned} \|\varrho\|_{L^1}\leq \|\varrho_0\|_{L^1}\end{aligned}$$ for every $t>0$. \[cor:L\_1\_varrho\_is\_1\] If $ \varrho\in C^1([0,\infty);C^2(U))$ is a solution of with $\varrho_0$ a probability density, that is $\varrho_0\geq 0$ and $\int_U \mathrm{d}\vec{r}\,\varrho_0(\vec{r})=1$, then $\|\varrho(t)\|_{L^1(U)}=1$ and $\varrho(t)\geq 0$ in $U$ for all time $t\geq 0$. The argument is a standard one. Since, due to no-flux boundary conditions, $ \der[]{t} \int \mathrm{d}\vec{r} \, \rho(\vec{r},t) = 0$, we have $$\begin{aligned} 1 = \int_U\mathrm{d}\vec{r}\, \varrho_0(\vec{r}) = \int_U\mathrm{d}\vec{r}\, \varrho(\vec{r},t) \leq \|\varrho(t)\|_{L^1(U)} \leq \|\varrho(0)\|_{L^1(U)} = \int_U\mathrm{d}\vec{r}\, \varrho_0(\vec{r}) = 1,\end{aligned}$$ so $\|\varrho(t)\|_{L^1(U)} = 1$. Also observe the two equalities $$\begin{aligned} &1 = \int_U\mathrm{d}\vec{r}\, \varrho(\vec{r},t) = \int_U\mathrm{d}\vec{r}\, \varrho(\vec{r},t)\mathbb{I}_{\varrho\geq 0} + \int_U\mathrm{d}\vec{r}\, \varrho(\vec{r},t)\mathbb{I}_{\varrho< 0},\nonumber\\ &1 = \int_U\mathrm{d}\vec{r}\, |\varrho(\vec{r},t)| = \int_U\mathrm{d}\vec{r}\, \varrho(\vec{r},t)\mathbb{I}_{\varrho\geq 0} - \int_U\mathrm{d}\vec{r}\, \varrho(\vec{r},t)\mathbb{I}_{\varrho< 0},\nonumber\end{aligned}$$ where in the second line we have used the definition of the absolute value function. Subtracting these equalities we obtain $$\begin{aligned} 2\int_U\mathrm{d}\vec{r}\, \varrho(\vec{r},t)\mathbb{I}_{\varrho< 0} = 0\end{aligned}$$ which implies $ \varrho(\vec{r},t)\geq 0$ almost everywhere in $U$. Non-negativity of $\varrho$ on all of $U$ follows from continuity. With these results we may continue to determine the existence and uniqueness of weak densities solving in the sense . The method we use follows [@chazelle2017well] but here we must include calculations for the confining potential $V_1^{\text{eff}}$ (which for ease of notation is written $V_1$ for each $\vec{a}(\vec{r},t)$) and a much wider class of two-body potentials $V_2$ which are not necessarily step functions. To start we introduce , the frozen version of , indexed by $n\in \mathbb{N}$, by substituting $\varrho = u_n$ everywhere except in the convolution term where we substitute $\varrho = u_{n-1}$. Each equation is parametrised by $n$, a linear parabolic PDE for the unknown $u_n$ in terms of the solution $u_{n-1}$ at the previous index, for which we have existence and uniqueness of weak solutions for each $n$. The remainder of the argument is to show $\lim_{n\to\infty} u_{n}$ exists and is a limit point solving the weak problem . In this section we will make references to Appendix \[sec:classical\_paraboliv\_pde\] for results and definitions required for $u_{n}\in H^1(U)$, which differ slightly from the standard arguments found in textbooks for classical linear PDE theory (e.g. [@evans2002partial]). Energy Estimates. 
----------------- The results of Appendix \[sec:classical\_paraboliv\_pde\] are that the initial boundary value problem $$\begin{aligned} \label{eq:well_posed_un_ibvp} \begin{cases} \quad\partial_t u_n-\nabla_{\vec{r}}\cdot[\bm{D}\nabla_{\vec{r}}u_n]=\nabla_{\vec{r}}\cdot[u_n\,\bm{D}\nabla_{\vec{r}}(\kappa _1V_1+\kappa _2V_2\star u_{n-1})],\\ \qquad \qquad \qquad \qquad \Xi [u_n]\cdot\vec{n} = 0 \quad \text{ on } \partial U \times [0, T],\\ \qquad\Xi[u_n]:= \bm{D}\,(\nabla_{\vec{r}}u_n+ u_n\, \nabla_{\vec{r}}(\kappa_1 V_1(\vec{r},t)+ \kappa_2 V_2\star u_{n-1})),\\ \qquad \qquad \qquad \qquad \quad u_n = \varrho_0 \quad \text{ on } U\times \left\lbrace t=0\right\rbrace \end{cases}\end{aligned}$$ is well posed, and there exists weak solutions $u_{n}$ for each $n\in\mathbb{N}$ in the sense . All that remains is to take the limit $n\to \infty$ to recover the original Smoluchowski equation . We start by deriving our first estimate on energy of $u_n$. To ease notation we derive all results with the time dependence on $\bm{D}$ suppressed since time may be trivially added to the exposition. Additionally, for a stationary density one has $$\begin{aligned} \lim_{t\to\infty}\bm{D}(\vec{r},t) = \left(\bm{1} + \int \mathrm{d}\vec{r}' g(\vec{r},\vec{r}')\bm{Z}_{1}(\vec{r},\vec{r}')\varrho(\vec{r}')\right)^{-1} \end{aligned}$$ which is a positive definite tensor and hence diagonalisable, and may be bounded by its smallest and largest eigenvalues which are positive and finite for $t\to\infty$. Hence energy estimates remain valid for $0<t\leq T$ when provided in terms of $\mu_{\min}$ and $\mu_{\max}$, both eigenvalues which depend on time but always remain positive and finite. It will be seen that a natural dual space to $H^1(U)$ is provided by the no-flux condition. In particular we denote by $H^{-1}(U)$ the dual space of $H^{1}(U)$, this is due to the divergence theorem and the boundary condition $\Xi[u_n]\cdot\vec{n}=0$ on $\partial U \times [0,T]$, there is no boundary term, and the normal characterisation of $H^{-1}=(H^1_0)^\ast$ carries over to $H^{1}(U)$. We now obtain uniform estimates on $u_n$ in terms of the initial data $\varrho_0$ in all the required energy norms. The detailed calculations follow [@chazelle2017well] but take into account the confining potential and non-constant diffusion tensor $\boldsymbol{D}$. The explicit calculations can be found in RDMW’s PhD thesis [@rdmwthesisddft]. The first estimate is in $L^{\infty}([0,T];L^2(U))$ and $L^{2}([0,T];H^1(U))$ norms.\ \[prop:bound\_rhon\] Let $T>0$ and suppose $\{u_n\}_{n\geq 1}$ satisfies with $\varrho_0\in C^{\infty}(U)$ a probability density. Then there exists a constant $C(T)$, dependent on time and $\mu_{\max}$, such that $$\begin{aligned} \label{eq:L2_H_1_bound_for_rhon} \|u_n\|_{L^{\infty}([0,T];L^2(U))}+\|u_n\|_{L^{2}([0,T];H^1(U))}\leq C(T,\mu_{\max}) \|\varrho_{0}\|_{L^2(U)}.\end{aligned}$$ The second estimate is for $L^{\infty}([0,T]; H^1(U))$ and $L^2([0,T];L^2(U))$ norms.\ \[prop:un\_H1\_rho0\_bound\] Let $T>0$ and suppose $\{u_n\}_{n\geq 1}$ satisfies with $\varrho_0\in C^{\infty}(U)$ a probability density. 
Then there exists a constant $C(T)$, dependent on time, such that $$\begin{aligned} &\|u_n\|_{L^{\infty}([0,T]; H^1(U))}+\|\nabla_{\vec{r}}\cdot[\bm{D}\,\nabla_{\vec{r}}u_n]\|_{L^2([0,T];L^2(U))}^2\nonumber\\ &\leq C(T)(\|\varrho_0\|_{H^1(U)}^2+(1+\|\varrho_0\|_{L^2(U)})\|\varrho_0\|_{L^2(U)})^{1/2}.\end{aligned}$$ We now obtain strong convergence of $\left(u_n\right)_{n=1}^{\infty}$ by showing that it is a Cauchy sequence in a complete metric space.\ \[lem:rhon\_is\_cauchy\] Let $T>0$ and suppose $\{u_n\}_{n\geq1}$ satisfies with $\varrho_0\in C^\infty(U)$. Then there exists $\varrho\in L^1([0,T];L^1(U))$ such that $u_n\to \varrho$ in $L^1([0,T];L^1(U))$. Lastly we have the uniform estimate on the limit point $\varrho(\vec{r},t)$ in terms of the initial data $\varrho_0$.\ \[lem:weak\_conv\_results\] One has $\varrho\in L^2([0,T]; H^1(U))\cap L^\infty([0,T]; L^2(U))$ and $\partial_t\varrho\in L^2([0,T]; H^{-1}(U))$ with the uniform bound $$\begin{aligned} \label{eq:total_energy_bound} \|\varrho\|_{L^{\infty}([0,T];L^2(U))}+\|\varrho\|_{L^{2}([0,T];H^1(U))}+\|\partial_t\varrho\|_{L^2([0,T]; H^{-1}(U))}\leq C(T) \|\varrho_{0}\|_{L^2(U)}.\end{aligned}$$ Additionally there exists a subsequence $\{u_{n_k}\}_{k\geq 1}$ such that $$\begin{aligned} u_{n_k}\rightharpoonup \varrho& \quad \text{ in } L^2([0,T]; H^1(U)),\label{lim:rho_nk_in_H1_c}\\ \partial_tu_{n_k}\rightharpoonup \partial_t\varrho& \quad \text{ in } L^2([0,T]; H^{-1}(U)),\label{lim:partial_trho_nk_in_H_minus_1}\end{aligned}$$ where $\rightharpoonup$ denotes weak convergence. The nature of the convergence of the sequence $\{u_{n}\}_{n\geq 1}$ as $n\to\infty$ is consolidated into the following result.\ \[cor:summary\_of\_convergence\] There exists a subsequence $\{u_{n_k}\}_{k\geq 1}\subset \{u_{n}\}_{n\geq 1}$ and a function $\varrho\in L^2([0,T];H^1(U))$ with $\partial_t\varrho\in L^2([0,T];H^{-1}(U))$ such that $$\begin{aligned} u_n\to \varrho& \quad \text{ in } L^1([0,T]; L^1(U)),\\ u_{n_k}\rightharpoonup \varrho& \quad \text{(weakly) in } L^2([0,T]; H^1(U)),\\ \partial_t u_{n_k}\rightharpoonup \partial_t\varrho& \quad \text{(weakly) in } L^2([0,T]; H^{-1}(U)).\end{aligned}$$ We are now in a position to obtain the existence and uniqueness of weak solutions $\varrho(\vec{r},t)$. First we state a calculus result which will be useful when working with the weak formulation .\ \[lem:calculus\_of\_inner\_product\] Suppose $\varrho\in L^2([0,T]; H^1(U))$ and $\partial_t\varrho\in L^2([0,T]; H^{-1}(U))$. Then the mapping $$\begin{aligned} t\mapsto \|\varrho(t)\|_{L^2(U)}^2\end{aligned}$$ is absolutely continuous with $$\begin{aligned} \der[]{t}\|\varrho(t)\|_{L^2(U)}^2 = 2\langle \partial_t\varrho(t),\, \varrho(t)\rangle\end{aligned}$$ for a.e. $t\in [0,T]$. Since the condition $\Pi[\varrho]\cdot \vec{n} = 0$ on $\partial U \times [0,T]$ guarantees integration by parts without extra terms, the proof is identical to the textbook one [@evans2002partial]. We are now in a position to prove existence of the weak solution to . Existence and Uniqueness. ------------------------- By using Propositions \[prop:bound\_rhon\], \[prop:un\_H1\_rho0\_bound\] and Lemmas \[lem:rhon\_is\_cauchy\], \[lem:weak\_conv\_results\], \[lem:calculus\_of\_inner\_product\] we may obtain the following theorem.\ (Existence and Uniqueness of Weak Density)\[thm:existence\_and\_uniqueness\] Let $\varrho_0 \in C^\infty(U)$, $\varrho_0\geq 0$ and $\int_U \mathrm{d}\vec{r}\, \varrho_0(\vec{r})=1$.
Then there exists a unique weak solution $\varrho\in L^\infty([0,T];L^2(U))\cap L^2([0,T]; H^1(U))$, with $\partial_t\varrho\in L^2([0,T]; H^{-1}(U))$, to equation in the sense with the estimate . Multiply by $\eta\in L^2([0,T]; H^1(U))$ after setting $n=n_k\in \mathbb{N}$ and integrate over $U_T$ to obtain $$\begin{aligned} &\int_0^T\mathrm{d}t\, \langle \partial_tu_{n_k},\, \eta(t) \rangle\nonumber\\ & + \int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta \cdot\bm{D}\,[\nabla_{\vec{r}}u_{n_k}+ u_{n_k}\nabla_{\vec{r}}(\kappa_1V_1+\kappa_2\,V_2\star u_{n_k-1})]=0.\end{aligned}$$ For the transport term we write $$\begin{aligned} &\int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta \cdot u_{n_k}\bm{D}\,\nabla_{\vec{r}}[\kappa_1V_1+\kappa_2\,V_2\star u_{n_k-1}]\nonumber\\ & = \int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta \cdot\,(u_{n_k}-\varrho)\bm{D}\,\nabla_{\vec{r}}[\kappa_1V_1+\kappa_2\,V_2\star u_{n_k-1}]\nonumber\\ &\quad +\int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta \cdot \,\varrho \bm{D}\,\nabla_{\vec{r}}[\kappa_1V_1+\kappa_2\,V_2\star (u_{n_k-1}-\varrho)] +\int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta\cdot \,\varrho \bm{D}\,\nabla_{\vec{r}}[\kappa_2\,V_2\star \varrho].\end{aligned}$$ Note that $u_{n_k}\rightharpoonup\varrho$ in $L^2([0,T];H^1(U))\subset L^2([0,T];L^2(U))$ and $(\nabla_{\vec{r}}\cdot\bm{D})\cdot\nabla_{\vec{r}}[\kappa_1V_1(\vec{r})+\kappa_2\,V_2\star u_{n_k-1}]$ is uniformly bounded and so $$\begin{aligned} \int_0^T\mathrm{d}t\, \int_U\mathrm{d}\vec{r}\,\nabla_{\vec{r}}^\top\eta \, (u_{n_k}-\varrho)\bm{D}\,\nabla_{\vec{r}}[\kappa_1V_1+\kappa_2\,V_2\star u_{n_k-1}]\to 0\end{aligned}$$ as $k\to \infty$. Now by H[ö]{}lder’s inequality one has $$\begin{aligned} & \int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta\cdot \,\varrho\,\bm{D}\,\nabla_{\vec{r}}\kappa_2(V_2\star (u_{n_k-1}-\varrho)) \leq \mu_{\max}\|\nabla_{\vec{r}}\eta \|_{L^2([0,T]; L^2(U))}\|\nabla_{\vec{r}}V_2\|_{L^{\infty}(U)}\nonumber\\ &\times \left( \int_0^T\mathrm{d}t\,\|u_{n_k-1}(t)-\varrho(t)\|_{L^1(U)}^2\right)^{1/2}\to 0.\end{aligned}$$ Now note that by Lemma \[lem:rhon\_is\_cauchy\], $\|u_{n}(t)\|_{L^1(U)}$ is uniformly bounded and therefore $$\begin{aligned} \int_0^T\mathrm{d}t\, \|u_{n_k-1}(t)-\varrho(t)\|_{L^1(U)}^2\leq C\,\|u_{n_k-1}-\varrho\|_{L^1([0,T];L^1(U))} \to 0.\end{aligned}$$ Therefore we have $$\begin{aligned} \int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta\, \cdot \,u_{n_k}\bm{D}\,\nabla_{\vec{r}}[\kappa_1V_1+\kappa_2\,V_2\star u_{n_k-1}] \to\int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta \cdot\varrho \bm{D}\,\nabla_{\vec{r}}[\kappa_1V_1+\kappa_2\,V_2\star \varrho]\end{aligned}$$ as $k\to \infty$. By the weak convergence results of Lemma \[lem:weak\_conv\_results\] we have $$\begin{aligned} \int_0^T\mathrm{d}t\, \langle \partial_t u_{n_k},\, \eta(t) \rangle &\to \int_0^T\mathrm{d}t\, \langle \partial_t\varrho,\, \eta(t) \rangle,\nonumber\\ \int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta \cdot\bm{D}\,\nabla_{\vec{r}}u_{n_k} &\to \int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta \cdot\bm{D}\,\nabla_{\vec{r}}\varrho\end{aligned}$$ as $k\to \infty$. This establishes existence of a weak solution to in the sense . Establishing $\varrho(0)=\varrho_0$ is a routine argument (see [@evans2002partial]).
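For readers who want a concrete picture of the iteration $u_{n-1}\mapsto u_n$ used above, the following one-dimensional sketch is entirely ours: the scalar constant $D$, Gaussian $V_1$ and $V_2$, the explicit finite-difference discretisation and the crude no-flux treatment are all illustrative assumptions, not choices made in this paper. Each pass freezes the convolution at the previous iterate and time-steps the resulting linear equation; the printed $L^1$ distance between successive iterates plays the role of the Cauchy property of Lemma \[lem:rhon\_is\_cauchy\].

```python
import numpy as np

# Minimal 1-D illustration (ours) of the "frozen" Picard iteration u_{n-1} -> u_n:
# each pass solves the *linear* equation in which the convolution term uses the
# previous iterate only.  All numerical choices here are illustrative assumptions.
L, N = 1.0, 200
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt, T = 1e-5, 0.02
D, kappa1, kappa2 = 1.0, 1.0, 0.5
V1 = 10.0 * (x - 0.5) ** 2                           # confining potential V_1
V2 = lambda r: np.exp(-(r / 0.1) ** 2)               # two-body kernel V_2

def convolve(u):
    """Discrete convolution (V_2 * u)(x_i) = sum_j V_2(x_i - x_j) u(x_j) dx."""
    return np.array([np.sum(V2(xi - x) * u) * dx for xi in x])

def frozen_step(u0, u_prev):
    """Evolve the linear (frozen) equation on [0, T]; V_2 * u_prev is held fixed."""
    drift = kappa1 * V1 + kappa2 * convolve(u_prev)
    dVdx = np.gradient(drift, dx)
    u = u0.copy()
    for _ in range(int(T / dt)):
        flux = D * (np.gradient(u, dx) + u * dVdx)   # Xi[u]
        flux[0] = flux[-1] = 0.0                     # no-flux boundary condition
        u = u + dt * np.gradient(flux, dx)
    return u

rho0 = np.exp(-((x - 0.5) / 0.15) ** 2)
rho0 /= np.sum(rho0) * dx                            # normalise to a probability density
u_prev = rho0.copy()
for n in range(1, 30):
    u_next = frozen_step(rho0, u_prev)
    err = np.sum(np.abs(u_next - u_prev)) * dx       # L^1 distance between iterates
    u_prev = u_next
    if err < 1e-10:
        break
print(f"stopped after {n} iterations, L1 change {err:.2e}, mass {np.sum(u_prev) * dx:.4f}")
```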
To prove uniqueness we set $\xi = \varrho_1-\varrho_2$ where $\varrho_1,\varrho_2$ are weak solutions then we have $$\begin{aligned} &\int_0^T\mathrm{d}t\, \langle \partial_t \xi(t), \, \eta(t) \rangle\nonumber\\ & +\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta\cdot \bm{D}\,[\nabla_{\vec{r}}\xi +\xi\,\nabla_{\vec{r}}\kappa _1V_1+\kappa _2\varrho_1\nabla_{\vec{r}}\,V_2\star\varrho_1-\kappa_2\varrho_1\nabla_{\vec{r}}\,V_2\star\varrho_2]=0\end{aligned}$$ Adding and subtracting $\int_0^T\mathrm{d}t\,\int_U\mathrm{d}\vec{r}'\,\nabla_{\vec{r}}\eta\cdot \kappa_1\varrho_2\nabla_{\vec{r}}V_2\star \varrho_1$ we find $$\begin{aligned} &\int_0^T\mathrm{d}t\, \langle \partial_t \xi(t), \, \eta(t) \rangle + \int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta\cdot \bm{D}\,\nabla_{\vec{r}}\xi \nonumber\\ &=-\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, \nabla_{\vec{r}}\eta\cdot \bm{D}\,[\xi\,\nabla_{\vec{r}}\kappa _1V_1+\kappa _2\xi\nabla_{\vec{r}}\,V_2\star\varrho_1-\kappa_2\varrho_2\nabla_{\vec{r}}\,V_2\star\xi]\nonumber\\ &\leq \int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, |\nabla_{\vec{r}}\eta\cdot \bm{D}^{1/2} \bm{D}^{1/2}\,[\xi\,\nabla_{\vec{r}}\kappa _1V_1+\kappa _2\xi\nabla_{\vec{r}}\,V_2\star\varrho_1-\kappa_2\varrho_2\nabla_{\vec{r}}\,V_2\star\xi]|.\label{eq:uniqueness_bound_1}\end{aligned}$$ By Young’s inequality we have $$\begin{aligned} &\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, |\nabla_{\vec{r}}\eta\cdot \bm{D}^{1/2} \bm{D}^{1/2}\,[\xi\,\nabla_{\vec{r}}\kappa _1V_1+\kappa _2\xi\nabla_{\vec{r}}\,V_2\star\varrho_1-\kappa_2\varrho_2\nabla_{\vec{r}}\,V_2\star\xi]|\nonumber\\ &\leq \int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, |\bm{D}^{1/2} \nabla_{\vec{r}}\eta |^2\nonumber\\ & +\tfrac{1}{4}\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, | \bm{D}^{1/2} [\xi\,\nabla_{\vec{r}}\kappa _1V_1+\kappa _2\xi\nabla_{\vec{r}}\,V_2\star\varrho_1-\kappa_2\varrho_2\nabla_{\vec{r}}\,V_2\star\xi ] |^2.\end{aligned}$$ Using the triangle inequality and Young’s inequality we expand the absolute value inside the integral $$\begin{aligned} &\tfrac{1}{4}\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, |\bm{D}^{1/2} [ \xi\,\nabla_{\vec{r}}\kappa _1V_1+\kappa _2\xi\nabla_{\vec{r}}\,V_2\star\varrho_1-\kappa_2\varrho_2\nabla_{\vec{r}}\,V_2\star\xi ]|^2\nonumber\\ &\leq \tfrac{1}{4}\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, |\bm{D}^{1/2} \xi\,\nabla_{\vec{r}}\kappa _1V_1|^2 +\kappa _2^2|\bm{D}^{1/2} [ \xi\nabla_{\vec{r}}\,V_2\star\varrho_1-\varrho_2\nabla_{\vec{r}}\,V_2\star\xi ] |^2 \nonumber\\ &\leq \tfrac{1}{4}\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r} \left( |\bm{D}^{1/2}\xi\,\nabla_{\vec{r}}\kappa _1V_1|^2 +2\kappa _2^2|\bm{D}^{1/2}\xi\nabla_{\vec{r}}\,V_2\star\varrho_1|^2 +2\kappa _2^2|\bm{D}^{1/2}\varrho_2\nabla_{\vec{r}}\,V_2\star\xi|^2\right)\nonumber\\ &\leq \tfrac{\mu_{\max}}{4}\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, |\xi\,\nabla_{\vec{r}}\kappa _1V_1|^2+2\kappa _2^2|\xi\nabla_{\vec{r}}\,V_2\star\varrho_1|^2+2\kappa _2^2|\varrho_2\nabla_{\vec{r}}\,V_2\star\xi|^2. 
\label{eq:uniqueness_bound_2}\end{aligned}$$ Estimating each of these terms, first $$\begin{aligned} \label{eq:uniqueness_bound_3} \int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, |\xi\,\nabla_{\vec{r}}\kappa _1V_1|^2\leq \kappa_1^2\|\nabla_{\vec{r}}V_1\|_{L^\infty(U)}^2\|\xi\|_{L^2([0,T];L^2(U))}^2.\end{aligned}$$ Second, $$\begin{aligned} \label{eq:uniqueness_bound_4} 2\kappa _2^2\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\, |\xi\nabla_{\vec{r}}\,V_2\star\varrho_1|^2 \leq 2\kappa _2^2|U|\,\|\nabla_{\vec{r}}V_2\|_{L^\infty(U)}^2\|\xi\|_{L^2([0,T];L^2(U))}^2,\end{aligned}$$ and third $$\begin{aligned} &2\kappa _2^2\int_0^T\mathrm{d}t\, \int_U \mathrm{d}\vec{r}\,|\varrho_2\nabla_{\vec{r}}\,V_2\star\xi|^2\nonumber\\ &\leq 2\kappa _2^2|U| \|\varrho_2\|_{L^\infty([0,T];L^2(U))}^2 \|\nabla_{\vec{r}}V_2\|_{L^\infty(U)}^2\|\xi\|_{L^2([0,T];L^2(U))}^2.\label{eq:uniqueness_bound_5}\end{aligned}$$ Combining , , , , setting $\eta = \xi$, and using the boundedness of $\varrho_2$ in terms of its initial data, we obtain $$\begin{aligned} \int_0^T\mathrm{d}t\, \langle \partial_t \xi(t),\, \xi(t)\rangle \leq (C_1(T)+C_2(T)\|\varrho_0\|_{L^2(U)}^2)\|\xi \|_{L^2([0,T];L^2(U))}^2\end{aligned}$$ for some constants $C_1(T)$, $C_2(T)$ dependent on $U$. This holds for all $T$, so it must be the case that $$\begin{aligned} \der[]{t}\|\xi(t)\|_{L^2(U)}^2\leq (C_1(T)+C_2(T)\|\varrho_0\|_{L^2(U)}^2)\|\xi(t)\|_{L^2(U)}^2\end{aligned}$$ implying by Gr[ö]{}nwall’s lemma that $$\begin{aligned} \|\xi(t)\|_{L^2(U)}^2 \leq e^{(C_1(T)+C_2(T)\|\varrho_0\|_{L^2(U)}^2)\,t}\,\|\xi(0)\|_{L^2(U)}^2\end{aligned}$$ for a.e. $t\in [0,T]$. However, $\xi(0)\equiv 0$, hence $\|\varrho_1(t)-\varrho_2(t)\|_{L^2(U)}=0$ for all $t\in [0,T]$. Strict Positivity of $\varrho$. {#subsec:strict_pos_rho} ------------------------------- With the existence of weak solutions we may establish positivity of $\varrho$ solving with reference to [@bogachev2015fokker]. In particular, since $\boldsymbol{D}$ is positive definite and $\vec{b}$ is uniformly bounded, one has $$\begin{aligned} \sup_{\vec{r}\in U}\varrho(\vec{r},t_1)<C \inf_{\vec{r}\in U}\varrho(\vec{r},t_2)\end{aligned}$$ for $0<t_1<t_2<\infty$, where $C$ is a constant depending on $d$ (the dimension) and $\mu_{\max}$. Since $\varrho$ is non-negative with unit mass for all time, $\sup_{\vec{r}\in U}\varrho(\vec{r},t_1)>0$, so $\inf_{\vec{r}\in U}\varrho(\vec{r},t_2)$ must be positive and hence $\varrho$ is positive.\ Discussion & Open Problems {#sec:discussion} ========================== In this paper, the global asymptotic stability and well-posedness of overdamped DDFT with two-body HI were studied. It was shown that bifurcations occur in DDFT systems with no-flux boundary conditions at an infinite and discrete set of critical energies equal to eigenvalues of the two-body interaction integral operator $\mathcal{R}$. Additionally we have shown that a weak solution for the density with no-flux boundary conditions and a strong solution to the flux equation exist and are unique under sensible assumptions on the confining and interaction potentials and initial data $V_1$, $V_2$ and $\varrho(\vec{r},0)$ respectively. Assuming a classical solution to the DDFT we also derived *a priori* convergence estimates in $L^2$ and relative entropy, the latter restricted to convex two-body potentials. Well-posedness and global asymptotic stability of the phase space equation for the time evolution of $f(\vec{r},\vec{p},t)$ remains open (see [@goddard2012overdamped Proposition 2.1] for the evolution equation for $f(\vec{r},\vec{p},t)$).
It is of similar form to the Vlasov equation considered by [@degond1986global] but with a Hermite dissipative term and a modified nonlocal term in the momentum variable $\vec{p}$ dependent on the HI tensors. To progress further, some maximum principles on $f$ solving the linearised version of the phase space equation must be found. Additionally, the existence results for the overdamped equations considered here may be upgraded to more regular solutions by routine arguments. We also note that the present analysis is based on the Smoluchowski equation rigorously derived from the phase space Fokker-Planck equation using homogenisation methods [@goddard2012overdamped]. As an alternative to this, assuming inertia is small altogether, or if one is interested only in very short times to begin with, the system of interacting particles may be considered solely in configuration space. Only the positions (and not the momenta) of a system of interacting Brownian particles are then taken into account, with a Smoluchowski equation as in [@rex2009dynamical], and the underlying Langevin dynamics contain only velocity equations for each particle, which are usually written down *a posteriori*. The justification for this is that the momentum distribution is assumed to have a minor role in the dynamical description of the fluid density, and indeed is taken to be irrelevant at the microscopic level. This Brownian approximation may also hold for highly dense suspensions, since in dense Newtonian systems there is a fast transfer of momentum and kinetic energy from the particle collisions, and this effect may be accounted for most efficiently by the bath in the Brownian dynamics with a non-constant diffusion tensor. It is known, however, that the one-body Smoluchowski equation in [@rex2009dynamical] does not equate to equations - which are obtained in the rigorous overdamped limit starting from the Newtonian dynamics. Intuitively this is because the two-body assumption for the HI ($\boldsymbol{\Gamma}$) and mobility ($\boldsymbol{D}$) tensors and the matrix inversion $\boldsymbol{D} = \boldsymbol{\Gamma}^{-1}$ are not commutable operations; even if $\boldsymbol{D}$ is two-body then $\text{det}(\boldsymbol{D})$ is not. A flow chart demonstrating the permitted commutations between various formalisms is included in [@goddard2012overdamped]. The nonequivalence of the two Smoluchowski equations is not considered here, and therefore a natural extension for future work would be to determine the existence, uniqueness and regularity of the density starting from [@rex2009dynamical] as well as the corresponding conditions for linear stability. Finally we remark that a well-posedness analysis of DDFT equations of the form to include a hard-sphere contribution to the free energy by fundamental measure theory (FMT) e.g. Rosenfeld [@rosenfeld1989free] or Roth [@roth2010fundamental] would be very interesting.\ Classical Linear Parabolic PDE {#sec:classical_paraboliv_pde} ============================== The first goal is to derive a similar set of estimates as [@chazelle2017well Lemma 3.5, Lemma 3.7]. The standard argument is to set up a sequence of linear parabolic PDEs. Let $U$ be a bounded and open subset of $\mathbb{R}^d$ and set $U_T=U\times (0,\,T]$ for some time $T>0$.
Now consider the linear parabolic equation $$\begin{aligned} \label{eq:linear_classical_parabolic_eqn} \partial_t u_n-\nabla_{\vec{r}}\cdot[\bm{D}\nabla_{\vec{r}}u_n]=\nabla_{\vec{r}}\cdot[u_n\,\bm{D}\nabla_{\vec{r}}(\kappa _1V_1+\kappa _2V_2\star u_{n-1})].\end{aligned}$$ In general $d$ dimensions we are in the divergence form of the parabolic PDE $$\begin{aligned} \begin{cases} \label{eq:ibvp_pde_for_rho_lin} \qquad\partial_t u_n+Lu_n = 0 \quad \text{ in } U_T,\\ \Xi [u_n]\cdot\vec{n} = 0 \quad \text{ on } \partial U \times [0, T],\\ \qquad u_n = \varrho_0 \quad \text{ on } U\times \left\lbrace t=0\right\rbrace \end{cases}\end{aligned}$$ where $\partial U$ is a $C^1$ boundary with unit normal $\vec{n}$. We define $L$ to be the linear differential operator given by $$\begin{aligned} &Lu_n := -\sum_{ij=1}^d\partial_{r_j}(\bm{D}_{ij}(\vec{r},t)\partial_{r_i}u_n)+\sum_{i=1}^d b_i(\vec{r})\partial_{r_i}u_n+c(\vec{r})u_n,\label{eq:def_of_linear_parabolic_op}\\ & \vec{b}(\vec{r}):=-\bm{D}(\vec{r},t)\nabla_{\vec{r}}(\kappa_1V_1(\vec{r})+\kappa _2 [V_2\star u_{n-1}]), \\ & c(\vec{r}):=-\nabla_{\vec{r}}\cdot (\bm{D}(\vec{r},t)\nabla_{\vec{r}}(\kappa_1V_1(\vec{r})+\kappa _2[V_2\star u_{n-1}]),\\ & \Xi[u_n]:= \bm{D}(\vec{r},t)\,(\nabla_{\vec{r}}u_n + u_n\, \nabla_{\vec{r}}(\kappa_1 V_1(\vec{r},t)+ \kappa_2 \,V_2\star u_{n-1})).\end{aligned}$$ Since $\bm{D}(\vec{r},t)$ is assumed to positive definite, there exists $\theta$ for every $\vec{r}$, $\xi$ such that $\xi^\top D(\vec{r},t) \xi\geq \theta |\xi |^2$, therefore the operator $\partial_t + L$ is uniformly parabolic. The Sobolev space of functions that permit the no-flux condition $\Xi [u_n]\cdot \vec{n}$ on $\partial U\times [0,T]$ is $H^1(U)$ which is reflexive, so that $\partial_t u$ interpreted as a bounded linear functional can be paired to an element in $H^1(U)$, and further by the Riez-Representation theorem there exists a unique element from $H^1(U)$ for the pairing. Additionally $H^1(U)$ is separable so that the (unique) weak solution may be approximated by a sequence of smooth functions coming from a countably dense subset. Weak Formulation. ----------------- Equation may be recast into weak form. We first introduce the bilinear operator, defined by $$\begin{aligned} B[u,v;t] := \int_U\mathrm{d}\vec{r}\, \nabla_{\vec{r}}v\cdot\bm{D}\,\nabla_{\vec{r}}u+\int_U\mathrm{d}\vec{r}\,\vec{b}(\vec{r})\cdot\nabla_{\vec{r}}u\, v+\int_U\mathrm{d}\vec{r}\,c(\vec{r})u\, v\end{aligned}$$ for $u,v\in H^1(U)$ and a.e.  $0\leq t \leq T$. We regard $u$ as a mapping $[\mathfrak{u}(t)](\vec{r}):=u(\vec{r},t)$ from the time interval $[0,T]$ to the function space $H^1(U)$. Now fixing $v \in H^{1}(U)$ we multiply by $v$ and integrate by parts to obtain the weak formulation $$\begin{aligned} \label{eq:weak_form_for_un} (\partial_t\mathfrak{u},\,v)+B[\mathfrak{u},\,v;t] = 0\end{aligned}$$ for each $0\leq t\leq T$ with $(\,,\,)$ denoting inner product in $L^2(U)$. Existence. ---------- The method to establish weak solution for the indexed problem is a textbook one. The method is described as follows. Fix $n$ then the evolution equation is a uniformly parabolic PDE for the unknown $u_n = u$. One now expands $\mathfrak{u} = \mathfrak{u}^m$ in a linear combination of $m$ eigenvectors of the operator $-\nabla_{\vec{r}}\cdot( \bm{D}(\vec{r})\nabla_{\vec{r}}w_k)$ for finite dimensional approximation to $u$. Since $\bm{D}_{ij}(\cdot)$ is a compact and symmetric operator then the eigenfunctions $w_k$ form an orthonormal basis of $L^2(U)$ with $w_k\in H^1(U)$. 
Thus $\mathfrak{u}^m$ is projected onto the finite dimensional subspace spanned by $\left\lbrace w_k\right\rbrace_{k=1}^m$. The standard existence theory of ODEs (the Carath[é]{}odory conditions with the Cauchy–Picard theorem) gives existence of weak solutions $\mathfrak{u}^m$ as expanded in the functions $\left\lbrace w_k\right\rbrace_{k=1}^m$ on a finite dimensional subspace of $H^1(U)$. All that remains is to pass to the limit $m\to \infty$ to realise the result in $H^1(U)$. To do this energy estimates are required on $\mathfrak{u}^m$, these are routine calculations except in the textbooks they are done for simpler boundary condition choices (homogeneous Dirichlet or periodic) and make use of Poincare’s inequality (holding only for $H^1_0$ functions). The calculations are similar and for the present boundary condition choice, a weaker Poincar[é]{}$-$Wirtinger inequality is used through out to obtain $$\begin{aligned} &\max_{0\leq t\leq T}\|\mathfrak{u}^m(t)\|_{L^2(U)}+\|\mathfrak{u}^m\|_{L^2([0,T]; H^{1} (U))}\nonumber\\ &+\|\pder[]{t}\mathfrak{u}^m\|_{L^2([0,T]; H^{-1}(U))} \leq c_1\|u_{0}\|_{L^2(U)}+ c_2.\label{eq:uniform_bound_on_three_norms}\end{aligned}$$ where $c_1$, $c_2$ are constants dependent on $T$ and $U$ and $\mu_{\min},\mu_{\max}$. Note that the left hand side of forms a bounded sequence in $\mathbb{R}$ and by the Bolzano–Weierstrass theorem there exists a convergent subsequence $\{\mathfrak{u}^{ m_l}\}_{l\geq 1}\subset\{\mathfrak{u}^{ m}\}_{m\geq 1}$. In particular there exists $\mathfrak{u}$ such that $$\begin{aligned} \begin{split} \mathfrak{u}^{ m_l} \rightharpoonup \mathfrak{u}\quad &\text{ weakly in } L^2([0,T]; H^1(U)),\\ \partial_t\mathfrak{u}^{ m_l} \rightharpoonup \mathfrak{u}' \quad &\text{ weakly in } L^2([0,T]; H^{-1}(U)). \end{split}\end{aligned}$$ Note of course that $\mathfrak{u} = \mathfrak{u}_n$, but we have not yet established existence of weak solution to the full nonlinear Smoluchowski equation . This result establishes existence of weak solution for the parabolic equation for every index $n$. Now since $L^2([0,T]; H^1(U))$ is separable, and weak solutions currently only exist in a finite dimensional subspace of $H^1(U)$, it makes sense to choose a test function $\bm{\phi}\in C^1([0,T]; H^1(U))\subset L^2([0,T]; H^1(U))$. We may therefore write $$\begin{aligned} \int_0^T\mathrm{d}t\,\langle\partial_t\mathfrak{u}^{ m} ,\,\bm{\phi}^N\rangle +B[\mathfrak{u}^m,\,\bm{\phi}^N;t] =0\end{aligned}$$ for $\bm{\phi}^N=\sum_{k=1}^Nd^k(t)w_k$. Making the choice $N\leq m$ and letting $N\to\infty$ one obtains $$\begin{aligned} \int_0^T\mathrm{d}t\,\langle\partial_t\mathfrak{u} ,\,\bm{\phi}^{\infty}\rangle +B[\mathfrak{u},\,\bm{\phi}^\infty;t] =0\end{aligned}$$ for any function $\bm{\phi}^\infty\in L^2([0,T]; H^1(U))$ since $\phi^N$ are dense in $L^2([0,T]; H^1(U))$. Now since $\bm{\phi}^\infty$ is arbitrary we obtain $$\begin{aligned} \langle\partial_t\mathfrak{u} ,\,\phi\rangle +B[\mathfrak{u},\,\phi;t] = 0\end{aligned}$$ for an arbitrary $\phi\in H^1(U)$. Hence the criteria of weak solution is satisfied. Uniqueness. ----------- To show uniqueness we argue by contradiction that there exists two weak solutions solutions. By linearity, their difference $\bm{\chi}$ is a weak solution of with $\chi_0\equiv 0$, for $\chi_0$ initial data. 
Then as it is a weak solution, we may test $\bm{\chi}$ against itself $$\begin{aligned} \langle\partial_t\bm{\chi} ,\,\bm{\chi}\rangle +B[\bm{\chi},\,\bm{\chi};t] \equiv 0\end{aligned}$$ giving $$\begin{aligned} \tfrac{1}{2}\der[]{t}(\|\bm{\chi}(t)\|^2_{L^2(U)})+B[\bm{\chi},\,\bm{\chi};t] =0\end{aligned}$$ but $B[\bm{\chi},\,\bm{\chi};t]\geq -c_{7}\|\bm{\chi}(t)\|^2_{L^2(U)}$ which may be obtained by the following estimate $$\begin{aligned} \label{eq:second_bound_on_B[u,u]} c_5\|\mathfrak{u}^m-c\|_{H^{1}(U)}^2\leq B[\mathfrak{u}^m,\mathfrak{u}^m]+c_6\|\mathfrak{u}^m\|^2_{L^2(U)}.\end{aligned}$$ and hence by Gr[ö]{}nwall $$\|\bm{\chi}(t)\|^2_{L^2(U)}\leq c_{7}(t)\|\chi_0\|^2_{L^2(U)}=0$$ and $\bm{\chi} =0 $ for a.e. $\vec{r}\in U$ for every $0\leq t \leq T$. We have established the existence and uniqueness of the weak solution to the linear parabolic equation and may apply this to an iteration problem on .\ Nomenclature {#app:nomenclature} ============ ------------------------------------------------- -- ----------------------------------------------------------------------------------------------------------------------------------- $\alpha$ Mass diffusivity $\beta_i^{(\kappa_2)}$, $\theta_i^{(\kappa_2)}$ Expansion coefficients used in Theorem \[thm:epsilon\_as \_a\_fn\_of\_lambda\] $\beta_{n}^{-1}$ Eigenvalues of $\mathcal{R}$ from Definition \[def:def\_of\_R\_op\] $\gamma$ Friction coefficient $\gamma^{(\kappa_2)}_k$ Eigenvalues of $\mathcal{A}_{\kappa_2}$ defined in $\delta_n$, $\phi_n$, $\psi_n$ Expansion coefficients used in Theorem \[thm:eigenfn\_expansion\_of\_flux\] $\epsilon$ Small nondimensional parameter $\zeta^a_i$ Gaussian white noise process $\kappa_1$ Nondimensional external potential strength $\kappa_2$ Nondimensional two body potential strength $\kappa_{2_\sharp}$ Point of critical stability defined in $\lambda$ Scalar value of $\mathbb{C}$ used in Lemma \[lem:flux\_operator\_is\_compact\_self\_adjoint\] $\lambda(\kappa_2)$ Eigenvalue of $\mathcal{L}_{1}$ used in Theorem \[thm:epsilon\_as \_a\_fn\_of\_lambda\] $\mu$ Probability measure $\mu_c$ Chemical potential $\mu_i$ Eigenvalues of $\bm{D}$ defined in $\mu_{\max}$, $\mu_{\min}$ Largest and smallest eigenvalues of $\bm{D}$ respectively $\varrho$ Density $\varrho_0$ Initial density $\varrho_n$ $n$-particle configuration space distribution function, for $n\geq 2$ $\varrho_\infty$ Unique equilibrium density $\tau$ Characteristic time scale $\phi_n$ Eigenvalues of $\bm{1}+\mathcal{Z}_1^\varrho -\lambda \mathcal{Z}_2^\varrho$ used in Theorem \[thm:eigenfn\_expansion\_of\_flux\] $\chi_\epsilon$ Convex approximation to $|\cdot |$ in Definition \[def:def\_of\_chi\_approx\_abs\] ------------------------------------------------- -- ----------------------------------------------------------------------------------------------------------------------------------- : [**[Upper Case Greek]{}**]{} \ \ -------------------- -- --------------------------------------------------------- $\bm{\Gamma}$ $3N\times 3N$ friction tensor $\bm{\Gamma}_{ij}$ $3\times 3$ block matrices of $\bm{\Gamma}$ $\Pi$, $\Pi_1$ Nonlinear and linear boundary operators respectively, , -------------------- -- --------------------------------------------------------- : [**[Upper Case Greek]{}**]{} --------------------------------- -- ------------------------------------------------------------------------------------------------------------------------------------ $\vec{a}$ Flux $c_{ls}$ Log–Sobolev constant $c_{pw}$ Poincar[é]{}–Wirtinger constant $d$ Dimension 
number $\vec{e}_i$ Eigenvectors of $\bm{D}$ defined in $f(\vec{r},\vec{p},t)$ Phase space density $\vec{f}_i$ Gaussian white noise vector $g(\vec{r},\vec{r}',[\varrho])$ Correlation function $k_B$ Boltzmann constant $m$ Mass of particle $i$ $\vec{p}_i$ Momentum vector of particle $i$ $\vec{r}_i$ Position vector of particle $i$ $t$ Time $u_n$ Eigenfunctions of $\mathcal{R}$ from Definition \[def:def\_of\_R\_op\] $v^{(\kappa_2)}_k$ Eigenfunctions of $\mathcal{A}_{\kappa_2}$ defined in $w^{(\kappa_2)}$ Eigenfunction of $\mathcal{L}_1$ used in Theorem \[thm:epsilon\_as \_a\_fn\_of\_lambda\] $\vec{w}_n$ Eigenfunctions of $\bm{1}+\mathcal{Z}_1^\varrho-\lambda\mathcal{Z}_2^\varrho$ used in Theorem \[thm:eigenfn\_expansion\_of\_flux\] --------------------------------- -- ------------------------------------------------------------------------------------------------------------------------------------ : [**[Upper Case Roman]{}**]{} \ \ ---------------------------------- -- --------------------------------------------------------------------------- $\bm{1}$ $3\times 3$ identity matrix $\bm{1} + \mathcal{Z}_1^\varrho$ Local operator on acting on $\vec{a}$ defined in $\mathrm{A}$ Characteristic flux scale $\bm{A}([\vec{a}],t)$ Advection tensor defined in $\mathcal{A}_\varrho$ The operator $(\bm{1} + \mathcal{Z}_1^\varrho)^{-1}\mathcal{Z}_2^\varrho$ $\mathcal{A}_{\kappa_2}$ Differential operator defined in $\boldsymbol{B}$ $\left(m k_B T \boldsymbol{\Gamma}\right)^{1/2}$ $\mathcal{B}$ Nonlocal operator $C(T)$ Constant dependent on the final time $T$ $\bm{D}(\vec{r},[\varrho],t)$ Diffusion tensor defined in $F$ Nonlinear map defined in $\mathcal{F}$ Free energy functional defined in $Fr$ Froude number $\mathcal{F}_H$ Helmholtz free energy functional defined in $\mathcal{H}$ Relative entropy functional defined in $\mathrm{L}$ Characteristic length scale $\mathcal{L}_1$ Linearised nonlocal differential operator defined in ---------------------------------- -- --------------------------------------------------------------------------- : [**[Upper Case Roman]{}**]{} --------------------------------- -- ------------------------------------------------------------------------------------------------------------------------------------- $N$ Number of particles $Pe$ P[é]{}clet number $\mathcal{R}$ Nonlocal operator from Definition \[def:def\_of\_R\_op\] $\mathrm{T}$ Temperature $T$ Final time $\mathcal{T}$ Nonlocal operator defined in $U$ Spatial domain $U_T$ Space-time cylinder $U\times [0,T]$ $\mathrm{U}$ Characteristic velocity scale $V$ Potential $V_1$, $V_2$, $V_n$ External, two body and $n$-body potentials respectively $\mathcal{X}_\varrho$ Inverse operator $(\bm{1}+\mathcal{Z}_1^\varrho+\mathcal{Z}_2^\varrho)^{-1}$ used in Theorem \[thm:association \_of\_free\_energy\] $Z$ Normalisation constant $\bm{Z}_1(\vec{r}_1,\vec{r}_2)$ Diagonal two body HI tensor $\bm{Z}_2(\vec{r}_1,\vec{r}_2)$ Off-diagonal two body HI tensor $\mathcal{Z}_2^\varrho$ Nonlocal operator acting on $\vec{a}$ defined in --------------------------------- -- ------------------------------------------------------------------------------------------------------------------------------------- : [**[Sets and Mathematical Symbols]{}**]{} \ \ ------------------------- -- ---------------------------------------------------------------------------------------- $L^2(U,\varrho^{-1})$ Weighted $L^2(U)$ space $P_{ac}(U)$ Space of absolutely continuous probability densities supported on $U$ $P_{ac}^{+}(U)$ $P_{ac}(U)$ restricted to strictly positive 
functions $\text{Tr}$ Trace operator $\top$ Transpose $u\star v$ Convolution of two functions $\int_U\mathrm{d}\vec{r}\,u(\vec{r}-\vec{r}')v(\vec{r}')$ $\vec{u}\otimes\vec{v}$ Outer product / dyadic of two vectors $\vec{u}\vec{v}^\top$ ------------------------- -- ---------------------------------------------------------------------------------------- : [**[Sets and Mathematical Symbols]{}**]{} \[bib:References\] [^1]: School of Mathematics and Maxwell Institute for Mathematical Sciences, University of Edinburgh, Edinburgh EH9 3FD, UK. Corresponding author: R. D. Mills-Williams (r.mills@ed.ac.uk). [^2]: Department of Mathematics, Imperial College London, London SW7 2AZ, UK. [^3]: BDG would like to acknowledge support from EPSRC EP/L025159/1. RDMW is grateful to EPSRC for PhD funding. GAP was supported by the EPSRC through grant numbers EP/P031587/1, EP/L024926/1, and EP/L020564/1. This research was funded in part by JPMorgan Chase & Co. Any views or opinions expressed herein are solely those of the authors listed, and may differ from the views and opinions expressed by JPMorgan Chase & Co. or its affiliates. This material is not a product of the Research Department of J.P. Morgan Securities LLC. This material does not constitute a solicitation or offer in any jurisdiction.
1
--- abstract: 'We present a sequence of high resolution (R$\sim$20,000 or 15 km s$^{-1}$) infrared spectra of stars and brown dwarfs spanning spectral types M2.5 to T6. Observations of 16 objects were obtained using eight echelle orders to cover part of the $J$-band from 1.165-1.323 $\mu$m with NIRSPEC on the Keck II telescope. By comparing opacity plots and line lists, over 200 weak features in the $J$-band are identified with either FeH or H$_{2}$O transitions. Absorption by FeH attains maximum strength in the mid-L dwarfs, while H$_{2}$O absorption becomes systematically stronger towards later spectral types. Narrow resolved features broaden markedly after the M to L transition. Our high resolution spectra also reveal that the disappearance of neutral Al lines at the boundary between M and L dwarfs is remarkably abrupt, presumably because of the formation of grains. Neutral Fe lines can be traced to mid-L dwarfs before Fe is removed by condensation. The neutral potassium (K I) doublets that dominate the $J$-band have pressure broadened wings that continue to broaden from $\sim$ 50 km s$^{-1}$ (FWHM) at mid-M to $\sim$ 500 km s$^{-1}$ at mid-T. In contrast however, the measured pseudo-equivalent widths of these same lines reach a maximum in the mid-L dwarfs. The young L2 dwarf, G196-3B, exhibits narrow potassium lines without extensive pressure-broadened wings, indicative of a lower gravity atmosphere. Kelu-1AB, another L2, has exceptionally broad infrared lines, including FeH and H$_{2}$O features, confirming its status as a rapid rotator. In contrast to other late T objects, the peculiar T6 dwarf 2MASS 0937+29 displays a complete absence of potassium even at high resolution, which may be a metallicity effect or a result of a cooler, higher-gravity atmosphere.' author: - '[IAN S. MCLEAN, L. PRATO, MARK R. MCGOVERN, ADAM J. BURGASSER, J. DAVY KIRKPATRICK, EMILY L. RICE AND SUNGSOO S. KIM]{}' title: 'THE NIRSPEC BROWN DWARF SPECTROSCOPIC SURVEY II: HIGH-RESOLUTION J-BAND SPECTRA OF M, L and T DWARFS[^1]' --- Introduction ============ With effective temperatures $\la$ 2200 K, the cool atmospheres of L and T dwarfs generate complex spectra that are rich in molecular features, especially at near-infrared (NIR) wavelengths where ro-vibrational transitions of many molecules dominate. Fortunately, these cool, very low luminosity objects are also brightest in the NIR. Until recently, most infrared spectroscopic investigations of L and T dwarfs have concentrated on the identification of strong, broad spectral features, useful for the establishment of spectral classification, and have employed resolving powers of $R = \lambda/\Delta\lambda \la 2000$ (Burgasser et al. 2002, 2004, 2006; Cushing et al. 2003, 2005; Geballe et al. 1996, 2002; Jones et al. 1994; Leggett et al. 2000, 2001; McLean et al. 2000a, 2001, 2003a; Reid et al. 2000; Testi et al. 2001). Observations with significantly higher spectral resolution are potentially very important because line-blending from molecular transitions is reduced and weak features are resolved. Higher resolution spectra are more useful for constraining models of the complex molecular chemistry of brown dwarf atmospheres and for characterizing properties such as gravity and metallicity (Mohanty et al. 2004). For example, less massive brown dwarfs and younger brown dwarfs have smaller surface gravities which results in less pressure broadening and a different line shape. 
Furthermore, spectra with $R\ga$ 20,000 ($\la$15 km s$^{-1}$) are required for the measurement of radial and rotational velocities, and to search for radial velocity variability associated with brown dwarf spectroscopic binaries. Obtaining high signal-to-noise observations with an increase in spectral resolution of a factor of ten is difficult because brown dwarfs are so faint. Basri et al. (2000) successfully resolved the resonance absorption lines of Cs and Rb in the far-red visible regime for a sample of M and L dwarfs using the HIRES echelle spectrograph on the Keck 10-m telescope and derived effective temperatures through comparison with available model atmospheres. Reid et al. (2002) also used high-resolution optical echelle spectroscopy to study 39 dwarfs with spectral types between M6.5 and L0.5. However, because brown dwarf fluxes are significantly less in visible light, high-resolution observations of fainter L dwarfs and of the even dimmer T dwarfs are not tenable in the optical and require infrared observations. With the advent and development of sensitive large-format IR array detectors, IR spectroscopy with the requisite spectral resolution is now possible (McLean et al. 1998, 2000b). In this paper we present the first well-sampled spectral sequence of late M, L and T dwarfs observed at high resolution (R $\sim$ 20,000) in the NIR. This work is part of the NIRSPEC Brown Dwarf Spectroscopic Survey (BDSS) being carried out at the Keck Observatory; preliminary results were presented in McLean et al. (2003b). The goals of the BDSS are to obtain a significant sample of NIR spectra of low-mass stars and brown dwarfs of differing ages, surface gravities, and metallicities at both medium (R $\sim$ 2,000) and high spectral resolution for spectral classification studies and comparisons with model atmospheres. McLean et al. (2003a), hereafter M03, describes the lower resolution part of the survey; spectra from that study are available online.[^2] Here, we investigate the $J$-band using ten times higher spectral resolution than in M03. The $J$-band (defined as 1.15-1.36 $\mu$m in this paper) is important because this region contains four strong lines of neutral potassium (K I) that are both temperature and gravity-sensitive, and which persist throughout the M, L, and T dwarf sequence. In §2 we describe our observations and data reduction procedures. §3 provides a discussion of the rich, spectral morphology. In addition to atomic K I, there are lines of Al I, Fe I, Mn I, Na I and Ti I, and transitions of molecular species such as CrH, FeH, and H$_{2}$O that can provide a unique resource for improving model atmospheres at these low temperatures. We show that the sudden disappearance of the Al I lines critically defines the M-L boundary at these resolutions. Concentrating on the strongly pressure-broadened K I lines, we look for correlations between spectral type and equivalent widths, velocity widths (FWHM), and residual intensity. The relation between molecular line strengths and spectral type is also investigated. The effects of rotation, surface gravity and metallicity are explored in §4. A summary of the overall results and concluding remarks is given in §5. Observations and Data Reduction =============================== Targets and Instrumentation --------------------------- Targets for the initial survey, the BDSS (M03), were selected primarily from well-known M dwarfs and from L and T dwarfs identified in the Two Micron All Sky Survey (2MASS; Kirkpatrick et al. 
1997, 1999, 2000, 2001; Burgasser et al. 2000, 2002; Reid et al. 2000; Wilson et al. 2001), augmented with discoveries from the Deep Near-Infrared Survey of the Southern Sky (DENIS; Delfosse et al. 1997, 1999), the Sloan Digital Sky Survey (SDSS; Leggett et al. 2000; Geballe et al. 2002), and other investigations (Becklin & Zuckerman 1988; Ruiz, Leggett and Allard 1997). To ensure high signal-to-noise spectra for the high-resolution part of the survey, a subset of 12 of the brightest objects ($J=7-15$), spanning the spectral type range from M6 to T6, was selected. Of these, only 2MASS 0140+27 was not part of the initial survey. The M2.5 star G196-3A was also observed along with its L2 companion G196-3B, both examples of objects substantially younger than 1 Gyr (Rebolo et al. 1998) and therefore most likely to exhibit gravity effects (McGovern et al. 2004). In addition, the peculiar T6 dwarf 2MASS 0937+29 (Burgasser et al. 2002) was added to the list because of its apparent lack of K I features in lower resolution spectra. Another late T dwarf (2MASS 2356$-$15, T5.5) was observed after completion of the initial set for comparison to the 2MASS 0937+29. Although the signal-to-noise ratio was sufficient to establish the presence of K I absorption in this T5.5 dwarf, the fainter magnitude ($J=15.8$) and stronger H$_{2}$O absorption made quantitative analysis too difficult so the spectrum is not shown. Table 1 provides the complete list of 16 targets and the observing log. Shorthand names such as 2MASS 1507$-$16 are used in the text for simplicity, but the full designations are listed in Table 1. Two targets were known visual doubles at the time of observing, 2MASS 0746+20 (L0.5) and DENIS 0205$-$11 (L7), but in neither case did we have sufficient angular resolution to separate the components. Subsequent to making our observations, DENIS 0205-11 was reported as a possible triple brown dwarf system (Bouy et al. 2005) based on Hubble Telescope images. Burgasser et al. (2005) subsequently found SDSS 0423$-$04 to be double, the average spectral type of T0 being due to an L6 and T2 combination. Even more recently, the binary nature of Kelu-1, a 0$\farcs$29 pair, was revealed using Laser Guide Star adaptive optics on the Keck telescope (Liu & Leggett 2005; Gelino et al. 2006). Again, in neither case were these targets resolved in our NIRSPEC observations. All of the observations were made using the NIRSPEC cryogenic spectrometer on the Keck II 10-m telescope on Mauna Kea, Hawaii. Detailed descriptions of the design and performance of this UCLA-built instrument are given elsewhere (McLean et al. 1998; 2000b). For this study, NIRSPEC was used in its cross-dispersed echelle mode. High resolution spectra are dispersed across the 1024$\times$1024 InSb detector at 0$\farcs$143 per pixel while the spatial scale in the cross-dispersed direction is 0$\farcs$19 per pixel. An independent slit-viewing camera with a scale of 0$\farcs$18 per pixel is available for centering and guiding. With the gratings used in NIRSPEC, the relationship between the blaze wavelength ($\lambda_{b}$) and echelle order number (m) is $m\lambda_{b}$ = 76.56 $\mu$m; together with the free spectral range (see below), this equation gives the order location of a given wavelength.
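As a concrete illustration of the grating relation just quoted, the short script below (ours; not part of the observing procedure) evaluates the blaze wavelength $\lambda_b = 76.56\,\mu$m$/m$ and the free spectral range $\lambda_b/m$ for echelle orders whose blaze falls in the 1.15-1.36 $\mu$m window adopted here.

```python
# Illustrative check (ours) of the echelle grating relation m * lambda_b = 76.56 um.
GRATING_CONSTANT = 76.56          # um, from m * lambda_b = 76.56 um
J_MIN, J_MAX = 1.15, 1.36         # um, J-band limits adopted in this paper

for m in range(70, 50, -1):
    lam_b = GRATING_CONSTANT / m               # blaze wavelength of order m (um)
    if J_MIN <= lam_b <= J_MAX:
        fsr_angstrom = lam_b / m * 1e4         # free spectral range lambda_b / m (Angstrom)
        print(f"order {m}: blaze {lam_b:.4f} um, FSR {fsr_angstrom:.0f} A")
# For example, order 61 gives a blaze wavelength of ~1.2551 um and an FSR of ~206 A,
# matching the values quoted in the next paragraph.
```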
The spectrometer was set up with the NIRSPEC-3 order-sorting filter and specific echelle and cross-dispersion grating angles to record 11 echelle orders ($m$ = 66 to $m$ = 56) covering the wavelength range from 1.15–1.36$\mu$m, corresponding approximately to the standard $J$-band. The free spectral range ($\lambda_{b}/m$) at 1.255 $\mu$m (order 61) is $\sim$206 Å, but the effective dispersion is 0.179 Å/pixel, allowing for only 183 Å  (89%) of this order to be captured by the detector. In fact, the captured wavelength range varies from 171 Å  (94%) in order 65 to 192  Å  (84%) in order 58. Thus, because the spectral interval captured by the detector is slightly smaller than the free spectral range in each order, there are small gaps, increasing with wavelength, in the total spectral coverage. Table 2 summarizes the spectral range for each order used in the subsequent analysis. In practice, for an entrance slit 0$\farcs$43 (3 pixels) wide, the final spectral resolution in the reduced data is $R \sim 20,000$, (or 15 km s$^{-1}$), compared to the theoretical value of $R = 24,000$. The average value of one spectral resolution element is $\sim$0.625Å  (equivalent to 3.6 pixels) over most of the $J$-band region. Spectroscopic observations were made as nodded pairs. Typically, integrations of 600 s each were taken with the object placed at two positions, designated A and B, separated by $\sim$7$''$ on the $\sim$12$''$ long entrance slit of NIRSPEC. Shorter exposure times were used for brighter objects. Exposures of 300 s per nod position were used for 2MASS 0746+2000AB, Kelu-1AB and 2MASS 1507-1627, 120 s for Wolf 359 and 60 s per nod position for G196-3A. Total integration times per object ranged from a few minutes to 1.5 hours depending on the apparent $J$ magnitude. Signal-to-noise ratios were typically greater than 20 (5%) per resolution element over most orders, and sometimes greater than 100 (1% noise). Seeing conditions were $\sim0\farcs5-0\farcs6$ and therefore a slit width of 0$\farcs$43 (3 pixels) was used for all observations, except in the case of 2MASS 1507$-$16, for which we used a $0\farcs576$ (4 pixels) slit because of poorer seeing. A0 V stars were observed at an airmass very close to that of the target object to calibrate for absorption features caused by the Earth’s atmosphere. Arc lamp spectra, taken immediately after each observation, and OH night sky lines in the observed spectra, were used for wavelength and dispersion calibration. A white-light spectrum and a corresponding dark frame were obtained for flat-fielding. Data Reduction Methods ---------------------- For the data reduction we used REDSPEC, an IDL-based software package developed at UCLA for NIRSPEC by S. Kim, L. Prato, and I. McLean[^3]. For each echelle order, REDSPEC uses the position of the two-dimensional spectra on the NIRSPEC array and the calibration line spectra to construct spatial and spectral maps necessary to transform the raw data onto a uniform grid. If the target spectrum itself is too faint to provide the spatial rectification, then the A0V star observed with the same set up was used instead. Although four arc lamps are available, it is often the case that there are too few well-distributed lines per echelle order for good spectral rectification. Consequently, OH night sky lines were also used. 
The dispersion was more than adequately fit by a second order polynomial of the form $\lambda = c_{0} + c_{1} x + c_{2} x^{2}$ where $c_{1} \sim$ 0.17$\pm$0.01 Å/pixel and $c_{2} \sim$ 7 x 10$^{-6}$  Å/pixel$^2$. To extract spectra free from atmospheric background and uneven detector response, the difference of an A/B image pair was formed and flat-fielded. The flat-fielded difference frame was then rectified using the spatial and spectral maps and the raw spectrum produced by summing 5$-$10 rows from each trace in the rectified image. The extracted traces (one positive, one negative) are subtracted again to produce a positive spectrum with residual night sky emission line features removed, unless a line was saturated. In the $J$-band, none of the night sky emission lines are saturated. A0 V star spectra were reduced in the same way, interpolating over the intrinsic Pa$\beta$ hydrogen absorption line at 1.28$\mu$m in the $J$-band spectra. The raw target spectrum was then divided by the raw A0 V star spectrum to remove telluric features. The true slope of the target spectrum was restored by multiplication with a blackbody spectrum of T$_{eff}=9500$ K for an A0 V star (Tokunaga 2000). Finally, the spectra reduced from multiple A/B pairs were averaged together to improve the signal-to-noise ratio. J-band Spectral morphology at R$\sim$20,000 =========================================== Overview -------- For each of the 16 targets we have extracted 8 echelle orders (see Table 2) yielding a total of 128 spectra. Before examining and interpreting the new spectra in detail, it is very useful to have a broad overview of the basic spectral features present and an awareness of the general trends that occur in the high-resolution $J$-band data as a function of spectral type. A convenient way of doing this is to select a representative source for a few spectral types and present all eight echelle orders on the same plot, thus enabling the entire $J$-band to be viewed at a glance. Figures 1$-$6 show the reduced spectra of Wolf 359 (M6), 2MASS 0140+27 (M9), 2MASS 0345+25 (L0), 2MASS 1507$-$16 (L5), SDSS 0423$-$04AB (T0) and 2MASS 0559$-$14 (T4.5). The double nature of SDSS 0423$-$04AB means that we do not have a true T0 spectrum, but lower resolution studies (M03) show that J-band spectral variations are relatively weak from L6 to T2 and therefore this binary remains a useful proxy for a T0 dwarf. In these plots, echelle orders 58 through 65 are shown together; the remaining orders at the edges of the $J$-band are too contaminated by strong atmospheric absorption to be useful. For ease of comparison, all spectra are shown in the laboratory reference frame and vacuum wavelengths are used throughout; radial velocities and searches for radial velocity variations will be reported and discussed in a separate forthcoming paper (Prato et al. 2006 in prep.). Each order is normalized to unity at the same wavelength. Comparison of the spectra in these six figures, all of which have excellent signal to noise ratios (at least 20:1 per pixel), shows that the region is densely populated with numerous weak absorption features and a few stronger lines. We will show that the fine-scale spectral structure is real and repeatable, and that it is mainly attributable to FeH or H$_{2}$O. The strongest atomic features are the doublets of K I that occur in orders 61 and 65. These lines persist from M6-T4.5 but clearly change their character with spectral type. 
In later sections the K I lines will be singled out for closer inspection. For reference, Table 3 summarizes the main spectral transitions observed in the $J$-band over the spectral type range from M6-T4.5, including the energy levels of the atomic transitions. As shown in Figure 1, the M6 dwarf Wolf 359 has at least one distinguishing feature in each order. Atomic lines of Al I at 1.31270 and 1.31543 $\mu$m appear in order 58. There is a moderately strong line of Mn I at 1.29033 $\mu$m in order 59, plus some weaker lines of Ti I at 1.28349 and 1.28505 $\mu$m. A weak unresolved Na I doublet is seen at 1.26826 $\mu$m in order 60. The first pair of strong K I lines at 1.24357 and 1.25256 $\mu$m appears in order 61. Multiple weak absorption features occur in both orders 62 and 63, the most notable grouping being the set of lines around 1.222 $\mu$m. Several of the stronger features have been identified with FeH from lower resolution studies (Jones et al. 1996; Cushing et al. 2003; M03). Note however, that a major FeH band head at 1.24 $\mu$m is just off the detector at the short wavelength edge of this order. In order 64 there is a pair of strong Fe I lines at 1.18861 and 1.18873 $\mu$m and another Fe I line at 1.19763 $\mu$m. Order 65 contains the second set of strong K I lines, one at 1.169342 $\mu$m and the close pair at 1.177286, 1.177606 $\mu$m. In general, the lines are relatively sharp and well-resolved. Wolf 359 is a bright source and therefore the signal-to-noise ratio in this spectrum is at least 100:1. Following these spectral features order by order through Figures 1-6 reveals certain general trends as a function of spectral type. Comparing the M9 object (Figure 2) with the M6 source (Figure 1) we see that the Al I lines at 1.3127 and 1.3154 $\mu$m in order 58 are somewhat weaker at M9 and then suddenly they are no longer present at L0 (Figure 3), or in any later spectral types (Figures 4-6). This is an important observation that relates to the M-L transition and we will discuss the Al I lines in the next section. Throughout order 58 there are other weaker spectral features, the so-called fine-scale spectral structure. This spectral structure becomes more pronounced at M9 (Figure 2), seems broader in the L0 and L5 objects (Figures 3 and 4), weakens at T0 or more accurately from L6 to T2 (Figure 5) and then completely changes character by T4.5 (Figure 6). The most likely interpretation of this spectral sequence is that it represents changes in the physical structure of these cool atmospheres (temperature, pressure, chemistry). In subsequent sections we compare opacity data for different molecular species to identify the primary absorbers at each spectral class. In order 59 the sharp Mn I line at 1.2903 $\mu$m seen at M6 and M9 (Figures 1 and 2) broadens and disappears after L0 (Figure 3). The fine-scale spectral structure in this order is dominant until T0 composite type (Figure 5) when the spectrum becomes remarkably smooth. Here again the high resolution data reveal a striking effect, this time at the transition from L to T dwarfs. New spectral structure develops in this order between types T0 at T4.5 (Figure 6) but, as was the case for order 58, the pattern is different, indicating different atmospheric conditions. The weak Na I line at 1.2683 $\mu$m detected in order 60 in the M6 object (Figure 1) is already absent in the M9 object (Figure 2). 
Otherwise, the behavior of the fine-scale structure follows a pattern similar to order 59 becoming remarkably weak at T0 (Figure 5) and leading to a smoother appearance for these spectra near the L-T transition. Order 61 contains one of the pairs of strong K I doublets located at 1.2436 and 1.2525 $\mu$m. These lines deepen and widen slightly from M6 to M9, and then become increasingly broader and shallower from L0 to T4.5. NIRSPEC spectral order 61 also has many fine-scale features attributable to molecular transitions. Two features, one at 1.24637 and the other at 1.24825 $\mu$m have been identified previously with FeH (Cushing et al. 2003). These FeH lines strengthen slightly from M6 to M9 (Figures 1 and 2), become much broader in the L dwarfs (Figures 3 and 4) and then vanish completely in the T dwarfs (Figures 5 and 6) to leave, once again, a remarkably smooth continuum between the K I lines. Comparing orders 62 and 63 in Figures 1-6, the known FeH features in these spectral bands strengthen from M6 to M9, broaden markedly at L0 and remain strong and broad through L5 before becoming weaker in the T0 and T4.5. As with the atomic lines, the individual FeH lines seem to broaden significantly at the transition from spectral types M9 to L0. Evidence of weak FeH absorption is still present around 1.222 $\mu$m at spectral type T0, and possibly even at T4.5, as shown in Figures 5 and 6, but this molecular species is clearly not dominant in T dwarfs. For the M6 dwarf (Figure 1), order 64 is characterized by a pair of strong lines of Fe I at 1.18861 and 1.18873 $\mu$m that are easily resolved, and another Fe I line at 1.19763 $\mu$m that is blended with FeH. These features remain strong at M9 (Figure 2) and persist into the L-dwarf range, becoming broader at L0 (Figure 3), and then undetectable by L5 (Figure 4). From T0 to T4.5 (Figures 5 and 6), order 64 becomes increasingly chopped up by new spectral features, some of which are quite sharp and deep. In section §3.3 we show that these features are caused by absorption by H$_{2}$O. Finally, there is order 65, which contains the second K I doublet and exhibits some of the largest changes with spectral type. The slightly weaker K I companion line at 1.17728 $\mu$m, only 3.3Å  from the longer wavelength member of the doublet is easily resolved in the M9 object (Figure 2), already blended from line broadening in the L0 (Figure 3), barely discernable at L5 (Figure 4), and completely washed out by line broadening and numerous molecular features at T4.5 (Figure 6). Order 65, being close to the short wavelength edge of the $J$-band where terrestrial water vapor absorption is expected, also contains many strong intrinsic transitions of hot H$_{2}$O, for example, the feature at 1.175 $\mu$m. Al I, Fe I and Mn I; indicators of the M-L transition ----------------------------------------------------- Figure 7 provides a more detailed view of the Al I doublet in order 58. In this plot, the spectra for the M6, M9 and L0 objects shown in Figures 1-3 are expanded and overlaid. Evidently, there is significant spectral structure in this part of the spectrum making it difficult to identify a true continuum level. All three spectral types show consistent features, in particular the wide depression containing the shorter wavelength Al I line. The equivalent width of the Al I lines clearly decreases from M6 to M9, but the change over these three spectral types is only about 25%.
Because this region of the spectrum is contaminated by H$_{2}$O absorption, it is difficult to obtain accurate equivalent widths. A pseudo-equivalent width over a 4.2Å interval centered on each line was obtained relative to the local continuum in the troughs where the Al I lines are found. For the stronger line of the pair at 1.3127 $\mu$m, the measured values of equivalent width for the M6 and M9 dwarfs respectively are 420$\pm$20 mÅ and 300$\pm$40 mÅ. At L0, however, the pseudo-equivalent width of this line is $\le$40 mÅ. Clearly, at the transition from M9 to L0, both Al I lines vanish completely. Although only three objects bridging this transition were observed at high resolution, the conclusions given here are supported fully by the results of our low resolution BD spectroscopic survey (M03), in which two objects of every spectral type from M6 to L5 were included. As shown in Table 3, these lines arise from absorption from an energy level at 3.14 eV. Interestingly, the Na I line at 1.268 $\mu$m in order 60 is already absent in the M9, and careful inspection shows that a somewhat broadened Mn I line at 1.290 $\mu$m in order 59 persists through L0. The Na I line is excited from a high energy state at 3.6 eV whereas the Mn I line comes from a state at only 2.1 eV. Thus, the sequence in which the lines disappear is at least qualitatively consistent with thermal excitation. But the abrupt loss of Al I lines at the classical M-L boundary is too great to be explained by Boltzmann factors alone. For example, for a temperature change from 2850 K for the M6 to about 2400 K for the M9, say, the population of excited atoms in the upper level would drop by 51% and the equivalent width of the line might change from 420 mÅ to about 210 mÅ if all other factors remain the same. From M9 to L0 the change would be a further 13% assuming a change in effective temperature of 150 K. Thus, the line should still be measurable with an equivalent width of 140 $\pm$40 mÅ, or about one-third its value at M6. Yet, both lines disappear abruptly. It is likely, from the models of Lodders (2002), that aluminum has been sequestered in compounds such as hibonite (CaAl$_{12}$O$_{19}$) and that this abrupt change in absorption line strength is really caused by the sudden depletion of aluminum as an absorber due to a significant change in atmospheric chemistry, rather than simply a drop in effective temperature. Gas temperatures typical of this transition are near 2000 K (Lodders 2002). It is also curious that the intensity ratio of the components of the Al I doublet is closer to 3:2 than the expected 2:1 ratio based on their statistical weights. However, as shown in Figure 7, this spectral region is highly complex, with many overlapping transitions which make it difficult to determine the true continuum level for each line. Alternatively, the peculiar line ratios may be a non-LTE effect, or the result of line blending. 
These Fe I lines arise from low-lying energy levels near 2.2 eV, and gas phase iron requires temperatures above about 1700 K (Burrows et al. 2001). Combining the results that Al disappears at L0 and Fe is no longer present by L3, and using the chemistry temperature scale, suggests that there is about a 300 K temperature change from L0 to L3, which is shallower but still consistent with the interval of about 140 K per spectral type derived by Burgasser et al. (2002) as well as the effective temperature scale of Golimowski et al. (2004). As noted by Burgasser et al. (2002), temperatures derived from condensation chemistry tend to be systematically cooler by about 500 K than those derived from empirical determinations of T$_{eff}$ using objects with known parallax. These conclusions are not necessarily inconsistent if different spectral features probe a range of optical depths in the atmosphere. Finally, we note the presence of several weak lines of Ti I that arise from energy states near 1.4 eV, even lower than those of the strong potassium lines. A strong Ti multiplet at 0.97 $\mu$m has also been seen in the spectra of M dwarfs up to at least M9 (Cushing et al. 2005). Unfortunately, these weak lines are impossible to trace after M9. Fine-scale structure; the role of FeH and H$_{2}$O -------------------------------------------- The astronomical $J$-band is bounded by H$_{2}$O absorption bands from terrestrial water vapor. It is therefore no surprise that high-temperature H$_{2}$O (hot steam) transitions intrinsic to M, L and T dwarfs encroach far into the $J$-band from both the short and long wavelength ends. These so-called infrared water bands are difficult to model because millions of transitions are needed (Partridge & Schwenke 1997). Typically, models over-estimate the depth of the infrared water bands. In addition to H$_{2}$O, some of the stronger non-atomic transition features are known to be attributable to FeH from lower resolution studies (Jones et al. 1996; McLean et al. 2000). These features occur at 1.2091, 1.2113, 1.2135 and 1.2221 $\mu$m. Cushing et al. (2003) verified the features at 1.1939 and 1.2389 $\mu$m as the 0–1 and 1–2 band heads of the F$^{4}\Delta$ – X$^{4}\Delta$ system of FeH, and attributed a blended feature described by McLean et al. (2000) at 1.2221 $\mu$m to the F$^{4}\Delta_{7/2}$ – X$^{4}\Delta_{7/2}$ Q-branch. These authors also listed 24 other relatively strong features lying within the $J$-band. In the Cushing et al. list, no FeH features were identified in the wavelength interval covered by our order 65, which includes the strong shorter-wavelength doublet of K I, and only one feature (at 1.2464 $\mu$m) was tabulated for order 61, where the other K I doublet dominates. To identify many more of the complex fine-scale features seen in the spectral sequences of Figures 1–6, we analyzed opacity (cross-section) data for both FeH and H$_{2}$O (R. Freedman 2003, private communication) and utilized the FeH line list and transition catalog by Phillips et al. (1987). We are also grateful to Adam Burrows who provided CrH opacity data (Burrows et al. 2002) and Linda Brown who provided new opacity calculations for CH$_{4}$ in the $J$-band (L. Brown 2004, private communication). Figure 8 is a detailed view of order 62 (1.221-1.239 $\mu$m) for the M9 object 2MASS 0140+27. Superimposed on the M9 dwarf spectrum is a normalized and scaled FeH opacity plot for a temperature of 2000 K and a pressure of 1 bar. We use this plot to assist in identifying spectral features. 
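This comparison relies on degrading the laboratory opacity data to the instrumental resolution and normalizing it before over-plotting on the observed spectrum (Figure 8). A minimal sketch of that step is given below; the file name, column layout, and use of a simple Gaussian kernel are our assumptions for illustration, not a description of the actual reduction code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_to_resolution(wave, xsec, R=20000.0):
    """Smooth an opacity cross-section to resolving power R.

    wave : wavelength grid in microns (assumed roughly uniform)
    xsec : cross-section sampled on that grid
    R    : resolving power lambda / delta_lambda (FWHM)
    """
    dlam = np.median(np.diff(wave))        # grid spacing in microns
    fwhm = np.median(wave) / R             # instrumental FWHM in microns
    sigma_pix = fwhm / (2.355 * dlam)      # Gaussian sigma in pixels
    return gaussian_filter1d(xsec, sigma_pix)

# Hypothetical two-column file: wavelength (micron), cross-section (cm^2)
wave, xsec = np.loadtxt("feh_opacity_2000K_1bar.txt", unpack=True)
xsec_smooth = smooth_to_resolution(wave, xsec, R=20000.0)

# One simple choice of scaling: convert the smoothed cross-section into an
# absorption-like curve that can be over-plotted on a unit-continuum spectrum.
overlay = 1.0 - xsec_smooth / xsec_smooth.max()
```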
This interesting region of the $J$-band contains a feature which is seen in lower resolution spectra as a broad, flat-bottomed line at 1.2221 $\mu$m (McLean et al 2000a). Each transition in the opacity data can be correlated with either 0–1 or 1–2 transitions of the F$^{4}\Delta$ – X$^{4}\Delta$ system tabulated by Phillips et al. (1987). As shown in Figure 8, the feature observed at 1.2221 $\mu$m and attributed to the Q-branch by Cushing et al. (2003), is actually a composite of four Q-branch and three P-branch transitions plus one R-branch transition of FeH. Thus, at our higher resolution, the broad flat-bottomed feature seen in lower resolution spectra is completely resolved into eight separate transitions. These transitions are the following: Q(0-1) at 1.22137 and 1.221383 $\mu$m blended, 1.221934, 1.222504 and 1.22305 $\mu$m; P(0-1) at 1.22166, 1.22219 and 1.22244 $\mu$m; R(0-1) at 1.22218 $\mu$m. R-branch transitions tend to correspond closely to P-branch lines. For example, the P(0-1) 1.22219 $\mu$m line is only 0.1Å  from the R(0-1) transition just given. Also, the Q-branch line at 1.222504 $\mu$m is blended with the nearby P-branch transition at 1.22244 $\mu$m. The remainder of order 62 is dominated by 5 R-branch lines and about 30 P-branch transitions; all are members of the 0–1 band of F$^{4}\Delta$ – X$^{4}\Delta$ system. In this one order alone, there are now more identified FeH transitions than previously known for the entire $J$-band. Summing up across all NIRSPEC orders, we identify over 200 matches to the FeH opacity data base and the tables by Phillips et al. (1987). There is little doubt that FeH is a dominant molecular absorber in late M and early L spectral types. Interestingly, there are several features that cannot be identified with FeH transitions in the given opacity tables. For example, a sharp line at 1.2227 $\mu$m contaminates the Q-branch feature, and there are other isolated groups near 1.225, 1.228, 1.231 and 1.234 $\mu$m which also are not attributable to FeH. Because these features have a dependence on spectral type that is similar to FeH, they may be unknown FeH transitions or transitions of CrH. To explore the latter possibility we also compared our spectra to CrH opacity calculations by Burrows et al. (2002). Although the CrH opacity data had a lower resolution than our spectra, there was good coincidence for the features at 1.225, 1.228 and 1.231 $\mu$m, but the features near 1.234 $\mu$m could not be identified with CrH. Although transitions of CrH contaminate those of FeH, the strongest fine-scale features in this part of the $J$-band are attributable to FeH rather than CrH. Stronger CrH bands occur at shorter wavelengths (Kirkpatrick et al. 1999). We also investigated the fine-scale structure in orders 61 and 65 which contain the important K I doublets. Figure 9 provides a plot of the FeH opacity (at 2000 K and 1 bar pressure) across order 61, normalized for convenience, smoothed to match the resolution of our spectra, and over-plotted with the spectra of our M9 and L5 dwarfs. Essentially every feature can be traced to FeH, with a few notable exceptions. For example, absorption features at 1.2448 and 1.2458 $\mu$m are clearly associated with FeH, but two significant absorptions bands in between these limits do not correlate with FeH. Note also that the L5 spectral features are broader than the equivalent features in the M9 object. 
As shown in Figures 1-6, there is a general increase in the strength of FeH from mid-M until about L4 and then a decrease in FeH features towards later spectral types. Figure 10 shows a plot for order 61 in which the L5 and the T4.5 dwarfs in our sample are compared. In this figure H$_{2}$O opacity curves (at 1000 K and 1 bar) are over-plotted. The T4.5 spectrum lacks FeH, and its shallow depressions and small features are consistent with the H$_{2}$O opacity. The FeH feature near 1.245 $\mu$m appears strongest in the early to mid Ls, while the H$_{2}$O transition at 1.1752 $\mu$m seems to gain in strength systematically toward later spectral types. In order 65, no transitions for FeH are listed in the opacity files at the shorter wavelengths. Therefore, in Figure 11, we plot the H$_{2}$O opacity (at 1000 K and 1 bar) for order 65, and over-plot it with the spectra of the L3.5 and T4.5 dwarfs from our sample. This pair of spectral types provides distinct morphological samples. Most of the features in the T4.5 dwarf evidently correspond to H$_{2}$O. There is a particularly strong H$_{2}$O feature at 1.1752 $\mu$m in the T4.5 dwarf, which is also present in the L3.5 object at a weaker level. In fact, weak H$_{2}$O absorption is already present even in late M objects. We also looked for transitions associated with the $\phi$ bands of TiO (Galehouse 1980; Phillips 1973). These bands have been detected in young, low-gravity sub-stellar objects by McGovern et al. (2004), but are not usually apparent in low resolution spectra of older field dwarfs. A model spectrum kindly provided by D. Saumon shows the location of numerous $\phi$-band transitions in order 61, with noticeable band heads for the $\Delta\nu$=-1, (0-1) and (1-2) transitions at 1.25644 and 1.24239 $\mu$m respectively. Absorption features do occur at these wavelengths from M6-L5, but these also coincide with FeH features and are less likely to be TiO because none of the other TiO transitions of the $\phi$ band are seen. Finally, using new opacity data for CH$_{4}$ (L. Brown 2004, private communication), we searched for evidence of methane transitions in the T dwarfs. The strongest features should occur in order 65, which also contains the very broad shorter-wavelength K I doublet. Unfortunately, this spectral region is already heavily blanketed by H$_{2}$O absorption. Even in the high signal-to-noise spectra of 2MASS 0559-1404 (T4.5), there are no transitions that can be uniquely attributed to CH$_{4}$. 
As mentioned already, radial velocity determinations will be presented in a separate paper (Prato et al. 2006, in prep.). As given in Table 3, the shorter wavelength K I lines (in order 65) correspond to the multiplet designation 4p $^{2}$P$^{o}$$-$3d $^{2}$D, and the order 61 pair to the 4p $^{2}$P$^{o}$$-$5s $^{2}$S multiplet. All are transitions between states at 1.61$-$2.67 eV. The pair of K I lines in order 65 have almost equal intensity at the line center, whereas in order 61, the 1.2436 $\mu$m line is always weaker than the 1.2526 $\mu$m line. For both K I doublets, the line ratios are similar throughout the spectral sequence. One of the most prominent results, evident in both Figures 12 and 13, is the significant K I line broadening in the later type objects. Primarily because of its higher temperature, the M2.5 dwarf manifests very narrow lines with almost no wings. There is a slight contamination of the shortest wavelength K I line in order 65 from an Fe I line for spectral types before L4. For the M2.5, M6 and M9 dwarfs, separate peaks are clearly discernable for the longer wavelength line of the K I doublet in order 65 at 1.17761 $\mu$m and the secondary K I line at 1.17728 $\mu$m, but for later spectral types the 1.17761 $\mu$m lines are heavily blended and not discernable as separate features. In the earlier type objects, the pair of lines near 1.1786 $\mu$m in order 65 are attributable to Ti and Fe. Analysis and Discussion ======================= Correlating spectral features and spectral type ----------------------------------------------- To quantify the changes in the K I lines illustrated in Figures 12 and 13 as a function of spectral type we calculate three quantities, a line depth, a line strength (equivalent width) and a line width. Line depth is defined in terms of the measured flux ($F_{\lambda}$) at the line center compared to the average value for the continuum. For convenience we construct the line depth as $1-F_{line}/F_{cont}$ at the line center; a weak line has almost the same flux in the line as in the continuum and its line depth measure is therefore almost zero. Deep lines have a depth index approaching unity. Determining the continuum or pseudo-continuum level introduces the largest uncertainty into this ratio. The continuum is estimated by fitting a sloping line across the feature between the highest points on either side lying within $\pm$ 50Å  of the line center. Multiple trials obtained by varying the positions by a few Angstroms provide a mean value and an estimate of the uncertainty from fluctuations in the continuum level. Equivalent width (in Å) is also determined by fitting a continuum line between two points on either side of the line, summing the residual intensities and multiplying by the width of a pixel. The same range and method is used as for the line depth. Line width (in km s$^{-1}$) is defined as the full width of the line at the intensity level halfway between the apparent local continuum, defined above for the line depth and equivalent width, and the minimum line intensity at the line center (FWHM). Several sources contribute to the uncertainty in the derived quantities. The continuum level is difficult to identify, photon noise reduces the signal-to-noise ratio for the fainter sources, and contamination by numerous weak features is dependent on spectral type. 
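Before turning to the measured trends, the following is a minimal sketch of how these three quantities could be computed from a wavelength-calibrated spectrum; the definitions mirror the description above, but the function itself (array layout, choice of continuum anchor points, and the crude half-depth crossing used for the FWHM) is only illustrative.

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light in km/s

def line_measurements(wave, flux, lam0, window=50.0e-4):
    """Line depth, equivalent width (Angstrom) and FWHM (km/s) of one feature.

    wave   : wavelength array in microns
    flux   : flux array (arbitrary units)
    lam0   : nominal line-center wavelength in microns
    window : half-width of the fitting region in microns (50 Angstrom here)
    """
    sel = (wave > lam0 - window) & (wave < lam0 + window)
    w, f = wave[sel], flux[sel]

    # Local continuum: straight line through the highest point on each side.
    left, right = w < lam0, w >= lam0
    x1, y1 = w[left][np.argmax(f[left])], f[left].max()
    x2, y2 = w[right][np.argmax(f[right])], f[right].max()
    cont = y1 + (y2 - y1) * (w - x1) / (x2 - x1)

    resid = 1.0 - f / cont                      # residual intensity
    depth = resid.max()                         # 1 - F_line / F_cont

    dlam_A = np.median(np.diff(w)) * 1.0e4      # pixel width in Angstrom
    eqw = resid.sum() * dlam_A                  # equivalent width W in Angstrom

    # FWHM: full width at half the line depth, converted to km/s.
    inside = np.where(resid >= depth / 2.0)[0]
    fwhm_um = w[inside[-1]] - w[inside[0]]
    fwhm_kms = C_KMS * fwhm_um / lam0
    return depth, eqw, fwhm_kms
```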
Molecular line contamination results in additional fluctuations in the measured K I quantities and gives larger uncertainties, despite the good signal-to-noise ratio of most of the spectra. Our final plotted uncertainties are based on repetitive trial fits and photon noise estimates. Table 4 and Table 5 provide the measured quantities for the 1.2525 and 1.2436 $\mu$m K I lines respectively. This pair of K I lines (order 61) is preferred because the shorter wavelength doublet is too heavily contaminated by H$_{2}$O absorption, and one of the K I lines is itself a blend (§3). Figures 14, 15, and 16 show, respectively, the line depth, equivalent width and velocity line width as a function of spectral type for the 1.2525 $\mu$m line. Several of the trends mentioned in the previous section are confirmed. The K I line depth increases from a weak line in early M dwarfs to a strong deep line in the early/mid L dwarfs before decaying towards later spectral types. Interestingly, there is considerable scatter in line depth among similar spectral types. The equivalent width (designated by W) is better behaved. Variations in defining a consistent continuum may have a large effect on the line depth quantity, whereas the growth of the line wings may overcome such variations when calculating W and FWHM. Equivalent width increases to a broad maximum around L2-L4 and then decreases again, remaining essentially constant from L5 - T4.5. These results are consistent with our more extensive lower resolution studies (M03). In contrast, the velocity line width of the 1.2525 $\mu$m K I line (Figure 16) increases almost monotonically and steeply with spectral type. The change in velocity width is dramatic, ranging from $\sim$ 60 km s$^{-1}$ at M6 to almost 500 km s$^{-1}$ at T4.5. This change does not result from rotational broadening because the line develops extensive wings characteristic of pressure broadening as discussed in the next section. One L-dwarf appears to stand out from the others in this sequence and is indicated by a star symbol. This object is Kelu-1AB, and will be discussed separately in the next section. The behavior of the 1.2436 $\mu$m K I line (Table 5) is similar. The changing behavior of the K I lines can be explained as follows. As the temperature decreases towards later spectral types, the transition levels become less populated and the line should become weaker. However, with decreasing temperature, dust grains settle below the photosphere and the transparency of the atmosphere at wavelengths where gas opacity is weak improves. Thus, line formation can be observed at much greater depths and pressures (Saumon et al. 2003). Higher pressures cause the development of the broad line wings through collision broadening, primarily with H$_{2}$ molecules (e.g. Burrows & Voloyubev 2003), and hence an increase in the FWHM of the lines. The greater column depth of K I in these transparent regions also serves to increase their equivalent widths. The behavior has been observed with the K I and Na I fundamental doublets (Reid et al. 2000), and is one of the unique properties of brown dwarf atmospheres. The pressure broadening of the K I lines at J-band follow quantitative expectations as well. The observed ratio between the K I line widths in the T4.5 and L5 spectra is about 2:1. As the T$_{eff}$ decreases, the pressure at $\tau$=2/3 increases as already described. According to models of cloudy atmospheres, kindly provided by D. 
Saumon (2003; private communication), the change in the photospheric pressure at these wavelengths is a factor of 5 from 1600 K to 1000 K. These effective temperatures were chosen to illustrate a prediction of the models over the typical range of L/T spectral types. This implies that the average kinetic energy of the molecules of H$_{2}$ is five times greater, or that the average speeds are $\sim$2.24 times larger. Pressure-broadened line widths should be proportional to the average velocity of the disturbing atoms. Thus, the observed pressure broadening from L5 to T4.5 is consistent with the expected change. In contrast to the K I lines, Figures 17 and 18 show the changing trends with spectral type for two representative FeH and H$_{2}$O features that fall within the orders containing the K I doublets; data are provided in Table 6. For FeH we plot the line depth index of the 1.245 $\mu$m feature in order 61, and for H$_{2}$O we use the strong 1.175 $\mu$m feature in order 65. The FeH line depth reaches a peak around L3-L4, whereas the H$_{2}$O line depth increases more-or-less monotonically from M6 to T4.5. Once again Kelu-1AB stands out, this time because the FeH bands are not as deep as expected for an L2. Line shapes: rotation and pressure broadening effects ----------------------------------------------------- In the previous section we found that the K I line width (FWHM) and the H$_{2}$O line depth provided the best correlations with spectral type. The exception to the FWHM correlation is Kelu-1AB, which [*is*]{} known to be a rapid rotator (Basri et al. 2000), and which was recently discovered to be a $\sim$0.3$''$ visual binary (Liu & Leggett 2005; Gelino et al. 2006), although the orbital velocity of 3 km s$^{-1}$ is only a fraction of the $Vsin~i$. As will be shown below, the K I lines are not really suitable for $Vsin~i$ studies. Interestingly, Kelu-1AB is not abnormal in its H$_{2}$O ratio. The one point in the H$_{2}$O plot of Figure 18 that does seem slightly high is due in fact to GD165B (L4); this result is consistent with M03. As the companion to a white dwarf, GD165B is most likely to be an old L dwarf with a higher gravity. Burgasser et al. (2003) have shown that late-type subdwarfs, which are also presumably older and have higher surface gravities, can show stronger H$_{2}$O compared to equivalently classified disk dwarfs. The presence of a binary companion can impact the observed line properties discussed above in several ways. For example, a spectroscopic companion only partially resolved in velocity space might produce a spectrum with anomalously broadened lines at certain epochs. A spectroscopic companion of similar mass to the primary object should not only manifest itself in broadened lines, observed at a favorable phase, but also should almost double the expected brightness of the system. Fainter companions, although interesting in their own right, should not significantly impact the primary dwarf spectrum if the mass ratio, $M_2/M_1$, is less than $\sim$0.5 (Prato et al. 2002). Visual companions at separations too great to noticeably affect the spectrum will also cause the observed spectrum to appear brighter than expected for the distance and age of the target system. Four targets in Table 1 are visual binaries: 2MASS 0746+20 (L0.5), Kelu-1 (L2), DENIS 0205$-$11 (L7) and SDSS 0423$-$04 (T0). Signatures of velocity variations in these spectra will be addressed in a forthcoming publication. 
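For reference, the pressure-broadening estimate at the end of the previous subsection can be summarized compactly (in our notation; it simply combines the model pressure ratio quoted above with the stated assumption that the line width tracks the mean speed of the perturbing H$_{2}$ molecules):

$$\frac{\Delta v_{\rm FWHM}({\rm T4.5})}{\Delta v_{\rm FWHM}({\rm L5})}
\;\simeq\; \frac{\bar{v}_{\rm H_2}({\rm T4.5})}{\bar{v}_{\rm H_2}({\rm L5})}
\;\simeq\; \sqrt{\frac{P_{\rm phot}({\rm T4.5})}{P_{\rm phot}({\rm L5})}}
\;\simeq\; \sqrt{5} \;\approx\; 2.2\,,$$

to be compared with the measured FWHM ratio of roughly $490/240 \approx 2$ from Table 4.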
Spectral features in Kelu-1AB appear to be significantly broader than those of other field dwarfs of similar temperature, by a factor of 2-3, consistent with its high rotational velocity of $60\pm5$ km s$^{-1}$ (Basri et al. 2000). In Figure 19 we compare the L2 dwarf, Kelu-1AB, with the L0.5 and L3.5 dwarfs in our sample. Because no feature is completely free of blending at this spectral resolution and wavelength throughout the late M to T sequence, measuring precise rotational velocities is challenging. Indeed, because of the increasing complexity of molecular features with cooler effective temperatures, most if not all of the lines in the mid-L to T objects may be blends. However, the wealth of fine-scale spectral structure attributable to FeH and H$_{2}$O throughout the $J$-band does provide some possibility of detecting trends in $Vsin~i$, especially because pressure broadening should not be a large effect for molecular transitions. Examining the rich FeH structure in order 61 (Figure 12), as well as orders 62 and 63 (Figures 1-4) where there are no strong alkali lines, it is clear that there is a sudden broadening of the FeH lines at the transition from M9 to L0, and that all of the FeH features remain broad after that transition. In fact, if the M9 spectra in orders 62 and 63 are smoothed by a factor of 3 using a simple moving average, the resulting spectrum is a very good match for the observed L0 spectrum. This sudden broadening could be caused by either increased pressure broadening or an increase in rotational velocities. Examination again of Figure 13 shows that many of the H$_{2}$O spectral features that develop in the T dwarfs are relatively sharp, indicating that they are not formed at the same depth and pressure as the potassium line wings. Therefore, in the coolest objects, we are seeing evidence of vertical structure in the atmosphere. Like the FeH lines in the late M and early L dwarfs, the H$_{2}$O features provide a better estimate of rotational velocities than the alkali lines. We can de-convolve the instrumental profile by using the observed widths of unresolved OH night sky lines and the very narrow features seen in the M2.5 object. If we interpret the change in line width at the M9-L0 boundary as a rotational velocity, then after removal of the instrumental profile, all of the early L dwarfs appear to be rapid rotators with $Vsin~i$ $\ga$ 30 km s$^{-1}$. Rotational velocities have been reported for some of the objects in our target list. Basri et al. (2000) give $Vsin~i$ = 60$\pm$5 km s$^{-1}$ for Kelu-1AB, 22$\pm$5 km s$^{-1}$ for DENIS 0205$-$11AB and $\la$3 km s$^{-1}$ for Wolf 359 (Gl 406). The L3.5 dwarf 2MASS 0036+18 is reported to have a $Vsin~i$ of $\sim$15 km s$^{-1}$ by Schweitzer et al. (2001). Several of our targets were also observed by Zapatero Osorio et al. (2006) at high resolution, enabling them to measure the $Vsin~i$ of 2MASS 0036+18, SDSS 1254-01, and 2MASS 0559-14. We note with interest that the rotational velocities they measure for these sources, as well as for another dozen dwarfs, are all very close to 30 km s$^{-1}$ within the uncertainties. We will analyze in detail the rotational velocities of our sample in a future paper; however, we comment here that our Figures 12 and 13 suggest that the T0 and T4.5 dwarfs in our sample, SDSS 0423-04 and 2MASS 0559-14, appear to reflect larger values of $Vsin~i$. If there is a tendency towards more rapid rotation among the T dwarfs, it is not apparent in the measurements of Zapatero Osorio et al. (2006). Clarke et al. 
(2003) report time-resolved spectroscopy of Kelu-1AB used to search for variability in photospheric molecular species. They confirm the short rotation period of the system and find variable H$\alpha$ profiles. No evidence for a spectroscopic companion was detected and it appears to be a normal L dwarf apart from the high rotation rate (Clarke et al. 2003). The recently discovered 0.29$''$ binary contributes only about 3 km s$^{-1}$ in velocity shift and therefore has no impact on the measured $Vsin~i$. However, it is interesting to ask whether or not both components have the same $Vsin~i$. In the binary system Gl569B (Zapatero Osorio et al. 2004), line broadening is 2-3 times greater in one component than the other, although this has been attributed to a nested spectroscopic binary (Simon et al. 2006). It is possible that the higher rotation rate of Kelu-1AB is the result of age. A younger L dwarf just past the accretion phase might have a higher rotational velocity. On the other hand, as discussed in the next section, the young L2 dwarf G196-3B does not have such a rapid rotation, or else it is being viewed at a special angle. Surface gravity effects ----------------------- As shown by models (e.g. Burrows et al. 2001), the youngest brown dwarfs are expected to be hotter and more luminous. As a sub-stellar mass object cools over the first 100 Myr, its radius will contract by a factor of 2-10, depending on initial mass, and then remain almost constant at a value close to that of Jupiter’s radius. Consequently, a very young low-mass brown dwarf could be observed with a much earlier spectral type than it will have when older than 1 Gyr, and its surface gravity ($g = GM/R^{2}$) will be less than that of an old brown dwarf with the same observed spectral type. A lower surface gravity implies less pressure broadening ($P \sim g/\kappa_{R}$, where $\kappa_{R}$ is the Rosseland mean opacity) and therefore one expects the K I lines to be narrower in such objects. The effect of surface gravity on the K I lines is significant and has already been observed at lower resolution (McGovern et al. 2004; Kirkpatrick et al. 2006). High resolution infrared spectroscopy of brown dwarfs provides a means of measuring surface gravities and hence estimating mass. Because of their greater column depth the K I lines are much more sensitive to pressure and therefore to surface gravity, whereas the weaker FeH and H$_{2}$O lines formed high in the atmosphere provide a better probe of rotational broadening. G196-3B, the companion to the M2.5 G196-3A, is classified as an L2 dwarf (Kirkpatrick et al. 1999). The G196-3AB system is believed to be $\sim$20-300 Myr old, rather than the 1 Gyr thought to be typical of field L dwarfs (Rebolo et al. 1998). Figure 20 shows the order 61 spectrum of G196-3B plotted together with the L0.5 dwarf (2MASS 0746+20AB) and the L3.5 dwarf (2MASS 0036+18). Clearly, although the signal-to-noise ratio is poorer, all of the spectral features of G196-3B are much narrower than expected for an L2 dwarf. Using high resolution far-red optical spectra, Basri et al. (2000) measured 10 km s$^{-1}$ for the rotational velocity of G196-3B. The infrared K I lines have cores only half as wide as those of the L0.5 and L3.5 dwarfs, and much less pronounced pressure-broadened line wings. This behavior suggests that the line profiles result primarily from lower surface gravity, consistent with the age of this object. Metallicity effects ------------------- Burgasser et al. 
(2002) noted that the T6 dwarf 2MASS 0937+29 is peculiar because, despite having characteristics common to its spectral class, it appears to have no K I lines in a low-resolution $J$-band spectrum. If the K I lines were shallow and very broad, or sharp and weak, then they might escape detection at low spectral resolution. As shown in Figure 21, however, where we compare our high resolution spectrum of this T6 to that of the T4.5 dwarf 2MASS 0559$-$14, the K I lines are neither weak nor broad; they are completely absent. Given the good signal-to-noise ratio of this high resolution spectrum, there is no possibility that the K I lines were simply too weak to detect. Disappearance of the K I lines is unexpected because other T6 dwarfs still show these features (Burgasser et al. 2002, M03). In fact, we have observed the T5.5 dwarf 2MASS 2356$-$15, which exhibits broad K I lines similar to those of a T5. Strong H$_{2}$O absorption features are clearly present in order 65 in the T6. These features show good correlation with the same features in the T4.5, but there is clear variation in the individual features. One possible reason for the absence of the K I lines in 2MASS 0937+29 is that this object has a different metallicity from other field dwarfs. Alternatively, the lines could be veiled by dust, but the trend at these spectral types is for dust to settle below the photosphere. In any case, there are other features in 2MASS 0937+29 that do not appear to be veiled. 2MASS 0937+29 was also classified as peculiar because of its extremely blue NIR colors and because it also has a very red optical spectrum for its type, as determined from CH$_{4}$ and H$_{2}$O strengths. This combination of attributes led Burgasser et al. (2002) to propose that 2MASS 0937+29 is an old, metal-poor brown dwarf in which enhanced collision-induced H$_{2}$ absorption in the $K$-band gives the unusual blue NIR color. If this is an old T dwarf then it is also probably a fairly high-mass brown dwarf that has cooled to the temperature of a T6, and consequently it has a higher gravity, higher pressure atmosphere. The higher gravity could also result in increased pressure broadening of the H$_{2}$O lines in 2MASS 0937+29, which would explain why these features seem muted and broader. In a recent study, Burgasser, Burrows and Kirkpatrick (2006) have found that the high surface gravity can result in a cooler effective temperature than in equivalently classified T6 dwarfs. It is known (e.g., M03) that K I lines disappear around T7 or T8; hence it is possible that 2MASS 0937+29 is depleted in K I because of its low temperature rather than a low metallicity. Because the high-resolution spectra presented here show no residual trace of the K I lines, and because the H$_{2}$O lines are unusually broad, we contend that 2MASS 0937+29 must be cooler and/or more metal poor than a normal T6 dwarf.\ Conclusions =========== Using a sequence of high-resolution infrared $J$-band spectra (R$\sim$20,000 or 15 km s$^{-1}$) obtained with NIRSPEC on the Keck II telescope, we have studied the spectral morphology of objects from M6 to T6. The principal results of the survey are as follows: \(1) Hundreds of small-scale spectral features are identified to be either FeH or H$_{2}$O absorption features. Over ten times as many FeH features as previously identified in brown dwarf spectra are now confirmed. A few features of CrH are also identified, but no convincing transitions of TiO or CH$_{4}$ at $J$-band are found in this sample. 
FeH features attain maximum strength in the mid-L dwarfs, while H$_{2}$O absorption becomes steadily stronger towards later spectral types. \(2) FeH and H$_{2}$O line widths are typically $\sim$20 km s$^{-1}$ for the late M dwarfs, but broaden abruptly by over a factor of two at the M to L transition. We interpret this effect as evidence for increased rotational velocities in L dwarfs. \(3) The doublet of Al I at 1.31270 and 1.31543 $\mu$m is shown to be very sensitive to spectral type. This doublet weakens through M9 and then vanishes abruptly between M9 and L0. We suggest that this sudden disappearance is more a consequence of a transition in atmospheric chemistry than a simple decrease in atomic population levels resulting from a change in effective temperature. That is, the aluminum atoms are suddenly bound up into molecules and grains. \(4) The wings and line widths of the K I doublets at 1.16934, 1.17761 $\mu$m and at 1.24357, 1.25256 $\mu$m increase systematically, while line depth weakens with later spectral type. The equivalent width (W) of the K I features reaches a maximum in the mid-L dwarfs, decreases and then remains almost constant through T4.5. The K I line profiles begin to exhibit pressure-broadened wings as early as late M. Line widths (FWHM) range from $\sim$50 km s$^{-1}$ at M5 to almost 500 km s$^{-1}$ at T4.5. This effect is consistent with the much greater depth that is probed in cool T dwarf atmospheres at $J$-band. \(5) As shown in Figure 12, a characteristic of the transition from L to T dwarfs is the decay of FeH spectral structure, resulting in a smooth spectrum at high resolution for late L dwarfs and early T dwarfs, before H$_{2}$O-dominated spectral structure develops. \(6) The young L2, G196-3B, exhibits very narrow K I lines without extensive pressure-broadened wings, indicative of a lower gravity atmosphere. \(7) Kelu-1AB, another L2, has exceptionally broad infrared lines, including FeH and H$_{2}$O features, confirming its status as a rapid rotator ($Vsin~i \sim$ 60 km s$^{-1}$). \(8) Finally, the peculiar T6 dwarf 2MASS 0937+29 displays a complete absence of potassium, in contrast to other late T objects. We interpret this as either a metallicity effect (depletion of K atoms) or a cooler T$_{eff}$ for this high surface gravity object. Although the sample of objects of different spectral types is relatively small, these high-resolution, high signal-to-noise spectra of M, L and T dwarfs should provide an important benchmark for the detailed development and improvement of model atmospheres.\ The authors wish to thank the staff of the Keck Observatory for their outstanding support. I.S.M. acknowledges the staff of the UCLA Infrared Laboratory and colleagues James Graham (UCB), James Larkin (UCLA) and Eric Becklin (UCLA) for their support throughout the development of the NIRSPEC instrument. We thank Adam Burrows, Katharina Lodders, Linda Brown, Didier Saumon, Richard Freedman, Travis Barman and Mark Marley for helpful discussions and opacity data. Finally, we thank the anonymous referee for a careful and complete critique of this paper. Work by S.S.K. was supported by the Astrophysical Research Center for the Structure and Evolution of the Cosmos (ARCSEC) of the Korea Science and Engineering Foundation through the Science Research Center (SRC) program. A.J.B. 
acknowledges support by NASA through Hubble Fellowship grant HST-HF-01137.01 awarded by the Space Telescope Science Institute, which is operated by the Association of universities for research in Astronomy, Inc., for NASA, under contract NAS 5-26555. This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This publication makes use of data from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center, funded by the National Aeronautics and Space Administration and the National Science Foundation. Finally, the authors wish to extend special thanks to those of Hawaiian ancestry on whose sacred mountain we are privileged to be guests. Auman, J., Jr., 1967, ApJS, 14, 171 Basri, G. 2000, , 38, 485 Basri, G., Mohanty, S., Allard, F., Hauschildt, P. H., Delfosse, X., Mart[í]{}n, E. L., Forveille, T., & Goldman, B 2000, ApJ, 538, 363 Becklin, E.E., & Zuckerman, B. 1988, Nature, 336, 656 Bouy, H., Mart[í]{}n, E. L., Brandner, W., & Bouvier, J. 2005, AJ, 129, 511 Burgasser, A. J., et al. 2000, AJ, 120, 1100 Burgasser, A. J., et al. 2002, ApJ, 564, 421 Burgasser, A. J., McElwain, M. W., Kirkpatrick, J. D., Cruz, K. L., Tinney, C. G. & Reid, I. N. 2004, AJ, 127, 2856 Burgasser, A. J., Reid, I. N., Leggett, S. K., Kirkpatrick, J. D., Liebert, J. & Burrows, A. 2005, ApJ, 634, L177 Burgasser, A. J., Geballe, T. R., Leggett, S. K., Kirkpatrick, J. D., & Golimowski, D. A. 2006, ApJ, 637, 1067 Burgasser, A. J., Burrows, A., & Kirkpatrick, J. D. 2006, ApJ, 639, 1095 Burrows, A., Hubbard, W. B., Lunine, J. I., & Liebert, J. 2001, Rev. Mod. Phys., 73, 719 Burrows, A., Ram, R.S., Bernath, P., Sharp, C.M. & Milsom, J.A. 2002, ApJ, 577, 986 Burrows, A. & Voloyubev, M. 2003, ApJ, 583, 985. Clarke et al. 2003, MNRAS, 341, 239 Cushing, M. C., Rayner, J. T., Davis, S. P., & Vacca, W. D. 2003, ApJ, 582, 1066 Cushing, M. C., Rayner, J. T., & Vacca, W. D. 2005, ApJ, 623, 1115 Delfosse, X., et al. 1997, A & A, 327, L25 Delfosse, X., Tinney, C. G., Forveille, T., Epchtein, N., Borsenberger, J., Foque, P., Kimeswenger, S., Tiphene, D., 1999, A & AS, 135, 41 Galehouse, D. C., Davis, S. P., & Brault, J. W. 1980, ApJS, 42, 241 Geballe, T. R., Kulkarni, S. R., Woodward, C. E. & Sloan, G. C. 1996, ApJ, 467, L101 Geballe, T. R., et al. 2002, ApJ, 564, 466 Gelino, C. R., Kulkarni, S. R., & Stephens, D. C. 2006, PASP, 118, 611 Golimowski, D. A., et al. 2004, AJ, 127, 3516 Jones, H. R. A., Longmore, A. J., Jameson, R. F., & Mountain, C. M. 1994, MNRAS, 267, 413 Jones, H. R. A., Longmore, A. J., Allard, F., & Hauschildt, P. H. 1996, MNRAS, 280, 77 Kirkpatrick, J. D., Beichman, C. A., & Skrutskie, M. F. 1997, ApJ, 476, 311 Kirkpatrick, J. D., et al. 1999, ApJ, 519, 802 Kirkpatrick, J. D., et al. 2000, AJ, 120, 447 Kirkpatrick, J. D., Dahn, C. C., Monet, D. G., Reid, I. N., Gizis, J. E., Liebert, J., & Burgasser, A. J. 2001, AJ, 121, 3235 Kirkpatrick, J. D., Barman, T. S., Burgasser, A. J., McGovern, M. R., McLean, I. S., Tinney, C. G., & Lowrance, P. J. 2006, ApJ, 639, 1120 Leggett, S. K., et al. 2000, ApJ, 536, L35 Leggett, S. K., Allard, F., Geballe, T., Hauschildt, P. H., & Schweitzer, A. 2001, ApJ, 548, 908 Liu, M.C. & Leggett, S.K., 2005, ApJ, 634, 616 Lodders, K. 2002, ApJ, 577, 974. McGovern, M. R., Kirkpatrick, J. D., McLean, I. S., Burgasser, A. J., Prato, L., & Lowrance, P. 
J. 2004, ApJ, 600, 1020 McLean, I. S., et al. 1998, Proc. SPIE, 3354, 566 McLean, I. S., et al. 2000a, ApJ, 533, L45 McLean, I. S., Graham, J. R., Becklin, E. E., Figer, D. F., Larkin, J. E., Levenson, N. A., & Teplitz, H. I.. 2000b, Proc. SPIE, 4008, 1048 McLean, I. S., Prato, L., Kim, S. S., Wilcox, M. K., Kirkpatrick, J. D., & Burgasser, A. 2001, ApJ, 561, L115 McLean, I. S., McGovern, M. R., Burgasser, A. J., Kirkpatrick, J. D., Prato, L., Kim, S. S. 2003a, ApJ, 596, 561 (M03) McLean, I. S., McGovern, M. R., Prato, L., Burgasser, A. J., & Kirkpatrick, J. D. 2003b, in Brown Dwarfs, IAU Symp. 211, ed. E. Mart[í]{}n, ASP Conference Series. Mohanty, S., Basri, G., Jayawardhana, R., Allard, F., Hauschildt, P., & Ardila, D. 2004, ApJ, 609, 854 Partridge, H., & Schwenke, D. W. 1997, J. Chem. Phys., 106, 4618 Phillips, J.G. 1973, ApJS, 26, 313 Phillips, J.G., Davis, S.P., Lindgren, B., & Balfour, W.J. 1987, ApJS, 65,721 Prato, L., Simon, M., Mazeh, T., McLean, I.S., Norman, D. & Zucker, S. 2002, ApJ, 569, 863 Rebolo, R., Zapatero Osorio, M. R., Madruga, S., Bejar, V. J. S., Arribas, S., Licandro, J., 1998, Science, 282,1309 Reid, I. N., Kirkpatrick, J. D., Gizis, J. E., Dahn, C. C., Monet, D. G., Williams, R. J., Liebert, J., & Burgasser, A. J. 2000, AJ, 119, 369 Reid, I. N., Kirkpatrick, J. D., Liebert, J., Gizis, J. E., Dahn, C. C., & Monet, D. G., 2002, AJ, 124, 519 Ruiz, M. T., Leggett, S. K., & Allard, F. 1997, ApJ, 491, L107 Saumon, D., Marley, M. S., Lodders, K., & Freedman, R. S.2003, in Brown Dwarfs, IAU Symp. 211, ed. E. Mart[’i]{}n, ASP Conference Series. Schweitzer, A., Gizis, J. E., Hauschildt, P. H., Allard, F., & Reid, I. N. 2001, ApJ, 555, 368 Simon, M., Bender, C., & Prato, L. 2006, ApJ, 644, 1183 Testi, L., et al. 2001, ApJ, 522, L147 Tokunaga, A. T. 2000, in Allen’s Astrophysical Quantities, ed. A. N. Cox (4th ed.; New York: Springer), 151 Wilson, J. C., Kirkpatrick, J. D., Gizis, J. E., Skrutskie, M. F., Monet, D. G., & Houck, J. R. 2001, AJ, 122, 1989 Zapatero Osorio, M. R., Lane, B. F., Pavlenko, Ya., Mart[í]{}in, E. L., Britton, M., & Kulkarni, S. R. 2004, ApJ, 615, 958 Zapatero Osorio, M. R., Mart[í]{}n, E. L., Bouy, H., Tata, R., Deshpande, R., & Wainscoat, R. J. 
2006, ApJ, 647, 1405 [lccccc]{} G196-3A & 10 04 21.0 & 50 23 06 & M2.5 & 8.08$\pm$0.026 & 2002 Apr 23\ Wolf 359 (GJ 406) & 10 56 28.9 & 07 00 53 & M6 & 7.09$\pm$0.024 & 2002 Apr 23\ 2MASSW J0140026+270150 & 01 40 02.6 & 27 01 50 & M9 & 12.49$\pm$0.021 & 2000 Dec 4\ 2MASP J0345432+254023 & 03 45 43.2 & 25 40 23 & L0 & 14.00$\pm$0.027 & 2000 Dec 4\ 2MASSI J0746425+200032 & 07 46 42.6 & 20 00 32 & L0.5 & 11.76$\pm$0.020 & 2002 Jan 1\ Kelu-1 & 13 05 40.2 &$-$25 41 06 & L2 & 13.41$\pm$0.026 & 2003 May 12\ G196-3B & 10 04 21.0 & 50 23 06 & L2 & 14.83$\pm$0.047 & 2002 Apr 23\ 2MASSW J0036159+182110 & 00 36 16.2 & 18 21 10 & L3.5 & 12.47$\pm$0.027 & 2000 Dec 4\ GD165B & 14 24 39.1 & 09 17 10 & L4 & 15.69$\pm$0.078 & 2003 May 13\ 2MASSW J1507476$-$162738 & 15 07 47.7 &$-$16 27 39 & L5 & 12.83$\pm$0.027 & 2000 Apr 25\ DENIS-P J0205.4$-$1159 & 02 05 29.4 &$-$11 59 30 & L7 & 14.59$\pm$0.030 & 2001 Oct 9\ SDSSp J042348.57$-$041403.5 & 04 23 48.6 &$-$04 14 04 & T0 & 14.47$\pm$0.027 & 2001 Oct 9\ SDSSp J125453.90$-$012247.4 & 12 54 53.9 &$-$01 22 47 & T2 & 14.89$\pm$0.035 & 2003 May 14\ 2MASS J05591914$-$1404488 & 05 59 19.1 &$-$14 04 48 & T4.5 & 13.80$\pm$0.024 & 2001 Oct 9\ 2MASSI J2356547$-$155310 & 23 56 54.8 &$-$15 53 11 & T5.5 & 15.82$\pm$0.057 & 2005 July 19\ 2MASSI J0937347+293142 & 09 37 34.7 & 29 31 42 & T6p & 14.65$\pm$0.036 & 2003 May 12\ \ [lcccc]{} 58 & 1.30447–1.32370 & 192.3 & 0.188 & 84.5\ 59 & 1.28262–1.30151 & 188.9 & 0.184 & 85.9\ 60 & 1.26137–1.27999 & 186.2 & 0.182 & 87.5\ 61 & 1.24081–1.25913 & 183.2 & 0.179 & 89.0\ 62 & 1.22093–1.23899 & 180.6 & 0.176 & 90.7\ 63 & 1.20168–1.21938 & 177.0 & 0.173 & 91.7\ 64 & 1.18293–1.20011 & 171.8 & 0.168 & 91.9\ 65 & 1.16496–1.18207 & 171.1 & 0.167 & 94.4\ \ [lccc]{} Al I & 1.3127007 & 4s $^{2}$S$_{1/2}$ - 4p $^{2}$P$_{3/2}$ & 3.143-4.087\ Al I & 1.3154345 & 4s $^{2}$S$_{1/2}$ - 4p $^{2}$P$_{1/2}$ & 3.143-4.085\ CrH & 1.18? 
& 6 bands of A$^{6}$ $\Sigma^{+}$ - X$^{6}$ $\Sigma^{+}$ &\ Fe I & 1.1693174 & a$^{5}$P$_{1}$ - z$^{5}$D$^{o}$$_{1}$ & 2.223-3.283\ Fe I & 1.1786490 & b$^{3}$P$_{2}$ - z$^{3}$D$^{o}$$_{3}$ & 2.831-3.884\ Fe I & 1.1886098 & a$^{5}$P$_{2}$ - z$^{5}$D$^{o}$$_{3}$ & 2.198-3.241\ Fe I & 1.1887337 & a$^{5}$P$_{1}$ - z$^{5}$D$^{o}$$_{2}$ & 2.223-3.266\ Fe I & 1.1976325 & a$^{5}$P$_{3}$ - z$^{5}$D$^{o}$$_{4}$ & 2.176-3.211\ FeH & 1.1939 band head & 0-1 band of F$^{4}$ $\Delta$-X$^{4}$ $\Delta$ &\ FeH & 1.2389 band head & 1-2 band of F$^{4}$ $\Delta$-X$^{4}$ $\Delta$ &\ H$_{2}$O & 1.135 & $\nu_{1}+\nu_{2}+\nu_{3}$ &\ H$_{2}$O & 1.331 & 2$\nu_{3}$ band &\ K I & 1.1693420 & 4p $^{2}$P$_{1/2}$ - 3d $^{2}$D$_{3/2}$ & 1.610-2.670\ K I & 1.1772861 & 4p $^{2}$P$_{3/2}$ - 3d $^{2}$D$_{3/2}$ & 1.617-2.670\ K I & 1.1776061 & 4p $^{2}$P$_{3/2}$ - 3d $^{2}$D$_{5/2}$ & 1.617-2.670\ K I & 1.2435675 & 4p $^{2}$P$_{1/2}$ - 5s $^{2}$S$_{1/2}$ & 1.610-2.607\ K I & 1.2525560 & 4p $^{2}$P$_{3/2}$ - 5s $^{2}$S$_{1/2}$ & 1.617-2.607\ Mn I & 1.290329 & a$^{6}$D$_{9/2}$ - z$^{6}$P$_{7/2}$ & 2.114-3.075\ Na I & 1.268261 & 3d $^{2}$D$_{5/2,7/2}$ - 5f $^{2}$F$_{5/2}$ & 3.617-4.595\ Na I & 1.268269 & 3d $^{2}$D$_{3/2}$ - 5f $^{2}$F$_{5/2}$ & 3.617-4.595\ Ti I & 1.1896124 & b$^{3}$F$_{2}$ - z$^{3}$D$^{o}$$_{1}$ & 1.430-2.472\ Ti I & 1.2674567 & b$^{3}$F$_{2}$ - z$^{3}$F$^{o}$$_{3}$ & 1.430-2.408\ Ti I & 1.2834947 & b$^{3}$F$_{2}$ - z$^{3}$F$^{o}$$_{2}$ & 1.430-2.396\ Ti I & 1.2850544 & b$^{3}$F$_{3}$ - z$^{3}$F$^{o}$$_{3}$ & 1.443-2.408\ [llccc]{} M2.5 & G196-3A & 0.36$\pm$0.03 & 1.3$\pm$0.4 & 39$\pm$4\ M6 & Wolf 359 & 0.72$\pm$0.01 & 5.2$\pm$0.5 & 64$\pm$6\ M9 & 2MASS J0140+27 & 0.80$\pm$0.01 & 7.5$\pm$0.7 & 110$\pm$11\ L0 & 2MASS J0345+25 & 0.66$\pm$0.02 & 9.3$\pm$0.9 & 180$\pm$18\ L0.5 & 2MASS J0746+20AB & 0.71$\pm$0.01 & 11.5$\pm$1.1 & 210$\pm$21\ L2 & Kelu-1AB & 0.60$\pm$0.02 & 14.1$\pm$1.4 & 320$\pm$32\ L3.5 & 2MASS J0036+18 & 0.69$\pm$0.02 & 14.4$\pm$1.4 & 240$\pm$24\ L4 & GD165B & 0.80$\pm$0.01 & 12.6$\pm$1.3 & 230$\pm$23\ L5 & 2MASS J1507$-$16 & 0.68$\pm$0.02 & 10.0$\pm$1.0 & 240$\pm$24\ L7 & DENIS J0205$-$11AB & 0.47$\pm$0.05 & 8.6$\pm$1.3 & 290$\pm$29\ T0 & SDSS J0423$-$04AB & 0.47$\pm$0.06 & 9.3$\pm$1.4 & 240$\pm$24\ T2 & SDSS J1254$-$01 & 0.60$\pm$0.05 & 9.8$\pm$1.5 & 270$\pm$27\ T4.5 & 2MASS J0559$-$14 & 0.41$\pm$0.07 & 9.4$\pm$1.4 & 490$\pm$50\ [llccc]{} M2.5 & G196-3A & 0.24$\pm$0.04 & 1.3$\pm$0.1 & 47$\pm$5\ M6 & Wolf 359 & 0.65$\pm$0.02 & 5.6$\pm$0.6 & 65$\pm$7\ M9 & 2MASS J0140+27 & 0.77$\pm$0.01 & 9.0$\pm$0.9 & 78$\pm$8\ L0 & 2MASS J0345+25 & 0.61$\pm$0.02 & 11.5$\pm$1.2 & 220$\pm$22\ L0.5 & 2MASS J0746+20AB & 0.67$\pm$0.02 & 14.1$\pm$1.4 & 230$\pm$23\ L2 & Kelu-1AB & 0.57$\pm$0.02 & 14.1$\pm$1.4 & 320$\pm$31\ L3.5 & 2MASS J0036+18 & 0.63$\pm$0.02 & 11.4$\pm$1.1 & 290$\pm$29\ L4 & GD165B & 0.75$\pm$0.01 & 14.0$\pm$1.4 & 150$\pm$15\ L5 & 2MASS J1507$-$16 & 0.62$\pm$0.02 & 14.7$\pm$1.5 & 270$\pm$27\ L7 & DENIS J0205$-$11AB & 0.36$\pm$0.06 & 8.2$\pm$1.2 & 390$\pm$39\ T0 & SDSS J0423$-$04AB & 0.36$\pm$0.08 & 7.5$\pm$1.1 & 300$\pm$30\ T2 & SDSS J1254$-$01 & 0.49$\pm$0.06 & 7.9$\pm$1.2 & 260$\pm$26\ T4.5 & 2MASS J0559$-$14 & 0.29$\pm$0.09 & 9.4$\pm$1.4 & 460$\pm$46\ [llcc]{} M2.5 & G196-3A & 0.03$\pm$0.04 & 0.022$\pm$0.01\ M6 & Wolf 359 & 0.12$\pm$0.04 & 0.080$\pm$0.016\ M9 & 2MASS J0140+27 & 0.17$\pm$0.04 & 0.140$\pm$0.029\ L0 & 2MASS J0345+25 & 0.25$\pm$0.04 & 0.110$\pm$0.022\ L0.5 & 2MASS J0746+20AB & 0.29$\pm$0.04 & 0.100$\pm$0.02\ L2 & Kelu-1AB & 0.15$\pm$0.04 & 0.110$\pm$0.022\ L3.5 & 2MASS J0036+18 & 
0.28$\pm$0.04 & 0.160$\pm$0.023\ L4 & GD165B & 0.35$\pm$0.05 & 0.35 $\pm$0.05\ L5 & 2MASS J1507$-$16 & 0.23$\pm$0.04 & 0.220$\pm$0.044\ L7 & DENIS J0205$-$11AB & 0.19$\pm$0.08 & 0.270$\pm$0.067\ T0 & SDSS J0423$-$04AB & 0.08$\pm$0.07 & 0.280$\pm$0.055\ T2 & SDSS J1254$-$01 & 0.16$\pm$0.07 & 0.380$\pm$0.075\ T4.5 & 2MASS J0559$-$14 & 0.06$\pm$0.09 & 0.620$\pm$0.093\ ![Part of the spectral region covered by order 58 containing the Al I doublet at 1.3127 and 1.3154 $\mu$m is compared for the M6 (Wolf 359; dotted), M9 (2MASS 0140+27; dashed) and L0 (2MASS 0345+25; solid) objects shown in Figures 1-3. The broad absorption features are due to blended H$_{2}$O transitions. All spectra are normalized to the same level and the intensity axis is plotted from 0.5 instead of zero for clarity of presentation. The Al I lines remain significant until M9 and then vanish abruptly between M9 and L0.](f7_color.eps) ![Order 62 from 1.221-1.238 $\mu$m for the M9 dwarf 2MASS 0140+27 is over-plotted on a scaled FeH opacity plot smoothed to R$\sim$20,000. The region near 1.222 $\mu$m previously attributed to the Q-branch of the F$^{4}$ $\Delta_{7/2}$ - X$^{4}$ $\Delta_{7/2}$ system of FeH is resolved, and almost every other feature is identified with P-branch (dotted vertical lines not otherwise labeled) or R-branch (labeled) lines of the 0–1 band. Lines not attributed to FeH are discussed in the text.](f8_color.eps) ![A plot of FeH opacity at 2000 K and a pressure of 1 bar, overlaid with the order 61 spectra of the M9 (2MASS 0140+27) and L5 (2MASS 1507$-$16) dwarfs from Table 1. Dashed lines show the location of the FeH transitions and the darker solid lines correspond to a resolution of R$\sim$20,000. Although the features are broader in the L5 dwarf, almost all of the weaker features in order 61 can be attributed to FeH. Lines not attributed to FeH are discussed in the text. Note the broadening of the strong K I lines from M9 to L5.](f9_color.eps) ![A plot of H$_{2}$O opacity at 1000 K and a pressure of 1 bar is overlaid with the order 61 spectra of the L5 (2MASS 1507$-$16) and T4.5 (2MASS 0559$-$14) dwarfs from Table 1. Dashed lines show the location of the H$_{2}$O transitions and the solid lines correspond to a resolution of R$\sim$20,000. H$_{2}$O features do not dominate either spectrum. Spectral features in the L5 that were not consistent with FeH cannot be attributed to H$_{2}$O. FeH is almost completely absent from the T4.5.](f10_color.eps) ![A plot of H$_{2}$O opacity at 1000 K and a pressure of 1 bar, overlaid with the order 65 NIRSPEC high-resolution spectra of the L3.5 (2MASS 0036+18) and T4.5 (2MASS 0559$-$14) dwarfs from Table 1. As in the previous figure, the solid lines correspond to a resolution of R$\sim$20,000. No FeH lines are listed in the opacity tables for this order. All of the strong, sharper features in the T4.5 spectrum correlate well with H$_{2}$O transitions. For the L3.5 the correlation is weaker but still present. Note the broadening of the K I lines from L3.5 to T4.5.](f11_color.eps) ![A plot of one minus the line to continuum flux ratio is used to measure the depth of the 1.2525 $\mu$m K I line as a function of spectral type. For a very strong line the index approaches unity. The object indicated by the star symbol is Kelu-1AB.](f14.eps) ![A plot of the equivalent width (W, in Å) of the 1.2525 $\mu$m K I line as a function of spectral type. Error bars are estimated from a sequence of trial fits to the continuum. 
The object indicated by the star symbol is Kelu-1AB.](f15.eps) ![A plot of the FWHM (in km s$^{-1}$) of the 1.2525 $\mu$m K I line as a function of spectral type. The object indicated by the star symbol is Kelu-1AB.](f16.eps) ![A plot of one minus the line to continuum flux ratio for the strong FeH feature at 1.245 $\mu$m as a function of spectral type. Again, the object indicated by the star symbol is Kelu-1AB, which appears to be anomalous.](f17.eps) ![A plot of one minus the line to continuum flux ratio for the strong H$_{2}$O feature at 1.1752 $\mu$m as a function of spectral type. As before, the object indicated by the star symbol is Kelu-1AB. In this plot it is GD165B (L4) that appears slightly above the general trend.](f18.eps) ![The high resolution spectrum at 1.25 $\mu$m (order 61) of the L2 dwarf Kelu-1AB compared to L0.5 and L3.5 dwarfs (2MASS 0746+20AB and 2MASS 0036+18, respectively). The K I lines are significantly broader and shallower in Kelu-1AB, even compared to the binary L0.5. Spectra are normalized to unity and offset by a constant. Each spectrum has been shifted to the laboratory (vacuum) rest frame.](f19.eps) [^1]: Data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. [^2]: http://www.astro.ucla.edu/$\sim$mclean/BDSSarchive/ [^3]: See http://www2.keck.hawaii.edu/inst/nirspec/redspec/index.html
--- abstract: 'We study the gravitational Dirichlet problem in AdS spacetimes with a view to understanding the boundary CFT interpretation. We define the problem as bulk Einstein’s equations with Dirichlet boundary conditions on fixed timelike cut-off hypersurface. Using the fluid/gravity correspondence, we argue that one can determine non-linear solutions to this problem in the long wavelength regime. On the boundary we find a conformal fluid with Dirichlet constitutive relations, viz., the fluid propagates on a ‘dynamical’ background metric which depends on the local fluid velocities and temperature. This boundary fluid can be re-expressed as an emergent hypersurface fluid which is non-conformal but has the same value of the shear viscosity as the boundary fluid. The hypersurface dynamics arises as a collective effect, wherein effects of the background are transmuted into the fluid degrees of freedom. Furthermore, we demonstrate that this collective fluid is forced to be non-relativistic below a critical cut-off radius in AdS to avoid acausal sound propagation with respect to the hypersurface metric. We further go on to show how one can use this set-up to embed the recent constructions of flat spacetime duals to non-relativistic fluid dynamics into the AdS/CFT correspondence, arguing that a version of the membrane paradigm arises naturally when the boundary fluid lives on a background Galilean manifold.' author: - | Daniel Brattan$^a$[^1], Joan Camps$^a$[^2], R. Loganayagam$^b$[^3], Mukund Rangamani$^a$[^4]\ \ ,\ \ ,\ title: | [**CFT dual of the AdS Dirichlet problem:\ Fluid/Gravity on cut-off surfaces**]{} --- (0,0)(0,0) (380, 330)[DCPT-11/25]{} Introduction {#s:intro} ============ The AdS/CFT correspondence [@Maldacena:1997re] which postulates a remarkable duality between large $N$ quantum field theories and gravitational dynamics, provides a useful theoretical laboratory to address questions underlying the dynamics of these systems. Not only has it proven useful to obtain quantitative information about the dynamics of strongly coupled field theories, but it also provides a unique perspective into the geometrization of field theoretic concepts. Since the early days of the AdS/CFT correspondence it has been known that the radial direction of the bulk spacetime encodes in some sense the energy scale of the dual field theory [@Susskind:1998dq]. While the nature of this map is not terribly precise outside of the simple example of pure geometry (dual to the vacuum state of the field theory), it nevertheless provides valuable intuition about certain basic aspects of effective field theory dynamics [@Banks:1998dd; @Peet:1998wn], and has led to the idea of the holographic renormalisation group [@deBoer:1999xf], which relates the radial ‘evolution’ in AdS to RG flows in field theories. More recently this idea has been exploited to geometrize Wilson’s concept of integrating out momentum shells to generate field theory effective actions, in terms of integrating out regions of the bulk geometry which in turn lead to effective multi-trace boundary conditions on the cut-off surface, a fixed radial slice (in some preferred foliation) in AdS [@Heemskerk:2010hk; @Faulkner:2010jy]. One of the key features of this holographic Wilsonian approach was the emergence of multi-trace deformations of the field theory even in the planar limit, consistent with field theory expectation. A natural question in this context is what does this RG flow mean for the gravitational equations of motion? 
More precisely, consider the problem of integrating out radial geometric shells in Einstein gravity with negative cosmological constant (which is a consistent truncation of string theory/supergravity). One anticipates based on the standard dictionary which relates the bulk metric to the boundary energy momentum tensor to obtain a scale dependent effective action for the energy momentum tensor, containing arbitrarily high multi-traces of the stress tensor. The reason for the generation of these multi-traces is clear, once one factors in the intrinsic non-linearity of gravity. The basic equations in this context are of course easy to write down; as explained in [@Heemskerk:2010hk; @Faulkner:2010jy] the flow is driven by the radial ADM Hamiltonian and one can in principle solve the resulting Hamilton-Jacobi like equation for the effective action on the cut-off hypersurface. Despite the conceptual simplicity of the formulation of Wilsonian RG in terms of geometric effective actions, the point still remains that gravity’s intrinsic non-linearity makes explicit solutions hard to come by. One can ask whether there is a tractable sector of the gravitational flow equations which leads to new insight. A natural avenue for exploration is suggested by the long-wavelength regime where we restrict attention to fluctuations of low frequency in the field theory directions. As evidenced by the fluid/gravity correspondence [@Bhattacharyya:2008jc] there is an essential simplification in this regime; bulk Einstein’s equations can be explicitly solved order by order in a long-wavelength expansion along the boundary.[^5] As such one should be able to use this framework in conjunction with the fluid dynamical expansion to derive an effective action for the low frequency degrees of freedom which live on a cut-off surface in the interior of the AdS spacetime.[^6] Rather than tackle this problem directly we will take a slightly different tack in this paper, one which we believe clarifies some aspects of evolution in the radial direction and its possible connection to RG flows. One of our main conclusions will be that imposing rigid cut-offs in AdS is more naturally viewed in terms of perturbing the CFT by some non-local deformation or equivalently by introducing explicit state-dependent sources in the boundary theory. A second motivation is the recent work [@Bredberg:2010ky; @Bredberg:2011jq] which derives an explicit map between solutions of vacuum Einstein equations (with no cosmological constant) and those of incompressible Navier-Stokes equations, thereby making direct contact with some of the ideas of the black hole membrane paradigm in asymptotically flat spacetime [@Damour:1978cg; @Thorne:1986iy]. This problem, which has been further generalized in [@Compere:2011dx], is the zero cosmological constant analog of the problem we consider (see also [@Eling:2009pb; @Eling:2009sj] for another approach and [@Cai:2011xv] for related work). The idea is to consider a fixed timelike hypersurface with Dirichlet data enforcing a flat metric on the slice. Given these boundary conditions one wants to solve vacuum Einstein equations so as to obtain a solution which has a regular future horizon.[^7] By explicit construction which involves long wavelength fluctuations around flat space in a Rindler patch the authors of [@Bredberg:2011jq; @Compere:2011dx] construct solutions to vacuum Einstein’s equations order by order in a perturbation expansion in gradients along the hypersurface directions. 
The resulting geometry has a regular Rindler horizon, and one obtains a regular solution to Einstein’s equations contingent on the fact that the dynamics of the induced stress tensor on the hypersurface satisfies the incompressible Navier-Stokes equations. While this development is fascinating, one is hampered from a first principles understanding of the physics from a holographic viewpoint, owing to the rather poorly understood concepts of flat space holography. Moreover, given the connection between fluid dynamics (albeit relativistic and conformal) and Einstein’s equations with negative cosmological constant as described by the aforementioned fluid/gravity correspondence [@Bhattacharyya:2008jc] (and its non-relativistic extension in [@Bhattacharyya:2008kq]), it is interesting to ask whether the construction in [@Bredberg:2011jq] can be obtained as a limit of the fluid/gravity map. If this is possible, one can then look for the field theoretic interpretation of the flat space problem. Motivated by these issues, we consider a region of the AdS spacetime bounded by a timelike hypersurface $\Sigma_D$ at some radial position, say $r = r_D$, in the supergravity limit of AdS/CFT. We are interested in solving for the bulk dynamics, where we will give ourselves the freedom to specify boundary conditions on $\Sigma_D$. The second boundary condition (which is necessary to zero in on a unique solution) will be specified by demanding regularity in the interior of the spacetime. We have schematically depicted the set-up in . In the large $N$ limit, the specification of the problem is thus tantamount to solving classical partial differential equations (PDEs) in AdS with a Dirichlet boundary condition imposed on various fields at the hypersurface $\Sigma_D$. The question we would like to know the answer to is simply: “What is the problem that we are solving in the CFT language?” ![Schematic representation of the Dirichlet problem we consider in this paper. The Dirichlet surface is taken to be at some value $r = r_D$ where we impose boundary conditions on the fields. The solutions will further be constrained by requiring that they be regular on any putative horizon ${\mathcal H}^+$ (shown in the figure) or the origin. The question we are after is what is the boundary image of this Dirichlet data?[]{data-label="f:setup"}](Dir-schema) (0,0) (-4.8,1.36)[$\Sigma_D$]{} (-8.4,2.3)[${\mathcal H}^+$]{} (-5.24,-0.36)[data = $\hat{\mathfrak X}$]{} (-1.7,-0.36)[data = ${\mathfrak X}$]{} As we have reviewed above, various results exist in the literature that suggest that solving such a Dirichlet problem is analogous to some kind of RG from the CFT point of view. Despite the strongly suggestive nature of this holographic RG point of view, it is also not very clear what kind of an RG one is speaking of within a CFT. A priori, for one, it does not seem like the RG flow that arises from cutting off a CFT a la Wilson is the correct way to dualize the Dirichlet problem. Hence, our question: what is the CFT dual of a bulk Dirichlet problem?[^8] As a warm-up we first consider the bulk Dirichlet problem for linear PDEs, using the simple setting of a Klein-Gordon field propagating in a cut-off AdS spacetime. In this case it is not hard to see that one is deforming the field theory by a non-local double-trace operator, whose precise form, we argue, can be extracted by suitable convolution of appropriate bulk-to-boundary propagators.
We then turn to the issue of setting up the problem in a gravitational setting, outlining it in general before moving on to the tractable setting of the fluid/gravity regime. In the long wavelength regime we will argue that the bulk Dirichlet problem reduces to a particular forcing of the fluid on the boundary of the asymptotically AdS spacetime. The fluid/gravity correspondence was generalized to fluids propagating on curved backgrounds with slowly varying curvatures in [@Bhattacharyya:2008ji] and the most general solutions which will prove to be of interest to us were presented in [@Bhattacharyya:2008mz]. Using these results it transpires that we can immediately write down the solution to the bulk Dirichlet problem in the long wavelength regime. The logic is the following: we wish to prescribe on the hypersurface $r =r_D$ a Lorentzian metric which we denote as ${\hat g}_{\mu\nu}$. This is arbitrary subject to the requirement that its curvatures be slowly varying so that we can treat it with the fluid/gravity perturbation scheme. We then solve Einstein’s equations demanding regularity in the interior of the spacetime. Using standard intuition from the AdS/CFT correspondence it can be argued that the seed geometry which we need to set up the gradient expansion should simply be a black hole geometry which has a regular future event horizon, which furthermore satisfies the prescribed Dirichlet boundary condition.[^9] It is not hard to see that such a seed solution is obtained by simply performing a coordinate transformation of the well known planar Schwarzschild-AdS black hole. But this is precisely the set-up of [@Bhattacharyya:2008ji; @Bhattacharyya:2008mz], the only difference being the fact that in these works the Dirichlet data is imposed on the boundary at $r =\infty$. Let’s call this boundary metric $g_{\mu\nu}$, which is also by definition slowly varying etc.. The solution to the asymptotic Dirichlet problem is characterized by the boundary metric $g_{\mu\nu}$, a distinguished velocity field $u^\mu$ (which is unit normalized) and a scalar function $b$ (determining the temperature or equivalently the local energy density). Let us denote these variables collectively as ${\mathfrak X}$. The boundary Brown-York stress tensor (up to counter-terms) takes the fluid dynamical form and is built out of the data contained in ${\mathfrak X}$. Now given the space of solutions to the asymptotic Dirichlet problem, we can reparametrize that space of solutions appropriately to obtain the solutions of the new bulk Dirichlet problem. The only condition we have to satisfy is that the induced metric on $\Sigma_D$ in the solutions obtained this way[^10] be equal to ${\hat g}_{\mu\nu}$. Furthermore, we can extract the stress tensor on $\Sigma_D$[^11]. We will argue that there is a corresponding velocity field ${\hat u}^\mu$ (normalized with respect to the hypersurface metric) and a scalar function (the hypersurface temperature), which parameterize the stress tensor of the hypersurface, which not surprisingly takes the fluid dynamical form. The main novelty is that the stress tensor does not however correspond to that of a conformal fluid. The introduction of an explicit scale by way of the Dirichlet surface’s location engenders a non-vanishing trace, which curiously evolves in a highly suggestive manner under change of cut-off surface position, see . 
Calling the totality of the data on the hypersurface ${\hat {\mathfrak X}}$ we further show that within the gradient expansion there is a one-to-one correspondence between the hypersurface data and the boundary data; $\varphi_D: {\mathfrak X} \to {\hat {\mathfrak X}}$ is bijective. This then has the advantage that we can immediately understand the boundary dual of the bulk Dirichlet problem as a conformal fluid which is placed on a[^12] ‘dynamical background’ whose metric depends on the same set of variables that characterizes the fluid itself (in addition to the prescribed hypersurface metric). So from the boundary viewpoint there is a complete mixing between intrinsic and extrinsic data, which is the long-wavelength non-linear analog of the double trace deformation seen for the scalar toy model. Moreover, this solution allows us to see that the dynamics of the fluid on the Dirichlet surface, as given by the conservation equation on $\Sigma_D$, ‘emerges’ as collective dynamics of the boundary CFT. In particular, the boundary fluid lives on a ‘dynamical background’, and the effects of the background can be suitably subsumed into the fluid description. This suggests that the correct way to think about the hypersurface physics is in terms of a ‘dressed fluid’ living on an inert geometry. Thus, the effective description of a fluid on this dynamical background is geometrically encapsulated in terms of the Dirichlet hypersurface dynamics. Examining the resulting dynamics on $\Sigma_D$ we find that the hypersurface or effective fluid suffers from a possible pathology for $r_D$ smaller than some critical value $r_{D,snd}$. At $r_{D,snd}$ the sound mode of the effective fluid starts to propagate outside the inert background $\Sigma_D$’s light-cone. We suggest that in the CFT, this effect is due to the extreme forcing of the fluid on the boundary by the ‘dynamical’ metric, and moreover propose that one can obtain sensible dynamics by projecting out the sound mode. This involves looking at the fluid at a scaling limit and this can be formalized as taking the incompressible non-relativistic limit of the fluid on the hypersurface in a manner entirely analogous to the scaling limit described in for generic relativistic fluids in [@Bhattacharyya:2008kq; @Fouxon:2008tb].[^13] Having understood the Dirichlet problem for generic $\Sigma_D$ away from the horizon ${\mathcal H}^+$, we then proceed to push this surface deeper into the spacetime and ask what happens as we approach the horizon. In this regime $\Sigma_D$ dynamics continues to be described by incompressible Navier-Stokes equations in the limit, though with some slight differences from the BMW limit mentioned above. Zooming in onto the region between $\Sigma_D$ and the horizon, we provide an embedding of the construction of [@Bredberg:2011jq; @Compere:2011dx] into the fluid/gravity correspondence [@Bhattacharyya:2008jc]. Further, we demonstrate that in this limit both the bulk metric in the region between $\Sigma_D$ and the boundary, and the boundary metric degenerate from metrics on a Lorentzian manifold to Newton-Cartan like structures. This raises interesting questions about the natural emergence of the Galilean structures in the AdS/CFT correspondence which we postpone for future work. The plan of this paper is as follows: In we first address the Dirichlet problem for a scalar field propagating in using this linear problem to build intuition. 
In we pose the bulk Dirichlet problem for gravity in AdS spacetime and solve it in the long wavelength approximation, borrowing heavily from the results of the fluid/gravity correspondence. The remainder of the paper is then devoted to understanding the physics of our construction in various regimes: demonstrates how the Dirichlet surface dynamics, as governed by the conservation equation, arises from the boundary physics. Aided by this we argue that the Dirichlet dynamics is probably pathological past a critical radius and propose a non-relativistic scaling of the resulting fluid a la BMW in to cure this possible pathology. Finally, in we study the near-horizon Dirichlet problem and make contact with the recent work on the flat space Dirichlet problem (and its connection with Navier-Stokes equations). We end with a discussion in . Various appendices contain useful technical results. In particular, to aid the reader we provide a comprehensive glossary of our conventions and key formulae in . This is followed by a complete ‘Dirichlet dictionary’ relating hypersurface variables to boundary variables in for ready reference. [*Note added:*]{} While this work was being completed we received [@Kuperstein:2011fn], which has partial overlap with the results presented in . These authors also attempt to solve for bulk geometries with prescribed boundary conditions on $\Sigma_D$ in the long wavelength regime and interpret their results in terms of an RG flow of fluid dynamics. [*Note added in v2:*]{} In the first version of the paper the non-relativistic metrics quoted in and in were incorrect; the metrics as presented do not solve the bulk Einstein’s equations to the desired order. These are now corrected in the current version. However, the full set of terms that we need to include in order to see the Navier-Stokes equation on the boundary is quite large. Hence in the main text we only report the results for the case where the non-relativistic fluid moves on a Ricci flat spatial manifold in and present the general results in a new appendix . We note that the results of also correct the expressions originally derived in [@Bhattacharyya:2008kq]. Dirichlet problem for probe fields {#s:dscalar} ================================== To set the stage for the discussion let us consider setting up the bulk Dirichlet problem for linear PDEs in an asymptotically AdS spacetime. As a canonical example we will consider the dynamics of a probe scalar field $\Phi(r,x^\mu)$ of mass $m$. Generalizations to other linear wave equations such as the free Maxwell equation are straightforward. We will let this scalar field propagate on a background asymptotically AdS geometry with spatio-temporal translational symmetries, so that the background metric can be brought to the form $$ds^2 =r^2\, g_{\mu\nu}(r) \, dx^\mu \, dx^\nu + \frac{g_{rr}(r)}{r^2} \, dr^2 \label{bggen}$$ The boundary of the spacetime is at $r \to \infty$ and we will assume that the boundary metric is the Minkowski metric on ${\mathbb R}^{d-1,1}$ for simplicity, so that asymptotically $g_{\mu\nu} \to \eta_{\mu\nu}$ and $g_{rr} \to 1$ (as $r\to \infty$).
The dynamical equation of motion for the scalar is the free Klein-Gordon equation which can be written as an ODE in the radial direction for the Fourier modes $\Phi_{k}(r)$ of $\Phi(r,x^\mu)$ $$\Phi(r,x^\mu)=\int \frac{d^dk}{(2\pi)^d} \,e^{i\, k\cdot x} \, \Phi_k(r)\ ,$$ and takes the form $$\frac{1}{\sqrt{-g\, g_{rr}}}\, \partial_r \, \left(\sqrt{-g\, g^{rr}} \, \partial_r \Phi_{k}(r) \right) - \left(g_{\mu\nu} \,k^\mu\,k^\nu +m^2 \right) \Phi_{k} = 0$$ As a second order equation we need to specify two boundary conditions. We are going to restrict attention to the finite part of the geometry as illustrated in and impose Dirichlet boundary conditions for the field at some hypersurface $\Sigma_D$ at $r = r_D$. The second boundary condition in general can take the form of a regularity boundary condition in the interior of the spacetime. If we were working in a spacetime with a horizon this would demand that the mode functions of interest are purely ingoing at the future horizon. The question we wish to pose is the following: usually in an asymptotically spacetime we know that the solution to the scalar wave equation above has two linearly independent modes with power-law fall-off characterized by the source $J_\phi$ and vev $\phi$ of the dual boundary operator ${\cal O}_\Phi$ (which we recall is a conformal primary). We wish to ask what is the characterization of the boundary data as a functional of the Dirichlet hypersurface data. In this simple linear problem it is easy to see that there is a one-one map between the two sets of data. Essentially we are asking how to tune $J_\phi$ and $\phi$ so that the value of the scalar field on the Dirichlet hypersurface at $r=r_D$ takes on its given value. While it is possible to derive a formal answer to the above question, it is useful to first visit the simple setting of pure spacetime where we have the luxury of being able to solve the scalar wave-equation explicitly to see an explicit answer to the question. Probe scalar in {#s:adss} ---------------- We specialize our consideration to the pure geometry where $g_{\mu\nu} = \eta_{\mu\nu} $ and $g_{rr} = 1$ and one has enhanced Lorentz symmetry on the constant $r$ slices. The wave equation simplifies to $$\frac{1}{r^{d-1} }\, \frac{d}{dr} \, \left(r^{d+1} \, \frac{d}{dr} \Phi_{k}(r) \right) - \left(k^2 +m^2 \right) \Phi_{ k} = 0 \label{mkgads}$$ This is well known to have solutions in terms of Bessel functions, but we will proceed to examine the behavior in a gradient expansion to set the stage for the real problem of interest later. ### The $k=0$ case The translationally invariant solution of the massive Klein-Gordon equation in the bulk is[^14] $$\Phi(r) = \frac{\phi}{(2\nu)\,r^\Delta} + r^{\Delta-d} J_\phi$$ where the dual primary has a scaling dimension $\Delta$ obeying $\Delta(\Delta-d)=m^2$ along with a source $J_\phi$ and a normalized vev[^15] $\phi$ defined via $$J_\phi \equiv \left[r^{d-\Delta}\, \Phi(r)\right]_{r\to\infty}$$ $$\phi \equiv \left[-r^{2\nu}\times r\partial_r \left( r^{d-\Delta}\Phi \right)\right]_{r\to\infty} = \left[-r^\Delta\left( r\partial_r \Phi -(\Delta-d)\Phi\right)\right]_{r\to\infty}$$ with $$\nu \equiv \Delta - \frac{d}{2} = \sqrt{\frac{d^2}{4} + m^2}$$ for convenience. For simplicity, we will assume $\Delta > \frac{d}{2} $ and choose $m$ such that $\nu \notin {\mathbb Z}$ to avoid complications with logarithms. 
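As a quick sanity check, the $k=0$ solution above can be verified symbolically. The following minimal sketch (assuming the Python library sympy is available; the symbol names are ours) confirms that it solves the $k=0$ radial equation for arbitrary $\phi$ and $J_\phi$ once $\Delta(\Delta-d)=m^2$.

```python
# Minimal symbolic check (sympy) that Phi = phi/((2 nu) r^Delta) + J_phi r^(Delta - d)
# solves the k = 0 radial equation once m^2 = Delta (Delta - d); phi, J_phi are constants.
import sympy as sp

r, d, Delta, phi, J = sp.symbols('r d Delta phi J_phi', positive=True)
nu = Delta - d/2
m2 = Delta*(Delta - d)                        # the standard mass/dimension relation
Phi = phi/((2*nu)*r**Delta) + J*r**(Delta - d)

# k = 0 wave operator acting on Phi
lhs = sp.diff(r**(d + 1)*sp.diff(Phi, r), r)/r**(d - 1) - m2*Phi
print(sp.simplify(sp.expand(lhs)))            # expected output: 0
```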
Extension to $\Delta \in [\frac{d}{2}-1, \frac{d}{2}]$ with the lower end of the interval saturating the unitary bound is possible with the added complication of taking proper account of the necessary boundary terms. We will begin by rewriting this solution in terms of the quantities on the Dirichlet surface which we denote with a hat to distinguish them from the boundary data: $$\hat{J}_\phi \equiv \left[r^{d-\Delta}\,\Phi(r)\right]_{r\to r_D}=J_\phi+ \frac{\phi}{(2\nu)\,r_D^{2\nu}} \ , \label{j0}$$ $$\hat{\phi} \equiv \left[-r^{2\nu}\times r\partial_r \left( r^{d-\Delta}\Phi \right)\right]_{r\to r_D} =\phi \ . \label{ph0}$$ Since the transformation between the data on the boundary $\{J_\phi,\phi\}$ and that on the hypersurface $\{\hat{J}_\phi, \hat{\phi}\}$ is linear it is a simple matter to write the bulk solution in terms of the hypersurface variables. One simply has $$\Phi(r) =\frac{\hat{\phi}}{(2\nu)\, r^\Delta} + r^{\Delta-d} \left(\hat{J}_\phi- \frac{\hat{\phi}}{(2\nu)r_D^{2\nu}}\right) . \label{pDsol}$$ This is the answer we seek and all that remains is to interpret this result. It is now easy to notice that the imposition of the Dirichlet condition on a hypersurface inside the bulk is equivalent to making the boundary source a specific function of the vev. From we can read off the specific deformation of the boundary CFT action to be given by $$\delta \mathcal{L}_{CFT} = -\frac{1}{16\pi \,G_{d+1}\, } \, \left(\hat{J}_\phi\,\hat{\phi}- \frac{1}{2(2\nu)\, r_D^{2\nu}}\, \hat{\phi}^2 \right) \propto \hat{J}_\phi \, {\cal O}_\Phi - \frac{(16\pi \,G_{d+1})}{2(2\nu)\, r_D^{2\nu}}\, {\cal O}_\Phi^2$$ which happens to be an irrelevant double-trace deformation [@Witten:2001ua; @Berkooz:2002ug] of the boundary CFT. Hence, at least in this simple setup the dual of the Dirichlet problem is to make the source of a primary ${\cal O}_\Phi$ a particular joint function of the vev of the primary in the given state and another fixed (state-independent) auxiliary source. ### The $k\neq0$ case : Derivative expansion up to $k^2$ Having seen the result for the translationally invariant case $k=0$, we now proceed with $k \neq 0$. It is well known that general solution to the wave equation is given in terms of Bessel functions which we parameterize as[^16] $$\Phi_k(r) = \frac{\phi_k}{r^\Delta}\times\frac{\Gamma(\nu)}{2(k/2r)^{\nu}}I_{\nu}(k/r) + r^{\Delta-d} (J_\phi)_k \times \frac{2(k/2r)^{\nu}}{\Gamma(\nu)}K_{\nu}(k/r)$$ Note that our previous result for $k=0$ follows from just keeping the leading $x^0$ terms in the expansions $$\begin{split} \frac{2x^{\nu}}{\Gamma(\nu)}\;K_{\nu}(2x) &= \sum_{j=0}^{\infty} \frac{\Gamma(\nu-j)}{\Gamma(\nu)} \frac{(-x^2)^j}{j!} +x^{2\nu}\sum_{j=0}^{\infty} \frac{\Gamma(-\nu-j)}{\Gamma(\nu)} \frac{(-x^2)^j}{j!}\\ \frac{\Gamma(\nu)}{2x^{\nu}}I_{\nu}(2x) &=\sum_{j=0}^{\infty} \frac{\Gamma(\nu)}{(2\nu+2j)\Gamma(\nu+j)} \frac{x^{2j}}{j!} \end{split} \label{eq:expn}$$ For a general $k$, we can repeat the analysis of the previous section. While this can be done generally at all orders in $k$ with some work, for simplicity we will resort to derivative expansion keeping terms upto order $k^2$. Not only will this allow us to see some of the structures emerging explicitly, but it also sets the stage for our gravitational computation in later sections. 
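As a sanity check on the expansions quoted above, one can compare the truncated series against a standard Bessel-function implementation. The sketch below is purely illustrative: it assumes the Python library mpmath is available, and the values $\nu=0.7$, $x=0.1$ and the truncation order are arbitrary choices.

```python
# Numerical spot-check of the small-x expansions of 2 x^nu K_nu(2x)/Gamma(nu)
# and Gamma(nu) I_nu(2x)/(2 x^nu) against mpmath's Bessel functions.
from mpmath import mp, besselk, besseli, gamma, mpf, fsum

mp.dps = 30
nu, x, jmax = mpf('0.7'), mpf('0.1'), 12

series_K = (fsum(gamma(nu - j)/gamma(nu)*(-x**2)**j/gamma(j + 1) for j in range(jmax))
            + x**(2*nu)*fsum(gamma(-nu - j)/gamma(nu)*(-x**2)**j/gamma(j + 1) for j in range(jmax)))
exact_K = 2*x**nu/gamma(nu)*besselk(nu, 2*x)

series_I = fsum(gamma(nu)/((2*nu + 2*j)*gamma(nu + j))*x**(2*j)/gamma(j + 1) for j in range(jmax))
exact_I = gamma(nu)/(2*x**nu)*besseli(nu, 2*x)

print(series_K - exact_K, series_I - exact_I)   # both ~ 0 to the truncation accuracy
```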
Using the expansion above, we have $$\begin{split} \Phi_k(r) &= \frac{\phi_k}{(2\nu)\, r^\Delta}\left(1 +\frac{2\nu}{(2\nu+2)^2} \frac{k^2}{2r^2}+\ldots \right) + r^{\Delta-d} (J_\phi)_k \left( 1-\frac{1}{(2\nu-2)} \frac{k^2}{2r^2}+\ldots\right.\\ &\qquad\qquad \quad \left. + \frac{\Gamma(-\nu)}{\Gamma(\nu)}\left(\frac{k}{2r}\right)^{2\nu}\left\{1 +\frac{1}{(2\nu+2)} \frac{k^2}{2r^2}+\ldots\right\}\right) \end{split}$$ with the ellipses representing order $k^4$ terms and higher. The source at the intermediate surface is as before easily determined $$\begin{split} (\hat{J}_\phi)_k &\equiv \left[r^{d-\Delta}\Phi_k(r)\right]_{r\to r_D}\\ &= (J_\phi)_k \left(1-\frac{1}{(2\nu-2)} \frac{k^2}{2\,r_D^2}+\ldots + \frac{\Gamma(-\nu)}{\Gamma(\nu)}\left(\frac{k}{2r_D}\right)^{2\nu}\left\{1 +\frac{1}{(2\nu+2)} \frac{k^2}{2\,r_D^2}+\ldots\right\}\right)\\ &\qquad \qquad +\frac{\phi_k}{(2\nu)\, r_D^{2\nu}}\left(1 +\frac{2\nu}{(2\nu+2)^2} \frac{k^2}{2\,r_D^2}+\ldots \right) \end{split} \label{j2}$$ while the normalized vev of the primary to this order in derivative expansion can be determined after subtracting an appropriate counter-term as[^17] $$\begin{split} \hat{\phi}_k &=\left[-r^{2\nu}\times r\partial_r \left( r^{d-\Delta}\Phi_k \right)+\frac{r^\Delta}{2\nu-2}\frac{k^2}{r^2}\Phi_k +\ldots \right]_{r\to r_D}\\ &=\phi_k\left(1 +\frac{2\nu-2}{(2\nu)^2} \frac{k^2}{2\,r_D^2}+\ldots \right) +\frac{4(J_\phi)_k}{(2\nu-2)} \times\frac{\Gamma(-\nu)}{\Gamma(\nu+2)}\left(\frac{k}{2}\right)^{2\nu}+\ldots \\ \end{split} \label{ph2}$$ To solve the Dirichlet problem,we need to solve for $\phi_k,(J_\phi)_k $ in terms of the hatted variables from and which can be inverted to get $$\begin{split} (J_\phi)_k &= \frac{(\hat{J}_\phi)_k}{\mathfrak{D}}\left(1 +\frac{2\nu-2}{(2\nu)^2} \frac{k^2}{2r_D^2}+\ldots \right) -\frac{\hat{\phi}_k}{\mathfrak{D}\, (2\nu) \, r_D^{2\nu}}\left(1 +\frac{2\nu}{(2\nu+2)^2} \frac{k^2}{2\,r_D^2}+\ldots \right) \\ \phi_k &=\frac{\hat{\phi}_k}{\mathfrak{D}} \left( 1-\frac{1}{(2\nu-2)} \frac{k^2}{2\,r_D^2}+\ldots + \frac{\Gamma(-\nu)}{\Gamma(\nu)}\left(\frac{k}{2\,r_D}\right)^{2\nu}\left\{1 +\frac{1}{(2\nu+2)} \frac{k^2}{2\,r_D^2}+\ldots\right\}\right)\\ &\qquad \qquad -\frac{4(\hat{J}_\phi)_k}{\mathfrak{D}(2\nu-2)} \times\frac{\Gamma(-\nu)}{\Gamma(\nu+2)}\left(\frac{k}{2}\right)^{2\nu}+\ldots \\ \end{split}$$ where the momentum dependent coefficient ${\mathfrak D}$ is $$\begin{split} \mathfrak{D}&\equiv 1-\frac{4(2\nu-1)}{(2\nu)^2(2\nu-2)} \frac{k^2}{2\,r_D^2}+ \frac{\Gamma(-\nu)}{\Gamma(\nu)}\left(\frac{k}{2\,r_D}\right)^{2\nu}\left\{1-\frac{1}{\nu^2(\nu-1)} \right.\\ &\left.\qquad \qquad \qquad+\; \frac{(2 \nu +1)^4-4 (2 \nu +2)^2+7}{(2 \nu)^2 (2 \nu +2)^3}\;\frac{k^2}{r_D^2}+\ldots\right\}\\ \end{split}$$ As we saw in the $k=0$ case, we have yet again determined a state dependent source on the boundary for the primary operator ${\cal O}_\Phi$. The key feature to note from the above analysis, is that the expression for the boundary source $J_\phi$ is non-analytic in $k$ and hence non-local when Fourier-transformed back to position space. Hence, we see that in general we have a map between the non-local double trace deformation on the boundary and the Dirichlet data on $\Sigma_D$ (similar non-local double-trace deformations were explored earlier in [@Marolf:2007in]). A general proposal for linear systems ------------------------------------- From the analysis of the free scalar wave equation in the picture is rather clear. 
In the CFT, in general one can make the source a non-local functional of the vev of the primary operator. Usually such a function can be fed into the holographic dictionary via a ‘state-dependent’ boundary condition, which whilst somewhat unnatural from a field theory is a perfectly sensible boundary condition to consider. For some special classes of functionals, this state-dependent boundary condition has a very simple bulk interpretation as a Dirichlet boundary condition imposed on an intermediate surface, implying that we can trade the non-locality of the boundary sources into local behavior at some lower radius. We just have one further question to answer before we declare victory: how do we in practice determine this special set of sources in various holographic setups? For the general backgrounds we can formally write the solution to the wave equations in terms of integrals over the Dirichlet data convolved with suitable ‘Dirichlet bulk to boundary propagators’, ${\cal K}_\text{source}$ and ${\cal K}_\text{vev}$. The former propagates the information contained in $\hat{J}_\phi$ to the boundary source, while the latter allows determination of the contribution from the vev $\hat{\phi}$ on $\Sigma_D$, i.e., formally $$J_\phi(x) = \int d^dx' \, \left\{{\cal K}_\text{source}(r_D, x; x') \, \hat{J}_\phi(x') + {\cal K}_\text{vev}(r_D, x; x') \, \hat{\phi}(x')\right\}$$ Note that implicit in our definition of these Dirichlet bulk to boundary propagators is the information of the boundary condition in the interior of the geometry and the necessary counter-terms. While it is possible to work this out in more specific geometries, such as a Schwarzschild-AdS$_{d+1}$ spacetime to see the interplay of these IR boundary conditions, we will leave this toy problem for now, and proceed to analyze the more interesting case of gravitational dynamics in wherein we do have to face-up with non-linearities of the equations of motion.[^18] Before proceeding to the gravitational setting, however, let us make a few pertinent observations relevant to the motivation mentioned at the beginning of . The result we have obtained is quite intuitive; demanding that our fields take on the desired value at $\Sigma_D$ entails a linear relation between the two pieces of data at infinity, thereby leading to the observation about the source depending on the vev. We also see that despite some superficial resemblance to the Wilsonian RG flow where too one encounters multi-trace operators there is a crucial distinction in the physics. In the formulation of [@Heemskerk:2010hk; @Faulkner:2010jy] one finds that for fixed asymptotic data, upon integrating out the region of the geometry between the boundary and a cut-off surface (which we can for simplicity take to be $\Sigma_D$ for the sake of discussion) one obtains an effective action for a cut-off field theory living on $\Sigma_D$ with scale dependent sources. These are irrelevant double traces (which are the only terms generated in a Gaussian theory which the linear models under discussion are), and one obtains the $\beta$-functions for the double trace couplings along the flow. In the present context however what we have is a situation wherein we are forced to engineer a specific double trace deformation on the boundary so as to ensure that we satisfy the Dirichlet boundary conditions on $\Sigma_D$. This is conceptually different from usual notions of RG, where one does not conventionally consider state dependent boundary conditions in the UV. 
However, there is a sense in which renormalisation of sources takes place which will become quite clear when we look at the gravitational problem. The Dirichlet problem for gravity {#s:dgrav} ================================= Having understood the boundary meaning of the Dirichlet problem for probe fields in a fixed background, we now turn to the situation where we consider dynamical gravity in the bulk. While we could consider other matter degrees of freedom in the bulk whose backreaction we now have to take into account, we choose for simplicity to restrict attention to the dynamics in the pure gravity sector which, as is well known, is a consistent truncation of the supergravity equations of motion. From a field theory perspective, we are going to work in the planar limit and focus on the dynamics of a single operator, the stress tensor and its source, the CFT metric $g_{\mu\nu}$. Setting up the general Dirichlet problem {#s:setdg} ---------------------------------------- First of all one should ask what does it mean to consider the Dirichlet problem at a fixed hypersurface in the bulk when gravity is dynamical. We will take the view that the location of the hypersurface $\Sigma_D$ is specified by a scalar function on the bulk manifold ${\cal M}_{d+1}$. We want to determine the metric on this spacetime by solving the dynamical equations of motion subject to the boundary condition on the prescribed hypersurface. To wit, we demand that ${\cal M}_{d+1}$ be endowed with a Lorentzian metric ${\cal G}_{MN}$ which solves Einstein’s equations with a negative cosmological constant. The equations of motion are (in units where $R_{AdS} =1$)[^19] $$E_{MN} = {\cal R}_{MN} + d \, {\cal G}_{MN} = 0 \label{eins}$$ We will adapt coordinates $X^A = \{r ,x^\mu\}$ to the hypersurface $\Sigma_D$ and take this distinguished surface to be at $r = r_D$ with intrinsic coordinates $x^\mu$.[^20] We impose the boundary condition $${\cal G}_{MN} \, dX^M \, dX^N \big|_{r\to r_D} = \;r_D^2\, \hat{g}_{\mu\nu}(x) \, dx^\mu\, dx^\nu \label{indgh}$$ where ${\hat g}_{\mu\nu}(x)$ is the Dirichlet data we wish to specify. To complete the specification of the problem we should impose some boundary condition in the interior of the spacetime, which we will canonically take to be a regularity condition. The scaling by $r_D^2$ above whilst unconventional from the bulk perspective, is more natural in the AdS/CFT context for it makes it easy to compare with the case where we push the hypersurface to the boundary. The general problem as stated above is quite hard. For one it is not clear that for generic choices of Dirichlet data one obtains a solution compatible with regularity in the interior of the spacetime. While a local solution in an open neighbourhood of $\Sigma_D$ can presumably be obtained by adapting a Fefferman-Graham like expansion, one is unlikely to be able to gain much insight using this procedure. Moreover, given that the map between $\Sigma_D$ and the boundary is expected to be non-local (borrowing intuition from the linear problem) one wonders whether there are causality issues as well. In particular, for generic state dependent boundary conditions causality is murky – does the source adjust itself acausally to obtain the appropriate response? 
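As a point of reference, the vacuum solution of the field equation above can at least be checked directly. The following sketch (a rough check assuming the Python library sympy; the choice $d=4$ and the Poincaré coordinate names are ours) verifies that pure AdS$_{d+1}$ satisfies $E_{MN}={\cal R}_{MN}+d\,{\cal G}_{MN}=0$.

```python
# Sketch: verify that pure AdS_{d+1} (here d = 4) in Poincare coordinates,
# ds^2 = r^2 eta_{mu nu} dx^mu dx^nu + dr^2/r^2, solves E_MN = R_MN + d G_MN = 0.
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
r = sp.symbols('r', positive=True)
coords = [t, x, y, z, r]
d = 4
g = sp.diag(-r**2, r**2, r**2, r**2, 1/r**2)
ginv = g.inv()
n = len(coords)

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sum(ginv[a, s]*(sp.diff(g[s, b], coords[c]) + sp.diff(g[s, c], coords[b])
                           - sp.diff(g[b, c], coords[s])) for s in range(n))/2
           for c in range(n)] for b in range(n)] for a in range(n)]

def ricci(b, c):
    """Ricci tensor R_{bc} assembled from the Christoffel symbols."""
    expr = sum(sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
               for a in range(n))
    expr += sum(Gamma[a][a][s]*Gamma[s][b][c] - Gamma[a][c][s]*Gamma[s][b][a]
                for a in range(n) for s in range(n))
    return sp.simplify(expr)

E = sp.Matrix(n, n, lambda i, j: sp.simplify(ricci(i, j) + d*g[i, j]))
print(E)   # expected: the 5x5 zero matrix
```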
A related concern is that on $\Sigma_D$ signals can propagate outside the light-cone of the metric $\hat{g}_{\mu\nu}$ (which they could for instance do through the bulk); does this imply a corresponding pathology for the boundary physics as well?[^21] In short, the well-posedness of the Dirichlet problem is a priori unclear. Nevertheless, we will ignore all these subtleties for now and forge ahead. After solving Einstein’s equations with the boundary conditions we have set up, we can extract the Brown-York stress tensor on $\Sigma_D$, denoted $\hat{T}_{\mu\nu}$, using the standard set of boundary counter-terms [@Henningson:1998gx; @Balasubramanian:1999re] $$\hat{T}_{\mu\nu}=- \frac{r_D^d}{8\pi G_{d+1}} \left(\hat{K}_{\mu\nu}-\hat{K} \,\hat{g}_{\mu\nu}+(d-1)\, \hat{g}_{\mu\nu} +\ldots\right) . \label{hypst}$$ $r_D^2\,\hat{K}_{\mu\nu}$ is the extrinsic curvature of the hypersurface $\Sigma_D$ and $\hat{K}$ is its trace, defined as usual in terms of the normal to the surface. Note that we have written the answer in terms of the intrinsic metric on the hypersurface, which is related to the induced metric from the bulk up to a rescaling by $r_D^2$. The homogeneous scaling of the stress tensor under allows us to fix an overall $r_D$ dependent pre-factor. The hypersurface stress tensor is of course covariantly conserved (the Gauss-Codacci constraint), i.e., $$\hat{\nabla}_\mu{\hat{T}^\mu}{}_{\nu}=0\,. \label{hcons}$$ Given $\{\hat{g}_{\mu\nu}, \hat{T}_{\mu\nu} \}$ we can ask what the corresponding boundary conditions are that lead to the same geometry. Based on our scalar problem we can conclude that the boundary source $g_{\mu\nu}$ and stress tensor $T_{\mu\nu}$ are in general non-local functionals of the Dirichlet data. We would like to characterize the map between these two sets of data[^22] $$\varphi_D: \{ \hat{g}_{\mu\nu}, \hat{T}_{\mu\nu} \} \to \{g_{\mu\nu}, T_{\mu\nu} \} \ .$$ While this problem is in general difficult, there is one context in which we can not only solve for the boundary data in terms of the hypersurface variables, but we can also investigate the issues raised above in precise terms. This is the long-wavelength hydrodynamical regime along the hypersurface, as in the fluid/gravity correspondence [@Bhattacharyya:2008jc; @Bhattacharyya:2008ji; @Bhattacharyya:2008mz], wherein gravitational duals to arbitrary fluid flows on the boundary were constructed order by order in a gradient expansion. The reason this is possible, as explained in these works, is that the bulk spacetime in this long-wavelength regime is well approximated ‘tubewise’ by the boosted planar Schwarzschild-AdS$_{d+1}$ solution, see . As a result one finds that Einstein’s equations become ultra-local in the $x^\mu$ directions, leading one to effective ODEs that determine the radial profiles. The tubes in question are centered around radially ingoing null geodesics, which can be used to translate information from any bulk hypersurface of interest to the boundary (see [@Bhattacharyya:2008xc; @Bhattacharyya:2008mz] for a discussion of the causal structure). Given this, it is actually easy to solve the problem of finding the map $\varphi_D$, and we now describe the construction in the rest of this section. ![Schematic representation of the gravitational Dirichlet problem in the fluid/gravity regime. The causal structure of the fluid/gravity spacetimes is illustrated emphasizing the tubewise approximation; in each tube the geometry resembles that of a uniformly boosted Schwarzschild-AdS$_{d+1}$ black hole.
Suitable choices of the Dirichlet surface allow us to find the map between the boundary data ${\mathfrak X}$ and the Dirichlet hypersurface data $\hat{\mathfrak X}$ within each tube, rendering the problem tractable. []{data-label="f:tubes"}](Dir-tubes) (0,0) (-4.8,2)[$\Sigma_D$]{} (-6.5,-0.5)[ $\hat{\mathfrak X}=\{\hat{g}_{\mu\nu}, \hat{u}^\mu, \hat{T}\} \quad \stackrel{\varphi_D}{\longmapsto}\quad{\mathfrak X}=\{g_{\mu\nu}, u^\mu, T\}$]{} Dirichlet problem and the Fluid/Gravity correspondence {#s:fg1} ------------------------------------------------------ To set the stage for our discussion, let us recall that the fluid/gravity map constructs, in a gradient expansion, regular solutions to the bulk Einstein’s equations which are dual to arbitrary fluid flows on the boundary of the asymptotically AdS spacetime. For a boundary metric $g_{\mu\nu}(x)$ which is slowly varying, the boundary stress tensor in this context is not an arbitrary symmetric traceless two tensor, but is constrained to take the hydrodynamical form. It is parameterized by $d$ independent parameters: a velocity field $u_\mu(x)$ (unit normalized so that $g_{\mu\nu}\, u^\mu\,u^\nu =-1$) and a scalar function $b(x)$ which parameterizes the temperature. The bulk metric ${\cal G}_{MN}$ is determined in terms of the data ${\mathfrak X} = \{g_{\mu\nu}(x), u_\mu(x), b(x)\}$. We wish to implement the same procedure, but starting with analogous data $\hat{{\mathfrak X}} = \{\hat{g}_{\mu\nu}(x), \hat{u}_\mu(x), b(x)\}$ on the Dirichlet hypersurface $\Sigma_D$. But given the ultra-locality inherent in the long-wavelength regime and the fact that [@Bhattacharyya:2008mz] have solved the problem for arbitrary boundary metrics (corresponding to fluids on arbitrary slowly varying curved backgrounds), we don’t need to solve any equations. The solution space of the bulk Dirichlet problem coincides with the solution space found in [@Bhattacharyya:2008mz], and the problem at hand is readily solved by slicing this solution space appropriately. With this aim, we now review the solutions constructed in [@Bhattacharyya:2008mz]. ### Review of fluid/gravity {#s:fgrev} The general solutions of the bulk equations of motion in the fluid/gravity regime take the form $$ds^2 ={\cal G}_{MN} \,dX^M \,dX^N = - 2 \, {\mathfrak u}_\mu(x) \, dx^\mu \,\left( dr + r\,{\mathfrak V}_\nu(r,x)\,\,dx^\nu\right)+ r^2\,{\mathfrak G}_{\mu \nu}(r,x) \, dx^\mu\, dx^\nu \ , \label{formmetw}$$ where the fields ${\mathfrak V}_\mu$ and ${\mathfrak G}_{\mu\nu}$ are functions of $r$ and $x^\mu$ which admit an expansion in the boundary derivatives and are known to second order in the gradients. For our purposes it will suffice to consider the first order metric where[^23] $$\begin{aligned} \mathfrak{u}_\mu &=& u_\mu\ ,\qquad \mathfrak{V}_\nu = \mathcal{A}_\nu+\frac{r}{2}\, f(br)\, u_\nu \\ \mathfrak{G}_{\mu\nu} &=& P_{\mu\nu} +2b \,F(br)\ \sigma_{\mu\nu} \end{aligned}$$ with the functions $$f(x) \equiv 1 - \frac{1}{x^d} \ , \qquad F(x)\equiv \int_{x}^{\infty}\frac{y^{d-1}-1}{y(y^{d}-1)}dy\,.
\label{fFdef}$$ ${\cal A}_\mu$ is the Weyl covariant connection introduced in [@Loganayagam:2008is] which is expressed in terms of the acceleration and the expansion of the velocity $u_\mu$ $$\mathcal{A}_\mu\equiv u^\lambda\nabla_\lambda u_\mu-\frac{\nabla_\lambda u^\lambda}{d-1}u_\mu = a_\mu - \frac{\theta}{d-1} \, u_\mu ,$$ while $\sigma_{\mu\nu}$ is shear strain rate tensor of $u_\mu$: $$\sigma_{\mu\nu}\equiv{P_\mu}^\alpha {P^\beta}_\nu\, \left[\nabla_{(\alpha} u_{\beta)}-g_{\alpha\beta}\frac{\nabla_\lambda u^\lambda}{d-1}\right],\quad\textrm{with}\quad P_{\mu\nu}\equiv g_{\mu\nu}+u_\mu u_\nu .$$ So the bulk metric to first order in derivatives explicitly takes the form: $$ds^2=-2 \,u_\mu \,dx^\mu \left( dr + r\ \mathcal{A}_\nu dx^\nu \right) + r^2 \left[ g_{\mu\nu} +\frac{u_\mu u_\nu}{(br)^d}+2b \,F(br)\ \sigma_{\mu\nu}\right] dx^\mu dx^\nu + \ldots \label{metricsimp:eq}$$ and we have refrained from explicitly denoting the $x^\mu$ dependence of ${\mathfrak X}$ and the ellipses denote second order and higher gradient terms. The corresponding co-metric (the metric on the cotangent bundle/the inverse metric) is given by $$\begin{split} \mathcal{G}^{AB}&\partial_A\otimes\partial_B \\ &= \left[r^2 \, f(br)-\frac{2\,r\,\theta}{d-1}\right]\partial_r\otimes\partial_r+2\left[u^\mu -r^{-1} a^{\mu}\right]\partial_\mu\otimes_s\partial_r\\ &\qquad +r^{-2}\left[P^{\mu\nu} -2b \,F(br)\ \sigma^{\mu\nu}\right]\partial_\mu\otimes\partial_\nu\\ \end{split}$$ The stress tensor on the boundary is that of a viscous relativistic fluid: $$T_{\mu\nu} = p\, g_{\mu\nu} + (\varepsilon + p) \, u_\mu\,u_\nu - 2 \, \eta\, \sigma_{\mu\nu} + \ldots \label{Tbdy}$$ with thermodynamic state variables $$p = \frac{1}{d-1}\, \varepsilon = \frac{1}{16\pi\, G_{d+1} } \, \frac{1}{b^d} \, \label{epbdy}$$ and shear viscosity $$\eta=\frac{1}{16\pi \,G_{d+1}} \;\frac{1}{b^{d-1}}\,. \label{etab}$$ The expression in should be thought of as a way to generate solutions of Einstein equations when provided with hydrodynamic configurations that solve the ideal fluid equations derived form the $T_{\mu\nu}$ above, i.e., when provided with $u^\mu,b$ that satisfy $\nabla_\mu T^{\mu\nu} = 0$ to first order in gradients. Our aim is to reformulate this set of solutions as solutions to the bulk Dirichlet problem. ### Dirichlet data from fluid/gravity solutions {#s:dfgsol} Given the solution to the bulk equations of motion, we will begin by simply slicing it at a given radial position $r=r_D$ and extract the intrinsic metric on the Dirichlet surface[^24]. The advantage of working with the Weyl covariant form of the bulk metric is that one can simultaneously deal with Dirichlet surfaces specified by slowly varying functions $r=\rho(x)$. These can always be brought to the form $r=r_D$ by working in a suitable boundary Weyl-frame (local rescaling by a conformal factor $\log(1-\rho(x)/r_D)$ will do the trick). The Weyl connection $\mathcal{A}$ eats $\rho(x)$ so that $$\mathcal{A}_{\mu}=\rho^{-1} \left(\nabla_\mu+ a_\mu - \frac{\theta}{d-1}\, u_\mu \right) \rho$$ with this understanding all our formulae hold for arbitrary $\rho(x)$. 
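As a minimal consistency check of the zero-gradient (ideal) part of the metric and co-metric quoted above, one can verify by brute force that they are matrix inverses of one another. The sketch below assumes the Python library sympy and makes illustrative choices: $d=4$, a flat boundary metric, and constant $u_\mu$ and $b$, so that all gradient terms drop out.

```python
# Check that, at zeroth order in gradients, the co-metric quoted above
# (G^rr = r^2 f(br), G^{r mu} = u^mu, G^{mu nu} = P^{mu nu}/r^2) inverts the metric
# ds^2 = -2 u_mu dx^mu dr + r^2 (g_{mu nu} + u_mu u_nu/(br)^d) dx^mu dx^nu.
import sympy as sp

r, b = sp.symbols('r b', positive=True)
d = 4
eta = sp.diag(-1, 1, 1, 1)                 # flat boundary metric
u_lo = sp.Matrix([-1, 0, 0, 0])            # u_mu with eta^{mu nu} u_mu u_nu = -1
u_up = eta.inv()*u_lo                      # u^mu
P_up = eta.inv() + u_up*u_up.T             # projector P^{mu nu}
f = 1 - 1/(b*r)**d

G = sp.zeros(5, 5)                         # coordinates ordered (x^0,...,x^3, r)
G[:4, :4] = r**2*(eta + (b*r)**(-d)*(u_lo*u_lo.T))
G[:4, 4] = -u_lo
G[4, :4] = -u_lo.T                         # G_rr = 0 in this gauge

Ginv = sp.zeros(5, 5)
Ginv[:4, :4] = P_up/r**2
Ginv[:4, 4] = u_up
Ginv[4, :4] = u_up.T
Ginv[4, 4] = r**2*f

print(sp.simplify(G*Ginv - sp.eye(5)))     # expected: the zero matrix
```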
At fixed $r=r_D$ the hypersurface metric reads (recalling ) to first order $$\hat{g}_{\mu\nu}= g_{\mu\nu} + \frac{u_\mu u_\nu}{(b\,r_D)^d}+2b \,F(br_D)\ \sigma_{\mu\nu} -\frac{2}{r_D}\, u_{(\mu} \mathcal{A}_{\nu)} + \ldots$$ While this relation was obtained by slicing a known solution with prescribed asymptotic boundary conditions, it has the nice feature of having solved the equations of motion (to the desired order) and moreover satisfies the regularity condition in the interior. We will now turn the logic around and imagine $\hat{g}_{\mu\nu}$ to be specified at $\Sigma_D$ and view the equation above as specifying the boundary intrinsic metric in terms of the hypersurface metric. Hence, the above equation giving $g_{\mu\nu}$ in terms of $\hat{g}_{\mu\nu}$ and $u_\mu$ is the Dirichlet constitutive relation we seek – when such a relation is imposed on the boundary data, the intrinsic hypersurface metric is automatically fixed to $\hat{g}_{\mu\nu}$. To complete the specification of the map $\varphi_D$ we need to further eliminate the boundary velocity field in favour of a vector field on the hypersurface. To do this we need to examine the hypersurface stress tensor and parameterize it appropriately. The stress tensor at $r_D$ is easily obtained from to be $$\hat{T}_{\mu\nu} = \hat{p}\, \hat{g}_{\mu\nu} + \frac{1}{\hat{\alpha}^2}\, (\hat{\varepsilon}+\hat{p})\, u_\mu u_\nu - 2\, \hat{\alpha}\, \eta \, \sigma_{\mu\nu} +\frac{2}{r_D} \, (\hat{\varepsilon}+\hat{p})\, u_{(\mu} \mathcal{A}_{\nu)} +\ldots \label{eq:rDquantities}$$ with[^25] $$\hat{\varepsilon}\equiv\frac{(d-1)}{8\pi\, G_{d+1}}\; \frac{\hat{\alpha}}{\hat{\alpha}+1}\; \frac{1}{b^d} \,,\quad\quad \hat{\varepsilon} + \hat{p} \equiv\frac{d }{16\pi \,G_{d+1}}\, \frac{\hat{\alpha}}{b^d}\,, \label{eqofstate:eq}$$ with $\eta$ as given before in and we have defined a hypersurface scalar $$\hat{\alpha}\equiv \frac{1}{\sqrt{f(b\, r_D)}} =\frac{1}{\sqrt{1-(b\,r_D)^{-d}}}\,. \label{alhatdef}$$ The stress tensor is tantalizingly similar to that of a viscous fluid, but as yet, we cannot interpret $\eta$ as the shear viscosity since we have not expressed $\hat{T}_{\mu\nu}$, in terms of hypersurface variables. This is however easy to remedy. Define $\hat{u}_\mu$ to be the unit normalized (with respect to $\hat{g}_{\mu\nu}$ of course) timelike eigenvector of $\hat{T}_{\mu\nu}$. A simple computation shows that $$\hat{u}_\mu\equiv \frac{u_\mu}{\hat{\alpha}} + \frac{\hat{\alpha}}{r_D} \mathcal{A}_\mu\,.$$ We want to express the hypersurface stress tensor in terms of $\hat{u}_\mu$ and its gradients with respect to the $\hat{g}_{\mu\nu}$ compatible connection $\hat{\nabla}_\mu$. This can be done by the standard computation of the difference of $\nabla_\mu - \hat{\nabla}_\mu$. We outline the calculation in and simply quote the relevant result here: $$\hat{\sigma}_{\mu\nu}=\hat{\alpha} \, \sigma_{\mu\nu}\,,\quad\quad \mathcal{A}_\nu =\hat{\mathcal{A}}_\nu - \frac{\frac{d}{2}(\hat{\alpha}^2-1)}{1+\frac{d}{2}(\hat{\alpha}^2-1)}\; \hat{a}_\nu\,.$$ Armed with this data we can write now the stress tensor at $r=r_D$ as $$\hat{T}_{\mu\nu} = \hat{p}\, \hat{g}_{\mu\nu} + (\hat{\varepsilon}+\hat{p})\, \hat{u}_\mu \,\hat{u}_\nu - 2\, \eta\, \hat{\sigma}_{\mu\nu}+\ldots$$ We see that the result indeed is a stress tensor of a relativistic fluid with energy density $\hat{\varepsilon}$, pressure $\hat{p}$, given in and the same value of shear viscosity as the boundary theory $\eta$, . 
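As a quick check of these expressions, they reduce smoothly to the boundary values once the cut-off is removed. The sketch below (assuming the Python library sympy, with $d=4$ and the symbol $G$ standing in for $G_{d+1}$ as illustrative choices) extracts $\hat{p}$ and confirms that $\hat{\varepsilon}\to\varepsilon$ and $\hat{p}\to p$ as $r_D\to\infty$, i.e. as $\hat{\alpha}\to 1$.

```python
# Recover the boundary energy density and pressure from the hypersurface
# expressions in the limit r_D -> infinity (alpha_hat -> 1); d = 4 for concreteness.
import sympy as sp

b, rD, G = sp.symbols('b r_D G', positive=True)
d = 4
alpha = 1/sp.sqrt(1 - (b*rD)**(-d))                      # alpha_hat

eps_hat = (d - 1)/(8*sp.pi*G)*alpha/(alpha + 1)/b**d
eps_plus_p_hat = d/(16*sp.pi*G)*alpha/b**d
p_hat = sp.simplify(eps_plus_p_hat - eps_hat)

p_bdy = 1/(16*sp.pi*G*b**d)                              # boundary pressure p
eps_bdy = (d - 1)*p_bdy                                  # boundary energy density

print(sp.simplify(sp.limit(eps_hat, rD, sp.oo) - eps_bdy))   # 0
print(sp.simplify(sp.limit(p_hat, rD, sp.oo) - p_bdy))       # 0
```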
The dynamical content of this system is still the conservation equation $$\hat{\nabla}_\mu{\hat{T}^\mu}{}_{\nu}=0\,, \label{hypcons}$$ which follows from realizing that this is the ‘momentum constraint’ equation for the radial slicing of Einstein’s equations (or if one prefers the Gauss-Codacci constraint on the hypersurface). Its equation of state is given in and we will momentarily evaluate the trace of $\hat{T}_{\mu\nu}$ in . Further note that the stress tensor has no contribution associated with the expansion of the fluid, i.e., the bulk viscosity vanishes identically on the hypersurface. Nevertheless, the fluid is not a conformal fluid, for the trace of the stress tensor is non-vanishing: $$\hat{T}^\mu{}_\mu = \hat{T}_{\mu\nu}\, \hat{g}^{\mu\nu}= - \hat{\varepsilon} + (d-1)\hat{p} =\frac{d(d-1)}{16\pi \, G_{d+1}}\ \frac{\hat{\alpha}-1}{\hat{\alpha}+1}\ \frac{\hat{\alpha}}{b^d} \ . \label{eq:traceDirichlet}$$ This is not entirely surprising, for we have introduced an explicit scale $r_D$ into the problem, and as required for consistency the trace vanishes in the limit $r_D \to \infty$ as $\hat{\alpha} \to 1$. More curious is the fact that the rate of change of the trace with $\Sigma_D$’s radial location is simple: $$\hat{T}^\mu{}_\mu =-r_D\frac{d\hat{\varepsilon}}{dr_D}\,. \label{trerd}$$ The evolution of the trace is highly suggestive, for it can be interpreted as saying that the trace is generated by the variation of the local energy density with respect to some scale. This kind of relation probably hints at some kind of non-linear realization of scale invariance. Again this is reminiscent of the holographic RG ideas and it would be interesting to flesh this out in greater detail. Having the notion of the hypersurface velocity field $\hat{u}_\mu$, we can now proceed to write the boundary metric in terms of hypersurface data. Inverting the relation for the velocities to obtain (see ) $$u_\mu = \hat{\alpha}\, \hat{u}_\mu - \frac{\hat{\alpha}^2}{r_D} \,\left( \hat{\mathcal{A}}_\nu-\frac{\frac{d}{2}(\hat{\alpha}^2-1)}{1+\frac{d}{2}(\hat{\alpha}^2-1)}\; \hat{a}_\nu\right) \label{buhu1}$$ one can show that $$g_{\mu\nu} = \hat{g}_{\mu\nu} -(\hat{\alpha}^2-1) \, \hat{u}_\mu \, \hat{u}_\nu + \frac{2\,\hat{\alpha}^2}{r_D} \, \left[ \hat{u}_{(\mu} \hat{\mathcal{A}}_{\nu)}-\frac{\frac{d}{2}(\hat{\alpha}^2-1)}{1+\frac{d}{2}(\hat{\alpha}^2-1)}\; \hat{u}_{(\mu}\hat{a}_{\nu)} \right] - \frac{2b}{\hat{\alpha}}\, F(br_D) \hat{\sigma}_{\mu\nu} \label{bghg1}$$ The equations and together specify the map $\varphi_D$ we seek in the long-wavelength regime. Note that the hypersurface and the boundary data are determined by the same scalar function $b(x)$, which however enters non-trivially through $\hat{\alpha}$ in the determination of the dynamics on $\Sigma_D$. Note also that the light-cones of $g_{\mu\nu}$ are enlarged by a factor determined by $\hat{\alpha}$ relative to those of $\hat{g}_{\mu\nu}$. This is the first signal that there is some interesting interplay between boundary causal structures and fixing boundary conditions on $\Sigma_D$; we will address this issue in some detail in . But first we finish the solution to the Dirichlet problem as stated and write down the bulk metric in the long wavelength regime. Bulk metric in terms of Dirichlet data {#s:dbulk} -------------------------------------- Given the map in and we are in a position to re-write the bulk metric in terms of $\Sigma_D$ data $\hat{{\mathfrak X}}$ alone.
Substituting the transformations, we can write the final result as in with $$\begin{split} \mathfrak{u}_\mu &= u_\mu = \hat{\alpha}\, \hat{u}_\mu - \frac{\hat{\alpha}^2}{r_D}\left[\frac{\hat{a}_\mu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}-\frac{\hat{\theta}}{d-1}\hat{u}_\mu\right] \\ \mathfrak{V}_\mu &= \mathcal{A}_\mu+\frac{r}{2}\, f(br)\, u_\mu \\ &= \hat{\xi}\left[\frac{\hat{a}_\mu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}-\frac{\hat{\theta}}{d-1}\hat{u}_\mu\right] +\frac{r}{2}\, f(br) \,\hat{\alpha}\, \hat{u}_\mu\\ \mathfrak{G}_{\mu\nu} &= P_{\mu\nu} +2\,b \,F(br)\ \sigma_{\mu\nu} = \hat{P}_{\mu\nu} +2\,b \,\hat{F}(br)\ \hat{\sigma}_{\mu\nu}\\ \label{dirbulk1} \end{split}$$ and we have defined $$\hat{\xi} \equiv 1-\frac{1}{2}\,\frac{r}{r_D}\;\frac{f(br)}{f(br_D)} = 1-\frac{\hat{\alpha}^2}{2}\, \frac{r}{r_D}\, f(br) \label{xidef}$$ and $$\hat{F}(br) \equiv \frac{1}{\hat{\alpha}}\left(F(br)-F(br_D)\right) = \frac{1}{\hat{\alpha}} \; \int_{br}^{br_D}\; \frac{y^{d-1}-1}{y(y^{d}-1)}dy\,.$$ The factors of $\hat{u}_\mu$ are distributed between ${\mathfrak G}_{\mu\nu}$ and ${\mathfrak V}_\nu$ by the requirement that the former be transverse to ${\mathfrak u}_\mu$. This bulk metric ${\cal G}_{MN}$ solves the gravitational Dirichlet problem in the long-wavelength regime. We have thus solved the problem posed at the beginning of this section completely in this regime, aided by the ultra-locality of the gradient expansion. We summarize the complete map $\varphi_D$ and the resultant dictionary between CFT variables and the hypersurface variables in the . As a consistency check, note that when we send the surface $\Sigma_D$ to the boundary the results agree with those derived in [@Bhattacharyya:2008mz]; one simply sets $r_D=\infty$ and $\hat{\alpha} =1$. In the remainder of the paper we will explore the relation between the dynamics on $\Sigma_D$ and that on the boundary, with an aim towards getting better intuition for various issues raised in and . Emergence of Dirichlet dynamics in the CFT$_d$ {#s:emergence} ============================================== Having obtained a solution to the gravitational Dirichlet problem in the long-wavelength regime, we now turn to analyze the underlying physics of the system. From the viewpoint of the bulk, the dynamics of the system under study has two equivalent descriptions – one in terms of hypersurface (hatted) variables $\hat{\nabla}_\mu\hat{T}^{\mu\nu}=0$, and another in terms of the un-hatted boundary variables $\nabla_\mu T^{\mu\nu}=0$. While the former is more natural in the bulk (since it is $\hat{g}_{\mu\nu}$ which is fixed in the bulk for our Dirichlet boundary conditions on $\Sigma_D$), only the latter has a straightforward interpretation in the CFT$_d$. Hence, it is interesting to ask how the hypersurface description should be interpreted within the CFT.
From the CFT point of view, the fluid is living in a metric background with a Dirichlet constitutive relation (rewriting in terms of $u_\mu$) $$\begin{split} g_{\mu\nu}&=\hat{g}_{\mu\nu} - \left(1-\frac{1}{\hat{\alpha}^2}+\frac{2\,\theta}{(d-1)\,r_D}\right)u_\mu u_\nu+\frac{2}{r_D}\, u_{(\mu} a_{\nu)} -2\,b F(br_D)\ \sigma_{\mu\nu} +\ldots\\ g^{\mu\nu}&= \hat{g}^{\mu\nu}-\left(1-\hat{\alpha}^2-\frac{2\,\hat{\alpha}^4\,\theta}{(d-1)\,r_D}\right)u^\mu u^\nu \ +2\,b F(br_D)\ \sigma^{\mu\nu} -\frac{2\,\hat{\alpha}^2}{r_D}\, u^{(\mu} a^{\nu)}+ \ldots \\ \end{split} \label{bdyggu}$$ where $\hat{\alpha}^2$ is defined in .[^26] The function $\hat{\alpha}$ increases with increasing local temperature (i.e., it decreases with increasing local $b$). These expressions basically tell us how the ambient spacetime background the CFT$_d$ lives on responds to the motion of the CFT fluid. Let us first understand the physical content of the zero-derivative piece of the constitutive relation. This can mostly be done heuristically, which we will do first, and then confirmed with an explicit calculation. The zero-derivative piece of the boundary metric $g_{\mu\nu}$ is made of an inert piece $\hat{g}_{\mu\nu}$, which does not respond to the fluid motion, along with an additional piece proportional to $u_\mu u_\nu$. The presence of a term proportional to $u_\mu u_\nu$ means that the boundary metric effectively has a correction in its $dt^2$ piece in the local fluid rest-frame (which is responsible for the opening up of the light-cone on the boundary). This kind of correction, as is well known in general relativity, just represents a gravitational potential well. Note that this potential well travels along with the fluid, and hence it is tempting to think that there is a way to describe the collective packet of fluid and the local graviton cloud that it carries along in terms of a ‘dressed’ fluid. To guide our intuition, let us draw analogies with another familiar physical situation where the background responds to the system locally via this kind of relation. One analogous situation is that of a charge carrier moving in a polarizable medium. The polarizability of the medium defines the constitutive relation of the medium, in exact analogy with the constitutive relations for the metric above. We know that in the case of a charge carrier moving in a polarizable medium the polarizability can often be taken into account by [*shifting*]{} the dispersion of the charge carrier and pretending that this ‘dressed’ charge carrier is essentially moving through an inert medium. What we want to argue in this section is that a similar ‘dressing’ phenomenon happens for the CFT$_d$ fluid: we want to rewrite the problem of a fluid with $T^{\mu\nu}$ moving in the ‘polarizable’ $g_{\mu\nu}$ as the problem of a dressed fluid with $T^{\mu\nu}_{\text{dressed}}$ moving in an inert spacetime $g_{\mu\nu,\text{inert}}$. It is clear that the inert background is just the non-dynamical part of the metric, i.e., $g_{\mu\nu,\text{inert}} =\hat{g}_{\mu\nu}$. We would like to claim that $T^{\mu\nu}_{\text{dressed}} = \hat{T}^{\mu\nu} $. This then would be a complete physical picture of how the dynamics on a Dirichlet hypersurface in the bulk emerges directly from the boundary description.
Conservation equations at the boundary and on the Dirichlet surface {#s:conseq} ------------------------------------------------------------------- Let us now implement the dressing picture heuristically described above at the level of the equations to derive the conservation equations on the Dirichlet surface from those on the boundary. Given an arbitrary energy-momentum tensor $$T^{\mu\nu} = \varepsilon \,u^\mu u^\nu + p \,P^{\mu\nu} +\pi^{\mu\nu} \label{piTdef}$$ with $\pi^{\mu\nu}$ capturing the dissipative terms involving at least one gradient of the velocity field or thermodynamic state variables, we have $$\begin{split} \nabla_{\nu}T^{\mu\nu} &= u^\mu\left[ u^\nu \nabla_\nu \varepsilon + (\varepsilon+ p) \nabla_\nu u^\nu\right] + (\varepsilon+p)\, a^\mu+ P^{\mu\nu}\nabla_{\nu}\, p +\nabla_{\nu}\, \pi^{\mu\nu}\\ &= u^\mu\left[ \frac{s}{c_{snd}^2} u^\nu\nabla_\nu T + T\, s \, \theta-u_\alpha\nabla_{\beta}\pi^{\alpha\beta}\right] + T\,s \,a^\mu+ s\, P^{\mu\nu}\nabla_{\nu} T +P^{\mu\nu}\nabla_{\lambda}\pi_\nu^{\lambda}\\ \end{split} \label{conseom}$$ where we have introduced $$c_{snd}^2\equiv \frac{dp}{d\varepsilon}= s\frac{dT}{d\varepsilon} \ ,$$ and used the Euler relation $\varepsilon + p = s\, T$, with $s$ being the entropy density of the fluid. Since the part proportional to $u^\mu$ and the part transverse to $u^\mu$ should separately vanish, we get $$\begin{split}\label{eq:dT} s\left[\partial_\mu + a_\mu-c_{snd}^2 \,u_\mu \,\theta \right] T+P_\mu{}^{\nu}\nabla_{\lambda}\pi_\nu{}^{\lambda}-c_{snd}^2 \,u_\mu \, \pi^{\alpha\beta}\nabla_\alpha u_\beta &=0 \end{split}$$ Similarly, $\hat{\nabla}_{\nu}\hat{T}^{\mu\nu}=0$ is equivalent to the equation $$\begin{split}\label{eq:HatdT} \hat{s}\left[\partial_\mu + \hat{a}_\mu-\hat{c}_{snd}^2 \, \hat{u}_\mu \,\hat{\theta} \,\right] \hat{T}+\hat{P}_\mu{}^{\nu}\hat{\nabla}_{\lambda}\hat{\pi}_\nu{}^{\lambda}-\hat{c}_{snd}^2\, \hat{u}_\mu \,\hat{\pi}^{\alpha\beta}\hat{\nabla}_\alpha \hat{u}_\beta &=0 \end{split}$$ Using the relations $$\hat{s}=s \ , \qquad \text{and} \;\; \hat{T}=T\, \hat{\alpha} , \label{hbst}$$ which are derived in we can write $$\begin{split} \hat{s}\left[\partial_\mu + \hat{a}_\mu-\hat{c}_{snd}^2\, \hat{u}_\mu\, \hat{\theta} \right] \hat{T} &= s \left[\partial_\mu + \hat{a}_\mu-\hat{c}_{snd}^2 \,\hat{u}_\mu \,\hat{\theta} \right] T\hat{\alpha}\\ &= \hat{\alpha} \,s \left[\left(1+\frac{d\,\ln \hat{\alpha}}{d\, \ln T}\right)\partial_\mu + \hat{a}_\mu-\hat{c}_{snd}^2 \,\hat{u}_\mu \,\hat{\theta} \right] T \\ &= \hat{\alpha} \left(1+\frac{d\ln \hat{\alpha}}{d\, \ln T}\right) \,s \left[\partial_\mu + \frac{\hat{a}_\mu-\hat{c}_{snd}^2\, \hat{u}_\mu \, \hat{\theta}}{\left(1+\frac{d\, \ln \hat{\alpha}}{d\, \ln T}\right)} \right] T \\ \end{split}$$ so that becomes $$\begin{split}\label{eq:HatdT2} s \left[\partial_\mu + \frac{\hat{a}_\mu-\hat{c}_{snd}^2\, \hat{u}_\mu \,\hat{\theta}}{\left(1+\frac{d\,\ln \hat{\alpha}}{d\,\ln\ T}\right)} \right] T+\frac{\hat{P}_\mu{}^{\nu}\hat{\nabla}_{\lambda}\hat{\pi}_\nu{}^{\lambda}-\hat{c}_{snd}^2\, \hat{u}_\mu\, \hat{\pi}^{\alpha\beta}\hat{\nabla}_\alpha \hat{u}_\beta}{\hat{\alpha} \left(1+\frac{d\,\ln \hat{\alpha}}{d\,\ln\ T}\right)} &=0 \end{split}$$ For the equation to describe the same dynamical system as the equation , it is necessary and sufficient that $$\begin{split}\label{eq:matchEqn} &\hat{a}_\mu-\hat{c}_{snd}^2 \,\hat{u}_\mu \,\hat{\theta} +\frac{\hat{P}_\mu{}^{\nu}\hat{\nabla}_{\lambda}\hat{\pi}_\nu{}^{\lambda}-\hat{c}_{snd}^2 \,\hat{u}_\mu\, \hat{\pi}^{\alpha\beta}\hat{\nabla}_\alpha 
\hat{u}_\beta}{\hat{T}\,\hat{s} } \\ &\quad\stackrel{?}{=}\left(1+\frac{d\,\ln \hat{\alpha}}{d\,\ln\ T}\right)\left[a_\mu-c_{snd}^2 \,u_\mu \,\theta +\frac{P_\mu{}^{\nu}\nabla_{\lambda}\pi_\nu{}^{\lambda}-c_{snd}^2 \,u_\mu\, \pi^{\alpha\beta}\nabla_\alpha u_\beta}{T\,s}\right] \end{split}$$ We can show that this is indeed true at the first derivative level by using the conversion formulae (rewriting ) $$\begin{split} {u}_\mu &= \left(1+\frac{\hat{\alpha}\,\hat{\theta}}{r_D(d-1)}\right)\hat{\alpha}\,\hat{u}_\mu - \frac{\hat{\alpha}^2}{r_D} \frac{\hat{a}_\mu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\\ \frac{d\ln \hat{\alpha}}{d\ln\ T}&= \frac{d}{2}(\hat{\alpha}^2-1), \quad {\theta}=\frac{1}{\hat{\alpha}}\hat{\theta} \\ \hat{a}_\nu &= \left(1+\frac{d}{2}(\hat{\alpha}^2-1)\right) a_\nu ,\quad {\theta}{u}_\mu=\hat{\theta}\hat{u}_\mu \\ \hat{c}^2_{snd} &= \left(1+\frac{d}{2}(\hat{\alpha}^2-1)\right) c^2_{snd} . \end{split}$$ Hence, till this order, we have proved that $\hat{T}^{\mu\nu}$ is indeed the dressed energy-momentum tensor that we were looking for. It should be instructive to extend this analysis to higher orders in derivatives. In particular, it would be interesting to pin down the specific property of the Dirichlet constitutive relation which leads to the fact that the dressed viscosity $\hat{\eta}$ is same as the bare value $\eta$ and furthermore understand why the hypersurface fluid has no bulk viscosity. This would complement the analysis of [@Iqbal:2008by] who demonstrated the absence of corrections to the shear viscosity by considering a flow equation in the linearized regime between the boundary and the horizon. Causality and relativistic fluids on the Dirichlet hypersurface {#s:csq} --------------------------------------------------------------- Having established a clear connection between the dynamics of the dressed fluid on the Dirichlet surface and that of fluid on a ‘dynamical’ boundary metric, we now turn to examining the properties of the fluid motion. It seems a priori that all is well in the long wavelength regime with regards to the issues raised at the beginning of section viz. the issue of locality and causality of the Dirichlet problem in . However, this is probably a bit too quick; while it is true that we have local dynamical equations given by the conservation of the hypersurface stress tensor , we have not established firmly that these equations arise from a sensible thermodynamic system. We now proceed to address this issue. The energy momentum tensor of the dressed fluid on the hypersurface is characterized by an energy density $\hat{\varepsilon}$ and a pressure $\hat{p}$ which are given in . In particular, the pressure of the fluid is $$\hat{p} \equiv \frac{\left[1+\frac{d}{2}(\hat{\alpha}-1)\right]}{8\pi G_{d+1}b^d}\frac{\hat{\alpha}}{\hat{\alpha}+1} = \frac{2\hat{\alpha}}{\hat{\alpha}+1}\left[1+\frac{d}{2}(\hat{\alpha}-1)\right] p \label{prhyper}$$ We also note that $\hat{\varepsilon} = \frac{2\,\hat{\alpha}}{\hat{\alpha}+1} \, \varepsilon$ which is useful in what follows. Using the thermodynamic relations $$\frac{d\hat{s}}{\hat{s}}= \frac{d\hat{\varepsilon}}{\hat{\varepsilon}+\hat{p}} ,\quad \frac{d\hat{T}}{\hat{T}}= \frac{d\hat{p}}{\hat{\varepsilon}+\hat{p}} \quad\text{and}\quad \hat{\varepsilon}+\hat{p} = \hat{T} \hat{s}$$ we get the entropy density and the temperature of this fluid as $$\hat{s}= \frac{1}{4 G_{d+1}}\, \frac{1}{b^{d-1}}=s ,\quad\text{and} \quad \hat{T}=\frac{d}{4\pi b}\hat{\alpha}=\hat{\alpha}\, T \label{hst2}$$ as quoted above in . 
The first of these relations follows from the fact that the entropies of the fluid on the asymptotic boundary and on the Dirichlet surface are both given in terms of the area of the horizon, which is unchanged by the solution. To determine the temperature on the hypersurface one has to account for the fact that the surface is in the interior of the spacetime. In the planar Schwarzschild-AdS$_{d+1}$ solution we get a deviation from the Hawking temperature (which is the temperature in the CFT) via a red-shift factor $\hat{\alpha}$. Conversely, given the above relations , the expressions for $\hat{\varepsilon}$ and $\hat{p}$ can be deduced using $d\hat{\varepsilon}=\hat{T}d\hat{s}$ and $d\hat{p}=\hat{s}d\hat{T}$. The speed of sound mode in this system is given by $$\label{spsnd:eq} \begin{split} \hat{c}_{snd}^2 &\equiv \frac{\partial\hat{p}}{\partial\hat{\varepsilon}}= \frac{1}{d-1}\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right] = {c}_{snd}^2 \left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right] \end{split}$$ This exceeds the speed of light as measured by $\hat{g}_{\mu\nu}$, i.e., we get superluminal sound propagation, for $\hat{\alpha}>\hat{\alpha}_{snd}$ where $\hat{\alpha}_{snd}\equiv \sqrt{3-\frac{4}{d}}$. This corresponds to $$\label{r_snd:eq} \begin{split} b\ r_{D,snd} &\equiv \left(\frac{\hat{\alpha}_{snd}^2}{\hat{\alpha}_{snd}^2-1}\right)^{1/d}\\ & =\left(\frac{3-\frac{4}{d}}{2-\frac{4}{d}}\right)^{1/d}\\ & \approx 1+ \frac{1}{d}\ln (3/2) + O(d^{-2}) \\ \end{split}$$ One can intuitively understand this result from the viewpoint of the boundary fluid. As we noted earlier the boundary fluid is subject to a gravitational potential well. Should one locally increase the strength of this well, the fluid would get sufficiently accelerated, perhaps leading to a pathology. This is manifest in the picture of the dressed fluid moving on an inert background, achieved by translating over to the Dirichlet surface. In particular, this gets reflected in the fact that the effective pressure $\hat{p}$ felt by the dressed fluid increases relative to its energy density $\hat{\varepsilon}$, thus driving the dressed fluid into a regime where the dominant energy condition is violated. Systems violating the dominant energy condition are known to be susceptible to superluminal sound modes[^27] as we observed above. How much should one be worried by this apparent acausal behavior where the dressed sound mode travels superluminally with respect to the inert part of the boundary metric? After all, as is easily verified, the mode with dispersion $\omega \sim \hat{c}_{snd}\, k$ propagates within the local light-cone of the ‘dynamical’ boundary metric $g_{\mu\nu}$. This is achieved by the phenomenon we had already alluded to towards the end of the section : as we move our Dirichlet surface into the AdS, the Dirichlet constitutive relation ensures that the boundary light-cone opens up (see ), thus ensuring that the dressed sound mode is not superluminal when measured with respect to $g_{\mu\nu}$. Of course, pending a detailed analysis of the initial value problem posed by this dynamical system (and other possible global issues), one cannot assert that the boundary physics is sensible from the above observations alone. We would however like to suggest that, viewing the hypersurface fluid as an autonomous dynamical system, a superluminal sound mode probably indicates a pathology. As described in [@Adams:2006sv], one should anticipate that the corresponding initial value problem[^28] for the hypersurface fluid might be ill-posed.
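These statements are straightforward to check numerically. The following sketch (our illustration, not part of the original text; the constants $d$, $G_{d+1}$, $b$ and $r_D$ are arbitrary sample values) finite-differences the hypersurface pressure and energy density constructed from the explicit formulas above, compares the result with the closed-form sound speed, and evaluates $\hat{\alpha}_{snd}$ and the critical radius $b\,r_{D,snd}$.

```python
# Numerical check (illustration only): finite-difference c_hat^2 = d p_hat / d eps_hat
# from the explicit hypersurface thermodynamics quoted above and compare with the
# closed-form expression; also evaluate the critical radius for superluminal sound.
import numpy as np

d, G, rD = 4, 1.0, 1.0          # arbitrary illustrative inputs

def alpha_hat(b):
    return 1.0 / np.sqrt(1.0 - (b * rD) ** (-d))

def p_hat(b):
    a = alpha_hat(b)
    return (1 + 0.5 * d * (a - 1)) / (8 * np.pi * G * b**d) * a / (a + 1)

def eps_hat(b):
    a = alpha_hat(b)
    s = 1.0 / (4 * G * b ** (d - 1))        # entropy density
    T = d * a / (4 * np.pi * b)             # hypersurface temperature
    return T * s - p_hat(b)                 # Euler relation eps + p = T s

b, db = 1.5, 1e-6
c2_fd = (p_hat(b + db) - p_hat(b - db)) / (eps_hat(b + db) - eps_hat(b - db))
c2_closed = (1 + 0.5 * d * (alpha_hat(b) ** 2 - 1)) / (d - 1)
print(c2_fd, c2_closed)                      # agree to finite-difference accuracy

alpha_snd = np.sqrt(3 - 4 / d)               # threshold for superluminal sound
b_rD_snd = ((3 - 4 / d) / (2 - 4 / d)) ** (1 / d)
print(alpha_snd, b_rD_snd, 1 + np.log(1.5) / d)   # last entry: large-d estimate
```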
Is this the way the long wavelength problem is telling us that the dual of the generic bulk Dirichlet problem in the CFT is ill-posed? Is it possible that the bulk Dirichlet problem in gravity is pathological the moment $r_D$ is finite, and the fact that the effective dynamics of the fluid of the CFT remains sensible up to a critical radius $r_{D,snd}$ is just a long wavelength artifact? Clearly this issue deserves further investigation. We will now take an alternate approach which sidesteps these deep questions – given that our issue is with the superluminal sound mode on $\Sigma_D$ (for $r_D<r_{D,snd}$), is it possible to project this offending mode out of our dynamics, hence avoiding the entire issue? We will now argue that fortunately the answer is yes – there is indeed a way to project out the sound mode, retaining sensible physics at least within the long wavelength regime. The way to do this is to move to the incompressible non-relativistic regime of the fluid/gravity correspondence first studied in [@Bhattacharyya:2008kq]. We will now implement their construction in our setting, allowing us to obtain sensible dynamics for the Dirichlet problem. We will postpone some of the more general questions raised above to the discussion section .

The Dirichlet problem for gravity with non-relativistic fluids {#s:dgravnr}
==============================================================

The discussion so far has concentrated on mapping relativistic fluid dynamics on a curved background $\Sigma_D$ to a corresponding problem on the boundary where we have a relativistic fluid on a ‘dynamical metric’. Moreover in the previous section, we cited the possible problems with the sound mode as a motivation to project it out consistently so as to get a clearly sensible dynamical system with no possible issues with the initial value problem etc. Our goal now is to describe how this can be done consistently within our gradient expansion inspired by the non-relativistic incompressible scaling limit of [@Bhattacharyya:2008kq; @Fouxon:2008tb]. As we will see this limit has the added advantage that it naturally allows us to make contact with the metric derived in [@Cai:2011xv]. The idea as explained beautifully in [@Bhattacharyya:2008kq] is the following: every relativistic fluid has a scaling limit where we freeze out the propagating sound mode, which drives the fluid into a non-relativistic regime, while simultaneously making it incompressible. This BMW scaling can essentially be derived by the requirement that one retains the non-linearities of the conservation equation (at least at first order). The resulting conservation equations are the classic incompressible non-relativistic Navier-Stokes equations. Using the fluid/gravity map, [@Bhattacharyya:2008kq] constructed a gravitational dual of this system. The BMW scaling involves two ingredients. Firstly, the velocities and the temperatures of the fluid are taken to be slowly varying functions of a specific kind, with spatial and temporal gradients having different scaling dimensions (heuristically $\partial_t \sim \partial_x^2$). Secondly, the amplitudes of the spatial velocity and the temperature fluctuation (about some constant equilibrium value) are also taken to be small and are of the same order as $\partial_x$ and $\partial_t$ respectively.
It is convenient to introduce a large parameter $\aleph$ (which is the inverse of the small parameter $\epsilon$ in [@Bhattacharyya:2008kq]) in terms of which the above statements can be written as $\partial_x\sim \aleph^{-1}$, $\partial_t\sim \aleph^{-2}$, etc. We will denote the corresponding parameter for the hypersurface fluid by the hatted $\hat{\aleph}$. In the large $\aleph$ limit it is possible to show that the relativistic conservation equations map straightforwardly into the incompressible Navier-Stokes equations. We review this scaling in for the convenience of the reader and proceed in the main text to directly implement an analogous scaling on the hypersurface $\Sigma_D$ using a large parameter $\hat{\aleph}$.

Non-relativistic fluids on the Dirichlet hypersurface {#s:nrhyp1}
-----------------------------------------------------

To keep the computation sufficiently general we will take the metric on the Dirichlet surface to have non-vanishing curvature. Furthermore, it is useful as in [@Bhattacharyya:2008kq] to allow for the background metric to be decomposed into slowly varying parts of different orders so as to recover non-relativistic fluids which are forced on $\Sigma_D$. One reason for doing so is that we are going to obtain a boundary metric, via the map $\varphi_D$ described in , which naturally contains such terms. Hence it pays to be more general to see the mixing of various contributions at the boundary. To start off let us consider on the hypersurface a metric $\hat{g}_{\mu\nu}$ of the form: $$\hat{g}_{\mu\nu} = \hat{g}^{(0)}_{\mu\nu} + \hat{h}_{\mu\nu} \label{Dgans}$$ with $$\hat{g}^{(0)}_{\mu\nu} = -dt^2 + \hat{g}^{(0)}_{ij}(x) \, dx^i \, dx^j \label{Dg0}$$ where $\hat{g}^{(0)}_{ij}$ are slowly varying functions of $x^i$ and with $\hat{h}_{\mu\nu}$ the metric perturbations, which we take to be $$\hat{h}_{\mu\nu} \,dx^\mu\,dx^\nu= 2\, \hat{\aleph}^{-1}\, \hat{k}^*_{i}\, dt\, dx^i + \hat{\aleph}^{-2}\, \left(\hat{h}^*_{tt}\, dt^2 + \hat{h}^*_{ij} \, dx^i \, dx^j \right) \label{Dhans}$$ To keep things simple it turns out to be useful to work with the background spatial metric $\hat{g}^{(0)}_{ij}$ being Ricci flat, i.e., $R^{(0)}_{ij} =0$. This simplifies the analysis considerably, since a host of terms dependent on the curvature of $\hat{g}^{(0)}_{ij}$ drop out – a more comprehensive analysis for general backgrounds is presented in . We indicate the various corrections that arise from the curvature terms at appropriate stages in the main text. All the functions which have a $*$ subscript or superscript (which we freely interchange to keep formulae clear) are of a specific functional form with anisotropic scaling of their spatial and temporal gradients. $$\hat{{\cal Y}}_*(t,x^i) : {\mathbb R}^{d-1,1} \mapsto {\mathbb R}\ , \;\; \text{such that} \;\; \{ \partial_t \hat{{\cal Y}}_*(t,x^i), \hat{\nabla}^{(0)}_i \hat{{\cal Y}}_*(t,x^i)\} \sim \{{\cal O}(\hat{\aleph}^{-2}) ,{\cal O}(\hat{\aleph}^{-1})\}$$ where $\hat{\aleph}$ is a counting parameter introduced to implement the BMW scaling (on the boundary $\aleph^{-1} = \epsilon_\text{BMW}$ as discussed in ). Following [@Bhattacharyya:2008kq], we parameterize the velocity field as $$\hat{u}^{\mu} = \hat{u}^t \left(1, \hat{\aleph}^{-1}\, \hat{v}_*^i \right)$$ where the function $\hat{u}^t$ is determined by requiring that $\hat{g}_{\mu\nu}\hat{u}^{\mu}\hat{u}^{\nu}=-1$.
This gives the full velocity field in a large $\hat{\aleph}$ expansion as [^29] $$\begin{split} \hat{u}^t &=1 + \frac{\hat{\aleph}^{-2}}{2} \left( \hat{h}^*_{tt} + 2 \,\hat{k}^*_{j}\, \hat{v}^{j}_{*} + \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*} \right)+ {\cal O}(\hat{\aleph}^{-4})\\ \hat{u}^i &= \hat{\aleph}^{-1} \, \hat{v}_*^i + \frac{\hat{\aleph}^{-3}}{2} \left( \hat{h}^*_{tt} + 2 \,\hat{k}^*_{j}\, \hat{v}^{j}_{*} + \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*} \right)\, \hat{v}_*^i + {\cal O}(\hat{\aleph}^{-4})\\ {\hat{u}}_t &= -1 - \frac{1}{2}\, {\hat{\aleph}}^{-2} \, \left(- {h}^*_{tt} + {\hat{g}}^{(0)}_{jk} \, {\hat{v}}^j_* \, {\hat{v}}^k_* \right)+ {\cal O}(\hat{\aleph}^{-4})\\ {\hat{u}}_i &= {\hat{\aleph}}^{-1}\, \left( {\hat{v}}^*_i + {k}^*_i \right)+ \hat{\aleph}^{-3}\left[{h}^*_{ij}\hat{v}^{j}_{*}+ \frac{1}{2} \left( \hat{h}^*_{tt} + 2 \,\hat{k}^*_{j}\, \hat{v}^{j}_{*} + \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*} \right)\, \left( {\hat{v}}^*_i + {k}^*_i \right) \right] + {\cal O}(\hat{\aleph}^{-4}) \end{split}$$ and the velocity gradients are given by $$\begin{aligned} \hat{\theta} &=& {\cal O}(\hat{\aleph}^{-4}) \nonumber \\ \hat{\mathcal{A}}_{\mu} dx^{\mu} &=& \hat{a}_{\mu} dx^{\mu}= \hat{\aleph}^{-3} \left[ \partial_{t} \hat{v}_{i}^{*} + \hat{v}_{*}^{j} \hat{\nabla}^{(0)}_{j} \hat{v}_{i}^{*} - \hat{f}_i^* \right] dx^{i} + {\cal O}(\hat{\aleph}^{-4}) \nonumber \\ \hat{\sigma}_{\mu \nu} dx^{\mu} dx^{\nu} &=&\hat{\aleph}^{-2}\, \hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \,dx^{i} dx^{j} - 2\hat{\aleph}^{-3} \, \hat{v}^{j}_{*} \,\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \,dx^{i} dt + {\cal O}(\hat{\aleph}^{-4}) \label{Duder} \end{aligned}$$ where $\hat{\nabla}^{(0)}_{\mu}$ is the covariant derivative compatible with $\hat{g}^{(0)}(x^{i})$ and we have freely raised and lowered the spatial indices with $\hat{g}^{(0)}_{ij}$ for brevity. Further, $\hat{f}^i$ is a forcing function determined as a functional of $\hat{h}_{\mu\nu}$ data $$\hat{f}_{i} = \frac{1}{2} \partial_i \hat{h}^*_{tt}- \partial_t \hat{k}^*_i + \hat{q}^{*}_{\ ij} \, \hat{v}_*^j \,. \label{hypforce}$$ and $\hat{q}^*_{ij} = \hat{\nabla}^{(0)}_i \hat{k}^*_j -\hat{\nabla}^{(0)}_j \hat{k}^*_i$ . In deriving these expressions we have used the fact that to leading order in the $\aleph \to \infty$ expansion, the velocity field $v^i_*$ is divergenceless (see below). We take the scaling in $b$ to be of the form $$b = b_0 + \hat{\aleph}^{-2}\, \delta b_* \ . \label{Dbexp}$$ Using $$\begin{aligned} \hat{\alpha} &=& \hat{\alpha}_0 + \hat{\aleph}^{-2} \, \left(\frac{d}{2}\, \hat{\alpha}_0 \left( 1 - \hat{\alpha}_0^2 \right) \,\frac{\delta b_*}{b_0} \right) +{\cal O}(\hat{\aleph}^{-4}) \ , \qquad \hat{\alpha}_0 \equiv \frac{1}{\sqrt{f \left(b_0\, r_{D}\right)}}\end{aligned}$$ we can evaluate the non-relativistic pressure per mass density and kinematic viscosity: $$\begin{aligned} \hat{p}_{*} &=&\frac{\delta \hat{p}}{\hat{\varepsilon}_0+\hat{p}_0} = - \hat{\aleph}^{-2} \, \left(1 + \frac{d}{2} \left(\hat{\alpha}^2_{(0)} -1\right) \right) \frac{\delta b_*}{b_0} +{\cal O}(\hat{\aleph}^{-4}) \nonumber \\ \hat{ \nu}_{0} &=& \frac{\eta_0}{\hat{\varepsilon}_0+ \hat{p}_0}= \frac{b_0}{d \, \hat{\alpha}_0}+ {\cal O}(\hat{\aleph}^{-2}) \label{bmwpnu} \end{aligned}$$ with $\hat{\rho}_0\equiv \hat{\varepsilon}_0+ \hat{p}_0 = \frac{d \,\hat{\alpha}_0}{16 \pi G_{d+1}\, b_0^d}$ playing the role of the non-relativistic mass density. 
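For orientation, here is a small numerical sketch (ours, purely illustrative, not part of the original computation) evaluating the quantities just listed for some arbitrarily chosen inputs.

```python
# Illustration (not from the paper): evaluate alpha_0, the non-relativistic pressure
# per unit mass density p_*, the kinematic viscosity nu_0 and the mass density rho_0
# from the expressions above, for arbitrary sample values of the inputs.
import numpy as np

d, G = 4, 1.0                  # boundary dimension, Newton constant (illustrative)
b0, rD = 1.0, 1.5              # equilibrium temperature scale and Dirichlet radius
db_star, aleph = 0.01, 10.0    # amplitude of delta b_* and the BMW counting parameter

alpha0 = 1.0 / np.sqrt(1.0 - (b0 * rD) ** (-d))
p_star = -(1.0 + 0.5 * d * (alpha0**2 - 1.0)) * db_star / b0 / aleph**2
nu0 = b0 / (d * alpha0)
rho0 = d * alpha0 / (16 * np.pi * G * b0**d)

print(f"alpha_0 = {alpha0:.4f}  p_* = {p_star:.3e}  nu_0 = {nu0:.4f}  rho_0 = {rho0:.4f}")
```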
Given these data we can show that the conservation equations reduce to the incompressible Navier-Stokes equations: $$\begin{aligned} && \hat{\nabla}^{(0)}_i \, \hat{v}^i_* = 0 \nonumber \\ && \hat{\nabla}^{(0)}_i \hat{p}_* + \partial_t \hat{v}^*_i + \hat{v}_*^j \, \hat{\nabla}^{(0)}_j \hat{v}^*_i - 2\, \hat{\nu}_0\, \hat{\nabla}^{(0)^j} \left(\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)}\right) = \hat{f}_i \label{hypns}\end{aligned}$$ We now proceed to derive the expressions entering into the bulk metric and the map $\varphi_D$ given in .

Bulk metric in terms of Dirichlet data {#s:nrhyp2}
--------------------------------------

Armed with the results from we can proceed to use those of to construct the bulk metric corresponding to the non-relativistic fluid on the Dirichlet hypersurface $\Sigma_D$. In principle Einstein's equations need to be solved in the new gradient expansion to obtain the non-relativistic solutions. As argued by the authors of [@Bhattacharyya:2008kq], this can be done via an algorithm very similar to the algorithm used to find the metric dual of the relativistic fluid. The main difference is the anisotropic scaling of space with respect to time and the fact that the bulk metric is no longer ultra-local in space but is still ultra-local in time. We can again proceed from the space of non-relativistic solutions with asymptotic boundary conditions that we present in and reparametrize it in terms of Dirichlet data. But we will take here instead an easier route and directly derive it from the bulk relativistic metric written in terms of Dirichlet data. We should be careful though – given the difference in derivative counting between the relativistic scaling and the non-relativistic scaling, it is in principle possible that a higher order term according to the relativistic counting contributes at a lower order according to the non-relativistic counting. In order to obtain the metric accurate to the order where the Navier-Stokes equations can be seen, ${\cal O}(\hat{\aleph}^{-3})$, we need to have certain terms in the relativistic metric accurate to third order in gradients.[^30] If we however restrict to the case of a Ricci flat spatial metric on the hypersurface $\Sigma_D$, then we can obtain the non-relativistic metric from the second order relativistic fluid/gravity metric obtained in [@Bhattacharyya:2008mz]. There are three terms we need to account for which give rise to non-relativistic contributions proportional to $\nabla^{(0)}_j\nabla^{(0) j} \hat{v}^*_i \equiv \nabla_{(0)}^2 \hat{v}^*_i$ and $ \nabla^{(0)}_j \hat{q}_*^{j}{}_i$. These involve new radial functions which we collect below after presenting the bulk metrics highlighting the terms that were missed in the original analyses. General expressions including spatial curvatures can be found in . Since in the scaling limit $\hat{\aleph} \gg 1$ one has from that $\hat{a}_\mu = \hat{\mathcal{A}}_\mu$ to leading order, things simplify considerably.
Using the formulae in the last subsection and including the terms from the second order metric[^31] (highlighted) it is easy to show that the bulk metric becomes[^32] $$\label{hypbmwf1} \begin{split} ds^2 &= -2 \hat{\alpha}\, \hat{u}_\mu \,dx^\mu dr +\frac{2\hat{\alpha}^2}{r_D}\left[\frac{\hat{a}_\mu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\right]dx^\mu dr -2\,r\,\hat{\alpha}\, (2\, \hat{\xi}-1) \, \frac{\hat{u}_{(\mu}\,\hat{a}_{\nu)} }{1+\frac{d}{2}\, (\hat{\alpha}^2 -1)} \, dx^\mu \,dx^\nu\\ &\qquad +\; r^2\left[\hat{g}_{\mu\nu} + \left(1-\hat{\alpha}^2\, f(br) \right) \hat{u}_\mu \,\hat{u}_\nu + 2b\,\hat{F}(br)\, \hat{\sigma}_{\mu\nu}\right] dx^\mu \,dx^\nu \\ &\qquad \red{- 4b^2\kappa_L\hat{\alpha}\hat{P}_{\mu}^{\lambda}\hat{\mathcal{D}}_{\alpha} {\hat{\sigma}^{\alpha}}_{\lambda}\,dx^\mu\,dr}\\ &\qquad \red{-2\,\frac{\hat{\alpha}^3}{r_D^2}\hat{\mathcal{S}}_{\mu\lambda}\hat{u}^\lambda\,dx^\mu\,dr+\,2 \frac{\hat{\alpha}^3} {r_D^2(d-2)}\left[1+\frac{2}{d\hat{\alpha}(\hat{\alpha}+1)}\right] \hat{\mathcal{R}}_{\mu\lambda}\hat{u}^\lambda\,dx^\mu\,dr}\\ &\qquad \red{+\; 2\,(br)^2\left[\hat{M}_1(br)\, \hat{u}_{(\mu}\hat{\mathcal{S}}_{\nu)\lambda}\hat{u}^\lambda - \hat{M}_2(br)\, \hat{u}_{(\mu}\hat{\mathcal{R}}_{\nu)\lambda}\hat{u}^\lambda +2\, \hat{L}_1(br)\, \hat{u}_{(\mu}\hat{P}_{\nu)}^{\lambda}\hat{\mathcal{D}}_{\alpha}{\hat{\sigma}^{\alpha}}_{\lambda} \right]dx^\mu dx^\nu} \\ &= ds_0^2 + \hat{\aleph}^{-1} ds_1^2 + \hat{\aleph}^{-2} ds_2^2 + \hat{\aleph}^{-3} ds_3^2 + {\cal O}(\hat{\aleph}^{-4}) \\ \end{split}$$ with $$\label{hypbmwf1a} \begin{split} ds_0^2 &= 2\,\hat{\alpha}_0\ dt\ dr + r^2\left(-\hat{\alpha}_0^2 \,f_0 \,dt^2 + \hat{g}^{(0)}_{ij}\,dx^i dx^j\right)\\ ds_1^2 &= -2\, \hat{\alpha}_0\left( \hat{v}^*_i + \hat{k}^*_i \right)\ dx^i\ dr + 2\, r^2 \left[\hat{k}^*_i -\left(1-\hat{\alpha}_0^2\, f_0\right) \left( \hat{v}^*_i + \hat{k}^*_i \right)\right] dx^i\, dt \\ ds_2^2 &= 2\, \hat{\alpha}_0 \left[- \frac{1}{2}\hat{h}^*_{tt} + \frac{1}{2}\, \hat{g}^{(0)}_{jk} \, \hat{v}^j_* \, \hat{v}^k_* +\hat{p}_* \frac{\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right]dt\ dr + r^2\left[\hat{h}^*_{tt}\, dt^2 + \hat{h}^*_{ij} \, dx^i \, dx^j \right]\\ &\quad +r^2 \left(1-\hat{\alpha}_0^2\, f_0\right) \left[\left(- \hat{h}^*_{tt} + \hat{g}^{(0)}_{jk} \, \hat{v}^j_* \, \hat{v}^k_* +\hat{p}_* \frac{d\hat{\alpha}_0^2 }{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right)dt^2 \right.\\ &\qquad \left. 
\qquad + \left( \hat{v}^*_i + \hat{k}^*_i \right) \left( \hat{v}^*_j + \hat{k}^*_j \right)dx^i dx^j \right] +2\,r^2\,b_0\, \hat{F}_0\,\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \,dx^{i} dx^{j}\\ ds_3^2 &= -2\,\hat{\alpha}_0\left[\hat{h}^*_{ij}\hat{v}^{j}_{*}+ \left( \frac{1}{2}\hat{h}^*_{tt} + \,\hat{k}^*_{j}\, \hat{v}^{j}_{*} +\frac{1}{2} \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*} +\hat{p}_* \frac{\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right)\, \left( \hat{v}^*_i + \hat{k}^*_i \right) \right] dx^i dr \\ &\quad +\frac{2\hat{\alpha}_0^2}{r_D\left(1+\frac{d}{2}(\hat{\alpha}_0^2-1)\right)}\left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \hat{\nabla}^{(0)}_{j} \hat{v}_{i}^{*} -\hat{f}_i^* \right] dx^{i}dr\\ &\quad + 2r\,\frac{\hat{\alpha}_0(2\,\hat{\xi}_0-1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \hat{\nabla}^{(0)}_{j} \hat{v}_{i}^{*} -\hat{f}_i^* \right] dx^{i}dt\\ &\quad -2\,r^2 \left(1-\hat{\alpha}_0^2 \,f_0\right)\left[\hat{h}^*_{ij}\hat{v}^{j}_{*} + \left(\hat{k}^*_{j}\, \hat{v}^{j}_{*} + \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*} +\hat{p}_* \frac{d\hat{\alpha}_0^2 }{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right)\, \left( \hat{v}^*_i + \hat{k}^*_i \right) \right] dx^i dt \\ &\qquad -4\, r^2\, b_0\, \hat{F}_0\, \hat{v}_{*}^j\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \,dx^{i} dt \; \red{-\;2\, b_0^2\, r^2\, \hat{L}_1 \nabla^2_{(0)} v_i^* \, dt \, dx^i -2\,b_0^2\, \hat{\kappa}_L\,\hat{\alpha}_0\, \hat{\nabla}^2_{(0)}v^*_i\, dx^i\, dr} \\ &\qquad \red{+ \;\hat{M}_0 \,\hat{\nabla}^j_{(0)}\hat{q}^*_{ij}\, dx^i \, dt+ 2\, \frac{\hat{\alpha}_0^2}{r_D^2} \, \frac{\hat{\nabla}^j_{(0)}\hat{q}^*_{ij}}{d\,(d-2)\, (\hat{\alpha}_0 +1) } \, dx^i\, dr } \end{split}$$ where $$\begin{split} \hat{\alpha}_0 &\equiv (1-(b_0 r_d)^{-d})^{-1/2}\ ,\quad f_0 \equiv 1-(b_0 r)^{-d}\ ,\\ \hat{\xi}_0 &\equiv 1- \frac{r}{2r_D}\hat{\alpha}_0^2\, f_0\\ \hat{p}_* &\equiv -\frac{\delta b_*}{b_0}\left\{1+\frac{d}{2}\, (\hat{\alpha}^2 -1)\right\} \\ \hat{F}_0 &\equiv \frac{1}{\hat{\alpha}_0} \int_{b_0 r}^{b_0 r_D}\frac{y^{d-1}-1}{y(y^{d}-1)}dy\\ \hat{f}_i^* &\equiv \frac{1}{2}\partial_i \hat{h}^*_{tt} - \partial_t \hat{k}^*_i +\left[\hat{\nabla}^{(0)}_i \hat{k}^*_j - \hat{\nabla}^{(0)}_j \hat{k}^*_i\right] {v}_*^j = \frac{1}{2}\partial_i\hat{h}^*_{tt} - \partial_t\hat{k}^*_i +\hat{q}^*_{ij} {v}_*^j \\ \hat{L}_1 &\equiv \frac{L(br)}{(br)^d}-\frac{L(br_D)}{(br_D)^d}+\hat{\kappa}_L \left[1-\hat{\alpha}^2\, f_0\right]\\ \hat{\kappa}_L &\equiv \frac{1}{d}\left[ \xi(\xi^d-1)\frac{d}{d\xi}\left[\xi^{-d}L(\xi)\right]+ \frac{1}{\xi\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}+\frac{1}{\xi^2(d-2)}\right]_{\xi=br_D} \\ \hat{M}_0 &\equiv \frac{1}{(d-2)}\left[-\hat{\alpha}_0^2\left(1-\frac{r^2}{r_D^2}\right)+\frac{2}{d}\frac{\hat{\alpha}_0}{1+\hat{\alpha}_0}\left(1-\hat{\alpha}_0^2 f_0\right)\frac{r^2}{r_D^2}\right] \end{split}$$ The function $L(x)$ which enters into the above formulae is given as $$L(br) = \int_{br}^\infty\xi^{d-1}d\xi\int_{\xi}^\infty dy\ \frac{y-1}{y^3(y^d -1)} \label{}$$ while the other functions $\hat{M}_1$ and $\hat{M}_2$ can be found in . In the final results we have highlighted the terms appearing in $ds_3^2$ that were missed in the first version of the paper and are necessary in order to solve Einstein’s equations to ${\cal O}(\hat{\aleph}^{-3})$. 
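The radial functions entering these expressions are easy to evaluate by direct quadrature. The following sketch (our illustration, with arbitrary sample values of $d$, $b_0 r$ and $b_0 r_D$; it is not part of the original computation) computes $\hat{F}_0$ and $L(br)$ numerically.

```python
# Numerical sketch (illustration only): evaluate the radial functions F_hat_0 and L(br)
# defined above by direct quadrature; d, b0*r and b0*rD are arbitrary sample values.
import numpy as np
from scipy.integrate import quad

d = 4
b0_rD = 1.5                 # b0 * r_D, the Dirichlet surface
b0_r = 1.2                  # b0 * r, a bulk radius with 1 < b0*r < b0*rD
alpha0 = 1.0 / np.sqrt(1.0 - b0_rD ** (-d))

def F0(b0r):
    """F_hat_0 = (1/alpha_0) * int_{b0 r}^{b0 rD} (y^{d-1}-1)/(y (y^d-1)) dy."""
    val, _ = quad(lambda y: (y ** (d - 1) - 1) / (y * (y**d - 1)), b0r, b0_rD)
    return val / alpha0

def L(br):
    """L(br) = int_{br}^infty xi^{d-1} dxi int_xi^infty (y-1)/(y^3 (y^d-1)) dy."""
    inner = lambda xi: quad(lambda y: (y - 1) / (y**3 * (y**d - 1)), xi, np.inf)[0]
    val, _ = quad(lambda xi: xi ** (d - 1) * inner(xi), br, np.inf)
    return val

print(F0(b0_r), L(b0_r), L(b0_rD))
```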
Having realized the necessity of these terms it a-posteriori became clear that the BMW scaling metric introduced in [@Bhattacharyya:2008kq] to describe the bulk dual of boundary non-relativistic fluids also receives corrections. The corrected form of this metric including effects of boundary spatial curvature is now presented in . The general expressions for the Dirichlet non-relativistic fluid are collected in as they involve many more terms compared to what was originally reported in v1 of this paper. As required this metric reduces to the metric derived by the BMW scaling on the boundary [@Bhattacharyya:2008kq] that we review in in our conventions; we simply set $r_D=\infty,\hat{\alpha}_0 = 1$ in the above metric. As remarked above, it compares with the metric given in [@Bhattacharyya:2008kq] up to the new terms mentioned above. Inspired by the gravity duals of fluids on cut-off hypersurfaces in flat space [@Bredberg:2011jq; @Compere:2011dx], in [@Cai:2011xv] the result for the bulk dual to a non-relativistic fluid on a Dirichlet hypersurface in has been recently derived. The results there are presented for a fluid on a flat background and are entirely contained within our framework. To facilitate ease of comparison with their results we now specify our computation to the case where the metric on $\Sigma_D$ is flat and furthermore switch off the forcing. Taking $\hat{g}_{\mu\nu} = \eta_{\mu\nu}$ which amounts to setting $\hat{g}^{(0)}_{ij} = \delta_{ij}$ and $\hat{k}_i^* = \hat{h}_{tt}^* = \hat{h}^*_{ij} =0$ in the above one can show that the bulk metric reduces to the form with $$\label{hypbmwf} \begin{split} ds_0^2 &= 2\,\hat{\alpha}_0\ dt\ dr + r^2\left(-\hat{\alpha}_0^2\, f_0\, dt^2 + \delta_{ij}dx^i dx^j\right)\\ ds_1^2 &= -2\, \hat{\alpha}_0\, \hat{v}^*_i \ dx^i\ dr - 2\, r^2 \,\left(1-\hat{\alpha}_0^2 \,f_0\right)\hat{v}^*_i dx^i dt \\ ds_2^2 &= 2 \,\hat{\alpha}_0 \left[ \frac{1}{2} \, \hat{v}^2_* +\hat{p}_* \frac{\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right]dt\ dr \\ &\quad +r^2 \left(1-\hat{\alpha}_0^2 \,f_0\right) \left[\left( \hat{v}^2_* +\hat{p}_* \frac{d\hat{\alpha}_0^2 }{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right)dt^2 + \hat{v}^*_i \hat{v}^*_j dx^i dx^j \right]\\ &\quad +2\,r^2\,b_0\,\hat{F}_0\,\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \,dx^{i} dx^{j}\\ ds_3^2 &= -2\,\hat{\alpha}_0\, \left( \frac{1}{2} \hat{v}^{2}_{*} +\hat{p}_* \frac{\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}\right) \hat{v}^*_i\, dx^i\, dr +\frac{2\hat{\alpha}_0^2}{r_D\left(1+\frac{d}{2}(\hat{\alpha}_0^2-1)\right)}\left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \partial_j \hat{v}_{i}^{*} \right] dx^{i}dr\\ &\quad + 2\,r\,\frac{\hat{\alpha}_0(2\hat{\xi}_0-1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \partial_{j} \hat{v}_{i}^{*} \right] dx^{i}\,dt\\ &\quad -2\,r^2 \left(1-\hat{\alpha}_0^2\, f_0\right) \left(\hat{v}^{2}_{*}\ +\hat{p}_* \frac{d\hat{\alpha}^2 }{1+\frac{d}{2}\, (\hat{\alpha}^2 -1)} \right) \hat{v}^*_i \,dx^i \,dt -4\,r^2\,b_0\, \hat{F}_0\,\hat{v}_{*}^j\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \,dx^{i} dt \\ &\quad \red{-\;2\, b_0^2\, r^2\, \hat{L}_1 \nabla^2_{(0)} v_i^* \, dt \, dx^i -2\,b_0^2\, \hat{\kappa}_L\,\hat{\alpha}_0\, \hat{\nabla}^2_{(0)}v^*_i\, dx^i\, dr} \,, \end{split}$$ which agrees with that derived in [@Cai:2011xv] once one accounts for some differences in convention. 
In particular, one has to rescale the time coordinate to absorb the factor of $\hat{\alpha}_0$, in addition to redefining $dt \rightarrow \hat{\alpha}_0^{-1}\, dt$ along with $\hat{v}^*_i \rightarrow \frac{\hat{\alpha}_0}{r_D^2} \, \hat{v}^*_i$ and $\hat{v}_*^i \rightarrow \hat{\alpha}_0\, \hat{v}^*_i$. The inhomogeneity in the scaling of the spatial velocities results from the fact that we define our hypersurface metric to be the induced metric rescaled by a factor of $r_D^{-2}$, while [@Cai:2011xv] works with the induced metric. Also, our derivation here allows us to go to ${\cal O}(\hat{\aleph}^{-3})$ which is necessary in order to see the dynamical Navier-Stokes equation on the hypersurface $\Sigma_D$. The boundary data for the non-relativistic Dirichlet fluid {#s:dbdymet} ---------------------------------------------------------- Apart from the construction of the bulk dual to the non-relativistic fluid on the Dirichlet surface, we would also like to know what the corresponding physics on the boundary is. For instance we see that the boundary velocity field can be read off from the terms with $dr$ in and is given as $$\begin{split} u_t &= -\hat{\alpha}_0 - \hat{\aleph}^{-2} \hat{\alpha}_0 \left[- \frac{1}{2}\hat{h}^*_{tt} + \frac{1}{2} \hat{g}^{(0)}_{jk} \, \hat{v}^j_* \, \hat{v}^k_* +\hat{p}_* \frac{\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right] + {\cal O}(\hat{\aleph}^{-4}) \\ u_i &= \hat{\aleph}^{-1}\, \hat{\alpha}_0\, \left(\hat{v}^*_i + \hat{k}^*_i \right)\\ &\quad +\hat{\aleph}^{-3}\, \hat{\alpha}_0\left[\hat{h}^*_{ij}\hat{v}^{j}_{*}+ \left( \frac{1}{2}\hat{h}^*_{tt} + \,\hat{k}^*_{j}\, \hat{v}^{j}_{*} +\frac{1}{2} \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*} +\hat{p}_* \frac{\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}\right)\, \left( \hat{v}^*_i + \hat{k}^*_i \right) \right]\\ &\qquad -\hat{\aleph}^{-3}\frac{\hat{\alpha}_0^2}{r_D\left(1+\frac{d}{2}(\hat{\alpha}_0^2-1)\right)}\left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \hat{\nabla}^{(0)}_{j} \hat{v}_{i}^{*}-\hat{f}_i^* \right] \\ & \qquad +\hat{\aleph}^{-3}\left[ b_0^2\, \hat{\kappa}_L\,\hat{\alpha}_0\, \hat{\nabla}^2_{(0)} \hat{v}^*_i -\frac{\hat{\alpha}_0^2}{r_D^2} \, \frac{\hat{\nabla}^j_{(0)}\hat{q}^*_{ij}}{d\,(d-2)\, (\hat{\alpha}_0 +1) }\right] + {\cal O}(\hat{\aleph}^{-4}) \end{split}$$ while the boundary metric can be read off from the large $r$ behavior and is given to be $$\begin{split} g_{tt} &= -\hat{\alpha}_0^2 + \hat{\aleph}^{-2} \, \left[\hat{h}^*_{tt}\ + \left(1-\hat{\alpha}_0^2 \right) \left(- \hat{h}^*_{tt} + \hat{g}^{(0)}_{jk} \, \hat{v}^j_* \, \hat{v}^k_*+\hat{p}_* \frac{d\hat{\alpha}_0^2 }{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right) \right] + {\cal O}(\hat{\aleph}^{-4}) \\ g_{ti} &= \hat{\aleph}^{-1}\left( \hat{k}^*_i + (\hat{\alpha}_0^2-1)\, (\hat{k}_i^* + \hat{v}_i^*) \right) - \hat{\aleph}^{-3}\frac{\hat{\alpha}_0^3}{r_D\left(1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)\right)} \left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \hat{\nabla}^{(0)}_{j} \hat{v}_{i}^{*} -\hat{f}_i^* \right] \\ &\quad -\hat{\aleph}^{-3} \left(1-\hat{\alpha}_0^2 \right)\left[\hat{h}^*_{ij}\hat{v}^{j}_{*}+ \left(\hat{k}^*_{j}\, \hat{v}^{j}_{*} + \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*}+\hat{p}_* \frac{d\hat{\alpha}_0^2 }{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right)\, \left( \hat{v}^*_i + \hat{k}^*_i \right) \right]\\ &\quad +\;\hat{\aleph}^{-3} \left[\frac{2 \, b_0}{\hat{\alpha}_0}\, 
F(b_0\,r_D)\hat{v}_{*}^j\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} + \frac{\hat{\alpha}_0^4}{2\,(d-2)\, r_D^2}\hat{\nabla}^j_{(0)}\hat{q}^*_{ij} - \frac{\hat{\alpha}_0^2\,(\hat{\alpha}_0^2-1)}{2\, (d-2)\, r_D^2}\left(1+\frac{2}{d\,\hat{\alpha}_0\,(\hat{\alpha}_0+1)}\right) \hat{\nabla}^j_{(0)}\hat{q}^*_{ij}\right] \\ &\quad +\;\hat{\aleph}^{-3} \, \left[ -\, b_0^2\, \hat{L}_1(\infty) \, \hat{\nabla}^2_{(0)}\hat{v}^*_{i} \right] + \;{\cal O}(\hat{\aleph}^{-4}) \\ g_{ij} &= \hat{g}^{(0)}_{ij} + \hat{\aleph}^{-2} \left(\hat{h}^*_{ij} -(\hat{\alpha}_0^2 -1)\, (\hat{v}^*_i + \hat{k}^*_i) \,(\hat{v}^*_j + \hat{k}^*_j) - \frac{2 \, b_0}{\hat{\alpha}_0}\, F(b_0\,r_D) \,\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \right) + \;{\cal O}(\hat{\aleph}^{-4}) \end{split}$$ with $\hat{L}_1(\infty)$ denoting the asymptotic value of $\hat{L}_1(br)$. Note that even in the non-relativistic limit the boundary ‘dynamical’ metric $g_{\mu\nu}$’s light-cone is being opened up relative to that of the hypersurface metric. As before this is a consequence of the red-shift effect. We are here normalizing the hypersurface metric to be flat Minkowski to leading order, and this causes a rescaling by an amount $\hat{\alpha}_0^2$ on the boundary. As long as $\hat{\alpha}_0$ is finite this is not an issue, for we are just encountering an overall rescaling of the boundary time. What we have derived here is the AdS analog of the membrane paradigm connection recently proposed in [@Bredberg:2011jq; @Compere:2011dx]. Recall that the construction described in these papers proceeds by looking at an asymptotically flat geometry with Dirichlet boundary conditions at some timelike hypersurface (the analog of our $\Sigma_D$) and one solves vacuum Einstein’s equations. It was shown that the hypersurface dynamics is constrained to obey the incompressible Navier-Stokes equations, just as we have shown above. However, the solutions described in this section solve Einstein’s equations with a negative cosmological constant and we furthermore have argued that the Dirichlet dynamics is obtained by suitably dressing up a CFT fluid by allowing it to propagate on a ‘dynamical’ background metric. In our context it is clear that the interpretation of the physics is less in terms of an RG flow, and more along the lines of the medium dependent ‘dressing up’ of the boundary fluid dynamics, in contrast to the Wilsonian RG perspective put forth as an interpretation of the membrane paradigm originally in [@Bredberg:2010ky]. There is however one regime where our analysis should be able to make some contact with the discussion in [@Bredberg:2011jq; @Compere:2011dx]; this is the near horizon regime where one expects to encounter a local Rindler geometry, which is the starting point for analyzing the Dirichlet problem in flat space. In the next section we show how one can embed this construction using the solutions we have described, thus enabling one to explore the AdS version of the membrane paradigm.

The near horizon Dirichlet problem {#s:nh}
==================================

So far in our discussion the Dirichlet hypersurface $\Sigma_D$ has been located at some radial position $r_D$ that is finite. We now want to investigate what happens as we push this surface closer towards the horizon, i.e., $\Sigma_D \to {\cal H}^+$ via $r_D \to b^{-1}$. To understand the resulting physics we first have to realize that we are doing something strange: the horizon is a null surface and has therefore a degenerate metric.
$\Sigma_D$ on the other hand is constrained to be a timelike surface with a non-degenerate metric. So it is clear that demanding a well-behaved metric is going to result in an infinite scaling by the red-shift factor $\hat{\alpha}$; the main question is whether one can implement the scaling while retaining interesting dynamics. We will now proceed to show that there exists a scaling of parameters such that the near-horizon geometry makes sense. Furthermore, we argue that this allows us to embed the flat space constructions of [@Bredberg:2011jq] into our AdS set-up. We outline the construction in a couple of stages to guide intuition: firstly in we will examine the conservation equation on the hypersurface and from there infer the scaling of parameters. This is the only sensible thing to do for us, since the entire dynamical system of the boundary CFT fluid has been converted into that of the hypersurface fluid living on an inert background. We then examine the consequences of the scalings we derive in focussing on the region close to the horizon; this amounts to blowing up the region of spacetime between ${\mathcal H}^+$ and $\Sigma_D$. This blown up region bears close resemblance to the solution of vacuum Einstein’s equations dual to the incompressible Navier-Stokes system discussed in [@Bredberg:2011jq]. Indeed one should anticipate this based on the usual intuition that near any non-degenerate horizon one encounters a patch of the Rindler geometry. However, we will also encounter differences owing to the fact that at the end of the day we are solving with a non-vanishing cosmological constant. There is also the further question of what the geometry between the Dirichlet surface and the asymptotic AdS region looks like, and moreover what the boundary physics of our scalings is. We argue in that the near horizon scaling regime renders the bulk metric viewed as a Lorentzian metric on a spacetime manifold nonsensical. However, it turns out that there is a nice language to describe the geometry in terms of Newton-Cartan structures and we show that the bulk co-metric (which in usual Lorentzian geometry is the inverse metric) is well behaved, as is the boundary co-metric. The result is quite satisfying from the physical perspective: the near horizon limit demands a drastic modification of the boundary metric, which forces one into a non-relativistic or Galilean regime. Consequently, rather than describing geometry in terms of Lorentzian structure, we are forced to use the less familiar but equally effective geometrization of the idea of a Galilean spacetime, in terms of Newton-Cartan geometry (see [@Misner:1973by] for a nice account of this subject and [@Ruede:1996sy] for a more recent review).

Scaling of the Dirichlet dynamics in the near horizon region {#s:dhypdyn}
------------------------------------------------------------

To understand the behavior of the fluid on the Dirichlet surface as we push $\Sigma_D$ closer to the horizon, we first look at the conservation equation in this limit. It turns out that demanding non-trivial dynamics on the Dirichlet surface forces one into a scaling regime of the fluid, effectively making it non-relativistic. However, the scaling we encounter is not quite the BMW scaling [@Bhattacharyya:2008kq] discussed earlier in but a slightly modified version of the same.
### A new scaling regime {#s:nhnewscale}

Ignoring for the moment the fact that for $r_D < r_{D,snd}$ we are supposed to be projecting out the sound mode to ensure subluminal propagation on the Dirichlet hypersurface, let us write the relativistic conservation equations and examine them as we zoom in towards the horizon. Consider then the conservation equation on the Dirichlet hypersurface ; we have analyzed this from a generic viewpoint in but for now we will focus on the truncated equations to second order by setting $\hat{\pi}^{\mu\nu} = -2\eta\, \hat{\sigma}^{\mu\nu}$ (cf., ). Projecting these conservation equations parallel and transverse to the velocity we get (using $\eta$ as given in ) $$\begin{aligned} && (d-1)\, \hat{u}^\mu \, \frac{ {\hat \nabla}_\mu b}{b} + {\hat \theta} + \frac{2b}{\hat{\alpha}\ d} {\hat u}_\mu \,{\hat \nabla}_\nu {\hat \sigma}^{\mu\nu} = 0 \nonumber \\ && - \, \, \left(1+\frac{d}{2}\,(\hat{\alpha}^2-1)\right)\, {\hat P}_\mu^{\; \alpha} \frac{{\hat \nabla}_\alpha b}{b} + {\hat a}_\mu -\frac{2\, b^d}{\hat{\alpha}\ d} \, {\hat P}_{\mu\alpha} \, {\hat \nabla}_\beta \left(\frac{1}{b^{d-1}}\,{\hat \sigma}^{\alpha \beta}\right) =0 \label{releom1}\end{aligned}$$ We are going to try to analyze these equations in the limit when $\hat{\alpha} \to \infty$. From it is clear that if we insist on leaving the hypersurface data independent of $\hat{\alpha}$ then we find that we have contributions at different orders that need to be independently cancelled. The most constraining equation is the $ {\cal O}(\hat{\alpha}^2)$ term from the transverse equation which demands that the spatial gradients of $b$ vanish. Then at $ {\cal O}(1)$ we have to kill the acceleration and have a non-trivial equation from the longitudinal part. At order $\hat{\alpha}^{-1}$ we would need to kill all terms that show up with the shear viscosity. The upshot is that we are left with vacuous dynamics on the Dirichlet hypersurface should $b$ and $\hat{u}^\mu$ be ${\cal O}(1)$ as $\hat{\alpha} \to \infty$. While this sounds a bit strange, a moment's pause reveals that this is indeed what one should expect on physical grounds. The horizon of a black hole is a null surface (it is generated by null generators) and in the process of moving the Dirichlet hypersurface to the horizon, we are effectively doing an infinite rescaling (hence $\hat{\alpha} \to \infty$) to render the Dirichlet metric $\hat{g}_{\mu\nu}$ timelike and non-degenerate. Before we do such a rescaling however, we are in the ultra-relativistic regime as far as the horizon goes – in such a regime it is natural to expect that there is no dynamics: the fluid streams along the null generators and is effectively frozen into a stationary flow. It is then clear that in order to obtain non-trivial dynamics on the Dirichlet surface one has to scale the fields $b$ and $\hat{u}^\mu$ in some fashion. The crucial question is whether there is any scaling that retains non-trivial dynamics; operationally demanding that we obtain an ‘interesting’ non-linear equation.
Consider the following scaling:[^33] $$b = \hat{\varkappa} \, b_\bullet + \frac{1}{\hat{\varkappa}^{3}}\, \delta b_\star \ , \qquad \hat{u}^\mu = \left( 1 + {\cal O}(\hat{\varkappa}^{-2} ), \;\hat{\varkappa}^{-1} \, v^i_\star\right) \ , \qquad \hat{\alpha} = \hat{\varkappa}\ \hat{\alpha}_\bullet \left( 1 + {\cal O}(\hat{\varkappa}^{-2} )\right) \label{nhscaling}$$ where the functions with subscript $\star$ have a specific functional form with anisotropic scaling of their spatial and temporal gradients. Specifically, $$\hat{{\cal Y}}_\star(t,x^i) : {\mathbb R}^{d-1,1} \mapsto {\mathbb R}\ , \;\; \text{such that} \;\; \{ \partial_t \hat{{\cal Y}}_\star(t,x^i), \partial_i \hat{{\cal Y}}_\star(t,x^i)\} \sim \{{\cal O}(\hat{\varkappa}^{-2}) ,{\cal O}(\hat{\varkappa}^{-1})\}$$ where we assign gradient weight $1$ to the spatial derivatives and $2$ to temporal derivatives. This is of course inspired by the non-relativistic BMW scaling discussed in and, as discussed there, the ${\cal O}(\hat{\varkappa}^{-2})$ part of the velocity field is fixed by normalization.[^34] However, there is a crucial difference from the BMW scaling; the leading term in the expansion of $b$ grows with $ \hat{\varkappa}$, which seems to be an issue. Nevertheless, it is easy to check that under this scaling (which admittedly is obtained by demanding a sensible $ {\cal O}(1)$ equation from the longitudinal part) one finds that the equations can be reduced to: $$\partial_i v^i_\star =0 \ , \qquad -\frac{d\hat{\alpha}_\bullet^2}{2\, b_\bullet} \partial_i \delta b_\star + \partial_t v_i^\star + v^j_\star \partial_j v_i^\star - \frac{b_\bullet}{\hat{\alpha}_\bullet\ d} \nabla^2 v_i^\star = 0 \label{nrnseq1}$$ where we have specialized to a flat Minkowski background metric ($\hat{g}_{\mu\nu} = \eta_{\mu\nu}$) for concreteness and to make the equations more familiar. These are, of course, the unforced (by the background) Navier-Stokes equations which we have earlier described via the BMW scaling in . The incompressibility condition, as in that case, arises from the longitudinal conservation equation at ${\cal O}(\hat{\varkappa}^{-2}\,\hat{\alpha})$ while the Navier-Stokes equation itself appears at ${\cal O}(\hat{\varkappa}^{-3} \, \hat{\alpha})$. A similar scaling can be done for the forced Navier-Stokes system; we simply use the BMW scaling for the fluctuating part of the hypersurface metric.

### Deconstructing the near horizon scaling {#s:dcnh}

How do we reconcile this scaling derived here for the relativistic fluid with that derived by BMW [@Bhattacharyya:2008kq]? Firstly, let us note that the rationale for scaling up the background value of $b \sim \hat{\varkappa}\, b_\bullet$ is that the shear term survives the scaling. From the second equation of we clearly see that this is required in order to retain the shear term in the limit. On the other hand the non-relativistic pressure in the BMW limit is proportional to $\hat{\alpha}_0^2\, \frac{\delta b_*}{b_0}$ and in the near horizon limit both $b_0 \sim \hat{\varkappa}\, b_\bullet$ and $\hat{\alpha}_0 \sim \hat{\varkappa}$ diverge. The extra scaling down of $\delta b_* \sim \frac{1}{\hat{\varkappa}}\, \delta b_\star$ (over and above the scaling by $\hat{\varkappa}^{-2}$) is necessary to offset this divergence and ensure that the pressure gradient term contributes at the same order as the convective derivative $\partial_t + v^i_*\partial_i $ and the shear.
The effective pressure that enters into the equation is $$\hat{\varkappa}^{-2}\,\hat{p}_\star = -\frac{d}{2} \, \hat{\varkappa}^{2}\,\hat{\alpha}_\bullet^2 \frac{\hat{\varkappa}^{-3}\, \delta b_\star}{\hat{\varkappa} \, b_\bullet}$$ using the scaling of $\hat{\alpha}$ in so that the first term of the Navier-Stokes equation in is essentially $\partial_i \hat{p}_\star$. So the final equations of motion in the near horizon region for the hypersurface dynamics are simply: $$\partial_i v^i_\star =0 \ , \qquad \partial_i \hat{p}_\star + \partial_t v_i^\star + v^j_\star \partial_j v_i^\star - \, \hat{\nu}_\bullet \nabla^2 v_i^\star = 0 \label{nrnseq2}$$ with kinematic viscosity $$\hat{\nu}_\bullet = \frac{b_\bullet}{\hat{\alpha}_\bullet\ d}$$ We have here glossed over the fact that since we scale $b \sim \hat{\varkappa}$ we potentially have a problem. The limit seems to suggest that we are taking the zero temperature limit of the black hole geometry since the Hawking temperature (which is seen on the boundary) is $T \sim b^{-1}$. This naively sounds like we are outside the long wavelength regime and as a result should not be using the fluid/gravity map to describe the Dirichlet problem. A different way to say this from a dynamical equation of motion perspective is that the scalings were derived by demanding that the equations remain non-trivial, which operationally means that different terms appear to have inhomogeneous weights. Hence by suitably fiddling with amplitudes and derivatives we arranged for terms which originally scaled as various powers of $\hat{\alpha}$ to all contribute homogeneously. However, we have not analyzed the higher order contributions to the equations of motion. Is it possible that relativistic two-derivative terms in the stress tensor, which show up as the corrections to the viscous relativistic hydrodynamics equations of , show up at the same order? Note that this is not a problem for the case discussed in [@Bhattacharyya:2008kq], for they engineered their scalings about a fixed background temperature, and there were no stray factors of $\hat{\aleph}$ (equivalently $\hat{\varkappa}$) in (in front of $b_0$) to augment higher order contributions. For us to make the argument that the higher order terms are suppressed requires knowledge of the transport coefficients at second order on the Dirichlet hypersurface. This is in principle calculable by extending the Dirichlet fluid/gravity map of to one higher order in gradients. However, we will now argue that there is an essential simplification which allows us to make the statement boldly without computation. The rationale is simply that the enhanced scaling of the zero mode part of $b$ is compensated by suppressing the corresponding fluctuation term $\delta b$ so as to ensure that $\hat{p}_\star$ is finite. Since the overall value of $b_0$ scales out in the non-relativistic regime (we will see this clearly from implementing the scalings in the next subsection) it cannot affect the resulting equations. So we conjecture that, accounting for higher order corrections as well, one will obtain as the leading order equations in the $\hat{\varkappa}$ expansion (all further corrections are suppressed by higher powers of $\hat{\varkappa}$).
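As a quick consistency check of this limiting system, the following symbolic sketch (ours, purely illustrative; it is not part of the original derivation) verifies that the classic two-dimensional Taylor–Green flow solves unforced incompressible Navier–Stokes equations of exactly the form obtained above.

```python
# Symbolic check (illustration only): the 2d Taylor-Green flow solves
#   div v = 0 ,   grad p + (d_t + v.grad) v - nu Lap v = 0 ,
# i.e., the near-horizon hypersurface equations quoted above (flat background, unforced).
import sympy as sp

t, x, y, nu = sp.symbols('t x y nu', positive=True)

vx = sp.cos(x) * sp.sin(y) * sp.exp(-2 * nu * t)
vy = -sp.sin(x) * sp.cos(y) * sp.exp(-2 * nu * t)
p = -sp.Rational(1, 4) * (sp.cos(2 * x) + sp.cos(2 * y)) * sp.exp(-4 * nu * t)

div_v = sp.diff(vx, x) + sp.diff(vy, y)

def ns_residual(vi, dp):
    """Residual of grad p + (d_t + v.grad) v_i - nu Lap v_i."""
    conv = vx * sp.diff(vi, x) + vy * sp.diff(vi, y)
    lap = sp.diff(vi, x, 2) + sp.diff(vi, y, 2)
    return dp + sp.diff(vi, t) + conv - nu * lap

print(sp.simplify(div_v),
      sp.simplify(ns_residual(vx, sp.diff(p, x))),
      sp.simplify(ns_residual(vy, sp.diff(p, y))))   # all three vanish
```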
One physical way to motivate the correction is to first note that while the boundary temperature is being scaled to zero in the limit, the hypersurface temperature can indeed be kept finite:[^35] $$\hat{T}_\bullet = \frac{\hat{\alpha}_\bullet\ d}{4\pi \,b_\bullet}$$ So from the Dirichlet observer’s point of view there is still scope for a non-relativistic scaling, and in fact this is how the Dirichlet observer would carry out the BMW regime. What is clear is that due to the extra red-shifting in translating to the boundary, the asymptotic observer is going to have trouble reconciling this near horizon scaling regime in his variables. We will return to this issue in . In summary, the non-relativistic incompressible Navier-Stokes equations determine the dynamics of the hypersurface fluid as $\Sigma_D$ approaches the horizon; we will now proceed to investigate what this means for the various metrics: first we look at the bulk metric and then examine what the effect on the boundary is.

The bulk metric between $\Sigma_D$ and ${\mathcal H}^+$ {#s:nhdirmet}
-------------------------------------------------------

From the discussion at the end of it seems natural to ask whether we can satisfy the constraints of the Dirichlet problem in the region between $\Sigma_D$ and the horizon (for the moment forgetting about the asymptotic region). Clearly we have $\Sigma_D$ approaching the horizon, so we will be forced to consider a double-scaling regime where we zoom in close to the horizon and expand out the spacetime region in-between. Let us temporarily ignore the consequences of this on the equations of motion and formally take the limit of the bulk metric given in , . To zoom into the region between the horizon and $\Sigma_D$, realizing that $r_D \to \frac{1}{b}$ in the limit, we take: $$r = \frac{1}{b_\bullet\, \hat{\varkappa}} \left(1+ \frac{\rho}{\hat{\varkappa}^2 b_\bullet\ \hat{\alpha}_\bullet}\right)$$ from which it follows that $$dr = \frac{1}{\hat{\varkappa}^3\, b_\bullet^2\ \hat{\alpha}_\bullet } \,d\rho$$ Similarly we parametrize the position of our Dirichlet hypersurface via $$r_D = \frac{1}{b_\bullet\, \hat{\varkappa}} \left(1+ \frac{\rho_D}{\hat{\varkappa}^2 b_\bullet\ \hat{\alpha}_\bullet}\right) .$$ Both $\rho_D$ and $\hat{\alpha}_\bullet$ are a measure of how close we are to the horizon after one has zoomed into the near horizon region. To relate them, we substitute $r_D$ into the expression for $\hat{\alpha}$ and do a large $\hat{\varkappa}$ expansion. Identifying the leading term as $\hat{\alpha}_\bullet$, we get $$\rho_D \equiv\frac{b_\bullet}{\hat{\alpha}_\bullet\ d} = \hat{\nu}_\bullet$$ We see that $\rho_D$ is the same as the kinematic viscosity - hence the $\rho_D\to 0$ limit is identical to the inviscid limit in hydrodynamics. Implementing this change of variables and the scaling in the bulk metric we obtain the near-horizon metric. In fact, the fastest way to derive the metric in is to utilize the fact that the near horizon limit is essentially the non-relativistic limit together with a particular form of the pressure fluctuations; essentially we substitute $\hat{\alpha}_0 = \hat{\varkappa} \, \hat{\alpha}_\bullet$, $b_0 = \hat{\varkappa}\, b_\bullet$, $\hat{p}_* = \hat{p}_\star$ into along with the identification $\hat{\aleph} = \hat{\varkappa}$ to ensure that we have the correct scalings.
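The identification of $\rho_D$ with the kinematic viscosity can also be checked symbolically. The sketch below (our illustration; the variable names are made up) expands $\hat{\alpha}=(1-(b\,r_D)^{-d})^{-1/2}$ using the near-horizon parametrization above, with $\hat{\varkappa}=1/\epsilon$, and solves for $\rho_D$ by matching the leading term to $\hat{\varkappa}\,\hat{\alpha}_\bullet$.

```python
# Symbolic check (illustration only): expanding alpha_hat at r = r_D in the near-horizon
# parametrization and matching the leading term to varkappa*alpha_bullet fixes
# rho_D = b_bullet/(alpha_bullet*d), i.e. rho_D equals the kinematic viscosity nu_bullet.
import sympy as sp

eps, b_bul, a_bul, rho_D, d = sp.symbols(
    'epsilon b_bullet alpha_bullet rho_D d', positive=True)

varkappa = 1 / eps                      # large parameter
b = varkappa * b_bul                    # leading near-horizon scaling of b
r_D = (1 + rho_D * eps**2 / (b_bul * a_bul)) / (b_bul * varkappa)

# blackening factor at the Dirichlet surface; note b*r_D = 1 + O(eps^2) exactly
f_rD = sp.expand(sp.series(1 - (b * r_D) ** (-d), eps, 0, 4).removeO())
c2 = f_rD.coeff(eps, 2)                 # leading coefficient of the expansion

# alpha^2 ~ varkappa^2 / c2 ; demanding alpha -> varkappa*alpha_bullet fixes rho_D
sol = sp.solve(sp.Eq(1 / c2, a_bul**2), rho_D)
print(sol)                              # [b_bullet/(alpha_bullet*d)]
```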
We first calculate the near horizon expansions of various functions appearing in the metric $$\begin{split} \hat{\alpha} &= \hat{\alpha}_\bullet \, \hat{\varkappa} \left[1 + \hat{\varkappa}^{-2} \left( \hat{p}_\star + \frac{d+1}{4d\hat{\alpha}_\bullet^2}\right)+ {\cal O}(\hat{\varkappa}^{-4}) \right] \\ \hat{\varkappa}^2 b_\bullet^2 r^2 &= \left[1+\hat{\varkappa}^{-2}\frac{2\rho}{\hat{\alpha}_\bullet^2\rho_D d}+{\cal O}(\hat{\varkappa}^{-4}) \right]\\ \hat{\varkappa}^2 b_\bullet^2 r^2\left[1- \hat{\alpha}^2\, f(br) \right] &=\left(1-\frac{\rho}{\rho_D}\right) \left[1 + 2\hat{\varkappa}^{-2} \left( \hat{p}_\star -\frac{d-3}{4d\hat{\alpha}_\bullet^2}\frac{\rho}{\rho_D}\right)+ {\cal O}(\hat{\varkappa}^{-4}) \right] \\ -\hat{\varkappa}^2 b_\bullet^2 r^2 \hat{\alpha}^2\, f(br)&= -\frac{\rho}{\rho_D} + 2\hat{\varkappa}^{-2}\hat{p}_\star\left(1-\frac{\rho}{\rho_D}\right)\\ &\quad +\hat{\varkappa}^{-2}\frac{\rho}{2\hat{\alpha}_\bullet^2\,\rho_D} \left(\frac{(d-3)\, \rho- (d+1) \, \rho_D}{\rho_D d} \right) + {\cal O}(\hat{\varkappa}^{-4}) \\ \frac{2\hat{\alpha}^2}{r_D\left(1+\frac{d}{2}(\hat{\alpha}^2-1)\right)} &= 4 \rho_D \hat{\varkappa}\hat{\alpha}_\bullet\left[1 + {\cal O}(\hat{\varkappa}^{-2}) \right]\\ 2\,r\,\frac{\hat{\alpha}(2\hat{\xi}-1)}{1+\frac{d}{2}\, (\hat{\alpha}^2 -1)} &= \frac{4\rho_D}{\hat{\varkappa}^2 b_\bullet^2}\left(1-\frac{\rho}{\rho_D}\right)\left[1 + {\cal O}(\hat{\varkappa}^{-2}) \right]\\ \end{split}$$ Using these, we get the metric as $$b_\bullet^2\, \hat{\varkappa}^2\,ds^2 = ds_0^2 + \hat{\varkappa}^{-1} \, ds_1^2 + \hat{\varkappa}^{-2} \, ds_2^2 + \hat{\varkappa}^{-3} \, ds_3^2+ {\cal O}(\hat{\varkappa}^{-4}) \label{}$$ where $$\begin{aligned} ds_0^2 &=& 2\, dt \, d\rho -\,\frac{\rho}{\rho_D} dt^2 + \delta_{ij}\, dx^i\, dx^j \nonumber \\ ds_1^2 &= & - 2\, \hat{v}^\star_i \, d\rho\, dx^i - 2 \, \left(1-\frac{\rho}{\rho_D} \right) \hat{v}^\star_i \, dt\, dx^i \nonumber \\ ds_2^2 &= & 2\, \left( \frac{1}{2}\,\hat{v}_\star^2 + \hat{p}_\star + \frac{d+1}{4\,d} \frac{1}{\hat{\alpha}_\bullet^2}\right) dt \, d\rho \nonumber \\ && \qquad + \; \left[ \left(1-\frac{\rho}{\rho_D} \right) \left( \hat{v}_\star^2 + 2 \hat{p}_\star \right) + \frac{\rho}{2\hat{\alpha}_\bullet^2\,\rho_D} \left(\frac{(d-3)\, \rho- (d+1) \, \rho_D}{\rho_D d} \right)\right] dt^2 \nonumber \\ && \qquad +\; \left[\left(1-\frac{\rho}{\rho_D} \right) \hat{v}^\star_i\,\hat{v}^\star_j + \, \frac{2 \, \rho}{\hat{\alpha}_\bullet^2\rho_D d}\, \delta_{ij} \right] dx^i \, dx^j \nonumber \\ ds_3^2 &= & 2 \left(1-\frac{\rho}{\rho_D} \right) \left[2\rho_D \left( \partial_{t}\hat{v}_{i}^\star +\hat{v}_\star^{j} \partial_j \hat{v}_{i}^\star\right) - \left(\hat{v}_\star^2 + 2 \hat{p}_\star - \frac{d-3}{2d\hat{\alpha}_\bullet^2}\, \frac{\rho}{\rho_D}\right) \hat{v}_i^\star \right] dx^{i}\, dt \nonumber \\ && \qquad +\; 2\left[2\rho_D \left( \partial_{t}\hat{v}_{i}^\star +\hat{v}_\star^{j} \partial_j \hat{v}_{i}^\star\right)-\;\, \left( \frac{1}{2}\,\hat{v}_\star^2 + \hat{p}_\star + \frac{d+1}{4\,d} \frac{1}{\hat{\alpha}_\bullet^2}\right) \, \hat{v}_i^\star \,\right] d\rho\, dx^i \, \nonumber \\ && \qquad -\,\left[\rho^2-\rho_D^2+4\, \rho_D\,(\rho_D-\rho)\right]\,\partial_j\partial^j \hat{v}^{\star}_i \, dx^i\,dt + \frac{1}{d}\, \partial_j\partial^j \hat{v}^{\star}_i \, d\rho\, dx^i \,. 
\label{hypnrscale} \end{aligned}$$ We have restricted attention to the simplest setting where $\hat{g}_{\mu\nu} = \eta_{\mu\nu}$ (hence $\hat{g}^{(0)}_{ij} = \delta_{ij}$ above) and $\hat{v}_\star^2 = \delta_{ij} \, \hat{v}_\star^i \, \hat{v}_\star^j$. In deriving the expressions as in we have used the incompressibility condition, which appears as before as the leading order fluid equation of motion. The Navier-Stokes equations themselves can also be used to simplify the last term in : one can replace the convective derivative of the spatial velocity $\partial_{t}\hat{v}_{i}^\star +\hat{v}_\star^{j} \partial_j \hat{v}_{i}^\star$ by $-\partial_i \hat{p}_\star + \hat{\nu}_\bullet \, \nabla^2 v_i^\star$. This is in fact closely related (but not identical) to the metric derived in [@Bredberg:2011jq] and generalized further in [@Compere:2011dx] to higher orders (compare for instance with Eqs. (3.2) and (6.5) of the latter paper). We recall that these works consider solutions to vacuum Einstein’s equations without a cosmological constant. In particular, [@Bredberg:2011jq] solves the Dirichlet problem in flat space by looking for small gradient fluctuations around a Rindler geometry, while we are dealing with solutions of which has a non-vanishing negative cosmological constant. We see that to leading order we nevertheless should expect agreement with the analysis of [@Bredberg:2011jq; @Compere:2011dx] – this is clear from the first two lines of . This reflects the universal Rindler nature of a non-degenerate horizon such as that of the planar Schwarzschild-AdS$_{d+1}$ solution we started with. Moreover, we can also physically understand how a solution of reduces to that of vacuum Einstein’s equations by noting that in the limit we consider here, the cosmological constant gets diluted away. The factor of $\hat{\varkappa}$ on the l.h.s. is essentially indicative of $R_\text{AdS} \to \hat{\varkappa}^2 \, R_\text{AdS}$ in our near horizon limit. This is to be expected; by zooming in on the region between $\Sigma_D$ and ${\cal H}^+$ we are effectively blowing up a small sliver of the spacetime and are thus losing any information about the background curvature scale $R_\text{AdS}$. In some sense the near horizon limit is like the limit we take to decouple the asymptotically flat region from the AdS throat in the D3-brane geometry; what is unclear is whether some notion of decoupling exists in the present context. But starting at ${\cal O}(\hat{\varkappa}^{-2})$ on the l.h.s. of we start to see deviations from the metric presented in [@Bredberg:2011jq; @Compere:2011dx]. Even though we are zooming close to the horizon of a Schwarzschild-AdS$_{d+1}$ black hole, starting at second order we should expect to see curvature contributions. The various terms at this and higher orders originate from how the Rindler geometry gets corrected as we step away from the strict limit. In particular, using the near-horizon scaling of variables we see that reduces to $$R_{AB}+\frac{d}{\hat{\varkappa}^2} \; \frac{1}{(d\, \hat{\alpha}_\bullet\, \rho_D)^2}\; {\mathcal G}_{AB}=0\,. \label{}$$ It follows that all terms which involve $\frac{1}{\hat{\alpha}_\bullet^2}$ in the denominator correspond to the corrections due to the AdS curvature. Setting such terms in to zero, i.e., taking $\hat{\alpha}_\bullet \to \infty$, leads us to the metrics derived in [@Bredberg:2011jq; @Compere:2011dx].
We are here restricting attention to the metric to leading orders in the $\hat{\varkappa}$ expansion; up to ${\cal O}(\hat{\varkappa}^{-3})$. In principle, since the fluid/gravity metrics are known to higher orders, one can carry out the construction further, and we expect to be able to reproduce the results of [@Compere:2011dx], who have derived the expressions in the near horizon limit to ${\cal O}(\hat{\varkappa}^{-6})$ in our notation. Note that this simplification of the near horizon region construction happens because of the correlated amplitude and gradient scaling, as already emphasized in [@Bhattacharyya:2008kq]. Near horizon limit and Galilean degenerations {#s:nhgal} --------------------------------------------- Having seen that it is possible to take the near horizon scaling limit and retain both interesting dynamics on $\Sigma_D$ and further have a sensible geometry between $\Sigma_D$ and ${\mathcal H}^+$, our next goal is to address whether we can make sense of the metric between the hypersurface and the boundary. There are already many indications that this is going to be problematic; the vanishing of the boundary temperature $\propto b^{-1}$ being the most prominent one, which suggests a breakdown of the gradient expansion. The bulk metric turns out not to make sense, but we shall see that there is an object that does – this is the inverse bulk metric which we shall refer to as the co-metric (see below). Similarly, the boundary co-metric is also well behaved and, we will argue, naturally provides a Newton-Cartan like structure on the boundary, so that the boundary geometry degenerates from a Lorentzian manifold into a Galilean manifold. In the following, we will refer to the metric on the co-tangent bundle, $g^{\mu\nu}$, as the co-metric. This is a more accurate terminology in the context of the non-relativistic limit (the Galilean or Newton-Cartan limit) than the usually preferred ‘inverse metric’ since in this limit the co-metric degenerates (becomes non-invertible) and the usual metric ceases to exist. Hence, we will prefer as much as possible to work with the co-metric instead of the metric. ### Emergence of Galilean structure on the boundary Let us first examine what happens to the boundary metric data when we try to push the Dirichlet hypersurface towards the horizon. This corresponds to $\hat{\alpha}$ tending to infinity. We will work with the relativistic expressions of to maintain covariance; it is easy to then pass over to the near horizon scaling regime discussed in and obtain explicit parameterization of the results. From the formulae for the boundary metric in terms of $u^\mu$ we see that there are various terms which blow up. Moreover, in some cases the higher derivative terms overwhelm the lower derivatives, suggesting a breakdown of the derivative expansion, as already suspected from the vanishing of the boundary temperature. It is clear that this limit, if it exists, cannot be straightforward. As the Dirichlet hypersurface tries to approach the horizon, it first hits the hypersurface $r_D=r_{D,snd}$. From the boundary viewpoint, the interaction of the fluid with its gravitational potential packet drives the system to have superluminal sound propagation: $\hat{c}_{snd}$ exceeds the dressed speed of light as determined by $\hat{g}^{\mu\nu}$. By this point the interaction between the boundary metric and the fluid has become so important that the bare velocities $u^\mu$ have no more physical significance.
We should rather think in terms of the dressed fluid and see how we can make sense out of the approach to the horizon. So, we will first rewrite the above formulae in terms of the dressed velocity $\hat{u}^\mu$. The dictionary for the (normalized) metric and the co-metric are $$\begin{split} g_{\mu\nu} &= \hat{g}_{\mu\nu} + \left[1-\hat{\alpha}^2-\frac{2\,\hat{\alpha}^3\,\hat{\theta}}{r_D(\,d-1)}\right]\hat{u}_\mu \hat{u}_\nu+\frac{2\,\hat{\alpha}^3 \, \hat{u}_{(\mu} \hat{a}_{\nu)}}{r_D\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]} - \frac{2\,b}{\hat{\alpha}} \,F(br_D)\,\hat{\sigma}_{\mu\nu} \\ g^{\mu\nu} &= \hat{g}^{\mu\nu}+ \left[1-\frac{1}{\hat{\alpha}^2}+\frac{2\, \hat{\theta}}{\hat{\alpha}\, r_D\, (d-1)}\right]\hat{u}^\mu \hat{u}^\nu+\frac{2\,b}{\hat{\alpha}} \,F(br_D)\, \hat{\sigma}^{\mu\nu}-\frac{2\,\hat{\alpha}\, \hat{u}^{(\mu} \hat{a}^{\nu)}}{r_D\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\\ \end{split}$$ We have already remarked on the necessity to project out the dressed sound mode and take an incompressible limit of the dressed fluid a la BMW as the Dirichlet hypersurface crosses $r_D=r_{D,snd}$ in . The formulae above reinforce that intuition – we see for example that unless $\hat{\theta}$ is sent to zero, the first derivative terms in the boundary metric overwhelm the zeroth order answer, thus leading to a breakdown of the derivative expansion. So, we will drop the $\hat{\theta}$ terms with the understanding that we are projecting into the incompressible sector of the dressed fluid. Even this does not seem to help the case for the metric, since it still diverges – keeping only leading order terms (and assuming none of the subsequent terms grow with $\hat{\alpha}$ – we will postpone a more careful analysis for the future) we get $$\begin{split} g_{\mu\nu} &= -\hat{\alpha}^2\hat{u}_\mu \hat{u}_\nu+\ldots\\ g^{\mu\nu} &= \hat{g}^{\mu\nu}+ \hat{u}^\mu \hat{u}^\nu+\ldots\\ \end{split}$$ We are thus led to a remarkable conclusion: in the near horizon limit the Dirichlet constitutive relation is such that the boundary co-metric degenerates along $t_{\mu}\equiv\hat{u}_\mu$, i.e., $g^{\mu\nu}t_\nu=0$, and the boundary metric has one divergent time-like eigenvalue along the same direction, $g_{\mu\nu}= -\hat{\alpha}^2 t_\mu t_\nu$. This is the signature that the boundary metric is becoming Newtonian/non-relativistic – crudely speaking this is analogous to the $c\to\infty$ behavior of the Minkowski metric/co-metric $$\eta^{\mu\nu}=\text{diag}(-\frac{1}{c^2},1,1,\ldots) \quad\text{and}\quad \eta_{\mu\nu}=\text{diag}(-{c^2},1,1,\ldots)$$ The way to make this mathematically precise is to resort to what is called a Galilean manifold. A Galilean manifold is a manifold with a Newtonian time co-vector $t_\mu$, and a degenerate co-metric $g^{\mu\nu}$ satisfying $g^{\mu\nu}t_\nu=0$. As in standard differential geometry, we can demand a connection that is compatible with this structure, i.e., a torsionless connection defining a covariant derivative $\nabla^{\text{NC}}_\mu$ which is compatible with both the co-metric and the time co-vector: $\nabla^{\text{NC}}_\lambda g^{\mu\nu}=0$ and $\nabla^{\text{NC}}_\mu t_\nu=0$. If such a torsionless covariant derivative compatible with the Galilean structure exists, we call $\nabla^{\text{NC}}$ the Newton-Cartan connection and the corresponding Galilean manifold is termed a Newton-Cartan manifold.
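As a concrete illustration of these definitions, the $c\to\infty$ degeneration written above already furnishes the simplest example of such a structure: on flat space with absolute time one may take $$t_\mu\, dx^\mu = dt \qquad\text{and}\qquad g^{\mu\nu}\,\partial_\mu\otimes\partial_\nu = \delta^{ij}\,\partial_i\otimes\partial_j\,,$$ which manifestly satisfies $g^{\mu\nu}t_\nu = 0$, while the ordinary partial derivative $\nabla^{\text{NC}}_\mu = \partial_\mu$ provides a torsionless connection with $\nabla^{\text{NC}}_\lambda g^{\mu\nu}=0$ and $\nabla^{\text{NC}}_\mu t_\nu=0$. Flat space with Newtonian absolute time is thus the simplest Newton-Cartan manifold.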
See [@Misner:1973by; @Ruede:1996sy] for further discussions and [@Bagchi:2009my] for another AdS/CFT perspective on the Galilean structures. Thus we have just argued that there is a natural emergence of Galilean structure in the boundary theory as we take our Dirichlet surface nearer and nearer to the horizon. In this case, the metric that we have to put the field theory in gets enormously simplified and we get just a CFT on a Galilean manifold with a Galilean co-metric having $\hat{u}_\mu$ as the degenerate direction. This is a very precise way of stating what the membrane paradigm is from the boundary theory viewpoint – the membrane paradigm is a particular Galilean limit of the field theory.[^36] Having shown that we have a Galilean structure on the boundary, we further ask: “Is there a Newton-Cartan structure?” Consider the Galilean limit of the Christoffel connection $\nabla_\mu$ which is by construction torsionless and compatible with the Galilean co-metric $g^{\mu\nu}$. This could be a sensible Newton-Cartan structure if it annihilates the time co-vector; to see what $\nabla_\mu t_\nu$ is we use $$\begin{split} \tilde{\Gamma}_{\mu\nu}{}^\rho\hat{u}_\rho &= -(1-\frac{1}{\hat{\alpha}^2})\left[ \hat{\sigma}_{\mu\nu}+\frac{\hat{\theta}}{d-1}\ \left(\hat{P}_{\mu\nu}+\frac{d}{2}\hat{\alpha}^2\hat{u}_\mu \hat{u}_\nu\right)-\frac{d\hat{\alpha}^2}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\hat{a}_{(\mu}\hat{u}_{\nu)}\right]\\ \end{split}$$ so that $$\begin{split} \nabla_\mu t_\nu &= \hat{\nabla}_\mu\hat{u}_\nu+\tilde{\Gamma}_{\mu\nu}{}^\rho\hat{u}_\rho\\ &= -\frac{d}{2}(\hat{\alpha}^2-1)\frac{\hat{\theta}}{d-1}\hat{u}_\mu \hat{u}_\nu+\hat{\omega}_{\mu\nu}+\hat{a}_\mu \hat{u}_\nu +\frac{1}{\hat{\alpha}^2}\left[\hat{\sigma}_{\mu\nu}+\frac{\hat{\theta}}{d-1}\hat{P}_{\mu\nu}\right] -\frac{2\hat{a}_{(\mu}\hat{u}_{\nu)}}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]} \end{split} \label{nablat}$$ This in fact determines a sensible answer in the infinite $\hat{\alpha}$ limit, once we factor in our earlier observation that $\hat{\theta}$ is suppressed in the near horizon limit (it is ${\cal O}(\hat{\alpha}^{-4})$), as are any other gradients of the velocity field. These scalings can be read off essentially from as the near horizon limit is the BMW limit as far as the velocities are concerned. The leading contribution to comes at ${\cal O}(\hat{\alpha}^{-2})$ from the vorticity and expansion. It is tempting to speculate that the degeneration of the co-metric along with define a Newton-Cartan like limit and we will postpone a structural analysis of this limit for future work. In the next subsection, we will try to examine the bulk co-metric to see what this Galilean limit entails in the bulk. The bulk co-metric in the Near-Horizon limit -------------------------------------------- In the preceding subsection, we argued for the emergence of Galilean structures in the Near-Horizon limit, thus forcing us to formulate the boundary geometry in terms of a co-metric rather than a metric. Since the boundary metric is ill-behaved in this limit, it is clear that the bulk metric should be ill-behaved too. This poses a conundrum since it seems as if in taking the near-horizon limit we have destroyed the AdS asymptopia and one might wonder whether there is any sense in talking about the bulk geometry far away from the horizon. We will in this subsection take the description of the boundary geometry via a co-metric as a clue and argue that the bulk geometry is also well-described by a bulk co-metric.
Thus while the metric description might break down the co-metric description with its associated Galilean structures continue to describe the bulk and the boundary geometry. Now we present the co-metric as a function of the Dirichlet data. $$\begin{aligned} \mathcal{G}^{AB}\partial_A\otimes\partial_B &=& \left[r^2\, f(br)-\frac{2\, r\, \theta}{d-1}\right]\partial_r\otimes\partial_r+2\left[u^\mu -r^{-1} a^{\mu}\right]\partial_\mu\otimes_s\partial_r \nonumber \\ &&\qquad \qquad +\; r^{-2}\left[P^{\mu\nu} -2b F(br)\ \sigma^{\mu\nu}\right]\partial_\mu\otimes\partial_\nu \nonumber \\ &=& \left[r^2\, f(br)-\frac{2\,r\,\hat{\theta}}{\hat{\alpha}\,(d-1)}\right]\partial_r\otimes\partial_r \nonumber \\ &&\quad +\; 2\left[\frac{\hat{u}^\mu}{\hat{\alpha}}\left(1-\frac{\hat{\alpha}\,\hat{\theta}}{r_D\,(d-1)}\right) - \frac{\hat{a}^\mu}{r\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\right]\partial_\mu\otimes_s\partial_r \nonumber \\ &&\quad +\;r^{-2}\left[\hat{P}^{\mu\nu} -\frac{\hat{\alpha}\left[\hat{u}^\mu \hat{a}^\nu + \hat{a}^{\mu}\hat{u}^\nu\right]}{r_D\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]} -2b\, \hat{F}(br)\ \hat{\sigma}^{\mu\nu}\right]\partial_\mu\otimes\partial_\nu\end{aligned}$$ where $$\begin{split} \hat{F}(br) &\equiv \frac{1}{\hat{\alpha}}\left(F(br)-F(br_D)\right) = \frac{1}{\hat{\alpha}} \; \int_{br}^{br_D}\; \frac{y^{d-1}-1}{y(y^{d}-1)}dy\,. \end{split}$$ As we anticipated, there are no problems evident in the infinite $\hat{\alpha}$ limit (provided one takes $\hat{\theta}$ to zero appropriately to preserve the validity of derivative expansion). Contrast this with the bulk metric (see ) whose large $\hat{\alpha}$ limit seems dubious. This supports our contention that there is a completely sensible description of the bulk and the boundary geometry in terms of co-metrics everywhere when we take the near-horizon Dirichlet problem along with an incompressible limit on the dressed fluid. Our exercise of rewriting the near-horizon bulk metric in terms of the non-relativistic variables can be repeated in the case of co-metric and we can easily convince ourselves that this goes through without any new subtleties. In fact, most terms in the above expression are sub-dominant in the $\hat{\varkappa}$ expansion introduced in the previous subsections. In fact, at the zeroth order the spatial part of the co-metric is just that of vacuum AdS; with the temporal part contributing only at a higher order. The horizon structure encoded in $(br)^{-d}$ is completely invisible until quite high orders in $\hat{\varkappa}$ expansion, which is to be expected since the boundary temperature is being scaled to zero. To be specific using the scaling we find $$\mathcal{G}^{AB}\partial_A\otimes\partial_B = r^2 \, \partial_r \otimes \partial_r + \frac{1}{r^2} \, \partial_i \otimes \partial_i + \hat{\varkappa}^{-1} \left(2 \, \partial_t \otimes_s \partial_r \right) + {\cal O}(\hat{\varkappa}^{-2}) \,.$$ Basically we seem to find that the co-metric description is mostly oblivious to the near-horizon geometry (which as we have seen is well-described in the metric description). Hence, we are naturally led to an effective description where there are two regions of the geometry: one well-described by a metric, and another by a co-metric. The interesting question is to see whether there is an overlap region where the two descriptions are valid and the metric is just the inverse of the co-metric. Having detained the reader for so long, we will leave the detailed answer of this question to future work. 
However, we would like to draw the attention of the reader to the following fact – the near-horizon limit of the Dirichlet problem naturally seems to lead to novel geometric structures closely associated with the Galilean limit. These Galilean structures evidently call for more detailed studies especially since they might hold valuable lessons for non-relativistic holography (as previously pointed out in [@Bagchi:2009my]). Discussion {#s:discuss} ========== The bulk Dirichlet problem in , which we defined as the gravitational dynamics in a spacetime with negative cosmological constant, subject to Dirichlet boundary conditions on a preferred timelike hypersurface $\Sigma_D$, is interesting in the AdS/CFT context for several reasons. For one, it allows us to investigate questions in a cut-off AdS spacetime which might be dual to a field theory with a rigid UV cut-off via the usual AdS/CFT dictionary. Furthermore, it allows us to touch upon the ideas involving holographic renormalisation and the implementation of the Wilsonian RG ideas in the gravitational description. The main motivation behind our work was to ask what one is doing in the dual field theory, living on the boundary of the AdS spacetime, to ensure these Dirichlet boundary conditions on $\Sigma_D$, and whether such a problem is always well-posed. For linear systems in AdS it is easy to see that there is a simple map between data on the hypersurface and that on the boundary. In particular, Dirichlet boundary conditions on $\Sigma_D$ translate into non-local multi-trace deformations of the dual field theory. One can think of the boundary theory being deformed by some ‘state dependent’ sources. Such a boundary condition, while seemingly bizarre from the standard field theoretic perspective, has an effective description in terms of a much simpler and local source, which is the Dirichlet data on $\Sigma_D$. In this paper we have further argued that such bulk Dirichlet problems for full non-linear gravitational dynamics in are also amenable to solution in a certain long wavelength regime. Using results from the fluid/gravity correspondence [@Bhattacharyya:2008jc; @Bhattacharyya:2008mz] we have constructed the bulk spacetime resulting from the specification of a hypersurface metric on $\Sigma_D$. The solution is contingent upon the hypersurface dynamics, i.e., upon the stress tensor induced on $\Sigma_D$ being conserved. In the long wavelength regime this stress tensor can be written as that of a non-conformal fluid propagating on the fixed geometry of $\Sigma_D$. Armed with such a solution we can examine the dual CFT to determine how one is achieving the Dirichlet dynamics. In the long wavelength regime the boundary theory is described by fluid dynamics with what we term Dirichlet constitutive relations. In particular, while the stress tensor of the boundary fluid is of the conventional conformal fluid form, it lives on a background geometry whose metric is ‘dynamical’ in the following sense: the boundary metric depends on the dynamical fluid degrees of freedom. One should think of this in terms of a fluid being subject to a gravitational potential well which is furthermore carried along with the fluid. A useful analogy is to think of a charge carrier in a polarizable medium. The medium, or more precisely in our case the ‘dynamical’ background, exerts a force on the fluid.
However, one can subsume the ‘dynamical’ background’s effect on the fluid, and rewrite the dynamics as that of a ‘dressed fluid’ on a fixed background. This ‘dressed fluid’ is a collective effect and, moreover, from the geometry we know what its description should be – it is simply the non-conformal fluid on the Dirichlet surface. This can be independently verified by starting with the boundary fluid and the ‘dynamical’ boundary metric and showing that the boundary conservation equation can be rewritten as the hypersurface conservation equation, thus deriving the non-conformal fluid stress tensor on $\Sigma_D$. We also note that in the long wavelength regime, because one is working order by order in boundary gradients, the boundary fluid is deformed locally – at any spacetime point the source $g_{\mu\nu}$ depends only on the fluid degrees of freedom (velocity and temperature) at that point. This is what allows us to think of the fluid as carrying around with it a local gravitational cloud. From a formal point of view, the gravitational Dirichlet problem involves turning on local multi-trace deformations for the field theory. From the above discussion one is then tempted to say that the bulk geometry provides a way to repackage non-local deformations into a local perturbation at a lower radius or scale. This is highly suggestive of some sort of renormalisation of sources as one propagates boundary conditions into the bulk. Let us compare this picture with the recent discussion of the holographic Wilsonian RG flow idea of [@Heemskerk:2010hk; @Faulkner:2010jy]. In these works, starting with a field theory with given sources for, say, just single trace operators in the planar theory, one derives an effective action containing not just renormalized single trace sources, but also multi-traces on some chosen cut-off surface. The flow equation governing the radial evolution of such sources was argued to arise by effectively integrating out a part of the bulk geometry between the boundary and the cut-off surface. Given the Wilsonian perspective adopted there, it was also important for that discussion that one does not prejudice oneself about the boundary conditions in the interior of the spacetime below the cut-off (the infra-red of the field theory). In the current context we see that by transferring the boundary conditions onto the Dirichlet surface we have shifted the burden of multi-traces onto the boundary – as described above there is a suitable set of non-local multi-trace deformations on the boundary which ‘renormalizes’ into a single trace source on the cut-off. Whilst this is not totally surprising for linear systems such as the scalar problem discussed in the text, it is satisfying that a similar statement can be made for non-linear gravitational dynamics by invoking the long wavelength gradient expansion. We should however note that our discussion explicitly assumes knowledge of the interior (infra-red) boundary conditions; in the fluid/gravity solutions constructed herein we have demanded that there be a regular future event horizon to single out a sensible solution. Having obtained the effective dynamics on the hypersurface, we learnt that the conservation equations result in a sound mode which travels outside the light-cone of the fixed hypersurface metric once one pushes the surface too far into the interior. Past this sonic threshold, it is probably not sensible to maintain a relativistic fluid description on $\Sigma_D$.
We argued that we should pass over into a non-relativistic regime, by a suitable scaling of fluid variables (a la BMW [@Bhattacharyya:2008kq]). It is interesting to ask whether the acausality manifesting itself in the sound mode of the hypersurface dynamics can be discerned (without invoking the ‘dressed’ picture) from the boundary. More importantly, one wonders whether the gravitational Dirichlet problem suffers from such pathologies in general.[^37] Does one encounter any acausal behavior the moment the hypersurface is inside the bulk, outside the long wavelength regime? What about their validity when, say, stringy effects are taken into account? Can we engineer such a boundary condition consistently using objects like orientifolds in string theory? What happens once we go beyond large N? Does the field theory with these non-local deformations make sense beyond large N for at least some subclass of these deformations? What is the field theory interpretation of boundary conditions other than Dirichlet (say, if we fix the mean extrinsic curvature as proposed in [@Lysov:2011xx])? These are fascinating questions which deserve to be explored further. Finally, by examining the Dirichlet problem in the vicinity of the event horizon of the fluid/gravity solutions we made contact with the recent constructions of flat space gravitational duals to incompressible Navier-Stokes flows on a cut-off hypersurface [@Bredberg:2011jq; @Compere:2011dx], deriving in effect the membrane paradigm from the boundary field theory. Focussing on a tiny sliver of the geometry between the horizon and $\Sigma_D$ leads to the long wavelength solution around the Rindler horizon found in these works. This is reminiscent of the Penrose limit; while in the latter we focus on the geometry close to a null geodesic and blow it up, here we zoom close to a null surface and blow up the spacetime in its vicinity (perhaps a better analogy is the near horizon limit of black D3-branes without any statement about decoupling). The process of blowing up the near horizon region dilutes away the cosmological constant; hence rather than obtaining a solution to Einstein’s equations with negative cosmological constant we end up with a geometry that solves vacuum Einstein’s equations at leading and next-to-leading order. However, starting at higher orders we start to see the effect of the cosmological constant, revealing the throat region between the near-horizon Rindler geometry and the asymptotic AdS spacetime.[^38] From our embedding of the construction of [@Bredberg:2011jq; @Compere:2011dx] into the fluid/gravity correspondence [@Bhattacharyya:2008jc; @Bhattacharyya:2008mz] we learn that the boundary fluid lives on a manifold with Galilean/Newton-Cartan like structure in the near horizon limit. This degeneration of the Lorentzian structure into an effective Galilean one is what enforces the incompressible Navier-Stokes scaling from the viewpoint of the CFT. One can simply say that the membrane paradigm is a particular Galilean limit of the field theory. This perspective also clarifies the universality of the membrane paradigm – since we are zooming down to the Rindler region in the vicinity of a non-extremal black hole horizon, we should expect the same geometry for any non-degenerate horizon. Hence the dual description should always be in terms of an incompressible Navier-Stokes fluid as long as we deal with systems carrying only conserved charges [@Bhattacharyya:2008kq].
In the presence of other light degrees of freedom, as happens in systems with spontaneously broken symmetries, say the holographic superfluids discussed in [@Sonner:2010yx; @Bhattacharya:2011ee; @Herzog:2011ec; @Bhattacharya:2011tr], we would have to project out all linearly dispersing modes to achieve the same. In the near horizon limit we find that both the bulk metric and the boundary metric degenerate into a Newton-Cartan time-metric, but the co-metric is spacelike and well behaved. We speculate that the bulk spacetime in this limit should be described in two patches: the region between the horizon and the Dirichlet hypersurface enjoys a description in terms of a conventional metric while the region between $\Sigma_D$ and the boundary requires use of the co-metrics and Newton-Cartan structures. We conclude that the language of Galilean geometries is what is necessary to implement the membrane paradigm in the AdS/CFT correspondence. The idea of implementing a Galilean limit of the AdS/CFT correspondence was proposed recently in [@Bagchi:2009my], who were also motivated by considerations involving the incompressible Navier-Stokes equations, which turn out to enjoy an enhanced symmetry algebra [@Gusyatnikova:1989nx]. It was argued there, based on this algebra, that the appropriate Newton-Cartan manifold should correspond to a two-dimensional slice of the bulk spacetime. Here, in contrast, we are retaining the entire manifold, not just the two dimensional slice. However, we achieve this at the expense of discarding the Lorentzian metric structure, and work with a co-metric and a time co-vector. Clearly the role of Galilean/Newton-Cartan structures in the context of AdS/CFT requires further investigation. The bulk geometries we construct here seem to provide an interesting interpolation between Lorentzian and Galilean structures somewhat reminiscent of the Galilean limit procedure described in [@Kunzle:1976fk; @Gonzalez:2003fk]. It is also tempting to contemplate the possibility that apart from the context discussed herein, these structures could also provide useful clues on how to work with the gravity duals of non-relativistic field theories. In the context of Schrödinger spacetimes, which were proposed as duals to non-relativistic conformal field theories [@Son:2008ye; @Balasubramanian:2008dm], the Galilean limit is usually implemented as a DLCQ limit which can naturally be studied in the Newton-Cartan language. There are many directions in which the constructions we have described here can be generalized. For one it would be very useful to understand the gravitational Dirichlet problem outside of the long wavelength regime and to investigate its well-posedness. Further it would be useful to flesh out in precise detail the connections (if any) between the ideas around holographic RG flows and the Dirichlet problem and ask whether one can use the ideas developed here to implement the Wilsonian holographic flow of [@Heemskerk:2010hk; @Faulkner:2010jy] in the non-linear gravity context. Even within the long-wavelength regime there are tantalizing similarities with the blackfold approach [@Emparan:2009at; @Emparan:2011br], where one considers hypersurfaces which have intrinsic as well as extrinsic dynamics. Freezing out the extrinsic dynamics should lead one to the gravitational Dirichlet problem (for instance in the analysis of the black string and membrane instabilities [@Camps:2010br] the extrinsic dynamics was frozen and the blackfold equations are precisely those of fluid dynamics).
While the blackfold analysis is generically well suited for co-dimension three or higher hypersurfaces, it appears that one could recover some of the results discussed herein by extrapolating the equations to a co-dimension one hypersurface.[^39] Of more immediate interest is to complete the derivation of the hypersurface dynamics from that on the boundary to higher orders in the gradient expansion. In particular, while we have shown that the ideal fluid equations on $\Sigma_D$ arise from those on the boundary, moving to second order in gradients in the conservation equation will allow us to show at the non-linear level that the shear viscosity of the hypersurface fluid is the same as that on the boundary. This statement would establish the non-linear version of the non-renormalisation of $\eta$ (or equivalently $\eta/s$, since we know that the entropy density, being associated with the horizon, remains unchanged), complementing the earlier analysis of [@Iqbal:2008by], who showed this in the regime of linearized hydrodynamics using a flow equation. One would hope to also understand by this study why the bulk viscosity term on the hypersurface vanishes despite the fluid being non-conformal and having a non-vanishing trace for the stress tensor. At the same time it would be interesting to understand how the higher order transport coefficients evolve from the boundary to $\Sigma_D$, which should be possible to examine within our framework with a little bit of work. Another interesting avenue for exploration is to ask about the gravitational Dirichlet problem in the presence of degenerate horizons. All of the discussion in the present paper and indeed in the fluid/gravity correspondence has focussed on situations of thermal equilibrium at non-zero temperature and hence one naturally studies spacetimes with non-degenerate horizons. Degenerate horizons pose an intriguing challenge and it would be useful to understand how the boundary theory (which should be more than just a fluid dynamical system) reorganizes itself as we pass over to the hypersurface description. It would also be interesting to extend our considerations to stationary black holes where it would be useful to understand the Dirichlet dynamics when the hypersurface is inside the ergosurface (as defined by the asymptotic observer). Finally, another interesting avenue for contemplation is whether the ideas discussed herein can be ported to the context of brane-worlds, where we have gravity induced on a cut-off surface in AdS. Recently, [@Figueras:2011gd] attempted to derive the brane gravitational equations by working on a hypersurface near the boundary of AdS and invoking the Fefferman-Graham expansion. It is clear from our discussion that no matter what boundary condition we choose on the hypersurface, it is most likely to involve non-local deformations on the dual boundary field theory. What this implies for induced gravity scenarios is an issue that deserves further study. Acknowledgements {#s:acks .unnumbered} ---------------- We would especially like to thank Sayantani Bhattacharyya for collaborating with us at the various steps of this paper. It is a pleasure to thank Jyotirmoy Bhattacharya, Amar V Chandra, Atish Dabholkar, Roberto Emparan, Sean Hartnoll, Veronika Hubeny, Nabil Iqbal, Cynthia Keeler, Vijay Kumar, Hong Liu, Donald Marolf, Shiraz Minwalla, Ricardo Monteiro, Niels Obers, David Poland, Suvrat Raju, Andrew Strominger and Piotr Surowka for extremely useful discussions on ideas presented in this paper.
We further would like to thank Roberto Emparan, Rajesh Gopakumar, Cynthia Keeler, Hong Liu, Donald Marolf and Andrew Strominger for comments on a draft version of the manuscript. RL and MR would like to thank ICTS, TIFR for wonderful hospitality during the Applied AdS/CFT workshop. MR in addition would also like to thank HRI and University of Amsterdam for their kind hospitality. DB is supported by an STFC studentship, while JC, MR are supported by an STFC Rolling grant. RL is supported by the Harvard Society of Fellows through a junior fellowship. Finally, RL would like to thank various colleagues at the society for interesting discussions. Notation {#app:notation} ======== We work in the $(-++\ldots)$ signature. The dimension of the spacetime in which the conformal fluid lives is denoted by $d$. We usually assume $d>2$ unless otherwise specified. In the context of AdS/CFT, the dual AdS$_{d+1}$ space has $d+1$ spacetime dimensions. The lower-case Greek indices $\mu,\nu= 0,1,\ldots,d-1$ are used as boundary space-time indices, whereas the upper-case Latin indices $A,B=0,1,\ldots, d$ are used as the bulk indices. The lower-case Latin indices $i=1,\ldots,d-1$ index the different spatial directions at the boundary. Throughout this paper, we take the extra holographic co-ordinate to be $r$ with the boundary of the bulk spacetime at $r\rightarrow\infty$. Among the objects carrying Greek indices, the hatted variables belong to a hypersurface $r=r_D$ whereas the unhatted objects naturally belong to the boundary $r=\infty$. Further we use $*$ to mark non-relativistic objects. We use round brackets to denote symmetrisation and square brackets to denote antisymmetrisation. For example, $B_{(\mu\nu)}\equiv \frac{1}{2}\, \left(B_{\mu\nu}+B_{\nu\mu}\right)$ and $B_{[\mu\nu]}\equiv \frac{1}{2}\left( B_{\mu\nu}-B_{\nu\mu}\right)$. For tensor products we denote symmetrisation with an explicit subscript as $X \otimes_s Y = \frac{1}{2} (X \otimes Y + Y \otimes X)$. Quick reference table {#s:tabsum} --------------------- We have included a table with other useful parameters used in the text. In the table \[notation:tab\], the relevant equations are denoted by their respective equation numbers appearing inside parentheses.
\[notation:tab\] [||r|l||r|l||]{}\ Symbol & Definition & Symbol & Definition\ \ $\mathcal{G}_{AB}$ & Bulk metric with & $\mathcal{G}^{AB}$ & Bulk co-metric with\ & components $\mathfrak{u}_\mu,\mathfrak{V}_\mu,\mathfrak{G}_{\mu\nu}$ & & components $\mathfrak{u}^\mu,\mathfrak{P}^{\mu\nu},\mathfrak{P}^{\mu\nu}\mathfrak{V}_\nu$\ $G_{d+1}$ & Bulk Newton constant & $\Lambda_{d+1}$ & $-\frac{1}{2}d(d-1)$ Bulk C.C.\ $b$ & Inverse Horizon radius & $f(br)$ & $1-(br)^{-d}$\ $F(br)$ & $\int_{br}^{\infty}\; \frac{y^{d-1}-1}{y(y^{d}-1)}dy$ & $\hat{F}(br)$ & $\frac{\left(F(br)-F(br_D)\right)}{\sqrt{f(br_D)}} $\ $\hat{\xi}$ &$1-\frac{rf(br)}{2r_Df(br_D)}$ & &\ \ $g_{\mu\nu}$ & Boundary metric $[r^{-2}ds_{d+1}^2]_{r=\infty}$ & $g^{\mu\nu}$ & Boundary co-metric\ $u^\mu$ & Fluid velocity & $P_{\mu\nu}$ & $g_{\mu\nu}+u_\mu u_\nu$\ $p$ & Fluid pressure & $\varepsilon$ & Fluid energy density\ $b$ & $(4 G_{d+1} s)^{-\frac{1}{d-1}}$ & $s$ & Fluid entropy density\ $\zeta$& Bulk viscosity & $\eta$ & $\frac{s}{4\pi}$ Shear Viscosity\ $\nabla$ & Christoffel covariant derivative & ${\Gamma}_{\mu\nu}{}^\lambda$ & Christoffel symbols\ $\tilde{\Gamma}_{\mu\nu}{}^\lambda$ & $\equiv\hat{\Gamma}_{\mu\nu}{}^\lambda-{\Gamma}_{\mu\nu}{}^\lambda$ & $\mathcal{A}_\mu$ & Weyl-Connection\ $\sigma_{\mu\nu}$ & Shear strain rate & $\omega_{\mu\nu}$ & Fluid vorticity\ $a_{\mu}$ & Acceleration field & $\theta$ & expansion rate\ $c^2_{snd}$ &\ \ $\hat{g}_{\mu\nu}$ & Dirichlet metric $[r^{-2}ds_{d+1}^2]_{r=r_D}$ & $\hat{g}^{\mu\nu}$ & Dirichlet co-metric\ $\hat{u}^\mu$ & Fluid velocity & $\hat{P}_{\mu\nu}$ & $\hat{g}_{\mu\nu}+\hat{u}_\mu \hat{u}_\nu$\ $\hat{p}$ & Fluid pressure & $\hat{\varepsilon}$ & Fluid energy density\ $\hat{\alpha}$ & $(1-(br_D)^{-d})^{-1/2}=f^{-1/2}(br_D)$ & $\hat{s}$ & Fluid entropy density\ $\hat{\zeta}$& Bulk viscosity & $\hat{\eta}$ & $\frac{\hat{s}}{4\pi}$ Shear Viscosity\ $\hat{\nabla}$ & Christoffel covariant derivative & $\hat{\Gamma}_{\mu\nu}{}^\lambda$ & Christoffel symbols\ $\tilde{\Gamma}_{\mu\nu}{}^\lambda$ & $\equiv\hat{\Gamma}_{\mu\nu}{}^\lambda-{\Gamma}_{\mu\nu}{}^\lambda$ & $\hat{\mathcal{A}}_\mu$ & Weyl-Connection\ $\hat{\sigma}_{\mu\nu}$ & Shear strain rate & $\hat{\omega}_{\mu\nu}$ & Fluid vorticity\ $\hat{a}_{\mu}$ & Acceleration field & $\hat{\theta}$ & expansion rate\ $r_{D,snd}$ &\ $\hat{c}^2_{snd}$ &\ \[notation2:tab\] [||r|l||r|l||]{}\ Symbol & Definition & Symbol & Definition\ \ $\aleph^{-1}$ &\ $g^{(0)}_{ij}$ & Spatial metric backgnd. & $g^{ij}_{(0)}$ & Spatial co-metric backgnd.\ $h_{tt}^*,h_{ij}^*$ & $\delta g_{tt},\delta g_{ij}\sim \aleph^{-2}$ & $k_i^*$ & $\delta g_{ti}\sim \aleph^{-1}$\ $\nabla^{(0)}_i$ & Covariant derivative using $g^{(0)}_{ij}$ & $q^*_{ij}$ & $\nabla^{(0)}_i k_j^*-\nabla^{(0)}_j k_i^*$\ $v_*^i$ & Fluid velocity $\frac{u^i}{u^t}\sim \aleph^{-1}$ & $\rho_0$ & Mass density $\varepsilon_0+p_0$\ $b_0,\hat{\alpha}_0,\ldots$ & Backgnd. values & $\delta b_*,\hat{\alpha}_*,\ldots$ & Variation-typically $\sim \aleph^{-2} $\ $p_0$ & Backgnd. pressure & $\varepsilon_0$ & Backgnd. energy density\ $p_*$ & Pressure/mass density & $\nu_0$ & Kinematic viscosity $\frac{\eta_0}{\rho_0}$\ &$\frac{p-p_0}{\rho_0}\sim c_{snd}^2\frac{T_*}{T_0}\sim \aleph^{-2} $ & $\hat{\xi}_0$ &$1-\frac{rf(b_0r)}{2r_Df(b_0r_D)}$\ \ $\hat{\varkappa}^{-1}$ &\ &\ $\hat{g}^{(\bullet)}_{ij}$ & Spatial metric backgnd. 
& $\hat{g}^{ij}_{(\bullet)}$ & Spatial co-metric backgnd.\ $\hat{\nabla}^{(\bullet)}_i$ & Covariant derivative using $\hat{g}^{(\bullet)}_{ij}$ & $\rho_\bullet$ & Mass density\ $\hat{v}_\star^i$ & Fluid velocity $\frac{\hat{u}^i}{\hat{u}^t}\sim \hat{\varkappa}^{-1}$ & & $\varepsilon_\bullet+p_\bullet$ $\sim \hat{\varkappa}^{-(d-1)} $\ $b_\bullet,\hat{\alpha}$ & Backgnd. values $\sim \hat{\varkappa}$ & $\delta b_\star$ & Variation- $\sim \hat{\varkappa}^{-3} $\ $\hat{p}_\bullet$ & Backgnd. pressure $\sim \hat{\varkappa}^{-(d-1)} $ & $\hat{\varepsilon}_\bullet$ & Backgnd. energy density $\sim \hat{\varkappa}^{-d} $\ $\hat{p}_\star$ & Pressure/mass density & $\hat{\nu}_\bullet$ & Kinematic viscosity $\frac{\hat{\eta}_\bullet}{\hat{\rho}_\bullet}$\ & $\frac{\hat{p}-\hat{p}_\bullet}{\hat{\rho}_\bullet}\sim \hat{c}_{snd}^2\frac{\hat{T}_*}{\hat{T}_0}\sim \hat{\varkappa}^{-2} $ & $\rho_D$ & $\rho$ co-ordinate of Dirichlet surface\ $\rho$ &\ Notation in the Bulk -------------------- We denote the bulk metric by $\mathcal{G}_{AB}$. The inverse metric in the bulk (we will call this the co-metric - since it is the metric on the cotangent bundle) is denoted by $\mathcal{G}^{AB}$. We take the bulk AdS radius to be unity which is equivalent to setting the bulk cosmological constant to be $\Lambda_{d+1}=-\frac{d(d-1)}{2}$. We denote the bulk Newton constant by $G_{d+1}$. For the ease of reference, we now give the value of $G_{d+1}$ for some of the well-known CFTs with gravity duals (see [@Maldacena:1997re; @Aharony:1999ti; @Aharony:2008ug] for further details): 1. The d=4, $\mathcal{N}$=4 Super Yang-Mills theory on $N_c$ D3-Branes with a gauge group SU(N$_c$) and a ‘t Hooft coupling $\lambda\equiv g_{YM}^2 N_c$ is believed to be dual to IIB string theory on AdS$_5\times$S$^5_{R=1}$ with $G_{5}=\pi /(2 N_c^2)$ and $\alpha'=(4\pi\lambda)^{-1/2} $. 2. A d=3, $\mathcal{N}$=6 Superconformal [^40] Chern-Simons theory on $N_c$ M2-Branes with a gauge group U(N$_c$)$_k\times$ U(N$_c$)$_{-k}$ (where the subscripts denote the Chern-Simons couplings) and a ‘t Hooft coupling $\lambda\equiv N_c/k$ is conjectured to be dual to M-theory on AdS$_4\times$S$^7_{R=2}$/Z$_k$ with $G_{4}=N_c^{-2}\sqrt{9\lambda/8}=3k^{-1/2}(2N_c)^{-3/2}$. 3. A d=6, $\mathcal{N}$=(2,0) superconformal theory on $N_c$ M5-Branes is conjectured to be dual to M-theory on AdS$_7\times$S$^4_{R=1/2}$ with $G_{7}=3\pi^2/(16 N_c^{3})$. The general bulk metric which is Weyl-covariant is given by $$ds^2 =\mathcal{G}_{AB}dx^Adx^B= -2\,{\mathfrak u}_\mu dx^\mu\left(dr+r \,{\mathfrak V} _\nu dx^\nu\right) + r^2\, {\mathfrak G}_{\mu\nu} dx^\mu dx^\nu$$ which is invariant under the boundary Weyl transformations $$\left\{r,{\mathfrak u}_\mu, {\mathfrak V}_\nu,{\mathfrak G}_{\mu\nu} \right\} \mapsto \left\{e^{-\phi}r,e^{\phi}{\mathfrak u}_\mu, {\mathfrak V}_\nu+\partial_\nu\phi,e^{2\phi}{\mathfrak G}_{\mu\nu} \right\}$$ where $\phi=\phi(x)$ is an arbitrary function at the boundary. Without loss of generality assume that ${\mathfrak G}_{\mu\nu}$ is transverse to ${\mathfrak u}_\mu$, i.e. , ${\mathfrak G}_{\mu\nu}\mathfrak{u}^\nu=0$. 
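It is straightforward to verify the claimed Weyl invariance explicitly: under the transformation above one has $$dr + r \,{\mathfrak V}_\nu dx^\nu \;\mapsto\; e^{-\phi}\left(dr - r\,\partial_\nu\phi\, dx^\nu\right) + e^{-\phi}\, r\left({\mathfrak V}_\nu + \partial_\nu\phi\right)dx^\nu = e^{-\phi}\left(dr + r\, {\mathfrak V}_\nu dx^\nu\right) ,$$ which compensates the factor $e^{\phi}$ from ${\mathfrak u}_\mu$, while $r^2\, {\mathfrak G}_{\mu\nu} \mapsto e^{-2\phi}\,r^2\, e^{2\phi}\,{\mathfrak G}_{\mu\nu}$ is manifestly unchanged.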
Further we have $${\mathfrak u}_\mu = \mathfrak{u}_\mu(x), \quad {\mathfrak V}_\nu={\mathfrak V}_\nu(r,x),\quad {\mathfrak G}_{\mu\nu}={\mathfrak G}_{\mu\nu}(r,x)$$ We will raise/lower/contract the unhatted greek indices using the boundary metric $$g_{\mu\nu} dx^\mu dx^\nu \equiv [r^{-2}ds^2]_{r\to \infty} = \left[{\mathfrak G}_{\mu\nu}-\frac{2}{r}{\mathfrak u}_{(\mu} {\mathfrak V}_{\nu)} \right]_{r\to \infty} dx^\mu dx^\nu$$ and the velocity field $\mathfrak{u}_\mu$ is a unit time-like vector of this metric $$\mathfrak{u}^\mu \mathfrak{u}_\mu \equiv g^{\mu\nu}\mathfrak{u}_\mu \mathfrak{u}_\nu = -1$$ A Weyl-covariant basis for the bulk cotangent bundle is given by $\{dr+r\,{\mathfrak V}_\nu dx^\nu\ ,dx^\mu\}$ with a corresponding dual basis $\left\{\partial_r\ , \partial_\mu - r\,{\mathfrak V}_\mu\partial_r\right\}$. In this dual basis, the co-metric (or the inverse metric) is given by $$\begin{split} \mathcal{G}^{AB}&\partial_A\otimes\partial_B \\ &=2\, {\mathfrak u}^\mu\left[\partial_\mu - r\,{\mathfrak V}_\mu\partial_r\right]\otimes_s\partial_r + r^{-2}\,\mathfrak{P}^{\mu\nu}\left[\partial_\mu - r\,{\mathfrak V}_\mu\partial_r\right]\otimes\left[\partial_\nu - r\,{\mathfrak V}_\nu\partial_r\right] \\ \end{split}$$ where $\mathfrak{P}^{\mu\nu}$ is the unique transverse tensor that satisfies $$\mathfrak{P}^{\mu\nu} {\mathfrak u}_\nu=0 \quad\text{and}\quad \mathfrak{P}^{\mu\lambda}{\mathfrak G}_{\lambda\nu}= \delta^\mu_\nu + {\mathfrak u}^\mu \, {\mathfrak u}_\nu$$ The unit normal vector of a hypersurface $r=r_D$ is given by[^41] $$n_A dx^A = \frac{dr}{\sqrt{\mathcal{G}^{rr}}} = \frac{dr}{\sqrt{\mathfrak{P}^{\alpha\beta}{\mathfrak V}_\alpha{\mathfrak V}_\beta-2r u^\alpha{\mathfrak V}_\alpha}}$$ $$\begin{split} n^A \partial_A &= \frac{\mathcal{G}^{rr}\partial_r+\mathcal{G}^{r\mu}\partial_\mu}{\sqrt{\mathcal{G}^{rr}}}\\ &= \frac{\left(\mathfrak{P}^{\alpha\beta}\,{\mathfrak V}_\alpha\,{\mathfrak V}_\beta-2\,r \,{\mathfrak u}^\alpha\,{\mathfrak V}_\alpha\right)\partial_r+\left({\mathfrak u}^\mu-r^{-1}\,\mathfrak{P}^{\mu\alpha}\, {\mathfrak V}_\alpha\right)\partial_\mu}{\sqrt{\mathfrak{P}^{\alpha\beta}\,{\mathfrak V}_\alpha{\mathfrak V}_\beta-2\,r\, {\mathfrak u}^\alpha\,{\mathfrak V}_\alpha}}\\ &= \frac{-r\,{\mathfrak u}^\alpha\,{\mathfrak V}_\alpha\partial_r+\left({\mathfrak u}^\mu-r^{-1}\,\mathfrak{P}^{\mu\alpha}\,{\mathfrak V}_\alpha\right)\left[\partial_\mu - r\,{\mathfrak V}_\mu\partial_r\right]}{\sqrt{\mathfrak{P}^{\alpha\beta}\,{\mathfrak V}_\alpha\,{\mathfrak V}_\beta-2\,r\, {\mathfrak u}^\alpha{\mathfrak V}_\alpha}}\\ \end{split}$$ From these expressions, it follows that the normalized induced metric and co-metric on a hypersurface $r=r_D$ is given by $$\begin{split} \hat{g}_{\mu\nu} &\equiv \left\{ r^{-2}\mathcal{G}_{\mu\nu} \right\}_{r\to r_D}\\ &= \left\{ \mathfrak{G}_{\mu\nu} - \frac{2}{r}\mathfrak{u}_{(\mu}\mathfrak{V}_{\nu)} \right\}_{r\to r_D} \\ \hat{g}^{\mu\nu} &\equiv \left\{ r^2\left(\mathcal{G}^{\mu\nu} - \frac{\mathcal{G}^{\mu r}\mathcal{G}^{r\nu}}{\mathcal{G}^{rr}}\right)\right\}_{r\to r_D}\\ &= \left\{ \mathfrak{P}^{\mu\nu} - \frac{\left[r \, {\mathfrak u}^\mu-\mathfrak{P}^{\mu\alpha}\mathfrak{V}_\alpha\right]\left[r\, {\mathfrak u}^\nu-\mathfrak{P}^{\nu\beta}\mathfrak{V}_\beta\right]}{\mathfrak{P}^{\alpha\beta}\mathfrak{V}_\alpha\mathfrak{V}_\beta-2\, r\, {\mathfrak u}^\alpha\mathfrak{V}_\alpha}\right\}_{r\to r_D} \end{split}$$ The hatted greek indices are raised/lowered/contracted using this hatted metric/co-metric. 
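For reference, the combination appearing under the square roots above is simply the $rr$-component of the co-metric read off from the dual basis expansion: $$\mathcal{G}^{rr} = -2\,r\, {\mathfrak u}^\alpha\,{\mathfrak V}_\alpha + r^{-2}\,\mathfrak{P}^{\mu\nu}\left(-r\,{\mathfrak V}_\mu\right)\left(-r\,{\mathfrak V}_\nu\right) = \mathfrak{P}^{\alpha\beta}\,{\mathfrak V}_\alpha\,{\mathfrak V}_\beta - 2\,r\,{\mathfrak u}^\alpha\,{\mathfrak V}_\alpha\,,$$ which is the normalization used in $n_A$ and $n^A$ above.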
Notation at the Boundary ------------------------ The metric on the $d$ dimensional boundary is denoted by $g_{\mu\nu}$, which is a representative of the class of metrics on the conformal boundary of the bulk spacetime. $$g_{\mu\nu}\equiv \left\{r^{-2}ds^2\right\}_{r\to \infty}$$ The inverse of this metric (we will call this the co-metric – since it is the metric on the cotangent bundle) is denoted by $g^{\mu\nu}$. We denote with $\nabla$ the corresponding Christoffel connection/covariant derivative. Our conventions for Christoffel symbols and the curvature tensors are fixed by the relations $$\begin{split} \nabla_{\mu}V^{\nu}&=\partial_{\mu}V^{\nu}+\Gamma_{\mu\lambda}{}^{\nu}V^{\lambda} \qquad \text{and}\qquad [\nabla_\mu,\nabla_\nu]V^\lambda=-R_{\mu\nu\sigma}{}^{\lambda}V^\sigma . \end{split}$$ On this spacetime lives a conformal fluid with velocity field $u^\mu$ (with $u^\mu u_\mu =-1$), pressure $p$, energy density $\varepsilon=(d-1)p$, and shear viscosity $\eta$. We introduce the projector $P_{\mu\nu}\equiv g_{\mu\nu}+u_\mu u_\nu$ which projects onto the space transverse to $u^\mu$. The gradients of the velocity field are decomposed as: $$\begin{split} \nabla_\mu u_\nu &= \sigma_{\mu\nu}+\omega_{\mu\nu}-u_\mu a_\nu + \frac{\theta}{d-1} P_{\mu\nu}\\ &= \sigma_{\mu\nu}+\omega_{\mu\nu}-u_\mu \mathcal{A}_\nu + \frac{\theta}{d-1} g_{\mu\nu}\\ \end{split}$$ where we have introduced $$\begin{aligned} && \bullet \; \text{the shear strain rate:} \qquad \sigma_{\mu\nu}\equiv \left[P_{(\mu}^\alpha P_{\nu)}^\beta- \frac{P_{\mu\nu}}{d-1}P^{\alpha\beta} \right] \nabla_\alpha u_\beta \label{eqn:sigdef} \\ && \bullet \; \text{the vorticity:} \qquad \qquad \quad \ \; \omega_{\mu\nu}\equiv P_{[\mu}^\alpha P_{\nu]}^\beta \nabla_\alpha u_\beta \label{eqn:omdef} \\ && \bullet \; \text{the acceleration field:} \qquad \ \,a_\mu \equiv u_\alpha\nabla^\alpha u_\mu \label{eqn:adef} \\ && \bullet \; \text{the expansion rate:} \qquad \qquad \theta \equiv \nabla_\alpha u^\alpha \label{eqn:thdef} \end{aligned}$$ The hydrodynamic Weyl-connection (see [@Loganayagam:2008is] where it was introduced for more details) is defined to be $$\mathcal{A}_\mu \equiv u_\alpha\nabla^\alpha u_\mu - \frac{\nabla_\alpha u^\alpha}{d-1} u_\mu \label{eqn:Adef}$$ The bulk metric-dual for hydrodynamics is given by (see [@Bhattacharyya:2008mz]) $$\begin{split} \mathfrak{u}_\mu &= u_\mu\ ,\quad\ \mathfrak{G}_{\mu\nu} = P_{\mu\nu} +2b\, F(br)\, \sigma_{\mu\nu}+\ldots \\ \mathfrak{V}_\mu &= \mathcal{A}_\mu+\frac{r}{2}\, (1-(br)^{-d})\, u_\mu +\ldots \\ \mathfrak{u}^\mu &= u^\mu\ ,\quad\mathfrak{P}^{\mu\nu} = P^{\mu\nu} -2b \,F(br)\, \sigma^{\mu\nu}+\ldots \\ \end{split}$$ where $$F(br) \equiv \int_{br}^{\infty}\; \frac{y^{d-1}-1}{y(y^{d}-1)}dy\,.$$ The energy-momentum tensor of a general relativistic fluid (to first order in the gradient expansion) is given as $$\label{enmom:eq} T^{\mu\nu} \equiv (\varepsilon+p)\, u^\mu\, u^\nu + p\, g^{\mu\nu}- 2\,\eta\, \sigma^{\mu\nu}-\zeta\, \theta\, P^{\mu\nu}+\ldots$$ which can be computed from the bulk data (of the metric dual to hydrodynamics) via $$\label{eq:BYT} T_{\mu\nu} \equiv \left\{\frac{r^d}{16\pi \,G_{d+1}}\left[-2\,{K}_{\mu\nu}+2\,K\,{g}_{\mu\nu}-2\,(d-1)\,{g}_{\mu\nu}+\ldots \right]\right\}_{r\to\infty}$$ where $r^2 \,K_{\mu\nu}$ is the extrinsic curvature of the constant $r$ hypersurface and $K\equiv g^{\mu\nu}\, K_{\mu\nu}$ is its trace.
This computation gives $$p=\frac{\varepsilon}{d-1} = \frac{1}{16\pi G_{d+1}} \; \frac{1}{b^d} ,\quad\ \eta=\frac{1}{16\pi G_{d+1}}\; \frac{1}{b^{d-1}} \quad\text{and}\quad \zeta =0$$ and the Bekenstein-Hawking argument in the bulk gives the entropy density and the temperature of this fluid as $$s= \frac{1}{4\, G_{d+1}}\; \frac{1}{b^{d-1}} ,\quad\text{and}\quad T=\frac{d}{4\pi\, b}$$ where we have found it convenient to introduce a variable $b\equiv (4 G_{d+1} s)^{-\frac{1}{d-1}}$. Notation on the Dirichlet Hypersurface -------------------------------------- We choose a hypersurface $r=r_D$ in the bulk to impose the Dirichlet boundary condition. We will work in a boundary Weyl-frame where $r_D$ is independent of $x$. This Weyl-frame change is consistent with the gradient expansion in the boundary hydrodynamics provided the initial surface $r=\rho(x)$ had a slowly varying $\rho(x)$. We will introduce a parameter $$\hat{\alpha} \equiv \frac{1}{\sqrt{1-(b r_D)^{-d}}}$$ which parameterizes how far the Dirichlet surface is from the boundary / how close it is to the horizon. We have $\hat{\alpha}(r_D=\infty) =1$ and $\hat{\alpha}(r_D=1/b) =\infty$. After this we proceed as we did in the $r=\infty$ case in the previous subsection. All the same definitions can be repeated - we will just distinguish the objects in the Dirichlet hypersurface by a hat - so we have $\hat{u}^\mu ,\ \hat{g}^{\mu\nu} ,\ \hat{p}$ and so on. Unless specified, all the hatted tensors are raised and lowered by the hatted metric/co-metric. The energy momentum tensor is calculated using the same expression as in equation except that we evaluate it at $r=r_D$ now. We get $$\label{enmomh:eq} \hat{T}^{\mu\nu} \equiv (\hat{\varepsilon}+\hat{p})\,\hat{u}^\mu \,\hat{u}^\nu + \hat{p}\, \hat{g}^{\mu\nu}- 2\,\hat{\eta}\, \hat{\sigma}^{\mu\nu}-\hat{\zeta} \,\hat{\theta}\, \hat{P}^{\mu\nu}+\ldots$$ where $$\label{enmomh2:eq} \begin{split} \hat{\varepsilon} &\equiv \frac{(d-1)}{8\pi \,G_{d+1}}\; \frac{\hat{\alpha}}{\hat{\alpha}+1} \; \frac{1}{b^d}= \frac{2\hat{\alpha}}{\hat{\alpha}+1}\ \varepsilon\\ \hat{p} &\equiv \frac{\left[1+\frac{d}{2}(\hat{\alpha}-1)\right]}{8\pi\, G_{d+1}}\; \frac{\hat{\alpha}}{\hat{\alpha}+1} \; \frac{1}{b^d}= \frac{2\hat{\alpha}}{\hat{\alpha}+1}\left[1+\frac{d}{2}(\hat{\alpha}-1)\right]p\\ \hat{\eta} &\equiv \frac{1}{16\pi \,G_{d+1}}\; \frac{1}{b^{d-1}}=\eta \quad\text{and}\quad \hat{\zeta} \equiv 0 =\zeta\\ \end{split}$$ Dictionary for the Dirichlet Problem {#A:dirdict} ==================================== In this subsection, we collect the formulae which translate between the boundary data and the Dirichlet data to serve as a ready reference for the reader.
The (normalized) metric and the co-metric on the Dirichlet hypersurface are given by $$\begin{split} \hat{g}_{\mu\nu}&= g_{\mu\nu} + \left(1-\frac{1}{\hat{\alpha}^2}\right)u_\mu u_\nu+2b F(br_D)\ \sigma_{\mu\nu} -\frac{1}{r_D}\left[u_\mu \mathcal{A}_\nu+\mathcal{A}_\mu u_\nu \right]+\ldots\\ &= g_{\mu\nu} + \left(1-\frac{1}{\hat{\alpha}^2}+\frac{2\theta}{(d-1)r_D}\right)u_\mu u_\nu-\frac{1}{r_D}\left[u_\mu a_\nu+a_\mu u_\nu \right] +2b F(br_D)\ \sigma_{\mu\nu} +\ldots\\ \hat{g}^{\mu\nu} &= P^{\mu\nu}-2b F(br_D)\ \sigma^{\mu\nu} -\hat{\alpha}^2\left[u^\mu - r_D^{-1}a^{\mu}\right]\left[u^\nu - r_D^{-1}a^{\nu}\right]\left[1+\frac{2\hat{\alpha}^2\theta}{(d-1)r_D}\right] \\ &= g^{\mu\nu}+\left(1-\hat{\alpha}^2-\frac{2\hat{\alpha}^4\theta}{(d-1)r_D}\right)u^\mu u^\nu \ -2b F(br_D)\ \sigma^{\mu\nu} +\frac{\hat{\alpha}^2}{r_D}\left[u^\mu a^\nu + a^{\mu}u^\nu\right] \\ \end{split}$$ and the correctly normalized velocities at the hypersurface are $$\begin{split} \hat{u}_\mu &= \frac{u_\mu}{\hat{\alpha}} + \frac{\hat{\alpha}}{r_D} \mathcal{A}_\mu = \left(1-\frac{\hat{\alpha}^2\theta}{r_D(d-1)}\right)\frac{u_\mu}{\hat{\alpha}} + \frac{\hat{\alpha}}{r_D} a_\mu \\ \hat{u}^\mu &= \hat{\alpha}u^\mu\left(1+\frac{\hat{\alpha}^2\theta}{r_D(d-1)}\right) +\ldots \\ \end{split}$$ From these it follows that the transverse projectors are related by $$\begin{split} \hat{P}_{\mu\nu} &= P_{\mu\nu} + 2b F(br_D)\sigma_{\mu\nu}\\ \hat{P}^\mu{}_\nu &= {P}^\mu{}_\nu+\frac{\hat{\alpha}^2}{r_D}u^\mu a_\nu \\ \hat{P}^{\mu\nu} &= {P}^{\mu\nu} -2b F(br_D) \sigma^{\mu\nu}+\frac{\hat{\alpha}^2}{r_D}\left[u^\mu a^\nu + a^{\mu}u^\nu\right]\\ \end{split}$$ Given a relation of the form $\hat{U}_\mu=V_\mu$, we can always write $$\begin{split} \hat{\nabla}_\mu \hat{U}_\nu = \nabla_\mu V_\nu -\tilde{\Gamma}_{\mu\nu}{}^\rho V_\rho \end{split}$$ which defines the tensor $\tilde{\Gamma}_{\mu\nu}{}^\rho=\hat{\Gamma}_{\mu\nu}{}^\rho-{\Gamma}_{\mu\nu}{}^\rho$. We evaluate this tensor in the to get $$\begin{split} \tilde{\Gamma}_{\mu\nu}{}^\rho &= (\hat{\alpha}^2-1)\left[\sigma_{\mu\nu}+\frac{\theta}{d-1}\ \left(P_{\mu\nu}+\frac{d}{2}u_\mu u_\nu\right)-da_{(\mu}u_{\nu)}\right]u^\rho\\ &\qquad + \frac{\hat{\alpha}^2-1}{\hat{\alpha}^2}\left[-2\omega^\rho{}_{(\mu}u_{\nu)}+(\frac{d}{2}-1)u_{\mu}u_{\nu}a^\rho\right] \\ \end{split}$$ in particular $$\begin{split} \tilde{\Gamma}_{\mu\nu}{}^\rho u_\rho &=-(\hat{\alpha}^2-1)\left[\sigma_{\mu\nu}+\frac{\theta}{d-1}\ \left(P_{\mu\nu}+\frac{d}{2}u_\mu u_\nu\right)-da_{(\mu}u_{\nu)}\right] \\ \end{split}$$ From this it follows that $$\begin{split} \hat{\sigma}_{\mu\nu} &= \hat{\alpha} \,\sigma_{\mu\nu}\ ,\quad \hat{\omega}_{\mu\nu} = \frac{1}{\hat{\alpha}} \,\omega_{\mu\nu} ,\quad \hat{\theta}=\hat{\alpha}\,\theta,\\ \hat{a}_\nu &=\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]a_\nu ,\quad \hat{\mathcal{A}}_\nu =\mathcal{A}_\nu+\frac{d}{2}\,(\hat{\alpha}^2-1)\,a_{\nu} \\ \end{split}$$ We refer the reader to the previous subsection for the dictionary involving the energy-momentum tensor. Now we are ready to present the inverse relations. 
We will first invert the equations above to get $$\begin{split} {\sigma}_{\mu\nu} &= \frac{1}{\hat{\alpha}}\, \hat{\sigma}_{\mu\nu}\ ,\quad {\omega}_{\mu\nu} = {\hat{\alpha}} \,\hat{\omega}_{\mu\nu} ,\quad {\theta}=\frac{1}{\hat{\alpha}}\,\hat{\theta},\\ {a}_\nu &=\frac{\hat{a}_\nu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]} ,\quad {\mathcal{A}}_\nu =\hat{\mathcal{A}}_\nu-\frac{(\hat{\alpha}^2-1)\frac{d}{2}}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\hat{a}_{\nu} =\frac{\hat{a}_\nu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}-\frac{\hat{\theta}}{d-1}\hat{u}_\nu \\ \end{split}$$ and then the velocities $$\begin{split} {u}_\mu &= \left(1+\frac{\hat{\alpha}\hat{\theta}}{r_D(d-1)}\right)\hat{\alpha}\,\hat{u}_\mu - \frac{\hat{\alpha}^2}{r_D} \frac{\hat{a}_\mu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]} \\ {u}^\mu &= \frac{\hat{u}^\mu}{\hat{\alpha}}\left(1-\frac{\hat{\alpha}\,\hat{\theta}}{r_D\,(d-1)}\right) +\ldots \\ \end{split}$$ followed by the projectors $$\begin{split} {P}_{\mu\nu} &= \hat{P}_{\mu\nu} - \frac{2b}{\hat{\alpha}} \,F(br_D)\,\hat{\sigma}_{\mu\nu}\\ {P}^\mu{}_\nu &= \hat{P}^\mu{}_\nu-\frac{\hat{\alpha}}{r_D}\frac{\hat{u}^\mu\hat{a}_\nu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]} \\ {P}^{\mu\nu} &= \hat{P}^{\mu\nu} +\frac{2b}{\hat{\alpha}} \,F(br_D)\hat{\sigma}^{\mu\nu}-\frac{\hat{\alpha}\left[\hat{u}^\mu \hat{a}^\nu + \hat{a}^{\mu}\hat{u}^\nu\right]}{r_D\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\\ \end{split}$$ The dictionary for the (normalized) metric and the co-metric are $$\begin{split} g_{\mu\nu} &= \hat{g}_{\mu\nu} + \left[1-\hat{\alpha}^2-\frac{2\hat{\alpha}^3\hat{\theta}}{r_D(d-1)}\right]\hat{u}_\mu \hat{u}_\nu+\frac{\hat{\alpha}^3\left[\hat{u}_\mu \hat{a}_\nu + \hat{a}_{\mu}\hat{u}_\nu\right]}{r_D\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]} - \frac{2b}{\hat{\alpha}} F(br_D)\hat{\sigma}_{\mu\nu} \\ g^{\mu\nu} &= \hat{g}^{\mu\nu}+ \left[1-\frac{1}{\hat{\alpha}^2}+\frac{2\hat{\theta}}{\hat{\alpha}r_D(d-1)}\right]\hat{u}^\mu \hat{u}^\nu+\frac{2b}{\hat{\alpha}} F(br_D)\hat{\sigma}^{\mu\nu}-\frac{\hat{\alpha}\left[\hat{u}^\mu \hat{a}^\nu + \hat{a}^{\mu}\hat{u}^\nu\right]}{r_D\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\\ \end{split}$$ Finally, we can write the tensor $\tilde{\Gamma}_{\mu\nu}{}^\rho $ in terms of hatted variables as $$\begin{split} \tilde{\Gamma}_{\mu\nu}{}^\rho &= (1-\frac{1}{\hat{\alpha}^2})\left[ \hat{\sigma}_{\mu\nu}+\frac{\hat{\theta}}{d-1}\ \left(\hat{P}_{\mu\nu}+\frac{d}{2}\hat{\alpha}^2\hat{u}_\mu \hat{u}_\nu\right)-\frac{d\hat{\alpha}^2}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\hat{a}_{(\mu}\hat{u}_{\nu)}\right]\hat{u}^\rho\\ &\qquad + (\hat{\alpha}^2-1)\left[-2\hat{\omega}^\rho{}_{(\mu}\hat{u}_{\nu)}+\frac{\frac{d}{2}-1}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\hat{u}_{\mu}\hat{u}_{\nu}\hat{a}^\rho\right] \\ \end{split}$$ In particular $$\begin{split} \tilde{\Gamma}_{\mu\nu}{}^\rho \hat{u}_\rho &= - (1-\frac{1}{\hat{\alpha}^2})\left[ \hat{\sigma}_{\mu\nu}+\frac{\hat{\theta}}{d-1}\ \left(\hat{P}_{\mu\nu}+\frac{d}{2}\hat{\alpha}^2\hat{u}_\mu \hat{u}_\nu\right)-\frac{d\hat{\alpha}^2}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\hat{a}_{(\mu}\hat{u}_{\nu)}\right] \end{split}$$ Now we present the Bulk metric/co-metric as a function of the Dirichlet data. 
The Bulk metric is given by $$\label{appeq:gbulk} \begin{split} \mathcal{G}_{AB}dx^A dx^B &=-2 u_\mu dx^\mu \left( dr + r\ \left[\mathcal{A}_\nu+\frac{r}{2}(1-(br)^{-d})u_\nu\right] dx^\nu \right) \\ & \qquad + r^2 \left[ P_{\mu\nu} +2b F(br)\ \sigma_{\mu\nu}\right] dx^\mu dx^\nu +\ldots\\ &= -2 \left[\hat{\alpha}\hat{u}_\mu - \frac{\hat{\alpha}^2}{r_D}{\mathcal{A}}_\mu \right]dx^\mu\left( dr + r\ \left[ \hat{\xi}\mathcal{A}_\nu +\frac{r}{2}(1-(br)^{-d})\hat{\alpha}\hat{u}_\nu \right] dx^\nu \right)\\ & \qquad + r^2 \left[ \hat{P}_{\mu\nu} +2b \hat{F}(br)\ \hat{\sigma}_{\mu\nu}\right] dx^\mu dx^\nu +\ldots\\ \end{split}$$ where $$\begin{split} \mathcal{A}_\mu &\equiv \frac{\hat{a}_\mu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}-\frac{\hat{\theta}}{d-1}\hat{u}_\mu=\hat{\mathcal{A}}_\mu-\frac{(\hat{\alpha}^2-1)\frac{d}{2}}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\hat{a}_{\mu}\\ \hat{\xi} &\equiv 1-\frac{1}{2}\frac{r}{r_D}\frac{(1-(br)^{-d})}{(1-(br_D)^{-d})} = 1-\frac{\hat{\alpha}^2}{r_D}\frac{r}{2}(1-(br)^{-d})\\ \hat{F}(br) &\equiv \frac{1}{\hat{\alpha}}\left(F(br)-F(br_D)\right) = \frac{1}{\hat{\alpha}} \; \int_{br}^{br_D}\; \frac{y^{d-1}-1}{y(y^{d}-1)}dy\,. \end{split}$$ The Bulk co-metric is given by $$\begin{split} \mathcal{G}^{AB}&\partial_A\otimes\partial_B \\ &= \left[r^2(1-(br)^{-d})-\frac{2r\theta}{d-1}\right]\partial_r\otimes\partial_r+2\left[u^\mu -r^{-1} a^{\mu}\right]\partial_\mu\otimes_s\partial_r\\ &\qquad +r^{-2}\left[P^{\mu\nu} -2b F(br)\ \sigma^{\mu\nu}\right]\partial_\mu\otimes\partial_\nu\\ &= \left[r^2(1-(br)^{-d})-\frac{2r\hat{\theta}}{\hat{\alpha}(d-1)}\right]\partial_r\otimes\partial_r\\ &+2\left[\frac{\hat{u}^\mu}{\hat{\alpha}}\left(1-\frac{\hat{\alpha}\hat{\theta}}{r_D(d-1)}\right) - \frac{\hat{a}^\mu}{r\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\right]\partial_\mu\otimes_s\partial_r\\ &\qquad +r^{-2}\left[\hat{P}^{\mu\nu} -\frac{\hat{\alpha}\left[\hat{u}^\mu \hat{a}^\nu + \hat{a}^{\mu}\hat{u}^\nu\right]}{r_D\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]} -2b \hat{F}(br)\ \hat{\sigma}^{\mu\nu}\right]\partial_\mu\otimes\partial_\nu \end{split}$$ In terms of the components in the Weyl-covariant basis, we have $$\begin{split} \mathfrak{u}_\mu &= u_\mu = \hat{\alpha}\, \hat{u}_\mu - \frac{\hat{\alpha}^2}{r_D}\left[\frac{\hat{a}_\mu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}-\frac{\hat{\theta}}{d-1}\hat{u}_\mu\right] \\ \mathfrak{V}_\mu &= \mathcal{A}_\mu+\frac{r}{2}\, (1-(br)^{-d})\, u_\nu \\ &= \hat{\xi}\left[\frac{\hat{a}_\mu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}-\frac{\hat{\theta}}{d-1}\, \hat{u}_\mu\right] +\frac{r}{2}\, (1-(br)^{-d})\, \hat{\alpha}\,\hat{u}_\mu\\ \mathfrak{G}_{\mu\nu} &= P_{\mu\nu} +2b\, F(br)\, \sigma_{\mu\nu} = \hat{P}_{\mu\nu} +2b \,\hat{F}(br)\, \hat{\sigma}_{\mu\nu}\\ \mathfrak{u}^\mu &= u^\mu = \frac{\hat{u}^\mu}{\hat{\alpha}}\left(1-\frac{\hat{\alpha}\,\hat{\theta}}{r_D(d-1)}\right)\\ \mathfrak{P}^{\mu\nu} &= P^{\mu\nu} -2b \,F(br)\, \sigma^{\mu\nu} = \hat{P}^{\mu\nu} -\frac{\hat{\alpha}\left[\hat{u}^\mu \hat{a}^\nu + \hat{a}^{\mu}\hat{u}^\nu\right]}{r_D\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]} -2b \,\hat{F}(br)\, \hat{\sigma}^{\mu\nu} \\ \end{split}$$ Lorentz-Covariant derivative of the induced metric {#app:CovDeriv} ================================================== We wish to find the tensor $\tilde{\Gamma}_{\mu\nu}{}^\rho$ which describes the difference between the covariant derivatives at the boundary and the Dirichlet surface, i.e., Given a relation of the form $\hat{U}_\mu=V_\mu$, we can always write 
$$\begin{split} \hat{\nabla}_\mu \hat{U}_\nu = \nabla_\mu V_\nu -\tilde{\Gamma}_{\mu\nu}{}^\rho V_\rho \end{split}$$ which defines the tensor $\tilde{\Gamma}_{\mu\nu}{}^\rho$. For definiteness, in this subsection we will continue to raise/lower/contract using the boundary metric $g_{\mu\nu}$; this means in particular that raising/lowering/contracting does not commute with the hatted covariant derivative $\hat{\nabla}$, so we need to be a bit careful. As usual, the zero-torsion condition implies $\tilde{\Gamma}_{\mu\nu}{}^\rho=\tilde{\Gamma}_{\nu\mu}{}^\rho$ and metric compatibility with $\hat{g}_{\mu\nu}\equiv g_{\mu\nu}+h_{\mu\nu}$ gives $$\tilde{\Gamma}_{\mu\nu}{}^\rho\hat{g}_{\rho\lambda}=\frac{1}{2} \left[\nabla_\mu h_{\lambda\nu}+\nabla_\nu h_{\lambda\mu}-\nabla_\lambda h_{\mu\nu} \right]$$ where $$h_{\mu\nu} = \left\{ \frac{u_\mu u_\nu}{(br)^d}+2b F(br)\ \sigma_{\mu\nu} -\frac{1}{r}\left[u_\mu \mathcal{A}_\nu+\mathcal{A}_\mu u_\nu \right]+\ldots \right\}_{r\to r_D}$$ Since all our expressions are exact up to second derivatives, it is enough to work with just the zero derivative piece in $h_{\mu\nu}$. Working this out, we find $$\begin{split} \frac{\hat{\alpha}^2}{\hat{\alpha}^2-1}\tilde{\Gamma}_{\mu\nu}{}^\rho &=\hat{\alpha}^2\left[\sigma_{\mu\nu}+\frac{\theta}{d-1}\ \left(P_{\mu\nu}+\frac{d}{2}u_\mu u_\nu\right)-da_{(\mu}u_{\nu)}\right]u^\rho-2\omega^\rho{}_{(\mu}u_{\nu)}+(\frac{d}{2}-1)u_{\mu}u_{\nu}a^\rho \\ \end{split}$$ It follows that $$\begin{split} \tilde{\Gamma}_{\mu\nu}{}^\rho u_\rho &=-(\hat{\alpha}^2-1)\left[\sigma_{\mu\nu}+\frac{\theta}{d-1}\ \left(P_{\mu\nu}+\frac{d}{2}u_\mu u_\nu\right)-da_{(\mu}u_{\nu)}\right] \\ \end{split}$$ We can now evaluate $$\begin{split} \hat{\nabla}_\mu u_\nu &\equiv\nabla_\mu u_\nu-\tilde{\Gamma}_{\mu\nu}{}^\rho u_\rho\\ &= \hat{\alpha}^2 \sigma_{\mu\nu} + \omega_{\mu\nu}+ \hat{\alpha}^2 \frac{\theta}{d-1}\ P_{\mu\nu}-u_\mu a_\nu\left(1+\frac{d}{2}(\hat{\alpha}^2-1)\right)-\frac{d}{2}(\hat{\alpha}^2-1)\mathcal{A}_{\mu}u_{\nu}\\ \end{split}$$ Finally, we obtain[^42] $$\begin{split} \hat{\nabla}_\mu\hat{u}_\nu&=\hat{\nabla}_\mu \left\{\frac{u_\nu}{\hat{\alpha}}\right\}+\ldots\\ &= \hat{\alpha} \sigma_{\mu\nu} +\frac{\omega_{\mu\nu}}{\hat{\alpha}} + \frac{\hat{\alpha}\theta}{d-1}\ \hat{P}_{\mu\nu}-\hat{u}_\mu a_\nu\left(1+\frac{d}{2}(\hat{\alpha}^2-1)\right)\\ \end{split}$$ from which it follows that $$\begin{split} \hat{\sigma}_{\mu\nu} &\equiv \hat{\alpha} \sigma_{\mu\nu}\ ,\quad \hat{\omega}_{\mu\nu} \equiv \frac{1}{\hat{\alpha}} \omega_{\mu\nu} ,\quad \hat{\theta}\equiv\hat{\alpha}\theta\\ \hat{a}_\nu &\equiv \left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]a_\nu \\ \hat{\mathcal{A}}_\nu&\equiv \mathcal{A}_\nu-(1-\hat{\alpha}^2)\frac{d}{2}a_{\nu} \\ \end{split}$$ We invert the last relation to get $$\begin{split} \mathcal{A}_\nu &=\frac{\hat{a}_\nu}{1-\frac{d}{2}(1-\hat{\alpha}^2)}-\frac{\hat{\theta}}{d-1}\hat{u}_\nu \end{split}$$ This can be used to write $u_\mu$ in terms of hatted variables $$\begin{split} u_\mu = \hat{\alpha}\hat{u}_\mu-\frac{\hat{\alpha}^2}{r_D} \mathcal{A}_\mu = \left(1+\frac{\hat{\alpha}\hat{\theta}}{r_D(d-1)}\right)\hat{\alpha}\hat{u}_\mu - \frac{\hat{\alpha}^2}{r_D} \frac{\hat{a}_\mu}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]} \end{split}$$ Now, we want to write $\tilde{\Gamma}_{\mu\nu}{}^\rho$ in hatted variables.
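Before doing so, note that the scalar parts of the relations above, together with the inversions quoted earlier, can be checked symbolically. The following is a minimal sketch, assuming sympy is available; the common tensor structure (the factors of $\hat{u}_\nu$) is suppressed, so only the scalar coefficients are compared.

```python
import sympy as sp

alpha, d, theta, a, A = sp.symbols('alpha d theta a A')

# Dictionary: hatted quantities in terms of unhatted ones (scalar parts)
hat_theta = alpha*theta
hat_a = (1 + d/2*(alpha**2 - 1))*a
hat_A = A - (1 - alpha**2)*d/2*a

# Quoted inversions: unhatted quantities in terms of hatted ones
theta_inv = hat_theta/alpha
a_inv = hat_a/(1 + d/2*(alpha**2 - 1))
A_inv = hat_A - (alpha**2 - 1)*(d/2)*hat_a/(1 + d/2*(alpha**2 - 1))

assert sp.simplify(theta_inv - theta) == 0
assert sp.simplify(a_inv - a) == 0
assert sp.simplify(A_inv - A) == 0
print("scalar parts of the dictionary invert consistently")
```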
We start with $$\begin{split} \tilde{\Gamma}_{\mu\nu}{}^\rho &= (\hat{\alpha}^2-1)\left[\sigma_{\mu\nu}+\frac{\theta}{d-1}\ \left(P_{\mu\nu}+\frac{d}{2}u_\mu u_\nu\right)-da_{(\mu}u_{\nu)}\right]u^\rho\\ &\qquad + \frac{\hat{\alpha}^2-1}{\hat{\alpha}^2}\left[-2\omega^\rho{}_{(\mu}u_{\nu)}+(\frac{d}{2}-1)u_{\mu}u_{\nu}a^\rho\right] \\ \end{split}$$ and use the Dirichlet dictionary to get $$\begin{split} \tilde{\Gamma}_{\mu\nu}{}^\rho &= (1-\frac{1}{\hat{\alpha}^2})\left[ \hat{\sigma}_{\mu\nu}+\frac{\hat{\theta}}{d-1}\ \left(\hat{P}_{\mu\nu}+\frac{d}{2}\hat{\alpha}^2\hat{u}_\mu \hat{u}_\nu\right)-\frac{d\hat{\alpha}^2}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\hat{a}_{(\mu}\hat{u}_{\nu)}\right]\hat{u}^\rho\\ &\qquad + (\hat{\alpha}^2-1)\left[-2\hat{\omega}^\rho{}_{(\mu}\hat{u}_{\nu)}+\frac{\frac{d}{2}-1}{\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\hat{u}_{\mu}\hat{u}_{\nu}\hat{a}^\rho\right] \\ \end{split}$$ Non-relativistic scaling a la BMW {#A:bmw} ================================= In this appendix we review and correct the scaling limit of [@Bhattacharyya:2008kq]. The major change in notation we make is to replace $\epsilon_\text{BMW}$ by a parameter $\aleph^{-1}$, so that the BMW limit is the large-$\aleph$ asymptotics of the expressions below. For simplicity we consider a fluid without bulk viscosity (which includes conformal fluids) with an energy-momentum tensor $T^{\mu\nu}$ given by $$T_{\mu\nu} = p\, g_{\mu\nu} + (\varepsilon + p) \, u_\mu\,u_\nu - 2 \, \eta\, \sigma_{\mu\nu} + \ldots \label{Tbdy2}$$ where we assume that this fluid lives on a background spacetime with a metric $g_{\mu\nu}$ and for the moment have just written out the stress tensor to first order in gradients. Spacetime split for the non-relativistic scaling {#s:bmwA} ------------------------------------------------- We begin by decomposing this metric into an ambient part $g^{(0)}_{\mu\nu}$ and a forcing part $h_{\mu\nu}$, the split being done so as to recover explicit forcing terms in the Navier-Stokes equations (in addition to the pressure gradient term). One picks a suitable frame for the ambient metric, and writes the geometry as $$g_{\mu\nu} = g^{(0)}_{\mu\nu} + h_{\mu\nu} \ , \qquad g^{(0)}_{\mu\nu}\, dx^\mu\, dx^\nu = -dt^2 + g^{(0)}_{ij} \, dx^i\, dx^j \ ,$$ where ${g}^{(0)}_{ij}$ are slowly varying functions of $x^i$ with ${h}_{\mu\nu}$ being treated as a perturbation. The metric perturbations which force the fluid are taken to be $${h}_{\mu\nu} \,dx^\mu\,dx^\nu= 2\, {\aleph}^{-1}\, {k}^*_{i}\, dt\, dx^i + {\aleph}^{-2}\, \left({h}^*_{tt}\, dt^2 + {h}^*_{ij} \, dx^i \, dx^j \right)$$ where $\aleph$ is the book-keeping parameter that implements the BMW scaling (note $\aleph = \epsilon_\text{BMW}^{-1}$). We employ the notation that all the functions which have a $*$ subscript or superscript (which we freely interchange to keep formulae clear) are of a specific functional form with anisotropic scaling of their spatial and temporal gradients.
$${\cal Y}_*(t,x^i) : {\mathbb R}^{d-1,1} \mapsto {\mathbb R}\ , \;\; \text{such that} \;\; \{ \partial_t {\cal Y}_*(t,x^i), \nabla^{(0)}_i {\cal Y}_*(t,x^i)\} \sim \{{\cal O}(\aleph^{-2}) ,{\cal O}(\aleph^{-1})\}$$ The co-metric corresponding to the metric above is given as $$\begin{split} g^{\mu\nu} \partial_\mu \otimes \partial_\nu &= - \partial_t\otimes \partial_t + g^{ij}_{(0)}\partial_i\otimes\partial_j + 2\, {\aleph}^{-1}\ {k}_*^{i}\ \partial_t \otimes_s \partial_i\\ &\qquad - {\aleph}^{-2}\left[ \left({h}^*_{tt}-k_j^*k^j_*\right)\partial_t\otimes \partial_t + \left({h}_*^{ij}+k^i_*k^j_*\right)\partial_i\otimes \partial_j \right] \\ & \qquad + 2\, {\aleph}^{-3}\left[\left({h}^*_{tt}-k_j^*k^j_*\right) {k}_*^{i} - {h}_*^{ij} k^*_j\right]\partial_t \otimes_s \partial_i + {\cal O}(\aleph^{-4}) \end{split}$$ where we have freely raised and lowered the spatial indices with ${g}^{(0)}_{ij}$. The velocity field of the fluid is parameterized as $${u}^{\mu} = u^t\, \left(1, {\aleph}^{-1}\, {v}_*^i \right) \label{bmwd1}$$ where the function $u^t$ is determined via the constraint $g_{\mu\nu}\, u^\mu\, u^\nu =-1$. This gives the full velocity field in a large $\aleph$ expansion as $$\begin{split} u^t &=1 + \frac{\aleph^{-2}}{2} \left( h^*_{tt} + 2 \,k^*_{j}\, v^{j}_{*} +{g}^{(0)}_{jk} \, v^{j}_{*}\, v^{k}_{*} \right)+ {\cal O}(\aleph^{-4})\\ u^i &= \aleph^{-1} \, v_*^i + \frac{\aleph^{-3}}{2} \left( h^*_{tt} + 2 \,k^*_{j}\, v^{j}_{*} + {g}^{(0)}_{jk} \, v^{j}_{*}\, v^{k}_{*} \right)\, v_*^i + {\cal O}(\aleph^{-4})\\ {u}_t &= -1 - \frac{1}{2}\, {\aleph}^{-2} \, \left(- {h}^*_{tt} + {g}^{(0)}_{jk} \, {v}^j_* \, {v}^k_* \right)+ {\cal O}(\aleph^{-4})\\ {u}_i &= {\aleph}^{-1}\, \left( {v}^*_i + {k}^*_i \right)+ \aleph^{-3}\left[{h}^*_{ij}v^{j}_{*}+ \frac{1}{2} \left( h^*_{tt} + 2 \,k^*_{j}\, v^{j}_{*} +{g}^{(0)}_{jk} \, v^{j}_{*}\, v^{k}_{*} \right)\, \left( {v}^*_i + {k}^*_i \right) \right] + {\cal O}(\aleph^{-4}) \end{split}$$ which can alternately be written as $$\begin{split} u^\mu \partial_\mu &=\partial_t + \aleph^{-1} \, v_*^i \partial_i \\ &\quad + \frac{\aleph^{-2}}{2} \left( h^*_{tt} + 2 \,k^*_{j}\, v^{j}_{*} + {g}^{(0)}_{jk} \, v^{j}_{*}\, v^{k}_{*} \right)\partial_t + \frac{\aleph^{-3}}{2} \left( h^*_{tt} + 2 \,k^*_{j}\, v^{j}_{*} + {g}^{(0)}_{jk} \, v^{j}_{*}\, v^{k}_{*} \right)\, v_*^i\partial_i + {\cal O}(\aleph^{-4})\\ {u}_\mu dx^\mu &= -dt +{\aleph}^{-1}\, \left( {v}^*_i + {k}^*_i \right) dx^i - \frac{1}{2}\, {\aleph}^{-2} \, \left(- {h}^*_{tt} + {g}^{(0)}_{jk} \, {v}^j_* \, {v}^k_* \right)dt\\ &\quad + \aleph^{-3}\left[{h}^*_{ij}v^{j}_{*}+ \frac{1}{2} \left( h^*_{tt} + 2 \,k^*_{j}\, v^{j}_{*} + {g}^{(0)}_{jk} \, v^{j}_{*}\, v^{k}_{*} \right)\, \left( {v}^*_i + {k}^*_i \right) \right] dx^i + {\cal O}(\aleph^{-4})\\ \end{split}$$ Now, we are ready to calculate the velocity gradients - with some foresight, we will use the fact that the BMW limit is also an incompressibility limit where $v_*^i$ is divergenceless (see below). 
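As an aside, the quoted expansion of $u^t$ can be verified symbolically by imposing $g_{\mu\nu}u^\mu u^\nu=-1$ order by order in $\aleph^{-1}$, treating the scalar combinations $h^*_{tt}$, $k^*_j v^j_*$ and $g^{(0)}_{jk}v^j_* v^k_*$ as independent book-keeping symbols. A minimal sketch, assuming sympy is available:

```python
import sympy as sp

eps, htt, kv, vv = sp.symbols('eps htt kv vv')   # eps plays the role of aleph^{-1}

# S stands for the scalar combination h*_tt + 2 k*_j v^j_* + g^(0)_jk v^j_* v^k_*
S  = htt + 2*kv + vv
ut = 1 + sp.Rational(1, 2)*eps**2*S              # claimed expansion of u^t

# g_{mu nu} u^mu u^nu with u^mu = u^t (1, eps v^i_*); the eps^4 h*_{ij} v^i v^j
# contribution is dropped since it is beyond the order we keep
norm = ut**2*(-1 + eps**2*S)

residual = sp.expand(norm + 1)
assert all(residual.coeff(eps, k) == 0 for k in range(4))
print("u^t expansion is consistent with g_{mu nu} u^mu u^nu = -1 up to O(eps^4)")
```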
With this in mind, we can write the velocity gradients as $$\begin{split} {\theta} &= {\cal O}( {\aleph}^{-4}) \\ {a}_{\mu} dx^{\mu} &= {\aleph}^{-3} \left[ \partial_{t} {v}_{i}^{*}+{v}_{*}^{j} {\nabla}^{(0)}_{j} {v}_{i}^{*} -f_i^* \right] dx^{i} + {\cal O}( {\aleph}^{-4}) \\ {a}^{\mu} \partial_{\mu} &= {\aleph}^{-3} \left[ \partial_{t} {v}^{i}_{*}+{v}_{*}^{j} {\nabla}^{(0)}_{j} {v}^{i}_{*} -f^i_* \right] \partial_{i} + {\cal O}( {\aleph}^{-4}) \\ {\sigma}_{\mu \nu} dx^{\mu} dx^{\nu} &= {\aleph}^{-2}\, {\nabla}^{(0)}_{(i} {v}^{*}_{j)} \,dx^{i} dx^{j} -2{\aleph}^{-3} \, {v}^{j}_{*} \, {\nabla}^{(0)}_{(i} {v}^{*}_{j)} \,dx^{i} dt + {\cal O}( {\aleph}^{-4}) \\ {\sigma}^{\mu \nu} \partial_\mu\otimes\partial_\nu &= {\aleph}^{-2}{\nabla}_{(0)}^{(i} {v}_{*}^{j)}\partial_i\otimes\partial_j+ 2 {\aleph}^{-3} ({v}_{i}^{*}+k_i^*) {\nabla}_{(0)}^{(i} {v}_{*}^{j)}\partial_t\otimes_s\partial_j + {\cal O}( {\aleph}^{-4})\\ {\omega}_{\mu \nu} dx^{\mu}\wedge dx^{\nu} &= {\aleph}^{-2}\, {\nabla}^{(0)}_{[i} {v}^{*}_{j]} \,dx^{i} \wedge dx^{j} - 2{\aleph}^{-3} {v}^{j}_{*} \, {\nabla}^{(0)}_{[i} {v}^{*}_{j]} \,dx^{i}\wedge dt + {\cal O}( {\aleph}^{-4}) \\ \end{split}$$ where $\nabla^{(0)}_{i}$ is the covariant derivative compatible with the spatial metric ${g}^{(0)}_{ij}$ and $f_i^*$ is the ‘gravitational force’ acting on the fluid $$f_i^* \equiv \frac{1}{2}\partial_i {h}^*_{tt} - \partial_t {k}^*_i +\left[\nabla^{(0)}_i k^*_j - \nabla^{(0)}_j k^*_i\right] {v}_*^j = \frac{1}{2}\partial_i {h}^*_{tt} - \partial_t {k}^*_i +q^*_{ij} {v}_*^j \label{fidefn}$$ where we have introduced $q^*_{ij} \equiv \nabla^{(0)}_i k^*_j - \nabla^{(0)}_j k^*_i$. Navier-Stokes equations on a curved geometry {#s:bmwB} -------------------------------------------- Now, we turn to the scaling of the thermodynamic variables. We define the mass density $\rho_0$, the pressure per mass density $p_*$ and the kinematic viscosity $\nu_0$ by $$\begin{split} \rho_0 \equiv \epsilon_0 + p_0\ ,\quad p = p_0 + {\aleph}^{-2} \rho_0\ {p}_{*}\quad\text{and}\quad \eta_0 =\rho_0\, \nu_0 \end{split}$$ where as before the subscript $0$ indicates the background value. All other thermodynamic variables have similar scalings, for example, $$\varepsilon = \varepsilon_0 + \aleph^{-2}\rho_0 \, \varepsilon_* \quad\text{and}\quad b = b_0 + \aleph^{-2} \, \delta b_* \label{bmwd2}$$ The BMW limit is the scaling $\aleph \to \infty$, and in this limit the conservation equation $\nabla_\mu T^{\mu\nu} = 0$ reduces to $${\cal O}(\aleph^{-2}): \, \qquad \nabla_i^{(0)} \, v^i_* = 0 \label{incombdy}$$ and then a non-relativistic forced Navier-Stokes equation: $$\begin{split} {\cal O}(\aleph^{-3}): \qquad \left[\partial_t +v^j_*\nabla^{(0)}_j\right] v_i^* - 2\, \nu_0\, \nabla^{(0)^j} \left(\nabla^{(0)}_{(i} v^{*}_{j)}\right)= f_i^* - \nabla^{(0)}_i \left[p_*+\nu_0^2\, \frac{d\, (d-3)}{(d-1)\,(d-2)} \; R^{(0)}\right] \end{split} \label{nsbdy1}$$ where the kinematic viscosity $\nu_0$ is given as $$\nu_0 \equiv \frac{\eta_0}{\rho_0} = \frac{b_0}{d} \label{}$$ Before proceeding we should explain the origin of this equation since it differs from that presented in [@Bhattacharyya:2008kq]. On the l.h.s. we see a familiar term corresponding to the convective derivative of the non-relativistic velocity. The usual Laplacian term is modified due to the background curvature into the second derivative piece multiplying $\nu_0$. Its origin can be traced back to the term $-2 \, \eta \, \sigma_{\mu\nu}$ in the relativistic stress tensor above. On the r.h.s.
we have collected all the forcing terms: there is the familiar pressure gradient term along with two other terms that arise from curvature. $f_i^*$ is the forcing term that arises from the fluctuating part of the metric, as is clear from its definition; this term has been accounted for in [@Bhattacharyya:2008kq]. However, we should also see a forcing of the fluid from the ‘background’ curvature: the spatial part of the metric $g^{(0)}_{\mu\nu}$ is a curved spatial metric $g_{ij}^{(0)}$, and its effect on the fluid turns out to be at order ${\cal O}(\aleph^{-3})$, just the same as the other terms in the equation. However, its origins in the relativistic stress tensor are a bit more involved; it does not arise from any of the terms written down above but rather from a second order gradient term in the relativistic stress tensor, $2\,\eta\, b\, C_{\mu\alpha\nu\beta}u^\alpha u^\beta$, i.e., a coupling between the fluid and the background curvature [@Bhattacharyya:2008mz]. Note that we have here specialized to relativistic conformal fluids which have holographic duals, so that the transport coefficient multiplying the tensor structure $C_{\mu\alpha\nu\beta}u^\alpha u^\beta$ is fixed to be $2\,\eta\, b$. For a general fluid we can have a new transport coefficient here, $\kappa \propto \eta \, b$, and correspondingly we would replace the coefficient of $R^{(0)}$ in the Navier-Stokes equation with $\nu_0^2 \to \frac{\kappa_0}{\rho_0\, d}$. Also, we should note that while other tensor structures involving curvature couplings are allowed for non-conformal fluids, these will necessarily have non-vanishing trace and as a result will only show up at sub-leading order in the BMW scaling limit. At the risk of being overly pedantic we reiterate the fact that if we wish to consider the non-relativistic BMW scaling of a relativistic fluid, then we must necessarily work with higher order gradient terms in the relativistic fluid stress tensor. To obtain the correct non-relativistic equations up to the order where we encounter the Navier-Stokes equations, the relevant part of the relativistic stress tensor is given to be $$T_{\mu\nu} = p\, g_{\mu\nu} + (\varepsilon + p) \, u_\mu\,u_\nu -2\,\eta\, \sigma_{\mu\nu}+2\,\eta\, b\, C_{\mu\alpha\nu\beta}u^\alpha u^\beta \label{}$$ which is a subset of the full second order stress tensor derived in [@Bhattacharyya:2008mz] $$\label{enmom:eq} \begin{split} T_{\mu\nu}& = p\, g_{\mu\nu} + (\varepsilon + p) \, u_\mu\,u_\nu -2\,\eta\, \sigma_{\mu\nu}\\ &-2\,\eta \,\tau_\omega \, \left[u^{\lambda}\mathcal{D}_{\lambda}\sigma_{\mu \nu}+\omega_{\mu}{}^{\lambda}\sigma_{\lambda \nu}+\omega_\nu{}^\lambda \sigma_{\mu\lambda} \right]\\ &+2\,\eta\, b\left[u^{\lambda}\mathcal{D}_{\lambda}\sigma_{\mu \nu}+\sigma_{\mu}{}^{\lambda}\sigma_{\lambda \nu} -\frac{\sigma_{\alpha \beta}\sigma^{\alpha \beta}}{d-1}P_{\mu \nu}+ C_{\mu\alpha\nu\beta}u^\alpha u^\beta \right]+\ldots\\ \end{split}$$ It is a simple exercise to verify that none of the other terms involved in the second order stress tensor contribute to the BMW scaled equations at ${\cal O}(\aleph^{-3})$. The bulk metric dual to a non-relativistic fluid on the boundary of AdS {#s:bmwC} ----------------------------------------------------------------------- One can also construct the gravitational solutions dual to such fluids as described in [@Bhattacharyya:2008kq]. To do so we simply need to apply the scaling of parameters described earlier to the general fluid/gravity bulk metric dual to a relativistic fluid on the boundary of AdS.
Such a metric correct to second order in the relativistic gradient expansion was originally derived in [@Bhattacharyya:2008mz] generalizing the original result of [@Bhattacharyya:2008jc]. It was believed that this in general would suffice to find the gravity dual for a non-relativistic fluid that satisfies the incompressible Navier-Stokes equations derived above , . In fact the original computation presented in [@Bhattacharyya:2008kq] argued that it would actually suffice to consider the relativistic metric accurate to first order in gradients. Unfortunately, this turns out to be incorrect and one needs a subset of second order gradient terms along with one particular third order gradient term to solve Einstein’s equations to order ${\cal O}(\aleph^{-3})$. Note that this is necessary because it is at ${\cal O}(\aleph^{-3})$ that we encounter the dynamical content of the boundary fluid equations, viz., the Navier-Stokes equation . The new ingredient in our analysis is that we need to worry about a particular third order term proportional to ${\cal D}_\mu {\cal R}$ in the fluid/gravity correspondence. The term in question turns out to be computable using the original algorithm outlined for constructing bulk metrics dual to boundary fluids in [@Bhattacharyya:2008jc] and thankfully involves a decoupled tensor structure that can be sourced independently. Including this term, we find that for the non-relativistic fluid on the Dirichlet surface it suffices to consider the following truncation of the relativistic fluid/gravity metric to third order in boundary gradient expansion: $$\label{metric3trunc} \begin{split} ds^2&=-2 u_\mu dx^\mu \left( dr + r\ \mathcal{A}_\nu dx^\nu \right) \\ &+ \left[ r^2 g_{\mu\nu} +2u_{(\mu}\mathcal{S}_{\nu)\lambda}u^\lambda -\frac{2}{3r}\frac{u_{(\mu}\mathcal{D}_{\nu)}\mathcal{R}}{(d-1)(d-2)} \right]dx^\mu dx^\nu\\ &+r^2\left[ \frac{u_\mu u_\nu}{(br)^d} +2b F(br) \sigma_{\mu\nu}+4b^2 \frac{L(br)}{(br)^d}u_{(\mu}P_{\nu)}^{\lambda}\mathcal{D}_{\alpha}{\sigma^{\alpha}}_{\lambda}\right]dx^\mu dx^\nu \\ &-2(br)^2\left[ H_1(br) C_{\mu\alpha\nu\beta}u^\alpha u^\beta+\frac{b N(br)}{(br)^d} \frac{(d-3)u_{(\mu}\mathcal{D}_{\nu)}\mathcal{R}}{(d-1)(d-2)}\right] dx^\mu dx^\nu \\ & +\ldots\\ \end{split}$$ where the functions appearing above and their large $r$ asymptotics are given to be: $$\label{funcDreq} \begin{split} f(br) &\equiv 1-\frac{1}{(br)^{d}} \\ F(br)&\equiv \int_{br}^{\infty}\frac{y^{d-1}-1}{y(y^{d}-1)}dy \\ &\approx \frac{1}{br} -\frac{1}{d(br)^d}+ \frac{1}{(d+1)(br)^{d+1}}+\frac{\#}{(br)^{2d}}+\ldots\\ L(br) &\equiv \int_{br}^\infty\xi^{d-1}d\xi\int_{\xi}^\infty dy\ \frac{y-1}{y^3(y^d -1)} \\ &\approx \frac{1}{(d+1)(br)}-\frac{1}{2(d+2)(br)^2}+\frac{1}{(d+1)(2d+1)(br)^{d+1}}\\ &\quad-\frac{1}{(d+1)(2d+4)(br)^{d+2}} +\frac{\#}{(br)^{2d+1}}+\ldots \\ H_1(br)&\equiv \int_{br}^{\infty}\frac{y^{d-2}-1}{y(y^{d}-1)}dy \\ &\approx \frac{1}{2(br)^2}-\frac{1}{d(br)^d}+ \frac{1}{(d+2)(br)^{d+2}}+\frac{\#}{(br)^{2d}}+\ldots\\ N(br) &\equiv \int_{br}^\infty\xi^{d-1}d\xi\int_{\xi}^\infty dy\ \frac{y^2-1}{y^4(y^d -1)} \\ &\approx\frac{1}{(d+1)(br)} -\frac{1}{3(d+3)(br)^3}+\frac{1}{(d+1)(2d+1)(br)^{d+1}}\\ &\quad-\frac{1}{(d+3)(2d+3)(br)^{d+3}} +\frac{\#}{(br)^{2d+1}} +\ldots \\ \end{split}$$ There are various new curvature tensors and derivatives introduced above. 
${\cal D}$ denotes the Weyl covariant derivative introduced in [@Loganayagam:2008is] which was used to present the bulk metric dual relevant for fluid/gravity correspondence in the case of curved boundaries in [@Bhattacharyya:2008mz]. ${\cal R}_{\mu\nu}$ is likewise a Weyl covariant Ricci tensor and ${\cal S}_{\mu\nu}$ is a Weyl-covariant Schouten tensor $${\cal S}_{\mu\nu} = \frac{1}{d-2} \left( {\cal R}_{\mu\nu} - \frac{1}{2\, (d-1)}\, g_{\mu\nu}\, {\cal R}\right) \label{}$$ For further details of these objects and the complete form for the second order fluid/gravity metric accurate to second order in gradients we refer the reader to [@Bhattacharyya:2008mz]. Once we have the relativistic metric at hand it is a simple matter to employ the scalings outlined earlier. We find that the bulk metric dual to an incompressible Navier-Stokes fluid living on a spatially curved geometry at the boundary of AdS takes the form $$\label{bdybmwf} \begin{split} ds^2 &= ds_0^2 + {\aleph}^{-1} ds_1^2 + {\aleph}^{-2} ds_2^2 + {\aleph}^{-3} ds_3^2 + {\cal O}( {\aleph}^{-4}) \\ &\text{with}\\ ds_0^2 &= 2 \ dt\ dr + r^2\left[- f_0 dt^2 + {g}^{(0)}_{ij}dx^i dx^j\right]\\ ds_1^2 &= -2 \left( {v}^*_i + {k}^*_i \right)\ dx^i\ dr + 2 r^2 \left[ {k}^*_i -\left(1- f_0\right)\left( {v}^*_i + {k}^*_i \right)\right] dx^i dt \\ ds_2^2 &= 2 \left[- \frac{1}{2} {h}^*_{tt} + \frac{1}{2} {g}^{(0)}_{jk} \, {v}^j_* \, {v}^k_* \right]dt\ dr + r^2\left[ {h}^*_{tt}\, dt^2 + {h}^*_{ij} \, dx^i \, dx^j \right]\\ &\quad +r^2 \left(1-f_0\right) \left[\left(- {h}^*_{tt} + {g}^{(0)}_{jk} \, {v}^j_* \, {v}^k_*+ {p}_* d\right)dt^2 \right.\\ &\qquad \left. + \left( {v}^*_i + {k}^*_i \right) \left( {v}^*_j + {k}^*_j \right)dx^i dx^j \right]+2\,r^2\, b_0 {F}_0 {\nabla}^{(0)}_{(i} {v}^{*}_{j)} \,dx^{i} dx^{j}\\ &\qquad \red{-\frac{R^{(0)}}{(d-1)\,(d-2)} \, dt^2 - 2\, H_0 \left(\text{S}_{ij}^{(0)} - \frac{R^{(0)}\, g_{ij}^{(0)}}{2\, (d-1)\, (d-2)}\right) dx^i \,dx^ j }\\ ds_3^2 &= -2 \left[ {h}^*_{ij} {v}^{j}_{*}+ \left( \frac{1}{2} {h}^*_{tt} + \, {k}^*_{j}\, {v}^{j}_{*} +\frac{1}{2} {g}^{(0)}_{jk} \, {v}^{j}_{*}\, {v}^{k}_{*}\right)\, \left( {v}^*_i + {k}^*_i \right) \right] dx^i dr \\ &\quad + 2r \left[ \partial_{t} {v}_{i}^{*}+ {v}_{*}^{j} {\nabla}^{(0)}_{j} {v}_{i}^{*} - {f}_i^* \right] dx^{i}dt -4r^2b_0 {F}_0 {v}_{*}^j {\nabla}^{(0)}_{(i} {v}^{*}_{j)} \,dx^{i} dt \\ &\quad -2\,r^2 \left(1- f_0\right)\left[ {h}^*_{ij} {v}^{j}_{*}+ \left( {k}^*_{j}\, {v}^{j}_{*} + {g}^{(0)}_{jk} \, {v}^{j}_{*}\, {v}^{k}_{*}+ {p}_* d \right)\, \left( {v}^*_i + {k}^*_i \right) \right] dx^i dt \\ &\quad \red{- 2\,\frac{L_0}{(b_0r)^{d-2}} {\nabla}^2_{(0)}v^*_{i} \,dx^i \,dt -2 \, \text{S}_{ij}^{(0)}\, v_*^j \, dx^i \,dt- \frac{1}{d-2}\, {\nabla}^j_{(0)}q^*_{ij} \, dx^i \,dt}\\ &\quad \red{+ \frac{R^{(0)}}{(d-1)\,(d-2)}\, (v^*_i + k^*_i) \, dx^i \,dt + 4\, H_0 \left(\text{S}_{ij}^{(0)} - \frac{R^{(0)}\, g_{ij}^{(0)} }{2\, (d-1)\, (d-2)} \right) v_*^j \, dx^i\, dt }\\ &\quad \red{+ 2\, \frac{b_0\, N_0}{(b_0\,r)^{d-2}} \, \frac{d-3}{(d-1)\, (d-2) }\, \nabla^{(0)}_i R^{(0)} \, dx^i \,dt} \end{split}$$ where we have highlighted the terms that were missed in the previous analysis for quick comparison. Note that when the background spatial metric $g^{(0)}_{ij}$ is Ricci flat, many of the terms vanish except for two terms in the third order metric (which are proportional to $\nabla^2_{(0)} v_i^*$ and $\nabla^j_{(0)} q^*_{ij}$ respectively). 
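As a practical aside, the radial functions $F$, $H_1$, $L$ and $N$ collected above are ordinary (iterated) integrals, and the quoted large-$br$ asymptotics are easy to verify numerically. The following is a minimal sketch for $F$ and $H_1$, assuming numpy and scipy are available; the choices $d=4$ and $br=5$ are purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

d, x = 4, 5.0      # illustrative values: boundary dimension d and a moderately large br

def F(x, d):
    # F(br) = int_{br}^infty (y^{d-1} - 1) / (y (y^d - 1)) dy
    return quad(lambda y: (y**(d - 1) - 1.0)/(y*(y**d - 1.0)), x, np.inf)[0]

def H1(x, d):
    # H_1(br) = int_{br}^infty (y^{d-2} - 1) / (y (y^d - 1)) dy
    return quad(lambda y: (y**(d - 2) - 1.0)/(y*(y**d - 1.0)), x, np.inf)[0]

# Leading terms of the quoted large-(br) expansions
F_asym  = 1.0/x - 1.0/(d*x**d) + 1.0/((d + 1)*x**(d + 1))
H1_asym = 1.0/(2.0*x**2) - 1.0/(d*x**d) + 1.0/((d + 2)*x**(d + 2))

# The differences are of order (br)^{-2d}, matching the '#' terms in the expansions above
print(F(x, d) - F_asym, H1(x, d) - H1_asym)
```

The analogous check for the double integrals $L$ and $N$ works the same way with a nested quadrature.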
Finally we should note that $\text{S}_{ij}^{(0)}$ is used to denote the spatial components of the Schouten tensor of the full background metric $g^{(0)}_{\mu\nu}$; in particular, it should not be confused with the Schouten tensor of the spatial metric $g_{ij}^{(0)}$. The functions that enter into the metric above are: $$\begin{split} f_0 &\equiv 1-(b_0 r)^{-d}\ ,\quad {p}_* \equiv -\frac{\delta b_*}{b_0} \quad\text{and}\quad {F}_0 \equiv \int_{b_0 r}^{\infty}\frac{y^{d-1}-1}{y(y^{d}-1)}dy\\ {f}_i^* &\equiv \frac{1}{2}\partial_i {h}^*_{tt} - \partial_t {k}^*_i +\left[ {\nabla}^{(0)}_i {k}^*_j - {\nabla}^{(0)}_j {k}^*_i\right] {v}_*^j = \frac{1}{2}\partial_i {h}^*_{tt} - \partial_t {k}^*_i + {q}^*_{ij} {v}_*^j \\ L_0 &\equiv \int_{b_0 r}^\infty\xi^{d-1}d\xi\int_{\xi}^\infty dy\ \frac{y-1}{y^3(y^d -1)}\\ H_0 &\equiv (b_0 r)^2\, \int_{b_0r}^{\infty}\frac{y^{d-2}-1}{y(y^{d}-1)}dy \\ N_0 &\equiv \int_{b_0r}^\infty\xi^{d-1}d\xi\int_{\xi}^\infty dy\ \frac{y^2-1}{y^4(y^d -1)} \\ \end{split}$$ Finally, let us note that the Navier-Stokes equations themselves have an interesting scaling symmetry. Given any solution of these equations with $g^{(0)}_{ij} = \delta_{ij}$, we can consider replacing $$p_* \to \epsilon^2 \, p_{*\epsilon} \ , \qquad v^i_* \to \epsilon\, v^i_{*\epsilon} \ , \qquad f^i_* \to \epsilon^3\, f^i_{*\epsilon}$$ where again the functions entering the dynamics with subscript $*\epsilon$ have spatial gradients $\partial_i \sim \epsilon$ and temporal gradients $\partial_t \sim \epsilon^2$. This fact makes it possible to compound the Navier-Stokes scaling, which effectively allows one to replace $\aleph \to \aleph^{w}$ for some $w \geq 1$. Essentially the incompressible Navier-Stokes system of equations is a fixed point set of this scaling symmetry, a fact that we have made use of in the main text. Bulk dual of the non-relativistic Dirichlet fluid {#s:bmwR} ================================================= In this appendix we present without derivation the results for the non-relativistic scaling limit of the Dirichlet problem, generalizing the result quoted in the main text. Physically the only new content is that we allow the metric on the Dirichlet surface $\Sigma_D$ to be endowed with an arbitrarily slowly varying spatial metric. Thus, in contrast to the previous appendix, we are relaxing the constraint introduced there that $g_{ij}^{(0)}$ be Ricci flat. To indicate the differences note that the presence of a non-Ricci flat spatial metric $g^{(0)}_{ij}$ implies that the non-relativistic metric gets contributions from various tensor structures which appear at second order in the gradient expansion of the fluid/gravity correspondence. In terms of the metric written down in [@Bhattacharyya:2008mz] some of these are straightforward to see – any term involving curvature tensors of boundary data (and thus via the Dirichlet constitutive relation the hypersurface $\Sigma_D$ curvatures) will contribute at this order. However, we also get contributions from spatial gradients of the hypersurface curvature, i.e., in the BMW scaling limit we encounter terms of the form $\nabla^{(0)}_i R^{(0)}$. To guide the reader towards a derivation, we quote simply the relativistic tensor structures which are relevant and their scaling behavior under the Dirichlet BMW scaling.
$$\begin{split} \hat{g}_{\mu\nu}\, dx^\mu\, dx^\nu &= -dt^2 + \hat{g}^{(0)}_{ij} \, dx^i\, dx^j + 2\, {\hat{\aleph}}^{-1}\, \hat{k}^*_{i}\, dt\, dx^i + {\hat{\aleph}}^{-2}\, \left(\hat{h}^*_{tt}\, dt^2 + \hat{h}^*_{ij} \, dx^i \, dx^j \right).\\ \hat{u}_\mu dx^\mu &= -dt +{\hat{\aleph}}^{-1}\, \left(\hat{v}^*_i + \hat{k}^*_i \right) dx^i - \frac{1}{2}\, {\hat{\aleph}}^{-2} \, \left(- \hat{h}^*_{tt} + \hat{g}^{(0)}_{jk} \, \hat{v}^j_* \, \hat{v}^k_* \right)dt\\ &\quad + \hat{\aleph}^{-3}\left[\hat{h}^*_{ij}\hat{v}^{j}_{*}+ \frac{1}{2} \left( \hat{h}^*_{tt} + 2 \,\hat{k}^*_{j}\, \hat{v}^{j}_{*} + \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*} \right)\, \left( \hat{v}^*_i + \hat{k}^*_i \right) \right] dx^i + {\cal O}(\hat{\aleph}^{-4})\\ \hat{a}_{\mu} dx^{\mu} &= {\hat{\aleph}}^{-3} \left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \hat{\nabla}^{(0)}_{j} \hat{v}_{i}^{*} -\hat{f}_i^* \right] dx^{i} + {\cal O}( {\hat{\aleph}}^{-4}) \\ \hat{\sigma}_{\mu \nu} dx^{\mu} dx^{\nu} &= {\hat{\aleph}}^{-2}\, \hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \,dx^{i} dx^{j} -2{\hat{\aleph}}^{-3} \, \hat{v}^{j}_{*} \, \hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \,dx^{i} dt + {\cal O}( {\hat{\aleph}}^{-4}) \\ \end{split} \label{genscalefA}$$ as before along with new tensor structures: $$\begin{split} \hat{\mathcal{S}}_{\nu\lambda}\hat{u}^\lambda dx^\nu &={\hat{\aleph}}^{-2}\frac{\hat{R}^{(0)}}{2(d-1)(d-2)}dt+{\hat{\aleph}}^{-3}\left[\hat{\text{S}}^{(0)}_{ij}\hat{v}^j_*+ \frac{1}{2(d-2)}\hat{\nabla}^j_{(0)}\hat{q}^*_{ij}\right] dx^i + {\cal O}( {\hat{\aleph}}^{-4})\\ \hat{\mathcal{R}}_{\nu\lambda}\hat{u}^\lambda dx^\nu &={\hat{\aleph}}^{-3}\left[\hat{R}^{(0)}_{ij}\hat{v}^j_*+ \frac{1}{2}\hat{\nabla}^j_{(0)}\hat{q}^*_{ij}\right] dx^i + {\cal O}( {\hat{\aleph}}^{-4})\\ \hat{P}_{\nu}^{\lambda}\hat{\mathcal{D}}_{\alpha}{\hat{\sigma}^{\alpha}}_{\lambda}dx^\nu &=\frac{{\hat{\aleph}}^{-3}}{2}\hat{\nabla}^2_{(0)}\hat{v}^*_{i} dx^i + {\cal O}( {\hat{\aleph}}^{-4}) \\ \hat{C}_{\mu\alpha\nu\beta}\hat{u}^\alpha \hat{u}^\beta dx^\mu dx^\nu &= {\hat{\aleph}}^{-2} \left[\hat{\text{S}}^{(0)}_{ij}-\frac{\hat{R}^{(0)}}{2(d-1)(d-2)}\hat{g}^{(0)}_{ij}\right] dx^i dx^j\\ &\quad- \; 2\,{\hat{\aleph}}^{-3} \left[\hat{\text{S}}^{(0)}_{ij}-\frac{\hat{R}^{(0)}}{2(d-1)(d-2)}\hat{g}^{(0)}_{ij}\right] \hat{v}^j_* dx^i dt\\ \hat{\mathcal{D}}_\nu\hat{\mathcal{R}} dx^\nu &= {\hat{\aleph}}^{-3}\,\hat{\nabla}^j_{(0)}\hat{R}^{(0)}dx^j \end{split} \label{genscalefB}$$ where $$\begin{split} \hat{f}_i^* &\equiv \frac{1}{2}\partial_i \hat{h}^*_{tt} - \partial_t \hat{k}^*_i +\left[\hat{\nabla}^{(0)}_i \hat{k}^*_j - \hat{\nabla}^{(0)}_j \hat{k}^*_i\right] \hat{v}_*^j = \frac{1}{2}\partial_i \hat{h}^*_{tt} - \partial_t \hat{k}^*_i +\hat{q}^*_{ij} \hat{v}_*^j \\ \hat{q}^*_{ij} &\equiv \hat{\nabla}^{(0)}_i \hat{k}^*_j - \hat{\nabla}^{(0)}_j \hat{k}^*_i \end{split}$$ and we use $\hat{\text{S}}^{(0)}_{ij}$ to denote the spatial (i.e. $ij$-) components of the Schouten tensor for $g^{(0)}_{\mu\nu}$ keep the expressions somewhat compact. Boundary data for non-relativistic fluids on $\Sigma_D$ {#s:} ------------------------------------------------------- Given the scalings to obtain the non-relativistic fluid on $\Sigma_D$, it is possible to identify the relevant terms of the third order fluid/gravity metric that we need to retain to solve the Dirichlet problem. Per se the set of terms we need in the bulk metric is still given by , though as in the main text we want to use this information to solve for the Dirichlet constitutive relations. 
We first quote the results for the hypersurface stress tensor, which defines the hypersurface velocity $\hat{u}^\mu$, before indicating the answers for the boundary velocity field and metric in terms of the hypersurface data. We can parameterize the hypersurface stress tensor as in the main text; to obtain the correct non-relativistic equations on $\Sigma_D$ we need to retain some second order gradient terms involving hypersurface curvature tensors (analogously to the situation at the boundary described above). The relevant piece of the relativistic hypersurface stress tensor turns out to be: $$\begin{split} \hat{T}_{\mu\nu}&=(\hat{\varepsilon}+\hat{p})\,\hat{u}_\mu\hat{u}_\nu + \hat{p}\,\hat{g}_{\mu\nu}-2\,\hat{\eta} \,\hat{\sigma}_{\mu\nu} +\hat{\kappa}_C \, \hat{C}_{\mu\alpha\nu\beta}\, \hat{u}^\alpha \hat{u}^\beta +\ldots\\ \end{split}$$ where $$\begin{split} \hat{\varepsilon} &\equiv \frac{d-1}{8\pi G_{d+1}b^d}\frac{\hat{\alpha}}{\hat{\alpha}+1}\left[1- \frac{\hat{\alpha}\hat{R}^{(0)}}{2\,r_D^2\,(d-1)\,(d-2)} \right]\\ \hat{\varepsilon}+\hat{p} &\equiv \frac{d\hat{\alpha}}{16\pi G_{d+1}b^d}\left[1- \frac{\hat{\alpha}^2\,\hat{R}^{(0)} }{2\,r_D^2\,(d-1)\,(d-2)} \right]-\frac{1}{8\pi G_{d+1}b^d}\frac{\hat{\alpha}^2}{\hat{\alpha}+1}\frac{\hat{R}^{(0)}}{r_D^2(d-1)(d-2)}\\ \hat{\eta} &= \frac{1}{8\pi G_{d+1}b^{d-1}}\\ \hat{\kappa}_C &= \frac{1}{8\pi G_{d+1}b^{d-2}}\left[1-\frac{\hat{\alpha}^2}{\hat{\alpha}+1}\frac{(br_D)^{d-2} -1}{(br_D)^d}\right]\\ \end{split}$$ Conservation of this stress tensor, together with the scaling forms introduced above, leads to the incompressible Navier-Stokes equations on $\Sigma_D$. The boundary velocity field and metric can be expressed in terms of the Dirichlet data as before. Before we write out the expressions in their gory detail, let us introduce some new parameters $\hat{\kappa}_L$, $\hat{\kappa}_N$ which depend on the location of $\Sigma_D$ as $$\begin{split} \hat{\kappa}_L &\equiv \frac{1}{d}\left[ \xi(\xi^d-1)\frac{d}{d\xi}\left[\xi^{-d}L(\xi)\right]+ \frac{1}{\xi\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}+\frac{1}{\xi^2(d-2)}\right]_{\xi=br_D} \\ \hat{\kappa}_N &\equiv \frac{1}{d}\left\{ (d-3)\left[\xi(\xi^d-1)\frac{d}{d\xi}\left[\xi^{-d}N(\xi)\right]+ \frac{1}{\xi\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\right] \right.\\ &\left. -\frac{d-2}{2\, \xi^3\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\right\}_{\xi=br_D} \end{split}$$ These are in turn limiting values of certain functions which appear in various guises in the result for the Dirichlet constitutive relations and the bulk metric, and are collected once and for all below.
$$\begin{split} \hat{F}(br) &\equiv \frac{1}{\hat{\alpha}}\left(F(br)-F_D\right) \\ \hat{H}_1(br) &\equiv \left(H_1(br)-H_{1D}\right)\\ \hat{\xi}_1(br) &\equiv \frac{\hat{\alpha}}{b}\left(\frac{1}{r}-\frac{1}{r_D}\right)+\frac{\hat{\alpha}}{br_D}\left[1-\hat{\alpha}^2f(br)\right]=\frac{\hat{\alpha}}{br}\left[1-\frac{r}{r_D}\hat{\alpha}^2f(br)\right]\\ \hat{M}_1(br) &\equiv \frac{\hat{\alpha}^2}{b^2}\left(\frac{1}{r^2}-\frac{1}{r_D^2}\right)+\frac{\hat{\alpha}^2}{b^2r_D^2}\left[1-\hat{\alpha}^2f(br)\right]=\frac{\hat{\alpha}^2}{(br)^2}\left[1-\frac{r^2}{r_D^2}\hat{\alpha}^2f(br)\right]\\ \hat{M}_2(br) &\equiv\frac{\hat{\alpha}^2}{b^2r_D^2(d-2)}\left[1+\frac{2}{d\hat{\alpha}(\hat{\alpha}+1)}\right] \left[1-\hat{\alpha}^2f(br)\right] \\ \hat{L}_1(br) &\equiv \frac{L(br)}{(br)^d}-\frac{L_D}{(br_D)^d}+\kappa_L \left[1-\hat{\alpha}^2f(br)\right]\\ &\quad-\frac{\hat{\alpha}^2-1}{2b^2(d-2)}\left(\frac{1}{r^2}-\frac{1}{r_D^2}\right)-\frac{(\hat{\alpha}^2-1)}{2b\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\left(\frac{1}{r}-\frac{1}{r_D}\right)\\ \hat{N}_1(br) &\equiv \hat{\alpha}(d-3)\left(\frac{N(br)}{(br)^d}-\frac{N_D}{(br_D)^d}\right)+\kappa_N\hat{\alpha}\left[1-\hat{\alpha}^2f(br)\right]\\ &+\frac{\hat{\alpha}}{3b^3}\left(\frac{1}{r^3}-\frac{1}{r_D^3}\right)-\frac{\hat{\alpha}\left[\frac{\hat{\alpha}^2}{r_D^2}+(d-3)b^2(\hat{\alpha}^2-1)\right]}{2b^3\left[1+\frac{d}{2}(\hat{\alpha}^2-1)\right]}\left(\frac{1}{r}-\frac{1}{r_D}\right)\\ \end{split} \label{allhatfunx}$$ where the functions entering the above expressions and their asymptotics have been previously been collected together in . The final result of this exercise leads to: $$\begin{split} u_t &= -\hat{\alpha}_0 - \hat{\aleph}^{-2} \hat{\alpha}_0 \left[- \frac{1}{2}\hat{h}^*_{tt} + \frac{1}{2} \hat{g}^{(0)}_{jk} \, \hat{v}^j_* \, \hat{v}^k_* +\hat{p}_* \frac{\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} - \frac{\hat{\alpha}_0^2}{r_D^2} \, \frac{\hat{R}^{(0)}}{2\,(d-1)\,(d-2)} \right] + {\cal O}(\hat{\aleph}^{-4}) \\ \end{split}$$ $$\begin{split} u_i &= \hat{\aleph}^{-1}\, \hat{\alpha}_0\, \left(\hat{v}^*_i + \hat{k}^*_i \right)\\ &\quad +\hat{\aleph}^{-3}\, \hat{\alpha}_0\left[\hat{h}^*_{ij}\hat{v}^{j}_{*}+ \left( \frac{1}{2}\hat{h}^*_{tt} + \,\hat{k}^*_{j}\, \hat{v}^{j}_{*} +\frac{1}{2} \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*} +\hat{p}_* \frac{\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}\right)\, \left( \hat{v}^*_i + \hat{k}^*_i \right) \right]\\ &\qquad -\hat{\aleph}^{-3}\, \frac{\hat{\alpha}_0^2}{r_D\left(1+\frac{d}{2}(\hat{\alpha}_0^2-1)\right)}\left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \hat{\nabla}^{(0)}_{j} \hat{v}_{i}^{*}-\hat{f}_i^* \right] \\ &\qquad +\hat{\aleph}^{-3}\left[ b_0^2\, \hat{\kappa}_L\,\hat{\alpha}_0\, \hat{\nabla}^2_{(0)}v^*_i - b_0^3\, \hat{\kappa}_N \, \hat{\alpha}_0^2\, \frac{\nabla^{(0)}_i\, R^{(0)}}{(d-1)(d-2)} \right] \\ &\qquad + \hat{\aleph}^{-3}\, \frac{\hat{\alpha}_0^3}{r_D^2} \, \left[\hat{S}^{(0)}_{ij}\hat{v}^j_* - \frac{1}{d-2}\, \left(1 + \frac{2}{d\, \hat{\alpha}_0\, (\hat{\alpha}_0 +1) } \right) \hat{R}^{(0)}_{ij}\hat{v}^j_* - \frac{\hat{\nabla}^j_{(0)}\hat{q}^*_{ij}}{d\,(d-2)\, \hat{\alpha}_0\, (\hat{\alpha}_0 +1) }\right]\\ &\qquad + {\cal O}(\hat{\aleph}^{-4}) \end{split}$$ Further the Dirichlet constitutive relation for the boundary metric is $$\begin{split} g_{tt} &= -\hat{\alpha}_0^2 + \hat{\aleph}^{-2} \, \left[\hat{h}^*_{tt}\ + \left(1-\hat{\alpha}_0^2 \right) \left(- \hat{h}^*_{tt} + \hat{g}^{(0)}_{jk} \, 
\hat{v}^j_* \, \hat{v}^k_*+\hat{p}_* \frac{d\hat{\alpha}_0^2 }{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right) \right] \\ & \quad + \hat{\aleph}^{-2} \, \left[\frac{\hat{\alpha}_0^4}{r_D^2}\,\frac{\hat{R}^{(0)}}{(d-1)\,(d-2)}\right] + {\cal O}(\hat{\aleph}^{-4}) \\ g_{ti} &= \hat{\aleph}^{-1}\left( \hat{k}^*_i + (\hat{\alpha}_0^2-1)\, (\hat{k}_i^* + \hat{v}_i^*) \right) - \hat{\aleph}^{-3}\frac{\hat{\alpha}_0^3}{r_D\left(1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)\right)} \left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \hat{\nabla}^{(0)}_{j} \hat{v}_{i}^{*} -\hat{f}_i^* \right] \\ &\quad -\hat{\aleph}^{-3} \left(1-\hat{\alpha}_0^2 \right)\left[\hat{h}^*_{ij}\hat{v}^{j}_{*}+ \left(\hat{k}^*_{j}\, \hat{v}^{j}_{*} + \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*}+\hat{p}_* \frac{d\hat{\alpha}_0^2 }{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right)\, \left( \hat{v}^*_i + \hat{k}^*_i \right) \right]\\ &\quad +\;\hat{\aleph}^{-3} \left[\frac{2 \, b_0}{\hat{\alpha}_0}\, F(b_0\,r_D)\hat{v}_{*}^j\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} - 2\, b_0^2{H}_{1}(b_0 r_D) \left(\hat{\text{S}}^{(0)}_{ij}-\frac{\hat{R}^{(0)}}{2(d-1)(d-2)}\hat{g}^{(0)}_{ij}\right)v_*^j \right] \\ &\quad +\;\hat{\aleph}^{-3} \, \frac{\hat{\alpha}_0^4}{2\,r_D^2}\left[2\,\hat{\text{S}}^{(0)}_{ij}\,\hat{v}^j_*+ \frac{1}{(d-2)}\hat{\nabla}^j_{(0)}\hat{q}^*_{ij} -\frac{\hat{R}^{(0)}}{(d-1)\,(d-2)} \left(\hat{v}^*_i + \hat{k}^*_i \right) \right] \\ & \quad -\;\hat{\aleph}^{-3} \, \frac{\hat{\alpha}_0^2\,(\hat{\alpha}_0^2-1)}{2\,r_D^2\,(d-2)}\left[1+\frac{2}{d\,\hat{\alpha}_0\,(\hat{\alpha}_0+1)}\right] \left[2\,\hat{R}^{(0)}_{ij}\hat{v}^j_*+ \hat{\nabla}^j_{(0)}\hat{q}^*_{ij}\right] \\ &\quad +\;\hat{\aleph}^{-3} \, \left[\frac{1}{(d-1)\, (d-2)}\, b_0^3\,\hat{N}_1(\infty) \,\hat{\nabla}^j_{(0)}\hat{R}^{(0)} -\, b_0^2\, \hat{L}_1(\infty) \, \hat{\nabla}^2_{(0)}\hat{v}^*_{i} \right]\\ &\quad + \;{\cal O}(\hat{\aleph}^{-4})\\ g_{ij} &= \hat{g}^{(0)}_{ij} + \hat{\aleph}^{-2} \left(\hat{h}^*_{ij} -(\hat{\alpha}_0^2 -1)\, (\hat{v}^*_i + \hat{k}^*_i) \,(\hat{v}^*_j + \hat{k}^*_j) - \frac{2 \, b_0}{\hat{\alpha}_0}\, F(b_0\,r_D) \,\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \right) \\ & \quad + \; \hat{\aleph}^{-2} \,b_0^2\,{H}_{1}(b_0 r_D) \left(2\,\hat{\text{S}}^{(0)}_{ij}-\frac{\hat{R}^{(0)}}{(d-1)(d-2)}\hat{g}^{(0)}_{ij}\right) \\ & \quad + \;{\cal O}(\hat{\aleph}^{-4}) \end{split}$$ where $$\begin{split} \hat{L}_1(\infty) &\equiv\frac{1}{d}\left[ \frac{b_0\, r_D}{\hat{\alpha}_0^2}\, L'(b_0\, r_D)+ \frac{1-\frac{d}{2}}{b_0\, r_D\left[1+\frac{d}{2}(\hat{\alpha}_0^2-1)\right]}+\frac{1-\frac{d}{2}}{b_0\, r_D^2(d-2)}\right] \left(1-\hat{\alpha}_0^2\right)\\ \hat{N}_1(\infty) &\equiv \frac{1}{d}\left\{ (d-3)\left[\frac{b_0\, r_D}{\hat{\alpha}_0^2}N'(b_0\, r_D)+ \frac{1-d/2}{b_0\, r_D\left[1+\frac{d}{2}(\hat{\alpha}_0^2-1)\right]}\right] \right.\\ &\left. 
-\frac{d-2}{2\, b_0\, r_D^3\left[1+\frac{d}{2}(\hat{\alpha}_0^2-1)\right]}\right\}\hat{\alpha}_0\left[1-\hat{\alpha}_0^2\right] -\frac{\hat{\alpha}_0}{3\, b_0\, r_D^3}+\frac{\hat{\alpha}_0^3}{2\, b_0\, r_D^3\left[1+\frac{d}{2}(\hat{\alpha}_0^2-1)\right]}\\ \end{split}$$ The bulk dual for an arbitrarily spatially curved metric on $\Sigma_D$ {#s:} ------------------------------------------------------------------- The final result for the bulk metric dual to the non-relativistic fluid living on the Dirichlet hypersurface $\Sigma_D$ is simply obtained by plugging the scaling forms above into the relativistic bulk metric, having eliminated the boundary data in favor of the hypersurface data using the Dirichlet constitutive relations. One obtains: $$ds^2 = ds_0^2 + {\aleph}^{-1} ds_1^2 + {\aleph}^{-2} ds_2^2 + {\aleph}^{-3} ds_3^2 + {\cal O}( {\aleph}^{-4}) \label{}$$ with $$\label{hypbmwf1aR012} \begin{split} ds_0^2 &= 2\,\hat{\alpha}_0\ dt\ dr + r^2\left(-\hat{\alpha}_0^2 \,f_0 \,dt^2 + \hat{g}^{(0)}_{ij}\,dx^i dx^j\right)\\ ds_1^2 &= -2\, \hat{\alpha}_0\left( \hat{v}^*_i + \hat{k}^*_i \right)\ dx^i\ dr + 2\, r^2 \left[\hat{k}^*_i -\left(1-\hat{\alpha}_0^2\, f_0\right) \left( \hat{v}^*_i + \hat{k}^*_i \right)\right] dx^i\, dt \\ ds_2^2 &= 2\, \hat{\alpha}_0 \left[- \frac{1}{2}\hat{h}^*_{tt} + \frac{1}{2}\, \hat{g}^{(0)}_{jk} \, \hat{v}^j_* \, \hat{v}^k_* +\hat{p}_* \frac{\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right]dt\ dr + r^2\left[\hat{h}^*_{tt}\, dt^2 + \hat{h}^*_{ij} \, dx^i \, dx^j \right]\\ &\quad +r^2 \left(1-\hat{\alpha}_0^2\, f_0\right) \left[\left(- \hat{h}^*_{tt} + \hat{g}^{(0)}_{jk} \, \hat{v}^j_* \, \hat{v}^k_* +\hat{p}_* \frac{d\hat{\alpha}_0^2 }{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right)dt^2 \right.\\ &\qquad \left. \qquad + \left( \hat{v}^*_i + \hat{k}^*_i \right) \left( \hat{v}^*_j + \hat{k}^*_j \right)dx^i dx^j \right] +2\,r^2\,b_0\, \hat{F}_0\,\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \,dx^{i} dx^{j}\\ &\qquad - \frac{\hat{\alpha}_0^3}{r_D^2} \;\frac{R^{(0)}}{(d-1)(d-2)}\; dt \,dr -2\, b_0^2\, r^2\, \hat{H}_1(b_0r) \left[\hat{\text{S}}^{(0)}_{ij}-\frac{\hat{R}^{(0)}}{2(d-1)(d-2)}\hat{g}^{(0)}_{ij}\right] dx^i dx^j \\ & \qquad - \, (b_0r)^2\, \hat{M}_1(b_0r) \, \frac{R^{(0)}}{(d-1)(d-2)}\, dt^2 \end{split}$$ $$\label{hypbmwf1aR3} \begin{split} ds_3^2 &= -2\,\hat{\alpha}_0\left[\hat{h}^*_{ij}\hat{v}^{j}_{*}+ \left( \frac{1}{2}\hat{h}^*_{tt} + \,\hat{k}^*_{j}\, \hat{v}^{j}_{*} +\frac{1}{2} \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*} +\hat{p}_* \frac{\frac{d}{2}\, (\hat{\alpha}_0^2 -1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right)\, \left( \hat{v}^*_i + \hat{k}^*_i \right) \right] dx^i dr \\ &\quad +\frac{2\hat{\alpha}_0^2}{r_D\left(1+\frac{d}{2}(\hat{\alpha}_0^2-1)\right)}\left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \hat{\nabla}^{(0)}_{j} \hat{v}_{i}^{*} -\hat{f}_i^* \right] dx^{i}dr\\ &\quad + 2r\,\frac{\hat{\alpha}_0(2\,\hat{\xi}_0-1)}{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \left[ \partial_{t} \hat{v}_{i}^{*}+\hat{v}_{*}^{j} \hat{\nabla}^{(0)}_{j} \hat{v}_{i}^{*} -\hat{f}_i^* \right] dx^{i}dt\\ &\quad -2\,r^2 \left(1-\hat{\alpha}_0^2 \,f_0\right)\left[\hat{h}^*_{ij}\hat{v}^{j}_{*} + \left(\hat{k}^*_{j}\, \hat{v}^{j}_{*} + \hat{g}^{(0)}_{jk} \, \hat{v}^{j}_{*}\, \hat{v}^{k}_{*} +\hat{p}_* \frac{d\hat{\alpha}_0^2 }{1+\frac{d}{2}\, (\hat{\alpha}_0^2 -1)} \right)\, \left( \hat{v}^*_i + \hat{k}^*_i \right) \right] dx^i dt \\ &\qquad -4\, r^2\, b_0\, \hat{F}_0\, \hat{v}_{*}^j\hat{\nabla}^{(0)}_{(i} \hat{v}^{*}_{j)} \,dx^{i} dt -2\, b_0^2\, r^2\, \hat{L}_1 \nabla^2_{(0)} v_i^*
\, dt \, dx^i +2\, b_0^3\, r^2\, \hat{N}_1 \, \frac{\nabla_i^{(0)} R^{(0)}}{(d-1) (d-2)} \, dx^i\, dt \\ &\qquad -2\left[ b_0^2\, \kappa_L\,\hat{\alpha}_0\, \hat{\nabla}^2_{(0)}v^*_i - b_0^3\, \kappa_N \, \hat{\alpha}_0^2\, \frac{\nabla^{(0)}_i\, R^{(0)}}{(d-1)(d-2)} \right] dx^i \, dr\\ &\qquad -2\, \frac{\hat{\alpha}_0^3}{r_D^2} \, \left[\hat{\text{S}}^{(0)}_{ij}\hat{v}^j_* - \frac{1}{d-2}\, \left(1 + \frac{2}{d\, \hat{\alpha}_0\, (\hat{\alpha}_0 +1) } \right) \hat{R}^{(0)}_{ij}\hat{v}^j_* - \frac{\hat{\nabla}^j_{(0)}\hat{q}^*_{ij}}{d\,(d-2)\, \hat{\alpha}_0\, (\hat{\alpha}_0 +1) }\right] dx^i\, dr \\ &\qquad +4 \, b_0^2\, r^2\,\hat{H}_1(b_0r) \left[\hat{\text{S}}^{(0)}_{ij}-\frac{\hat{R}^{(0)}}{2(d-1)(d-2)}\hat{g}^{(0)}_{ij}\right] \hat{v}^j_* dx^i dt \\ &\qquad + 2\, (b_0r)^2\, \hat{M}_1(b_0r) \, \left( (v_i^* + k_i^*)\, \frac{\hat{R}^{(0)}}{2(d-1)(d-2)} - \hat{\text{S}}^{(0)}_{ij}\hat{v}^j_* - \frac{1}{2(d-2)}\hat{\nabla}^j_{(0)}\hat{q}^*_{ij}\right)dx^i\, dt \\ &\qquad + 2\, (b_0r)^2\, \hat{M}_2(b_0r) \,\left[\hat{R}^{(0)}_{ij}\hat{v}^j_*+ \frac{1}{2}\hat{\nabla}^j_{(0)}\hat{q}^*_{ij}\right] dx^i \, dt \\ &\qquad \end{split}$$ [10]{} J. M. Maldacena, “[The large N limit of superconformal field theories and supergravity]{},” [[*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 231–252](http://dx.doi.org/10.1023/A:1026654312961), [[arXiv:hep-th/9711200]{}](http://arxiv.org/abs/hep-th/9711200). L. Susskind and E. Witten, “[The holographic bound in anti-de Sitter space]{},” [[arXiv:hep-th/9805114]{}](http://arxiv.org/abs/hep-th/9805114). T. Banks, M. R. Douglas, G. T. Horowitz, and E. J. Martinec, “[AdS dynamics from conformal field theory]{},” [[arXiv:hep-th/9808016]{}](http://arxiv.org/abs/hep-th/9808016). A. W. Peet and J. Polchinski, “[UV/IR relations in AdS dynamics]{},” [[*Phys. Rev.*]{} [ **D59**]{} (1999) 065011](http://dx.doi.org/10.1103/PhysRevD.59.065011), [[arXiv:hep-th/9809022]{}](http://arxiv.org/abs/hep-th/9809022). J. de Boer, E. P. Verlinde, and H. L. Verlinde, “[On the holographic renormalization group]{},” [*JHEP*]{} [**08**]{} (2000) 003, [[arXiv:hep-th/9912012]{}](http://arxiv.org/abs/hep-th/9912012). I. Heemskerk and J. Polchinski, “[Holographic and Wilsonian Renormalization Groups]{},” [[arXiv:1010.1264 \[hep-th\]]{}](http://arxiv.org/abs/1010.1264). T. Faulkner, H. Liu, and M. Rangamani, “[Integrating out geometry: Holographic Wilsonian RG and the membrane paradigm]{},” [[*JHEP*]{} [**1108**]{} (2011) 051](http://dx.doi.org/10.1007/JHEP08(2011)051), [[arXiv:1010.4036 \[hep-th\]]{}](http://arxiv.org/abs/1010.4036). S. Bhattacharyya, V. E. Hubeny, S. Minwalla, and M. Rangamani, “[Nonlinear Fluid Dynamics from Gravity]{},” [[*JHEP*]{} [**02**]{} (2008) 045](http://dx.doi.org/10.1088/1126-6708/2008/02/045), [[arXiv:0712.2456 \[hep-th\]]{}](http://arxiv.org/abs/0712.2456). M. Rangamani, “[Gravity & Hydrodynamics: Lectures on the fluid-gravity correspondence]{},” [[*Class. Quant. Grav.*]{} [**26**]{} (2009) 224003](http://dx.doi.org/10.1088/0264-9381/26/22/224003), [[arXiv:0905.4352 \[hep-th\]]{}](http://arxiv.org/abs/0905.4352). V. E. Hubeny, “[The Fluid/Gravity Correspondence: a new perspective on the Membrane Paradigm]{},” [[ arXiv:1011.4948 \[gr-qc\]]{}](http://arxiv.org/abs/1011.4948). S. Bhattacharyya, R. Loganayagam, S. Minwalla, S. Nampuri, S. P. Trivedi, [ *et al.*]{}, “[Forced Fluid Dynamics from Gravity]{},” [[*JHEP*]{} [**0902**]{} (2009) 018](http://dx.doi.org/10.1088/1126-6708/2009/02/018), [[arXiv:0806.0006 \[hep-th\]]{}](http://arxiv.org/abs/0806.0006). S. 
Bhattacharyya, R. Loganayagam, I. Mandal, S. Minwalla, and A. Sharma, “[Conformal Nonlinear Fluid Dynamics from Gravity in Arbitrary Dimensions]{},” [[ *JHEP*]{} [**0812**]{} (2008) 116](http://dx.doi.org/10.1088/1126-6708/2008/12/116), [[ arXiv:0809.4272 \[hep-th\]]{}](http://arxiv.org/abs/0809.4272). S. Bhattacharyya, S. Minwalla, and S. R. Wadia, “[The Incompressible Non-Relativistic Navier-Stokes Equation from Gravity]{},” [[*JHEP*]{} [**0908**]{} (2009) 059](http://dx.doi.org/10.1088/1126-6708/2009/08/059), [[arXiv:0810.1545 \[hep-th\]]{}](http://arxiv.org/abs/0810.1545). D. Nickel and D. T. Son, “[Deconstructing holographic liquids]{},” [[arXiv:1009.3094 \[hep-th\]]{}](http://arxiv.org/abs/1009.3094). I. Bredberg, C. Keeler, V. Lysov, and A. Strominger, “[Wilsonian Approach to Fluid/Gravity Duality]{},” [[arXiv:1006.1902 \[hep-th\]]{}](http://arxiv.org/abs/1006.1902). I. Bredberg, C. Keeler, V. Lysov, and A. Strominger, “[From Navier-Stokes To Einstein]{},” [[arXiv:1101.2451 \[hep-th\]]{}](http://arxiv.org/abs/1101.2451). T. Damour, “[Black Hole Eddy Currents]{},” [[*Phys. Rev.*]{} [**D18**]{} (1978) 3598–3604](http://dx.doi.org/10.1103/PhysRevD.18.3598). K. Thorne, D. Macdonald, and R. Price, [*[Black holes: The Membrane paradigm]{}*]{}. Yale University Press, 1986. G. Compere, P. McFadden, K. Skenderis, and M. Taylor, “[The Holographic fluid dual to vacuum Einstein gravity]{},” [[arXiv:1103.3022 \[hep-th\]]{}](http://arxiv.org/abs/1103.3022). C. Eling, I. Fouxon, and Y. Oz, “[The Incompressible Navier-Stokes Equations From Membrane Dynamics]{},” [[*Phys.Lett.*]{} [ **B680**]{} (2009) 496–499](http://dx.doi.org/10.1016/j.physletb.2009.09.028), [[ arXiv:0905.3638 \[hep-th\]]{}](http://arxiv.org/abs/0905.3638). C. Eling and Y. Oz, “[Relativistic CFT Hydrodynamics from the Membrane Paradigm]{},” [[*JHEP*]{} [ **1002**]{} (2010) 069](http://dx.doi.org/10.1007/JHEP02(2010)069), [[ arXiv:0906.4999 \[hep-th\]]{}](http://arxiv.org/abs/0906.4999). R.-G. Cai, L. Li, and Y.-L. Zhang, “[Non-Relativistic Fluid Dual to Asymptotically AdS Gravity at Finite Cutoff Surface]{},” [[arXiv:1104.3281 \[hep-th\]]{}](http://arxiv.org/abs/1104.3281). I. Fouxon and Y. Oz, “[Conformal Field Theory as Microscopic Dynamics of Incompressible Euler and Navier-Stokes Equations]{},” [[*Phys.Rev.Lett.*]{} [**101**]{} (2008) 261602](http://dx.doi.org/10.1103/PhysRevLett.101.261602), [[ arXiv:0809.4512 \[hep-th\]]{}](http://arxiv.org/abs/0809.4512). L. Landau and E. Lifshitz, [*[Fluid Mechanics (Course of Theoretical Physics, Vol. 6)]{}*]{}, vol. 61. Butterworth-Heinemann, 1987 (2nd Edition). S. Kuperstein and A. Mukhopadhyay, “[The unconditional RG flow of the relativistic holographic fluid]{},” [[ arXiv:1105.4530 \[hep-th\]]{}](http://arxiv.org/abs/1105.4530). E. Witten, “[Multi-trace operators, boundary conditions, and AdS/CFT correspondence]{},” [[arXiv:hep-th/0112258]{}](http://arxiv.org/abs/hep-th/0112258). M. Berkooz, A. Sever, and A. Shomer, “[’Double trace’ deformations, boundary conditions and space-time singularities]{},” [*JHEP*]{} [**0205**]{} (2002) 034, [[arXiv:hep-th/0112264 \[hep-th\]]{}](http://arxiv.org/abs/hep-th/0112264). I. Papadimitriou, “[Holographic renormalization as a canonical transformation]{},” [[arXiv:1007.4592 \[hep-th\]]{}](http://arxiv.org/abs/1007.4592). D. Marolf and S. F. 
Ross, “[Reversing renormalization-group flows with AdS/CFT]{},” [[*JHEP*]{} [**0805**]{} (2008) 055](http://dx.doi.org/10.1088/1126-6708/2008/05/055), [[ arXiv:0705.4642 \[hep-th\]]{}](http://arxiv.org/abs/0705.4642). M. Henningson and K. Skenderis, “[The Holographic Weyl anomaly]{},” [*JHEP*]{} [**9807**]{} (1998) 023, [[ arXiv:hep-th/9806087 \[hep-th\]]{}](http://arxiv.org/abs/hep-th/9806087). V. Balasubramanian and P. Kraus, “[A Stress tensor for Anti-de Sitter gravity]{},” [[ *Commun.Math.Phys.*]{} [**208**]{} (1999) 413–428](http://dx.doi.org/10.1007/s002200050764), [[arXiv:hep-th/9902121 \[hep-th\]]{}](http://arxiv.org/abs/hep-th/9902121). S. Bhattacharyya, V. E. Hubeny, R. Loganayagam, G. Mandal, S. Minwalla, [*et al.*]{}, “[Local Fluid Dynamical Entropy from Gravity]{},” [[*JHEP*]{} [**0806**]{} (2008) 055](http://dx.doi.org/10.1088/1126-6708/2008/06/055), [[arXiv:0803.2526 \[hep-th\]]{}](http://arxiv.org/abs/0803.2526). R. Loganayagam, “[Entropy Current in Conformal Hydrodynamics]{},” [[*JHEP*]{} [**0805**]{} (2008) 087](http://dx.doi.org/10.1088/1126-6708/2008/05/087), [[arXiv:0801.3701 \[hep-th\]]{}](http://arxiv.org/abs/0801.3701). N. Iqbal and H. Liu, “[Universality of the hydrodynamic limit in AdS/CFT and the membrane paradigm]{},” [[*Phys. Rev.*]{} [ **D79**]{} (2009) 025023](http://dx.doi.org/10.1103/PhysRevD.79.025023), [[arXiv:0809.3808 \[hep-th\]]{}](http://arxiv.org/abs/0809.3808). S. A. Bludman and M. Ruderman, “[Possibility of the Speed of Sound Exceeding the Speed of Light in Ultradense Matter]{},” [[*Phys.Rev.*]{} [**170**]{} (1968) 1176–1184](http://dx.doi.org/10.1103/PhysRev.170.1176). A. Adams, N. Arkani-Hamed, S. Dubovsky, A. Nicolis, and R. Rattazzi, “[Causality, analyticity and an IR obstruction to UV completion]{},” [[*JHEP*]{} [**0610**]{} (2006) 014](http://dx.doi.org/10.1088/1126-6708/2006/10/014), [[ arXiv:hep-th/0602178 \[hep-th\]]{}](http://arxiv.org/abs/hep-th/0602178). V. N. Gusyatnikova and V. A. Yamaguzhin, “[Symmetries and Conservation Laws of Navier-Stokes Equations]{},”[*Acta. Appl. Math.*]{} [**15**]{} (04, 1989) 65–81. A. Bagchi and R. Gopakumar, “[Galilean Conformal Algebras and AdS/CFT]{},” [[*JHEP*]{} [**0907**]{} (2009) 037](http://dx.doi.org/10.1088/1126-6708/2009/07/037), [[arXiv:0902.1385 \[hep-th\]]{}](http://arxiv.org/abs/0902.1385). C. Misner, K. Thorne, and J. Wheeler, [*Gravitation*]{}. WH Freeman and Company, San Francisco, 1973. C. Ruede and N. Straumann, “[On Newton-Cartan Cosmology]{},” [*Helv. Phys. Acta*]{} [**70**]{} (1997) 318–335, [[gr-qc/9604054]{}](http://arxiv.org/abs/gr-qc/9604054). G. Compere and D. Marolf, “[Setting the boundary free in AdS/CFT]{},” [[ *Class.Quant.Grav.*]{} [**25**]{} (2008) 195014](http://dx.doi.org/10.1088/0264-9381/25/19/195014), [[arXiv:0805.1902 \[hep-th\]]{}](http://arxiv.org/abs/0805.1902). T. Andrade and D. Marolf, “[AdS/CFT beyond the unitarity bound]{},” [[arXiv:1105.6337 \[hep-th\]]{}](http://arxiv.org/abs/1105.6337). V. Lysov and A. Strominger, “[From Petrov-Einstein to Navier-Stokes]{},” [[arXiv:1104.5502 \[hep-th\]]{}](http://arxiv.org/abs/1104.5502). I. Bredberg, A. Strominger, “[Black Holes as Incompressible Fluids on the Sphere]{},” [[arXiv:1106.3084 \[hep-th\]]{}](http://arxiv.org/abs/1106.3084). J. Sonner and B. Withers, “[A gravity derivation of the Tisza-Landau Model in AdS/CFT]{},” [[ *Phys.Rev.*]{} [**D82**]{} (2010) 026001](http://dx.doi.org/10.1103/PhysRevD.82.026001), [[arXiv:1004.2707 \[hep-th\]]{}](http://arxiv.org/abs/1004.2707). J. Bhattacharya, S. Bhattacharyya, and S. 
Minwalla, “[Dissipative Superfluid dynamics from gravity]{},” [[*JHEP*]{} [**1104**]{} (2011) 125](http://dx.doi.org/10.1007/JHEP04(2011)125), [[arXiv:1101.3332 \[hep-th\]]{}](http://arxiv.org/abs/1101.3332). C. P. Herzog, N. Lisker, P. Surowka, and A. Yarom, “[Transport in holographic superfluids]{},” [[arXiv:1101.3330 \[hep-th\]]{}](http://arxiv.org/abs/1101.3330). J. Bhattacharya, S. Bhattacharyya, S. Minwalla, and A. Yarom, “[A theory of first order dissipative superfluid dynamics]{},” [[arXiv:1105.3733 \[hep-th\]]{}](http://arxiv.org/abs/1105.3733). H. P. Kunzle, “[Covarinat Newtonian limit of Lorentz spacetimes]{},”[*Gen. Rel. Grav.*]{} [**7**]{} (06, 1976) 445–457. J. A. N. Gonzalez and J. B. S. de Salas, “[The structure of the Newtonian limit]{},”[*J. Geom. Phys.*]{} [**44**]{} (03, 2003) 595–622. D. Son, “[Toward an AdS/cold atoms correspondence: A Geometric realization of the Schrodinger symmetry]{},” [[*Phys.Rev.*]{} [**D78**]{} (2008) 046003](http://dx.doi.org/10.1103/PhysRevD.78.046003), [[arXiv:0804.3972 \[hep-th\]]{}](http://arxiv.org/abs/0804.3972). K. Balasubramanian and J. McGreevy, “[Gravity duals for non-relativistic CFTs]{},” [[ *Phys.Rev.Lett.*]{} [**101**]{} (2008) 061601](http://dx.doi.org/10.1103/PhysRevLett.101.061601), [[arXiv:0804.4053 \[hep-th\]]{}](http://arxiv.org/abs/0804.4053). R. Emparan, T. Harmark, V. Niarchos, and N. A. Obers, “[Essentials of Blackfold Dynamics]{},” [[ *JHEP*]{} [**03**]{} (2010) 063](http://dx.doi.org/10.1007/JHEP03(2010)063), [[arXiv:0910.1601 \[hep-th\]]{}](http://arxiv.org/abs/0910.1601). R. Emparan, “[Blackfolds]{},” [[ arXiv:1106.2021 \[hep-th\]]{}](http://arxiv.org/abs/1106.2021). J. Camps, R. Emparan, and N. Haddad, “[Black Brane Viscosity and the Gregory-Laflamme Instability]{},” [[*JHEP*]{} [**1005**]{} (2010) 042](http://dx.doi.org/10.1007/JHEP05(2010)042), [[arXiv:1003.3636 \[hep-th\]]{}](http://arxiv.org/abs/1003.3636). P. Figueras and T. Wiseman, “[Gravity and large black holes in Randall-Sundrum II braneworlds]{},” [[arXiv:1105.2558 \[hep-th\]]{}](http://arxiv.org/abs/1105.2558). O. Aharony, S. S. Gubser, J. M. Maldacena, H. Ooguri, and Y. Oz, “[Large N field theories, string theory and gravity]{},” [[*Phys.Rept.*]{} [ **323**]{} (2000) 183–386](http://dx.doi.org/10.1016/S0370-1573(99)00083-6), [[ arXiv:hep-th/9905111 \[hep-th\]]{}](http://arxiv.org/abs/hep-th/9905111). O. Aharony, O. Bergman, D. L. Jafferis, and J. Maldacena, “[N=6 superconformal Chern-Simons-matter theories, M2-branes and their gravity duals]{},” [[*JHEP*]{} [**0810**]{} (2008) 091](http://dx.doi.org/10.1088/1126-6708/2008/10/091), [[arXiv:0806.1218 \[hep-th\]]{}](http://arxiv.org/abs/0806.1218). [^1]: d.k.brattan@durham.ac.uk [^2]: joan.camps@durham.ac.uk [^3]: nayagam@physics.harvard.edu [^4]: mukund.rangamani@durham.ac.uk [^5]: See [@Rangamani:2009xk; @Hubeny:2010wp] for reviews which describe developments in this area and [@Bhattacharyya:2008ji; @Bhattacharyya:2008mz; @Bhattacharyya:2008kq] for generalizations that will be of great use in the following. [^6]: It is natural to expect that such a construction might provide a holographic derivation of the hydrodynamic deconstruction described in [@Nickel:2010pr]. [^7]: In the asymptotically flat spacetime one also has to specify initial data for radial evolution on the past null infinity ${\mathscr I}^-$. The boundary conditions chosen in [@Bredberg:2011jq] are such that no disturbance propagates into the bulk spacetime from ${\mathscr I}^-$. 
[^8]: To avoid confusion we wish to emphasize that we will refer to the bulk Dirichlet problem as one defined in the preceding paragraph. While this reduces to the standard Dirichlet problem on the boundary of AdS when we take the surface $\Sigma_D$ to infinity, we will soon see that the bulk Dirichlet problem induces different boundary conditions at infinity when the surface $\Sigma_D$ is retained at a finite position and it therefore pays to maintain the distinction. [^9]: The implicit idea behind the uniqueness of the seed geometry here is based on the notion that equilibrium dynamics in the field theory at finite temperature is governed by a black hole, i.e., we are always in the ‘deconfined phase’ in the field theory on the boundary. [^10]: Note that in order to get something interesting, it is important that we have a solution space allowing for non-trivial metrics at infinity (which we do thanks to [@Bhattacharyya:2008mz]). We are not simply slicing a single spacetime but working with a family of geometries characterized by the boundary data ${\mathfrak X}$ which is being adjusted so as to agree with the Dirichlet boundary conditions. [^11]: There is an issue of counter-terms that one can use when working at finite radial coordinate in an asymptotically AdS spacetime. We will take the conservative view that the relevant counter-terms are the same as those necessary asymptotically. [^12]: We use the word dynamical to characterize the boundary metric in the following sense: the metric on the boundary depends on the dynamical degrees of freedom of the system, viz., $u_\mu$ and $T$, as in . This is a pre-specified constitutive relation for the boundary metric in terms of the fluid variables, not unlike the constitutive relation for the energy momentum tensor. We will call such a constitutive relation coming from bulk Dirichlet problem as the Dirichlet constitutive relation. The fluid at the boundary, therefore, sees a dynamic metric background but no new degrees of freedom are introduced and hence for example, there are no boundary Einstein’s equations that need to be solved. [^13]: We shall refer to scaling limit as the BMW limit after the authors of [@Bhattacharyya:2008kq], despite the fact that we are focussing on sub-sonic excitations and that it has been well documented in classic textbooks, cf., [@Landau:1965pi]. [^14]: We will for the moment refrain from imposing any IR boundary condition so as to be able to see the general structure. After all in pure AdS imposing regularity at the Poincaré horizon would kill the vev which has to vanish in the vacuum. [^15]: Note that the vev is $(16\pi\, G_{d+1})^{-1} \, \phi$ where $G_{d+1}$ is the gravitational constant in AdS$_{d+1}$. [^16]: We are working here with space-like momenta having $k^2>0$. Results for time-like momenta follow from replacing $k$ by $i\,k$ and using the Bessel relations $$\begin{split} \frac{2(ix)^{\nu}}{\Gamma(\nu)}\,K_{\nu}(2ix) &= -\frac{\pi}{2}\,\frac{2x^{\nu}}{\Gamma(\nu)}Y_{\nu}(2x) + i \sin\left[(-\nu)\pi\right]\Gamma(-\nu+1) x^{\nu}J_{\nu}(2x)\\ \frac{\Gamma(\nu)}{2(ix)^{\nu}}\, I_{\nu}(2ix) &= \frac{\Gamma(\nu)}{2x^{\nu}}J_{\nu}(2x)\\ \end{split}$$ [^17]: If we keep $k^4$ terms and higher, one needs to subtract appropriate counter-terms at that order. These counter-terms are determined by requiring that $\hat{\phi}_k$ is finite as $r_D\to\infty$. The explicit expressions for counter-terms to any required order can be determined using the expansions in (see for e.g., [@Papadimitriou:2010as]). 
[^18]: In fact, even for pure it is interesting to ask what the deformation on the boundary is when we move $\Sigma_D$ close to the Poincaré horizon; in this limit it seems natural to expect that asymptotically we obtain a Neumann boundary condition. We thank Don Marolf for emphasizing this to us. [^19]: We will use upper-case Latin indices for the bulk spacetime indices, reserving lower-case Greek indices for hypersurface or boundary indices. See for a list of conventions. [^20]: It might be useful to view this scalar function as physically being specified either as the level set of the red-shift factor, or by introducing a dynamical scalar field. [^21]: We thank Veronika Hubeny for extensive discussions on these issues. [^22]: \[alert\] At this point it is worthwhile to get a technical point out of the way. We will have two metric structures in the story henceforth, the hypersurface metric $\hat{g}_{\mu\nu}$ and a boundary metric $g_{\mu\nu}$. To avoid confusion, we will write all equations intrinsic to the hypersurface or to the boundary consistent with the respective metric structures. In practice this simply means that one raises/lowers indices of the equations with respect to the appropriate metric. These have to be handled with care, but by judicious use of the two metrics one can relate other components if required. [^23]: In the fluid/gravity literature one chooses to maintain ${\mathfrak u}_\mu = u_\mu$ to all orders in the gradient expansion for simplicity. We will generalize this suitably in the rest of the discussion to simplify our formulae. [^24]: We would like to thank Sayantani Bhattacharyya for collaboration on some of the ideas in this section. [^25]: The expressions for the hypersurface energy density and pressure $\hat{\varepsilon}$ and $\hat{p}$ can be re-written in terms of the hypersurface temperature $\hat{T}$ (see ) which is the more natural quantity on $\Sigma_D$. However, it is convenient for practical reasons to leave these definitions in terms of $b$. [^26]: We alert the reader again to footnote \[alert\] - the expressions in should be treated with care as the l.h.s. and r.h.s. contain contributions from quantities defined with respect to different metric structures. [^27]: For an early discussion see [@Bludman:1968zz] where similar issues for fluid models driven to high pressure regimes are discussed. [^28]: While the initial value problem for ideal, or even viscous fluids is ill-posed as the conservation equations are parabolic, we here want to drive home the point that the pathology we want to encounter happens for the sound modes that are usually non-problematic. [^29]: Note that most of these expressions are readily obtained by just ‘hatting’ the formulae in . [^30]: This point was initially missed by both our analysis and that of [@Bhattacharyya:2008kq]. It was originally thought that it would be sufficient to obtain the non-relativistic metric from just the first order relativistic metric. This unfortunately is not true and as a result the non-relativistic metrics quoted in v1 of this paper and in [@Bhattacharyya:2008kq] only solve Einstein’s equations to first order in the non-relativistic gradient expansion. The correct forms of the general expressions are now collected in . [^31]: Note that the highlighted terms involve Weyl invariant Ricci ${\hat{\mathcal R}}_{\mu\nu}$ and Schouten ${\hat{\mathcal S}}_{\mu\nu}$ curvature tensors which are defined in . 
[^32]: Note that we have retained certain terms at ${\cal O}(\hat{\aleph}^{-3})$ which are actually not necessary to solve the equation of motion at this order (e.g., the velocity cubed term). This is to facilitate ease of comparison of our results when we undertake the near-horizon analysis in , with those in the existing literature. [^33]: This scaling can be compounded with the scaling symmetry of the Navier-Stokes equations [@Bhattacharyya:2008kq] to change the exponents of $\hat{\varkappa}$ but we refrain from doing so for simplicity (see end of ). [^34]: Roughly speaking $\hat{\varkappa} \sim \hat{\aleph}$ of ; this is the overall parameter that will organize for us the hierarchy necessary in the near horizon limit. [^35]: We thank Sayantani Bhattacharyya for emphasizing this point to us. [^36]: Once we include the new corrections highlighted for e.g., in , we find that the boundary Dirichlet constitutive relations and the boundary gradient expansion are in tension. For instance we find terms which originate at ${\cal O}(\hat{\aleph}^{-3})$ migrate down to ${\cal O}(\hat{\aleph}^{-1})$. However, to the order we have looked at, all such terms are related to enforcing Landau frame choice both on the hypersurface and the boundary, and hence to a gauge choice for the bulk metric. Our statements in this section assume that such pure gauge terms are irrelevant, but this issue deserves further investigation. [^37]: We believe the answer to this question is most likely to be in the affirmative. If we consider deforming the boundary conditions at infinity by making the boundary metric an arbitrary local function of multi-traces of the stress tensor, we can think of the problem effectively as a mixed boundary condition a la [@Compere:2008us], which has been argued to suffer from ghosts generically (see also [@Andrade:2011dg]). [^38]: It has been pointed out in [@Bredberg:2011xw] that there is an obstruction to finding solutions with Dirichlet boundary conditions in the near horizon Rindler-like region, when the fluids live on compact curved spatial manifolds, such as a sphere (as would be relevant for the near horizon of an asymptotically flat Schwarzschild black hole). In our construction the near-horizon limit described in requires that we simultaneously scale the curvatures to zero, and so we are unable to see the origins of such obstructions. [^39]: We thank Roberto Emparan for useful discussions on this issue. [^40]: In the case of $k=1,2$, the supersymmetry should get enhanced to d=3, $\mathcal{N}$=8. [^41]: The corresponding problem for general function $r=r_D(x)$ can be reduced to this problem by a suitable choice of Weyl frame. [^42]: We have used $$\partial_\mu \left\{\frac{1}{\hat{\alpha}}\right\}=\frac{1}{\hat{\alpha}}\frac{d}{2}(\hat{\alpha}^2-1)\mathcal{A}_\mu$$
1
--- abstract: 'We extended a previous qualitative study of the intermittent behaviour of a chaotical nucleonic system by adding several quantitative analyses: of the configuration and kinetic energy spaces, power spectra, Shannon entropies, and Lyapunov exponents. The system is regarded as a classical “nuclear billiard” with an oscillating surface of a 2D Woods-Saxon potential well. For the monopole and dipole vibrational modes we bring new arguments in favour of the idea that the degree of chaoticity increases when shifting the oscillation frequency from the adiabatic to the resonance stage of the interaction. The order-chaos-order-chaos sequence is also thoroughly investigated and, for the monopole deformation case, an intermittency pattern is again found. Moreover, the coupling between the one-nucleon and collective degrees of freedom is shown to be essential in obtaining chaotic states.' author: - Daniel Felea - Cristian Constantin Bordeianu - Ion Valeriu Grossu - Călin Beşliu - Alexandru Jipa - 'Aurelian-Andrei Radu' - Emil Stan title: 'Intermittency route to chaos for the nuclear billiard - a quantitative study' --- \[intro\]Introduction ===================== We begin by briefly recalling that a sustained, concerted effort has been made to relate the emergence of collective energy dissipation through one- and two-body nuclear processes to the chaotical behaviour of nuclear systems. A few options in choosing the collective oscillation frequencies have come into focus in the past years, in connection with the onset of chaoticity for “nuclear billiards”. First of all, the issue of dissipation into thermal motion of the adiabatic collective vibrational energy of the potential well was treated for several multipolarities by Burgio, Baldo *et al.* On the other hand, when trying to associate different vibration frequencies with various nuclear processes, the path to chaos was found to change with the multipole order [@felea-01; @felea-02; @bordeianu-08a; @bordeianu-08b; @bordeianu-08c; @felea-09a]. This paper is intended to provide a quantitative argumentation, based on a systematic study of the configuration and kinetic energy spaces, power spectra, informational entropies, and largest Lyapunov exponents. The study complements several qualitative analyses previously presented in [@felea-09a]: sensitive dependence on the initial conditions, single-particle phase space maps, fractal dimensions of Poincaré maps, and autocorrelation functions. In short, we recall that, by studying the nucleonic dynamics in a Woods-Saxon potential, one finds that the degree of chaoticity of the system increases as the frequency of the 2D wall oscillation is raised. The main result of [@felea-09a] was an intermittent route to chaos for the monopole vibrations close to the resonance phase of a nuclear interaction. Still, we mention that the purpose of these two companion studies is only to report the detection of such intermittency for the “nuclear billiard”, not to establish its type according to [@pomeau-80], nor to compare it with other intermittency patterns from known experimental results. \[sec:1\]Toy model ================== We continue the study of the classical dynamical system proposed by Burgio, Baldo *et al.*, composed of $A$ nucleons with no charge, spin, or internal structure. A deep two-dimensional Woods-Saxon potential well, regarded as a “nuclear billiard”, is periodically hit by the nucleons. 
The Bohr Hamiltonian in polar coordinates is a sum of two components, kinetic ($E_{kin.}$) and potential ($E_{pot.}$), the kinetic one decoupling into radial ($E_r$), centrifugal ($E_L$), and collective ($E_{coll.}$) terms: $$E_{kin.} = E_r + E_L + E_{coll.} = {\sum_{j=1}^{A}}\left( \frac{p_{r_{j}}^{2}}{2m}+\frac{p_{\theta _{j}}^{2}}{2mr_{j}^{2}} \right) +\frac{p_{\alpha }^{2}}{2M},$$ $$E_{pot.} = {\sum_{j=1}^{A}} V\left(r_{j},R\left( \theta _{j}\right) \right)+\frac{M\Omega ^{2}\alpha ^{2}}{2}.$$ The phase space is defined by the particle and collective momenta and their conjugate coordinates: $\left( r,p_{r}\right)$, $\left( \theta,p_{\theta}\right)$ and $\left( \alpha,p_{\alpha}\right)$. The collective coordinate $\alpha$ oscillates with frequency $\Omega$, the Inglis mass $M$ is equal to $mAR_{0}^{2}$, and the nucleon mass is $m=938\ \rm{MeV}$. For the time being we are not interested in studying the nucleon dynamics beyond the Woods-Saxon barrier: $$V\left( r_j,R\left( \theta _j\right) \right) =\frac{V_0}{1+\exp \left[ \frac{r_j-R\left( \theta _j,\alpha \right) }{a}\right] },$$ and therefore we choose a deep well, $V_{0}=-1500\ \rm{MeV}$, and accordingly a low value for the diffusivity coefficient, $a=0.01\ \rm{fm}$. When considering the two-dimensional case, the frontier of the collective motion is described as a function of the collective variable $\alpha$ and of the Legendre polynomials $P_{L}\left( \cos \theta_{j}\right)$ [@burgio-95; @baldo-96; @baldo-98; @felea-09a]: $$R_j = R\left( \theta _j,\alpha \right) =R_0\left[ 1+\alpha P_L\left( \cos \theta_j\right) \right].$$ The oscillation degree of the potential well $L$ is considered for the monopole $\left(0\right)$, dipole $\left(1\right)$, and quadrupole $\left(2\right)$ cases. If the surface has a stationary behaviour, or whenever one takes into account the uncoupled Hamilton equations ([UCE]{}) for the particle: $$\stackrel{\cdot }{r_j}=\frac{p_{r_j}}{m},\ \stackrel{\cdot }{\theta _j}=\frac{p_{\theta _j}}{mr_j^2},\ \stackrel{\cdot }{p_{r_j}}=\frac{p_{\theta _j}^2}{mr_j^3}-\frac{\partial V}{\partial r_j},\ \stackrel{\cdot }{p_{\theta _j}}=-\frac{\partial V}{\partial R_j}\cdot \frac{\partial R_j}{\partial \theta _j},$$ and collective degrees of freedom (*d.o.f.*): $$\stackrel{\cdot }{\alpha }=\frac{p_\alpha }{M},\ \stackrel{\cdot }{p_\alpha}=-M\Omega ^2\alpha -\sum_{j=1}^{A}\left(\frac{\partial V}{\partial R_j} \cdot \frac{\partial R_j}{\partial \alpha }\right),$$ $R_{0}$ has a fixed value, chosen for consistency with previous papers [@burgio-95; @baldo-96; @baldo-98; @felea-09a] as $6\ \rm{fm}$. A Runge-Kutta type algorithm (order 2-3) with an optimized step size was used for solving the system of differential equations, while keeping the absolute errors of the phase space variables under $10^{-6}$ and conserving the total energy with a relative error $\Delta E/E \approx 10^{-8}$ (Fig. \[fig:1\]). We imposed the equilibrium condition between the pressure exerted by the particles and the mechanical pressure of the wall [@burgio-95; @baldo-96; @baldo-98; @felea-09a] and thus obtained the initial value of the collective variable as its equilibrium value perturbed by a small amount: $$\alpha_{0}=\frac{-1+\sqrt{1+8T/{mR_0^2\Omega ^2}}}{2}+0.15,$$ where $T=36\ \rm{MeV}$ [@burgio-95; @baldo-96; @baldo-98; @felea-09a] is the two-dimensional kinetic energy and also the temperature of the nuclear system, when considering the natural system of units ($\hbar=c=k_{B}=1$). 
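For illustration, the following is a minimal numerical sketch (in Python, using `scipy`) of how the uncoupled Hamilton equations ([UCE]{}) for a single nucleon in the stationary 2D Woods-Saxon well can be integrated with an adaptive Runge-Kutta 2(3) scheme while monitoring the conservation of the total energy. The function names, the initial condition, and the tolerances are our own illustrative choices and are not the ones used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

V0, a, R0, m = -1500.0, 0.01, 6.0, 938.0          # MeV, fm, fm, MeV (hbar = c = k_B = 1)

def V(r):
    """Deep Woods-Saxon well with a stationary surface, R(theta) = R0."""
    return V0 / (1.0 + np.exp((r - R0) / a))

def dVdr(r):
    e = np.exp((r - R0) / a)
    return -V0 * e / (a * (1.0 + e) ** 2)

def uce_rhs(t, y):
    """Uncoupled Hamilton equations for (r, theta, p_r, p_theta); p_theta is conserved."""
    r, theta, pr, ptheta = y
    return [pr / m,
            ptheta / (m * r ** 2),
            ptheta ** 2 / (m * r ** 3) - dVdr(r),
            0.0]

# illustrative initial condition: 2D kinetic energy T = 36 MeV split evenly between the
# radial and tangential directions (p_theta = r * p_tangential)
T, r_start = 36.0, 3.0
p0 = np.sqrt(2.0 * m * T)
y0 = [r_start, 0.0, p0 / np.sqrt(2.0), r_start * p0 / np.sqrt(2.0)]

sol = solve_ivp(uce_rhs, (0.0, 1600.0), y0, method="RK23",
                rtol=1e-10, atol=1e-6, max_step=1.0, dense_output=True)

def energy(y):
    r, theta, pr, ptheta = y
    return pr ** 2 / (2 * m) + ptheta ** 2 / (2 * m * r ** 2) + V(r)

print("relative energy drift:",
      abs(energy(sol.y[:, -1]) - energy(y0)) / abs(energy(y0)))
```

The bunch of nearby trajectories analyzed in the next section can be obtained by repeating such an integration with the initial radius shifted by $\Delta r = 0.01\ \rm{fm}$.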
\[sec:2\]The quantitative analysis of the route to chaos ======================================================== We carry on the study begun in [@felea-09a] by gradually changing the degree of vibration of the potential wall from a slow motion (the adiabatic state) to a rapid one (the so-called resonance state of the interaction). In the first case, the collective motion is described by a radian frequency smaller than $0.05\ c/\rm{fm}$. The latter is dominated by frequencies close to the one-nucleon collisional frequency: $$\omega _{part}=\frac{\pi}{R_0}\cdot\sqrt{\frac{2T}{m}}\approx 0.145\ c/\rm{fm}.$$ The physical motivation for studying the one-nucleon chaotical dynamics in the “nuclear billiard”, with frequencies ranging from the adiabatic to the resonance regime of a nuclear interaction, is explained in some detail in [@felea-09a]. We briefly recall that the process of nuclear multifragmentation can be viewed as a resonance process and that, for smaller excitation energies, nuclear evaporation or breakup of a projectile nucleus occurs when the energy is shared between the collective and one-nucleon degrees of freedom. Using several types of analysis (sensitive dependence on the initial conditions, single-particle phase space maps, fractal dimensions of Poincaré maps and autocorrelation functions), we showed that an intermittent route to chaos is observed in the monopole case when increasing the vibrational frequency to $\Omega = 0.1\ c/\rm{fm}$ [@felea-09a]. In the resonance phase of the interaction the onset of chaotical behaviour was found to occur earlier than for any of the adiabatic oscillations of the Woods-Saxon potential well. We present here other methods [@schuster-84] supporting the idea that the degree of chaoticity increases when moving from the adiabatic to the resonance regime: analyses of the configuration and kinetic energy spaces, power spectra, generalized informational entropies and Lyapunov exponents. Furthermore, we try to identify possible pathways to chaos, including the intermittent one previously put in evidence [@felea-09a]. Configuration and kinetic energy spaces --------------------------------------- In order to establish whether a specific physical system presents chaotic dynamics and to identify possible routes to chaos, we analyzed the behaviour of a small bunch of trajectories in the configuration and in the kinetic energy space, respectively. For example, we took five trajectories separated by an $\epsilon = \Delta r=0.01\ \rm{fm}$ aperture, while keeping the rest of the initial phase space variables constant, and let the system evolve over a given time $\left(\Delta t = 1,600\ \rm{fm}/c\right)$. For the transient stages from adiabatic to resonance, the temporal evolution of an initially confined trajectory bundle was studied for the monopole and dipole oscillation modes of the potential wall and also for the limit situation in which the individual and collective degrees of freedom remain uncoupled (Figs. \[fig:2\] and \[fig:3\]). The configuration space revealed a high degree of symmetry in the $\left(r,\theta\right)$ plane in both cases, the $\left( 4+2\right)$ uncoupled nonlinear differential equations and the monopole one (left and middle panels). It also showed that the central zone remained uncovered, reflecting the conservation of the nucleon angular momentum. Another important conclusion followed from the definition of the stability concept of a dynamical system. 
For the aforementioned cases the dispersed trajectory pack periodically regroups on the frontier that delimits the forbidden zone of the phase space. This type of behaviour corresponds to the definition of stability given by Poisson (see, e.g., [@holmes-96]). We recall that Poisson stability defines as steady the motion of a particle system whose configuration comes close, from time to time, to its initial position. At a first glance, in the simple [UCE]{} case one can distinguish two extreme values of the radius of the periodic particle motion in the 2D potential well: $r_{min}(UCE) \approx 1.42\ \rm{fm}$ and $r_{Max}(UCE) \approx 5.96\ \rm{fm}$. These values correspond to the roots of the boundary equation: $$E = \frac{p_{\theta}^2}{2mr^2}+V\left(r \right),$$ for a given one-nucleon energy $E$, when the radial component of the velocity vanishes ($\stackrel{\cdot }{r}=0$). The analysis of the particle motion in the configuration space is similar to that applied to any system with bound unclosed trajectories. The nucleonic motion takes place within a circular crown (the so-called *annulus*) determined by the concentric circles of radii $r_{min}$ and $r_{Max}$. The trajectory is symmetric about any turning point. For $\Delta t = 1,600\ \rm{fm}/c$ we identify in the [UCE]{} case as many as $27$ distinct apocenters (at $r=r_{Max}$). These are correlated with a number of $13$ complete and one incomplete revolutions about the center of the force field (*i.e.* $27$ straight lines before $r(t)$ changes its sense of variation). The particle completely sweeps the 2D configuration space twice after $1,537\ \rm{fm}/c$. However, we notice that the bound trajectories are open, which means that the orbits never pass twice through a given point (see Fig. \[fig:2\]), in accordance with Bertrand’s theorem. We briefly recall that for a bound orbit to be closed, the angle between two consecutive apocenters must be: $$\Delta \theta = 2\pi \cdot \frac{n_{1}}{n_{2}},$$ *i.e.* after $n_{2}$ revolutions about the center, the radius vector should sweep out a multiple $n_{1}$ of $2\pi$ [radians]{}. For the [UCE]{} case considered above we consequently obtain $\Delta \theta_{UCE} \approx 4\pi/13$ [radians]{} (Fig. \[fig:2\] - left column). The kinetic energy points are displayed in right isosceles triangular shaped patterns (Fig. \[fig:3\]), whose hypotenuses are described by Eqs. (11) and (12), for the two specific non-chaotical situations, [UCE]{} and the intermittent monopolar “window” that emerges at $\Omega = 0.1\ c/\rm{fm}$: $$E_{L_{UCE}} = 18.05 - E_r,$$ $$E_{L_{\Omega=0.1\: c/\rm{fm}}} = 18.09 - E_r.$$ Moreover, for the intermittency frequency of monopole oscillations, one can notice a second smaller segment with the same negative slope: $$E_{L_{\Omega=0.1\: c/\rm{fm}}} = 5.87 - E_r.$$ For the uncoupled differential equations there are two extreme values of the centrifugal kinetic energy, $E_{L_{min}}(UCE) = 1.02\ \rm{MeV}$ and $E_{L_{Max}}(UCE) = 18.05\ \rm{MeV}$, associated with $r_{Max}(UCE)$ and $r_{min}(UCE)$, respectively (left column of Fig. \[fig:3\] and Eq. (1)). As for the monopolar intermittency, we can distinguish just five distinct values for the $E_{L}$: $1.32\ \rm{MeV}$, $2.13\ \rm{MeV}$, $3.01\ \rm{MeV}$, $5.87\ \rm{MeV}$, and $18.09\ \rm{MeV}$ (central plot of Fig. 
\[fig:3\]), correlated with stationary radii: $r_{1} \approx 5.26\ \rm{fm}$, $r_{2} \approx 4.14\ \rm{fm}$, $r_{3} \approx 3.49\ \rm{fm}$, $r_{4} \approx 2.51\ \rm{fm}$, and $r_{5} \approx 1.43\ \rm{fm}$ (Eq. (1), Fig. 2 of [@felea-09a], and central plot of Fig. \[fig:2\]). Thus, the nucleonic motion for the intermittent case is composed of alternated revolutions about the force field centre, forming a cyclic symmetrical structure, e.g., $r_{1}$, $r_{5}$, $r_{2}$, $r_{4}$, $r_{3}$, $r_{4}$, $r_{2}$, $r_{5}$, $r_{1}$, and so on (Fig. \[fig:2\]). This behaviour can be easily verified through the analysis of the sensitive dependence on the initial conditions, previously presented (third column of Fig. 2 - [@felea-09a]). It should also be mentioned that, following this radius alternation, the nucleon covers in the 2D configuration space $\approx 2\pi$ radians after $6$ full revolutions in almost $590\ \rm{fm}/c$. Concluding, we highlight once more that ordered, non-chaotical events exhibit periodical symmetrical patterns in the configuration and kinetic energy spaces. This was shown to be a characteristic feature of the uncoupled nonlinear Hamilton equations case and also of the steady, intermittent behaviour arising in the monopole case at the $0.1\ c/\rm{fm}$ vibrational radian frequency. A tendency to compactly fill the kinetic energy space when increasing the monopolar vibrations (from $0.05\ c/\rm{fm}$ to $0.145\ c/\rm{fm}$) was observed, except for the intermittent situation described above. For the $L = 1$ oscillation mode of the potential well, it seems that at the same frequency ($\Omega = 0.1\ c/\rm{fm}$) a somewhat intermittent behaviour could also emerge, but this proved to be elusive, as verified when returning to this issue with the help of the informational entropies and Lyapunov exponents and analyzing the system over longer time periods. Power spectra ------------- In order to better distinguish chaos from a multiply periodic behaviour, which can also exhibit an erratic pattern, we used the Fourier transform of the analyzed signals: $$x\left( \omega \right) = \lim_{T\rightarrow \infty} \int^{T}_{0} e^{i\omega t} \cdot x\left( t\right) \ dt\ ,$$ $$x\left( \omega \right) = \lim_{N\rightarrow \infty} \sum^{N}_{n=0} e^{i\omega t_{n}} \cdot x\left( t_{n}\right).$$ For a multiply periodic movement the power spectrum: $$P\left( \omega \right) = \left|x\left( \omega \right)\right|^{2},$$ will only contain a number of discrete lines, the fundamental frequencies of the system and their associated sets of harmonics, while the chaotical behaviour is completely aperiodical and is represented by a continuous or quasi-continuous broadband. The obtained results are presented in Figures \[fig:4\]-\[fig:7\]. As a persistent feature of the analyzed physical system one should mention that for the monopole and dipole deformation degrees of the potential well (Figures \[fig:5\] and \[fig:6\]) the chaotic behaviour increases in time, thus confirming previous results. The transition towards a chaotic regime was again put in evidence when passing from the adiabatic to the resonance stage of the interaction. The power spectra reveal, as expected, the intermittent feature of the transition in the monopolar case at $\Omega=0.1\ c/\rm{fm}$. 
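For illustration, the following is a short sketch of how the discrete power spectrum $P(\omega)=\left|x(\omega)\right|^{2}$ can be computed for a uniformly sampled signal such as the one-nucleon radial coordinate; the Hann window and the sampling grid are our own choices and not part of the original analysis.

```python
import numpy as np

def power_spectrum(signal, dt):
    """Discrete power spectrum P(omega) = |x(omega)|^2 of a uniformly sampled signal."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                      # remove the DC component
    xw = np.fft.rfft(x * np.hanning(len(x)))              # mild window against spectral leakage
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(x), d=dt)   # radian frequency, here in c/fm
    return omega, np.abs(xw) ** 2

# e.g., applied to the radial coordinate sampled from the integration sketch above:
# t_grid = np.linspace(0.0, 1600.0, 4096)
# omega, P = power_spectrum(sol.sol(t_grid)[0], t_grid[1] - t_grid[0])
```

A multiply periodic signal (the [UCE]{} case or the intermittent monopole “window”) then shows isolated peaks at the fundamental frequency and its harmonics, whereas a chaotic signal produces a quasi-continuous broadband.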
This intermittent feature can be detected for periods of time large enough ($\Delta t \geq 1,600\ \rm{fm}/c$) to positively identify chaotic patterns, through the transition from a quasi-continuous spectrum of the one-nucleon radial coordinate ($\Omega _{ad}=0.02\ c/\rm{fm}$) to a discrete periodical one, containing the fundamental frequencies of the system and their harmonics ($\Omega=0.1\ c/\rm{fm}$), and again to a continuous spectrum at the resonance vibrational frequency ($\Omega _{res}=0.145\ c/\rm{fm}$) (Fig. \[fig:5\] - right panels). The order-chaos-order-chaos sequence can also be spotted for the monopole oscillations in the power spectra of the collective degree of freedom (Fig. \[fig:7\] - second column). The temporal series of the radius variable show a symmetrical sawtooth waveform for the uncoupled situation at any chosen vibration frequency, and also for the monopole case at adiabatic collective oscillations. For the remaining cases, an asymmetrical sawtooth form generally defines the series, but sometimes more complicated patterns appear at higher multipole orders (Figures 1-4 - [@felea-09a]). The difference between two successive maxima in the temporal series of the radius variable for the [UCE]{} case is $T_{0_{UCE}} \approx 1,537/26 = 59.1\ \rm{fm}/c$ (left column of Fig. \[fig:2\]) and can also be obtained from the analysis of the sensitive dependence on the initial conditions (see Fig. 1 - [@felea-09a]). The fundamental frequency for the radial sawtooth temporal series, $\omega_{0_{UCE}}=2\pi/T_{0_{UCE}}=0.106\ c/\rm{fm}$, and its first three harmonics, $\omega_{1_{UCE}}=0.212\ c/\rm{fm}$, $\omega_{2_{UCE}}=0.318\ c/\rm{fm}$, and $\omega_{3_{UCE}}=0.424\ c/\rm{fm}$, can be easily traced in Figure \[fig:4\]. As for the “window” of intermittency at $L = 0$, we obtained $T_{0_{int}} \approx 590/12 = 49.2\ \rm{fm}/c$ (Fig. 2 - [@felea-09a] and central plot of the current Fig. \[fig:2\]). This gives the corresponding fundamental frequency $\omega_{0_{int}}=0.128\ c/\rm{fm}$ and its associated harmonics $\omega_{1_{int}}=0.256\ c/\rm{fm}$, $\omega_{2_{int}}=0.384\ c/\rm{fm}$, and $\omega_{3_{int}}=0.512\ c/\rm{fm}$ (Fig. \[fig:5\]). Shannon entropies ----------------- In order to further investigate the route to chaos, we paid attention to the time evolution of the generalized informational entropy (or Shannon entropy), introduced as usual: $$S_{Shannon}\left(t\right)=-{\sum^{N\left(t\right)}_{k=1}} p_k \cdot \ln p_k,$$ $N\left(t\right)$ being the number of gradually occupied cells up to the time $t$. This type of entropy is actually a number which quantifies the time rate of information production for a chaotic trajectory [@ott-93]. We consider in the first place the case of a particle that at every moment occupies a cell of the two-dimensional lattice phase space with probability $p_k$: $$p_k=1/N_{total\ cells},$$ where: $$N_{total\ cells}=N_{r} \cdot N_{p_{r}} \cdot N_{\theta},$$ and $N_{r}$, $N_{p_{r}}$, and $N_{\theta}$ are the numbers of bins of the $\left( r,p_{r},\theta\right)$ lattice. Since $p_{\theta}$ is a constant of motion for the monopole and the [UCE]{} cases, we use only these three phase space variables for comparisons. 
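As an illustration, the following is a minimal sketch of how $S_{Shannon}(t)$ can be accumulated along a sampled trajectory; the lattice bounds, the bin counts, and the function name are illustrative choices of ours and not those used for the figures.

```python
import numpy as np

def shannon_entropy(samples, n_bins, bounds):
    """Accumulate S_Shannon(t) along one trajectory sampled in (r, p_r, theta).

    samples : iterable of (r, p_r, theta) points ordered in time
    n_bins  : number of bins per axis, so that N_total = n_bins**3
    bounds  : [(r_lo, r_hi), (pr_lo, pr_hi), (theta_lo, theta_hi)]
    """
    edges = [np.linspace(lo, hi, n_bins + 1) for lo, hi in bounds]
    n_total = n_bins ** 3
    occupied, entropy = set(), []
    for point in samples:
        cell = tuple(int(np.searchsorted(e, v)) for e, v in zip(edges, point))
        occupied.add(cell)
        # every occupied cell carries p_k = 1/N_total, so S(t) = N(t) ln(N_total) / N_total
        entropy.append(len(occupied) * np.log(n_total) / n_total)
    return np.array(entropy)
```

At full coverage the entropy saturates at $S_{Max}=\ln N_{total\ cells}$ (e.g. $\ln 64 = 4.1589$ for a $4^3$ lattice), and the cumulative filling percentage introduced below is simply $100\cdot N(t)/N_{total\ cells}$.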
As an alternative measure to the entropy defined above we also used the cumulative filling percentage of the one-nucleon phase space: $$\eta\left(t\right)=\frac{N\left(t\right)}{N_{total\ cells}} \cdot 100\ \left(\%\right).$$ In the first place, for a given wall vibration frequency and for a certain multipolarity (here, for $\Omega _{res}=0.145\ c/\rm{fm}$ and $L = 0$), we studied the dependence of the Shannon entropy on the number of bins. A clear tendency towards smoothing the entropy curve was found when decreasing the bin size. A reduced number of cells ($N_{b}=2^3$) is characterized by an entropy formed from a small number of high-amplitude Heaviside step functions. As the number of bins increases (e.g., here to $12^3$), the entropy gets a more realistic representation, being composed of a larger number of low-amplitude step functions (Fig. \[fig:8\]). Moreover, the filling percentage $\eta$ of the one-nucleon phase space maps can differ drastically with the size of the bin. Thus, after the system evolved over $400\ \rm{fm}/c$, a phase space with $8$ bins is entirely covered, a lattice of $64$ bins is filled to a fraction of $0.8594$, a $26.95\ \%$ filling percentage is found for $512$ cells, and we counted only $226$ occupied bins out of a total of $1,728$ (*i.e.* $\eta = 13.08\ \%$). At a first glance one can identify a series of entropy plateaus, which could be put in correspondence with stationary or quasi-stationary thermodynamic values of the system if a large number of particles were under study. Some of them vanish when considering a large number of bins. However, those surviving for $N_{b} \rightarrow \infty$ could be associated with stationary nucleonic states in the chosen potential well in the limit of a large number of degrees of freedom. For a given 2D phase space lattice formed of $N_{b}=4^3$ bins we present in Figures \[fig:9\]-\[fig:12\] a comparison between the informational entropies of the physical system under study, starting from the adiabatic stage of the interaction and gradually increasing the vibrational wall frequency towards the dipole resonance value, $\Omega _{res}=0.145\ c/\rm{fm}$. The slopes for the resonance frequency case were found to be significantly higher than for the adiabatic one ($\Omega _{ad}=0.02\ c/\rm{fm}$) for all multipolarities involved. Another comparison revealed significant differences between the onset times of the quasi-constant Shannon entropy values for all cases taken into consideration. Thus, for four vibrational radian frequencies and for four coupling modes of the Hamilton equations we show the informational entropy values after $800\: \rm{fm}/c$ (Table \[tab:1\]) and the associated phase space filling degrees (Table \[tab:2\]). Also, Table \[tab:3\] presents the periods of time after which the filling percentage $\eta$ reaches $100\ \%$. 
  Oscillation frequency               [UCE]{}     $L=0$      $L=1$      $L=2$
  ----------------------------------- ----------- ---------- ---------- ----------
  $\Omega_{ad} = 0.020\ c/\rm{fm}$    $3.6889$    $3.7136$   $3.7842$   $3.4340$
  $\Omega_{ad} = 0.050\ c/\rm{fm}$    $3.6889$    $4.0775$   $4.0775$   $3.2958$
  $\Omega = 0.100\ c/\rm{fm}$         $3.6889$    $3.6889$   $3.8501$   $4.0073$
  $\Omega_{res} = 0.145\ c/\rm{fm}$   $3.6889$    $4.1589$   $4.0431$   $4.0254$

  : \[tab:1\]The computed $S_{Shannon}\left(t=800\: \rm{fm}/c\right)$ of the $\left( r\leftrightarrow p_{r}\leftrightarrow\theta\right)$ one-particle phase space maps at several multipolarities and frequencies of wall vibration

  Oscillation frequency               [UCE]{}     $L=0$       $L=1$      $L=2$
  ----------------------------------- ----------- ----------- ---------- ----------
  $\Omega_{ad} = 0.020\ c/\rm{fm}$    $62.50$     $64.06$     $68.75$    $48.44$
  $\Omega_{ad} = 0.050\ c/\rm{fm}$    $62.50$     $92.19$     $92.19$    $42.19$
  $\Omega = 0.100\ c/\rm{fm}$         $62.50$     $62.50$     $73.44$    $85.94$
  $\Omega_{res} = 0.145\ c/\rm{fm}$   $62.50$     $100.00$    $89.06$    $87.50$

  : \[tab:2\]The filling percentage $\eta$ of the $\left( r\leftrightarrow p_{r}\leftrightarrow\theta\right)$ one-particle phase space maps at several multipolarities and frequencies of wall vibration

  Oscillation frequency               [UCE]{}     $L=0$       $L=1$      $L=2$
  ----------------------------------- ----------- ----------- ---------- ----------
  $\Omega_{ad} = 0.020\ c/\rm{fm}$    $>10^{5}$   $6,023$     $6,359$    $5,356$
  $\Omega_{ad} = 0.050\ c/\rm{fm}$    $>10^{5}$   $1,618$     $4,223$    $>10^{5}$
  $\Omega = 0.100\ c/\rm{fm}$         $>10^{5}$   $11,442$    $3,241$    $2,758$
  $\Omega_{res} = 0.145\ c/\rm{fm}$   $>10^{5}$   $729$       $1,887$    $10,571$

  : \[tab:3\]The time (in $\rm{fm}/c$) at which the informational entropies of the $\left( r\leftrightarrow p_{r}\leftrightarrow\theta\right)$ one-particle phase space maps at several multipolarities and frequencies of wall vibration have the maximum value (*i.e.* $\eta = 100\ \%$)

We continue the analysis by further defining the Shannon entropy for a group of $w$ nearby orbits: $$S_{traject.\ pack}\left(t\right)=\ln N_{w}\left(t\right),$$ so that the number of occupied cells is: $$1 \leq N_w(t) \leq w,$$ thus describing the spread of the trajectories at each moment of time $t$ (Figs. \[fig:13\]-\[fig:16\]). When reaching the maximum divergence, the entropy for five distinct phase space paths gets its highest value (Table \[tab:4\]). 
  Oscillation frequency               [UCE]{}     $L=0$     $L=1$   $L=2$
  ----------------------------------- ----------- --------- ------- -------
  $\Omega_{ad} = 0.020\ c/\rm{fm}$    $>10^{4}$   $1,095$   $555$   $688$
  $\Omega_{ad} = 0.050\ c/\rm{fm}$    $>10^{4}$   $855$     $476$   $122$
  $\Omega = 0.100\ c/\rm{fm}$         $>10^{4}$   $4,133$   $333$   $395$
  $\Omega_{res} = 0.145\ c/\rm{fm}$   $>10^{4}$   $279$     $327$   $469$

  : \[tab:4\]The time (in $\rm{fm}/c$) at which the one-particle Shannon entropies of a pack of $w=5$ close orbits first reach the maximum value (*i.e.* $S_{traject.\ pack} = 1.60944$) for various coupling degrees between the one-nucleon and the collective *d.o.f.* and for the standard chosen wall frequencies

We begin the analysis with the [UCE]{} case. The uncoupled single and collective *d.o.f.* give rise to a quasi-laminar behaviour with a weak development of chaotic states. The one-particle informational entropy shows an identical evolution, regardless of the chosen frequency. The orbit covers, after $800\ \rm{fm}/c$, only $62.50\ \%$ of the entire lattice (Table \[tab:2\] and Figure \[fig:9\]) and does not reach $100\ \%$ even after $\Delta t = 100,000\ \rm{fm}/c$ (Table \[tab:3\]). Also, the phase space is not covered by all five trajectories over the whole range of $10,000\ \rm{fm}/c$ considered when analyzing $S_{traject.\ pack}$ (Fig. \[fig:13\] and Table \[tab:4\]). For the dipole oscillation mode, at $\Omega_{ad} = 0.05\ c/\rm{fm}$, it appears that, after only $800\ \rm{fm}/c$, the entropy closes in upon its maximum value, $S_{Max} = \ln N_{total\ cells} = 4.1589$ (Fig. \[fig:11\] and Table \[tab:1\]). However, over long periods of time the real tendency is to fill up the nucleonic phase space the more rapidly the vibrational frequency is increased (Table \[tab:3\]). The exact pattern is repeated when studying the Shannon entropy for nearby nucleonic trajectories (Fig. \[fig:15\] and Table \[tab:4\]). We found much the same feature for the monopole case, with the exception of the intermittent “window” at $\Omega=0.1\ c/\rm{fm}$ (Tables \[tab:1\], \[tab:2\] and Fig. \[fig:10\]). The occupation rate is so small in the intermittent zone that only at $11,442\ \rm{fm}/c$ would the particle have covered the whole phase space (see Table \[tab:3\]). A similar conclusion can be drawn from Table \[tab:4\] and Fig. \[fig:14\] (with a doubled temporal scale scanned for the intermittent frequency). The trajectory pack informational entropy reaches its highest value after the longest one-particle evolution time of all, $4,133\ \rm{fm}/c$. The quadrupole oscillation also reveals an apparent intermittent pattern, this time at $\Omega_{ad} = 0.05\ c/\rm{fm}$. We call it intermittent because after $800\ \rm{fm}/c$ the nucleon fills in only $42.19\ \%$ of the total number of bins (Figure \[fig:12\] and Table \[tab:2\]), and more than $100,000\ \rm{fm}/c$ is required to reach $\eta = 100\ \%$ (Table \[tab:3\]). However, this behaviour can be misleading, as the Shannon entropy for a trajectory bunch shows exactly the opposite (see Fig. \[fig:16\] and Table \[tab:4\]): after $122\ \rm{fm}/c$ the orbits are completely dispersed. Lyapunov exponents ------------------ We furthermore performed another quantitative analysis: the temporal evolution of the Lyapunov exponents, $\lambda\left(t\right)$. 
As previously shown, initially adjacent points in the phase space, separated by $\Delta x_{0}$ at $t=0$, can generate in time well-separated trajectories $\Delta x\left(t\right)$. When studying the evolution of a single phase space parameter, the one-dimensional Lyapunov exponent takes the form: $$\lambda\left(t\right)=\lim_{\left|\Delta x_{0}\right|\rightarrow 0} \ln\left|\frac{\Delta x\left(t\right)}{\Delta x_{0}}\right|.$$ The generalization for obtaining the multi-dimensional Lyapunov exponent is then straightforward: $$\lambda\left(t\right)=\ln\frac{\left(\sum_{k=1}^{m}\left[x_{k}\left(t\right)-x_{k0}\right]^{2}\right)^{\frac{1}{2}}}{0.01},$$ where the sum is taken over all $m = 4$ squared differences between the final $x_{k}\left(t\right)$ and initial $x_{k0}$ one-nucleon phase space variables. Integration times of the order of $10^{3}\ \rm{fm}/c$ exclude errors when computing the Lyapunov exponents. In short, we recall that the trajectories can be classified as a function of their Lyapunov exponents. Thus, one can distinguish periodical behaviours, for $\lambda=0$, dissipative movements with a fixed point or a basin of attraction $\left(\lambda<0\right)$, and aperiodical chaotic states $\left(\lambda>0\right)$, when the iterative discrete evolution of the solution series (Eqs. 5 and 6) leads to a chaotic pattern. Another way of measuring the system sensitivity to initial conditions is to compute the largest Lyapunov exponent ([LLE]{}). Two methods are usually employed, one based on the time dependence of the multi-dimensional Lyapunov exponent, the other on Wolf's standard method, which uses a Gram-Schmidt reorthonormalization of the tangent vectors [@wolf-85]. In the latter, the [LLE]{} is obtained by taking the asymptotic value of the multi-dimensional Lyapunov exponent: $$\lambda_{Max}=\lim_{t \rightarrow \infty} \frac{\lambda\left(t\right)}{t}.$$ Still, this method has the disadvantage that the integration times have to be at least an order of magnitude larger than those considered here. Other methods are slightly less efficacious, being more CPU time-consuming when simulating strongly chaotic systems [@ramasubramanian-00]. We consequently used the first method and noticed the saturation behaviour, *i.e.* the appearance of a plateau after a certain time $t_{c}$ (Fig. \[fig:17\]). The straight lines represent fits whose slopes give the [LLE]{} (Table \[tab:5\]). These are inversely proportional to the onset times of chaoticity $(\tau = 1/\lambda_{Max})$ and provide a measure of the trajectory decoupling at a microscopic level.

  Oscillation frequency               $L=0$        $L=1$        $L=2$
  ----------------------------------- ------------ ------------ ------------
  $\Omega_{ad} = 0.020\ c/\rm{fm}$    $0.003939$   $0.008689$   $0.004306$
  $\Omega_{ad} = 0.050\ c/\rm{fm}$    $0.004432$   $0.009829$   $0.023203$
  $\Omega = 0.100\ c/\rm{fm}$         $0.000761$   $0.015454$   $0.014402$
  $\Omega_{res} = 0.145\ c/\rm{fm}$   $0.008739$   $0.016662$   $0.010086$

  : \[tab:5\]The largest Lyapunov exponents (in $c/\rm{fm}$) of the one-nucleon trajectories at several multipolarities and frequencies of wall vibration

When the single-particle and collective *d.o.f.* remain uncoupled, the Lyapunov exponents basically oscillate between two quasi-stationary regimes. This happens for all vibrational frequencies involved, reflecting a periodical regrouping of the orbits in two basins of attraction. Since the phase space is not covered even after a hundred thousand $\rm{fm}/c$, computing the [LLE]{} becomes futile for this case.
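As an illustration of the first method, the following minimal sketch accumulates $\lambda(t)$ from a reference trajectory and a companion started $\Delta r_{0}=0.01\ \rm{fm}$ away, and reads off the [LLE]{} as the slope of $\lambda(t)$ in the linear-growth region before saturation. The interpretation of $x_{k0}$ as the reference orbit at the same time $t$, the reuse of `uce_rhs` and `y0` from the integration sketch above, and the fitting window are our own choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lyapunov_lambda(rhs, y0, dr0=0.01, t_max=1600.0, n_samples=800):
    """lambda(t) = ln(|x_pert(t) - x_ref(t)| / dr0) for two orbits started dr0 apart in r."""
    t_grid = np.linspace(0.0, t_max, n_samples)
    y0_pert = np.array(y0, dtype=float)
    y0_pert[0] += dr0                                   # perturb only the initial radius
    kw = dict(method="RK23", rtol=1e-10, atol=1e-6, t_eval=t_grid)
    ref = solve_ivp(rhs, (0.0, t_max), y0, **kw).y
    pert = solve_ivp(rhs, (0.0, t_max), y0_pert, **kw).y
    dist = np.sqrt(np.sum((pert - ref) ** 2, axis=0))   # distance over the m = 4 variables
    return t_grid[1:], np.log(dist[1:] / dr0)

# e.g., with the right-hand side and initial condition of the integration sketch above:
# t, lam = lyapunov_lambda(uce_rhs, y0)
# lle = np.polyfit(t[t < 400.0], lam[t < 400.0], 1)[0]  # slope of the linear-growth region
```

For the coupled monopole, dipole or quadrupole cases, the corresponding right-hand side, including the collective *d.o.f.*, would be passed instead.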
For dipole oscillations (Fig. \[fig:17\] - middle column) one can remark a faster evolution towards the saturation states of the 4-dimensional Lyapunov exponents when passing from the adiabatic $(\tau _{ad}=115\ \rm{fm}/c)$ to the resonance phase of the interaction $(\tau _{res}=60\ \rm{fm}/c)$. In the monopolar case the intermittency can be easily traced at the $0.1\ c/\rm{fm}$ vibrational frequency (Fig. \[fig:17\] - left panels). During the intermittent stage, independent nearby orbits diverge microscopically at the slowest rate of all: $\tau =1,314\ \rm{fm}/c$ (Table \[tab:5\]). In order to capture the emergence of the stationary plateau at $\approx 4,917\ \rm{fm}/c$, the temporal scale was scanned over $6,400\ \rm{fm}/c$. The study of the quadrupole collective oscillation case confirms the results obtained with all previous analyses. Namely, the neighbouring trajectories deviate from each other after just $43\ \rm{fm}/c$ at an adiabatic frequency of $0.05\ c/\rm{fm}$. Also, when increasing $\Omega$, the [LLE]{} evolution pattern exactly matches that found with the informational entropy measured for a group of orbits (Tables \[tab:4\] and \[tab:5\]). \[sec:3\]Conclusions ==================== We investigated the chaotic nucleonic behaviour in a two-dimensional deep Woods-Saxon potential well for specific phases of the nuclear interaction. By comparing the order-to-chaos transition for these cases of interest, from the adiabatic to the resonance regime, it was shown that the couplings between the one-particle dynamics and the high multipole vibrational modes significantly shorten the onset time of the chaotic nucleonic motion towards realistic nuclear interaction time scales. The quantitative study comprised a broad set of analyses, pointing out that the paths to chaos for the “nuclear billiard” are dissimilar for the studied multipolarities. For the first two multipole degrees we noticed a more rapid emergence of chaotic states as we move towards higher radian frequencies of oscillation. When analyzing the system with quadrupole collective deformations of the potential well, an order-strong chaos-weak chaos-order sequence is revealed. Still, as emphasized in the “Shannon entropies” subsection, the quadrupole case is an intricate one, and further analysis would be required before drawing firm conclusions. Every type of quantitative analysis strengthened previous results regarding the monopolar intermittency route to chaos for the “nuclear billiard”. The collective oscillation frequency for the intermittent behaviour was located below the resonance frequency of the interaction (at $\Omega =0.1\ c/\rm{fm}$). Further studies along these lines are currently in progress. The formalism used can be improved by adding spin and charge to the nucleons. A semi-quantal treatment of this problem, including the Pauli blocking effect, is expected to shed more light on these issues in the near future. We wish to thank R.I. Nanciu, I.S. Zgură, A.Ş. Cârstea, G. Păvălaş, S. Zaharia, A. Gheaţă, M. Rujoiu, A. Mitruţ, and R. Mărginean for fruitful discussions on this paper. G.F. Burgio, M. Baldo, A. Rapisarda, and P. Schuck, Phys. Rev. C **52**, 2475 (1995). M. Baldo, G.F. Burgio, A. Rapisarda, and P. Schuck, in *Proceedings of the $XXXIV$ International Winter Meeting on Nuclear Physics, Bormio, Italy, 1996*, edited by I. Iori. arXiv:nucl-th/9602030 M. Baldo, G.F. Burgio, A. Rapisarda, and P. Schuck, Phys. Rev. C **58**, 2821 (1998). J. Blocki, Y. Boneh, J.R. Nix, J. Randrup, M. Robel, A.J. Sierk, and W.J. Swiatecki, Ann. Phys. (N.Y.) **113**, 330 (1978). P. 
Ring and P. Schuck, *The Nuclear Many Body Problem* (Springer-Verlag, Berlin, 1980) p. 388. J. Speth and A. van der Woude, Rep. Prog. Phys. **44**, 719 (1981). C.Y. Wong, Phys. Rev. C **25**, 1460 (1982). P. Grassberger and I. Procaccia, Phys. Rev. Lett. **50**, 346 (1983). M. Sieber and F. Steiner, Physica D **44**, 248 (1990). A. Rapisarda and M. Baldo, Phys. Rev. Lett. **66**, 2581 (1991). A.Y. Abul-Magd and H.A. Weidenmüller, Phys. Lett. B **261**, 207 (1991). J. Blocki, F. Brut, T. Srokowski, and W.J. Swiatecki, Nucl. Phys. A**545**, 511c (1992). R. Blümel and J. Mehl, J. Stat. Phys. **68**, 311 (1992). M. Baldo, E.G. Lanza, and A. Rapisarda, Chaos **3**, 691 (1993). J. Blocki, J.J. Shi, and W.J. Swiatecki, Nucl. Phys. A**554**, 387 (1993). M.V. Berry and J.M. Robbins, Proc. R. Soc., London, Sect. A **442**, 641 (1993). E. Ott, *Chaos in Dynamical Systems* (Cambridge University Press, Cambridge, England, 1993). W. Bauer, D. McGrew, V. Zelevinsky, and P. Schuck, Phys. Rev. Lett. **72**, 3771 (1994). R. Hilborn, *Chaos and Nonlinear Dynamics* (Oxford University Press, Oxford, England, 1994). R. Blümel and B. Esser, Phys. Rev. Lett. **72**, 3658 (1994). S. Drozdz, S. Nishizaki, and J. Wambach, Phys. Rev. Lett. **72**, 2839 (1994). S. Drozdz, S. Nishizaki, J. Wambach, and J. Speth, Phys. Rev. Lett. **74**, 1075 (1995). W. Bauer, D. McGrew, V. Zelevinsky, and P. Schuck, Nucl. Phys. A**583**, 93 (1995). C. Jarzynski, Phys. Rev. Lett. **74**, 2937 (1995). A. Bulgac and D. Kusnezov, Chaos, Solitons and Fractals **5**, 1051 (1995). A. Atalmi, M. Baldo, G.F. Burgio, and A. Rapisarda, Phys. Rev. C **53**, 2556 (1996). arXiv:nucl-th/9509020 A. Atalmi, M. Baldo, G.F. Burgio, and A. Rapisarda, in *Proceedings of the $XXXIV$ International Winter Meeting on Nuclear Physics, Bormio, Italy, 1996*, edited by I. Iori. arXiv:nucl-th/9602039 P.K. Papachristou, E. Mavrommatis, V. Constantoudis, F.K. Diakonos, and J. Wambach, Phys. Rev. C **77**, 044305 (2008). arXiv:nucl-th/0803.3336 D. Felea, C. Beşliu, R.I. Nanciu, Al. Jipa, I.S. Zgură, R. Mărginean, M. Haiduc, A. Gheaţă, and M. Gheaţă, in *Proceedings of the $7^{th}$ International Conference “Nucleus-Nucleus Collisions”, Strasbourg, 2000*, edited by W. Norenberg *et al.* (North-Holland, Amsterdam, The Netherlands, 2001) p. 222. D. Felea, *The Study of Nuclear Fragmentation Process in Nucleus-Nucleus Collisions at Energies higher than 1 A GeV*, Ph.D. thesis, University of Bucharest, Faculty of Physics (2002) p. 134. C.C. Bordeianu, C. Beşliu, Al. Jipa, D. Felea, and I.V. Grossu, Comput. Phys. Commun. **178**, 788 (2008). C.C. Bordeianu, D. Felea, C. Beşliu, Al. Jipa, and I.V. Grossu, Comput. Phys. Commun. **179**, 199 (2008). C.C. Bordeianu, D. Felea, C. Beşliu, Al. Jipa, and I.V. Grossu, Rom. Rep. in Phys. **60**, 287 (2008). D. Felea, I.V. Grossu, C.C. Bordeianu, C. Beşliu, Al. Jipa, A.A. Radu, C.M. Mitu, and E. Stan, “Intermittency route to chaos for the nuclear billiard - a qualitative study”, Phys. Rev. C (submitted). Y. Pomeau and P. Manneville, Commun. Math. Phys. **74**, 189 (1980). P. Berge, M. Dubois, P. Manneville, and Y. Pomeau, J. Phys. (Paris) **41**, L344 (1980). Y. Pomeau, J.C. Roux, A. Rossi, S. Bachelart, and C. Vidal, J. Phys. (Paris) **42**, L271 (1981). P.S. Linsay, Phys. Rev. Lett. **47**, 1349 (1981). J. Testa, J. Perez, and C. Jeffries, Phys. Rev. Lett. **48**, 714 (1982). C. Jeffries and J. Perez, Phys. Rev. A **26**, 2117 (1982). M. Dubois, M.A. Rubio, and P. Berge, Phys. Rev. Lett. **51**, 1446 (1983). W.J. Yeh and Y.H. 
Kao, Appl. Phys. Lett. **42**, 299 (1983). J.Y. Huang and J.J. Kim, Phys. Rev. A **36**, 1495 (1987). P. Richetti, P. DeKepper, J.C. Roux, and H.L. Swinney, J. Stat. Phys. **48**, 977 (1987). N. Kreisberg, W.D. McCormick, and H.L. Swinney, Physica D **50**, 463 (1991). H.G. Schuster, *Deterministic Chaos: an introduction* (Physik-Verlag, Weinheim, Federal Republic of Germany, 1984). P. Holmes and F. Diacu, *Intalniri ceresti - originea haosului si a stabilitatii* (Societatea Stiinta si Tehnica SA, Bucuresti, Romania, 1996) p. 150. O. Penrose, Rep. Prog. Phys. **42**, 129 (1979). A. Atalmi, M. Baldo, G. F. Burgio, and A. Rapisarda, Phys. Rev. C **58**, 2238 (1998). A.M. Kowalski, M.T. Martin, J. Nuñez, A. Plastino, and A.N. Proto, Phys. Rev. A **58**, 2596 (1998). A. Bialas, in *Proceedings of the NATO-ASI International Summer School “Particle Production Spanning MeV and TeV Energies”, Nijmegen, 1999*, edited by W. Kittel, P.J. Mulders, and O. Scholten, NATO Science Series C: Mathematical and Physical Sciences - Vol. 554 (Kluwer Academic Publishers, Nijmengen, The Netherlands, 1999). V. Latora and M. Baranger, Phys. Rev. Lett. **82** 520 (1999). V. Latora, M. Baranger, A. Rapisarda, and C. Tsallis, Phys. Lett. A **273**, 97 (2000). A. Wolf, J.B. Swift, H.L. Swinney, and J.A. Vastano, Physica D **16**, 285 (1985). K. Ramasubramanian and M.S. Sriram, Physica D **139**, 72 (2000).
1
--- abstract: | This paper provides the basis for new methods of inference for max-stable processes $\xi$ on general spaces that admit a certain incremental representation, which, in important cases, has a much simpler structure than the max-stable process itself. A corresponding peaks-over-threshold approach will incorporate all single events that are extreme in some sense and will therefore rely on a substantially larger amount of data in comparison to estimation procedures based on block maxima.\ Conditioning a process $\eta$ in the max-domain of attraction of $\xi$ on being *extremal*, several convergence results for the increments of $\eta$ are proved. In a similar way, the shape functions of mixed moving maxima (M3) processes can be extracted from suitably conditioned single events $\eta$. Connecting the two approaches, transformation formulae for processes that admit both an incremental and an M3 representation are identified. bibliography: - 'HREstimation.bib' title: | Representations of max-stable processes\ based on single extreme events --- Introduction ============ The joint extremal behavior at multiple locations of some random process $\{\eta(t): t\in T\}$, $T$ an arbitrary index set, can be captured via its limiting *max-stable process*, assuming the latter exists and is non-trivial everywhere. Then, for independent copies $\eta_i$ of $\eta$, $i\in{\mathbb N}$, the functions $b_n: T \to {\mathbb R}$, $c_n : T\to (0,\infty)$ can be chosen such that the convergence $$\begin{aligned} \label{MDA} \xi(t) = \lim_{n\to\infty} c_n(t) \Big(\max_{i=1}^n \eta_i(t) - b_n(t)\Big), \quad t\in T,\end{aligned}$$ holds in the sense of finite-dimensional distributions. The process $\xi$ is said to be *max-stable* and $\eta$ is in its max-domain of attraction (MDA). The theory of max-stable processes is mainly concerned with the dependence structure while the marginals are usually assumed to be known. Even for finite-dimensional max-stable distributions, the space of possible dependence structures is uncountably infinite-dimensional and parametric models are required to find a balance between flexibility and analytical tractability [@deh2006a; @res2008]. A general construction principle for max-stable processes was provided by [@deh1984; @smi1990]: Let $\sum_{i\in{\mathbb N}} \delta_{(U_i, S_i)}$ be a Poisson point process (PPP) on $(0,\infty)\times{\mathcal S}$ with intensity measure $u^{-2}\rd u\cdot \nu(\rd s)$, where $({\mathcal S}, \mathfrak S)$ is an arbitrary measurable space and $\nu$ a positive measure on ${\mathcal S}$. Further, let $f:{\mathcal S}\times T \to [0, \infty)$ be a non-negative function with $\int_{{\mathcal S}} f(s,t) \nu(\rd s) = 1$ for all $t\in T$. Then the process $$\begin{aligned} \xi(t) = \max_{i\in{\mathbb N}} U_i f(S_i, t), \quad t\in T,\label{constr_max_stable}\end{aligned}$$ is max-stable and has standard Fréchet margins with distribution function $\exp(-1/x)$ for $x \geq 0$. In this paper, we restrict to two specific choices for $f$ and $({\mathcal S}, \mathfrak S, \nu)$ and consider processes that admit one of the resulting representations. First, let $\{W(t) : t\in T\}$ be a non-negative stochastic process with $\sE W(t) = 1$, $t\in T$, and $W(t_0) = 1$ a.s. for some point $t_0 \in T$. The latter condition means that $W(t)$ simply describes the multiplicative increment of $W$ w.r.t. the location $t_0$. 
For $({\mathcal S}, \mathfrak S, \nu)$ being the canonical probability space for the sample paths of $W$ and with $f(w,t)=w(t)$, $w\in{\mathcal S}$, $t\in T$, we refer to $$\begin{aligned} \label{def_xi} \xi(t) = \max_{i\in{\mathbb N}} U_i W_i(t), \quad t\in T,\end{aligned}$$ as the *incremental representation* of $\xi$, where $\{W_i\}_{i\in{\mathbb N}}$ are independent copies of $W$. Since $T$ is an arbitrary index set, the above definition covers multivariate extreme value distributions, i.e. $T=\{t_1,\dots,t_k\}$, as well as max-stable random fields, i.e. $T = {\mathbb R}^d$.\ For the second specification, let $\{F(t): \ t \in {\mathbb R}^d\}$ be a stochastic process with sample paths in the space $C({\mathbb R}^d)$ of non-negative continuous functions, such that $$\begin{aligned} \label{assumption_integral} \textstyle \sE \int_{{\mathbb R}^d} F(t) \rd t = 1.\end{aligned}$$ With $S_i = (T_i,F_i)$, $i\in{\mathbb N}$, in ${\mathcal S}= {\mathbb R}^d\times C({\mathbb R}^d)$, intensity measure $\nu(\rd t \times \rd g)=\rd t\sP_F(\rd g)$ and $f((t,g), s)=g(s-t)$, $(t,g)\in{\mathcal S}$, we obtain the class of *mixed moving maxima (M3) processes* $$\begin{aligned} \xi(t) = \max_{i\in{\mathbb N}} U_i F_i(t- T_i), \quad t\in{\mathbb R}^d. \label{def_M3}\end{aligned}$$ These processes are max-stable and stationary on ${\mathbb R}^d$ (see for instance [@wan2010]). The function $F$ is called *shape function of $\xi$* and can also be deterministic (e.g., in case of the Smith process). In Smith’s “rainfall-storm” interpretation [@smi1990], $U_i$ and $T_i$ are the strength and center point of the $i$th storm, respectively, and $U_i F_i(t- T_i)$ represents the corresponding amount of rainfall at location $t$. In this case, $\xi(t)$ is the process of extremal precipitation. When i.i.d. realizations $\eta_1, \ldots, \eta_n$ of $\eta$ in the MDA of a max-stable process $\xi$ are observed, a classical approach for parametric inference on $\xi$ is based on generating (approximate) realizations of $\xi$ out of the data $\eta_1, \ldots, \eta_n$ via componentwise block maxima and applying maximum likelihood (ML) estimation afterwards. A clear drawback of this method is that it ignores all information on large values that is contained in the order statistics below the within-block maximum. Further, ML estimation needs to evaluate the multivariate densities while for many max-stable models only the bivariate densities are known in closed form. Thus, composite likelihood approaches have been proposed [@pad2010; @dav2012].\ In univariate extreme-value theory, the second standard procedure estimates parameters by fitting a certain PPP to the *peaks-over-thresholds* (POT), i.e., to the empirical process of exceedances over a certain critical value [@lea1991; @emb1997]. Also in the multivariate framework we can expect to profit from using all extremal data via generalized POT methods instead of aggregated data. In contrast to the ML approach, in this paper, we assume that $\xi$ admits one of the two representations and and we aim at extracting realizations of the processes $W$ and $F$, respectively, from *single extreme events*. Here, the specification of a single extreme event will depend on the respective representation.\ In [@eng2012a], this concept is applied to derive estimators for the class of Brown-Resnick processes [@bro1977; @kab2009], which have the form by construction. 
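Before turning to the convergence results, the following is a minimal numerical sketch (in Python) of an approximate simulation of the incremental representation $\xi(t)=\max_{i\in{\mathbb N}} U_i W_i(t)$ on a grid. The Fréchet point process is generated as $U_i=1/\Gamma_i$, with $\Gamma_i$ the arrival times of a unit-rate Poisson process, truncated after a fixed number of atoms; the process $W$ is chosen, purely as an example, to be the log-Gaussian increment process $W(t)=\exp\{B(t)-(t-t_0)/2\}$ of a one-dimensional Brown-Resnick process with standard Brownian motion $B$, so that $W(t_0)=1$ a.s. and $\sE W(t)=1$. All names and parameters in the sketch are illustrative choices of ours.

```python
import numpy as np

def simulate_incremental(t, n_atoms=10_000, seed=0):
    """Approximate simulation of xi(t) = max_i U_i W_i(t) on an increasing grid t, with t_0 = t[0]."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    dt = np.diff(t)
    xi = np.zeros_like(t)
    gamma = 0.0
    for _ in range(n_atoms):
        gamma += rng.exponential()                     # Gamma_i: unit-rate Poisson arrival times
        u = 1.0 / gamma                                # U_i: atoms of the PPP with intensity u^{-2} du
        b = np.concatenate(([0.0], np.cumsum(rng.normal(scale=np.sqrt(dt)))))
        w = np.exp(b - (t - t[0]) / 2.0)               # W(t_0) = 1 a.s. and E W(t) = 1
        xi = np.maximum(xi, u * w)
    return xi

xi = simulate_incremental(np.linspace(0.0, 5.0, 501))
```

Since the $U_i=1/\Gamma_i$ are decreasing, the truncation only affects the smallest atoms entering the maximum; exact simulation of max-stable processes is a separate issue and is not addressed by this sketch.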
With $a(n)$ being a sequence of positive numbers with $\lim_{n\to\infty} a(n) = \infty$, the convergence in distribution $$\begin{aligned} \Bigg( \frac{\eta(t_1)}{\eta(t_0)}, \ldots, \frac{\eta(t_k)}{\eta(t_0)} \ \Bigg|\ \eta(t_0) > a(n) \Bigg)\cvgdist \bigl( W(t_1),\dots, W(t_k) \bigr), \label{cond_incr_conv}\end{aligned}$$ $t_0,t_1,\dots,t_k\in T$, $k\in {\mathbb N}$, is established for $\eta$ being in the MDA of a Brown-Resnick process and with $W$ being the corresponding log-Gaussian random field. A similar approach exists in the theory of homogeneous discrete-time Markov chains. For instance, [@seg2007] and [@ehl2011] investigate the behavior of a Markov chain $\{M(t): t\in {\mathbb Z}\}$ conditional on the event that $M(0)$ is large. The resulting extremal process is coined the tail chain and turns out to be Markovian again. In this paper, the convergence result is generalized in different aspects. Arbitrary non-negative processes $\{W(t) : t\in T\}$ with $\sE W(t) = 1$, $t\in T$, are considered, and convergence of the conditional increments of $\eta$ in the sense of finite-dimensional distributions as well as weak convergence in continuous function spaces is shown (Theorems \[theo\_cond\_increments\_general\] and \[theo\_cond\_increments\_cont\]). Moreover, in Section \[M3representation\], similar results are established for M3 processes by considering realizations of $\eta$ around their (local) maxima. Since one and the same max-stable process $\xi$ might admit both representations and we provide formulae for switching between them in Section \[sec:switching\]. Section \[sec:application\] gives an exemplary outlook on how our results can be applied for statistical inference. Incremental representation {#examples_increment_representation} ========================== Throughout this section, we suppose that $\{\xi(t): \ t\in T\}$, where $T$ is an arbitrary index set, is normalized to standard Fréchet margins and admits a representation $$\begin{aligned} \label{def_xi2} \xi(t) = \max_{i\in{\mathbb N}} U_i V_i(t), \quad t\in T,\end{aligned}$$ where $\sum_{i\in{\mathbb N}}\delta_{U_i}$ is a PPP on $(0,\infty)$ with intensity $u^{-2}du$, which we call *Fréchet point process* in the following. The $\{V_i\}_{i\in{\mathbb N}}$ are independent copies of a non-negative stochastic process $\{V(t): \ t\in T\}$ with $\sE V(t) = 1$, $t\in T$. Note that is slightly less restrictive than the representation in that we do not require that $V(t_0)=1$ a.s. for some $t_0\in T$. For any fixed $t_0\in T$, we have $$\begin{aligned} \label{decomp_V} \xi(t) \eqdist \max_{i\in{\mathbb N}} U_i \left({\mathbf 1}_{P_i=0}V^{(1)}_i(t) + {\mathbf 1}_{P_i=1}V^{(2)}_i(t)\right), \quad t\in T,\end{aligned}$$ where $\{P_i\}_{i\in{\mathbb N}}$ are i.i.d. Bernoulli variables with parameter $p=\sP(V(t_0) = 0)$ and the $V^{(1)}_i$ and $V^{(2)}_i$ are independent copies of the process $\{V(t): \ t\in T\}$, conditioned on the events $\{V(t_0) > 0\}$ and $\{V(t_0)= 0\}$, respectively. Note that for $k\in{\mathbb N}$, $t_0, \ldots, t_k\in T$, the vector $\Xi = (\xi(t_0),\dots,\xi(t_k))$ follows a $(k+1)$-variate extreme-value distribution and its distribution function $G$ can therefore be written as $$\begin{aligned} \label{def_mu} G(\mathbf{x}) = \exp( -\mu( [{\mathbf 0},\mathbf{x}]^C) ), \quad \mathbf{x} \in {\mathbb R}^{k+1},\end{aligned}$$ where $\mu$ is a measure on $E = [0,\infty)^{k+1}\setminus\{{\mathbf 0}\}$, the so-called *exponent measure* of $G$ [@res2008 Prop. 
5.8], and $[{\mathbf 0},\mathbf{x}]^C = E\setminus [{\mathbf 0},\mathbf{x}]$. The following convergence result provides the theoretical foundation for statistical inference based on the incremental process $V$. \[theo\_cond\_increments\_general\] Let $\{\eta(t): \ t\in T\}$ be non-negative and in the MDA of some max-stable process $\xi$ that admits a representation and suppose that $\eta$ is normalized such that holds with $c_n(t) = 1/n$ and $b_n(t) = 0$ for $n\in{\mathbb N}$ and $t\in T$. Let $a(n)\to\infty$ as $n\to\infty$. For $k\in {\mathbb N}$ and $t_0,\dots,t_k\in T$ we have the convergence in distribution on ${\mathbb R}^{k+1}$ $$\begin{aligned} \left(\frac{\eta(t_0)}{a(n)}, \frac{\eta(t_1)}{\eta(t_0)} ,\dots, \frac{\eta(t_k)}{\eta(t_0)} \ \Bigg|\ \eta(t_0) > a(n)\right) \cvgdist \left(Z, \Delta\mathbf{\tilde{V}}^{(1)}\right),\quad n\to \infty, \end{aligned}$$ where the distribution of $\Delta\mathbf{\tilde{V}}^{(1)}$ is given by $$\begin{aligned} \sP(\Delta\mathbf{\tilde{V}}^{(1)}\in d \mathbf z) = (1-p)\sP(\Delta\mathbf V^{(1)}\in d \mathbf z) \sE\bigl( V^{(1)}(t_0) \big| \Delta\mathbf V^{(1)}=\mathbf z \bigr), \quad \mathbf{z} \geq {\mathbf 0}. \label{density_increment} \end{aligned}$$ Here, $\Delta\mathbf V^{(1)}$ denotes the vector of increments $\left(\frac{V^{(1)}(t_1)}{V^{(1)}(t_0)}, \ldots, \frac{V^{(1)}(t_k)}{V^{(1)}(t_0)}\right)$ with respect to $t_0$, and $Z$ is an independent Pareto variable. Note that any process $\eta$ that satisfies the convergence in for a process $\xi$ with standard Fréchet margins can be normalized such that the norming functions in become $c_n(t) = 1/n$ and $b_n(t) = 0$, $n\in{\mathbb N}$, $t\in T$ [@res2008 Prop. 5.10]. For $\mathbf{X} = (\eta(t_0),\dots,\eta(t_k))$, which is in the MDA of the random vector $\Xi=(\xi(t_0),\dots,\xi(t_k))$, it follows from [@res2008 Prop. 5.17] that $$\begin{aligned} \label{conv_resnick} \lim_{m\to\infty} m \sP( \mathbf{X}/m \in B ) = \mu(B),\end{aligned}$$ for all elements $B$ of the Borel $\sigma$-algebra $\mathcal B(E)$ of $E$ bounded away from $\{{\mathbf 0}\}$ with $\mu(\partial B)=0$, where $\mu$ is defined by . For $s_0> 0$ and ${\mathbf s}=(s_1, \ldots, s_k)\in [0, \infty)^{k}$, we consider the sets $A_{s_0}=(s_0,\infty)\times [0, \infty)^k$, $A=A_1$ and $B_{\mathbf{s}} = \{\mathbf{x} \in [0, \infty)^{k+1} : (x^{(1)},\dots,x^{(k)}) \leq x^{(0)}\mathbf{s}\}$ for ${\mathbf s}$ satisfying $\sP( \Delta\tilde {\mathbf V}^{(1)}\in \partial [{\mathbf 0},{\mathbf s}])=0$. Then $$\begin{aligned} \left\{ \eta(t_0) > s_0 a(n),\, \big( \eta(t_1) / \eta(t_0) ,\dots, \eta(t_k) / \eta(t_0) \big) \leq \mathbf{s} \right\} = \{ \mathbf{X} / a(n) \in B_{\mathbf{s}}\cap A_{s_0} \},\end{aligned}$$ since $B_{\mathbf{s}}$ is invariant under multiplication, i.e., $B_{\mathbf s}=cB_{\mathbf s}$ for any $c>0$. 
Thus, we obtain $$\begin{aligned} \notag \sP&\left( \eta(t_0) > s_0 a(n), \, \left( \eta(t_1) / \eta(t_0) ,\dots, \eta(t_k) / \eta(t_0) \right) \leq \mathbf{s} \,\Big|\, \eta(t_0) > a(n) \right) \\ \notag&= \frac{ {a(n)} \sP( \mathbf{X} / a(n) \in B_{\mathbf{s}} \cap A \cap A_{s_0} )}{ {a(n)} \sP( \mathbf{X} / a(n) \in A)} \\ \label{eq:01} & \longrightarrow \frac{\mu(B_{\mathbf{s}} \cap A \cap A_{s_0})}{\mu(A)},\quad (n\to\infty),\end{aligned}$$ where the convergence follows from , as long as $\mu\{ \partial (B_{\mathbf{s}} \cap A \cap A_{s_0})\} = 0$.\ Let $$\begin{aligned} \label{def_xi3} \xi^{(1)}(t) = \max_{i\in{\mathbb N}} U_i^{(1)} V^{(1)}_i(t), \quad t\in T, \end{aligned}$$ where $\sum_{i\in{\mathbb N}} \delta_{U_i^{(1)}}$ is a Poisson point process with intensity $(1-p)u^{-2}\rd u$ and let $\mu^{(1)}$ be the exponent measure of the associated max-stable random vector $(\xi^{(1)}(t_0), \ldots, \xi^{(1)}(t_k))$. Then the choice $A = (1,\infty)\times [0,\infty)^k$ guarantees that $\mu(\cdot \cap A) = \mu^{(1)}(\cdot \cap A)$. Comparing the construction of $\xi^{(1)}$ in with the definition of the exponent measure, we see that $\mu^{(1)}$ is the intensity measure of the Poisson point process $\sum_{i\in{\mathbb N}} \delta_{(U_i^{(1)} V_i^{(1)}(t_0),\, \ldots,\, U_i^{(1)} V_i^{(1)}(t_k))}$ on $E$. Hence, $$\begin{aligned} \mu(A) &= \int_0^\infty (1-p)u^{-2} \sP(u V^{(1)}(t_0) > 1) \rd u \notag\\ &= (1-p)\int_0^\infty u^{-2} \int_{[u^{-1}, \infty)} \sP(V^{(1)}(t_0) \in \rd y) \rd u \notag\\ &= (1-p)\int_0^\infty y \sP(V^{(1)}(t_0) \in \rd y) = (1-p)\sE V^{(1)}(t_0) = 1, \end{aligned}$$ where the last equality follows from $\sE V^{(1)}(t_0) = \sE V(t_0)/(1-p)$. Furthermore, for $s_0\geq 1$ and ${\mathbf s}\in[0,\infty)^k$ with $\sP(\Delta\mathbf{\tilde V}^{(1)} \in \partial [{\mathbf 0},{\mathbf s}])=0$, $$\begin{aligned} &\mu(B_{\mathbf{s}} \cap A \cap A_{s_0}) / ((1-p)\mu(A)) \notag\\ &= \int_0^\infty u^{-2} \sP\Bigl(u V^{(1)}(t_0) > s_0,\, \big(u V^{(1)}(t_1),\dots, u V^{(1)}(t_k)\big) \leq \mathbf{s} u V^{(1)}(t_0) \Bigr) \rd u\notag\\ &= \int_0^\infty \int_{[s_0 u^{-1},\, \infty)} u^{-2} \sP\Bigl(V^{(1)}(t_0)\in \rd y \Big|\Delta\mathbf V^{(1)} \leq \mathbf{s} \Bigr) \sP(\Delta\mathbf V^{(1)} \leq \mathbf{s} )\rd u \notag\\ &= \int_{[{\mathbf 0}, \mathbf s]} \int_{[0,\infty)} y s_0^{-1} \cdot \sP\Bigl(V^{(1)}(t_0)\in \rd y \Big| \Delta\mathbf V^{(1)}=\mathbf z \Bigr) \sP(\Delta\mathbf V^{(1)}\in \rd{\mathbf z}) \notag\\ &= s_0^{-1}\int_{[{\mathbf 0}, \mathbf s]} \sE\Bigl( V^{(1)}(t_0) \Big| \Delta\mathbf V^{(1)}=\mathbf z \Bigr) \sP(\Delta\mathbf V^{(1)}\in \rd{\mathbf z}). \label{mu_expl} \end{aligned}$$ Equation shows that the convergence in holds for all continuity points ${\mathbf s}\in [0, \infty)^{k}$ of the distribution function of $\Delta{\mathbf V}^{(1)}$. Since $s_0\geq 1$ was arbitrary, this concludes the proof. 1. If $V^{(1)}(t_0)$ is stochastically independent of the increments $\Delta\mathbf V^{(1)}$, we simply have $\sP(\Delta\mathbf{\tilde{V}}^{(1)}\in d \mathbf z) = \sP(\Delta\mathbf{{V}}^{(1)}\in d \mathbf z)$. 2. 
If $p=\sP(V(t_0) = 0)=0$, the exponent measure $\mu$ of any finite-dimensional vector $\Xi=(\xi(t_0), \ldots, \xi(t_k))$, $t_0, \ldots, t_k\in T$, $k\in{\mathbb N}$, satisfies the condition $\mu\left( \{0\}\times [0,\infty)^k \right)=0,$ and following Proposition \[calculateW\], the incremental representation of $\Xi$ according to is given by $\Xi = \max_{i\in{\mathbb N}} U_i \cdot (1, \Delta\mathbf{\tilde{V}}_i)^\top$, where $\Delta\mathbf{\tilde{V}}_i$, $i\in{\mathbb N}$, are independent copies of $\Delta\mathbf{\tilde{V}}=\Delta\mathbf{\tilde{V}}^{(1)}$. 3. If $\xi$ admits a representation , we have $\sP(\Delta\mathbf{\tilde{V}}^{(1)}\in d \mathbf z) = \sP(\Delta\mathbf{{V}}\in d \mathbf z)$, which shows that is indeed a special case of Theorem \[theo\_cond\_increments\_general\]. \[rem\_thres\] In the above theorem, the sequence $a(n)$ of thresholds is only assumed to converge to $\infty$, as $n\to\infty$, ensuring that $\{\eta(t_0) > a(n)\}$ becomes a rare event. For statistical applications $a(n)$ should also be chosen such that the number of exceedances $$\begin{aligned} N(n) = \sum_{i=1}^n {\mathbf 1}\{ \eta_i(t_0) > a(n) \} \end{aligned}$$ converges to $\infty$ almost surely, where $(\eta_i)_{i\in{\mathbb N}}$ is a sequence of independent copies of $\eta$. By the Poisson limit theorem, this is equivalent to the additional assumption that $\lim_{n\to\infty} a(n)/n = 0$, since in that case $n\sP(\eta(t_0) > a(n)) = n / a(n) \to \infty$, as $n\to\infty$. [@eng2012a] consider Hüsler-Reiss distributions [@hue1989; @kab2011] and obtain their limiting results by conditioning on certain extremal events $A\subset E$. They show that various choices of $A$ are sensible in the Hüsler-Reiss case, leading to different limiting distributions of the increments of $\eta$. In case $\xi$ is a Brown-Resnick process and $A = (1,\infty)\times [0, \infty)^{k}$ the assertions of Theorem \[theo\_cond\_increments\_general\] and [@eng2012a Thm. 3.3] coincide. A commonly used class of stationary yet non-ergodic max-stable processes on ${\mathbb R}^d$ is defined by $$\begin{aligned} \label{schlather_model} \xi(t) = \max_{i\in{\mathbb N}} U_i Y_i(t), \quad t\in{\mathbb R}^d,\end{aligned}$$ where $\sum_{i\in{\mathbb N}} \delta_{U_i}$ is a Fréchet point process, $Y_i(t)=\max(0, \tilde Y_i(t))$, $i \in {\mathbb N}$, and the $\tilde Y_i$ are i.i.d. stationary, centered Gaussian processes with $\sE(\max(0, \tilde Y_i(t))) =1$ for all $t\in{\mathbb R}^d$ [@sch2002; @bla2011]. Note that in general, a $t_0\in{\mathbb R}^d$ s.t. $Y_i(t_0)=1$ a.s. does not exist, i.e., the process admits representation but not representation . In particular, for the extremal Gaussian process we have $p=\sP(V(t_0)=0)=1/2$ and the distribution of the increments in becomes $$\begin{aligned} \sP(\Delta\mathbf{\tilde{V}}^{(1)} \! \in \rd \mathbf z) &= \frac12 \sE\Bigl[ Y(t_0) \, \Big|\, (Y(t_1)/Y(t_0), \ldots, Y(t_k)/Y(t_0)) = \mathbf z, \, Y(t_0)>0\Bigr]\\ & \qquad \cdot\sP\Bigl( \bigl(Y(t_1)/Y(t_0), \ldots, Y(t_k)/Y(t_0)\bigr) \in \rd{\mathbf z}\, \Big|\, Y(t_0)>0 \Bigr).\end{aligned}$$ While the Hüsler-Reiss distribution is already given by the incremental representation , cf. [@kab2011], other distributions can be suitably rewritten, provided that the cumulative distribution function and hence the respective exponent measure $\mu$ is known. 
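As an aside, the extremal Gaussian model above can be simulated approximately by truncating the Fréchet point process at a finite number of atoms. The following Python sketch does this for a one-dimensional index set; the exponential covariance of the underlying Gaussian process, the observation sites, the number of atoms and the random seed are purely illustrative choices and not part of the model, and because of the truncation the output is only an approximation of the max-stable limit.

```python
import numpy as np

# Approximate simulation of the extremal Gaussian (Schlather-type) process
# xi(t) = max_i U_i * max(0, Ytilde_i(t)) at finitely many sites.
# Illustrative assumptions only: d = 1, exponential covariance, and the
# Frechet point process truncated to n_atoms points.

rng = np.random.default_rng(0)
sites = np.linspace(0.0, 5.0, 50)
cov = np.exp(-np.abs(sites[:, None] - sites[None, :]))    # illustrative covariance model
scale = np.sqrt(2.0 * np.pi)      # ensures E[max(0, scale * Z)] = 1 for Z ~ N(0, 1)

n_atoms = 1000
gamma = np.cumsum(rng.exponential(size=n_atoms))          # unit-rate Poisson arrival times
U = 1.0 / gamma                                           # Frechet point process (intensity u^{-2} du)

Z = rng.multivariate_normal(np.zeros(sites.size), cov, size=n_atoms)
Y = np.maximum(0.0, scale * Z)                            # non-negative spectral functions with mean one
xi = np.max(U[:, None] * Y, axis=0)                       # one approximate realization of xi
```

Since the atoms $U_i = 1/\Gamma_i$ are generated in decreasing order, increasing `n_atoms` only adds smaller and smaller contributions and thus refines the approximation.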
\[calculateW\] Let $\Xi = (\xi(t_0),\dots,\xi(t_k))$ be a max-stable process on $T = \{t_0, \ldots, t_k \}$ with standard Fréchet margins and suppose that its exponent measure $\mu$ is concentrated on $(0, \infty) \times [0, \infty)^{k}$. Define a random vector ${\mathbf W}=(W^{(1)}, \ldots, W^{(k)})$ via its cumulative distribution function $$\begin{aligned} \label{def_W} \sP( {\mathbf W}\leq \mathbf{s}) = \mu(B_{\mathbf{s}} \cap A), \quad \mathbf{s}\in [0,\infty)^{k}, \end{aligned}$$ where $A = (1,\infty)\times [0, \infty)^{k}$ and $B_{\mathbf s} = \{{\mathbf x}\in [0,\infty)^{k+1}: \, (x^{(1)},\ldots,x^{(k)}) \leq x^{(0)} {\mathbf s}\}$. Then, $\Xi$ allows for an incremental representation with ${\mathbf W}_i$, $i\in{\mathbb N}$, being independent copies of ${\mathbf W}$. First, we note that indeed defines a valid cumulative distribution function. To this end, consider the measurable transformation $$\begin{aligned} T: (0, \infty)\times [0, \infty)^{k} \to (0, \infty)\times [0, \infty)^{k}, \ (x_0,\dots, x_k) \mapsto \left(x_0, \frac{x_1}{x_0}, \dots, \frac{x_k}{x_0}\right). \end{aligned}$$ Then, $ T(B_{\mathbf{s}} \cap A) = (1, \infty) \times [{\mathbf 0}, {\mathbf s}]$ and the measure $\mu^T(\cdot) = \mu(T^{-1}((1,\infty)\times \,\cdot\,))$ is a probability measure on $[0, \infty)^{k}$. Since $$\begin{aligned} \mu(B_{\mathbf{s}} \cap A) = \mu(T^{-1}((1,\infty)\times[{\mathbf 0}, {\mathbf s}])) = \mu^T([{\mathbf 0}, {\mathbf s}]), \end{aligned}$$ the random vector ${\mathbf W}$ is well-defined and has law $\mu^T$. By definition of the exponent measure, we have $\Xi \eqdist \max_{i \in {\mathbb N}} {\mathbf X}_i$, where $\Pi = \sum_{i\in{\mathbb N}} \delta_{{\mathbf X}_i}$ is a PPP on $E$ with intensity measure $\mu$. Then, the transformed point process $T\Pi = \sum_{i\in{\mathbb N}} \delta_{(X_i^{(0)},\, X_i^{(1)}/X_i^{(0)},\, \ldots,\, X_i^{(k)} /X_i^{(0)})}$ has intensity measure $$\begin{aligned} \tilde \mu((c,\infty) \times [{\mathbf 0},{\mathbf s}]) ={}& \mu\left(T^{-1}\left( (c,\infty) \times [{\mathbf 0},{\mathbf s}] \right) \right)\\ ={} & \mu(B_{\mathbf{s}} \cap ((c,\infty) \times [0,\infty)^k)) {} ={} c^{-1} \mu(B_{\mathbf{s}} \cap A) \end{aligned}$$ for any $c > 0$, $\mathbf{s} \in [0,\infty)^k$, where we use the fact that $\mu$, as an exponent measure, has the homogeneity property $c^{-1}\mu(\rd{\mathbf x})=\mu(\rd(c{\mathbf x}))$. Thus, $T\Pi$ has the same intensity as $\sum_{i\in{\mathbb N}} \delta_{(U_i, {\mathbf W}_i)}$, where $\sum_{i\in{\mathbb N}} \delta_{U_i}$ is a Fréchet point process and ${\mathbf W}_i$, $i \in {\mathbb N}$, are i.i.d. vectors with law $\sP({\mathbf W}\leq \mathbf{s}) = \mu(B_{\mathbf{s}} \cap A)$. Hence, we have $$\begin{aligned} \Xi\eqdist{}& \max_{i \in {\mathbb N}} T^{-1}\left(\big(X_i^{(0)}, X_i^{(1)} / X_i^{(0)}, \ldots, X_i^{(k)} / X_i^{(0)}\big)\right)\\ \eqdist{}& \max_{i \in {\mathbb N}} T^{-1}\left(\big(U_i,{\mathbf W}_i\big)\right) {} ={} \max_{i \in {\mathbb N}} U_i {\mathbf W}_i, \end{aligned}$$ which completes the proof. \[ex:symm\_log\] For $T=\{t_0,\dots,t_k\}$, the symmetric logistic distribution is given by $$\begin{aligned} \sP(\xi(t_0) \leq x_0,\dots, \xi(t_k) \leq x_k) = \exp\left[ - \left( x_0^{-q}+ \dots + x_k^{-q}\right)^{1/q} \right], \label{eq:cdf_symm_log} \end{aligned}$$ for $x_0,\dots,x_k>0$ and $q > 1$. 
Hence, the density of the exponent measure is $$\begin{aligned} \mu(\rd x_0,\dots,\rd x_k) = \left(\sum_{i=0}^k x_i^{-q}\right)^{1/q -(k+1)} \left(\prod_{i=1}^k(iq-1)\right) \prod_{i=0}^k x_i^{-q-1} \rd x_0\dots \rd x_k. \end{aligned}$$ Applying Proposition \[calculateW\], the incremental process $W$ in the representation is given by $$\begin{aligned} \sP(W(t_1) \leq s_1, \dots W(t_k) \leq s_k) = \left(1 + \sum_{i=1}^k s_i^{-q}\right)^{1/q - 1}. \end{aligned}$$ Continuous sample paths ----------------------- In this subsection, we provide an analog result to Theorem \[theo\_cond\_increments\_general\], in which convergence in the sense of finite-dimensional distributions is replaced by weak convergence on function spaces. In the following, for a Borel set $U\subset{\mathbb R}^d$, we denote by $C(U)$ and $C^+(U)$ the space of non-negative and strictly positive continuous functions on $U$, respectively, equipped with the topology of uniform convergence on compact sets. \[theo\_cond\_increments\_cont\] Let $K$ be a compact subset of ${\mathbb R}^d$ and $\{\eta(t): \ t\in K\}$ be a process with positive and continuous sample paths in the MDA of a max-stable process $\{\xi(t): \ t\in K\}$ as in in the sense of weak convergence on $C(K)$. In particular, suppose that $$\frac 1n \max_{i=1}^n \eta_i(\cdot) \cvgdist \xi(\cdot), \quad n\to\infty.$$ Let $W$ be the incremental process from and $Z$ a Pareto random variable, independent of $W$. Then, for any sequence $a(n)$ of real numbers with $a(n) \to \infty$, we have the weak convergence on $(0,\infty)\times C(K)$ $$\begin{aligned} \left(\frac{\eta(t_0)}{a(n)}, \frac{\eta(\cdot)}{\eta(t_0)} \ \Big|\ \eta(t_0) > a(n) \right) \cvgdist (Z, W(\cdot)), \end{aligned}$$ as $n$ tends to $\infty$. \[weak\_conv\_Rd\] Analogously to [@whi1970 Thm. 5], weak convergence of a sequence of probability measures $P_n$, $n\in{\mathbb N}$, to some probability measure $P$ on $C({\mathbb R}^d)$ is equivalent to weak convergence of $P_n r_j^{-1}$ to $P r_j^{-1}$ on $C([-j, j]^d)$ for all $j\geq 1$, where $r_j : C({\mathbb R}^d) \to C([-j,j]^d)$ denotes the restriction of a function to the cube $[-j, j]^d$. Hence the assertion of Theorem \[theo\_cond\_increments\_cont\] remains valid if the compact set $K$ is replaced by ${\mathbb R}^d$. As the process $\xi$ is max-stable and $\eta\in\text{MDA}(\xi)$, similarly to the case of multivariate max-stable distributions (cf. Theorem \[theo\_cond\_increments\_general\]), we have that $$\begin{aligned} \label{conv_dehaan} \lim_{u \to \infty} u\sP(\eta / u \in B) = \mu(B) \end{aligned}$$ for any Borel set $B \subset C(K)$ bounded away from $0^K$, i.e., $\inf\{\sup_{s\in K} f(s) : \ f\in B\} > 0$, and with $\mu(\partial B) = 0$ [@deh2006a Cor. 9.3.2], where $\mu$ is the *exponent measure* of $\xi$, defined by $$\begin{aligned} & \sP(\xi(s) \leq x_j, \ s \in K_j, \ j=1,\ldots,m) \nonumber \\ &={} \exp\left[-\mu\left(\left\{ f \in C(K): \ \textstyle\sup_{s \in K_j} f(s) > x_j \textrm{ for some } j \in \{1,\ldots,m\} \right\}\right)\right] \end{aligned}$$ for $x_j \geq 0$, $K_j \subset K$ compact. Thus, $\mu$ equals the intensity measure of the Poisson point process $\sum_{i \in {\mathbb N}} \delta_{U_i W_i(\cdot)}$. For $z>0$ and $D\subset C(K)$ Borel, we consider the sets $$\begin{aligned} A_{z} &= \{f \in C(K): \ f(t_0) > z\}\\ B_D &= \{f \in C(K) : f(\cdot)/f(t_0)\in D\}\end{aligned}$$ and $A=A_1$. Note that $B_D$ is invariant w.r.t. multiplication by any positive constant. 
Then, as $W(t_0) = 1$ a.s., we have $\mu(A_{z}) = \int_{z}^\infty u^{-2} \sd u = z^{-1}$ and for $s_0\geq 1$ and any Borel set $D \subset C(K)$ with $\sP(W \in \partial D) = 0$, by , we get $$\begin{aligned} &\sP\left\{\eta(t_0) / a(n) > s_0,\ \eta(\cdot)/\eta(t_0) \in D \, \Big|\, \eta(t_0) > a(n) \right\}\\ &= \frac{a(n) \sP\bigl\{\eta(\cdot) / a(n) \in A_{s_0} \cap B_D \cap A\bigr\}}{a(n) \sP\bigl\{\eta(\cdot) / a(n) \in A\bigr\}}\\ &\stackrel{n \to \infty}{\longrightarrow}{} \frac{\mu(B_D \cap A_{s_0})}{\mu(A)}\\ &={} \int_{s_0}^\infty u^{-2}\sP\bigl\{u W(\cdot) \in B_D\bigr\} \sd u\\ &={} s_0^{-1} \sP\bigl\{W(\cdot) \in D\bigr\}, \end{aligned}$$ which is the joint distribution of $Z$ and $W(\cdot)$. \[BRproc\] For $T={\mathbb R}^d$, $d\geq 1$, let $\{Y(t): \ t\in T\}$ be a centered Gaussian process with stationary increments, continuous sample paths and $Y(t_0) = 0$ for some $t_0\in{\mathbb R}^d$. Note that by [@adl2007 Thm. 1.4.1] it is sufficient for the continuity of $Y$ that there exist constants $C,\alpha,\delta > 0$, such that $$\begin{aligned} \sE |Y(s) - Y(t)|^2 \leq \frac{C}{|\log \|s-t\| |^{1+\alpha}} \end{aligned}$$ for all $s,t\in{\mathbb R}^d$ with $\|s-t\|<\delta$. Further let $\gamma(t) = \sE(Y(t) - Y(0))^2$ and $\sigma^2(t) = \sE(Y(t))^2$, $t \in {\mathbb R}^d$, denote the variogram and the variance of $Y$, respectively. Then, with a Fréchet point process $\sum_{i\in{\mathbb N}} \delta_{U_i}$ and independent copies $Y_i$ of $Y$, $i\in{\mathbb N}$, the process $$\begin{aligned} \label{BR_proc} \xi(t) = \max_{i\in{\mathbb N}} U_i \exp\left(Y_i(t) - \sigma^2(t) / 2\right), \quad t\in{\mathbb R}^d, \end{aligned}$$ is stationary and its distribution only depends on the variogram $\gamma$. Comparing with the incremental representation , the distribution of the increments is given by the log-Gaussian random field $W(t) = \exp\left(Y(t) - \sigma^2(t) / 2\right)$, $t\in{\mathbb R}^d$, and Theorem \[theo\_cond\_increments\_cont\] applies. Mixed moving maxima representation {#M3representation} ================================== A large and commonly used class of max-stable processes is the class of M3 processes . Let $$\begin{aligned} \label{pi0} \Pi_0 = \sum_{i\in{\mathbb N}} \delta_{(U_i.T_i,F_i)} \end{aligned}$$ be the corresponding PPP on $(0,\infty)\times{\mathbb R}^d\times C({\mathbb R}^d)$ with intensity $u^{-2}\rd u \,\rd t \,\sP_F(\rd f)$. In the sequel, M3 processes are denoted by $$\begin{aligned} M(t) = \max_{i\in{\mathbb N}} U_i F_i(t- T_i), \quad t\in{\mathbb R}^d.\end{aligned}$$ The marginal distributions of $M$ are given by $$\begin{aligned} & \sP(M(t_0)\leq s_0, \ldots, M(t_k)\leq s_k) \notag\\ &= \sP\left[ \Pi_0 \left(\left\{(u,t,f): \max_{l=0}^k u f(t_l-t)/s_l > 1\right\}\right) = 0\right] \notag\\ &= \exp\left(- \int_{C({\mathbb R}^d)} \int_{{\mathbb R}^d} \max_{l=0}^k (f(t_l-t)/s_l)\, \rd t \, \sP_F(\rd f) \right),\label{M3_marginal}\end{aligned}$$ $t_0, \ldots, t_k\in{\mathbb R}^d$, $s_0, \ldots, s_k\geq 0$, $k\in{\mathbb N}$. In Section \[examples\_increment\_representation\], we were interested in recovering the incremental process $W$ from processes in the MDA of a max-stable process with incremental representation. In case of M3 processes, the object of interest is clearly the distribution of the shape function $F$. Thus, in what follows, we provide the corresponding convergence results for processes $\eta$ in the MDA of an M3 process. 
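Before doing so, a small numerical illustration of the marginal formula above may be helpful. The sketch below evaluates it for a deterministic Gaussian shape function (a Smith-type model in $d=1$); the function name `m3_cdf`, the truncation of the integration domain and the grid resolution are ad hoc choices made only for this illustration.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

# Numerical evaluation of the M3 marginal distribution for a deterministic
# Gaussian shape function in d = 1 (a Smith-type model).  The integration
# domain and grid are ad hoc choices; widening/refining them improves accuracy.

def m3_cdf(t_obs, s, sd=1.0, grid=np.linspace(-50.0, 50.0, 20001)):
    """Approximate P(M(t_0) <= s_0, ..., M(t_k) <= s_k) for the shape F = N(0, sd^2) density."""
    t_obs = np.asarray(t_obs, dtype=float)
    s = np.asarray(s, dtype=float)
    vals = norm.pdf(t_obs[None, :] - grid[:, None], scale=sd) / s[None, :]
    return np.exp(-trapezoid(vals.max(axis=1), grid))   # exp( - int max_l f(t_l - t)/s_l dt )

# Sanity check: a single site must give a standard Frechet margin,
# P(M(t_0) <= 2) = exp(-1/2) ~ 0.6065.
print(m3_cdf([0.0], [2.0]))
```

For a random shape function $F$, the inner integral would in addition be averaged over draws of $F$, for instance by Monte Carlo.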
We distinguish between processes on ${\mathbb R}^d$ with continuous sample paths and processes on a grid (${\mathbb Z}^d$). The main idea is to consider $\eta$ in the neighborhood of its own (local) maximum, conditional on this maximum being large. Continuous Case --------------- Let $\{\eta(t): \, t \in {\mathbb R}^d\}$ be strictly positive and in the MDA of a mixed moving maxima process $M$ in the sense of weak convergence in $C({\mathbb R}^d)$. We assume that $\eta$ is normalized such that the norming functions in are given by $c_n(t) = 1 / n$ and $b_n(t) = 0$, for any $n\in{\mathbb N}$ and $t\in{\mathbb R}^d$. Further suppose that the shape function $F$ of $M$ is sample-continuous and satisfies $$\begin{aligned} \begin{split} F({\mathbf}0) &= \lambda \quad a.s., \\ F(t) & \in [0,\lambda) \ \forall t \in {\mathbb R}^d \setminus \{{\mathbf}0\} \quad a.s. \label{eq:Fmaxatorigin} \end{split}\end{aligned}$$ for some $\lambda > 0$ and $$\int_{{\mathbb R}^d} \sE\left\{ \max_{t_0 \in K} F(t_0 - t) \right\} \sd t < \infty \label{eq:sup-integrability-cont}$$ for any compact set $K \subset {\mathbb R}^d$. Under these assumptions, there is an analog result to Theorem \[theo\_cond\_increments\_cont\]. \[thm\_conv\_mmm\] Let ${Q}, K \subset {\mathbb R}^d$ be compact such that $\partial {Q}$ is a Lebesgue null set and let $$\tau_{Q}: \ C({Q}) \to {\mathbb R}^d, \ f \mapsto \inf\left( \operatorname*{arg\,max}_{t \in {Q}} f(t) \right),$$ where [“inf”]{} is understood in the lexicographic sense. Then, under the above assumptions, for any Borel set $B \subset C(K)$ with $\sP(F / \lambda \in \partial B) = 0$, and any sequence $a(n)$ with $a(n) \to \infty$ as $n \to \infty$, we have $$\begin{aligned} & \lim_{\substack{\{{\mathbf}0\} \in L \nearrow {\mathbb R}^d\\ {\rm compact}}} \limsup_{n \to \infty} \sP\Big\{ \eta\big(\tau_{Q}(\eta|_{Q})+\cdot\big) \big/ \eta(\tau_{Q}(\eta|_{Q})) \in B \ \Big| \\[-1em] & \hspace{2.5cm} \max_{t\in {Q}}\eta(t) = \max_{t \in {Q}\oplus L} \eta(t), \ \max_{t\in {Q}}\eta(t) \geq a(n)\Big\} \hfill {}={} \hfill \sP\big\{F(\cdot) / \lambda \in B\big\}, \end{aligned}$$ where $\oplus$ denotes morphological dilation. The same result holds true if we replace $\limsup_{n \to \infty}$ by $\liminf_{n \to \infty}$. First, we consider a fixed compact set $L\subset{\mathbb R}^d$ large enough such that $K \cup \{{\bf 0}\} \subset L$ and define $$\begin{aligned} A_L = \left\{ f \in C({Q}\oplus L): \ \max_{t\in {Q}}f(t) \geq 1, \ \max_{t\in {Q}}f(t) = \max_{t \in {Q}\oplus L} f(t)\right\} \end{aligned}$$ and $$\begin{aligned} C_B = \left\{f \in C({Q}\oplus L): \ f\big(\tau_{Q}(f|_{Q}) + \,\cdot\,\big) \big/ f(\tau_{Q}(f|_{Q})) \in B\right\} \end{aligned}$$ for any Borel set $B \subset C(K)$. Note that $C_B$ is invariant w.r.t. multiplication by any positive constant. Thus, we get $$\begin{aligned} & \sP\Big\{\eta\big(\tau_{Q}(\eta|_{Q}) + \cdot\big) \big/ \eta(\tau_{Q}(\eta|_{Q})) \in B \ \Big|\ \max_{t\in {Q}}\eta(t) = \max_{t \in {Q}\oplus L} \eta(t) \geq a(n)\Big\} \nonumber \\ & ={} \sP\big\{\eta / a(n) \in C_B \,\big|\, \eta / a(n) \in A_L\big\} \nonumber\\ & ={} \frac{a(n) \sP\big\{\eta/a(n) \in C_B,\,\eta/a(n) \in A_L \big\}}{a(n) \sP\big\{\eta/a(n) \in A_L \big\}}. \label{eq:expand-cont} \end{aligned}$$ By [@deh2006 Cor. 9.3.2] and [@res2008 Prop. 
3.12] we have $$\begin{aligned} \limsup_{u \to \infty} u\sP(\eta / u \in C) \leq{}& \mu(C), \quad C \subset C({Q}\oplus L) \text{ closed},\\ \liminf_{u \to \infty} u\sP(\eta / u \in O) \geq{}& \mu(O), \quad O \subset C({Q}\oplus L) \text{ open}, \end{aligned}$$ where $C$ and $O$ are bounded away from $0^K$. Here, $\mu$ is the intensity measure of the PPP $\sum_{i \in {\mathbb N}} \delta_{U_i F_i(\,\cdot\, - T_i)}$ restricted to $C({Q}\oplus L)$. Thus, by adding or removing the boundary, we see that all the limit points of Equation lie in the interval $$\label{eq:liminterval} \left[ \frac{\mu(C_B \cap A_L) - \mu(\partial (C_B \cap A_L))}{\mu(A_L) + \mu(\partial A_L)}, \frac{\mu(C_B \cap A_L) + \mu(\partial (C_B \cap A_L))}{\mu(A_L) - \mu(\partial A_L)}\right].$$ We note that $A_L$ is closed and the set $$\begin{aligned} A_L^* ={} & \bigg\{ f \in C({Q}\oplus L): \\[-.5em] &\quad \, \tau_{Q}(f|_{Q}) \in {Q}^o, \ \max_{t\in {Q}} f(t) > \max\big\{1, f(t)\big\} \ \forall t \in {Q}\oplus L \setminus\{\tau_{Q}(f|_{Q})\} \bigg\} \end{aligned}$$ is in the interior of $A_L$ (Lemma \[lem:AL\]). Hence, we can assess $$\begin{aligned} \mu(\partial A_L) \leq{} & \quad \ \mu(\{f \in C({Q}\oplus L): \ \max_{t\in {Q}}f(t) = 1\}) \notag\\ & + \mu\bigg( \quad \bigg( \quad \{f \in C({Q}\oplus L): \ \tau_{Q}(f|_{Q}) \in \partial {Q}\} \notag\\ & \hspace{1.55cm} \cup \left\{f \in C({Q}\oplus L): \ \operatorname*{arg\,max}_{t \in {Q}\oplus L} f(t) \text{ is not unique}\right\}\bigg) \notag\\ & \qquad \cap \left\{f \in C({Q}\oplus L): \ \max_{t\in {Q}}f(t) = \max_{t \in {Q}\oplus L} f(t) \geq 1 \right\}\bigg) \notag \\ \leq{} & 0 + \int_{\partial {Q}} \int_{\lambda^{-1}}^\infty u^{-2} \sd u \sd t_0 \notag\\ & \phantom{0} + \int_{{\mathbb R}^d \setminus ({Q}\oplus L)} \int_{\lambda^{-1}}^\infty u^{-2} \sP\left\{u \max_{t_0 \in {Q}} F(t_0 -x) \geq 1\right\} \sd u \sd x \label{eq:partAL}. \end{aligned}$$ Here, the equality $\mu(\{f \in C({Q}\oplus L): \ \max_{t\in {Q}}f(t) = 1\}) = 0$ holds as $\max_{t \in {Q}} M(t)$ is Fréchet distributed (cf. [@deh2006 Lemma 9.3.4]). Since $\partial {Q}$ is a Lebesgue null set, the second term on the right-hand side of also vanishes. Thus, $$\begin{aligned} \mu(\partial A_L) \leq{} & \int_{{\mathbb R}^d \setminus ({Q}\oplus L)} \int_{\lambda^{-1}}^\infty u^{-2} \sP\left\{u \max_{t_0 \in {Q}} F(t_0 -x) \geq 1\right\} \sd u \sd x =: c(L) \label{eq:partAL2}. \end{aligned}$$ Now, let $B \subset C(K)$ a be Borel set such that $\sP(F / \lambda \in \partial B) = 0$. For the set $C_B$, we obtain that the set $$\begin{aligned} C_B^* ={} & \bigg\{ f \in C({Q}\oplus L): \ \operatorname*{arg\,max}_{f \in {Q}} f(t) \text{ is unique},\ \frac{f\big(\tau_{Q}(f|_{Q}) + \cdot\big)}{f(\tau_{Q}(f|_{Q}))} \in B^o \bigg\} \end{aligned}$$ is in the interior of $C_B$ and that the closure of $C_B$ is a subset of $$\begin{aligned} C_B^* \cup{} & \left\{f \in C({Q}\oplus L): \, \operatorname*{arg\,max}_{t \in {Q}} f(t) \text{ is not unique}\right\}\\ \cup{} & \left\{f \in C({Q}\oplus L): \, f\big(\tau_{Q}(f|_{Q}) + \cdot\big) \big/ f(\tau_{Q}(f|_{Q})) \in \partial B\right\} \end{aligned}$$ (Lemma \[lem:interCB\] and Lemma \[lem:CB\]). 
Thus, by , we can assess $$\begin{aligned} \mu(\partial (C_B \cap A_L)) \leq{} & \mu(\partial A_L) + \mu(\partial C_B \cap A_L) \notag\\ \leq{} & c(L) + \int_{{\mathbb R}^d \setminus ({Q}\oplus L)} \int_{\lambda^{-1}}^\infty u^{-2} \sP\left\{u \max_{t_0 \in {Q}} F(t_0 -x) \geq 1\right\}\sd u \sd x \notag\\ & \hspace{0.7cm} + \int_{{Q}} \int_{\lambda^{-1}}^\infty u^{-2} \sP(F / \lambda \in \partial B) \sd u \sd t \quad {}={} \quad 2 c(L). \label{eq:partCB} \end{aligned}$$ Furthermore, we get $$\begin{aligned} & \mu(C_B \cap A_L) \nonumber \\ ={} & \int_{Q}\int_{\lambda^{-1}}^\infty u^{-2} \sP\Big\{F(\cdot) / \lambda \in B\Big\} \sd u \sd t_0 \nonumber \\ & + \int_{{\mathbb R}^d \setminus({Q}\oplus L)} \int_{\lambda^{-1}}^\infty u^{-2} \sP\bigg\{u \max_{t_0 \in {Q}} F(t_0 -x) \geq 1,\ \nonumber \\ & \hspace{2.5cm} F\left(\Big(\tau_{Q}(F(\cdot-x)|_{Q})\Big)+\cdot-x\right) \Big/ \max_{t_0 \in {Q}} F(t_0-x) \in B,\nonumber \\ & \hspace{2.5cm} F(t-x) / \max_{t_0 \in {Q}} F(t_0-x) \leq 1 \ \forall t \in {Q}\oplus L \bigg\} \sd u \sd x. \label{eq:CB} \end{aligned}$$ The second term in is positive and can be bounded from above by $c(L)$. Setting $B= C(K)$, $\mu(A_L)$ can be expressed in an analogous way. Now, we plug in the results of , and into to obtain that all the limit points of are in the interval $$\begin{aligned} \left[\frac{\lambda \cdot |{Q}| \cdot \sP\big\{F(\cdot) / \lambda \in B\big\} - 2c(L)}{\lambda \cdot |{Q}| + 2c(L)}, \frac{\lambda \cdot |{Q}| \cdot \sP\big\{F(\cdot) / \lambda \in B\big\} + 3c(L)}{\lambda \cdot |{Q}| - c(L)} \right]. \end{aligned}$$ Finally, we note that $c(L)$ can be bounded from above by $$\int_{{\mathbb R}^d \setminus ({Q}\oplus L)} \sE \Big\{\max_{t_0 \in {Q}} F(t_0-x)\Big\} \sd x,$$ which vanishes for $L \nearrow {\mathbb R}^d$ because of assumption . This yields the assertion of the theorem. We conclude the treatment of the continuous case with an example of a process $\eta$ that allows for an application of Theorem \[thm\_conv\_mmm\]. As $\eta$ will be composed of a (locally) finite number of shape functions from the M3 construction in , $\eta$ may directly model rainfall data and has therefore the potential for various practical applications. \[ex:mmm-mda\] Let $\{F(t): \ t \in {\mathbb R}^d\}$ be a random shape function as defined in . For $c, \epsilon > 0$ let $\Pi_{c, \epsilon} = \sum_{i\in{\mathbb N}} \delta_{(U_i.T_i,F_i)}$ be a PPP on $(0,\infty)\times{\mathbb R}^d\times C({\mathbb R}^d)$ with intensity $$\begin{aligned} c{\mathbf 1}_{\{u\geq\epsilon\}}u^{-2}\rd u\,\rd t\,\sP_F(\rd f).\end{aligned}$$ and, for $\kappa > 0$, define a process $\tilde M=\tilde M_{c,\epsilon,\kappa}$ by $$\tilde M(\cdot) = \kappa \vee \max_{(u, t, f)\in \Pi_{c, \epsilon}} u f(\,\cdot\, - t).$$ Then, the following statements hold. 1. $\tilde M$ is in the MDA of the M3 process $M$ associated to $F$ in the sense of finite-dimensional distributions. 2. If $F$ satisfies , then $\tilde M$ is in the MDA of $M$ in the sense of weak convergence on $C({\mathbb R}^d)$. For a proof of this example, the reader is referred to Appendix \[sec:proof\_ex\_mmm-mda\]. Discrete Case ------------- Theorem \[thm\_conv\_mmm\] allows for estimation of $F$ if the complete sample paths of $\eta$ are known, at least on a large set ${Q}\oplus L \subset {\mathbb R}^d$. For many applications, this assumption might be too restrictive. Therefore, we seek after a weaker assumption that only requires to know $\eta$ on a grid. 
This needs a modification of the underlying model leading to a discretized mixed moving maxima process. Let $\{F(t): \ t \in {\mathbb Z}^d\}$ be a measurable stochastic process with values in $[0, \infty)$ and $$\begin{aligned} \label{assumption_integral_discr} \sum_{t \in {\mathbb Z}^d} \sE F(t) = 1.\end{aligned}$$ Further, let $\Pi_{0,\operatorname{discr}} = \sum_{i\in{\mathbb N}} \delta_{(U_i.T_i,F_i)}$ be a Poisson point process on $(0,\infty) \times {\mathbb Z}^d\times [0,\infty)^{{\mathbb Z}^d}$ with intensity $u^{-2}\rd u\, \delta_{{\mathbb Z}^d}(\rd t) \, \sP_F(\rd f)$. Then, the discrete mixed moving maxima process $M_{\operatorname{discr}}$ is defined by $$\begin{aligned} M_{\operatorname{discr}}(t) = \max_{i\in{\mathbb N}} U_i F_i(t- T_i), \quad t\in{\mathbb Z}^d. \label{def_M3_discr}\end{aligned}$$ The process $M_{\operatorname{discr}}$ is max-stable and stationary on ${\mathbb Z}^d$ and has standard Fréchet margins. Let $\{\eta(t), \ t \in {\mathbb Z}^d\}$ be in the MDA of a discrete mixed moving maxima process $M_{\operatorname{discr}}$ in the sense of convergence of finite-dimensional distributions with norming functions $c_n(t) = 1/n$ and $b_n(t) = 0$ in , $n\in{\mathbb N}$ and $t\in{\mathbb Z}^d$. Furthermore, we assume that the shape function $F$ satisfies with ${\mathbb R}^d$ being replaced by ${\mathbb Z}^d$. Then, analogously to Theorem \[thm\_conv\_mmm\], the following convergence result can be shown. Under the above assumptions, for any $k \in {\mathbb N}$, $k+1$ distinct points $t_0, \ldots, t_k \in {\mathbb Z}^d$, any Borel sets $B_1, \ldots, B_k \subset [0,\infty)$ such that $$\sP\big\{(F(t_1)/\lambda,\ldots,F(t_k)/\lambda) \in \partial(B_1\times\cdots\times B_k)\big\}=0,$$ and any sequence $a(n)$ with $a(n) \to \infty$ as $n \to \infty$, it holds $$\begin{aligned} \lim_{\substack{\{{\mathbf}0\} \in L \nearrow {\mathbb Z}^d\\ {\rm compact}}} \lim_{n \to \infty} &\sP\big\{ \eta(t_0+t_i) / \eta(t_0) \in B_i, \ i=1,\ldots,k \ \big| \\[-1em] & \hspace{3.5cm} \eta(t_0) = \max_{t \in L} \eta(t_0+t),\ \eta(t_0) \geq a(n)\big\}\\[.5em] = &\sP\big\{F(t_i) / \lambda \in B_i, \ i=1,\ldots,k\big\}. \end{aligned}$$ Switching between the different representations {#sec:switching} =============================================== In the previous sections we analyzed processes that admit the incremental representations or and, on the other hand, processes of M3 type as in . We show that under certain assumptions, we can switch from one representation to the other. Incremental representation of mixed moving maxima processes ----------------------------------------------------------- We distinguish between M3 processes with strictly positive shape functions, for which we can find an incremental representation , and general non-negative shape functions, for which only the weaker representation can be obtained. ### Mixed moving maxima processes with positive shape functions {#ex:M3} Let $M$ be an M3 process on ${\mathbb R}^d$ as in with a shape function $F$ with $F(t) > 0$ for all $t \in {\mathbb R}^d$. 
Then $M$ admits a representation with $t_0 = 0$ and incremental process $W$ given by $$\begin{aligned} \sP(W\in L) = \int_{C^+({\mathbb R}^d)} \int_{{\mathbb R}^d} {\mathbf 1}_{\{f(\cdot - t)/f(-t)\in L\}} f(-t) \sd t \, \sP_F(\rd f), \quad L\in\mathcal B(C^+({\mathbb R}^d)).\label{defWofM3}\end{aligned}$$ We consider the two Poisson point processes on $(0,\infty)\times C^+({\mathbb R}^d)$ $$\begin{aligned} \Pi_1 = \sum_{i\in{\mathbb N}}\delta_{(U_i F_i(-T_i), F_i(\cdot - T_i)/F_i(-T_i))}, \label{auxPPP1}\end{aligned}$$ as a transformation of $\Pi_0$ in , and $$\begin{aligned} \Pi_2 = \sum_{i\in{\mathbb N}}\delta_{(U'_i, W_i(\cdot))}, \label{auxPPP2}\end{aligned}$$ with $W_i$, $i\in{\mathbb N}$, being independent copies of $W$, and with $\sum_{i\in{\mathbb N}}\delta_{U'_i}$ being a Fréchet point process. Then the intensity measures of $\Pi_1$ and $\Pi_2$ satisfy $$\begin{aligned} &\sE\Pi_1([z,\infty)\times L)\\ &=\int_{C^+({\mathbb R}^d)}\int_{{\mathbb R}^d}\int_0^\infty u^{-2} {\mathbf 1}_{\{u f(-t) \geq z\}} {\mathbf 1}_{\{f(\cdot - t)/f(-t)\in L\}} \, \rd u \, \rd t \, \sP_F(\rd f)\\ &= z^{-1} \int_{C^+({\mathbb R}^d)}\int_{{\mathbb R}^d} {\mathbf 1}_{\{f(\cdot - t)/f(-t)\in L\}} f(-t) \, \rd t \, \sP_F(\rd f)\\ &=z^{-1} \sP(W\in L)\\ &=\sE\Pi_2([z,\infty)\times L), \end{aligned}$$ $L\in\mathcal B({C^+({\mathbb R}^d)})$, $z> 0$, and hence $\Pi_1\eqdist\Pi_2$. The assertion follows from the fact that $M$ is uniquely determined by $\Pi_1$ via the relation $M(t) = \max_{(v,g)\in\Pi_1} v g(t)$, $t\in{\mathbb R}^d$. While the definition of $W$ in is rather implicit, in the following, we provide an explicit construction of the incremental process $W$, which can also be used for simulation. To this end, let $\sum_{i\in{\mathbb N}}\delta_{U_i''}$ be a Fréchet point process and let the distribution of $(S,G)\in C^+({\mathbb R}^d)\times{\mathbb R}^d$ be given by $$\begin{aligned} \label{hat_distr} &\sP\bigl((S, G)\in (B\times L)\bigr)\\ &= \int_{C^+({\mathbb R}^d)}\int_{{\mathbb R}^d} {\mathbf 1}_{s\in B}{\mathbf 1}_{f\in L} \frac{f(-s)}{\int f(r) \rd r} \,\rd s \left(\int f(r) \rd r\right) \sP_F(\rd f) \notag\\ \notag &= \int_{C^+({\mathbb R}^d)}\int_{{\mathbb R}^d} {\mathbf 1}_{s\in B}{\mathbf 1}_{f\in L} f(-s) \, \rd s \, \sP_F(\rd f),\end{aligned}$$ $B\in\mathcal B^d$, $L\in\mathcal B({C^+({\mathbb R}^d)})$. In other words, $\sP_G(\rd f)= (\int f(r)\sd r)\,\sP_F(\rd f)$ and, conditional on $\{G=f\}$, the density function of the shift $S$ is proportional to $f(- \cdot)$. Putting $W(\cdot) = G(\cdot - S)/G(-S)$, equation is satisfied and with i.i.d. copies $W_i$, $i\in{\mathbb N}$, of $W$, we get that $\max_{i\in{\mathbb N}} U_i'' W_i(\cdot)$ is indeed an incremental representation of the mixed moving maxima process $M$. \[mmm\_BRproc\] We consider the following two special cases of mixed moving maxima processes: 1. Let $\Sigma\in{\mathbb R}^{d\times d}$ be a positive definite matrix and let the shape function be given by $F(t) = (2\pi)^{-d/2} |\Sigma|^{-1/2} \exp\left\{-\frac{1}{2}t^\top\Sigma^{-1} t\right\}$, $t\in{\mathbb R}^d$. Then, $M$ becomes the well-known Smith process. At the same time, by , $S\sim N(0,\Sigma)$ and $G \equiv F$. Thus $$\begin{aligned} Y(t)&=\exp\left\{-\textstyle\frac{1}{2}(t-S)^\top\Sigma^{-1}(t-S) + \frac{1}{2}S^\top\Sigma^{-1}S\right\}\\ &= \exp\left\{-\textstyle\frac{1}{2}t^\top\Sigma^{-1}t + t^\top \Sigma^{-1} S\right\}. 
\end{aligned}$$ Since $\sE(t^\top \Sigma^{-1} S)^2 = t^\top\Sigma^{-1}t $, $M$ is equivalent to the Brown-Resnick process in with variogram $\gamma(h) = h^\top \Sigma^{-1}h$. 2. For the one-dimensional Brown-Resnick process $\xi$ in with variogram $\gamma(h) =|h|$, i.e., $Y$ is the exponential of a standard Brownian motion with drift $-|t| / 2$, [@eng2011] recently showed that the M3 representation is given by $\{F(t): \, t\in{\mathbb R}\} = \{ Y(t) \mid Y(s)\leq 0 \ \forall s \in {\mathbb R}: \, t\in{\mathbb R}\}$, i.e., the shape function is the exponential of a conditionally negative drifted Brownian motion. Having these two representations, it follows that the law of the conditional Brownian motion $F$, re-weighted by $\int F(t) \rd t$ and randomly shifted with density $F(-\cdot) / \int F(t) \rd t$, coincides with the law of $Y$. ### Mixed moving maxima processes with finitely supported shape functions {#ex:M32}  Let $M$ be an M3 process on ${\mathbb R}^d$ as in . In contrast to Section \[ex:M3\], where the shape functions are required to take positive values, here, we allow for arbitrary shape functions with values in $[0, \infty)$. \[finite\_supp\] The M3 process $M$ as in allows for an incremental representation of the form , with incremental processes $V_i$ given by $$V_i(\cdot) = F_i(\cdot -R_i) / g(R_i).$$ Here $R_i$, $i\in{\mathbb N}$, are i.i.d. copies of a random vector $R$ with arbitrary density $g$ satisfying $g(t)>0$ for all $t\in{\mathbb R}^d$, and $F_i$, $i\in{\mathbb N}$, are i.i.d. copies of the random shape function $F$. With $\sum_{i\in{\mathbb N}} \delta_{U_i}$ being a Fréchet point process, we consider the process $$\begin{aligned} \tilde M(t) = \max_{i\in{\mathbb N}} U_i F_i(t-R_i) / g(R_i), \qquad t\in{\mathbb R}^d,\end{aligned}$$ which clearly is of the form . Then, $$\begin{aligned} \sP&(\tilde M(t_0)\leq s_0, \ldots, \tilde M(t_k)\leq s_k) \\ &= \exp\left(- \int_{C({\mathbb R}^d)}\int_{{\mathbb R}^d} \max_{l=0}^k (f(t_l-t)/(g(t)s_l)) g(t) \, \rd t \, \sP_F(\rd f)\right)\\ &= \exp\left(- \int_{C({\mathbb R}^d)}\int_{{\mathbb R}^d} \max_{l=0}^k (f(t_l-t)/s_l)) \, \rd t \, \sP_F(\rd f)\right).$$ The right-hand side coincides with the marginal distribution of $M$, which is given by . This concludes the proof. Decomposing $V$ as in with $t_0=0$, we obtain the equality in distribution $$\begin{aligned} V^{(1)}(\cdot) \eqdist \bigl(F(\cdot -R) / g(R) \ \big| -R\in \operatorname{supp}(F) \bigr).\end{aligned}$$ Applying Theorem \[theo\_cond\_increments\_general\] yields $$\begin{aligned} &\sP\bigl(\Delta\mathbf{\tilde{V}}^{(1)}\in \rd{\mathbf z}\bigr)\notag\\ &= \sP\bigl( F(-R) / g(R) > 0\bigr) \cdot\int_0^\infty y \sP\big(V^{(1)}(0)\in \rd y,\ \Delta{\mathbf V}^{(1)}\in\rd{\mathbf z}\big)\notag\\ &= \int_{C({\mathbb R}^d)} \int_{-\operatorname{supp}(f)} g(s)\sd s \,\sP_F(\rd f)\notag\\ &\hspace{1cm} \cdot\int_0^\infty y \int_{C({\mathbb R}^d)} \int_{-\operatorname{supp}(f)} {\mathbf 1}_{f(-t)/g(t)\in \rd y} {\mathbf 1}_{(f(t_l-t)/f(-t))_{l=1}^k \in \rd{\mathbf z}} \notag\\ &\hspace{6cm} \cdot g(t) \left(\textstyle\int_{-\operatorname{supp}(f)} g(s)\rd s\right)^{-1} \sd t \,\sP_F(\rd f) \sd y\notag\\ &= \int_{C({\mathbb R}^d)} \int_{\operatorname{supp}(f)} g(-s)\sd s \sP_F(\rd f)\notag\\ &\hspace{1cm} \cdot \int_{C({\mathbb R}^d)} \int_{\operatorname{supp}(f)} f(t) {\mathbf 1}_{(f(t_l+t)/f(t))_{l=1}^k \in \rd{\mathbf z}} \left(\textstyle\int_{\operatorname{supp}(f)} g(-s)\rd s\right)^{-1} \sd t \,\sP_F(\rd f). 
\label{distr_gen_incr}\end{aligned}$$ If the shape function $F$ is deterministic, the right-hand side of simplifies to $ \int_{\operatorname{supp}(f)} f(t) {\mathbf 1}_{(f(t_l+t)/f(t))_{l=1}^k \in \rd{\mathbf z}} \sd t$, i.e., the asymptotic conditional increments of $\eta\in\text{MDA}(M)$ can be seen as a convolution of the shape function’s increments with a random shift, whose density is given by the shape function itself. Note in particular, that this distribution is independent of the choice of the density $g$ in Theorem \[finite\_supp\]. Section \[ex:M3\] considers the subclass of M3 processes with strictly positive shape functions and provides an incremental representation as in , which is nicely related to the conditional increments of $\eta$ due to the property $W(0)=1$. Section \[ex:M32\] applies to arbitrary M3 processes but only yields an incremental representation as in , for which the incremental process $V$ does not directly represent the conditional increments of $\eta$. Mixed moving maxima representation of the incremental construction ------------------------------------------------------------------ \[theo:M3ofIncremental\] Let $\sum_{i \in {\mathbb N}} \delta_{U_i}$ be a Fréchet point process and let $W_i$, $i \in {\mathbb N}$, be independent copies of a non-negative, sample-continuous process $\{W(t), \ t\in{\mathbb R}^d\}$, satisfying $$\begin{aligned} \lim_{||t|| \to \infty} W(t) &= 0 && a.s.,\\ \sE W(t) &=1 &&\text{for all } t \in {\mathbb R}^d,\\ \text{and} \quad \sE\left\{\textstyle\max_{t \in K} W(t)\right\} & < \infty &&\text{for any compact set $K \subset {\mathbb R}^d$.} \end{aligned}$$ Furthermore, let $W$ be Brown-Resnick stationary, i.e., the process $\xi$, defined by $$\xi(t) = \max_{i \in {\mathbb N}} U_i W_i(t), \quad t \in {\mathbb R}^d,$$ is stationary with standard Fréchet margins. Then, the following assertions hold: 1. The random variables $$\begin{aligned} \tau_i = \inf \left\{\operatorname*{arg\,sup}_{t \in {\mathbb R}^d} W_i(t)\right\} \quad \text{and} \quad \gamma_i = \sup_{t \in {\mathbb R}^d} W_i(t) \end{aligned}$$ are well-defined. Furthermore, $\sum_{i \in {\mathbb N}} \delta_{(U_i \gamma_i, \tau_i, W_i(\cdot + \tau_i) / \gamma_i)}$ is a Poisson point process on $(0,\infty) \times {\mathbb R}^d \times C({\mathbb R}^d)$ with intensity measure $ \Psi(\rd u, \sd t, \sd f) = c u^{-2} \sd u \, \sd t \, \sP_{\tilde F}(\rd f)$ for some $c > 0$ and some probability measure $\sP_{\tilde F}$. 2. $\xi$ has an M3 representation with $\sP_F(\rd f) = \sP_{\tilde F}(c\sd f)$ being the probability measure of the shape function $F$. The constant $c>0$ is given by $$\begin{aligned} \label{eq:c} c = \left(\int_{{\mathbb R}^d} \int_{C({\mathbb R}^d)} f(t) \, \sP_{\tilde F}(\rd f) \sd t\right)^{-1} \end{aligned}$$ and the probability measure $\sP_{\tilde F}$ is defined by $$\sP_{\tilde F}(A) = \frac{\int_0^\infty y \sP(W(\cdot +\tau) / y \in A, \ \tau \in K \mid \gamma=y) \, \sP_\gamma(\rd y)} {\int_0^\infty y \sP(\tau \in K \mid \gamma=y) \, \sP_\gamma(\rd y)}$$ for any Borel set $A \subset C({\mathbb R}^d)$ and any compact set $K \subset {\mathbb R}^d$, where $\tau$ and $\gamma$ are defined as $\tau_i$ and $\gamma_i$, respectively, replacing $W_i$ by $W$, and $\sP_\gamma$ is the probability measure belonging to $\gamma$. <!-- --> 1. Analogously to the proof of [@kab2009 Thm. 14]. 2. 
From the first part it follows that $$\Phi_0 = \sum_{i \in {\mathbb N}} \delta_{(U_i \gamma_i / c,\, \tau_i,\, c \cdot W_i(\cdot + \tau_i) / \gamma_i)}$$ is a PPP with intensity measure $\Psi_0(\rd u, \sd t, \sd f) = u^{-2} \sd u \times \sd t \times \, \sP_F(\rd f)$ where $\sP_F(\rd f) = \sP_{\tilde F}(c \sd f)$. Hence, $\Phi_0$ is of the same type as $\Pi_0$ from the beginning of Section \[M3representation\] and $$\xi(t) = \max_{(y,s,f) \in \Phi_0} y f(\cdot - s), \quad t \in {\mathbb R}^d,$$ is a mixed moving maxima representation. The integrability condition follows from the fact that $\xi$ has standard Fréchet marginals. Thus, $$\int_{{\mathbb R}^d} \int_{C({\mathbb R}^d)} c f(t) \, \sP_{\tilde F}(\rd f) \sd t= 1,$$ which implies . In order to calculate $\sP_{\tilde F}$, let $A \in \mathcal B (C({\mathbb R}^d))$ and $K \in \mathcal{B}^d$ be compact. The first part of this Theorem implies that $$\Psi([1, \infty)\times K \times A) = c \cdot |K| \cdot \sP_{\tilde F}(A).$$ Therefore, $$\begin{aligned} \label{eq:qlim} \sP_{\tilde F}(A) = \frac{ \Psi([1, \infty) \times K \times A)}{ \Psi([1, \infty) \times K \times C({\mathbb R}^d))}, \end{aligned}$$ and both the enumerator and the denominator are finite. For the enumerator, we get $$\begin{aligned} \Psi(&[1, \infty) \times K \times A)\\ ={} & \int_0^\infty u^{-2} \int^{\infty}_{u^{-1}} \sP(W(\cdot + \tau) / \gamma \in A, \ \tau \in K \mid \gamma=y) \, \sP_\gamma(\rd y) \sd u\\ ={} & \int_0^\infty \int^{\infty}_{y^{-1}} u^{-2} \sd u \cdot \sP(W(\cdot + \tau) / y \in A, \ \tau \in K \mid \gamma=y) \, \sP_\gamma(\rd y)\\ ={} & \int_0^\infty y \sP(W(\cdot + \tau) / y \in A, \ \tau \in K \mid \gamma=y) \, \sP_\gamma(\rd y). \end{aligned}$$ Thus, by , $$\begin{aligned} \sP_{\tilde F}(A) = \frac{\int_0^\infty y \sP(W(\cdot + \tau) / y \in A, \ \tau \in K \mid \gamma=y) \, \sP_\gamma(\rd y)}{\int_0^\infty y \sP(\tau \in K \mid \gamma=y) \, \sP_\gamma(\rd y)}, \end{aligned}$$ which completes the proof. Outlook: Statistical applications {#sec:application} ================================= In univariate extreme value theory, a standard method for estimating the extreme value parameters fits all data exceeding a high threshold to a certain Poisson point process. This peaks-over-threshold approach has been generalized in [@roo2006] to the multivariate setting. Therein, generalized multivariate Pareto distributions are obtained as the max-limit of some multivariate random vector in the MDA of an extreme value distribution by conditioning on the event that at least one of the components is large. Conditioning on the same extremal events, the recent contribution [@fal2012] analyzes the asymptotic distribution of exceedance counts of stationary sequences.\ Here, we have suggested conditioning a stochastic process $\eta(t) : t\in T\}$ in the MDA of a max-stable process $\{\xi(t) : t\in T\}$ such that it converges to the incremental processes $W$ in or the shape functions $F$ in . In this section we provide several examples how these theoretical results can be used for statistical inference. The approach is based on a multivariate peaks-over-threshold method for max-stable processes, though the definition of extreme events differs from that in [@roo2006; @fal2012].\ In the sequel, suppose that $\eta_1,\dots,\eta_n,$ $n\in{\mathbb N},$ are independent observations of the random process $\eta$, already normalized to standard Pareto margins. 
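Such a standardization is commonly obtained by a rank transform based on the empirical distribution function at each site. The following sketch shows one standard variant, given only for illustration (the paper takes the standardization as given); `raw_observations` is a hypothetical array holding the $n$ observed realizations of the process at $k$ sites.

```python
import numpy as np

# Empirical standardization to (approximately) standard Pareto margins, site by
# site; a common pre-processing step, shown here only as an illustration.
# Ties are handled naively.

def to_standard_pareto(data):
    """data: array of shape (n, k), n observations at k sites."""
    n, k = data.shape
    out = np.empty((n, k), dtype=float)
    for j in range(k):
        ranks = np.argsort(np.argsort(data[:, j])) + 1   # ranks 1, ..., n
        ecdf = ranks / (n + 1.0)                          # empirical cdf, bounded away from 1
        out[:, j] = 1.0 / (1.0 - ecdf)                    # Pareto(1) quantile transform
    return out

# eta = to_standard_pareto(raw_observations)   # raw_observations: hypothetical (n, k) array
```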
Incremental representation {#incremental-representation} -------------------------- For a max-stable process $\xi$ that admits an incremental representation $$\begin{aligned} \label{def_xi_again} \xi(t) = \max_{i\in{\mathbb N}} U_i W_i(t), \quad t\in T,\end{aligned}$$ as in , the statistical merit of the convergence results in Theorem \[theo\_cond\_increments\_general\] and Theorem \[theo\_cond\_increments\_cont\] is the “deconvolution” of $U$ and $W$ which allows to substitute estimation of $\xi$ by estimation of the process $W$. As only the single *extreme* events converge to $W$, we define the index set of extremal observations as $$\begin{aligned} I_1(n) = \bigl\{ i\in \{1,\dots n\}: \ \eta_i(t_0) > a(n) \bigr\},\end{aligned}$$ for some fixed $t_0\in T$. The set $\{ \eta_i(\cdot) / \eta_i(t_0) : \ i \in I_1(n) \}$ then represents a collection of independent random variables that approximately follow the distribution of $W$. Thus, once the representation in is known, both parametric and non-parametric estimation for the process $W$ is feasible. For statistical inference it is necessary that the number of extremal observations $|I_1(n)|$ converges to $\infty$, as $n\to\infty$. This is achieved by choosing the sequence of thresholds $a(n)$ according to Remark \[rem\_thres\]. The dependence parameter $q\geq 1$ of the symmetric logistic distribution can be estimated by perceiving the conditional increments of $\eta$ in the MDA as realizations of $W$ and maximizing the likelihood $$\begin{aligned} &\sP\big(W(t_1) \in \rd s_1, \dots W(t_k) \in \rd s_k \,\big|\, q\big) \\ &= \left(1+\textstyle\sum_{i=1}^k s_i^{-q}\right)^{1/q -(k+1)} \left(\textstyle\prod_{i=1}^k(iq-1)\right) \textstyle\prod_{i=0}^k s_i^{-q-1} \ \rd s_1\dots \rd s_k.\end{aligned}$$ Recall that the Brown-Resnick processes in Example \[BRproc\] admit a representation with log-Gaussian incremental process $W(t) = \exp\left\{Y(t) - \sigma^2(t) / 2\right\}, t\in{\mathbb R}^d$. Hence, standard estimation procedures for Gaussian vectors or processes can be applied for statistical inference. [@eng2012a] explicitly construct several new estimators of the variogram $\gamma$ based on the incremental representation, which also covers Hüsler-Reiss distributions, and they provide some basic performance analyses. Mixed moving maxima representation {#mixed-moving-maxima-representation} ---------------------------------- Similarly, in case of the mixed moving maxima representation $$\begin{aligned} M(t) = \max_{i \in {\mathbb N}} U_i F_i(t - S_i), \quad t \in {\mathbb R}^d,\end{aligned}$$ the convergence results of Theorem \[thm\_conv\_mmm\] can be used to estimate $F$ (or $F_1 = F / \lambda$) on some compact domain $K$ instead of estimating $M$ directly. Here, the index set $T$ of the observed processes $\{\eta_i(t): \ t \in T\}$, $i=1,\ldots, n$, can be identified with ${Q}\oplus L$ from Theorem \[thm\_conv\_mmm\]. The set $L$ should be sufficiently large such that it is reasonable to assume that the components $\{U_i F_i(\cdot - S_i): \ S_i \notin {Q}\oplus L\}$ hardly affect the process $M$ on ${Q}\oplus K$ (that is, $\mu(C_B \cap A_L) / \mu(A_L) \approx \sP(F(\cdot) - \lambda \in B)$ in the proof of Theorem \[thm\_conv\_mmm\]). 
At the same time, a large set ${Q}$ leads to a rich set of usable observations $ \widetilde F_1^{(i)} = \eta_i(\tau_{Q}(\eta_i)+\cdot) / \eta_i(\tau_{Q}(\eta_i)), \ i \in I_2(n),$ where $$I_2(n) = \left\{ i \in \{1,\ldots,n\}: \ \max_{t\in {Q}}\eta_i(t) = \max_{t \in {Q}\oplus L} \eta_i(t) \geq a(n)\right\}.$$ The resulting processes $\widetilde F_1^{(i)},\ i \in I_2(n),$ can be interpreted as independent samples from an approximation to $F_1$. This approach can be expected to be particularly promising in case of F having a simple distribution or even being deterministic. Some examples of mixed moving maxima processes have already been analyzed for statistical inference by [@deh2006] who use normal, exponential and t densities as shape functions. More precisely, they consider M3 models with $$\begin{aligned} F_1(t) &= \exp\left\{- \frac{\beta^2 t^2} 2\right\}, \quad &\lambda {}={}& \frac{\beta}{\sqrt{2\pi}}, \label{eq:mmm-ex-1}\\ F_1(t) &= \exp\left\{- \beta |t|\right\}, \quad &\lambda {}={}& \frac{\beta}{2}, \label{eq:mmm-ex-2}\\ \text{and} \quad F_1(t) &= \left( 1 + \frac{\beta^2 t^2}{\nu} \right)^{- \frac{\nu+1} 2}, \quad &\lambda {}={}& \frac{\beta \Gamma\left( \frac{\nu+1} 2\right)}{\sqrt{\pi\nu} \Gamma\left(\frac \nu 2\right)}\, \quad \nu > 0, \label{eq:mmm-ex-3}\end{aligned}$$ all parametrized by $\beta > 0$. [@deh2006] introduce consistent and asymptotically normal estimators based on the interpretation of $\beta$ as a dependence parameter. From the samples $\widetilde F_1^{(i)}$, $i \in I_2(n)$, we get a new estimator $$\widehat F_1 = \frac 1 {|I_2(n)|} \sum_{i \in I_2} \widetilde F_1^{(i)}$$ for $F_1$. Applying this estimator, $\beta$ can be estimated by a least squares fit of – to $\widehat F_1$ at some locations $t_1, \ldots t_m \in K$. Note that in case of the normal model and the exponential model , the logarithm of the shape function $F_1$ depends linearly on $\beta^2$ and $\beta$, respectively, and $\log \widehat F_1$ can be fitted by ordinary least squares. The mixed moving maxima representation can also be employed for estimation of Brown-Resnick processes although the distribution of $F$ is much more sophisticated than the one of $W$ in the incremental representation (cf. [@eng2011; @oes2012]). A relation between the shape function $F$ and the variogram $\gamma$ of the Brown-Resnick process can be obtained via the *extremal coefficient function* $\theta(\cdot)$. For a stationary, max-stable process $\xi$ with identically distributed marginals, [@sch2003] defined the extremal coefficient function $\theta$ via the relation $$\begin{aligned} \sP(\xi(0)\leq u,\, \xi(h) \leq u) = \sP(\xi(0) \leq u)^{\theta(h)}, \quad h \in {\mathbb R}^d.\end{aligned}$$ For mixed moving maxima processes, we have $$\begin{aligned} \label{eq:theta-mmm} \theta(h) =\sE\left. \int_{{\mathbb R}^d} \{ F(t) \vee F(t+h)\} \sd t\right. =\frac{\sE\left. \int_{{\mathbb R}^d} \{ F_1(t) \vee F_1(t+h)\} \sd t\right.}{ \sE\left.\int_{{\mathbb R}^d} F_1(t) \sd t\right.}\end{aligned}$$ and, at the same time, for Brown-Resnick processes [@kab2009], $$\begin{aligned} \label{eq:theta-br} \theta(h) = 2\Phi\left(\sqrt{\gamma(h)}/ 2\right),\end{aligned}$$ where $\Phi$ is the standard Gaussian distribution function. 
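As a quick numerical sanity check of this identification, the sketch below compares the two expressions for $\theta(h)$ in the simplest case of a Smith model in $d=1$ (deterministic shape equal to the standard normal density, so that the associated Brown-Resnick variogram is $\gamma(h)=h^2$); the integration grid is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

# Numerical comparison of the two extremal coefficient formulas for the Smith
# model in d = 1 (deterministic shape F = standard normal density, variogram
# gamma(h) = h^2).  The integration grid is an ad hoc choice.

t = np.linspace(-40.0, 40.0, 40001)
for h in (0.5, 1.0, 2.0):
    theta_mmm = trapezoid(np.maximum(norm.pdf(t), norm.pdf(t + h)), t)  # mixed moving maxima formula
    theta_br = 2.0 * norm.cdf(abs(h) / 2.0)                             # Brown-Resnick formula
    print(h, round(theta_mmm, 4), round(theta_br, 4))  # agree up to discretization error
```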
Identifying with and plugging in the samples $\widetilde F_1^{(i)}$, $i \in I_2(n)$, we get the variogram estimator $$\begin{aligned} \widehat \gamma(h) = \left\{2 \Phi^{-1}\left( \frac{\displaystyle \sum_{i \in I_2(n)} \int_{\widetilde K} \widetilde F_1^{(i)}(t) \vee \widetilde F_1^{(i)}(t+h) \sd t}{2 \displaystyle \sum_{i \in I_2(n)} \int_{\widetilde K} \widetilde F_1^{(i)}(t) \sd t} \right)\right\}^2,\end{aligned}$$ where $\widetilde K$ is a large set such that $\widetilde K, \widetilde K +h \subset K$. Auxiliary Results for the Proof of Theorem \[thm\_conv\_mmm\] ============================================================= \[lem:AL\] $A_L$ is closed. The set $A_L^*$ is in the interior of $A_L$. The first assertion is obvious. For the second one, let $f^* \in A_L^*$. Then, we have $f^*(\tau_{Q}(f^*|_{Q})) =: \alpha > 1$. Furthermore, there is $\delta > 0$ such that $B_\delta(\tau_{Q}(f^*|_{Q})) = \{t \in {\mathbb R}^d: \ ||t - \tau_{Q}(f|_{Q})|| < \delta \} \in {Q}^o$ and we have $$\beta := \sup_{t \in {Q}\oplus L \setminus B_\delta(\tau_{Q}(f^*|_{Q}))} f^*(t) - \max_{t\in {Q}}f^*(t) < 0.$$ Now, we choose $\varepsilon < \min \{ \frac {\alpha-1} 2, \frac {|\beta|} 2\}$ and show that $B_\varepsilon(f^*) = \{ f \in C({Q}\oplus L):\ ||f - f^*||_\infty < \varepsilon\} \subset A_L$. This holds, as for any $f \in B_\varepsilon(f^*)$, we have $$\begin{aligned} & f(\tau_{Q}(f|_{Q})) \geq f(\tau_{Q}(f^*|_{Q})) > \alpha - \varepsilon > \frac {1+\alpha }2 > 1\\ \text{and} \quad & \max_{t\in {Q}}f(t) \leq \max_{t \in {Q}\oplus L} f(t) = \max\left\{\max_{t \in {Q}\oplus L \setminus {Q}^o} f(t), \ \max_{t\in {Q}}f(t)\right\}\\ & {}\leq{} \max\left\{ \beta + \alpha + \varepsilon, \ \max_{t\in {Q}}f(t)\right\} {}\leq{} \max\left\{ \alpha - \varepsilon, \ \max_{t\in {Q}}f(t)\right\} {}={} \max_{t\in {Q}}f(t), \end{aligned}$$ which means equality. \[lem:interCB\] The set $C_B^*$ is in the interior of $C_B$. Let $f^* \in C_B^*$. Then, $t^* = \operatorname*{arg\,max}_{t \in {Q}} f^*(t)$ is well-defined and necessarily, as $f \geq 0$, $$\label{eq:alpha} \alpha := f^*(t^*) \in (0,||f^*||_\infty].$$ Since $f^*(t^* + \cdot) / f^*(t^*) \in B^o$, there is some $\varepsilon > 0$ such that $$\label{eq:ball} \left\{f \in C(K): \ \left|\left|f^*(t^* + \cdot) \Big/ f^*(t^*) - f\right|\right|_\infty < \varepsilon\right\} \subset B.$$ Furthermore, $f^*$ is uniformly continuous on the compact set ${Q}\oplus L$, i.e. there exists some $\delta > 0$ such that $$\label{eq:unifcont} \sup_{s,t \in {Q}\oplus L, \ ||s-t|| < \delta} |f^*(s) - f^*(t)| < \frac \varepsilon 3 \alpha.$$ Then, as $\operatorname*{arg\,max}_{t \in {Q}} f^*(t)$ is unique, we have that $$\label{eq:beta} \beta := \max_{t \in {Q}\setminus \{t \in {\mathbb R}^d: \ ||t-t^*|| < \delta\}} f^*(t) - f^*(t^*) \in [-\alpha,0).$$ Choose $\varepsilon^* < \min\left\{\frac {|\beta|} {2\alpha}, \frac \varepsilon 6 \frac \alpha {||f^*||_\infty}\right\}$. We will show that $B_{\varepsilon^*\alpha}(f^*) =\{f \in C({Q}\oplus L): \ ||f-f^*||_\infty < \varepsilon^*\alpha\} \subset C_B$. To this end, let $f_0 \in B_{\varepsilon^*\alpha}(f^*)$. Then, because of Equation and $\varepsilon^*\alpha < \frac {|\beta|} 2$, we have that $||t_0 - t^*|| \leq \delta$ for $t_0 = \tau_{Q}(f_0|_{Q})$. 
Therefore, $$\begin{aligned} & \sup_{t \in K} \left| \frac{f^*(t^*+ \cdot)}{f^*(t^*)} - \frac{f_0(t_0 + \cdot)}{f_0(t_0)} \right| \notag\\ \leq{} & \sup_{t \in K} \left| \frac{f^*(t^*+\cdot)}{f^*(t^*)} - \frac{f^*(t_0+\cdot)}{f^*(t^*)}\right| + \sup_{t \in K} \left| \frac{f^*(t_0+\cdot)}{f^*(t^*)} - \frac{f^*(t_0+\cdot)}{f_0(t_0)}\right| \notag\\ & + \sup_{t \in K} \left| \frac{f^*(t_0+\cdot)}{f_0(t_0)} - \frac{f_0(t_0+\cdot)}{f_0(t_0)}\right| \qquad \leq \qquad \frac \varepsilon 3 + \frac{\varepsilon^*}{1-\varepsilon^*} \frac{||f^*||_\infty}{\alpha} + \frac{\varepsilon^*}{1-\varepsilon^*}, \label{eq:dist} \end{aligned}$$ where we used Equation and the fact that $f_0 \in B_{\varepsilon^*\alpha}(f^*)$. Equations and and the choice of $\varepsilon^*$ yield that $\varepsilon^*/(1-\varepsilon^*) \leq (\varepsilon^* ||f^*||_\infty)/((1-\varepsilon^*)\alpha) \leq 2\varepsilon^*||f^*||_\infty / \alpha < \varepsilon/3$, i.e. each summand on the right-hand side of is smaller than $\varepsilon/3$. Thus, $f_0(t_0 + \cdot) / f_0(t_0) \in \{f \in C(K): \ ||f^*(t^* + \cdot) / f^*(t^*) - f||_\infty < \varepsilon\} \subset B$ by Equation and $f_0 \in C_B$. \[lem:CB\] The closure of $C_B$ is a subset of $$\begin{aligned} B^* \cup{} & \left\{f \in C({Q}\oplus L): \, \operatorname*{arg\,max}_{t \in {Q}} f(t) \text{ is not unique} \right\}\\ \cup{} & \left\{f \in C({Q}\oplus L): \, f(\tau_{Q}(f|_{Q}) + \cdot) \Big/ f(\tau_{Q}(f|_{Q})) \in \partial B \right\}. \end{aligned}$$ Let $\{f_n\} \subset C_B$ be a sequence converging uniformly to some $f^* \in C({Q}\oplus L)$. We have to verify that $f^*(\tau_{Q}(f^*|_{Q}) + \cdot) / f^*(\tau_{Q}(f^*|_{Q})) \in B \cup \partial B$ if $\operatorname*{arg\,max}_{t \in {Q}} f^*(t)$ is unique. Analogously to the proof of Lemma \[lem:interCB\] we can show that for any $\varepsilon_2 > 0$ there is some $\varepsilon_1 > 0$ such that $$\begin{aligned} & ||f-f^*||_{\infty,{Q}\oplus L} < \varepsilon_1\\ {}\Longrightarrow{} \quad & \left|\left|\frac{f(\tau_{Q}(f|_{Q})+\cdot)}{f(\tau_{Q}(f|_{Q}))} - \frac{f^*(\tau_{Q}(f^*|_{Q})+\cdot)}{f^*(\tau_{Q}(f^*|_{Q}))}\right|\right|_{\infty,K} < \varepsilon_2. \end{aligned}$$ Thus, $f_n(\tau_{Q}(f_n|_{Q}) + \cdot)/f_n(\tau_{Q}(f_n|_{Q}))$ converges to $f^*(\tau_{Q}(f^*|_{Q}) + \cdot)/f^*(\tau_{Q}(f^*|_{Q}))$ in $C(K)$. Hence, as $B \cup \partial B$ is closed, $f^*(\tau_{Q}(f^*|_{Q}) + \cdot)/f^*(\tau_{Q}(f^*|_{Q})) \in B \cup \partial B$. Proof of Example \[ex:mmm-mda\] {#sec:proof_ex_mmm-mda} =============================== Let $\tilde M_j$, $j\in{\mathbb N}$, be independent copies of the process $\tilde M$ and consider $$M_n(\cdot) = \frac{1}{cn} \max_{i=1}^n \tilde M_i(\cdot).$$ Further, suppose that $L \subset {\mathbb R}^d$ is an arbitrary compact set. Note that by Remark \[weak\_conv\_Rd\] it suffices to show weak convergence of $M_n \cvgdist M$, $n\to\infty$, on $C(L)$. 
To prove the first assertion, note that, for $t_0, \ldots, t_k\in{\mathbb R}^d$, $s_0, \ldots, s_k\geq 0$, $k\in{\mathbb N}$, we have $$\begin{aligned} &\sP(M_n(t_0)\leq s_0, \ldots, M_n(t_k)\leq s_k) \notag\\ &= \left[{\mathbf 1}_{\kappa \leq \min_{l=0}^k cns_l }\cdot \sP\left\{ \Pi_{c, \epsilon} \left(\left\{(u,t,f): \max_{l=0}^k u f(t_l-t)/(cns_l) > 1\right\}\right) = 0\right\}\right]^n \notag\\ &= {\mathbf 1}_{\kappa \leq \min_{l=0}^k cns_l }\cdot \exp\left(- n\int_{C({\mathbb R}^d)} \int_{{\mathbb R}^d} \min\left\{\frac{1}{\epsilon}, \max_{l=0}^k \frac{f(t_l-t)}{cns_l}\right\}\, c\sd t \, \sP_F(\rd f) \right)\notag\\ &\longrightarrow \exp\left(- \int_{C({\mathbb R}^d)} \int_{{\mathbb R}^d} \max_{l=0}^k (f(t_l-t)/s_l) \sd t \, \sP_F(\rd f) \right), \label{proofMDAM3}\end{aligned}$$ as $n\to\infty$, where the convergence holds due to monotone convergence. The right-hand side of coincides with the marginal distribution of $M$ (cf. ). For convergence of $M_n$ to $M$ in the sense of weak convergence in $C^+(L)$ endowed with the topology of uniform convergence, it remains to show that the sequence of restricted processes $\{M_n|_L : n\in {\mathbb N}\}$ is tight. To this end, by [@bil1999 Thm. 7.3], it suffices to verify that for any $\varepsilon > 0$, $\eta \in (0,1)$, there exist $\delta > 0$, $n_0 \in {\mathbb N}$ such that $$\sP\left\{ \sup_{||s-t|| < \delta} |M_n(s) - M_n(t)| \geq \varepsilon\right\} \leq \eta, \qquad n \geq n_0.$$ By Equation , we can choose $R > 0$ such that $$\label{eq:smallintegral} \int_{{\mathbb R}^d \setminus (L \oplus B_R({\mathbf 0}))} \sE\left( \sup_{t \in L} F(t-s) \right) \sd s < \frac {\varepsilon \eta} 2,$$ where $B_R({\mathbf 0}) = \{ x \in {\mathbb R}^d: \ ||x|| \leq R\}$. Furthermore, implies that $\sE\left( \sup_{t \in K} F(s)\right) < \infty$ for any compact set $K \subset {\mathbb R}^d$. 
Therefore, as each realization of $F$ is uniformly continuous on $B_{R+d(L)}({\mathbf 0})$, where $d(L) = \sup_{s_1,s_2 \in L} ||s_1-s_2||$ denotes the diameter of $L$, dominated convergence yields $$\lim_{\delta \searrow 0} \sE\left( \sup_{s,t \in B_{R+d(L)}({\mathbf 0}), \ ||s-t|| < \delta} |F(s) - F(t)|\right) = 0.$$ In particular, we can choose $\delta > 0$ such that $$\label{eq:delta} \sE\left( \sup_{s_1,s_2 \in B_{R+d(L)}({\mathbf 0}), \ ||s_1-s_2|| < \delta} |F(s_1) - F(s_2)|\right) < \frac {\varepsilon \eta}{2 |L \oplus B_R({\mathbf 0})|}.$$ Then, we get $$\begin{aligned} & \sP\left\{ \sup_{||s_1-s_2|| < \delta, \ s_1,s_2 \in L} | M_n(s_1) - M_n(s_2) | \geq \varepsilon \right\} \\ \leq{} & n \sP\left\{ \sup_{||s_1-s_2|| < \delta, \ s_1,s_2 \in L} |\tilde M_n(s_1) - \tilde M_n(s_2) | \geq c n\varepsilon \right\} \\ \leq{}& n \bigg( \sP\bigg\{ \Pi\bigg(\bigg\{ (u,t,f): \ t \in L \oplus B_R({\mathbf 0}), \\ & \hspace{3cm} \sup_{\substack{s_1,s_2 \in B_{R+d(L)}({\mathbf 0}),\\ ||s_1-s_2|| < \delta} } |f(s_1)-f(s_2)| > \frac{cn\varepsilon}{u} \bigg\}\bigg) > 0\bigg\}\\ & + \sP\bigg\{ \Pi\bigg(\bigg\{ (u,t,f): \ t \in {\mathbb R}^d \setminus (L \oplus B_R({\mathbf 0})), \sup_{s \in L} |f(s-t)| > \frac{cn\varepsilon}{u} \bigg\} \bigg) > 0\bigg\}\bigg) \displaybreak[0]\\ \leq{} & n\bigg( 1 - \exp\bigg\{ - \int_{L \oplus B_R({\mathbf 0})} \int_\epsilon^\infty u^{-2} \\ & \hspace{3.2cm} \cdot \sP\bigg( \sup_{\substack{s_1,s_2 \in B_{R+d(L)}({\mathbf 0}),\\ ||s_1-s_2|| < \delta} } |F(s_1)-F(s_2)| > \frac{c n\varepsilon}{u}\bigg) \sd u \, c \sd t \bigg\}\\ & + 1- \exp\bigg\{ - \int_{{\mathbb R}^d \setminus (L \oplus B_R({\mathbf 0}))} \int_\epsilon^\infty u^{-2} \sP\bigg( \sup_{s \in L} |F(s-t)| > \frac{c n\varepsilon}{u}\bigg) \sd u \, c \sd t \bigg\} \bigg) \displaybreak[0]\\ \leq{} & n\bigg( 1 - \exp\bigg( - \frac{|L \oplus B_R({\mathbf 0})|}{n \varepsilon} \sE\bigg\{ \sup_{\substack{s_1,s_2 \in B_{R+d(L)}({\mathbf 0}),\\ ||s_1-s_2|| < \delta} } |F(s_1)-F(s_2)| \bigg\} \bigg)\\ & + 1 - \exp\bigg(- \frac 1 {n\varepsilon} \int_{{\mathbb R}\setminus (L \oplus B_R({\mathbf 0}))} \sE\left\{ \sup_{s \in L} |F(s-t)| \right\} \sd t \bigg) \bigg)\\ \leq{} & n\left( 1 - \exp\left(- \frac \eta {2n}\right) + 1 - \exp\left(- \frac \eta {2n}\right)\right) \quad \leq \quad \eta, \end{aligned}$$ where we used Equation and . Thus, the sequence of processes $\{M_n|_L : n\in {\mathbb N}\}$ is tight. The authors are grateful to Zakhar Kabluchko for useful suggestions and hints. S. Engelke has been financially supported by Deutsche Telekom Stiftung. A. Malinowski has been financially supported the German Science Foundation (DFG), Research Training Group 1644 ‘Scaling problems in Statistics’. M. Oesting and M. Schlather have been financially supported by Volkswagen Stiftung within the project ‘WEX-MOP’.
1
--- abstract: 'We derive the mean squared error convergence rates of kernel density-based plug-in estimators of mutual information measures between two multidimensional random variables $\mathbf{X}$ and $\mathbf{Y}$ for two cases: 1) $\X$ and $\Y$ are both continuous; 2) $\X$ is continuous and $\Y$ is discrete. Using the derived rates, we propose an ensemble estimator of these information measures for the second case by taking a weighted sum of the plug-in estimators with varied bandwidths. The resulting ensemble estimator achieves the $1/N$ parametric convergence rate when the conditional densities of the continuous variables are sufficiently smooth. To the best of our knowledge, this is the first nonparametric mutual information estimator known to achieve the parametric convergence rate for this case, which frequently arises in applications (e.g. variable selection in classification). The estimator is simple to implement as it uses the solution to an offline convex optimization problem and simple plug-in estimators. A central limit theorem is also derived for the ensemble estimator. Ensemble estimators that achieve the parametric rate are also derived for the first case ($\X$ and $\Y$ are both continuous) and another case 3) $\X$ and $\Y$ may have any mixture of discrete and continuous components.' author: - 'Kevin R. Moon[^1]' - 'Kumar Sricharan[^2]' - 'Alfred O. Hero III[^3]' bibliography: - 'References.bib' title: Ensemble Estimation of Mutual Information --- Introduction {#sec:intro} ============ Mutual information (MI) estimation has many applications in machine learning. MI has been used in fMRI data processing [@chai2009fMRI], structure learning [@structure2016], independent subspace analysis [@pal2010estimation], forest density estimation [@liu2012exponential], clustering [@lewi2006real], neuron classification [@schneidman2003information], and intrinsically motivated reinforcement learning [@mohamed2015variational; @salge2014changing]. Another particularly common application is feature selection or extraction, where features are chosen to maximize the MI between the chosen features $\mathbf{X}$ and the outcome variables $\mathbf{Y}$ [@torkkola2003feature; @vergara2014review; @peng2005feature; @kwak2002input]. In many of these applications, the predictor labels have discrete components (e.g. classification labels) while the input variables have continuous components. To the best of our knowledge, there are currently no nonparametric MI estimators that are known to achieve the parametric mean squared error (MSE) convergence rate $1/N$ when $\X$ and/or $\Y$ contain discrete components. Also, while many nonparametric estimators of MI exist, most can only be applied to specific information measures (e.g. Shannon or Rényi information). In this paper, we provide a framework for nonparametric estimation of a large class of MI measures where we only have available a finite population of i.i.d. samples.
We separately consider three cases: 1) $\X$ and $\Y$ are both continuous; 2) $\X$ is continuous and $\Y$ is discrete; 3) $\X$ and $\Y$ may have any mixture of discrete and continuous components. We focus primarily on the second case, which includes the problem of feature selection in classification. We derive an MI estimator for this case that achieves the parametric MSE rate when the conditional densities of the continuous variables are sufficiently smooth. We also show how these estimators are extended to the first and third cases. Our estimation method applies to other MI measures in addition to Shannon information, which have been the focus of much interest. The authors of [@torkkola2003feature] defined an information measure based on a quadratic divergence that could be estimated more efficiently than Shannon information. An MI measure based on the Pearson divergence was considered in [@sugiyama2012machine] for computational efficiency and numerical stability. The authors of [@costa2004geodesic] and [@pal2010estimation] used minimal spanning tree and generalized nearest-neighbor graph approaches, respectively, to estimate Rényi information. Related Work ------------ Many estimators for Shannon MI between continuous random variables have been developed. A popular $k$-nn-based estimator was proposed in [@kraskov2004estimating], which is a modification of the entropy estimator derived in [@kozachenko1987sample]. However, these estimators only achieve the parametric convergence rate when the dimension of each of the random variables is less than 3 [@gao2016demystifying]. Similarly, the Rényi information estimator in [@pal2010estimation] does not achieve the parametric rate. Some other estimators are based on maximum likelihood estimation of the likelihood ratio [@suzuki2008approximating] and minimal spanning trees [@khan2007relative]. Recent work has focused on nonparametric divergence estimation for purely continuous random variables. One approach [@krishnamurthy2014divergence; @kandasamy2015nonparametric; @singh2014exponential; @singh2014renyi] uses an optimal kernel density estimator (KDE) to achieve the parametric convergence rate when the densities are at least $d$ [@singh2014exponential; @singh2014renyi] or $d/2$ [@krishnamurthy2014divergence; @kandasamy2015nonparametric] times differentiable, where $d$ is the dimension of the data. These optimal KDEs require knowledge of the density support boundary and are difficult to construct near the boundary. Numerical integration may also be required for estimating some divergence functionals under this approach, which can be computationally expensive. In contrast, our approach to MI estimation does not require numerical integration and can be performed without knowledge of the support boundary. More closely related work [@sricharan2013ensemble; @moon2014isit; @moon2014nips; @moon2016arxiv; @moon2016isit] uses an ensemble approach to estimate entropy or divergence functionals. These works construct an ensemble of simple plug-in estimators by varying the neighborhood size of the density estimators. They then take a weighted average of the estimators where the weights are chosen to decrease the bias with only a small increase in the variance. The parametric rate of convergence is achieved when the densities are either $d$ [@sricharan2013ensemble; @moon2014isit; @moon2014nips] or $(d+1)/2$ [@moon2016arxiv; @moon2016isit] times differentiable.
These approaches are simple to implement as they only require simple plug-in estimates and the solution of an offline convex optimization problem. These estimators have also performed well in various applications [@szabo2012distributed; @gliske2015intrinsic; @moon2015Bayes; @moon2015partI; @moon2015partII] Finally, the authors of [@gao2015efficient] showed that $k$-nn or KDE based approaches underestimate the MI when the MI is large. As MI increases, the dependencies between random variables increase which results in less smooth densities. Thus a common approach to overcome this issue is to require the densities to be smooth [@krishnamurthy2014divergence; @kandasamy2015nonparametric; @singh2014exponential; @singh2014renyi; @sricharan2013ensemble; @moon2014isit; @moon2014nips; @moon2016arxiv; @moon2016isit]. Contributions ------------- In the context of this related work, we make the following novel contributions in this paper: (1) For continuous random variables (case 1), we extend the asymptotic bias and variance results for divergence estimators [@moon2016isit; @moon2016arxiv] to kernel density plug-in MI estimators without boundary correction [@karunamuni2005boundary] by incorporating machinery to handle the dependence between the product of marginal density estimators (Section \[sec:MI\_est\]), (2) we extend the theory to handle discrete random variables in the mixed cases (cases 2 and 3) by reformulating the densities as a mixture of the conditional density of the continuous variables given the discrete variables (Section \[sec:mixed\]), and (3) we leverage this theory for the mixed cases in conjunction with the generalized theory of ensemble estimators [@moon2016arxiv; @moon2016isit] to derive, to the best of our knowledge, the first non-parametric estimator that achieves a parametric rate of MSE convergence of $O\left(1/N\right)$ for the mixed cases (Section \[sec:mixed\_ensemble\]), where $N$ is the number of samples available from each distribution. We also derive a central limit theorem for the ensemble estimators (Section \[subsec:clt\]). We verify the theory through experiments (Section \[sec:experiments\]). Continuous Random Variables {#sec:MI_est} =========================== In this section, we obtain MSE convergence rates of plug-in MI estimators when $\X$ and $\Y$ are continuous (case 1 in Section \[sec:intro\]). This will enable us to derive the MSE convergence rates of plug-in MI estimators when $\X$ is continuous and $\Y$ is discrete and when $\X$ and $\Y$ may have any mixture of continuous and discrete components (respectively, cases 2 and 3 in Section \[sec:intro\]). These rates can then be used to derive ensemble estimators that achieve the parametric MSE rate. Let $f_{X}(x)$, $f_{Y}(y)$, and $f_{XY}(x,y)$ be $d_{X}$, $d_{Y}$, and $d_{X}+d_{Y}=d$-dimensional densities. Let $g(t_{1},t_{2})=g\left(\frac{t_{1}}{t_{2}}\right)$ (e.g. $g(t_{1},t_{2})=\log(t_{1}/t_{2})$ for Shannon information). We define a family of MIs as $$G_{1}(\mathbf{X};\mathbf{Y})=\int g\left(\frac{f_{X}(x)f_{Y}(y)}{f_{XY}(x,y)}\right)f_{XY}(x,y)dxdy.\label{eq:MI}$$ The KDE Plug-in Estimator ------------------------- When both $\mathbf{X}$ and $\mathbf{Y}$ are continuous with marginal densities $f_{X}$ and $f_{Y}$, the MI functional $G_{1}(\mathbf{X};\mathbf{Y})$ can be estimated using KDEs. Assume that $N$ i.i.d. samples $\left\{ \mathbf{Z}_{1},\dots,\mathbf{Z}_{N}\right\} $ are available from the joint density $f_{XY}$ with $\mathbf{Z}_{i}=\left(\mathbf{X}_{i},\mathbf{Y}_{i}\right)^{T}$. 
Let $M=N-1$ and let $h_{X}$, $h_{Y}$ be kernel bandwidths. Let $K_{X}(\cdot)$ and $K_{Y}(\cdot)$ be kernel functions with $||K_{X}||_{\infty},\,||K_{Y}||_{\infty}<\infty$ where $||K||_{\infty}=\sup_{x}|K(x)|$. The KDE for $f_{X}$ is $$\begin{aligned} \ft X(\mathbf{X}_{j}) & = & \frac{1}{Mh_{X}^{d_{X}}}\sum_{\substack{i=1\\ i\neq j } }^{N}K_{X}\left(\frac{\mathbf{X}_{j}-\mathbf{X}_{i}}{h_{X}}\right).\label{eq:fx}\end{aligned}$$ The KDEs $\ft Y(\Y_{j})$ and $\ft Z(\mathbf{X}_{j},\mathbf{Y}_{j})$ (where $h_{Z}=(h_{X},h_{Y})$) for estimating $f_{Y}$ and $f_{XY}$, respectively, are defined similarly using $K_{Y}$ and the product kernel $K_{X}\cdot K_{Y}$. Then $G_{1}(\mathbf{X};\mathbf{Y})$ is estimated as $$\gt=\frac{1}{N}\sum_{i=1}^{N}g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right).\label{eq:Gest}$$ Convergence Rates ----------------- To derive the convergence rates of $\gt$ we assume that 1) $f_{X}$, $f_{Y}$, $f_{XY}$, and $g$ are smooth; 2) $f_{X}$ and $f_{Y}$ have bounded support sets $\mathcal{S}_{X}$ and $\mathcal{S}_{Y}$; 3) $f_{X}$, $f_{Y}$, and $f_{XY}$ are strictly lower bounded on their support sets. More specifically, we assume that the densities belong to the bounded Hölder class $\Sigma(s,H)$ (the precise definition is included in the appendices) which implies that the densities are $r=\left\lfloor s\right\rfloor $ times differentiable. These assumptions are comparable to those in similar studies on asymptotic convergence analysis [@moon2016isit; @moon2014nips; @moon2014isit; @singh2014renyi; @singh2014exponential; @sricharan2013ensemble; @krishnamurthy2014divergence; @kandasamy2015nonparametric]. To derive the convergence rates without boundary corrections, we also assume that 4) the boundary of the support set is smooth with respect to the corresponding kernels. The full assumptions are - $(\mathcal{A}.0)$: The kernels $K_{X}$ and $K_{Y}$ are symmetric product kernels with bounded support. - $(\mathcal{A}.1)$: There exist constants $\epsilon_{0},\epsilon_{\infty}$ such that $0<\epsilon_{0}\leq f_{X}(x)\leq\epsilon_{\infty}<\infty,\,\forall x\in\mathcal{S}_{X}$, $\epsilon_{0}\leq f_{Y}(y)\leq\epsilon_{\infty},\,\forall y\in\mathcal{S}_{Y}$, and $\epsilon_{0}\leq f_{XY}(x,y)\leq\epsilon_{\infty},\,\forall(x,y)\in\mathcal{S}_{X}\times\mathcal{S}_{Y}$. - $(\mathcal{A}.2)$: Each of the densities belong to $\Sigma(s,H)$ in the interior of their support sets with $s\geq2$. - $(\mathcal{A}.3)$: $g\left(t_{1}/t_{2}\right)$ has an infinite number of mixed derivatives wrt $t_{1}$ and $t_{2}$. - $(\mathcal{A}.4$): $\left|\frac{\partial^{k+l}g(t_{1},t_{2})}{\partial t_{1}^{k}\partial t_{2}^{l}}\right|/(k!l!)$, $k,l=0,1,\ldots$ are strictly upper bounded for $\epsilon_{0}\leq t_{1},t_{2}\leq\epsilon_{\infty}$. - $(\mathcal{A}.5)$: Let $K$ be either $K_{X}$ or $K_{Y}$, $\mathcal{S}$ either $\mathcal{S}_{X}$ or $\mathcal{S}_{Y}$, $h$ either $h_{X}$ or $h_{Y}$. Let $p_{x}(u):\mathbb{R}^{d}\rightarrow\mathbb{R}$ be a polynomial in $u$ of order $q\leq r=\left\lfloor s\right\rfloor $ whose coefficients are a function of $x$ and are $r-q$ times differentiable. For any positive integer $t$ $$\int_{x\in\mathcal{S}}\left(\int_{u:K(u)>0,\,x+uh\notin\mathcal{S}}K(u)p_{x}(u)du\right)^{t}dx=v_{t}(h),$$ where $v_{t}(h)$ admits the expansion $$v_{t}(h)=\sum_{i=1}^{r-q}e_{i,q,t}h^{i}+o\left(h^{r-q}\right),$$ for some constants $e_{i,q,t}$. 
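To make the preceding definitions concrete, the following is a minimal numerical sketch of the plug-in estimator (\[eq:Gest\]) for case 1 using a uniform product kernel, which satisfies $(\mathcal{A}.0)$. It is an illustration rather than the implementation used in the experiments; the names `box_kernel`, `loo_kde`, and `mi_plugin` are ours, Shannon information ($g(t)=\log t$) is used as the default, and the bandwidths are assumed large enough that every leave-one-out density estimate is strictly positive.

```python
import numpy as np


def box_kernel(u):
    """Symmetric uniform product kernel: 1 on [-1/2, 1/2]^d, 0 elsewhere."""
    return np.all(np.abs(u) <= 0.5, axis=-1).astype(float)


def loo_kde(data, h):
    """Leave-one-out KDE as in (eq:fx), evaluated at every sample point."""
    n, d = data.shape
    k = box_kernel((data[:, None, :] - data[None, :, :]) / h)
    np.fill_diagonal(k, 0.0)          # exclude the point itself (M = N - 1 terms)
    return k.sum(axis=1) / ((n - 1) * h ** d)


def mi_plugin(X, Y, h_X, h_Y, g=np.log):
    """Plug-in estimate (eq:Gest): average of g(f_X * f_Y / f_XY) at the samples."""
    n, d_X = X.shape
    d_Y = Y.shape[1]
    f_X = loo_kde(X, h_X)
    f_Y = loo_kde(Y, h_Y)
    # Joint KDE with the product kernel K_X * K_Y and bandwidths (h_X, h_Y).
    k_joint = box_kernel((X[:, None, :] - X[None, :, :]) / h_X) \
        * box_kernel((Y[:, None, :] - Y[None, :, :]) / h_Y)
    np.fill_diagonal(k_joint, 0.0)
    f_XY = k_joint.sum(axis=1) / ((n - 1) * h_X ** d_X * h_Y ** d_Y)
    return np.mean(g(f_X * f_Y / f_XY))
```

The same construction, applied separately to the samples sharing a given value of $\mathbf{Y}$, yields the conditional density estimates used for the mixed case in Section \[sec:mixed\].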
Assumption $(\mathcal{A}.5)$ states that the support of the density is smooth with respect to the kernel $K$ in the sense that the expectation with respect to any random variable $u$ of the area of the kernel that falls outside the support $\mathcal{S}$ is a smooth function of the bandwidth $h$ provided that the distribution function $p_{x}(u)$ of $u$ is smooth (e.g. $s\geq2$). The inner integral captures this expectation while the outer integral averages this inner integral over all points near the boundary of the support. The $v_{t}(h)$ term captures the fact that the smoothness of this expectation is proportional to the smoothness of the function $p_{x}(u)$. As an example, this smoothness assumption is satisfied when the support is rectangular and the kernel is the uniform rectangular kernel [@moon2016arxiv; @moon2016isit]. Note that this boundary assumption does not result in parametric convergence rates for the plug-in estimator $\gt$, which is in contrast with the boundary assumptions in [@singh2014exponential; @singh2014renyi; @krishnamurthy2014divergence; @kandasamy2015nonparametric]. However, the estimators in [@singh2014exponential; @singh2014renyi; @krishnamurthy2014divergence; @kandasamy2015nonparametric] perform boundary correction, which requires knowledge of the density support boundary and complex calculations at the boundary in addition to the boundary assumptions, to achieve the parametric convergence rates. In contrast, we use ensemble methods to improve the resulting convergence rates of $\gt$ without boundary correction. \[thm:bias\](Bias) Under assumptions $\mathcal{A}.0-\mathcal{A}.5$ and for general $g$, the bias of $\gt$ is $$\begin{aligned} \bias\left[\gt\right] & = & \sum_{\substack{j=0\\ i+j\neq0 } }^{r}\sum_{i=0}^{r}c_{10,i,j}h_{X}^{i}h_{Y}^{j}+\frac{c_{11}}{Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}}\nonumber \\ & & +O\left(h_{X}^{s}+h_{Y}^{s}+\frac{1}{Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}}\right).\label{eq:bias1}\end{aligned}$$ If $g\left(t_{1},t_{2}\right)$ also has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_{1}^{j}\partial t_{2}^{l}}$ that depend on $t_{1}$ and $t_{2}$ only through $t_{1}^{\alpha}t_{2}^{\beta}$ for some $\alpha,\beta\in\mathbb{R}$ for each $1\leq k,l,\leq\lambda$, the bias of $\gt$ is $$\begin{aligned} & & \lefteqn{\bias\left[\gt\right]}\nonumber \\ & & =\sum_{\substack{m,n=0\\ i+j+m+n\neq0 } }^{\left\lfloor \lambda/2\right\rfloor }\sum_{i,j=0}^{r}c_{11,j,i,m,n}\frac{h_{X}^{i}h_{Y}^{j}}{\left(Nh_{X}^{d_{X}}\right)^{m}\left(Nh_{Y}^{d_{Y}}\right)^{n}}\nonumber \\ & & +\sum_{m=1}^{\left\lfloor \lambda/2\right\rfloor }\sum_{i=0}^{r}\sum_{j=0}^{r}c_{13,m,n,j}h_{X}^{i}h_{Y}^{j}/\left(Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}\right)^{m}\nonumber \\ & & +O\left(h_{X}^{s}+h_{Y}^{s}+1/\left(Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}\right)^{\lambda/2}\right).\label{eq:bias2}\end{aligned}$$ The constants in both (\[eq:bias1\]) and (\[eq:bias2\]) depend only on the densities and their derivatives, the functional $g$ and its derivatives, and the kernels. They are independent of $N,$ $h_{X}$, and $h_{Y}.$ The purpose of Theorem \[thm:bias\] is two-fold. First, we use Theorem \[thm:bias\] to derive the bias expressions for the MI plug-in estimators when $\mathbf{X}$ and $\mathbf{Y}$ may have a mixture of discrete and continuous components (cases 2 and 3) in Section \[sec:mixed\]. 
Second, in conjunction with Theorem \[thm:variance\] which follows, the results in Theorem \[thm:bias\] can be used to derive MI ensemble estimators in Appendix \[subsec:cont\_ensemble\] that achieve the parametric MSE convergence rate when the densities are sufficiently smooth. The expression in (\[eq:bias2\]) enables us to achieve the parametric rate under less restrictive smoothness assumptions on the densities ($s>d/2$ for (\[eq:bias2\]) compared to $s\geq d$ for (\[eq:bias1\])). The extra condition required on the mixed derivatives of $g$ to obtain the expression in (\[eq:bias2\]) is satisfied, for example, for Shannon and Renyi information measures. \[thm:variance\](Variance) If the functional $g$ is Lipschitz continuous in both of its arguments with Lipschitz constant $C_{g}$, then the variance of $\gt$ is $$\var\left[\gt\right]\leq\frac{22C_{g}^{2}||K_{X}\cdot K_{Y}||_{\infty}^{2}}{N}.$$ Similar to Theorem \[thm:bias\], Theorem \[thm:variance\] is used to derive variance expressions for the MI plug-in estimators under cases 2 and 3. Theorem \[thm:variance\] is also necessary to derive optimally weighted ensemble estimators. The proofs of Theorems \[thm:bias\] and \[thm:variance\] are similar to the proofs of the bias and variance results for the divergence functional estimators in [@moon2016arxiv]. The primary difference is in handling certain products of the marginal KDEs that appear in the expansion of the MSE. See Appendix \[sec:biasProof\] and \[sec:VarProof\] for details. Theorems \[thm:bias\] and \[thm:variance\] indicate that for the MSE of the plug-in estimator to go to zero for case 1, we require $h_{X},h_{Y}\rightarrow0$ and $Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}\rightarrow\infty$. The Lipschitz assumption on $g$ is comparable to other nonparametric estimators of distributional functionals [@kandasamy2015nonparametric; @singh2014exponential; @singh2014renyi; @moon2016arxiv; @krishnamurthy2014divergence]. Specifically, assumption $\mathcal{A}.1$ ensures that functionals such as those for Shannon and Renyi informations are Lipschitz on the space $\epsilon_{0}$ to $\epsilon_{\infty}$. Mixed Random Variables {#sec:mixed} ====================== In this section, we extend the results of Section \[sec:MI\_est\] to MI estimation when $\mathbf{X}$ and $\mathbf{Y}$ may have a mixture of discrete and continuous components. For simplicity, we focus primarily on the important case when $\mathbf{X}$ is continuous and $\mathbf{Y}$ is discrete (case 2 in Section \[sec:intro\]). The more general case when $\X$ and $\Y$ may have any mixture of continuous and discrete components (case 3 in Section \[sec:intro\]) is discussed in Section \[subsec:general\_rates\]. As an example of the former case, if $\mathbf{Y}$ is a predictor variable (e.g. classification labels), then the MI between $\mathbf{X}$ and $\mathbf{Y}$ indicates the value of $\mathbf{X}$ as a predictor of $\mathbf{Y}$. Although $\mathbf{Y}$ is discrete, $f_{XY}=f_{Z}$ is also a density. Let $\mathcal{S}_{X}$ be the support of the density $f_{X}$ and $\mathcal{S}_{Y}$ be the support of the probability mass function $f_{Y}$. 
The MI is $$\begin{aligned} & \lefteqn{G_{2}\left(\mathbf{X};\mathbf{Y}\right)}\nonumber \\ & = & \sum_{y\in\mathcal{S}_{Y}}\int g\left(\frac{f_{X}(x)f_{Y}(y)}{f_{XY}(x,y)}\right)f_{XY}(x,y)dx\label{eq:MI_cond}\\ & = & \sum_{y\in\mathcal{S}_{Y}}f_{Y}(y)\int g\left(\frac{f_{X}(x)}{f_{X|Y}(x|y)}\right)f_{X|Y}(x|y)dx.\nonumber \end{aligned}$$ Let $\mathbf{N}_{y}=\sum_{i=1}^{N}1_{\left\{ \mathbf{Y}_{i}=y\right\} }$ where $y\in\mathcal{S}_{Y}$. Let $\ft X$ be as in (\[eq:fx\]) and define $\mathcal{X}_{y}=\left\{ \mathbf{X}_{i}\in\left\{ \mathbf{X}_{1},\dots,\mathbf{X}_{N}\right\} |\mathbf{Y}_{i}=y\right\} $. Then if $\mathbf{X}_{i}\in\mathcal{X}_{y}$, the KDE of $f_{X|Y}(x|y)$ is $$\begin{aligned} \ft{X|y}(\mathbf{X}_{i}) & = & \frac{1}{\left(\mathbf{N}_{y}-1\right)h_{X|y}^{d_{X}}}\sum_{\substack{\mathbf{X}_{j}\in\mathcal{X}_{y}\\ i\neq j } }K_{X}\left(\frac{\mathbf{X}_{i}-\mathbf{X}_{j}}{h_{X|y}}\right).\end{aligned}$$ We define the plug-in estimator $\g{h_{X},h_{X|Y}}$ of (\[eq:MI\_cond\]) as $$\begin{aligned} \g{h_{X},h_{X|y}} & =\frac{1}{\mathbf{N}_{y}}\sum_{\mathbf{X}\in\mathcal{X}_{y}}g\left(\ft X(\mathbf{X})/\ft{X|y}(\mathbf{X})\right),\nonumber \\ \implies\g{h_{X},h_{X|Y}} & =\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\g{h_{X},h_{X|y}}.\label{eq:mixed_est}\end{aligned}$$ Convergence Rates {#subsec:mixed_conv} ----------------- To apply the theory of optimally weighted ensemble estimation to $\g{h_{X},h_{X|Y}}$, we need to know its MSE as a function of the bandwidths and the sample size. \[thm:bias\_mixed\](Bias) Assume that assumptions $\mathcal{A}.0-\mathcal{A}.5$ apply to the functional $g$, the kernel $K_{X}$, and the densities $f_{X}$ and $f_{X|Y}$. Assume that $\mathbf{h}_{X|y}=l\mathbf{N}_{y}^{-\beta}$ with $0<\beta<\frac{1}{d_{X}}$ and $l$ a positive number. Then the bias of $\g{h_{X},h_{X|Y}}$ is $$\begin{aligned} & \lefteqn{\bias\left[\g{h_{X},h_{X|Y}}\right]}\nonumber \\ & =\sum_{\substack{j=0\\ i+j\neq0 } }^{r}\sum_{i=0}^{r}c_{13,i,j}h_{X}^{i}l^{j}N^{-j\beta}+\frac{c_{14,X}}{Nh_{X}^{d_{X}}}+\frac{c_{14,Y}}{l^{d_{X}}N^{1-\beta d_{X}}}\nonumber \\ & +O\left(h_{X}^{s}+N^{-s\beta}+\frac{1}{Nh_{X}^{d_{X}}}+\frac{1}{N^{1-\beta d_{X}}}\right).\label{eq:bias_mixed1}\end{aligned}$$ If $g\left(t_{1},t_{2}\right)$ also has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_{1}^{j}\partial t_{2}^{l}}$ that depend on $t_{1}$ and $t_{2}$ only through $t_{1}^{\alpha}t_{2}^{\beta}$ for some $\alpha,\beta\in\mathbb{R}$ for each $1\leq j,l\leq\lambda$, then the bias is $$\begin{aligned} & \lefteqn{\bias\left[\g{h_{X},h_{X|Y}}\right]}\nonumber \\ & =\sum_{\substack{m,n=0\\ i+j+m+n\neq0 } }^{\left\lfloor \lambda/2\right\rfloor }\sum_{i,j=0}^{r}c_{14,j,i,m,n}\frac{h_{X}^{i}l^{j}N^{-j\beta}}{\left(Nh_{X}^{d_{X}}\right)^{m}\left(l^{d_{X}}N^{1-\beta d_{X}}\right)^{n}}\nonumber \\ & +O\left(h_{X}^{s}+N^{-s\beta}+\frac{1}{\left(Nh_{X}^{d_{X}}\right)^{\lambda/2}}+\frac{1}{\left(N^{1-\beta d_{X}}\right)^{\lambda/2}}\right).\label{eq:bias_mixed2}\end{aligned}$$ We focus on (\[eq:bias\_mixed1\]) as (\[eq:bias\_mixed2\]) follows similarly. 
It can be shown that $$\bias\left[\g{h_{X},h_{X|Y}}\right]=\bE\left[\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\bias\left[\left.\g{h_{X},h_{X|y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right].$$ The conditional bias of $\g{h_{X},h_{X|y}}$ given $\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}$ can then be obtained from Theorem \[thm:bias\] as $$\begin{aligned} & \lefteqn{\bias\left[\left.\g{h_{X},h_{X|y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]}\\ & =\sum_{\substack{i,j=0\\ i+j\neq0 } }^{r}c_{10,i,j}h_{X}^{i}\mathbf{h}_{X|y}^{j}\\ & +O\left(h_{X}^{s}+\mathbf{h}_{X|y}^{s}+\frac{1}{\mathbf{N}_{y}h_{X}^{d_{X}}}+\frac{1}{\mathbf{N}_{y}\mathbf{h}_{X|y}^{d_{X}}}\right)\end{aligned}$$ Then given that $\mathbf{h}_{X|y}\propto\mathbf{N}_{y}^{-\beta}$, (\[eq:mixed\_est\]) gives terms of the form of $\mathbf{N}_{y}^{1-\gamma}$ with $\gamma>0$. $\mathbf{N}_{y}$ is a binomial random variable with parameter $f_{Y}(y)$, $N$ trials, and mean $Nf_{Y}(y)$. Thus we need to compute the fractional moments of a binomial random variable. By the generalized binomial theorem, we have that $$\begin{aligned} \mathbf{N}_{y}^{\alpha} & =\left(\mathbf{N}_{y}-Nf_{Y}(y)+Nf_{Y}(y)\right)^{\alpha}\nonumber \\ & =\sum_{i=0}^{\infty}\left(\begin{array}{c} \alpha\\ i \end{array}\right)\left(Nf_{Y}(y)\right)^{\alpha-i}\left(\mathbf{N}_{y}-Nf_{Y}(y)\right)^{i},\nonumber \\ & \lefteqn{\implies\bE\left[\mathbf{N}_{y}^{\alpha}\right]}\nonumber \\ & =\sum_{i=0}^{\infty}\left(\begin{array}{c} \alpha\\ i \end{array}\right)\left(Nf_{Y}(y)\right)^{\alpha-i}\bE\left[\left(\mathbf{N}_{y}-Nf_{Y}(y)\right)^{i}\right].\label{eq:fractional_moment}\end{aligned}$$ From [@riordan1937moment], the $i$-th central moment of $\mathbf{N}_{y}$ has the form of $$\bE\left[\left(\mathbf{N}_{Y}-Nf_{Y}(y)\right)^{i}\right]=\sum_{n=0}^{\left\lfloor i/2\right\rfloor }c_{n,i}(f_{Y}(y))N^{n}.$$ Thus $\bE\left[\mathbf{N}_{y}^{1-\gamma}\right]$ has terms proportional to $N^{1-\gamma-i+n}\leq N^{1-\gamma-\left\lfloor i/2\right\rfloor }$ for $i=0,1,\dots$ since $n\leq\left\lfloor i/2\right\rfloor $. Then since there is an $N$ in the denominator of (\[eq:mixed\_est\]), this leaves terms of the form of $N^{-\gamma}$ when $i=0,1$ and $N^{-1}$ for $i\geq2$. This completes the proof for the bias. See Appendix \[sec:MixedProofs\] for more details. \[thm:var\_mixed\]If the functional $g$ is Lipschitz continuous in both of its arguments and $\mathcal{S}_{Y}$ is finite, then the variance of $\g{h_{X},h_{X|Y}}$ is $O(1/N)$. By the law of total variance, we have $$\begin{aligned} \var\left[\g{h_{X},h_{X|Y}}\right] & =\bE\left[\var\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right]\\ & +\var\left[\bE\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right].\end{aligned}$$ Given all of the $\mathbf{Y}_{i}$’s, the estimators $\g{h_{X},h_{X|y}}$ are all independent since they use different sets of $\mathbf{X}_{i}$’s for each $y$. From Theorem \[thm:variance\], we know that $\var\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]=O\left(\sum_{y\in\mathcal{S}_{Y}}\mathbf{N}_{y}/N^{2}\right)$. Taking the expectation then yields $O(1/N)$. For the second term, we know from the proof of Theorem \[thm:bias\_mixed\] that $\bE\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]$ yields a sum of terms of the form of $\mathbf{N}_{y}^{\gamma}/N$ for $0<\gamma\leq1$. 
Taking the variance of the sum of these terms yields a sum of terms of the form $\var\left[\mathbf{N}_{y}^{\gamma}\right]/N^{2}$ (the covariance terms can be bounded by the Cauchy-Schwarz inequality to yield similar terms). Then $\var\left[\mathbf{N}_{y}^{\gamma}\right]$ can be bounded by taking a Taylor series expansion of the functions $\mathbf{N}_{y}^{\gamma}$ and $\mathbf{N}_{y}^{2\gamma}$ at the point $Nf_{Y}(y)$ which yields an expression that depends on the central moments of $\mathbf{N}_{y}$. From this, we obtain $\var\left[\mathbf{N}_{y}^{\gamma}\right]=O(N)$ which completes the proof. See Appendix \[sec:MixedProofs\] for details. Theorems \[thm:bias\_mixed\] and \[thm:var\_mixed\] provide exact expressions for the bias and bounds on the variance of the plug-in MI estimator, respectively. It is shown in Section \[sec:mixed\_ensemble\] that the MSE of the plug-in estimator converges very slowly to zero under this setting. However, Theorems \[thm:bias\_mixed\] and \[thm:var\_mixed\] provide with us the necessary information for applying the theory of optimally weighted ensemble estimation to obtain estimators with improved rates. This is done in Section \[sec:mixed\_ensemble\]. Extension to Other Cases {#subsec:general_rates} ------------------------ The results in Section \[subsec:mixed\_conv\] can be extended to the case where $\mathbf{X}$ and/or $\mathbf{Y}$ may have a mixture of continuous and discrete components (case 3 in Section \[sec:intro\]). This scenario can be divided further into three different cases: A) $\X$ is continuous and $\Y$ has a mixture of discrete and continuous components; B) $\X$ and $\Y$ both have a mixture of discrete and continuous components; C) $\Y$ is discrete and $\X$ has a mixture of discrete and continuous components. Consider case A first. Denote the discrete and continuous components of $\mathbf{Y}$ as $\mathbf{Y}_{1}$ and $\mathbf{Y}_{2}$, respectively. Denote the respective support sets as $\mathcal{S}_{Y_{1}}$ and $\mathcal{S}_{Y_{2}}$. We can then write $$\begin{aligned} & \lefteqn{G_{3A}(\mathbf{X};\mathbf{Y})}\nonumber \\ & =\sum_{y_{1}\in\mathcal{S}_{Y_{1}}}\int g\left(\frac{f_{X}(x)f_{Y}(y_{1},y_{2})}{f_{XY}(x,y_{1},y_{2})}\right)f_{XY}(x,y_{1},y_{2})dxdy_{2}\nonumber \\ & =\sum_{y_{1}\in\mathcal{S}_{Y_{1}}}f_{Y_{1}}(y_{1})\int g\left(\frac{f_{X}(x)f_{Y_{2}|Y_{1}}(y_{2}|y_{1})}{f_{XY_{2}|Y_{1}}(x,y_{2}|y_{1})}\right)\nonumber \\ & \times f_{XY_{2}|Y_{1}}(x,y_{2}|y_{1})dxdy_{2}.\label{eq:mixed_general}\end{aligned}$$ The subscript $3A$ indicates that we are considering case A under the third case described in the introduction. The expression in (\[eq:mixed\_general\]) is very similar to the expression in (\[eq:MI\_cond\]). After plugging in KDEs for the corresponding densities and conditional densities, a nearly identical procedure to that in Section \[subsec:mixed\_conv\] can be followed to derive the bias and variance of the corresponding plug-in estimator. Now consider case B. Denote the discrete and continuous components of $\mathbf{X}$ as $\mathbf{X}_{1}$ and $\mathbf{X}_{2}$, respectively. Then if $\mathbf{Y}_{1}$ is the discrete component of $\mathbf{Y}$, then the expression inside the $g$ functional in (\[eq:mixed\_general\]) includes $f_{X_{1}}(x_{1})f_{Y_{1}}(y_{1})/f_{X_{1}Y_{1}}(x_{1},y_{1})$. Thus the plug-in estimator must include estimators for $f_{X_{1}}(x_{1}),$ $f_{Y_{1}}(y_{1})$, and $f_{X_{1}Y_{1}}(x_{1},y_{1})$. 
Define $\mathbf{N}_{y_{1}}=\sum_{i=1}^{N}1_{\{\mathbf{Y}_{1,i}=y_{1}\}}$ where $\mathbf{Y}_{1,i}$ is the discrete component of $\mathbf{Y}_{i}$. Then the estimator we use for $f_{Y_{1}}(y_{1})$ is $\mathbf{N}_{y_{1}}/N$. The estimators for $f_{X_{1}}(x_{1})$ and $f_{X_{1}Y_{1}}(x_{1},y_{1})$ are defined similarly. The bias and variance expressions of this plug-in estimator can then be derived with some slight modifications of Theorems \[thm:bias\] and \[thm:variance\]. See Appendix \[subsec:generalCase\] for an expression for $G_{3B}(\mathbf{X};\mathbf{Y})$ in this case and a sketch of these modifications. Case C follows similarly as the expression inside the $g$ functional in (\[eq:mixed\_general\]) includes $f_{X_{1}}(x_{1})f_{Y}(y)/f_{X_{1}Y}(x_{1},y)$ where all the terms are probability mass functions. The resulting bias and variance expressions in these settings are analogous to those in Theorems \[thm:bias\], \[thm:variance\], and \[thm:bias\_mixed\] as the variance will be $O(1/N)$ and the bias will depend on expansions of the bandwidths for the various KDEs. Ensemble methods can then be applied to improve the MSE convergence rates as described in the next section. Ensemble Estimation of MI\[sec:mixed\_ensemble\] ================================================ Mixed Random Variables {#subsec:mixed_ensemble} ---------------------- We again focus on the case where $\mathbf{X}$ is continuous and $\mathbf{Y}$ is discrete (case 2 in Section \[sec:intro\]). If no bias correction is performed, then Theorem \[thm:bias\_mixed\] shows that the optimal bias rate of the plug-in estimator $\g{h_{X},h_{X|Y}}$ is $O\left(1/N^{1/(d_{X}+1)}\right)$, which converges very slowly to zero when $d_{X}$ is not small. We use the theory of optimally weighted ensemble estimation to improve this rate. An ensemble of estimators is formed by choosing different bandwidth values. Consider first the case where (\[eq:bias\_mixed1\]) applies. Let $\mathcal{L}$ be a set of real positive numbers with $|\mathcal{L}|=L$. This set will parameterize the bandwidths for $\ft X$ and $\ft{X|y}$ resulting in $L$ estimators in the ensemble. While different parameter sets for $\ft X$ and $\ft{X|y}$ can be chosen, we only use one set here for simplicity of exposition. To ensure that the final terms in (\[eq:bias\_mixed1\]) are $O(1/\sqrt{N})$ when $s\geq d$, for each estimator in the ensemble we choose $h_{X}(l)=lN^{-1/(2d_{X})}$ and $\mathbf{h}_{X|y}(l)=l\mathbf{N}_{y}^{-1/(2d_{X})}$ where $l\in\mathcal{L}$. Define $w$ to be a weight vector parameterized by $l\in\mathcal{L}$ with $\sum_{l\in\mathcal{L}}w(l)=1$ and define $$\g{w,1}=\sum_{l\in\mathcal{L}}w(l)\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\g{h_{X}(l),h_{X|y}(l)}.\label{eq:ensemble}$$ From Theorem \[thm:bias\_mixed\], the bias of $\g{w,1}$ is $$\begin{aligned} \bias\left[\g{w,1}\right] & =\sum_{l\in\mathcal{L}}\sum_{i=1}^{r}\theta\left(w(l)l^{i}N^{\frac{-i}{2d_{X}}}\right)\nonumber \\ & +O\left(\sqrt{L}||w||_{2}\left(N^{\frac{-s}{2d_{X}}}+N^{\frac{-1}{2}}\right)\right),\label{eq:weight_bias}\end{aligned}$$ where we use $\theta$ notation to omit the constants. We use the general theory of optimally weighted ensemble estimation in [@moon2016isit] to improve the MSE convergence rate of the plug-in estimator by using the weights to cancel the lower order terms in (\[eq:weight\_bias\]). The theory is as follows. 
Let $\left\{ \hat{\mathbf{E}}_{l}\right\} _{l\in\mathcal{L}}$ be an indexed ensemble of estimators with the weighted ensemble estimator $\hat{\mathbf{E}}_{w}=\sum_{l\in\mathcal{L}}w(l)\hat{\mathbf{E}}_{l}$ satisfying: - $\mathcal{C}.1$. Let $c_{i}$ be constants depending on the underlying density, $J=\{i_{1},\dots i_{I}\}$ a finite index set with $I<L$, $\psi_{i}(l)$ basis functions depending only on the parameter $l$ and not on $N$, $\phi_{i}(N)$ functions of the sample size $N$ that are independent of $l$. Assume the bias is $$\bias\left[\hat{\mathbf{E}}_{l}\right]=\sum_{i\in J}c_{i}\psi_{i}(l)\phi_{i}(N)+O\left(\frac{1}{\sqrt{N}}\right).$$ - $\mathcal{C}.2$. Assume the variance is $$\var\left[\hat{\mathbf{E}}_{l}\right]=c_{v}\left(\frac{1}{N}\right)+o\left(\frac{1}{N}\right).$$ [@moon2016isit] \[thm:opt\_weight\]If conditions $\mathcal{C}.1$ and $\mathcal{C}.2$ hold for an ensemble of estimators $\left\{ \hat{\mathbf{E}}_{l}\right\} _{l\in\mathcal{L}}$, then there exists a weight vector $w_{0}$ such that the MSE of $\hat{\mathbf{E}}_{w_{0}}$ attains the parametric rate of convergence of $O\left(1/N\right)$. The weight $w_{0}$ is the solution to the offline convex optimization problem *$$\begin{array}{rl} \min_{w} & ||w||_{2}\\ subject\,to & \sum_{l\in\mathcal{L}}w(l)=1,\\ & \gamma_{w}(i)=\sum_{l\in\mathcal{L}}w(l)\psi_{i}(l)=0,\,i\in J. \end{array}\label{eq:optimize}$$* To apply Theorem \[thm:opt\_weight\] to an ensemble of estimators, all $\phi_{i}(N)$ functions that converge to zero slower than $1/\sqrt{N}$ and the corresponding $\psi_{i}(l)$ functions must be known for the base estimator. Otherwise, Theorem \[thm:opt\_weight\] can only be guaranteed to improve the bias up to the slowest unknown bias rate. This theorem was applied in [@moon2016isit] to the problem of divergence functional estimation where the plug-in estimator has slowly converging bias but the resulting ensemble estimator achieves the parametric rate for sufficiently smooth densities. We apply Theorem \[thm:opt\_weight\] to the ensemble estimator $\g{w,1}$ as conditions $\mathcal{C}.1$ and $\mathcal{C}.2$ are satisfied with $\phi_{i}(N)=N^{-i/(2d_{X})}$ and $\psi_{i}(l)=l^{i}$ for $i\in\{1,\dots r\}$ as seen in (\[eq:bias\_mixed1\]) and (\[eq:weight\_bias\]). If $s\geq d_{X}$, then the MSE of the optimally weighted estimator $\g{w_{0},1}$ is $O(1/N)$. A similar approach can be used for the case where $\X$ contains a mixture of continuous and discrete components and $\Y$ is discrete (or vice versa). To the best of our knowledge, these are the first nonparametric estimators to achieve the MSE parametric rate in this setting of mixed random variables. If the mixed derivatives of the functional $g$ satisfy the extra condition required for (\[eq:bias\_mixed2\]), we can define an ensemble estimator $\g{w_{0},2}$ that achieves the parametric MSE rate if $s>d_{X}/2$. For simplicity, we focus primarily on $\g{w_{0},1}$. See Appendix \[subsec:Odin2\] for details on $\g{w_{0},2}$. In practice, the optimization problem in (\[eq:optimize\]) typically results in a very large increase in variance. Thus we follow the lead of [@moon2016arxiv; @moon2014isit; @moon2014nips; @sricharan2013ensemble] and use a relaxed version of (\[eq:optimize\]): $$\begin{array}{rl} \min_{w} & \epsilon\\ subject\,to & \sum_{l\in\mathcal{L}}w(l)=1,\\ & \left|\gamma_{w}(i)N^{\frac{1}{2}}\phi_{i}(N)\right|\leq\epsilon,\,\,i\in J,\\ & \left\Vert w\right\Vert _{2}^{2}\leq\eta. 
\end{array}\label{eq:relaxed}$$ As shown in [@moon2016arxiv; @moon2014isit; @moon2014nips; @sricharan2013ensemble], the ensemble estimator $\g{w_{0},1}$ using the resulting weight vector from the optimization problem in (\[eq:relaxed\]) still achieves the parametric MSE convergence rate under the same assumptions as described previously. It was also shown in [@moon2016arxiv] that the heuristic of setting $\eta=\epsilon$ works well in practice. Algorithm \[alg:estimator\] summarizes the estimator $\g{w_{0},1}$. **Input:** $L$ positive real numbers $\mathcal{L}$, samples $\left\{ \mathbf{Z}_{1},\dots,\mathbf{Z}_{N}\right\} $ from $f_{XY}$, dimension $d_{X}$, function $g$, kernel $K_{X}$. **Output:** the optimally weighted MI estimator $\g{w_{0},1}$. 1. Solve for $w_{0}$ using (\[eq:relaxed\]) with basis functions $\psi_{i}(l)=l^{i}$, $\phi_{i}(N)=N^{-i/(2d_{X})},$ $l\in\mathcal{L}$, and $0\leq i\leq d_{X}$. 2. $\mathbf{N}_{y}\leftarrow\sum_{i=1}^{N}1_{\{\mathbf{Y}_{i}=y\}}$. 3. $h_{X}(l)\leftarrow lN^{-1/(2d_{X})},$ $\mathbf{h}_{X|y}(l)\leftarrow l\mathbf{N}_{y}^{-1/(2d_{X})}$. 4. Calculate $\ftl X(\mathbf{X}_{i})$, $\ftl{X|y}(\mathbf{X}_{i})$ as described in the text. 5. $\g{h_{X}(l),\mathbf{h}_{X|y}(l)}\leftarrow\frac{1}{\mathbf{N}_{y}}\sum_{\mathbf{X}\in\mathcal{X}_{y}}g\left(\frac{\ftl X(\mathbf{X})}{\ftl{X|y}(\mathbf{X})}\right)$. 6. $\g{w_{0},1}\leftarrow\sum_{l\in\mathcal{L}}w_{0}(l)\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\g{h_{X}(l),\mathbf{h}_{X|y}(l)}$. A similar approach can be used to derive an ensemble estimator $\g{w_{0},1}^{cont}$ for the case when $\X$ and $\Y$ are continuous (case 1 in Section \[sec:intro\]). See Appendix \[subsec:cont\_ensemble\] for details. The case where $\X$ and $\Y$ both contain a mixture of discrete and continuous components follows similarly. Parameter Selection ------------------- The theoretical results of the previous sections hold for any choice of the bandwidth vectors as determined by $\mathcal{L}$. In practice, we find that the following rules-of-thumb for tuning the parameters lead to high-quality estimates in the finite sample regime. 1. Select the minimum and maximum bandwidth parameter to produce density estimates that satisfy the following: first, the minimum bandwidth should not lead to a zero-valued density estimate at any sample point; second, the maximum bandwidth should be smaller than the diameter of the support. 2. Ensure the bandwidths are sufficiently distinct. Similar bandwidth values lead to a negligible decrease in bias, and many bandwidth values may increase $||w_{0}||_{2}$, resulting in an increase in variance [@sricharan2013ensemble]. 3. Select $L=|\mathcal{L}|>|J|=I$ to obtain a feasible solution for the optimization problems in (\[eq:optimize\]) and (\[eq:relaxed\]). We find that choosing a value of $30\leq L\leq60$, and setting $\mathcal{L}$ to be $L$ linearly spaced values between the minimum and maximum values described above, works well in practice. The resulting ensemble estimators are robust in the sense that they are not sensitive to the exact choice of the bandwidths or the number of estimators as long as the rough rules-of-thumb given above are followed. Moon et al. [@moon2016arxiv; @moon2016isit] give more details on ensemble estimator parameter selection for continuous divergence estimation. These details also apply to the continuous parts of the mixed cases for MI estimation in this paper.
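As a complement to Algorithm \[alg:estimator\], the following sketch shows one way to compute the weight vector $w_{0}$ from the relaxed problem (\[eq:relaxed\]); it assumes the `cvxpy` package and uses the ODin1 basis functions $\psi_{i}(l)=l^{i}$ and rate functions $\phi_{i}(N)=N^{-i/(2d_{X})}$ with $i=1,\dots,d_{X}$. The function name `odin1_weights` and the handling of $\eta$ (defaulting to the $\eta=\epsilon$ heuristic mentioned above) are illustrative choices, not part of the original algorithm specification.

```python
import numpy as np
import cvxpy as cp


def odin1_weights(bandwidth_params, N, d_X, eta=None):
    """Solve the relaxed convex problem (eq:relaxed) for the ensemble weights w_0."""
    ell = np.asarray(bandwidth_params, dtype=float)   # the set L of positive reals
    i = np.arange(1, d_X + 1)
    psi = ell[None, :] ** i[:, None]                  # psi_i(l) = l**i
    scale = np.sqrt(N) * N ** (-i / (2.0 * d_X))      # N^{1/2} * phi_i(N)

    w = cp.Variable(ell.size)
    eps = cp.Variable(nonneg=True)
    constraints = [cp.sum(w) == 1]
    for row, s in zip(psi, scale):                    # |gamma_w(i) N^{1/2} phi_i(N)| <= eps
        constraints.append(cp.abs(s * (row @ w)) <= eps)
    # ||w||_2^2 <= eta, with eta = eps as the heuristic used in practice.
    constraints.append(cp.sum_squares(w) <= (eps if eta is None else eta))
    cp.Problem(cp.Minimize(eps), constraints).solve()
    return w.value
```

The returned weights are then combined with the ensemble of plug-in estimates as in (\[eq:ensemble\]).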
Since the optimal weight $w_{0}$ can be calculated offline, the computational complexity of the estimators is dominated by the construction of the KDEs which has a complexity of $O\left(N^{2}\right)$ using the standard implementation. For very large datasets, more efficient KDE implementations (e.g. [@raykar2010fast]) can be used to reduce the computational burden. Central Limit Theorem {#subsec:clt} --------------------- We finish this section with central limit theorems for the ensemble estimators. This enables us to perform hypothesis testing on the mutual information. \[thm:clt\] Let $\g w^{cont}$ be a weighted ensemble estimator when $\X$ and $\Y$ are continuous with bandwidths $h_{X}(l_{X})$ and $h_{Y}(l_{Y})$ for each estimator in the ensemble. Assume that the functional $g$ is Lipschitz in both arguments with Lipschitz constant $C_{g}$ and that $h_{X}(l_{X}),\,h_{Y}(l_{Y})=o(1)$, $N\rightarrow\infty$, and $Nh_{X}^{d_{X}}(l_{X}),\,Nh_{Y}^{d_{Y}}(l_{Y})\rightarrow\infty$ for each $l_{X}\in\mathcal{L}_{X}$ and $l_{Y}\in\mathcal{L}_{Y}$. Then for fixed $\mathcal{L}_{X}$ and $\mathcal{L}_{Y}$, and if $\mathbf{S}$ is a standard normal random variable, $$\Pr\left(\left(\g w^{cont}-\bE\left[\g w^{cont}\right]\right)/\sqrt{\var\left[\g w^{cont}\right]}\leq t\right)\rightarrow\Pr\left(\mathbf{S}\leq t\right).$$ The proof is based on an application of Slutsky’s Theorem preceded by an application of the Efron-Stein inequality (see Appendix \[sec:cltProof\]). If the space $\mathcal{S}_{Y}$ is finite, then the ensemble estimators for the mixed component case also obey a central limit theorem. The proof follows by an application of Slutsky’s Theorem combined with Theorem \[thm:clt\]. Let $\g w$ be a weighted ensemble estimator when $\X$ is continuous and $\Y$ is discrete with bandwidths $h_{X}(l)$ and $h_{X|y}(l)$ for each estimator in the ensemble. Assume that the functional $g$ is Lipschitz in both arguments and that $h_{X},\,h_{X|y}=o(1)$, $N\rightarrow\infty$, and $Nh_{X}^{d_{X}},\,Nh_{X|y}^{d_{X}}\rightarrow\infty$ for each $l\in\mathcal{L}$ and $\forall y\in\mathcal{S}_{Y}$ with $\mathcal{S}_{y}$ finite. Then for fixed $\mathcal{L}$, $$\Pr\left(\left(\g w-\bE\left[\g w\right]\right)/\sqrt{\var\left[\g w\right]}\leq t\right)\rightarrow\Pr\left(\mathbf{S}\leq t\right).$$ Experimental Validation {#sec:experiments} ======================= In this section, we validate our theory by estimating the Rényi-$\alpha$ MI integral (i.e. $g(x)=x^{\alpha}$ in (\[eq:MI\_cond\]); see [@principe2010information]) where $\mathbf{X}$ is a mixture of truncated Gaussian random variables restricted to the unit cube and $\mathbf{Y}$ is a categorical random variable. We choose Rényi MI as it has received recent interest (e.g. [@pal2010estimation]) and the estimation problem does not reduce to entropy estimation in contrast with Shannon MI. Thus this is a clear case where there are no other nonparametric estimators that are known to achieve the parametric MSE rate. We consider two cases. In the first case, $\mathbf{Y}$ has three possible outcomes (i.e. $|\mathcal{S}_{Y}|=3$) and respective probabilities $\Pr(\mathbf{Y}=0)=\Pr(\mathbf{Y}=1)=2/5$ and $\Pr(\mathbf{Y}=2)=1/5$. The conditional covariance matrices are all $0.1\times I_{d}$ and the conditional means are, respectively, $\bar{\mu}_{0}=0.25\times\bar{1}_{d}$, $\bar{\mu}_{1}=0.75\times\bar{1}_{d}$, and $\bar{\mu}_{2}=0.5\times\bar{1}_{d}$, where $I_{d}$ is the $d\times d$ identity matrix and $\bar{1}_{d}$ is a $d$-dimensional vector of ones. 
This experiment can be viewed as the problem of estimating MI (e.g. for feature selection or Bayes error bounds) of a classification problem where each discrete value corresponds to a distinct class, the distribution of each class overlaps slightly with others, and the class probabilities are unequal. We use $\alpha=0.5$. We set $\mathcal{L}$ to be 40 linearly spaced values between 1.2 and 3. The bandwidth in the KDE plug-in estimator is also set to $2.1N^{-1/(2d)}$. The top three plots in Figure \[fig:mseplot\] show the MSE (200 trials) of the plug-in KDE estimator of the MI integral using a uniform kernel and the optimally weighted ensemble estimator $\g{w_{0},1}$ for various sample sizes and for $d=4,\,6,\,9$, respectively. The ensemble estimator outperforms the standard plug-in estimator, especially for larger sample sizes and larger dimensions. This demonstrates that while an individual kernel estimator performs poorly, an ensemble of estimators including the individual estimator performs well. For the second case, $\mathbf{Y}$ has six possible outcomes (i.e. $|\mathcal{S}_{Y}|=6$) and respective probabilities $\Pr(\Y=0)=0.35$, $\Pr(\Y=1)=0.2$, $\Pr(\Y=2)=\Pr(\Y=3)=0.15$, $\Pr(\Y=4)=0.1$, and $\Pr(\Y=5)=0.05$. We chose $\alpha=0.5$ and $d=6$. The conditional covariance matrices are again $0.1\times I_{d}$ and the conditional means are, respectively, $\bar{\mu}_{0}=0.25\times\bar{1}_{d}$, $\bar{\mu}_{1}=0.75\times\bar{1}_{d}$, $\bar{\mu}_{2}=0.5\times\bar{1}_{d}$, $\bar{\mu}_{3}=\left(0.25\times\bar{1}_{4}^{T},0.5\times\bar{1}_{2}^{T}\right)^{T}$, $\bar{\mu}_{4}=\left(0.75\times\bar{1}_{2}^{T},0.375\times\bar{1}_{4}^{T}\right)^{T}$, and $\bar{\mu}_{5}=\left(0.5\times\bar{1}_{4}^{T},0.25\times\bar{1}_{2}^{T}\right)^{T}$. The parameters for the ensemble estimators and the KDE plug-in estimators are the same as in the top three plots in Figure \[fig:mseplot\]. The bottom plot in Figure \[fig:mseplot\] again compares the ensemble estimator to the plug-in KDE estimator. The ensemble estimator also outperforms the plug-in estimator in this setting. ![MSE log-log plots as a function of sample size for the uniform kernel plug-in MI estimator (Kernel) and the proposed optimally weighted ensemble estimator $\protect\g{w_{0},1}$ (Weighted) for the distributions described in the text. The top three plots each correspond to the first case where $|\mathcal{S}_{Y}|=3$ and the bottom plot corresponds to the second case where $|\mathcal{S}_{Y}|=6$. The ensemble estimator outperforms the kernel plug-in estimator, especially for larger sample sizes. Note also that as the dimension increases, the performance gap between the two estimators increases.\[fig:mseplot\]](d4_y3){width="50.00000%"} ![](d6_y3){width="50.00000%"} ![](d9_y3){width="50.00000%"} ![](d6_y6){width="50.00000%"} Conclusion ========== We derived the MSE convergence rates for plug-in KDE-based estimators of MI measures between $\mathbf{X}$ and $\mathbf{Y}$ when they have only continuous components and for the case where $\mathbf{Y}$ is discrete and $\mathbf{X}$ is continuous. We also showed how convergence rates can be obtained for the case when $\X$ and/or $\Y$ contain a mixture of discrete and continuous components. Using these rates, we defined ensemble estimators that achieve an MSE rate of $O(1/N)$ when the densities are sufficiently smooth and showed that a central limit theorem also holds. To the best of our knowledge, this is the first nonparametric MI estimator that achieves the MSE convergence rate of $O(1/N)$ in this setting of mixed random variables (i.e. $\X$ and $\Y$ are not both purely discrete or purely continuous). Hölder Class ============ We derive MSE convergence rates for the plug-in estimators in terms of the smoothness of the densities which we characterize by the Hölder Class. \[def:holder\]*Let $\mathcal{X}\subset\mathbb{R}^{d}$ be a compact space. For $r=(r_{1},\dots,r_{d}),$ $r_{i}\in\mathbb{N},$ define $|r|=\sum_{i=1}^{d}r_{i}$ and $D^{r}=\frac{\partial^{|r|}}{\partial x_{1}^{r_{1}}\dots\partial x_{d}^{r_{d}}}$. The Hölder class $\Sigma(s,H)$ of functions on $L_{2}(\mathcal{X})$ consists of the functions $f$ that satisfy $$\left|D^{r}f(x)-D^{r}f(y)\right|\leq H\left\Vert x-y\right\Vert ^{s-r},$$ for all $x,\,y\in\mathcal{X}$ and for all $r$ s.t. $|r|\leq\left\lfloor s\right\rfloor $.* For notation, let $\ez Z$ denote the conditional expectation given $\mathbf{Z}$. MI Ensemble Estimation Extensions ================================= Continuous Random Variables {#subsec:cont_ensemble} -------------------------- We can also apply Theorem 5 to obtain MI estimators that achieve the parametric rate for the case when $\mathbf{X}$ and $\mathbf{Y}$ are continuous. For general $g$, (4) in the main paper indicates that we need $h_{X}^{d_{X}}h_{Y}^{d_{Y}}\propto N^{-1/2}$ for the $O(1/(Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}))$ terms to be $O(1/\sqrt{N})$.
We consider the more general case where the parameters may differ for $h_{X}$ and $h_{Y}$. Let $\mathcal{L}_{X}$ and $\mathcal{L}_{Y}$ be sets of real, positive numbers with $|\mathcal{L}_{X}|=L_{X}$ and $|\mathcal{L}_{Y}|=L_{Y}$. For each estimator in the ensemble, choose $l_{X}\in\mathcal{L_{X}}$ and $l_{Y}\in\mathcal{L}_{Y}$ and set $h_{X}(l_{X})=l_{X}N^{-1/(2(d_{X}+d_{Y}))}$ and $h_{Y}(l_{Y})=l_{Y}N^{-1/(2(d_{X}+d_{Y}))}$. Define the matrix $w$ s.t. $\sum_{l_{X}\in\mathcal{L}_{X},l_{Y}\in\mathcal{L}_{Y}}w(l_{X},l_{Y})=1$. From Theorems 1 and 2, conditions $\mathcal{C}.1$ and $\mathcal{C}.2$ are satisfied if $s\geq d_{X}+d_{Y}$ with $\psi_{i,j}(l_{X},l_{Y})=l_{X}^{i}l_{Y}^{j}$ and $\phi_{i,j}(N)=N^{-(i+j)/(2(d_{X}+d_{Y}))}$ for $0\leq i,j\leq d_{X}+d_{Y}$ s.t. $0<i+j\leq d_{X}+d_{Y}$. The optimal weight $w_{0}$ is calculated using (14) in the main paper. The resulting estimator $$\g{w_{0},1}^{cont}=\sum_{l_{X}\in\mathcal{L}_{X},l_{Y}\in\mathcal{L}_{Y}}w_{0}(l_{X},l_{Y})\g{h_{X}(l_{X}),h_{Y}(l_{Y})}$$ achieves the parametric MSE rate when $s\geq d_{X}+d_{Y}$. Again, if the mixed derivatives of the functional $g$ also satisfy the extra condition required for (5) in the main paper, then we can define an estimator that achieves the parametric MSE rate under less strict smoothness assumptions. See Appendix \[subsec:cont\_ensemble2\]. The ODin2 Estimators {#subsec:Odin2} -------------------- The estimators $\g{w_{0},1}$ and $\g{w_{0},1}^{cont}$ are analogous to the ODin1 estimators in [@moon2016arxiv; @moon2016isit]. In this section, we derive ensemble estimators of MI that achieve the parametric rate under less strict smoothness assumptions on the densities. These estimators are analogous to the ODin2 estimators in [@moon2016arxiv; @moon2016isit]. ### Mixed Random Variables {#mixed-random-variables} We first consider the case where $\mathbf{X}$ is continuous and $\mathbf{Y}$ is discrete. Recall that if $\mathbf{h}_{X|y}=l\mathbf{N}_{y}^{-\beta}$ with $0<\beta<\frac{1}{d_{X}}$ and $l$ a positive number, and if $g\left(t_{1},t_{2}\right)$ has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_{1}^{j}\partial t_{2}^{l}}$ that depend on $t_{1}$ and $t_{2}$ only through $t_{1}^{\alpha}t_{2}^{\beta}$ for some $\alpha,\beta\in\mathbb{R}$ for each $1\leq j,l\leq\lambda$, then the bias of the plug-in estimator for this case is $$\begin{aligned} \bias\left[\g{h_{X},h_{X|Y}}\right] & =\sum_{\substack{m,n=0\\ i+j+m+n\neq0 } }^{\left\lfloor \lambda/2\right\rfloor }\sum_{i,j=0}^{r}c_{14,j,i,m,n}\frac{h_{X}^{i}l^{j}N^{-j\beta}}{\left(Nh_{X}^{d_{X}}\right)^{m}\left(l^{d_{X}}N^{1-\beta d_{X}}\right)^{n}}\nonumber \\ & +O\left(h_{X}^{s}+N^{-s\beta}+\frac{1}{\left(Nh_{X}^{d_{X}}\right)^{\lambda/2}}+\frac{1}{\left(N^{1-\beta d_{X}}\right)^{\lambda/2}}\right).\label{eq:bias_mixed2-1}\end{aligned}$$ Choose $\mathcal{L}$ to be a set of real positive numbers and let $\delta>0$. For each estimator in the ensemble, set $h_{X}(l)=lN^{-1/(d_{X}+\delta)}$ and $\mathbf{h}_{X|y}(l)=l\mathbf{N}_{y}^{-1/(d_{X}+\delta)}$ where $l\in\mathcal{L}$. This ensures that the final terms in (\[eq:bias\_mixed2-1\]) are $O(1/\sqrt{N})$ if $s\geq(d_{X}+\delta)/2$ and $\lambda\geq d_{X}/\delta+1$. Define $\g{w,2}$ as in (12) in the main paper with the chosen values of $h_{X}(l)$ and $\mathbf{h}_{X|y}(l)$. 
Theorem 5 can be applied in this case as conditions $\mathcal{C}.1$ and $\mathcal{C}.2$ are satisfied with $\phi_{i,m}(N)=N^{\frac{-i-m\delta}{d_{X}+\delta}}$ and $\psi_{i,m}(l)=l^{i-md_{X}}$ for $i\in\{0,\dots,r\}$, $m\in\{0,\dots\left\lfloor \lambda/2\right\rfloor \}$, and $0\leq i+m\delta\leq(d_{X}+\delta)/2$. Then if $s\geq(d_{X}+\delta)/2$ and $\lambda\geq d_{X}/\delta+1$, the MSE of the optimally weighted estimator $\g{w_{0},2}$ is $O(1/N)$. Then since $\delta$ can be chosen arbitrarily close to zero, the parametric rate can be achieved theoretically as long as $s>d_{X}/2$. The analogous divergence functional estimators for $\g{w_{0},1}$ and $\g{w_{0},2}$ in [@moon2016arxiv; @moon2016isit] were referred to as the ODin1 and ODin2 estimators, respectively, where ODin stands for **O**ptimally Weighted **Di**stributional Fu**n**ctional estimators. The ODin2 estimator has better statistical properties as the parametric rate is guaranteed under less restrictive smoothness assumptions on the densities. On the other hand, the number of parameters required for the optimization problem in (14) in the main paper is larger for the ODin2 estimator than the ODin1 estimator. In theory, this could lead to larger variance although this wasn’t necessarily true in practice according to the experiments in [@moon2016arxiv]. ### Continuous Random Variables {#subsec:cont_ensemble2} We now consider the case where both $\mathbf{X}$ and $\mathbf{Y}$ are continuous. Again, if $g\left(t_{1},t_{2}\right)$ has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_{1}^{j}\partial t_{2}^{l}}$ that depend on $t_{1}$ and $t_{2}$ only through $t_{1}^{\alpha}t_{2}^{\beta}$ for some $\alpha,\beta\in\mathbb{R}$ for each $1\leq k,l,\leq\lambda$, then the bias of $\gt$ is $$\begin{aligned} \bias\left[\gt\right] & = & \sum_{\substack{m,n=0\\ i+j+m+n\neq0 } }^{\left\lfloor \lambda/2\right\rfloor }\sum_{i,j=0}^{r}c_{11,j,i,m,n}\frac{h_{X}^{i}h_{Y}^{j}}{\left(Nh_{X}^{d_{X}}\right)^{m}\left(Nh_{Y}^{d_{Y}}\right)^{n}}\nonumber \\ & & +\sum_{m=1}^{\left\lfloor \lambda/2\right\rfloor }\sum_{i=0}^{r}\sum_{j=0}^{r}c_{13,m,n,j}h_{X}^{i}h_{Y}^{j}/\left(Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}\right)^{m}\nonumber \\ & & +O\left(h_{X}^{s}+h_{Y}^{s}+1/\left(Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}\right)^{\lambda/2}\right).\label{eq:bias2-1}\end{aligned}$$ Set $\delta>0$ and choose $h_{X}(l_{X})=l_{X}N^{-1/(d_{X}+d_{Y}+\delta)}$ and $h_{Y}(l_{Y})=l_{Y}N^{-1/(d_{X}+d_{Y}+\delta)}$. Then conditions $\mathcal{C}.1$ and $\mathcal{C}.2$ are satisfied if $s\geq(d_{X}+d_{Y}+\delta)/2$ and $\lambda\geq(d_{X}+d_{Y}+\delta)/\delta$ with $\psi_{1,i,j,m,n}(l_{X},l_{Y})=l_{X}^{i-md_{X}}l_{Y}^{j-nd_{Y}}$ and $\phi_{1,i,j,m,n}(N)=N^{-\frac{i+j+m(d_{Y}+\delta)+n(d_{X}+\delta)}{d_{X}+d_{Y}+\delta}}$ for $0<i+j+m(d_{Y}+\delta)+n(d_{X}+\delta)\leq\frac{d_{X}+d_{Y}+\delta}{2}$ and the terms $\psi_{2,i,j,m}(l_{X},l_{Y})=l_{X}^{i-md_{X}}l_{Y}^{j-md_{Y}}$ and $\phi_{2,i,j,m}(N)=N^{-\frac{i+j+m\delta}{d_{X}+d_{Y}+\delta}}$ for $m\geq1$ and $i+j+m\delta\leq\frac{d_{X}+d_{Y}+\delta}{2}.$ The optimal weight $w_{0}$ is again calculated using (14) in the main paper and the resulting estimator $\g{w_{0},2}^{cont}$ achieves the parametric MSE convergence rate when $s\geq(d_{X}+d_{Y}+\delta)/2$. Since $\delta$ can be chosen arbitrarily close to zero, the parametric rate can be achieved theoretically as long as $s>(d_{X}+d_{Y})/2$. $\g{w_{0},2}^{cont}$ is the ODin2 estimator for continuous random variables. 
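To make the ensemble construction above concrete, the following minimal sketch (not the authors' code) shows how an optimal weight $w_{0}$ might be computed numerically. It assumes that the optimization in (14) of the main paper takes the usual ODin form: minimize the $\ell_{2}$ norm of $w$ subject to $\sum_{l}w(l)=1$ and to zeroing the lower-order bias terms $\sum_{l}w(l)\psi_{i}(l)=0$. Only the simple basis functions $\psi_{i}(l)=l^{i}$ are included for brevity, and the function name `odin_weights` is illustrative.

```python
import numpy as np

# A minimal sketch of an ODin-style weight optimization: bandwidth multipliers l in L,
# bias basis functions psi_i(l) = l^i for i = 1, ..., d, and the constraints
#   sum_l w(l) = 1,   sum_l w(l) * psi_i(l) = 0   for each i.
# The L2-relaxed problem  min ||w||^2  s.t.  C w = b  is solved through its KKT system.

def odin_weights(L, d):
    L = np.asarray(L, dtype=float)
    J = len(L)
    # First constraint row enforces sum(w) = 1; the remaining rows kill the l^i bias terms.
    C = np.vstack([np.ones(J)] + [L**i for i in range(1, d + 1)])
    b = np.zeros(d + 1)
    b[0] = 1.0
    # KKT system of  min w^T w  subject to  C w = b.
    K = np.block([[2.0 * np.eye(J), C.T],
                  [C, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([np.zeros(J), b])
    sol, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return sol[:J]

if __name__ == "__main__":
    L = np.linspace(1.0, 3.0, 30)      # bandwidth multipliers; |L| exceeds the number of constraints
    w0 = odin_weights(L, d=3)
    print(w0.sum(), max(abs(np.dot(w0, L**i)) for i in range(1, 4)))  # ~1 and ~0
```

The printed values confirm that the weights sum to one while the targeted lower-order bias contributions are annihilated; in the full construction the constraint rows would instead be built from the $\psi_{i,j}$ or $\psi_{i,m}$ functions listed above.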
Proof of Theorem 1 (Bias) {#sec:biasProof} ========================= The proof of the bias results in Theorem 1 shares some similarities with the proof of the bias results for the divergence functional estimators in [@moon2016arxiv]. The primary differences deal with the product of the marginal KDEs that appear in the expansion of the bias terms. The bias of $\gt$ can be expressed as $$\begin{aligned} \bias\left[\gt\right] & = & \bE\left[g\left(\frac{\ft X(\mathbf{X})\ft Y(\mathbf{Y})}{\ft Z(\mathbf{X},\mathbf{Y})}\right)-g\left(\frac{f_{X}(\mathbf{X})f_{Y}(\mathbf{Y})}{f_{XY}(\mathbf{X},\mathbf{Y})}\right)\right]\nonumber \\ & = & \bE\left[g\left(\frac{\ft X(\mathbf{X})\ft Y(\mathbf{Y})}{\ft Z(\mathbf{X},\mathbf{Y})}\right)-g\left(\frac{\ez X\left[\ft X(\mathbf{X})\right]\ez Y\left[\ft Y(\mathbf{Y})\right]}{\ez{X,Y}\ft Z(\mathbf{X},\mathbf{Y})}\right)\right]\nonumber \\ & & +\bE\left[g\left(\frac{\ez X\left[\ft X(\mathbf{X})\right]\ez Y\left[\ft Y(\mathbf{Y})\right]}{\ez{X,Y}\ft Z(\mathbf{X},\mathbf{Y})}\right)-g\left(\frac{f_{X}(\mathbf{X})f_{Y}(\mathbf{Y})}{f_{XY}(\mathbf{X},\mathbf{Y})}\right)\right],\label{eq:gsplit}\end{aligned}$$ where $\mathbf{X}$ and $\mathbf{Y}$ are drawn jointly from $f_{XY}$. We can view these terms as a variance-like component (the first term) and a bias-like component, where the respective Taylor series expansions depend on variance-like or bias-like terms of the KDEs. We first consider the bias-like term, i.e. the second term in (\[eq:gsplit\]). The Taylor series expansion of $g\left(\frac{\ez X\left[\ft X(\mathbf{X})\right]\ez Y\left[\ft Y(\mathbf{Y})\right]}{\ez{X,Y}\ft Z(\mathbf{X},\mathbf{Y})}\right)$ around $f_{X}(\mathbf{X})f_{Y}(\mathbf{Y})$ and $f_{XY}(\mathbf{X},\mathbf{Y})$ gives an expansion with terms of the form of $$\begin{aligned} \bias_{\mathbf{Z}}^{i}\left[\ft X(\mathbf{X})\ft Y(\mathbf{Y})\right] & = & \left(\ez X\left[\ft X(\mathbf{X})\right]\ez Y\left[\ft Y(\mathbf{Y})\right]-f_{X}(\mathbf{X})f_{Y}(\mathbf{Y})\right)^{i},\nonumber \\ \bias_{\mathbf{Z}}^{i}\left[\ft Z(\mathbf{X},\mathbf{Y})\right] & = & \left(\ez{X,Y}\ft Z(\mathbf{X},\mathbf{Y})-f_{XY}(\mathbf{X},\mathbf{Y})\right)^{i}.\label{eq:biasterms}\end{aligned}$$ Since we are not doing boundary correction, we need to consider separately the cases when $\mathbf{Z}$ is in the interior of the support $\mathcal{S}_{X}\times\mathcal{S}_{Y}$ and when $\mathbf{Z}$ is close to the boundary of the support. For precise definitions, a point $Z=(X,Y)\in\mathcal{S}_{X}\times\mathcal{S}_{Y}$ is in the interior of $\mathcal{S}_{X}\times\mathcal{S}_{Y}$ if for all $Z^{'}\notin\mathcal{S}_{X}\times\mathcal{S}_{Y}$, $K_{X}\left(\frac{X-X^{'}}{h_{X}}\right)K_{Y}\left(\frac{Y-Y^{'}}{h_{Y}}\right)=0$, and a point $Z\in\mathcal{S}_{X}\times\mathcal{S}_{Y}$ is near the boundary of the support if it is not in the interior. 
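As an aside, the interior/boundary split can be visualized with the short sketch below. It is an illustration only and assumes a uniform product kernel supported on $[-1,1]^{d}$ and support $\mathcal{S}=[0,1]^{d}$ (both assumptions of ours, not of the theorem): a point is then interior exactly when it lies at least $h$ away from every face of the cube, and the probability mass of the boundary region shrinks linearly in $h$.

```python
import numpy as np

# Illustration (assumed uniform product kernel on [-1,1]^d, support S = [0,1]^d):
# a point is "interior" iff the kernel window around it lies entirely inside S.

def is_interior(x, h):
    x = np.asarray(x, dtype=float)
    return bool(np.all((x >= h) & (x <= 1.0 - h)))

def boundary_fraction(n, d, h, rng=np.random.default_rng(0)):
    pts = rng.uniform(size=(n, d))
    return 1.0 - np.mean([is_interior(p, h) for p in pts])

if __name__ == "__main__":
    # Under a uniform density the boundary region has probability O(h),
    # which is why boundary points contribute only powers of h to the bias.
    for h in (0.2, 0.1, 0.05):
        print(h, boundary_fraction(20000, 3, h))
```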
It can be shown by Taylor series expansions of the probability densities that for $\mathbf{Z}=(\mathbf{X},\mathbf{Y})$ drawn from $f_{XY}$ in the interior of $\mathcal{S}_{X}\times\mathcal{S}_{Y}$, then $$\begin{aligned} \ez X\left[\ft X(\mathbf{X})\right] & = & f_{X}(\mathbf{X})+\sum_{j=1}^{\left\lfloor s/2\right\rfloor }c_{X,j}(\mathbf{X})h_{X}^{2j}+O\left(h_{X}^{s}\right),\label{eq:E_fx}\\ \ez Y\left[\ft Y(\mathbf{Y})\right] & = & f_{Y}(\mathbf{Y})+\sum_{j=1}^{\left\lfloor s/2\right\rfloor }c_{Y,j}(\mathbf{Y})h_{Y}^{2j}+O\left(h_{Y}^{s}\right),\nonumber \\ \ez{X,Y}\left[\ft Z(\mathbf{Z})\right] & = & f_{XY}(\mathbf{X},\mathbf{Y})+\sum_{\substack{i=0\\ i+j\neq0 } }^{\left\lfloor s/2\right\rfloor }\sum_{j=0}^{\left\lfloor s/2\right\rfloor }c_{XY,i,j}(\mathbf{X},\mathbf{Y})h_{X}^{2i}h_{Y}^{2j}+O\left(h_{X}^{s}+h_{Y}^{s}\right).\nonumber \end{aligned}$$ For a point near the boundary of the support, we extend the expectation beyond the support of the density. As an example if $\mathbf{X}$ is near the boundary of $\mathcal{S}_{X}$, then we get $$\begin{aligned} \mathbb{E}_{\mathbf{X}}\left[\ft i(\mathbf{X})\right]-f_{i}(\mathbf{X}) & = & \frac{1}{h_{X}^{d_{X}}}\int_{V:V\in\mathcal{S}_{X}}K_{X}\left(\frac{\mathbf{X}-V}{h_{X}}\right)f_{X}(V)dV-f_{X}(\mathbf{X})\nonumber \\ & = & \left[\frac{1}{h_{X}^{d_{X}}}\int_{V:K_{X}\left(\frac{\mathbf{X}-V}{h_{X}}\right)>0}K_{X}\left(\frac{\mathbf{X}-V}{h_{X}}\right)f_{X}(V)dV-f_{X}(\mathbf{X})\right]\nonumber \\ & & -\left[\frac{1}{h_{X}^{d_{X}}}\int_{V:V\notin\mathcal{S}_{X}}K_{X}\left(\frac{\mathbf{X}-V}{h_{X}}\right)f_{X}(V)dV\right]\nonumber \\ & = & T_{1,X}(\mathbf{X})-T_{2,X}(\mathbf{X}).\label{eq:Tdiff}\end{aligned}$$ We only evaulate the density $f_{X}$ and its derivatives at points within the support when we take its Taylor series expansion. Thus the exact manner in which we define the extension of $f_{X}$ does not matter as long as the Taylor series remains the same and as long as the extension is smooth. Thus the expected value of $T_{1,X}(\mathbf{X})$ gives an expression of the form of (\[eq:E\_fx\]). For the $T_{2,X}(\mathbf{X})$ term, we can use multi-index notation on the expansion of $f_{X}$ to show that $$\begin{aligned} T_{2,X}(\mathbf{X}) & =\left[\frac{1}{h_{X}^{d_{X}}}\int_{V:V\notin\mathcal{S}_{X}}K_{X}\left(\frac{\mathbf{X}-V}{h_{X}}\right)f_{X}(V)dV\right]\\ & =\int_{u:h_{X}u+\mathbf{X}\notin\mathcal{S}_{X},K_{X}(u)>0}K_{X}(u)f_{X}(\mathbf{X}+h_{X}u)du\\ & =\sum_{|\alpha|\leq r}\frac{h_{X}^{|\alpha|}}{\alpha!}\int_{u:h_{X}u+\mathbf{X}\notin\mathcal{S}_{X},K_{X}(u)>0}K_{X}(u)D^{\alpha}f_{X}(\mathbf{X})u^{\alpha}du+o(h_{X}^{r}).\end{aligned}$$ Then since the $|\alpha|$th derivative of $f_{X}$ is $r-|\alpha|$ times differentiable, we apply the condition in assumption $\mathcal{A}.5$ to obtain $$\bE\left[T_{2,X}(\mathbf{X})\right]=\sum_{i=1}^{r}e_{i}h_{X}^{i}+o\left(h_{X}^{r}\right).$$ Similar expressions can be found for $\ft Y$ and $\ft Z$ and for when (\[eq:Tdiff\]) is raised to a power $t$. 
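Before treating the variance-like term, a quick numerical sanity check of the interior expansion (\[eq:E\_fx\]) may be helpful. The sketch below uses a uniform kernel on $[-1,1]$ and a one-dimensional Gaussian density (illustrative choices of ours): the smoothed value $\frac{1}{2h}\int_{x-h}^{x+h}f(t)\,dt$ is available in closed form, and its deviation from $f(x)$ at an interior point indeed shrinks like $h^{2}$, matching the leading $h^{2j}$ terms above.

```python
import math

# Sanity check (illustration only): for the uniform kernel K = 1/2 on [-1, 1] and a
# standard normal density, E[f_h(x)] = (Phi(x + h) - Phi(x - h)) / (2 h), so the
# pointwise bias at an interior point should behave like (f''(x)/6) * h^2.

def phi(x):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def f(x):     # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def kde_bias(x, h):
    return (phi(x + h) - phi(x - h)) / (2.0 * h) - f(x)

if __name__ == "__main__":
    x = 0.3
    for h in (0.4, 0.2, 0.1, 0.05):
        print(h, kde_bias(x, h), kde_bias(x, h) / h**2)   # the ratio settles near f''(x)/6
```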
Applying this result gives for the second term in (\[eq:gsplit\]), $$\sum_{\substack{j=0\\ i+j\neq0 } }^{r}\sum_{i=0}^{r}c_{10,i,j}h_{X}^{i}h_{Y}^{j}+O\left(h_{X}^{s}+h_{Y}^{s}\right).\label{eq:varterm_bound}$$ For the first term in (\[eq:gsplit\]), a Taylor series expansion of $g\left(\frac{\ft X(\mathbf{X})\ft Y(\mathbf{Y})}{\ft Z(\mathbf{X},\mathbf{Y})}\right)$ around $\ez X\left[\ft X(\mathbf{X})\right]\ez Y\left[\ft Y(\mathbf{Y})\right]$ and $\ez{X,Y}\ft Z(\mathbf{X},\mathbf{Y})$ gives an expansion with terms of the form of $$\begin{aligned} \et Z^{q}(\mathbf{Z}) & = & \left(\ft Z(\mathbf{Z})-\ez Z\left[\ft Z(\mathbf{Z})\right]\right)^{q},\nonumber \\ \ett XY^{q}(\mathbf{Z}) & = & \left(\ft X(\mathbf{X})\ft Y(\mathbf{Y})-\ez X\left[\ft X(\mathbf{X})\right]\ez Y\left[\ft Y(\mathbf{Y})\right]\right)^{q}.\label{eq:varterms}\end{aligned}$$ We can take the expected value of these expressions to obtain terms of the form of $$\frac{1}{Nh_{X}^{d_{X}}},\,\frac{1}{Nh_{Y}^{d_{Y}}},\,\frac{1}{N^{2}h_{X}^{d_{X}}h_{Y}^{d_{Y}}},\,\frac{1}{Nh_{X}^{d_{X}}h_{Y}^{d_{Y}}}\label{eq:variance_terms}$$ and their respective powers. This can be seen for $\ett XY^{q}(\mathbf{Z})$ as follows. Define $$\begin{aligned} \mathbf{V}_{i,j}(\mathbf{Z}) & =K_{X}\left(\frac{\mathbf{X}_{i}-\X}{h_{X}}\right)K_{Y}\left(\frac{\Y_{j}-\Y}{h_{Y}}\right)-\ez X\left[K_{X}\left(\frac{\mathbf{X}_{i}-\X}{h_{X}}\right)\right]\ez Y\left[K_{Y}\left(\frac{\Y_{j}-\Y}{h_{Y}}\right)\right]\\ & =\eta_{ij}(\Z)-\ez X\left[\eta_{i}(\X)\right]\ez Y\left[\eta_{j}^{'}(\Y)\right].\end{aligned}$$ We can then write $$\ett XY(\mathbf{Z})=\frac{1}{N^{2}h_{X}^{d_{X}}h_{Y}^{d_{Y}}}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathbf{V}_{i,j}(\Z).$$ The binomial theorem then gives $$\ez Z\left[\mathbf{V}_{i,j}^{k}(\Z)\right]=\sum_{l=0}^{k}\binom{k}{l}\ez Z\left[\eta_{ij}^{l}(\Z)\right]\left(\ez X\left[\eta_{i}(\X)\right]\ez Y\left[\eta_{j}^{'}(\Y)\right]\right)^{k-l}.\label{eq:exp_V}$$ By using a similar Taylor series analysis as before, for $\Z$ in the interior, $$\ez Z\left[\eta_{ij}^{l}(\Z)\right]=h_{X}^{d_{X}}h_{Y}^{d_{Y}}\sum_{m,n=0}^{\left\lfloor s/2\right\rfloor }c_{XY,2,m,n,l}(\Z)h_{X}^{2m}h_{Y}^{2n}+O\left(h_{X}^{2d_{X}}h_{Y}^{d_{Y}}+h_{X}^{d_{X}}h_{Y}^{2d_{Y}}\right).$$ Combining this with (\[eq:E\_fx\]) and (\[eq:exp\_V\]) gives $$\ez Z\left[\mathbf{V}_{i,j}^{k}(\Z)\right]=h_{X}^{d_{X}}h_{Y}^{d_{Y}}\sum_{m,n=0}^{\left\lfloor s/2\right\rfloor }c_{XY,3,m,n,k}(\X)h_{X}^{2m}h_{Y}^{2n}+O\left(h_{X}^{2d_{X}}h_{Y}^{d_{Y}}+h_{X}^{d_{X}}h_{Y}^{2d_{Y}}\right),\label{eq:exp_Vk}$$ where the constants depend on the densities, their derivatives, and the moments of the kernels. As an example, let $q=2$. 
Then due to the independence between the $\Z_{i}$ samples, $$\begin{aligned} \ez Z\left[\ett XY^{2}(\mathbf{Z})\right] & =\frac{1}{N^{4}h_{X}^{2d_{X}}h_{Y}^{2d_{Y}}}\sum_{i,j,m,n=1}^{N}\ez Z\left[\mathbf{V}_{i,j}(\Z)\mathbf{V}_{m,n}(\Z)\right]\\ & =\frac{1}{N^{2}h_{X}^{2d_{X}}h_{Y}^{2d_{Y}}}\ez Z\left[\mathbf{V}_{i,j}^{2}(\Z)\right]+\frac{(N-1)}{N^{2}h_{X}^{2d_{X}}h_{Y}^{2d_{Y}}}\ez Z\left[\mathbf{V}_{i,j}(\Z)\mathbf{V}_{i,n}(\Z)\right]\\ & =\frac{1}{N^{2}h_{X}^{d_{X}}h_{Y}^{d_{Y}}}\sum_{m,n=0}^{\left\lfloor s/2\right\rfloor }c_{XY,3,m,n,2}(\X)h_{X}^{2m}h_{Y}^{2n}+\sum_{m,n=0}^{\left\lfloor s/2\right\rfloor }\sum_{\substack{i,j=0\\ i+j\neq0 } }^{1}c_{XY,4,m,n,i,j}(\X)\frac{h_{X}^{2m}h_{Y}^{2n}}{Nh_{X}^{id_{X}}h_{Y}^{jd_{Y}}}+O\left(\frac{1}{N}\right),\end{aligned}$$ where the last step follows from (\[eq:exp\_Vk\]) and a similar analysis of $\ez Z\left[\mathbf{V}_{i,j}(\Z)\mathbf{V}_{i,n}(\Z)\right]$. For $q>2$, it can be shown that if $n(q)$ is the set of integer divisors of $q$ including 1 but excluding $q$, then $$\ez Z\left[\ett XY^{q}(\mathbf{Z})\right]=\sum_{i,j=0}^{\left\lfloor s/2\right\rfloor }\left(\sum_{n\in n(q)}\frac{c_{XY,5,i,j,q,n}(\Z)}{\left(N^{2}h_{X}^{d_{X}}h_{Y}^{d_{Y}}\right)^{q-n}}+\sum_{\substack{m\in n(q)\cup\{q\}\\ n\in n(q)\cup\{q\}\\ m+n\neq2q } }\frac{c_{XY,6,i,j,q,m,n}(\Z)}{\left(Nh_{X}^{d_{X}}\right)^{q-n}\left(Nh_{Y}^{d_{Y}}\right)^{q-m}}\right)h_{X}^{2i}h_{Y}^{2j}+O\left(\frac{1}{N}\right).$$ A similar procedure can be used to find the expression for $\ez Z\left[\et Z^{q}(\mathbf{Z})\right]$. When $\Z$ is near the boundary of the supposrt, we can obtain similar expressions by following a similar procedure as in the derivation of (\[eq:varterm\_bound\]). This results in powers of $h_{X}^{m}h_{Y}^{n}$ instead of $h_{X}^{2m}h_{Y}^{2n}$. For general functionals $g$, we can only guarantee that the mixed derivatives of $g$ evaluated at $\ez X\left[\ft X(\mathbf{X})\right]\ez Y\left[\ft Y(\mathbf{Y})\right]$ and $\ez{X,Y}\ft Z(\mathbf{X},\mathbf{Y})$ converge to the mixed derivative evaluated at $f_{X}(\mathbf{X})f_{Y}(\mathbf{Y})$ and $f_{XY}(\mathbf{X},\mathbf{Y})$ at some rate $o(1)$. Thus we are left with the following terms in the bias: $$o\left(\frac{1}{Nh_{X}^{d_{X}}}+\frac{1}{Nh_{Y}^{d_{Y}}}\right)$$ However, if we know that $g\left(t_{1},t_{2}\right)$ has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_{1}^{j}\partial t_{2}^{l}}$ that depend on $t_{1}$ and $t_{2}$ only through $t_{1}^{\alpha}t_{2}^{\beta}$ for some $\alpha,\beta\in\mathbb{R}$, then by the generalized binomial theorem, we find that $$\left(\ez{\mathbf{X}}\ft X(\mathbf{X})\right)^{\alpha}=\sum_{m=0}^{\infty}\binom{\alpha}{m}f_{X}^{\alpha-m}(\mathbf{X})\left(\sum_{j=1}^{\left\lfloor s/2\right\rfloor }c_{i,j}(\mathbf{X})h_{X}^{2j}+O\left(h_{X}^{s}\right)\right)^{m}.$$ A similar result holds for $\left(\ez{\mathbf{Y}}\ft Y(\mathbf{Y})\right)^{\alpha}$ and $\left(\ez Z\ft Z(\mathbf{Z})\right)^{\alpha}$. Combining these expressions with \[eq:variance\_terms\] completes the proof. Proof of Theorem 2 (Variance) {#sec:VarProof} ============================= As for the bias, the proof of the variance result in Theorem 2 is similar to the proof of the variance result in [@moon2016arxiv] and so we do not present all of the details. The primary differences again deal with the product of the marginal KDEs. 
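For reference, the quantity whose bias was just bounded can be written down very compactly. The sketch below is a hedged illustration of the plug-in estimator $\gt$ for continuous $\X$ and $\Y$: the Gaussian kernels, the leave-one-out evaluation, and the choice $g(t)=-\log t$ (which recovers Shannon mutual information) are our own illustrative choices rather than the exact construction of the main paper.

```python
import numpy as np

# Hedged sketch of the plug-in quantity analyzed above for continuous X, Y:
#   G_hat = (1/N) * sum_i g( f_X(X_i) f_Y(Y_i) / f_XY(X_i, Y_i) ),   with g(t) = -log t.

def gauss_kernel_matrix(data, h):
    """Pairwise normalized Gaussian kernel values (1/h^d) K((x_i - x_j)/h)."""
    N, d = data.shape
    diff = (data[:, None, :] - data[None, :, :]) / h
    return np.exp(-0.5 * np.sum(diff**2, axis=2)) / ((2 * np.pi) ** (d / 2) * h**d)

def mi_plugin(X, Y, hx, hy):
    Kx = gauss_kernel_matrix(X, hx)
    Ky = gauss_kernel_matrix(Y, hy)
    np.fill_diagonal(Kx, 0.0)                 # leave-one-out evaluation
    np.fill_diagonal(Ky, 0.0)
    N = X.shape[0]
    fx = Kx.sum(axis=1) / (N - 1)             # marginal KDE of X at each X_i
    fy = Ky.sum(axis=1) / (N - 1)             # marginal KDE of Y at each Y_i
    fz = (Kx * Ky).sum(axis=1) / (N - 1)      # product-kernel KDE of the joint density
    return np.mean(-np.log(fx * fy / fz))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, rho = 2000, 0.6
    X = rng.standard_normal((N, 1))
    Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal((N, 1))
    # Bivariate Gaussian with correlation rho has Shannon MI = -0.5 log(1 - rho^2).
    print(mi_plugin(X, Y, 0.25, 0.25), -0.5 * np.log(1 - rho**2))
```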
The proof uses the Efron-Stein inequality [@efron1981jackknife]: (Efron-Stein Inequality) Let $\mathbf{X}_{1},\dots,\mathbf{X}_{n},\mathbf{X}_{1}^{'},\dots,\mathbf{X}_{n}^{'}$ be independent random variables on the space $\mathcal{S}$. Then if $f:\mathcal{S}\times\dots\times\mathcal{S}\rightarrow\mathbb{R}$, we have that $$\var\left[f(\mathbf{X}_{1},\dots,\mathbf{X}_{n})\right]\leq\frac{1}{2}\sum_{i=1}^{n}\bE\left[\left(f(\mathbf{X}_{1},\dots,\mathbf{X}_{n})-f(\mathbf{X}_{1},\dots,\mathbf{X}_{i}^{'},\dots,\mathbf{X}_{n})\right)^{2}\right].$$ In this case we consider the samples $\left\{ \mathbf{Z}_{1},\dots,\mathbf{Z}_{N}\right\} $ and $\left\{ \mathbf{Z}_{1}^{'},\Z_{2}\dots,\mathbf{Z}_{N}\right\} $ and the respective estimators $\gt$ and $\gt^{'}$. By the triangle inequality, $$\begin{aligned} \left|\gt-\gt^{'}\right| & \leq & \frac{1}{N}\left|g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)-g\left(\frac{\ft X(\mathbf{X}_{1}^{'})\ft Y(\mathbf{Y}_{1}^{'})}{\ft Z(\mathbf{X}_{1}^{'},\mathbf{Y}_{1}^{'})}\right)\right|\nonumber \\ & & +\frac{1}{N}\sum_{j=2}^{N_{2}}\left|g\left(\frac{\ft X(\mathbf{X}_{j})\ft Y(\mathbf{Y}_{j})}{\ft Z(\mathbf{X}_{j},\mathbf{Y}_{j})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{j})\ft Y^{'}(\mathbf{Y}_{1})}{\ft Z^{'}(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right|.\label{eq:triangle}\end{aligned}$$ By the Lipschitz condition on $g$, the first term in (\[eq:triangle\]) can be decomposed into terms of the form of $$\left|\ft Z(\mathbf{Z}_{1})-\ft Z(\mathbf{Z}_{1}^{'})\right|,$$ $$\left|\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})-\ft X(\mathbf{X}_{1}^{'})\ft Y^{'}(\mathbf{Y}_{1})\right|.$$ By making a substitution in the expectation, it can be shown that $$\bE\left[\left|\ft Z(\mathbf{Z}_{1})-\ft Z(\mathbf{Z}_{1}^{'})\right|^{2}\right]\leq2||K_{X}\cdot K_{Y}||_{\infty}^{2}.$$ For the product of the marginal KDEs, we have that $$\begin{aligned} \ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1}) & = & \frac{1}{M^{2}h_{X}^{d_{X}}h_{Y}^{d_{Y}}}\sum_{i=2}^{N}\sum_{j=2}^{N}K_{X}\left(\frac{\mathbf{X}_{1}-\mathbf{X}_{i}}{h_{X}}\right)K_{Y}\left(\frac{\mathbf{Y}_{1}-\mathbf{Y}_{j}}{h_{Y}}\right)\\ & = & \frac{1}{M}\ft Z(\mathbf{Z}_{1})+\frac{1}{M^{2}h_{X}^{d_{X}}h_{Y}^{d_{Y}}}\sum_{i\neq j}K_{X}\left(\frac{\mathbf{X}_{1}-\mathbf{X}_{i}}{h_{X}}\right)K_{Y}\left(\frac{\mathbf{Y}_{1}-\mathbf{Y}_{j}}{h_{Y}}\right).\end{aligned}$$ By applying the triangle inequality, Jensen’s inequality, and similar substitutions, we get $$\begin{aligned} \bE\left[\left|\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})-\ft X(\mathbf{X}_{1}^{'})\ft Y(\mathbf{Y}_{1}^{'})\right|^{2}\right] & \leq & \bE\left[\frac{2}{M^{2}}\left|\ft Z(\mathbf{Z}_{1})-\ft Z(\mathbf{Z}_{1}^{'})\right|^{2}\right]\\ & & +\frac{2(M-1)}{M^{3}h_{X}^{2d_{X}}h_{Y}^{2d_{Y}}}\times\\ & & \sum_{i\neq j}\bE\left[\left(K_{X}\left(\frac{\mathbf{X}_{1}-\mathbf{X}_{i}}{h_{X}}\right)K_{Y}\left(\frac{\mathbf{Y}_{1}-\mathbf{Y}_{j}}{h_{Y}}\right)\right.\right.\\ & & \left.\left.-K_{X}\left(\frac{\mathbf{X}_{1}^{'}-\mathbf{X}_{i}}{h_{X}}\right)K_{Y}\left(\frac{\mathbf{Y}_{1}^{'}-\mathbf{Y}_{j}}{h_{Y}}\right)\right)^{2}\right]\\ & \leq & \frac{4+2(M-1)^{2}}{M^{2}}||K_{X}\cdot K_{Y}||^{2}.\end{aligned}$$ For the second term in (\[eq:triangle\]), it can be shown that $$\begin{aligned} \bE\left[\left|\ft Z(\mathbf{Z}_{i})-\ft Z^{'}(\mathbf{Z}_{i})\right|^{2}\right] & = & 
\frac{1}{M^{2}h_{X}^{2d_{X}}h_{Y}^{2d_{Y}}}\bE\left[\left(K_{X}\left(\frac{\mathbf{X}_{1}-\mathbf{X}_{i}}{h_{X}}\right)K_{Y}\left(\frac{\mathbf{Y}_{1}-\mathbf{Y}_{j}}{h_{Y}}\right)\right.\right.\\ & & \left.\left.-K_{X}\left(\frac{\mathbf{X}_{1}^{'}-\mathbf{X}_{i}}{h_{X}}\right)K_{Y}\left(\frac{\mathbf{Y}_{1}^{'}-\mathbf{Y}_{j}}{h_{Y}}\right)\right)^{2}\right]\\ & \leq & \frac{2||K_{X}\cdot K_{Y}||_{\infty}^{2}}{M^{2}}.\end{aligned}$$ By a similar approach, $$\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})-\ft X^{'}(\mathbf{X}_{i})\ft Y^{'}(\mathbf{Y}_{i})$$ $$\begin{aligned} & = & \ft Z(\mathbf{Z}_{i})-\ft Z^{'}(\mathbf{Z}_{i})+\frac{1}{M^{2}h_{X}^{d_{X}}h_{Y}^{d_{Y}}}\left(\sum_{\substack{n=2\\ n\neq i } }K_{Y}\left(\frac{\mathbf{Y}_{i}-\mathbf{Y}_{n}}{h_{Y}}\right)\left(K_{X}\left(\frac{\mathbf{X}_{i}-\mathbf{X}_{1}}{h_{X}}\right)-K_{X}\left(\frac{\mathbf{X}_{i}-\mathbf{X}_{1}^{'}}{h_{X}}\right)\right)\right.\\ & & \left.+\sum_{\substack{n=2\\ n\neq i } }K_{X}\left(\frac{\mathbf{X}_{i}-\mathbf{X}_{n}}{h_{X}}\right)\left(K_{Y}\left(\frac{\mathbf{Y}_{i}-\mathbf{Y}_{1}}{h_{Y}}\right)-K_{Y}\left(\frac{\mathbf{Y}_{i}-\mathbf{Y}_{1}^{'}}{h_{Y}}\right)\right)\right),\end{aligned}$$ $$\implies\bE\left[\left|\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})-\ft X^{'}(\mathbf{X}_{i})\ft Y^{'}(\mathbf{Y}_{i})\right|^{2}\right]\leq6||K_{X}\cdot K_{Y}||_{\infty}^{2}\left(\frac{1}{M^{2}}+\frac{(M-2)^{2}}{M^{4}}\right)$$ We can then apply the Cauchy Schwarz inequality to bound the square of the second term in (\[eq:triangle\]) to get $$\bE\left[\left(\sum_{j=2}^{N_{2}}\left|g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{1})\ft Y^{'}(\mathbf{Y}_{1})}{\ft Z^{'}(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right|\right)^{2}\right]\leq14C_{g}^{2}||K_{X}\cdot K_{Y}||_{\infty}^{2}.$$ Applying Jensen’s inequality in conjunction with these results gives $$\bE\left[\left|\gt-\gt^{'}\right|^{2}\right]\leq\frac{44C_{g}^{2}||K_{X}\cdot K_{Y}||_{\infty}^{2}}{N^{2}}.$$ Applying the Efron-Stein inequality finishes the proof. Theory for Mixed Random Variables\[sec:MixedProofs\] ==================================================== Proof of Theorem 3 (Bias) ------------------------- Let $\mathbf{h}_{X|y}=l\mathbf{N}_{y}^{-\beta}$ for some positive $l$ and $0<\beta<\frac{1}{d_{X}}$. 
Under assumptions $\mathcal{A}.0-\mathcal{A}.5$, we prove that for general $g$, the bias of the plug-in estimator $\g{h_{X},h_{X|Y}}$ $$\begin{aligned} \bias\left[\g{h_{X},h_{X|Y}}\right] & = & \sum_{\substack{j=0\\ i+j\neq0 } }^{r}\sum_{i=0}^{r}c_{13,i,j}h_{X}^{i}l^{j}N^{-j\beta}+\frac{c_{14,X}}{Nh_{X}^{d_{X}}}+\frac{c_{14,y}}{l^{d_{X}}N^{1-\beta d_{X}}}\nonumber \\ & & +O\left(h_{X}^{s}+N^{-s\beta}+\frac{1}{Nh_{X}^{d_{X}}}+\frac{1}{N^{1-\beta d_{X}}}+\frac{1}{N}\right).\label{eq:bias_mixed1-1}\end{aligned}$$ Furthermore, if $g\left(t_{1},t_{2}\right)$ has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_{1}^{j}\partial t_{2}^{l}}$ that depend on $t_{1}$ and $t_{2}$ only through $t_{1}^{\alpha}t_{2}^{\beta}$ for some $\alpha,\beta\in\mathbb{R}$, then for any positive integer $\lambda\geq2$, the bias is $$\begin{aligned} \bias\left[\g{h_{X},h_{X|Y}}\right] & = & \sum_{\substack{j=0\\ i+j\neq0 } }^{r}\sum_{i=0}^{r}c_{13,i,j}h_{X}^{i}l^{j}N^{-j\beta}+\sum_{j=1}^{\lambda/2}\sum_{i=1}^{\lambda/2}\sum_{m=0}^{r}\sum_{n=0}^{r}c_{14,j,i,m,n}\frac{h_{X}^{m}l^{n}N^{-n\beta}}{\left(Nh_{X}^{d_{X}}\right)^{j}\left(l^{d_{X}}N^{1-\beta d_{X}}\right)^{i}}\nonumber \\ & & +\sum_{j=1}^{\lambda/2}\sum_{m=0}^{r}\sum_{n=0}^{r}\left(c_{14,m,n,j,X}\frac{h_{X}^{m}l^{n}N^{-n\beta}}{\left(Nh_{X}^{d_{X}}\right)^{j}}+c_{14,m,n,j,Y}\frac{h_{X}^{m}l^{n}N^{-n\beta}}{\left(l^{d_{X}}N^{1-\beta d_{X}}\right)^{j}}\right)\nonumber \\ & & +O\left(h_{X}^{s}+N^{-s\beta}+\frac{1}{\left(Nh_{X}^{d_{X}}\right)^{\lambda/2}}+\frac{1}{\left(N^{1-\beta d_{X}}\right)^{\lambda/2}}+\frac{1}{N}\right).\label{eq:bias_mixed2-1}\end{aligned}$$ We only prove (\[eq:bias\_mixed1-1\]) as the proof of (\[eq:bias\_mixed2-1\]) is identical. The bias of $\g{h_{X},h_{X|Y}}$ is $$\begin{aligned} \bias\left[\g{h_{X},h_{X|Y}}\right] & = & \bE\left[\g{h_{X},h_{X|Y}}\right]-G(\mathbf{X};\mathbf{Y})\\ & = & \bE\left[\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\g{h_{X},h_{X|y}}-g\left(\frac{f_{X}(\mathbf{X})}{f_{X|Y}(\mathbf{X}|\mathbf{Y})}\right)\right]\\ & = & \bE\left[\bE\left[\left.\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\g{h_{X},h_{X|y}}-g\left(\frac{f_{X}(\mathbf{X})}{f_{X|Y}(\mathbf{X}|\mathbf{Y})}\right)\right|\mathbf{Y},\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right]\\ & = & \bE\left[\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\bE\left[\left.\left(\g{h_{X},h_{X|y}}-g\left(\frac{f_{X}(\mathbf{X})}{f_{X|Y}(\mathbf{X}|\mathbf{Y})}\right)\right)\right|\mathbf{Y},\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right]\\ & = & \bE\left[\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}\bias\left[\left.\g{h_{X},h_{X|y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right],\end{aligned}$$ where we use the law of total expectation and the fact that $\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}=1$. Let $\mathbf{h}_{X|y}=l\mathbf{N}_{y}^{-\beta}$ for some positive $l$ and $0<\beta<\frac{1}{d_{X}}$. 
From Theorem 1, the conditional bias of $\g{h_{X},h_{X|y}}$ given $\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}$ is $$\begin{aligned} \bias\left[\left.\g{h_{X},h_{X|y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right] & = & \sum_{\substack{j=0\\ i+j\neq0 } }^{r}\sum_{i=0}^{r}c_{10,i,j}h_{X}^{i}\mathbf{h}_{X|y}^{j}+\frac{c_{11,X}}{\mathbf{N}_{y}h_{X}^{d_{X}}}+\frac{c_{11,y}}{\mathbf{N}_{y}\mathbf{h}_{X|y}^{d_{X}}}\nonumber \\ & & +O\left(h_{X}^{s}+\mathbf{h}_{X|y}^{s}+\frac{1}{\mathbf{N}_{y}h_{X}^{d_{X}}}+\frac{1}{\mathbf{N}_{y}\mathbf{h}_{X|y}^{d_{X}}}\right)\nonumber \\ & & =\sum_{\substack{j=0\\ i+j\neq0 } }^{r}\sum_{i=0}^{r}c_{10,i,j}h_{X}^{i}l^{j}\mathbf{N}_{y}^{-j\beta}+\frac{c_{11,X}}{\mathbf{N}_{y}h_{X}^{d_{X}}}+\frac{c_{11,y}}{l^{d_{X}}\mathbf{N}_{y}^{1-\beta d_{X}}}\nonumber \\ & & +O\left(h_{X}^{s}+\mathbf{N}_{y}^{-s\beta}+\frac{1}{\mathbf{N}_{y}h_{X}^{d_{X}}}+\frac{1}{\mathbf{N}_{y}^{1-\beta d_{X}}}\right).\label{eq:bias_cond}\end{aligned}$$ Multiplying (\[eq:bias\_cond\]) by $\mathbf{N}_{y}$ results in terms of the form of $\mathbf{N}_{y}^{1-\gamma}$ with $\gamma\geq0$. $\mathbf{N}_{y}$ is a binomial random variable with parameter $f_{Y}(y)$, $N$ trials, and mean $Nf_{Y}(y)$. We can compute the fractional moments of a binomial random variable by using the generalized binomial theorem to obtain (see the main paper) $$\begin{aligned} \bE\left[\mathbf{N}_{y}^{\alpha}\right] & = & \sum_{i=0}^{\infty}\left(\begin{array}{c} \alpha\\ i \end{array}\right)\left(Nf_{Y}(y)\right)^{\alpha-i}\bE\left[\left(\mathbf{N}_{Y}-Nf_{Y}(y)\right)^{i}\right]\\ & = & \sum_{i=0}^{\infty}\left(\begin{array}{c} \alpha\\ i \end{array}\right)\left(Nf_{Y}(y)\right)^{\alpha-i}\sum_{n=0}^{\left\lfloor i/2\right\rfloor }c_{n,i}(f_{Y}(y))N^{n}\\ & = & \sum_{i=0}^{\infty}\left(\begin{array}{c} \alpha\\ i \end{array}\right)f_{Y}(y)^{\alpha-i}\sum_{n=0}^{\left\lfloor i/2\right\rfloor }c_{n,i}(f_{Y}(y))N^{\alpha-i+n},\end{aligned}$$ where we use the following expression for the $i$-th central moment of a binomial random variable derived by Riordan [@riordan1937moment]: $$\bE\left[\left(\mathbf{N}_{Y}-Nf_{Y}(y)\right)^{i}\right]=\sum_{n=0}^{\left\lfloor i/2\right\rfloor }c_{n,i}(f_{Y}(y))N^{n}.$$ If $\alpha=1-\gamma$, then dividing by $N$ results in terms of the form of $N^{-\gamma-i+n}$. Since $n\leq\left\lfloor i/2\right\rfloor $, $-\gamma-i+n$ is always less than zero and is only greater than $-1$ if $i=0$. This completes the proof. Proof of Theorem 4 (Variance) {#subsec:varproof} ----------------------------- As for the bias, we assume that $\mathbf{h}_{X|y}=l\mathbf{N}_{y}^{-\beta}$ for some positive $l$ and $0<\beta<\frac{1}{d_{X}}$. By the law of total variance, we have $$\var\left[\g{h_{X},h_{X|Y}}\right]=\bE\left[\var\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right]+\var\left[\bE\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right].\label{eq:total_var}$$ Note that given all of the $\mathbf{Y}_{i}$’s, the estimators $\g{h_{X},h_{X|y}}$ are all independent since they use different sets of $\mathbf{X}_{i}$’s for each $y$. 
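The fractional-moment expansion of $\mathbf{N}_{y}$ used above (and relied on again below when bounding $\var\left[\mathbf{N}_{y}^{\gamma}\right]$) is easy to verify numerically. The following sketch uses illustrative values of $p_{y}$ and $\gamma$ and simply checks that the relative deviation of $\bE\left[\mathbf{N}_{y}^{\gamma}\right]$ from its leading term $\left(Np_{y}\right)^{\gamma}$ decays like $1/N$.

```python
import numpy as np

# Numerical check (illustrative parameters): for N_y ~ Binomial(N, p) the fractional
# moment E[N_y^gamma] should approach (N p)^gamma with a relative error of order 1/N,
# since the first correction term is gamma*(gamma-1)/2 * (N p)^(gamma-1) * (1 - p).

rng = np.random.default_rng(0)
p, gamma = 0.3, 0.5
for N in (100, 1000, 10000):
    Ny = rng.binomial(N, p, size=200000).astype(float)
    emp = np.mean(Ny**gamma)
    lead = (N * p) ** gamma
    print(N, emp, lead, (emp - lead) / lead)   # relative deviation shrinks roughly like 1/N
```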
By Theorem 2, we have $$\begin{aligned} \var\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right] & = & O\left(\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}^{2}}{N^{2}}\cdot\frac{1}{\mathbf{N}_{y}}\right)\\ & = & O\left(\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N^{2}}\right).\end{aligned}$$ Taking the expectation with respect to $\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}$ then gives $O\left(\frac{1}{N}\right)$ for the first term in (\[eq:total\_var\]). For the second term in (\[eq:total\_var\]), from (\[eq:bias\_cond\]) we have that for general $g$ $$\begin{aligned} \bE\left[\left.\g{h_{X},h_{X|y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right] & = & O\left(\sum_{j=0}^{r}\mathbf{N}_{y}^{-j\beta}+\frac{1}{\mathbf{N}_{y}}+\mathbf{N}_{y}^{-s\beta}+\mathbf{N}_{y}^{1-\beta d_{X}}\right)\\ & = & O\left(f\left(\mathbf{N}_{y}\right)\right).\end{aligned}$$ By the Efron-Stein inequality, we have that if $\mathbf{N}_{y}^{'}$ is an independent and identically distributed realization of $\mathbf{N}_{y}$, then $$\begin{aligned} \var\left[\sum_{y\in\mathcal{S}_{Y}}\frac{\mathbf{N}_{y}}{N}f\left(\mathbf{N}_{y}\right)\right] & \leq & \frac{1}{2N^{2}}\sum_{y\in\mathcal{S}_{Y}}\bE\left[\left(\mathbf{N}_{y}f\left(\mathbf{N}_{y}\right)-\mathbf{N}_{y}^{'}f\left(\mathbf{N}_{y}^{'}\right)\right)^{2}\right]\nonumber \\ & = & O\left(\frac{1}{N^{2}}\bE\left[\left(\mathbf{N}_{y}f\left(\mathbf{N}_{y}\right)-\mathbf{N}_{y}^{'}f\left(\mathbf{N}_{y}^{'}\right)\right)^{2}\right]\right)\nonumber \\ & = & O\left(\frac{1}{N^{2}}\var\left[\mathbf{N}_{y}f\left(\mathbf{N}_{y}\right)\right]\right),\label{eq:var_cond}\end{aligned}$$ where the second step follows from the fact that $\mathcal{S}_{Y}$ is finite and the last step follows from the fact that $\mathbf{N}_{y}$ and $\mathbf{N}_{y}^{'}$ are iid. The expression $\var\left[\mathbf{N}_{y}f\left(\mathbf{N}_{y}\right)\right]$ is simply a sum of terms of the form of $\var\left[\mathbf{N}_{y}^{\gamma}\right]$ where $0<\gamma\leq1$. Even the covariance terms can be bounded by the square root of the product of these terms by the Cauchy Schwarz inequality. Let $p_{y}=f_{Y}(y)$. Consider the Taylor series expansion of the function $h(x)=x^{\gamma}$ at the point $Np_{y}$. This is $$\begin{aligned} h(x) & = & \left(Np_{y}\right)^{\gamma}+\gamma\left(Np_{y}\right)^{\gamma-1}\left(x-Np_{y}\right)+\frac{\gamma(\gamma-1)}{2}\left(Np_{y}\right)^{\gamma-2}\left(x-Np_{y}\right)^{2}\nonumber \\ & & +\sum_{k=3}^{\infty}\frac{\gamma(\gamma-1)\dots(\gamma-k+1)}{k!}\left(Np_{y}\right)^{\gamma-k}\left(x-Np_{y}\right)^{k}.\label{eq:taylor_gamma}\end{aligned}$$ From Riordan [@riordan1937moment], we know that the $i$th central moment of $\mathbf{N}_{y}$ is $O\left(N^{\left\lfloor i/2\right\rfloor }\right)$. Then since $\gamma\leq1$, the last terms in (\[eq:taylor\_gamma\]) are $O\left(N^{-1}\right)$ when $x=\mathbf{N}_{y}$ and we take the expectation. 
Thus $$\begin{aligned} \bE\left[\mathbf{N}_{y}^{\gamma}\right] & = & \left(Np_{y}\right)^{\gamma}+\frac{\gamma(\gamma-1)}{2}\left(Np_{y}\right)^{\gamma-1}(1-p_{y})+O\left(N^{-1}\right)\\ \implies\bE\left[\mathbf{N}_{y}^{\gamma}\right]^{2} & = & \left(Np_{y}\right)^{2\gamma}+\gamma(\gamma-1)(1-p_{y})\left(Np_{y}\right)^{2\gamma-1}+\left(\frac{\gamma(\gamma-1)}{2}\right)^{2}\left(Np_{y}\right)^{2\gamma-2}\\ & & +O\left(N^{-1}\right).\end{aligned}$$ By a similar Taylor series expansion, we have that $$\bE\left[\mathbf{N}_{y}^{2\gamma}\right]=\left(Np_{y}\right)^{2\gamma}+\gamma(2\gamma-1)(1-p_{y})\left(Np_{y}\right)^{2\gamma-1}+O\left(N^{-1}\right).$$ Combining these results gives $$\begin{aligned} \var\left[\mathbf{N}_{y}^{\gamma}\right] & = & \bE\left[\mathbf{N}_{y}^{2\gamma}\right]-\bE\left[\mathbf{N}_{y}^{\gamma}\right]^{2}\\ & = & O\left(N^{2\gamma-1}+N^{2\gamma-2}+N^{-1}\right)\\ & = & O\left(N\right),\end{aligned}$$ where the last step follows from the fact that $\gamma\leq1$. Combining this result with (\[eq:var\_cond\]) gives $$\var\left[\bE\left[\left.\g{h_{X},h_{X|Y}}\right|\mathbf{Y}_{1},\dots,\mathbf{Y}_{N}\right]\right]=O\left(\frac{1}{N}\right).$$ By the law of total variance, $\var\left[\g{h_{X},h_{X|Y}}\right]=O\left(\frac{1}{N}\right)$. Extension to the Generalized Case {#subsec:generalCase} --------------------------------- In this section, we sketch the theory for the case where both $\X$ and $\Y$ have a mixture of discrete and continuous components. Denote the discrete and continuous components of $\mathbf{X}$ as $\mathbf{X}_{1}$ and $\mathbf{X}_{2}$, respectively. Similarly, denote the discrete and continuous components of $\Y$ as $\mathbf{Y}_{1}$ and $\mathbf{Y}_{2}$, respectively. Then the generalized mutual information is $$\begin{aligned} G(\X;\Y) & =\sum_{\substack{y_{1}\in\mathcal{S}_{Y_{1}}\\ x_{1}\in\mathcal{S}_{X_{1}} } }\int g\left(\frac{f_{X}(x_{1},x_{2})f_{Y}(y_{1},y_{2})}{f_{XY}(x_{1},x_{2},y_{1},y_{2})}\right)f_{XY}(x_{1},x_{2},y_{1},y_{2})dx_{2}dy_{2}\\ & =\sum_{\substack{y_{1}\in\mathcal{S}_{Y_{1}}\\ x_{1}\in\mathcal{S}_{X_{1}} } }f_{X_{1}Y_{1}}(x_{1},y_{1})\int g\left(\frac{f_{X_{1}}(x_{1})f_{Y_{1}}(y_{1})f_{X_{2}|X_{1}}(x_{2}|x_{1})f_{Y_{2}|Y_{1}}(y_{2}|y_{1})}{f_{X_{1}Y_{1}}(x_{1},y_{1})f_{X_{2}Y_{2}|X_{1}Y_{1}}(x_{2},y_{2}|x_{1}y_{1})}\right)f_{X_{2}Y_{2}|X_{1}Y_{1}}(x_{2},y_{2}|x_{1}y_{1})dx_{2}dy_{2}.\end{aligned}$$ Define $\mathbf{N}_{y_{1}}=\sum_{i=1}^{N}1_{\{\mathbf{Y}_{1,i}=y_{1}\}}$ where $\mathbf{Y}_{1,i}$ is the discrete component of $\mathbf{Y}_{i}$. Then the estimator we use for $f_{Y_{1}}(y_{1})$ is $\mathbf{N}_{y_{1}}/N$. The estimators for $f_{X_{1}}(x_{1})$ and $f_{X_{1}Y_{1}}(x_{1},y_{1})$ are defined similarly with $\N_{x_{1}}$ and $\N_{z_{1}}$. We first consider the conditional bias of the resulting plug-in estimator where we condition on the discrete random variables. Recall that by Taylor series expansions, we decompose the bias into “variance-like” terms in (\[eq:varterms\]) and “bias-like” terms in (\[eq:biasterms\]). For the bias-like term, if we condition on the discrete random variables, then the equivalent expression in (\[eq:E\_fx\]) in this case is multiplied by $\N_{x_{1}}/N$. This results in terms of the form of, for example, $\left(\frac{\N_{z_{1}}}{N}-f_{Z_{1}}(x_{1},y_{1})\right)^{i}$. The expected value of these expressions is the $i$th central moment of a binomial random variable divided by $N^{i}$ which is $O(1/N)$ for $i\geq1$. Thus these terms contribute $O(1/N)$ to the bias. 
In all other cases, the expected value of $\left(\frac{\N_{z_{1}}}{N}\right)^{i}$ is $O(1)$. Thus only the constants are affected by these terms in the equivalent expression in (\[eq:varterm\_bound\]). Similar results hold for the estimators of $f_{X_{1}}(x_{1})$ and $f_{Y_{1}}(y_{1})$. For the “variance-like” terms, we can simply factor out the estimators for $f_{X_{1}}(x_{1})$, $f_{Y_{1}}(y_{1})$, and $f_{Z_{1}}(z_{1})$. The expected value of these estimators is again $O(1)$ so they only affect the constants. For the variance, the law of total variance can again be used by conditioning on the discrete components. For the conditional variance, the Lipschitz conditions on $g$ in this case simply scales the resulting terms by the square of the estimators for $f_{X_{1}}(x_{1})$, $f_{Y_{1}}(y_{1})$, and $f_{Z_{1}}(z_{1})$. Then since the expected value of the square of these estimators is $O(1),$ the expected value of the conditional variance is still $O(1/N)$. Then by similar arguments given above for the bias and in Section \[subsec:varproof\], the variance of the conditional expectation of the estimator is also $O(1/N)$. Thus the total variance is $O(1/N)$. Proof of Theorem 6 (CLT) {#sec:cltProof} ======================== This proof shares some similarities with the CLT proof for the divergence functional estimators in [@moon2016arxiv; @moon2016isit]. The primary differences again deal with handling products of marginal density estimators and with handling two of the terms in the Efron-Stein inequality. We will first find the asymptotic distribution of $$\begin{aligned} \sqrt{N}\left(\gt-\bE\left[\gt\right]\right) & =\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\left(g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)-\ez{Z_{i}}\left[g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right]\right)\\ & +\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\left(\ez{Z_{i}}\left[g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right]-\bE\left[g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right]\right).\end{aligned}$$ By the standard central limit theorem [@durrett2010probability], the second term converges in distribution to a Gaussian random variable with variance $$\var\left[\ez Z\left[g\left(\frac{\ft X(\mathbf{X})\ft Y(\mathbf{Y})}{\ft Z(\mathbf{X},\mathbf{Y})}\right)\right]\right].$$ All that remains is to show that the first term converges in probability to zero as Slutsky’s theorem [@gut2012probability] can then be applied. Denote this first term as $\W_{N}$ and note that $\bE\left[\W_{N}\right]=0$. We will use Chebyshev’s inequality combined with the Efron-Stein inequality to bound the variance of $\W_{N}$. Consider the samples $\left\{ \Z_{1},\dots,\Z_{N}\right\} $ and $\left\{ \Z_{1}^{'},\Z_{2},\dots,\Z_{N}\right\} $ and the respective sequences $\W_{N}$ and $\W_{N}^{'}$. 
This gives $$\begin{aligned} \W_{N}-\W_{N}^{'} & =\frac{1}{\sqrt{N}}\left(g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)-\ez{Z_{1}}\left[g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right]\right)\nonumber \\ & +\frac{1}{\sqrt{N}}\left(g\left(\frac{\ft X(\mathbf{X}_{1}^{'})\ft Y(\mathbf{Y}_{1}^{'})}{\ft Z(\mathbf{X}_{1}^{'},\mathbf{Y}_{1}^{'})}\right)-\ez{Z_{1}^{'}}\left[g\left(\frac{\ft X(\mathbf{X}_{1}^{'})\ft Y(\mathbf{Y}_{1}^{'})}{\ft Z(\mathbf{X}_{1}^{'},\mathbf{Y}_{1}^{'})}\right)\right]\right)\nonumber \\ & +\frac{1}{\sqrt{N}}\sum_{i=2}^{N}\left(g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{i})\ft Y^{'}(\mathbf{Y}_{i})}{\ft Z^{'}(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right).\label{eq:Wndiff}\end{aligned}$$ Note that $$\bE\left[\left(g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)-\ez{Z_{1}}\left[g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right]\right)^{2}\right]=\bE\left[\var_{\X_{1}}\left[g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right]\right].$$ We will use the Efron-Stein inequality to bound $\var_{\X_{1}}\left[g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right]$. We thus need to bound the conditional expectation of the term $$\left|g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{1})\ft Y^{'}(\mathbf{Y}_{1})}{\ft Z^{'}(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right|^{2},$$ where $\Z_{i}$ is replaced with $\Z_{i}^{'}$ in the KDEs for some $i\neq1$. Using similar steps as in Section \[sec:VarProof\], we have that $$\bE\left[\left|g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{1})\ft Y^{'}(\mathbf{Y}_{1})}{\ft Z^{'}(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right|^{2}\right]=O\left(\frac{1}{N^{2}}\right).$$ Then by the Efron-Stein inequality, $\var_{\X_{1}}\left[g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right]=O\left(\frac{1}{N}\right)$. Therefore $$\bE\left[\frac{1}{N}\left(g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)-\ez{Z_{1}}\left[g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right]\right)^{2}\right]=O\left(\frac{1}{N^{2}}\right).$$ A similar result holds for the $g\left(\frac{\ft X(\mathbf{X}_{1}^{'})\ft Y(\mathbf{Y}_{1}^{'})}{\ft Z(\mathbf{X}_{1}^{'},\mathbf{Y}_{1}^{'})}\right)$ term in (\[eq:Wndiff\]). 
For the third term in (\[eq:Wndiff\]), $$\begin{aligned} & \lefteqn{\bE\left[\left(\sum_{i=2}^{N}\left|g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{i})\ft Y^{'}(\mathbf{Y}_{i})}{\ft Z^{'}(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right|\right)^{2}\right]}\\ & =\sum_{i,j=2}^{N}\bE\left[\left|g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{i})\ft Y^{'}(\mathbf{Y}_{i})}{\ft Z^{'}(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right|\right.\\ & \times\left.\left|g\left(\frac{\ft X(\mathbf{X}_{j})\ft Y(\mathbf{Y}_{j})}{\ft Z(\mathbf{X}_{j},\mathbf{Y}_{j})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{j})\ft Y^{'}(\mathbf{Y}_{j})}{\ft Z^{'}(\mathbf{X}_{j},\mathbf{Y}_{j})}\right)\right|\right]\end{aligned}$$ For the $N-1$ terms where $i=j$, we know from Section \[sec:VarProof\] that $$\bE\left[\left|g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{i})\ft Y^{'}(\mathbf{Y}_{i})}{\ft Z^{'}(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right|^{2}\right]=O\left(\frac{1}{N^{2}}\right).$$ Thus these terms contribute $O(1/N)$. For the $N^{2}-N$ terms where $i\neq j$, we can do multiple substitutions of the form $\mathbf{u}_{j}=\frac{\X_{j}-\X_{1}}{h_{X}}$ resulting in $$\begin{aligned} \bE\left[\left|g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{i})\ft Y^{'}(\mathbf{Y}_{i})}{\ft Z^{'}(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right|\right.\\ \times\left.\left|g\left(\frac{\ft X(\mathbf{X}_{j})\ft Y(\mathbf{Y}_{j})}{\ft Z(\mathbf{X}_{j},\mathbf{Y}_{j})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{j})\ft Y^{'}(\mathbf{Y}_{j})}{\ft Z^{'}(\mathbf{X}_{j},\mathbf{Y}_{j})}\right)\right|\right] & =O\left(\frac{h_{X}^{2d_{X}}h_{Y}^{2d_{Y}}}{N^{2}}\right).\end{aligned}$$ Since $h_{X}^{d_{X}}h_{Y}^{d_{Y}}=o(1)$, $$\bE\left[\left(\sum_{i=2}^{N}\left|g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{i})\ft Y^{'}(\mathbf{Y}_{i})}{\ft Z^{'}(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right|\right)^{2}\right]=o(1).$$ Combining all of these results with Jensen’s inequality gives $$\begin{aligned} \bE\left[\left(\W_{N}-\W_{N}^{'}\right)^{2}\right] & \leq\frac{3}{N}\bE\left[\left(g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)-\ez{Z_{1}}\left[g\left(\frac{\ft X(\mathbf{X}_{1})\ft Y(\mathbf{Y}_{1})}{\ft Z(\mathbf{X}_{1},\mathbf{Y}_{1})}\right)\right]\right)^{2}\right]\\ & +\frac{3}{N}\bE\left[\left(g\left(\frac{\ft X(\mathbf{X}_{1}^{'})\ft Y(\mathbf{Y}_{1}^{'})}{\ft Z(\mathbf{X}_{1}^{'},\mathbf{Y}_{1}^{'})}\right)-\ez{Z_{1}^{'}}\left[g\left(\frac{\ft X(\mathbf{X}_{1}^{'})\ft Y(\mathbf{Y}_{1}^{'})}{\ft Z(\mathbf{X}_{1}^{'},\mathbf{Y}_{1}^{'})}\right)\right]\right)^{2}\right]\\ & +\frac{3}{N}\bE\left[\left(\sum_{i=2}^{N}\left(g\left(\frac{\ft X(\mathbf{X}_{i})\ft Y(\mathbf{Y}_{i})}{\ft Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)-g\left(\frac{\ft X^{'}(\mathbf{X}_{i})\ft Y^{'}(\mathbf{Y}_{i})}{\ft Z^{'}(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right)\right)^{2}\right]\\ & =o\left(\frac{1}{N}\right).\end{aligned}$$ Applying the Efron-Stein inequality gives that $\var\left[\W_{N}\right]=o(1)$. Then by ChebyShev’s inequality, $\W_{N}$ converges to zero in probability. 
This completes the proof for the plug-in estimator. For the weighted ensemble estimator, we can write $$\begin{aligned} \sqrt{N}\left(\g w-\bE\left[\g w\right]\right) & =\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\sum_{l_{X}\in\mathcal{L}_{X},l_{Y}\in\mathcal{L}_{Y}}w(l_{X},l_{Y})\left(g\left(\frac{\ftl X(\mathbf{X}_{i})\ftl Y(\mathbf{Y}_{i})}{\ftl Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right.\\ & \left.-\ez{Z_{i}}\left[g\left(\frac{\ftl X(\mathbf{X}_{i})\ftl Y(\mathbf{Y}_{i})}{\ftl Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right]\right)\\ & +\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\left(\ez{Z_{i}}\left[\sum_{l_{X}\in\mathcal{L}_{X},l_{Y}\in\mathcal{L}_{Y}}w(l_{X},l_{Y})g\left(\frac{\ftl X(\mathbf{X}_{i})\ftl Y(\mathbf{Y}_{i})}{\ftl Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right]\right.\\ & \left.-\bE\left[\sum_{l_{X}\in\mathcal{L}_{X},l_{Y}\in\mathcal{L}_{Y}}w(l_{X},l_{Y})g\left(\frac{\ftl X(\mathbf{X}_{i})\ftl Y(\mathbf{Y}_{i})}{\ftl Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right]\right).\end{aligned}$$ By the central limit theorem, the second term converges in distribution to a zero-mean Gaussian random variable with variance $$\var\left[\ez{Z_{i}}\left[\sum_{l_{X}\in\mathcal{L}_{X},l_{Y}\in\mathcal{L}_{Y}}w(l_{X},l_{Y})g\left(\frac{\ftl X(\mathbf{X}_{i})\ftl Y(\mathbf{Y}_{i})}{\ftl Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right]\right].$$ From the previous results, the first term converges to zero in probability as it can be written as $$\begin{aligned} \sum_{l_{X}\in\mathcal{L}_{X},l_{Y}\in\mathcal{L}_{Y}}w(l_{X},l_{Y})\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\left(g\left(\frac{\ftl X(\mathbf{X}_{i})\ftl Y(\mathbf{Y}_{i})}{\ftl Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right.\\ \left.-\ez{Z_{i}}\left[g\left(\frac{\ftl X(\mathbf{X}_{i})\ftl Y(\mathbf{Y}_{i})}{\ftl Z(\mathbf{X}_{i},\mathbf{Y}_{i})}\right)\right]\right) & =\sum_{l_{X}\in\mathcal{L}_{X},l_{Y}\in\mathcal{L}_{Y}}w(l_{X},l_{Y})o_{P}(1)\\ & =o_{P}(1),\end{aligned}$$ where $o_{P}(1)$ denotes convergence to zero in probability and we use the fact that linear combinations of random variables that converge in probability individually to constants converge in probability to the linear combination of the constants. The proof is finished with Slutsky’s theorem. Note that the proof of Corollary 7 follows a similar procedure as the extension to the ensemble case. [^1]: A.A@university.edu [^2]: B.B@university.edu [^3]: D.D@university.edu
1
--- abstract: 'We study static, spherically symmetric vacuum solutions to Quadratic Gravity, extending considerably our previous Rapid Communication \[Phys. Rev. D 98, 021502(R) (2018)\] on this topic. Using a conformal-to-Kundt metric ansatz, we arrive at a much simpler form of the field equations in comparison with their expression in the standard spherically symmetric coordinates. We present details of the derivation of this compact form of two ordinary differential field equations for two metric functions. Next, we apply analytical methods and express their solutions as infinite power series expansions. We systematically derive all possible cases admitted by such an ansatz, arriving at six main classes of solutions, and provide recurrent formulas for all the series coefficients. These results allow us to identify the classes containing the Schwarzschild black hole as a special case. It turns out that one class contains only the Schwarzschild black hole, three classes admit the Schwarzschild solution as a special subcase, and two classes are not compatible with the Schwarzschild solution at all since they have strictly nonzero Bach tensor. In our analysis, we naturally focus on the classes containing the Schwarzschild spacetime, in particular on a new family of the Schwarzschild–Bach black holes which possesses one additional non-Schwarzschild parameter corresponding to the value of the Bach tensor invariant on the horizon. We study its geometrical and physical properties, such as basic thermodynamical quantities and tidal effects on free test particles induced by the presence of the Bach tensor. We also compare our results with previous findings in the literature obtained using the standard spherically symmetric coordinates.' author: - | J. Podolský$^\star$, R. Švarc$^\star$, V. Pravda$^\diamond$, A. Pravdov' a$^\diamond$\ \ \ [$^\star$ Institute of Theoretical Physics, Faculty of Mathematics and Physics,]{}\ [Charles University, V Holešovičkách 2, 180 00 Prague 8, Czech Republic.]{}\ [$^\diamond$ Institute of Mathematics, Academy of Sciences of the Czech Republic]{},\ [Žitn' a 25, 115 67 Prague 1, Czech Republic.]{}\ [E-mail: `podolsky@mbox.troja.mff.cuni.cz, robert.svarc@mff.cuni.cz, `]{}\ [`pravda@math.cas.cz, pravdova@math.cas.cz`]{} title: Black holes and other exact spherical solutions in Quadratic Gravity --- PACS numbers: 04.20.Jb, 04.50.–h, 04.70.Bw, 04.70.Dy, 11.25.–w Keywords: black holes, exact solutions, Quadratic Gravity, Einstein–Weyl gravity, Schwarzschild metric, Bach tensor, Robinson–Trautman spacetimes, Kundt spacetimes Introduction {#intro} ============ Soon after Albert Einstein formulated his General Relativity in November 1915 and David Hilbert found an elegant procedure how to derive Einstein’s field equations from the variational principle, various attempts started to extend and generalize this gravity theory. One possible road, suggested by Theodor Kaluza exactly a century ago in 1919, was to consider higher dimensions in an attempt to unify the field theories of gravitation and electromagnetism. In the same year, another road was proposed by Hermann Weyl. In this case, the idea was to derive alternative field equations of a metric theory of gravity by starting with a different action. Instead of using the Einstein–Hilbert Lagrangian of General Relativity, which is simply the Ricci curvature scalar $R$ (a double contraction of a single Riemann tensor), Weyl proposed a Lagrangian containing *contractions of a product of two curvature tensors*. 
Such a Lagrangian is thus not linear in curvature — it is quadratic so that this theory can be naturally called “quadratic gravity”. Einstein was well aware of these attempts to formulate such alternative theories of gravity, and for some time he also worked on them. Interestingly, expressions for the quadratic gravity theory can be found even in his last writing pad (at the bottom of its last but one page) which he used in spring 1955. Although it turned out rather quickly that these original classical theories extending General Relativity led to specific conceptual, mathematical and physical problems, the nice ideas have been so appealing that — the whole century after their conception — they are still very actively investigated. Both the higher dimensions of the Kaluza–Klein theory and Weyl’s higher-order curvature terms in an effective action are now incorporated into the foundations of string theory. Quadratic Gravity (QG) also plays an important role in contemporary studies of relativistic quantum field theories. Quadratic Gravity is a very natural and quite “conservative” extension of the Einstein theory, the most precise gravity theory today. Quadratic terms in the QG Lagrangian can be understood as corrections to General Relativity, which may play a crucial role at extremely high energies. In the search for a consistent quantum gravity theory, which could be applicable near the Big Bang or near spacetime singularities inside black holes, it is important to understand the role of these higher-order curvature corrections. Interestingly, it was suggested by Weinberg and Deser, and then proved by Stelle [@Stelle:77] already in the 1970s that adding the terms quadratic in the curvature to the Einstein–Hilbert action renders gravity renormalizable, see the very recent review [@Salvio]. This property is also preserved in the general coupling with a generic quantum field theory. However, due to the presence of higher derivatives, “massive ghosts” also appear (the corresponding classical Hamiltonian is unbounded from below). Nevertheless, there is a possibility that these ghosts could be benign [@Smilga]. For all these reasons, this QG theory has attracted considerable attention in recent years. In our work, we are interested in *classical solutions to QG in four dimensions*. It can be easily shown that all Einstein spacetimes obey the vacuum field equations of this theory. However, QG also admits additional vacuum solutions with nontrivial Ricci tensor. In this paper, we focus on such *static, spherically symmetric vacuum solutions* without a cosmological constant. They were first studied in the seminal work [@Stelle:1978], in which three families of such spacetimes were identified by using a power expansion of the metric functions around the origin. The failure of the Birkhoff theorem in [Quadratic]{} Gravity has also been pointed out therein. Spherically symmetric solutions were further studied in [@Holdom:2002], where also numbers of free parameters for some of the above-mentioned classes were determined. Recently it has been pointed out in [@LuPerkinsPopeStelle:2015; @LuPerkinsPopeStelle:2015b; @PerkinsPhD] that, apart from the Schwarzschild black hole and other spherical solutions, QG admits a *non-Schwarzschild* spherically symmetric and static black holes. The field equations of a generic Quadratic Gravity theory form a highly complicated system of fourth-order nonlinear PDEs. 
Only a few nontrivial exact solutions are thus known so far, and various approximative and numerical methods have had to be used in their studies. Specifically, in the new class of black holes presented in [@LuPerkinsPopeStelle:2015], the two unknown metric functions of the standard form of spherically symmetric metric were given in terms of two complicated coupled ODEs which were (apart from the first few orders in the power expansion) solved and analyzed numerically. Interestingly, all QG corrections to the four-dimensional vacuum Einstein equations for constant Ricci scalar are nicely combined into a conformally well-behaved Bach tensor. Together with a conformal-to-Kundt metric ansatz [@PravdaPravdovaPodolskySvarc:2017], this leads to a considerably simpler autonomous system of the field equations. We employed this approach in our recent letters [@PodolskySvarcPravdaPravdova:2018] and [@SvarcPodolskyPravdaPravdova:2018] for vanishing and nonvanishing cosmological constant, respectively. In [@PodolskySvarcPravdaPravdova:2018] we were thus able to present an explicit form of the corresponding nontrivial black-hole spacetimes — the so-called *Schwarzschild–Bach black holes* with two parameters, a position of the horizon and an additional Bach parameter. By setting this additional Bach parameter to zero, the Schwarzschild metric of General Relativity is directly recovered. In the present considerably longer paper, we are now giving the details of the derivation summarized in [@PodolskySvarcPravdaPravdova:2018], and also survey and analysis of other classes of spherically symmetric solutions to Quadratic Gravity. Our paper is organized as follows. In Sec. \[QGandEWtheory\] we recall the Quadratic Gravity and the Einstein–Weyl theory, and we put the corresponding field equations into a convenient form in which the Ricci tensor is proportional to the Bach tensor. In Sec. \[BHmetricsec\] we introduce a suitable spherically symmetric metric ansatz in the conformal-to-Kundt form, and we give relations to the standard metric form. In Sec. \[derivingFE\] we overview the derivation of the field equations with various technical details and thorough discussion being postponed to Appendices A–C. In Sec. \[invariants\] expressions for curvature invariants are derived. In Sec. \[integration\] expansions in powers of ${\Delta \equiv r-r_0}$ around a fixed point $r_0$, and for $r \rightarrow \infty$ are introduced. In Sec. \[expansiont\_0\] the leading orders in ${\Delta }$ of the field equations are solved and four main classes of solutions are obtained. For these solutions, in Sec. \[description\] all coefficients of the metric functions in the power expansions in $\Delta$ are given in the form of recurrent formulas, convenient gauge choices are found, and various aspects of the solutions are discussed. Sections \[expansiont\_INF\] and \[description\_INF\] focus on the same topics as Secs. \[expansiont\_0\] and \[description\], respectively, but this time for expansions $r \rightarrow \infty$. In Sec. \[summary\] the relation of the solutions obtained in Secs. \[expansiont\_0\]–\[description\_INF\] (including their special subcases) to the solutions given in the literature is discussed, and summarized in Table \[tab:3\]. Mathematical and physical aspects (specific tidal effects and thermodynamical quantities) of the Schwarzschild–Bach solutions are discussed in Sections \[discussion-and-figures\] and \[physics\], respectively. Finally, concluding remarks are given in Sec. \[conclusions\]. 
Quadratic Gravity and the Einstein–Weyl theory {#QGandEWtheory} ============================================== Quadratic Gravity (QG) is a natural generalization of Einstein’s theory that includes higher derivatives of the metric. Its action in four dimensions contains additional quadratic terms, namely square of the Ricci scalar $R$ and a contraction of the Weyl tensor $C_{abcd}$ with itself [@Weyl1919; @Bach1921]. In the absence of matter, the most general QG action generalizing the Einstein–Hilbert action reads[@PravdaPravdovaPodolskySvarc:2017][^1] $$S = \int {{\rm{d}}}^4 x\, \sqrt{-g}\, \Big(\, \gamma\,(R-2\Lambda) +\beta\, R^2 - \alpha\, C_{abcd}\, C^{abcd} \Big), \label{actionQG}$$ where ${\gamma=1/G}$ ($G$ is the Newtonian constant), $\Lambda$ is the cosmological constant, and $\alpha$, $\beta$ are additional QG theory parameters. The Einstein–Weyl theory is contained as a special case by setting ${\beta=0}$. *Vacuum field equations* corresponding to the action (\[actionQG\]) are $$\begin{aligned} &\gamma \left(R_{ab} - {\pul} R\, g_{ab}+\Lambda\,g_{ab}\right)-4 \alpha\,B_{ab} \nonumber \\ &\quad +2\beta\left(R_{ab}-\tfrac{1}{4}R\, g_{ab}+ g_{ab}\, \Box - \nabla_b \nabla_a\right) R = 0 \,, \label{GenQGFieldEq}\end{aligned}$$ where $B_{ab}$ is the *Bach tensor* defined as $$B_{ab} \equiv \big(\nabla^c \nabla^d + \tfrac{1}{2} R^{cd}\big)C_{acbd} \,. \label{defBach}$$ It is traceless, symmetric, and conserved: $$g^{ab}B_{ab}=0 \,, \qquad B_{ab}=B_{ba} \,, \qquad \nabla^b B_{ab}=0 \,, \label{Bachproperties}$$ and also conformally well-behaved (see expression (\[OmBach\]) below). Now, *assuming* ${R=\hbox{const.}}$, the last two terms in (\[GenQGFieldEq\]) containing covariant derivatives of $R$ vanish. Using (\[Bachproperties\]), the trace of the field equations thus immediately implies $$R=4\Lambda \,. \label{R=4Lambda}$$ By substituting this relation into the field equations (\[GenQGFieldEq\]), they simplify considerably to $$R_{ab}-\Lambda\, g_{ab}=4k\, B_{ab}\,, \qquad k \equiv \frac{\alpha}{\gamma+8\beta\Lambda} \,. \label{fieldeqsgen}$$ In this paper, *we restrict ourselves to* investigation of solutions with *vanishing cosmological constant* $\Lambda$ (see [@SvarcPodolskyPravdaPravdova:2018] for the study of a more general case ${\Lambda\ne0}$). In view of (\[R=4Lambda\]), this implies vanishing Ricci scalar, $$R=0 \,, \label{R=0}$$ and the field equations (\[fieldeqsgen\]) further reduce to a simpler form $$R_{ab}=4k\, B_{ab}\,, \label{fieldeqsEWmod}$$ where the constant $k$ is now a shorthand for the combination of the theory parameters ${ k \equiv \alpha/\gamma= G\alpha}$. For ${k=0}$ we recover vacuum Einstein’s equations of General Relativity. Interestingly, all solutions of (\[fieldeqsEWmod\]) in *Einstein–Weyl gravity* (${\beta=0}$) with ${R=0}$ *are also solutions to general Quadratic Gravity* (${\beta\ne0}$) since for ${\Lambda=0}$ the QG parameter $\beta$ does not contribute to the constant $k$ defined by (\[fieldeqsgen\]). Black hole metrics {#BHmetricsec} ================== For studying static, nonrotating black holes, it is a common approach to employ the canonical form of a general spherically symmetric metric $${{\rm{d}}}s^2 = -h(\bar r)\,{{\rm{d}}}t^2+\frac{{{\rm{d}}}\bar r^2}{f(\bar r)}+\bar r^2({{\rm{d}}}\theta^2+\sin^2\theta\,{{\rm{d}}}\phi^2) \,. \label{Einstein-WeylBH}$$ In particular, for the famous *Schwarzschild solution* of Einstein’s General Relativity [@Schwarzschild:1916] (and also of QG), the two metric functions *are the same* and take the well-known form $$f(\bar{r}) = h(\bar{r})=1-\frac{2m}{\bar{r}} \,. 
\label{SchwarzschildBH}$$ The metric (\[Einstein-WeylBH\]) was also used in the seminal papers [@LuPerkinsPopeStelle:2015; @LuPerkinsPopeStelle:2015b] to investigate generic spherical black holes in Quadratic Gravity, in which it was surprisingly shown, mostly by numerical methods, that such a class contains further black-hole solutions *distinct* from the Schwarzschild solution (\[SchwarzschildBH\]). It turned out that while the Schwarzschild black hole has ${f=h}$, this non-Schwarzschild black hole is characterized by ${f\not=h}$. However, due to the complexity of the QG field equations (\[GenQGFieldEq\]) for the classical metric form (\[Einstein-WeylBH\]), it has not been possible to find an explicit analytic form of the metric functions ${f(\bar{r}), h(\bar{r})}$. A new convenient metric form of the black hole geometry {#BH metric} ------------------------------------------------------- As demonstrated in our previous works [@PodolskySvarcPravdaPravdova:2018; @SvarcPodolskyPravdaPravdova:2018], it is much more convenient to employ an *alternative metric form* of the spacetimes represented by (\[Einstein-WeylBH\]). This is obtained by performing the transformation $$\bar{r} = \Omega(r)\,, \qquad t = u - \int\! \frac{{{\rm{d}}}r}{\H(r)} \,, \label{to static}$$ resulting in $${{\rm{d}}}s^2 = \Omega^2(r)\Big[\, {{\rm{d}}}\theta^2+\sin^2\theta\,{{\rm{d}}}\phi^2 -2\,{{\rm{d}}}u\,{{\rm{d}}}r+\H(r)\,{{\rm{d}}}u^2 \,\Big] . \label{BHmetric}$$ The two new metric functions $\Omega(r)$ and $\H(r)$ are related to $f(\bar r)$ and $h(\bar r)$ via simple relations $$h = -\Omega^2\, \H \,, \qquad f = -\Big(\frac{\Omega'}{\Omega}\Big)^{2} \H \,, \label{rcehf}$$ where prime denotes the derivative with respect to $r$. Of course, the argument $r$ of both functions $\Omega$ and $\H$ must be expressed in terms of $\bar{r}$ using the inverse of the relation ${\bar{r} = \Omega(r)}$. The metric admits a *gauge freedom* given by a constant rescaling and a shift of $r$, $$r \to \lambda\, r+\nu \,, \qquad u \to \lambda^{-1}\, u \,. \label{scalingfreedom}$$ More importantly, this new black hole metric is *conformal* to a much simpler Kundt-type metric, $${{\rm{d}}}s^2 =\Omega^2(r)\, {{\rm{d}}}s^2_{\hbox{\tiny Kundt}} \,. \label{confrelation}$$ Indeed, ${{{\rm{d}}}s^2_{\hbox{\tiny Kundt}}}$ belongs to the famous class of *Kundt geometries*, which are nonexpanding, shear-free and twist-free, see [@Stephanietal:2003; @GriffithsPodolsky:2009]. In fact, it is a subclass of Kundt spacetimes which is the *direct-product of two 2-spaces*, and is of Weyl algebraic type D and Ricci type II [@GriffithsPodolsky:2009; @PravdaPravdovaPodolskySvarc:2017]. The first part of $${{\rm{d}}}s^2_{\hbox{\tiny Kundt}}={{\rm{d}}}\theta^2+\sin^2\theta\,{{\rm{d}}}\phi^2 -2\,{{\rm{d}}}u\,{{\rm{d}}}r+\H(r)\,{{\rm{d}}}u^2 \label{Kundt seed}$$ spanned by ${\theta, \phi}$ is a round 2-sphere of Gaussian curvature ${K=1}$, while the second part spanned by ${u, r}$ is a 2-dim Lorentzian spacetime. With the usual stereographic representation of a 2-sphere given by ${x+\hbox{i}\, y = 2\tan(\theta/2)\exp(\hbox{i}\phi)}$, this *Kundt seed* metric can be rewritten as $${{\rm{d}}}s^2_{\hbox{\tiny Kundt}}= \frac{{{\rm{d}}}x^2 + {{\rm{d}}}y^2}{\big(1+\ctvrt(x^2+y^2)\big)^2} -2\,{{\rm{d}}}u\,{{\rm{d}}}r+\H(r)\,{{\rm{d}}}u^2 \,. \label{Kundt seed xy}$$ The black hole horizon {#BH horizon} ---------------------- In the usual metric form (\[Einstein-WeylBH\]), the Schwarzschild horizon is defined by the zeros of the same two metric functions ${h({\bar r})=f({\bar r})}$. Due to (\[SchwarzschildBH\]), it is located at ${{\bar r}_h=2m}$, where $m$ denotes the total mass of the black hole. In a general case, such a horizon can be defined as the *Killing horizon* associated with the vector field ${\partial_t}$. Its norm is determined by the metric function $-h({\bar r})$. 
In the regions where ${h({\bar r})>0}$, the spacetime is static and $t$ is the corresponding temporal coordinate. The Killing horizon is generated by the *null vector field* ${\partial_t}$, and it is thus located at a specific radius ${\bar r}_h$ satisfying $$h \big|_{{\bar r}={\bar r}_h}=0\,. \label{standardhorizon}$$ In terms of the new metric form , we may similarly employ the vector field ${\partial_u}$ which coincides with ${\partial_t}$ everywhere. Its norm is given by $\Omega^2\, \H$. Since the conformal factor $\Omega$ is nonvanishing throughout the spacetime, the Killing horizon is uniquely located at a specific radius $r_h$ satisfying the condition $$\H \big|_{r=r_h}=0\,. \label{horizon}$$ Interestingly, via the relations this automatically implies ${h({\bar r_h})=0=f({\bar r_h})}$. It is also important to recall that there is a *time-scaling freedom* of the metric tt/, \[scaling-t\] where ${\sigma \ne 0}$ is any constant, which implies ${h\to h\,\sigma^2}$. This freedom can be used to adjust an appropriate value of $h$ at a chosen radius ${\bar r}$. Or, in an asymptotically flat spacetime such as (\[SchwarzschildBH\]) it could be used to achieve ${h \to 1}$ as ${{\bar r}\to \infty}$, thus enabling us to determine the mass of a black hole. The Kundt seed of the Schwarzschild solution {#Kundt seed of the Schwarzschild} -------------------------------------------- It is also important to explicitly identify the Kundt seed geometry (\[Kundt seed\]) which, via the conformal relation (\[confrelation\]), generates the well-known vacuum *Schwarzschild solution*. This is simply given by $$\bar{r}=\Omega(r)=-\frac{1}{r}\,,\qquad \H(r) = -r^2-2m\, r^3 \,. \label{Schw}$$ Indeed, the first relation implies ${r=-1/\bar{r}}$, so that ${\H(\bar{r}) = - (1-2m/\bar{r})/\bar{r}^2}$. Using (\[rcehf\]), we easily obtain (\[SchwarzschildBH\]). It should be emphasized that the standard physical range ${\bar{r}>0}$ corresponds to ${r<0}$. Also, the auxiliary Kundt coordinate $r$ *increases from negative values to* $0$, as $\bar{r}$ increases to $\infty$. Notice that ${\cal H}$ given by (\[Schw\]) is simply a *cubic* in the coordinate $r$ of the Kundt geometry. For ${m=0}$, the Kundt seed with ${{\cal H} = -\,r^2}$ is the Bertotti–Robinson spacetime with the geometry ${S^2\times AdS_2}$ (see chapter 7 of [@GriffithsPodolsky:2009]), and the corresponding conformally related metric (\[confrelation\]) is just the flat space. It should also be emphasized that, while the Schwarzschild and Minkowski spacetimes are (the simplest) vacuum solutions in Einstein’s theory, their Kundt seeds (\[Schw\]) *are not vacuum solutions* in Einstein’s theory since their Ricci tensor is nonvanishing. In fact, the Bertotti–Robinson geometry is an electrovacuum space of Einstein’s theory. Since conformal transformations preserve the Weyl tensor, both ${{\rm{d}}}s^2$ and ${{\rm{d}}}s^2_{\hbox{\tiny Kundt}} $ are of the *same algebraic type*. Indeed, in the null frame ${{\mbox{\boldmath$k$}}= \mathbf{\partial}_r}$, ${{\mbox{\boldmath$l$}}= {\textstyle\frac{1}{2}}{\cal H}\,\mathbf{\partial}_r+\mathbf{\partial}_u}$, ${{\mbox{\boldmath$m$}}_i = \big(1+\ctvrt(x^2+y^2)\big)\mathbf{\partial}_i}$, the only Newman–Penrose Weyl scalar for (\[Kundt seed xy\]) is ${\Psi_2=-\frac{1}{12}({\cal H}''+2)}$, and both ${\mbox{\boldmath$k$}}$ and ${\mbox{\boldmath$l$}}$ are double principal null directions. For the specific function (\[Schw\]), ${\Psi_2=m\,r}$. The Kundt seed geometry for the Schwarzschild solution is thus of algebraic type D. 
It is conformally flat if, and only if, ${m=0}$, in which case it is the Bertotti–Robinson spacetime. The Robinson–Trautman form of the black hole metrics {#RT} ---------------------------------------------------- Recently, we have proven in [@PravdaPravdovaPodolskySvarc:2017] that *any metric conformal to a Kundt geometry must belong to the class of expanding Robinson–Trautman geometries* (or it remains in the Kundt class). Indeed, performing a simple transformation ${r(\tilde r)}$ of (\[confrelation\]), (\[Kundt seed xy\]), such that $$r = \int\!\!\frac{{{\rm{d}}}\tilde r}{\Omega^2(\tilde r)}\, , \qquad {{H}}\equiv \Omega^{2}\, \H \,, \label{guu_RT}$$ we obtain $${{\rm{d}}}s^2_{\hbox{\tiny RT}} = \Omega^2(\tilde r)\,\frac{{{\rm{d}}}x^2 + {{\rm{d}}}y^2}{\big(1+\ctvrt(x^2+y^2)\big)^2} -2\,{{\rm{d}}}u\,{{\rm{d}}}\tilde r+{{H}}(\tilde r)\,{{\rm{d}}}u^2 \,. \label{confRT}$$ This has the canonical form of the Robinson–Trautman class [@Stephanietal:2003; @GriffithsPodolsky:2009] with the identification \_[,r]{} = ,[[H]{}]{}= - h. The Schwarzschild black hole is recovered for ${\Omega(\tilde r)=\tilde r}$ that is ${\Omega_{,\tilde r}=1}$, equivalent to ${f(\bar{r}) = h(\bar{r})}$. Other distinct non-Schwarzschild black hole solutions are identified by ${f(\bar{r}) \ne h(\bar{r})}$. The Killing horizon is obviously given by ${{{H}}(\tilde r_h)=0}$, corresponding to ${\H(r_h)=0=h(\bar r_h)}$ and ${f(\bar r_h)=0}$. The field equations {#derivingFE} =================== The conformal approach to describing and studying black holes and other spherical solutions in Einstein–Weyl gravity and fully general Quadratic Gravity, based on the new form of the metric , is very convenient. Due to (\[confrelation\]), it enables to evaluate easily the Ricci and Bach tensors, entering the field equations (\[fieldeqsEWmod\]), from the Ricci and Bach tensors of the much simpler Kundt seed metric ${{{\rm{d}}}s^2_{\hbox{\tiny Kundt}}}$. In particular, to derive the explicit form of the field equations, it is possible to proceed as follows: 1. Calculate all components of the Ricci and Bach tensors $R_{ab}^{{\hbox{\tiny Kundt}}}$ and $B_{ab}^{{\hbox{\tiny Kundt}}}$ for the Kundt seed metric $g_{ab}^{{\hbox{\tiny Kundt}}}$. Since such a metric (\[Kundt seed xy\]) is simple, containing only one general metric function of one variable $\H(r)$, its key curvature tensors are also simple. Their explicit form is presented in Appendix A. 2. Use the well-known geometric relations for the Ricci and Bach tensors of conformally related metrics $g_{ab}^{{\hbox{\tiny Kundt}}}$ and ${g_{ab}=\Omega^2 \,g_{ab}^{{\hbox{\tiny Kundt}}}}$. Thus it is straightforward to evaluate the curvature tensors $R_{ab}$ and $B_{ab}$ for spherically symmetric geometries, starting from their forms of the Kundt seed calculated in the first step. In particular, since the Bach tensor trivially rescales under the conformal transformation as ${B_{ab} = \Omega^{-2}\,B_{ab}^{{\hbox{\tiny Kundt}}}}$, it remains simple. These calculations are performed in Appendix B. 3. These explicit components of the Ricci and Bach tensors are substituted into the field equations of Quadratic Gravity, which we already reduced to the expression ${R_{ab}=4k\, B_{ab}}$, see . This immediately leads to a very simple and compact form of these field equations. Moreover, using the Bianchi identities, it can be shown that the whole system reduces just to two equations , for the metric functions $\Omega(r)$ and $\H(r)$, see Appendix C. 
By this procedure, we thus arrive at a remarkably simple form of the field equations (\[fieldeqsEWmod\]) for spherically symmetric vacuum spacetimes in Einstein–Weyl gravity and general Quadratic Gravity with ${R=0}$, namely *two ordinary differential equations* for the *two metric functions* $\Omega(r)$ and ${\cal H}(r)$: $$\begin{aligned} \Omega\Omega''-2{\Omega'}^2 = &\ \tfrac{1}{3}k\, \B_1 \H^{-1} \,, \label{Eq1}\\ \Omega\Omega'{\cal H}'+3\Omega'^2{\cal H}+\Omega^2 = &\ \tfrac{1}{3}k \,\B_2 \,. \label{Eq2}\end{aligned}$$ The functions $\B_1(r)$ and $\B_2(r)$ denote *two independent components of the Bach tensor*, $$\begin{aligned} && \B_1 \equiv \H\,\H'''' \,, \label{B1}\\ && \B_2 \equiv \H'\,\H'''-\tfrac{1}{2}{\H''}^2 +2 \,. \label{B2}\end{aligned}$$ Recall also the relation (\[R=0\]), that is ${R=0}$, which is a trace of the field equations . This relation takes the explicit form $${\cal H}\Omega''+{\cal H}'\Omega'+{\textstyle \frac{1}{6}} ({\cal H}''+2)\Omega = 0 \,, \label{trace}$$ see (\[barR\]). Indeed, it immediately follows from , : just subtract from the derivative of the second equation the first equation multiplied by $\H'$ (and divide the result by $6\Omega'$). It is a great advantage of our conformal approach with the convenient form of the new metric (\[BHmetric\]) that the field equations (\[Eq1\]), (\[Eq2\]) are *considerably simpler* than the previously used field equations for the standard metric . Moreover, they form an *autonomous system*, which means that the differential equations *do not explicitly depend on the radial variable $r$*. This will be essential for solving such a system, finding their analytic solution in the generic form , or , in subsequent Section \[integration\]. Fundamental scalar invariants and geometric classification {#invariants} ========================================================== For a geometrical and physical interpretation of spacetimes that are solutions to the field equations (\[Eq1\]), (\[Eq2\]), it will be crucial to investigate the behaviour of scalar curvature invariants constructed from the Ricci, Bach, and Weyl tensors themselves. A direct calculation yields $$\begin{aligned} R_{ab}\, R^{ab} &= 16k^2\, B_{ab} B^{ab} \,, \label{invR}\\ B_{ab}\, B^{ab} &= \tfrac{1}{72}\,\Omega^{-8}\,\big[(\B_1)^2 + 2(\B_1+\B_2)^2\big] \,,\label{invB}\\ C_{abcd}\, C^{abcd} &= \tfrac{1}{3}\,\Omega^{-4}\,\big({\cal H}'' +2\big)^2 \,. \label{invC}\end{aligned}$$ To derive these expressions, we have used the field equations, the quantities (\[RT\_R rr\])–(\[RT\_R xx\]), (\[Bach rr\])–(\[Bach xx\]), (\[WeyliK\])–(\[WeylfK\]), and relations (\[contraEinstein-WeylBHC\]), (\[confrel\]), (\[OmBach\]) together with ${C_{abcd}\,C^{abcd}=\Omega^{-4}\, C_{abcd}^{{\hbox{\tiny Kundt}}}\, C^{abcd}_{{\hbox{\tiny Kundt}}}}$ which follows from the invariance of the Weyl tensor under conformal transformations. It is interesting to observe from (\[invB\]) and (\[Bach rr\])–(\[Bach xx\]) with (\[OmBach\]) that $$B_{ab}=0 \quad\Leftrightarrow\quad B_{ab}\,B^{ab} =0 \,. \label{Bach=0iffINV=0}$$ Moreover, $$C_{abcd}\,C^{abcd}=0 \quad\Rightarrow\quad B_{ab} =0 \,, \label{Weylinv=0thenBach=0}$$ because the relation ${{\cal H}'' +2=0}$ substituted into gives ${B_{ab}\,B^{ab} =0}$, i.e., ${B_{ab} =0}$ due to . Notice also that the *first Bach component* ${\B_1=\H \H''''}$ *always vanishes on the horizon* where ${\H=0}$, see the condition . In view of the key invariant , there are *two geometrically distinct classes of solutions* to (\[Eq1\]), (\[Eq2\]), depending on the Bach tensor ${B_{ab}}$. 
The first simple case corresponds to ${B_{ab}=0}$, while the much more involved second case, not allowed in General Relativity, arises when ${B_{ab}\ne0}$. This invariant classification has geometrical and physical consequences. In particular, the distinction of spacetimes with ${B_{ab}=0}$ and with ${B_{ab}\ne0}$ can be detected by measuring geodesic deviation of test particles, see Section \[geodeviation\] below. ${B_{ab}=0}$: Uniqueness of Schwarzschild {#integration:Schw} ----------------------------------------- First, let us assume the metrics (\[BHmetric\]) such that ${B_{ab}=0}$ everywhere. In view of and , this condition requires ${\B_1=0=\B_2}$, that is $$\H\,\H''''=0 \,,\qquad \H'\,{\cal H}'''-\tfrac{1}{2}{{\cal H}''}^2 +2 =0 \,. \label{Bab=0-RHS}$$ Therefore, all left-hand sides and right-hand sides of equations (\[Eq1\]) and (\[Eq2\]) *vanish separately*, i.e., $$\Omega\Omega''=2{\Omega'}^2 \,,\qquad \Omega\Omega'{\cal H}'+3\Omega'^2{\cal H}+\Omega^2 =0 \,. \label{Bab=0-LHS}$$ The first equations of (\[Bab=0-RHS\]) and (\[Bab=0-LHS\]) imply that ${\cal H}$ must be *at most cubic*, and $\Omega^{-1}$ must be *at most linear* in $r$. Using a coordinate freedom of the metric (\[BHmetric\]), without loss of generality we obtain ${\Omega=-1/r}$. The remaining equations (\[Bab=0-RHS\]), (\[Bab=0-LHS\]) then admit a unique solution $$\Omega(r)=-\frac{1}{r}\,,\qquad {\cal H}(r) = -r^2-2m\, r^3 \,. \label{IntegrSchwAdS}$$ Not surprisingly, this is exactly the Schwarzschild solution of General Relativity, see equation (\[Schw\]). Thus we have verified that the *Schwarzschild black hole spacetime is the only possible solution with vanishing Bach tensor*. Its corresponding scalar invariants (\[invR\])–(\[invC\]) are $$R_{ab}\, R^{ab} = 0 = B_{ab}\, B^{ab} \,,\qquad C_{abcd}\, C^{abcd} = 48\,m^2 r^6 \,. \label{SchwarzInvariants}$$ Clearly, for ${m\not=0}$ there is a curvature singularity at ${r\to\infty}$ corresponding to ${\bar{r}=\Omega(r)=0}$.[^2] ${B_{ab}\ne0}$: New types of solutions to QG {#integration:nonSchw} -------------------------------------------- Many other spherically symmetric vacuum solutions to Quadratic Gravity and Einstein–Weyl gravity exist when the Bach tensor is nontrivial. They are *much more involved, and do not exist in General Relativity*. Indeed, the field equations (\[fieldeqsEWmod\]) imply ${R_{ab}=4k\, B_{ab}\ne0}$, which is in contradiction with vacuum Einstein’s equations ${R_{ab}=0}$. In the rest of this paper, we now concentrate on these new spherical spacetimes in QG, in particular on black holes generalizing the Schwarzschild solution. First, we integrate the field equations (\[Eq1\]), (\[Eq2\]) for the metric functions $\Omega(r)$ and ${\cal H}(r)$. Actually, we demonstrate that there are several classes of such solutions with ${B_{ab}\ne0}$. After their explicit identification and description, we will analyze their geometrical and physical properties. Solving the field equations {#integration} =========================== For a nontrivial Bach tensor (${\B_1, \B_2 \ne0}$), the right-hand sides of the field equations (\[Eq1\]), (\[Eq2\]) are nonzero, so that the nonlinear system of two ordinary differential equations for $\Omega(r)$, ${\cal H}(r)$ is coupled in a complicated way. Finding its general solution explicitly seems to be hopeless. However, *it is possible to write the admitted solutions analytically, in terms of (infinite) mathematical series expressed in powers of the radial coordinate $r$*. In fact, there are *two natural possibilities*. 
The first is the expansion in powers of the parameter ${\Delta \equiv r-r_0}$ which expresses the solution around any finite value $r_0$ (including ${r_0=0}$). The second possibility is the expansion in powers of $r^{-1}$ which is applicable for large values of $r$. Let us now investigate both these cases. Expansion in powers of ${\Delta \equiv r-r_0}$ {#expansio_DElta} ---------------------------------------------- It is a great advantage that , is an *autonomous system*. Thus we can find the metric functions in the form of an *expansion in powers of $r$ around any fixed value* ${r_0}$, $$\begin{aligned} \Omega(r) {\!\!\!& = &\!\!\!}\Delta^n \sum_{i=0}^\infty a_i \,\Delta^{i}\,, \label{rozvojomeg0}\\ \H(r) {\!\!\!& = &\!\!\!}\Delta^p \,\sum_{i=0}^\infty c_i \,\Delta^{i}\,, \label{rozvojcalH0}\end{aligned}$$ where r-r\_0, \[DElta\] and $r_0$ is *any real constant*.[^3] In particular, in some cases this allows us to find solutions close to any black hole horizon $r_h$ by choosing ${r_0=r_h}$. It is assumed that ${i=0, 1, 2, \ldots}$ are integers, so that the metric functions are expanded in integer steps of ${\Delta=r-r_0}$. On the other hand, the *dominant real powers* $n$ and $p$ in the expansions (\[rozvojomeg0\]) and (\[rozvojcalH0\]) *need not be* positive integers. We only assume that ${a_0\not=0}$ and ${c_0\not=0}$, so that the coefficients $n$ and $p$ are uniquely defined as the leading powers. By inserting – into the field equations , , we prove in Section \[expansiont\_0\] that *only 4 classes of solutions of this form are allowed*, namely =\[-1,2\],=\[0,1\],=\[0,0\],=\[1,0\]. \[4classes\] In subsequent Section \[description\], it will turn out that the only possible solution in the class ${[n,p]=[-1,2]}$ is the Schwarzschild black hole for which the Bach tensor vanishes. Explicit Schwarzschild–Bach black holes with ${B_{ab}\ne0}$ are contained in the classes ${[0,1]}$ and ${[0,0]}$. The fourth class ${[n,p]=[1,0]}$ represents singular solutions without horizon, and it is equivalent to the class ${(s,t)=(2,2)}$ identified previously in [@Stelle:1978; @LuPerkinsPopeStelle:2015b; @PerkinsPhD]. Expansion in powers of $r^{-1}$ {#expansion_INF} ------------------------------- Analogously, we may study and classify all possible solutions to the QG field equations for an asymptotic expansion as ${r\rightarrow \infty}$. Instead of , with , for very large $r$ we can assume that the metric functions $\Omega(r)$, $\mathcal{H}(r)$ are expanded in *negative powers* of $r$ as $$\begin{aligned} \Omega(r) {\!\!\!& = &\!\!\!}r^N \sum_{i=0}^\infty A_i \,r^{-i}\,, \label{rozvojomegINF}\\ \mathcal{H}(r) {\!\!\!& = &\!\!\!}r^P \,\sum_{i=0}^\infty C_i \,r^{-i}\,. \label{rozvojcalHINF}\end{aligned}$$ Inserting the series (\[rozvojomegINF\]), (\[rozvojcalHINF\]) into the field equations , , it can be shown that *only 2 classes of such solutions are allowed*, namely =\[-1,3\]\^,=\[-1,2\]\^, \[2classes\] see Section \[expansiont\_INF\]. In subsequent Section \[description\_INF\], it will be shown that the class ${[N,P]=[-1,3]^\infty}$ represents the Schwarzschild–Bach black holes, whereas the class ${[N,P]=[-1,2]^\infty}$ is a specific Bachian generalization of a flat space which does not correspond to a black hole. 
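Before turning to the detailed discussion of these classes, let us add a simple independent check (a short sketch included here for the reader's convenience; it is not part of the derivation itself) that the explicit Schwarzschild solution (\[IntegrSchwAdS\]) indeed has vanishing Bach components (\[B1\]), (\[B2\]) and satisfies both field equations (\[Eq1\]), (\[Eq2\]) as well as the trace equation (\[trace\]). It can be performed, e.g., symbolically in Python:

```python
# Symbolic sanity check (added sketch, not part of the original derivation):
# the Schwarzschild seed Omega = -1/r, H = -r^2 - 2*m*r^3 has B_1 = B_2 = 0
# and solves the two field equations and the trace equation.
import sympy as sp

r, m, k = sp.symbols('r m k')
Omega = -1/r
H = -r**2 - 2*m*r**3

B1 = H*sp.diff(H, r, 4)                                              # B_1 = H H''''
B2 = sp.diff(H, r)*sp.diff(H, r, 3) - sp.Rational(1, 2)*sp.diff(H, r, 2)**2 + 2

Eq1 = Omega*sp.diff(Omega, r, 2) - 2*sp.diff(Omega, r)**2 - sp.Rational(1, 3)*k*B1/H
Eq2 = (Omega*sp.diff(Omega, r)*sp.diff(H, r) + 3*sp.diff(Omega, r)**2*H + Omega**2
       - sp.Rational(1, 3)*k*B2)
trace = (H*sp.diff(Omega, r, 2) + sp.diff(H, r)*sp.diff(Omega, r)
         + sp.Rational(1, 6)*(sp.diff(H, r, 2) + 2)*Omega)

print([sp.simplify(e) for e in (B1, B2, Eq1, Eq2, trace)])           # [0, 0, 0, 0, 0]
```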
Discussion of solutions using the expansion in powers of $\Delta$ {#expansiont_0} ================================================================= By inserting the series (\[rozvojomeg0\]), (\[rozvojcalH0\]) into the first field equation (\[Eq1\]), the following key relation is obtained $$\begin{aligned} &\sum_{l=2n-2}^{\infty}\Delta^{l}\sum^{l-2n+2}_{i=0}a_i\, a_{l-i-2n+2}\,(l-i-n+2)(l-3i-3n+1) \nonumber \\ & \hspace{35.0mm}=\tfrac{1}{3}k \sum^{\infty}_{l=p-4}\Delta^{l}\,c_{l-p+4}\,(l+4)(l+3)(l+2)(l+1) \,. \label{KeyEq1}\end{aligned}$$ The second field equation (\[Eq2\]) puts further constraints on the admitted solutions, namely $$\begin{aligned} &\sum_{l=2n+p-2}^{\infty}\Delta^{l}\sum^{l-2n-p+2}_{j=0}\sum^{j}_{i=0}a_i\,a_{j-i}\,c_{l-j-2n-p+2}\,(j-i+n)(l-j+3i+n+2) +\sum_{l=2n}^{\infty}\Delta^{l}\sum^{l-2n}_{i=0}a_i\,a_{l-i-2n} \nonumber \\ & = \tfrac{1}{3}k \bigg[2+\sum^{\infty}_{l=2p-4}\Delta^{l}\sum^{l-2p+4}_{i=0}c_{i}\,c_{l-i-2p+4}\,(i+p)(l-i-p+4)(l-i-p+3)(l-\tfrac{3}{2}i-\tfrac{3}{2}p+\tfrac{5}{2})\bigg]\,. \label{KeyEq2}\end{aligned}$$ A considerably simpler is the additional (necessary but not sufficient) condition following from the trace equation (\[trace\]) which reads $$\begin{aligned} &\sum_{l=n+p-2}^{\infty}\Delta^{l}\sum^{l-n-p+2}_{i=0}c_i\,a_{l-i-n-p+2}\,\big[(l-i-p+2)(l+1)+\tfrac{1}{6}(i+p)(i+p-1)\big] =-\tfrac{1}{3}\sum^{\infty}_{l=n}\Delta^{l}\,a_{l-n} \,. \label{KeyEq3}\end{aligned}$$ Now we analyze the consequences of the equations (\[KeyEq1\])–(\[KeyEq3\]). First, by comparing the corresponding coefficients of the same powers of $\Delta^l$ on both sides of the key relation (\[KeyEq1\]), we can express the coefficients $c_j$ in terms of (products of) $a_j$. Moreover, the *terms with the lowest order* put further restrictions. In particular, comparing the lowest orders on both sides (that is ${l=2n-2}$ and ${l=p-4}$) it is obvious that *we have to discuss three distinct cases*, namely: - **Case I**: ${\ \ 2n-2<p-4}$,  i.e., ${\ p>2n+2}$, - **Case II**: ${\ 2n-2>p-4}$,  i.e., ${\ p<2n+2}$, - **Case III**: ${2n-2=p-4}$,  i.e., ${\ p=2n+2}$. Now let us systematically derive all possible solutions in these three distinct cases. **Case I** ---------- In this case, ${2n-2<p-4}$, so that the *lowest* order in the key equation (\[KeyEq1\]) is on the *left hand* side, namely $\Delta^l$ with ${l=2n-2}$, and this yields the condition $$n(n+1)=0 \,. \label{KeyEq1CaseI}$$ There are thus only two possible cases, namely ${n=0}$ and ${n=-1}$. Next, it is convenient to apply the equation (\[KeyEq3\]) whose lowest orders on its both sides are $$\big[6n(n+p-1)+p(p-1)\big]c_0\,\Delta^{n+p-2}+\cdots=-2\,\Delta^{n}+\cdots \,. \label{KeyEq3CaseI}$$ For ${n=0}$, these powers are ${\Delta^{p-2}}$ and ${\Delta^{0}}$, respectively, but ${p-2>2n=0}$ by the definition of Case I. The lowest order ${0=-2\Delta^{0}}$ thus leads to a contradiction. Only the possibility ${n=-1}$ remains, for which reduces to $$(p-3)(p-4)c_0\,\Delta^{p-3}+\cdots =-2\,\Delta^{-1}+\cdots \,. \label{KeyEq3CaseIn=-1}$$ Since ${c_0\ne0}$, the only possibility is ${p=2}$, in which case ${c_0=-1}$. **To summarize**: The only possible class of solutions in Case I is given by $$[n,p]=[-1,2]\qquad \hbox{with}\quad c_0=-1\,. \label{CaseI_summary}$$ **Case II** ----------- In this case, ${2n-2>p-4}$, so that the *lowest* order in the key equation (\[KeyEq1\]) is on the *right hand* side, namely $\Delta^l$ with ${l=p-4}$, and this gives the condition $$p(p-1)(p-2)(p-3)=0 \,. 
\label{KeyEq1CaseII}$$ Thus there are four possible cases, namely ${p=0}$, ${p=1}$, ${p=2}$, and ${p=3}$. Equation has the lowest orders on both sides the same as given by equation , that is $$\begin{aligned} \hbox{for}\quad p=0:\qquad & \big[6n(n-1)\big]c_0\,\Delta^{n-2}+\cdots=-2\,\Delta^{n}+\cdots&& \hbox{necessarily}\quad n=0, 1\,,\\ \hbox{for}\quad p=1:\qquad & \big[6n^2\big]c_0\,\Delta^{n-1}+\cdots=-2\,\Delta^{n}+\cdots&& \hbox{necessarily}\quad n=0\,,\\ \hbox{for}\quad p=2:\qquad & \big[6n(n+1)+2\big]c_0\,\Delta^{n}+\cdots=-2\,\Delta^{n}+\cdots&& (3n^2+3n+1)c_0=-1\,,\label{contrp=2c0} \\ \hbox{for}\quad p=3:\qquad & \big[6n(n+2)+6\big]c_0\,\Delta^{n+1}+\cdots=-2\,\Delta^{n}+\cdots&& \hbox{not compatible}\,.\end{aligned}$$ Moreover, the lowest orders of all the terms in the field equation for the case ${p=2}$, implying ${n>0}$, are 3a\_0\^2\[n(3n+2)c\_0+1\]\^[2n]{} +2k(c\_0\^2 -1) + =0 , \[eq2rozvoj0omeg\] which requires ${c_0=\pm 1}$, but the constraint ${3n^2+3n+1=\pm 1}$ cannot be satisfied for ${n>0}$. **To summarize**: The only possible three classes of solutions in Case II are given by \[n,p\]=\[0,1\], \[n,p\]=\[0,0\], \[n,p\]=\[1,0\]. \[CaseII\_summary\] **Case III** ------------ Now ${2n-2=p-4}$, that is ${n=-1+p/2}$ equivalent to ${p=2n+2}$. In such a case, the *lowest* order in the key equation (\[KeyEq1\]) is *on both sides*, namely $\Delta^l$ with ${l=p-4}$. This implies the condition p(p-2)=0. \[KeyEq1CaseIII\] There are three subcases to be considered, namely ${p=0}$, ${p=2}$, and ${3a_0^2=-4kc_0 (p-1)(p-3)}$ with ${p\not= 0,1,2,3}$. This corresponds to ${n=-1}$, ${n=0}$, and ${3a_0^2=-4kc_0(4n^2-1)}$ with ${n\not= -1,-1/2,0,1/2}$, respectively. The leading orders of the trace equation on both sides are 2(11n\^2+6n+1) c\_0\^[3n]{} +[& = &]{}-2 \^n+. \[eqtr00omegIII\] Consequently, we obtain $$\begin{aligned} &\hbox{for}\quad n=-1\Leftrightarrow p=0:\quad & 12c_0\,\Delta^{-3}+\cdots=-2\,\Delta^{-1}+\cdots& \qquad\hbox{not compatible}\,,\label{contrp=2c0IIIa}\\ &\hbox{for}\quad n=0\Leftrightarrow p=2:\quad & 2c_0+\cdots=-2+\cdots&\qquad c_0=-1\,,\label{contrp=2c0IIIb}\\ &\hbox{for}\quad 3a_0^2=4kc_0(1-4n^2): & (11n^2+6n+1)c_0+\cdots=0&\qquad \hbox{not compatible}\,. \label{contrp=2c0IIIc}\end{aligned}$$ The incompatibility in the cases and are due to the fact that ${c_0\ne 0}$ and ${11n^2+6n+1}$ is always positive. In the case , we employ the field equation which for ${n=0, p=2}$ gives the condition ${3a_0^2+2k(c_0^2-1) = 0}$. Since ${c_0=-1}$ implies ${a_0=0}$, we again end up in a contradiction. **To summarize**: There are no possible solutions in Case III. Description and study of all possible solutions in powers of $\Delta$ {#description} ===================================================================== Let us analyze all spherically symmetric solutions contained in the possible four classes and contained in Case I and Case II, respectively. Uniqueness of the Schwarzschild black hole in the class ${[n,p]=[-1,2]}$ {#Schw_[n,p]=[-1,2]} ------------------------------------------------------------------------ Starting with the only admitted class ${[n,p]=[-1,2]}$ in the Case I, see , now we prove that *the only solution in this class is the Schwarzschild solution* with vanishing Bach tensor. Such a solution can be easily identified within the complete form (\[rozvojomeg0\])–(\[DElta\]), with ${r_0=0}$, using the expression (\[IntegrSchwAdS\]) as && a\_0=-1,a\_i=0 i1,\ && c\_0=-1,c\_1=-2m,c\_i=0 i2, where $m$ is a free parameter. Let us prove the uniqueness. 
The full key equation for ${n=-1}$ ${p=2}$ reads $$\begin{aligned} 2a_1a_0\,\Delta^{-3} + 6a_2a_0\,\Delta^{-2} +12a_3a_0\,\Delta^{-1} & +\sum_{l=0}^{\infty}\Delta^{l}\sum^{l+4}_{i=0}a_i\, a_{l+4-i}\,(l+3-i)(l+4-3i) \nonumber \\ & =\tfrac{1}{3}k \sum^{\infty}_{l=0}\Delta^{l}\,c_{l+2}\,(l+4)(l+3)(l+2)(l+1) \,, \label{KeyEq1[n,p]=[-12]}\end{aligned}$$ which necessarily implies a\_1=0,a\_2=0,a\_3=0, \[Schwinitcond1a\] and \^[l+4]{}\_[i=0]{}a\_i a\_[l+4-i]{}(l+3-i)(l+4-3i) =k c\_[l+2]{}(l+4)(l+3)(l+2)(l+1) l0, \[Schwinitcond1b\] that is (l+4)(l+5)a\_0a\_[l+4]{} =k c\_[l+2]{}(l+4)(l+3)(l+2)(l+1) -\^[l+3]{}\_[i=1]{}a\_i a\_[l+4-i]{}(l+3-i)(l+4-3i) l0. \[Schwinitcond1c\] The second field equation , using (\[Schwinitcond1a\]), takes the explicit form $$\begin{aligned} -c_2a_0^2\,\Delta^{0}&+ \sum_{l=1}^{\infty}\Delta^{l}\sum^{l+2}_{j=0}\sum^{j}_{i=0}a_i\,a_{j-i}\,c_{l-j+2}\,(j-i-1)(l-j+3i+1) +\sum_{l=1}^{\infty}\Delta^{l}\sum^{l+2}_{i=0}a_i\,a_{l-i+2}\nonumber \\ & = \tfrac{1}{3}k \,\sum^{\infty}_{l=1}\Delta^{l}\sum^{l}_{i=0}c_{i}\,c_{l-i}\,(i+2)(l-i+2)(l-i+1) (l-\tfrac{3}{2}i-\tfrac{1}{2})\,, \label{KeyEq2[n,p]=[-12]}\end{aligned}$$ which implies c\_2=0. \[Schwinitcond2\] However, instead of solving (\[KeyEq2\[n,p\]=\[-12\]\]) for a general $l$, it is convenient to employ the “trace equation” \^[l+1]{}\_[i=0]{}c\_ia\_[l+1-i]{}=-a\_[l+1]{}  l2. \[KeyEq3\[n,p\]=\[-12\]\] This can be rewritten as (l-1)la\_0c\_[l+1]{}=6l(l+1)a\_[l+1]{}-\^[l]{}\_[i=1]{}c\_ia\_[l+1-i]{} l2, \[KeyEq3\[n,p\]=\[-12\]b\] i.e., by relabeling the index ${l \to l+2}$, as (l+1)(l+2)a\_0c\_[l+3]{}=6(l+2)(l+3)a\_[l+3]{}-\^[l+2]{}\_[i=1]{}c\_ia\_[l+3-i]{} l0. \[KeyEq3\[n,p\]=\[-12\]c\] Now, we employ the mathematical induction. Let us assume that for some ${l\ge 0}$ && a\_i=0  i=1,…,l+3,\ && c\_i=0  i=2,…,l+2. For ${l=0}$ this is true due to , . Then the field equation reduces to (l+4)(l+5)a\_0a\_[l+4]{}=0,\[caseIbeq1\] while equation gives (l+1)(l+2)a\_0c\_[l+3]{}=0. This obviously implies ${a_{l+4}=0}$ and ${c_{l+3}=0}$, completing the induction step. Therefore, *all* coefficients $a_i$ for $i\geq 1$ and *all* $c_i$ for $i\geq 2$ vanish, which means that the only possible solution in Case I is =,=-\^2 +c\_[1]{}\^[3]{}. \[Schwarzschild\[-1,2\]\] With the coordinate freedom , enabling us to set ${a_0=-1}$ and ${\Delta=r}$, this is exactly the explicit Schwarzschild solution . **To conclude**: The class of solutions ${[n,p]=[-1,2]}$ represents spherically symmetric Schwarzschild solution , and it is the only solution in this class. Schwarzschild–Bach black holes in the class ${[n,p]=[0,1]}$: near the horizon {#SchwaBach_[n,p]=[0,1]} ----------------------------------------------------------------------------- Now we will prove that this second class represents spherically symmetric *non-Schwarzschild solutions to QG* that describe *black holes with nonvanishing Bach tensor*. Thus it is natural to call this family *Schwarzschild–Bach black holes*. The first three terms in the expansion of the full solution take the explicit form $$\begin{aligned} \Omega(r) {\!\!\!& = &\!\!\!}-\frac{1}{r} + \frac{b}{r_h^2}(r-r_h) -\frac{b}{r_h^3}\Big(2 +\frac{1}{8k r_h^2}+b\Big)(r-r_h)^2+\ldots\,, \label{IIbOmegaFULL}\\ \mathcal{H}(r) {\!\!\!& = &\!\!\!}(r-r_h)\bigg[ \frac{r^2}{r_h} + 3b\,(r-r_h)+\frac{b}{r_h}\Big(4-\frac{1}{2k r_h^2} + 3b \Big)(r-r_h)^2 + \ldots \bigg]\,, \label{IIbH0FULL}\end{aligned}$$ where $r_h$ localizes the *black hole horizon* since ${\H(r_h)=0}$. 
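A quick numerical illustration of these leading terms may also be useful (a minimal sketch added by us; the values of $r_h$, $k$ and $b$ below are purely illustrative). Evaluating the truncated expansions (\[IIbOmegaFULL\]), (\[IIbH0FULL\]) and converting them to the standard metric functions via (\[rcehf\]) confirms that $\H$ vanishes at $r_h$, and indicates that for ${b\ne0}$ the functions $f$ and $h$ are no longer equal near the horizon.

```python
# Minimal numerical sketch (illustrative parameter values chosen by us):
# evaluate the truncated near-horizon expansions and convert them to the
# standard metric functions via h = -Omega^2*H, f = -(Omega'/Omega)^2*H.
r_h, k, b = -1.0, 0.5, 0.3

def Omega(r):
    d = r - r_h
    return -1.0/r + b/r_h**2*d - b/r_h**3*(2 + 1/(8*k*r_h**2) + b)*d**2

def H(r):
    d = r - r_h
    return d*(r**2/r_h + 3*b*d + b/r_h*(4 - 1/(2*k*r_h**2) + 3*b)*d**2)

def dOmega(r, eps=1.0e-6):                    # numerical derivative is sufficient here
    return (Omega(r + eps) - Omega(r - eps))/(2*eps)

print(H(r_h))                                 # 0: r_h is indeed the horizon
r = r_h + 0.05                                # a point slightly off the horizon
h = -Omega(r)**2*H(r)
f = -(dOmega(r)/Omega(r))**2*H(r)
print(h, f)                                   # for b != 0 the two values differ
```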
In fact, for the *whole* class ${[n,p]=[0,1]}$, the metric function $\H$ given by , takes the generic form ${\H(r) = (r-r_0)\,\big(c_0+c_1(r-r_0)+\ldots\big)}$, which means that ${r=r_0}$ is the root of $\H$, and thus the horizon. Therefore, we can identify the constant $r_0$ (around which the solution is expanded) with the location of geometrical/physical horizon, r\_0r\_h . \[r0=rh\] When the additional new *“Bach parameter”* $b$ in , is set to zero, the Bach tensor vanishes, and this solution reduces to the Schwarzschild spacetime (\[IntegrSchwAdS\]) with ${r_h=-1/(2m)}$. Let us systematically derive the complete analytic form of these Schwarzschild–Bach black holes, leading to , . The equation for ${[n,p]=[0,1]}$ gives \^[l+1]{}\_[i=0]{}a\_i a\_[l+2-i]{}(l+2-i)(l+1-3i) =k c\_[l+3]{}(l+4)(l+3)(l+2)(l+1) , \[KeyEq1\[n,p\]=\[01\]\] where ${l\ge 0}$. Relabeling ${l \to l-1 }$, we thus obtain c\_[l+2]{}=\^[l]{}\_[i=0]{}a\_i a\_[l+1-i]{}(l+1-i)(l-3i)  l1, \[nonSchwinitcondc\] which enables us to express all coefficients $c_{l+2}$ in terms of ${a_0,\ldots, a_{l+1}}$, starting from $c_3$. In the lowest nontrivial order ${l=0}$, the “trace equation” implies a\_1=-(1+c\_1), \[nonSchwinitcond3\] while for higher orders ${l=1, 2, \ldots}$, yields a\_[l+1]{}=  l1, \[nonSchwinitconda\] which expresses all $a_{l+1}$ in terms of ${a_0,\ldots, a_l}$ and ${c_1,\ldots, c_{l+1}}$. Finally, in the lowest nontrivial order ${l=0}$, the field equation gives the constraint ${6kc_0c_2=3a_0(a_0+a_1c_0)+2k(c_1^2-1)}$. Using , this becomes c\_2=. \[nonSchwinitcond2\] There are thus *three free initial parameters*, namely $a_0$, $c_0$, and $c_1$ (apart from ${r_0=r_h}$). Using , , we obtain ${a_1, c_2}$, and then $a_{l+1}$, $c_{l+2}$ for all ${l = 1, 2, \ldots }$ by the alternate application of the *recurrent relations* , . This gives the complete analytic solution. Now, the scalar invariants (\[invB\]), (\[invC\]) evaluated at ${r=r_h\equiv r_0}$ take the form B\_[ab]{}B\^[ab]{}(r\_h) = ( )\^2 ,C\_[abcd]{} C\^[abcd]{}(r\_h) = (1 + c\_1)\^2 .\[BInv2\] *The Bach tensor is in general nonvanishing*. In fact, for a physical interpretation of this family of solutions, it is convenient to introduce a new parameter $b$ proportional to ${1-c_1^2+3c_0c_2}$. Setting ${b=0}$ then gives the necessary condition for the Bach tensor to vanish. In view of , such *Bach parameter* $b$ can be defined simply as b (c\_1-2), \[b\_definice\] so that the Bach scalar invariant at the black hole horizon $r_h$ becomes B\_[ab]{}B\^[ab]{}(r\_h) = . 
\[BachInvariant\] Using $b$ as the dimensionless key parameter in the expansion , , the recurrent relations , readily yield an explicit solution of the field equations in the form $$\begin{aligned} & a_1 = -\frac{a_0}{c_0} \Big( 1 + b \Big) \,,\label{IIb_expansiona}\\ & a_2 = +\frac{a_0}{c_0^2}\Big( 1 + \big(2+\tfrac{a_0^2}{8k}\big)b+b^2 \Big) \,,\nonumber\\ & a_3 = -\frac{a_0}{c_0^3}\Big( 1 + \tfrac{1}{9}\big(25+\tfrac{29a_0^2}{8k}+\tfrac{a_0^4}{16k^2}\big)b +\tfrac{1}{9}\big(23+\tfrac{35a_0^2}{8k}\big)b^2+\tfrac{7}{9}b^3 \Big) \,, \ldots\,, \nonumber\end{aligned}$$ and $$\begin{aligned} & c_1 = 2 + 3 b \,,\label{IIb_expansionc}\\ & c_2 = \frac{1}{c_0}\Big( 1 + \big(4-\tfrac{a_0^2}{2k}\big)b + 3 b^2 \Big) \,,\nonumber\\ & c_3 = \frac{a_0^4}{32k^2c_0^2}\, b\,,\nonumber\\ & c_4 = \frac{a_0^2}{30kc_0^3}\, b\,\Big(\big(1-\tfrac{5a_0^2}{4k}-\tfrac{a_0^4}{32k^2}\big) +\big(2-\tfrac{13a_0^2}{8k}\big)b+ b^2 \Big)\,, \ldots\,, \nonumber\end{aligned}$$ and so on, where $a_0$, $c_0$, and $b$ are three free parameters. ### Identification of the Schwarzschild black hole Now, it is possible to *identify the Schwarzschild black hole*. This is defined geometrically by the property that its Bach tensor vanishes. In view of , it requires to set the key parameter $b$ to zero. Interestingly, with ${b=0}$, the expansion coefficients , simplify enormously to && a\_i=a\_0(-)\^i i 0 ,\ && c\_1=2, c\_2=, c\_i=0 i 3. The first sequence clearly corresponds to a *geometrical series*, while the second series is *truncated to a polynomial of the 3rd order*. The metric functions thus take the explicit closed form $$\begin{aligned} \Omega(r) {\!\!\!& = &\!\!\!}a_0 \,\sum_{i=0}^\infty \,\Big(\!\!-\frac{\Delta}{c_0}\Big)^i =\frac{a_0\,c_0}{c_0+\Delta}=\frac{a_0\,c_0}{r-r_h+c_0}\,, \label{IIbOmega}\\ \mathcal{H}(r) {\!\!\!& = &\!\!\!}c_0(r-r_h)+2(r-r_h)^2+c_0^{-1}(r-r_h)^3 \,. \label{IIbH0}\end{aligned}$$ Using the gauge freedom (a constant rescaling and shift of the coordinate $r$), *we are free to chose* a\_0=-,c\_0=r\_h,\[IIb\_a0\] so that the metric functions become =(r) = -, (r) = -r\^2+ = (r-r\_h) . \[IIbH0Schw\] Clearly, there is a *black hole horizon located at* ${r_h}$. This is the *Schwarzschild horizon* given by the usual condition ${\,h=1-2m/\bar{r}=0\,}$. In terms of ${r=-1/{\bar r}}$, it is equivalent to ${r_h=-1/(2m)}$. Thus for the case ${b=0}$, we have fully recovered the standard form of the Schwarzschild solution, since the metric functions are exactly the same as . ### More general Schwarzschild–Bach black holes When ${b\not=0}$, the corresponding solution given by , , that is , , can be naturally interpreted as *generalized black holes with a nontrivial Bach tensor* whose invariant value ${B_{ab}\,B^{ab}}$ at the horizon is proportional to $b^2$, according to . Moreover, as ${b \to 0}$ we explicitly obtain the Schwarzschild black hole . Using the summation of the “background” terms independent of $b$ as in , and the same gauge fixing , it is possible to write this solution explicitly as , . Recall that $r_h$ *still gives the exact value of the horizon* even if $b$ is now nonzero, see the text below equation . To express a *general solution in this class completely*, it is convenient to introduce coefficients $\alpha_i, \gamma_i$ as *those parts of* $a_i, c_i$, respectively, which *do not involve* the ${b=0}$ Schwarzschild “background”, i.e., using the following definitions: a\_i [& &]{}a\_i(b=0)-,a\_i(b=0) ,\[def\_alphai\]\ c\_1 [& &]{}2 + 3b\_1 ,c\_2 +3b ,c\_i 3b i 3. 
\[def\_gammai\] With the natural gauge choice , the complete solution then takes the explicit form (r) [& = &]{}- -\_[i=1]{}\^\_i(1-)\^i , \[Omega\_\[0,1\]\]\ (r) [& = &]{}(r-r\_h), \[H\_\[0,1\]\] with the initial coefficients $$\alpha_1=1 \,, \qquad \gamma_1=1\,, \qquad \gamma_2 = \frac{1}{3}\Big(4-\frac{1}{2kr_h^2}+3b\Big) \,, \label{alphasgammaIIbinitial}$$ and all other coefficients $\alpha_l, \gamma_l$ for any ${l \ge 1}$ given by the recurrent relations (defining ${\alpha_0=0}$) $$\begin{aligned} \alpha_{l+1}= &\, \frac{1}{(l+1)^2}\Big[\alpha_l\big(2l^2+2l+1\big)-\alpha_{l-1}l^2 -3\sum_{i=1}^{l+1}(-1)^i\,\gamma_i\,(1+b\,\alpha_{l+1-i})\big[(l+1)(l+1-i)+\tfrac{1}{6}i(i+1)\big]\Big], \nonumber\\ \gamma_{l+2}= &\, \frac{(-1)^{l+1}}{kr_h^2\,(l+3)(l+2)(l+1)l}\,\sum_{i=0}^{l} \big(\alpha_i+\alpha_{l+1-i}(1+b\,\alpha_i) \big)(l+1-i)(l-3i) \,, \label{alphasIIbgeneral}\end{aligned}$$ which follow from and for $a_{l+1}$ and $c_{l+2}$, respectively. The first terms generated by these relations are $$\begin{aligned} & \alpha_2 = 2+\frac{1}{8kr_h^2}+b \,, \nonumber\\ & \alpha_3 = \frac{1}{9}\Big(25+\frac{29}{8kr_h^2}+\frac{1}{16k^2r_h^4}\Big) +\frac{1}{9}\Big(23+\frac{35}{8kr_h^2}\Big)\,b+\frac{7}{9}\,b^2 \,, \ldots\,, \label{alphasIIb0}\\ & \gamma_3 = \frac{1}{96k^2r_h^4} \,, \nonumber\\ & \gamma_4 = \frac{1}{18kr_h^2}\Big(\frac{1}{5}-\frac{1}{4kr_h^2}-\frac{1}{160k^2r_h^4}\Big) +\frac{3}{720kr_h^2}\Big(16-\frac{13}{kr_h^2}\Big)\,b+\frac{1}{90kr_h^2}\,b^2 \,, \ldots\,, \label{gammasIIb0}\end{aligned}$$ yielding , . This family of spherically symmetric black-hole spacetimes , in Einstein–Weyl/Quadratic Gravity depends on *two parameters* with a *clear geometrical and physical interpretation*, namely: - The parameter $r_h$ identifies the *horizon position*. Indeed, ${r=r_h}$ is the root of the metric function $\H(r)$ given by . - The dimensionless *Bach parameter* $b$ *distinguishes* the Schwarzschild solution (${b=0}$) from the more general Schwarzschild–Bach black hole spacetime with nonzero Bach tensor (${b\ne0}$). In fact, we have chosen the parameter $b$ in such a way that it *determines the value of the Bach tensor , on the horizon* $r_h$, namely \_1(r\_h) = 0, \_2(r\_h) = -b . \[bonhorizon\] Thus on the horizon, the invariants and of the Bach and Weyl tensors take the values B\_[ab]{}B\^[ab]{}(r\_h) = b\^2 ,C\_[abcd]{} C\^[abcd]{}(r\_h) = 12r\_h\^4(1+b)\^2 , \[BCInvariants\_\[0,1\]\] respectively. **To conclude**: The class of solutions ${[n,p]=[0,1]}$ represents spherically symmetric Schwarzschild–Bach black holes (abbreviated as Schwa–Bach), expressed in terms of the series , *around the horizon* $r_h$, i.e., for the special choice ${r_0=r_h}$. These Schwa–Bach black holes include and generalize the well-known Schwarzschild black hole. Restricting to Einstein’s theory, corresponding to ${k=0}$, requires ${a_0+a_1c_0=0}$, see the constraint above equation . Substituting this into , we obtain ${c_1=2}$, and thus ${b=0}$. This again confirms that the only possible spherical vacuum solution in General Relativity is the Schwarzschild solution. 
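The recurrent relations (\[alphasIIbgeneral\]) are straightforward to implement. The following short sketch (our own implementation, with purely illustrative values of ${kr_h^2}$ and $b$) generates the coefficients $\alpha_i$, $\gamma_i$ in exact rational arithmetic and reproduces the first explicit coefficients (\[alphasIIb0\]), (\[gammasIIb0\]) quoted above; using exact fractions avoids rounding errors when many coefficients of the series are generated.

```python
# Sketch (our own implementation of the recurrent relations above): generate the
# coefficients alpha_i, gamma_i of the Schwarzschild-Bach expansion and compare
# the first ones with their printed closed forms.
from fractions import Fraction

def schwabach_coefficients(K, b, n_max):
    # K stands for the combination k*r_h^2;  alpha_0 = 0, alpha_1 = 1, gamma_1 = 1
    alpha = {0: Fraction(0), 1: Fraction(1)}
    gamma = {1: Fraction(1), 2: Fraction(1, 3)*(4 - Fraction(1, 2)/K + 3*b)}
    for l in range(1, n_max):
        s1 = sum((-1)**i*gamma[i]*(1 + b*alpha[l + 1 - i])
                 *((l + 1)*(l + 1 - i) + Fraction(i*(i + 1), 6)) for i in range(1, l + 2))
        alpha[l + 1] = (alpha[l]*(2*l*l + 2*l + 1) - alpha[l - 1]*l*l - 3*s1)/Fraction((l + 1)**2)
        s2 = sum((alpha[i] + alpha[l + 1 - i]*(1 + b*alpha[i]))*(l + 1 - i)*(l - 3*i)
                 for i in range(0, l + 1))
        gamma[l + 2] = (-1)**(l + 1)*s2/(K*(l + 3)*(l + 2)*(l + 1)*l)
    return alpha, gamma

K, b = Fraction(1, 2), Fraction(3, 10)        # illustrative values of k*r_h^2 and b
alpha, gamma = schwabach_coefficients(K, b, 6)
print(alpha[2] == 2 + Fraction(1, 8)/K + b)   # True: matches alpha_2 above
print(gamma[3] == Fraction(1, 96)/K**2)       # True: matches gamma_3 above
```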
Let us finally remark that the explicit recurrent relations can be rewritten in a slightly more compact form if we relabel the index $l$ to ${j \equiv l+1}$, so that the relations for any ${j \ge 2}$ become $$\begin{aligned} \alpha_{j}= &\, \frac{1}{j^2}\Big[\alpha_{j-1}\big(2j^2-2j+1\big)-\alpha_{j-2}(j-1)^2 -3\sum_{i=1}^{j}(-1)^i\,\gamma_i\,(1+b\,\alpha_{j-i})\big[j(j-i)+\tfrac{1}{6}i(i+1)\big]\Big]\,, \nonumber\\ \gamma_{j+1}= &\, \frac{(-1)^{j}}{kr_h^2\,(j+2)(j+1)j(j-1)}\,\sum_{i=0}^{j-1} \big(\alpha_i+\alpha_{j-i}(1+b\,\alpha_i) \big)(j-i)(j-1-3i) \,. \label{alphasgammasgeneral_[0,1]}\end{aligned}$$ Schwarzschild–Bach black holes in the class ${[n,p]=[0,0]}$: near a generic point {#SchwaBach_[n,p]=[0,0]} --------------------------------------------------------------------------------- This more general class of possible spherically symmetric vacuum solutions to QG (see ) *may, as a special case, also represent the family of Schwarzschild–Bach black holes* with nonvanishing Bach tensor. In contrast to the previous case ${[n,p]=[0,1]}$, the expansion is now considered around *an arbitrary fixed value* $r_0$ which is *distinct from the position of the black hole horizon* $r_h$, r\_0 r\_h. \[r0\_NOT=\_rh\] Indeed, for ${[n,p]=[0,0]}$ the metric function $\H$ given by , is ${\H(r) = c_0+c_1(r-r_0)+\ldots}$, where ${c_0\ne0}$, so that the value ${r=r_0}$ is *not* the root of $\H$ and thus cannot be the horizon. In such a case, the first few terms in the expansion of the full solution take the explicit form $$\begin{aligned} \Omega(r) {\!\!\!& = &\!\!\!}-\frac{1}{r} + b_1\,\frac{r_h}{2r_0^3}\,\frac{(r-r_0)^2}{r_h-r_0} +\ldots\,, \label{[0,0]_OmegaFULL}\\ \mathcal{H}(r) {\!\!\!& = &\!\!\!}\big(r-r_h\big)\frac{r^2}{r_h} + (b_1-b_2)\,r_0(r-r_0) -3b_2\,(r-r_0)^2 \nonumber\\ && +\frac{(b_2-b_1)\big(1+\gamma+\frac{1}{2kr_0^2}\big)-2(2+3\gamma)b_2+3b_2^2}{(1+3\gamma+b_1-b_2)\,r_0} \,(r-r_0)^3 \ldots \,, \label{[0,0]_HFULL}\end{aligned}$$ where $b_1$ and $b_2$ are *two independent Bach parameters* proportional to values of the two components of the Bach tensor at $r_0$. By setting ${b_1=0=b_2}$, the Schwarzschild solution (which has vanishing Bach tensor) is immediately obtained. Let us derive this analytic form of the Schwa–Bach black holes. For ${[n,p]=[0,0]}$ the complete solution to , of the form – is given by the Taylor expansions (r) = a\_0 + \_[i=1]{}\^a\_i (r-r\_0)\^[i]{},(r) = c\_0 + \_[i=1]{}\^c\_i (r-r\_0)\^[i]{}. \[rozvoj\[0,0\]\] The key equation for ${n=0=p}$, after relabeling ${l \to l-1}$, gives c\_[l+3]{}=\^[l]{}\_[i=0]{} a\_i a\_[l+1-i]{}(l+1-i)(l-3i)  l1. \[\[0,0\]initcondc\] Equation , relabeling ${l \to l-1}$, implies a\_[l+1]{}=  l1. \[\[0,0\]initconda\] Finally, the field equation in the lowest nontrivial order ${l=0}$ gives one additional constraint c\_3=. \[\[0,0\]initcond2\] Thus there are *five free initial parameters*, namely ${a_0,\, a_1,\, c_0,\, c_1,\, c_2}$ (in addition to $r_0$). All the remaining coefficients $a_{l+1}$, $c_{l+3}$ in are then obtained by applying the recurrent relations , , respectively, starting as && a\_2 = -, …\ && c\_4 = -, …. \[\[0,0\]initcond3\] Now we show that *three of the five initial parameters* (namely ${a_0, a_1, c_0}$) *can be conveniently fixed using the gauge freedom* in such a way that the Schwarzschild solution and flat Minkowski background are uniquely identified and directly seen. ### Identification of the Schwarzschild black hole Specific geometry can be identified by the scalar invariants , with , . 
In particular, the Bach invariant evaluated at ${r=r_0}$ is $$\begin{aligned} & B_{ab}\, B^{ab} (r_0) = \frac{1}{72\,a_0^8}\,\big[ (\B_1)^2 + 2(\B_1+\B_2)^2\big]\,,\nonumber\\ & \hbox{where}\quad \B_1 (r_0)= 24 c_0c_4 \,,\qquad \B_2 (r_0)= 2(3 c_1 c_3 - c_2^2 + 1)\,. \label{[0,0]_BachInvariant}\end{aligned}$$ *Vanishing of the Bach tensor* (${B_{ab}=0 \Leftrightarrow \B_1=0=\B_2}$), which uniquely identifies the Schwarzschild solution, thus requires ${c_4=0}$ and ${3 c_1 c_3 - c_2^2 + 1 =0}$. In combination with , , this implies two necessary conditions c\_1=-(1+3c\_0),c\_2=2+3c\_0 \[\[0,0\]BachzeroA\] that only depend on the fraction ${a_1/a_0}$ and $c_0$. Interestingly, for such a choice of parameters the recurrent relations , give a very simple complete solution a\_i=a\_0()\^i i 0 ,c\_3=-(1+c\_0), c\_i=0 i 4. The first sequence clearly yields a geometrical series, while the second series is truncated to the 3rd-order polynomial. Thus the metric functions take the closed form $$\begin{aligned} \Omega(r) {\!\!\!& = &\!\!\!}a_0 \,\sum_{i=0}^\infty \,\Big(\frac{a_1}{a_0}\Delta\Big)^i =\frac{a_0^2}{a_0-a_1\Delta}=\frac{a_0^2}{(a_0+a_1r_0)-a_1r}\,, \label{[0,0]Omega}\\ \mathcal{H}(r) {\!\!\!& = &\!\!\!}c_0+c_1(r-r_0)+c_2(r-r_0)^2+c_3(r-r_0)^3 \,. \label{[0,0]H0}\end{aligned}$$ Using the gauge freedom , the most convenient choice a\_0=-,a\_1= \[\[0,0\]\_a0a1\] can always be made, so that the metric functions reduce to =(r) = -, (r) = (r-r\_0)+r\^3 . \[\[0,0\]\_Schw\] Notice that this function $\mathcal{H}$ can be rewritten as (r) = -r\^2+ = (r-r\_h) , r\_h . \[\[0,0\]\_SchwSHIFT\] This is exactly the standard form of the Schwarzschild solution, with the *black hole horizon located at* $r_h$ (clearly the root of $\H$). Thus the constant $c_0$ is uniquely determined in terms of the physical/geometrical parameter $r_h$ (the horizon) and an arbitrary parameter $r_0$ (entering the expansion variable ${\Delta=r-r_0}$) as c\_0 r\_0\^2,-1,r\_0 r\_h . \[\[0,0\]\_DEF c0\] Thus we have proven that all solutions in the class ${[n,p]=[0,0]}$ *with vanishing Bach tensor are equivalent to the Schwarzschild black hole solution*, as also identified in the classes ${[n,p]=[0,1]}$ and ${[n,p]=[-1,2]}$, see expressions and , respectively. The *main difference* is that in the class ${[n,p]=[0,1]}$, it is possible (and, in fact, necessary) to choose the expansion parameter $r_0$ equal to the horizon $r_h$, see , naturally allowing to expand the solution around the black hole horizon, while in the present case of the class ${[n,p]=[0,0]}$, *such a choice is forbidden* ($r_0$ is *not* the root of $\mathcal{H}$). Indeed, for the choice ${r_0=r_h}$, the expression would lead to ${c_0=0}$ which is not allowed. Otherwise the constant $r_0$, determining the initial position around which the solution is expanded, can be chosen *arbitrarily*. These conclusions are consistent with the behavior of the Weyl curvature invariant at $r_0$, C\_[abcd]{} C\^[abcd]{}(r\_0) = 12 = 48 m\^2r\_0\^6 , where we have used the conditions and the Schwarzschild horizon position ${r_h=-1/(2m)}$. This invariant value at the horizon agrees with . For ${m=0}$, flat Minkowski background is obtained, corresponding to ${c_2=-1}$, that is ${c_0=-r_0^2}$, in which case ${\mathcal{H}(r) = -r^2}$, and there is no horizon. 
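This identification can be verified directly. With the gauge choice ${a_0=-1/r_0}$, ${a_1=1/r_0^2}$ and the Bach-zero coefficients written in the gauge-fixed form ${c_1=(1+3\gamma)\,r_0}$, ${c_2=2+3\gamma}$, ${c_3=(1+\gamma)/r_0}$, the following short symbolic sketch (added by us as a consistency check) confirms that the cubic $\mathcal{H}$ coincides with the Schwarzschild form ${-r^2+r^3/r_h}$, where ${r_h=r_0/(1+\gamma)}$, and that the resummed geometric series for $\Omega$ gives ${-1/r}$:

```python
# Sketch (our own symbolic consistency check of the Schwarzschild identification
# in the class [0,0]): the Bach-zero cubic H reduces to -r^2 + r^3/r_h with
# r_h = r_0/(1+gamma), and the resummed series for Omega gives -1/r.
import sympy as sp

r, r0, gamma = sp.symbols('r r_0 gamma')
Delta = r - r0
a0, a1 = -1/r0, 1/r0**2
c0, c1, c2, c3 = gamma*r0**2, (1 + 3*gamma)*r0, 2 + 3*gamma, (1 + gamma)/r0

H = c0 + c1*Delta + c2*Delta**2 + c3*Delta**3
r_h = r0/(1 + gamma)
print(sp.simplify(H - (-r**2 + r**3/r_h)))    # 0
Omega = a0**2/(a0 - a1*Delta)                 # resummed geometric series
print(sp.simplify(Omega + 1/r))               # 0
```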
### More general black hole solutions with nontrivial Bach tensor Returning to the generic case – in the class ${[n,p]=[0,0]}$ with nonvanishing Bach tensor, it is now necessary to *introduce two distinct Bach parameters $b_1$ and $b_2$*, corresponding to the *two components $\B_1(r_0)$ and $\B_2(r_0)$* of the Bach tensor and , respectively, evaluated at $r_0$. They enter via the coefficients $c_4$ and $c_3$, which are expressed in terms of the two remaining initial parameters $c_1$ and $c_2$ using and . For ${B_{ab}=0}$, they take the form , i.e., with the gauge and fixing , ${c_1=(1+3\gamma)\,r_0}$ and ${c_2=2+3\gamma}$. It turns out to be useful to define two dimensionless Bach parameters $b_1$ and $b_2$ via the relations c\_1 = (1+3+b\_1-b\_2)r\_0 ,c\_2 = 2+3-3b\_2 , \[Bach\_c1c2\] that is b\_1 (-1-6-c\_2+3c\_1/r\_0) , b\_2 (2+3-c\_2) . \[Bach\_b1b2\] Then $b_1$ and $b_2$ are *directly proportional* to the two Bach tensor components $\B_1(r_0)$ and $\B_2(r_0)$, b\_1=kr\_0\^2\_1(r\_0) ,b\_2=kr\_0\^2(\_1(r\_0)+\_2(r\_0)), \[\[0,0\]\_BachInvariantb1b2B1B2\] and the Bach invariant at $r_0$ is simply expressed as B\_[ab]{} B\^[ab]{} (r\_0)= ( b\_1\^2 + 2 b\_2\^2) . \[\[0,0\]\_BachInvariantb1b2\] With the parametrization by $b_1$, $b_2$ introduced in , assuming again the natural gauge and fixing , the coefficients $a_i$, $c_i$ of the explicit solution – are then given as $$\begin{aligned} & a_0 = -\frac{1}{r_0}\,,\qquad a_1 =\frac{1}{r_0^2}\,,\qquad a_2 = -\frac{1}{r_0^3}-\frac{b_1}{2\gamma\, r_0^3}\,, \ldots\,,\label{[0,0]_expansionaINFa} \\ & c_0 = \gamma\,r_0^2 \,,\qquad c_1 = (1+3\gamma)\,r_0+(b_1-b_2)\,r_0 \,,\qquad c_2 = 2+3\gamma-3b_2 \,,\nonumber\\ & c_3 = \frac{(1+\gamma)(1+3\gamma)-2(2+3\gamma)b_2+3b_2^2+(b_2-b_1)/(2kr_0^2)}{(1+3\gamma+b_1-b_2)\,r_0} \,,\qquad c_4 = \frac{b_1}{8k\gamma r_0^4}\,, \ldots\,. \label{[0,0]_expansioncINFc}\end{aligned}$$ For ${b_1=0=b_2}$, we immediately recover the Schwarzschild solution , that is . In a generic case, the complete solution can be understood as the Schwarzschild black hole “background” modified by a nonzero Bach tensor, encoded in the terms that are proportional to (powers of) the dimensionless Bach parameters $b_1$ and $b_2$. The expansion of this full solution takes the explicit form , . ### Identification of the Schwa–Bach black hole solutions ${[0,1]}$ in the class ${[0,0]}$ Now a natural question arises about the explicit relation between the form , and the form , of the family of Schwarzschild–Bach black holes. The problem is that we cannot simply express the single Bach parameter $b$ in terms of the two parameters ${b_1, b_2}$. The reason is that $b$ determines the value of the Bach tensor *at the horizon* $r_h$, namely \_1(r\_h) = 0, \_2(r\_h) = -b , \[bonhorizonN\] while $b_1$ and $b_2$ determine its two independent values *at any given* $r_0$ \_1(r\_0) = b\_1,\_2(r\_0) = (b\_2 - b\_1), \[\[0,0\]\_BachInvariantb1b2B1B2N\] see and , respectively. Since the functions ${\B_1(r),\, \B_2(r)}$ are complicated, the relations between the constants $b$ and ${b_1,\, b_2}$ are obscured. However, this problem can be circumvent by the following procedure. In order to explicitly identify the Schwa–Bach black hole solution , , expressed around the horizon $r_h$ in the class ${[0,1]}$, within the generic class ${[0,0]}$ given by , we just have to determine its five free parameters ${a_0,\, a_1,\, c_0,\, c_1,\, c_2}$ properly. 
Instead of considering , , we can simply *evaluate the functions* , (and their derivatives) at ${r=r_0}$, and then compare them with the Taylor expansions (and their derivatives) evaluated at ${r=r_0}$, obtaining[^4] $$\begin{aligned} a_0 & = -\frac{1}{r_0}-\frac{b}{r_h}\sum_{i=1}^\infty \alpha_i\,\Big(1-\frac{r_0}{r_h}\Big)^i \,, \label{[0,0]a0}\\ a_1 & = \frac{1}{r_0^2}+\frac{b}{r_h^2}\sum_{i=1}^\infty i\,\alpha_i\,\Big(1-\frac{r_0}{r_h}\Big)^{i-1} \,, \\ c_0 & = (r_0-r_h)\bigg[\frac{r_0^2}{r_h}+3b\,r_h\sum_{i=1}^\infty \gamma_i\,\Big(\frac{r_0}{r_h}-1\Big)^i\, \bigg] \,, \\ c_1 & = (3r_0-2r_h)\frac{r_0}{r_h}+3b\,r_h\sum_{i=1}^\infty (i+1)\, \gamma_i\,\Big(\frac{r_0}{r_h}-1\Big)^i \,, \\ c_2 & = (3r_0-r_h)\frac{1}{r_h}+\frac{3}{2}b\,\sum_{i=1}^\infty i(i+1)\,\gamma_i\,\Big(\frac{r_0}{r_h}-1\Big)^{i-1} \label{[0,0]c2}\,.\end{aligned}$$ Then using the recurrent relations –, we are able to express the Schwarzschild–Bach black holes using the complete expansion around *any value* $r_0$ and just a *single Bach parameter* $b$ which determines the value of the Bach tensor at the horizon $r_h$. When ${b=0}$, the coefficients $a_i$ form a geometrical series, and the metric functions simplify to , which is again the Schwarzschild solution . Both the classes ${[0,0]}$ and ${[0,1]}$ with ${B_{ab}=0}$ thus reduce to the Schwarzschild black hole. The difference is that in the class ${[0,1]}$ the radial distance parameter $r_0$ is *equal* to $r_h$, while ${r_0 \ne r_h}$ can be chosen *arbitrarily* in the class ${[0,0]}$. ### Formal limit ${r_0 \to r_h}$ Let us consider a “consistency check” between the two series expressing the Schwa–Bach black hole solution, namely , in the class ${[0,1]}$ and , in the class ${[0,0]}$. To this end, let us denote temporarily the coefficients in the class ${[0,0]}$ by ${\hat c}_i$ and ${\hat a}_i$. The limit ${r_0 \to r_h}$ in – can be trivially performed, just by setting ${r_0=r_h}$, leading to the relations \_0 [& = &]{}- a\_0,\_1 = (1+b) a\_1,\ [c]{}\_0 [& = &]{}0,\_1 = r\_h c\_0,\_2 = 2+3b c\_1, where equation and the first relations in , have also been employed. By comparing and it is also seen that ${\hat c}_{j+1}$ satisfies the same recurrent relation as $c_j$, so that the functions $\H$ agree. Moreover, from the relation it follows that the condition ${\hat c}_0=0$ requires 0=a\_[l-1]{} +l\^2 [c]{}\_1[a]{}\_l +\^[l+1]{}\_[i=2]{}c\_ia\_[l+1-i]{}. This implies \_l =- , which (with the identification ${\hat c_{i+1}=c_i}$) is equivalent to the recurrent expression for $a_{l+1}$, so that the functions $\Omega$ also agree. In other words, in the limit ${r_0\to r_h}$ we obtain \_0 0,   \_[j+1]{} c\_j,  \_j a\_j  j 0, demonstrating the consistency of the two expressions for the Schwa–Bach black holes in these two classes of solutions. Bachian singularity in the class ${[n,p]=[1,0]}$ {#SchwaBach_[n,p]=[1,0]} ------------------------------------------------ This last possible class of spherically symmetric vacuum solutions represents spacetimes which are *not* black holes with horizon localized at $r_0$. Instead, it seems to be a specific family containing a naked singularity with ${B_{ab} \ne 0}$. The key equation for ${[n,p]=[1,0]}$, relabeling ${l \to l-3}$, gives c\_[l+1]{}=\^[l-3]{}\_[i=0]{} a\_i a\_[l-3-i]{}(l-2-i)(l-5-3i)  l3, \[(2,2)initcondc\] expressing $c_{l+1}$, starting from ${c_4}$. Equation in the lowest order ${l=0}$ implies a\_1=-, \[(2,2)initcond3\] and in higher orders a\_[l+1]{}=  l1. 
\[(2,2)initconda\] Finally, the field equation in the lowest nontrivial order ${l=0}$ gives the condition c\_3=. \[(2,2)initcond2\] All coefficients $a_{l+1}$, $c_{l+1}$ are obtained by applying the recurrent relations , . This yields an explicit solution (r) = (r-r\_0),(r) = c\_0 + \_[i=1]{}\^c\_i (r-r\_0)\^[i]{}, \[rozvoj\[1,0\]\] where a\_2 [& = &]{}-,\ a\_3 [& = &]{}-, …,\ c\_4 [& = &]{}-, c\_5 = , …, and ${a_0,\, c_0,\, c_1,\, c_2}$ are *four initial parameters* (apart from $r_0$), but not all of them are independent. Due to the gauge freedom , we can set, for example, ${a_0=1}$ and also ${r_0=0}$. To determine the main geometric properties we employ the scalar invariants , , which read B\_[ab]{} B\^[ab]{}(r) = +…,C\_[abcd]{} C\^[abcd]{}(r) = +….\[(2,2)BInv2\] *The Bach tensor $B_{ab}$ is thus nonvanishing near* $r_0$. And since ${R_{ab}=4k\, B_{ab} \ne 0}$, this class of solutions *does not contain Ricci-flat subcases*. The Bach invariant always diverges at ${r=r_0}$, and there is also a *Weyl curvature singularity* at ${r=r_0}$ (maybe unless ${c_2=-1}$). Moreover, for the expressions – in the limit ${r\rightarrow r_0}$ behave as && [|r]{}=(r) \~a\_0(r-r\_0) 0 , \[(2,2)horizon\]\ && h \~-c\_0[|r]{}\^2 0, f \~-a\_0\^2c\_0 ([|r]{})\^[-2]{} . \[(2,2)horizonC\] It shows a very specific and unusual behavior of the metric functions $f$ and $h$ close to the curvature singularity at ${{\bar r}=0 }$, in terms of the physical radial coordinate $\bar r$. This class ${[n,p]=[1,0]}$ of solutions corresponds to the family which has been identified in [@Stelle:1978; @LuPerkinsPopeStelle:2015b; @HoldomRen:2017] as ${(s,t)=(2,2)}$, and nicknamed *(2,2)-family* in [@PerkinsPhD], see Section \[summary\] for more details. Discussion of solutions using the expansion in powers of $r^{-1}$ {#expansiont_INF} ================================================================= By inserting the series (\[rozvojomegINF\]), (\[rozvojcalHINF\]), that is $$\Omega(r) = r^N \sum_{i=0}^\infty A_i \,r^{-i}\,, \qquad \mathcal{H}(r) = r^P \,\sum_{i=0}^\infty C_i \,r^{-i}\,,\label{rozvojomagAcalHINF}$$ into the key field equation (\[Eq1\]), we obtain the relation $$\begin{aligned} &\sum_{l=-2N+2}^{\infty}r^{-l}\sum^{l+2N-2}_{i=0}A_i\,A_{l-i+2N-2}\,(l-i+N-2)(l-3i+3N-1) \nonumber \\ & \hspace{45.0mm}=\tfrac{1}{3}k\sum^{\infty}_{l=-P+4}r^{-l}\,C_{l+P-4}\,(l-4)(l-3)(l-2)(l-1) \,. \label{KeyEq1INF}\end{aligned}$$ The second field equation (\[Eq2\]) puts further constraints, namely $$\begin{aligned} &\sum_{l=-2N-P+2}^{\infty}r^{-l}\sum^{l+2N+P-2}_{j=0}\sum^{j}_{i=0}A_i\,A_{j-i}\,C_{l-j+2N+P-2} \,(j-i-N)(l-j+3i-N-2) \nonumber \\ & \hspace{10.0mm} +\sum_{l=-2N}^{\infty}r^{-l}\sum^{l+2N}_{i=0}A_i\,A_{l-i+2N} \nonumber \\ & = \tfrac{1}{3}k \bigg[2+\sum^{\infty}_{l=-2P+4}r^{-l}\sum^{l+2P-4}_{i=0}C_{i}\,C_{l-i+2P-4}\,(i-P)(l-i+P-4)(l-i+P-3) (l-\tfrac{3}{2}i+\tfrac{3}{2}P-\tfrac{5}{2})\bigg]\,. \label{KeyEq2INF}\end{aligned}$$ The supplementry condition following from the “trace equation” (\[trace\]) reads $$\begin{aligned} &\sum_{l=-N-P+2}^{\infty}r^{-l}\sum^{l+N+P-2}_{i=0}C_i\,A_{l-i+N+P-2}\,\big[(l-i+P-2)(l-1)+\tfrac{1}{6}(i-P)(i-P+1)\big] \nonumber \\ & \hspace{80mm}=-\tfrac{1}{3}\sum^{\infty}_{l=-N}r^{-l}\,A_{l+N} \,. \label{KeyEq3INF}\end{aligned}$$ By comparing the corresponding coefficients of the same powers of $r^{-l}$ on both sides of the relation (\[KeyEq1INF\]), we can express the coefficients $C_j$ in terms of $A_j$s. 
Moreover, the *terms with the lowest order* imply that we have to discuss *three distinct cases*, namely: - **Case I**$^\infty$: ${\ \ -2N+2<-P+4}$,  i.e., ${\ P<2N+2}$, - **Case II**$^\infty$: ${\ -2N+2>-P+4}$,  i.e., ${\ P>2N+2}$, - **Case III**$^\infty$: ${-2N+2=-P+4}$,  i.e., ${\ P=2N+2}$. Let us derive all possible solutions in these cases. **Case I**$^\infty$ ------------------- In the case, ${-2N+2<-P+4}$, the *highest* order in the key equation (\[KeyEq1INF\]) is on the *left hand* side, namely $r^{-l}$ with ${-l=2N-2}$, which yields the condition $$N(N+1)=0 \,. \label{KeyEq1CaseIINF}$$ The only two admitted cases are ${N=0}$ and ${N=-1}$. The highest orders on both sides of equation (\[KeyEq3INF\]) are $$\big[6N(N+P-1)+P(P-1)\big]C_0\,r^{N+P-2}+\cdots=-2\,r^N+\cdots \,. \label{KeyEq3CaseIINF}$$ For ${N=0}$, these powers are ${r^{P-2}}$ and ${r^0}$, respectively, but ${P-2<2N=0}$ by the definition of Case I$^\infty$. The highest order ${0=-2r^0}$ thus leads to a contradiction. Similarly, for the second possibility ${N=-1}$, the powers are ${r^{P-3}}$ and ${r^{-1}}$, respectively, but ${P-3<2N-1=-3<-1}$. The highest order is thus ${0=-2r^{-1}}$, which is again a contradiction. **To summarize**: There are no possible solutions in Case I$^\infty$. **Case II**$^\infty$ -------------------- In this case, ${-2N+2>-P+4}$, so that the *highest* order in the key equation (\[KeyEq1INF\]) is on the *right hand* side, namely $r^{-l}$ with ${l=-P+4}$, which gives the condition $$P(P-1)(P-2)(P-3)=0 \,. \label{KeyEq1CaseIIINF}$$ Thus there are four possible cases, namely ${P=0}$, ${P=1}$, ${P=2}$, and ${P=3}$. Equation has the highest orders on both sides as given by equation , that is $$\begin{aligned} \hbox{for}\quad P=0:\qquad & \big[6N(N-1)\big]C_0\,r^{N-2}+\cdots=-2\,r^N+\cdots&& \hbox{not compatible}\,,\\ \hbox{for}\quad P=1:\qquad & \big[6N^2\big]C_0\,r^{N-1}+\cdots=-2\,r^N+\cdots&& \hbox{not compatible}\,,\\ \hbox{for}\quad P=2:\qquad & \big[6N(N+1)+2\big]C_0\,r^N+\cdots=-2\,r^N+\cdots&& (3N^2+3N+1)C_0=-1\,,\label{contrp=2c0INF} \\ \hbox{for}\quad P=3:\qquad & \big[6N(N+2)+6\big]C_0\,r^{N+1}+\cdots=-2\,r^N+\cdots&& \hbox{necessarily}\quad N=-1\,.\end{aligned}$$ The highest orders of all terms in equation for the case ${P=2}$, implying ${N<0}$, are 3A\_0\^2\[N(3N+2)C\_0+1\]r\^[2N]{} +2k(C\_0\^2 -1) + =0 , \[eq2rozvoj0omegINF\] which requires ${(3N^2+2N)C_0=-1}$. Together with constraint this implies ${N=-1}$, ${C_0=-1}$. **To summarize**: The only possible two classes of solutions in Case II$^\infty$ are given by \[N,P\]=\[-1,3\]\^, \[N,P\]=\[-1,2\]\^. \[CaseII\_summaryINF\] **Case III**$^\infty$ --------------------- Now, ${-2N+2=-P+4}$, that is ${N=-1+P/2}$ and ${P=2N+2}$. In such a case, the *highest* order in the key equation (\[KeyEq1INF\]) is *on both sides*, namely $r^{-l}$ with ${l=2-2N}$. This implies the condition P(P-2)=0. \[KeyEq1CaseIIIINF\] There are three subcases to be considered, namely ${P=0}$, ${P=2}$, and ${3A_0^2=-4kC_0 (P-1)(P-3)}$ with ${P\not= 0,1,2,3}$. This corresponds to ${N=-1}$, ${N=0}$, and also ${3A_0^2=-4kC_0(4N^2-1)}$ with $N\not= -1, -1/2, 0, 1/2$, respectively. The leading orders of the trace equation on both sides are 2(11N\^2+6N+1) C\_0r\^[3N]{} +[& = &]{}-2 r\^N+. 
\[eqtr00omegIIIINF\] Consequently, we obtain $$\begin{aligned} &\hbox{for}\quad N=-1\,,\ P=0:\quad & 12C_0\,r^{-3}+\cdots=-2\,r^{-1}+\cdots& \qquad\hbox{not compatible}\,,\label{contrp=2c0IIIaINF}\\ &\hbox{for}\quad N=0\,,\ P=2:\quad & 2C_0+\cdots=-2+\cdots&\qquad C_0=-1\,,\label{contrp=2c0IIIbINF}\\ &\hbox{for}\quad 3A_0^2=4kC_0(1-4N^2): & (11N^2+6N+1)C_0+\cdots=0&\qquad \hbox{not compatible}\,. \label{contrp=2c0IIIcINF}\end{aligned}$$ The incompatibility in the case is due to the fact that ${11N^2+6N+1}$ is always positive. In the case , we employ the field equation , which for ${N=0, P=2}$ requires ${3A_0^2+2k(C_0^2-1) = 0}$. Since ${C_0=-1}$ would imply ${A_0=0}$, we also end up in a contradiction. **To summarize**: There are no possible solutions in Case III$^\infty$. Description and study of all possible solutions in powers of $r^{-1}$ {#description_INF} ===================================================================== Now we derive and investigate spherically symmetric solutions in the domain as ${r \to \infty}$ by completely solving the equations (\[KeyEq1INF\]), (\[KeyEq2INF\]), and their consequence (\[KeyEq3INF\]). As it has been proven in previous Section \[expansiont\_INF\], there are only two distinct cases to be discussed.\ Schwarzschild–Bach black holes in the class ${[-1,3]^\infty}$: near the singularity {#Schw_[N,P]=[-1,3]} ----------------------------------------------------------------------------------- In the class given by ${N=-1}$, ${P=3}$ in the expansion , in *negative* powers of $r$, the only possible black hole solutions are $$\begin{aligned} \Omega(r) {\!\!\!& = &\!\!\!}-\frac{1}{r} + \frac{B}{r}\,\bigg(\frac{2}{9}\,\frac{r_h^3}{r^3} +\frac{1}{6}\,\frac{r_h^4}{r^4} +\frac{2}{15}\,\frac{r_h^5}{r^5} +\ldots\bigg)\,, \label{IIdOmegaFULL}\\ \mathcal{H}(r) {\!\!\!& = &\!\!\!}(r-r_h)\frac{r^2}{r_h} + B\,\bigg( r_h^2-\frac{1}{90k}\,\frac{r_h^3 }{r^3} -\frac{1}{140k}\,\frac{r_h^4}{r^4}-\frac{1}{210k}\,\frac{r_h^5}{r^5}+ \ldots \bigg) \,. \label{IIdH0FULL}\end{aligned}$$ These solutions represent the *class of Schwarzschild–Bach black holes* in Quadratic Gravity/the Einstein–Weyl theory. By setting ${B=0}$, the *Schwarzschild solution* is again obtained, with the *horizon located at* $r_h$. In the limit ${r\to\infty}$, the relation implies ${{\bar r}=\Omega(r)\sim -1/r \to 0 }$. In such a limit, *the curvature singularity at ${\bar r = 0}$ is approached*, where ${\H \to \infty}$. Moreover, from the relations it follows that ${h({\bar r})\sim 1/( r_h \, \bar r) \to \infty}$ and ${f({\bar r})\sim h({\bar r})}$. Thus both metric functions of *diverge exactly in the same way as for the Schwarzschild solution, independently of the Bach parameter* $B$. Let us derive this class of solutions. The key equation , relabeling ${l \to l+2 }$, implies C\_[l+1]{}=\^[l-2]{}\_[i=0]{} A\_iA\_[l-2-i]{}(l-1-i)(l-2-3i)  l3, \[nonSchwinitcondcINF\] which gives all $C_{l+1}$ in terms of ${A_0,\ldots, A_{l-2}}$, starting form ${C_4=0}$. The trace equation yields A\_[l]{}=  l1, \[nonSchwinitcondaINF\] which expresses all $A_{l}$ in terms of ${A_0,\ldots, A_{l-1}}$ and ${C_1,\ldots, C_{l}}$. Finally, the second field equation in the lowest nontrivial order ${l=0}$ gives the additional constraint C\_2=. \[nonSchwinitcond2INF\] Therefore, in this case there are *four free parameters*, namely ${A_0, C_0, C_1, C_3}$. Using we obtain ${C_2}$, and then $A_l$, $C_{l+3}$ for all ${l\ge 1}$ by the application of the recurrent relations , . 
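As a quick numerical illustration of these expansions (a sketch with made-up parameter values that are not taken from the paper's figures: we choose ${r_h=-1}$, ${k=0.5}$ and an arbitrary Bach parameter ${B=0.36}$), one can evaluate the truncated series and check that setting ${B=0}$ reproduces the Schwarzschild expressions, while the $B$-dependent corrections decay quickly for large $|r|$:

```python
# Minimal sketch (not from the paper): numerical evaluation of the truncated
# expansions (IIdOmegaFULL) and (IIdH0FULL).  The values r_h, k, B are illustrative.

r_h, k, B = -1.0, 0.5, 0.36

def Omega(r, B=B):
    """Truncated Omega(r); for B = 0 it is exactly the Schwarzschild -1/r."""
    tail = (2.0/9.0)*(r_h/r)**3 + (1.0/6.0)*(r_h/r)**4 + (2.0/15.0)*(r_h/r)**5
    return -1.0/r + (B/r)*tail

def H(r, B=B):
    """Truncated H(r); for B = 0 it is the Schwarzschild (r - r_h) r^2 / r_h."""
    tail = r_h**2 - r_h**3/(90*k*r**3) - r_h**4/(140*k*r**4) - r_h**5/(210*k*r**5)
    return (r - r_h)*r**2/r_h + B*tail

# As |r| grows, bar_r = Omega(r) -> 0 while H(r) grows without bound, in agreement
# with the limits discussed above; the Bach corrections become negligible.
for r in (-2.0, -10.0, -100.0):
    print(r, Omega(r), Omega(r, 0.0), H(r), H(r, 0.0))
```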
### Identification of the Schwarzschild black hole The scalar invariants , for , now take the form B\_[ab]{} B\^[ab]{}(r ) = (45 C\_6)\^2 , C\_[abcd]{} C\^[abcd]{}(r ) \~12 r\^6 . \[BInv2INF\] Since ${A_0, C_0}$ are nonzero by definition, the necessary condition for the *Bach tensor to vanish* (which geometrically identifies the classical Schwarzschild solution) is C\_6 = 0. \[C\_6=0\] Interestingly, for such a setting, the expansion coefficients simplify enormously to && A\_i=A\_0(-)\^i i 0 ,\ && C\_2=, C\_3=, C\_i=0 i 4. The first sequence is a geometrical series, while the second series is truncated to the 3rd-order polynomial. Thus the metric functions can be written in the closed form $$\begin{aligned} \Omega(r) {\!\!\!& = &\!\!\!}\frac{A_0}{r} \,\sum_{i=0}^\infty \,\Big(\!\!-\frac{C_1+1}{3C_0\,r}\Big)^i =\frac{A_0}{r+(C_1+1)/(3C_0)}\,, \label{IIdOmega}\\ \mathcal{H}(r) {\!\!\!& = &\!\!\!}C_0\,r^3+C_1\,r^2+\frac{C_1^2-1}{3C_0}\, r+\frac{(C_1+1)^2(C_1-2)}{27 C_0^2 } \,. \label{IIdH0}\end{aligned}$$ In view of , we are free to chose the gauge A\_0=-1,C\_1=-1, \[IId\_a0\] so that the metric functions become =(r) = -, (r) = -r\^2+C\_0r\^3 . \[IIdH0Schw\] This is exactly the *Schwarzschild black hole metric* in the form and . It also identifies the physical meaning of the coefficient $C_0$ as $$C_0 = \frac{1}{r_h} \,, \label{IIdH0SchwC0}$$ where $r_h$ determines the *horizon position*, the root of $\mathcal{H}$ given by . Of course, the Schwarzschild horizon is given by ${r_h=-1/(2m)}$, i.e., ${C_0=-2m}$. All free parameters of such solution are thus fixed and fully determined. ### More general Schwarzschild–Bach black holes For the physical interpretation of the more general solutions in this family, it is convenient to introduce the *Bach parameter $B$ proportional to ${C_6}$* entering , which for the gauge choice reads ${ C_6 = -C_3 / (90k C_0)}$. We also naturally require $B$ to be a *dimensionless* parameter, so that the best choice seems to be B C\_0\^2C\_3 = . \[b\_definiceINF\] With such $B$ as the key parameter in the expansions , and the same natural gauge , the recurrent relations , yield an explicit solution of the field equations in a simple form $$\begin{aligned} & A_0 = -1\,,\qquad A_1 = 0\,,\qquad A_2 = 0\,,\nonumber\\ & A_3 = \frac{2}{9}\,r_h^3\,B \,, \qquad A_4 = \frac{1}{6}\,r_h^4\,B \,, \qquad A_5 = \frac{2}{15}\,r_h^5\,B \,, \nonumber\\ & A_6 = \frac{1}{9}\,r_h^6\,\Big( 1-\frac{7}{360kr_h^2}-\frac{10}{9}\,B \Big)\,B\,, \ldots\,,\label{IId_expansionaINFa} \\ & C_0 = r_h^{-1} \,,\qquad C_1 = -1 \,,\qquad C_2 = 0 \,,\nonumber\\ & C_3 = r_h^2\,B \,,\qquad C_4 = 0 \,,\qquad C_5 = 0 \,,\nonumber\\ & C_6 = -\frac{1}{90k}\,r_h^3\,B \,,\qquad C_7 = -\frac{1}{140k}\,r_h^4\,B \,,\qquad C_8 = -\frac{1}{210k}\,r_h^5\,B \,, \ldots\,. \label{IId_expansioncINFc}\end{aligned}$$ This gives the explicit expansion , . The corresponding scalar invariants at ${\bar r = 0}$ are B\_[ab]{} B\^[ab]{}(r ) = B\^2 ,C\_[abcd]{} C\^[abcd]{} (r ) \~r\^6 , \[BInv2INFfinal\] which can be compared with the invariants evaluated at the horizon $\bar r_h$ B\_[ab]{}B\^[ab]{}(r\_h) = b\^2 ,C\_[abcd]{} C\^[abcd]{}(r\_h) = 12r\_h\^4(1+b)\^2 , \[BInv2final\] obtained previously for the class ${[n,p]=[0,1]}$ of the Schwarzschild–Bach black holes. There is a striking similarity between the two expressions for ${B_{ab}\,B^{ab}}$, and thus we could be inclined to directly identify the Bach parameter $B$ with the parameter $b$. 
However, it should again be emphasized that $B$ determines the value of the Bach invariant *at the Weyl curvature singularity* ${\bar r=0}$, while $b$ determines its value *at the horizon* $\bar r_h$. And these values are, in general, distinct.

Bachian vacuum in the class ${[N,P]=[-1,2]^\infty}$ {#Schw_[N,P]=[-1,2]}
---------------------------------------------------

Finally, it remains to analyze the second possibility in the Case II$^\infty$. For ${N=-1}$, ${P=2}$ the key equation , relabeling ${l \to l+2}$, gives C\_l=\^[l-2]{}\_[i=0]{} A\_i A\_[l-2-i]{}(l-1-i)(l-2-3i)  l3. \[\[-1,2\]initcondc\] Equation in its lowest orders ${l=1, 2 }$ puts the constraints

$$A_1=\tfrac{1}{2}A_0C_1 \,, \qquad C_0=-1\,,$$

and for higher $l$ implies A\_[l-1]{}= \^[l-1]{}\_[i=1]{} C\_iA\_[l-1-i]{}  l3. \[\[-1,2\]initconda\] The equation gives no additional constraint. There are thus *three free parameters*, namely ${A_0, C_1, C_2}$, and all other coefficients are determined by the relations , , starting as

$$\begin{aligned} & A_2 = \frac{A_0}{3}(C_1^2 + C_2)\,,\qquad A_3 = \frac{ A_0 }{4}C_1 (C_1^2 + 2C_2)\,,\nonumber\\ & A_4 = \frac{A_0 }{5}\Big(C_1^4 + 3 C_1^2 C_2 + C_2^2 +\frac{A_0^2}{192 k}(C_1^2 + 4 C_2)\Big)\,, \ldots\,, \label{[-1,2]_expansionaINFa} \\ & C_3 = 0 \,,\qquad C_4 = \frac{A_0^2 }{240 k}(C_1^2 + 4C_2) \,,\qquad C_5 = \frac{A_0^2}{240 k} C_1 (C_1^2 + 4C_2) \,,\nonumber\\ & C_6 = \frac{A_0^2 }{67200 k^2}\Big(3A_0^2+ 4k(59 C_1^2 + 26 C_2)\Big)(C_1^2 + 4C_2) \,, \ldots\,. \label{[-1,2]_expansioncINFc}\end{aligned}$$

### Identification of flat Minkowski space

Now, for very large $r$ the scalar invariants , behave as B\_[ab]{} B\^[ab]{}(r) = C\_4\^2 ,C\_[abcd]{} C\^[abcd]{}(r) \~C\_4\^2 . \[\[-1,2\]\_Inv\_INF\] Interestingly, they *remain finite*, so that for ${r\to\infty}$ there is *no physical singularity*. Moreover, for ${C_4 \ne0}$ they are nonzero. In fact, the necessary condition for both the Bach and Weyl tensor invariants to vanish is ${C_4=0}$, that is ${C_1^2 + 4C_2 =0}$. For such a choice, we obtain the relation ${C_2=-\frac{1}{4} C_1^2}$, and then all the coefficients , simplify enormously to ${A_i=A_0\,(\frac{1}{2}C_1)^i}$ for all $i$, and ${C_i=0}$ for all ${i\ge3}$. The metric functions thus reduce to (r) = \_[i=0]{}\^()\^i =, (r) = -(r-C\_1)\^2 . \[\[-1,2\]\_Omega\_H0\] Using the gauge freedom we can always set A\_0=-1,C\_1=0, \[\[-1,2\]\_a0\] and the functions take the trivial form =(r) = -, (r) = -r\^2 . \[\[-1,2\]\_solution\] In view of , , we conclude that the case ${C_4=0}$ gives the Schwarzschild metric with trivial value ${C_0=-2m=0}$ which is *just flat Minkowski space without any horizon* (formally ${r_h=\infty}$). Of course, for flat space, both the Bach and the Weyl tensor vanish everywhere.

### Bachian vacuum

Now, the complete class of solutions ${[N,P]=[-1,2]^\infty}$ can be naturally analyzed if we introduce the *Bach parameter $B_v$ proportional to ${C_4}$* because, due to , such solutions admit general Bach and Weyl tensors. With the same gauge , we observe from that ${ C_4 = C_2/(60 k)}$, so that it is more convenient to choose the equivalent parameter $C_2$ instead. The simplest choice is B\_v C\_2 .
\[beta\_definiceINF\] With the only remaining parameter $B_v$ (in this case it is not dimensionless), the coefficients , simplify to $$\begin{aligned} & A_0 = -1\,,\quad A_1 = 0\,,\quad A_2 = -\frac{1}{3}\,B_v\,,\quad A_3 = 0\,, \nonumber\\ & A_4 = -\frac{1}{5}\,\Big(\frac{1}{48k}+B_v \Big)\,B_v \,, \quad A_5 = 0 \,, \ldots \label{[-1,2]_expansionaINFaa} \\ & C_0 = -1 \,,\quad C_1 = 0 \,,\quad C_2 = B_v \,,\quad C_3 = 0 \,,\nonumber\\ & C_4 = \frac{1}{60k}\,B_v \,,\quad C_5 = 0 \,,\quad C_6 = \frac{1}{700k}\,\Big(\frac{1}{8k}+\frac{13}{3}B_v \Big)\,B_v \,, \ldots \label{[-1,2]_expansioncINFcc}\end{aligned}$$ yielding an explicit solution $$\begin{aligned} \Omega(r) {\!\!\!& = &\!\!\!}-\frac{1}{r} - B_v\,\bigg(\frac{1}{3\,r^3} +\frac{1}{5\,r^5}\,\Big(\frac{1}{48k}+B_v \Big) +\ldots\bigg)\,, \label{[-1,2]_OmegaFULL}\\ \mathcal{H}(r) {\!\!\!& = &\!\!\!}-r^2 + B_v\,\bigg( 1+\frac{1}{60k\,r^2} +\frac{1}{700k\,r^4}\,\Big(\frac{1}{8k}+\frac{13}{3}B_v \Big) + \ldots \bigg) \,. \label{[-1,2]_H0FULL}\end{aligned}$$ The corresponding scalar invariants now read B\_[ab]{} B\^[ab]{} (r)= B\_v\^2 , C\_[abcd]{} C\^[abcd]{} (r) \~ 0 . \[\[-1,2\]\_Inv\_INF\_beta\] Therefore, we may conclude that this class of metrics ${[N,P]=[-1,2]^\infty}$ can be understood as a *one-parameter Bachian generalization of flat space* (that is the limit of black hole solutions without mass and horizon) *with a nonzero Bach tensor whose magnitude is determined by the parameter $B_v$*, i.e., the “massless limit” of the previous class ${[N,P]=[-1,3]^\infty}$. Interestingly, in the limit ${r\to\infty}$, the expressions , now imply && [|r]{}=(r) \~-1/r 0 , \[\[-1,2\]A\]\ && h \~1, f \~1. \[\[-1,2\]C\] Both the metric functions $h$ and $f$ thus remain nonzero and finite, i.e., in this limit we are not approaching a horizon nor a singularity. In fact, for ${\bar r \to 0}$ the metric *becomes conformally flat*. Interestingly, the Bach invariants and are very similar. Consistency check of the limit $[-1,3]^\infty\,\rightarrow\, [-1,2]^\infty$ --------------------------------------------------------------------------- Let us consider a “consistency check” between the class of solutions $[-1,3]^\infty$, described by –, and the class $[-1,2]^\infty$, described by –, where the coefficients will now be denoted by hats. The transition from $[-1,3]^\infty$ to $[-1,2]^\infty$ requires C\_0 0,C\_i C\_[i-1]{},   i1,A\_i A\_i,  i0. The relation for ${l=1}$, that is $3C_0A_1=-A_0(1+C_1)$, in this limit leads to C\_1 -1,      C\_0=-1 , while the relations for $C_{l+1}$ and for $\hat C_l$ remain the same. Moreover, the relation for $A_l$, -l\^2C\_0A\_[l]{}=A\_[l-1]{} +\^[l]{}\_[i=1]{} C\_iA\_[l-i]{} l1, for $C_0=0$ leads to A\_[l-1]{} = \^[l-1]{}\_[i=1]{} C\_iA\_[l-1-i]{} l2, which is exactly and thus concludes the consistency check. Note that from the free parameters of the family $[-1,3]^\infty$, two parameters become determined, namely ${C_0\rightarrow0}$, ${C_1\rightarrow\hat C_0=-1}$, and one parameter ${C_2\rightarrow\hat C_1}$ becomes undetermined since $3C_0C_2=C_1^2-1\rightarrow 0$. Therefore, four free parameters ${A_0,C_0,C_1,C_3}$ of the $[-1,3]^\infty$ family reduce to three free parameters ${\hat A_0,\hat C_1, \hat C_2}$ of the $[-1,2]^\infty$ family. Summary and relations to previous results {#summary} ========================================= In this section, let us summarize all the distinct and explicit families of spherically symmetric vacuum spacetimes in QG, expressed both in powers of ${\Delta \equiv r-r_0}$ and $r^{-1}$. 
Moreover, we identify these families with solutions previously discussed in the literature. In particular, in [@Stelle:1978; @LuPerkinsPopeStelle:2015; @LuPerkinsPopeStelle:2015b], various classes of static spherically symmetric solutions to higher-derivative gravity equations were identified and denoted by the symbol $(s,t)$, using the *standard spherically symmetric form* . Such a classification was based on the powers $s$ and $t$ of the *leading terms* of a Laurent expansion of the two metric functions, namely[^5] f\^[-1]{}([|r]{}) [& = &]{}A([|r]{}) \~[|r]{}\^[s]{}, \[rcef2\]\ h([|r]{}) [& = &]{}B([|r]{}) \~[|r]{}\^[t]{} ,\[rceh2\] in the domain ${\bar r \to 0}$. It was shown in [@Stelle:1978; @LuPerkinsPopeStelle:2015b] that there are *three main solution families* corresponding to the following choices of ${(s,t)}$: (s,t) [& = &]{}(0,0)\_0, \[rodina1\]\ (s,t) [& = &]{}(1,-1)\_0, \[rodina2\]\ (s,t) [& = &]{}(2,2)\_0, \[rodina3\] where the subscript “$\,_0$” indicates the expansion around the origin ${\bar r =0}$. In addition, the following *three families* ${(w,t)}$ were identified in [@LuPerkinsPopeStelle:2015b; @PerkinsPhD] using a series expansion around a *finite* point ${{\bar r}\to {\bar r}_0 \ne0}$: (w,t)=(1,1)\_[[|r]{}\_0]{},\ (w,t)=(0,0)\_[[|r]{}\_0]{},\ (w,t)=(1,0)\_[[|r]{}\_0]{}, where w=-s , that is ${f \sim {\bar r}^{\,w}}$ and ${h \sim {\bar r}^{\,t}}$. The subscript “$\,_{{\bar r}_0}$” indicates the expansion around ${\bar r_0}$. In fact, *we have recovered all these families* of solutions in the present paper, and *we have also identified some additional families*. To find the specific mutual relations, first let us note that from the relation between the spherically symmetric radial coordinate $\bar r$ and the Kundt coordinate $r$, that is ${\bar r=\Omega(r)}$, it follows using and that - ${\bar r \rightarrow 0\,\,\,}$ for ${r\rightarrow r_0}$, ${n>0}$, and also for ${r\rightarrow \infty}$, ${N<0}$, - ${\bar r \rightarrow \bar r_0\,}$ for ${r\rightarrow r_0}$, ${n=0}$, and also for $r\rightarrow\ \infty$, ${N=0}$, - ${\bar r \rightarrow \infty}$ for ${r\rightarrow r_0}$, ${n<0}$, and also for $r\rightarrow\ \infty$, ${N>0}$. Now let us find a relation between the powers $(s,t)$ introduced by and , respectively, and the coefficients ${[n, p]}$ employed in this paper. They are the analogous leading powers of the two metric functions $\Omega$ and $\H$, respectively. For ${n\not=0}$, such a relation is found using the expressions with ${\bar{r} = \Omega(r)}$ and , for ${r \to r_0}$. It turns out that s = , t = 2+. \[st-np\] Analogously, using , , we obtain the relations s = , t = 2+ \[st-NP\] for the asymptotic expansion of the metric functions as ${r\rightarrow \infty}$. Thus, for ${n\not=0}$ and ${N\not=0}$, it immediately follows that - the family ${(s,t)=(0,0)_0}$ corresponds to ${[N,P]=[-1,2]^\infty}$, - the family ${(s,t)=(0,0)^ \infty}$ corresponds to ${[n, p]=[-1,2]}$, - the family ${(s,t)=(1,-1)_0}$ corresponds to ${[N,P]=[-1,3]^\infty}$, - the family ${(s,t)=(2,2)_0}$ corresponds to ${[n,p]=[1,0]}$, where the superscript “$\,^\infty$” in ${(0,0)^ \infty}$ indicates the expansion as ${{\bar r}\to\infty}$. The two admitted cases with ${n=0}$ have to be analyzed separately (there are no cases with ${N=0}$). In the generic case when ${a_1\not=0}$, using , , , we obtain that w=p, t=p. 
Therefore, for ${n=0}$ and ${a_1\not=0}$ we conclude that - the family ${(w,t)=(0,0)_{\bar r_0}}$ corresponds to ${[n, p]=[0,0]}$, - the family ${(w,t)=(1,1)_{\bar r_0}}$ corresponds to ${[n,p]=[0,1]}$, *completing the identification of all our main six classes of solutions*. Note that for ${n=0}$, ${a_1\not=0}$ the relation between $\Delta$ and $\bar \Delta$ is ${\bar\Delta \equiv \bar r -\bar r_0 \sim a_1\Delta}$. Therefore, a series expansion with *integer steps in* $\Delta$ corresponds to a series expansion with *integer steps in* $\bar\Delta$ in the physical radial coordinate $\bar r$. All four possible generic families compatible with the field equations as ${r \to r_0}$ and the series expansion – are summarized in Table \[tbl:01\], while the two cases compatible with the field equations as ${r \to \infty}$ and , , are summarized in Table \[tbl:02\]. We also indicate their physical interpretation and the corresponding Section, in which these solutions are described and studied. -------------------- --------------------- -------------------------------------------------------- -------------------------------- -- -- -- Class $[n,p]$ Family $(s,t)$ Interpretation Section \[0.5mm\] $[-1,2]$ $(0,0)^\infty$ Schwarzschild black hole \[Schw\_\[n,p\]=\[-1,2\]\] \[1mm\] $[0,1]$ $(-1,1)_{\bar r_0}$ Schwarzschild–Bach black holes (near the horizon) \[SchwaBach\_\[n,p\]=\[0,1\]\] \[1mm\] $[0,0]$ $(0,0)_{\bar r_0}$ generic solution, including the Schwa–Bach black holes \[SchwaBach\_\[n,p\]=\[0,0\]\] \[1mm\] $[1,0]$ $(2,2)_0$ Bachian singularity (near the singularity) \[SchwaBach\_\[n,p\]=\[1,0\]\] \[1mm\] -------------------- --------------------- -------------------------------------------------------- -------------------------------- -- -- -- : All possible generic types of solutions to Quadratic Gravity and the Einstein–Weyl theory that can be written as the power series – expanded around any constant value $r_0$. []{data-label="tbl:01"} \ --------------------------- ---------------- ------------------------------------------------------- ---------------------------- -- -- -- Class $[N,P]^\infty$ Family $(s,t)$ Interpretation Section \[0.5mm\] $[-1,3]^\infty$ $(1,-1)_0$ Schwarzschild–Bach black holes (near the singularity) \[Schw\_\[N,P\]=\[-1,3\]\] \[1mm\] $[-1,2]^\infty$ $(0,0)_0$ Bachian vacuum (near the origin) \[Schw\_\[N,P\]=\[-1,2\]\] \[1mm\] --------------------------- ---------------- ------------------------------------------------------- ---------------------------- -- -- -- : All possible generic types of solutions to Quadratic Gravity and the Einstein–Weyl theory that can be written as the power series , expanded as ${r\to\infty}$. []{data-label="tbl:02"} \ Special subclasses with $n=0$ {#relation_to_previous_results 2} ----------------------------- In addition to the above six main classes of solutions, in the case given by ${n=0}$ we have identified some other special subclasses, including a new one. These are *not* given as integer steps in $\bar r$ or ${\bar\Delta}$, so that these are additional classes from the point of view of expansions in powers of ${\bar r -\bar r_0}$ in the physical radial coordinate. In our Kundt coordinate $r$, they just naturally appear as special cases of the solutions with ${n=0}$, namely when ${a_1=0\not=a_2}$ and ${a_1=0=a_2}$. When ${a_1=0\not=a_2}$, the relation is ${\bar\Delta\sim a_2\,\Delta^2}$, and thus a series expansion with integer steps in $\Delta$ leads to *(half integer) steps* $\bar \Delta^{1/2}$. 
Using , in such a case we obtain w=+1,t=. For ${a_1=0}$ and ${a_2\not=0}$, we thus conclude that - the family $(w,t)=(\tfrac{3}{2},\tfrac{1}{2})_{\bar r_0,1/2}$ corresponds to ${[n, p]=[0,1]_{a_1=0}}$, - the family ${(w,t)=(1,0)_{\bar r_0,1/2}}$ corresponds to ${[n,p]=[0,0]_{a_1=0}}$. Analogously, when ${a_1=0=a_2}$ and ${a_3\not=0}$, the relation is ${{\bar \Delta}\sim a_3\,\Delta^3}$, and thus integer steps in $\Delta$ corresponds to steps in $\bar \Delta^{1/3}$. The relations are now w=,t=. Thus for ${a_1=0=a_2}$ and ${a_3\not=0}$, we conclude that - the family $(w,t)=(\tfrac{4}{3},0)_{\bar r_0,1/3}$ corresponds to ${[n, p]=[0,0]_{a_1=0=a_2}}$. Concerning the geometrical and physical interpretation of these special solutions, it can be generally said that the classes with ${n=0}$ contain (among other solutions) black holes and wormholes. In particular, the class ${[n=0,p=1]}$ represents a *black hole spacetime* since it admits a Killing horizon at ${r_h=r_0}$, see . As pointed out in [@LuPerkinsPopeStelle:2015b], a *wormhole spacetime* is characterized by admitting a finite value of ${{\bar r}_0}$ where ${f=0}$ while ${h\not=0}$. Therefore, for a series expansion around this point, necessarily ${n=0=p}$ (since $\H \not= 0$), and ${a_1=0}$ (since $\Omega'=0$). Thus wormholes may appear only in the class ${[0,0]_{a_1=0}}$. The family of solutions ${(\tfrac{3}{2},\tfrac{1}{2})_{\bar r_0,1/2}}$ was identified in [@LuPerkinsPopeStelle:2015b] and interpreted in [@PerkinsPhD] as an “unusual” type of a horizon. However, it was stated therein that it is a solution to QG only for ${\beta\not =0}$, which implies ${R\not=0}$. Thus it seems that this class does not coincide with our class ${[0,1]_{a_1=0}}$ since, for all our classes, ${R=0}$ by assumption. Our family ${[0,0]_{a_1=0}}$ corresponds to the family ${(1,0)_{\bar r_0,1/2}}$ of [@LuPerkinsPopeStelle:2015b; @PerkinsPhD], while our family $[0,0]_{a_1=0=c_1=c_3}$, where *only even powers* in $\Delta$ are considered (indicated by the subscript “$\,_E$”), corresponds to the family $(1,0)_{\bar r_0,E}$ of [@LuPerkinsPopeStelle:2015b; @PerkinsPhD]. Both these families describe wormholes with two different (half-integer wormhole) and two same patches (integer wormhole), respectively, see [@PerkinsPhD]. Note that the Bach invariant for wormholes in the ${[0,0]_{a_1=0}}$ class is always nonvanishing. To our knowledge, the specific family ${[0,0]_{a_1=0=a_2}}$ *has not yet been considered*, and it corresponds to a new family ${(\tfrac{4}{3},0)_{\bar r_0,1/3}}$ in the notation of [@LuPerkinsPopeStelle:2015b]. It also seems that the *generic solution* $[0,0]$, with the highest number of free parameters, can be connected to all other solutions, and it represents an expansion around a generic point in these spacetimes. In Table \[tab:3\], we summarize all the classes and subclasses found and identified both in the physical and Kundt coordinates, grouped according to the regions in which the expansions are taken in the usual radial coordinate $\bar r$. Family $[n,p]$ or $[N,P]^\infty\!\!$ Parameters Free param. 
Interpretation -------------------------------- ------------------------------- ----------------------------- ------------------ ------------------------------ $(s,t)$ $(2,2)_0$ $[1,0]$ $a_0,c_0,c_1,c_2,r_0$ $5\rightarrow 3$ Bachian singularity (nS) $(2,2)_{0,E}$ $[1,0]_{c_1=0=c_3}$ $a_0,c_0,r_0$ $3\rightarrow 1$ Bachian singularity (nS) $(1,-1)_0$ $[-1,3]^\infty$ $A_0,C_0,C_1,C_3$ $4\rightarrow 2$ Schwa–Bach black holes (S) $(0,0)_0$ $[-1,2]^\infty$ $A_0,C_1,C_2$ $3\rightarrow 1$ Bachian vacuum (nS) $(w,t)$ $(1,1)_{\bar r_0}$ $[0,1]$ $a_0,c_0,c_1,r_0=r_h$ $4\rightarrow 2$ Schwa–Bach black holes (S) $(3/2,1/2)_{\bar r_0,1/2}\!\!$ $[0,1]_{a_1=0}$ $a_0,c_0,r_0$ $3\rightarrow 1$ “unusual” horizon (nS) $(0,0)_{\bar r_0}$ $[0,0]$ $a_0,a_1,c_0,c_1,c_2,r_0\!$ $6\rightarrow 4$ generic solution (S) $(1,0)_{\bar r_0,1/2}$ $[0,0]_{a_1=0}$ $a_0,c_0,c_1,c_2,r_0$ $5\rightarrow 3$ half-integer wormhole (nS) $(1,0)_{\bar r_0,E}$ $[0,0]_{a_1=0=c_1=c_3}$ $a_0,c_0,r_0$ $3\rightarrow 1$ symmetric wormhole (nS) $(4/3,0)_{\bar r_0,1/3}$ $[0,0]_{a_1=0=a_2}$ $a_0,c_0,c_1,r_0$ $4\rightarrow 2$ not known (nS) — new $(s,t)$ $(0,0)^\infty$ $[-1,2]$ $a_0,c_1,r_0$ $3\rightarrow 1$ Schwarzschild black hole (S) : All solutions, sorted according to the physical regions in which the expansions are taken. The subscripts “$\,_0$”, “$\,_{{\bar r}_0}$” and the superscript “$\,^\infty$” denote solutions $(s,t)$ or $(w,t)$ near ${\bar r=0}$, ${\bar r={\bar r_0}}$, and ${\bar r\rightarrow\infty}$, respectively. The subscript “$\,_E$” indicates that only even powers are present in the expansion, while “$\,_{1/2}$” and “$\,_{1/3}$”indicate that fractional powers are present. Specific number of free parameters is given before and after removing two parameters by the gauge freedom in the Kundt coordinates. In physical coordinates, only one parameter can be removed by rescaling . The symbols “(S)” or “(nS)” indicate that a class of solutions contains or does *not* contain the Schwarzschild black hole, respectively. []{data-label="tab:3"} \ Discussion and analysis of the Schwarzschild–Bach black holes {#discussion-and-figures} ============================================================= In this section, we discuss the behavior of the series expressing the Schwarzschild–Bach black hole solutions , . For our analysis, we choose the same values of the parameters as in our previous paper [@PodolskySvarcPravdaPravdova:2018], namely ${r_h=-1}$, ${k = 0.5}$, ${b=0.3633018769168}$. Such a very special value of $b$ is “close” to the asymptotically flat case.[^6] The key observation for estimating the radius of convergence can be made from Figure \[Figasymp\]. Interestingly, the ratios of subsequent terms ${\frac{\alpha_n}{\alpha_{n-1}}}$ and ${-\frac{\gamma_n}{\gamma_{n-1}}}$ given by the recurrent relations are *approaching a constant asymptotically*. This suggests that both series given by $\alpha_n$ and $\gamma_n$ behave as *geometric series for large* $n$, with the ratio $q$ being apparently *equal for both the series*. Therefore, the series for $\Omega$ and $\cal H$, given by , , should be convergent for ${-1-\frac{1}{q}<r<-1+\frac{1}{q}}$, where ${q \approx 1.494}$, that is in the interval ${r\in(-1.67,-0.33)}$. ![The Schwarzschild–Bach solution ${[0,1]}$ given by , . The ratios $\frac{\alpha_n}{\alpha_{n-1}}$ (blue) and ${-\frac{\gamma_n}{\gamma_{n-1}}}$ (red) for the first 3000 coefficients $\alpha_i$ and $\gamma_i$ given by the recurrent formula are ploted. 
[]{data-label="Figasymp"}](fig1.pdf){height="62mm"} Figure \[fig:OmegaH\] illustrates the convergence of the metric functions $\Omega(r)$ and $\H(r)$ in the Kundt coordinate $r$. In the domain of convergence, denoted by vertical dashed lines, the solution fully agrees with the numerical solution of the field equations. For comparison, Figure \[fig:fh\] illustrates the convergence of the corresponding metric functions $f(\bar r)$ and $h(\bar r)$ in the standard spherically symmetric coordinates. The solution quickly converges, and approaches a numerical solution even at a large distance from the horizon located at ${{\bar r}_h =1}$. From the value of ${\Omega(r)\equiv \bar r}$ at the *lower* boundary of the domain of convergence shown in Figure \[fig:OmegaH\], we can easily read off its value ${\bar r \approx 0.53}$ in the usual radial coordinate. In contrast, the value of the coordinate $\bar r$ given by $\Omega(r)$ at the *upper* boundary remains unclear since it depends on the precise value of the series at the upper boundary of the domain of convergence. In fact, we cannot even say with certainty that the radius of convergence in the standard spherical coordinate $\bar r $ is finite — it may well extend up to ${\bar r \to \infty}$. Finally, it is illustrative to show explicitly that, in contrast to the Schwarzschild solution, the metric functions $f(\bar r)$ and $h(\bar r)$ for the Schwarzschild–Bach black holes are *not equal*. This is clearly seen from their plots in Figure \[fig:fhnearhorizon\]. ![The metric functions $\Omega(r)$ (left) and $\H(r)$ (right) for the Schwarzschild–Bach solution $[0,1]$. The first 20 (red), 50 (orange), 100 (green), and 500 (blue) terms of the series , for $\Omega$ and $\H$ are also compared with a numerical solution (black). Boundaries of the domain of convergence are denoted by vertical dashed lines. Within this radius of convergence, all these functions overlap with the numerical solution, except the lowest shown 20th order of $\Omega$ near the top right corner on the left graph. []{data-label="fig:OmegaH"}](fig2.pdf){height="50mm"} ![The metric functions $f(\bar r)$ (left) and $h(\bar r)$ (right) for the Schwarzschild–Bach solution \[0,1\] in the standard coordinates. The first 20 (red), 50 (orange), 100 (green), and 300 (blue) terms of the series are plotted. A numerical solution (black) overlaps with the blue curve, even far above the horizon located at ${{\bar r}_h=1}$ (here up to ${\bar r = 20\, \bar r_h}$).[]{data-label="fig:fh"}](fig3.pdf){height="48mm"} ![The metric functions $f(\bar r)$ (blue) and $h(\bar r)$ (red) in the near-horizon region for the Schwarzschild–Bach solution \[0,1\]. These two functions are clearly distinct. They both vanish at the horizon, located here at ${{\bar r}_h=1}$[]{data-label="fig:fhnearhorizon"}](fig4.pdf){height="50mm"} There are *three classes* of solutions *containing the Schwarzschild black hole as a special case*, namely the ${[0,0]}$ class with *four* free parameters and the classes ${[0,1]}$ and ${[-1,3]^{\infty}}$, both with *two* free parameters, see Table \[tab:3\]. (The class ${[-1,2]}$ contains *only* the Schwarzschild solution.) The solution ${[0,0]}$ describes a generic point of a static, spherically symmetric spacetime in QG, including also black-hole and wormhole solutions. A natural question is whether the solutions ${[0,1]}$ and ${[-1,3]^{\infty}}$ describe *the same black hole at two different regions* (near the horizon and near the singularity, respectively). 
We have not arrived at a definite answer yet. Nevertheless, the Bach invariant for the class ${[-1,3]^{\infty}}$ approaches a *finite constant* as ${|r| \rightarrow \infty}$ corresponding to ${\bar r \to 0}$, see expression , while analytical and numerical results describing the behavior of the Bach invariant of the ${[0,1]}$ class of solutions as the value of $r$ decreases below the horizon seem to suggest that in this case the Bach invariant is *unbounded*, see Figure \[fig:bach\]. If this is indeed the case, then the classes ${[0,1]}$ and ${[-1,3]^{\infty}}$ must describe *distinct* generalizations of the Schwarzschild black hole admitting a nontrivial Bach tensor.

![The Bach invariant inside the horizon of the Schwarzschild–Bach black holes ${[0,1]}$ calculated from the first 20 (red), 50 (green), and 300 (blue) terms, compared with the numerical solution (black). The lower boundary of the domain of convergence is indicated by the vertical dashed line. The horizon is located at ${r_h=-1}$. The inset in the upper right corner shows the numerical solution down to much lower values of the coordinate $r$, indicating a possible divergence as ${r \rightarrow -\infty}$, that is as ${\bar r \to 0}$.[]{data-label="fig:bach"}](fig5.pdf){height="55mm"}

Main physical properties of the Schwarzschild–Bach black holes {#physics}
===============================================================

Specific observable effects on test particles caused by the Bach tensor {#geodeviation}
------------------------------------------------------------------------

In this section we demonstrate that the two parts $\B_1$, $\B_2$ of the Bach tensor , , entering the invariant , that distinguish the Schwa–Bach and the Schwarzschild black holes, can be explicitly observed via a *specific influence on particles*. It is well known that a *relative motion* of freely falling test particles (observers) directly encodes specific components of the spacetime curvature, such as the tidal deformation in the vicinity of a black hole, or a transverse effect of gravitational waves measurable by a laser interferometer detector. This is described by the *equation of geodesic deviation*, see [@BicakPodolsky:1999; @PodolskySvarc:2012] for a recent review with historical remarks and a description of the formalism that we are going to employ here.

### Interpreting solutions to Quadratic Gravity using geodesic deviation

To obtain physically measurable information about the relative motion, we have to choose an *orthonormal frame* ${\{{\mbox{\boldmath$e$}}_{(0)}, {\mbox{\boldmath$e$}}_{(1)}, {\mbox{\boldmath$e$}}_{(2)}, {\mbox{\boldmath$e$}}_{(3)}\}}$ such that ${{\mbox{\boldmath$e$}}_{(a)}\cdot{\mbox{\boldmath$e$}}_{(b)}=\eta_{ab}}$, where the time-like vector ${{\mbox{\boldmath$e$}}_{(0)}={\mbox{\boldmath$u$}}}$ is the observer’s $4$-velocity. Projecting the equation of geodesic deviation onto this frame we obtain $$\ddot Z^{(\rm{i})}= R^{(\rm{i})}_{\ \ (0)(0)(\rm{j})}\,Z^{(\rm{j})} \,, \qquad \rm{i},\rm{j}=1,2,3\,, \label{InvGeoDev}$$ where $$\label{PhysAccel} \ddot Z^{(\rm{i})} \equiv e^{(\rm{i})}_a\,\frac{{{\rm D}}^2 Z^a}{{{\rm{d}}}\, \tau^2} =e^{(\rm{i})}_a\,{Z^a}_{;cd}\, u^c u^d \,, \qquad \hbox{and} \qquad {R_{(\rm{i})(0)(0)(\rm{j})}\equiv R_{abcd} \,e^a_{(\rm{i})}u^b u^c e^d_{(\rm{j})}} \,.$$ Spacetime curvature, characterized by the Riemann tensor, can then be decomposed into the traceless Weyl tensor, the Ricci tensor, and the scalar curvature $R$.
Its projection (\[PhysAccel\]) gives $$\begin{aligned} \label{DecompFrame} R_{(\rm{i})(0)(0)(\rm{j})}=C_{(\rm{i})(0)(0)(\rm{j})}+\tfrac{1}{2}\big(R_{(\rm{i})(\rm{j})} -\delta_{\rm{i}\rm{j}}\,R_{(0)(0)}\big)-\tfrac{1}{6}\,R\,\delta_{\rm{i}\rm{j}} \,.\end{aligned}$$ Moreover, the vacuum field equations of Quadratic Gravity (including the Einstein–Weyl theory), ${R_{ab}=4k\, B_{ab}}$ implying ${R=0}$, can be employed. Substituting these relations into (\[DecompFrame\]), we finally obtain the *invariant form of the equation of geodesic deviation* (\[InvGeoDev\]) as $$\begin{aligned} \ddot{Z}^{(\rm{i})}= C_{(\rm{i})(0)(0)(\rm{j})}\,Z^{(\rm{j})} +2k\big(B_{(\rm{i})(\rm{j})}\,Z^{(\rm{j})}-B_{(0)(0)}\,Z^{(\rm{i})}\big)\,. \label{InvGeoDevExpl}\end{aligned}$$ Of course, ${C_{(\rm{i})(0)(0)(\rm{j})}=C^{(\rm{i})}_{\quad(0)(0)(\rm{j})}}$ and ${B_{(\rm{i})(\rm{j})}=B^{(\rm{i})}_{\quad(\rm{j})}}$ since the spatial part of the frame is Cartesian. The Weyl tensor projections ${C_{(\rm{i})(0)(0)(\rm{j})}}$ can be further decomposed and expressed in terms of the Newman–Penrose scalars $\Psi_A$ with respect to the (real) *null frame* ${\{{\mbox{\boldmath$k$}}, {\mbox{\boldmath$l$}}, {\mbox{\boldmath$m$}}_{i} \}}$ which is defined by $$\begin{aligned} {\mbox{\boldmath$k$}}={{\textstyle\frac1{\sqrt{2}}}}({\mbox{\boldmath$u$}}+{\mbox{\boldmath$e$}}_{(1)})\,, \qquad {\mbox{\boldmath$l$}}={{\textstyle\frac1{\sqrt{2}}}}({\mbox{\boldmath$u$}}-{\mbox{\boldmath$e$}}_{(1)})\,, \qquad {\mbox{\boldmath$m$}}_{i}={\mbox{\boldmath$e$}}_{(i)} \quad \hbox{for} \quad i=2,\,3 \,. \label{NullFrame}\end{aligned}$$ Thus, ${\mbox{\boldmath$k$}}$ and ${\mbox{\boldmath$l$}}$ are future oriented null vectors, and ${\mbox{\boldmath$m$}}_{i}$ are two spatial vectors orthogonal to them, normalized as ${{\mbox{\boldmath$k$}}\cdot{\mbox{\boldmath$l$}}=-1}$ and ${{\mbox{\boldmath$m$}}_{i}\cdot{\mbox{\boldmath$m$}}_{j}=\delta_{ij}}$. Such a generic decomposition was found in [@BicakPodolsky:1999; @PodolskySvarc:2012]. Using these results, we obtain the corresponding *general form of the equation of geodesic deviation* (\[InvGeoDevExpl\]) *in Quadratic Gravity/the Einstein–Weyl theory*: $$\begin{aligned} \ddot{Z}^{(1)} = & \quad \Psi_{2S}\,Z^{(1)}+ \tfrac{1}{\sqrt{2}}(\Psi_{1T^j}-\Psi_{3T^j})\,Z^{(j)} \nonumber\\ & \qquad +2k\,\big[(B_{(1)(1)}-B_{(0)(0)})\,Z^{(1)}+B_{(1)(j)} \,Z^{(j)}\,\big]\,,\label{InvGeoDevFinal1}\\ \ddot{Z}^{(i)} = & - \tfrac{1}{2}\Psi_{2S}\,Z^{(i)} + \tfrac{1}{\sqrt{2}}(\Psi_{1T^i}-\Psi_{3T^i})\,Z^{(1)} -\tfrac{1}{2}(\Psi_{0^{ij}}+\Psi_{4^{ij}})\,Z^{(j)} \nonumber\\ & \qquad +2k\,\big[\,B_{(i)(1)} \,Z^{(1)}+B_{(i)(j)} \,Z^{(j)}-B_{(0)(0)}\,Z^{(i)}\,\big]\,, \label{InvGeoDevFinal2}\end{aligned}$$ where we have used the relation ${\Psi_{2T^{(ij)}}=\tfrac{1}{2}\Psi_{2S}\,\delta_{ij}}$ valid in ${D=4}$, see [@PodolskySvarc:2012]. This system of equations admits a *clear physical interpretation*: The Newtonian component $\Psi_{2S}$ of the gravitational field causes classical tidal deformations, $\Psi_{3T^i}, \Psi_{1T^i}$ are responsible for longitudinal motions, while $\Psi_{4^{ij}}, \Psi_{0^{ij}}$ represent the transverse effects of gravitational waves (propagating in the directions ${\mbox{\boldmath$e$}}_{(1)}, -{\mbox{\boldmath$e$}}_{(1)}$, respectively). The additional specific effects caused by the nonvanishing Bach tensor are encoded in the frame components $B_{(a)(b)}$. 
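For readers who wish to experiment with these formulae, the following minimal sketch (our own illustration; the function name and array conventions are not from the paper) evaluates the right-hand sides of (\[InvGeoDevFinal1\]), (\[InvGeoDevFinal2\]) for given frame components:

```python
import numpy as np

def relative_acceleration(Z, k, Psi2S, Psi1T, Psi3T, Psi0, Psi4, B):
    """Evaluate (InvGeoDevFinal1)-(InvGeoDevFinal2) for a displacement Z = (Z1, Z2, Z3).

    Psi1T, Psi3T : length-2 arrays of the longitudinal Weyl components (j = 2, 3),
    Psi0, Psi4   : 2x2 arrays of the transverse Weyl components Psi_{0^{ij}}, Psi_{4^{ij}},
    B            : 4x4 array of the frame components B_(a)(b), a, b = 0..3.
    All inputs are assumed to be supplied by the user.
    """
    Z = np.asarray(Z, dtype=float)
    Z1, Zt = Z[0], Z[1:]                          # longitudinal / transverse parts
    Psi1T, Psi3T = np.asarray(Psi1T), np.asarray(Psi3T)
    Psi0, Psi4, B = np.asarray(Psi0), np.asarray(Psi4), np.asarray(B)

    # longitudinal component, cf. (InvGeoDevFinal1)
    acc1 = (Psi2S*Z1
            + (Psi1T - Psi3T) @ Zt / np.sqrt(2)
            + 2*k*((B[1, 1] - B[0, 0])*Z1 + B[1, 2:] @ Zt))

    # transverse components, cf. (InvGeoDevFinal2)
    acct = (-0.5*Psi2S*Zt
            + (Psi1T - Psi3T)*Z1/np.sqrt(2)
            - 0.5*(Psi0 + Psi4) @ Zt
            + 2*k*(B[2:, 1]*Z1 + B[2:, 2:] @ Zt - B[0, 0]*Zt))

    return np.concatenate(([acc1], acct))
```

For the spherically symmetric black holes analyzed below, only $\Psi_{2S}$ among the Weyl scalars is nonvanishing and ${B_{(0)(i)}=0=B_{(1)(i)}}$ with ${B_{(i)(j)}\propto\delta_{ij}}$, so most of the arguments of this sketch can simply be set to zero.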
### Geodesic deviation in the Schwarzschild–Bach black hole spacetimes Let us concentrate on the spherically symmetric black hole metric in the form , or with . In particular, we introduce the “interpretation” orthonormal frame associated with a *radially falling observer*, i.e., assuming ${\dot{x}=0=\dot{y}}$. Such a frame reads $$\begin{aligned} & {\mbox{\boldmath$e$}}_{(0)}\equiv {\mbox{\boldmath$u$}}= \dot{r}\,\partial_r +\dot{u}\,\partial_u \,, \nonumber \\ & {\mbox{\boldmath$e$}}_{(1)}= \tfrac{1}{2}\big[( {\Omega^2\dot{u}} )^{-1}-{\cal H}\dot{u}\big]\partial_r -\dot{u}\,\partial_u \,, \nonumber \\ & {\mbox{\boldmath$e$}}_{(i)}= \Omega^{-1}\big[1+\tfrac{1}{4}(x^2+y^2)\big]\partial_i \,, \label{OrtFrame}\end{aligned}$$ where the normalisation of observer’s four-velocity ${{\mbox{\boldmath$u$}}\cdot{\mbox{\boldmath$u$}}=-1}$ implies ${\dot{r}=\tfrac{1}{2}\big[({\Omega^2\dot{u}})^{-1}+{\cal H}\dot{u}\big]}$. Using (\[NullFrame\]), the associated null interpretation frame thus takes the form $${\mbox{\boldmath$k$}}= \frac{1}{\sqrt{2}\,\dot{u}\,\Omega^2}\,\partial_r \,, \qquad {\mbox{\boldmath$l$}}= \frac{\dot{u}\,{\cal H}}{\sqrt{2}}\,\mathbf{\partial}_r+\sqrt{2}\dot{u}\,\partial_u\,, \qquad {\mbox{\boldmath$m$}}_i = \Omega^{-1}\big[1+\tfrac{1}{4}(x^2+y^2)\big]\partial_i \,. \label{NullIntFrame}$$ A direct calculation shows that *the only nonvanishing Weyl tensor component* with respect to is $$\Psi_{2S} \equiv C_{abcd}\; k^a\, l^b\, l^c\, k^d =\tfrac{1}{6}\,\Omega^{-2}({\cal H}''+2)\,. \label{Psi2Int}$$ This is consistent with the fact that the spherically symmetric black hole metric is of algebraic type D. The explicit Bach tensor projections with respect to the orthonormal frame (\[OrtFrame\]) are $$\begin{aligned} B_{(0)(0)}&= \frac{1}{24\,\Omega^6\dot{u}^2}\Big[-(1-\Omega^2{\cal H}\dot{u}^2)^2\,{\cal H}''''+2\Omega^2\dot{u}^2({\cal H}'{\cal H}'''-\tfrac{1}{2}{{\cal H}''}^2+2)\Big]\,, \\ B_{(1)(1)}&= \frac{1}{24\,\Omega^6\dot{u}^2}\Big[-(1+\Omega^2{\cal H}\dot{u}^2)^2\,{\cal H}''''-2\Omega^2\dot{u}^2({\cal H}'{\cal H}'''-\tfrac{1}{2}{{\cal H}''}^2+2)\Big]\,, \\ B_{(0)(1)}&= -\frac{1}{24\,\Omega^6\dot{u}^2}\,(1-\Omega^4{\cal H}^2\dot{u}^4)\,{\cal H}'''' \,, \qquad B_{(0)(i)}= 0\,, \\ B_{(i)(j)}&= \frac{\delta_{ij}}{12\,\Omega^4}({\cal H}{\cal H}''''+{\cal H}'{\cal H}'''-\tfrac{1}{2}{{\cal H}''}^2+2)\,, \qquad B_{(1)(i)}= 0 \,.\end{aligned}$$ Therefore, the equation of geodesics deviation , explicitly becomes $$\begin{aligned} \ddot{Z}^{(1)} = & \hspace{6mm} \frac{1}{6}\, \Omega^{-2}\big({\cal H}''+2\big)\,Z^{(1)}\, -\,\frac{1}{3}\,k\,\Omega^{-4}\big({\cal H}{\cal H}''''+{\cal H}'{\cal H}'''-\tfrac{1}{2}{{\cal H}''}^2+2\big)Z^{(1)} \,, \label{InvGeoDevBH1}\\ \ddot{Z}^{(i)} = & - \frac{1}{12}\,\Omega^{-2}\big({\cal H}''+2\big)\,Z^{(i)} +\frac{1}{12}\,k\,\Omega^{-4}\big((\Omega^2\H\dot{u}^2)^{-1}+\Omega^2{\cal H}\dot{u}^2\big){\cal H}{\cal H}''''\,Z^{(i)} \,. \label{InvGeoDevBHi}\end{aligned}$$ We conclude that there is a classical *tidal deformation* caused by the *Weyl curvature* proportional to ${\Omega^{-2}({\cal H}''+2)}$, i.e., the square root of the invariant . Moreover, in Quadratic Gravity (with ${k\not=0}$) there are *two additional effects* caused by the presence of a nonvanishing *Bach tensor*. The first can be observed in the longitudinal component of the acceleration , while the second can be observed in the transverse components . 
Interestingly, up to a constant they are exactly the square roots of the two parts of the invariant , that is the amplitudes $\B_1$, $\B_2$ given by , . The influence of these two distinct components $\B_1$ and $\B_2$ of the Bach tensor $B_{ab}$ on test particles is even more explicitly seen in the geodesic deviation of *initially static test particles with* ${\dot{r}=0}$. The 4-velocity normalization then implies ${\Omega^2\H\,\dot{u}^2=-1}$, which simplifies , to $$\begin{aligned} \ddot{Z}^{(1)} = & \hspace{6mm} \frac{1}{6}\, \Omega^{-2}\big({\cal H}''+2\big)\,Z^{(1)} -\frac{1}{3}\,k\,\Omega^{-4}\big(\B_1+\B_2\big)Z^{(1)} \,, \label{InvGeoDevBH1r0}\\ \ddot{Z}^{(i)} = & - \frac{1}{12}\,\Omega^{-2}\big({\cal H}''+2\big)\,Z^{(i)} -\frac{1}{6}\,k\,\Omega^{-4}\,\B_1\,Z^{(i)} \,. \label{InvGeoDevBHir0}\end{aligned}$$ From these expressions, it immediately follows that the first component $\B_1$ of the Bach tensor is directly observed in the *transverse* components of the acceleration along ${{\mbox{\boldmath$e$}}_{(2)}, {\mbox{\boldmath$e$}}_{(3)}}$, that is ${\partial_x, \partial_y}$ (equivalent to ${\partial_\theta, \partial_\phi}$), while the second component $\B_2$ only occurs in the *radial* component along ${\mbox{\boldmath$e$}}_{(1)}= -\dot{u}\,(\partial_u+\H\,\partial_r)= -\H\,\Omega'\,\dot{u}\,\partial_{\bar r}$, proportional to $\partial_{\bar r}$. Interestingly, *on the horizon there is only the radial effect* given by ${\B_2(r_h)}$ since ${\B_1(r_h)=0}$ due to and , see also . It can also be proven by direct calculation that the specific character of $\B_1, \B_2$ *cannot mimic* the Newtonian tidal effect in the Schwarzschild solution, i.e., cannot be “incorporated” into the first terms ${\Omega^{-2}\big({\cal H}''+2\big)}$ in , . Therefore, by measuring the free fall of a set of test particles, it is *possible to distinguish* the pure Schwarzschild black hole from the Schwarzschild–Bach black hole geometry which has nonvanishing Bach tensor ${B_{ab}\ne 0}$. Thermodynamic properties: horizon area, temperature, entropy ------------------------------------------------------------ It is also important to determine main geometrical and thermodynamic properties of the family of Schwarzschild–Bach black holes. The *horizon* in these spherically symmetric spacetimes is generated by the rescaled null Killing vector ${\xi\equiv\sigma\partial_u=\sigma\partial_t}$, considering the time-scaling freedom represented by a parameter $\sigma$. Thus it appears at *zero of the metric function* ${\H(r)}$, where the norm of $\xi$ vanishes, see . In the explicit form , this is clearly located at ${r=r_h}$ since ${\H(r_h)=0}$. By simply integrating the angular coordinates of the metric , we immediately obtain the *horizon area* as = 4\^2(r\_h)= = 4[|r]{}\_h\^2 . \[horizon\_area\] The only nonzero derivatives of $\xi$ are ${\xi_{u;r}=-\xi_{r;u}=\frac{1}{2}\sigma(\Omega^2\H)'}$, and thus ${\xi^{\,r;u}=-\xi^{\,u;r}=\Omega^{-4}\xi_{u;r}}$. From the definition [@Wald:1984] of *surface gravity* ${\kappa^2\equiv-\frac{1}{2}\,\xi_{\mu;\nu}\,\xi^{\,\mu;\nu}}$, we obtain ${\kappa=-\frac{1}{2}\sigma(\H'+2\H\,\Omega'/\Omega)}$. On the horizon, where ${\H=0}$, using this simplifies to /= -’(r\_h) = - = . \[surface\_gravity\] It is *the same expression as for the Schwarzschild solution* (in which case ${\kappa=1/4m}$). 
The standard expression for *temperature* of the black hole horizon ${T\equiv\kappa/(2\pi)}$, which is valid even in higher-derivate gravity theories [@FanLu:2015], thus yields T/= - = , \[temperature\] *independent of the Bach parameter $b$*. However, in higher-derivative theories it is *not* possible to use the usual formula ${S=\frac{1}{4}{\cal A}}$ to determine the *black hole horizon entropy*. Instead, it is necessary to apply the generalized formula derived by Wald [@Wald:1993; @IyerWald:1994], namely S=, \[WaldS\] where the Noether charge 2-form $\mathbf{Q}$ on the horizon is [& = &]{}\_Q\^[[[d]{}]{}]{}x\^[[[d]{}]{}]{}x\^ ,\ Q\^ [& = &]{}2X\^\_[;]{} +4[X\^]{}\_[;]{}\_X\^ , \[NoetherCharge\] in which ${\cal L}$ is the Lagrangian of the theory. In the case of Quadratic Gravity , it can be shown that X\^ [& = &]{}. \[Xabcd\] Subsequent lengthy calculation for the metric with ${\Lambda=0}$ then leads to (r\_h) = -\^2’ |\_[r=r\_h]{} [[[d]{}]{}]{}[[[d]{}]{}]{}. \[NoetherCharge2\] Evaluating the integral , and using , , , we finally obtain S = [A]{}(1-4kr\_h\^2b) = [A]{}(1-4k ) . \[entropySB\] This explicit formula for the Schwarzschild–Bach black hole entropy agrees with the numerical results presented in [@LuPerkinsPopeStelle:2015], with the identification ${k=\alpha}$ and ${b=\delta^*}$. In fact, it gives a geometrical interpretation of the “non-Schwarzschild parameter” $\delta^*$ as the dimensionless Bach parameter $b$ that determines the *value of the Bach tensor on the horizon* $r_h$, see relations . Of course, for the Schwarzschild black hole (${b=0}$) or in Einstein’s General Relativity (${k=0}$) we recover the standard expression ${S=\frac{1}{4G}\,{\cal A}}$. Notice also from that for a given ${b \ne 0}$, the *deviation* from this standard Schwarzschild entropy *is larger when the Schwarzschild–Bach black holes are smaller* because they have smaller $\bar r_h$. Conclusions =========== The class of spherically symmetric black holes in Quadratic Gravity and the Einstein–Weyl theory was studied in many previous works, in particular [@Stelle:1978; @Holdom:2002; @LuPerkinsPopeStelle:2015; @LuPerkinsPopeStelle:2015b; @PerkinsPhD], often by numerical methods applied to complicated field equations corresponding to the standard form of the spherical metric (\[Einstein-WeylBH\]). In [@PodolskySvarcPravdaPravdova:2018; @SvarcPodolskyPravdaPravdova:2018], using a convenient form of the line element (\[BHmetric\]) conformal to a simple Kundt seed, we obtained a surprisingly simple form of the field equations (\[Eq1\]), (\[Eq2\]). This enabled us to find an *explicit form* of their exact solutions. Moreover, we identified the *Bach tensor* as the key ingredient which makes the Schwarzschild solution geometrically distinct from the other branch of “non-Schwarzschild” ones. This is a direct consequence of the extension of Einstein’s theory to include higher derivative corrections. The present paper contains a thorough analysis of all such solutions and their derivation, including the details which had to be omitted in our brief letter [@PodolskySvarcPravdaPravdova:2018]. We have started with the conformal-to-Kundt metric ansatz (\[BHmetric\]). Together with the Bianchi identities, this leads to a compact form of the Quadratic Gravity field equations (\[fieldeqsEWmod\]), assuming ${R=0}$, namely the autonomous system of *two ordinary differential equations* (\[Eq1\]) and (\[Eq2\]) for two metric functions ${\Omega(r)}$ and ${{\cal H}}(r)$. 
They have been solved in terms of *power series* representing these metric functions, expanded around any *fixed point* $r_0$ (\[rozvojomeg0\]), (\[rozvojcalH0\]), or using the *asymptotic expansion* (\[rozvojomegINF\]), (\[rozvojcalHINF\]), respectively. The field equations have become the algebraic constraints (\[KeyEq1\]), (\[KeyEq2\]) in the fixed point case (near ${r_0}$), and (\[KeyEq1INF\]), (\[KeyEq2INF\]) in the asymptotic region (as ${r \to \infty}$). Their dominant orders restrict the admitted solutions to (\[4classes\]) and (\[2classes\]), respectively. The detailed discussion of all the possible six main classes, together with a suitable fixing of the gauge freedom, can be found in subsequent Sections \[description\] and \[description\_INF\]. The classes are summarized in Tables \[tbl:01\] and \[tbl:02\] in Section \[summary\]. The most prominent case corresponds to the spherically symmetric *black hole spacetimes with* (in general) *nonvanishing Bach tensor*. This solution has been expanded around the event horizon, see Subsection \[SchwaBach\_\[n,p\]=\[0,1\]\]. The metric functions ${\Omega(r)}$ and ${{\cal H}}(r)$ are given by the series (\[Omega\_\[0,1\]\]), (\[H\_\[0,1\]\]) with the initial coefficients specified by (\[alphasgammaIIbinitial\]), and all other coefficients determined by the recurrent relations (\[alphasIIbgeneral\]). Thus we have obtained the two-parametric family of black holes characterized by the radial position ${r_h}$ of the horizon and by the additional parameter $b$. The new Bach parameter distinguishes this more general *Schwarzschild–Bach* solutions (${b\neq0}$) from the classical Schwarzschild spacetime with vanishing Bach tensor (${b=0}$). The main mathematical properties of the Schwarzschild–Bach metric functions are presented and visualized in Section \[discussion-and-figures\]. Subsequent Section \[physics\] contains the physical and geometrical analysis. We have discussed specific behavior of freely falling test observers, described by the equation of geodesic deviation, and demonstrated that their *relative motion* encodes the presence of the Bach tensor. The physical investigation is completed by a fully explicit evaluation of the *thermodynamic quantities*. In particular, the expression for entropy (\[entropySB\]) exhibits the key role of the Bach parameter ${b}$. Finally, for convenience, in Section \[summary\] we have also *summarized all the admitted classes* of solutions, including their physical interpretation, the number of free parameters and, most importantly, relations to previous works. See, in particular, Table \[tab:3\]. We hope that our approach to spherically symmetric vacuum solutions to Quadratic Gravity and the Einstein–Weyl theory may elucidate some of their properties that are not easily accessible by numerical simulations. Of course, we are aware of many remaining open questions. For example, complete analytic identification of the same physical solution in distinct classes and their mutual relations are still missing. It is also of physical interest to understand the effect of nontrivial Bach tensor in the Schwarzschild–Bach spacetimes on perihelion shift and light bending, studied thoroughly during the last century in Einstein’s theory using the Schwarzschild solution. Acknowledgements {#acknowledgements .unnumbered} ================ This work has been supported by the Czech Science Foundation Grants No. GAČR 17-01625S (JP, R[Š]{}) and 19-09659S (VP, AP), and the Research Plan RVO: 67985840 (VP, AP). 
The Ricci and Bach tensors for the Kundt seed {#derivingRBseed} ============================================= We start with the seed Kundt metric (\[Kundt seed xy\]). Its nontrivial metric components $g_{ab}$ are $$\label{Einstein-WeylBHC} g_{xx}^{{\hbox{\tiny Kundt}}}= g_{yy}^{{\hbox{\tiny Kundt}}}= \textstyle{\left(1+\frac{1}{4}(x^2+y^2)\right)^{-2}}\,,\qquad g_{ru}^{{\hbox{\tiny Kundt}}}= -1 \,,\qquad g_{uu}^{{\hbox{\tiny Kundt}}}= {\cal H}\,,$$ so that the contravariant components $g^{ab}$ read $$\label{contraEinstein-WeylBHC} g^{xx}_{{\hbox{\tiny Kundt}}}= g^{yy}_{{\hbox{\tiny Kundt}}}= \textstyle{\left(1+\frac{1}{4}(x^2+y^2)\right)^2}\,,\qquad\quad g^{ru}_{{\hbox{\tiny Kundt}}}= -1 \,,\qquad g^{rr}_{{\hbox{\tiny Kundt}}}= -{\cal H} \,.$$ Recall that the spatial 2-metric ${g_{ij}}$ is a round sphere of unit radius, with the Gaussian curvature ${K=1}$ and thus its Ricci scalar is ${{\cal R}=2K=2}$. The nontrivial Christoffel symbols for this metric are \^r\_[ru]{} = [-’]{} , \^r\_[uu]{} = [’]{} , \^u\_[uu]{} = [’]{} , \^k\_[ij]{} = [\^[S]{}\^k\_[ij]{}]{} , \[ChristoffelEnd\] where ${\,^{S}\Gamma^k_{ij}\equiv\frac{1}{2}g^{kl}(2g_{l(i,j)}-g_{ij,l})}$ are the symbols with respect to the spatial metric $g_{ij}$ of the 2-sphere. The only nontrivial Riemann curvature tensor components are R\_[ruru]{}\^= [-”]{} , R\_[kilj]{}\^= g\_[kl]{}g\_[ij]{}-g\_[kj]{}g\_[il]{} , and the only nontrivial Ricci tensor components of (\[Einstein-WeylBHC\]) are $$\begin{aligned} R_{ru}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}-\pul\,{\cal H}'' \,, \label{Ricci ru5} \\ R_{uu}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}-{\cal H}\,R_{ru}^{{\hbox{\tiny Kundt}}}\,,\label{Ricci uu5}\\ R_{xx}^{{\hbox{\tiny Kundt}}}= R_{yy}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}g_{xx} \,, \label{Ricci ij5}\end{aligned}$$ while the Ricci scalar reads R\^=[H]{}” + 2 , so that the only nontrivial Weyl tensor components are $$\begin{aligned} C_{ruru}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}{\textstyle -\frac{1}{6} R} \,, \label{WeyliK}\\ C_{riuj}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}{\textstyle \frac{1}{12} R \,g_{ij}} \,, \\ C_{kilj}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}{\textstyle \frac{1}{6}R\,(g_{kl}g_{ij}-g_{kj}g_{il}) } \,, \\ C_{uiuj}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}-{\cal H}\, C_{riuj} \,. \label{WeylfK}\end{aligned}$$ The nonzero components of the Bach tensor are $$\begin{aligned} B_{rr}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}{\textstyle -\frac{1}{6}\,{\cal H}'''' } \,, \label{Bach rr}\\ B_{ru}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}{\textstyle \frac{1}{12}\, \big(2\,{\cal H}{\cal H}''''+{\cal H}'{\cal H}'''-{\textstyle\frac{1}{2}}{{\cal H}''}^2 +2\big) } \,, \label{Bach ru}\\ B_{uu}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}-{\cal H}\,B_{ru}^{{\hbox{\tiny Kundt}}}\,, \label{Bach uu}\\ B_{xx}^{{\hbox{\tiny Kundt}}}= B_{yy}^{{\hbox{\tiny Kundt}}}{\!\!\!& = &\!\!\!}{\textstyle \frac{1}{12}}\,g_{xx}\, \big({\cal H}{\cal H}''''+{\cal H}'{\cal H}'''-{\textstyle\frac{1}{2}}{{\cal H}''}^2 +2\big) \,,\label{Bach xx}\end{aligned}$$ involving up to the 4th derivative of the metric function $\H(r)$. The Ricci and Bach tensors for the conformal metric {#App_derivingFE} =================================================== Taking the class of Kundt geometries (\[Kundt seed xy\]) as a seed, we can generate the metric of spherically symmetric geometries by the conformal transformation (\[confrelation\]), that is [[[d]{}]{}]{}s\^2 = \^2(r). 
Now, it is well-known [@Wald:1984] that under a conformal transformation of the seed metric $$g_{ab}=\Omega^2\, g_{ab}^{{\hbox{\tiny Kundt}}}\,, \label{confrel}$$ the Ricci scalar and the Ricci and Bach tensors transform as $$\begin{aligned} R {\!\!\!& = &\!\!\!}\Omega^{-2}R^{{\hbox{\tiny Kundt}}}-6\Omega^{-3} \square\Omega\,, \label{OmRiccscalar}\\ R_{ab} {\!\!\!& = &\!\!\!}R_{ab}^{{\hbox{\tiny Kundt}}}- 2\Omega^{-1}\nabla_a\nabla_b\Omega - \Omega^{-1} g_{ab}^{{\hbox{\tiny Kundt}}}\square\Omega+ \Omega^{-2}(4\Omega_{,a}\Omega_{,b}-g_{ab}^{{\hbox{\tiny Kundt}}}g^{cd}_{{\hbox{\tiny Kundt}}}\Omega_{,c}\Omega_{,d})\,, \label{OmRicc}\\ B_{ab} {\!\!\!& = &\!\!\!}\Omega^{-2}B_{ab}^{{\hbox{\tiny Kundt}}}\,. \label{OmBach}\end{aligned}$$ For the Kundt seed metric $g_{ab}^{{\hbox{\tiny Kundt}}}$ (\[Einstein-WeylBHC\]), its Ricci and Bach tensors $R_{ab}^{{\hbox{\tiny Kundt}}}$ and $B_{ab}^{{\hbox{\tiny Kundt}}}$ are given by (\[Ricci ru5\])–(\[Ricci ij5\]) and (\[Bach rr\])–(\[Bach xx\]), respectively. The nontrivial derivatives (with respect to the Kundt seed) of the conformal factor $\Omega(r)$ are, in view of (\[ChristoffelEnd\]), $$\begin{aligned} && \Omega_{,r} \equiv \Omega'\,, \nonumber\\ && \nabla_r\nabla_r \Omega= \Omega''\,,\quad \nabla_r\nabla_u \Omega= {\textstyle\frac{1}{2}}{\cal H}'\Omega' = \nabla_u\nabla_r \Omega\,,\quad \nabla_u\nabla_u \Omega= {-\textstyle\frac{1}{2}}{\cal H}{\cal H}'\Omega' \,,\\ && \square\Omega = -({\cal H}\Omega''+{\cal H}'\Omega')\,. \nonumber\end{aligned}$$ Employing (\[OmRicc\]), the nonvanishing Ricci tensor components of the metric (\[BHmetric-xy\]) are thus $$\begin{aligned} R_{rr} {\!\!\!& = &\!\!\!}-2\Omega^{-2}\big(\Omega\Omega''-2{\Omega'}^2\big) \,, \label{RT_R rr}\\ R_{ru} {\!\!\!& = &\!\!\!}-\tfrac{1}{2}\, \Omega^{-2}\big(\Omega^2 {\cal H}\big)'' \,, \label{RT_R ru}\\ R_{uu} {\!\!\!& = &\!\!\!}-{\cal H}\, {R}_{ru} \,, \label{RT_R uu}\\ R_{xx} = {R}_{yy} {\!\!\!& = &\!\!\!}\Omega^{-2}g_{xx}\, \big[ \big({\cal H}\Omega\Omega'\big)'+\Omega^2 \big] \,,\label{RT_R xx}\end{aligned}$$ and using (\[OmRiccscalar\]) we obtain $$R = 6\,\Omega^{-3}\big[{\cal H}\Omega''+{\cal H}'\Omega'+\tfrac{1}{6}({\cal H}''+2)\,\Omega\big] \,. \label{barR}$$ The nonvanishing Bach tensor components $B_{ab}$ are obtained by a trivial rescaling (\[OmBach\]) of (\[Bach rr\])–(\[Bach xx\]).

Derivation and simplification of the field equations {#analysingFE}
====================================================

The vacuum field equations in the Einstein–Weyl theory and also general Quadratic Gravity for the metric $g_{ab}$ are (\[fieldeqsEWmod\]), that is $$R_{ab} = 4k\, B_{ab} \,. \label{EWfield equations}$$ Using the expressions (\[RT\_R rr\])–(\[RT\_R xx\]) and (\[OmBach\]) with (\[Bach rr\])–(\[Bach xx\]), these *field equations explicitly read* $$\begin{aligned} \Omega\Omega''-2{\Omega'}^2 & = \tfrac{1}{3}k\, {\cal H}'''' \,, \label{Neq_rr} \\ \big(\Omega^2 {\cal H}\big)'' & = -\tfrac{2}{3}k \big(2\,{\cal H}{\cal H}''''+{\cal H}'{\cal H}'''-{\textstyle\frac{1}{2}}{{\cal H}''}^2 +2\big) \,, \label{Neq_ru} \\ \big({\cal H}\Omega\Omega'\big)'+\Omega^2 & = \tfrac{1}{3}k \,\big({\cal H}{\cal H}''''+{\cal H}'{\cal H}'''-{\textstyle\frac{1}{2}}{{\cal H}''}^2 +2 \big) \,. \label{Neq_xx}\end{aligned}$$ The equations (\[Neq\_rr\]), (\[Neq\_ru\]), (\[Neq\_xx\]) represent the nontrivial components $rr$, $ru$, $xx$ (identical to $yy$), respectively. The $uu$ component of the field equations is just the ${(-{\cal H})}$-multiple of (\[Neq\_ru\]). Moreover, recall that the trace of the field equations (\[EWfield equations\]) is ${ {R}=0}$, cf. (\[R=0\]).
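As a consistency check of (\[RT\_R rr\]) and (\[RT\_R ru\]) (a verification sketch of ours, not part of the original derivation), the Ricci tensor of the conformally rescaled metric can be recomputed directly from the Christoffel symbols with SymPy; both differences below should simplify to zero.

```python
# Sketch: recompute R_rr and R_ru of ds^2 = Omega^2 [p^{-2}(dx^2+dy^2) - 2 du dr + H du^2]
# with p = 1 + (x^2+y^2)/4, and compare with the closed forms quoted above.
import sympy as sp

r, u, x, y = sp.symbols('r u x y')
Om, H = sp.Function('Omega')(r), sp.Function('H')(r)
p = 1 + (x**2 + y**2)/4
X = [r, u, x, y]

g = sp.zeros(4, 4)
g[0, 1] = g[1, 0] = -Om**2            # g_ru
g[1, 1] = Om**2 * H                   # g_uu
g[2, 2] = g[3, 3] = Om**2 / p**2      # g_xx = g_yy
gi = g.inv()

# Christoffel symbols Gam[a][b][c] = Gamma^a_{bc}
Gam = [[[sp.Rational(1, 2) * sum(gi[a, d] * (sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                                             - sp.diff(g[b, c], X[d])) for d in range(4))
         for c in range(4)] for b in range(4)] for a in range(4)]

def ricci(b, c):
    return sp.simplify(sum(sp.diff(Gam[a][b][c], X[a]) - sp.diff(Gam[a][a][b], X[c])
                           + sum(Gam[a][a][d] * Gam[d][b][c] - Gam[a][c][d] * Gam[d][a][b]
                                 for d in range(4)) for a in range(4)))

R_rr_claim = -2 * (Om * Om.diff(r, 2) - 2 * Om.diff(r)**2) / Om**2
R_ru_claim = -sp.Rational(1, 2) * sp.diff(Om**2 * H, r, 2) / Om**2
print(sp.simplify(ricci(0, 0) - R_rr_claim))   # expected: 0
print(sp.simplify(ricci(0, 1) - R_ru_claim))   # expected: 0
```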
Using (\[barR\]) we obtain the explicit condition $$\T\equiv{\cal H}\Omega''+{\cal H}'\Omega'+{\textstyle \frac{1}{6}} ({\cal H}''+2)\Omega = 0 \,. \label{traceC}$$ It can be checked that this is a direct *consequence* of equations (\[Neq\_rr\])–(\[Neq\_xx\]). Notice that it is a *linear differential equation for the function* ${\cal H}(r)$, and also linear differential equation for $\Omega(r)$. We have thus obtained *three nontrivial field equations* (\[Neq\_rr\])–(\[Neq\_xx\]) for *two unknown functions* $\Omega(r)$ and ${\cal H}(r)$, and also their consequence (\[traceC\]). Therefore, this coupled system seems to be overdetermined. However, now we prove that the key metric functions $\Omega(r)$ and ${\cal H}(r)$ are, in fact, *solutions of just two coupled equations*. To this end, let us introduce the *auxiliary symmetric tensor* $J_{ab}$ defined as $$J_{ab}\equiv R_{ab}-\tfrac{1}{2}R\,g_{ab} - 4k\,B_{ab} \,. \label{defJab}$$ Using $J_{ab}$, the vacuum field equations (\[GenQGFieldEq\]) of Quadratic Gravity (assuming a constant $R$ and ${\Lambda=0}$) or Einstein–Weyl gravity (with ${\beta=0=\Lambda}$) are simply $$J_{ab}=0 \,. \label{Jab=0}$$ Now, by employing the contracted *Bianchi identities* ${\nabla^b R_{ab}=\frac{1}{2} R_{,a}}$ and the conservation property of the Bach tensor ${\nabla^b B_{ab}=0}$, see (\[Bachproperties\]), we obtain $$\nabla^b J_{ab}\equiv 0 \,. \label{BianchiIdEW}$$ Interestingly, this is actually a *geometrical identity* which is valid without employing any field equations, namely (\[Jab=0\]), or (\[EWfield equations\]) in particular. An explicit evaluation of the identity (\[BianchiIdEW\]) for the metric $ {g}_{ab}$ of the form (\[BHmetric-xy\]) leads to the following equations, which are *always satisfied*: $$\begin{aligned} & \nabla^b J_{rb} = - \Omega^{-3}\Omega'\big(J_{ij}\,{g}^{ij} +{\cal H} J_{rr}\big)-\Omega^{-2}\big({\cal H} J_{rr,r}+ J_{ru,r}+\tfrac{3}{2}{\cal H}' J_{rr}\big) \hspace{-25mm}&\equiv 0 \,, \label{BI_r} \\ & \nabla^b J_{ub} = -2\Omega^{-3}\Omega'\big(J_{uu}+{\cal H} J_{ru}\big)-\Omega^{-2}\big(J_{uu}+ {\cal H} J_{ru}\big)_{,r} &\equiv 0 \,, \label{BI_u} \\ & \nabla^b J_{ib} = \Omega^{-2} J_{ik||l}\,{g}^{kl} &\equiv 0 \,. \label{BI_i}\end{aligned}$$ Here the spatial covariant derivative $_{||}$ is calculated with respect to the spatial part $g_{ij}$ of the Kundt seed metric (\[Einstein-WeylBHC\]). Moreover, a direct calculation of $J_{ab}$ defined by (\[defJab\]) gives $$J_{uu}=-{\cal H}\, J_{ru}\,, \qquad J_{xx}=\mathcal{J}(r)\, g_{xx} = J_{yy} \,, \label{EqDep}$$ where the function $\mathcal{J}(r)$ is defined as $$\mathcal{J} \equiv \Omega^{-2}\,\big[\big({\cal H}\Omega\Omega'\big)'+\Omega^2 -3\T\Omega -\tfrac{1}{3}k \,\big({\cal H}{\cal H}''''+{\cal H}'{\cal H}'''-{\textstyle\frac{1}{2}}{{\cal H}''}^2 +2 \big)\big] \,, \label{calJ}$$ and $$\begin{aligned} J_{rr} = 2\Omega^{-2}&\,\big[-\Omega\Omega''+2{\Omega'}^2+\tfrac{1}{3}k\,{\cal H}''''\big] \,, \label{bJrr}\\ J_{ru} = \Omega^{-2}&\,\big[-\tfrac{1}{2}\big(\Omega^2 {\cal H}\big)'' +3\T\Omega -\tfrac{1}{3}k \big(2\,{\cal H}{\cal H}''''+{\cal H}'{\cal H}'''-{\textstyle\frac{1}{2}}{{\cal H}''}^2 +2\big)\big] \label{bJru}\,.\end{aligned}$$ By substituting the relations (\[EqDep\]) into (\[BI\_u\]) and (\[BI\_i\]), it can be seen immediately that these two conditions are automatically satisfied. Interestingly, the remaining Bianchi identity (\[BI\_r\]) gives a *nontrivial* result.
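Incidentally, the claim above that the trace condition (\[traceC\]) follows from (\[Neq\_rr\])–(\[Neq\_xx\]) amounts to a single algebraic identity: writing $E_1$, $E_2$, $E_3$ for the differences (left-hand side minus right-hand side) of (\[Neq\_rr\]), (\[Neq\_ru\]), (\[Neq\_xx\]), one finds $2{\cal H}E_1+E_2+2E_3=6\,\Omega\,\T$ identically. The following short symbolic check is our own addition and not part of the paper.

```python
# Sketch: verify 2*H*E1 + E2 + 2*E3 = 6*Omega*T, where Ei are (LHS - RHS) of the
# three explicit field equations and T is the trace expression quoted above.
import sympy as sp

r, k = sp.symbols('r k')
Om, H = sp.Function('Omega')(r), sp.Function('H')(r)
d = lambda f, n=1: sp.diff(f, r, n)

X = d(H) * d(H, 3) - sp.Rational(1, 2) * d(H, 2)**2 + 2     # recurring combination
E1 = Om * d(Om, 2) - 2 * d(Om)**2 - sp.Rational(1, 3) * k * d(H, 4)
E2 = d(Om**2 * H, 2) + sp.Rational(2, 3) * k * (2 * H * d(H, 4) + X)
E3 = d(H * Om * d(Om)) + Om**2 - sp.Rational(1, 3) * k * (H * d(H, 4) + X)
T = H * d(Om, 2) + d(H) * d(Om) + sp.Rational(1, 6) * (d(H, 2) + 2) * Om

print(sp.simplify(2 * H * E1 + E2 + 2 * E3 - 6 * Om * T))   # expected: 0
```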
If the metric functions $\Omega(r)$ and ${\cal H}(r)$ satisfy the two field equations ${J_{rr}=0}$ and ${J_{ru}=0}$ then necessarily ${J_{ij}\,g^{ij}\equiv0}$, that is ${J_{xx}\,g^{xx}+J_{yy}\,g^{yy}=2\mathcal{J}(r)=0}$ and thus ${J_{xx}=0=J_{yy}}$. Therefore, we conclude that *all field equations* for the metric (\[BHmetric-xy\]) *reduce just to two key equations*, namely ${J_{rr}=0}$ and ${J_{ru}=0}$. Since ${g^{ab}J_{ab}=0}$, it also implies ${R=0}$ and thus ${\T=0}$, cf. (\[traceC\]). This coupled system of two equations completely determines all possible exact vacuum solutions of the type (\[BHmetric-xy\]) in Einstein–Weyl gravity, and since ${ R=0}$, also in a general Quadratic Gravity. The key point is that, due to the Bianchi identities, the two key equations *imply* the nontrivial field equations ${J_{xx}=0= J_{yy}}$ since necessarily ${\mathcal{J}=0}$, that is using (\[calJ\]) $$\big({\cal H}\Omega\Omega'\big)'+\Omega^2-3\T\Omega = \tfrac{1}{3}k \big({\cal H}{\cal H}''''+{\cal H}'{\cal H}'''-{\textstyle\frac{1}{2}}{{\cal H}''}^2 +2 \big) \,. \label{J=0}$$ The equation ${J_{rr}=0}$ is exactly equation (\[Neq\_rr\]), and equation (\[Neq\_ru\]) is simply ${J_{ru}=0}$ with ${\T=0}$. Finally, substituting ${\T=0}$ into (\[J=0\]), we immediately obtain (\[Neq\_xx\]). This completes the proof of the equivalence. To integrate the field equations, it is necessary to solve the equation (\[Neq\_rr\]). Simultaneously, we must solve the equation $$\big(\Omega^2 {\cal H}\big)'' -6\,\T\Omega= -\tfrac{2}{3}k \big(2\,{\cal H}{\cal H}''''+{\cal H}'{\cal H}'''-{\textstyle\frac{1}{2}}{{\cal H}''}^2 +2\big) \,. \label{bJru0}$$ Remarkably, this equation *can further be simplified* by expressing the term ${{\cal H}''''}$ from (\[Neq\_rr\]). We thus finally obtain *two very simple field equations* $$\begin{aligned} \Omega\Omega''-2{\Omega'}^2 = &\ \tfrac{1}{3}k\,{\cal H}'''' \,, \label{Eq1C}\\ \Omega\Omega'{\cal H}'+3\Omega'^2{\cal H}+\Omega^2 = &\ \tfrac{1}{3}k \big({\cal H}'{\cal H}'''-{\textstyle\frac{1}{2}}{{\cal H}''}^2 +2 \big)\,, \label{Eq2C}\end{aligned}$$ for the two metric functions $\Omega(r)$ and ${\cal H}(r)$. Alternatively, instead of solving the single equation (\[Eq2C\]), it is also possible to solve *any two of the three equations* (\[Neq\_ru\]), (\[Neq\_xx\]), (\[traceC\]).

[10]{} Stelle K S 1977 Renormalization of higher derivative quantum gravity [*Phys. Rev. D*]{} [**16**]{} 953 Salvio A 2018 Quadratic gravity [*Front. Phys.*]{} [**6**]{} 77 Smilga A V 2014 Supersymmetric field theory with benign ghosts [*J. Phys. A*]{} [**47**]{} 052001 Stelle K S 1978 Classical gravity with higher derivatives [*Gen. Relativ. Gravit.*]{} [**9**]{} 353 Holdom B 2002 On the fate of singularities and horizons in higher derivative gravity [*Phys. Rev. D*]{} [**66**]{} 084010 Lü H, Perkins A, Pope C N, and Stelle K S 2015 Black holes in higher derivative gravity [*Phys. Rev. Lett.*]{} [**114**]{} 171601 Lü H, Perkins A, Pope C N, and Stelle K S 2015 Spherically symmetric solutions in higher derivative gravity [*Phys. Rev. D*]{} [**92**]{} 124019 Perkins A 2016 [*Static spherically symmetric solutions in higher derivative gravity*]{} (Ph.D. thesis, Imperial College London) Pravda V, Pravdová A, Podolský J, and Švarc R 2017 Exact solutions to quadratic gravity [*Phys. Rev. D*]{} [**95**]{} 084025 Podolský J, Švarc R, Pravda V, and Pravdová A 2018 Explicit black hole solutions in higher-derivative gravity [*Phys. Rev. D*]{} [**98**]{} 021502(R) Švarc R, Podolský J, Pravda V, and Pravdová A 2018 Exact black holes in quadratic gravity with any cosmological constant [*Phys. Rev. Lett.*]{} [**121**]{} 231104 Weyl H 1919 Eine neue Erweiterung der Relativitätstheorie [*Ann.
der Physik*]{} [**59**]{} 101 Bach R 1921 Zur Weylschen Relativitätstheorie und der Weylschen Erweiterung des Krümmungstensorbegriffs [*Math. Zeitschrift*]{} [**9**]{} 110 Schwarzschild K 1916 Über das Gravitationsfeld eines Massenpunktes nach der Einsteinschen Theorie [*Sitz. Preuss. Akad. Wiss. Berlin*]{} [**7**]{} 189 Stephani H, Kramer D, MacCallum M, Hoenselaers C, and Herlt E 2003 [*Exact Solutions of Einstein’s Field Equations*]{} (Cambridge: Cambridge University Press) Griffiths J and Podolský J 2009 [*Exact Space-Times in Einstein’s General Relativity*]{} (Cambridge: Cambridge University Press) Holdom B and Ren J 2017 Not quite a black hole, [*Phys. Rev. D*]{} [**95**]{} 084034 Bičák J and Podolský J 1999 Gravitational waves in vacuum spacetimes with cosmological constant. II. Deviation of geodesics and interpretation of nontwisting type N solutions [*J. Math. Phys. (N.Y.)*]{} [**40**]{} 4506 Podolský J and Švarc R 2012 Interpreting spacetimes of any dimension using geodesic deviation [*Phys. Rev. D*]{} [**85**]{} 044057 Wald R M 1984 [*General Relativity*]{} (Chicago: University of Chicago Press) Zhong-Ying Fan and Lü H 2015 Thermodynamical first laws of black holes in quadratically-extended gravities [*Phys. Rev. D*]{} [**91**]{} 064009 Wald R M 1993 Black hole entropy is the Noether charge [*Phys. Rev. D*]{} [**48**]{} R3427(R) Iyer V and Wald R M 1994 Some properties of Noether charge and a proposal for dynamical black hole entropy [*Phys. Rev. D*]{} [**50**]{} 846 [^1]: In four dimensions, the Gauss–Bonnet term ${R_{abcd}R^{abcd}-4R_{ab}R^{ab}+R^2}$ does not contribute to the field equations. [^2]: For brevity, in this paper the symbol ${r\to\infty}$ means ${|r|\to\infty}$, unless the sign of $r$ is explicitly specified. [^3]: There may also exist other solutions such that their expansion contains logarithmic or exponential terms in $r$. [^4]: Of course, provided $r_0$ is within the convergence radius od , . [^5]: To make the identification, we have relabeled the arguments of the metric functions $A(r), B(r)$ of [@LuPerkinsPopeStelle:2015b] to $\bar r$. [^6]: We obtained this value from the Mathematica code kindly provided by H. Lü, cf. also [@PerkinsPhD] for a very close value of $b$.
1
--- abstract: 'In this note, we show that the normalized Hochschild co–chains of an associative algebra with a non–degenerate, symmetric, invariant inner product are an algebra over a chain model of the framed little discs operad which is given by cacti. In particular, in this sense they are a BV algebra up to homotopy and the Hochschild cohomology of such an algebra is a BV algebra whose induced bracket coincides with Gerstenhaber’s bracket. To show this, we use a cellular chain model for the framed little disc operad in terms of normalized cacti. This model is given by tensoring our chain model for the little discs operad in terms of spineless cacti with natural chain models for $(S^1)^{\times n}$ adapted to cacti.' address: 'University of Connecticut, Department of Mathematics, Storrs, CT 06269' author: - 'Ralph M. Kaufmann' title: 'A proof of a cyclic version of Deligne’s conjecture via Cacti' --- Introduction {#introduction .unnumbered} ============ In this note, we expand our chain model of the little discs operad which we gave in terms of spineless cacti to a chain model for the framed little discs operad in terms of normalized cacti. Extending the philosophy of [@del], we then show that the chain model for the framed little discs operad naturally acts on the normalized Hochschild cochains of a unital associative algebra with a non–degenerate, symmetric, invariant bi–linear pairing. In fact, as in [@del], this operation can again be seen as a discretization of the calculations for the relations of a BV algebra up to homotopy on the chains of the operad $\Arc$ of [@KLP]. In [@cact] it is proven, that the operad of framed little discs is equivalent to the operad of cacti. Moreover, we gave a description of cacti in terms of a bi–crossed product of spineless cacti and an operad built on the monoid $S^1$ which we showed to be homotopy equivalent to the semi–direct product of these operads [@cact]. Furthermore, we gave a chain model for spineless cacti in terms of normalized spineless cacti which we showed to give a natural solution to Deligne’s conjecture [@del]. Using the description in terms of the bi–crossed and semi–direct products, we obtain a chain model for the operad of framed little discs, by tensoring the chains of normalized spineless cacti with the chains for the operad built on the monoid $S^1$. In order to prove the necessary relations on the chain level one can translate the respective relations from the relations in the $\Arc$ operad using the method described in [@cact; @KLP]. As it turns out, in order to translate the relations and thus to establish the homotopy BV structure on the chain level, one needs a refinement of the cell decomposition on the semi-direct product to be able to accommodate all the operations which were used in the $\Arc$ operad picture. This refinement uses cell decompositions on the $S^1$ factors which are induced by regarding them as the lobe they represent. This leads to a combinatorial description in terms of planar planted black and white (b/w) bipartite trees with additional data called spines. In the language of cacti [@cact], the additional data keeps track of the position of the local zeros. On these trees, there are linear orders at each vertex, which may differ from the induced linear order of the planar planted trees. This forces us to look at non–rooted trees or equivalently to invert the orientation of edges. 
According to the general calculus for “correlation functions” defined by trees, to achieve such an inversion one needs to have a non–degenerate pairing, which is symmetric and invariant. This is the assumption we have to make on our algebra. With this assumption, we can rewrite the action of the cellular chains as “operadic correlation functions” for decorated trees. In this description the operation of the chains of the framed little discs operad becomes apparent. The results and techniques we present below can also be employed in other situations, which we comment on at the end of the paper. Notably one can use it to obtain an action of cells of a ribbon graph cell decomposition of moduli space on cyclic complexes. This should ultimately lead to string topology like operations of the cells of moduli space of decorated bordered surfaces on the free loop space of a compact manifold extending the operations of the string PROP or dioperad. The basic constructions for this are announced below. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Alain Connes for an enlightening discussion and Jim Stasheff for his valuable comments. We also thank the Max–Planck–Institute for Mathematics in Bonn for providing the atmosphere and stimulus to conceptualize and complete this paper. Background ========== Graphs {#Graphs} ------ In this section, we formally introduce the graphs and the operations on graphs which we will use in our analysis of cacti. This is the approach as given in Appendix B of [@cact] in which cacti are characterized as a certain type of ribbon graph. Namely, a cactus is a marked treelike ribbon graph with a metric. ### Graphs {#graphs} A graph $\Gamma$ is a tuple $(V_{\Gamma},F_{\Gamma}, \imath_{\Gamma}: F_{\Gamma}\rightarrow F_{\Gamma},\del_{\Gamma}:F_{\Gamma} \rightarrow V_{\Gamma})$ where $\imath_{\Gamma}$ is an involution $\imath_{\Gamma}^2=id$ without fixed points. We call $V_{\Gamma}$ the vertices of $\Gamma$ and $F_{\Gamma}$ the flags of $\Gamma$. The edges $E_{\Gamma}$ of $\Gamma$ are the orbits of the flags under the involution $\imath_{\Gamma}$. A directed edge is an edge together with an order of the two flags which define it. In case there is no risk of confusion, we will drop the subscripts $\Gamma$. Notice that $f\mapsto (f,\imath(f))$ gives a bijection between flags and directed edges. We also call $F_v(\Gamma):=\del^{-1}(v)\subset F_{\Gamma}$ the set of flags of the vertex $v$ and call $|F_v({\Gamma})|$ the valence of $v$ and denote it by $\val(v)$. We also let $E(v)=\{\{f,\imath(f)\}|f\in F_{v}\}$ and call these edges the edges incident to $v$. The geometric realization of a graph is given by considering each flag as a half-edge and gluing the half-edges together using the involution $\imath$. This yields a one-dimensional CW complex whose realization we call the realization of the graph. ### Trees A graph is connected if its realization is. A graph is a tree if it is connected and its realization is contractible. A rooted tree is a pair $(\t,v_0)$ where $\t$ is a tree and $v_0\in V_{\t}$ is a distinguished vertex. In a rooted tree there is a natural orientation for edges, in which the edge points toward the root. That is we say $(f,\imath (f))$ is naturally oriented if $\del(\imath(f))$ is on the unique shortest path from $\del(f)$ to the root. This means that the set $E(v)$ splits up into incoming and outgoing edges. Given a vertex $v$, we let $|v|$ be the number of incoming edges and call it the arity of $v$. 
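For readers who like to experiment, the flag formalism translates directly into code. The following minimal sketch is our own illustration (the class and helper names are not from the paper): it encodes a graph as the tuple $(V_{\Gamma},F_{\Gamma},\imath_{\Gamma},\del_{\Gamma})$ and recovers edges and valences from it.

```python
# Sketch of the flag formalism: a graph is (V, F, iota, partial) with iota a
# fixed-point-free involution on flags; edges are the orbits {f, iota(f)}.
from dataclasses import dataclass

@dataclass
class Graph:
    vertices: set
    flags: set
    iota: dict      # involution on flags: iota[iota[f]] == f and iota[f] != f
    partial: dict   # boundary map flags -> vertices

    def edges(self):
        return {frozenset((f, self.iota[f])) for f in self.flags}

    def flags_at(self, v):
        return {f for f in self.flags if self.partial[f] == v}

    def valence(self, v):
        return len(self.flags_at(v))

# Example: the path a -- b -- c (four flags, two edges); b has valence 2.
G = Graph(vertices={'a', 'b', 'c'},
          flags={'f1', 'f2', 'f3', 'f4'},
          iota={'f1': 'f2', 'f2': 'f1', 'f3': 'f4', 'f4': 'f3'},
          partial={'f1': 'a', 'f2': 'b', 'f3': 'b', 'f4': 'c'})
print(len(G.edges()), G.valence('b'))   # 2 2
```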
A vertex $v$ is called a leaf if $|v|=0$. Notice that the root is the only vertex for which $|v_0|=\val(v_0)$. For all other vertices $v\neq v_0$ one has $|v|=\val(v)-1$. A bi-colored or black and white (b/w) tree is a tree $\t$ together with a map $\color:V\rightarrow \mathbb{Z}/2\mathbb{Z}$. Such a tree is called bipartite if for all $f\in F_{\t}:\color(\del(f))+\color(\del(\imath(f)))=1$, that is edges are only between black and white vertices. We call the set $V_w:=\color^{-1}(1)$ the white vertices. If $(f,\imath (f))$ is a naturally oriented edge, we call the edge white if $\del(\imath(f))\in V_w$ and denote the set of white edges by $E_w$. Likewise we call $V_b:=\color^{-1}(0)$ the black vertices and let $E_b$ be the set of black edges, where a naturally oriented edge $(f,\imath (f))$ is called black if $\del(\imath(f))\in V_b$. The black leaves in a rooted black and white tree are called tails. The edges incident to the tails are called tail edges and are denoted $E_{tail}$. For tails, we will only consider those flags of the tail edges which are not incident to the tail vertices and call them $F_{tail}$. ### Planar trees and Ribbon graphs A ribbon graph is a connected graph whose vertices are of valence at least two together with a cyclic order of the set of flags of the vertex $v$ for every vertex $v$. A graph with a cyclic order of the flags at each vertex gives rise to bijections $N_v:F_v\rightarrow F_v$ where $N_v(f)$ is the next flag in the cyclic order. Since $F=\amalg F_v$ one obtains a map $N:F\rightarrow F$. The orbits of the map $N \circ \imath$ are called the cycles or the boundaries of the graph. These sets have the induced cyclic order. Notice that each boundary can be seen as a cyclic sequence of directed edges. The directions are as follows. Start with any flag $f$ in the orbit. In the geometric realization go along this half-edge starting from the vertex $\del(f)$, continue along the second half-edge $\imath(f)$ until you reach the vertex $\del(\imath(f))$ then continue starting along the flag $N(\imath(f))$ and repeat. A tree with a cyclic order of the flags at each vertex is called planar. A planar tree has only one cycle $c_0$. Planar planted trees -------------------- A planted planar tree is a rooted planar tree $(\t,v_0)$ together with a linear order of the set of flags at $v_0$. Such a tree has a linear order of all flags as follows: Let $f$ be the smallest element of $\del^{-1}(v_0)$, then every flag appears in $c_0$ and defining the flag $f$ to be the smallest gives a linear order on the set of all flags. This linear order induces a linear order on all oriented edges and on all un-oriented edges, by restricting to the edges in the orientation opposite the natural orientation i.e. pointing away from the root. We denote the latter by $\prec$ and its restriction to $E(v)$ or $F(v)$ by $\prec_v$. We will equivalently consider planar planted trees as defined above or as a rooted planar trees whose root vertex has valence one. The bijection in one direction is given by adding a new root vertex and one new edge such that the induced linear structure on the old root is the given one. This tree is called the realization of the planar planted tree. In the other direction the bijection is simply given by contracting the unique edge incident to the root, but retaining the linear order. 
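In the same spirit, the cycles of a ribbon graph can be computed as the orbits of $N\circ\imath$ once the successor map $N$ encoding the cyclic orders is given, and the Euler characteristic count then yields the genus used in the next subsection. The sketch below is again our own illustration with assumed helper names.

```python
# Sketch: boundary cycles of a ribbon graph as orbits of N o iota, and the
# genus from 2 - 2g = |V| - |E| + #cycles.
def boundary_cycles(flags, iota, nxt):
    """nxt[f] = the flag following f in the cyclic order at the vertex partial(f)."""
    seen, cycles = set(), []
    for f in flags:
        if f in seen:
            continue
        cyc, g = [], f
        while g not in seen:
            seen.add(g)
            cyc.append(g)
            g = nxt[iota[g]]          # follow N o iota
        cycles.append(cyc)
    return cycles

def genus(num_vertices, num_edges, num_cycles):
    return (2 - (num_vertices - num_edges + num_cycles)) // 2

# Example: one vertex with two interleaved loops a, b (cyclic order a1 b1 a2 b2),
# i.e. the figure eight, whose thickened surface is a torus: one cycle, genus 1.
flags = ['a1', 'b1', 'a2', 'b2']
iota = {'a1': 'a2', 'a2': 'a1', 'b1': 'b2', 'b2': 'b1'}
nxt = {'a1': 'b1', 'b1': 'a2', 'a2': 'b2', 'b2': 'a1'}
cycles = boundary_cycles(flags, iota, nxt)
print(len(cycles), genus(1, 2, len(cycles)))   # 1 1
```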
In the realization of a planar planted tree, we call the unique edge incident to the (new) root $v_{root}$ the root edge and denote it by $e_{root}$, and we set $f_{root}$ to be the flag of the root edge which is not incident to the root. Also $E_{root}=\{e_{root}\}, F_{root}=\{f_{root}\}$. An angle at a vertex $v$ in a planar tree is a pair of two flags incident to $v$ of which one is the immediate successor of the other in the cyclic order of $F_v$. There is a bijection between angles, flags and edges by associating to an angle its bigger flag and to the latter the unique edge defined by it.

The genus of a ribbon graph and its surface
-------------------------------------------

The genus $g(\Gamma)$ of a ribbon graph $\Gamma$ is given by $2-2g(\Gamma)=|V_\Gamma|-|E_{\Gamma}|+\#cycles$. The surface $\Sigma(\Gamma)$ of a ribbon graph $\Gamma$ is the surface obtained from the realization of $\Gamma$ by thickening the edges to ribbons. I.e. replace each 0-simplex $v$ by a closed oriented disc $D(v)$ and each 1-simplex $e$ by $e\times I$ oriented in the standard fashion. Now glue the boundaries of $e\times I$ to the appropriate discs in their cyclic order according to the orientations. Notice that the genus of $\Sigma(\Gamma)$ is $g(\Gamma)$ and that $\Gamma$ is naturally embedded as the spine of this surface.

### Treelike and marked ribbon graphs

A ribbon graph together with a distinguished cycle $c_0$ is called [*treelike*]{} if

- the graph is of genus $0$ and

- for all cycles $c_i\neq c_0$: if $f\in c_i$ then $\imath(f)\in c_0$.

In other words each edge is traversed by the cycle $c_0$. Therefore there is a cyclic order on all (non-directed) edges, namely the cyclic order of $c_0$. A [*marked ribbon graph*]{} is a ribbon graph together with a map $\mk:\{cycles\} \rightarrow F_{\Gamma}$ satisfying the conditions

- For every cycle $c$ the directed edge $\mk(c)$ belongs to the cycle.

- All vertices of valence two are in the image of $\mk$, that is $\forall v,\val(v)=2$ implies $v\in Im(\del\circ\mk)$.

Notice that on a marked treelike ribbon graph there is a linear order on each of the cycles $c_i$. This order is defined by upgrading the cyclic order to the linear order $\prec_i$ in which $\mk(c_i)$ is the smallest element.

### Dual b/w tree of a marked ribbon graph

Given a marked treelike ribbon graph $\G$, we define its dual tree to be the colored graph whose black vertices are given by $V_{\G}$ and whose set of white vertices is the set of cycles $c_i$ of $\G$. The set of flags at $c_i$ are the flags $f$ with $f\in c_i$ and the set of flags at $v$ are the flags $\{f:f \in c_0, \del(f)=v\}$. The involution is given by $\imath_{\t}(f)=N(f)$ if $f\in c_0$ and $\imath_{\t}(f)=N^{-1}(f)$ else. This graph is a tree and is b/w and bipartite by construction. It is also planar, since the $c_i$ and the sets $F(v)$ have a cyclic order and therefore also $F_v\cap c_0$. It is furthermore rooted by declaring $\del(\mk(c_0))$ to be the root vertex; declaring $\mk(c_0)$ to be the smallest element makes it into a planted tree. An equivalent definition is that there is an edge between a black vertex $v$ and a white vertex $c_i$ if and only if $v$ lies on the boundary of the cycle $c_i$, i.e. $v\in \del(c_i):= \{\del(f):f\in c_i\}$.

### Spineless marked ribbon graphs {#spinlessgraph}

A marked treelike ribbon graph is called [*spineless*]{}, if

- There is at most one vertex of valence $2$. If there is such a vertex $v_0$ then $\del(\mk(c_0))=v_{0}$.
- The induced linear orders on the $c_i$ are compatible with that of $c_0$, i.e. $f\prec_i f'$ if and only if $\imath(f')\prec_0 \imath(f)$. ### Graphs with a metric A metric $w_{\Gamma}$ for a graph is a map $E_{\Gamma}\rightarrow \mathbb{R}_{>0}$. The (global) re-scaling of a metric $w$ by $\lambda$ is the metric $ \lambda w: \lambda w(e)=\lambda w(e)$. The length of a cycle $c$ is the sum of the lengths of its edges $length(c)=\sum_{f\in c} w(\{f,\imath(f)\})$. A metric for a treelike ribbon graph is called normalized if the length of each non-distinguished cycle is $1$. ### Marked ribbon graphs with metric and maps of circles. For a marked ribbon graph with a metric, let $c_i$ be its cycles, let $|c_i|$ be their image in the realization and let $r_i$ be the length of $c_i$. Then there are natural maps $\phi_i:S^1\rightarrow |c_i|$ which map $S^1$ onto the cycle by starting at the vertex $v_i:=\del(\mk(c_i))$ and going around the cycle mapping each point $\theta\in S^1$ to the point at distance $\frac{\theta}{2\pi}r_i$ from $v_i$ along the cycle $c_i$. ### Contracting edges The contraction $(\bar V_{\Gamma}, \bar F_{\Gamma},\bar \imath,\bar \del)$ of a graph $(V_{\Gamma},F_{\Gamma},\imath,\del)$ with respect to an edge $e=\{f,\imath(f)\}$ is defined as follows. Let $\sim$ be the equivalence relation induced by $\del(f)\sim\del(\imath(f))$. Then let $\bar V_{\Gamma}:=V_{\Gamma}/\sim$, $\bar F_{\Gamma}=F_{\Gamma}\setminus\{f,\imath(f)\}$ and $\bar \imath: \bar F_{\Gamma}\rightarrow \bar F_{\Gamma}, \bar\del: \bar F_{\Gamma}\rightarrow \bar V_{\Gamma}$ be the induced maps. For a marked ribbon graph, we define the marking of $(\bar V_{\Gamma}, \bar F_{\Gamma},\bar \imath,\bar \del)$ to be $\overline{\mk}(\bar c)=\overline{\mk(c)}$ if $\mk(c)\notin\{f,\imath(f)\}$ and $\overline{\mk}(\bar c)=\overline{N\circ \imath(\mk (c))}$ if $\mk(c)\in\{f,\imath(f)\}$, viz. the image of the next flag in the cycle. ### Labelling graphs By a labelling of the edges of a graph $\Gamma$ by a set $S$, we simply mean a map $E_{\Gamma}\rightarrow S$. A labelling of a ribbon graph $\Gamma$ by a set $S$ is a map $\lab\{$cycles of $\Gamma\}\rightarrow S$, we will write $c_i:=\lab^{-1}(i)$. By a labelling of a black and white tree by a set $S$ we mean a map $\lab:E_w\rightarrow S$. Again we will write $v_i:=\lab^{-1}(i)$. ### Planar planted bipartite labelled trees with white leaves We set $\wlbptree(n)$ to be the set of planar planted bipartite trees which are labelled from $\{1,\dots,n\}$ with white leaves only. To avoid cluttered notation, we also denote the respective free Abelian group and the $k$-vector space with basis $\wlbptree(n)$ by the same name and let $\wlbptree$ be their union respectively direct sum. Cacti ----- A cactus with $n$ lobes is a $\{0,1, \dots ,n\}$ labelled marked treelike ribbon graph with a metric. I.e. The set $\Cacti(n)$ is the set of these graphs. $\Cact(n)\subset \Cacti(n)$ is the subset of spineless graphs and called the spineless cacti or alternatively cacti without spines. $\Cacti^1(n)\subset \Cacti(n)$ is the subset of normalized graphs, called normalized cacti, and finally $\Cact^1(n)=\Cact(n)\cap\Cacti^1(n)$ is the set of normalized spineless cacti. ### Cactus terminology The edges of a cactus are traditionally called arcs or segments and the cycles of a cactus are traditionally called lobes. The vertices are sometimes called the marked or special points. 
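The contraction of edges described above (which also underlies the projection in Remark \[setrem\] below, where spine edges are contracted) is equally simple in the flag formalism. The sketch builds on the `Graph` class from the earlier snippet and treats only non-loop edges; the marking and the metric of a marked ribbon graph would have to be updated separately, as described in the text.

```python
# Sketch: contract the edge {f, iota(f)} -- delete its two flags and identify
# its endpoints; all other flags keep their involution and boundary map.
def contract_edge(G, f):
    g = G.iota[f]
    v, w = G.partial[f], G.partial[g]            # endpoints to be identified
    merge = lambda u: v if u == w else u
    flags = G.flags - {f, g}
    return Graph(vertices={merge(u) for u in G.vertices},
                 flags=flags,
                 iota={h: G.iota[h] for h in flags},
                 partial={h: merge(G.partial[h]) for h in flags})
```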
Furthermore the distinguished cycle $c_0$ is called the outside circle or the perimeter and the vertex $\del(\mk(c_0))$ is called the global zero. And the vertices $\del(\mk(c_i)),i\neq 0$ are called the local zeros. In pictures these are represented by lines rather than fat dots. \[setrem\] It is clear that as sets $\Cacti(n)=\Cact(n)\times (S^1)^{\times n}$ and $\cact(n)= \cact^1(n)\times \mathbb{R}_{>0}^{\times n}$. For the first statement one notices for each lobe $v_i$ there is a unique lowest intersection point $b$ which is the vertex of the outgoing edge of $v$. Thus there is a canonical map $\phi'_i:S^1\rightarrow |c_i|$ which starts at $b$ and goes around the cycle opposite its natural orientation. So to each cycle we associate $(\phi'_i)^{-1}(\del(\mk(c_i)))$ that is the co-ordinate of the spine as measured by $\phi'_i$. This gives the projection onto the factors $(S^1)^{\times n}$. The projection onto the first factor is given by forgetting the spines, i.e. contracting the edges $\mk(c_i)$ if $\val(\del(\mk(c_i)))=2$ and changing the marking to the unique marking which makes the graph spineless. For the second statement the first projection is given by homogeneously scaling the weights of the edges of each non-marked cycle so that their lengths are one. The projection to the factors of $\mathbb{R}_{>0}$ are given by associating to each lobe its length. In both cases the inverse map is clear. The topological type of a spineless cactus in $\cact^1(n)$ is defined to be its dual b/w tree $\t \in \wlbptree(n)$. \[arctoedge\] Notice that the arcs of a cactus correspond to the set $E_{arcs}=E(\t)\setminus (\{e_{root}\})$. This bijection can be defined as follows. To a given $e\in E_{arcs}, e=\{w,b\}$ with $b$ black and $w$ white, we associate the unique arc between the points corresponding to the black vertices $b$ and $b-$ where $b-$ is the black vertex immediately preceding $b$ in the cyclic order of $v$. In other words if $e=\{f,\imath(f)\}$ with $f\in F_v$. Let $f-$ be the flag immediately preceding $f$ in the cyclic order at $v$, then $b-=\del(\imath(f-))$. Notice that if $|v|=0$ then and only then $f-=f$. \[typelemma\] A spineless cactus is uniquely determined by its topological type and the lengths of the segments. The CW complex of normalized spineless cacti -------------------------------------------- We recall from [@del] the CW complexes $\CWcact(n)$. For more details and pictures the reader is referred to [@del; @cact]. \[lengthrem\] For a normalized spineless cactus the lengths of the arcs have to sum up to the radius of the lobe and the number of arcs on a given lobe represented by a white vertex $v$ is $\val(v)=|v|+1$. Hence the lengths of the arcs lying on the lobe represented by a vertex $v$ are in 1-1 correspondence with points of the simplex $|\Delta^{|v|}|$. The coordinates of $|\Delta^{|v|}|$ naturally correspond to the arcs of the lobe represented by $v$ on one hand and on the other hand in the dual b/w graph to the edges incident to $v$. ### The tree differential in the spineless case {#diffdef} Let $\t\in \wlbptree$. We set $E_{angle}=E(\t)\setminus (E_{leaf}(\t)\cup \{e_{root}\})$ and we denote by $\num_E:E_{angle} \rightarrow \{1,\dots,N\}$ the bijection which is induced by the linear order $\prec^{(\t,p)}$. Let $\t\in \wlbptree$, $e\in E_{angle}$, $e=\{w,b\}$, with $w\in V_w$ and $b\in V_b$. Let $e-=\{w,b-\}$ be the edge preceding $e$ in the cyclic order $\prec^{\t}_w$ at $w$. 
Then $\del_e(\tau)$ is defined to be the planar tree obtained by collapsing the angle between the edge $e$ and its predecessor in the cyclic order of $w$ by identifying $b$ with $b-$ and $e$ with $e-$. Formally $w=\whitevert(e), e-=\prec^{\t}_w(e),\{b-\}= \del(e-)\cap V_b(\t)$, $V_{\del_e(\tau)}=V(\t)/(b\sim b-)$, $E_{\del_e(\tau)}=E_{\tau}/(e\sim e-)$. The linear order of $\del_e(\t)$ is given by keeping the linear order at all vertices which are not equal to $\bar b$ where $\bar b$ is the image of $b$ and $b-$. For $\bar b$ the order is given by extending the linear order $(\In(\bar b), \prec_{\bar b}^{\del_e(\t)}) =(\In(b-)\amalg\In(b), \prec^{\t}_{b-}\amalg \prec^{\t}_{b}) $ —the usual order on the union of totally ordered sets– to $E(\bar b)$ by declaring the image of $e$ and $e-$ to be the minimal element. We define the operator $\del$ on the space $\wlbptree$ to be given by the following formula: $\del(\t) := \sum_{e\in E_{angle}} (-1)^{\num_E(e)-1} \del_e (\tau) $. ### The Cell Complex We define $\wlbptree(n)^k$ to be the elements of $\wlbptree(n)$ with $|E_w|=k$. For $\t \in \wlbptree$ we define $\D(\t):=\times_{v \in V_w(\tau)}\D^{|v|}$. We define $C(\t)=|\D(\t)|$. Notice that $\dim(C(\t))=|E_w(\t)|$. Given $\D(\t)$ and a vertex $x$ of any of the constituting simplices of $\D(\t)$ we define the $x$-th face of $C(\t)$ to be the subset of $|\D(\t)|$ whose points have the $x$-th coordinate equal to zero. We let $\CWcact(n)$ be the CW complex whose k-cells are indexed by $\t \in \wlbptree(n)^k$ with the cell $C(\t)=|\D(\t)|$ and the attaching maps $e_{\t}$ defined as follows. We identify the $x$-th face of $C(\t)$ with $C(\t')$ where $\t'=\del_x(\t)$. This corresponds to contracting an edge of the cactus if its weight goes to zero (see Remark \[arctoedge\]) so that $\Delta(\del \t)$ is identified with $\del (\Delta(\t))$. We define the topology of $\cact^1(n)$ to be that induced by the bijection with $\CWcact(n)$. Via Remark \[setrem\] this gives a topology to the spaces $\Cact(n),\cacti(n)$ and $\cacti^1(n)$. The (quasi)-operad structure ---------------------------- ### The operad of cacti The gluing maps for cacti $$\circ_i:\cacti(n)\otimes \cacti(m)\rightarrow \cacti(n+m-1)$$ are defined on elements $(c,c')\mapsto c\circ_i c'$ as follows - Scaling the weight function $w'$ of $c'$ by the length $\frac{r_i}{R}$ where $r_i$ is the length of the cycle $c_i$ of the cactus $c$ and $R$ is the length of the cycle $c_0$ of $c'$. - Identifying the realization of the cycle $c_0$ of $c'$ with the cycle $c_i$ of $c$ via the maps $\phi_0(c')$ and $\phi_i(c)$, with the orientation on the second $S^1$ reversed, as usual. These maps together with the $\Sn$ action permuting the labels turn the collection $\{\cacti(n)\}$ into an operad $\cacti$. The collection $\{\cact(n)\}$ forms the suboperad $\cact$. ### The quasi-operad of normalized cacti We recall from [@cact] that a quasi-operad is the generalization of a (pseudo)-operad in which the axiom of associativity is omitted and the others are kept. The gluing maps for normalized cacti $$\circ_i:\cacti^1(n)\otimes \cacti^1(m)\rightarrow \cacti^1(n+m-1)$$ are defined on elements $(c,c') \mapsto c\circ_i c'$ simply by identifying the realization of the cycle $c_0$ of $c'$ with the cycle $c_i$ of $c$ via the maps $\phi_0(c')$ and $\phi_i(c)$ again with the orientation on the second $S^1$ reversed. These maps together with the $\Sn$ action permuting the labels turn the collection $\{\cacti^1(n)\}$ into a homotopy associative quasi-operad $\cacti^1$. 
The collection $\{\cact^1(n)\}$ forms a homotopy associative quasi-suboperad $\cact^1$ of $\cacti^1$ [@cact]. Relations among cacti --------------------- \[cactthm\] [@cact] Normalized cacti are homotopy equivalent through quasi-operads to the cacti. The same holds for the (quasi)-suboperads of normalized spineless cacti and spineless cacti. [@cact] Normalized cacti are quasi-isomorphic as quasi-operads to cacti and normalized spineless cacti are quasi-isomorphic as quasi-operads to spineless cacti. In particular in both cases the homology quasi-operads are operads and are isomorphic as operads. ### Remarks on the bi-crossed product In this section we recall the construction of the bi-crossed product as it was given in [@cact] to which we refer the reader for more details. First notice that there is an action of $S^1$ on $\Cact(n)$ given by rotating the base point [*clockwise*]{} (i.e. in the orientation opposite the usual one of $c_0$) around the perimeter. We denote this action by $$\rho^{S^1}: S^1 \times \Cact(n) \rightarrow \Cact(n)$$ With this action we can define the twisted gluing $$\begin{aligned} \label{circtheta} \circ_i^{S^1}:\Cact(n) \times S^1(n) \times \Cact(m) &\rightarrow& \Cact(n+m-1)\nn\\ (C,\theta,C')&\mapsto& C \circ \rho^{S^1}(\theta_i,C') =: C \circ_i^{\theta_i}C'\end{aligned}$$ Given a cactus without spines $C\in \Cact(n)$ the orientation reversed perimeter (i.e. going around the outer circle [*clockwise*]{} i.e. reversing the orientation of the source of $\phi_0$) gives a map $\Delta_C: S^1 \rightarrow (S^1)^n$. As one goes around the perimeter the map goes around each circle once and thus the map $\Delta_C$ is homotopic to the diagonal $ \Delta_C (S^1) \sim \Delta(S^1)$. We can use the map $\Delta_C$ to give an action of $S^1$ and $(S^1)^{\times n}$. $$\rho^C: S^1 \times(S^1)^{\times n}\stackrel{\Delta_C} {\rightarrow} (S^1)^{\times n} \times (S^1)^{\times n} \stackrel{\mu^n}{\rightarrow}(S^1)^{\times n}$$ here $\mu_n$ is the diagonal multiplication in $(S^1)^{\times n}$ and $\bar \circ_i$ is the operation which forgets the $i$-th factor and shuffles the last $m$ factors to the $i$-th, …, $i+m-1$st places. Set $$\begin{gathered} \label{perturbdef} \circ_i^C:(S^1)^{\times n} \times (S^1)^{\times m} \stackrel{(id \times \pi_i)(\Delta) \times id} {\longrightarrow} (S^1)^{\times n} \times S^1\times (S^1)^{\times m}\\ \stackrel{id \times \rho^C}{\longrightarrow} (S^1)^{\times n} \times (S^1)^{\times m} \stackrel{\bar\circ_i}{\longrightarrow}(S^1)^{\times n+m-1}\end{gathered}$$ These maps are to be understood as perturbations of the usual maps $$\begin{gathered} \circ_i:(S^1)^{\times n} \times (S^1)^{\times m} \stackrel{(id \times \pi_i)(\Delta) \times id} {\longrightarrow} (S^1)^{\times n} \times S^1\times (S^1)^{\times m}\\ \stackrel{id \times \rho}{\longrightarrow} (S^1)^{\times n} \times (S^1)^{\times m} \stackrel{\bar\circ_i}{\longrightarrow}(S^1)^{\times n+m-1}\end{gathered}$$ where now $\rho$ is the diagonal action of $S^1$ on $(S^1)^{\times n}$. The maps $\circ_i$ and the permutation action on the factors give the collection $\{\mathcal{S}^1(n)\}=(S^1)^{\times n}$ the structure of an operad. In fact this is exactly the usual construction of an operad built on a monoid. \[cactbicross\] [@cact] The operad of cacti is the bi–crossed product of the operad $\cact$ of spineless cacti with the operad $\mathcal {S}^1$ based on $S^1$. 
Furthermore this bi–crossed product is homotopic to the semi–direct product of the operad of cacti without spines with the circle group $S^1$. $$\cacti \cong \cact \bowtie {\mathcal S}^1 \simeq \cact \rtimes {\mathcal S}^1$$ The multiplication in the bi-crossed product is given by $$(C,\theta) \circ_i (C',\theta') = (C\circ_i^{\theta_i} C', \theta\circ_{i}^{C'}\theta')$$ The multiplication in the semi-direct product is given by $$(C,\theta) \circ_i (C',\theta') = (C\circ_i^{\theta_i} C', \theta\circ_{i}\theta')$$ Also, normalized cacti are homotopy equivalent to cacti which are homotopy equivalent to the bi-crossed product of normalized cacti with $\mathcal{S}^1$ and the semi-direct product with $\mathcal{S}^1$, where all equivalences are as quasi-operads $$\cacti^1 \sim \cacti \cong \cact \bowtie {\mathcal S}^1 \sim\cact^1 \bowtie {\mathcal S}^1\sim \cact^1 \rtimes {\mathcal S}^1$$ The proof of the first statement is given by verifying that the two operad structures coincide. For the second statement one notices that the homotopy diagonal is homotopy equivalent to the usual one and that one can find homotopies to the diagonal which continuously depend on the cactus. The third statement follows from contracting the factors $\mathbb{R}^n_{>0}$ and using Theorem \[cactthm\]. The homology operad of $\cacti$ is the semi-direct product of $\cacti$ and the homology of the operad $\mathcal{S}^1$ built on the monoid $S^1$. Relation to (framed) little discs --------------------------------- [@cact] The operad $\cact$ is equivalent to the little discs operad and the operad $\cacti$ is equivalent to the framed little discs operad. The latter result has been first stated by Voronov in [@Vor]. A CW decomposition for $\cacti^1$ and a chain model for the framed little discs =============================================================================== A $\Zz$ decoration for a black and white bipartite tree is a map $\Zdec: V_w \rightarrow \Zz$. \[firstcells\] The quasi–operad of normalized cacti $\cacti^1$ has a CW–decomposition which is given by cells indexed by planar planted bi–partite trees with a $\Zz$ decoration. The $k$ cells are indexed by trees with $k-i$ white edges and $i$ vertices marked by $1$. Moreover cellular chains are a chain model for the framed little discs operad and form an operad. This operad is isomorphic to the semi–direct product of the chain model of the little discs operad given by $CC_*(\cact)$ of [@del] and the cellular chains of the operad built on the monoid $S^1$. For the CW decomposition we note that as spaces $\cacti^1(n)= \cact^1(n) \times (S^1)^{\times n}$ see Remark \[setrem\]. Now viewing $S^1=[0,1]/0\sim1$ as a 1-cell together with the 0-cell given by $0\in S^1$ the first part of the proposition follows immediately, by viewing the decoration by 1 as indicating the presence of the 1-cell of $S^1$ for that labelled component in the product of cells. To show that the cellular chains indeed form an operad, we use the fact that the bi–crossed product is homotopy equivalent to the semi–direct product in such a way, that the action of a cell $S^1$ in the bi–crossed product is homotopic to the diagonal action. This is just the observation that the diagonal and the diagonal defined by a cactus are homotopic. Since a semi-direct product of a monoid with an operad is an operad the statement follows. Alternatively one could just remark, that there is also an obvious functorial map induced by the diagonal for these cells. 
The chains are a chain model for the framed little discs operad since $\cacti^1(n)$ and $\cacti(n)$ are homotopy equivalent and the latter is equivalent to the framed little discs operad. Although the above chain model is the one one would expect to use for framed little discs, it does not have enough cells for our purposes. In order to translate the proofs in the arc complex given in [@KLP] into statements about the Hochschild complex, we will need a slightly finer cell structure then the one above. After having used the larger structure one can reduce to the cell model with less cells as they are obviously equivalent. A spine decoration $\sdec$ for a planted planar bi–partite tree is a $\Zz$ decoration together with the marking of one angle at each vertex labelled by one and a flag at each vertex labelled by zero. We call the set of such trees which are $n$-labelled by $\swlbptree(n)$ and again use this notation as well for the free Abelian group and the $k$ vector space generated by these sets. We let $\swlbptree$ be their union respectively direct sum. In pictures we show the angle marking as a line emanating from the vertex which lies between the marked edges and an edge marking by a line through the respective edge. For an example see Figure \[cactexamples\] VI. We sometimes omit the edge marking if the marked edge is the outgoing edge, e.g. in Figure \[bvtopartcact\]. A realization $\hat \t$ of a planar planted bi–partite tree $\t$ with a spine decoration is a realization of $\t$ as a planar planted tree (the root is fixed to be black) together with one additional edge inserted into each marked angle connecting to a new vertex. We call the set of these edges spine edges and denote them by $E_{spine}$. Likewise set $V_{spine}$ to be the set of new vertices called the spine vertices which are defined to be black. The spine edges are then white edges. Like for tails, we will only consider the flags of $E_{spine}$, which are not incident to the spine vertices. We call the set of these flags $F_{spine}$. Notice that this tree is the dual tree of a cactus with an explicit marking of the flags $\mk(c_i)$. Given a cactus, we call its dual tree with explicit markings its topological type. If $\t$ had tails, we will split the set of tails of the realization into spines and free tails which are the images of the original tails. $E_{tails}(\hat\t)=E_{ftails}(\hat \t)\amalg E_{spine}(\hat \t)$ and likewise for the respective flags. A spine decoration induces a new linear order on the flags incident to the white vertices of its realization. This order $\prec'_v$ is given by the cyclic order at $v$ and declaring the smallest element to be the spine flag in case $\Zdec(v)=1$ and the marked flag in case $\Zdec(v)=0$. This gives a canonical identification of $\Fnum:F_v \rightarrow \{0,\dots, |v|\}$. \[secondcells\] The spaces $\cacti^1(n)$ of the quasi–operad of normalized cacti $\cacti^1$ have CW–decompositions $\CWtwo(n)$ whose cells are indexed by spine decorated planar planted bi–partite trees $(\t,\sdec)\in \swlbptree$ corresponding to the topological type of the cacti. The $k$ cells are indexed by $n$-labelled trees with $k-i$ white edges and $i$ markings by $1$. Moreover cellular chains of the complex above are a chain model for the framed little discs operad and form an operad. The decomposition is almost as in the preceding proposition except that in the product $\cact^1(n)\times (S^1)^{\times n}$ we decompose each factor $S^1$ as indicated by the lobe it presents. I.e. 
for the $S^1$ associated to the $n$–th lobe we chose the 0–cells to be corresponding to the marked points and 1–cells corresponding to the arcs with gluing given by attaching the 1–cells to the 0–cells representing the endpoints of the arcs. (E.g. 4 0-cells and 4 1-cells for the lobe 1 in Figure \[cactexamples\] VIa). In terms of trees, the arcs correspond to the angles and thus we take a marking of an arc to be the inclusion of the corresponding 1-cell in the tensor product of the cell complexes. Likewise the edges correspond to the marked points and we take a marking of an edge to be the inclusion of the corresponding 0-cell in the tensor product of the cell complexes. For the operadic properties, we remark that moving the spine along an arc and then gluing, which is what is parameterized by marking an angle on the lobe $i$ of $c$ when calculating $c\circ_ic'$, has the effect of moving the base point of $c'$ along a complete sequence of arcs until it coincides with a marked point in the composition of the two cacti. This is one side of the bi-crossed product. The effect on the local zeros of $c'$ of the movement of the base point is to move them corresponding to structure maps of the bi-crossed product above. The local zeros thus move through a full arc if the global zero passes through the arc on which they lie. Therefore the $\circ_i$ product of two cells results in sums of cells. Marking an arc of $c'$ obviously gives rise to a sum of cells. Alternatively, one can again just remark that there is a functorial map for the diagonal for this cell model, since there is such a map on the first factor by [@del] and its existence is obvious on the second factor. The associativity follows from the associativity of cacti. Let $C(\t)$, $\t\in \swlbptree(n)$ be the cells in the CW-complex and $\dot C(\t)$ their interior. Then $P(\t)=\dot C(\t)\times \mathbb{R}_{>0}^n, \t\in\swlbptree$ give a pseudo-cell decomposition $Cacti(n)=\amalg_{\t} P(\t)$. It is easy to see that $Im(P(\t)\circ_i P(\t'))=\amalg_k P(\t_k)$ for some $\t_k$ and $\circ_i$ is a bijection onto its image. Let $\circ_i^{comb}$ be the quasi-operad structure pulled back from $\CWtwo$ to $\swlbptree$ and $\circ_i^{+}$ be the operad structure pulled back from the pseudo-cell decomposition of $\Cacti$ to $\swlbptree$. Then these two operad structures coincide over $\mathbb{Z}/2\mathbb{Z}$ thus yielding associativity up to signs. The signs are just given by shuffles, c.f.§\[signsection\], and are associative as well. Pulling back the operadic compositions, the differential and the grading yields a dg-operad structure on $\swlbptree$ which is isomorphic to that of $\CCCacti:=\bigoplus_n CC_*(\CWtwo(n))$. The operation is briefly as follows: given two trees $\t,\t'\in\swlbptree$ the product is $\t\circ^{comb}_i\t'=\sum \pm \t_k$ where the $ \t_k$ are the trees obtained by the following procedure. Delete $v_i$ to obtain an ordered collection of trees $(\t^c_l,\prec'_v)$ then graft these trees to $\t'$ keeping their order by first identifying the spine edge or marked edge of $v_i$ with the root edge of $\t'$ and then grafting the rest of the branches to $\t'$ so that their original order is compatible with that of $\t'$. Lastly contract the image of the root edge of $\t'$ and declare the image of the root of $\t$ to be the new root. The sign is as explained in \[signsection\]. Due to the isomorphism between $\CCCacti$ and $\swlbptree$ we will drop the superscript $comb$. 
= The GBV structure ----------------- The picture for the GBV structure is essentially that of [@KLP] and goes back to [@CS1]. It appears here is another guise, however, since we are now dealing with cells in $\CCCacti$. First notice that there is a product on the chain level induced by the spineless cactus given by the rooted tree $\t_n$ depicted in Figure \[cactexamples\]. Explicitly: $a\cdot b\mapsto \gamma(\t^b_2; a,b)$ where $\gamma$ is the usual operadic composition. This product gives $\CCCacti$ the structure of an associative algebra with unit. Moreover the product is commutative up to homotopy. The homotopy is given by the usual operation which is induced by $\gamma(\t_1;a,b)$. This also induces a bracket which is Gerstenhaber up to homotopy. This can be seen by translating the statements from [@KLP; @del], but it also follows from the BV description of the bracket below (Figure \[bvcactfig\]). To give the BV structure, let $O'$ be the tree with one white vertex, no additional black edges, no free tails and a spine. Notice that the operation $\delta$ induced by $a\mapsto \gamma( O',a)$ on $\CCCacti$ breaks up on products of chains as follows, see Figure \[bvtopartcact\] $$\begin{aligned} \label{threedel} \delta(ab) &\sim& \vardel(a,b) + (-1)^{|a||b|}\vardel(b,a)\nn\\ \delta(abc) &\sim& \vardel(a,b,c)+(-1)^{|a|(|b|+|c|)}\vardel(b,c,a)\nn\\ &&+(-1)^{|c|(|a|+|b|)}\vardel(c,a,b)\end{aligned}$$ $$\begin{aligned} \delta(a_1 a_2\cdots a_n)&\sim& \sum_{i=0}^{n-1} (-1)^{\s(c^i,a)} \delta(a_{c^i(1)}, \dots, a_{c^i(n)})\end{aligned}$$ where $c$ is the cyclic permutation and $\s(c^i,a)$ is the sign of the cyclic permutations of the graded elements $a_i$. = 4in \[partbv\] $$\vardel(a,b,c) \sim (-1)^{(|a|+1)|b|} b \vardel(a,c) +\vardel(a,b)c -\vardel(a)bc$$ [**Proof.**]{} The proof is contained in Figure \[cactpartbvfig\]. = 4in \[GBVprop\] The chains $\CCCacti$ are a GBV algebra up to homotopy. The BV structure follows from the Lemma \[partbv\] via the calculation: $$\begin{aligned} \delta(abc) &\sim& \vardel(a,b,c)+(-1)^{|a|(|b|+|c|)}\vardel(b,c,a) +(-1)^{|c|(|a|+|b|)}\vardel(c,b,a)\nn\\ &\sim&(-1)^{(|a|+1)|b|} b \vardel(a,c) +\vardel(a,b)c -\vardel(a)bc + (-1)^{|a|} a \vardel(b,c)\nn\end{aligned}$$ $$\begin{aligned} &&+(-1)^{|a||b|}\vardel(b,a)c -(-1)^{|a|}a\vardel(b)c +(-1)^{(|a|+|b|)|c|} a \vardel(b,c)\nn\\ &&+(-1)^{|b|(|a|+1|)+|a||c|}b\vardel(c,a)c - (-1)^{|a|+|b|} ab\vardel(c)\nn\\ &\sim&\delta(ab)c+(-1)^{|a|} a\delta(bc) + (-1)^{|a+1||b|} b\delta (ac) -\delta(a)bc\nn\\ &&-(-1)^{|a|} a\delta(b)c-(-1)^{|a|+|b|}ab\vardel(c)\end{aligned}$$ Figure \[bvcactfig\] contains the homotopy relating the BV operator to the bracket. The action ========== Assumption ---------- Now we fix $A$ to be a finite dimensional associative algebra with unit $1$ together with an inner product $\eta: A\otimes A\rightarrow k$ which is non-degenerate and both i) invariant: $\eta(ab,c)=\eta(a,bc)$ and ii) symmetric: $\eta(a,b)=\eta(b,a)$. Such an algebra is called a Frobenius algebra. We will use $CH$ to stand for Hochschild cochains $CH^n(A,A):= Hom(A^{\otimes n},A)$. Actually, it would be enough to have a non-degenerate inner-product $\eta$ on $A\simeq CH^0(A,A)$ for which i) holds on $HH^0(A,A)$, that is up to homotopy for $A$. The condition ii) will then hold automatically up to homotopy since $CH^0(A,A)$ is commutative up to homotopy [@G]. 
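A concrete example may help to fix ideas; it is our own toy example and is not used in the paper. The two-dimensional algebra $A=k[x]/(x^2)$ with $\eta(a,b)$ defined as the coefficient of $x$ in $ab$ is a Frobenius algebra in the above sense. The sketch below checks symmetry, invariance and non-degeneracy numerically and computes the matrix $\eta^{ij}$ of the Casimir element appearing in the next subsection.

```python
# Sketch: A = k[x]/(x^2) with basis (e_0, e_1) = (1, x) and eta(a, b) = eps(a b),
# where eps picks out the coefficient of x.
import itertools
import numpy as np

m = np.zeros((2, 2, 2))          # structure constants: e_i * e_j = sum_k m[i, j, k] e_k
m[0, 0] = [1, 0]                 # 1 * 1 = 1
m[0, 1] = m[1, 0] = [0, 1]       # 1 * x = x * 1 = x
m[1, 1] = [0, 0]                 # x * x = 0

def mult(a, b):
    return np.einsum('i,j,ijk->k', a, b, m)

eps = np.array([0.0, 1.0])       # eps(alpha + beta x) = beta
basis = list(np.eye(2))
eta = np.array([[mult(ei, ej) @ eps for ej in basis] for ei in basis])

print(np.allclose(eta, eta.T))                      # symmetric
print(np.linalg.matrix_rank(eta) == 2)              # non-degenerate
for a, b, c in itertools.product(basis, repeat=3):  # invariance eta(ab, c) = eta(a, bc)
    assert np.isclose(mult(a, b) @ eta @ c, a @ eta @ mult(b, c))

eta_inv = np.linalg.inv(eta)     # eta^{ij}: the Casimir element is C = sum eta^{ij} e_i (x) e_j
print(eta_inv)                   # [[0, 1], [1, 0]], i.e. C = 1 (x) x + x (x) 1
```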
If one wishes to furthermore relax the other conditions “up to homotopy”, one can fix that $\eta$ needs to be non-degenerate only on $HH^0(A,A)$ and only require that $HH^0(A,A)$ has to be finite dimensional. In this case, the operadic operations defined below will give operations $f:A^{\otimes n}\rightarrow HH^0(A,A)$ and will thus give actions only up to homotopy. This is enough to get the BV structure on $CH^*(A,A)$, but not quite enough to lift the action to the chain level. We are currently working on such a construction in formal geometry and defer the reader to [@stringarc].

Notation
--------

Let $(e_i)$ be a basis for $A$ and let $C:= e_i \eta^{ij} \otimes e_j$ be the Casimir element, i.e. $\eta^{ij}$ is the inverse to $\eta_{ij}=\eta(e_i,e_j)$. With the help of the non–degenerate bilinear form, we identify $$\label{identify} CH^n(A,A)= Hom(A^{\otimes n},A) \cong A\otimes A^{* \otimes n} \cong A^{* \otimes n+1}$$ We would like to stress the order of the tensor products we choose. This is the order from right to left, which works in such a way that one does not need to permute tensor factors in order to contract. If $f \in Hom(A^{\otimes n},A)$, we denote by $\tilde f$ its image in $A^{* \otimes n+1}$, explicitly $\tilde f(a_0, \dots ,a_n)=\eta(a_0,f(a_1,\dots,a_n))$. With the help of (\[identify\]) we can pull back Connes’ operators $b$ and $B$ (see e.g. [@Loday]) on the spaces $A^{\otimes n}$ to their duals and to $Hom(A^{\otimes n},A)$. Also let $t:A^{\otimes n}\rightarrow A^{\otimes n}$ be the operator given by performing a cyclic permutation $(a_1,\dots,a_n) \mapsto (-1)^{n-1}(a_n,a_1,\dots, a_{n-1})$ and $N:=1+t+ \cdots +t^{n-1}:A^{\otimes n}\rightarrow A^{\otimes n}$. It is easy to check that the operator induced by $b$ is exactly the Hochschild differential; we will denote this operator by $\del$. We write $\Delta$ for the operator induced by $B$. It follows that $\Delta^2=0$ and $\Delta\del+\del\Delta=0$.

Assumption {#assumption}
----------

To make the formulas simpler we will restrict to normalized Hochschild cochains $\overline {CH}^n(A,A)$ which are the $f\in CH^n(A,A)$ which vanish when evaluated on any tensor containing $1\in A$ as a tensor factor (see e.g. [@Loday]). On the normalized cochains the operator $\Delta$ is explicitly defined as follows: for $f\in \overline {CH}^n(A,A)$ $$\eta(a_0,(\Delta f)(a_1,\dots, a_{n-1})):=\eta(1,f\circ N(a_0,\dots, a_{n-1}))$$

Correlators from decorated trees
--------------------------------

We will use the notation of tensor products indexed by arbitrary sets, see e.g. [@Deligne]. For a linearly ordered set $I$ denote by $\bigcup_I a_i$ the product of the $a_i$ in the order dictated by $I$. Let $\t$ be the realization of a spine decorated planted planar b/w tree, $v \in V_w$, and $f\in \overline {CH}^{|v|}(A,A)$. We define $Y(v,f):A^{F_v(\t)}\rightarrow k$ by $$Y(v, f) (\bigotimes_{i\in F_v(\t)} a_i):=\eta(a_{\Fnum^{-1}(0)}, f(a_{\Fnum^{-1}(1)}\otimes \dots \otimes a_{\Fnum^{-1}(|v|)}))$$ Set $V_{b-int}:=V_b(\t)\setminus (V_{tail}\cup \{v_{root}\} \cup V_{spine})$. For $v\in V_{b-int}$ we define $Y(v): A^{F_v(\t)}\rightarrow k$ by $$Y(v)(\bigotimes_{i\in F_v(\t)} a_i)=\eta(1, \bigcup_{i\in F_{v}} a_i)$$ \[treeactionone\] Let $\t$ be the realization of a planar planted b/w tree with $n$ free tails and $k$ labels and $f_i \in \overline{CH}^{n_i}(A,A)$.
For such a tree there is a canonical identification $\{v_{root}\} \cup V_{ftail} \rightarrow \{0,1,\dots,|V_{ftail}|\}$ which is given by sending $v_{root}$ to $0$ and enumerating the tails in the linear order induced by the planted planar tree. Set $E_{int}(\t):= E(\t)\setminus (E_{tail}\cup E_{root} \cup E_{spine})$ and for $(a_0,\dots,a_n)\in A^{\otimes (\{v_{root}\}\cup V_{ftail})}$ set $$\begin{gathered} Y(\t)(f_1,\dots,f_k)(a_0,\dots,a_n) := \\ \left(\bigotimes_{v\in V_w(\t)} Y(v,f_{\lab(v)})\bigotimes_{v\in V_{b-int}} Y_v\right)\left( (\bigotimes_{i\in F_{ftail}(\t)\cup \{F_{root}\}}a_i)(\bigotimes_{j\in F_{spine}} 1) \otimes C^{\otimes E_{int}(\t)}\right)\end{gathered}$$ In other words, decorate the root flag by $a_0$, the free tail flags by $a_1,\dots,a_n$, the spines by $1$ and the edges by $C$ and then contract tensors according to the decoration at the white vertices while using the product at the black vertices. \[degreecount\] We extend the definition above by $$Y(\t)(f_1,\dots,f_k)(a_0,\dots,a_n)=0 \text{ if } |v_{\lab^{-1}(i)}| \neq n_i=:|f_i|$$ The foliage operator -------------------- Let $F$ be the foliage operator of [@del] applied to trees. This means that $F(\t)$ is the formal sum over all trees obtained from $\t$ by gluing an arbitrary number of free tails to the white vertices. The extra edges are called free tail edges $E_{ftail}$ and the extra vertices $V_{ftail}$ are defined to be black and are called free tail vertices. Using the trees defined in Figure \[cactexamples\] this corresponds to the formal sum $F(\t):= \sum_n l_n \circ_v \t$ where the operadic composition is the one for b/w trees which are not necessarily bi-partite (see [@del]). In our current setup we should first form $\tilde F(\t):=\sum_n \t_n \circ_v \t$ and then delete the images of all leaf edges together with their white vertices of the $\t_n$ to obtain $F(\t)$. Signs {#signsection} ----- The best way to fix signs of course is to work with tensors indexed by edges like in [@del; @KS]. For this one fixes a free object $L$ (free $\mathbb{Z}$-module or $k$-vector space) generated by one element of degree $\pm 1$ and calculates signs using $L^{\otimes E_w(\t)}$ before applying the foliage operator while using $L^{\otimes E_{weight}}$ after applying the foliage operator, where $E_{weight}=E_w\cup E_{root}\cup E_{ftail}\cup E_{spine}$. Explicitly, we fix the signs to be given as follows. For any tree $\t'$ in the linear combination above, we take the sign of $\t'$ to be the sign of the permutation which permutes the set $E_{weight}$ in the order induced by $\prec$ to the order where at each vertex one first has the root if applicable, then all non–tail edges, then all the free tails, and if there is a spine edge, the spine. The explicit signs above coincide with usual signs [@Loday] for the operations and the operators $b$ and $B$ and also coincide with the signs of [@G] for the $\circ_i$ and hence for the brace operations. The signs for the operations corresponding to operations on the Hochschild side are fixed by declaring the symbols “,” and “{” to have degree one. \[treeaction\] For $\t\in \swlbptree$ let $\hat\t$ be its realization. We define the operation of $\t$ on $\overline {CH}(A,A)$ by $$\eta(a_0,{\t(f_1,\dots, f_n)}(a_1,\dots, a_N)):=Y(F(\hat\t))(f_1,\dots, f_n)(a_0,\dots,a_N)$$ Notice that due to the Definition \[degreecount\] the right hand side is finite. 
Examples {#example} -------- We will first regard the tree $O'$ with one white vertex, no additional black edges, no free tails and a spine, see Figure \[cactexamples\]. For a function $f\in \overline {CH}^n$ we obtain: $$\begin{gathered} Y(F(O'))(f)(a_0,\dots,a_{n-1})= \eta(1,f(a_0,\dots a_{n-1})+(-1)^{n-1}f(a_{n-1}, a_0, \dots, a_{n-2}) +\dots)\\ =\eta(a_0,\Delta(f)(a_1,\dots,a_{n-1})) \end{gathered}$$ Let $\t'_{n,i}$ be the tree of Figure \[cactexamples\]. Then the operation corresponds to $$Y(F(\t'_{n,i}))(f;g_1,\dots,g_n)(a_0,\dots,a_{N})=\\ \eta(1,f\{' g_{i+1}, \dots, g_n, g_1, \dots, g_i\}(a_{(2)}, a_0,a_{(1)}))$$ where $N=|f|+\sum|g_i|-n-1$ and we used the short hand notation $$\begin{gathered} f\{' g_{j+1}, \dots, g_n, g_1, \dots, g_j\}(a_{(2)}, a_0,a_{(1)}) = \sum \pm f(a_{k+1},\dots, a_{i_{j+1}-1},\\ g_{j+1}(a_{i_{j+1}}, \dots, a_{i_{j+1}+|g_{j+1}|}), \dots, a_{i_n-1}, g_n(a_{i_n}, \dots, a_{i_n+|g_n|}), \dots , a_N, a_0, \\a_1, \dots, a_{i_1-1},g_1(a_{i_1}, \dots, a_{i_1+|g_1|}), \dots, a_{i_j-1}, g_j(a_{i_j}, \dots, a_{i_j+|g_j|}), \dots , a_k)\end{gathered}$$ where the sum runs over $1 \leq i_1 \leq \dots \leq i_j \leq \dots \leq k \leq \dots \leq i_{j+1} \leq \dots \leq i_{n} \leq N:$ $i_l + |g_l|\leq i_{l+1},i_j + |g_j|\leq k $ and the signs are as explained above. \[mainthm\] The Hochschild cochains of a finite-dimensional associative algebra with a non–degenerate, symmetric, invariant, bilinear form are an algebra over the chains of the framed little discs operad. This operation is compatible with the differentials. We will use the cellular chains $\CCCacti$ as a model for the chains of the framed little discs operad. It is clear that \[treeaction\] defines an action. On the Hochschild side, the $\circ_i$ operations are substitutions of the type $f_i=\psi(g_1,\dots,g_n)$. For $\CCCacti$ the $\t \circ_i \t'$ operations are the pull-back via the foliage operator of all possible substitutions of elements of $F(\t), \t \in \CCCacti$ into the position $i$ of $F(\t'$). The action $Y$ then projects onto the substitution $f_i=\psi(g_1,\dots,g_n)$ so that the action is operadic. Explicitly the substitution $t \circ^s_i t'$ for planted planar bi-partite trees with a decoration $\sdec$ and additional free tails is given as follows: Say the number of tails of $t'$ coincides with $|F(v_i)|$. In this case replace the vertex $v_i$ of $t$, its edges and the black vertices corresponding to the edges with the tree $t'$ matching the flags of $v_i$ with the tails of $t'$ by first matching the root edge with the marked flag of $v_i$ and then using the linear order. Lastly contract the image of the root flag. Otherwise set $t \circ^s_i t'=0$. With this definition it is easy to see that $F(\t \circ \t')=F(\t) \circ^s_i F(\t')$. The compatibility of the Hochschild differential with the differential of the cell complex follows from the relevant statements for $\t_n$ and $\t_n^b$, which are a straightforward but lengthy calculation (see e.g. [@del; @G]), together with the calculations above §\[example\] which are easily modified to show that $(\del O')(f)=\Delta(\del(f))$ and that $(\del\t'_{n,i})(f,g_1, \dots, g_n)= (\del\t'_{n,i})(f,g_1, \dots, g_n) \pm (\t'_{n,i})(\del f,g_1, \dots, g_n) + \sum_i\pm (\t'_{n,i})(f,g_1, \dots, \del(g_i), \dots, g_n)$ via an even more lengthy but still straightforward calculation. This then verifies the claim in view of the compatibility of the differentials and the respective operad structures. 
Alternatively, in view of the operation of the foliage operator, the compatibilities follow from a straightforward translation of trees with tails into operations on the Hochschild complex. The compatibility of the differential then follows from the almost identical definition of the differential for trees with tails of [@del] and that in the Hochschild complex as $\del(f)=f\circ \cup - (-1)^{|f||}\cup \circ f$. The normalized Hochschild cochains of an algebra as above are a BV algebra up to homotopy. This could of course have been checked directly without recourse to the operation of a chain model, but we do not know of any source for this result. It also seems to be difficult to guess the right homotopies as Gerstenhaber did in the non-cyclic case [@G]. The content of the next corollary was expected [@Connes], but we again could not find a source for it. The Hochschild cohomology of an algebra as above is a BV algebra, such that the induced bracket is the Gerstenhaber bracket. Lastly, since our second version of cellular chains of Proposition \[secondcells\] are a subdivision of the cell decomposition of Proposition \[firstcells\], we can also use the latter cell decomposition. The normalized Hochschild cochains of an algebra as above are an algebra over the semi–direct product over a chain model of the little discs operad and a chain model for the operad $\mathcal{S}$ built on the monoid $S^1$. The operation of the little discs operad by braces, viz. the original Deligne conjecture as discussed in [@del] for Frobenius algebras, corresponds to the decorations in which $\Zdec \equiv 0$ and the decorated edge is always the outgoing edge. In the Theorem \[mainthm\] we can relax the conditions and implications as explained in §\[assumption\]. Variations and relation to string topology ========================================== In terms of the setup of operadic correlation functions which we presented above, it is possible to analyze several generalizations. First, one can generalize from trees to more general graphs. This description then yields an action of the pseudo-cells of moduli spaces of curves or bordered surfaces [@ribbon]. One can also consider different types of chains, such as Hochschild chains or cyclic (co)–chains. The latter also works well with omitting markings to the trees or regarding unmarked graphs [@ribbon]. In [@KLP] we gave a map called loop which maps the so–called $\Arc$ operad to ribbon graphs with marked points on the cycles of the graph. In the case of no punctures the analysis of this map in terms of Strebel differentials yields another proof of Penner’s theorem [@P] on the homotopy equivalence of the suboperad of quasi–filling arcs and the moduli space of decorated bordered surfaces [@ribbon]. This in turn gives a cell decomposition of the aforementioned moduli space. Moreover the correspondence induces an operadic structure on ribbon graphs by pulling back the gluings from the $\Arc$ operad. Using the operadic correlation functions it is straightforward to obtain an action of the cells on a cyclic complex. In a similar spirit, an action of the framed little discs on a cyclic complex given by the $Tot$ of a special type of cyclic cosimplicial complex has been announced in [@MS]. Constructing the action in terms of our correlation functions then should allow us to construct an operation of the cells of moduli space on such a complex. 
Moreover a further decoration of the cells by $\mathbb{Z}/2\mathbb{Z}$ produces an operad which acts on the cyclic complex of such an algebra and is compatible with the differential [@ribbon]. The $A_{\infty}$– versions of these statements could be deduced from a conjectural “blow–up” of the cacti operads which is presented in [@woods]. Here the cells are given by products of associahedra and cyclohedra and are indexed by trees of the type appearing in [@KS]. Finally using the cyclic description of the free loop space or the iterated integral representation of [@Merk] together with the results mentioned above, we expect to be able to obtain an action of the (decorated) pseudo-cells of moduli space action on the free loop space of a compact manifold which extends the operation of the string PROP [@CS1; @CS2] thus completing a further step of the string topology program [@stringarc]. [99]{} A. Connes. Private communication. P. Deligne. [*Catégories tannakiennes.*]{} The Grothendieck Festschrift, Vol. II, 111–195, M. Chas and D. Sullivan. [*String Topology.*]{} Preprint math.GT/9911159 M. Chas and D. Sullivan. [*Closed string operators in topology leading to Lie bialgebras and higher string algebra.*]{} in: The legacy of Niels Henrik Abel, 771–784, Springer, Berlin, 2004. M. Gerstenhaber. [*The cohomology structure of an associative ring*]{}, Ann. of Math. 78 (1963), 267-288. R. M. Kaufmann. [*On several varieties of cacti and their relations.*]{} Algebraic & Geometric Topology 5 (2005), 237-300. R. M. Kaufmann. [*On Spineless Cacti, Deligne’s Conjecture and Connes–Kreimer’s Hopf Algebra.* ]{} in: N. Tongring and R. C. Penner “Woods Hole Mathematics. Perspectives in Mathematics and Physics”, Series on Knots and Everything - Vol. 34, World Scientific 2004. R. M, Kaufmann. [*Operads, Moduli of Surfaces and Quantum Algebras*]{}, in “Wood’s Hole Mathematical Meetings”, World Scientific.  To appear. R. M. Kaufmann. [*Arcs, Ribbons, Moduli spaces and operations on Hochschild*]{}. In preparation. R. M. Kaufmann [*String topology operations of moduli space via the arc operad*]{}. In preparation. R. M.Kaufmann, M. Livernet and R. B. Penner. [*Arc Operads and Arc Algebras.*]{} Geometry and Topology 7 (2003), 511-568. M. Kontsevich and Y. Soibelman. [*Deformations of algebras over operads and Deligne’s conjecture.*]{} Conférence Moshé Flato 1999, Vol. I (Dijon), 255–307, Math. Phys. Stud., 21, Kluwer Acad. Publ., Dordrecht, 2000. J–L. Loday [*Cyclic homology*]{}. Appendix E by Mar[í]{}a O. Ronco. Second edition. Chapter 13 by the author in collaboration with Teimuraz Pirashvili. Grundlehren der Mathematischen Wissenschaften , 301. Springer-Verlag, Berlin, 1998. J. E. McClure and J. H. Smith, [*Operads and cosimplicial objects: an introduction.*]{} in: Axiomatic, enriched and motivic homotopy theory, 133–171, NATO Sci. Ser. II Math. Phys. Chem., 131, Kluwer Acad. Publ., Dordrecht, 2004. S. A. Merkulov. [*De Rham model for string topology.*]{} Int. Math. Res. Not. 2004, no. 55, 2955–2981. R. C. Penner. [*Decorated Teichmüller theory of bordered surfaces.*]{} Comm. Anal. Geom. 12 (2004), no. 4, 793–820. A. A.  Voronov [*Notes on universal algebra.*]{} Preprint math.QA/0111009
--- abstract: 'The Casimir effect is a force arising in the macroscopic world as a result of radiation pressure of vacuum fluctuations. It thus plays a key role in the emerging domain of nano-electro-mechanical systems (NEMS). This role is reviewed in the present paper, with discussions of the influence of the material properties of the mirrors, as well as the geometry dependence of the Casimir effect between corrugated mirrors. In particular, the lateral component of the Casimir force and restoring torque between metal plates with misaligned corrugations are evaluated.' author: - 'Cyriaque Genet$^1$, Astrid Lambrecht$^2$, and Serge Reynaud$^2$' title: The Casimir effect in the nanoworld --- Introduction {#intro} ============ The Casimir force was predicted in $1948$ by H.B.G. Casimir as an attractive force between two perfectly reflecting, plane and parallel mirrors in vacuum [@Casimir48]. The force has been measured in different experiments with an increasing control of the experimental conditions. This has been considered as an important aim which should allow an accurate comparison between theoretical predictions and experimental observations [@Milonni94; @LamoreauxResource99; @Reynaud01; @GenetIAP02; @LambrechtPoincare]. These advances have been reviewed in a number of papers, for example [@LambrechtPoincare; @Bordag01; @Milton05] and in a special issue of the New Journal of Physics [@NJP06]. Meanwhile, it has been realized that the Casimir force is a dominant force at micron or sub-micron distances, and is thus clearly an important aspect in the domain of micro- and nano-oscillators (MEMS, NEMS) [@BuksPRB2001; @ChanScience2001; @ChanPRL2001] now emerging from modern nanofabrication techniques [@EkinciRSI2005]. While the Casimir force has primarily been considered as a source of stiction between mobile parts, it is now recognized as an essential source of actuation to be used in the design of MEMS and NEMS. In both fundamental and technological contexts, it is extremely important to take into account the real experimental situations, which largely differ from the ideal conditions considered by Casimir. We review below some theoretical tools which have shown their efficiency for a general formulation of the Casimir effect, accounting for the material properties of the interacting plates as well as for the effect of non-planar boundary geometries. Idealized Casimir force {#sec:1} ======================= The Casimir force and energy between two perfectly reflecting, plane and parallel mirrors immersed in quantum vacuum have the following forms $$\begin{aligned} F_{\rm Cas} = \frac{\pi^{2}\hbar c}{240}\frac{A}{L^{4}} \ \ , \ \ E_{\rm Cas} = - \frac{\pi^{2}\hbar c}{720}\frac{A}{L^{3}}. \label{FEcas}\end{aligned}$$ These expressions correspond to an attractive force $F_{\rm Cas}$ and a binding energy $E_{\rm Cas}$. Remarkably, they depend only on geometrical quantities, the area $A$ of the mirrors and their distance $L$ ($A\gg L^{2}$), and fundamental constants, the Planck constant $\hbar$ and the speed of light $c$. Imperfect reflection {#sec:3} ==================== Experiments are performed with real metallic mirrors, which are good reflectors only at frequencies below their plasma frequency $\omega_{\rm P}$, which depends on the properties of the conduction electrons in the metal.
The effect of imperfect reflection on the Casimir force and energy has been recognized long time ago [@Lifshitz56; @Schwinger78] though it has been described with good accuracy only recently [@Lamoreaux99; @Lambrecht00; @KlimPRA00]. We recall below the scattering theory of the Casimir force which has been developed and used to this aim [@Jaekel91; @GenetPRA03; @LambrechtNJP06]. We begin with perfectly plane and parallel mirrors, separated by a distance $L$. The two mirrors form a Fabry-Perot cavity and the fluctuations of the intracavity fields propagating back and forth along the cavity axis can be calculated in terms of the fluctuations of the incoming free-space fields. The field modes are characterized by their frequency $\omega$, transverse wavevector ${\bf k}$ with components $k_{x},k_{y}$ in the plane of the mirrors, and by their polarization $p$. Time invariance of the problem, as well as transverse spatial translation invariance (along $x$ and $y$) ensure that the frequency, the transverse wavevector and the polarization are conserved quantities throughout the scattering process on the cavity. The scattering couples only the free vacuum modes with opposite signs for the component $k_{z}$ of the wavevector along the longitudinal $z$ axis of the cavity. We write $r_{\bf k}^{p}[\omega]$ the reflection amplitude of the mirror $i=1,2$ as seen from the inner side of the cavity. These amplitudes obey general physical properties of causality, unitarity and high frequency transparency. The spectral density of the vacuum intracavity fields is changed with respect to that of free-fields outside the cavity. The ratio of energy inside the cavity to energy outside the cavity is fixed, for a given mode, by the following function $$\begin{aligned} g_{\bf k}^{p}[\omega] = \frac{1-|\rho_{\bf k}^{p}[\omega]|^{2}}{|1-\rho_{\bf k}^{p}[\omega]|^{2}} \ \ , \ \ \rho_{\bf k}^{p}[\omega] = r_{\bf k}^{p}[\omega]_{1}r_{\bf k}^{p}[\omega]_{2}e^{2ik_{z}L}.\end{aligned}$$ This statement constitues a theorem which has been demonstrated for lossless as well as lossy mirrors [@GenetPRA03; @Barnett96]. It does not depend on the state of the fields and is therefore valid for vacuum fluctuations as well as for thermal fluctuations, assuming thermal equilibrium. We do not discuss here the issue of thermal dependence of the Casimir effect (see for example the recent review [@Brevik06]) and restrict our attention to the zero temperature limit. The force is the difference in radiation pressure between inner and outer faces of the mirrors, integrated over all the modes. Using analyticity properties, the force and energy may be written as integrals over imaginary frequencies $\omega =i\xi$ $$\begin{aligned} F&=&\frac{\hbar A}{\pi} \sum_{p}\int\frac{{\rm d}^{2}{\bf k}}{4\pi^{2}}\int_{0}^{\infty}{\rm d}\xi \kappa[i\xi]\frac{\rho_{\bf k}^{p}[i\xi]}{1-\rho_{\bf k}^{p}[i\xi]} \ \ , \nonumber \\ E&=&\frac{\hbar A}{2\pi}\sum_{p}\int\frac{{\rm d}^{2}{\bf k}}{4\pi^{2}}\int_{0}^{\infty}{\rm d}\xi \ln \left(1-\rho_{\bf k}^{p}[i\xi]\right). \label{FEscatt}\end{aligned}$$ $\kappa[i\xi]=\sqrt{{\bf k}^{2}+\xi^{2} / c^{2}}$ is the longitudinal component of the wavevector evaluated for imaginary frequencies. The expressions (\[FEscatt\]) are regular for any physical model of the reflection amplitudes. High frequency transparency of any real mirror ensures that the integrals are convergent, and free from the divergences usually associated with the infinitness of vacuum energy. 
They reproduce the Lifshitz expression for the Casimir force [@Lifshitz56; @Schwinger78] when assuming that the metal plates have large optical thickness with reflection amplitudes given by the Fresnel laws on the vacuum-bulk interface $$\begin{aligned} r_{\bf k}^{\rm TE}[i\xi]&=&-\frac{\sqrt{\xi^{2}\left(\varepsilon[i\xi]-1\right)+c^{2}\kappa^{2}}-c\kappa}{\sqrt{\xi^{2}\left(\varepsilon[i\xi]-1\right)+c^{2}\kappa^{2}}+c\kappa} \ \ , \nonumber \\ r_{\bf k}^{\rm TM}[i\xi]&=&-\frac{\sqrt{\xi^{2}\left(\varepsilon[i\xi]-1\right)+c^{2}\kappa^{2}}-c\kappa\varepsilon[i\xi]}{\sqrt{\xi^{2}\left(\varepsilon[i\xi]-1\right)+c^{2}\kappa^{2}}+c\kappa\varepsilon[i\xi]}. \label{Fresnel}\end{aligned}$$ Here $\varepsilon[i\xi]$ is the dielectric function describing a optical response of the material inside the mirrors. Taken together, relations (\[FEscatt\]) and (\[Fresnel\]) reproduce the Lifshitz expression [@Lifshitz56]. They are known to tend to the original Casimir expression in the limit $\varepsilon \rightarrow \infty$ which produces perfectly reflecting mirrors [@Schwinger78]. We may emphasize at this point that relations (\[FEscatt\]) are more general than Lifshitz expression which, incidentally, were not written originally in terms of reflection amplitudes [@Katz77]. They are valid for example for non-local optical responses of the mirrors provided the reflection amplitudes are substituted by their possibly more complicated expressions. The only limitation, discussed below, is associated with the assumption of specular scattering. Finite conductivity corrections {#sec:4} =============================== We now review the corrections to the Casimir expression coming from the finite conductivity of the bulk material. Here, these corrections are deduced from relations (\[FEscatt\]), assuming Fresnel laws (\[Fresnel\]) for a local optical response of the bulk material. This function may be given by a simple description of the conduction electrons in terms of a plasma model $$\begin{aligned} \varepsilon[i\xi]=1+\frac{\omega_{\rm P}^{2}}{\xi^2},\end{aligned}$$ characterized by a plasma frequency $\omega_{\rm P}$ and wavelength $\lambda_{\rm P}\equiv 2\pi c/\omega_{\rm P}$. It may be given by a more realistic representation based upon tabulated optical data and which includes the contribution of interband electrons [@Lambrecht00]. The corrections to the Casimir effect are conveniently represented in terms of factors measuring the reduction of the force and energy with respect to the ideal limit of perfect mirrors $$\begin{aligned} F&=&\eta_{\rm F}F_{\rm Cas} \ \ , \ \ \eta_{\rm F} < 1 \ \ \textrm{and} \nonumber \\ E&=&\eta_{\rm E}E_{\rm Cas} \ \ , \ \ \eta_{\rm E} < 1.\end{aligned}$$ The results of the calculations are plotted on Fig.(\[fig:1\]) for Au-covered mirrors. They are shown as $\eta_{\rm F}$ varying versus the ratio of the cavity length $L$ to the plasma wavelength $\lambda_{\rm P}$. For metals used in recent experiments, the plasma wavelength lies around $0.1\mu$m ($136$nm for Au and Cu). At large distances $L\gg\lambda_{\rm P}$, the ideal Casimir formula is recovered ($\eta_{\rm F}\rightarrow 1$), as expected. At short distances, a significant reduction of the force is obtained, with a change in the power law for the variation of the force with distance. This change can be understood as the result of the Coulomb interaction of surface plasmons at the two vacuum interfaces [@GenetAFLdB04; @Henkel04]. 
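The reduction factor $\eta_{\rm F}$ can be reproduced numerically from Eqs. (\[FEscatt\]) and (\[Fresnel\]) with the plasma dielectric function. The following sketch (Python with scipy, written in the dimensionless variables $u=2\kappa L$ and $x=2\xi L/c$; the numerical values of $L$ and $\lambda_{\rm P}$ are assumptions chosen for illustration) recovers $\eta_{\rm F}\rightarrow 1$ for $L\gg\lambda_{\rm P}$ and the short-distance reduction shown in Fig.(\[fig:1\]).

```python
import numpy as np
from scipy import constants, integrate

hbar, c = constants.hbar, constants.c

def eta_F(L, lambda_P, u_max=60.0):
    """Reduction factor eta_F = F / F_Cas for two identical plasma-model mirrors
    at distance L, plasma wavelength lambda_P (specular scattering, T = 0)."""
    omega_P = 2 * np.pi * c / lambda_P

    def integrand(u, x):
        # inner variable u = 2*kappa*L (u >= x), outer variable x = 2*xi*L/c
        kappa = u / (2 * L)
        root = np.sqrt(omega_P**2 + (c * kappa)**2)
        r_TE = -(root - c * kappa) / (root + c * kappa)
        if x > 0:
            eps = 1.0 + (2 * L * omega_P / (c * x))**2     # plasma model at xi = c*x/(2L)
            r_TM = -(root - c * kappa * eps) / (root + c * kappa * eps)
        else:
            r_TM = 1.0                                      # epsilon -> infinity limit
        total = sum(r**2 * np.exp(-u) / (1 - r**2 * np.exp(-u)) for r in (r_TE, r_TM))
        return u**2 * total

    # F/A = hbar*c/(32*pi^2*L^4) * int_0^inf dx int_x^inf du u^2 sum_p rho/(1-rho)
    val, _ = integrate.dblquad(integrand, 0.0, u_max, lambda x: x, lambda x: u_max)
    F_over_A = hbar * c * val / (32 * np.pi**2 * L**4)
    return F_over_A / (np.pi**2 * hbar * c / (240 * L**4))

for L in (0.1e-6, 0.5e-6, 3.0e-6):
    print(L, eta_F(L, lambda_P=136e-9))   # approaches 1 for L >> lambda_P
```

Setting both reflection amplitudes to $-1$ in this sketch reproduces $\eta_{\rm F}=1$ exactly, which is a convenient check of the prefactor against Eq. (\[FEcas\]).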
This interpretation may be actually generalized to arbitrary distances at the price of a full electromagnetic treatment of the plasmonic as well as ordinary photonic modes [@Intravaia05; @Intravaia07]. The plasma model is sufficient for a first description of the variation of the force with distance but it is not sufficient for a precise comparison. First, the relaxation of the conduction electrons has to be accounted for. Then, interband transitions are reached for metals like Au, Cu or Al for photon energies of a few eV and their effect on the optical response has to be taken into account for evaluating the Casimir force at short (sub-micron) distances. This can be done by using tabulated optical data which are integrated using causality relations [@Lambrecht00]. The result of the corresponding evaluation is shown on Fig.(\[fig:1\]). It is worth stressing that calculations are sensitive to the existing differences in optical data between different tabulated sets [@Pirozhenko06]. This means that an imperfect knowledge of the optical properties of the mirrors used in the experiment is a source of uncertainty in the experiment-theory comparison. Ideally, if the aim is to have a reliable theoretical evaluation of the Casimir force to be compared with experiments, it is necessary to measure the reflection amplitudes *in situ*. Silicon slab mirrors {#sec:5} ==================== As stressed in the introduction, the relevance of the Casimir effect on nanosystems calls for a precise understanding not only of the influence of material optical properties on the Casimir force, but also of the influence of geometrical parameters, such as the thickness of the coatings [@IannuzziPNAS2004; @LisantiPNAS2005] or the thickness of the mirrors themselves. In this context, structures made of silicon, the reference material used in nano-fabrication processes, are particularly interesting to study [@LambrechtEPL2007; @Chen07]. The reflection amplitude corresponding to a slab of finite thickness $D$ is different from the bulk expression and is given through a Fabry-Perot formula $$\begin{aligned} r_{\bf k}^{p}[i\xi]_{\rm slab}&=&r_{\bf k}^{p}[i\xi]\frac{1-e^{-2\delta}}{1-(r_{\bf k}^{p}[i\xi])^{2}e^{-2\delta}} \ \ , \nonumber \\ \delta &=&\frac{D}{c}\sqrt{\xi^{2}(\varepsilon[i\xi]-1)+c^{2}\kappa^{2}}. \label{Rslab}\end{aligned}$$ $r_{\bf k}^{p}[i\xi]$ is the bulk reflection amplitude given by (\[Fresnel\]). Using these reflection amplitudes for calculating the Casimir force between two Si slabs, interesting behaviours have been noted [@LambrechtEPL2007] which differ from the situation of metallic mirrors. In particular, it was shown that the material thickness has a stronger influence on the Casimir force for Si slabs than for Au slabs. For Si, the force decreases as soon as the slab separation $L$ is larger than the slab thickness $D$, as seen on Fig.(\[fig:2\]). In contrast to metals which become perfect reflectors in the limit of zero frequency, Si is a semiconductor with a finite transverse plasma frequency $\omega_{0}$ corresponding to a cut-off wavelength $\lambda_{0}=2\pi c / \omega_{0}\sim 286$nm. For cavity length $L$ smaller than this cut-off wavelength, Si tends to become transparent. The associated optical thickness $\delta$ given in Eq.(\[Rslab\]) is large, so that the Si slab behaves like a bulk Si mirror with low reflectivity at high frequency. The Casimir force is then much smaller than the perfect reflection limit of Eq.(\[FEcas\]). 
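The slab amplitude of Eq. (\[Rslab\]) is easily implemented on top of the bulk amplitudes used above. A minimal sketch is given below; the dielectric function is deliberately left as an input, since a realistic Si response requires tabulated optical data or an oscillator model beyond the scope of this illustration.

```python
import numpy as np

def r_slab(r_bulk, xi, kappa, D, eps, c=2.998e8):
    """Finite-thickness amplitude of Eq. (Rslab): r_bulk is the Fresnel amplitude of
    Eq. (Fresnel) at imaginary frequency xi and wavevector kappa, D is the slab thickness,
    eps the dielectric function evaluated at i*xi (user-supplied model)."""
    delta = (D / c) * np.sqrt(xi**2 * (eps - 1.0) + (c * kappa)**2)
    return r_bulk * (1.0 - np.exp(-2.0 * delta)) / (1.0 - r_bulk**2 * np.exp(-2.0 * delta))
```

As expected from the discussion above, this amplitude tends to the bulk value for an optically thick slab ($\delta\gg 1$) and vanishes proportionally to $\delta$ in the transparent limit $\delta\ll 1$.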
On the other hand, at low frequencies $\omega\ll\omega_{0}$, one will have $\delta\ll 1$ together with $c\kappa\rightarrow 0$, low frequencies being predominant at large distances. In this latter case, the slab is transparent again, and the Casimir force between two Si slabs is decreased when $L\geq D$. This result can have interesting consequences for nanostructures as it opens a way to control the magnitude of the Casimir force and possibly eliminate an unwanted Casimir source of stiction. From a fundamental point of view, it also offers a new solution to study the comparison between experiment and theory of the Casimir force [@Chen07] Geometry and the Casimir effect {#sec:6} =============================== Geometry effects are expected to lead to a rich variety of behaviours in the Casimir physics [@Balian7778; @Plunien86; @Balian0304]. Recent advances make it possible to explore this interplay, both from experimental and theoretical point of views. This also offers new possibilities for tailoring the Casimir force through specific designs [@EmigEPL03]. Force and energy evaluations between non planar mirrors are commonly obtained using the so-called proximity-force approximation (PFA) [@Derjaguin68; @Langbein71]. This approximation amounts to an averaging of plane-plane contributions over the distribution of local interplate separations defined by the chosen geometry. For the energy, the PFA leads to $$\begin{aligned} E_{\rm PFA} = \int\frac{{\rm d}^{2}{\bf r}}{A}E_{\rm PP}\left(\ell)\right) \ , \ \ell \equiv L-h_{1}({\bf r})-h_{2}({\bf r}), \label{PFArough}\end{aligned}$$ with $h_{1}({\bf r})$ and $h_{2}({\bf r})$ the surface profiles of each mirrors. Such profiles can be described by their spectra evaluated over the surface $A$ of the mirrors $$\begin{aligned} \int \frac{{\rm d}^{2} {\bf r}}{A}h_{i}({\bf r})h_{j}({\bf r})=\int\frac{{\rm d}^{2}{\bf k}}{(2\pi)^{2}}h_{i}[{\bf k}]h_{j}[-{\bf k}] \ , \ i,j=1,2 \end{aligned}$$ with $h_{i}[{\bf k}]$ the Fourier transform of $h_{i}({\bf r})$, and by the associated correlation lengths $\ell_{\rm C}$. When they are smaller than the other length scales, the amplitudes of deformations can be considered as perturbations. A second order expansion in the profiles can thus be performed leading to $$\begin{aligned} E_{\rm PFA} = E_{\rm PP}+\frac{1}{2}\frac{\partial^{2}E_{\rm PP}}{\partial L^{2}}\int \frac{{\rm d}^{2} {\bf r}}{A}(h_{1}({\bf r})+h_{2}({\bf r}))^{2}. \label{PFAgen}\end{aligned}$$ The trivial first-order term has been discarded, assuming that the deformations have zero spatial averages $\int {\rm d}^{2}{\bf r}h_{i=1,2}({\bf r}) / A =0$. The evaluation of the effect of geometry through the PFA, based on a summation procedure over local contributions assumes some additivity property of the Casimir effect, whereas the Casimir force is known not to be additive. The PFA can only be accurate for surfaces which can be considered as nearly plane with respect to other scales such as the separation distance $L$ [@GenetEPL03]. For example, it allows one to calculate the Casimir force in the plane-sphere (PS) configuration as $$\begin{aligned} F_{\rm PS}=\frac{2\pi R}{A}E_{\rm PP}, \ \ \textrm{with} \ \ L\ll R, \label{PFAPS}\end{aligned}$$ where $E_{\rm PP}$ is the Casimir energy in the plane-plane (PP) geometry. Most recent experiments are performed in the plane-sphere geometry which is much simpler to control than the plane-plane configuration. 
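As an order-of-magnitude illustration of Eq. (\[PFAPS\]), the following sketch evaluates $F_{\rm PS}$ with the ideal-mirror energy per unit area $E_{\rm PP}/A=-\pi^{2}\hbar c/(720L^{3})$; in an actual experiment-theory comparison the real-material $E_{\rm PP}$ of the previous sections would be used instead, and the radius and separations below are assumptions chosen only for illustration.

```python
import numpy as np
from scipy import constants

hbar, c = constants.hbar, constants.c

def E_PP_per_area(L):
    """Ideal-mirror Casimir energy per unit area, Eq. (FEcas)."""
    return -np.pi**2 * hbar * c / (720 * L**3)

def F_PS(R, L):
    """Plane-sphere force magnitude from the PFA, Eq. (PFAPS): F_PS = 2*pi*R * |E_PP|/A."""
    return 2 * np.pi * R * abs(E_PP_per_area(L))

for L in (0.1e-6, 0.2e-6, 1.0e-6):
    print(f"L = {L*1e9:.0f} nm  ->  F_PS ~ {F_PS(150e-6, L)*1e12:.2f} pN")
```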
The PFA is here expected to be valid provided the radius $R$ of the sphere is much larger than the distance $L$ of closest approach. But the PFA certainly fails for describing more general surface profiles. As far as plate deformations are concerned, it can only be valid in the limit $\ell_{\rm C}\gg L$ which corresponds to a trivial specular description of the reflection process on the surfaces [@MaiaNetoPRA05]. For the general case, a description of non specular scattering process on mirrors is available for analyzing the connection between geometry and the Casimir effect [@MaiaNetoPRA05]. An expression for the Casimir energy between parallel mirrors with arbitrary surface profiles has been derived in [@LambrechtNJP06; @RodriguesPRA2007] $$\begin{aligned} E=\hbar\int\limits_{0}^{\infty}\frac{{\rm d}\xi}{2\pi}{\rm Tr} \ {\rm ln}\left(1-\rm{R}_{1}\left(i\xi\right)e^{-K\left(i\xi\right)L}\rm{R}_{2}\left(i\xi\right)e^{-K\left(i\xi\right)L}\right)\ \ \label{formule}\end{aligned}$$ This expression is based on non-specular reflection matrices $\rm{R}_{1}$ and $\rm{R}_{2}$ associated to each mirror. While the operator $e^{-K\left(i\xi\right)L}$ corresponds to propagation of the field between the two mirrors, and is diagonal in the plane-wave basis with elements given by $K(i\xi) =\sqrt{{\bf k}^2+\xi^2 / c^{2}}$, the two matrices $\rm{R}_{1}$ and $\rm{R}_{2}$ are non-diagonal on plane-waves. This corresponds to a mixing of polarizations and wavevectors, due to non-specularity diffraction on the gratings formed by the profiles on the surfaces of the mirrors. As it is reviewed below, this formula (\[formule\]) has been used to evaluate the effect of surface roughness [@MaiaNetoPRA05] or corrugations on the Casimir force [@RodriguesEPL2006; @RodriguesPRL2006]. Analytical expressions have been derived through a perturbative treatment, with the roughness or the corrugation amplitudes taken as the smallest length scales involved in the problem. The effect of the optical response of the metal has been included in these calculations. It is worth stressing that this formula has a wider range of validity. It can in principle describe structured plates with large corrugation amplitudes, as well as material properties not limited to a simple plasma model. The only task for a quantitative evaluation of the Casimir force or energy is to obtain the actual form of the reflection operators $\rm{R}_{1}$ and $\rm{R}_{2}$ to be inserted into Eq.(\[formule\]). Roughness correction {#sec:7} ==================== A correction to the Casimir force that must be accounted for is the effect of surface roughness, intrinsic to any real mirror. This effect is analyzed in recent experiments through procedures based on the PFA [@BordagPLA95; @KlimPRA99]. The general formula (\[formule\]) has been used to go beyond this approximation [@MaiaNetoPRA05]. As already stressed, the roughness amplitude must be the smallest length scale for perturbation theory to hold. Meanwhile, the plasma wavelength, the mirror separation and the roughness correlation length may have arbitrary relative values with respect to each other. We remind that the roughness profiles are defined with respect to reference mirror planes separated by the mean distance $L$. We assume that profiles have zero averages and show no cross-correlations. We also suppose that the area $A$ of each plate is large enough to include many correlation areas ($A\gg \ell_{\rm C}^{2}$), so that surface averages are identical to statistical averages. 
Up to second order in the profiles, the correction to the Casimir energy may thus be written as follows $$\begin{aligned} \delta E_{\rm PP}=\int\frac{{\rm d}^{2}{\bf k}}{(2\pi)^{2}}G_{\rm rough}[{\bf k}]\sigma[{\bf k}]. \label{beyondPFArug}\end{aligned}$$ Here $\sigma[{\bf k}]$ corresponding to the roughness spectrum added over the two plates. $G_{\rm rough}[{\bf k}]$ is a spectral sensitivity to roughness of the Casimir energy. Due to cylindrical symmetry with respect to rotations in the transverse plane, it only depends on $k=|{\bf k}|$. This dependence reveals that the roughness correction does not only depend on the root-mean-square (rms) roughness, but also on the spectral distribution of the roughness. Fig.(\[fig:3\]) displays $G_{\rm rough}[k]$ normalized by $E_{\rm PP}$ as it has been calculated for Au-covered mirrors and for various interplate distances. The rich behaviours of $G_{\rm rough}[k]$ as a function of the length scales is discussed in [@MaiaNetoEPL05]. What we want to stress here is that this function describes deviations from the PFA. The width of the roughness spectrum $\sigma[{\bf k}]$ is indeed fixed by the inverse of the correlation length $\ell_{\rm C}$. When this spectrum is contained in the region where $G_{\rm rough}[k]$ remains close to its secular limit $G_{\rm rough}[0]$, we can approximate Eq.(\[beyondPFArug\]) as proportional to the rms roughness $$\begin{aligned} \delta E_{\rm PP}\simeq G_{\rm rough}[0]\langle h_{1}^{2}+h_{2}^{2}\rangle. \label{PFArug}\end{aligned}$$ This corresponds effectively to the PFA expression, as the consequence of a theorem which was proved in [@MaiaNetoPRA05] $$\begin{aligned} G_{\rm rough}[k\rightarrow 0]= \frac{1}{2}\frac{\partial^{2}E_{\rm PP}}{\partial L^{2}}. \label{PFtheorem}\end{aligned}$$ Equation (\[PFtheorem\]) is nothing but a properly stated “Proximity Force Theorem”. It can however not be confused with the “Proximity Force Approximation” (\[PFArug\]) which is a good approximation only for smooth enough mirrors, that is also for large enough roughness correlation lengths $\ell_{\rm C}$. In the general case, the PFA result (\[PFArug\]) underestimates the effect of roughness. When performing the theory-comparison, one has therefore to carefully assess the roughness correction by measuring the roughness spectra *in situ* and using the roughness sensitivity function as given in [@MaiaNetoPRA05; @MaiaNetoEPL05]. The PFA can only be used if $\ell_{\rm C}$ has been proven to be large enough or, in a looser way, when the roughness correction has been estimated to have a negligible value. Lateral force between corrugated plates {#sec:8} ======================================= As the roughness effect remains a small correction to the Casimir force, it seems difficult to measure the deviation from PFA regime and check its agreement with theory. Fortunately, there exists an experimental configuration showing more promising perspectives as a potential probe of the non-trivial interplay between the Casimir effect and geometry. This configuration corresponds to periodically corrugated metallic plates placed face to face in vacuum, so that a lateral component of the Casimir force arises due to the breaking of the transverse translational invariance [@Golestanian]. A recent experiment has demonstrated the feasibility of a lateral force measurement at separation distances of the order of $\sim 100$nm [@ChenPRL02]. 
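The structure of Eq. (\[beyondPFArug\]) and of the proximity force theorem (\[PFtheorem\]) can be made concrete with a short sketch. In the version below the actual sensitivity function of [@MaiaNetoPRA05] would be supplied as `G_rough`; only its $k\rightarrow 0$ plateau is used here (ideal mirrors, Gaussian roughness spectrum, all numerical values assumed for illustration), so the output reproduces the PFA estimate (\[PFArug\]), which as stressed above underestimates the true correction.

```python
import numpy as np
from scipy import constants, integrate

hbar, c = constants.hbar, constants.c

def G_rough_plateau(k, L):
    """k -> 0 plateau of the roughness sensitivity, fixed by Eq. (PFtheorem):
    (1/2) d^2 E_PP/dL^2, written here per unit area and for ideal mirrors."""
    return 0.5 * (-np.pi**2 * hbar * c) * 12 / (720 * L**5)

def delta_E(G_rough, sigma, L, k_max):
    """Roughness correction per unit area, Eq. (beyondPFArug)."""
    integrand = lambda k: k * G_rough(k, L) * sigma(k) / (2 * np.pi)
    val, _ = integrate.quad(integrand, 0.0, k_max)
    return val

# Gaussian spectrum for the combined roughness of the two plates, normalised so
# that int d^2k/(2 pi)^2 sigma[k] = <h1^2 + h2^2> = w^2, with correlation length ell_C.
w, ell_C, L = 3e-9, 50e-9, 200e-9
sigma = lambda k: np.pi * ell_C**2 * w**2 * np.exp(-(k * ell_C)**2 / 4)

print(delta_E(G_rough_plateau, sigma, L, k_max=20 / ell_C))   # equals, up to quadrature error,
print(G_rough_plateau(0.0, L) * w**2)                         # the PFA value of Eq. (PFArug)
```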
Since it would disappear in the absence of corrugation, the lateral force should not be considered as a small correction to the otherwise dominant normal Casimir force, as it was the case for the study of roughness. As we will see below, the deviation from PFA indeed appears as a factor in front of the lateral force, so that a precise measurement of this force would test in a crucial manner the interplay between Casimir effect and geometry [@RodriguesPRL2006]. As the experiments are performed at short distances, it cannot be described with the assumption of perfect reflection, where analytical results are available [@EmigPRA03; @EmigPRL05]. Again, the general scattering formula (\[formule\]) shows the ability to give an estimation for the lateral force for arbitrary relative values of the length scales $\lambda_{\rm C}$, $\lambda_{\rm P}$ and $L$, provided the corrugation amplitudes $a_{i=1,2}$ remain the smallest length scales of the problem. We consider two metallic mirrors, both sinusoidally corrugated along one dimension, with the same corrugation wavelength $\lambda_{C}$, separated by a distance $L$ and facing each other with a relative spatial mismatch $b$ between the corrugation crests -see Fig.(\[fig:4\]). The profiles $h_{i=1,2}({\bf r})$, ${\bf r}=(x,y)$, of the two uniaxial (along $y$) corrugated mirrors are defined by the two functions $h_{1} = a_{1}\cos\left(k_{\rm C}x\right)$ and $h_{2}=a_{2}\cos\left(k_{\rm C}\left(x-b\right)\right)$ with $k_{\rm C}= 2\pi / \lambda_{\rm C}$ the wavevector associated to the corrugation wavelength $\lambda_{\rm C}$. We take both profiles with zero spatial averages. At the second order in the corrugations, cross-terms of the form $a_{1}a_{2}$ appear which contribute to the lateral force because the energy depends on the transverse mismatch $b$. This fact, a consequence of the correlation between the two corrugation profiles, induces a contrast with the case of roughness where the effect was associated with quadratic terms $h_{i=1,2}^{2}$. It implies that the evaluation of the lateral force only involves first-order non-specular amplitudes calculated on each mirror separately. The full calculation gives the second-order correction to the Casimir energy induced by the corrugations $$\begin{aligned} \delta E_{\rm PP}=A\frac{a_{1}a_{2}}{2}\cos (k b) G_{\rm C}[k]. \label{secorder}\end{aligned}$$ The function $G_{\rm C}[{\bf k}]$ is given in [@RodriguesPRL2006] and does only depend on the modulus $k$ of ${\bf k}$. Here again, the PFA regime is recovered in the $k\rightarrow 0$ limit, as a consequence $$\begin{aligned} G_{\rm C}[k\rightarrow 0]=\frac{1}{2} \frac{\partial^{2}E_{\rm PP}}{\partial L^{2}}.\end{aligned}$$ This theorem is ensured, for any specific model of the material medium, by the fact that $G_{\rm C}$ is given for $k\rightarrow 0$ by the specular limit of non-specular reflection amplitudes [@MaiaNetoPRA05]. In order to compare with experiments, we consider the expression of the lateral force in the plane-sphere configuration. It is derived from the plane-plane configuration using the PFA, reliable as long as $L\ll R$. In fact, there is no interplay between curvature and corrugation provided $RL\gg\lambda_{\rm C}^{2}$, a condition met in the experiment reported in [@ChenPRL02]. 
From Eq.(\[PFAPS\]), the lateral force in the plane-sphere geometry is eventually given as [@RodriguesPRL2006] $$\begin{aligned} F_{\rm PS}^{\rm lat}=-\frac{\partial }{\partial b}E_{\rm PS}^{\rm lat}=\pi a_{1}a_{2}kR\sin (kb ) \int_{\infty}^{L}{\rm d}L^{\prime}G[k,L^{\prime}].\end{aligned}$$ The force is plotted in Fig.(\[fig:5\]) as a function of $k$, with length scales $\lambda_{\rm C}$, $\lambda_{\rm P}$ and $L$ fitting the experimental values [@ChenPRL02]. As the corrugation amplitudes are not small enough in the experiment to meet the perturbation condition, the theory and experiment can unfortunately not be compared directly. The plot on Fig.(\[fig:5\]) nevertheless shows the interesting fact that the length scales taken from the experiment, with $k$ indicated by the vertical dashed line, clearly fall outside the PFA sector in the perturbative calculation. For related implications, we refer the reader to the discussions in [@RodriguesPRA2007]. It appears clearly on the figure that the PFA overestimates the magnitude of the lateral force for arbitrary $k$. We also note that the PFA prediction for the force scales as $k$ when $k$ increases from zero. At larger values of $k$ in contrast, the lateral force decreases. This is due to the one-way propagation factor separating the two first-order non-specular reflections at each plate, given as a decresing exponential $e^{-kL}$ in the high $k$ limit [@RodriguesPRL2006]. It follows that there is a maximal force when $k$ is varied. It corresponds to $k=9\times 10^{-3}$nm$^{-1}$ with the other length scales corresponding to the experiment. The ratio $L / \lambda_{\rm C} = 1 / \pi$ is thus falling outside the PFA sector which confirms that a lateral force measurement is an excellent candidate for probing deviations from the PFA. Torque {#sec:9} ====== Another interesting observable for exploring the non-trivial geometry dependence of the Casimir energy is the torque arising when the corrugations of the two plates are misaligned. With this angular mismatch between the corrugations, rotational symmetry is broken and induces a restoring torque between the plates. The calculations are quite similar to those which were done for aligned corrugated surfaces, in particular because the same non-specular reflection coefficients are used to describe each plate. The second-order correction is still given by the sensitivity function $G_{\rm C}[{\bf k}]$ which does only depend on the modulus of the corrugation wavevector ${\bf k}$. The difference with the lateral force case lies only in the fact that the corrugation profiles $h_{i=1,2}({\bf r})=a_{i}\cos ({\bf k}_{i}\cdot {\bf r} - kb_{i})$ corresponds to different corrugation wavevectors ${\bf k}_{i=1,2}$ having however the same modulus $k=2\pi /\lambda_{\rm C}$. The angular mismatch between ${\bf k}_{1}$ and ${\bf k}_{2}$ is given by the angle $\theta$. The parameters $b_{i}$ represent lateral displacements with respect to the configuration with a line of maximum height at the origin. We assume that the corrugation $h_{2}$ is restricted to a rectangular section of area $L_{x}L_{y}$ centered at $x=b_{2},y=0$ and much larger than $L^{2}$ so that diffraction at the borders can be neglected. 
With these assumptions, and in the limit of long corrugation lines $kL_{y}\gg 1$ with $L_{x}$ smaller or of the order of $L_{y}$, the energy correction per unit area is given in [@RodriguesEPL2006] as $$\begin{aligned} \frac{\delta E_{\rm PP}}{L_{x}L_{y}}=\frac{a_{1}a_{2}}{2}G_{\rm C}[k]\cos (kb)\frac{\sin (kL_{y}\theta /2)}{kL_{y}\theta /2}. \label{torque}\end{aligned}$$ The spatial coefficient $b=b_{2}\cos \theta -b_{1}$ is the relative lateral displacement along the direction ${\bf k}_{1}$. As expected by symmetry, this correction is invariant under the transformation $\theta \rightarrow - \theta$ and $\theta\rightarrow \pi -\theta$ due to the fact that the corrugation lines have no orientation. The case $\theta =0$ corresponds to the result of pure lateral displacement discussed in the preceding section. The scale of the energy variation with $b$ and $\theta$ is set by the parameter $ \lambda_{\rm C} / L_{y}$. In fact, if plate $2$ is released after a rotation of $\theta >\lambda_{\rm C} / L_{y}$, its subsequent motion is a combination of rotation and lateral displacements. Rotation is favored over lateral displacements for $\theta <\lambda_{\rm C} / L_{y}$ (see Fig.(1) in [@RodriguesEPL2006]). The torque $\tau = - \partial \delta E_{\rm PP} / \partial\theta $ is evaluated in [@RodriguesEPL2006] for corrugated Au mirrors, with corrugation amplitudes $a_{1}=a_{2}=14$nm, corrugation length $L_{y}=24\mu$m and separated by a distance of $L=1\mu$m. It is maximum at $\theta = 0.66 \lambda_{\rm C} / L$ and is plotted in Fig.(\[fig:6\]) as a function of $k$. It starts increasing linearly with $k$ in the $k\rightarrow 0$ PFA sector and for the same reason as the lateral force, it decreases exponentially in the high-$k$ limit. As is clear on Fig.(\[fig:6\]), the PFA overestimates the magnitude of the torque by a factor of the order of $2$ at the peak value of the torque. The discrepancy even increases with $k$, since smaller values of $k$ correspond to smoother surfaces. The conditions are gathered up towards a direct experimental evidence of a non-trivial effect of geometry. Fig.(\[fig:6\]) also displays the torque when evaluated between perfect metallic corrugated mirrors [@EmigPRA03]. The corresponding deviation with respect to the calculation given by Eq.(\[torque\]) stresses that at a separation distance of $L=1\mu$m, the optical response of the metal must be accounted for in an accurate evaluation of the torque. The perfect conductor limit is reached only if the plasma wavelength $\lambda_{\rm P}$ is the smallest length scales (apart from the corrugation amplitudes) of the problem. Conclusion ========== New perspectives for studying the interplay between Casimir effect and geometry are today clearly visible. The theoretical formalism is better and better mastered, so that a rich variety of configurations can be studied. Meanwhile, novel experimental capabilities are available, allowing one to address challenging questions. Proposals have been recently made for measuring the torque between birefringent dielectric disks [@MundayPRA2005]. A measurement between metallic corrugated mirrors seems to be more easily accessible, with the torque turning out to be up to three orders of magnitude higher than the torque between dielectric plates, for comparable separation distance. At the same time, alternative routes are explored in order to probe quantum vacuum geometrical effects [@RodriguezPRL2007]. 
Cold atoms techniques also look like particularly promising, as they should allow one to see deviations from the PFA on the lateral component of the Casimir-Polder force, with a Bose-Einstein condensate used as a local probe trapped close to a corrugated surface [@DalvitPRL08]. These trends suggest that demonstrations of non-trivial effects of geometry should be within reach. [99]{} H.B.G. Casimir, Proc. K. Ned. Akad. Wet. **51**, 793 (1948) M.J. Sparnaay, in *Physics in the Making* eds. A. Sarlemijn and M.J. Sparnaay (North-Holland, 1989) p. 235 and references therein S.K.L. Lamoreaux, Phys. Rev. Lett. **78**, 5 (1997) U. Mohideen and A. Roy, Phys. Rev. Lett. **81**, 4549 (1998) B.W. Harris, F. Chen, and U. Mohideen, Phys. Rev. **A62**, 052109 (2000) Th. Ederth, Phys. Rev. **A62**, 062104 (2000) G. Bressi, G. Carugno, R. Onofrio, and G. Ruoso, Phys. Rev. Lett. **88**, 041804 (2002) R.S. Decca, D. López, E. Fischbach, and D.E. Krause, Phys. Rev. Lett. **91**, 050402 (2003) and references therein F. Chen, G.L. Klimchitskaya, U. Mohideen, and V.M. Mostepanenko, Phys. Rev. **A69**, 022117 (2004) R.S. Decca, D. López, E. Fischbach, G.L. Klimchitskaya, D.E. Krause, and V.M. Mostepanenko, Annals Phys. **318**, 37 (2005) R.S. Decca, D. López, E. Fischbach, G.L. Klimchitskaya, D.E. Krause, and V.M. Mostepanenko, Phys. Rev. **D75**, 077101 (2007) P.W. Milonni, *The quantum vacuum* (Academic, 1994) S.K. Lamoreaux, Resource Letter in Am. J. Phys. **67**, 850 (1999) S. Reynaud, A. Lambrecht, C. Genet and M.T. Jaekel, C. R. Acad. Sci. Paris **IV-2**, 1287 (2001) \[\] C. Genet, A. Lambrecht and S. Reynaud, in *On the Nature of Dark Energy* eds. U. Brax, J. Martin, J.P. Uzan, 121 (Frontier Group, 2002) \[\] A. Lambrecht and S. Reynaud, Poincaré Seminar on Vacuum Energy and Renormalization **1**, 107 (2002) \[\] M. Bordag, U. Mohideen, and V.M. Mostepanenko, Phys. Reports **353**, 1 (2001) and references therein K.A. Milton, J. Phys. **A20**, 4628 (2005) *Focus on Casimir Forces*, eds. R. Barrera and S. Reynaud, New J. Phys. **8** (2006) http://www.iop.org/EJ/abstract/1367-2630/8/10/E05. E. Buks and M.L. Roukes, Phys. Rev. **B63**, (2001) 033402 H.B. Chan, V.A. Aksyuk, R.N. Kleinman, D.J. Bishop, and F. Capasso, Science **291**, 1941 (2001) H.B. Chan, V.A. Aksyuk, R.N. Kleinman, D.J. Bishop, and F. Capasso, Phys. Rev. Lett. **87**, 211801 (2001) K.L. Ekinci and M.L. Roukes, Rev. Sci. Instrum. **76**, 061101 (2005) E.M Lifshitz, Sov. Phys. JETP **2**, 73 (1956) J. Schwinger, L.L. de Raad, and K.A. Milton, Ann. Phys. **115**, 1 (1978) S.K. Lamoreaux, Phys. Rev. **A59**, R3149 (1999) A. Lambrecht and S. Reynaud, Euro. Phys. J. **D8**, 309 (2000) G.L. Klimchitskaya, U. Mohideen, and V.M. Mostepanenko, Phys. Rev. **A61**, 062107 (2000) M.T. Jaekel and S. Reynaud, J. Physique **I-1**, 1395 (1991) \[\] C. Genet, A. Lambrecht, and S. Reynaud, Phys. Rev. **A67**, 043811 (2003) A. Lambrecht, P.A. Maia Neto and S. Reynaud, New J. Phys. **8**, 243 (2006) S.M. Barnett, C.R. Gilson, B. Huttner, and N. Imoto, Phys. Rev. Lett. **77**, 1739 (1996) I. Brevik, S.A. Ellingsen, and K. Milton, New J. Phys. **8**, 236 (2006) E.I. Kats, Sov. Phys. JETP **46**, 109 (1977) C. Genet, F. Intravaia, A. Lambrecht, and S. Reynaud, Ann. Found. L. de Broglie **29**, 311 (2004) \[\] C. Henkel, K. Joulain, J.Ph. Mulet, and J.J. Greffet, Phys. Rev. **A69**, 023808 (2004) F. Intravaia and A. Lambrecht, Phys. Rev. Lett. **94**, 110404 (2005) F. Intravaia, C. Henkel, and A. Lambrecht, Phys. Rev. **A76**, 033820 (2007) I. Pirozhenko, A. 
Lambrecht, and V.B. Svetovoy, New J. Phys. **8**, 238 (2006) D. Iannuzzi, M. Lisanti, and F. Capasso, Proc. Nat. Ac. Sci. USA **101**, 4019 (2004) M. Lisanti, D. Iannuzzi, and F. Capasso, Proc. Nat. Ac. Sci. USA **102**, 11989 (2005) A. Lambrecht, I. Pirozhenko, L. Duraffourg, and P. Andreucci, Europhys. Lett. **77**, 44006 (2007) F. Chen, G.L. Klimchitskaya, V.M. Mostepanenko and U. Mohideen, Phys. Rev. **B76**, 035338 (2007) R. Balian and B. Duplantier, Ann. Phys. NY **104**, 300 (1977); **112**, 165 (1978) G. Plunien, B. Muller and W. Greiner, Phys. Reports **134**, 87 (1986) R. Balian, in *Poincaré Seminar 2002 ‘Vacuum Energy’*, eds. B. Duplantier and V. Rivasseau (Birkhäuser, 2003), p. 71; R. Balian and B. Duplantier, in *15th SIGRAV Conference on General Relativity and Gravitation*, \[\] T. Emig, Europhys.Lett. **62**, 466 (2003) B.V. Deriagin, I.I. Abrikosova, and E.M. Lifshitz, Quart. Rev. **10**, 295 (1968) D. Langbein, J. Phys. Chem. Solids **32**, 1657 (1971) C. Genet, A. Lambrecht, P.A. Maia Neto, and S. Reynaud, Europhys. Lett. **62**, 484 (2003) P.A. Maia Neto, A. Lambrecht, and S. Reynaud, Phys. Rev. **A72**, 012115 (2005) R.B. Rodrigues, P.A. Maia Neto, A. Lambrecht, and S. Reynaud, Phys. Rev. **A75**, 062108 (2007) R.B. Rodrigues, P.A. Maia Neto, A. Lambrecht, and S. Reynaud, Europhys. Lett. **76**, 822 (2006) R.B. Rodrigues, P.A. Maia Neto, A. Lambrecht, and S. Reynaud, Phys. Rev. Lett. **96**, 100402 (2006) M. Bordag, G.L. Klimchitskaya, and V.M. Mostepanenko, Phys. Lett. **A200**, 95 (1995) G.M. Klimchitskaya, A. Roy, U. Mohideen, and V.M. Mostepanenko, Phys. Rev. **A60**, 3487 (1999) P.A. Maia Neto, A. Lambrecht, and S. Reynaud, Europhys. Lett. **69**, 924 (2005) R. Golestanian and M. Kardar, Phys. Rev. Lett. **78**, 3421 (1997); Phys. Rev. **A58**, 1713 (1998) F. Chen, and U. Mohideen, Phys. Rev. Lett. **88**, 101801 (2002) T. Emig, A. Hanke, R. Golestanian, and M. Kardar, Phys. Rev. **A67**, 022114 (2003) R. Büscher and T. Emig, Phys. Rev. Lett. **94**, 133901 (2005) J.N. Munday, D. Iannuzzi, and F. Capasso, Phys. Rev. **A71**, 042102 (2005) A. Rodriguez, M. Ibanescu, D. Iannuzzi, F. Capasso, J.D. Joannopoulos, and S.G. Johnson, Phys. Rev. Lett. **99**, 080401 (2007) D.A.R. Dalvit, P.A. Maia Neto, A. Lambrecht, and S. Reynaud, Phys. Rev. Lett. **100**, 040405 (2008)
--- abstract: 'Process mining techniques focus on extracting insights into processes from event logs. In many cases, events recorded in the event log are too fine-grained, causing process discovery algorithms to discover incomprehensible process models or process models that are not representative of the event log. We show that when process discovery algorithms are only able to discover an unrepresentative process model from a low-level event log, structure in the process can in some cases still be discovered by first abstracting the event log to a higher level of granularity. This gives rise to the challenge to bridge the gap between an original low-level event log and a desired high-level perspective on this log, such that a more structured or more comprehensible process model can be discovered. We show that supervised learning can be leveraged for the event abstraction task when annotations with high-level interpretations of the low-level events are available for a subset of the sequences (i.e., traces). We present a method to generate feature vector representations of events based on XES extensions, and describe an approach to abstract events in an event log with Conditional Random Fields using these event features. Furthermore, we propose a sequence-focused metric to evaluate supervised event abstraction results that fits closely with the tasks of process discovery and conformance checking. We conclude this paper by demonstrating the usefulness of supervised event abstraction for obtaining more structured and/or more comprehensible process models using both real-life event data and synthetic event data.' author: - '\' - '\' bibliography: - 'IEEEabrv.bib' - 'bibliography.bib' title: Event Abstraction for Process Mining using Supervised Learning Techniques --- Process Mining, Event Abstraction, Probabilistic Graphical Models Introduction ============ Process mining is a fast-growing discipline that combines knowledge and techniques from computational intelligence, data mining, process modeling and process analysis [@Aalst2011]. Process mining focuses on the analysis of event logs, which consist of sequences of real-life events observed from process executions, originating e.g. from the logs of ERP systems. An important subfield of process mining is process discovery, which is concerned with the task of finding a process model that is representative of the behavior seen in an event log. Many different process discovery algorithms exist ([@Aalst2004; @Gunther2007; @Werf2008; @Weijters2011; @Leemans2013]), and many different types of process models can be discovered by process discovery methods, including BPMN models, Petri nets, process trees, and statecharts. ![An excerpt of a “spaghetti”-like process model.[]{data-label="fig:spaghetti"}](spaghetti){width="48.00000%"} As event logs are often not generated specifically for the application of process mining, the granularity of the events in the event log at hand might be too low. It is vital for successful application of process discovery techniques to have event logs at an appropriate level of abstraction. Applying process discovery techniques to an event log that is too low-level might result in a process model with one or more undesired properties. First of all, the resulting process model might be “spaghetti”-like, as shown in Figure \[fig:spaghetti\], consisting of an uninterpretable mess of nodes and connections.
The aim of process discovery is to discover a structured, “lasagna”-like, process model as shown in Figure \[fig:lasagna\], which is much more interpretable than a “spaghetti”-like model. Secondly, the activities in the process model might have overly specific, non-meaningful names. Third, as we show in Section \[sec:motivating\_example\], process discovery algorithms are sometimes not able to discover a process model that represents the low-level event log well, while being able to discover a representative process model from a corresponding high-level event log. The problems mentioned illustrate the need for a method to abstract too low-level event logs into higher level event logs. ![A structured, or “lasagna”-like, process model.[]{data-label="fig:lasagna"}](lasagna){width="25.00000%"} Several methods have been explored within the process mining field that address the challenge of abstracting low-level events to higher level events ([@Bose2009; @Gunther2010; @Dongen2010]). Existing event abstraction methods rely on unsupervised learning techniques to abstract low-level events into high-level events by clustering groups of low-level events into one high-level event. However, using unsupervised learning introduces two new problems. First, it is unclear how to label high-level events that are obtained by clustering low-level events. Current techniques require the user / process analyst to provide high-level event labels themselves based on domain knowledge, or generate long labels by concatenating the labels of all low-level events incorporated in the cluster. However, long concatenated labels quickly become unreadable for larger clusters, and it is far from trivial for a user to come up with sensible labels manually. In addition, unsupervised learning approaches for event abstraction give no guidance with respect to the desired level of abstraction. Many existing event abstraction methods contain one or more parameters to control the degree to which events are clustered into higher level events. Finding the right level of abstraction providing meaningful results is often a matter of trial-and-error. In some cases, training data with high-level target labels of low-level events are available, or can be obtained, for a subset of the traces. In many settings, obtaining high-level annotations for all traces in an event log is infeasible or too expensive. Training a supervised learning model on the set of traces where high-level target labels are available, and applying that model to other low-level traces where no high-level labels are available, allows us to build a high-level interpretation of a low-level event log, which can then be used as input for process mining techniques. In this paper we describe a method for supervised event abstraction that enables process discovery from too fine-grained event logs. This method can be applied to any event log where higher level training labels of low-level events are available for a subset of the traces in the event log. We start by giving an overview of related work from the activity recognition field in Section \[sec:related\]. In Section \[sec:preliminaries\] we introduce basic concepts and definitions used throughout the rest of the paper. Section \[sec:motivating\_example\] explains the problem of not being able to mine representative process models from low-level data in more detail. 
In Section \[sec:features\] we describe a method to automatically retrieve a feature vector representation of an event that can be used with supervised learning techniques, making use of certain aspects of the XES standard definition for event logs [@Gunther2014]. In the same section we describe a supervised learning method to map low-level events into target high-level events. Sections \[sec:case\_1\] and \[sec:case\_2\] show the added value of the described supervised event abstraction method for process mining on a real life event log from a smart home environment and on a synthetic log from a digital photocopier respectively. Section \[sec:conclusion\] concludes the paper. Related Work {#sec:related} ============ Supervised event abstraction is an unexplored problem in process mining. A related field is activity recognition within the field of ubiquitous computing. Activity recognition focuses on the task of detecting human activity from either passive sensors [@Kasteren2008; @Tapia2004], wearable sensors [@Bao2004; @Kwapisz2011], or cameras [@Poppe2010]. Activity recognition methods generally work on discrete time windows over the time series of sensor values and aim to map each time window onto the correct type of human activity, e.g. *eating* or *sleeping*. Activity recognition methods can be classified into probabilistic approaches [@Kasteren2008; @Tapia2004; @Bao2004; @Kwapisz2011] and approaches based on ontology reasoning [@Chen2009; @Riboni2011]. The strength of probabilistic approaches compared to methods based on ontology reasoning is their ability to handle noise, uncertainty and incompleteness in sensor data [@Chen2009]. Tapia [@Tapia2004] was the first to explore supervised learning methods to infer human activity from passive sensors, using a naive Bayes classifier. More recently, probabilistic graphical models started to play an important role in the activity recognition field [@Kasteren2008; @Kasteren2007]. Van Kasteren et al. [@Kasteren2008] explored the use of Conditional Random Fields [@Lafferty2001] and Hidden Markov Models [@Rabiner1986]. Van Kasteren and Kr[ö]{}se [@Kasteren2007] applied Bayesian Networks [@Friedman1997] to the activity recognition task. Kim et al. [@Kim2010] found Hidden Markov Models to be incapable of capturing long-range or transitive dependencies between observations, which results in difficulties recognizing multiple interacting activities (concurrent or interwoven). Conditional Random Fields do not possess these limitations. The main differences between existing work in activity recognition and the approach presented in this paper are the input data on which they can be applied and the generality of the approach. Activity recognition techniques consider the input data to be a multidimensional time series of the sensor values over time, based on which time windows are mapped onto human activities. An appropriate time window size is determined based on domain knowledge of the data set. In supervised event abstraction we aim for a generic method that works for XES event logs in general. A time window based approach contrasts with our aim for generality, as no single time window size will be appropriate for all event logs. Furthermore, the durations of the events within a single event log might differ drastically (e.g. 
one event might take seconds, while another event takes months), in which case time window based approaches will either miss short events in case of larger time windows or resort to very large numbers of time windows resulting in very long computational time. Therefore, we map each individual low-level event to a high-level event and do not use time windows. In a smart home environment context with passive sensors, each change in a binary sensor value can be considered to be a low-level event. Preliminaries {#sec:preliminaries} ============= In this section we introduce basic concepts used throughout the paper. We use the usual sequence definition, and denote a sequence by listing its elements, e.g. we write $\langle a_1,a_2,\dots,a_{n} \rangle$ for a (finite) sequence $s:\{1,\dots,n\}\to S$ of elements from some alphabet $S$, where $s(i)=a_i$ for any $i \in \{1,\dots,n\}$. XES Event Logs -------------- We use the XES standard definition of event logs, an overview of which is shown in Figure \[fig:XES\_metamodel\]. XES defines an event *log* as a set of *traces*, which in itself is a sequence of *event*s. The log, traces and events can all contain one or more *attribute*s, which consist of a *key* and a *value* of a certain type. Event or trace attributes may be *global*, which indicates that the attribute needs to be defined for each event or trace respectively. A log contains one or more *classifier*s, which can be seen as labeling functions on the events of a log, defined on global event attributes. *Extension*s define a set of attributes on log, trace, or event level, in such a way that the semantics of these attributes are clearly defined. One can view XES extensions as a specification of attributes that events, traces, or event logs themselves frequently contain. XES defines the following standard extensions: Concept : [Specifies the generally understood name of the event/trace/log (attribute ’Concept:name’).]{} Lifecycle : [Specifies the lifecycle phase (attribute ’Lifecycle:transition’) that the event represents in a transactional model of their generating activity. The *Lifecycle* extension also specifies a standard transactional model for activities.]{} Organizational : [Specifies three attributes for events, which identify the actor having caused the event (attribute ’Organizational:resource’), his role in the organization (attribute ’Organizational:role’), and the group or department within the organization where he is located (attribute ’Organizational:group’).]{} Time : [Specifies the date and time at which an event occurred (attribute ’Time:timestamp’).]{} Semantic : [Allows definition of an activity meta-model that specifies higher-level aggregate views on events (attribute ’Semantic:modelReference’).]{} ![XES event log meta-model, as defined in [@Gunther2014].[]{data-label="fig:XES_metamodel"}](XES_metamodel){width="0.95\linewidth"} We introduce a special attribute of type *String* with key *label*, which represents a high-level version of the generally understood name of an event. The *concept* name of a event is then considered to be a low-level name of an event. The *Semantic* extension closely resembles the *label* attribute, however, by specifying relations between low-level and high-level events in a meta-model, the *Semantic* extension assumes that all instances of a low-level event type belong to the same high-level event type. 
The *label* attribute specifies the high-level label for each event individually, allowing for example one low-level event of low-level type *Dishes & cups cabinet* to be of high-level type *Taking medicine*, and another low-level event of the same type to be of high-level type *Eating*. Note that for some traces high-level annotations might be available, in which case its events contain the *label* attribute, while other traces might not be annotated. High-level interpretations of unannotated traces, obtained by inferring the *label* attribute based on information that is present in the annotated traces, allow the use of unannotated traces for process discovery and conformance checking on a high level. Petri nets ---------- A process modeling notation frequently used as output of process discovery techniques is the Petri net. Petri nets are directed bipartite graphs consisting of transitions and places, connected by arcs. Transitions represent activities, while places represent the status of the system before and after execution of a transition. Labels are assigned to transitions to indicate the type of activity that they represent. A special label $\tau$ is used to represent invisible transitions, which are only used for routing purposes and do not represent any real activity. \[def:lpn\] A labeled Petri net is a tuple $N=(P,T,F,R,\ell)$ where $P$ is a finite set of places, $T$ is a *finite set* of transitions such that $P \cap T = \emptyset$, $F \subseteq (P \times T) \cup (T \times P)$ is a set of directed arcs, called the flow relation, $R$ is a finite set of labels representing event types, $\tau \notin R$ is a label representing an invisible action, and $\ell:T\rightarrow R\cup \{\tau\}$ is a labeling function that assigns a label to each transition. The state of a Petri net corresponds to the state that a process instance can be in during its execution. A state of a Petri net is captured by the marking of its places with tokens. In a given state, each place is either empty, or it contains a certain number of tokens. A transition is enabled in a given marking if all places with an outgoing arc to this transition contain at least one token. Once a transition fires (i.e. is executed), a token is removed from all places with outgoing arcs to the firing transition and a token is put in all places with incoming arcs from the firing transition, leading to a new state. A marked Petri net is a pair $(N,M)$, where $N=(P,T,F,R,\ell)$ is a labeled Petri net and where $M \in \mathbb{B}(P)$ denotes the marking of $N$. For $n \in (P \cup T)$ we use $\bullet n$ and $n \bullet$ to denote the set of inputs and outputs of $n$ respectively. Let $C(s,e)$ indicate the number of occurrences (count) of element $e$ in multiset $s$. A transition $t\in T$ is enabled in a marking $M$ of net $N$ if $\forall p \in \bullet t : C(M,p)>0$. An enabled transition $t$ may fire, removing one token from each of the input places $\bullet t$ and producing one token for each of the output places $t\bullet$. Figure \[fig:double\_flower\] shows three Petri nets, with circles representing places and squares representing transitions. The black squares represent invisible transitions, or $\tau$ transitions. Places annotated with an **f** belong to the final marking, indicating that the process execution can terminate in this marking. The topmost Petri net in Figure \[fig:double\_flower\] initially has one token in the place $p1$, indicated by the dot. 
Firing of silent transition $t1$ takes the token from $p1$ and puts a token in both $p2$ and $p3$, enabling both $t2$ and $t3$. When $t2$ fires, it takes the token from $p2$ and puts a token in $p4$. When $t3$ fires, it takes the token from $p3$ and puts a token in $p5$. After $t2$ and $t3$ have both fired, resulting in a token in both $p4$ and $p5$, $t4$ is enabled. Executing $t4$ takes the token from both $p4$ and $p5$, and puts a token in $p6$. The **f** indicates that the process execution can stop in the marking consisting of this place. Alternatively, the process can fire $t5$, taking the token from $p6$ and placing a token in $p2$ and $p5$, which allows for execution of $MC$ and $W$ to reach the marking consisting of $p6$ again. We refer the interested reader to [@Reisig2012] for an extensive review of Petri nets. Conditional Random Field {#sec:crf} ------------------------ We view the recognition of high-level event labels as a sequence labeling task in which each event is classified as one of the higher-level events from a high-level event alphabet. Conditional Random Fields (CRFs) [@Lafferty2001] are a type of probabilistic graphical model that has become popular in the fields of language processing and computer vision for the task of sequence labeling. A Conditional Random Field models the conditional probability distribution of the label sequence given an observation sequence using a log-linear model. We use Linear-chain Conditional Random Fields, a subclass of Conditional Random Fields that has been widely used for sequence labeling tasks, which takes the following form:\ $p(y|x) = \frac{1}{Z(x)}\exp\left(\sum_{t=1}^{n}\sum_k\lambda_k f_k(t,y_{t-1},y_t,x)\right)$\ where $Z(x)$ is the normalization factor, $X=\langle x_1,\dots,x_n\rangle$ is an observation sequence, $Y=\langle y_1,\dots,y_n\rangle$ is the associated label sequence, and $f_k$ and $\lambda_k$ respectively are feature functions and their weights. Feature functions, which can be binary or real valued, are defined on the observations and are used to compute label probabilities. In contrast to Hidden Markov Models [@Rabiner1986], feature functions are not assumed to be mutually independent. 
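To make the linear-chain CRF formula above concrete, the following minimal sketch (our illustration, not part of the GRMM-based implementation used later in this paper; the feature functions, weights, and event names are hypothetical) computes $p(y|x)$ for a toy observation sequence by brute-force enumeration of all label sequences.

```python
import itertools
import math

LABELS = ["TakingMedicine", "Eating"]  # hypothetical high-level alphabet

def feature_functions(t, y_prev, y_t, x):
    """Hypothetical binary feature functions f_k(t, y_{t-1}, y_t, x)."""
    return [
        1.0 if x[t] == "Medicine cabinet" and y_t == "TakingMedicine" else 0.0,
        1.0 if x[t] == "Dishwasher" and y_t == "Eating" else 0.0,
        1.0 if y_prev == y_t else 0.0,  # label-persistence feature
    ]

WEIGHTS = [2.0, 2.0, 0.5]  # hypothetical lambda_k

def score(y, x):
    # sum_t sum_k lambda_k * f_k(t, y_{t-1}, y_t, x); the first position has no predecessor
    s = 0.0
    for t in range(len(x)):
        y_prev = y[t - 1] if t > 0 else None
        s += sum(w * f for w, f in zip(WEIGHTS, feature_functions(t, y_prev, y[t], x)))
    return s

def p_y_given_x(y, x):
    # brute-force normalization constant Z(x) over all candidate label sequences
    Z = sum(math.exp(score(cand, x)) for cand in itertools.product(LABELS, repeat=len(x)))
    return math.exp(score(y, x)) / Z

x = ["Medicine cabinet", "Water", "Dishwasher"]
y = ["TakingMedicine", "TakingMedicine", "Eating"]
print(p_y_given_x(y, x))
```

In practice $Z(x)$ is computed with dynamic programming (the forward algorithm) rather than enumeration, and the weights $\lambda_k$ are fitted with gradient-based optimizers such as the OWL-QN method used later in this paper.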
Motivating Example {#sec:motivating_example} ================== (Figure \[fig:double\_flower\]: three Petri nets — the *Taking medicine* subprocess (top), the *Eating* subprocess (bottom left), and the high-level process (bottom right).) Figure \[fig:double\_flower\] shows with a simple example how a process can be structured at a high level while this structure is not discoverable from a low-level log of this process. The bottom right Petri net shows the example process at a high level. The high-level process model allows for any finite-length alternating sequence of *Taking medicine* and *Eating* activities. The *Taking medicine* high-level activity is defined as a subprocess, corresponding to the topmost Petri net, which consists of low-level events *Medicine cabinet (MC)*, *Dishes & cups cabinet (DCC)*, and *Water (W)*. The *Eating* high-level event is also defined as a subprocess, shown in the bottom left Petri net, which consists of low-level events *Dishes & cups cabinet (DCC)* and *Cutlery drawer (CD)*, which can occur an arbitrary number of times in any order, and low-level event *Dishwasher (D)*, which occurs exactly once but at an arbitrary point in the *Eating* process. 
When we apply the Inductive Miner process discovery algorithm [@Leemans2013] to low-level traces generated by the hierarchical process of Figure \[fig:double\_flower\], we obtain the process model shown in Figure \[fig:merged\_flower\]. The obtained process model allows for almost all possible sequences over the alphabet $\{CD,D,DCC,MC,W\}$, as the only constraint introduced by the model is that *DCC* and *D* are required to be executed starting from the initial marking to end up with the same marking. Firing of all other transitions in the model can be skipped. Behaviorally this model is very close to the so called “flower” model [@Aalst2011], the model that allows for all behavior over its alphabet. The alternating structure between *Taking medicine* and *Eating* that was present in the high-level process in Figure \[fig:double\_flower\] cannot be observed in the process model in Figure \[fig:merged\_flower\]. This is caused by high variance in start and end events of the high-level event subprocesses of *Taking medicine* and *Eating* as well as by the overlap in event types between these two subprocesses. ![image](Capture_IM_Red_read){width="75.00000%"} When the event log would have consisted of the high-level *Eating* and *Taking medicine* events, process discovery techniques have no problems to discover the alternating structure in the bottom right Petri net of Figure \[fig:double\_flower\]. To discover the high-level alternating structure from a low-level event log it is necessary to first abstract the events in the event log. Through supervised learning techniques the mapping from low-level events to high-level events can be learned from examples, without requiring a hand-made ontology. Similar approaches have been explored in activity recognition in the field of ubiquitous computing, where low-level sensor signals are mapped to high-level activities from a human behavior perspective. The input data in this setting are continuous time series from sensors. Change points in these time series are triggered by low-level activities like *opening/closing the fridge door*, and the annotations of the higher level events (e.g. *cooking*) are often obtained through manual activity diaries. In contrast to unsupervised event abstraction, the annotations in supervised event abstraction provide guidance on how to label higher level events and guidance for the target level of abstraction. Event Abstraction as a Sequence Labeling Task {#sec:features} ============================================= In this section we describe an approach to supervised abstraction of events based on Conditional Random Fields. Additionally, we describe feature functions on XES event logs in a general way by using XES extensions. Figure \[fig:overview\] provides a conceptual overview of the supervised event abstraction method. The approach takes two inputs, 1) a set of annotated traces, which are traces where the high-level event that a low-level event belongs to (the *label* attribute of the low-level event) is known for all low-level events in the trace, and 2) a set of unannotated traces, which are traces where the low-level events are not mapped to high-level events. Conditional Random Fields are trained of the annotated traces to create a probabilistic mapping from low-level events to high-level events. This mapping, once obtained, can be applied to the unannotated traces in order to estimate the corresponding high-level event for each low-level event (its *label* attribute). 
Often sequences of low-level events in the traces with high-level annotations will have the same *label* attribute. We make the working assumption that no two high-level events are executed in parallel. This enables us to interpret a sequence of identical *label* attribute values as a single instance of a high-level event. To obtain a true high-level log, we *collapse* sequences of events with the same value for the *label* attribute into two events with this value as *concept* name, where the first event has a *lifecycle* ’start’ and the second has the *lifecycle* ’complete’. Table \[tab:collapse\] illustrates this collapsing procedure through an input and output event log.

[0.48]{}

  Case   Time:timestamp        Concept:name            label
  ------ --------------------- ----------------------- -----------------
  1      03/11/2015 08:45:23   Medicine cabinet        Taking medicine
  1      03/11/2015 08:46:11   Dishes & cups cabinet   Taking medicine
  1      03/11/2015 08:46:45   Water                   Taking medicine
  1      03/11/2015 08:47:59   Dishes & cups cabinet   Eating
  1      03/11/2015 08:47:89   Dishwasher              Eating
  1      03/11/2015 17:10:58   Dishes & cups cabinet   Taking medicine
  1      03/11/2015 17:10:69   Medicine cabinet        Taking medicine
  1      03/11/2015 17:11:18   Water                   Taking medicine

[0.48]{}

  Case   Time:timestamp        Concept:name      Lifecycle:transition
  ------ --------------------- ----------------- ----------------------
  1      03/11/2015 08:45:23   Taking medicine   Start
  1      03/11/2015 08:46:45   Taking medicine   Complete
  1      03/11/2015 08:47:59   Eating            Start
  1      03/11/2015 08:47:89   Eating            Complete
  1      03/11/2015 17:10:58   Taking medicine   Start
  1      03/11/2015 17:11:18   Taking medicine   Complete

\[tab:collapse\] The method described in this section is implemented and available for use as a plugin for the ProM 6 [@Verbeek2010] process mining toolkit and is based on the GRMM [@Sutton2006] implementation of Conditional Random Fields. We now show for each XES extension how it can be translated into useful feature functions for event abstraction. Note that we do not limit ourselves to XES logs that contain all XES extensions; when a log contains a subset of the extensions, a subset of the feature functions will be available for the supervised learning step. This approach leads to a feature space of unknown size, potentially causing problems related to the curse of dimensionality; therefore, we use L1-regularized Conditional Random Fields. L1 regularization causes the vector of feature weights to be sparse, meaning that only a small fraction of the features have a non-zero weight and are actually used by the prediction model. Since the L1-norm is non-differentiable, we use OWL-QN [@Andrew2007] to optimize the model. ![Conceptual overview of Supervised Event Abstraction.[]{data-label="fig:overview"}](overview_method){width="50.00000%"} From a XES Log to a Feature Space --------------------------------- ### Concept extension The low-level labels of the preceding events in a trace can contain useful contextual information for high-level label classification. Based on the n-gram of the $n$ last-seen events in a trace, we can calculate the probability that the current event has a label $l$. A multinoulli distribution is estimated for each n-gram of $n$ consecutive events, based on the training data. The Conditional Random Field model requires feature functions with numerical range. A concept extension based feature function with two parameters, $n$ and $l$, is valued with the multinoulli-estimated probability of the current event having high-level label $l$ given the n-gram of the last $n$ low-level event labels. 
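As an illustration of the concept extension feature just described, the following sketch (our addition; the function names, trace encoding, and the convention that the n-gram includes the current event are assumptions) estimates, from annotated training traces, the multinoulli (categorical) probability of a high-level label given the n-gram of the last $n$ low-level labels.

```python
from collections import Counter, defaultdict

def estimate_ngram_feature(training_traces, n):
    """training_traces: list of traces; each trace is a list of
    (low_level_label, high_level_label) pairs from annotated XES traces."""
    counts = defaultdict(Counter)  # n-gram of low-level labels -> Counter over high-level labels
    for trace in training_traces:
        low = [e[0] for e in trace]
        high = [e[1] for e in trace]
        for i in range(len(trace)):
            ngram = tuple(low[max(0, i - n + 1): i + 1])  # last n low-level labels, incl. current
            counts[ngram][high[i]] += 1

    def feature(ngram, label):
        total = sum(counts[ngram].values())
        return counts[ngram][label] / total if total else 0.0  # multinoulli estimate

    return feature

# hypothetical usage
traces = [[("Medicine cabinet", "Taking medicine"),
           ("Water", "Taking medicine"),
           ("Dishwasher", "Eating")]]
f = estimate_ngram_feature(traces, n=2)
print(f(("Medicine cabinet", "Water"), "Taking medicine"))  # -> 1.0
```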
### Organizational extension Similar to the concept extension feature functions, multinoulli distributions can be estimated on the training set for n-grams of *resource*, *role*, or *group* attributes of the last $n$ events. Likewise, an organizational extension based feature function with three parameters, n-gram size $n$, $o\in\{resource,role,group\}$, and label $l$, is valued with the multinoulli-estimated probability of label $l$ given the n-gram of the last $n$ event resources/roles/groups. ### Time extension In terms of time, there are several potentially existing patterns. A certain high-level event might for example be concentrated in a certain parts of a day, of a week, or of a month. This concentration can however not be modeled with a single Gaussian distribution, as it might be the case that a high-level event has high probability to occur in the morning or in the evening, but low probability to occur in the afternoon in-between. Therefore we use a Gaussian Mixture Model (GMM) to model the probability of a high-level label $l$ given the timestamp. Bayesian Information Criterion (BIC) [@Schwarz1978] is used to determine the number of components of the GMM, which gives the model an incentive to not combine more Gaussians in the mixture than needed. A GMM is estimated on training data, modeling the probabilities of each label based on the time passed since the start of the day, week or month. A time extension based feature function with two parameters, $t\in\{day,week,month,\dots\}$ and label $l$, is valued with the GMM-estimated probability of label $l$ given the $t$ view on the event timestamp. ### Lifecycle extension & Time extension The XES standard [@Gunther2014] defines several lifecycle stages of a process. When an event log possesses both the lifecycle extension and the time extension, time differences can be calculated between different stages of the life cycle of a single activity. For a *complete* event for example, one could calculate the time difference with the associated *start* event of the same activity. Finding the associated *start* event becomes nontrivial when multiple instances of the same activity are in parallel, as it is then unknown which *complete* event belongs to which *start* event. We assume consecutive lifecycle steps of activities running in parallel to occur in the same order as the preceding lifecycle step. For example, when we observe two *start* events of an activity of type *A* in a row, followed by two *complete* events of type *A*, we assume the first *complete* to belong to the first *start*, and the second *complete* to belong to the second *start*. We estimate a Gaussian Mixture Model (GMM) for each tuple of two lifecycle steps for a certain activity on the time differences between those two lifecycle steps for this activity. A feature based on both the lifecycle and the time extension, with a label parameter $l$ and lifecycle $c$, is valued with the GMM-estimated probability of label $l$ given the time between the current event and lifecycle $c$. Bayesian Information Criterion (BIC) [@Schwarz1978] is again used to determine the number of components of the GMM. Evaluating High-level Event Predictions for Process Mining Applications ----------------------------------------------------------------------- Existing approaches in the field of activity recognition take as input time windows where each time window is represented by a feature vector that describes the sensor activity or status during that time window. 
Hence, evaluation methods in the activity recognition field are window-based, using evaluation metrics like the percentage of correctly classified time slices [@Tapia2004; @Kasteren2007; @Kasteren2008]. There are two reasons to deviate from this evaluation methodology in a process mining setting. First, our method operates on events instead of time windows. Second, the accuracy of the resulting high level sequences is much more important for many process mining techniques (e.g. process discovery, conformance checking) than the accuracy of predicting each individual minute of the day. We use *Levenshtein similarity* that expresses the degree in which two traces are similar using a metric based on the Levenshtein distance (also known as edit distance) [@Levenshtein1966], which is defined as $Levenshtein\_similarity(a,b)=1-\frac{Levenshtein\_distance(a,b)}{max(|a|,|b|)}$. The division of the Levenshtein distance by $max(|a|,|b|)$, which is the worst case number of edit operations needed to transform any sequence of length $|a|$ into any sequence of length $|b|$, causes the result to be a number between $0$ (completely different traces) and $1$ (identical traces). Case Study 1: Smart Home Environment {#sec:case_1} ==================================== [0.92]{} ![image](kasteren_no_abstraction_dot5){width="92.00000%"} [0.8]{} ![image](kasteren_abstracion_3_edited_2){width="80.00000%"} We use the smart home environment log described by Van Kasteren et al. [@Kasteren2008] to evaluate our supervised event log abstraction method. The Van Kasteren log consists of multidimensional time series data with all dimensions binary, where each binary dimension represents the state of an in-home sensor. These sensors include motion sensors, open/close sensors, and power sensors (discretized to $0$/$1$ states). Experimental setup ------------------ We transform the multidimensional time series data from sensors into events by regarding each sensor change point as an event. Cases are created by grouping events together that occurred in the same day, with a cut-off point at midnight. High-level labels are provided for the Van Kasteren data set. The generated event log based on the Van Kasteren data set has the following XES extensions: Concept : [The sensor that generated the event.]{} Time : [The time stamp of the sensor change point.]{} Lifecycle : [*Start* when the event represents a sensor value change from $0$ to $1$ and *Complete* when it represents a sensor value change from $1$ to $0$.]{} Note that annotations are provided for all traces in the obtained event log. To evaluate how well the supervised event abstraction method generalized to unannotated traces, we artificially use a part of the traces to train the abstraction model and apply them on a test set where we regard the annotations to be non-existent. We evaluate the obtained high-level labels against the ground truth labels. We use a variation on Leave-One-Out-Cross-Validation where we leave out one trace to evaluate how well this mapping generalizes to unseen events and cases. Results ------- Figure \[fig:kasteren\_no\_abstraction\] shows the result of the Inductive Miner [@Leemans2013] for the low-level events in the Van Kasteren data set. The resulting process model starts with many parallel activities that can be executed in any order and contains many unobservable transitions back. This closely resembles the flower model, which allows for any behavior in any arbitrary order. 
From the process model we can learn that *toilet flush* and *cups cupboard* frequently co-occur. Furthermore, the process model indicates that *groceries cupboard* is often followed by *dishwasher*. There seems to be very little structure on this level of event granularity. The average Levenshtein similarity between the predicted high-level traces in the Leave-One-Trace-Out-Cross-Validation experimental setup and the ground truth high-level traces is $0.7042$, which shows that the supervised event abstraction method produces traces which are fairly similar to the ground truth. Figure \[fig:kasteren\_abstraction\] shows the result of the Inductive Miner on the aggregated set of predicted test traces. Figure \[fig:kasteren\_abstraction\] shows that the process discovered at the high level of granularity is more structured than the process discovered at the original level of granularity (Figure \[fig:kasteren\_no\_abstraction\]). In Figure \[fig:kasteren\_abstraction\], we can see that the main daily routine starts with breakfast, followed by a shower, after which the subject leaves the house to go to work. After work the subject prepares dinner and has a drink. The subject’s mainstream behavior is to go to the toilet before going to bed, but he can then wake up later to go to the toilet and then continue sleeping. Note that the day can also start with going to bed. This is related to the cut-off point of a trace at midnight. Days when the subject went to bed after midnight result in a case where going to bed occurs at the start of the trace. On these days, the subject might have breakfast and then perform the activity sequence use toilet, take shower, and leave house, possibly multiple times. Another possibility on days when the subject went to bed after midnight is that he starts by using the toilet, then has breakfast, then has the possibility to leave the house, then takes a shower, after which he always leaves the house. The prepare dinner activity is not performed on these days. This case study shows that we can find a structured high-level process from a low-level event log where the low-level process is unstructured, using supervised event abstraction and process discovery. Case Study 2: Artificial Digital Photocopier {#sec:case_2} ============================================ [0.8]{} ![image](adp_no_abstraction_2){width="85.00000%"} [0.8]{} ![image](adp_abstraction_2_edited){width="85.00000%"} Bose et al. [@Bose2012; @Bose2012b] created a synthetic event log based on a digital photocopier to evaluate their unsupervised methods of event abstraction. In this case study we show that the described supervised event abstraction method can accurately abstract to high-level labels. Experimental setup ------------------ We annotated each low-level event with the correct high-level event using domain knowledge from the actual process model as described by Bose et al. [@Bose2012; @Bose2012b]. This event log is generated by a hierarchical process, where high-level events *Capture Image*, *Rasterize Image*, *Image Processing* and *Print Image* are defined in terms of a process model. The *Print Image* subprocess amongst others contains the events *Writing*, *Developing* and *Fusing*, which are themselves defined as subprocesses. In this case study we set the task to transform the log such that the subprocesses *Capture Image*, *Rasterize Image*, *Image Processing*, *Writing*, *Fusing* and *Developing* become the high-level events of the log. 
Subprocesses *Writing* and *Developing* both contain the low-level event types *Drum Spin Start* and *Drum Spin Stop*. In this case study we focus in particular on the *Drum Spin Start* and *Drum Spin Stop* events, as they make the abstraction task non-trivial in the sense that no one-to-one mapping from low-level to high-level events exists. The artificial digital photocopier data set has the concept, time and lifecycle XES extensions. On this event log annotations are available for all traces. On this data set we use a 10-Fold Cross-Validation setting on the traces to evaluate how well the supervised event abstraction method abstracts low-level events to high-level events on unannotated traces, as this data set is larger than the Van Kasteren data set and Leave-One-Out-Cross Validation would take too much time. Results ------- The confusion matrix in Table \[tab:conf\_mat\_adp\] shows the aggregated results of the mapping of low-level events *Drum Spin Start* and *Drum Spin Stop* to high-level events *Developing* and *Writing*. The results show that the supervised event abstraction method is capable of detecting the many-to-many mappings between the low-level and high-level labels, as it maps these low-level events to the correct high-level event without making errors. The Levenshtein similarity between the aggregated set of test fold high-level traces and the ground truth high-level traces is close to perfect: 0.9667. ------------ ------------ --------- Developing Writing Developing 6653 0 Writing 0 917 ------------ ------------ --------- : Confusion matrix for classification of *Drum Spin Start* and *Drum Spin Stop* low-level events into high-level events *Writing* and *Developing*.[]{data-label="tab:conf_mat_adp"} Figure \[adp\_no\_abstraction\] shows the process model obtained with the Inductive Miner on the low-level events in the artificial digital photocopier dataset. The two sections in the process model that are surrounded by dashed lines are examples of high-level events within the low-level process model. Even though the low-level process contains structure, the size of the process model makes it hard to comprehend. Figure \[adp\_abstraction\] shows the process model obtained with the same process discovery algorithm on the aggregated high-level test traces of the 10-fold cross validation setting. This model is in line with the official artificial digital photocopier model specification, with the *Print Image* subprocess unfolded, as provided in [@Bose2012; @Bose2012b]. In contrast to the event abstraction method described by Bose et al. [@Bose2012b] which found the high-level events that match specification, supervised event abstraction is also able to find suitable event labels for the generated high-level events. This allows us to discover human-readable process models on the abstracted events without performing manual labeling, which can be a tedious task and requires domain knowledge. Instead of event abstraction on the level of the event log, unsupervised abstraction methods that work on the level of a model (e.g. [@Vanhatalo2009]) can also be applied to make large complex models more comprehensible. Note that such methods also do not give guidance on how to label resulting transitions in the process model. Furthermore, such methods do not help in cases where the process on a low-level is unstructured, like in the case study as described in Section \[sec:case\_1\]. 
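Both case studies report the Levenshtein similarity between predicted and ground-truth high-level traces ($0.7042$ and $0.9667$ respectively). A minimal sketch of this metric, as defined in the evaluation section (our illustration, not the code of the ProM plugin), is shown below.

```python
def levenshtein_distance(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def levenshtein_similarity(a, b):
    # 1 - distance / max(|a|, |b|), in [0, 1]; 1 means identical traces
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein_distance(a, b) / max(len(a), len(b))

print(levenshtein_similarity(["Breakfast", "Shower", "Leave house"],
                             ["Breakfast", "Leave house"]))  # -> 0.666...
```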
This case study shows that supervised event abstraction can help generating a comprehensible high-level process model from a low-level event log, when a low-level process model would be too large to be understandable. Conclusion {#sec:conclusion} ========== In this paper we described a method to abstract events in a XES event log that is too low-level, based on supervised learning. The method consists of an approach to generate a feature representation of a XES event, and of a Conditional Random Field based learning step. An implementation of the method described has been made available as a plugin to the ProM 6 process mining toolkit. We introduced an evaluation metric for predicted high-level traces that is closer to process mining than time-window based methods that are often used in the sequence labeling field. Using a real life event log from a smart home domain, we showed that supervised event abstraction can be used to enable process discovery techniques to generate high-level process insights even when process models discovered by process mining techniques on the original low-level events are unstructured. Finally, we showed on a synthetic event log that supervised event abstraction can be used to discover smaller, more comprehensible, high-level process models when the process model discovered on low level events is too large to be interpretable.
1
--- abstract: | In the design of incentive compatible mechanisms, a common approach is to enforce incentive compatibility as constraints in programs that optimize over feasible mechanisms. Such constraints are often imposed on sparsified representations of the type spaces, such as their discretizations or samples, in order for the program to be manageable. In this work, we explore limitations of this approach, by studying whether all dominant strategy incentive compatible mechanisms on a set $T$ of discrete types can be extended to the convex hull of $T$. Dobzinski, Fu and Kleinberg (2015) answered the question affirmatively for all settings where types are single dimensional. It is not difficult to show that the same holds when the set of feasible outcomes is downward closed. In this work we show that the question has a negative answer for certain non-downward-closed settings with multi-dimensional types. This result should call for caution in the use of the said approach to enforcing incentive compatibility beyond single-dimensional preferences and downward closed feasible outcomes. author: - 'Submission Number: 9394' - | Taylor Lundy and Hu Fu\ University of British Columbia\ tlundy@cs.ubc.ca, hufu@cs.ubc.ca\ bibliography: - 'ref.bib' date: May 2019 title: Limitations of Incentive Compatibility on Discrete Type Spaces --- Introduction {#sec:intro} ============ Mechanism design studies optimization problems with private inputs from strategic agents. An agent’s input, known as her *type*, is her private information including her valuation for the social outcomes, which are to be decided upon by the mechanism. A mechanism needs to solicit such information to achieve certain goals, e.g. maximizing welfare, revenue, surplus or fairness measures. It needs to provide its participants with correct incentives, via both social outcomes and payments, so that the agents find it in their best interests to reveal their types. Dominant strategy incentive compatibility (DSIC) is one of the strongest and most widely used solution concepts that guarantee such incentives. Under DSIC, every participant, no matter what type she possesses and no matter what types the other participants report to the mechanism, will maximize her utility by revealing her true type. Not only is this a strong guarantee for the mechanism designer that true information should be reported and optimized over, it also alleviates the burden of strategizing from the participating agents — telling truth is a dominant strategy regardless of the other agents’ types or strategies. Partly thanks to this strong incentive guarantee, the two fundamental auctions, namely, the VCG auction that maximizes social welfare [@Vic61; @Clarke71; @Groves73], and Myerson’s auction that maximizes expected revenue for selling a single item [@Myerson81], have been foundational in both the theory and practice of mechanism design. As the scope of mechanism design expands beyond the classical settings, incentive compatible mechanisms that are optimal for various objective often lack the simple structures that characterize the VCG and Myerson’s mechanisms. By and large, there have been two approaches to the design of incentive compatible mechanisms. The first approach focuses on classes of mechanisms that, by their simple structures, have obvious incentive guarantees. 
For example, in a multi-item auction, a sequential pricing mechanism puts prices on items and asks each agent in turn to choose her favorite items that remain; the bidders are not asked about their values, and choosing utility-maximizing items (according to their true values) is the obvious strategy to adopt (see, e.g., [@CHMS10]; [@CMS10]; [@FGL15]). Another example is to optimize over parameterized “VCG-like” mechanisms which inherit incentive properties from the VCG mechanism (e.g. [@sandholm2015automated]). This approach is often used to search for mechanisms whose performance is a factor away from being optimal, since the optimal mechanism or its very close approximations are often not within the class of mechanisms being searched over. The second approach forgoes structures that are easily interpretable, and exhaustively searches for the optimal mechanism. This is exemplified by solving mathematical programs (typically linear or convex programs) whose feasible regions contain the set of all incentive compatible mechanisms (see, e.g. [@conitzer2002complexity]; [@DFK15]; [@DDT17];[@FH18]). Typically, incentive requirements are hardwired as constraints in such programs. Difficulty arises in the second approach when one would like to adopt strong incentive guarantees such as DSIC, which need at least one constraint per profile of types to specify. When the space of possible types is a continuum, this gives rise to uncountably many constraints. While this does not always make the program impossible to solve it considerably complicates the task. One way to work around this is to discretize the type space and only impose incentive compatible (IC) constraints on the set of discrete types used to represent the type space. Discretization is also embodied in the idea of a given prior distribution over a set of discrete types, on which the optimization can then be based (e.g. [@conitzer2002complexity]; [@DFK15]). The most common motivation for such prior distributions is that they naturally result from samples (e.g. from past observations or market research) from an underlying distribution, whereas the true distribution may be supported on a continuum of types. This approach motivates the question we study in this work. #### Questions we study. In this work we aim to answer the question: when one has a mechanism that is DSIC on a discretized subset of a type space, can one always find a mechanism that has the same behavior on the subset and yet is DSIC on the whole type space? To make the question more concrete, we study the natural case where the whole type space is the convex hull of the discrete subset. To make the presentation easier, in the following we denote by $\typespaces$ the discrete subset of types, and $\operatorname{Conv}(\typespaces)$ its convex hull. We consider the question a fundamental one for the second approach to mechanism design that we described above. When one optimizes for a mechanism with IC constraints imposed only on $\typespaces$, if the resulting mechanism cannot be *extended* to the original type space, it loses incentive guarantees when put to use. One objection may be that, given a mechanism that is DSIC on $\typespaces$, one may always run it on $\operatorname{Conv}(\typespaces)$, by restricting the “bidding language”, so that a type in $\operatorname{Conv}(\typespaces)$ but not in $\typespaces$ has to report a type in $\typespaces$. Such mechanisms, however, may lose the incentive guarantee which makes DSIC mechanisms attractive in the first place. 
Unless one can show that agents with types not in $\typespaces$ have a dominant strategy in such mechanisms, such agents need to strategize over which types in $\typespaces$ to report, depending on the types and strategies of their opponents. In the scenario where $\typespaces$ is a set of samples from a continuous distribution, the vast majority of types may not be in $\typespaces$ and have no incentive guarantee, which is clearly undesirable. In settings where the agents’ types are single dimensional, @DFK15 showed that “restricting the bidding language” does turn any mechanism DSIC on $\typespaces$ into a mechanism DSIC on any superset of $\typespaces$: each type in the superset has a dominant strategy, and by the revelation principle this gives rise to a DSIC mechanism that extends the given mechanism’s behavior on $\typespaces$. To the best of our knowledge, no such guarantees are known beyond single dimensional settings. #### Our Results. For agents with multi-dimensional types, we first give a condition under which any DSIC mechanism on $\typespaces$ can be extended to a DSIC mechanism on $\operatorname{Conv}(\typespaces)$, via an argument that is different from @DFK15’s yet still straightforward (Theorem \[thm:swap\]). In particular, the condition is satisfied whenever the set of feasible outcomes is *downward closed* (Theorem \[thm:downward\]). Our main result, however, is a construction of a set $\typespaces$ of multi-dimensional types and a DSIC mechanism on it, for which we show that no DSIC mechanism on $\operatorname{Conv}(\typespaces)$ can output the same social outcomes on types in $\typespaces$. The impossibility result stands even if the extension mechanism is allowed to be randomized. This shows that, without conditions such as single-dimensional types or downward closed set of feasible outcomes, designing incentive compatible mechanisms by focusing on a discrete subset of types can be a questionable approach to designing mechanisms for the whole type space — there may not be any mechanism DSIC on the whole type space which behave the same way on the subset. Near the end, we give a multi-dimensional setting where the expected *revenue* of a mechanism with only correct incentives for a set $\typespace$ of types can be unboundedly more than the revenue of a mechanism for $\operatorname{Conv}(\typespace)$. This example is much less involved than our main result, because revenue optimal mechanisms are meaningful only when they do not overcharge any reported type and guarantee non-negative utility. This constraint can be much more stringent when imposed for all types in $\operatorname{Conv}(\typespace)$ than for $\typespace$ only. Related Works {#sec:related} ------------- For multi-dimensional preferences, an allocation rule for any fixed agent is implementable if and only if it satisfies the so-called *cyclic monotonicity* property [@Rochet87]. When the type space is convex, it turns out the weaker condition of *weak monotonicity* suffices for implementability [@SY05; @AK14]. It is notable that the two solution concepts we compare in Section \[sec:mapping\] precisely correspond to the case where the type space is convex and that where it is not. However, nowhere in our arguments do we make use of this beautiful fact. Another closely related body of work, is the literature on automated mechanism design. In automated mechanism design, mechanisms are optimized for the setting and objective automatically using information about agents’ type distributions. 
When this line of work was introduced by @conitzer2002complexity, the input for the problem was an explicit description of the agents’ type distributions; more recent work has moved towards replacing this explicit description with samples from the agents’ type distributions [@likhodedov2004methods; @likhodedov2005approximating; @sandholm2015automated]. Our work highlights how interpolating between discrete samples can affect not only the objective but also the implementability of the mechanism itself. Luckily, existing research on sample-based automated mechanism design avoids the pitfalls of only having discrete samples, either by optimizing over parameterized families of mechanisms which are guaranteed to be implementable on the entire type space, or by working in settings where the addition of new types has no effect on the objective (i.e., downward closed settings) [@sandholm2015automated; @guo2010computationally; @balcan2018; @morgenstern2016learning]. However, our work points out some difficulties that might arise if one wishes to take a more general, non-parameterized approach to automated mechanism design in settings which are not downward closed. There are a variety of well-studied settings that are not downward closed in which extending a type space to its convex hull could cause problems. One commonly studied non-downward-closed setting arises from the job scheduling problem introduced by @nisan2001algorithmic and later built upon by @schedule2 and @ashlagi2012optimal, since in this problem every job must eventually be scheduled and the set of feasible solutions is not downward closed. Another example of a non-downward closed setting is one-sided matching markets in which every agent must be matched with exactly one good. An example of a one-sided matching market is the fair housing allocation studied by @matchmarket. Finally, the facility location problem from @devanur2005strategyproof is also not downward closed. Preliminaries {#sec:prelim} ============= We consider a setting with $N$ agents where each agent $i$ has a private type $\typei$ from her type space $\typespacei \subseteq \mathbb R_+^m$. The type profile $\types = (\typei[1], \ldots, \typei[N])$ denotes the vector of all agents’ types, from the joint type space $\typespaces \coloneqq \prod_i \typespacei$. We adopt the standard shorthand notation to write $\typesmi \coloneqq (\type_{1}, \ldots, \type_{i-1}, \type_{i+1}, \ldots, \type_{N})$ from $\typespacesmi \coloneqq \prod_{j \neq i} \typespace_{j}$. An outcome (or, interchangeably, allocation) for agent $i$ lies in $\mathbb R_+^m$; for an outcome $\alloci$, the agent with type $\typei$ has value $\langle \typei, \alloci \rangle = \sum_{j = 1}^m \type_{ij} \alloc_{ij}$. A social outcome is denoted by a vector $(\alloci[1], \ldots, \alloci[N]) \in \mathbb R_+^{mN}$. The set of all feasible social outcomes (or allocations) is denoted ${\mathscr{F}}\subseteq \mathbb R_+^{mN}$. For example, in a single-item auction, $m = 1$, each $\typei \in \mathbb R_+$ represents agent $i$’s value for the item, and ${\mathscr{F}}\subseteq \mathbb R_+^N$ consists of the all-zero vector (representing not selling) and the $N$ standard basis vectors (each representing selling to the corresponding agent). As another example, in an $m$-unit auction with unit-demand buyers, ${\mathscr{F}}\subseteq \mathbb R_+^{mN}$ is the set of all integral points in $\{(\alloci[1], \ldots, \alloci[N]) \in \mathbb R_+^{mN} \mid \sum_{i = 1}^N \alloci[ij] \leq 1, j = 1, 2, \ldots, m\}$. #### Mechanisms. 
A (direct revelation) mechanism consists of an *allocation rule* $\allocs: \typespaces \to {\mathscr{F}}$ and a *payment rule* $\pays: \typespaces \to \mathbb R_+^N$. The mechanism elicits type reports from the agents, and on reported type profile $\types$, decides on an allocation $\allocs(\types) \in {\mathscr{F}}$, with each agent $i$ making a payment of $\payi(\types)$. In general, allocation rules can be randomized, in which case $\allocs(\types)$ is a random variable supported on ${\mathscr{F}}$. $\allocs(\cdot)$ induces an allocation rule $\alloci(\cdot)$ for each agent $i$: for all $\types$, $\alloci(\types) \in \mathbb R_+^m$ is the vector consisting of the $[(i-1)m + 1]$-st to the $im$-th coordinates of $\allocs(\types)$. When $\allocs(\types)$ is a random variable, so are the $\alloci(\types)$’s. When $\allocs(\types)$ is deterministic, we write $\allocs(\types) = \mathbf{y} \in {\mathscr{F}}$ as a shorthand for $\operatorname{\mathbf{Pr}}\left[ \allocs(\types) = \mathbf y \right] = 1$. Agents have quasi-linear utilities, that is, when reporting type $\typei'$, agent $i$’s utility is $\operatorname{\mathbf E}\left[ \langle \typei, \alloci(\typei', \typesmi) \rangle \right] - \payi(\typei', \typesmi)$ (where the expectation is taken over the randomness in $\allocs(\types)$). A mechanism is *dominant strategy incentive compatible* (DSIC) if, for all $\types \in \typespaces$ and for all $\typei' \in \typespacei$, $\operatorname{\mathbf E}\left[ \langle \typei, \alloc_{i}(\typei, \typesmi) \rangle \right] - \payi(\typei, \typesmi) \geq \operatorname{\mathbf E}\left[ \langle \typei, \alloc_{i}(\typei', \typesmi) \rangle \right] - \payi(\typei', \typesmi)$. An allocation rule $\allocs$ is said to be DSIC implementable, or simply DSIC, if there is a payment rule $\pays$ such that $(\allocs, \pays)$ is a DSIC mechanism. In this case, we say $\allocs$ is implemented by the payment rule $\pays$. #### Extensions. Given a subset $S \subseteq \mathbb R^n$, we denote by $\operatorname{Conv}(S)$ the convex hull of $S$. \[def:extension\] An allocation rule $\extallocs: \operatorname{Conv}(\typespaces) \to {\mathscr{F}}$ is an *extension* of an allocation rule $\allocs: \typespaces \to {\mathscr{F}}$ if for all $\types \in \typespaces$, $\extallocs(\types)$ has the same distribution as $\allocs(\types)$. Similarly, a payment rule $\extpays: \operatorname{Conv}(\typespaces) \to \mathbb R_+^N$ is an extension of a payment rule $\pays: \typespaces \to \mathbb R_+^N$ if for all $\types \in \typespaces$, $\extpays(\types) = \pays(\types)$. In Definition \[def:extension\], if $\allocs(\cdot)$ is deterministic, then $\extallocs(\cdot)$ being an extension simply means $\extallocs(\types) = \allocs(\types)$ for all $\types \in \typespaces$. #### Downward closed settings. 
The feasible allocation set ${\mathscr{F}}$ is *downward closed* if $\mathbf y \in {\mathscr{F}}$ entails $\allocs \in {\mathscr{F}}$ for all $\allocs \preceq \mathbf y$, where $\allocs \preceq \mathbf y$ denotes $\alloci[j] \leq y_j$ for $j = 1, \ldots, mN$. #### Weak monotonicity. A well-known necessary condition for an allocation rule to be DSIC implementable is weak monotonicity: an allocation rule $\allocs: \typespaces \to {\mathscr{F}}$ is *weakly monotone* if for each agent $i$, any $\typei, \typei' \in \typespacei$ and $\typesmi \in \typespacesmi$, $$\begin{aligned} \operatorname{\mathbf E}\left[ \langle \typei - \typei', \alloci(\typei,\typesmi) - \alloci(\typei', \typesmi) \rangle \right] \geq 0. \end{aligned}$$ An allocation rule is implementable only if it is weakly monotone. In fact, @SY05 showed that, if $\typespaces$ is convex, then weak monotonicity is also a sufficient condition for DSIC implementability. #### Revenue. A mechanism $(\allocs, \pays)$ is ex post *individually rational* (IR) if for each agent $i$ and for every $\types \in \typespaces$, $\langle \typei, \alloc_{i}(\types) \rangle - \payi(\types) \geq 0$. Given a distribution $D$ on $\typespaces$ and a mechanism $(\allocs, \pays)$ that is DSIC and ex post IR, the *expected revenue* of the mechanism is $\operatorname{\mathbf E}_{\types \sim D}\left[ \sum_i \payi(\types) \right]$. The *optimal* revenue is the maximum expected revenue achievable among all DSIC, ex post IR mechanisms. DSIC Convex Extensions {#sec:map} ====================== \[sec:mapping\] Before presenting our main result on the impossibility of extending DSIC allocation rules, we first complement @DFK15's result in the single-dimensional setting with a simple observation for multi-dimensional preference settings: whenever the feasible allocation space is downward closed, any DSIC allocation rule on a type space can be extended to its convex hull by another DSIC allocation rule. \[thm:downward\] If the set of feasible allocations ${\mathscr{F}}$ is downward closed, then for any DSIC allocation rule $\allocs$ on a type space $\typespaces$ there is a DSIC extension $\extallocs$ of $\allocs$ on $\operatorname{Conv}(\typespaces)$. If $\allocs$ is implemented with a payment rule $\pays$, $\extallocs$ can be implemented by an extension $\extpays$ of $\pays$. If $\pays$ is individually rational on $\typespaces$, so is $\extpays$ on $\operatorname{Conv}(\typespaces)$. If we do not require the statement about individual rationality, extensibility is guaranteed by an even weaker condition, which we call *single swap feasible*. A feasible allocation set ${\mathscr{F}}$ is *single swap feasible* (SSF) if for every agent $i$ there exists an allocation ${\boldsymbol{x}^{\mathrm{ssf}}}(i) \in {\mathscr{F}}$ such that for any $\allocs' \in {\mathscr{F}}$, $(\alloci',{\boldsymbol{x}^{\mathrm{ssf}}}_{-i}(i)) \in {\mathscr{F}}$.
Intuitively, ${\boldsymbol{x}^{\mathrm{ssf}}}(i)$ is a feasible allocation vector such that if we replace the $i^{\text{th}}$ element of this vector with the $i^{\text{th}}$ element from any other feasible allocation, the resulting allocation is still feasible. If ${\mathscr{F}}$ is a product space or is downward closed, it must be SSF. [^1] \[thm:swap\] If the set of feasible allocations ${\mathscr{F}}$ is SSF, then for any DSIC allocation rule $\allocs$ on a type space $\typespaces$ there is a DSIC extension $\extallocs$ of $\allocs$ on $\operatorname{Conv}(\typespaces)$. The proofs of both Theorem \[thm:downward\] and Theorem \[thm:swap\] can be found in the supplementary materials. The main result of this paper is that without this condition a DSIC extension may not exist. \[thm:rand\] There is a two-agent type space $\randtypespaces$ with a DSIC allocation rule $\randallocs$, such that $\randallocs$ cannot be extended by a DSIC allocation rule to $\operatorname{Conv}(\randtypespaces)$. We prove the theorem in two steps. We first present a setting with three-dimensional preferences for which we show the non-existence of *deterministic* extensions. We then build on the construction, lifting it to a higher dimension, where we strengthen the argument and show the non-existence of extensions that even allow randomization. Non-existence of deterministic extensions ----------------------------------------- We first present the type space $\dettypespaces = \dettypespacei[1] \times \dettypespacei[2]$ and the allocation rule $\detallocs$, and then show that $\detallocs$ is DSIC and yet cannot be extended by any deterministic DSIC allocation rule on $\operatorname{Conv}(\dettypespaces)$. The two agents have identical type spaces: for $i = 1, 2$, $\dettypespacei = \dettypespace \coloneqq \{A = [1, 0, 0], B = [0, 1, 0], C = [0, 0, 1], D = [\tfrac 1 3, \tfrac 1 3, \tfrac 1 3]\}$. A visual representation of this type space and its convex hull can be found in Figure 1 in the supplementary materials. The allocation rule $\detallocs$ is also symmetric, in the sense that $\detalloci[1](\typei[1], \typei[2]) = \detalloci[2](\typei[2], \typei[1])$ for any $\typei[1], \typei[2] \in \dettypespace$. We summarize $\detalloci[1]$ with the diagram below. The rows are indexed by agent $1$'s own type $V_1$, and the columns by agent $2$'s type $V_2$: $$\begin{aligned} \begin{blockarray}{c cccc} & A & B & C & D \\ \begin{block}{c(cccc)} A & [1,1,0] & [2,0,2] & [3,0,3] & [4,0,4] \\ B & [0,1,1] & [2,2,0] & [3,3,0] & [4,4,0] \\ C & [1,0,1] & [0,2,2] & [0,3,3] & [0,4,4] \\ D & [0,1,1] & [2,2,0] & [3,3,0] & [4,4,0] \\ \end{block} \end{blockarray} \end{aligned}$$ The set of all feasible allocations is then ${\mathscr{F}}= \{(\detalloci[1](V_1, V_2), \detalloci[2](V_1, V_2) )\}_{V_1, V_2 \in \dettypespace}$. We hasten to point out that ${\mathscr{F}}$ is *not* the product between the two agents' respective sets of feasible allocations. For example, $[1, 1, 0, 1, 1, 0]$ is in ${\mathscr{F}}$ as it is $\detallocs(A, A)$, but $[1, 1, 0, 0, 1, 1]$ is not. This is important for the proof. $\detallocs$ is DSIC implementable. Let the payment be 0 for both agents and all type profiles. As the allocation and payment rules are both symmetric, consider either agent $i$. If $\typei[-i] = A$, the maximum value agent $i$ could get, when her type is $A$, $B$, or $C$, is 1, attained with truthful bidding. For $\typei = D$, the four allocations all give the same value $\tfrac 2 3$. Similar arguments hold when $\typei[-i]$ is $B$, $C$ or $D$.
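Since $\dettypespaces$ is finite, the zero-payment DSIC argument above can also be verified mechanically. The following Python sketch is our own illustration (none of the variable names come from the paper): it transcribes $\dettypespace$ and the table for $\detalloci[1]$, and brute-forces the dominant-strategy constraints for agent $1$ with all payments set to zero; by symmetry the same check covers agent $2$.

```python
from itertools import product

# The four types of \dettypespace as vectors in R^3.
TYPES = {"A": (1, 0, 0), "B": (0, 1, 0), "C": (0, 0, 1), "D": (1/3, 1/3, 1/3)}

# Agent 1's allocation \detalloci[1](V1, V2), transcribed from the table above
# (rows = agent 1's report V1, columns = agent 2's report V2).
ALLOC1 = {
    ("A", "A"): (1, 1, 0), ("A", "B"): (2, 0, 2), ("A", "C"): (3, 0, 3), ("A", "D"): (4, 0, 4),
    ("B", "A"): (0, 1, 1), ("B", "B"): (2, 2, 0), ("B", "C"): (3, 3, 0), ("B", "D"): (4, 4, 0),
    ("C", "A"): (1, 0, 1), ("C", "B"): (0, 2, 2), ("C", "C"): (0, 3, 3), ("C", "D"): (0, 4, 4),
    ("D", "A"): (0, 1, 1), ("D", "B"): (2, 2, 0), ("D", "C"): (3, 3, 0), ("D", "D"): (4, 4, 0),
}

def value(t, x):
    """Linear valuation <t, x>."""
    return sum(ti * xi for ti, xi in zip(t, x))

def dsic_with_zero_payments():
    """True iff truthful reporting maximizes agent 1's value for every opponent report."""
    for true_type, v2 in product(TYPES, TYPES):
        truthful = value(TYPES[true_type], ALLOC1[(true_type, v2)])
        for report in TYPES:
            if value(TYPES[true_type], ALLOC1[(report, v2)]) > truthful + 1e-12:
                return False
    return True

print(dsic_with_zero_payments())  # expected output: True
```

A brute-force check of this kind is only possible because the type space is finite; on $\operatorname{Conv}(\dettypespaces)$ no such enumeration is available, which is part of what makes the extension question nontrivial.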
\[thm:det\] There exists no deterministic DSIC extension of $\detallocs$. Before proving Theorem \[thm:det\], we make several preparatory observations. A key difficulty with multi-dimensional preferences is the lack of a payment identity à la Myerson [@Myerson81]. In order to argue that any extension of an allocation rule is not DSIC, one has to either check the many cyclic (or weak) monotonicity conditions, or show that no payment rule can support the extension in a DSIC mechanism. We designed $\dettypespaces$ and $\detallocs$ carefully so that the allocations "lock" the payment rules. \[lem:equal-pay\] For any allocation rule $\extallocs$ that is an extension of $\detallocs$, if $\extallocs$ can be implemented by a DSIC mechanism with payment rule $\pays$, then for any $V \in \{A, B, C, D\}$, $\payi[1](A, V) = \payi[1](B, V) = \payi[1](C, V) = \payi[1](D, V)$, and $\payi[2](V, A) = \payi[2](V, B) = \payi[2](V, C) = \payi[2](V, D)$. We prove the lemma for agent $1$, and the statement for agent $2$ follows by symmetry. Take $V = A$ for concreteness. By DSIC, we have $$\begin{aligned} \langle A , \detalloci[1](A,V)\rangle & -\payi[1](A,V) \\ & \geq \langle A , \detalloci[1](C,V)\rangle -\payi[1](C,V); \\ \langle B , \detalloci[1](B,V)\rangle & -\payi[1](B,V) \\ & \geq \langle B , \detalloci[1](A,V)\rangle -\payi[1](A,V); \\ \langle C, \detalloci[1](C,V)\rangle & -\payi[1](C,V) \\ & \geq \langle C , \detalloci[1](B,V)\rangle -\payi[1](B,V).\end{aligned}$$ Note that $$\begin{aligned} \langle A , \detalloci[1](A,V)\rangle = \langle A , \detalloci[1](C,V)\rangle; \\ \langle C , \detalloci[1](C,V)\rangle = \langle C , \detalloci[1](B,V)\rangle; \\ \langle B , \detalloci[1](B,V)\rangle = \langle B , \detalloci[1](A,V)\rangle. \end{aligned}$$ Therefore $\payi[1](A,V) \leq \payi[1](C,V) \leq \payi[1](B, V) \leq \payi[1](A, V)$. Hence all inequalities are tight and we have $\payi[1](C,V) = \payi[1](A,V) = \payi[1](B,V)$. For $V \in \{B, C, D\}$ the same cycle argument applies with the roles of the misreports permuted: there, each of the types $A$, $B$, and $C$ is indifferent between its own allocation and that of the next type in the cycle $A \to B \to C \to A$, and chaining the corresponding DSIC inequalities again forces $\payi[1](A,V) = \payi[1](B,V) = \payi[1](C,V)$. For type $D$'s payment, note that $\detalloci[1](B, V)= \detalloci[1](D, V)$ for every $V$. If $\payi[1](B, V) \neq \payi[1](D, V)$, one of $B$ and $D$ must be incentivized to misreport the other type. Therefore $\payi[1](B, V) = \payi[1](D, V)$. In Figure 2 in the supplementary materials we give a $3$-dimensional visualization of the agent's value for each allocation, which gives intuition for the proof of Lemma \[lem:equal-pay\]. \[lem:fixed-menu\] If $\extallocs$ is a deterministic DSIC extension of $\detallocs$, then for $\typei[1] = \frac{1}{3}A + \frac{1}{3}B + \frac{1}{3}D$ and any $V_2 \in \dettypespace$, $\extalloci[1](\typei[1], V_2) \in \{\extalloci[1](A, V_2), \extalloci[1](B, V_2), \extalloci[1](C, V_2), \extalloci[1](D, V_2)\}$. For the sake of contradiction, assume $\extallocs$ is a DSIC extension of $\detallocs$, implementable by payment rule $\pays$, and for $\typei[1]$ and $V_2 \in \dettypespace$, $\extalloci[1](\typei[1], V_2) = \detalloci[1](V_1, V_2')$ for some $(V_1, V_2') \in \dettypespaces$ and $V_2' \neq V_2$. For any $V_2$, one of $\detalloci[1](A, V_2)$ and $\detalloci[1](B, V_2)$ gives an equal positive value to both $A$ and $B$. Let $V_1^*$ be the type that induces this equally valued allocation. (For example, if $V_2 = B$, then $\detalloci[1](A, B) = [2, 0, 2]$ and $\detalloci[1](B, B) = [2, 2, 0]$. Both $A$ and $B$ have the same value for $\detalloci[1](B, B)$ and so type $B$ would be $V_1^*$.)
Observe that, for the allocation $\detalloci[1](V_1, V_2')$, $V_1$ has positive value $\langle V_1, \detalloci[1](V_1, V_2') \rangle$, and no other type has higher value for it. Therefore, type $\typei[1]$ has value at most $\langle \frac{2}{3} V_1 + \frac{1}{3} D, \detalloci[1](V_1, V_2') \rangle$ for the allocation. In order for $\typei[1]$ to have no incentive to misreport $V_1^*$, we must have $$\begin{gathered} \langle \frac{2}{3} V_1 + \frac{1}{3} D, \detalloci[1](V_1, V_2') \rangle - \payi[1](\typei[1], V_2) \\ \geq \langle \frac{2}{3} V_1^* + \frac{1}{3} D, \detalloci[1](V_1^*, V_2) \rangle - \payi[1](V_1^*, V_2) \label{eq:t1-V1}\end{gathered}$$ On the other hand, in order for type $V_1$ not to have incentive for deviating to $\typei[1]$, we have $$\begin{gathered} \langle V_1, \detalloci[1](V_1, V_2) \rangle - \payi[1](V_1, V_2) \\ = \langle V_1^*, \detalloci[1](V_1^*, V_2) \rangle - \payi[1](V_1^*, V_2) \\ \geq \langle V_1, \detalloci[1](V_1, V_2') \rangle - \payi[1](\typei[1], V_2); \label{eq:V1-t1} \end{gathered}$$ where for the equality we used the fact that the value obtained by reporting truthfully is the same for every type in $\{A, B, C\}$ given a fixed type of the opponent, and that $\payi[1](V_1, V_2) = \payi[1](V_1^{*}, V_2)$ by Lemma \[lem:equal-pay\]. Similarly, in order for type $D$ not to have incentive for deviating to $\typei[1]$, we have $$\begin{gathered} \langle D, \detalloci[1](D, V_2) \rangle - \payi[1](D, V_2) \\ = \langle D, \detalloci[1](V_1^*, V_2) \rangle - \payi[1](V_1^*, V_2) \\ \geq \langle D, \detalloci[1](V_1, V_2') \rangle - \payi[1](\typei[1], V_2), \label{eq:D-t1}\end{gathered}$$ where for the equality we used the fact that type $D$ has the same value for all allocations given a fixed type of the opponent, and that $\payi[1](D, V_2) = \payi[1](V_1, V_2)$ by Lemma \[lem:equal-pay\]. Crucially, \[eq:V1-t1\] and \[eq:D-t1\] cannot both be tight, because by construction, for any $V_2 \neq V_2'$, $$\begin{gathered} \langle V_1, \detalloci[1](V_1, V_2) - \detalloci[1](V_1, V_2') \rangle \\ = \frac{3}{2} \langle D, \detalloci[1](D, V_2) - \detalloci[1](V_1, V_2') \rangle \neq 0. \end{gathered}$$ Therefore, $\frac{2}{3} \cdot$ \[eq:V1-t1\] $+ \frac{1}{3} \cdot$ \[eq:D-t1\] gives $$\begin{gathered} \langle \frac{2}{3} V_1^* + \frac{1}{3} D, \detalloci[1](V_1^*, V_2) \rangle - \payi[1](V_1^*, V_2) \\ > \langle \frac{2}{3} V_1 + \frac{1}{3} D, \detalloci[1](V_1, V_2') \rangle - \payi[1](\typei[1], V_2),\end{gathered}$$ which contradicts \[eq:t1-V1\]. By the same reasoning as for Lemma \[lem:equal-pay\], the following lemma follows from Lemma \[lem:fixed-menu\]. \[lem:general-equal-pay\] If $\extallocs$ is a deterministic DSIC extension of $\detallocs$, implementable by payment rule $\pays$, then for any $\typei[1]$ in the interior of $\operatorname{Conv}(\dettypespace)$ and any $V_2 \in \dettypespace$, $\payi[1](\typei[1], V_2) = \payi[1](A, V_2)$. We are now ready to prove Theorem \[thm:det\]. Suppose $\extallocs$ is a deterministic DSIC extension of $\detallocs$. We show a contradiction by showing that $\extallocs$ must violate weak monotonicity. Consider $\typei[1] = \tfrac 1 3 A + \tfrac 1 3 B + \tfrac 1 3 D$, and suppose agent $2$'s type is $A$. Since $\typei[1]$ could report any type in $\dettypespace$, she has as options $\detalloci[1](A, A)$, $\detalloci[1](B, A)$, $\detalloci[1](C, A)$ and $\detalloci[1](D, A)$, all at the same price by Lemma \[lem:general-equal-pay\]. By Lemma \[lem:fixed-menu\], these are also all the allocations she could possibly get.
Since $[1, 1, 0]$ is the only allocation for which both types $A$ and $B$ have positive value, it is $\typei[1]$'s preferred allocation, i.e., $\extalloci[1](\typei[1], A)$ must be $[1, 1, 0]$. This in turn implies $\extalloci[2](\typei[1], A) = [1, 1, 0]$. (Recall that ${\mathscr{F}}$ is not a product space, and the only allocation in which agent $1$ gets $[1, 1, 0]$ is $\detallocs(A, A) = [1, 1, 0, 1, 1, 0]$.) Similarly, one can show $\extalloci[1](\typei[1], D) = [4, 4, 0]$, which implies $\extalloci[2](\typei[1], D) = [2, 2, 0]$ or $[4, 4, 0]$. But in either case, weak monotonicity is violated for agent $2$'s types $A$ and $D$. For example, if $\extalloci[2](\typei[1], D) = [2, 2, 0]$, we have $$\begin{aligned} \langle A, [1, 1, 0] \rangle + \langle D, [2, 2, 0] \rangle < \langle A, [2, 2, 0] \rangle + \langle D, [1, 1, 0] \rangle.\end{aligned}$$ Therefore, no deterministic DSIC extension of $\detallocs$ is possible. Non-existence of randomized extensions -------------------------------------- We need a more involved construction and a more careful argument to prove the impossibility of extensions that are possibly randomized. We build on $\dettypespaces$ and $\detallocs$ to construct $\randtypespaces$ and $\randallocs$ and prove Theorem \[thm:rand\]. We first lift the types in $\dettypespace$ to a space of seven dimensions. Define $A' = [1, 0, 0, 0, 0, 0, 0]$, $B' = [0, 1, 0, 0, 0, 0, 0]$, $C' = [0, 0, 1, 0, 0, 0, 0]$ and $D' = \frac 1 3 (A' + B' + C')$. For ease of notation, we define a mapping $\det: \{A', B', C', D'\} \to \dettypespace$, with $\det(A') = A, \det(B') = B, \det(C') = C$ and $\det(D') = D$. We also introduce four new types, $E' = [0, 0, 0, 1, 0, 0, 0]$, $F'=[0,0,0,0,1,0,0]$, $G'=[0,0,0,0,0,1,0]$ and $H'=[0,0,0,0,0,0,1]$. Define $\randtypespace = \{A', B', C', D', E', F', G', H'\}$, and $\randtypespaces = \randtypespace \times \randtypespace$. We now define $\randallocs$, which is again symmetric, in the sense that $\randalloci[1](V_1, V_2) = \randalloci[2](V_2, V_1)$ for every $V_1, V_2 \in \randtypespace$. We therefore only describe $\randalloci[1]$. When both agents report types in $\{A', B', C', D'\}$, the first three coordinates of each agent's allocation are given by $\detallocs$ when fed the corresponding types in $\dettypespaces$, and the remaining coordinates are filled in according to the opponent's report. More specifically, $$\begin{gathered} \forall V_1 \in \{A',B',C',D'\}, \\ \randalloci[1](V_1, A')=[\detalloci[1](\det(V_1), A), 0, 100, 100,100],\\ \randalloci[1](V_1, B')=[\detalloci[1](\det(V_1), B),100 , 0, 100, 100],\\ \randalloci[1](V_1, C')=[\detalloci[1](\det(V_1), C),100, 100, 0, 100], \\ \randalloci[1](V_1, D')=[\detalloci[1](\det(V_1), D), 100, 100, 100,0]\end{gathered}$$ For the other types, we have $$\begin{gathered} \forall V_1 \in \{E', F', G', H'\}, \forall V_2 \in \{A', B', C', D'\}, \\ \randalloci[1](V_1, V_2) = \randalloci[1](C', V_2).\\ \forall V_2 \in \{E', F', G', H'\}, \forall V_1 \in \randtypespace, \\ \randalloci[1](V_1, V_2) = [0, 0, 0, 100, 100, 100, 100].\end{gathered}$$ Note that $\randallocs$ itself is deterministic. The difficulty we need to overcome in this section is that the *extension* of $\randallocs$ may be randomized, and we must show that any extension to $\operatorname{Conv}(\randtypespaces)$ cannot be DSIC. The set of feasible allocations is ${\mathscr{F}} = \{\randallocs(V_1, V_2)\}_{V_1, V_2 \in \randtypespace}$. Again we emphasize that the set of feasible allocations is *not* a product space.
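For readers who want to experiment with this construction, the short Python sketch below is our own illustration of the lifting just described (the helper names are hypothetical, not from the paper); it builds agent $1$'s allocation under $\randallocs$ from the deterministic table `ALLOC1` of the earlier sketch, which is assumed to be in scope.

```python
# Assumes the ALLOC1 table (agent 1's allocations under \detallocs) from the earlier sketch.
BASE = ["A", "B", "C", "D"]      # stand-ins for A', B', C', D'
EXTRA = ["E", "F", "G", "H"]     # stand-ins for the four new types E', F', G', H'

# 0-indexed coordinate (among positions 3..6) that is zeroed when the opponent reports a base type.
EXTRA_COORD = {"A": 3, "B": 4, "C": 5, "D": 6}

def rand_alloc1(v1, v2):
    """Agent 1's (deterministic) 7-dimensional allocation under the lifted rule."""
    if v2 in BASE:
        # E'..H' are treated like C' when the opponent reports a base type.
        det_part = ALLOC1[(v1 if v1 in BASE else "C", v2)]
        tail = [100, 100, 100, 100]
        tail[EXTRA_COORD[v2] - 3] = 0     # the coordinate tied to the opponent's report is zero
        return list(det_part) + tail
    # Opponent reports one of E'..H': the allocation is independent of agent 1's report.
    return [0, 0, 0, 100, 100, 100, 100]

# Example: against opponent report A', type B' receives [0, 1, 1, 0, 100, 100, 100].
print(rand_alloc1("B", "A"))
```

Plugging `rand_alloc1` into a brute-force loop analogous to the earlier sketch (with zero payments) is one way to confirm that $\randallocs$ is indeed DSIC on $\randtypespaces$, as the argument below requires.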
We first show that a subset of the payments are still "locked" as they were in the deterministic setting. \[lem:randequal-pay\] For any allocation rule $\extallocs$ that is an extension of $\randallocs$, if $\extallocs$ can be implemented by a DSIC mechanism with payment rule $\pays$, then for any $V \in \{A', B', C', D'\}$ and any $V',V'' \in \randtypespace$, $\payi[1](V', V) = \payi[1](V'', V)$, and $\payi[2](V, V') = \payi[2](V, V'')$. The proof for types in $\{A',B',C',D'\}$ follows the same steps as the proof of Lemma \[lem:equal-pay\]. Any type in $\{E',F',G',H'\}$ receives the same allocation as type $C'$, and therefore if it were charged a payment different from the one charged to $C'$, then either $C'$ or that type would be incentivized to deviate. In order to have a DSIC convex extension in this setting we must satisfy a condition similar to Lemma \[lem:fixed-menu\] with the higher-dimensional versions of the types from $\dettypespaces$. If $\extallocs$ is a DSIC extension of $\randallocs$, then for $\typei[1] = \frac{1}{3}A' + \frac{1}{3}B' + \frac{1}{3}D'$, and any $V_2 \in \{A',B',C',D'\}$, $\extalloci[1](\typei[1], V_2)$ is supported on $\{\extalloci[1](A', V_2), \extalloci[1](B', V_2), \extalloci[1](C', V_2), \extalloci[1](D', V_2)\}$. \[lma:invariant-random\] For the sake of contradiction, assume $\extallocs$ is a DSIC extension of $\randallocs$, implementable by payment rule $\pays$, and that for some $V_2 \in \{A',B',C',D'\}$ there exists a pair of types $\vertypei[1], \vertypei[2]'$ with $\vertypei[2]'\neq\vertypei[2]$ such that $\operatorname{\mathbf{Pr}}\left[ \extallocs(\typei[1], \vertypei[2]) = \randallocs(\vertypei[1], \vertypei[2]') \right] > 0$. Let ${X_{V_2}}$ denote the set of allocations $\{\extalloci[1](A', V_2), \extalloci[1](B', V_2), \extalloci[1](C', V_2), \extalloci[1](D', V_2)\}$. Let $\alpha = \operatorname{\mathbf{Pr}}\left[ \extalloci[1](\typei[1], \vertypei[2]) \in {X_{V_2}}\right]$ and $\beta = \operatorname{\mathbf{Pr}}\left[ \extalloci[1](\typei[1], \vertypei[2]) \not\in {X_{V_2}}\right]$. Then by assumption, $\beta > 0$.
Also let $\extalloci[1]^{\alpha}(\typei[1]) = {\operatorname{\mathbf E}_{}{{\mathchoice{ \left [ \extalloci[1](\typei[1],\vertypei[2]) \mid \extalloci[1](\typei[1],\vertypei[2]) \in {X_{V_2}}\right ]}{[ \extalloci[1](\typei[1],\vertypei[2]) \mid \extalloci[1](\typei[1],\vertypei[2]) \in {X_{V_2}}]}{[ \extalloci[1](\typei[1],\vertypei[2]) \mid \extalloci[1](\typei[1],\vertypei[2]) \in {X_{V_2}}]}{[ \extalloci[1](\typei[1],\vertypei[2]) \mid \extalloci[1](\typei[1],\vertypei[2]) \in {X_{V_2}}]} }}}$ and $\extalloci[1]^{\beta}(\typei[1]) = {\operatorname{\mathbf E}_{}{{\mathchoice{ \left [ \extalloci[1](\typei[1],\vertypei[2]) \mid \extalloci[1](\typei[1],\vertypei[2]) \not\in {X_{V_2}}\right ]}{[ \extalloci[1](\typei[1],\vertypei[2]) \mid \extalloci[1](\typei[1],\vertypei[2]) \not\in {X_{V_2}}]}{[ \extalloci[1](\typei[1],\vertypei[2]) \mid \extalloci[1](\typei[1],\vertypei[2]) \not\in {X_{V_2}}]}{[ \extalloci[1](\typei[1],\vertypei[2]) \mid \extalloci[1](\typei[1],\vertypei[2]) \not\in {X_{V_2}}]} }}}$. Then the expected value for agent $1$ truthfully reporting $\typei[1]$ is $\langle \typei[1], \alpha \extalloci[1]^\alpha (\typei[1])+ \beta \extalloci[1]^\beta(\typei[1]) \rangle$. In order for $\extallocs$ to be DSIC, it must provide type $\typei[1]$ with at least as much utility as it could receive from deviating to any type in $\randtypespace$. Let $\vertypei[\text{max}] \in \operatorname{argmax}_{\vertypei[1]'\in \randtypespace} \langle \typei[1],\randalloci[1](\vertypei[1]',\vertypei[2])) \rangle$ then $\extallocs$ must satisfy the following DSIC constraint, $$\begin{aligned} \alpha &\langle \typei[1], \extalloci[1]^{\alpha}(\typei[1]) \rangle + \beta \langle \typei[1], \extalloci[1]^{\beta}(\typei[1])\rangle - \payi[1](\typei[1],\vertypei[2]) \geq \nonumber \\ &\langle \typei[1],\randalloci[1](\vertypei[\text{max}],\vertypei[2])) \rangle - \payi[1](\vertypei[\text{max}],\vertypei[2]). \label{eq:randIC-contr}\end{aligned}$$ We use this constraint to place an upper bound on $\payi[1](\typei[1],\vertypei[2])$. By definition of $V_{\text{max}}$, $$\begin{aligned} \langle \typei[1],\randalloci[1](\vertypei[\text{max}],\vertypei[2])) \rangle \geq \langle \typei[1], \extalloci[1]^{\alpha}(\typei[1]) \rangle.\end{aligned}$$ Subtracting $\langle \typei[1],\randalloci[1](\vertypei[\text{max}],\vertypei[2])) \rangle$ from both sides of \[eq:randIC-contr\] and utilizing the fact that $\alpha+\beta=1$ we have $$\begin{aligned} \beta \langle \typei[1], \extalloci[1]^{\beta}(\typei[1]) \rangle - \beta \langle \typei[1],\randalloci[1](\vertypei[\text{max}],\vertypei[2]) \rangle \\ \geq \payi[1](\typei[1],\vertypei[2]) - \payi[1](\vertypei[\text{max}],\vertypei[2]).\end{aligned}$$ Now using the fact that $\typei[1]$’s maximum difference in value for any two allocations in ${\mathscr{F}}$ is upper bounded by 3, we get $$\begin{aligned} 3\beta+\payi[1](\vertypei[\text{max}],\vertypei[2]) &\geq \payi[1](\typei[1],\vertypei[2]). \label{eq:pay-bound}\end{aligned}$$ By construction, for $\vertypei[2] \in \{A', B', C', D'\}$, there exists a type $\vertypei[\text{not}] \in \{E',F',G',H'\}$ which has value $0$ for every allocation in ${X_{V_2}}$. (For example if $V_2=A'$ then all of the allocations in $\{\extalloci[1](A',A'), \extalloci[1](B',A'), \extalloci[1](C', A'), \extalloci[1](D', A')\}$ have a $0$ in the $4^{\text{th}}$ coordinate and in this case $V_{\text{not}}=E'$). 
Now by Lemma \[lem:randequal-pay\], $V_{\text{not}}$ receives utility $-\payi[1](V_{\text{not}}, \vertypei[2]) = -\payi[1](\vertypei[\text{max}],\vertypei[2])$ when reporting truthfully against opponent type $\vertypei[2]$. Therefore, in order to keep $\vertypei[\text{not}]$ from deviating to type $\typei[1]$, we must satisfy the following DSIC constraint, $$\begin{aligned} -\payi[1](\vertypei[max],\vertypei[2]) \geq & \alpha \langle \vertypei[\text{not}], \extalloci[1]^{\alpha}(\typei[1]) \rangle \nonumber + \beta \langle \vertypei[\text{not}], \extalloci[1]^{\beta}(\typei[1])\rangle \\ & - \payi[1](\typei[1],\vertypei[2]). \label{eq:tempDSIC}\end{aligned}$$ Since $\extalloci[1]^{\alpha}(\typei[1])$ is the expected allocation conditioned on the resulting allocation being an element of ${X_{V_2}}$, and $V_{\text{not}}$ has zero value for any allocation in ${X_{V_2}}$, we have $\langle V_{\text{not}}, \extalloci[1]^\alpha(\typei[1]) \rangle = 0$. Therefore, $$\begin{aligned} \payi[1](\typei[1],\vertypei[2]) &\geq \beta \langle \vertypei[\text{not}], \extalloci[1]^{\beta}(\typei[1])\rangle + \payi[1](\vertypei[\text{max}],\vertypei[2]). \label{eq:pay-bound2}\end{aligned}$$ Notice that for every allocation outside of ${X_{V_2}}$ the type $\vertypei[\text{not}]$ has value exactly $100$, and therefore $\langle \vertypei[\text{not}], \extalloci[1]^{\beta}(\typei[1])\rangle = 100$. We now use this fact along with \[eq:pay-bound\] and \[eq:pay-bound2\] and derive the following contradiction, $$\begin{aligned} 3\beta+\payi[1](\vertypei[\text{max}],\vertypei[2])- \payi[1](\vertypei[\text{max}],\vertypei[2]) &\geq \beta \langle \vertypei[\text{not}], \extalloci[1]^{\beta}(\typei[1])\rangle, \\ \Rightarrow \quad 3\beta &\geq 100 \beta.\end{aligned}$$ Since by assumption $\beta > 0$, this is a contradiction. We are now ready to prove Theorem \[thm:rand\]. Consider $\typei[1] = \tfrac 1 3 A' + \tfrac 1 3 B' + \tfrac 1 3 D'$ and assume $\operatorname{\mathbf{Pr}}\left[ \extalloci[1](\typei[1],A') = \randalloci[1](A', A') \right] <1$. We know from Lemma \[lma:invariant-random\] that $\extalloci[1](\typei[1], A')$ is supported on $\{\extalloci[1](A',A'), \extalloci[1](B',A'), \extalloci[1](C', A'), \extalloci[1](D', A')\}$. Since $\randalloci[1](A',A')$ is the only allocation in this set for which both $A'$ and $B'$ have positive value, $\typei[1]$ satisfies $$\begin{aligned} \forall \vertype' \neq A', \quad \langle \typei[1], \randalloci[1](\vertype',A') \rangle < \langle \typei[1], \randalloci[1](A',A') \rangle.\label{eq:strictval}\end{aligned}$$ But in order for $\typei[1]$ to have no incentive to report $A'$ when agent $2$'s type is $A'$, we have $$\begin{aligned} \operatorname{\mathbf E}\left[ \langle \typei[1], \extalloci[1](\typei[1],A') \rangle \right] & - \payi[1](\typei[1],A') \geq \\ & \langle \typei[1], \randalloci[1](A', A') \rangle - \payi[1](A', A').\end{aligned}$$ Using \[eq:strictval\] and the assumption that $\extalloci[1](\typei[1],A')$ differs from $\randalloci[1](A', A')$ with positive probability, we have $$\begin{aligned} \payi[1](A', A') &> \payi[1](\typei[1],A')\label{eq:strictpay}.\end{aligned}$$ We now show that, similar to the deterministic case, a strictly lower payment for $\typei[1]$, as in \[eq:strictpay\], would violate the DSIC constraints.
The DSIC constraint that keeps type $D'$ from deviating to type $\typei[1]$ when the opponent's type is $A'$ is $$\begin{aligned} \langle D', \randalloci[1](D', A') \rangle - \payi[1](D', A') \nonumber \\ \geq \langle D' , \extalloci[1](\typei[1],A') \rangle -\payi[1](\typei[1],A') \label{eq:Dconstraint}.\end{aligned}$$ Since $\extalloci[1](\typei[1],A')$ is supported on $\{\extalloci[1](A',A'), \extalloci[1](B',A'), \extalloci[1](C', A'), \extalloci[1](D', A')\}$ and type $D'$ values all of these allocations equally, we have $$\begin{aligned} \langle D', \extalloci[1](\typei[1],A') \rangle = \langle D', \randalloci[1](D', A') \rangle,\end{aligned}$$ and by Lemma \[lem:randequal-pay\] we have $\payi[1](D',A') = \payi[1](A',A')$. Using these facts we can rewrite \[eq:Dconstraint\] as $$\begin{aligned} \payi[1](\typei[1],A') &\geq \payi[1](A',A').\end{aligned}$$ This contradicts \[eq:strictpay\]. Therefore, $\extalloci[1](\typei[1], A')$ must be deterministically $\randalloci[1](A', A')$. But the only allocation in ${\mathscr{F}}$ in which agent $1$'s allocation is this is $\randallocs(A', A')$, and hence $\extalloci[2](\typei[1], A')$ is deterministically $[1, 1, 0, 0, 100, 100, 100]$. Using the same argument, one can show $\extalloci[1](\typei[1], D') = [4, 4, 0, 100, 100, 100, 0]$ with probability 1, which implies $\extalloci[2](\typei[1], D')$ is supported on $\{[2, 2, 0, 100, 0, 100, 100], [4, 4, 0, 100, 100, 100, 0]\}$. For any allocation with this support, weak monotonicity is violated for agent $2$'s types $A'$ and $D'$. Revenue Gap {#sec:revenue} =========== In this section we explore the revenue gap between mechanisms that satisfy DSIC and IR on $\operatorname{supp}(\mathbf{D})$ and mechanisms that satisfy DSIC and IR on $\operatorname{Conv}(\operatorname{supp}(\mathbf{D}))$. Any type that is in $\operatorname{Conv}(\operatorname{supp}(\mathbf{D}))$ but not in $\operatorname{supp}(\mathbf{D})$ occurs with probability zero and does not itself contribute to the revenue. It is therefore not obvious that enforcing constraints on these additional types will impact revenue. In fact, Theorem \[thm:downward\] shows that if the feasible set is downward closed there is no gap in revenue between these two type spaces. However, for arbitrary feasibility sets, a gap can arise because not every mechanism which is DSIC and IR can be extended to $\operatorname{Conv}(\operatorname{supp}(\mathbf{D}))$ without changing the payment rules on the support. We show that for some feasibility sets and distributions the gap in revenue between the optimal DSIC and IR mechanism on $\operatorname{supp}(\mathbf{D})$ and the optimal mechanism on $\operatorname{Conv}(\operatorname{supp}(\mathbf{D}))$ is unbounded. For a feasible set of allocations ${\mathscr{F}}$ and a distribution $\mathbf{D}$, let $\operatorname{OPT}({\mathscr{F}},\mathbf{D})$ denote the optimal revenue extractable by a DSIC and IR mechanism on $\operatorname{supp}(\mathbf{D})$, and let $\widetilde{\operatorname{OPT}}({\mathscr{F}},\mathbf{D})$ denote the maximum revenue extractable by a mechanism that is DSIC and IR on $\operatorname{Conv}(\operatorname{supp}(\mathbf{D}))$. We focus on the single-agent case and show that the ratio between $\operatorname{OPT}({\mathscr{F}}, \mathbf{D})$ and $\widetilde{\operatorname{OPT}}({\mathscr{F}}, \mathbf{D})$ is unbounded. \[thm:revenue\] For every $\alpha$, there exists a feasible set ${\mathscr{F}}$ and a distribution $\mathbf{D}$ such that $\operatorname{OPT}({\mathscr{F}},\mathbf{D}) \geq \alpha \widetilde{\operatorname{OPT}}({\mathscr{F}}, \mathbf{D})$.
Let ${\vec{\mathbf{1}}}$ be the "all-ones" vector of length $k$ and define the set $$\begin{aligned} {\mathcal{S}_{1}}= \{ \type \in \mathbb{R}^k \mid \exists j \text{ s. t. } \typei[j] =1 \\ \text{ and } \forall h \neq j, \: \typei[h]=0\}.\end{aligned}$$ Intuitively, ${\mathcal{S}_{1}}$ is the set of all vectors of length $k$ with exactly one coordinate equal to $1$ and all remaining coordinates equal to zero. We define the support $\operatorname{supp}(\mathbf{D}) = {\mathcal{S}_{1}}\cup \{\vec{\mathbf{1}}\}$ and the set of feasible allocations ${\mathscr{F}}= {\mathcal{S}_{1}}$. We can show that $\operatorname{OPT}({\mathscr{F}}, \mathbf{D})=1$ for any distribution $\mathbf{D}$ with this support. This is obtained by giving each type $\type \in {\mathcal{S}_{1}}$ the allocation equal to its own type, i.e., $\alloc(\type) = \type$, and giving type $\type = {\vec{\mathbf{1}}}$ any allocation in the feasible set. We can now charge a payment of $1$ to every type. Without loss of generality assume $\alpha>1$. Now for any such $\alpha$ define $\epsilon< \frac{1}{\alpha}$ and consider the distribution where $\operatorname{\mathbf{Pr}}[\type= \vec{\mathbf{1}}] = 1- \epsilon$ and $\operatorname{\mathbf{Pr}}[\type \neq \vec{\mathbf{1}}] = \epsilon$. One of the types contained in $\operatorname{Conv}(\operatorname{supp}(\mathbf{D}))$ is $\type^{k} = [\frac{1}{k}, \frac{1}{k}, \ldots, \frac{1}{k}]$. $\type^{k}$ has a value of $\frac{1}{k}$ for every feasible allocation and every lottery over feasible allocations; therefore the maximum payment we can charge this type without violating the IR constraints is $\frac{1}{k}$. However, the all-ones type has value $1$ for every feasible allocation and could gain a utility of $\frac{k-1}{k}$ by deviating to type $\type^k$. Therefore, to maintain incentive compatibility we must lower the price we charge the all-ones type to $\frac{1}{k}$ (as well as the payment charged to one of the types in ${\mathcal{S}_{1}}$). Taking $k$ sufficiently large, the revenue generated on the convex hull is therefore $\widetilde{\operatorname{OPT}}({\mathscr{F}}, \mathbf{D})\leq (1-\epsilon)\tfrac{1}{k}+\epsilon \leq \frac{1}{\alpha}$. Conclusion and Future Work {#sec:disc} ========================== We presented a DSIC mechanism on a discrete type space which cannot be extended by a DSIC mechanism to the convex hull of its type space. Extension from discrete to convex domains is a common step in automated mechanism design, and it is unfortunate that, as an implication of our result, there exists no general procedure for extending an arbitrary DSIC mechanism to the convex hull of its domain. However, as we showed, such extensions are possible in specific settings, with special type spaces (e.g. single-dimensional) or special feasible sets of allocations (e.g. downward closed or, more generally, single swap feasible). Other sufficient conditions for extensibility would be useful to have. Another possibility we leave open is to find conditions on an allocation rule (rather than on the type space or the set of feasible allocations) that would guarantee its extensibility. We know that an allocation rule is DSIC implementable if and only if it is cyclically monotone. Is there a condition stronger than cyclic monotonicity which guarantees both implementability and extensibility? The mechanism we constructed for the impossibility result is intricate. A third question we leave open is whether one may find a succinct property that is *necessary* for extensibility; in other words, is there a natural condition whose violation must result in inextensible mechanisms?
Supplementary Material {#sec:supp} ====================== [^1]: To see that a downward closed ${\mathscr{F}}$ is SSF, observe that we can let ${\boldsymbol{x}^{\mathrm{ssf}}}(i)$ be the all zero vector for each agent $i$.
1
--- abstract: | We detected a ring-like distribution of far-infrared emission in the direction of the center of the Virgo cluster. We studied this feature in the FIR, radio, and optical domains, and deduced that the dust within the feature reddens the galaxies in the direction of the Virgo cluster but does not affect stars within the Milky Way. This is likely to be a dusty feature in the foreground of the Virgo cluster, presumably in the galactic halo. The HI distribution follows the morphology of the FIR emission and shows peculiar kinematic behavior. We propose that a highly supersonic past collision between an HI cloud and the Galactic HI formed a shock that heated the interface gas to soft X-ray temperatures. HI remnants from the projectile and from the shocked Galactic HI rain down onto the disk as intermediate velocity gas. Our finding emphasizes that extragalactic astronomy must consider the possibility of extinction by dust at high Galactic latitude and far from the Galactic plane, which may show structure on one-degree and smaller scales. This is particularly important for studies of the Virgo cluster, for example in the determination of the Hubble constant from Cepheids in cluster galaxies. author: - 'Noah Brosch & Elchanan Almoznino' - 'Bogdan Wszolek & Konrad Rudnicki' title: The Nature of a Dusty Ring in Virgo --- Introduction ============ The nature of non-luminous matter that is not part of detected and catalogued galaxies remains unsolved by modern astrophysics. As mentioned in a recent thesis, low surface brightness (LSB) objects may prove to be the "icebergs" of the extragalactic world (de Blok 1997). Some searches for non-luminous matter have been successful: the detection of a giant HI ring around the small group of galaxies in Leo centered on M96 (Schneider 1983), extended HI emission in the M81 group (Lo & Sargent 1979), HI companions to dwarf galaxies (for $\sim$25% of the cases: Taylor 1996), and a large neutral hydrogen cloud in the southern outskirts of the Virgo cluster (HI 1225+01: Giovanelli & Haynes 1989). Along with HI clouds, a few large LSB galaxies have been identified: Malin-1 (Bothun 1987), F568-6 (Bothun 1990), and 1226+0105 (Sprayberry 1993). Their typical star formation rates are $\sim$0.1 M$_{\odot}$/yr and the metallicities are $\sim$1/3 solar. The HI rotation curves, measured by de Blok (1997) and by Pickering (1997), indicate that their gaseous component is dynamically significant at all radii and that the galaxies are fully dark-matter dominated; their detected baryonic component is less than 4% of the total mass. This last conclusion is valid at least as long as we do not accept any of the more exotic theories of gravitation. The LSB galaxies lack bulges, bars, and nuclear activity, as well as CO or IR emission (i.e., they show no evidence for molecular gas or dust). There have also been a few intriguing reports of presumably intergalactic dust clouds. A cloud with 0.5-1.2 mag of extinction was identified in Microscopium by Hoffmeister (1962). Three other similar objects were listed by Rudnicki (1986); they extinguish background objects by 0.57 to 1.2 mag. In all reports the main point of contention was the actual distance to the cloud, which could put it in extragalactic space but could also locate it in the halo of the Milky Way (MW).
Sometimes, the argument for an extragalactic location was based on a comparison of the properties of objects whose distance could be estimated and which were located behind the cloud with those of similar objects clearly not within the cloud limits (RR Lyrae stars; Murawski 1983). The extragalactic nature is only fairly confidently established for the Abadi-Edmunds cloud at $\sim$3 Mpc (Abadi & Edmunds 1978). HI 21 cm line emission was detected from this object, whereas in other cases it was not. However, in other cases far-infrared (FIR) emission was detected and could be identified (on morphological and positional criteria) with the obscuring clouds. FIR and HI emission were clearly detected in the case of the Okroy cloud (Wszolek 1988a, 1989). FIR emission was only marginally detected from the Rudnicki-Baranowska cloud (Wszolek 1988b). This indicates that the physical conditions in this kind of objects are far from being uniform. More such examples must be identified and their properties examined. It is possible that the phenomenon of intergalactic hydrogen clouds could be related to the high-velocity cloud (HVC) complexes. These are HI structures whose radial velocities deviate by several 100 km s$^{-1}$ from the conventional galactic rotation. A recent review of HVCs is by Wakker & van Woerden (1997). Their Table 2 lists a few cloud complexes at distances $\geq$25 kpc; some of these may not belong at all to the MW. IRAS searches for FIR emission of HVC were negative (Wakker & Boulanger 1986), indicating that either the HVCs are dust-free or that their dust grains are much cooler than could be detected with IRAS. In this context we also mention the proposition by Blitz (1999) that the HVCs make up the missing mass by being essentially dark halos with low velocity dispersions. We report here results from a study of a diffuse ring-like FIR feature at high galactic latitude, which we interpret as “local”, not extragalactic, despite first indications to the contrary. The region toward which this feature is located is the center of the Virgo cluster of galaxies. This part of the sky has been studied in exquisite detail, yet new studies always detect interesting features. For example, Katsiyannis (1998) produced a very deep image of the central regions of the cluster from a combination of 13 deep Kodak TechPan films obtained with the UK Schmidt telescope. The image shows large variations in the brightness of the intra-cluster medium, with the brightest regions north of the cluster center. M87 is fairly central in the region of enhanced brightness, close to the upper left corner of the “very high contrast image” in their Fig. 6. Previous deep imaging of the central VC region (Weil 1997) revealed a diffuse extension of (presumably stellar) material extending $\sim$100 kpc to the SE of M87. Intergalactic red giant stars were apparently discovered near M87 by Ferguson (1998). It is therefore relevant to search for, and to try and explain, any extended feature one may detect in the direction of the center of the cluster. In this context, we mention the study of Haikala (1995) who examined the UV emission detected in the direction of a dust globule close to the North Galactic Pole, slightly north of the Virgo cluster (VC). Any material that could produce extinction needs to be accounted for. To the best of our knowledge, nobody attempted to study the obscuration and FIR emission by ISM or IGM in the direction of a rich, nearby cluster of galaxies. 
This is particularly important for the VC, which serves as one of the keystones in the distance ladder leading up to the determination of the Hubble constant (van den Bergh 1996). The HST Key Project on the Extragalactic Distance Scale, where the required accuracy of the determination of H$_0$ is 10%, could be affected significantly by unaccounted extinction. Until now, seven galaxies within 10$^{\circ}$ of the Virgo center have been observed for Cepheids in this context (Macri 1999). The plan of the paper is as follows: we first describe the FIR observations, which revealed the feature, and present confirmatory evidence of its reality. We then attempt to derive additional properties of the feature, which has an approximate ring shape, using data in the optical and radio domains. We show that the dust in the feature does not seem to affect the stars in the Milky Way but that it apparently reddens galaxies in the VC and beyond. The full data set is discussed in the last section of the paper, in which we also derive some properties of the dust grains in the feature. Observational data ================== COBE/DIRBE ---------- Far infrared (FIR) observations from the COBE satellite, specifically with the DIRBE instrument, reveal non-uniform FIR emission from the center of the VC. The DIRBE instrument mapped the entire sky at ten wavelength bands from 1.25 to 240$\mu$m and operated from November 1989 to December 1993 (cryo-cooling was available only for ten months, restricting the availability of the FIR channels). An important feature of DIRBE was that the measurements were performed against an internal calibrator source, with proper accounting for instrumental offsets and interplanetary FIR emission. For the present analysis we used the Annual Average Sky Maps (AASM: Hauser 1997), which provide a single, ten-month averaged intensity value per pixel in each of the DIRBE bands. Note that the zodiacal light contribution was not subtracted from the DIRBE counts. This is because we do not estimate the zodiacal contribution to the FIR bands to be significant or to show features on the angular scales relevant here. We conducted a number of studies of galaxies in the Virgo cluster in which we examined various photometric indices for entire objects as well as for localized regions in each galaxy (Almoznino & Brosch 1998, Heller 1998). The possibility that these programs could be affected by foreground dust motivated our selection of the Virgo Cluster as the initial target for combined FIR and other spectral band interpretations. We detected a ring-like structure of FIR emission in COBE/DIRBE maps of the VC, which is centered approximately on M87. The ring is approximately centered on (1950) 12$^h$31$^m$; +13$^{\circ}$ (l=285$^{\circ}$.8, b=75$^{\circ}$; J2000) and its diameter is $\sim4^{\circ}$. The width of the FIR emission in the rim of the ring is $\sim1^{\circ}$. The detection was made originally on the COBE/DIRBE maps, but the existence of the feature was also established on IRAS maps (see below). The M87 galaxy (l$\approx282^{\circ}$.5, b$\approx+74^{\circ}$.4) is normally taken as the center of the Virgo Cluster (VC) and one could imagine scenarios by which some sort of FIR-emitting matter could be distributed around it. For this reason, we decided to follow the FIR detection of the feature, which we call here "the Virgo Ring" (VR), and investigate it further.
The detection was made on the AASM, which have noise levels of 3 10$^{-3}$ MJy sr$^{-1}$ at 100$\mu$m, 0.6 MJy sr$^{-1}$ at 140$\mu$m, and 0.3 MJy sr$^{-1}$ at 240$\mu$m (Kashlinsky 1999). The ring is visible even by superficial inspection of these COBE/DIRBE gray scale maps. No traces of the ring can be seen on 60$\mu$m or shorter wavelength maps. To obtain detailed insight into the structure of the VR we produced isophotal maps at $\lambda$=100 and 240$\mu$m using the original 0$^{\circ}$.3 square pixels, which are shown as isophote plots in Figure 1. The 100$\mu$m map shows a region of depressed FIR flux where F$_{100}\approx$8.2 MJy sr$^{-1}$. This is surrounded by regions of enhanced FIR emission, which reach F$_{100}\approx$10 MJy sr$^{-1}$. The 240$\mu$m map indicates that the region of reduced FIR emission has F$_{240}\approx$3.7 MJy sr$^{-1}$ while the surrounding regions have F$_{240}\approx$5 MJy sr$^{-1}$. It is clear that (a) the DIRBE data indicate a region of low FIR emission surrounded by enhanced emission, and (b) the feature is real, because it appears on more than one DIRBE map. The lowest values of the FIR flux originate presumably from the zodiacal light that was not subtracted from the AASMs and from the cosmic FIR background. As both these components are much smoother than the feature we describe here, there is no need to model them in detail. IRAS ---- The peculiar FIR features detected by COBE/DIRBE are confirmed by IRAS measurements. The IRAS mission mapped the sky in four wavelength bands from January 1983 to November 1983. The primary goal of the IRAS survey was the detection of point sources, but a catalog of extended sources has also been produced, as well as sky brightness images in each of the four bands with 2’ pixels and 4’-6’ resolution (Beichman 1988). IRAS 60 and 100$\mu$m Extended Emission Data in the 16$^{\circ}.5\times16^{\circ}.5$ square fields no. 83 and 84 were used to confirm the existence of the ring and to exclude the possibility of instrumental artefacts produced by the COBE/DIRBE instrument. We created maps at these two spectral bands with a 4’$\times$4’ beam. The VR is clearly visible on the 100$\mu$m map shown in Fig. 1. A similar 100$\mu$m map based on IRAS observations, and where this feature is also visible, was reproduced already by Leggett (1987) as their Plate 2. The enhanced IRAS resolution relative to COBE/DIRBE allows a good morphological evaluation of the FIR feature. In addition to the north-westerly extension of the FIR emission, along the IRAS scan direction, one sees an arc-like distribution of emission, which could be interpreted as forming an elliptical ring. Note that the feature is visible only on the 100$\mu$m map (shown in Figure 1) and is not seen on the 60$\mu$m map, or on those at even shorter wavelengths (not shown here). Although the low resolution COBE/DIRBE maps seem to indicate that the FIR emission is arranged in a ring, with low FIR at the center and high emission on its perimeter, the higher resolution IRAS maps show that this is not the case. The FIR emission is distributed in an open configuration, with a region of low emission centered on $\sim12^h30^m$, +13$^{\circ}$.2. The FIR emission could best be described as a fork, or a two-arc shape limited to $\alpha$=185$^{\circ}-189^{\circ}$. 
The eastern side of the feature shows a small region of enhanced FIR emission centered on $\alpha$=185$^{\circ}$ and $\delta$=13$^{\circ}$.5 that stands out over its surroundings and to which we refer as the “main blob” (MB). Optical information: stars -------------------------- The dust revealed by the FIR observations may (a) extinguish and (b) redden stars behind it. The first effect is a consequence of the “total extinction” property, whereas the second is the result of “wavelength-selective extinction”. The relative importance of the two effects is linked through the parameter $R=\frac{A_V}{E(B-V)}$, which is determined to first order by the size of the dust grains. We tested two assumptions, one of extinction within the Milky Way (MW) that would affect some of the stars but not others, and a second that the VR is extragalactic and is located between the MW and the VC. In the second case it would affect the VC galaxies, but none of the MW stars. For testing the possibility that the dust is “local” one requires a large number of stars with magnitudes and colors. These were extracted from the USNO-A2.0 catalog, which includes blue and red magnitudes for each star. The USNO-A2.0 catalog contains $>$5 10$^8$ objects ($\sim$12,750 per square degree) and is based on scans of the Palomar Sky Survey (PSS) plates produced with the Precision Measuring Machine (PMM). The catalog is an improvement over the version 1.0 both in astrometric accuracy and in photometric precision. The photometric accuracy is probably not better than $\sim$0.15 mag, but the depth of the catalog is considerable, as it reaches 20-22 mag (color-dependent). It can, therefore, serve as a source of stellar objects with which one can test the assumption of foreground extinction. We extracted objects in a number of $1^{\circ}\times1^{\circ}$ regions from the USNO-A2.0 catalog. The extraction locations are listed in Table 1 and correspond to some FIR-bright regions (where we expect a higher density of extinguishing dust) or to some FIR-faint regions (which should be $\sim$transparent). We produced Wolf diagrams for each location, and show these in Figure 2. The Wolf diagrams plot the cumulative distribution of stellar magnitudes against magnitude, and the signature of total extinction in such a plot is a step-like deviation, to fainter magnitudes of the cumulative star counts, from the pattern set by the brighter (and closer, on average) stars. The diagrams do not show such a step-like trend for regions in the direction of stronger FIR emission when compared with the behavior of the cumulative distribution in regions with lower FIR emission. It is also possible to compare the measured behavior of the cumulative star counts with that “predicted” in absence of localized extinction effects by using a model for the stellar distribution in the Galaxy for the same Milky Way locations as sampled here. A very successful and intuitively simple stellar distribution model was produced by Bahcall & Soneira (1984) and is available on-line[^1]. We calculated predicted star counts for the locations of the extracted data from the USNO-A2.0 catalog using the version of the model retrieved in December 1998. The locations are listed in Table 1. We compared the predicted cumulative star counts with the actual star counts. The comparisons are shown in Figure 3 and show no significant deviations from the predicted behavior. The exercises shown in Figs. 
2 and 3 indicate that the stellar distributions are not influenced by the material producing the FIR emission. The conclusion is, therefore, that this material is either extremely nearby, so that all the stars are affected in the same manner, or that it is very distant, beyond the more distant stars listed in the USNO-A2.0 catalog. Optical information: galaxies ----------------------------- If the dust observed in the FIR does not affect stars in our galaxy, it may be located far from the MW and could affect only objects seen behind it. Testing the assumption of a dust cloud distant from the MW requires a sample of background objects with relatively high surface density, as well as brightness and color information. In the Virgo region, the "standard" extragalactic catalog has been for a number of years the Binggeli (1985) Virgo Cluster Catalog (VCC). The VCC covers $\sim$140 square degrees and contains 2096 galaxies. The surface density of galaxies is, therefore, $\sim$15 galaxies/square degree, on average. While this may appear sufficient, the photometry is not adequate because the galaxy magnitudes in the VCC are eye estimates and may have significant deviations. In addition, no colors are available for most VCC galaxies. We decided therefore to rely on a more recent galaxy compilation, which reaches deeper in brightness and is thus denser than the VCC, has better photometry, and contains color information for the objects. Currie & Young (1998, hereafter VPC) produced an extensive three-color photometric catalog of galaxies in the central regions of the VC. The catalog is based on COSMOS scans of one U plate, two B$_J$ plates, and one R$_C$ plate, all obtained with the UK Schmidt telescope. The plates were photometrically calibrated and objects were extracted automatically, with stars and galaxies separated by an automatic algorithm. The VPC provides an impartial survey of galaxies in the region of interest for the present study, reaching to B$_J\approx$19 mag; it is thus somewhat shallower than, but comparable in depth with, the stellar sample from the USNO-A2.0 catalog. The VPC covers 23 square degrees and is centered on (1950) 12:26 +13:08. The average galaxy surface density is therefore 49 galaxies/square degree, considerably higher than that of the VCC. We attempted to detect total extinction effects on the VPC galaxies by limiting the analysis to regions with high FIR emission and comparing these with similar analyses in the direction of regions with lower FIR emission. Four parallelogram-shaped fields were selected, marked A, B, C, and D on Figure 4. Fields C and D are used to determine the nature of the galaxy population in the general region of the VR. Field A could also be used for this purpose, but we caution that a background cluster of galaxies (Abell 1552 at z$\approx$0.084) is located in this field and thus region A may not be representative. This galaxy cluster is presumably part of a background sheet-like complex, which also includes Abell 1526 at a very similar redshift. Field A was selected to offer insight into how the presence of background galaxies disturbs the results. The enhancements of the galaxy background may distort the Wolf diagrams of galaxies (Figure 5), and indeed some FIR enhancements could be in the direction of these galaxy clusters. However, searches for dust in clusters of galaxies have, so far, been negative (Maoz 1995).
Thus, we may tentatively discount the FIR enhancements in the direction of background clusters of galaxies as chance superpositions. Field B is considered not to be affected by absorption/extinction and is in the direction of the low FIR emission of the VR. Attempts to detect the presence of dust as a “total extinction” effect, which modifies the cumulative galaxy counts between the different regions, were not successful. The differences were not significant and indicate that if dust is present, it may cause at most a small amount of total extinction: A$_B\leq$0.5 mag. We therefore checked for the presence of color-dependent extinction by studying the distribution of the (U–R$_C$) color index in one square degree areas over the central part of the Virgo cluster. The data used for this test, and an extensive description of the method and results, are given in the Appendix to this paper. Here we emphasize that the results show that the galaxies in the direction of the Virgo Ring (VR) part with the lowest FIR emission appear slightly bluer than those in the direction of the two regions with higher FIR emission. The difference is significant to $\geq$95%. Interpreted as dust extinction, this difference in average (U–R$_C$) color index indicates a possible wavelength-dependent extinction of $\Delta$(U–R$_C$)$\simeq$0.3 mag between areas with high FIR emission and areas with less dust, a total extinction A$_V\simeq$0.33 for a typical Milky Way extinction law, although this was not checked here. Radio information ----------------- Here we show that X-ray observations of the region indicate a two-component makeup for the hot gas, and that the morphology and kinematics of the HI are peculiar. Böhringer (1995) mapped the X-ray emission from the immediate vicinity of M87; this is the region of interest of the present study. Their findings show the presence of thermal X-ray emission from cooler gas than the intracluster medium. A ROSAT map of the general region, larger than the one analyzed in the 1995 paper, was presented by Böhringer (1994) and shows a ridge of X-ray emission which approximately coincides with the FIR emission ridge to the west of M87. They mention, in particular, the sharp drop in X-ray intensity on the western side of M87. Böhringer (1994) subtracted a model distribution of X-rays from M87 from the ROSAT map and derived a residual map (their Fig. 2) which shows the background cluster A1552 at 12:30+11:30 and a long filament, which is elongated $\sim$north-south at $\alpha\approx$12$^h$30$^m$ and from $\delta\approx$+15 to +6. This filament curves around M87 on its westerly side and seems to follow the contours of the 100$\mu$m emission. It is tempting to speculate on a possible link between the X-ray and FIR emission presented above, but we caution that this may not be real. One possible factor affecting the morphology of X-ray emiting gas is the amount of foreground HI, which modifies mainly the low energy end of the X-ray spectrum. Shadows in the X-ray background caused by foreground HI clouds have been detected mainly in soft X-rays by Egger & Aschenbach (1995). However, the feature detected in the ROSAT maps by Böhringer (1994) is seen in the hard energy band (0.4–2.4 keV), and is thus difficult to attribute it to gas absorption. EUVE observations of the Virgo cluster (VC) center (Lieu 1996) show the presence of gas at $\sim$0.5 10$^6$ K near M87. 
This matter forms an additional component of the intra-cluster material (ICM) in Virgo, as follows from their analysis, and cannot be the same hot gas which is responsible for the X-ray emission detected by ROSAT. In order to confirm the existence of this second ICM component of the VC, Lieu (1996) performed HI 21 cm observations with the 43-m Green Bank telescope (angular resolution 21’). The region surveyed by them was centered on M87, had an extent of 2$^{\circ}\times1^{\circ}.6$, and the grid of HI measurements was spaced every 8’$\simeq$1/3 of a resolution element. A comparison of the HI map of Lieu (1996) with the FIR distributions (see Figure 1) demonstrates that the FIR emission follows the total HI column density. Although Lieu do not mention the velocity range over which the Green Bank observations were performed, we assumed these to be at $\sim$0 km s$^{-1}$ because they are supposedly of “Galactic HI” origin. While some VC galaxies do have negative heliocentric velocities (c.f. Binggeli 1985), they mostly concentrate at 1,000–2,000 km s$^{-1}$. For this reason, we think it likely that the HI detected by Lieu (1996) does indeed belong to the MW and, by inference, so does the material producing the FIR emission. We note at this point that the center of the low N(HI) region, at 12:28+12:45 (1950) and only $\sim$half a degree away from M87, has N(HI)$\simeq$1.8 10$^{20}$ atoms cm$^{-2}$. This is coincident with the low FIR emission region. The ridges with the higher N(HI) values correspond to enhanced FIR emission regions. We produced N(HI) plots for the region using data from the Leiden-Dwingeloo HI survey (Hartmann & Burton 1997, LDS) in order to confirm the HI distribution measured by Lieu (1996). The LDS was conducted with the 25-m radio telescope at Dwingeloo and the data we used cover the velocity range –459$<v_{lsr}<$+415 km s$^{-1}$ with a resolution of 1.03 km s$^{-1}$. The 25-m radio telescope has a 36’ half-power beam and the survey was performed with 0$^{\circ}$.5 spacings. We used the file TOTAL\_HI.FIT from the CD-ROM supplied with the printed atlas to extract the proper sky region. The data were transformed from Galactic to equatorial coordinates, accounting for the change of scale from one side of the image to the other. This was done by dividing each pixel value by its [*cos(b)*]{}, to yield consistent units over the field. The HI total column density from the LDS is shown in Figure 6 together with the IRAS 100$\mu$m map and confirms the general impression from the Lieu (1996) map. The HI distribution has a region with lower N(HI) at the center of the Virgo Ring (VR) and ridges of higher HI emission on both sides of the VR. We also produced position-velocity (PV) plots using the channel data from the LDS, limiting these to l=276.0, the galactic longitude of the HI peak which coincides with the main FIR emission blob in area A of Fig. 6 (the more intense of the FIR peaks), to the center of the VR (l=283.0), and to the second highest FIR peak at l=290.5. The PV plots are shown in Figure 7 and indicate that at the position of the VR there is a significant disturbance of the HI, with a strong extension to negative velocities appearing in the PV plots of the high-FIR region. The sheet-like HI distribution, which links HI at low latitudes with gas near the Galactic Pole and has a slightly negative LSR velocity, appears disturbed at b$\approx$75$^{\circ}$. 
The velocity plot through the peak emission at l=276.0 at this latitude (Figure 8) shows three peaks separated by $\sim$20 km s$^{-1}$. The strongest has the most negative velocity, approximately –30 km s$^{-1}$, and a FWHP of $\sim$11 km s$^{-1}$. The weakest peak at this location is near +4 km s$^{-1}$ (LSR). The PV in the low FIR region at the center of the VR (l=283$^{\circ}$, b$\approx$75$^{\circ}$) shows a single strong peak at $\sim$–7 km s$^{-1}$ (LSR), with a FWHP of 12.5 km s$^{-1}$ and a low shoulder extending to more negative velocities, down to the velocity of the strong peak at the location of the main blob (–30 km s$^{-1}$). The third PV, at (l=290$^{\circ}$.5, b$\approx$75$^{\circ}$) is narrow with a FWHP of $\sim$5 km s$^{-1}$ and is centered at –7 km s$^{-1}$ (LSR).

Discussion
==========

We identified a ring-like feature of FIR emission at high galactic latitude, which is distant from the main body of the Galaxy and extinguishes light from galaxies in the central part of the Virgo cluster (VC). There is no way to establish a distance to the extinguishing cloud with the data we presented above, except to note that it is probably $>$1 kpc. A nearby dust feature, observed by Haikala (1995) in the far-UV, has been located at $\sim$120 pc using the distribution of E$_{b-y}$ color excesses. This dust cloudlet produces a visual extinction A$_V\leq$0.4 mag and is located at (l=251.1, b=+73.3); this location is very similar to what we found for the Virgo Ring (VR) and may indicate that either our distance evaluation is wrong, or that the location technique of the Haikala feature did not use a sufficient number of more distant stars. Indications that the dust cloud cannot be a nearby feature originate mainly from its lack of influence on the distribution of stars. Supporting evidence comes also from the reddening study of Knude (1996). He used uvbyH$\beta$ measurements of A3-G0 stars with B$\leq$11.5 mag and $\vert$b$\vert>70^{\circ}$ to determine the distribution of extinction. His results for E$_{b-y}$, broken down by galactic latitude and by longitude quadrants, are of particular interest. The area of interest for our study is located between the 3rd and 4th quadrants at b$\approx75^{\circ}$; the reddening to this region is small, E$_{b-y}\leq0.017$, which translates into A$_B\leq$0.095. The stars studied by Knude (1996) are closer than 1.5 kpc (for main sequence A stars brighter than 11.5 mag), thus the color-dependent extinction of the VC galaxies we detected, which is equivalent to A$_B\approx$0.4 mag, should be produced by material more distant than 1.5 kpc. If the cloud were in the VC itself, its physical size would be $\sim$1.5 Mpc, very large indeed! The issue of possible diffuse dust in clusters of galaxies has been studied by Ferguson (1993). He concluded, from the lack of a difference between cluster and field galaxies in the correlation of the Mg$_2$ index and (B–V), that dust is not present in the Virgo cluster (upper limit E(B–V)$<$0.06 mag). A similar conclusion for a large number of Abell clusters, based on the (V–I) color indices of radio quasars seen in their background, was reached by Maoz (1995). Not accounting for foreground dust may adversely affect some key observations.
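The conversion from Knude's Strömgren reddening limit to the broad-band extinction quoted above can be checked with a few lines of code. The sketch below is only illustrative: it assumes the standard relations E$_{b-y}\approx$0.74 E(B–V) and A$_V\approx$3.1 E(B–V), which are not taken from the text.

```python
# Minimal sketch: convert a Stromgren reddening E(b-y) into broad-band
# extinctions, assuming E(b-y) ~ 0.74 E(B-V) and A_V ~ 3.1 E(B-V)
# (standard relations, assumed here for illustration only).

def extinction_from_eby(e_by):
    """Return (E(B-V), A_V, A_B) for a given E(b-y) reddening."""
    e_bv = e_by / 0.74      # Stromgren-to-Johnson reddening (assumed ratio)
    a_v = 3.1 * e_bv        # standard Milky Way extinction law (assumed)
    a_b = a_v + e_bv        # A_B = A_V + E(B-V) by definition
    return e_bv, a_v, a_b

if __name__ == "__main__":
    e_bv, a_v, a_b = extinction_from_eby(0.017)   # Knude's upper limit
    print(f"E(B-V) <= {e_bv:.3f}, A_V <= {a_v:.3f}, A_B <= {a_b:.3f} mag")
    # A_B comes out near 0.09-0.10 mag, consistent with the limit quoted above.
```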
Our finding confirms the supposition of Zonn (1957) and Zonn & Stodolkiewicz (1958), that because of the patchy structure of the interstellar dust [*it is not enough to correct for extinction assuming that the dust is localized in a narrow slab near the Galactic equator, but the detailed distribution of dust must be investigated to account properly for the extinction.*]{} In particular, many observations of the Virgo cluster and of objects within it (the HST Cepheid Key Project: Graham 1998, Macri 1999) may carry significant errors because of improper extinction corrections. In this section we estimate the dust temperature and dust-to-gas ratio. To evaluate the temperature of the dust in the VR we subtracted from the map intensities the minimal value for the central part of the VR, near M87, in the 100 and 240$\mu$m COBE/DIRBE bands (8 and 3.5 MJy sr$^{-1}$ respectively). To determine the color temperature for the dust in the VR cloud we assumed that the dust particles are in thermal equilibrium and that the foreground galactic IR radiation and IR emission from all point sources in the region have been subtracted accurately. These assumptions may not necessarily be fulfilled in our case. The subtraction of the intensity of the inner part of the ring would be accurate only if the distribution of the foreground galactic radiation is fairly smooth; this is not the case even at high galactic latitudes. Some galactic features may add non-negligible FIR contributions to the foreground. The radiation from very cold dust grains (T$\approx$3K) could not be detected by the means used here. We cannot rule out the possibility that transient heating of dust grains takes place, and that only occasional excitation by energetic photons or particles causes them to emit brief pulses of the radiation measured by COBE/DIRBE and IRAS. With all these caveats in mind, we calculated the temperature of two regions with maximal FIR intensities within regions delimited by: $\alpha$: 185$^{\circ}$–186$^{\circ}$, $\delta$: 13$^{\circ}$–14$^{\circ}$ (approximately the Main Blob=MB region) and $\alpha$: 189$^{\circ}$–190$^{\circ}$, $\delta$: 12$^{\circ}$–13$^{\circ}$ (slightly off the secondary FIR peak). The temperature was calculated with the relation: $$\log_{10} T = 1.30274 + 0.26266\,(\log_{10} R)+0.04935\,(\log_{10} R)^2$$ from Schlegel (1998), where R=$\frac{I_{100}}{I_{240}}$ is the ratio of the FIR intensities in the COBE/DIRBE bands. The results are 22 and 20 K for the first and the second region, respectively. These values can be accepted as upper limits for the dust temperature in the VR due to the effects mentioned above. The temperatures do not differ significantly from those usually adopted for interstellar dust clouds and are at the high end of the range for the “warm” cirrus component (Lagache 1998). The difference between the two regions, with the MB being “hotter” than the secondary FIR blob, is only marginally significant, since the formal error in the derivation of the temperatures is 1-2 K. A study of the optical depth effects of high latitude clouds (Chlewicki & Laureijs 1988) found that typically $\frac{I_{100}}{N(HI)}$=0.7 MJy ster$^{-1}$/10$^{20}$ atoms cm$^{-2}$. Deul & Burton (1990) studied the HI content and gas kinematics of seven cirrus clouds detected in the IRAS maps. They established that the FIR to HI ratio varies between 0.9 and 3.0 MJy ster$^{-1}$/10$^{20}$ atoms cm$^{-2}$. The color temperature varies much less, with $\frac{I_{60}}{I_{100}}$ ranging between 0.20 and 0.27.
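The Schlegel (1998) color-temperature relation quoted above is straightforward to evaluate. The short sketch below does so for two illustrative $\frac{I_{100}}{I_{240}}$ ratios; these ratios are assumed values chosen for the example, not our measured ones, but they bracket the temperatures found for the two FIR peaks.

```python
import math

def dust_color_temperature(r100_240):
    """Dust color temperature (K) from R = I_100/I_240, using the
    Schlegel et al. (1998) relation quoted in the text."""
    x = math.log10(r100_240)
    return 10.0 ** (1.30274 + 0.26266 * x + 0.04935 * x ** 2)

# Illustrative ratios (assumed): values near R ~ 1.4 and R ~ 1.0 reproduce
# color temperatures close to the 22 K and 20 K quoted above.
for r in (1.4, 1.0):
    print(f"R = {r:.1f}  ->  T_dust ~ {dust_color_temperature(r):.1f} K")
```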
Unfortunately, we did not detect 60$\mu$m emission from the VR, thus a direct comparison with the color temperature derived by Deul & Burton (1990) is not possible. However, the ratio $\frac{I_{100}}{N(HI)}$ we find for the FIR ridges ($\sim$1.5 MJy ster$^{-1}$/10$^{20}$ atoms cm$^{-2}$) is in the range of that measured for cirrus clouds. We determined N(HI) from the brightness temperature using data from Fig. 9 and applying the conversion $$N(HI)=1.8 \, 10^{18} \int{T_B(v) \, dv}$$ assuming that the HI is optically thin. The VR is located close to the northernmost point of the North Polar Spur (NPS) and could, in principle, be part of it. The NPS is presumably the remnant of a $\sim$15 Myr old explosive event in the direction of the Galactic center, which released 10$^{56}$ ergs and could be the outcome of $\sim10^5$ supernovae produced in a small region within one million years. The giant loop is detected in a number of wavebands, from the X-ray to the radio continuum (Sofue 1994). The possibility that the VR is part of a small structure, extending to lower galactic longitudes from near the northernmost apex of the large NPS feature in Sofue’s Fig. 2 (Plate L11), cannot be excluded. It is also clear that the Virgo Ring (VR) is not a known high velocity cloud, because the compilation of Deul & van Woerden (1990) lists as the nearest object HVC 2 at 12:09+15:32 and v$_{lsr}$=100.7 km s$^{-1}$, covering 3.2 square degrees of the sky. Wakker & van Woerden (1997) list a possible small HVC near the VR location. This is an HI feature associated with optical absorption lines observed in SN 1994D, located at l=290$^{\circ}$.07; b=+70$^{\circ}$.13 and $\sim$240 km s$^{-1}$ LSR. This SN occurred in NGC 4526, and the absorbing gas (with five absorption systems) was found by Ho & Filippenko (1996) at the same velocity as the HI measured by Kumar & Thonnard (1983). Ho & Filippenko (1996) mention other similar absorption features in the spectrum of SN 1991T (at l=292.59; b=65.18, in NGC 4527), which is $\sim5^{\circ}$ away from SN1994D. It seems that there is material in the general direction of the VR with LSR velocities of a few hundred km s$^{-1}$, and that it contains at least Ca II and Na I. In the vicinity of the VR, velocities in excess of 100 km s$^{-1}$ are required in order to qualify an HI feature as an HVC. Our analysis of the HI distribution and kinematics, using the LDS survey data, indicates that there is a significant disturbance of the hydrogen at the location of the VR. In particular, the three-peak structure of the velocity–column density profile is peculiar. There are a number of possible explanations for this HI peculiarity, ranging from an expanding supernova (SN) shell, through a collision between Galactic HI and a high-velocity cloud, to the ejection of a number of shells from a red (super)giant. Each of these possibilities imposes some constraints on the problem. Of all these possibilities, we consider a past collision between a small HI cloud and Galactic hydrogen to be the most likely. The HI remnant of this event could be the feature seen at –30 km s$^{-1}$ in the high-FIR area and the Galactic HI would be the prevalent emission at –7 to –9 km s$^{-1}$, which shows up in all plots. At a velocity difference of $\sim$20 km s$^{-1}$ the collision would be highly supersonic. We note that the immediate vicinity of the VR has been studied in exactly this context by Stark (1994).
The region most likely to be part of the same complex is the BX field (see Table 1 of Stark 1994), where the X feature peaks near –8 km s$^{-1}$ (LSR) but there are parts of the B feature where velocities up to –40 km s$^{-1}$ are observed. Stark (1994) interpreted the “intermediate negative velocity” (INV) gas, with v$\leq$–20 km s$^{-1}$, as material at local velocity displaced by a shock following a collision with a high velocity cloud. The difference in LSR velocity between the BX features and candidate HVCs in this vicinity (positive, at a few hundred km s$^{-1}$) forced Stark (1994) to propose scenarios for the dissipation of the impinging HVC. They asserted that this cloud has evaporated and is now present as high-temperature gas. Collisions of HVCs with the Galactic disk have been studied by Kerp (1996). They show that a collision with a differential velocity of 25 km s$^{-1}$ may increase the temperature of the post-shock material from 10$^4$ to 10$^5$ K; such temperatures are consistent with the EUVE measurements (Lieu 1996). It is possible to estimate the parameters of the Main Blob (MB) using the HI information from the LDS. With the T$_B$ value from –30 to –20 km s$^{-1}$ we find $$<N(HI)>\approx1.5 \, 10^{20} \, {\rm cm^{-2}}$$ This implies a gas-to-dust ratio for the MB of $\frac{N(HI)}{E(B-V)}\approx$5 10$^{20}$ cm$^{-2}$ mag$^{-1}$, an order of magnitude lower than the canonical value for the Galaxy ($\frac{N(HI)}{E(B-V)}$=5.4 10$^{21}$ cm$^{-2}$ mag$^{-1}$; Bohlin 1975). The MB appears, therefore, to be dust-rich (or HI-poor). Its projected angular size is $\sim0^{\circ}.5$; this translates to a size of 10($\frac{d}{1 \, kpc}$) pc and a total hydrogen mass of $$M_{tot}(HI)\approx100 \left(\frac{d}{1 \, kpc}\right)^2 M_{\odot}$$ with an average (volume) density of $\sim$4 $(\frac{d}{1 \, kpc})^{-1}$ cm$^{-3}$ assuming a spherical configuration. The cloudlet carries significant kinetic energy, considering its velocity relative to the Galactic HI (assumed to be $\sim-9$ km s$^{-1}$); E$_K\approx$3 10$^{47} \, (\frac{d}{1 \, kpc})^2$ ergs. Also, the HI profile observed in its direction (see top panel of Fig. 9) is significantly wider than that of the second blob (bottom panel of Fig. 9). This could be an indication of higher turbulence within the MB relative to other HI entities in the same vicinity but at less negative LSR velocities. It is significant that the MB, an infalling HI gas cloudlet, contains dust grains as evidenced by the extinction it produces. This indicates that any shocks that the material in the MB might have encountered did not heat up its material to temperatures high enough to completely destroy the grains. However, its highly supersonic interaction with the ambient HI distribution may have caused the wider 21 cm profile. The MB produces significant extinction and is fairly distant from the MW plane while having an intermediate LSR velocity. This is different from the findings of Knude & H$\o$g (1999), who concluded that intermediate-velocity HI clouds show no extinction. The detection of HI condensations on sub-degree scales and high above the Galactic plane is relevant for the studies of galaxies in the Virgo cluster and beyond. A spectroscopic survey of objects in the background clusters at z$\approx$0.08 could reveal absorption lines produced by material within the MB cloudlet. This, along with a measure of the ionization presumably taking place at the interface between the MB and the ambient HI, could reveal its composition and location.
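As a cross-check of these order-of-magnitude estimates, the sketch below recomputes the MB column density, size, HI mass, mean density, and kinetic energy from the optically thin 21 cm conversion quoted earlier. The mean brightness temperature, velocity width, angular size, distance, and relative velocity used here are assumed round numbers for illustration, not the measured values.

```python
import math

# Physical constants (cgs)
M_H = 1.6726e-24      # hydrogen mass [g]
PC = 3.086e18         # parsec [cm]
MSUN = 1.989e33       # solar mass [g]

# Assumed illustrative inputs (not the exact values used in the reduction):
T_B = 8.0             # mean 21 cm brightness temperature over the MB [K]
DV = 10.0             # velocity interval of the MB component [km/s]
THETA_DEG = 0.5       # projected angular size of the MB [deg]
D_KPC = 1.0           # assumed distance [kpc]
V_REL = 21.0          # MB velocity relative to the local Galactic HI [km/s]

# Column density, N(HI) = 1.8e18 * integral T_B dv  (optically thin HI)
n_hi = 1.8e18 * T_B * DV                           # [cm^-2]

# Linear size, HI mass, mean volume density, kinetic energy
size_cm = D_KPC * 1e3 * PC * math.radians(THETA_DEG)
area_cm2 = math.pi * (size_cm / 2.0) ** 2
m_hi = n_hi * area_cm2 * M_H / MSUN                # [M_sun]
n_vol = n_hi / size_cm                             # [cm^-3]
e_kin = 0.5 * (m_hi * MSUN) * (V_REL * 1e5) ** 2   # [erg]

print(f"N(HI) ~ {n_hi:.2e} cm^-2,  size ~ {size_cm/PC:.1f} pc")
print(f"M(HI) ~ {m_hi:.0f} Msun,  <n> ~ {n_vol:.1f} cm^-3,  E_K ~ {e_kin:.1e} erg")
```

For these round inputs the script returns values within factors of a few of the numbers quoted above, which is the level of agreement expected for such estimates.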
The essential finding is, however, the small scale on which significant extinction variations are encountered at high $\vert$b$\vert$. This shows that the derivation of “average extinction dependences” on galactic latitude (Knude 1996) is not very relevant to the determination of cosmologically-important parameters from studies of individual VC galaxies. Our study shows that dust can affect the light of background galaxies in the VC and, through this, the determination of H$_0$ by using Cepheid photometry. This HST Key Project relies on a canned approach to deredden individual stars and is based on the “standard” MW extinction law with R=$\frac{A_V}{E(B-V)}$=3.3 (cf. Madore & Freedman 1991). It remains to be proven that this relation applies to high galactic latitude dust clouds, such as those studied here.

Conclusions
===========

We detected a dusty HI cloud seen in projection against the central regions of the Virgo cluster. We showed that the cloud, which appears ring-like in low-resolution FIR and total HI maps, has a complex FIR morphology when examined with higher resolution. The HI kinematics indicate a substantial disturbance at this location. The Virgo Ring (VR) is located beyond the main body of the Galaxy, because the distributions and colors of stars in its direction are not affected by it. The cloud cannot be more distant than the VC, because it influences the colors of galaxies within the cluster. The HI evidence argues strongly for a relatively nearby location. The connection of the ring with the North Polar Spur may be accidental, but the peculiar morphology and kinematics of the HI associated with the FIR emission make a connection, perhaps through a collision between an HI cloud and Galactic HI, more likely. Such a collision could perhaps explain also the extreme-UV (or soft X-ray) emission observed from the neighborhood of M87. The importance of our finding lies in the possibility that many key observations done in the direction of the Virgo cluster may have been adversely affected by extinguishing clouds, which are not homogeneous and where the total V-band extinction may reach up to a few tenths of a magnitude.

Acknowledgements {#acknowledgements .unnumbered}
================

NB acknowledges support from the Israel Science Foundation. EA is supported by a grant from the Israel Ministry of Science to develop TAUVEX, a UV imaging telescope. KR and BW were supported by the Polish KBN grant number C76/98. We are grateful to Drs. C.K. Young and M.J. Currie for supplying electronic versions of the VPC data tables in advance of publication. NB is grateful to Mike Hauser for discussions on the reality of COBE/DIRBE features, to John Bahcall for allowing public use of his Galaxy model, and to Sara Beck and Sasha Kashlinsky for critical readings of one of the first drafts.

Appendix {#appendix .unnumbered}
========

Classical Wolf diagrams for galaxies (Zonn & Rudnicki 1965) were made for the regions labelled A through D in Fig. 4 and are shown in Figure 5. Only curve C could be interpreted as shaped by extinction, and even this only with a large amount of leeway. Curve A, as already cautioned, has a completely different shape. Both A and D curves are higher than the comparison curve B, and are presumably shaped by the inhomogeneous distribution of galaxies in the VC area.
If, however, we were to adopt curve C as representative of the background galaxies (no VC subclusters or background galaxy clusters are evident in this area), then the obscuring cloud would have to be relatively nearby, as no parts of the C and B curves overlap. To prevent distortion of the results by possible subclustering, we decided to study the color distribution of the background galaxies. The advantage of using color is that this parameter is distance-independent, provided the objects of study are not “too distant”, in the sense of requiring a k-correction. We assume that the objects in the VPC conform to this constraint and do not attempt to correct the color for redshift (which is not known, in most cases). We also caution about “edge effects”, which could arise because of a smaller population of the sub-areas located near the edges of the VPC coverage; such areas cover less than one square degree, have fewer than the average number of galaxies, and would be less representative. We analyzed the color distribution of all galaxies in the entire VPC. We selected objects only by their position; specifically, we included all the objects listed in the catalog with their magnitudes and colors, without separating them by morphological type, apparent blue magnitude, angular size, being members or non-members of the cluster, etc. The area enclosed within 184$^{\circ}<\alpha<189^{\circ}$ and 10$^{\circ}<\delta<16^{\circ}$ was divided into square degree areas at integer $\alpha$ and $\delta$ values. In each sub-area we calculated the statistics of the color distribution for the galaxies. The total number of galaxies per cell does not appear to show a specific pattern, apart from the edge effects mentioned above. We checked the distribution of photometric indices over the sub-areas of the VPC against the locations of the enhanced FIR emission. In each one square degree area we counted the galaxies, calculated their mean B$_J$ magnitude and the average (U–B$_J$), (B$_J$–R$_C$), and (U–R$_C$) color indices, and studied the statistical properties of the distributions of the color indices in each area. The color index (U–R$_C$) has been calculated as (U-B$_J$)+(B$_J$-R$_C$). The results are presented in Tables A1 and A2 for (U–R$_C$), where each cell contains these parameters for each one square degree region. The results for the other color indices were similar and are not shown. Each sub-area in Table A1 is represented by one cell in the table, where the rows are the declination of the sub-areas and the columns are the right ascension, both in decimal degrees. The distribution of the (U–R$_C$) color is represented by the mean, the standard error of the mean (in round brackets), and the number of galaxies in the sub-area \[in square brackets\]. The standard error of the mean $SE_a$ is defined as: $$SE_a=\frac{SD_a}{\sqrt{N_a}}$$ Here $SD_a$ is the standard deviation of the values in sub-area [*a*]{}, and $N_a$ is the number of galaxies in the same sub-area. Table A2 shows the properties of the distribution of (U–R$_C$) in each of the one square degree areas. Specifically, we list the kurtosis with its standard error, and the skewness with its standard error, for the distribution of galaxies within the square degree area. These are listed in order to demonstrate how similar (or different) the distributions of (U–R$_C$) values in individual cells are. The significance of possible differences among the cells was considered using the mean and its standard error.
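The per-cell statistics defined above, and the two-sample comparison applied in the next paragraph, are easy to reproduce. In the minimal sketch below the per-galaxy (U–R$_C$) colors are randomly generated placeholders, not VPC data; they are only drawn to resemble two of the Table A1 cells.

```python
import numpy as np

def cell_stats(colors):
    """Mean, standard error of the mean SE = SD/sqrt(N), and N for one cell."""
    colors = np.asarray(colors, dtype=float)
    n = colors.size
    return colors.mean(), colors.std(ddof=1) / np.sqrt(n), n

def t_between_means(mean1, se1, mean2, se2):
    """Student's t between two cell means, t = |m1 - m2| / sqrt(SE1^2 + SE2^2)."""
    return abs(mean1 - mean2) / np.sqrt(se1 ** 2 + se2 ** 2)

# Placeholder (U-R_C) colors for two illustrative cells; the real input would
# be the per-galaxy colors of the VPC objects in each square degree area.
rng = np.random.default_rng(0)
cell_a = rng.normal(1.39, 0.48, 26)   # resembles a bluer, low-FIR cell
cell_b = rng.normal(1.61, 0.42, 51)   # resembles a redder, high-FIR cell

m1, se1, n1 = cell_stats(cell_a)
m2, se2, n2 = cell_stats(cell_b)
print(f"cell A: {m1:.3f}({se1:.3f})[{n1}]   cell B: {m2:.3f}({se2:.3f})[{n2}]")
print(f"t = {t_between_means(m1, se1, m2, se2):.2f}")
```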
A Student’s [*t-test*]{} between the means, by which one estimates the variable $$t=\frac{\vert<X_1>-<X_2>\vert}{\sqrt{SE_1^2+SE_2^2}}$$ was performed. Here $<X_i>$ is the mean of variable [*i*]{}, and SE$_i$ is its standard error of the mean (Brandt 1970). A cursory perusal of Table A1 shows that the only sub-areas where (U–R$_C$) is small, and which are not on the borders of the VPC (Currie & Young 1998), are the two areas at ($\alpha$=187.5, $\delta$=13.5), which we call “area GA”, and at (187.5, 14.5) which we call “area GB”. These regions are central in the Virgo Ring (VR) and this finding, if significant, could indicate less wavelength-dependent extinction in this direction (which shows reduced FIR emission) than in neighboring areas. The comparison is done with two regions located in the direction of enhanced FIR emission, at (186.5, 12.5) which we call “area GC”, and at (188.5, 12.5) which we call “area GD”. The latter area [**is**]{} located at the edge of the VPC but does not appear to be depleted in galaxies, thus the comparison with it is valid. Using the t-test described above, the difference in means of the (U–R$_C$) color index between areas GA and GC yields t=1.97 (significant at the 95% level), and between areas GA and GD t=2.72 (significant at the 99.5% level). A similar significance level was found for the difference between areas GB and GC, with t=2.76. The results of the t-test indicate that the galaxies in the VPC, which are located behind the low FIR emission area, are slightly bluer than others which are located behind areas of high FIR emission. This impression is supported by an examination of the higher moments of the distribution (Table A2). The skewness of the (U–R$_C$) values in areas GA and GB is very similar, but is very different from the skewness of the distribution in areas GC and GD.

References {#references .unnumbered}
==========

Abadi, H.J. & Edmunds, M.G. 1978, A&A, 70, 189
Almoznino, E. & Brosch, N. 1998, MNRAS, 298, 920
Bahcall, J.N. & Soneira, R. 1984, ApJS, 55, 67
Beichman, C.A., Neugebauer, G., Habing, H.J., Clegg, P.E. & Chester, T.J. 1988, [*IRAS Catalogs and Atlases: Vol. 1 (Explanatory Supplement)*]{}, NASA RP-1190
Binggeli, B., Sandage, A. & Tammann, G.A. 1985, AJ, 90, 1681
Blitz, L., Spergel, D.N., Teuben, P.J. & Hartmann, D. 1999, astro-ph/9901307
Bohlin, R.C. 1975, ApJ, 200, 402
Böhringer, H., Nulsen, P.E.J., Braun, R. & Fabian, A.C. 1995, MNRAS, 274, L67
Böhringer, H., Briel, U.G., Schwarz, A., Voges, V., Hartner, G. & Trumper, J. 1994, Nature, 368, 828
Bothun, G., Impey, C.D., Malin, D.F. & Mould, J.R. 1987, AJ, 94, 23
Bothun, G., Schombert, J.M., Impey, C.D. & Schneider, S.E. 1990, ApJ, 360, 427
Brandt, S. 1970, [*Statistical and Computational Methods in Data Analysis*]{}, Amsterdam: North-Holland Publishing Co., pp. 125 [*et seq.*]{}
Currie, C.K. & Young, M.J. 1998, A&AS, 127, 367
de Blok, W.J.G. 1997, PhD thesis, University of Groningen
de Blok, W.J.G., van der Hulst, J.M. & Bothun, G.D. 1995, MNRAS, 274, 235
Deul, F.R. & Burton, W.B. 1990, A&A, 230, 153
Egger, R.J. & Aschenbach, B. 1995, A&A, 294, L25
Ferguson, H.C. 1993, MNRAS, 263, 343
Ferguson, H.C., Tanvir, N.R. & von Hippel, T. 1998, Nature, 391, 461
Giovanelli, R. & Haynes, M. 1989, ApJ, 346, L5
Graham, J.A., Ferrarese, L., Freedman, W.L. 1998, BAAS, 192, 66.12
Haikala, L.K., Mattila, K., Bowyer, S., Sasseen, T.P., Lampton, M. & Knude, J. 1995, ApJ, 443, L33
Hartmann, D. & Burton, W.B. 1997, [*Atlas of Galactic Neutral Hydrogen*]{}, Cambridge (UK): Cambridge University Press
Hauser, M.G., Kelsall, T., Leisawitz, D. & Weiland, J. 1997, [*COBE Diffuse Infrared Background Experiment (DIRBE) Explanatory Supplement*]{}, COBE Reference Document Ref.No. 97A
Heller, A., Almoznino, E. & Brosch, N. 1998, MNRAS, in press
Ho, L.C. & Filippenko, A.V. 1996, ApJ, 463, 818
Hoffmeister, C. 1962, Z.f.Astrophys., 55, 40
Kashlinsky, A. 1999, private communication
Katsiyannis, A.C., Kemp, S.N., Berry, D.S. & Meaburn, J. 1998, A&AS, 132, 387
Kerp, J., Mack, K.-H., Egger, R., Pietz, J., Zimmer, F., Mebold, U., Burton, W.B. & Hartmann, D. 1996, A&A, 312, 67
Knude, J. & H$\o$g, E. 1999, A&A, 341, 451
Knude, J. 1996, A&A, 306, 108
Kumar, K.C. & Thonnard, N. 1983, AJ, 88, 260
Lagache, G., Abergel, A., Boulanger, F. & Puget, J.-L. 1998, A&A, 333, 709
Leggett, S.K., Clowes, R.G., Kalafi, M., MacGillivray, H.T., Puxley, P.J., Savage, A. & Wolstencroft, R.D. 1987, MNRAS, 227, 563
Lieu, R., Mittaz, J.P.D., Bowyer, S., Lockman, F.J., Hwang, C.-Y. & Schmitt, J.H.M.M. 1996, ApJL, 458, 5
Lo, K.Y. & Sargent, W.L.W. 1979, ApJ, 227, 756
Macri, L.M. 1999, astro-ph/9901332
Madore, B. & Freedman, W.L. 1991, PASP, 103, 933
Maoz, D. 1995, ApJ, 455, L115
Murawski, W. 1983, Acta Cosmologica, 12, 7
Okroy, R. 1965, Astron. Cirk., 320, 4
Pickering, T.E., Impey, C.D., Van Gorkom, J.H. & Bothun, G.D. 1997, AJ, 114, 1858
Rudnicki, K. 1986, [*Gamov Cosmology*]{} (F.Melchiorri & R.Ruffini, eds.), LXXXVI Corso of the Soc. It. Fisica, Bologna, p. 480
Rudnicki, K. & Baranowska, M. 1966, Acta Astron., 16, 65
Schneider, S.E., Helou, G., Salpeter, E.E. & Terzian, Y. 1983, ApJ, 273, L1
Schlegel, D.J., Finkbeiner, D.P. & Davis, M. 1998, ApJ, 500, 525
Sofue, Y. 1994, ApJ, 431, L91
Sprayberry, D., Impey, C.D., Irwin, M.J., McMahon, R.G. & Bothun, G.D. 1993, ApJ, 417, 114
Stark, R., Dickey, J.M., Burton, W.B. & Wennmacher, A. 1994, A&A, 281, 199
Taylor, C.L., Thomas, D.L., Brinks, E. & Skillman, E.D. 1996, ApJS, 107, 143
van den Bergh, S. 1996, PASP, 108, 1091
Wakker, B.P. & Boulanger, F. 1986, A&A, 170, 84
Wakker, B.P. & van Woerden, H. 1991, A&A, 250, 509
Wakker, B.P. & van Woerden, H. 1997, ARAA, 35, 217
Weil, M.L., Bland-Hawthorn, J. & Malin, D.F. 1997, ApJ, 490, 664
Wszolek, B., Rudnicki, K., de Bernardis, P., Masi, S. & Salvi, A. 1988a, Ap.Space Sci. 152, 29
Wszolek, B., Rudnicki, K., de Bernardis, P. & Masi, S. 1988b, in [*Large Scale Structures in the Universe*]{}, (Seitter, W.C., Duerbeck, H.W. & Tacke, M., eds.) Lecture Notes in Physics 310, 223
Wszolek, B., Rudnicki, K., de Bernardis, P. & Masi, S. 1989, [*The World of Galaxies*]{} (Corvin, H.G., Jr. & Bottinelli, C., eds.) p. 499
Zonn, W. 1957, Bull. Acad. Polonaise des Sciences, vol. V, no. 1, 47
Zonn, W. & Stodolkiewicz, J. 1958, Bull. Acad. Polonaise des Sciences, ser. sci. math. astr. phys. vol. VI, no. 3, 185
Zonn, W. & Rudnicki, K., 1965, in [*Stellar Astronomy*]{}, Washington DC, 168

Figure captions {#figure-captions .unnumbered}
===============

Figure 1: FIR maps of the VR region. The top right panel shows the IRAS 100$\mu$m map. The COBE/DIRBE 100$\mu$m (top left panel) and 240$\mu$m emission (lower left panel) show the VR, but have a lower angular resolution than the IRAS 100$\mu$m map. The contours of the FIR emission are in MJy ster$^{-1}$. For orientation purposes, the location of M87 is (187.07, +12.67).

Figure 2: Wolf diagram for USNO-A2.0 stars in five selected regions, one corresponding to the lowest FIR-emitting region in Fig. 1
and the other four coinciding with peaks of the FIR emission. The regions are defined in Table 1 and the cumulative distributions are depicted as follows: USNO1222+1330 as a dotted line, USNO1226+1430 as a dashed line, USNO1230+1130 as a long-dashed line, USNO1230+1330 as a solid line with open squares, and USNO1234+1230 as a dot-dashed line.

Figure 3: Comparison of predicted star counts (using the Bahcall-Soneira model) and the actual star counts from the USNO-A2.0 catalog, for the five regions listed in Table 3. Each region is one degree square; the actual star counts are represented by open squares and the model prediction is shown as the solid line.

Figure 4: Selected areas to check for total extinction effects on galaxies. The contours refer to the IRAS 100$\mu$m map and the heavy-lined parallelograms marked A through D are the selected areas for testing the number density of galaxies.

Figure 5: Wolf diagram for galaxies. The three distribution curves A, C, D correspond to regions within higher FIR emission of Fig. 1 and curve B to the region of low FIR emission. For explanation of the designations A, B, C, D see Fig. 4. The data from each of the regions is depicted as follows: region A - solid line, region B - dotted line, region C - dashed line, and region D - long-dashed line.

Figure 6: Total HI column density from the LDS (left panel) compared with the IRAS 100$\mu$m emission (right panel). Darker shades in the HI plot correspond to higher column densities. Lighter shades in the FIR plot correspond to more intense emission.

Figure 7: Galactic latitude-velocity plots near b$\approx$75$^{\circ}$ for l=276.0 (top panel), l=283.0 (middle panel), and l=290.5 (bottom panel). The first plot corresponds to the main blob (more intense FIR emission), the second to the “hole”, center of the VR, and the third to the second blob (next most intense FIR emission). The horizontal band marked with heavy lines in each panel indicates the region for which data were used to plot Fig. 8.

Figure 8: Cuts through the position-velocity plots of Fig. 7, averaging the brightness temperature for b=74$^{\circ}, \, 74^{\circ}$.5, and b=75$^{\circ}$ (three latitude bands). The order of the plots is like in Fig. 7. The average HI column density shows the velocity distribution of HI clouds in this region. Three peaks stand out clearly, with the most intense one at the highest negative velocities for the main blob.
Table 1: Regions selected for the USNO-A2.0 star counts. Columns: region designation; $\alpha$, $\delta$ (1950); $\alpha$, $\delta$ (decimal degrees); galactic $l$, $b$ (degrees); number of stars.

Region & $\alpha$ $\delta$ (1950) & $\alpha$ $\delta$ (deg) & $l$ $b$ (deg) & N stars
USNO1222+1330 & 12:22:00 +13:30:00 & 185.5 +13.5 & 277.3 +74.7 & 1804
USNO1226+1430 & 12:26:00 +14:30:00 & 186.5 +14.5 & 279.3 +76.0 & 1593
USNO1230+1130 & 12:30:00 +11:30:00 & 187.5 +11.5 & 286.4 +73.5 & 2126
USNO1230+1330 & 12:30:00 +13:30:00 & 187.5 +13.5 & 284.4 +75.4 & 1696
USNO1234+1230 & 12:34:00 +12:30:00 & 188.5 +12.5 & 289.0 +74.7 & 1669

Table A1: Mean (U–R$_C$) color index of the VPC galaxies in one square degree cells. Each cell lists the mean, the standard error of the mean (in round brackets), and the number of galaxies \[in square brackets\]. Rows are declination and columns right ascension, both in decimal degrees.

$\delta$ (deg) & $\alpha$=184.5 & 185.5 & 186.5 & 187.5 & 188.5
15.5 & 1.485(.109)\[08\] & 1.300(.103)\[04\] & 1.360(.186)\[06\] & 1.678(.162)\[08\] & 1.751(.118)\[12\]
14.5 & 1.347(.079)\[27\] & 1.476(.060)\[48\] & 1.547(.061)\[46\] & 1.380(.096)\[26\] & 1.526(.083)\[30\]
13.5 & 1.573(.053)\[68\] & 1.508(.057)\[56\] & 1.545(.074)\[39\] & 1.390(.094)\[26\] & 1.445(.090)\[22\]
12.5 & 1.363(.070)\[36\] & 1.602(.047)\[57\] & 1.608(.058)\[51\] & 1.593(.070)\[45\] & 1.693(.060)\[35\]
11.5 & 1.274(.075)\[36\] & 1.582(.068)\[32\] & 1.658(.054)\[72\] & 1.514(.066)\[45\] & 1.420(.101)\[26\]
10.5 & 1.296(.070)\[11\] & 1.347(.077)\[22\] & 1.530(.099)\[23\] & 1.596(.104)\[20\] & 1.054(.114)\[14\]

Table A2: Higher moments of the (U–R$_C$) distribution in the same cells as Table A1. Each cell lists the kurtosis with its standard error (in round brackets), followed by the skewness with its standard error.

$\delta$ (deg) & $\alpha$=184.5 & 185.5 & 186.5 & 187.5 & 188.5
15.5 & -.37(1.48) .11(.75) & 1.89(2.62) 1.35(1.01) & .83(1.74) .16(.85) & .74(1.48) -.46(.75) & .18(1.23) -.64(.64)
14.5 & -.31(.87) .58(.45) & -1.38(.67) -.05(.34) & -1.10(.69) -.36(.35) & -.96(.89) .11(.46) & .35(.83) .00(.43)
13.5 & .83(.57) -.86(.29) & -.32(.63) -.07(.32) & -.03(.74) .45(.38) & -.09(.89) .53(.47) & 1.89(.95) -.91(.49)
12.5 & -.43(.77) -.33(.39) & 1.38(.62) -.71(.32) & .43(.67) -.47(.33) & .35(.70) -.64(.35) & -.93(.78) -.43(.40)
11.5 & -.47(.77) -.08(.39) & -1.06(.81) .10(.41) & -.21(.56) -.10(.28) & -.10(.70) .38(.35) & -.86(.89) -.02(.46)
10.5 & .59(1.28) -.56(.66) & -1.04(.95) -.55(.49) & -.58(.94) -.55(.48) & .14(.99) .57(.51) & -.41(1.15) .11(.60)

[^1]: http://www.sns.ias.edu/$\sim$jnb/Html/galaxy.html
hep-ph/0009287

[S. F. King and M. Oliveira]{}

[*[Department of Physics and Astronomy, University of Southampton, Southampton, SO17 1BJ, U.K.]{}*]{}

[**Abstract**]{}

[We analyse a supersymmetric string-inspired model of all fermion masses and mixing angles based on the Pati-Salam $SU(4)\times SU(2)_L \times SU(2)_R$ gauge group supplemented by a $U(1)_X$ flavour symmetry. The model involves third family Yukawa unification and predicts the top mass and the ratio of the vacuum expectation values $\tan \beta$. The model also provides a successful description of the CKM matrix and predicts the masses of the down and strange quarks. However our main focus is on the neutrino masses and MNS mixing angles, and we show how the recent atmospheric neutrino mixing observed by Super-Kamiokande, and the MSW solution to the solar neutrino problem lead to important information about the flavour structure of the model near the string scale. We show how single right-handed neutrino dominance may be implemented by the use of “Clebsch zeros”, leading to the LMA MSW solution, corresponding to bi-maximal mixing. The LOW MSW and SMA MSW solutions are also discussed.]{}

The problem of understanding the quark and lepton masses and mixing angles represents one of the major unsolved questions of the standard model. Recently additional information on the fermion mass spectrum has come from the measurement of the atmospheric neutrino masses and mixing angles by Super-Kamiokande [@SKamiokandeColl]. The most recent data disfavours mixing involving a sterile neutrino, and finds a good fit for $\nu_{\mu} \rightarrow \nu_{\tau}$ mixing with $\sin^22\theta_{23}>0.88$ and a mass squared splitting $\Delta m^2_{23}$ in the $1.5-5\times 10^{-3} {\rm\ eV}^2$ range at 90% CL [@HSobel]. Super-Kamiokande has also provided additional support for solar neutrino mixing. The most recent Super-Kamiokande data does not show a significant day-night asymmetry and shows an energy independent neutrino spectrum, thus it also disfavours the sterile neutrino mixing hypothesis, the just-so vacuum oscillation hypothesis, and the small mixing angle (SMA) MSW [@MSWMechanism] solution [@YSuzuki]. The preferred solution at the present time seems to be the large mixing angle (LMA) MSW solution, although a similar solution with a lower mass splitting (the LOW solution) is also possible. A typical point in the LMA MSW region is $\sin^22\theta_{12}\approx 0.75$, and $\Delta m^2_{12}\approx 2.5\times 10^{-5} {\rm\ eV}^2$ [@BaKrSm]. If one accepts the recent data as evidence for neutrino masses and mixing angles, then the obvious question is how these can be accommodated in the standard model, or one of its supersymmetric extensions. The simplest possibility to account for the smallness of the neutrino masses is the see-saw mechanism [@seesaw] in which one introduces right-handed neutrinos which acquire very large Majorana masses at a super-heavy mass scale. When one integrates out the right-handed neutrinos the “normal sized” Dirac Yukawa couplings, which connect the left-handed to the right-handed neutrinos, are transformed into very small couplings which generate very light effective left-handed physical Majorana neutrino masses.
Given the see-saw mechanism, it is natural to expect that the spectrum of the neutrino masses will be hierarchical, since the Dirac Yukawa couplings in the charged fermion sector are observed to be hierarchical, and if they are related to the Dirac neutrino Yukawa couplings then they should also be hierarchical, leading to hierarchical light Majorana masses. [^1] Having assumed the see-saw mechanism and a hierarchical neutrino mass spectrum, the next question is how such large (almost maximal) lepton mixing angles such as $\theta_{23}$ could emerge? There are several possibilities that have been suggested in the literature. One possibility is that it happens as a result of the off-diagonal 23 entries in the left-handed Majorana matrix being large, and the determinant of the 23 sub-matrix being accidentally small, leading to a neutrino mass hierarchy with large neutrino mixing angles [@ElLeLoNa]. Another possibility is that the neutrino mixing angles start out small at some high energy scale, then get magnified by renormalization group (RG) running down to low energies [@BaLePa:MTanimoto]. A third possibility is that the off-diagonal elements of the left-handed neutrino Majorana matrix are large, but the 23 sub-determinant of the matrix is small for a physical reason, as would be the case if a single right-handed neutrino were providing the dominant contribution to the 23 sub-matrix [@KingSRND; @SFKing1; @SFKing2]. We shall refer to these three approaches as the accidental, the magnification and the single right-handed neutrino dominance (SRHND) mechanisms, respectively. As we shall see, in the model under consideration, only the SRHND mechanism provides a successful description of the atmospheric neutrino data, and the results in this paper will rely on this mechanism. A promising approach to understanding the fermion mass spectrum is within the framework of supersymmetric (SUSY) unified theories. Within the framework of such theories the quark and lepton masses and mixing angles become related to each other, and it begins to be possible to understand the spectrum. The simplest grand unified theory (GUT) is $SU(5)$ but this theory in its minimal version does not contain any right-handed neutrinos. Nevertheless three right-handed neutrinos may be added, and in this theory it is possible to have a large 23 element [^2] on the Dirac neutrino Yukawa matrix without introducing a large 23 element into any of the charged fermion Yukawa matrices. The problem of maintaining a 23 neutrino mass hierarchy in these models may be solved for example by assuming SRHND [@AlFe2]. Another possibility within the framework of $SU(5)$ is to maintain all the off-diagonal elements to be small, but require the 22 and 32 elements of the Dirac neutrino Yukawa matrix to be equal and the second right-handed neutrino to be dominant, in which case SRHND again leads to a large 23 neutrino mixing angle with hierarchical neutrino masses [@AlFeMa]. However the drawback of $SU(5)$ is that it does not predict any right-handed neutrinos, which must be added as an afterthought. From the point of view of neutrino masses, the most natural GUTs are those like $SO(10)$ that naturally predict right-handed neutrinos. 
However within the framework of $SO(10)$ the quark masses and mixing angles are related to the lepton masses and mixing angles, and the existence of large neutrino mixing angles is not expected in the minimal versions of the theory in which the Higgs doublets are in one (or two) ${\bf 10}'s$ (ten dimensional representations of $SO(10)$) and each matter family is in a ${\bf 16}$. Nevertheless various possibilities have been proposed in $SO(10)$ in order to account for the large neutrino mixing angles. Within the framework of minimal $SO(10)$ with third family Yukawa unification, it has been suggested that if two operators with different Clebsch coefficients contribute with similar strength then, with suitable choice of phases, in the case of the lepton Yukawa matrices one may have large numerical 23 elements, which add up to give a large lepton mixing angle, while for the quarks the 23 elements can be small due to approximate cancellation of the two contributing operators [@BaPaWi]. This is an example of the accidental mechanism mentioned above, where in addition one requires the quark mixing angles to be small by accident, although it remains to be seen if the LMA MSW solution could be understood in this framework. Moving away from minimal $SO(10)$, one may invoke a non-minimal Higgs sector in which one Higgs doublet arises from a ${\bf 10}$ and one from a ${\bf 16}$, and in this framework it is possible to understand atmospheric neutrino mixing [@AlBa]. Alternatively, one may invoke a non-minimal matter sector in which parts of a quark and lepton family arise from a ${\bf 16}$ and other parts from a ${\bf 10}$, and in these models one may account for atmospheric and solar neutrinos via an inverted mass hierarchy mechanism [@ShTa]. In the present paper we shall discuss neutrino masses and mixing angles in a particular string-inspired [*minimal*]{} model based on the Pati-Salam $SU(4)\times SU(2)_L \times SU(2)_R$ (422) group [@PaSa]. As in $SO(10)$ the presence of the gauged $SU(2)_R$ predicts the existence of three right-handed neutrinos. However, unlike $SO(10)$, there is no Higgs doublet-triplet splitting problem since in the minimal model both Higgs doublets are contained in a $(1,2,2)$ representation. Moreover, since the left-handed quarks and leptons are in the $(4,2,1)$ and the right-handed quarks and leptons in the $(4,1,2)$ representations, the model also leads to third family Yukawa unification as in minimal $SO(10)$. Although the Pati-Salam gauge group is not unified at the field theory level, it readily emerges from string constructions either in the perturbative fermionic constructions [@AnLe], or in the more recent type I string constructions [@ShTy], unlike $SO(10)$ which typically requires large Higgs representations which do not arise from the simplest string constructions. The question of fermion masses and mixing angles in the string-inspired Pati-Salam model has already been discussed for the case of charged fermions [@SFking2; @AlKi3], and later for the case of neutrinos [@AlKi4]. For the neutrino study [@AlKi4] it was assumed that the heavy Majorana neutrino mass matrix was proportional to the unit matrix, and only small neutrino mixing angles were considered. Later on a $U(1)_X$ family symmetry was added to the model, in order to understand the horizontal hierarchies, although in this case the neutrino spectrum was not analysed at all [@AlKiLeLo1]. 
The purpose of the present paper is to discuss neutrino masses and mixing angles in the string-inspired Pati-Salam model supplemented by a $U(1)_X$ flavour symmetry. The model involves third family Yukawa unification and predicts the top mass and the ratio of the vacuum expectation values $\tan \beta$, as we recently discussed in Ref. [@KiOl2]. It is already known that the model can provide a successful description of the CKM matrix and predicts the down and strange quark masses, although our present analysis differs from that presented previously [@AlKiLeLo1] partly due to the recent refinements in third family Yukawa unification [@KiOl2], but mainly as a result of the recent Super-Kamiokande data which has important implications for the flavour structure of the model. In fact our main focus here is on the neutrino masses and mixing angles which were not previously discussed at all in this framework. We assume a minimal version of the model, and avoid the use of the accidental cancellation mechanism, which in any case has difficulties in accounting for bi-maximal neutrino mixing. We also show that the mixing angle magnification mechanism can only provide limited increases in the mixing angles, due to the fact that the unified third family Yukawa coupling is only approximately equal to 0.7 [@KiOl2] and is therefore too small to have a dramatic effect. Instead, we rely on the SRHND mechanism, and we show how this mechanism may be implemented in the 422 model by appropriate use of operators with “Clebsch zeros” resulting in a natural explanation for atmospheric neutrinos via a hierarchical mass spectrum. We specifically focus on the LMA MSW solution in the text, with the LOW and SMA MSW solutions relegated to Appendices. The layout of the remainder of the paper is as follows. In section II we briefly review the see-saw mechanism in the Minimal Supersymmetric Standard Model (MSSM) [@MSSM] with right-handed neutrinos. In section III we review some useful analytic results for SRHND, for the case of an approximately diagonal right-handed Majorana mass matrix. In section IV we introduce the string-inspired Pati-Salam model, and in section V we introduce an Abelian anomalous gauge $U(1)_X$ family symmetry into the model, and show how horizontal Yukawa hierarchies may be generated. In section VI we describe our operator approach to fermion masses, including the heavy Majorana neutrino masses. Section VII contains the main results of the paper. In this section we show how a particular choice of $U(1)_X$ family charges, and operators with certain Clebsch coefficients can lead to a successful description of quark and lepton masses and mixing angles, and in particular describe atmospheric and solar neutrinos via SRHND. Although the neutrino masses and mixing angles correspond to the usual LMA MSW solution, in Appendix C we show how a modification of the heavy Majorana mass matrix can lead to a large mixing angle MSW solution with a LOW mass splitting. In Appendix D we present a different choice of $U(1)_X$ charges and operators which can lead to the SMA MSW solution. 
The superpotential of the MSSM with right-handed neutrinos is given by : $$\begin{aligned} {\cal W} &=& {\cal W}_{MSSM}+{\cal W}_{\nu^c} \label{SRNDSuperPot} \\ {\cal W}_{MSSM} &=& q_A (\lambda_u)_{AB} u^c_B h_u - q_A (\lambda_d)_{AB} d^c_B h_d - l_A (\lambda_e)_{AB} e^c_B h_d + \mu h_u h_d \\ {\cal W}_{\nu^c} &=& l_A (\lambda_\nu)_{AB} \nu^c_B h_u + {\textstyle {1 \over 2}} \nu^c_A (M_{RR})_{AB} \nu^c_B \end{aligned}$$ where $A,B=1,\dots,3$ are family indices, $u^c$, $d^c$, $e^c$ and $\nu^c$ are the right-handed $SU(2)_L$ singlet superfields, $q=(u,d)$ and $l=(\nu,e)$ are the $SU(2)_L$ quark and lepton doublets, and $h_u$ ($h_d$) is the up (down) Higgs boson doublet. The Dirac neutrino coupling and the heavy Majorana mass for the right-handed neutrinos are denoted by $\lambda_\nu$ and $M_{RR}$ respectively. When the neutral components of the two MSSM Higgs bosons $h^0_{u,d}$ acquire their vacuum expectation values (VEVs) $v_{2,1}$ ($\tan\beta=v_2/v_1 \sim 40-50$) the superpotential in Eq.  generates the following sum of mass terms : $$\begin{aligned} {\cal L}_{U,D,E} &=&-U (\lambda_u v_2) {U^c} -D (\lambda_d v_1) {D^c} -E (\lambda_e v_1) {E^c} + {\rm h.c.} \label{SRNDLagUDE} \\ {\cal L}_{N} &=&-N (\lambda_\nu v_2) {N^c} -{\textstyle {1\over 2}} {N^c} M_{RR} {N^c} + {\rm h.c.} \label{SRNDLagN}\end{aligned}$$ where the upper case letters now denote the fermionic components of the superfields in ${\cal W}$, for example $u$ contains $(U,\tilde u)\equiv (U_L,\tilde u_L)$ and $u^c$ contains $(U^c, \tilde u^c)\equiv(U^*_R,\tilde u^*_R)$. The Yukawa matrices in Eq.  can be diagonalized by bi-unitary transformations $S$ and $T$ defined by : $$T^{u*} \lambda_u S^{uT} = \lambda'_u \qquad T^{d*} \lambda_d S^{dT} = \lambda'_d \qquad T^{e*} \lambda_e S^{eT} = \lambda'_e$$ Thus the physical (primed) states $U'_{R,L}$ are related to the gauge eigenstates $U_{R,L}$ by $U'_R = S^u U_R$ and $U'_L = T^u U_L$, [*etc*]{}. In this model, the left-handed neutrino masses are generated via the see-saw mechanism [@seesaw] by the terms in Eq.  which can be re-arranged into a two-by-two block matrix in the following way : $${\cal L}_N = - {\textstyle {1\over 2}} ( N \, N^c ) \left( \matrix{ 0 & m_{LR} \cr m^T_{LR} & M_{RR} }\right) \left( \matrix{ N \cr N^c} \right) + {\rm h.c.}$$ where $m_{LR} = \lambda_\nu v_2$. Thus, after the heavy $N^c$ fields are integrated out, the light left-handed neutrinos $N$ effectively acquire a small mass given by : $$m_{LL} = m_{LR} M^{-1}_{RR}\, m_{LR}^T \label{SRNDSeeSaw}$$ Finally, the diagonalization of $m_{LL}$ : $$T^{N*} m_{LL} T^{N\dagger}_{L} = {\rm diag}(m_{\nu_1},m_{\nu_2},m_{\nu_3})$$ allows the determination of the masses of the physical neutrinos $m_{\nu_A}$ and enables the physical neutrino states $N' = (\nu_1,\nu_2,\nu_3)$ to be related to the neutrino gauge fields $N = (\nu_e, \nu_\mu, \nu_\tau)$ by $N' = T^N N$. Taking into account the above conventions, we now proceed to give expressions for the Cabibbo-Kobayashi-Maskawa (CKM) matrix [@CKM] ($V^{CKM}$) and the corresponding lepton analogue, the Maki-Nakagawa-Sakata (MNS) matrix [@MaNaSa] ($V^{MNS}$).
Their definitions derive from the charged current interactions : [^3] $$- {g \over \sqrt{2}} W^+_\mu \bar\Psi_U \gamma^\mu P_L \Psi_D \to - {g \over \sqrt{2}} W^+_\mu \Psi_{U'} \gamma^\mu P_L V^{CKM} \Psi_{D'}$$ $$- {g \over \sqrt{2}} W^-_\mu \bar\Psi_E \gamma^\mu P_L \Psi_{N} \to - {g \over \sqrt{2}} W^-_\mu \bar\Psi_{E'} \gamma^\mu P_L V^{MNS} \Psi_{N'}$$ that imply : $$V^{CKM} = T^u T^{d\dagger} \qquad V^{MNS} = T^e T^{N\dagger} \label{SRNDCKMMNS}$$ In what follows we will assume that the matrices in Eq.  are real. [^4] Thus, we will write $V^{MNS}$ in terms of three rotation matrices : $$V^{MNS} = R_{23} R_{13} R_{12} \label{MNSRot}$$ given by : $$R_{23} = \left( \matrix{ 1 & 0 & 0 \cr 0 & \phantom{-}c_{23} & s_{23} \cr 0 & -s_{23} & c_{23} } \right) \quad R_{13} = \left( \matrix{ \phantom{-}c_{13} & 0 & s_{13} \cr 0 & 1 & 0 \cr -s_{13} & 0 & c_{13} } \right) \quad R_{12} = \left( \matrix{ \phantom{-}c_{12} & s_{12} & 0 \cr -s_{12} & c_{12} & 0 \cr 0 & 0 & 1 } \right) \label{RotMtr}$$ where $s_{AB} = \sin\theta_{AB}$, $c_{AB} = \cos\theta_{AB}$ refer to the lepton mixing angles between the $A$ and $B$ generation. Using Eq.  into Eq.  gives : $$V^{MNS} = \left( \matrix{ c_{12} c_{13} & s_{12} c_{13} & s_{13} \cr -s_{12} c_{23}-c_{12} s_{23} s_{13} & \phantom{-} c_{12} c_{23}-s_{12} s_{23} s_{13} & s_{23} c_{13} \cr \phantom{-} s_{12} s_{23}-c_{12} c_{23} s_{13} & -c_{12} s_{23}-s_{12} c_{23} s_{13} & c_{23} c_{13} } \right) \label{VMNS}$$ It is also practical to have expressions for the $\theta_{AB}$ angles in terms of the $V^{MNS}$ entries. Inverting Eq.  we find that : [^5] $$\sin\theta_{13} = V_{e3} \qquad \sin\theta_{23} = {V_{\mu 3} \over \sqrt{1-V_{e3}^2}} \qquad \sin\theta_{12} = {V_{e2} \over \sqrt{1-V_{e3}^2}}$$ Finally we note that while the above expressions were derived in the context of three neutrino species, the analysis of the experimental results assumed only two, thus a direct comparison of mixing angles is not exactly valid. Third family single right-handed neutrino dominance (SRHND) [@KingSRND; @SFKing1; @SFKing2] is a mechanism that can explain the large atmospheric ($\theta_{23}$) and the solar LMA MSW ($\theta_{12}$) neutrino mixing angles and a small $\theta_{13}$. SRHND relies on the possibility that the neutrino mass matrix ($m_{LL}$) is dominated by the contributions coming solely from a single right-handed neutrino (for example $\nu^c_\tau$.) In this scheme a maximal $\theta_{23}$ angle arises when the tau right-handed neutrino $\nu^c_\tau$ couples to the left-handed muon $\nu_\mu$ and tau neutrino $\nu_\tau$ with equal strength. Similarly, if $\nu^c_\mu$ couples to $\nu_e$ and to $\nu_\mu$ with comparable strength then $\theta_{12}$ is large. The role of the (sub-dominant) muon neutrino is also important since it provides small perturbations to the $m_{LL}$ matrix (which otherwise has one heavy and two massless eigenstates), thus leading to a neutrino mass splitting $\Delta m^2_{12}=|m^2_{\nu_2}-m^2_{\nu_1}|$ compatible with experiment. In this section we summarize the theory behind SRHND and review the analytic results presented in Ref. [@SFKing2] for the case of the diagonal dominated right-handed neutrino mass matrix $M_{RR}$. The see-saw formula for the left-handed neutrino matrix in Eq.  depends explicitly on $M_{RR}$. 
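Before working through the analytic limits that follow, it may help to see the see-saw formula and the SRHND idea at work numerically. The sketch below uses an illustrative Dirac neutrino Yukawa texture and heavy Majorana masses, which are assumptions chosen so that $\nu^c_\tau$ dominates the 23 block (they are not the couplings derived later in this paper), and extracts the MNS angles with the expressions given above.

```python
import numpy as np

# Illustrative Dirac neutrino Yukawa couplings (rows: nu_e, nu_mu, nu_tau;
# columns: nu^c_e, nu^c_mu, nu^c_tau) and heavy Majorana masses, chosen so
# that nu^c_tau dominates the 23 block (SRHND), with lambda_23 ~ lambda_33
# and lambda_12 ~ lambda_22.  All numbers are assumptions for illustration.
lam = np.array([[0.00, 0.03, 0.02],
                [0.00, 0.03, 0.70],
                [0.00, 0.02, 0.70]])
M_RR = np.diag([1.0e16, 1.0e13, 6.0e14])   # GeV, illustrative
v2 = 174.0                                 # GeV, up-type Higgs VEV (assumed)

# See-saw: m_LL = v2^2 * lambda M_RR^{-1} lambda^T  (all matrices real here)
m_LL = v2**2 * lam @ np.linalg.inv(M_RR) @ lam.T

# m_LL is real and symmetric, so an orthogonal diagonalization suffices;
# eigenvalues come out in ascending order (m_nu1, m_nu2, m_nu3).
masses, evecs = np.linalg.eigh(m_LL)

# In the conventions above T^N has the mass eigenstates as rows, so with
# negligible charged-lepton mixing (T^e ~ 1) the MNS matrix is just the
# eigenvector matrix below (columns = mass eigenstates in the flavour basis).
V = evecs
s13 = abs(V[0, 2])
s23 = abs(V[1, 2]) / np.sqrt(1.0 - s13**2)
s12 = abs(V[0, 1]) / np.sqrt(1.0 - s13**2)

print("neutrino masses [eV]:", np.round(1e9 * masses, 4))   # GeV -> eV
print(f"sin(theta_23) ~ {s23:.2f}, sin(theta_12) ~ {s12:.2f}, "
      f"sin(theta_13) ~ {s13:.2f}")
```

For this texture the heaviest state comes out near the atmospheric scale with large 23 and 12 mixing and a smaller 13 angle, in line with the analytic discussion that follows.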
Although $M_{RR}$ might have a non-trivial structure we find it instructive to start our analysis by considering the very simple case of $M_{RR}$ given by : $$M^{-1}_{RR} \sim {\rm diag}(M^{-1}_{\nu_1},M^{-1}_{\nu_2},M^{-1}_{\nu_3}) \sim {\rm diag}(0,0,M^{-1}_{\nu_3}) \label{SRNDmRR}$$ which effectively corresponds to taking $M_{\nu_1},M_{\nu_2} \gg M_{\nu_3}$. Substituting Eq.  into Eq.  we find that : [^6] $$m_{LL} = v_2^2 \lambda_\nu M^{-1}_{RR} \lambda^T_\nu \sim \lambda M^{-1}_{RR} \lambda^T \sim M^{-1}_{\nu_3} \left(\matrix{ \lambda_{13}^2 & \lambda_{13} \lambda_{23} & \lambda_{13} \lambda_{33} \cr \lambda_{13} \lambda_{23} & \lambda_{23}^2 & \lambda_{23} \lambda_{33} \cr \lambda_{13} \lambda_{33} & \lambda_{23} \lambda_{33} & \lambda_{33}^2 \cr} \right) \label{SRNDmLLMTR1}$$ The $m_{LL}$ matrix above is easily diagonalized by the matrices $R_{23}$, $R_{13}$, $R_{12}$ [^7] in Eq.  with rotation angles given by : $$\matrix{ \displaystyle s_{23} = {\lambda_{23} \over A} & \displaystyle c_{23} = {\lambda_{33} \over A} & & A^2 = \lambda_{33}^2+\lambda_{23}^2 \hfill \cr & & \qquad & \cr \displaystyle s_{13} = {\lambda_{13} \over B} & \displaystyle c_{13} = { A \over B} & & \displaystyle B^2 = \lambda_{33}^2+\lambda_{23}^2+\lambda_{13}^2 } \label{SRNDRotAngles}$$ that successively act on $m_{LL}$ as follows : $$m_{LL}^{\prime\prime\prime} = R_{12}^\dagger R_{13}^\dagger R_{23}^\dagger m_{LL} R_{23} R_{13} R_{12} = {\rm diag}(m_{\nu_1},m_{\nu_2},m_{\nu_3})$$ It is also convenient to define the following primed matrices : $$m_{LL}^{\prime} = R_{23}^\dagger m_{LL} R_{23} \quad\quad m_{LL}^{\prime\prime} = R_{13}^\dagger m_{LL}^{\prime} R_{13} \quad\quad m_{LL}^{\prime\prime\prime} = R_{12}^\dagger m_{LL}^{\prime\prime} R_{12} \label{SRNDmLLprime}$$ which, for $m_{LL}$ as in Eq. , are explicitly given by : $$m_{LL}^\prime \sim M^{-1}_{\nu_3} \left(\matrix{ \lambda_{13}^2 & 0 & \lambda_{13} A \cr 0 & 0 & 0 \cr \lambda_{13} A & 0 & A^2}\right)\qquad m_{LL}^{\prime\prime} \equiv m_{LL}^{\prime\prime\prime} \sim M^{-1}_{\nu_3} \left(\matrix{ 0 & 0 & 0 \cr 0 & 0 & 0 \cr 0 & 0 & B^2}\right)\quad \label{SRNDmLLprimeMTR}$$ We can see from Eq.  that if $\lambda_{23} = \lambda_{33}$ then a maximal $\theta_{23}=45^{\circ}$ angle results. Moreover, if $\lambda_{13} \ll \lambda_{23},\lambda_{33}$ then $\theta_{13}$ is small. Although SRHND, in the limiting case of Eq. , is successful in predicting a maximal atmospheric neutrino angle, it fails to account for a viable neutrino spectrum. Indeed, from Eq. , we see that the two lightest neutrinos are massless $m_{\nu_1} = m_{\nu_2} = 0$. Moreover the solar neutrino angle $\theta_{12}$ is undetermined. These two problems can be solved by allowing the right-handed muon neutrino $\nu^c_\mu$ to play a sub-dominant/perturbative role in the structure of $m_{LL}$ in Eq. . We now turn to the more realistic model in which $M_{RR}$ can be approximated by : [^8] $$M^{-1}_{RR} \sim {\rm diag}(M^{-1}_{\nu_1},M^{-1}_{\nu_2},M^{-1}_{\nu_3}) \sim {\rm diag}(0,M^{-1}_{\nu_2},M^{-1}_{\nu_3}) \label{SRNDmLLMTR2}$$ Substituting Eq.  into Eq. 
we find that : $$m_{LL} \sim M^{-1}_{\nu_3} \left(\matrix{ \lambda_{13}^2 & \lambda_{13}\lambda_{23} & \lambda_{13}\lambda_{33} \cr \lambda_{13}\lambda_{23} & \lambda_{23}^2 & \lambda_{23}\lambda_{33} \cr \lambda_{13}\lambda_{33} & \lambda_{23}\lambda_{33} & \lambda_{33}^2 }\right)+ M^{-1}_{\nu_2} \left(\matrix{ \lambda_{12}^2 & \lambda_{12}\lambda_{22} & \lambda_{12}\lambda_{32} \cr \lambda_{12}\lambda_{22} & \lambda_{22}^2 & \lambda_{22}\lambda_{32} \cr \lambda_{12}\lambda_{32} & \lambda_{22}\lambda_{32} & \lambda_{32}^2 }\right) \label{SRNDmLLM2M3}$$ Given that we assumed SRHND by the $\nu^c_\tau$ neutrino, it follows that the contributions to the 23 block of $m_{LL}$ in Eq.  arising from the terms proportional to $M^{-1}_{\nu_3}$ dominate over the ones proportional to $M^{-1}_{\nu_2}$. [^9] Clearly, the rotations $R_{12}$, $R_{13}$ parameterised by the angles in Eq.  diagonalize $m_{LL}$ in Eq.  up to terms of order ${\cal O}(M^{-1}_{\nu_2})$. Thus the new primed matrices $m^{\prime}_{LL}$ and $m^{\prime\prime}_{LL}$ are given by : $$\begin{aligned} m_{LL}^{\prime} &\sim& M^{-1}_{\nu_3} \left(\matrix{ \lambda_{13}^2 & 0 & \lambda_{13} A \cr 0 & 0 & 0 \cr \lambda_{13} A & 0 & A^2 }\right)+ M^{-1}_{\nu_2} \left(\matrix{ \lambda_{12}^2 & \lambda_{12} {C^2 \over A} & \lambda_{12} {D^2 \over A} \cr \lambda_{12} {C^2 \over A} & {C^4 \over A^2} & {C^2 D^2 \over A^2} \cr \lambda_{12} {D^2 \over A} & {C^2 D^2 \over A^2} & {D^4 \over A^2} }\right) \\ & & \nonumber \\ m_{LL}^{\prime\prime} &\sim& M^{-1}_{\nu_3} \left(\matrix{ 0 & 0 & 0 \cr 0 & 0 & 0 \cr 0 & 0 & B^2 }\right)+ M^{-1}_{\nu_2} \left(\matrix{ {E^6 \over A^2 B^2} & {C^2 E^3 \over A^2 B} & {F^2 E^3 \over A B^2} \cr {C^2 E^3 \over A^2 B} & {C^4 \over A^2} & {C^2 F^2 \over AB} \cr {F^2 E^3 \over A B^2} & {C^2 F^2 \over AB} & {F^4 \over B^2} }\right) \label{SRNDmLLpp}\end{aligned}$$ where $$\begin{aligned} C^2 &=& \lambda_{22}\lambda_{33}-\lambda_{32}\lambda_{23} \\ D^2 &=& \lambda_{33}\lambda_{32}+\lambda_{22}\lambda_{23} \\ E^3 &=& \lambda_{12} (\lambda_{33}^2+\lambda_{23}^2) - \lambda_{13} (\lambda_{33}\lambda_{32}+\lambda_{22}\lambda_{23}) \\ F^2 &=& \lambda_{33}\lambda_{32}+\lambda_{22}\lambda_{23}+\lambda_{12}\lambda_{13}\end{aligned}$$ The diagonalization of the 12 block of $m^{\prime\prime}_{LL}$ in Eq.  is achieved by a $R_{12}$ matrix parameterised by the following $\theta_{12}$ rotation angle : $$s_{12} ={E^3 \over \sqrt{E^6+B^2 C^4}} \qquad c_{12} ={B C^2 \over \sqrt{E^6+B^2 C^4}}$$ Thus we find : $$m^{\prime\prime\prime}_{LL} \sim M^{-1}_{\nu_3} \left(\matrix{ 0 & 0 & 0 \cr 0 & 0 & 0 \cr 0 & 0 & B^2 }\right)+ M^{-1}_{\nu_2} \left(\matrix{ 0 & 0 & 0 \cr 0 & {E^6+B^2 C^4 \over A^2 B^2} & {F^2 \sqrt{E^6+B^2 C^4} \over A B^2} \cr 0 & {F^2 \sqrt{E^6+B^2 C^4} \over A B^2} & {F^4 \over B^2} \label{SRNDmLLppp} }\right)$$ It is interesting to note that the $R_{12}$ rotation has not only diagonalized the 12 block of $m^{\prime\prime}_{LL}$ but also put zeros in the 13,31 entries of $m^{\prime\prime\prime}_{LL}$. The reason is because $m^{\prime\prime}_{LL}$ displays a special structure. 
Indeed, their elements obey : $$t_{12} = {s_{12} \over c_{12}} = {(m^{\prime\prime}_{LL})_{12} \over (m^{\prime\prime}_{LL})_{22}} = {(m^{\prime\prime}_{LL})_{11} \over (m^{\prime\prime}_{LL})_{12}} = {(m^{\prime\prime}_{LL})_{13} \over (m^{\prime\prime}_{LL})_{23}} = {E^3 \over B C^2}$$ Explicitly $t_{12} = \tan\theta_{12}$ is given by : $$t_{12} = {\lambda_{12}(\lambda_{33}^2+\lambda_{23}^2)- \lambda_{13}(\lambda_{33}\lambda_{32}+\lambda_{22}\lambda_{23}) \over (\lambda_{22}\lambda_{33}-\lambda_{32}\lambda_{23}) \sqrt{\lambda_{33}^2+\lambda_{23}^2+\lambda_{13}^2}} \sim {\lambda_{12} \over \lambda_{22}} \label{SRNDt12}$$ From Eq.  we see that, although $t_{12}$ generally depends on the second and third family neutrino Yukawa couplings, if $\lambda_{33}$ is much bigger than the other Yukawa couplings then $t_{12} \sim \lambda_{12} / \lambda_{22}$. This means that the $\theta_{12}$ angle is set not by the dominant neutrino couplings, but by the sub-dominant $\nu^c_\mu$ neutrino couplings to the $\nu_e$ and $\nu_\mu$ neutrinos. Thus while a large atmospheric neutrino mixing angle $\theta_{23}$ can be achieved by requiring $\lambda_{23}\sim\lambda_{33}$, a large MSW solar neutrino angle $\theta_{12}$ results from $\lambda_{12}\sim\lambda_{22}$. Moreover, Eq.  and Eq.  show that bi-maximal $\theta_{23}$, $\theta_{12}$ mixing can be achieved with a small $\theta_{13}$ angle as long as $\lambda_{13} \ll \lambda_{23},\lambda_{33}$. The neutrino mass spectrum can be read from $m^{\prime\prime\prime}_{LL}$ in Eq. . We find a massless neutrino state $m_{\nu_1} = 0$, plus a light state with mass $m_{\nu_2} \sim \lambda^2_{22} / M_{\nu_2}$ and a heavy neutrino with mass $m_{\nu_3} \sim \lambda^2_{33} / M_{\nu_3}$. Here we briefly summarize the parts of the Pati-Salam model [@PaSa] that are relevant for our analysis. For a more complete discussion see Ref. [@AnLe]. The SM fermions, together with the right-handed neutrinos, are conveniently accommodated in the following $F=(4,2,1)$ and $F^c=(\bar 4,1,\bar 2)$ representations : $$\qquad F_A = \left(\matrix{ u & u & u & \nu \cr d & d & d & e \cr} \right)_A \qquad F^c_B = \left(\matrix{ d^c & d^c & d^c & e^c \cr u^c & u^c & u^c & \nu^c \cr} \right)_B \label{FcFields}$$ The MSSM Higgs bosons fields are contained in $h=(1,\bar 2,2)$ : $$h = \left(\matrix{h_d^- & h_u^0 \cr h_d^0 & h_u^+ \cr} \right) \label{SMHiggs}$$ whereas the heavy Higgs bosons $\bar H=(\bar 4,1,\bar 2)$ and $H=(4,1,2)$ are denoted by : $$\bar H = \left(\matrix{ \bar H_d & \bar H_d & \bar H_d & \bar H_e \cr \bar H_u & \bar H_u & \bar H_u & \bar H_\nu \cr}\right) \qquad H = \left(\matrix{ H_d & H_d & H_d & H_e \cr H_u & H_u & H_u & H_\nu \cr}\right). \label{RHiggs}$$ In addition to the Higgs fields in Eqs. , the model also involves an $SU(4)$ sextet field $D=(6,1,1)=(D_3,D^c_3)$. The superpotential of the minimal 422 model is : $$\begin{aligned} {\cal W} &=& F \lambda F^c h + \lambda_h S h h + \nonumber \\ & & \lambda_S S (\bar H H - M^2_H) + \lambda_H H H D + \lambda_{\bar H} {\bar H} {\bar H} D + F^c \lambda' F^c \,{H H \over M_V} \label{W3}\end{aligned}$$ where $S$ denotes a gauge singlet superfield, the $\lambda$’s are real dimensionless parameters and $M_H \sim M_X \sim 10^{16} {\rm\ GeV}$. Additionally, $M_V > M_X$ denotes the mass of extra exotic matter that has been integrated out from the model at high energy. 
As a result of the superpotential terms involving the singlet $S$, the Higgs fields develop VEVs $\langle H \rangle = \langle H_\nu \rangle \sim M_X$ and $\langle \bar H \rangle = \langle \bar H_\nu \rangle \sim M_X$ which lead to the symmetry breaking : $$SU(4) \otimes SU(2)_L \otimes SU(2)_R \to SU(3)_c \otimes SU(2)_L \otimes U(1)_Y. \label{422:321}$$ The singlet $S$ itself also naturally develops a small VEV of the order of the SUSY breaking scale [@KiSh] so that the $\lambda_h S$ term in Eq.  gives an effective $\mu$ parameter of the correct order of magnitude. Under Eq.  the Higgs field $h$ in Eq.  splits into the familiar MSSM doublets $h_u$ and $h_d$ whose neutral components subsequently develop weak scale VEVs $\langle h_u^0 \rangle = v_2$ and $\langle h_d^0 \rangle = v_1$ with $\tan\beta = v_2/v_1$. The neutrino fields $\nu^c$ acquire a large mass $M_{RR} \sim \lambda' \langle HH \rangle / M_V$ through the non-renormalizable term in ${\cal W}$ which, together with the Dirac $\nu^c$ – $\nu$ interaction (proportional to $\lambda \langle h_u^0 \rangle$), gives rise to a 2 $\times$ 2 matrix that generates, via the see-saw mechanism [@seesaw], a suppressed mass for the left-handed neutrino states. The $D$ field does not develop a VEV but the terms $HHD$ and $\bar H \bar H D$ combine the colour triplet parts of $H$, $\bar H$ and $D$ into acceptable GUT scale mass terms [@AnLe]. The pattern of fermion masses and mixing angles is one of the fundamental problems in particle physics that has not yet been understood. The importance of this unsolved puzzle is demonstrated by the numerous works published in the literature over the past years (see Refs. [@MatrixModels1]-[@BaDo] for a “short” list). In the standard model (SM) the quark/lepton masses and the CKM matrix are input parameters fixed by laboratory experiments. Surprisingly, however, their values, though unconstrained and [*a priori*]{} arbitrary, do display a certain degree of organization. The fermion masses are highly hierarchical and the CKM matrix can be described in terms of the small Wolfenstein expansion parameter $\lambda \sim |V_{12}| \sim 0.22$ [@LWolfenstein]. These results suggest that a broken flavour symmetry might be playing an important role in setting the structure of the Yukawa matrices. In this work we will assume that the “vertical” gauge group is supplemented by an additional $U(1)_X$ “horizontal” flavour symmetry that constrains the nature of the couplings of quarks and leptons to SM singlet fields $\theta$ and $\bar\theta$. The family symmetry, however, is broken at some high energy scale $M_\theta > M_X$ by the VEVs of the $\theta$, $\bar\theta$ fields which under the $U(1)_X$ group have charges $X_\theta = -1$ and $X_{\bar\theta}= +1$. As a consequence of the $U(1)_X$ symmetry breaking, the low energy effective theory includes Dirac interactions between the $F$ and $F^c$ fields of the following form : $$F_A F^c_B h \left( {\theta} \over M_V \right)^{p_{AB}} \to F_A F^c_B h \left( \langle\theta\rangle \over M_V \right)^{p_{AB}} \sim F_A F^c_B h \epsilon^{p_{AB}} \label{NonRenTheta}$$ $$F_A F^c_B h \left( {\bar\theta} \over M_V \right)^{p_{AB}} \to F_A F^c_B h \left( \langle\bar\theta\rangle \over M_V \right)^{p_{AB}} \sim F_A F^c_B h \epsilon^{p_{AB}} \label{NonRenThetaBar}$$ where $p_{AB}$ is the modulus of the sum of the $U(1)_X$ charges of the $F_A$, $F^c_B$ and $h$ fields, [*i.e.*]{} $p_{AB} = |X_{AB}| = |X_{F_A}+X_{F^c_B}+X_h|$. Thus Eq.  holds if $X_{AB} > 0$ whereas Eq.  holds if $X_{AB} < 0$. 
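To make the power counting explicit, the following sketch (ours; the charge values are placeholders, chosen only so that the output reproduces the $\lambda^\epsilon$ texture adopted later for the LMA model, and not read off from any table of this paper) evaluates $p_{AB} = |X_{F_A}+X_{F^c_B}+X_h|$ and the corresponding suppression factors $\epsilon^{p_{AB}}$ :

```python
import numpy as np

eps  = 0.22                   # expansion parameter ~ <theta>/M_V
X_F  = [1, 0, 0]              # placeholder U(1)_X charges of F_A   (A = 1, 2, 3)
X_Fc = [4, 2, 0]              # placeholder U(1)_X charges of F^c_B (B = 1, 2, 3)
X_h  = 0                      # charge of the Higgs bi-doublet h

# p_AB = |X_{F_A} + X_{F^c_B} + X_h|; each Yukawa entry is suppressed by eps^p_AB
p = np.array([[abs(XA + XB + X_h) for XB in X_Fc] for XA in X_F])
print(p)                      # matrix of suppression powers
print(eps ** p)               # order-of-magnitude Yukawa texture
```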
The non-renormalizable terms in Eqs. , might originate from interactions between the $F$ and $\theta$ fields with additional exotic vector matter with mass $M_V > M_X$ that lead to “spaghetti” diagrams as discussed in Ref. [@AlKiSpaghetti]. In summary, the equations above show that, in the context of a $U(1)_X$ symmetry, the observed hierarchy in the fermion masses and mixing angles might be the result of the flavour charges carried by the fields of the 422 model which act to suppress the Yukawa couplings by some $\epsilon$-power. The introduction of the $U(1)_X$ symmetry provides a way to relate the various flavour parameters of the model thus making it more predictive. However, one should be careful. Generally the $U(1)_X$ group is potentially dangerous since it can introduce, through triangle diagrams, mixed anomalies with the SM gauge group. [^10] In the last part of this section we review the constraints imposed on the $X$ charges of the fields of our model by the requirement of anomaly cancellation [@JaSh]. The mixed anomalies that we shall consider are : [^11] $$\begin{aligned} \!\!\!\!\!\!\!\! SU(3)^2 U(1)_X & : & A_3 = \sum_{A=1}^3 (2 X_{q_A}+ X_{u^c_A}+ X_{d^c_A}) \label{AnomA3} \\ \!\!\!\!\!\!\!\! SU(2)^2 U(1)_X & : & A_2 = \sum_{A=1}^3 (3 X_{q_A}+ X_{l_A})+ X_{h_u}+ X_{h_d} \label{AnomA2} \\ \!\!\!\!\!\!\!\! U(1)^2_Y U(1)_X & : & A_1 = \sum_{A=1}^3 ({\textstyle {1\over 3}} X_{q_A}+ {\textstyle {8\over 3}} X_{u^c_A}+ {\textstyle {2\over 3}} X_{d^c_A}+ X_{l_A}+ 2 X_{e^c_A})+ X_{h_u}+ X_{h_d} \label{AnomA1} \\ \!\!\!\!\!\!\!\! U(1)_Y U(1)_X^2 & : & A'_1 = \sum_{A=1}^3 ( X_{q_A}^2- 2X_{u^c_A}^2+X_{d^c_A}^2 -X_{l_A}^2+ X_{e^c_A}^2) + X_{h_u}^2- X_{h_d}^2 \label{AnomA1p}\end{aligned}$$ For example, $A_3$ corresponds to the anomalous term generated by the Feynman diagram that has two $SU(3)$ gluons and one $U(1)_X$ gauge boson attached to the triangle vertices. We note that the first three anomalies $A_3$, $A_2$ and $A_1$ are linear in the trace of the charges, [*i.e.*]{} $X_f = \sum_{A=1}^3 X_{f_A}$, where $f$ is any of the $q,u^c,d^c,l,e^c$ fields, thus they constrain only the family independent (FI) part of the $U(1)_X$ charges. On the other hand, $A'_1$ is quadratic in the $X$ charges, thus it generally constrains the FI and family dependent (FD) part of the $U(1)_X$ charges. In this paper we will assume that the cancellation of anomalies results from the Green-Schwarz (GS) mechanism [@GrSc]. This is possible if the $A_3$, $A_2$ and $A_1$ anomalies are in the ratio $A_3:A_2:A_1=k_3:k_2:k_1$ where the $k_i$ are the Kac-Moody levels of the $SU(3)$, $SU(2)$ and $U(1)_Y$ gauge groups that determine the boundary conditions for the gauge couplings at the string scale, $g_3^2 k_3 = g_2^2 k_2 = g_1^2 k_1$. Hence, using the canonical GUT normalization for the gauge couplings (that successfully predicts $\sin^2(\theta_W)=3/8$ [@Ibanez]), anomalies can be cancelled if we require that : $$A_3=A_2={\textstyle {3 \over 5}}A_1 \label{AnomCanc}$$ As a consequence of the two constraints implicit in Eq. , the set of solutions for the $X$ charges appearing in Eqs. 
(\[AnomA3\])-(\[AnomA1\]) is given by [@JaSh] : $$\matrix{ \displaystyle X_{e^c} = \sum_{A=1}^3 X_{e^c_A} = x \hfill & \displaystyle X_{l} = \sum_{A=1}^3 X_{l_A} = y \hfill & \displaystyle X_{h_u} = -z \hfill \cr \displaystyle X_{q\phantom{^c}} = \sum_{A=1}^3 X_{q_A} = x+u \hfill & \displaystyle X_{d^c} = \sum_{A=1}^3 X_{d^c_A} = y+v \hfill & \displaystyle X_{h_d} = +z+(u+v) \hfill \cr \displaystyle X_{u^c} = \sum_{A=1}^3 X_{u^c_A} = x+2u \hfill & & } \label{AnomSol}$$ where $x,y,z,u,v$ are free parameters. However, not all the solutions in Eqs.  are valid after $A'_1=0$ is enforced. In fact, as we said before, generally $A'_1$ constrains both the FI and FD charges of $U(1)_X$. By this we mean that, if we conveniently write the charge of the $f_A$ field $X_{f_A}$ as a sum of a FI part $X_f$ plus a FD part $X'_{f_A}$, [*i.e.*]{} $X_{f_A}= {1 \over 3} X_f+X'_{f_A}$, then $A'_1=0$ is a complicated equation on all $X_f$, $X'_{f_A}$ and $X_{h_u}$, $X_{h_d}$ charges. However, it is easy to see that, if all the left-handed fields and if all the right-handed fields have the same FD charges, [*i.e.*]{} $X'_{q_A}=X'_{l_A}$ and $X'_{u^c_A}=X'_{d^c_A}=X'_{e^c_A}$, as is the case of the 422 model, then $A'_1=0$ is an equation on the FI charges only : $$A'_1 = {\textstyle {2 \over 3}} (X^2_q- 2 X^2_{u^c}+ X^2_{d^c}- X^2_{l}+ X^2_{e^c}+ X^2_{h_u}-X^2_{h_d})=0$$ Thus, a simple solution to all the anomaly constraints is given by Eq.  with $u=v=0$. Finally, we must add that since the Pati-Salam model unifies all the left/right-handed quark and lepton fields in the $F$/$F^c$ multiplets, and the MSSM Higgs fields $h_u$, $h_d$ in the $h$ Higgs bi-doublet, we must also have $x=y$ and $z=0$. Thus, anomaly cancellation in the 422 model via GS mechanism is possible if the traces of the $U(1)_X$ charges of the $F$ and $F^c$ fields are equal, [*i.e.*]{} $X_{F} = \sum_{A=1}^3 X_{F_A} \equiv \sum_{A=1}^3 X_{F^c_A} = X_{F^c}$ (a short numerical check of these anomaly conditions is sketched below). In the simplest formulation of the 422 model extended by a $U(1)_X$ horizontal symmetry all the Yukawa couplings originate from a single matrix. The Abelian $U(1)_X$ symmetry introduced in the previous section mainly serves one purpose, it establishes a hierarchy between the flavour dependent couplings. Thus, it provides no precise/predictive information about the relationships between the different Yukawa coupling matrices. As a result, all the SM fermions of a given family have identical Yukawa couplings at the unification scale. Naturally, when the fermion masses are run from the $M_X$ to the $M_Z$ scale they lead to quark and lepton masses that are incompatible with the experimental data. The idea of Yukawa unification, though unsuccessful in its simplest form, is not, however, a complete failure. As a matter of fact, it turns out that third family Yukawa unification works rather well. It is well known that the GUT boundary condition for the Yukawa couplings : $$\lambda_t(M_X)=\lambda_b(M_X)=\lambda_\tau(M_X)=\lambda_{\nu_\tau}(M_X)$$ leads to a large pole top mass prediction $M_t \sim 175$ GeV and $\tan\beta\sim m_t/m_b$. On the other hand, the first and second family fermion masses can be predicted if special relations between the “vertical” intra-generation Yukawa couplings at $M_X$ hold. For example, the Georgi-Jarlskog (GJ) [@GeJa] relation between the muon and strange Yukawa couplings $\lambda_\mu \sim 3 \lambda_s$ successfully reproduces the low energy experimental $m_s/m_\mu \sim 1$ mass ratio. 
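Returning to the anomaly constraints, the charge assignment of Eq. (\[AnomSol\]) can be checked mechanically. The sketch below is ours, with arbitrary values for the free parameters $x,y,z,u,v$; it evaluates the traces of Eqs. (\[AnomA3\])-(\[AnomA1\]) and confirms that $A_2 = A_3$ with $A_1 : A_2 : A_3 = 5/3 : 1 : 1$, the Green-Schwarz ratio of Eq. (\[AnomCanc\]) :

```python
from fractions import Fraction as Fr

# free parameters of Eq. (AnomSol); arbitrary illustrative values
x, y, z, u, v = Fr(1), Fr(-2), Fr(3), Fr(1, 2), Fr(-1)

# family-summed (trace) charges of Eq. (AnomSol)
X_ec, X_l, X_q = x, y, x + u
X_dc, X_uc     = y + v, x + 2*u
X_hu, X_hd     = -z, z + (u + v)

A3 = 2*X_q + X_uc + X_dc
A2 = 3*X_q + X_l + X_hu + X_hd
A1 = Fr(1, 3)*X_q + Fr(8, 3)*X_uc + Fr(2, 3)*X_dc + X_l + 2*X_ec + X_hu + X_hd

print(A3, A2, A1)
assert A2 == A3 and 3*A1 == 5*A3    # A1 : A2 : A3 = 5/3 : 1 : 1 for any x, y, z, u, v
```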
In the context of GUT theories the appearance of numerical factors relating the couplings of the up-down-lepton Yukawa matrices might originate from non-renormalizable operators involving the interaction between the fermions and the heavy Higgs that break the GUT symmetry [@FrNi; @AnRaDiHaSt]. In the Pati-Salam model, we will have in mind operators of the following form [@AlKiLeLo1] : $$F_A F^c_B h \left( H \bar H \over M_V^2 \right)^{n} \left(\theta \over M_V \right)^{p_{AB}} \quad{\rm\ and}\quad F_A F^c_B h \left( H \bar H \over M_V^2 \right)^{n} \left(\bar\theta \over M_V \right)^{p_{AB}} \label{SRNDopRL}$$ The idea is that when the $H$ and $\theta$ fields develop their VEVs such operators reduce to effective Yukawa couplings with small coefficients. For example, if $F_2$, $F^c_2$ and $h$ carry a charge $X_{F_2} = 0$, $X_{F^c_2} = 2$ and $X_h = 0$ under the $U(1)_X$ symmetry then Eq.  (with $n=1$) generates the following terms : $$(x_u u_2 u^c_2 h^0_u+x_d d_2 d^c_2 h^0_d+ x_e e_2 e^c_2 h^0_d+x_\nu \nu_2 \nu^c_2 h^0_u)\delta\epsilon^2$$ where $\delta=\langle H \rangle\langle\bar H \rangle/M_V^2$ and $\epsilon=\langle\theta\rangle / M_V$ are small dimensionless parameters, $u_2$, $d_2$, $e_2$, $\nu_2$ are the charm, strange, muon, muon neutrino superfields, and $x_f$ ($f=u,d,e,\nu$) are Clebsch factors that depend on the group theoretical contractions between the fields in Eq.  [@SFking2; @AlKi3]. In Table  (Appendix A) we present a complete list of all $x_f$ values that result from $n=1$ operators in the 422 model [@AlKiLeLo1] normalized by : $$x_u^2+x_d^2+x_e^2+x_\nu^2 = 4$$ It is interesting to point out that different operators imply zero Clebsches for different $x_f$’s. For example, CLASS-I operators are rather special since of all $x_f$’s only one is non-zero (and significantly large). The CLASS-II operators have $x_u=x_\nu=0$ while CLASS-III have $x_d=x_e=0$. Additionally CLASS-IV operators have $x_u=x_d=0$ and CLASS-V have $x_e=x_\nu=0$. Finally CLASS-VI operators have all $x_f$’s different from zero. The variety of the operator Clebsches is to be welcome since, as we will see, they open the possibility of avoiding the disastrous fermion mass predictions characteristic of the minimal 422 model with a unified renormalizable interaction. Finally we shall mention the origin of the heavy Majorana neutrino mass matrix. Generally $M_{RR}$ results from non-renormalizable operators of the form : $$F^c_A F^c_B \left( H H \over M_V^2 \right) \left( H \bar H \over M_V^2 \right)^n \left(\theta \over M_V \right)^{q_{AB}} \to \nu^c_A \nu^c_B \delta^{n+1}\epsilon^{q_{AB}} \label{SRNDopRR}$$ where $q_{AB}=|X_{F^c_A}+X_{F^c_B}+\sigma|$ and $\sigma = 2 X_H$. Three important differences distinguish Eq.  from Eq. . Firstly we note that while Eq.  allows for renormalizable operators, $M_{RR}$ as given by Eq.  is always the result of non-renormalizable operators. Secondly we note that the combination of the $HH$ fields in Eq.  introduces an additional free parameter $\sigma$ that may be fixed at our convenience. Thirdly we observe that while Eq.  is able to generate precise relationships between the up-down-lepton Yukawa couplings (via Clebsch factors), Eq.  is an expression that constrains only the hierarchy of $M_{RR}$ (via the $U(1)_X$ symmetry), as a result it is less predictive. In this section we show how the $U(1)_X$ horizontal family symmetry of section V can be combined with the operator approach of section VI to give predictions for the fermion masses and mixing angles in the 422 model. 
In particular we are interested in the predictions for the neutrino masses and mixing angles for the LMA MSW solution to the solar neutrino problem. The LOW and the SMA MSW solutions are discussed in Appendices C and D. We start by listing the quark and charged lepton experimental data used in our analysis : [^12] $$\begin{aligned} m_u({\rm 1\ GeV}) &=& 4.7 {\rm\ MeV} \qquad \hfill (1.35-6.75) {\rm\ MeV} \label{SRNDdatamu} \\ m_c(M_c) &=& 1.21 {\rm\ GeV} \qquad \hfill (1.15-1.35) {\rm\ GeV} \\ M_t & \sim & 175 {\rm\ GeV} \qquad (170-180) {\rm\ GeV}\end{aligned}$$ $$\begin{aligned} m_d({\rm 1\ GeV}) & \sim & 6.0 {\rm\ MeV} \qquad \hfill (4-12) {\rm\ MeV} \\ m_s(M_s) & \sim & 160 {\rm\ MeV} \qquad \hfill (100-230) {\rm\ MeV} \\ m_b(M_b) &=& 4.15 {\rm\ GeV} \qquad \hfill (4.0-4.4) {\rm\ GeV}\end{aligned}$$ $$\begin{aligned} M_e &=& 0.511 {\rm\ MeV} \qquad m_e(M_e) = 0.496 {\rm\ MeV} \\ M_\mu &=& 105.7 {\rm\ MeV} \qquad m_\mu(M_\mu) = 104.6 {\rm\ MeV} \\ M_\tau &=& 1777.0 {\rm\ MeV} \qquad m_\tau(M_\tau) = 1772.8 {\rm\ MeV} \label{SRNDdatamtau}\end{aligned}$$ where $m_u({\rm 1\ GeV})$, $m_d({\rm 1\ GeV})$ denote the running masses of the up and down quarks at $Q=1{\rm\ GeV}$; [^13] $m_c(M_c)$, $m_s(M_c)$, $m_b(M_b)$ the running masses of the charm, strange and bottom quarks at their pole masses ($M_c = 1.6 {\rm\ GeV}$, $M_b = 4.8 {\rm\ GeV}$); $M_t$ the top pole mass; and $M_{e,\mu,\tau}$ ($m_{e,\mu,\tau}$) the well known pole (running) charged lepton masses. We converted the above pole masses to running masses using the expressions in Ref. [@ArCaKeMiPiRaWr] with : $$\alpha_s(M_Z) = 0.120 \qquad\qquad \alpha_e^{-1}(M_Z) = 127.8$$ Finally the CKM matrix at $Q=M_Z$ was fixed by : [^14] $$|V_{12}| = 0.2215 \qquad |V_{23}|= 0.040 \qquad |V_{13}| = 0.0035 \label{SRNDdataCKM}$$ It is important to note that, in fact, not all the parameters above were taken as an input. Indeed, $M_t \sim 175 {\rm\ GeV}$ is a prediction that results from third family Yukawa unification $\lambda_t = \lambda_b = \lambda_\tau$ at the GUT scale. Moreover, as we will see, our model is also able to predict the masses of the down and charm quarks, thus their values listed above should be taken merely as a guide and/or convenient initial estimates. We now turn to the neutrino experimental data. The results from the Super-Kamiokande collaboration [@SKamiokandeColl; @HSobel] indicate that the atmospheric neutrino anomaly can be understood in terms of $\nu_\mu\leftrightarrow\nu_\tau$ oscillations with : $$\sin^2(2\theta_{23}) > 0.88 \qquad\qquad 1.5 \times 10^{-3}{\rm\ eV}^2 < \Delta m^2_{23} < 5\times 10^{-3} {\rm\ eV}^2 \label{SRNDSKdata}$$ at 90 % confidence level. On the other hand, the large mixing angle (LMA) MSW solution to the solar neutrino deficit suggests that [@BaKrSm] : $$\sin^2(2\theta_{12}) \sim 0.75 \qquad\qquad \Delta m^2_{12} \sim 2.5 \times 10^{-5} {\rm\ eV}^2 \label{SRNDLMAMSWdata}$$ Assuming that the neutrino spectrum is hierarchical, [*i.e.*]{} $\Delta m_{23}^2 = |m^2_{\nu_3}-m^2_{\nu_2}| \sim m^2_{\nu_3}$ and $\Delta m_{12}^2 = |m^2_{\nu_2}-m^2_{\nu_1}| \sim m^2_{\nu_2}$ the values in Eqs. , give : $$\sin(\theta_{23}) > 0.57 \quad m_{\nu_3} \sim 0.05 {\rm\ eV} \qquad\qquad \sin(\theta_{12}) \sim 0.50 \quad m_{\nu_2} \sim 0.005 {\rm\ eV} \label{SRNDneutrinoDATA}$$ The latest results from the CHOOZ experiment also show that, over the interesting $\Delta m^2_{23}$ range suggested by the Super-Kamiokande data, $\sin^2(2\theta_{13})<0.10$ at 90% CL [@CHOOZ]. The experimental data in Eqs. 
- constrains the parameters of our model at low energy. However, the GUT symmetry is broken at an energy $M_X \sim 10^{16} {\rm\ GeV}$. Thus, before we start our analysis we should correct the fermion masses and mixing angles for the radiative corrections that result from the running of the RGEs between the $Q=M_Z$ and $Q=M_X$ scales. The implementation of the RGEs, decoupling of SUSY particles and boundary conditions is a complicated subject whose detailed description is beyond the scope of this work. Here we will only mention that we used 2-loop RGEs in the gauge and Yukawa couplings and refer the interested reader to Ref. [@KiOl2] where the issue of Yukawa unification in the 422 model is discussed. As a result of the RGEs running, subjected to third family Yukawa unification at $M_X$, the low energy input values for the fermion masses in Eqs. - effectively constraint the eigenvalues of the Yukawa couplings at $Q=M_X$ to be : $$\lambda_u(M_X) = 4.738\times 10^{-6} \qquad \lambda_c(M_X) = 1.529\times 10^{-3} \qquad \lambda_t(M_X) = 0.677 \label{SRNDdataMXUP}$$ $$\lambda_d(M_X) \sim 3.208\times 10^{-4} \qquad \lambda_s(M_X) \sim 9.612\times 10^{-3} \qquad \lambda_b(M_X) = 0.677 \label{SRNDdataMXDOWN}$$ $$\lambda_e(M_X) = 1.490\times 10^{-4} \qquad \lambda_\mu(M_X) = 3.154\times 10^{-2} \qquad \lambda_\tau(M_X) = 0.677 \label{SRNDdataMXLEP}$$ and the CKM matrix at $Q=M_X$ to be : $$|V_{12}(M_X)| = 0.2215 \qquad |V_{23}(M_X)|= 0.032 \qquad |V_{13}(M_X)| = 0.0028 \label{SRNDdataMXCKM}$$ At this point it is convenient to re-write Eqs. - in terms of the Wolfenstein [@LWolfenstein] expansion parameter $\lambda=0.22\sim|V_{12}|$. We find : $$\lambda_u(M_X) = \lambda^{8.097} \qquad \lambda_c(M_X) = \lambda^{4.282} \qquad \lambda_t(M_X) = \lambda^{0.257} \label{SRNDUPdata}$$ $$\lambda_d(M_X) \sim \lambda^{5.313} \qquad \lambda_s(M_X) \sim \lambda^{3.068} \qquad \lambda_b(M_X) = \lambda^{0.257}$$ $$\lambda_e(M_X) = \lambda^{5.820} \qquad \lambda_\mu(M_X) = \lambda^{2.283} \qquad \lambda_\tau(M_X) = \lambda^{0.257}$$ $$|V_{12}(M_X)| = \lambda^{0.996} \qquad |V_{23}(M_X)| = \lambda^{2.273} \qquad |V_{13}(M_X)| = \lambda^{3.882} \label{SRNDCKMdata}$$ Equations - neatly summarize the hierarchy of the quark and charged lepton sectors at $M_X$ that we aim to reproduce/predict. It is now time to specify the structure of the LMA model in more detail. We start by indicating the nature of the (non-)renormalizable operators responsible for the structure of the Dirac and neutrino Majorana matrices : $$\begin{aligned} \lambda_{AB} &:& F_3 F^c_3 h + F_A F^c_B h {H \bar H \over M^2_V} \left[ 1+ \left(H \bar H \over M^2_V \right)+ \left(H \bar H \over M^2_V \right)^2+\ldots \right] \left(\theta \over M_V\right)^{p_{AB}} \label{SRNDMRL} \\ (M_{RR})_{AB} &:& \left\{ F^c_3 F^c_3+ F^c_A F^c_B \left[ {H \bar H \over M^2_V}+\ldots \right] \right\} {H H \over M^2_V} \left(\theta \over M_V\right)^{q_{AB}} \label{SRNDMRR}\end{aligned}$$ The first term in Eq.  is renormalizable, thus it implies third family Yukawa unification at $M_X$. The second term, which we shall assume to be present for $AB\ne 33$, on the other hand, is a sum of non-renormalizable operators. For the sake of simplicity we will consider that the $H\bar H/M_V^2$ part of $\lambda_{AB}$ that lies outside the square brackets in Eq.  has non-trivial gauge contractions with the $F_A F^c_B h$ fields next to it, thereby generating the Clebsch factors in Table  (Appendix A). 
On the other hand, the $(H\bar H/M_V^2)^{1,2}$ factors inside the square brackets will form gauge singlet terms that will be responsible for the appearance of higher $\delta$ powers in the entries of $\lambda_{AB}$. The $M_{RR}$ matrix, as given by Eq. , depends only on non-renormalizable operators because gauge invariance demands that every combination of $F^c F^c$ fields must be paired with at least a couple of $HH$ fields. However, we will assume that the only $n=1$ operator in $M_{RR}$ is placed on the 33 entry. All other entries of $M_{RR}$ result from $n=2$ operators. [^15] We can see from Eqs. , that the structure of the Yukawa and Majorana matrices can be decomposed into a “vertical” $\delta$-component and a “horizontal” $\epsilon$-component. Thus we write : $$(\lambda_f)_{AB} \sim (\lambda^\delta)_{AB} (\lambda^\epsilon)_{AB} \qquad\qquad (M_{RR})_{AB} \sim (M_{RR}^\delta)_{AB} (M_{RR}^\epsilon)_{AB} \label{YukMrrLMA}$$ The hierarchies of $\lambda^\epsilon$ and $M_{RR}^\epsilon$ are fixed by the choice of the $U(1)_X$ charges. Using the results of Ref. [@SFKing1], we can write the most general form of the unified $(\lambda^\epsilon)_{AB}$ matrix in the 422 model, constrained by the absence of anomalies, in terms of only four independent parameters $\bar X_{F_1}$, $\bar X_{F_2}$, $\bar X_{F^c_1}$ and $\bar X_{F^c_2}$ : [^16] $$\lambda^\epsilon = \left(\matrix{ \epsilon^{|\bar X_{F_1}+\bar X_{F^c_1}|} & \epsilon^{|\bar X_{F_1}+\bar X_{F^c_2}|} & \epsilon^{|\bar X_{F_1}|} \cr \epsilon^{|\bar X_{F_2}+\bar X_{F^c_1}|} & \epsilon^{|\bar X_{F_2}+\bar X_{F^c_2}|} & \epsilon^{|\bar X_{F_2}|} \cr \epsilon^{|\bar X_{F^c_1}|} & \epsilon^{|\bar X_{F^c_2}|} & 1 \cr}\right) \label{lambdaepsilonLMA}$$ From the equation above it is easy to see that the values of $\bar X_{F_2}$, $\bar X_{F^c_2}$, $\bar X_{F_1}$ and $\bar X_{F^c_1}$ are closely related with the large neutrino $\theta_{23}$ angle, the second family Yukawa couplings, the $V_{12}$ CKM angle, and the masses of the lightest fermions respectively. In the first row of Table   we list our choices for the $\bar X$’s parameters which we will, from now on, refer to as $U(1)_{\bar X}$ charges. In the second row we indicate the values of the physical (anomaly free) $U(1)_X=U(1)_{FD}+U(1)_{FI}$ charges of the fields of our model. In the third and forth row we list the values of the family dependent (traceless) and family independent (unphysical) charges that sum up to give $U(1)_X$. We note that the $U(1)_{\bar X}$ and $U(1)_X$ charges are “equivalent” in the sense that they determine equal family structures for the Yukawa and neutrino Majorana matrices.  [^17] The charges in Table  fix the $\epsilon$-structure of $\lambda^\epsilon$ and $M_{RR}^\epsilon$ to be : $$\lambda^\epsilon\sim\left(\matrix{ \epsilon^5 & \epsilon^3 & \epsilon \cr \epsilon^4 & \epsilon^2 & 1 \cr \epsilon^4 & \epsilon^2 & 1 \cr }\right) \qquad\qquad M_{RR}^\epsilon\sim\left(\matrix{ \epsilon^8 & \epsilon^6 & \epsilon^4 \cr \epsilon^6 & \epsilon^4 & \epsilon^2 \cr \epsilon^4 & \epsilon^2 & 1\cr}\right) \label{SRNDmRLmRReps}$$ Comparing $\lambda^\epsilon$ above with the hierarchy of the Yukawa couplings listed in Eqs. - we see that, the $U(1)_X$ symmetry (by itself) can not explain the pattern of all fermion masses and mixing angles. For example, although the symmetry allows a large 23 entry suitable for generating large 23 mixing from the neutrino Yukawa matrix, it also allows similarly large 23 entries in the charged lepton and quark Yukawa matrices which are not welcome. 
In order to overcome this we shall assume that although a renormalizable operator in the 23 position is allowed by the $U(1)_X$ symmetry, it is forbidden by some unspecified string symmetry which however allows a 23 operator containing one factor of $(H\bar{H})$. We shall further select a 23 operator which will involve a Clebsch factor of zero for the charged lepton and quark entries, with only its neutrino component having a non-zero contribution, thereby generating a large 23 mixing from the neutrino sector, with only small 23 mixing in the charged lepton and quark sectors arising from operators containing higher powers of $(H\bar{H})^n$, with $n>1$. The existence of such operators with “Clebsch zeros” is clearly crucial for the success of our approach. In general, we shall show that by a suitable choice of non-renormalizable operators, which determine the $\lambda^\delta$ “vertical” structure of $\lambda_f$, we can obtain a successful description of all quark and lepton masses and mixing angles. For example, let us consider $\lambda^\delta$ and $M^\delta_{RR}$ given by the following operator matrices : $$\lambda^\delta \sim \left(\matrix{ {\cal O}^R+{\cal O}^{\prime\prime V} & {\cal O}^J+{\cal O}^{\prime Q} & {\cal O}^g+{\cal O}^{\prime f} \cr {\cal O}^G+{\cal O}^{\prime\prime K} & {\cal O}^W+{\cal O}^{\prime H} & {\cal O}^I+{\cal O}^{\prime W} \cr {\cal O}^R+{\cal O}^{\prime\prime V} & {\cal O}^M+{\cal O}^{\prime K} & 1 }\right) \qquad M_{RR}^\delta \sim \left(\matrix{ {\cal O} & {\cal O} & {\cal O} \cr {\cal O} & {\cal O} & {\cal O} \cr {\cal O} & {\cal O} & 1 }\right) \label{SRNDmRLopMtr}$$ where ${\cal O}$, ${\cal O}^{\prime}$ and ${\cal O}^{\prime\prime}$ are $n=1$, $n=2$ and (very small) $n=3$ operators respectively  [^18] where $n$ is defined in Eq.  and refers to the powers of $(H\bar{H})^n$. Using Eqs. , into Eq.  gives : $$\begin{aligned} \lambda_f(M_X) &=& \left(\matrix{ \hfil x^R_f a_{11} \delta\epsilon^5 & \hfil x^J_f a_{12} \delta\epsilon^3 & \hfil x^g_f a_{13} \delta\epsilon^2 \cr \hfil x^G_f a_{21} \delta\epsilon^4 & \hfil x^W_f a_{22} \delta\epsilon^2 & \hfil x^I_f a_{23} \delta\phantom{\epsilon^2} \cr \hfil x^R_f a_{31} \delta\epsilon^4 & \hfil x^M_f a_{32} \delta\epsilon^2 & \hfil a_{33}\phantom{\epsilon^2} \cr }\right)+ \nonumber \\ & & \cr & & \left(\matrix{ 0 & x^Q_f a^\prime_{12} \delta^2\epsilon^3 & x^f_f a^\prime_{13} \delta^2\epsilon^2 \cr 0 & x^H_f a^\prime_{22} \delta^2\epsilon^2 & \hfil x^W_f a^\prime_{23} \delta^2\phantom{\epsilon^2} \cr 0 & x^K_f a^\prime_{32} \delta^2\epsilon^2 & 0 }\right)+ \nonumber \\ & & \cr & & \left(\matrix{ x^V_f a^{\prime\prime}_{11} \delta^3\epsilon^5 & 0 & 0 \cr x^K_f a^{\prime\prime}_{21} \delta^3\epsilon^4 & 0 & 0 \cr x^V_f a^{\prime\prime}_{23} \delta^3\epsilon^4 & 0 & 0 }\right) \label{lfMXLMAan} \\ & & \cr {\displaystyle {M_{RR}(M_X)\phantom{_{33}} \over M_{RR}(M_X)_{33}}} &=& \left(\matrix{ A_{11} \delta\epsilon^8 & A_{12} \delta\epsilon^6 & A_{13} \delta\epsilon^4 \cr A_{21} \delta\epsilon^6 & A_{22} \delta\epsilon^4 & A_{23} \delta\epsilon^2 \cr A_{31} \delta\epsilon^4 & A_{32} \delta\epsilon^2 & A_{33} \phantom{\delta\epsilon^2} \cr }\right) \label{MrrMXLMAan}\end{aligned}$$ where the subscript $f$ stands for any of the $u,d,e,\nu$ indices, $x^{\cal O}_f$ is the Clebsch of the ${\cal O}$ operator of the $f$-type fermion, and the $a$’s ($A$’s) are order-one $f$-independent Yukawa (Majorana) parameters that parameterise $\lambda_f$ ($M_{RR}$). 
The first matrix on the right-hand side of Eq. (\[lfMXLMAan\]) contains the leading $n=1$ operators giving contributions of order $\delta$, while the second and third matrices contain the $n=2$ and $n=3$ operators which give contributions of order $\delta^2$ and $\delta^3$ and provide the leading contributions in the cases where the $n=1$ operators involve Clebsch zeros. The effective matrices resulting from Eqs. , are approximately given in Table . This table shows an interesting structure for the Yukawa matrices. We find that $\lambda_u\sim\lambda^8$, $\lambda_c\sim\lambda^4$ and $\lambda_d\sim\lambda_e\sim\lambda^6$, $\lambda_s\sim\lambda_\mu\sim\lambda^3$. Furthermore the CKM matrix has $|V_{12}|\sim\lambda$, $|V_{23}|\sim\lambda^2$, $|V_{13}|\sim\lambda^3$. Comparing these approximate results with the data in Eqs. - we see that only the $\lambda_d$, $\lambda_\mu$ couplings need substantial (one $\lambda$-power) corrections. On the other hand, the neutrino sector described by $\lambda_\nu$ and $M_{RR}$ in Table is clearly dominated by the right-handed tau neutrino and predicts $(\lambda_{\nu})_{12}\sim(\lambda_\nu)_{22}$ which according to Eq.  successfully generates a large $\theta_{12}$ solar neutrino angle. However, the subdominant perturbation to $m_{LL}$ in Eq.  resulting from $\lambda_\nu$ and $M_{RR}$ in Table is too small to correctly predict the neutrino mass ratio $m_{\nu_2}/m_{\nu_3}\sim\lambda^{1.5}$ required by Eq. . These approximate predictions can be further improved because Table  does not include the numerical effects of the operator Clebsches and of the order-one $a$, $A$ factors. The success of our model (in the SM sector) depends on the ability to find suitable solutions for the $a$’s in Eq.  which simultaneously can account for all the hierarchies in Eqs. -. Generally we will require that $0.5 < |a_{AB}|, |a^\prime_{AB}|, |a^{\prime\prime}_{AB}| < 2.0$ for all $A,B=1,2,3$. At first sight, it looks as if such a solution is trivial, since Eq.  depends on 16 parameters, [^19] while Eqs. - form a set of 9 constraints (on the first and second family Yukawa couplings and CKM entries). However, we should not forget that ${\cal O} \gg {\cal O}^\prime \gg {\cal O}^{\prime\prime}$ and that the CKM matrix constrains only the 12, 13 and 23 entries of $\lambda_f$. As a consequence, we find that the parameters in Eqs. - are mainly sensitive to $a_{22}\leftrightarrow\lambda_{s,\mu}$, $a_{11}\leftrightarrow\lambda_{d,e}$, an independent combination of $(a^{\prime\prime}_{11},a^{\prime\prime}_{21}, a^{\prime\prime}_{31})\leftrightarrow\lambda_u$, $a'_{22}\leftrightarrow\lambda_c$ and $a'_{12}\leftrightarrow V_{12}$, $a'_{23}\leftrightarrow V_{23}$, $a'_{13}\leftrightarrow V_{13}$, which allows two predictions to be made, namely $\lambda_d$ and $\lambda_s$. Thus we fitted the $a_{22}$,$a_{11}$,$a^{\prime\prime}_{11}$, $a'_{22}$,$a'_{12}$,$a'_{23}$,$a'_{13}$ dependence of $\lambda_f$ [^20] to the $\lambda_\mu$, $\lambda_e$, $\lambda_u$, $\lambda_c$ and $V_{12}$, $V_{23}$, $V_{13}$ experimental constraints in Eqs. -. The results are shown in Table . [^21] Thus, using Eq.  with the $a$’s of Table  and the Clebsch factors of Table  (Appendix A) we get the numerical results for $\lambda_f(M_X)$ shown in Table . In Table  we present the results of Table  expanded in powers of $\delta=\epsilon=\lambda=0.22$. We can analyse the effect of the operator Clebsches by comparing Table  against Table . 
We see that the ${\cal O}^W$ operator [^22] in the 22 entry of $\lambda_f$ split $(\lambda_d)_{22}=(\lambda_e)_{22}\sim\lambda^3$ in Table  into $|(\lambda_d)_{22}|=\lambda^{3.124}$ and $(\lambda_e)_{22}=\lambda^{2.313}$ in Table , thus allowing for a proper GJ ratio $\lambda_\mu / \lambda_s\sim 3$ at $M_X$. Similarly, the operators in the 12 block have allowed for a more appropriate $\lambda_d / \lambda_e$ Yukawa ratio. Numerically we have the following predictions for the lightest eigenvalues of the down-Yukawa matrix at $M_X$ : $\lambda_d=\lambda^{5.469}$ and $\lambda_s=\lambda^{3.087}$. The effect of the Clebsch factors also modified the neutrino Yukawa matrix $\lambda_\nu(M_X)$ in Table . Due to the ${\cal O}^I$ operator in the 23 position of $\lambda_f$, which has a large $x^I_\nu=2$ Clebsch, we are now able to predict a large $\theta_{23}$ atmospheric neutrino mixing angle. Indeed using Eq.  we can roughly estimate that $\tan\theta_{23}\sim 0.44/0.68=0.65$ implying $\sin^2(2\theta_{23}) \sim 0.83$. It is interesting to check that $\lambda_\nu$ and $M_{RR}$ in Table  do lead to a $m_{LL}$ matrix dominated by the right-handed tau neutrino. As a result of the small mixing angles of $M_{RR}$ it is convenient to work in a basis where $M_{RR}$ is diagonal. Furthermore it is practical to scale $\lambda_\nu$ [^23] such that the 23 and 33 entries are $(\lambda_\nu)_{23}\sim(\lambda_\nu)_{33}\sim 1$, and approximate the normalized entries to (semi-)integer $\lambda$-powers. Thus using : $$M_{RR}\sim\left(\matrix{ \lambda^{9.0} & 0 & 0 \cr 0 & \lambda^{5.0} & 0 \cr 0 & 0 & 1 }\right) \qquad m_{RL}\sim\left(\matrix{ \lambda^{7.5} & \lambda^{3.5} & \lambda^{1.5} \cr \lambda^{6.5} & \lambda^{3.5} & 1 \cr \lambda^{6.5} & \lambda^{3.5} & 1 }\right)$$ into Eq.  gives : $$m_{LL} \sim \left(\matrix{ \lambda^{3.0}+\lambda^{2.0}+\lambda^{6.0} & \lambda^{1.5}+\lambda^{2.0}+\lambda^{5.0} & \lambda^{1.5}+\lambda^{2.0}+\lambda^{5.0} \cr \lambda^{1.5}+\lambda^{2.0}+\lambda^{5.0} & \hfill 1+\lambda^{2.0}+\lambda^{4.0} & \hfill 1+\lambda^{2.0}+\lambda^{4.0} \cr \lambda^{1.5}+\lambda^{2.0}+\lambda^{5.0} & \hfill 1+\lambda^{2.0}+\lambda^{4.0} & \hfill 1+\lambda^{2.0}+\lambda^{4.0} }\right) \label{mLLGUTapr}$$ where the first (second and third) term in each entry corresponds to the third (second and first) family neutrino contribution $\nu^c_\tau$ ($\nu^c_\mu$ and $\nu^c_e$) coming from $M_{RR}$. Clearly Eq.  shows that even though, in this case, $\nu^c_\tau$ is the heaviest right-handed neutrino, it nevertheless dominates the 23 block, and that the sub-dominant contribution from $\nu^c_\mu$ induces $\lambda^2$ perturbations in $m_{LL}$ that are compatible with the $m_{\nu_2} / m_{\nu_3}$ mass ratio. Using the MSSM RGEs adapted and properly extended to take into account the presence and successive decoupling of the right-handed neutrinos between the $Q=M_X$ and $Q=M_Z$ scales (see Appendix B) we find that the Yukawa and neutrino Majorana matrices at low energy are given by Table . Thus, inserting the results of Table  into Eq.  and Eq.  we get the mass matrix for the left-handed neutrinos $m_{LL}$ and the $V^{MNS}$ mixing matrix shown in Table . The predictions for the neutrino masses and squared mass splittings are shown in Table . In Table  we examine how the neutrino mixing angles evolve between the unification $Q=M_X\sim 3\times 10^{16} {\rm\ GeV}$, the right-handed tau neutrino mass $Q=M_{\nu_3}\sim 3\times 10^{14} {\rm\ GeV}$ and the $M_Z$ scale. 
We see that the effect of the radiative corrections has increased the magnitude of $\sin\theta_{12}$, $\sin\theta_{23}$ and $\sin\theta_{13}$ by 2.5%, 6.4% and 2.4% respectively. These corrections agree with the results found in Ref. [@KiSi]. Finally, we present in Table  the predictions for the down and strange quark masses. We would like to conclude this section by noting that the predictions for the neutrino parameters, in particular for the neutrino $\Delta m_{12}^2$ squared mass splitting, should be taken carefully. Generally we expect at least 20% (theoretical) errors in the quoted values which, for example, arise from our inability to fix order-one factors in the entries of $M_{RR}(M_X)$. We have discussed a theory of all fermion masses and mixing angles based on a particular string-inspired [*minimal*]{} model based on the Pati-Salam group $SU(4)\times SU(2)_L \times SU(2)_R$ [@PaSa] supplemented by a gauged $U(1)_X$ family symmetry. We argued that this gauge group preserves the attractive features of $SO(10)$ such as predicting three right-handed neutrinos, and Yukawa unification, while avoiding the doublet-triplet splitting problem. Although it is not a unified gauge group at the field theory level, it naturally arises from string constructions and so in principle may be fully unified with gravity. Earlier work in collaboration with one of us [@AlKiLeLo1] had already shown that the model can provide a successful description of the charged fermion masses and the CKM matrix. The use of the $U(1)_X$ family symmetry to provide the horizontal mass splittings combined with the Clebsch factors arising from the $(H\bar{H})^n$ insertion in the operators has already been shown to provide a powerful approach to the fermion mass spectrum in this model [@AlKiLeLo1]. The present analysis differs from that presented previously partly due to the recent refinements in third family Yukawa unification [@KiOl2], but mainly due to the recent data from Super-Kamiokande which implies that the 23 operator should be allowed by the $U(1)_X$ family symmetry. We have therefore extended our previous analysis to the atmospheric and solar neutrino masses and mixing angles, and showed that all three MSW solutions to the solar neutrino data may be accommodated, namely the LMA MSW region discussed in the main text as well as the LOW MSW and the SMA MSW regions discussed in the Appendices. The approach to neutrino masses and mixing angles followed here makes use of the SRHND mechanism [@KingSRND; @SFKing1; @SFKing2] in which one of the right-handed neutrinos (the $\nu_{\tau}^c$) gives the dominant contribution to the 23 block of the light effective Majorana matrix. This mechanism avoids reliance on accidental cancellations, and does not rely on excessive magnification of mixing angles, although a mild enhancement was observed in the numerical results in agreement with that observed in [@KiSi]. Crucial to the implementation of SRHND in this model is the assumption that the renormalizable 23 operator is forbidden by unspecified string selection rules, and the leading 23 operator contains $(H\bar{H})$ and involves “Clebsch zeros”, which give a zero contribution to the charged lepton and quark Yukawa matrices, but a non-zero contribution to the neutrino Yukawa matrix, thereby allowing small $V_{cb}$ but large 23 mixing in the lepton sector. The analysis in this paper is essentially “bottom-up”. 
A particular choice of $U(1)_X$ family symmetry charges was used to give the horizontal mass splittings, and the vertical mass splittings were achieved by particular choices of operators corresponding to different Clebsch factors in the leading contributions to each entry of the Yukawa matrix. It would be very nice to understand these choices from the point of view of a “top-down” string construction, such as the Type I string construction which has recently led to the Pati-Salam gauge group with three chiral families [@ShTy]. We believe that only by a combination of top-down and bottom-up approaches (such as that presented here) will a completely successful string theory of fermion masses and mixing angles emerge. We have shown that the recent discovery of neutrino mass by Super-Kamiokande provides precious information about the flavour structure of such a future string theory. The work of M.O. was supported by JNICT under contract grant PRAXIS XXI/BD/5536/95. In this appendix we briefly review some technical issues related to the presence of the right-handed neutrinos. Firstly we show how the decoupling of the neutrinos affects the one-loop RGEs for the Yukawa couplings in the MSSM+$\nu^c$ model : $$\begin{aligned} {1 \over 16\pi^2} {d\lambda_u\over dt} &=& [ 3{\rm Tr}U+{\rm Tr}N+3U+D-G^u] \lambda_u \label{SRNDrgeU} \\ {1 \over 16\pi^2} {d\lambda_d\over dt} &=& [ 3{\rm Tr}D+{\rm Tr}E+3D+U-G^d] \lambda_d \\ {1 \over 16\pi^2} {d\lambda_e\over dt} &=& [ 3{\rm Tr}D+{\rm Tr}E+3E+N-G^e] \lambda_e \\ {1 \over 16\pi^2} {d\lambda_\nu\over dt} &=& [ 3{\rm Tr}U+{\rm Tr}N+3N+E-G^\nu] \lambda_\nu \label{SRNDrgeN}\end{aligned}$$ where $t=\ln(Q)$, $$\matrix{ U = \lambda_u\lambda_u^\dagger \hfill & \phantom{space} G^u = {26 \over 30} g^2_1+3g^2_2+{16\over 3}g^2_3 \hfill \cr D = \lambda_d\lambda_d^\dagger \hfill & \phantom{space} G^d = {14 \over 30} g^2_1+3g^2_2+{16 \over 3}g^2_3 \hfill \cr E = \lambda_e\lambda_e^\dagger \hfill & \phantom{space} G^e = {18 \over 10} g^2_1+3g^2_2 \hfill \cr N = \lambda_\nu\lambda_\nu^\dagger \hfill & \phantom{space} G^\nu = {6 \over 10} g^2_1+3g^2_2 \hfill } \label{UDEN}$$ and ${\rm\ Tr}U= U_{11}+U_{22}+U_{33}$ [*etc.*]{} The general idea behind the process of decoupling the right-handed neutrinos (in the “step” approximation) is that a Feynman diagram that includes a specific flavour of a right-handed neutrino $\nu^c_A$, with mass $M_{\nu_A}$, in an internal line only makes a contribution to the RGEs in Eqs. (\[SRNDrgeU\])-(\[SRNDrgeN\]) for energies $Q$ bigger than $M_{\nu_A}$. Thus, the procedure depends on properly adapting the $N$ parameter in Eq. . We shall now make this statement more precise. Let us assume that the neutrino Majorana matrix $M_{RR}$ is diagonalized by the following transformation : $$S^{\nu^c\dagger} M_{RR} S^{\nu^c} = M'_{RR} = {\rm\ diag}(M_{\nu_1},M_{\nu_2},M_{\nu_3})$$ Then, the decoupling of the right-handed neutrinos in Eqs. (\[SRNDrgeU\])-(\[SRNDrgeN\]) can be accounted for by replacing $N$ in Eq.  by $N_\theta$ given by : $$N=\lambda_\nu \lambda^\dagger_\nu = \lambda_\nu S^{\nu^c} S^{\nu^c\dagger} \lambda^\dagger_\nu \to \lambda_\nu S^{\nu^c} \Theta S^{\nu^c\dagger} \lambda^\dagger_\nu = N_\theta$$ where $\Theta(Q)$ is an energy-dependent diagonal matrix defined by : $$\Theta(Q) = {\rm\ diag}( \theta(Q-M_{\nu_1}), \theta(Q-M_{\nu_2}), \theta(Q-M_{\nu_3}))$$ with $\theta(x)=0$ for $x<0$ and $\theta(x)=1$ for $x>0$. 
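A minimal numerical illustration of this step-function decoupling (ours; the Yukawa texture, rotation matrix and Majorana masses below are made up) is the construction of $N_\theta$ at a few renormalization scales $Q$ :

```python
import numpy as np

def N_theta(lam_nu, S_nuc, M_diag, Q):
    """N_theta = lam_nu S Theta(Q) S^dagger lam_nu^dagger, with Theta_AA = 1 only if Q > M_{nu_A}."""
    Theta = np.diag((Q > M_diag).astype(float))
    return lam_nu @ S_nuc @ Theta @ S_nuc.conj().T @ lam_nu.conj().T

# made-up inputs, for illustration only
lam_nu = np.array([[1e-4, 1e-2, 0.05],
                   [1e-3, 2e-2, 0.70],
                   [1e-3, 1e-2, 0.72]])
S_nuc  = np.eye(3)                           # pretend M_RR is already diagonal
M_diag = np.array([1e8, 1e11, 3e14])         # right-handed neutrino masses in GeV

for Q in (1e16, 1e12, 1e9):                  # N_theta loses rank as each nu^c decouples
    print(Q, np.linalg.matrix_rank(N_theta(lam_nu, S_nuc, M_diag, Q)))
```

In a numerical integration of Eqs. (\[SRNDrgeU\])-(\[SRNDrgeN\]) this $N_\theta$ simply replaces $N$ at every step.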
The second issue that we would like to address concerns the effect of a large $(\lambda_\nu)_{23}$ coupling on third family Yukawa unification, and as a consequence, for example, on the prediction for the top mass. We claim that the effect is small. To see why, let us assume that the only large Yukawa couplings in Eqs. (\[SRNDrgeU\])-(\[SRNDrgeN\]) are $\lambda_t = (\lambda_u)_{33}$, $\lambda_b = (\lambda_d)_{33}$, $\lambda_\tau = (\lambda_e)_{33}$ and $\lambda_{\nu_\tau} = (\lambda_\nu)_{33}$, $\lambda_{23} = (\lambda_\nu)_{23}$. In this limit, the RGEs simplify to : $$\begin{aligned} {1 \over 16\pi^2} {d\lambda_t \over dt} &=& \lambda_t ( 6 \lambda_t^2 + \lambda_b^2 + \lambda_{\nu_\tau}^2 + \lambda_{23}^2-G^u) \label{SRNDrget} \\ {1 \over 16\pi^2} {d\lambda_b \over dt} &=& \lambda_b ( 6 \lambda_b^2 + \lambda_t^2 + \lambda_{\tau}^2 - G^d) \label{SRNDrgeb} \\ {1 \over 16\pi^2} {d\lambda_\tau \over dt} &=& \lambda_\tau ( 4 \lambda_\tau^2 + 3 \lambda_b^2 + \lambda_{\nu_\tau}^2 - G^e ) \label{SRNDrgee}\\ {1 \over 16\pi^2} {d\lambda_{\nu_\tau} \over dt} &=& \lambda_{\nu_\tau} ( 4 \lambda_{\nu_\tau}^2 + 4 \lambda_{23}^2 + 3 \lambda_t^2 + \lambda_\tau^2-G^\nu) \label{SRNDrgen}\end{aligned}$$ From Eqs. , we see that the presence of the $\lambda_{23}$ coupling does not affect the RGEs of $\lambda_{b,\tau}$. Moreover the effect of $\lambda_{23}$ on the RGE of $\lambda_t$ is small ($1/8 \sim 12 \%$). The only RGE that is significantly affected by $\lambda_{23}$ is the RGE of $\lambda_{\nu_\tau}$. However, since the correct prediction for the heaviest left-handed neutrino $m_{\nu_3}\sim 0.05 {\rm\ eV}$ requires that $M_{\nu_3} > 10^{13} {\rm\ GeV}$, the $\lambda_{\nu_\tau}^2$ and $\lambda_{23}^2$ terms in Eqs. , are only present in a rather short energy range, [*i.e.*]{} for $10^{13} {\rm\ GeV} < Q < M_X \sim 10^{16} {\rm\ GeV}$. As a consequence the presence/absence of the neutrino Yukawa couplings, as far as third family Yukawa unification is concerned, is not important. [^24] Finally we find it interesting to comment on the radiative corrections to the neutrino atmospheric mixing angle $\theta_{23}$ between the GUT and the $M_Z$ scale. It is well known [@BaLePa:MTanimoto] that the running of $\sin^2(2\theta_{23})$ can be understood from the following evolution equation : $${1 \over 16\pi^2} {1\over \sin^2(2\theta_{23})} {d \sin^2(2\theta_{23}) \over dt} = -2(\lambda_\tau^2-\lambda^2_\mu) { (m_{LL})^2_{33} - (m_{LL})^2_{22} \over [(m_{LL})_{33}-(m_{LL})_{22}]^2+4(m_{LL})^2_{23}} \label{SRNDsin2t}$$ which displays a resonance peak at $(m_{LL})_{33} \sim (m_{LL})_{22}$ when $(m_{LL})_{23}$ is small. Generally, it is possible that $(m_{LL})_{33}$ starts at $Q=M_X$ bigger than $(m_{LL})_{22}$ but, due to the third family Yukawa radiative effects, is driven to smaller values faster than $(m_{LL})_{22}$. As a result, even if the initial values of $(m_{LL})_{33}$ and $(m_{LL})_{22}$ at $M_X$ are different, they may at some point become comparable. If this is the case then a large $\theta_{23}$ angle can be generated radiatively from a small tree level $\theta_{23}$ at $M_X$. This mechanism, of amplifying $\theta_{23}$ radiatively, has been studied for example in Refs. [@ElLeLoNa; @BaLePa:MTanimoto]. However, in these works, and as can be seen from Eq. , the amplification is only efficient if at least the $\lambda_\tau$ Yukawa coupling is large (about 2 or 3). 
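To get a feeling for the size of the effect, the sketch below (ours; the $m_{LL}$ entries are made up and held fixed, so exponentiating over $\ln(M_X/M_Z) \sim 33$ is only a crude one-step estimate) evaluates the right-hand side of Eq. (\[SRNDsin2t\]) for a Yukawa-unified $\lambda_\tau \sim 0.7$ and for a large $\lambda_\tau \sim 3$ :

```python
import numpy as np

def dlog_sin22theta_dt(lam_tau, lam_mu, m22, m33, m23):
    """Right-hand side of Eq. (SRNDsin2t): d ln sin^2(2 theta_23) / d ln Q."""
    num = m33**2 - m22**2
    den = (m33 - m22)**2 + 4.0 * m23**2
    return -2.0 * (lam_tau**2 - lam_mu**2) * num / den / (16.0 * np.pi**2)

m22, m33, m23 = 0.9, 1.1, 1.0                # made-up m_LL entries, near-maximal 23 mixing
t_range = np.log(3e16 / 91.2)                # ln(M_X / M_Z), roughly 33 e-folds

for lam_tau in (0.7, 3.0):                   # lambda_mu is neglected
    rate = dlog_sin22theta_dt(lam_tau, 0.0, m22, m33, m23)
    print(lam_tau, rate, np.exp(-rate * t_range))   # crude M_X -> M_Z enhancement of sin^2(2 theta_23)
```

The running scales with $\lambda_\tau^2$: for $\lambda_\tau \sim 0.7$ the crude estimate gives a shift of a few per cent, while for $\lambda_\tau \sim 3$ it gives several tens of per cent, in line with the statement above.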
In our model, since we demanded top-bottom-tau Yukawa unification, the value of the third family Yukawa coupling is rather small ($\sim 0.7$), thus the $\sin^2(2\theta_{23})$ is stable under radiative corrections. In this appendix we show that it is easy to convert the results of the LMA MSW solution found in the main body of this paper into results for the LOW solution which is also characterized by maximal $\nu_e\to\nu_\mu$ oscillations but smaller $\Delta m_{12}^2$ [@GoPe; @JEllis] : $$\hskip -2cm {\rm LOW : } \quad\quad \sin^2(2\theta_{12}) \sim 1 \qquad \Delta m^2_{12} \sim 10^{-7} {\rm\ eV}^2$$ The reason why we can adapt the LMA results is that, as we showed in section III, in the SRHND approach, the $\theta_{12}$ and $\theta_{23}$ neutrino mixing angles come “solely” from the neutrino Yukawa matrix. On the other hand, the neutrino mass spectrum depends on the hierarchies of $\lambda_\nu$ and $M_{RR}$. Thus, as long as we keep within the SRHND scenario, we can change $M_{RR}$ to fit the LOW $\Delta m^2_{12}$ solution without implying a significant change in $\theta_{12}$ and $\theta_{23}$. Let us consider a LOW model with the same $U(1)_X$ flavour charges and the same operator matrix for $\lambda_f$ as in the LMA model, given by Table  and Eq. , but with a $M_{RR}$ matrix with the following structure : $${\displaystyle {M_{RR}(M_X)\phantom{_{33}} \over M_{RR}(M_X)_{33}}} = \left(\matrix{ A_{11} \epsilon^8 & A_{12} \epsilon^6 & A_{13} \epsilon^4 \cr A_{21} \epsilon^6 & B_{22} \epsilon^4 & A_{23} \epsilon^2 \cr A_{31} \epsilon^4 & A_{32} \epsilon^2 & A_{33} \phantom{\epsilon^2} \cr }\right) \label{MrrMXLOWan}$$ Comparing Eq.  with Eq.  we see that these two equations differ only by their “vertical” $\delta$-component (we assumed that Eq.  has $M_{RR}^\delta\sim{\bf 1}$) and by the numerical factor $B_{22}=1.821 \ne 1.072=A_{22}$. We note that the removal of the $\delta$-factor in the 22 entry of $M_{RR}$ and the increase of the $B_{22} > A_{22}$ coefficient act to decrease $\Delta m^2_{12}$. The Majorana matrix $M_{RR}(M_Z)$ and the neutrino Yukawa matrix $\lambda_\nu(M_Z)$ in the LOW model resulting from $M_{RR}(M_X)$ in Eq.  and the Yukawa matrices $\lambda_f(M_X)$ given by Table  (recall that we take $\lambda_f^{LOW}(M_X) = \lambda_f^{LMA}(M_X)$) are shown in Table . In Table  we present the predicted values for the left-handed neutrino matrix and the MNS matrix in the LOW model. The results for the neutrino masses in the LOW model are given in Table . Finally, in Table  we show the values of the neutrino mixing angles. In this appendix we briefly present a model that explores the possibility of a SMA MSW solution to the solar neutrino anomaly. Although the SMA region is disfavoured by the latest results from the Super-Kamiokande experiment, the SMA solution is not completely ruled out. 
[^25] The SMA solution data indicates [@GoPe]: $${\rm SMA : } \qquad \sin^2(2\theta_{12}) \sim 1.6\times 10^{-3} \qquad \Delta m^2_{12} \sim 5\times 10^{-6} {\rm\ eV}^2$$ In analogy with the LMA model we start by recalling that the Yukawa and the neutrino Majorana matrices in the SMA model can be decomposed into a “vertical” $\delta$-component and a “horizontal” $\epsilon$-component given by : $$(\lambda_f)_{AB} \sim (\lambda^\delta)_{AB} (\lambda^\epsilon)_{AB} \qquad\qquad (M_{RR})_{AB} \sim (M_{RR}^\delta)_{AB} (M_{RR}^\epsilon)_{AB} \label{YukMrrSMA}$$ The $U(1)_{\bar X}$ charges of the SMA given in Table fix the “horizontal” structure of $\lambda^\epsilon$ and $M_{RR}^\epsilon$ in the SMA model to be : $$\lambda^\epsilon \sim \left(\matrix{ \epsilon^5 & \epsilon^4 & \epsilon^2 \cr \epsilon^3 & \epsilon^2 & 1 \cr \epsilon^3 & \epsilon^2 & 1 }\right)\qquad M_{RR}^\epsilon \sim \left(\matrix{ \epsilon^6 & \epsilon^5 & \epsilon^3 \cr \epsilon^5 & \epsilon^4 & \epsilon^2 \cr \epsilon^3 & \epsilon^2 & 1 \cr }\right) \label{SRNDmRLmRRepsSMA}$$ On the other hand, the “vertical” structure of $\lambda^\delta$ and $M_{RR}^\delta$ is given by the following operator matrices : $$\lambda^\delta \sim \left(\matrix{ {\cal O}^G+{\cal O}^{\prime\prime K} & {\cal O}^R+{\cal O}^{\prime O} & {\cal O}^H+{\cal O}^{\prime a} \cr {\cal O}^G+{\cal O}^{\prime\prime V} & {\cal O}^W+{\cal O}^{\prime H} & {\cal O}^I+{\cal O}^{\prime W} \cr {\cal O}^M+{\cal O}^{\prime\prime V} & {\cal O}^g+{\cal O}^{\prime T} & 1 }\right)\qquad M_{RR}^\delta \sim {\bf 1} \label{SRNDmRLopMtrSMA}$$ As a result of Eqs. , the Yukawa and the neutrino Majorana matrices in Eq.  can be written as : $$\begin{aligned} \lambda_f(M_X) &=& \left(\matrix{ \hfil x^G_f c_{11} \delta\epsilon^5 & \hfil x^R_f c_{12} \delta\epsilon^4 & \hfil x^H_f c_{13} \delta\epsilon^2 \cr \hfil x^G_f c_{21} \delta\epsilon^3 & \hfil x^W_f c_{22} \delta\epsilon^2 & \hfil x^I_f c_{23} \delta\phantom{\epsilon^2} \cr \hfil x^M_f c_{31} \delta\epsilon^3 & \hfil x^g_f c_{32} \delta\epsilon^2 & \hfil c_{33}\phantom{\epsilon^2} \cr }\right)+ \nonumber \\ & & \cr & & \left(\matrix{ 0 & x^O_f c^\prime_{12} \delta^2\epsilon^4 & x^a_f c^\prime_{13} \delta^2\epsilon^2 \cr 0 & x^H_f c^\prime_{22} \delta^2\epsilon^2 & \hfil x^W_f c^\prime_{23} \delta^2\phantom{\epsilon^2} \cr 0 & x^T_f c^\prime_{32} \delta^2\epsilon^2 & 0 }\right)+ \nonumber \\ & & \cr & & \left(\matrix{ x^K_f c^{\prime\prime}_{11} \delta^3\epsilon^5 & 0 & 0 \cr x^V_f c^{\prime\prime}_{21} \delta^3\epsilon^3 & 0 & 0 \cr x^V_f c^{\prime\prime}_{31} \delta^3\epsilon^3 & 0 & 0 }\right) \label{lfMXSMAan} \\ & & \cr {\displaystyle {M_{RR}(M_X)\phantom{_{33}} \over M_{RR}(M_X)_{33}}} &=& \left(\matrix{ C_{11} \epsilon^6 & C_{12} \epsilon^5 & C_{13} \epsilon^3 \cr C_{21} \epsilon^5 & C_{22} \epsilon^4 & C_{23} \epsilon^2 \cr C_{31} \epsilon^3 & C_{32} \epsilon^2 & C_{33} \phantom{\epsilon^2} \cr }\right) \label{MrrMXSMAan}\end{aligned}$$ In the rest of this appendix we apply the same systematic approach used in the main part of the paper for the LMA solution to the SMA model. The approximate structure of the effective matrices resulting from Eqs. , is given in Table . In Table  we give the values of the $c$,$C$ parameters appearing in Eqs. ,. In Table  we present the exact numerical values of the Yukawa and Majorana matrices at the unification scale and in Table  the values of the same matrices at the $M_Z$ scale. In Table  we present the predicted values for the left-handed neutrino mass and for the MNS mixing matrices. 
The predictions for masses of the physical neutrinos in the SMA model is listed in Table  and in Table   we give the predictions for the neutrino mixing angles at several energy scales. Finally, in Table  we show the predictions for the masses of the down and strange quarks in the SMA model. [99]{} Y. Fukuda , Super-Kamiokande Collaboration, Phys. Lett. [**B433**]{}, 9 (1998);  Phys. Lett. [**B436**]{}, 33 (1998);  Phys. Rev. Lett. [**81**]{}, 1562 (1998). H. Sobel, talk presented at the XIX International Conference on Neutrino Physics and Astrophysics, Sudbury, Canada, June 16-21, 2000. L. Wolfenstein, Phys. Rev. [**D17**]{}, 2369 (1978);  Phys. Rev. [**D20**]{}, 2634 (1979); S. P. Mikheyev, A. Y. Smirnov, Yad. Fiz. [**42**]{}, 1441 (1985) \[Sov. J. Nucl. Phys. [**42**]{}, 913 (1985)\]; Nuovo Cimento [**9C**]{}, 17 (1986). Y. Suzuki, talk presented at the XIX International Conference on Neutrino Physics and Astrophysics, Sudbury, Canada, June 16-21, 2000. J. N. Bahcall, P. I. Krastev, A. Y. Smirnov, Phys. Rev. [**D60**]{} 093001 (1999) [hep-ph/9905220]{}. M. Gell-Mann, P. Ramond, R. Slansky, in Sanibel Talk, CALT-68-709, Feb. 1979 and in [*Supergravity*]{} (North Holland, Amsterdam 1979); T. Yanagida in [*Proc. of the Workshop on Unified Theory and Baryon Number of the Universe*]{}, KEK, Japan (1979); R. N. Mohapatra, G. Senjanovic, Phys. Rev. Lett. [**44**]{}, 912 (1980). S. F. King, N. N. Singh, [hep-ph/0007243]{}. J. Ellis, G. K. Leontaris, S. Lola, D. V. Nanopoulos, Eur. Phys. J. [**C9**]{}, 389 (1999); K. S. Babu, C. N. Leung, J. Pantaleone, Phys. Lett. [**B319**]{}, 191 (1993); M. Tanimoto, Phys. Lett. [**B360**]{}, 41 (1995). S. F. King, Phys. Lett. [**B439**]{}, 350 (1998). S. F. King, Nucl. Phys. [**B562**]{}, 57 (1999). S. F. King, Nucl. Phys. [**B576**]{}, 85 (2000). G. Altarelli, F. Feruglio, Phys. Lett. [**B451**]{}, 388 (1999). G. Altarelli, F. Feruglio, I. Masina, Phys. Lett. [**B472**]{}, 382 (2000). K. S. Babu, J. C. Pati, F. Wilczek, Nucl. Phys. [**B566**]{}, 33 (2000). C. H. Albright, S. M. Barr, Phys. Rev. Lett. [**85**]{}, 244 (2000). Q. Shafi, Z. Tavartkiladze, Phys. Lett. [**B487**]{}, 145 (2000). J. C. Pati, A. Salam, Phys. Rev. [**D10**]{}, 275 (1974). I. Antoniadis and G. K. Leontaris, Phys. Lett. [**B216**]{}, 333 (1989); I. Antoniadis and G. K. Leontaris, and J. Rizos, Phys. Lett. [**B245**]{}, 161 (1990). G. Shiu, S. H. H. Tye, Phys. Rev. [**D58**]{}, 106007 (1998). S. F. King, Phys. Lett. [**B325**]{}, 129 (1994). B. C. Allanach, S.F. King, Nucl. Phys. [**B456**]{}, 57 (1995). B. C. Allanach, S. F. King, Nucl. Phys. [**B459**]{}, 75 (1996). B. C. Allanach, S. F. King, G. K. Leontaris, S. Lola, Phys. Rev. [**D56**]{}, 2632 (1997). S. F. King, M. Oliveira, [hep-ph/0008183]{}. H. P. Nilles, Phys. Rep. [**110**]{}, 1 (1984) ; H. E. Haber and G. L. Kane, Phys. Rep. [**117**]{}, 75 (1985) ; A. B. Lahanas and D. V. Nanopoulos, Phys. Rep [**145**]{}, 1 (1987); H. E. Haber, Presented at Theoretical Advanced Study Institute (TASI 92): From Black Holes and Strings to Particles, Boulder, CO, 3-28 Jun 1992. Published in Boulder TASI 92:0589-688, [hep-ph/9305248]{}. N. Cabibbo, Phys. Rev. Lett. [**10**]{}, 531 (1963); M. Kobayashi, T. Maskawa, Prog. Theor. Phys. [**49**]{}, 652 (1973). Z. Maki, M. Nakagawa, S. Sakata, Prog. Theor. Phys. [**28**]{}, 870 (1962). S. F. King and Q. Shafi, Phys. Lett. [**B422**]{}, 135 (1998). P. Ramond, R. G. Roberts, G. G. Ross, Nucl. Phys. [**B406**]{}, 19 (1993); Y. Grossman, Y. Nir, Nucl. Phys. [**B448**]{}, 30 (1995); M. Leurer, Y. 
Nir, N. Seiberg, Nucl. Phys. [**B398**]{}, 319 (1993); , Nucl. Phys. [**B420**]{}, 468 (1994). L. Ibáñez, G. G. Ross, Phys. Lett. [**B332**]{}, 100 (1994). V. Jain, R. Shrock, Phys. Lett. [**B352**]{}, 83 (1995). P. Binetruy, P. Ramond, Phys. Lett. [**B350**]{}, 49 (1995); E. Dudas, S. Pokorski, C.A. Savoy, Phys. Lett. [**B356**]{}, 45 (1995); P. Binétruy, S. Lavignac, P. Ramond, Nucl. Phys. [**B477**]{}, 353 (1996); E. J. Chun, A. Lukas, Phys. Lett. [**B387**]{}, 99 (1996). H. Dreiner, G. K. Leontaris, S. Lola, G. G. Ross, C. Scheich, Nucl. Phys. [**B436**]{}, 461 (1995); G. K. Leontaris, S. Lola ,C. Scheich, J. D. Vergados, Phys. Rev. [**D53**]{}, 6381 (1996); S. Lola, G. G. Ross, Nucl. Phys. [**B553**]{}, 81 (1999); M. Carena, J. Ellis, S. Lola, C. E. M. Wagner, Eur. Phys. J. [**C12**]{}, 507 (2000). R. Barbieri, L. J. Hall, D. Smith, A. Strumia, N. Weiner, JHEP 9812, 17 (1998); Y. Grossman, Y. Nir, Y. Shadmi, JHEP [**9810**]{}, 007 (1998); G. Altarelli, F. Feruglio, Phys. Lett. [**B439**]{}, 112 (1998); S. M. Barr, I. Dorsner, [hep-ph/0003058]{}. L. Wolfenstein, Phys. Rev. Lett. [**51**]{}, 1945 (1983). B. C. Allanach, S. F. King, Nucl. Phys. [**B507**]{}, 91 (1997). M. Green, J. Schwarz, Phys. Lett. [**149**]{}, 117 (1984). L. E. Ibáñez, Phys. Lett. [**B303**]{}, 55 (1993). H. Georgi, C. Jarlskog, Phys. Lett. [**B86**]{} 297 (1979); S. Dimopoulos, L. J. Hall, S. Raby, Phys. Rev. [**D45**]{}, 4192 (1992). C. D. Froggatt, H. B. Nielsen, Nucl. Phys. [**B147**]{}, 277 (1979). G. Anderson, S. Raby, S. Dimopoulos, L.J. Hall, G.D. Starkman, Phys. Rev. [**D49**]{}, 3660 (1994). D. E. Groom,  (Particle Data Group), Eur. Phys. Jour. [**C15**]{}, 1 (2000). H. Arason, D. J. Castaño, B. Kesthelyi, S. Mikaelian, E. J. Piard, P. Ramond and B. D. Wright, Phys. Rev. [**D46**]{}, 3945 (1992). M. Apollonio  (CHOOZ collaboration), Phys. Lett. [**B466**]{}, 415, (1999); Phys. Lett. [**B420**]{}, 397 (1998). S .F. King and N. N. Singh, [hep-ph/0006229]{}, Nucl. Phys. B (to appear). M. C. Gonzalez-Garcia and C. Peña-Garay, [hep-ph/0009041]{}. J. Ellis, Talk given at the 19TH International Conference on Neutrino Physics and Astrophysics - Neutrino 2000, Sudbury, Ontario, Canada, 16-21 Jun 2000. [hep-ph/0008334]{} [^1]: However this is not guaranteed due to the unknown structure of the heavy Majorana matrix, and for example an inverted neutrino mass hierarchy could result although this relies on some non-hierarchical couplings in the Dirac Yukawa matrix [@KiSi2]. [^2]: We use Left-Right (LR) convention for Yukawa matrices in this paper. [^3]: The four component fermion fields $\Psi$ are given by $\Psi_F = (F , -i\sigma^2 F^{c*})$ for $F=U,D,E$ and $\Psi_{N} = (N, -i\sigma^2 N^{*})$ for the neutrinos. [^4]: We shall not address the question of CP violation in this paper. [^5]: $V_{e2} = V^{MNS}_{12}$, $V_{e3} = V^{MNS}_{13}$ and $V_{\mu 3} = V^{MNS}_{23}$. [^6]: In this section we will use the following simplified notation $m_{LR} = \lambda_\nu v_2 \equiv \lambda v_2 \sim \lambda$. [^7]: Note that $R_{12}$ for $m_{LL}$ in Eq.  is undetermined. [^8]: Note that although Eq.  still looks very simple it can, in many cases, provide a good qualitative description of the physics involving the heaviest neutrinos. Indeed, if $M_{RR}$ is diagonal dominated and if $m_{RL}$ is highly hierarchical then the limiting case of Eq.  applies. [^9]: Note that this does not necessarily imply that $M^{-1}_{\nu_3}$ is larger than $M^{-1}_{\nu_2}$ since the Yukawa couplings must also be taken into account. 
[^10]: The cancellation of anomalies requires the vanishing of the trace ${\rm Tr}(T^a \{T^b,T^c\}) = 0$ where $T^{a,b,c}$ are any of the group generators which stand at the three gauge boson vertices of the triangle diagrams. [^11]: We will not include the analysis of the $U(1)_X^3$ or of the gravitational anomaly because they depend exclusively on SM singlet fields. [^12]: The numbers inside the curly brackets indicate the experimental ranges according to Ref. [@pdg]. [^13]: All running masses are given in the $\overline{\rm MS}$ scheme. [^14]: The experimental ranges are $|V_{12}| = 0.219 {\rm\ to\ } 0.226$, $|V_{23}|= 0.037 {\rm\ to\ } 0.043$ and $|V_{13}| = 0.002 {\rm\ to\ } 0.005$ [@pdg]. [^15]: We note that these assumptions about the nature of the Majorana matrix are unique to the LMA MSW solution. The SMA MSW and LOW solutions discussed in Appendices C and D are characterized by a Majorana matrix filled with $n=1$ operators only. [^16]: In [@SFKing1] these charges were called $\alpha$, $\beta$, $\gamma$, $\delta$. Roughly, this corresponds to choosing a basis of charges, that has $\bar X_{F_3}=\bar X_{F^c_3}=\bar X_{h}=0$. [^17]: However, only the $U(1)_X$ symmetry is anomaly free. [^18]: The $n=3$ operators can, to a very good approximation, be neglected. Their inclusion here serves only to fill the 11, 21, 31 entries of the $\lambda_{u,\nu}$ Yukawa matrices, thereby ensuring (for example) that the up quark is given a very small mass. [^19]: We note that $a_{33}$ is fixed by quadruple Yukawa unification at $M_X$, [*i.e.*]{} $\lambda_t=\lambda_b=\lambda_\tau=\lambda_{\nu_\tau}$. [^20]: Fixing all other $a$’s to be one. [^21]: The values of the $A$ parameters in Eq.  are not constrained by the experimental data, thus we chose them to be “arbitrary” numbers in the $0.5 < A_{AB} < 2.0$ range. [^22]: That has Clebsches $x^W_e = -3 x^W_d$ [^23]: $(\lambda_\nu)_{AB} \to (\lambda_\nu)_{AB} / k$ with $k = {1\over 2} [ (\lambda_\nu)_{23}+(\lambda_\nu)_{33}] = \lambda^{0.384}$. [^24]: Numerically we found that when $(\lambda_\nu)_{23}$ is allowed to take values comparable with $(\lambda_\nu)_{33}$ the prediction for the top mass roughly decreased by 1 GeV, the value of $\tan\beta$ decrease by 0.5 and the value of the unified third family Yukawa coupling at the unification scale decreased by 0.015. [^25]: Statistically, the SMA solution can still describe the neutrino data with a probability of 34 %. [@GoPe]
1
--- abstract: 'The quark matter created in relativistic nuclear collisions is interpreted as a nearly-perfect fluid. The recent efforts to explore its finite-density properties in the beam energy scan programs motivate one to revisit the issue of the local rest frame fixing in off-equilibrium hydrodynamics. I first investigate full second-order relativistic hydrodynamics in the Landau and the Eckart frames. Then numerical hydrodynamic simulations are performed to elucidate the effect of frame choice on flow observables in relativistic nuclear collisions. The results indicate that the flow can differ in the Landau and the Eckart frames but charged particle and net baryon rapidity distributions are mostly frame independent when off-equilibrium kinetic freeze-out is considered.' author: - Akihiko Monnai bibliography: - 'frame.bib' title: Landau and Eckart frames for relativistic fluids in nuclear collisions --- Introduction {#sec1} ============ The existence of the strongly-coupled quark-gluon plasma [@Yagi:2005yb] as a high-temperature phase of QCD has been partly motivated by a number of relativistic hydrodynamic analyses of high-energy nuclear collisions at BNL Relativistic Heavy Ion Collider [@Arsene:2004fa; @Back:2004je; @Adams:2005dq; @Adcox:2004mh] and CERN Large Hadron Collider [@Aamodt:2010pa; @ATLAS:2011ah; @Chatrchyan:2012wg]. Modern versions of such analyses incorporate the effects of viscosity to take account of off-equilibrium processes in the system, which play important roles in quantitative understanding of the experimental data [@Wang:2016opj]. The theoretical framework of relativistic dissipative hydrodynamics, however, is still not completely understood, partially because one has to introduce relaxation effects to the off-equilibrium processes to avoid violating stability and causality [@Hiscock:1983zz; @Hiscock:1985zz; @Hiscock:1987zz]. Such extended frameworks are called the second-order theory [@Israel:1976tn; @Israel:1979wp; @Muronga:2001zk; @Muronga:2003ta; @Muronga:2006zw; @Muronga:2006zx; @Koide:2006ef; @Tsumura:2006hn; @Tsumura:2007wu; @Tsumura:2009vm; @Tsumura:2011cj; @Tsumura:2012ss; @Baier:2007ix; @Romatschke:2009kr; @Bhattacharyya:2008jc; @Natsuume:2007ty; @Lublinsky:2009kv; @Betz:2009zz; @Monnai:2010qp; @Monnai:2018rgs; @Molnar:2013lta; @Denicol:2012cn; @Denicol:2012vq; @Denicol:2019iyh; @PeraltaRamos:2009kg; @Calzetta:2010au; @Aguilar:2017ios; @Florkowski:2015lra; @Jaiswal:2015mxa; @Tinti:2016bav; @Harutyunyan:2018cmm; @Mitra:2019jld] as opposed to the traditional linear response theory [@Landau; @Eckart:1940te], which is also known as the first-order theory, because the off-equilibrium correction of the respective order in terms of dissipative currents is taken into account in the entropy current of those theories. Non-relativistic hydrodynamic flow can be defined as a local flux of particles. In relativistic systems, however, the definition of the flow is not trivial because the energy and the conserved number can flow separately in the presence of dissipative processes. There are two distinctive ways of defining the local rest frame for the flow: the Landau (or energy) frame [@Landau] and the Eckart (or conserved charge/particle) frame [@Eckart:1940te]. There have been decades of debate over the eligibility of the two definitions of the local rest frame [@Tsumura:2006hn; @Tsumura:2007wu; @Tsumura:2009vm; @Tsumura:2011cj; @Tsumura:2012ss; @Van:2007pw; @Van:2011yn; @Osada:2011gx; @Minami:2012hs; @Oliinychenko:2015lva; @Sagaceta-Mejia:2018cao]. 
Most of the numerical analyses of hydrodynamic models for relativistic nuclear collisions so far do not give explicit consideration to the frame because the diffusion or the dissipation current is neglected, but the Landau frame is often considered to be a preferred choice when there is a theoretical need. This could be owing to the fact that the primary conserved charge in nuclear collisions is the net baryon number, which is often small at high energies; the Eckart frame cannot be defined when conserved charges are approximated to be negligible. There are several calculations [@Monnai:2012jc; @Kapusta:2014dja; @Denicol:2018wdp; @Li:2018fow] that include the effects of baryon diffusion, which intrinsically implies that the Landau frame is chosen. The beam energy scan (BES) programs are being performed at RHIC. The exploration of mid-to-low beam energy regime is also planned at facilities including GSI Facility for Antiprotons and Ion Research (FAIR), JINR Nuclotron-based Ion Collider fAcility (NICA), CERN Super Proton Synchrotron (SPS), and JAEA/KEK Japan Proton Accelerator Research Complex (J-PARC) in order to elucidate the QCD phase structure at finite densities. It would be insightful to come back to the question of the flow frame in hydrodynamic models and investigate whether the choice of the frame can affect observables in those experiments. In this paper, full second-order hydrodynamic equations are investigated in the Landau and the Eckart frames. Stability and causality conditions in the two frames are shown to be related to the correspondences between the first- and the second-order transport coefficients in those frames. Then the implications of a frame choice on the hydrodynamic evolution in heavy-ion systems are discussed focusing on the baryon diffusion and the energy dissipation. Numerical analyses are performed for rapidity distribution because the effects of the net baryon number would appear most prominently in the direction of the collision. Fragments of the shattered nuclei are the source of the conserved charge. The paper is organized as follows. Full second-order relativistic dissipative hydrodynamics is investigated in the Landau and the Eckart frames in Sec. \[sec2\]. Causality and stability conditions are discussed in Sec. \[sec3\]. Sec. \[sec4\] presents numerical demonstration of the effects of a frame choice in nuclear collisions. Discussion and conclusions are presented in Sec. \[sec5\]. The natural unit $c = \hbar = k_B = 1$ and the mostly-negative Minkowski metric $g^{\mu \nu} = \mathrm{diag}(+,-,-,-)$ is used in this paper. Relativistic hydrodynamics in Landau and Eckart Frames {#sec2} ====================================================== The ideal hydrodynamic flow is uniquely determined since the local fluxes of the energy and the charge densities are in the same direction, *i.e.*, the directions of the eigenvector of the energy-momentum tensor and the conserved current match as $T^{\mu \nu} u_\nu = e u^\mu$ and $N^\mu = n u^\mu$. Here $e$ is the energy density and $n$ is the conserved charge density. On the other hand, the presence of the vector dissipative currents lead to the separation of the two local fluxes in relativistic systems. The Landau frame is chosen in the direction of the total energy flux so the dissipation of energy does not appear explicitly, $T^{\mu \nu} u^L_\nu = e_L u_L^\mu$. The Eckart frame is the choice of flow where the total conserved charge flux is diffusion-less $N^\mu = n_E u_E^\mu$. 
Here the subscripts $L$ and $E$ represent the Landau and the Eckart frames, respectively. The energy-momentum tensor, the conserved charge current, and the entropy current $s^\mu$ are assumed to be invariant under frame transformations [@Israel:1979wp]. The tensor decompositions read $$\begin{aligned} T^{\mu \nu} &=& e_L u_L^\mu u_L^\nu - (P_L + \Pi_L) \Delta_L^{\mu \nu} + \pi_L^{\mu \nu}, \\ N^\mu &=& n_L u_L^\mu + V_L^\mu ,\end{aligned}$$ in the Landau frame and $$\begin{aligned} T^{\mu \nu} &=& e_E u_E^\mu u_E^\nu - (P_E + \Pi_E) \Delta_E^{\mu \nu} \nonumber \\ &+& W_E^\mu u_E^\nu + W_E^\nu u_E^\mu + \pi_E^{\mu \nu}, \\ N^\mu &=& n_E u_E^\mu ,\end{aligned}$$ in the Eckart frame. Here $\Pi$ is the bulk pressure, $\pi^{\mu \nu}$ is the shear stress tensor, $W^\mu$ is the energy dissipation, $V^\mu$ is the baryon diffusion, and $\Delta^{\mu \nu} = g^{\mu \nu} - u^\mu u^\nu$ is the space-like projection. It can be immediately seen that the two frames become identical in the ideal hydrodynamic limit. The dissipative corrections to the energy and the conserved charge densities are neglected for simplicity [@Monnai:2018rgs]. Also I consider a system with a single charge conservation though the extension to general systems should be a straightforward task. In the following arguments, the vector dissipative currents $W_E^\mu$ and $V_L^\mu$ are considered and the shear and bulk viscous corrections are set aside for simplicity. When the dissipative corrections are much smaller than the equilibrium variables, the difference in the thermodynamic variables of the two frames $\Delta n_{E-L} = n_E - n_L$ and $\Delta e_{E-L} = e_E - e_L$ are, at a given space-time point, $$\begin{aligned} \Delta n_{E-L} &=& \frac{1}{2} \frac{V_L^\mu V^L_\mu}{n_L} + \mathcal{O}(\delta^3), \label{eq:dn}\\ \Delta e_{E-L} &=& - \frac{W_E^\mu W^E_\mu}{e_E + P_E} + \mathcal{O}(\delta^3), \label{eq:de}\end{aligned}$$ where the correction is of the second order in dissipative quantities. They indicate that the corrections to other thermodynamic variables, *i.e.*, the pressure $P$, the entropy density $s$, the temperature $T$, and the chemical potential $\mu$ are of the second order. The corrections to the transport coefficients should also be of the second order because they are functions of the energy and the conserved charge densities. Hereafter the subscripts $L$ and $E$ are dropped for those variables for simplicity unless otherwise required. The flow difference $\Delta u^\mu_{E-L} = u_E^\mu - u_L^\mu$ is $$\begin{aligned} \Delta u^\mu_{E-L} = \frac{V_L^\mu }{n} + \mathcal{O}(\delta^2) = - \frac{W_E^\mu}{e+P} + \mathcal{O}(\delta^2), \end{aligned}$$ where the leading order correction is of the first order. The macroscopic variables are estimated using the conservation laws $\partial_\mu T^{\mu \nu} = 0$ and $\partial_\mu N^\mu = 0$, the equation of state $P=P(e,n_B)$, and the constitutive relations for the dissipative currents. 
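As a simple numerical illustration of the relations above, the following minimal sketch (with arbitrary illustrative numbers in natural units) evaluates the second-order corrections $\Delta n_{E-L}$ and $\Delta e_{E-L}$ of Eqs. (\[eq:dn\]) and (\[eq:de\]) and the first-order flow shift, for a given Landau-frame diffusion current; the input state values are assumptions, not output of an equation of state.

```python
import numpy as np

# A minimal sketch (illustrative numbers, natural units) of the frame corrections
# Delta n_{E-L}, Delta e_{E-L}, and Delta u^mu_{E-L}.
g = np.diag([1.0, -1.0, -1.0, -1.0])          # mostly-minus metric

e, P, n = 1.0, 0.30, 0.10                      # energy density, pressure, charge density
V_L = np.array([0.0, 0.0, 0.0, 0.01])          # Landau-frame diffusion current (space-like)

VV = V_L @ g @ V_L                             # V^mu V_mu (negative for a space-like vector)
W_E = -(e + P) / n * V_L                       # leading-order relation between W_E and V_L
WW = W_E @ g @ W_E

dn = 0.5 * VV / n                              # Eq. (dn): second order in dissipative currents
de = -WW / (e + P)                             # Eq. (de)
du = V_L / n                                   # first-order flow shift u_E^mu - u_L^mu

print(f"Delta n_(E-L) = {dn:.2e}, Delta e_(E-L) = {de:.2e}")
print("Delta u^mu_(E-L) =", du)
```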
In the Landau frame, the second-order causal expression of the baryon diffusion, based on an extended Israel-Stewart framework [@Israel:1976tn; @Israel:1979wp; @Monnai:2010qp], reads $$\begin{aligned} V_L^\mu &=& \kappa_V \nabla_L^\mu \frac{\mu}{T} - \tau_V (\Delta_{L})^{\mu}_{\ \nu} D_L V_L^\nu \nonumber \\ &+& \chi_V^a V_L^\mu D_L \frac{\mu}{T} + \chi_V^b V_L^\mu D_L \frac{1}{T} + \chi_V^c V_L^\mu \nabla^L_\nu u_L^\nu \nonumber \\ &+& \chi_V^d V_L^\nu \nabla^L_\nu u_L^\mu + \chi_V^e V_L^\nu \nabla_L^\mu u^L_\mu, \label{eq:diffusion}\end{aligned}$$ where $\kappa_V \geq 0$ is the baryon conductivity, $\tau_V \geq 0$ is the relaxation time for the baryon diffusion, and $\chi_V^{a,b,c,d,e}$ are second-order transport coefficients. $D = u^\mu \partial_\mu$ and $\nabla^\mu = \Delta^{\mu \nu} \partial_\nu$ are the time- and the space-like derivatives, respectively. Similarly, in the Eckart frame, the energy dissipation reads $$\begin{aligned} W_E^\mu &=& - \kappa_W \bigg( \nabla_E^\mu \frac{1}{T} + \frac{1}{T} D_E u_E^\mu \bigg) - \tau_W (\Delta_{E})^{\mu}_{\ \nu} D_E W_E^\nu \nonumber \\ &+& \chi_W^a W_E^\mu D_E \frac{\mu}{T} + \chi_W^b W_E^\mu D_E \frac{1}{T} + \chi_W^c W_E^\mu \nabla^E_\nu u_E^\nu \nonumber \\ &+& \chi_W^d W_E^\nu \nabla^E_\nu u_E^\mu + \chi_W^e W_E^\nu \nabla_E^\mu u^E_\nu, \label{eq:dissipation}\end{aligned}$$ where $\kappa_W \geq 0$ is the energy conductivity and $\tau_W \geq 0$ is the relaxation time for the energy dissipation, and $\chi_W^{a,b,c,d,e}$ are second-order transport coefficients. For the full expression of the second-order hydrodynamic equations including the scalar and the tensor dissipative currents, see for example Ref. [@Monnai:2010qp]. The second law of thermodynamics implies that the entropy production is expressed in a quadratic form. It can be written in the Landau frame as $$\begin{aligned} \partial_\mu s^\mu = - \frac{V_L^\mu V^L_\mu }{\kappa_V} \geq 0,\end{aligned}$$ and in the Eckart frame as $$\begin{aligned} \partial_\mu s^\mu = - \frac{W_E^\mu W^E_\mu}{\kappa_W} \geq 0,\end{aligned}$$ with the mostly-minus metric. The first and the second order transport coefficients of the two frames are related by the identification of the entropy production: $$\begin{aligned} \kappa_V &=& \kappa_W \bigg( \frac{n}{e+P} \bigg)^2, \label{kappaLE}\\ \tau_V &=& \tau_W - \frac{\kappa_W}{(e+P)T}, \label{tauLE} \\ \chi_V^a &=& \chi_W^a - \frac{\tau_W nT}{e+P}, \\ \chi_V^b &=& \chi_W^b + \tau_W T - \frac{\kappa_W}{e+P}, \\ \chi_V^c &=& \chi_W^c + \frac{\kappa_W}{(e+P)T}, \\ \chi_V^d &=& \chi_W^d + \frac{\kappa_W}{(e+P)T}, \\ \chi_V^e &=& \chi_W^e . \label{chieLE}\end{aligned}$$ See Appendix \[sec:A\] for the derivation. Those relations indicate that the full second-order terms are necessary in addition to the conventional Israel-Stewart terms for understanding the frame dependence of relativistic dissipative hydrodynamics, because the vanishing second-order transport coefficients in one frame lead to non-vanishing ones in the other frame, except for $\chi_V^e$ and $\chi_W^e$. CAUSALITY AND STABILITY OF SECOND-ORDER HYDRODYNAMICS {#sec3} ===================================================== In this section, causality and stability conditions of the relativistic full second-order hydrodynamic equations are investigated in the Landau and the Eckart frames. A plane wave perturbation $\delta Q = \delta \bar{Q} e^{i(\omega t - kx)}$ is considered for a macroscopic variable $Q$ around global equilibrium where $u^\mu = (1,0,0,0)$. 
The perturbed equations of motion are used to analyze the hydrodynamic modes [@Hiscock:1985zz; @Hiscock:1987zz]. Landau Frame ------------ In the Landau frame, the perturbed energy-momentum tensor and the conserved charge current are $$\begin{aligned} \delta T^{\mu \nu} &=& (e+P) (\delta u^\mu u^\nu + u^\mu \delta u^\nu) \nonumber \\ &+& \delta e u^\mu u^\nu - \delta P g^{\mu \nu}, \\ \delta N^\mu &=& n \delta u^\mu + \delta n u^\mu + \delta V^\mu ,\end{aligned}$$ which follow the conservation law and the constitutive relation $$\begin{aligned} \delta V^\mu &=& \kappa_V \nabla^\mu \delta \alpha - \tau_V \Delta ^{\mu \nu} D \delta V_\nu ,\end{aligned}$$ where $\alpha = \mu/T$. The longitudinal and the transverse modes relevant to the diffusion are given by $$\begin{aligned} \mathcal{M}^L_{xx} \begin{pmatrix} \delta e\\ \delta n\\ \delta u^x \\ \delta V^x \\ \end{pmatrix} &=&0 , \end{aligned}$$ and $$\begin{aligned} \mathcal{M}^L_{xy} \begin{pmatrix} \delta u^y \\ \delta V^y \\ \end{pmatrix} =0 , \ \mathcal{M}^L_{xz} \begin{pmatrix} \delta u^z \\ \delta V^z \\ \end{pmatrix} =0 , \end{aligned}$$ where $$\begin{aligned} \mathcal{M}^L_{xx} = \begin{pmatrix} i\omega&0&-ikh&0\\ -ik \frac{\partial P}{\partial e}|_n&-ik\frac{\partial P}{\partial n}|_e&i\omega h&0\\ 0&i\omega&-ikn&-ik\\ -ik\kappa_V \frac{\partial \alpha}{\partial e}|_n&-ik\kappa_V \frac{\partial \alpha}{\partial n}|_e&0&1+i \omega \tau_V \end{pmatrix} , \nonumber \\\end{aligned}$$ and $$\begin{aligned} \mathcal{M}^L_{xy} = \mathcal{M}^L_{xz} = \begin{pmatrix} i\omega h &0 \\ 0&1+i\omega \tau_V \end{pmatrix} ,\end{aligned}$$ using the enthalpy density $h = e+P$. They have non-trivial solutions when the matrices have vanishing determinants. The longitudinal equations $\det(\mathcal{M}^L_{xx}) = 0$ lead to $$\begin{aligned} \omega^2 - c_s^2 k^2 = \frac{i \kappa_V (c_2 \omega^2 - c_4 k^2) k^2 }{\omega (1+i \tau_V \omega)} , \label{eq:MLxx}\end{aligned}$$ where the sound velocity is defined as $$\begin{aligned} c_s^2 = \frac{\partial P}{\partial e}\bigg|_n + \frac{n}{h} \frac{\partial P}{\partial n}\bigg|_e , \label{eq:cs2}\end{aligned}$$ and the thermodynamic coefficients as $$\begin{aligned} c_2 &=& \frac{\partial \alpha}{\partial n} \bigg|_e ,\\ c_4 &=& \frac{\partial \alpha}{\partial n} \bigg|_e \frac{\partial P}{\partial e}\bigg|_n - \frac{\partial \alpha}{\partial e}\bigg|_n \frac{\partial P}{\partial n}\bigg|_e .\end{aligned}$$ The Routh-Hurwitz stability criteria [@Routh; @Hurwitz] indicate that the $\mathrm{Im} (\omega)$ stays semi-positive when $c_2 \geq 0$ and $c_s^2 c_2 - c_4 \geq 0$. 
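A quick numerical check of this statement can be made by recasting Eq. (\[eq:MLxx\]) as a quartic polynomial in $\omega$ and solving it for a few wavenumbers. The sketch below does this with illustrative values of $c_s^2$, $\kappa_V$, $\tau_V$, $c_2$, and $c_4$ (all assumed, not taken from an equation of state) and verifies that $\mathrm{Im}(\omega) \geq 0$ and that the small-$k$ damping agrees with the asymptotic sound modes quoted in the text.

```python
import numpy as np

# Sketch (illustrative coefficients): the longitudinal characteristic equation in the
# Landau frame, Eq. (eq:MLxx), rewritten as a quartic in omega and solved numerically.
cs2, kappa_V, tau_V = 0.2, 0.05, 0.5   # assumed c_s^2, conductivity, relaxation time
c2, c4 = 1.0, 0.1                      # assumed coefficients with c2 >= 0, cs2*c2 - c4 >= 0

def modes(k):
    """Roots of (w^2 - cs2 k^2) w (1 + i tau_V w) = i kappa_V (c2 w^2 - c4 k^2) k^2."""
    poly = [1j * tau_V,
            1.0,
            -1j * k**2 * (tau_V * cs2 + kappa_V * c2),
            -cs2 * k**2,
            1j * kappa_V * c4 * k**4]
    return np.roots(poly)

for k in [0.01, 0.1, 1.0]:
    w = modes(k)
    print(f"k={k}: stable={np.all(w.imag >= -1e-12)}, modes={np.round(w, 6)}")

# Small-k damping of the sound modes: kappa_V (cs2*c2 - c4) / (2 cs2) * k^2
k = 0.01
print("expected sound-mode damping:", kappa_V * (cs2 * c2 - c4) / (2 * cs2) * k**2)
```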
Those conditions are satisfied in thermodynamic systems since the former follows from the thermodynamic requirement that the fugacity should increase as the number density increases at a fixed energy density and the latter from $$\begin{aligned} c_s^2 c_2 - c_4 &=& \frac{\beta}{h} \bigg( \frac{\partial P}{\partial \alpha}\bigg|_\beta \frac{\partial \alpha}{\partial n} \bigg|_e - \frac{\partial P}{\partial \beta}\bigg|_\beta \frac{\partial \alpha}{\partial e} \bigg|_n \bigg)^2 \geq 0 ,\end{aligned}$$ where $\beta = 1/T$, using the definition of the sound velocity (\[eq:cs2\]) and the thermodynamic properties $$\begin{aligned} \frac{\partial P}{\partial \alpha}\bigg|_\beta &=& \frac{n}{\beta} , \label{eq:dpda}\ \ \frac{\partial P}{\partial \beta}\bigg|_\alpha = - \frac{h}{\beta} ,\end{aligned}$$ and $$\begin{aligned} \frac{\partial \beta}{\partial n} \bigg|_e &=& - \frac{\partial \alpha}{\partial e} \bigg|_n \label{eq:bnae}.\end{aligned}$$ Although it is possible to analytically solve the quartic equation, the general solutions are complicated. Here asymptotic forms at small $k$ are considered for more physical arguments. The propagating modes are, up to the leading order in real and imaginary parts, $$\begin{aligned} \omega = \pm c_s k + i \frac{\kappa_V (c_s^2 c_2 - c_4 )}{2 c_s^2} k^2 ,\end{aligned}$$ and the non-propagating mode is $$\begin{aligned} \omega = \frac{i}{\tau_V} ,\end{aligned}$$ aside from the trivial $\omega = 0$. They satisfy the causality condition $$\begin{aligned} \bigg | \frac{\partial \mathrm{Re} (\omega)}{\partial k} \bigg | \leq 1.\end{aligned}$$ The stability condition $$\begin{aligned} \mathrm{Im} (\omega) \geq 0,\end{aligned}$$ is satisfied for $c_2 c_s^2 - c_4\geq 0$, which is consistent with the Routh-Hurwitz stability conditions. The solutions to the transverse equations $$\begin{aligned} \det(\mathcal{M}^L_{xy}) &=& \det(\mathcal{M}^L_{xz}) \nonumber \\ &=& i \omega h (1+i\omega \tau_V) = 0, \label{eq:MLxy}\end{aligned}$$ are the non-propagating modes $\omega = 0$ and $\omega = i/\tau_V$. The causality and stability conditions are trivially satisfied. Those results of the longitudinal and the transverse modes indicate that the second-order diffusive hydrodynamics is causal and stable in the Landau frame. Eckart Frame ------------ In the Eckart frame, the energy-momentum tensor and the conserved charge current are expressed as $$\begin{aligned} \delta T^{\mu \nu} &=& (e+P) (\delta u^\mu u^\nu + u^\mu \delta u^\nu) + \delta e u^\mu u^\nu \nonumber \\ &-& \delta P g^{\mu \nu} + \delta W^\mu u^\nu + \delta W^\nu u^\mu, \\ \delta N^\mu &=& n \delta u^\mu + \delta n u^\mu ,\end{aligned}$$ and the energy dissipation current as $$\begin{aligned} \delta W^\mu &=& - \kappa_W \beta_\mathrm{eq} D \delta u^\mu - \kappa_W \nabla^\mu \delta \beta - \tau_W \Delta^{\mu \nu} D \delta W_\nu . 
\nonumber \\\end{aligned}$$ The perturbed equations of motion are $$\begin{aligned} \mathcal{M}^E_{xx} \begin{pmatrix} \delta e\\ \delta n\\ \delta u^x \\ \delta W^x \\ \end{pmatrix} &=&0 , \end{aligned}$$ and $$\begin{aligned} \mathcal{M}^E_{xy} \begin{pmatrix} \delta u^y \\ \delta W^y \\ \end{pmatrix} =0 , \ \mathcal{M}^E_{xz} \begin{pmatrix} \delta u^z \\ \delta W^z \\ \end{pmatrix} =0 , \end{aligned}$$ where $$\begin{aligned} \mathcal{M}^E_{xx} = \begin{pmatrix} i\omega&0&-ik h&-ik\\ -ik \frac{\partial P}{\partial e}|_n&-ik \frac{\partial P}{\partial n}|_e&i\omega h&i\omega \\ 0&i\omega&-ikn&0\\ ik\kappa_W \frac{\partial \beta}{\partial e}|_n&ik\kappa_W \frac{\partial \beta}{\partial n}|_e&i \omega \kappa_W \beta &1+i \omega \tau_W \end{pmatrix} , \nonumber \\\end{aligned}$$ and $$\begin{aligned} \mathcal{M}^E_{xy} = \mathcal{M}^E_{xz} = \begin{pmatrix} i\omega h&i \omega \\ i \omega \kappa_W \beta &1+i\omega \tau_V \end{pmatrix} .\end{aligned}$$ The longitudinal equations $\det(\mathcal{M}^E_{xx}) = 0$ lead to $$\begin{aligned} \omega^2 - c_s^2 k^2 = \frac{i \kappa_W (d_2 \omega^2 - d_4 k^2) k^2 }{\omega [1+i (\tau_W - \kappa_W \beta/h ) \omega]} , \label{eq:MExx}\end{aligned}$$ where $$\begin{aligned} d_2 &=& \frac{n}{h} \bigg( \frac{\partial \beta}{\partial n} \bigg|_e + \frac{\beta}{h} \frac{\partial P}{\partial n}\bigg|_e \bigg),\\ d_4 &=& \frac{n}{h} \bigg( \frac{\partial \beta}{\partial n} \bigg|_e \frac{\partial P}{\partial e}\bigg|_n - \frac{\partial \beta}{\partial e}\bigg|_n \frac{\partial P}{\partial n}\bigg|_e \bigg) .\end{aligned}$$ Here, $\mathrm{Im} (\omega)$ stays semi-positive when $d_2 \geq 0$, $c_s^2 d_2 - d_4 \geq 0$, and $$\begin{aligned} \tau_W - \frac{\beta}{h} \kappa_W \geq 0, \label{eq:twkw}\end{aligned}$$ according to the Routh-Hurwitz stability criteria. The first two conditions are again satisfied in thermodynamic systems as $$\begin{aligned} d_2 &=& \frac{n^2}{h^2} \frac{\partial \alpha}{\partial n}\bigg|_e \geq 0,\\ c_s^2 d_2 - d_4 &=& \frac{n^2 \beta}{h^3} \bigg( \frac{\partial P}{\partial \alpha}\bigg|_\beta \frac{\partial \alpha}{\partial n} \bigg|_e - \frac{\partial P}{\partial \beta}\bigg|_\beta \frac{\partial \alpha}{\partial e} \bigg|_n \bigg)^2 \geq 0,\nonumber \\\end{aligned}$$ using the relations (\[eq:dpda\]) and (\[eq:bnae\]). Note that $d_2 = c_2 n^2/h^2$ and $d_4 = c_4 n^2/h^2$. The third condition is also consistent with the ones reported in Ref. [@Hiscock:1987zz; @Osada:2011gx]. The results indicate that second-order dissipative hydrodynamics is stable in the Eckart frame if the transport coefficients satisfy the condition (\[eq:twkw\]). It can be immediately seen that the first-order theory is unstable in the Eckart frame by taking the limit of vanishing relaxation time $\tau_W\to 0$. It is important to note that the space-like projection of the energy-momentum conservation law leads to $$\begin{aligned} (e+P)Du^\mu &=& \nabla^\mu P - W^\mu \nabla_\nu u^\nu \nonumber \\ &-& W^\nu \nabla_\nu u^\mu - \Delta^{\mu \nu} DW_\nu , \label{eq:emcperp}\end{aligned}$$ which is also used to convert the thermodynamic forces (\[eq:thermoforce\]). The higher order terms in the identity is important even in the stability analyses of the first-order theory because if one neglects the correction by truncation and use it to remove the acceleration term in the energy dissipation current, the equation can become seemingly “stable" at the first order. This is because the relaxation term-like correction originating from the last term in Eq. 
(\[eq:emcperp\]) is effectively introduced by the procedure at the second order even though it is not apparent. The prefactor before this effective relaxation term is $\kappa_W/(e+P)T$, which is the minimum value of the relaxation time required for hydrodynamic stability. The constitutive relation is qualitatively modified and thus cannot be regarded as a first-order theory. The asymptotic forms of the propagating and the non-propagating modes at small $k$ are $$\begin{aligned} \omega = \pm c_s k + i \frac{\kappa_W (c_s^2 d_2 - d_4 )}{2 c_s^2} k^2 ,\end{aligned}$$ and $$\begin{aligned} \omega = \frac{i}{\tau_W - \kappa_W \beta/h} ,\end{aligned}$$ aside from $\omega = 0$. Those modes are causal and stable if the Routh-Hurwitz criteria are satisfied. The transverse equations $$\begin{aligned} \det(\mathcal{M}^E_{xy}) &=& \det(\mathcal{M}^E_{xz}) \nonumber \\ &=& i\omega [h+i\omega (h \tau_W - \kappa_W \beta)] = 0, \label{eq:MExy}\end{aligned}$$ have the non-propagating solutions $\omega = i/(\tau_W - \kappa_W \beta/h)$ and $\omega = 0$. One can see that all the modes satisfy the stability and causality conditions if the relaxation time is sufficiently larger than the conductivity (\[eq:twkw\]). Comparing the two frames, the characteristic equations in the Landau frame (\[eq:MLxx\]) and (\[eq:MLxy\]) and their solutions are equivalent to those in the Eckart frame (\[eq:MExx\]) and (\[eq:MExy\]) under the identification of the conductivities (\[kappaLE\]) and the relaxation times (\[tauLE\]) that follow from the matching of the entropy production. The relation between the relaxation times in the two frames implies that the Eckart stability condition on $\tau_W$ is closely related to the fact that $\tau_V$ is semi-positive in the other frame. Numerical Application to Heavy-Ion Collisions {#sec4} ============================================= The effects of a frame choice on relativistic nuclear collisions are demonstrated by solving the energy dissipative and the baryon diffusive hydrodynamic equations. For this purpose, a non-boost invariant (1+1)-dimensional hydrodynamic system is considered [@Monnai:2012jc]. Full (3+1)-dimensional calculations for quantitative analyses of the data sets from the beam energy scan experiments are beyond the scope of the current study and will be presented elsewhere. The hydrodynamic model ---------------------- Hydrodynamic systems are characterized by the equation of state and the transport coefficients. The equation of state at finite baryon density [@Monnai:2019hkn] is based on lattice QCD [@Bazavov:2014pvz; @Bazavov:2012jq; @Ding:2015fca; @Bazavov:2017dus] and the hadron resonance gas model. The strangeness and the electric charge are not considered here for simplicity and left for future studies. The transport coefficients are chosen as $\kappa_W = c_W (e+P)$, $\tau_W = \tilde{c}_W \kappa_W / (e+P)T$, and $\chi_W^{a,b,c,d,e} = 0$ in the Eckart frame. The model conductivity is motivated by the non-equilibrium statistical operator method for the $\phi^4$-theory [@Hosoya:1983id] coupled with the lower bound of shear viscosity conjectured in the gauge-string correspondence [@Kovtun:2004de]. $c_W = 10$ and $\tilde{c}_W = 2$ are used for demonstration. The corresponding coefficients in the Landau frame are obtained using the relations (\[kappaLE\])-(\[chieLE\]).
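For concreteness, the following minimal sketch (with illustrative thermodynamic state values, which are assumptions and not equation-of-state output) evaluates the model coefficients above, checks the Eckart stability bound (\[eq:twkw\]), and converts them to the Landau frame with Eqs. (\[kappaLE\]) and (\[tauLE\]).

```python
def model_coefficients(e, P, n, T, c_W=10.0, c_W_tilde=2.0):
    """Model transport coefficients (Eckart frame) and their Landau-frame counterparts.

    The state (e, P, n, T) is an illustrative input; in practice it comes from the
    equation of state along the hydrodynamic evolution.
    """
    h = e + P
    kappa_W = c_W * h                       # kappa_W = c_W (e + P)
    tau_W = c_W_tilde * kappa_W / (h * T)   # tau_W = c~_W kappa_W / [(e+P) T]
    # Eckart stability bound, Eq. (eq:twkw): tau_W - kappa_W / [(e+P) T] >= 0
    margin = tau_W - kappa_W / (h * T)      # = (c~_W - 1) kappa_W / (h T), >= 0 for c~_W >= 1
    # Landau-frame counterparts from Eqs. (kappaLE) and (tauLE)
    kappa_V = kappa_W * (n / h) ** 2
    tau_V = margin                          # tau_V equals the Eckart stability margin
    return kappa_W, tau_W, kappa_V, tau_V, margin >= 0.0

print(model_coefficients(e=1.0, P=0.3, n=0.1, T=0.2))
```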
The initial conditions are parametrically constructed as $$\begin{aligned} e(\tau_\mathrm{th},\eta_s) &=& a_1 \exp(-a_2 \eta_s^2 - a_3 \eta_s^4), \\ n_B(\tau_\mathrm{th},\eta_s) &=& n_B^+(\eta_s) + n_B^-(\eta_s),\end{aligned}$$ where $$\begin{aligned} &&n_B^\pm(\eta_s) = \nonumber \\ &&\begin{cases} b_1 \exp[-b_2 (\eta_s \mp \eta_0)^2 - b_3 (\eta_s \mp \eta_0)^4] & \mathrm{for} \ \pm \eta_s > \eta_0, \\ b_1 \exp[-\tilde{b}_2 (\eta_s \mp \eta_0)^2 - \tilde{b}_3 (\eta_s \mp \eta_0)^4] & \mathrm{for} \ \pm \eta_s \leq \eta_0, \end{cases}\nonumber \\\end{aligned}$$ at $\tau_\mathrm{th} = 3$ fm/$s$. The parameters are tuned to roughly reproduce the SPS data for 17.3 GeV Pb+Pb collisions [@Appelshauser:1998yb] without dissipative corrections. Here $a_1= 7.19$ (GeV/fm$^{3}$), $a_2 = 0.8$, and $a_3 = 0.05$ for the energy density and $b_1 = 0.45$ (1/fm$^{3}$), $b_2 = 0.4$, $b_3 = 4.0$, $\tilde{b}_2 = 0.55$, $\tilde{b}_3 = 2.3$, and $\eta_0 = 0.69$ for the net baryon density. It should be noted again that they are for demonstration and not for full quantitative analyses of the data because even though the results exhibit fair agreement with the data, the transverse expansion and the hadronic transport are not taken into account here. The prolonged space-time evolution may partially mimic the transport effects. The initial values of the energy dissipation and the baryon diffusion currents are set to zero to allow comparison of the effects of those processes coming from hydrodynamic evolution. The kinetic freeze-out is estimated using the Cooper-Frye formula [@Cooper:1974mv] with off-equilibrium corrections to the phase-space distribution functions [@Teaney:2003kp; @Monnai:2009ad]. It reads $$\begin{aligned} E_i\frac{dN^i}{d^3p} = \frac{g_i}{(2\pi)^3} \int_\Sigma p_i^\mu d\sigma_\mu (f^0_i + \delta f_i), \label{eq:CF}\end{aligned}$$ where $g_i$ is the degeneracy, $\Sigma$ is the freeze-out hypersurface, and $d\sigma_\mu$ is the freeze-out hypersurface element. $f_i^0$ is the equilibrium (Bose-Einstein or Fermi-Dirac) phase-space distribution function for the $i$-th particle species and $\delta f_i$ is the off-equilibrium distortion of the distribution function. The expression of $\delta f$ in the Landau and the Eckart frames are shown in Appendix \[sec:B\]. The hypersurface is determined with the freeze-out energy density $e_\mathrm{f} = 0.4$ GeV/fm$^3$. Space-time evolution {#sec4B} -------------------- ![The space-time rapidity dependences of (a) the entropy density and (b) the net baryon density at the initial time (thin solid line) and those after ideal (thick solid line), baryon diffusive (dashed line), and energy dissipative (dotted line) hydrodynamic evolutions at $\tau = 20$ fm/$c$.[]{data-label="fig:1"}](entropy.pdf "fig:"){width="3.3in"} ![The space-time rapidity dependences of (a) the entropy density and (b) the net baryon density at the initial time (thin solid line) and those after ideal (thick solid line), baryon diffusive (dashed line), and energy dissipative (dotted line) hydrodynamic evolutions at $\tau = 20$ fm/$c$.[]{data-label="fig:1"}](netbaryon.pdf "fig:"){width="3.3in"} First, I investigate the off-equilibrium hydrodynamic evolution in the Landau and the Eckart frames and compare them with the ideal hydrodynamic evolution. The entropy and the net baryon distributions at the initial time and $\tau = 20$ fm/$c$ are shown in Fig. \[fig:1\]. It should be noted that the lifetime of the fireball is longer in the current geometry owing to the lack of transverse expansion. 
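The initial profiles entering these evolutions can be reproduced directly from the parametrization above. Below is a minimal sketch with the quoted parameter values; the $\eta_s$ grid is an arbitrary illustrative choice.

```python
import numpy as np

# Sketch of the parametric initial conditions at tau_th = 3 fm/c with the quoted values.
a1, a2, a3 = 7.19, 0.8, 0.05                   # energy density parameters (GeV/fm^3, -, -)
b1, b2, b3 = 0.45, 0.4, 4.0                    # net baryon parameters (1/fm^3, -, -)
b2t, b3t, eta0 = 0.55, 2.3, 0.69

def e_init(eta_s):
    """Initial energy density profile e(tau_th, eta_s)."""
    return a1 * np.exp(-a2 * eta_s**2 - a3 * eta_s**4)

def nB_component(eta_s, sign):
    """n_B^+ (sign=+1) or n_B^- (sign=-1); the outer side falls off with (b2, b3),
    the inner side with (b2~, b3~)."""
    x = eta_s - sign * eta0
    outer = sign * eta_s > eta0
    return np.where(outer,
                    b1 * np.exp(-b2 * x**2 - b3 * x**4),
                    b1 * np.exp(-b2t * x**2 - b3t * x**4))

eta = np.linspace(-3.0, 3.0, 121)
e0 = e_init(eta)
nB0 = nB_component(eta, +1) + nB_component(eta, -1)
print(f"peak e = {e0.max():.2f} GeV/fm^3, n_B at eta_s = 0: {nB0[60]:.3f} fm^-3")
```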
The effect of baryon diffusion or energy dissipation on the entropy density is small for the current choice of transport coefficients. The effect on the net baryon density, on the other hand, is visible. The baryon diffusion causes stronger stopping because the fugacity gradients induce net baryon diffusion from forward to mid-rapidity regions. At the edges near $|\eta_s| \sim 2$, the baryon diffusion is in the outward direction. The energy dissipation, on the other hand, is less trivial because of the interplay of the temperature gradient and the acceleration terms. The temperature gradients carry the energy density towards forward rapidity regions while the acceleration correction prevents flow convection and keeps the density in the mid-rapidity region. The effects cancel at first order in the limit of vanishing chemical potential as seen in (\[eq:thermoforce\]). The off-equilibrium deformation of the net baryon distribution in the Eckart frame can be mainly caused by the deceleration of flow as seen in Fig. \[fig:2\] near mid-rapidity. The off-equilibrium evolutions of the net baryon distribution in the Landau frame and in the Eckart frame are quantitatively similar to each other. This can be a consequence of the fact that the frame dependence of the thermodynamic quantities is of second order. The difference between the flow rapidity $Y_f$ and the space-time rapidity $\eta_s$ (Fig. \[fig:2\]) implies that the Landau flow is closer to the ideal flow than the Eckart flow. Here the flow rapidity is defined as $$\begin{aligned} u^\mu = (\cosh Y_f,0,0,\sinh Y_f),\end{aligned}$$ which reduces to the boost-invariant flow when $Y_f - \eta_s = 0$. The flow is affected more in the Eckart frame possibly because the energy dissipation is directly coupled to the equation of motion for flow acceleration (\[eq:emcperp\]). At forward space-time rapidity $|\eta_s| > 1.5$, the Eckart flow is faster than the Landau flow because of the peak position in the net baryon distribution. ![The space-time rapidity dependence of the difference between the flow and the space-time rapidities at the initial time (thin solid line) and those after ideal (thick solid line), baryon diffusive (dashed line), and energy dissipative (dotted line) hydrodynamic evolutions at $\tau = 20$ fm/$c$.[]{data-label="fig:2"}](flow.pdf){width="3.3in"} Charged particle and net baryon rapidity distributions ------------------------------------------------------ The charged hadron rapidity distributions are shown in Fig. \[fig:3\]. The effect of energy dissipation in the Eckart frame is visible while that of baryon diffusion is negligible when the off-equilibrium correction at freeze-out (\[eq:CF\]) is not taken into account. The difference comes from the difference between the Landau and the Eckart flow and the lack of the $\delta f$ corrections. When the correction is incorporated, the effect of energy dissipation becomes small and similar to that of baryon diffusion as found in Fig. \[fig:3\] (b).
![The rapidity distributions of charged particles (a) without and (b) with $\delta f$ correction at freeze-out for the ideal hydrodynamic system (solid line) compared to those for the systems with baryon diffusion in the Landau frame (dashed line) and with energy dissipation in the Eckart frame (dotted line).[]{data-label="fig:3"}](dnchdy.pdf "fig:"){width="3.3in"} ![The rapidity distributions of charged particles (a) without and (b) with $\delta f$ correction at freeze-out for the ideal hydrodynamic system (solid line) compared to those for the systems with baryon diffusion in the Landau frame (dashed line) and with energy dissipation in the Eckart frame (dotted line).[]{data-label="fig:3"}](dnchdy_df.pdf "fig:"){width="3.3in"} The net baryon rapidity distribution with baryon diffusion in the Landau frame and with energy dissipation in the Eckart frame are shown in Fig. \[fig:4\]. The off-equilibrium effects are visible in both frames without the $\delta f$ correction. This is consistent with the observation of hydrodynamic evolution of the net baryon density in Sec. \[sec4B\]. The baryon stopping is larger in the Eckart frame because of the flow deceleration. The effect of $\delta f$ correction at freeze-out is found to enhance the baryon stopping caused by the baryon diffusion. Again the net baryon distributions in the two frames become close to each other once the off-equilibrium correction at freeze-out is properly taken into account. ![The rapidity distributions of net baryon number (a) without and (b) with $\delta f$ correction at freeze-out for the ideal hydrodynamic system (solid line) compared to those for the systems with baryon diffusion in the Landau frame (dashed line) and with energy dissipation in the Eckart frame (dotted line).[]{data-label="fig:4"}](dnbdy.pdf "fig:"){width="3.3in"} ![The rapidity distributions of net baryon number (a) without and (b) with $\delta f$ correction at freeze-out for the ideal hydrodynamic system (solid line) compared to those for the systems with baryon diffusion in the Landau frame (dashed line) and with energy dissipation in the Eckart frame (dotted line).[]{data-label="fig:4"}](dnbdy_df.pdf "fig:"){width="3.3in"} It is worth noting that the effect of the $\delta f$ correction is larger in the Eckart frame for the charged particle distribution while it is larger in the Landau frame for the net baryon distribution (Fig. \[fig:5\]). The results suggest that an adequate treatment of $\delta f$ corrections are important for qualitative understanding of the flow observables. ![The ratios of the rapidity distributions with and without $\delta f$ correction for (a) charged particles and (b) for net baryon number in the Landau frame (dashed line) and in the Eckart frame (dotted line).[]{data-label="fig:5"}](dnchdy_ratio.pdf "fig:"){width="3.2in"} ![The ratios of the rapidity distributions with and without $\delta f$ correction for (a) charged particles and (b) for net baryon number in the Landau frame (dashed line) and in the Eckart frame (dotted line).[]{data-label="fig:5"}](dnbdy_ratio.pdf "fig:"){width="3.2in"} Discussion and Conclusions {#sec5} ========================== The baryon diffusive and the energy dissipative hydrodynamics at the second-order in the Landau and the Eckart frames have been discussed. The system is stable at the second order when the relaxation time is semi-positive in the Landau frame and it is larger than the minimum value in the Eckart frame. 
The mode analyses imply that causality is also satisfied in the long-wavelength limit. The transport coefficients of the two frames at the first and the second order are shown to be related. The full second-order terms are found to be necessary for a consistent matching. The results are generic and independent of the individual derivation method of the hydrodynamic equations of motion. The frame dependence is tested in a numerical hydrodynamic model of relativistic heavy-ion collisions. The net baryon number is chosen as the conserved charge of the system and the space-time evolutions of a QCD medium in the Landau and the Eckart frames are compared to that of the inviscid system. The space-time rapidity distribution of the entropy density is not much affected by the dissipative currents while that of the net baryon density is visibly modified. The effects of the baryon diffusion and the energy dissipation are found to be quantitatively similar for those thermodynamic variables. The flow, on the other hand, is implied to be different in the Landau and the Eckart frames. The charged particle distribution is estimated in both frames. The result is found to be mostly unaffected by the baryon diffusion for the chosen set of transport coefficients. The distribution for the energy dissipation is also not modified much owing to the cancellation of the effects of the flow deceleration and the off-equilibrium correction at freeze-out. A larger baryon stopping is observed in the net baryon distribution owing to the fugacity gradient in the Landau frame and also to the flow deceleration in the Eckart frame. The $\delta f$ correction is found to increase the baryon stopping effect of baryon diffusion so that the difference between the net baryon distributions of the two frames becomes small. The results indicate that the hydrodynamic estimation of the observables may not depend much on the choice of the local rest frame in relativistic nuclear collisions. It would be important to investigate other observables that are directly dependent on the flow, such as thermal photons with blue shifting, to elucidate the issue of the Landau and the Eckart frames in hydrodynamic models. It is worth noting that the present numerical analyses cover the finite temperature and chemical potential regions near the QCD transition explored by relativistic nuclear collisions. One should be careful when determining a frame in the zero temperature or chemical potential limit. A careful treatment of the equation of state and the transport coefficients may also become important in such cases. Future prospects include the application to full (3+1)-dimensional analyses of the beam energy scan data of flow-related observables to extract relations between the initial conditions and the transport coefficients in each frame and to investigate the validity of the choice of the local rest frame more quantitatively. The author is grateful for the valuable comments by T. Kunihiro. The work of A.M. was supported by JSPS KAKENHI Grant Number JP19K14722.
ENTROPY PRODUCTION IN LANDAU AND ECKART FRAMES {#sec:A} ============================================== The relation between the transport coefficients can be determined by the identification of the entropy production of the Landau and the Eckart frames: $$\begin{aligned} \partial_\mu s^\mu = - \frac{V_L^\mu V^L_\mu}{\kappa_V} = - \frac{W_E^\mu W^E_\mu}{\kappa_W} .\end{aligned}$$ The entropy production in the Landau frame up to the next-to-leading order is $$\begin{aligned} \partial_\mu s^\mu &=& - \kappa_V \nabla_\mu^L \frac{\mu}{T} \nabla^\mu_L \frac{\mu}{T} \nonumber \\ &+& 2 \tau_V \nabla_\mu^L \frac{\mu}{T} D_L V_L^\mu - 2 \chi_W^a \nabla_\mu^L \frac{\mu}{T} V_L^\mu D_L \frac{\mu}{T} \nonumber \\ &-& 2 \chi_V^b \nabla_\mu^L \frac{\mu}{T} V_L^\mu D_L \frac{1}{T} - 2 \chi_V^c \nabla_\mu^L \frac{\mu}{T} V_L^\mu \nabla^L_\nu u_L^\nu \nonumber \\ &-& 2 \chi_W^d \nabla_\mu^L \frac{\mu}{T} V_L^\nu \nabla^L_\nu u_L^\mu - 2 \chi_W^e \nabla_\mu^L \frac{\mu}{T} V_L^\nu \nabla_L^\mu u^L_\nu + \mathcal{O}(\delta^4). \nonumber \\\end{aligned}$$ The entropy production in the Eckart frame can be expressed using the variables in the Landau frame as, up to the same order, $$\begin{aligned} \partial_\mu s^\mu &=& - \frac{W_E^\mu W^E_\mu}{\kappa_W} \nonumber \\ &=& - \kappa_W \bigg( \frac{n}{e+P} \bigg)^2 \nabla_\mu^L \frac{\mu}{T} \nabla^\mu_L \frac{\mu}{T} \nonumber \\ &+& 2 \bigg[ \tau_W - \frac{\kappa_W}{(e+P)T} \bigg] \nabla_\mu^L \frac{\mu}{T} D_L V_L^\mu \nonumber \\ &-& 2 \bigg[ \chi_W^a - \frac{\tau_W nT}{e+P} \bigg] \nabla_\mu^L \frac{\mu}{T} V_L^\mu D_L \frac{\mu}{T} \nonumber \\ &-& 2 \bigg[ \chi_W^b + \tau_W T - \frac{\kappa_W}{(e+P)} \bigg] \nabla_\mu^L \frac{\mu}{T} V_L^\mu D_L \frac{1}{T} \nonumber \\ &-& 2 \bigg[ \chi_W^c + \frac{\kappa_W}{(e+P)T} \bigg] \nabla_\mu^L \frac{\mu}{T} V_L^\mu \nabla^L_\nu u_L^\nu \nonumber \\ &-& 2 \bigg[ \chi_W^d + \frac{\kappa_W}{(e+P)T} \bigg] \nabla_\mu^L \frac{\mu}{T} V_L^\nu \nabla^L_\nu u_L^\mu \nonumber \\ &-& 2 \chi_W^e \nabla_\mu^L \frac{\mu}{T} V_L^\nu \nabla_L^\mu u^L_\nu + \mathcal{O}(\delta^4).\end{aligned}$$ It should be noted that the thermodynamic forces of the energy dissipation and the baryon diffusion are mutually convertible using the hydrodynamic identity derived from the Gibbs-Duhem relation and energy-momentum conservation as $$\begin{aligned} &\bigg( \nabla_E^\mu \frac{1}{T} + \frac{1}{T} D_E u^\mu \bigg) = \frac{n}{e+P} \nabla_E^\mu \frac{\mu}{T} \nonumber \\ &- \frac{1}{(e+P)T} [W_E^\mu \nabla^E_\nu u_E^\nu + W_E^\nu \nabla^E_\nu u_E^\mu + (\Delta_{E})^{\mu}_{\ \nu} D_E W_E^\nu] \nonumber \\ &= \frac{n}{e+P} \nabla_L^\mu \frac{\mu}{T} - \frac{n}{e+P} \bigg( u_L^\mu \frac{V_L^\nu}{n} \nabla^L_\nu \frac{\mu}{T} + \frac{V_L^\mu}{n} D^L \frac{\mu}{T} \bigg) \nonumber \\ &+ \frac{1}{nT} \bigg[V_L^\mu \nabla^L_\nu u_L^\nu + V_L^\nu \nabla^L_\nu u_L^\mu + (\Delta_{L})^{\mu}_{\ \nu} D_L V_L^\nu \nonumber \\ &+ \frac{n}{e+P} V_L^\mu D_L \frac{e+P}{n} \bigg] + \mathcal{O}(\delta^3) , \label{eq:thermoforce}\end{aligned}$$ where $$\begin{aligned} D_L \frac{e+P}{n} &=& - \frac{e+P}{n} T D_L \frac{1}{T} +T D_L \frac{\mu}{T} .\end{aligned}$$ The correspondences between the transport coefficients in the two frames can be obtained as Eqs. (\[kappaLE\])-(\[chieLE\]). FREEZE-OUT WITH OFF-EQUILIBRIUM DISTRIBUTION {#sec:B} ============================================ The distribution function in relativistic systems with the energy dissipation and the baryon diffusion is estimated using the Grad’s moment method [@Grad] based on Ref. 
[@Israel:1979wp; @Monnai:2010qp]. The distribution can be decomposed into the equilibrium and the off-equilibrium parts as $$\begin{aligned} f_0^i &=& \{\exp[(p^\mu u_\mu - b_i \mu_B)/T]\mp 1\}^{-1}, \\ \delta f^i &=& -f_0^i (1\pm f_0^i) (b_i p_i^\mu \varepsilon_\mu^B + p_i^\mu p_i^\nu \varepsilon_{\mu\nu}),\end{aligned}$$ where $b_i$ is the quantum number for baryons. The upper sign is for bosons and the lower one for fermions. The auxiliary vector and tensor $\varepsilon_\mu^B$ and $\varepsilon_{\mu \nu}$ are expressed in terms of macroscopic dissipative currents as $$\begin{aligned} \varepsilon_\mu^{L;B} = D_V V^L_\mu, \ \ \varepsilon^L_{\mu \nu} = B_V (V^L_\mu u^L_\nu + V^L_\nu u^L_\mu),\end{aligned}$$ in the Landau frame and $$\begin{aligned} \varepsilon_\mu^{E;B} = D_W W^E_\mu, \ \ \varepsilon^E_{\mu \nu} = B_W (W^E_\mu u^E_\nu + W^E_\nu u^E_\mu),\end{aligned}$$ in the Eckart frame. The coefficients can be determined by the self-consistency condition that the off-equilibrium distribution reproduces the respective dissipative current within the framework of kinetic theory. They are $$\begin{aligned} D_W = - 2J_{31}^B \mathcal{J}_2^{-1}, \ \ B_W = J_{21}^{BB} \mathcal{J}_2^{-1},\end{aligned}$$ and $$\begin{aligned} D_V = 2J_{41} \mathcal{J}_2^{-1}, \ \ B_V = -J_{31}^B \mathcal{J}_2^{-1},\end{aligned}$$ where $$\begin{aligned} \mathcal{J}_2 = 2(J_{31}^B J_{31}^B-J_{41}J_{21}^{BB}).\end{aligned}$$ Here the moments are defined as $$\begin{aligned} J^{B...B}_{k l} &=& \frac{1}{(2l+1)!!} \sum_i \int \frac{(b_i ... b_i) d^3p}{(2\pi)^3 E_i} \nonumber \\ &\times& [m_i^2 - (p\cdot u)^2]^{l} (p\cdot u)^{k-2l} f_0^i(1\pm f_0^i) .\end{aligned}$$ The off-equilibrium corrections are essential for conserving the energy-momentum and the net baryon number during the conversion from fluid to particles at freeze-out. The underlying equations of state for the hydrodynamic model and relativistic kinetic theory should be the same for successful conversion. The hadron gas with all resonances below 2 GeV in mass [@Tanabashi:2018oca] is used for the numerical estimation of the distortion coefficients to match the constructions of $\delta f$ and the equation of state.
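As an illustration of how the distortion coefficients follow from these moments, the sketch below evaluates $J_{41}$, $J_{31}^B$, and $J_{21}^{BB}$ in the local rest frame for a single nucleon-like species, assuming Maxwell-Boltzmann statistics and an overall degeneracy factor; the mass, temperature, chemical potential, and degeneracy are illustrative assumptions rather than the full resonance-gas input used for the actual estimation.

```python
import numpy as np

# Simplified single-species sketch (Maxwell-Boltzmann limit, natural units, local rest
# frame where p.u = E) of the moments J_{kl}^{B..B} and the distortion coefficients.
m, b, T, mu = 0.938, 1, 0.15, 0.30           # assumed nucleon-like mass, baryon number, T, mu_B (GeV)
g = 4                                        # assumed degeneracy factor (spin and isospin)

p = np.linspace(1e-4, 5.0, 4000)             # momentum grid (GeV)
dp = p[1] - p[0]
E = np.sqrt(p**2 + m**2)
f0 = np.exp(-(E - b * mu) / T)               # Boltzmann limit, so f0(1 +- f0) -> f0

def J(k, l, nB):
    """J^{B..B}_{kl} with nB powers of the baryon number, rest-frame reduction."""
    dblfac = np.prod(np.arange(2 * l + 1, 0, -2))               # (2l+1)!!
    integrand = g / (2 * np.pi**2) * p**2 / E \
                * (-(p**2))**l * E**(k - 2 * l) * f0
    return b**nB / dblfac * np.sum(integrand) * dp

J41, J31B, J21BB = J(4, 1, 0), J(3, 1, 1), J(2, 1, 2)
calJ2 = 2 * (J31B**2 - J41 * J21BB)
D_W, B_W = -2 * J31B / calJ2, J21BB / calJ2
D_V, B_V = 2 * J41 / calJ2, -J31B / calJ2
print("D_W, B_W, D_V, B_V =", D_W, B_W, D_V, B_V)
```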
1
--- abstract: 'The diversity of structures in the Universe (from the smallest galaxies to the largest superclusters) has formed under the pull of gravity from the tiny primordial perturbations that we see imprinted in the cosmic microwave background. A quantitative description of this process would require description of motion of zillions of dark matter particles. This impossible task is usually circumvented by *coarse-graining* the problem: one either considers a Newtonian dynamics of “particles” with macroscopically large masses *or* approximates the dark matter distribution with a continuous density field. There is no closed system of equations for the evolution of the matter density field alone and instead it should still be discretized at each timestep. In this work we describe a method of solving the full 6-dimensional Vlasov-Poisson equation via a system of auxiliary Schrödinger-like equations. The complexity of the problem gets shifted into the choice of the number and shape of the initial wavefunctions that should only be specified at the beginning of the computation (we stress that these wavefunctions have nothing to do with quantum nature of the actual dark matter particles). We discuss different prescriptions to generate the initial wave functions from the initial conditions and demonstrate the validity of the technique on two simple test cases. This new simulation algorithm can in principle be used on an arbitrary distribution function, enabling the simulation of warm and hot dark matter structure formation scenarios.' author: - | Matthieu Schaller$^1$[^1], Claude Becker$^2$, Oleg Ruchayskiy$^2$, Alexey Boyarsky$^{3,4}$ and Mikhail Shaposhnikov$^2$\ $^1$Institute for Computational Cosmology, Durham University, South Road, Durham, UK, DH1 3LE\ $^2$Institut de Théorie des Phénomènes Physiques, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland\ $^3$Instituut-Lorentz for Theoretical Physics, Universiteit Leiden, Niels Bohrweg 2, Leiden, The Netherlands\ $^4$Bogolyubov Institute of Theoretical Physics, Kyiv, Ukraine bibliography: - './bibliography.bib' date: 'Accepted 2014 May 29. Received 2014 May 21; in original form 2013 October 27' nocite: '[@*]' title: A new framework for numerical simulations of structure formation --- \[firstpage\] cosmology: theory, dark matter, large-scale structure of Universe – methods: N-body, numerical Introduction {#sec:introduction} ============ The Lambda Cold Dark Matter ($\Lambda$CDM) cosmological model is the current theoretical framework to describe the formation and evolution of large scale structures in the Universe. In this model, the growth of structures occurs through the hierarchical collapse of a collisionless fluid of cold dark matter (CDM). Small initial perturbations grow through merging to create more and more massive halos and complex sub-structures (e.g. @Davis1985 [@Bertschinger1998; @Springel2005]). These initial perturbations are thought to be (almost) Gaussian, created from quantum fluctuations during the inflation epoch and are the origin of all the objects seen in the Universe. The knowledge of the precise initial conditions and a comprehensive understanding of the underlying physical laws should, in principle, enable us to evolve these fluctuations forward in time and provide a test of the current models. Most of the important features observable in the Universe today have grown via non-linear evolution from tiny primordial density perturbations. 
This makes the whole process of understanding their evolution complex and requires the use of techniques well beyond linear perturbation theory [@Bernardeau2002]. Indeed, at scales below roughly $\unit[10]{Mpc}$ the evolution of structures has already entered the non-linear stage (i.e. the density contrast $\delta \rho$ is of the order of, or much greater than, the background density $\bar\rho$). The main resource available to cosmologists is the use of bigger and bigger cosmological simulations, most of them using the particle technique known as [$N$-body ]{}simulation [@Hockney1988; @Dehnen2011]. Numerical simulations may, for instance, help shed some light on the unknown nature of dark matter. Clearly, the number of dark matter particles is way too large to track each of them individually on a computer. Therefore most of the cosmological [$N$-body ]{}simulations use macroscopically large simulation “particles” (with masses ranging from much larger than that of a DM particle up to that of a small galaxy, $10^8-10^9 {M_\odot}$). The problem of dark matter evolution in the Universe can be formulated as an evolution of a collisionless self-gravitating fluid. The main tool used to describe this dark matter fluid is the *phase space density distribution* $f(x,v,t)$, defined such that $f(x,v,t)\,d^3x\,d^3v$ represents the mass of material at position $x$ moving at velocity $v$ at time $t$. This function is usually normalized such that its integral over all positions and velocities gives the total mass $$\int d^3x\int d^3v f(x,v,t) = M_{\rmn{tot}}.$$ Notice that one could also normalize this integral to one or to the total number of particles in the system. When integrating over velocity space only, one gets the usual mass density $\rho(x)$, whereas integrating over all space returns the velocity distribution $d_v(v)$: $$\rho(x) = \int f(x,v,t)d^3v, \qquad d_v(v) = \int f(x,v,t) d^3x.$$ This distribution function obeys the Liouville theorem [@Binney2008] and if the only force acting on the particles is the gravitational potential $U(x)$, we can write a closed system of equations for the formation of structures [@Bertschinger1995; @Bernardeau2002]: $$\begin{aligned} {\frac{\partial f}{\partial \tau}} + \frac{v}{a(\tau)}{\frac{\partial f}{\partial x}} - a(\tau)\nabla U {\frac{\partial f}{\partial v}} &= 0, \\ \nabla^2 U &= 4\pi Ga^2(\tau) \delta\rho, \end{aligned} \label{eq:VP}$$ where $x$ and $v$ are comoving coordinates and velocities, $a(\tau)$ is the scale factor and $\tau$ is the conformal time (we will use this convention throughout this paper). This Vlasov-Poisson system has no analytical solution in the general case and the only way to handle it is to use numerical techniques. For completeness, we also give the expressions for the density and density contrast: $$\begin{aligned} \rho(x,\tau) &=& \frac{1}{a^3(\tau)}\int f(x,v,\tau)d^3v, \\ \delta\rho(x,\tau) &=& \frac{1}{a^3(\tau)}\left(\int f(x,v,\tau)d^3v - \frac{M_{\rmn{tot}}}{V_{\rmn{tot}}} \right),\end{aligned}$$ where $V_{\rmn{tot}}$ is the total comoving volume over which we average. Structure formation simulations {#sec:theory} =============================== The numerical analysis of the Vlasov-Poisson system of equations (\[eq:VP\]) is very challenging. The first reason is that the system is six-dimensional. Recent simulations can only handle up to $64$ resolution elements in each spatial and velocity space direction [@Yoshikawa2013] due to memory restrictions. 
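To make the memory argument concrete, a quick back-of-the-envelope sketch (the resolution values are illustrative):

```python
# Rough memory footprint of a full 6D phase-space grid with n cells per
# dimension, storing one double-precision value per cell.
for n in (64, 128, 256):
    cells = n ** 6                    # 3 spatial + 3 velocity dimensions
    tib = cells * 8 / 1024 ** 4       # 8 bytes per double, converted to TiB
    print(f"n = {n:3d}: {cells:.2e} cells, ~{tib:,.1f} TiB")
# n = 64 already needs ~0.5 TiB; n = 256 would need ~2 PiB.
```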
Even the use of the biggest supercomputers would not allow one to go much beyond this figure. The second shortcoming of such a technique is the development of fine-grained structures that are very difficult to follow numerically. These become very important in structure formation scenarios as clusters typically present many matter streams and shell crossings. Those two main shortcomings make the search for more advanced numerical schemes important. The problem of high-dimensionality could be removed if there were a way to use the density field $\rho(x)$ instead of the probability distribution function $f(x,v)$. This can be done by integrating the first few moments of the Vlasov equation and then using techniques known from hydrodynamical simulations (see e.g. @Hockney1988). This technique is limited by the formal need to integrate all moments and not just the first few ones to obtain an exact solution. Instead of a 6D space, there is now a (formally) infinite number of variables obeying an infinite series of equations. [@Peebles1987], for instance, truncates the series and uses the first two moments (mass conservation, Euler equation) of the collisionless Boltzmann equation to evolve the initial perturbations in time. The framework reaches its limits whenever the velocity dispersion of the fluid becomes important or when shell crossing occurs. [$N$-body ]{}simulations {#ssec:simulations} ------------------------ The other option to solve the system of equations (\[eq:VP\]) is to use a particle method in which the distribution function is sampled by a finite number $N$ of particles such that $$f(x,v) \cong \frac{1}{N} \sum_{i=1}^{N} m_i \delta\left(x - x_i\right)\delta\left(v - v_i\right). \label{eq:nbody}$$ Each particle or body is then evolved according to Newton’s law under the influence of the gravitational potential created by all the others as described by Poisson’s equation. In other words, [$N$-body ]{}simulations solve the Vlasov equation via its characteristics by sampling the initial phase space distribution with a discrete number of particles. The number of bodies is typically chosen as large as computationally feasible. The [$N$-body ]{}formalism is thus a Monte-Carlo approximation of the Vlasov-Poisson system. The advent of large supercomputers combined with the development of more efficient numerical algorithms has enabled the field of cosmological simulations to make considerable progress over the last decades. Simulations such as the *Millennium run* [@Springel2005] or *Bolshoi simulation* [@Klypin2011] are able to follow as many as a few billion particles. The complicated part of the [$N$-body ]{}simulation is the evaluation of the forces between pairs of particles. Over the years, many ingenious techniques (see [@Dehnen2011] for a review) have been invented to reduce the algorithmic complexity for the force integration to $\mathcal{O}(N\log N)$ or even better [@Dehnen2000]. All these techniques (tree-code, particle-mesh, P$^3$M, AMR, tree-PM,...) do, however, rely on particles and do, hence, share the same initial assumptions, leading to the following two challenges. Firstly, since the dark matter fluid is supposed to be collisionless, one has to manually suppress artificial two-body collisions arising between the pseudo-particles introduced to sample the phase space distribution. This is usually done by introducing an ad-hoc softening length and suppressing the gravitational force at scales below it [@Dehnen2001]. 
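As an illustration of this softening, a minimal sketch of a Plummer-softened direct-summation force (one common choice; the softening length, unit conventions and the $\mathcal{O}(N^2)$ summation are purely illustrative):

```python
import numpy as np

def softened_accel(pos, masses, G=1.0, eps=0.05):
    """Direct-summation accelerations with Plummer softening length eps.
    O(N^2) and purely illustrative; real codes use tree/mesh methods."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                         # vectors from particle i to all j
        r2 = np.sum(d * d, axis=1) + eps ** 2    # softened squared distance
        r2[i] = np.inf                           # exclude the self-force
        acc[i] = G * np.sum((masses / r2 ** 1.5)[:, None] * d, axis=0)
    return acc

# Example: two unit-mass particles separated by one length unit
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(softened_accel(pos, np.ones(2)))
```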
[$N$-body ]{}simulations are run under the assumption that for a suitable choice of the smoothing, the evolution of the $N$ pseudo-particles under the softened force should be the same as the gravitational evolution of the elementary dark matter particles. The second challenge is to relate the particle distribution to the theoretical Vlasov-Poisson system the particles are supposed to model. Despite its obvious relevance, it seems that the question of the precise quantitative importance of the discretization (\[eq:nbody\]) and its effects is still not settled [@Joyce2008]. As a matter of fact, there are no alternative tools to study the cosmic structure formation with the same resolution as [$N$-body ]{}simulations. This is of course not a limitation of the [$N$-body ]{}method itself, but makes it more complicated to evaluate the possible errors of [$N$-body ]{}simulations quantitatively, as there are basically no independent results to compare with. For instance [@Ludlow2011] find a non-negligible fraction of halos in CDM simulations that cannot be matched to peaks in the initial density distribution and are possible artefacts of the [$N$-body ]{}method. The various techniques used to calculate the forces can, of course, lead to marginally different results for the same initial sampling of the field when the resolution limit is reached. They do, however, all share the decomposition of $f(x,v)$ into a set of $N$ macroscopic particles and will, hence, share the consequences of this Ansatz. Spurious effects due to the discretization become more apparent when looking at simulations of warm dark matter (WDM) or hot dark matter (HDM) cosmologies. The initial matter power-spectrum entering such simulations is truncated below a certain free-streaming scale related to the dark matter particle rest mass. Since those particles have a small mass, they also have a finite velocity distribution function at every point in space, making the problem effectively 6-dimensional. In practice, these velocities are neglected and the DM fluid is treated in the cold fluid limit. These simulations are run using the same [$N$-body ]{}framework but with an initial density and velocity power spectrum truncated below the scale of interest. This should lead to a suppression of small halos below a characteristic mass and the simulations ought to be able to reproduce all structures with a mass above this limit. They could thus quickly converge towards a solution. [@Colin2000; @Wang2007; @Colin2008] did, however, demonstrate that this is not the case and that spurious halos form and merge to form structures below the theoretical mass threshold. Various techniques are used in the literature to cure this problem. [@Lovell2012], for instance, filter their halo catalogues during the post-processing of their simulations. The end results are thus free from spurious halos but this does not solve the intrinsic discreteness problem of the [$N$-body ]{}technique. More details about these challenges and a comprehensive review of the topic can be found in [@Dehnen2011]. Notice that this formalism is still a very active and lively area of research with alternative, more advanced formulations being proposed frequently. Some authors [@Abell2012; @Shandarin2012] recently proposed using tessellations of the 3D matter sheet in 6D space to track some of the phase space information. This may allow them to solve the coarse-graining problem and reduce the impact of non-physical two-body relaxation between the macroscopic particles. 
This formalism has led to promising results in the study of WDM cosmology and the differences between the CDM and WDM halo mass functions [@Angulo2013]. All the potential shortcomings of the [$N$-body ]{}formalism and the difficulty of evaluating their impact on the simulation results make it important to develop another framework not based on a particle approach. An alternative framework ------------------------ Our framework resembles the attempt by [@Peebles1987] to use only the density field $\rho(x)$ and potential $U(x)$. The main problem of such an approach is that there is no closed system of equations that includes only the density and gravitational potential. The situation is different when looking at quantum physics. In this realm, all the phase-space information can be encoded in a single function, the wavefunction $\psi(x)$, which does not depend on the velocity $v$. It is thus possible to write a closed Schrödinger-Poisson system that would replace the Vlasov-Poisson one and that would only depend on the spatial variable $x$ (see also [@Short2006] for a similar idea). This would effectively be a 3D system of equations but would allow one to simulate the full 6D phase space and hence allow the simulation of alternative cosmologies, such as those including WDM or free-streaming neutrino contributions. The principal difficulty is then to find a good mapping between the distribution function $f(x,v)$ of interest and its “quantum” equivalent $\psi(x)$ and vice-versa. This is achieved by using the so-called *Wigner distribution function* $$f(x,v) \simeq \int e^{\frac{i}{\hbar}vy}\psi^*\left(x+\frac{y}{2}\right)\psi\left(x-\frac{y}{2}\right) d^3y,$$ which obeys an equation similar to the Vlasov equation but is constructed from wave functions. The main feature of this mapping is that the density field can simply be expressed as $$\label{eq:1} \rho(x) = |\psi(x)|^2.$$ However, the limitation of this approach is that one single wavefunction is in general not sufficient to encode all the complexity of the distribution function and we then use the more general version: $$\label{eq:2} \rho(x) = \sum_n |\psi_n(x)|^2.$$ The summation index $n$ can, as a first thought, be understood as a sum over the velocities $v$ that appear in the distribution function $f(x,v)$. We somehow trade a 6D function for a (finite) set of 3D (complex-valued) functions. We will, however, demonstrate that the number of wavefunctions required can be very low (of order unity in some cases), making the whole framework effectively 3D. It is important to stress from the outset that we are not trying to solve the evolution of structure formation at the quantum level. Although we make use of quantum mechanics concepts, we merely use it as a mathematical “trick” to solve the Vlasov-Poisson system (\[eq:VP\]). For this reason, the constant $\hbar$ appearing in our equations has to be understood as a computational parameter whose value bears no relation to the actual Planck constant $\hbar_{\rmn{phys}}= 1.0545 \cdot10^{-34}~\rmn{m}^2~\rmn{kg}~\rmn{s}^{-1}$. Once the wavefunctions are built, they are evolved forward in time using [Schrödinger]{}’s equation. The density sourcing Poisson’s equation is obtained through equation (\[eq:2\]) and one can then solve for the potential at each time step using standard techniques. This potential enters the [Schrödinger]{}equation, closing the loop. 
We have, hence, built a closed system of equations using only a set of wave functions (which serve as a proxy for density) and the gravitational potential. $$\label{eq:SP} \begin{array}{rcl} i\hbar \partial_t \psi_n &=& - \displaystyle\frac{\hbar^2}{2} \nabla^2_x \psi_n + U\psi_n, \Big.\\ \nabla^2_x U &=& 4\pi G\delta\rho. \Big. \end{array}$$ The study of structure formation then becomes an exercise in solving $N$ copies of the [Schrödinger]{}equation on a computer, which is a well-studied problem. The velocity distribution can be recovered by Fourier transforming the wave functions and if one is interested in the phase space distribution, one can apply the Wigner transform. This is, however, not part of the algorithm itself and can be done in post-processing if necessary. The entire evolution of the system can be done at the “quantum level”, i.e. using the wave functions alone. We stress that this is another approximation of the true underlying physical problem (equation \[eq:VP\]) and that this framework, like any other, will have limitations. Some of these limitations and their relevance to the case of structure formation studies will be discussed in this paper. We will address those in the context of the science we are interested in and demonstrate how alternative cosmologies, including non-cold dark matter scenarios, could effectively be simulated. The development of this framework has been pioneered by [@Widrow1993] and [@Davies1997] with the important difference that these authors use a single wavefunction and another way to map the distribution function into the quantum world. Their general procedure is very similar to ours: sample the wavefunction from the initial phase space distribution, evolve in time using the Schrödinger-Poisson equations and recover the final phase space distribution from the wavefunction. Note also that another possible route, where the Hartree equation is used instead of the Schrödinger equation, has been explored by [@Aschbacher2001] and [@Frohlich2010]. The time evolution of the Schrödinger-Poisson system is done using an explicit finite-differences scheme for the wave function and an FFT algorithm to solve the Poisson equation. We try to improve upon their algorithm for the time evolution as will be described below. [@Widrow1993] have made several simulations using this Schrödinger method obtaining results in agreement with those of usual [$N$-body ]{}simulations. They also claim that their method is computationally comparable to [$N$-body ]{}simulations, making it a promising tool for cosmological purposes. These authors choose to use one single wave function to represent the distribution function. This has important consequences on the validity of equation (\[eq:1\]). By using one single wave function, the phase space distribution built from it cannot be everywhere positive, and the authors thus have to add an additional Gaussian smoothing. We alleviate this shortcoming by using more than one wave function and a different transformation from wave functions to phase space distribution. Their choice of Gaussian-smoothed density also led them to a simple technique to generate the initial wave function. They use a set of particles sampling the phase space distribution function exactly as in the case of [$N$-body ]{}simulations. They can then turn each particle into a Gaussian in phase space by smoothing it and use this set of wave packets as their initial wave function. 
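For concreteness, a 1D sketch of this wave-packet construction (our paraphrase of the procedure, not the original authors' code; the particle sample, width $\eta$ and value of $\hbar$ are illustrative):

```python
import numpy as np

def wave_packet_psi(x, xp, vp, hbar=1e-3, eta=0.02, seed=0):
    """Superpose Gaussian wave packets centred on sampled particles (xp, vp),
    each carrying its particle velocity as a plane-wave phase, with random
    relative phases.  1D sketch of the construction described above."""
    rng = np.random.default_rng(seed)
    psi = np.zeros_like(x, dtype=complex)
    for xi, vi in zip(xp, vp):
        packet = np.exp(-(x - xi) ** 2 / (2 * eta ** 2) - 1j * vi * x / hbar)
        psi += np.exp(1j * rng.uniform(0, 2 * np.pi)) * packet
    psi /= np.sqrt(len(xp))
    return psi / np.sqrt(np.trapz(np.abs(psi) ** 2, x))   # normalize to unity

x = np.linspace(0.0, 1.0, 512)
xp, vp = np.random.default_rng(1).uniform(0, 1, (2, 32))  # toy particle sample
psi = wave_packet_psi(x, xp, vp)
print(np.trapz(np.abs(psi) ** 2, x))                      # ~1
```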
In our approach, we depart from this need of an initial [$N$-body ]{}sampling by considering other techniques to generate the set of wave functions. By doing so, we allow for a completely generic distribution function and should, in principle, not experience the consequences of an *a priori* artificial Monte-Carlo sampling of $f(x,v)$. The second feature of our framework is the replacement of the Poisson equation by a Klein-Gordon equation for the potential $U(x)$: $$-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}U + \nabla^2_xU = 4\pi G\delta\rho,$$ where $c$ is the numerical speed of gravity. This scalar gravity equation is, once again, purely a mathematical trick to reduce the complexity of the original system (\[eq:VP\]) and not an attempt to modify Newton’s gravity. Such a replacement makes the framework entirely local and does not require complicated integration methods for the Poisson equation. The complexity of the scheme is then formally reduced to $\mathcal{O}(M)$, where $M$ is the number of mesh points in real space used in the simulation. In this respect our approach also differs from the original work by [@Widrow1993], who stick to the classical Poisson form of gravity. We stress that this step is not formally necessary. The well-known techniques used to solve Poisson’s equation on a mesh (FFT, Gauss-Seidel relaxation, etc.) can also be used in our framework. This change of the equation for gravity simply makes the computations slightly faster in the cases where our approximation is valid. However, in the case of cosmological simulations with vastly different scales interacting, it is unclear how the Klein-Gordon equation for gravity would behave and defaulting to standard mesh techniques might be required. The algorithm in brief {#ssec:algorithm} ====================== Here we present the main algorithm of our framework, decomposed into a few simple steps. A formal derivation and a discussion of the convergence and accuracy of the method will be presented in the next section. Choose the parameters of your simulation. The precision and speed of the method are governed by three parameters, $\hbar$, $c$ and $N$. The procedure for choosing them is the following:\ The parameters $c$ and $\hbar$ are linked to the space and time resolution ($\Delta x$ and $\Delta\tau$ respectively) of the simulation via the Courant condition: $$\frac{\Delta x}{\Delta\tau} > c$$ and the condition on the stability of the discretized [Schrödinger]{}equation: $$\frac{\Delta x^2}{\Delta\tau} > \hbar$$ The number of wavefunctions $N$ is chosen depending on the number of relevant modes in the wavefunction decomposition of the initial distribution function. The optimal value of $N$ is problem-dependent and is also influenced by the algorithm chosen to discretize the distribution function. The details of this procedure will be given in section \[sec:IC\]. The accuracy of the approximation is also dictated by the choice of $\hbar$. The “quantum” nature of the formalism imposes limitations on the precision of the description of position and velocity at the same point, following the equivalent of Heisenberg’s uncertainty principle. Take an initial phase-space distribution function of the matter fields $f(x,v)$ (in the case of cosmological simulations, it is expressed via the power spectrum $P(k)$). 
Decompose the distribution function into $N$ complex-valued $\psi_n(x)$ such that $$\label{eq:initial_distribution} f(x,v) \approx \sum_{n=1}^N \int e^{\frac{i}{\hbar}v\,y}\psi_n^*\left(x+\frac{y}{2}\right)\psi_n\left(x-\frac{y}{2}\right) d^3y.$$ The number $N$ of wavefunctions is chosen so as to minimize the error introduced by the decomposition and will, in practice, be as big as computationally feasible. Various ways to generate this initial set of wavefunctions for a given $f(x,v)$ are presented in section \[sec:IC\]. *At this stage the precision of the approximation $f(x,v)\to \{\psi_n(x)\}$ is controlled by two parameters, $\hbar$ and $N$.* The wavefunctions $\psi_n(x)$ are now evolved forward in time using the coupled [Schrödinger]{}-Klein-Gordon system of equations $$\begin{array}{rcl} i \hbar \partial_t \psi_n &=& - \displaystyle\frac{\hbar^2}{2} \nabla^2_x \psi_n + U\psi_n, \Big.\\ -\displaystyle\frac{1}{c^2}\displaystyle\frac{\partial^2}{\partial t^2}U + \nabla^2_xU &=& 4\pi G\left(\displaystyle\sum_{n=1}^N |\psi_n(x)|^2 - \bar\rho\right). \Big. \end{array}$$ The integration in time of the [Schrödinger]{}-Klein-Gordon system can be done explicitly using finite differences on a regular grid, as will be described in section \[sec:numerics\]. Controlling your simulation. As the simulation is running, you should monitor the following quantities in order to see that the choice of the method does not introduce artefacts. The correction terms $$\label{eq:correction_term} \sum_{\shortstack{$\scriptstyle r\geq 3$\\$\scriptstyle r~\rm {odd}$}} \frac{1}{r!}\left(\frac{\hbar}{2i}\right)^{r-1} {\frac{\partial ^r}{\partial x^r}}V {\frac{\partial ^r}{\partial v^r}}f(x,v)$$ should be small when compared to the ones ($v{\frac{\partial f}{\partial x}}$ and ${\frac{\partial U}{\partial x}}{\frac{\partial f}{\partial v}}$) entering the Vlasov equation. Thanks to the $1/r!$ decrease and the smoothness of the gravitational potential $V$ in most cases of interest, computing the first term of this series is generally sufficient. If this term grows above the value of the other terms in the Vlasov equation, then the approximation introduced in this paper is not valid any more. Reducing the value of $\hbar$ or increasing the number $N$ of wavefunctions used in the initial discretization will decrease the contribution of the correction terms but this will lead to a higher computational cost. The correction terms as well as the terms entering the Vlasov equation are expensive to compute but need not be computed at each time step. Once the final time has been reached, the distribution function can be recovered by computing the integral (\[eq:initial\_distribution\]) or, if one is interested in the density field only, equation (\[eq:2\]) is sufficient and straightforward to compute. Formal derivation {#sec:framework} ================= In the previous section, we described the problem we are interested in and the usual schemes used in the literature. We also presented a brief description of the route we intend to follow in order to tackle the issues outlined. In this section, we describe the whole formulation in detail, derive its main equations and discuss its limits. For completeness, we start with a review of a formulation of quantum mechanics and show how its main ingredient, the *Wigner Distribution Function*, will play the role of an approximate distribution function for our problem. Readers interested only in the end results can jump directly to Section \[ssec:local\]. 
Phase-space quantum mechanics {#ssec:QM} ----------------------------- Quantum mechanics is usually presented as emerging from the Hamiltonian formulation of classical mechanics through canonical quantization (See for instance [@Sakurai]). In this procedure, variables are promoted to Hermitian operators and the Poisson bracket is replaced by a commutator. Alternatively, one can also use Feynman’s propagator and the path integral formalism to move from classical to quantum mechanics. Alongside these well-known quantization procedures, there exist other equivalent formulations which try to emphasize more clearly certain aspects. The Moyal (or phase-space) formulation is among those and tries to find a quantum equivalent to the classical phase-space and distribution functions [@Ercolessi2007; @Hillery1984]. The quantization procedure tries to find a correspondence between classical functions (called symbols) of the phase space variables and quantum operators in Hilbert space: $$\mbox{Operators in Hilbert space}\leftrightarrow \mbox{Phase space symbols}$$ As the position and momentum operators do not commute, this mapping can not be unique. Different operator orderings will be mapped to different phase-space symbols. Hermann Weyl proposed a systematic way to associate a quantum operator to a classical distribution function, which is now referred to as *Weyl quantization*. This complex procedure will not be discussed further here but its inverse, the *Wigner transform* will be useful for our formalism. This transformation associates to every quantum operator $\hat{A}$ a real phase-space function $A(x,v)$: $$A(x,v) = {\mbox{sym}}(\hat{A}) := \int e^{\frac{i}{\hbar}vy}\left\langle x-\frac{y}{2} \right|\hat{A}\left|x+\frac{y}{2}\right\rangle dy, \label{eq:Wigner_transform}$$ where $\langle \cdot|\cdot\rangle$ is the usual Bra-ket notation for quantum states. The transformation of a product of operators is given by $${\mbox{sym}}(\hat{A}\hat{B}) :={\mbox{sym}}(\hat{A}) \star {\mbox{sym}}(\hat{B}),$$ where the *Moyal star product* $\star$ contains the quantum mixing of the operators. This product of functions in phase space is defined as $$A(x,v)\star B(x,v) := A(x,v)~ e^{\frac{i\hbar}{2}\left(\overleftarrow{\partial_x}\overrightarrow{\partial_v} - \overleftarrow{\partial_v}\overrightarrow{\partial_x} \right)}~B(x,v)$$ and is a central element in this formulation of quantum mechanics. Defining the *Moyal bracket* [@Moyal1949] by $$\left\lbrace A, B\right\rbrace_M := A\star B - B \star A$$ the commutator of operators is associated to the Moyal brackets of two symbols in the following way: $${\mbox{sym}}\left(\left[\hat{A},\hat{B}\right]\right) =\left\lbrace{\mbox{sym}}\left(\hat{A}\right),{\mbox{sym}}\left(\hat{B}\right)\right\rbrace_M.$$ The dynamical equation in this formulation can be written in a simple way using these brackets and reads $$i\hbar \partial_t f = \left\lbrace \hat{H}, f \right\rbrace_M,$$ where $\hat{H}$ is the Hamiltonian of the system. 
The interesting property of this formulation of quantum mechanics is that in the semi-classical limit $\hbar\rightarrow 0$, the dynamical equation reduces to the classical equation of motion expressed in terms of Poisson brackets $$\partial_t f = \left\lbrace H,f\right\rbrace_P = H \left(\overleftarrow{\partial_x}\overrightarrow{\partial_v} - \overleftarrow{\partial_v}\overrightarrow{\partial_x} \right) f.$$ This illustrates how the algebraic structures of classical and quantum mechanics are related through the continuous changing of the parameter $\hbar$. This is the reason why such an approach to quantum mechanics is known as *deformation quantization* [@Hirshfeld2002]. Let’s now stop this overview and move to the part of this formalism which will be useful for the construction of our new simulation framework. Wigner distribution function {#ssec:WDF} ---------------------------- The Wigner transform (equation \[eq:Wigner\_transform\]) maps a quantum operator $\hat{A}$ to a classical function in phase space. Wigner used this to associate a real phase space function to a quantum system [@Wigner1932], now called the *Wigner distribution function* (WDF). It is defined as the symbol in phase space associated to the density operator $\hat\rho$: $$P_W(x,v) := {\mbox{sym}}\left(\hat\rho\right) = \int e^{\frac{i}{\hbar}vy} \left\langle x-\frac{y}{2} \right|\hat{\rho}\left|x+\frac{y}{2}\right\rangle d^3y.$$ As usual, the density operator can be expressed as the combination of pure state wavefunctions $\psi_n$: $$\hat\rho = \sum_n \lambda_n \left|\psi_n\right\rangle\left\langle\psi_n\right|, \quad \lambda_n \geq 0, \quad \sum_n \lambda_n = 1.$$ For mixed states, the WDF is thus $$P_W(x,v) = \int e^{\frac{i}{\hbar}vy}\sum_n \lambda_n\psi^*_n\left(x+\frac{y}{2}\right)\psi_n\left(x-\frac{y}{2}\right) d^3y, \label{eq:WDF}$$ while for a pure state, it reads $$P_W(x,v) = \int e^{\frac{i}{\hbar}vy}\psi^*\left(x+\frac{y}{2}\right)\psi\left(x-\frac{y}{2}\right) d^3y.$$ To simplify the expressions, we will use the notation $x_\pm = x \pm \frac{y}{2}$ and $\psi_{n\pm} = \psi_n(x_\pm)$ in what follows. The WDF has many similarities to the classical distribution function: $P_W(x,v)$ is a real function, as can be seen by taking the conjugate and performing the change of variable $y\rightarrow-y$. It is also normalized to $1$ in the following sense $$\int d^3x\int \frac{d^3v}{(2\pi \hbar)^3} P_W(x,v) = 1.$$ It has similar marginal distributions as can be seen by integrating over all velocities: $$\begin{aligned} \int\frac{d^3v}{(2\pi \hbar)^3} P_W &=& \sum_n \lambda_n\int \delta^3\left(y\right)\psi^*_n\left(x_+\right)\psi_n\left(x_-\right)d^3y\nonumber \\ &=& \sum_n \lambda_n \left| \psi_n\left(x\right) \right|^2, \label{eq:density}\end{aligned}$$ or over all space $$\begin{aligned} \int d^3x P_W &=& \sum_n \lambda_n \int d^3x_- d^3x_+e^{\frac{i}{\hbar}vx_+}e^{-\frac{i}{\hbar}vx_-}\psi^*_{n+}\psi_{n-} \nonumber \\ &=& \sum_n \lambda_n\left|\tilde\psi_n\left(\frac{p}{\hbar}\right) \right|^2.\end{aligned}$$ In both cases the non-negative property of these marginal distributions is a property of the wavefunctions in quantum mechanics. The Wigner distribution function does, however, have the peculiar property that it may assume negative values. For this reason, it is called a *quasi-probability distribution* and cannot be interpreted as a phase-space probability density in the sense of classical mechanics. 
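As a concrete illustration of these marginal properties, a small 1D numerical check for a single Gaussian pure state (grid sizes, $\hbar$ and the state itself are illustrative):

```python
import numpy as np

hbar = 0.05
x = np.linspace(-4.0, 4.0, 256)
y = np.linspace(-4.0, 4.0, 256)
v = np.linspace(-0.5, 0.5, 256)
psi = (1.0 / np.pi) ** 0.25 * np.exp(-x ** 2 / 2)          # Gaussian pure state

def psi_at(q):          # linear interpolation of psi off the grid
    return np.interp(q, x, psi.real) + 1j * np.interp(q, x, psi.imag)

# P_W(x, v) = \int e^{i v y / hbar} psi*(x + y/2) psi(x - y/2) dy
phase = np.exp(1j * np.outer(v, y) / hbar)                  # shape (n_v, n_y)
P_W = np.zeros((len(x), len(v)))
for i, xi in enumerate(x):
    corr = np.conj(psi_at(xi + y / 2)) * psi_at(xi - y / 2)
    P_W[i] = np.real(np.trapz(phase * corr, y, axis=1))

# the velocity marginal reproduces |psi(x)|^2 (with the 2*pi*hbar normalization)
rho_from_wdf = np.trapz(P_W, v, axis=1) / (2 * np.pi * hbar)
print(np.max(np.abs(rho_from_wdf - np.abs(psi) ** 2)))      # small on this grid
```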
The non-positivity of the WDF can be seen by integrating over all phase-space the product of two distributions built from different states $\psi$ and $\Phi$: $$\int dx \int dv~ P_W[\psi](x,v) P_W[\Phi](x,v) \propto \left| \left\langle \psi |\Phi\right\rangle\right|^2.$$ The right-hand side vanishes if the two states $\psi$, $\Phi$ are orthogonal, which implies that the WDF cannot be positive everywhere. According to the Hudson theorem [@Hudson1974], the WDF of a pure state is pointwise non-negative if and only if the state is Gaussian. If $\hat\rho$ is not a pure state, it can be represented as a convex combination of pure state operators, $\hat\rho = \sum_n \lambda_n | \psi_n \rangle\langle \psi_n|$, in infinitely many ways. The WDF satisfies the so-called mixture property [@Ballentine1998], which is the requirement that the phase space distribution should depend only on the density operator $\hat\rho$ and not on the particular way it is represented as a mixture of some set of pure states $\left\lbrace \left|\psi_n\right\rangle\right\rbrace$. To summarize, the Wigner distribution function has many properties similar to the classical phase space distribution. Nevertheless, it has been realized from the early days that the concept of a joint probability at a phase space point is limited in quantum mechanics because the Heisenberg uncertainty principle makes it impossible to simultaneously specify the position and velocity of a particle. Therefore, the best one can hope to do is to define a function that has as many properties as possible analogous to those of the classical distribution function. Many different variants of distribution functions - Husimi, Kirkwood-Rihaczek, Glauber - have been studied over the decades, all with their own advantages and shortcomings (see [@Lee1995] for a review). The WDF is, despite its non-positivity, considered to be a useful calculational tool and finds applications in various domains outside of quantum physics, like signal processing or optics [@Bastiaans1997]. [@Widrow1993] use a Husimi distribution [@Husimi1940] to recover the phase space information from the wavefunction. The Husimi distribution is essentially equal to the Wigner distribution with an additional Gaussian smoothing of width $\eta$ $$P_H(x,v) = \frac{1}{(2\pi\hbar)^3}\frac{1}{(\pi\eta^2)^{3/2}}\left|\int d^3y e^{-\frac{(x-y)^2}{2\eta^2}-\frac{i}{\hbar}vy} \psi(y) \right|^2.$$ Compared to the WDF it has the advantage of yielding a phase space distribution that is positive-definite at every point. This comes at the price of the marginal distributions not being equal to the usual position and velocity distributions, but rather Gaussian-broadened versions of them $$\rho_H(x) = \frac{1}{(\pi\eta^2)^{3/2}}\int d^3 y e^{-\frac{(x-y)^2}{2\eta^2}}|\psi(y)|^2.$$ Only in the limit $\eta\rightarrow0$ does it reduce to the usual probability distribution. Similarly one can show that the other marginal distribution reduces to the standard velocity distribution only when $\eta\rightarrow\infty$. This complementarity is of course related to Heisenberg’s uncertainty principle. Note that it is in principle this smoothed distribution that enters the Poisson equation instead of $|\psi(x)|^2$. Since this would require an additional space integration at each time step, Widrow and Davies approximate it with the usual distribution $|\psi(x)|^2$ in the Poisson equation. Actually, $P_H(x,p) \simeq f(x,p)$ only when averaged on scales $\Delta x \geq \eta$, $\Delta p \geq \frac{\hbar}{\eta}$. 
Note that there is no a priori reason why the non-linear time evolution should yield an answer that is again, on average, close to the real distribution function. Let us stress that we allow for several wavefunctions to have an initial phase space representation that is arbitrarily close to the classical distribution function at every point, not only when averaged. Let us recall that our goal is not to interpret the Wigner distribution function as a fully-fledged phase space distribution, but rather as a convenient mathematical tool. Dynamical equation for the WDF {#ssec:EOM} ------------------------------ We now want to derive the dynamical equation satisfied by the WDF. A derivation starting from Liouville’s equation for the density matrix can be found in [@Ballentine1998]. Another possibility is to start by taking the time derivative of the Wigner distribution function and use the fact that the wavefunctions satisfy Schrödinger’s equation. Suppose each of the wavefunctions satisfies the Schrödinger equation $$i\hbar \partial_t \psi_n = -\frac{\hbar^2}{2}\nabla^2 \psi_n + V\psi_n, \label{eq:schroedinger}$$ then the time derivative of the WDF becomes $$\begin{aligned} \partial_t P_W =\int e^{\frac{i}{\hbar}vy} \sum_n \lambda_n \left[ -\frac{i\hbar}{2}\right. \left. \left(\nabla^2_+\psi^*_{n+}\psi_{n-} \right. \right. \nonumber \\ \left.- \psi^*_{n+}\nabla^2_-\psi_{n-} \right) - \left.\frac{1}{i\hbar}\left(V_+-V_-\right)\psi^*_{n+}\psi_{n-}\bigg. \right] d^3y,\end{aligned}$$ where, once again, the subscripts $+$,$-$ denote the dependence on $x_\pm = x \pm \frac{y}{2}$. The terms containing a Laplacian can be rewritten in terms of spatial derivatives of $P_W$ only and the previous equation becomes $$\begin{aligned} 0 &=&\partial_t P_W + \vec{v}\cdot\vec{\nabla}_x P_W \nonumber \\ && - \frac{1}{i\hbar}\int e^{\frac{i}{\hbar}vy}\left(V_+-V_-\right)\sum_n\lambda_n\psi^*_{n+}\psi_{n-}~d^3y.\end{aligned}$$ This is the dynamical equation for the WDF, that we will refer to as the *Wigner equation*. This dynamical equation depends on both $P_W$ and the wavefunctions, which implies that we might have to define initial conditions for both. Let’s now demonstrate that one can get rid of the dependency on the $\psi_n$. Let’s expand the potential in a Taylor series $$V(x_+) - V(x_-) = y {\frac{\partial }{\partial x}}V(x) + 2\sum_{\shortstack{$\scriptstyle r\geq 3$\\$\scriptstyle r~\rm {odd}$}} \frac{1}{r!}{\frac{\partial ^r}{\partial x^r}}V(x)\left(\frac{y}{2}\right)^r$$ and use this result in the dynamical equation: $$\begin{aligned} 0 &=& {\frac{\partial }{\partial t}} P_W + \vec{v}\cdot{\frac{\partial }{\partial x}} P_W - {\frac{\partial V}{\partial x}}{\frac{\partial }{\partial v}}P_W \nonumber\\ & & + \sum_{\shortstack{$\scriptstyle r\geq 3$\\$\scriptstyle r~\rm {odd}$}} \frac{1}{r!}\left(\frac{\hbar}{2i}\right)^{r-1} {\frac{\partial ^r}{\partial x^r}}V {\frac{\partial ^r}{\partial v^r}}P_W. \label{eq:Wigner}\end{aligned}$$ One can notice that the first three terms correspond to the classical Vlasov equation. In three cases, the Wigner equation exactly coincides with the classical Vlasov equation: for a free particle ($V=0$), for a uniform field ($V\propto x$) and for a harmonic oscillator ($V \propto x^2$). In general, there are additional terms that can be interpreted as quantum corrections[^2] or simply higher-order corrections. In any other case, corrections in the form of a power series in $\hbar$ will appear and modify the dynamics. 
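In practice, only the leading $r=3$ term of this series needs to be monitored, as prescribed in the algorithm above. A rough 1D sketch of such a check on a grid, with a purely illustrative potential and distribution:

```python
import numpy as np

# 1D sketch: compare the leading (r = 3) correction in the Wigner equation,
# (1/3!) (hbar/2)^2 d^3V/dx^3 d^3f/dv^3, with the classical Vlasov terms.
hbar = 0.05
x = np.linspace(0.0, 1.0, 256)
v = np.linspace(-1.0, 1.0, 256)
X, Vel = np.meshgrid(x, v, indexing="ij")
Vpot = 0.1 * np.cos(2 * np.pi * x)                        # toy potential
f = np.exp(-Vel ** 2 / 0.02) * (1.0 + 0.3 * np.cos(2 * np.pi * X))

dVdx  = np.gradient(Vpot, x)
d3Vdx = np.gradient(np.gradient(dVdx, x), x)
dfdx  = np.gradient(f, x, axis=0)
dfdv  = np.gradient(f, v, axis=1)
d3fdv = np.gradient(np.gradient(dfdv, v, axis=1), v, axis=1)

classical  = np.abs(Vel * dfdx) + np.abs(dVdx[:, None] * dfdv)
correction = np.abs((1.0 / 6.0) * (hbar / 2.0) ** 2 * d3Vdx[:, None] * d3fdv)
print("max(correction) / max(classical) =",
      np.max(correction) / np.max(classical))
```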
Note that in this derivation, the only assumption made on the $\lambda_n$ is that they be constant. In principle any value is acceptable and it can even be negative or complex. As we are not using these equations to solve a quantum mechanics problem, where $\lambda_n>0$, we can use this fact to create more general sets of wavefunctions to approximate a given $f(x,v)$. Note that the mass $m$ does not appear in the [Schrödinger]{}equation in the same way that it does not appear in the Vlasov system. This, once again, illustrates that we are not solving the quantum mechanics evolution of the individual DM particles but rather find an approximation of the DM fluid evolution equation. Let us recap what we have derived so far. By inspecting the Moyal formulation of quantum mechanics, we found a quantity, the Wigner distribution function $P_W$. This quasi-probability density function obeys the Wigner equation, an equation similar to the Vlasov equation but with additional terms in the form of a power series in $\hbar$. Semi-classical limit {#ssec:limit} -------------------- The Wigner equation (\[eq:Wigner\]) reduces to the classical Vlasov equation in the limit $\hbar\rightarrow 0$. Even though the quantum correction is formally $\mathcal{O}\left(\hbar^2\right)$, the derivatives of $P_W$ could generate additional inverse powers of $\hbar$, making the semi-classical limit more involved[^3]. The properties of the semi-classical limit depend of course on the potential $V(x)$. In this paragraph we present some results concerning the case of interest to us, where the potential satisfies Poisson’s equation. In particular, different authors investigated the semi-classical limit of the Wigner-Poisson (W-P) system to the Vlasov-Poisson (V-P) system for the Coulomb potential. The mathematically rigorous classical limit from W-P to V-P has been solved first in 1993 independently by [@Lions1993] and [@Markovitch1993]. Both references consider a so-called completely mixed state; i.e. an infinite number of pure states with a strong additional constraint on the occupation probabilities: $$\mbox{Tr}\hat\rho^2 = \sum_n \lambda_n^2 \leq C\hbar^3,$$ where $C$ is a constant. Under this assumption, the classical limit of the solution to the 3D W-P system converges to the solution of the V-P system. Note that the Wigner distribution function can also have negative values, whereas the semi-classical limit is a true, non-negative distribution function. In both references, this was overcome by using a Gaussian-smoothed Wigner function. The situation for a pure state is completely different [@Zhang2002]. According to these authors, it appears that a density operator which has the above property that the trace of its square tends to zero with the third power of the Planck constant seems to be closer to classical mechanics than a pure state. For a pure state in 1D, the semi-classical limit is not unique: examples have been constructed where different regularization schemes give different limits [@Majda1994]. The question whether there exists a selection principle to pick the correct classical solution has also been investigated but is not yet settled [@Jin2008]. No proof of the semi-classical limit from W-P to V-P is known for the pure state case in 2D or 3D. For more details the reader is referred to the original papers or the review [@Mauser2002]. See also [@Frohlich2007] for an alternative approach to the semi-classical limit. 
Finally, let us stress once again that we seek to use our knowledge of quantum mechanics to simplify the resolution of the mathematical problem presented in the Introduction. We are not trying to describe the physics of structure formation at the quantum level nor trying to find a wavefunction for the entire Universe. Local interaction framework {#ssec:local} --------------------------- In Newtonian gravity, much like in classical electrodynamics, each body moves in the potential generated by all the others. As both forces are long-ranged, the total force acting on each of the $N$ particles will be given by the sum of the contributions from all the other particles, no matter how far away. In gravitational [$N$-body ]{}problems, the $N$ sampling bodies also receive a contribution from all the other bodies and a naive algorithm would require $\mathcal{O}(N^2)$ operations for the force calculation at each time step. But it is well-known that this long-ranged interaction through the potential can be replaced by a purely local interaction with a gauge boson or a spin-zero boson. In this approach, each particle only interacts locally with the bosonic field. We propose to reformulate the cosmological Vlasov-Poisson system (\[eq:VP\]) $$\begin{aligned} {\frac{\partial f}{\partial t}} + \frac{v}{a} {\frac{\partial f}{\partial x}} - a {\frac{\partial U}{\partial x}}{\frac{\partial f}{\partial v}} &=&0, \nonumber\\ \nabla^2 U &=& 4\pi Ga^2 \delta\rho,\end{aligned}$$ as a purely local problem. To achieve spatial locality, we shall trade the real-valued phase space distribution function $f(x,v)$ for a finite set of complex-valued wavefunctions $\left\lbrace \psi_n(x)\right\rbrace$. For this we shall assume that the classical distribution function can be approximated by the Wigner distribution function of some auxiliary mixed states: $$f(x,v) \simeq P_W(x,v) = \int e^{\frac{i}{\hbar}vy}\sum_n \lambda_n \psi^*_n(x_+) \psi_n(x_-)\,d^3 y.$$ The details of how this approximation is to be understood, and how we construct in practice the set of wavefunctions $\left\lbrace \psi_n(x)\right\rbrace$ for any given $f(x,v)$, will be discussed in Section \[sec:IC\]. For the time being, let us assume that we have determined a set of wavefunctions such that the above approximation holds. The dynamical evolution of the WDF is given by the quantum-corrected Vlasov equation (the Wigner equation (\[eq:Wigner\])), or equivalently, by the Schrödinger equation (\[eq:schroedinger\]) of the wavefunctions interacting in a self-consistent way with a potential obeying the Poisson equation. The cosmological Vlasov equation in an expanding Universe and expressed using conformal time $\tau$ is very similar to the classical one, up to the replacements $$v \mapsto \frac{v}{a(\tau)}, \qquad V \mapsto a(\tau)U.$$ Therefore the Schrödinger-Poisson system in the expanding universe becomes $$\label{eq:SP_cosmo} \begin{array}{rcl} i\hbar \partial_\tau \psi_n &=& - \displaystyle\frac{\hbar^2}{2a} \nabla^2_x \psi_n + aU\psi_n, \Big.\\ \nabla^2_x U &=& 4\pi Ga^2\delta\rho, \Big. \end{array}$$ where $\delta\rho$ is the cosmological density contrast. 
The mass density $\left[\rmn{kg}~\rmn{m}^{-3} \right]$ relates to the wavefunctions $\left[\rmn{kg}^{1/2}~\rmn{m}^{-3/2} \right]$ by $$\rho = \frac{1}{a^3} \int \frac{d^3v}{(2\pi\hbar)^3}f(x,v) = \frac{1}{a^3} \sum_n \lambda_n \left| \psi_n\right|^2.$$ The normalization is chosen such that the phase space density integrates to the total mass $$\int d^3 x\int \frac{d^3v}{(2\pi\hbar)^3}f(x,v) = \int d^3 x \sum_n \lambda_n \left| \psi_n\right|^2 = M_{\rmn{tot}},$$ implying for the background density $$\bar\rho = \langle\rho\rangle = \frac{1}{V_\rmn{tot}} \frac{1}{a^3}\int d^3 x \sum_n \lambda_n \left| \psi_n\right|^2 = \frac{1}{a^3}\frac{M_\rmn{tot}}{V_\rmn{tot}},$$ where $V_\rmn{tot}$ denotes the total comoving volume. Therefore the density contrast $\delta\rho$ reads $$\delta\rho = \frac{1}{a^3}\left(\sum_n \lambda_n \left| \psi_n\right|^2 - \frac{M_\rmn{tot}}{V_\rmn{tot}} \right).$$ In the semi-classical limit ($\hbar\rightarrow 0$), the Schrödinger-Poisson system (\[eq:SP\_cosmo\]) formally reduces to the original Vlasov-Poisson system describing gravitational structure formation. Notice that the total mass is conserved by construction as the normalization of the wavefunctions is a constant of motion of the Schrödinger equation. So far, we achieved locality in the sense that our set of equations does not explicitly depend on the velocity variable $v$. We traded our 6 dimensional phase-space density function for a (possibly infinite) set of complex-valued functions that depend on the space coordinate $x$ only. The numerical complexity of the problem has thus been drastically reduced as long as the number of wavefunctions remains small. Before addressing this question, let us go one step further and discuss the second equation of our system (\[eq:SP\_cosmo\]). The Poisson equation is a non-local equation as the Laplacian operator couples the contributions from the whole space. This can, however, be changed by replacing the Laplacian by a d’Alembertian operator. With this change, the Poisson equation becomes a Klein-Gordon equation and our transformed cosmological problem now reads $$\label{eq:S_KG} \begin{array}{rcl} i\hbar \partial_\tau \psi_n &=& - \displaystyle\frac{\hbar^2}{2a} \nabla^2_x \psi_n + aU\psi_n, \Big.\\ -\displaystyle\frac{1}{c^2}\partial^2_{\tau\tau}U + \nabla^2_x U &=& 4\pi Ga^2\delta\rho. \Big. \end{array}$$ This system is entirely local, meaning that it can be numerically evolved in time on a grid by summing contributions of local sampling points only. If the contribution of the term $-\frac{1}{c^2}\partial^2_{\tau\tau}U$ becomes small, then this system reduces to the Schrödinger-Poisson system discussed previously. This is in particular true in the non-relativistic limit $c\rightarrow \infty$. It is important to understand that the speed $c$ does not necessarily have to take the value of the physical speed of light (or of gravity) $c_{\rmn{phys}} = 299792458~\rmn{m}~\rmn{s}^{-1}$. It must simply be understood as a parameter that we can use to approach the physical problem we are interested in (equation \[eq:VP\]). As for $\hbar$, we are free to choose this parameter in a way that is convenient for our simulations, as long as we remain in the non-relativistic limit, meaning that the gravitational field $U$ propagates much faster than the matter fields $\psi_n$. 
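To fix ideas, a minimal 1D sketch of one explicit update of system (\[eq:S\_KG\]) on a periodic grid (static box with $a = 1$, a single wavefunction, forward-Euler stepping; the values of $\hbar$, $c$ and the grid are illustrative, and this is not a production integrator):

```python
import numpy as np

# One explicit update of the 1D Schrodinger--Klein-Gordon system
# (a = 1, periodic box, G = 1); all numerical values illustrative.
hbar, c, G = 1e-3, 10.0, 1.0
n, L = 512, 1.0
dx = L / n
x = np.linspace(0.0, L, n, endpoint=False)

# time step from the Courant and Schrodinger stability conditions
dt = 0.5 * min(dx / c, dx ** 2 / hbar)

def lap(f):                                   # periodic 1D Laplacian
    return (np.roll(f, -1) + np.roll(f, 1) - 2.0 * f) / dx ** 2

# a single illustrative wavefunction, the scalar potential and its "velocity"
psi = np.sqrt(1.0 + 0.1 * np.cos(2 * np.pi * x)).astype(complex)
U, dU_dt = np.zeros(n), np.zeros(n)

rho = np.abs(psi) ** 2                        # sum over all wavefunctions in general
delta_rho = rho - rho.mean()

# Klein-Gordon update: a wave equation for U sourced by the density contrast
dU_dt += dt * c ** 2 * (lap(U) - 4.0 * np.pi * G * delta_rho)
U += dt * dU_dt

# explicit Schrodinger update: d(psi)/dt = (i hbar / 2) lap(psi) - (i / hbar) U psi
psi += dt * (1j * hbar / 2.0) * lap(psi) - dt * (1j / hbar) * U * psi
print(np.sum(np.abs(psi) ** 2) * dx)          # total mass, approximately conserved
```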
Note, however, that using a non-infinite speed for the mediator of gravity in cosmological simulations may also be of some physical interest as the Poisson equation is, formally, only a weak-field approximation of the underlying Einstein equations from which a finite speed for gravity emerges. Thus, modifying this parameter may also yield interesting physical results. Let us summarize what we have achieved so far. Using the formalism derived in Sections \[ssec:QM\] to \[ssec:limit\], we have been able to construct a completely local system of equations (\[eq:S\_KG\]), which in the non-relativistic classical limit $\hbar\rightarrow 0$, $c\rightarrow\infty$ reduces to the problem of cosmological structure formation. The probability density function can be computed at any time using the definition of the WDF (equation \[eq:WDF\]) but we stress that this operation is in general not necessary as one is usually interested in the evolution of the mass density (equation \[eq:density\]) only. Let us finally say that replacing the Poisson equation by a scalar field is not strictly necessary as the algorithmic complexity of the problem has already been drastically reduced by the introduction of the WDF. Solving a [Schrödinger]{}-Poisson system instead of equation (\[eq:VP\]) is more accurate than solving our final system (\[eq:S\_KG\]). It does, however, simplify the numerical algorithms considerably in some cases and does not seem to impact the results heavily as long as the parameter $c$ is chosen wisely. The effect of this choice on the evolution of highly clustered matter fields found in the low redshift Universe has, however, not been explored. Lagrangian formulation {#ssec:lagrangian} ---------------------- The system of equations (\[eq:S\_KG\]) can be derived from a Lagrangian density using the Euler-Lagrange equations. We consider a real scalar field $U$ interacting with the complex scalar matter fields $\psi_n$. The Lagrangian for this system reads $$\begin{aligned} \mathcal{L} &=& \frac{1}{2c^2}\dot U ^2 - \frac{1}{2}\left(\vec{\nabla}U\right)^2 + \zeta\bar\rho U \nonumber \\ & & + \zeta \sum_n\lambda_n\left[\frac{i\hbar}{2}\left(\psi_n^*\dot\psi_n - \dot\psi_n^*\psi_n\right) \right. \nonumber\\ & & \qquad\qquad\quad- \left.\frac{\hbar^2}{2a}\vec{\nabla}\psi_n^*\cdot\vec\nabla\psi_n - |\psi_n|^2U\right]. \label{eq:lagrangian}\end{aligned}$$ The equations of motion are found to be $$\begin{aligned} i\hbar \dot\psi_n &=& - \frac{\hbar^2}{2a}\nabla^2 \psi_n + U\psi_n, \\ -\displaystyle\frac{1}{c^2}\ddot U + \nabla^2 U &=& \zeta \left(\sum_n \lambda_n\left|\psi_n\right|^2 - \bar\rho\right),\end{aligned}$$ which is the system we derived in the previous section if we set $\zeta = \frac{4\pi G}{a}$. The Hamiltonian density corresponding to the Lagrangian (\[eq:lagrangian\]) is given by $$\begin{aligned} \mathcal{H} &=& \frac{1}{2c^2}\dot U ^2 + \frac{1}{2}\left(\vec{\nabla}U\right)^2\nonumber \\ & & + \zeta \sum_n\lambda_n \frac{\hbar^2}{2a}\left|\vec{\nabla}\psi_n\right|^2 \nonumber \\ & & + \zeta \left(\sum_n \lambda_n\left|\psi_n\right|^2 - \bar\rho \right)U, \label{eq:hamiltonian}\end{aligned}$$ which has a positive definite kinetic energy term for the scalar potential, as expected from a well-behaved theory. One can also decompose this Hamiltonian into its various energy components. 
Doing so allows us to control the impact of the dynamic term for the field $U$ and consider it a valid approximation of the underlying Vlasov-Poisson problem when its value is much lower than the other energy components. Together with the computation of the higher order terms of the Wigner equation (\[eq:Wigner\]), this measure of the impact of $c\neq\infty$ gives us a measure of the approximations we made and can thus help us assess the validity of the outcome of our simulations. Generating Initial Conditions {#sec:IC} ============================= In the previous section, we showed how one can trade the Vlasov equation for the phase space distribution function for Schrödinger’s equation for the wavefunctions, as this allows for the introduction of a scalar field as the mediator of the gravitational force. Of course we do not require the wavefunctions to have any intrinsic physical interpretation. We rather consider them, just like the WDF, as a mathematical tool and not as fundamental entities. Still we are faced with the problem of how to determine a set of wavefunctions such that their WDF corresponds to the initial classical phase space distribution. One possible approach is to start from a set of $N$ particles sampling the phase-space distribution function and build Gaussians centred on each point with a certain width $\eta$ $$|\eta(x_i,v_i)\rangle\propto e^{-\frac{(x-x_i)^2}{2\eta^2}-\frac{i}{\hbar}v_i \cdot x}.$$ The wavefunction is then obtained from the incoherent superposition of these wave-packets for each “particle” $$|\psi\rangle = \frac{1}{\sqrt{N}}\sum_{i=1}^N e^{i\phi_i}|\eta(x_i,v_i)\rangle,$$ where $e^{i\phi_i}$ is a random phase. This sampling procedure relies on the assumption that each “particle” has a well-defined velocity. It is unclear how it could be generalized to the case of warm dark matter, where the velocity dispersion is important. We remove the need for this assumption by allowing for several wavefunctions. At the same time this allows us to represent any initial phase space distribution without relying on [$N$-body ]{}sampling. Such an approach was used by Widrow & Kaiser and is well-suited for Husimi distributions as they contain an extra Gaussian smoothing. We will, instead, try to work directly with the distribution function without sampling it in particles and hence taking the risk of facing the coarse graining and discreteness effects (Section \[ssec:simulations\]) that we are trying to avoid in our framework. Since the wavefunctions encode both, the position and velocity information, a single wavefunction (pure state) can in general not be sufficient to describe a generic $f(x,v)$. One should rather look for a set of wavefunctions (mixed state). The more wavefunctions we allow for, the more freedom we have and the more accurately the WDF should represent any given distribution. At the same time the total number of wavefunctions should be as small as possible because this will reduce the computational complexity of our numerical simulations. Given the classical distribution function $f(x,v)$, we want to expand it using the WDF Ansatz $$f(x,v) = \sum_{n=1}^{N} \lambda_n \int e^{\frac{i}{\hbar}vy} {\psi_n^*\left(x+\frac{y}{2}\right)} {\psi_n\left(x-\frac{y}{2}\right)} d^3y.$$ Fourier transforming from $v$-space to $\eta$-space we get $$f(x,\eta) = \sum_{n=1}^{N} \lambda_n {\psi_n^*\left(x+\frac{\eta}{2}\right)} {\psi_n\left(x-\frac{\eta}{2}\right)}. 
\label{eq:Fourier_WDF}$$ Finding the wavefunctions is now a simpler problem provided one can easily compute the Fourier transform of the distribution function one is interested in. We will discuss different approaches to tackle this problem of determining the set of wavefunctions $\psi_n$ and weights $\lambda_n$ representing a given initial phase space distribution $f(x,v)$. Let us stress from the outset that these procedures only need to be used once at the beginning of a numerical simulation, to set up the initial conditions. Last but not least, we need to emphasize that the number of wavefunctions is preserved by the quantum mechanical evolution. There is no evolution equation for $\lambda_n$. Only the shape of the $\psi_n$ will change. This shows that it is the complexity of the initial conditions that dictates the number of wavefunctions required. In a setup where only a restricted number of harmonics are present in the initial probability distribution, already relatively few wavefunctions would be sufficient to represent the system and its time evolution. Brute-force minimization {#ssec:minimization} ------------------------ The first and obvious method we present to choose the initial wavefunctions is a brute-force minimization. The underlying idea is to define a functional measuring the total absolute error made by approximating the phase space distribution by the WDF Ansatz $$\Phi := \int d^3x \int d^3\eta \left|f(x,\eta) - \sum_{n=1}^{N}\lambda_n \psi_{n+}\psi_{n-} \right|^2 ,$$ where, once again, $\psi_{n\pm} = \psi_n\left(x\pm \frac{\eta}{2}\right)$. We can then determine a set of wave functions that minimizes this error. In practice, the minimization is most easily done via discretization on a lattice. The problem is then cast into a minimization of the scalar error function with a large number of variables corresponding to the values of the wavefunctions at the lattice points. For different $N=1,2,\ldots$, we can determine the set of wavefunctions $\psi_n$ and corresponding weights $\lambda_n$ which minimizes the error. One can then compare the results for different $N$ to find an optimal approximation with a high enough accuracy and a minimal number of wavefunctions. Since we are not seeking a true quantum mechanical interpretation, let us consider the most general case of complex-valued weights. A naive minimization will not yield wavefunctions normalized to unity. Instead of adding this normalization as a constraint to the minimization, we remove the amplitude of the complex weights $\lambda_n$, and only keep their phases $e^{i\phi_n}$. The amplitudes of the weights are taken to be the norm of the wavefunctions, thereby normalizing them to unity. If we simply minimize the error functional, we will in general obtain wavefunctions that are not smooth enough on the lattice to be evolved numerically. For this purpose, it is useful to add a term of the form of a kinetic term to the functional that will allow us to enforce a certain degree of smoothness. We construct the kinetic term from the square of the derivatives with a certain overall factor $\chi$ to tune the smoothness: $$\mathcal{K} = \chi \int d^3x \sum_{n=1}^N \left|{\frac{\partial \psi_n}{\partial x}}\right|^2.$$ Finally, we minimize this kinetic term together with the total error summed over all lattice points $$\mathcal{E} = \mathcal{K} + \Phi.$$ We have applied the method to cosmic initial conditions of cold dark matter in the Zel’dovich approximation, for simplicity in a one-dimensional case. 
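A minimal 1D sketch of such a lattice minimization is shown below (not the actual implementation used here; the lattice size, the target, the value of $\chi$ and the absorption of the weights $\lambda_n$ into unnormalized wavefunctions are illustrative simplifications):

```python
import numpy as np
from scipy.optimize import minimize

# 1D lattice sketch of the brute-force determination of initial wavefunctions:
# minimize |f(x, eta) - WDF Ansatz|^2 plus a kinetic smoothing term.  For the
# cold target f(x, v) = rho(x) delta(v), the Fourier-space target is simply
# f(x, eta) = rho(x).  Weights lambda_n are absorbed into the wavefunctions.
nx, N, chi = 32, 2, 1e-3
x = np.arange(nx) / nx
rho = 1.0 + 0.5 * np.cos(2 * np.pi * x)
offsets = np.arange(-4, 5)                       # eta/2 in units of the lattice spacing
idx = np.arange(nx)

def unpack(p):                                   # flat real vector -> N complex fields
    z = p[: N * nx] + 1j * p[N * nx:]
    return z.reshape(N, nx)

def error(p):
    psi = unpack(p)
    phi = 0.0
    for s in offsets:                            # periodic x_+ = x + eta/2, x_- = x - eta/2
        wdf = np.sum(np.conj(psi[:, np.roll(idx, -s)]) * psi[:, np.roll(idx, s)],
                     axis=0)
        phi += np.sum(np.abs(rho - wdf) ** 2)
    kin = chi * np.sum(np.abs(np.diff(psi, axis=1)) ** 2)   # smoothness penalty
    return phi + kin

p0 = np.random.default_rng(0).normal(size=2 * N * nx) * 0.1
res = minimize(error, p0, method="L-BFGS-B", options={"maxiter": 200})
print("residual error:", res.fun)
```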
The results confirm the expectation that increasing the number of wavefunctions reduces the total error. In the case we studied, it turned out that already a relatively small number of wavefunctions (compared for instance to the number of lattice points) was enough to achieve a reasonable accuracy. As usual with minimization procedures, there is no guarantee that the algorithm converges to a global minimum. This means, for instance, that one has to repeat the minimization with different initial random seeds and compare their outcomes. Also, even though this minimization was shown to work for a given phase space distribution $f(x, v)$, in practice it becomes computationally challenging even for rather small 3D lattice sizes, as the number of variables in the minimization procedure grows quickly. Despite its applicability to any distribution function, the brute-force minimization might not be the best method to determine the initial wavefunctions. Eigenvalue problem for Hermitian operator {#ssec:operator} ----------------------------------------- We now turn our attention to obtaining an analytic solution to the problem of determining the initial wavefunctions. More precisely we will show how the Wigner Ansatz can be reformulated as an eigenvalue problem, which we can then solve analytically in some specific cases. Since $f(x,\eta)$ is the Fourier transform of a real function $f(x,v)$, it satisfies the condition $f^*(x,-\eta)=f(x,\eta)$. Introducing the coordinates $x_\pm := x \pm \frac{\eta}{2}$, we can define $$g(x_-,x_+) := f\left(\frac{x_++x_-}{2}, x_+-x_-\right),$$ which is then Hermitian $$g^*(x_+,x_-) = g(x_-,x_+).$$ Hilbert-Schmidt’s theorem states that any square-integrable Hermitian kernel can be expressed in terms of its spectral decomposition $$g(x_-,x_+) = \sum_n \lambda_n\psi^*_n(x_-)\psi_n(x_+), \label{eq:spectral}$$ where the $\lambda_n$ are real eigenvalues and ${\psi_n}$ the set of orthonormal eigenfunctions with respect to the standard scalar product on $L^2(\mathbf{C}^3)$ $$\left\langle \psi_n | \psi_m\right\rangle := \int \psi^*_n(x)\psi_m(x) d^3x = \delta_{nm}. \label{eq:scalar}$$ The Fourier space WDF (equation \[eq:Fourier\_WDF\]) has exactly the same form as the spectral decomposition (equation \[eq:spectral\]). Therefore we conclude that *any* given phase space distribution function $f (x, v)$ can be written exactly as a WDF, if need be with an infinite number of wavefunctions. The wavefunctions are the eigenfunctions of the Hermitian operator $g(x_- , x_+ )$ and its real eigenvalues correspond to the weights of the wavefunctions in the mixed state. Notice though, that they can in general take negative values, implying that we cannot give a full quantum-mechanical interpretation to the mixed state, as the corresponding density operator is not positive-definite. Let us emphasize once more that we consider the wavefunctions as a mere mathematical tool. Multiplying both sides of (\[eq:spectral\]) by $\psi_\alpha(x_-)$ and integrating over $x_-$, the orthonormality of the eigenfunctions implies the following integral equation $$\int g(x_-,x_+) \psi_\alpha(x_-) d^3x_- = \lambda_\alpha \psi_\alpha(x_+).$$ This equation shows that the determination of the wavefunctions reduces to finding the eigenfunctions of the Hermitian kernel $g$. Unfortunately, for a completely general phase space distribution function, the above equation might not allow an analytic solution.
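When no analytic solution exists, the integral equation can still be attacked numerically by sampling the kernel on a lattice, anticipating the matrix formulation of Section \[ssec:SVD\]. A minimal sketch of this discrete decomposition, with a toy one-dimensional density and our own variable names, could read as follows.

```python
import numpy as np

M = 200
x = np.linspace(-5.0, 5.0, M)

def rho(u):
    # smoothed top-hat profile, a toy stand-in for a cold distribution f(x, v) = rho(x) delta(v)
    return 0.5 * (np.tanh(20 * (u + 1)) - np.tanh(20 * (u - 1)))

# Hermitian (here real symmetric) kernel sampled on the lattice: G_ij = rho((x_i + x_j)/2)
G = rho(0.5 * (x[:, None] + x[None, :]))

# Diagonalize and sort the modes by decreasing |eigenvalue|; negative eigenvalues are allowed.
lam, Psi = np.linalg.eigh(G)
order = np.argsort(-np.abs(lam))
lam, Psi = lam[order], Psi[:, order]

# Keep only the modes above a relative threshold and check the truncated reconstruction.
keep = np.abs(lam) > 1e-3 * np.abs(lam[0])
lam, Psi = lam[keep], Psi[:, keep]
G_trunc = (Psi * lam) @ Psi.T
print(lam.size, "modes kept, max truncation error:", np.abs(G - G_trunc).max())
```

An SVD routine, as used later in this work, would identify the same modes, with the signs of any negative eigenvalues absorbed into the singular vectors.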
This procedure can be generalized by allowing for a more general scalar product containing a non-trivial weight function $w(x)$: $$\left\langle \psi| \phi \right\rangle_w := \int \psi^*(x) \phi(x) w(x) d^3x.$$ For such a scalar product, the eigenvalue decomposition of $g(x_-,x_+)$ still exists but the eigenfunctions are now orthonormal with respect to the weighted scalar product. The eigenvalue problem thus reads $$\int g(x_-,x_+) \psi_\alpha(x_-) w(x_-)\,d^3x_- = \lambda_\alpha \psi_\alpha(x_+). \label{eq:scalar_weight}$$ Let us emphasize that the weighted scalar product is only used to determine the wavefunctions whose WDF equals the classical distribution function. The choice of $w(x)$ is completely arbitrary and does not affect the properties of the WDF or the Schrödinger evolution of the wavefunctions. Clearly the spectrum will depend on the choice of weight function. The additional freedom of choosing $w(x)$ could allow one to reduce the number of wavefunctions needed in the Wigner Ansatz. Furthermore the arbitrariness of the weight function also reflects the freedom we have to choose wavefunctions representing the initial state. Fourier-series decomposition {#ssec:fourier} ---------------------------- Let us study the eigenvalue problem for a phase space distribution of the form[^4] $f(x, v) = \rho(x)\delta(v)$, meaning the product of a generic distribution in space with a delta function in velocity space. This choice corresponds to the case of CDM at early times, when the velocities are negligible. In such a case, the integral operator $g(x_-,x_+)$ becomes real and symmetric $$g(x_-,x_+) = \rho\left(\frac{x_++x_-}{2} \right).$$ We choose the trivial weight function $w(x) = 1$, which might not be the optimal choice for a minimal number of wavefunctions, but yields a working example of the method. We now assume a periodic distribution of matter in $[0,L]$ and expand the density as a Fourier series over the interval $$\rho(x) = \rho_0 + \sum_{n=1}^\infty a_n \cos\left(\frac{2\pi n}{L}x \right) + \sum_{n=1}^\infty b_n \sin\left(\frac{2\pi n}{L}x \right).$$ The term $\rho_0$ can be dropped without loss of generality as it can trivially be represented in the WDF using a constant wavefunction. The eigenvalue problem is easier to solve on the doubled interval $[0,2L]$. The standard scalar product for real functions on this interval is simply $$\left\langle\psi|\phi \right\rangle := \frac{1}{L} \int_0^{2L} \phi(x) \psi(x) dx,$$ which means that the eigenvalue problem reads $$\frac{1}{L} \int_0^{2L} \rho\left(\frac{x+y}{2} \right) \psi(y) dy = \lambda \psi(x).$$ We now have to choose an orthonormal basis for the wavefunctions $\psi$. As we work with a periodic interval, it is natural to use harmonic functions over $[0,2L]$.
The most general case is thus $$\psi(x) = \sum_{n=1}^\infty \left[\alpha_n \cos\left(\frac{\pi n}{L}x \right) + \beta_n \sin\left(\frac{\pi n}{L}x \right)\right].$$ Using trigonometric identities and the orthonormality relations between the sine and cosine functions of different modes, the problem can be recast into a matrix problem for the coefficients of the Fourier series: $$\left(\begin{array}{cc} a_n & b_n \\ b_n & -a_n \end{array}\right) \left(\begin{array}{c} \alpha_n \\ \beta_n \end{array}\right) = \lambda \left(\begin{array}{c} \alpha_n \\ \beta_n \end{array}\right).$$ Therefore, the normalized eigenfunctions and eigenvalues of the integral operator are finally given by $$\psi_n^\pm(x) = \mathcal{N}\left[ \left(a_n + \lambda_n^\pm\right) \cos\left(\frac{\pi n}{L}x \right)+ b_n \sin\left(\frac{\pi n}{L}x \right)\right], \label{eq:Fourier_decomposition}$$ $$\lambda_n^\pm = \pm \sqrt{a^2_n + b^2_n},$$ where $\mathcal{N} = \left[\left(a_n \pm \sqrt{a_n^2 + b_n^2} \right)^2 + b_n^2\right]^{-1/2}$ normalizes the eigenfunctions to unity. It can be checked explicitly that these eigenfunctions satisfy the condition of orthonormality and yield the correct spectral representation $$\begin{aligned} \rho\left(\frac{x_+ + x_-}{2}\right) = \sum_{n=1}^{\infty} \left[\lambda_n^+ \psi_n^+(x_+)\psi_n^+(x_-)\right. \nonumber \\ \quad~+ \left.\lambda_n^- \psi_n^-(x_+)\psi_n^-(x_-)\right],\end{aligned}$$ corresponding to the WDF $$\begin{aligned} P_W(x,v) = \sum_{n=1}^{\infty} \int e^{\frac{i}{\hbar}vy}\left[\lambda_n^+ \psi_n^+\left(x + \frac{y}{2} \right)\psi_n^+\left(x - \frac{y}{2}\right)\right. \nonumber \\ + \left.\lambda_n^- \psi_n^-\left(x+\frac{y}{2}\right)\psi_n^-\left(x-\frac{y}{2}\right)\right] d^3y.\end{aligned}$$ As a conclusion we have been able to solve the eigenvalue problem on the finite interval and use it to find the wavefunctions for the WDF Ansatz. This applies for a generic density profile $\rho(x)$ periodic on $[0,L]$ and a phase space distribution of the form $f(x, v) = \rho(x)\delta(v)$. The wavefunctions are harmonic functions with increasing velocity. In general we would need an infinite number of wavefunctions to avoid smoothing the smallest scales of the power spectrum. For many applications a finite or even small number of wavefunctions may be sufficient. In this procedure, we used the geometry of the problem to decide which orthonormal basis to use. The periodicity of the density distribution naturally led us towards the use of harmonic functions. In cases where the density is not periodic, one could use Chebyshev polynomials or any other basis whose geometry helps reduce the number of modes. As already mentioned, the technique presented in this section holds for any power spectrum and in particular is well suited to the case of WDM without initial velocities, as is usually assumed in numerical simulations. This truncated CDM power-spectrum can easily be decomposed into a Fourier series and hence used in our framework. If the thermal velocities of the WDM particles have to be included, then another technique has to be used (see sections \[ssec:operator\] and \[ssec:SVD\]). Cosmological initial conditions {#ssec:cosmo_IC} ------------------------------- Observations of structure in the universe are perfectly compatible with the simplest possible statistical description, namely a Gaussian distribution.
More precisely, each Fourier mode of the density contrast $\delta(\vec{k})$ (not to be confused with the Dirac delta distribution) satisfies an isotropic Gaussian distribution, entirely described by the power spectrum $P(k):= \langle|\delta(\vec{k})|^2\rangle$, which is a function of the modulus $k$ only, not of the direction. From the knowledge of the power spectrum one can then generate a realization with the desired statistical properties $$\begin{aligned} \delta(\vec{x}) &=& \sum_{\vec{k}}\left[\sqrt{P(k)}\mathcal{N}(0,1)\cos(\vec{k}\cdot\vec{x})\right. \nonumber\\ &&\left.\qquad+\sqrt{P(k)}\mathcal{N}(0,1)\sin(\vec{k}\cdot\vec{x})\right],\end{aligned}$$ where $\mathcal{N}(0,1)$ denotes a Gaussian random number with zero mean and unit dispersion. This shows that the density contrast for cosmological initial conditions is in a form for which we know how to construct the WDF, provided that we start our simulation at times when the Zel’dovich velocities of the particles are negligible. Compared to [$N$-body ]{}simulations we do not need to first perform an FFT to compute $\delta(\vec{x})$ but can find the initial wavefunctions directly from the power spectrum. Additionally we do not need any glassy pre-initial conditions to model the constant background. There is, however, a little caveat when generating initial conditions for CDM. Such an initial spectrum is formally made of a Dirac distribution in $v$-space, which means that even an infinite number of continuous wavefunctions cannot reproduce this singularity exactly. This can also be explained by the quantum aspect of our formalism. Heisenberg’s uncertainty principle forbids us from having, at the same time, an infinitely precise description of the position and velocity of our wavefunction. There will be some necessary spread in velocity space proportional to the value of $\hbar$ chosen in the simulation. The spectrum obtained will thus formally not be exactly the CDM one but will contain some intrinsic velocities for the DM. These would vanish in the limit $\hbar\rightarrow0$. We would in principle require as many wavefunctions as there are relevant Fourier modes in the power spectrum, which may lead to a prohibitive computational cost. Expanding the power-spectrum in another basis or using a non-trivial weight $w(x)$ in the scalar product (\[eq:scalar\_weight\]) may help reduce the number of wavefunctions required. On the other hand, we may turn this into an advantage, as this new formalism allows us to probe only some parts of the power spectrum without having to use the full range of $\vec{k}$. Matrix formulation {#ssec:SVD} ------------------ Given that the WDF Ansatz can be thought of as a spectral decomposition of a Hermitian operator, we can now analyse the solution in the discrete case, where the problem reduces to a matrix problem. Let us again restrict the analysis to one dimension. Working on a lattice $(x_1, x_2, \ldots,x_M)$, we can think of any function $f(x)$ as a vector $(f(x_1), f(x_2), \ldots, f(x_M))^T$ and of any function of two variables as a matrix.
We can thus reinterpret the functional relationship $$g(x_-,x_+) = \sum_{n=1}^N \lambda_n\psi^*_n(x_-)\psi_n(x_+)$$ in terms of matrices $$\hat G_{ij} = \sum_{n=1}^N \lambda_n {\Psi}_{jn}^*{\Psi}_{in} = \sum_{n=1}^N\sum_{k=1}^N {\Psi}_{in}\lambda_n\delta_{nk}{\Psi}_{kj}^\dagger.$$ The property $g(x_+,x_-) = g^*(x_-,x_+)$ translates into the fact that $\hat G \in\mathbf{C}^{M\times M}$ is a Hermitian matrix $\hat G^\dagger = \hat G$ which we can diagonalize by means of a unitary transformation $$\hat G_{ij} = \left({\Psi}\cdot\hat{\Lambda}\cdot{\Psi}^\dagger\right)_{ij},$$ where $${\Psi}\in\mathbf{C}^{M\times N}, \qquad\hat{\Lambda} = \mbox{diag}(\lambda_1,\ldots,\lambda_N)\in \mathbf{R}^{N\times N}.$$ The columns of $\Psi$ are the wavefunctions $\psi_n$ sampled on the lattice. The property that $\Psi$ is unitary, $\Psi^\dagger\Psi = \mathbf{1}$, implies the normalization of the wavefunctions on the lattice. This matrix formulation has the advantage that it is straightforward to compute the spectrum of any given Hermitian matrix. The shortcomings of this approach are two-fold: firstly we would need as many wavefunctions as lattice points, which comes at a big computational cost, and secondly the eigenvectors have no a priori reason to be smooth enough to be used as initial conditions for our numerical scheme. Note, however, that in all the cases we tested, the eigenvalue decomposition has yielded smooth enough functions. Moreover it has to be noted that we would need to compute the eigenvectors for a matrix containing the full 3D lattice. Computing the eigenvectors of an $n \times n$ matrix is in general a problem of complexity $\mathcal{O}(n^3)$. Since the size of the matrix is related to the number of lattice points $M^3$, one quickly reaches lattice sizes for which the solution of the eigenvalue problem becomes impossible. This issue can be solved by combining this technique with the minimization procedure. One can first use an eigenvalue decomposition on a coarse grid and use this as an input of the brute-force minimization algorithm on a finer grid. A technique using multiple grids at the same time could also be used in the same way that Gauss-Seidel relaxation is done in some particle-mesh gravity solvers. There are multiple known algorithms available to decompose a matrix into eigenvectors. We chose to use the singular value decomposition (SVD) as the publicly available implementations return the eigenvalues sorted in decreasing order. This allows us to choose only the wavefunctions whose eigenvalues are above a certain (arbitrarily chosen) level. Discussion and remarks {#ssec:remarks} ---------------------- For numerical simulations in a finite box with periodic boundary conditions, the spatial lattice resolution also dictates the resolution in velocity space. The size of the box is related to the lattice size in $v$-space since the wave-vectors take discrete values $\vec{v} = \frac{2\pi}{L}\vec{n}$. The maximal wave-vector is linked to the lattice spacing in real space. This illustrates the relationship between the number of wavefunctions and the spatial resolution of the simulation. If we keep all the modes, we need $\mathcal{O}(M^3)$ wavefunctions, where $M$ is the number of lattice points in one direction. Note that this corresponds, in order of magnitude, to the number of particles in [$N$-body ]{}simulations.
So even if we keep the maximal number of wavefunctions needed to accurately represent the initial conditions, the complexity of our numerical scheme will still be comparable to the naive $\mathcal{O}(N^2)$ complexity of [$N$-body ]{}simulations. As we will generally use far fewer wavefunctions, the complexity is much lower and may even trump the usual $\mathcal{O}(N\log N)$ complexity offered by tree-codes or FFT schemes to solve the Poisson equation. Another advantage of working with harmonic wavefunctions to represent the initial conditions is that we have an intuitive picture of what happens if we remove some modes. In analogy with the Fourier series, the density will not be represented exactly at every point, but the approximation becomes closer and closer as we include more and more modes. Knowing some of the properties of the system we want to model may help to get a deeper insight into which modes are really needed. The same is true when the density is expanded in another basis, even if it may be more difficult to get an intuitive mental picture of the impact of high-order modes when dealing with Chebyshev polynomials, say. In many simulations one does not necessarily need the same resolution on all scales. Instead one could work with an adaptive grid [@Plewa2005] and have higher resolution in the scales of interest. This would allow us to reach better precision while keeping the number of wavefunctions constant. A similar technique is used in [$N$-body ]{}solvers such as <span style="font-variant:small-caps;">RAMSES</span> [@Teyssier2002] or <span style="font-variant:small-caps;">ART</span> [@Kravstov1999]. In the special case of simulations of cosmic structure formation, the concept of cosmic variance could help to further reduce the number of wavefunctions required. Indeed, given that we can only observe one universe, the statistical fluctuation in large angular patches is high, as not many statistically independent patches are available in our sky. This is a well-known fact when studying the CMB radiation. This means that the statistical error is anyway large on these scales, so we do not need to work with a very high precision. Let us also recall that the freedom of choosing the weight function in the scalar product (\[eq:scalar\_weight\]) of the eigenvalue problem may help to considerably reduce the number of wavefunctions. Even though this seems to be a promising route to take, we did not investigate it any further in this work. Another area of interest could be the derivation of a scheme to generate initial wavefunctions analytically in the case of warm dark matter (see for instance [@Boyarsky2009]) or for any initial distribution with non-zero initial velocity spread. Implementation & Numerical results {#sec:numerics} ================================== In the previous two sections, we showed how the cosmological Vlasov-Poisson problem (\[eq:VP\]) can be approximated by the Schrödinger-Klein-Gordon system (\[eq:S\_KG\]). We showed that this approximation is valid in the limit $\hbar\rightarrow 0$, $c\rightarrow\infty$, $N\rightarrow\infty$. We also demonstrated how the wavefunctions can be built and that in general they can approximate the true density distribution in the limit $N\rightarrow\infty$. For some specific cases or for smart choices of eigenfunction basis, the exact $f(x,v)$ can even be recovered with a finite or even small $N$. But let us keep the general case in mind.
Contrary to the [$N$-body ]{}framework, where the convergence towards the exact solution is not granted in general, we propose a method where we have a handle on the behaviour of the simulation and where we are able to easily test the dependency of the result on the parameters $\hbar,c$ and $N$. This allows us to truly speak about converged results and understand the limits of our model. Let us now present how this scheme can be discretized and implemented on a computer. We will present the implementation we used, which is probably the simplest version of what can be done. Implementation {#ssec:implementation} -------------- The simplest possible numerical scheme to solve partial differential equations is to use an explicit scheme in time. An implicit scheme would be more precise but would require more computing time and memory, the latter quantity being, as we will show, a rather scarce resource. This explains the choice of an explicit scheme, even if this imposes the use of a Courant-like condition for our time steps. For the same reasons a scheme accurate up to order $(\Delta \tau)^2$ in time has been chosen. As going to a precision of order $(\Delta \tau)^4$ would require almost twice as much memory, this choice cannot reasonably be made. Using a symplectic integrator may, however, be useful in future studies as they do not cost more in terms of memory and conserve the energy of Hamiltonian systems to a very good approximation. Regarding the spatial derivatives, there are no constraints coming from the memory requirements. One could in principle go to an arbitrary level of accuracy. But as the time derivatives only have a limited precision, it is not worth going to a precision higher than $(\Delta x)^4$, using the usual five-point stencil. With these two points being set, the system of equations (\[eq:S\_KG\]) can be written on a lattice as follows: $$\begin{aligned} \psi_n(x,\tau + \Delta \tau) &=& \psi_n(x,\tau-\Delta\tau) + i\frac{\hbar\Delta \tau}{a(\tau)} \nabla^2_{\rmn{dis}} \psi_n(x,\tau) \nonumber\\ && - i\frac{2\Delta\tau}{\hbar}U(x,\tau)\psi_n(x,\tau) \nonumber \\ U(x,\tau+\Delta\tau) &=& 2U(x,\tau) - U(x,\tau-\Delta\tau) \nonumber\\ && +c^2\Delta\tau^2\nabla^2_{\rmn{dis}}U(x,\tau) \nonumber\\ &&-\frac{4\pi Gc^2\Delta\tau^2}{a(\tau)}\left(\sum_n^N\lambda_n|\psi_n(x,\tau)|^2-\bar\rho\right),\nonumber\end{aligned}$$ where the discretized Laplacian operator is given by $$\begin{aligned} \nabla^2_{\rmn{dis}} f(x) &=& \frac{1}{12 \Delta x^2} \left[\big.-f(x+2\Delta x) +16f(x+\Delta x)\right. \nonumber \\ && \qquad\left. -30 f(x)+16f(x-\Delta x) -f(x-2\Delta x)\big.\right]. \nonumber\end{aligned}$$ In the non-cosmological case, the factors $a(\tau)$ can be dropped and one can use time $t$ instead of conformal time $\tau$. One can show that this scheme is unitary and conserves the norm of each wavefunction. Since the iterative solution contains the fields at neighbouring lattice sites, care has to be taken that the boundary conditions are implemented correctly. This is most easily done by augmenting the arrays containing the values of the fields on the lattice by so-called ghost points to store the periodic boundary conditions. The last important point regarding the numerics is the choice of $c$ and $\hbar$. It is clear that the Klein-Gordon equation reduces to the Poisson equation in the limit $c\rightarrow \infty$ and that the higher order terms of (\[eq:Wigner\]) vanish in the limit $\hbar \rightarrow 0$. But numerical stability imposes more conditions on these values.
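Before discussing these stability conditions, the structure of the scheme above can be made explicit in code. The following one-dimensional sketch (periodic boundaries, fixed scale factor, our own variable names) performs a single update of the wavefunctions and of the scalar field; it is only meant to illustrate the two-level explicit stepping.

```python
import numpy as np

def laplacian(f, dx):
    # Fourth-order accurate five-point stencil with periodic boundaries (acts on the last axis).
    return (-np.roll(f, -2, axis=-1) + 16 * np.roll(f, -1, axis=-1) - 30 * f
            + 16 * np.roll(f, 1, axis=-1) - np.roll(f, 2, axis=-1)) / (12 * dx ** 2)

def step(psi, psi_old, U, U_old, lam, dx, dt, hbar, c, G, a, rho_bar):
    """One explicit update: psi has shape (N, M) for N wavefunctions on M lattice points,
    U and U_old have shape (M,), and lam holds the weights lambda_n."""
    psi_new = (psi_old
               + 1j * hbar * dt / a * laplacian(psi, dx)
               - 2j * dt / hbar * U * psi)
    rho = np.sum(lam[:, None] * np.abs(psi) ** 2, axis=0)
    U_new = (2 * U - U_old
             + (c * dt) ** 2 * laplacian(U, dx)
             - 4 * np.pi * G * (c * dt) ** 2 / a * (rho - rho_bar))
    return psi_new, U_new
```

A full run simply iterates this step, shifting the two time levels and periodically storing the density $\sum_n \lambda_n |\psi_n|^2$.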
An explicit scheme can only converge if there is no information propagating of a distance of one cell during one time step. The scalar field propagates at speed of $c$, which gives us the following condition: $$\frac{\Delta x}{\Delta \tau} > c,$$ which is the usual *Courant condition*. In practice, the right-hand side is multiplied by a constant ($10 - 10^2$) in order to avoid any instability and to remain far from the actual condition. This condition gives a clear relation between those three quantities and shows that one cannot arbitrarily improve the spatial discretization without changing the time step size. It is not surprising to have to introduce such a condition. Indeed, if we were to truly use a value of $c=\infty$ in our simulations, we would have to use smaller and smaller time steps for a fixed grid spacing. At some point, solving the Poisson equation would become algorithmically cheaper. The Courant condition is thus the price to pay to avoid solving the usual Poisson $\mathcal{O}(M\log M)$ problem. The evolution of the Schrödinger equation also imposes conditions on the time and space slicing. It can be shown that the following relation $$\frac{\Delta x^2}{\Delta \tau} > \hbar$$ must hold, encouraging us, once again, to choose $\hbar$ as small as possible. At this stage, no lower bound has been analytically derived for $\hbar$. The full dependence on $\hbar$ of the simulation results is still an open question left for further investigation of this framework. Complexity and memory requirements {#ssec:complexity} ---------------------------------- Having presented the algorithm of the time evolution, let us estimate its computational complexity and memory requirements. Consider a three-dimensional spatial grid made of $M^3$ lattice points. Let $N_\psi$ be the number of wavefunctions we evolve. Adding the spatial components of the scalar field, $N_f = N_\psi + 1$ is the total number of fields we evolve in time. At each time step, we need to compute each of the fields at every lattice point, making the algorithm of complexity $$\mathcal{O}(M^3\cdot N_f).$$ This has to be compared with [$N$-body ]{}simulations, which have a naive complexity of $\mathcal{O}(N^2)$, that can be reduced to $\mathcal{O}(N \log N )$ using optimized algorithms. The more particles are tracked, the better becomes the spatial resolution. Roughly, for a total of $N$ particles, $\Delta x_{\mbox{resol}} \sim L_{\mbox{box}}/N^{1/3}$. In our case, the spatial resolution is defined by the lattice spacing $\Delta x_{\mbox{resol}} \sim L_{\mbox{box}}/M^3$. Thus for comparable spatial resolution, we would need $M \sim N^{1/3}$. From this we conclude that the complexity of our algorithm scales as $\mathcal{O}(M\cdot N_f)$. In the ideal situation where we only need a few wavefunctions, $N \sim \mathcal{O}(1)$, our new framework provides an $\mathcal{O}(M)$ algorithm to study structure formation. It seems that in the worst case we would need as many wavefunctions as there are Fourier modes on the lattice, $N_\psi \sim \mathcal{O}(M^3)$ implying a complexity $\mathcal{O}(M^2)$, which is the same as the naive force summation in [$N$-body ]{}simulations. These estimates illustrate that our algorithm can indeed compete with the complexity of [$N$-body ]{}simulations. It also shows how crucial it is to reduce the number of wavefunctions as much as possible. Let us next have a look at the memory requirements of our approach. 
Given that our time evolution relies on a two-level explicit scheme, we need to keep the field configurations at two time steps in memory. For $N_\psi$ complex wavefunctions and one real scalar field on the whole lattice, we need $2\cdot M^3(2N_\psi+1)$ variables. Assuming that each is stored as a `double` of 8 bytes, we can estimate the minimal memory needed by our numerical simulation to be $$\geq 2 \cdot M^3(2N_\psi+1)\cdot 8~\mbox{bytes}.$$ Let us look once more at the worst case scenario $N_\psi \sim\mathcal{O}(M^3)$. Hence, the memory required now rises to $$\geq 32 \cdot N_\psi^2~\mbox{bytes}.$$ This has to be compared with [$N$-body ]{}simulations, which have to store at least the position and velocity of each particle at every time step leading to a memory consumption of $$\geq 2\cdot 6 \cdot N \cdot 8~\mbox{bytes}.$$ As an example we may give the Millennium simulation [@Springel2005], which needed about 400 GB to store the information of their $2160^3 \simeq 10^{10}$ particles, in agreement with the above estimate. We have to conclude that our approach can be strongly constrained by its memory requirements. The gain in computational complexity seems to have come at a considerable cost in memory. If we consider the 1 TB of memory available to the Millennium simulation, we could only have $\sim 57^3$ lattice points! However, if we were to use as many wavefunctions as spatial lattice points, we could as well directly simulate the Vlasov-Poisson system without introducing any approximation. The whole point of the framework we introduced is to simulate a realistic probability distribution with a low number of wavefunctions, in which case the memory requirements are not prohibitive any more and scale with $N$ as in the [$N$-body ]{}case. We also mentioned the idea of using an adaptive mesh to improve the (spatial) resolution without having to increase the number of wavefunctions. We now turn to two cases we simulated and show that this new framework is able to reproduce the known solutions. We also show how the solution depends on the parameters $c$, $\hbar$, $N$ and $\Delta x$. Spherical collapse of a DM sphere {#ssec:collapse} --------------------------------- There are few known non-trivial analytical solutions to the Vlasov-Poisson system (\[eq:VP\]) even in the static Universe ($a(\tau) = 1$) case. The collapse of a uniform sphere is among these and is of particular interest for cosmological applications. A comprehensive treatment of the case, known as the *Tolman solution* [@Tolman1934], can, for instance, be found in the textbook [@Weinberg1972]. A uniform sphere of initial density $\rho_0$ and radius $R_0$ is collapsing under its own gravitational potential. Gauss’s law for gravity states that the evolution of a sphere is not influenced by the matter lying outside itself. This means that the density inside the sphere will remain independent of the radius at every time $t$. In other words, all matter will reach the centre at the same time which will lead to an infinite density. At this stage, the Newtonian description becomes invalid and one would have to use GR in order to take into account all the effects. In the framework of Newtonian gravity, the matter will simply cross the centre and oscillate around it. Due to the discretization, the simulated central density cannot become infinite and these oscillations cannot be reproduced exactly. The same shortcomings are present in [$N$-body ]{}codes.
The evolution of the radius $R$ with time is a quantity which can be easily tracked. In parametric form, the Tolman solution reads $(0\leq\beta\leq\pi)$: $$\begin{aligned} t &=& \frac{\beta + \sin\beta}{2\sqrt{\frac{8\pi G}{3}\rho_0}}, \\ R &=& \frac{1}{2}(1+ \cos\beta).\end{aligned}$$ The density inside the sphere will evolve following the relation $$\rho(r,t) = \frac{\rho_0R_0^3}{R^3(t)}. \label{eq:Tolman}$$ For simplicity in what follows, we set $R_0=1$, $G=1$ and $\rho_0 = \pi$. The final collapse time (in arbitrary units) is then reached at $t_c \approx 0.306$. We will work on the periodic interval $[-5,5]$ which should be big enough to avoid any unwanted effects from the boundaries. This problem possesses an obvious spherical symmetry and in order to be able to explore a wide resolution range it is interesting to re-derive the whole framework presented in Sections \[sec:framework\] and \[sec:IC\] using this assumption. A careful derivation can be found in appendix \[sec:spheric\] and the end result is that the Vlasov-Poisson system with spherical symmetry can be re-cast into the one-dimensional Schrödinger-Klein-Gordon system $$\begin{aligned} i\hbar {\frac{\partial \psi_n}{\partial \tau}} &=& \frac{-\hbar^2}{2a(\tau)}{\frac{\partial ^2\psi_n}{\partial r^2}} + \frac{V}{r}\psi_n,\\ - \frac{1}{c^2}{\frac{\partial ^2V}{\partial \tau^2}}+ {\frac{\partial ^2V}{\partial r^2}} &=& 4\pi G r\left(\frac{2\pi}{r^2}\sum_n \lambda_n |\psi_n|^2 - \frac{4\pi^2\Xi}{V_{\rm{tot}}} \right),\end{aligned}$$ where the potential $V = Ur$ and $\Xi$ is the normalization of the wavefunctions (see equation \[eq:normalisation\]). The main difference with the framework presented earlier is the explicit dependence on the position coordinate $r$. The algorithms developed to find the wavefunctions corresponding to a given distribution function are identical. To generate the initial set of wavefunctions and eigenvalues we chose to use the matrix formulation (Section \[ssec:SVD\]). The initial density profile being discontinuous, it is obvious that it cannot be recovered exactly with a finite set of continuous functions. There will be some noticeable differences between the exact density profile and its approximation appearing at the discontinuity points, that is at the edge of the sphere. It is thus better to use an approximately correct but continuous density profile. In the case at hand, we used the following initial setup: $$\rho(r,t=0) = \frac{\pi}{2}\tanh\left(\xi(r+1)\right) - \frac{\pi}{2}\tanh\left(\xi(r-1)\right), \label{eq:smoothed_IC}$$ with $\xi=20$. The value of $\xi$ is somewhat arbitrary and has been chosen in order to be as close as possible to the perfect sphere (i.e. high $\xi$) and avoid any Gibbs oscillation at the edge of the sphere (i.e. low $\xi$). The results presented here are not really dependent on $\xi$. This parameter has just been introduced for convenience and to avoid having to analyse the effects of these unwanted and unrealistic oscillations. In fact, even a value of $\xi=\infty$ yields comparable results to what is shown below once the Gibbs oscillations have been smoothed out manually from the output. Once discretized on a lattice, the eigenvalue decomposition is straightforward to obtain, for instance using the SVD function implemented in the usual scientific software packages. Recall that there is no guarantee that the obtained functions will be periodic on the interval of interest or even that these functions will be smooth.
It is a pure matrix operation without any relation between the matrix elements representing the wavefunctions. The interval $[-5, 5]$ has been uniformly discretized into $5000$ line elements in order to get a high enough spatial accuracy. This means that we want to perform the SVD decomposition of a $5000 \times 5000$ matrix and that we can use up to $N = 5000$ wavefunctions in the simulation. The matrix reads $$\hat G_{ij} = \rho\left(\frac{r_i+r_j}{2}\right),$$ where the $r_i$’s are the uniformly distributed lattice points. This matrix is by construction symmetric and positive semi-definite, meaning that its eigenvalues will be positive or null. Most of the SVD routines in scientific packages sort the eigenvalues $\lambda_n$ according to their magnitude which allows us to classify the most important contributions and discard the negligible terms in equation \[eq:WDF\] if one does not want to use all the $N$ functions. The first four wavefunctions are shown on figure \[fig:wavefunctions\]. The wavefunctions obtained through this procedure are smooth (at the lattice level at least) and real but are neither periodic nor anti-periodic, which leads to spurious diffusion at the boundaries of the box. For this reason, we decided to multiply them by a square-box like compact function going to zero close to the box boundaries. The first four wavefunctions before and after applying this window filter are also shown on figure \[fig:wavefunctions\]. This procedure does not modify the distribution function obtained through the WDF. This reflects the fact that there are infinitely many ways to decompose the same $f(r,v)$ into wavefunctions. Notice that this procedure of adding a window function can only be done if the density vanishes at the boundaries. ![The first four wavefunctions contributing to the WDF of the approximate uniform sphere before (dotted lines) and after (superimposed solid lines) having applied the smooth window function to make them vanish at the boundaries of the box. These functions are different from zero almost everywhere but their combination in a WDF corresponds to the density profile (dashed black line, equation \[eq:smoothed\_IC\]), which is zero on most of the interval.[]{data-label="fig:wavefunctions"}](Figures/psi.eps){width="84mm"} Apart from the wavefunctions, the eigenvalue associated with each mode also enters the WDF (equation \[eq:WDF\]). These are obtained at the same time as the discretized wavefunctions and their values are represented on figure \[fig:eigenvalues\]. The actual normalization of the eigenvalues does not really matter as any common factor can be absorbed as normalization in front of the WDF. But the ratios of the values play a role. Not all the wavefunctions (modes) entering the decomposition of $f(r,v)$ play an equally important role, exactly as in the case of a Fourier series decomposition, where some of the modes can safely be neglected. As can be seen on figure \[fig:eigenvalues\], the values of the various $\lambda_n$ decrease rapidly and for $n>100$, they represent less than $10^{-3}$ of the most important mode. As the eigenvalues are constants of motion, we can hope that neglecting modes with a high $n$ (and hence a small $\lambda_n$) will not affect the simulation too much. In fact, unless the magnitude of the wavefunction corresponding to one of the neglected modes grows significantly over the course of the simulation, this mode should remain small at all times and can thus be safely ignored.
![The first 1000 eigenvalues $\lambda_n$ corresponding to the SVD decomposition of the spherical collapse problem. The values decrease rapidly and become negligible (when compared to the first one) for $n > 100$. They even reach a minimum close to the machine epsilon for $n>700$. Our fiducial run uses all the wavefunctions up to $n=79$ which corresponds to $\frac{\lambda_n}{\lambda_0} > 10^{-3}$. This limit is shown as the red solid line on the figure. The small panel presents a zoomed-in region of the eigenvalues with $n<25$. The decrease on this small subset is already of more than an order of magnitude.[]{data-label="fig:eigenvalues"}](Figures/lambda.eps){width="84mm"} In our main run, we used all eigenfunctions $\Psi_n$ whose eigenvalue fulfils $\lambda_n>10^{-3} \lambda_0$, which left us with only $N=79$ functions to evolve. The other numerical parameters we chose in our fiducial run are $c=10$ and $\hbar=0.005$. We do not expect $\hbar$ to have a big impact on the results in this case as the potential is a combination of a second and third order polynomial for which the higher order corrections in the Wigner equation (\[eq:Wigner\]) should be small. ![image](Figures/rho_evolution.eps){width="\textwidth"} Figure \[fig:rho\_evolution\] shows four density profiles at different time steps in the simulation together with the analytical solution (equation \[eq:Tolman\]). Until $t\approx 0.2$, the behaviour of the density profiles remains close to the exact solution apart from the very edges of the sphere that are slightly smoothed. The centre of the density profile is almost flat as expected and has almost the correct value. When coming closer to the collapse time $t_c\approx0.306$, the profiles starts to deviate more and more from the expected profile. This can be seen on the last two panels of figure \[fig:rho\_evolution\] where the density inside the sphere is clearly different from a square box function. The very centre of the sphere still remains close to the analytical solution but the edges are not sharp any more and are smoothed over many lattice elements. This strongly suggests that the estimation of the derivatives of both the potential and the wavefunctions are getting poor or that the number of wavefunctions used in the run is not high enough. Our scheme uses a fourth order accurate derivative stencil but this does not necessarily help recovering sharp features such as the one present at the edge of the sphere. Increasing $N$ and reducing $\Delta r$ may help recover the right density profile everywhere in the sphere. The results on figure \[fig:rho\_evolution\] have been obtained using $N=79$ wavefunctions corresponding to all eigenvalues $\lambda_n > 10^{-3}\lambda_0$. This should be sufficient as the eigenvalues are constants of motion and we do not expect any of the neglected wavefunctions to grow by a huge factor over the course of the simulation. In order to assess this, we run the same simulation with $N=155$, corresponding to all wavefunctions whose eigenvalues $\lambda_n > 10^{-5} \lambda_0$. Notice here that decreasing the minimal eigenvalue entering the WDF by two orders of magnitude only increases $N$ by a factor of $2$. We are thus far from the worst case scenario (see Section \[ssec:complexity\]) where the same number ($N=5000$) of wavefunctions than lattice points have to be used. ![Comparison of the density profiles at the initial time for $N=79$ (green solid line) and $N=155$ (blue dashed line) wavefunctions. 
The figure zooms in on the central regions where the differences can be spotted. The $N=79$ line presents a lot of oscillations that are suppressed if more wavefunctions are used. The $N=155$ line almost perfectly matches the density profile given by equation \[eq:smoothed\_IC\]. Notice, however, that this differs from the perfect sphere profile (red dashed-dotted line), which cannot be represented by a finite set of continuous functions.[]{data-label="fig:comparison_N"}](Figures/comparison_N.eps){width="84mm"} Figure \[fig:comparison\_N\] shows a comparison at $t=0$ of those two initial setups. The figure only shows a zoomed-in view focused on the sphere itself as the differences are less visible in the outer regions of the simulation domain. As can be seen, the $N=155$ initial setup (dashed blue line) is a much better representation of the smoothed density profile (equation \[eq:smoothed\_IC\]). At this resolution, the two are indistinguishable. The $N=79$ initial conditions (green solid line) present some oscillations inside the sphere that are very similar to the Gibbs phenomenon that appears when computing the Fourier series of the square box function. Using a smoothed density profile and decomposing it into eigenmodes using the matrix formulation thus yields a result which is very similar to generating the ICs through the Fourier decomposition (Section \[ssec:fourier\]). This could have been anticipated by looking at the wavefunctions (figure \[fig:wavefunctions\]), where the different $\psi$’s resemble sine and cosine functions, at least qualitatively. As can be seen, the relative error introduced by using only $N=79$ wavefunctions is of the order $10^{-3}$, whereas the error computed when using $N=155$ is smaller than $10^{-6}$, showing once again that increasing the number of eigenfunctions used by a factor of $2$ improves the accuracy of the representation by more than $2$ orders of magnitude. However, it should be noticed that using another basis or weighting function for the eigenvalue decomposition (\[eq:scalar\_weight\]) may yield another $N$ with the same or different accuracy. Comparing the number of wavefunctions only makes sense when using a similar decomposition technique. Let us also mention that we tried using harmonic functions and Chebyshev polynomials for this test case and obtained similar results. At later times, the simulation snapshots are identical to the ones presented earlier on figure \[fig:rho\_evolution\]. The relative difference between the two runs is of order $10^{-3}$ as in the initial conditions. This implies that the difference between our simulation results and the analytical solution cannot be reduced by using more and more wavefunctions. The additional modes that have been discarded when using only $79$ eigenfunctions do not contribute significantly to the final results. This could have been expected as their weightings ($\lambda_n$) are very small compared to the main modes. We can thus gain confidence in the way we generate ICs: discarding higher order modes may not be an issue, and we may be able to run our algorithm in a near-linear regime even when a violent collapse of matter is studied. In conclusion, increasing $N$ does make the initial conditions and the simulation outputs converge towards a solution at a high rate. However, the discrepancy between the solution and the simulation does not, apparently, come from a wrong choice of the parameter $N$. Let us now explore the dependency on the grid resolution.
![The output at $t=0.26$ for different lattice resolution using $N=79$ wavefunctions. The green dash-dotted line corresponds to a low resolution run with $\Delta r=5\cdot10^{-3}$, the blue dashed line corresponds to our fiducial run at $\Delta r = 2\cdot 10^{-3}$ and the black solid line is the output of a high resolution run using $\Delta r=10^{-3}$. The quality of the output is clearly improved by using a higher resolution lattice. This can be directly related to the problem of estimating sharp derivatives on a grid, where the only solution is to increase the resolution.[]{data-label="fig:comparison_dr"}](Figures/comparison_dr.eps){width="84mm"} On figure \[fig:comparison\_dr\], we show the results of three runs at different grid resolutions leaving the number of wavefunctions and all the other parameters fixed. The blue dashed line corresponds to the fiducial run ($\Delta r=2\cdot10^{-3}$), the green dotted line to a lower resolution run using $\Delta r=5\cdot10^{-3}$ and the black solid line corresponds to the high resolution run with $\Delta r=10^{-3}$. As can be seen, increasing the resolution has a huge impact on the quality of the result. As anticipated, the sharp features can only be resolved correctly when enough grid points are used. Notice that the high resolution run almost matches exactly a rescaled version of the initial density profile (equation \[eq:smoothed\_IC\]), but does break down at later times in the same way that the fiducial run did between $t=0.20$ and $t=0.26$ (figure \[fig:rho\_evolution\]). Increasing the resolution is thus important to be able to retrieve all features of this somewhat artificial test case. This test case presents a strong density gradient at the edge of the sphere which does not spread over many cells. This makes it difficult to resolve for a grid code but in a cosmological simulation such sharp gradients should not arise as the density profiles usually follow power laws and do not have infinite gradients. As already mentioned, using an adaptive mesh would help in such a case as more resolution elements could be used at the edge of the sphere without having to slow down the simulation due to an unnecessary oversampling of the steady regions. This demonstrates that our framework converges towards the analytical solution once the spatial resolution is high enough and once the number of wavefunctions has been carefully chosen to represent the distribution function of interest. ![Evolution of the density at the centre of the sphere ($r=0$) for different values of the numerical speed of light $c$. The red dashed line corresponds to the analytical solution (\[eq:Tolman\]), the vertical dash-dotted line represents the final collapse time and the different solid lines correspond to the different values of $c$. The higher the value of $c$, the closer the line lies to the exact solution. The line with $c=\infty$ has been obtained by solving Poisson’s equation on the grid instead of evolving gravity using Klein-Gordon’s equation. The quality of the simulation outcome clearly depends on the value of $c$ but if the value is high enough (compared to the velocity of the matter), the difference with the $c=\infty$ becomes very small. Once the density peak has been reached, the value of $\rho(r=0)$ decreases as is expected after the different matter shells have crossed. The simulations have not been carried on much beyond this point as the departure from the analytical solution is already significant. 
Moreover, the peak cannot be represented accurately by any numerical means and any subsequent event would be erroneous.[]{data-label="fig:comparison_c"}](Figures/comparison_c2.eps){width="84mm"} This new framework should converge towards the solution in the limit $N\rightarrow\infty$, $\Delta x\rightarrow 0$, $c\rightarrow\infty$ and $\hbar\rightarrow0$, the last two being, despite their physical origin, only numerical parameters. Figure \[fig:comparison\_c\] presents the evolution of the density at the centre of the sphere for our fiducial run and for higher values of $c$. The simulation with $c=\infty$ has been obtained by solving Poisson’s equation on the grid at every time step instead of using Klein-Gordon’s equation. Increasing $c$ improves the quality of the result and even relatively small values ($c=50$) of this parameter lead to a behaviour close to the limiting case. Poisson’s equation can thus safely be replaced by Klein-Gordon’s equation. The maximal speed reached by matter shells in our fiducial run is $p\approx10$ before the very end of the collapse, which cannot be studied by a simulation anyway. Using a value of $c=10$ is thus intuitively too low and this plot confirms it. The speed of gravity must be at least a few times bigger than the matter velocity. Once the peak has been reached, the different matter shells should cross the centre and the density at $r=0$ has to decrease. The start of this behaviour can also be seen in figure \[fig:comparison\_c\]. The main issue with this analysis is that it happens after the moment when the density at the centre becomes infinite and hence not representable on a computer. In practice, all the wavefunctions should become infinite at this precise point and zero elsewhere. This is obviously impossible on a lattice and does anyway lead to inaccurate derivatives. To get closer and closer to the singularity requires a finer and finer mesh. The smaller the mesh size, the better the shell crossing can be followed. Notice, however, that this is an issue present in this ideal sphere case only. In a realistic scenario, where the matter has a non-zero radial velocity and in an expanding background, the usual NFW profiles [@Navarro1996] should be recovered without singularity problems. This would, however, require a truly 3D simulation and not just a spherically symmetric 1D setup. Increasing $c$ has a big impact on the simulation run time as the time step size varies as $c^{-1}$, making the total simulation wall clock time proportional to $c$. An option that has not been explored here is to change the value of $c$ to be always a (small) multiple of the maximal matter speed in the simulation. This would allow us to choose bigger time steps in the early stage of the simulation when all the matter moves slowly. It would also avoid making an initial guess for the value of $c$ without knowing how fast the matter will move during the run. As discussed earlier, the dependency on $\hbar$ is difficult to test in this case as the analytical potential only presents first order corrections in the Wigner equation. We did run some simulations with various values of this parameter without noticing important differences in the behaviour of the matter distribution. Understanding the exact dependency on $\hbar$ of the framework is left to future work. This simple spherical collapse test showed that we were able to reproduce the analytical solution in the limit $N\rightarrow\infty$, $\Delta x\rightarrow0$ and $c\rightarrow\infty$ as expected.
We investigated the different deviations from the exact solution and could explain them through our choices of numerical parameters. We also discussed how the implementation could be improved by using mesh refinement and adaptive $c$ values. The results obtained so far show that this new framework can reproduce known solutions and give us confidence to use it on more involved cases. Going beyond the first collapse {#ssec:nonLinear} ------------------------------- With the previous test case, we showed how our framework was able to reproduce the collapse of a matter distribution in the linear regime and studied the dependency on the model parameters. However, in most cases of interest, the systems considered in simulations are well past the linear regime. They also present multiple matter streams, i.e. at a given position $x$, there are multiple velocities $v$ and the distribution function is “wound up”. It is hence important to explore whether this behaviour can be recovered by our framework. Note that tracking precisely these multiple matter streams is extremely difficult in the case of [$N$-body ]{}simulations unless advanced phase-space tessellation techniques are used [@Abell2012; @Shandarin2012]. The test case presented in the previous section exhibits a nice analytical solution but, as discussed, the matter distribution becomes infinitely thin at the time of the collapse which makes all attempts at taking derivatives difficult. To alleviate this issue, we use a simpler one dimensional test case with a much smoother density distribution. In this section, we study the evolution in one dimension of the cold distribution function $$f(x,v) = \rho(x)\delta(v), \qquad \rho(x) = \rho_0\exp(-x^2 / 2s^2),$$ with $s$ the scale size of the matter distribution. This test case has already been studied by [@Widrow1993] in the context of their framework which makes use of a Husimi distribution instead of the Wigner one. We will use a periodic domain of size $L_{\rm{box}} \gg s$. The first step in the algorithm is to decompose the initial condition into a series of wavefunctions. There are many ways to do this and one could easily use either a decomposition in terms of sine waves or the matrix decomposition used in the previous test case. The decomposition in Fourier modes is straightforward and the initial distribution function can be recovered in a satisfactory way with fewer than $10$ wavefunctions. However, to demonstrate the fact that the number $N$ of wavefunctions is only a relevant quantity once a decomposition scheme has been chosen, we will use a single wavefunction to represent $f(x,v)$: $$\Psi(x,t=0) = \sqrt{\rho(x)}.\label{eq:3}$$ Using this simple decomposition leads to an initial Wigner distribution of the form $$f(x,v) = \rho(x)\exp(-v^2/2\hbar^2),$$ once equation \[eq:initial\_distribution\] has been applied. This example also explicitly shows how $\hbar$ enters the framework and the effect this quantity has on the initial conditions and hence on the subsequent evolution of the distribution function. The previous test case gave us some insights into how to choose the value of the speed of light $c$. We could use similar considerations here to choose an appropriate value; however, to simplify the discussion, we choose to set $c=\infty$ and solve Poisson’s equation for gravity at every time step using the fast Fourier transform algorithm. In what follows, we set $\rho_0=1$, $s=10^{-2}$, $\hbar=10^{-3}$, $L_{\rm{box}}=1$ and discretize our volume into $M=100$ intervals.
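For completeness, this initial condition can be written down in a few lines, following equation \[eq:3\]. The sketch below uses the parameter values quoted above; the symmetric, periodic lattice centred on the origin is our own choice:

```python
import numpy as np

# Parameter values as quoted in the text.
rho0, s, hbar, L_box, M = 1.0, 1e-2, 1e-3, 1.0, 100
x = np.linspace(-L_box / 2, L_box / 2, M, endpoint=False)

rho = rho0 * np.exp(-x ** 2 / (2 * s ** 2))
psi = np.sqrt(rho).astype(complex)   # single wavefunction representing the cold f(x, v)
```

The corresponding initial Wigner distribution then carries the Gaussian spread in velocity set by $\hbar$, which is exactly the explicit $\hbar$ dependence discussed above.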
![Evolution of the density at the centre of the domain ($x=0$) beyond the first collapse ($t>t_c$). The multiple collapses and re-expansions of the matter distribution can be tracked by the framework. The appearance of multiple matter streams during the evolution of the collapse can be resolved by the simulation even with one single wavefunction.[]{data-label="fig:nonLinearEvolution"}](Figures/evolution){width="84mm"} To trace the non-linear evolution of the system, we track the value of the density field at $x=0$. This is in essence similar to figure \[fig:comparison\_c\] for the previous case but we now let the simulation run past the initial collapse time. The result of this evolution is shown on figure \[fig:nonLinearEvolution\]. As can be seen, the first peak is followed by a relaxation of the system and then by a series of additional regularly spaced collapses that occur every time the matter distribution crosses the spatial origin. The first four peaks can be well followed despite the relatively low spatial resolution and the single wavefunction used in this example. Using a higher value of $M$ leads to more peaks being resolved and fewer oscillations in the value of the central density. This, once again, highlights the key importance of the spatial resolution over the raw number of wavefunctions. This is also true in standard [$N$-body ]{} simulations that use meshes to solve Poisson’s equation. The quality of the solution is mostly driven by the high number of grid elements and less by the pure number of particles used in the simulation. Our framework is hence able to track the collapse of a matter distribution when multiple shell crossings occur and in the presence of multiple matter streams.\ It is interesting to discuss what would happen if more wavefunctions were used to represent $f(x,v)$. Obviously, one cannot add more $\Psi$ to the decomposition given by equation \[eq:3\] as it already provides an exact match to the density profile; any addition would reduce that agreement. Note that one might want to consider doing so as it could reduce the spread in velocity and hence give a better set of initial conditions but there does not seem to be a simple way to do so. Alternatively, one might consider using a Fourier decomposition of $\rho(x)$ (section \[ssec:fourier\]) with a high enough number of cosine waves to reproduce $\rho(x)$. A small number of waves will be sufficient as the profile is smooth enough and by doing so, the spread in momentum can be reduced. The more wavefunctions are used, the smaller the initial spread in velocity, allowing us to get rid of the explicit dependence on $\hbar$ in the initial Wigner distribution. This obviously comes at a higher numerical cost but might be necessary in some situations. The freedom of getting a spread in velocity space smaller than $\hbar$ is a fundamental difference between our framework and earlier work based on the Husimi function [@Widrow1993; @Davies1997]. Linear structure growth in [$\Lambda$CDM ]{} {#ssec:growth} -------------------------------------------- We now apply this new framework to a simple example of cosmic perturbation growth.
We will consider the simplest possible case of a constant background $\bar\rho$ in a cold dark matter Universe and a small perturbation $\epsilon \ll 1$ with a single Fourier mode $k_p$ taken along the $x$ direction: $$\rho(\vec{x},t) = \bar\rho + \epsilon\left[\cos(k_p x) + \sin(k_p x) \right].$$ This basic setup should be sufficient to study the behaviour of the framework in the expanding Universe case. Generating the wavefunctions corresponding to this initial distribution function was discussed in Section \[ssec:fourier\]. The equations (\[eq:Fourier\_decomposition\]) define a representation of the density in terms of wavefunctions. As we only have one single mode, we only need one wavefunction for the constant background ($\psi_0$) and two for the perturbation. We run the simulation on a $30^3$ spatial lattice corresponding to a physical box size of $60~\rm{Mpc}$. It is important to notice here the low number of wavefunctions $N = 3 \ll 30^3$, allowing us to run our algorithm in a near linear regime. We choose the scale of the perturbation to be larger than the Nyquist wavelength and small compared to the box size to avoid unwanted effects due to the limited box size. In a purely matter dominated (Einstein-de Sitter) Universe, the scale factor $a(\tau)$ will grow as the square of the conformal time. Without loss of generality, we can normalise it such that it is equal to one at the start of the simulation $a(\tau_{\rm{ini}})=1$, implying $$H^2_{\rm{ini}} = \frac{8\pi G}{3} \bar\rho_{\rm{com}} a(\tau_{\rm{ini}})^{-3} = \frac{8\pi G}{3} |\psi_0|^2.$$ The above relation fixes the value of this wavefunction in terms of the initial Hubble parameter, which can be computed by rescaling today’s value $H_0$ to the redshift corresponding to the beginning of our simulation $$H_{\rm{ini}} = H_0(1+z_{\rm{ini}})^{3/2}.$$ We ran our simulations for the choice $z_{\rm{ini}} = 1000$ and using today’s Hubble parameter $H_0 \simeq 70~\rm{km}~\rm{s}^{-1}~\rm{Mpc}^{-1}$. The initial conditions with a density contrast of $\delta_{\rm{ini}} = 10^{-6}$ were evolved up to a redshift of $z_{\rm{fin}} = 200$. We use a normalised timeline such that $z_{\rm{ini}}$ corresponds to $ \tau=0$ and $z_{\rm{fin}}$ corresponds to $\tau=1$ using $3\cdot 10^4$ time steps. The same initial perturbations were evolved in a matter-dominated, expanding universe and in a static universe without expansion. The parameter $c$ has been chosen in accordance with the results of the previous test by making it larger than the speed of the matter in the simulation and small enough to avoid drastically reducing the time step. In what follows, $c=10$. The parameter $\hbar$ has been, once again, chosen small enough for the quantum corrections to be negligible. More specifically, this means that the first quantum correction in the Wigner equation (\[eq:Wigner\]) has to be small compared to the contribution to the classical Vlasov equation: $${\frac{\partial V}{\partial x}}{\frac{\partial P_W}{\partial v}} > \frac{1}{24}\hbar^2{\frac{\partial ^3V}{\partial x^3}}{\frac{\partial ^3P_W}{\partial v^3}}.$$ We verified that this is indeed the case in our simulations when using $\hbar=0.005$. We could, in principle, verify that the higher-order corrections are also suppressed, but computing the fifth derivative of the potential would yield a very noisy estimate and may not lead to useful results. Figure \[fig:density\_growth\] shows the time evolution of the density.
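Before turning to the results shown in that figure, here is a small numerical illustration of the normalisation described above (not part of the original pipeline): it computes $H_{\rm{ini}}$ and the corresponding $|\psi_0|^2$. The unit system (Mpc, km s$^{-1}$, $M_\odot$) and the value of $G$ in those units are our own choices.

```python
import numpy as np

# Assumed unit conventions: lengths in Mpc, velocities in km/s, masses in Msun.
G = 4.301e-9          # gravitational constant in Mpc (km/s)^2 / Msun (our choice of units)
H0 = 70.0             # km/s/Mpc, today's Hubble parameter as quoted in the text
z_ini = 1000.0

H_ini = H0 * (1.0 + z_ini)**1.5                  # H_ini = H0 (1 + z_ini)^{3/2}
psi0_sq = 3.0 * H_ini**2 / (8.0 * np.pi * G)     # |psi_0|^2 = 3 H_ini^2 / (8 pi G)
```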
The initial amplitude of the harmonic density increases with time, without distortion of the shape, as expected from the linear regime of structure formation. The growth of structure seems thus to be well reproduced by our framework even with such a low number of lattice points and wavefunctions. The simulation could, in principle, be carried on to a much lower redshift than $z=200$ but at some point, the spatial resolution issues highlighted in the previous test would appear here as well. Recall that we have only 30 grid points in our $60~\rm{Mpc}$ box. As soon as the variation of the density becomes important on a scale of order a few $\rm{Mpc}$, the discretized derivatives will cease to approximate the analytical ones and our formalism will break down as would any uniform grid code with the same resolution. We, hence, decided to restrict ourselves to the regime where our density field and the wavefunctions are well behaved in order to make a useful analysis of the results. To analyse the growth of the perturbation in more detail, we performed a Fourier transform on the density contrast to obtain $|\delta_k|^2$. In this way we could also check that no other Fourier modes than the one initially present were excited during the simulation. This is a cross-check for the linearity of the evolution of the small density perturbation. The figure \[fig:growth\_comparison\] compares the growth $|\delta_k(\tau)|^2/|\delta_k(\tau_{\rm{ini}})|^2$ for our mode in the expanding and non-expanding universes. Clearly, the growth of the perturbation is suppressed in presence of expansion. ![Time evolution of the density field in an expanding universe. The different lines correspond to various time steps in normalised units. The values are taken along one line parallel to the $x$-axis in the box but all lines yield the same results. The initial amplitude of the harmonic density increases with time, without distortion of the shape, as expected from structure formation in the linear regime.[]{data-label="fig:density_growth"}](Figures/density_growth.eps){width="84mm"} ![Comparison of the growth of the perturbation $|\delta_k(\tau)|^2/|\delta_k(\tau_{\rm{ini}})|^2$ in a non-expanding universe (red squares, upper line) and in a matter-dominated, expanding universe (blue circles, lower line) as a function of the conformal time $\tau$. As expected, the growth is clearly suppressed in the presence of expansion.[]{data-label="fig:growth_comparison"}](Figures/growth_comparison.eps){width="84mm"} These results clearly show that our framework is able to follow the growth of a single-mode density perturbation in an expanding background. The main features are recovered even when a low number of lattice points and wavefunctions is used. By taking advantage of the ease of decomposition in orthonormal Fourier modes of the cosmological power spectrum (Section \[ssec:cosmo\_IC\]) more complex cases can be studied by superposing the different modes. The results obtained here give us confidence about the behaviour of the framework in the non-linear regime of cosmic growth. The main features of [$\Lambda$CDM ]{}can probably be recovered in a higher-resolution run with more wave functions and a longer run time. As in the previous test case, one could track the matter distribution into the non-linear regime and track the appearance of multiple matter streams. This is of course of crucial importance for realistic simulations of structure growth in the Universe. 
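For reference, the mode-amplitude diagnostic used above can be sketched as follows; the array `rho` and the index of the excited mode are placeholders for the actual simulation output.

```python
import numpy as np

def mode_power(rho, kp_index=1):
    """Return |delta_k|^2 for the single excited mode along the x axis of a 3D density cube."""
    delta = rho / rho.mean() - 1.0              # density contrast
    delta_k = np.fft.fftn(delta) / delta.size   # normalized Fourier amplitudes
    return np.abs(delta_k[kp_index, 0, 0])**2

# growth of the perturbation relative to the initial snapshot:
# growth = mode_power(rho_late) / mode_power(rho_ini)
```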
It is, however, obvious that the addition of the scale factor $a(\tau)$ in the simulation will not alter the behaviour seen in section \[ssec:nonLinear\] and we are confident that multiple streams would also appear and be correctly tracked by the evolution of the wavefunctions. A more detailed study of the framework in the context of cold or warm dark matter cosmologies is left for future work. Conclusion {#sec:conclusion} ========== We introduced a new alternative framework for the simulation of structure formation which is not based on the usual discretization of the density field into a set of particles. We made use of the Wigner distribution function to recast the distribution function into a set of wavefunctions. We could thus replace the 6-dimensional Vlasov equation by a set of Schrödinger equations acting on the wavefunctions. The Poisson equation for gravity has been transformed into a Klein-Gordon equation, making the system of equations completely local. We demonstrated how this system of equations could be derived from a Lagrangian and how the total energy and mass are conserved by the equations of motion. We presented different methods to generate the initial conditions depending on the distribution function of interest and described how a cosmological power spectrum can be discretised in a low number of wavefunctions. The framework has then been tested on two simple models to assess its validity, and the dependency of the outcome on the numerical parameters has been sketched. The results obtained thus far show that this framework is viable and may become a possible alternative to the [$N$-body ]{}method. The important new feature introduced in this framework is the possibility of simulating a generic distribution function and not only cold dark matter. Although finding an easy and generic way to generate initial conditions for warm or hot dark matter remains an open question, there are no intrinsic limitations in the framework that could prevent such simulations. It also provides an alternative to [$N$-body ]{}codes and could thus help assess the validity of simulations. Our technique can be shown to converge towards the solution in the limit $c\rightarrow\infty$, $\hbar\rightarrow0$ and $N\rightarrow\infty$, making formal convergence studies possible. The computational complexity of the algorithm grows as $\mathcal{O}(N\cdot M)$ where $M$ is the number of lattice points. This demonstrates the importance of finding the appropriate decomposition of the distribution function in wavefunctions. The complexity can hence be anything between linear and quadratic in the number of points. The case of structure formation may be close to the ideal case thanks to the possibility of discretising the power spectrum in a low number of modes. This scheme is especially aimed at tackling the fundamental challenges that the [$N$-body ]{}method faces when dealing with non-CDM cosmologies. This includes the simulation of a WDM Universe, but also of neutrino components in a standard [$\Lambda$CDM ]{}model or of any other particle with non-negligible thermal velocities. At the same time, exploring CDM through this framework might help understand more precisely the limitations of the [$N$-body ]{}method by comparing results in the same way that various hydrodynamic solvers help understand the behaviour of the codes and their limits. One could also argue [@Sikivie2010] that such an approach may be appropriate to simulate axions, which remain quantum during the entire cosmological evolution.
In such a case, the real value of $\hbar$ and particle mass would have to be used, which would, however, probably lead to very high computational costs. In this paper, we demonstrated the validity of the method, but many promising and interesting options have not yet been explored. The first obvious domain to investigate is the dependency of the results on $\hbar$. Early results tend to show that it may not be a crucial issue thanks to the universal gravitational profiles being low-degree power laws and hence generating only small quantum corrections to the Vlasov equation. It still remains an open question. The other important area of investigation is the generation of initial conditions for more general cases than simple CDM. The procedures presented here cannot be applied without making some educated guess on the best shape of the harmonic functions or without having to solve very large matrix eigenvalue problems. Combining some of these procedures or using interpolation techniques between lattice points are possible improvements worth exploring. Finally, on the implementation side, much work can be done to make the codes more efficient. We already discussed the possibility of using an adaptive mesh to refine the grid in the regions of interest. It may also be possible to use an adaptive value of $c$ and of the time step in the same way that [$N$-body ]{}codes use different time bins for different particles. The locality of the interactions is an important feature as it makes the parallelisation of the code straightforward. Running such a simulation on large clusters could thus be easily achieved without having to worry too much about complex communications and scalability issues. Let us conclude by stating that our approach has a number of attractive features. Most importantly, the full phase space information is encoded in the wavefunctions. Working with many wavefunctions, we are in principle able to represent any given phase space distribution, including those where the velocity dispersion is important. Potentially, this would allow for numerical simulations of structure formation in the presence of warm dark matter. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by the Swiss National Science Foundation and by the Tomalla Foundation. We would like to thank S. Cole, A. Maccio, J. Read and T. Theuns for useful comments and discussions. O.R. acknowledges support in part by the National Science Foundation under Grant No. PHYS-1066293 and the hospitality of the Aspen Center for Physics. Spherically symmetric case {#sec:spheric} ========================== The framework presented in section \[sec:framework\] can be simplified in the case of (spatially) spherically symmetric distribution functions. The dimensionality of the problem is then reduced, which allows more comprehensive convergence studies thanks to the lower number of discretization points needed. If we consider only radial motion, then the distribution function can only depend on the distance to the centre $r$, the radial velocity $v_r$ and the angle between the position and velocity vectors. We choose to use the cosine of this angle as our coordinate, denoted as $y$ in what follows. The gravitational potential depends only on the distance to the centre. We thus have $f\equiv f(r,v_r,y)$ and $U\equiv U(r)$.
The density at a given $r$ and total mass can be expressed using these new coordinates and read $$\begin{aligned} \rho(r) &=& \frac{2\pi}{a^3(\tau)} \int_0^\infty v_r^2 dv_r \int_{-1}^1 dy\, f(r,v_r,y),\\ M &=& 4\pi \int_0^\infty r^2 dr \rho(r).\end{aligned}$$ It can be shown that the total mass is a conserved quantity under the equations of motion for $f$. The Vlasov-Poisson system using those coordinates and assuming spherical symmetry becomes $$\begin{aligned} \frac{\partial f}{\partial\tau} + \frac{yv_r}{a(\tau)}\frac{\partial f}{\partial r} - a(\tau)\frac{\partial U}{\partial r}\left[y\frac{\partial f}{\partial v_r}+\frac{(1-y^2)}{v_r}\frac{\partial f}{\partial y}\right] \nonumber\\ + \frac{(1-y^2)v_r}{ra(\tau)}\frac{\partial f}{\partial y} = 0, \Bigg.\\ \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \frac{\partial U}{\partial r}\right) = 4\pi Ga(\tau)^2 \left( \rho(r) - \bar\rho \right).\end{aligned}$$ It may, in principle, be possible to find a Wigner-like distribution function for which the Wigner equation corresponds to this Vlasov equation. The wavefunctions entering such a distribution would probably obey a spherically symmetric version of Schrödinger’s equation. This is, however, not the only way to handle this system.\ The distribution function can be decomposed in two parts, one for each sign of the coordinate $y$: $$f(r,v_r,y) = f_-(r,v_r)\delta_-(y+1) + f_+(r,v_r)\delta_+(y-1),$$ where $\delta_\pm(x)$ are Dirac distributions defined on the interval $[-1,1]$ only. We can then integrate over $y$ and obtain two equations, one for $f_+$ and another identical up to the signs for $f_-$, together with a boundary condition ensuring that the two distributions match when they reach $r=0$ or $v_r=0$. The next step in the procedure is to rescale these distribution functions by introducing $g_\pm(r,v_r) = f_\pm(r,v_r)r^2v_r^2$ and define a combined distribution $h(r,v_r)$ such that $$h(r,v_r) = \left\lbrace \begin{array}{rcl} g_+(|r|,|v_r|) & \rm{if} & rv_r > 0\\ g_-(|r|,|v_r|) & \rm{if} & rv_r < 0\\ \end{array} \right.\label{eq:h}$$ This new distribution function will obey the following Vlasov equation $${\frac{\partial h}{\partial \tau}} + \frac{v_r}{a(\tau)} {\frac{\partial h}{\partial r}} - a(\tau){\frac{\partial U}{\partial r}}{\frac{\partial h}{\partial v_r}} = 0,$$ which is identical to the 1D Vlasov equation (\[eq:VP\]). The difference lies in the definitions of the density and mass, which now read $$\begin{aligned} \rho(r) &=& \frac{2\pi}{r^2a^3(\tau)} \int_{-\infty}^\infty dv_r h(r,v_r),\\ M &=& \frac{4\pi^2}{a^3(\tau)} \int_{-\infty}^\infty dr \int_{-\infty}^\infty h(r,v_r)\, dv_r.\end{aligned}$$ As we are back to the well-known case of Cartesian coordinates (at least for the Vlasov equation), we can introduce the same decomposition in terms of wave functions as in Section \[ssec:WDF\]. We will thus solve a set of 1D Cartesian Schrödinger equations alongside a 3D spherically symmetric Poisson equation with a somewhat unusual density definition.
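As an illustration of the folding defined in equation \[eq:h\], the following sketch maps two one-sided arrays $g_\pm(r,v_r)$ (sampled on positive $r$ and $v_r$ grids) onto the combined distribution $h$; the array layout is an assumption of ours.

```python
import numpy as np

def combine_distribution(g_plus, g_minus):
    """Build h(r, v_r) of eq. (h) from g+ and g- sampled on positive r and v_r grids."""
    n_r, n_v = g_plus.shape
    h = np.zeros((2 * n_r, 2 * n_v))     # rows: r from negative to positive; columns: v_r likewise
    h[n_r:, n_v:] = g_plus               # r > 0, v_r > 0  ->  g+(r, v_r)
    h[:n_r, :n_v] = g_plus[::-1, ::-1]   # r < 0, v_r < 0  ->  g+(|r|, |v_r|)
    h[n_r:, :n_v] = g_minus[:, ::-1]     # r > 0, v_r < 0  ->  g-(r, |v_r|)
    h[:n_r, n_v:] = g_minus[::-1, :]     # r < 0, v_r > 0  ->  g-(|r|, v_r)
    return h
```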
Using the usual trick $V(r) = U(r)ra(\tau)$, the Laplacian term in Poisson’s equation can be simplified and the system we want to evolve reads $$\begin{aligned} i\hbar {\frac{\partial \psi_n}{\partial t}} &=& - \frac{\hbar^2}{2a(\tau)}{\frac{\partial ^2\psi_n}{\partial r^2}} + m\frac{V}{r}\psi_n \Bigg.,\\ {\frac{\partial ^2V}{\partial r^2}} &=& 4\pi Gr\left( \frac{2\pi}{r^2} \displaystyle\sum_n \lambda_n |\psi_n(r)|^2 - \frac{4\pi^2\Xi}{V_{\rm{tot}}}\right), \label{eq:spheric_Poisson}\end{aligned}$$ where $\Xi$ is the normalization of the wavefunctions that can be related to the total mass of the system through $$M = \frac{4\pi^2}{a^3(\tau)}\sum_n \lambda_n \int_{-\infty}^{\infty} |\psi_n(r)|^2 dr = \frac{4\pi^2\Xi}{a^3(\tau)}. \label{eq:normalisation}$$ A dynamical term can then be added to equation \[eq:spheric\_Poisson\] to make the framework entirely local as discussed in Section \[ssec:local\]. The system can eventually be evolved as if it were a purely one-dimensional problem. The only differences are the more complicated density terms sourcing Klein-Gordon’s (or Poisson’s) equation and the $1/r$ term in the potential of Schrödinger’s equation. The generation of initial conditions can be done in exactly the same way as outlined in Section \[sec:IC\]. The only difference is the use of the modified distribution $h(r,v_r)$ (equation \[eq:h\]) instead of $f(r,v_r,y)$ as the starting point of the procedure. \[lastpage\] [^1]: E-mail: matthieu.schaller@durham.ac.uk [^2]: It may sound surprising that the equation for the harmonic oscillator reduces exactly to the classical Vlasov equation, even though we know that the quantum mechanical treatment introduces discrete energy levels. In this case the quantum information is encoded purely in the initial conditions. [^3]: This formulation of the statement is not fully satisfying, as the true semi-classical limit is also a statement about the properties of the wavefunction, and not identical to sending $\hbar\rightarrow 0$ which is anyway a dimensional parameter. [^4]: For the sake of simplicity we restrict the analysis of this section to the one dimensional case, but the generalization to the 3D case is straightforward.
--- abstract: 'We study wave turbulence in shallow water flows in numerical simulations using two different approximations: the shallow water model, and the Boussinesq model with weak dispersion. The equations for both models were solved using periodic grids with up to $2048^2$ points. In all simulations, the Froude number varies between $0.015$ and $0.05$, while the Reynolds number and level of dispersion are varied in a broader range to span different regimes. In all cases, most of the energy in the system remains in the waves, even after integrating the system for very long times. For shallow flows, non-linear waves are non-dispersive and the spectrum of potential energy is compatible with $\sim k^{-2}$ scaling. For deeper (Boussinesq) flows, the non-linear dispersion relation as directly measured from the wave and frequency spectrum (calculated independently) shows signatures of dispersion, and the spectrum of potential energy is compatible with predictions of weak turbulence theory, $\sim k^{-4/3}$. In this latter case, the non-linear dispersion relation differs from the linear one and has two branches, which we explain with a simple qualitative argument. Finally, we study probability density functions of the surface height and find that in all cases the distributions are asymmetric. The probability density function can be approximated by a skewed normal distribution as well as by a Tayfun distribution.' author: - 'P. Clark di Leoni' - 'P. J. Cobelli' - 'P. D. Mininni' bibliography: - 'Tesis.bib' title: Wave turbulence in shallow water models --- Introduction ============ Turbulence and non-linear wave interactions in the ocean surface are related to important processes in atmospheric sciences and oceanography, such as the exchange of energy between the atmosphere and the ocean [@iafrati_modulational_2013; @dasaro_enhanced_2011]. This exchange, in turn, plays an important role in the dynamics of the planetary and oceanic boundary layers, with consequences on the transport and mixing of momentum, CO$_2$, and heat [@ivey_density_2008]. The incorrect modeling of these phenomena affects climate evolution predictions [@rose_upper-ocean--atmosphere_2010; @cavaleri_wind_2012]. Ocean surface waves are also of interest in the search of renewable energies [@falnes_review_2007]. There are several ocean surface models which provide an excellent framework for studying *weak turbulence theory* [@hasselmann_non-linear_1962; @hasselmann_non-linear_1963; @hasselmann_non-linear_1963-1; @zakharov_kolmogorov_1992]. This theory was developed to describe the out-of-equilibrium behavior of systems of dispersive and weakly non-linear waves (see, e.g., [@newell_wave_2011; @nazarenko_wave_2011]). Unlike theories of strong turbulence, for waves and under the assumption of weak nonlinearities, the equations for two-point correlations can be closed and exact solutions with constant flux can be found. Besides this assumption, it is also assumed that wave fields are homogeneous, and that free waves are uncorrelated. Weak turbulence theory has been applied to capillary and gravito-capillary waves [@zakharov_kolmogorov_1992], vibrations on a plate [@during_weak_2006], rotating flows [@galtier_weak_2003], and magnetohydrodynamic waves [@galtier_weak_2000; @nazarenko_wave_2011; @schekochihin_weak_2012]. For some of these systems, the predictions of the theory are compatible with results obtained from experiments or from numerical simulations. For example, see Refs. 
[@deike_decay_2012; @falcon_capillary_2009; @kolmakov_quasiadiabatic_2004] for capillary waves, [@cobelli_different_2011] for gravitocapillary waves, [@mordant_are_2008; @boudaoud_observation_2008; @cobelli_space-time_2009] for vibrations on a plate, and [@leerink_multiscale_2012; @mininni_energy_2007] for magnetohydrodynamic waves. Although agreement has been found between theory, numerical simulations and experiments, there are also discrepancies. In some of these cases the compatibility is limited to the spectrum of certain fields (see, e.g., [@mininni_energy_2007]), or to specific configurations used to generate the excitations. Moreover, for many systems it is also not clear whether the weak turbulence approximation holds for all times, as the solutions are not homogeneous in wavenumber space and at sufficiently small scales eddies may become faster than waves resulting in strong turbulence [@chen_resonant_2005]. One of the most important applications of weak turbulence is in surface gravity waves. In oceanography, the Phillips’ spectrum [@phillips_equilibrium_1958], derived using dimensional arguments from strong turbulence and considering the coupling between waves, was long considered to be correct. However, different observational and experimental data [@toba_local_1973; @donelan_directional_1985], as well as numerical simulations [@badulin_weakly_2007], suggest that the actual spectrum is closer to that of weak turbulence. In fact, Phillips himself suggested that his spectrum may not be valid in the ocean [@phillips_spectral_1985]. Nonetheless, a scaling compatible with Phillips’ spectrum is still observed in numerical simulations [@korotkevich_simultaneous_2008] when the forcing is strong. This suggests that while weak turbulence provides an elegant theoretical way to study wave turbulence in the ocean, more considerations are necessary to appropriately describe the diversity of regimes found in these flows [@newell_wave_2011]. Most of the work done in ocean surface waves under the weak turbulence approximation concerns deep water flows. But the theory can also be applied to the shallow water case, i.e., for gravity waves whose wavelengths are large compared to the height of the fluid column at rest (see [@zakharov_statistical_1999; @onorato_four-wave_2008]). In this case, the theory leads to the prediction that the energy spectrum follows a $\sim k^{-4/3}$ behavior. Behavior compatible with this prediction was found both experimentally and observationally [@smith_equilibrium_2003; @kaihatu_asymptotic_2007; @falcon_observation_2011]. It was also found that an inertial range with a $\sim k^{-2}$ dependency can develop in the shallower regions of the fluid. The comparison between different shallow water models, with different degree of shallowness (and of dispersion) is therefore of interest, e.g., for the study of waves in basins with inhomogeneous depth. In the shallow water regime there are several models that can be considered to describe the ocean surface. There is the linear theory (see, e.g., [@landau_fluid_2004]) which can predict the dispersion relation of small amplitude waves, but which is insufficient to study turbulence. There are also non-linear theories, such as the shallow water model [@pedlosky_geophysical_1987] for non-dispersive waves, as well as the Boussinesq model [@whitham_linear_1974] for weakly dispersive waves which is the one used in some of the most recent works on the subject [@onorato_four-wave_2008]. 
While the former non-linear model is valid in the strict shallow water limit, the latter can be used in cases in which the wavelengths are closer to (albeit still larger than) the depth of the basin. In the present work, we study turbulent solutions of the shallow water model and of the Boussinesq equations using direct numerical simulations. Previous numerical studies considered the Hamiltonian equations for a potential flow with a truncated non-linear term, or the kinetic equations resulting from weak turbulence theory at moderate spatial resolution [@dyachenko_weak_2004; @korotkevich_simultaneous_2008] (with the notable exception of [@yokoyama_statistics_2004]). Here, we solve the primitive equations, without truncating the non-linear terms, potentially allowing for the development of vortical motions and of strong interactions between waves, and with spatial resolutions up to $2048^2$ grid points. The paper is organized as follows. In section \[equations\] we introduce both models, show the assumptions made in order to obtain them, derive their energy balance equations, and briefly discuss the predictions obtained in the framework of weak turbulence theory. In section \[simulations\] we describe the numerical methods employed and the simulations. Then, in section \[results\] we introduce several dimensionless numbers defined to characterize the flows, and present the numerical analysis and results. We present wavenumber energy spectra and fluxes, time resolved spectra (as a function of the wavenumber and frequency), frequency spectra, and probability density functions of the fluid free surface height. Finally, in section \[conclusion\] we present the conclusions. The most important results are: (a) As in previous experimental and observational studies [@smith_equilibrium_2003; @kaihatu_asymptotic_2007] we find now in simulations that different regimes arise depending on the fluid depth and the degree of nonlinearity of the system. (b) We obtain a power spectrum of the surface height compatible (within statistical uncertainties) with $\sim k^{-2}$ in the shallow water (non-dispersive) case, and one compatible with a $\sim k^{-4/3}$ spectrum as the fluid depth is increased using the Boussinesq (weakly dispersive) model. The latter spectrum is also compatible with predictions of weak turbulence theory [@onorato_four-wave_2008]. (c) Dispersion in the Boussinesq model results in more prominent small scale features and the development of rapidly varying waves. (d) In the weakly dispersive regime, the non-linear dispersion relation obtained from the simulations has two branches in a range of wavenumbers, one branch corresponding to non-dispersive waves, and another corresponding to dispersive waves. We interpret this as the result of short wavelength waves seeing an effectively deeper flow resulting from the interaction with waves with very long wavelength. (e) The probability density function of surface height can be approximated by both skewed normal and Tayfun distributions. In the latter case, the parameters of the distribution are compatible with those previously found in observations and experiments [@falcon_observation_2011]. The shallow water and Boussinesq equations {#equations} ========================================== Let us consider a volume of an incompressible fluid with uniform and constant (in time) density, with its bottom surface in contact with a flat and rigid boundary, and free surface at pressure $p_0$. A sketch illustrating the configuration is shown in Fig. 
\[diagram\]; $x$ and $y$ are the horizontal coordinates, $z$ is the vertical one, $h$ is the height of the fluid column (i.e., the $z$ value at the free surface), $h_0$ is the height of the fluid column at rest, $L$ is a characteristic horizontal length, gravity acts on the $-\hat{z}$ direction and its value is given by $g$. It is assumed that $L \gg h_0$ as we are interested in shallow flows. ![The shallow water geometry considered in the simulations: $x$ and $y$ are the horizontal coordinates, and $z$ is the vertical coordinate. The surface height is $h$, with $h_0$ the height of the fluid column at rest. The fluid surface is at pressure $p_0$. $L$ is a characteristic horizontal length (assumed to be much larger than $h_0$). Gravity acts on the $-\hat{z}$ direction and its acceleration has a value of $g$. []{data-label="diagram"}](Figure1.eps){width="48.00000%"} In the inviscid case, both the Euler equation and the incompressibility condition hold in the fluid body, $$\begin{aligned} \label{euler_basico} \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot {\boldsymbol{\nabla}}) \mathbf{v} &= - \frac{1}{\rho} {\boldsymbol{\nabla}}p - g \hat{z}, \\ {\boldsymbol{\nabla}}\cdot \mathbf{v} &= 0 . \label{eq:incompressible}\end{aligned}$$ Under certain assumptions, to be discussed in the following paragraphs, the evolution of the free surface can be adequately described by means of a vector equation for the two horizontal components of the velocity at the interface, plus an equation for the local height of the fluid column. Linear dispersion relation -------------------------- Considering the case of very small amplitude waves, one can linearize the system of Eqs.  and (see, e.g., [@landau_fluid_2004]). The solutions of the resulting equations are gravity waves with the following dispersion relation $$\label{rel_disp_tot} \omega^2 = gk \frac{1 - e^{-2kh_0}}{1 + e^{-2kh_0}} .$$ We are interested in the shallow water case, i.e., when $h_0 \ll L \Rightarrow h_0 k \ll 1 $. In that limit the following dispersion relation results $$\label{rel_disp_sw} \omega = \sqrt{g h_0} k = c_0 k ,$$ where $c_0 = \sqrt{g h_0}$ is the phase velocity. Note in this case waves are not dispersive, unlike the general case given by Eq. . Shallow water model ------------------- It is possible to derive a set of non-linear equations for the surface height and the horizontal velocity at the surface by using the fact that the fluid layer is shallow. Considering the characteristic magnitudes of all quantities in Eq.  ($L$, $p_0$, $h_0$, $g$, etc.), and using the fact that in a shallow flow $h_0 k \ll 1$ with $k=2\pi/L$, one obtains hydrostatic balance in the vertical direction (for further details, see [@pedlosky_geophysical_1987]), which results in the pressure profile $$p = \rho g (h-z) + p_0 .$$ As $h$ is not a function of $z$, neither will be the horizontal pressure gradient and the horizontal components of the velocity (as long as they do not depend initially on $z$). In this way, the horizontal components of Eq.  can be written as $$\label{eqs_horz} \frac{\partial \mathbf{u}}{\partial t} = -(\mathbf{u} \cdot {\boldsymbol{\nabla}}) \mathbf{u} -g {\boldsymbol{\nabla}}h .$$ where $\mathbf{u}(x,y,t) = v_x \hat{x} + v_y \hat{y}$ is the horizontal velocity, and ${\boldsymbol{\nabla}}$ is now the horizontal gradient. 
Using the fact that $v_x$ and $v_y$ are independent of $z$, we can integrate the incompressibility condition, obtaining $$v_z (x,y,z,t) = -z \left( \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} \right) + \tilde{v}_z (x,y,t) . \label{eq:vz}$$ Finally, by taking the appropriate boundary conditions and setting $z=h(x,y,t)$, Eq.  provides an equation for the evolution of the height of the fluid column, namely $$\label{eq_h} \frac{\partial h}{\partial t} + \frac{\partial }{\partial x} (h v_x) + \frac{\partial }{\partial y}(h v_y) = 0 .$$ Note that we do not need to assume irrotationality to derive either Eq.  or Eq. . If we linearize these equations, we find again the dispersion relation given by Eq. , as expected. In the presence of an external forcing $\mathbf{F}$ and viscosity $\nu$, the equations can be written as $$\begin{aligned} \label{sw_u} \frac{\partial \mathbf{u}}{\partial t} &= -(\mathbf{u} \cdot {\boldsymbol{\nabla}}) \mathbf{u} -g {\boldsymbol{\nabla}}h + \frac{\nu}{h} {\boldsymbol{\nabla}}\cdot ( h {\boldsymbol{\nabla}}\mathbf{u}) + \mathbf{F}, \\ \label{sw_h} \frac{\partial h}{\partial t} &= -{\boldsymbol{\nabla}}\cdot (h \mathbf{u}) . \end{aligned}$$ We will refer to this set of equations as the shallow water model, or SW model. In these equations a viscous term with viscosity $\nu$ was added to the equation for the horizontal velocity field $\mathbf{u}$, which behaves as a compressible flow (i.e., it has non-zero divergence, see [@marche_derivation_2007]). This choice of the viscous term ensures conservation of the momentum $h \mathbf{u}$, and also that energy dissipation is always negative, as shown in Sec. \[energybalancesec\]. Boussinesq model ---------------- As the depth of the fluid increases, dispersion becomes important. There are several models that introduce weak dispersive effects perturbatively, but many are built to study waves propagating in only one direction. The Boussinesq equations for surface waves (see, e.g., [@whitham_linear_1974; @choi_nonlinear_1995]) provide a model to study weakly dispersive waves propagating in any direction. This model not only broadens the range of physical phenomena encompassed by the SW model, but adding dispersive effects also makes it more amenable to weak turbulence theory, for which dispersion effects are of the utmost importance. Let us take a look at the first terms in the Taylor expansion of the dispersion relation in Eq. , $$\label{taylor_exp} \omega^2 = c^2_0 k^2 - \frac{1}{3} c^2_0 h^2_0 k^4 + \ldots ,$$ where the first term is the non-dispersive shallow water term. The idea is to add terms to Eqs.  and such that the linear dispersion relation of the new system coincides, up to the fourth order, with the expansion in Eq. . This can be done by adding the term $h^2_0 {\nabla^2}\partial_t \mathbf{u}/3$ to Eq. , resulting in the following system, $$\begin{aligned} \frac{\partial \mathbf{u}}{\partial t} =& -(\mathbf{u} \cdot {\boldsymbol{\nabla}}) \mathbf{u} -g {\boldsymbol{\nabla}}h + \frac{1}{3} h^2_0 {\nabla^2}\frac{\partial \mathbf{u}}{\partial t} \nonumber \\ {}& + \frac{\nu}{h} {\boldsymbol{\nabla}}\cdot ( h {\boldsymbol{\nabla}}\mathbf{u}) + \mathbf{F}, \label{bq_u} \\ \label{bq_h} \frac{\partial h}{\partial t} =& -{\boldsymbol{\nabla}}\cdot (h \mathbf{u}) .\end{aligned}$$ We will refer to this system as the Boussinesq model, or BQ model.
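As an illustration of how the right-hand sides of the SW model can be evaluated with a pseudospectral method on a periodic grid, here is a minimal sketch; it is our own simplified version (no forcing, no dealiasing, no time stepping) and not the implementation of the code described below.

```python
import numpy as np

def spectral_grad(field, kx, ky):
    """(d/dx, d/dy) of a periodic 2D field via FFTs; kx, ky are 2D wavenumber meshes,
    e.g. kx, ky = np.meshgrid(np.fft.fftfreq(N, 1/N), np.fft.fftfreq(N, 1/N), indexing='ij')."""
    f_hat = np.fft.fft2(field)
    return (np.real(np.fft.ifft2(1j * kx * f_hat)),
            np.real(np.fft.ifft2(1j * ky * f_hat)))

def sw_rhs(ux, uy, h, g, nu, kx, ky):
    """Right-hand sides of the SW momentum and continuity equations (no forcing)."""
    hx, hy = spectral_grad(h, kx, ky)
    uxx, uxy = spectral_grad(ux, kx, ky)
    uyx, uyy = spectral_grad(uy, kx, ky)
    # advection and pressure-gradient terms
    dux = -(ux * uxx + uy * uxy) - g * hx
    duy = -(ux * uyx + uy * uyy) - g * hy
    # viscous term (nu/h) div(h grad u), applied component by component
    for du, fx, fy in ((dux, uxx, uxy), (duy, uyx, uyy)):
        dxx, _ = spectral_grad(h * fx, kx, ky)
        _, dyy = spectral_grad(h * fy, kx, ky)
        du += nu * (dxx + dyy) / h
    # continuity: dh/dt = -div(h u)
    dhx, _ = spectral_grad(h * ux, kx, ky)
    _, dhy = spectral_grad(h * uy, kx, ky)
    return dux, duy, -(dhx + dhy)
```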
For $\mathbf{F}=0$ and $\nu=0$, the dispersion relation obtained by linearizing these equations is $$\label{rel_disp_bous} \omega = \frac{c_0 k }{\sqrt{1 + \frac{h^2_0 k^2}{3}}} ,$$ which, up to the fourth order, coincides with Eq. . Note that there are other choices for the extra term in Eq.  that result in many formulations of the Boussinesq model, all compatible up to fourth order in a Taylor expansion in terms of $h_0 k$ [@whitham_linear_1974]. The formulation we use here was employed in previous studies of wave turbulence [@onorato_four-wave_2008], and is also easy to solve numerically using pseudospectral methods by writing Eq.  as $$\frac{\partial \mathbf{u'}}{\partial t} = -(\mathbf{u} \cdot {\boldsymbol{\nabla}}) \mathbf{u} -g {\boldsymbol{\nabla}}h + \frac{\nu}{h} {\boldsymbol{\nabla}}\cdot ( h {\boldsymbol{\nabla}}\mathbf{u}) + \mathbf{F}, \label{eq:Helmholtz}$$ where $\mathbf{u'} = {\cal H}\mathbf{u}$, and where ${\cal H} = (1-h_0^2 {\nabla^2}/3)$ is the Helmholtz operator. This operator can be easily inverted in Fourier space [@mininni_numerical_2005; @mininni_numerical_2005-1], and the resulting equations can be efficiently solved by means of pseudospectral codes. It is interesting that the same operator appears in Lagrangian-averaged models [@foias_navier-stokes-alpha_2001]. In these models, and in regularized versions of the shallow water equations [@camassa_integrable_1993], it introduces dispersion that results in an accumulation of energy at small scales [@graham_highly_2007]. Energy balance {#energybalancesec} -------------- An exact energy balance can be easily derived for the SW model. The equation is useful to verify conservation in pseudospectral codes. By taking the dot product of Eq.  and $h \mathbf{u}$, setting ${\bf F} = 0$, and using Eq. , we obtain $$\begin{aligned} \frac{\partial }{\partial t} \left( \frac{h u^2}{2} + g \frac{h^2}{2} \right) = &- {\boldsymbol{\nabla}}\cdot \left( \frac{h u^2}{2} \mathbf{u} + g h^2 \mathbf{u} \right) \\ &+ \nu \mathbf{u} \cdot [ {\boldsymbol{\nabla}}\cdot ( h {\boldsymbol{\nabla}}\mathbf{u}) ] . \end{aligned}$$ Integrating in $x$ and $y$ over an area $A$ and taking periodic boundary conditions yields $$\label{balance} \frac{\mathrm{d} E}{\mathrm{d} t} = - 2 \nu Z,$$ where $$E = \frac{1}{A} \iint \left( \frac{h u^2}{2} + g \frac{h^2}{2} \right) \mathrm{d} x \mathrm{d} y$$ is the mean total energy, and $$Z = \frac{1}{A} \iint \frac{h \lvert {\boldsymbol{\nabla}}\mathbf{u} \rvert^2}{2} \mathrm{d} x \mathrm{d} y$$ is a mean pseudo-enstrophy, such that $-2 \nu Z$ is the mean energy dissipation rate. As $h$ is always positive, the energy dissipation is always negative. The total energy is conserved when $\nu = 0$. Now we can define $$U = \frac{1}{A} \iint \label{ec_cin} \frac{h u^2}{2}\mathrm{d} x \mathrm{d} y$$ as the mean kinetic energy, and $$\label{ec_pot} V = \frac{1}{A} \iint g \frac{h^2}{2} \mathrm{d} x \mathrm{d} y$$ as the mean potential energy, such that the sum of both gives the mean total energy $E$. The dispersive term present in the BQ model changes the balance given by Eq. . However, since the extra term is of order $(h_0/L)^2$, as long as we are in a sufficiently shallow flow it will be very small, and therefore, negligible for the conservation of energy. We verified this is the case in our numerical simulations. 
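Two ingredients of this section lend themselves to compact sketches: the Fourier-space inversion of the Helmholtz operator used to evolve the BQ model, and the energy and pseudo-enstrophy diagnostics entering Eq. \[balance\]. The code below is our own illustration (not the actual implementation), with `kx`, `ky` assumed to be 2D wavenumber meshes.

```python
import numpy as np

def invert_helmholtz(u_prime, h0, kx, ky):
    """Recover u from u' = (1 - h0^2 Lap / 3) u; in Fourier space the operator is (1 + h0^2 k^2 / 3)."""
    u_hat = np.fft.fft2(u_prime) / (1.0 + h0**2 * (kx**2 + ky**2) / 3.0)
    return np.real(np.fft.ifft2(u_hat))

def energies(ux, uy, h, g):
    """Mean kinetic, potential, and total energy on a uniform periodic grid."""
    U = np.mean(0.5 * h * (ux**2 + uy**2))
    V = np.mean(0.5 * g * h**2)
    return U, V, U + V

def pseudo_enstrophy(ux, uy, h, kx, ky):
    """Z such that dE/dt = -2 nu Z in the unforced case."""
    def grad(f):
        f_hat = np.fft.fft2(f)
        return (np.real(np.fft.ifft2(1j * kx * f_hat)),
                np.real(np.fft.ifft2(1j * ky * f_hat)))
    gxx, gxy = grad(ux)
    gyx, gyy = grad(uy)
    return np.mean(0.5 * h * (gxx**2 + gxy**2 + gyx**2 + gyy**2))
```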
\[sec:weak\]Weak Turbulence prediction -------------------------------------- We briefly present some results obtained in the framework of weak turbulence theory for the BQ model (as the derivation is somewhat involved, only a general outline will be given here; please see [@onorato_four-wave_2008] for details). Weak turbulence is studied in the BQ model assuming the fluid is inviscid and irrotational, so that the velocity can be written in terms of a velocity potential. To obtain a statistical description of the wave field, it is also assumed that it is homogeneous and that the free modes are uncorrelated. At first sight, the quadratic nonlinear terms in Eqs.  and indicate that modes interact in triads, with the wave vectors of the three interacting modes forming a triangle, and the three frequencies satisfying the resonant condition (see, e.g., [@nazarenko_wave_2011]) $$\begin{aligned} \mathbf{k} &= \mathbf{p} + \mathbf{q} \\ \omega(\mathbf{k}) &= \omega(\mathbf{p}) + \omega(\mathbf{q}) .\end{aligned}$$ However, as there are no three wave vectors $\mathbf{k}, \mathbf{p}, \mathbf{q}$ that satisfy these two conditions when the dispersion relation is given by Eq. , three wave interactions are forbidden. Thus, only four wave interactions are present (which do satisfy their corresponding condition). After a transformation of the fields, it is possible to write an equation for the evolution of the two-point correlator of the transformed fields. This is the so-called kinetic equation, and has the following form $$\begin{gathered} \begin{aligned} \frac{\partial N_0}{\partial t} = 4 \pi &\int \lvert T_{0,1,2,3} \rvert^2 N_0 N_1 N_2 N_3 \\ & \left( \frac{1}{N_0} + \frac{1}{N_1}- \frac{1}{N_2}- \frac{1}{N_3} \right) \\ & \delta (\mathbf{k}_0 + \mathbf{k}_1 - \mathbf{k}_2 - \mathbf{k}_3) \\ &\delta (\omega_0 + \omega_1 - \omega_2 - \omega_3) \textrm{d} \mathbf{k}_{123}, \label{kinetic_eq} \end{aligned}\end{gathered}$$ where $N_i = N(\mathbf{k}_i)$ is the wave action spectral density (i.e., the two-point correlator of the wave action, the latter being a quantity proportional to the surface height), the deltas express the fact that interactions are between four wave vectors and their associated frequencies, $T_{0,1,2,3}$ is the coupling coefficient between the four modes, and $\textrm{d} \mathbf{k}_{123} = \textrm{d} \mathbf{k}_{1} \textrm{d} \mathbf{k}_{2} \textrm{d} \mathbf{k}_{3}$. From this equation, dimensional analysis yields the following expression for the energy spectrum $$\label{predicc_wt} E(k) \sim k^{-4/3} .$$ From this spectrum and using dimensional analysis, it is easy to show that in the presence of dissipation, the dissipation wavenumber in such a flow is $k_\eta \sim [\epsilon/(h^2_0 \nu^3)]^{1/5}$, where $\epsilon$ is the mean energy injection rate. A scaling compatible with a $\sim k^{-4/3}$ spectrum was observed in laboratory and field datasets [@smith_equilibrium_2003; @kaihatu_asymptotic_2007], where a spectrum compatible with $\sim k^{-2}$ was also found in shallower regions of the fluid. The prediction in Eq.  applies to the BQ model when dispersion is not negligible. Before proceeding, we should comment on some peculiarities of the SW model regarding wave turbulence. First, an inspection of its dispersion relation, Eq. , indicates that three wave interactions are possible in this model, and as a result the arguments above for four-wave interactions do not apply.
Weak turbulence theory can be used in systems with three-waves interactions (with the case of deep water flows being a paradigmatic one, but see also the case of rotating [@galtier_weak_2003] and of magnetohydronamic [@galtier_weak_2000] flows). However, the SW model is non-dispersive, and as a result the resonance condition is only satisfied for collinear wave vectors. Resonant interactions can then only couple modes that propagate in the same direction (i.e., along the ray of the wave), and non-resonant interactions must be taken into account to consider other couplings. But more importantly, dispersion is crucial in weak turbulence theory to have decorrelation between different waves: without dispersion, all modes propagate with the same velocity, and the modes initially correlated remain correlated for all times (see, e.g., [@lvov_statistical_1997] for a discussion of these effects in the context of acoustic turbulence). ![[*(Color online)*]{} Total energy as a function of time for simulation $A06$ (see Table \[table1\]). As the fluid starts from rest and the forcing is applied, energy increases until it reaches a turbulent steady state (note energy at $t=0$ is different from zero, as the flow potential energy is never zero). All the analysis of the simulations was performed after the simulations reached the turbulent steady state. [*Inset:*]{} Energy balance as a function of time (see Eq. ). Note the balance is satisfied up to the seventh decimal place. []{data-label="eng_cons"}](Figure2.eps){width="48.00000%"} Numerical simulations {#simulations} ===================== ------------ ----- -------- ------- -------------------- -------------------- ----------- --------------------- ------ Simulation Re Fr $D_s$ $N_l$ $h_0/L_0$ $f_0/U_0$ $[k_{f_1},k_{f_2}]$ $N$ $A01$ 260 0.005 0.27 $1.2\times10^{-5}$ $8.0\times10^{-4}$ 0.76 \[3,8\] 1024 $A02$ 370 0.0059 0.22 $1.6\times10^{-5}$ $6.4\times10^{-4}$ 0.71 \[3,8\] 1024 $A03$ 820 0.0075 0.33 $2.6\times10^{-5}$ $4.9\times10^{-4}$ 0.64 \[3,8\] 2048 $A04$ 760 0.0067 0.36 $2.2\times10^{-5}$ $5.3\times10^{-4}$ 0.69 \[3,8\] 2048 $A05$ 760 0.007 0.33 $2.3\times10^{-5}$ $4.8\times10^{-4}$ 0.69 \[3,8\] 2048 $A06$ 360 0.0066 0.33 $2.1\times10^{-5}$ $4.8\times10^{-4}$ 0.73 \[3,8\] 2048 $A07$ 570 0.0091 0.43 $4.1\times10^{-5}$ $6.4\times10^{-4}$ 0.92 \[3,8\] 2048 $A08$ 350 0.0083 0.43 $3.4\times10^{-5}$ $6.4\times10^{-4}$ 1 \[3,8\] 2048 $A09$ 290 0.0086 0.43 $3.6\times10^{-5}$ $6.4\times10^{-4}$ 0.98 \[3,8\] 2048 $A10$ 420 0.012 0.43 $7.8\times10^{-5}$ $6.4\times10^{-4}$ 1.4 \[3,8\] 2048 ------------ ----- -------- ------- -------------------- -------------------- ----------- --------------------- ------ ------------ ------ -------- ------- -------------------- -------------------- ----------- --------------------- ------ Simulation Re Fr $D_s$ $N_l$ $h_0/L_0$ $f_0/U_0$ $[k_{f_1},k_{f_2}]$ $N$ $B01$ 5600 0.022 0.14 $2.5\times10^{-4}$ $8.0\times10^{-4}$ 0.45 \[1,5\] 512 $B02$ 3700 0.015 0.14 $1.1\times10^{-4}$ $8.0\times10^{-4}$ 0.34 \[1,5\] 512 $B03$ 5000 0.012 0.14 $7.2\times10^{-5}$ $8.0\times10^{-4}$ 0.29 \[1,5\] 512 $B04$ 7100 0.013 0.11 $9.1\times10^{-5}$ $3.2\times10^{-4}$ 0.24 \[1,5\] 1024 $B05$ 830 0.005 0.11 $1.2\times10^{-5}$ $3.2\times10^{-4}$ 0.8 \[3,8\] 1024 $B06$ 1200 0.011 0.11 $6.2\times10^{-5}$ $3.2\times10^{-4}$ 0.57 \[3,8\] 1024 $B07$ 120 0.0046 0.14 $1.0\times10^{-5}$ $8.0\times10^{-4}$ 0.82 \[3,8\] 512 $B08$ 980 0.012 0.17 $7.9\times10^{-5}$ $2.5\times10^{-4}$ 0.54 \[3,8\] 2048 $B09$ 2500 0.038 0.27 $7.4\times10^{-4}$ $8.0\times10^{-4}$ 
0.31 \[3,8\] 1024 $B10$ 670 0.0042 0.17 $8.5\times10^{-6}$ $2.5\times10^{-4}$ 0.79 \[3,8\] 2048 $B_{SW}11$ 100 0.0039 0.14 $7.6\times10^{-6}$ $8.0\times10^{-4}$ 0.96 \[3,8\] 512 $B_{SW}12$ 470 0.013 0.11 $9.0\times10^{-5}$ $3.2\times10^{-4}$ 0.24 \[1,5\] 1024 ------------ ------ -------- ------- -------------------- -------------------- ----------- --------------------- ------ ------------ ------ -------- ------- -------------------- -------------------- ----------- --------------------- ------ Simulation Re Fr $D_s$ $N_l$ $h_0/L_0$ $f_0/U_0$ $[k_{f_1},k_{f_2}]$ $N$ $C01$ 1400 0.0067 0.27 $1.9\times10^{-5}$ $8.0\times10^{-4}$ 0.56 \[3,8\] 1024 $C02$ 1400 0.0073 0.24 $2.4\times10^{-5}$ $7.0\times10^{-4}$ 0.55 \[3,8\] 1024 $C03$ 1900 0.0092 0.31 $4.1\times10^{-5}$ $4.6\times10^{-4}$ 0.54 \[3,8\] 2048 $C04$ 1400 0.0057 0.43 $1.6\times10^{-5}$ $6.4\times10^{-4}$ 0.74 \[3,8\] 2048 $C05$ 470 0.0067 0.54 $2.2\times10^{-5}$ $8.0\times10^{-4}$ 1.1 \[3,8\] 2048 ------------ ------ -------- ------- -------------------- -------------------- ----------- --------------------- ------ We performed several numerical simulations of both the shallow water and the Boussinesq models. These were done using the GHOST code [@gomez_mhd_2005; @gomez_parallel_2005; @mininni_hybrid_2011], which uses a pseudospectral method with periodic boundary conditions on an $L_0 \times L_0 = 2 \pi \times 2 \pi$ box (with $L_0$ the box length), the “$2/3$ rule” for the dealiasing [@canuto_spectral_1988], explicit second-order Runge-Kutta for time stepping, and is parallelized using MPI and OpenMP. Almost all simulations shown here were done on grids of $N^2=2048^2$ points, with a few on grids of $N^2=1024^2$ or $512^2$ points (with $N$ the linear resolution). As a result of dealiasing, the maximum resolved wavenumber is $$k_\textrm{max}=N/3 . \label{dealising}$$ Note that all magnitudes in the code are dimensionless, with the smallest wavenumber $k_\textrm{min}=2\pi/L_0=1$, and the largest wavenumber $k_\textrm{max}=2\pi/\lambda_\textrm{min}$ being associated with the minimum resolved scale $\lambda_\textrm{min}$. All runs are direct numerical simulations, with all relevant space and time scales resolved explicitly. The pseudospectral method with the $2/3$ rule is equivalent to a purely spectral method [@canuto_spectral_1988]: it converges exponentially fast, it conserves all quadratic invariants of the equations (i.e., there is no numerical dissipation introduced by the method), and it also has no numerical dispersion. All this was verified explicitly during the development of the code, using several test problems for the SW and BQ equations. Most previous numerical studies on wave turbulence in gravity waves were done at lower resolutions, with the exception of [@yokoyama_statistics_2004]. But the key difference between previous simulations and the ones presented here (besides the fact that these are for shallow flows, not for deep flows) is that the physical model we use does not assume potential flow, and, more importantly, we do not truncate the non-linear term, thus retaining all high-order non-linearities. Another difference is that we do not introduce an artificial dissipation term as is usually done, but one based on physical grounds. The key motivation for these choices is to be able to compare with experiments in the future, where vortical motions can develop, and where dissipation also plays a non-negligible role.
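As an aside, the “$2/3$ rule” mentioned above amounts to zeroing all Fourier modes with wavenumber components beyond $N/3$; a minimal sketch of such a mask (our own illustration, not the code’s actual routine) is:

```python
import numpy as np

def dealias(field_hat, N):
    """Zero all modes with |kx| or |ky| larger than N/3 (2/3 rule) for an N x N spectral field."""
    k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers for a 2*pi periodic box
    mask = (np.abs(k)[:, None] <= N / 3.0) & (np.abs(k)[None, :] <= N / 3.0)
    return field_hat * mask
```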
To achieve higher resolutions than the ones studied here becomes increasingly more expensive as the BQ model is dispersive. All the simulations were started from the fluid at rest. An external mechanical forcing injected energy in the system, allowing it to reach for sufficiently long times an out-of-equilibrium turbulent steady state, after an initial transient. To excite waves, and prevent external injection of energy into vortical motions, the forcing had the following form $$\mathbf{F} = {\boldsymbol{\nabla}}f,$$ where $f$ is a randomly generated scalar function, with a time correlation of one time unit, amplitude $f_0$, and applied in a band of wavenumbers in Fourier space between $k_{f_1}$ and $k_{f_2}$ (see Tables \[table1\], \[table2\], and \[table3\]). Note that having a mechanical forcing in the momentum equation adds an extra term to the right hand side of Eq. , $$\label{balance_full} \frac{\mathrm{d} E}{\mathrm{d} t} = - 2 \nu Z + \epsilon,$$ where the mean energy injection rate can be computed as $$\label{eps_balance} \epsilon = \frac{1}{A} \iint\limits_A h \mathbf{u}\cdot \mathbf{f} \mathrm{d}x \mathrm{d}y .$$ Under the procedure described above, the typical evolution of the energy in a numerical simulation is shown in Fig. \[eng\_cons\]. The energy starts from the value corresponding to the fluid at rest (i.e., all the energy is the potential energy associated with the equilibrium height $h_0$). The total energy then grows under the action of the external mechanical forcing, and after $t \approx 80$ the system reaches a turbulent steady state in which the energy fluctuates around a mean value, and in which the energy injection and dissipation are equilibrated on the average. Even though pseudospectral methods are known to introduce no numerical dissipation, in the inset of Fig. \[eng\_cons\] we also show explicitly that the energy balance (Eq. ) is satisfied with an error of order $10^{-7}$, which remains stable and does not grow even after integrating for very long times. To ensure that the flow in the simulations remained shallow for all excited wavenumbers, we enforced the following condition $$\begin{gathered} \frac{h_0}{\lambda_\textrm{min}} = h_0 \frac{k_\textrm{max}}{2 \pi} < 1 \nonumber \\ \Rightarrow h_0 < \frac{6 \pi}{N} . \label{cond_disp}\end{gathered}$$ where $\lambda_\textrm{min}$ is, as already mentioned, the shortest wavelength resolved by the code in virtue of the condition given by Eq. . \[results\] Results =================== Description and classification of the simulations ------------------------------------------------- The spectral behavior of the flow in the simulations depends on the external parameters. We can independently control the height of the fluid at rest $h_0$, the viscosity $\nu$, the gravity acceleration $g$, the amplitude of the forcing $f_0$, the range of wavenumbers in which the force is applied, and the linear resolution $N$. However, all these parameters can be reduced to a smaller set of dimensionless controlling parameters. One of these parameters is the Froude number $$\textrm{Fr}=\frac{U_0}{\sqrt{g h_0}},$$ which measures the ratio of inertia to gravity acceleration in the momentum equation, and where $U_0$ is the r.m.s. velocity. Another dimensionless parameter is the non-linear number, $N_l$. In order to be in the regime of weak turbulence, nonlinearities should be small. 
The effect of nonlinearities can be measured by how large perturbations in $h$ are compared to $h_0$, so we define $N_l$ as $$N_l=\frac{h_\textrm{rms}-h_0}{h_0},$$ where $h_\textrm{rms}$ is the r.m.s. value of $h$. The two remaining dimensionless numbers are the Reynolds number, $$\label{reynolds} \textrm{Re} =\frac{U_0 L_f}{\nu},$$ where $L_f$ is the forcing scale (defined as $2 \pi/k_{f_0}$), and what we will call the dispersivity, $D_s$, defined as $$\label{dispersivity} D_s = h_0 k_\textrm{max} = \frac{2\pi h_0}{\lambda_\textrm{min}} = \frac{N h_0}{6 \pi}$$ following Eq. . This last number, only relevant for the Boussinesq model, measures how strong the dispersion is at the smallest scales, and for sufficiently small $D_s$ we can expect the solutions of the Boussinesq model to converge to the solutions of the shallow water model. In fact, it is easy to show from the weak turbulence spectrum in Eq.  that when the maximum resolved wavenumber $k_{\textrm{max}}$ is associated with the dissipation wavenumber $k_\eta$, then $$\textrm{Re} \sim \frac{U_0 L_f}{h_0 \epsilon^{1/3}} D^{5/3}_s . \label{reycurve}$$ Decreasing $D_s$ below the value given by this relation should result in negligible dispersion at all resolved wavenumbers. Note that the level of dispersion in a given Boussinesq run depends on the wavenumber, and $D_s$ actually quantifies the strongest possible dispersion at the smallest scales in the flow. By qualitatively assessing each run, we can classify them into three sets, $A$, $B$, and $C$. In tables \[table1\], \[table2\], \[table3\] the different dimensionless parameters, along with a few other useful quantities, are given for each simulation in each set, respectively. How and why these three sets differ from each other will be made clear in the following sections, when we discuss the actual results. But, for the moment, it is fruitful to analyze the behavior of the values of $\textrm{Re}$ and $D_s$ in each set, so as to keep them in mind for later on. The values of $\textrm{Re}$ and $D_s$ for all the Boussinessq runs are shown in Fig. \[phase\_dia\]. As a reference, Fig. \[phase\_dia\] also shows the curve given by Eq.  with $U_0 L_f/(h_0 \epsilon^{1/3})$ estimated from the values from the simulations in set $A$. Points below that curve are expected to have non-negligible dispersion. Runs in set $A$ have relatively small $\textrm{Re}$ ($\lesssim 1000$), and $D_s$ varying between $\approx 0.02$ and $\approx 0.05$. In other words, dispersion effects in runs in set $A$ are important. Runs in set $B$ have smaller values of $D_s$ (except for one run with $D_s \approx 0.27$, all other runs have $D_s <0.2$), and $\textrm{Re}$ varying between $\approx 100$ and $\approx 7000$. These runs have small or negligible dispersion, and note all the SW runs we performed belong to this set. The runs in set $C$ are intermediate between these two regimes. ![[*(Color online)*]{} Values of the Reynolds number $\textrm{Re}$ and dispersivity $D_s$ for all the Boussinesq runs, separated into three sets, $A$ (circles in the gray/red region), $B$ (triangles in the dark/blue region), and $C$ (stars in the light/green region), according to their different spectral behavior as discussed in Sec. \[spectra\]. The boundaries separating the three regions are arbitrary. The solid white curve corresponds to $\textrm{Re} \sim D^{5/3}_s$; points below that curve are expected to have non-negligible dispersion (see Eq. ). 
Note that runs in set $A$ have relatively small $\textrm{Re}$ but larger dispersion, while runs in set $B$ have either small or negligible dispersion.[]{data-label="phase_dia"}](Figure3.eps){width="48.00000%"} Finally, although the mechanical forcing we use introduces no vorticity in the horizontal velocity field, some vorticity is spontaneously generated as the flow evolves. This is probably also the case in experiments. In order to quantify the presence of vortical structures, we calculated the ratio of vorticity to divergence in the horizontal velocity field $$\frac{\langle | {\boldsymbol{\nabla}}\times \mathbf{u} | \rangle } {\langle | {\boldsymbol{\nabla}}\cdot \mathbf{u} | \rangle } ,$$ which turns out to be $\approx 0.1$ for all simulations. As a result, although the flow is not perfectly irrotational, the amplitude of vortical modes is small compared with the amplitude of modes associated with the waves. Energy spectra {#spectra} -------------- The power spectrum of $h$ (proportional to the spectrum of the potential energy) as a function of the wavenumber is shown in Fig. \[spectra\_far\] for runs $A06$, $B08$, and $C02$. Figure \[spectra\_zoom\] shows a close-up of the same spectrum in the inertial range. It is clearly seen that runs in each set show a different behavior. On the one hand, the run belonging to group $A$ has an inertial range compatible with $\sim k^{-4/3}$ scaling, which is the spectrum predicted by weak turbulence. On the other hand, the run in set $B$ displays an inertial range compatible with a $\sim k^{-2}$ dependence. While this spectrum is not predicted by weak turbulence, it was observed before in experiments and observations [@kaihatu_asymptotic_2007]. The run in group $C$ shows a shallower spectrum with no clear inertial range. We think of runs in this set as transitional between the other two. The other runs in sets $A$, $B$, and $C$ show similar power spectra for $h$. To show this, we present the compensated spectra for the simulations in sets $A$ and $B$ in Figs. \[compensado\_A\] and \[compensado\_B\], respectively (simulations from set $C$ do not have a clearly defined inertial range and are therefore not shown). The simulations from set $A$ are compensated by $h^{2/3}_0 \epsilon^{2/3} k^{-4/3}$ (which is the weak turbulence spectrum, using the height of the fluid column at rest, $h_0$, and the energy injection rate, $\epsilon$, as prefactors), while the ones in set $B$ are compensated by $g h_0 k^{-2}$ (more details on the choice of the prefactor are given below). These figures indicate that, within statistical uncertainties, all spectra in each set collapse to the same power laws, and that the simulations are well converged from the point of view of spatial resolution. Furthermore, we verified that the energy flux is approximately constant in the scales corresponding to the inertial range of each simulation. Within the limitations of spatial resolution and the drop in the flux for large wavenumbers caused by viscous dissipation, an incipient inertial range can be identified in the flux of each simulation. Figure \[flux\] shows the instantaneous energy flux (normalized by the energy injection rate $\overline{\epsilon}$ averaged over time) as a function of $k$ for several simulations in sets $A$ and $B$. The energy flux $\Pi(k)$ was calculated from the energy balance equation in Fourier space, as is usually done for turbulent flows.
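The post-processing needed here is standard; as an illustration, a minimal sketch of how an isotropic spectrum and the flux $\Pi(k)$ can be assembled from two-dimensional fields is given below. The function names, the normalization, and the use of integer wavenumber shells are our own illustrative choices, and are not meant to describe the actual code used to run the simulations.

```python
import numpy as np

def isotropic_spectrum(field, n_shells=None):
    """Angle-averaged power spectrum of a real 2D periodic field (2*pi box).

    The Fourier coefficients are binned into integer wavenumber shells,
    so that summing the output over k recovers the mean square of the field.
    """
    n = field.shape[0]
    fk = np.fft.fft2(field) / n**2                 # normalized Fourier coefficients
    k1d = np.fft.fftfreq(n) * n                    # integer wavenumbers
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    shells = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    spec = np.bincount(shells.ravel(), weights=np.abs(fk).ravel()**2,
                       minlength=n_shells or 0)
    return np.arange(spec.size), spec

def energy_flux(transfer):
    """Flux Pi(k) from the shell-integrated nonlinear transfer T(k):
    Pi(k) = -sum_{k' <= k} T(k')."""
    return -np.cumsum(transfer)
```

The shell-integrated transfer $T(k)$ entering the flux is built in the same way, binning the real part of the product of the conjugate Fourier coefficients of the fields with those of the corresponding nonlinear terms.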
Figure \[flux\] also shows the normalized energy dissipation rate as a function of time (equivalent to the normalized energy flux as a function of time) for the same runs, to show that this quantity fluctuates around a mean value in the turbulent steady state. The kinetic energy spectrum is similar to the power spectrum of $h$, and in approximate equipartition with the potential energy spectrum once the system reaches a turbulent steady state. It is interesting to analyse this in the light of the values of the dimensionless parameters in the runs as shown in Fig. \[phase\_dia\]. As was explained in the previous section, set $A$ corresponds to runs with lower Reynolds number and larger dispersivity ($\textrm{Re}\lesssim 1000$, and $D_s$ varying between $\approx 0.02$ and $\approx 0.05$). As a result, these runs can be expected to display weak turbulence behavior as described in Section \[sec:weak\], because the nonlinearities are not so large as to break the weak turbulence hypothesis [@onorato_four-wave_2008], and the dispersion is not so low as to render the higher order terms of Eq.  negligible (in which case four-wave interactions would no longer be dominant, and the hypothesis used to derive Eq.  would not be satisfied). In contrast, runs in set $B$ have larger $\textrm{Re}$ and lower $D_s$ (except for one run with $D_s \approx 0.27$, all other runs have $D_s <0.2$, and $\textrm{Re}$ varying between $\approx 100$ and $\approx 7000$). In this case dispersion is smaller or negligible, while nonlinearities can be expected to be larger, two conditions that render the derivation resulting in Eq.  invalid. ![[*(Color online)*]{} Power spectrum of $h$ (proportional to the spectrum of the potential energy) for runs $A06$ (BQ model, $2048^2$ grid points, $\textrm{Re} = 360$, and $D_s = 0.33$), $B08$ (BQ model, $2048^2$ grid points, $\textrm{Re} = 980$, and $D_s = 0.17$), and $C02$ (BQ model, $1024^2$ grid points, $\textrm{Re} = 1430$, and $D_s = 0.24$). Two power laws, $\sim k^{-4/3}$ and $\sim k^{-2}$, are shown as references.[]{data-label="spectra_far"}](Figure4.eps){width="48.00000%"} ![[*(Color online)*]{} Detail of the three spectra in Fig. \[spectra\_far\] for a subset of wavenumbers to show the inertial range of the runs. Note the scaling of runs $A06$ and $B08$.[]{data-label="spectra_zoom"}](Figure5.eps){width="48.00000%"} ![[*(Color online)*]{} Compensated spectrum of potential energy for several simulations in set $A$. The spectra are compensated by $h^{2/3}_0 \epsilon^{2/3} k^{-4/3}$. The average slope for all the runs is $-1.34\pm0.12$.[]{data-label="compensado_A"}](Figure6.eps){width="48.00000%"} ![[*(Color online)*]{} Compensated spectrum of potential energy for several simulations in set $B$. The spectra are compensated by $g h_0 k^{-2}$. The average slope for all the runs is $-2.18\pm0.29$.[]{data-label="compensado_B"}](Figure7.eps){width="48.00000%"} ![[*(Color online)*]{} (a) Energy flux (normalized by the mean energy injection rate) as a function of $k$. For each simulation, a range of wavenumbers can be identified for which $\Pi(k)$ remains approximately constant, and this range is in reasonably good agreement with the inertial ranges identified in Figs. \[compensado\_A\] and \[compensado\_B\]. (b) Energy dissipation rate (normalized by the averaged in time energy injection rate) as a function of time for the same simulations. 
[]{data-label="flux"}](Figure8a.eps "fig:"){width="48.00000%"}\ ![[*(Color online)*]{} (a) Energy flux (normalized by the mean energy injection rate) as a function of $k$. For each simulation, a range of wavenumbers can be identified for which $\Pi(k)$ remains approximately constant, and this range is in reasonably good agreement with the inertial ranges identified in Figs. \[compensado\_A\] and \[compensado\_B\]. (b) Energy dissipation rate (normalized by the averaged in time energy injection rate) as a function of time for the same simulations. []{data-label="flux"}](Figure8b.eps "fig:"){width="48.00000%"} ![[*(Color online)*]{} Power spectrum of $h$ for simulations $B04$ and $B_{SW}12$. The former corresponds to a numerical solution of the BQ model, while the latter to a solution of the SW model. A $\sim k^{-2}$ power law is indicated as a reference.[]{data-label="comp_sp"}](Figure9.eps){width="48.00000%"} Moreover, the $\sim g h_0 k^{-2}$ spectra observed in the simulations in set $B$ include those runs that solve the SW model. Therefore, these spectra cannot be explained by weak turbulence, as SW simulations have no dispersion and the arguments in Section \[sec:weak\] do not apply. Also, note that in the non-dispersive limit, for constant and fixed $h$, the SW equations can be reduced to the two-dimensional Burgers equations, which amplify negative field gradients by strong nonlinearities resulting in sharp fronts in the velocity. Such a field would actually have a spectrum $\sim k^{-2}$ (note this is also the behavior expected for two-dimensional non-dispersive acoustic turbulence [@lvov_statistical_1997], which also develops sharp fronts). The spectrum can be obtained from dimensional analysis and the scaling that results for the energy is equivalent to Phillips’ spectrum [@phillips_equilibrium_1958] but in two dimensions. In the presence of strong nonlinearities, we can assume that the nonlinear and gravity terms are of the same order, $$\mathbf{u} \cdot {\boldsymbol{\nabla}}\mathbf{u} \sim g {\boldsymbol{\nabla}}h .$$ It is also reasonable to assume that the kinetic and potential energies will be of the same order (i.e., in equipartition) in the turbulent steady state. This implies that $g$ is the only dimensional constant the spectra can depend on. This is precisely how Phillips derived his spectrum. With these assumptions in mind, it is easy to obtain the observed spectra. The energy spectrum has units of energy in the fluid column per unit surface per wavenumber, $E(k) \sim h_0 u^2/k$, and assuming $E(k) \sim g h_0 k^{-\alpha}$, from dimensional analysis the only possible solution is $$E(k) \sim g h_0 k^{-2} .$$ The independence of the spectrum on the energy injection rate suggests that the energy transfer between the different scales must take place by a mechanism such as wave breaking in the case of Phillips’ spectrum, which occurs when the slope of the surface is larger than a critical value, or by nonlinear wave steepening in our case (which is finally regularized by the viscosity). Such a mechanism is independent of the power injected by external forces. Of course, this can only hold in a region of parameter space, as in the presence of weak forcing and dispersion, the solution in Eq.  is expected instead.
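Spelling out the dimensional argument used above: $E(k)$ carries units of energy in the fluid column per unit surface per wavenumber, so $$[E(k)] = \frac{[h_0][u^2]}{[k]} = \mathrm{m} \cdot \frac{\mathrm{m}^2}{\mathrm{s}^2} \cdot \mathrm{m} = \frac{\mathrm{m}^4}{\mathrm{s}^2} , \qquad [g h_0 k^{-\alpha}] = \frac{\mathrm{m}}{\mathrm{s}^2} \cdot \mathrm{m} \cdot \mathrm{m}^{\alpha} = \frac{\mathrm{m}^{2+\alpha}}{\mathrm{s}^2} ,$$ and matching powers of length gives $\alpha = 2$, i.e., the $\sim g h_0 k^{-2}$ scaling quoted above.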
In summary, based on the numerical results, the simulations with weaker forcing and higher dispersion develop a spectrum compatible with the predictions from weak turbulence theory, while the runs with stronger forcing or with less (or no) dispersion are compatible with dimensional analysis based on strong turbulence arguments. Comparison between SW and BQ models {#comp_sw_bq} ----------------------------------- ![One dimensional cut of the height $h$ in the turbulent steady state of runs $B04$ and $B_{SW}12$, at the same time. The former run corresponds to a numerical solution of the BQ model, while the latter to a solution of the SW model. While the long length scales show the same behavior in both runs, note that the BQ model has larger fluctuations at short length scales. Both runs were computed with a linear resolution of $N=1024$ grid points, and the fast fluctuations are well resolved.[]{data-label="secc20"}](Figure10.eps){width="48.00000%"} All simulations of the SW model belong to set $B$, as that is the set of runs that has negligible or no dispersion. All other sets have moderate dispersion, and as a result the flow dynamics cannot be captured by the SW model. Note that runs in set $B$ are also the runs with an inertial range compatible with $\sim k^{-2}$ scaling. However, the BQ and SW runs in set $B$ are not identical. In this subsection we discuss the differences between these runs. As an example of two runs with and without dispersive effects, the power spectra of $h$ for runs $B04$ and $B_{SW}12$ are shown in Fig. \[comp\_sp\]. Both simulations have the same parameters, except for the viscosity which is larger in the simulation using the SW model. At small wavenumbers, where dispersion is negligible, the spectra of the BQ and SW models coincide. For wavenumbers larger than $\approx 30$, dispersion in the BQ model becomes important and a bump (an accumulation of energy at small scales) develops. This accumulation in the BQ model results in an increased dissipation (as dissipation is proportional to $k^2 E(k)$), thus allowing us to simulate the system with smaller viscosity. This difference at large wavenumbers is the most distinct feature in the two spectra in Fig. \[comp\_sp\]. As a result of the extra power at larger wavenumbers, dispersion in the BQ model results in more prominent small scale features, and in rapidly varying waves. As an example, Fig. \[secc20\] shows a transversal cut in the elevation field for runs $B04$ and $B_{SW}12$. The cuts are taken at the same place and at the same time in both runs. Even though both simulations have the same behavior at large scales, at short length scales the BQ model presents fast fluctuations. These fluctuations are well resolved (the cut corresponds to $1024$ grid points), and there is no indication that resolution is insufficient to resolve the sharp gradients. In the BQ model, while the large scales may correspond to a shallow flow, as long as there is enough scale separation, there will always be a wavenumber where the finite depth effects can be seen. Thus, the Boussinesq equations provide an interesting model to study weakly dispersive waves. Regarding the accumulation of energy that leads to a flatter spectrum for high wavenumbers in some of the BQ simulations (for several runs in set $B$ as can be seen in Fig. \[compensado\_B\], but especially in the runs in set $C$), such an accumulation has been observed before in turbulent flows.
As mentioned above, we verified that this accumulation is not the result of insufficient resolution (e.g., by comparing the runs with different grid points $N$). The accumulation of energy in the spectrum near the dissipative range is often termed “bottleneck”, and bottlenecks can have dissipative [@falkovich_bottleneck_1994] or dispersive [@graham_highly_2007; @krstulovic_dispersive_2011] origins. In the former case, the accumulation results from the viscous damping of the triads at small scales, resulting in a decrease of the energy flux. Such a viscous bottleneck should be visible also in the non-dispersive simulations, and its absence in those runs indicates a dispersive origin. In the latter case, the bottleneck arises from the increasingly harder to satisfy resonant condition for the wave frequencies, as the waves become faster at smaller scales. Models with a field filtered by the Helmholtz operator (as is the case for the BQ model, see Eq. \[eq:Helmholtz\]) tend to develop a bottleneck (see [@graham_highly_2007] for a detailed description of its origin). A qualitative way to explain the tendency towards a flatter spectrum in the BQ model can be obtained by assuming that dispersion is strong enough for the dispersive term to be balanced with the buoyancy and with the non-linear terms in the BQ equations (i.e., all terms are of the same order). Then the energy spectra can depend only on both $g$ and $h_0$, and a possible solution is $E(k) \sim g h_0^2$. A detailed study of the origin of this bottleneck is left for future work. At this point it is worth pointing out that when $D_s \approx 1$ and dispersion becomes too strong, the Boussinesq approximation breaks down as more terms in the Taylor expansion in Eq.  should be preserved. As a result, the Boussinesq approximation is useful as long as $D_s < 1$ at the smallest excited scales in the system. On the other hand, from Fig. \[phase\_dia\], if $D_s \lesssim 0.15$ the behavior of the system in the inertial range is that of a shallow water flow for all Reynolds numbers studied. ![Power spectrum $E_h(k,\omega)$ for simulation $B04$. The darker regions correspond to larger power density, while the lighter regions correspond to smaller power density. (a) Normalized power spectrum $E_h(k,\omega)/E_h(k)$. (b) Non-normalized power spectrum. The white dashed line appearing in the bottom panel indicates the linear dispersion relation from Eq. . Note that as in this run dispersion is negligible, the dispersion relation is almost that given by Eq. , and non-dispersive.[]{data-label="cubo32"}](Figure11a.eps "fig:"){width="48.00000%"} ![Power spectrum $E_h(k,\omega)$ for simulation $B04$. The darker regions correspond to larger power density, while the lighter regions correspond to smaller power density. (a) Normalized power spectrum $E_h(k,\omega)/E_h(k)$. (b) Non-normalized power spectrum. The white dashed line appearing in the bottom panel indicates the linear dispersion relation from Eq. . Note that as in this run dispersion is negligible, the dispersion relation is almost that given by Eq. , and non-dispersive.[]{data-label="cubo32"}](Figure11b.eps "fig:"){width="48.00000%"} Time-resolved spectra and non-linear dispersion relations --------------------------------------------------------- Wavenumber spectra, as the spectra discussed so far, give information of how energy is distributed in spatial scales, but do not provide a quantitative estimate of how much energy in the system is associated with wave motions. 
A frequency spectrum $E(\omega)$ is often obtained from the wavenumber spectrum $E(k)$ using the dispersion relation (or vice versa). However, in systems that can sustain both wave and vortical motions there is no clear justification to use the dispersion relation to go from one spectrum to the other. A quantification of the amount of energy in waves, and on whether non-linear effects change the dispersion relation of the system from the linear one, can be directly obtained from the frequency and wavenumber spectrum $E(k, \omega)$ without any assumption. The spectrum $E(k, \omega)$ can be computed by storing the Fourier coefficients of the height $\hat{h}({\bf k}, t)$ as a function of time (as well as the Fourier coefficients of the velocity field), then computing the Fourier transform in time, and finally computing the isotropic power spectrum by averaging in the $(k_x, k_y)$-plane. To this end, several large-scale wave periods and turnover times must be stored (to resolve the slowest frequencies in the system), with sufficient time resolution $\Delta t$ to resolve the fastest frequencies. In the analysis we show below, time series spanning at least three periods of the slowest waves were used, and with time resolution $\Delta t \approx 3 \times 10^{-4}$. ![Power spectrum $E_h(k,\omega)$ for simulation $A02$. The darker regions correspond to larger power density, while the lighter regions correspond to smaller power density. (a) Normalized power spectrum. (b) Non-normalized power spectrum. The white dashed line appearing in the bottom panel indicates the (non-dispersive) linear dispersion relation from Eq. , and the white dash-dotted line indicates the BQ dispersion relation from Eq. .[]{data-label="cubo20"}](Figure12a.eps "fig:"){width="48.00000%"} ![Power spectrum $E_h(k,\omega)$ for simulation $A02$. The darker regions correspond to larger power density, while the lighter regions correspond to smaller power density. (a) Normalized power spectrum. (b) Non-normalized power spectrum. The white dashed line appearing in the bottom panel indicates the (non-dispersive) linear dispersion relation from Eq. , and the white dash-dotted line indicates the BQ dispersion relation from Eq. .[]{data-label="cubo20"}](Figure12b.eps "fig:"){width="48.00000%"} Figures \[cubo32\] and \[cubo20\] show the power spectrum of the flow height $E_h(k,\omega)$ for simulations $B04$ and $A02$ respectively. The linear dispersion relations for shallow water flows (Eq. \[rel\_disp\_sw\]) and for Boussinesq flows (Eq. \[rel\_disp\_bous\]) are also shown as references, using the parameters from each run. Note both runs present an energy accumulation near the dispersion relation. This indicates most of the energy is in the waves, and remains there as time evolves. As we are not solving the equations for a potential flow, and the system can develop vortical motions, this tells us that the non-linear energy transfer is mostly done between waves, and that the energy injected at large scales in wave motions is mostly transferred towards wave motions at smaller scales and faster frequencies. This is needed for weak turbulence to hold, but is also observed in run $B04$ that has a spectrum compatible with strong turbulence phenomenological arguments. There is also a turbulent broadening of the dispersion relation, also visible in cross sections of the spectrum at different wavenumbers in Fig. \[blnc\]. 
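As an illustration of the procedure described above (storing $\hat{h}({\bf k}, t)$, Fourier transforming in time, and averaging over circles in the $(k_x, k_y)$-plane), a minimal sketch of how $E_h(k,\omega)$ can be assembled is given below. The array layout, the time window, and the function name are our own choices and only indicate one possible implementation; details such as windowing and normalization may differ from the actual post-processing of the runs.

```python
import numpy as np

def wavenumber_frequency_spectrum(h_hat_t, dt):
    """Space- and time-resolved spectrum E_h(k, omega).

    h_hat_t: complex array of shape (nt, n, n) with the spatial Fourier
    coefficients hhat(kx, ky) stored at nt equispaced times (spacing dt),
    spanning several periods of the slowest waves.
    """
    nt, n, _ = h_hat_t.shape
    window = np.hanning(nt)[:, None, None]          # reduce spectral leakage in time
    h_k_omega = np.fft.fft(h_hat_t * window, axis=0) / nt
    power = np.abs(h_k_omega) ** 2                  # power in (omega, kx, ky)

    # integer wavenumber shells for the isotropic average in the (kx, ky) plane
    k1d = np.fft.fftfreq(n) * n
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    shells = np.rint(np.sqrt(kx**2 + ky**2)).astype(int).ravel()
    nk = shells.max() + 1

    E_k_omega = np.zeros((nt, nk))
    for i in range(nt):                              # shell-average each frequency
        E_k_omega[i] = np.bincount(shells, weights=power[i].ravel(), minlength=nk)

    omega = 2 * np.pi * np.fft.fftfreq(nt, d=dt)
    return np.arange(nk), omega, E_k_omega
```

Summing the output over the wavenumber axis recovers the frequency spectrum $E_h(\omega)$, and cross sections at fixed $k$ of the spectrum computed in this way, as in Fig. \[blnc\], are what is used below to quantify the broadening of the dispersion relation.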
From this broadening, the characteristic time of non-linear wave interactions can be obtained, as was done in [@miquel_nonlinear_2011]. ![Cross sections of $E_h(k,\omega)$ at different (and fixed) values of $k=k^*$ for run $A02$. Note the peaks and surrounding wavenumbers have most of the power. Note also the two peaks for $k=200$, one corresponding to the shallow-water dispersion relation, and the other to the Boussinesq dispersion relation. []{data-label="blnc"}](Figure13.eps){width="48.00000%"} Some of the most important results in this paper are associated with these two figures. First, note that in run $B04$ most of the energy is concentrated near a dispersion relation that, as dispersion is negligible, corresponds in practice to the non-dispersive shallow-water case (Eq. \[rel\_disp\_sw\]). All runs in set $B$ have the same spectral behavior in $E_h(k,\omega)$, and confirm that the $\sim k^{-2}$ spectrum is observed when dispersion is negligible or absent (i.e., when the flow is sufficiently shallow). Second, note that the spectrum $E_h(k,\omega)$ in run $A02$ presents clear signs of dispersive effects (i.e., most of the energy for large enough $k$ is concentrated over a curve that deviates from a linear relation between $k$ and $\omega$), and this run displays a scaling in $E_h(k)$ compatible with the weak turbulence prediction $\sim k^{-4/3}$. This behavior was observed in the other runs in set $A$. However, $E_h(k,\omega)$ for runs in set $A$ presents yet another interesting feature. As expected, for small $k$ the dispersion is negligible and the energy is concentrated over a straight line in $(k,\omega)$ space. At large $k$, as already mentioned, the effective dispersion relation is compatible with that of the linearized Boussinesq equations. But at intermediate wavenumbers two branches of the dispersion relation can be observed, one that is compatible with non-dispersive waves and another compatible with dispersive waves. When both branches are present, their amplitudes are of the same order, as can be seen in Fig. \[blnc\]. At first sight, the existence of these two branches could be attributed to bound waves. Bound waves are small amplitude waves which are [*bounded*]{} to a parent wave of larger amplitude. The waves are bounded in the sense that they follow the parent wave, i.e., they travel with the same phase velocity as the parent, and thus they follow an anomalous dispersion relation (see, e.g., a discussion of bound waves in the context of gravito-capillary waves in [@longuet-higgins_generation_1963; @herbert_observation_2010]). The condition that they have the same phase velocity as the parent wave implies that they must follow a modified dispersion relation which verifies $\Omega (k) = \omega (k_0) k/k_0$, where $k_0$ is the wavenumber of the parent wave. Bound waves result in multiple branches in the $E(k,\omega)$ spectrum (and in multiple peaks in the frequency spectrum). Indeed, it is easy to show that for $k=Nk_0$, $N=2,3,4,\dots$, these multiple branches satisfy $$\Omega_N (Nk_0) = N\omega (k_0) \label{bound_condition}$$ (see, e.g., the discussion in [@herbert_observation_2010]). Extending the analysis in [@herbert_observation_2010] to our case, bound waves in the BQ model should satisfy the following dispersion relation, $$\label{boundwaves} \Omega_N (Nk) = \frac{c_0 k}{\sqrt{1 + \frac{h^2_0 k^2}{3 N^2}}} ,$$ which verifies Eq. . However, the second branch in Fig. 
\[cubo20\] cannot be described by this dispersion relation for any value of $N$ up to 4, and thus it does not correspond to bound waves in the sense often used in oceanography. Another explanation for the existence of these two branches can be given by keeping in mind that at intermediate wavenumbers slight variations in the fluid depth may trigger a transition in the waves from dispersive to non-dispersive (as the level of dispersion depends on the product of the wavenumber with the surface height). Indeed, in the turbulent flow there are waves with short wavelengths which ride over long ones that have a larger amplitude. For sufficient scale separation, the fast waves see an effective depth that can be larger or smaller than $h_0$ depending on whether the wave is on a crest or a valley of the slow wave, generating in one case dispersive waves, and in the other non-dispersive waves. We can estimate the variation in the effective dispersion at a given wavenumber $k$. In simulation $A02$, $h_0=4\times 10^{-3}$ and the longer waves have an amplitude $\delta \approx 4\times 10^{-5}$ (as can be estimated, e.g., from the maximum value of the power spectrum of $h$). From the system dispersion relation, $$\omega^2 = c^2_0 k^2 \left(1 - \frac{1}{3} h_0^2 k^2 \right) ,$$ dispersion is controlled by the amplitude of the $h_0^2 k^2/3$ term. Assuming that fast waves experience an effective depth $h_0 \pm \delta$ (where the sign depends on whether they are on a valley or a crest), the variation in the dispersion is proportional to the difference between $(h_0-\delta)^2$ and $(h_0+\delta)^2$. So, for this simulation, the variation is around $4\%$, and when multiplied by $k^2$, it is sufficient to explain the two branches in $E_h(k,\omega)$ for $k$ between $\approx 150$ and $250$. Time frequency energy spectra ----------------------------- From the spectra in Figs. \[cubo32\] and \[cubo20\] the frequency spectrum $E_h(\omega)$ can be easily obtained, simply by summing over all wavenumbers, $$E_h(\omega) = \sum_k E_h(k,\omega) . \label{freqspectra}$$ As already mentioned, in experiments and simulations $E_h(\omega)$ is sometimes estimated instead from $E_h(k)$ by using the dispersion relation in the form $k = k(\omega)$. Figure \[edomega\] shows the power spectrum of $h$ as a function of $\omega$ for simulations $A02$ and $B04$. In both cases, the spectrum was calculated explicitly using Eq. , and also estimated using the dispersion relation. For each run, the two spectra show a very good agreement, which can be expected as most of the energy is in the waves. The behavior of the inertial range in each run is also in good agreement with the one found previously for $E_h(k)$ in Sec. \[spectra\]. ![[*(Color online)*]{} Power spectrum of $h$ as a function of the frequency for simulations (a) $A02$ and (b) $B04$. In both cases, the spectrum was calculated by summing over all wavenumbers in the time and space resolved spectrum, $\textstyle{\sum_k} E_h (k,\omega)$, and also by using the dispersion relation given by Eq.  to estimate the frequency spectrum from the wavenumber spectrum $E_h(k)$. As a reference, power laws $\sim \omega^{-4/3}$ and $\sim \omega^{-2}$ are shown in each case. The behavior is in good agreement with the one found for $E_h(k)$.[]{data-label="edomega"}](Figure14a.eps "fig:"){width="48.00000%"} ![[*(Color online)*]{} Power spectrum of $h$ as a function of the frequency for simulations (a) $A02$ and (b) $B04$.
In both cases, the spectrum was calculated by summing over all wavenumbers in the time and space resolved spectrum, $\textstyle{\sum_k} E_h (k,\omega)$, and also by using the dispersion relation given by Eq.  to estimate the frequency spectrum from the wavenumber spectrum $E_h(k)$. As a reference, power laws $\sim \omega^{-4/3}$ and $\sim \omega^{-2}$ are shown in each case. The behavior is in good agreement with the one found for $E_h(k)$.[]{data-label="edomega"}](Figure14b.eps "fig:"){width="48.00000%"} Probability density functions ----------------------------- We calculated the probability density function (PDF) of the free surface height for different simulations. Figure \[distro\] shows the PDF of $h/\sigma$ for run $A06$, where $\sigma$ is the standard deviation of the surface height. The probability distribution is asymmetric, with a larger probability of measuring large values of $h$ than of small values. The shape can be fitted by two distributions: We consider a skewed normal distribution [@azzalini_class_1985], $$\label{skew_normal} f(x) = \frac{2}{\kappa}\phi\left(\frac{x-\xi}{\kappa}\right) \Phi\left(\alpha\frac{x-\xi}{\kappa}\right),$$ where $\kappa$ is the so-called scale parameter (associated with the variance of the distribution), $\xi$ is the location parameter (associated with the mean value), $\alpha$ is the shape parameter (associated with the skewness), and $$\begin{aligned} \phi(x) &=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}, \\ \Phi(x) &= \int_{-\infty}^{x} \phi(t)\ {\mathrm{d}}t = \frac{1}{2} \left[ 1 + \operatorname{erf} \left(\frac{x}{\sqrt{2}}\right)\right]. \end{aligned}$$ We also consider a Tayfun distribution $$p(x) = \int^\infty_0 \frac{e^{-[y^2 + (1-c)^2]/(2 s^2)}}{\pi s c} {\mathrm{d}}y,$$ with $c= \sqrt{1 + 2 s x + y^2}$ and where $s$ is the mean steepness of the waves [@tayfun_narrow-band_1980]. ![[*(Color online)*]{} Probability density function of the values of $h$ in simulation $A06$ (solid blue line). The dash-dotted (red) line indicates a maximum likelihood fit using a skewed normal distribution, while the dashed (green) line corresponds to a maximum likelihood fit for the Tayfun distribution.[]{data-label="distro"}](Figure15.eps){width="48.00000%"} For run $A06$, and from a maximum likelihood estimation method for the skewed normal distribution, the location parameter is $\xi \approx -1.00$, the scale parameter is $\kappa \approx 1.43$, and the shape parameter is $\alpha \approx 1.94$. For the same run, and for the Tayfun distribution, the mean steepness of the waves is $s \approx 0.15$. This latter value is more relevant as the Tayfun distribution is often used in oceanography and in experiments on surface waves. In this context, it is interesting to point out that experiments in [@falcon_observation_2011] found similar values for $s$. This behavior (a PDF of $h$ described correctly by both a skewed normal distribution and a Tayfun distribution with asymmetry to the left) was observed in all simulations, no matter what set they belonged to. Conclusions {#conclusion} =========== We studied wave turbulence in shallow water flows in numerical simulations using the shallow water and Boussinesq models. The equations were solved using grids up to $2048^2$ points, and the parameters were varied to study different regimes, including regimes with larger and smaller Reynolds number, and larger and smaller dispersion, while keeping the Froude number approximately the same.
We summarize below the main conclusions following the same ordering as in the introduction: \(a) As in previous experimental and observational studies [@smith_equilibrium_2003; @kaihatu_asymptotic_2007], we found that the flows can be classified into different sets depending on the value of the Reynolds number (i.e., on the strength of the nonlinearities) and on the level of dispersion (associated with the fluid depth). A first set ($A$) has smaller Reynolds numbers and stronger dispersion, a second set ($B$) has larger Reynolds numbers and weaker or negligible dispersion, and a third set of runs seems to be transitional between the two. \(b) Runs in sets $A$ and $B$ have different power spectra of the surface height. Runs in set $A$, with stronger dispersion, present a spectrum compatible (within statistical uncertainties) with $E_h(k) \sim k^{-4/3}$. This is the spectrum predicted by weak turbulence theory for the Boussinesq equations [@onorato_four-wave_2008]. Runs in set $B$ with negligible or zero dispersion (i.e., for a shallower flow) show a spectrum compatible within error bars with $E_h(k) \sim k^{-2}$. This spectrum can be obtained from phenomenological arguments coming from strong turbulence [@phillips_equilibrium_1958]. The runs in set $C$ have no discernible inertial range. \(c) The Boussinesq (dispersive) model tends to develop more power in waves with short wavelengths than the shallow water model. This is associated with the development of a bottleneck for large wavenumbers in the energy spectrum. \(d) Inspection of the wavenumber and frequency spectrum $E_h(k,\omega)$ confirms that most of the energy is in the waves in all the simulations. In runs in set $B$, most of the energy is concentrated in the vicinity of the linear dispersion relation for shallow water waves, which are non-dispersive. In runs in set $A$, the resulting non-linear dispersion relation obtained from $E_h(k,\omega)$ has two branches: one that corresponds to non-dispersive waves, and another corresponding to dispersive waves. The two branches can be explained as the result of the superposition of rapidly varying waves which ride over slowly varying waves, the latter with sufficient amplitude to change whether the former see a shallower or deeper fluid. \(e) Independently of the differences between the runs, the probability distribution functions of $h$ for the runs in all sets are asymmetric, with larger probabilities of finding larger values of $h$ than smaller values. The probability distribution functions can be approximated by both a skewed normal distribution and a Tayfun distribution [@tayfun_narrow-band_1980]. In the latter case, the only parameter of the distribution, the mean steepness of the waves, has values compatible with those found in observations and experimental studies (see [@falcon_observation_2011]). The obtained probability density functions also indicate limitations in the hypothesis of Gaussianity of the fields assumed in early theories of weak turbulence. However, extensions of the theory to allow for non-Gaussian distributions exist and can be found for example in [@choi_probability_2004] and [@lvov_noisy_2004; @choi_joint_2005]. All the results presented here were obtained solving numerically equations that do not assume that the flow is inviscid or irrotational, and with realistic terms for the viscous dissipation. We believe this approach can be useful to compare with experiments, as in experiments vorticity can develop in the flow, and viscosity cannot be neglected.
The authors would like to thank Prof. Oliver Buhler and the anonymous referees for their useful comments. The authors acknowledge support from grants No. PIP 11220090100825, UBACYT 20020110200359, and PICT 2011-1529 and 2011-1626. PDM and PJC acknowledge support from the Carrera del Investigador Científico of CONICET, and PCdL acknowledges support from CONICET.
1
--- abstract: | Deeply bound KNN, KNNN and KNNNN states are discussed. The effective force exerted by the K meson on the nucleons is calculated with static nucleons. Next the binding energies are obtained by solving the Schrödinger equation or by variational calculations. The dominant attraction comes from the S-wave $\Lambda(1405)$ and an additional contribution is due to $\Sigma(1385)$. The latter state is formed at the nuclear peripheries and absorbs a sizable piece of the binding energy. It also generates new branches of quasi-bound states. The lowest binding energies based on a phenomenological KN input fall into the 40-80 MeV range for KNN, 90-150 MeV for KNNN and 120-220 MeV for K$\alpha$ systems. The uncertainties are due to unknown KN interactions in the distant subthreshold energy region. address: - 'Andrzej So[ł]{}tan Institute for Nuclear Studies 00-681 Warsaw, Hoza 69, Poland [^1]\' - 'Helsinki Institute of Physics, P.O. Box 64, FIN-00014, Finland [^2]\' author: - 'S. Wycech' - 'A.M. Green' title: 'Variational calculations for K-few-nucleon systems' --- Introduction ============= In this paper a quantitative understanding of Kaon-few-nucleon quasi-bound states is attempted. In recent years, the existence of such states has been vividly discussed. It was initiated by the KEK finding of peaks in the nucleon spectra of K$^-$ absorption in $^4$He [@KEK04; @KEK05]. Additional evidence was given by the FINUDA measurement of the invariant mass distribution of the $\Lambda p $ produced in K$^-$ absorption by light nuclei [@FIN05]. The existence of such bound states has been expected as the kaon-nucleon and the kaon-nucleus interactions have been known to be strongly attractive [@WYC86]. This is now firmly confirmed on the basis of kaonic atom data [@FRI07]. However, the KEK and FINUDA experiments indicate unexpectedly strong bindings of the order of 100, 150 MeV in the lightest KNN, KNNN systems. These experiments require further confirmation. Also, the interpretation of the observed peaks has been disputed in Refs. [@VAL06A], [@VAL06B] while the initial interpretation is defended in Ref. [@AKA07]. Calculations indicate that such states are expected, although they might be very broad and difficult to detect. The first calculations performed by Akaishi and Yamazaki in Ref. [@AKA02] were followed by several subsequent publications. These calculations exploited essentially the S-wave resonant attraction related to the $\Lambda(1405)$ state. With an optical model type of approach it was shown that the K-meson optical potential at the center of small nuclei may be as strong as 500 MeV, generating very strong binding of the meson and a strong contraction of the few-nucleon systems. However, to reproduce the KEK data, these calculations involved some relaxation of the NN repulsion at short distances which would allow the existence of strongly bound and very dense systems. These calculations raise the important question of how to implement a realistic short range NN repulsion in the kaonic systems. Another open question is related to the strength and range of KN interactions. Any mathematical description of few body systems requires knowledge of NN and KN off-shell scattering amplitudes. Those related to NN interactions are controlled fairly well in terms of modern NN potentials.
For a bound K-meson the amplitudes needed involve the subthreshold energy region $$\label{1} f_{KN}= f_{KN}( - E_B - E_{\rm{recoil}}),$$ where $E_B$ is the KN separation energy and $E_{\rm{recoil}} $ the recoil energy of the KN pair relative to the rest of the system. If the separation energy is as large as 100 MeV, meson momenta become $\approx 250 $ MeV/c and $ E_{\rm{recoil}}$ may be as large as $40$ MeV. The energies of interest for $ (- E_B - E_{\rm{recoil}})$ are then located well below the $\Lambda(1405)$ state. The amplitudes there are strongly attractive and so when used in a standard optical potential approach may support very strong bindings. One problem that arises at this stage is of a technical character. As these amplitudes are energy dependent, it is hard to account for that in the optical model approach. There exists another, more serious, problem which is common to all approaches. As the energies involved are far away from the physical region tested in KN scattering, the uncertainties in the KN scattering amplitudes are sizable. For instance, if the $\Lambda(1405)$ is a KN bound state, then the amplitude far below the resonance is given not only by the position of the singularity but to a greater extent by the Born term, which indicates a strong dependence on the uncertain interaction range $r_o$. An old multi-channel potential model of Ref. [@KRZ75] indicates that the available scattering data do not allow one to fix the precise value of $r_o$. This unfortunate situation still persists. The value of $r_o$ is expected to be close to the inverse vector meson mass. However, even though a change of 20$\%$ in $r_o$ would not affect the scattering region, it results in a 30$\%$ change of $f_{KN}$ in the deep subthreshold region. The corresponding uncertainty in the binding energy then amounts to $\approx 30$ MeV. As indicated by the few body calculations of Ref. [@WEI06], this problem strongly affects the outcome. The uncertainties of $f_{KN}$ require further coherent experimental and theoretical studies of KN and K-few-N interactions. This is one of the most important goals of K meson physics. On the other hand, there is one consequence of Eq.(\[1\]) which is model independent. If the binding and recoil are so large, the $f_{KN}$ amplitudes involve energies below the thresholds of meson-hyperon decay channels. As a consequence the dominant decay modes are blocked and the lifetimes of nuclear K meson systems are determined only by multi-nucleon captures. This leads to the expectation that such states may live long enough to be detectable [@WYC86]. There exist several calculations of KNN binding energies. These states are named K$^{-}$pp although in reality they correspond to isospin $I_{NN}=1 $ and total isospin $I_{tot}=1/2$. The first prediction by Akaishi and Yamazaki led to $( E_B, \Gamma) = (48,60) $ MeV [@AKA02]. With a similar, molecular type method, Dote and Weise [@DOT07] obtain $E_B < 50$ MeV and indicate a strong dependence of this result on the short range NN repulsion. On the other hand, the recent three body calculations based on Faddeev or AGS methods yield larger bindings. Thus Shevchenko *et al.* [@SHE07] obtain $( E_B, \Gamma) = (55-70, 95-110) $ MeV while Ikeda and Sato [@IKE07] calculate $( E_B, \Gamma) = (\sim 80, \sim 73) $ MeV.
Later in the text we show that the discrepancy of these two groups of results is due to different $\Lambda(1405)$ properties, to the explicit description of the multiple scattering in the decay channels, and possibly to an incompatible treatment of the NN repulsion. There are two new elements introduced in this paper. First, the P-wave interactions due to $\Sigma(1385)$ have been indicated as a possible source of the strong binding. Here, these are introduced explicitly. Second, emphasis is placed on the strong KN spatial correlations induced by the S and P wave resonances. Leaving aside the interpretation of the peaks attributed to bound KNN and KNNN systems, the essential theoretical questions are: $(a)$ What is the binding mechanism? $(b)$ Are the technical questions under control? $(c)$ Can the widths be narrow? This paper attempts an answer to these questions and the following results are obtained: 1\) To account properly for the KN force range, short range KN correlations and the NN repulsion, a two-step calculation is performed. First a wave function involving strongly correlated K-N subsystems is found in a fixed nucleon approximation. This step also allows one to find potentials due to the K meson which tend to contract the nucleons. Next, these correlated wave functions and contracting potentials are used as the input in variational calculations for the K-few nucleon binding. In the KNN case the binding energy and width are found by solving the Schrödinger equation. 2\) While the dominant mechanism of attraction is related to the $\Lambda(1405)$ state, it is found that another resonant state, the $\Sigma(1385)$, contributes significantly to the structure of the bound states but much less to the binding in KNN and K-few-N systems. In addition the $\Sigma(1385)$ generates new branches of nuclear states that could not be generated by the $\Lambda(1405)$ alone. 3\) The binding energy is determined to a large extent by the attraction and the repulsive core in NN interactions. With the Argonne NN potential [@ARG95] one obtains the lowest state of KNN bound by about 40-80 MeV and a KNNN state bound by about 90-150 MeV. Moderate dependence on the KN interactions is found, provided these are constrained by the shape of $\Lambda(1405)$ and the value of the KN scattering length. However, the position of $\Lambda(1405)$ itself is not well known and this becomes the source of a large uncertainty. The effect of $ \Sigma(1385)$ on the binding energy is limited. In the states bound via $\Lambda(1405)$ it adds a contribution of some 5-10 MeV to the KNN binding and 10-20 MeV to KNNN binding. In this sense the suggestions of Ref. [@WYC07] are not fully supported. However, the effect of $ \Sigma(1385)$ on the space structure of deeply bound kaonic states is strong. The $ \Sigma(1385)$ is formed in peripheral regions and it absorbs a large fraction of the total K meson binding. In consequence the radii of these systems are fairly large and the nucleon densities are comparable to those found in $^4$He nuclei. 4\) The problem of uncertainties related to the large recoil momenta entering Eq.([\[1\]]{}) is only partly removed. Large kaon momenta are hidden inside the resonant structures. In principle these may be kept under control with the help of other experiments. In practice this is not the case. The other sector of large momenta, due to the strong binding, is partly screened by the short range NN repulsion.
The main consequence is a strong dependence of the meson binding energies on the position of the $\Lambda(1405)$ resonance. In principle the shape of $\Lambda(1405)$ is tested by the invariant mass distribution in the decay $\Sigma\pi$ channel. In practice it is not so as the relevant energy region is located close to the $\Sigma\pi$ threshold. In this region the theoretical and experimental uncertainties are large. 5\) These states are very broad if the binding energies are less than 100 MeV. For stronger bindings, which are possible under the current values of the KN parameters the main mesonic decay modes may be closed. The widths for non-mesonic modes are hard to calculate and extrapolations from the emulsion data are not very reliable. New experiments are needed. A simple physical picture emerges from this approach. The mesons are strongly correlated to slowly moving nucleons. The correlations are of the $\Lambda(1405)$ type at large densities, and of the $ \Sigma(1385)$ type in the peripheries. Each K,N pair has a good chance to stay also in the $\Sigma ,\pi$ form. The structure is rather loose as sizable fractions of the binding energies are hidden in the short ranged correlations. The KNN bound state ==================== This section presents an introduction to the method used in this work. Several steps describe the increasing degree of precision and also the increasing level of technical complications: $\bullet$ At first the KNN levels are found within the fixed nucleon approximation with a simple $S$ wave KN interaction. $\bullet$ The nucleon degrees of freedom and NN interactions are introduced and a related Schrödinger equation is solved. $\bullet$ The method is extended to multiple channel situations. $\bullet$ Both $S$ and $P$ wave KN interactions are allowed. Consider scattering of a light meson on two identical, heavy nucleons. To begin with, the nucleons are fixed at coordinates $\textbf{x}_i (i=1,2) $ and the wave function is assumed to be in the form $$\label{a2} \Psi(\textbf{x},\textbf{ x}_1 ,\textbf{x}_2) = \chi_K(\textbf{x};\textbf{ x}_1, \textbf{x}_2)~ \chi_{NN}(\textbf{x}_1,\textbf{x}_2),$$ where $\textbf{x}$ is the meson coordinate. The notation is simplified and some possible indices are suppressed. The meson wave function $\chi_K $ is given by the solution of the multiple scattering equation $$\label{a3} \chi_K(\textbf{x},\textbf{x}_1,\textbf{x}_2)= \chi_K(\textbf{x})^o - \Sigma_i ~ \int \textbf{dy }\frac {\exp[ip\mid \textbf{x-y}\mid ]} {4\pi \mid \textbf{x-y}\mid }~ U_{KN}(\textbf{y},\textbf{x}_i)~ \chi_K(\textbf{y},\textbf{x}_1,\textbf{x}_2)$$ obtained with fixed positions of the nucleons. An equation of similar structure with a zero range meson-nucleon pseudo-potential $U$ was used by Brueckner [@BRU] to calculate the scattering length of a meson on two nucleons. For a high energy scattering it was extensively discussed by Foldy and Walecka, who used finite range separable interactions $U$ [@FOL69]. With such interactions equation (\[a3\]) allows for semi-analytic solutions in the NN, and also in few nucleon cases. Here, the method is extended to the bound state problem. One looks for solutions of Eq. (\[a3\]) with no incident wave $ \chi_K(\textbf{x})^o$. The momentum $p$ becomes a complex eigenvalue $p(x_i)$ which determines the energy and width of the system for given nucleon positions $x_i$. Equation (\[a3\]) is written in terms of the Klein-Gordon or Schrödinger propagator. The difference arises when the relation of energy and momentum is established. 
Reasons of simplicity, which will become clear later, favor the non-relativistic relation in the KN center of mass system. Thus, the interaction is presented as $U_{KN} =2\mu_{KN} V_{KN}$ where $\mu_{KN}$ is the reduced mass. Corrections for relativity may be introduced at a later stage. The potential $V_{KN}$ for an $S$ wave interaction is chosen in a separable form $$\label{a4} V_{KN}( \bf{x-x_i}, \bf{x'-x_i}) = \lambda ~\upsilon(\bf{x-x_i})~\upsilon(\bf{x'-x_i}),$$ where $\upsilon$ is a form-factor and $\lambda$ is a strength parameter. The eigenvalue equation is now reduced to $$\label{a5} \chi_K(\textbf{x},\textbf{x}_1,\textbf{x}_2) + \Sigma_i ~ \lambda ~\int \bf{dy}~ \frac {\exp[ip(\textbf{x}_1,\textbf{x}_2)\mid \bf{x-y}\mid ]} {4\pi \mid \bf{x-y}\mid } ~\upsilon(\bf{y-x_i})~ \int \bf{dy'}~ \upsilon(\bf{y'-x_i})~\chi_K( \bf{y'},\textbf{x}_1,\textbf{x}_2) =0.$$ Equation (\[a5\]) becomes a matrix equation for wave amplitudes $\psi_i$ defined at each scatterer $i$ by $$\label{a6} \psi_i = \lambda~ \int \bf{dx}~ \upsilon( \bf{x-x}_i) ~\chi_K( \bf{x},\textbf{x}_1,\textbf{x}_2).$$ To find the equations for $\psi_i $ one introduces the off-shell KN scattering matrices $f$ and matrix elements of the propagator $$\label{a7} G_{i,j}(\bf{x_i,x_j}) = \int\bf{dy }\bf{dx } ~ \upsilon( \bf{x-x}_i)~ \frac {\exp(ik\mid \bf{x-y}\mid )}{4\pi \mid \bf{x-y}\mid } ~\upsilon(\bf{y-x}_j).$$ The diagonal value, $G_{i,i}\equiv G$, determines the meson nucleon scattering matrix $t$ by the well known (see e.g. Ref. [@FOL69]) relation $$\label{a7t} t(E) = (1+\lambda ~ G )^{-1}~ \lambda$$ and this yields the full off-shell scattering amplitude $ f $ $$\label{a7f} f(k,E,k') =\upsilon(k)~t(E)~\upsilon(k').$$ Here, $k,k'$ are the initial and final momenta while the form-factor $v(k)$ is given by the Fourier transform of $\upsilon(r)$. The Yamaguchi form $\upsilon(k)= 1/(1+k^2/\kappa^2)$ with a free parameter $\kappa$ will be used in this paper. At zero momenta and at the threshold this choice normalizes $f$ (and $t$) to the scattering length. Unfortunately for historical reasons the standard convention in the K-N system is to define the scattering length by $$\label{a7g} a+ib = - f(k=0,E=0,k'=0)\equiv F(0,0,0)$$ and the capital $F$ will be used in several places to comply with the standard KN parameters. In order to cast Eq. (\[a5\]) into a standard multiple scattering equation for $\psi_i$ one carries out the following three steps: 1) Integrates Eq. (\[a5\]) over the i-th form-factor $\upsilon(\bf{x-x_i})$. 2) Selects the i-th term from the R.H. side. 3) Multiplies Eq. (\[a5\]) by $ (1+\lambda G)^{-1}$. In this way the kernel of the multiple scattering equation can be expressed in terms of scattering amplitudes $t_i$ at each nucleon $i$ and propagators describing the passage from the nucleon $i$ to the other nucleon $j$. One now arrives at a set of linear equations $$\label{a8} \psi_i + \Sigma_{j\neq i} ~ t_j~G_{i,j}~ \psi_j =0,$$ which may be solved numerically. For the Yamaguchi form-factors, propagators $G_{i,j}$ allow analytic expressions $$\label{a9} G_{1,2}(r,k) = \frac{1}{ r}~ \upsilon(k)^2 ~[ \exp(ikr)-\exp(-\kappa r)- r \frac{\kappa^2 +k^2}{2\kappa} \exp(-\kappa r)] ~\equiv~G (r,k),$$ where $ \textbf{r }= \textbf{x}_2- \textbf{x}_1 $. For the sake of illustration, the KNN case is presented in some detail. 
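Before doing so, it may help to have the two basic building blocks of Eq. (\[a8\]) in explicit numerical form: the single-nucleon scattering matrix of Eq. (\[a7t\]), with the diagonal element $G$ obtained as the $r \rightarrow 0$ limit of Eq. (\[a9\]), and the two-center propagator itself. The short sketch below is only illustrative; the strength $\lambda$ and the range $\kappa$ are placeholders that have to be fixed by the KN data, and the code is not the one used for the calculations reported here.

```python
import numpy as np

KAPPA = 3.9   # inverse range of the Yamaguchi form factor, in fm^-1 (placeholder value)

def v(k, kappa=KAPPA):
    """Yamaguchi form factor v(k) = 1/(1 + k^2/kappa^2); k may be complex."""
    return 1.0 / (1.0 + (k / kappa) ** 2)

def G12(r, k, kappa=KAPPA):
    """Two-center propagator of Eq. (a9); r is the NN separation in fm,
    k the (possibly complex) meson momentum in fm^-1."""
    damped = np.exp(-kappa * r)
    return (v(k, kappa) ** 2 / r) * (
        np.exp(1j * k * r) - damped - r * (kappa**2 + k**2) / (2 * kappa) * damped
    )

def G_diag(k, kappa=KAPPA):
    """r -> 0 limit of Eq. (a9), i.e. the diagonal element G_{i,i} entering
    the single-nucleon t-matrix of Eq. (a7t)."""
    return v(k, kappa) ** 2 * ((kappa**2 - k**2) / (2 * kappa) + 1j * k)

def t_matrix(lam, k, kappa=KAPPA):
    """Off-shell KN scattering matrix t = (1 + lambda*G)^(-1) * lambda, Eq. (a7t);
    lam is the (placeholder) strength parameter lambda."""
    return lam / (1.0 + lam * G_diag(k, kappa))
```

With these functions, the eigenvalue conditions $1 \pm t\, G_{1,2}(r,p) = 0$ introduced next can be searched for a complex root $p(r)$ at each nucleon separation $r$.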
The condition for a bound state with two amplitudes $\psi_i$ leads to a pair of equations $$\label{a10} \psi_1 + t~G ~\psi_2 = 0, ~~~~ \psi_2+t~G~\psi_1 = 0.$$ When the determinant $$\label{a11a} D= 1-( t~G)^2$$ is put to zero, the binding “momenta” $p(r)$ may be obtained numerically. Two different solutions corresponding to $ 1+ tG = 0 $ or $ 1-tG = 0$ may exist. The first solution is symmetric $\psi_2= \psi_1$ and describes the meson in the $S$ wave state with respect to the NN center of mass. The second solution is antisymmetric $\psi_2= - \psi_1$ and describes a $P$ wave solution. With the rank one separable interaction this latter solution does not exist in the full range of $r$. However, it arises with the more complicated rank two interactions discussed later. Eigenvalues corresponding to unstable quasi-bound states are obtained in the second quadrant of the complex $ p(r)\equiv p = p_R +i p_I$ plane. In this quadrant the kernel $$\label{a12} tG =f(p) ~[ \exp(-p_Ir) \exp(ip_Rr)-\exp(-\kappa r)~( 1+ r \frac{\kappa^2 + p^2}{2\kappa})]/r$$ is exponentially damped at large distances as required by the asymptotic form of the bound state wave function $\chi_K$. At short distances $ G $ is regularized by the KN form-factor. If the scattering amplitude is dominated by a quasi-bound state, such as $\Lambda(1405)$, the related pole dominates and in some energy region $ f\simeq \gamma^2/(E-E^*)$, where $\gamma$ is a coupling constant and $E^*=E_r-i\Gamma_r/2$ is a complex $\Lambda(1405)$ binding energy. The full KNN binding energy, $V_K$, is given by the equation $ 1+ tG= 0 $ which becomes $$\label{a13} V_K(r) \simeq E^* - \gamma^2 G(r,p).$$ Since Re $ G(r,p)$ close to the resonance is positive this solution offers binding stronger than the $\approx$ 28 MeV binding in the $\Lambda(1405)$. Asymptotically, for $r\rightarrow\infty$ one obtains $ V_K \rightarrow E^* $, that is a kaon bound to a nucleon to form a $\Lambda(1405)$. This type of asymptotic behavior occurs in all of the K-few-nucleon systems of practical interest. Hence, the separation energy is understood here as the separation of the K-N-N system into the N-$\Lambda(1405)$ system. The limits $ r \rightarrow0$ in Eqs. (\[a12\],\[a13\]) are regular. However, a joint limit of zero range KN interactions, $\kappa \rightarrow \infty$, and $ r\rightarrow0$ is singular and the KNN system collapses. Therefore, some care is necessary when this limit is taken. Here, we stay within a phenomenological approach and the standard expectation that the range of KN interactions is determined by vector meson exchange. In equation (\[a9\]) for $G$ the range of interactions enters twice, first as a cutoff at small distances and second in terms of the form-factor $v(k)^2$. We find in a numerical way that these two effects cancel and $fG$ is very stable within the range $ 3< \kappa < 6$ fm$^{-1}$. As the KN interaction range is very short, but finite, the uncertainties related to the actual value of $\kappa$ are additionally eliminated by the short range repulsion in the NN systems. This yields an important stability in the few-body calculations described here. With nucleons fixed at a distance $r$ the eigenvalue condition determines $ p(r)$ which in turn generates the potential $V_K(r)$ contracting the NN system to a smaller radius. The form and strength of this potential depend on the form of the kinetic energy. In the KN C.M. system the $ E_{KN}$ energy is given by $ \sqrt{M^2 +p^2 } + \sqrt{m^2 + p^2},$ where $m,M$ are masses of the meson and nucleon.
The same form is kept in the large nucleon mass $M$ limit. In the few nucleon systems the non-relativistic form of the nucleon energy is used and the problem of large nucleon mass disappears. The meson propagator in Eq. (\[a3\]) is chosen to make the multiple scattering equation (\[a8\]) equivalent to a differential equation $$\label{a14} [ - \Delta + \sum_i ~ 2\mu_{KN}~ V_{ KN_i }~]~\chi = p(r)^2 \chi$$ and the contracting potential becomes $V_{K}= p(r)^2/ 2\mu_{KN}$. The advantage of this choice is discussed in the next section. Schrödinger equation --------------------- The solution of the full KNN bound state problem is given by equation $$\label{s15} ( - \frac{\Delta_x}{2m} - \frac{\Delta_1}{2M}- \frac{\Delta_2}{2 M} + V_{KN1}+V_{KN2}+ V_{NN} ) \Psi = E \Psi.$$ The wave function is assumed in the form $\Psi= \chi_K(x,x_i) \chi_{NN}(r)$ as given in Eq. (\[a2\]). Multiplying Eq. (\[s15\]) on the left by $ \chi_K$ and integrating over the meson coordinate $x$ one obtains the Schrödinger equation for the NN wave function $$\label{s16} \chi_{NN} (r) = \int \textbf{dx } \chi_K( x,x_i) \Psi (x,x_i)$$ in the form $$\label{s17} [ E - V_K(r) + \Delta_1 / 2 M + \Delta_2 / 2 M - V_{NN}]~\chi_{NN} + \Delta E_{kin} \chi_{NN} = 0,$$ where the last term $\Delta E_{kin}$ is a correction to the kinetic energies. This correction is small due to the choice of the meson kinetic energies. In the Schrödinger equation (\[s15\]) it is given by the meson mass $m$. On the other hand, to determine the $\Lambda(1405)$ properties and to solve the scattering equation (\[a3\]) the reduced mass $\mu_{KN}$ is used. Due to this, the correction term $ \Delta E_{kin} $ is of the order of $1/M$. In addition, the meson wave function satisfies the relation $$\label{s19} \Delta_x \chi_K = \sum_i \Delta_i\chi_K,$$ which may be obtained by partial integration over coordinate $y$ in Eq. (\[a5\]). In this way $$\label{s18} \Delta E_{kin} \chi_{NN} =-\frac{1}{M} \Sigma_i~ \int \bf{dx }~\chi_K~\overrightarrow{\partial}_i\chi_K ~\overrightarrow{\partial}_i \chi_{NN},$$ which is very small due to angular averaging and sign changes in the derivatives. In more detail this correction reduces to $$\label{s18a} \Delta E_{kin} \chi_{NN} =-\frac{2}{M} \int d\bf{\xi}~ \frac{\overrightarrow{\xi } \overrightarrow{r}}{\xi r}~~\frac{G(\bf{\xi}-\bf{r})\partial_{\xi} G(\xi)} { \int d\bf{\eta}[ G(\bf{\eta}-\bf{r})+G(\bf{\eta})]^2 } ~ ~\partial_r \chi_{NN}(r),$$ and is suppressed by the angular average over $\xi$ and at large $r$ by the small overlap of $G(\bf{\xi}-\bf{r})$ and $G(\xi)$. The $\Delta E_{kin}$ makes a contribution $ \approx 0.2 $ MeV to the binding energy. Such twice-damped small terms of a similar type also arise in more involved versions of this calculation. There, the analogue of $\Delta E_{kin}$ is of the same order but is given by very lengthy formulas. Since it is very small in comparison to the dominant uncertainties in $V_K$ it is dropped, leading to a significant simplification of the variational approach. As the next step, equation (\[s17\]) is solved with an S-wave interaction based on the more realistic NN potential of Argonne [@ARG95]. This solution is also compared to another, variational solution with the intention of checking the variational method used in heavier systems. The actual interaction used, in the notation of Ref. [@ARG95], has the form $$v(NN)=v^{EM}(NN)+v^{\pi}(NN)+v^{R}(NN), \label{Wir1}$$ where the electromagnetic part $v^{EM}$ only includes the dominant term proportional to $F_C(r)$ in Eq. (4) of Ref.
[@ARG95], the OPE term $v^{\pi}$ is given by Eq. (18) of Ref. [@ARG95] and the phenomenological short range term $v^{R}$ from Eq. (20) with the parameters in Table II - again all from Ref. [@ARG95]. This gives directly the S-wave T=1, S=0 interaction $v(\rm{S-wave, \ T=1, \ S=0})$. However, in the T=0, S=1 deuteron channel, the effect of the tensor interaction $\upsilon_t(T=0, \ S=1)$ on the central component $ \upsilon_c(\rm{T=0, \ S=1})$ is incorporated by the closure approximation to give $$V(\rm{S-wave, \ Deuteron})= \upsilon_c(\rm{T=0, \ S=1})- \frac{8\upsilon_t(\rm{T=0, \ S=1})^2}{\rm{Den}}, \label{Wir2}$$ where the energy denominator was adjusted to Den=338 MeV to ensure the correct binding energy of the deuteron. The precision of variational estimates for E ( used in the next sections) may be checked against numerical solutions of the Schrödinger equation. It is about 0.3 MeV, compared with the overall binding of $\sim 50$ MeV. The width of the state is calculated as $$\Gamma/2 = < \chi_{NN}\mid Im~ V_K \mid \chi_{NN} > . \label{gamma}$$ Interactions in the decay channels ---------------------------------- The decay channel $ \Sigma\pi$ coupled to the basic KN channel is now introduced explicitly. The wave function at each scattering center has two components one in the KN the other in the $ \Sigma\pi$ channel. The scattering amplitudes are two dimensional vectors $ \psi_i \rightarrow [\psi_i^K,~ \psi_i^\pi $\] at each nucleon. Multiple scattering equations given in the previous section are now changed accordingly. One has $$\label{t1} \psi_1^K + t^{K,K}~G^{K,K}~ \psi_2^K + t^{K,\pi}~G^{\pi,\pi}~ \psi_2^\pi = 0$$ $$\label{t2} \psi_1^\pi +G^{\pi,\pi}~t^{\pi,\pi}~ \psi_2^\pi + t^{\pi,K}~G^{K,K}~ \psi_2^K = 0$$ and an analogous pair with $1\leftrightarrow 2$. The notation has been changed to describe channel indices and the $2\times 2 $ scattering matrix $\hat{f}$. The determinant related to these equations gives the complex eigenvalue $p(x_i)$ in the KN channel. The eigen-equation is now more complicated. Introducing a new notation in channel indices $ U^{a,b} =G^{a,a}t^{a,b}$ the determinant becomes $$\label{t3} D= [(1+ U^{K,K})(1+ U^{\pi,\pi})~ - U^{\pi,K}U^{K,\pi}][(1-U^{K,K})(1-U^{\pi,\pi})~ -U^{K,\pi}U^{\pi,K}].$$ The $D=0$ condition is more transparent close to the singularity in the case of a scattering amplitude given by $$\label{t4} f^{a,b} \approx \frac{\gamma_a \gamma_b} { E - E_o+ i \Gamma/2}.$$ Consistency requires the width to be $ \Gamma/2= p_\pi(\gamma_{\pi})^2$, where $p_\pi $ is the momentum in the decay channel. The singular term (\[t4\]) permits one to find a solution of Eq. (\[t3\]) in a fairly simple form. It is presented below in the limit of zero range KN ( and $ \Sigma\pi$ ) force. The binding energy $$\label{t5} \emph{Re } E = E_o - (\gamma_K)^2\frac{cos(p_Rr)}{r} \exp(-p_Ir) - (\gamma_{\pi})^2 \frac{cos(p_\pi r)}{r}$$ becomes larger than the binding of the resonance but the collisions in the decay channel indicates oscillations. This oscillatory behavior is also seen in the width of the system $$\label{t6} \emph{Im}~E = - (\gamma_{\pi})^2 ~ p_\pi~[ 1+\frac{sin(p_{\pi}r)}{p_{\pi}r}] - (\gamma_K)^2\frac{sin(p_Rr)}{r}\exp(-p_Ir).$$ The effect of KN scattering represented by the second term enlarges the width as $p_R$ is negative. The contribution from multiple scattering in the decay channel is sizable in general but it oscillates and may under some conditions reduce the total width. That is an effect of interference in the decay channel. 
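The oscillatory effect of the open $\Sigma\pi$ channel in Eqs. (\[t5\]) and (\[t6\]) is easy to visualize numerically; the short sketch below simply tabulates both expressions on a grid of NN separations. The couplings, momenta and $E_o$ used here are placeholder numbers of roughly the right magnitude, not the fitted values used later in the paper.

```python
# Sketch: evaluate the zero-range limit formulas (t5)-(t6) to exhibit the oscillatory
# contribution of multiple scattering in the decay channel.  All parameters are
# illustrative assumptions (MeV, fm units), not the model values of this paper.
import numpy as np

E_o      = -28.0        # assumed resonance energy relative to threshold [MeV]
gK2      = 70.0         # assumed gamma_K^2 scale [MeV fm]
gpi2     = 25.0         # assumed gamma_pi^2 scale [MeV fm]
p_R, p_I = -0.3, 0.8    # assumed complex KN momentum components [1/fm]
p_pi     = 1.0          # assumed momentum in the decay channel [1/fm]

r = np.linspace(0.5, 4.0, 8)
reE = (E_o
       - gK2 * np.cos(p_R * r) * np.exp(-p_I * r) / r
       - gpi2 * np.cos(p_pi * r) / r)                        # Eq. (t5)
imE = (-gpi2 * p_pi * (1.0 + np.sin(p_pi * r) / (p_pi * r))
       - gK2 * np.sin(p_R * r) * np.exp(-p_I * r) / r)       # Eq. (t6)

for ri, re_, im_ in zip(r, reE, imE):
    print(f"r = {ri:4.2f} fm   Re E = {re_:8.2f}   Im E = {im_:8.2f}   width = {-2*im_:6.2f}")
```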
Scattering in the decay channel turns out to be constructive in the KNN case but it is not necessarily so in some heavier systems. S and P wave interactions -------------------------- With the KN interactions allowed in both $S$ and $P$ waves the scattering equation (\[a8\]) is a $4\otimes 4$ matrix equation relating four amplitudes $ \psi_i$. The amplitudes for $S$ waves are now denoted by $ \psi_1^s ,\psi_2^s$. For $P$ wave interactions the corresponding amplitudes are vectors. As there is only one vector in the $NN$ system, the relative separation, the $P$ amplitudes are chosen to be $ \textbf{r}\psi_1^p $ and $ \textbf{r}\psi_2^p $. The scattering is now described by three types of propagators $ G^{a,b}$ related to consecutive collisions in the $(S,S)$ $(S,P)$ and $(P,P)$ waves. The scattering equations are $$\label{sp1} \psi_1^s ~+ ~f^s ~G^{ss}~\psi_2^s - ~f^s~G^{sp}~r^2 ~\psi_2^p =0$$ $$\label{sp2} \psi_2^s ~+ ~f^s ~G^{ss}~\psi_1^s + ~f^s~G^{sp}~r^2 ~\psi_1^p =0$$ $$\label{sp3} \psi_1^p ~+ ~f^p ~G^{pp}~\psi_2^p + ~f^p~G^{sp} ~\psi_2^s =0$$ $$\label{sp4} \psi_2^p ~+ ~f^p ~G^{pp}~\psi_1^p - ~f^p~G^{sp} ~\psi_1^s =0,$$ where the propagation in between two $P$ wave interactions is described by $ G^{pp} = G^{pp}_O + r^2 G^{pp}_T$. Indices numbering the nucleons have been suppressed. The propagator $G^{ss}$ is given in Eq. (\[a9\]) and explicit formulas for $ G^{sp},G^{pp}_O , G^{pp}_T $ may be found in the appendix. All these functions are regular in the $ r\rightarrow 0$ limit. The determinant $D$ of this system factorizes into two terms $$\label{sp5} D = D_S~D_P,$$ where $$\label{sp6} D_S = ( 1 ~+~ G^{ss}~ f ^s)( 1~-~ G^{pp}~ f ^p )~ - ~ G^{sp}~r^2 ~f^s~ f ^p,$$ $$\label{sp7} D_P = ( 1~ -~ G^{ss}~ f ^s)( 1~ + ~ G^{pp}~f ^p )~-~G^{sp}~r^2~ f^s~ f^p.$$ Let us consider the solution of $D_S=0$ close to the $\Lambda(1405)$ resonance. It is given by an equation analogous to (\[a13\]) $$\label{sp8} E = E^* - b^2~ G^{ss}(r,p(E)~ [ 1 ~-~ \frac{r^2(G^{sp})^2 f^p}{1 - G^{pp} f^p}~].$$ The second term in parentheses describes the effect of $P$ wave interactions. At energies below $\Sigma(1385)$ the amplitude $f^p$ is negative and generates an additional attraction. Isospin symmetry simplifies the algebraic structure of the scattering equations which are (see next sections) expressed by appropriate isospin combinations of the isospin KN scattering amplitudes $f$. Equations (\[sp1\]-\[sp4\]) allow for a simple symmetry of the total KNN wave function. Thus, under the eigenvalue condition (\[sp6\]) the coordinate wave function becomes symmetric with respect to the exchange of nucleon coordinates. This condition allows solutions in terms of two amplitudes $\psi^s = \psi^s_1 =\psi^s_2$ and $ \psi^p =\psi^p_1 = - \psi^p_2 $. Wave functions for the KNN system have the form $$\label{I6} \Psi(\bf{r},\bf{x}) = \chi_{NN}(r) [ G(\bf{x}-\bf{r}/2) + G(\bf{x}+\bf{r}/2)] \psi_s + \chi_{NN}(r) \overrightarrow{r} \overrightarrow {\partial}_x [ G(\bf{x}-\bf{r}/2) - G(\bf{x}+\bf{r}/2)] \psi_p,$$ where $\bf{x}$ is the meson coordinate in the NN center of mass system, $ \chi_{NN}(r)$ is the NN wave function. To make this formula more transparent the zero range force limit is taken. The two terms in Eq.(\[I6\]) follow the KN interactions in $S$ and $P$ waves. The weight of the $P$ wave contribution is given by $$\label{I7} \frac{\psi_p }{ \psi_s} = \frac{ f^p~G^{sp}}{1 - f^p~G^{pp}},$$ which becomes dominant close to the zero of the denominator in this equation. 
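For bookkeeping, the factorized determinant (\[sp5\])-(\[sp7\]) and the P-wave admixture (\[I7\]) can be wrapped into small helper functions. Note that eliminating the amplitudes from Eqs. (\[sp1\])-(\[sp4\]), and comparison with Eq. (\[sp8\]), indicates that the mixed propagator enters squared, so the sketch below uses $(G^{sp})^2r^2f^sf^p$ in both factors; the amplitudes and propagators are passed in as externally computed complex numbers and no model values are fixed here.

```python
# Helper sketch: the factorized determinant of Eqs. (sp5)-(sp7) and the P-wave weight of
# Eq. (I7).  f^s, f^p, G^{ss}, G^{sp}, G^{pp} are complex numbers supplied by the KN input
# model; the (G^{sp})^2 factor follows from eliminating psi^p between Eqs. (sp1)-(sp4).

def D_S(fs, fp, Gss, Gsp, Gpp, r):
    """Nucleon-symmetric branch (cf. Eq. (sp6))."""
    return (1.0 + Gss * fs) * (1.0 - Gpp * fp) - (Gsp * r) ** 2 * fs * fp

def D_P(fs, fp, Gss, Gsp, Gpp, r):
    """Nucleon-antisymmetric branch (cf. Eq. (sp7))."""
    return (1.0 - Gss * fs) * (1.0 + Gpp * fp) - (Gsp * r) ** 2 * fs * fp

def p_wave_weight(fp, Gsp, Gpp):
    """psi_p / psi_s of Eq. (I7); it grows as 1 - f^p G^pp approaches zero."""
    return fp * Gsp / (1.0 - fp * Gpp)
```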
At large NN separations it happens almost at the singularity in $ f_p$. In this region the lowest energy, symmetric, solution of $D_S=0 $ is given essentially by the situation $1- G^{pp} f ^p \approx 0 $. Such a solution exists for $r\geq 1.6 fm$ in the proper quadrant of the complex momentum. This implies that, at large separations, it is energetically profitable for the KNN system to exist in the N $\Sigma(1385)$ configuration, with the nucleon and $\Sigma(1385)$ weakly repelling each other. At shorter distances the condition $ 1 + G^{ss} f ^s \approx 0 $ determines the attraction generated by $\Lambda(1405)$. Despite repulsive effects of the $P$ wave interaction such a solution yields the strongest binding, since a large piece of the binding energy is hidden within the structure of $\Sigma(1385)$. The KNN system is built on short range KN and NN correlations and the $ \Psi(\bf{r},\bf{x})$ contains a large number of partial waves coupled to zero total angular momentum. In Jacobi coordinates $(r,s)$ the $L_r \otimes L_s $ decomposition of the first term of $ \Psi(\bf{r},\bf{x})$ is mainly $ S \otimes S $. Both terms involve even values of $L_r $ and the spin-isospin structure of the NN pair is either $ I_{NN}=0 , S_{NN}=1$ or $ I_{NN}=1 , S_{NN}=0$. Both types of states may be formed. Other solutions are determined by $D_P=0$. This condition allows amplitudes of different symmetry $\psi^s = \psi^s_1 =- \psi^s_2$ and $ \psi^p =\psi^p_1 = \psi^p_2 $. In comparison with Eq. \[I6\], the wave functions for the KNN system now become $$\label{I8} \Psi(\bf{r},\bf{x}) = \chi_{NN}(r) [ G(\bf{x}-\bf{r}/2) - G(\bf{x}+\bf{r}/2)] \psi_s + \chi_{NN}(r) \overrightarrow{r} \overrightarrow {\partial}_x [ G(\bf{x}-\bf{r}/2) + G(\bf{x}+\bf{r}/2)] \psi_p$$ are now antisymmetric in the nucleon coordinates and contain odd angular momenta $L_r, L_s $ in the $L_r \otimes L_s$ decomposition. The spin-isospin structure of the NN pair is either $ I_{NN}=0 , S_{NN}=0$ or $ I_{NN}=1 , S_{NN}=1$. The NN interactions in the $ I_{NN}=0 , S_{NN}=0$ states are repulsive and do not support any KNN bound states. On the other hand, $ I_{NN}=1 , S_{NN}=1$ states may be formed. The results of the two previous subsections may be unified. The notation used in Eq.(\[t3\]) is now extended to include the partial wave index in channel KN : $U^{p,s}= G^{p,p} t^s ,~ U^{s,p}= G^{s,s} t^p ,~ U^{p,p}= G^{p,p} t^p, U^{s,s}\equiv U^{K,K}$. The last equivalence indicates that the $P$-wave multiple scattering is included only in the basic KN channel. For the determinant of scattering equations one obtains $ D= D_S~D_P $ where now $$\label{I9} D_S = [(1+ U^{K,K})(1+ U^{\pi,\pi})~ - U^{\pi,K}U^{K,\pi}][1+U^{p,p}]- [(1-U^{\pi,\pi})U^{p,s}U^{s,p}]$$ $$\label{I10} D_P = [(1- U^{K,K})(1- U^{\pi,\pi})~ - U^{\pi,K}U^{K,\pi}][1+U^{p,p}]- [(1+U^{\pi,\pi})U^{p,s}U^{s,p}].$$ The solutions of the corresponding eigen-value equations retain the symmetries indicated in the previous section. Figure 1 indicates typical contracting potentials obtained with an A.Martin solution (see next section and Ref. [@MAR81]). Asymptotically these reproduce the separation energies of the K meson hidden either in the $S$-wave $\Lambda(1405)$ or in the $P$-wave $\Sigma(1385)$. The real binding is generated by the difference $ V_K(r)- V_K(\infty)$ which jointly with $V_{NN}$ determines the nucleon wave function $\chi_{NN}$. ![Contracting potential $V_K$(r) in the NN, $I_{NN}=1, I_{KNN}=1/2$ state, for the symmetric meson wave solutions. 
The $V_S$ line shows *Re* $V_K(r)$ for S wave interactions. The $ V_{SP}$ line shows *Re* $V_K(r)$ for S + P wave interactions. The upper curve $-\Gamma$ shows 2 *Im* $V_K(r)$. These results are based on the A. Martin amplitudes [@MAR81].](knn.epsi){width="40.00000%"} \[fig:VK\] KN interactions =============== The coupled multichannel $KN , \Sigma\pi,\Lambda\pi $ system is the easiest to describe in terms of the $\hat{K}$ matrix related to the scattering matrix $\hat{T}$ by the algebraic Heitler equation $$\label{K1} \hat{T} = \hat{K} - \hat{K} i\hat{ Q} \hat{T},$$ where $Q$ is a diagonal matrix of channel momenta in the C.M. system. Early parameterizations involved constant $K$-matrix elements chosen to fit the scattering data. Later these were improved by an effective range expansion. As the data were (and still are) poor such fits were supplemented by additional consistency conditions formulated in terms of dispersion relations [@MAR81; @MSA69]. Such solutions can be tested above the KN threshold and to some extent in the $\Sigma\pi $ channel. For the dominant isospin $0$ interactions there are two types of solutions. These are given in Table \[KN\] in terms of the inverse $\hat{M}= \hat{K}^{-1}$ matrix which in turn determines the scattering matrix $$\label{K2} \hat{T}^{-1} = \hat{M} + i\hat{ Q}.$$ Extrapolations into the complex energy plane display a similar $\Lambda(1405)$ pole position. However, the physics in both solutions indicates different interplay of the main KN with the hyperon pion channels. The position of the singularity is given essentially by the attractive and, in both cases, large $K_{KN,KN}$ element. This allows one to interpret $\Lambda (1405)$ as a $KN$ quasi-bound state. In principle there exists an alternative possibility - the $\Lambda$ as a quark state. If this is the case it may be introduced into the $K$ matrix as an external pole in $\hat{K}\sim 1/(E-E^*)$. However, the scattering data exclude such a term or limit it to a very small contribution [@MAR81]. Amplitudes below the KN threshold may be tested indirectly, either in the elastic $\Sigma\pi $ channel or in the $KN \rightarrow\Sigma\pi $ transitions on bound nucleons [@STA87]. These reactions support the bound state interpretation but are not very restrictive on the position of the singularity. In particular, the analysis of Dalitz and Deloff [@DAL91] shows that several models offer comparable descriptions of the $\Sigma\pi $ data in the resonance region. The $M$ matrix model given in the DD column of Table \[KN\] is only slightly favored by the authors of ref. [@DAL91]. [^3]. The KWW column in Table \[KN\] comes from a quasi-relativistic separable potential model. It belongs to a second type of solution and was based on the B. Martin – Sakitt solution. In this work we use off-shell extrapolations of both types of solutions. The separable off-shell extension --------------------------------- The three-channel or two-channel separable model is used here to extend the phenomenological S-wave KN interactions off the energy shell. This method is standard in momentum space but here we have already used the coordinate representation. In a single KN channel case the potential equivalent to those of Eq.(\[a3\]) is described by $$\label{S1} V( k , k' ) = \lambda v(k)v(k').$$ The Yamaguchi form-factors $v(k) = \kappa^2 /(\kappa^2 + k^2)$ are convenient to perform explicit analytic calculations in both representations. 
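The algebra behind Eqs. (\[K1\])-(\[K2\]) amounts to a single matrix inversion. The sketch below shows how the multichannel $\hat T$ follows from an $\hat M=\hat K^{-1}$ parameterization and the diagonal matrix of channel momenta; the $2\times2$ matrix used in the demonstration is a placeholder with made-up entries, not one of the Table \[KN\] solutions.

```python
# Sketch of Eqs. (K1)-(K2): T^{-1} = M + i Q, with Q the diagonal matrix of c.m. channel
# momenta.  The example matrix below is a placeholder, NOT a Table [KN] solution.
import numpy as np

def t_matrix(M, q):
    """T = (M + i Q)^(-1) for an M = K^(-1) matrix [1/fm] and channel momenta q [1/fm]."""
    Q = np.diag(np.asarray(q, dtype=complex))
    return np.linalg.inv(np.asarray(M, dtype=complex) + 1j * Q)

if __name__ == "__main__":
    M_example = np.array([[-0.6, 0.9],      # placeholder KN / Sigma-pi block [1/fm]
                          [ 0.9, 1.4]])
    q_example = [0.0, 0.9]                   # KN at threshold, Sigma-pi momentum [1/fm]
    T = t_matrix(M_example, q_example)
    # up to the sign convention, the KN-channel element at threshold is the complex
    # scattering length
    print("T[KN,KN] at threshold =", T[0, 0], "fm")
```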
The related Fourier transforms are of Yukawa form $ v(r)= \kappa^2\exp(-\kappa r)/ ( 4 \pi r) $ and are normalized to delta functions in the limit of zero range forces. The off-shell scattering amplitude becomes $$\label{S3} f_{KN} =\frac{ v(k)v(k')}{ \lambda^{-1} + G(E)}.$$ With the non-relativistic form of the kinetic energy $E_{kin} = q^2/(2\mu_{KN}) $ one obtains $$\label{S4} G(E) = \int d \bf{\tau} \frac{ v(t)^2}{2\pi^2(\tau^2-q^2)} = \frac{ \kappa}{2( 1 - i q/ \kappa )^2} .$$ At the KN threshold energy $ E_t=M_K+M_N $, the standard convention requires one to define the scattering length as $ a_o+ib_o = -~f_{KN}(E_t)$. Below the threshold $1/\lambda + G$ is forced to have a zero corresponding to the $\Lambda(1405)$ state. The complex momentum at this point is $$\label{S5} p_B = [\frac {\lambda_S \kappa^3}{2}]^{1/2} - \kappa.$$ The next step in order to improve the absorptive part is to guarantee that it vanishes below the $\Sigma \pi$ threshold. That is achieved by scaling the absorption strength Im $\lambda $ by a phase space factor $f_{\varrho} = q_{\Sigma\pi}(E) / q_{\Sigma\pi }( E_t) $ where $q_{\Sigma\pi}$ is the momentum in the decay channel. The values $ \lambda_S = - 0.602\exp(i~0.12~f_{\varrho})~fm $ and $ \kappa = 4.5 fm^{-1}$ give a good reproduction of the PDG recommended E = 1405 and $\Gamma=50 $ MeV values [@DAL91; @PDG]. This one channel amplitude may serve as a guide, but to describe finer details and for a better comparison with the scattering data one needs multi-channel separable models. Below, two types of multi-channel reaction matrices are extrapolated off the energy-shell. $\circ$ One solution has been given by Krzyzanowski $et \ al.$ [@KRZ75] in terms of a quasi-relativistic multi-channel separable potential. The $G(E)$ used there differs from the solution (\[S4\]) by an invariant momentum phase space and the use of quasi- relativistic intermediate meson energies $E = q^2/(2M_{N}) + \sqrt{m^2+q^2} $. This solution was motivated by the early B. Martin and Sakitt [@MSA69] K- matrix (BM in Table \[KN\]) and as may be seen in this table it offers similar on-shell parameters. $\circ$ Another solution is based on the commonly used A. Martin’s K-Matrix [@MAR81]. However, in the decay channels this matrix is not well reproduced by simple rank one separable potentials. Instead, we use the extrapolation $ K^{off}(k,E,k') = v(k)~K^{on}~ v(k')$, with the Yamaguchi form-factors and $\kappa =$ 4.5 fm$^{-1}$ as obtained above in the one channel case. P-wave KN interactions ----------------------- To account for the P-wave interactions dominated by $\Sigma(1385)$ the scattering amplitude of equation (\[a7f\]) is generalized to $$\label{P1} f_{KN} = f_S + f_P = f_S + 2 {\bf k} {\bf k}' f_P^{l+}.$$ The last term is a consequence of the $j= l+1/2$ total spin of $\Sigma(1385)$. It involves an $l+1$ factor instead of $2l+1$ typical for spin zero situations. The omitted piece contains spin flip amplitudes and is expected to be small in the few body context. The $f_P^{l+}$ term is described here by a separable single-channel K - matrix which in the KN channel is given by $$\label{P2} K( \textbf{k}, \textbf{k'}) = \gamma_K^2 \frac{ \textbf{k} \textbf{k'}v_P(k)v_P(k')}{ E- E_o+ i \Gamma_\pi/2},$$ where the form-factor is $$\label{P3} v_P(k) = \frac{\kappa^4 }{(\kappa^2 + k^2)^2},$$ and $E_o$ is a phenomenological parameter which determines the position of $\Sigma(1385)$. 
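Returning to the single-channel S-wave model of Eqs. (\[S3\])-(\[S4\]), the quasi-bound state condition $1/\lambda+G(E)=0$ is algebraic and can be solved in closed form. The sketch below does this with the quoted $\lambda_S$ and $\kappa$, using assumed PDG-like masses, the non-relativistic $E=q^2/2\mu_{KN}$, and, for simplicity, the threshold value $f_\varrho=1$ of the phase-space factor (the full treatment rescales Im $\lambda$ below threshold); it also evaluates the threshold scattering length with the stated convention $a_o+{\rm i}b_o=-f_{KN}(E_t)$. These simplifications are part of the illustration only.

```python
# Sketch of the one-channel separable model (S1)-(S4): the quasi-bound state is the zero
# of 1/lambda + G(E), with G = kappa / (2 (1 - i q/kappa)^2).  Masses and f_rho = 1 are
# simplifying assumptions of this illustration.
import numpy as np

HBARC    = 197.327                    # MeV fm
M_K, M_N = 493.7, 938.9               # assumed kaon and nucleon masses [MeV]
MU_KN    = M_K * M_N / (M_K + M_N)    # reduced mass [MeV]
KAPPA    = 4.5                        # inverse range [1/fm]
LAM      = -0.602 * np.exp(1j * 0.12) # lambda_S with f_rho set to 1 [fm]

# 1/lambda + kappa/(2 (1 - i q/kappa)^2) = 0  =>  (1 - i q/kappa)^2 = -lambda kappa / 2
s = np.sqrt(-LAM * KAPPA / 2.0)
roots = (-1j * KAPPA * (1.0 - s), -1j * KAPPA * (1.0 + s))
q = next(r for r in roots if r.imag > 0)           # decaying (second-quadrant) root

E = HBARC**2 * q**2 / (2.0 * MU_KN)                # energy relative to the KN threshold
print(f"q = {q.real:+.3f} {q.imag:+.3f}i 1/fm")
print(f"E = {M_K + M_N + E.real:.1f} MeV,  Gamma = {-2.0 * E.imag:.1f} MeV")

# Threshold scattering length with a0 + i b0 = -f_KN(E_t); at q = 0, v(0) = 1 and
# f = 1 / (1/lambda + kappa/2).
a = -1.0 / (1.0 / LAM + KAPPA / 2.0)
print(f"a0 + i b0 = {a.real:+.2f} {a.imag:+.2f}i fm")
```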
The width $\Gamma_{\pi}$ is strongly energy dependent $$\label{P4} \Gamma_\pi = \gamma_{\Lambda\pi}^2 q_{\Lambda\pi}^3 + \gamma_{\Sigma\pi}^2 q_{\Sigma\pi}^3 .$$ In these equations the $q$’s are the channel momenta and $ \gamma$ are couplings of the resonance to the $\Sigma\pi$ and $ \Lambda\pi$ channels. The latter are determined by the experimental decay width of about 36($\pm 2$) MeV and the $\Sigma\pi$ branching ratio of 0.13($ \pm$ 0.01 ) [@PDG]. For the coupling to the KN channel the SU(3) value $ \gamma_{KN}^2/ \gamma_{\Sigma\pi}^2 = 2/3$ is taken. This value is consistent with the experimental result of Brown, which yields $ 0.57( \pm 0.18)$ [@BRO79]. The coupling to the KN channel generates the off-shell scattering amplitude $$\label{P5} f_P(\textbf{k}, E, \textbf{k}') = \frac{2\textbf{k} \textbf{k}' v_P(k)v_P(k') \gamma_{KN}^2 }{ E- E_o+ \gamma_{KN}^2~\int \frac{d\bf{\tau} \tau^2 ~v_P(\tau)^2 } {2\pi^2(\tau^2-k_{KN}^2)}+ i~ \Gamma_\pi/2 } \equiv \textbf{k} \textbf{k}'v_P(k)f_P(E)v_P(k').$$ Let us notice that below the $KN$ threshold the integral in Eq.(\[P5\]) deforms significantly the shape of the resonance profile. The range parameter $\kappa = 3.8 $ fm$^{-1}$and $ E_o= 1505.2$ MeV are used to reproduces the profile tested experimentally by Cameron [*et al.*]{} [@CAM78] in the $\Lambda \pi$ channel. For further applications the coordinate representation is needed, which is given by equation $$\label{P6} f( \textbf{x}, E, \textbf{x}') = \lambda_P \overleftarrow{\partial}v_P(x)f_P(E) v_P(x')\overrightarrow{\partial'}.$$ in terms of the Fourier transforms of form-factors (\[P3\]). Few nucleon systems ==================== The procedure presented in the KNN section is now extended to systems consisting of several nucleons. Practical calculations are done for three and four nucleons. At first the multiple scattering equations similar to Eqs. (\[a3\] - \[a5\]) in the previous section are solved in fixed nucleon systems. The bound K-meson wave function $\chi_K$ is a solution of $$\label{f1} \chi_K(\bf{x},x_1....x_n ) = - \Sigma_i\Sigma_{\beta} \int \bf{dy }\frac {\exp(ik\mid \bf{x-y}\mid )}{4\pi \mid \bf{x-y}\mid }~ v( \bf{x-x}_i)_{\alpha}~\lambda^{\alpha\beta}~ v(\bf{y-x}_i)_{\beta} ~\chi_K( \bf{y}, x_1....x_n ),$$ where indices $ \alpha, \beta $ denote channels and partial waves of the meson-baryon pair. An index related to the symmetry of $\chi$ is suppressed. By analogy with Eqs. (6) and (7), equation (\[f1\]) may be reduced to a matrix equation for the wave amplitudes defined at each scatterer as $$\label{f2} \psi_i^{\alpha} = \Sigma_{\beta}\int \textbf{dy }~\lambda^{\alpha\beta}~ v(\bf{y-x}_i)_{\beta} ~\chi_K( \bf{y}, x_i ).$$ The kernel of the scattering equation can be now expressed in terms of scattering amplitudes $f_i^{\alpha, \beta}$ at each nucleon $i$ and propagators describing the passage from nucleon $i$ to another nucleon $j$. The latter are given by $$\label{f3} G_{i,j}^{\alpha,\beta} = \int \textbf{dy }\textbf{dx } v( \textbf{x-x}_i)_{\alpha } \frac {\exp(ik\mid \textbf{x-y}\mid )}{4\pi \mid \textbf{x-y}\mid } v(\textbf{y-x}_j)_{ \beta}.$$ The procedure explained in Sect. II leads to a set of linear equations $$\label{f4} \psi_i^{\alpha} + \Sigma_{\beta\gamma}~ \Sigma_j ~ f_j^{\alpha,\beta}~ G_{i,j}^{\beta\gamma} ~ \psi_j^{\gamma} =0 ,$$ which are solved numerically. This matrix equation is simplified as the $G$’s are diagonal in channel indices and the $f$ are diagonal in the partial wave index. 
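The structure of the linear system (\[f4\]) is that of a zero search for the determinant of a "one plus amplitude times propagator" matrix. A stripped-down, single-channel S-wave version for fixed scatterer positions is sketched below; the amplitude $f$ and the propagator $G(r)$ are placeholders to be replaced by the model's $f^s$ and $G^{ss}(r,p)$, and the toy numbers merely illustrate the sign change of the determinant that signals a quasi-bound state (cf. $D_{3s}$ in the appendix).

```python
# Sketch of the fixed-nucleon multiple-scattering problem of Eq. (f4), restricted to one
# channel and S waves: psi_i + f * sum_{j != i} G(|x_i - x_j|) psi_j = 0, so quasi-bound
# states are zeros of det[I + f G].  f and G below are placeholders, not model values.
import numpy as np

def scattering_determinant(positions, f, G):
    """det of A_ij = delta_ij + f G(r_ij) for fixed scatterer positions."""
    x = np.asarray(positions, dtype=float)
    n = len(x)
    A = np.eye(n, dtype=complex)
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = f * G(np.linalg.norm(x[i] - x[j]))
    return np.linalg.det(A)

if __name__ == "__main__":
    # three nucleons on an equilateral triangle of side 1.5 fm (toy geometry)
    a = 1.5
    pos = [(0, 0, 0), (a, 0, 0), (a / 2, a * np.sqrt(3) / 2, 0)]
    G_toy = lambda r: np.exp(-0.7 * r) / r           # placeholder propagator [1/fm]
    for f in (-0.5, -1.0, -2.0, -4.0):               # placeholder amplitudes [fm]
        d = scattering_determinant(pos, f, G_toy)
        print("f =", f, "fm   det =", round(d.real, 3))   # imaginary part is zero here
```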
Still, the corresponding determinants are complicated algebraic expressions involving functions $ G$ and $f$. Numerical solutions become a difficult problem. It has been solved in the following, approximate way. The determinant consists of many terms which are arranged according to the number of collisions. With up to four collisions in the main channel we retain the structure found in the KNN situation and the determinant $D$ of this system is presented as $$\label{f5} D = 1 + \Sigma_{pairs} (D_S~D_P-1) + O_{higher~ orders},$$ where $D_S$ and $D_P$ are defined in Eqs. (\[I9\]) and (\[I10\]). The main term is composed of collisions in the KNN subsystems which allows one to keep track of the wave function symmetry. The terms of higher order in $f$ are dropped. The solution of the full K-few-N bound state problem is given by the equation $$\label{f6} [ E + \frac{\Delta_x}{2 m}+ \sum_i \frac{\Delta_i}{2M}- \sum_i V_{KN_i}- \sum_{i,j} V_{N_iN_j} ] \Psi(x,x_1,..,x_n) = 0.$$ Again we assume the wave function to be given by eqs.(\[a2\]) and (\[f1\]) i.e. $\chi_K(x,x_1..x_n) \chi_{N}(x_1,..,x_n))$. Projecting Eq.(\[f6\]) on $\psi_K$ one obtains the Schrödinger equation for the few nucleon wave function $$\label{f7} \chi_{N} (x_i) = \int \textbf{dx } \chi_K( x,x_i) \Psi (x,x_i)$$ in the form $$\label{f8} [ E - E^{c}(x_i) + \sum_i \frac{\Delta_i}{2M} - \sum V_{NN}]\chi_N + \Delta E_{kin} \chi_N =0 ,$$ where $\Delta E_{kin}$ is a correction to the nucleon kinetic energies. As in Eq. (\[s18\]) it is given by $$\label{f9} \Delta E_{kin} \chi_N = -\frac{1}{M} \Sigma_i \int \bf{dx }\partial_i\psi_K\partial_i \Psi$$ and as before turns out to be very small due to angular averaging and sign changes in both the derivatives. As discussed in the KNN situation this correction has been dropped. In deriving equation (\[f8\]) a special form (\[f1\]) of the meson wave function is used. As in Eq. (\[s19\]) it satisfies the relation $$\label{f10} \Delta_x \chi_K =\sum_i\Delta_i\chi_K.$$ In the next step, equation (\[f8\]) is solved by a variational method with the NN potential of Argonne [@ARG95]. The trial wave function is of the form $$\label{f11} \chi_N =\prod_{i,j, i\neq j}[ 1-\exp(-\lambda_c^2 (\bf{x}_i-\bf{x}_j)^2)]\exp(-\lambda_l \mid\bf{x}_i-\bf{x}_j \mid) /\mid\bf{x}_i-\bf{x}_j \mid,$$ where $\lambda_c,\lambda_l$ are variational parameters. This form is chosen to give the correct asymptotic limit for large $ \mid \bf{x}_i-\bf{x}_j \mid$ and also gives a vanishing wave function for small $ \mid \bf{x}_i-\bf{x}_j \mid$ as expected for a strong repulsion in the NN potential. Results ======== KNN --- In this section the calculations of KNN levels are presented. The sensitivity to KN input parameters is studied and the states of different symmetry are discussed. Contracting potentials $V_K(r)$ were calculated with several solutions for the phenomenological S-wave $KN$ reaction matrices presented in Table \[KN\]. The solutions of second type may be well fitted with a rank one separable potential. Here, the calculations are done with the quasi-relativistic model of [@KRZ75]. This model is based on the $K$-matrix of B. Martin [@MSA69]. Numerically it is fairly close to the separable potential of Ref. [@ALB76]. For the the first type of solutions in Table \[KN\] - due to A. Martin [@MAR81] no satisfactory rank one separable approximation is found. This difficulty is related to the large effective range parameters involved in this $K$ matrix. 
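Before turning to the off-shell extension, the variational step applied to Eq. (\[f8\]) with the pair-correlated ansatz (\[f11\]) can be illustrated in the simplest possible setting, a single radial NN coordinate: the same two-parameter correlation form is inserted into a Rayleigh quotient and minimized on a coarse parameter grid. The Gaussian well standing in for $V_K+V_{NN}$ and the grid scan (in place of a proper minimizer) are simplifications of this illustration only, not the procedure used for the tables below.

```python
# Toy variational sketch in the spirit of Eqs. (f8) and (f11): minimize the Rayleigh
# quotient of the reduced radial function u(r) = [1 - exp(-(lc r)^2)] exp(-ll r).
# The -120 MeV Gaussian well is a stand-in for V_K + V_NN, not the model potential.
import numpy as np

HBARC, M_N = 197.327, 938.9                  # MeV fm, MeV
MU = M_N / 2.0                               # NN reduced mass [MeV]
r  = np.linspace(1e-4, 15.0, 4000)           # radial grid [fm]
dr = r[1] - r[0]
V  = -120.0 * np.exp(-(r / 1.2) ** 2)        # toy attractive well [MeV]

def energy(lc, ll):
    u  = (1.0 - np.exp(-(lc * r) ** 2)) * np.exp(-ll * r)
    du = 2.0 * lc**2 * r * np.exp(-(lc * r) ** 2) * np.exp(-ll * r) - ll * u
    kin = HBARC**2 / (2.0 * MU) * np.sum(du**2) * dr
    pot = np.sum(V * u**2) * dr
    return (kin + pot) / (np.sum(u**2) * dr)

best = min((energy(lc, ll), lc, ll)
           for lc in np.linspace(0.5, 3.0, 26)
           for ll in np.linspace(0.1, 2.0, 39))
print("E_min = %.2f MeV  at  lambda_c = %.2f 1/fm, lambda_l = %.2f 1/fm" % best)
```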
In order to retain the physics involved, a simple off-shell extension is adopted $ K_{i,j} \rightarrow v(k_i)K_{i,j}v(k_j)$. The Yamaguchi form-factors have been used and the inverse range parameter $\kappa$ was varied over the range of $3-6$   fm$^{-1}$. The actual value of $\kappa$ affects the multiple scattering via propagator $G_{ss}(k,r)$ of Eq. (\[a9\]). Larger values reduce the form-factor $v(k)$ but enhance the significance of the small $r$ region in $G_{ss}(k,r)$. On average these two effects balance very well and one finds a very weak ($\sim 1$ MeV) dependence of the total binding energy on the actual value of $\kappa$. The results given in Fig.1 and in the tables which follow are obtained with $ \kappa=4.5$  fm$^{-1}$. The energies of the most strongly bound KNN, $I_{tot}=1/2$, $I_{NN}=1$, quasi-bound states are given in Tables \[KNNS1\] and \[KNNlow\]. The first table describes several steps of the approximation while the second table indicates the dependence of binding on the KN input parameters. The first line in Table \[KNNS1\] is determined essentially by the effects of $\Lambda(1405)$ excitations described in the elastic channel only. The second line describes additional effects due to multiple scattering in the $\Sigma\pi$ channel. The other two lines include the P wave interactions. The energies of $I_{tot}=1/2$, $I_{NN}=1$ states given by the S wave interactions and described by multiple scattering in the single, KN, channel span the region of 30-50 MeV. These results are consistent with the findings of Akaishi and Yamazaki [@AKA02] and Dote and Weise [@DOT07] obtained with different methods. The differences within this range are due to a different KN and/or NN input. As seen in the second and fourth rows, significant changes arise with the explicit inclusion of the multiple scattering in the $\Sigma,\pi $ channels. The binding rises by 10 to 20 MeV and the effect of collision broadening is large. solution --------------------- ------- ---------- ----------- ------- ---------- ----------- $E_B$ $\Gamma$ $R_{rms}$ $E_B$ $\Gamma$ $R_{rms}$ $KN; ~~S$ 27 36 3.1 35.5 37 2.4 $KN,\Sigma\pi;~S $ 37 42 2.5 43.1 47 2.1 $KN; ~~S,P$ 49 36 3.7 49.7 36 3.3 $KN,\Sigma\pi;~S,P$ 52 37 2.9 56.5 39 2.3 $KN; ~~S$ 47 47 2.3 : Binding energies and widths \[in MeV\] of the KNN, $I_{tot}=1/2$, $I_{NN}=1$ space-symmetric states. The results on the left are based on [@MAR81] parameters, the results on the right follow KWW [@KRZ75] parameters discussed in Table \[KN\]. The first column specifies the channels explicitly involved in the multiple scattering and meson-nucleon partial waves. $R_{rms}$ is the radius mean squared of the N-N separation \[in fm \]. The last line is obtained with the simplest separable potential discussed in the text and the $I_{KN}=1$ amplitudes from AM. \[KNNS1\] Table \[KNNlow\] shows binding energies obtained with the “canonical” $\Lambda$ pole position $E = 1405 $ MeV [@DAL91; @PDG] which is lower than the position obtained in other parameterizations. The result given in the second line of this table is comparable to the results obtained, with a similar input, by Schevchenko [*et al.*]{}[@SHE07]. The latter work employs a superior Faddeev technique, but a more detailed comparison of results is not easy since that calculation uses a rank one separable potential to describe the NN interactions. 
solution KWW$^*$ --------------------- ------- ---------- ----------- $E_B$ $\Gamma$ $R_{rms}$ $KN; ~~S$ 50 51 2.05 $KN,\Sigma\pi;~S $ 71 85 1.81 $KN; ~~S,P$ 65 43 2.09 $KN,\Sigma\pi;~S,P$ 78 60 1.88 : Binding energies and widths \[in MeV\] of the KNN, $I_{tot}=1/2$, $I_{NN}=1$ space-symmetric states. These results are based on KWW [@KRZ75] parameters modified to set the pole of $\Lambda(1405)$ at 1405 MeV and the width at 48 MeV and given in Table \[KN\] (KWW$^*$). The first column specifies the channels explicitly involved in the multiple scattering and the meson-nucleon partial waves. $R_{rms}$ is the radius mean squared of the N-N separation \[in fm\]. \[KNNlow\] The inclusion of resonant P wave interactions increases the binding by some 10 MeV. There is some room for different values as the experimental $K N \Sigma(1385)$ coupling is not certain. However, the main effect of $\Sigma(1385)$ is a change of structure in the KNN systems. A sizable portion of the binding energy is contained in the structure of this resonance. On the other hand the system is dissolved as the inclusion of P waves enlarges the NN separation and the formation of $\Sigma(1385)$ is essentially a peripheral effect. The KN correlations for $ r>1.6 $ fm are mostly of the $\Sigma(1385)$ type. The other effect of P wave interactions is a formation of additional KNN states. These are given in Tables \[KNNS0\],\[KNNP\] and discussed below. The energies of KNN quasi-bound states with $I_{NN}=0$ given in Table \[KNNS0\] are determined essentially by the $\Sigma(1385)$ excitations. Let us notice that the result is unstable against the KN input. The state is still more likely to exist with a lower value of the $\Lambda(1405)$ energy. In any case it is a very loose structure that might be a quasi-bound or a virtual state. $E_B$ $\Gamma$ $R_{rms}$ $E_B$ $\Gamma$ $R_{rms}$ ---------- ---------- ----------- ------- ---------- ----------- no  b.s. - - 47.1 36 $\sim $7 : Binding energy and width \[in MeV\] of the KNN, $I_{tot}=1/2$, $I_{NN}=0$ space-symmetric states. Results on the left are based on A.Martin parameters and do not support a bound state (no b.s.). The results on the right follow the KWW parameters. This result involves $S+P$ wave interactions and external interactions in two meson-nucleon channels. $R_{rms}$ is the radius mean squared of the N-N separation \[in fm\]. \[KNNS0\] $I_{NNK}$ $I_{NN}$ $E_B[MeV]$ $\Gamma[MeV]$ $R_{rms}[fm]$ --------- ----------- ---------- ------------ --------------- --------------- $K^-nn$ 3/2 1 48.5 36 4.9 : The binding energy and width \[in MeV\] of the $KNN$ $I_{tot}=3/2$, $I_{NN}=1$ space-asymmetric states. In the $NN$ subsystem $^{2S+1}L_J= ^3P_2$. The last column gives the radius mean squared of the N-N separation, \[in fm\]. \[KNNP\] The energy of an asymmetric quasi-bound state is given in Table \[KNNP\]. It is determined essentially by the $\Sigma(1385)$ excitations. The table in Appendix B indicates that the $K^-nn$ state has the largest possible $\Sigma$ component which offers the strongest $\Sigma(1385)-N$ attractive potential. It reaches a maximum depth of about $-10$ MeV at $~ 1 ~fm $ distance, but it is not strong enough to overcome the NN $P$-wave barrier and generate a quasi-bound state. To obtain real binding, assistance from the $I_{KN}=1 $ $ S$-wave state and the $NN$ $P$-wave attraction is necessary. Thus, the $NN$ interactions repulsive at large distances in the $ I_{NN}=0$, $ S_{NN}=0,L_{NN}=1$ waves do not support bound states. 
However, such states may be generated by $ I_{NN}=1$ , $S_{NN}=1,L_{NN}=1$ interactions. Here, the analysis becomes more subtle as the $NN$ interaction is strongly spin dependent. The energy given in Table \[KNNP\] corresponds to $J=2$ ($^3P_2$) wave in the NN subsystem where the interaction is the most attractive. This KNN system is large and loosely bound, by about $1.5$ MeV in the $\Sigma(1385)-N$ configuration. In this calculation the S wave parameters come from AM [@MAR81] and the calculated energy is uncertain, as the involved $K^-n$ parameters are poorly known. The experimental detection would be very difficult, nevertheless, a more precise analysis of the spin and isospin structure of such states is of interest in the context of $K^-D$ atoms and will be performed elsewhere. KNNN and KNNNN systems ---------------------- The discussion of these systems is limited to the states of the simplest symmetry. The fixed nucleon model generates a contracting potential which in KNNN systems may be, to a good approximation, presented in the form $$\label{R1} E^c(R_x, R_y, R_z) = - V_{NNN}[1- C\exp(-\lambda_{s}( R_x+R_y+ R_z))][\exp(-\lambda_{l}R_x) + \exp(-\lambda_{l}+ R_y)+ \exp(-\lambda_{l}R_z)] - V(\infty) ,$$ where $R_x, \ R_y, \ R_z $ are the inter-nucleon distances. The short range behavior at the triple coincidence may be obtained analytically and $C= 0.42$, other parameters being numerical. With the KN parameters of Refs.[@KRZ75],[@MAR81] and $I_{tot}=0$ the parameters are obtained in the range $V_{NNN}\sim150-200$ MeV, $\lambda_{s}\simeq4.5$ fm , $\lambda_{l}\sim 1.8-1.9$ fm. For $I_{tot}=1$ one has $V_{NNN}\sim 50-60$. $V(\infty)$ is the binding of KN into $\Lambda(1405)$ in the S wave case or $\Sigma(1385)$ in the S+P wave case. For $I_{tot}=0$ the corresponding binding energies are given on the left side of Table \[KNNN0c\]. These numbers may be compared to the simplest version of this model - the $S$ wave interactions described by the single KN channel - which produce 91 MeV binding. The modified version of the KWW model with parameters from Table \[KN\] fixed to set the $\Lambda(1405)$ energy to 1405 MeV yields much stronger contracting forces, $V_{NNN}\sim250-350$ MeV and $\lambda_{l}\sim 2.1-2.3$ fm. The states indicated on the right side in Table \[KNNN0c\] are bound very deeply. The basic NNN systems obtained with our variational wave function are over-bound by about 2 MeV and this value has already been subtracted from the numerical KNNN energies. $E_B$ $\Gamma$ $E_B$ $\Gamma$ ------- ------- ---------- ------- ---------- $S$ 103 29 142 25 $S+P$ 119 23 153 21 : Binding energies and widths \[in MeV\] of the $I_{tot}= 0$, KNNN, space-symmetric states obtained with the two channel KN , $\Sigma\pi$ channel multiple scattering formulation. The results on the left are based on the modified KWW [@KRZ75] parameters with $\Lambda(1405)$ energy set to 1405 MeV. The first column specifies meson-nucleon partial waves involved. The widths do not describe non-mesic capture modes. \[KNNN0c\] $E_B$ $\Gamma$ ------- ------- ---------- $S+P$ 63 38 : Binding energies and widths \[in MeV\] of the $I_{tot}= 1$, KNNN, space-symmetric states obtained with the two channel KN , $\Sigma\pi$ channel multiple scattering formulation. These states are formed as a result of the $P$ wave interactions with some assistance of the $S$ wave attraction. \[KNNN1c\] There may exist a number of states in the KNNNN systems. 
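The parameterized KNNN contracting potential (\[R1\]) is straightforward to transcribe; the sketch below does so, with the obvious typo in the middle exponential read as $\exp(-\lambda_l R_y)$, and with representative values from the quoted $I_{tot}=0$ ranges. $V(\infty)$ must be supplied by the KN input model and is left as an argument; the demonstration values are illustrative, not a fit.

```python
# Transcription sketch of the parameterized contracting potential of Eq. (R1).
# Default constants are taken from the ranges quoted in the text for I_tot = 0
# (C = 0.42, V_NNN ~ 150-200 MeV, lambda_s ~ 4.5 /fm, lambda_l ~ 1.8-1.9 /fm);
# V_inf is the K separation energy from the KN input model.
import numpy as np

def E_contract(Rx, Ry, Rz, V_NNN=175.0, lam_s=4.5, lam_l=1.85, C=0.42, V_inf=0.0):
    """E^c(R_x, R_y, R_z) of Eq. (R1); distances in fm, output in MeV."""
    short = 1.0 - C * np.exp(-lam_s * (Rx + Ry + Rz))
    long_ = np.exp(-lam_l * Rx) + np.exp(-lam_l * Ry) + np.exp(-lam_l * Rz)
    return -V_NNN * short * long_ - V_inf

if __name__ == "__main__":
    for R in (1.0, 1.5, 2.0, 3.0):
        print(f"R = {R:3.1f} fm (equilateral)   E^c = {E_contract(R, R, R):7.1f} MeV")
```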
In Table \[KNNNN\] one finds only the states with the simplest symmetry, which involve wave functions symmetric under exchange of the nucleon coordinates. In the absence of the K meson the basic $\alpha$ particle structure is used and only the $S$ wave NN interactions are included. With the tensor interactions described by Eq.(\[Wir2\]) this $\alpha$ system is over-bound by about 10 MeV, and this value has been subtracted from the calculated KNNNN levels. $E_B$ $\Gamma$ $E_B$ $\Gamma$ ------- ------- ---------- ------- ---------- $S$ 121 25 170 10 $S+P$ 136 20 172 10 : Binding energies and widths \[in MeV\] of the KNNNN , space-symmetric, $S_{tot}=0$, $I_{tot}=1/2$ states. See captions to Table \[KNNN0c\]. \[KNNNN\] Level widths ------------ Level widths are calculated as twice the expectation value of $ Im ~V_K$. The KN resonant lifetimes are strongly energy dependent, being very short at the resonance, becoming longer below the resonant energies and staying infinite below the thresholds of the decay channels. This trend is reflected by $ Im ~V_K$ in Fig. \[fig:VK\]. The energy dependence contained in the amplitude $f_{KN}( - E_B- E_{\rm{recoil}})$ of Eq. (\[1\]) is traded into the space dependence $f_{KN}(V_K)$. These two types of averaging give fairly close results provided the final binding energy is located well above the threshold of the decay channels. Let us indicate some consequences of this relation. The states generated by the P wave interactions given in Tables \[KNNS0\],\[KNNP\], and \[KNNN1c\] correspond to a fairly loosely bound $\Sigma(1385)$ and the widths of quasi-bound states are essentially equal to the width of the $\Sigma(1385)$. This comes as a result of the peripheral binding and weak effects of the collision broadening in the $P$ wave resonances. In these states, the $V_K$ underestimates slightly the average $ - E_B- E_{\rm{recoil}}$ and the real widths might be smaller. For the binding energies in the range 60-90 MeV the $V_K$ is too small at large distances and too large at small distances with a reasonably good average. Let us notice that the level widths generated by the $S+P$ interactions are smaller than those generated by the $S$ waves alone. This is due to three factors: the width of $\Lambda(1405)$ is larger than the width of $\Sigma(1385)$, the collision broadening in $P$ waves is small and the systems due to the $S+P$ interactions are less compressed. Let us also notice very strong sensitivity to the input KN amplitudes. The few examples of $ Im ~F_{100}$ given in Table \[KN\] and the differences of the widths in the $ I_{tot}=1/2$, $ I_{NN}= 0$, KNN state visualize this point. In cases of very large binding, in the range of 120-200 MeV, one ($\Sigma\pi$) or two ($\Sigma\pi,\Lambda\pi$) decay channels are blocked and the widths calculated here are over estimated. Such a situation is likely to happen in the K-$\alpha$ state. To account for that effect, the calculation of the contracting potential was repeated in an optical potential manner. So, the momenta in the decay channels $\Sigma\pi,\Lambda\pi$ were related to the binding energy $E_B$ and allowed no outgoing waves. Such a calculation results in a stronger binding. In the KWW$^*$ model one obtains binding of 220 instead of the 170 given in Table \[KNNNN\]. The real decay width is now given by the multi-nucleon capture mode. 
The multi-nucleon captures are initiated by the non-mesic KNN $\rightarrow$ Y N mode and the branching ratio for this process is known from emulsion studies to be about $20\%$ in light nuclei [@ROO79]. The emulsion data are obtained with stopped K mesons and pertain to the nuclear surfaces. An extrapolation in terms of a characteristic nuclear densities $\rho$ and two body phase space $L$ $$\label{R2} \Gamma_{multi} \simeq L ~ \varrho^2 ~ \gamma$$ for this decay was attempted in Ref.[@WYC86]. A constant $\gamma$ may be fixed to the emulsion branching ratio and a 20 MeV level width in the nuclear matter at 90 MeV binding was obtained. In the strongly bound, few-body systems the kinematics of the decay is different since the residual nucleons also take sizable recoil energies. Roughly, for a three body decay $ L \sim Q ^2$ where $Q$ is the decay energy. Again, an extrapolation from the emulsion data in terms of the available phase space and the involved nuclear density yields non-mesic capture widths in K-$\alpha$ in the 10-30 MeV range. These estimates are somewhat larger than the 12 MeV obtained for the KNNN system in Ref.[@AKA02]. Unfortunately such extrapolations are uncertain as the energy dependence in $\gamma$ might be large and the $Q$ value is not known. Help from new experiments is necessary to settle these questions. Conclusions =========== In this paper a new method to calculate the deeply bound KNN, KNNN and KNNNN states has been presented. The calculation consists of two steps. First a wave function involving strongly correlated K-N subsystems is found in a fixed nucleon approximation. This step also allows one to find potentials due to the K meson which tend to contract the inter-nucleon distances. Next, these correlated wave functions and contracting potentials are used as input in the Schrödinger or variational calculations for the K-few-nucleon binding. The lowest binding energies based on a phenomenological KN input fall into the 40-80 MeV range for KNN, 90-150 MeV for KNNN and 120-220 MeV for K$\alpha$ systems. The uncertainties are due to unknown KN interactions in the distant subthreshold energy region. We obtain at least partial answers to the basic questions presented in the introduction. $(a)$ The binding mechanism: the dominant mechanism of the attraction is related to the $\Lambda(1405)$ state. This fact has been known for a long time. In addition, it is found here, that the $\Sigma(1385)$ contributes significantly to the structure of the K-few-N bound states but much less to the actual binding energies. The bound states are built from the strongly correlated KN subsystems. At central densities these correlations resemble the $\Lambda(1405)$ and at peripheries the correlations are made by the quasi-free $\Sigma(1385)$. Sizable fractions of the binding energies are contained in the KN correlations. One consequence is that even with the strong bindings the nucleon densities are not dramatically enhanced as in Ref. [@AKA02] but can become a factor 2-4 larger than the standard nuclear matter density $\rho_o$. The presence of $\Sigma(1385)$ resonances in the few nucleon systems generates new states. These are predominantly P-wave states or states built upon the P-wave N-N interactions and are usually broad and loosely bound developing long tails built from the $\Sigma(1385)$ correlation. 
$(b)$ The control of technical questions: the choice of correlated wave functions removes the difficulties related to the uncertain K-N interaction range and allows one to use realistic N-N interactions. The recoil energy of the KN subsystems with respect to the residual nucleons is described only in an average sense. This seems to be the weakest part of this method. $(c)$ The widths are related to the lifetimes of the $\Lambda(1405)$ and $\Sigma(1385)$ enhanced by the collision broadenings. Under the phenomenological KN interactions ( the $\Lambda $ pole located at  $\sim 1412 - 17i $ MeV ) the K-few-N states are $\sim~40 $ MeV wide. On the other hand, the models with the $\Lambda $ pole located at $ \sim 1405 - 25i $ MeV generate K-few-N states which are more deeply bound. These may be either very broad or quite narrow. With the binding energies in the 60-80 MeV range ( the KNN case) very broad - up to 90 MeV - states are obtained. However, with the bindings of 120 MeV (KNNN) or 170 MeV (KNNNN) the single nucleon decay modes are effectively blocked. The widths are strongly reduced and the main decay modes are due to multi-nucleon K captures. These widths are hard to predict, a simple model suggested here generate widths of about 20 MeV. A simple physical picture emerges from this approach. The mesons are strongly correlated to slowly moving nucleons. The correlations are of the $\Lambda(1405)$ type at large densities, and of the $ \Sigma(1385)$ type in the peripheries. Each K,N pair has a good chance to stay also in the $\Sigma ,\pi$ form. The structure is rather loose as sizable fractions of the binding energies are hidden in the short ranged correlations. [0]{} T. Suzuki $ et \ al$., [*Phys.Lett.*]{}   [**B597**]{}(2004)263 : M. Iwasaki, [*Proc.EXA 05, Austr. Acad. Sc. Press, 2005* ]{}p. 93: M. Agnello for FINUDA Collab. [*Phys.Rev.Lett.*]{} [**94**]{}(2005)212303 S. Wycech, [*Nucl.Phys.*]{}   [**A 450**]{}, 399c, (1986) E. Friedmann and A.Gal, [*Physics Reports* ]{}  [**452**]{} (2007)89 E. Oset and H.Toki [*Phys. Rev.* ]{} [**C74**]{}(2006)015207 V.K. Magas, E. Oset, A. Ramos and H. Toki [*Phys. Rev.* ]{} [ **C74**]{}(2006)025206, Y. Akaishi and T. Yamazaki, [*Nucl.Phys.* ]{} [**A792**]{}(2007)229 Y. Akaishi and T. Yamazaki, [*Phys. Rev.* ]{} [**C65**]{}(2002)044005 W. Weise, [*Proc.EXA 05, Austr. Acad. Sc. Press, 2005* ]{}p. 35 W. Krzyzanowski, J. Wrzecionko and S. Wycech [*et al*]{}., [*Acta.Phys.Pol.*]{} [**B6**]{}(1975)259 A. Dote and W. Weise, [*EPJ A* ]{}, nucl-th/0701050 [*Prog.Theor. Phys. Suppl.* ]{} [**168**]{}, 593(2007) N. Shevchenko, A. Gal and J. Mares, [*Phys. Rev.* ]{} [**C76**]{}, 044004(2007) Y. Ikeda and T. Sato, [*Phys. Rev.* ]{} [**C76**]{}, 035203(2007) R.B. Wiringa [*et al*]{}., [*Phys. Rev.* ]{} [**C51**]{}(1995)38 S. Wycech and A.M. Green, [*Proc.EXA 05, Austr. Acad. Sc. Press, 2005* ]{}p. 101, [*IJMP*]{}[**A22**]{}, 629(2007) K.A. Brueckner, [*Phys. Rev.*]{} [**89**]{}, 834(1953) W.L. Foldy and J.D. Walecka, [*Ann.Phys.*]{} [**54**]{}(1969)447 A.D. Martin [*Nucl.Phys.* ]{} [**B94**]{}(1975)413 B.R. Martin and M. Sakitt, [*Phys.Rev* ]{} [**183**]{}(1969)307, B.R. Martin in [*Proceedings of Herceg Novi Summer School 1972.* ]{} B.R. Martin, [*Nucl.Phys.* ]{} [**B184**]{}(1981)33 M. Alberg, E. Henley and L. Wilets, [*Ann. of Phys.* ]{} [ **96**]{}(1976)43. R.H. Dalitz and A. Deloff, [*J.Phys.G* ]{} [**17**]{}(1991)289 O. Brown, [*Nucl.Phys* ]{} [**129**]{}(1979)1 W. Cameron [*et al*]{}. 
, [*Nucl.Phys.* ]{} [**143**]{}(1978)189 Particle Data Group, [*Phys.Rev .* ]{}[**D 50**]{}, 1734(1990) E. Friedmann, A. Gal, J. Mares and A. Cieply, [*Phys.Rev .* ]{} [**60**]{}, 024314(1990) J. Mares, E. Friedmann and A. Gal, [*Nucl. Phys.* ]{} [**A 777**]{}, 84(2006) R. Staronski and S. Wycech, [*Journ. Phys.*]{}   [**G13**]{}, (2004)1361 : T.Yamazaki and Y. Akaishi, [*Phys. Rev.* ]{} [**C76**]{}, 045201(2007) R. Roosen [*et al.*]{}, [*Nuovo. Cim.* ]{} [**49A**]{}, 217(1979) Acknowledgments {#acknowledgments .unnumbered} =============== This work is supported by the KBN grant 1P0 3B 04229 and the EU Contract No. MRTN-CT-2006-035482, FLAVIAnet”. Part of this work was carried out while one of the authors (SW) was at the Helsinki Institute of Physics. Propagators =========== Several formulas for kernels of multiple scattering equations are collected in this appendix. For Yamaguchi form-factors, the propagators $G_{i,j}$ yield analytic expressions. Thus for two consecutive S-wave interactions one has $$\label{22} G(r,k)^{ss} = \frac{1}{4\pi r} v(k)^2 \Big[ \exp(ikr)-\exp(\kappa r)- r \frac{\kappa^2 +k^2}{2\kappa} \exp(-\kappa r) \Big],$$ where$ \mathbf{r }= \mathbf{x}_i- \mathbf{x}_j $, and the indices $i,j$ referring to the nucleon sites are suppressed. For an initial S wave scattering followed by a P wave scattering $G$ becomes a vector $$\label{22a} \textbf{G}(r,k)^{sp} = \textbf{r}G^{sp}(r,k),$$ $$\label{22b} G^{sp} = \frac{1}{4\pi r} v(k)^2 \Big[ \exp(ikr) (ikr-1) -\exp(\kappa r)[1 +\kappa r + \frac{r^2(\kappa^2 + k^2)}{2} + \frac{r^3(\kappa^2 + k^2)^2}{8\kappa} ]\Big].$$ For two consecutive P wave interactions the propagator is a tensor of the form $$\label{22c} \textbf{G}(r,k)^{pp}|_{nm} = v(k)^2[ \delta_{nm}~G^{pp}_{O} + r_{n} r_{m}~G^{pp}_{T} ].$$ These functions may be expressed in terms of basic integrals $$\label{22d} g_n(r) = \frac{4\pi}{(2\pi)^3} \int d{\bf p} \exp(ikr) \{\frac{\kappa^2 }{\kappa^2+p^2}\}^{n} \frac{1}{p^2- k^2} ,$$ which give by recurrence $$\label{23} g_{1}(r) = \frac{ \exp(ikr)-\exp(\kappa r)}{ r}, \ \ g_{2}(r)= g_{1}(r) - \frac{\kappa^2 +k^2}{2\kappa} \exp(-\kappa r)$$ $$\label{23b} g_{3}(r)= g_{2}(r) - \frac{(\kappa^2 +k^2)^2}{8\kappa^3}(1+\kappa r) \exp(-\kappa r), \ \ g_{4}(r)= g_{3}(r) - \frac{(\kappa^2 +k^2)^3}{16\kappa^5} (1+ \kappa r + \frac {(\kappa r)^2}{3} )\exp(-\kappa r)$$ and finally $$\label{24} G^{pp}_{O} = \frac{g_{4}(r)'}{r}, \ \ G^{pp}_T = \frac{ g_{4}(r)''}{r^2} -\frac{ g_{4}(r)'}{r^3}.$$ Isospin symmetry ================ It is assumed here that the isospin is conserved in the quasi-bound states of K mesons. In the lowest S-wave states of the KNN systems the isospin wave functions may be built upon iso-singlet or iso-triplet NN states. From the experimental point of view, the most interesting one seems to be $$\label{IT1} \Psi^{1/2}_1 = \{\{NN\}^{1}K\}^{1/2} = \sqrt{3}/2\{\{NK\}^{0}N\}^{1/2}+1/2\{\{NK\}^{1}N\}^{1/2},$$ where in $\Psi^{1/2}_1$ the upper index denotes isospin $ I_{nucl}$ and the lower index denotes the spin of the two nucleons. On the right side the isospin content in the KN subsystem is given. This state is a mixture of $K^-pp $ and $K^0np$ and is frequently named $K^-pp $ since it can be experimentally accessed via this entrance channel. 
The NN spin in this state is $S=0$ and the effective KN interaction amplitude obtained from Eq.(\[IT1\]) becomes $$\label{IT2} f_{KN} = 3/4 f^0_{KN} +1/4 f^1_{KN}.$$ Another KNN state of interest is built upon the NN iso-singlet $$\label{IT3} \Psi^{1/2}_0 = \{\{NN\}^{0}K\}^{1/2} = -1/2 \{\{NK\}^{0}N\}^{1/2}+\sqrt{3}/2\{\{NK\}^{1}N\}^{1/2} .$$ This state is a mixture of $K^-np $ and $K^0nn $ which might be reached by the $K^-np $ entrance channel. Now the NN spin is $S=1$ and the effective KN interaction amplitude obtained from Eq. \[IT3\] becomes $$\label{IT4} f_{KN} = 1/4 f^0_{KN} +3/4 f^1_{KN}.$$ The S - wave KN interaction in the $\Psi^{1/2}_0 $ state is much less attractive than in the $\Psi^{1/2}_1 $ state since the $\Lambda(1405)$ contribution is reduced. However, this is offset by the strong short range attraction in the NN system due to the tensor force. An additional attractive force is due to a larger contribution from the $\Sigma(1385)$ resonance. Finally one may have total isospin 3/2 states of the type $K^-nn $ or $K^0pp$ $$\label{IT5} \Psi^{3/2}_1 = \{\{NN\}^{1}K\}^{3/2} = \{\{NK\}^{1}N\}^{3/2}.$$ Those states, involve weakly attractive and uncertain, S - wave KN I = 1 amplitudes. A deeper state can in principle be built upon the stronger P - wave interactions. Its existence and the chances for detection present a situation that is more difficult than the other cases. For the three nucleon problem we retain the dominant structure of the triton and helium isospin symmetry. The KNNN wave function is assumed to be of the form $$\label{IT7} \Psi^{T} = \frac{1}{\sqrt{2}}\{ \{\{\{NN\}^{0,1}N\}^{1/2}+\{\{NN\}^{1,0}N\}^{1/2}\}K\}^{T},$$ where the pair of indices denote spin and isospin of the $NN$ pair. Re-coupling to the KN system leads in the total T=0 state to the relation $$\label{IT8} \Psi^{0} = \sqrt{1/2}\{\{NN\}^{1} \{NK\}^{1} + \{NN\}^{0} \{NK\}^{0}\}^0$$ and in this case the KN interaction amplitude is $$\label{IT9} f^s = 1/2 f^0_{KN} +1/2 f^1_{KN} .$$ Likewise for the total isospin 1 system $$\label{IT10} f^s = 1/6 f^0_{KN} +5/6 f^1_{KN}$$ These amplitudes are collected into the table. \[IX\] System $I_{tot}$ $I_{nucl}$ $f_{KN}$ -------- --------------- --------------- --------------------------------------- KNN $\frac{3}{2}$ 1 $f_{1}$ KNN $\frac{1}{2}$ 1 $\frac{3}{4}f_{0}+ \frac{1}{4}f_{1}$ KNN $\frac{1}{2}$ 0 $\frac{1}{4}f_{0}+ \frac{3}{4}f_{1}$ KNNN $ 0 $ $\frac{1}{2}$ $\frac{1}{2}f_{0}+ \frac{1}{2}f_{1}$ KNNN $1 $ $\frac{1}{2}$ $\frac{1}{6}f_{0}+ \frac{5}{6}f_{1}$ KNNNN $\frac{1}{2}$ $0$ $\frac{1}{3}f_{0}+ \frac{2}{3}f_{1}$ : Isospin composition of Kaon nucleon scattering amplitudes. $I_{tot}$ = total isospin, $I_{nucl}$ = isospin of nucleons, $f_{i}$ = KN amplitudes of isospin, $i$. Three nucleons, S-wave interactions =================================== The energy eigenvalue is obtained by the simultaneous solution of three equations $$\label{3a} \psi^s_1+ G^{ss}_{1,2}f^s \psi_2 + G^{ss}_{1,3}f ^s \psi^s_3 =0$$ $$\label{3b} \psi^s_2 + G^{ss}_{2,3}f ^s \psi^s_3 + G^{ss}_{2,1}f ^s \psi^s_1 =0$$ $$\label{3c} \psi^s_3+ G^{ss}_{3,1}f ^s\psi^s_1 + G^{ss}_{3,2}f ^s \psi^s_2 =0,$$ which require the eigenvalue condition $$\label{d3s} D_{3s}\equiv 1 - (f ^s)^2[ G^{ss}_{1,2} G^{ss}_{1,2}+ G^{ss}_{1,3}G^{ss}_{1,3}+G^{ss}_{3,2}G^{ss}_{3,2}] +2(f ^s)^3 G^{ss}_{1,2}G^{ss}_{2,3}G^{ss}_{3,1} =0.$$ This equation is to be solved numerically. A helpful guide to find the symmetry of two physically meaningful solutions is the situation of two equal $NN$ separations $ r_{12} = r_{13}$. 
Dropping the upper indices one obtains the factorized form $$\label{3d} D_{3s}= (1 - f~ G_{1,2})( 1+ f~G_{1,2} - 2~f ^2~ G_{1,3}^2).$$ The first factor corresponds to an antisymmetric solution with the meson sticking to two nucleons only. The second factor generates a solution symmetric with the interchange of nucleons 1 and 2. These solutions are a direct continuation of the two solutions obtained in the $KNN$ systems. [^1]: wycech@fuw.edu.pl [^2]: anthony.green@helsinki.fi [^3]: We thank Andrzej Deloff for supplying amplitudes of the DD model
**A direct method of solution for the Fokas-Lenells derivative** **nonlinear Schrödinger equation: II. Dark soliton solutions** Yoshimasa Matsuno[^1] *Division of Applied Mathematical Science,* *Graduate School of Science and Engineering* *Yamaguchi University, Ube, Yamaguchi 755-8611, Japan* In a previous study (Matsuno Y [ *J. Phys. A: Math. Theor.*]{} [**45**]{} (2012) 23202), we have developed a systematic method for obtaining the bright soliton solutions of the Fokas-Lenells derivative nonlinear Schrödinger equation (FL equation shortly) under vanishing boundary condition. In this paper, we apply the method to the FL equation with nonvanishing boundary condition. In particular, we deal with a more sophisticated problem on the dark soliton solutions with a plane wave boundary condition. We first derive the novel system of bilinear equations which is reduced from the FL equation through a dependent variable transformation and then construct the general dark $N$-soliton solution of the system, where $N$ is an arbitrary positive integer. In the process, a trilinear equation derived from the system of bilinear equations plays an important role. As a byproduct, this equation gives the dark $N$-soliton solution of the derivative nonlinear Schrödinger equation on the background of a plane wave. We then investigate the properties of the one-soliton solutions in detail, showing that both the dark and bright solitons appear on the nonzero background which reduce to algebraic solitons in specific limits. Last, we perform the asymptotic analysis of the two- and $N$-soliton solutions for large time and clarify their structure and dynamics. [*PACS:*]{} 05.45.Yv; 42.81.Dp; 02.30.Jr [*Keywords:*]{} derivative nonlinear Schrödinger equation; dark soliton; direct method of solution The Fokas-Lenells derivative nonlinear Schrödinger (NLS) equation (FL equation shortly) is a completely integrable nonlinear partial differential equation (PDE) which has been derived as an integrable generalization of the NLS equation using bi-Hamiltonian methods \[1\]. In the context of nonlinear optics, the FL equation models the propagation of nonlinear light pulses in monomode optical fibers when certain higher-order nonlinear effects are taken into account \[2\]. We employ the following equation which can be derived from its original version by a simple change of variables combined with a gauge transformation \[2\]: $$u_{xt}=u-2{\rm i}|u|^2u_x. \eqno(1.1)$$ Here, $u=u(x,t)$ is a complex-valued function of $x$ and $t$, and subscripts $x$ and $t$ appended to $u$ denote partial differentiations. The complete integrability of the FL equation has been demonstrated by means of the inverse scattering transform (IST) method \[3\]. Especially, a Lax pair and a few conservation laws associated with it have been obtained explicitly using the bi-Hamiltonian structure and the multisoliton solutions have been derived by applying the dressing method \[4\]. Another remarkable feature of the FL equation is that it is the first negative flow of the integrable hierarchy of the derivative NLS equation \[2, 5\]. In a previous study \[6\] which is referred to as I hereafter, the two different expressions of the bright $N$-soliton solution of the FL equation have been obtained by a direct method which does not recourse to the IST and their properties have been explored in detail. Here, we construct the dark $N$-soliton solution of the FL equation on the background of a plane wave. 
Explicitly, we consider the boundary condition $$u \rightarrow \rho\,{\rm exp}\left\{{\rm i}\left(\kappa x-\omega t+\phi^{(\pm)}\right)\right\}, \quad x \rightarrow \pm\infty, \eqno(1.2)$$ where $\rho(>0)$ and $\kappa$ are real constants representing the amplitude and wavenumber, respectively, $\phi^{(\pm)}$ are real phase constants and the angular frequency $\omega=\omega(\kappa)$ obeys the dispersion relation $\omega=1/\kappa+2\rho^2.$ Note that the plane wave given in (1.2) is an exact solution of the FL equation. As will be discussed later, the possible values of $\kappa$ must be restricted to assure the existence of the soliton solutions. A similar problem to that posed in this paper has been studied recently and an explicit formula for the dark $N$-soliton solution have been presented by an ingenious application of the Bäcklund transformation between solutions of the FL equation and the Ablowitz-Ladik hierarchy \[7\]. Nevertheless, the detailed analysis of the soliton solutions has not been undertaken as yet. An exact method of solution employed here which is sometimes called the direct method \[8\] or the bilinear transformation method \[9\] is a powerful tool for analyzing soliton equations and differs from the method used in \[7\]. Once the equation under consideration is transformed to a system of bilinear equations, the standard technique in the bilinear formalism is applied to obtain soliton solutions. A novel feature of the bilinearization of the FL equation is that one of the bilinear equations can be replaced by a [*trilinear*]{} equation, as already demonstrated in I. The same situation happens in the current dark soliton problem. However, the resulting trilinear equation will be used essentially in the process of performing the proof of the dark $N$-soliton solution. This paper is organized as follows. In section 2, we bilinearize the FL equation under the boundary condition (1.2). We then show that one of the resulting bilinear equations can be replaced by a trilinear equation. In section 3, we present the dark $N$-soliton solution of the bilinear equations. It has a simple structure expressed in terms of certain determinants. Subsequently, we perform the proof of the dark $N$-soliton solution using an elementary theory of determinants in which Jacobi’s identity will play a central role. As already noted, the proof of the trilinear equation turns out to be a core in the analysis. In accordance with the relation between the FL equation and the derivative NLS equation at the level of the Lax representation, we also demonstrate that the dark $N$-soliton solution obtained here yields the dark $N$-soliton solution of the derivative NLS equation by replacing simply the time dependence of the solution. As in the case of the defocusing NLS equation subjected to nonvanishing boundary conditions, it is necessary for the existence of dark solitons that the asymptotic state given by (1.2) must be stable. Hence, we perform the linear stability analysis of the plane wave solution (1.2) and provide a criterion for the stability. In section 4, we first investigate the properties of the one-soliton solution in detail. We find that depending on the sign of $\kappa$ and that of the real part of the complex amplitude parameter, the solution can be classified into two types, i.e., the dark and bright solitons. The latter soliton may be termed “anti-dark soliton” since the background field is nonzero. However, we use a term “bright soliton” throughout the paper. 
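As a quick consistency check of the boundary condition (1.2) and the dispersion relation $\omega=1/\kappa+2\rho^2$, the plane wave can be substituted into (1.1) symbolically. The short sketch below (using sympy, purely as an illustration and not as part of the analysis of this paper) verifies that the residual vanishes.

```python
# Symbolic check that the plane wave in (1.2) solves the FL equation (1.1) exactly when
# omega = 1/kappa + 2 rho^2.
import sympy as sp

x, t = sp.symbols('x t', real=True)
rho, kappa = sp.symbols('rho kappa', positive=True)
omega = 1 / kappa + 2 * rho**2

u = rho * sp.exp(sp.I * (kappa * x - omega * t))
residual = sp.diff(u, x, t) - (u - 2 * sp.I * u * sp.conjugate(u) * sp.diff(u, x))
print(sp.simplify(residual))        # -> 0
```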
We demonstrate that regardless the sign of $\kappa$, the bright soliton has a limiting profile of algebraic type (or an algebraic bright soliton) whereas an algebraic dark soliton appears only if $\kappa<0$. We then analyze the asymptotic behavior of the two-soliton solution and derive the explicit formulas for the phase shift in terms of the amplitude parameters of solitons. In particular, we address the interaction between a dark soliton and a bright soliton as well as that of two dark solitons. Last, the similar asymptotic analysis to that of the two-soliton solution is performed for the general dark $N$-soliton solution. Section 5 is devoted to concluding remarks. In this section, we develop a direct method of solution for constructing dark soliton solutions of the FL equation (1.1) under the boundary condition (1.2). In particular, we show that it can be transformed to a system of bilinear equations by introducing the same type of the dependent variable transformation as that employed in I for the bilinearization of the FL equation under vanishing boundary condition. We also demonstrate that this system yields a trilinear equation which will play a crucial role in our analysis. The bilinearization of the FL equation (1.1) is established by the following proposition: [**Proposition 2.1.**]{} [*By means of the dependent variable transformation $$u=\rho\,{\rm e}^{{\rm i}(\kappa x-\omega t)}\,{g\over f}, \eqno(2.1)$$ with $\omega=1/\kappa+2\rho^2$, equation (1.1) can be decoupled into the following system of bilinear equations for the tau functions $f$ and $g$ $$D_tf\cdot f^*-{\rm i}\rho^2(gg^*-ff^*)=0, \eqno(2.2)$$ $$D_xD_tf\cdot f^*-{\rm i}\rho^2D_xg\cdot g^*+{\rm i}\rho^2D_xf\cdot f^*+2\kappa\rho^2(gg^*-ff^*)=0, \eqno(2.3)$$ $$D_xD_tg\cdot f+{\rm i}\kappa D_tg\cdot f-{\rm i}\omega D_xg\cdot f=0. \eqno(2.4)$$ Here, $f=f(x, t)$ and $g=g(x, t)$ are complex-valued functions of $x$ and $t$, and the asterisk appended to $f$ and $g$ denotes complex conjugate and the bilinear operators $D_x$ and $D_t$ are defined by $$D_x^mD_t^nf\cdot g=\left({\partial\over\partial x}-{\partial\over\partial x^\prime}\right)^m \left({\partial\over\partial t}-{\partial\over\partial t^\prime}\right)^n f(x, t)g(x^\prime,t^\prime)\Big|_{ x^\prime=x,\,t^\prime=t}, \eqno(2.5)$$ where $m$ and $n$ are nonnegative integers.*]{} [**Proof.**]{} Substituting (2.1) into (1.1) and rewriting the resultant equation in terms of the bilinear operators, equation (1.1) can be rewritten as $${1\over f^2}(D_xD_tg\cdot f+{\rm i}\kappa D_tg\cdot f-{\rm i}\omega D_xg\cdot f)$$ $$-{g\over f^3f^*}\bigl\{f^*D_xD_tf\cdot f-2\kappa\rho^2f^2f^*-2{\rm i}\rho^2g^*(g_xf-gf_x+{\rm i}\kappa fg)\bigr\}=0. \eqno(2.6)$$ Inserting the identity $$f^*D_xD_tf\cdot f=fD_xD_tf\cdot f^*-2f_xD_tf\cdot f^*+f(D_tf\cdot f^*)_x, \eqno(2.7)$$ which can be verified by direct calculation, into the second term on the left-hand side of (2.6), one modifies it in the form $${1\over f^2}(D_xD_tg\cdot f+{\rm i}\kappa D_tg\cdot f-{\rm i}\omega D_xg\cdot f)$$ $$-{g\over f^3f^*}\Bigl[f\bigl\{D_xD_tf\cdot f^*-{\rm i}\rho^2D_xg\cdot g^*+{\rm i}\rho^2D_xf\cdot f^*+2\kappa\rho^2(gg^*-ff^*)\bigr\}$$ $$-2f_x\bigl\{D_tf\cdot f^*-{\rm i}\rho^2(gg^*-ff^*)\bigr\}+f\bigl\{D_tf\cdot f^*-{\rm i}\rho^2(gg^*-ff^*)\bigr\}_x\Bigr]=0. \eqno(2.8)$$ By virtue of equations (2.2)-(2.4), the left-hand side of (2.8) vanishes identically. $\Box$ It follows from (2.1) and (2.2) that $$|u|^2=\rho^2+{\rm i}\,{\partial \over \partial t}\,{\rm ln}\,{f^*\over f}. 
\eqno(2.9)$$ The above formula gives the modulus of $u$ in terms of the tau function $f$. [**Proposition 2.2.**]{} [*The [*trilinear*]{} equation for $f$ and $g$ $$f^*\left\{g_{xt}f-(f_x-{\rm i}\kappa f)g_t-{\rm i}\left({1\over \kappa}+\rho^2\right)(g_xf-gf_x)\right\} =f_t^*(g_xf-gf_x+{\rm i}\kappa fg), \eqno(2.10)$$ is a consequence of the bilinear equations (2.2)-(2.4).*]{} [**Proof.**]{} By direct calculation, one can show the following trilinear identity among the tau functions $f$ and $g$: $$f^*\left\{g_{xt}f-(f_x-{\rm i}\kappa f)g_t-{\rm i}\left({1\over \kappa}+\rho^2\right)(g_xf-gf_x)\right\} -f_t^*(g_xf-gf_x+{\rm i}\kappa fg)$$ $$=f^*(D_xD_tg\cdot f+{\rm i}\kappa D_tg\cdot f-{\rm i}\omega D_xg\cdot f)$$ $$-{g\over 2}\Bigl[\bigl\{D_tf\cdot f^*-{\rm i}\rho^2(gg^*-ff^*)\bigr\}_x+(D_xD_tf\cdot f^*-{\rm i}\rho^2D_xg\cdot g^*+{\rm i}\rho^2D_xf\cdot f^*-2{\rm i}\kappa D_tf\cdot f^*)\Bigr]$$ $$+g_x\bigl\{D_tf\cdot f^*-{\rm i}\rho^2(gg^*-ff^*)\bigr\}. \eqno(2.11)$$ Replacing a term $2{\rm i}\kappa D_tf\cdot f^*$ on the right-hand side of (2.11) by (2.2), the right-hand side becomes zero by (2.2)-(2.4). This yields (2.10). $\Box$ In view of proposition 2.2, the proof of the dark $N$-soliton solution is completed if one can prove any three equations among the three bilinear equations (2.2)-(2.4) and a trilinear (2.10). We will see later in section 3 that the proof of (2.4) is not easy to perform and hence we prove (2.10) instead. In this section, we show that the tau functions $f$ and $g$ representing the dark $N$-soliton solution admit the compact determinantal expressions. This statement is proved by an elementary calculation using the basic formulas for determinants. We first prove that the proposed dark $N$-soliton solution solves the bilinear equations (2.2) and (2.3) and then the trilinear equation (2.10) in place of (2.4). The implication of the equation (2.10) will be discussed in conjunction with the dark $N$-soliton solution of the derivative NLS equation. Last, we perform the linear stability analysis of the plane wave solution (1.2) and provide a criterion for the stability. [*3.1. Dark $N$-soliton solution*]{} The main result in this paper is given by the following theorem: [**Theorem 3.1.**]{} [*The dark $N$-soliton solution of the system of bilinear equations (2.2)-(2.4) is expressed by the following determinants $$f=|D|, \eqno(3.1a)$$ $$g=\begin{vmatrix} D & {\bf z}^T\\ {1\over \rho^2}{\bf z}_t^* & 1\end{vmatrix}=|D|+{1\over \rho^2}\begin{vmatrix} D & {\bf z}^T\\ {\bf z}_t^* & 0\end{vmatrix}. \eqno(3.1b)$$ Here, $D$ is an $N\times N$ matrix and ${\bf z}$ and ${\bf z}_t$ are $N$-component row vectors defined below and the symbol $T$ denotes the transpose: $$D=(d_{jk})_{1\leq j,k\leq N}, \quad d_{jk}=\delta_{jk}+{\kappa-{\rm i}p_j\over p_j+p_k^*}\,z_jz_k^*, \quad z_j={\rm exp}\left(p_jx+{\kappa\rho^2\over p_j}t+{1\over p_j+{\rm i}\kappa}\,\tau+\zeta_{j0}\right), \eqno(3.2a)$$ $${\bf z}=(z_1, z_2, ..., z_N), \quad {\bf z}_t=\left({\kappa\rho^2z_1\over p_1}, {\kappa\rho^2z_2\over p_2}, ..., {\kappa\rho^2z_N\over p_N}\right), \eqno(3.2b)$$ where $p_j$ are complex parameters satisfying the constraints $$(p_j+{\rm i}\kappa)(p_j^*-{\rm i}\kappa)={1+\kappa\rho^2\over \kappa\rho^2}p_jp_j^*,\quad j=1, 2, ..., N, \eqno(3.2c)$$ $\zeta_{j0}\ (j=1, 2, ..., N)$ are arbitrary complex parameters, $\delta_{jk}$ is kronecker’s delta and $\tau$ is an auxiliary variable.*]{} The dark $N$-soliton solution is parameterized by $2N$ complex parameters $p_j$ and $\zeta_{j0}\ (j=1, 2, ..., N)$. 
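As a simple illustration of the determinantal structure in theorem 3.1, consider the case $N=1$. Expanding the bordered determinant in (3.1b) with respect to its last row and using $z_{1,t}^*=(\kappa\rho^2/p_1^*)z_1^*$, one finds $$f=1+{\kappa-{\rm i}p_1\over p_1+p_1^*}\,z_1z_1^*, \qquad g=1+{\kappa-{\rm i}p_1\over p_1+p_1^*}\,z_1z_1^*-{\kappa\over p_1^*}\,z_1z_1^* =1-{\kappa+{\rm i}p_1^*\over p_1+p_1^*}\,{p_1\over p_1^*}\,z_1z_1^*,$$ which are precisely the one-soliton tau functions analyzed in section 4 (see (4.9)).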
The parameters $p_j$ determine the amplitude and velocity of the solitons whereas the parameters $\zeta_{j0}$ determine the phase of the solitons. As opposed to the bright soliton case explored in I, however, the real and imaginary parts of $p_j$ are not independent because of the constraints (3.2c). Actually, it may be parameterized either by the velocity of the $j$th soliton or by a single angular variable, as will see in section 4. An auxiliary variable $\tau$ introduced in (3.2a) will be used conveniently in performing the proof of (2.10). It can be set to zero after all the calculations have been completed. [**Remark 3.1.**]{} The tau function $g$ given by (3.1b) is represented by the determinant of an $(N+1)\times (N+1)$ matrix. It can be rewritten by the determinant of an $N\times N$ matrix. To show this, we multiply the $(N+1)$th column of $g$ by $z_{k,t}^*/\rho^2$ and subtract it from the $k$th column for $k=1, 2, ..., N$ to obtain $$g=\left|\left(\delta_{jk}-{\kappa+{\rm i}p_k^*\over p_j+p_k^*}\,{p_j\over p_k^*}\,z_jz_k^*\right)_{1\leq j,k\leq N}\right|. \eqno(3.3)$$ Although the tau function from (3.1b) is used in the proof of the dark $N$-soliton solution, an equivalent form (3.3) will be employed in section 4 to analyze the structure of the solution. [**Remark 3.2.**]{} The complex parameters $p_j$ subjected to the constraints (3.2c) exist only if the condition $\kappa(1+\kappa\rho^2)>0$ is satisfied, as confirmed easily by putting $p_j=a_j+{\rm i}b_j$ with real $a_j$ and $b_j$. We will show in section 3.6 that this condition is closely related to the stability of the plane wave solution of the FL equation. [*3.2. Notation and basic formulas for determinants*]{} Before entering into the proof of the dark $N$-soliton solution, we first define the matrices associated with the dark $N$-soliton solution and then provide some basic formulas for determinants. Although these formulas have been used extensively in I, we reproduce them for convenience. The following bordered matrices appear frequently in our analysis: $$D({\bf a}; {\bf b})=\begin{pmatrix} D & {\bf b}^T\\ {\bf a} & 0\end{pmatrix},\quad D({\bf a},{\bf b};{\bf c},{\bf d})=\begin{pmatrix} D &{\bf c}^T & {\bf d}^T \\ {\bf a} & 0 &0\\ {\bf b} & 0& 0 \end{pmatrix}, \eqno(3.4)$$ where ${\bf a}, {\bf b}, {\bf c}$ and [**d**]{} are $N$ component row vectors. Let $D_{jk}$ be the cofactor of the element $d_{jk}$. The following formulas are well known in the theory of determinants \[10\]: $${\partial\over\partial x}|D|=\sum_{j,k=1}^N{\partial d_{jk}\over\partial x}D_{jk}, \eqno(3.5)$$ $$\begin{vmatrix} D & {\bf a}^T\\ {\bf b} & z\end{vmatrix}=|D|z-\sum_{j,k=1}^ND_{jk}a_jb_k, \eqno(3.6)$$ $$|D({\bf a}, {\bf b}; {\bf c}, {\bf d})||D|= |D({\bf a}; {\bf c})||D({\bf b}; {\bf d})|-|D({\bf a}; {\bf d})||D({\bf b}; {\bf c})|. \eqno(3.7)$$ The formula (3.5) is the differentiation rule of the determinant and (3.6) is the expansion formula for a bordered determinant with respect to the last row and last column. The formula (3.7) is Jacobi’s identity. The proof of lemmas described below is based on the above three formulas as well as a few fundamental properties of determinants. [*3.3. Differentiation rules and related formulas*]{} In terms of the notation (3.4), the tau functions $f$ and $g$ can be written as $$f=|D|, \eqno(3.8a)$$ $$\quad g=|D|+{1\over\rho^2}|D({\bf z}_t^*;{\bf z})|. 
\eqno(3.8b)$$ The differentiation rules of the tau functions with respect to $t$ and $x$ are given by the following formulas: [**Lemma 3.1.**]{} $$f_t={\rm i}|D({\bf z}_t^*;{\bf z})| - {1\over \rho^2}|D({\bf z}_t^*;{\bf z}_t)|, \eqno(3.9)$$ $$f_x=-\kappa |D({\bf z}^*;{\bf z})|+{\rm i}|D({\bf z}^*;{\bf z}_x)|, \eqno(3.10)$$ $$f_{xt}={\rm i}\kappa\rho^2|D({\bf z}^*;{\bf z})|-\kappa|D({\bf z}_t^*;{\bf z})|-\kappa|D({\bf z}^*;{\bf z}_t)| +{\rm i}|D({\bf z}_t^*;{\bf z}_x)|$$ $$-|D({\bf z}^*,{\bf z}_t^*;{\bf z}_x,{\bf z})|+{\kappa\over \rho^2}|D({\bf z}^*,{\bf z}_t^*;{\bf z},{\bf z}_t)| -{\rm i\over \rho^2}|D({\bf z}^*,{\bf z}_t^*;{\bf z}_x,{\bf z}_t)|, \eqno(3.11)$$ $$g_t={\rm i}|D({\bf z}_t^*;{\bf z})|+{1\over \rho^2}|D({\bf z}_{tt}^*;{\bf z})|, \eqno(3.12)$$ $$g_x={\rm i}|D({\bf z}_t^*;{\bf z})|+{1\over \rho^2}|D({\bf z}_{t}^*;{\bf z}_x)| +{\rm i\over \rho^2}|D({\bf z}_t^*,{\bf z}^*;{\bf z},{\bf z}_x)|. \eqno(3.13)$$ [**Proof.**]{} We prove (3.9). Applying formula (3.5) to $f$ given by (3.1) with (3.2a), one obtains $$\begin{aligned} f_t &=\kappa\rho^2\sum_{j,k=1}^ND_{jk}{\kappa-{\rm i}p_j\over p_jp_k^*}z_jz_k^* \notag \\ &=-{\rm i}\sum_{j,k=1}^ND_{jk}z_{j}z_{k,t}^* +{1\over \rho^2}\sum_{j,k=1}^ND_{jk}z_{j,t}z_{k,t}^*, \notag\end{aligned}$$ where in passing to the second line, use has been made of the relation $z_{j,t}=(\kappa\rho^2/p_j)z_j$. Referring to formula (3.6) with $z=0$ and taking into account the notation (3.4), the above expression is equal to the right-hand side of (3.9). Formulas (3.10)-(3.13) can be proved in the same way if one uses (3.5), (3.6) and the relation ${\bf z}_{xt}=\kappa\rho^2{\bf z}$ as well as some basic properties of determinants. $\Box$ The complex conjugate expressions of the tau functions $f$ and $g$ and their derivatives are expressed as follows: [**Lemma 3.2.**]{} $$f^*=|D|-{\rm i}|D({\bf z}^*;{\bf z})|, \eqno(3.14)$$ $$f_t^*=-{\rm i}|D({\bf z}^*;{\bf z}_t)|- {1\over \rho^2}|D({\bf z}_t^*;{\bf z}_t)|+{\rm i\over \rho^2}|D({\bf z}_t^*,{\bf z}^*;{\bf z}_t,{\bf z})|, \eqno(3.15)$$ $$g^*=|D|-{\rm i}|D({\bf z}^*;{\bf z})|+{1\over \rho^2}|D({\bf z}^*;{\bf z}_t)|. \eqno(3.16)$$ [**Proof.**]{} It follows from (3.2a) that $d_{jk}^*=d_{kj}+{\rm i}z_j^*z_k$ or in the matrix form, $D^*= D^T+{\rm i}(z_jz_k^*)_{1\leq j,k\leq N}^T$. Since $|D^T|=|D|$, one has $$f^*=|D+{\rm i}(z_jz_k^*)_{1\leq j,k\leq N}|=\begin{vmatrix} D & {\bf z}^T\\ -{\rm i}{\bf z}^* & 1\end{vmatrix}.$$ Applying formula (3.6) to the right-hand side, formula (3.14) follows immediately. Formulas (3.15) and (3.16) can be derived in the same way. $\Box$ [*3.4. Proof of the dark $N$-soliton solution*]{} [*3.4.1. Proof of (2.2)*]{} Let $$P_1=D_tf\cdot f^*-{\rm i}\rho^2(gg^*-ff^*). \eqno(3.17)$$ Substituting (3.8), (3.9) and (3.14)-(3.16) into (3.17), most terms are canceled, leaving the following three terms $$P_1={{\rm i}\over \rho^2}\Bigl\{-|D||D({\bf z}_t^*,{\bf z}^*;{\bf z}_t,{\bf z})|+|D({\bf z}^*;{\bf z})||D({\bf z}_t^*;{\bf z}_t)| -|D({\bf z}_t^*;{\bf z})||D({\bf z}^*;{\bf z}_t)|\Bigr\}.$$ This expression becomes zero by Jacobi’s identity. $\Box$ [*3.4.2. Proof of (2.3)*]{} Instead of proving (2.3) directly, we differentiate (2.2) by $x$ and add the resultant expression to (2.3) and then prove the equation $P_2=0$, where $$P_2=f_{xt}f^*-f_xf_t^*-{\rm i}\rho^2(g_xg^*-f_xf^*)+\kappa\rho^2(gg^*-ff^*). 
\eqno(3.18)$$ Substituting (3.8)-(3.11), (3.13) and (3.14)-(3.16) into (3.18) and rearranging, $P_2$ reduces to $$P_2={{\kappa}\over \rho^2}\Bigl\{|D||D({\bf z}^*,{\bf z}_t^*;{\bf z},{\bf z}_t)|-|D({\bf z}^*;{\bf z})||D({\bf z}_t^*;{\bf z}_t)| +|D({\bf z}^*;{\bf z}_t)||D({\bf z}_t^*;{\bf z})|\Bigr\}$$ $$+{{\rm i}\over \rho^2}\Bigl\{-|D||D({\bf z}^*,{\bf z}_t^*;{\bf z}_x,{\bf z}_t)|+|D({\bf z}^*;{\bf z}_x)||D({\bf z}_t^*;{\bf z}_t)| -|D({\bf z}^*;{\bf z}_t)||D({\bf z}_t^*;{\bf z}_x)|\Bigr\}$$ $$\!+{1\over \rho^2}\Bigl\{\!-|D({\bf z}^*;{\bf z})||D({\bf z}^*,{\bf z}_t^*;{\bf z}_x,{\bf z}_t)|+|D({\bf z}^*;{\bf z}_x)||D({\bf z}_t^*,{\bf z}^*;{\bf z}_t,{\bf z})| -|D({\bf z}^*;{\bf z}_t)||D({\bf z}^*,{\bf z}_t^*;{\bf z},{\bf z}_x)|\Bigr\}. \eqno(3.19)$$ The first and second terms on the right-hand side of (3.19) vanish by virtue of Jacobi’s identity. To show that the third term becomes zero as well, we consider the determinantal identity $$\begin{vmatrix}|D({\bf z}^*;{\bf z})| & |D({\bf z}^*;{\bf z})|& |D({\bf z}_t^*;{\bf z})|\\ |D({\bf z}^*;{\bf z}_x)| & |D({\bf z}^*;{\bf z}_x)|& |D({\bf z}_t^*;{\bf z}_x)| \\ |D({\bf z}^*;{\bf z}_t)| & |D({\bf z}^*;{\bf z}_t)|& |D({\bf z}_t^*;{\bf z}_t)| \end{vmatrix}=0.$$ It is obvious that this determinant is zero since the first two columns coincide. The above assertion follows immediately by expanding the determinant with respect to the first column and using Jacobi’s identity. Consequently, $P_2=0$. $\Box$ Before proceeding to the proof of (2.10), we emphasize that the constraints (3.2c) have not been used in the proof of (2.2) and (2.3). On the other hand, we find that the proof of (2.4) depends crucially on the constraints. This is an obstacle that was not encountered in the proof of the bright $N$-soliton solution (see I). In conclusion, a direct proof of (2.4) still remains open, and hence we shall prove the trilinear equation (2.10) instead. It turns out, however, that even this proof is not straightforward as it stands. As we shall now demonstrate, the introduction of an auxiliary variable $\tau$ in the exponential function (3.2a) resolves this difficulty. [*3.4.3. Proof of (2.10)*]{} We first prepare two lemmas to prove (2.10). Lemma 3.3 below gives a very simple relation between the partial derivatives $f_t$ and $f_\tau$. It is to be noted that the constraints (3.2c) are used only for the proof of this lemma. [**Lemma 3.3.**]{} $$f_t=(1+\kappa\rho^2)f_\tau, \eqno(3.20a)$$ $$g_t=(1+\kappa\rho^2)g_\tau. \eqno(3.20b)$$ [**Proof.**]{} Extracting the factor $z_j$ from the $j$th row and the factor $z_k^*$ from the $k$th column of the determinant $|D|$, respectively for $j, k=1, 2, ...,N$, one can rewrite the tau function $f$ in the form $$f=\prod_{j=1}^N{\rm e}^{\zeta_j}\left|\left({\rm e}^{-\zeta_j}\delta_{jk}+{\kappa-{\rm i}p_j\over p_j+p_k^*}\right)_{1\leq j,k\leq N}\right|,$$ where $$\zeta_j=(p_j+p_j^*)x+\kappa\rho^2\left({1\over p_j}+{1\over p_j^*}\right)t+{p_j+p_j^*\over (p_j+{\rm i}\kappa)(p_j^*-{\rm i}\kappa)}\,\tau+\zeta_{j0}+\zeta_{j0}^*,$$ showing that $f$ can be regarded as a function of $\zeta_j\ (j=1, 2, ..., N)$.
Thus, differentiation of $f$ with respect to $t$ gives $$f_t=\sum_{j=1}^N{\partial f\over\partial\zeta_j}{\partial\zeta_j\over\partial t}=\kappa\rho^2\sum_{j=1}^N\left({1\over p_j}+{1\over p_j^*}\right){\partial f\over\partial\zeta_j}.$$ Similarly, one has $$f_\tau=\sum_{j=1}^N{p_j+p_j^*\over (p_j+{\rm i}\kappa)(p_j^*-{\rm i}\kappa)}{\partial f\over\partial\zeta_j}.$$ The constraints (3.2c) are introduced into the above expression to give $$f_\tau={\kappa\rho^2\over 1+\kappa\rho^2}\sum_{j=1}^N\left({1\over p_j}+{1\over p_j^*}\right){\partial f\over\partial\zeta_j}={1\over 1+\kappa\rho^2\,}f_t.$$ This completes the proof of (3.20a). Repeating a similar procedure, one can show that the relation (3.20b) holds as well. $\Box$ Lemma 3.4 gives the differentiation rules of $f$ and $g$ with respect to $\tau$: [**Lemma 3.4.**]{} $$f_\tau={\rm i}|D({\bf z}_\tau^*;{\bf z})|, \eqno(3.21)$$ $$f_{\tau}^*=-{\rm i}|D({\bf z}^*;{\bf z}_\tau)|, \eqno(3.22)$$ $$g_\tau={{\rm i}\over \kappa\rho^2}|D({\bf z}_t^*;{\bf z})|+{1\over \rho^2}|D({\bf z}_t^*;{\bf z}_\tau)|, \eqno(3.23)$$ $$g_{x\tau}={\rm i}|D({\bf z}^*;{\bf z})|+{1\over \rho^2}|D({\bf z}_t^*;{\bf z})|+\kappa |D({\bf z}^*;{\bf z}_\tau)|+{{\rm i}\over \kappa\rho^2}|D({\bf z}_t^*;{\bf z}_x)| -{{\rm i \kappa}\over \rho^2}|D({\bf z}_t^*;{\bf z}_\tau)|$$ $$-{1\over \kappa\rho^2}|D({\bf z}_t^*,{\bf z}^*;{\bf z},{\bf z}_x)|-{\kappa\over \rho^2}|D({\bf z}_t^*,{\bf z}^*;{\bf z}_\tau,{\bf z})| +{{\rm i}\over \rho^2}|D({\bf z}_t^*,{\bf z}^*;{\bf z}_\tau,{\bf z}_x)|. \eqno(3.24)$$ [**Proof.**]{} If one notes the relations $${\bf z}_{t\tau}=-{{\rm i}\over \kappa}{\bf z}_t+{\rm i}\rho^2{\bf z}_{\tau},\qquad {\bf z}_{x\tau}={\bf z}-{\rm i}\kappa{\bf z}_{\tau},$$ which follow from the definition (3.2a) of $z_j$, the proof can be carried out straightforwardly following the same procedure as that used in the proofs of lemmas 3.1 and 3.2. $\Box$ With lemmas 3.3 and 3.4 at hand, we are now ready to start the proof of (2.10). [**Proof of (2.10).**]{} If one replaces the $t$ derivative by the $\tau$ derivative in accordance with (3.20), the trilinear equation (2.10) can be rewritten in the form $$f^*P_3=f_\tau^*P_3^\prime, \eqno(3.25a)$$ with the bilinear forms $P_3$ and $P_3^\prime$ defined respectively by $$P_3=g_{x\tau}f-(f_x-{\rm i}\kappa f)g_\tau-{\rm i\over \kappa}(g_xf-gf_x), \eqno(3.25b)$$ $$P_3^\prime=g_xf-gf_x+{\rm i}\kappa fg. \eqno(3.25c)$$ The trilinear equation (3.25) is proved as follows. Substituting (3.8), (3.10), (3.13), (3.23) and (3.24) into (3.25b) and applying Jacobi’s identity to terms multiplied by $|D|$, $P_3$ is simplified considerably. After some elementary calculations, one finds that $$P_3=\kappa |D({\bf z}^*;{\bf z}_\tau)|\Bigl\{|D|+{1\over\rho^2}|D({\bf z}_t^*;{\bf z})|-{{\rm i}\over \kappa\rho^2}|D({\bf z}_t^*;{\bf z}_x)|\Bigr\}. \eqno(3.26a)$$ Performing a similar calculation for $P_3^\prime$, one obtains $$P_3^\prime={\rm i}\kappa\Bigl\{|D|-{\rm i}|D({\bf z}^*;{\bf z})|\Bigr\}\Bigl\{|D|+{1\over\rho^2}|D({\bf z}_t^*;{\bf z})|-{{\rm i}\over \kappa\rho^2}|D({\bf z}_t^*;{\bf z}_x)|\Bigr\}. \eqno(3.26b)$$ Taking into account the formulas (3.14) and (3.22), the expressions (3.26a) and (3.26b) yield (3.25). The trilinear equation (3.25) coupled with lemma 3.3 now completes the proof of the trilinear equation (2.10). $\Box$ [*3.5.
Dark $N$-soliton solution of the derivative NLS equation*]{} In accordance with the fact that the FL equation is the first negative flow of the Lax hierarchy of the derivative NLS equation, the spatial part of the Lax pair associated with the former equation coincides with that of the latter equation with the identification $q=u_x$ \[2, 5\]. This observation enables us to obtain the dark $N$-soliton solution of the derivative NLS equation $${\rm i}q_t+q_{xx}+2{\rm i}(|q|^2q)_x=0,\quad q=q(x,t), \eqno(3.27)$$ under the boundary condition $$q\rightarrow \rho\,{\rm exp}\left\{{\rm i}\left(\kappa x-\omega^\prime t+\psi^{(\pm)}\right)\right\}, \quad x\rightarrow \pm\infty, \eqno(3.28)$$ where $\omega^\prime=\kappa^2+2\kappa\rho^2$ and $\psi^{(\pm)}$ are real phase constants. In particular, we establish the following proposition: [**Proposition 3.1.**]{} [*The dark $N$-soliton solution of the derivative NLS equation (3.27) subjected to the boundary condition (3.28) is given in terms of the tau functions $f$ and $h$ by $$q=\rho\,{\rm e}^{{\rm i}(\kappa x-\omega^\prime t)}\,{hf^*\over f^2}, \eqno(3.29a)$$ with $$f=|D|, \quad h=|H|. \eqno(3.29b)$$ Here, $D$ and $H$ are $N\times N$ matrices defined respectively by $$D=(d_{jk})_{1\leq j,k\leq N}, \quad d_{jk}=\delta_{jk}+{\kappa-{\rm i}p_j\over p_j+p_k^*}\,z_jz_k^*, \quad z_j={\rm exp}\left[p_jx\!+\!\{{\rm i}p_j^2-2(\kappa+\rho^2)p_j\}t+\zeta_{j0}\right], \eqno(3.30a)$$ $$H=(h_{jk})_{1\leq j,k\leq N}, \quad h_{jk}=\delta_{jk}-{\kappa-{\rm i}p_j\over p_j+p_k^*}\,{p_j\over p_k^*}\,z_jz_k^*, \eqno(3.30b)$$ where $p_j$ are complex parameters satisfying the constraints $$p_jp_j^*=\rho^2\{\kappa-{\rm i}(p_j-p_j^*)\},\quad j=1, 2, ..., N, \eqno(3.30c)$$ and $\zeta_{j0}\ (j=1, 2, ..., N)$ are arbitrary complex parameters.*]{} [**Proof.**]{} The correspondence between $q$ and $u_x$ mentioned above implies that the relation $$q=u_x={\partial\over\partial x}\left(\rho\,{\rm e}^{{\rm i}\kappa x}\,{g\over f}\right)=\rho\,{\rm e}^{{\rm i}\kappa x}{1\over f^2}(g_xf-gf_x+{\rm i}\kappa fg),$$ holds at $t=0$. On the other hand, the expression in the parentheses on the right-hand side is just $P_3^\prime$ defined by (3.25c) and hence it is equal to (3.26b). This fact and (3.14) lead, after applying the formula (3.6), to $$\begin{aligned} g_xf-gf_x+{\rm i}\kappa fg &={\rm i}\kappa\Bigl\{|D|-{\rm i}|D({\bf z}^*;{\bf z})|\Bigr\}\Bigl\{|D|+{1\over\rho^2}|D({\bf z}_t^*;{\bf z})|-{{\rm i}\over \kappa\rho^2}|D({\bf z}_t^*;{\bf z}_x)|\Bigr\} \notag \\ &={\rm i}\kappa f^*\begin{vmatrix} D & \left(z_j-{{\rm i}p_j\over\kappa}z_j\right)_{1\leq j\leq N}^T \\ \left({\kappa\over p_k^*}z_k^*\right)_{1\leq k\leq N} & 1\end{vmatrix}. \notag\end{aligned}$$ Multiplying the $(N+1)$th column of the determinant by $\kappa z_k^*/p_k^*$ and subtracting it from the $k$th column for $k=1, 2, ..., N$, one finds that the above expression becomes ${\rm i}\kappa f^*h$. Consequently, $$q={\rm i}\kappa \rho\,{\rm e}^{{\rm i}\kappa x}\,{f^*h\over f^2}\Bigg|_{t=0}.$$ If one replaces $q$ by ${\rm i}q$ and $\rho$ by $\rho/\kappa$, respectively, and introduces the time dependence appropriately, one arrives at (3.29) with (3.30). The constraints (3.30c) follow from (3.2c) by the above replacement of $\rho$. The complex parameters $p_j$ subjected to the constraints (3.30c) exist only if the condition $\kappa+\rho^2>0$ is satisfied. $\Box$ It is instructive to perform the bilinearization of the derivative NLS equation under the boundary condition (3.28).
This provides an alternative way to construct the dark $N$-soliton solution given by proposition 3.1, as we shall see now. To this end, following the procedure used in \[11, 12\], we introduce the gauge transformation $$q=v\,{\rm exp}\left[{\rm i}\int_{-\infty}^x(\rho^2-|v|^2)dx\right], \eqno(3.31a)$$ as well as the dependent variable transformation for $v$ $$v=\rho\,{\rm e}^{{\rm i}(\kappa x-\omega^\prime t)}\,{h\over f}. \eqno(3.31b)$$ Then, equation (3.27) can be decoupled into the system of bilinear equations for $f$ and $h$ $$D_xf\cdot f^*-{\rm i}\rho^2(hh^*-ff^*)=0, \eqno(3.32)$$ $$D_x^2f\cdot f^*-{\rm i}\rho^2D_xh\cdot h^* +\rho^2(2\kappa+\rho^2)(hh^*-ff^*)=0, \eqno(3.33)$$ $${\rm i} D_th\cdot f+2{\rm i}(\kappa+\rho^2)D_xh\cdot f+D_x^2h\cdot f=0. \eqno(3.34)$$ In view of (3.32), the modulus of $v$ is given in terms of the tau function $f$ by $$|v|^2=\rho^2+{\rm i}\,{\partial \over \partial x}\,{\rm ln}\,{f^*\over f}, \eqno(3.35)$$ which, combined with (3.31), yields the formula (3.29). Note from (3.31a) that $|q|^2=|v|^2$. It may be checked by direct computation that the tau functions $f$ and $h$ from (3.29b) with (3.30) satisfy the above bilinear equations. It is important to realize that we can take the limit $\kappa\rightarrow 0$ for the solution (3.29) since the dispersion relation is not singular at $\kappa=0$. This gives the $N$-soliton solution of the derivative NLS equation on a constant background, which has been studied extensively using various exact methods of solution such as the IST \[13-16\], the Bäcklund transformation \[17, 18\] and Hirota’s direct method \[19\]. On the other hand, for the dark $N$-soliton solution given by (2.1), this limiting procedure is not relevant because of the singular nature of the dispersion relation. Last, we shall briefly describe the properties of the one-soliton solution for the purpose of comparison with those of the one-soliton solution of the FL equation. Introducing the new real parameters $a_1$ and $b_1$ by $p_1=a_1+{\rm i}b_1$, the square of the modulus of the one-soliton solution from (3.29) and (3.30) with $N=1$ can be written in the form $$|q_1|^2=\rho^2-{2a_1^2\,{\rm sgn}\, a_1\over \sqrt{a_1^2+(\kappa+b_1)^2}}\,{1\over \cosh\,2(\theta_1+\delta_1)+{(\kappa+b_1){\rm sgn}\, a_1\over \sqrt{a_1^2+(\kappa+b_1)^2}}}, \eqno(3.36a)$$ with $$\theta_1=a_1(x+c_1t)+\theta_{10},\qquad c_1=2(b_1+\kappa+\rho^2),\qquad {\rm e}^{4\delta_1}={a_1^2+(\kappa+b_1)^2\over 4a_1^2}, \eqno(3.36b)$$ where ${\rm sgn}\, a_1$ denotes the sign of $a_1$, i.e., ${\rm sgn}\, a_1=1$ for $a_1>0$ and ${\rm sgn}\, a_1=-1$ for $a_1<0$, and $\theta_{10}$ is a real constant. The constraint (3.30c) then becomes $$a_1^2+b_1^2=\rho^2(2b_1+\kappa). \eqno(3.37)$$ Using (3.36b) and (3.37), the parameters $a_1$ and $b_1$ are expressed in terms of the velocity $c_1$ of the soliton as $$a_1^2={1\over 4}\left(c_{\rm max}-c_1\right)\left(c_1-c_{\rm min}\right),\qquad b_1={c_1\over 2}-\kappa-\rho^2, \qquad c_{\rm min}<c_1<c_{\rm max}, \eqno(3.38a)$$ where $$c_{\rm max}=2(\kappa+2\rho^2)+2\rho\sqrt{\kappa+\rho^2},\qquad c_{\rm min}=2(\kappa+2\rho^2)-2\rho\sqrt{\kappa+\rho^2}. \eqno(3.38b)$$ One must impose the condition $\kappa+\rho^2> 0$ to assure the existence of soliton solutions. Recall that this condition coincides with a criterion for the stability of the plane wave (3.28) \[20\]. We see from (3.36) that if $a_1>0$, then $|q_1|$ takes the form of a dark soliton whereas if $a_1<0$, it becomes a bright soliton on a constant background of amplitude $\rho$.
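It is worth noting from (3.38a) that $a_1$ tends to zero as $c_1$ approaches either endpoint of the allowed interval, $$a_1^2={1\over 4}\left(c_{\rm max}-c_1\right)\left(c_1-c_{\rm min}\right)\rightarrow 0 \quad {\rm as}\quad c_1\rightarrow c_{\rm max}\ \ {\rm or}\ \ c_1\rightarrow c_{\rm min},$$ so that the soliton width, which is of the order of $|a_1|^{-1}$ by (3.36b), diverges in these limits. It is precisely in such infinite-width limits that solitons of algebraic type can emerge, as described in the following paragraph.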
Let $A_d$ and $A_b$ be the amplitudes of the dark and bright solitons, respectively, with respect to the background. The amplitude-velocity relations follow from (3.36) and (3.38). They read $$A_d=\rho-\left|\sqrt{c_1-\kappa-2\rho^2}-\sqrt{\kappa+\rho^2}\right|, \eqno(3.39a)$$ $$A_b=\sqrt{c_1-\kappa-2\rho^2}+\sqrt{\kappa+\rho^2}-\rho. \eqno(3.39b)$$ The detailed analysis for the case $\kappa>0$ has been undertaken in \[21\]. To sum up, the solution has been shown to exhibit the spiky modulation of the amplitude and phase. It has also been demonstrated that the bright soliton reduces to an algebraic soliton in both limits $c_1\rightarrow c_{\rm max}$ and $c_1\rightarrow c_{\rm min}$, whereas the algebraic dark soliton never exists. In the case $\kappa<0$, which has not been treated in \[21\], however, a careful inspection of (3.36) and (3.38) reveals that the algebraic bright and dark solitons are produced in the limits $c_1\rightarrow c_{\rm max}$ and $c_1\rightarrow c_{\rm min}$, respectively. The latter new feature is pointed out here for the first time. [**Remark 3.3.**]{} Using the result obtained in proposition 3.1, we can construct the dark $N$-soliton solution of the modified NLS equation $${\rm i}q_t+q_{xx}+\mu |q|^2q+{\rm i}\gamma(|q|^2q)_x=0,\quad q=q(x,t), \eqno(3.40)$$ under the boundary condition $$q\rightarrow \rho\,{\rm exp}\left\{{\rm i}\left(\kappa x-\omega^{\prime\prime} t+\psi^{(\pm)}\right)\right\}, \quad x\rightarrow \pm\infty, \eqno(3.41)$$ where $\omega^{\prime\prime}=\kappa^2-\mu\rho^2+\gamma\kappa\rho^2$ and $\mu$ and $\gamma$ are real constants. To show this, we apply the gauge transformation $$q={\rm exp}\left[{\rm i}\left\{{\mu\over\gamma}\tilde x+\left({\mu\over\gamma}\right)^2\tilde t\right\}\right]\tilde q, \quad x=\tilde x+{2\mu\over\gamma}\,\tilde t, \quad t=\tilde t, \eqno(3.42)$$ to equation (3.40) and see that it can be recast into the derivative NLS equation ${\rm i}\tilde q_{\tilde t}+\tilde q_{\tilde x\tilde x}+\gamma (|\tilde q|^2\tilde q)_{\tilde x}=0$, which coincides with equation (3.27) with the identification $\tilde q=q,\ \tilde x=x,\ \tilde t=t$ and $\gamma=2$. The dark $N$-soliton solution of equation (3.40) then takes the form $$q= \rho\,{\rm e}^{{\rm i}(\kappa x-\omega^{\prime\prime} t)}\,{{f^\prime}^*h^\prime\over {f^\prime}^2}, \eqno(3.43a)$$ where the tau functions $f^\prime$ and $h^\prime$ are given respectively by $$f^\prime=\left|\left(\delta_{jk}+{\kappa-{\mu\over\gamma}-{\rm i}p_j\over p_j+p_k^*}\,z_jz_k^*\right)_{1\leq j,k\leq N}\right|,\eqno(3.43b)$$ $$h^\prime=\left|\left(\delta_{jk}-{\kappa-{\mu\over\gamma}-{\rm i}p_j\over p_j+p_k^*}{p_j\over p_k^*}\,z_jz_k^*\right)_{1\leq j,k\leq N}\right|,\eqno(3.43c)$$ with $$z_j={\rm exp}\left[p_jx\!+\!\{{\rm i}p_j^2-(2\kappa+\gamma\rho^2)p_j\}t+\zeta_{j0}\right]. \eqno(3.43d)$$ The constraints for $p_j$ become $$p_jp_j^*={\gamma\rho^2\over 2}\left\{\kappa-{\mu\over\gamma}-{\rm i}(p_j-p_j^*)\right\},\quad j=1, 2, ..., N. \eqno(3.44)$$ The complex parameters $p_j$ exist only if the condition $\gamma\left(\kappa-{\mu\over\gamma}+{\gamma\rho^2\over 2}\right)>0$ is satisfied. The following two special cases are worth noting. The case $\mu=0$ and $\gamma=2$ reduces to the result given by proposition 3.1. On the other hand, in the limit $\gamma\rightarrow 0$ with $\mu$ fixed, we first replace $z_j$ by $\sqrt \gamma z_j$ for $j=1, 2, ..., N$ and then take the limit, producing the dark $N$-soliton solution of the NLS equation.
Note, in this limit, that the constraints (3.44) reduce to $p_jp_j^*=-\mu\rho^2/2$ and hence the dark soliton solutions exist only if the condition $\mu<0$ is satisfied. [*3.6. Stability of the plane wave*]{} We have considered the dark solitons on the background of a plane wave $\rho\,{\rm e}^{{\rm i}(\kappa x-\omega t)}$ with $\omega =1/\kappa +2\rho^2$. It is important to see whether the background field is stable or not against perturbations. If unstable, then dark solitons would not exist, as will be demonstrated in the next section. To this end, we perform the linear stability analysis of the plane wave. Following the standard procedure, we seek a solution of the form $$u=(\rho+\Delta\rho)\,{\rm e}^{{\rm i}(\kappa x-\omega t+\Delta\phi)}, \eqno(3.45)$$ where $\Delta\rho=\Delta\rho(x,t)$ and $\Delta\phi=\Delta\phi(x,t)$ are small perturbations. Substituting (3.45) into the FL equation (1.1) and linearizing about the plane wave, we obtain the system of linear PDEs for $\Delta\rho$ and $\Delta\phi$ $$\Delta\rho_{xt}+\rho(\omega-2\rho^2)\Delta\phi_x-\kappa\rho\Delta\phi_t-4\kappa\rho^2\Delta\rho=0, \eqno(3.46a)$$ $$\rho\Delta\phi_{xt}-(\omega-2\rho^2)\Delta\rho_x+\kappa\Delta\rho_t=0. \eqno(3.46b)$$ Assume the perturbations of the form ${\rm e}^{{\rm i}(\lambda x-\nu t)}$ with $\lambda$ real and $\nu$ possibly complex and substitute them into (3.46) to obtain a homogeneous linear system for $\Delta\rho$ and $\Delta\phi$ $$(\lambda\nu-4\kappa\rho^2)\Delta\rho+{\rm i}\{\rho\lambda(\omega-2\rho^2)+\kappa\rho\nu\}\Delta\phi=0, \eqno(3.47a)$$ $$-{\rm i}\{(\omega-2\rho^2)\lambda+\kappa\nu\}\Delta\rho+\rho\lambda\nu\Delta\phi=0. \eqno(3.47b)$$ The nontrivial solution exists if $\nu$ satisfies the quadratic equation $$(\lambda^2-\kappa^2)\nu^2-2(2\kappa\rho^2+1)\lambda\nu-{\lambda^2\over\kappa^2}=0. \eqno(3.48)$$ Solving this equation, we obtain $$\nu={\lambda\over \lambda^2-\kappa^2}\left[2\kappa\rho^2+1 \pm {1\over\kappa}\sqrt{\lambda^2+4\kappa^3(\kappa\rho^2+1)\rho^2}\right]. \eqno(3.49)$$ Thus, if the condition $$\kappa(\kappa\rho^2+1)>0, \eqno(3.50)$$ is satisfied, then $\nu$ becomes real for all values of real $\lambda$, implying that the plane wave is neutrally stable. It is evident that this condition always holds for $\kappa>0$. For negative $\kappa$, on the other hand, we put $\kappa=-K$ with $K>0$ and see that the stability criterion turns out to be as $K\rho^2>1$. Last, we remark that a similar stability analysis has been performed recently in conjunction with a plane wave solution of the original version of the FL equation \[22, 23\]. [**4. Properties of the soliton solutions**]{} In this section, we detail the properties of the soliton solutions. To this end, we first parametrize the complex parameters $p_j$ and $\zeta_{j0}$ by the real quantities $a_j, b_j, \theta_{j0}$ and $\chi_{j0}$ as $$p_j=a_j+{\rm i}b_j, \qquad \zeta_{j0}=\theta_{j0}+{\rm i}\chi_{j0}, \qquad j=1, 2, ..., N,\eqno(4.1)$$ and introduce the new independent variables $\theta_j$ and $\chi_j$ according to the relations $$\quad \theta_j=a_j(x+c_jt)+\theta_{j0}, \qquad c_j={\kappa\rho^2\over a_j^2+b_j^2}, \qquad j=1, 2, ..., N. \eqno(4.2a)$$ $$\chi_j=b_j(x-c_jt)+\chi_{j0}, \qquad j=1, 2, ..., N. \eqno(4.2b)$$ In terms of these variables, the variables $z_j$ defined by (3.2a) are put into the form $$z_j={\rm e}^{\theta_j+{\rm i}\chi_j}, \qquad j=1, 2, ..., N, \eqno(4.2c)$$ after setting $\tau=0$. 
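To see how the variables (4.2) arise, note that $1/p_j=(a_j-{\rm i}b_j)/(a_j^2+b_j^2)$, so that the exponent of $z_j$ in (3.2a) with $\tau=0$ separates into real and imaginary parts as $$p_jx+{\kappa\rho^2\over p_j}\,t+\zeta_{j0} =\bigl\{a_j(x+c_jt)+\theta_{j0}\bigr\}+{\rm i}\bigl\{b_j(x-c_jt)+\chi_{j0}\bigr\} =\theta_j+{\rm i}\chi_j, \qquad c_j={\kappa\rho^2\over a_j^2+b_j^2},$$ in accordance with (4.2).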
Substituting (4.1) into (3.2c), the constraints for $p_j$ can be rewritten as a quadratic equation for $b_j$ $$b_j^2-2\kappa^2\rho^2b_j+a_j^2-\kappa^3\rho^2=0, \qquad j=1, 2, ..., N.\eqno(4.3)$$ The solution to this equation is found to be as follows: $$b_j=(\kappa\rho)^2\pm\sqrt{\kappa^3\rho^2(1+\kappa\rho^2)-a_j^2},\qquad j=1, 2, ..., N.\eqno(4.4)$$ We can see from the above expression that the real $b_j\ (j=1, 2, ..., N)$ exist only when the condition $\kappa^3\rho^2(1+\kappa\rho^2)>0$ is satisfied. This coincides with the criterion (3.50) for the stability of the plane wave, as discussed in section 3.6. Throughout the analysis, we assume this condition to assure the existence of soliton solutions. It is to be noted from (4.2) and (4.3) that the parameters $a_j$ and $b_j$ are expressed in terms of $c_j$ as $$a_j^2={\kappa^2\over 4c_j^2}\left(c_{\rm max}-c_j\right)\left(c_j-c_{\rm min}\right), \qquad b_j={1\over 2\kappa c_j}(1-\kappa^2 c_j), \qquad c_{\rm min}<c_j<c_{\rm max}, \eqno(4.5a)$$ where $$c_{\rm max}={1\over \kappa^2}\left\{1+2\kappa\rho^2+2\sqrt{\kappa\rho^2(1+\kappa\rho^2)}\right\}, \qquad c_{\rm min}={1\over \kappa^2}\left\{1+2\kappa\rho^2-2\sqrt{\kappa\rho^2(1+\kappa\rho^2)}\right\}. \eqno(4.5b)$$ The relations (4.5) correspond to (3.38) for those of the one-soliton solution of the derivative NLS equation. Thus, the dark $N$-soliton solution is characterized by the $N$ velocities $c_j\,(j=1, 2, ..., N)$ and the $2N$ real phase constants $\theta_{j0}$ and $\chi_{j0}\, (j=1, 2, ..., N)$, the total number of which is $3N$. Another parameterization of the solution is possible if one introduces the angular variables $\gamma_j$ by $$a_j=\sqrt{\kappa^3\rho^2(1+\kappa\rho^2)}\, \sin\,\gamma_j, \eqno(4.6a)$$ $$b_j=(\kappa\rho)^2+\sqrt{\kappa^3\rho^2(1+\kappa\rho^2)}\, \cos\,\gamma_j, \quad 0< \gamma_j <2\pi, \quad \gamma_j\not=\pi, \quad j=1, 2, ..., N. \eqno(4.6b)$$ In terms of $\gamma_j$, $p_j$ from (4.1) can be written in the form $$p_j={\rm i}\left\{(\kappa\rho)^2+\sqrt{\kappa^3\rho^2(1+\kappa\rho^2)}\,{\rm e}^{-{\rm i}\gamma_j}\right\}, \quad j=1, 2, ..., N, \eqno(4.7)$$ and the velocity $c_j$ of the $j$th soliton given in (4.2a) is expressed as $$c_j={1\over \kappa^2\{1+4\kappa\rho^2(1+\kappa\rho^2)\,\sin^2\gamma_j\}}\left\{1+2\kappa\rho^2-2\,{\rm sgn}\,\kappa\,\sqrt{\kappa\rho^2(1+\kappa\rho^2)}\,\cos\,\gamma_j\right\}. \eqno(4.8)$$ It follows from the above parametric representation that $p_j$ lies on the circle of radius $\sqrt{\kappa^3\rho^2(1+\kappa\rho^2)}$ centered at ${\rm i}(\kappa\rho)^2$ in the complex plane. Let us first describe the properties of the one- and two-soliton solutions and then address the general $N$-soliton solution. [*4.1. One-soliton solution*]{} The tau functions $f=f_1$ and $g=g_1$ for the one-soliton solution follows from (3.1)-(3.3) with $N=1$. They read $$f_1=1+{\kappa-{\rm i}p_1\over p_1+p_1^*}\,z_1z_1^*,\qquad g_1=1-{\kappa+{\rm i}p_1^*\over p_1+p_1^*}{p_1\over p_1^*}\,z_1z_1^*. \eqno(4.9)$$ The one-soliton solution $u_1$ follows from (2.1) with (4.9), yielding $$u_1=\rho\,{\rm e}^{{\rm i}(\kappa x-\omega t)}\,{1-{\kappa+b_1+{\rm i}a_1\over 2a_1}\,{a_1+{\rm i}b_1\over a_1-{\rm i}b_1}\,{\rm e}^{2\theta_1}\over 1+{\kappa+b_1-{\rm i}a_1\over 2a_1}\,{\rm e}^{2\theta_1}}. 
\eqno(4.10)$$ The above expression can be put into the form $$u_1=|u_1|\,{\rm e}^{{\rm i}(\kappa x-\omega t)}{\rm exp}\left\{{\rm i}\left(\phi+\phi^{(+)}\right)\right\}, \eqno(4.11)$$ where the square of the modulus of $u_1$ is represented by $$|u_1|^2=\rho^2-{2a_1^2c\,{\rm sgn}(\kappa a_1)\over \sqrt{a_1^2+(\kappa+b_1)^2}}\,{1\over \cosh\,2(\theta_1+\delta_1)+{(\kappa+b_1)\,{\rm sgn}\,a_1\over \sqrt{a_1^2+(\kappa+b_1)^2}}}, \qquad c=|c_1|, \eqno(4.12a)$$ with $$\theta_1=a_1(x+c_1t)+\theta_{10},\qquad c_1={\kappa\rho^2\over a_1^2+b_1^2},\qquad {\rm e}^{4\delta_1}={a_1^2+(\kappa+b_1)^2\over 4a_1^2}, \eqno(4.12b)$$ and the tangents of the phases $\phi$ and $\phi^{(+)}$ are given respectively by $$\tan\, \phi={\{a_1^2+b_1(\kappa+b_1)\}\,\cosh\,2(\theta_1+\delta_1)+b_1\,{\rm sgn}\, a_1\sqrt{a_1^2+(\kappa+b_1)^2}\over \kappa a_1\,\sinh\,2(\theta_1+\delta_1)}, \eqno(4.13a)$$ $$\tan\,\phi^{(+)}={a_1^2+b_1(\kappa+b_1)\over \kappa a_1}. \eqno(4.13b)$$ It can be confirmed by direct substitution that (4.11) indeed satisfies the FL equation. The one-soliton solution (4.10) is a one-parameter family of solutions. The parameterization in terms of $a_1$ will be employed in classifying the soliton solutions. The parameters $c_1$ and $b_1$ are then expressed in terms of $a_1$ (see (4.2a) and (4.4)), whereas the parameters $\rho$ and $\kappa$ are fixed by the boundary condition (1.2). The relation (4.5) will be used conveniently when considering the generation of algebraic solitons in the limit $|a_1|\rightarrow 0$. The form of $|u_1|$ from (4.12) reveals that if $\kappa a_1>0$, then $|u_1|$ takes the form of a dark soliton whereas if $\kappa a_1<0$, it becomes a bright soliton on a constant background of amplitude $\rho$. Note from (4.12) that the width of the soliton may be defined by $(2|a_1|)^{-1}$. The net change of the phase caused by the effect of nonlinear modulation is given by (4.13). Roughly speaking, the phase $\phi$ behaves like a step function as a function of $\theta_1$. Specifically, a rapid change of the phase occurs in the vicinity of the center position of the soliton ($\theta_1=-\delta_1$), yielding a phase difference $\pi$ (or $-\pi$). As a result, the phase of $u_1$ changes by a quantity $2\phi^{(+)}$ as $\theta_1$ varies from $-\infty$ to $+\infty$, where $\phi^{(+)}$ is given by (4.13b). Let us classify the one-soliton solutions in accordance with the sign of $\kappa$. We consider the two cases, i.e., case 1 ($\kappa>0, a_1\lessgtr 0$) and case 2 ($\kappa<0, a_1\lessgtr 0$), separately. For each sign of $\kappa$, both dark and bright solitons arise, as we shall show now. [*4.1.1. Case 1: $\kappa>0$* ]{} In this case, the velocity $c_1$ of the soliton is positive, as is evident from (4.12b). Let $A_d$ and $A_b$ be the amplitudes of the dark and bright solitons, respectively, with respect to the background. We then find from (4.5) and (4.12) that $$\begin{aligned} A_d &= \rho-\sqrt{\rho^2-2c_1\left\{\sqrt{a_1^2+(\kappa+b_1)^2}-(\kappa+b_1)\right\}} \notag \\ &= \rho-{1\over\sqrt{\kappa}}\left|\kappa\sqrt{c}-\sqrt{1+\kappa\rho^2}\right|, \qquad a_1>0, \quad c_1=c>0,\tag{4.14}\end{aligned}$$ $$\begin{aligned} A_b &=\sqrt{\rho^2+2c_1\left\{\sqrt{a_1^2+(\kappa+b_1)^2}+(\kappa+b_1)\right\}}-\rho \notag \\ &={1\over\sqrt{\kappa}}\left(\kappa\sqrt{c}+\sqrt{1+\kappa\rho^2}\right)-\rho, \qquad a_1<0, \quad c_1=c>0,\tag{4.15}\end{aligned}$$ where $c\equiv |c_1|$ lies in the interval $c_{\rm min}<c<c_{\rm max}$ with $c_{\rm max}$ and $c_{\rm min}$ being given by (4.5b).
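In particular, (4.14) shows that the dark-soliton amplitude attains its maximum value $A_d=\rho$ precisely when $\kappa\sqrt{c}=\sqrt{1+\kappa\rho^2}$, i.e. at $$c={1+\kappa\rho^2\over \kappa^2},$$ which is the critical velocity $c_0$ of the black soliton introduced in (4.16) below.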
Note from (4.5a) that $\kappa+b_1=(1+\kappa^2c_1)/(2\kappa c_1)>0$ for $\kappa>0$ and $c_1>0$. This estimate will be used to judge the existence of algebraic solitons in the limit of infinite width. ![image](Figure-1.eps){width="10cm"} [**Figure 1.**]{} Amplitude-velocity relation for the dark soliton $A_d$ (solid line) and bright soliton $A_b$ (broken line) for $\rho=1$ and $\kappa=2$. Figure 1 plots the dependence of the amplitudes $A=A_d$ and $A=A_b$ on the velocity $c=|c_1|$ for $\rho=1$ and $\kappa=2$. ![image](Figure-2.eps){width="10cm"} [**Figure 2.**]{} Profile of the amplitude of the dark soliton $U=|u_1|$ at $t=0$. a: $c=c_0=0.75$, b: $c=0.33$, c: $c=0.098$. The profile a is a black soliton. [*(i) Dark soliton: $a_1>0$*]{} As seen from figure 1, the amplitude $A_d$ of the dark soliton becomes an increasing function of the velocity $c$ in the interval $c_{\rm min}< c\leq c_0$ and a decreasing function in the interval $c_0<c< c_{\rm max}$, where $c_{\rm max}\, (\gamma_1=\pi)$ and $c_{\rm min}\,(\gamma_1=0)$ are given by (4.5b) and a critical velocity $c_0$ and the corresponding angle $\gamma_0$ by $$c_0={1+\kappa\rho^2\over \kappa^2}, \qquad {\rm at}\quad \gamma_1=\gamma_0=\cos^{-1}\left[-{(\kappa\rho^2)^{1\over 2}(3+2\kappa\rho^2)\over 2(1+\kappa\rho^2)^{3\over 2}}\right], \qquad (0<\gamma_0<\pi). \eqno(4.16)$$ In the present numerical example ($\rho=1, \kappa=2$), $c_{\rm min}=0.025, c_0=0.75, c_{\rm max}=2.47$. The above observation shows that in the interval $c_0<c<c_{\rm max}$, a small dark soliton propagates faster than a large dark soliton. A similar behavior has also been found in I for the bright soliton solutions of the FL equation with zero background. Figure 2 depicts the profile of $U=|u_1|$ at $t=0$ for three different values of $c$, i.e., a: $c=c_0=0.75 (\gamma_1=\gamma_0=0.90\pi)$, b: $c=0.33 (\gamma_1=5\pi/6)$, c: $c=0.098 (\gamma_1=2\pi/3)$ with the parameters $\rho=1, \kappa=2, \theta_{10}=-\delta_1$ and $\chi_{10}=0$. When $c=c_0$, the amplitude of the dark soliton attains the maximum value $A_d=\rho$. See figure 2 a. It then turns out that the intensity of the soliton center falls to zero. Such a soliton is well-known in the field of nonlinear optics. It is sometimes called a [*black*]{} soliton. For this specific value of $c$, one finds from (4.5), (4.12) and (4.13) that $$a_1={\kappa^{3\over 2}\rho(4+3\kappa\rho^2)^{1\over 2}\over 2(1+\kappa\rho^2)}, \quad b_1=-{(\kappa\rho)^2\over 2(1+\kappa\rho^2)}, \eqno(4.17a)$$ $${\rm e}^{4\delta_1}={\kappa^2\over 4a_1^2}, \qquad \tan\,\phi=-{b_1\over a_1}\, \tanh(\theta_1+\delta_1),\qquad \tan\,\phi^{(+)}=-{b_1\over a_1}. \eqno(4.17b)$$ The profile of $|u_1|^2$ from (4.12) then becomes $$|u_1|^2=\rho^2\left[1-{4+3\kappa\rho^2\over 2(1+\kappa\rho^2)}{1\over \cosh\,2(\theta_1+\delta_1)+{2+\kappa\rho^2\over 2(1+\kappa\rho^2)}}\right]. \eqno(4.18)$$ As confirmed easily from the above expression, the minimum value of $|u_1|$ is zero at $\theta_1=-\delta_1$. The algebraic dark soliton may be produced from (4.12) by taking the limit $a_1\rightarrow +0$. However, as already noticed, the value of $\kappa+b_1$ is positive so that $|u_1|$ tends simply to a constant value $\rho$. Hence, this limiting procedure is irrelevant for the dark soliton solution under consideration, indicating that the algebraic dark soliton does not exist for $\kappa>0$ and $a_1>0$. ![image](Figure-3.eps){width="10cm"} [**Figure 3.**]{} Profile of a black soliton $u_{\rm R}={\rm Re}\,u_1$ at $t=1$. 
Figure 3 shows the profile of $u_{\rm R}={\rm Re}[u_1]$ at $t=1$ for the black soliton. The broken line indicates $\pm |u_1|$ (see figure 2 a). One can see that the dark soliton exhibits phase modulations near the center position of the soliton. This peculiar feature is in striking contrast to the bright soliton solution of the NLS equation for which no phase modulation occurs. A similar behavior has been observed for both dark and bright soliton solutions of the derivative NLS equation with the background of a plane wave \[21, 24\]. [*(ii) Bright soliton: $a_1<0$*]{} Figure 4 depicts the profile of the bright soliton $U=|u_1|$ at $t=0$ for three different values of $c$, i.e., a: $c=2.47 (\gamma_1=1.001\pi)$, b: $c=0.73 (\gamma_1=1.1\pi)$, c: $c= 0.025 (\gamma_1=1.999\pi)$ with $\rho=1$ and $\kappa=2$. The feature of the bright soliton differs substantially from that of the dark soliton. To be specific, the amplitude of the bright soliton always becomes an increasing function of the velocity (see figure 1). It takes the maximum value at $c=c_{\rm max} (\gamma_1\rightarrow\pi+0, a_1\rightarrow -0)$ and the minimum value at $c=c_{\rm min} (\gamma_1\rightarrow 2\pi-0, a_1\rightarrow -0)$. At these limiting values of the velocity, the algebraic soliton is produced from the soliton of hyperbolic type. Indeed, if we put $\theta_{10}=a_1x_0-\delta_1$ in (4.10) and (4.12) with $x_0$ being a real constant and then take the limit $a_1\rightarrow -0$, we find $$u_1=\rho\,{\rm e}^{{\rm i}(\kappa x-\omega t)}\,{x+ct+x_0-{\rm i}\,{2\kappa+b_1\over 2b_1(\kappa+b_1)} \over x+ct+x_0-{\rm i}\,{1\over 2(\kappa+b_1)}}, \eqno(4.19a)$$ $$|u_1|^2=\rho^2+{2\kappa c^2\over 1+\kappa^2c}\,{1\over (x+ct+x_0)^2+\left({\kappa c\over 1+\kappa^2c}\right)^2},\eqno(4.19b)$$ where $b_1=(1-\kappa^2c)/2\kappa c$ by (4.5a) and $c=c_{\rm max}$ or $c_{\rm min}$. Note from (4.12b) that $b_1^2=\kappa\rho^2/c$ when $a_1\rightarrow -0$. One can see that the algebraic soliton has no free parameters except a phase constant $x_0$ since the velocity $c$ is determined by $\rho$ and $\kappa$ which are fixed by the boundary condition. To derive (4.19a) from (4.10), we use the following expansion formulas for small $a_1$: $${\rm e}^{2\theta_1} ={2|a_1|\over \sqrt{a_1^2+(\kappa+b_1)^2}}\,{\rm e}^{2a_1(x+ct+x_0)} \sim {2|a_1|\over |\kappa+b_1|}\Big\{1+2a_1(x+ct+x_0)+O(a_1^2)\Big\}, \eqno(4.20a)$$ $${\kappa+b_1-{\rm i}a_1\over 2a_1}\,{\rm e}^{2\theta_1} \sim {\rm sgn}\, a_1\, {\rm sgn}(\kappa+b_1)\left[1+a_1\left\{2(x+ct+x_0)-{\rm i}\,{1\over \kappa+b_1}\right\}+O(a_1^2)\right], \eqno(4.20b)$$ $${\kappa+b_1+{\rm i}a_1\over 2a_1}{a_1+{\rm i}b_1\over a_1-{\rm i}b_1}\,{\rm e}^{2\theta_1}$$ $$\sim -{\rm sgn}\, a_1\, {\rm sgn}(\kappa+b_1)\left[1+a_1\left\{2(x+ct+x_0)-{\rm i}\,{2\kappa+b_1\over b_1(\kappa+b_1)}\right\}+O(a_1^2)\right]. \eqno(4.20c)$$ ![image](Figure-4.eps){width="10cm"} [**Figure 4.**]{} Profile of the amplitude of the bright soliton $U=|u_1|$ at $t=0$. a: $c=2.47$, b: $c=0.73$, c: $c=0.025$. The profiles a and c are algebraic solitons. Because of the inequalities $a_1<0$ and $\kappa+b_1>0$ in the current problem, one finds that the condition ${\rm sgn}\, a_1\, {\rm sgn}(\kappa+b_1)=-1$ is satisfied, which yields (4.19a) by taking the limit $a_1\rightarrow -0$ for (4.10). Actually, under the above condition, the leading-order terms of the denominator and numerator of (4.10) turn out to be of order $a_1$. Consequently, the expression (4.10) has a limiting form (4.19a) in the zero limit of $a_1$. 
On the other hand, the expression (4.19b) follows either directly from (4.19a) or from (4.12) by performing the similar limiting procedure. ![image](Figure-5.eps){width="10cm"} [**Figure 5.**]{} Profile of an algebraic bright soliton $u_{\rm R}={\rm Re}\,u_1$ at $t=1$. A representative profile of the algebraic bright soliton $U=|u_1|$ at $t=0$ and the corresponding profile of $u_{\rm R}={\rm Re}\,u_1$ at $t=1$ are shown in figure 4 a and figure 5, respectively. The novel feature of the bright soliton mentioned above deserves a few comments. First, the amplitude of the bright soliton tends to a finite value when its width tends to infinity, as opposed to the behavior of the dark soliton discussed just before for which the amplitude becomes zero in this limit. Second, the FL equation has an infinite number of conservation laws \[3\]. Among them, we evaluate the conserved quantity $I=\int_{-\infty}^\infty(|u_x|^2-\kappa^2\rho^2)dx$ for the one-soliton solution (4.10). This quantity may be termed the energy of the soliton in accordance with the correspondence between the solution $u$ of the FL equation and the solution $q$ of the derivative NLS equation. Using the relation $(|u_x|^2)_t=(|u|^2)_x$ which follows directly from the FL equation, we obtain $$I=-4\,{\rm sgn}\, a_1\,\tan^{-1}\left[{1\over |a_1|}\left\{\sqrt{a_1^2+(\kappa+b_1)^2}-(\kappa+b_1){\rm sgn}\, a_1\right\}\right].$$ We find from this expression that in the limit of infinite width $|a_1|\rightarrow 0$, $I$ becomes zero for the dark soliton $(a_1>0)$ and tends to a finite value $2\pi$ for the bright soliton $(a_1<0)$. See also an analogous calculation for the bright soliton solution of the derivative NLS equation with zero background \[25\]. [*4.1.2. Case 2: $\kappa<0$*]{} For negative $\kappa$, the expressions of the amplitude for the dark and bright solitons are given respectively by $$\begin{aligned} A_d &= \rho-\sqrt{\rho^2+2c_1\left\{\sqrt{a_1^2+(\kappa+b_1)^2}+(\kappa+b_1)\right\}} \notag \\ &= \rho-{1\over\sqrt{K}}\left|K\sqrt{c}-\sqrt{K\rho^2-1}\right|, \qquad a_1<0, \quad c_1=-c<0,\tag{4.21}\end{aligned}$$ $$\begin{aligned} A_b &=\sqrt{\rho^2-2c_1\left\{\sqrt{a_1^2+(\kappa+b_1)^2}-(\kappa+b_1)\right\}}-\rho \notag \\ &={1\over\sqrt{K}}\left(K\sqrt{c}+\sqrt{K\rho^2-1}\right)-\rho, \qquad a_1>0, \quad c_1=-c<0,\tag{4.22}\end{aligned}$$ where $K=-\kappa$ is a positive wavenumber and the velocity $c$ lies in the interval $c_{\rm min}^\prime<c<c_{\rm max}^\prime$ with $$c_{\rm max}^\prime={1\over K^2}\left\{2K\rho^2\!-\!1\!+2\sqrt{K\rho^2(K\rho^2\!-\!1)}\right\},\ c_{\rm min}^\prime={1\over K^2}\left\{2K\rho^2\!-\!1\!-\!2\sqrt{K\rho^2(K\rho^2\!-\!1)}\right\}. \eqno(4.23)$$ Recall that the condition $K\rho^2-1>0$ must be imposed to assure the existence of the soliton solutions. ![image](Figure-6.eps){width="10cm"} [**Figure 6.**]{} Amplitude-velocity relation for the dark soliton $A_d$ (solid line) and bright soliton $A_b$ (broken line) for $\rho=1$ and $\kappa=-2$. Figure 6 plots the dependence of the amplitudes $A=A_d$ and $A=A_b$ on the velocity $c=|c_1|$ for $\rho=1$ and $\kappa= -2$. When compared with figure 1 for $\kappa>0$, there appear several different features for $\kappa<0$. In particular, the algebraic [*dark*]{} soliton would arise in the limit $c\rightarrow c_{\rm min}^\prime$ since in this limit, the amplitude $A_d$ tends to a finite value. In addition, the algebraic bright soliton exists only in the limit $c\rightarrow c_{\rm max}^\prime$. We now proceed to the detailed description of the soliton solutions. 
![image](Figure-7.eps){width="10cm"} [**Figure 7.**]{} Profile of the amplitude of the dark soliton $U=|u_1|$ at $t=0$. a: $c=c_0=0.25$, b: $c=0.16$, c: $c=0.043$. The profile a is a black soliton and the profile c is an algebraic soliton. [*(i) Dark soliton: $a_1<0$*]{} It follows from (4.5) with $\kappa=-K, c_1=-c$ that $\kappa+b_1=1/2Kc-K/2$. Since $c_{\rm min}^\prime<c<c_{\rm max}^\prime$ by (4.23), the possible value of $\kappa+b_1$ is restricted by the inequality $$K\left[K\rho^2-1-\sqrt{K\rho^2(K\rho^2-1)}\right]<\kappa+b_1<K\left[K\rho^2-1+\sqrt{K\rho^2(K\rho^2-1)}\right]. \eqno(4.24)$$ One can see that the upper limit of $\kappa+b_1$ is attained when $c=c_{\rm min}^\prime$ and its limiting value is positive by the condition $K\rho^2>1$, whereas the lower limit is attained when $c=c_{\rm max}^\prime$ and is negative. In view of this fact, the algebraic dark soliton would be produced in the limit $c\rightarrow c_{\rm min}^\prime$, for which $\kappa+b_1>0$. Actually, taking the limit $a_1\rightarrow -0$ for the solutions (4.10) and (4.12) and using the expansion formulas (4.20), we find that the hyperbolic soliton reduces to the limiting form $$u_1=\rho\,{\rm e}^{{\rm i}(-Kx-\omega t)}\,{x-ct+x_0-{\rm i}\,{-2K+b_1\over 2b_1(-K+b_1)} \over x-ct+x_0-{\rm i}\,{1\over 2(-K+b_1)}}, \eqno(4.25a)$$ $$|u_1|^2=\rho^2-{2K c^2\over 1-K^2c}\,{1\over (x-ct+x_0)^2+\left({K c\over 1-K^2c}\right)^2},\eqno(4.25b)$$ where $b_1=(1+K^2c)/2K c$ and $c=c_{\rm min}^\prime$. Since $1-K^2c^\prime_{\rm min}>0$ by virtue of the condition $K\rho^2>1$, the expression (4.25b) actually represents an algebraic dark soliton. The black soliton appears when the velocity $c$ takes a specific value $c=c_0^\prime$, where $$c_0^\prime=(K\rho^2-1)/K^2 \qquad {\rm at}\quad \gamma_1=\gamma_0^\prime=\cos^{-1}\left[{(K\rho^2)^{1\over 2}(3-2K\rho^2)\over 2(K\rho^2-1)^{3\over 2}}\right], \qquad (\pi<\gamma_0^\prime<2\pi). \eqno(4.26)$$ Its profile is represented by $$|u_1|^2=\rho^2\left[1-{3K\rho^2-4\over 2(K\rho^2-1)}{1\over \cosh\,2(\theta_1+\delta_1)+{K\rho^2-2\over 2(K\rho^2-1)}}\right]. \eqno(4.27)$$ It is important to notice that the inequality $c_{\rm min}^\prime<c_0^\prime<c_{\rm max}^\prime$ requires the condition $K\rho^2>4/3$ for the wavenumber $K$. It then turns out that expression (4.27) takes the form of a black soliton. ![image](Figure-8.eps){width="10cm"} [**Figure 8.**]{} Profile of an algebraic dark soliton $u_{\rm R}={\rm Re}\,u_1$ at $t=1$. Figure 7 depicts the profile of $U=|u_1|$ at $t=0$ for three different values of $c$, i.e., a: $c=c_0^\prime=0.25 (\gamma_1=\gamma_0^\prime=5\pi/4)$, b: $c=0.16 (\gamma_1=4\pi/3)$, c: $c=0.043 (\gamma_1=2\pi)$ with the parameters $\rho=1, \kappa=-2, \theta_{10}=-\delta_1$ and $\chi_{10}=0$. In this example, $c_{\rm min}^\prime=0.043, c_0^\prime=0.25$ and $c_{\rm max}^\prime=1.46$ (see figure 6). An algebraic soliton appears at the lower limit of the velocity, i.e., $c=c_{\rm min}^\prime$, whereas a black soliton arises at $c=c_0^\prime$. Figure 8 shows the profile of $u_R={\rm Re}\ u_1$ at $t=1$ for an algebraic dark soliton. [*(ii) Bright soliton: $a_1>0$*]{} ![image](Figure-9.eps){width="10cm"} [**Figure 9.**]{} Profile of the amplitude of the bright soliton $U=|u_1|$ at $t=0$. a: $c=1.46$, b: $c=0.81$, c: $c=0.19$. The profile a is an algebraic soliton. ![image](Figure-10.eps){width="10cm"} [**Figure 10.**]{} Profile of an algebraic bright soliton $u_R={\rm Re\, u_1}$ at $t=1$.
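As a consistency check on the numerical values, for the parameters $\rho=1$ and $K=2$ used in figures 6-10, the formulas (4.23) and (4.26) give $$c_{\rm min}^\prime={3-2\sqrt{2}\over 4}\simeq 0.043,\qquad c_0^\prime={1\over 4}=0.25,\qquad c_{\rm max}^\prime={3+2\sqrt{2}\over 4}\simeq 1.46,$$ in agreement with the values quoted above.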
The crucial difference between case 1 and case 2 for the bright solitons is observed if one compares figure 6 with figure 1. Notably, the bright soliton with $\kappa<0$ reduces to an algebraic soliton only at the upper limit of the velocity $c=c_{\rm max}^\prime$, whereas the bright soliton with $\kappa>0$ has two critical velocities $c_{\rm max}$ and $c_{\rm min}$ for which algebraic solitons are produced. Figure 9 depicts the profile of $U=|u_1|$ at $t=0$ for three different values of $c$, i.e., a: $c=1.46 (\gamma_1=0.998\pi)$, b: $c=0.81 (\gamma_1=0.9\pi)$, c: $c=0.19 (\gamma_1=0.7\pi)$ with $\rho=1$ and $\kappa=-2$. Figure 10 shows the profile $u_{\rm R}={\rm Re}\,u_1$ of an algebraic bright soliton at $t=1$ which corresponds to the profile a in figure 9. [*4.1.3. Note on algebraic solitons* ]{} We have seen that the algebraic solitons arise from the hyperbolic solitons when certain conditions are satisfied. Here, we summarize the result. The algebraic bright solitons are produced when the conditions ${\rm sgn}\,a_1\,{\rm sgn}(\kappa+b_1)=-1$ and $\kappa a_1<0$ are satisfied simultaneously, whereas the corresponding conditions for the algebraic dark solitons are given by ${\rm sgn}\,a_1\,{\rm sgn}(\kappa+b_1)=-1$ and $\kappa a_1>0$. Thus, for $\kappa>0$, the conditions ${\rm sgn}(\kappa+b_1)=1$ and ${\rm sgn}(\kappa+b_1)=-1$ are responsible for the generation of the algebraic bright and dark solitons, respectively. Since $\kappa+b_1>0$ in this case, only the bright algebraic soliton exists. See figure 1. For $\kappa<0$, on the other hand, the above conditions turn out to be ${\rm sgn}(\kappa+b_1)=-1$ and ${\rm sgn}(\kappa+b_1)=1$, respectively. Under this setting, the limiting value of $\kappa+b_1$ becomes negative for the bright soliton and positive for the dark soliton, respectively, implying the existence of both types of algebraic solitons. See figure 6. In conclusion, we emphasize that the criterion for the existence of solitons (which depends crucially on the sign of $\kappa$) plays an important role in our analysis. [*4.2. Two-soliton solution*]{} As clarified by the analysis of the one-soliton solutions, both dark and bright solitons exist in our system. Therefore, the two-soliton solutions can be classified into three types, i.e., dark-dark solitons, dark-bright solitons and bright-bright solitons. Here, we focus our attention on the dark-dark solitons. Especially, we investigate the asymptotic behavior of the solution for large time. The two-soliton solution describing the interaction between a dark soliton and a bright soliton will be briefly discussed. For both cases, we assume that $\kappa>0$. [*4.2.1. Dark-dark solitons*]{} The tau functions $f_2$ and $g_2$ representing the dark two-soliton solution are given by (3.1)-(3.3) with $N=2$ subjected to the conditions $\kappa>0, a_1>0, a_2>0$. They read $$f_2=1+{\kappa-{\rm i}p_1\over p_1+p_1^*}\,z_1z_1^*+{\kappa-{\rm i}p_2\over p_2+p_2^*}\,z_2z_2^* +{(\kappa-{\rm i}p_1)(\kappa-{\rm i}p_2)(p_1-p_2)(p_1^*-p_2^*)\over (p_1+p_1^*)(p_1+p_2^*)(p_2+p_1^*)(p_2+p_2^*)}\,z_1z_2z_1^*z_2^*, \eqno(4.28a)$$ $$g_2\!=\!1\!-\!{\kappa+{\rm i}p_1^*\over p_1+p_1^*}{p_1\over p_1^*}\,z_1z_1^*\!-\!{\kappa+{\rm i}p_2^*\over p_2+p_2^*}{p_2\over p_2^*}\,z_2z_2^* \!+\!{(\kappa+{\rm i}p_1^*)(\kappa+{\rm i}p_2^*)(p_1-p_2)(p_1^*-p_2^*)\over (p_1+p_1^*)(p_1+p_2^*)(p_2+p_1^*)(p_2+p_2^*)}{p_1p_2\over p_1^*p_2^*}\,z_1z_2z_1^*z_2^*.
\eqno(4.28b)$$ To investigate the interaction process of two solitons, we first order the magnitude of the velocity of each soliton in the $(x, t)$ coordinate system as $c_1>c_2>0$. Invoking the definition (4.2a) of the velocity of the solitons, this can be established by imposing the condition $|p_1|<|p_2|$ on the amplitude parameters. Now, we take the limit $t\rightarrow -\infty$ with $\theta_1$ being fixed. Since in this limit $|z_1|=$finite and $|z_2|\rightarrow \infty$, the leading-order asymptotics of $f_2$ and $g_2$ are found to be as $$f_2 \sim {\kappa-{\rm i}p_2\over p_2+p_2^*}\,z_2z_2^*\left\{1+{(\kappa-{\rm i}p_1)(p_1-p_2)(p_1^*-p_2^*)\over (p_1+p_1^*)(p_1+p_2^*)(p_2+p_1^*))}\,z_1z_1^*\right\}, \eqno(4.29a)$$ $$g_2 \sim -{\kappa+{\rm i}p_2^*\over p_2+p_2^*}{p_2\over p_2^*}\,z_2z_2^*\left\{1-{(\kappa+{\rm i}p_1^*)(p_1-p_2)(p_1^*-p_2^*)\over (p_1+p_1^*)(p_1+p_2^*)(p_2+p_1^*)}{p_1\over p_1^*}\,z_1z_1^*\right\}. \eqno(4.29b)$$ The asymptotic form of the two-dark soliton solution follows from (2.1) upon substituting (4.29) into it, giving rise to $$u_2 \sim \rho\,{\rm exp}\left\{{\rm i}\left(\kappa x-\omega t+\phi_1^{(-)}\right)\right\}{1-{\kappa+{\rm i}p_1^*\over p_1+p_1^*}{p_1\over p_1^*}\,z_1^\prime {z_1^\prime}^* \over 1 +{\kappa-{\rm i}p_1\over p_1+p_1^*}\,z_1^\prime {z_1^\prime}^*}, \eqno(4.30a)$$ where $$z_1^\prime=z_1\,{\rm exp}\left[-{\rm ln}\left({p_1+p_2^*\over p_1-p_2}\right)\right], \eqno(4.30b)$$ $$\phi_1^{(-)}={\rm arg}\left({\kappa+{\rm i}p_2^*\over \kappa-{\rm i}p_2}{p_2\over p_2^*}\right)+\pi. \eqno(4.30c)$$ Let $u_1(\theta_1)$ be the dark one-soliton solution (4.10). Then, the asymptotic form of $u_2$ can be written in terms of $u_1$ as $$u_2 \sim {\rm exp}\left({\rm i}\phi_1^{(-)}\right)u_1(\theta_1+\Delta \theta_1^{(-)}), \quad \Delta\theta_1^{(-)}=-{\rm ln}\left|{p_1+p_2^*\over p_1-p_2}\right|. \eqno(4.31)$$ Next, we take the limit $t\rightarrow +\infty$ with $\theta_1$ being fixed. In this limit, $|z_1|=$finite and $|z_2|\rightarrow 0$. Therefore, the tau functions $f_2$ and $g_2$ and the two-soliton solution $u_2$ behave like $$f_2\sim 1+{\kappa-{\rm i}p_1\over p_1+p_1^*}\,z_1z_1^*,\qquad g_2\sim 1-{\kappa+{\rm i}p_1^*\over p_1+p_1^*}{p_1\over p_1^*}\,z_1z_1^*, \eqno(4.32)$$ $$u_2 \sim \rho\,{\rm e}^{{\rm i}(\kappa x-\omega t)}\,{1-{\kappa+{\rm i}p_1\over p_1+p_1^*}{p_1\over p_1^*}\,z_1z_1^* \over 1+{\kappa-{\rm i}p_1\over p_1+p_1^*}\,z_1z_1^*}. \eqno(4.33)$$ It follows from (4.33) that $$u_2 \sim u_1(\theta_1+\Delta \theta_1^{(+)}), \quad \Delta\theta_1^{(+)}=0. \eqno(4.34)$$ The trajectory of the center position $x=x_c(t)$ of the $j$th soliton is described by the equation $\theta_j+\Delta\theta_j^{(\pm)}=0$, or $x_c=-c_jt-(\theta_{j0}+\Delta\theta_j^{(\pm)})/a_j$. Since the soliton propagates to the left, the phase shift $\Delta x_j$ of the $j$th soliton can be defined by the relation $$\Delta x_j=x_c(-\infty)-x_c(+\infty)={1\over a_j}\left(\Delta\theta_j^{(+)}-\Delta\theta_j^{(-)}\right), \quad j=1, 2. \eqno(4.35)$$ We see from (4.31) and (4.34) that the fast soliton suffers a phase shift $$\Delta x_1={1\over a_1}{\rm ln}\left|{p_1+p_2^*\over p_1-p_2}\right|. \eqno(4.36)$$ In terms of the angular variable $\gamma_1$ and $\gamma_2$ defined by (4.6) and (4.7), this expression can be rewritten in the form $$\Delta x_1={1\over a_1}\,{\rm ln}\left|{\sin{1\over 2}\left(\gamma_1+\gamma_2\right)\over \sin{1\over 2}\left(\gamma_1-\gamma_2\right)}\right|, \quad a_1=\sqrt{\kappa^3\rho^2(1+\kappa\rho^2)}\,\sin\,\gamma_1,\qquad 0<\gamma_1<\pi. 
\eqno(4.37)$$ We can perform the similar asymptotic analysis while keeping $\theta_2$ fixed. Hence, we quote only the final results. As $t\rightarrow -\infty$, the expressions corresponding to (4.29) and (4.31) read respectively $$f_2\sim 1+{\kappa-{\rm i}p_2\over p_2+p_2^*}\,z_2z_2^*,\qquad g_2\sim 1-{\kappa+{\rm i}p_2^*\over p_2+p_2^*}{p_2\over p_2^*}\,z_2z_2^*, \eqno(4.38)$$ $$u_2 \sim u_1(\theta_2+\Delta \theta_2^{(-)}), \quad \Delta\theta_2^{(-)}=0. \eqno(4.39)$$ As $t\rightarrow \infty$, on the other hand, they take the form $$f_2 \sim {\kappa-{\rm i}p_1\over p_1+p_1^*}\,z_1z_1^*\left\{1+{(\kappa-{\rm i}p_2)(p_1-p_2)(p_1^*-p_2^*)\over (p_2+p_2^*)(p_1+p_2^*)(p_2+p_1^*))}\,z_2z_2^*\right\}, \eqno(4.40a)$$ $$g_2 \sim -{\kappa+{\rm i}p_1\over p_1+p_1^*}{p_1\over p_1^*}\,z_1z_1^*\left\{1-{(\kappa+{\rm i}p_2^*)(p_1-p_2)(p_1^*-p_2^*)\over (p_2+p_2^*)(p_1+p_2^*)(p_2+p_1^*))}{p_2\over p_2^*}\,z_2z_2^*\right\}, \eqno(4.40b)$$ $$u_2 \sim {\rm exp}\left({\rm i}\phi_2^{(+)}\right)u_1(\theta_2+\Delta \theta_2^{(+)}),\eqno(4.41a)$$ $$\Delta\theta_2^{(+)}=-{\rm ln}\left|{p_2+p_1^*\over p_2-p_1}\right|, \quad \phi_2^{(+)}={\rm arg}\left({\kappa+{\rm i}p_1^*\over \kappa-{\rm i}p_1}{p_1\over p_1^*}\right)+\pi. \eqno(4.41b)$$ ![image](Figure-11.eps){width="10cm"} [**Figure 11.**]{} The interaction of two dark solitons. The phase shift of the slow soliton follows from (4.35), (4.39) and (4.41), resulting in $$\Delta x_2=-{1\over a_2}{\rm ln}\left|{p_2+p_1^*\over p_2-p_1}\right|, \eqno(4.42)$$ or equivalently in terms of the angular variables $\gamma_1$ and $\gamma_2$, it reads $$\Delta x_2=-{1\over a_2}\,{\rm ln}\left|{\sin{1\over 2}\left(\gamma_2+\gamma_1\right)\over \sin{1\over 2}\left(\gamma_2-\gamma_1\right)}\right|, \quad a_2=\sqrt{\kappa^3\rho^2(1+\kappa\rho^2)}\sin\,\gamma_2, \qquad 0<\gamma_2<\pi. \eqno(4.43)$$ An inspection of the formulas (4.36) and (4.42) reveals that $\Delta x_1>0$ and $\Delta x_2<0$ under the setting $a_1>0,\ a_2>0$. Figure 11 shows the intercaction of two dark solitons with the parameters $\rho=1, \kappa=2, c_1=0.75(\gamma_1=0.90\pi), c_2=0.24(\gamma_2=0.80\pi)$ and $\zeta_{10}=\zeta_{20}=0$ so that from (4.14), $A_{d1}=1.0$ and $A_{d2}=0.47$. It can be seen from figure 1 that the amplitude of each dark soliton is an increasing function of the velocity for the present choice of the parameters. Note, in this example, that the large soliton is a black soliton since its asymptotic amplitude is $A_{d1}=\rho=1$. The phase shifts evaluated from the formulas (4.37) and (4.43) are given by $\Delta x_1=0.70$ and $\Delta x_2=-0.36$, respectively. Figure 11 shows clearly a typical interaction process of solitons, i.e., as time goes, the large soliton gets close to the small soliton and overtakes it and after the collision, both solitons eventually separate each other without changing their profiles. The net effect of the collision is only the phase shift. [*4.2.2. Dark-bright solitons*]{} The two-soliton solution consisting of a dark soliton and a bright soliton is obtained by choosing the parameters such as $\kappa>0, a_1>0$ and $a_2<0$, for example. The asymptotic analysis can be performed as well for this solution and hence the detail will be omitted. ![image](Figure-12.eps){width="10cm"} [**Figure 12.**]{} The interaction between a dark soliton and a bright soliton. 
Figure 12 depicts the interaction between a dark soliton and a bright soliton with the parameters $\rho=1, \kappa=2, c_1=0.75(\gamma_1=0.90\pi), c_2=0.24(\gamma_2=1.2\pi)$ and $\zeta_{10}=\zeta_{20}=0$, showing that the dark soliton propagates faster than the bright soliton. The asymptotic amplitudes of the dark and bright solitons are given respectively by $A_{d1}=1.0$ and $A_{b2}=0.92$ and hence the former is a black soliton. The figure clearly shows the solitonic behavior of the solution. The dark soliton suffers a positive phase shift whereas the bright soliton suffers a negative phase shift. The formulas $\Delta x_1$ for the dark soliton and $\Delta x_2$ for the bright soliton for the phase shifts are given respectively by $$\Delta x_1=-{1\over a_1}\,{\rm ln}\left|{\sin{1\over 2}\left(\gamma_1+\gamma_2\right)\over \sin{1\over 2}\left(\gamma_1-\gamma_2\right)}\right|, \quad a_1=\sqrt{\kappa^3\rho^2(1+\kappa\rho^2)}\,\sin\,\gamma_1,\qquad 0<\gamma_1<\pi, \eqno(4.44a)$$ $$\Delta x_2=-{1\over a_2}\,{\rm ln}\left|{\sin{1\over 2}\left(\gamma_2+\gamma_1\right)\over \sin{1\over 2}\left(\gamma_2-\gamma_1\right)}\right|, \quad a_2=\sqrt{\kappa^3\rho^2(1+\kappa\rho^2)}\sin\,\gamma_2, \qquad \pi<\gamma_2<2\pi. \eqno(4.44b)$$ As in the case of the dark-dark solitons, one can see that $\Delta x_1>0$ and $\Delta x_2<0$. In the present example, $\Delta x_1= 0.70$ and $\Delta x_2=-0.36$. [*4.3. Dark $N$-soliton solution*]{} The preceding analysis reveals that the asymptotic form of the $N$-soliton solution will be represented by a superposition of $n$ dark solitons and $N-n$ bright solitons where $n$ is an arbitrary nonnegative integer in the interval $0\leq n\leq N$. The derivation of the large time asymptotic for the general $N$-soliton solution can be done following the similar procedure to that used for the two-soliton case. Hence, we outline the result. We address the dark soliton solutions satisfying the conditions $\kappa>$ and $a_j>0\ (j=1, 2, ..., N)$. The analysis for the bright soliton solutions as well as an arbitrary combination of dark and bright solitons can be carried out in exactly the same way. To begin with, we order the magnitude of the velocity of each soliton as $c_1>c_2> ... >c_N>0$. We take the limit $t \rightarrow -\infty$ with $\theta_n$ being finite. Since in this limit, $|z_j|\rightarrow 0$ for $j<n$ and $|z_j|\rightarrow \infty$ for $n<j$, we find that the leading-order asymptotic of the tau function $f=f_N$ from (3.1) with (3.2) can be written in the form $$f_N\sim \left|(c_{jk})_{n+1\leq j,k\leq N}\right|\prod_{j=n+1}^N(z_jz_j^*)\left(1+{\left|(c_{jk})_{n\leq j,k\leq N}\right|\over \left|(c_{jk})_{n+1\leq j,k\leq N}\right|}\,z_nz_n^*\right). \eqno (4.45a)$$ Here, $(c_{jk})$ is a matrix of Cauchy type given by $$c_{jk}={\kappa-{\rm i}p_j\over p_j+p_k^*}, \quad 1\leq j,\ k\leq N. \eqno(4.45b)$$ Referring to the well-known Cauchy’s formula, the determinant of the matrix $(c_{jk})$ is evaluated as $$\left|(c_{jk})_{m\leq j,k\leq n}\right|=\prod_{j=m}^n(\kappa-{\rm i}p_j){\prod_{m\leq j<k\leq n}(p_j-p_k)(p_j^*-p_k^*)\over \prod_{m\leq j,k\leq n}(p_j+p_k^*)},\quad 1\leq m<n\leq N. \eqno(4.45c)$$ If we use (4.45c), we have $${\left|(c_{jk})_{n\leq j,k\leq N}\right|\over \left|(c_{jk})_{n+1\leq j,k\leq N}\right|}={\kappa-{\rm i}p_n\over p_n+p_n^*}\,{\rm exp}\left[-\sum_{j=n+1}^N{\rm ln}\left({p_n+p_j^*\over p_n-p_j}\right) -\sum_{j=n+1}^N{\rm ln}\left({p_n^*+p_j\over p_n^*-p_j^*}\right)\right]. 
\eqno(4.46)$$ Substitution of (4.46) into (4.45) now gives $$f_N\sim \left|(c_{jk})_{n+1\leq j,k\leq N}\right|\prod_{j=n+1}^N(z_jz_j^*)\left(1+{\kappa-{\rm i}p_n\over p_n+p_n^*}\,z_n^\prime {z_n^\prime}^*\right), \eqno(4.47a)$$ where $$z_n^\prime=z_n\,{\rm exp}\left[-\sum_{j=n+1}^N{\rm ln}\left({p_n+p_j^*\over p_n-p_j}\right)\right]. \eqno(4.47b)$$ The leading-order asymptotic of $g_N$ in the limit of $t\rightarrow -\infty$ can be derived in the same way. It takes the form $$g_N\sim \left|(c_{jk}^\prime)_{n+1\leq j,k\leq N}\right|\prod_{j=n+1}^N(z_jz_j^*)\left(1-{\kappa+{\rm i}p_n^*\over p_n+p_n^*}\,{p_n\over p_n^*}\,z_n^\prime {z_n^\prime}^*\right), \eqno(4.48a)$$ where $$c_{jk}^\prime=-{\kappa-{\rm i}p_j\over p_j+p_k^*}\,{p_j\over p_k^*}, \quad 1\leq j, k\leq N. \eqno(4.48b)$$ The asymptotic form of the dark $N$-soliton solution follows from (2.1), (4.47) and (4.48). It reads $$u_N\sim \rho\,{\rm exp}\left\{{\rm i} \left(\kappa x-\omega t+\phi_n^{(-)}\right)\right\}{1-{\kappa+{\rm i}p_n^*\over p_n+p_n^*}\,{p_n\over p_n^*}\,z_n^\prime {z_n^\prime}^*\over 1+{\kappa-{\rm i}p_n\over p_n+p_n^*}\,z_n^\prime {z_n^\prime}^*}, \eqno(4.49a)$$ with $$\phi_n^{(-)}={\rm arg}\left[\prod_{j=n+1}^N\left({\kappa+{\rm i}p_j^*\over \kappa-{\rm i}p_j}\,{p_j\over p_j^*}\right)\right]+(N-n)\pi. \eqno(4.49b)$$ This expression can be rewritten in terms of the one-soliton solution as $$u_N\sim {\rm exp}\left({\rm i}\phi_n^{(-)}\right)u_1(\theta_n+\Delta\theta_n^{(-)}), \eqno(4.50a)$$ with $$\Delta\theta_n^{(-)}=-\sum_{j=n+1}^N{\rm ln}\left|{p_n+p_j^*\over p_n-p_j}\right|. \eqno(4.50b)$$ By a similar asymptotic analysis, we can derive the asymptotic form of $u_N$ in the limit of $t\rightarrow +\infty$. We find that $$u_N\sim {\rm exp}\left({\rm i}\phi_n^{(+)}\right)u_1(\theta_n+\Delta\theta_n^{(+)}), \eqno(4.51a)$$ with $$\Delta\theta_n^{(+)}=-\sum_{j=1}^{n-1}{\rm ln}\left|{p_n+p_j^*\over p_n-p_j}\right|, \eqno(4.51b)$$ $$\phi_n^{(+)}={\rm arg}\left[\prod_{j=1}^{n-1}\left({\kappa+{\rm i}p_j^*\over \kappa-{\rm i}p_j}\,{p_j\over p_j^*}\right)\right]+(n-1)\pi. \eqno(4.51c)$$ We see from (4.50) and (4.51) that in the rest frame of reference, the asymptotic form of the dark $N$-soliton solution can be represented by a superposition of $N$ independent dark one-soliton solutions, the only difference being the phase shifts of each soliton caused by the collisions. It follows from (4.50b) and (4.51b) that the formula for the total phase shift of the $n$th soliton is given by $$\Delta x_n={1\over a_n}\left(\sum_{j=n+1}^N{\rm ln}\left|{p_n+p_j^*\over p_n-p_j}\right|-\sum_{j=1}^{n-1}{\rm ln}\left|{p_n+p_j^*\over p_n-p_j}\right|\right), \quad n=1, 2, ..., N. \eqno(4.52)$$ As in the two-soliton case, we can rewrite the above formula in terms of the variables $\gamma_j$ defined by (4.6) and (4.7). Explicitly, $$\Delta x_n={1\over a_n}\left(\sum_{j=n+1}^N{\rm ln}\left|{\sin{1\over 2}(\gamma_n+\gamma_j)\over \sin{1\over 2}(\gamma_n-\gamma_j)}\right| -\sum_{j=1}^{n-1}{\rm ln}\left|{\sin{1\over 2}(\gamma_n+\gamma_j)\over \sin{1\over 2}(\gamma_n-\gamma_j)}\right|\right),$$ $$a_n=\sqrt{\kappa^3\rho^2(1+\kappa\rho^2)}\sin\,\gamma_n, \qquad 0<\gamma_n<\pi,\qquad n=1, 2, ..., N. \eqno(4.53)$$ The formulas (4.52) and (4.53) reduce to (4.36), (4.37), (4.42) and (4.43) for the special case of $N=2$. They clearly show that each soliton has pairwise interactions with other solitons, i.e., there are no many-particle collisions among solitons. This feature is common to that of the bright $N$-soliton solution considered in I. 
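As a sanity check of the reduction of (4.52) and (4.53) to the two-soliton case, the short script below (an illustrative sketch using the parameters quoted for figure 11) evaluates (4.37) and (4.43) and reproduces the phase shifts reported in section 4.2.1; the small difference in $\Delta x_2$ comes from rounding of the quoted angular variables.

```python
import numpy as np

# Parameters quoted for figure 11: rho = 1, kappa = 2, gamma_1 = 0.90*pi, gamma_2 = 0.80*pi.
rho, kappa = 1.0, 2.0
g1, g2 = 0.90 * np.pi, 0.80 * np.pi

amp = np.sqrt(kappa**3 * rho**2 * (1.0 + kappa * rho**2))   # common factor of a_1 and a_2
a1, a2 = amp * np.sin(g1), amp * np.sin(g2)

shift = np.log(abs(np.sin(0.5 * (g1 + g2)) / np.sin(0.5 * (g1 - g2))))
dx1 = shift / a1        # eq. (4.37): fast soliton, positive phase shift
dx2 = -shift / a2       # eq. (4.43): slow soliton, negative phase shift

print(f"{dx1:.2f} {dx2:.2f}")   # 0.70 -0.37, cf. 0.70 and -0.36 quoted in the text
```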
In this paper, the system of bilinear equations reduced from the FL equation has been derived and used to construct the dark $N$-soliton solution. The corresponding $N$-soliton solution derived in \[7\] using the Bäcklund transformation follows from our solution (2.1) with (3.1) and (3.2) if one introduces the angular variables $\gamma_j$ according to the relations (4.7). We have found that unlike the bright soliton solutions obtained in I, the complex amplitude parameters $p_j$ are subjected to the constraints (3.2c), which have prevented a direct proof of the solution. To overcome this difficulty, we have employed a trilinear equation in place of one of the bilinear equations, in addition to an auxiliary variable $\tau$ in (3.2c). As a byproduct, this trilinear equation has led for the first time to a simple formula for the dark $N$-soliton solution of the derivative NLS equation on the background of a plane wave. Note that the dark soliton solutions on a constant background \[13-19\] stem simply from the above-mentioned solution in the zero limit of the wavenumber $\kappa$. However, this limiting procedure cannot be performed for the dark $N$-soliton solution of the FL equation due to the singularity of the dispersion relation. We have seen that the soliton solutions presented here exhibit several new features. Specifically, both the dark and bright solitons exist depending on the sign of the wavenumber $\kappa$ and that of the real part of the complex amplitude parameter. Of particular interest is the existence of an algebraic dark soliton which appears only in the case of negative $\kappa$. Finally, the asymptotic analysis of the two- and general $N$-soliton solutions has clarified their structure and dynamics. In particular, the latter solution has been shown to include $n$ dark solitons and $N-n$ bright solitons on a nonzero background, with $n$ being an arbitrary nonnegative integer not exceeding $N$. The application of the results summarized above to nonlinear fiber optics will be an interesting issue for future research. This work was partially supported by the Grant-in-Aid for Scientific Research (C) No. 22540228 from the Japan Society for the Promotion of Science. 1. Fokas A S 1995 [On a class of physically important integrable equations]{} [*Physica D*]{} [**87**]{} 145-150 2. Lenells J 2009 [Exactly solvable model for nonlinear pulse propagation in optical fibers]{} [*Stud. Appl. Math.*]{} [**123**]{} 215-232 3. Lenells J and Fokas A S 2009 [On a novel integrable generalization of the nonlinear Schrödinger equation]{} [*Nonlinearity*]{} [**22**]{} 11-27 4. Lenells J 2010 [Dressing for a novel integrable generalization of the nonlinear Schrödinger equation]{} [*J. Nonlinear Sci.*]{} [**20**]{} 709-722 5. Kundu A 2010 [Two-fold integrable hierarchy of nonholonomic deformation of the derivative nonlinear Schrödinger and the Lenells-Fokas equation]{} [*J. Math. Phys.*]{} [**51**]{} 022901 6. Matsuno Y 2011 [A direct method of solution for the Fokas-Lenells derivative nonlinear Schrödinger equation: I. Bright soliton solutions]{} [*J. Phys. A: Math. Theor.*]{} [**45**]{} 235202 7. Vekslerchik V E 2011 [Lattice representation and dark solitons of the Fokas-Lenells equation]{} [*Nonlinearity*]{} [**24**]{} 1165-1175 8. Hirota R 2004 [*The Direct Method in Soliton Theory*]{} (New York: Cambridge) 9. Matsuno Y 1984 [*Bilinear Transformation Method*]{} (New York: Academic) 10.
Vein R and Dale P 1999 [*Determinants and Their Applications in Mathematical Physics*]{} (New York: Springer) 11. Matsuno Y 2011 [The [*N*]{}-soliton solution of a two-component modified nonlinear Schrödinger equation]{} [*Phys. Lett.*]{} [**A 375**]{} 3090-3094 12. Matsuno Y 2011 [The bright [*N*]{}-soliton solution of a multi-component modified nonlinear Schrödinger equation]{} [*J. Phys. A: Mat. Theor.*]{} [**44**]{} 495202 13. Kawata T and Inoue H 1978 [Exact solutions of derivative nonlinear Schrödinger equation under the nonvanishing conditions]{} [*J. Phys. Soc. Jpn.*]{} [**44**]{} 1968-1976 14. Kawata T, Kobayashi N and Inoue H 1979 [Soliton solutions of the derivative nonlinear Schrödinger equation]{} [*J. Phys. Soc. Jpn.*]{} [**46**]{} 1008-1015 15. Chen X J, Yang J and Lam W K 2006 [$N$-soliton solution for the derivative nonlinear Schrödinger equation with nonvanishing boundary conditions]{} [*J. Phys. A: Math. Gen.*]{} [**39**]{} 3263-3274 16. Laskin V M 2007 [$N$-soliton solutions and perturbation theory for the derivative nonlinear Schrödinger equation with nonvanishing boundary condition]{} [*J. Phys. A: Math. Theor.*]{} [**40**]{} 6119-6132 17. Steudel H 2003 [ The hierarchy of multi-soliton solutions of the derivative nonlinear Schrödinger equation]{} [*J. Phys. A: Math. Gen.*]{} [**36**]{} 1931-1946 18. Xu S, He J and Wang L 2011 [The Darboux transformation of the derivative nonlinear Schrödinger equation]{} [*J. Phys. A: Math. Theor.*]{} [**44**]{} 305203 19. Li M, Tian B, Liu W J, Zhang H Q and Wang P 2010 [Dark and antidark solitons in the modified nonlinear Schrödinger equation accounting for the self-steepening effect]{} [*Phys. Rev. E*]{} [**81**]{} 046606 20. Mij[ø]{}lhus E 1976 [On the modulational instability of hydromagnetic waves parallel to the magnetic field]{} [*J. Plasma Phys.*]{} [**16**]{} 321-334 21. Ichikawa Y H, Konno K, Wadati M and Sanuki H 1980 [Spiky soliton in circular polarized Alfvén wave]{} [*J. Phys. Soc. Jpn.*]{} [**48**]{} 279-286 22. Wright III O C 2009 [Some homoclinic connections of a novel integrable generalized nonlinear Schrödinger equation]{} [*Nonlinearity*]{} [**22**]{} 2633-2643 23. Lü X and Tian B 2012 [Novel behavior and properties for the nonlinear pulse propagation in optical fibers]{} [*Europhys. Lett.*]{} [**97**]{} 10005 24. Mio K, Ogino T, Minami K and Takeda S 1976 [Modified nonlinear Schrödinger equation for Alfvén waves propagating along the magnetic field in cold plasmas]{} [*J. Phys. Soc. Jpn.*]{} [**41**]{} 265-271 25. Mij[ø]{}lhus E 1978 [A note on the modulational instability of long Alfvén waves parallel to the magnetic field]{} [*J. Plasma Phys.*]{} [**19**]{} 437-447 [^1]: [*E-mail address*]{}: matsuno@yamaguchi-u.ac.jp
--- abstract: | Today, the vocabulary size for language models in large vocabulary speech recognition is typically several hundreds of thousands of words. While this is already sufficient in some applications, the out-of-vocabulary words are still limiting the usability in others. In agglutinative languages the vocabulary for conversational speech should include millions of word forms to cover the spelling variations due to colloquial pronunciations, in addition to the word compounding and inflections. Very large vocabularies are also needed, for example, when the recognition of rare proper names is important. Previously, very large vocabularies have been efficiently modeled in conventional n-gram language models either by splitting words into subword units or by clustering words into classes. While vocabulary size is not as critical anymore in modern speech recognition systems, training time and memory consumption become an issue when state-of-the-art neural network language models are used. In this paper we investigate techniques that address the vocabulary size issue by reducing the effective vocabulary size and by processing large vocabularies more efficiently. The experimental results in conversational Finnish and Estonian speech recognition indicate that properly defined word classes improve recognition accuracy. Subword n-gram models are not better on evaluation data than word n-gram models constructed from a vocabulary that includes all the words in the training corpus. However, when recurrent neural network (RNN) language models are used, their ability to utilize long contexts gives a larger gain to subword-based modeling. Our best results are from RNN language models that are based on statistical morphs. We show that the suitable size for a subword vocabulary depends on the language. Using time delay neural network (TDNN) acoustic models, we were able to achieve new state of the art in Finnish and Estonian conversational speech recognition, 27.1 % word error rate in the Finnish task and 21.9 % in the Estonian task. author: - 'Seppo Enarvi, Peter Smit, Sami Virpioja, and Mikko Kurimo,  [^1][^2][^3]' bibliography: - 'IEEEabrv.bib' - 'references.bib' title: Automatic Speech Recognition with Very Large Conversational Finnish and Estonian Vocabularies --- [Enarvi : ASR with Very Large Conversational Finnish and Estonian Vocabularies]{} language modeling, word classes, subword units, artificial neural networks, automatic speech recognition Introduction ============ and Estonian are agglutinative languages, meaning that words are formed by concatenating smaller linguistic units, and a great deal of grammatical information is conveyed by inflection. Modeling these inflected words correctly is important for automatic speech recognition, to produce understandable transcripts. Recognizing a suffix correctly can also help to predict the other words in the sentence. By collecting enough training data, we can get a good coverage of the words in one form or another—perhaps names and numbers being an exception—but we are far from having enough training data to find examples of all the inflected word forms. Another common feature of Finnish and Estonian is that the orthography is phonemic. Consequently, the spelling of a word can be altered according to the pronunciation changes in conversational language. Especially Finnish conversations are written down preserving the variation that happens in colloquial pronunciation [@Enarvi:2013]. 
Modeling such languages as a sequence of complete word forms becomes difficult, as most of the forms are very rare. In our data sets, most of the word forms appear only once in the training data. Agglutination has a far more limited impact on the vocabulary size in English. Nevertheless, the vocabularies used in English language have grown as larger corpora are used and computers are able to store larger language models in memory. Moreover, as speech technology improves, we start to demand better recognition of e.g. proper names that do not appear in the training data. Modern automatic speech recognition (ASR) systems can handle vocabularies as large as millions of words with simple n-gram language models, but a second recognition pass with a neural network language model (NNLM) is now necessary for achieving state-of-the-art performance. Vocabulary size is much more critical in NNLMs, as neural networks take a long time to train, and training and inference times depend heavily on the vocabulary size. While computational efficiency is the most important reason for finding alternatives to word-based modeling, words may not be the best choice of language modeling unit with regard to model performance either, especially when modeling agglutinative languages. Subword models have been successfully used in Finnish ASR for more than a decade [@Hirsimaki:2006]. In addition to reducing the complexity of the language model, subword models bring the benefit that even words that do not occur in the training data can be predicted. However, subwords have not been used for modeling *conversational* Finnish or Estonian before. Our earlier attempts to use subwords for conversational Finnish ASR failed to improve over word models. In this paper, we show how subword models can be used in the FST-based Kaldi speech recognition toolkit and obtain the best results to date by rescoring subword lattices using subword NNLMs, 27.1 % WER for spontaneous Finnish conversations, and 21.9 % WER for spontaneous Estonian conversations. This is the first published evaluation of subwords in conversational Finnish and Estonian speech recognition tasks. Our conclusions are slightly different from those earlier published on standard Finnish and Estonian tasks, where n-gram models based on statistical morphs have provided a large improvement to speech recognition accuracy [@Siivola:2003; @Hirsimaki:2006; @Kurimo:2006]. An important reason is that we are able to use very large vocabularies (around two million words) in the word-based n-gram models. Recently it has been noticed that the gap between subword and word models becomes quite small when such a large word vocabulary is used [@Varjokallio:2016]. In our conversational Finnish and Estonian experiments, word and subword n-gram models performed quite similarly in terms of evaluation set word error rate. Our new observation is that neural networks are especially beneficial for modeling subwords—subword NNLMs are clearly better than word NNLMs trained using the full vocabulary. Another approach for very large vocabulary speech recognition is using word classes in the language models. We evaluate different algorithms for clustering words into classes. Recent comparisons have shown an advantage in perplexity for the exchange algorithm over Brown clustering, while clusterings created from distributed word representations have not worked as well [@Botros:2015; @Dehdari:2016; @Song:2017]. 
Additionally, we present a novel rule-based algorithm that clusters colloquial Finnish word forms, and also evaluate word error rate. Surprisingly, class-based n-gram models perform better than word models in terms of perplexity and speech recognition accuracy in conversational Finnish and Estonian. Word classes and subword units are especially attractive in NNLMs, because the vocabulary size has a great impact on the memory consumption and computational complexity. The size of the input layer projection matrix and the output layer weight matrix, as well as the time required to normalize the output probabilities using softmax, have a linear dependency on the vocabulary size. The output normalization can also be made more efficient by using one of the several methods that try to approximate the full softmax, either by modifying the network structure or the training objective. So far the only comparison of these approximations for large-vocabulary NNLMs that we are aware of is in [@Chen:2016]. They found hierarchical softmax to perform best in terms of perplexity with a vocabulary of 800,000 words and a feedforward network. We compare hierarchical softmax, sampling-based softmax, class-based models, and subword models in speech recognition on languages that are known for very large vocabularies. Both data sets contain around two million unique word forms. In our experiments where the training time was limited to 15 days, class-based NNLMs clearly exceeded the performance of word-based NNLMs in terms of perplexity and recognition accuracy. The best results were from subword models. In the Estonian task, the best subword vocabularies were quite large, and the best result was from a class-based subword model. We also test two methods for weighting separate language modeling data sets: weighted sampling, which has already been introduced in [@Schwenk:2005], and update weighting, which is a novel method. All the neural network language modeling techniques presented in this paper have been implemented in the open-source toolkit TheanoLM [@Enarvi:2016], which we hope will lower the threshold for using neural network language models in speech recognition research.[^4] We implemented hierarchical softmax [@Goodman:2001], noise-contrastive estimation [@Gutmann:2010], and BlackOut [@Ji:2016] training criteria, and a lattice decoder that takes advantage of parallel computation using a GPU. We use a fairly complex recurrent model consisting of an LSTM layer and a highway network to obtain state-of-the-art results, and run the experiments on a high-end GPU. Our experiments show that class and subword models are more attractive than word models for several reasons. They are efficient computationally and in terms of memory consumption, and they can achieve better performance than word models. Usually subword vocabularies include all the individual letters, meaning that any word that uses the same letters can be constructed. Class models are restricted to a certain vocabulary, but the efficiency is not limited by the vocabulary size, so very large vocabularies can be used. To summarize, this is the first time Finnish and Estonian subword models have outperformed word models in conversational speech recognition, even without limiting the word vocabulary size. We compare word clustering techniques and show that class-based models outperform full-vocabulary word models in these tasks.
We also present the first comparison of word, class, and subword NNLMs trained using different softmax approximations, applied to speech recognition. Finally, we test a novel method for weighting NNLM training corpora. Class-Based Language Models =========================== Finnish and Estonian are highly agglutinative languages, so the number of different word forms that appear in training corpora is huge. The pronunciation variation in colloquial Finnish is also written down, making it very difficult to reliably estimate the probability of the rare words in new contexts. If we can cluster word forms into classes based on in which contexts they appear, we can get more reliable estimates for the class n-gram probabilities. In a class-based language model, the probability of a word within its class is usually modeled simply as the unigram probability of the word in the training data [@Brown:1992]: $$\label{eq:class-ngram} \begin{split} &P(w_t \mid w_{t-n+1} \ldots w_{t-1}) = \\ &P(c(w_t) \mid c(w_{t-n+1}) \ldots c(w_{t-1})) P(w_t \mid c(w_t)), \end{split}$$ where $c(w)$ is a function that maps a word to a class. This is also the model that we use in this article. Statistical Methods for Clustering Words into Classes ----------------------------------------------------- A common cost function for learning the word classes is the perplexity of a class bigram model, which is equivalent to using the log probability objective: $$\label{eq:class-bigram} \mathcal{L} = \sum_t [\log P(c(w_t) \mid c(w_{t-1})) + \log P(w_t \mid c(w_t))]$$ Finding the optimal clustering is computationally very challenging. Evaluating the cost involves summation over all adjacent classes in the training data [@Brown:1992]. The algorithms that have been proposed are suboptimal. Another approach that can be taken is to use knowledge about the language to group words that have a similar function. Brown et al. [@Brown:1992] start by assigning each word to a distinct class, and then merge classes in a greedy fashion. A naive algorithm would evaluate the objective function for each pair of classes. One iteration of the naive algorithm would on average run in $\mathcal{O}(N_V^4)$ time, where $N_V$ is the size of the vocabulary. This involves a lot of redundant computation that can be eliminated by storing some statistics between iterations, reducing the time required to run one iteration to $\mathcal{O}(N_V^2)$. To further reduce the computational complexity, they propose an approximation where, at any given iteration, only a subset of the vocabulary is considered. Starting from the most frequent words, $N_C$ words are assigned to distinct classes. On each iteration, the next word is considered for merging to one of the classes. The running time of one iteration is $\mathcal{O}(N_C^2)$. The algorithm stops after $N_V - N_C$ iterations, and results in all the words being in one of the $N_C$ classes. The exchange algorithm proposed by Kneser and Ney [@Kneser:1993] starts from some initial clustering that assigns every word to one of $N_C$ classes. The algorithm iterates through all the words in the vocabulary, and evaluates how much the objective function would change by moving the word to each class. If there are moves that would improve the objective function, the word is moved to the class that provides the largest improvement. By storing word and class bigram statistics, the evaluation of the objective function can be done in $\mathcal{O}(N_C)$, and thus one word iterated in $\mathcal{O}(N_C^2)$ time [@Martin:1995]. 
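To make the procedure concrete, the following is a minimal, deliberately unoptimized sketch of the exchange algorithm (illustrative toy code, not any of the toolkits used in this work): it re-estimates the full objective of Equation (2) for every candidate move instead of using the incremental count updates described above.

```python
from collections import Counter
from math import log

def class_bigram_ll(corpus, cls):
    """Objective (2): sum_t [log P(c(w_t)|c(w_{t-1})) + log P(w_t|c(w_t))]."""
    bigrams = list(zip(corpus, corpus[1:]))
    cc = Counter((cls[a], cls[b]) for a, b in bigrams)   # class bigram counts
    ch = Counter(cls[a] for a, _ in bigrams)             # class history counts
    cw = Counter(corpus)                                 # word unigram counts
    c = Counter(cls[w] for w in corpus)                  # class unigram counts
    return sum(log(cc[cls[a], cls[b]] / ch[cls[a]]) + log(cw[b] / c[cls[b]])
               for a, b in bigrams)

def exchange(corpus, num_classes, max_iters=20):
    # Default initialization of the exchange tool: sort words by frequency
    # and assign word i to class i mod num_classes.
    freq = Counter(corpus)
    vocab = [w for w, _ in freq.most_common()]
    cls = {w: i % num_classes for i, w in enumerate(vocab)}
    for _ in range(max_iters):
        moved = False
        for w in vocab:
            orig_c = cls[w]
            best_c, best_ll = orig_c, class_bigram_ll(corpus, cls)
            for k in range(num_classes):
                if k == orig_c:
                    continue
                cls[w] = k                      # tentatively move w to class k
                ll = class_bigram_ll(corpus, cls)
                if ll > best_ll:
                    best_c, best_ll = k, ll
            cls[w] = best_c                     # keep the best move found
            moved = moved or (best_c != orig_c)
        if not moved:
            break
    return cls
```

With vocabularies of millions of words this naive re-evaluation is far too slow; practical implementations maintain the word and class bigram statistics mentioned above so that each candidate move can be evaluated in $\mathcal{O}(N_C)$ time.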
The number of words that will be iterated is not limited by a fixed bound. Even though we did not perform the experiments in such a way that we could get a fair comparison of the training times, we noticed that our exchange implementation needed less time to converge than what the Brown clustering needed to finish.[^5] These algorithms perform a lot of computation of statistics and evaluations over pairs of adjacent classes and words. In practice the running times are better than the worst case estimates, because all classes and words do not follow each other. The algorithms can also be parallelized using multiple CPUs, on the expense of memory requirements. Parallelization using a GPU would be difficult, because that would involve sparse matrices. The exchange algorithm is greedy so the order in which the words are iterated may affect the result. The initialization may also affect whether the optimization will get stuck in a local optimum, and how fast it will converge. We use the exchange[^6] tool, which by default initializes the classes by sorting the words by frequency and assigning word $w_i$ to class $i \mod N_C$, where $i$ is the sorted index. We compare this to initialization from other clustering methods. Clustering Based on Distributed Representation of Words ------------------------------------------------------- Neural networks that process words need to represent them using real-valued vectors. The networks learn the word embeddings automatically. These *distributed representations* are interesting on their own, because the network tends to learn similar representation for semantically similar words [@Mikolov:2013:NAACL]. An interesting alternative to statistical clustering of words is to cluster words based on their vector representations using traditional clustering methods. Distributed word representations can be created quickly using shallow networks, such as the Continuous Bag-of-Words (CBOW) model [@Mikolov:2013:ICLR]. We use word2vec[^7] to cluster words by creating word embeddings using the CBOW model and clustering them using k-means. A Rule-Based Method for Clustering Finnish Words ------------------------------------------------ Much of the vocabulary in conversational Finnish text is due to colloquial Finnish pronunciations being written down phonetically. There are often several ways to write the same word depending on how colloquial the writing style is. Phonological processes such as reductions (“miksi” $\rightarrow$ “miks” \[*why*\]) and even word-internal sandhi (“menenpä” $\rightarrow$ “menempä” \[*I will go*\]) are often visible in written form. Intuitively grouping these different phonetic representations of the same word together would provide a good clustering. While the extent to which a text is colloquial does provide some clues for predicting the next word, in many cases these word forms serve exactly the same function. This is closely related to normalization of imperfect text, a task which is common in all areas of language technology. Traditionally text normalization is based on hand-crafted rules and lookup tables. In the case that annotated data is available, supervised methods can be used for example to expand abbreviations [@Sproat:2001]. When annotations are not available, candidate expansions for a non-standard word can be found by comparing its lexical or phonemic form to standard words [@Han:2011]. The correct expansion often depends on the context. 
A language model can be incorporated to disambiguate between the alternative candidates when normalizing text. We are aware of one earlier work where colloquial Finnish has been translated to standard Finnish using both rule-based normalization and statistical machine translation [@Listenmaa:2015]. Two constraints make our task different from text normalization: a word needs to be classified in the same way regardless of the context, and a word cannot be mapped to a word sequence. Our last clustering method, *Rules*, is based on a set of rules that describe the usual reductions and alternations in colloquial words. We iterate over a standard Finnish vocabulary and compare the standard Finnish word with every word in a colloquial Finnish vocabulary. If the colloquial word appears to be a reduced pronunciation of the standard word, these words are merged into a class. Because each word can appear in at most one class, multiple standard words can be merged into one class, but this is rare. Thus, there will be only a handful of words in each class. Larger classes can be created by merging the classes produced by this algorithm using some other clustering technique. Subword Language Models ======================= Subword modeling is another effective technique to reduce vocabulary size. We use the Morfessor method [@Creutz:2002; @Creutz:2007:1], which has been successfully applied in speech recognition of many agglutinative languages [@Kurimo:2006; @Creutz:2007:2]. Morfessor is an unsupervised method that uses a statistical model to split words into smaller fragments. As these fragments often resemble the surface forms of morphemes, the smallest information-bearing units of a language, we will use the term “morph” for them. Morfessor has three components: a model, a cost function, and the training and decoding algorithm. The model consists of a lexicon and a grammar. The lexicon contains the properties of the morphs, such as their written forms and frequencies. The grammar contains information on how the morphs can be combined into words. The Morfessor cost function is derived from MAP estimation with the goal of finding the optimal parameters $\theta$ given the observed training data $D_W$: $$\begin{aligned} \theta_{MAP} &= \operatorname*{arg\,max}_{\theta}P(\theta \mid D_W) \\ &= \operatorname*{arg\,max}_{\theta}P(\theta)P(D_W \mid \theta) \end{aligned}$$ The objective function to be maximized is the logarithm of the product $P(\theta)P(D_W \mid \theta)$. In a semisupervised setting, it is useful to add a hyperparameter to control the weight of the data likelihood [@Kohonen:2010]: $$\mathcal{L}(\theta, D_W) = \log P(\theta) + \alpha \log P(D_W \mid \theta)$$ We use the hyperparameter $\alpha$ to control the degree of segmentation in a heuristic manner. This allows the segmentation to be optimized either for perplexity or speech recognition accuracy, or to obtain a lexicon of a specific size. A greedy search algorithm is used to find the optimal segmentation of morphs, given the training data. When the best model is found, it is used to segment the language model training corpus using the Viterbi algorithm. We apply the Morfessor 2.0 implementation[^8] of the Morfessor Baseline algorithm with the hyperparameter extension [@Virpioja:2013]. In the output segmentation, we mark the in-word boundaries by prepending and appending a “+” character to the morph surface forms.
For example the compound word “luentokalvoja” is segmented into “luento kalvo ja” and then transformed to “luento+ +kalvo+ +ja” \[*lecture+ +slide+ +s*\] before language model training. All four different variants of a subword (e.g. “kalvo”, “kalvo+”, “+kalvo”, and “+kalvo+”) are treated as separate tokens in language model training. As high-order n-grams are required to provide enough context information for subword-based modeling, we use variable-length n-gram models trained using the VariKN toolkit[^9] that implements the Kneser-Ney growing and revised Kneser pruning algorithms [@Siivola:2007]. In the speech recognition framework based on weighted finite-state transducers (FSTs), we restrict the lexicon FST in such a way that only legal sequences (meaning that a morph can start with “+” if and only if the previous morph ends with a “+”) are allowed [@Smit:2017]. After decoding the ASR results, the morphs are joined together to form words for scoring. Neural Network Language Models ============================== Recurrent neural networks are known to work well for modeling language, as they can capture the long-term dependencies neglected by n-gram models [@Mikolov:2010]. Especially the subword-based approach should benefit from this capability of modeling long contexts. In this article we experiment with language models that are based on LSTMs and highway networks. These are layer types that use *sigmoid gates* to control information flow. The gates are optimized along with the rest of the neural network parameters, and learn to pass the relevant activations over long distances. LSTM [@Hochreiter:1997] is a recurrent layer type. Each gate can be seen as an RNN layer with two weight matrices, $W$ and $U$, a bias vector $b$, and sigmoid activation. The output of a gate at time step $t$ is $$g(x_t, h_{t-1}) = \sigma(W x_t + U h_{t-1} + b),$$ where $x_t$ is the output vector of the previous layer and $h_{t-1}$ is the LSTM layer state vector from the previous time step. When a signal is multiplied by the output of a sigmoid gate, the system learns to discard unimportant elements of the vector depending on the gate’s input. An LSTM layer uses three gates to select what information to pass from the previous time step to the next time step unmodified, and what information to modify. The same idea can be used to select what information to pass to the next layer. Highway networks [@Srivastava:2015] use gates to facilitate information flow across many layers. At its simplest, only one gate is needed. In the feedforward case, there is only one input, $x_t$, and the gate needs only one weight matrix, $W_\sigma$. The gate learns to select between the layer’s input and its activation: $$\begin{aligned} \label{eq:highway-network} g(x_t) &= \sigma(W_\sigma x_t + b_\sigma) \\ y_t &= g(x_t) \odot \tanh(W x_t + b) + (1 - g(x_t)) \odot x_t \end{aligned}$$ While LSTM helps propagation of activations and gradients in recurrent networks, deep networks benefit from highway connections. We did not notice much improvement by stacking multiple LSTM layers on top of each other. While we did not have the possibility to systematically explore different network architectures, one LSTM layer followed by a highway network seemed to perform well. The architecture used in this article is depicted in Figure \[fig:network-architecture\]. Every layer was followed by Dropout [@Srivastava:2014] at rate $0.2$. 
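The model itself is implemented in TheanoLM on top of Theano; purely as an illustration of the structure just described (and not the actual implementation), a roughly equivalent network could be sketched in PyTorch as follows (class names are hypothetical, and the additional bottleneck layers used with very large vocabularies are omitted).

```python
import torch
import torch.nn as nn

class Highway(nn.Module):
    """One-gate highway layer: y = g * tanh(W x + b) + (1 - g) * x."""
    def __init__(self, size):
        super().__init__()
        self.transform = nn.Linear(size, size)
        self.gate = nn.Linear(size, size)

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))
        return g * torch.tanh(self.transform(x)) + (1.0 - g) * x

class RNNLM(nn.Module):
    """Projection (500) -> LSTM (1500) -> highway (1500) -> softmax over the vocabulary,
    with dropout at rate 0.2 after each layer, as described in the text."""
    def __init__(self, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 500)
        self.lstm = nn.LSTM(500, 1500, batch_first=True)
        self.highway = Highway(1500)
        self.drop = nn.Dropout(0.2)
        self.output = nn.Linear(1500, vocab_size)  # dominant cost for large vocabularies

    def forward(self, word_ids, state=None):
        x = self.drop(self.embed(word_ids))
        x, state = self.lstm(x, state)
        x = self.drop(self.highway(self.drop(x)))
        return torch.log_softmax(self.output(x), dim=-1), state
```

The final linear layer and the softmax over its outputs are what the approximations discussed later in this section try to make cheaper.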
[Figure: architecture of the recurrent NNLM. The word input $w_t$ is mapped by the projection layer to a word embedding, passed through an LSTM layer (with recurrent cell state $C_t$ and hidden state $h_t$) and a highway network, and fed to the output layer that produces $y_t$.]

The input of the network at time step $t$ is $w_t$, an index that identifies the vocabulary element. The output contains the predicted probabilities for every vocabulary element, but only the output corresponding to the target word is used. The vocabulary can consist of words, word classes, subwords, or subword classes. The choice of vocabulary does not make any difference with regard to the neural network training, except that large vocabularies require more memory and are slower to train. Usually word vocabularies are limited to a *shortlist* of the most frequent words. A special token such as `<unk>` can be used in place of any out-of-shortlist (OOS) words, which is necessary with RNN language models in particular. The NNLM can be combined with a large-vocabulary n-gram model to obtain a probability for every training word. The `<unk>` probability represents the total probability mass of all OOS words, which can be distributed according to n-gram language model probabilities. An n-gram language model is convenient to integrate with a feedforward NNLM, which is a particular kind of n-gram model itself, but less trivial in our RNN decoder. It also becomes computationally demanding to normalize the resulting probability distribution correctly using a large n-gram model [@Park:2010]. In our baseline shortlist NNLMs we distribute the OOS probability according to a unigram model. When rescoring lattices, the output does not have to be a proper probability distribution. Assuming that the probability masses that the NNLM and n-gram models allocate to the OOS words are close to each other, a reasonable approximation is to replace the OOS probabilities with n-gram probabilities [@Park:2010]. We tried this, but did not get good results because the assumption was too far from the truth. Our class NNLMs are similar to Equation \[eq:class-ngram\], except that RNNs do not fix the context to $n$ previous words—the length of the history used to predict the next word is limited only by the mini-batch size. Memory consumption becomes a problem when using GPUs for training, since current GPU boards typically have no more than 12 GB of memory. Each layer learns a weight matrix whose dimensionality is input size by output size. For example, an NNLM with one hidden layer of size 1,000 and a vocabulary of size 100,000 requires a 1,000 by 100,000 matrix on the input and output layer. Assuming 32-bit floats are used, such a matrix uses 400 MB of memory. In addition, temporary matrices are needed when propagating the data through the network.
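Returning to the shortlist treatment described above, the following minimal sketch (hypothetical helper functions, shown only to make the bookkeeping explicit) builds a shortlist and distributes the `<unk>` probability mass according to a unigram model, as in our baseline shortlist NNLMs.

```python
import math
from collections import Counter

def build_shortlist(train_tokens, size):
    """Keep the `size` most frequent words; everything else maps to <unk>."""
    counts = Counter(train_tokens)
    shortlist = {w for w, _ in counts.most_common(size)}
    oos_counts = {w: c for w, c in counts.items() if w not in shortlist}
    total_oos = sum(oos_counts.values())
    # Unigram distribution over out-of-shortlist words, used to split P(<unk>).
    oos_unigram = {w: c / total_oos for w, c in oos_counts.items()}
    return shortlist, oos_unigram

def word_logprob(word, nnlm_logprobs, shortlist, oos_unigram):
    """nnlm_logprobs: log probabilities from the NNLM for shortlist words and <unk>.
    Assumes the word was seen in training; unseen words would need further smoothing."""
    if word in shortlist:
        return nnlm_logprobs[word]
    # Distribute the <unk> mass over OOS words according to their unigram weights.
    return nnlm_logprobs['<unk>'] + math.log(oos_unigram[word])
```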
Memory required to store the weight matrices can be reduced by using a small projection layer and a small layer before the output layer, or factorizing a large weight into the product of two smaller matrices [@Sainath:2013]. Another possibility is to divide weight matrices to multiple GPUs. The size of the temporary data depends also on the mini-batch size. We did the experiments with quite a complex model to see how good speech recognition accuracy we are able to achieve in these tasks. The projection layer maps words to 500-dimensional embeddings. Both the LSTM and highway network layers have 1500 outputs. With vocabularies larger than 100,000 elements we added a 500-unit feedforward layer before the output layer to reduce memory consumption. With class models mini-batches were 32 sequences and with other models 24 sequences of length 25. We also explored the possibility of training models with larger vocabularies using hierarchical softmax and sampling-based softmax. These approximations are explained in more detail in the following sections. Output Normalization -------------------- The final layer normalizes the output to provide a valid probability distribution over the output classes. Normally the softmax function is used: $$\label{eq:softmax} y_{t,i} = \frac{\exp(x_{t,i})}{\sum_j \exp(x_{t,j})}$$ At each time step $t$, the cross-entropy cost function requires computing the conditional probability of the target word only, $P(w_{t+1} \mid w_0 \ldots w_t) = y_{t,w_{t+1}}$. Still all the activations $x_{t,j}$ are needed to explicitly normalize the probability distribution. This becomes computationally expensive, because vocabularies can be very large, and the cost of computing the normalization term scales linearly with the vocabulary size. There has been a great deal of research on improving the speed of the softmax function by various approximations. Hierarchical NNLM is a class-based model that consists of a neural network that predicts the class and separate neural networks that predict the word inside a class [@Kuo:2012]. This can reduce training time of feedforward networks considerably, because different n-grams are used to train each word prediction model. Hierarchical softmax is a single model that factors the output probabilities into the product of multiple softmax functions. The idea has originally been used in maximum entropy training [@Goodman:2001], but exactly the same idea can be applied to neural networks [@Morin:2005]. SOUL combines a shortlist for the most frequent words with hierarchical softmax for the out-of-shortlist words [@Le:2011]. Adaptive softmax [@Grave:2016] is a similar approach that optimizes the word cluster sizes to minimize computational cost on GPUs. Another group of methods do not modify the model, but use sampling during training to approximate the expensive softmax normalization. These methods speed up training, but use normal softmax during evaluation. Importance sampling is a Monte Carlo method that samples words from a distribution that should be close to the network output distribution [@Bengio:2003:AISTATS]. Noise-contrastive estimation (NCE) samples random words, but instead of optimizing the cross-entropy cost directly, it uses an auxiliary cost that learns to classify a word as a training word or a noise word [@Gutmann:2010]. This allows it to treat the normalization term as a parameter of the network. 
BlackOut continues this line of research, using a stochastic version of softmax that explicitly discriminates the target word from the noise words [@Ji:2016]. Variance regularization modifies the training objective to encourage the network to learn an output distribution that is close to a real probability distribution even without explicit normalization [@Shi:2014]. This is useful for example in one-pass speech recognition, where evaluation speed is important but the output does not have to be a valid probability distribution. The model can also be modified to predict the normalization term along with the word probabilities [@Sethy:2015]. NCE objective also encourages the network to learn an approximately normalized distribution, and can also be used without softmax e.g. for speech recognition [@Chen:2015]. Hierarchical Softmax {#sec:hierarchical-softmax} -------------------- Hierarchical softmax factors the output probabilities into the product of multiple softmax functions. At one extreme, the hierarchy can be a balanced binary tree that is $log_2(N)$ levels deep, where $N$ is the vocabulary size. Each level would differentiate between two classes, and in total the hierarchical softmax would take logarithmic time. [@Morin:2005] We used a two-level hierarchy, because it is simple to implement, and it does not require a hierarchical clustering of the vocabulary. The first level performs a softmax between $\sqrt{N}$ word classes and the second level performs a softmax between $\sqrt{N}$ words inside the correct class: $$\label{eq:hierarchical-softmax} \begin{split} &P(w_t \mid w_0 \ldots w_{t-1}) = \\ &P(c(w_t) \mid w_0 \ldots w_{t-1}) P(w_t \mid w_0 \ldots w_{t-1}, c(w_t)) \end{split}$$ This already reduces the time complexity of the output layer to the square root of the vocabulary size. The clustering affects the performance of the resulting model, but it is not clear what kind of clustering is optimal for this kind of models. In earlier work, clusterings have been created from word frequencies [@Mikolov:2011:ICASSP], by clustering distributed word representations [@Mnih:2009], and using expert knowledge [@Morin:2005]. Ideally all class sizes would be equal, as the matrix product that produces the preactivations can be computed efficiently on a GPU when the weight matrix is dense. We use the same word classes in the hierarchical softmax layer that we use in class-based models, but we force equal class sizes; after running the clustering algorithm, we sort the vocabulary by class and split it into partitions of size $\sqrt{N}$. This may split some classes unnecessarily into two, which is not optimal. On the other hand it is easy to implement and even as simple methods as frequency binning seem to work [@Mikolov:2011:ICASSP]. An advantage of hierarchical softmax compared to sampling based output layers is that hierarchical softmax speeds up evaluation as well, while sampling is used only during training and the output is properly normalized using softmax during inference. Sampling-Based Approximations of Softmax ---------------------------------------- Noise-contrastive estimation [@Gutmann:2010] turns the problem from classification between $N$ words into binary classification. For each training word, a set of noise words (one in the original paper) is sampled from some simple distribution. The network learns to discriminate between training words and noise words. The binary-valued class label $C_w$ is used to indicate whether the word $w$ is a training or noise word. 
The authors derive the probability that an arbitrary word comes from either class, $P(C_w \mid w)$, given the probability distributions of both classes. The objective function is the cross entropy of the binary classifier: $$\begin{aligned} \mathcal{L} = &\sum_w [C_w \log P(C_w=1 \mid w) \\ &+ (1 - C_w) \log P(C_w=0 \mid w)] \end{aligned}$$ The expensive softmax normalization can be avoided by making the normalization term a network parameter that is learned along the weights during training. In a language model, the parameter would be dependent on the context words, but it turns out that it can be fixed to a context-independent constant without harming the performance of the resulting model [@Mnih:2012]. In the beginning of the training the cost will be high and the optimization may be unstable, unless the normalization is close to correct. We use one as the normalization constant and initialize the output layer bias to the logarithmic unigram distribution, so that in the beginning the network corresponds to the maximum likelihood unigram distribution. BlackOut [@Ji:2016] is also based on sampling a set of noise words, and motivated by the discriminative loss of NCE, but the objective function directly discriminates between the training word $w_T$ and noise words $w_N$: $$\mathcal{L} = \sum_{w_T} [\log P(w_T) + \sum_{w_N} \log (1 - P(w_N))]$$ Although not explicitly shown, the probabilities $P(w)$ are conditioned on the network state. They are computed using a weighted softmax that is normalized only on the set of training and noise words. In addition to reducing the computation, this effectively performs regularization in the output layer similarly to how the Dropout [@Srivastava:2014] technique works in the hidden layers. Often the noise words are sampled from the uniform distribution, or from the unigram distribution of the words in the training data [@Mnih:2012]. Our experiments confirmed that the choice of *proposal distribution* is indeed important. Using uniform distribution, the neural network optimization will not find as good parameters. With unigram distribution the problem is that some words may be sampled very rarely. Mikolov et al. [@Mikolov:2013:NIPS] use the unigram distribution raised to the power of $\beta$. Ji et al. [@Ji:2016] make $\beta$ a tunable parameter. They also exclude the correct target words from the noise distribution. We used the power distribution with $\beta = 0.5$ for both BlackOut and NCE. We did not modify the distribution based on the target words, however, as that would introduce additional memory transfers by the Theano computation library used by TheanoLM. We observed also that random sampling from a multinomial distribution in Theano does not work as efficiently as possible with a GPU. We used 500 noise words, shared across the mini-batch. These values were selected after noting the speed of convergence with a few values. Small $\beta$ values flatten the distribution too much and the optimal model is not reached. Higher values approach the unigram distribution, causing the network to not learn enough about the rare words. Using more noise words makes mini-batch updates slower, while using only 100 noise words we noticed that the training was barely converging. These methods seem to suffer from some disadvantages. Properly optimizing the $\beta$ parameter can take a considerable amount of time. A large enough set of noise words has to be drawn for the training to be stable, diminishing the speed advantage in our GPU implementation. 
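To make the sampling setup concrete, the following sketch (illustrative only; the actual sampling is done inside Theano in TheanoLM) constructs the power-law proposal distribution with $\beta = 0.5$ and draws one set of 500 noise words that is shared across the mini-batch, as described above.

```python
import numpy as np

def power_proposal(unigram_counts, beta=0.5):
    """Unigram distribution raised to the power beta and renormalized.
    Small beta flattens the distribution towards uniform; beta close to one
    approaches the unigram distribution."""
    p = np.asarray(unigram_counts, dtype=np.float64) ** beta
    return p / p.sum()

def sample_shared_noise(proposal, num_noise=500, rng=None):
    """One set of noise word ids, shared by all target words in the mini-batch."""
    rng = rng or np.random.default_rng()
    return rng.choice(len(proposal), size=num_noise, replace=True, p=proposal)
```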
While we did try a number of different parameter combinations, BlackOut never finished training on these data sets without numerical errors.

Decoding Lattices with RNN Language Models
------------------------------------------

While improving training speed is the motivation behind the various softmax approximations, inference is also slow on large networks. Methods that modify the network structure, such as hierarchical softmax, improve inference speed as well. Nevertheless, using an RNN language model in the first pass of large-vocabulary speech recognition is unrealistic. It is possible to create a list of the $n$ best hypotheses, or a word lattice, during the first pass, and rescore them using an NNLM in a second pass. We have implemented a word lattice decoder in TheanoLM that produces better results than rescoring n-best lists. Conceptually, the decoder propagates tokens through the lattice. Each token stores a network state and the probability of the partial path. At first one token is created at the start node with the initial network state. The algorithm iterates by propagating tokens to the outgoing links of a node, creating new copies of the tokens for each link. Evaluating a single word probability at a time would be inefficient, so the decoder combines the state from all the tokens in the node into a matrix, and the input words into another matrix. Then the network is used to simultaneously compute the probability of the target word in all of these contexts. Rescoring a word lattice using an RNN language model is equivalent to rescoring a huge n-best list, unless some approximation is used to limit the dependency of a probability on the earlier context. We apply three types of pruning, before propagation, to the tokens in the node [@Sundermeyer:2014:LatticeDecoding]:

- **n-gram recombination**. If there are multiple tokens whose last $n$ context words match, keep only the best. We use $n = 22$.
- **cardinality pruning**. Keep at most the $c$ best tokens. We use $c = 62$.
- **beam pruning**. Prune tokens whose probability is low, compared to the best token. The best token is searched from all nodes that appear at the same time instant or in the future. (Tokens in the past have a higher probability because they correspond to a shorter time period.) We prune tokens if the difference in log probability is larger than 650.

We performed a few tests with different pruning parameters and chose large enough $n$ and $c$ so that their effect on the results was negligible. Using a larger beam would have improved the results, but the gain would have been small compared to the increase in decoding time.

Experiments
===========

Data Sets
---------

We evaluate the methods on difficult spontaneous Finnish and Estonian conversations. The data sets were created in a similar manner for both languages. For training acoustic models we combined spontaneous speech corpora with other, less spontaneous speech that benefits acoustic modeling. For training language models we combined transcribed conversations with web data that has been filtered to match the conversational speaking style [@Kurimo:2016]. For the Finnish acoustic models we used 85 hours of training data from three sources. The first is the complete Finnish SPEECON [@Iskra:2002] corpus. This corpus includes 550 speakers in different noise conditions, who have all read 30 sentences and 30 words, numbers, or dates, and spoken 10 spontaneous sentences.
Two smaller data sets of better matching spontaneous conversations were used: the DSPCON corpus [@Aalto:2017], which consists of short conversations between Aalto University students, and the FinDialogue part of the FinINTAS corpus [@Lennes:2009], which contains longer spontaneous conversations. For language modeling we used 61,000 words from DSPCON and 76 million words of web data. We did not differentiate between upper and lower case. This resulted in 2.4 million unique words. For the Estonian acoustic models we used 164 hours of training data, including 142 hours of broadcast conversations, news, and lectures collected at Tallinn University of Technology [@Meister:2012], and 23 hours of spontaneous conversations collected at the University of Tartu[^10]. These transcripts contain 1.3 million words. For language modeling we additionally used 82 million words of web data. The language model training data contained 1.8 million unique words, differentiating between upper and lower case. One reason why the Estonian vocabulary is smaller than the Finnish vocabulary, even though the Estonian data set is larger, is that colloquial Estonian is written in a more systematic way. Also, the standard Estonian vocabulary is smaller than the standard Finnish vocabulary [@Creutz:2007:2], probably because standard Finnish uses more inflected word forms.

  Vocabulary Size           Finnish   Estonian
  ------------------------- --------- ----------
  100,000                   6.67      3.89
  500,000                   3.36      1.59
  2.4M (Fin) / 1.8M (Est)   2.31      1.01

  : [*Out-of-vocabulary word rates (%) of the evaluation sets, excluding start and end of sentence tokens. The last row is the full training set vocabulary, which applies also for the class models.*]{}[]{data-label="tab:oov-rates"}

We use only spontaneous conversations as development and evaluation data. As mentioned earlier, Finnish words can be written down in as many different ways as they can be pronounced in colloquial speech. When calculating Finnish word error rates we accept the different forms of the same word as correct, as long as they could be used in the particular context. Compound words are accepted even if they are written as separate words. However, we compute perplexities on transcripts that contain the phonetically verbatim word forms, excluding out-of-vocabulary (OOV) words. The perplexities from n-gram and neural network word and class models are all comparable to one another, because they model the same vocabulary consisting of all the training set words. Subwords can also model unseen words, so the perplexities in subword experiments are higher. OOV word rates of the evaluation sets are reported in Table \[tab:oov-rates\] for different vocabulary sizes. The Estonian web data is the filtered data from [@Kurimo:2016]. The same transcribed data is also used, except that we removed from the acoustic training set three speakers that appear in the evaluation set. The evaluation data is still 1236 sentences or 2.9 hours. The Finnish data is what we used in [@Enarvi:2016], augmented with the 2016 data of DSPCON and read speech from SPEECON. While we now have more than doubled the amount of acoustic training data, we have only a few more hours of spontaneous conversations. The switch to neural network acoustic models had a far greater impact on the results than the additional training data. We still use the same Finnish evaluation set of 541 sentences or 44 minutes.
The Finnish development and evaluation sets and reference transcripts that contain the alternative forms are included in the latest DSPCON release, without a few sentences that we could not license.

Models
------

The word-based n-gram models were 4-grams, trained using the Modified Kneser-Ney implementation of the SRILM toolkit [@Stolcke:2002]. Class-based models did not use Kneser-Ney smoothing, because the class n-gram statistics were not suitable for computing the Modified Kneser-Ney discount parameters. The quality of our web data is very different from the transcribed conversations, and simply pooling all the training data together would cause the larger web data to dominate the model. Instead we created separate models from different data sets, and combined them by interpolating the probabilities of the observed n-grams from the component models using weights that were optimized on the development data. In the Finnish task we created a mixture from two models, a web data model and a transcribed data model. In the Estonian task we created a mixture from three models, separating the transcribed spontaneous conversations from the broadcast conversations. The mixture weights were optimized independently for each language model on the development data, using expectation maximization (EM); a small sketch of the weight estimation is given below. In the Finnish experiments this gave the transcribed data a weight slightly less than 0.5. In the Estonian experiments the weights of the spontaneous conversations and the web data were typically around 0.4, while the broadcasts were given a weight less than 0.2. Morph models were similarly combined from component models, but the EM optimization failed to give good weights. We initially used the weights optimized for the word-based models, and after the other parameters were fixed, we optimized the mixture weights for development set perplexity using a grid search with steps of $0.05$. The word clustering algorithms do not support training data weighting, so we simply concatenated the data sets. There are many parameters that can be tweaked when creating distributed word representations with word2vec. We tried clustering words using a few different parameters, and report only the best n-gram model for each class vocabulary size. Within the set of values that we tried, the best performance was obtained with continuous bag of words (CBOW), window size 8, and layer size 300 to 500. For the subword language models, we trained Morfessor on a word list combined from all training corpora; the difference to other options such as token-based training was negligible. For each language, four segmentations were trained with $\alpha$-values 0.05, 0.2, 0.5, and 1.0. This resulted in respective vocabulary sizes of 42.5k, 133k, 265k, and 468k for Finnish, and 33.2k, 103k, 212k, and 403k for Estonian. The sizes include the different morph variants with “+” prefix and affix. When training the subword n-gram models with the VariKN toolkit, the growing threshold was optimized on the development set, while keeping the pruning threshold twice as large as the growing threshold. Word-based neural network models were trained on two shortlist sizes: 100k and 500k words. With 500k words we added a normal 500-unit layer with hyperbolic tangent activation before the output layer, which reduced memory consumption and sped up training. The neural networks were trained using the Adagrad [@Duchi:2011] optimizer until convergence or until the maximum time limit of 15 days was reached.
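To make the EM weight estimation mentioned above concrete, the following is a minimal sketch of re-estimating linear interpolation weights from the per-word probabilities that the component models assign to the development data. The function and array names are illustrative; this is not the SRILM implementation.

```python
import numpy as np

def em_mixture_weights(component_probs, num_iter=50):
    """Estimate linear interpolation weights with EM.

    component_probs: array of shape (num_models, num_words), where entry
    [k, i] is the probability that model k assigns to development word i.
    """
    probs = np.asarray(component_probs, dtype=np.float64)
    num_models = probs.shape[0]
    weights = np.full(num_models, 1.0 / num_models)
    for _ in range(num_iter):
        weighted = weights[:, None] * probs                       # E-step
        responsibilities = weighted / weighted.sum(axis=0, keepdims=True)
        weights = responsibilities.mean(axis=1)                   # M-step
    return weights

# Toy example with two component models and four development words.
weights = em_mixture_weights([[0.10, 0.20, 0.05, 0.30],
                              [0.02, 0.10, 0.20, 0.10]])
```

Each iteration computes how much of every development word's probability mass each component model is responsible for, and the new weights are simply the average responsibilities.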
All neural network models were trained on a single NVIDIA Tesla K80 GPU and the training times were recorded. We tried two different approaches for weighting the different data sets during neural network training: by randomly sampling a subset of every data set at the beginning of each epoch [@Schwenk:2005], and by weighting the parameter updates depending on which corpus each sentence comes from. In the latter approach, the gradient is scaled by both the learning rate and a constant that is larger for higher-quality data, before updating the parameters. We are not aware of this kind of update weighting having been used before. Optimizing the weights for neural network training is more difficult than for the n-gram mixture models. As we do not have a computational method for optimizing the weights, we tried a few values, observing the development set perplexity during training. Sampling 20 % of the web data on each iteration, or weighting the web data by a factor of 0.4, seemed to work reasonably well. We used a slightly higher learning rate when weighting the web data to compensate for the fact that the updates are smaller on average. More systematic tests were performed using these weights with the five vocabularies in Table \[tab:subset-processing-results\]. It would be possible to train separate neural network models from each data set, but there are no methods for merging several neural networks in the same fashion that we combine the n-gram models. Often the best possible NNLM results are obtained by interpolating probabilities from multiple models, but that kind of system is cumbersome in practice, requiring multiple models to be trained and used for inference. The layer sizes and other parameters would have to be optimized for each model separately. We combined the NNLMs with the nonclass word or subword n-gram model by log-linear interpolation (see the sketch below). We did not notice much difference to linear interpolation, so we chose to do the interpolation in logarithmic space, because the word probabilities may be smaller than what can be represented using 64-bit floats. We noticed that optimization of the interpolation parameters was quite difficult with our development data, so we gave equal weight to both models. In some cases it could have been beneficial to give a larger weight to the neural network model. The development data was used to select the weight for combining language model and acoustic scores from four candidate values.

Speech Recognition System
-------------------------

We use the Kaldi [@Povey:2011] speech recognition system for training our acoustic models and for first-pass decoding. The TDNN acoustic models were trained with a pure sequence criterion, Maximum Mutual Information (MMI) [@Povey:2016]. The data sets were cleaned and filtered using a Gaussian Mixture Model recognizer and augmented through speed and volume perturbation [@Ko:2015]. The number of layers and parameters of the TDNN were optimized to maximize development set accuracy on the word model. First-pass decoding was very fast with a real-time factor less than $0.5$. The accuracy of the first-pass recognition exceeded our earlier results on both data sets [@Enarvi:2016; @Kurimo:2016], due to the new neural network acoustic models. Kaldi does not, at this moment, directly support class-based decoding. Instead we created lattices using regular n-gram models, and rescored them with class n-gram and neural network models.
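Returning to the language model combination used in rescoring, the log-linear interpolation mentioned above reduces to averaging the models' log-probabilities when the weights are equal. The sketch below illustrates this; the names are illustrative and the weight was simply fixed to $0.5$ in our experiments.

```python
def log_linear_interpolate(logprob_nn, logprob_ngram, weight=0.5):
    """Combine two log10 word probabilities in logarithmic space.

    This is a weighted geometric mean of the probabilities, so the result
    is not normalized, but it cannot underflow the way a product of very
    small probabilities could.
    """
    return weight * logprob_nn + (1.0 - weight) * logprob_ngram

# Example: the NNLM and the n-gram model give log10 probabilities
# -4.2 and -3.7 for the same word in the same context.
combined = log_linear_interpolate(-4.2, -3.7)
```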
Using the methods described in [@Allauzen:2003] it is possible to construct a word FST that represents the probabilities of a class-based language model and use this in first-pass decoding or rescoring, without explicitly using the classes. Kaldi does not have any restrictions on vocabulary size, but compiling the FST-based decoding graph from the language model, pronunciation dictionary, context dependency information, and HMM structure did consume around 60 GB of memory. The memory requirement can be lowered by reducing the number of contexts in the first-pass n-gram model.

Results
-------

  Classes              Perplexity   WER        +Nonclass
  -------------------- ------------ ---------- -----------
  Nonclass             736          30.5
  Brown 2k             705          29.9       29.0
  CBOW 2k              1017         33.2       30.2
  Exchange 2k          698          29.7       29.1
  Brown+Exchange 2k    701          30.0       29.0
  CBOW+Exchange 2k     695          29.8       29.3
  Rules+Exchange 2k    700          **29.3**   **28.9**
  Brown 5k             694          29.9       29.3
  CBOW 5k              861          31.4       29.8
  Exchange 5k          **683**      29.9       29.3
  Brown+Exchange 5k    688          29.8       29.2
  CBOW+Exchange 5k     684          29.8       29.3
  Rules+Exchange 5k    688          29.8       29.4
  CBOW 10k             801          31.1       29.9
  Exchange 10k         691          29.9       29.4
  CBOW+Exchange 10k    691          29.9       29.5
  Rules+Exchange 10k   690          29.9       29.4

  Nonclass             1127         29.9
  Exchange 5k          1433         31.2       29.8

  Nonclass             1135         30.1
  Exchange 5k          1412         31.4       30.2

  Nonclass             1128         30.2
  Exchange 5k          1334         30.6       29.2

  Nonclass             1100         30.0
  Exchange 5k          1252         30.2       29.1

  Nonclass             447          23.4
  Brown 2k             438          22.8       **22.5**
  Exchange 2k          439          22.9       22.6
  Brown+Exchange 2k    438          **22.6**   **22.5**
  Brown 5k             432          22.7       **22.5**
  Exchange 5k          432          23.0       22.6
  Brown+Exchange 5k    **430**      22.8       22.7

  Nonclass             591          23.4
  Exchange 5k          707          24.3       23.9

  Nonclass             582          23.7
  Exchange 5k          689          24.1       23.5

  Nonclass             577          23.1
  Exchange 5k          659          23.6       23.2

  Nonclass             582          23.4
  Exchange 5k          644          23.4       23.0
  -------------------- ------------ ---------- -----------

Table \[tab:ngram-dev-results\] lists perplexities and word error rates given by n-gram language models on the development data. The baseline word model performance, 30.5 % on Finnish and 23.4 % on Estonian, can be compared to the various word class and subword models. We have also included results from subword classes created using the exchange algorithm for reference. The word error rates were obtained by rescoring lattices that were created using the nonclass word or subword model. In the Finnish task, 2,000, 5,000, and 10,000 class vocabularies were compared. The worst case running times of the exchange and Brown algorithms have quadratic dependency on the number of classes. With 10,000 classes, even using 20 CPUs Brown did not finish in 20 days, so the experiment was not continued. The exchange algorithm can be stopped any time, so there is no upper limit on the number of classes that can be trained, but the quality of the clustering may suffer if it is stopped early. However, with 5,000 classes and using only 5 CPUs, it seemed to converge in 5 days. Increasing the number of threads increases memory consumption. Training even 40,000 classes was possible in 15 days, but the results did not improve, so they are not reported. The most promising models were also evaluated in the Estonian task. CBOW is clearly the fastest algorithm, which is probably why it has gained some popularity. These results show, however, that the clusters formed by k-means from distributed word representations are not good for n-gram language models.
CBOW does improve, compared to the other clusterings, when the number of classes is increased. Other than that, the differences between different classification methods are mostly insignificant, but class-based models outperform word models on both languages. This result suggests that class-based softmax may be a viable alternative to other softmax approximations in neural networks. The performance of the Estonian subword models is close to that of the word model, and the Finnish subword models are better than the word model. Subword classes do not work as well, but the difference to nonclass subword models gets smaller when the size of the subword vocabulary increases. Mostly the differences between the different initializations of the exchange algorithm seemed insignificant. However, our rule-based clustering algorithm followed by running the exchange algorithm to create 2,000 classes (Rules+Exchange 2k) gave the best word error rate on Finnish. In the NNLM experiments we did not experiment with different clusterings, but used the ones that gave the smallest development set perplexity in the n-gram experiments. For Finnish, the 5,000 classes created using the exchange algorithm were selected (Exchange 5k). On Estonian, initialization using Brown classes gave slightly better perplexity (Brown+Exchange 5k) and was selected for neural network models.

  Subset Processing   Training Time   Perplexity   WER        +NGram
  ------------------- --------------- ------------ ---------- ----------
  Uniform             143 h           511          26.0       25.6
  Sampling            128 h           505          26.2       25.6
  Weighting           101 h           521          26.4       25.5

  Uniform             360 h           679          25.2       **24.6**
  Sampling            360 h           671          25.5       25.0
  Weighting           360 h           672          **25.1**   **24.6**

  Uniform             141 h           790          26.0       25.0
  Sampling            119 h           761          25.9       25.1

  Uniform             86 h            339          **19.8**   19.9
  Sampling            87 h            311          20.2       19.9
  Weighting           105 h           335          20.0       **19.6**

  Uniform             187 h           424          20.0       19.7
  Sampling            130 h           397          20.0       19.8
  Weighting           187 h           409          19.9       **19.6**
  ------------------- --------------- ------------ ---------- ----------

Table \[tab:subset-processing-results\] compares training time, perplexity, and word error rate in NNLM training, when different processing is applied to the large web data set. *Uniform* means that the web data is processed just like other data sets, *sampling* means that a subset of web data is randomly sampled before each epoch, and *weighting* means that the parameter updates are given a smaller weight when the mini-batch contains web sentences. Sampling seems to improve perplexity, but not word error rate. Because sampling usually speeds up training considerably and our computational resources were limited, the rest of the experiments were done using sampling.
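The update weighting compared in Table \[tab:subset-processing-results\] amounts to scaling each gradient step by a per-corpus constant in addition to the learning rate. The sketch below illustrates the idea with plain SGD and the web-data weight of 0.4 mentioned earlier; the names are illustrative, and the actual optimizer in our experiments was Adagrad.

```python
def weighted_update(params, gradients, corpus_weight, learning_rate=0.1):
    """Scale the gradient step by a constant that depends on the source corpus."""
    scale = learning_rate * corpus_weight
    return [p - scale * g for p, g in zip(params, gradients)]

# A mini-batch of web sentences (weight 0.4) moves the parameters less than
# a mini-batch of transcribed conversations (weight 1.0).
params = weighted_update([0.5, -1.2], [0.1, -0.3], corpus_weight=0.4)
```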
  Network Output   Vocabulary    Parameters   Training Time   PPL        WER        +NGram
  ---------------- ------------- ------------ --------------- ---------- ---------- ----------
  HSoftmax         100k short    231M         360 h           535        26.9       25.9
  NCE              100k short    230M         360 h           531        26.8       25.7
  HSoftmax         500k short    532M         360 h           686        28.4       27.0
  Softmax          5k classes    40M          128 h           505        26.2       25.6

  Softmax          42.5k full    115M         360 h           671        **25.5**   **25.0**
  HSoftmax         42.5k full    116M         360 h           700        25.7       **25.0**
  Softmax          5k classes    40M          146 h           857        26.2       25.5

  HSoftmax         133k full     164M         360 h           742        26.5       25.4
  Softmax          5k classes    40M          190 h           811        26.1       25.4

  HSoftmax         265k full     296M         360 h           849        27.0       25.6
  Softmax          5k classes    40M          133 h           813        26.2       25.3

  HSoftmax         468k full     500M         360 h           1026       28.6       26.9
  Softmax          5k classes    40M          119 h           761        25.9       25.1

  HSoftmax         100k short    231M         360 h           321        20.6       19.9
  NCE              100k short    230M         142 h           384        22.4       21.4
  HSoftmax         500k short    532M         360 h           380        21.0       20.2
  Softmax          5k classes    40M          87 h            311        20.2       19.9

  Softmax          33.2k full    97M          360 h           357        20.4       20.2
  HSoftmax         33.2k full    97M          293 h           370        20.7       20.2
  Softmax          5k classes    40M          116 h           418        20.9       20.2

  HSoftmax         103k full     134M         306 h           393        20.8       20.2
  Softmax          5k classes    40M          126 h           410        20.5       19.9

  HSoftmax         212k full     243M         360 h           411        20.9       20.2
  Softmax          5k classes    40M          130 h           397        **20.0**   19.8

  HSoftmax         403k full     434M         360 h           463        21.4       20.7
  Softmax          5k classes    40M          124 h           395        20.3       **19.6**
  ---------------- ------------- ------------ --------------- ---------- ---------- ----------

Table \[tab:nn-dev-results\] lists perplexities and word error rates given by neural network models on the development data. The word error rates were obtained by rescoring the same lattices as in Table \[tab:ngram-dev-results\]. The shortlist and word class models can predict all training set words, so the perplexities can be compared. Subword models can also predict new words, so their perplexities cannot be compared with word models. The percentage of evaluation set words that are not in the shortlist and words that are not in the training set can be found in Table \[tab:oov-rates\]. The class-based models were clearly the fastest to converge, 5 to 8 days on Finnish data and 4 to 6 days on Estonian data. The experiments include shortlists of 100k and 500k words. Other shortlist models, except the Estonian 100k-word NCE, did not finish before the 360 hour limit. Consequently, improvement was not seen from using a larger 500k-word shortlist. Our NCE implementation required more GPU memory than hierarchical softmax and we were unable to run it with the larger shortlist. With the smaller shortlist, NCE was better on Finnish and hierarchical softmax was better on Estonian. We experienced issues with numerical stability using NCE with subwords, and decided to use only hierarchical softmax in the subword experiments. BlackOut training was slightly faster than NCE, but even less stable, and we were unable to finish the training without numerical errors. With hierarchical softmax we used the same classes that were used in the class-based models, but the classes were rearranged to have equal sizes as described in Section \[sec:hierarchical-softmax\]. This kind of class arrangement did not seem to improve on simple frequency binning, however. In terms of word error rate and perplexity, class-based word models performed somewhat better than the shortlist models.
The best results were from subword models. In both languages it can be seen that class-based subword models improve compared to the nonclass subword models when the vocabulary size grows. In the Finnish task, the smallest 42.5k-subword vocabulary worked well, which is small enough to use normal softmax without classes. In the Estonian task, larger subword vocabularies performed better, provided that the subwords were clustered into classes. The best result was obtained by clustering 403k subwords into 5,000 classes using the exchange algorithm.

                  n-gram                              NNLM
  --------------- ---------- ----------- ----------- ---------- ----------- -----------
  Vocabulary      PPL        WER         +Nonclass   PPL        WER         +NGram

  **Finnish**
  Word            785        31.7                    618        28.1        27.9
  Class           760        **31.4**    **31.1**    589        29.2        27.9
  Subword         1313       31.7                    846        **27.3**    **27.1**
  Subword Class   1499       32.1        31.3        942        28.2        27.4

  **Estonian**
  Word            483        26.1                    344        23.1        22.6
  Class           465        **25.3**    **25.2**    324        22.2        22.2
  Subword         628        26.0                    377        22.7        22.6
  Subword Class   682        25.8        25.5        403        **22.1**    **21.9**
  --------------- ---------- ----------- ----------- ---------- ----------- -----------

  : [*Performance of best n-gram and NNLM models on evaluation data. All NNLMs except the shortlist models were using softmax output. Includes perplexity, word error rate (%), and word error rate after interpolation with the nonclass or n-gram model. Perplexities from word and subword models are not comparable.*]{}[]{data-label="tab:results-summary"}

Table \[tab:results-summary\] compares the best models on evaluation data. The best word, class, subword, and subword class n-gram and NNLM models were selected based on development set word error rate. The evaluation set results show that the advantage that the Finnish subword n-gram models had on the development set was due to the optimization of the morph segmentations on development data. Word classes are the best choice for n-gram modeling. However, subword modeling seems to benefit the most from neural networks, because the overall best results are from subword NNLMs. Classes also work well in NNLMs, although the best Finnish shortlist model, 100k-word NCE, performed exceptionally well in speech recognition. Interpolation with the n-gram model gives a small but consistent improvement.

Conclusions
===========

Our experiments show that class-based models are very attractive for conversational Finnish and Estonian speech recognition. When the vocabulary contains millions of words, class-based n-gram models perform better than normal word models. A class-based NNLM can be trained in less than a week, and when the training time is limited, it often performs better than word-shortlist models. In previous work, class-based models did not outperform word-based models in recognizing standard Finnish and Estonian, such as broadcast news [@Varjokallio:2016]. Improvement was made only when interpolating word and class models. One reason why word classes are especially beneficial in the conversational tasks may be that in the absence of large conversational corpora, most of our training data is from the Internet. Web data is noisy and there are many ways to write the same word. One would expect less to be gained from using subword models, when a word model is trained on the full vocabulary of millions of words.
This seems to be the case, but RNNs are good at learning the structure of the language from a text that has been segmented into subwords. Subwords can also solve the vocabulary size problem with neural network models. In the Finnish task, the best results were from an NNLM trained on a relatively small 42.5k-subword vocabulary with full softmax output. In the Estonian task, the best results were from a large 403k-subword vocabulary that was clustered into 5,000 classes. We explored the possibility of using NCE, BlackOut, or hierarchical softmax to overcome the problem of training neural networks with large output dimensionality. Generally they were slower than class-based training, and did not converge to as good a model within the 15-day time limit, but the Finnish 100k-word NCE training gave good results on the evaluation set. The mixed results could mean that some details have been overlooked in our implementation of sampling-based softmax. In both tasks we obtained the best word error rates from a subword NNLM interpolated with a subword n-gram model. In the Finnish task the best result was 27.1 %, which is a 14.5 % relative improvement from the 31.7 % WER given by our baseline 4-gram model. The best result in the Estonian task, 21.9 %, is a 16.1 % relative improvement from our 26.1 % baseline WER. These are the best results achieved in these tasks, and better than our previous best results by a large margin. The best previously published results are 48.4 % WER in the Finnish task [@Enarvi:2016] and 52.7 % WER in the Estonian task [@Kurimo:2016]. The corpus weighting methods that we used in NNLM training showed potential for improvement, but more thorough research should be done on how to select optimal weights.

Acknowledgements
================

Computational resources were provided by the Aalto Science-IT project. \[[![image](photo-seppo){width="1in" height="1.25in"}]{}\] [Seppo Enarvi]{} received the Lic.Sc. in technology degree in computer and information science from Aalto University, Espoo, Finland, in 2012. He intends to finish his Ph.D. on conversational Finnish speech recognition during 2017. His research interests are in machine learning, currently focusing on language modeling using neural networks. \[[![image](photo-peter){width="1in" height="1.25in"}]{}\] [Peter Smit]{} received the M.Sc. in technology degree in computer science from Aalto University, Espoo, Finland, in 2011. He is currently a doctoral student in the Department of Signal Processing and Acoustics at Aalto University. His research interests are in machine learning and speech recognition, and his current focus is subword-modeling techniques and automatic speech recognition for under-resourced languages. \[[![image](photo-sami){width="1in" height="1.25in"}]{}\] [Sami Virpioja]{} received the D.Sc. in technology degree in computer and information science from Aalto University, Espoo, Finland, in 2012. Between 2013 and 2017 he was a research scientist at Lingsoft Inc., Helsinki, Finland. Currently he is a senior data scientist at Utopia Analytics Oy, Helsinki, Finland, and a postdoctoral researcher in the Department of Signal Processing and Acoustics at Aalto University. His research interests are in machine learning and natural language processing. \[[![image](photo-mikko){width="1in" height="1.25in"}]{}\] [Mikko Kurimo]{} (SM’07) received the D.Sc. (Ph.D.) in technology degree in computer science from the Helsinki University of Technology, Espoo, Finland, in 1997.
He is currently an associate professor in the Department of Signal Processing and Acoustics at Aalto University, Finland. His research interests are in speech recognition, machine learning and natural language processing. Erratum {#erratum .unnumbered} ======= A highway network layer uses a separate bias for its gate, distinguished by the index $\sigma$ in Equation \[eq:highway-network\]. The index was missing in the published paper. [^1]: Manuscript received January 31, 2017; revised July 7, 2017; accepted August 7, 2017. This work was financially supported by the Academy of Finland under the grant numbers 251170 and 274075, and by Kone Foundation. [^2]: The authors work in the Department of Signal Processing and Acoustics at Aalto University, Espoo, Finland. (e-mail: firstname.lastname@aalto.fi) [^3]: Digital Object Identifier 10.1109/TASLP.2017.2743344 [^4]: <https://github.com/senarvi/theanolm> [^5]: We are using a multithreaded exchange implementation and stop the training when the cost stops decreasing. Our observation that an optimized exchange implementation can be faster than Brown clustering is in line with an earlier comparison [@Botros:2015]. [^6]: <https://github.com/aalto-speech/exchange> [^7]: <https://code.google.com/archive/p/word2vec/> [^8]: <https://github.com/aalto-speech/morfessor> [^9]: <https://github.com/vsiivola/variKN> [^10]: Phonetic Corpus of Estonian Spontaneous Speech. For information on distribution, see <http://www.keel.ut.ee/et/foneetikakorpus>.
--- abstract: 'Various models of charged particles interacting with a quantized, ultraviolet cutoff radiation field (but not with each other) are investigated. Upper and lower bounds are found for the self- or ground state-energies without mass renormalization. For $N$ fermions the bounds are proportional to $N$, but for bosons they are sublinear, which implies ‘binding’, and hence that ‘free’ bosons are never free. Both ‘relativistic’ and non-relativistic kinematics are considered. Our bounds are non-perturbative and differ significantly from the predictions of perturbation theory.' author: - | Elliott H. Lieb[^1]\ Departments of Mathematics and Physics\ Princeton University, Princeton, New Jersey 08544-0708\ [*lieb@math.princeton.edu*]{} - | Michael Loss[^2]\ School of Mathematics\ Georgia Institute of Technology, Atlanta, Georgia 30332-0160\ [*loss@math.gatech.edu*]{} date: 'September 15, 1999' title: 'SELF-ENERGY OF ELECTRONS IN NON-PERTURBATIVE QED[^3][^4]' ---

Introduction {#intro}
============

Quantum electrodynamics (QED), the theory of electrons interacting with photons (at least for small energies), is one of the great successes of physics. Among its major achievements is the explanation of the Lamb shift and the anomalous magnetic moment of the electron. Nevertheless, its computations, which are entirely based on perturbation theory, created some uneasiness among the practitioners. The occurrence of infinities was and is especially vexing. Moreover, a truly nontrivial, 3+1-dimensional example of a relativistically invariant field theory has not yet been achieved. There are, however, unresolved issues at a much earlier stage of QED that hark back to black-body radiation, the simplest and historically first problem involving the interaction of matter with radiation. The conceptual problems stemming from black-body radiation were partly resolved by quantum mechanics, i.e., by the non-relativistic Schrödinger equation, which is, undoubtedly, one of the most successful of theories, for it describes matter at low energies almost completely. It is mathematically consistent and there are techniques available to compute relevant quantities. Moreover, it allows us to explain certain facts about bulk matter such as its stability, its extensivity, and the existence of thermodynamic functions. What has not been as successful, so far, is the incorporation of radiation phenomena, the very problem quantum mechanics set out to explain. It ought to be possible to find a mathematically consistent theory, free of infinities, that describes the interaction of non-relativistic matter with radiation at moderate energies, such as atomic binding energies. It should not be necessary, as some physicists believe, to embed QED as a low energy part of a consistent high energy theory. From such a theory one could learn a number of things that have not been explained rigorously. i) The decay of excited states in atoms. This problem has been investigated in some ultraviolet cutoff models in [@BFS] and in a massive photon model in [@OY]. See also the review of Hogreve [@H]. ii) Non-relativistic QED could be a playground for truly non-perturbative calculations and it could shed light on renormalization procedures. In fact, this was the route historically taken by Kramers that led to the renormalization program of Dyson, Feynman, Schwinger and Tomonaga. iii) Last but not least, one could formulate and answer the problems of stability of bulk matter interacting with the radiation field.
It has been proved in [@F],[@LLS] that stability of non-relativistic matter (with the Pauli Hamiltonian) interacting with classical magnetic fields holds provided that the fine-structure constant, $\alpha = e^2/\hbar c$, is small enough. It is certain that the intricacies and difficulties of this classical field model will persist and presumably magnify in QED. The same may be expected from a relativistic QED since replacing the Pauli Hamiltonian by a Dirac operator leads to a similar requirement on $\alpha$ [@LSS]. Indeed, stability of matter in this model (the Brown-Ravenhall model) requires that the electron (positron) be defined in terms of the positive (negative) spectral subspace of the Dirac operator [*with*]{} the magnetic vector potential $A(x)$, instead of the free Dirac operator without $A(x)$. This observation, that perturbation theory, if there is one, must start from the dressed electrons rather than the electrons unclothed by their magnetic field, might ultimately be important in a non-perturbative QED. The first, humble step is to understand electrons that interact with the radiation field but which are free otherwise. In order for this model to make sense an ultraviolet cutoff has to be imposed that limits the energy of photon modes. The simplest question, which is the one we address in this paper, is the behavior of the self-energy of the electron as the cutoff tends to infinity (with the bare mass of the electron fixed). The self-energy of the electron diverges as the cutoff tends to infinity and it has to be subtracted for each electron in any interacting theory. The total energy will still depend strongly on the cutoff because of the interactions. This dependence will, hopefully, enter through an effective mass which will be set equal to the physical mass (mass renormalization). The resulting theory should be essentially Schrödinger’s mechanics, but slightly modified by so-called radiative corrections. Lest the reader think that the self-energy problem is just a mathematical exercise, consideration of the many-body problem will provide a counterexample. Imagine $N$ charged bosons interacting with the radiation field, but neglect any interaction among them such as the Coulomb repulsion. We say that these particles bind if the energy of the combined particles is less than the energy of infinitely separated particles. As we shall show, charged bosons indeed bind, and they do it in such a massive way that it is very likely that this cannot be overcome by the Coulomb repulsion. In particular, the energy of a charged many-boson system is [*not*]{} extensive, and from this perspective it is fortunate that stable, charged bosons do not exist in nature. The situation is very different for fermions. We are not able to show that they do not bind but we can show — and this is one of the main results of our paper — that the self-energy is extensive, i.e., bounded above and below by a constant times $N$. We thus have strong evidence that there is [*no*]{} consistent description of a system of stable charged relativistic or non-relativistic bosons interacting with the radiation field, while the Pauli exclusion principle, on the other hand, is able to prevent the above mentioned pathology. In the remainder of this section we explain our notation and state the results. In the subsequent sections we sketch the proofs of some of them, but for details we refer the reader to [@LL].
We measure the energy in units of $mc^2$ where $m$ is the bare mass of the electron, the length in units of the Compton wave length $ \ell_C = \hbar/mc$ of the bare electron. We further choose $\ell_C^{-1} \sqrt{\hbar c}$ as the unit for the vector potential $A$ and $\ell_C^{-2} \sqrt{\hbar c}$ as the unit for the magnetic field $B$. The argument is the dimensionless quantity $ \ell_C^{-1} x$. As usual, $\alpha = e^2 / \hbar c \approx 1/137.04$ is the fine structure constant. In the expression below, $A(x)$ denotes an ultraviolet cutoff radiation field localised in a box $L\times L\times L$ with volume $V=L^3$, $$A(x)=\frac {1 }{ \sqrt{2V}} \sum_{|k| < \Lambda} \sum_ {\lambda =1,2} \frac{1}{ \sqrt{|k|}} \varepsilon_{\lambda}(k)\bigl[ a_{\lambda}(k) e^{ i x\cdot k} + a_{\lambda}^*(k) e^{- i x\cdot k} \bigr] \ .$$ \[beans\] The index $k=2 \pi n/L$ where $n \in \mathbb{Z}^3$, and the word cutoff refers to the restriction to all values of $k$ with $|k| < \Lambda$. The vectors $\varepsilon_{\lambda}(k)$ are the polarization vectors and are normalized in such a way that $$\varepsilon_{i}(k)\cdot \varepsilon_{j}(k) = \delta_{i,j}\ ,\ \varepsilon_{i}(k)\cdot k = 0 \ .$$ The operators $a_{\lambda}(k)$ and $a_{\lambda}^{*}(k)$ satisfy the commutation relation $$\bigl[a_{\lambda}(k), a_{\lambda^{\prime}}^{*}(k^{\prime})\bigr]= \delta_{\lambda,\lambda^{\prime}} \delta(k,k^{\prime}) \ ,$$ while all others commute with each other. The energy of the radiation field can now be conveniently written as $$H_f = \sum_ {|k| < \Lambda} \sum_{ \lambda =1,2}|k| a_{\lambda}^{*}(k) a_{\lambda}(k) \ .$$ These operators act on the Hilbert space generated by the polynomials in $a_{\lambda}^{*}(k)$ acting on the vacuum $|0 \rangle$. The self energy of (one or more) particles is the [**ground state energy**]{} of the Hamiltonian $$H = {\rm kinetic \ energy} \ + \ H_f \ .$$ where, as usual, the ground state energy of $H$ is defined to be $$E_0 = \inf_{\Psi} \frac{ \langle \Psi, H\ \Psi \rangle }{\langle \Psi, \Psi \rangle}\ .$$ Typically, in the inquiry about the self–energy problem, i.e., the problem of computing the self–energy for fixed, albeit small, $\alpha$ and for large $\Lambda$, one proceeds via perturbation theory. First order perturbation theory will predict an energy of the order of $\alpha \Lambda ^2$, and a higher order power counting argument confirms the asymptotically large $\Lambda$ dependence of that calculation. Our theorems below show that the predictions of perturbation theory for the self–energy problem are wrong, if one is interested in the large $\Lambda$ asymptotics of the energy. If perturbation theory works at all, then it works only for a range of $\alpha$ that vanishes as $\Lambda$ increases. In fact we deduce from the upper bound in Theorem \[theo:nonrel\] that the size of this range shrinks at least as $\Lambda^{-2/5}$. All the theorems below are asymptotic statements for large $\Lambda$ and for fixed $\alpha$. For actual bounds we refer the reader to \[LL\]. The first result concerns the self energy of a nonrelativistic electron interacting with the radiation field. The Hamiltonian is given by $$H= \frac{1}{2} (p+ \sqrt{\alpha} A(x))^2 +H_{f} \ , \label{bacon}$$ where $p =-i\nabla$ and acts on $L^2(\mathbb{R}^3)\otimes {\mathcal F}$, where ${\mathcal F}$ denotes the photon Fock space. 
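For orientation, the first-order prediction quoted above can be checked directly (this short computation is a sketch added here for the reader's convenience; it is not part of the original argument). In a product state $f\otimes|0\rangle$ with $\int |f(x)|^2\,{\rm d}x = 1$, the cross term vanishes because $\langle 0|A(x)|0\rangle = 0$, and $\langle 0|A(x)^2|0\rangle = \Lambda^2/4\pi^2$ (this vacuum expectation is evaluated again in the relativistic bound below), so that $$\langle f\otimes 0 |\, H \,| f\otimes 0\rangle \;=\; \frac{1}{2}\int |\nabla f(x)|^2\,{\rm d}x \;+\; \frac{\alpha}{2}\,\langle 0| A(x)^2 |0\rangle \;=\; \frac{1}{2}\int |\nabla f(x)|^2\,{\rm d}x \;+\; \frac{\alpha\,\Lambda^2}{8\pi^2}\ .$$ Letting $f$ spread out over the box makes the gradient term negligible, which gives the upper bound of order $\alpha\Lambda^2$ mentioned above.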
The ground state energy, $E_0$, of the operator (\[bacon\]) satisfies the bounds $$C_1 \alpha^{1/2} \Lambda^{3/2} \ < \ E_0 \ < \ C_2 \alpha^{2/7} \Lambda^{12/7} \label{eq:nonrel}$$ \[theo:nonrel\] We do not know how to get upper and lower bounds that are of the same order in $\Lambda$, but we suspect that $\Lambda^{12/7}$ is the right exponent. This is supported by the following theorem in which the $p\cdot A$ term is omitted. The ground state energy $E_0$ of the operator $$\frac{1}{2} \left[ p^2 + \alpha A(x)^2 \right] + H_f \label{eq:2.28}$$ satisfies the bounds $$C_1 \alpha^{2/7} \Lambda ^{12/7} \leq E_0 \leq C_2 \alpha^{2/7} \Lambda ^{12/7} \label{eq:2.29}$$ \[theo:2.3\] While these results are not of direct physical relevance (since $E_0$ is not observable), the many-body problem is of importance since it reveals a dramatic difference between bosons and fermions. The ground state energy of $N$ bosons, $E^{boson}_0(N)$, with Hamiltonian $$H(N)=\sum_{j=1}^N \frac{1}{ 2}(p_j +\sqrt \alpha A(x_j))^2 + H_f\ \label{manyham}$$ satisfies the bounds $$C_1 \sqrt{N} \sqrt{\alpha} \Lambda^{3/2} \leq E^{boson}_0(N) \leq C_2 N^{5/7}\alpha ^{2/7} \Lambda^{12/7} \label{eq:nonrelboson}$$ \[theo:nonrelboson\] Thus, the energy $E^{boson}_0(N)$ is not extensive, i.e., it costs a huge energy to separate bosons. This has to be contrasted with the next theorem about fermions. The Hamiltonian is the same as before but it acts on the Hilbert space $${\mathcal F}\otimes \wedge_{j=1}^N L^2(\mathbb{R}^3; \mathbb{C}^2) ~ ,$$ where the wedge product indicates that the antisymmetric tensor product is taken. The ground state energy, $E^{fermion}_0(N)$, of $N$ charged fermions interacting with the radiation field satisfies $$C_1 \alpha^{1/2} \Lambda^{3/2} N \ \leq \ E^{fermion}_0(N) \ \leq \ C_2 \alpha^{2/7} \Lambda^{12/7} N \label{eq:nonrelfer}$$ \[theo:nonrelfer\] The [**“relativistic” kinetic energy**]{} for an electron is $$T^{\rm rel} = | p + \sqrt{\alpha} A (x) | = \sqrt{[p + \sqrt{\alpha} A(x)]^2} \label{eq:4.1}$$ with $p=-i\nabla$. (Really, we should take $\sqrt{[p+\sqrt{\alpha}A (x)]^2 + 1}$, but since $x < \sqrt{x^2 + 1} < x + 1$, the difference is bounded by $N$.) Consider, first, the $N=1$ body problem with the Hamiltonian $$H =T^{\rm rel} + H_f ~. \label{eq:4.2}$$ By simple length scaling (with a simultaneous scaling of the volume $V$) we easily see that $E_0 = \inf {\rm{spec}}~ (H) = C \Lambda$. Our goal here is to show that the constant, $C$, is strictly positive and to give an effective lower bound for it. But we would like to do more, namely investigate the dependence of this constant on $\alpha$. We also want to show, later on, that for $N$ fermions the energy is bounded below by a positive constant times $N \Lambda$. Our proof will contain some novel — even bizarre — features. For the Hamiltonian in \[eq:4.2\] there are positive constants, $C, C', C''$ such that $$\begin{aligned} E_0 &\leq& C \sqrt{\alpha} \Lambda\\ E_0 &\geq& C' \sqrt{\alpha} \Lambda~{\rm{for~ small}}~ \alpha\\ E_0 &\geq& C'' \Lambda~{\rm{for ~ large}}~ \alpha ~ .\end{aligned}$$ \[theo:4.1\] The generalization of this to $N$ fermions is similar to the nonrelativistic generalization, except that the power of $\Lambda$ is the same on both sides of the inequalities.
For $N$ fermions with Hamiltonian $$H_N = \sum_{i=1}^N T^{\rm rel}(x_i) + H_f$$ there are positive constants $C, C', C'',$ independent of $\alpha$ and $N$, such that $$\begin{aligned} E_0 &\leq& C N \sqrt{\alpha} \Lambda\nonumber \\ E_0 &\geq& C' N \sqrt{\alpha} \Lambda \quad {\rm{for~small}} ~ \alpha\nonumber \\E_0 &\geq& C'' N \Lambda \quad {\rm{for~large}} ~ \alpha \label{eq:4.23}\end{aligned}$$ \[theo:4.3\] We close this introduction by mentioning one last result about the Pauli operator. The kinetic energy expression is given by $$T^{\rm Pauli}=[\sigma \cdot (p+ \sqrt{\alpha}A(x))]^2 = (p+ \sqrt{\alpha}A(x))^2 + \sqrt{\alpha}\ \sigma \cdot B(x) \ , \label{eq:4.4}$$ where $\sigma$ denotes the vector consisting of the Pauli matrices. Observe that this term automatically accounts for the spin–field interaction. Our result for the self energy of a Pauli electron is the following. The ground state energy $E_0$ of the Hamiltonian with Pauli energy, $$\frac{1}{2}[\sigma \cdot (p+ \sqrt{\alpha}A(x))]^2 + H_f ,$$ satisfies the bounds $$\begin{aligned} E_0 &\leq & C_3 \sqrt{\alpha}\Lambda^{3/2} \\ E_0 &\geq & C_1 \alpha \Lambda \quad {\rm{for~small}} ~ \alpha\nonumber \\ E_0 &\geq & C_2 \alpha^{1/3} \Lambda \quad {\rm for~large} ~ \alpha \nonumber \end{aligned}$$ For $N$ fermions, the bounds above are multiplied by $N$ (and the constants are changed). \[Pauli\] For the details of the proof, we refer the reader to [@LL]. We believe that the upper bound is closer to the truth since the main contributions to the self energy should come from the fluctuations of the $A^2$ term. Theorem \[Pauli\] has the following consequence for stability of matter interacting with quantized fields. It was shown in [@LLS] that a system of electrons and nuclei interacting with Coulomb forces, with the Pauli kinetic energy for the electrons and with a [*classical*]{} magnetic field energy is stable (i.e., the ground state energy is bounded below by a constant times $N$) if and only if $\alpha$ is small enough. In [@BFG],[@FFG] this result was extended to [*quantized*]{}, ultraviolet cutoff magnetic fields (as here). Among other things, it was shown in [@FFG] that the ground state energy, $E_0$, of the electrons and nuclei problem is bounded below by $-\alpha^2 \Lambda N$ for small $\alpha$. Theorem \[Pauli\] implies, as a corollary, that for small $\alpha$ the [*total energy*]{} (including Coulomb energies) is bounded below by $+\alpha \Lambda N$. In other words, among the three components of energy (kinetic, field and Coulomb), the first two overwhelm the third — for small $\alpha$, at least. All of these statements are true without mass renormalization and the situation could conceivably be more dramatic when the mass is renormalized. In any case, the true physical questions concern energy differences, and this question remains to be addressed.

NON-RELATIVISTIC ENERGY BOUNDS
==============================

[*Theorem \[theo:nonrel\]:*]{} We sketch a proof of Theorem \[theo:nonrel\]. It is clear by taking the state $V^{-1/2} \otimes |0\rangle$ that the ground state energy is bounded above by ${\rm{(const)}} \alpha \Lambda^2$, which is the same result one gets from perturbation theory. Since the field energy in this state vanishes, such a computation ignores the tradeoff between the kinetic energy of the electron and the field energy. Thus, it is important to quantify this tradeoff. The main idea is to estimate the field energy in terms of selected modes.
Consider the operators (field modes), parametrized by $y \in {\mathbb R}^3$, $$L(y) = \frac{1}{\sqrt{2V}} \sum_{|k| < \Lambda, \lambda} \sqrt{|k|} a_{\lambda}(k) \overline{v}_{\lambda} (k)e^{i k \cdot y} \ , \label{eq:2.4}$$ with some arbitrary complex coefficients $v_{\lambda}(k)$. The following lemma is elementary: $$H_f \geq \int w(y) L^{*}(y) L(y) {\rm d} y\ \label{eq:2.5}$$ provided that the functions $v_{\lambda}(k)$ and $w$ are chosen such that, as matrices, $$|k|\delta_{\lambda^{\prime}, \lambda} \delta(k^{\prime}, k) \geq \frac{1}{2V} \overline{v}_{\lambda} (k) \widehat{w}(k-k^{\prime}) v_{\lambda^{'}} (k^{'}) \ , \label{eq:2.6}$$ or equivalently, that $$\sum_{|k| < \Lambda, \lambda} \frac{|f_{\lambda}(k)|^2}{|v_ {\lambda} (k)|^2} \geq \sum_{|k|,|k^{'}| < \Lambda, \lambda, \lambda ^{'}} \frac{1}{2V} \overline{f_{\lambda}(k)}f_{\lambda^{\prime}} (k^{\prime}) \widehat{w}(k-k^{\prime}) \label{eq:2.7}$$ for all $f_{\lambda}(k)$, where $\widehat{w}(k) = \int e^{ik \cdot x} w(x) {\rm d} x$. \[lemma:2.2\] For the proof, one simply notes that both sides of (\[eq:2.5\]) are quadratic forms in the creation and annihilation operators, and hence (\[eq:2.6\]) and (\[eq:2.7\]) are necessary and sufficient conditions for (\[eq:2.5\]) to be true. $$H_f \geq -\frac{1}{2V} \sum_{|k| < \Lambda, \lambda} |k| |v_{\lambda} (k)|^2 \ \int w(y) {\rm d} y + \frac{1}{4} \begin{cases} \int w(y) (L(y) + L^{*}(y))^2 {\rm d} y \\ -\int w(y) (L(y) - L^{*}(y))^2 {\rm d} y \end{cases} \label{eq:2.8}$$ \[corollary:1\] To prove this, note that $$L^{*} L = L L^{*}- \frac{1}{2V} \sum_{|k| < \Lambda , \lambda} |k| |v_{\lambda} (k)|^2 \ , \label{eq:2.9}$$ and, quite generally for operators, $$LL + L^{\ast} L^{\ast} \leq L^{\ast} L + L L^{\ast} \ . ~~~~~~~~~~~~~~~\blacksquare \label{eq:2.10}$$ Returning to the proof of Theorem \[theo:nonrel\] we start with the lower bound. Denote by $$\Pi(x) = \frac{-i}{\sqrt{2V}}\sum_{|k| < \Lambda , \lambda} {\sqrt{|k|} \varepsilon_{\lambda}(k) } \left( a_{\lambda}(k) e^{ik \cdot x} - a^{*}_{\lambda}(k) e^{-i k \cdot x} \right) \ . \label{eq:2.11}$$ This operator is canonically conjugate to $A(x)$ in the sense that we have the commutation relations $$i[\Pi_i(x) , A_j(x)] = \delta_{i,j} \frac{1}{(3\pi)^2 } \Lambda ^3 \ . \label{eq:2.12}$$ For our calculation, it is important to note that $${\rm div} ~ \Pi(x) =0 \ . \label{eq:2.13}$$ Hence from (\[eq:2.12\]) and (\[eq:2.13\]) we get that (for all $j$) $$[p_j + \sqrt{\alpha} A_j(x) , \Pi_j(x)] = \sqrt{\alpha} \frac{i}{(3\pi)^2 } \Lambda ^3 \ . \label{eq:2.14}$$ The inequality $$\frac{1}{2 } (p+ \sqrt{\alpha}A(x))^2 + 2a^2 \Pi(x)^2 \geq -ai \sum^{3}_{j=1}[p_j + \sqrt{\alpha} A_j(x) , \Pi_j(x)] \ , \label{eq:2.15}$$ valid for all positive numbers $a$, yields $$\frac{1}{2} (p+ \sqrt{\alpha} A(x))^2 + H_f \geq a \sqrt{\alpha} \frac{1}{(3 \pi)^2} \Lambda ^3 + H_f - 2a^2 \Pi(x)^2 \ . \label{eq:2.16}$$ Now, with $$v_ {\lambda} (k) = (3 \pi)\frac{\varepsilon_{\lambda}(k)}{\Lambda ^{3/2}} \label{eq:2.17}$$ and $$w(y) = \delta (x-y) \ , \label{eq:2.18}$$ it is elementary to see that (\[eq:2.7\]) is satisfied.
Hence Corollary \[corollary:1\] yields $$H_f \geq \frac{9 \pi^2}{4\Lambda ^3} \Pi(x)^2 - \frac{9}{8} \Lambda \label{eq:2.19}$$ Choosing $a= (3 \pi)/(\sqrt{2} \Lambda^{3/2})$ yields the [*lower bound*]{} $$H \geq \frac{1}{3 \pi}\sqrt{\frac{\alpha}{2}} \Lambda ^{3/2} - \frac{9}{8}\Lambda \label{eq:2.20}$$ The idea of using a commutator, as in (\[eq:2.15\]), (\[eq:2.16\]), to estimate the ground state energy goes back to the study of the polaron [@LY]. For the upper bound we take a simple trial function of the form $$\phi(x) \otimes \Psi \label{eq:2.21}$$ where $\Psi \in {\mathcal F}$ is normalized and $\phi(x)$ is a [*real*]{} function normalized in $L^2({\mathbb R}^3)$. An upper bound to the energy is thus given by $$\frac{1}{2} \int|\nabla \phi(x)|^2 {\rm d} x +\frac{ \alpha}{2} \int \phi(x)^2 \left(\Psi, A(x)^2 \Psi\right) {\rm d} x + (\Psi,H_f \Psi) ~ . \label{eq:2.22}$$ It is not very difficult to see that the last two terms can be combined into the following expression. $$\frac{1}{2}\int \left(\Psi, \left[ \Pi(x)^2 +\alpha A(x) (-\Delta +\phi(x)^2) A(x)\right] \Psi\right) {\rm d}x ~ -\frac{1}{2} {\rm{Tr}} \sqrt{P(-\Delta) P} \ . \label{eq:2.23}$$ Here, $P$ is the projection onto the divergence free vector fields with ultraviolet cutoff $\Lambda$. This can be deduced by writing the field energy in terms of $\Pi(x)$ and $A(x)$. The first term in (\[eq:2.23\]) is a sum of harmonic oscillators whose zero point energy is given by $$\frac{1}{2} {\rm{Tr}} \sqrt{ P\left(-\Delta + \alpha \phi(x)^2 \right)P} \label{eq:2.24}$$ and hence $$\frac{1}{2} {\rm{Tr}} \sqrt{ P\left(-\Delta + \alpha \phi(x)^2 \right)P} -\frac{1}{2} {\rm{Tr}} \sqrt{P (-\Delta) P}\ , \label{eq:2.25}$$ is an exact expression for the ground state energy. Using the operator monotonicity of the square root we get as an upper bound on (\[eq:2.23\]) $$\frac{1}{2} \int|\nabla \phi(x)|^2 {\rm d}x + \frac{1}{2} \sqrt{\alpha} \ {\rm{Tr}}\sqrt{ P \phi(x)^2 P} \ . \label{eq:2.26}$$ As a trial function we use the positive function $$\phi(x) = {\rm{const.}} K^{-3/2} \int \left(1 - \frac{|k|}{K} \right)^3_+ e^{ik \cdot x} {\rm d} k \ . \label{eq:2.27}$$ Optimizing the resulting expression over $K$ yields the stated result. For details we refer the reader to [@LL]. It is natural to ask how good this upper bound is. If we neglect the cross terms in $(p+A)^2$, i.e., we replace the kinetic energy by $p^2 + \alpha A(x)^2$, then we have Theorem \[theo:2.3\], which we prove next. [*Theorem \[theo:2.3\]:*]{} The upper bound was already given in Theorem \[theo:nonrel\] because $\langle p\cdot A \rangle = 0$ in the state (\[eq:2.21\]). Loosely speaking, equation (\[eq:2.12\]) expresses the Heisenberg uncertainty principle for the field operators. An uncertainty principle that is quite a bit more useful is the following. The following inequality holds in the sense of quadratic forms $$\Pi(x)^2 \geq \frac{1}{4}\frac{1}{(3\pi)^4}\Lambda^6 \frac{1}{A(x)^2}\ . \label{eq:2.30}$$ \[lemma:2.4\] For the proof note that $[A_j(x), A_k(y)] = 0$ and compute $$i[\Pi(x)_j, \frac{A_j(x)}{A(x)^2}] =\frac{1}{(3\pi)^2}\Lambda^3 \left[\frac{1}{A(x)^2} - 2 \left( \frac{A_j(x)}{A(x)^2}\right)^2 \right] \ , \label{eq:2.31}$$ and summing over $j$ we obtain that $$i \sum_{j=1}^3[\Pi(x)_j, \frac{A_j(x)}{A(x)^2}] =\frac{1 }{(3\pi)^2}\Lambda^3 \frac{1}{A(x)^2} \label{eq:2.32}$$ Our statement follows from the Schwarz inequality. To prove Theorem \[theo:2.3\] we return to Lemma \[lemma:2.2\] and choose $v_{\lambda}(k)= \varepsilon_{\lambda}(k)$ and $w(x)$ any function $\leq 1$.
Corollary \[corollary:1\] applied to each of the 3 components of $\Pi(x)$ then yields $$H_f \geq \frac{ 1}{4} \int w(x-y) \Pi (y)^2 {\rm d} y - \Lambda^4 \frac{3}{8 \pi^2} \int w(y) {\rm d} y \ , \label{eq:2.33}$$ for every $x \in {\mathbb R}^3$. By Lemma \[lemma:2.4\] the right side is bounded below by $$\Lambda^6 \int w(x-y) \frac{1}{A(y)^2} {\rm d} y - \Lambda^4\int w(y) {\rm d} y \ , \label{eq:2.34}$$ and hence $$\begin{aligned} \langle \Psi, H \Psi \rangle &\geq& \frac{1}{2} \int \langle \nabla \Psi(x), \nabla \Psi(x) \rangle {\rm d} x + \frac{\alpha}{2} \int \langle \Psi(x), A(x)^2 \Psi(x) \rangle {\rm d} x \nonumber \\ &+&\Lambda^6 \int w(x-y) \langle \Psi(y), \frac{1}{A(x)^2}\Psi(y) \rangle {\rm d} y {\rm d} x \nonumber \\ &-& \Lambda^4\int w(y) {\rm d} y \int \langle \Psi(x), \Psi(x) \rangle {\rm d} x \ . \label{eq:2.35}\end{aligned}$$ By Schwarz’s inequality the second and third term together are bounded below by $$\sqrt{\frac{\alpha}{2}} \Lambda^3\int \langle \Psi(x), \Psi(y) \rangle \frac{w(x-y)}{\sqrt{\int w(z) {\rm d}z}} {\rm d} x {\rm d} y \ . \label{eq:2.36}$$ If we restate our bound in terms of Fourier space variables we get $$\int \left[\frac{|p|^2}{2} + \sqrt{\frac{\alpha}{2}} ~ \Lambda^3 \frac{\widehat{w}(p)}{\sqrt{ \widehat{w}(0)}} \right] \langle \widehat{\Psi}(p), \widehat{\Psi}(p)\rangle {\rm d} p -\Lambda^4 \widehat{w}(0) \int \langle \widehat{\Psi}(p), \widehat{\Psi}(p) \rangle {\rm d}p \ . \label{eq:2.37}$$ Choosing the function $\widehat{w}(p)$ to be $(2 \pi)^3 \Lambda^{-18/7}$ times the characteristic function of the ball of radius $\Lambda^{6/7}$, we have that $w(x) \leq 1$ and it remains to optimize (\[eq:2.37\]) over all normalized states $\widehat{\Psi}(p)$. This is easily achieved by noting that the function $$\frac{1}{2} |p|^2 + \sqrt{\frac{\alpha}{ 2}} \Lambda^3 \frac{\widehat{w}(p)}{\sqrt{ \widehat{w}(0)}} \label{eq:2.38}$$ is everywhere larger than $\Lambda ^{12/7}$. Strictly speaking, the function $w(x)$ should be positive in order for the argument that led to (\[eq:2.36\]) to be valid. This can be achieved with a different choice of $w(x)$ that is more complicated but does not change the argument in an essential way.

NON-RELATIVISTIC MANY-BODY ENERGIES {#non-relativistic}
===================================

A problem that has to be addressed is the energy of $N$ particles (bosons or fermions) interacting with the radiation field. If $E_0 = E_0(1)$ is the energy of one particle (which we estimated in the preceding section) then, ideally, the energy, $E_0(N)$, of $N$ particles (which trivially satisfies $E_0(N) \leq N E_0$, since the $N$ particles can be placed infinitely far apart) ought to be, [*exactly*]{}, $$E_0(N) = NE_0$$ in a correct QED. In other words, in the absence of nuclei and Coulomb potentials, there should be [*no binding*]{} caused by the field energy $H_f$. This is what we seem to observe experimentally, but this important topic does not seem to have been discussed in the QED literature. Normally, one should expect binding, for the following mathematical reason: The first particle generates a field, $A(x)$, and energy $E_0$. The second particle can either try to generate a field $A(x+y)$, located very far away at $y$, or it can try to take advantage of the field $A(x)$, already generated by the first particle, and achieve an insertion energy lower than $E_0$. Indeed, this second phenomenon happens for bosons — as expected.
For fermions, however, the Pauli principle plays a crucial role (even in the absence of Coulomb attractions). We show that $E_0(N) \geq C N E_0$ for fermions, but we are unable to show that the universal constant $C = 1$. Even if $C < 1$, the situation could still be saved by mass renormalization, which drives the bare mass to zero as $\Lambda$ increases, thereby pushing particles apart. BOSONS ------ [*Theorem \[theo:nonrelboson\]:*]{} This theorem concerns the ground state energy of $N$ charged bosons. The Hamiltonian is given by (\[manyham\]) acting on the Hilbert space of symmetric functions tensored with the photon Fock space $\mathcal F$. It states, basically, that $C_1 \sqrt{N} \sqrt{\alpha} \Lambda^{3/2} \leq E_0^{boson}(N) \leq C_2 N^{5/7} \alpha^{2/7} \Lambda^{12/7}$. The proof follows essentially that of the one particle case. The interesting fact is that it implies [*binding*]{} of [*charged bosons*]{} (in the absence of the Coulomb repulsion). The binding energy is defined by $$\Delta E (N) = E_0 (N) - NE_0(1)$$ and satisfies the bounds $$\begin{aligned} \Delta E(N) &\geq& C_1 \sqrt{N} \sqrt{\alpha}~ \Lambda^{3/2} - C_2 N \alpha^{2/7} \Lambda^{12/7} \nonumber \\ \Delta E(N) &\leq& C_2 N^{2/7} \alpha^{2/7} \Lambda^{12/7} - C_1 N \sqrt{\alpha}~ \Lambda^{3/2} \label{eq:3.6}\end{aligned}$$ which can be made negative for appropriately chosen $N$ and $\Lambda$. There will be binding for all large enough $N$, irrespective of the cutoff $\Lambda$. It also has to be remarked that the Coulomb repulsion will, in all likelihood, not alter this result since it has an effect on energy scales of the order of $\Lambda$ and not $\Lambda^{12/7}$ or $\Lambda^{3/2}$. FERMIONS -------- The real issue for physics is what happens with fermions. We cannot show that there is no binding but we can show that the energy is extensive as in Theorem \[theo:nonrelfer\]. The Hamiltonian is the same as (\[manyham\]) but it acts on antisymmetric functions tensored with ${\mathcal F}$. (Spin can be ignored for present purposes.) [*Rough sketch of the proof of Theorem \[theo:nonrelfer\]*]{}. The difficulty in proving this theorem stems from the fact that the field energy is not extensive in any obvious way. Define $\underbar{X} = (x_1, \cdots , x_N)$ and the function $$N_j (\underbar{X}, R) = \# \{ x_i \neq x_j ~:~ |x_i - x_j| < R \} ~ .$$ This function counts the number of electrons that are within a distance $R$ of the $j^{\rm{th}}$ electron. Note that this function is not smooth, so that all the following computations have to be modified. (See [@LL].) We save half of the kinetic energy and write $$H = \frac{1}{4} \sum^{N}_{j=1} (p_j + \sqrt{\alpha} A (x_j))^2 + H' ~ .$$ We apply the commutator estimate (\[eq:2.14\]) to the pair $$i [p_j + \sqrt{\alpha} A (x_j), \frac{1}{\sqrt{N_j (\underbar{X}, R) + 1}} \Pi (x_j)]$$ and obtain the bound (with the caveat mentioned above), for all $\alpha > 0$, $$H' \geq a \sqrt{\alpha} \Lambda^3 \sum^N_{j=1} \frac{1}{\sqrt{N_j(\underbar{X}, R) + 1}} - a^2 \sum^N_{j=1} \frac{1}{N_j (\underbar{X}, R) + 1} {\rm{F}} (x_j)^2 + H_f \label{eq:3.8} $$ The next two steps are somewhat nontrivial and we refer the reader to [@LL]. First one notes that the modes $F(x_i)$ and $F(x_j)$ are essentially orthogonal (i.e., they commute) if $|x_i- x_j| > \Lambda^{-1}~ .$ Ignoring the technical details of how this is implemented, the key observation is that the last two terms in (\[eq:3.8\]) can be estimated from below by $ - N\Lambda$ provided $a = \Lambda^{-3/2}$. 
The next ingredient is a new Lieb-Thirring type estimate involving the function $N_j(\underbar{X},R)$. It is here and only here that the Pauli exclusion principle is invoked. On the space $\wedge^{N}_{j=1} L^2 ({\mathbb R}^3; {\mathbb C}^q)$ of antisymmetric functions $$\sum_{j=1}^{N} (p_j + \sqrt{\alpha} A (x_j))^2 \geq \frac{C}{q^{2/3}} \frac{1}{R^2} \sum^N_{j=1} N_j (\underbar{X}, R)^{2/3}$$ with $C \geq 0.00127$. An analogous inequality holds for the relativistic case as well: $$\sum_{j=1}^{N} |p_j + \sqrt{\alpha} A (x_j)| \geq \frac{C}{q^{1/3}} \frac{1}{R} \sum^N_{j=1} N_j (\underbar{X}, R)^{1/3}$$ \[theo:3.3\] By using the kinetic energy previously saved together with (\[eq:3.8\]) and the previous discussion, we get $$H \geq \sum^N_{j=1} \left\{ N_j (\underbar{X},R)^{2/3} + \sqrt{\alpha} \Lambda^{3/2} \frac{1}{\sqrt{N_j (\underbar{X},R)+1}} \right\} - N \Lambda ~ .$$ By minimizing over $N_j$ the desired estimate is obtained. The upper bound is fairly elementary and is omitted. $\blacksquare$ RELATIVISTIC ENERGY BOUNDS {#relativistic} ========================== [*Theorem \[theo:4.1\]:*]{} Sketch of Proof. An upper bound for $E_0$ is easy to obtain, but it is indirect. Note that $$|p + \sqrt{\alpha} A(x) | \leq \varepsilon [ p + \sqrt{\alpha} A (x) ]^2 + (4 \varepsilon)^{-1} \label{eq:4.3}$$ for any $\varepsilon > 0$. Take $\Psi = f (x) \otimes | 0\rangle$ with $|0\rangle$ being the Fock space vacuum. Using (\[eq:4.3\]) $$\begin{aligned} (\Psi, H \Psi) &\leq& \varepsilon ~\int_{\mathbb{R}^{\rm{3}}} \{ \alpha \langle 0 | A (x)^2 | 0 \rangle | f (x)|^2 + | \nabla f (x)|^2 \} dx + (4 \varepsilon)^{-1} \nonumber \\ &=& \frac{\varepsilon \alpha \Lambda^2}{4 \pi} ~ + \varepsilon \int |\nabla f|^2 + \frac{1}{4 \varepsilon}~, \label{eq:4.5a}\end{aligned}$$ since $\langle 0 | A (x)^2 | 0 \rangle = (2 \pi)^{-3} \int_{|k|< \Lambda} |k|^{-1} dk = \Lambda^2/4\pi^2$. We can now let $f(x) \rightarrow V^{-\frac{1}{2}}$ and take $\varepsilon = (\pi/ \alpha)^{1/2} \Lambda^{-1}$, whence $$E_0 \leq (\alpha/4 \pi)^{1/2} \Lambda ~. \label{eq:4.5b}$$ Now we turn to the lower bound for $H$. [**Step 1**]{}: Since $x \rightarrow \sqrt{x}$ is operator monotone, $$T > T_1 = | p_1 + \sqrt{\alpha} A_1 (x) | ~, \label{eq:4.6}$$ where the subscript 1 denotes the first component (i.e., the $x$-component) of a vector. By replacing $T$ by $T_1$, we are now in a position to remove $A_1$ by a gauge transformation - but it has to be an [*operator-valued gauge transformation*]{}. The use of such a gauge transformation is a novelty, as far as we are aware, in QED. To effect the gauge transformation, set $$\varphi (x) = \frac{1}{\sqrt{2V}}~ \sum_{k,\lambda} ~ \frac{\varepsilon_{\lambda}(k)_1}{\sqrt{|k|}} ~ \left[ a_{\lambda} (k) + a_{\lambda}^{\ast} (-k) \right] \frac{e^{ik_1 x_1} -1}{ik_1} ~ e^{ik_{\perp}x_{\perp}} \label{eq:4.7}$$ with $x_{\perp} = (x_2, x_3)$. Then $[A_j (x), \varphi (x) ] = 0, ~~j= 1,2,3$ and $p_1 \exp~ [i \varphi (x)] = -A_1(x)$. The unitary $U = \exp~ [i \varphi (x)]$ is a gauge transformation, but it is [*operator-valued*]{}. 
We have $$\begin{aligned} U^{-1} | p_1 + \sqrt{\alpha} A_1 (x) | U &=& |p_1| \nonumber \\ U^{-1} a_{\lambda} (k) U &=& a_{\lambda}(k) + f_{\lambda} (k, x) \nonumber \\ U^{-1} a_{\lambda}^{\ast} (k) U &=& a_{\lambda}^{\ast}(k) + \bar{f}_{\lambda} (k, x) \nonumber \\ \stackrel{\sim}H_f ~=~ U^{-1} H_f U &=& \sum_{k,\lambda} |k| [ a_{\lambda}^{\ast} (k) + \bar{f}_{\lambda} (k,x) ] [ a_{\lambda} (k) + f_{\lambda} (k,x) ] \label{eq:4.8}\end{aligned}$$ with $$f_{\lambda} (k,x) ~=~ \sqrt{\frac{\alpha}{2V}} ~ \frac{\varepsilon_{\lambda}(k)_1}{|k|} ~\frac{e^{-i k_1 x_1}-1}{k_1} ~ e^{-ik_{\perp}x_{\perp}} ~ . \label{eq:4.9}$$ Since $p_{\perp}$ does not appear in our new Hamiltonian, $$\stackrel{\sim}H ~=~ U^{-1} HU ~=~ |p_1|+~\stackrel{\sim}H_f ~, \label{eq:4.10}$$ the variable $x_{\perp}$ appears only as a parameter, and thus we can set $x_{\perp}$ = constant = (0,0), by translation invariance, and replace $\mathbb{R}^3$ by $\mathbb{R}^1 = \mathbb{R}$. From now on $x_1 = x$ and $p_1 = p = -i ~ d/dx$. [**Step 2**]{}: The dependence on $x$ now appears in $\stackrel{\sim}H_f$ instead of in the kinetic energy, $|p|$. For each $x$ we can try to put $\stackrel{\sim}H_f$ into its ground state, which is that of a displaced harmonic oscillator. But, since this state depends on $x$, to do so will require a great deal of kinetic energy, $|p_1|$. Let $\Psi$ be a normalized wave-function, i.e., an element of $L^2(\mathbb R) \otimes {\mathcal{F}}$. We write it as $\psi_x$ where $\psi_x \in {\mathcal{F}}$. Thus, with $\langle \cdot~ ,~ \cdot \rangle$ denoting the inner product on ${\mathcal{F}}, \int_{\mathbb R} \langle \psi_x, \psi_x \rangle dx = 1$. Decompose $\mathbb R$ as the disjoint union of intervals of length $\ell/\Lambda$, where $\ell$ is a parameter to be determined later. Denote these intervals by $I_j,~ j = 1, 2, \ldots~.$ A simple Poincaré type inequality gives, for $g \in L^2 (\mathbb R)$, $$(g,|p| g) \geq C_1 \frac{\Lambda}{\ell} \sum_j \int_{I_j} \{ | g(x) |^2 - |\bar{g}_j |^2 \} dx ~ ,$$ where $\bar{g}_j = \frac{\Lambda}{\ell} \int_{I_j} g (x) dx$ is the average of $g$ in $I_j$. Then $$(\Psi, |p| \Psi) ~ \geq C_1 \frac{\Lambda}{\ell} ~\sum_j ~ \int_{I_j} ~ \{ \langle \psi_x, \psi_x \rangle ~-~ \langle \bar{\psi}_j, \bar{\psi}_j \rangle~\} dx ~ . \label{eq:4.11}$$ [**Step 3**]{}: Next, we analyze $\stackrel{\sim}H_f$. We think of this as an operator on ${\mathcal{F}}$, parameterized by $x \in \mathbb R$. We would like $\stackrel{\sim}H_f$ to have a gap, so we define $$H_x ~=~ \frac{\Lambda}{2} \sum_{\varepsilon \Lambda \leq |k| \leq \Lambda} ~ \sum_{\lambda} ~ [a^+_{\lambda} (k) + \bar{f}_{\lambda} (k,x)]~\cdot~ [~h.c.~] \label{eq:4.12}$$ Clearly, $\stackrel{\sim}H_f \geq H_x$ and $$(\Psi, \stackrel{\sim}H \Psi) \geq \sum_{j} \int_{I_j} \left\{ \frac{\Lambda}{\ell} \left( \langle \psi_x, \psi_x \rangle - \langle \bar{\psi}_j, \bar{\psi}_j \rangle \right) + \langle \psi_x, H_x \psi_x \rangle \right\} dx ~ . \label{eq:4.13}$$ For each interval $I_j$ we can minimize (\[eq:4.13\]) subject to $\int_{I_j}\langle \psi_x, \psi_x \rangle dx$ fixed. This leads to $$(h_j ~ \psi)_x = e ~ \psi_x \label{eigenvalue1}$$ with $$(h_j ~ \psi)_x = \frac{\Lambda}{\ell} ~ \psi_x - \frac{\Lambda}{\ell} ~ \bar{\psi}_j + H_x ~ \psi_x \label{eigenvalue2}$$ Obviously, this eigenvalue problem (\[eigenvalue1\], \[eigenvalue2\]) is the same for all intervals $I_j$, so we shall drop the subscript $j$ and try to find the minimum $e$. 
A lower bound to $h_j$ (and hence to $e$) can be found by replacing $H_x$ by $$\widehat{H}_x = \frac{\Lambda}{2} ( 1 - \Pi_x) ~,$$ where $\Pi_x = |g_x\rangle \langle g_x|$ is the projector onto the ground state, $|g_x\rangle,~ {\rm{of}}~ H_x$. If we substitute $\widehat{H}_x$ into (\[eigenvalue2\]) the corresponding eigenvalue equation (\[eigenvalue1\]) becomes soluble. Multiply (\[eigenvalue1\]) on the left by $\langle g_x|$, whence $$\left( \frac{\Lambda}{\ell} - e \right) \langle g_x, \psi_x \rangle = \frac{\Lambda}{\ell} \langle g_x, \bar{\psi} \rangle \label{eq:4.16}$$ Then, substitute (\[eq:4.16\]) into (\[eigenvalue2\]) and integrate $\int_I dx$ to find $$\frac{1}{2} \Lambda^3 \ell^{-2} \left( \int \Pi_x dx \right) \bar{\psi} = \left ( \frac{\Lambda}{\ell} - e \right) \left(\frac{\Lambda}{2} - e \right) \bar{\psi} ~ . \label{eq:4.17}$$ We know that $e < \Lambda/2$ because we could take $\psi_x$ = constant as a trial function, and then use $\stackrel{\sim}H_x \leq \Lambda/2$. Also, $e < \Lambda/\ell$, because we could take $\Psi = \delta_{x_{0}} | g_{x_{0}} \rangle ~ .$ [**Step 4**]{}: Eq. (\[eq:4.17\]) will give us a lower bound to $e$ if we can find an upper bound to $Y = (\Lambda/\ell) \int_I \Pi_x dx ~.$ To do this note that $$\begin{aligned} Y^2 &\leq& {\rm{Trace}} ~ Y^2 = \left( \frac{\Lambda}{\ell} \right)^2 \int_I \int_I |\langle g_x, g_y \rangle |^2 dxdy\nonumber \\ &=& \left( \frac{\Lambda}{\ell} \right)^2 \int_I\int_I \exp \{ - \frac{\alpha}{2V} \sum_{\varepsilon \Lambda \leq |k| \leq \Lambda} \sum_{\lambda}|f_{\lambda} (k,x) - f_{\lambda} (k,y)|^2 \} dxdy \label{eq:4.18}\end{aligned}$$ Noting that $\sum_{\lambda = 1}^{2} \varepsilon_{\lambda}(k)_1^2 = 1 - k_1^2/k^2$, the quantity { } in (\[eq:4.18\]) becomes (as $V \rightarrow \infty$) $$\{ \quad \} = -\frac{2 \alpha}{(2 \pi)^3} \int_{\Lambda/2 < |k| < \Lambda} \frac{1}{|k|^3 k_1^2} (k_{\perp}^2) \left[ \sin \frac{k_1}{2} (x-y) \right]^2 dk \label{eq:4.19}$$ After some algebra we find that $$\left( \frac{1}{\ell} - \frac{e}{\Lambda} \right) \left( \frac{1}{2} - \frac{e}{\Lambda} \right) \leq \frac{1}{2 \ell} ({\rm{Trace}}~ Y^2)^{1/2} \leq \frac{1}{2 \ell} \sqrt{K_{\ell} (\alpha)} \label{eq:4.20}$$ where $$\begin{aligned} K_{\ell} (\alpha) &=& \int_0^1 \int_0^1 \exp \left[ - \alpha \frac{\ell}{\pi^2} |x-y| \int_0^{|x-y|\ell/4} \left( \frac{\sin t}{t} \right)^2 ~ dt \right] ~ dxdy ~ .\nonumber \\ &\leq& \int_{-1/2}^{1/2} \exp [ - \alpha x^2 \ell^2 / 8 \pi ] ~ dx ~. \label{eq:4.21}\end{aligned}$$ We find that $$\begin{aligned} K_{\ell} (\alpha) &\sim& 1 - \alpha \ell^2 / 96 \pi, \quad \ell^2 \alpha ~ {\rm{small}}\\ &\sim& \sqrt{2} \pi ( \alpha \ell^2)^{-1/2}, \quad \ell^2 \alpha ~ {\rm{large}} \nonumber \label{eq:4.22}\end{aligned}$$ If $\alpha$ is small we take $\ell \sim \alpha^{-1/2}$. If $\alpha$ is large we take $\ell = 2$. This establishes our theorem for $N=1$. [*Theorem \[theo:4.3\]:*]{} Sketch of Proof. For $N > 1$ we can decompose $\mathbb{R}^3$ into cubic boxes $B_j, j=1,2,3, \ldots$ of size $\ell/\Lambda$ and “borrow” $\frac{1}{2} |p+A(x)|^2$ kinetic energy from each particle. That is, $H_N = H_N^{1/2} + \frac{1}{2} T_N$ with $T_N = \sum_{i=1}^{N} T(x_i)$. The Pauli principle will then yield an energy for $\frac{1}{2}T_N$ that is bounded below by (const.) $(n_{j}-1)^{4/3}$, where $n_j$ is the particle number in box $B_j$. L. Bugliaro, J. Fröhlich and G.M. Graf, *Stability of quantum electrodynamics with nonrelativistic matter*, Phys. Rev. Lett. [**77**]{} (1996), 3494-3497. V. Bach, J. Fröhlich and I.M. 
Sigal, *Renormalization group analysis of spectral problems in quantum field theory*, Adv. Math. [**137**]{} (1998), 205-298; *Quantum electrodynamics of confined nonrelativistic particles*, [*ibid*]{} 299-395. C. Fefferman, *Stability of Coulomb systems in a magnetic field*, Proc. Natl. Acad. Sci. USA, [**92**]{} (1995), 5006-5007. C. Fefferman, J. Fröhlich and G.M. Graf, *Stability of nonrelativistic quantum mechanical matter coupled to the (ultraviolet cutoff) radiation field*, Proc. Natl. Acad. Sci. USA [**93**]{} (1996), 15009-15011; *Stability of ultraviolet cutoff quantum electrodynamics with non-relativistic matter*, Commun. Math. Phys. [**190**]{} (1997), 309–330. H. Hogreve, Math. Reviews [**99e:81051a, b**]{}, Amer. Math. Soc. (1999). E.H. Lieb and M. Loss, *Remarks about the ultraviolet problem in QED*, (in preparation). E.H. Lieb, M. Loss and J. P. Solovej, *Stability of Matter in Magnetic Fields*, Phys. Rev. Lett. [**75**]{}, 985-989 (1995). E.H. Lieb, H. Siedentop and J.P. Solovej, *Stability and Instability of Relativistic Electrons in Magnetic Fields*, J. Stat. Phys. [**89**]{} (1997), 37-59. E.H. Lieb and K. Yamazaki, *Ground State Energy and Effective Mass of the Polaron*, Phys. Rev. [**111**]{} (1958), 728-733. T. Okamoto and K. Yajima, *Complex scaling technique in nonrelativistic massive QED*, Ann. Inst. H. Poincaré Phys. Théor. [**42**]{} (1985), 311-327. [^1]: Supported in part by NSF Grant PHY-98 20650. [^2]: Supported in part by NSF Grant DMS-95 00840. [^3]: ©1999 by the authors. Reproduction of this work, in its entirety, by any means, is permitted for non-commercial purposes. [^4]: This is a preliminary report on our work presented at the University of Alabama, Birmingham - Georgia Tech International Conference on Differential Equations and Mathematical Physics, Birmingham, March 15-19, 1999
--- abstract: 'We analyze the dynamics of a Bianchi I cosmology in the presence of a viscous fluid, causally regularized according to the Lichnerowicz approach. We show how the effect induced by shear viscosity is still able to produce a matter creation phenomenon, meaning that, also in the regularized theory we address, the Universe emerges from a singularity with a vanishing energy density value. We discuss the structure of the singularity in the isotropic limit, when bulk viscosity is the only retained contribution. We see that, as long as viscosity is not a dominant effect, the dynamics of the isotropic Universe possesses the usual inviscid power-law behavior, but with an effective equation of state depending on the bulk viscosity coefficient. Finally, we show that, in the limit of a strong non-thermodynamical equilibrium of the Universe mimicked by a dominant contribution of the effective viscous pressure, a power-law inflation behavior of the Universe appears, the cosmological horizons are removed and a significant amount of entropy is produced.' author: - Giovanni Montani - Marta Venanzi title: Bianchi I cosmology in the presence of a causally regularized viscous fluid --- Introduction {#introduction .unnumbered} ============ The initial cosmological singularity has been demonstrated to be a true, generic property of the Universe [@BKL70; @BKL82; @Montani]. However, while the dynamics of the early Universe has been essentially understood, its physical and thermodynamical nature is far from being under control. On one hand, quantum gravity effects are capable of altering the standard dynamical features proposed in [@BKL82; @K93; @Mo95], giving rise to fascinating alternatives (see for instance [@Ash],[@Pal]), and particle creation effects can also be relevant [@Star; @Mo2001]. The string cosmology paradigm also calls for attention, as discussed for instance in [@Barrow]. On the other hand, such an extreme region of evolution exhibits a rapid expansion, and non-trivial out-of-equilibrium phenomena possibly become important, including the appearance of viscous features in the cosmological fluid. More specifically, one usually distinguishes the bulk viscosity from the shear viscosity: while the former accounts for the non-equilibrium effects associated with volume changes, the latter is a result of the friction between adjacent layers of the fluid. As a matter of fact, shear viscosity does not contribute in isotropic cosmologies, whereas it may significantly modify the dynamics of an anisotropic Universe, as we shall see below. The simplest representation of a relativistic viscous fluid is provided by the so-called Eckart energy momentum tensor [@Eckart]. However, this formulation turns out to be affected by non-causal features, allowing the propagation of superluminal signals [@HL]. In order to remedy such a non-physical behavior, a revised approach has been proposed by Israel [@Israel], solving the non-causality problem via the introduction of phenomenological relaxation times. Having in mind the basic role the Bianchi I cosmology has in understanding the generic behavior of an inhomogeneous Universe near the singularity, in [@BK_Bianchi] and [@BK_Israel] such a model is studied as sourced by a viscous fluid, in the Eckart and in the Israel approach, respectively. 
One of the most intriguing issues coming out of these studies is undoubtedly the possibility of a singularity from which the Universe emerges with negligible energy density, after which a process of matter creation takes place.\ In the present analysis, we study this peculiar solution in terms of an alternative causal regularization of the Eckart energy-momentum tensor, proposed by Lichnerowicz in [@LICH1]. Such a revised formulation is based on the introduction of the so-called *index of the fluid*, de facto a regulator scaling the four-velocity field, thereby defining a *dynamical velocity* of the fluid. This approach has been tested on some real systems, receiving interesting confirmations of its viability [@Disc1; @Disc2]. However, being derived via a phenomenological approach, the Lichnerowicz energy momentum tensor must be completed by the specification of an ansatz linking the fluid index to the thermodynamical variables of the system, thus closing the dynamical problem. In what follows, we apply the Lichnerowicz treatment to the viscous Bianchi I cosmology, pursuing two different tasks. On one hand, we study the solution with matter creation, fixing the fluid index via the requirement of incompressibility. Results show that the Universe evolves through an intrinsic, shear-driven anisotropic solution, meaning that also in the Lichnerowicz scenario the solution with matter creation exists and, actually, such a phenomenon is enhanced, being therefore not related to non-physical effects of the Eckart formulation. On the other hand, we analyze the isotropic limit near the singularity, by reducing the three scale factors of the Bianchi I model to be equal. The latter is known in the literature as the flat Robertson-Walker Universe, for which only bulk viscosity may be relevant, since the homogeneity and isotropy of the model prevent shear among different layers. In this specific study we see that, as long as bulk viscosity is not dominant, the regularization provided by the index of the fluid preserves the same power-law behavior as the inviscid isotropic Universe. In other words, the bulk viscosity coefficient enters through an effective equation of state, spanning the same parameter domain as an ideal fluid (i.e. between dust and stiff matter). Finally, we show how, if the bulk viscosity becomes sufficiently dominant, it is possible to get an equation of state having an effective polytropic index less than $\nicefrac{2}{3}$, leading to a power-law inflation solution. The latter is characterized by massive entropy creation, and no causal separation any longer exists across the Universe regions. Basic formalism {#basic-formalism .unnumbered} =============== The original Lichnerowicz stress-energy tensor describing relativistic viscous fluids reads as follows [@LICH1] $$\begin{aligned} \label{TmunuC} \begin{split} T_{\mu\nu}&=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}-\left(\zeta-\frac{2}{3}\eta\right)\pi_{\mu\nu}\nabla_{\alpha}C^{\alpha}\\ &-\eta\pi_{\mu}^{\alpha}\pi_{\nu}^{\beta}\left(\nabla_{\alpha}C_{\beta}+\nabla_{\beta}C_{\alpha}\right), \end{split}\end{aligned}$$ where $\rho$ is the energy density, $p$ is the pressure, $g_{\mu\nu}$ denotes the metric tensor with the signature $(-+++)$ and $u^{\mu}$ is the four-velocity properly normalized as $$u_{\mu}u^{\mu}=-1\,.$$ The bulk and shear viscous contributions are represented by the $\zeta$ and $\eta$ coefficients, respectively. Here, $$\pi_{\mu\nu}=g_{\mu\nu}+u_{\mu}u_{\nu}$$ is the projection tensor. 
Furthermore, $C^\mu$ represents the so-called dynamical velocity, which is related to $u^\mu$ by $$\label{C} C^{\mu}=Fu^{\mu},$$ $F$ being the index of the fluid.\ A simple algebra shows that expression (\[TmunuC\]) can be rearranged as $$\begin{aligned} \label{Tmunu} \begin{split} T_{\mu\nu}&=(\rho+p')u_{\mu}u_{\nu}+p'g_{\mu\nu} \\ &-\eta\,F\,\left[\nabla_{\mu}u_{\nu}+\nabla_{\nu}u_{\mu}+u_{\mu}u^{\alpha}\nabla_{\alpha}u_{\nu}+u_{\nu}u^{\alpha}\nabla_{\alpha}u_{\mu}\right], \end{split}\end{aligned}$$ where $p'$ is the total pressure containing the standard thermodynamical contribution and the negative component due to viscosity, i.e. $$\begin{aligned} p'\equiv p-\lambda\nabla_{\alpha}C^{\alpha}, && \lambda\equiv\zeta-\frac{2}{3}\eta.\end{aligned}$$ The introduction of $F$ is due to Lichnerowicz, who aimed at describing viscous processes in relativistic dynamics while avoiding superluminal signals. We can think of $F$ as a contribution which eliminates the non-causality features of Eckart's formulation by regularizing the velocity ($F$ will therefore be referred to below as the *regulator* of the theory). The cosmological consequences of the Lichnerowicz description in isotropic cosmologies have been examined in [@Disc2]. In [@Disc1]-[@Disc4], the index of the fluid is parametrized as $$\label{F} F=\frac{p+\rho}{\mu}\, ,$$ where $\mu$ is the rest mass-density, which satisfies the conservation law $$\label{restmass} \nabla_{\alpha}(\mu u^{\alpha})=0.$$ The main advantages of the Lichnerowicz theory have been pointed out in [@Disc2]. A key point is that expression (\[Tmunu\]) reduces to the traditional description provided by Eckart upon setting $F=1$. Furthermore, one of the main assumptions of the well-posedness theorems requires that [@Disc3; @Disc4] $$\label{F1} F>1.$$ Lichnerowicz was led to introduce a new formulation for viscosity originally through the study of incompressible perfect fluids, for which the following relation holds\ $$\label{incomp} \nabla_{\mu}C^{\mu}=0.$$ The basic aim of the present paper is to investigate the dynamical role of the regulator $F$ in two relevant asymptotic regimes. On one hand, we ask how the solution with matter creation, derived in [@BK_Bianchi] for the Bianchi I model, depends on the causal regularization of the theory (i.e. does this effect survive in the presence of the Lichnerowicz formulation?). On the other hand, we are interested in a cosmologically relevant regime, corresponding to the flat isotropic limit, for which only the bulk viscosity must be retained (the shear viscosity being suppressed due to the isotropy hypothesis). Both these regimes are investigated via a power-law solution, which is able to capture the dominant behavior of the asymptotic solution. Such a technique offers a satisfactory answer to the questions above and follows the original treatment in [@BK_Bianchi; @BK_Israel]. Nevertheless, the cosmological relevance of the isotropic case leads us to numerically investigate the dynamics of the system, even far from the initial singularity. A subtle point in the Lichnerowicz approach to the regularization of a viscous fluid is the determination of the regulator $F$ in terms of the thermodynamical parameters. The two relevant regimes here addressed require different descriptions for the regulator expression. The most delicate case is the one associated with matter creation, for which both the energy density and the Universe volume vanish asymptotically to the singularity. 
Then, the construction of a reliable ansatz for large asymptotic values of $F$ appears to be a non-trivial task. Nonetheless, a simple solution to this puzzle is offered by the condition of incompressibility defined by (\[incomp\]), which, as we shall see below, is also appropriate for the comparison with the original analysis in [@BK_Bianchi]. The isotropic flat Universe case can instead be easily handled by retaining the same choice as in [@Disc1], i.e. $F$ is given by the ratio between the enthalpy and the mass density of the Universe, as stated in (\[F\]). This formulation appears well-grounded owing to the fact that the thermal history of the Universe naturally ensures that $F$ is large in the early Universe and that it tends to unity in the present stage of evolution (say for a redshift $z<100$, when the pressure term becomes negligible). Field equations {#field-equations .unnumbered} =============== The line-element of the Bianchi I model in the synchronous reference frame reads as $$\label{metricaBI} ds^{2}=-dt^{2}+R_1(t)^2\,dx^{2}+R_2(t)^2\,dy^{2}+R_3(t)^2\,dz^{2}$$ and hence the metric determinant is given by the following relation $$\sqrt{-g}=R_1 R_2 R_3\equiv R^{3}.$$ Above, $x, y, z$ denote Euclidean coordinates and $R_1(t)$, $R_2(t)$ and $R_3(t)$ are dubbed cosmic scale factors. As well-known (see [@Montani],[@Landau1] and [@Gravitation]) such a model describes in vacuum an intrinsically anisotropic Universe, but in the presence of matter it can also admit the isotropic limit [@KirillovMo]. Let us now introduce the following quantity $$\begin{aligned} \label{H} \begin{split} H&\equiv\left(\ln R\right)^{\cdot}\\ &=\frac{1}{3}\left(\frac{\dot{R_1}}{R_1}+\frac{\dot{R_2}}{R_2}+\frac{\dot{R_3}}{R_3}\right). \end{split}\end{aligned}$$ where the dot denotes the time derivative. In order to have a compatible system, we set up the Einstein equations in a comoving frame with the matter source in which $u^{0}=1$ and $u^{i}=0$, $i=1,2,3$.\ In this frame, $p'=p-\lambda\dot{F}-3 \lambda F H$ and the stress-energy tensor components are $$\begin{aligned} \label{Tmunu_comp} \begin{split} T^0_0&=-\rho, \\ T^i_i&=p'-2\eta F \frac{\dot{R_i}}{R_i}. \end{split}\end{aligned}$$ Thus, the Einstein equations for the Bianchi I spacetime, having the viscous Lichnerowicz tensor as source (here only the $ii$ and $00$-components are non-vanishing), can be written as $$\begin{aligned} \frac{\ddot{R_2}}{R_2}+\frac{\ddot{R_3}}{R_3}+\frac{\dot{R_2}\dot{R_3}}{R_2 R_3}&=-\chi\left(p'-2\eta F\frac{\dot{R_1}}{R_1}\right), \label{Einst1} \\ \frac{\ddot{R_1}}{R_1}+\frac{\ddot{R_3}}{R_3}+\frac{\dot{R_1}\dot{R_3}}{R_1 R_3}&=-\chi\left(p'-2\eta F\frac{\dot{R_2}}{R_2}\right), \label{Einst2} \\ \frac{\ddot{R_1}}{R_1}+\frac{\ddot{R_2}}{R_2}+\frac{\dot{R_1}\dot{R_2}}{R_1 R_2}&=-\chi\left(p'-2\eta F\frac{\dot{R_3}}{R_3}\right), \label{Einst3} \\ \frac{\dot{R_1}\dot{R_2}}{R_1 R_2}+\frac{\dot{R_1}\dot{R_3}}{R_1 R_3}+\frac{\dot{R_2}\dot{R_3}}{R_2 R_3}&=\chi\rho, \label{Einst4} \end{aligned}$$ $\chi$ being the Einstein constant.\ From the spatial components (\[Einst1\],\[Einst2\],\[Einst3\]), it is possible to show that the system admits the following integrals of motion $$\label{integrale1} \frac{\dot{R_{i}}}{R_{i}}=H+s_{i}R^{-3}e^{\varphi}.$$ Here $\dot{\varphi}\equiv -2\chi\eta F$ and the quantities $s_i$ are such that $s_{1}+s_{2}+s_{3}=0$. 
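For completeness, a quick sketch of where these first integrals come from (the shorthand $D$ for the difference of two expansion rates is ours; this is only the standard manipulation, not an additional assumption):

```latex
% Subtracting (Einst2) from (Einst1) and writing D \equiv \dot R_2/R_2 - \dot R_1/R_1 gives
\dot D + 3H\,D \;=\; -2\chi\eta F\, D \;=\; \dot\varphi\, D
\qquad\Longrightarrow\qquad
D \;\propto\; R^{-3}e^{\varphi}\,.
% Repeating the argument for the other pairs of spatial equations, and using that the
% average of the \dot R_i/R_i is H, one recovers \dot R_i/R_i = H + s_i R^{-3}e^{\varphi}
% with s_1 + s_2 + s_3 = 0, the s_i being the integration constants.
```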
The evolution of $H$ is obtained using the trace of the $ii$-components combined with the $00$-component (\[Einst4\]) of the Einstein equations, giving $$\label{H_punto} \dot{H}=\chi\rho-\frac{1}{2}\chi h-3H^{2}+\frac{3}{2}\chi\zeta FH+\chi(\frac{1}{2}\zeta-\frac{1}{3}\eta)\dot{F},$$ where $$\label{enthalpy} h=\rho+p$$ represents the specific enthalpy. Moreover, the hydrodynamic equations $\nabla_{\nu}T_{\mu}^{\nu}=0$ (only the $0$-component is non-vanishing here) provide the following evolution for $\rho$ $$\label{rho_punto} \dot{\rho}=-4\chi\eta F\rho-3Hh+9H^{2}\zeta F+12H^{2}\eta F+3H\zeta\dot{F}-2H\eta\dot{F}.$$ The first integrals (\[integrale1\]) can be re-cast in a more compact form by the use of (\[H\_punto\]) and (\[Einst4\]) as $$\label{integrale2} \rho=\frac{1}{\chi}\left(3H^{2}-q^{2}R^{-6}e^{2\varphi}\right),$$ where $q^{2}\equiv \frac{1}{2}\left(s_{1}^{2}+s_{2}^{2}+s_{3}^{2}\right)$. It is worth noting that setting $q^2=0$ corresponds to the isotropic case. Equations (\[H\]), (\[H\_punto\]), (\[rho\_punto\]) and (\[integrale2\]) together with the $ii$-components of the field equations represent the full set of the dynamical equations characterizing the present model. In order to close the system we introduce a polytropic index $\gamma$ and we consider an equation of state of the form $$\begin{aligned} h=&\gamma\rho, && 1\leq\gamma\leq2\end{aligned}$$ where, e.g., $\gamma=1$ corresponds to dust matter and $\gamma=\frac{4}{3}$ to radiation, respectively. Asymptotic solutions with matter creation {#asymptotic-solutions-with-matter-creation .unnumbered} ========================================= As well known [@BKL70], approaching the initial singularity, the Bianchi I solution in vacuum is Kasner-like and the presence of a perfect fluid is negligible. In [@BK_Bianchi], it has been shown instead that in the presence of a shear viscous contribution the situation significantly changes, but a Kasner-like solution still exists and it is characterized by a vanishing behavior of the energy density. Here, we want to verify if such a peculiarity survives when the Eckart representation of the viscous fluid is upgraded in terms of the Lichnerowicz causal reformulation. It is rather easy to realize, via a simple asymptotic analysis, that the ansatz in [@Disc1] fails in the region of small values of $\rho$. In fact, the latter predicts $F\sim\rho R^3$, which clearly vanishes near the singularity if $\rho\sim0$ (as $R^3$ naturally goes to zero toward the singularity). In investigating the solutions in this region we therefore need to search for a different representation of $F$. A possibility is to infer its form by considering the case of an incompressible fluid. From equation (\[incomp\]) and by using the line-element (\[metricaBI\]), one immediately gets the following expression for $F$ $$F=\frac{F_{0}}{R^{3}},$$ where the subscript $0$ denotes an integration constant, a convention we will use throughout this paper. Clearly, $F$ grows to infinity as the singularity is approached, without contradicting the constraint (\[F1\]), thus ensuring well-behaved solutions without superluminal signals. In this regard, it is rather natural to expect that the value of $F$ should grow significantly in extremely relativistic regimes, such as near the cosmological singularity. 
As suggested in [@BK_Bianchi], we assume that in the region of low density the viscous coefficients can be expressed as power-laws of the energy density with exponents greater than unity, i.e., $$\begin{aligned} \eta&=\eta_{1}\rho^{\alpha_{1}}, && \zeta=\zeta_{1}\rho^{\beta_{1}}, &&\rho\rightarrow 0, && \alpha_{1}\geq1, \beta_{1}\geq1. \label{visc1}\end{aligned}$$ It is worth noting that for an incompressible fluid the $\zeta$-terms automatically drop out from the field equations. This is also the case treated in [@BK_Bianchi], because there the bulk viscosity contribution is asymptotically negligible for small energy densities, taking $\beta_{1}>\alpha_{1}$. In our analysis, in order to search for a consistent solution, we assume that the density vanishes faster than the volume $R^{3}$ as the system approaches the singular point $(H,\rho)=(+\infty,0)$ in the $(H,\rho)$ plane[^1]. Moreover, in the right-hand side of equation (\[H\_punto\]) we can retain the quadratic term in $H$ only. Then, the limiting forms of equations (\[H\_punto\]) and (\[rho\_punto\]) are $$\begin{aligned} \dot{H}&=-3H^{2} \label{Hpunto2}, \\ \dot{\rho}&=-3\gamma\rho H+18\eta_{1}F_{0}\frac{\rho^{\alpha_{1}}}{R^{3}}H^{2} \label{rhopunto2}.\end{aligned}$$ It is easy to see that $\dot{\varphi}\rightarrow0$, as it contains a positive power of the energy density, and hence we can fix, without loss of generality, $\varphi\equiv 0$. The constraint equation (\[integrale2\]) reduces to the following relation $$\label{integrale3a} H=\sqrt{3}qR^{-3}.$$ Then, one can easily check that the leading order asymptotic solutions for $t\rightarrow0$ of the simplified Einstein equations (\[Hpunto2\]), (\[rhopunto2\]), (\[integrale3a\]) read as $$\begin{aligned} H&=\frac{1}{3t}, && \rho=Kt^{\frac{2}{\alpha_{1}-1}}, && R^{3}&=\sqrt{3}qt,\\ R_{i}&=\left(\sqrt{3}qt\right)^{p_{i}}, && p_{i}=\frac{1}{3}+\frac{s_{i}}{3q}\end{aligned}$$ where $K$ is a constant depending on the parameters of the model. For the solutions to be asymptotically self-consistent, the relations above are applicable only when $\alpha_{1}>3$, in slight contrast with the result obtained in the Eckart approach in [@BK_Bianchi], where $\alpha_{1}>1$. It is worth noting that the energy density in this causal model decays to zero more rapidly than in the non-regularized case studied by Belinskii and Khalatnikov (BK) in [@BK_Bianchi], where $$\rho^{BK}\sim t^{\frac{1}{\alpha_{1}-1}},\qquad\alpha_{1}>1.$$ In other words, we see how the Lichnerowicz approach leads to a cosmological model in which the Universe emerges from the singularity with a greater matter creation rate than in the Eckart case. Viscous dynamics in the isotropic Universe {#viscous-dynamics-in-the-isotropic-universe .unnumbered} ========================================== We now focus our investigation on the opposite regime, where the energy density takes a diverging value near the singularity. Then, let us consider the isotropic limit of the solution, leading to the flat Robertson-Walker Universe, by setting $q^2=0$ in the first integral (\[integrale2\]). The latter reduces to $$\label{integrale3} \rho=\frac{1}{\chi}3H^{2}.$$ As a natural consequence of isotropy and compatibility issues with the Einstein equations, shear viscosity is not permitted in the model and bulk viscosity is the only retained contribution in the energy momentum expression (\[Tmunu\]). 
Indeed, in a homogeneous Universe, isotropically expanding, no friction between different layers can occur and the shear viscosity can affect inhomogeneous perturbations only. This setup has already been examined in [@BK_FRW] for an Eckart fluid via the dynamical system approach, and in [@Disc1] via the Lichnerowicz treatment in an attempt to explain dark-energy related current issues. Here we draw our attention to the behavior of the solutions in the limit of large density values. We address the problem by parameterizing $F$ according to expression (\[F\]), i.e., $$F=\frac{\gamma \rho R^3}{\mu_0},$$ where we have used the fact that $\mu=\mu_0 {R}^{-3}$ because of the rest-mass conservation law (\[restmass\]). It is easy to check that the time derivative of $F$ yields $$\dot{F}=\left(3H+\frac{\dot{\rho}}{\rho}\right)F.$$ Then, in this limiting case, the Einstein equations take the form $$\begin{aligned} \dot{H}&=\chi\left(1-\frac{1}{2}\gamma\right)\rho-3H^{2}+3\chi\zeta FH+\frac{1}{2}\chi\zeta F\frac{\dot{\rho}}{\rho}, \label{Hpunto3} \\ \dot{\rho}&=-3H\gamma\rho+18H^{2}\varsigma F+3H\varsigma F\frac{\dot{\rho}}{\rho}. \label{rhopunto3}\end{aligned}$$ For the bulk viscosity coefficient in the limit of large density, we still have a power-law behavior of the type: $$\begin{aligned} \zeta=\zeta_{2}\rho^{\beta_{2}}, && \rho\rightarrow\infty, && 0\leq\beta_{2}\leq\frac{1}{2}\,. \label{visc2} \end{aligned}$$ Notice that we are using a different subscript with respect to equation (\[visc1\]), corresponding to a different asymptotic region of the energy density. The solutions of the full set of the field equations given by (\[H\]), (\[integrale3\]), (\[Hpunto3\]) and (\[rhopunto3\]) are investigated assuming that the energy density and the volume evolve like powers of time according to the relations $$\begin{aligned} \label{timepowers} \rho=\rho_{0}t^{y}, && R^{3}=R_{0}^{3}t^{x}, && y<0, && x>0, \end{aligned}$$ as the time $t\rightarrow 0$. As a consequence, the viscosity evolves according to the following expression $$\zeta=\zeta_2{\rho_0}^{\beta_{2}}{t}^{\beta_{2} y}.$$ From equation (\[timepowers\]) with the use of (\[H\]) we immediately get $$\label{Htimepower} H=\frac{x}{3t}.$$ Using equation (\[Hpunto3\]) and noting that $\dot{H}\sim t^{-2}$, it is necessary that $y=-2$ for the system to be self-consistent. Similar arguments lead us to conclude that $$\label{x_beta} x=2\beta_{2}+1.$$ Then, one finds the following evolutions in terms of $t$ for the energy density and the scale-factor $$\begin{aligned} \rho=\frac{x^{2}}{3\chi}t^{-2}, \label{rho4}\\ R={R_0} t^{\frac{2}{3\gamma_{eff}}}, && \gamma_{eff}\equiv\frac{2}{2\beta_{2}+1}. \label{gammaeff} \end{aligned}$$ where we introduced an effective equation of state $\gamma_{eff}$. One can check that $\gamma$ is related to $\beta_{2}$ via the following expression $$\label{gamma1} \gamma=\frac{2}{2\beta_{2}+1}\left[\frac{1}{1-4\beta_{2}(2\beta_{2}+1)^{\beta_{2}}\frac{\zeta_{2}R_{0}^{3}}{(3\chi)^{\beta_{2}}\mu_{0}}}\right].$$ Furthermore, one can see that the regulator evolves with time as $$\label{F_t} F\sim t^{2\beta_{2}-1}$$ which diverges toward the singularity when $0\leq\beta_{2}<1/2$ and reduces to a positive constant when $\beta_{2}=1/2$, leading to well-behaved regularized solutions. An interesting additional feature stands out from the behavior of the scale factor $R$. When the Robertson-Walker geometry is coupled to an ideal fluid the scale factor typically evolves as $R_{RW}\sim t^{\frac{2}{3\gamma}}$. 
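Before commenting on this comparison, a quick symbolic cross-check of the exponent matching just performed may be useful. The following sketch (our own notation; it only reproduces the power counting, not the coefficient relation (\[gamma1\])) matches the powers of $t$ carried by each term of the field equations:

```python
# Minimal sympy sketch: with rho ~ t**y, R**3 ~ t**x, H = x/(3t), F = gamma*rho*R**3/mu_0
# and zeta = zeta_2*rho**beta_2, matching the powers of t in the H-equation fixes y and x.
import sympy as sp

y, x, b2 = sp.symbols('y x beta_2')

exp_rho  = y          # rho   ~ t**y
exp_H    = -1         # H     ~ t**(-1)
exp_F    = y + x      # F     ~ rho * R**3
exp_zeta = b2 * y     # zeta  ~ rho**beta_2
exp_Hdot = -2         # H_dot ~ t**(-2)

eq1 = sp.Eq(exp_rho, exp_Hdot)                    # chi*rho   must scale like H_dot
eq2 = sp.Eq(exp_zeta + exp_F + exp_H, exp_Hdot)   # zeta*F*H  must scale like H_dot
sol = sp.solve([eq1, eq2], [y, x], dict=True)[0]
print(sol)   # {y: -2, x: 2*beta_2 + 1}

# Cross-check: every term of the continuity equation then scales like rho_dot ~ t**(y-1).
terms = [exp_H + exp_rho,                    # H * gamma * rho
         2*exp_H + exp_zeta + exp_F,         # H**2 * zeta * F
         exp_H + exp_zeta + exp_F - 1]       # H * zeta * F * rho_dot/rho
print([sp.simplify(tm.subs(sol) - (y - 1).subs(sol)) for tm in terms])   # [0, 0, 0]
```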
Here we observe that we still have an isotropic limit, as found in [@BK_Bianchi], but instead of being negligible, the viscosity acquires a fundamental role in the dynamics of the Universe, driving the evolution through an effective equation of state. Indeed, the range of the possible behaviors of $R$ in (\[gammaeff\]) perfectly coincides with that of the standard non-viscous homogeneous and isotropic Universe. In particular, - for $\beta_{2}=1/2$ we have $R\sim t^{\nicefrac{2}{3}}$: the Universe mimics a dust-dominated Friedmann Universe with an effective equation of state $\gamma_{eff}=1$; - for $\beta_{2}=0$ we get $R\sim t^{\nicefrac{1}{3}}$: the solution evolves toward a stiff-matter dominated Universe with an effective equation of state $\gamma_{eff}=2$; which allows us to conclude that the introduction of $F$ in the Friedmann Universe has the role of encoding all the viscous effects in the dynamics, and what we see at the end is the usual ideal fluid equation of state. We say in this sense that bulk viscosity is regularized.\ Now we emphasize that by extrapolating our solutions to the regime $\beta_{2}>\nicefrac{1}{2}$ we obtain an intriguing dynamical property of the Universe. In fact, from equation (\[gammaeff\]) for $\beta_{2}=1/2+\varepsilon$ with $\varepsilon>0$ we immediately get $$\gamma_{eff}=\frac{1}{1+\varepsilon}\rightarrow p=-\frac{\varepsilon}{(1+\varepsilon)}\rho.$$ For $\varepsilon>\nicefrac{1}{2}$ (i.e. $\beta_{2}>1$ and $\gamma_{eff}<\nicefrac{2}{3}$) this dynamical behavior corresponds to a power-law inflation solution, induced by a negative effective pressure $p<-\nicefrac{1}{3}\rho$. Indeed, it is easy to realize that the cosmological horizon $$d_h(t)\equiv R(t) \int_{0}^{t}\frac{dt'}{R(t')}$$ takes in this case a divergent value. In this scenario the Universe corresponds to a unique causal region and we can think of it as a viable solution to the horizon paradox. It is worth noting that, since we are dealing with an isotropic flat Universe, we get $H\sim \sqrt{\rho}$ (see equation (\[integrale3\])). Then, the restriction $\beta_{2}< \nicefrac{1}{2}$ in the expression (\[visc2\]), derived for an Eckart representation of the fluid ($F\equiv 1$), acquires a clear physical meaning. In fact, as long as such a restriction holds, the negative effective pressure, due to bulk viscosity, behaves like $\rho^{\beta_{2}+\nicefrac{1}{2}}<\rho$, for large $\rho$ values. This means that the standard (positive) thermodynamical pressure $p = (\gamma - 1)\rho$ always remains the dominant contribution. This is coherent with the idea that the bulk viscosity representation of non-equilibrium effects is valid only on a perturbative level. Clearly, the presence of the regulator $F$ slightly changes this situation, since it enhances the weight of viscosity in the dynamics, and this is at the root of the present results. Nonetheless, the regime $\beta_{2}>1$ can be qualitatively interpreted as a fluid-like representation for strong non-equilibrium effects, not surprising in the limit of the singularity, when the geometrical velocity of the Universe collapse diverges. Thus, the power-law inflation solution we find in such an extreme regime can be thought of as the qualitative feature induced by a cosmological continuous source, whose thermodynamical evolution cannot be approximated via equilibrium stages. 
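The divergence of the horizon quoted above can be checked in one line (a sketch under the power-law behavior derived previously, $R(t)=R_0\,t^{n}$ with $n=2/(3\gamma_{eff})$; it adds nothing beyond the stated result):

```latex
d_h(t) \;=\; R(t)\int_0^{t}\frac{dt'}{R(t')}
       \;=\; t^{\,n}\int_0^{t} t'^{\,-n}\,dt'
       \;=\;
\begin{cases}
  \dfrac{t}{1-n}\,,  & n<1 \quad (\gamma_{eff}>2/3)\,,\\[1.5ex]
  +\infty\,,         & n\geq 1 \quad (\gamma_{eff}\leq 2/3)\,,
\end{cases}
% so for gamma_eff < 2/3 (power-law inflation) the cosmological horizon is removed.
```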
If we accept a fluid representation for such a limiting scenario, we can argue that the superluminal geometrical velocity of the early phases of the Universe expansion is able to open the horizon size, making the cosmological space causally connected as a whole.\ It is now worthwhile to stress a remarkable feature of the obtained solution which makes it essentially different from the same limit treated in [@BK_Bianchi]. Indeed, there, the isotropic solution was considered for $t\rightarrow 0_-$ simply because it corresponds to a singular point in the dynamical system approach of equations (\[H\]), (\[H\_punto\]), (\[rho\_punto\]) and (\[integrale2\]). Here we deal with increasing $t$ values and our dynamics describes an expanding Universe from the initial singularity. This choice of evolution regime is physically justified by the behavior of the entropy per comoving volume. The latter has the form $$\sigma \sim \rho^{\frac{1}{\gamma}}R^{3}\sim t^{2 \left(\frac{1}{\gamma_{eff}}-\frac{1}{\gamma}\right)}.$$ Since from equation (\[gamma1\]) we see that $$\label{gamma2} \gamma=\gamma_{eff} \left[\frac{1}{1-4\beta_{2}(2\beta_{2}+1)^{\beta_{2}}\frac{\zeta_{2}R_{0}^{3}}{(3\chi)^{\beta_{2}}\mu_{0}}}\right]>\gamma_{eff},$$ it is straightforward to infer that the entropy per comoving volume increases due to dissipation processes when the Universe expands. This leads us to conclude that we are dealing with a cosmological paradigm in which the Universe emerges from the singularity causally connected as a whole and with a significant entropy creation (this effect is enhanced by increasing $\beta_{2}$ values). This suggests that extreme non-equilibrium thermodynamics near the singularity could play a relevant role in solving some unpleasant paradoxes of the standard cosmological model, namely the horizon and entropy ones. Further remarks should now be made about the obtained solutions (\[Htimepower\]), (\[rho4\]), (\[gammaeff\]) and (\[F\_t\]). Although the power-law approach given by (\[timepowers\]) provides a satisfactory characterization of the asymptotic behavior of the model under consideration, its cosmological relevance leads us to support the solutions with an additional numerical investigation. In this regard, we show in the right panel of figure (1) how the power-law approximation is largely predictive near enough to the singularity, while discrepancies take over as the Universe volume expands. On the other hand, one expects that far from the singularity the role of viscosity gradually decreases. In figure (1) (left panel) it is shown that the numerical solution for large times tends to approach the standard inviscid flat Robertson-Walker dynamics. The emerging cosmological picture is consistent with the paradigm of a Universe characterized by a viscous non-equilibrium dynamics close to the Big-Bang, whose viscous features are suppressed as the volume expands, recovering the isentropic homogeneous and isotropic Universe (as a consequence of the decreasing value of the energy density). 
*Figure 1 (two panels, from the image files `fit` and `frw`): comparison of the numerical solution with the asymptotic power-law behavior near the singularity and with the standard inviscid flat Robertson-Walker dynamics at large times.* The numerical analysis performed above reinforces the idea that strong viscous effects (occurring for $\gamma _{eff}<2/3$) can be viewed as a consistent solution to the horizon and entropy paradoxes, by means of a power-law dynamics in the very early Universe evolution. We further make some considerations about the stability of the obtained solutions. In this regard, we observe that in [@carlmont] the problem of the stability of a flat isotropic Universe has been analyzed in the presence of bulk viscosity, according to the Eckart formulation. In the direction forward in time (the same one we are interested in here), it is shown that the Universe is stable under scalar perturbations. This result is expected to be maintained in the present formulation too, simply because the regulator $F(t)$ is large near the singularity (according to the ansatz (\[F\])). In other words, one can infer that the presence of $F$ simply enhances the effect of the viscosity, preserving the stability properties derived in [@carlmont]. This conjecture acquires a reliable meaning in the considered power-law approximation of the solution, where the presence of the regulator can be easily restated in terms of an effective value for $\beta_{2}$ in the corresponding Eckart formulation (i.e. asymptotically to the singularity $F$ becomes a power-law in the energy density of the viscous fluid). We conclude this section by observing that the power-law solution (\[Htimepower\]), (\[rho4\]), (\[gammaeff\]) can be extended to the negatively and positively curved Robertson-Walker geometry [^2], as long as the effective polytropic index $\gamma_{eff}$ remains greater than $2/3$. Under such a restriction, the spatial curvature term (behaving like $R^{-2}$) is asymptotically negligible near the singularity with respect to both the energy density $\rho$ and the viscous contribution. The situation is different in the case of the power-law inflation, when $\gamma_{eff} < 2/3$, since the curvature term can play a relevant role. However, this is a typical feature of the inflationary-like solutions, whose existence requires the spatial gradients to be sufficiently smooth [@KT90]. From a physical point of view, if we cut our asymptotic regime at the Planck time (assuming that earlier times are governed by quantum dynamics), we could require that the spatial curvature is, at that time, sufficiently small, so that it then becomes negligible forward in time. Conclusions {#conclusions .unnumbered} =========== We have studied the influence of a viscous cosmological fluid on the Bianchi I Universe dynamics in the neighborhood of the initial singularity. The characterizing aspect of the present analysis consists of describing the viscous fluid via the Lichnerowicz formulation, introducing a causal regulator (the index of the fluid) in order to ensure a causal dynamics. We have also pointed out how this additional new degree of freedom must be properly linked to the geometrical and thermodynamical variables. We have investigated the role of the two viscosity coefficients in two different limits, when only one of them is dynamically relevant. 
The shear viscosity has been taken into account in the case of an incompressible fluid, evaluating the modification that the regulator introduces in the well-known solution with matter creation derived in [@BK_Bianchi]. We have shown that such a peculiar phenomenon, characterized by an asymptotically vanishing energy density, not only survives in the Lichnerowicz formulation but is actually enhanced. The presence of a bulk viscosity term has been analyzed in the isotropic limit of the Bianchi I cosmology and again asymptotically to the initial singularity. We derived a power-law solution, which outlines some interesting features: i) the standard non-viscous Friedmannian behavior is encountered when bulk viscosity is a small deviation from equilibrium ($\beta_{2}<\nicefrac{1}{2}$), but this time the fluid presents an equation of state with an effective dependence on viscosity; ii) when bulk viscosity fully dominates the dynamics with allowance made for strong non-equilibrium effects ($\beta_{2}>1$), we see that the Universe evolves through a power-law inflation solution to the initial singularity, implying the divergence of the cosmological horizon and the subsequent disappearance of the Universe light-cone. Entropy production has been addressed in the isotropic Robertson-Walker limit and specifically for dominant viscosity, where entropy tremendously grows. Although one may argue whether or not the above-mentioned fluid description is possible in this extreme regime of dominant bulk viscosity, the issues above strongly suggest that a comprehensive understanding of the Universe birth and of the so-called horizon and entropy paradoxes cannot be achieved before a clear account of the non-equilibrium thermodynamical evolution near the singularity is properly provided. Acknowledgements {#acknowledgements .unnumbered} ================ This paper has been developed within the CGW collaboration (www.cgwcollaboration.it) and supported by the *TornoSubito* project. We would like to thank Riccardo Moriconi for his valuable assistance in the numerical integration of the model. M. V. is thankful to Dr. Shabnam Beheshti who significantly motivated and assisted this work and to Queen Mary, University of London for providing all the necessary facilities. [9]{} V. A. Belinskii, I. M. Khalatnikov, E. M. Lifshitz. *Oscillatory approach to a singular point in the relativistic cosmology*. Adv. Physics, 19(80), 525-573 (1970). V. A. Belinskii, I. M. Khalatnikov, E. M. Lifshitz. *A general solution of the Einstein equations with a time singularity*, Adv. Physics, 31 (6), 639-667 (1982). G. Montani, M. V. Battisti, R. Benini, G. Imponente. *Primordial Cosmology*. World Scientific (2008). A. A. Kirillov. *On the question of the characteristics of the spatial distribution of metric inhomogeneities in a general solution to Einstein equations in the vicinity of a cosmological singularity*. Soviet Physics JETP 76, p. 335 (1993). G. Montani. *On the general behavior of the universe near the cosmological singularity*, Classical and Quantum Gravity 12, 10, pp. 2505-2517 (1995). A. Ashtekar, T. Pawlowski, P. Singh. *Quantum nature of the big bang: Improved dynamics*. Physical Review D 74, 8, p. 084003 (2006). S. Pal. *Physical aspects of unitary evolution of Bianchi-I quantum cosmological model*. Classical and Quantum Gravity, 33, 4 (2016). Ya. B. Zeldovich, A. A. Starobinsky. *Particle Production and Vacuum Polarization in an Anisotropic Gravitational Field*. Sov. Phys. JETP, 34(6), 1159-1166 (1972). G. Montani. 
*Influence of particle creation on flat and negative curved FLRW universes*. Classical and Quantum Gravity 18, pp. 193–203 (2001). J.D. Barrow. *String-driven inflationary and deflationary cosmological models*. Nucl. Phys. B, 310, 743 (1988). C. Eckart. *The thermodynamics of irreversible processes III. Relativistic theory of the simple fluid*. Physical Review, 58 (1940). W. A. Hiscock and L. Lindblom. *Generic instabilities in first-order dissipative fluid theories*. Phys. Rev. D, 31(4) (1985). W. Israel. *Nonstationary irreversible thermodynamics: A causal relativistic theory*. Ann. Phys., 100(1-2):310– 331 (1976). V. A. Belinskii, I. M. Khalatnikov. *Influence of viscosity on the character of cosmological evolution*. Soviet Journal of Experimental and Theoretical Physics, 42, p. 205 (1975). V. A. Belinskii, E. S. Nikomarov, I. M. Khalatnikov. *Investigation of the cosmological evolution of viscoelastic matter with causal thermodynamics*. Soviet Journal of Experimental and Theoretical Physics, Vol. 50, No. 2, p. 21 (1979). A. Lichnerowicz. *Théories Relativistes de la Gravitation et de l’Électromagnétism*, Masson et Cie, Paris (1955). M. M. Disconzi, T. W. Kephart, R. J. Scherrer. *A new approach to cosmological bulk viscosity*. Physical Review D, 91:043532 (2015). M. M. Disconzi, T. W. Kephart, R. J. Scherrer. *On a viable first order formulation of relativistic viscous fluids and its applications to cosmology*. arXiv:1510.07187 \[gr-qc\] (2015). M. Czubak, M. M. Disconzi. *On the well-posedness of relativistic viscous fluids with non-zero vorticity*, J.Math.Phys. 57 (2016) 042501 (2014). M. M. Disconzi. *On the well-posedness of relativistic viscous fluids*. Nonlinearity, 27(8):1915–1935 (2014). L. D. Landau, E. M. Lifshitz. *The Classical Theory of Fields*, Volume 2 of A Course of Theoretical Physics, Pergamon Press (1971). C. W. Misner, K. S. Thorne, J. A. Wheeler. *Gravitation*, W H Freeman & Co (Sd) (1973). A. A. Kirillov, G. Montani. *Quasi-isotropization of the inhomogeneous mixmaster universe induced by an inflationary process*. Physical Review D 66, p. 064010 (2002). V. A. Belinskii, I. M. Khalatnikov. *Viscosity effects in isotropic cosmologies*. Soviet Journal of Experimental and Theoretical Physics 45, pp. 1–9 (1977). N. Carlevaro, G. Montani. *Bulk viscosity effects on the early universe stability*. Mod. Phys. Lett. A 20, 1729 (2005). E. W. Kolb, M. S. Turner. *The Early Universe*. Westview Press (1994). [^1]: Details about the dynamical study of the asymptotic solutions in the Eckart representation can be found in [@BK_Bianchi]. [^2]: We remind the reader that the isotropic Bianchi I model actually is the flat Robertson-Walker Universe
--- abstract: | We present a quantitative analysis of chiral symmetry breaking in two-flavour continuum QCD in the quenched limit. The theory is set up at perturbative momenta, where asymptotic freedom leads to precise results. The evolution of QCD towards the hadronic phase is achieved by means of dynamical hadronisation in the non-perturbative functional renormalisation group approach. We use a vertex expansion scheme based on gauge-invariant operators and discuss its convergence properties and the remaining systematic errors. In particular we present results for the quark propagator, the full tensor structure and momentum dependence of the quark-gluon vertex, and the four-fermi scatterings. author: - Mario Mitter - 'Jan M. Pawlowski' - Nils Strodthoff bibliography: - '../bib\_master.bib' title: Chiral symmetry breaking in continuum QCD --- Introduction ============ The understanding of the hadron spectrum as well as the phase structure of QCD at finite temperature and density are very important and long-standing problems. Even a qualitative access to the hadron spectrum beyond low-lying resonances and the phase structure at large densities requires a quantitative hold on competing fluctuations as well as the phenomena of dynamical chiral symmetry breaking and confinement. Building on previous studies [@Braun:2008pi; @Braun:2009gm], this work together with a related qualitative study of the unquenched system in [@Braun:2014ata] provides the foundation for achieving this goal. The present work and [@Braun:2014ata] are the first works within a collaboration (fQCD) aiming at a quantitative functional renormalisation group framework for QCD [@FRG-QCD]. While the correct implementation of relative fluctuation scales is not required to reproduce the thermodynamic properties of QCD at vanishing chemical potential [@Herbst:2013ufa], it will become increasingly important at finite chemical potential. As was detailed in [@Helmboldt:2014iya] using the example of quantum/thermal and density fluctuations, mismatches in thermal/density fluctuation scales inevitably lead to large systematic errors at finite chemical potential. This is particularly important for the question of the potential critical endpoint in the QCD phase diagram. Functional continuum approaches provide access to the mechanisms of dynamical chiral symmetry breaking and confinement, as well as their interrelation. To date, functional computations require larger or smaller amounts of phenomenological input in the form of running couplings, vertex models, or further low-energy parameters, see [@Berges:2000ew; @Pawlowski:2005xe; @Gies:2006wv; @Schaefer:2006sr; @Braun:2011pp; @Alkofer:2000wg; @Roberts:2000aa; @Fischer:2006ub; @Fischer:2008uz; @Binosi:2009qm; @Maas:2011se; @Boucaud:2011ug] and references therein. In this work we present the first closed, self-consistent and quantitative computation for quenched continuum QCD in the vacuum. A prominent feature of this calculation is the lack of additional model input; the computation depends only on the fundamental parameters of QCD, the strong coupling $\alpha_s$ and the current quark masses, which are set at large, perturbative momentum scales. We implement a systematic vertex expansion scheme that is fully capable of taking the non-perturbative physics at low momenta into account. Gauge invariance is implemented and tested in the form of modified Slavnov-Taylor identities (mSTIs). In the present work we focus on the matter system as one of the two subsectors of the full calculation. 
Using results for the Yang-Mills propagators [@Fischer:2008uz; @FP], we solve the matter sector in a quenched approximation and assess the quality of our results in comparison to lattice QCD, see Fig. \[fig:main\_result\]. A separate analysis of the fully coupled system is presented elsewhere. The paper is organised as follows. In Sec. \[sec:setup\] we describe our approach to QCD, in particular we briefly introduce the dynamical hadronisation procedure within the functional renormalisation group approach and describe the used truncation scheme. In Sec. \[sec:results\], we present our results and comment on the mechanism of chiral symmetry breaking in the light of our investigations. Technical details on modified Slavnov-Taylor identities and on our truncation can be found in the appendices. QCD with the Functional RG {#sec:setup} ========================== In quenched QCD there are no matter contributions to the gluon/ghost correlation functions, since these contributions involve only diagrams with closed quark loops. Therefore, all gluon/ghost correlation functions are given by those of the pure Yang-Mills theory. Consequently, we use the functional renormalisation group (FRG) results for Yang-Mills gluon and ghost propagators, [@Fischer:2008uz; @FP] in our calculation. We perform a vertex expansion including a fully momentum-dependent quark propagator and quark-gluon vertex as well as dynamically generated four-fermi interactions in the matter sector and ghost-gluon and three-gluon vertices in the glue sector. Furthermore we use gauge invariance in the form of mSTIs to include higher quark-gluon interactions as well as the four-gluon vertex. Mesonic interactions are included via the mechanism of dynamical hadronisation, an RG-scale dependent change of variables which constitutes an economic way to take resonant structures in four-fermi interaction channels into account. In the following two subsections we give a brief account of the FRG approach and our truncation scheme. A more complete description of the truncation and a discussion of its stability is found in Apps. \[app:truncation\] and \[app:results\_stab\]. Dynamical hadronisation in the functional renormalisation group {#sec:frg} --------------------------------------------------------------- The central object in the functional renormalisation group approach to quantum field theory is the scale-dependent effective action $\Gamma_k$. It generalises the effective action $\Gamma$, in the spirit of the Wilsonian RG, by introducing a cutoff scale $k$ such that $\Gamma_k$ includes only fluctuations from momentum modes with momenta larger than $k$, see [@Berges:2000ew; @Pawlowski:2005xe; @Gies:2006wv; @Schaefer:2006sr; @Braun:2011pp] for QCD-related reviews. On a technical level this is achieved by giving a momentum dependent mass to modes with momenta smaller than the scale $k$ by means of an infrared regulator function $R_k$. In this way the scale-dependent effective action $\Gamma_k$ interpolates between a microscopic action, parameterised by a finite set of parameters, at some large cutoff scale $k=\Lambda_\text{UV}$ and the full quantum effective action in the limit $k\rightarrow 0$. 
The evolution of $\Gamma_k$ with the RG-scale $k$ is described in terms of an exact equation of one-loop structure [@Wetterich:1992yh] $$\label{eq:floweq} \partial_t\Gamma_k=\frac{1}{2}\text{Tr}\, \frac{1}{\Gamma_k^{(2)}+R_k}\partial_t R_k\,.$$ Here $\Gamma_k^{(2)}$ denotes the second functional derivative with respect to the fields, $t=\log(k/\Lambda)$ with some reference scale $\Lambda$, and the trace includes a sum over all field species and internal indices as well as a momentum-space integration. Note that the flow equation is one-loop exact; higher loop corrections and non-perturbative effects are incorporated due to the presence of dressed, field-dependent propagators $(\Gamma_k^{(2)}+R_k)^{-1}$. Flow equations for propagators or higher order $n$-point functions are obtained by taking appropriate functional derivatives of (\[eq:floweq\]). Despite its nature as an exact equation, most practical applications require an Ansatz for the scale-dependent effective action. Therefore, identifying the operators that carry the relevant physical information is of utmost importance for any quantitatively reliable solution of the flow equation (\[eq:floweq\]). Four-fermi interactions, e.g. in the scalar channel with coupling $\lambda_{(\bar\psi\psi)^2}$, are created dynamically from two-gluon exchange box diagrams that are proportional to $\alpha_s^2$. The back-coupling of these four-fermi interactions on the system is suppressed by additional powers of $\alpha_s^2$, for example their contribution to the four-fermi system is of order $\alpha_s^4$. However, as the strong running coupling, $\alpha_s$, becomes large close to $\Lambda_{\text{QCD}}$, the suppression of the four-fermi interactions is overcome and they start to grow large. Once the four-fermi coupling becomes sufficiently large, the four-fermi dynamics eventually dominate and lead to a four-fermi resonance. This resonance corresponds to the light pions as pseudo–Nambu-Goldstone modes in the spontaneously broken phase. For even smaller momentum scales, quark interactions exhibit dominant scatterings of these resonant momentum channels. Hence, it is advantageous to describe these interactions in terms of composite operators, which is achieved via the introduction of scale-dependent mesonic field operators [@Gies:2001nw; @Pawlowski:2005xe; @Floerchinger:2009uf; @Braun:2014ata]. In the present work we follow the dynamical hadronisation procedure set up in [@Pawlowski:2005xe; @Braun:2014ata]. In each renormalisation group step this leads e.g. to $\lambda_{\pi}\rightarrow h_\pi^2/(2 m_{\pi}^2)$ at vanishing $s$-channel momentum in the four-fermi channel. Here $\lambda_{\pi}$ corresponds to the exchange of pions with mass $m_\pi$ and Yukawa coupling $h_\pi$. This exact dynamical change of field-variables avoids the numerically inconvenient singularity shown in Fig. \[fig:4fermirebos\], which is a consequence of neglected momentum dependencies in the four-fermi interaction. Additionally, it provides a smooth transition from QCD degrees of freedom to constituent quarks and light mesons as low-energy effective degrees of freedom. The resulting low-energy description in terms of a quark-meson model introduces no model parameter dependence provided the UV initial scale is chosen large enough, $\Lambda_\text{UV}\gg\Lambda_\text{QCD}$, see Sec. \[sec:4fermi\] for an explicit demonstration. 
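The qualitative mechanism behind this resonance can be illustrated with a toy flow. The sketch below (Python; the form of the beta function, its coefficients and the schematic $\alpha_s(k)$ are illustrative placeholders, not the actual QCD input of this work) integrates a dimensionless four-fermi coupling $\lambda$ with the generic structure $\partial_t\lambda = 2\lambda - a\,\alpha_s^2 - b\,\alpha_s\lambda - c\,\lambda^2$: once $\alpha_s(k)$ grows sufficiently large, the fixed points of the beta function annihilate and $\lambda$ diverges at a finite scale, which in the full calculation is traded for the finite combination $h_\pi^2/(2m_\pi^2)$ by dynamical hadronisation.

```python
# Toy illustration of a four-fermi coupling driven by the gauge coupling.
# All coefficients and the running of alpha_s are schematic placeholders.
import numpy as np

def alpha_s(k, Lambda_qcd=0.5):
    """Schematic strong coupling: perturbative-like decay, saturating in the IR."""
    return 1.0 / np.log(np.e + (k / Lambda_qcd) ** 2)

def beta_lambda(lam, k, a=4.0, b=1.0, c=1.0):
    """Generic structure of the flow of a dimensionless four-fermi coupling:
    canonical scaling + gluon box term + mixed term + purely fermionic term."""
    al = alpha_s(k)
    return 2.0 * lam - a * al ** 2 - b * al * lam - c * lam ** 2

# integrate from the UV (k = 20 GeV) towards the IR with an Euler step in t = ln k
k, lam, dt = 20.0, 0.0, -1e-4
while k > 0.1:
    dlam = beta_lambda(lam, k) * dt
    if abs(dlam) > 1e3:  # coupling resonates: chiral symmetry breaking scale reached
        print(f"four-fermi coupling resonates near k ~ {k:.2f} GeV")
        break
    lam += dlam
    k *= np.exp(dt)
else:
    print("no resonance down to k = 0.1 GeV (subcritical gauge coupling)")
```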
Finally we want to stress that the restriction to such a small set of low-energy degrees of freedom is justified by the comparably large masses in the remainder of the spectrum of the strong interaction. Since any of the hadrons can play a dynamical role only below about $500$ MeV, their fluctuations are strongly suppressed in any of the loops by their comparably large mass, see also [@Braun:2014ata]. ![Dynamical hadronisation: four-fermi coupling ($\lambda_{\pi}$, see App. \[trunc:4f\]) vs. corresponding coupling from dynamical hadronisation, $(k\, h_{\pi})^2/(2m_{\pi}^2)$, in a qualitative approximation.[]{data-label="fig:4fermirebos"}](fourfermi_msp){width="48.00000%"} Truncation of effective action {#sec:truncation} ------------------------------ ![image](mattersystem_diagrams){width="98.00000%"} In our truncation we consider, with the help of [@program], the momentum dependence of all vertices which include at least one relevant or marginally relevant operator. In the pure glue sector we calculate the ghost-gluon and three-gluon vertices in single channel approximations including only the classical tensor structure. Moreover we use modified Slavnov-Taylor identities to fix the momentum dependence of the four-gluon vertex in this channel. This approximation is motivated by results from other methods [@Pelaez:2013cpa; @Blum:2014gna; @Eichmann:2014xya; @Binosi:2014kka; @Gracey:2014mpa; @Cyrol:2014kca], which show non-trivial momentum-dependencies only in momentum regions where the gluon sector already starts to decouple from the system. The matter-glue coupling, as the interface between the two subsectors, is of crucial importance for the whole system. Therefore we include the full momentum-dependence and all eight tensor structures in the quark-gluon vertex. Furthermore, there are two exceptions from the RG relevance counting, in the sense that we also include perturbatively irrelevant operators in our truncation. Firstly, in the matter sector we include in addition the four-fermi interactions, which are required for the description of chiral symmetry breaking. Secondly, for any non-classical operator which shows a significant contribution in the flow, we identify the corresponding gauge-invariant completion and include the resulting higher order vertices in the flow equations. The general vertex construction follows [@Fischer:2009tn]. Suppressing the explicit RG-scale dependence, we parameterise $$\begin{aligned} \nonumber \Gamma^{(n)}_{\Phi_1\cdots \Phi_n}(p_1,...,p_{n-1}) & \\[2ex]  & \hspace{-1.8cm}= \bar \Gamma^{(n)}_{\Phi_1\cdots \Phi_n}(p_1,...,p_{n-1}) \prod_{i=1}^{n}\sqrt{\bar Z_{\Phi_i}(p_i)}\,, \label{eq:genvert}\end{aligned}$$ where we introduced a superfield $$\begin{aligned} \label{eq:Phi} \Phi=(A_\mu\,,\, c\,,\,\bar c\,,\,q\,,\,\bar q, \vec \pi\,,\,\sigma\,,...)\end{aligned}$$ which subsumes all dynamical degrees of freedom including the effective low-energy fields generated by dynamical hadronisation. 
The tensor kernel $\bar\Gamma^{(n)}_{\Phi_1\cdots\Phi_n}$ is expanded in a basis of tensor structures ${\cal T}_{\Phi_1\cdots \Phi_n}^{(i)}$ $$\begin{aligned} \label{eq:tensorkernel} &\bar\Gamma^{(n)}_{\Phi_1\cdots \Phi_n}(p_1,...,p_{n-1})= \nonumber\\[1ex] &\qquad \sum\limits_i z_{\Phi_1\cdots \Phi_n}^{(i)}(p_1,....,p_{n-1}) {\cal T}_{\Phi_1\cdots \Phi_n}^{(i)}(p_1,...,p_{n-1}) \,.\end{aligned}$$ Since the dressing functions $z_{\Phi_1\cdots \Phi_n}^{(i)}(p_1,....,p_{n-1})$ depend on our choice of $\bar Z_{\Phi_i}$, the latter are at our disposal to give special properties like RG-invariance in the perturbative regime to the former. If not specified otherwise, we choose $\bar Z_{\Phi_i}(p)\equiv Z_{\Phi_i,k}(p)$. Important examples are the classical vertices with tensor structures ${\cal T}_{\rm class}$ present in the classical action. We use $$\begin{aligned} \label{eq:classT} {\cal T}_{{\rm class},\Phi_1\cdots \Phi_n}(p_1,\ldots,p_{n-1})=& S^{(n)}_{\Phi_1\ldots \Phi_n}\Bigr|_{g=1}\,,\end{aligned}$$ where $S^{(n)}$ denotes the appropriate $n$-th functional derivative of the action. By setting $g=1$ in (\[eq:classT\]), the running coupling is taken into account via the dressing functions $z_{\Phi_1\cdots \Phi_n}^{(i)}$ in (\[eq:tensorkernel\]). As a consequence of our choice for $\bar Z_{\Phi_i}(p)$, the $z_{\Phi_1\ldots\Phi_n,k\equiv 0}^{(1)}(p_1,\dots,p_{n-1})$ run like appropriate powers of the strong running coupling with the momenta $p_i$ in the perturbative regime. The same holds for the RG-scale dependence of the $z_{\Phi_1\ldots \Phi_n}^{(1)}(0,\dots,0)$ at perturbative scales. In the following we discuss in some detail the constituents of our Ansatz for the bosonised effective action of Landau-gauge QCD, which are also summarised pictorially in Fig. \[fig:truncation\]. For a more detailed description the reader is referred to App. \[app:truncation\]. The stability of this truncation and the systematic errors are discussed in App. \[app:results\_stab\].\ *Glue sector*: As a consistent scale-setting both in the perturbative and in the non-perturbative regime is crucial for our calculation, we use YM FRG data [@Fischer:2008uz; @FP] for both the gluon propagator and the ghost propagator, see Fig. \[fig:gluonprop\]. Here, we have matched our scale to the corresponding lattice scales in [@Sternbeck:2006cg] via the peak position in the gluon dressing function $1/Z_A$, which translates to $$\begin{aligned} \alpha_s(20 \text{ GeV}) = 0.21\ .\end{aligned}$$ The dressing functions of the Yang-Mills three-point functions, $z_{\bar c A c},\, z_{A^3}$, are calculated momentum-dependently for a single momentum channel. The four-gluon vertex is approximated using the three-gluon vertex, see App. \[trunc:3g\]. This is a very good approximation down to semi-perturbative momenta [@Pelaez:2013cpa; @Blum:2014gna; @Eichmann:2014xya; @Binosi:2014kka; @Gracey:2014mpa; @Cyrol:2014kca], whereas deviations occur mostly at momenta where the gluonic mass gap already implies decoupling.\ *Matter sector*: We take into account the full momentum dependence of the quark propagator, parameterised by its wavefunction renormalisation $Z_q(p)$ and mass function $M_q(p)$, where we have for the current quark mass $$\begin{aligned} M_q(20\text{ GeV}) = 1.3 \text{ MeV}, \end{aligned}$$ see App. \[trunc:qpr\] for details. The treatment of the quark-gluon vertex is of crucial importance for the whole system. 
Therefore we include the full momentum dependence of all eight linearly independent tensor structures ${\cal T}^{(i)}_{\bar q A q}$ of Landau gauge as described in App. \[trunc:qgl\]. Additionally we perform a gauge-invariant completion of any quark-gluon vertex tensor structure that contributes quantitatively, leading to the inclusion of two-quark–two-gluon and two-quark-three-gluon vertices that are approximated gauge-invariantly, see App. \[trunc:qgl\]. In the four-fermi sector we take into account a Fierz-complete basis of all ten tensor structures consistent with a $U(1)_V\times SU(2)_V$ symmetry, see App. \[trunc:4f\], and approximate their momentum dependence using a single ($s$-channel) momentum variable. As discussed previously, we utilize dynamical hadronisation to effectively remove the resonant $\sigma-\pi$ channel from the four-fermi tensor structures via the inclusion of effective (quark-)meson interactions. In the mesonic sector we include a scale-dependent mesonic wavefunction renormalisation factor, $Z_\pi$, and a Yukawa interaction between quarks and mesons, $h_\pi$. Additionally a scale-dependent effective potential, $U(\rho)$ captures higher mesonic interactions in the non-perturbative regime of spontaneously broken chiral symmetry, see e.g. [@Schaefer:2004en]. This approximation has been shown to be in quantitative agreement with the full momentum dependence [@Helmboldt:2014iya]. Results {#sec:results} ======= Quark-gluon interactions {#sec:quark} ------------------------ We start by considering the quark-gluon vertex and focus in particular on additional non-classical tensor structures, as they are shown for the symmetric momentum configuration $p_1^2=p_2^2 = p_3^2$ in Fig. \[fig:quarkgluonvertex\_components\_msp\]. To investigate their importance we calculate the full momentum dependence of a basis for the transversally projected Landau-gauge quark-gluon vertex. The resulting eight tensor structures include the classical tensor structure, $$\begin{aligned} [{\cal T}^{(1)}_{\bar q A q}]^\mu=\gamma^\mu\,,\end{aligned}$$ three further chirally symmetric tensors $$\begin{aligned} {\cal T}^{(5)}_{\bar q A q}\,, \qquad {\cal T}^{(6)}_{\bar q A q}\,, \qquad{\cal T}^{(7)}_{\bar q A q}\,,\end{aligned}$$ and four tensors which break chiral symmetry $$\begin{aligned} {\cal T}^{(2)}_{\bar q A q}\,, \qquad{\cal T}^{(3)}_{\bar q A q}\,, \qquad{\cal T}^{(4)}_{\bar q A q}\,, \qquad{\cal T}^{(8)}_{\bar q A q}\,, \end{aligned}$$ listed explicitly in App. \[trunc:qgl\]. Each of the eight tensor structures leads to a contribution in the effective action that, if separated from the remainder of the action, violates gauge invariance. For example, the classical tensor structure, $\gamma^\mu$, corresponds to the term $\bar q \slashed A q$ in the effective action which is by itself not gauge invariant. However, it appears as part of the gauge-covariant derivative $\bar q \slashed D q$ which respects gauge invariance. On the other hand, for the additional tensor structures, ${\cal T}^{(i)}_{\bar q A q}$, $i>1$, such a gauge-covariant completion is not automatically included. A naïve inclusion of these tensor structures alone would therefore violate gauge invariance in the form of (modified) Slavnov-Taylor identities, see the discussion in App. \[app:mSTI\]. 
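The chiral classification quoted above can be made explicit: a Dirac structure ${\cal T}$ leaves $\bar q\,{\cal T}\,q$ invariant under axial rotations precisely if it anticommutes with $\gamma_5$, which for the structures at hand means an odd number of $\gamma$-matrices. The following minimal sketch (Python/NumPy; the chiral Euclidean representation of the $\gamma$-matrices is an illustrative choice, and the tensors are those listed explicitly in App. \[trunc:qgl\], up to constant prefactors) checks this for a few representatives.

```python
# Check which quark-gluon tensor structures preserve chiral symmetry:
# a Dirac structure T preserves it iff {gamma_5, T} = 0.
import numpy as np

s = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
Z2, I2 = np.zeros((2, 2)), np.eye(2)
# Euclidean, hermitian gamma matrices in a chiral representation (illustrative choice)
gamma = [np.block([[Z2, -1j * sk], [1j * sk, Z2]]) for sk in s] + [np.block([[Z2, I2], [I2, Z2]])]
gamma5 = gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

def slash(p):
    return sum(p[mu] * gamma[mu] for mu in range(4))

def anticom(a, b):
    return a @ b + b @ a

p, q = np.array([1.0, 0.3, -0.2, 0.7]), np.array([-0.4, 0.8, 0.1, 0.5])
mu = 0  # look at one Lorentz component of the vertex

structures = {
    "T1 ~ gamma^mu                 ": gamma[mu],
    "T2 ~ (p-q)^mu * 1             ": (p - q)[mu] * np.eye(4),
    "T4 ~ i(pslash+qslash) gamma^mu": 1j * slash(p + q) @ gamma[mu],
    "T5 ~ (pslash+qslash)(p-q)^mu  ": slash(p + q) * (p - q)[mu],
    "T7 ~ [pslash,qslash] gamma^mu ": (slash(p) @ slash(q) - slash(q) @ slash(p)) @ gamma[mu],
}
for name, T in structures.items():
    chiral = np.allclose(anticom(gamma5, T), 0)
    print(name, "chirally symmetric" if chiral else "breaks chiral symmetry")
```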
In the semi-perturbative, chirally symmetric regime we find the gauge-invariant operator $$\begin{aligned} \label{eq:STI_operator_I} {\text{i}}\sqrt{ 4 \pi \alpha_{s} } \, \bar q\, \gamma_5 \gamma_\mu \,\epsilon_{\mu\nu\rho\sigma} \{ F_{\nu\rho}\,,\, D_\sigma\} \,q\,, \end{aligned}$$ whose contribution to the term of $\mathcal{O}(\bar q A q)$ is proportional to the tensor $$\begin{aligned} \tfrac{1}{2}{\cal T}^{(5)}_{\bar q A q}+{\cal T}^{(7)}_{\bar q A q}\ .\end{aligned}$$ Together with our results for the dressing functions, $z_{\bar q A q}^{(i)}(p,q)$ evaluated at $p^2=q^2=(p+q)^2$, Fig. \[fig:quarkgluonvertex\_components\_msp\], we conclude that this is indeed the gauge-invariant operator that determines most of the strength of the chirally symmetric non-classical tensor structures, see also Fig. \[fig:STI\]. Since the operator in (\[eq:STI\_operator\_I\]) also contributes to tensor structures in higher vertices, namely the two-quark-two-gluon and two-quark-three-gluon interactions, we include these as well in our truncation and dress them in accordance with gauge symmetry, see Apps. \[app:mSTIvert\] and \[trunc:qgl\] for further details. Similarly we find that the chiral symmetry breaking operator $$\begin{aligned} \label{eq:STI_operator_II} \bar q\, \left(\delta_{\mu\nu} +\left[\gamma_\mu,\gamma_\nu\right]\right) D_\mu D_\nu \,q\,, \end{aligned}$$ contributes at $\mathcal{O}(\bar q A q)$ to the tensor $$\begin{aligned} \tfrac{1}{2}{\cal T}^{(2)}_{\bar q A q}+{\cal T}^{(4)}_{\bar q A q}\ .\end{aligned}$$ Since this is the most relevant operator in the phase of spontaneously broken chiral symmetry, we again include the corresponding contribution to the two-quark-two-gluon vertex with gauge invariant dressing. The explicit calculation of the dressing functions of the higher interactions to check the quantitative importance of deviations from the STI, which are expected to occur below momenta of $\mathcal{O}(1$ GeV$)$, is deferred to future work. We want to stress at this point that if we took into account only a full basis for the quark-gluon vertex without the corresponding gauge-invariant partner tensor structures in the higher vertices, we would see considerably different results. In particular, the running coupling, see Fig. \[fig:running\_couplings\], as defined from the dressing function of the classical tensor structure, $z_{\bar q A q}^{(1)}$, would deviate from the corresponding ghost-gluon running coupling at considerably larger momenta. However, the degeneracy of the running couplings defined from the different vertices at semi-perturbative momentum scales is a consequence of gauge invariance. Hence we conclude that the higher quark-gluon vertices are important for the consistency of the truncation. Moreover, since diagrams that contain the two-quark-two-gluon vertex have a different number of quark lines than the ones that contain only classical vertices, we expect a qualitative effect at finite chemical potential due to these higher quark-gluon interactions. Note that for assessing the importance of the different tensor structures one has to take into account not only their relative strength but also their respective symmetry properties. For example, simply extracting the relative strength from Fig. \[fig:quarkgluonvertex\_components\_msp\], we would conclude that the operator in (\[eq:STI\_operator\_I\]) seems to be the most important one by far. We find, however, that the operator in (\[eq:STI\_operator\_II\]) is also very important for the value of the quark propagator mass function. 
This is explained by the fact that (\[eq:STI\_operator\_II\]) breaks chiral symmetry and therefore contributes directly to the quark mass function. Quark propagator ---------------- Next we discuss our solution for the quark propagator, see Fig. \[fig:quarkpropagator\_msp\], for an earlier study see e.g. [@Fischer:2003rp], and the effect of different quark-gluon interactions. We find very convincing agreement with results obtained in lattice QCD in the quenched approximation [@Bowman:2005vx], which are shown with dimensionful quantities rescaled by a factor of $0.91$ to match the scale of [@Sternbeck:2006cg] and [@Fischer:2008uz; @FP]. However, some care is necessary when comparing our propagator to the lattice results, since the quenched approximation in lattice simulations sets the fermion determinant to unity, whereas we just used a quenched gluon propagator. Apart from the classical tensor structure, the most important contribution to the quark propagator stems from the tensor structures $\tfrac{1}{2}{\cal T}^{(5)}_{\bar q A q}+{\cal T}^{(7)}_{\bar q A q}$ for $Z_q(p)$, and $\tfrac{1}{2}{\cal T}^{(2)}_{\bar q A q}+{\cal T}^{(4)}_{\bar q A q}$ for $M_q(p)$, where we find it necessary to include the full momentum dependence of the corresponding dressing functions $z_{\bar q A q}^{(i)}(p,q)$. It is only the combination of all these terms, including their gauge invariant partner structures in the quark-gluon vertex equation together with their momentum dependencies, that leads to the very good agreement with the lattice propagator. In particular this concerns the wave function renormalisation $Z_q(p)$ for small momenta, where an important contribution stems from mesonic fluctuations in the infrared. These fluctuations have been included with functional methods, e.g. in [@Jungnickel:1995fp; @Berges:2000ew; @Schaefer:2006ds; @Braun:2009gm; @Mitter:2013fxa; @Alkofer:1993gu; @Fischer:2007ze; @Fischer:2008sp; @Cloet:2008re]. Restricting the discussion only to the relative importance of quark-gluon vertex tensor structures, recent findings in Dyson-Schwinger studies [@Hopfer:2013np; @Williams:2014iea; @Aguilar:2014lha] agree with ours, see also [@Alkofer:2008tt] for an earlier study. Moreover, our present findings suggest the inclusion of the STI-consistent higher quark-gluon interactions in future DSE-studies. Finally we want to point out that one crucial contribution to the quark mass function comes from the addition to the flow of the Yukawa coupling, $\partial_t\Delta h$, due to dynamical hadronisation, see . As soon as one runs into the spontaneously broken phase of QCD, $\langle \sigma\rangle\neq 0$, this term contributes to the quark mass function as well via the relation $\partial_t\Delta M_q(0)\varpropto \langle \sigma\rangle \partial_t\Delta h_\pi $. Momentum dependencies are very important in $\partial_t\Delta M_q(p)$, since we expect this term to be approximately zero for momenta larger than the chiral symmetry breaking scale, see App. \[sec:dynhad\] for details. Similarly, we had to include momentum dependencies in the remaining four-fermi interactions that appear in the tadpole diagram. In the language of dynamical hadronisation, chiral symmetry breaking in terms of the quark mass function is then triggered by the additional term, $\partial_t\Delta M_q$, see . This, however, is just due to the chosen parameterisation of the four-fermi interaction in terms of mesons. 
Without dynamical hadronisation, chiral symmetry breaking would be driven by the tadpole diagrams containing the resonant (momentum-dependent) four-fermi channel, which in turn is driven by quark-gluon interactions. Gluonic vertices and running couplings {#sec:runningcouplings} -------------------------------------- From our calculated momentum-dependent QCD vertices, namely from the quark-gluon, the ghost-gluon and the three-gluon vertex we can extract running couplings. Following the detailed discussion in App. \[app:mSTI\], these running couplings are required to be degenerate at (semi-) perturbative momenta by means of Slavnov-Taylor identities, whereas they will start to deviate in the non-perturbative regime below momenta $\mathcal{O}(1$ GeV$)$. Additionally, there is no unique definition of a running coupling extracted from a particular vertex in the non-perturbative regime. Here we define effective running couplings that explicitly take the decoupling due to the gluonic mass gap into account, illustrated exemplarily for the running coupling extracted from the quark-gluon vertex evaluated at the symmetric point, $$\begin{aligned} \label{eq:alphasAqq} \alpha_{\bar q Aq}(p^2) &= \frac{\left(z_{\bar q A q}^{(1)}(p,q)\right)^2 }{4\pi }\Bigg\vert_{p^2=q^2=(p+q)^2}\,.\end{aligned}$$ The running couplings from different vertices are shown in Fig. \[fig:running\_couplings\]. Irrespective of the definition, all running couplings coincide down to momenta of 4 GeV. This underlines the fact that STI violations are negligible, without the necessity to tune initial conditions as an implicit solution of the STI as described in App. \[app:mSTIinitial\]. Even more, the degeneracy of all running couplings at large momenta represents a highly nontrivial statement about the consistency of our truncation, in the sense that all important contributions in the semi-perturbative regime have been consistently taken into account. Note that the very good agreement of the quark-gluon and the ghost-gluon running coupling is in part a consequence of the full momentum dependence which is self-consistently taken into account in the quark gluon vertex. This suggests that a similar improvement in the glue sector might lead to an even better agreement of the three-gluon coupling with the two other couplings. At low momenta, the gap in the gluon propagator becomes important and we find a clear difference between the strength of the various vertices. In particular, the three-gluon vertex drops considerably earlier, which is mainly due to the larger number of gluon legs. Relative importance of four-fermi channels {#sec:4fermi} ------------------------------------------ As mentioned in the discussion of our truncation in Sec. \[sec:truncation\], we include a Fierz-complete basis with all ten basis elements consistent with the $SU(N_c)_c\otimes SU(N_f)_V\otimes U(1)_B$ symmetry to assess the importance of different four-fermi interaction channels, see App. \[trunc:4f\] for details on the choice of the basis. All four-fermi couplings are shown in Fig. \[fig:fourfermis\_msp\], where in particular $h_{\pi}^2/2 m_{\pi}^2$ corresponds to $\lambda_{(S-P)_+}+\lambda_{(S+P)_-}$ and $\lambda_{\eta'}$ to $\lambda_{(S-P)_+}-\lambda_{(S+P)_-}$. We find that the dynamics of spontaneous chiral symmetry breaking is almost exclusively driven by the chirally symmetric four-fermi channel $\lambda_{(S-P)_+}$, which corresponds to the quantum numbers of the $\sigma,\pi,\eta$ and $a$-mesons. 
However, this channel is split by the presence of the ’t Hooft determinant coupling, $\lambda_{(S+P)_-}$, such that only the $\sigma$-meson and pions become very light. The dynamically created quark mass is already sufficient to strongly suppress the $\eta'$-channel in comparison to the resonant pion channel. Additional contributions due to the $U(1)_A$-anomaly would lead to an even stronger suppression, see [@Pawlowski:1996ch; @Mitter:2013fxa]. Additionally, for sufficiently large initial scales, these anomalous contributions are suppressed relative to the contribution that originates from the explicit symmetry breaking due to non-vanishing current quark masses. First checks indeed indicate that the anomalous contributions at a sufficiently large initial cutoff scale do not play a quantitatively important rôle; a more detailed study will be presented elsewhere. On the other hand, anomalous contributions corresponding to fluctuations below the cutoff scale are already taken into account by integrating the FRG running. This has also been demonstrated e.g. for the quantum mechanical anharmonic oscillator [@Zappala:2001nv]. Therefore, all but the resonant pion four-fermi channel constitute subleading contributions with a quantitative effect of less than $5$ %, see Fig. \[fig:4fermirebos\]. Independent of their relative strength, the suppression of any of the four-fermi interactions is overcome by the strength of $\alpha_s$ only in the non-perturbative regime of QCD at scales of $\mathcal{O}(1$ GeV$)$. In the light of these results it is sufficient to take into account only the $(\sigma-\pi)$-channel, provided one uses a projection obtained from a full basis to avoid ambiguities in the projection procedure. Furthermore, note that in the purely fermionic theory all ten channels diverge at the chiral symmetry breaking scale, signaling resonant quark-anti-quark states, as illustrated in Fig. \[fig:4fermirebos\] for the $(\sigma-\pi)-$channel. These divergences are a consequence of ignoring momentum dependencies and can be removed by dynamically hadronising only the $(\sigma-\pi)$-channel. Nevertheless it would be interesting to also bosonise other four-fermi channels to investigate the properties of the corresponding bound states. Alternatively an investigation of the momentum dependencies of the four-fermi interactions themselves is also conceivable. As a word of caution, our statements about the relative strength of four-fermi channels are only valid in the vacuum, as in particular finite chemical potential is expected to shift the relative strength of four-fermi interaction channels. Finally we want to mention that Fig. \[fig:fourfermis\_msp\] captures only the zero external momentum limit of the four-fermi interaction channels. Although it was necessary to calculate the momentum dependence of the $s$-channel momentum configuration for the evaluation of the quark propagator for this work, we postpone a thorough discussion of the momentum dependence of the four-fermi interactions to future publications. Here we only note that the effect of such momentum dependencies on the remainder of the matter system is very weak, since all but the $\sigma$-$\pi$ channel are very weak. In the latter, on the other hand, we have implicitly taken momentum dependencies into account via dynamical hadronisation. Furthermore we want to stress that the dynamical hadronisation procedure of introducing effective mesonic degrees of freedom introduces no model parameters in the theory. 
Therefore, the infrared physics in terms of quarks and mesons is independent of the ultraviolet starting point and initial values. This is demonstrated explicitly for the Yukawa interaction between quarks and mesons in Fig. \[fig:yukawaconvergence\], where different initial scales and initial values for the Yukawa interaction have been chosen, but where all trajectories converge towards the same trajectory in the IR. Mechanism of chiral symmetry breaking {#sec:chiralsymmbr} ------------------------------------- As outlined in the introduction, a proper understanding of the mechanisms of confinement and chiral symmetry breaking is a crucial step towards a quantitatively reliable approach to the phase diagram of QCD at finite chemical potential. Here we comment on the mechanism of chiral symmetry breaking from the point of view of the matter system. In [@Gies:2001nw; @Gies:2002hq; @Gies:2003dp; @Gies:2005as; @Braun:2005uj; @Braun:2006jd; @Braun:2011pp] a simple picture for chiral symmetry breaking in quenched QCD was put forward. In their analysis the IR fixed points in the four-fermi interactions are destabilised if the gauge coupling exceeds a critical coupling $\alpha_\text{crit}$, and as a result the four-fermi coupling becomes singular. Although the argument is qualitatively correct, in quenched QCD the picture is not so simple, as the drop of the gauge coupling at small momenta, see Fig. \[fig:alpha\_msp\], lets the quark sector become subcritical again. This was discussed as one possible scenario in [@Braun:2006jd], but is confirmed here as the actual physical situation. In Fig. \[fig:alpha\_msp\], we show the different running couplings and the critical gauge coupling. Since the gauge coupling decreases below the critical coupling for decreasing momenta, it is merely the area above the critical value line which is decisive for the occurrence of chiral symmetry breaking. Our findings indicate that an approach where the vertex strength of all tensor structures of the quark-gluon vertex is subsumed in an enhanced strength of the classical tensor structure lacks quantitative precision. Using such an enhanced quark-gluon vertex in our calculation would lead to much too large contributions in the four-fermi sector, from gluonic box diagrams which grow like $\alpha^2_{\bar q A q}$. Taking into account different tensor structures approximately corresponds to a sum of contributions $\simeq \sum_i \alpha_i^2$, if we denote the running couplings associated with different components of the quark-gluon vertex as $\alpha_i$ and neglect cross terms, whereas the enhanced vertex from the single channel approximation contributes as the square of the sum $\simeq (\sum_i\alpha_i )^2$ in the four-fermi box diagram. Finally, we briefly discuss the mechanism of chiral symmetry breaking which is at work in our framework. Here, chiral symmetry breaking is driven by four-fermi interactions. In a framework of dynamical hadronisation this is reflected in the corresponding contributions to the Yukawa coupling/quark mass. Therefore, our calculation requires significantly less vertex strength in the quark-gluon vertex in order to see chiral symmetry breaking compared to the required strength in the single channel approximation described above. In our framework, including just the classical tensor structure in the quark-gluon vertex leads to qualitatively albeit not quantitatively correct results. 
This is mainly due to the contributions from the tensor structure $\tfrac{1}{2}{\cal T}^{(5)}_{\bar q A q}+{\cal T}^{(7)}_{\bar q A q}$ in the quark-gluon vertex and its gauge invariant completion, see the discussion in Sec. \[sec:quark\] and App. \[trunc:qprqglmom\]. Summary and conclusions ======================= In the present work we have investigated spontaneous chiral symmetry breaking in quenched continuum QCD. The only relevant couplings are those of QCD: the strong coupling $\alpha_s$ and the current quark masses which are fixed in the perturbative regime. In particular this allows us to compute the quark propagator in excellent agreement with corresponding results from lattice QCD. The functional renormalisation group analysis presented here uses a vertex expansion that goes qualitatively beyond the approximation level used so far in continuum methods. On the one hand advanced approximations have been used in sub-systems such as the pure glue sector and the low-energy matter sector. On the other hand we have, for the first time, introduced a complete basis of four-fermi interactions in the $s$-channel as well as the full quark-gluon vertex with all its momentum-dependencies and tensor structures. The latter has been linked to higher order quark-gluon interactions via modified Slavnov-Taylor identities. These higher order terms are also important for the convergence of the results, which emphasises the necessity of an expansion scheme based on gauge-invariant operators. The quantitative reliability has been discussed in a detailed analysis of the systematic errors. The transition from the quark-gluon to the hadronic phase is smoothly done by means of dynamical hadronisation. This allows to monitor the emergence of composite mesonic operators as dynamical degrees of freedom at low energies. We have also investigated the relative importance of different four-fermi interaction channels. Here we find that a single channel approximation with $\sigma$ and $\vec \pi$ is sufficient to induce spontaneous chiral symmetry breaking on a semi-quantitative level. This fact together with the small width of the strongly-correlated transition region from the quark-gluon regime to the hadronic regime, see also [@Braun:2014ata], can be used to systematically improve the reliability of low-energy effective models, see [@Pawlowski:2010ht; @Haas:2013qwp; @Herbst:2013ufa; @Pawlowski:2014aha]. The present computation is currently being extended to full dynamical QCD, for first investigations see [@Braun:2014ata], and to finite temperature and density. Our analysis of the matter sector should also give access to the large density regime, provided the higher fermionic interactions including fluctuating baryons are monitored accordingly. We thank R. Alkofer, J. Braun, C.S. Fischer, L. Fister, T. K. Herbst, M. Hopfer, M.Q. Huber, F. Rennecke, B.-J. Schaefer, L. von Smekal, R. Williams and A. Windisch for discussions. This work is supported by the Helmholtz Alliance HA216/EMMI, the grant ERC-AdG-290623, the FWF through Erwin-Schrödinger-Stipendium No. J3507 and the BMBF grant 05P12VHCTG. Modified Slavnov-Taylor identities {#app:mSTI} ================================== In the presence of the regulator terms the standard Slavnov-Taylor identities (STI) are modified (mSTI). Here we briefly discuss these modifications and their implications following [@Pawlowski:2005xe], a more detailed study will be presented elsewhere. 
A very concise form of these identities is found in a formulation with the auxiliary Nakanishi-Lautrup field $\lambda$ with $$\begin{aligned} \label{eq:NL} e^{-\tfrac{1}{2} \int_x (\partial_\mu A^a_\mu)^2}\to \int\mathcal{D}\lambda\, e^{ -\tfrac{1}{2} \int_x \partial_\mu A^a_\mu\, \lambda^a- \tfrac{\xi}{2} \int_x \lambda^a\lambda^a }\,,\end{aligned}$$ where the full classical action $S=S_{\rm QCD}+S_{\rm gf}+S_{\rm ghost}$ is invariant under the BRST transformations $$\begin{aligned} \nonumber {\mathfrak s} \Phi & = (D_\mu c\,,\, -\tfrac{1}{2} f^{abc} c^b\,c^c\,,\,\lambda^a\,,\, -c\, q\,,\, \bar q\, c\,, 0,0,\dots)\,,\\[2ex] {\mathfrak s} \lambda & =0\,. \label{eq:BRST}\end{aligned}$$ with $\Phi$ as defined in (\[eq:Phi\]). In (\[eq:BRST\]) we have assumed that all the composite fields introduced in $\Phi$ are colourless. The introduction of the auxiliary field $\lambda$ leads to ${\mathfrak s}^2 \phi=0$. The cutoff terms are not invariant under the BRST transformations in (\[eq:BRST\]), and the standard STI is modified. It reads in a compact way [@Ellwanger:1994iz; @D'Attanasio:1996jd; @Igarashi:2001mf; @Pawlowski:2005xe] $$\begin{aligned} \label{eq:mSTI} \int \frac{\delta\Gamma_k}{\delta Q_{\phi_i}}\frac{\delta\Gamma_k}{\delta\phi_i}= \int \, R_{k,\phi_n \phi_i}\, \frac{\delta^2\Gamma_k }{\delta Q_{\phi_i}\delta \phi_j}\, G_{\phi_j\phi_n}\,,\end{aligned}$$ where we have added a BRST source term $$\begin{aligned} \label{eq:BRST-soucre} \int Q_{\phi_i} ({\mathfrak s} \phi)_i\,,\end{aligned}$$ to the path integral, see [@Pawlowski:2005xe] for more details and further references. The sums in (\[eq:mSTI\]) run over all species of fields including internal indices. Initial conditions for vertices {#app:mSTIinitial} ------------------------------- In the limit $k\rightarrow 0$ the right hand side of (\[eq:mSTI\]) vanishes and we arrive at the standard STI: the derivative of $\Gamma$ with respect to $Q_\phi$ generates the (quantum) BRST transformations that act linearly on the fields via the derivative of $\Gamma$ with respect to $\phi$. For perturbative momenta $p$ this gives the standard relations between the renormalisation factors of the vertex functions at $k=0$, that is $$\label{eq:pertBRST} z^{2}_{\bar c Ac}(p)=z^{2}_{\bar q Aq}(p)=z^{2}_{A^3}(p) = z_{A^4}(p)\,,$$ corresponding to degenerate running couplings in the perturbative regime. Equation (\[eq:pertBRST\]) entails that in QCD the parameters of the theory are given by the power-counting relevant mass parameters of the quarks (dimension one), one (marginal) coupling, $\alpha_s$, and the unobservable (marginal) wave function renormalisations of the fundamental fields. In turn, for $k\neq 0$ the simple relations in (\[eq:pertBRST\]) are in general lost, and the loop term on the right hand side of (\[eq:mSTI\]) leads to modifications. In terms of power-counting the most relevant modification is the occurrence of a longitudinal gluon mass parameter $m_{k,L}^2$ in the gluon propagator that vanishes for $k\to 0$. Perturbatively it relates to a transversal mass parameter $m_{k,\bot, \rm pert }^2=m_{k,L,\rm pert}^2$. Non-perturbatively this relation does not hold anymore, as we have a non-vanishing transversal mass gap in Landau gauge, for more details see [@FP]. In the present work we do not solve the mSTIs for the vertices explicitly and also avoid the necessity of discussing the decoupling of transversal and longitudinal parameters. We take the very good realisation of (\[eq:pertBRST\]) in our results, i.e. Fig. 
\[fig:running\_couplings\], as an indication that the effects of the right hand side of (\[eq:mSTI\]) at the initial scale $\Lambda$ are either small or become unimportant during the evolution of the flow equation. STI-invariant vertices {#app:mSTIvert} ---------------------- In the perturbative regime, the relations between the gluonic vertices can be obtained with the help of the renormalisation group invariant covariant derivative $$\begin{aligned} \label{eq:rencov} D_{\mu} = \partial_\mu - i\, \sqrt{4 \pi \alpha_{s} \bar Z_{A}} A_\mu \,,\end{aligned}$$ with the renormalised gauge field $A_\mu$. Using the corresponding field strength tensor $$\begin{aligned} \label{eq:Fren} F_{\mu\nu}= \frac{i}{ \sqrt{ 4 \pi \alpha_{s}} } [ D_{\mu}\,,\,D_{\nu}]\,,\end{aligned}$$ in the effective action leads to gluonic vertices that are consistent with gauge invariance for momenta $p\approx k$ down to $\mathcal{O}($1 GeV$)$. By using $\alpha_{s,k}(p)$ and $Z_{A,k}(p)$ these relations could be made true for a larger range of momenta, which, however, is not necessary due to the locality of the flow. In any case, such a relation cannot be found for the non-perturbative regime, since the perturbative mSTIs are in general only valid for the longitudinal part of the vertices, which deviate from their transversal parts at non-perturbative momenta, as is for example the case for the gluon mass gap. Consequently the vertices have to be computed separately and do not follow from the mSTI at low momenta. However, the gluonic vertices and ghost-gluon vertex gain a potentially significant non-trivial momentum-dependence and non-classical tensor structures only in the deep infrared where their contributions decouple due to the mass gap in the gluon propagator. In the present work we use this as a justification for approximating the four-gluon vertex by the three-gluon vertex. On the other hand, the quark-gluon vertex potentially gets a significant non-trivial momentum-dependence and non-classical tensor structures (see ) below momenta of $\mathcal{O}(1$ GeV$)$, where chiral symmetry breaking is triggered. To take these effects into account we study the STI-consistent version of the most important tensor structures $$\begin{aligned} \label{eq:z57} z_{\bar q A q}^{(5)} (\slashed p+\slashed q) (p-q)^\mu\,, \qquad z_{\bar q A q}^{(7)} \tfrac{1}{2}[\slashed p,\slashed q]\gamma^\mu\,, \end{aligned}$$ see (\[eq:tensorqAq\]). It can be shown that terms proportional to $z_{\bar q A q}^{(5)}$ and $z_{\bar q A q}^{(7)}$ are derived from $$\begin{aligned} \label{eq:mSTI-57} z_{\bar q A q}^{(\rm av)}\, \bar q\, \gamma_5 \gamma_\mu \,\epsilon_{\mu\nu\rho\sigma} D_\nu D_\rho D_\sigma \,q\,,\end{aligned}$$ with $$\begin{aligned} z_{\bar q A q}^{(7)} = z^{(\rm av)}_{\bar q A q} \,,\qquad z_{\bar q A q}^{(5)} = \tfrac{1}{2} z^{(\rm av)}_{\bar q A q}\,, \end{aligned}$$ valid for constant $z_{\bar q A q}^{(\rm av)}$. In (\[eq:mSTI-57\]) we have used that the mSTI-consistent extension of the $z^{(5)}_{\bar q A q}, z^{(7)}_{\bar q A q}$-momentum structure in (\[eq:z57\]) is $$\begin{aligned} \label{eq:mSTI-pre57} \tfrac{1}{4} \bar q\, D_\mu\,\{ [ \gamma_\mu\,,\,\gamma_\nu]\,,\, \slashed{D}\} \, D_\nu\, q\,.\end{aligned}$$ The tensor structure only projects on the $z^{(5)}_{\bar q A q}$ and $z^{(7)}_{\bar q A q}$-terms. 
After some algebra (\[eq:mSTI-pre57\]) can be rewritten as (\[eq:mSTI-57\]) by using $$\begin{aligned} \tfrac{1}{4} \{[\gamma_\mu\,,\,\gamma_\nu]\,,\, \gamma_\rho\} = \epsilon_{\mu\nu\rho\alpha}\gamma_5 \gamma_\alpha\,.\end{aligned}$$ With (\[eq:Fren\]) we get $$\begin{aligned} \label{eq:mSTI57final} z^{(\rm av)}_{\bar q A q} \tfrac{i}{4} \sqrt{ 4 \pi \alpha_{s} } \, \bar q\, \gamma_5 \gamma_\mu \,\epsilon_{\mu\nu\rho\sigma} \{ F_{\nu\rho}\,,\, D_\sigma\} \,q\,. \end{aligned}$$ Hence, as long as $$\begin{aligned} z^{(7)}_{\bar q A q}-2 z^{(5)}_{\bar q A q}=0\,,\end{aligned}$$ the quark-gluon vertex tensor structures follow from the single term (\[eq:mSTI57final\]), see Fig. \[fig:STI\]. This term is derived from the mSTI, which is valid down to the semi-perturbative regime. We conclude that within a self-consistent approximation one also has to take into account the related quark-gluon scattering vertices ($\bar q A^2 q$ and $\bar q A^3 q$) derived from (\[eq:mSTI57final\]). A more detailed study will be published separately. Finally, it is noteworthy that the tensor structure also plays a crucial rôle in the so-called transverse Ward-Takahashi identity for the quark-gluon vertex, see [@Kondo:1996xn] and e.g. [@Qin:2013mta; @Qin:2014vya; @Aguilar:2014lha] for applications. Truncation {#app:truncation} ========== In this section we discuss in detail the different constituents of the truncation scheme introduced in Sec. \[sec:truncation\]. Yang-Mills sector ----------------- ### Propagators {#trunc:gpr} The only external input which is required in our calculation are the pure gauge propagators. In Landau gauge, the inverse gluon propagator can be parameterised as $$\begin{aligned} \label{eq:glueprop_dressing} \Gamma_{A^2}^{\mu\nu}(p) &= Z_A(p)p^2 \Pi^{\mu\nu}_T(p)\,,\end{aligned}$$ where $\Pi^{\mu\nu}_T(p)$ denotes the transverse projector $$\begin{aligned} \Pi^{\mu\nu}_T(p)&=\left(\delta^{\mu\nu}-\frac{p^\mu p^\nu}{p^2}\right) \, .\end{aligned}$$ In addition to the gluon propagator we also encounter the ghost propagator $$\begin{aligned} \Gamma_{\bar c c}(p) &= Z_c(p) p^2 \,,\end{aligned}$$ in the equations for the pure gauge vertices. We stress that the matter sector computation does not rely on a particular propagator input, but can use any available propagators. This includes RG-scale and momentum-dependent propagators as provided by FRG calculations [@Fischer:2008uz; @FP] or just momentum-dependent input such as lattice Yang-Mills propagators in minimal Landau gauge from [@Bowman:2004jm; @Sternbeck:2006cg], where the former will be used here, see Fig. \[fig:gluonprop\]. Taking an external input for the propagators automatically sets the scale of the theory and, apart from the bare quark mass, no parameters remain in the perturbative regime, where we set our initial condition. 
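As a small illustration of how such external propagator input enters in practice, the sketch below (Python/NumPy; the dressing $Z_A(p^2)$ is an arbitrary placeholder standing in for the tabulated FRG or lattice data) constructs the transverse projector of (\[eq:glueprop\_dressing\]) and the corresponding dressed Landau-gauge gluon propagator, i.e. the inverse of (\[eq:glueprop\_dressing\]) on the transverse subspace, $\Pi^{\mu\nu}_T(p)/(Z_A(p)\,p^2)$.

```python
# Transverse projector and dressed Landau-gauge gluon propagator from external input.
# Z_A below is a smooth placeholder; in the actual calculation it stands for tabulated
# FRG (or lattice) data that would be interpolated, e.g. with a spline.
import numpy as np

def Pi_T(p):
    """Transverse projector Pi_T^{mu nu}(p) = delta^{mu nu} - p^mu p^nu / p^2."""
    p = np.asarray(p, dtype=float)
    return np.eye(4) - np.outer(p, p) / (p @ p)

def Z_A(p2, m2=0.8, lam2=0.25):
    """Placeholder gluon dressing: massive-like in the IR, close to one in the UV."""
    return 1.0 + m2 / (p2 + lam2)

def gluon_propagator(p):
    p2 = np.dot(p, p)
    return Pi_T(p) / (Z_A(p2) * p2)

p = np.array([0.3, -1.2, 0.5, 2.0])
PT = Pi_T(p)
# basic projector properties: idempotent and transverse to p
assert np.allclose(PT @ PT, PT)
assert np.allclose(PT @ p, 0.0)
print("trace of Pi_T (should be 3 in d=4):", np.trace(PT))
print("gluon propagator at p^2 =", p @ p, ":\n", gluon_propagator(p))
```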
### Vertices {#trunc:3g} We approximate the ghost-gluon vertex, the three-gluon vertex and the four-gluon vertex with their classical tensor structures and a momentum-dependent dressing, $z_X$ for $X\in\{\bar c A c,A^3,A^4\}$, $$\begin{aligned} &\Gamma_{\bar c A c}(p_1,p_2)_{\mu}^{abc} = z_{\bar c A c}(\bar p) Z_c(\bar p) Z_A^{1/2}(\bar p) \left[{\text{i}}gf^{abc} q_\mu\right]\ ,\nonumber\\[2ex] & \Gamma_{A^3}(p_1,p_2)_{\mu\nu\rho}^{abc} = \nonumber\\[1ex] & \qquad z_{A^3}(\bar p) Z_A^{3/2}(\bar p) \bigg[{\text{i}}f^{abc} \Big\{(p_2-p_1)_\rho \delta_{\mu\nu}+ \text{ perm.}\Big\}\bigg]\ ,\nonumber\\[2ex] & \Gamma_{A^4}(p_1,p_2,p_3)_{\mu\nu\rho\sigma}^{abcd} =\nonumber\\[1ex] & \qquad z_{A^4}(\bar p) Z_A^{2}(\bar p) \left[f^{iab}f^{icd}\delta_{\mu\rho}\delta_{\nu\sigma} + \text{ perm.}\right]\ .\end{aligned}$$ Here the $p_i$ denote the momenta, and we approximate the dressing functions $z_X$ as functions of one average momentum $\bar p \equiv \sqrt{\tfrac{1}{n}\sum_{i=1}^{n} p_i^2}$, with $n$ the number of legs. To project onto the dressing functions we multiply each gluon leg with the corresponding transversal projector. Therefore the projection on the ghost-gluon vertex dressing is uniquely defined, whereas we contract the three-gluon vertex equation additionally with $\delta_{\mu\nu}p_{2,\rho}-\delta_{\nu\rho}p_{2,\mu}$. The four-gluon vertex is approximated from the three-gluon vertex via $$\label{eq:4gluonapprox} z_{A^4}(\bar p) = z_{A^3}^2(\bar p),$$ which leads to an approximate agreement of the three-gluon running coupling with the ghost-gluon and quark-gluon running coupling down to $\mathcal{O}($1 GeV$)$ and is still expected to improve with an improved momentum resolution of the glue sector, see the discussion in Sec. \[sec:runningcouplings\]. Matter Sector {#trunc:qprqglmom} ------------- ### Quark propagator {#trunc:qpr} We parameterise the inverse dressed quark propagator with two dressing functions as $$\begin{aligned} \label{eq:quarkprop_dressing} \Gamma_{\bar qq}(p) &= Z_q(p) \left({\text{i}}\slashed{p} + M_q(p)\right)\ ,\end{aligned}$$ where $$\begin{aligned} \label{eq:gamma+clifford} \{\gamma_\mu\,,\,\gamma_\nu\} =2\delta_{\mu\nu} \id \,,\qquad \gamma_\mu^\dagger =\gamma_\mu\,,\qquad \gamma_5= \gamma_1\gamma_2\gamma_3\gamma_4\,.\end{aligned}$$ The value of the current quark mass, $M_q(20\text{ GeV})= 1.3$ MeV, is fixed by the value of the pion mass, see App. \[trunc:qm\]. Apart from the purely mesonic sector of our truncation, the full momentum dependence of the quark propagator is fed back into the equations for all other vertices. In the quark-meson sector, as described in App. \[trunc:qm\], such an approximation leads to an overestimation of the suppression of loops containing quarks via the quark mass function. The resulting effect will most likely be an underestimation of the order parameter $\langle\sigma\rangle$, since the quarks drive the order parameter to larger values in the quark-meson model. 
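The Euclidean conventions in (\[eq:gamma+clifford\]), together with the identity $\tfrac{1}{4}\{[\gamma_\mu,\gamma_\nu],\gamma_\rho\}=\epsilon_{\mu\nu\rho\alpha}\gamma_5\gamma_\alpha$ employed in App. \[app:mSTIvert\], can be verified numerically. The sketch below (Python/NumPy; the chiral representation is merely one admissible choice satisfying (\[eq:gamma+clifford\]), with $\epsilon_{1234}=+1$) performs this check.

```python
# Numerical check of the Euclidean Clifford algebra and of the identity
#   1/4 {[gamma_mu, gamma_nu], gamma_rho} = eps_{mu nu rho alpha} gamma_5 gamma_alpha
# used for the STI-consistent quark-gluon tensor structures.
import numpy as np
from itertools import product

sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
Z2, I2 = np.zeros((2, 2)), np.eye(2)
gamma = [np.block([[Z2, -1j * s], [1j * s, Z2]]) for s in sig] + [np.block([[Z2, I2], [I2, Z2]])]
gamma5 = gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

def eps(*idx):
    """Totally antisymmetric symbol with eps(0, 1, 2, 3) = +1."""
    if len(set(idx)) < 4:
        return 0
    idx, sign = list(idx), 1
    for i in range(4):            # bubble sort, counting swaps
        for j in range(3 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign

# Clifford algebra {gamma_mu, gamma_nu} = 2 delta_{mu nu} and hermiticity
for mu, nu in product(range(4), repeat=2):
    assert np.allclose(gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu], 2 * (mu == nu) * np.eye(4))
    assert np.allclose(gamma[mu].conj().T, gamma[mu])

# epsilon identity for all index combinations
for mu, nu, rho in product(range(4), repeat=3):
    comm = gamma[mu] @ gamma[nu] - gamma[nu] @ gamma[mu]
    lhs = 0.25 * (comm @ gamma[rho] + gamma[rho] @ comm)
    rhs = sum(eps(mu, nu, rho, al) * gamma5 @ gamma[al] for al in range(4))
    assert np.allclose(lhs, rhs)
print("Clifford algebra and epsilon identity verified.")
```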
### Quark-gluon interactions {#trunc:qgl} In Landau gauge, a basis for the quark-gluon vertex is given by the eight tensor structures $$\begin{split} [{\cal T}^{(1)}_{\bar q A q}]^\mu(p,q)&=\gamma^\mu\ ,\\[2ex] [{\cal T}^{(2)}_{\bar q A q}]^\mu(p,q)&=-{\text{i}}(p-q)^\mu\ ,\\[2ex] [{\cal T}^{(3)}_{\bar q A q}]^\mu(p,q)&=-{\text{i}}(\slashed p-\slashed q)\gamma^\mu\ ,\\[2ex] [{\cal T}^{(4)}_{\bar q A q}]^\mu(p,q)&={\text{i}}(\slashed p+\slashed q)\gamma^\mu\ ,\\[2ex] [{\cal T}^{(5)}_{\bar q A q}]^\mu(p,q)&=(\slashed p+\slashed q) (p-q)^\mu\ ,\\[2ex] [{\cal T}^{(6)}_{\bar q A q}]^\mu(p,q)&=-(\slashed p-\slashed q) (p-q)^\mu\ ,\\[2ex] [{\cal T}^{(7)}_{\bar q A q}]^\mu(p,q)&=\tfrac{1}{2}[\slashed p,\slashed q]\gamma^\mu\ ,\\[2ex] [{\cal T}^{(8)}_{\bar q A q}]^\mu(p,q)&=-\tfrac{{\text{i}}}{2}[\slashed p,\slashed q](p-q)^\mu\ , \end{split}\label{eq:tensorqAq}$$ where $p$ ($q$) denotes the momentum of the (anti-)quark. It is important to note that the tensor structures ${\cal T}^{(2)}_{\bar q A q}$, ${\cal T}^{(3)}_{\bar q A q}$, ${\cal T}^{(4)}_{\bar q A q}$ and ${\cal T}^{(8)}_{\bar q A q}$ break chiral symmetry and are only created in the spontaneously broken phase. Our final Ansatz for the quark-gluon vertex is then $$\begin{aligned} \label{eq:quarkgluon-basis} \Gamma_{\bar q A q}(p,q) &= -{\text{i}}Z_{q}(\bar p)Z_A^{1/2}(\bar p) \nonumber\\[1ex] & \times \sum\limits_{i} \frac{z^{(i)}_{\bar q A q}(p,q)}{\bar p ^{\, n_i}}[{\cal T}^{(i)}_{\bar q A q}]^\mu(p,q)\ ,\end{aligned}$$ where $\bar p$ is the average momentum and $n_i$ is chosen such that $z^{(i)}_{\bar q A q}(p,q)$ is dimensionless. As remarked in Sec. \[sec:quark\], a sensible truncation scheme should also include a set of higher order operators to complete it consistently with the STI. We find that the most important contributions to non-classical tensor structures in the quark-gluon vertex stem from terms of the form $$\begin{aligned} \bar q\, T_{\mu\nu}D_\mu D_\nu q\ ,\qquad \bar q\, T_{\mu\nu\rho}D_\mu D_\rho D_\nu q\ ,\end{aligned}$$ where the first (second) contribution breaks (respects) chiral symmetry. In momentum space these yield contributions to the action of the form $$\begin{aligned} \mathcal{O}(\bar q Aq): &\ \bar q(p)\Big\{T_{\mu\nu}({\text{i}}q_\nu)+T_{\nu\mu}(-{\text{i}}p_\nu)\nonumber\\ &+T_{\mu\nu\rho}(-q_\nu q_\rho) +T_{\nu\mu\rho}(- p_\nu p_\rho) + T_{\rho\nu\mu}(p_\rho q_\nu)\Big\}\nonumber\\[1ex] &\ \times \left[-{\text{i}}g A_\mu(-p-q)\right] q(q) \ ,\nonumber\\[2ex] \mathcal{O}(\bar q A^2q):&\ \bar q(p)\Big\{T_{\mu\nu}\nonumber\\ &+T_{\rho\mu\nu}(-{\text{i}}p_\rho) +T_{\nu\mu\rho}({\text{i}}(r+q)_\rho) + T_{\nu\rho\mu}({\text{i}}q_\rho)\Big\}\nonumber\\[1ex] &\ \times \left[-{\text{i}}g A_\nu(-p-q-r)\right]\left[-{\text{i}}g A_\mu(r)\right]q(q) \ ,\nonumber\\[2ex] \mathcal{O}(\bar q A^3q):&\ \bar q(p)T_{\mu\nu\rho}\left[-{\text{i}}g A_\mu(-p-q-r+s)\right]\nonumber\\[1ex] &\ \times \left[-{\text{i}}g A_\rho(s)\right]\left[-{\text{i}}g A_\nu(r)\right]q(q) \,,\end{aligned}$$ where $g$ is to be understood as $g=\sqrt{4\pi\alpha(\bar p) Z_{A}(\bar p)}$ everywhere. 
In particular, we find that the dominant contributions to order $D^2$ and $D^3$ correspond to the tensor structures $$\begin{aligned} T_{\mu\nu} & = \delta_{\mu\nu} +\left[\gamma_\mu,\gamma_\nu\right]\ ,\nonumber\\[2ex] T_{\mu\nu\rho} & = \{[\gamma_\mu,\gamma_\nu],\gamma_\rho\}\ .\end{aligned}$$ In terms of quark-gluon tensor structures from these are proportional to the linear combinations $$\begin{aligned} &\tfrac{1}{2}{\cal T}^{(2)}_{\bar q A q}+{\cal T}^{(4)}_{\bar q A q}\ ,\nonumber\\[2ex] &\tfrac{1}{2}{\cal T}^{(5)}_{\bar q A q}+{\cal T}^{(7)}_{\bar q A q}\ , \end{aligned}$$ see  Fig. \[fig:quarkgluonvertex\_components\_msp\]. Therefore we set for consistency reasons $z_{\bar q A q}^{(5)}=\frac{1}{2} z_{\bar q A q}^{(7)}$ and $z_{\bar q A q}^{(2)}=z_{\bar q A q}^{(4)}$ in all equations. The dressing of the corresponding higher order operators is then simply identified with that of the corresponding quark-gluon vertex dressings $z_{\bar q A q}^{(7)}$ and $z_{\bar q A q}^{(4)}$ respectively where we additionally use the RG-invariant ansatz as in all other vertices, e.g. $$\begin{aligned} &\bar q(p)Z_{q}(\bar p) z_{\bar q A q}^{(4)}(\bar p) T_{\mu\nu}\left[-{\text{i}}\sqrt{4\pi\alpha(\bar p) Z_{A}(\bar p)} A_\mu(r)\right] \nonumber\\[1ex] & \ \times \left[-{\text{i}}\sqrt{4\pi\alpha(\bar p) Z_{A}(\bar p)} A_\nu(-p-q-r)\right]q(q) \ ,\end{aligned}$$ with $\bar p = \sqrt{(p^2+q^2+r^2+(p+q+r)^2)/4}$. ### Four-fermi interactions {#trunc:4f} Here we discuss a basis for the four-fermi interactions where $(S\pm P)$/ $(V\pm A)$ denotes the scalar-pseudoscalar/vector-axialvector Dirac structure, the subscript denotes the flavor structure and the superscript the color structure. Omitted sub-/superscripts are to be understood as singlet contributions. A basis for the $U(2)_L\times U(2)_R$ symmetric four-fermi interactions is given by [@Jaeckel:2002rm], see also [@Braun:2011pp] for a review, $$\begin{aligned} \label{eq:fourfermi_sym} \mathcal{L}_{(\bar q q)^2}^{(S-P)_+^{\phantom{adj}}}&=(\bar q T^0 q)^2\!- \!(\bar q \gamma^5 T^f q)^2\!-\!(\bar q \gamma^5 T^0 q)^2\!+\!(\bar q T^f q)^2\nonumber\\[2ex] \mathcal{L}_{(\bar q q)^2}^{(V-A)_{\phantom{-}}^{\phantom{adj}}}&=(\bar q \gamma^\mu T^0 q)^2\!+\!(\bar q \gamma^\mu\gamma^5 T^0 q)^2\nonumber\\[2ex] \mathcal{L}_{(\bar q q)^2}^{(V+A)_{\phantom{-}}^{\phantom{adj}}}&=(\bar q \gamma^\mu T^0 q)^2\!-\!(\bar q \gamma^\mu\gamma^5 T^0 q)^2\nonumber\\[2ex] \mathcal{L}_{(\bar q q)^2}^{(V-A)_{\phantom{-}}^{\text{adj}}}&=(\bar q \gamma^\mu T^0 T^a q)^2\!+\!(\bar q \gamma^\mu\gamma^5 T^0 T^a q)^2\,.\end{aligned}$$ We denote the generators of flavor $U(1)$ and $SU(2)$ by $T^0$ and $T^{f}$ whereas $T^a$ are the generators of color $SU(3)_c$. Note, that the obvious choice $(S-P)_+^\text{adj}$ instead of $(V-A)^\text{adj}$ is not linearly independent of $(S-P)_+$ and $(V+A)$ and therefore $(V-A)_{\phantom{-}}^{\text{adj}}$ has to be considered. There are two four-fermi interactions which break the axial $U(1)_A$ but are symmetric under $U(1)_V\times SU(2)_L\times SU(2)_R$ $$\begin{aligned} \label{eq:fourfermi_anom} \mathcal{L}_{(\bar q q)^2}^{(S+P)_-^{\phantom{adj}}}=&(\bar q T^0 q)^2\!- \!(\bar q \gamma^5 T^f q)^2\!+\!(\bar q \gamma^5 T^0 q)^2\!-\!(\bar q T^f q)^2\nonumber\\[2ex] \mathcal{L}_{(\bar q q)^2}^{(S+P)_-^{\text{adj}}}=&(\bar q T^0 T^a q)^2\!-\!(\bar q \gamma^5 T^f T^a q)^2\!\nonumber\\[1ex] &\quad+\!(\bar q \gamma^5 T^0 T^a q)^2\!-\!(\bar q T^f T^a q)^2\, ,\end{aligned}$$ where the first corresponds to the ’t Hooft determinant [@'tHooft:1976fv]. 
For applications it is convenient to introduce the linear combinations $$\begin{aligned} \mathcal{L}_{(\bar q q)^2}^{(\pi)}&=\mathcal{L}_{(\bar q q)^2}^{(S-P)_+} + \mathcal{L}_{(\bar q q)^2}^{(S+P)_-}=2 (\bar q T^0 q)^2-2(\bar q \gamma^5 T^f q)^2\nonumber\\[2ex] \mathcal{L}_{(\bar q q)^2}^{(\eta')}&=\mathcal{L}_{(\bar q q)^2}^{(S-P)_+} - \mathcal{L}_{(\bar q q)^2}^{(S+P)_-}=2 (\bar q T^f q)^2-2(\bar q \gamma^5 T^0 q)^2 \end{aligned}$$ with quantum numbers corresponding to $(\sigma-\pi)-$ and $(\eta- a)-$ meson exchange channels. Since the $SU(2)_L\times SU(2)_R$ symmetry is only approximate and explicitly broken to $SU(2)_{L+R}$ we additionally take into account the tensor structures $$\begin{aligned} \label{eq:fourfermi_su2} \mathcal{L}_{(\bar q q)^2}^{(S+P)_+^{\phantom{adj}}}=&(\bar q T^0 q)^2\!+\!(\bar q \gamma^5 T^f q)^2\!+\!(\bar q \gamma^5 T^0 q)^2\!+\!(\bar q T^f q)^2\nonumber\\[2ex] \mathcal{L}_{(\bar q q)^2}^{(S+P)_+^\text{adj}}=&(\bar q T^0 T^a q)^2\!+\!(\bar q \gamma^5 T^f T^a q)^2\nonumber\\[1ex] &\quad\!+\!(\bar q \gamma^5 T^0 T^a q)^2\!+\!(\bar q T^f T^a q)^2\ ,\end{aligned}$$ which break $SU(2)_{A}$. Finally there are two basis elements which break $SU(2)_{A}$ as well as $U(1)_{A}$ $$\begin{aligned} \label{eq:fourfermi_su2_anom} \mathcal{L}_{(\bar q q)^2}^{(S-P)_-^{\phantom{adj}}}=&(\bar q T^0 q)^2\!+\!(\bar q \gamma^5 T^f q)^2\!-\!(\bar q \gamma^5 T^0 q)^2\!-\!(\bar q T^f q)^2\nonumber\\[2ex] \mathcal{L}_{(\bar q q)^2}^{(S-P)_-^{\text{adj}}}=&(\bar q T^0 T^a q)^2\!+ \!(\bar q \gamma^5 T^f T^a q)^2\nonumber\\[1ex] &\quad\!-\!(\bar q \gamma^5 T^0 T^a q)^2\!-\!(\bar q T^f T^a q)^2\ .\end{aligned}$$ Consequently a basis that respects $U(1)_V\times SU(2)_V$ consists of ten elements and the Ansatz for the full four-fermi vertex is given by $$\begin{aligned} \label{eq:4fermi-basis} \Gamma_{(\bar q q)^2,k}(p_1,p_2,p_3) &= Z^2_{q,k}(0) \sum\limits_{i} \frac{\lambda^{\ }_{i,k}(s)}{k^2}\mathcal{L}_{(\bar q q)^2}^{i}\ ,\end{aligned}$$ where the sum runs over these $10$ tensor structures. We investigated the momentum dependencies in the four-fermi interactions for three momentum configurations corresponding to pure $s$-,$t$- and $u$-channel momentum configurations on the basis of the given solution at zero external momentum. For example for the $s-$channel we consider $p_1=p_2=-p_3=-p_4=p$ corresponding to $s=4 p^2$. ### Quark-meson system: LPA$'$ approximation {#trunc:qm} We parameterise the inverse meson propagators as $$\begin{aligned} \Gamma_{\sigma^2/\vec\pi^2,k}(p) &= Z_{\pi,k} \left(p^2 + m_{\sigma/\pi,k}^2\right)\ .\end{aligned}$$ In the chirally symmetric phase the approximation $Z_\sigma\approx Z_\pi$ is exact, whereas the deviations in the broken phase are suppressed by the comparably large mass of the sigma-meson $m_\sigma$. The mass terms can be absorbed into the definition of the effective mesonic potential and will be discussed there. Additionally we neglect the momentum-dependence of $Z_\pi$ which has been shown to be a quantitatively reliable approximation [@Helmboldt:2014iya]. As a consequence only the anomalous dimension $$\begin{aligned} \eta_\pi &= -\frac{\partial_tZ_\pi}{Z_\pi}\ ,\end{aligned}$$ appears in any of the flow equations. 
We perform a Taylor expansion of the effective mesonic potential in $\rho$ [@Pawlowski:2014zaa] $$\begin{aligned} V(\bar\rho\equiv Z_\pi \rho) &= \sum\limits_{j=0}^6\frac{v_j}{j!}(\bar\rho-\bar\rho_0)^j\ ,\end{aligned}$$ with $\bar\rho_{0}\equiv Z_\pi\rho_0$ and $\rho_0$ scale-independent such that $\bar\rho_{0}$ becomes the minimum of the effective potential at $k\rightarrow 0$. At this order of the Taylor expansion we see convergence and a comparison to a calculation on a discrete grid in $\rho$ yields perfect agreement. The meson masses are obtained from this potential as $$\begin{aligned} \label{eq:meson_masses} m_\pi^2 &= V'(\bar\rho_0)\ ,\nonumber\\[2ex] m_\sigma^2 &= V'(\bar\rho_0)+2\bar\rho_0V''(\bar\rho_0)\ .\end{aligned}$$ Therefore the value of the pion mass depends directly on the expansion point $\bar\rho_0$ which, in turn, is directly proportional to the current quark mass $M_q(\Lambda)$. In our case we choose $M_q(20\text{ GeV})= 1.3$ MeV such that $m_\pi$ takes the physical value of $135$ MeV. We consider only one Yukawa interaction of the $\sigma$–$\pi$ tensor structure $\mathcal{L}_{(\bar q q)^2}^{(\pi)}$. Hence, integrating out the mesonic fields leads to contributions to the four-fermi interaction $\mathcal{L}_{(\bar q q)^2}^{(\pi)}$. In other words, the total coupling of $\mathcal{L}_{(\bar q q)^2}^{(\pi)}$ is a sum of the explicit four-fermi interaction and the part stored in the quark-meson sector of the theory. The distribution of these fluctuations is done with dynamical hadronisation explained in App. \[sec:dynhad\]. The chirally symmetric Yukawa interaction reads $$\begin{aligned} \nonumber & \int_{p_1,p_2} z_{\bar q\phi q}(p_1,p_2) \bar Z_\pi^\frac{1}{2}(p_1+p_2) \bar Z_q^\frac{1}{2}(p_1) \bar Z_q^\frac{1}{2}(p_2) \\[1ex] & \hspace{3cm}\times \phi(p_1+p_2)\, \bar q(p_1) \tau q(p_2)\,, \label{eq:YukInt}\end{aligned}$$ where $\tau=(T^0,\gamma_5 \vec T)$. Reducing this to the $s$-channel with $p_1=p_2=p$ leads to $$\begin{aligned} \nonumber & \int_{p} z_{\bar q\phi q}(p,p) \bar Z^\frac{1}{2}_\pi(2 p) \bar Z^\frac{1}{2}_q(p) \bar Z^\frac{1}{2}_q(p) \\[1ex] & \hspace{3cm}\times \phi(2 p)\, \bar q(p) \tau q(p)\,. \label{eq:YukInt1}\end{aligned}$$ In our calculations we use the renormalised Yukawa coupling $h_{\pi}$, with $\bar Z_{\pi}(2p) \equiv Z_{\pi,k}(0)$, $\bar Z_{q}(p) \equiv \bar Z_{q,k}(0)$ and $$\begin{aligned} \label{eq:Yukcoup} h_\pi(2 p) = 2 z_{\bar q\phi q}(p,p)\,, \end{aligned}$$ where $2p$ is the momentum of the mesonic field $\phi(2p)$. Furthermore we ignore momentum dependencies as well as field dependencies in $h_\pi$. This parameterisation of the quark-meson model is termed LPA$'$ approximation and has been shown to be capable of approximating the full momentum dependence very well [@Helmboldt:2014iya]. Furthermore, it has been found that the effect of higher meson quark interactions that stem from a possible field-dependence in the Yukawa interaction would yield a decrease of the the order of $10$ % in the chiral condensate [@Pawlowski:2014zaa]. Stability of the truncation {#app:results_stab} =========================== ![Normalised difference $(z^{(7)}_{\bar q A q}-2 z^{(5)}_{\bar q A q})/z^{(7)}_{\bar q A q}$ (blue solid) and $(z_{\bar c A c}-z^{(1)}_{\bar q A q})/z_{\bar c A c}$ (red dashed). 
Small values indicate that the mSTI is applicable for constraining transversal tensor structures.[]{data-label="fig:STI"}](quarkgluonvertex_7m5_msp){width="48.00000%"} Here we give a detailed analysis of the systematic errors and hence of the stability of the current truncation. To this end we briefly summarise the vertex structures taken into account: in the pure gauge sector the classical tensor structures of all primitively divergent correlation functions have been considered. The quark propagator and the quark-gluon vertex - as the essential interface coupling between glue and matter sector - have been included with full momentum-dependencies and all tensor structures. Additionally, higher quark-gluon interactions as obtained from a gauge invariant extension of non-classical tensors structures in the quark-gluon vertex have been taken into account. A Fierz-complete basis has been used for the four-fermi couplings in an $s$-channel approximation. Moreover, meson propagators and quark-meson Yukawa interactions as well as higher order mesonic correlation functions in the scalar–pseudo-scalar $s$-channel are included.\ [*Gluonic interactions:*]{} As the momentum dependence of the Yang-Mills vertices has been found to be rather small at the relevant momentum scales [@Fister:2011uw; @Fister:Diss; @Huber:2012kd; @Pelaez:2013cpa; @Blum:2014gna; @Eichmann:2014xya; @Binosi:2014kka; @Gracey:2014mpa; @Cyrol:2014kca], we approximated the momentum dependence of these vertices only with one variable at the symmetric point, which is expected to be a good approximation. For the three-gluon vertex, DSE studies show that the effect of additional tensors structures is small [@Eichmann:2014xya]. Our largest systematic error concerns therefore the four-gluon vertex which has been determined via STIs from the three-gluon vertex keeping only the classical tensor structure. While this certainly works well in the semi-perturbative regime, below $\mathcal{O}($1 GeV$)$ deviations are to be expected as well as contributions from other tensor structures, see [@Cyrol:2014kca]. Indeed, the three-gluon vertex running coupling deviates already earlier from the other running couplings, indicating some missing vertex strength, see .\ [*Quark-gluon interactions:*]{} In the matter sector the quark-gluon vertex was fully taken into account. However, based on an analysis of the relative strength of the different tensor structures we only fed back the ${\cal T}^{(4)}_{\bar q A q}$ and ${\cal T}^{(7)}_{\bar q A q}$ as the dominant chiral symmetry breaking and chiral tensor structures. We have checked the quantitative convergence of this approximation at the example of the flow equations of the quark propagator and the quark-gluon vertex itself. Higher quark-gluon interactions follow from the quark-gluon vertex using the modified Slavnov-Taylor identities (mSTIs) discussed in App. \[app:mSTI\]. The applicability of the mSTIs, which constrain only the longitudinal part of any correlation function, relies on identifying them with their transversal counterparts. At non-perturbative momenta $\mathcal{O}($1 GeV$)$ the connection between transversal and longitudinal parts is lost, with the running couplings as obtained from different vertices as a prominent example, see comparison of $z_{\bar q A q }^{(1)} \varpropto \sqrt{\alpha_{\bar q A q}}$ with $z_{\bar c A c } \varpropto \sqrt{\alpha_{\bar c A c}}$ in Fig. \[fig:STI\]. 
In their regime of applicability the mSTIs provide therefore relations between different tensor structures, most prominently this leads to $z^{(7)}_{\bar q A q}-2 z^{(5)}_{\bar q A q}\approx 0$, see App. \[app:mSTIvert\]. In Fig. \[fig:STI\] we show the normalised difference $(z^{(7)}_{\bar q A q}-2 z^{(5)}_{\bar q A q})/z^{(7)}_{\bar q A q}$ as obtained from the vertex equation with only the classical tensor structure inserted on the right hand side. In this case the mSTI is fulfilled even better than for the strong running coupling down to very low momenta $\mathcal{O}($0.5 GeV$)$. We take this as a justification for approximating the dressing and momentum dependence of the higher quark-gluon interactions by the solution of the semi-perturbative mSTI that relates them to the tensor structure $\tfrac{1}{2}{\cal T}^{(5)}_{\bar q A q}+{\cal T}^{(7)}_{\bar q A q}$.\ [*Quark interactions:*]{} We have taken into account a complete Fierz basis for the four-fermi interaction and used $s$-channel approximations for all tensor structures. All higher purely fermionic vertices in the $s$-channel of the scalar–pseudo-scalar interaction are included. Their contribution beyond meson exchange (eight-fermion interaction) is small, see [@Braun:2014ata]. This observation is additionally supported by the fast convergence of expansions in powers of the mesonic field that is found in the quark-meson model and in dynamical QCD, at vanishing temperature [@Pawlowski:2014zaa; @Helmboldt:2014iya]. Concerning the quark-meson interactions, we neglected higher contributions due to field-derivatives of the Yukawa interaction which have been found to be of the order of $10$ % [@Pawlowski:2014zaa; @Braun:2014ata]. A more detailed study will be presented elsewhere. In the equation for the quark propagator, momentum dependencies of the four-fermi interactions play a quantitative rôle via the tadpole diagrams. We have implemented this momentum dependence via one momentum variable using a symmetric projection. Furthermore momentum dependencies in the meson sector have been ignored which is justified by the success of the LPA$'$ approximation [@Helmboldt:2014iya]. Additionally we have ignored the backcoupling of the momentum dependence of the quark propagator in the equation for the effective potential. We expect some effects due to this approximation, which would mitigate the effect of ignoring higher quark-meson interactions. Dynamical Hadronisation {#sec:dynhad} ======================= As already pointed out in the main text, the concept of dynamical hadronisation [@Gies:2001nw; @Pawlowski:2005xe; @Floerchinger:2009uf] is of crucial importance for the present application. In the form used here it allows to exactly rewrite momentum channels of the four-fermi interactions in terms of Yukawa couplings to an effective bosonic exchange field. This corresponds to a Hubbard-Stratonovich transformation in every RG-step, and is also called rebosonisation in the present case of composite bosonic fields [@Gies:2001nw]. Naturally, the bosonic field carries the quantum numbers of the related four-fermi channel, and may be interpreted as the corresponding meson or diquark field. For simplicity we restrict ourselves in the following discussion to the dynamical hadronisation of the sigma-pion-channel. 
Following [@Pawlowski:2005xe] and in particular [@Braun:2014ata], we start from the path integral representation for $\Gamma_k[\Phi]$ in terms of the fundamental superfield $\hat\varphi=(\hat A_\mu , \hat C,\hat{\bar C}, \hat q,\hat{\bar q})$ $$e^{-\Gamma_k[\Phi]}=\int \mathcal{D} \hat{\varphi} \, e^{-S[\hat{\varphi}]-\Delta S_k[\hat{\phi}_k]+\frac{\delta (\Gamma_k +\Delta S_k)}{\delta \phi} (\hat{\Phi}_k-\Phi)+\Delta S_k[\Phi]}\,,$$ with $\Delta S_k[\Phi]=\tfrac{1}{2}\Phi R_k \Phi$, where we introduced a dynamical superfield $\hat \Phi_k=(\hat \varphi,\hat \sigma_k, \hat{\vec{\pi}}_k)$ with expectation value $\Phi=\langle \hat \Phi_k\rangle\equiv (\varphi,\sigma,\vec \pi)$. It is constructed from the fundamental superfield $\varphi$ and scale-dependent composite operators $\hat\phi_k=(\hat \sigma_k,\hat{\vec {\pi}}_k)$, whose flow we define to be of the form $$\label{eq:rebosfield} \partial_t \hat\phi_k(r)=\partial_t A_k(r) \,(\bar q\tau q)(r)+ \partial_t B_k(r)\, \,\hat \phi_k(r)$$ with $(\bar q\tau q)(r)= \int_l \bar q(l) \tau q(r-l)$. The flow is defined in momentum space which will allow us to identify it with a specific momentum channel in the four-fermi flow. Note also that the term multiplying $\partial_t A_k$ involves only expectation values $q$ and $\bar q$. The two coefficient functions $\partial_t A_k$ and $\partial_t B_k$ appearing in are so far undetermined and at our disposal in the dynamical hadronisation. They specify the RG-adaptive change of our field-basis. The scale-dependence of $\hat\phi_k$ leads to additional contributions on the right hand side of the flow equation compared to which now takes the form $$\begin{aligned} \nonumber \partial_t|_\phi \Gamma_k[\Phi]=& \frac{1}{2}\,\text{Tr}\, \frac{1}{\Gamma^{(2)}_k+R_k}(\partial_t R_k+2 R_k \partial_t B_k) \\[1ex] &-\int_l \frac{\delta \Gamma_k}{\delta \phi} \Bigl[ \partial_t A_k(r)\, (\bar q \tau q)(r) +\partial_t B_k(r) \phi(r)\Bigr]\,. \label{eq:DynHad}\end{aligned}$$ The second line on the right hand side account for the scale-dependence of the composite fields. Together with the left hand side they constitute a total derivative w.r.t. the logarithmic scale $t$. In the present work we use $\partial_t A_k(r)$ to completely eliminate the corresponding channel of the scalar–pseudo-scalar four-fermi interaction with $\lambda^{\ }_{\pi}\equiv \lambda^{\ }_{(S-P)_+}+ \lambda^{\ }_{(S+P)_-}$ and $\lambda^{\ }_{\pi}(s) = \lambda^{\ }_{\pi}(p,p,-p)$, see , that is with $t=u=0$. $$\begin{aligned} \label{eq:S-P0} \partial_t \lambda_{\pi}(s) ={\rm Flow}^{(4)}_{\pi}(s) - \partial_t A_k(2 p) h_\pi(2 p) \stackrel{!}{=} 0\,, \end{aligned}$$ where ${\rm Flow}^{(4)}_{\pi}(s)$ stands for the diagrams in the four-fermi flow. leads to a vanishing flow of the $s$-channel of the four-fermi coupling $\lambda_{\pi}$ and requires $$\begin{aligned} \label{eq:dtA} \partial_t A_k(2 p) = \0{{\rm Flow}^{(4)}_{\pi}(s)}{h(2 p)} \,,\qquad {\rm with}\qquad s = 4 p^2\,,\end{aligned}$$ which completely fixes $\partial_t A_k(2 p) $. Still, the second rebosonisation function $\partial_t B_k(2 p)$ is at our disposal. It can be used to improve the approximation at hand by distributing the momentum-dependence of the rebosonised four-fermi channel between the Yukawa coupling and the mesonic propagator. For example, when considering the full momentum-dependence of the latter but only a running, momentum-independent Yukawa coupling, the $\partial_t B_k$ can be chosen such that this is an exact procedure. 
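To make the bookkeeping behind this elimination explicit, the following sketch performs the hadronisation step at a single momentum: given the value of the four-fermi flow diagrams ${\rm Flow}^{(4)}_{\pi}(s)$ and the Yukawa coupling $h_\pi(2p)$, it returns $\partial_t A_k(2p)$ such that the $s$-channel flow of $\lambda_\pi$ vanishes. The numerical inputs and function names are placeholders for illustration, not values from the actual computation.

```python
def dtA(flow4_pi_s, h_pi_2p):
    """Hadronisation function dtA_k(2p) chosen such that the s-channel
    four-fermi coupling does not flow:
    dt lambda_pi(s) = Flow4_pi(s) - dtA_k(2p) * h_pi(2p) = 0."""
    return flow4_pi_s / h_pi_2p

def dt_lambda_pi(flow4_pi_s, h_pi_2p, dtA_2p):
    """Residual flow of lambda_pi after subtracting the hadronised part."""
    return flow4_pi_s - dtA_2p * h_pi_2p

# Placeholder numbers purely for illustration.
flow4, h = -0.37, 6.2
a = dtA(flow4, h)
print(a, dt_lambda_pi(flow4, h, a))   # the second entry vanishes by construction
```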
The discussion of the general procedure is beyond the scope of the present work and will be presented elsewhere. In the present case we resort to the simplest option by using $$\begin{aligned} \label{eq:dotB=0} \partial_t B_k\equiv 0\,.\end{aligned}$$ As a consequence of together with and we get additional contributions to the mesonic anomalous dimension at vanishing momentum and the momentum-dependent quark-meson coupling $h_\pi(2p)$, $$\begin{aligned} \partial_t \Delta \eta_{\pi} &= 2\frac{V'(\bar\rho_0)}{h_{\pi}^2(0)}{\rm Flow}^{(4)}_{\pi}(0)\,, \nonumber\\[2ex] \partial_t \Delta h_{\pi} (2p) &= \frac{\Gamma_\pi^{(2)}(2 p) }{h_{\pi}(2p)}{\rm Flow}^{(4)}_{\pi}(s)\,. \label{eq:Delta1}\end{aligned}$$ The quark mass function is directly related to the quark meson coupling, $M_q (p) = \langle\sigma\rangle h_\pi(2p)/2$. Moreover, in the current approximation we use a constant $h_\pi=h_\pi(0)$ on the right hand side of the flows. With $\Gamma_\pi^{(2)}(0)=V'(\bar\rho_0)$ this leads us to $$\begin{aligned} \partial_t \Delta h_{\pi} (0) &= \frac{V'(\bar\rho_0)}{h_{\pi}(0)}{ \rm Flow}^{(4)}_{\pi}(0)\ ,\\[2ex] \partial_t \Delta M_q (p) &= M_q (p) \frac{V'(\bar\rho_0)}{h_{\pi}(0)^2}\frac{\bar \lambda_\pi(0)}{\bar \lambda_\pi(s)} \partial_t \bar\lambda_\pi(s)\, , \label{eq:Delta2}\end{aligned}$$ with $$\begin{aligned} \bar\lambda_\pi(s)= \int_{\Lambda_{\rm UV}}^k\0{d k'}{k'} {\rm Flow}^{(4)}_{\pi}(s)\,.\end{aligned}$$ In we have used that $$\begin{aligned} \label{eq:4fh} \frac{\Gamma_\pi^{(2)}(2p)}{(h_{\pi}(2p))^2} \approx \frac{V'(\bar\rho_0)}{h_{\pi}(0)^2}\frac{\bar \lambda_\pi(0)}{\bar \lambda_\pi(s)}\, , \end{aligned}$$ up to higher order terms in the mesonic potential. In the present approximation we have a better access to the momentum dependence of $\bar \lambda_\pi$ than on that of $\Gamma_\pi^{(2)}$ and $h_\pi$. Consequently, using minimises the error in our computation of $M_q(p)$. For future work it would however be preferable to calculate the momentum dependence of the right hand side directly from momentum dependencies in the mesonic sector. ![image](mattersystem_flow_diagrams_RQCD){height="0.45\textheight"} ![image](mattersystem_flow_diagrams){height="0.45\textheight"}
1
--- abstract: 'The break-up of a two-dimensional circular disc by normal and oblique impact on a hard frictionless plate is investigated by molecular dynamics simulations. The disc is composed of numerous unbreakable randomly shaped convex polygons connected together by simple elastic beams that break when bent or stretched beyond a certain limit. It is found that for both normal and oblique impacts the crack patterns are the same and depend solely on the normal component of the impact velocity. Analysing the pattern of breakage, amount of damage, fragment masses and velocities, we show the existence of a critical velocity which separates two regimes of the impact process: below the critical point only a damage cone is formed at the impact site [*(damage)*]{}, cleaving of the particle occurs at the critical point, while above the critical velocity the disc breaks into several pieces [*(fragmentation)*]{}. In the limit of very high impact velocities the disc suffers complete disintegration [*(shattering)*]{} into many small fragments. In agreement with experimental results, fragment masses are found to follow the Gates-Gaudin-Schuhmann distribution (power law) with an exponent independent of the velocity and angle of impact. The velocity distribution of fragments exhibit an interesting anomalous scaling behavior when changing the impact velocity and the size of the disc.' author: - 'Bhupalendra Behera$^{1,3}$, Ferenc Kun${}^2$, Sean McNamara${}^1$, and Hans J. Herrmann${}^1$' title: Fragmentation of a circular disc by Impact on a Frictionless plate --- Introduction {#intro} ============ The strength and break-up of agglomerates composed of smaller sized primary particles is of particular importance for the storage and handling of materials in process industries such as pharmaceuticals, chemicals, fertilizers, detergent, and food industries. In industrial processes agglomerates often collide with each other and with the hard walls of the equipment resulting in a size reduction, which is desired or not depending on the type of the process. The strength of agglomerates has to be characterized for the design of operating conditions in industrial processes such as milling, tabletting, mixing, and transport in pneumatic conveying. Another important class of agglomerates are the so-called particle compounds, which are the combination of various sized particles embedded in a cementous matrix. The different types of engineering agglomerates and building materials like concretes are some examples of particle compounds. It is of high industrial importance to recycle these particle compounds in order to use the valuable aggregates. The design and optimization of the liberation process of aggregates from the matrix material requires a detailed knowledge of the strength and break-up of compounds. For the understanding of the strength and break-up process, the study of simple systems like spherical particles is essential. During the last decades several experimental and theoretical studies have been performed to understand the break-up of spherical bodies arising due to impact. The crack pattern of sand-cement spheres by a free fall impact was studied in Ref.  [@fra_pattern], which reports observations of meridian cracks, that divide the sphere into two nearly equal parts, and oblique cracks, which are straight like median cracks, but cut the sphere into two unequal pieces. 
The fracture of glass and plaster spheres by free fall impact and double impact (dynamic loading between two hard plates) have been carried out recently [@powder; @chau]. It was found that at the lowest impact velocities hertzian cone cracks (formed from a surface ring crack) are developed, whereas, at high velocities, oblique cracks propagate before meridian cracks form [@fra_glass]. This finding differs from the experimental results of Ref. [@fra_pattern], where it was found that with increasing impact energy, the number of meridian planes increases and oblique cracks start to develop. Due to the high speed and violent nature of the break-up process, observations are usually restricted to the final state of impact experiments, where information has to be extracted from the remaining pieces of the body. Hence, computer simulation of models of agglomerate break-up is an indispensable tool in this field. Simulations of realistic models provide a deeper inside into the break-up process and can even complement the experimental findings directly supporting the design of industrial processing of these materials. Analytic approaches have limited capabilities in this field since they cannot capture the disordered microstructure of the material. The finite element approach and the discrete element modeling have been successfully applied to describe the stress field, crack propagation, and fragment formation in impacting spherical particles [@agglomerate; @bk; @imp_agglo; @poto; @simu; @tsoungui; @imp_ang; @poschel1; @poschel2]. Recent simulations of ball impact revealed two types of crack patterns: oblique cracks radiating from impact point, and secondary cracks perpendicular to the oblique ones. In the framework of the discrete element method it was clarified that depending on the impact velocity the result of the break-up process can be localized damage around the contact zone, fragmentation, or shattering. The evolution of several characteristic quantities of the break-up process when increasing the impact velocity were monitored and analyzed in normal and oblique impact [@agglomerate; @bk; @imp_agglo; @poto; @imp_ang]. From a more general point of view, the break-up of agglomerates presents an important class of fragmentation phenomena which is ubiquitous in everyday life and concerns a wide range of phenomena in science and technology. In general, when an object is subjected to shock or stress it will break up into smaller pieces. The length scales involved in this process range from the collisional evolution of asteroids [@asteroids] to the degradation of materials comprised of small agglomerates [@agglomerate; @bk] employed in the process industries as summarized above. There are also many geological examples associated with the use of explosives for mining and oil shale industry, coal heaps, etc. A wide variety of experiments [@self; @composite; @instable; @ice; @lab; @glassplate] and simulations [@agglomerate; @bk; @discrete; @transition; @twodisc; @granulate; @diehl; @univer; @branching; @aspect; @droplet; @britt; @poto; @simu] revealed that the fragment mass distribution is a power law except for very large fragment sizes. The exponents in the power law region were found experimentally to be between 1.35 and 2.6 depending on the effective dimensionality of the system [@asteroids; @fra_pattern; @lab; @breakup]. Recent studies revealed that power law distributions arise in fragmentation phenomena due to an underlying phase transition [@instable; @transition; @univer]. 
However, most of the data reported in the literature is concerned with the general behavior of fragmentation processes. There is much less literature where the propagation and orientation of cracks are discussed. In the present paper we study the normal and oblique impact of a circular brittle particle on a hard frictionless plate, varying the impact velocity and impact angle in a broad range. The particle is composed of numerous unbreakable, undeformable, randomly shaped polygons which are bonded together by elastic beams. The bonds between the polygons can be broken according to a physical breaking rule, which takes into account the stretching and bending of the connections. Based on simulations of the model, we performed a detailed study of the failure evolution at different impact velocities and of the nature of the crack propagation during the fragmentation process, and compared the results with experiments [@fra_pattern; @powder; @lab; @ice; @fra_glass]. In the analysis of the simulation data, we profit from recent theoretical results of general studies of fragmentation processes. We observed that for both normal and oblique impacts, the crack patterns are the same and depend solely on the normal component of the impact velocity. Studying the crack patterns, amount of damage, fragment masses, and velocities, we provide a quantitative foundation of the concept of damage, fragmentation, and shattering in ball impact, which was introduced recently on a more qualitative basis [@agglomerate]. We show the existence of a critical impact velocity $v_c$ which distinguishes two regimes of the impact process, [*i.e.*]{} below the critical velocity damage mainly occurs in a conical region around the impact site with a large residue, however, above $v_c$ an ensemble of oblique cracks develop and the disc breaks up into pieces. In agreement with experimental results, fragment masses are found to follow the Gates-Gaudin-Schuhmann distribution (power law) [@kelly] with an exponent independent of the velocity and angle of impact. The velocity distribution of fragments exhibit an interesting anomalous scaling behavior when changing the impact velocity and the size of the disc. An important application of our results, besides the ones mentioned at the beginning, is to the optimization and control of tumbling mill performance. These questions are of utmost practical importance as they have a tremendous influence on power draft, wear of the balls and liners and breakage characteristics of the grinding materials. During the cataracting motion where the charge material inside a mill follows a parabolic path [@data], most of the materials are ground as hard balls fall back onto them. There is particular interest in the net energy required to achieve a certain size reduction and the the energy distribution of the fragments during the grinding process. The efficiency of the mills could be controlled if the breakage characteristics of the grinding materials were better understood. Our current work can provide some valuable information for the modernization of the mill design. Model ===== In order to study fragmentation of granular solids, we performed molecular dynamic (MD) simulations in two dimensions. To better capture the complex structure of a real solid, we used randomly generated convex polygons that interact with each other elastically. 
The model consists of three major parts, namely, the construction of a Voronoi cellular structure, the introduction of the elastic behavior, and finally the breaking of the solid. This section gives a detailed overview of these three steps. In order to take into account the complex structure of the granular solid, we use randomly generated convex polygons, [*i.e.*]{} we divide the solid into grains by a Voronoi cellular structure. The Voronoi construction is a random tessellation of the plane into convex polygons. This is obtained by putting a random set of points onto the plane and then assigning to each point that part of the plane which is nearer to it than to any other point. One advantage of the Voronoi tessellation is that the number of neighbors of each polygon is limited which makes the computer code faster and allows us to simulate larger systems. In our case, the initial configuration of the polygons was constructed using a vectorizable random lattice, which is Voronoi construction with slightly reduced disorder [@rand_lattic]. First, the Voronoi tessellation of a square is performed, and then a circular disc with smooth surface is cut out. In the model the polygons are rigid bodies. They are neither breakable nor deformable, but they can overlap when pressed against each other. This overlap represents local deformations of the grains. Usually the overlapping polygons have two intersection points which define the contact line. In order to simulate the elastic contact force, we introduce a repulsive force between touching polygons. This force is proportional to the overlapping area $A$ divided by a characteristic length $L_c$ $(\frac{1}{L_c}= \frac{1}{2}[\frac{1}{r_i}+\frac{1}{r_j}]$, where $r_i$, $r_j$ are the radii of circles of the same area as the polygons). The direction of the elastic or normal force is perpendicular to the contact line of the polygons. The complete form of the normal force contains an elastic and damping contribution, whereas the tangential component is responsible for the friction. Again, to bond the particles together it is necessary to introduce a cohesive force between neighboring polygons. For this purpose we introduce beams. The centers of mass of neighboring polygons are joined together with elastic beams that exert an attractive, restoring force but can break in order to model the fragmentation of the solid. Because of the randomness contained in the Voronoi tessellation, the lattice of beams is also random. The length, the cross-section and the moment of inertia of each beam are determined by the initial configuration of the polygons. The Young’s modulus of the beams and of the particles are considered to be independent of each other. The beams break according to a physical breaking rule, which takes into account the stretching and bending of the connection. The surface of the grains where beams are broken represent cracks. The energy stored in the broken beams represents the energy needed to create these new crack surfaces inside the solid. In order to simulate the break-up of the disc due to impact with a hard plate, a repulsive force is introduced between the plate and those polygons of the disc which have overlap with the plate. This repulsive force is proportional to the overlap area, similarly to the polygon-polygon contacts but with a higher stiffness value. The contact force of the disc and the plate has vertical direction, tangential component like friction is excluded in the present study. 
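A minimal sketch of this contact law is given below: it takes the overlap area, the two intersection points defining the contact line, and the equivalent-area radii of the two polygons, and returns an elastic normal force proportional to $A/L_c$ plus a damping term. The proportionality constants `E_contact` and `gamma_n`, and the example numbers, are illustrative assumptions rather than the values used in the simulations.

```python
import numpy as np

def contact_force(overlap_area, p1, p2, r_i, r_j,
                  v_rel_normal, E_contact=1.0e7, gamma_n=1.0e2):
    """Normal contact force between two overlapping polygons.

    overlap_area : area A of the overlap polygon
    p1, p2       : intersection points of the polygon boundaries (contact line)
    r_i, r_j     : radii of circles with the same area as the two polygons
    v_rel_normal : relative velocity of the grains along the contact normal
    """
    # Characteristic length: 1/L_c = (1/r_i + 1/r_j) / 2
    L_c = 2.0 / (1.0 / r_i + 1.0 / r_j)

    # Unit normal, perpendicular to the contact line p1 -> p2.  Its orientation
    # (towards which grain the force pushes) would be fixed from the polygon
    # centres; this step is omitted in the sketch.
    tangent = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    normal = np.array([-tangent[1], tangent[0]])
    normal /= np.linalg.norm(normal)

    # Elastic contribution ~ A / L_c plus a velocity-proportional damping term.
    f_elastic = E_contact * overlap_area / L_c
    f_damping = -gamma_n * v_rel_normal
    return (f_elastic + f_damping) * normal

# Example call with made-up numbers.
print(contact_force(0.02, (0.0, 0.0), (0.1, 0.05), 1.2, 0.9, v_rel_normal=-0.3))
```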
The time evolution of the system is obtained by numerically solving Newton’s equations of motion of the individual polygons (Molecular Dynamics). For the solution of the equations we use a fifth-order Gear predictor-corrector scheme, which means that we have to keep track of the coordinates and all their derivatives up to fifth order. The breaking criterion of the beams is evaluated in each iteration time step, and those beams which fulfill the condition are removed from the calculations. The simulation stops when no beams break during a certain number of time steps. Previously this model has been applied to study fragmentation of solids in various experimental situations [@discrete; @transition; @twodisc; @granulate; @proj]. For more details of the simulation technique see Ref. [@discrete].
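To make the integration step concrete, the following sketch shows a minimal fifth-order Gear predictor-corrector update for a single degree of freedom. The corrector coefficients are the standard textbook values for second-order equations of motion with velocity-independent forces; both these values and the force routine are assumptions for illustration and are not taken from the code used for the simulations reported here.

```python
import numpy as np
from math import comb

# Corrector coefficients of the six-value Gear scheme for second-order
# equations of motion with velocity-independent forces (standard textbook
# values, quoted here as an assumption about the implementation).
ALPHA = np.array([3 / 16, 251 / 360, 1.0, 11 / 18, 1 / 6, 1 / 60])

def gear5_step(y, force, dt):
    """One fifth-order Gear predictor-corrector step for one coordinate.

    y[i] stores the rescaled derivatives y[i] = (dt**i / i!) * d^i r / dt^i,
    so the predictor reduces to a Taylor shift with binomial coefficients.
    """
    # Predictor: propagate all rescaled derivatives one step forward.
    yp = y.copy()
    for i in range(6):
        for j in range(i + 1, 6):
            yp[i] += comb(j, i) * y[j]
    # Corrector: evaluate the force at the predicted position and
    # distribute the acceleration mismatch over all derivatives.
    delta = 0.5 * dt ** 2 * force(yp[0]) - yp[2]
    return yp + ALPHA * delta

# Minimal check on a harmonic oscillator, r'' = -r, against cos(t).
dt, n_steps = 1.0e-3, 1000
y = np.zeros(6)
y[0] = 1.0                      # r(0) = 1, v(0) = 0
y[2] = 0.5 * dt ** 2 * (-1.0)   # initial acceleration a(0) = -r(0)
for _ in range(n_steps):
    y = gear5_step(y, lambda r: -r, dt)
print(y[0], np.cos(n_steps * dt))
```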
Crack pattern {#crack}
=============

![\[snapshots\] Snapshots of the fragmentation process in normal impact of a circular disc of radius 30 cm at $250$ cm/sec. (a) $t=0.00150$ sec, a high compressive wave generated at the impact point causes the primary breakage. (b) $t=0.00300$ sec, cracks start at the periphery of the conical region whose base is the contact line between the disc and the plate. (c) $t=0.00600$ sec, oblique cracks move outwards. (d) $t=0.00996$ sec, the oblique cracks reach the outer surface. (e) The disc is reassembled at the end of the simulation to observe the final crack pattern. Panels (a)-(e) correspond to the files crack/90_250_0.00150.ps, crack/90_250_0.00300.ps, crack/90_250_0.00600.ps, crack/90_250_0.00996.ps and crack/90_250_0.01000.ps.](crack/90_250_0.00150.ps "fig:"){width="30.00000%"}

In the present work we apply our model to explore the properties of the fragmentation process of a circular disc when dropped on a frictionless hard plate at different angles. The particle moves with a constant speed $v_0$ without the influence of gravity, which is obtained by supplying a constant velocity to all the polygons constituting the circular particle just before it touches the hard surface. The impact angle $\theta$, defined as the angle of the vector of the impact velocity to the horizontal, was varied between $90^{\circ}$ (normal impact) and $45^{\circ}$ (oblique impact). In order to understand the break-up process of discs, we investigated the crack pattern arising both in normal and oblique impacts. Fig. \[snapshots\] presents the time evolution of the crack pattern of a normal impact obtained by the simulation of a circular disc of radius $30$ cm at $250$ cm/sec. When the disc strikes against the hard plate, a high compressive wave is generated at the impact point. Fracture starts from the region of the contact point and propagates through the disc. As time passes, more and more bonds break at the impact region and the area of contact increases progressively. As a result of this primary breakage a cone-shaped (triangle-shaped in two dimensions) damage area is created, whose base corresponds approximately to the area of contact of the specimen and the target (see Fig. \[snapshots\]b); it is more distinct at the end of the fragmentation process (see Fig. \[snapshots\]d). When the cone is driven into the specimen, a large number of cracks are generated starting from the region around the cone (see Fig. \[snapshots\]b). This indicates that a high stress concentration has developed around the conical damage region. Later on these cracks run together to form a few oblique cracks (see Fig. \[snapshots\]c) directed radially outward. As crack propagation is very energy dissipative, the intensity of the compressive wave gradually decreases as these oblique cracks move outwards, and hence larger fragments appear opposite the impact point. To demonstrate the effect of the impact velocity on the break-up process, in Fig.
\[snapshot\] the final states of the process obtained at different impact velocities are shown, with the fragments reassembled into the initial disc. At low velocities the compressive wave intensity generated at the impact point is low, and hence the cone cannot develop fully. Moreover, only a few oblique cracks are obtained, and they do not reach the opposite surface of the disc (Fig. \[snapshot\]a). As the velocity increases, more oblique cracks develop and cover a greater distance (Fig. \[snapshot\]b), and a considerable part of the initial kinetic energy goes into the motion of the residue, resulting in rebound. At the impact velocity where the oblique cracks reach the outer surface of the disc opposite to the impact point, the break-up process drastically changes: below this velocity mostly contact damage occurs in the form of the damage cone and a relatively big residue remains. Above this velocity, however, the cracks spanning the entire disc result in the break-up of the residue into smaller pieces, see Fig. \[snapshots\]d. Later on it will be shown that the behavior of the system quantitatively changes at this velocity, which we call the critical velocity. At impact velocities larger than the critical value, secondary cracks are generated roughly perpendicular to the oblique cracks. Secondary cracks from neighboring oblique cracks may merge with each other, as can be seen in Fig. \[snapshot\]c. Also at higher impact velocities, vertical cracks with a direction nearly perpendicular to the target plate are more prominent, as the intensity of the stress concentration near the tip region of the cone is high compared to other parts.

![\[snapshot\] Final reassembled states of the break-up process of a disc of radius $30$ cm at different impact velocities dropped on a hard frictionless plate. (a) $v_0 = 100$ cm/sec. The cone is not fully developed and only a few oblique cracks are present. (b) $v_0 = 200$ cm/sec. More oblique cracks develop and travel a greater distance. (c) $v_0 = 600$ cm/sec. Both oblique cracks and secondary cracks are present. Panels (a)-(c) correspond to the files crack/90_100_0.01000.ps, crack/90_200_0.01000.ps and crack/90_600_0.01000.ps.](crack/90_100_0.01000.ps "fig:"){width="30.00000%"}

Crack patterns obtained in the final state of oblique impacts at impact angles $75^{\circ}, 60^{\circ}$ and $45^{\circ}$ are compared in Fig. \[snap2\]. It is important to emphasize that in our calculations the friction between the target plate and the circular disc is completely excluded. Under this condition, varying the impact velocity while keeping its normal component constant, practically the same crack pattern is obtained (see Fig. \[snap2\]). Thus, the crack propagation and orientation during the fragmentation process depend solely on the normal component of the impact velocity. Comparing the crack patterns obtained in the simulations to the experimental results [@fra_pattern], we did not find any meridian cracks, as they are difficult to detect in two dimensions. The pattern of oblique cracks and secondary cracks is in satisfactory agreement with the experimental results. The simulations confirm that the oblique cracks which were observed in experimental investigations [@fra_pattern] develop along the trajectories of maximum compression planes.
![\[snap2\] The reassembled snapshots of normal and oblique impacts at different velocities, keeping the normal component constant at $500$ cm/sec. The crack patterns are almost the same in all cases. (a) $\theta = 90^{\circ}$. (b) $\theta = 75^{\circ}$. (c) $\theta = 60^{\circ}$. (d) $\theta = 45^{\circ}$. Panels (a)-(d) correspond to the files crack/crack_90_500.ps, crack/crack_75_518.ps, crack/crack_60_578.ps and crack/crack_45_708.ps.](crack/crack_90_500.ps "fig:"){width="40.00000%"}

Results
=======

Studying the evolution of the final crack patterns when changing the impact velocity, we have identified a critical velocity $v_c$ which separates the regimes of different break-up mechanisms. In the following, we analyze characteristic quantities of the break-up process, and show that there are substantial differences between the two regimes.

Size Distribution of Fragments {#size}
------------------------------

![\[maxmas\] The mass of the first and second largest fragment as a function of the normal component of the impact velocity $v_0\sin{\theta}$ at different impact angles $\theta$ and velocities $v_0$ for a system size of $30$ cm. The vertical dashed line indicates the critical point; the damaged and fragmented regimes are also indicated. ](figures/new_maxmass.eps){width="50.00000%"}

Recently, it has been shown that the final outcome of a fragmentation process can be classified into two states depending on the amount of the imparted energy: damaged and fragmented states, with a sharp transition in between. Detailed analysis revealed that the transition between the two states occurs as a continuous phase transition, which also provides a possible explanation of the observed power law mass distribution of fragments. To explore the nature of the critical velocity $v_c$ identified in the previous section, we investigated the evolution of the mass of the two largest fragments when varying the angle $\theta$ and the velocity $v_0$ of impact. Plotting the largest fragment mass as a function of the normal component of the impact velocity for both normal and oblique impacts in Fig. \[maxmas\], all curves fall on top of one another. This implies that in the absence of friction between the plate and the disc, the size reduction achieved depends on both $v_0$ and $\theta$, but only through the combination of the two variables $v_n = v_0 \sin{\theta}$. The curves of the second largest mass exhibit the same data collapse when plotted as a function of $v_n$, further supporting the above arguments. The functional form of the two largest fragment masses in Fig. \[maxmas\] shows the existence of two distinct regions. At low impact velocity, breakage takes place only at the impact point and the largest fragment is nearly equal to the original mass. As the velocity of impact increases, more small fragments are chipped off from the impact point and cracks start around the damaged conical region and move towards the outer surface of the disc. At the critical velocity $v_c$ these propagating cracks reach the outer surface opposite to the impact point and the largest fragment breaks into several big pieces.
The impact velocity where the second largest mass attains its maximum value or where there is an inflexion point in the largest fragment mass curve coincides with the critical velocity defined by analyzing the cracking pattern in Figs.\[snapshots\],\[snapshot\]. In our case for normal impact of a system of $30$ cm radius the critical velocity turned out to be $250$ cm/sec. The quality of the data collapse of the curves in Fig. \[maxmas\] obtained at different impact velocities $v_0$ and angles $\theta$ is excellent for the largest fragment, however, there are large fluctuations of the value of the second largest mass at impact velocities just below the critical point. Above the critical point all curves merge nicely together. ![\[avgfra\] The average fragment mass $\overline M$ as a function of the impact velocity $v_0$. The inset shows the same curves plotted as a function of the normal component of $v_0$. For oblique fragmentation when the velocity approaches the critical point always fluctuations arise, however beyond the critical velocity all curves merge nicely. ](figures/velnor_avgfra_all.eps){width="50.00000%"} More information about the evolution of fragment sizes with varying impact angle $\theta$ and velocity $v_0$ can be obtained by studying the moments of fragment masses [@instable; @transition; @univer; @proj]. The $k$th moment $M_k$ of fragment masses is defined as $$\label{eq:m_k} M_k=\sum_i^{N_f} m_i^k-M_\mathrm{max}^k,$$ where $N$ denotes the total number of fragments, $m_i$ is the mass of fragment $i$ and $M_\mathrm{max}$ is the largest fragment mass. The definition Eq. (\[eq:m\_k\]) means that the $k$th power of the largest mass is extracted from the sum of the $k$th power of the fragment masses. The average mass of fragments $\overline M$ can be defined as the ratio of the second and first moments $\overline M \equiv M_2/M_1$. In order to demonstrate the effect of rescaling the impact velocity, in the main panel of Fig.\[avgfra\] the average fragment mass $\overline M$ is presented as a function of the impact velocity $v_0$ for the system size $R=30$cm obtained at different impact angles $\theta$, and in the inset the same curves are shown as a function of the normal component of $v_0$. It can be seen that for each impact angle $\overline{M}$ has a peak which broadens and gets shifted towards larger velocity values with decreasing impact angle. However, when plotting the same quantity as a function of the normal components of the impact velocity $v_0\sin{\theta}$ all the curves fall on top of each other. Larger fluctuations arise below the critical point which are more dominant for lower impact angles. Note that the position of the maximum in the inset coincides with the transition point determined in Fig. \[maxmas\]. ![\[specific\] Average fragment mass $\overline M$ as a function of specific energy $E_0/m_{tot}$. The critical point, [ *i.e.*]{} the position of the maximum, depends on system size. ](figures/speng_avgmas_90_all.eps){width="50.00000%"} Studies of fragmentation of various types of brittle solids have revealed that a larger amount of energy is required to achieve the same size reduction on systems of larger size. However, in terms of the specific energy, [*i.e.*]{} the energy imparted to the fragmenting system divided by the total mass $E_0/m_{tot}$, all the characteristic quantities show a universal behavior, especially the critical value of the specific energy is independent of the system size [@transition; @proj; @twodisc; @granulate; @poto; @simu]. 
For impacting discs, however, the critical value of the specific energy shows a clear dependence on the size of the disc $R$, as illustrated in Fig. \[specific\]. The larger the disc, the higher the energy density required to break it into pieces. A possible explanation is that for larger discs a larger part of the imparted energy goes into the motion of the fragments, lowering the efficiency of break-up.

The total number of fragments $N_f$ is also an important measure of the degree of break-up in the impact process. Fig. \[nofra\] shows that the number of fragments $N_f$ is uniquely determined by the normal component of the impact velocity $v_n$, [*i.e.*]{} the curves obtained at different impact angles present a perfect collapse when plotted as a function of $v_n$. It can be seen in the figure that the number of fragments is a monotonically increasing function of the velocity; however, the functional form of $N_f$ is different on the two sides of the critical point, [*i.e.*]{} up to the critical point the curves clearly follow a straight line, whereas above the critical point all curves bend slightly downward as the efficiency of the fragmentation process decreases. Replotting the results using a logarithmic scale on the horizontal axis, however, a straight line is obtained above the critical point (see the inset of Fig. \[nofra\]), which implies that $N_f$ has the form $$\begin{aligned}
N_f = a\cdot \ln{\frac{v_n}{v_{nc}}} + N_c, \ \ \mbox{for} \ \ v_n > v_{nc},\end{aligned}$$ where $N_c$ denotes the number of fragments at the critical point and $a$ is the slope of the straight line in the inset of Fig. \[nofra\].

![\[nofra\] Number of fragments $N_f$ as a function of the normal component of the impact velocity.](figures/velnor_nofra_all.eps){width="50.00000%"}

The amount of damage occurring during the break-up process can be quantified by the so-called damage ratio $d$ proposed by Thornton [*et al.*]{} [@agglomerate]. $d$ is defined as the ratio of the number of broken contacts $N_b$ to the total number of contacts $N_o$ existing initially inside the disc. The damage ratio $d$ depends both on the impact angle $\theta$ and the impact velocity $v_0$, [*i.e.*]{} increasing $v_0$ at a fixed value of $\theta$ results in an increase of $d$, and increasing the impact angle $\theta$ at a given value of $v_0$ also increases the damage ratio. However, when plotting $d$ as a function of the normal velocity $v_n$ in Fig. \[damage\_ratio\], the curves obtained at different impact angles collapse on top of each other, which implies that $d$ depends solely on $v_n$. Similarly to the number of fragments, $d$ is also a monotonically increasing function of $v_n$; however, its functional form changes at the critical point. Below the critical point $d$ is a linear function of $v_n$, while above the critical point the curve is non-linear, slightly bending downward. On a semilogarithmic plot a straight line again arises, which implies the functional form $$d = b\cdot \ln{\left(\frac{v_n}{v_{nc}}\right)}+d_c, \ \ \mbox{for} \ \ v_n > v_{nc},$$ where $d_c$ is the value of $d$ at the critical point and $b$ is the slope of the fitted straight line in Fig. \[damage\_ratio\]. A somewhat similar functional form of $d$ has also been pointed out by Thornton [@agglomerate; @imp_agglo] for the impact of discs and spherical objects on a hard plate.
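The quantities discussed above are straightforward to evaluate from the list of fragment masses recorded at the end of a simulation run. The following minimal sketch illustrates the computation of the moments of Eq. (\[eq:m\_k\]), of the average fragment mass $\overline M = M_2/M_1$, and of the logarithmic fit above the critical point; it is not part of the simulation code, and the numerical values as well as the use of NumPy/SciPy are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def moment(masses, k):
    """k-th moment of the fragment masses with the largest fragment removed,
    M_k = sum_i m_i**k - M_max**k, cf. Eq. (eq:m_k)."""
    m = np.asarray(masses, dtype=float)
    return np.sum(m**k) - np.max(m)**k

def average_fragment_mass(masses):
    """Average fragment mass defined as the ratio of the second and first moments."""
    return moment(masses, 2) / moment(masses, 1)

V_NC = 250.0  # critical normal velocity for the R = 30 cm disc (cm/sec), from the text

def log_law(v_n, a, N_c):
    """N_f = a*ln(v_n/v_nc) + N_c, valid above the critical point (v_n > v_nc).
    The damage ratio d obeys the same form with coefficients b and d_c."""
    return a * np.log(v_n / V_NC) + N_c

# purely illustrative data points above the critical velocity
v_n = np.array([260.0, 300.0, 350.0, 400.0, 500.0])  # normal velocity (cm/sec)
N_f = np.array([105.0, 140.0, 175.0, 205.0, 255.0])  # hypothetical fragment numbers

(a_fit, N_c_fit), _ = curve_fit(log_law, v_n, N_f, p0=(100.0, 100.0))
print(f"slope a = {a_fit:.1f}, N_c at the critical point = {N_c_fit:.1f}")
```

Fixing $v_{nc}$ at the critical velocity determined from the crack patterns keeps the two-parameter fit well conditioned; fitting $v_{nc}$ as a third parameter would make the model degenerate, since it only shifts the intercept.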
![\[damage\_ratio\] Dependence of the damage ratio $d$, [*i.e.*]{} the ratio of the number of broken beams $N_b$ to the total number of beams $N_o$, on the normal component of the impact velocity.](figures/impvelnor_beambrk_all.eps){width="50.00000%"}

Plate Force
-----------

![\[wforce\] (a) Keeping the normal component of the impact velocity constant while changing the impact angle, the force exerted by the plate is almost the same. (b) Force exerted by the plate at various velocities during normal impact. (c) The kinetic energy normalized by the total kinetic energy as a function of time. (d) The state of the disc of size $30$ cm after $t=0.00210$ sec, which corresponds to the maximum plate force and minimum kinetic energy at the impact velocity of $250$ cm/sec. ](figures/wallf_200_all.eps "fig:"){width="49.00000%"} ![](figures/wforce_90.eps "fig:"){width="49.00000%"} ![](figures/energy_90_all.eps "fig:"){width="49.00000%"} ![](crack/crack_250_90_0.00210t.ps "fig:"){width="40.00000%"} (a) (b) (c) (d)

An alternative way of showing the effect of the impact angle is to analyze the force exerted by the plate on the disc. In Fig. \[wforce\](a) and (b) typical time series of the force between the target plate and the disc are presented. If the normal component of the impact velocity $v_n = v_0\sin{\theta}$ is kept constant while changing $\theta$ and $v_0$, the force practically remains constant except for fluctuations, see Fig. \[wforce\](a). This shows clearly that the force exerted by the target plate depends only on the normal component of the impact velocity, providing further support for the above findings, in agreement with Refs. [@agglomerate; @bk; @imp_agglo; @imp_ang]. To take a closer look at the nature of the plate force, we have plotted it as a function of time at various velocities (see Fig. \[wforce\](b)). In general, as the impact velocity is increased, the maximum plate force increases and the duration of the impact decreases. The maximum force exerted by the plate occurs when the kinetic energy has a minimum (see Fig. \[wforce\](c)). The maximum plate force, or minimum kinetic energy, corresponds to the state of the fragmented disc where most of the bonds break near the contact region and cracks start to propagate radially outwards from the conical damage region (see Fig. \[wforce\](d)). Since damage, [*i.e.*]{} bond breaking, dissipates energy, the final kinetic energy is significantly less than the initial kinetic energy. Increasing the impact velocity gives rise to an increase in the final kinetic energy and a decrease of the duration of the plate-disc contact. Moreover, the part of the initial kinetic energy remaining at the end of the impact first decreases with the impact velocity, until the velocity is sufficient to produce multiple fracture, and then increases due to the increase in the kinetic energy of the broken fragments. Clearly, there are two stages to the bond breaking process during impact. Initially, bonds are broken primarily as a result of the high compressive shock wave adjacent to the impact site, which occurs during the period when the plate force is increasing. This is followed by further bond breakage due to crack propagation radially outwards, starting around the conical damage region where high stress concentration occurs, while the plate force decreases.
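The coincidence between the maximum of the plate force and the minimum of the kinetic energy can be read off directly from the recorded time series. A minimal sketch of this check is given below; the placeholder arrays merely stand in for the simulation output and are not actual data.

```python
import numpy as np

# Placeholder time series standing in for the recorded plate force and the
# normalized kinetic energy of one impact (illustration only, not actual data).
t = np.linspace(0.0, 0.004, 401)                           # time (sec)
F_plate = np.exp(-((t - 0.0021) / 0.0008) ** 2)            # plate force (arb. units)
E_kin = 1.0 - 0.8 * np.exp(-((t - 0.0021) / 0.0008) ** 2)  # E_kin(t) / E_kin(0)

t_Fmax = t[np.argmax(F_plate)]  # time of the maximum plate force
t_Emin = t[np.argmin(E_kin)]    # time of the minimum kinetic energy
print(f"max plate force at t = {t_Fmax:.5f} s, min kinetic energy at t = {t_Emin:.5f} s")
```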
Mass Distribution of Fragments
------------------------------

![\[mass\] The fragment mass histograms of normal impact for the system size $R = 30$ cm with varying initial impact velocity. The straight line shows the power law fitted to the curve at the critical velocity $v_0 = 250$ cm/sec with an exponent $\tau = 1.835$.](figures/massdist_90.eps){width="45.00000%"}

The mass (or size) distribution of fragments is one of the most important characteristic quantities of the disc impact, which also has a high practical relevance. The fragment mass histograms $F(m)$ of normal impact are presented in Fig. \[mass\] for the system size of $30$ cm at varying impact velocity $v_0$. In order to resolve the shape of the distribution over a wide range of mass values, logarithmic binning was used, [*i.e.*]{} the binning is equidistant on a logarithmic scale. It can be observed that the histograms have a maximum at small fragment sizes due to the existence of single unbreakable polygons. The shape of the peak of the small fragments is determined by the mass distribution of single Voronoi polygons obtained by the tessellation procedure. At low velocities, much below the critical point, the distributions are discontinuous: for small fragment masses the distributions are smoothly decreasing functions, while for large fragments $F(m)$ has a peak indicating the presence of large unbroken pieces. In between, however, the medium sized fragments are missing. As the impact velocity increases, the large pieces break up into smaller ones, the peak of the large fragments on the right hand side gradually disappears, and the entire mass distribution becomes continuous. It is interesting to note that the peak of large fragments disappears completely at the critical point, where the cracks starting from the damaged conical region reach the outer surface of the disc, breaking the disc into several smaller pieces. As a result, $F(m)$ takes a power law form at the critical point $$\begin{aligned}
F(m) \sim m^{-\tau}. \end{aligned}$$ The exponent of the power law $\tau$ fitted to the curve at the critical velocity $v_0 = 250$ cm/sec is $\tau = 1.835$. For oblique impact the value of the exponent is nearly the same as in normal impact within a precision of $\pm 0.05$. Simulations with different system sizes such as $R=25$ cm, $R=20$ cm and $R=15$ cm showed that the exponent $\tau$ is also independent of $R$. Increasing the impact velocity above the critical point, the power law regime of the mass distribution remains unchanged; however, the largest fragment size decreases and the shape of the curve attains an exponential form for large fragments. In the limiting case of very high impact velocities the disc suffers complete disintegration into small pieces. In this shattered phase $F(m)$ gradually transforms to an exponential form. However, the shattered phase is approached only slowly when increasing the impact velocity $v_0$, since the damage ratio and the number of fragments have a logarithmic dependence on $v_0$. The results are in good quantitative agreement with the experimental findings on the fragmentation of plate-like objects [@glassplate; @platelike; @composite; @ice].

Scaling of the Velocity Distribution
------------------------------------

In applications like mineral processing, a fragment formed with a certain velocity can undergo secondary break-up due to collisions with other fragments or with the walls of the container.
To get an estimate of the importance of this secondary fragmentation, it is essential to study the fragment velocities. We investigated the velocity distribution of fragments and its dependence on the macroscopic variables of the system, such as the impact velocity $v_0$ and the radius of the disc $R$.

![\[veldistv\] The distribution of the $x$ and $y$ components of the velocity of fragments with fixed system size $R=30$ cm, varying the initial impact velocity $v_0$.](figures/veldistx_30_90.eps "fig:"){width="49.00000%"} ![](figures/veldisty_30_90.eps "fig:"){width="49.00000%"} (a) (b)

![\[veldistr\] The distribution of the $x$ and $y$ components of the velocity of fragments with fixed impact velocity $v_0 = 400$ cm/sec, varying the radius $R$ of the particle.](figures/veldistx_radi_all.eps "fig:"){width="49.00000%"} ![](figures/veldisty_radi_all.eps "fig:"){width="49.00000%"} (a) (b)

To determine the velocity distribution of fragments and explore its dependence on $v_0$ and $R$, we analyzed the data in two ways. First we fixed the disc radius $R$ and varied the impact velocity $v_0$, then we fixed $v_0$ while varying $R$. In both cases the calculations are restricted to normal impact ($\theta = 90^{\circ}$) and the distributions of the velocity components $n(v_x)$, $n(v_y)$ of the center of mass of the fragments are evaluated. In Figs.
\[veldistv\], \[veldistr\] we present the results for fixed radius $R = 30$ cm varying the initial velocity, and for fixed $v_0 = 400$ cm/sec varying the radius of the particles, respectively. In Fig. \[veldistv\](a) one can observe that the distribution of the $x$ component of the fragment velocities $n(v_x)$ is symmetric about $v_x=0$, as expected from the symmetry of the initial conditions. The zero mean value is a consequence of momentum conservation. As the impact velocity increases, the distribution broadens. The distribution $n(v_y)$ of the $y$ components also broadens with increasing impact velocity, but in addition shifts towards the negative $y$ direction. This is obvious, as the direction of the impact velocity is the negative direction of the $y$ axis and the total linear momentum increases with the impact velocity. However, in the $y$ direction fragments are slower, [*i.e.*]{} the values of $v_y$ are much smaller than those of $v_x$. Note that there is a small fraction of the debris which has a velocity larger than $v_0$, in agreement with experimental findings [@breakup]. When the impact velocity $v_0$ is fixed and the size of the disc $R$ is varied, however, the distribution of the $x$ components $n(v_x)$ remains the same (see Fig. \[veldistr\](a)). For $n(v_y)$ a trend similar to the case of changing $v_0$ is observed, except that the distribution is less dispersed with varying $R$ (Fig. \[veldistr\](b)).

![\[scalev\] Rescaled plots of the velocity distributions for a fixed system size of $30$ cm, varying the impact velocity $v_0$ above the critical point.](figures/veldistx_abvcrt.eps "fig:"){width="49.00000%"} ![](figures/veldisty_scl_90_abvcrt.eps "fig:"){width="49.00000%"} (a) (b)

![\[scaler\] Rescaled plot of the distributions of the $y$ component of fragment velocities $n(v_y)$ for a fixed impact velocity of $400$ cm/sec, varying the size $R$ of the particle. ](figures/veldisty_sc_allradi.eps){width="49.00000%"}

To reveal the functional form of the dependence of $n(v_x)$ and $n(v_y)$ on the macroscopic variables $v_0$ and $R$, we performed a scaling analysis of the distributions. Figs. \[scalev\], \[scaler\] demonstrate that by appropriately rescaling the axes one can merge the curves obtained at different values of the macroscopic variables onto a single curve. In the case of the $x$ components, the transformation is a stretching and a shrinking by a power of $v_0$ on the vertical and horizontal axis, respectively.
However, for the $y$ components, a combination of a linear shift and a shrinking by a power of $v_0$ is required. The good quality of the data collapse implies that the $v_0$ and $R$ dependence of the distributions can be cast in the form $$\begin{aligned}
\label{eqx} n(v_x,v_0) &\sim& v_0^{-\alpha} \phi\left(v_x v_0^{-\alpha}\right), \\ \label{eqy} n(v_y,v_0) &\sim& v_0^{-\beta} \psi\left((v_y+\lambda_1 v_0) v_0^{-\beta}\right), \end{aligned}$$ where the parameter values $\alpha = 0.92$, $\beta = 0.58$, and $\lambda_1 = 0.38$ were obtained by varying them until the best data collapse was reached in Figs. \[scalev\], \[scaler\]. Similarly, at constant velocity while varying the system size $R$, the functional form of the distribution $n(v_y)$ reads $$\label{eqyr} n(v_y,R) \sim R^{-\gamma} \xi\left((v_y+\lambda_2 R)R^{-\gamma}\right),$$ where $\gamma = 0.12$ and $\lambda_2 = 6.2$ provide the best collapse in Fig. \[scaler\]. $\phi$, $\psi$, and $\xi$ are scaling functions which seem to have a Gaussian-like shape in Figs. \[scalev\], \[scaler\]. The scaling forms Eqs. (\[eqx\],\[eqy\],\[eqyr\]) show that the width of the distributions has a power law dependence on the impact velocity $v_0$ and on the radius of the disc $R$. It has to be emphasized that the above scaling behavior is valid only above the critical velocity $v_c$; below $v_c$ no scaling was found. The scaling form of the distribution of the velocity components also has consequences for the spatial distribution of the flying pieces after impact. The increase of the width of the distributions with increasing impact velocity and disc radius implies that the flying fragments are more dispersed in space.

Conclusion
==========

We studied the normal and oblique impact of a circular brittle particle on a hard frictionless plate using a cell model of cohesive granular materials. We carried out a detailed analysis of the evolution of the crack pattern arising in the disc during the impact process, and of the mass and velocity distributions of fragments in the final state. For both normal and oblique impact, a cone shaped damage region is formed at the impact point, whose base area increases gradually as the velocity of impact increases. Cracks start to develop from the conical damaged region where the maximum stress concentration exists. The oblique crack patterns obtained resemble those of the experimental findings [@fra_pattern], where oblique cracks moving along the plane of maximum compression were found. In agreement with the experimental observations, the oblique cracks in our simulation follow the trajectory of the maximum compression plane. Varying the impact velocity while keeping its normal component constant, we observed that the crack pattern remains the same, in agreement with recent experimental and theoretical findings [@fra_pattern; @breakup; @powder; @fra_glass; @ball_impact]. Our analysis showed the existence of a critical value of the impact velocity at which the oblique cracks reach the outer surface of the disc opposite to the impact point. The critical velocity separates two regimes of the break-up process, [*i.e.*]{} below the critical point only a damage cone is formed at the impact site [*(damage)*]{}, cleaving of the particle occurs at the critical point, while above the critical velocity the disc breaks into several pieces [*(fragmentation)*]{}. In the limit of very high impact velocities, the disc suffers complete disintegration into many small fragments.
However, this shattered phase is approached only slowly, since the damage ratio and the number of fragments increase logarithmically with the impact speed. The critical behavior proved to be independent of the impact angle; it depends solely on the normal component of the impact velocity. Studying the average fragment size revealed that the critical value of the specific energy increases with the size of the disc. In practical cases this implies that a higher energy density is required to break a particle of larger size. Above the critical point, the mass distribution $F(m)$ of fragments was found to obey the Gates-Gaudin-Schuhmann distribution (power law) with an exponent close to $2$. The power law functional form occurs at the critical point and remains unchanged in a broad interval of the impact velocity, independent of the system size and of the impact angle. However, in the shattered phase attained in the limit of very high impact velocities, the fragment mass distribution tends to an exponential form. The results are in good quantitative agreement with the experimental findings on the fragmentation of plate-like objects [@glassplate; @platelike; @composite; @ice].

In applications like mineral processing, a fragment formed with a certain velocity can undergo secondary break-up due to collisions with other fragments or with the walls of the container. To get an estimate of the importance of this secondary fragmentation, the study of fragment velocities is essential. We determined the distribution of the velocity components of fragments and analysed the scaling behaviour of the distributions when changing the macroscopic variables of the system, [*i.e.*]{} the impact velocity $v_0$ and the system size $R$. A very interesting anomalous scaling of the distribution functions was revealed, with a power law dependence on $v_0$ and $R$.

In a variety of size reduction operations practiced by a wide range of industries, there is a particular interest in the net energy required to achieve a certain size reduction and in the energy distribution of the fragments during the grinding process. To maximize the efficiency of such processes, it is important to know the breakage characteristics of the grinding materials. Our current work can provide some of this valuable information.

[99]{} N. Arbiter, C. C. Harris, and G. A. Stamboltzis, Transactions of the Society of Mining Engs. of AIME [**119**]{}, 244 (1969). A. D. Salman, C. A. Biggs, J. Fu, I. Angyal, M. Szabo, and M. Z. Hounslow, Powder Technology [**128**]{}, 36 (2002). K. T. Chau, S. Z. Wu, W. C. Zhu, C. A. Tang, and T. X. Yu, in Proceedings of the 16[*th*]{} ASCE Engineering Mechanics Conference, July 16-18, 2003, University of Washington, Seattle. A. D. Salman and D. A. Gorham, Powder Technology [**107**]{}, 179 (1999). S. Data and B. K. Mishra, M. Tech. Thesis, FFT Analysis of Charge Dynamics in Tumbling Mill (1998). D. L. Turcotte, Jour. Geophys. Res. [**91 B2**]{}, 1921 (1986). C. Thornton, K. K. Yin, and M. J. Adams, J. Phys. D [**29**]{}, 424 (1996). B. K. Mishra and C. Thornton, Int. J. Mineral Processing [**61**]{}, 225 (2001). F. Kun and H. J. Herrmann, Comput. Meth. Appl. Mech. Eng. [**138**]{}, 3 (1996). F. Kun and H. J. Herrmann, Phys. Rev. E [**59**]{}, 2623 (1999). F. Kun and H. J. Herrmann, Int. Jour. Mod. Phys. C [**7**]{}, 837 (1996). G. A. D’Addetta, F. Kun, E. Ramm, and H. J.
Herrmann, [*From solids to granulates - Discrete element simulations of fracture and fragmentation processes in geomaterials*]{}, Continuous and discontinuous modelling of cohesive-frictional materials, pp. 231-258 (2001), Lecture Notes in Physics (LNP) 568, Springer Verlag, Berlin. L. Oddershede, P. Dimon, and J. Bohr, Phys. Rev. Lett., 3107 (1993). A. Meibom and I. Balslev, Phys. Rev. Lett., 2492 (1996). E. S. C. Ching [*et al.*]{}, Physica A [**265**]{}, 119 (1999). A. Diehl [*et al.*]{}, Phys. Rev. E [**62**]{}, 4742 (2000). J. Åström, M. Kellomäki, and J. Timonen, Phys. Rev. E [**55**]{}, 4757 (1997). W. T. Ashurst and B. L. Holian, Phys. Rev. E [**59**]{}, 6742 (1999). H. Inaoka, E. Toyosawa, and H. Takayasu, Phys. Rev. Lett. [**78**]{}, 3455 (1997). J. Åström and J. Timonen, Phys. Rev. Lett. [**78**]{}, 3677 (1997). J. A. Åström, B. L. Holian, and J. Timonen, Phys. Rev. Lett. [**84**]{}, 3061 (2000). M. Arakawa, Icarus [**142**]{}, 34 (1999). F. Wittel, F. Kun, H. J. Herrmann, and B. H. Kröplin, Phys. Rev. Lett. [**93**]{}, 035504 (2004). T. Matsui, T. Waza, K. Kani, and S. Suzuki, J. of Geophysical Research [**87 B13**]{}, 10968 (1982). A. Fujiwara and A. Tsukamoto, Icarus [**142**]{}, 44 (1980). V. Latora, M. Belkacem, and A. Bonasera, Phys. Rev. Lett. [**73**]{}, 1765 (1994). A. V. Potapov, C. S. Campbell, and M. A. Hopkins, Int. J. Mod. Phys. C [**6**]{}, 399 (1995). H. Katsuragi, D. Sugino, and H. Honjo, Phys. Rev. E [**68**]{}, 046105 (2003). T. Kadono, Phys. Rev. Lett. [**78**]{}, 1444 (1997). T. Kadono and M. Arakawa, Phys. Rev. E [**65**]{}, 035107(R) (2002). K. D. Kafui and C. Thornton, Powder Technology [**109**]{}, 113 (1999). O. Tsoungui, D. Vallet, J.-C. Charmet, and S. Roux, Granular Matter [**2**]{}, 19 (1999). M. Khanal, W. Schubert, and J. Tomas, Granular Matter [**5**]{}, 177 (2004). R. Moreno, M. Ghadiri, and S. J. Antony, Powder Technology [**130**]{}, 132 (2003). E. G. Kelly and D. J. Spottiswood, [*Introduction to Mineral Processing*]{}, John Wiley & Sons Inc., New York, 1982. C. Moukarzel and H. J. Herrmann, Jour. Stat. Phys. [**68**]{}, 911 (1992). H. J. Tillemans and H. J. Herrmann, Physica A [**217**]{}, 261 (1995). A. V. Potapov and C. S. Campbell, [*Fourth year progress report*]{}, International Fine Particle Research Institute, Dept. of Mech. Eng., University of Southern California, USA. B. Behera, F. Kun, S. McNamara, and H. J. Herrmann, cond-mat/0404057. V. Buchholtz, J. A. Freund, and T. Pöschel, Eur. Phys. Jour. B [**16**]{}, 169 (2000). T. Pöschel, [*Dynamik Granularer Systeme: Theorie, Experimente und numerische Experimente*]{}, Logos Verlag, Berlin (2000).
--- abstract: 'We obtain a sharp local well-posedness result for the Gradient Nonlinear Wave Equation on a nonsmooth curved background. In the process we introduce variable coefficient versions of Bourgain’s $X^{s,b}$ spaces, and use a trilinear multiscale wave packet decomposition in order to prove a key trilinear estimate.' address: - | Department of Mathematics, Hylan Building\ University of Rochester, Rochester, NY 14627 - | Department of Mathematics, Evans Hall\ University of California at Berkeley, Berkeley, CA 94720-3840 author: - 'Dan-Andrei Geba and Daniel Tataru' bibliography: - 'tril.bib' title: Gradient NLW on curved background in $4+1$ dimensions --- Introduction ============ In this article we investigate the issue of local well-posedness for a variable coefficient semilinear wave equation in $4+1$ dimensions. To describe the context and motivate the interest in our problem we introduce three related equations. We begin with a generic gradient NLW equation in ${\mathbb R}^{n+1}$, $$\Box u\,=\,\Gamma(u)(\nabla u)^2 \label{sem}$$ with the nonlinearity $$\Gamma(u) (\nabla u)^2\,=\,q^{ij}(u) {\partial}_i u\, {\partial}_j u$$ where $q^{ij}$ are smooth functions and the standard summation convention is used. Then we move on to a similar equation, but on a curved background, $$\Box_g u\,=\,\Gamma(u)(\nabla u)^2 \label{eq}$$ with $\Box_{g}\,=\,g^{ij}\,\partial_i\partial_j$, where the summation occurs from $0$ to $n$ and the index $0$ stands for the time variable. To ensure hyperbolicity we assume that the matrix $g^{ij}$ has signature $(1,n)$ and that the time level sets $x_0=const$ are space-like, i.e. $g^{00}>0$. In addition, to simplify some of the computations, we make the harmless assumption $g^{00}=1$. Finally, we consider a corresponding quasilinear equation $$\Box_{g(u)} u\,=\,\Gamma(u)(\nabla u)^2 \label{qua}$$ with similar assumptions on the matrix $g$. In all three cases we are interested in the local well-posedness of the Cauchy problem in Sobolev spaces $H^s({\mathbb R}^n)\times H^{s-1}({\mathbb R}^n)$ with initial data $$u(0,x)\,=\,u_0(x),\qquad \partial_t u(0,x)\,=\,u_1(x) \label{id}$$ The first equation is the best understood so far, and is known to be locally well-posed for $s$ in the range $$s>\max\{\frac{n}{2},\frac{n+5}{4}\}$$ This range is sharp. The $\frac{n}{2}$ obstruction comes from scaling, while the $\frac{n+5}{4}$ one is related to concentration along light rays, see Lindblad [@MR1375301]. The proof of the positive result is fairly straightforward in dimensions $2+1$ and $3+1$, where it suffices to rely on the Strichartz estimates. In $4+1$ dimensions this no longer works and one needs to use instead the $X^{s,\theta}$ spaces, see Foschi-Klainerman [@MR1755116]. These are multiplier weighted $L^2$ spaces associated to the wave operator in the same way the Sobolev spaces $H^s$ are connected to the Laplace operator $\Delta$, see Klainerman-Machedon [@MR1420552]: $$\| u \|_{X^{s,\theta}} = \| (1+|\xi|^2)^{\frac s2}\cdot (1+||\tau|-|\xi||^2)^{\frac \theta2}\cdot |\hat{u}(\tau,\xi)| \|_{L^2} \label{standardxs}$$ where $\hat u=\hat{u}(\tau,\xi)$ is the space-time Fourier transform of the function $u=u(t,x)$. Finally, in the most difficult case, $n\geq 5$, this was proved by Tataru [@MR1739207], using a suitable modification of the $X^{s,\theta}$ spaces, needed in order to control the interaction of high and low frequencies in the multiplicative estimates. For the quasilinear problem (\[qua\]) the sharp result is only known to hold in dimensions $n=2,3$.
This was proved by Smith-Tataru [@MR2178963] (see also Lindblad’s counterexample [@MR1666844]). The argument there still requires the use of Strichartz estimates. These are derived from a wave packet parametrix construction for a wave equation with very rough coefficients, which in turn is obtained via a very delicate analysis of the Hamilton flow. A different proof of this result in the special case of the Einstein vacuum equation was independently obtained by Klainerman-Rodnianski [@MR1885093], [@MR2180401], [@MR2052472]. In dimensions $n\geq 4$ it is still unclear what the optimal threshold is, the best results so far being contained in the above mentioned paper of Smith-Tataru [@MR2178963] and in an earlier one, Tataru [@MR1887639]: $$\aligned &n=4,5\quad \quad s>\frac{n}{2}+\frac{1}{2}\\ &n\geq 6\qquad \quad s>\frac{n}{2}+\frac{2}{3} \endaligned$$ In the same direction, but somewhat closer in spirit to the present paper, is Bahouri and Chemin’s work [@MR2082388; @MR2003418]. The equation considered there is still quasilinear, but the main estimates are frequency localized versions of the Strichartz estimates for the wave equation on a rough background. As an intermediate step toward understanding the higher dimensional quasilinear problem, we consider here the semilinear problem on a curved background and we prove the sharp result: Let $n=4$ and assume that the coefficients $g^{ij}$ satisfy $\partial^2 g \in L^2 L^\infty$. Then the Cauchy problem (\[eq\]), (\[id\]) is locally well-posed in $H^s\times H^{s-1}$ for $s>\frac{9}{4}$. \[main\] Here well-posedness is understood in the strongest sense, i.e. the solutions have Lipschitz dependence on the initial data and they exist on a time interval which depends only on the size of the initial data. One contribution of the present paper is to introduce variable coefficient versions of the $X^{s,b}$ spaces, study their properties and obtain the corresponding Strichartz type embeddings. However, the main novelty, contained in the last two sections, is a new method, based on a trilinear wave packet decomposition, to prove a key trilinear bound which cannot be obtained directly from the Strichartz estimates. The first step in the proof is to reduce the problem to the case when the initial data is small, using scaling and the finite speed of propagation. This is a routine argument for which we refer the reader to [@MR2178963]. Once we know that the initial data is small, we can fix the time interval and set it to $[-1,1]$. This will be the case throughout the rest of the paper. To solve the problem for small data we use a fixed point argument.
Let $S(u_0,u_1)$ and $\Box_{g}^{-1}$ be respectively the homogeneous and inhomogeneous solution operators $$\begin{aligned} &\Box_{g}S(u_0,u_1)\,=\,0,\quad S(u_0,u_1)(0)\,=\,u_0,\quad \partial_t S(u_0,u_1)(0)\,=\,u_1 \label{hom}\\ &\Box_{g}(\Box_{g}^{-1}H)\,=\,H,\qquad (\Box_{g}^{-1}H)(0)\,=\,0,\qquad \partial_t(\Box_{g}^{-1}H)(0)\,=\,0 \label{inhom}\end{aligned}$$ Then a solution $u$ for (\[eq\]) in $[-1,1]$ is also a fixed point for the functional $$F(u)\,= S(u_0,u_1) + \Box_{g}^{-1}( \Gamma(u)(\nabla u)^2) \label{funct}$$ In order to apply a fixed point argument for $F$ we need to find two Banach spaces $X$ and $Y$ for which the following mapping properties hold: $$\begin{aligned} &\,\| S(u_0,u_1) \|_X \lesssim \|(u_0,u_1)\|_{H^s\times H^{s-1}} \label{h}\\ &\,\|\Box_g^{-1}H\|_X \lesssim \|H\|_Y \label{i}\\ &\,\|u \cdot w\|_X \lesssim \|u\|_X \|w\|_X \label{xx}\\ &\|\Gamma(u)\|_{X} \lesssim C(\|u\|_{L^\infty}) (1 + \|u\|_{X}^5) \label{moser} \\ &\,\|u \cdot w\|_Y \lesssim \|u\|_X \|w\|_Y \label{xy} \\ &\,\|\nabla v\cdot \nabla w\|_Y\lesssim \|v\|_X\cdot \|w\|_X \label{n}\end{aligned}$$ where $C=C(\|u\|_{L^\infty})$ is a constant that depends solely on $\|u\|_{L^\infty}$. In the flat case (\[sem\]), for dimension $n=4$, one can make this argument work by choosing $$X\,=\,X^{s,\theta}\qquad Y\,=\,X^{s-1,\theta-1}$$ with $$s\,=\,\theta\,+\frac 32\qquad \theta\,>\,\frac 34 \label{st}$$ For our problem the challenge is twofold: first we need to find suitable variable coefficient versions for the $X^{s,\theta}$ spaces and then, in this new context, prove the corresponding estimates (\[h\])-(\[n\]). Such spaces were previously introduced by Tataru [@MR1391526], where they are used in the context of a unique continuation problem. There, for a hyperbolic operator $P$ one defines $$X^{s,0}\,=\,H^s,\quad X^{s,1}\,=\,\{u\in H^s| Pu\in H^{s-1}\}$$ Then all the other spaces are defined through interpolation and duality. In this article we choose to follow a different path based on dyadic decompositions with respect to the spatial frequency and the distance to the characteristic cone. Likely one should be able to prove that the two approaches are equivalent, but we choose not to pursue this here. Our article is structured as follows. In the next section we define the $X^{s,\theta}$ spaces and prove that they satisfy the linear estimates (\[h\]), (\[i\]). Our definition of the $X^{s,\theta}$ spaces is slightly different from the standard one in the constant coefficient case. Precisely, in the constant coefficient case our definition gives $$\| u \|_{X^{s,\theta}} \approx \| (1+|\xi|^2)^{\frac s2} (1+||\tau|-|\xi||^2)^{\frac \theta2}\cdot \hat{u}(\tau,\xi) \|_{L^2} + \|\Box u\|_{L^2_t H^{s+\theta-2}_x}, \quad 0 < \theta < 1 \label{newxs}$$ and one can see that the second term above alters the behavior at high modulations $|\tau| \gg |\xi|$. Correspondingly, for negative $\theta$ we have $$\| u \|_{X^{s,\theta}} \approx \| (1+|\xi|^2)^{\frac s2} (1+||\tau|-|\xi||^2)^{\frac \theta2}\cdot \hat{u}(\tau,\xi) \|_{L^2} + \| u\|_{L^2_t H^{s+\theta}_x}, \quad -1 < \theta < 0 \label{newxs1}$$ This change is consistent with scaling and simplifies somewhat the study of high modulation interactions. In Section \[secse\] we discuss the Strichartz estimates for $\Box_g$, which translate into embeddings for the $X^{s,\theta}$ spaces. These turn out to suffice for the proof of the algebra properties (\[xx\])-(\[xy\]) and for the high-high frequency interactions in (\[n\]). The difficult part is to study the high-low frequency interactions in (\[n\]).
For this we first take advantage of the duality relation $$(X^{s,\theta}+ L^2 H^{s+\theta})'\,=\,X^{-s,-\theta} \label{dual} \qquad s \in {\mathbb R}, \quad 0 < \theta < \frac 12$$ This is consistent with and . Using this duality, after factoring out high modulation interactions, the bound (\[n\]) is transformed into the trilinear estimate: $$\left|\int u\cdot v\cdot w\,\,dx \,dt\right|\lesssim \|u\|_{X^{1-s,1-\theta}} \|v\|_{X^{s-1,\theta}} \|w\|_{X^{s-1,\theta}} \label{trilin}$$ with $(s,\theta)$ verifying (\[st\]). The last section of the paper is devoted to proving this bound. The argument is based on a multiscale trilinear wave packet decomposition for linear waves. The $X^{s,\theta}$ spaces ========================= We first introduce Littlewood-Paley decompositions. As a general rule, all frequency localizations in the sequel are only with respect to the spatial variables. There is a single exception to this. Precisely, the coefficients $g^{ij}$ are truncated using space-time multipliers. In order for these truncations to work, we need for these coefficients to be defined globally in time. Hence we assume they have been extended to functions with similar properties in all of ${\mathbb R}^{n+1}$. Let $\phi$ be a smooth function supported in $\{\frac12 \leq |\xi| \leq 2 \}$ with the property that $$1 = \sum_{j=-\infty}^\infty \phi(2^{-j} \xi)$$ We consider a spatial Littlewood-Paley decomposition, $$1 = \sum_{{\lambda}=1}^\infty S_{{\lambda}}(D_x)$$ where for dyadic ${\lambda}> 1$ we have $$\qquad S_{{\lambda}}(\xi) = \phi\left(\frac{\xi}{{\lambda}}\right)$$ while $S_1$ incorporates the low frequency contribution in $\{|\xi|\leq 1\}$. Set $$S_{<{\lambda}} = \sum_{\mu=1}^{\frac{\lambda}2} S_\mu$$ We will also use spatial multipliers ${{\tilde S}}_{\lambda}$ with slightly larger support, with $S_{\lambda}{{\tilde S}}_{\lambda}= S_{\lambda}$. We say that a function $u$ is localized at frequency ${\lambda}$ if its Fourier transform is supported in the annulus $\{ \frac{\lambda}8 \leq |\xi| \leq 8{\lambda}\}$. For the paradifferential type calculus we also need to truncate the coefficients of $\Box_g$ in frequency. Given $\Box_g$ in (\[eq\]) we define the modified operators $$\Box_{g_{<{\lambda}}} =(S_{<{\lambda}}(D_x,D_t) g^{\alpha\beta}) \partial_\alpha \partial_\beta$$ In the sequel we omit the space and time variables in our function space notations, i.e. $L^p:= L^p_{x,t}$, $L^2 H^s:= L^2_t H^s_x$, $L^p L^q:= L^p_t L^q_x$, etc. We are ready now to define our spaces: Let $\theta\in (0,1)$ and $s \in {\mathbb R}$. Then $X^{s,\theta}$ is the space of functions $u \in L^2(-1,1; H^s({\mathbb R}^n))$ for which the following norm is finite: $$\|u\|^2_{X^{s,\theta}}\,=\,\inf \left \{ \sum_{{\lambda}=1}^\infty \sum_{d=1}^\lambda \|u_{\lambda,d}\|^2_{X_{{\lambda},d}^{s,\theta}} ;\ u =\sum_{{\lambda}=1}^\infty\sum_{d=1}^{{\lambda}}S_{{\lambda}} u_{{\lambda},d}\right\} \label{tp}$$ where ${\lambda}$, $d$ take dyadic values and $$\|u_{{\lambda},d}\|_{X_{{\lambda},d}^{s,\theta}}^2 = {\lambda}^{2s}d^{2\theta}\|u_{{\lambda},d}\|^2_{L^2} + {\lambda}^{2s-2}d^{2\theta-2}\|\Box_{g_{<{\sqrt{\lambda}}}}u_{{\lambda},d}\|^2_{L^2} \label{cl}$$ We also define the space $X^{s-1,\theta-1}$ of functions for which the following norm is finite: $$\begin{split} \| f\|_{X^{s-1,\theta-1}}^2 = \inf \left\{ \|f_0\|_{L^2 H^{s-1}}^2 + \sum_{{\lambda}=1}^\infty\right.& \sum_{d=1}^\lambda \|f_{\lambda,d}\|^2_{X_{{\lambda},d}^{s,\theta}} ;\ \\ & \left. 
f =f_0+ \sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}}\Box_{g_{<{\sqrt{\lambda}}}} S_{{\lambda}} f_{{\lambda},d} \right\} \end{split} \label{tn}$$ \[xst\] Intuitively $d$ stands for the modulation of the $u_{{\lambda},d}$ piece. Indeed, in the constant coefficient case one can easily see that $u_{\lambda,d}$ mainly contributes to $u$ in the region where $||\tau|-|\xi|| \approx d$. The condition $1 \leq d$ is related to the spatial localization on the unit scale in our problem. The condition $d \leq \lambda$ reflects the fact that at high modulation we use a simpler structure, see e.g. , . The cutoff at frequency less than $\sqrt{\lambda}$ for the coefficients $\Box_g$ is related to the regularity of the coefficients, $\partial^2 g \in L^2 L^\infty$. This implies that $\Box_{g_{\geq{\sqrt{\lambda}}}}u_{{\lambda},d}$ is an allowable error term.\ We begin our analysis of the $X^{s,\theta}$ spaces with a simple observation, namely that without any restriction in generality one can assume that the functions $u_{{\lambda},d}$ and $f_{{\lambda},d}$ in Definition \[xst\] are localized at frequency ${\lambda}$. Precisely, we have the stronger result: The following estimate holds: $${\lambda}^{s-1}d^{\theta}\|\nabla S_{\lambda}v\|_{L^2} + {\lambda}^{s-1}d^{\theta-1}\|\Box_{g_{<{\sqrt{\lambda}}}}S_{\lambda}v \|_{L^2} \lesssim \|v\|_{X_{{\lambda},d}^{s,\theta}} \label{strongst}$$ We first bound the time derivatives of $v$ in negative Sobolev spaces, $${\lambda}^{s} d^{\theta} (\| {\partial}_t^2 v\|_{L^2 (H^{-2}+{\lambda}^2 L^2)} + \|{\partial}_t v\|_{L^2(H^{-1}+{\lambda}L^2)} ) \lesssim \|v\|_{X_{{\lambda},d}^{s,\theta}} \label{lowv}$$ This follows by Cauchy-Schwartz from the interpolation inequality $$\|{\partial}_t v\|_{L^2 ( H^{-1}+{\lambda}L^2)}^2 \lesssim (\| {\partial}_t^2 v\|_{L^2 (H^{-2}+{\lambda}^2L^2)} +\|v\|_{L^2})\|v\|_{L^2}$$ combined with the bound $$\| {\partial}_t^2 v\|_{L^2 (H^{-2}+{\lambda}^2 L^2)} \lesssim {\lambda}^{-2} \| \Box_{g_{<{\sqrt{\lambda}}}} v\|_{L^2} + \|{\partial}_t v\|_{L^2 ( H^{-1}+{\lambda}L^2)} + \|v\|_{L^2}$$ To prove this last estimate we only use the $L^\infty$ regularity of $g$ together with the condition $g^{00} = 1$. Then we need the fixed time bounds $$\| g_{<{\sqrt{{\lambda}}}} {\partial}_x {\partial}_t v\|_{H^{-2}+{\lambda}^2 L^2} \lesssim \|{\partial}_t v\|_{H^{-1} + {\lambda}L^2}$$ $$\| g_{<{\sqrt{{\lambda}}}} {\partial}_x^2 v\|_{ H^{-2}+{\lambda}^2 L^2} \lesssim \|v\|_{L^2}$$ They are similar, so we only discuss the second one. We write $$g_{<{\sqrt{{\lambda}}}} {\partial}_x^2 v = {\partial}_x^2 (g_{<{\sqrt{{\lambda}}}} v) - 2{\partial}_x ({\partial}_x g_{<{\sqrt{{\lambda}}}} v) + {\partial}_x^2 g_{<{\sqrt{{\lambda}}}} v$$ and use the uniform bounds $$|g_{<{\sqrt{{\lambda}}}}| \lesssim 1, \qquad |{\partial}_x g_{<{\sqrt{{\lambda}}}}| \lesssim {\lambda}, \qquad |{\partial}_x^2 g_{<{\sqrt{{\lambda}}}}| \lesssim {\lambda}^2$$ This concludes the proof of . The first term in is directly bounded using . For the second it suffices to prove the commutator estimate $$\|[\Box_{g_{<{\sqrt{\lambda}}}},S_{\lambda}] v\|_{L^2} \lesssim {\lambda}\|v\|_{L^2}+ \|{\partial}_t v\|_{ L^2} \label{smallgcom}$$ We have $$[\Box_{g_{<{\sqrt{\lambda}}}},S_{\lambda}] = [g_{<{\sqrt{{\lambda}}}},S_{\lambda}] {\partial}_t {\partial}_x + [g_{<{\sqrt{{\lambda}}}},S_{\lambda}] {\partial}_x^2$$ and the commutators are localized at frequency ${\lambda}$ so the spatial derivatives only contribute factors of ${\lambda}$. 
Hence follows from the standard commutator estimate $$\|[g_{<{\sqrt{{\lambda}}}},S_{\lambda}]\|_{L^2 \to L^2} \lesssim {\lambda}^{-1} \|\nabla g\|_{L^\infty}$$ Applying the above Lemma with $S_{\lambda}$ replaced by ${{\tilde S}}_{\lambda}$ we obtain One can replace the $X^{s,\theta}_{{\lambda},d}$ norm in the definition of $X^{s,\theta}$ and $X^{s-1,\theta-1}$ by the norm $$\| v\|_{\tilde{X}^{s,\theta}_{{\lambda},d}} = {\lambda}^{s-1}d^{\theta}\|\nabla v\|_{L^2} + {\lambda}^{s-1}d^{\theta-1}\|\Box_{g_{<{\sqrt{\lambda}}}} v\|_{L^2}$$ \[tx\] For the proof of the duality relation it is convenient to work with a selfadjoint operator. Thus we consider the selfadjoint counterpart $\tilde{\Box}_g$ of $\Box_g$ $$\tilde{\Box}_g = \partial_i g^{ij} \partial_j$$ Then for $v$ localized at frequency $\lambda$ we commute and estimate the frequency localized difference $$\| \tilde \Box_{g_{<{\sqrt{\lambda}}}} v - \Box_{g_{<{\sqrt{\lambda}}}} v\|_{L^2} \lesssim \|\nabla v\|_{L^2}$$ This leads directly to One can replace the $\Box_{g_{<{\sqrt{\lambda}}}} $ operator in the definition of $X^{s,\theta}$ and $X^{s-1,\theta-1}$ by the similar operator in divergence form $\tilde\Box_{g_{<{\sqrt{\lambda}}}} $. \[self\] As a consequence of the second part of we have The following embedding holds for $-1 < \theta < 0$: $$X^{s,\theta} \subset L^2 H^{s+\theta}$$ \[sw\] Another use of this is to establish energy estimates. A direct application of energy estimates for the wave equation yields the bound $$\| \nabla v\|_{L^\infty L^2}^2 \lesssim \|\nabla v\|_{L^2}^2 + \|\nabla v\|_{L^2} \| \Box_{g} v\|_{L^2}$$ This leads to $${\lambda}^{s-1} d^{\theta-\frac12} \| \nabla S_{\lambda}v\|_{L^\infty L^2} \lesssim \| v\|_{\tilde{X}^{s,\theta}_{{\lambda},d}} \label{litwo}$$ Going back to Definition \[xst\], this implies Assume that $\theta > \frac12$. Then $$\|u\|_{L^\infty H^s} + \|u_t\|_{L^\infty H^{s-1}} \lesssim \|u \|_{X^{s,\theta}}$$ \[trace\] To prove the estimates and in the context of the $X^{s,\theta}$ spaces we need to switch from the frequency truncated coefficients to the full coefficients $g^{ij}$. The tool needed to do that is contained in the following: Assume that $0 \leq s \leq 3$. Then the following fixed time estimate holds: $$\sum_{{\lambda}=1}^\infty \lambda^{2(s-1)}\|{{\tilde S}}_{\lambda}({g_{> \sqrt{{\lambda}}}} u) \|_{L^2}^2 \lesssim (M (\|{\partial}^2 g\|_{L^\infty}))^2 \| u\|_{ H^{s-2}}^2 \label{largeg}$$ where $M$ stands for the maximal function with respect to time. We also have the dual estimate $$\| \sum_{{\lambda}=1}^\infty {g_{> \sqrt{{\lambda}}}} {{\tilde S}}_{\lambda}f_{\lambda}\|_{H^{2-s}}^2 \lesssim \sum_{{\lambda}=1}^\infty \lambda^{2(1-s)}\| f_{\lambda}\|_{L^2}^2 \label{largegdual}$$ We take a Littlewood-Paley decomposition of both factors, $${{\tilde S}}_{\lambda}(g_{> \sqrt{{\lambda}}} u) = \sum_{\mu=1}^\infty \sum_{\nu = \sqrt{{\lambda}}}^\infty {{\tilde S}}_{\lambda}( g_\nu u_\mu)$$ The $(\mu,\nu)$ term is nonzero only in the following situations: \(i) $\nu \ll {\lambda}$, $\mu \approx {\lambda}$. Then we estimate $$\| {{\tilde S}}_{\lambda}(g_\nu u_\mu)\|_{L^2} \lesssim \|g_\nu\|_{L^\infty} \|u_\mu\|_{ L^2} \lesssim \nu^{-2} M (\|{\partial}^2 g\|_{L^\infty}) \|u_\mu\|_{ L^2}$$ and use the square summability with respect to ${\lambda}$ together with the relation $\nu^{-2} \lesssim {\lambda}^{-1}$. \(ii) $\nu \approx {\lambda}$, $\mu \ll {\lambda}$. 
Then $$\| {{\tilde S}}_{\lambda}(g_\nu u_\mu)\|_{L^2} \lesssim \|g_\nu\|_{L^\infty} \|u_\mu\|_{ L^2} \lesssim {\lambda}^{-2} M (\|{\partial}^2 g\|_{L^\infty}) \|u_\mu\|_{ L^2}$$ This is tight only when $s=3$ and $\mu=1$, otherwise there is a gain which insures the summability in ${\lambda}$, $\mu$. \(iii) $\nu \approx \mu \gtrsim {\lambda}$. Then $$\| {{\tilde S}}_{\lambda}(g_\nu u_\mu)\|_{L^2} \lesssim \|g_\nu\|_{L^\infty} \|u_\mu\|_{ L^2} \lesssim \mu^{-2} M (\|{\partial}^2 g\|_{L^\infty}) \| u_\mu\|_{ L^2}$$ This is always stronger than we need. The proof of the lemma is concluded. We now establish some simple properties of the linear equation $$\Box_g u = f, \qquad u(0) = u_0, \qquad u_t(0) = u_1. \label{leq}$$ Then The linear equation is well-posed in $H^s \times H^{s-1}$ for $0 \leq s \leq 3$. \[lwp\] The proof follows easily from energy estimates, see [@MR2153517]. We use this to prove , namely Assume that $0 \leq s \leq 3$ and $\theta > 0$. Then the solution $u$ to verifies $$\| u \|_{X^{s,\theta}} \lesssim \|u_0\|_{H^s} + \|u_1\|_{H^{s-1}} + \|f\|_{L^2 H^{s-1}}$$ \[nhom\] We decompose the solution $u$ as $$u = \sum_{{\lambda}=1}^\infty S_\lambda {{\tilde S}}_{\lambda}u$$ and think of this as a part of the sum in which corresponds to $d=1$. Then $$\begin{split} \|u\|_{X^{s,\theta}}^2 &\lesssim \sum_{{\lambda}=1}^\infty \|{{\tilde S}}_{\lambda}u\|_{X^{s,\theta}_{\lambda,1}}^2 \\ & \approx \sum_{{\lambda}=1}^\infty {\lambda}^{2s} \| {{\tilde S}}_{\lambda}u\|^2_{L^2} + \lambda^{2(s-1)} \|\Box_{g_{<{\sqrt{\lambda}}}} {{\tilde S}}_{\lambda}u\|^2_{L^2} \\ &\lesssim \| u\|^2_{L^2 H^s} + \sum_{{\lambda}=1}^\infty \lambda^{2(s-1)} \|\Box_{g_{<{\sqrt{\lambda}}}} {{\tilde S}}_{\lambda}u - {{\tilde S}}_{\lambda}\Box_g u \|^2_{L^2} + \|f\|^2_{L^2 H^{s-1}} \end{split}$$ The first term is easily controlled by energy estimates. The second is decomposed as follows: $$\Box_{g_{<{\sqrt{\lambda}}}} {{\tilde S}}_{\lambda}u - {{\tilde S}}_{\lambda}\Box_g u = [\Box_{g_{<{\sqrt{\lambda}}}},{{\tilde S}}_{\lambda}] u - {{\tilde S}}_{\lambda}\Box_{g_{>{\sqrt{\lambda}}}} u$$ For the commutator we use the fixed time bound along with square summability in $\lambda$. The second part is controlled by . The result in the next Lemma implies the estimate for the spaces $X,Y$: Assume that $0 \leq s \leq 3$ and $\frac12 < \theta < 1$. Then the operator $\Box_g^{-1}$ has the mapping property $$\Box_g^{-1}: X^{s-1,\theta-1} \to X^{s,\theta}$$ Let $f \in X^{s-1,\theta-1}$. We use the representation in , $$f = f_0 + \sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}}\Box_{g_{<{\sqrt{\lambda}}}} S_{{\lambda}} f_{{\lambda},d}$$ By Definition  the function $$u = \sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}} S_{{\lambda}} f_{{\lambda},d}$$ belongs to $X^{s,\theta}$. The difference $v = u - \Box_g^{-1} f$ solves $$\Box_g v = \Box_g u - f, \qquad v(0) = u(0), \qquad v_t(0) = u_t(0)$$ To estimate it we use Lemma \[nhom\]. The initial data is controlled due to Corollary \[trace\], so it remains to bound the inhomogeneous term in $L^2 H^{s-1}$. 
Thus we need to show that $$\| \sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}} \Box_{g_{>{\sqrt{\lambda}}}} S_{{\lambda}} f_{{\lambda},d} \|_{L^2H^{s-1}}^2 \lesssim \sum_{{\lambda},d} \|f_{{\lambda},d}\|_{X^{s,\theta}_{{\lambda},d}}^2$$ Considering the trace regularity result in Corollary \[trace\] this would follow from $$\| \sum_{{\lambda}=1}^\infty \Box_{g_{>{\sqrt{\lambda}}}} S_{{\lambda}} f_{{\lambda}} \|_{L^2H^{s-1}}^2 \lesssim \sum_{{\lambda}} \|\nabla f_{{\lambda}}\|_{L^\infty H^{s-1}}^2, \qquad f_{\lambda}= \sum_{d=1}^{\lambda}f_{{\lambda},d}$$ which in turn is a consequence of the fixed time bound . We finish this section by proving a key duality relation between $X^{s,\theta}$ spaces with positive, respectively negative $\theta$. For $ 0 < \theta < \frac12$ we have the duality relation $$X^{-s,-\theta}= (X^{s,\theta} + L^2 H^{s+\theta})' \label{dual1}$$ a\) We first show that $$X^{-s,-\theta} \subset (X^{s,\theta} + L^2 H^{s+\theta})'$$ From Corollary \[sw\] we obtain $ X^{-s,-\theta} \subset (L^2 H^{s+\theta})' $. It remains to prove the bound $$\left|\int u\cdot f\,\, dx\,dt\right|\lesssim \|u\|_{X^{s,\theta}}\,\|f\|_{X^{-s,-\theta}}$$ We consider Littlewood-Paley decompositions of $u$ and $v$ as in Definition \[xst\], $$u = \sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}} S_\lambda u_{\lambda,d}, \qquad f = f_0 + \sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}} \tilde \Box_{g_{<\sqrt{\lambda}}} S_\lambda f_{\lambda,d}$$ with $\tilde \Box_{g_{<\sqrt{\lambda}}}$ in divergence form, see Corollary \[self\]. The summation with respect to ${\lambda}$ is essentially diagonal therefore it follows by orthogonality. To handle the $d$ summation it suffices to obtain the off-diagonal decay $$\begin{split} \left | \int S_{\lambda}u_{\lambda,d_1} \cdot\tilde\Box_{g_{<\sqrt{\lambda}}} S_{\lambda}f_{\lambda,d_2} dx dt\right| \lesssim & \min\left\{\left(\frac{d_2}{d_1}\right)^\theta, \left (\frac{d_1}{d_2}\right)^{\frac12-\theta}\right\} \\ & \|u_{\lambda,d_1}\|_{X^{s,\theta}_{\lambda,d_1} }\,\|f_{\lambda,d_2}\|_{X^{1-s,1-\theta}_{\lambda,d_2}} \end{split}$$ If $d_2 < d_1$ then this follows directly from and . Otherwise we integrate by parts $$\begin{split} \int S_{\lambda}u_{\lambda,d_1}\cdot \, &\tilde\Box_{g_{<\sqrt{\lambda}}} S_{\lambda}f_{\lambda,d_2} dx dt = \int\tilde \Box_{g_{<\sqrt{\lambda}}} S_{\lambda}u_{\lambda,d_1} \cdot S_{\lambda}f_{\lambda,d_2} dx dt \\ + & \left. \int (S_{\lambda}u_{\lambda,d_1}\cdot g^{0\alpha}_{<\sqrt{{\lambda}}}\partial_\alpha S_{\lambda}f_{\lambda,d_2} - g^{0\alpha}_{<\sqrt{{\lambda}}}\partial_\alpha S_{\lambda}u_{\lambda,d_1} \cdot S_{\lambda}f_{\lambda,d_2}) dx \right|_{-1}^1 \end{split}$$ For the first term we use and . For the second we use the trace regularity result in . b\) We now show that $$(X^{s,\theta}+ L^2 H^{s+\theta})' \subset X^{-s,-\theta}$$ Let $T$ be a bounded linear functional on $X^{s,\theta} + L^2 H^{s+\theta}$. Due to the second term we can identify $T$ with a function $u \in L^2 H^{-s-\theta}$. 
On the other hand, we can apply it to functions $v \in X^{s,\theta}$ of the form $$v = \sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}} S_\lambda v_{\lambda,d}$$ Then we must have the bound $$|Tv|^2 \lesssim \|v\|_{X^{s,\theta}}^2 \lesssim \sum_{{\lambda},d} \|v_{\lambda,d}\|^2_{X^{s,\theta}_{\lambda,d}} \lesssim \sum_{{\lambda},d} \left( {\lambda}^{2s}d^{2\theta}\|v_{{\lambda},d}\|^2_{L^2} + {\lambda}^{2s-2}d^{2\theta-2}\|\tilde \Box_{g_{<{\sqrt{\lambda}}}}v_{{\lambda},d}\|^2_{L^2}\right)$$ Given the definition of the $X^{s,\theta}_{\lambda,d}$ norms, using successively the Hahn-Banach theorem and Riesz’s theorem it follows that we can find functions $f_{\lambda,d}$ and $h_{\lambda,d}$ with $$\sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}} \lambda^{-2s} d^{-2\theta} \|f_{\lambda,d}\|_{L^2}^2 + \lambda^{2(1-s)} d^{2(1-\theta)} \|h_{\lambda,d}\|_{L^2}^2 = M < \infty \label{M}$$ so that $$Tv = \sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}} \int f_{\lambda,d}\, v_{\lambda,d} + h_{\lambda,d}\cdot \tilde \Box_{g_{< \sqrt\lambda}}v_{\lambda,d}\, dx dt$$ In particular this must hold for $v$ of the form $v = S_\lambda v_{\lambda,d}$, $$\int u \, S_\lambda v_{\lambda,d} dx dt = \int f_{\lambda,d}\, v_{\lambda,d} + h_{\lambda,d}\cdot \tilde \Box_{g_{< \sqrt\lambda}}v_{\lambda,d} \, dx dt$$ For each $\lambda,d$ this yields $$S_\lambda u = f_{\lambda,d} + \tilde \Box_{g_{< \sqrt\lambda}} h_{\lambda,d}$$ Then we can represent $S_\lambda u$ in the form $$S_\lambda u = f_{\lambda,1} + \sum_{d=1}^{\frac\lambda 2} \tilde \Box_{g_{< \sqrt\lambda}} u_{\lambda,d} + \tilde \Box_{g_{< \sqrt\lambda}} h_{\lambda,\lambda} \qquad u_{\lambda,d} = h_{\lambda,d} -h_{\lambda,2d} \label{py}$$ This yields for $u$ the representation $$u = \sum_{\lambda = 1}^\infty \tilde S_\lambda \left( f_{\lambda,1} + \sum_{d=1}^{\frac\lambda 2} \tilde \Box_{g_{< \sqrt\lambda}} u_{\lambda,d} + \tilde \Box_{g_{< \sqrt\lambda}} h_{\lambda,\lambda}\right) \label{pya}$$ This is very close to but not exactly the form in . However the multipliers $\tilde S_\lambda$ can be easily replaced by $S_\lambda$ by reapplying the Littlewood-Paley decomposition on the right, and then $S_\lambda$ can be commuted to the right of $\tilde \Box_{g_{< \sqrt\lambda}}$ due to Corollary \[tx\] and the commutator bound . Hence we have $$\| u\|_{X^{-s,-\theta} }^2 \lesssim \sum_{\lambda = 1}^\infty \left(\lambda^{-2s} \|f_{\lambda,1} \|_{L^2}^2 + \sum_{d=1}^{\lambda/2} \| u_{\lambda,d}\|_{X_{\lambda,d}^{1-s,1-\theta}}^2 + \| h_{\lambda,\lambda}\|_{X_{\lambda,\lambda}^{1-s,1-\theta}}^2\right)$$ and due to it remains to bound the right hand side by $$M + \| u\|_{L^2 H^{-s-\theta}}^2$$ There is nothing to do for the $f_{\lambda,1}$ term.
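Indeed, with $d = 1$ the quantity $\lambda^{-2s} \|f_{\lambda,1}\|_{L^2}^2$ appears verbatim among the terms in the sum defining $M$ above, so its contribution is majorized by $M$ directly.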
On the other hand we can bound $$\begin{split} \|u_{\lambda,d}\|_{X^{1-s,1-\theta}_{\lambda,d}}^2 \lesssim &\ \lambda^{2(1-s)} d^{2(1-\theta)} \| u_{\lambda,d}\|_{L^2}^2 + \lambda^{-2s} d^{-2\theta} \| \tilde \Box_{g_{< \sqrt\lambda}} u_{\lambda,d}\|_{L^2}^2 \\ = &\ \lambda^{2(1-s)} d^{2(1-\theta)} \| h_{\lambda,d} - h_{\lambda,2d}\|_{L^2}^2 + \lambda^{-2s} d^{-2\theta} \| f_{\lambda,d} - f_{\lambda,2d}\|_{L^2}^2 \\ \lesssim &\ \lambda^{-2s} d^{-2\theta} (\|f_{\lambda,d}\|_{L^2}^2+ \|f_{\lambda,2d}\|_{L^2}^2) \\ &\ + \lambda^{2(1-s)} d^{2(1-\theta)} (\|h_{\lambda,2d}\|_{L^2}^2 +\|h_{\lambda,d}\|_{L^2}^2) \end{split}$$ Finally, for the last term we have $$\begin{split} \| h_{\lambda,\lambda}\|_{X_{\lambda,\lambda}^{1-s,1-\theta}}^2 \lesssim &\ \lambda^{2(1-s)} \lambda^{2(1-\theta)} \| h_{\lambda,\lambda}\|_{L^2}^2 + \lambda^{-2s} \lambda^{-2\theta} \| \tilde \Box_{g_{< \sqrt\lambda}} h_{\lambda,\lambda}\|_{L^2}^2 \\ = &\ \lambda^{2(1-s)} \lambda^{2(1-\theta)} \| h_{\lambda,\lambda}\|_{L^2}^2 + \lambda^{-2s} \lambda^{-2\theta} \| S_\lambda u - f_{\lambda,\lambda}\|_{L^2}^2 \\ \lesssim&\ \lambda^{2(1-s)} \lambda^{2(1-\theta)} \| h_{\lambda,\lambda}\|_{L^2}^2 + \lambda^{-2s} \lambda^{-2\theta} \| f_{\lambda,\lambda}\|_{L^2}^2 + \| S_\lambda u\|_{L^2 H^{-s-\theta}}^2 \end{split}$$ The proof is concluded. Strichartz estimates and applications. {#secse} ====================================== The Strichartz estimates for the variable coefficient wave equation, as proved in [@MR1887639], have the form: (Tataru [@MR1887639]) Assume that the coefficients $g^{ij}$ of $\Box_g$ satisfy ${\partial}^2 g^{ij} \in L^1L^\infty$. Then the solutions to the wave equation in $n+1$ dimensions satisfy the bounds $$\| D^{\sigma} \nabla u\|_{L^p L^q} \lesssim \|u(0)\|_{H^1} + \|u_t(0)\|_{L^2} + \|\Box_g u\|_{L^1 L^2} \label{se}$$ where $$\sigma = -\frac{n}2 + \frac{1}p +\frac{n}q, \qquad \frac{2}p +\frac{n-1}q \leq \frac{n-1}2, \qquad 2 \leq p \leq \infty, \ \ 2 \leq q < \infty \label{pq}$$ Applying this bound on an interval $I$ of size $\epsilon^2$ we obtain by Cauchy-Schwarz $$\| D^{\sigma} \nabla u\|_{L^p(I; L^q)} \lesssim \frac{1}\epsilon \|u\|_{H^1(I \times {\mathbb R}^n)} + \epsilon \|\Box_g u\|_{L^2(I \times {\mathbb R}^n)}, \qquad \epsilon \leq 1$$ Summing up over small intervals this extends to intervals of arbitrary lengths. Optimizing over $\epsilon$ (the elementary bookkeeping is sketched below) yields $$\| D^{\sigma} \nabla u\|_{L^p L^q}^2 \lesssim \| u\|_{H^1}^2 + \| u\|_{H^1} \|\Box_g u\|_{L^2}$$ We want to apply this result to the functions $S_{\lambda}u_{{\lambda},d}$ in Definition \[xst\].
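Here is the elementary optimization referred to above (a sketch; the only ingredients are $p \geq 2$, so that the $\ell^p$ sum of the interval norms is controlled by the $\ell^2$ sum, and the fact that the squares of the localized $H^1$ and $L^2$ norms add up over disjoint intervals): $$\| D^{\sigma} \nabla u\|_{L^p L^q}^2 \lesssim \sum_{I} \| D^{\sigma} \nabla u\|_{L^p(I; L^q)}^2 \lesssim \epsilon^{-2} \| u\|_{H^1}^2 + \epsilon^{2} \|\Box_g u\|_{L^2}^2$$ Choosing $\epsilon^2 = \min\{1, \|u\|_{H^1}/\|\Box_g u\|_{L^2}\}$ balances the two terms: if $\|\Box_g u\|_{L^2} \geq \|u\|_{H^1}$ both equal $\|u\|_{H^1}\|\Box_g u\|_{L^2}$, and otherwise $\epsilon = 1$ and both are at most $\|u\|_{H^1}^2$.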
By we obtain a\) Let $(\sigma,p,q)$ satisfy $$\sigma = -\frac{n}2 + \frac{1}p +\frac{n}q, \qquad 2 \leq p \leq \infty, \ \ 2 \leq q < \infty$$ Then for $(\sigma,p,q)$ as in we have $$\| S_{\lambda}\nabla u\|_{L^p L^q} \lesssim \lambda^{1-s-\sigma} d^{\frac12-\theta} \|u\|_{X^{s,\theta}_{{\lambda},d}}$$ If additionally $\theta > \frac12$ then $$\| S_{\lambda}\nabla u\|_{L^p L^q} \lesssim \lambda^{1-s-\sigma} \|u\|_{X^{s,\theta}}$$ b) If instead $$\frac{2}p +\frac{n-1}q \geq \frac{n-1}2$$ then $$\| S_{\lambda}\nabla u\|_{L^p L^q} \lesssim \lambda^{1- s-\sigma+\frac12(\frac{2}p +\frac{n-1}q - \frac{n-1}2)} d^{\frac12-\theta-\frac12(\frac{2}p +\frac{n-1}q - \frac{n-1}2)} \|u\|_{X^{s,\theta}_{{\lambda},d}}$$ \[sob\] The interesting triplets of indices for $(\sigma,p,q)$ in $4+1$ dimensions are $$(0,\infty,2) \text{(energy)} \qquad (-\frac12,\frac{10}3,\frac{10}3) \text{(Strichartz)} \qquad (-\frac56,2,6) \text{(Pecher)}$$ In addition, we can also use the index $q=\infty$. Thus we obtain the triplets $$(-2,\infty,\infty), \qquad (-\frac32,2,\infty)$$ For the case when $\theta<\frac12$, we rely on the additional triplets $$(-\frac16,2,3),\qquad (\frac14,4,2)$$ For convenience we summarize the bounds we need for $\tilde X^{s,\theta}_{{\lambda},d}$: For $0 < \theta < 1$ we have $${\lambda}^{s-1} \|S_{\lambda}\nabla u\|_{L^\infty L^2} + {\lambda}^{s-\frac52} \|S_{\lambda}\nabla u\|_{L^2 L^\infty} + {\lambda}^{s-3} \| S_{\lambda}\nabla u\|_{L^\infty} \lesssim d^{\frac12-\theta} \|u\|_{\tilde X^{s,\theta}_{{\lambda},d}}$$ $${\lambda}^{s-\frac{17}{12}} \|S_{\lambda}\nabla u\|_{L^2 L^3} + {\lambda}^{s-1} \| S_{\lambda}\nabla u\|_{L^4 L^2} \lesssim d^{\frac14-\theta} \|u\|_{\tilde X^{s,\theta}_{{\lambda},d}}$$ \[d\] The reason we include the gradient is to also have bounds for $u_t$. Because of the frequency localization, if we drop the gradient the same bounds hold with one less power of ${\lambda}$. In our estimates later on we also need to work with $X^{s,b}$ functions which are concentrated into a smaller modulation range. For this we introduce the additional norm $$\| u\|_{\tilde X^{s,\theta}_{{\lambda},<d}}^2 = \inf \left\{ \sum_{h=1}^d \|u_h\|_{\tilde X^{s,\theta}_{{\lambda},h}}^2;\ u = \sum_{h=1}^d u_h\right\}$$ If $d = {\lambda}$ we simply write $\tilde X^{s,\theta}_{{\lambda}}$. A simple argument leads to $$\|u\|^2_{X^{s,\theta}}\,=\,\inf \left \{ \sum_{{\lambda}=1}^\infty \|S_{{\lambda}} u_{{\lambda}}\|_{\tilde X^{s,\theta}_{{\lambda}}}^2 ;\ u =\sum_{{\lambda}=1}^\infty S_{{\lambda}} u_{{\lambda}}\right\}$$ We also have a\) Assume that $\theta > \frac12$. Then $${\lambda}^{s-1} \|S_{\lambda}\nabla u\|_{L^\infty L^2} + {\lambda}^{s-\frac52} \|S_{\lambda}\nabla u\|_{L^2 L^\infty} + {\lambda}^{s-3} \| S_{\lambda}\nabla u\|_{L^\infty} \lesssim \|u\|_{{{\tilde{X}}}^{s,\theta}_{{\lambda},<d}}$$ b\) Assume that $\theta < \frac12$. Then $${\lambda}^{s-1} \|S_{\lambda}\nabla u\|_{L^\infty L^2} + {\lambda}^{s-\frac52} \|S_{\lambda}\nabla u\|_{L^2 L^\infty} + {\lambda}^{s-3} \| S_{\lambda}\nabla u\|_{L^\infty} \lesssim d^{\frac12-\theta} \|u\|_{\tilde X^{s,\theta}_{{\lambda},<d}}$$ \[lessd\] In preparation for proving bilinear estimates for the $X^{s,\theta}$ spaces we first investigate which multiplications leave the ${{\tilde{X}}}^{s,\theta}_{{\lambda},d}$ space unchanged.
For this we define the algebras $M_d$, $M_{<d}$ with the norms $$\|f\|_{M_d} = \| f\|_{L^\infty} + d^{-1} \|f_t\|_{L^\infty} + d^{-\frac12} \|f_t\|_{L^2 L^\infty} + d^{-\frac32} \|f_{tt}\|_{L^2 L^\infty}$$ $$\|f\|_{M_{<d}} = \|f\|_{M_d} + d^{\frac12} \| f\|_{L^2 L^\infty}$$ Then we have the multiplicative properties Assume that $f$ is localized at frequency $d \leq {\lambda}$. Then we have $$\|f S_{\lambda}u\|_{\tilde X^{s,\theta}_{{\lambda},d}} \lesssim \|f\|_{M_d} \|u\|_{\tilde X^{s,\theta}_{{\lambda},d}}$$ respectively $$\begin{split} \|f S_{\lambda}u\|_{\tilde X^{s,\theta}_{{\lambda},d}} & \lesssim \|f\|_{M_{<d}} \|u\|_{\tilde X^{s,\theta}_{{\lambda},<d}}, \qquad \theta < \frac12 \\ \|f S_{\lambda}u\|_{\tilde X^{s,\theta}_{{\lambda},d}} & \lesssim d^{\theta-\frac12} \|f\|_{M_{<d}} \|u\|_{\tilde X^{s,\theta}_{{\lambda},<d}}, \qquad \theta > \frac12 \end{split}$$ \[md\] The proof is straightforward, using Leibnitz’s rule and the energy estimate . To bound functions in the $M_d$, respectively $M_{<d}$ norms we use Corollary \[lessd\] with $d = {\lambda}$ to obtain: a\) Assume that $\theta > \frac12$. Then $$\| S_{\lambda}u\|_{M_{<{\lambda}}} \leq {\lambda}^{2-s} \|u\|_{\tilde X^{s,\theta}_{{\lambda}}}\qquad \| S_{<{\lambda}} u\|_{M_{{\lambda}}} \leq \max\{1,{\lambda}^{2-s}\} \|u\|_{X^{s,\theta}}$$ $$\| S_{<{\lambda}} u\|_{M_{<{\lambda}}} \leq \max\{{\lambda}^ \frac12,{\lambda}^{2-s}\} \|u\|_{X^{s,\theta}}$$ b\) Assume that $\theta < \frac12$. Then $$\| S_{\lambda}u\|_{M_{<{\lambda}}} \leq {\lambda}^{\frac52-\theta-s} \|u\|_{\tilde X^{s,\theta}_{{\lambda}}}\qquad \| S_{<{\lambda}} u\|_{M_{{\lambda}}} \leq \max\{1,{\lambda}^{\frac52-\theta-s}\} \|u\|_{X^{s,\theta}}$$ $$\| S_{<{\lambda}} u\|_{M_{<{\lambda}}} \leq \max\{{\lambda}^ \frac12,{\lambda}^{\frac52-\theta-s}\} \|u\|_{X^{s,\theta}}$$ \[xmd\] Using the above property we prove the algebra property for the space $X$. Assume that $s > 2$ and $\frac12 < \theta < s-\frac32$ . Then $X^{s,\theta}$ is an algebra. \[xxx\] Let $u,v \in X^{s,\theta}$. For both we consider the decomposition in Definition \[xst\], $$u = \sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}} S_{\lambda}u_{{\lambda},d}, \qquad v = \sum_{{\lambda}=1}^\infty \sum_{d=1}^{{\lambda}} S_{\lambda}v_{{\lambda},d},$$ For the terms in the decomposition we use the $\tilde{X}^{s,\theta}_{{\lambda},d}$ norms, as allowed by Corollary \[tx\]. We denote $$u_{\lambda}= \sum_{d=1}^{{\lambda}} u_{{\lambda},d}, \qquad u_{{\lambda},<d} = \sum_{h=1}^{d} u_{{\lambda},h}$$ Then we write $$uv = \sum_{\mu =1}^\infty S_{\mu} (uv) = \sum_{\mu =1}^\infty \sum_{{\lambda}_1 =1}^\infty \sum_{{\lambda}_2 =1}^\infty S_{\mu} (S_{{\lambda}_1}u_{{\lambda}_1} S_{{\lambda}_2}v_{{\lambda}_2})$$ There are two cases when the above summand is nonzero, namely if ${\lambda}_1 \approx {\lambda}_2 \gtrsim \mu$ and if $\max\{{\lambda}_1,{\lambda}_2\} \approx \mu$. We consider them separately. [**Case 1**]{}, $ {\lambda}_1,{\lambda}_2 \approx {\lambda}\gtrsim \mu$. In this case the summability with respect to ${\lambda}$ is trivial, so it suffices to look at the product $S_{\lambda}u_{\lambda}S_{\lambda}v_{\lambda}$ for fixed ${\lambda}$. This is localized at frequency $\leq {\lambda}$. 
Combining the $L^\infty L^2$ and the $L^2 L^\infty$ bounds in Corollary \[lessd\] we obtain $$\| S_{\lambda}u_{\lambda}S_{\lambda}v_{\lambda}\|_{L^2} +{\lambda}^{-1} \| {\partial}_t (S_{\lambda}u_{\lambda}S_{\lambda}v_{\lambda})\|_{L^2} \lesssim {\lambda}^{-2s+\frac32} \|u_{\lambda}\|_{\tilde X^{s,\theta}_{{\lambda}}} \|v_{\lambda}\|_{\tilde X^{s,\theta}_{{\lambda}}} \label{lla}$$ Using the equation we can also bound the second time derivative, $${\lambda}^{-2} \| {\partial}_t^2 (S_{\lambda}u_{\lambda}S_{\lambda}v_{\lambda})\|_{L^2} \lesssim {\lambda}^{-2s+\frac32} \|u_{\lambda}\|_{\tilde X^{s,\theta}_{{\lambda}}} \|v_{\lambda}\|_{\tilde X^{s,\theta}_{{\lambda}}} \label{llb}$$ The three bounds above allow us to estimate for $\mu \leq {\lambda}$ $$\|S_{\lambda}u_{\lambda}S_{\lambda}v_{\lambda}\|_{X^{s,\theta}_{\mu,\mu}} \lesssim \mu^{s+\theta-2} {\lambda}^{-2s+\frac72} \|u_{\lambda}\|_{\tilde X^{s,\theta}_{{\lambda}}} \|v_{\lambda}\|_{\tilde X^{s,\theta}_{{\lambda}}}$$ This suffices provided that $\theta < s-\frac32$, which is insured by our hypothesis. [**Case 2**]{}. Here we consider products of the form $S_\mu v_\mu S_{\lambda}u_{\lambda}$ where $\mu \ll {\lambda}$. Then the product is localized at frequency ${\lambda}$. The summation with respect to ${\lambda}$ is trivial, but not the one with respect to $\mu$. We write $$S_\mu v_\mu S_{\lambda}u_{\lambda}= S_\mu v_{\mu} S_{\lambda}u_{{\lambda},<\mu} + \sum_{d = \mu}^{\lambda}S_\mu v_{\mu} S_{{\lambda}} u_{{\lambda},d}$$ Using Lemma \[md\] and Lemma \[xmd\] we obtain $$\begin{split} \|S_{\lambda}u_{\lambda}S_\mu v_\mu&\|_{\tilde X^{s,\theta}_{{\lambda}}}^2 \lesssim \|S_\mu v_{\mu} S_{\lambda}u_{{\lambda},<\mu}\|_{\tilde X^{s,\theta}_{{\lambda},\mu}}^2 + \sum_{d = \mu}^{\lambda}\|S_\mu v_{\mu} S_{{\lambda}} u_{{\lambda},d}\|_{\tilde X^{s,\theta}_{{\lambda},d}}^2 \\ &\lesssim \mu^{2\theta-1} \|S_\mu v_\mu\|^2_{M_{<\mu}} \|u_{{\lambda},<\mu}\|_{\tilde X^{s,\theta}_{{\lambda},<\mu}}^2 + \|S_\mu v_\mu\|^2_{M_{\mu}} \sum_{d=\mu}^{\lambda}\|u_{{\lambda},d}\|_{\tilde X^{s,\theta}_{{\lambda},d}}^2 \\ &\lesssim \mu^{2\theta-1}\|S_\mu v_\mu\|^2_{M_{<\mu}}\sum_{d=1}^{\lambda}\|u_{{\lambda},d}\|_{\tilde X^{s,\theta}_{{\lambda},d}}^2 \\ &\lesssim \mu^{3+2\theta-2s} \|v_\mu\|^2_{ {{\tilde{X}}}^{s,\theta}_{\mu}}\sum_{d=1}^{\lambda}\|u_{{\lambda},d}\|_{\tilde X^{s,\theta}_{{\lambda},d}}^2 \end{split}$$ The summation with respect to $\mu$ is trivial since $\theta < s-\frac32$. We next prove . Assume that $s > 2$ and $\frac12 < \theta < s-\frac32$ . Then we have the multiplicative estimate $$X^{s,\theta} \cdot X^{s-1,\theta-1} \subset X^{s-1,\theta-1}$$ By duality this reduces to the multiplicative estimate $$X^{s,\theta} \cdot (X^{1-s,1-\theta}+L^2 H^{2-s-\theta}) \subset X^{1-s,1-\theta} + L^2 H^{2-s-\theta}$$ Since $s > 2$ we have the fixed time multiplication $$H^s \cdot H^{2-s-\theta} \subset H^{2-s-\theta}$$ which implies the space-time bound $$L^\infty H^s \cdot L^2 H^{2-s-\theta} \subset L^2 H^{2-s-\theta}$$ Due to the energy estimate for $X^{s,\theta}$ it remains to show that $$X^{s,\theta} \cdot X^{1-s,1-\theta} \subset X^{1-s,1-\theta} + L^2 H^{2-s-\theta}$$ We consider a product $S_{\lambda}u_{\lambda}S_\mu v_\mu$ which we decompose as in the previous proof. Because of the lack of symmetry we now need to consider three cases. [**Case 1**]{}. Here we estimate $ S_\mu (S_{\lambda}u_{\lambda}S_{\lambda}v_{\lambda})$ where $\mu \lesssim {\lambda}$. 
By Corollary \[lessd\] we obtain $$\|S_{\lambda}u_{\lambda}S_{\lambda}v_{\lambda}\|_{L^2 L^\frac{3}{2}} \lesssim \|S_{\lambda}u_{\lambda}\|_{L^4 L^3} \|S_{\lambda}v_{\lambda}\|_{L^4 L^3} \lesssim {\lambda}^{\theta-\frac{2}{3}} \|u_{\lambda}\|_{\tilde X^{s,\theta}_{{\lambda}}} \|v_{\lambda}\|_{\tilde X^{1-s,1-\theta}_{{\lambda}}}$$ Using then Sobolev embeddings we obtain $$\|S_\mu(S_{\lambda}u_{\lambda}S_{\lambda}v_{\lambda})\|_{L^2 H^{2-s-\theta}} \lesssim \mu^{\frac 83 -s-\theta}{\lambda}^{\theta-\frac{2}{3}} \|u_{\lambda}\|_{\tilde X^{s,\theta}_{{\lambda}}} \|v_{\lambda}\|_{\tilde X^{1-s,1-\theta}_{{\lambda}}}$$ [**Case 2**]{}. Here we bound $S_\mu u_\mu S_{\lambda}v_{\lambda}$, $\mu \ll {\lambda}$. The product is localized at frequency ${\lambda}$, and the analysis is almost identical to Case 2 in Proposition \[xxx\]. [**Case 3**]{}. Here we bound $S_{\lambda}u_{\lambda}S_\mu v_\mu$, $\mu \ll {\lambda}$. The same argument applies, the only difference is that we gain some extra $\mu/{\lambda}$ factors. We continue with the Moser estimates in , which follow from Assume that $s > 2$ and $\frac12 < \theta < s-\frac32$ . Let $\Gamma$ be a smooth function. Then $$\|\Gamma(u)\|_{X^{s,\theta}} \lesssim C(\|u\|_{L^\infty}) (1 + \|u\|_{X^{s,\theta}}^5)$$ We write $$\Gamma(u)-\Gamma(v) = (u-v) f(u,v)$$ and $$f(u,v) - f(x,y) = (u-x) g_1(u,v,x,y) + (v-y)g_2(u,v,x,y)$$ where $f$, $g_1$ and $g_2$ are smooth functions. Then we have $$\begin{split} \Gamma(u) =& \Gamma(u_{1}) + \sum_{{\lambda}=1}^\infty \Gamma(u_{\leq 2{\lambda}}) - \Gamma(u_{\leq {\lambda}}) \\ =& \Gamma(u_{1}) + \sum_{{\lambda}=1}^\infty u_{2{\lambda}} f(u_{\leq 2{\lambda}},u_{\leq {\lambda}}) \\ = & \Gamma(u_{1}) + \sum_{{\lambda}=1}^\infty u_{2{\lambda}} [ f(u_{\leq 2},u_1) + \sum_{\mu=2}^{\lambda}( f(u_{\leq 2\mu},u_{\leq \mu}) -f(u_{\leq \mu},u_{\leq \mu/2}))] \\ =& \Gamma(u_{1}) + \sum_{{\lambda}=1}^\infty u_{2{\lambda}} [ f(u_{\leq 2},u_1) + \sum_{\mu=2}^{\lambda}(u_{2\mu}\, g_1(u_{\leq 2\mu},u_{\leq \mu},u_{\leq \mu/2}) \\ &+ u_\mu \, g_2(u_{\leq 2\mu},u_{\leq \mu},u_{\leq \mu/2}))] \end{split}$$ Hence we need to bound expressions of the form $$S_{\lambda}u_{\lambda}\, S_\mu v_\mu \, h(S_{<\mu} w), \qquad \mu \leq {\lambda}$$ There are two different cases to consider: [**Case 1**]{}. $\mu \approx {\lambda}$. Then the product has the form $$S_{\lambda}u_{\lambda}\, S_{\lambda}v_{\lambda}\, h(S_{<{\lambda}} w)$$ The first product is localized at frequency ${\lambda}$ and can be estimated as in , . 
For the nonlinear expression we use Lemma \[xmd\] to obtain $$\| S_{<{\lambda}} w\|_{M_{\lambda}} \lesssim \|w\|_{X^{s,\theta}}$$ On one hand by the chain rule we obtain $$\| h(S_{<{\lambda}} w)\|_{M_{\lambda}} \lesssim C(\|w\|_{L^\infty}) (1 + \|w\|_{X^{s,\theta}}^3) \label{mlx}$$ On the other hand because of the frequency localization we also have the improved high frequency bound $$\| {{\tilde S}}_\mu h(S_{<{\lambda}} w)\|_{M_\mu} \lesssim C(\|w\|_{L^\infty}) \left(\frac{\lambda}\mu\right)^N (1+ \|w\|_{X^{s,\theta}}^3), \qquad \mu \gg {\lambda}\label{mlxmu}$$ Taking this into account and repeatedly using Leibnitz’s rule we get $$\begin{split} & \| S_{\lambda}u_{\lambda}\, S_{\lambda}v_{\lambda}\, h(S_{<{\lambda}} w) \|_{X^{s,\theta}}^2 \\ &\lesssim \sum_{\mu=1}^\infty \| S_\mu ( S_{\lambda}u_{\lambda}\,S_{\lambda}v_{\lambda}\, h(S_{<{\lambda}} w)) \|_{\tilde X^{s,\theta}_\mu}^2 \\ & \lesssim \sum_{\mu \lesssim {\lambda}} \| S_{\lambda}u_{\lambda}\, S_{\lambda}v_{\lambda}\, h(S_{<{\lambda}} w) \|_{\tilde X^{s,\theta}_\mu}^2 + \sum_{\mu \gg {\lambda}} \| S_{\lambda}u_{\lambda}\, S_{\lambda}v_{\lambda}\, {{\tilde S}}_\mu h(S_{<{\lambda}} w)) \|_{\tilde X^{s,\theta}_\mu}^2 \\ & \lesssim C(\|w\|_{L^\infty}) \left (\sum_{\mu \lesssim {\lambda}} \mu^{2s+2\theta-4} {\lambda}^{-4s + 7} + \sum_{\mu \gg {\lambda}} {\lambda}^{2\theta + 3 - 2s} \left(\frac{\lambda}\mu\right)^N \right) \|u_{\lambda}\|^2_{\tilde X^{s,\theta}_{\lambda}} \|v_{\lambda}\|^2_{\tilde X^{s,\theta}_{\lambda}} (1+\|w\|_{X^{s,\theta}}^6) \\ & \lesssim C(\|w\|_{L^\infty})\, {\lambda}^{2\theta + 3 - 2s} \, \|u_{\lambda}\|^2_{\tilde X^{s,\theta}_{\lambda}} \|v_{\lambda}\|^2_{\tilde X^{s,\theta}_{\lambda}} (1+\|w\|_{X^{s,\theta}}^6) \end{split}$$ This is trivially summable with respect to ${\lambda}$. [**Case 2**]{}. $\mu \ll {\lambda}$. Then the product has the form $$S_{\lambda}u_{\lambda}\,S_\mu v_\mu\, h(S_{<\mu} w) =$$ $$\begin{split} &S_{\lambda}u_{{\lambda},<\mu}\, S_\mu v_\mu\, S_{<\mu} h(S_{<\mu} w) + \sum_{\mu \leq d \ll {\lambda}} S_{{\lambda}} u_{{\lambda},<d}\, S_\mu v_\mu \, S_d h(S_{<\mu} w) \\ & + \sum_{\mu \leq d \ll {\lambda}} S_{{\lambda}} u_{{\lambda},d} \,S_\mu v_\mu \, S_{<d} h(S_{<\mu} w) + S_{\lambda}u_{\lambda}\, S_\mu v_\mu\, S_{{\lambda}} h(S_{<\mu} w) \\ & + \sum_{\nu \gg {\lambda}} S_{\lambda}u_{\lambda}\, S_\mu v_\mu\, S_{\nu} h(S_{<\mu} w) = f_1+f_2 +f_3 + f_4 + f_5 \end{split}$$ For $f_1$ we use Lemma \[md\], Lemma \[xmd\] and to obtain $$\begin{split} \|f_1\|_{\tilde X^{s,\theta}_{{\lambda},\mu}} &\lesssim \|S_{\lambda}u_{{\lambda},<\mu}\, S_\mu v_\mu\|_{\tilde X^{s,\theta}_{{\lambda},\mu}} \| h(S_{<\mu} w)\|_{M_\mu} \\ &\lesssim \mu^{\theta-\frac12} \|u_{{\lambda},<\mu}\|_{\tilde X^{s,\theta}_{{\lambda},<\mu}} \| S_\mu v_\mu\|_{M_{<\mu}} \| h(S_{<\mu} w)\|_{M_\mu} \\ &\lesssim C(\|w\|_{L^\infty}) \mu^{\theta+\frac32 -s} \|u_{{\lambda},<\mu}\|_{\tilde X^{s,\theta}_{{\lambda},<\mu}} \|v_\mu\|_{\tilde X^{s,\theta}_{\mu}} (1+\|w\|_{X^{s,\theta}}^3) \end{split}$$ The summation with respect to $\mu$ is trivial and the square summability with respect to ${\lambda}$ is inherited from the first factor. For $f_2$ we apply the same argument. There is a loss of a small power of $(d/\mu)^\theta$ from the first product, but this is compensated by the gain of arbitrary powers of $\mu/d$ due to . The same works for $f_3$ but there is no $(d/\mu)^\theta$ loss. In the case of $f_4$ we need to worry about the ${\lambda}$ summability, but the $(\mu/{\lambda})^N$ gain in settles this. 
Finally, for $f_5$ there is a $(\mu/\nu)^N$ gain which cancels again all the losses. Summing up the pieces we obtain $$\begin{split} &\| S_\nu (S_{\lambda}u_{\lambda}\,S_\mu v_\mu\, h(S_{<\mu} w)) \|_{X^{s,\theta}} \\ &\lesssim C(\|w\|_{L^\infty}) \nu^{s+\theta-2} {\lambda}^{-2s+\frac72} (\frac{\mu}{{\lambda}})^N \|u_{\lambda}\|_{\tilde X^{s,\theta}_{\lambda}} \|v_\mu\|_{\tilde X^{s,\theta}_\mu } (1+\|w\|_{X^{s,\theta}}^3) \end{split}$$ for $\nu \ll {\lambda}$, $$\| S_\nu (S_{\lambda}u_{\lambda}\, S_\mu v_\mu \,h(S_{<\mu} w)) \|_{X^{s,\theta}} \lesssim C(\|w\|_{L^\infty}) \mu^{\theta+\frac32 -s} \|u_{\lambda}\|_{\tilde X^{s,\theta}_{\lambda}} \|v_\mu\|_{\tilde X^{s,\theta}_\mu } (1+\|w\|_{X^{s,\theta}}^3)$$ for $\nu \approx {\lambda}$, respectively $$\| S_\nu (S_{\lambda}u_{\lambda}\, S_\mu v_\mu\, h(S_{<\mu} w)) \|_{X^{s,\theta}} \lesssim C(\|w\|_{L^\infty}) \mu^{\theta+\frac32 -s} (\frac{\mu}{\nu})^N \|u_{\lambda}\|_{\tilde X^{s,\theta}_{\lambda}} \|v_\mu\|_{\tilde X^{s,\theta}_\mu } (1+\|w\|_{X^{s,\theta}}^3)$$ for $\nu \gg {\lambda}$. This concludes the proof of the proposition. Finally, we consider the bilinear estimate in , which follows from the next Proposition. Its proof cannot be completed using the type of arguments we have employed so far. Instead, we content ourselves with reducing it to the trilinear estimate in , to the proof of which we devote the rest of the paper. Assume that $s > \frac94$ and $\frac34 < \theta < s-\frac32$. Then we have the multiplicative estimate $$\|\nabla u \nabla v\|_{X^{s-1,\theta-1}} \lesssim \|u\|_{X^{s,\theta}} \|v\|_{X^{s,\theta}} \label{mainp}$$ We begin our analysis with a simple observation, namely the following: If $u \in X^{s,\theta}$ then $\nabla u \in \tilde X^{s-1,\theta}$ where $$\tilde X^{s-1,\theta} = X^{s-1,\theta} + (L^2H^{s+\theta-1} \cap H^1 H^{s+\theta-2}).$$ We first consider spatial derivatives, for which we prove the better bound $$\| \nabla_x u\|_{X^{s-1,\theta}} \lesssim \|u\|_{ X^{s,\theta}}$$ By Definition \[xst\] and Corollary \[tx\] it suffices to show that for functions $v$ localized at frequency $\lambda$ we have $$\| \nabla_x v\|_{X^{s-1,\theta}_{\lambda,d}} \lesssim \|v\|_{\tilde X^{s,\theta}_{\lambda,d}}$$ But this follows from the straightforward commutator bound $$\| [ \Box_{g_{< \sqrt\lambda}},\nabla] v\|_{L^2} \lesssim \lambda \|\nabla v\|_{L^2} \label{ccv}$$ Here we recall that $g^{00}=1$, therefore every term in the commutator has at least one spatial derivative. Next we consider time derivatives, where it suffices to show that for functions $v$ localized at frequency $\lambda$ we can write $v = v_1+ v_2$ where $v_1$, $v_2$ have the same frequency localization and $$\| \partial_t v_1\|_{X^{s-1,\theta}_{\lambda,d}} + \left(\frac{\lambda}{d}\right)^{1-\theta} \|\partial_t v_2\|_{ (L^2H^{s+\theta-1} \cap H^1 H^{s+\theta-2})} \lesssim \|v\|_{\tilde X^{s,\theta}_{\lambda,d}} \label{vud}$$ Roughly speaking $v_1$ accounts for the low modulation ($\lesssim \lambda$) part of $v$ while $v_2$ accounts for the high modulation part.
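This splitting can be motivated by a constant coefficient heuristic (offered only as motivation, not used in the proof): for the flat d’Alembertian, at spatial frequency $|\xi| \approx \lambda$ and modulation $d = \big||\tau| - |\xi|\big|$ the symbol of $(\Delta_{x,t})^{-1} \Box$ has size $$\frac{|\tau^2 - |\xi|^2|}{\tau^2 + |\xi|^2} \approx \min\Big\{1, \frac{d}{\lambda}\Big\}$$ so this operator essentially retains the modulations $\gtrsim \lambda$ and attenuates the lower ones by a factor of $d/\lambda$; this is precisely the role that $v_2$ is meant to play.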
We define $v_2$ as $$v_2 = (\Delta_{x,t})^{-1} \Box_{g_{< \sqrt\lambda}} v$$ This satisfies the bound $$\|\nabla^2 v_2\|_{L^2} \lesssim \|\Box_{g_{< \sqrt\lambda}} v\|_{L^2}$$ which implies both the $v_2$ bound in and an $H^2$ bound for $ v_1$ which gives the correct $L^2$ bound for ${\partial}_t v_1$, $$\lambda^{s-1} d^\theta \| \partial_t v_1\|_{L^2} + \left(\frac{\lambda}{d}\right)^{1-\theta} \|\partial_t v_2\|_{ (L^2H^{s+\theta-1} \cap H^1 H^{s+\theta-2})} \lesssim \|v\|_{\tilde X^{s,\theta}_{\lambda,d}}$$ It remains to estimate $ \Box_{g_{< \sqrt\lambda}} \partial_t v_1$. We have $$\| \Box_{g_{< \sqrt\lambda}} \partial_t v_1 \|_{L^2} \leq \| [ \Box_{g_{< \sqrt\lambda}},\partial_t] v_1\|_{L^2} + \| \partial_t \Box_{g_{< \sqrt\lambda}} v_1 \|_{L^2}$$ For the first term we use again . For the second we compute $$\Box_{g_{< \sqrt\lambda}} v_1 = (-\Box_{g_{< \sqrt\lambda}} +\Delta_{x,t}) (\Delta_{x,t})^{-1} \Box_{g_{< \sqrt\lambda}} v$$ Since the difference $\Box_{g_{< \sqrt\lambda}} -\Delta_{x,t}$ contains no second order time derivatives this yields the bound $$\| \partial_t \Box_{g_{< \sqrt\lambda}} v_1\|_{L^2} \lesssim \lambda \|\Box_{g_{< \sqrt\lambda}} v\|_{L^2}$$ This allows us to conclude the proof of and therefore the proof of the lemma. We now return to the estimate . Using the duality in , reduces to $$\begin{split} \left|\int uvw dx dt\right| \lesssim & \|u\|_{\tilde X^{s-1,\theta} } \|v\|_{\tilde X^{s-1,\theta}} \|w\|_{X^{1-s,1-\theta}+L^2 H^{2-s-\theta}} \end{split} \label{tri}$$ We do a trilinear Littlewood-Paley decomposition. Due to symmetry, we need to consider two cases. [**Case 1**]{}. Here we consider high-high-low interactions and bound $$I = \int S_{\lambda}u\, S_{\lambda}v\, S_\mu w \,dx dt, \qquad \mu \lesssim {\lambda}$$ We have $$|I| \lesssim \|S_{\lambda}u\|_{L^\infty L^2} \|S_{\lambda}v\|_{L^2 L^6} \|S_\mu w\|_{L^2 L^3}$$ which by the embeddings in Corollary \[sob\] give $$|I| \lesssim {\lambda}^{\frac56-2s+2} \mu^{s+\theta - \frac43} \|u\|_{\tilde X^{s-1,\theta} } \|v\|_{\tilde X^{s-1,\theta}} \|w\|_{X^{1-s,1-\theta}+L^2 H^{2-s-\theta}}$$ This suffices since both the exponent of ${\lambda}$ and the sum of the two exponents are negative. [**Case 2**]{}. Here we consider high-low-high interactions and seek to bound $$I = \int S_{\lambda}u \,S_\mu v\, S_{\lambda}w \,dx dt, \qquad \mu \ll {\lambda}$$ As a first simplification we dispense with the auxiliary $L^2$ norms. Begin with $$\begin{split} |I| &\lesssim \|S_{\lambda}u\|_{L^2} \|S_\mu v\|_{L^\infty} \|S_{\lambda}w\|_{L^2} \\ &\lesssim {\lambda}^{s-1}\mu^{\frac34} \|S_{\lambda}u\|_{ L^2}\ \mu^{\frac94-s} \|v\|_{\tilde X^{s-1,\theta}}\ {\lambda}^{1-s}\|S_{\lambda}w\|_{L^2} \end{split}$$ This allows us to dispense not only with the $L^2H^{s+\theta-1}$ part of $u$, but also with its $X^{s-1,\theta}_{{\lambda},>\mu}$ component. 
If $v \in L^2H^{s+\theta-1} \cap H^1 H^{s+\theta-2}$ then we bound $$\begin{split} |I| & \lesssim \|S_{\lambda}u\|_{L^\infty L^2} \|S_\mu v\|_{L^2 L^\infty} \|S_{\lambda}w\|_{L^2} \\ & \lesssim \mu^{3-s-\theta} \|u\|_{\tilde X^{s-1,\theta} } \ \|v\|_{L^2H^{s+\theta-1}}\ {\lambda}^{1-s}\|S_{\lambda}w\|_{L^2} \end{split}$$ Finally, if $w \in L^2 H^{2-s-\theta}$ then we can also estimate $$\begin{split} |I| &\leq \|S_{\lambda}u\|_{L^\infty L^2} \|S_\mu v\|_{L^2 L^\infty} \|S_{\lambda}w\|_{L^2} \\ & \lesssim \mu^{\frac32-s+\theta} \|u\|_{\tilde X^{s-1,\theta} } \ \|v\|_{\tilde X^{s-1,\theta}}\ {\lambda}^{1-s} \mu^{1-\theta} \|S_{\lambda}w\|_{L^2 } \end{split}$$ which suffices for both the $L^2 H^{2-s-\theta}$ and the $X^{1-s,1-\theta}_{{\lambda},> \mu}$ components of $w$. Hence we have reduced to the bound $$|I| \lesssim \|S_{\lambda}u\|_{ X^{s-1,\theta}_{{\lambda},<\mu }} \|S_\mu v\|_{X^{s-1,\theta}} \|S_{\lambda}w\|_{X^{1-s,1-\theta}_{{\lambda},<\mu}} \qquad \mu \ll {\lambda}\label{tri1}$$ Unfortunately we cannot fully prove this using Strichartz type estimates. However, we can use scaling to simplify this further and reduce it to $$\left|\int S_{\lambda}u \,S_\mu v \,S_{\lambda}w dx dt\right| \lesssim \ln \mu \ \|u\|_{X^{0,1}_{{\lambda},1}} \|v\|_{X^{\frac54,1}_{\mu,1}} \|w\|_{X^{0,\frac14}_{{\lambda},d}} \qquad \mu \ll {\lambda}\label{finaltri}$$ For now we show that implies . The remaining sections of the paper are devoted to the proof of . After cancelling the powers of the high frequency the estimate follows after summation with respect to $1 \leq d_1, d_2, d_3 \leq \mu$ from the bounds $$\left|\int S_{\lambda}u \,S_\mu v \,S_{\lambda}w dx dt\right| \lesssim \ln \mu \ d_{min}^{\frac12} d_{mid}^\frac12 d_{max}^\frac14 \| u\|_{ X^{0,0}_{{\lambda},d_1}} \|v\|_{ X^{\frac54,0}_{\mu,d_2}} \| w\|_{X^{0,0}_{{\lambda},d_3}} \label{casea}$$ if $d_2 < d_{max}$, respectively $$\left|\int S_{\lambda}u \,S_\mu v \,S_{\lambda}w dx dt\right| \lesssim \ln{\mu}\ d_{min}^{\frac12} d_{max}^\frac34 \|u\|_{ X^{0,0}_{{\lambda},d_1}} \| v\|_{ X^{\frac54,0}_{\mu,d_2}} \| w\|_{X^{0,0}_{{\lambda},d_3}} \label{caseb}$$ if $d_2 = d_{max}$. To reduce all these cases to we use scaling combined with a time decomposition argument. Precisely, for $1 < d < \lambda$ we consider a smooth partition of unity in time with respect to time intervals of length $d^{-1}$, $$1 = \sum \chi_d^j(t)$$ Then a simple commutation argument shows that we can localize the $\tilde X^{s,\theta}_{\lambda,d}$ norm to the $d^{-1}$ time intervals while retaining square summability, $$\| u\|_{ \tilde X^{s,\theta}_{\lambda,d}}^2 \approx \sum_j \| \chi_d^j u\|_{ \tilde X^{s,\theta}_{\lambda,d}}^2 \label{timedec}$$ We use such time decompositions in order to carry out the following three reduction steps: \(i) Reduction to $d_{min}=1$. By all three norms are square summable with respect to time intervals of length $d_{min}^{-1}$. Hence it suffices to prove the bounds on $d_{min}^{-1}$ time intervals. Rescaling such time intervals back to time $1$ we arrive at the case $d_{min}=1$. The regularity of the coefficients improves after the rescaling, here and below. Also we note that by Duhamel’s formula we can replace the factor corresponding to $d_{min}$ by a solution to the homogeneous equation. \(ii) Reduction to $d_{mid} =1$. By the norms corresponding to $d_{max}$ and $d_{mid}$ are square summable with respect to time intervals of length $d_{mid}^{-1}$. Hence it suffices to prove the bounds on $d_{mid}^{-1}$ time intervals. 
Rescaling such time intervals back to time $1$ we arrive at the case $d_{mid}=1$. Again by Duhamel’s formula we also replace the factor corresponding to $d_{mid}$ by a solution to the homogeneous equation. \(iii) Here we are in the case where two of the factors are solutions for the homogeneous equation. In the case of the remaining factor is at high frequency $\lambda$; then we use directly . In the case of the remaining factor is at low frequency $\mu$, so we need to prove that $$\left|\int S_{\lambda}u \,S_\mu v \,S_{\lambda}w dx dt\right| \lesssim \ln{\mu}\ d^\frac34 \|u\|_{ X^{0,0}_{{\lambda},1}} \| v\|_{ X^{\frac54,0}_{\mu,d}} \| w\|_{X^{0,0}_{{\lambda},1}}$$ Partitioning the unit time into about $d$ time intervals of length $d^{-1}$ this would follow from $$\left|\int \chi_d^i S_{\lambda}u \,S_\mu v \,S_{\lambda}w dx dt\right| \lesssim \ln{\mu}\ d^\frac14 \|u\|_{ X^{0,0}_{{\lambda},1}} \| v\|_{ X^{\frac54,0}_{\mu,d}} \| w\|_{X^{0,0}_{{\lambda},1}}$$ Rescaling the small time intervals to unit size this becomes exactly . Half-waves and angular localization operators ============================================= We write the symbol for $\Box_g$, $$p(t,x,\tau,\xi) = \tau^2 - 2 g^{0j} \tau \xi_j - g^{ij} \xi_i \xi_j$$ in the form $$p(t,x,\tau,\xi) = (\tau + a^+(t,x,\xi)) (\tau + a^-(t,x,\xi))$$ This leads to a decomposition of solutions to the wave equation into two half-waves: (Geba-Tataru [@MR2153517]) Let $u$ be a solution to the inhomogeneous equation for $\Box_g$. Then there is a representation $$\nabla u = u^+ + u^-$$ where $$\begin{split} \|u^+\|_{L^2} + \|(D_t+A^+(t,x,D)) u^+\|_{L^2} &+ \|u^-\|_{L^2} + \|(D_t+A^-(t,x,D)) u^-\|_{L^2} \\ & \lesssim \|u\|_{H^1} +\|\Box_g u\|_{L^2} \end{split}$$ As a consequence, in we are allowed to replace solutions to the $\Box_g$ equation by solutions to the $D_t+A^+$, respectively $D_t+A^-$ equation. We also denote $$\aligned \|u\|_{X_\pm}\,=\,\|u\|_{L^2} &+ \|(D_t+A^\pm(t,x,D)) u\|_{L^2}\\ \|u\|_{X_{\pm,d}}\,=\,d^\frac14\|u\|_{L^2} &+ d^{-\frac34}\|(D_t+A^\pm(t,x,D)) u\|_{L^2} \endaligned$$ In order to facilitate the use of microlocal analysis tools it is convenient to replace the symbols $a^\pm$ with mollified versions $a^{\pm}_{<\mu}$ defined by $$a^{\pm}_{<\mu}(t,x,\xi) = S_{<\mu}(D_x) a^{\pm}(t,x,\xi)$$ Given an angular scale $\alpha$ we consider the $\pm$ Hamilton flows for $D_t+A^{\pm}_{<\alpha^{-1}}$. $$\left\{ \begin{array}{c} \frac{d}{dt} x_t^\pm = {\partial}_\xi a^\pm_{<\alpha^{-1}}(t,x_t^\pm,\xi_t^\pm) \cr \cr \frac{d}{dt} \xi_t^\pm = -{\partial}_x a^\pm_{<\alpha^{-1}}(t,x_t^\pm,\xi_t^\pm) \end{array}\right . \qquad \left\{ \begin{array}{c} x_0^\pm=x \cr \cr \xi_0^\pm = \xi\end{array} \right . \label{hf}$$ These are bilipschitz flows, homogeneous with respect to the $\xi$ variable. The angular scale is relevant in that the Hamilton flow for $D_t+A^{\pm}_{<\alpha^{-1}}$ serves as a good approximation to the Hamilton flow for $D_t+A^{\pm}$ up to an $O(\alpha)$ angular difference. To characterize the higher regularity properties of these flows it is convenient to introduce (see [@TG]) a metric $g_\alpha$ in the phase space, defined by $$ds^2 = |\xi|^{-4} (\xi d\xi)^2 + |\xi|^{-4} \alpha^{-2} (\xi \wedge d\xi)^2 + \alpha^{-4} |\xi|^{-2} (\xi dx)^2 + |\xi|^{-2} \alpha^{-2} (\xi \wedge dx)^2$$ Then as in [@TG] we obtain: The Hamilton flow maps $(x_t^\pm, \xi_t^\pm)$ are $g_\alpha$-smooth canonical transformations.
\[flowreg\] Given a direction $\theta \in S^{n-1}$ at time $t=0$ we introduce the size $\alpha$ sectors $$S_\alpha(\theta) = \{ \xi; \ \angle(\xi,\theta) < \alpha\}$$ $$\tilde S_\alpha(\theta) = \{ \xi; \ C \alpha < \angle(\xi,\theta) < 2 C\alpha\}$$ where $C$ is a fixed large constant. The images of ${\mathbb R}^n \times S_\alpha(\theta)$, respectively ${\mathbb R}^n \times \tilde S_\alpha(\theta)$ along the Hamilton flow for $D_t+A^{\pm}_{<\alpha^{-1}}$ are denoted by $H_\alpha^\pm S_\alpha(\theta)$, respectively $H_\alpha^\pm \tilde S_\alpha(\theta)$. Let $\xi_\theta^\alpha =\xi_\theta^\alpha(x,t)$ be the Fourier variable which is defined by the $D_t +A^{+}_{< \alpha^{-1}}$ Hamilton flow with initial data $\xi_\theta^\alpha(x,0)=\theta$ (i.e. $\xi_\theta^\alpha(x,t)=\xi^+_t(t)$ is the solution of the flow with initial data $\xi^+_0=\theta$, for which $x^+_t(t)=x$). This is well defined at least for a short time, precisely for as long as caustics do not occur. From Lemma \[flowreg\] one also sees that $\xi_\theta^\alpha$ is a $g_\alpha$-smooth function of $x$. We consider a maximal set $O_\alpha$ of $\alpha$-separated directions and a partition of unity at time $0$ $$1 =\sum_{\theta \in O_\alpha} \chi^{\pm,\alpha}_\theta(0,x,\xi)$$ consisting of $0$-homogeneous symbols supported in $S_\alpha(\theta)$ which are smooth on the corresponding scale. Transporting these symbols along the $\pm$ Hamilton flows by $$\chi^{\pm,\alpha}_\theta(0,x,\xi)\,=\,\chi^{\pm,\alpha}_\theta(t,x_t^\pm,\xi_t^\pm)$$ produces a time dependent partition of unity $$1 =\sum_{\theta \in O_\alpha} \chi^{\pm,\alpha}_\theta(t,x,\xi) \label{linpart}$$ so that the support of $\chi^{\pm,\alpha}_\theta(t,x,\xi)$ is contained in $H_\alpha^\pm S_\alpha(\theta)$. The regularity of these symbols is easily obtained from the transport equations (see again [@TG]): The symbols $\chi^{\pm,\alpha}_\theta(t,x,\xi)$ belong to the class $S(1,g_{\alpha})$[^1]. \[wpsymbols\] We use the above partition of unity in the phase space to produce a corresponding pseudodifferential partition of unity. Given a frequency $\lambda > \alpha^{-2}$ we define the symbols $$\chi^{\pm,\alpha}_{\theta,\lambda}(t,x,\xi) = S_{< \lambda /8}(D_x) \chi^{\pm,\alpha}_{\theta}(t,x,\xi) \tilde s_\lambda(\xi)$$ These are used in order to split general frequency localized waves into square summable superpositions of directionally localized waves, $$S_\lambda u = \sum_{\theta \in O_\alpha}\chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D) S_\lambda u$$ This decomposition is closely related to a wave packet decomposition, see [@MR1644105], [@MR2178963], [@MR2153517], and [@TG]. The difference is that here we skip the spatial localization part since it brings no additional benefit. The above localization at spatial frequencies less than $\lambda/8$ insures that the output of the operators $\chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D) S_\lambda$ is still localized at frequency $\lambda$. This localization is otherwise harmless: The symbols $\chi^{\pm,\alpha}_{\theta,\lambda} (t,x,\xi)$ belong to the class $S(1,g_{\alpha})$. In addition, we have similar bounds for the Poisson bracket $$\{ \tau+a^\pm_{<\alpha^{-1}} (t,x,\xi), \chi^{\pm,\alpha}_{\theta,\lambda} (t,x,\xi)\} \in S(1,g_{\alpha}) \label{poib}$$ \[wpsymbols\] The fact that $\chi^{\pm,\alpha}_{\theta,\lambda} (t,x,\xi) \in S(1,g_{\alpha})$ is straightforward since the multiplier $S_{<\lambda/8}$ is a mollifier on the $\lambda^{-1}$ spatial scale, which is less than the spatial scale of the $g_\alpha$ balls.
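To compare the two scales concretely: the spatial $g_\alpha$ balls have dimensions $\alpha^2 \times \alpha^{n-1}$ (see the description below), and since we only work with frequencies $\lambda > \alpha^{-2}$ we have $\lambda^{-1} < \alpha^2$, so mollification at scale $\lambda^{-1}$ is finer than both spatial scales of the balls and does not affect any of the $S(1,g_\alpha)$ bounds.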
Since $\chi^{\pm,\alpha}_{\theta}$ is transported along the $a^\pm_{<\alpha^{-1}} (t,x,\xi)$ flow, the Poisson bracket is expressed in the form $$\{ a^\pm_{<\alpha^{-1}} (t,x,\xi), \tilde s_\lambda(\xi)\} \chi^{\pm,\alpha}_{\theta,\lambda} (t,x,\xi) + \tilde s_\lambda(\xi) [ H_{a^\pm_{<\alpha^{-1}}},S_{<\lambda/8}(D_x)] \chi^{\pm,\alpha}_{\theta}(t,x,\xi)$$ Here $H_{a^\pm_{<\alpha^{-1}}}$ is the Hamiltonian operator associated to the $a^\pm_{<\alpha^{-1}} (t,x,\xi)$ flow. It is easy to see that the first term belongs to $S(1,g_\alpha)$, therefore it remains to consider the commutator term. We have $$[ H_{a^\pm_{<\alpha^{-1}}},S_{<\lambda/8}(D_x)] = [\partial_\xi a^\pm_{<\alpha^{-1}}, S_{<\lambda/8}(D_x)] \partial_x - [\partial_x a^\pm_{<\alpha^{-1}}, S_{<\lambda/8}(D_x)] \partial_\xi$$ The commutator of a scalar function $g$ with $S_{< \lambda/8}$ can be expressed as a rapidly convergent series of the form $$[g,S_{< \lambda/8}] = \lambda^{-1} \sum_j S_{< \lambda/8}^{1,j} \nabla g S_{< \lambda/8}^{2,j}$$ where the multipliers $S_{< \lambda/8}^{1,j}$ and $S_{< \lambda/8}^{2,j}$ have the same properties as $S_{<\lambda/8}$ and decay rapidly with respect to $j$. Then the above commutator term is expressed as $$[ H_{a^\pm_{<\alpha^{-1}}},S_{<\lambda/8}(D_x)] = \lambda^{-1} \sum_j S_{< \lambda/8}^{1,j} \left( \partial_x \partial_\xi a^\pm_{<\alpha^{-1}} \partial_x - \partial_x^2 a^\pm_{<\alpha^{-1}}\partial_\xi \right) S_{<\lambda/8}^{j,2}$$ At this stage the effect of the mollifiers is negligible and we can use the regularity properties of $a^\pm$ and $\chi^{\pm,\alpha}_{\theta}$ to directly compute $$\tilde s_\lambda(\xi) [ H_{a^\pm_{<\alpha^{-1}}},S_{<\lambda/8}(D_x)] \chi^{\pm,\alpha}_{\theta}(t,x,\xi) \in S(\frac{1}{\alpha^2 \lambda},g_\alpha)$$ To better understand the phase space localization provided by $\chi^{\pm,\alpha}_{\theta,\lambda} $ consider some point $(x_0,t_0)$ and the corresponding center direction $\xi_\theta^\alpha(x_0,t_0)$. A spatial unit $g_\alpha$ ball $B_\theta^\alpha(x_0,t_0)$ centered at $(x_0,t_0)$ has dimensions[^2] $\alpha^2 \times \alpha^{n-1}$ with the long sides normal to $\xi_\theta^\alpha(x_0,t_0)$. Within the ball $B_\theta^\alpha(x_0,t_0)$, $\chi^{\pm,\alpha}_{\theta,\lambda} \tilde S_\lambda$ localizes frequencies to a sector of angle $\alpha$ centered at $\xi_\theta^\alpha(x_0,t_0)$. Thus the frequencies are localized to a radial rectangle centered at $\lambda \xi_\theta^\alpha(x_0,t_0)$ of size $\lambda \times (\alpha \lambda)^{n-1}$. In this picture, angle $\alpha$ wave packets correspond to a spatial localization on the scale of the above ball $B_\theta^\alpha(x_0,t_0)$, constructed along a fixed ray of the Hamilton flow. The $g_\alpha$ metric restricted to frequency $\lambda$ is slowly varying and temperate at frequencies [^3] $\lambda \geq\alpha^{-2}$, and in our analysis we will always be above this threshold. Hence there is a good pseudodifferential calculus for operators with $S(1,g_\alpha)$ symbols. 
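As a quick consistency check with the uncertainty principle (a heuristic only, using the ball and rectangle dimensions described above): the products of the dual dimensions, both along $\xi_\theta^\alpha(x_0,t_0)$ and transversally, are $$\alpha^2 \cdot \lambda \,=\, \alpha \cdot (\alpha \lambda) \,=\, \alpha^2 \lambda \,\geq\, 1 \quad \text{precisely when} \quad \lambda \geq \alpha^{-2}$$ and this common value $\alpha^2 \lambda$ is the inverse of the semiclassical parameter introduced next.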
The semiclassical parameter $h=h(\alpha,\lambda)$ in the $S(1,g_\alpha)$ calculus at frequency $\lambda$ is given by $$h(\alpha,\lambda) = (\alpha^2 \lambda)^{-1}$$ The $S(1,g_\alpha)$ symbols at frequency $\lambda$ satisfy the bounds $$\left|(\xi_\theta^{\alpha} \partial_x)^\sigma (\xi_\theta^{\alpha} \wedge \partial_x)^\beta \partial_\xi^\nu (\xi \partial_\xi)^\gamma q(t,x,\xi)\right| \lesssim \alpha^{-2\sigma -|\beta|} (\alpha \lambda)^{-\nu} \label{simbolga}$$ Due to the $L^2$ in time regularity of the second order derivatives of the coefficients we also introduce the space of symbols $L^2 S(1,g_\alpha)$ which at frequency $\lambda$ satisfy $$\left|(\xi_\theta^{\alpha} \partial_x)^\sigma (\xi_\theta^{\alpha} \wedge \partial_x)^\beta \partial_\xi^\nu (\xi \partial_\xi)^\gamma q(t,x,\xi)\right| \lesssim \alpha^{-2\sigma -|\beta|} (\alpha \lambda)^{-\nu} f(t) \label{l2simbolga}$$ for some $f \in L^2$. In all the operators we consider here, the function $f$ is the same: $$f(t) = M(\|\nabla^2 g(t)\|_{L^\infty}) \label{fM}$$ In some of our estimates we need to deal with two distinct scales at a given frequency $\lambda$, namely the angular scale $\alpha$ and the $\lambda^{\frac12}$ scale at which the coefficients are truncated. Correspondingly we introduce additional symbol classes $C^k_\lambda S(1,g_\alpha)$ of symbols $q$ localized at frequency $\lambda$ which satisfy the $S(1,g_\alpha)$ bounds for $\sigma+|\beta| \leq k$, respectively the weaker estimate $$\left|(\xi_\theta^{\alpha} \partial_x)^\sigma (\xi_\theta^{\alpha} \wedge \partial_x)^\beta \partial_\xi^\nu (\xi \partial_\xi)^\gamma q(t,x,\xi)\right| \lesssim (\alpha^{-2\sigma -|\beta|}+\alpha^{-k} \lambda^{\frac{\sigma+|\beta|}2}) (\alpha \lambda)^{-\nu} \label{simbolgal}$$ for $\sigma+|\beta| > k$. There is still a calculus for such symbols, since the above bounds are stronger than the $S(1,g_{\lambda^{\frac12}})$ bounds. The related classes of symbols $L^2 C^k_\lambda S(1,g_\alpha)$ are defined in a manner which is similar to . Using the calculus for the above symbol classes one can prove that the partition of unity in yields an almost orthogonal decomposition of functions, namely Fix a frequency ${\lambda}$ and let $\alpha > {\lambda}^{-\frac12}$. Then for each function $u$ which is localized at frequency $\lambda$ we have $$\sum_{\theta \in O_\alpha} \| \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D) u\|_{X_\pm}^2 \approx \| u\|_{X_\pm}^2 \label{linsumeq}$$ \[linsum\] We only outline the proof, since this result is essentially contained in [@MR2153517]. There are two bounds to prove. The first $$\sum_{\theta \in O_\alpha} \| \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D) u\|_{L^2}^2 \approx \| u\|_{L^2} ^2 \label{aort}$$ follows from the almost orthogonality of the operators $\chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)$. This in turn is due to the almost disjoint supports[^4] of $\chi^{\pm,\alpha}_{\theta,\lambda}$ and to the $S(1,g_{\alpha})$ calculus. 
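For the reader’s orientation, one standard mechanism behind such almost orthogonality (a sketch only; we do not claim this is the exact argument of [@MR2153517]): the symbols $\chi^{\pm,\alpha}_{\theta,\lambda}$ and $\chi^{\pm,\alpha}_{\theta',\lambda}$ have essentially disjoint supports once $\angle(\theta,\theta') \gg \alpha$, and the $S(1,g_\alpha)$ calculus upgrades this to off-diagonal operator bounds of the form $$\| \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)^* \, \chi^{\pm,\alpha}_{\theta',\lambda}(t,x,D) \|_{L^2 \to L^2} \lesssim_N \big(1 + \alpha^{-1} \angle(\theta,\theta')\big)^{-N}$$ for every $N$; combined with Cotlar–Stein type arguments and the fact that the symbols sum up to $\tilde s_\lambda(\xi)$, this yields the two-sided bound above.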
Consider now the second bound $$\sum_{\theta \in O_\alpha} \| (D_t + A^\pm) \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D) u\|_{L^2}^2 \approx \|(D_t + A^\pm) u\|_{L^2}^2 +O( \| u\|_{L^2}^2) \label{aortc}$$ We first establish it with $A^\pm$ replaced by $A^\pm_{< \lambda^{\frac12}}$, $$\sum_{\theta \in O_\alpha} \| (D_t + A^\pm_{< \lambda^{\frac12}}) \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D) u\|_{L^2}^2 \approx \|(D_t + A^\pm_{< \lambda^{\frac12}}) u\|_{L^2}^2 +O( \| u\|_{L^2}^2) \label{aortd}$$ Due to and the energy bound $$\| u\|_{L^\infty L^2}^2 \lesssim \| u\|_{L^2}^2 + \| u\|_{L^2}\|(D_t + A^\pm_{< \lambda^{\frac12}}) u\|_{L^2}$$ it suffices to prove the commutator estimate $$\sum_{\theta \in O_\alpha}\| [ D_t+A^\pm_{<\lambda^{\frac12}}, \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)] u\|_{L^2}^2 \lesssim \| u\|_{L^\infty L^2}^2 \label{aorte}$$ which we split into two components. For the low frequency part of the coefficients we use a second order commutator $$\sum_{\theta \in O_\alpha}\| [ D_t+A^\pm_{<\alpha^{-1}}, \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)] u\|_{L^2}^2 \lesssim \| u\|_{L^\infty L^2}^2 \label{lowcom}$$ For this it suffices to prove that $$[ D_t+A^\pm_{<\alpha^{-1}}, \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)] \in OP L^2 S(1,g_\alpha) \label{lowcoma}$$ The summation with respect to $\theta \in O_\alpha$ follows by orthogonality since the symbols for the above commutators will retain the rapid decay away from the support of $\chi^{\pm,\alpha}_{\theta,\lambda}$. Here it is important that applies uniformly. Due to the Poisson bracket bound in it suffices to show that $$[A^\pm_{<\alpha^{-1}}, \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)] + i \{a^\pm_{< \alpha^{-1}}, \chi^{\pm,\alpha}_{\theta,\lambda}\} (t,x,D) \in OP L^2 S(1,g_\alpha)$$ Due to the frequency localization of $\chi^{\pm,\alpha}_{\theta,\lambda}$, only the values of $a^\pm(t,x,\xi)$ in the region $|\xi| \approx \lambda$ can affect the above operator. At this point it is no longer important that $a^\pm_{< \alpha^{-1}}$ and $\chi^{\pm,\alpha}_{\theta,\lambda}$ are related. We consider a rapidly convergent spherical harmonics expansion of $a^{\pm}$, $$a^{\pm}(t,x,\xi) = \sum_j b_j (t,x) \phi_j(\xi)$$ where $b_j$ have the same regularity as the coefficients $g^{ij}$ while $ \phi_j(\xi) $ are homogeneous of order $1$. It suffices to consider a single term $b(t,x) \phi(\xi)$ in this expansion and show that $$[b_{<\alpha^{-1}} \phi(D), \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)] + i \{b_{<\alpha^{-1}} \phi , \chi^{\pm,\alpha}_{\theta,\lambda}\} (t,x,D) \in OPL^2S(1,g_\alpha) \label{scom}$$ To see this we consider the commutators with $b$ and with $\phi$. The commutator term with $b$ has the form $$C_b = ([b_{<\alpha^{-1}} , \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)] + i \{b_{<\alpha^{-1}},\chi^{\pm,\alpha}_{\theta,\lambda}\} (t,x,D))\phi(D)$$ Since $\partial_x^2 b_{<\alpha^{-1}} \in L^2S(1,g_\alpha)$, $\partial_\xi^2 \chi^{\pm,\alpha}_{\theta,\lambda} \in S(\alpha^{-2} \lambda^{-2},g_\alpha)$ and $\phi \in S(\lambda,g_\alpha)$, the $S(g_\alpha)$ calculus at frequency $\lambda$ yields the better result $C_b \in OPL^2S(\alpha^{-2} \lambda^{-1},g_\alpha)$, which is tight only when $\alpha= \lambda^{-\frac12}$. The commutator term with $\phi$ has the form $$C_\phi = b_{<\alpha^{-1}} ([\phi(D), \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)] + i \{\phi , \chi^{\pm,\alpha}_{\theta,\lambda}\} (t,x,D))$$ The $b_{<\alpha^{-1}}$ factor belongs to $S(1,g_\alpha)$ and can be neglected. 
The argument for the remaining part is somewhat more delicate since it hinges on the homogeneity of $\phi$. With $b = 1$ denote by $\xi$ the input frequency for $C_\phi$ and by $\eta$ the output frequency. Due to the homogeneity of $\phi$ we have the representation $$\phi(\eta) - \phi(\xi) = (\eta-\xi) \nabla \phi(\xi) + \psi(\xi,\eta)(\xi \wedge (\xi-\eta))^2 \label{phixieta}$$ where $\psi$ is a smooth and homogeneous of order $-3$ matrix valued function. For $|\xi|,|\eta| \approx \lambda$ we can separate variables in $\psi$ and express it as a rapidly convergent series $$\psi(\xi,\eta) = \lambda^{-3}\sum_{j} \psi^1_j(\eta) \psi^2_j(\xi)$$ This gives a representation for $C_\phi$ of the form $$C_\phi = \lambda^{-3}\sum_{j} \psi^1_j(D) ((\xi \wedge \partial_x)^2 \chi^{\pm,\alpha}_{\theta,\lambda})(t,x,D) \psi^2_j(D)$$ Since $\chi^{\pm,\alpha}_{\theta,\lambda}(x,D) \in S(1,g_\alpha)$ we obtain $(\xi \wedge \partial_x)^2 \chi^{\pm,\alpha}_{\theta,\lambda} \in S(\lambda^2 \alpha^{-2},g_\alpha)$ which shows that $C_\phi \in OPS(\alpha^{-2} \lambda^{-1},g_\alpha)$. This concludes the proof of and thus the proof of . For the intermediate frequency part of the coefficients we have a first order commutator estimate $$\sum_{\theta \in O_\alpha}\| [ A^\pm_{\alpha^{-1} < \cdot < \lambda^{\frac12} }, \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)] u\|_{L^2}^2 \lesssim \| u\|_{L^\infty L^2}^2 \label{incom}$$ Together with this implies . This follows from the first order commutator estimate $$[ A^\pm_{\alpha^{-1} < \cdot < \lambda^{\frac12} }, \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D)] \in OPL^2 C^1_\lambda S(1,g_\alpha) \label{incoma}$$ Indeed, for a scalar function $b$ we can estimate $$\alpha^{-2} \| b_{\alpha^{-1} < \cdot < \lambda^{\frac12} } \|_{L^2 L^\infty} + \alpha^{-1} \| \partial_x b_{\alpha^{-1} < \cdot < \lambda^{\frac12} } \|_{L^2 L^\infty} \lesssim \|\partial^2 b\|_{L^2 L^\infty}$$ Applied to the symbol $a^\pm$ as a function of $x$ this shows that $$a^\pm_{\alpha^{-1} < \cdot < \lambda^{\frac12} }\in L^2C^2_\lambda S(\alpha^2 \lambda,g_\alpha)$$ Since $\chi^{\pm,\alpha}_{\theta,\lambda} \in S(1,g_\alpha)$, the estimate follows by pdo calculus. The square summability with respect to $\theta$ is again due to the almost disjoint supports of the symbols $\chi^{\pm,\alpha}_\theta$. It remains to pass from to . Due to the energy bound $$\| u\|_{L^\infty L^2}^2 \lesssim \| u\|_{L^2}^2 + \| u\|_{L^2}\|(D_t + A^\pm_{< \lambda^{\frac12}}) u\|_{L^2}$$ this is a consequence of the estimate $$\|A^\pm_{> \lambda^{\frac12}} u\|_{L^2} \lesssim \| u\|_{L^\infty L^2}$$ applied to both $u$ and $\chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D) u$. Using the spherical harmonics decomposition of the symbols $a^{\pm}$ as above this reduces to the straightforward bound $$\| b_{> \lambda^{\frac12}} u\|_{L^2} \lesssim \lambda^{-1} \|\partial^2 b\|_{L^2 L^\infty} \|u\|_{L^\infty L^2}$$ The frequency localization in $\chi^{\pm ,\alpha} _{\theta,\lambda}$ contributes to improved Strichartz type estimates above the critical range of exponents. Begin for instance with the endpoint $L^2 L^6$ Strichartz estimate $$\|\chi^{\pm ,\alpha} _{\theta,\lambda}(t,x,D) u\|_{L^2 L^6} \lesssim {\lambda}^\frac56 \| u\|_{X_\pm} \label{specher}$$ Here the angular frequency localization plays no role. However, suppose we want to use Bernstein’s inequality to replace this by an $L^2 L^\infty$ estimate.
Modulo rapidly decaying tails, within each spatial $g_\alpha$ ball $B^\alpha_\theta(x_0,t_0)$ the function $ \chi^{\pm ,\alpha} _{\theta,\lambda} (t,x,D) u$ is frequency localized in a dyadic sector section of size $\lambda \times (\alpha \lambda)^3$. Then the constant in Bernstein’s inequality is $$[\lambda \times (\alpha \lambda)^3]^\frac16 = \lambda^\frac23 \alpha^\frac12$$ Hence we obtain the better $L^2 L^\infty$ bound $$\|\chi^{\pm ,\alpha} _{\theta,\lambda}(t,x,D) u\|_{L^2 L^\infty} \lesssim \alpha^\frac12 \lambda^\frac32 \| u\|_{X_\pm}, \qquad \alpha > \lambda^{-\frac12} \label{pecher}$$ A simpler related uniform bound is derived directly from the energy estimates, $$\|\chi^{\pm ,\alpha} _{\theta,\lambda}(t,x,D) u\|_{L^\infty} \lesssim \alpha^\frac32 \lambda^2 \| u\|_{X_\pm}, \qquad \alpha > \lambda^{-\frac12} \label{pechera}$$ A similar bound holds for the right hand side of the $\chi^{\pm ,\alpha} _{\theta,\lambda}(t,x,D) u$ equation. Indeed, for $u \in X_\pm$ we can write $$(D_t + A^\pm) \chi^{\pm ,\alpha}_{\theta,\lambda}(t,x,D) u = (D_t + A^\pm_{<{\lambda}^\frac12}) \chi^{\pm ,\alpha}_{\theta,\lambda}(t,x,D) u + A^\pm_{>{\lambda}^\frac12} \chi^{\pm ,\alpha}_{\theta,\lambda}(t,x,D) u$$ The first term belongs to $L^2$ and has a similar frequency localization as $\chi^{\pm ,\alpha}_{\theta,\lambda}(t,x,D) u$. The second is estimated directly using . This yields $$\|(D_t + A^\pm) \chi^{\pm ,\alpha} _{\theta,\lambda}(t,x,D) u\|_{L^2 L^\infty} \lesssim \alpha^\frac32 \lambda^2 \| u\|_{X_\pm}, \qquad \alpha > \lambda^{-\frac12} \label{pecherb}$$ Another way of taking advantage of the angular localization is in corresponding bounds for derivatives. Consider the differentiation operators $\xi_\theta^\alpha \wedge D$ whose symbol vanishes in the $\xi_\theta^\alpha$ direction. Then in the support of $\chi^{\pm ,\alpha} _{\theta,\lambda}$ these symbols have size $\alpha \lambda$. Hence from we also obtain $$\|(\xi_\theta^\alpha \wedge D)\chi^{\pm ,\alpha} _{\theta,\lambda}(t,x,D) u\|_{L^2 L^6} \lesssim (\alpha {\lambda}) {\lambda}^\frac56 \| u\|_{X_\pm} \label{spechera}$$ We can argue in the same way for the energy estimates or for the $L^2 L^\infty$ bound in . For convenience we collect several such bounds in a single norm, $$\begin{split} \| v\|_{X_{\pm}^{\lambda, \alpha, \theta}} =&\ \|v\|_{X_{\pm}} +\| v\|_{L^\infty L^2} + \lambda^{-\frac56} \| v\|_{L^2 L^6}+ \alpha^{-\frac12} \lambda^{-\frac32} \| v\|_{L^2 L^\infty} + \alpha^{-\frac32} \lambda^{-2} \|v\|_{L^\infty} \\ &\ + \alpha^{-\frac32} \lambda^{-2} \| (D_t + A^\pm) v\|_{L^2 L^\infty} + (\alpha \lambda)^{-1}\| (\xi_\theta^\alpha \wedge D) v\|_{L^\infty L^2} \\ &\ + (\alpha \lambda)^{-1} (\lambda^{-\frac56} \| (\xi_\theta^\alpha \wedge D) v\|_{L^2 L^6} + \alpha^{-\frac12} \lambda^{-\frac32} \| (\xi_\theta^\alpha \wedge D) v\|_{L^2 L^\infty}) \end{split}$$ and use it to state a corresponding version of , $$\sum_{\theta \in O_\alpha} \| \chi^{\pm,\alpha}_{\theta,\lambda}(t,x,D) u\|_{X_\pm^{\lambda,\alpha,\theta}}^2 \approx \|\tilde S_{\lambda}u\|_{X_\pm}^2 \label{limsumeqa}$$ We want to replace the partition of unity in first with a bilinear one and next with a trilinear one. Given two frequencies $\mu < \lambda$, we denote $\alpha_\mu= \mu^{-\frac12}$ and introduce a corresponding bilinear partition of unity which is useful when estimating the frequency $\mu$ output of the product of two frequency $\lambda$ waves. 
The main contribution corresponds to opposite frequencies $\xi$ and $\eta$, therefore we organize the following decomposition based on the dyadic angle $\alpha_\mu \leq \alpha \leq 1$ between $\xi$ and $-\eta$. Precisely, by superimposing the $\alpha$ angular decompositions for $\alpha$ in the above range we obtain $$\begin{split} & \tilde s_\lambda(\xi) \tilde s_\lambda(\eta)= \\ & \sum_{\theta_1,\theta_2\in O_{\alpha_\mu}}^{|\theta_1 + \theta_2| \leq 2C \alpha_\mu} \ \sum_{\theta_3,\theta_4 \in O_{2\alpha_\mu}}^{|\theta_3 +\theta_4| \leq 4C \alpha_\mu} \chi^{\pm, \alpha_\mu}_{\theta_1,\lambda}(t,x,\xi) \chi^{\mp, \alpha_\mu}_{\theta_2,\lambda}(t,x,\eta) \chi^{\pm,2 \alpha_\mu}_{\theta_3,\lambda}(t,x,\xi) \chi^{\mp,2 \alpha_\mu}_{\theta_4,\lambda}(t,x,\eta) \\ &+ \!\! \sum_{\alpha=\alpha_\mu}^1 \!\! \sum_{\theta_1,\theta_2 \in O_\alpha}^{C \alpha \leq |\theta_1+\theta_2| \leq 2 C \alpha} \ \sum_{\theta_3,\theta_4 \in O_{2\alpha}}^{|\theta_3+\theta_4| \leq 4C \alpha} \!\! \chi^{\pm,\alpha}_{\theta_1,\lambda}(t,x,\xi) \chi^{\mp,\alpha}_{\theta_2,\lambda}(t,x,\eta) \chi^{\pm,2\alpha}_{\theta_3,\lambda}(t,x,\xi) \chi^{\mp,2\alpha}_{\theta_4,\lambda}(t,x,\eta) \end{split}$$ To shorten this expression we redenote factors and harmlessly simplify the summation notations to $$1 = \sum_{\theta \in O_{\alpha_\mu}} \phi^{\pm,\alpha_\mu}_{\theta,\lambda}(t,x,\xi) \phi^{\mp,\alpha_\mu}_{-\theta,\lambda}(t,x,\eta) + \sum_{\alpha=\alpha_\mu}^1 \ \sum_{\theta \in O_\alpha} \phi^{\pm,\alpha}_{\theta,\lambda}(t,x,\xi) {{\tilde{\phi}}}^{\mp,\alpha}_{-\theta,\lambda}(t,x,\eta) \label{bilinsum}$$ where the tilde in $ {{\tilde{\phi}}}^{\pm,\alpha}_{\theta,\lambda}$ indicates an $O(C\alpha)$ angular separation from $\theta$. The symbols $\phi^{\pm,\alpha}_{\theta,\lambda}$, respectively ${{\tilde{\phi}}}^{\pm,\alpha}_{\theta,\lambda}$ retain the same properties as $\chi^{\pm,\alpha}_{\theta,\lambda}$, namely $$\phi^{\pm,\alpha}_{\theta,\lambda} \in S(1,g_\alpha), \qquad \{ \tau+a^\pm_{<\alpha^{-1}} (t,x,\xi), \phi^{\pm,\alpha}_{\theta,\lambda} (t,x,\xi)\} \in S(1,g_{\alpha})$$ and the same for ${{\tilde{\phi}}}^{\pm,\alpha}_{\theta,\lambda}$. In particular the counterpart of is still valid, $$\sum_{\theta \in O_\alpha} \| \phi^{\pm,\alpha}_{\theta,\lambda}(t,x,D) u\|_{X_\pm^{\lambda,\alpha,\theta}}^2 + \| {{\tilde{\phi}}}^{\pm,\alpha}_{\theta,\lambda}(t,x,D) u\|_{X_\pm^{\lambda,\alpha,\theta}}^2\approx \|\tilde S_{\lambda}u\|_{X_\pm}^2 \label{limsumeqb}$$ Finally, we arrive at the main trilinear symbol decomposition. Its aim is to achieve a simultaneous angular decomposition in trilinear expressions of the form $$\int u v w dx dt$$ We denote the three corresponding frequencies by $\xi, \eta$ and $\zeta$. We assume that each of the factors has a dyadic frequency localization, $$|\xi| \approx |\eta| \approx \lambda, \qquad |\zeta| \approx \mu, \qquad 1 \ll \mu \leq \lambda$$ If the trilinear decomposition were translation invariant then only its structure on the diagonal $\xi+\eta+\zeta=0$ is relevant. However, in our case we are working with variable coefficient operators therefore a neighborhood of the diagonal is relevant. The size of this neighborhood is determined by the spatial regularity of the symbols via the uncertainty principle. 
Corresponding to the first term in we consider a decomposition in $\zeta$ with respect to the dyadic angle between $\zeta$ and $\theta$, $$\tilde s_\mu(\zeta) = \phi^{\pm,\alpha_\mu}_{\theta,\mu} (t,x,\zeta) + \sum_{\alpha > \alpha_\mu} {{\tilde{\phi}}}^{\pm,\alpha}_{\theta,\mu} (t,x,\zeta)$$ To understand the $\zeta$ decomposition corresponding to the second term in we first identify the location of the diagonal $\xi+\eta+\zeta = 0$. Given the above dyadic localization of $\xi,\eta$ and $\zeta$, if the angle between $\xi$ and $-\eta$ is of order $\alpha$, then the angle between $\xi$ and $\pm \zeta$ must be of order $\alpha \lambda \mu^{-1}$ which is larger than $\alpha$. Thus the interesting angular separation threshold for $\zeta$ is $\alpha \lambda \mu^{-1}$. It would appear that there are two cases to consider, namely when the angle between $\xi$ and $\zeta$ is small, and when the angle between $-\xi$ and $\zeta$ is small. However, due to our choice of the $\pm$ signs corresponding to $\xi$, $\eta$ and $\zeta$, the latter case leads to nonresonant wave interactions and loses its relevance. Hence, the significant dyadic parameter here is the angle between $\xi$ and $\zeta$, and the $\zeta$ decomposition has the form $$\tilde s_\mu(\zeta) = \phi^{\pm,\alpha \mu^{-1} \lambda}_{\theta,\mu} (t,x,\zeta) + {{\tilde{\phi}}}^{\pm,\alpha \mu^{-1} \lambda}_{\theta,\mu} (t,x,\zeta) + \sum_{\beta >\alpha \mu^{-1} \lambda } {{\tilde{\phi}}}^{\pm,\beta}_{\theta,\mu} (t,x,\zeta)$$ Then the full trilinear decomposition has the form $$\begin{split} \tilde s_{\lambda}(\xi) \tilde s_\lambda(\eta) \tilde s_\mu(\zeta) = & \sum_{\theta \in O_{\alpha_\mu}} \phi^{\pm,\alpha_\mu}_{\theta,\lambda}(t,x,\xi) \phi^{\mp,\alpha_\mu}_{-\theta,\lambda}(t,x,\eta) \phi^{\pm,\alpha_\mu}_{\theta,\mu}(t,x,\zeta) \\ +& \sum_{\theta \in O_{\alpha_\mu}} \phi^{\pm,\alpha_\mu}_{\theta,\lambda}(t,x,\xi) \phi^{\mp,\alpha_\mu}_{-\theta,\lambda}(t,x,\eta) \sum_{\alpha > \alpha_\mu} \tilde \phi^{\pm,\alpha}_{\theta,\mu}(t,x,\zeta) \\ +& \sum_{ \alpha > \alpha_\mu }\sum_{\theta \in O_{\alpha}} \phi^{\pm,\alpha}_{\theta,\lambda}(t,x,\xi) \tilde \phi^{\mp,\alpha}_{-\theta,\lambda}(t,x,\eta) \tilde \phi^{\pm,\alpha \mu^{-1} \lambda}_{\theta,\mu}(t,x,\zeta) \\ +& \sum_{ \alpha > \alpha_\mu }\sum_{\theta \in O_{\alpha}} \phi^{\pm,\alpha}_{\theta,\lambda}(t,x,\xi) \tilde \phi^{\mp,\alpha}_{-\theta,\lambda}(t,x,\eta) \phi^{\pm,\alpha \mu^{-1} \lambda}_{\theta,\mu}(t,x,\zeta) \\ +& \sum_{ \alpha > \alpha_\mu }\sum_{\theta \in O_{\alpha}} \phi^{\pm,\alpha}_{\theta,\lambda}(t,x,\xi) \tilde \phi^{\mp,\alpha}_{-\theta,\lambda}(t,x,\eta) \sum_{\beta > \alpha \mu^{-1} \lambda} \tilde\phi^{\pm,\beta}_{\theta,\mu}(t,x,\zeta) \end{split} \label{trilindec}$$ In the above sum the first three terms are the main ones, as they account for the behavior near the diagonal. The remaining terms have off diagonal support, and their contribution to trilinear forms as above is negligible. Proof of the trilinear estimate  ================================ As noted in the previous section, we can replace the spaces $X^{s,\theta}_{{\lambda},d}$ in with the $X_{\pm}$ spaces. Hence we restate in the form For any choice of the $\pm$ signs and $1 < d < \mu \ll \lambda$ we have $$\left|\int S_{\lambda}u\, S_{\lambda}v\, S_\mu w dx dt \right | \lesssim \ln \mu \cdot \mu^\frac54\|S_\lambda u\|_{X_{\pm}} \|S_\lambda v\|_{X_{\pm,d}} \|S_\mu w\|_{X_\pm} \label{lastone}$$ We begin with several simple observations. 
First, by localizing to a fixed smaller space-time scale and rescaling back to unit scale we can insure that the coefficients $g^{ij}$ vary slowly inside a unit cube, $$|\nabla_{x,t} g^{ij}| \ll 1$$ This in turn insures that the Fourier variable does not vary much along the Hamilton flow, $$|\xi_\theta^\alpha - \theta| \ll 1$$ We can also localize all factors in frequency to angular regions of small size, say $< \frac{1}{20}$. The corresponding localization multipliers are easily seen to be bounded in $X_{\pm}$ and $X_{\pm,d}$. If the first two $\pm$ signs are identical then the product $S_{\lambda}u\, S_{\lambda}v$ is concentrated at a time frequency of the order of ${\lambda}$ which makes it almost orthogonal to $S_\mu w$, hence the estimate above is much easier. Therefore without any restriction in generality we fix the first sign to $+$ and the second one to $-$. Even though the problem is not symmetric with respect to the first two factors, the sign in the third factor plays no role whatsoever, so we fix it to $+$. We denote $$a(t,x,\xi)\,=\,a^+(t,x,\xi)$$ Then $$a^-(t,x,\xi)\,=\,-a(t,x,-\xi)$$ We note that, for the purpose of the above estimates, in the definition of $X_{\pm}$ at frequency $\lambda$ we can replace the symbols $a(x,\xi)$ with their regularized versions, namely $a_{<{\lambda}^\frac12}(x,\xi)$. To keep the number of parameters small we first present the argument in the case when $d=1$. Once this is done, we show what changes are necessary for $d > 1$. [**Case 1**]{}: $d = 1$. Corresponding to the trilinear symbol decomposition of the identity we consider the associated pseudodifferential decomposition of the trilinear expression in . Then we estimate each of the five terms. We remark that, since $S_{\lambda}u$, $S_{\lambda}v$ and $S_\mu w$ are frequency localized in a small angle, so are all the factors in . [**Case 1, term I:**]{} $$I = \sum_{\theta \in O_{\alpha_\mu}} \int \phi^{+,\alpha_\mu}_{\theta,\lambda}(t,x,D) S_{\lambda}u\ \phi^{-,\alpha_\mu}_{-\theta,\lambda}(t,x,D) S_{\lambda}v\ \phi^{+,\alpha_\mu}_{\theta,\mu}(t,x,D) S_\mu w \,dx dt$$ We use the energy estimate for the first two factors and the $L^2 L^\infty$ bound for the third to obtain $$|I| \lesssim \mu^{\frac54} \| \phi^{+,\alpha_\mu}_{\theta,\lambda}(x,D) S_{\lambda}u\|_{X^{\lambda,\alpha_\mu,\theta}_{+}} \| \phi^{-,\alpha_\mu}_{-\theta,\lambda}(x,D) S_{\lambda}v\|_{X^{\lambda,\alpha_\mu,\theta}_{-}} \| \phi^{+,\alpha_\mu}_{\theta,\mu}(t,x,D) S_\mu w \|_{X^{\mu,\alpha_\mu,\theta}_{+}}$$ The summation with respect to $\theta$ is straightforward due to . [**Case 1, term II:**]{} This is the most difficult term, $$II = \sum_{\theta \in O_{\alpha_\mu}} \int \phi^{+,\alpha_\mu}_{\theta,\lambda}(t,x,D) S_{\lambda}u\ \phi^{-,\alpha_\mu}_{-\theta,\lambda}(t,x,D)S_{\lambda}v \sum_{\alpha > \alpha_\mu} \tilde \phi^{+,\alpha}_{\theta,\mu}(t,x,D) S_\mu w \,dx dt$$ The summation with respect to $\theta$ is easily done using . Hence, in what follows, we fix $\theta$ and redenote $$u_\theta = \phi^{+,\alpha_\mu}_{\theta,\lambda}(t,x,D) S_{\lambda}u, \qquad v_\theta = \phi^{-,\alpha_\mu}_{-\theta,\lambda}(t,x,D) S_{\lambda}v, \qquad w_{\theta}^\alpha = {{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}(t,x,D)S_\mu w$$ The factors $u_\theta$ and $v_\theta$ are frequency localized in small angles around $\theta$, respectively $-\theta$; $w_{\theta}^\alpha$ has a similar localization around $\pm \theta$ provided that $\alpha \ll 1$. 
We denote by ${{\tilde{a}}}_{<\mu^\frac12} (t,x,\xi)$ the linearization of $a_{<\mu^\frac12}(t,x,\xi)$ with respect to $\xi$ around $\xi = \xi_\theta^{\alpha_\mu}(t,x)$. Since $a_{<\mu^\frac12}(t,x,\xi)$ is a homogeneous symbol of order $1$, we have $${{\tilde{a}}}_{<\mu^\frac12} (t,x,\xi) = \xi \partial_\xi a_{<\mu^\frac12}(t,x,\xi_\theta^{\alpha_\mu})$$ Consider now the difference $$e = a_{<\mu^\frac12}- {{\tilde{a}}}_{<\mu^\frac12}$$ It vanishes of second order on the half line ${\mathbb R}^+ \xi_\theta$. Due to the uniform (nonradial) convexity of the characteristic cone $\{ \tau + a_{<\mu^\frac12}(t,x,\xi)=0\}$, it follows that $e$ is nonzero when $\xi$ is not collinear with $\xi_\theta^{\alpha_\mu}$. Precisely, we can estimate it in terms of the angle $\angle(\xi,\xi_\theta^{\alpha_\mu})$ as $$e(t,x,\xi) \approx |\xi| |\angle(\xi,\xi_\theta^{\alpha_\mu})|^2$$ In particular in the support of the symbol ${{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}$ the above angle has size $\alpha$ and the frequency has size $\mu$. Hence[^5] $$e(t,x,\zeta) \approx \alpha^2 \mu, \qquad (t,x,\zeta) \in \text{supp } {{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}$$ Here it may help to think of the constant coefficient case where $\xi_\theta^{\alpha_\mu}=\theta$, while $a-{{\tilde{a}}}= |\xi| - \xi \theta$. We introduce a local inverse for $ e(t,x,\zeta)$ in the support of ${{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}$, namely $$l(t,x,\zeta) = \tilde{{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}(t,x,\zeta) e^{-1}(t,x,\zeta)$$ The cutoff symbol $ \tilde{{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}$ is similar to ${{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}$ but has a slightly larger support and equals $1$ in a neighbourhood of the support of ${{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}$. As defined, the operator $L(t,x,D)$ is not localized at frequency $\mu$. To remedy this we truncate its output in frequency and set $$\tilde L = \tilde S_\mu(D) L(t,x,D)$$ The properties of the operator $\tilde L$ are summarized in the following lemma. The operator $\tilde L$ satisfies the following estimates: a\) fixed time $L^p$ mapping properties: $$\| \tilde L \|_{L^p \to L^p} \lesssim \alpha^{-2} \mu^{-1}, \qquad 1 \leq p \leq \infty$$ b) fixed time approximate inverse of $A(t,x,D) - {{\tilde{A}}}(t,x,D)$: $$\|(A(t,x,D)- {{\tilde{A}}}(t,x,D)) \tilde L - \tilde{{\tilde{\phi}}}(t,x,D)\|_{L^p \to L^p} \lesssim \mu^{-\frac12} +\alpha^{-2} \mu^{-1}, \quad 1 \leq p \leq \infty$$ c) space-time $X_+$ mapping properties: $$\| \tilde L \|_{X_{+} \to X_+} \lesssim \alpha^{-2} \mu^{-1}$$ \[L\] We first compute the regularity of the symbol $e(t,x,\zeta)$ within the support of $l$. With respect to $\xi$ this is smooth and homogeneous, therefore we only have to keep track of the order of vanishing when $\xi$ is in the $\xi_\theta^{\alpha_\mu}$ direction. With respect to $x$ there is the dependence coming from the symbol $a$, as well as the dependence due to the $\xi_\theta^{\alpha_\mu}$ direction occurring in the linearization. 
Since $a$ is Lipschitz in $x$ and $\xi_\theta$ is Lipschitz in $x$ and smooth on the $\alpha_\mu$ scale, within the support of $l$ we obtain $$e \in C^1_\mu S(\alpha^2 \mu,g_\alpha) \label{amureg}$$ Combining this with the regularity of the symbol $\tilde{{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu} \in S(1,g_\alpha) $ we obtain the symbol regularity for $l$, $$l \in C^1_\mu S((\alpha^2 \mu)^{-1},g_\alpha) \label{amurega}$$ To prove part (a) of the Lemma we observe that for fixed $(t,x)$ the symbol $l(t,x,\xi)$ is a smooth bump function of size $(\alpha^2 \mu)^{-1}$ in a rectangle of size $ \mu \times (\alpha \mu)^{n-1}$ oriented in the $ \xi_\theta^{\alpha_\mu}$ direction. This implies that its kernel $K(t,x,y)$ is bounded by $(\alpha^2 \mu)^{-1}$ times an integrable bump function on the dual scale, $$|K(t,x,y)| \lesssim (\alpha^2 \mu)^{-1} \mu (\alpha \mu)^{n-1} (1 + \mu |\xi_\theta^{\alpha_\mu}(t,x)(x-y)| + \alpha \mu |\xi_\theta^{\alpha_\mu}(t,x)\wedge (x-y)|)^{-N}$$ This bound is symmetric; indeed, since $\xi_\theta^{\alpha_\mu}(t,x)$ is Lipschitz in $x$ we can replace it by $\xi_\theta^{\alpha_\mu}(t,y)$ in the above bound. Thus integrating we have $$\sup_x \int |K(t,x,y)| dy \lesssim (\alpha^2 \mu)^{-1}, \qquad \sup_y \int |K(t,x,y)| dx \lesssim (\alpha^2 \mu)^{-1}$$ The $L^p$ bounds for $L(t,x,D)$ and also for $\tilde L$ immediately follow. For later use in the proof we observe that within the support of $l$ we have $$|\xi_\theta^{\alpha_\mu}(t,x) \wedge \xi | \lesssim \alpha \mu$$ Then the same argument as above yields the additional bounds $$\| (\xi_\theta^{\alpha_\mu}(t,x) \wedge D)^\beta \tilde L u\|_{L^p} \lesssim (\alpha \mu)^{|\beta|} (\alpha^2 \mu)^{-1} \| u\|_{L^p} \label{wedga}$$ For part (b) we write $$(A(t,x,D)- {{\tilde{A}}}(t,x,D)) \tilde L - \tilde{{\tilde{\phi}}}(t,x,D) = R_1(t,x,D) + R_2(t,x,D)$$ where $$R_1(t,x,D)=E(t,x,D) \tilde S_\mu(D) L(t,x,D) - \tilde {{\tilde{\phi}}}(t,x,D),$$ respectively $$R_2(t,x,D)=(A_{>\mu^\frac12}(t,x,D)- {{\tilde{A}}}_{>\mu^\frac12}(t,x,D)) \tilde S_\mu(D) L(t,x,D),$$ The operator $R_1$ is localized at frequency $\mu$. The principal part cancels, and since $e \in C^1_\mu S(\alpha^2 \mu,g_\alpha)$ and $l \in C^1_\mu S((\alpha^2 \mu)^{-1},g_\alpha)$ by the pseudodifferential calculus it follows that $$R_1(t,x,D) \in C^0_\mu S((\alpha^2 \mu)^{-1},g_\alpha)$$ In addition, the symbol of $R_1$ decays rapidly away from the support of $\tilde{{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}$. Hence we obtain the same kernel and $L^p$ bounds as in the case of $L(t,x,D)$. Consider now the operator $R_2$. Since $a(t,x,\zeta)$ is Lipschitz in $x$ it follows that $|a_{>\mu^\frac12}(t,x,\zeta)| \lesssim \mu^{-\frac12} |\zeta|$. Expanding $a_{>\mu^\frac12}(t,x,\zeta)$ in a rapidly decreasing series of spherical harmonics with respect to $\zeta$, we can separate variables and reduce the problem to the simpler case when $a_{>\mu^\frac12}(t,x,\zeta) = b(t,x) c(\zeta)$ with $|b| < \mu^{-\frac12}$ and $c$ is smooth and homogeneous of order $1$. For the symbol $c-{{\tilde{c}}}$ we use the representation $$c(\zeta) - {{\tilde{c}}}(t,x,\zeta) = \psi(\xi_\theta^{\alpha_\mu}, \zeta) (\xi_\theta^{\alpha_\mu}(t,x) \wedge \zeta)^2$$ where $\psi$ is smooth in both arguments and homogeneous of order $-1$ in $\zeta$. Separating variables in $\psi$ we can assume without any restriction in generality that $\psi$ depends only on $\zeta$. 
Then after some simple commutations we obtain $$c(D) - {{\tilde{c}}}(t,x,D) = (\xi_\theta^{\alpha_\mu}(t,x) \wedge D)^2 \psi(D) + O(1)_{L^p \to L^p}$$ To estimate this we use . The factor $\psi(D)\tilde S_\mu(D)$ yields an extra $\mu^{-1}$ factor in the $L^p$ bounds, therefore we obtain $$\| R_2(t,x,D)\|_{L^p \to L^p} \lesssim \mu^{-\frac12}$$ Finally we prove part (c). By (a), $\tilde L$ is $L^2$ bounded with norm $O(\alpha^{-2} \mu^{-1})$, therefore it remains to prove the commutator estimate $$\| [ D_t+ A_{<\mu^\frac12}(t,x,D),\tilde S_\mu L(t,x,D)]\|_{L^\infty L^2 \to L^2} \lesssim \alpha^{-2} \mu^{-1} \label{lcom}$$ This is a consequence of the operator bound $$[ D_t+ A_{<\mu^\frac12}(t,x,D),\tilde S_\mu L(t,x,D)] \in L^2 C^0_\mu S(\alpha^{-2} \mu^{-1},g_\alpha)$$ To prove it we use the pdo calculus to represent the commutator as a principal term plus a second order error, $$[ D_t+ A_{<\mu^\frac12}(t,x,D),\tilde S_\mu L(t,x,D)] = \tilde S_\mu Q(t,x,D) + R(t,x,D)$$ where the principal part $q$ has symbol $$q(t,x,\xi) =-i \{\tau + a_{<\mu^\frac12}(t,x,\xi), l(t,x,\xi)\}$$ The remainder $R$ is localized at frequency $\mu$. A direct computation, using , shows that its symbol satisfies $$r \in L^2 C^0_\mu S(\alpha^{-2} \mu^{-1},g_\alpha)$$ It remains to consider the above Poisson bracket and prove that $$q \in L^2 C^0_\mu S(\alpha^{-2} \mu^{-1},g_\alpha) \label{qsal}$$ For this we write $q$ in the form $$iq = - \tilde{{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu} q_1 e^{-2} + q_2 e^{-1} + q_3 e^{-1}$$ where $$q_1 (t,x,\xi)= \left\{\tau + a_{<\mu^\frac12} , e \right\}, \qquad q_2 (t,x,\xi)= \left\{\tau + a_{<\alpha^{-1}}, \tilde{{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}\right\}$$ respectively $$q_3 (t,x,\xi)= \left\{ a_{\alpha^{-1}<\cdot < \mu^{\frac12}}, \tilde{{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}\right\}$$ Within the support of $\tilde{{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}$ we know that $e \in C^1_\mu S(\alpha^2 \mu, g_\alpha)$ is an elliptic symbol. Hence for the first term it suffices to show that $q_1 \in C^0_\mu S(\alpha^2 \mu, g_\alpha)$. Indeed, by definition $q_1$ is a homogeneous symbol of order $1$ which is continuous in $x$ and homogeneous in $\zeta$. In addition, we know that $e(t,x,\zeta)$ vanishes of second order in $\zeta$ at $ (t,x,\xi_\theta^{\alpha_\mu}(x,t))$ which is also invariant with respect to the $\tau+ a_{<\mu^\frac12}$ Hamilton flow. Then $q$ must vanish of second order in $\zeta$ at $(t,x,\xi_\theta^{\alpha_\mu}(x,t))$. Arguing as in the case of $e$, this implies that within the support of $\tilde{{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}$ we have $q_1 \in C^0_\mu S(\alpha^2 \mu, g_\alpha)$. As in we know that $q_2 \in S(1,g_\alpha)$. Also we have $a_{\alpha^{-1}<\cdot < \mu^{\frac12}} \in L^2 C^2_\mu S(\alpha^2 \mu, g_\alpha)$ and $\tilde{{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu} \in S(1,g_\alpha)$ therefore $q_3 \in L^2 C^1_\mu S(1,g_\alpha)$. This concludes the proof of and therefore the proof of the lemma. 
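Before continuing with term II, it may help to spell out the constant coefficient model case mentioned above, where $\xi_\theta^{\alpha_\mu} = \theta$; a two line computation explains the $\alpha^2 \mu$ ellipticity of $e$ on the support of ${{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}$, $$e(\zeta) = |\zeta| - \zeta \theta = |\zeta| (1 - \cos \angle(\zeta,\theta)) \approx \frac12 |\zeta| \, |\angle(\zeta,\theta)|^2 \approx \alpha^2 \mu,$$ since there $|\zeta| \approx \mu$ and $\angle(\zeta,\theta) \approx \alpha$. In this model case $\tilde L$ is essentially multiplication by a symbol of size $(\alpha^2 \mu)^{-1}$, which is consistent with the bounds in the lemma.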
To continue the estimate of term II in Case 1 we define the auxiliary trilinear form $$\begin{aligned} E(u,v,{{\tilde{w}}}) &=& \int (D_t + A(t,x,D)) u\, v {{\tilde{w}}}dx dt + \int u (D_t - A(t,x,-D)) v\, {{\tilde{w}}}dx dt \\ &+& \int u v\, (D_t + {{\tilde{A}}}(t,x,D)) {{\tilde{w}}}dx dt\end{aligned}$$ With $\tilde w= \tilde L w_{\theta}^\alpha$ we write $$\begin{split} \int u_\theta v_\theta w_\theta^\alpha dx dt =& -\int u_\theta v_\theta\, ((A(t,x,D)- {{\tilde{A}}}(t,x,D))\tilde L-1)w_\theta^\alpha dx dt \\ +& \int (D_t + A(t,x,D)) u_\theta\, v_\theta {{\tilde{w}}}dx dt \\ +& \int u_\theta (D_t - A(t,x,-D)) v_\theta\, {{\tilde{w}}}dx dt \\ +& \int u_\theta v_\theta (D_t + A(t,x,D)) {{\tilde{w}}}dx dt \\ -& E(u_\theta,v_\theta,{{\tilde{w}}}) \end{split} \label{longsum}$$ We bound each term separately. For the first one we write $$\begin{split} (A(t,x,D)- {{\tilde{A}}}(t,x,D)) \tilde{ L}-1 =& (A(t,x,D)-{{\tilde{A}}}(t,x,D))\tilde L-\tilde {{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}(t,x,D) \\ & + (\tilde {{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}(t,x,D)-1) \end{split}$$ The contribution of the first line is estimated using Lemma \[L\] (b) and for $w_\theta^\alpha$, $$\begin{split} \bigg| \int u_\theta v_\theta [&(A(t,x,D)-{{\tilde{A}}}(t,x,D)) \tilde L - \tilde {{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}(t,x,D)]w_\theta^\alpha dx dt \bigg| \\& \lesssim (\alpha^{-2}\mu^{-1}+\mu^{-\frac12}) \|u_\theta\|_{L^\infty L^2} \|v_\theta\|_{L^\infty L^2} \|w_\theta^\alpha \|_{L^{2} L^{\infty}} \\& \lesssim (\alpha^{-2}\mu^{-1} +\mu^{-\frac12}) \|u_\theta\|_{X_+} \|v_\theta\|_{X_-} \|w_\theta^\alpha \|_{L^{2} L^{\infty}} \\& \lesssim (\alpha^{-2}\mu^{-1}+\mu^{-\frac12}) \alpha^\frac12 \mu^\frac32 \|u_\theta\|_{X_+} \|v_\theta\|_{X_-} \|w_\theta^\alpha\|_{X_+^{\mu,\alpha,\theta}} \end{split}$$ For the contribution of the second line we observe that $$(\tilde {{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}(t,x,D)-1) w_\theta^\alpha = (\tilde {{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}(t,x,D)-1) \tilde \phi_\theta^{+,\alpha} S_\mu w$$ where the symbols $\tilde {{\tilde{\phi}}}_{\theta,\mu}^{+,\alpha} -1$ and $\tilde \phi_{\theta,\mu}^{+,\alpha} s_\mu $ have disjoint supports. Since they both belong to $S(1,g_\alpha)$, this yields a gain of a factor $(\alpha^2 \mu)^{-N}$ in , with $N$ arbitrarily large: $$\sum_\theta \| (\tilde {{\tilde{\phi}}}^{+,\alpha}_{\theta,\mu}(t,x,D) -1) w_\theta^\alpha\|_{L^2 L^\infty}^2 \lesssim \mu^{\frac52} (\alpha^2 \mu)^{-N} \|w_\theta^\alpha\|_{X_+^{\mu,\alpha,\theta}}^2$$ This is more than we need. For the second term in we use the $L^2$ bound for $(D_t+A)u_\theta$, the energy bound for $v_\theta$ and for ${{\tilde{w}}}$. This yields $$\left| \int (D_t +A(t,x,D)) u_\theta\, v_\theta {{\tilde{w}}}dx dt \right| \lesssim \alpha^{-\frac32} \mu^{\frac12} \|u_\theta\|_{X_+} \|v_\theta\|_{X_-} \|w_\theta^\alpha\|_{X_+^{\mu,\alpha,\theta}}$$ The third term is similar. For the fourth term in we use the energy for the first two factors combined with Bernstein derived $L^2 L^\infty$ bound for the third, $$\begin{split} \left| \int u_\theta v_\theta\, (D_t + A(t,x,D)) {{\tilde{w}}}dx dt\right| & \lesssim \|u\|_{X_+} \|v\|_{X_-} \|(D_t + A(t,x,D)) {{\tilde{w}}}\|_{L^2 L^\infty} \\ &\lesssim (\alpha^2 \mu)^{-1} (\mu (\alpha \mu)^3)^\frac12 \|u_\theta\|_{X_+} \|v_\theta\|_{X_-} \|w_\theta^\alpha\|_{X_+^{\mu,\alpha,\theta}} \end{split}$$ It remains to prove the estimate for $E$. 
Observe that the time derivatives in $E$ can be integrated out, producing contributions of the form $$\int u_\theta\, v_\theta \, {{\tilde{w}}}dx \label{puvtw}$$ at the initial and the final time. These are estimated using energy bounds for the first two factors and the pointwise bound arising from Bernstein’s inequality for the last factor, $$\| {{\tilde{w}}}\|_{L^\infty} \lesssim (\alpha^2 \mu)^{-1} \| w^\alpha_\theta\|_{L^\infty} \lesssim (\alpha^2 \mu)^{-1} (\mu (\alpha \mu)^3)^\frac12 \|w^\alpha_\theta\|_{X_+^{\mu,\alpha,\theta}} = (\alpha^2 \mu)^{-\frac14} \mu^\frac54\|w^\alpha_\theta\|_{X_+^{\mu,\alpha,\theta}}$$ This leaves us with a purely spatial trilinear form, $$\int E_0(u_\theta,v_\theta,{{\tilde{w}}}) dt$$ where $$E_0(u,v,{{\tilde{w}}}) = \int A(t,x,D) u\, v {{\tilde{w}}}- u A(t,x,-D) v \,{{\tilde{w}}}+ u v \, {{\tilde{A}}}(t,x,D) {{\tilde{w}}}\ dx$$ The main bound for $E_0$ is provided in the next lemma. \[leob\] Let $1 \leq \mu \lesssim \lambda$. Assume that $\xi_\theta$ is a Lipschitz function of $x$ with $|\xi_\theta -\theta| \ll 1$ and that $a \in C^1 S^1_{hom}$. Then the trilinear form $E_0$ satisfies the fixed time estimate: $$\begin{split} |E_0(u,v,{{\tilde{w}}})| \lesssim& \ \|u\|_{L^{p_1}} \|v\|_{L^{q_1}} \|{{\tilde{w}}}\|_{L^{r_1}} \\ & + \lambda^{-1} \| (\xi_\theta \wedge D) u\|_{L^{p_2}} \|v\|_{L^{q_2}} \| (\xi_\theta \wedge D) {{\tilde{w}}}\|_{L^{r_2}} \\ & + \lambda^{-1} \| u\|_{L^{p_2}} \| (\xi_\theta \wedge D) v\|_{L^{q_2}} \| (\xi_\theta \wedge D) {{\tilde{w}}}\|_{L^{r_2}} \\ & + \mu \lambda^{-2} \| (\xi_\theta \wedge D) u\|_{L^{p_3}} \| (\xi_\theta \wedge D) v\|_{L^{q_3}} \|{{\tilde{w}}}\|_{L^{r_3}} \end{split} \label{eob}$$ for all indices $$\frac{1}{p_i} + \frac{1}{q_i} + \frac{1}{r_i} = 1, \qquad 1 \leq p_i,q_i,r_i \leq \infty$$ and for all functions $u$, $v$ localized at frequency $\lambda$ in a small angular neighbourhood of $\theta$, respectively $-\theta$ and all $w$ localized at frequency $\mu$. \[ee\] While any choice of $L^p$ norms is allowed in the lemma, in order to conclude the proof of the estimate for $E$ it suffices to use the set of indices $(2,2,\infty)$. We apply the lemma with $u = u_\theta$, $v = v_\theta$ and ${{\tilde{w}}}= \tilde L w^\alpha_\theta$ as above. This yields $$\begin{split} \left|\int E_0(u_\theta,v_\theta,{{\tilde{w}}}) dt\right| \lesssim & \ \|u_\theta\|_{L^{\infty} L^2} \|v_\theta\|_{L^{\infty} L^2} \|{{\tilde{w}}}\|_{L^{2} L^\infty} \\ & + \lambda^{-1} \| (\xi_\theta \wedge D) u_\theta\|_{L^{\infty} L^2} \|v_\theta\|_{L^{\infty} L^2} \| (\xi_\theta \wedge D) {{\tilde{w}}}\|_{L^{2} L^\infty} \\ & + \lambda^{-1} \| u_\theta\|_{L^{\infty} L^2} \| (\xi_\theta \wedge D) v_\theta\|_{L^{\infty} L^2} \| (\xi_\theta \wedge D) {{\tilde{w}}}\|_{L^{2} L^\infty} \\ & + \mu \lambda^{-2} \| (\xi_\theta \wedge D) u_\theta\|_{L^{\infty} L^2} \| (\xi_\theta \wedge D) v_\theta\|_{L^{\infty} L^2} \|{{\tilde{w}}}\|_{L^{2} L^\infty} \end{split} \label{secondee}$$ Due to the angular localization, the operator $ (\xi_\theta \wedge D) $ yields a factor of $\mu^{-\frac12} \lambda$ when applied to $u_\theta$ or $v_\theta$, respectively a factor of $\alpha \mu$ when applied to $\tilde w$. 
Hence we obtain $$\left|\int E_0(u_\theta,v_\theta,{{\tilde{w}}}) dt\right| \lesssim \frac{\mu^{\frac32} \alpha^\frac12}{\alpha^2 \mu} (1+ \alpha \mu^\frac12 +\alpha \mu^\frac12+ 1) \|u_\theta\|_{X_+^{{\lambda},\alpha,\theta}} \|v_\theta\|_{X_-^{{\lambda},\alpha,\theta}} \|w^\alpha_\theta\|_{X_+^{\mu,\alpha,\theta}}$$ which is acceptable since $\alpha^2 \mu \geq 1$. Since the symbol $a$ is smooth and homogeneous of order $1$ with respect to $\xi$, we can use its representation in terms of the spherical harmonics and reduce the problem to the case when $a$ has the form $$a(x,\xi) = b(x) c(\xi)$$ where $b$ is Lipschitz continuous. We denote by $\xi$, respectively $\eta$ the frequencies for the $u_\theta$, respectively $v_\theta$ factors in $E_0$. Then $\xi$ and $\eta$ have size $\lambda$ and are in a small angular neighbourhood of $\theta$. We expand $c$ around the line generated by $\xi_\theta$ into a linear term and a quadratic error, $$c(\xi) = \xi (\nabla c)(\xi_\theta) + \xi B(\xi,\xi_\theta) \xi$$ where $B$ is homogeneous of order $-1$ with respect to $\xi$ and can be chosen so that $$\xi_\theta B(\xi,\xi_\theta) = 0, \qquad B(\xi,\xi_\theta) \xi_\theta = 0$$ To see that this is possible we observe that after a rigid rotation we can assume that $\xi_\theta = e_1$. For $\xi = (1,\xi')$ with $|\xi'| \ll 1$ we write the first order Taylor polynomial with integral remainder $$\begin{split} c(1,\xi') = &\ c(1,0) + \xi' c_{\xi'}(1,0) + \xi' B(1,\xi') \xi' \\ = &\ c_{\xi_1}(1,0) + \xi' c_{\xi'}(1,0) + \xi' B(1,\xi') \xi' \end{split}$$ where $B$ is given by $$B(1,\xi') = \int_0^1 (1-h) \nabla^2_{\xi'} a(1,h\xi') dh$$ This extends by homogeneity to all $\xi$ in a small angle around $\theta$. We represent $B$ as a rapidly convergent sum of terms of the form $$\lambda^{-1} F(\xi_\theta) g(\xi)$$ where $g$ is a scalar function which is bounded and smooth on the $\lambda$ scale and $F$ is a matrix inheriting the above property of $B$, $$\xi_\theta F(\xi_\theta) = 0, \qquad F(\xi_\theta) \xi_\theta = 0 \label{null}$$ So we have $$c(\xi) = \xi (\nabla c)(\xi_\theta) + \lambda^{-1} \sum \xi F(\xi_\theta) \xi g(\xi)$$ Then we obtain the rapidly convergent series representation $$\begin{aligned} c(\xi) - c(\eta) &=& (\xi-\eta) (\nabla c)(\xi_\theta) +\lambda^{-1} \sum (\xi - \eta) F(\xi_\theta) \xi g(\xi) \\ &+& \lambda^{-1} \sum \eta F(\xi_\theta) (\xi -\eta) g(\eta) \\ &+& \lambda^{-2} \sum \eta F(\xi_\theta) \xi (\xi - \eta) h(\xi) k(\eta)\end{aligned}$$ where $h$ and $k$ are smooth and bounded on the $\lambda$ dyadic scale. We use this representation for the first two components in $E_0$. The contribution of the first term above cancels the principal part of the third component in $E_0$. We retain the other three terms though, therefore this yields the following rapidly convergent series representation for $E_0$: $$E_0(u,v,{{\tilde{w}}}) = \int u v {{\tilde{w}}}D(b (\nabla c)(\xi_\theta)) dx + \sum E_0^1 + \sum E_0^2 + \sum E_0^3$$ The first term is easily estimated since $b (\nabla c)(\xi_\theta)$ is Lipschitz continuous. The first summand has the form $$\begin{split} E_0^1 &= {\lambda}^{-1} \int F(\xi_\theta) D (Dg(D) u v)\, {{\tilde{w}}}dx \\ & = - {\lambda}^{-1} \int D F(\xi_\theta) D g(D) u v {{\tilde{w}}}+ F(\xi_\theta) D g(D) u v D {{\tilde{w}}}\ dx \end{split}$$ In the first term $F(\xi_\theta)$ is Lipschitz in $x$ and the $u$ derivative yields a factor of $\lambda$. 
For the second term on the other hand we use to estimate $$| D g(D) u F(\xi_\theta) D {{\tilde{w}}}| \lesssim |(\xi_\theta \wedge D) g(D) u| |(\xi_\theta \wedge D) {{\tilde{w}}}|$$ Commuting $g(D)$ with $(\xi_\theta \wedge D)$ we get $$|(\xi_\theta \wedge D) g(D) u| \leq |g(D) (\xi_\theta \wedge D) u| + | [ g(D), \xi_\theta \wedge D] u|$$ with the commutator $[ g(D), \xi_\theta \wedge D]$ bounded in all $L^p$ spaces. Hence $$|E_0^1| \lesssim \| u\|_{L^{p_1}} \|v\|_{L^{q_1}} \|{{\tilde{w}}}\|_{L^{r_1}} + \lambda^{-1} \| (\xi_\theta \wedge D) u\|_{L^{p_2}} \|v\|_{L^{q_2}} \|(\xi_\theta \wedge D) {{\tilde{w}}}\|_{L^{r_2}}$$ The second summand of $E_0$ is similar but with the roles of $u$ and $v$ reversed. Finally, $$\begin{split} E_0^3 &= \lambda^{-2} \int F(\xi_\theta)\, D (D h(D) u \,D k(D) v)\, {{\tilde{w}}}dx \\ &= -\lambda^{-2} \int D F(\xi_\theta) \,D h(D) u\, D k(D) v \, {{\tilde{w}}}+ F(\xi_\theta)\, D h(D) u\, D k(D) v \, D {{\tilde{w}}}\ dx \end{split}$$ where the matrix $F(\xi_\theta)$ is paired with the $u$ and $v$ derivatives. In the first term the two derivatives on $u$ and $v$ yield a $\lambda^2$ factor. In the second term we use as before and commute out the $h(D)$ and $k(D)$ multipliers. We obtain $$|E_0^3| \lesssim\| u\|_{L^{p_1}} \|v\|_{L^{q_1}} \|{{\tilde{w}}}\|_{L^{r_1}} + \mu \lambda^{-2}\| (\xi_\theta \wedge D) u\|_{L^{p_3}} \| (\xi_\theta \wedge D) v\|_{L^{q_3}} \| {{\tilde{w}}}\|_{L^{r_3}}$$ Summing up the results we get the conclusion of the Lemma. [**Case 1, term III**]{}: This has the form $$III = \int \sum_{ \alpha > \mu^{-\frac12} }\sum_{\theta \in O_{\alpha}} \phi^{+,\alpha}_{\theta,\lambda}(t,x,D) S_{\lambda}u \ \tilde \phi^{-,\alpha}_{-\theta,\lambda}(t,x,D) S_{\lambda}v \ \tilde \phi^{+,\alpha \mu^{-1} \lambda}_{\theta,\mu}(t,x,D) S_\mu w \,dx dt$$ In this case the summation with respect to $\theta$ is accomplished by , while for the $\alpha$ summation we simply accept a $\ln \mu$ loss. Fixing $\alpha$ and $\theta$ we set $$u_\theta^\alpha = \phi^{+,\alpha}_{\theta,\lambda}(t,x,D) S_{\lambda}u, \quad v_\theta^\alpha = \phi^{-,\alpha}_{-\theta,\lambda}(t,x,D) S_{\lambda}v, \quad w_\theta^\alpha=\tilde \phi^{+,\alpha \mu^{-1} \lambda}_{\theta,\mu}(t,x,D) S_\mu w.$$ and repeat the analysis for Case 1, term II. The angular localization of $u_\theta^\alpha $ and $v_\theta^\alpha$ is not used in the bounds for the first four terms in , therefore that part of the argument rests unchanged. The same applies to the bound for the fixed time integral in . It remains to consider the bound for $E(u_\theta^\alpha,v_\theta^\alpha,{{\tilde{w}}})$. The $\alpha$ localization angle for $w_\theta^\alpha$ is now $\alpha \mu^{-1} \lambda$, therefore part (b) of Lemma \[L\] gives $$\| {{\tilde{w}}}\|_{X_+} \lesssim \frac{\mu}{\alpha^2 \lambda^2} \|w_\theta^\alpha\|_{X_+}$$ This is stronger than in the previous case because it gives a high frequency gain. 
Now we are able to use Lemma \[ee\] with exponents $(3,2,6)$ to obtain $$\begin{split} |\int E_0(u_\theta^\alpha,v_\theta^\alpha,{{\tilde{w}}}) dt| \lesssim &\ \|u_\theta^\alpha\|_{L^2 L^3} \|v_\theta^\alpha\|_{L^{\infty} L^2} \|{{\tilde{w}}}\|_{L^{2} L^6} \\ & + \lambda^{-1} \| (\xi_\theta \wedge D) u_\theta^\alpha\|_{L^2 L^3} \|v_\theta^\alpha\|_{L^{\infty} L^2} \| (\xi_\theta \wedge D) {{\tilde{w}}}\|_{L^{2} L^6} \\ & + \lambda^{-1} \| u_\theta^\alpha\|_{L^2 L^3} \| (\xi_\theta \wedge D) v_\theta^\alpha\|_{L^{\infty} L^2} \| (\xi_\theta \wedge D) {{\tilde{w}}}\|_{L^{2} L^6} \\ & + \mu \lambda^{-2} \| (\xi_\theta \wedge D) u_\theta^\alpha\|_{L^2 L^3} \| (\xi_\theta \wedge D) v_\theta^\alpha\|_{L^{\infty} L^2} \|{{\tilde{w}}}\|_{L^{2} L^6} \end{split}$$ Due to the angular localization on the $\alpha$ scale for $u_\theta^\alpha$ and $v_\theta^\alpha$, respectively on the $\alpha \mu^{-1} \lambda$ scale for $w_\theta^\alpha$, all $(\xi_\theta \wedge D)$ operators above yield $\alpha \lambda$ factors. Hence, taking advantage of the Strichartz estimates, we obtain $$\begin{split} |\int E_0(u_\theta^\alpha,v_\theta^\alpha,{{\tilde{w}}}) dt| \lesssim&\ \frac{\mu}{\alpha^2 \lambda^2} \alpha^2 \lambda\ \lambda^{\frac5{12}} \mu^\frac56 \|u_\theta^\alpha\|_{X_+^{{\lambda},\alpha,\theta}} \|v_\theta^\alpha\|_{X_-^{{\lambda},\alpha,\theta}} \|w_\theta^\alpha\|_{X_+^{\mu,\frac{\alpha{\lambda}}{\mu},\theta}} \\ =&\ \lambda^{-\frac7{12}} \mu^\frac{11}6 \|u_\theta^\alpha\|_{X_+^{{\lambda},\alpha,\theta}} \|v_\theta^\alpha\|_{X_-^{{\lambda},\alpha,\theta}} \|w_\theta^\alpha\|_{X_+^{\mu,\frac{\alpha{\lambda}}{\mu},\theta}} \end{split}$$ which is satisfactory since $\lambda \gtrsim \mu$. We conclude this case with two remarks. First, in this context the proof of Lemma \[ee\] is somewhat of an overkill. In fact, it would suffice to linearize separately $a(t,x,\xi)$ and $a(t,x,\eta)$ around $\xi_\theta$ and use the fact that the symbol $a(t,x,\xi) - {{\tilde{a}}}(t,x,\xi)$ has size $\alpha^2 \lambda$ at frequency ${\lambda}$ in $H_\alpha S_\alpha(\theta)$. Secondly, the endpoint Strichartz estimate is only used here for convenience; there is some flexibility in choosing the indices. [**Case 1, term IV.**]{} This has the form $$IV = \int \sum_{ \alpha > \mu^{-\frac12} }\sum_{\theta \in O_{\alpha}} \phi^{+,\alpha}_{\theta,\lambda}(t,x,D) S_{\lambda}u \ \tilde \phi^{-,\alpha}_{-\theta,\lambda}(t,x,D) S_{\lambda}v \ \phi^{+,\alpha \mu^{-1} \lambda}_{\theta,\mu}(t,x,D) S_\mu w \,dx dt$$ Again the summation with respect to $\theta$ is accomplished by , while for the $\alpha$ summation we simply accept a $\ln \mu$ loss. This term is better behaved because the symbol $$\phi^{+,\alpha}_{\theta,\lambda}(x,\xi)\, \tilde \phi^{-,\alpha}_{-\theta,\lambda}(x,\eta) \,\phi^{+,\alpha \mu^{-1} \lambda}_{\theta,\mu}(x,\zeta)$$ vanishes on $H=\{\xi+\eta + \zeta = 0\}$. Precisely, in the support of the above symbol we have $$|\xi| \approx \lambda,\ |\xi \wedge \xi_\theta^\alpha| \lesssim \alpha \lambda, \qquad |\eta| \approx \lambda,\ |\eta \wedge \xi_\theta^\alpha| \approx C \alpha \lambda, \qquad |\zeta| \approx \lambda,\ |\zeta \wedge \xi_\theta^\alpha| \lesssim \alpha \lambda.$$ This leads to $$|(\xi + \eta +\zeta) \wedge \xi_\theta^\alpha| \approx C \alpha \lambda \label{offdiag}$$ This can be taken advantage of in a direct computation in the above formula. 
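A one line justification of the off-diagonal bound above: since the constant $C$ in the angular separation is large, the triangle inequality gives $$|(\xi + \eta +\zeta) \wedge \xi_\theta^\alpha| \geq |\eta \wedge \xi_\theta^\alpha| - |\xi \wedge \xi_\theta^\alpha| - |\zeta \wedge \xi_\theta^\alpha| \gtrsim (C - O(1))\, \alpha \lambda \approx C \alpha \lambda.$$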
Including the dyadic frequency localizations into the $\phi$’s, each term in $IV$ has the integral representation $$\int \phi^{+,\alpha}_{\theta,\lambda}(t,x,\xi) \hat u(\xi) \ \tilde \phi^{-,\alpha}_{-\theta,\lambda}(t,x,\eta) \hat v(\eta) \ \phi^{+,\alpha \mu^{-1} \lambda}_{\theta,\mu}(t,x,\zeta) \hat w(\zeta)\, e^{i x(\xi+\eta+\zeta)} \, d\xi d\eta d\zeta dx dt$$ Defining the spatial elliptic operator $F$ with symbol $$f(t,x,\xi) = (\xi \wedge \xi_\theta^\alpha)^{2N}$$ we have $$F(t,x,D_x) e^{i x(\xi+\eta+\zeta)} = |(\xi + \eta +\zeta) \wedge \xi_\theta^\alpha|^{2N} e^{i x(\xi+\eta+\zeta)}$$ Hence integration by parts in the above formula leads to $$\int \psi(t,x,\xi,\eta,\zeta) \hat u(\xi) \, \hat v(\eta) \, w(\zeta) e^{i x(\xi+\eta+\zeta)} \, d\xi d\eta d\zeta dx dt$$ where the new symbol $\psi$ is $$\psi(t,x,\xi,\eta,\zeta) = F^*(t,x,D_x) \left( \frac{\phi^{+,\alpha}_{\theta,\lambda}(t,x,\xi) \tilde \phi^{-,\alpha}_{-\theta,\lambda}(t,x,\eta) \phi^{+,\alpha \mu^{-1} \lambda}_{\theta,\mu}(t,x,\zeta) }{ |(\xi + \eta +\zeta) \wedge \xi_\theta^\alpha|^{2N}} \right)$$ In the support of the numerator the bound holds. Hence separating the variables we can represent the denominator as a rapidly convergent series with terms $$(\alpha \lambda)^{-2N} \chi_{< \alpha {\lambda}}( \xi \wedge \xi_\theta^\alpha) \chi_{C \alpha {\lambda}} ( \eta \wedge \xi_\theta^\alpha) \chi_{< \alpha {\lambda}}( \zeta \wedge \xi_\theta^\alpha)$$ where each of the $\chi$’s above is a unit bump function on the $\alpha \lambda$ scale. Thus they can be included in the corresponding $\phi$ factors. Due to the $S(g_\alpha)$ regularity of the $\phi$ factors, each derivative $ \xi_\theta^\alpha \wedge D$ applied to them yields an $\alpha^{-1}$ factor. Thus $\psi$ is represented as a rapidly convergent series of products of the form $$(\alpha^2 \lambda)^{-2N} \psi^{+,\alpha}_{\theta,\lambda}(t,x,\xi) \tilde \psi^{-,\alpha}_{-\theta,\lambda}(t,x,\eta) \psi^{+,\alpha \mu^{-1} \lambda}_{\theta,\mu}(t,x,\zeta)$$ where the $\psi$ factors have the same support and regularity as the corresponding $\phi$’s. The integral above is similarly represented as a rapidly convergent series with terms of the form $$(\alpha^2 \lambda)^{-2N} \int \psi^{+,\alpha}_{\theta,\lambda}(t,x,D) S_\lambda u \,\tilde \psi^{-,\alpha}_{-\theta,\lambda}(t,x,D) S_{\lambda}v \, \psi^{+,\alpha \mu^{-1} \lambda}_{\theta,\mu}(t,x,D) S_\mu w \,dx dt$$ Since $\alpha > \mu^{-\frac12}$, the factor in front of the above integral allows us to exchange low frequencies for high frequencies. This suffices in order to bound the last integral using Strichartz estimates. [**Case 1, term V**]{} This is similar to Case 1, term $IV$. This time in the support of the symbol $$\phi^{+,\alpha}_{\theta,{\lambda}}(t,x,\xi)\, \tilde \phi^{-,\alpha}_{-\theta,{\lambda}}(t,x,\eta)\, \tilde\phi^{+,\beta}_{\theta,\mu}(t,x,\zeta)$$ we have $$|\xi| \approx \lambda,\ |\xi \wedge \xi_\theta^\alpha| \lesssim \alpha \lambda, \qquad |\eta| \approx \lambda,\ |\eta \wedge \xi_\theta^\alpha| \approx C \alpha \lambda, \qquad |\zeta| \approx \lambda,\ |\zeta \wedge \xi_\theta^\alpha| \approx C\beta \lambda.$$ Hence $$|(\xi + \eta +\zeta) \wedge \xi_\theta^\alpha| \approx C \alpha \lambda$$ therefore the symbol above is supported at distance $\beta {\lambda}$ from the diagonal $H$. Hence integrating by parts as in the previous case we gain arbitrary powers of $(\alpha \beta {\lambda})^{-1}$. Then we can close the argument using Strichartz type estimates. [**Case 2**]{}, $1 < d < \mu$. 
This requires only minor changes, which we describe in what follows. We still consider the five terms in the trilinear decomposition , but we replace the smallest localization angle $\mu^{-\frac12}$ by $d^{\frac12} \mu^{-\frac12}$. [**Case 2, term I**]{}. Here we need to sum expressions of the form $$I = \int \phi^{+,d^{\frac12}\mu^{-\frac12}}_{\theta,\lambda}(t,x,D) S_{\lambda}u \ \phi^{-,d^{\frac12}\mu^{-\frac12}}_{-\theta,\lambda}(t,x,D) S_{\lambda}v\ \phi^{+,d^{\frac12}\mu^{-\frac12}}_{\theta,\mu}(t,x,D) S_\mu w \,dx$$ over $\theta \in O_{d^{\frac12}\mu^{-\frac12}}$. Each term is bounded by combining the energy estimate for the first factor, the $L^4 L^2$ bound for the second and for the third. [**Case 2, term II.**]{} Here we use for the summation of expressions of the form $$II = \int \phi^{+,d^{\frac12}\mu^{-\frac12}}_{\theta,\lambda}(t,x,D) S_{\lambda}u\ \phi^{-,d^{\frac12}\mu^{-\frac12}}_{-\theta,\lambda}(t,x,D) S_{\lambda}v\, \sum_{\alpha > d^{\frac12}\mu^{-\frac12}} \tilde \phi^{+,\alpha}_{\theta,\lambda}(t,x,D) S_\mu w dx$$ over $\theta \in O_{d^{\frac12}\mu^{-\frac12}}$. We use the same operator $L$, the same function $\tilde w$ and the same trilinear form $E$. In the first, second and fourth terms are estimated in the same way, but using the $L^4 L^2$ bound for the second factor. In the third term we lose a power of $d$, $$\begin{aligned} \left|\int u_\theta (D_t - A(t,x,-D)) v_\theta {{\tilde{w}}}dx\right| &\lesssim& \|u_\theta\|_{L^\infty L^2} \|(D_t - A(t,x,-D)) v_\theta\|_{L^2} \|{{\tilde{w}}}\|_{L^2 L^\infty}\\ & \lesssim& \|u_\theta\|_{X_+} d^{\frac34} \|v_\theta\|_{X_{-,d}} \frac{1}{\alpha^2 \mu} \alpha^{\frac12} \mu^{\frac32} \|w_\theta^\alpha\|_{X_+^{\mu,\alpha,\theta}} \\ &\lesssim & \left(\frac{d}{\alpha^2 \mu}\right)^\frac34 \mu^{\frac54} \|u_\theta\|_{X_+}\|v_\theta\|_{X_{-,d}} \|w_\theta\|_{X_+^{\mu,\alpha,\theta}}\end{aligned}$$ But this is still acceptable due to the reduced range for $\alpha$, namely $\alpha^2 \mu \geq d$. In the expression there is a $d^{\frac14}$ loss in the $L^2$ bound for $v_\theta$, but this is compensated for by the previously unused $(\alpha^2 \mu)^{-\frac14}$ factor in the pointwise bound for ${{\tilde{w}}}$. Finally, for the $E_0$ bounds we reuse but with all the $v_\theta$ factors estimated in $L^2$. This produces an extra $d^{-\frac14}$ gain. On the other hand, the angular localization for $u_\theta$ and $v_\theta$ is worse. Precisely, the operator $ (\xi_\theta \wedge D) $ yields a factor of $d^\frac12 \mu^{-\frac12} \lambda$ when applied to $u_\theta$ or $v_\theta$, respectively a factor of $\alpha \mu$ when applied to $\tilde w$. Hence we obtain $$\left|\int E_0(u_\theta,v_\theta,{{\tilde{w}}}) dt\right| \lesssim \frac{\mu^{\frac32} \alpha^\frac12}{d^{\frac14} \alpha^2 \mu} (1+ d^\frac12 \alpha \mu^\frac12 +d^\frac12 \alpha \mu^\frac12+ d) \|u_\theta^\alpha\|_{X_+^{{\lambda},\theta,\alpha}} \|v_\theta^\alpha\|_{X_-^{\lambda,\theta,\alpha}} \|w_\theta^\alpha\|_{X_+^{\mu,\theta,\alpha}}$$ This is still acceptable since $\alpha^2 \mu \geq d$. [**Case 2, term III.**]{} Compared to the similar argument in Case 1, the following modifications are required: \(i) The third term in is treated as in Case 2, term II. \(ii) In the bound for $E_0$, the $L^\infty L^2$ norms are replaced by $L^4 L^2$ in all the $v_\theta$ factors. [**Case 2, terms IV,V.**]{} These are identical to Case 1. **Acknowledgement** Both authors would like to thank MSRI for the hospitality in the Fall 2005 semester, where part of this article was written. 
Both authors were supported in part by the NSF grant DMS-0301122. [^1]: Throughout this paper we will use the standard notation $S(m,g)$, while in [@TG] we used for $S(1,g)$ the shorter one: $S(g)$. [^2]: Here $n$ stands for the space dimension [^3]: This corresponds to the classical wave packets which are localized on the scale of the uncertainty principle. Above this threshold we are dealing with generalized wave packets, which may have a more complex structure, see [@MR2153517] and [@TG] [^4]: modulo tails which are rapidly decreasing on the $g_\alpha$ scale [^5]: here we switch to the letter $\zeta$ for the frequency, as the following analysis refers to the region at low frequency $\mu$ corresponding to the last factor $w$ in the trilinear form.
1
--- abstract: | We present a detailed analysis of the class of regression decision tree algorithms which employ a regularized piecewise-linear node-splitting criterion and have regularized linear models at the leaves. From a theoretical standpoint, based on the Rademacher complexity framework, we present new high-probability upper bounds for the generalization error of the proposed classes of regularized regression decision tree algorithms, including LASSO-type and $\ell_{2}$ regularization for the linear models at the leaves. The theoretical results are further extended by considering a general type of variable selection procedure. Furthermore, in our work we demonstrate that the analyzed class of regression trees is not only numerically stable but can furthermore be made tractable via an algorithmic implementation, presented herein, as well as with the help of modern GPU technology. Empirically, we present results on multiple datasets which highlight the strengths and potential pitfalls of the proposed tree algorithms compared to baselines which grow trees based on piecewise constant models. **Keywords:** Decision trees, Piecewise-Linear Regression, Rademacher complexity, regularization. author: - 'Leonidas Lefakis, Oleksandr Zadorozhnyi, Gilles Blanchard' bibliography: - 'lrt.bib' title: 'Efficient Regularized Piecewise-Linear Regression Trees ' --- Introduction {#sec:intro} ============ Decision trees and random forests remain very popular machine learning tools because of their general applicability and considerable empirical evidence with regard to their performance on diverse tasks. Both theoretical aspects and practical applications of such algorithms are active fields of study. At the same time, in recent years another family of predictors, namely those falling under the umbrella of “deep-learning”, has met with success. Despite recent efforts to attain a better understanding of the properties of this model class, there remains much which is not well understood (from a theoretical perspective). Nonetheless, it is clear that the increase in performance of such algorithms is due, at least in part, to their ability to leverage cutting edge GPU technology to efficiently build complex models using huge datasets. Consequently, an important and largely open question is whether other families of predictors can exploit this technology to increase their performance. In the case of decision trees, recent work [@Kont15] has in fact shown that decision trees and deep learning are not mutually exclusive and can be combined to create powerful predictors. Here, however, we are interested in investigating enhancements of decision tree algorithms, and in particular regression trees, without resorting to building hybrid deep systems. In particular, we note that an integral component of every tree growing algorithm, to wit the criterion used for defining the optimal split at each node, has remained relatively simple in nature and is typically based on the squared error of a piecewise constant model. The motivation behind this simplicity has invariably been linked to the tractability and numerical stability of the underlying algorithm. In the following we propose a piecewise linear splitting criterion for building a regression tree and show how modern GPU technology makes such complex criteria both tractable and numerically stable. 
We furthermore present a detailed theoretical analysis that provides insight into the generalization behavior of this model class, obtaining high probability upper bounds on the generalization error. Finally, we present empirical evidence showing that the proposed algorithm consistently outperforms many benchmark algorithms which employ the aforementioned piecewise constant model criterion. Related work {#sec:rel_work} ============ Least squares regression trees were introduced in the seminal work by Breiman et al. [@breiman1984classification]. As with the large majority of subsequent tree building algorithms, the proposed CART trees proceed in a top-down greedy manner. Nodes are split based on local criteria which in the case of regression trees take the form of minimizing the variances of the target values in the two resulting subtrees. This can be interpreted as minimizing the squared error of a piecewise constant model. The M5 algorithm proposed by Quinlan [@quinlan1992learning] allows construction of linear models at the leaves of the resulting tree. The splitting of samples at the internal nodes is chosen so as to minimize the standard deviations on the two sub-populations. Effectively the algorithm builds a regression tree based on a piecewise constant model and exchanges these constant models for more powerful linear models at the leaves. As noted in [@landwehr2003logit], due to the similarity in splitting criteria, the CART and M5 algorithms result in similar tree structures. This similarity notwithstanding, M5 and its “rational reconstruction” M5$'$ [@wang1997inducing] have been shown to outperform CART in practice [@wang1997inducing; @vogel07scalable]. Similar to M5, the HTL algorithm [@torgo1997functional] first builds a CART tree and then replaces the models at the nodes. The main difference in this work from the M5 algorithm is that HTL allows for non-linear regressors at the leaves. Yet another approach, SECRET [@dobra2002secret], constructs an artificial classification problem, and shows that there is merit in building the tree structure using a classification tree algorithm and then finally assigning linear regression models to each of the leaves. One characteristic of these and other approaches is that they avoid using the final model (usually linear regression) as a criterion when optimizing the splits of the internal nodes. M5, for instance, optimizes the split for piecewise constant models in the leaves and only inserts the linear models in the leaves post-hoc. However, it would be more desirable to build the tree using a split criterion that takes into account the models actually employed in the leaves: $$\begin{aligned} \label{eq:lrt} \sum_{\bx_i \in A} \paren[2]{ y_i- \hat{f}_A(\bx_i;w_A) }^2 + \sum_{\bx_j \in A^c} \paren[2]{ y_j- \hat{f}_{A^c}(\bx_j;w_{A^c}) }^2,\end{aligned}$$ where $A,A^c$ are the two split subsets and $\hat{f}_A(\cdot;w_A)$ is a linear regressor built on the corresponding subset. The discrepancy of using different models for optimizing a split and for predicting at the leaves was first pointed out by Karalic [@karalic1992employing]; at the time, however, solving the full optimization problem that results from using a piecewise linear model as part of the splitting criterion was neither tractable nor numerically stable. 
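To make the criterion concrete, here is a minimal single-node sketch of evaluating it for one candidate split; this is only an illustration and not the authors’ implementation: the function names are ours, a small ridge term `lam` is included purely to keep the least-squares solves well posed, and for simplicity the same design matrix is used both for splitting and for the leaf regressions (the setup described later in the paper separates a split representation from the regression features).

```python
import numpy as np

def ridge_sse(X, y, lam=1e-3):
    # Fit w minimizing ||X w - y||^2 + lam ||w||^2 and return the
    # residual sum of squares of that linear model on (X, y).
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    r = X @ w - y
    return float(r @ r)

def split_criterion(X, y, j, threshold, lam=1e-3):
    # Value of the piecewise-linear criterion for the candidate split
    # A = {i : X[i, j] >= threshold}, with one linear model per side.
    mask = X[:, j] >= threshold
    if mask.all() or not mask.any():
        return np.inf  # degenerate split, not admissible
    return ridge_sse(X[mask], y[mask], lam) + ridge_sse(X[~mask], y[~mask], lam)

def best_split(X, y, lam=1e-3):
    # Greedy node step: exhaustive search over (feature, sample-threshold) pairs.
    n, d = X.shape
    return min((split_criterion(X, y, j, X[k, j], lam), j, k)
               for j in range(d) for k in range(n))
```

Even in this naive form the cost of an exhaustive search is apparent: every candidate split requires solving two $d \times d$ linear systems, which is the kind of computation that the approach described below batches on a GPU.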
This intractability and instability were asserted many times in subsequent years [@dobra2002secret; @Potts2005; @Nata], and with a view to solving this optimization problem, multiple efforts were made to efficiently approximate the above split criterion. Two such approaches [@GUIDE; @SUPPORT] use the residuals of a linear regressor at each node to perform a kind of clustering, while another approach [@vogel07scalable] replaces linear regression with forward selection regression at each node in order to reduce the dimensionality of the models. We mention here, for the sake of completeness, that though greedy top-down growing of trees is the most common approach, there exist other approaches, such as so-called “soft” trees [@chipman2010] which predefine the tree architecture and then optimize, in a global manner, the split functions. Other non-greedy strategies for learning the split criteria have also been studied in the literature, as for example via multi-linear programming [@benett1994global] and structured prediction [@norouzi2015efficient]. We focus, however, on the specific, sometimes called “hard”, family of tree algorithms. We show in the following that despite the increased computational cost of using the criterion in Equation , modern GPU technology ensures that this approach is both tractable and numerically stable. More importantly, we provide a comprehensive theoretical analysis of this model class: by separating the “split” space $\mathbf{\mathcal{X}^{'}}$ from the “regression” space ${\mathcal{X}}$, we show that the generalization capabilities of the model class are linked to the dimensionality of the latter as well as to the regularization constraints, which are induced by some norm on ${\mathcal{X}}$ (in particular we consider the $\ell_{1}$-norm and LASSO, which is well known for inducing sparsity). The paper is organized as follows: in Section \[sec:prob\_setup\_new\] we introduce the notation, present the general framework of the Piecewise-Linear Regression Tree (PLRT) algorithm and provide the necessary formalism to describe the model class. Section \[sec:theor\_analysis\] contains the main theoretical contribution of the paper, namely the upper bound for the Rademacher complexity of the underlying regression tree class and the high probability inequality which controls the deviations of the generalization error. Furthermore, in this section we provide the important corollaries which relate the different types of penalization (i.e. those which induce sparsity) on the leaves to the performance of the risk bounds. Section \[sec:ctn\] is devoted to the question of numerical tractability and stability when computing the regressor estimates on a GPU, whereas Section \[sec:emp\_eval\] reports the empirical results of a wide range of experiments, showing the practical advantage of using a GPU combined with the superiority of the models returned by the PLRT algorithm (in comparison to well-known decision tree baselines). Finally, we conclude in Section \[sec:conc\] by highlighting possible extensions. All the proofs can be found in the Appendix. Problem setup {#sec:prob_setup_new} ============= Regression algorithm {#subsec: regression_alg} -------------------- Let $\mathcal{X} = \mathbb{R}^{d}$, equipped with the Euclidean norm $\norm{\cdot}_{2}$, and let $\mathcal{Y} \subset \mathbb{R}$ be some (closed) interval of the real line. Denote by $\mathbf{S}=\{\bx_{i},y_{i}\}_{i=1}^{n}$ the i.i.d. 
sample of size $n$ from some unknown distribution $\mathbb{P}_{{X, Y}}$ over the space $\mathcal{X}\times \mathcal{Y}$. For some input $x \in \mathcal{X}$ we also consider its feature representation in some feature space $\mathcal{X}^{'}$ and denote it as $\psi(x)$. In this work we consider the case where $\mathcal{X}^{'} = \mathbb{R}^{D}$ and denote for an arbitrary point $\bx_{i}$ its representation $\psi(\bx_{i}) := \psi_{i} = \paren{\psi_{i}^{1},\ldots,\psi_{i}^{D}}$ with $d \ll D$, where as usual $\psi_{i}^{k}$ denotes the $k$-th coordinate of the feature representation $\psi_{i}$. Therefore, in our work we consider the *extended sample* $\mathbb{S} =\{\bx_{i},y_{i},\psi_{i}\}_{i=1}^{n}$. Lastly, we assume (for the simplicity of the theoretical analysis) that the distribution of $\norm{X}_{2}$ has bounded support in the interval $[0,K]$. We will use the notation $[D]$ for the integer interval $\set{1,\ldots,D}$. We investigate both the theoretical properties and empirical performance of a general form piecewise linear regressor with regularization constraints built via the following regression tree algorithm. The proposed algorithm proceeds in a top-down greedy fashion as is typically done in building a decision tree. At each node in the tree, a split is chosen in the $\mathcal{X}^{'}$ space so as to minimize the empirical least squares errors of linear predictors in the $\mathcal{X}$ space after splitting. Let ${\mathcal{J}}$ be a subset of indices of the training dataset, corresponding to instances present at a given tree node. For all $(i,k) \in [D]\times {\mathcal{J}}$ denote $A_{i,k}=\{ j \in {\mathcal{J}}: \psi_{j}^{i} \geq \psi_{k}^{i} \}$ and $A^{c}_{i,k}= {\mathcal{J}}\setminus A_{i,k}$ corresponding to the subsets obtained after a split according to the $i$-th feature coordinate and threshold $\psi^i_{k}$. Define the matrix $\bX_{A_{i,k}}$ of dimensions $(|A_{i,k}|,d)$ whose rows are given by $(\bx^t_{l}, l \in A_{i,k})$, and similarly $\bX_{A^{c}_{i,k}}$ of dimensions $(|A^c_{i,k}|, d)$. Define label vectors $Y_{A_{i,k}}=\paren[1]{y_{l}: l \in A_{i,k}}^t$ and $Y_{A^{c}_{i,k}}=\paren[1]{y_{l}: l \in A^{c}_{i,k}}^t$ of dimension $|{A_{i,k}}|$, $|{A^{c}_{i,k}}|$ respectively. For every $(i,k) \in [D] \times {\mathcal{J}}$ consider the optimal cumulative penalized loss of linear predictors after splitting: $$\begin{gathered} \label{eq:L_diff} L^\lambda_{i,k}= \min_{w_{i,k} \in \mathbb{R}^{d}} \paren{ \norm[1]{\bX_{A^{}_{i,k}}{w}^{}_{i,k}-Y_{A^{}_{i,k}}}_{P}^{2} + \lambda \| w_{i,k} - w_0\|^2_Q} \\ + \min_{w^c_{i,k} \in \mathbb{R}^{d}} \paren{\norm[1] {\bX_{A^{c}_{i,k}}{w}^{c}_{i,k}-Y_{A^{c}_{i,k}}}_{P}^{2} + \lambda \| {w}^{c}_{i,k} - {w}_0\|^2_Q},\end{gathered}$$ where $\norm{x}^{2}_{P}:= x^{\top}Px$ and $w_{0}$ may be any vector, though in the following we set $w_0$ to be the linear regression vector of the parent node. In particular, if for a given leaf $s$ we have computed an optimal regularized least squares solution $w_s^*$, then when further splitting this node into two (children) leaves we optimize problems of the form $ \min_{w_{i,k} \in \mathbb{R}^{d}} \paren{ \norm[1]{\bX_{A^{}_{i,k}}{w}^{}_{i,k}-Y_{A^{}_{i,k}}}_{P}^{2} + \lambda \| w_{i,k} - w_s^*\|^2_Q}$ and of the analogous form for $A_{i,k}^{c}$. Thus, the solutions at lower nodes are regularized by the propagation of the solutions from higher (parental) nodes. As will be shown in the empirical evaluation, such a regularization is crucial to the stability of the proposed algorithm. 
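For reference, each of the two minimizations above is a generalized ridge problem and admits a closed-form solution; assuming $P$ and $Q$ are symmetric positive semidefinite and $\bX_{A_{i,k}}^{\top} P \bX_{A_{i,k}} + \lambda Q$ is invertible, setting the gradient to zero gives $$\hat{w}_{i,k} = \paren[1]{\bX_{A_{i,k}}^{\top} P \bX_{A_{i,k}} + \lambda Q}^{-1}\paren[1]{\bX_{A_{i,k}}^{\top} P Y_{A_{i,k}} + \lambda Q w_{0}},$$ and analogously on $A^{c}_{i,k}$; substituting these minimizers back yields $L^\lambda_{i,k}$ without any iterative optimization, so the search over all pairs $(i,k)$ reduces to batched linear algebra of this form.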
The importance of this regularization was in fact a key insight from early experiments: without strong regularization of this form, the proposed trees were not able to generalize well. We note that $L^\lambda_{i,k}$ can be computed analytically by standard linear algebra formulas, see Section \[sec:ctn\] for a more detailed discussion of numerical aspects. As the optimal empirical splitting rule, we choose a pair $(i^{\star},k^{\star})$ minimizing the above: $$(i^{\star},k^{\star}) = \argmin_{(i,k) \in [D] \times {\mathcal{J}}} L^\lambda_{i,k}.$$ For the pair $(i^{\star},k^{\star})$, we split the data accordingly and get the subsets $\paren{\bx_\ell,y_\ell}_{\ell \in {A_{i^{\star},k^{\star}}}}$ and $\paren{\bx_\ell,y_\ell}_{\ell \in {A^c_{i^{\star},k^{\star}}}}$ for the two children of the node. Now, provided that none of the stopping criteria has been reached, we apply the splitting algorithm recursively to these two subnodes. If a stopping criterion is reached, then the linear regression with constraints is performed on the leaves. Regression tree formalism ------------------------- For the learning-theoretic study of the algorithm, we need to define formally the set of possible decision functions that can be output by the learning algorithm. This set must be data-independent for classical learning-theoretic arguments to apply. We assume that the total number of leaves is fixed and equal to $\ell$. Let $\mathcal{T}_{\ell}$ be the set of all binary (unlabeled) trees with $\ell$ leaves. For each tree $T \in \mathcal{T}_{\ell}$, denote by $T^{\circ}$ its interior nodes, and by $\partial T$ its leaves. Consider some interior node $s \in T^{\circ}$ and some splitting rule according to the algorithmic scheme in Section \[subsec: regression\_alg\]. Notice that a pair $(i_{s},t_{s}) \in [D]\times \mathbb{R}$ fully parametrizes the splitting criterion in the form $\mathbbm{1}\paren{{\psi^{i_{s}}(\bx) \geq t_{s}}}$ for some instance $\bx$. Any split corresponding to a set $A_{i,k}$ which can be obtained by the partition procedure of the PLRT algorithm can be represented as a pair $(i_s=i,t_s=\psi^{i_s}_k)$ for some $k\in [n]$. Finally, for each leaf $ L_{i} \in \partial T$, $i \in \{1,\ldots,\ell\}$, the local prediction function at leaf $L_i$ is a linear predictor $\bx \mapsto \inner{f_i,\bx}$, which we parametrize by the vector $f_i$, under the constraint $f_{i} \in B$; we will consider the constrained classes of the form $ B := \{ \norm{f} \leq W\} \subset \mbr^d$, where $\norm{\cdot}$ is some norm in $\mathbb{R}^{d}$, and track the influence of norm constraints (for some specific choice of norm) on the complexity terms. Using the aforementioned notation, we can describe the class of regression decision trees as follows: $$\begin{aligned} \label{func_class001} \mathcal{F} := \{f : f = \paren{ T, (i_{s},t_{s})_{s \in T^{\circ}},(f_{k})_{k \in \partial T}}, T \in \mathcal{T}_{\ell}, \paren{i_s,t_s} \in [D]\times \mathbb{R}, f_{k} \in B \}.\end{aligned}$$ The main aim of the next section is to obtain error bounds for the statistical performance of this functional class by means of the deviation of its generalization error. The main technical tool is the concept of Rademacher complexity. Theoretical analysis {#sec:theor_analysis} ===================== Preliminaries and aim. 
---------------------- In the following analysis, we obtain high probability upper bounds on the deviation of the *statistical risk* $L(f) = E_{P_{X,Y}}[\ell(f(\bx),y)]$ of the model $f \in\mathcal{F}$ from the *empirical risk* $\hat{L}_{n}(f) = \frac{1}{n}\sum_{i=1}^{n}\ell(f({\bx_{i}}),y_{i})$ uniformly over the model class $\mathcal{F}$. We define the *generalization error* of the class $\mathcal{F}$ as $Z := \sup\limits_{f \in \mathcal{F}} \big(L(f) - \hat{L}_{n}(f) \big)$. The main result of this section is a high probability upper bound on the deviations of $Z$ under the different assumptions on the regularization constraints in $B$. The expectation and deviation of the generalization error are the typical measures used to control the statistical performance of the underlying model class and were widely studied in [@bartlett2002rademacher],[@koltchinskii2001rademacher]. In these works the framework of Rademacher complexity is used as the complexity measure for structured regularized risk estimation methods, including kernel methods. Let now $\bm{\sigma}:=(\sigma_{i})_{i=1}^{n}$ be an $n$-vector of i.i.d. random variables independent of $\mathbb{S}$, uniformly distributed over $\{-1,+1\}$ (Rademacher random variables). For a given loss function $l$, such that $|l(\cdot,\cdot)| \leq C$ and for all $f$ in (an arbitrary) model class $\mathcal{G}$, with probability at least $1-\delta$ it holds (see for example [@bartlett2002rademacher]): $$L(f) \leq L_{n}(f)+2\mathfrak{R}_{n}(l \circ \mathcal{G}) + C\sqrt{\frac{\log \delta^{-1}}{2n}}, \label{eq:gen_error}$$ where $$\begin{aligned} \mathfrak{R}_{n}(l \circ \mathcal{G}) := E \left[\sup\limits_{f \in \mathcal{G}} \frac{1}{n} \sum\limits_{i=1}^{n}\sigma_{i}l(f(\bx_{i}),y_{i}) \right],\end{aligned}$$ is the *Rademacher complexity* of the model class $\mathcal{G}$, the last expectation being taken under the product distribution of $(\mbs,\bm{\sigma})$, where $l \circ \mathcal{G}:= \{ (\bx,y) \mapsto l(f(\bx),y) | f \in {\mathcal{G}} \}$ is the loss function class associated to $\cG$. If we treat the sample $\mathbb{S}$ as fixed, we introduce the *empirical Rademacher complexity* $$\hat{\mathfrak{R}}_{\mbs}(l \circ \mathcal{F}) := E_{\bm{\sigma}}\left[ \sup\limits_{f \in \mathcal{F}} \frac{1}{n}\sum\limits_{i=1}^{n}\sigma_{i}l(f(\bx_{i}),y_{i})\right], \label{eq:empirical_rademacher_f}$$ so that $\mathfrak{R}_{n}(l \circ \mathcal{F}) = E_{\mathbb{S} \sim P^{\otimes n}_{x,y}}\left[\hat{\mathfrak{R}}_{\mbs}(l \circ \mathcal{F})\right] $. It is also known (see for example [@bartlett2005local]) that if the loss function $l$ is $L$-Lipschitz in its first (prediction) argument, i.e. $\forall y \in \mathcal{Y}$ $ |l(a,y)-l(b,y)| \leq L|a-b| \; \forall a,b \in \mathbb{R}$, then: $$\hat{\mathfrak{R}}_{\mbs}(l \circ \mathcal{F}) \leq L \hat{\mathfrak{R}}_{\mbs}(\mathcal{{F}}), \label{eq:ledoux}$$ and we get for all $f$ in the class $\mathcal{F}$: $$\label{eq:gen_bound} L(f) \leq L_{n}(f)+2L \mathfrak{R}_{n} ( \mathcal{F}) + C\sqrt{\frac{\log\delta ^{-1}}{2n}}.$$ Now, for the PLRT learning algorithm we consider the squared loss function $l(y',y)=(y-y')^2$. Recall that from the previous assumptions we have $\norm{\bx}_{2} \leq K$ and $|y| \leq R$ almost surely with respect to ${P}_{X,Y}$. Assume also that for all $\bx$ with $\norm{\bx}_{2}\leq K$ and $f\in \cF$ it holds that $\abs{f(\bx)} \leq F$ (for linear predictors, the constant $F$ depends on the norm-constraints in the set $B$ and will be specified later).
Then, we have easily for the squared-loss function: $$\begin{aligned} |l(\cdot,\cdot)| \leq (R+F)^2 := L_{R,F,B}.\end{aligned}$$ Furthermore, we can control $\mathfrak{R}_{n}(\mathcal{F})$ by means of its empirical version $\hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F})$, since we observe that changing one point in $\mathbb{S}$ changes $\hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F})$ by at most $\frac{F}{n}$, so that by McDiarmid’s inequality, with probability at least $1-\delta/2$ it holds: $$\begin{aligned} \mathfrak{R}_{n}\paren{\mathcal{F}} \leq \hat{\mathfrak{R}_{\mathbb{S}}}\paren{\mathcal{F}} + F\sqrt{\frac{\log{\frac{2}{\delta}}}{2n}}.\end{aligned}$$ This argument, together with the aforementioned reasoning, implies that in order to obtain high-probability upper bounds for the generalization error $Z$ of model class $\mathcal{F}$, it is sufficient to obtain bounds on its empirical Rademacher complexity based on the sample $\mathbb{S}$. More precisely, with probability at least $1-\delta/2$ it holds for all $f \in \mathcal{F}$ that: $$\begin{aligned} \label{eq:risk_ineq} L(f) \leq L_{n}(f) + 2L\hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}} + 2LF\sqrt{\frac{\log{\frac{2}{\delta}}}{2n}} + L_{R,F,B}\sqrt{\frac{\log{\frac{2}{\delta}}}{2n}}.\end{aligned}$$ In the next part we concentrate mainly on the bounds on the (empirical) Rademacher complexity of the model class $\mathcal{{F}}$ of PLRT, but also provide bounds on the true Rademacher complexity. (Empirical) Rademacher complexity and generalization error deviation bound for the class of PLRT ------------------------------------------------------------------------------------------------ Recalling the formal definition of the predictor class in , we observe that for a fixed a element (tree) $T \in \mathcal{T}_{l}$, and some partition generated by the family $(i_{s},t_{s})_{s \in T^{\circ}} \in S$, we obtain a submodel class $\mathcal{{F}}_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}}$ which is a product of the decision models over the leaves, and such that each $\bx_{i} \in \mathbb{S}$ belongs to exactly one leaf $L_{j}$. Thus, the whole model class $\mathcal{F}$ can be represented as follows: $$\begin{aligned} \label{eq:class_str} \mathcal{F} = \bigcup_{\substack{T \in \cT_l\\(i_{s},t_{s})_{s} \in ([D] \times \mbr)^{\cT^\circ}}}\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}} ,\end{aligned}$$ where we formally write $\mathcal{F}_{T,(i_{s},t_{s})_{s\in \cT^\circ}} = \{f: f = (f_{1},\ldots,f_{\ell}): \forall x \in \mathcal{X}, f(x) = \sum_{j} \inner{f_{j},x}\mathbbm{1}(x \in L_{j}), f_{j} \in \partial T \} $ for each tree $T$ and split family $\paren{i_{s},t_{s}}_{s \in T^{\circ}}$. To study the statistical performance of the classes with the union-type structure we make use the of Lemma 2 from [@maurer2014inequality] and its corollary, which we provide below for completeness. \[thm:sup\_bound\] Let $N \geq 4$ be some natural number and $A_{1},\ldots,A_{N} \subset \mathbb{R}^{n}$ some subsets, such that for a given $A \subset \mathbb{R}^{n}$ we have $A =\bigcup_{i=1}^{N} A_{i}$. Consider $\bm{\sigma} = \paren{\sigma_1,\ldots,\sigma_n}$ to be a vector of i.i.d. Rademacher variables (i.e. uniformly distributed over $\set{-1,1}^n$). Then we have: $$\begin{aligned} \ee{}{\sup_{z \in A} \inner{\bm{\sigma},z}} \leq \max_{i=1}^{N}\ee{}{\sup_{z \in A_{i}}\inner{\bm{\sigma},z}} + 4 \sup_{z \in A}\norm{z}\sqrt{\log{N}}. 
\end{aligned}$$ Now, assume we have a finite family of functional classes $\mathcal{F},\mathcal{F}_{1},\ldots,\mathcal{F}_{N}$, such that $\mathcal{F} = \bigcup_{i=1}^{N} \mathcal{F}_{i}$ and a sample $\mathbb{S}$ as before. Denote $A_{j} \subset \mathbb{R}^{n}, A_{j} = \{\paren{f(\bx_{1}),\ldots,f(\bx_{n})}: f \in \mathcal{F}_{j} \}$, $j \in \{1,\ldots, N\}$ (i.e. the vector image of the evaluation of the functions $f \in \mathcal{{F}}_{j}$ on the sample $\mbs$). From the previous result, we have the next corollary (see also [@maurer2014inequality]): \[cor:rad\_union\] For the empirical Rademacher complexity of the class $\mathcal{F}= \bigcup_{i=1,\ldots,N} \mathcal{F}_{i}$ based on the sample $\mathbb{S}$, we have: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}} \leq \max_{m=1}^{N}\hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{m}) + 4\mathcal{M}\sqrt{\frac{\log{N}}{n}}, \end{aligned}$$ where $\mathcal{M} = \sqrt{\sup_{f \in \mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}f^{2}(\bx_{i})}$. For the analysis of the class of regression decision tree algorithms , this means that we can in principle first analyze the Rademacher complexity of the predictor class for any fixed tree structure and splits, and then pay as a price the $\log$-cardinality of the union appearing in . One issue is that (because of the real-valued thresholds $t_s$) this union is not finite; however, using a classical argument we can reduce it to a finite union when considering the empirical Rademacher complexity. The bound in the next lemma is obtained by a simple upper bound on the number of possible splits ($((n-1)D)^{\ell-1}$) in the tree and on the number of different trees with $\ell$ leaves (which is the $(\ell-1)$-th Catalan number); furthermore, by using the simple inequalities $(\frac{k}{e})^k<k!<e(\frac{k}{2})^k$ we obtain $\frac{1}{\ell} \binom{2(\ell-1)}{\ell-1} \leq \frac{e^{\ell}}{\ell}$, and the claim follows by simple calculations. \[lem: help\_lem01\] $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}} \paren{\mathcal{F}} = \hat{\mathfrak{R}}_{\mathbb{S}} \paren[4]{\bigcup_{\substack{T \in \cT_l\\(i_s,t_s)_{s} \in ([D] \times [n])^{T^\circ}}} \mathcal{F}_{T,(i_{s},\psi_{k_s}^{i_s})_{s\in T^{\circ}}}} &\leq \max_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}}\hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}}) + 4\mathcal{M}\sqrt{\frac{\ell\log{enD}}{n}}, \\ \end{aligned}$$ where $\mathcal{M} = \sqrt{\sup\limits_{f \in \cF}\frac{1}{n}\sum_{i=1}^{n}f^{2}(\bx_{i})}$.
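As an illustrative aside, the fixed-partition empirical Rademacher complexities appearing on the right-hand side of the lemma can also be estimated numerically by Monte Carlo over draws of $\bm{\sigma}$: for the Euclidean-ball constraint $B=\{\norm{f}_{2}\leq W\}$ studied below, the supremum over each leaf is available in closed form, $\sup_{\norm{f_{j}}_{2}\leq W}\inner{f_{j},v}=W\norm{v}_{2}$. The following minimal sketch (with our own function names; it is a numerical check, not part of the learning algorithm) computes such an estimate.

``` python
import numpy as np

def empirical_rademacher_fixed_partition(X, leaf_of, W, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of the class
    of piecewise linear predictors with a fixed partition (leaf_of[i] is the
    leaf index of sample i) and l2-constrained leaf vectors ||f_j||_2 <= W.
    Uses sup_{||f_j||_2 <= W} <f_j, v> = W * ||v||_2 leaf by leaf."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    estimates = np.empty(n_draws)
    for t in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)
        total = 0.0
        for j in np.unique(leaf_of):
            mask = (leaf_of == j)
            v = (sigma[mask][:, None] * X[mask]).sum(axis=0)
            total += np.linalg.norm(v)
        estimates[t] = W * total / n
    return estimates.mean()
```

Such estimates can be compared directly against the closed-form bound $W\sqrt{2\ell\tr{\hat{\Sigma}}/n}$ derived below (Lemma \[eq:l2\_bound\]).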
With the notation as in Section \[sec:prob\_setup\_new\] we obtain: \[thm:main\_theorem01\] With probability at least $1-\delta/2$ it holds uniformly over all $f \in \mathcal{{F}}$: $$\label{eq:Main_inequality01} L(f) - L_{n}(f) \leq 4\paren{R+F} \paren[3]{\max_{T,(i_{s},t_{s})_{s\in T^\circ}}\hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^\circ}})+ 4\mathcal{M}\sqrt{\frac{\ell \log(enD)}{n}} + (R+2F) \sqrt{\frac{\log{ \paren{\frac{2}{\delta}}}}{2n}}} $$ where we recall that $\abs{f(\bx)} \leq F$ uniformly over all $\bx$ and $\abs{y} \leq R$. In the next section we specify the constraints induced by the set $B$, and give explicit bounds for both Rademacher complexity and generalization error for $\ell_{2}$ and $\ell_{1}$ norm constraints. Different penalty constraints and corresponding bounds ------------------------------------------------------ In what follows we denote $\Sigma = \ee{}{\bx \bx^{\top}}$ the covariance matrix of the random vector $\bx$, and $\hat{\Sigma} = \frac{1}{n} \sum_{i=1}^n \bx_i\bx_i^t$ its empirical counterpart. ### Euclidean-norm penalty. Let $B = \{ \norm{f}_{2} \leq W \}$. Recall that our goal is now to control $\max_{T,(i_{s},t_{s})_{s \in T^{\circ}}} \hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}})$ for decision trees of total number of leaves $\ell$. The following result holds true. \[eq:l2\_bound\] For a fixed tree structure $T$ with $\ell$ leaves and the splits $(i_{s},t_{s})_{s \in T^{\circ}}$ we have the following upper bounds: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}}} &\leq \frac{W\sqrt{2\ell} \sqrt{\sum_{j}\norm{\bx_{j}}_{2}^{2}}}{n} = W\sqrt{\frac{2 \ell \tr{\hat{\Sigma}}}{n}};\\ {\mathfrak{R}}_{n}\paren{\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}}} &\leq W\sqrt{\frac{{2\ell \tr{{\Sigma}}}}{n}}. \end{aligned}$$ Notice that the upper bound for the empirical Rademacher complexity is data-dependent, but does not depend on the structure of the decision tree with fixed splits. \[lem:rademacher\_ell2\] By linearity of expectation and taking into account that leaves $(L_{j})_{j=1}^{\ell}$ are disjoint sets we obtain: $$\begin{aligned} \ee{}{\sup_{f = (f_{1},\ldots,f_{\ell})} \sum_{i=1}^{n}\epsilon_{i}f(\bx_{i})} &= \sum_{j=1}^{\ell}\ee{}{\sup_{f_{j} \in B} \inner{f_{j},\sum_{i: \bx_{i} \in L_{j}}^{}\epsilon_{i}\bx_{i}}} \\ & \leq W \sum_{j=1}^{\ell} \ee{}{\norm[2]{\sum_{i: \bx_{i} \in L_{j}}^{}\epsilon_{i}\bx_{i}}_{2}} \\ & \leq W\sum_{j=1}^{\ell}\sqrt{\ee{}{\sum_{i,k: \bx_{i},\bx_{k} \in L_{j}}^{}\epsilon_{i}\epsilon_{k}\inner{\bx_{i},\bx_{k}}}}, \end{aligned}$$ where we used Jensen’s inequality in the last line. Since $(\epsilon_{i})_{i=1}^{n}$ are independent Rademacher variables, by taking expectation, we obtain: $$\begin{aligned} \ee{}{\sup_{f = (f_{1},\ldots,f_{\ell})} \sum_{i=1}^{n}\epsilon_{i}f(\bx_{i})} \leq W \sum_{j=1}^{\ell}\sqrt{\sum_{i: \bx_{i} \in L_{j}}\norm{\bx_{i}}^{2}} \leq W \sqrt{\ell} \sqrt{\sum_{i=1}^{n}\norm{\bx_{i}}^{2}}, \end{aligned}$$ where in the last inequality we just used the Jensen’s inequality for the function $t \mapsto \sqrt{t}$. The claim of the Lemma now follows from the definition of Rademacher Complexity. Plugging in the result of Lemma \[eq:l2\_bound\] into Lemma \[lem: help\_lem01\] and consequently plugging the implication of the latter into the general result of Theorem \[thm:main\_theorem01\] we deduce the following general result. 
\[eq:rad\_comp\_ell\_2\] Let the function class $\mathcal{F}$ be as given in Equation with $B=\{\norm{f}_{2} \leq W\}$. Assume $\norm{X}_{2} \leq K$ and $\abs{Y} \leq R$, $\mathbb{P}_{X}$ and $\mathbb{P}_{Y}$-almost surely. Then the following upper bounds for the empirical and true Rademacher complexities of the class $\mathcal{F}$ hold: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}} & \leq W \sqrt{\frac{\ell}{n}} \paren{\sqrt{2\tr{\hat{\Sigma}}} + 4\sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log(enD)}}; \\ {\mathfrak{R}}_{n}\paren{\mathcal{F}} &\leq W \sqrt{\frac{\ell}{n}} \paren{\sqrt{2\tr{\Sigma}} + 4 \sqrt{\ee[1]{}{\norm[1]{\hat{\Sigma}}_{op}}} \sqrt{\log(enD)} }. $$ where we denote the operator norm of a matrix as $\norm{\cdot}_{op}$. Furthermore, with probability at least $1-\delta$ it holds: $$\begin{aligned} L\paren{f} - L_{n}\paren{f} \leq 4\paren{R+F} \paren{ W\sqrt{\frac{\ell}{n}}\paren{\sqrt{2\tr \hat{\Sigma}} + 4 \sqrt{\norm[2]{\hat{\Sigma}}_{op} }\sqrt{\log\paren{enD}} } + \paren{R+2F}\sqrt{\frac{\log\paren{\frac{2}{\delta}}}{2n}}}, \end{aligned}$$ or similarly in data-independent form (when using bound on Rademacher complexity instead of its data-dependent proxy) we have: $$\begin{aligned} L\paren{f} - L_{n}\paren{f} \leq 4\paren{R+F} \paren{ W \sqrt{\frac{\ell}{n}} \paren{\sqrt{2\tr{\Sigma}} + 4 \sqrt{\ee[1]{}{\norm[1]{\hat{\Sigma}}_{op}}} \sqrt{\log(enD)} } + \paren{R+F}\sqrt{\frac{\log\paren{\frac{2}{\delta}}}{2n}}} \end{aligned}$$ Furthermore, with probability at least $1-\delta/2$, we have for all $f \in \mathcal{{F}}$: $$\begin{aligned} L(f) \leq L_{n}(f) & + C_{1} \paren[3]{W\sqrt{\frac{\ell}{n}} \paren{\sqrt{2\tr{\hat{\Sigma}}} + 4 \sqrt{{\norm[1]{\hat{\Sigma}}_{op}}} \sqrt{\log(enD)} } + WK\sqrt{\frac{\log(\frac{2}{\delta})}{2n}}} + C\sqrt{\frac{\log({\frac{2}{\delta}})}{2n}}. \end{aligned}$$ Alternatively, using the bound on the true Rademacher complexity we obtain with probability at least $1-\delta$: $$\begin{aligned} L(f) \leq L_{n}(f) & + C_{1}W \sqrt{\frac{\ell}{n}} \paren{ \sqrt{2\tr{\Sigma}} + 4 \sqrt{\ee[1]{}{\norm[1]{\hat{\Sigma}}_{op}}}\sqrt{\log{enD}}} +C\sqrt{\frac{\log({\frac{1}{\delta}})}{2n}}, \end{aligned}$$ where $C_{1} = 4 \paren{R+WK}$, $C:=\paren{R+WK}^{2}$. \[rem:gen\_error\] Consider the simple case $\Sigma = \mathbbm{I}_{d}$. Then, in the last bound, the first term scales as $\tr{\Sigma} = d$, but the second term scales like $\e[1]{\norm[1]{\hat{\Sigma}}_{op}}$, which is, due to classical matrix concentration results [@Tropp:15], up to a $\log(d)$ factor the same as $\norm{\Sigma}_{op} = 1$. This implies: $$\begin{aligned} L(f) \leq L_{n}(f) & + C_{1} W \sqrt{\frac{\ell}{n}} \paren{ {\sqrt{2d} + 8C_{3} \sqrt{\log{d}}\sqrt{\log{enD}}}} +C\sqrt{\frac{\log({\frac{1}{\delta}})}{2n}},\end{aligned}$$ where $C_{3}$ is some universal numerical constant and $C=(R+WK)^{2}$, $C_{1}=4(R+WK)$. In this case, the first term contributes to the bound exponentially more ($d$ against $\log d$) comparing to the second. Furthermore, if the eigenvalues of $\Sigma$ decay geometrically fast, i.e. 
$\lambda_{k} \leq \lambda\rho^{k-1}$, with $0 < \rho <1$ for $k \in [d]$, we have $\tr{\Sigma} \leq \frac{\lambda}{1-\rho}$ and $\norm{\Sigma}_{op} = \lambda$ and the bound transforms to: $$\begin{aligned} L(f) \leq L_{n}(f) & + C_{1}W \sqrt{\frac{\ell}{{n}}} \paren{ {\sqrt{\frac{2\lambda}{1-\rho}} + 4C_{3}\sqrt{\lambda}\sqrt{\log{d}}\sqrt{\log{enD}}}} +C\sqrt{\frac{\log({\frac{1}{\delta}})}{2n}}.\end{aligned}$$ Here the balance between the first and second term depends on the trade-off between the log of dimensionality $d$ and the spectral decay parameter $\rho$. ### Lasso-type constraints. {#sec:lasso_type_constraints} Consider now $B = \{f: \norm{f}_{1} \leq W \}$, where $\norm{\cdot}_{1}$ is the $\ell_{1}-$norm in the space $\mathbb{R}^{d}$. For a fixed tree $T$ and split family $\paren{i_{s},t_{s}}_{s \in T^{\circ}}$, the empirical Rademacher complexity $\hat{\mathfrak{R}}(\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}})$ can be upper bounded as follows: \[lem:rad\_lasso\_con\] $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}}) &\leq \frac{W\sqrt{2\ell\sum_{i=1}^{n}\norm{\bx_{i}}_{\infty}^{2}}}{{n}}\paren{1+4\sqrt{\log(d)}}. \end{aligned}$$ Repeating the similar arguments as in the case with $\ell_{2}-$penalty we obtain: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}}} = \frac{2}{n}\ee{}{\sup_{f = (f_{1},\ldots,f_{\ell})} \sum_{i=1}^{n}\epsilon_{i}f(\bx_{i})} &= \sum_{j=1}^{\ell}\frac{2}{n}\ee{}{\sup_{f_{j} \in B} \inner{f_{j},\sum_{i: \bx_{i} \in L_{j}}^{}\epsilon_{i}\bx_{i}}} \\ & = \sum_{j=1}^{\ell}\frac{r_{j}}{n}\hat{\mathfrak{R}}_{\mathbb{S}}\paren{\hat{\mathcal{F}}_{j}}, \end{aligned}$$ where $r_{j}=\abs{L_{j}}$ and $\hat{\mathcal{F}}_{j}$ is the empirical Rademacher complexity of class of functions, such that $\norm{f}_{1} \leq 1$ computed on the $r_{j}$ samples. Using Proposition 3 from [@maurer2012structured] with $\mathcal{P}$ being the set of orthogonal projectors on the coordinates, we obtain (recalling that $\norm{x}_{2} \leq X$, which implies that $\norm{x}_{\infty} \leq X$): $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\hat{\mathcal{F}}_{j}} \leq \frac{W\sqrt{2\sum_{i: \bx_{i} \in L_{j}} \norm{\bx_{i}}^{2}_{\infty}}}{r_{j}} \paren{1+4\sqrt{\log{d}}}, \end{aligned}$$ which implies $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}}} \leq \frac{2}{n}W\paren{1+4\sqrt{\log{d}}}\sum_{j=1}^{\ell}\sqrt{2\sum_{i: \bx_{i} \in L_{j}} \norm{\bx_{i}}^{2}_{\infty}} \leq \frac{2^{\frac{3}{2}}\ell^{\frac{1}{2}}W\sqrt{\sum_{i}\norm{\bx_{i}}^{2}_{\infty}}}{{n}}\paren{1+4\sqrt{\log{d}}}, \end{aligned}$$ where in the last inequality we just used Jensen’s inequality in the similar fashion as for $\ell_{2}-$norm constaints. The Lemma is therefore proved. In the similar way, as we used above, plugging in the result of Lemma \[lem:rad\_lasso\_con\] into \[lem: help\_lem01\] we get the following result. \[prop:Lasso\_rad\_gen\] Let the function class $\mathcal{F}$ be as given by Equation  with $B=\{\norm{f}_{1} \leq W\}$. 
Then the following upper bounds for the empirical and true Rademacher complexity of the class $\mathcal{F}$ hold: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}} &\leq \frac{W\sqrt{\ell}}{\sqrt{n}} \paren{\sqrt{\frac{2}{n}\sum_{j}\norm{\bx_{j}}_{\infty}^{2}}\paren{1 + 4 \sqrt{\log{d}}} + 4\sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{{\log{enD}}}}; \\ {\mathfrak{R}}_{n}\paren{\mathcal{F}} &\leq \frac{W\sqrt{\ell}}{\sqrt{n}} \paren{\sqrt{2\ee{}{\norm{\bx_{j}}_{\infty}^{2}}}\paren{1 + 4 \sqrt{\log{d}}} + 4\sqrt{ \ee[1]{}{\norm[1]{\hat{\Sigma}}_{op}}}\sqrt{{\log{enD}}}}. \end{aligned}$$ Furthermore, in a similar vein to the $\ell_{2}$ regularization case, bounds on the generalization error are obtained by simply applying Theorem \[thm:main\_theorem01\] and plugging in the upper bounds on the empirical Rademacher complexity: with probability at least $1-\delta/2$ we have for all $f \in \mathcal{{F}}$: $$\begin{aligned} L(f) \leq L_{n}(f) & + C_{1} \paren{ \frac{W\sqrt{\ell}}{\sqrt{n}} \paren{\sqrt{\frac{2}{n}\sum_{j=1}^{n}\norm{\bx_{j}}_{\infty}^{2}}\paren{1 + 4 \sqrt{\log{d}}} + 4\sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{{\log{enD}}}} + WK\sqrt{\frac{\log(\frac{2}{\delta})}{2n}}} \\ & + C\sqrt{\frac{\log(\frac{2}{\delta})}{2n}}. \end{aligned}$$ Alternatively, we have a distribution-dependent bound of the form: $$\begin{aligned} L(f) \leq L_{n}(f) & + C_{1}\paren{ \frac{W\sqrt{\ell}}{\sqrt{n}} \paren{2^{\frac{1}{2}}\sqrt{\ee{}{\norm{\bx}_{\infty}^{2}}}\paren{1 + 4 \sqrt{\log{d}}} + 4\sqrt{\ee{}{\norm[1]{\hat{\Sigma}}_{op}}}\sqrt{{\log{enD}}}}} + C\sqrt{\frac{\log(\frac{2}{\delta})}{2n}}, \end{aligned}$$ where $C_{1} =4(R+WK)$, $C = (R+WK)^{2}$. Discussion of the results ------------------------- \[rem:gen\_error\_lasso\] Notice that the second complexity term in the bound on the Rademacher complexity in Proposition \[prop:Lasso\_rad\_gen\] is the same as in the analogous result with the $\ell_{2}$ constraints. For the first complexity term, it holds $\frac{1}{n}\sum_{j=1}^{n}\norm{\bx_{j}}^{2}_{\infty} \leq \frac{1}{n}\sum_{j=1}^{n}\norm{\bx_{j}}^{2}_{2} = \tr{\hat{\Sigma}} $ and correspondingly $\ee{}{\norm{\bx}_{\infty}^{2}} \leq \tr{\Sigma}$. On the other hand, we have an additional factor of order $\sqrt{\log{d}}$ due to the LASSO-type constraints. Thus, whether LASSO-type regularization gives better bounds for the generalization error in comparison to the $\ell_{2}$ penalty depends on the magnitude of the ratio $r := \tr{(\Sigma)}/{\ee{}{{\norm{\bx}}_{\infty}^{2}}}$ (or its empirical counterpart $\hat{r} := \tr{(\hat{\Sigma})}/({\frac{1}{n}\sum_{j}\norm{\bx_{j}}_{\infty}^{2}})$). Namely, the LASSO-type penalty approach gives a better bound if $r$ or $\hat{r}$ are at least of order $\log{d}$, where $d$ is the dimensionality of the regression space. In the simple case $\Sigma=\mathbbm{I}_d$ and assuming the coordinates have subgaussian distribution, we will in fact have $\tr{\Sigma}=d$ while $\ee{}{\norm{\bx}_{\infty}^{2}} \lesssim \log(d)$. Theoretical analysis of decision trees (in particular regression trees) is certainly not a new topic and has been considered in various settings before. For the case of $1$-dim classification, bounds for the generalization error with the $0/1$ loss are given in [@Mansur:99]. Furthermore, for multi-class classification the analysis of statistical performance (in terms of bounds for the generalization error) of a broad class of composite decision trees and forests is given in [@Salvo:16].
Although these results are not formally comparable to our setting, our bounds have the same order of magnitude in terms of the sample size and number of leaves as in [@Mansur:99]. Furthermore, the error bounds of [@Salvo:16] scale linearly with the number of leaves, whereas we have a sublinear scaling ($\sqrt{\ell}$) while having the same order of magnitude in the number of training samples ($1/\sqrt{n}$). In the regression setting, the author of the recent work [@Lauer:17] considers the theoretical analysis of the switching regression framework (which includes the framework of regression decision trees). However, this framework does not consider the regularization of the models on the partitioned space. In this setting, using a similar Rademacher complexity toolbox for statistical error control, the author obtains in Equation $(16)$ a risk upper bound which scales linearly with the number of partition cells (which corresponds to the number of leaves in the regression tree scenario and is comparably worse than our setting). Also, the author provides only the dominant term of the upper bound, whereas (as we stressed before) in our paper we investigate the influence of the lower order terms (in terms of scaling in the logarithm of the dimension and in the norm of the covariance matrix). Using a more advanced chaining argument, in Section 3.B.2 the author shows an upper bound on the Rademacher complexity in the case of linear regression which matches (in the dominant term) the worst case of our bound of Proposition \[eq:rad\_comp\_ell\_2\] (when the empirical covariance matrix is close to the identity). Notice that our analysis directly implies bounds for the well-known particular (and much simpler) case of the algorithm where the binary regression tree structure is combined with *constant* prediction models on the leaves. This algorithm is known as CART (see the original work [@Breiman:84], also [@Donoho:97]). Using the formalism we considered in our setting, this type of model class can be described as follows: $$\begin{aligned} \label{func_class_pc} \mathcal{F}_{c} := \{\overline{f} : \overline{f} = \paren{ T, (i_{s},t_{s})_{s \in T^{\circ}},(\overline{f}_{k})_{k \in \partial T}}, T \in \mathcal{T}_{\ell}, \paren{i_s,t_s} \in [D]\times \mathbb{R}, \overline{f}_{k}= \frac{1}{\abs{\{i:\, \bx_{i} \in \partial T_{k}\}}}\sum_{i}^{}y_{i}\mbi_{ \bx_{i} \in \partial T_{k}} \}, \end{aligned}$$ where the splits are done by minimizing the empirical $\ell_{2}-$error. In the work [@Nedelec:03] the statistical performance of pruned CART decision trees of type \[func\_class\_pc\] (where pruning is performed from the maximal tree) was analyzed in a setting different from ours. Namely, in [@Nedelec:03] oracle-type inequalities for a fixed tree structure in the frameworks of Gaussian and bounded regression have been established. We notice, however, that our results actually apply to a much broader setting which includes CART as a specific case. We point out that, although the estimation error will be the same, our PLRT methodology will have a smaller empirical error than CART, since instead of greedy error minimization it employs exact optimization. Since the generalization error is bounded by the empirical error “plus” the estimation error, the second term is bounded by the same quantity for both methods but the first term is smaller for our method, which also yields a better bound on the generalization error.
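To make the contrast between the classes $\mathcal{F}$ and $\mathcal{F}_{c}$ concrete, the following minimal sketch (with our own, purely illustrative data structures) shows how a fitted tree of the form $\paren{T,(i_{s},t_{s})_{s\in T^{\circ}},(f_{k})_{k\in\partial T}}$ is evaluated on a new point: the routing through the splits is identical in both cases, and only the leaf model changes from a linear predictor to a constant.

``` python
import numpy as np

def route_to_leaf(psi_x, nodes, root=0):
    """Follow the splits (i_s, t_s) from the root: go right iff psi_x[i_s] >= t_s.
    `nodes` maps a node id to ('split', i_s, t_s, left_id, right_id) or ('leaf', j)."""
    node = nodes[root]
    while node[0] == 'split':
        _, i_s, t_s, left, right = node
        node = nodes[right] if psi_x[i_s] >= t_s else nodes[left]
    return node[1]

def predict_plrt(x, psi_x, nodes, leaf_vectors):
    """Piecewise *linear* prediction <f_j, x> with the routed leaf's vector f_j."""
    return float(leaf_vectors[route_to_leaf(psi_x, nodes)] @ x)

def predict_constant_tree(psi_x, nodes, leaf_constants):
    """Piecewise *constant* (CART-style) prediction: the routed leaf's fitted constant."""
    return leaf_constants[route_to_leaf(psi_x, nodes)]
```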
Combining PLRT with a variable selection procedure {#subsec:variable_select} ------------------------------------------------ In this section we extend the setting by considering the application of some general feature selection rule (data-dependent, and considered as a black box) before applying the PLRT algorithm, and study its influence on the theoretical generalization performance. Let $s \in \mathbb{N}$ be the target number of features which a learner aims to select. Variable selection for the PLRT learning algorithm can be implemented in different ways, of which we consider the following. At step $0$ the learner can perform a selection of $s$ coordinates from the $d-$dimensional vector and build a training sample $\mathbb{S}^{v} = \{\bx_{i}^{v}, \psi_{i},y_{i}\}_{i=1}^{n}$, where $\bx_{i}^{v}$ stands for the vector $\bx_{i}$ which keeps only the $s$ selected features given by an index vector $v$, and apply the learning procedure to the sample $\mathbb{S}^{v}$ instead of the original $\mathbb{S}$. Notice that in this case the dimensionality of the feature representation remains the same. In Appendix \[sec:appendix\] we also consider the alternative situation of interest in which the feature selection is performed separately at each leaf of the tree (i.e. the set of selected features can change from leaf to leaf). The previous theoretical arguments controlling the deviations via the Rademacher complexity can be extended to the class of decision trees which includes a variable selection procedure in the following way. For a given number $s$ define the set $\overline{S} = \{a \in \{0,1\}^{d}: \sum a_{i} =s \}$ and for a vector $f \in \mathbb{R}^{d}$ denote by $f^{a}$ its $s-$dimensional restriction to the coordinates $i \in \{1,\ldots, d\}$ for which $a_{i} =1$. Define also for $\bw \in \mathbb{R}^{d}$, ${s_{0}}(\bw) = \sharp \{i: \bw_{i} \neq 0 \}$. Given some constraint set $B$ and a number $s \in \mathbb{N}$ we define $B^{s} = \{\bw: \bw \in B, s_{0}\paren{\bw}=s\}$. Now consider the extension of the class of PLRT with a variable selection (PLRTVS) procedure, which can be formally presented as follows: $$\begin{aligned} \label{func_class02} \mathcal{F}_{s} := \{f : f = \paren{ T, (i_{k},t_{k})_{k \in T^{\circ}},(f_{m})_{m \in \partial T}}, T \in \mathcal{T}_{\ell}, \paren{i_k,t_{k}} \in [D]\times\mathbb{R}, f_{m} \in B^{s} \}.\end{aligned}$$ We remark that the restriction to $B^{s}$ instead of $B$ captures all possible choices of the model $f$ on the leaves according to the constraint set $B$ with *arbitrary* $s$ coordinates (which were chosen by a variable selection procedure). Using a similar argument, for a fixed tree $T \in \mathcal{T}_{l}$, some splits generated by the sequence $(i_{k},t_{k})_{k \in T^{\circ}} $ and some set of variables $\overline{s} \in \overline{S}$ we have a model class which is the product of all possible models on the leaves and such that each $\bx_{i} \in \mathbb{S}$ belongs to exactly one leaf $L_{j}$ and is restricted to the coordinates indicated by $\overline{s}$ (and thus of dimension $s$).
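As a point of reference, the root-level selection scheme described above amounts to the following minimal wrapper; it is a sketch in which `select_features` and `train_plrt` are hypothetical placeholders for the black-box selection rule and the tree-fitting routine, not part of our implementation.

``` python
def train_plrt_with_root_selection(X, y, psi, s, select_features, train_plrt):
    """Root-level variable selection (PLRTVS): choose s of the d regression
    coordinates with an arbitrary (black-box) rule, restrict every x_i to those
    coordinates, and train a PLRT on the restricted sample S^v.
    The feature representation psi used for the splits is left unchanged."""
    v = select_features(X, y, s)   # index vector of the s kept coordinates
    X_v = X[:, v]                  # x_i^v: restricted regression inputs
    model = train_plrt(X_v, y, psi)
    return model, v
```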
The model class $\mathcal{F}$ can be represented as follows: $$\begin{aligned} \label{eq:union_select} \mathcal{F}_{s} = \bigcup_{T,(i_{k},t_{k}),\overline{s}} \mathcal{F}_{T,(i_k,t_k)_{k \in T^{\circ}},\overline{s} \in \overline{S}},\end{aligned}$$ where $\mathcal{F}_{T,(i_{s},t_{s})_{s \in T^{\circ}},\overline{s}} = \{f^{\overline{s}}: f^{\overline{s}} = (f^{\overline{s}}_{1},\ldots,f^{\overline{s}}_{\ell}): \forall x \in \mathcal{X}, f^{\overline{s}}(x) = \sum_{j} \inner{f^{\overline{s}}_{j},x}\mathbbm{1}(x \in L_{j}), f^{\overline{s}}_{j} \in \partial T \}$. Now, in order to study the Rademacher complexity of the class $\mathcal{{F}}_{s}$ we can apply the same scheme as before. Once more, because of the real-valued thresholds $t_k$ this union is not finite; however using a classical argument same as in the proof of Lemma \[lem: help\_lem01\] we reduce it to finite union. \[lem:card02\] $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}} \paren{\mathcal{F}_{s}} = \hat{\mathfrak{R}}_{\mathbb{S}} \paren[4]{\bigcup_{T,(i_{k},t_{k}),\overline{s}} \mathcal{F}_{T,(i_k,t_k)_{k \in T^{\circ}},\overline{s} \in \overline{S}}} &\leq \max_{T,i_{s},t_{s},\overline{s}}\hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{T,i_{s},t_{s},\overline{s}}) + 4\mathcal{M}\paren{\sqrt{\frac{\ell\log{enD} + s \log\paren{\frac{de}{s}}}{n}} }, \\ \end{aligned}$$ where $\mathcal{M} = \sqrt{\sup\limits_{f \in \cF}\frac{1}{n}\sum_{i=1}^{n}f^{2}(\bx_{i})}$. Proof is based on the observation that the number of possible choices of $s$ coordinates from $d$ is $\binom{d}{s}$; the other part of the proof replies the argument made in the \[lem:card\] . For the bound on the binomial coefficient we need use that $s! \geq \paren{\frac{s}{e}}^{s}$, from which we can derive that: $$\begin{aligned} \binom{d}{s} = \frac{\prod_{j=0}^{s-1}\paren{d-j}}{s!} \leq \frac{d^{s}}{s!} \leq \paren{\frac{de}{s}}^{s}\end{aligned}$$ We demonstrate the influence of the variable selection on the statistical performance for LASSO-type penalization constraints on leaves. In a very similar vein, we provide data-dependent and distribution-dependent bounds for the empirical Rademacher complexity as well as the generalization error high probability upper bound. Using the analogues of Lemma \[lem: help\_lem01\] and Theorem \[thm:main\_theorem01\] for class $\mathcal{{F}}_{s}$ and LASSO-type regularization on $s$ selected variables, we get: \[prop:Lasso\_Varsel\] Let decision class $\mathcal{F}_{s}$ be as given in Equation with $B=\{\norm{f}_{1} \leq W\}$. 
Then the following upper bound for the Rademacher complexity of the class $\mathcal{F}_{s}$ is true: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{s}) &\leq \frac{\sqrt{\ell}W}{\sqrt{n}} \paren[4]{ \sqrt{\frac{2}{n} \sum_{i=1}^{n}\norm{\bx_{j}}^{2}_{\infty}}\paren{1 + 4\sqrt{\log(s)}} + 4 \sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{neD}}} + \frac{16\sqrt{s}W}{\sqrt{n}}\sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{\frac{de}{s}}} \\ {\mathfrak{R}}_{n}(\mathcal{F}_{s}) &\leq \frac{\sqrt{\ell}W}{\sqrt{n}} \paren[3]{\sqrt{2\ee{}{\norm{\bx}_{\infty}^{2}}}\paren{1 + 4\sqrt{\log(s)}} + 4 \sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{neD}}} + \frac{16\sqrt{s}W}{\sqrt{n}}\sqrt{\ee{}{\norm[1]{\hat{\Sigma}}_{op}}}\sqrt{\log\paren{\frac{de}{s}}} \end{aligned}$$ Also, with probability at least $1-\delta/2$ we obtain that for all $f \in \mathcal{{F}}_{s}$ it holds: $$\begin{aligned} L(f) &\leq L_{n}(f) + \frac{C_{1}\sqrt{\ell}W }{\sqrt{n}} \paren[4]{\sqrt{\frac{2}{n} \sum_{i=1}^{n}\norm{\bx_{j}}^{2}_{\infty}}\paren{1 + 4\sqrt{\log(s)}} +4 \sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{neD}}} \\ &+\frac{8C_{1}\sqrt{s}W}{\sqrt{n}}\sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{\frac{de}{s}}} + C_{1}WK \sqrt{\frac{\log(\frac{2}{\delta})}{2n}} + C\sqrt{\frac{\log{\paren{\frac{2}{\delta}}}}{2n}}, \end{aligned}$$ where constants $C_{1}=4(R+WK)$, $C=(R+WK)^{2}$ are as before. Discussion: generalization properties of variable selection procedure. ---------------------------------------------------------------------- Firstly, we observe that using the uniform boundedness assumption for the norm of random variables ${\bx_{i}}$ by $K$ we can obtain the data independent bounds for Rademacher complexity (and for the high-probability deviations of generalization error) of the underlying model classes, which are specified by corresponding norm. Futher control of the Rademacher complexity (and consequently for the generalization error ) is enabled through assumptions on the distributions of the input variable $\bx$. Firstly, from the results of the Propositions \[eq:rad\_comp\_ell\_2\], \[prop:Lasso\_rad\_gen\], \[prop:Lasso\_Varsel\], we observe that the convergence rates for generalization error are optimal up to the $\log(n)$ factor. Also all bounds on the Rademacher complexities scale linearly with the norm constraint on the prediction vector $f$. As it was already mentioned before, both Proposition \[eq:rad\_comp\_ell\_2\] and Proposition \[prop:Lasso\_rad\_gen\] imply high probability deviation bounds (both empirical and distribution dependent) for the statistical risk $L(f)$ and can be used for confidence intervals construction. Analysing the case of variable selection procedure at root with the LASSO-penalized regressors at leaves, we notice that it reduces the impact of the dimensionality of the underlying linear model class (from $\log{d}$ to $\log{s}$ in the complexity term), adding however an additional factor of $\sqrt{{s\log{\paren{\frac{de}{s}}}}/{n}}$ (where the dependence on the dimension of the regression space is only in $\log(d)$ term) for the generality of the selection rule (which in our framework can be arbitrary). It is worth to notice that our analysis can be naturally extended to the case in which at each node the covariates are projected on the lower-dimensional linear manifold (which may still depend on the internal nodes, in which the split is performed). 
In this case our analysis, as well as the discussions of Remarks \[rem:gen\_error\] and \[rem:gen\_error\_lasso\], hold with $d$ replaced by $m$, thus leading to sharper bounds for this particular subclass of PLRTs. We notice that in this general case (where splits are data-dependent), in order to control the complexity of the functional class (i.e. apply the Rademacher-type analysis in the same vein as in Proposition \[eq:rad\_comp\_ell\_2\] or Proposition \[prop:Lasso\_rad\_gen\] ), we need to have an upper bound on the projected dimension $m$. The feature selection procedure in the regression space $\mathcal{X}$ can also be performed both in the internal nodes, for finding the penalized least squares solution, and additionally on the leaves, to build the final regressors. More precisely, at each internal node $1,\ldots, \ell-1$ we select $s$ features among $d$ from the available dataset $\bX$ and find the best split by minimizing the (penalized) cumulative least squares loss based on the selected $s$ features. After the tree has been built, at each leaf ($1,\ldots,\ell$) we select $s$ (possibly different) features from the $d$ available ones and compute the regression solutions. Thus, the feature selection procedure is performed $2\ell-1$ times. Formally, this class of decision trees allows the following union representation: $$\begin{aligned} \label{eq:class_str_all} \mathcal{F}_{\ell,sel} = \bigcup_{\substack{T \in \cT_l\\(i_{k},t_{k})_{k} \in ([D] \times \mbr)^{\cT^\circ} \\ \paren{\overline{s}}_{p} \in \overline{S}^{\otimes 2\ell-1}}}\mathcal{F}_{T,\paren{(i_{k},t_{k})_{k\in T^{\circ}}}, \paren{\overline{s}}_{p}}. \end{aligned}$$ The aforementioned reasoning in Section \[subsec:variable\_select\] (with the feature selection procedure only at the root) can be straightforwardly extended to the case when we use feature selection at each internal node, and we obtain the following analogues of Lemma \[lem:card02\] and Proposition \[prop:Lasso\_Varsel\]. \[lem:card03\] For the log-cardinality of the union of the sets in it holds: $$\begin{aligned} \log(\abs{\mathcal{{F}}_{\ell,sel}} ) \leq \ell\paren{ \log({enD}) + 2s\log{\paren{\frac{de}{s}}}}. \end{aligned}$$ \[prop:Lasso\_Varsel\_all\] Let the function class $\mathcal{F}_{\ell,sel}$ be as given in the equation  with $B=\{\norm{f}_{1} \leq W\}$. Then the following upper bound for the Rademacher complexity of the class $\mathcal{F}_{\ell,sel}$ holds: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{\ell,sel}) \leq \frac{\sqrt{\ell}W}{\sqrt{n}} \paren[4]{2^{\frac{1}{2}} \sqrt{\frac{1}{n} \sum_{i=1}^{n}\norm{\bx_{i}}^{2}_{\infty}}\paren{1 + 4\sqrt{\log(s)}} + 4 \sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{neD}} + \frac{8\sqrt{s}}{\sqrt{n}}\sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{\frac{de}{s}}}} .
\end{aligned}$$ Also, with probability at least $1-\delta/2$ we obtain that for all $f \in \mathcal{{F}}_{\ell,sel}$ it holds: $$\begin{aligned} L(f) &\leq L_{n}(f) + C_{1} \paren[3]{ \frac{\sqrt{\ell}W}{\sqrt{n}} \paren[2]{2^{\frac{1}{2}} \sqrt{\frac{1}{n} \sum_{i=1}^{n}\norm{\bx_{i}}^{2}_{\infty}}\paren{1 + 4\sqrt{\log(s)}} + 4 \sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{neD}}} }\\ &+\frac{8C_{1}\sqrt{s\ell}W}{\sqrt{n}}\sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{\frac{de}{s}}} + C_{1}WK \sqrt{\frac{\log(\frac{2}{\delta})}{2n}} + C\sqrt{\frac{\log{\frac{2}{\delta}}}{2n}}. \end{aligned}$$ Tractability Analysis {#sec:ctn} ===================== As noted in the Introduction, the tractability and stability[^1] of using piecewise linear regressors for split optimization have long been called into question (see for example [@dobra2002secret; @Potts2005; @Nata]). In the following we demonstrate that the full optimization can be efficiently solved. Computational Complexity ------------------------- The optimization problem to be solved for every candidate partitioning $\left(A,A^c\right)$ has the form $$\left\| \bX_A w - Y_A \right\|^2_P + \lambda \left\| w-w_0\right\|^2_Q,$$ which has an explicit solution $$w_A= (\bX_A^T P \bX_A + \lambda Q)^{-1} (\bX_A^T P Y_A + \lambda Q w_0).$$ As first noted by [@torgo2002comp], an efficient algorithm will sort the thresholds $\psi^{i}_{k}$ before scanning them, and then sequentially move one position in the sorted list at each iteration. Thus, once it has evaluated partition $A$ it will move on to evaluate the next partition, $A \cup \{i\}$, which implies that $$w_{A \cup \{i\}} = \left( \bX^T_A P \bX_A + \bx_i ^T P \bx_i + \lambda Q \right)^{-1} \left( \bX_A^T P Y_A +\bx_i^T P y_i + \lambda Q w_0 \right). \label{eq:k}$$ The first factor in the above product (eq. \[eq:k\]) can be expressed via the Sherman-Morrison formula. Therefore, the complexity of computing the above inverse is $O(d^2)$, given the inverse $\left( \bX^T_A P \bX_A + \lambda Q \right)^{-1}$. Thus, given the sorting of the $n$ samples along the $D$ dimensions, which itself has a complexity $O(Dn\log{n})$, the complexity of finding the optimal split at each node is $O(nDd^2)$. It depends quadratically on the underlying data dimension $d$ and only linearly on $D$ (the dimension of the feature space). The complexity of the proposed method is $O(Dn\log{n} + Dnd^2)$ whereas the complexity of the analogous model, which uses piecewise constant splits, is $O(Dn\log{n} + Dn)$. Speeding up ----------- Recall that $L^\lambda_{i,k}$ is defined as in . For each candidate split $i,k$, in order to evaluate $L^\lambda_{i,k}$ we need to compute the two losses $$l^\lambda_{i,k} = \min_{w_{i,k} \in \mbr^{d}} \paren{ \norm[1]{\bX_{A^{}_{i,k}}{w}^{}_{i,k}-Y_{A^{}_{i,k}}}_{P}^{2} + \lambda \| w_{i,k} - w_0\|^2_Q}$$ $$r^\lambda_{i,k}= \min_{w^c_{i,k} \in \mbr^{d}} \paren{\norm[1] {\bX_{A^{c}_{i,k}}{w}^{c}_{i,k}-Y_{A^{c}_{i,k}}}_{P}^{2} + \lambda \| {w}^{c}_{i,k} - {w}_0\|^2_Q}.$$ Assume that we have $N$ samples at the current node, which should be divided into two sets by the splitting procedure. We can significantly speed up the part of the algorithm which performs tree construction by noticing that $\forall m>k, l^\lambda_{i,m} \geq l^\lambda_{i,k}$ and $\forall m<k, r^\lambda_{i,m} \geq r^\lambda_{i,k}$. After the first candidate split has been evaluated, the algorithm possesses some candidate pair $i_T,k_T$ for which $L^\lambda_{i_T,k_T}$ is the smallest encountered so far.
Therefore the algorithm can avoid unnecessary computations by employing the following strategy: at each iteration for given $j \in [D]$ and $k \in [N]$, instead of computing $l^\lambda_{j,k}$ and $ r^\lambda_{j,k} $, the algorithm calculates $l^\lambda_{j,k}$ and $ r^\lambda_{j,N-k} $. If for some $k$, $l^\lambda_{j,k} + r^\lambda_{j,N-k} \geq L^\lambda_{i_T,k_T}$ then $\forall m, k \leq m \leq N-k$ it also holds that $l^\lambda_{j,m} + r^\lambda_{j,m} \geq L^\lambda_{i_T,k_T}$ and those computations can be avoided. We note that this speedup does not reduce the computational complexity of the algorithm; however, it is an exact algorithm and, as seen in Fig. \[fig:trac2\], can lead to significantly lower training times in practice. We also illustrate the results for two non-exact versions of the algorithm which use approximations of the non-calculated losses to quickly identify non-promising splits and data dimensions and cut them from the computations. In particular we consider the following approaches:

1. $\forall m, k \leq m \leq N-k$ and for fixed $j \in [D]$ we approximate $$L^\lambda_{j,m} \approx l^\lambda_{j,k} + r^\lambda_{j,N-k} + \left(N-2k\right) \left(\min(l^\lambda_{j,k}-\lambda \| {w}_{j,k} - {w}_0\|^2_Q,r^\lambda_{j,N-k}-\lambda \| {w}^{c}_{j,N-k} - {w}_0\|^2_Q)\right).$$ If for a given $k$, $L^\lambda_{j,m} \geq L^\lambda_{i_T,k_T}$, then as with the exact algorithm we forgo calculating $l^\lambda_{j,m},r^\lambda_{j,N-m}, \forall m, k \leq m \leq N-k$.

2. In a second approach we consider, for all $m, k \leq m \leq N-k$, the following approximation: $$L^\lambda_{j,m} \approx l^\lambda_{j,k} + r^\lambda_{j,N-k} + \left(N-2k\right) \left(\max(l^\lambda_{j,k}-\lambda \| {w}_{j,k} - {w}_0\|^2_Q,r^\lambda_{j,N-k}-\lambda \| {w^c}_{j,N-k} - {w}_0\|^2_Q)\right).$$ As before, if for a given $k$, $L^\lambda_{j,m} \geq L^\lambda_{i_T,k_T}$, we forgo calculating $l^\lambda_{j,m},r^\lambda_{j,N-m}, \forall m, k \leq m \leq N-k$.

Training Times in Practice -------------------------- As mentioned above, the complexity of calculating the optimal split, once the thresholds are sorted, is $O(nDd^2)$, which can involve a significant number of computations. In order to empirically evaluate the tractability of the proposed algorithm, we present in Fig. \[fig:trac2\] the time required on a Tesla-K80 GPU to build a full regression tree of depth $10$ for various datasets.
As can be seen, a careful algorithmic implementation, in conjunction with the use of GPU technology, makes the models tractable despite their considerable complexity. Furthermore, by employing the proposed approximations, the speedup of the algorithm can be significant and in some cases provides a gain of up to an order of magnitude. In the appendix we present an empirical analysis of the effects of the proposed approximations on the accuracy of the trained models and demonstrate that the effects of the approximation can be negligible in many experiments.

  Dataset                         KDD04   Forest   KinH   KinM   MNISTOE   MNISTBS   CT Slice   Air   Energy
  ------------------------------ ------- -------- ------ ------ --------- --------- ---------- ----- --------
  CART/M5                         10      5        0.5    0.5    160       124       75         0.5   2.5
  PLRT (no speedup)               6202    1454     550    532    60286     59583     14660      221   2247
  PLRT (exact speedup)            1803    1276     481    458    29870     33171     5928       84    1878
  PLRT (approx. speedup (min))    1014    840      276    236    17379     16393     4590       62    1087
  PLRT (approx. speedup (max))    659     444      148    154    9240      8831      3222       55    421

Empirical Evaluation {#sec:emp_eval}
====================

In this section we present an empirical evaluation of the generalization capabilities of the proposed PLRT algorithm. In particular, we show results on a number of different regression tasks comparing different regression tree algorithms, among them

- PLRT trees that have been built using an $l_2$ regularized linear regression criterion to split the internal nodes and which similarly employ $l_2$ regularization at the leaves.

- PLRT trees that have been built using an $l_2$ regularized linear regression criterion to split the internal nodes but which employ $l_1$ regularization at the leaves.

- M5 trees that are built using a simple piecewise constant model but which employ $l_2$ regularized linear regression at the leaves. We slightly adapt the algorithm to allow it to use all variables to compute the linear regressors at its leaves.

- In the few cases where CART trees are competitive we also provide their results. As has been noted, CART trees typically underperform empirically when compared to M5 [@wang1997inducing; @vogel07scalable].

Summary statistics on the datasets are provided in Table \[tbl:datasets\]; we refer the reader to the Appendix for a detailed presentation of the various tasks. We also notice that, in order to distinguish between the regularization parameters for the $l_2$ and $l_1$ penalties, we use $\gamma$ in the case of the $\ell_{2}$ norm and $\lambda$ for the LASSO-type penalty. For example, in the plots PLRT $\gamma=1.0$, $\lambda=0.1$ denotes a tree built by solving $\left\| \bX_A w - Y_A \right\|^2_P + \gamma \left\| w-w_0\right\|^2_Q$ at the internal nodes, while using the LASSO objective $ \left\| \bX_A w - Y_A \right\|^2 + \lambda \left\| w\right\|_{1}$ in the leaves. We note that for the experiments presented here we set $P,Q$ to be the identity matrices of appropriate dimension.

  : Datasets description[]{data-label="tbl:datasets"}

- [**KDD-cup 2004**]{} which was part of the KDD-cup 2004 competition data. In particular we use the data from the physics competition which comprises data on 78 properties of 150.000 particles. As in [@vogel07scalable], we use the value of the 24th column as our target value.

- [**Forest**]{}, here the task is to predict the forest cover type from $54$ cartographic variables. As in [@Ronan], we create a regression task by assigning a label of $+1$ to samples corresponding to cover type $2$ and $-1$ to the rest. We use $50.000$ samples for training and another $50.000$ for testing.
- [**CT-Slice**]{} comprises $53500$ CT images, from $74$ different patients, represented by two histograms, one for bone structure and one for air inclusions. The dimensionality of the data is $383$. The target value of the task is the relative location of the image on the axial axis. We assign images from two thirds of the patients to the training dataset, while the remaining third is kept as a test set.

- [**MNIST**]{} consists of $60.000$ training samples and $10.000$ test samples of dimensionality $784$, representing handwritten digits. In order to create a regression task, we assign a label of $+1$ to all odd digits and $-1$ to the even digits. Following [@Ronan] we create a second regression task on the MNIST dataset by assigning a label of $+1$ to the digits $0$ to $4$ and $-1$ to digits $5$ through $9$.

- [**Kinematic**]{}, which comes from a realistic simulation of the dynamics of an 8-link robot arm. The task is to predict the distance of the arm’s end-effector from some target. We use two versions of this dataset, one with medium noise and another with high noise.

As can be seen in Fig. \[fig:RES\], the proposed algorithm PLRT consistently outperforms the M5 baseline. This is due to the fact that the tree structure of PLRT has been optimized in a greedy fashion for the complex regression models ultimately placed in the leaves; M5 on the other hand optimizes the tree structure based on a different model than the one ultimately employed. This allows PLRT to obtain similar or better generalization at smaller tree depths. Though we restrict the results shown here to $\gamma \in \{10.0,1.0\}$, we provide a more detailed analysis of the effects of the $\gamma$ parameter in the Appendix. The proposed method also outperforms the simpler CART method on all datasets; in fact in most cases CART significantly underperforms and its results are suppressed in order to keep the plot compact. As highlighted by the results on the Kinematic and KDD2004 datasets, PLRT can be prone to overfitting for small-sized datasets at larger depths. Nonetheless, in all $3$ tasks involving these datasets PLRT achieves better empirical mean-squared errors when compared to M5. We also note that using $\ell_{1}-$penalization at the leaves does not lead to any added benefit in the empirical performance. This is due to the discrepancy between the usage of the regularization at the nodes (the $l_2$ distance from the weight vector of the parent node, $\| w-w_0\|$) and at the leaves (the $l_1$ regularization $\| w \|_{1}$). A more detailed empirical evaluation of the effects of the $\lambda$ parameter can be found in the Appendix. In two cases (datasets KDD2004 and Air), PLRT + LASSO performs significantly worse than PLRT and M5 and we do not show these results in the plots.

Conclusion {#sec:conc}
==========

In this paper we provided a broad analysis of the class of regression decision tree algorithms with regularized linear prediction models on the leaves. From a theoretical perspective, using the celebrated toolbox of Rademacher complexities, we obtained new high probability upper bounds for the generalization error of the regularized PLRT algorithms with $\ell_{2}$ and $\ell_{1}$ penalty constraints (also including the extension to the variable selection setting at the root). The resulting algorithms are based on the minimization of squared error splitting criteria.
We illustrated that the proposed PLRT algorithm ( together with its speedups) are numerically stable and tractable and can be efficiently implemented with the help of GPU technology. The empirical study on the diversified types of datasets reveals that the resulting algorithm outperforms the well-known piecewise constant models when using the mean-squared error metrics with $\ell_{2}$ regularized solutions. From an empirical perspective, we investigate various regularization setups and reach a key insight regarding the type of regularization needed to avoid overfitting in practice. Acknowledgments {#acknowledgments .unnumbered} =============== [OZ would like to acknowledge the full support of the Deutsche Forschungsgemeinschaft (DFG) SFB 1294. ]{} Appendix. Proofs of the main results {#sec:appendix} ===================================== Generalization error bounds of the class of Piecewise Linear Regression tree models ----------------------------------------------------------------------------------- We remind the formal definition used for the class of piecewise-linear regression trees $\mathcal{F}$. $$\begin{aligned} \label{eq:regr_trees} \mathcal{F} := \{f : f = \paren{ T, (i_{s},t_{s})_{s \in T^{\circ}},(f_{k})_{k \in \partial T}}, T \in \mathcal{T}_{\ell}, \paren{i_{s},t_s}_{s\in T^{\circ}} \in [D]\times \mathbb{R}, f_{k} \in B \},\end{aligned}$$ where $\mathcal{T}_{\ell}$ is the set of all trees, $T^{\circ}$ is the set of all internal nodes and $\partial T$ is the set of leaves. Through $\abs{A}$ we denote the cardinality of (finite) set $A$. Also we recall the following representation $$\begin{aligned} \label{eq:class_str01} \mathcal{F} = \bigcup_{\substack{T \in \cT_l\\(i_{s},t_{s})_{s \in T^{\circ}} \in ([D] \times \mbr)^{ \abs{T^\circ}}}}\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}} ,\end{aligned}$$ of the model class as the union of the smaller sub-classes, where we formally write $\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^\circ}} = \{f: f = (f_{1},\ldots,f_{\ell}): \forall x \in \mathcal{X}, f(x) = \sum_{j} \inner{f_{j},x}\mathbbm{1}(x \in L_{j}), f_{j} \in \partial T \} $ for each tree $T$ and split family $\paren{i_{s},t_{s}}_{s \in T^{\circ}}$. We present the proofs of the main results of the paper next. Observe, that for a fixed coordinate $i \in [D]$ there are $(n-1)$ different splits into two sets that can be obtained by the procedure, implemented in the algorithm. Indeed, if we consider $\psi^{i}$ sorted in decreasing order (w.r.t. samples), then any threshold $t \in [\max_{j} \{\psi_{j}^{i}: \psi_{j}^{i} < \psi_{k}^{i} \}, \psi_{k}^{i}]$ will give exact the same split as $t= \psi_{k}^{i}$. Thus, going iteratively over the (sorted) sample we obtain $n-1$ possibilities for choice of coordinate $k$ (excluding the split, when one subset is empty). The total number of possibilities at one node is therefore $(n-1)D$ and the total number of possible partitions in the fixed tree $\paren{(n-1)D}^{\ell-1}$ which means, that the infinite union $\mathcal{{F}}$ can be restricted to the finite, when for example choosing the threshold in each step $t_{s} = \psi^{i}_{k}$. Thus, we have representation that $\hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}} = \hat{\mathfrak{R}}_{\mathbb{S}} \paren[4]{\bigcup_{\substack{T \in \cT_l\\(i_s,t_s)_{s} \in ([D] \times [n])^{\abs{T^\circ}}}} \mathcal{F}_{T,(i_{s},\psi_{k_s}^{i_s})_{s\in T^{\circ}}}}$. Also, the number of different trees with $\ell$ leaves is the $\ell-1$ Catalan’s number which is $\frac{1}{l} \binom{2(l-1)}{l-1}$. 
Futhermore, by using simple inequalities $(\frac{k}{e})^k<k!<e(\frac{k}{2})^k$ we obtain $\frac{1}{\ell} \binom{2(\ell-1)}{\ell-1} \leq \frac{e^{\ell}}{\ell}$ and thus the log-cardinality of the union is bounded by $\log{ \frac{e^{\ell}}{\ell}\paren{(n-1)D}^{\ell-1}} \leq \ell \log{enD}$. Applying Corollary \[cor:rad\_union\] to $\hat{\mathfrak{R}}_{\mathbb{S}} \paren[4]{\bigcup_{\substack{T \in \cT_l\\(i_s,t_s)_{s \in T^{\circ}} \in ([D] \times [n])^{\abs{T^\circ}}}} \mathcal{F}_{T,(i_{s},\psi_{k_s}^{i_s})_{s\in T^{\circ}}}}$, and plugging the bound on the log-cardinality of the union in \[eq:class\_str01\] we obtain the claim of the Lemma. Next we present the self-contained (alternative) proof of the Rademacher complexity upper bound for the case of $\ell_{2}$ regularization constraints for the class $\mathcal{{F}}$. \[lem:Comp\_rad\_average\] Let functional class $\mathcal{F}$ as defined in and $B = \{\norm{f}_{2}\leq W\}$. Assume also that $\norm{X} \leq K$ $\mathbb{P}_{X}-$ almost surely and $\abs{Y} \leq R$ $\mathbb{P}_{Y}$- almost surely. Denote $\Sigma = \ee{}{\bx \bx^{\top}}$ the covariance matrix of random vector $\bx$ and $\hat{\Sigma}$ is the empirical analogue. Let also we have the sample $\mathbb{S}=\{\bx_{i},\psi_{i},y_{i}\}_{i=1}^{n}$ of size $n$. In this case the following bounds on empirical and true Rademacher complexities of class $\mathcal{F}$ is true: $$\begin{aligned} \hat{\mathfrak{R}}_{S}(\mathcal{F}) &\leq \sqrt{2}W\sqrt{\frac{l\log(enD)}{n}\tr{\hat{\Sigma}}} \\ \mathfrak{R}_{n}\paren{\mathcal{F}} &\leq \sqrt{2}W \sqrt{\frac{l\log(enD)\tr{\Sigma}}{n}} \end{aligned}$$ In this proof we use the notation $||\cdot||$ to denote the standard euclidean norm $\norm{\cdot}_{2}$. We enumerate leaves from $1$ to $l$ and denote through $L_{j}$ the leaf with the number $j$ and associate $(f_{1}, \ldots, f_{l})$ with the tuple of linear regression function attached to the leaves. For the fixed number of leaves $l$, the empirical Rademacher complexity for $\mathcal{F}$ can be written as follows: $$\begin{aligned} \hat{\mathfrak{R}}_{S}(\mathcal{F})&= E_{\sigma}\left[\sup\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}} \sup\limits_{(f)_{k \in \partial T} }\frac{1}{n}\sum \limits_{i=1}^{n}\sigma_{i}\langle f,\bx_{i} \rangle \right] = E_{\sigma}\left[\sup\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}} \sup\limits_{f_1,...,f_l}\frac{1}{n}\sum\limits_{j=1}^{l}\sum\limits_{i:\phi_{i} \in L_{j}}\sigma_{i}\langle f_{j},\bx_{i}\rangle\right] \\ & =\frac{1}{n} E_{\sigma}\left[\sup\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}} \sum\limits_{j=1}^{l} \sup\limits_{f_{j} \in B}\sum\limits_{i: \bx_{i} \in L_{j}}\sigma_{i}\langle f_{j},\bx_{i}\rangle\right]\\ & = \frac{1}{n} E_{\sigma}\left[\sup\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}} \sum\limits_{j=1}^{l} \sup\limits_{f_{j} \in B}\langle \ f_{j} ,\sum\limits_{i: \bx_{i} \in L_{j}}\sigma_{i}\bx_{i}\rangle\right]. 
\end{aligned}$$ Using the Cauchy-Schwartz inequality in the last equality for the scalar product of $f_{j}$ with $\sum\limits_{i: \bx_{i} \in L_{j}}\sigma_{i}\bx_{i}$ and taking the supremum over $f_{j} \in B$ we obtain: $$\label{eq:through_S} \frac{1}{n} E_{\sigma}\left[\sup\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}} \sum\limits_{j=1}^{l} \sup\limits_{f_{j}\in B}\sum\limits_{i: \bx_{i} \in L_{j}}\sigma_{i}\langle f_{j},\bx_{i}\rangle\right] \leq \frac{W}{n} E_{\sigma}\left[\sup\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}} \sum\limits_{j=1}^{l}||S_{j}||\right],$$ in which we denote $S_{j} :=\sum\limits_{i: \bx_{i} \in L_{j}}\sigma_{i}\bx_{i}$. To upper bound we use the **softmax inequality** . This gives us: $$\label{eq:softmax} E_{\sigma}\left[\sup\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}} \sum\limits_{j=1}^{l}||S_{j}||\right] \leq E_{\sigma}\left[\frac{1}{\lambda}\log \sum\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}}\exp(\lambda \sum\limits_{j=1}^{l}||S_{j}||)\right],$$ where the last sum is taken over all possible partitions $i_{s},t_{s}$ and tree structures $T$. Using Jensen’s inequality and the linearity of expectation for we obtain: $$\begin{aligned} \label{eq:prepare} E_{\sigma}\left[\frac{1}{\lambda}\log \sum\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}}\exp(\lambda \sum\limits_{j=1}^{l}||S_{j}||)\right] &\leq \frac{1}{\lambda} \log \left[\sum\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}}E_{\sigma}\left[\exp(\lambda \sum\limits_{j=1}^{l}||S_{j}||)\right]\right]. \end{aligned}$$ $S_{j}$ are mutually independent random variables (as variables in $\sigma$) as the leaves contain disjoint sets of data-points, which gives: $$\begin{aligned} \label{eq:Prod_represent} E_{\sigma} \left[\exp(\lambda \sum\limits_{j=1}^{l}||S_{j}||)\right] = \prod\limits _{j=1}^{l}E_{\sigma}\left[\exp(\lambda||S_{j}||)\right]. \end{aligned}$$ Using for each $j \in \{1,...l\}$ the inequality $e^{tx} \leq 2\cosh(tx), t>0 $ and exponential inequalities from Theorem 3 in [@pinelis1986remark] we get: $$\begin{aligned} \begin{aligned} \label{eq:trans} E_{\sigma}\left[\exp(\lambda||S_{j}||)\right] & \leq 2E_{\sigma}\left[\cosh(\lambda ||S_{j}||)\right] \leq 2 \prod\limits_{i: \bx_{i} \in L_{j}}E_{\sigma}\left[\exp(\lambda||\sigma_{i}\bx_{i}||)-\lambda||\sigma_{i}\bx_{i}||\right] \\ & = 2\prod\limits_{i: \bx_{i} \in L_{j}} \left[ \exp(\lambda||\phi_{i}||)-\lambda||\phi_{i}|| \right] \leq 2 \exp \bigg(\frac{\lambda^{2}}{2}\sum\limits_{i: \bx_{i} \in L_{j}}\norm{\bx_{i}}^{2}\bigg), \end{aligned} \end{aligned}$$ where in the last point we used the inequality $\exp(x) - x \leq \exp(\frac{x^{2}}{2}) $ for $x>0$. Inserting the results of $\eqref{eq:trans}$ and $\eqref{eq:Prod_represent}$ into $\eqref{eq:prepare}$ we get: $$\begin{aligned} \begin{aligned} \label{eq:pre_log_last} \frac{1}{\lambda} \log &\left[\sum\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}} E_{\sigma}\left[\exp(\lambda \sum\limits_{j=1}^{l}||S_{j}||)\right]\right] \\ & \leq \frac{1}{\lambda} \log \Bigg(\sum\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}} 2^{l} \exp \bigg(\frac{\lambda^{2}}{2}\sum\limits_{i=1}^{n}\norm{\bx_{i}}^{2}\bigg)\Bigg). \end{aligned} \end{aligned}$$ We denote, $\hat{S} := \sum_{i=1}^{n}\norm{\bx_{i}}^{2}$. 
By a simple upper bounding of the number of the possible splits ($((n+1)D)^{l-1}$) in the tree and the number of different trees (which is the $l-1$ Catalan’s number) we obtain: $$\begin{aligned} \label{eq:catalan_numb} \sum\limits_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}} 2^{l} \exp \bigg(\frac{\lambda^{2}}{2}\sum\limits_{i=1}^{n}\norm{\bx_{i}}^{2}\bigg) \leq \frac{1}{l} \binom{2(l-1)}{l-1} (nD)^{l-1} 2^{l} \exp\bigg(\frac{\lambda^{2}}{2}\hat{S}\bigg). \end{aligned}$$ By using $(\frac{k}{e})^k<k!<e(\frac{k}{2})^k$ we obtain $\frac{1}{l} \binom{2(l-1)}{l-1} \leq \frac{e^{l}}{l}$. Plugging this into and then plugging into and into we obtain: $$\begin{aligned} \begin{aligned} \hat{\mathfrak{R}_{S}}(\mathcal{F}) & \leq \frac{W}{n\lambda}\log \left[ \frac{(2e(n+1)D)^{l}}{l(n+1)D}\exp\bigg(\frac{\lambda^{2}}{2}\hat{S}\bigg) \right] = \frac{W}{n}\bigg(\frac{C}{\lambda}+ \frac{\lambda}{2}\hat{S}\bigg) := \frac{W}{n}\psi(\lambda). \end{aligned} \end{aligned}$$ where $C = \log \big(\frac{(2e(n+1)D)^{l}}{l(n+1)D} \big)$. After minimizing the function $\psi(\lambda)$ in $\lambda$ we obtain: $\lambda_{min} = \sqrt{\frac{2C}{\hat{S}}}$ which implies $ \hat{\mathfrak{R}_{S}}(\mathcal{F}) \leq W\psi(\lambda_{min}) = \frac{W}{n}\sqrt{2\hat{S}C} $. Upper bounding $C$ by $\log((2enD)^{l})$ we obtain: $$\begin{aligned} \hat{\mathfrak{R}}_{S}(\mathcal{F}) &\leq \sqrt{2}W\sqrt{\frac{\ell\log(2enD)}{n^{2}}\sum\limits_{i=1}^{n}\norm{\bx_{i}}^{2}} = 2^{\frac{1}{2}}W\sqrt{\frac{\ell \log(2enD)}{n}\frac{1}{n}\sum_{i=1}^{n}\tr{\bx_{i} \bx_{i}^{\top}}} \\ & = 2^{\frac{1}{2}}W\sqrt{\frac{\ell \log(2enD)}{n}\tr{\hat{\Sigma}}}, \end{aligned}$$ which proves the first part of the lemma. For the second claim we just have by using Jensen’s inequality and the exchanging linearity between trace and expectation: $$\begin{aligned} \mathfrak{R}_{n}\paren{\mathcal{F}} & \leq \sqrt{2}W\sqrt{\frac{l\log(2enD)}{n^{2}}\sum\limits_{i=1}^{n}\ee{}{\norm{\bx_{i}}^{2}}} = \sqrt{2}W\sqrt{\frac{l\log(2enD)}{n^{2}}n \ee{}{\tr{\bx \bx^{\top}}}} \\ & =\sqrt{2}W\sqrt{\frac{l\log(2enD)}{n}{\tr{\Sigma}}}. \end{aligned}$$ This bound can be improved using the general result of [@maurer2014inequality] in the sense of scaling in front of the $\sqrt{\ell}\log{enD}$ term. We consider firstly the bound on one specific tree (with fixed structure $T$ and partition $(i_{s},t_{s})_{s \in T^{\circ}}$) given by Lemma \[eq:l2\_bound\]. By linearity of expectation and taking into account, that leaves $(L_{j})_{j=1}^{\ell}$ are disjoint sets we obtain: $$\begin{aligned} \ee{}{\sup_{f = (f_{1},\ldots,f_{\ell})} \sum_{i=1}^{n}\sigma_{i}f(\bx_{i})} &= \sum_{j=1}^{\ell}\ee{}{\sup_{f_{j} \in B} \inner{f_{j},\sum_{i: \bx_{i} \in L_{j}}^{}\sigma_{i}\bx_{i}}} \\ & \leq W \sum_{j=1}^{\ell} \ee{}{\norm[2]{\sum_{i: \bx_{i} \in L_{j}}^{}\sigma_{i}\bx_{i}}_{2}} \\ & \leq W\sum_{j=1}^{\ell}\sqrt{\ee{}{\sum_{i,k: \bx_{i},\bx_{k} \in L_{j}}^{}\sigma_{i}\sigma_{k}\inner{\bx_{i},\bx_{k}}}}, \end{aligned}$$ where we used Jensen’s inequality in the last line. Since $(\sigma_{i})_{i=1}^{n}$ are independent Rademacher variables, by taking expectation, we obtain: $$\begin{aligned} \ee{}{\sup_{f = (f_{1},\ldots,f_{\ell})} \sum_{i=1}^{n}\sigma_{i}f(\bx_{i})} \leq W \sum_{j=1}^{\ell}\sqrt{\sum_{i: \bx_{i} \in L_{j}}\norm{\bx_{i}}^{2}} \leq W \sqrt{\ell} \sqrt{\sum_{i=1}^{n}\norm{\bx_{i}}^{2}}, \end{aligned}$$ where in the last inequality we used Jensen’s inequality for the function $t \mapsto \sqrt{t}$. 
The claim of the Lemma now follows from the definition of Rademacher complexity of the fixed class $\mathcal{F}_{T, (i_{s},t_{s})_{s \in T^{\circ}}}$ and the fact that $\sum_{i=1}^{n}\norm{\bx_{i}}^{2} = n \sum_{i=1}^{n}\tr{\frac{1}{n}\bx_{i} \bx_{i}^{\top}} = n \tr{\hat{\Sigma}}$, where $\hat{\Sigma} := \frac{1}{n}\sum_{i=1}^{n}\bx_{i} \bx_{i}^{\top}$ is the empirical covariance matrix. First of all, we notice that for linear prediction vector $f$ and the constant $\mathcal{M}$ from Lemma \[lem: help\_lem01\] we obtain: $$\begin{aligned} \mathcal{M} = \sqrt{\sup\limits_{f \in \mathcal{F},f\in B}\frac{1}{n}\sum_{i=1}^{n}\inner{f,\bx_{i}}^{2}}&= W\sqrt {\sup_{f \in \mathcal{F},f\in B}\inner{\frac{f}{W},\frac{1}{n}\sum_{i=1}^{n}\inner[1]{\frac{f}{W},\bx_{i}}\bx_{i}}} \\ & = W \sqrt{\sup_{f \in \mathcal{F},\norm{f}_{2} \leq 1}\inner{f,\frac{1}{n}\sum_{i=1}^{n}\inner[1]{f,\bx_{i}}\bx_{i}} } \\ & = W\sqrt{\norm{\hat{\Sigma}}_{op}}, \end{aligned}$$ where as before we denote $\hat{\Sigma} = \frac{1}{n}\sum_{i=1}^{n}\bx_{i}\bx_{i}^{\top}$ to be the empirical covariance matrix estimated from sample $\mathbb{S}$. Considering the fact that the (empirical) Rademacher complexity of one fixed (sub) class $\mathcal{F}_{T,\paren{i_{s},t_{s}}_{s \in T^{\circ}}}$ does not depend on the structure and partition (splitting family) of the tree, by applying Lemma \[lem: help\_lem01\] together with the claim of Lemma \[eq:l2\_bound\] for Rademacher complexity of $\hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}_{T,(i_s,t_{s})_{s \in T^{\circ}}}}$ we deduce: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}} \leq \frac{2^{\frac{1}{2}}\sqrt{\ell}W\sqrt{\tr{\hat{\Sigma}}}}{\sqrt{n}} + 4W\sqrt{\norm{\hat{\Sigma}}_{op}}\sqrt{\ell\frac{\log{enD}}{n}}. \end{aligned}$$ For the true Rademacher complexity, we simply take the expectation of $\hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}}$, use Jensen’s inequality and exchange trace and expectation (due to their linearity) in the first summand; thus we get: $$\begin{aligned} \label{eq:rad_bound_true02} \begin{aligned} \mathfrak{R}_{n}\paren{\mathcal{F}} & \leq \frac{2^{\frac{1}{2}}\sqrt{\ell}W\sqrt{\ee{}{\tr{\hat{\Sigma}}}}}{\sqrt{n}} + 4W\sqrt{\ee{}{\norm[1]{\hat{\Sigma}}_{op}}}\sqrt{\ell\frac{\log{2enD}}{n}} \\ &= \frac{2^{\frac{1}{2}}\sqrt{\ell}W\sqrt{\tr{\Sigma}}}{\sqrt{n}} + 4W\sqrt{\ee{}{\norm[1]{\hat{\Sigma}}_{op}}}\sqrt{\ell\frac{\log{2enD}}{n}}. \end{aligned} \end{aligned}$$ For the expectation of the operator norm of empirical covariance matrix by using triangle inequality and upper bound for the expectation of the difference between in operator norm between empirical covariance matrix and its population analogue( [@Tropp:15]) we deduce: $$\begin{aligned} \ee{}{\norm{\hat{\Sigma}}_{op}} & \leq \ee{}{\norm{\hat{\Sigma} - \Sigma }_{op}} + \norm{\Sigma}_{op} \\ & \leq \sqrt{\frac{2K\norm{\Sigma}_{op}\log(2d)}{2n}} + \frac{2K \log(2p)}{n} + \norm{\Sigma}. \end{aligned}$$ Inserting the last bound into the inequality we obtain the result for Rademacher complexity. 
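As a side remark, the data-dependent quantity appearing in this bound is straightforward to evaluate on a given sample; the sketch below (with made-up data and made-up values of $W$, $\ell$ and $D$) simply evaluates $\sqrt{2}\,W\sqrt{\ell\log(2enD)\operatorname{tr}(\hat{\Sigma})/n}$.

```python
import numpy as np

def l2_rademacher_bound(X, W, ell, D):
    """Evaluate the data-dependent bound sqrt(2) * W * sqrt(ell * log(2enD) * tr(Sigma_hat) / n)."""
    n = X.shape[0]
    Sigma_hat = (X.T @ X) / n                 # empirical (uncentred) covariance matrix
    return np.sqrt(2.0) * W * np.sqrt(ell * np.log(2 * np.e * n * D) * np.trace(Sigma_hat) / n)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))                # n = 500 samples of dimension 20
print(l2_rademacher_bound(X, W=1.0, ell=8, D=20))
```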
Furthermore, since $f(\bx) \leq \norm{f}_{2}\norm{\bx}_{2} \leq WK$, putting this together the upper bound for the empirical Rademacher complexity and the bounds for Lipschitz constant, constants $F$ and $L_{R,B,F}$ into the Theorem \[thm:main\_theorem01\] we obtain with probability at least $1-\delta/2$: $$\begin{aligned} L(f) \leq L_{n}(f) & + 4\paren{R+WK} \paren{\frac{2^{\frac{1}{2}}\sqrt{\ell}W\sqrt{\tr{\hat{\Sigma}}}}{\sqrt{n}} + 4W\sqrt{\norm{\hat{\Sigma}}_{op}}\sqrt{\ell\frac{\log{2enD}}{n}}}\\ & + 4\paren{R+WK}WK \sqrt{\frac{\log(\frac{2}{\delta})}{2n}} + \paren{R+WK}^{2}\sqrt{\frac{\log{\frac{2}{\delta}}}{2n}}. \end{aligned}$$ Finally, to obtain the alternative upper bound for the generalization error we just use the distribution dependent result for Rademacher complexity and get: $$\begin{aligned} L(f) \leq L_{n}(f) & + 4\paren{R+WK}\frac{W\sqrt{\ell}}{\sqrt{n}} \paren{{2^{\frac{1}{2}}\sqrt{\tr{\Sigma}}} + 4\sqrt{\ee{}{\norm[1]{\hat{\Sigma}}_{op}}}\sqrt{{\log{2enD}}}} + \paren{R+WK}^{2}\sqrt{\frac{\log{\frac{2}{\delta}}}{2n}}. \end{aligned}$$ Repeating the similar arguments as in the case with $\ell_{2}-$penalty we obtain: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}}} = \frac{2}{n}\ee{}{\sup_{f = (f_{1},\ldots,f_{\ell})} \sum_{i=1}^{n}\sigma_{i}f(\bx_{i})} &= \sum_{j=1}^{\ell}\frac{2}{n}\ee{}{\sup_{f_{j} \in B} \inner{f_{j},\sum_{i: \bx_{i} \in L_{j}}^{}\sigma_{i}\bx_{i}}} \\ & = \sum_{j=1}^{\ell}\frac{r_{j}}{n}\hat{\mathfrak{R}}_{\mathbb{S}}\paren{\hat{\mathcal{F}}_{j}}, \end{aligned}$$ where $r_{j}=\abs{L_{j}}$ and $\hat{\mathcal{F}}_{j}$ is the empirical Rademacher complexity of class of functions, such that $\norm{f}_{1} \leq W$ computed on the $r_{j}$ samples. Applying Proposition 3 from [@maurer2012structured] with $\mathcal{P}$ being the set of orthogonal projectors on the coordinates and $\norm{f}_{\mathcal{P}} = \norm{f}_{1}$, after rescaling the function $f$ by $W$ we obtain: : $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\hat{\mathcal{F}}_{j}} \leq \frac{W\sqrt{2\sum_{i: \bx_{i} \in L_{j}} \norm{\bx_{i}}^{2}_{\infty}}}{r_{j}} \paren{1+4\sqrt{\log{d}}}, \end{aligned}$$ which implies that $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}_{T,(i_{s},t_{s})_{s\in T^{\circ}}}} \leq \frac{1}{n}W\paren{1+4\sqrt{\log{d}}}\sum_{j=1}^{\ell}\sqrt{2\sum_{i: \bx_{i} \in L_{j}} \norm{\bx_{i}}^{2}_{\infty}} \leq \frac{2^{\frac{1}{2}}\ell^{\frac{1}{2}}W\sqrt{\sum_{i}\norm{\bx_{i}}^{2}_{\infty}}}{{n}}\paren{1+4\sqrt{\log{d}}}, \end{aligned}$$ where in the last inequality we used Jensen’s inequality in the similar fashion as for $\ell_{2}-$norm constraints for the function $t \mapsto \sqrt{t}$. The statement is therefore proved. The general scheme of the proof remains to be the same as in the proof of Proposition \[eq:rad\_comp\_ell\_2\]. Namely, since we have that for any $f \in \mathbb{R}^{d}$ $\norm{f}_{2} \leq \norm{f}_{1}$ then, for the constant $\mathcal{M}$ we obtain: $$\begin{aligned} \mathcal{M} = W\sqrt {\sup_{f \in \mathcal{F},f\in B}\inner{\frac{f}{W},\frac{1}{n}\sum_{i=1}^{n}\inner[1]{\frac{f}{W},\bx_{i}}\bx_{i}}} = W \sqrt{\sup_{f \in \mathcal{F},\norm{f}_{1} \leq 1}\inner{f,\frac{1}{n}\sum_{i=1}^{n}\inner[1]{f,\bx_{i}}\bx_{i}} } \leq W\sqrt{\norm{\hat{\Sigma}}_{op}}. 
\end{aligned}$$ Thus, using Lemma \[lem: help\_lem01\] with simply $\mathcal{M} \leq W \sqrt{\norm{{\hat{\Sigma}}}_{op}}$ and bound on the log-cardinality of the union, together with Lemma \[lem:rad\_lasso\_con\] results into: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}} \leq \frac{2^{\frac{3}{2}}\ell^{\frac{1}{2}}W\sqrt{\sum_{i=1}^{n}\norm{\bx_{i}}^{2}_{\infty}}}{{n}}\paren{1+4\sqrt{\log{d}}} + 8W\sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\ell\frac{\log{2enD}}{n}}. \end{aligned}$$ For the true Rademacher complexity in the similar vein to the case of $\ell_{2}$ penalty we get: $$\begin{aligned} \label{eq:rad_bound_true_lasso} \begin{aligned} \mathfrak{R}_{n}\paren{\mathcal{F}} & \leq \frac{2^{\frac{1}{2}}\sqrt{\ell}W\sqrt{\sum_{j}\ee{}{\norm{\bx_{j}}_{\infty}^{2}}}}{{n}} + 8W\sqrt{\ee{}{\norm[1]{\hat{\Sigma}}_{op}}}\sqrt{\ell\frac{\log{2enD}}{n}} \\ & = \frac{2^{\frac{1}{2}}\sqrt{\ell}W\sqrt{\ee{}{\max_{k}\paren[1]{\bx^{k}}^{2}}}}{\sqrt{n}}\paren{1+4\sqrt{\log d}} + 8W\sqrt{\ee{}{\norm[1]{\hat{\Sigma}}_{op}}}\sqrt{\ell\frac{\log{2enD}}{n}}, \end{aligned} \end{aligned}$$ which proves the bound for Rademacher complexity. In order to control $\sqrt{\ee{}{\norm[1]{\hat{\Sigma}}_{op}}}$ we can apply the same argument as in the proof of Proposition \[eq:rad\_comp\_ell\_2\] and get: $$\begin{aligned} \ee{}{\norm{\hat{\Sigma}}_{op}} \leq \sqrt{\frac{2K\norm{\Sigma}_{op}\log(2d)}{2n}} + \frac{2K \log(2p)}{n} + \norm{\Sigma}. \end{aligned}$$ In order to control $\ee{}{\max_{k}\paren[1]{\bx^{k}}^{2}}$ we use the assumption that $\bx$ is subgaussian in each direction. Define $z_{k} = {\paren[1]{\bx^{k}}^{2}}$ for $k \in \{1,\ldots,d\}$ and notice that $z_{k} \geq 0$. Thus we have for any $\delta > 0$: $$\begin{aligned} \ee{}{\max_{k} z_{k}} &\leq \max_{k} \ee{}{z_{k}} + \delta + \int_{\delta + \max_{k} \ee{}{z_{k}}}^{\infty}\probb{}{\max_{k} {z_{k}} > s}ds \\ & = \max_{k} \ee{}{z_{k}} + \delta + \int_{\delta }^{\infty}\probb{}{\max_{k} {z_{k}} > + \max_{k} \ee{}{z_{k}} + s}ds \\ & \leq \max_{k} \ee{}{z_{k}} + \delta + \sum_{k=1}^{d}\int_{\delta}^{+\infty}\probb{}{z_{k} \geq \ee{}{z_{k}} +s }ds. \\ \end{aligned}$$ Now, since variable $z_{k}-\ee{}{z_{k}}$ is centered subexponential, there exist $c_{1}>2,c_{2} >0$ such that for all $s>0$ it holds that $\probb{}{z_{k}-\ee{}{z_{k}} > s} \leq c_{1}\exp(-c_{2}s)$. After integration this gives the following upper bound: $$\begin{aligned} \ee{}{\max_{k} z_{k}} \leq \max_{k}\ee{}{z_{k}} + \delta + \frac{dc_{1}}{c_{2}}\exp\paren{-c_{2}\delta}. \end{aligned}$$ Minimizing the right hand side in $\delta$ we get $\delta^{\star} := c_{2}^{-1}\log(dc_{1})$ which implies: $$\begin{aligned} \ee{}{\max_{k} z_{k}} \leq \max_{k} \ee{}{z_{k}} + C\log(dc_{1}), \end{aligned}$$ where $C = c_{2} + c_{2}^{-1}$. Now since $\ee{}{z_{k}} = \ee{}{\bx_{k}^{2}} \leq \sigma^{2}$, we finally have: $$\begin{aligned} \ee{}{\max_{k} z_{k}} \leq \sigma^{2} + C\log(dc_{1}). \end{aligned}$$ Putting this inequality into the upper bound for Rademacher complexity gives: $$\begin{aligned} \mathfrak{R}_{n}\paren{\mathcal{F}} \leq \frac{2^{\frac{1}{2}}\sqrt{\ell}W\sqrt{\sigma^{2} + C\log(dc_{1})}}{\sqrt{n}}\paren{1+4\sqrt{\log d}} + 8W\sqrt{ \paren{\frac{2K\norm{\Sigma}_{op}\log(2d)}{2n}}^{1/2} + \frac{2K \log(2p)}{n} + \norm{\Sigma} }\sqrt{\ell\frac{\log{2enD}}{n}}. \end{aligned}$$ The result for the generalization error follows immediately from Theorem \[thm:main\_theorem01\]. 
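The logarithmic growth in $d$ of $\ee{}{\max_{k}\paren[1]{\bx^{k}}^{2}}$ used in the last step can be visualised with a quick simulation; the sketch below assumes independent standard Gaussian coordinates purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 2000
for d in (10, 100, 1000, 10000):
    x = rng.normal(size=(trials, d))          # each (x^k)^2 is a subexponential random variable
    estimate = (x ** 2).max(axis=1).mean()    # Monte-Carlo estimate of E[max_k (x^k)^2]
    print(d, round(estimate, 2), round(1.0 + 2.0 * np.log(d), 2))  # compare with a sigma^2 + C*log(d) profile
```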
Proofs for variable selection results ------------------------------------- In this part we provide the results and sketch the proofs for the control of the Rademacher complexities and the generalization error upper bounds for the extension of the PLRT algorithm to the variable selection criteria. The following upper bound holds: \[lem:card02\] $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}} \paren{\mathcal{F}_{s}} &= \hat{\mathfrak{R}}_{\mathbb{S}} \paren[4]{\bigcup_{T,(i_{k},t_{k}),\overline{s}} \mathcal{F}_{T,(i_k,t_k)_{k \in T^{\circ}},\overline{s} \in \overline{S}}} \\ &\leq \max_{T,i_{s},t_{s},\overline{s}}\hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{T,i_{s},t_{s},\overline{s}}) \\& \quad + 4\mathcal{M}\paren{\sqrt{\frac{\ell\log{enD} + s \log\paren{\frac{de}{s}}}{n}} }, \\ \end{aligned}$$ where $\mathcal{M} = \sqrt{\sup\limits_{f \in \cF}\frac{1}{n}\sum_{i=1}^{n}f^{2}(\bx_{i})}$. Repeating the argument of the proof of Lemma \[lem: help\_lem01\], for finite $n$ we can restrict the model class $\mathcal{F}_{s}$ to a finite union, counting only splits with $t_{s} = \psi^{i}_{k}$ for all $i \in [D]$, $k \in [n]$. Since the number of possible choices of $s$ coordinates out of $d$ is $\binom{d}{s}$, the log-cardinality of the (finite) union is upper bounded by $\ell \log{enD} + \log{\binom{d}{s}}$. Now, using the inequality $s! \geq \paren{\frac{s}{e}}^{s}$, we derive that: $$\begin{aligned} \binom{d}{s} = \frac{\prod_{j=0}^{s-1}\paren{d-j}}{s!} \leq \frac{d^{s}}{s!} \leq \paren{\frac{de}{s}}^{s}. \end{aligned}$$ The result then follows by taking the logarithm, summing up the terms and proceeding directly as in Lemma \[lem: help\_lem01\]. Multiple feature selection. {#S-mfs} --------------------------- The feature selection procedure in the regression space $\mathcal{X}$ can also be performed both in the internal nodes, for finding the penalized least squares solution, *and* additionally on the leaves, to build the final regressors. More precisely, at each internal node $1,\ldots, \ell-1$ we select $s$ features among $d$ from the available dataset $\bX$ and find the best split by building the (penalized) cumulative least squares loss based on the selected $s$ features. After the tree has been built, at each leaf ($1,\ldots,\ell$) we select $s$ (possibly different) features out of $d$ and compute the regression solutions. Thus, the feature selection procedure is performed $2\ell-1$ times. Formally, the class of decision trees admits the following union representation: $$\begin{aligned} \label{eq:class_str_all} \mathcal{F}_{\ell,sel} = \bigcup_{\substack{T \in \cT_l\\(i_{k},t_{k})_{k} \in ([D] \times \mbr)^{\cT^\circ} \\ \paren{\overline{s}}_{p} \in \overline{S}^{\otimes 2\ell-1}}}\mathcal{F}_{T,\paren{(i_{k},t_{k})_{k\in T^{\circ}}}, \paren{\overline{s}}_{p}}.\end{aligned}$$ The reasoning in Section \[subsec:variable\_select\] (where the feature selection procedure is applied only at the root) can be straightforwardly extended to the case where feature selection is used at each internal node, and we obtain the following analogues of Lemma \[lem:card02\] and Proposition \[prop:Lasso\_Varsel\]. First we formulate the analogous version of Lemma \[lem:card02\] with multiple feature selections. 
\[lem:card03\] $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}} \paren{\mathcal{F}_{\ell,sel}} = \hat{\mathfrak{R}}_{\mathbb{S}} \paren[4]{\bigcup_{\substack{T \in \cT_l\\(i_{k},t_{k})_{k} \in ([D] \times \mbr)^{\cT^\circ} \\ \paren{\overline{s}}_{p} \in \overline{S}^{\otimes 2\ell-1}}}\mathcal{F}_{T,\paren{(i_{k},t_{k})_{k\in T^{\circ}}}, \paren{\overline{s}}_{p}}} &\leq \max_{T,i_{s},t_{s},\overline{s}}\hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{T,i_{s},t_{s},\paren{\overline{s}}_{p}}) \\ + 4\mathcal{M}\paren{\sqrt{\frac{\ell\log{enD} + 2s\ell \log\paren{\frac{de}{s}}}{n}} }, \end{aligned}$$ where $\mathcal{M} = \sqrt{\sup\limits_{f \in \cF}\frac{1}{n}\sum_{i=1}^{n}f^{2}(\bx_{i})}$. The proof is done in the very same way as in the Lemma \[lem:card02\], noticing that now we choose $s$ variables out of $d$ *at each node* which increases the factor of the union’s cardinality to $\binom{d}{s}^{2\ell-1}$. Rest of the proof follows the same argument as before. \[prop:Lasso\_Varsel\_all\] Let function class $\mathcal{F}_{\ell,sel}$ be as given in the equation with $B=\{\norm{f}_{1} \leq W\}$. Then the following upper bound for the Rademacher complexity of the class $\mathcal{F}_{\ell,sel}$ is true: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{\ell,sel}) \leq \frac{\sqrt{\ell}W}{\sqrt{n}} \paren[4]{2^{\frac{1}{2}} \sqrt{\frac{1}{n} \sum_{i=1}^{n}\norm{\bx_{j}}^{2}_{\infty}}\paren{1 + 4\sqrt{\log(s)}} + 4 \sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{neD}} + \frac{8\sqrt{s}}{\sqrt{n}}\sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{\frac{de}{s}}}} . \end{aligned}$$ Also, with probability at least $1-\delta/2$ we obtain that for all $f \in \mathcal{{F}}_{\ell,sel}$ it holds: $$\begin{aligned} L(f) - L_{n}(f)&\leq + C_{1} \paren[3]{ \frac{\sqrt{\ell}W}{\sqrt{n}} \paren[2]{2^{\frac{1}{2}} \sqrt{\frac{1}{n} \sum_{i=1}^{n}\norm{\bx_{j}}^{2}_{\infty}}\paren{1 + 4\sqrt{\log(s)}} + 4 \sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{neD}}} }\\ &+\frac{8C_{1}\sqrt{s\ell}W}{\sqrt{n}}\sqrt{\norm[1]{\hat{\Sigma}}_{op}}\sqrt{\log\paren{\frac{de}{s}}} + C_{1}WK \sqrt{\frac{\log(\frac{2}{\delta})}{2n}} + C\sqrt{\frac{\log{\frac{2}{\delta}}}{2n}} \end{aligned}$$ Proof of both Propositions follows the same scheme as the Proposition \[prop:Lasso\_rad\_gen\]. Notice, that after selecting $s$ features the dimensionality of regression vector reduces to $s$, and thus the impact of the LASSO regularization in the complexity term will change from $\log{d}$ to $\log{s}$. Furthermore, for the Proposition \[prop:Lasso\_Varsel\] we use bound from Lemma 1 combined with the bound on the log-cardinality from Lemma \[lem:card02\] and general bound for a fixed class from Lemma \[lem:rad\_lasso\_con\]. In the same vein, to obtain the bounds from Proposition \[prop:Lasso\_Varsel\] we use the result of Lemma 1 combined with the bound on the log-cardinality from Lemma \[lem:card03\] and general bound for a fixed class from Lemma \[lem:rad\_lasso\_con\]. Case of clipped loss function ----------------------------- In this part we additionally investigate the influence of the scaling constants of the model class regularization constraints, under the assumption that the square loss function can be restricted to some bounded domain, when using some apriori knowledge independent of the constraints upper bounds on the target function. We give the definition (following [@steinwart2008support], chapter 2) of the clipped loss function below. 
A loss function $\ell\paren{y,t}: (y,t) \mapsto \mathbb{R}$ can be clipped at the value $H>0$ if for all $\paren{y,t}$ we have: $$\begin{aligned} \ell\paren{y,\widetilde{t}} \leq \ell\paren{y,t}, \end{aligned}$$ where $\widetilde{t}$ denotes the clipped value of $t$ at $H$, i.e. : $$\begin{aligned} \widetilde{t} = \begin{cases} -H & t < -H \\ t & t \in [-H,H] \\ H & t > H, \end{cases} \end{aligned}$$ or equivalently $\widetilde{t} = \max\{-H,\min\{t,H\}\}$. Now let us consider the squared loss, i.e. $\ell(y,t) = \paren{y-t}^{2}$ and for the apriori given $M>0$ define the loss function $\widetilde{\ell}(y,t) = \min\paren{M,\paren{y-t}^{2}}$. Assuming, that the output variable $y$ has bounded support on $[-R,R]$ one can readily check, that the loss function $\ell(y,t)$ can be clipped at value $R+\sqrt{M}$ and that the loss function $\widetilde{\ell}(y,t)$ is its clipped version. Moreover, doing straightforward calculations, one can obtain, that the Lipschitz constant of $\tilde{\ell}\paren{y,\cdot}$ is bounded by $4R+2\sqrt{M}$. Let firstly $\mathcal{F}$ be some model class, such that $\mathcal{F} = \cup_{i=1}^{N} \mathcal{F}_{i}$ and $\mathcal{F}_{i}$ be the arbitrary functional classes of real-valued functions with domain in $\mathcal{X}$. Applying Theorem \[thm:sup\_bound\] with the sets $A_{j}= \{\paren{\widetilde{\ell} \circ f(\bx_{1}),\ldots,\widetilde{\ell} \circ f(\bx_{n})}: f \in \mathcal{F}_{j} \}$, $j \in \{1,\ldots, N\}$ we obtain the following result for the Rademacher complexity $\hat{\mathfrak{R}}_{\mathbb{S}}(\widetilde{\ell} \circ \mathcal{F})$ of the image of the class $\mathcal{F}$ under the $\ell(\cdot,\cdot)$ map: \[lem:img\_comp\] $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}(\widetilde{\ell} \circ \mathcal{F}) \leq \max_{m=1}^{N} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\widetilde{\ell} \circ \mathcal{F}_{m}} + 4 \mathcal{M} \sqrt{\frac{\log N}{n}}, \end{aligned}$$ where we have $\mathcal{M} := \sqrt{\sup_{f \in \cup_{m=1}\mathcal{F}_{m}}\frac{1}{n}\sum_{i=1}^{n} \paren[2]{\widetilde{\ell} \circ f(\bx_{i})}^{2}}$ **Remark** Notice, that from the definition of the function $\widetilde{\ell}\paren{y,t}$ it follows that $\mathcal{M} \leq M$. Notice that in Lemma \[lem:img\_comp\] a constant $\mathcal{M}$ depends only on the apriori bound $M$ and on the contrary to that from the Propositions \[eq:rad\_comp\_ell\_2\] or \[prop:Lasso\_rad\_gen\] where it scales linearly with the norm constraint of the prediction function $f$. We demonstrate this more precisely on the example of $\ell_{1}$-type regularization constraints below. Firstly, through contraction principle with Lipschitz constant $L:= 4R+2\sqrt{M}$ of function $\widetilde{\ell}(y,t)$ we deduce from Lemma \[lem:img\_comp\] that it holds: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}(\widetilde{\ell} \circ \mathcal{F}) \leq L\max_{m=1}^{N} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\mathcal{F}_{m}} + 4 M \sqrt{\frac{\log N}{n}}.\end{aligned}$$ Thus, considering class $\mathcal{{F}}$ from Equation  with the norm constraints $B =\{f: \norm{f}_{1} \leq W \}$ and using the upper bound of the Lemma \[lem: help\_lem01\] and upper bound on the empirical Rademacher complexity of one tree with fixed structure and partition (Lemma \[lem:rad\_lasso\_con\] ) we obtain. \[prop:Lasso\_rad\_\_clipped\] Let the function class $\mathcal{F}$ be as given in the equation with $B=\{\norm{f}_{1} \leq W\}$. Let also the underlying loss function be the clipped loss $\widetilde{\ell}$ of the squared loss, clipped at the point $R+\sqrt{M}$. 
Then the following (data-dependent) upper bounds for both the empirical and true Rademacher complexities of the class $\mathcal{F}$ is true: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}\paren{\widetilde{\ell} \circ \mathcal{F}} &\leq \frac{\sqrt{\ell}}{\sqrt{n}} \paren{2^{\frac{1}{2}}WL\sqrt{\frac{1}{n}\sum_{j}\norm{\bx_{j}}_{\infty}^{2}}\paren{1 + 4 \sqrt{\log{d}}} +4M \sqrt{\log\paren{enD}}} \\ {\mathfrak{R}}_{n}\paren{\widetilde{\ell} \circ \mathcal{F}} &\leq \frac{\sqrt{\ell}}{\sqrt{n}} \paren{ 2^{\frac{1}{2}}WL\sqrt{\ee{}{\norm{\bx}_{\infty}^{2}}}\paren{1 + 4 \sqrt{\log{d}}} + 4M \sqrt{\log\paren{enD}}} \end{aligned}$$ Furthermore, with probability at least $1-\delta/2$ w.r.t. sample $\mathbb{S}$ we have for all $f \in \mathcal{{F}}$: $$\begin{aligned} L(f) - L_{n}(f)\leq & + \frac{2\sqrt{\ell}}{\sqrt{n}} \paren{ 2^{\frac{1}{2}}WL\sqrt{\ee{}{\norm{\bx}_{\infty}^{2}}}\paren{1 + 4 \sqrt{\log{d}}} + 4M \sqrt{\log\paren{enD}}} + C\sqrt{\frac{\log(\frac{2}{\delta})}{2n}}. \end{aligned}$$ where $C = (R+WK)^{2}$. **Remark.** We observe that in the previous bound for Rademacher complexity only the first term scales linearly with the norm constraint, but the second term scales only with the constant $M$ (which apriori does not depend on the norm constraint $W$). The effect of the clippable loss can be extended to the setting of feature selection procedure, when it is performed at each internal node and in the leaves. Recall that in this case the underlying model class can be written as: $$\begin{aligned} \label{eq:class_str_all_sup} \mathcal{F}_{\ell,sel} = \bigcup_{\substack{T \in \cT_l\\(i_{k},t_{k})_{k} \in ([D] \times \mbr)^{\cT^\circ} \\ \paren{\overline{s}}_{p} \in \overline{S}^{\otimes 2\ell-1}}}\mathcal{F}_{T,\paren{(i_{k},t_{k})_{k\in T^{\circ}}}, \paren{\overline{s}}_{p}}.\end{aligned}$$ The following result for the clipped square-loss function holds true. \[prop:Lasso\_Varsel\_all\_clipped\] Let function class $\mathcal{F}_{\ell,sel}$ be the class of decision trees with regularization constraints $B=\{\norm{f}_{1} \leq W\}$ described by Equation \[eq:class\_str\_all\_sup\]. Then the following upper bound for the Rademacher complexity of the class $\mathcal{F}_{\ell,sel}$ is true: $$\begin{aligned} \hat{\mathfrak{R}}_{\mathbb{S}}(\mathcal{F}_{s}) \leq \frac{\sqrt{\ell}WL}{\sqrt{n}} \paren[4]{2^{\frac{1}{2}} \sqrt{\frac{1}{n} \sum_{i=1}^{n}\norm{\bx_{j}}^{2}_{\infty}}\paren{1 + 4\sqrt{\log(s)}}} + \frac{4M\sqrt{\ell}}{\sqrt{n}} \sqrt{\log\paren{neD}} + \frac{8\sqrt{s}}{\sqrt{n}}M\sqrt{\log\paren{\frac{de}{s}}} . \end{aligned}$$ Also, with probability at least $1-\delta$ we obtain that for all $f \in \mathcal{{F}}_{\ell,sel}$ it holds: $$\begin{aligned} L(f) - L_{n}(f)&\leq + \frac{2\sqrt{\ell}WL}{\sqrt{n}} \paren[4]{2^{\frac{1}{2}} \sqrt{\ee{}{\norm{\bx}^{2}_{\infty}}}\paren{1 + 4\sqrt{\log(s)}}} + \frac{8M\sqrt{\ell}}{\sqrt{n}} \sqrt{\log\paren{neD}} + \frac{16\sqrt{s\ell}}{\sqrt{n}}M\sqrt{\log\paren{\frac{de}{s}}} \\ &+ M\sqrt{\frac{\log{\frac{2}{\delta}}}{2n}}, \end{aligned}$$ where $L= 4R + 2\sqrt{M}$ is the Lipschitz constant as before. **Remark.** Notice that the three terms in the last bound on the generalization error in the right hand side have the same scaling in terms of $\sqrt{\ell}/\sqrt{n}$, but only the first term scales linearly with respect to norm constraints $W$. 
Also, for large $n$ and $d \ll D$, the second term dominates the bound, but we can balance (and thus sharpen the bound) the first and third terms by choosing the number of selected variables depending on the norm constraints. More precisely, let $A:= \frac{2\sqrt{\ell}WL}{\sqrt{n}} \paren[4]{2^{\frac{1}{2}} \sqrt{\ee{}{\norm{\bx}^{2}_{\infty}}}\paren{1 + 4\sqrt{\log(s)}}}$ and $B := \frac{16\sqrt{s\ell}}{\sqrt{n}}M\sqrt{\log\paren{\frac{de}{s}}}$. Solving the equation $A=B$ in $s$ for a fixed norm constraint $W$ we obtain $$\begin{aligned} \label{eq:s_optimal_scale} s^{\star} = \Psi^{-1}\paren{\frac{WL\sqrt{\ee{}{\norm{\bx}_{\infty}^{2}}}}{2^{\frac{5}{2}}}},\end{aligned}$$ where $\Psi\paren{s} = \frac{\sqrt{s\log{\frac{de}{s}}}}{1+4\sqrt{\log{s}}}$ and $\Psi^{-1}$ is its inverse. This provides a choice of the number $s$ of variables to select, depending on the input dimension $d$, the constant $L$ and the norm constraint $W$, that balances the statistical bound on the generalization error from Proposition \[prop:Lasso\_Varsel\_all\_clipped\]. Notice that we can also obtain a purely data-dependent selection rule by substituting $\ee{}{\norm{\bx}_{\infty}^{2}}$ with its empirical counterpart $\frac{1}{n}\sum_{i=1}^{n}\norm{\bx_{i}}^{2}_{\infty}$. Empirical Evaluations ===================== Datasets -------- - [**KDD-cup 2004**]{} This dataset was part of the KDD-cup 2004 competition. In particular, we use the data from the physics competition, which comprises data on 78 properties of 150,000 particles. As in [@vogel07scalable], we use the value of the 24th column as our target value. - [**Forest**]{} Here the task is to predict the forest cover type from $54$ cartographic variables. As in [@Ronan], we create a regression task by assigning a label of $+1$ to samples corresponding to cover type $2$ and $-1$ to the rest. As in [@Ronan], we use $25,000$ samples for training and another $10,000$ for testing. - [**CT-Slice**]{} comprises $53,500$ CT images, from $74$ different patients, represented by two histograms, one for bone structure and one for air inclusions. The dimensionality of the data is $383$. The target value of the task is the relative location of the image on the axial axis. We assign images from two thirds of the patients to the training dataset, while the remaining third we keep as a test set. - [**MNIST**]{} consists of $60,000$ training samples and $10,000$ test samples of dimensionality $784$, representing handwritten digits. In order to create a regression task, we assign a label of $+1$ to all odd digits and $-1$ to the even digits. Following [@Ronan] we create a second regression task on the MNIST dataset by assigning a label of $+1$ to the digits $0$ to $4$ and $-1$ to digits $5$ through $9$. - [**Kinematic**]{} This dataset comes from a realistic simulation of the dynamics of an 8-link robot arm. The task is to predict the distance of the arm’s end-effector from some target. We use two versions of this dataset, one with medium noise and another with high noise. In both cases the dimensionality of the data is 32. - [**Energy**]{} The regression task here is to predict the appliances energy consumption (in Wh) of a household given the 28 measurements of a wireless sensor network of the temperature and humidity of various rooms in the household. The dataset’s GitHub repository provides a train/test split comprising 14,803 training samples and 4,932 testing samples. 
- [**Air**]{} The dataset contains 9358 instances of hourly averaged responses from an array of 5 metal oxide chemical sensors embedded in an Air Quality Chemical Multisensor Device. We randomly subsampled 6000 samples for training and retained the rest for testing. Effects of Speedups ------------------- We present here an empirical analysis of the effects on the prediction error of the two approximation speedups proposed in the paper. Recall that we considered two settings for the approximation. 1. $\forall m, k \leq m \leq N-k$ we approximate $$L^\lambda_{i,m} \approx l^\lambda_{j,k} + r^\lambda_{j,N-k} + \left(N-2*k\right) \left(\min(l^\lambda_{j,k}-\lambda \| {w}_{j,k} - {w}_0\|^2_Q,r^\lambda_{j,N-k}-\lambda \| {w^c}_{j,N-k} - {w^c}_0\|^2_Q)\right).$$ If for a given $k$ , $L^\lambda_{i,m} \geq L^\lambda_{i_T,k_T}$, we forgo calculating $l^\lambda_{j,m},r^\lambda_{j,N-m}, \forall m, k \leq m \leq N-k$. 2. $\forall m, k \leq m \leq N-k$ we approximate $$L^\lambda_{i,m} \approx l^\lambda_{j,k} + r^\lambda_{j,N-k} + \left(N-2*k\right) \left(\max(l^\lambda_{j,k}-\lambda \| {w}_{j,k} - {w}_0\|^2_Q,r^\lambda_{j,N-k}-\lambda \| {w^c}_{j,N-k} - {w^c}_0\|^2_Q)\right).$$ If for a given $i$ , $L^\lambda_{i,m} \geq L^\lambda_{i_T,k_T}$, we forego calculating $l^\lambda_{j,m},r^\lambda_{j,N-m}, \forall m, k \leq m \leq N-k$. We set regularization parameter $\gamma=1$ throughout these experiments and plot the loss for various tree depths on the various datasets for the 3 type of algorithmic procedures (exact, approximate (1.), approximate (2.)). \ \ \ \ \ \ \ \ \ As can be seen in most cases the difference in accuracy is negligible. Only in two of the Datasets (Energy, KDD2004) the approximation algorithms lead to poorer performance. Given the speedups they offer in computation time, we suggest that there is considerable merit in employing these algorithms despite the fact that they are not exact. Empirical effects of the choice of parameter $\lambda$ ($\ell_{1}-$penalization) -------------------------------------------------------------------------------- We present here an empirical analysis of the effect on generalization of using LASSO in the leaves of a PLRT tree built with $l_2$-regularization in the nodes. In particular we set $\gamma=1.0$ constantly throughout all these experiments meaning that all the trees were built by optimizing $\left\| \bX_A w - Y_A \right\|^2 + \left\| w-w_0\right\|^2$, and evaluate the performance of the trees for various values of the LASSO regularization parameter $\lambda$. \ \ \ \ \ \ \ \ \ As can be seen, LASSO does not lead to improved performance on the datasets presented here. This may be related to the difference, when using LASSO, in regularization criteria in the nodes and the leaves. The tree itself is constructed by using an $l_2$-regularization forcing the optimization algorithms in the prospective leaves to find weight vector solution close (in an $l_2$ sense) to the optimal weight vector already calculated in the node. This form of regularization played a crucial role in the development of the presented PLRT algorithm. As can be seen in the next section, strong regularization, i.e. a large $\gamma$ value, is needed for the stability of the algorithm. Effects of parameter $\gamma$ ($l_2$ regularization) ---------------------------------------------------- We present here an empirical evaluation of the effect of the $\gamma$ on generalization, for the various datasets. 
As can be seen, the proposed algorithm is prone to overfitting for small values of $\gamma$. In fact, for almost all datasets the algorithm overfits even for moderately deep trees. We surmise that strong regularization, $\gamma\geq 1.0$, is crucial to the performance of the algorithm. By propagating the weight vectors of the higher nodes through the $l_2$ regularization, the space of viable weight vector solutions is constrained and overfitting is avoided. As noted, this was a key insight in developing the proposed solution. Numerical Stability =================== As noted, one concern that can be raised regarding the proposed method is its numerical stability. The method involves calculating, for each threshold, the inverse of a matrix of the form $\bX^T P \bX + \lambda Q$, where $\bX$ is an $N \times d$ matrix, meaning $\bX^T P \bX + \lambda Q$ is the result of $N$ rank-one updates to $Q$. We consider in the following that $P,Q$ are identity matrices and investigate the numerical stability of calculating $\Lambda = \left( \bX^T \bX + I \right)^{-1}$ as $N$ rank-one updates using the Sherman-Morrison formula $(\Lambda_{R1})$ and calculating the inverse from scratch each time via the Cholesky decomposition $(\Lambda_{CH})$. In particular we consider the relative error in the Frobenius norm $\frac{\|\Lambda_{CH} - \Lambda_{R1}\|_F}{\|\Lambda_{CH}\|_F}$, where $\| \Lambda \|_F = \sqrt{\sum\limits_{i,j} \Lambda^2_{ij}}$. For randomly generated data $\bX$, we plot in Figure \[fig:stab\](A) the error for various values of $N$ and $d=2048$. As can be seen, the relative error in the Frobenius norm remains very small (of the order of $10^{-4}$) even for large values of $N$. We furthermore investigate the numerical stability of the calculated vector $w = \left( \bX^T \bX + I \right)^{-1}\left( \bX^T Y\right)$, as this is ultimately what is used to calculate the optimal split. In Figure \[fig:stab\](B) we plot the angle, in degrees, between the vectors $w_{CH}$ and $w_{R1}$ calculated using Cholesky decompositions and $N$ rank-one updates respectively; as before, we plot the angle for various values of $N$ and for $d=2048$. As can be seen, there is some inaccuracy in a certain range of values of $N$ of the order $O(d)$, but even in this case the angle between the two vectors is very small $\left(0.03^{\circ}\right)$. Plotting, in Fig. \[fig:stab\](C), the condition number of the matrix $\kappa\left( \Lambda_{CH}\right) = \| \Lambda_{CH}^{-1} \| \| \Lambda_{CH} \|$ against $N$ reveals the source of this relatively small numerical inaccuracy. The experiments were run using the cuBLAS library on the same type of GPU as used in the experiments (Tesla K80). 
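The comparison described in this section can be reproduced on CPU with a few lines of linear algebra; the sketch below is a scaled-down NumPy version (the experiments themselves used cuBLAS on a Tesla K80, and $d$ is reduced here to keep the run short).

```python
import numpy as np

def inverse_rank_one(X):
    """(X^T X + I)^{-1} built as N successive Sherman-Morrison rank-one updates of the identity."""
    Lam = np.eye(X.shape[1])                  # inverse of the starting matrix I
    for x in X:                               # each sample adds x x^T to the matrix being inverted
        Lx = Lam @ x
        Lam -= np.outer(Lx, Lx) / (1.0 + x @ Lx)
    return Lam

def inverse_cholesky(X):
    """(X^T X + I)^{-1} computed from scratch via a Cholesky factorisation."""
    A = X.T @ X + np.eye(X.shape[1])
    L_inv = np.linalg.inv(np.linalg.cholesky(A))
    return L_inv.T @ L_inv                    # A^{-1} = L^{-T} L^{-1}

rng = np.random.default_rng(3)
X = rng.normal(size=(4096, 256))              # N = 4096 samples, d = 256
Lam_r1, Lam_ch = inverse_rank_one(X), inverse_cholesky(X)
print(np.linalg.norm(Lam_ch - Lam_r1) / np.linalg.norm(Lam_ch))   # relative Frobenius error
```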
[\[fig:stab\]]{} Numerical stability of calculating $\Lambda = \left( \bX^T \bX + I \right)^{-1}$ and $w = \left( \bX^T \bX + I \right)^{-1}\left( \bX^T Y\right)$ via $N$ rank-one updates to $I$, compared to calculating the same quantities via the Cholesky decomposition: (A) the relative error in the Frobenius norm (pics/froberr.png), (B) the angle, in degrees, between the vectors $w_{CH}$ and $w_{R1}$ (pics/Angles2.png), and (C) the condition number $\kappa\left( \Lambda_{CH}\right) = \| \Lambda_{CH}^{-1} \| \| \Lambda_{CH} \|$ (pics/Cond.png), for $d=2048$ and various values of $N$.
[^1]: We contend that the empirical evaluations presented in Section \[sec:emp\_eval\] highlight that stability is not an issue. We illustrate in the Appendix the evaluation of the numerical stability of the underlying algebraic computations.
1
--- abstract: 'Binarized Neural Networks, a recently discovered class of neural networks with minimal memory requirements and no reliance on multiplication, are a fantastic opportunity for the realization of compact and energy efficient inference hardware. However, such neural networks are generally not entirely binarized: their first layer remains with fixed point input. In this work, we propose a stochastic computing version of Binarized Neural Networks, where the input is also binarized. Simulations [on the example of the Fashion-MNIST and CIFAR-10]{} datasets show that such networks can approach the performance of conventional Binarized Neural Networks. We evidence that the training procedure should be adapted for use with stochastic computing. Finally, the ASIC implementation of our scheme is investigated, in a system that closely associates logic and memory, implemented by Spin Torque Magnetoresistive Random Access Memory. This analysis shows that the stochastic computing approach can allow considerable savings with regards to conventional Binarized Neural networks in terms of area ([$62\%$]{} area reduction on the Fashion-MNIST task). It can also allow important savings in terms of energy consumption, if we accept reasonable reduction of accuracy: for example a factor [$2.1$]{} can be saved, with the cost of [$1.4\%$]{} in Fashion-MNIST test accuracy. These results highlight the high potential of Binarized Neural Networks for hardware implementation, and that adapting them to hardware constrains can provide important benefits.' address: - 'Centre de Nanosciences et de Nanotechnologies, Univ. Paris-Sud, CNRS, France' - 'Institut Matériaux Microélectronique Nanosciences de Provence, Univ. Aix-Marseille et Toulon, CNRS, France' author: - | , , , , ,\ and bibliography: - 'biblio.bib' title: Stochastic Computing for Hardware Implementation of Binarized Neural Networks --- Binarized Neural Network, Stochastic Computing, Embedded System, MRAM, In Memory Computing =-15pt Introduction {#sec:introduction} ============ advances in deep learning have transformed the field of machine learning, with numerous achievements in image or speech recognition, machine translation and others. However, a considerable challenge of deep neural network remains their energy consumption, which limits their use within embedded systems [@editorial_big_2018]. The hardware implementation of deep neural networks is a widely investigated approach to increase their energy efficiency. A particularly exciting opportunity is to rely on in-memory or near-memory computing implementations [@yu2018neuro; @ielmini2018memory; @querlioz2015bioinspired; @burr2017neuromorphic; @giacomin2018robust], which are highly energy efficient as they avoid the von Neumann bottleneck entirely. This idea takes special meaning today, in particular with the emergence of novel memories such Resistive and Magnetoresistive Random Access Memories (RRAMs and MRAMs). Such memories are fast and compact non volatile memories, which can be embedded at the core of CMOS processes, and therefore provide an ideal technology for realizing in-memory neural networks [@yu2018neuro; @ielmini2018memory; @burr2017neuromorphic]. A considerable challenge of this approach is that modern neural networks require important amounts of memory [@canziani2016analysis], which is not necessarily compatible with hardware in-memory computing approaches. [Multiple roads have been explored to reduce the precision and memory requirements of neural networks. 
The quantization of the weights used for inference is the most natural route [@hubara2017quantized]. Architectural optimization can result in considerable reduction in terms of number of parameters and arithmetic operations, with only modest reduction in accuracy [@sandler2018mobilenetv2]. Network pruning [@reagen2016minerva] or network compression [@chen2015compressing; @han2015deep] techniques, sometimes combining different methods, can allow implementing hardware neural networks with reduced memory access and therefore higher energy efficiency.]{} [ Binarized Neural Networks (BNNs) have recently appeared as one of the most extreme vision of low precision neural networks, as they go further than these approaches [@courbariaux2016binarized; @rastegari2016xnor].]{} In these simple deep neural networks, synaptic weights as well as neuron activations assume Boolean values. These models can nevertheless achieve state-of-the-art performance on image recognition, while being multiplier-less, and relying only on simple binary logic functions. First hardware implementations have already been investigated and have shown highly promising results [@bocquet2018memory; @nurvitadhi2016accelerating; @yu2018neuro; @giacomin2018robust]. However, BNNs are not entirely binarized: the first layer input is usually coded as a fixed point real number. This fact is not a significant issue for operating BNNs on graphical processor units (GPUs) [@courbariaux2016binarized], as they feature extensive arithmetic units. Research aimed at implementing binarized neural network on Field Programmable Gate Arrays (FPGAs) [@zhao2017accelerating] has also not specifically investigated the question of the non-binarized first layer: these works usually use the Digital Signal Processors (DSPs) of the FPGA to process the associated operations. However, in an application-specific integrated circuits (ASIC) implementation, the non-binarization of the first layer [implies that]{} this layer needs a specific design, which is more energy consuming and uses more area than the design used for the purely binary layers. For this reason, in this work, we introduce a stochastic computing implementation of BNNs, which allows implementing them in an entirely binarized fashion. The network functions by presenting several stochastically binarized versions of the images to the BNN, in a way reminiscent to the historic concept of stochastic computing [@gaines1969stochastic]. [ After presenting the background of the work (section \[sec:background\]), the paper describes the following contributions.]{} - [We show that this stochastic computing implementation of BNNs allows achieving high network performance in terms of recognition rate on the Fashion-MNIST and CIFAR-10 datasets. Stochastic BNN quickly approaches standard BNN performance when several stochastic binarized images are presented to the network. We also evidence that strategy for training stochastic computing BNNs should differ from the one used for conventional BNNs (section \[sec:network\]).]{} - [We design a full hardware ASIC in-memory BNN, which allows showing that the stochastic computing BNN strategy can save important area ($62\%$ on Fashion-MNIST) and energy (factor $2.1$ on Fashion-MNIST with an accuract reduction of $1.4\%$ with regards to a standard BNN (section \[sec:hardware\]). 
These numbers are discussed with regards to different alternative implementations.]{} Background of the Work {#sec:background} ====================== Binarized Neural Networks ------------------------- In this section, we first introduce the general principles of Binarized Neural Networks, an approach to considerably reduce the computation cost of inference in neural networks [@courbariaux2016binarized; @rastegari2016xnor]. In a conventional neural network with $L$ layers, the activation values of the neurons of layer $k$, $a^{[k]}_i$, are obtained by applying a non-linear activation function $f$ to the matrix product between real-valued synaptic weight matrix $W^{[k]}$ and the real-valued activations of the previous layer of neurons $a^{[k-1]}$: $$a^{[k]}_i = f \left( \sum_{j}{W^{[k]}_{ij} \cdot a^{[k-1]}_j} \right). \label{eq:activation}$$ [In a BNN, excluding the first layer,]{} neuron activation values as well as synaptic weights assume binary values, meaning $+1$ and $-1$. The products between weights and neuron activation values in Eq. (\[eq:activation\]) then simply become [ logic XNOR operation.]{} The sum in Eq. (\[eq:activation\]) is replaced by the $\operatorname*{popcount}$ operation, the basic function that counts the number of ones in a data vector. The resulting value is then converted to a binary value by comparing it to a trained threshold value $\mu^{[k]}_i$. Eq. (\[eq:activation\]) therefore becomes: $$a^{[k]}_i = \operatorname{sign}\left( \operatorname*{popcount}_j \left( \operatorname{XNOR}\left( W^{[k]}_{ij}, a^{[k-1]}_j \right) \right) - \mu^{[k]}_i \right), \label{eq:activation_binarized}$$ where $\operatorname{sign}$ is the sign function. Ordinarily, in binarized neural network, the first layer input $X$ is not binarized. The implementation of operations for computing the first layer activations $a^{[1]}$ is therefore more complex than the basic $\operatorname{XNOR}$ and $\operatorname*{popcount}$ operations: $$a^{[1]}_i = \operatorname{sign}\left( \sum_{j}{W^{[1]}_{ij} \cdot X_j} - \mu^{[1]}_i \right). \label{eq:activation_layer1}$$ Additionally, the thresholding operation is not performed on the last layer of the neural network. Instead, for the last layer, we identify the neuron with the maximum $\operatorname*{popcount}$ value (i.e. the $\operatorname{argmax}$ of the last layer neurons), which gives the output of the neural network. The whole inference process of a conventional BNN is described with vectorized notations in Algorithm \[alg:algorithm1\]. The performance of BNNs is quite impressive. [ A fully-connected BNN with two hidden layers of 1024 neurons, and the use of dropout during training [@srivastava2014dropout] obtains a $1.8\%$ error rate on the test dataset of the canonical MNIST handwritten digits task [@lecun1998gradient], with 300 epochs. In comparison, a conventional neural network with no binarization and $\tanh$ activation function, and the same architecture and number of neurons, obtains a $1.5\%$ test error rate after 300 epochs.]{} Similarly, on more complex datasets such as CIFAR-10 or ImageNet, near-equivalent performed is obtained by BNNs and conventional neural networks [@courbariaux2016binarized; @rastegari2016xnor; @lin2017towards]. The low memory requirements of BNNs (one bit by synapse), as well as the fact that they do not require any multiplication, makes them extremely adapted for inference hardware [@nurvitadhi2016accelerating; @sun2018xnor; @tang2017binary; @yu2018neuro]. 
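As an illustration of Eq. (\[eq:activation\_binarized\]), a hidden binarized layer reduces to an XNOR, a popcount and a comparison with the trained threshold. The following minimal NumPy sketch is ours, not taken from the reference implementations; Boolean `True`/`False` stand for the $+1$/$-1$ activations and weights, and the layer sizes are arbitrary.

```python
import numpy as np

def binarized_layer(a_prev, W, mu):
    """Hidden-layer update of Eq. (2): a_i = sign(popcount_j(XNOR(W_ij, a_j)) - mu_i).

    a_prev : Boolean activations of the previous layer, shape (n_in,)  (True = +1, False = -1)
    W      : Boolean synaptic weights, shape (n_out, n_in)
    mu     : real-valued trained thresholds, shape (n_out,)
    """
    xnor = ~np.logical_xor(W, a_prev)   # XNOR of every weight with the matching activation
    popcount = xnor.sum(axis=1)         # number of ones for each output neuron
    return popcount >= mu               # thresholding replaces the activation function

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 784).astype(bool)
W = rng.integers(0, 2, (1024, 784)).astype(bool)
mu = np.full(1024, 392.0)
print(binarized_layer(a, W, mu)[:8])
```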
The training process of BNNs is reminded in Appendix \[sec:appendix\_training\]. Unlike inference, the training process requires real valued weights and real arithmetic: [training BNNs is not easier than in a conventional neural network]{}. Therefore, a natural vision is to train BNNs on standard GPUs, and to use specialized ultra-efficient hardware only for inference. $z^{[1]} \leftarrow W^{[1]} \cdot X$ $a^{[1]} \leftarrow \operatorname{sign}(z^{[1]} - \mu^{[1]})$ $z^{[k]} \leftarrow \operatorname*{popcount}( \operatorname{XNOR}( W^{[k]}, a^{[k-1]}))$ $a^{[k]} \leftarrow \operatorname{sign}(z^{[k]} - \mu^{[k]})$ $z^{[L]} \leftarrow \operatorname*{popcount}( \operatorname{XNOR}( W^{[L]}, a^{[L-1]}))$ $output \leftarrow \operatorname{argmax}(z^{[L]} - \mu^{[L]}) $ In this work, we investigate how the first layer can be approximated by a stochastic input to decrease computing resources. This approach could also allow processing stochastic data for near sensor computing, which is a way to reduce considerably data transfer between sensors and data process. In addition, due to the possibility of implementing binarization from the first layer, the model can be completely generic with exactly the same architecture over the layers and allows reducing chip area. Stochastic Computing -------------------- Stochastic computing is an approximate computing paradigm, known since the early days of computing [@gaines1969stochastic; @alaghi2013survey]. [Nevertheless, hardware engineers have not exploited this computing scheme for processor design,as it requires applications that can be easily mapped with approximate computing. ]{} The principle is based on encoding [all data]{} as probabilities, represented as a temporal stochastic bitstreams: the number of ones among the bitstream represents the encoded probability. The main advantage of this [encoding scheme]{} is that mathematical functions can be easily approximated with simple logic gates. For instance a product is then implemented with a single AND gate, and a weighted adder can be implemented with a multiplexer gate [@alaghi2013survey]. Many arithmetic operations are therefore easy to implement with low power and small footprint characteristic. Despite these benefits, stochastic computing holds drawbacks: its limitation to low precision arithmetics, and the need to generate random bits. Random number generation can be a major part of the energy consumption, and, moreover, the generated random bits need to be uncorrelated. Random bits have also found applications in the field of neural networks. The most widely used neural networks that intrinsically exploit stochasticity are the restricted Boltzmann machine, where each neuron is binary valued with a probability that depends on the previous layer neurons states [@hinton2006fast]. An alternative technique to exploit stochasticity in neural networks is to approximate standard neural network architecture with stochastic computing. This approach as been proposed as early as the 1990’s [@bade1994fpga], and is currently being revisited in modern deep neural networks [@ardakani2017vlsi; @ren2017sc; @canals2016new]. These works have shown promising results in terms of area and energy consumption. Typically, the largest challenge is the implementation of the non-linear activation function within the stochastic computing framework. 
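To make the encoding concrete, the stochastic-computing product of two probabilities is obtained by applying an AND gate to two independent bitstreams; the short sketch below (bitstream length and input values are arbitrary choices of ours) illustrates this principle.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10_000                                   # bitstream length

def bitstream(p, length):
    """Encode the probability p as a random bitstream: each bit is 1 with probability p."""
    return rng.random(length) < p

a, b = 0.3, 0.6
product_stream = bitstream(a, T) & bitstream(b, T)   # AND gate on two independent streams
print(product_stream.mean())                 # close to a * b = 0.18
```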
In this article, we suggest that stochastic computing is particularly adapted to the case of binarized neural networks, as they naturally work with bitstreams, and as the activation function is replaced by a simple thresholding operation.

(topskip=0pt, botskip=0pt, midskip=0pt) \[scale = 0.25\][stock.png]{} [(a) In a conventional BNN, the first layer is not binarized. Grayscale input images are presented to the neural network. [(b) In a stochastic computing-based BNN, binarized images are generated stochastically based on a grayscale image.]{} Several binarized versions of the same original image can be presented sequentially to the neural network, following the basic principle of stochastic computing. \[fig:stochprinciple\]]{}

Stochastic Computing-Based Binarized Neural Network {#sec:network}
===================================================

To evaluate [the stochastic computing approach]{}, we use the Fashion-MNIST dataset, which [has]{} the same format as MNIST, but presents grayscale images of fashion items [@xiao2017fashion], and constitutes a harder task. The canonical MNIST dataset would not be appropriate for this study, as it consists of images that are mostly black and white. As in the MNIST dataset, each image in Fashion-MNIST has 28x28 pixels, and can be classified within ten classes. The dataset contains $60,000$ training examples and $10,000$ test examples. Conventional BNNs (non-binarized first layer and no use of stochastic computing) perform very well on this task. With a fully connected BNN whose first layer inputs are coded as eight-bit fixed-point numbers, with two hidden layers of 1024 neurons each and dropout, [a classification accuracy of $90\%$ can be obtained after 300 epochs]{}. This result is comparable with the test accuracy of $91\%$ obtained by a conventional real-valued neural network with the same architecture.

(topskip=0pt, botskip=0pt, midskip=0pt) \[scale = 0.35\][sto.png]{} [Accuracy on the Fashion-MNIST classification task as a function of the number of stochastic images presented, for the two training methods. Navy blue curve: training of the neural network with grayscale images. Light blue curve: training with presentation of stochastic binarized images. Dashed black line: accuracy when training with a black and white image (i.e. pixels with a value greater than 0.5 are white and pixels that are smaller are black). [Dashed red line: best accuracy when the binarized neural network is trained on the Fashion-MNIST classification task with grayscale images]{}. 300 training epochs were used. \[fig:accuracy\]]{}

Stochastic Computing with Regular Training Procedure {#subsec:firstalyer}
----------------------------------------------------

$z^{[1]} \leftarrow 0$ $z^{[1]} \leftarrow z^{[1]} + \operatorname*{popcount}( \operatorname{XNOR}( W^{[1]}, X_t))$ $a^{[1]} \leftarrow \operatorname{sign}(z^{[1]} - T \mu^{[1]})$ $z^{[k]} \leftarrow \operatorname*{popcount}( \operatorname{XNOR}( W^{[k]}, a^{[k-1]}))$ $a^{[k]} \leftarrow \operatorname{sign}(z^{[k]} - \mu^{[k]})$ $z^{[L]} \leftarrow \operatorname*{popcount}( \operatorname{XNOR}( W^{[L]}, a^{[L-1]}))$ $output \leftarrow \operatorname{argmax}(z^{[L]} - \mu^{[L]}) $

[A first approach to design a stochastic computing BNN is to reuse the synaptic weights of a conventional BNN, trained with grayscale pictures.]{} However, in the inference phase, we approximate the computation of the first layer by using stochastic image presentations instead of grayscale images.
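A minimal sketch of how such stochastic image presentations can be drawn from a grayscale image is given below; the function name, image size and number of presentations are placeholders for illustration and are not taken from the implementation described in this work.

```python
import numpy as np

def stochastic_presentations(image, T, rng):
    """Draw T binarized stochastic versions of a grayscale image.

    image : array of pixel values in [0, 1]
    Each pixel of each presentation is 1 with a probability equal to the
    pixel's grayscale value, and 0 otherwise.
    """
    return (rng.random((T,) + image.shape) < image).astype(np.uint8)

# toy usage on a hypothetical 28x28 grayscale image
rng = np.random.default_rng(0)
img = rng.random((28, 28))
X_t = stochastic_presentations(img, T=8, rng=rng)
print(X_t.shape, X_t.mean(), img.mean())  # the averaged bits approach the grayscale mean
```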
The full inference algorithm is presented, in vectorized form, in Algorithm \[alg:algorithm1binary\]. An input $X$ is transformed into binarized stochastic inputs $X_t$ by taking the value of each grayscale pixel (between zero and one) as the probability for the corresponding pixel in the stochastic input to be one. Then, [the network computes]{} $\operatorname*{popcount}( \operatorname{XNOR}( W^{[1]}, X_t)) - \mu^{[1]}$, and sums the result of this computation over a number $T$ of stochastic versions of the input $X_t$. Finally, the output of the layer is thresholded to obtain a binary value, and the rest of the neural network is computed in one pass in a fully binarized fashion.

The quality of the results depends on the number of image presentations $T$. In Fig. \[fig:accuracy\], the navy blue curve shows the network test accuracy as a function of $T$. We can see that after 100 stochastic image presentations, the accuracy is nearly equivalent to the use of grayscale images. With eight image presentations, the test accuracy is reduced to $88\%$ instead of $90.1\%$. With a single presentation, the test accuracy is only $76\%$.

Adapted Training Procedure {#sec:adaptedtrainig}
--------------------------

We now try a second strategy, where we train the neural network with binarized stochastic image presentations instead of grayscale images. To do this, during training, we use the conventional BNN training technique of Appendix \[sec:appendix\_training\], but instead of using the normal grayscale Fashion-MNIST images, we use stochastic binarized ones, with the same number of presentations $T$ as will be used during inference. The inference technique then remains identical to the one described in section \[subsec:firstalyer\].

In Fig. \[fig:accuracy\], in cyan color, we plotted the test accuracy as a function of the number of presentations of the same image with this scheme. We see that the test accuracy is equivalent to the one obtained with grayscale images for high numbers of image presentations. On the other hand, with few stochastic presentations (one to five), the adapted input training technique allows reaching quite a high accuracy. If a single presentation is used at inference time, the network test accuracy is $86\%$. This test accuracy is equivalent to the one obtained when training a BNN with non-stochastic black and white versions of the Fashion-MNIST dataset (dashed black line in Fig. \[fig:accuracy\]). If three image presentations are used, the network test accuracy increases to [$88.7\%$]{}. These results show that when using the stochastic computing version of a BNN, the adapted training procedure should be used.

Choice of the Accumulation Layer for Stochastic Samples {#subsec:accumul}
--------------------------------------------------------

(topskip=0pt, botskip=0pt, midskip=0pt) \[scale = 0.35\][acc.png]{} [Accuracy on the Fashion-MNIST classification task as a function of the number of stochastic image presentations [when the popcount value is accumulated at different levels of the network. The training was done with grayscale images and ]{}300 training epochs were used. \[fig:accumulation\] ]{}

Until now, at inference time, we have accumulated the outputs of the first layer over several presentations of the same image, then propagated the binarized output of the first layer to the other layers. An alternative strategy can be to perform the accumulation over the realizations of the input images at another layer.
If the accumulation is done at the last layer, this procedure corresponds to using stochastic computing in the whole depth of the neural network. Fig. \[fig:accumulation\] presents the test accuracy of the neural network on the Fashion-MNIST dataset, as a function of the number of presented realizations of the input images, for the different accumulation strategies, in networks trained with the adapted training strategy. [This Figure shows that the different accumulation strategies lead to equivalent accuracy, consistent with the principles of stochastic computing.]{} [The strategy of accumulation at the first layer is retained for the rest of the paper, as it allows for the minimum energy consumption.]{}

[Extension to the CIFAR-10 Dataset]{} {#subsec:CIFAR10}
---------------------------------------

(topskip=0pt, botskip=0pt, midskip=0pt) \[scale = 0.35\][cifar10\_noEB.png]{} [[Accuracy on the CIFAR-10 classification task as a function of the number of stochastic images presented, for the two training methods. Navy blue curve: training of the neural network with color images. Light blue curve: training with presentation of stochastic binarized images. Dashed black line: accuracy when training with a binarized color image (i.e. RGB channel values greater than 0.5 are set to one and smaller values to zero). Dashed red line: best accuracy when the binarized neural network is trained on the CIFAR-10 classification task with full color images. 2000 training epochs were used. ]{} \[fig:accuracy\_cifar\_bin\]]{}

(topskip=0pt, botskip=0pt, midskip=0pt) \[scale = 0.35\][cifar10\_sto\_classifier.png]{} [[Accuracy on the CIFAR-10 classification task, but with the stochastic computing approach implemented at the end of the convolutional layers. Navy blue curve: training of the neural network in a conventional fashion. Light blue curve: classifier part of the neural network retrained with stochastic versions of the output of the convolutional layers. Dashed red line: best accuracy when the binarized neural network is trained on the CIFAR-10 classification task with full color images. 2000 training epochs were used.]{} \[fig:accuracy\_cifar\_bin\_classifier\]]{}

[We now apply this strategy to the more advanced CIFAR-10 dataset. We use a convolutional neural network with six convolutional layers, with kernels of size three by three and a stride of one (number of filters 384, 384, 384, 768, 768 and 1536) and three fully connected layers (number of neurons 1024, 1024 and 10). Training is done in the same conditions as in the Fashion-MNIST case, using dropout and the Adam optimizer, within the pytorch deep learning framework. In the stochastic computing BNN, CIFAR-10 images are presented with binarized channels: each RGB channel pixel takes a value of zero or one. This value is chosen randomly with a probability equal to the RGB value of the corresponding pixel of the image. Accumulation of the stochastic realizations is performed at the first layer, as described in section \[subsec:accumul\]. ]{}

[Fig. \[fig:accuracy\_cifar\_bin\] shows that the results on CIFAR-10 are very similar to the ones on Fashion-MNIST (Fig. \[fig:accuracy\]). It presents results obtained using the weights trained with full color images, and weights obtained with the adapted training approach. In both cases, the stochastic BNN results approach regular BNN results when the number of presentations $T$ of stochastic images is increased. The adapted training nevertheless gives clearly superior results and should be preferred.
This highlights that the stochastic BNN approach can be applied to more complicated tasks than Fashion-MNIST. ]{}

[We now consider a variation of this scheme, a partially binarized neural network. Fully connected layers of neural networks are particularly adapted for in-memory BNN implementation [@yu2018neuro; @bocquet2018memory], as these layers involve large quantities of memory. Convolutional layers are less memory intensive, and thus benefit less from binarization, while requiring an increased number of channels when binarized [@courbariaux2016binarized]. In a hardware implementation, it can therefore be attractive to binarize only the classifier (fully connected) layers. In that case, the input of the classifier is real, and is processed with the stochastic BNN approach. This is also of special interest as the first fully connected layer in a convolutional neural network is usually the layer that involves the highest number of additions, and can therefore benefit significantly from being implemented in hardware with the stochastic approach. ]{}

[We consider a neural network with the same architecture as the fully binarized one, a reduced number of filters (128, 128, 128, 256, 256 and 512) and the same number of neurons in the fully connected layers (1024, 1024 and 10). Without the stochastic approach, this neural network has the same CIFAR-10 recognition rate as the fully binarized one ($90\%$). Fig. \[fig:accuracy\_cifar\_bin\_classifier\] shows the results of the stochastic BNN with this approach. If the same weights are used as in a non-stochastic BNN, the results look similar to the fully binarized approach of Fig. \[fig:accuracy\_cifar\_bin\]. On the other hand, if the classifier weights are retrained with the stochastic binarized inputs to the classifier, the stochastic results are very impressive. Even with a single image presentation $T$, the network approaches the performance of the non-stochastic network. The stochastic BNN approach therefore appears especially effective in this situation. ]{}

Hardware Implementation of Stochastic Computing-Based Binarized Neural Network {#sec:hardware}
===============================================================================

In order to investigate the potential of the stochastic BNN approach, we designed a digital ASIC version of it using standard integrated circuit design tools. The architecture, presented in Fig. \[fig:archi\], allows performing the inference of a fully connected binary neural network of any size (up to 1024 neurons for each layer). The only parameter constrained by the hardware design is the number of weights that can be stored.

Design of the Architecture
--------------------------

(topskip=0pt, botskip=0pt, midskip=0pt) \[scale = 0.17\][archi.png]{} [Design of an MRAM-based fully connected binarized neural network, supporting both parallel and serial computation. (a) Full architecture with $32 \times 32$ repeated cells. Each cell (b-c) behaves as a neuron if the input is sequential, or each column behaves as a neuron if the input is parallel. \[fig:archi\]]{}

Our architecture is inspired by the works of [@ando2017brein], with Static RAMs replaced by Spin Torque MRAM [@shum2017cmos], and an adaptation to stochastic computing. This architecture aims at performing inference on binarized neural networks with minimal energy consumption. To achieve this goal, it brings memory and computation as close as possible, to limit energy consumption related to data transfer.
Such an architecture is of special interest with the emergence of new non-volatile memory components such as Spin Torque MRAM, which can be integrated within the CMOS manufacturing process, and which we consider here. The architecture is described in detail in Appendix \[sec:appendix\_system\], and [can compute following either a parallel or a sequential structure]{}. [The full design is made of a basic cell repeated 32x32 times (Fig. \[fig:archi\] (b-c)) that can perform either sequential or parallel calculation. It includes a 2 kbit memory array to store weights, as well as XNOR gates and popcount logic.]{}

We designed this system using the design kit of a commercial 28 nanometer technology. [Digital circuits were described in synthesizable SystemVerilog.]{} MRAM memory arrays are modeled in a behavioral fashion, and their characteristics (area, energy consumption) are inspired by [@chun2013scaling]. The system was synthesized to estimate its area and energy consumption. For energy consumption, we employed Value Change Dumps extracted from a Fashion-MNIST inference task, and estimated it using the Cadence Encounter tool.

Energy Consumption and Area Results
-----------------------------------

(topskip=0pt, botskip=0pt, midskip=0pt) \[scale = 0.35\][merged\_cell.png]{} [(a) Area of the basic cell (Fig. \[fig:archi\] (b-c)) of our ASIC architecture, implemented in a 28 nm CMOS technology, as a function of the number of operating bits for a fixed-point input representation. One bit corresponds to our stochastic fully binarized architecture. (b) Corresponding energy consumption, per clock cycle. \[fig:area\]]{}

Fig. \[fig:area\](a) shows the area of a basic cell of our architecture (Fig. \[fig:archi\](b-c)), in the case of binary input (one operating bit), and in situations where the input is coded in fixed-point representation (two, four and eight operating bits), as is required in the first layer of a conventional BNN. [This Figure separates the area used by registers, logic and MRAM. A cell with binary input uses six times less area than a cell designed for eight-bit input. ]{} Interestingly, the difference is mostly due to the $\operatorname*{popcount}$ circuits, which need more depth when the input is non-binary. Similarly, as seen in Fig. \[fig:area\](b), a cell with binary input uses $4.5$ times less energy per cycle than the corresponding one with eight-bit input. Again, the difference is mostly due to the $\operatorname*{popcount}$ circuits.

(topskip=0pt, botskip=0pt, midskip=0pt) \[scale = 0.35\][full\_consum.png]{} [Energy consumption of the full Fashion-MNIST classifier systems, for the classification of one image. Light blue: stochastic fully binarized architecture. Navy blue: conventional BNN with a non-binary (eight-bit fixed-point) first layer. The neural networks have two layers with 1024 neurons each. The light blue area indicates the regime where the non-binary first layer is more energy efficient than the fully binarized system.\[fig:energy\] ]{}

[The savings in terms of area transfer directly to the system level. We now consider the whole neural network used for Fashion-MNIST classification throughout section \[sec:network\].]{} Using our architecture, a full BNN with an eight-bit first layer occupies $1.95\ mm^2$, while the BNN with a stochastic binarized first layer occupies $0.73\ mm^2$, a $62\%$ saving in area. These area values were extracted from a system designed for a $T$ value of eight. Fig.
\[fig:energy\] plots the energy consumption for recognizing an image with our ASIC architecture, as a function of the number of presented stochastic images. This is compared with the energy cost of the same architecture, but using a non-stochastic first layer with eight-bit input. We see that the system with a stochastic first layer is more energy efficient than the system with a non-binary first layer if fewer than eight presentations are used.

The previous curves do not include the cost of random bit generation. If we use a simple eight-bit Linear Feedback Shift Register (LFSR) pseudo-random number generator, the added energy is $0.52~nJ/cycle$, and the added area is $48,000\ \mu m^2$. Both are therefore negligible. It has also been shown that Spin Torque MRAM technology can be adapted to provide very low energy true random numbers [@vodenicarevic2017low]. If such a technology was used, based on the numbers of [@vodenicarevic2017low], the energy cost of random bit generation would be $0.125~nJ/cycle$, and the area much smaller than that of the LFSR. The energy cost of random number generation is therefore negligible with regard to the consumption of the system seen in Fig. \[fig:energy\].

[These energy numbers are very attractive with regard to non-binarized implementations at equivalent recognition rate. Non-binarized neural networks require fewer neurons and synapses than BNNs to achieve an equivalent recognition rate. For example, to match the performance obtained in Fig. \[fig:accuracy\] on Fashion-MNIST with three image presentations ($T=3$), one only needs a non-binarized neural network with eight-bit synapses and two layers of 500 neurons, while the BNN needs 1024 neurons per layer. However, in an ASIC, the non-binarized neural network requires energy-hungry eight-bit multiplications and additions ($0.3~pJ$ and $0.04~pJ$ per operation in our $28~nm$ technology). Taking into account only these arithmetic operations, the energy consumption is $220~nJ$ for recognizing a Fashion-MNIST image with the same accuracy as the stochastic BNN with three image presentations (roughly $784\times500+500\times500+500\times10\approx6.5\times10^5$ multiply-and-accumulate operations at $\approx0.34~pJ$ each). This stochastic BNN requires only $90~nJ$ (Fig. \[fig:energy\]), taking into account the whole system. ]{}

[As a conclusion, this work highlights that the stochastic computing approach is attractive in terms of area occupancy.]{} In terms of energy efficiency, it is very attractive if a relatively small number of presentations is used ($T<8$). Therefore, it appears preferable to rely on the stochastic training approach seen in section \[sec:adaptedtrainig\], and to use few stochastic image presentations for inference. For example, if three image presentations are used, a factor of $2.1$ can be saved on the energy consumption on Fashion-MNIST, with a reduction of [$1.4\%$]{} in test accuracy with regard to the best accuracy obtained by a BNN (dashed red line in Fig. \[fig:accuracy\]). It should be noted that the benefits of stochastic computing would be reduced on very deep neural networks, where the first layer plays a smaller role. Our approach is therefore the most promising for Internet-of-Things or sensor network applications, where relatively small neural networks can provide sufficient intelligence, but circuit cost and energy consumption are the most critical issues.
[On deep neural networks, nevertheless, the approach of implementing only the classifier with a stochastic BNN, as mentioned in section \[subsec:CIFAR10\], can be of high interest.]{}

Conclusion
==========

In this work, we presented a stochastic computing approach to Binarized Neural Networks. This allows implementing them in an entirely binarized fashion, whereas in conventional BNNs, the first layer is not binary. We showed that the stochastic computing approach can reach recognition results similar to those of the conventional approach. We identified that for highest accuracy, the neural network should not be trained with [regular]{} images as conventional BNNs are: it is more beneficial to train stochastic BNNs with stochastic binarized images, using the same number of image presentations as will be used during inference. [The design of a full BNN ASIC relying on in-memory computing then highlighted the benefits of BNNs in terms of area and energy consumption.]{} Stochastic BNNs allow using the same compact architecture for all layers, which leads to strong benefits in terms of area ($62\%$ reduction in the case of Fashion-MNIST classification). In terms of energy, the benefits can be very strong if we accept a slight reduction in classification accuracy. For example, on Fashion-MNIST classification, we can reduce the energy consumption by a factor of [$2.1$]{}, with a decrease of [$1.4\%$]{} in classification accuracy. These results highlight the high potential of BNNs for implementing compact and energy-efficient in-memory neural networks, and the potential of stochastic approaches for hardware artificial intelligence. [Future works should focus on the physical implementation of the proposed scheme, as well as the extension of the approach to tasks other than vision, such as medical tasks, where energy efficiency can be a particularly important concern.]{}

Training Algorithm {#sec:appendix_training}
==================

Throughout the paper, neural networks are trained with the algorithm proposed by Courbariaux et al. in [@courbariaux2016binarized]. This algorithm relies on two fundamental principles. First, the function $\operatorname{Clip}(x, -1,1)$ is used instead of the $\operatorname{sign}$ function in the backpropagation phase, as it can be differentiated. Second, the binarized weights $W$ are not directly modified during backpropagation: their modification is done indirectly through the modification of the real weight $W_a$ associated with each synapse. [Our design includes two modifications with regard to the work of [@courbariaux2016binarized].]{} First, in the original paper, the multi-layer perceptron trained on MNIST consisted of hidden layers of binarized units, topped by an L2-SVM output layer; here, we used a $\operatorname{softmax}$ output layer. Second, the parameters $\gamma$ and $\beta$ used for the batch normalization were not trained, and we used $\gamma=1$ and $\beta=0$ instead. The complete algorithm that we used is presented in Algorithm \[alg:algorithmtrain\].

**1. Forward propagation** $W^{[k]} \leftarrow \operatorname{sign}( W^{[k]}_a )$ $z^{[k]} \leftarrow W^{[k]} \cdot a^{[k-1]}$ $\widehat{z}^{[k]} \leftarrow \operatorname{BatchNorm}(z^{[k]},\mu^{[k]},\sigma^{[k]})$ $a^{[k]} \leftarrow \operatorname{sign}(\widehat{z}^{[k]})$ $a^{[k]} \leftarrow \operatorname{softmax}(\widehat{z}^{[k]})$ Compute the gradient of the softmax cross-entropy loss: $${} g_{a^{[L]}} = \dfrac{\partial C}{\partial a^{[L]}} = a^{[L]} - y$$ **2.
Backward propagation** $g_{a^{[k]}} \leftarrow g_{a^{[k-1]}}\ \circ \ 1_{|a^{[k]}|<1} $ $ g_{\widehat{z}^{[k]}} \leftarrow \operatorname{BackBatchNorm}(g_{a^{[k]}},\widehat{z}^{[k]},\mu^{[k]},\sigma^{[k]}) $ $ g_{z^{[k]}} \leftarrow W^{[k]\ T}g_{\widehat{z}^{[k]}} $ $ g_{W_b^{[k]}} \leftarrow g_{\widehat{z}^{[k]}} \ a^T_{k-1}$ **3. Update parameters** $W^{[k]}_{a,t+1} \leftarrow \operatorname{Clip}(\operatorname{UpdateAdam}(W_{a,t+1}^{[k]},g_{W_b^{[k]}}),-1,1) $ $(\mu^{[k]},\sigma^{[k]})_{t+1} \leftarrow \operatorname{MovingAverage}(\mu_B^{[k]},\sigma_B^{[k]})_{t} $

Description of the ASIC BNN Architecture {#sec:appendix_system}
=========================================

[The architecture for hardware implementation of BNN inference is presented in Fig. \[fig:archi\].]{} The basic function of a BNN is to compute $\operatorname*{popcount}( \operatorname{XNOR}( W, X)) - \mu$. To perform this function, first, the system needs to perform the XNOR between the inputs $X$ and the weights $W$, stored in the Spin Torque MRAM memory blocks. Second, it needs to perform the $\operatorname*{popcount}$ function, and then compare this value with a threshold. To achieve this goal, the architecture is made of basic cells (cell Fig. \[fig:archi\] (b-c)), composed of a 2 kbit memory array that stores weights, 32 XNOR logic gates that perform the XNOR between the 32-bit weights and the 32-bit received data, and a 32-bit to 5-bit popcount module composed of basic tree adders. The basic cell is repeated 32x32 times.

The architecture can be operated with a “parallel to sequential” structure, or a “sequential to parallel” structure. The sequential to parallel structure allows dealing with long input sequence data, and outputs limited parallel output data. By contrast, the parallel to sequential structure allows dealing with limited parallel input data, and outputs long sequence data. The basic cells of Fig. \[fig:archi\] (b-c) can perform either sequential or parallel calculation. The output of the popcount can be given to the sequential part of the cell, or to the parallel part of the system that will perform the popcount through the whole column, with a “popcount tree” module shared by all the cells of the column. The sequential section of the cell that receives the popcount output will perform the full popcount operation sequentially by summing the popcount output using a register. [To perform the activation function of the neuron, the system stores in each cell the threshold values $\mu$ in a memory array. ]{} The sign bit of the difference between the popcount value saved in the register and $\mu$ gives the activation value. The same operation is performed with the output of the popcount tree shared along the column.

[Tifenn Hirtzlin]{} is a PhD student in Electrical Engineering at Université Paris-Sud. He received the M.S. degree in Nanosciences and Electronics from the University Paris-Sud, France, in 2017. His work focuses on designing intelligent memory chips for low-energy hardware data processing, using bio-inspired concepts such as probabilistic approaches to brain function as well as more classical neural network approaches.

[Bogdan Penkovsky]{} is a postdoctoral CNRS researcher at Paris-Sud University. He received his M.S. degree in Applied Mathematics from the National University of Kyiv-Mohyla Academy, Ukraine, in 2013 and the Ph.D. degree in optics and photonics applied to neuromorphic computing from the University of Burgundy - Franche-Comté, France, in 2017.
His work is on intelligent, low-energy hardware design for biomedical applications.

[Marc Bocquet]{} is an Associate Professor in the Institute of Materials, Microelectronics and Nano-sciences of Provence, IM2NP, at Aix-Marseille University. He received the M.S. degree in electrical engineering in 2006 and the Ph.D. degree in electrical engineering in 2009, both from the University of Grenoble, France. His research interests include memory models, memory design, characterization and reliability.

[Jacques-Olivier Klein]{} (M’90) received the Ph.D. degree from Univ. Paris-Sud, France, in 1995. He is currently Full Professor at Univ. Paris-Sud, where he focuses on the architecture of circuits and systems based on emerging nanodevices in the field of nanomagnetism and bio-inspired nanoelectronics. In addition, he is a lecturer at the Institut Universitaire de Technologie (IUT) of Cachan. He is the author of more than one hundred technical papers.

[Jean-Michel Portal]{} (M’87) is a Full Professor in the Institute of Materials, Microelectronics and Nano-sciences of Provence, IM2NP, at Aix-Marseille University. He received the Ph.D. degree in 1999 from the University of Montpellier 2, France. From 1999 to 2000, he was a temporary researcher at the University of Montpellier 2 in the field of FPGA design and test. From 2000 to 2008, he was an assistant professor at the Univ. of Provence, Polytech Marseille, and conducted research activities in L2MP in the field of memory testing and diagnosis, test structure design and design for manufacturing. In this position he participated in industrial projects on non-volatile memory testing and diagnosis with ST Microelectronics. In 2008, he became Full Professor at Aix-Marseille Univ. and since 2009 he has headed the “Memories Team” of the IM2NP. His research fields cover design for manufacturing and memory design, test and reliability.

[Damien Querlioz]{} (M’08) is a CNRS Research Scientist at Université Paris-Sud. He received his predoctoral education at Ecole Normale Supérieure, Paris and his PhD from Université Paris-Sud in 2008. After postdoctoral appointments at Stanford University and CEA, he became a permanent researcher at the Centre for Nanoscience and Nanotechnology of Université Paris-Sud. He focuses on novel usages of emerging non-volatile memory, in particular relying on inspirations from biology and machine learning. Damien Querlioz coordinates the INTEGNANO interdisciplinary research group. In 2016, he was the recipient of a European Research Council Starting Grant to develop the concept of natively intelligent memory. In 2017, he received the CNRS Bronze medal.
--- abstract: 'The [[*Planck*]{}]{} mission is the most sensitive all-sky CMB experiment currently planned. The High Frequency Instrument ([[*HFI*]{}]{}) will be especially suited to observe clusters of galaxies by their thermal Sunyaev-Zel’dovich (SZ) effect. In order to assess [[*Planck*]{}’s ]{} SZ-capabilities in the presence of spurious signals, a simulation is presented that combines maps of the thermal and kinetic SZ-effects with a realisation of the cosmic microwave background (CMB), in addition to Galactic foregrounds (synchrotron emission, free-free emission, thermal emission from dust, CO-line radiation) as well as the sub-millimetric emission from celestial bodies of our Solar system. Additionally, observational issues such as the finite angular resolution and spatially non-uniform instrumental noise of [[*Planck*]{}’s ]{}sky maps are taken into account, yielding a set of all-sky flux maps, the auto-correlation and cross-correlation properties of which are examined in detail. In the second part of the paper, filtering schemes based on scale-adaptive and matched filtering are extended to spherical data sets, that enable the amplification of the weak SZ-signal in the presence of all contaminations stated above. The theory of scale-adaptive and matched filtering in the framework of spherical maps is developed, the resulting filter kernel shapes are discussed and their functionality is verified.' author: - | B. M. Schäfer$^{1}$[^1], C. Pfrommer$^{1}$, R. M. Hell$^{1}$and M. Bartelmann$^{2}$\ $^1$Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Stra[ß]{}e 1, Postfach 1317, 85741 Garching, Germany\ $^2$Institut für theoretische Astrophysik, Tiergartenstra[ß]{}e 15, 69121 Heidelberg, Germany bibliography: - 'bibtex/aamnem.bib' - 'bibtex/references.bib' title: | Detecting Sunyaev-Zel’dovich clusters with [[*Planck*]{}]{}:\ II. Foreground components and optimised filtering schemes --- \[firstpage\] galaxies: clusters: general, cosmology: cosmic microwave background, methods: numerical, space vehicles: [ *Planck*]{} Introduction {#sect_intro} ============ The Sunyaev-Zel’dovich (SZ) effect is the most important extragalactic source of secondary anisotropies in the CMB sky. The thermal SZ-effect is explained by the fact that CMB photons are put in thermal contact with electrons of the hot intra-cluster medium (ICM) by Compton-interactions which causes a transfer of energy from the ICM to the CMB. Because of the smallness of the Thompson cross-section and of the diluteness of the ICM this transfer of thermal energy is small. In the direction of a cluster, low-energetic photons with frequencies below $\nu=217$ GHz are removed from the line-of-sight. At frequencies above $\nu=217$ GHz CMB photons are scattered into the line-of-sight, causing a distinct modulation of the CMB surface brightness as a function of observing frequency, which enables the detection of clusters of galaxies in microwave data. In contrast, in the kinetic effect it is the peculiar motion of a cluster along the line of sight relative to the CMB frame that induces CMB surface brightness fluctuations. The peculiar motion of the cluster causes the CMB to be anisotropic in the cluster frame. Due to this symmetry breaking of the scattering geometry, photons scattered into the line-of-sight are shifted in frequency, namely to higher frequencies, if the cluster is moving towards the observer. 
The [[*Planck*]{}]{}-mission will be especially suited to detect SZ-clusters due to its sensitivity, its spectroscopic capabilities, sky coverage and spatial resolution. It is expected to yield a cluster catalogue containing $\simeq10^4$ entries. Extensive literature exists on the topic, but so far the influence of foregrounds and details of [[*Planck*]{}’s ]{}instrumentation and data acquisition have not been thoroughly addressed. In this work we aim at modelling the astrophysical and instrumental issues connected to the observation of SZ-clusters as exhaustively as possible: A simulation is presented that combines realistic maps of both SZ-effects with a realisation of the CMB, with four different Galactic foreground components (thermal dust, free-free emission, synchrotron emission and emission from rotational transitions of CO molecules), with maps containing the sub-millimetric emission from planets and asteroids of the Solar system and with instrumental noise. [[*Planck*]{}’s ]{}frequency response and beam shapes are modelled conforming to the present knowledge of [[*Planck*]{}’s ]{}receivers and its optical system. In order to extract the SZ-cluster signal, filtering schemes based on matched and scale-adaptive filtering are extended to spherical data sets.

In contrast to the recent work by @2004astro.ph..6190G, our SZ-simulation does not rely on idealised scaling relations and, furthermore, takes account of the clusters’ morphological variety. The Galactic foregrounds are modelled in concordance with WMAP observations [see @2003ApJS..148...97B], which constitutes an improvement over the simplifying assumptions made by Geisb[ü]{}sch et al. In addition, instrumentation issues such as non-isotropic detector noise are properly incorporated into the simulation. The filter scheme employed in the paper by Geisb[ü]{}sch et al. is the harmonic-space maximum entropy method introduced by @2002MNRAS.336...97S, which assumes approximate prior knowledge of the emission components’ power spectra. Its computational demand is much higher than that of matched and scale-adaptive filtering: in fact, the computations presented in this work can be run on a notebook-class computer.

The paper is structured as follows: After a brief recapitulation of the SZ-effect in Sect. \[sect\_szdef\], the [[*Planck*]{}]{}-satellite and instrumental issues connected to the observation of CMB anisotropies are described in Sect. \[sect\_planck\]. The foreground emission components are introduced in Sect. \[sect\_foreground\]. The steps in the simulation of flux maps for the various [[*Planck*]{}]{}-channels are described and their correlation properties are examined in Sect. \[sect\_plancksim\]. The theory of matched and scale-adaptive filtering is extended to spherical data sets and the resulting filter kernel shapes are discussed in detail in Sect. \[sect\_filtering\]. A summary in Sect. \[sect\_summary\] concludes the paper.

Throughout the paper, the cosmological model assumed is the standard cosmology, which has recently been supported by observations of the WMAP satellite [@2003astro.ph..2209S]. Parameter values have been chosen as $\Omega_\mathrm{M} = 0.3$, $\Omega_\Lambda =0.7$, $H_0 = 100\,h\,\mbox{km~}\mbox{s}^{-1}\mbox{ Mpc}^{-1}$ with $h = 0.7$, $\Omega_\mathrm{B} = 0.04$, $n_\mathrm{s} =1$ and $\sigma_8=0.9$.

Sunyaev-Zel’dovich definitions {#sect_szdef}
==============================

The Sunyaev-Zel’dovich effects are the most important extragalactic sources of secondary anisotropies in the CMB.
Inverse Compton scattering of CMB photons with electrons of the ionised ICM gives rise to these effects and induces surface brightness fluctuations of the CMB sky, either because of the thermal motion of the ICM electrons (thermal SZ-effect) or because of the bulk motion of the cluster itself relative to the comoving CMB-frame along the line-of-sight (kinetic SZ-effect).

The relative change $\Delta T/T$ in thermodynamic CMB temperature at position $\bmath{\theta}$ as a function of dimensionless frequency $x=h\nu /(k_B T_\mathrm{CMB})$ due to the thermal SZ-effect is given by:

$$\begin{aligned} \frac{\Delta T}{T}(\bmath{\theta}) & = & y(\bmath{\theta})\,\left(x\frac{e^x+1}{e^x-1}-4\right)\mbox{ with }\\ y(\bmath{\theta}) & = & \frac{\sigma_\mathrm{T} k_B}{m_{\mathrm{e}}c^2}\int{\mathrm{d}}l\:n_{\mathrm{e}}(\bmath{\theta},l)T_{\mathrm{e}}(\bmath{\theta},l)\mbox{,} \label{sz_temp_decr}\end{aligned}$$

where the amplitude $y$ of the thermal SZ-effect is commonly known as the thermal Comptonisation parameter, which itself is defined as the line-of-sight integral of the temperature-weighted thermal electron density. $m_{\mathrm{e}}$, $c$, $k_B$ and $\sigma_\mathrm{T}$ denote the electron mass, the speed of light, Boltzmann’s constant and the Thomson cross-section, respectively. The kinetic SZ-effect arises due to the motion of the cluster parallel to the line of sight relative to the CMB-frame:

$$\frac{\Delta T}{T}(\bmath{\theta}) = -w(\bmath{\theta})\mbox{ with } w(\bmath{\theta}) = \frac{\sigma_\mathrm{T}}{c}\int{\mathrm{d}}l\:n_{\mathrm{e}}(\bmath{\theta},l)\upsilon_r(\bmath{\theta},l)\mbox{.}$$

Here, $\upsilon_r$ is the radial component of the cluster’s velocity. The convention is such that $\upsilon_r<0$ if the cluster is moving towards the observer. In this case, the CMB temperature is increased. In analogy, the quantity $w$ is referred to as the kinetic Comptonisation.

The SZ-observables are the line-of-sight Comptonisations integrated over the solid angle subtended by the cluster. The quantities $\mathcal{Y}$ and $\mathcal{W}$ are referred to as the integrated thermal and kinetic Comptonisations, respectively:

$$\begin{aligned} \mathcal{Y} & = & \int{\mathrm{d}}\Omega\: y(\bmath{\theta}) = d_A^{-2}(z)\cdot\frac{\sigma_\mathrm{T} k_B}{m_e c^2}\:\int{\mathrm{d}}V\:n_e T_e\\ \mathcal{W} & = & \int{\mathrm{d}}\Omega\: w(\bmath{\theta}) = d_A^{-2}(z)\cdot\frac{\sigma_\mathrm{T}}{c}\:\int{\mathrm{d}}V\:n_e \upsilon_r\end{aligned}$$

Here, $d_A(z)$ denotes the angular diameter distance of a cluster situated at redshift $z$.

Submillimetric observations with [[*Planck*]{}]{} {#sect_planck}
=================================================

The [[*Planck*]{}]{}-mission[^2]$^{,}$[^3] will perform a polarisation-sensitive survey of the complete microwave sky in nine observing frequencies from the Lagrange point $L_2$ in the Sun-Earth system. It will observe at angular resolutions of up to $5\farcm0$ in the best channels and will achieve micro-Kelvin sensitivity relying on bolometric receivers [high frequency instrument [[*HFI*]{}]{}, described in @lit_hfi] and on high electron mobility transistors (low frequency instrument [[*LFI*]{}]{}). The main characteristics are summarised in Table \[table\_planck\_channel\]. [[*Planck*]{}’s ]{}beam characteristics are given in Sect. \[planck\_beamshape\], and the scanning strategy and the simulation of spatially non-uniform detector noise are outlined in Sect. \[planck\_scannoise\].
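As a quick numerical cross-check of the frequency dependence stated above, and of the sign change of the thermal SZ-flux across the channels listed in Table \[table\_planck\_channel\], the spectral factor $x(e^x+1)/(e^x-1)-4$ can be evaluated at the band centres. This is an illustrative sketch with rounded physical constants, not part of the simulation pipeline described in this paper.

```python
import numpy as np

h, k_B, T_CMB = 6.626e-34, 1.381e-23, 2.725   # SI units; T_CMB in K

def g_thermal(nu):
    """Spectral factor x(e^x+1)/(e^x-1)-4 of the thermal SZ-effect."""
    x = h * nu / (k_B * T_CMB)
    return x * (np.exp(x) + 1) / (np.exp(x) - 1) - 4

# evaluate at the nine Planck band centres; the factor changes sign near 217 GHz
for nu_GHz in [30, 44, 70, 100, 143, 217, 353, 545, 857]:
    print(f"{nu_GHz:4d} GHz: g = {g_thermal(nu_GHz * 1e9):+.3f}")
```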
| [[*Planck*]{}]{} channel | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| centre frequency $\nu_0$ | 30 GHz | 44 GHz | 70 GHz | 100 GHz | 143 GHz | 217 GHz | 353 GHz | 545 GHz | 857 GHz |
| frequency window $\Delta\nu$ | 3.0 GHz | 4.4 GHz | 7.0 GHz | 16.7 GHz | 23.8 GHz | 36.2 GHz | 58.8 GHz | 90.7 GHz | 142.8 GHz |
| resolution $\Delta\theta$ (FWHM) | $33\farcm4$ | $26\farcm8$ | $13\farcm1$ | $9\farcm2$ | $7\farcm1$ | $5\farcm0$ | $5\farcm0$ | $5\farcm0$ | $5\farcm0$ |
| noise level $\sigma_\mathrm{N}$ | 1.01 mK | 0.49 mK | 0.29 mK | 5.67 mK | 4.89 mK | 6.05 mK | 6.80 mK | 3.08 mK | 4.49 mK |
| thermal SZ-flux ${\langle}S_\mathcal{Y}{\rangle}$ | -12.2 Jy | -24.8 Jy | -53.6 Jy | -82.1 Jy | -88.8 Jy | -0.7 Jy | 146.0 Jy | 76.8 Jy | 5.4 Jy |
| kinetic SZ-flux ${\langle}S_\mathcal{W}{\rangle}$ | 6.2 Jy | 13.1 Jy | 30.6 Jy | 55.0 Jy | 86.9 Jy | 110.0 Jy | 69.1 Jy | 15.0 Jy | 0.5 Jy |
| antenna temperature $\Delta T_\mathcal{Y}$ | -440 nK | -417 nK | -356 nK | -267 nK | -141 nK | -0.5 nK | 38 nK | 8.4 nK | 0.2 nK |
| antenna temperature $\Delta T_\mathcal{W}$ | 226 nK | 220 nK | 204 nK | 179 nK | 138 nK | 76 nK | 18 nK | 1.6 nK | 0.02 nK |

Beam shapes {#planck_beamshape}
-----------

\[sect\_beam\] The beam shapes of [[*Planck*]{}]{} are well described by azimuthally symmetric Gaussians $b(\theta) = \frac{1}{2\pi\sigma_\theta^2}\exp\left(-\frac{\theta^2}{2\sigma_\theta^2}\right)$ with $\sigma_\theta = \frac{\Delta\theta}{\sqrt{8\ln(2)}}$. The residuals from the ideal Gaussian shape (ellipticity, higher-order distortions, diffraction rings, far-side lobes, pick-up of stray-light) are expected not to exceed the percent level and are neglected for the purpose of this work. Table \[table\_planck\_channel\] gives the angular resolution $\Delta\theta$ in terms of FWHM of each [[*Planck*]{}]{}-channel for reference.

Scanning strategy and noise-equivalent maps {#planck_scannoise}
-------------------------------------------

CMB observations by [[*Planck*]{}]{} will proceed in great circles fixed on the ecliptic poles. A single scan will start at the North ecliptic pole, will follow a meridian to the South ecliptic pole and back to the North ecliptic pole by following the antipodal meridian. Such a scan will last one minute and will be repeated sixty times. After that, the rotation axis will be shifted in a precessional motion by $2\farcm5$ (approximately half a beam diameter) and the scan repeated. In this way, the entire sky is mapped once in 180 days.

Fourier transform of the noise time series of [[*Planck*]{}’s ]{}receivers yields a noise power spectrum $P(f)$ of the shape

$$P(f) = \sigma_\mathrm{N}^2\cdot\left[1 + \left(\frac{f}{f_\mathrm{knee}}\right)^{-\alpha}\right]\mbox{,}$$

i.e. the noise consists of two components: a power law component in frequency $f$, described by the spectral index $\alpha$, which assumes values $0\leq\alpha\leq 2$, and a white noise component, smoothly joined at the frequency $f_\mathrm{knee}$. The $f^{-\alpha}$-part of the noise spectrum originates from zero-point drifts of the detector gain on large time scales. This power law component exhibits low-frequency variations that lead to the typical stripe pattern in simulated [[*Planck*]{}]{}-maps due to the scanning strategy.
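For illustration, a noise time-stream with the power spectrum $P(f)$ given above can be drawn by shaping white noise in Fourier space. The parameter values below ($\sigma_\mathrm{N}$, $f_\mathrm{knee}$, $\alpha$, sampling rate) are placeholders rather than actual receiver parameters, and the normalisation of this sketch is only approximate.

```python
import numpy as np

def one_over_f_noise(n, dt, sigma_N, f_knee, alpha, rng):
    """Draw a noise time-stream whose power spectrum roughly follows
    P(f) = sigma_N^2 [1 + (f/f_knee)^(-alpha)] (illustrative sketch only)."""
    f = np.fft.rfftfreq(n, dt)
    shape = np.ones_like(f)
    shape[1:] = np.sqrt(1.0 + (f[1:] / f_knee) ** (-alpha))  # skip f = 0
    white = rng.normal(0.0, sigma_N, n)
    return np.fft.irfft(np.fft.rfft(white) * shape, n)

rng = np.random.default_rng(0)
stream = one_over_f_noise(n=2**14, dt=1.0 / 200.0, sigma_N=1.0,
                          f_knee=0.1, alpha=1.0, rng=rng)
print(stream.std())
```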
Algorithms for destriping the maps are a current research topic (for example, the [Mirage]{}-algorithm proposed by @2004astro.ph..1505Y, the [MAPCUMBA]{}-algorithm and maximum-likelihood approaches), but it can be expected that the destriping can be done very efficiently such that the remaining noise largely consists of uncorrelated pixel noise. In order to incorporate uncorrelated pixel noise into the simulation, a set of maps has been constructed, where at each pixel a number has been drawn from a Gaussian distribution with width $\sigma_\mathrm{N}$. For [[*Planck*]{}’s ]{}[[*HFI*]{}]{}-receivers, the rms-fluctuations $\sigma_\mathrm{N}$ in antenna temperature can be calculated from the noise equivalent power NEP and the sampling frequency $\nu_\mathrm{sampling}=200$ Hz via:

$$\sigma_\mathrm{N} = \frac{2~\mathrm{NEP}\sqrt{\nu_\mathrm{sampling}}}{k_B \Delta\nu} \quad\mbox{({{\em HFI}})} \label{eqn_hfi_noise}$$

Alternatively, for [[*Planck*]{}’s ]{}[[*LFI*]{}]{}-receivers, the rms-fluctuations $\sigma_\mathrm{N}$ in antenna temperature are given by:

$$\sigma_\mathrm{N} = \sqrt{2}\frac{T_\mathrm{noise} + T_\mathrm{CMB}}{\sqrt{\Delta\nu/\nu_\mathrm{sampling}}} \quad\mbox{({{\em LFI}})} \label{eqn_lfi_noise}$$

Values for $T_\mathrm{noise}$ and NEP can be obtained from [[*Planck*]{}’s ]{}simulation pipeline manual. The resulting effective noise level for all [[*Planck*]{}]{} channels for a single observation of a pixel is given in Table \[table\_planck\_channel\]. The formulae and respective parameters are taken from the [[*Planck*]{}]{} simulation manual, available via [[*Planck*]{}’s ]{}[LiveLink]{}. The rms-fluctuations $\sigma_\mathrm{N}$ in antenna temperature have to be scaled by $\sqrt{n_\mathrm{det}}$ (assuming Poissonian statistics), where $n_\mathrm{det}$ denotes the number of redundant receivers per channel, because they provide independent surveys of the microwave sky.

From simulated scanning paths it is possible to derive an exposure map using the [simmission]{}- and [multimod]{}-utilities. An example of such an exposure map in the vicinity of the North ecliptic pole is given in Fig. \[figure\_exposure\_map\]. Using the number of observations $n_\mathrm{obs}$ per pixel, it is possible to scale down the noise amplitudes by $\sqrt{n_\mathrm{obs}}$ and to obtain a realistic noise map for each channel. Here, we apply the simplification that all detectors of a given channel are arranged collinearly. In this case, the exposure maps will have sharp transitions from well-observed regions around the ecliptic poles to the region around the ecliptic equator. In real observations these transitions will be smoothed out due to slight displacements of the optical axes relative to each other, which causes the effective exposure pattern to be a superposition of rotated and distorted single-receiver exposure patterns.

Foreground emission components {#sect_foreground}
==============================

The observation of the CMB and of SZ-clusters is seriously impeded by various Galactic foregrounds and by the thermal emission of celestial bodies of our Solar system. In order to describe these emission components, template maps from microwave surveys are used. @1999NewA....4..443B give a comprehensive review of the foreground components relevant for the [[*Planck*]{}]{} mission. As foreground components we include thermal emission from dust in the Galactic plane (Sect. \[foreground\_dust\]), Galactic synchrotron (Sect. \[foreground\_synchro\]) and free-free emission (Sect.
\[foreground\_freefree\]), line emission from rotational transitions of carbon monoxide molecules in giant molecular clouds (Sect. \[foreground\_co\]), sub-millimetric emission from planets (Sect. \[foreground\_planets\]) and from minor bodies of the Solar system (Sect. \[foreground\_asteroids\]). Foreground components omitted at this stage are discussed in Sect. \[foreground\_omitted\]. In this work, no attempt is made at modelling the interactions between the various foreground components because of poorly known parameters such as the spatial arrangement along the line-of-sight of the emitting and absorbing components. As an example, the reader is referred to @2003ApJS..146..407F, where the absorption of Galactic free-free emission by dust is discussed.

Galactic dust emission {#foreground_dust}
----------------------

At frequencies above $\sim$ 100 GHz, the thermal emission from dust in the disk of the Milky Way is the most prominent feature in the microwave sky. Considerable effort has been undertaken to model the thermal emission from Galactic dust [@1997AAS...191.8704S; @1998ApJ...500..525S; @1999ApJ...524..867F; @2000ApJ...544...81F]. The thermal dust emission is restricted to low Galactic latitudes and the thin disk is easily discernible. The input template map (see Fig. \[figure\_dustmap\]) is derived from an observation at a wavelength of $\lambda=100~\umu\mathrm{m}$, i.e. $\nu_0=3~\mathrm{THz}$. Its amplitudes $A_\mathrm{dust}$ are given in MJy/sr, which are extrapolated to the actual frequency channels of [[*Planck*]{}]{} using a two-component model suggested by C. Baccigalupi (personal communication). Despite the fact that the dust is expected to spread over a large range of temperatures, the model reproduces the thermal emission remarkably well. This model yields for the flux $S_\mathrm{dust}(\nu)$:

$$S_\mathrm{dust}(\nu) = \frac{f_1 q\cdot\left(\frac{\nu}{\nu_0}\right)^{\alpha_1} B(\nu,T_1) + f_2\cdot\left(\frac{\nu}{\nu_0}\right)^{\alpha_2} B(\nu,T_2)}{f_1 q B(\nu_0,T_1) + f_2 B(\nu_0,T_2)}\cdot A_\mathrm{dust}\mbox{.}$$

The choice of parameters used is: $f_1 = 0.0363$, $f_2 = 1-f_1$, $\alpha_1=1.67$, $\alpha_2=2.70$, $q=13.0$. The two dust temperatures are $T_1 = 9.4$ K and $T_2 = 16.2$ K. The function $B(\nu,T)$ denotes the Planckian emission-law:

$$B(\nu,T) = \frac{2h}{c^2}\cdot\frac{\nu^3}{\exp(h\nu/k_B T)-1}\mbox{.}$$

Galactic synchrotron emission {#foreground_synchro}
-----------------------------

Relativistic electrons of the interstellar medium produce synchrotron radiation by spiralling around magnetic field lines, which impedes CMB observations most strongly at frequencies below 100 GHz. The synchrotron emission reaches out to high Galactic latitudes and is an important ingredient for modelling foreground emission in microwave observations. An all-sky survey at an observing frequency of 408 MHz has been compiled and adopted for usage with [[*Planck*]{}]{} (see Fig. \[figure\_synchromap\]). The average angular resolution of this survey is $0\fdg85$ (FWHM). Recent observations with WMAP [@2003ApJS..148...97B] indicate that the spectral slope of the synchrotron emission changes dramatically from $\gamma = -0.75$ at frequencies below 22 GHz to $\gamma = -1.25$ above 22 GHz. Theoretically, this may be explained by a momentum-dependent diffusion coefficient for cosmic-ray electrons. In order to take account of this spectral steepening, the amplitudes $A_\mathrm{synchro}$ are multiplied by a prefactor in order to obtain the synchrotron fluxes at $\nu=22~\mathrm{GHz}$.
This value is then extrapolated to [[*Planck*]{}’s ]{}observing frequencies with a spectral index of $\gamma = -1.25$: The amplitudes $A_\mathrm{synchro}$ of the input map are given in units of MJy/sr, and for the flux $S_\mathrm{synchro}(\nu)$ one thus obtains: $$S_\mathrm{synchro}(\nu) = \sqrt{\frac{22~\mathrm{GHz}}{408~\mathrm{MHz}}}\cdot A_\mathrm{synchro}\cdot \left(\frac{\nu}{408~\mathrm{MHz}}\right)^{-1.25}\mbox{.}$$ Here, the fact that the synchrotron spectral index shows significant variations across the Milky Way due to varying magnetic field strength is ignored. Instead, a spatially constant spectral behaviour is assumed. Galactic free-free emission {#foreground_freefree} --------------------------- The Galactic ionised plasma produces free-free emission, which is an important source of contamination in CMB observations, as recently confirmed by @2003ApJS..148...97B in WMAP observations. Aiming at modelling the free-free emission at microwave frequencies, we rely on an $H_\alpha$-template provided by @2003ApJS..146..407F. Modeling of the free-free emission component on the basis of an $H_alpha$-template is feasible because both emission processes depend on the emission measure $\int n_e^2{\mathrm{d}}l$, where $n_e$ is the number density of electrons. This template is a composite of three $H_\alpha$-surveys and is because of its high resolution (on average $6\farcm0$ FWHM) particularly well suited for CMB foreground modelling. The morphology of the free-free map is very complex and the emission reaches out to intermediate Galactic latitude. For relating $H_\alpha$-fluxes $A_{H_\alpha}$ given in units of Rayleighs to the free-free signal’s antenna temperature $T_\mathrm{free-free}$ measured in Kelvin, @1998PASA...15..111V gives the formula: $$\frac{T_\mathrm{free-free}(\umu\mathrm{K})}{A_{H_\alpha}(R)} \simeq 14.0\left(\frac{T_p}{10^4~\mathrm{K}}\right)^{0.317}\cdot 10^{290~\mathrm{K}\cdot T_p^{-1}}\cdot g_\mathrm{ff}\cdot\left(\frac{\nu}{10\mbox{ GHz}}\right)^{-2}\mbox{.}$$ $T_p$ denotes the plasma temperature and is set to $10^4~\mathrm{K}$ in this work. An approximation for the Gaunt factor $g_\mathrm{ff}$ valid for microwave frequencies in the range $\nu_p\ll\nu\ll k_B T/h$ ($\nu_p$ is the plasma frequency) is given by [@2003ApJS..146..407F]: $$g_\mathrm{ff} = \frac{\sqrt{3}}{\pi} \left[\ln\left(\frac{(2 k_B T_p)^{3/2}}{\pi e^2 \nu\sqrt{m_e}}\right)-\frac{5}{2}\gamma_E\right]\mbox{,}$$ where $e$ and $m_e$ denote electron charge and mass (in Gaussian units) and $\gamma_E\simeq0.57721$ is the Euler constant. The contribution of fractionally ionised helium to the free-free emissivity as well as the absorption by interstellar dust has been ignored because of its being only a small contribution in the first case and because of poorly known parameters in the latter case. The antenna temperature can be converted to the free-free flux $S_\mathrm{free-free}(\nu)$ by means of: $$S_\mathrm{free-free}(\nu) = 2\frac{\nu^2}{c^2}\cdot k_B T_\mathrm{free-free}(\mathrm{K})\mbox{.}$$ Concerning the free-free emission, there might be the possibility of an additional free-free component uncorrelated with the $H_\alpha$-emission. This hot gas, however, should emit X-ray line radiation, which has not been observed. 
CO-lines from giant molecular clouds {#foreground_co} ------------------------------------ In a spiral galaxy such as the Milky Way, a large fraction of the interstellar medium is composed of molecular hydrogen, that resides in giant molecular clouds (GMC), objects with masses of $10^4 - 10^6 M_{\sun}$ and sizes of $50 - 200$ pc. Apart from molecular hydrogen, the GMCs contain carbon monoxide (CO) molecules in significant abundance. The rotational transitions of the CO molecule at 115 GHz and higher harmonics thereof constitute a source of contamination for all [[*Planck*]{}]{} [[*HFI*]{}]{}-channels. An extensive search for atomic and molecular transition lines was undertaken by @1994ApJ...434..587B with the [*FIRAS*]{} instrument onboard [*COBE*]{}. The CO-contamination is modelled by employing a mosaic of CO-surveys assembled by @1996AAS...189.7004D [@2001ApJ...547..792D]. It shows the velocity-integrated intensity of the transition from the first excited state ($J=1$) to the ground state ($J=0)$ close to the Galactic plane ($b<5^\circ$), and additionally comprises a few CO clouds at higher Galactic latitude, as well as the Large Magellanic Cloud and the Andromeda galaxy M 31. Due to the composition of the map, the angular resolution is not uniform, but the best resolution of $\simeq 7\farcm5$ is reached for a large area around the Galactic plane. From this map, it is possible to derive the line intensities of the higher harmonics, assuming thermal equilibrium: The frequency $\nu$ for a transition from a state of rotational quantum number $J$ to a state with quantum number $J+1$ of the CO molecule follows from elementary quantum mechanics: The rotational energy of a molecule with moment of inertia $\theta$ and angular momentum $\bmath{J}$ is $E_\mathrm{rot}=\bmath{J}^2/2\theta=\hbar^2\cdot J(J+1)/2\theta$. In the last step the quantum number $J$ was introduced. For the transition energy between two subsequent rotation levels, one obtains: $$\nu_{J\leftrightarrow J+1} = 2Qc\cdot(J+1) = 115\mbox{ GHz}\cdot(J+1)\mbox{,}$$ where $Q=h/8\pi^2 c\theta$ is a measure of the inverse moment of inertia of the molecule and $c$ denotes the speed of light. Thus, the spectrum consists of equidistant lines. The relative intensities of those lines is given by the ratio of their occupation numbers $\chi_J$: $$\label{eqn_occupation} \chi_J = (2J+1)\cdot\exp\left(-\frac{Qhc}{k_\mathrm{B} T_\mathrm{CO}} J(J+1)\right)\mbox{,}$$ i.e. the relative line intensities $q_{J\leftrightarrow J+1}$ of two consecutive lines is given by: $$q_{J\leftrightarrow J+1} = \frac{\chi_{J+1}}{\chi_J} = \frac{2J+3}{2J+1}\cdot\exp\left(-\frac{2Qhc}{k_B T_\mathrm{CO}}\cdot(J+1)\right)$$ $\chi_J$ is detemined by a statistical weight $(2J+1)$ reflecting the degeneracy of angular momentum and a Boltzmann factor. For the determination of line intensities thermal equilibrium is assumed, common estimates for the temperature inside GMCs are $T_\mathrm{CO} = 10 - 30$ K. For the purpose of this work, we choose $T_\mathrm{CO} = 20$ K. From the brightness temperature $T_A$ one obtains the CO-flux $S_\mathrm{CO-line}(\nu)$ by means of the following equation: $$S_\mathrm{CO-line}(\nu) = 2\frac{\nu^2}{c^2}\cdot k_B T_A(\mathrm{K})\cdot p(\nu-\nu_{J\leftrightarrow J+1})\mbox{,}$$ where the line shape $p(\nu-\nu_{J\leftrightarrow J+1})$ is assumed to be small in comparison to [[*Planck*]{}’s ]{}frequency response windows such that its actual shape (for instance, a Voigt-profile) is irrelevant. 
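As a small numerical sketch of Eq. (\[eqn\_occupation\]) and of the line ratios derived from it, the relative intensities of the first few CO transitions at $T_\mathrm{CO} = 20$ K can be computed as follows; the physical constants are rounded and the snippet is for illustration only.

```python
import numpy as np

h = 6.626e-34     # J s
k_B = 1.381e-23   # J / K
nu_10 = 115.0e9   # Hz, frequency of the J = 1 -> 0 transition, so 2Qc = nu_10
T_CO = 20.0       # K, assumed cloud temperature

J = np.arange(0, 8)
# occupation numbers chi_J = (2J+1) exp(-Qhc J(J+1) / (k_B T_CO)), with Qhc = h nu_10 / 2
chi = (2 * J + 1) * np.exp(-h * nu_10 / (2 * k_B * T_CO) * J * (J + 1))

# relative intensities q_{J <-> J+1} of consecutive lines at 115 GHz * (J+1)
for j in J[:-1]:
    print(f"{int(115 * (j + 1)):4d} GHz: q = {chi[j + 1] / chi[j]:.3f}")
```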
Planetary submillimetric emission {#foreground_planets}
---------------------------------

Planets produce infra-red and sub-millimetric radiation by absorbing sunlight and by re-emitting this thermal load imposed by the sun. The thermal properties of Mars, Jupiter and Saturn have been the target of several space missions [@1997ApJ...488L.161G; @1986Icar...65..244G to name but a few]. For the description of the submillimetric thermal emission properties of planets, an extension to the Wright & Odenwald model [@1976ApJ...210..250W; @1971AJ.....76..719N] was used. The orbital motion of the planets is sufficiently fast such that their movements, including their epicyclic motion relative to the Lagrangian point $L_2$, [[*Planck*]{}’s ]{}observing position, have to be taken into account. All planets are imaged twice in approximate half-year intervals due to [[*Planck*]{}’s ]{}scanning strategy, while showing tiny displacements from the ecliptic plane because of the Lissajous-orbit of [[*Planck*]{}]{} around $L_2$ and their orbital inclinations. The heat balance equation for a planet or asteroid reads as: $$E + F + W \equiv P_\mathrm{emission} = P_\mathrm{absorption} \equiv I + R\mbox{,}$$ where $E$ denotes the heat loss by thermal emission (i.e. the signal for [[*Planck*]{}]{}), $F$ the heat flux outward from the interior of the planet, $W$ is the heat lost by conduction to the planet’s atmosphere, $I$ is the Solar radiation absorbed and $R$ is the heating of the planet caused by the back-scattering of radiation emanating from the surface of the planet by the atmosphere. The definition of these quantities is given by eqns. (\[eqn\_B\_def\]) through (\[eqn\_R\_def\]): $$\begin{aligned} E & = & \epsilon\:\sigma\:T_\mathrm{planet}^4\mbox{,}\label{eqn_B_def}\\ F & = & k\cdot\frac{\upartial T_\mathrm{planet}}{\upartial x}\mbox{,}\label{eqn_F_def}\\ I & = & \frac{(1-A)G}{r^2}\cos(\theta^*)\cos\left(\frac{2\pi t}{\tau}\right)\mbox{,}\label{eqn_W_def}\\ R & = & \gamma\:\frac{(1-A)G}{r^2}\cos(\theta^*)\cos\left(\frac{2\pi t}{\tau}\right) = \gamma\:I_\mathrm{max}\mbox{, and}\label{eqn_I_def}\\ W & = & \kappa\:F\mbox{.}\label{eqn_R_def}\end{aligned}$$ Here, $\epsilon$ is the surface emissivity of the planet, $\sigma$ is the Stefan-Boltzmann constant, $T_\mathrm{planet}$ is the planet’s temperature, $k$ the coefficient of heat conduction, $A$ the planet’s bolometric albedo, $G$ the Solar constant (i.e. the energy flux density of Solar irradiation at the earth’s mean distance), $r$ the distance of the planet to the sun in astronomical units, $\tau$ the planet’s rotation period and $\theta^*$ the geographical latitude of the radiation absorbing surface element. The temperature distribution in the interior of the planet at radial position $x$ is controlled by the heat conduction equation: $$c\cdot\frac{\upartial T_\mathrm{planet}}{\upartial t} = k\cdot\frac{\upartial^2 T_\mathrm{planet}}{\upartial x^2}\mbox{,}$$ with the specific heat per unit volume $c$. In our model, the heat loss $W$ of the planet’s surface due to conduction to the planet’s atmosphere is taken to be a constant fraction of the heat flux $F$ outward from the interior of the planet, the constant of proportionality being $\kappa$, for which we assumed $\kappa=0.1$. Similarly, the heat gain by back-scattering of radiation by the atmosphere, $R$, was assumed to be a constant fraction $\gamma$ of the local noon Solar flux $I_\mathrm{max}$, where $\gamma$ was taken to be $\gamma=0.01$. The system of differential eqns.
(\[eqn\_B\_def\]) - (\[eqn\_R\_def\]), dependent on time $t$ and on Solar distance $r$, constitutes a heat conduction problem with periodic excitation (by the planet’s rotation). Thus, the heat balance of the planets is modelled by periodic solutions of the heat conduction differential equation. It was solved iteratively by applying Laplace transforms with periodic boundary conditions. The integration over the planet’s surface then yields the radiation flux. In the calculation, we addressed rocky and gaseous planets differently with respect to their thermal properties. Furthermore, the giant gaseous planets are known to have internal sources of heat generation, which has also been taken into account. The brightest point source in the microwave sky due to the planetary thermal emission is Jupiter, causing an increase in antenna temperature of $T_\mathrm{Jupiter}=93.6$ mK in the $\nu=100$ GHz-channel, followed by Saturn with $T_\mathrm{Saturn}=15.0$ mK. All outer planets apart from Pluto will be visible for [[*Planck*]{}]{}. Estimates show that even the Galilean satellites Ganymede, Callisto, Io and Europa and Saturn’s moon Titan are above the detection threshold of [[*Planck*]{}]{}, but they are outshone by the stray-light from Jupiter and Saturn, respectively, and are for that reason not included in our analysis. Because the planets are point sources and because of their fast movement and diverse surface temperatures, it is not feasible to produce a template and extrapolate the fluxes with a common emission law to [[*Planck*]{}]{}-frequencies. Instead, flux maps have been produced directly for each of the nine [[*Planck*]{}]{}-channels separately, taking account of the planetary motion, the solution of the heat balance equation laid down above and the finite beam-width. The same holds for asteroids, which are covered in the next section.

Submillimetric emission from asteroids {#foreground_asteroids}
--------------------------------------

Asteroids and minor bodies of the Solar system are easily observed by infrared satellites such as ISO and possibly by sub-millimetric observatories. An estimation by @2002NewA....7..483C shows that a large number of asteroids ($\sim 400$) should yield signals detectable by [[*Planck*]{}]{}. The orbital motion of all asteroids is fast enough to cause double detections at different positions in the sky separated by half a year due to [[*Planck*]{}’s ]{}scanning strategy. In contrast to planets, asteroids are not well restricted to the ecliptic plane and appear up to ecliptic latitudes of $\beta{\lower.5ex\hbox{{$\; \buildrel < \over \sim \;$}}}30\degr$. The thermal emission properties of asteroids are well understood, such that asteroids have been used for calibrating detectors and for determining beam shapes. The thermal model used for describing the submillimetric emission by asteroids is the same extension of the Wright & Odenwald model as for rocky planets. However, an additional feature that had to be incorporated was the beamed emission due to surface roughness. Furthermore, in the system of differential eqns. (\[eqn\_B\_def\]) - (\[eqn\_R\_def\]) the terms $W$ and $R$ were neglected because asteroids lack atmospheres. Information about the diameter and albedo was derived using the HG-magnitude system in the case of asteroids for which those quantities are unknown; otherwise literature values were taken [from @2000dba..book.....M and IAU’s [*Minor Planet Centre*]{} [^4]].
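To give an order-of-magnitude feeling for the quantities entering such thermal models, the following much-simplified Python sketch evaluates only two ingredients: the instantaneous sub-solar equilibrium temperature obtained when the conduction and back-scattering terms are dropped from the heat balance, and the skin depth of the periodic solution of the heat conduction equation. It is not the Wright & Odenwald extension used in the pipeline, and all material parameters are illustrative assumptions:

```python
import numpy as np

sigma_SB = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
G_sun    = 1361.0           # solar constant at 1 AU [W m^-2]

def subsolar_equilibrium_temperature(A, eps, r_AU):
    """Sub-solar equilibrium temperature from eps*sigma*T^4 = (1-A)*G/r^2,
    i.e. neglecting the F, W and R terms of the full heat balance."""
    return ((1.0 - A) * G_sun / (r_AU**2 * eps * sigma_SB)) ** 0.25

def thermal_skin_depth(k, c_vol, period):
    """e-folding depth of the periodic solution of c dT/dt = k d^2T/dx^2,
    T(x,t) = T0 + dT * exp(-x/delta) * cos(2 pi t/period - x/delta)."""
    omega = 2.0 * np.pi / period
    return np.sqrt(2.0 * k / (c_vol * omega))

# Example: a Ceres-like body (assumed albedo 0.09, emissivity 0.9) at 2.77 AU,
# with assumed regolith properties k [W m^-1 K^-1] and c_vol [J m^-3 K^-1]
print(subsolar_equilibrium_temperature(A=0.09, eps=0.9, r_AU=2.77))  # ~ 240 K
print(thermal_skin_depth(k=0.01, c_vol=1.2e6, period=9.07 * 3600.0)) # ~ 1 cm
```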
For asteroids whose rotation period is unknown, an empirical relation expressing the rotation period as a function of mass was used. The brightest sources include Ceres ($T_\mathrm{Ceres}=19.7~\umu\mbox{K}$), Pallas ($T_\mathrm{Pallas}=7.2~\umu\mbox{K}$), Vesta ($T_\mathrm{Vesta}=6.7~\umu\mbox{K}$) and Davida ($T_\mathrm{Davida}=2.1~\umu\mbox{K}$). The temperatures stated are antenna temperatures measured in the $\nu=100$ GHz-channel at the brightness maximum. Our simulation shows that the number of detectable asteroids is overestimated by @2002NewA....7..483C, who did not take the expected observation geometry and detector response into account. Typical surface temperatures of asteroids are of the order of 150 K, and therefore [[*Planck*]{}]{} is observing their thermal emission in the Rayleigh-Jeans regime. For that reason, the number of detectable asteroids increases with observing frequency. For our sample of $5\cdot10^4$ asteroids of the [*Minor Planet Centre*]{}’s catalogue, we find a couple of asteroids at $\nu=30$ GHz, a few tens of asteroids at $\nu=100$ GHz and up to 100 asteroids in the highest frequency band at $\nu=857$ GHz. Approximately 1200 asteroids will have fluxes above half of [[*Planck*]{}’s ]{}single-band detection limit estimated for ideal observation conditions, and thus they constitute an abundant population of point sources that possibly hampers the detection of SZ-clusters. The prediction of comets is very uncertain for the years 2007 through 2009: Many comets have not been detected yet, non-active comets are, with few exceptions, too faint, and the coma thermal emission features of active comets are very complex. For these reasons, they have been excluded from the analysis.

Future work concerning [[*Planck*]{}’s ]{}foregrounds {#foreground_omitted}
-----------------------------------------------------

Foreground components not considered so far include microwave point sources, such as infra-red galaxies and microwave emitting AGNs. The emission of infra-red galaxies is associated with absorption of star light by dust and re-emission at longer wavelengths. Galaxies with ongoing star formation can have large fractions ($\sim90$%) of their total emission at infra-red wavelengths, compared to about one third in the case of local galaxies. The integrated emission from unresolved infra-red galaxies accounts for the cosmic infra-red background (CIB), the fluctuations of which are impeding SZ-observations at frequencies above $\nu\simeq100$ GHz [@2004astro.ph..2571A]. The number counts of unresolved infra-red galaxies at [[*Planck*]{}]{}  frequencies have been estimated by @2003astro.ph..8464W; these counts were used by @2004astro.ph..2571A in order to estimate the level of fluctuation in the [[*Planck*]{}]{}-beam. In the easiest case, the sources are uncorrelated and the fluctuations obey Poissonian statistics, but the inclusion of correlations is expected to boost the fluctuations by a factor of $\sim1.7$ [@2003ApJ...590..664S]. According to @2004astro.ph..2571A, the resulting fluctuations vary between a few $10^2~\mathrm{Jy}/\mathrm{sr}$ and $10^5~\mathrm{Jy}/\mathrm{sr}$, depending on the observing channel. A proper modelling would involve a biasing scheme for populating halos, the knowledge of the star formation history and template spectra in order to determine the K-corrections. AGNs are another extragalactic source of submillimetric emission. Here, synchrotron emission is the radiation-generating mechanism.
The spectra show a variety of functional behaviours, with spectral indices $\alpha$ generally ranging from -1 to -0.5, but sources with inverted spectra $\alpha>0$ are commonplace. This variety makes it difficult to extrapolate fluxes to observing frequencies of CMB experiments. Two studies [@1998MNRAS.297..117T; @2001ApJ...562...88S] have estimated the fluctuations generated by radio emitting AGNs at SZ-frequencies and found them to amount to $10^3-10^4~\mathrm{Jy}/\mathrm{sr}$. However, AGNs are known to reside in high-density environments and the proper modelling would involve a (poorly known) biasing scheme in order to assign AGN to the dark matter halos. Apart from that, one would have to assume spectral properties from a wide range of spectral indices and AGN activity duty cycles. Therefore, the study of extragalactic sources has been omitted from this analysis. Yet another source of microwave emission in the Solar system is the zodiacal light . Modelling of this emission component is very difficult due to the Lissajous-orbit of [[*Planck*]{}]{} around the Lagrangian point $L_2$. The disk of interplanetary dust is viewed under varying angles depending on the orbital period and the integration over the spatially non-uniform emission features is very complicated. @2003Icar..164..384R have investigated the thermal emission by interplanetary dust from measurements by ISO and have found dust temperatures of $T_\mathrm{zodiacal}=250-300$ K and fluxes on the level of $\simeq10^3~\mathrm{Jy}/\mathrm{sr}$, i.e. the equilibrium temperature is separated by two orders of magnitude from the CMB temperature, which means that the intensities are suppressed by a factor of $\sim10^4$ due to the Rayleigh-Jeans regime of the zodiacal emission in which [[*Planck*]{}]{} is observing and by a factor of $10^5$ due to [[*Planck*]{}’s ]{}narrow beams. From this it is concluded that the emission from zodiacal light is unlikely to exceed values of a few $\sim\umu$Jy in observations by [[*Planck*]{}]{} which compares to the fluxes generated by faint asteroids. Thus, the zodiacal light constitutes only a weak foreground emission component at submillimetric wavelengths and can safely be neglected. Simulating SZ-observations by [[*Planck*]{}]{} {#sect_plancksim} ============================================== The simulation for assessing [[*Planck*]{}’s ]{}SZ-capabilities proceeds in four steps. Firstly, all-sky maps of the thermal and kinetic SZ-effects are prepared, the details of map-construction are given in Sect. \[sim\_szmap\]. Secondly, a realisation of the CMB was prepared for the assumed cosmological model (Sect. \[sim\_cmbmap\]). The amplitudes were co-added with the Galactic and ecliptic foregrounds introduced in the previous section, subsequently degraded in resolution with [[*Planck*]{}’s ]{}beams (Sect. \[sim\_beam\]). Finally, uncorrelated pixel noise as well as the emission maps comprising planets and asteroids were added. In the last section, cross-correlation properties of the various astrophysical and instrumental noise components are discussed (Sect. \[sim\_ccproperties\]). At this stage it should be emphasised that we work exclusively with spherical harmonics expansion coefficients $a_{\ell m}$ of the flux maps. 
The expansion of a function $a(\bmath{\theta})$ into spherical harmonics $Y_\ell^m(\bmath{\theta})$ and the corresponding inversion is given by: $$a_{\ell m} = \int{\mathrm{d}}\Omega\: a(\bmath{\theta})\cdot Y_\ell^m(\bmath{\theta})^*\mbox{ and } a(\bmath{\theta}) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell} a_{\ell m}\cdot Y_\ell^m(\bmath{\theta})\mbox{.} \label{eqn_ylm_decomp}$$ Here, ${\mathrm{d}}\Omega$ denotes the differential solid angle element. For reasons of computational feasibility, we assume isotropic spectral properties of each emission component, i.e. the template map is only providing the amplitude of the respective emission component, but the spectral dependences are assumed to remain the same throughout the sky. While this is an excellent approximation for the CMB and the SZ-effects (in the non-relativistic limit), it is a serious limitation for Galactic foregrounds, where e.g. the synchrotron spectral index or the dust temperatures show significant spatial variations. Adopting this approximation, the steps in constructing spherical harmonics expansion coefficients ${\langle}S_{\ell m}{\rangle}_{\nu_0}$ of the flux maps $S(\bmath{\theta},\nu)$ for all [[*Planck*]{}]{} channels consist of deriving the expansion coefficients of the template, converting the template amplitudes to flux units, extrapolate the fluxes with a known or assumed spectral emission law to [[*Planck*]{}’s ]{}observing frequencies, to finally convolve the emission law with [[*Planck*]{}’s ]{} frequency response window for computing the spherical harmonics expansion coefficients of the average measured flux ${\langle}S_{\ell m}{\rangle}_{\nu_0}$ at nominal frequency $\nu_0$ by using eqn. (\[eqn\_tlm\_exp\]). $${\langle}S_{\ell m}{\rangle}_{\nu_0} = \frac{\int{\mathrm{d}}\nu\: S_{\ell m}(\nu) R_{\nu_0}(\nu)}{\int{\mathrm{d}}\nu\: R_{\nu_0}(\nu)} = 2\frac{\nu_0^2}{c^2}\cdot k_B T_{\ell m}\mbox{.} \label{eqn_tlm_exp}$$ Here, $S_{\ell m}(\nu)$ describes the spectral dependence of the emission component considered, and $R_{\nu_0}(\nu)$ the frequency response of [[*Planck*]{}’s ]{}receivers centered on the fiducial frequency $\nu_0$. Assuming spatial homogeneity of the spectral behaviour of each emission component it is possible to decompose $S_{\ell m}(\nu)$ into $S_{\ell m}(\nu) = q(\nu)\cdot a_{\ell m}$, i.e. a frequency dependent function $q(\nu)$ and the spherical harmonics expansion coefficients $a_{\ell m}$ of the template describing the morphology. This is possible due to the fact that the decomposition eqn. (\[eqn\_ylm\_decomp\]) is linear. Additionally, eqn. (\[eqn\_tlm\_exp\]) gives the conversion from the averaged flux ${\langle}S_{\ell m}{\rangle}_\nu$ in a [[*Planck*]{}]{}-channel to antenna temperature $T_{\ell m}$. [[*Planck*]{}’s ]{}frequency response function $R_{\nu_0}(\nu)$ is well approximated by a top-hat function: $$R_{\nu_0}(\nu) = \left\{ \begin{array}{l@{,\:}l} 1 & \nu\in\left[\nu_0-\Delta\nu,\nu_0+\Delta\nu\right] \\ 0 & \nu\notin\left[\nu_0-\Delta\nu,\nu_0+\Delta\nu\right] \end{array} \right. \label{eq_freq_resp}$$ The centre frequencies $\nu_0$ and frequency windows $\Delta\nu$ for [[*Planck*]{}’s ]{}receivers are summarised in Table. \[table\_planck\_channel\]. In this way it is possible to derive a channel-dependent prefactor relating the flux expansion coefficients ${\langle}S_{\ell m}{\rangle}_{\nu_0}$ to the template expansion coefficients $A_{\ell m}$. 
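A compact sketch of this prefactor computation, assuming the top-hat response of eqn. (\[eq\_freq\_resp\]) and a spatially constant spectral law $q(\nu)$, might read as follows in Python (SI units; the channel centre and half-width used in the example are assumptions for illustration only):

```python
import numpy as np

c   = 2.99792458e8    # speed of light [m/s]
k_B = 1.380649e-23    # Boltzmann constant [J/K]

def band_average(q, nu0, delta_nu, n=512):
    """<q>_{nu0} = int q(nu) R(nu) dnu / int R(nu) dnu for a top-hat response
    R(nu) = 1 on [nu0 - delta_nu, nu0 + delta_nu] and 0 elsewhere."""
    nu = np.linspace(nu0 - delta_nu, nu0 + delta_nu, n)
    return np.trapz(q(nu), nu) / (2.0 * delta_nu)

def scale_template_alm(A_lm, q, nu0, delta_nu):
    """<S_lm>_{nu0}: channel-dependent prefactor times the template coefficients."""
    return band_average(q, nu0, delta_nu) * np.asarray(A_lm)

def flux_to_antenna_temperature(S, nu0):
    """Invert <S> = 2 nu0^2/c^2 k_B T of eqn. (eqn_tlm_exp)."""
    return S * c**2 / (2.0 * nu0**2 * k_B)

# Example: synchrotron-like power law relative to the 408 MHz template,
# for an assumed channel at nu0 = 143 GHz with an assumed half-width of 10 GHz
q_synchro = lambda nu: (nu / 408e6) ** -1.25
prefactor = band_average(q_synchro, 143e9, 10e9)
```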
The superposition of the various emission components in spherical harmonics and the determination of response-folded fluxes is most conveniently done using the [almmixer]{}-utility of [[*Planck*]{}’s ]{}simulation package. SZ-map preparation {#sim_szmap} ------------------ For constructing an all-sky Sunyaev-Zel’dovich map, a hybrid approach has been pursued. Due to the SZ-clusters being detectable out to very large redshifts, due to their clustering properties on very large angular scales, and due to the requirement of reducing cosmic variance when simulating all-sky observations as will be performed by [[*Planck*]{}]{}, there is the need for very large simulation boxes, encompassing redshifts of $z\simeq1$ which corresponds to comoving scales exceeding 2 Gpc. Unfortunately, a simulation incorporating dark matter and gas dynamics that covers cosmological scales of that size down to cluster scales and possibly resolving cluster substructure is beyond computational feasibility. For that reason, two simulations have been combined: The Hubble-volume simulation [@2001MNRAS.321..372J; @2000MNRAS.319..209C], and a smaller scale simulation including (adiabatic) gas physics by @2002ApJ...579...16W performed with [GADGET]{} [@2001NewA....6...79S; @2002MNRAS.333..649S]. All-sky maps of the SZ-sky were constructed by using the light-cone output of the Hubble-volume simulation as a cluster catalogue and template clusters from the small-scale gas-dynamical simulation. In this way, the sky-maps contain all clusters above $5\cdot 10^{13} M_{\sun}/h$ out to redshift $z=1.48$. The analysis undertaken by gives expected mass and redshift ranges for detectable thermal SZ-clusters, which are covered completely by the all-sky SZ-map presented here. The maps show the correct 2-point halo correlation function, incorporate the evolution of the mass function and the correct distribution of angular sizes. Furthermore, they exhibit cluster substructure and deviations from the ideal cluster scaling relations induced by the departure from spherical symmetry. The velocities used for computing the kinetic SZ-effect correspond to the ambient density field. The map construction process and the properties of the resulting map are in detail described in @2004_szmap. Visual impressions of the SZ-maps are given by Figs. \[figure\_thszfield\] and \[figure\_kinszfield\]. The fluxes generated by the thermal SZ-effect $S_\mathcal{Y}(x)$ and of the kinetic SZ-effect $S_\mathcal{W}(x)$ are given by eqns. (\[eq:S\_thSZ\]) and (\[eq:S\_kinSZ\]), respectively. The dimensionless frequency is defined as $x=h\nu/(k_B T_\mathrm{CMB})$ and the flux density of the CMB is given by $S_0=(k_b T_\mathrm{CMB})^3\pi^3/c^2/h^2/5400=22.9~\mathrm{Jy}/\mathrm{arcmin}^2$: $$\begin{aligned} S_\mathcal{Y}(x) & = & S_0\cdot\mathcal{Y}\cdot \frac{x^4\cdot\exp(x)}{(\exp(x)-1)^2}\cdot\left[x\frac{\exp(x)+1}{\exp(x)-1} - 4\right]\mbox{.} \label{eq:S_thSZ} \\ S_\mathcal{W}(x) & = & S_0\cdot\mathcal{W}\cdot\frac{x^4\cdot\exp(x)}{(\exp(x)-1)^2}\mbox{.} \label{eq:S_kinSZ}\end{aligned}$$ Table \[table\_planck\_channel\] summarises the fluxes $S_\mathcal{Y}$ and $S_\mathcal{W}$ and the corresponding changes in antenna temperature $T_\mathcal{Y}$ and $T_\mathcal{W}$ for the respective Comptonisation of $\mathcal{Y} = \mathcal{W} = 1~\mathrm{arcmin}^2$ for all [[*Planck*]{}]{}-channels. Fig. \[figure\_sz\_planck\_window\] shows how the frequency dependence of the SZ-signal is altered by [[*Planck*]{}’s ]{}relatively broad frequency response functions. 
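The following Python sketch evaluates eqns. (\[eq:S\_thSZ\]) and (\[eq:S\_kinSZ\]) and the effect of averaging them over a top-hat frequency response; the band half-width in the example is an assumed value for illustration, not the actual receiver specification:

```python
import numpy as np

h, k_B, T_CMB = 6.62607015e-34, 1.380649e-23, 2.725
S0 = 22.9   # Jy / arcmin^2, flux density of the CMB as quoted in the text

def S_thermal(nu, Y=1.0):
    """Thermal SZ flux of eqn. (eq:S_thSZ) for a Comptonisation Y in arcmin^2."""
    x = h * nu / (k_B * T_CMB)
    # x*(e^x+1)/(e^x-1) - 4  ==  x*coth(x/2) - 4
    return S0 * Y * x**4 * np.exp(x) / np.expm1(x)**2 * (x / np.tanh(0.5 * x) - 4.0)

def S_kinetic(nu, W=1.0):
    """Kinetic SZ flux of eqn. (eq:S_kinSZ) for W in arcmin^2."""
    x = h * nu / (k_B * T_CMB)
    return S0 * W * x**4 * np.exp(x) / np.expm1(x)**2

def band_averaged(S, nu0, delta_nu, n=512):
    """Average a spectral law over the top-hat response of eqn. (eq_freq_resp)."""
    nu = np.linspace(nu0 - delta_nu, nu0 + delta_nu, n)
    return np.trapz(S(nu), nu) / (2.0 * delta_nu)

# Relative change of the thermal SZ flux at 143 GHz for an assumed 10 GHz half-width
nu0 = 143e9
print(band_averaged(S_thermal, nu0, 10e9) / S_thermal(nu0) - 1.0)
```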
The relative deviation of the curves in which the frequency window has been taken into account from the unaltered curve amounts to $5\ldots15$%, depending on observing frequency.

CMB-map generation {#sim_cmbmap}
------------------

The angular power spectrum $C_\ell$ is computed for a flat $\Lambda$CDM-cosmology using the [CMBfast]{} code by @1996ApJ...469..437S. In addition to the cosmological parameters already given in Sect. \[sect\_intro\], we use adiabatic initial conditions, set the CMB monopole to $T_\mathrm{CMB}=2.725$ K [@1999ApJ...512..511M] and the primordial He-mass fraction to $X_\mathrm{He} = 0.24$. The reionisation optical depth $\tau$ was set to $\tau=0.17$ and the reionisation redshift was taken to be $z_\mathrm{reion}=20$ [@2003ApJS..148....1B]. The angular power spectrum of the CMB is normalised to COBE data. With the spectrum of $C_\ell$-coefficients, a set of $a_{\ell m}$-coefficients was synthesised by using the [synalm]{} code based on [synfast]{} by @1998elss.confE..47H. The factors for converting the $a_{\ell m}$-coefficients of the CMB map, given in thermodynamic temperature, to the corresponding fluxes for each channel were then derived by convolution of the Planckian emission law eqn. (\[eq:S\_planck\_cmb\]), $$S_\mathrm{CMB}(\nu) = S_0\cdot\frac{x^3}{\exp(x)-1}\mbox{,} \label{eq:S_planck_cmb}$$ with [[*Planck*]{}’s ]{}frequency response function, eqns. (\[eqn\_tlm\_exp\]) and (\[eq\_freq\_resp\]). Again, $S_0=22.9~\mathrm{Jy}/\mathrm{arcmin}^2$ is the energy flux density of the CMB.

Preparation of simulation data sets {#sim_beam}
-----------------------------------

The expansion coefficients of the flux maps are multiplied with the respective beam’s $b_{\ell 0}$-coefficients in order to describe the finite angular resolution. After that, expansion coefficients of the pixel noise maps and those of the planetary maps have been added. In total, three atlases consisting of nine flux ${\langle}S_{\ell m}{\rangle}_{\nu_0}$-sets belonging to each of [[*Planck*]{}’s ]{}channels with fiducial frequency $\nu_0$ have been compiled:

- [The reference data set is a combination of the CMB, the SZ-maps and the instrumental noise maps. It should provide the cleanest detection of clusters and measurement of their properties. Apart from the inevitable instrumental noise, this data set only contains cosmological components. In the remainder of the paper, this data set will be referred to as [COS]{}.]{}

- [The second data set adds Galactic foregrounds to the CMB, the SZ-maps and the instrumental noise map. Here, we try to assess the extent to which Galactic foregrounds impede the SZ-observations. Thus, this data set will be denoted [GAL]{}.]{}

- [In the third data set, the emission from bodies inside the Solar system was added to the CMB, the SZ-maps, the Galactic foregrounds and the instrumental noise. Because the planets and asteroids are loosely constrained to the ecliptic plane, this data set will be called [ECL]{}.]{}

An example of a synthesised map showing the combined emission of the SZ-clusters and all Galactic and ecliptic components, including neither CMB fluctuations nor instrumental noise, at a location close to the Galactic plane is given by Fig. \[figure\_sim\_skyview\]. The observing frequency has been chosen to be $\nu = 143$ GHz; correspondingly, the map has been smoothed with a (Gaussian) beam of $\Delta\theta=7\farcm1$ (FWHM).
[[*Planck*]{}]{}-channel correlation properties {#sim_ccproperties}
-----------------------------------------------

In this section the auto- as well as the cross-correlation properties of the various foregrounds in different [[*Planck*]{}]{}-channels are studied. The cross power spectra, defined formally by eqn. (\[eqn\_cross\_power\_def\]), are determined by using: $$C_{\ell,\nu_1\nu_2} = \frac{1}{2\ell+1}\sum_{m=-\ell}^{+\ell} {\langle}S_{\ell m}{\rangle}_{\nu_1}\cdot {\langle}S_{\ell m}{\rangle}_{\nu_2}^*\mbox{.} \label{eqn_cross_power}$$ From this definition, the auto-correlation spectra are obtained by setting $\nu_1=\nu_2$, i.e. $C_{\ell,\nu} = C_{\ell,\nu\nu}$. The band-pass averaged fluxes ${\langle}S_{\ell m}{\rangle}_\nu$ are defined in eqn. (\[eqn\_tlm\_exp\]). In Fig. \[figure\_auto\_correlation\], the power spectra are shown for the $\nu=30$ GHz-, $\nu=143$ GHz-, $\nu=353$ GHz- and the $\nu=857$ GHz-channels. The spectra have been derived including various Galactic and ecliptic noise components in order to study their relative influences. For visualisation purposes, the spectra are smoothed with a moving average filter with a filter window comprising 11 bins. Distinct acoustic peaks of the CMB are clearly visible in the clean [COS]{} data sets, but are overwhelmed by the Galactic noise components. At small scales, i.e. high multipole order $\ell$, differences between the [GAL]{} and [ECL]{} data sets become apparent, the latter showing a higher amplitude. The (single) acoustic peak measurable in the $\nu=30$ GHz channel is shifted to larger angular scales due to the coarse angular resolution of that particular channel. The $\nu=857$ GHz-curve of the [COS]{} data set behaves like a power law due to the fact that the CMB is observed in the Wien-regime and is consequently strongly suppressed, such that the angular power spectrum is dominated by uncorrelated pixel noise. Fig. \[figure\_cross\_correlation\] shows exemplarily a couple of cross power spectra. The cross-correlation spectra derived for the [COS]{} data set nicely show the CMB power spectrum if two neighbouring channels close to the CMB maximum are chosen, but the correlation is lost in two widely separated channels. This is especially the case if one considers the two lowest [*LFI*]{}-channels at angular scales which the receivers are not able to resolve. In this regime the pixel noise is still very small and the cross-correlation spectrum drops to very small values. In order to illustrate the complexity of the spectral and morphological behaviour of the power spectra, they are given as contour plots depending on both the observing frequency $\nu$ and the multipole order $\ell$. Figs. \[figure\_cos\_correlation\_contour\] and \[figure\_gal\_correlation\_contour\] contrast the auto-correlation properties of the different data sets. The [COS]{} data set, shown in Fig. \[figure\_cos\_correlation\_contour\], containing nothing but the CMB and instrumental noise apart from the SZ-contribution, clearly shows the acoustic oscillations with the first peak at $\ell\simeq200$ and the consecutive higher harmonics. They are most pronounced in the $\nu=100$ GHz- and $\nu=143$ GHz-channels. At higher multipole moments, the power spectra are dominated by instrumental noise, which leads to a rapid (power-law) rise. Adding Galactic foregrounds yields the spectra depicted in Fig. \[figure\_gal\_correlation\_contour\]. Inclusion of Galactic foregrounds significantly complicates the picture and masks the primary anisotropies.
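As an aside, the estimator of eqn. (\[eqn\_cross\_power\]) that underlies all spectra discussed in this section can be written down compactly; the sketch below assumes the ${\langle}S_{\ell m}{\rangle}_\nu$-coefficients are stored as complex arrays indexed by $(\ell, m)$ with $m\geq 0$ and zero-padded for $m>\ell$, which is an illustrative storage convention rather than that of the pipeline:

```python
import numpy as np

def cross_power(S1_lm, S2_lm):
    """C_{l,nu1 nu2} = 1/(2l+1) sum_m S1_lm S2_lm^*  (eqn. eqn_cross_power).
    S*_lm : complex arrays of shape (lmax+1, lmax+1), indexed [l, m] for m >= 0
    and zero for m > l; negative m enter via the reality condition
    a_{l,-m} = (-1)^m a_{lm}^*, so they contribute the same modulus."""
    lmax = S1_lm.shape[0] - 1
    ell = np.arange(lmax + 1)
    prod = (S1_lm * np.conj(S2_lm)).real
    # m = 0 counted once, m > 0 counted twice
    return (prod[:, 0] + 2.0 * prod[:, 1:].sum(axis=1)) / (2.0 * ell + 1.0)

# Auto-correlation spectrum of a single channel: C_l_nu = cross_power(S_lm, S_lm)
```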
The spectra are dominated by large-scale emission structures of the Milky Way, most notably the emission from thermal dust that causes the spectra to increase with increasing frequency $\nu$. Cluster detection by using multi-frequency optimised filtering {#sect_filtering} ============================================================== One challenge in the analysis of two-dimensional all-sky surveys is the extraction of sources of interest which are superposed on a background of noise of varying morphology and spectral behaviour. In the presence of small-scale noise the conventional method to extract sources is low-pass filtering (e.g. with a Gaussian kernel) while wavelet analysis is most suitably applied if large scale noise fluctuations dominate. These methods, however, fail if the characteristic scale of the background fluctuations is comparable with the scale of the signal structures. Other methods have been proposed in order to separate different components in multifrequency CMB observations: They include Wiener filtering [@1996MNRAS.281.1297T; @1999NewA....4..443B; @1999MNRAS.302..663B], maximum-entropy methods [@1998MNRAS.300....1H; @1999MNRAS.306..232H], Mexican-hat wavelet analysis [@2001MNRAS.326..181V; @2000MNRAS.315..757C], fast independent component analysis [@2002MNRAS.334...53M], matched filter analysis [@1998ApJ...500L..83T], adaptive filtering techniques [@2001ApJ...552..484S; @2002ApJ...580..610H; @2002MNRAS.336.1057H], and non-parametric Bayesian approaches [@2002MNRAS.336.1351D]. However, a comparison between these methods is difficult because all of them assume different priors about the spatial properties and frequency dependence. Using prior knowledge about the frequency dependence and statistical properties of several images at different frequency channels, the maximum-entropy method and Wiener filtering are able to separate the components of interest. Contrarily, wavelet analysis is well suited in order to detect compact sources. A combination of these different techniques improves the quality of component separation [@2001MNRAS.328....1V]. Although component separation methods which assume a prior knowledge about the data are quite powerful, they yield biased or even wrong results in the case of incorrect or idealised assumptions about the data. Any error in the separation of one component propagates to the separation of the other components owing to normalisation constraints. In particular, this is the case in non-centrally symmetric source profiles, oversimplified spectral extrapolations of Galactic emission surveys into other wavebands, variations of the assumed frequency dependence, or non-Gaussian noise properties the statistics of which can not fully be characterised by power spectra. Thus, the application of a specific component separation method is a trade-off between robustness and effectiveness with regard to the particular problem. Filtering techniques relying on Mexican-hat wavelets and on matched and scale-adaptive filters are single component separation methods. They all project either spatial structure or frequency properties (within a given functional family) of the component of interest in the presence of other components acting as background in this context. 
While Mexican-hat wavelet analysis assumes Gaussian profiles superimposed on large-scale variations of the background noise, the matched and scale-adaptive filters generalise to arbitrary source profiles and noise properties, which are assumed to be locally homogeneous and isotropic [@2001ApJ...552..484S; @2002MNRAS.336.1057H; @2002ApJ...580..610H]. This section generalises the matched and scale-adaptive filter techniques to global spherical topologies, which find application in all-sky surveys such as [[*Planck*]{}’s ]{}microwave/submillimetric survey. In addition, optimised filters for the detection of compact sources in single-frequency all-sky observations are derived in the appendix in a more detailed fashion. The proposed method aims at simultaneously localising SZ-clusters and measuring both their amplitudes and angular extent. It can also be applied for localising microwave point sources and estimating their spectral properties. We choose the spherical filtering approach rather than tiling the sky with a set of two-dimensional flat maps for the following reasons: First, on the sphere we do not have to worry about spurious detections or double detections due to overlaps in the tessellation. Secondly, our approach provides a physical interpretation of our filter shapes in harmonic space even for the smallest multipole moments, in contrast to the case of a flat map, where the smallest wavenumbers are determined by the map size. Finally, our approach circumvents projection failures of the noise properties, such as stretching effects in the case of conformal mapping, which would introduce artificial non-Gaussianity in our maps and distort profile shapes close to the map boundaries. We pursue the concept of the [*multi-frequency approach*]{} rather than the [*combination method*]{} [c.f. @2002MNRAS.336.1057H]. In other words, we filter each channel separately while taking into account the different cross-correlations between the different channels and the frequency dependence of the signal when constructing the optimised filters. This method seems to be superior to the [*combination method*]{}, which tries to find an optimised combination of the different channels with regard to the signal-to-noise ratio of the sources and successively applies filters to the combined map. The concept is introduced and central definitions are laid down in Sect. \[filter\_construct\]. The construction of filter kernels is outlined in Sect. \[filter\_optimal\]. Subsequently, the matched and scale-adaptive filters are derived for expansions of spherical data sets into spherical harmonics in Sect. \[filter\_allsky\_matched\] and Sect. \[filter\_allsky\_scaleadaptive\]. Then, the figures of merit are defined in Sect. \[filter\_gain\_reliability\]. Caveats in the numerical derivation are listed in Sect. \[filter\_numerics\]. Filter kernel shapes for actual simulation data are discussed in Sect. \[filter\_shape\_discussion\]. The application of the filter kernels to our simulated sky maps and the extraction of the SZ-cluster signal is described in Sect. \[filter\_renormalise\].

Assumptions and definitions {#filter_construct}
---------------------------

When constructing the particular filters, we assume centrally symmetric profiles of the sources to be detected. This approximation is justified for most of the clusters of [[*Planck*]{}’s ]{}sample, whose angular extent will be comparable in size to [[*Planck*]{}’s ]{}beams, i.e.
the instrumental beam renders them azimuthally symmetric irrespective of their intrinsic shape. Azimuthal symmetry is no general requirement for the filters which can be generalised to detect e.g. elliptic clusters using expansions into vector rather than scalar spherical harmonics. We furthermore assume the background to be statistically homogeneous and isotropic, i.e.[ ]{}a complete characterisation can be given in terms of the power spectrum. This assumption obviously fails for non-Gaussian emission features of the Galaxy or of the exposure-weighted instrumental noise on large angular scales. However, the spherical harmonics expansion of any expected compact source profile, which we aim to separate, peaks at high values of the multipole moment due to the smallness of the clusters where the non-Gaussian influence is negligible. Thus, we only have to require homogeneity and isotropy of the background on small scales. In order to construct our filters, we consider a set of all-sky maps of the detected scalar field $s_\nu(\btheta)$ for the different frequency channels $$\label{eq:signal} s_\nu(\btheta) = f_\nu y_\nu(|\btheta-\btheta_0|) + n_\nu(\btheta), \quad \nu = 1, \ldots, N,$$ where $\btheta = (\vartheta, \varphi)$ denotes a two-dimensional vector on the sphere, $\btheta_0$ is the source location, and $N$ is the number of frequencies (respectively, the number of maps). The first term on the right-hand side represents the amplitude of the signal caused by the thermal and kinetic SZ-effect, $y(|\btheta-\btheta_0|)$ and $w(|\btheta-\btheta_0|)$, respectively, while the second term corresponds to the generalised noise which is composed of CMB radiation, all Galactic and ecliptic emission components, and additional instrumental noise. The frequency dependence of the SZ-effect is described by $f_\nu$ in terms of average flux, $$\label{eq:fnu} f_\nu \equiv {\langle}S_\mathcal{Y}{\rangle}_{\nu}\mbox{ and } f_\nu \equiv {\langle}S_\mathcal{W}{\rangle}_{\nu}$$ where ${\langle}S{\rangle}_\nu$ denotes the flux weighted by the frequency response at the fiducial frequency $\nu$ (c.f. eqn. (\[eqn\_tlm\_exp\])) and $S_\mathcal{Y}$ and $S_\mathcal{W}$ denote the SZ-fluxes given by eqns. (\[eq:S\_thSZ\]) and (\[eq:S\_kinSZ\]). We expect a multitude of clusters to be present in our all-sky maps. In order to sketch the construction of the optimised filter, we assume an individual cluster situated at the North pole ($\btheta_0=\bld{0}$) with a characteristic angular SZ-signal $y_\nu(\theta = |\btheta|) = A \tau_\nu(\theta)$, where we separate the true amplitude $A$ and the spatial profile normalised to unity, $\tau_\nu(\theta)$. The underlying cluster profile $p(\theta)$ is assumed to follow a generalised King-profile with an exponent $\lambda$ which is a parameter in our analysis. At each observation frequency this profile is convolved with the (Gaussian) beam of the respective [[*Planck*]{}]{}-channel (c.f. Sect. \[sect\_beam\]) yielding: $$\begin{aligned} \label{eq:profile} \tau_\nu^{}(\theta) &=& \int {\mathrm{d}}\Omega' p(\theta') b_\nu^{}(|\btheta-\btheta'|) =\sum_{\ell=0}^{\infty}\tau_{\ell 0,\, \nu}^{} Y_\ell^0(\cos \theta),\\ p(\theta) &=& \left[1+\left(\frac{\theta}{\theta_\rmn{c}}\right)^2\right]^{-\lambda}\!\!, \quad\mbox{and}\quad \tau_{\ell 0,\, \nu} = \sqrt{\frac{4 \pi}{2 \ell+1}} b_{\ell 0,\, \nu} p_{\ell 0}. \label{eqn_profile_beam_source}\end{aligned}$$ For the second step in eqn. 
(\[eqn\_profile\_beam\_source\]) we used the convolution theorem on the sphere to be derived in Appendix \[appendix\_sphsingle\]. The background $n_{\nu}(\btheta)$ is assumed to be a compensated homogeneous and isotropic random field with a cross power spectrum $C_{\ell, \nu_1 \nu_2}$ defined by $$\label{eq:PSnoise} \left{\langle}n_{\ell m, \nu_1}^{} n^*_{\ell' m', \nu_2} \right{\rangle}= C_{\ell, \nu_1 \nu_2}^{} \delta_{\ell \ell'}^{} \delta_{m,m'}^{}, \quad\mbox{where}\quad {\langle}n_{\nu}(\btheta) {\rangle}= 0, \label{eqn_cross_power_def}$$ $n_{\ell m, \nu}$ denotes the spherical harmonics expansion coefficient of $n_{\nu}(\btheta)$, $\delta_{\ell \ell'}$ denotes the Kronecker symbol, and ${\langle}\cdot{\rangle}$ corresponds to an ensemble average. Assuming ergodicity of the field under consideration allows taking spatial averages over sufficiently large areas $\Omega = \mathcal{O}(4\pi) $ instead of performing the ensemble average. Concepts in filter construction {#filter_optimal} ------------------------------- The idea of an optimised matched filter for multifrequency observations was recently proposed by @2002MNRAS.336.1057H for the case of a flat geometry. For each observing frequency, we aim at constructing a centrally symmetric optimised filter function $\psi_\nu(\theta)$ operating on a sphere. Its functional behaviour induces a family of filters $\psi_\nu(\theta,R_\nu)$ which differ only by a scaling parameter $R_\nu$. For a particular choice of this parameter, we define the filtered field $u_\nu(R_\nu,\bbeta)$ to be the convolution of the filter function with the observed all-sky map at frequency $\nu$, $$\begin{aligned} \label{eq:udef} u_\nu^{}(R_\nu,\bbeta) & = & \int {\mathrm{d}}\Omega\, s_\nu^{}(\btheta)\,\psi_\nu^{}(|\btheta-\bbeta|,R_\nu) \\ & = & \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell} u_{\ell m,\, \nu}^{} Y_\ell^m(\bbeta)\mbox{ with}\\ u_{\ell m,\, \nu} & = & \sqrt{\frac{4 \pi}{2 \ell+1}} s_{\ell m,\, \nu}\, \psi_{\ell 0,\, \nu}(R_\nu) \,\label{eqn_k_convolve}.\end{aligned}$$ For the second step, the convolution theorem to be derived in Appendix \[appendix\_sphsingle\] was used. The combined filtered field is defined by $$\label{eq:utotal} u(R_1,\ldots,R_N;\bbeta)=\sum_{\nu} u_\nu(R_\nu,\bbeta). \label{eqn_k_add}$$ Taking into account the vanishing expectation value of the noise ${\langle}n_{\nu}(\btheta) {\rangle}= 0$, the expectation value of the filtered field at the North pole $\bbeta=\bld{0}$ is given by $$\label{eq:umean} {\langle}u_\nu(R_\nu,\bld{0}){\rangle}= A f_\nu \sum_{\ell=0}^{\infty} \tau_{\ell 0,\, \nu}\, \psi_{\ell 0,\, \nu}(R_\nu). \label{eqn_def_ccspec}$$ The assumption that the cross power spectrum of the signal is negligible compared to the noise power spectrum is justified because the thermal and kinetic amplitudes are small compared to unity, $A_{y,w} \ll 1$. Thus, the variance of the combined filtered field (\[eq:utotal\]) is determined by $$\begin{aligned} \label{eq:uvariance} \sigma_u^2(R_1,\ldots,R_N) &=& \left{\langle}\left[u(R_1,\ldots,R_N;\bbeta) - \left{\langle}u(R_1,\ldots,R_N;\bbeta)\right{\rangle}\right]^2\right{\rangle}\nonumber\\ &=&\sum_{\nu_1,\nu_2}\sum_{\ell=0}^{\infty} C_{\ell,\,\nu_1 \nu_2} \psi_{\ell 0,\, \nu_1}(R_{\nu_1})\,\psi_{\ell 0,\, \nu_2}(R_{\nu_2}).\end{aligned}$$ The optimised filter functions $\psi_\nu(\theta)$ are chosen to detect the clusters at the North pole of the sphere (to which they have been translated). 
They are described by a singly peaked profile which is characterised by the scale $R^{(0)}_{\nu}$ as given by eqn. (\[eq:profile\]). While the optimised [*matched filter*]{} is defined to obey the first two of the following conditions, the optimised [*scale-adaptive filter*]{} is required to obey all three conditions: 1. [The combined filtered field $u(R^{(0)}_1,\ldots,R^{(0)}_N;\bld{0})$ is an unbiased estimator of the source amplitude $A$, i.e.[ ]{}${\langle}u(R^{(0)}_{1},\ldots,R^{(0)}_{N};\bld{0}){\rangle}= A$.]{} 2. [The variance of $u(R_1,\ldots,R_N;\bbeta)$ has a minimum at the scales $R^{(0)}_1,\ldots,R^{(0)}_N$ ensuring that the combined filtered field is an efficient estimator.]{} 3. [The expectation value of the filtered field at the source position has an extremum with respect to the the scale $R^{(0)}_{\nu}$, implying $$\label{eq:3cond} \frac{\upartial}{\upartial R^{(0)}_{\nu}}{\langle}u_\nu(R_\nu,\bld{0}){\rangle}= 0.$$]{} Matched filter {#filter_allsky_matched} -------------- For convenience, we introduce the column vectors $\bpsi_{\ell} \equiv [\psi_{\ell 0,\,\nu}]$, $\bld{F}_{\ell} \equiv [f_\nu \tau_{\ell 0, \nu}]$, and the inverse $\hat{\bld{C}}_{\ell}^{-1}$ of the matrix $\hat{\bld{C}}_{\ell} \equiv [C_{\ell,\,\nu_1 \nu_2}]$. In terms of spherical harmonic expansion coefficients, constraint (i) reads $$\label{eq:constraint1} \sum_\nu \sum_{\ell=0}^\infty f_\nu \tau_{\ell 0,\, \nu} \psi_{\ell 0,\, \nu} = \sum_{\ell=0}^\infty \bld{F}_{\ell}\bpsi_{\ell} = 1.$$ Performing functional variation (with respect to the filter function $\bpsi_{\ell}$) of $\sigma_u^2(R_1,\ldots,R_N)$ while incorporating the (isoperimetric) boundary condition (\[eq:constraint1\]) through a Lagrangian multiplier yields the spherical matched filter $\bpsi_{\ell}$ $$\label{eq:matched filter} \bpsi_{\ell}^{} = \alpha\, \hat{\bld{C}}_{\ell}^{-1} \bld{F}_{\ell}^{}, \quad\mbox{where}\quad \alpha^{-1} = \sum_{\ell=0}^\infty \bld{F}_{\ell}^T \hat{\bld{C}}_{\ell}^{-1} \bld{F}_{\ell}^{}.$$ In any realistic application, the cross power spectrum $C_{\ell, \nu_1 \nu_2}$ can be computed from observed data provided the cross power spectrum of the signal is negligible. The quantities $\alpha$, $\bld{F}_{\ell 0}$, and thus $\bpsi_{\ell 0}$ can be computed in a straightforward manner for a specific frequency dependence $f_\nu$ and for a model source profile $\tau_\nu(\theta)$. Scale-adaptive filter on the sphere {#filter_allsky_scaleadaptive} ----------------------------------- The scale-adaptive filter $\bpsi_{\ell}$ satisfying all three conditions is given by $$\label{eq:SAF} \bpsi_{\ell}^{} = \hat{\bld{C}}_{\ell}^{-1} (\alpha\, \bld{F}_{\ell}^{}+\bld{G}_{\ell}^{}) \mbox{, with } \bld{G}_{\ell}^{} \equiv [\mu_{\ell,\nu}^{}\, \beta_\nu^{}]\mbox{, and}$$ $$\label{eq:mu} \mu_{\ell ,\nu}^{} \equiv f_\nu \tau_{\ell 0, \nu} \left(2 + \frac{{\mathrm{d}}\ln \tau_{\ell 0, \nu}}{{\mathrm{d}}\ln \ell}\right) = f_\nu \left[2 \tau_{\ell 0, \nu} + \ell\left(\tau_{\ell 0, \nu} - \tau_{\ell-1\, 0, \nu}\right) \right].$$ As motivated in Appendix \[sec:app:SAF\], the logarithmic derivative of $\tau_{\ell 0}$ with respect to the multipole order $\ell$ is a shorthand notation of the differential quotient which is only valid for $\ell\gg 1$. 
The quantities $\alpha$ and $\beta_\nu$ are given by the components $$\label{eq:components} \alpha = (\hat{\bld{A}}^{-1})_{00}, \qquad \beta_\nu = (\hat{\bld{A}}^{-1})_{\nu0},$$ where $\hat{\bld{A}}$ is the $(1+N) \times (1+N)$ matrix with elements $$\begin{aligned} \label{eq:Amatrix} \lefteqn{ A_{00}^{}\equiv \sum_{\ell=0}^\infty \bld{F}_{\ell}^\rmn{T} \hat{\bld{C}}_{\ell}^{-1} \bld{F}_{\ell}^{}, \qquad A_{0\nu}^{}\equiv \sum_{\ell=0}^\infty \mu_{\ell ,\nu}^{} \left(\bld{F}_{\ell }^\rmn{T}\hat{\bld{C}}_{\ell}^{-1}\right)_\nu}\\ \lefteqn{ A_{\nu 0}^{}\equiv \sum_{\ell=0}^\infty \mu_{\ell,\nu}^{} \left(\hat{\bld{C}}_{\ell}^{-1}\bld{F}_{\ell }^{}\right)_\nu, \qquad A_{\nu \nu'}^{}\equiv \sum_{\ell=0}^\infty \mu_{\ell,\nu}^{}\,\mu_{\ell,\nu'}^{} \left(\hat{\bld{C}}_{\ell}^{-1}\right)_{\nu\nu'}^{}}.\end{aligned}$$ In these equations, no summation over the indices is implied. Detection level and gain {#filter_gain_reliability} ------------------------ As described by @2001ApJ...552..484S, the concept of constructing an optimised filter function for source detection aims at maximising the signal-to-noise ratio $D_u$, $$\label{eq:detlevel} D_u\equiv\frac{\left{\langle}u(R_1,\ldots,R_N;\bld{0})\right{\rangle}}{\sigma_u(R_1,\ldots,R_N)} = A\cdot\frac{\sum_{\ell=0}^\infty \bld{F}_{\ell }\bpsi_{\ell }} {\sqrt{\sum_{\ell=0}^\infty \bpsi_{\ell }^T \hat{\bld{C}}_{\ell}^{} \bpsi_{\ell }^{}}}.$$ Computing the dispersion of the unfiltered field on the sphere yields the signal-to-noise ratio $D_s$ of a signal on the fluctuating background: $$\label{eq:disp} \sigma_s^2 = \sum_{\nu_1,\nu_2}\sum_{\ell=0}^\infty C_{\ell,\,\nu_1\nu_2} \quad \Rightarrow \quad D_s = \frac{A}{\sigma_s}.$$ These considerations allow introducing the [*gain*]{} for comparing the signal-to-noise ratios of a peak before and after convolution with a filter function: $$\label{eq:gain} g \equiv \frac{D_u}{D_s} = \frac{\sigma_s}{\sigma_u(R_1,\ldots,R_N)}.$$ If the noise suppression is successful, the gain $g$ will assume values larger than one. If the filters are constructed efficiently, they are able to reduce the dispersion ($\sigma_u(R_1,\ldots,R_N)<\sigma_s$) while simultaneously retaining the expectation value of the field (\[eq:umean\]). Due to the additional third constraint, the scale-adaptive filter is expected to achieve smaller gains compared to the matched filter. Numerical derivation of filter kernels {#filter_numerics} -------------------------------------- For the derivation of suitable filter kernels the source profiles are assumed to be generalised King-profiles as described by eqn. (\[eqn\_profile\_beam\_source\]) convolved with the respective [[*Planck*]{}]{}-beam superimposed on fluctuating background given by template ${\langle}S_{\ell m}{\rangle}_\nu$-coefficients. The inversion of the matrix $\hat{\bld{C}}_{\ell}$ (c.f. eqns. (\[eqn\_cross\_power\]) and (\[eqn\_cross\_power\_def\])) can be performed using either Gauss-Jordan elimination or LU decomposition, which both were found to yield reliable results. In the derivation of the scale-adaptive filters, however, it is numerically advantageous to artificially exclude the lower multipoles $\ell\leq1$ from the calculation. Due to the sub-millimetric emission of the Milky Way, the lower multipoles are very large. Consequently, the corresponding $\psi_{\ell m}$-coefficients, $\ell\leq1$, have been set to zero, which is not a serious intervention since the filters are designed to amplify structures at angular scales well below a degree. 
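In practice, the matched filter of eqn. (\[eq:matched filter\]) and the gain of eqn. (\[eq:gain\]) reduce to a loop over multipoles with one small linear solve per $\ell$. A minimal Python sketch, with the low multipoles excluded as just described, could look as follows (the array layouts are assumptions for illustration and do not reflect the pipeline's internal storage):

```python
import numpy as np

def matched_filter(F, C, lmin=2):
    """Matched filter of eqn. (eq:matched filter).
    F : array (lmax+1, N)     -- F_l = f_nu * tau_l0,nu per channel
    C : array (lmax+1, N, N)  -- cross power spectra C_l,nu1nu2
    Returns psi of shape (lmax+1, N) and the normalisation alpha;
    multipoles below lmin are excluded."""
    lmax = F.shape[0] - 1
    psi = np.zeros_like(F)
    alpha_inv = 0.0
    for l in range(lmin, lmax + 1):
        CinvF = np.linalg.solve(C[l], F[l])   # C_l^{-1} F_l
        psi[l] = CinvF
        alpha_inv += F[l] @ CinvF
    psi /= alpha_inv                          # psi_l = alpha * C_l^{-1} F_l
    return psi, 1.0 / alpha_inv

def gain(psi, C):
    """g = sigma_s / sigma_u, eqns. (eq:disp) and (eq:gain)."""
    sigma_u2 = np.einsum('li,lij,lj->', psi, C, psi)
    sigma_s2 = C.sum()
    return np.sqrt(sigma_s2 / sigma_u2)
```

With $F_\ell$ built from eqn. (\[eqn\_profile\_beam\_source\]) and $\hat{\bld{C}}_\ell$ estimated from the simulated maps, this yields the kernels discussed in the following subsections.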
For consistency, the multipoles below the quadrupole have been artificially removed in the derivation of the matched filters as well. In contrast to the [[*Planck*]{}]{}-simulation pipeline, all numerical calculations presented here are carried out in terms of fluxes measured in Jy and not in antenna temperatures, for the following reason: Cross-power spectra $C_{\ell, \nu_1\nu_2}$ given in terms of antenna temperatures are proportional to $(\nu_1\cdot\nu_2)^{-2}$, which results in a suppression of the highest frequency channels by a factor of almost $10^5$ compared to the lowest frequency channels. Furthermore, by working with fluxes instead of antenna temperatures, the filters for extracting the SZ-signal show frequency dependences which can be understood intuitively. The frequency dependence is described by eqns. (\[eq:S\_thSZ\]) and (\[eq:S\_kinSZ\]). The normalisation $\mathcal{Y}$ has been chosen to be $1~\mathrm{arcmin}^2$, which corresponds to the weakest signals [[*Planck*]{}]{} will be able to detect. Because of the smallness of the source profiles to be detected, the calculations were carried out to multipole orders of $\ell_\mathrm{max}=4096$, which ensures that the beams as well as the source profiles are well described. In the plots in Sect. \[filter\_shape\_discussion\], the filters depicted are smoothed with a moving average window comprising eleven bins for better visualisation.

Discussion of filter kernels {#filter_shape_discussion}
----------------------------

### Matched filter {#matched-filter}

The spherical harmonics expansion coefficients $\psi_{\ell 0,\nu}$ following from the matched filter algorithm are depicted in Fig. \[figure\_filter\_kernel\_matched\] for four frequencies most relevant to SZ-observation, namely for $\nu=100$ GHz, $\nu=143$ GHz, $\nu=217$ GHz and $\nu=353$ GHz. As background noise components, the clean [COS]{} data set (left column) and the exhaustive [GAL]{} data set (right column) are contrasted. The filter kernels have been derived for optimised detection of sources described by a generalised King-profile with angular core radii $\theta_c=3\farcm0$ and $\theta_c=5\farcm0$ and asymptotic slope $\lambda=1.0$. The principle by which the matched filter extracts the SZ-signal from the maps is explained by Fig. \[figure\_filter\_kernel\_matched\]: The SZ-profiles for which the filter has been optimised are small structures at angular scales corresponding to multipole moments of $\ell\simeq 10^3$. In channels below $\nu=217$ GHz, the clusters are observed in absorption and the fluxes are decreased. For that reason, the filters have negative amplitudes at these angular scales for these specific frequencies. At larger scales, the fluctuations are suppressed by linear combination of the various channels, while the filtering functions show very similar shapes. Optimising the filters for the detection of core radii of $5\farcm0$ instead of $3\farcm0$ results in a shift of the negative peak at $\ell\simeq10^3$ to smaller multipole orders. Instrumental noise, which is important at even higher multipoles, is suppressed by the filter’s exponential decline at high $\ell$ above $\ell{\lower.5ex\hbox{{$\; \buildrel > \over \sim \;$}}}2000$. The unwanted CMB fluctuations and all Galactic contributions at scales larger than the cluster scale are suppressed by weightings with varying sign, so that the foregrounds are subtracted at the stage of forming linear combinations of the ${\langle}S_{\ell m}{\rangle}_\nu$-coefficients.
Furthermore, the contours of the matched filter kernels are given in Fig. \[figure\_filter\_kernel\_matched\] as functions of both inverse angular scale $\ell$ and observing frequency $\nu$ for differing noise contributions. The figures compare filters derived for differing background noise compositions. The filters shown serve for the optimised detection of generalised King-profiles with core radius $\theta_c=15\farcm0$ and asymptotic slope $\lambda=1.0$. These (rather large) values have been chosen for visualisation purposes. For clarity, the contour denoting zero values has been omitted due to noisy data. In these figures it is apparent how the filters combine the frequency information in order to achieve a suppression of the unwanted foregrounds: At multipole moments of a few hundred, the filters exhibit changes in sign, such that the measurements at low frequencies are subtracted from the measurements at high frequencies in the linear combination of the filtered maps. Fig. \[figure\_filter\_kernel\_matched\_real\] illustrates the filter kernels $\psi_\nu(\theta)$ in real space for the same selection of frequencies and background noise components as given above. The filter kernels $\psi_\nu(\theta)$ have been synthesised from the $\psi_{\ell 0,\nu}$-coefficients using the [alm2grid]{}-utility of the [[*Planck*]{}]{}-simulation package. Here, the parameters of the King-profile to be detected are $(\theta_c,\lambda)=(5\farcm0,1.0)$. The filter kernels are similar in shape to Mexican-hat wavelets, but show more than one oscillation. Their action on the sky maps is to apply high-pass filtering, such that all long-wavelength modes are eliminated. At the cluster scale, they implement a linear combination of the sky maps that aims at amplifying the SZ-signal: The kernels derived for both the $\nu=100$ GHz- and $\nu=143$ GHz-channel exhibit a central depression which is used to convert the SZ-signal to positive amplitudes. The other two channels resemble simple Gaussian kernels which smooth the maps to a common effective angular resolution. At frequencies of $\nu=217$ GHz and $\nu=353$ GHz the most important emission feature is Galactic Dust, which is suppressed by the filter’s small amplitudes. In this way, the weak SZ-signal is dissected. In Fig. \[figure\_filter\_kernel\_matched\_spec\], filter kernels derived with both algorithms for point sources (i.e. with beam profiles of the respective [[*Planck*]{}]{}-channels) are compared, that have been optimised for the detection of varying spectral behaviour of the signal, in this case the thermal SZ-effect, the kinetic SZ-effect and a Planckian thermal emitter with a surface temperature $T_\mathrm{surface}$ of 150 K, such as an asteroid or planet. The filter kernels depicted correspond to observing frequencies of $\nu=143$ GHz and $\nu=217$ GHz. The filters clearly reflect the spectral behaviour of the emission laws of the sources one aims at detecting: While the filter kernels designed for detecting thermal SZ-clusters reflect the peculiar change in sign in the SZ-effect’s frequency dependence, the other two curves show the behaviour to be expected for a Planckian emitter and the kinetic SZ-effect, respectively. Again, the better angular resolution of the $\nu=217$ GHz-channel is apparent by the shifting of the curves to higher multipole order $\ell$. 
### Scale-adaptive filter -- -- -- -- The spherical harmonics expansion coefficients $\psi_{\ell 0,\nu}$ following from the scale-adaptive filter algorithm for the frequencies $\nu=100$ GHz, $\nu=143$ GHz, $\nu=217$ GHz and $\nu=353$ GHz are given in the upper panel of Fig. \[figure\_filter\_kernel\_adaptive\]. The left and right columns compare the filter kernels for differing noise components. Their functional shape has a number of important features in common with the matched filters: They suppress the uncorrelated pixel noise, which is dominant at high $\ell$ by their exponential decline at $\ell{\lower.5ex\hbox{{$\; \buildrel > \over \sim \;$}}}2000$. Furthermore, the filters amplify the SZ-signal, which is negative at frequencies below $\nu=217$ GHz, by assuming large negative values and hence converting the SZ-signal to yield positive amplitudes. Additionally, the filters show a distinct secondary peak at $\ell\simeq2000$ which causes the kernels to be more compact after transformation to real space and enables the size measurement. A more general observation is that the scale-adaptive filter kernel shapes are more complex and noisier in comparison to the matched filter, especially at high $\ell$. The scale-adaptive filter makes even stronger use of the spectral information than the matched filter. Especially the contour plots in Fig. \[figure\_filter\_kernel\_adaptive\] show that the scale-adaptive filter exhibits alternating signs when varying the observing frequency $\nu$ while keeping the angular scale $\ell$ fixed. In this way, the noise contributions are isolated in angular scale and subsequently suppressed by linear combination of the maps. Furthermore, one notices a change in sign at multipole order $\ell\simeq 200$ which is common to the frequencies $\nu=100\ldots353$ GHz, at which the CMB signal is strongest. Aiming at reducing the variance of the filtered maps, the scale-adaptive filter is suppressing the ${\langle}S_{\ell m}{\rangle}_\nu$-coefficients by assuming small values. Fig. \[figure\_filter\_kernel\_adaptive\_real\] gives the filter kernels $\psi_\nu(\theta)$ in real space for selected frequencies and background noise components. The scale-adaptive filters work similarly as the matched filters like Mexican-hat wavelets and subject the sky maps to high pass filtering. In Fig. \[figure\_filter\_kernel\_adaptive\_spec\], filter kernels derived with both algorithms for point sources (i.e. with beam profiles of the respective [[*Planck*]{}]{}-channels) are compared, that have been optimised for the detection of varying spectral behaviour of the signal, in this case the thermal SZ-effect, the kinetic SZ-effect and a Planckian thermal emitter with a surface temperature $T_\mathrm{surface}$ of 150 K, such as an asteroid or planet. The filter kernels depicted correspond to observing frequencies of $\nu=143$ GHz and $\nu=217$ GHz. As in the case of the matched filter, the frequency dependence of the signal is reflected by the sign of the filter kernel at the anticipated angular scale of the profile to be detected. Filter renormalisation and synthesis of likelihood maps {#filter_renormalise} ------------------------------------------------------- Once the filter kernels are derived, the filtered fields $u_{\nu}(R_\nu,\bmath{\beta})$ can be synthesised from the $u_{\ell m,\nu}$-coefficients (defined in eqn. (\[eqn\_k\_convolve\])) and the resulting maps can be added in order to yield the co-added, filtered field $u(R_1,\ldots,R_N,\bmath{\beta})$ (see eqn. 
(\[eqn\_k\_add\])), which can be normalised by the level of fluctuation $\sigma_u$ (given by eqn. (\[eqn\_def\_ccspec\])) to yield the likelihood map $D(\bmath{\theta})$. It is favourable to divide the filter kernels by the fluctuation amplitude $\sigma_u$ and to apply a renormalisation: $$\psi_{\ell 0,\nu}\longrightarrow\psi^\prime_{\ell 0,\nu} = \frac{\psi_{\ell 0,\nu}}{\sqrt{\sum_\ell \bmath{\psi}_{\ell}^T\hat{\bld{C}}_{\ell}\bmath{\psi}_{\ell}}}\mbox{.}$$ In this case, the filter kernels are invariant under changes in profile normalisation. With these kernels, the filtered flux maps can be synthesised from the set of ${\langle}S_{\ell m}{\rangle}_\nu$-coefficients and the resulting maps can be co-added to yield the final normalised likelihood map $D(\bmath{\beta})$. It is computationally advantageous, however, to interchange the last two steps, $$\begin{aligned} D_u(\bmath{\beta}) & = & \frac{u(\bmath{\beta})}{\sigma_u} = \frac{1}{\sigma_u}\sum_\nu u_\nu(\bmath{\beta})\\ & = & \sum_\nu\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell} \sqrt{\frac{4\pi}{2\ell+1}}{\langle}S_{\ell m}{\rangle}_\nu\frac{\psi_{\ell 0,\nu}}{\sqrt{\sum_\ell \bmath{\psi}^T_\ell\hat{\bld{C}}_\ell\bmath{\psi}_\ell}} Y_\ell^m(\bmath{\beta})\\ & = & \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell}\underbrace{\sqrt{\frac{4\pi}{2\ell+1}}\left[\sum_\nu {\langle}S_{\ell m}{\rangle}_\nu \cdot\psi^{\prime}_{\ell 0,\nu}\right]}_{\equiv D_{\ell m}} Y_\ell^m(\bmath{\beta})\mbox{,}\end{aligned}$$ and to derive the $D_{\ell m}$-coefficients first, such that the synthesis has to be performed only once. Due to the restriction to axially symmetric kernels, the convolution can be carried out using the [alm2map]{}-utility rather than [totalconvolve]{}. Fig. \[figure\_filter\_likelihood\] gives a visual impression of the capability of the above described filtering schemes: The figure shows a $30\degr\times30\degr$ wide field at the ecliptic North pole at a frequency of $\nu=353$ GHz (at the SZ-maximum) as observed by [[*Planck*]{}]{}, i.e. the image is smoothed to an angular resolution of $\Delta\theta=5\farcm0$ (FWHM) and contains the fluctuating CMB, all Galactic and ecliptic foregrounds as well as pixel noise. Matched and scale-adaptive filter kernels were derived for isolating point sources, i.e. for sources that appear to have profiles equal to [[*Planck*]{}’s ]{}beams of the corresponding channel. For clarity, only amplitudes exceeding a threshold value of 1.0 are shown. For comparison, Fig. \[figure\_filter\_likelihood\] shows the same detail of the input thermal SZ-map as well. It is immediately apparent that the observation of SZ-clusters without foreground- and noise suppression is not possible and that one has to rely on filtering schemes. As a comparison with Fig. \[figure\_filter\_likelihood\] shows, the filters are clearly able to isolate the SZ-clusters and to strongly suppress all spurious noise contributions. The matched filter, however, shows a slightly better performance and yields more significant peaks due to better background suppression. There are weak residuals present in both maps due to incomplete foreground reduction. These residuals, however, have small amplitudes compared to the SZ-detections. The highest peaks exhibit detection significances amounting to $10.6\sigma$ in the case of the matched filter and $9.1\sigma$ in the case of the scale-adaptive filter.
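The interchange of co-addition and synthesis described above is straightforward to implement. The sketch below assumes the healpy library and illustrative variable names (it is not the code actually used in the pipeline); it renormalises the kernels, forms the $D_{\ell m}$-coefficients and then performs a single harmonic synthesis.

```python
import numpy as np
import healpy as hp

def likelihood_map(S_alm, psi, C_hat, nside):
    """Co-add filtered maps into the normalised likelihood map D(beta).

    S_alm : list of healpy alm arrays <S_lm>_nu, one per channel nu
    psi   : array (N_nu, lmax+1) of axially symmetric kernel coefficients psi_{l0,nu}
    C_hat : array (lmax+1, N_nu, N_nu) of cross-power spectra; lmax must match the alms
    """
    n_nu, lmax1 = psi.shape
    # sigma_u^2 = sum_l psi_l^T C_l psi_l  (the filter renormalisation)
    sigma_u = np.sqrt(sum(psi[:, l] @ C_hat[l] @ psi[:, l] for l in range(lmax1)))
    psi_prime = psi / sigma_u
    # D_lm = sqrt(4 pi / (2 l + 1)) * sum_nu <S_lm>_nu * psi'_{l0,nu}
    prefac = np.sqrt(4.0 * np.pi / (2.0 * np.arange(lmax1) + 1.0))
    D_alm = np.zeros_like(S_alm[0])
    for nu in range(n_nu):
        D_alm = D_alm + hp.almxfl(S_alm[nu], prefac * psi_prime[nu])
    # a single spherical harmonic synthesis yields the likelihood map D(beta)
    return hp.alm2map(D_alm, nside)
```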
It should be emphasised that the filters work exceptionally well despite the fact that the Milky Way clearly is a non-Gaussian feature, whereas Gaussianity of the fluctuating background was an important assumption in the derivation of the filter kernels. Furthermore, the filters successfully separate and amplify the weak SZ-signal in the presence of seven different noise contributions (CMB, four Galactic foregrounds, thermal emission from bodies of the Solar system and instrumental noise) that exhibit different spectral behaviours by relying on just nine broad-band measurements. Fig. \[figure\_flowchart\_analysis\] summarises all steps involved in the simulation of [[*Planck*]{}]{}-observations, filter derivation and signal extraction.

Summary and conclusion {#sect_summary}
======================

A simulation for assessing [[*Planck*]{}’s ]{} SZ-capabilities in the presence of spurious signals is presented that combines maps of the thermal and kinetic SZ-effects with a realisation of the cosmic microwave background (CMB), in addition to Galactic foregrounds (synchrotron emission, free-free emission, thermal emission from dust, CO-line radiation) as well as the sub-millimetric emission from celestial bodies of our Solar system. Additionally, observational issues such as the finite angular resolution and spatially non-uniform instrumental noise of [[*Planck*]{}]{} are taken into account.

- [Templates for modelling the free-free emission and the carbon monoxide-line emission have been added to the [[*Planck*]{}]{}-simulation pipeline. The free-free template relies on an $H_\alpha$-survey of the Milky Way. The spectral properties of both foregrounds are modelled with reasonable parameter choices, i.e. $T_p=10^4$ K for the free-free plasma temperature and $T_\mathrm{CO}=20$ K for the mean temperature of giant molecular clouds.]{}

- [An extensive package for modelling the sub-millimetric emission from planets and asteroids has been implemented for [[*Planck*]{}]{}, that solves the heat balance equation of each celestial body. It takes the movement of the planets and asteroids into account, which causes, due to [[*Planck*]{}’s ]{}scanning strategy, double detections separated by approximately half-year intervals. The total number of asteroids implemented is $\simeq 1200$.]{}

- [The foregrounds have been combined under proper inclusion of [[*Planck*]{}’s ]{}frequency response windows in order to yield a set of flux maps. The auto- and cross-correlation properties of those maps are investigated in detail. Furthermore, their decomposition into spherical harmonics ${\langle}S_{\ell m}{\rangle}_\nu$ serves as the basis for the filter construction. It should be emphasised that the spectral properties of a foreground component were assumed to be isotropic.]{}

- [In order to separate the SZ-Signal and to suppress the foreground components, the theory of matched and scale-adaptive filtering has been extended to spherical data sets. The formulae in the context of spherical coordinates and $Y_{\ell}^m$-decomposition are analogous to those derived for Cartesian coordinate systems and Fourier-transforms. ]{}

- [The global properties of filter kernel shapes are examined as functions of observing channel, composition of noise, parameters of the profile to be detected and spectral dependence of the signal. Transformation of the filter kernels to real space yields functions that resemble the Mexican-hat wavelets, but show more than one oscillation.
The shape of the filter kernels can be understood intuitively: They subject the maps to high-pass filtering while retaining structures similar in angular extent to the predefined profile size. The signal is then amplified by linear combination of the maps, which again is apparent in the sign of the filter kernels.]{}

- [The functionality of the filtering scheme is verified by applying the filters to simulated observations. It is found that the Galactic foregrounds can be suppressed very effectively so that the SZ-cluster signals can be retrieved. Comparing the two filters, the scale-adaptive filter does not perform as well as the matched filter, which is in accordance with the findings of @2002MNRAS.336.1057H [@2002ApJ...580..610H]. It should be emphasised that for the derivation of the filter kernels nothing but a model profile and all cross-power spectra (in [[*Planck*]{}’s ]{}case a total number of 45 independent $C_{\ell,\nu_1\nu_2}$-sets) are used.]{}

The scientific exploitation of our simulation and the characterisation of [[*Planck*]{}’s ]{}SZ-cluster sample, e.g. the number density as a function of detection significance as well as filter parameters, the spatial distribution as a function of Galactic and ecliptic latitude and the distribution in redshift, mass and apparent size, will be the subject of a forthcoming paper.

Acknowledgements {#acknowledgements .unnumbered}
================

The authors would like to thank Torsten En[ß]{}lin for careful reading of the manuscript. The support provided by Martin Reinecke in enhancing the [[*Planck*]{}]{}-simulation tools and adding custom changes is greatly appreciated.

Optimised filter for single frequency all-sky observations {#appendix_sphsingle}
==========================================================

This appendix derives the optimised filters for single frequency all-sky observations and thus serves as a detailed supplement to Sect. \[sect\_filtering\] where optimised filters for multi-frequency observations were derived. The formalism outlined in this appendix might be applied to future all-sky surveys in the X-ray or microwave regime.

Assumptions and definitions {#assumptions-and-definitions}
---------------------------

In order to construct our filters, we consider an all-sky map of the detected scalar field $s(\btheta)$ $$\label{eq:app:signal} s(\btheta) = y(|\btheta-\btheta_0|) + n(\btheta),$$ where $\btheta = (\vartheta, \varphi)$ denotes a two-dimensional vector on the sphere and $\btheta_0$ is the source location. The first term on the right-hand side represents the amplitude of the sources to be detected, while the second term corresponds to the generalised noise present in the map, which comprises all detected features other than the desired signal, including for instance instrumental noise. The statistical properties of the noise are assumed to be characterised by its power spectrum $\left{\langle}n_{\ell m} n^*_{\ell' m'} \right{\rangle}= C_{\ell} \delta_{\ell \ell'} \delta_{mm'}$. In order to sketch the construction of the optimised filter, we assume an individual cluster situated at the North pole ($\btheta_0=\bld{0}$) with a characteristic angular SZ-signal $y(\theta = |\btheta|) = A \tau(\theta)$, separating the amplitude $A$ and the profile $\tau(\theta)$.
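As a toy illustration of the signal model of eqn (\[eq:app:signal\]) (a sketch only, not the simulation pipeline itself), one can realise a single map with a source at the North pole and Gaussian noise drawn from a prescribed power spectrum $C_\ell$. The profile function, seed and variable names below are placeholders, and the healpy library is assumed.

```python
import numpy as np
import healpy as hp

def mock_sky(tau, amplitude, cl_noise, nside, seed=0):
    """Realise s(theta) = A * tau(theta) + n(theta) on the sphere.

    tau      : callable returning the dimensionless source profile at colatitude theta
    amplitude: source amplitude A
    cl_noise : power spectrum C_l of the generalised noise n(theta)
    """
    np.random.seed(seed)
    npix = hp.nside2npix(nside)
    theta, _phi = hp.pix2ang(nside, np.arange(npix))   # theta = 0 at the North pole
    signal = amplitude * tau(theta)
    noise = hp.synfast(cl_noise, nside)                # Gaussian realisation of C_l
    return signal + noise
```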
Convolution theorem on the sphere {#appendix_convolution} --------------------------------- Filtering a scalar field on the sphere with an arbitrary, asymmetric kernel requires the specification of the convolution path as well as the orientation of the filter kernel at each position on the sphere in order to apply any convolution algorithm [@2001PhRvD..63l3002W]. Because of the simplifying restriction to centrally symmetric filter kernels, we give the theorem for the convolution of two functions, one of which is assumed to be centrally symmetric. The filtered field $u(\bbeta)$ is obtained by convolution of the centrally symmetric filter function $\psi(\theta)$ with the scalar field on the sphere $s(\btheta)$, $$\label{eq:app:convolve1} u(\bbeta) = \int {\mathrm{d}}\Omega\, s(\btheta) \psi(|\btheta-\bbeta|).$$ Expansion of these two scalar fields into spherical harmonics yields $$\begin{aligned} \label{eq:app:convolve2} s(\btheta) &=& \sum_{\ell=0}^\infty \sum_{m=-\ell}^{+\ell} s_{\ell m}^{}\, Y_\ell^m(\btheta), \\ \psi(\theta) &=& \sum_{\ell=0}^\infty \sum_{m=-\ell}^{+\ell} \psi_{\ell m}^{}\, Y_\ell^m(\theta) = \sum_{\ell=0}^\infty \sqrt{\frac{2\ell+1}{4\pi}} \psi_{\ell 0}^{}\,P_\ell^{}(\cos\theta).\end{aligned}$$ The last step assumes central symmetry. In this case, only modes with $m=0$ are contributing. For proceeding, the addition theorem for Legendre polynomials $P_\ell(x)$ [@1995mmp..book.....A] is used in substituting $\gamma=|\btheta-\bbeta|$:$$\label{eq:app:at} P_\ell(\cos\gamma) = \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{+\ell} Y_\ell^m(\btheta)\,Y_\ell^{m*}(\bbeta).$$ Combining these equations and applying the completeness relation yields the convolution relation for a centrally symmetric filter kernel, $$\label{eq:app:ct} u(\bbeta) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell} u_{\ell m}^{} Y_\ell^m(\bbeta), \quad\mbox{with}\quad u_{\ell m} = \sqrt{\frac{4 \pi}{2 \ell+1}} s_{\ell m}\,\psi_{\ell 0} \,.$$ Concepts of optimised filtering on the sphere {#appendix_sphere_filter} --------------------------------------------- The idea of optimised matched filters was proposed by @1998ApJ...500L..83T, and generalised to scale-adaptive filters by @2001ApJ...552..484S for a flat topology. The construction of a centrally symmetric optimised filter function $\psi(\theta)$ for the amplification and detection of signal profiles differing only in size but not in shape implies a family of filters $\psi(\theta/R)$ introducing a scaling parameter $R$. Decomposing the family of filter functions $\psi(\theta/R)$ into spherical harmonics yields $$\begin{aligned} \label{eq:app:filter} \psi\left(\frac{\theta}{R}\right) &=& R^2\sum_{\ell=0}^\infty \sqrt{\frac{2\ell+1}{4\pi}}\psi_{\ell 0}(R)\,P_\ell^{}(\cos\theta),\\ \psi_{\ell 0}(R) &=& \frac{1}{R^2} \int {\mathrm{d}}^2\theta\, \sqrt{\frac{2\ell+1}{4\pi}}\psi\left(\frac{\theta}{R}\right) P_\ell(\cos\theta),\end{aligned}$$ while allowing for central symmetry of the filter function. For a particular choice of $R$ the filtered field $u(R,\bbeta)$ is obtained by convolution (c.f. Appendix. 
\[appendix\_convolution\]): $$\begin{aligned} \label{eq:app:udef} u(R,\bbeta) & = & \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{+\ell} u_{\ell m}^{} Y_\ell^m(\bbeta), \quad\mbox{and}\\ u_{\ell m} & = & \sqrt{\frac{4 \pi}{2 \ell+1}} s_{\ell m}\,\psi_{\ell 0}(R) \,.\end{aligned}$$ Taking into account the vanishing expectation value of the noise ${\langle}n_{\nu}(\btheta) {\rangle}= 0$, the expectation value of the filtered field at the North pole $\bbeta=\bld{0}$ is given by $$\label{eq:app:umean} {\langle}u(R,\bld{0}){\rangle}= A \sum_{\ell=0}^{\infty} \tau_{\ell 0}\, \psi_{\ell 0}(R).$$ Assuming that the power spectrum of the signal is negligible compared to the noise power spectrum, the variance of the filtered field is given by $$\label{eq:app:uvariance} \sigma_u^2(R) = \left{\langle}\left[u(R,\bbeta) - {\langle}u(R,\bbeta){\rangle}\right]^2\right{\rangle}=\sum_{\ell=0}^{\infty} C_{\ell}^{}\,\psi_{\ell 0}^2(R).$$ While the optimised [*matched filter*]{} in the case of single frequency observations is defined to obey the first two of the following conditions, the optimised [*scale-adaptive filter*]{} is required to obey all three conditions: 1. [ The filtered field $u(R,\bld{0})$ is an unbiased estimator of the source amplitude $A$ at the true source position, i.e. ${\langle}u(R,\bld{0}){\rangle}= A$.]{} 2. [The variance of $u(R,\bbeta)$ has a minimum at the scale $R$ ensuring that the filtered field is an efficient estimator.]{} 3. [The expectation value of the filtered field at the source position has an extremum with respect to the the scale $R$, implying $$\label{eq:app:3cond} \frac{\upartial}{\upartial R}{\langle}u(R,\bld{0}){\rangle}= 0.$$]{} Matched filter {#matched-filter-1} -------------- In order to derive the matched filter, constraint (i) can be rewritten yielding $$\label{eq:app:constraint1} \sum_{\ell=0}^\infty \tau_{\ell 0}\, \psi_{\ell 0} = 1.$$ Performing functional variation (with respect to the filter function $\psi$) of $\sigma_u^2(R)$ while incorporating the constraint (\[eq:app:constraint1\]) through a Lagrangian multiplier yields the spherical matched filter: $$\label{eq:app:matched filter} \psi_{\ell 0}^{} = \alpha\, \frac{\tau_{\ell 0}}{ C_\ell}, \quad\mbox{where}\quad \alpha^{-1} = \sum_{\ell=0}^\infty \frac{\tau_{\ell 0}^2}{ C_\ell}.$$ In any realistic application, the power spectrum $C_{\ell}$ can be estimated from the observed data provided the power spectrum of the signal is negligible. The quantities $\alpha$, $\tau_{\ell 0}$, and thus the filter kernel $\psi_{\ell 0}$ can be straightforwardly computed for any model source profile $\tau(\theta)$. Scale-adaptive filter {#sec:app:SAF} --------------------- The next step consists of reformulating constraint (iii) in order to find a convenient representation for the application of functional variation. The expansion coefficient of the family of filter functions $\psi(\theta/R)$ of eqn. (\[eq:app:filter\]) can be rewritten yielding $$\label{eq:app:approx} \psi_{\ell 0}^{}(R) = \frac{1}{R^2} \int {\mathrm{d}}^2\theta\, \psi\left(\frac{\theta}{R}\right)Y_\ell^{0}(\theta) = \int {\mathrm{d}}^2\beta\, \psi(\beta) Y_\ell^{0}(R \beta),$$ where $\beta \equiv \theta/R$. In general, this substitution is [*not*]{} valid, because ${\mathrm{d}}^2\theta = \sin\theta\,{\mathrm{d}}\theta\,{\mathrm{d}}\phi$. In the case of localised source profiles, the angle $\theta$ is small for non-vanishing values of $\psi$ justifying the approximation $\sin \theta\approx \theta$. The same argument also applies for the boundaries of integration. 
With the aid of eqn. (\[eq:app:umean\]), condition (\[eq:app:3cond\]) reads $$\label{eq:app:derivation1} \frac{\upartial}{\upartial R}{\langle}u(R,\bld{0}){\rangle}= \sum_{\ell=0}^{\infty} \tau_{\ell 0}\, \frac{\upartial \psi_{\ell 0}(R)}{\upartial R} = 0.$$ Using eqn. (\[eq:app:approx\]), the derivative now acts on the Legendre polynomial $P_{\ell}$, $$\label{eq:app:derivation2} \sum_{\ell=0}^\infty \sqrt{\frac{2 \ell+1}{4 \pi}}\,\tau_{\ell 0} \int {\mathrm{d}}^2 \beta\, \psi(\beta) P_\ell' (\cos R \beta)\, \beta\,\sin R \beta = 0.$$ Using the derivative relation of the Legendre polynomials [@1995mmp..book.....A], $$\label{eq:app:LegendreRelation} P_\ell'(x) = \frac{\ell+1}{1 - x^2}\,[x\,P_\ell(x) - P_{\ell+1}(x)],$$ one obtains $$\begin{aligned} \label{eq:app:derivation3} \lefteqn{\sum_{\ell=0}^\infty \sqrt{\frac{2 \ell+1}{4 \pi}}\,(\ell +1)\, \tau_{\ell 0} \int {\mathrm{d}}^2 \beta\, \psi(\beta) \frac{R \beta}{\sin R \beta} \,\times}\nonumber \\ &&[\cos R \beta\,P_\ell(\cos R \beta) - P_{\ell+1}(\cos R \beta)] = 0.\end{aligned}$$ In our case, the angle $\theta$ is small for non-vanishing values of $\psi$ justifying the approximations $\sin R\beta \approx R\beta$ and $\cos R\beta \approx 1$. Substituting back, ${\mathrm{d}}^2\beta = {\mathrm{d}}^2\theta/R^2$, introducing $x\equiv\cos \theta = \cos R \beta$, and inserting the inversion of eqn. (\[eq:app:approx\]), namely $$\label{eq:app:inversion} \psi(\beta) = \sum_{\ell'=0}^\infty \psi_{\ell'0}^{}(R) Y_{\ell'0}^{0}(R \beta),$$ one arrives at $$\begin{aligned} \label{eq:app:derivation4} \lefteqn{\sum_{\ell',\ell=0}^\infty \sqrt{\frac{2 \ell+1}{4 \pi}}\,\sqrt{\frac{2 \ell'+1}{4 \pi}}\, (\ell +1)\, \tau_{\ell 0}\psi_{\ell' 0}(R)\, \times}\nonumber \\ && \frac{2\pi}{R^2} \int {\mathrm{d}}x\, P_{\ell'}(x) [P_\ell(x) - P_{\ell+1}(x)] = 0.\end{aligned}$$ Applying the orthogonality relation for the Legendre polynomials, $$\label{eq:orthogonal} \int_{-1}^{+1}{\mathrm{d}}x\, P_\ell (x) P_{\ell'}(x) = \frac{2}{2 \ell+1} \delta_{\ell\ell'},$$ and using the small angle approximation in the second term of eqn. (\[eq:app:derivation4\]) with the same argument as given above, yields the final result $$\label{eq:app:derivation5} \sum_{\ell=0}^\infty\psi_{\ell 0}(R) [\tau_{\ell 0} + \ell (\tau_{\ell 0} - \tau_{\ell-1, 0})] = 0.$$ Replacing the differential quotient with the corresponding derivative is a valid approximation for $\ell\gg 1$. Thus, eqn. (\[eq:app:derivation5\]) can be recast in shorthand notation yielding $$\label{eq:app:derivation6} \sum_{\ell=0}^\infty\psi_{\ell 0}(R)\tau_{\ell 0} \left[2 + \frac{{\mathrm{d}}\ln \tau_{\ell 0}}{{\mathrm{d}}\ln \ell}\right] = 0.$$ This result could have been obtained independently by attaching the tangential plane to the North pole and applying Fourier decomposition of the filter function $\psi$ and the source profile $\tau$. For that reason, it is not surprising that the functional form of this condition on the sphere agrees with that obtained by @2001ApJ...552..484S for a flat topology in two dimensions. 
Performing functional variation (with respect to the filter function $\psi$) of $\sigma_u^2(R)$ while interlacing the constraints (\[eq:app:constraint1\]) and (\[eq:app:derivation6\]) through a pair of Lagrangian multipliers yields the spherical scale-adaptive filter, $$\begin{aligned} \label{eq:app:scale-adaptive filter} \psi_{\ell 0}^{} &=& \frac{\tau_{\ell 0}}{ C_\ell\,\Delta} \left[2 b + c - (2 a + b)\frac{{\mathrm{d}}\ln \tau_{\ell 0}}{{\mathrm{d}}\ln \ell} \right],\nonumber\\ \Delta &=& ac-b^2, \\ a &=& \sum_{\ell=0}^\infty \frac{\tau_{\ell 0}^2}{C_\ell}, \quad b = \sum_{\ell=0}^\infty \frac{\tau_{\ell 0}}{C_\ell}\frac{{\mathrm{d}}\tau_{\ell 0}}{{\mathrm{d}}\ell}, \nonumber\\ c &=& \sum_{\ell=0}^\infty C_\ell^{-1} \left(\frac{{\mathrm{d}}\tau_{\ell 0}}{{\mathrm{d}}\ln \ell}\right)^2.\end{aligned}$$ As before in the case of the matched filter, the power spectrum $C_{\ell}$ can be derived from observed data provided the power spectrum of the signal is negligible. Assuming a model source profile $\tau(\theta)$, the quantities $\tau_{\ell 0}$, $a$, $b$, $c$, and finally $\psi_{\ell 0}$ can be computed in a straightforward way. The derivative of $\tau_{\ell 0}$ with respect to the multipole order $\ell$ is a shorthand notation of the differential quotient in eqn. (\[eq:app:derivation5\]). \[lastpage\] [^1]: e-mail: spirou@mpa-garching.mpg.de (BMS); pfrommer@mpa-garching.mpg.de (CP); reinhard@mpa-garching.mpg.de (RMH); mbartelmann@ita.uni-heidelberg.de (MB) [^2]: http://planck.mpa-garching.mpg.de/ [^3]: http://astro.estec.esa.nl/Planck/ [^4]: http://cfa-www.harvard.edu/cfa/ps/mpc.html
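A minimal numerical sketch of the two single-frequency filters derived in this appendix is given below. It assumes the model profile coefficients $\tau_{\ell 0}$ and the noise power spectrum $C_\ell$ are available as numpy arrays, and it approximates the derivative of $\tau_{\ell 0}$ with respect to $\ell$ by a finite difference, in the spirit of the differential quotient of eqn (\[eq:app:derivation5\]); it is an illustration only, not code used for the results above.

```python
import numpy as np

def matched_filter(tau_l0, C_l):
    """Spherical matched filter: psi_l0 = alpha * tau_l0 / C_l,
    with alpha^-1 = sum_l tau_l0^2 / C_l."""
    alpha = 1.0 / np.sum(tau_l0**2 / C_l)
    return alpha * tau_l0 / C_l

def scale_adaptive_filter(tau_l0, C_l):
    """Spherical scale-adaptive filter; derivatives with respect to l are
    finite differences, which is a valid approximation for l >> 1."""
    ell = np.arange(len(tau_l0), dtype=float)
    dtau_dl = np.gradient(tau_l0)          # d tau_l0 / d l
    dtau_dlnl = ell * dtau_dl              # d tau_l0 / d ln l
    a = np.sum(tau_l0**2 / C_l)
    b = np.sum(tau_l0 * dtau_dl / C_l)
    c = np.sum(dtau_dlnl**2 / C_l)
    Delta = a * c - b**2
    dln_tau_dln_l = np.divide(dtau_dlnl, tau_l0,
                              out=np.zeros_like(ell), where=tau_l0 != 0)
    return tau_l0 / (C_l * Delta) * (2.0*b + c - (2.0*a + b) * dln_tau_dln_l)
```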
$^1$[*IPNL, CNRS/IN2P3, 4 rue E. Fermi, 69622 Villeurbanne cedex, France; Université Lyon 1, Villeurbanne; Université de Lyon, F-69622, Lyon, France* ]{}\

> An Effective Field Theory for dark matter at a TeV-scale hadron collider should include contact interactions of dark matter with the partons, the Higgs and the $Z$. This note estimates the impact of including dark matter-$Z$ interactions on the complementarity of spin dependent direct detection and LHC monojet searches for dark matter. The effect of the $Z$ is small, because it interacts with quarks via small electroweak couplings, and the contact interaction self-consistency condition $C/\Lambda^2 < 4\pi/\hat{s}$ restricts the coupling to dark matter. In this note, the contact interactions between the $Z$ and dark matter are parametrised by derivative operators; this is convenient at colliders because such interactions do not match onto low energy quark-dark matter contact interactions.

Introduction
============

Various experiments attempt to detect the particle making up the “dark matter”[@DM] of our Universe. For instance, direct detection (DD) experiments [@EdelCDMS; @Xenon; @SC; @autres] search for $\sim $ MeV energy deposits due to scattering of dark matter particles from the galactic halo on detector nuclei. And the Large Hadron Collider (LHC) searches[@CMS; @ATLAS] for dark matter pairs produced in multi-TeV $pp$ collisions, which would materialise as an excess of events with missing energy and jets. The LHC and DD searches are at very different energy scales, so different Standard Model (SM) particles are present, and also the quantum interferences are different[@PST]. The expected rates can be compared in specific dark matter models [@R], or, in recent years, several studies[@CMS; @tevatron; @toutlmonde; @Z; @unitarity1; @monoH] have compared the LHC and DD sensitivities using a contact interaction parametrisation of the dark matter interactions with the standard model particles. The LHC bounds obtained in this way are restrictive, and probe smaller couplings than direct detection experiments searching for “spin dependent” interactions between partons and dark matter [@SC]. These contact interaction studies are referred to as “Effective Field Theory” (EFT), and considered to be relatively model independent. However, the particle content is an input in EFT, and the restrictive LHC limits assume that the dark matter particle is the only new particle accessible at the LHC. Relaxing this assumption can significantly modify the experimental sensitivities[@st; @pvz; @toni2]. This has motivated various simplified models for dark matter searches at the LHC [@marcusmodel; @DMLQ; @deSGS]. Retaining this assumption, as will be done in this note, is only marginally consistent, because the contact interactions to which the LHC is sensitive would have to be mediated by strongly coupled particles. As recalled in the next section, this implies that colliders can exclude contact interactions of order their sensitivity, but not much larger. Effective Field Theory (EFT) is a recipe to get the correct answer in a simple way[@Georgi]. So this note attempts to compare LHC and DD constraints on dark matter, according to the prescriptions of [@Georgi]. An EFT for dark matter at the LHC should parametrise all possible SM-gauge invariant interactions of the dark matter with other on-shell particles. So first, contact interactions between the dark matter and the Higgs or $Z$ should be included at the LHC.
These can interfere with the contact interactions studied in previous analyses, but contribute differently at colliders than in direct detection, so the linear combination of operator coefficients constrained at high and low energy will be different. Secondly, an EFT contains in principle a tower of operators[@Porod] organised in increasing powers of the inverse cutoff scale $1/\Lambda$, and higher orders can only be neglected if there is a sufficient hierarchy of scales: $\Lambda_{NP}\gg v$. This hierarchy is absent in dark matter production at the LHC. Addressing the importance of higher dimensional operators will be left to a subsequent publication [^1]. This note focuses on including the $Z$ in the EFT for dark matter at the LHC, and estimates analytically the consequences of including the lowest dimension operators allowing dark matter interactions with the $Z$ [^2]. Section \[sec:EFT\] outlines a peculiar choice of operators for the $Z$ vertex; they are proportional to the momentum-transfer-squared. This choice appears convenient, because the effects of the $Z$ are therefore absent in direct detection. Section \[sec:LHC\] estimates the impact of cancellations between $Z$ exchange and dark matter contact interactions with quarks at the LHC, and section \[sec:DD\] recalls the direct detection bounds.

EFT, assumptions and operators {#sec:EFT}
===============================

The low energy consequences of New Physics from above a scale $\Lambda$ can be parametrised by contact interactions of coefficient $C/\Lambda^n$. Unitarity [@unitarity1; @unitarity2] approximately implies that $C< 4\pi$, and the contact interaction approximation implies that the momentum exchange should be less than $\Lambda$. This means that an experiment can [*exclude*]{} $$\frac{4\pi}{\hat{s}} \;>\; \frac{C}{\Lambda^{2}} \;>\; {\rm sensitivity}~~, \label{plage}$$ where $\hat{s}$ is the four-momentum-squared of the process. Low energy experiments, where $\hat{s} \to 0$, therefore can be taken to exclude everything above their sensitivity. However, the upper limit of eqn (\[plage\]) is relevant for collider searches, where $\hat{s}$ is the invariant mass of the invisibles. This upper limit is rarely taken into account in the literature. The first step in the EFT recipe to parametrise New Physics from beyond the scale $\Lambda$ is to add to the Lagrangian (at the scale $\Lambda$) all the non-renormalisable operators which can be constructed out of the fields present, consistently with the symmetries of the theory[@Georgi]. The coefficients $C^{(n)}_O$ of these operators are unknown “coupling constants” which evolve with scale via Renormalisation Group Equations. This infinite set of operators would be unmanageable, so EFT is useful when there is a hierarchy between the experimental and NP scales. Then only the lowest dimension operators need be considered. In this note, the dark matter is assumed to be the only New Physics particle lighter than a TeV, and is taken to be a SM gauge singlet Dirac fermion $\chi$ with a conserved parity, and of mass $m_\chi \geq m_Z/2$ (maybe $\geq m_h/2$), to avoid bounds on the coupling to the $Z$ from the invisible width[^3] of the $Z$ (and Higgs). So the particle content of the EFT for $\chi$ at the LHC should be $\chi$, plus all relevant particles of the SM, which I take to be the partons, the Higgs, and the $Z$. The operators should be SM gauge invariant, to profit from our knowledge of the SM gauge sector. They are of dimension $>$ 4, and should attach a $\chi \overline{\chi}$ pair to partons, to the Higgs, or to the $Z$.
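As a small numerical illustration of the exclusion window of eqn (\[plage\]) (the numbers below are purely illustrative and do not represent any experiment's actual sensitivity), the range of dimensionless coefficients $C$ that a collider can exclude follows directly from its sensitivity and from $\hat{s}$:

```python
import math

def excludable_C_window(sensitivity, sqrt_s_hat, Lambda=1.0):
    """Range of C a collider can exclude, following eqn (plage):
    sensitivity < C/Lambda^2 < 4 pi / s_hat  (all scales in TeV)."""
    lower = sensitivity * Lambda**2
    upper = (4.0 * math.pi / sqrt_s_hat**2) * Lambda**2
    return (lower, upper) if lower < upper else None

# illustrative numbers only: sensitivity ~ 1 TeV^-2, sqrt(s_hat) ~ 1 TeV, Lambda = 1 TeV
print(excludable_C_window(1.0, 1.0))   # ~ (1.0, 12.57), i.e. C between ~1 and 4*pi
```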
The quark operators are taken generation diagonal; flavour-changing operators were considered in [@CK11]. The quarks are chiral because the operators are SM gauge invariant, and also because opposite chiralities do not interfere at the LHC. The dark matter currents are taken in a vector, axial vector, etc basis because these do not interfere in direct detection, nor at the LHC in the limit where the $\chi$ mass is neglected, as done here. I focus on operators of lowest dimension, that is six and seven. This is an arbitrary simplification, because $\Lambda \sim$ TeV, which is the energy scale probed at the LHC. The contact interactions considered here therefore do not provide a “model-independent” parametrisation of the interactions of $\chi$ with the SM. This problem is left for a later publication. Concretely, $\Lambda$ will be taken as 1- $2 $ TeV, for reasons discussed above eqn (\[lim3\]). Experimental limits on contact interactions will therefore be presented as limits on the dimensionless coefficient $C^{(n)}_O$. At dimension six, there are vector and axial vector $\chi$ currents coupled to quarks: $$\frac{C_{QX,V}}{\Lambda^2}\, \overline{\chi}\g_\a \chi ~\overline{Q_i}\g^\a P_X Q_i ~~~,~~~ -\,\frac{C_{QX,A}}{\Lambda^2}\, \overline{\chi}\g_\a\g_5 \chi ~\overline{Q_i}\g^\a P_X Q_i \label{qDMd6}$$ where the quarks $Q_i$ are first generation SM multiplets $ \{q_L,u_R,d_R\}$, and $P_X$ is the appropriate chiral projector. The contact interactions between the dark matter and the $Z$ boson are taken as $$\begin{aligned} -\frac{C_{Z,V}}{\Lambda^2}\, D^\mu B_{\mu\nu}~ \overline{\chi}\g^\nu \chi &~~\to~~& \frac{C_{Z,V}}{\Lambda^2}\, s_{\rm w}\, p_Z^2\, Z^\nu~ \overline{\chi}\g_\nu \chi\, ,\nonumber\\ \frac{C_{Z,A}}{\Lambda^2}\, D^\mu B_{\mu\nu}~ \overline{\chi}\g^\nu\g_5 \chi &~~\to~~& -\frac{C_{Z,A}}{\Lambda^2}\, s_{\rm w}\, p_Z^2\, Z^\nu~ \overline{\chi}\g_\nu\g_5 \chi \label{inteff}\end{aligned}$$ where to the right of the arrow is the resulting vertex, $B^\mu$ is the hypercharge gauge boson with coupling $g'=e\tan\theta_W \equiv e s_{\rm w}/c_{\rm w}$, $B^{\mu\nu} = \partial ^\mu B^\nu - \partial ^\nu B^\mu$, and a term $\propto p_Z\cdot Z$ was dropped after the arrow in the axial current operator, assuming the $Z$ was produced by light quarks. There is in addition a “dipole moment” operator $B^{\mu \nu}\overline{\chi} \sigma_{\mu \nu}\chi $, which is neglected here because it also induces dark matter interactions with the photon [@magmoDM] which are more interesting. Then at dimension seven, there are four-fermion operators: $$\begin{aligned} \overline{\chi}\chi ~ \overline{q_L} H d_R + \left[ \overline{q_L} H d_R \right]^\dagger \overline{\chi}\chi ~~~&,&~~~ \overline{\chi}\g_5\chi ~ \overline{q_L} H d_R + \left[ \overline{q_L} H d_R \right]^\dagger \overline{\chi}\g_5\chi \nonumber\\ \overline{\chi}\sigma^{\mu\nu}\chi ~ \overline{q_L} H \sigma_{\mu\nu} d_R &+& \left[ \overline{q_L} H \sigma_{\mu\nu} d_R \right]^\dagger \overline{\chi}\sigma^{\mu\nu}\chi \end{aligned}$$ (and similarly for $u$ quarks, but with a charge conjugate Higgs field), interactions with the gluons: $$\frac{C^{(7)}_{gg,S} }{\Lambda^3} \overline{\chi} \chi G_{\mu \nu}^A G^{\mu \nu, A} ~~~,~~~ \frac{C^{(7)}_{g\tilde{g},P} }{\Lambda^3} \overline{\chi}\g_5 \chi G_{\mu \nu}^A \tilde{G}^{\mu \nu, A}~~~,$$ and double-derivative interactions between dark matter and the Higgs: $$\begin{aligned} H^\dagger D^\mu D_\mu H ~\overline{\chi}\chi &~~\to~~& \left( - m_W^2 W_\mu^+W^{-\mu} - m_Z^2 Z_\mu Z^{\mu} + v\, p_h^2\, h\right) \overline{\chi}\chi\, ,\nonumber\\ H^\dagger D^\mu D_\mu H ~\overline{\chi}\g_5\chi &~~\to~~& \left( - m_W^2 W_\mu^+W^{-\mu } - m_Z^2 Z_\mu Z^{\mu} + v\, p_h^2\, h\right) \overline{\chi}\g_5\chi \label{higgs}\end{aligned}$$ where $\langle H \rangle = v = 174$ GeV, $p_h$ is the four-momentum of the physical Higgs particle $h$, and after the arrow are the interactions induced by the operator (the dimension-seven operators above are understood to carry coefficients $C^{(7)}_O/\Lambda^3$). The $Z$ and Higgs operators are chosen $\propto p^2$ so that they are relevant at the LHC where the $Z$ and Higgs are external legs in the EFT, but do not contribute in the low-energy scattering of DD. This choice should be acceptable, because the operator basis can always be reduced by using the equations of motion[@Simma]. Focussing for simplicity on the hypercharge boson $B$, and neglecting gauge-fixing terms, the equations of motion are [@polonais] $$D_\mu B^{\mu\nu}= g'\,y_H\, \big(H^\dagger D^\nu H - (D^\nu H)^\dagger H\big) + g'\sum_\psi y_\psi\, \overline{\psi}\g^\nu\psi \label{EoMZ}$$ where $\psi$ is a SM fermion of hypercharge $y_\psi$.
Usually[@polonais], operators containing the double derivative on the left of eqn(\[EoMZ\]) are dropped, and the operators containing the Higgs v.e.v. squared $ \langle H^\dagger \Dlr H \rangle$ are retained. In this usual basis, $\chi-Z$ interactions could be parametrised by $(\overline{\chi}\gamma^\mu \chi) H^\dagger \Dlr H$, in which case the matrix element for $Z$ exchange at the LHC is $\propto m_Z^2/(p_Z^2 -m_Z^2)$, so negligeable for $p_Z^2 \gg m_Z^2$. But $Z$ exchange should be included in the quark-$\chi$ contact interaction used in direct detection, so the coefficient of the operators of eqn (\[qDMd6\]) would not be the same in direct detection as at the LHC. To avoid this discrepancy, I retain the derivative operators of eqn (\[inteff\]), and use eq. (\[EoMZ\]) to remove the operator $ \propto \langle H^\dagger \Dlr H \rangle$. This means that the $Z$ couples significantly to $\chi$ at the LHC, but negligeably in DD, and the operator coefficients do not change when the $Z$ is matched out of the theory. In the case of the Higgs, the equation of motion is $$D_\mu D^\mu H = \mu^2 H -\lambda H^\dagger H H - \overline{e}Y_e^\dagger P_L \ell - \overline{d}Y_d^\dagger q_L + \varepsilon \overline{q_L}Y_u u$$ where $Y_f$ are Yukawa matrices. This has been used to exchange the more usual $ (H^\dagger H)^2 \overline{\chi} \chi$, and $ (H^\dagger H) \overline{\chi} \chi$ operators for the double-derivative interactions between dark matter and the Higgs given in eqn (\[higgs\]). Notice that it is possible to use the equations of motion to replace two operators ($\overline{\chi} \chi H^\dagger H$ and $\overline{\chi} \chi (H^\dagger H)^2$) with one (involving the DM and $H^\dagger D^2 H$), because I am only interested in the $h$-$\chi$-$\bar{\chi}$ interaction induced by these operators. The linear combination of operators $[\mu^2 H^\dagger H -\lambda (H^\dagger H)^2]\overline{\chi}\chi$, which is orthogonal to the combination in the Equations of Motion, gives a vanishing $h$-$\chi$-$\bar{\chi}$ interaction, due to the minimisation condition of the Higgs potential. As in the case of the $Z$, the derivative operators of eqn (\[higgs\]) are interesting, because they give a higgs coupling to dark matter $\propto p_h^2$, which has the desirable feature of being relevant at the LHC where the Higgs is in the effective theory, but not contributing at low energy. This note focusses on the interactions of $\chi$ with the $Z$ (eqn \[inteff\]), and with the quark currents of eqn (\[qDMd6\]) which can interfere with $Z$ exchange. So the dimension seven operators will be neglected in the following sections. However, it is interesting to first review the sensitivity to the coefficients of the operators of eqn (\[higgs\]). The dark matter interactions to $W$ and $Z$ pairs were studied in [@HR], who used $U(1)_{em} \times SU(3)$ invariant operators such that these contact interactions have dimension five with coupling $1/\Lambda_{CHLR}$. They find that the 8 TeV LHC with luminosity 25 fb$^{-1}$ could probe $\Lambda_{CHLR} \lsim $ TeV. This constrains the coefficients of the operators of eqn (\[higgs\]) to be $ \lsim 1/({\rm TeV}m_W^2)$, which is not restrictive. For $m_\chi< m_h/2$, a more significant limit of 10 TeV$^{-3}$ arises from requiring $\Gamma(h\to \chi \overline{\chi}) \lsim \Gamma(h\to b \overline{b})$. This restriction should be reasonable[@BMM] because the Higgs is observed to decay to $b\bar{b}$. 
Estimated limits from the LHC {#sec:LHC}
=============================

Dark matter particles are invisible to the LHC detectors, so pair production of $\chi$s can be searched for in events with missing transverse energy ($\ETm$), which can be identified by jet(s) radiated from the incident partons. The principal Standard Model background for such “monojet” searches is $Z +$ jet production, followed by $Z\to \bar{\nu}\nu$. The 8 TeV LHC is sensitive to dark matter contact interactions with $C/\Lambda^2 \sim $ TeV$^{-2}$. Given the operators of eqns (\[qDMd6\]) and (\[inteff\]) at the LHC, the axial vector dark matter current can interact with quarks $Q$ via the diagrams of figure \[figop\], which can be written as a four-fermion interaction of coefficient $$\begin{aligned} c_{QX,A} & = & C_{QX,A}+ g_X^Q\, \frac{g\, s_{\rm w}}{2 c_{\rm w}}\, \frac{p_Z^2}{p_Z^2 - m_Z^2}\, C_{Z,A}\nonumber\\ &\simeq & C_{QX,A}+ g_X^Q\, \frac{g\, s_{\rm w}}{2 c_{\rm w}}\, C_{Z,A}~~~, \label{annuler}\end{aligned}$$ where $g_X^Q =\{ 1- \frac{4}{3}s_{\rm w}^2,- \frac{4}{3}s_{\rm w}^2, -1+ \frac{2}{3}s_{\rm w}^2, \frac{2}{3}s_{\rm w}^2\}$ for $\{u_L,u_R,d_L,d_R\}$ [@PDB]. A similar expression can be obtained for the vector $\chi$ current. The $Z$ exchange looks like a contact interaction for large $p_Z^2 = M^2_{inv} \gg m_Z^2$, where $M^2_{inv}$ is the invariant mass-squared of the dark matter pair. This is a useful approximation, because the arguments below suggest that most $\chi\bar{\chi}$ events arise at larger $M^2_{inv}$. The aim here is to analytically estimate the invisible four-momentum-squared $ M^2_{inv} $, by comparing the partonic cross-sections for $\nu\bar{\nu}$ and $\chi\bar{\chi}$ production. I assume that the QCD part of the amplitude is identical in both cases, so it does not need to be calculated. This allows for an arbitrary number of jets, which is more difficult to simulate[@Uli2] (the data frequently contains more than one jet[@CMS]). The matrix element for jets $+\,\nu \bar{\nu}$ will contain $$g_X^Q\frac{g^2}{4c^2_W} \frac{1}{p^2 -m_Z^2 +im_Z\Gamma_Z} (\bar{Q} \g^\a P_X Q ) ( \overline{\nu}\g_\a P_L\nu)$$ whereas, for DM production via the $\bar{\chi} \g^\mu\g_5 \chi$ current, this is replaced by: $$\frac{c_{QX,A}}{\Lambda^2} (\bar{Q} \g^\a P_X Q ) (\overline{\chi}\g_\a\g_5 \chi)~~.$$ Then the full matrix element must be squared, and integrated over the phase space of $N$ jets and two invisible particles. The invisibles can be treated as a single particle of variable mass $p^2 = M^2_{inv}$, using the identity $$\begin{aligned} d\Pi_{N+2} &=& \delta^4\Big(P_{in}- \sum_i q_i -p\Big)\, \prod_{i:1..N} \left( \frac{d^3 q_i}{(2\pi)^3\, 2E_i} \right) \frac{d^3 p}{(2\pi)^3\, 2E_p}\; (2\pi)^3\, dp^2 ~\times\nonumber\\ && \delta^4(p-p_\chi- p_{\overline{\chi}})\, \frac{d^3 p_\chi}{(2\pi)^3\, 2E_\chi}\, \frac{d^3 p_{\overline{\chi}}}{(2\pi)^3\, 2E_{\overline{\chi}}}~~.\end{aligned}$$ Neglecting spin correlations and the dark matter mass, the invisible phase space integral over the gamma-matrix trace for the invisible fermions gives $M^2_{inv}/(8\pi)$ for $\chi$s, and $3M^2_{inv}/(16\pi)$ for neutrinos. For neutrinos in the final state, $M_{inv}^2 = m_Z^2$ due to the delta-function-like behaviour of the $Z$ propagator-squared. However, for dark matter, the $d M^2_{inv}$ phase space integral will privilege larger values of $M^2_{inv}$. Treating the $N$ jets of the event as a particle of negligible mass, the upper bound on $M_{inv}^2$ is $ \gsim 4 \ETm^2$, where $\ETm$ is the invisible transverse energy. The CMS study [@CMS] uses the range 400 GeV $\leq \ETm\lsim$ TeV. However, the assumption that the jet emission part of the cross-section is the same as for $\nu$ pairs will fail, if $M_{inv}^2$ is a significant fraction of the energy of the event.
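The combination of eqn (\[annuler\]) is simple to evaluate numerically. The sketch below is illustrative only: the numerical values of $\sin^2\theta_W$ and of the SU(2) coupling $g$ are assumptions rather than inputs quoted in this note, and the input coefficients are placeholders. It returns the effective axial coefficients for each quark chirality in the limit $p_Z^2 \gg m_Z^2$ and checks the self-consistency requirement $|c| \lsim 4\pi$.

```python
import math

SW2 = 0.231                                    # sin^2(theta_w), assumed numerical value
G_XQ = {"uL": 1 - 4*SW2/3, "uR": -4*SW2/3,     # Z couplings g_X^Q as quoted in the text
        "dL": -1 + 2*SW2/3, "dR": 2*SW2/3}

def effective_axial_coefficients(C_quark, C_Z_A, g=0.65):
    """c_{QX,A} ~ C_{QX,A} + g_X^Q * (g s_w / 2 c_w) * C_{Z,A}, valid for p_Z^2 >> m_Z^2."""
    sw, cw = math.sqrt(SW2), math.sqrt(1.0 - SW2)
    z_term = g * sw / (2.0 * cw) * C_Z_A
    return {q: C_quark[q] + G_XQ[q] * z_term for q in C_quark}

# placeholder inputs: unit quark coefficients and the largest allowed Z coefficient
c = effective_axial_coefficients({"uL": 1.0, "uR": 1.0, "dL": 1.0, "dR": 1.0},
                                 C_Z_A=4.0*math.pi)
print({q: (round(v, 3), abs(v) < 4.0*math.pi) for q, v in c.items()})
```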
With the $M_{inv}$ cutoff ranging from 800 GeV to 2 TeV, requiring that the dark matter contribute $\lsim 1/6$ [@CMS] of the SM background, gives an estimated bound $\Lambda \gsim 880 \to 2200$ GeV, for $c_{uL,A} =c_{uR,A} =c_{dL,A} =c_{dR,A}=1$. This compares favourably to the CMS bound of $\Lambda > 950$ GeV, for $C_{uL,A} =C_{uR,A} =C_{dL,A} =C_{dR,A}=1$. Since the analytical estimate is reasonable, most of the dark matter signal probably comes from $M_{inv}^2 \gg m_Z^2$, and the approximation (\[annuler\]) is consistent. However, the analytic bound is a bit too restrictive (perhaps in part because it includes any number of jets), so in the remainder of the paper, the CMS limit of 950 GeV will be used. There is also an upper limit on the $C$s which a collider can exclude, eqn (\[plage\]), from requiring that the contact interaction approximation be self-consistent: $C/\Lambda^2 <4\pi/M_{inv}^2$. Since the previous analytic estimate reproduces the CMS bound for $M_{inv}^2 \sim$ TeV$^2$, the consistency condition is taken as $C< 4\pi$. For the axial $\chi$ current with $\Lambda = $ TeV, the CMS limit and eqn (\[plage\]) give 3 independent bounds on $\{ c_{qL,A}, c_{uR,A}, c_{dR,A}\}$: $$\begin{aligned} 4\pi ~&\gsim&~ \big|\, \ldots \,\big| ~\gsim~ \ldots\nonumber\\ 4\pi ~&\gsim&~ \big|\,C_{uR,A} - \ldots\, C_{Z,A}\,\big| ~\gsim~ \ldots \nonumber\\ 4\pi ~&\gsim&~ \big|\,C_{dR,A} + \ldots\, C_{Z,A}\,\big| ~\gsim~ \ldots \label{lim3}\end{aligned}$$ where the first line is the summed contributions of $u_L$ and $d_L$, the fractions are approximations $gg_X^Q s_{\rm w}/2c_{\rm w}$, and the $d$ to $u$ pdf ratio is taken 1/2. Similar limits apply for the vector operator of eqn (\[qDMd6\]). It can be seen already from eqn (\[lim3\]) that including the interactions with the $Z$ will make little difference to the LHC limits on the $C_{QX,A}$: for the doublet quarks, the $Z$ contribution cannot cancel simultaneously against the $u_L$ and $d_L$ contributions, and the $Z$ contribution is irrelevant for the singlet quarks, because also $C_{Z,A}$ must be $\lsim 4\pi$. The parameters ruled out by the first and second eqns of (\[lim3\]) are represented as the central regions in figure \[fig2\].

From the TeV to the MeV {#sec:DD}
=======================

In direct detection, the dark matter scatters non-relativistically off nuclei. Therefore, to translate the EFT from the TeV to the MeV, the $Z$ must be removed, the effects of QCD loops in running the operator coefficients should be included, and the quarks must be embedded in the nucleons. To remove the $Z$, the Green's function for two quarks and two $\chi$s in the effective theory with a $Z$ should be matched to the same Green's function in the theory without a $Z$. Since the matching is performed at zero momentum for the fermion legs, the contact interactions of eqn (\[inteff\]) do not contribute, and the coefficients of the four-fermion operators of eqn (\[qDMd6\]) remain the same after the $Z$ is “matched out”. The $Z$ vertices were taken $\propto p_Z^2$ to obtain this. The light quark currents $\overline{q} \g^\mu P_X q$ are conserved in QCD, so do not run. Also, since $\chi$ is a SM gauge singlet and the only dark sector particle below the TeV, I suppose that the operators with vector and axial vector $\chi$ currents do not mix below the TeV. See [*e.g.*]{} [@UliSDSI] about loop effects mixing various operators involving dark matter and the SM.
Finally, the quark currents can be embedded in nucleons $N = \{p,n\}$ using identities [@BBPS] such as $$\langle N|\, \overline{Q_i}\g^\mu Q_i \,|N\rangle \;=\; c_{V,i}^{N}\, \langle N|\, \overline{\psi}_N\g^\mu \psi_N \,|N\rangle$$ where $ c_{V,u}^{p}= c_{V,d}^{n} = 2$, and $c_{V,d}^{p} = c_{V,u}^{n} = 1$, because this current counts valence quarks in the nucleon. The axial quark current is proportional to the nucleon spin: $$\langle N|\, \overline{Q_i}\g^\mu \g_5 Q_i \,|N\rangle \;=\; 2 s^\mu\, \Delta Q_i^N \;=\; \Delta Q_i^N\, \langle N|\, \overline{\psi}_N\g^\mu\g_5 \psi_N \,|N\rangle$$ where the proportionality constants are measured [@DqN] as $ \Delta u^p = \Delta d^n = 0.84$, $\Delta d^p = \Delta u^n = -0.43$. In the zero-momentum-transfer limit of non-relativistic scattering, the dark matter can have spin-dependent interactions via the axial current, or spin-independent interactions via the first component of the vector current. The spin-independent scattering amplitude for $\chi$ on a nucleon is a coherent sum of vector and scalar interactions, for quarks of both chiralities and all flavours. The experimental limit on the cross-section per nucleon is $\sigma_{SI}\lsim 10^{-44}$ cm$^2$ for $m_\chi \sim 100$ GeV [@Xenon]. For the proton ($C_{uR}\leftrightarrow C_{dR}$ for the neutron), with $ C_{qR,V} =\frac{1}{3}( C_{dR,V } + 2C_{uR,V})$, this gives [@BBPS] $$\sigma_{SI} \simeq m_p^2 \left[ \frac{ C_{qL,V} + \frac{1}{3}\left( C_{dR,V } + 2C_{uR,V}\right) + ... }{2\Lambda^2}\right]^2 \;\lsim\; 3\times 10^{-17}~ {\rm GeV}^{-2}$$ where the $+...$ contains scalar contact interactions neglected in this note. For $\Lambda =$ TeV, this gives $$\left[C_{qL,V} + \frac{1}{3}\left( C_{dR,V } + 2C_{uR,V}\right) + ...\right] \;\lsim\; 10^{-2} ~~~~ {\rm (SI)}. \label{SIbd}$$ The spin dependent cross-section per proton is [@BBPS] $$\sigma_{SD} \!\simeq \! m_p^2 \left[ \frac{.42 (C_{qL,A} \!+ \!C_{uR,A} \!- \ \! 2C_{dR,A} ) }{2\Lambda^2} \right] ^2 \! \! \! \lsim \frac{ 10^{-10 }}{4} {\rm GeV}^{-2}$$ where the experimental bound is for $m_\chi \sim 100$ GeV. For $\Lambda = $ TeV, this gives $$\left| \left(C_{qL,A}+C_{uR,A} - 2C_{dR,A} \right)\right| \;\lsim\; 20 ~~~~ {\rm (SD)}. \label{SD}$$ Comparing to eqn (\[lim3\]) shows that the contact interactions explored by SD direct detection experiments are mediated by physics which is not a contact interaction at the LHC, so are not excluded by the limits given in eqn (\[lim3\]). The limit (\[SD\]) is represented in figure \[fig2\] as the vertical exclusions.

Discussion
==========

From a bottom-up EFT point of view, it is important to include all operators which can interfere when computing experimental constraints. This is to allow for cancellations. Including several operators which do not interfere improves the bound, but is not otherwise motivated. In this note, operators with vector and axial vector currents for the dark matter fermion $\chi$ were presented as an example, which illustrates two points. First, the EFT at the LHC contains more particles than the light partons and dark matter that are relevant in direct detection. At the LHC, the Higgs and $Z$ should also be included. Matching the high and low energy EFTs, as done in this note, suggests that the LHC constrains several combinations of operator coefficients that are different from direct detection, as can be seen by comparing eqns (\[lim3\]) and (\[SD\]). However, the contribution of the $Z$ is relatively unimportant, because its couplings to singlet quarks are small, and it interferes with opposite signs with $u_L$ and $d_L$. The LHC limits on the dark matter couplings to quarks and the $Z$ are represented as the central exclusion areas of figure \[fig2\]: the coupling to quarks is more constrained than the coupling to the $Z$, and arbitrary axial current dark matter interactions to quarks cannot be allowed by tuning the dark matter coupling to the $Z$.
This is because there is a self-consistency upper bound on contact interaction coefficients at colliders $C/\Lambda^2 < 4\pi/\hat{s}$ (see eqn (\[plage\])). It is important to notice that this upper bound also implies that the LHC limits do not exclude the parameter space probed by spin dependent direct detect experiments. Second, an interesting difference between direct detection and collider experiments, is that quarks of different chirality and flavour interfere in direct detection, whereas the LHC can constrain the interactions of dark matter with each flavour and chirality of quark individually. This is related to the relative unimportance of the $Z$: it cannot cancel separately against the contributions of $u_L, d_L, u_R$ and $d_R$. In summary, the rules of bottom-up Effective Field Theory say that one should include all operators up to some specified dimension. So to parametrise at dimension six the axial vector interactions of dark matter with quarks, one should include contact interactions of dark matter with the quarks and with the $Z$. Including interactions with the $Z$ that are $\propto p_Z^2$, as done here, suggests that these are not crucial. Acknowledgements {#acknowledgements .unnumbered} ---------------- I thank J.P. Chou, S Malik, and S. Perries for useful comments. [222222]{} G. Bertone, D. Hooper and J. Silk, “Particle dark matter: Evidence, candidates and constraints,” Phys. Rept.  [**405**]{} (2005) 279 \[hep-ph/0404175\]. [[Jungman]{}, G. and [Kamionkowski]{}, M. and [Griest]{}, K.]{}, “[Supersymmetric dark matter]{}”, Phys. Rept.  [**267**]{} (1996) 195 \[hep-ph/9506380\]. Z. Ahmed [*et al.*]{} \[CDMS and EDELWEISS Collaborations\], “Combined Limits on WIMPs from the CDMS and EDELWEISS Experiments,” Phys. Rev. D [**84**]{} (2011) 011102 \[arXiv:1105.3377 \[astro-ph.CO\]\]. D. S. Akerib [*et al.*]{} \[LUX Collaboration\], “First results from the LUX dark matter experiment at the Sanford Underground Research Facility,” arXiv:1310.8214 \[astro-ph.CO\]. E. Aprile [*et al.*]{} \[XENON100 Collaboration\], “Dark Matter Results from 225 Live Days of XENON100 Data,” Phys. Rev. Lett.  [**109**]{} (2012) 181301 \[arXiv:1207.5988 \[astro-ph.CO\]\]. M. Felizardo, T. A. Girard, T. Morlat, A. C. Fernandes, A. R. Ramos, J. G. Marques, A. Kling and J. Puibasset [*et al.*]{}, “Final Analysis and Results of the Phase II SIMPLE Dark Matter Search,” Phys. Rev. Lett.  [**108**]{} (2012) 201302 \[arXiv:1106.3014 \[astro-ph.CO\]\]. E. Vazquez-Jauregui \[COUPP Collaboration\], “COUPP: Bubble chambers for Dark Matter detection,” in Proceedings of the 48th Rencontres de Moriond on Very High Energy Phenomena in the Universe. V. Zacek, S. Archambault, E. Behnke, J. Behnke, M. Das, A. Davour, F. Debris and N. Dhungana [*et al.*]{}, “Dark matter search with PICASSO,” J. Phys. Conf. Ser.  [**375**]{} (2012) 012023. CMS Collaboration, “Search for new physics in monojet events in pp collisions at sqrt(s)= 8 TeV”, CMS-PAS-EXO-12-048. ATLAS Collaboration, “Search for New Phenomena in Monojet plus Missing Transverse Momentum Final States using 10fb$^{-1}$ of pp Collisions at $\sqrt{s}=8$ TeV with the ATLAS detector at the LHC”, ATLAS-CONF-2012-147. S. Profumo, W. Shepherd and T. Tait, “The Pitfalls of Dark Crossings,” arXiv:1307.6277 \[hep-ph\]. see [*e.g.*]{} T. G. Rizzo, “Dark Matter Complementarity in the pMSSM and the ILC,” arXiv:1402.5870 \[hep-ph\], and references therein. Y. Bai, P. J. Fox and R. 
Harnik, “The Tevatron at the Frontier of Dark Matter Direct Detection,” JHEP [**1012**]{} (2010) 048 \[arXiv:1005.3797 \[hep-ph\]\]. J. Goodman, M. Ibe, A. Rajaraman, W. Shepherd, T. M. P. Tait and H. B. Yu, “Constraints on Light Majorana dark Matter from Colliders,” Phys. Lett. B [**695**]{} (2011) 185 \[arXiv:1005.1286 \[hep-ph\]\]. P. J. Fox, R. Harnik, J. Kopp and Y. Tsai, “Missing Energy Signatures of Dark Matter at the LHC,” Phys. Rev. D [**85**]{} (2012) 056011 \[arXiv:1109.4398 \[hep-ph\]\]. J. Goodman, M. Ibe, A. Rajaraman, W. Shepherd, T. M. P. Tait and H. -B. Yu, “Constraints on Dark Matter from Colliders,” Phys. Rev. D [**82**]{} (2010) 116010 \[arXiv:1008.1783 \[hep-ph\]\]. P. J. Fox, R. Harnik, R. Primulando and C. T. Yu, “Taking a Razor to Dark Matter Parameter Space at the LHC,” Phys. Rev. D [**86**]{} (2012) 015010 \[arXiv:1203.1662 \[hep-ph\]\]. N. Zhou, D. Berge and D. Whiteson, “Mono-everything: combined limits on dark matter production at colliders from multiple final states,” Phys. Rev. D [**87**]{} (2013) 9, 095013 \[arXiv:1302.3619 \[hep-ex\]\]. L. M. Carpenter, A. Nelson, C. Shimmin, T. M. P. Tait and D. Whiteson, “Collider searches for dark matter in events with a Z boson and missing energy,” Phys. Rev. D [**87**]{} (2013) 7, 074005 \[arXiv:1212.3352\]. I. M. Shoemaker and L. Vecchi, “Unitarity and Monojet Bounds on Models for DAMA, CoGeNT, and CRESST-II,” Phys. Rev. D [**86**]{}, 015023 (2012) \[arXiv:1112.5457 \[hep-ph\]\]. A. A. Petrov and W. Shepherd, “Searching for dark matter at LHC with Mono-Higgs production,” Phys. Lett. B [**730**]{} (2014) 178 \[arXiv:1311.1511 \[hep-ph\]\]. M. T. Frandsen, F. Kahlhoefer, A. Preston, S. Sarkar and K. Schmidt-Hoberg, “LHC and Tevatron Bounds on the Dark Matter Direct Detection Cross-Section for Vector Mediators,” JHEP [**1207**]{} (2012) 123 \[arXiv:1204.3839 \[hep-ph\]\]. M. Papucci, A. Vichi and K. M. Zurek, “Monojet versus rest of the world I: t-channel Models,” arXiv:1402.2285 \[hep-ph\]. G. Busoni, A. De Simone, J. Gramling, E. Morgante and A. Riotto, “On the Validity of the Effective Field Theory for Dark Matter Searches at the LHC, Part II: Complete Analysis for the s-channel,” arXiv:1402.1275 \[hep-ph\]. S. Chang, R. Edezhath, J. Hutchinson and M. Luty, “Effective WIMPs,” Phys. Rev. D [**89**]{} (2014) 015011 \[arXiv:1307.8120 \[hep-ph\]\]. A. DiFranzo, K. I. Nagao, A. Rajaraman and T. M. P. Tait, “Simplified Models for Dark Matter Interacting with Quarks,” JHEP [**1311**]{} (2013) 014 \[arXiv:1308.2679 \[hep-ph\]\]. A. De Simone, G. F. Giudice and A. Strumia, “Benchmarks for Dark Matter Searches at the LHC,” arXiv:1402.6287 \[hep-ph\]. H. Georgi, “Effective field theory,” Ann. Rev. Nucl. Part. Sci.  [**43**]{} (1993) 209. H. Georgi, “On-shell effective field theory,” Nucl. Phys. B [**361**]{} (1991) 339. M. B. Krauss, S. Morisi, W. Porod and W. Winter, “Higher Dimensional Effective Operators for Direct Dark Matter Detection,” arXiv:1312.0009 \[hep-ph\]. M. Endo and Y. Yamamoto, “Unitarity Bounds on Dark Matter Effective Interactions at LHC,” arXiv:1403.6610 \[hep-ph\]. J. Beringer [*et al.*]{} \[Particle Data Group Collaboration\], “Review of Particle Physics (RPP),” Phys. Rev. D [**86**]{} (2012) 010001. J. F. Kamenik and C. Smith, “FCNC portals to the dark sector,” JHEP [**1203**]{} (2012) 090 \[arXiv:1111.6402 \[hep-ph\]\]. K. Sigurdson, M. Doran, A. Kurylov, R. R. Caldwell and M. Kamionkowski, “Dark-matter electric and magnetic dipole moments,” Phys. Rev. D [**70**]{} (2004) 083501 \[Erratum-ibid. 
D [**73**]{} (2006) 089903\] \[astro-ph/0406355\]. V. Barger, W. -Y. Keung, D. Marfatia and P. -Y. Tseng, “Dipole Moment Dark Matter at the LHC,” Phys. Lett. B [**717**]{} (2012) 219 \[arXiv:1206.0640 \[hep-ph\]\]. H. Simma, “Equations of motion for effective Lagrangians and penguins in rare B decays,” Z. Phys. C [**61**]{} (1994) 67 \[hep-ph/9307274\]. B. Grzadkowski, M. Iskrzynski, M. Misiak and J. Rosiek, “Dimension-Six Terms in the Standard Model Lagrangian,” JHEP [**1010**]{} (2010) 085 \[arXiv:1008.4884 \[hep-ph\]\]. R. C. Cotta, J. L. Hewett, M. P. Le and T. G. Rizzo, “Bounds on Dark Matter Interactions with Electroweak Gauge Bosons,” Phys. Rev. D [**88**]{} (2013) 116009 \[arXiv:1210.0525 \[hep-ph\]\]. S. Banerjee, S. Mukhopadhyay and B. Mukhopadhyaya, “New Higgs interactions and recent data from the LHC and the Tevatron,” JHEP [**1210**]{} (2012) 062 \[arXiv:1207.3588 \[hep-ph\]\]. P. P. Giardino, K. Kannike, M. Raidal and A. Strumia, “Reconstructing Higgs boson properties from the LHC and Tevatron data,” JHEP [**1206**]{} (2012) 117 \[arXiv:1203.4254 \[hep-ph\]\]. U. Haisch, F. Kahlhoefer and E. Re, “QCD effects in mono-jet searches for dark matter,” arXiv:1310.4491 \[hep-ph\]. U. Haisch and F. Kahlhoefer, “On the importance of loop-induced spin-independent interactions for dark matter direct detection,” JCAP [**1304**]{} (2013) 050 \[arXiv:1302.4454 \[hep-ph\]\]. G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, “Dark matter direct detection rate in a generic model with micrOMEGAs 2.2,” Comput. Phys. Commun.  [**180**]{} (2009) 747 \[arXiv:0803.2360 \[hep-ph\]\]. A. Airapetian [*et al.*]{} \[HERMES Collaboration\], “Precise determination of the spin structure function g(1) of the proton, deuteron and neutron,” Phys. Rev. D [**75**]{} (2007) 012007 \[hep-ex/0609039\]. [^1]: Higher dimensional operators can contain more fields and be suppressed by phase space, or contain Higgs fields and be suppressed by $\langle H \rangle^2/\Lambda^2$, or contain derivatives and be dangerous. [^2]: Contact interactions between dark matter and the $Z$ have been proposed in [@deSGS] as a benchmark model, assuming other contact interactions to be absent. [^3]: For $m_\chi <m_Z/2$, the invisible width of the $Z$ (at “2$\sigma$”, so[@PDB] $\Gamma (Z\to \chi \overline{\chi})\leq 3$ MeV) imposes that $ | C_{Z,B}| < 8.9 (\Lambda/{\rm TeV})^2$, for $B= V,A$.
---
abstract: 'We present a catalogue of H–band spectra for 85 stars of approximately solar abundance observed at a resolving power of 3000 with the KPNO Mayall 4m FTS. The atlas covers spectral types O7–M5 and luminosity classes I-V as defined on the MK system. We identify both atomic and molecular indices and line–ratios which are temperature and luminosity sensitive allowing spectral classification to be carried out in the H–band. The line ratios permit spectral classification in the presence of continuum excess emission, which is commonly found in pre–main sequence and evolved stars. We demonstrate that with spectra of $R = 1000$ obtained at $SNR > 50$ it is possible to derive spectral types within $\pm 2$ subclasses for late–type stars. These data are available electronically through the Astronomical Data Center in addition to being served on the World–Wide–Web.'
author:
- 'Michael R. Meyer'
- Suzan Edwards
- 'Kenneth H. Hinkle'
- 'Stephen E. Strom'
title: |
  Near–Infrared Classification Spectroscopy:\
  H–band Spectra of Fundamental MK Standards
---

Introduction
============

With the recent development of large–format infrared array detectors, high quality photometric surveys are routinely conducted at wavelengths between 1–2.5 $\mu$m. Soon the completion of the 2 Micron All–Sky Survey (2MASS; Skrutskie 1997) and DENIS (Epchtein et al. 1997) will provide comprehensive catalogues of near–infrared sources with detection limits sensitive to a wide variety of stellar and non–stellar objects. Infrared spectra will be required for appropriate identification of many of these sources, and for further study of their astrophysical properties. The pioneering study of Johnson and Mendez (1970) was the first to explore the spectra of a large sample of normal stars in the near–infrared. However, many years passed before improvements in instrumentation made possible similar observations of large numbers of targets of astrophysical interest. The majority of the work done in near–infrared spectroscopy to date has been focused on the K–band, in large part because intrinsically cool or heavily obscured objects are typically brighter at K–band than in the J– or H–bands. Kleinmann and Hall (1986; KH86) provided the first comprehensive medium resolution atlas ($R=3000$) of stellar spectra in the K-band, covering all luminosity classes, but restricted to spectral types between F8-M7. More recently, Wallace and Hinkle (1997; WH97) have extended the KH86 K–band atlas, using the same FTS spectrograph on the KPNO 4m with $R=3000$, but including stellar spectra spanning spectral types O-M and luminosity classes I-V. They also summarize the considerable body of work directed toward K–band spectroscopy in the last decade. While in many situations, the K–band will be the wavelength selection of choice for spectroscopic studies of highly obscured or very cool objects, the presence of circumstellar dust ($T_{vap} < 2000 K$; Pollack et al. 1994) often results in significant excess continuum emission longward of $2~\mu$m. This continuum excess is commonly found in two important classes of objects: young stars with circumstellar disks (e.g. Meyer, Calvet, & Hillenbrand, 1997) and evolved stars with extensive envelopes from mass-loss (e.g. Le Bertre 1997). Near–infrared excess due to warm dust can also complicate spectroscopic studies of composite stellar systems aimed at discerning the stellar populations of other galaxies (e.g. Schinnerer et al. 1997).
Continuum excess longward of $2~\mu$m will weaken or even render invisible the photospheric features in the K–band, while the photosphere will dominate at shorter wavelengths. In such a situation, near infrared spectra shortward of 2 $\mu$m will be required to see the stellar photosphere of objects too obscured to be detected optically. To date, there has been relatively little work in the H–band (1.55–1.75 $\mu$m). Recent publications include: i) observations of 40 G, K, and M stars of luminosity class I and III at $R=1500$ by Origlia, Moorwood, and Oliva (1993); ii) the library of 56 spectra O–M of luminosity class I, II, and V at $R=500$ (Lancon & Rocca–Volmerange, 1992); iii) a library of 37 stars of luminosity classes I, III, V at $R=1500-2000$ (Dallier, Boisson, & Joly 1996) over a limited portion of the H–band; and iv) a study of 9 OB stars at $R=570$ (Blum et al. 1997). Here we present an H–band spectral atlas for 85 stars of nearly solar abundance with spectral types on the MK classification system ranging from O7-M5 and luminosity classes I-V. These $R=3000$ spectra were collected with the same FTS at the KPNO 4m as the K–band atlases of KH86 and WH97. In Section 2, we describe the sample selection and in Section 3 we describe the observations and calibration of the data. In Section 4 we discuss the dependence of the spectral features on temperature and luminosity and suggest a two–dimensional classification appropriate for late-type stars. In Section 5 we discuss near–IR spectral classification with regard to wavelength range/spectral resolution, and conclude with a summary of our results. Defining the Sample =================== In our sample selection, we chose optically visible stars which had previously been identified on the temperature and luminosity scales of the revised MK system (Keenan 1987). [^1] The majority of the stars were drawn from the following fundamental lists: i) Morgan, Abt, and Tapscott (1978) for 29 stars O6–G0; ii) Keenan and McNeil (1989) for 45 stars G0–K5; and iii) Kirkpatrick, Henry, and McCarthy (1991) for 5 late–type dwarfs K5–M3. We supplemented these primary standards with an additional 5 secondary standards from the compilation of Jaschek, Conde, and de Sierra (1964) and one late–type dwarf classified by Henry, Kirkpatrick, and Simons (1994). In order to cover as complete a range of stellar temperature and luminosity as possible, we defined a two-dimensional grid with 26 bins of spectral type and three bins of luminosity class. Our temperature grid is binned $\times 2$ more coarsely in spectral subclass than the revised MK system, so that we typically sample only every other MK subclass. The three luminosity bins are divided into supergiants (I–II), giants (III), and subgiants/dwarf stars (IV–V). A full sampling of this grid would have resulted in 78 distinct temperature/luminosity pairs. Our atlas includes a total of 85 sources with 53 of the bins filled. Grid coverage was finer among the later spectral types: for stars G0 and later, we filled 26 of the 27 bins (9 spectral types $\times$ 3 luminosity classes). In contrast, for stars earlier than G0, only 27 of the 51 bins were covered (17 spectral types $\times$ 3 luminosity classes). The 85 individual stars in our H–band survey are listed in Table \[sample\] along with relevant stellar properties taken from the Bright Star Catalogue (Hoffleit & Jaschek 1982).
Additional restrictions on the sample selection included: i) $v \sin i$ $<$ 250 km s$^{-1}$, with the exception of HR3982 (B7 V); ii) near-solar metallicity (avoiding those MK standards which exhibit spectral peculiarities due to enhanced or deficient metal abundance); and iii) no visual companions within the beam (separations $1-5$”). Our program was begun in advance of the K–band FTS atlas of WH97, and their sources are drawn in large measure from our sample. We note in Table \[sample\] the stars for which K–band spectra can be found in the WH97 digital atlas. Table \[temp\] and Figure \[fig1\] provide additional insight into the temperature and luminosity coverage of our sample. In Table \[temp\], we list each of the spectral type and luminosity bins we have “filled”. For each bin in which there is at least one spectrum, we give the corresponding effective temperature. For most stars, we adopted the temperature scale of Tokunaga (1996), except for giants earlier than G0 where we adopted Schmidt–Kaler (1982) [^2]. Figure \[fig1\] provides a schematic illustrating the temperature and luminosity coverage for the 85 stars in our sample. In this illustration we have applied the same main sequence bolometric corrections to both dwarf (27) and subgiant (11) stars; as such they are indistinguishable in this diagram. Observations and Data Calibration ================================= Observations of our 85 sample stars were obtained at the Mayall 4m telescope at Kitt Peak National Observatory during four separate observing runs from 1993–1994 (Table \[ftslog\]). We used the Fourier Transform Spectrometer (FTS) dual–output interferometer (Hall et al. 1979). The FTS was ideal for this program for several reasons. First, the wavelength coverage of the FTS is limited only by the bandpass of the blocking filters, independent of the spectral resolution. This gave us complete coverage in the J– and H–bands which would have been difficult to obtain with available grating spectrographs. For example, our H–band spectral range is a factor of two greater than the spectra of Dallier et al. (1996). Secondly, the spectral resolution is fixed by the path difference scanned with the interferometer, so we were able to choose the highest resolution possible and achieve S/N in excess of 75 for the majority of our sources. [^3] Finally, because of the novel background subtraction algorithm of the 4m FTS described below, we were able to observe the brightest stars in our sample ($H < 3.0^m$) during good daytime conditions (typically mornings). Combining daytime observations with targeted nighttime observations of key faint sources, the FTS provided a uniform set of high quality spectra for a large sample of spectral standards. Our observing program included simultaneous spectral coverage in both the J–band and the H–band. However, the J–band data presented difficulties which made it expeditious to focus our initial effort on the H–band. The primary problems with the J–band spectra were: i) the inherent difficulty in data reduction due to rapid temporal variations in telluric water vapor absorption; and ii) the relative paucity of strong features which would allow spectral classification over the full range of stellar temperature and luminosity. We defer discussion of the J–band spectra to a future contribution. Spectra were collected simultaneously in the J– and H–bands with the use of a dichroic beamsplitter to separate the wavelengths longward and shortward of 1.5 $\mu$m.
Each star was centered within an input aperture of 3.8 arcsec while sky background was measured through an identical aperture 50.0 arcsec away. The interferogram was scanned at a rate of 1 kHz as the path difference was varied continuously from 0.0–0.75 cm, providing an unapodized resolution of 0.8 cm$^{-1}$. Data were obtained as separate scan pairs, with the path difference varied first in one direction and then the other. A forward–backward scan pair was treated as an “observation” and observations were repeated in beam–switching mode (A–B–B–A). Because the sky background from each aperture produces an interferogram shifted in phase by 180$^{\circ}$ at each set of detectors, source spectra are background subtracted in Fourier space as they are collected. This permits observations of bright stars to be obtained during good daytime conditions. These beam–switched observations were repeated and scans were averaged until an adequate signal–to–noise ratio (SNR) was achieved. The interferograms were transformed at Kitt Peak National Observatory, yielding spectra in units of relative flux versus wavenumber ($\sigma$ in cm$^{-1}$). The transformed spectra were converted into FITS format images and all data reduction was performed using the IRAF software package [^4]. The spectra were then convolved with a Gaussian filter of half–width $\delta = 1.2$ cm$^{-1}$. This procedure, commonly referred to as apodization, eliminates “ringing” observed in the FTS spectra due to the finite scan path of the interferometer. The resulting apodized resolution (Rayleigh criterion) was $\delta \sigma = 2.1$ cm$^{-1}$, giving a mean resolving power of $R=3000$ in the H–band. At this stage the J– and H–band spectra were separated for ease of reduction. The slope of the continuum was normalized to 1.0 using a four–segment spline fitting function. Care was taken to keep the residuals from this fit to within 1%. Next we corrected the spectra for telluric absorption features, whose strength varies with zenith angle. We attempted to construct an opacity map for the earth’s atmosphere by dividing normalized spectra of the A0 standard stars obtained at different airmasses. Because of the simplicity of the A0 star spectra, showing primarily hydrogen lines in absorption, it was relatively easy to monitor the degree to which this procedure was successful. In dividing two normalized spectra of the same star taken at different airmass, all stellar photospheric absorption features should directly cancel, leaving only those absorption features due to the earth’s atmosphere. If we assume that the opacity of the telluric absorption is directly proportional to airmass, we derive: $$\tau(\sigma,X = 1.0) = \frac{1}{(X_{high}-X_{low})} \, \ln\left[ \frac{I(\sigma,X_{low})}{I(\sigma,X_{high})}\right]$$ where $\tau$ is the atmospheric opacity, $X_{low}$ is the low airmass value, and $X_{high}$ is the high airmass value. A typical opacity map derived in this way for the H–band is shown in Figure \[fig2\]. Several of the features in this map identified with known constituents of the earth’s atmosphere, such as water vapor, methane, and carbon dioxide, are denoted in Figure \[fig2\]. Again, if the atmospheric opacity varies linearly with airmass, we can simply scale the opacity for each star so that $\tau(\sigma,X) = X \times \tau(\sigma,X=1.0)$.
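To make the airmass-scaling step concrete, the following is a minimal sketch of how the unit-airmass opacity can be derived from a pair of normalized spectra of the same A0 standard and then removed from a target spectrum. It is an illustration only: the actual reduction was carried out in IRAF, and the NumPy-based function and array names here are assumptions, not part of the original pipeline.

```python
import numpy as np

def unit_airmass_opacity(I_low, I_high, X_low, X_high):
    """Derive tau(sigma, X=1) from two normalized spectra of the same A0
    standard taken at airmasses X_low < X_high.  Stellar photospheric
    features cancel in the ratio, leaving only the telluric absorption."""
    return np.log(I_low / I_high) / (X_high - X_low)

def correct_to_zero_airmass(I_obs, X, tau_unit):
    """Scale the unit-airmass opacity to the airmass of the observation and
    remove it: I(sigma, X=0) = I(sigma, X) * exp(X * tau(sigma, X=1))."""
    return I_obs * np.exp(X * tau_unit)
```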
Using this technique we corrected the spectra to zero–airmass: $$I(\sigma,X=0.0) = I(\sigma,X) \times e^{\tau(\sigma,X)}$$ We used the highest signal–to–noise A0 standard star spectra ($SNR > 100$) with the largest $\Delta X$ to define the opacity. We found some residual telluric absorptions, possibly due to water vapor, which do not vary strictly with airmass. Such features severely complicate the reduction of the J–band spectra. Finally, the forward and backward scans of each star were averaged and residuals of the differenced spectra were calculated in order to evaluate the average SNR. The observations were obtained with the goal of achieving SNR of 75 or greater. In most cases this was achieved, with the highest quality spectra reaching values of several hundreds. The average SNR for each stellar spectrum is included in Tables \[ftshewsup\]–\[ftshewdw\] below. Line Identification and Dependence on Temperature and Luminosity ================================================================ Representative H–band spectra are shown in Figures \[fig3\]– \[fig6\] for luminosity classes I–II, III, IV, and V, with prominent atomic and molecular features identified. Line identifications were made for the strongest lines from comparison with the solar photospheric and umbral near infrared atlases (Livingston & Wallace 1991 (LW91); Wallace & Livingston 1992 (WL92)). However, at our moderate spectral resolution many features are blended, and we found the model atmosphere calculations of Oliva, Moorwood, and Origlia (1993) to be useful in identifying the dominant contributors to a blend in late–type stars. Visual inspection of the features in Figures \[fig3\]– \[fig6\] reveals that $R=3000$ H–band spectra contain sufficient temperature and luminosity sensitive features to enable spectral classes to be distinguished. Beginning with the early type stars, the dominant spectral features are HeI 5882 cm$^{-1}$ (1.700 $\mu$m) and the Brackett series of hydrogen from lines 4-10 (1.736 $\mu$m) to 4-16 (1.556 $\mu$m). The He I line exceeds the strength of the Brackett lines in the very earliest stars (O6 to B0), with a maximum equivalent width of $\sim 0.83$ cm$^{-1}$ (HR1903; B0 Ia), and recedes to undetectable levels ($\sim 0.10$ cm$^{-1}$) by spectral type B8. From the late B to early F stars, the Brackett series dominates the spectrum, after which lines of neutral atomic metals begin to take prominence. The strongest metallic lines include MgI, SiI, CaI, AlI, and FeI, which increase in strength toward the K stars. Finally, molecular features of OH and CO dominate the spectra of the latest–type stars from K5–M5. The most striking luminosity-sensitive feature is the second–overtone CO bandhead $[v, v^{'} = 6, 3]$ at 6177 cm$^{-1}$ (1.619 $\mu$m), which is found in the spectra of the K and M stars. This feature is significantly stronger in stars of lower surface gravity at equivalent spectral type. To further enable spectral classification in the H–band, we have identified a set of 9 features which are prominent in stars of spectral type A-M. These include a relatively isolated Brackett line (H4-11), 5 neutral metals, and 3 molecular bands. In Table \[ftshband\], we define 9 narrow band indices with bandpasses ranging from 10 to 50 cm$^{-1}$, which include each of these features. The variable widths of the bandpasses were selected to minimize line blending, contamination from residual telluric absorption, and sensitivity to radial velocity shifts.
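In practice, each narrow-band index of Table \[ftshband\] reduces to an integral of the line depth over a fixed wavenumber interval of the continuum-normalized spectrum. The sketch below shows one way such an equivalent width could be evaluated; the trapezoidal integration and all names are illustrative assumptions rather than a description of the reduction actually used.

```python
import numpy as np

def index_equivalent_width(sigma, flux_norm, center, width):
    """Equivalent width (cm^-1) of a narrow-band index: integrate
    (1 - F_norm) over a bandpass of full width `width` centred on
    `center`, both in cm^-1, for a continuum-normalized spectrum."""
    in_band = (sigma >= center - 0.5 * width) & (sigma <= center + 0.5 * width)
    return np.trapz(1.0 - flux_norm[in_band], sigma[in_band])

# Example: the Mg6345 index (centre 6345 cm^-1, 20 cm^-1 bandpass).
# ew_mg6345 = index_equivalent_width(sigma, flux_norm, 6345.0, 20.0)
```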
Table \[ftshband\] also identifies the wavenumber of the dominant contributor and the lower state energy level, the central wavenumber and passband of the index, and additional species which may contribute to the index strength. The equivalent widths of these 9 indices were evaluated from the normalized spectra of our 85 survey stars, and are tabulated in Tables \[ftshewsup\] to \[ftshewdw\] in units of cm$^{-1}$ [^5]. Uncertainties in these equivalent widths depend on the SNR of the spectrum in question and the bandpass/strength of the feature. Errors range from $\sigma_{EW} = 0.02$ to $0.1$ cm$^{-1}$, exceeding this upper limit in very few cases. Multiple observations of several sources are listed for comparison. The temperature and luminosity dependence for four representative indices is illustrated in Figure \[fig7\]. The 4–11 Brackett line (HI5950) behaves as expected, with a rapid rise to a maximum (at a peak equivalent width of $\sim$ 3 cm$^{-1}$) as $T_{eff}$ approaches 10000 K, and a slower decline toward higher temperatures. The behavior of the index is similar in both the dwarfs and the giants, although the luminosity class I/II sources show a larger scatter, presumably due to intrinsic variability (e.g. Kaufer et al. 1997). The general behavior of the neutral atomic features is illustrated by the Mg6345 index. In luminosity classes IV–V, this index reaches a maximum strength between 5000-6000 K, with a peak equivalent width of $\sim$ 2.5 cm$^{-1}$. In contrast, the maximum strength of this index in the lower surface gravity objects (also $\sim$ 2.5 cm$^{-1}$) is found in the coolest stars in our sample, monotonically decreasing toward higher temperatures, as expected given the behavior of ionization state as a function of surface gravity. The two SiI indices exhibit similar behavior, but the AlI index (not shown) turns over at much lower temperatures in our dwarf stars because of its lower ionization potential. Note that we have chosen not to form an index based on the strongest SiI line at 6292 cm$^{-1}$ (1.5892 $\mu$m) because it is coincident with the 4–14 Brackett line of HI at 6297 cm$^{-1}$ (1.5881 $\mu$m). The behavior of the molecular features is illustrated for both the second–overtone $^{12}$CO (6,3) and the OH ($\Delta v = 2$) indices. Both indices exhibit a similar behavior with temperature and luminosity, becoming detectable around $T_{eff} = 5000$ K, with a strength in the giants approximately twice that in the dwarfs. Similar behavior was noted by KH86 in the first–overtone CO features in the K–band. In dwarf stars the second–overtone CO index reaches a maximum before M5, and displays a turnover toward the coolest stars. This may be due in part to features of CaI and FeI which contaminate the index for intermediate spectral types (F5–K3). Ali et al. (1995) find that the relationship between T$_{eff}$ and equivalent widths of the first–overtone CO bandheads flattens out between 3500–5000 K in dwarf stars. From high resolution ($R > 45,000$) FTS spectra, Wallace & Hinkle (1996) observed that the 2 $\mu$m continuum in late–type dwarf stars is suppressed by numerous water vapor features which are blended at intermediate to low resolution. Predicting the equivalent widths of features where the apparent continuum is subject to temperature and luminosity effects is not straightforward. In contrast, both the CO and OH indices continue to rise at the coolest temperatures for stars of higher luminosity.
However, the magnitude (and temperature) of the maximum in the dwarf stars differs between the CO and OH indices, which we use in the next section to define a two-dimensional classification scheme for late–type stars. We note that the OH index begins to include a contribution from the Stark–broadened 4-11 Brackett line at 5949 cm$^{-1}$, creating a secondary maximum in the strength of this index around 10,000 K in the dwarf stars. While the temperature and luminosity dependence of the atomic features is readily understood through application of the Saha and Boltzmann equations governing the population of the ionization states and energy levels, respectively, the explanation behind the behavior of the molecular features is more subtle. Two possibilities for the factor of two enhancement in the molecular bands in the giants over the dwarfs have been explored in the literature. One attributes the luminosity dependence in the molecular features to differing microturbulence in the atmospheres of dwarfs and giants. The expectation is that larger microturbulence in the lower surface gravity giants effectively broadens the opacity of the feature over a larger frequency interval in these saturated features, thereby enhancing the equivalent width (McWilliam & Lambert 1984). Another possible contributor is the differing depth of the line formation region in the dwarfs versus the giants, which is fixed by the H$^{-}$ opacity. As described by Gray (1992), higher surface gravity results in a higher electron pressure (and thus H$^-$ column density). This brings the CO line formation region closer to the stellar surface, reducing $N_{CO}$ according to the following proportionality: $$P_e \sim g^{1/3} \sim N_{H^-} \sim 1/N_{CO}$$ In any case, this luminosity dependence of the band strength gives an excellent empirical discriminant between giants and dwarfs, which we exploit below to develop a two-dimensional spectral index. To discern surface gravity effects between the super–giants and giants or between sub–giants and dwarf stars requires more careful study. A detailed examination of line strengths as a function of surface gravity at a fixed temperature reveals the expected trends. However, this behavior does not reveal itself in the coarse analysis afforded by our narrow–band indices. While the temperature and luminosity sensitivities outlined above can provide good spectral classification in many instances, discriminants which do not rely on absolute line strength are required when a star is subject to near-infrared continuum veiling. In this case line ratio diagnostics are to be preferred, since absorption features will appear shallower in the presence of continuum excess but line ratios will be preserved as long as the excess is not strongly wavelength dependent. We have identified one diagnostic based on line ratios which can be used to evaluate both temperature and luminosity for stars from K3-M5 in the presence of continuum veiling. This two–dimensional spectral index is defined as: $$\frac{EW[OH5920]}{EW[Mg6345]} \ vs. \ \frac{EW[CO6018+CO6170]}{EW[Mg6345]}$$ where EW is the equivalent width in cm$^{-1}$ for the indices identified in Table \[ftshband\] and listed in Tables \[ftshewsup\]– \[ftshewdw\]. The temperature and luminosity dependence of this diagnostic is illustrated in Figure \[fig8\]. In this diagnostic, the ratio of the OH5920 to Mg6345 indices is temperature sensitive, with distinct temperature dependences for dwarfs and giants.
Specifically, we find $$T_{eff} (V) = 4640 \pm 250 - (2610 \pm 110) \times \frac{EW[OH5920]}{EW[Mg6345]}$$ and $$T_{eff} (III) = 5100 \pm 180 - (2730 \pm 80)\times \frac{EW[OH5920]}{EW[Mg6345]}$$ The comparison of this temperature sensitive ratio with the sum of the two $^{12}$CO indices, also normalized to Mg6345, then provides an excellent means of identifying both the temperature and luminosity class of late–type stars. Formal errors in the equivalent width suggest that spectral types can be evaluated to within $\pm 2$ subclasses ($\pm$ 300 K) from K3–M5 using spectra with $SNR > 50$ based on these indices alone. Discussion and Summary ====================== Spectral classification in the near-infrared will become increasingly important in the next decade, as the 2MASS and DENIS near-infrared sky surveys reveal unprecedented numbers of stars which are optically invisible. Because the 1–2.5 $\mu$m region is on the Rayleigh–Jeans tail of most stellar SEDs, it is not an ideal wavelength regime to pursue spectral classification. Yet, there are sufficient features in both the H– and the K–bands to allow most stellar photospheres to be classified. For heavily reddened sources, the K–band will be the wavelength of choice. However, continuum emission from circumstellar dust with temperatures less than 2000 K can heavily veil stellar photospheres at wavelengths greater than 2.0 $\mu$m. In this case, shorter wavelength spectra are required in order to identify the underlying star. The early-type stars are probably the most challenging for spectral classification in the near–infrared. We find that a rough classification from O7-B8 can be made in the H–band by the relative strengths of HeI 5882 cm$^{-1}$ and the Brackett series (see also Blum et al. 1997). Hanson, Conti, and Rieke (1996) have established a classification scheme in the K–band for O–B stars relying on lines of helium as well as higher ionization species obtained at $R > 1000$. For stars A through early K, the H–band may be superior to the K–band in providing a large number of intermediate ionization potential species with strong features such as MgI, SiI, and FeI in addition to the numerous Brackett series features. Stars K3–M5 are probably best classified in the K–band (KH86; Ali et al. 1995; WH97) using atomic features of MgI, CaI, and NaI as well as the first–overtone CO bandheads observed at $R \sim 1000$. However, we have found that these stars also have strong temperature and luminosity sensitive features in the H–band such as MgI, AlI, OH, and the second–overtone CO bandheads. The very latest–type stars ($> M5$) have very strong, broad, molecular features which can be identified at resolutions as low as $R \sim 300$. Kirkpatrick et al. (1993; see also Jones et al. 1996) have classified stars in the I– or the J–band employing features due to VO, TiO, and FeH. In addition, broad water vapor bands observed throughout the 1–2.5 $\mu$m region (Jones et al. 1995) are an important opacity source in the atmospheres of the coolest stars as well as brown dwarfs (Allard & Hauschildt 1995). While the I– or J–bands are probably the best spectral regions to classify extremely cool stars (as they lie on the Wien side of the Planck function for these objects), more heavily obscured objects can still be profitably observed at low resolution in the J– and H–bands (e.g. NICMOS on HST) or in the K–band (Wilking, Greene, & Meyer 1998) in search of these water vapor absorptions.
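Returning to the K3–M5 regime, the two-dimensional index described in the previous section lends itself to a compact numerical recipe. The sketch below simply evaluates the two linear calibrations quoted above; the function name is hypothetical, and the choice between the dwarf and giant relations (made graphically via the CO/Mg ratio in Figure \[fig8\]) is only indicated schematically.

```python
def teff_from_oh_mg(ratio_oh_mg, luminosity_class="V"):
    """Effective temperature (K) from EW[OH5920]/EW[Mg6345], using the
    linear calibrations quoted in the text (roughly valid for K3-M5)."""
    if luminosity_class == "V":      # dwarfs
        return 4640.0 - 2610.0 * ratio_oh_mg
    if luminosity_class == "III":    # giants
        return 5100.0 - 2730.0 * ratio_oh_mg
    raise ValueError("calibration quoted only for classes III and V")

# The (EW[CO6018]+EW[CO6170])/EW[Mg6345] ratio, which is larger in giants
# than in dwarfs at fixed temperature, selects which calibration to apply.
```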
The H–band spectral atlas we have presented comprises moderate resolution spectra with $R \sim 3000$. In contrast, most spectral classification is typically carried out with $R \sim 500-1000$. The strongest and broadest features in the H–band are the CO(6-3) bands and the Brackett lines. These features could be identified with much lower spectral resolution than our survey, at $R \sim 500$. The most crowded region in the H–band spectra is that in the vicinity of the HI line at 5948.50 cm$^{-1}$ (1.68110 $\mu$m), the AlI triplet at 5964–5980 cm$^{-1}$ (1.677–1.672 $\mu$m), and the SiI line at 5993.29 cm$^{-1}$ (1.66853 $\mu$m). In order to properly separate these important features from each other, a resolving power of $R \sim 1000$ is required. At this resolution, one can also obtain measurements of the HeI line at 5882 cm$^{-1}$ (1.700 $\mu$m). At $R = 3000$ one can resolve individual components of the AlI triplet, the Mg I doublet, and the CO bandheads, as well as the Stark–broadened Brackett lines in the early–type dwarf stars (Table \[ftshband\]). An additional issue in the near–infrared is the significant contribution to shot noise from air-glow lines. In the H–band, air–glow from OH is sufficiently bright and variable that it compromises $R=1000$ H–band spectral classification for very faint sources. Spectral resolution as high as $R \sim 5000$ will be required to resolve the bulk of these air–glow features and to obtain adequate SNR spectra of faint objects [^6]. In summary, we present an H–band spectral atlas at a resolving power of $R = 3000$ that spans a wide range in stellar temperature (O7-M5) and luminosity class (I-V). This spectral region contains a number of temperature and/or luminosity sensitive atomic and molecular features which will allow spectral classification to be carried out in the H-band. As an example of the efficacy of this spectral range for distinguishing stellar spectral types, we define a set of narrow–band indices which, with $SNR \sim 50$, permit classification of late–type stars on the MK system within $\pm 2$ subclasses. It appears, however, that for most applications obtaining H–band spectra at $R \sim 1000$ will be sufficient for classification. Appendix A: Electronic Availability of the Data =============================================== The final reduced averaged spectra as well as the difference of the forward and backward scan pairs (see Section 3 for a description of the reduction procedure) are available through the Astronomical Data Center (ADC) for each observation listed in this paper. The ADC can be contacted directly: i) by post at Astronomical Data Center, NASA Goddard Space Flight Center, Code 631, Greenbelt, MD 20771; or ii) by telephone at (301) 286–8310; or iii) by fax at (301) 286–1771; or iv) via the internet at http://adc.gsfc.nasa.gov. The data are in FITS format with pertinent header information included for each image. These FITS format files, useful plotting routines, and other relevant information are also available on the World Wide Web at http://donald.phast.umass.edu. The raw FTS data are also available directly from NOAO (contact KHH for details). We would like to thank Lori Allen, Ed Chang, Lynne Hillenbrand, Susan Kleinmann, Michael Skrutskie, and Lloyd Wallace for helpful discussions. Special thanks to John Carpenter for assisting in the initial compilation of the standard star lists, and to Karen Strom and Stephen Friedman for their assistance in making the data available electronically.
Antonella Romano provided assistance in preparing the tables and figures for publication. Support for MRM during the final stages of this work was provided by NASA through Hubble Fellowship grant \# HF–01098.01–97A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5–26555. SE acknowledges support from the National Science Foundation’s Faculty Award for Women Program. This work was supported in part through a grant from the National Science Foundation (\# AST–9114863) to SES. Ali, B., Carr, J.S., DePoy, D.L., Frogel, J.A., & Sellgren, K. 1995, AJ, 110, 2415 Allard, F., & Hausschildt, P.H. 1995, ApJ, 445, 433 Bell, J.R. 1974, Fourier Transform Spectroscopy, (John Willey and Sons: New York) Bessell, M.S., Castelli, F., & Plez, B. 1998, A&A, 333, 231 Blum, R.D., Ramond, T.M., Conti, P.S., Figer, D.F., & Sellgren, K. 1997, AJ, 113, 1855 Coxon, G. 1965, Ark. Fys., 28, 381 Dallier, R., Boisson, C., & Joly, M. 1996, A&ASS, 116, 239 Epchtein, N. et al. 1997, Impact of Large Scale Near-IR Sky Surveys, eds. Gorzon, F., Epchtein, N., Omont, A., Burton, B., & Persi, P. (Kluwer: Amsterdam, Netherlands) Eriksson, K.B.S., & Isberg, H.B.S. 1963, Ark. Fys., 23, 527 Garcia, J.D., & Mack, J.E. 1965, JOSA, 55, 654 George, T., Urban, W., & LeFloch, A. 1994, J. Mol. Spec., 165, 50 Gray, D.F. 1992, The Observation and Analysis of Stellar Photospheres, (Cambridge University Press: Cambridge), p. 374 Hall, D.N.B, Ridgeway, S., Bell, E., & Yarborough, J.M. 1979, Proc. SPIE, 248, 898 Hanson, M., Conti, P., & Rieke, M. 1996, ApJS, 107, 281 Henry, T.J., Kirkpatrick, J.D., & Simons, D. 1994, AJ, 108, 1437 Herbst, T.M. 1994, PASP, 106, 1298 Hoffleit, D., & Jaschek, C. 1982, The Bright Star Catalogue, Yale Observatory Jaschek, C., Conde, H., & de Sierra, A.C. 1964, La Plata Observatory Bulletin, 28 Johnson, H.L., & Mendez, M.E. 1970, AJ, 75, 785. Jones, H.R.A., Longmore, A.J., Allard, F., Hauschildt, P.H., Miller, S., & Tennyson, J. 1995, MNRAS, 277, 767 Jones, H.R.A., Longmore, A.J., Allard, F., & Hauschildt, P.H. 1996, MNRAS, 280, 77 Kaufer, A. et al. 1997, A&A, 320, 273 Keenan, P.C., 1985, IAU, 111, 123 Keenan, P.C., 1987, PASP, 99, 713 Keenan, P.C., & McNeil, R. 1989, APJS, 71, 245 Kirkpatrick, J.D., Henry, T.J., & McCarthy, D.W. Jr. 1991, APJS, 77, 417 Kirkpatrick, J.D., Kelly, D.M., Rieke, G.H., Liebert, J., Allard, F., Wehrse, R. 1993, ApJ, 402, 643 Kleinmann, S.G., & Hall, D.N.B. 1986, APJS, 62, 501 (KH86) Koornneef, J. 1983, A&A, 128, 84 Lancon, A., & Rocca-Volmerange, B. 1992, A&AS, 96, 593 Le Bertre, T. 1997, A&A, 324, 1059 Litzen, U. 1964, Ark. Fys., 28, 239 Livingston, W., & Wallace, L. 1991, Solar Atlas 1–5 $\mu$m, Kitt Peak Observatory Bulletin (LW91) McWilliam, A., & Lamber, D.L. 1984, PASP, 96, 882 Meyer, M.R., Calvet, N., & Hillenbrand, L.A. 1997, AJ, 114, 288 Morgan, W., Abt, H., & Tabscott, J. 1978, Revised MK Spectral Atlas for Stars Ealier than the Sun, Yerkes and Kitt Peak Observatories Origlia, L., Moorwood, A., & Oliva, E. 1993, A&A, 280, 536 Pollack, J.B., Hollenbach, D., Beckwith, S.V.W., Damon, P., Rousch, T., & Fong, W. 1994, ApJ, 421, 615. Risberg, G. 1965, Ark. Fys., 28, 381 Schinnerer, E., Eckart, A., Quirrenbach, A., Boker, T., Tacconi–Garman, L.E., Downes, D. 1997, ApJ, 488, 174 Skrutskie, M.F. 1997, Impact of Large Scale Near-IR Sky Surveys, eds. Gorzon, F., Epchtein, N., Omont, A., Burton, B., & Persi, P. (Kluwer: Amsterdam, Netherlands) Schmidt–Kaler, T.H. 
1982, Physical Parameters of Stars, Landolt-Bornstein New Series, Vol. 2b, Astronomy and Astrophysics, Stars and Star Clusters, ed. K. Shaifers & H. H. Voigt, (Springer–Verlag: New York) Tokunaga, A., 1996, Astrophysical Quantities, submitted Wallace, L., & Livingston, W. 1992, Atlas of Dark Sunspot Spectrum 1–5 $\mu$m, Kitt Peak Observatory Bulletin (WL92) Wallace, L. & Hinkle, K. 1996, APJS, 107, 312 Wallace, L. & Hinkle, K. 1997, APJS, 111, 445 (WH97) Wilking, B.A., Greene, T.P., & Meyer, M.R. 1998, AJ, submitted [lllllll]{} & & & & & &\ 1903 & \*46 $\epsilon$ Ori & 2.4 & B0 & Ia & 26SB & 87\ 1203 & 44 $\zeta$ Per & 3.4 & B1 & Ib & 20SB & 59\ 2827 & 31 $\eta$ CMa & 2.5 & B5 & Ia & 41V & 45\ 1713 & \*19 $\beta$ Ori & 0.1 & B8 & Ia & 21SB & 33\ 3975 & \*30 $\eta$ Leo & 3.3 & A0 & Ib & 3V & 20\ 7924 & \*50 $\alpha$ Cyg & 1.0 & A2 & Ia & -5SBO & 21\ 1865 & 11 $\alpha$ Lep & 2.0 & F0 & Ib & 24 & 13\ 1017 & \*33 $\alpha$ Per & 0.9 & F5 & Ib & -2V & 18\ 7796 & \*37 $\gamma$ Cyg & 1.1 & F8 & Ib & -8V & 20\ 8232 & \*22 $\beta$ Aqr & 1.5 & G0 & Ib & 7V? & 18\ 7479 & 5 $\alpha$ Sge & 2.4 & G1 & II & 2V? & 0\ 7063 & $\beta$ Sct & 2.2 & G4 & IIa & -22SB10 & 10\ 8752 & \*- & 3.6 & G4v & $>I$ & -58V? & 35\ 7314 & \*21 $\theta$ Lyr & 2.1 & K0 & +II & -31V & $<$19\ 6713 & 93 Her & 2.4 & K0.5 & IIb & -24 & $<$17\ 8465 & \*21 $\zeta$ Cep & 1.1 & K1.5 & Ib & -18SB & $<$17\ 6498 & 49 $\sigma$ Oph & 1.9 & K2 & II & -27 & $<$19\ 603 & 57 $\gamma^1$ And & -0.5 & K3 & -IIb & -12SB & $<$17\ 8089 & \*63 Cyg & 1.5 & K4 & Ib-IIa & -26V & –\ 8079 & \*62 $\xi$ Cyg & 0.5 & K4.5 & Ib-II & -20SB & $<$17\ 2061 & 58 $\alpha$ Ori & -2.? & M1-2 & Ia-Iab & 21SB & –\ 1155 & \*– & 0.5 & M2 & +IIab & -3V & –\ 921 & 25 $\rho$ Per & -1.7 & M4 & II & 28 & –\ 7009 & \*– & 0.6 & M4.5-M5 & +II & -19 & –\ 6406 & \*64 $\alpha^1$ Her & -2.4 & M5 & Ib-II & -33V & 21\ 1899 & 44 $\iota$ Ori & 3.5 & O9 & III & 22SB2O & 130\ 1552 & 3 $\pi^4$ Ori & 4.1 & B2+B2 & III & 23SBO & 40\ 5291 & \*11 $\alpha$ Dra & 3.5 & A0 & III & -13SBO & 18\ 403 & \*37 $\delta$ Cas & 2.3 & A5 & III-IV & 7SB & 113\ 1412 & \*78 $\theta^2$ Tau & 2.9 & A7 & III & 40SB1O & 78\ 4031 & \*36 $\zeta$ Leo & 2.8 & F0 & III & -16SB & 84\ 21 & \* 11 $\beta$ Cas & 1.6 & F2 & III–IV & 12SB & 70\ 5017 & \*20 CVn & 3.9 & F3 & III & 8V? & 17\ 2706 & 48 Gem & 5.0 & F5 & III-IV & 13V & 74\ 8905 & \*68 $\upsilon$ Peg & 3.3 & F8 & III & -11 & 79\ 4883 & \*31 Com & 3.0 & G0 & III & -1V? & 77\ 4716 & 5 CVn & 2.8 & G6 & III & -12SB & $<$17\ 7328 & 1 $\kappa$ Cyg & 1.6 & G9 & III & -29SB & $<$17\ 7949 & 53 $\epsilon$ Cyg & 0.2 & K0 & -III & -11SB? & $<$17\ 8317 & \*11 Cep & 2.3 & K1 & III & -37 & $<$17\ 6299 & \*27 $\kappa$ Oph & 0.8 & K2 & III & -56V & $<$17\ 165 & 31 $\delta$ And & 0.5 & K3 & III & -7SB1O & $<$17\ 6705 & \*33 $\gamma$ Dra & -1.2 & K5 & III & -28 & $<$17\ 152 & – & 1.7 & K5-M0 & III & -33V & $<$17\ 4517 & \*3 $\nu$ Vir & 0.3 & M1 & IIIab & 51V? & –\ 6242 & – & 0.4 & M4 & +III-IIIa & -7V? & –\ 7886 & \*– & -0.6 & M6 & III & -66V? & –\ \[sample\] [lllllll]{} & & & & & &\ 6588 & 85 $\iota$ Her & 4.3 & B3 & IV & -20SB1O & 11\ 4033 & \*33 $\lambda$ UMa & 3.3 & A2 & IV & 18V & 48\ 1351 & 57 Tau & 4.9 & F0 & IV & 42SB1? 
& 109\ 5235 & 8 $\eta$ Boo & 1.5 & G0 & IV & 0SB1O & 13\ 5409 & 105 $\phi$ Vir & 2.8 & G2 & IV & -10SB & 0\ 6623 & 86 $\mu$ Her & 1.4 & G5 & IV & -16V & 20\ 995 & 59 Ari & 3.9 & G6 & IV & 0V & –\ 7602 & 60 $\beta$ Aql & 1.7 & G8 & IV & -40V & $<$ 16\ 7957 & \*3 $\eta$ Cep & 1.2 & K0 & IV & -87 & $<$17\ 5901 & 11 $\kappa$ CrB & 2.5 & K1 & IVa & -24 & $<$17\ 6014 & – & 3.6 & K1.5 & IV & -4V & –\ 2456 & \*15 Mon & 5.5 & O7 & V(e) & 33SB & 63\ 5191 & \*85 $\eta$ UMa & 2.4 & B3 & V & -11SB? & 205\ 3982 & \*32 $\alpha$ Leo & 1.6 & B7 & V & 6SB & 329\ 7001 & \* $\alpha$ Lyr & 0.0 & A0 & V & -14V & 15\ 2491 & \*9 $\alpha$ CMa & -1.5 & A1 & Vm & -8SBO & 13\ 4534 & \*94 $\beta$ Leo & 2.0 & A3 & V & 0V & 121\ 4357 & \*68 $\delta$ Leo & 2.3 & A4 & V & -20V & 181\ 4931 & 78 UMa & 4.1 & F2 & V & -10V? & 92\ 1279 & – & 5.1 & F3 & V & 36SB1? & 25\ 2943 & \*10 $\alpha$ CMi & -0.6 & F5 & IV-V & -3SBO & 6\ 1538 & 59 Eri & 2.3 & F6 & V & 35 & –\ 4375 & \*53 $\xi$ UMa & 3.0 & G0 & V & -16SB1O & 1\ 4983 & 43 $\beta$ Com & 3.1 & F9.5 & V & 6SB? & 6\ 483 & \*– & 3.7 & G1.5 & V & 4V? & 2\ 4374 & 53 $\xi$ UMa & 3.5 & G0 & V & -16SB1O & 3\ 5072 & 70 Vir & 3.6 & G4 & V & 5V & 1\ 4496 & \*61 UMa & 3.8 & G8 & V & -5V & $<$17\ 7462 & \*61 $\sigma$ Dra & 3.0 & K0 & V & 27V & $<$17\ 1084 & \*18 $\epsilon$ Eri & 1.6 & K2 & V & 15V? & $<$17\ 8832 & \*– & 3.2 & K3 & V & -18V & –\ – & GL570A & 3.0 & K4 & V & - & -\ 8085 & \*61 Cyg & 2.4 & K5 & V & -64V & $<$17\ 8086 & 61 Cyg & 3.1 & K7 & V & -64V? & =$<$25\ – & GL338A & 4.5 & M0 & V & - & -\ – & GL526 & 4.5 & M1.5 & V & - & -\ – & \*GL411 & 3.6 & M2 & V & - & -\ – & \*GL725A & 4.7 & M3 & V & - & -\ [rlll]{} & & &\ O6-O8– & & 37000 & 38000\ O9– & 32500 & 32000 & 33200\ O9.5– & & & 31450\ B0– & 26000 & 29000 & 29700\ B1– & 20700 & 24000 & 25600\ B2– & 17800 & 20300 & 22300\ B3– & 15600 & 17100 & 19000\ B4 & 13900 & & 17200\ B5– & 13400 & 15000 & 15400\ B6 & 12700 & 14100 & 14100\ B7– & 12000 & 13200 & 13000\ B8– & 11200 & 12400 & 11800\ B9– & 10500 & 11000 & 10700\ A0– & 9730 & 10100 & 9480\ A1– & 9230 & 9480 &\ A2 & 9080 & 9000 & 8810\ A5– & 8510 & 8100 & 8160\ A7– & & 7650 & 7930\ F0– & 7700 & 7150 & 7020\ F2– & 7170 & 6870 & 6750\ F5– & 6640 & 6470 & 6530\ F7– & & & 6240\ F8– & 6100 & 6150 &\ G0– & 5510 & 5910 & 5930\ G2 & & & 5830\ G3 & 4980 & &\ G4 & & 5190 & 5740\ G6 & & 5050 & 5620\ G8– & 4590 & 4960 &\ K0– & 4420 & 4810 & 5240\ K1– & 4330 & 4610 &\ K2– & 4260 & 4500 & 5010\ K3– & 4130 & 4320 &\ K4– & & 4080 & 4560\ K5– & 3850 & 3980 & 4340\ K7 & & & 4040\ M0– & 3650 & 3820 & 3800\ M1– & 3550 & 3780 & 3680\ M2– & 3450 & 3710 & 3530\ M3– & 3200 & 3630 & 3380\ M4– & 2980 & 3560 & 3180\ M5– & & 3420 & 3030\ M6– & & 3250 & 2850\ \[temp\] [lll]{} & &\ March 9–10, 1993 & Day & 17\ April 1–3, 1993 & Day/Night & 30\ May 18–19, 1993 & Day & 10\ January 30–31, 1994 & Day/Night & 42\ \[ftslog\] [lllllll]{} & & & & & &\ MgI(4s–4p) & 5.39 & 5843.41 & 1.71133 & 5844 & 10 & CO, Fe, Ni, OH\ OH($\Delta v = 2$) & 0.76 & 5920: & 1.689 & 5920 & 20 & C, CO, Fe, Ni\ HI(4-11) & 12.75 & 5948.50 & 1.68110 & 5950 & 20 & CO, Fe, Ni, Si\ AlI(4p–4d tr) & 4.09 & 5963.76 & 1.67679 & 5972.5 & 25 & CO, Fe, Ni, OH\ & & 5968.31 & 1.67552 & & &\ & & 5979.60 & 1.67235 & & &\ SiI(4p–3d) & 5.98 & 5993.29 & 1.66853 & 5993 & 10 & CO, Fe, Ni, OH\ $^{12}$CO(8,5)bh & 1.55 & 6018 & 1.662 & 6017.5 & 15 & Fe, OH, S\ $^{12}$CO(6,3)bh & 1.05 & 6177 & 1.619 & 6170 & 50 & Ca, Fe, Ni, OH, Si\ SiI(4p–5s) & 5.98 & 6263.92 & 1.59644 & 6264 & 10 & Fe, Mg, Ni, OH\ MgI(4s–4p tr) & 5.93 & 6341.10 & 1.57701 & 6345 & 20 & CN, CO, 
Fe, H$_2$O, Ni, OH\ & & 6347.88 & 1.57533 & & &\ & & 6351.22 & 1.57450 & & &\ \[ftshband\] [llllllllllll]{} & & & & & & & & & & &\ HR1903 & 26000 & 203 & -0.03 & 0.09 & 0.75 & 0.17 & -0.02 & 0.11 & 0.27 & 0.11 & 0.47\ HR1903 & 26000 & 167 & 0.01 & 0.07 & 0.73 & 0.11 & -0.00 & 0.09 & 0.19 & 0.12 & 0.32\ HR1203 & 20700 & 160 & -0.01 & 0.12 & 0.80 & 0.18 & 0.01 & 0.11 & 0.23 & 0.11 & 0.56\ HR2827 & 13400 & 045 & -0.05 & -0.02 & 1.06 & -0.04 & 0.20 & -0.04 & 0.01 & -0.05 & 0.08\ HR1713 & 11200 & 218 & 0.04 & 0.06 & 1.27 & 0.23 & 0.05 & 0.09 & 0.19 & 0.17 & 0.50\ HR3975 & 9730 & 068 & -0.08 & 0.11 & 2.12 & 0.23 & 0.12 & -0.09 & 0.33 & 0.20 & 0.58\ HR7924 & 9080 & 225 & -0.05 & 0.00 & 1.40 & 0.20 & -0.02 & 0.03 & -0.14 & 0.08 & 0.08\ HR1865 & 7700 & 196 & -0.01 & 0.22 & 2.21 & 0.32 & 0.14 & -0.13 & 0.30 & 0.25 & 0.52\ HR1017 & 6640 & 248 & 0.08 & 0.25 & 1.88 & 0.43 & 0.06 & -0.02 & 0.53 & 0.41 & 1.05\ HR7796 & 6100 & 324 & 0.10 & 0.29 & 1.58 & 0.69 & 0.14 & 0.03 & 0.79 & 0.60 & 0.94\ HR8232 & 5510 & 290 & 0.09 & 0.37 & 1.28 & 0.67 & 0.17 & 0.03 & 1.18 & 0.67 & 1.19\ HR8752 & 5510 & 084 & -0.13 & 0.34 & 1.16 & 0.14 & 0.07 & 0.02 & 0.16 & 0.16 & 0.41\ HR7479 & 5333 & 096 & 0.15 & 0.13 & 1.13 & 0.70 & 0.10 & 0.00 & 0.92 & 0.41 & 0.69\ HR7479 & 5333 & 109 & 0.10 & 0.22 & 1.07 & 0.49 & 0.11 & 0.02 & 0.76 & 0.54 & 1.04\ HR7479 & 5333 & 182 & 0.13 & 0.12 & 1.08 & 0.56 & 0.05 & 0.07 & 0.94 & 0.49 & 1.06\ HR7479 & 5333 & 093 & 0.06 & -0.02 & 1.20 & 0.53 & 0.03 & 0.14 & 1.05 & 0.54 & 0.93\ HR7063 & 4902 & 260 & 0.17 & 0.12 & 0.94 & 0.84 & 0.14 & 0.25 & 1.66 & 0.76 & 1.06\ HR7314 & 4420 & 193 & 0.23 & 0.15 & 0.84 & 0.88 & 0.15 & 0.46 & 2.25 & 1.00 & 1.60\ HR7314 & 4420 & 257 & 0.15 & 0.17 & 0.67 & 0.59 & 0.10 & 0.37 & 1.91 & 0.86 & 1.36\ HR6713 & 4375 & 265 & 0.22 & 0.19 & 0.81 & 0.80 & 0.16 & 0.40 & 1.86 & 0.79 & 1.25\ HR8465 & 4295 & 308 & 0.13 & 0.28 & 0.68 & 0.70 & 0.19 & 0.63 & 2.35 & 1.02 & 1.79\ HR6498 & 4260 & 247 & 0.24 & 0.30 & 0.84 & 1.00 & 0.21 & 0.83 & 2.75 & 1.11 & 1.84\ HR603 & 4130 & 453 & 0.24 & 0.34 & 0.69 & 0.61 & 0.19 & 0.49 & 2.09 & 0.87 & 1.46\ HR8089 & 3990 & 202 & 0.21 & 0.45 & 0.50 & 0.67 & 0.14 & 0.68 & 2.26 & 0.94 & 1.59\ HR8079 & 3920 & 445 & 0.24 & 0.67 & 0.62 & 0.80 & 0.24 & 1.05 & 2.70 & 1.05 & 1.72\ HR2061 & 3550 & 327 & 0.54 & 1.08 & 0.36 & 1.39 & 0.23 & 1.46 & 3.92 & 0.89 & 2.22\ HR1155 & 3450 & 237 & 0.46 & 1.31 & 0.58 & 1.12 & 0.38 & 1.37 & 3.35 & 1.12 & 2.03\ HR921 & 2980 & 225 & 0.77 & 1.41 & 0.29 & 1.27 & 0.37 & 1.10 & 4.08 & 0.67 & 1.52\ HR7009 & 2925 & 292 & 0.63 & 1.43 & 0.79 & 1.55 & 0.54 & 1.79 & 4.68 & 1.26 & 2.21\ HR6406 & 2800 & 319 & 0.59 & 1.41 & 0.80 & 1.66 & 0.54 & 1.85 & 4.98 & 1.31 & 2.31\ \[ftshewsup\] [llllllllllll]{} & & & & & & & & & & &\ HR1899 & 32000 & 164 & -0.03 & 0.17 & 0.54 & 0.15 & -0.04 & 0.03 & 0.18 & 0.10 & 0.49\ HR1899 & 32000 & 182 & -0.04 & 0.20 & 0.62 & 0.11 & 0.05 & -0.03 & 0.07 & -0.00 & 0.07\ HR1552 & 20300 & 115 & -0.07 & 0.13 & 1.29 & 0.31 & 0.10 & -0.09 & 0.12 & -0.01 & 0.09\ HR5291 & 10100 & 159 & -0.07 & 0.19 & 2.65 & 0.78 & 0.02 & 0.05 & 0.28 & 0.11 & 0.41\ HR403 & 8100 & 202 & -0.02 & 0.39 & 2.79 & 0.77 & 0.09 & -0.13 & 0.25 & 0.08 & 0.54\ HR1412 & 7650 & 186 & -0.02 & 0.43 & 2.70 & 0.71 & 0.05 & -0.10 & 0.60 & 0.25 & 0.91\ HR4031 & 7150 & 193 & 0.02 & 0.29 & 2.30 & 0.54 & 0.12 & -0.09 & 0.25 & 0.13 & 0.48\ HR21 & 6870 & 245 & 0.03 & 0.32 & 1.99 & 0.55 & 0.04 & -0.11 & 0.20 & 0.22 & 0.82\ HR21 & 6870 & 179 & 0.01 & 0.25 & 2.21 & 0.63 & 0.13 & -0.14 & 0.46 & 0.24 & 0.69\ HR5017 & 6700 & 174 & 0.04 & 0.28 & 2.58 & 0.83 & 0.07 & 
-0.03 & 0.43 & 0.36 & 0.96\ HR2706 & 6470 & 086 & 0.01 & 0.39 & 2.08 & 0.47 & 0.16 & -0.10 & 0.13 & 0.20 & 0.64\ HR8905 & 6270 & 059 & 0.06 & 0.13 & 1.06 & 0.45 & -0.01 & -0.16 & 0.30 & 0.28 & 1.02\ HR4883 & 5910 & 280 & 0.15 & 0.20 & 1.07 & 0.52 & 0.09 & 0.04 & 0.90 & 0.43 & 1.02\ HR4716 & 5050 & 133 & 0.17 & 0.10 & 0.97 & 0.47 & 0.13 & 0.07 & 1.29 & 0.54 & 0.92\ HR7328 & 4885 & 281 & 0.19 & 0.20 & 0.78 & 0.57 & 0.11 & 0.11 & 1.33 & 0.67 & 1.35\ HR7949 & 4810 & 369 & 0.21 & 0.24 & 0.77 & 0.59 & 0.13 & 0.18 & 1.39 & 0.67 & 1.38\ HR8317 & 4710 & 191 & 0.29 & 0.14 & 0.63 & 0.58 & 0.08 & 0.26 & 1.79 & 0.83 & 1.64\ HR6299 & 4500 & 532 & 0.26 & 0.09 & 0.77 & 0.81 & 0.10 & 0.44 & 1.92 & 0.76 & 1.40\ HR165 & 4320 & 266 & 0.34 & 0.24 & 0.64 & 0.58 & 0.18 & 0.41 & 1.98 & 0.89 & 1.45\ HR6705 & 3990 & 298 & 0.39 & 0.60 & 0.77 & 1.12 & 0.26 & 0.96 & 3.05 & 1.12 & 1.75\ HR6705 & 3990 & 694 & 0.41 & 0.61 & 0.76 & 1.07 & 0.25 & 0.89 & 2.87 & 1.06 & 1.66\ HR152 & 3956 & 270 & 0.37 & 0.86 & 0.58 & 0.92 & 0.34 & 0.87 & 2.59 & 0.78 & 1.40\ HR4517 & 3780 & 673 & 0.66 & 1.06 & 0.33 & 1.07 & 0.21 & 0.89 & 3.14 & 0.70 & 1.93\ HR6242 & 3560 & 458 & 0.52 & 1.12 & 0.57 & 1.18 & 0.39 & 1.45 & 3.93 & 1.16 & 1.96\ HR7886 & 3250 & 574 & 0.61 & 1.54 & 0.89 & 1.71 & 0.64 & 2.25 & 5.69 & 1.41 & 2.42\ \[ftshewgn\] [llllllllllll]{} & & & & & & & & & & &\ HR6588 & 19000 & 162 & -0.03 & 0.10 & 1.65 & 0.48 & -0.03 & 0.02 & 0.17 & 0.07 & 0.40\ HR4033 & 8810 & 146 & -0.00 & 0.33 & 2.62 & 0.73 & 0.06 & 0.02 & 0.27 & 0.13 & 0.35\ HR1351 & 7020 & 094 & -0.06 & 0.40 & 2.18 & 0.62 & 0.19 & -0.19 & -0.01 & 0.10 & 0.24\ HR5235 & 5930 & 341 & 0.16 & 0.12 & 1.21 & 0.83 & 0.11 & 0.03 & 1.30 & 0.64 & 1.14\ HR5235 & 5930 & 263 & 0.17 & 0.26 & 1.26 & 0.68 & 0.18 & -0.01 & 0.97 & 0.50 & 1.03\ HR5409 & 5830 & 158 & 0.16 & 0.16 & 1.02 & 0.51 & 0.12 & -0.03 & 0.91 & 0.44 & 0.95\ HR6623 & 5680 & 211 & 0.33 & 0.12 & 0.87 & 0.87 & 0.09 & 0.09 & 1.21 & 0.69 & 1.38\ HR995 & 5620 & 143 & 0.22 & 0.08 & 0.80 & 0.61 & 0.23 & 0.03 & 1.09 & 0.48 & 1.14\ HR7602 & 5430 & 056 & 0.46 & 0.15 & 0.71 & 0.86 & 0.01 & 0.17 & 1.40 & 0.57 & 1.49\ HR7957 & 5240 & 189 & 0.22 & 0.09 & 0.68 & 0.70 & 0.07 & 0.15 & 1.04 & 0.46 & 0.99\ HR7957 & 5240 & 192 & 0.09 & 0.18 & 0.56 & 0.52 & -0.01 & 0.09 & 0.75 & 0.40 & 1.17\ HR5901 & 5125 & 395 & 0.37 & 0.13 & 0.55 & 0.62 & 0.08 & 0.21 & 1.61 & 0.74 & 1.60\ HR5901 & 5125 & 153 & 0.37 & 0.09 & 0.75 & 0.93 & 0.15 & 0.30 & 1.54 & 0.71 & 1.13\ HR6014 & 5068 & 072 & 0.31 & -0.12 & 0.39 & 0.59 & 0.14 & 0.05 & 1.10 & 0.70 & 1.40\ HR6014 & 5068 & 133 & 0.42 & 0.11 & 0.58 & 0.55 & 0.18 & 0.09 & 1.48 & 0.65 & 1.24\ \[ftshewsub\] [llllllllllll]{} & & & & & & & & & & &\ HR2456 & 38000 & 053 & -0.03 & 0.36 & 0.39 & 0.26 & 0.16 & 0.12 & -0.32 & 0.11 & 0.31\ HR2456 & 38000 & 073 & -0.12 & 0.18 & 0.48 & 0.10 & 0.06 & -0.15 & 0.15 & 0.03 & 0.10\ HR5191 & 19000 & 282 & -0.05 & 0.19 & 1.71 & 0.57 & 0.08 & -0.07 & 0.22 & 0.07 & 0.24\ HR3982 & 13000 & 241 & -0.08 & 0.13 & 2.22 & 0.47 & 0.04 & -0.09 & 0.28 & 0.03 & 0.15\ HR3982 & 13000 & 241 & -0.07 & 0.09 & 2.14 & 0.51 & 0.06 & -0.09 & 0.32 & 0.12 & 0.26\ HR7001 & 9480 & 678 & -0.05 & 0.36 & 2.83 & 1.06 & 0.03 & -0.03 & 0.41 & 0.15 & 0.52\ HR7001 & 9480 & 146 & -0.10 & 0.38 & 2.81 & 1.02 & 0.10 & -0.05 & 0.43 & 0.17 & 0.41\ HR2491 & 9145 & 111 & -0.07 & 0.36 & 2.71 & 0.83 & 0.16 & -0.14 & 0.37 & -0.08 & 0.10\ HR2491 & 9145 & 148 & -0.14 & 0.40 & 2.67 & 0.93 & 0.12 & -0.13 & 0.39 & 0.05 & 0.20\ HR4534 & 8593 & 205 & -0.04 & 0.45 & 2.67 & 1.01 & 0.11 & -0.12 & 0.30 & 0.07 & 0.34\ HR4357 & 8377 & 192 & 
-0.05 & 0.40 & 2.83 & 0.96 & 0.09 & -0.12 & 0.24 & 0.06 & 0.38\ HR4931 & 6750 & 119 & 0.04 & 0.24 & 1.47 & 0.62 & -0.01 & -0.06 & 0.09 & 0.09 & 0.86\ HR1279 & 6677 & 080 & 0.10 & 0.32 & 1.62 & 0.67 & 0.15 & -0.09 & 0.26 & 0.12 & 0.64\ HR2943 & 6530 & 327 & 0.04 & 0.33 & 1.60 & 0.57 & 0.15 & -0.14 & 0.54 & 0.27 & 0.74\ HR1538 & 6385 & 281 & 0.06 & 0.34 & 1.26 & 0.53 & 0.18 & -0.11 & 0.51 & 0.22 & 0.90\ HR4375 & 6085 & 254 & 0.26 & 0.15 & 0.63 & 0.56 & 0.15 & -0.06 & 0.68 & 0.37 & 1.09\ HR4983 & 5930 & 123 & 0.21 & 0.16 & 0.75 & 0.66 & 0.09 & 0.04 & 0.71 & 0.45 & 1.20\ HR4983 & 5930 & 173 & 0.19 & 0.21 & 0.90 & 0.52 & 0.19 & -0.06 & 0.87 & 0.47 & 0.96\ HR483 & 5855 & 059 & 0.24 & 0.02 & 0.68 & 0.29 & 0.04 & -0.17 & 0.18 & 0.37 & 1.09\ HR483 & 5855 & 079 & 0.24 & 0.19 & 0.90 & 0.46 & 0.19 & -0.11 & 0.67 & 0.48 & 1.30\ HR4374 & 5830 & 250 & 0.28 & 0.20 & 0.62 & 0.63 & 0.10 & -0.01 & 0.90 & 0.47 & 1.35\ HR5072 & 5740 & 197 & 0.26 & 0.15 & 0.74 & 0.53 & 0.16 & -0.04 & 0.99 & 0.53 & 1.28\ HR4496 & 5430 & 105 & 0.35 & 0.17 & 0.59 & 0.59 & 0.21 & -0.08 & 1.12 & 0.61 & 1.45\ HR4496 & 5430 & 090 & 0.31 & 0.16 & 0.39 & 0.50 & 0.13 & -0.04 & 0.69 & 0.43 & 1.08\ HR7462 & 5240 & 131 & 0.52 & 0.04 & 0.38 & 0.72 & 0.09 & 0.09 & 1.20 & 0.62 & 1.89\ HR7462 & 5240 & 095 & 0.51 & 0.07 & 0.37 & 0.69 & 0.13 & 0.06 & 1.16 & 0.72 & 2.05\ HR1084 & 5010 & 223 & 0.61 & 0.19 & 0.44 & 0.74 & 0.24 & -0.01 & 1.59 & 0.76 & 2.00\ HR8832 & 4785 & 111 & 0.88 & 0.14 & 0.29 & 1.06 & 0.17 & 0.18 & 1.88 & 0.90 & 2.50\ GL570A & 4560 & 120 & 0.90 & 0.22 & 0.37 & 1.13 & 0.28 & 0.04 & 2.02 & 0.90 & 2.29\ HR8085 & 4340 & 125 & 0.65 & 0.18 & 0.01 & 1.15 & 0.11 & 0.15 & 1.59 & 0.60 & 2.38\ HR8086 & 4040 & 170 & 0.72 & 0.39 & 0.06 & 1.46 & 0.14 & 0.11 & 1.78 & 0.50 & 2.07\ HR8086 & 4040 & 181 & 0.72 & 0.32 & 0.07 & 1.48 & 0.15 & 0.12 & 1.86 & 0.52 & 2.18\ GL338A & 3800 & 072 & 0.98 & 0.37 & 0.08 & 1.36 & 0.13 & 0.02 & 1.51 & 0.34 & 1.80\ GL526 & 3605 & 068 & 0.62 & 0.42 & -0.13 & 1.32 & 0.16 & 0.10 & 1.04 & 0.04 & 0.92\ GL411 & 3530 & 202 & 0.48 & 0.53 & -0.04 & 1.29 & 0.16 & 0.01 & 1.17 & 0.10 & 1.19\ GL411 & 3530 & 189 & 0.47 & 0.55 & -0.07 & 1.29 & 0.18 & 0.06 & 1.21 & 0.12 & 1.23\ GL725A & 3380 & 083 & 0.52 & 0.57 & -0.10 & 1.15 & 0.06 & 0.03 & 0.93 & -0.00 & 0.99\ GL725A & 3380 & 107 & 0.54 & 0.56 & -0.12 & 1.25 & 0.02 & 0.08 & 1.20 & -0.02 & 1.09\ \[ftshewdw\] [^1]: For a detailed listing of spectral types and luminosity classes in the revised MK system see Keenan (1985). [^2]: Recent work by Bessell, Castelli, and Plez (1998) provides updated temperatures, colors, and bolometric corrections for a wide range of spectral types and luminosity classes. [^3]: For details concerning the advantages and disadvantages of fourier transform spectroscopy, see Bell (1974). [^4]: IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract to the National Science Foundation. [^5]: The conversion to angstroms is $EW(\AA) = [EW(cm^{-1})/ \sigma^2] \times 10^8$ [^6]: See Herbst (1994) for a comprehensive discussion of OH airglow background supression strategies.
1
--- abstract: | We revisit the longstanding question of whether first brightest cluster galaxies are statistically drawn from the same distribution as other cluster galaxies or are “special”, using the new non-parametric, empirically based, model presented in @paper2 for associating galaxy luminosity with halo/subhalo masses. We introduce scatter in galaxy luminosity at fixed halo mass into this model, building a conditional luminosity function (CLF) by considering two possible models: a simple lognormal and a model based on the distribution of concentration in haloes of a given mass. We show that this model naturally allows an identification of halo/subhalo systems with groups and clusters of galaxies, giving rise to a clear central/satellite galaxy distinction, obtaining a special distribution for the brightest cluster galaxies (BCGs). Finally, we use these results to build up the dependence of BCG magnitudes on cluster luminosity, focusing on two statistical indicators, the dispersion in BCG magnitude and the magnitude difference between first and second brightest galaxies. We compare our results with two simple models for BCGs: a statistical hypothesis that the BCGs are drawn from a universal distribution, and a cannibalism scenario merging two galaxies from this distribution. The statistical model is known to fail from work as far back as @tr. We show that neither the statistical model nor the simplest possibility of cannibalism provide a good match for observations, while a more realistic cannibalism scenario works better. Our CLF models both give similar results, in good agreement with observations. Specifically, we find $<m_1>$ between -25 and -25.5 in the K-band, $\sigma(m_1)\sim0.25$ and $<\Delta_{12}>$ between 0.6 and 0.8, for cluster luminosities in the range of $10^{12}$ to $10^{13} h^{-2} {\rm L_\odot}$. author: - | A. Vale$^{1,2}$[^1] and J. P. Ostriker$^{1,3}$\ $^{1}$Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, United Kingdom\ $^{2}$CENTRA, Departamento de Física, Instituto Superior Técnico, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal\ $^{3}$Princeton University Observatory, Princeton University, Princeton NJ 08544, USA title: 'The Non-Parametric Model for Linking Galaxy Luminosity with Halo/Subhalo Mass: Are First Brightest Galaxies Special?' --- galaxies: haloes – galaxies: fundamental parameters – galaxies: clusters: general – dark matter – methods: statistical Introduction ============ The nature of brightest cluster galaxies (BCGs) has long been a subject of interest and much debate [@peebles; @sandage; @dressler]. In particular, investigators have asked whether their origin is statistical or special in nature, that is, whether they follow a special distribution independent of the fainter galaxies in the cluster, or on the contrary, they are merely the extreme values of the same global distribution derived for all cluster galaxies. On the theoretical side, there has been renewed interest in this subject with recent studies of the relation between galaxies and their dark matter haloes from a theoretical, statistical point of view, involving the study of the distribution of the galaxy population through different haloes while bypassing the complications of the physics of galaxy formation (e.g., @bg [@paper1; @iro; @yanghod; @zehavi; @zz; @coorayc; @charlie; @vdb]). Since these involve populating dark matter haloes with galaxies, they usually lead to a distinction between central and satellite galaxies. 
This in turn has led to central galaxies being treated separately from the rest in many of these works, and therefore having a distinct distribution, with consequences visible, for example, in the luminosity function. Some of these studies have in fact looked at some specific BCG-related properties of clusters, like the magnitude gap (e.g., @milos [@vdb]). In the past, observational studies which have focused on this issue [@tr; @hgt; @sgh; @bb84; @hs; @bhavsar; @pl; @bb] have been hindered by the limited numbers of high luminosity galaxy observations available, since the strongly declining nature of the bright end of the luminosity function requires having very large samples to obtain significant numbers of high luminosity galaxies. Due to this, these studies were mostly inconclusive when it came to answering the question of whether BCGs were statistical or special in nature, although many works hinted at the latter. More recently, the advent of large scale surveys such as the 2dF Galaxy Redshift Survey or the Sloan Digital Sky Survey (SDSS) has motivated plentiful, ongoing work on this subject (e.g., @lms [@lm; @ls; @bernardi; @linden]). This issue is in large part motivated by the fact that, observationally, BCGs do look different from other galaxies. They usually sit at the centre of the cluster, and tend to be considerably brighter than the remaining cluster members. The most striking case is that of cD galaxies, found in the centres of rich clusters, which dominate their satellites in both size and brightness, while having a characteristically distinct morphology and surface brightness distribution (e.g., @cdreview). Likewise, cD galaxies tend to be brighter than what would be expected from the bright end of the cluster galaxy luminosity function. In fact, it has been observed that, when analyzing composite luminosity functions of cluster galaxies, the most luminous of them form a hump at the bright end (e.g., @colless [@yagi; @2pigg]). Yet, at the same time, there is little variation in magnitude among them [@hs; @pl; @bernardi; @linden]. This ties in with the fact that the luminosity of BCGs is expected to vary only slowly with increasing cluster luminosity (@lm; see also the results for the mass-luminosity relation of central galaxies in @paper2). In order to try to answer this problem from available observational data, two different indicators have been considered. One is the shape of the overall distribution of the magnitude of BCGs. If BCGs are merely the extreme cases of a general distribution applicable to all cluster galaxies, then it is expected that results from extreme value theory in statistics apply, predicting a resulting distribution shaped like the Gumbel distribution [@bb84; @bb]. On the other hand, if BCGs are considered a special, distinct type of galaxy, then some particular distribution is to be expected, such as a Gaussian [@pl] or lognormal. Some studies have also raised the possibility that it could actually be a combination of the two, probably depending on the type of cluster [@bhavsar; @bb]. The other property studied is the ratio $r=\Delta_{12}/\sigma_1$, where $\Delta_{12}$ is the average magnitude difference between the first and second brightest galaxies, and $\sigma_1$ the dispersion in the magnitude of the first brightest galaxy. It is possible to prove the powerful conclusion [@tr] that, if all galaxies are drawn from the same statistical distribution, regardless of its exact form, then $r\leq 1$.
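As a purely illustrative numerical check of this bound (not part of the original analysis), one can draw the luminosities of all cluster members from a single toy distribution, here an exponential tail mimicking the bright end of a Schechter function, and measure $r$ directly; the sample sizes, the toy distribution, and all names below are assumptions made only for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def tr_ratio(n_gal=30, n_clusters=50000, L_star=1.0):
    """Monte Carlo estimate of r = <Delta_12>/sigma(m_1) when every cluster
    galaxy is drawn from one universal (toy, exponential) luminosity
    distribution, for comparison with the r <= 1 bound quoted above."""
    L = rng.exponential(L_star, size=(n_clusters, n_gal))
    L.sort(axis=1)
    m1 = -2.5 * np.log10(L[:, -1])   # brightest galaxy (smallest magnitude)
    m2 = -2.5 * np.log10(L[:, -2])   # second brightest
    return (m2 - m1).mean() / m1.std()

# The resulting r for a purely statistical model can then be contrasted
# with the observed value discussed next.
print(tr_ratio())
```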
Observational results give a value for $r$ around 1.5 (e.g., @lm [@ls]), which would exclude this possibility. This has led to the study of possible alternative scenarios for the formation of BCGs, in order to account for their special nature. One such is galactic cannibalism, initially proposed by Ostriker and collaborators [@ot; @oh; @ho]. Such a scenario is akin to taking the above case of having all galaxies drawn from the same distribution, but then merging the brightest of them with one or more of the others. From this simplistic model of the process, it is easy to see that this mechanism would help to solve the above problem, mostly by increasing the value of $\Delta_{12}$: the luminosity of the first brightest galaxy is driven up by the mergers, while the brightness of the surviving second brightest galaxy declines as luminous galaxies are merged out of existence. In the present paper, we explore this issue in light of the non-parametric model for the mass-luminosity relation presented in @paper2 (hereafter paper I; see also @paper1). The basic idea behind the non-parametric model is to adopt the simple proposition that more luminous galaxies are hosted in more massive haloes/subhaloes. No attempt at physical modelling is made and the association is made simply by matching one-to-one the rank ordered observational list of galaxies with the rank ordered computed list of haloes/subhaloes. We here extend this model by introducing scatter into it, and also by considering the possible effects of the total disruption of some subhaloes on the total luminosity associated with the halo. As is the case in HOD models (e.g., @bg [@zehavi; @zz]), this model naturally gives rise to a separation between central and satellite galaxies, by associating the former with the parent halo itself and the latter with the subhaloes associated with it. We analyze this issue in more detail, studying how it affects the cluster galaxy luminosity function and gives rise to a bright-end bump caused by the central galaxies. We then develop a model for the BCG luminosity distribution. Since the halo in fact arises from the union of subhaloes, this non-parametric model is a statistically well defined variant of the cannibalism scenario. This paper is organised as follows: in section 2, we give a brief summary of the non-parametric model relating galaxy luminosity with halo/subhalo mass presented in paper I, and introduce a simple recipe for checking the contribution of destroyed subhaloes to the halo mass, and how this changes our estimate of the total luminosity. In section 3, we introduce scatter into the non-parametric model by building a conditional luminosity function, where we consider two possibilities for it, either a simple lognormal shape or a better motivated approach involving the distribution of concentration for haloes of a given mass. In section 4, we explore in more depth how the model gives rise to a central/satellite galaxy separation, and show how this impacts the cluster galaxy luminosity function. In section 5, we build up a model for the distribution of cluster galaxies, based on the mass-luminosity relation and the halo/subhaloes separation which underpins it. In section 6, we present simple models to account for another two possible origins for the BCG distribution: first, we consider that all cluster galaxies are drawn from the same distribution; then we take a simple model for cannibalism, by merging two of the galaxies (the brightest plus one other) in the first example.
Finally, in section 7 we present the results of all models for the average magnitude of first and second brightest galaxies as well the dispersion of the former as a function of cluster luminosity. We then compare these results with observations. Throughout we have used a concordance cosmological model, with $\Omega_m=0.24$, $\Omega_\Lambda=0.76$, $h=0.735$ and $\sigma_8=0.74$ [@wmap]. The Mass-luminosity relation ============================ The work presented below is based on the non-parametric model for relating galaxy luminosity with halo/subhalo mass presented in paper I. The basic idea is that more massive haloes/subhaloes have deeper potential wells and will thus accrete more gas and subsequently will have more luminous galaxies forming within them. In effect, we take the relation between galaxy luminosity and halo/subhalo mass to be one to one and monotonic. An additional extra ingredient is necessary to maintain this approximation in the framework of the model, since subhaloes lose mass to the parent halo after accretion due to tidal interactions. Alternatively put, a halo is not simply the sum of the identifiable subhaloes within it due to tidal stripping. Therefore, we need to account for the mass of the subhaloes not at present, but that which they had at the time of their merger into the parent. The relation between mass and luminosity is then obtained statistically by matching the numbers of galaxies with the total number of hosts, that is, haloes plus subhaloes through their distributions. The halo abundance is given by the usual Sheth-Tormen mass function [@stmf]: $$\label{stmf} n_h(M) dM = A \Big( 1+\frac{1}{\nu^{2q}}\Big) \sqrt{\frac{2}{\pi}} \frac{\rho_m}{M} \frac{d\nu}{dM} {\rm exp}\Big(-\frac{\nu^2}{2}\Big) dM\, ,$$ with $\nu=\sqrt{a}\frac{\delta_c}{D(z) \sigma(M)}$, $a=0.707$, $A\approx 0.322$ and $q=0.3$; as usual, $\sigma(M)$ is the variance on the mass scale $M$, $D(z)$ is the growth factor, and $\delta_c$ is the linear threshold for spherical collapse, which in the case of a flat universe is $\delta_c=1.686$, with a small correction dependent on $\Omega_m$ ($\delta_c=1.673$ for $\Omega_m=0.24$). Following the discussion in paper I, we will assume a very simple model for the subhalo mass distribution within the parent. In terms of their original, pre-accretion mass, we assume that the subhalo distribution is given by a simple Schechter function: $$\label{shmf} N(m|M) dm = A(M) (m/\beta M)^{-\alpha} {\rm exp} (-m/\beta M) dm/\beta M \, ,$$ where the cutoff parameter $\beta=0.5$ serves to insure that no subhalo was larger than half the present mass of the parent (otherwise it would, by definition, be the parent). The slope $\alpha=1.9$ is set to the same value as is generally found for the present day subhalo mass function in simulations (e.g., @gao [@jochen; @vdbshmf; @zentner; @laurie]). The normalization $A(M)=1/\beta[\Gamma(2-\alpha)-\Gamma(2-\alpha,1)]$ is set so that the total mass originally in subhaloes corresponds to the present day mass (where the integration is done to an upper limit of $0.5M$). This approximation potentially ignores the problem of total disruption of some of the merged subhaloes, as can occur for example in the case of major mergers, by assuming that all of these subhaloes are still present and that therefore the total fraction of mass originally in subhaloes is one. 
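To make the subhalo term concrete, the following minimal Python sketch (an illustration only; the host mass and the minimum subhalo masses are arbitrary examples, and the Sheth-Tormen term is omitted since it requires $\sigma(M)$ and the growth factor) evaluates the subhalo mass function of equation (\[shmf\]) with the normalization quoted above, together with the mean number of subhaloes above a given pre-accretion mass.

```python
import numpy as np
from scipy.special import gamma, gammainc
from scipy.integrate import quad

alpha, beta = 1.9, 0.5    # slope and cutoff of the subhalo mass function, eq. (shmf)

def A_norm():
    # gamma(s) * gammainc(s, 1) is the lower incomplete gamma function, i.e.
    # Gamma(2 - alpha) - Gamma(2 - alpha, 1) in the notation used in the text
    s = 2.0 - alpha
    return 1.0 / (beta * gamma(s) * gammainc(s, 1.0))

def N_sub(m, M):
    """Pre-accretion subhalo mass function N(m|M) per unit subhalo mass."""
    x = m / (beta * M)
    return A_norm() * x**(-alpha) * np.exp(-x) / (beta * M)

def mean_N_above(m_min, M):
    """Average number of subhaloes with pre-accretion mass above m_min."""
    val, _ = quad(lambda m: N_sub(m, M), m_min, 0.5 * M, limit=200)
    return val

M_host = 1e14                                  # h^-1 M_sun, illustrative
print(mean_N_above(1e12, M_host))              # mean number above 1e12
print(mean_N_above(1e11, M_host))              # mean number above 1e11
```

By construction, integrating $m\,N(m|M)$ from $0$ up to $0.5M$ with this normalization returns the host mass $M$ itself.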
In @paper2, we showed that as long as this fraction is close to one, the resulting mass-luminosity relation is similar, with both the number of satellites in a halo and their total luminosity decreasing slightly. From the study of simulation results it is still not completely clear how to treat this complex issue, and no simple analytical models are available, so we explore a simple recipe to better account for this problem in the context of our model in section \[sect:destroyedsh\]. The galaxy distribution is given by the luminosity function, which we take to be the usual Schechter fit: $$\label{schechter} \phi_{obs}(L) dL = \phi_* \Big(\frac{L}{L_*}\Big)^{\alpha} {\rm exp}\Big(-\frac{L}{L_*}\Big) \frac{dL}{L_*} \, .$$ The values of the parameters will depend on the waveband used. In this paper, we use mostly the K-band luminosity function from the 2MASS survey, with parameters given by $\alpha=-1.09$, $\phi_*=1.16\times10^{-2} h^{3} {\rm Mpc^{-3}}$ and $M_*-5 {\rm log} h=-23.39$ [@2mass]. For comparison, we also use the $b_J$-band luminosity function from the 2dF survey, with $\alpha=-1.21$, $\phi_*=1.61\times10^{-2} h^{3} {\rm Mpc^{-3}}$ and $M_*-5 {\rm log} h=-19.66$ [@2df]. Also note that we are in fact extending these fits as necessary, including beyond the magnitude interval in which they were obtained. The basic mass-luminosity relation can then be obtained from these ingredients by a counting process, matching the numbers of galaxies at a given luminosity to the total number of hosts at a given mass: $$\label{mlrel} \int_L^\infty \phi(L')dL'=\int_M^\infty\big(n_h(M')+n_{sh}(M')\big)dM' \, ,$$ where the host contribution is separated into a halo term, $n_h(M)$, and a subhalo term obtained by summing up all the subhaloes at that mass, $n_{sh}(m)=\int_0^\infty N(m|M)n_h(M)dM$. An average relation between host mass and galaxy luminosity can then be built through this process, with results that match well with observations (see paper I for a detailed analysis). The resulting relation can be well fit by a double power law of the type: $$\label{mlfit} L_{ref}(M)=L_0\frac{(M/M_0)^a}{[1+(M/M_0)^{b k}]^{1/k}} \, ,$$ where the different parameters are shown in table \[mlfitparam\]; mass is in units of $h^{-1} {\rm M}_\odot$, luminosity in $h^{-2} {\rm L}_\odot$. The fit was done in the mass range $10^{11}$ ($3\times 10^{10}$ in the $b_J$-band case) to $3\times10^{15}$ $h^{-1} {\rm M}_\odot$.

\[mlfitparam\]

          K-band                $b_J$-band
  ------- --------------------- ----------------------
  $L_0$   $1.37\times10^{10}$   $4.12\times 10^9$
  $M_0$   $6.14\times10^9$      $1.66\times 10^{10}$
  $a$     21.03                 6.653
  $b$     20.74                 6.373
  $k$     0.0363                0.111

  : Fit parameters for the mass-luminosity relation.

![Mass-luminosity relation as obtained using the non-parametric model, in the K- and $b_J-$ bands. Upper panel shows galaxy luminosity normalized to the characteristic luminosity, $L^*$, of each band; lower panel shows the corresponding mass-to-light ratio.[]{data-label="mlfig"}](fig1a.ps "fig:"){width="84mm"} ![Mass-luminosity relation as obtained using the non-parametric model, in the K- and $b_J-$ bands. Upper panel shows galaxy luminosity normalized to the characteristic luminosity, $L^*$, of each band; lower panel shows the corresponding mass-to-light ratio.[]{data-label="mlfig"}](fig1b.ps "fig:"){width="84mm"} Figure \[mlfig\] shows the results for the luminosity of a single galaxy as a function of the mass of the hosting halo/subhalo, together with the corresponding mass-to-light ratio. Shown are curves for both the K- and $b_J$ bands.
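The fitted relation is straightforward to evaluate; the short sketch below (illustrative only, and not part of our actual pipeline; the example masses and the band keyword are our own choices) implements equation (\[mlfit\]) with the parameters of table \[mlfitparam\].

```python
import numpy as np

def L_ref(M, band="K"):
    """Double power-law fit of eq. (mlfit), with the K-band or b_J-band
    fit parameters; M in h^-1 M_sun, returned L in h^-2 L_sun."""
    if band == "K":
        L0, M0, a, b, k = 1.37e10, 6.14e9, 21.03, 20.74, 0.0363
    else:  # b_J band
        L0, M0, a, b, k = 4.12e9, 1.66e10, 6.653, 6.373, 0.111
    x = M / M0
    return L0 * x**a / (1.0 + x**(b * k))**(1.0 / k)

for M in (1e12, 1e13, 1e14, 1e15):   # within the fitted mass range
    print(f"M = {M:.0e}: L_K = {L_ref(M):.2e}, L_bJ = {L_ref(M, 'bJ'):.2e}")
```

For masses well above $M_0$ the fit behaves as $L\propto M^{a-b}$, so the luminosity grows only slowly with host mass at the high-mass end.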
We caution that the results for the $b_J$ band should be treated with some reserve. This counting method is not entirely adequate to get the mass-luminosity relation in the blue, due to complications arising from recent star formation, although it is still interesting to compare the differences obtained from using two different luminosity functions. Destroyed subhaloes and the subhalo mass fraction {#sect:destroyedsh} ------------------------------------------------- As mentioned, a potentially important correction to the non-parametric model in paper I is to account for subhaloes which have been completely destroyed. The evolution of the subhaloes and their eventual destruction, with the subsequent merger (or not) of their galaxy with the central one, is a very interesting topic by itself, which is still not completely understood but which is essential to a complete understanding of the formation of BCGs. However, such a detailed look at this question is beyond the scope of the present paper; here, we are merely interested in a simple model to account for how much mass was in these destroyed subhaloes, to correct the normalization of our original subhalo mass function. Our scheme is based on the fact that most of the luminosity of the central galaxy is built up by merging with the satellite galaxies brought in by these subhaloes. In other words, the central (BCG) optical galaxy is made up of the galaxies that have been "merged away", i.e. that have disappeared from the original distributions. This is consistent with what is known of the size, shape and colour properties of central galaxies. We therefore assume that the fraction of mass in these destroyed subhaloes (with respect to the total halo mass) is given by the ratio of the central galaxy luminosity to the total luminosity of the halo: $$\label{eq:destfracdef} f_{dest}=\frac{m_{dest}}{M}=\frac{L_{cent}}{L_{total}} \, .$$ For a given $L(M)$ relation, which sets the luminosity of both the central and satellite galaxies as a function of the halo/subhalo mass, the previous equation can then be solved for $f_{dest}$ as a function of halo mass, since the total luminosity depends only on $f_{dest}$ and the total mass: $L_{total}=L_{cent}(M)+(1-f_{dest})L_{sat,max}(M)$, where $L_{sat,max}$ is the maximum contribution of the satellites, obtained when $f_{dest}=0$. This is given by: $$\label{eq:lsatmax} L_{sat,max}(M)=\int_0^{0.5 M} L_{ref}(m) N(m|M) dm \, .$$ The upper panel of figure \[fig:fdest\] shows the destroyed mass fraction as a function of halo mass for our base mass-luminosity relation, given by equation (\[mlfit\]), while the bottom panel shows the effect on the total luminosity. As can be seen, this is most pronounced at the lower end of the mass scale shown, and becomes small enough to have little effect at high mass. ![Upper panel: total mass in subhaloes which have been completely disrupted, as a fraction of the total halo mass. Bottom panel: total luminosity as a function of halo mass with or without using the fraction of mass in destroyed subhaloes shown in the upper panel. Results are for the K-band, using the base mass-luminosity relation of equation (\[mlfit\]).[]{data-label="fig:fdest"}](fig2a.ps "fig:"){width="84mm"} ![Upper panel: total mass in subhaloes which have been completely disrupted, as a fraction of the total halo mass. Bottom panel: total luminosity as a function of halo mass with or without using the fraction of mass in destroyed subhaloes shown in the upper panel.
Results are for the K-band, using the base mass-luminosity relation of equation (\[mlfit\]).[]{data-label="fig:fdest"}](fig2b.ps "fig:"){width="84mm"} There are two additional factors that need to be noted. First, the introduction of this term can also have an effect on the actual mass-luminosity relation, since we are using a counting method to obtain it. However, for the mass range we are interested in, the number counts are dominated by the central galaxies and will therefore not be affected (see section 4). Likewise, the only haloes capable of hosting subhaloes large enough to be counted in this range are the most massive ones, for which the effect is smallest (see section 3). Secondly, in principle, this approach will also depend on the exact form of the mass-luminosity relation. However, for the small deviations from the base relation we will be considering in this paper, the effect on the total luminosity is small, since the variation to the base relation will be greatest at higher mass, where this effect is smallest. We will therefore, for simplicity, use this one result throughout the paper. Finally, it needs to be stressed that this is just a very simple approximation. The calculated factor is applied to the whole subhalo mass fraction as a correction to the normalization, without taking into account a possible dependence on subhalo mass. In particular, the situation with very low mass subhaloes is very uncertain in this scheme, since they are expected to be very faint, and have therefore very little weight in the sum of the total luminosity, while they can contribute an important fraction of the mass. Another important point is that in principle the luminosity of the galaxies that were contained in the destroyed subhaloes should be added to the central galaxy luminosity, since under this scheme we are assuming that these are merging. In practice, though, the light in these destroyed subhaloes is going to be small in comparison with the BCG in this model, since the largest fraction of destroyed subhaloes occurs for less massive haloes where the BCG is dominant. For simplicity, we will here ignore this contribution. Introducing scatter {#sect:scatter} =================== The conditional luminosity function ----------------------------------- In the context of the present paper, we need a more detailed model than the one described previously. Most importantly, it needs to include some kind of scatter in the mass-luminosity relation. Naturally, we expect that not all galaxies in hosts of the same mass will have the same luminosity. To capture this, we introduce a dispersion around the average relation describe above. We use the conditional luminosity function (CLF) formalism introduced by @yang (see also @vdb and references therein) and by Cooray and collaborators (e.g., @cooray [@coorayc] and references therein). This consists of replacing a deterministic mass-luminosity relation, like the one in equation (\[mlfit\]), with a distribution of luminosity around an average value for any given halo mass, $\phi_{CLF}(L|M)dL$, which represents the probability of having a galaxy of luminosity $L$ in a halo of mass $M$. Note that here we are only applying this to the central galaxy in any given halo, since that is the important one for the study of BCGs; the distribution of satellite galaxies we draw directly from the distribution of subhalo masses. An important point is that this CLF must, by definition, match the observed luminosity function when it is integrated over all haloes, i.e. 
$\int_0^\infty \phi_{CLF}(L|M) n(M) dM=\phi(L)$, where $n(M)$ is the halo mass function, $\phi(L)$ the observed luminosity function and $L$ should only be considered in the range where the haloes dominate the number of hosts (i.e., at high luminosity, which is precisely the range we are interested in when looking at BCGs;otherwise, we would also need to account for subhalo contribution). The introduction of scatter then leads to a problem with the mass-luminosity relation derived from the counting method, however. As noted by @iro, the fact that the mass function is decreasing with increasing mass causes an effect similar to the Malmquist bias: for any given mass bin, more objects are scattered into it from lower mass bins than are scattered out of it. If we then take our base mass-luminosity relation to be the average one in the CLF distribution, because of this effect we will end up with a calculated luminosity function that greatly overestimates the abundance of very bright galaxies when compared to the observed one. To get the correct matching to the observed luminosity function, it is then necessary to modify the average mass-luminosity function we take for the basis of the CLF. This is achieved by introducing an additional term, of the form: $$\label{eq:mlmod} L(M)=L_{ref}(M)(1+M/M_s)^a \, ,$$ where $L_{ref}(M)$ refers to the base mass-luminosity relation of equation (\[mlfit\]). In practice, $a$ is going to be negative since we need to lower the luminosity corresponding to any given high-mass halo in order to drive the value of our calculated luminosity function down. There is one final, potentially important point about this issue: once scatter is introduced, care must be taken when looking at the calculated average mass-luminosity relation. In our approach, the average luminosity at fixed mass needs to go down, relative to the scatter-less case or, looking at it the opposite way, the same average luminosity is obtained for higher mass haloes. This is due to the fact that, when considering the CLF, we are doing the binning by mass (or more precisely, taking the conditional variable in the distribution to be the mass). If we had instead binned by luminosity, the effect of introducing scatter would have been the opposite: the average mass correspoding to a given luminosity would instead have gone down. This is to be expected and is just a statistical effect of the two different ways in which the conditional function can be defined. It does however mean that care must be taken when comparing results of different authors to look at how the binning was done in each case. In this paper we will consider two different models for the CLF of the central galaxy: a simpler model where we assume the distribution is lognormal, but where we are left with a free parameter in the scatter introduced; and a more complicated one based on the distribution of concentrations at a given halo mass, which fully motivates the introduction of scatter in the CLF without any free parameters. For the satellite galaxies we will use the same modified mass-luminosity relation as well, since these were central galaxies within their own independent haloes prior to merging, so it is reasonable to expect the same effects to apply to them. From semi-analytical modelling, it has been shown that this is a good approximation, although a more careful treatment shows a slightly different relation for sattelites than for central galaxies [@wang]. 
But note that, when doing analytical calculations, using the subhalo mass function already introduces a form of distribution for the subhaloes as well (in that the mass of a given subhalo can be drawn from it; see section 5.1 for further discussion). Lognormal model --------------- The simplest CLF model we consider is to assume it has a lognormal form. This is similar to what was done previously by other authors [@cooray; @coorayc], and such a form seems a good match to the distribution of stellar mass obtained in semi-analytic modelling [@wang]. The problem with this approach is that there is no [*a priori*]{} reason to assume any specific value for the dispersion. Furthermore, this value is linked to the modified mass-luminosity relation of equation (\[eq:mlmod\]), so it needs to be defined in some way in order to determine the latter. We do this by determining which value of the dispersion leads to an average luminosity as a function of mass which best fits observational values. For simplicity, we will consider that the value of the dispersion, $\sigma_{LN}$, is constant and independent of mass. Since we are only interested in the bright end, where we expect central galaxies to dominate, and these are known to have only a small scatter in luminosity (e.g., @pl [@bernardi]), this is quite likely a good approximation. Semi-analytical modelling also shows that scatter in stellar mass is only a weak function of halo mass [@wang]. The luminosity $L$ of a galaxy in a host of mass $M$ is then given by a lognormal distribution of the type: $$\label{eq:lognormal} \phi_{CLF}(L|M)dL=\frac{1}{\sqrt{2\pi}\sigma_{LN} L}{\rm exp} \Big(-\frac{({\rm ln} [L/L_0(M)]+\sigma_{LN}^2/2)^2}{2 \sigma_{LN}^2} \Big) dL \, ,$$ where $L_0(M)$ is some average luminosity for a host of mass $M$ (discussed further below), and $\sigma_{LN}$ is the dispersion in the natural logarithm of the luminosity. Note that there is some confusion in the literature over the exact form defined for the lognormal distribution. First, it is necessary to pay attention to whether the distribution is in the natural logarithm or base 10 logarithm of the variable; the quoted value of the dispersion will be different in the two cases for the same distribution. Secondly, the way we have defined it in equation (\[eq:lognormal\]), the average luminosity is given by $L_0(M)$. This is due to the $\sigma_{LN}^2/2$ term added to ${\rm ln}[L/L_0(M)]$ in the exponent, which not all authors include; if it is omitted, $L_0$ would not be the average luminosity. The difficulty with using an approach such as this is that we are left with two unknowns we need to determine, the reference luminosity $L_0(M)$ and the dispersion $\sigma_{LN}$. The only condition we can impose on this distribution is that, when integrated over all hosts, the resulting luminosity function must match the observed one. Fitting this calculated luminosity function to the observed one then allows us to relate the two parameters we have when we take $L_0=L(M)$ from equation (\[eq:mlmod\]), $a$ and $M_s$, to the scatter $\sigma_{LN}$. However, this still leaves us with one free parameter which we cannot otherwise specify. In order to address this problem, we determine which value of the scatter gives us an average luminosity as a function of mass that best fits the observed data.
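As a quick consistency check of this parametrization, the sketch below (illustrative only; the reference luminosity, the grid and the dispersion value are arbitrary choices) verifies numerically that the distribution of equation (\[eq:lognormal\]) is normalized and that the $\sigma_{LN}^2/2$ shift does make $L_0(M)$ the mean.

```python
import numpy as np

sigma_ln = 0.26                      # illustrative dispersion in ln(L)

def phi_clf(L, L0):
    """Lognormal CLF of eq. (eq:lognormal); the +sigma^2/2 shift sets <L> = L0."""
    return (np.exp(-(np.log(L / L0) + 0.5 * sigma_ln**2)**2 / (2.0 * sigma_ln**2))
            / (np.sqrt(2.0 * np.pi) * sigma_ln * L))

L0 = 1.0e11                          # arbitrary reference luminosity, h^-2 L_sun
L = L0 * np.exp(np.linspace(-3.0, 3.0, 2001))   # grid spanning ~ +/-11 sigma in ln L
norm = np.trapz(phi_clf(L, L0), L)
mean = np.trapz(L * phi_clf(L, L0), L)
print(f"normalization = {norm:.4f}, <L>/L0 = {mean / L0:.4f}")   # both ~ 1
```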
Since our original mass-luminosity relation was already quite a good fit to the data (see paper I), we must necessarily have only a small correction to it (i.e., a small value of $a$), which in turn implies a small value for the scatter; this is qualitatively in good agreement with observations (see further discussion in section 7). Concentration model ------------------- The other model for the CLF of BCGs we consider is based on the variation of the concentration of haloes with the same mass. The basic idea behind this is that the distribution of concentration in same mass haloes will lead to different mass in the inner region of the halo where the galaxy will be present; the luminosity of the hosted galaxy will then simply be proportional to this mass. In practice, we calculate the mass-to-light ratio of this inner region, for the average concentration and with an average luminosity given by equation (\[eq:mlmod\]), and then calculate the change in luminosity as the concentration changes by assuming that the mass-to-light ratio is fixed (observationally, it has been noted that the dynamical mass-to-light ratio of BCGs is almost constant, e.g. @linden). This then gives us the BCG luminosity as a function of both concentration and halo mass. Although the model is conceptually simple, the details are problematic. The main issue is to determine what exactly is this inner region and how to calculate its mass. The most obvious solution, taking quoted values from the literature for BCG radius and its dependence on luminosity, is not really satisfactory since these are most often determined from isophotal limits and in the case of cD galaxies it would be necessary to further consider whether to include the envelopes; also, it is natural to assume that the actual region of influence for the dark matter is more extensive than the visible galaxy. Other options, such as some parameter from the dark matter structure, run into the problem of motivating what exactly it should be. In the end, after checking the results of several different possible models, we concluded that the one that has the best motivation and also gives the best results is to calculate the inner region mass by using a weighting function based on the luminosity profile of the BCGs. Since we only require it to construct our weighting function, for simplicity we assume that the luminosity profile of BCGs can be universally fit by a Sersic profile: $$\label{eq:sersic} I(r)= A\, {\rm exp}(-b_n[(r/r_e)^{1/n}-1])\, ,$$ where $A$ is a normalization factor, and $\Gamma(2n)=2\gamma(2n,b_n)$, where $\Gamma(2n)$ is the gamma function and $\gamma(2n,x)$ the lower incomplete gamma function; this can be well approximated by $b_n\approx 2n-0.327$ [@capaccioli]. It is known that there is a correlation between the profile parameters $n$ and $r_e$, and also between $r_e$ and the galaxy luminosity $L$, although the correlation between $n$ and $L$ is very weak (e.g., @graham). For simplicity, we will assume that we can relate both parameters to the galaxy luminosity; although in practice this is not really true, for our purposes here it is a sufficient, if rough, approximation. Based on results from the literature [@graham; @lm], we use the following relations: $$\label{eq:nsersic} n=2.8694{\rm log}(r_e)+2.0661 \, ,$$ $$\label{eq:resersic} {\rm log}(r_e)=0.9523{\rm log}(L)-9.1447 \, ,$$ where $L$ is the K-band luminosity; these are similar to what is reported by other authors (e.g., @bernardi).
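A minimal sketch of these profile ingredients is given below (for illustration only; the example luminosity is an arbitrary choice, and the radii are expressed in units of the effective radius so that the length units implied by equation (\[eq:resersic\]) do not matter).

```python
import numpy as np

def sersic_params(L):
    """n and r_e from the K-band luminosity, eqs. (eq:nsersic)-(eq:resersic)."""
    log_re = 0.9523 * np.log10(L) - 9.1447
    n = 2.8694 * log_re + 2.0661
    return n, 10.0**log_re

def sersic_profile(r, L):
    """Un-normalized Sersic profile of eq. (eq:sersic), with the
    Capaccioli approximation b_n ~ 2n - 0.327."""
    n, r_e = sersic_params(L)
    b_n = 2.0 * n - 0.327
    return np.exp(-b_n * ((r / r_e)**(1.0 / n) - 1.0))

L_bcg = 5e11                         # illustrative K-band BCG luminosity, h^-2 L_sun
n, r_e = sersic_params(L_bcg)
r = r_e * np.array([0.1, 0.5, 1.0, 2.0, 5.0])
print(f"n = {n:.1f}, r_e = {r_e:.1f}")
print(sersic_profile(r, L_bcg))      # a slowly declining, high-n (cD-like) profile
```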
Our weighting function is then given not by the actual profile, but rather the integration factor for the luminosity, $w(r)=r I(r)$, and the normalization $A$ chosen so that $\int_0^{r_{vir}}w(r)dr=1$. Finally, the inner region mass is simply obtained by integrating the mass density times the weighting function: $$\label{eq:innermass} M_{inner}=\int_0^{r_{vir}}4\pi r^2 w(r) \rho(r) dr \, .$$ We use the usual NFW profile for the density [@nfw], $$\label{nfw} \rho=\frac{\rho_1}{x(x+1)^2} \, ,$$ where $x=r/r_s$, with $r_s=r_{vir}/c_{vir}$ and the virial radius is $r_{vir}=(3 M/4 \pi \Delta_{vir} \bar{\rho})^{1/3}$ and $\Delta_{vir}=387$; $\rho_1$ is normalized to give the halo mass at the virial radius. For the concentration distribution, we take the model of @bullock (see also @maccio, who get similar results). This relates the average concentration of a halo of mass $M$ with the scale factor at its collapse, $a_c$, given by: $$\label{acoll} \sigma(f M)=\frac{\delta_c}{D(a_c)} \, ,$$ where $\sigma$ is, as usual, the variance of the linearly extrapolated power spectrum of perturbations, $D(a)$ is the growth factor and $\delta_c=1.673$ the linear threshold for collapse; $f=0.001$ is a parameter. The concentration is then given by $c_{vir}=k/a_c$, with the parameter $k=3$. Finally, the distribution of the concentration is given by a lognormal distribution with average $c_{vir}$ and variance $\sigma[{\rm log}(c_{vir})]\sim 0.18$. The BCG luminosity distribution is then obtained from the concentration distribution by $\phi(L|M)=f(c|M)/(dL/dc)$, where $f(c|M)$ is the concentration distribution as a function of halo mass. As mentioned, the luminosity for any given concentration and halo mass is given by $$\label{eq:lumconc} L(c,M)=\frac{M_{inner}}{(M/L)_0} \, ,$$ where $(M/L)_0$ is the mass-to-light ratio with the inner mass calculated at the average concentration and the luminosity given by our mass-luminosity relation, equation (\[eq:mlmod\]), and $M_{inner}$ is given by equation (\[eq:innermass\]). Finally, we fit our calculated luminosity function to the observed one, in order to determine the parameters that go into the modified mass-luminosity relation (see table 2). Note that both @bullock and @maccio find that subhaloes tend to have higher concentrations than parent haloes of the same mass. Although it goes beyond the scope of the present work, it is wortwhile to mention that in the framework of the model just presented, this can possibly lead to slightly different distributions of luminosity as function of mass for the subhaloes, although this is most likely complicated by the fact that we need to take the subhalo properties at accretion rather than at present. \[sigmamodparam\] model $\sigma_{LN}$ $a$ $M_s$ --------------- --------------- ------- ------------- concentration N/A -0.08 $10^{13.5}$ lognormal 0.265 -0.07 $10^{13}$ Table 2 shows the values we obtain for the parameters of the modified mass-luminosity relation of equation (\[eq:mlmod\]). Figure \[fig:clfdistribution\] shows examples of the actual distribution we obtain for the BCG luminosity, both for the lognormal model and for the concentration model. 
![Examples of the calculated distribution for the BCG luminosity, for both the lognormal and concentration models, and for two different values of the halo mass.[]{data-label="fig:clfdistribution"}](fig3.ps){width="84mm"} Central vs satellite galaxies ============================= As was briefly mentioned above, the way we build up the mass-luminosity relation naturally gives rise to a model for clusters, featuring a distinct separation between central and satellite galaxies. This comes from the fact that we consider that galaxies are hosted by both the parent halo and the subhaloes. Since we consider that the same mass-luminosity relation applies for both, and the former will be, by definition, considerably more massive than the latter, this results in there being a central, very luminous galaxy, hosted by the parent halo, while fainter satellite galaxies are spread throughout in the subhaloes. Of particular importance for the question of whether the first brightest galaxies are special, this separation implies that these galaxies should indeed have a special luminosity distribution, independent from that of the remaining galaxies in the cluster. This comes from the fact that the distribution functions of these two types of galaxies will be different in origin: the central galaxies one will be determined by the halo mass function, while the satellite galaxies one will depend on the subhalo mass function. This dichotomy is also found in HOD models, for instance when accounting for total galaxy occupation number (e.g., @yanghod [@zz]): while $P(N|M)$, the probability that a halo of mass $M$ hosts $N$ galaxies, is Poissonian at high $N$, where satellite galaxy numbers dominate, it is significantly sub-Poissonian at low $N$, indicating that the distribution of the central galaxies is much more deterministic. This has an important consequence, derived from the fact that, at high mass, and therefore also at high luminosity, the total mass function is dominated by the haloes, not subhaloes. This means that, when analysing the luminosity function of galaxies in clusters, the brightest region will be dominated by the central galaxies, which will actually be more abundant overall than the brightest of the satellite galaxies. We then expect that this will cause a feature in the cluster galaxy luminosity function at the bright end; this point will be examined in further detail below. Another consequence is that we expect the luminosity of the central galaxy in the cluster to be completely determined by halo mass and its distribution, without the need to account for surviving subhaloes. It is in fact possible to derive the different contributions to the global luminosity function from the central and satellite galaxies, by associating them with the halo and subhalo distributions, respectively, and then using the CLF formalism presented in the previous section: $$\label{xform} \phi_i(L)dL=\int_0^\infty \phi_{CLF}(L|M) n_i(M)dM \, ,$$ where the $i$ indexes refer to the haloes and subhaloes, respectively. The derived luminosity functions for central and satellite galaxies are shown in figure \[centrallffig\]. Unsurprisingly, the central galaxies completely dominate the overall luminosity function at high luminosity, with their numbers becoming comparable to the satellites only at low luminosity. This simply reflects the trends seen in the halo and subhalo numbers (see paper I). 
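The decomposition of equation (\[xform\]) is easy to evaluate on a grid. The sketch below is only a schematic stand-in for our actual calculation: the halo mass function used here is a toy power law with an exponential cutoff rather than the Sheth-Tormen function, the scatter value is illustrative, and only the central (halo) term is computed; the satellite term would use $n_{sh}(M)$ in its place.

```python
import numpy as np

def n_halo(M):
    """Toy dn/dM (power law with exponential cutoff), arbitrary normalization."""
    return (M / 1e13)**(-1.9) * np.exp(-M / 3e15) / M

def L_of_M(M):
    """Base K-band mass-luminosity relation, eq. (mlfit), table 1 parameters."""
    L0, M0, a, b, k = 1.37e10, 6.14e9, 21.03, 20.74, 0.0363
    x = M / M0
    return L0 * x**a / (1.0 + x**(b * k))**(1.0 / k)

def phi_clf(L, M, sig=0.26):
    """Lognormal CLF whose mean is L_of_M(M)."""
    mu = L_of_M(M)
    return (np.exp(-(np.log(L / mu) + 0.5 * sig**2)**2 / (2.0 * sig**2))
            / (np.sqrt(2.0 * np.pi) * sig * L))

M = np.logspace(12.0, 16.0, 800)     # halo masses, h^-1 M_sun
L = np.logspace(10.8, 12.3, 25)      # bright-end luminosities, h^-2 L_sun
phi_cent = np.array([np.trapz(phi_clf(Li, M) * n_halo(M), M) for Li in L])
for Li, pi in zip(L[::6], phi_cent[::6]):
    print(f"L = {Li:.2e}   phi_central ~ {pi:.3e}")   # shape only, steeply falling
```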
The expected relative contributions of both types of galaxies are still uncertain: while @bfb using their semi-analytical modelling find satellite galaxies to dominate at the faint end, @coorayb using a conditional luminosity function formalism find that central galaxies dominate throughout the range (likewise, @zz find that central galaxies dominate the stellar mass function on any mass scale). ![Contribution to the high-end luminosity function of central and satellite galaxies. These particular curves are for the $K$-band, lognormal model, but the results are similar for the concentration model. It is very noticeable that the halo numbers dominate in this luminosity range. The relative satellite contribution to the total luminosity function is qualitatively similar to what is found by other authors [@vdb].[]{data-label="centrallffig"}](fig4.ps){width="84mm"} Definition of cluster threshold mass {#sect:clmass} ------------------------------------ A necessary first step before continuing this analysis is to define precisely what we mean by a ”cluster”. We will opt for a simple choice, following the standard Abell definition of rich cluster, namely that it must have upwards of 30 objects brighter than $m_3+2^m$, where $m_3$ is the magnitude of the third brightest galaxy in the cluster. It is then possible, following our model, to translate this into a minimum mass threshold for a halo to host a cluster, as follows. The third brightest galaxy will correspond to the second most massive subhalo (since the brightest galaxy is hosted by the parent halo itself), and the probability of this having a mass $m_2$ is then: $$\label{2ndmass} P_2(m_{s,2},M_h)=N(m_{s,2}|M_h) <N> {\rm e}^{-<N>} \, ,$$ where $N(m_{s,2}|M_h)$ is the mass distribution function of the subhaloes, equation (\[shmf\]), and $<N>=\int_{m_{s,2}}^\infty N(m'|M_h) dm'$ is the average number of subhaloes more massive than $m$ in a parent halo of mass $M_h$. This expression assumes that the distribution of subhalo masses is Poissonian with average $<N>$, as expected for the subhaloes (e.g., @kravtsov; since we are looking at cluster sized haloes, $<N>$ will be large in this case), and it is simply the product of the probability of having a subhalo with mass $m_2$, given by the first term on the right hand side, by the Poisson probability of having exactly one subhalo more massive than $m_2$. Using the fact that $d<N>/dm=-N(m_s|M_h)$, it is easy to check that this probability is well normalized to 1. The average value of the magnitude corresponding to this subhalo, $m_3$, can then be calculated from the distribution by: $$\label{2ndmassavg} <m_{3}(M_h)>=\int_0^\infty m(m_s) P_2(m_s,M_h) dm_s \, ,$$ where $m(m_s)$ represents the corresponding magnitude as a function of the subhalo mass, calculated using the mass-luminosity relation from section 2. In this instance, we have used the simpler relation of equation (\[mlfit\]), since it greatly simplifies the calculations and using the full CLF results in only a slight difference. Using this magnitude we can then obtain $m_3+2^m$, and then convert this back into a mass threshold, $m_t(M_h)$, which will be dependent on the parent halo mass. Finally, we can find the probability, as a function of $M_h$, that $N(m_t|M_h)\geq 29$ (giving more than 30 objects above the magnitude limit, when including the central galaxy). 
Since we are assuming that the subhalo distribution is Poissonian, this will simply be given by $\sum _{n=29}^{\infty} P(n,\mu )$, where $P(n,\mu )$ is the normal Poisson probability with average $\mu=<N(m_t|M_h)>$. This gives a smooth transition for the mass of cluster hosting haloes, shown in figure \[clustprob\], starting around a halo mass of $M_h=10^{14} h^{-1} {\rm M_\odot}$, but which depends on the luminosity function considered. ![Probability that a halo of mass $M$ contains more than 30 galaxies brighter than $m_3+2^m$ and is considered a rich cluster according to the usual Abell definition. Using either the $b_J$ or the $K$ bands to do the counting results in the two different curves.[]{data-label="clustprob"}](fig5.ps){width="84mm"} Cluster galaxy luminosity function ---------------------------------- Once we have a mass threshold for haloes hosting clusters, obtaining the cluster galaxy luminosity function is straightforward: we simply sum up the mass function of haloes above this threshold with the mass function of all the subhaloes hosted by them (i.e., use equation (\[shmf\]), but further multiplied by a term to reflect the probability that the halo does indeed host a rich cluster, given by the result shown in figure \[clustprob\]). Then, we transform this into a luminosity function using the CLF. Our result is shown in figure \[cglffig\]. ![Luminosity function of galaxies in rich clusters, in the $K$-band. The two curves correspond to the two models for the dispersion used, described in the previous section. Also shown, for comparison, is the global LF. Only galaxies above the mass threshold shown in figure \[clustprob\] were considered.[]{data-label="cglffig"}](fig6.ps){width="84mm"} Qualitatively, we obtain a good agreement with observed luminosity functions (such as the one of @2dfcluster, although in this particular case a direct comparison is difficult because these results are in a different band; redoing our analysis in the same band produces a good match), particularly in the lower luminosity range. At the bright end, there is some disagreement caused by a particular feature of our result, a bump in the luminosity function at the bright end. It is simple to understand that this bump is caused by the central galaxies. The discrepancy in numbers between central and satellite galaxies comes from the fact that, at high luminosity, the contribution from parent haloes to the total luminosity distribution (shown in figure \[centrallffig\]) completely dominates over the subhalo one. This is a reflection of the fact that haloes are much more abundant at the high mass end than subhaloes (see paper I). In fact, it can be seen from the figure that these central galaxies essentially correspond to the high luminosity end of the global luminosity function. This is hardly surprising, since we can expect the most luminous galaxies to lie at the centre of the most massive clusters. Such a feature is thus a natural consequence of the model: very luminous galaxies will predominantely be central galaxies of high mass haloes, which will therefore dominate in number over satellite galaxies of the same luminosity; at the same time, since we introduce a lower mass limit to rich cluster hosting haloes, the faint end contribution to the luminosity function of galaxies in such clusters will come entirely from satellites. 
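For reference, the probability that a halo qualifies as a rich cluster, used both for figure \[clustprob\] and for the cluster luminosity function above, reduces to a simple Poisson tail sum. The short sketch below evaluates it for a few placeholder values of $\mu=\left<N(m_t|M_h)\right>$; the actual $\mu(M_h)$ follows from the $m_3+2^m$ threshold calculation of section \[sect:clmass\] and is not recomputed here.

```python
from scipy.stats import poisson

def p_rich_cluster(mu):
    """Probability that a halo with <N(m_t|M_h)> = mu hosts at least 29 subhaloes
    above the m_3 + 2 magnitude threshold, i.e. at least 30 such galaxies once
    the central galaxy is included (Abell-type richness criterion)."""
    return poisson.sf(28, mu)        # P(N >= 29) for a Poisson-distributed N

for mu in (15, 22, 29, 36, 45):      # placeholder values of <N(m_t|M_h)>
    print(mu, round(p_rich_cluster(mu), 3))
```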
It is important to note that this bright-end bump is not a particularity of this specific model: any model associating central galaxies with parent haloes and satellite galaxies with subhaloes will show a similar feature, due to the discrepancy in numbers between the two at high mass (though it may also require that this be associated with high luminosity in both cases; or, more particularly, that the same mass-luminosity relation is used for both haloes and subhaloes, as is the case in the model used here). A note of caution comes from the fact that the shape of the bump depends on the dispersion in the CLF: the smaller it is, the sharper the bump will be. Furthermore, the actual shape of the bump is determined by the cluster definition being used, through the cluster threshold mass discussed in the previous section. This cutoff mass is responsible for the decreasing values of the cluster galaxy luminosity function on the left side of the bump; a lower threshold mass would result in a wider bump. Taking this threshold to lower and lower values (beyond the range where it would be reasonable to assume the presence of clusters) results in the progressive disappearance of the bump as we naturally regain the overall global luminosity function. It should be stressed, however, that the presence of a bump is a fundamental prediction of the model, independent of the precise cluster definition being used, since it is a direct consequence of the discrepancy in numbers between haloes and subhaloes at high mass. This kind of feature is also present in some recent work dealing with HOD models and the central/satellite galaxy separation. In @zz (see also @zhu), the authors use a semi-analytical model of galaxy formation to obtain the conditional galaxy baryonic mass function. For high mass haloes in the range we are considering here, they also obtain a high-mass bump in this function caused by the central galaxy. They show that the baryonic mass function can be described by combining a Schechter function representing the satellite galaxy contribution with a high-mass Gaussian due to the central galaxy. Likewise, @zehavi build up HOD models from SDSS results, and analyze the central/satellite galaxy split; from this, they build conditional luminosity functions, and show that their results imply that the central galaxies lie far above a Schechter function extrapolation of the satellite population. The observational work also hints at similar features: for example, it is known that cD galaxies are brighter than what is given by the bright end of the cluster galaxy luminosity function (e.g., @cdreview). This is also present in studies of the luminosity function of galaxies in clusters (e.g., @2pigg). All this once again reinforces the notion that central galaxies form a special distribution, essentially separate from that of the satellite galaxies. Building the BCG luminosity distribution ======================================== As discussed above, the non-parametric model used naturally builds a picture of galaxy clusters. This translates into a procedure to build up the total luminosity distribution of galaxies within a halo of a given mass. This will be composed of two steps, one dealing with the satellite galaxies in the subhaloes, another with the central galaxy. Satellites ---------- In this step, we need to sum up the total luminosity in the satellite galaxies contained in the halo.
We start by taking the total number of subhaloes in a given halo, as calculated from the SHMF (that is, the occupation number; see paper I) , as an average number for a parent halo of this mass, taking into account the effect of destroyed subhaloes as introduced in section 2.1. In this step it is necessary to specify a minimum mass: we take a low enough value to ensure that we account for all subhaloes massive enough to give a noticeable contribution to the total luminosity of the halo. We then assume that the total number of subhaloes follows a Poisson distribution (as discussed above; e.g., @kravtsov). For each subhalo, up to a total as calculated from the Poisson distribution, we determine a mass, by assuming the subhaloes follow a random distribution given by the subhalo mass function (\[shmf\]). Then we convert this to the luminosity of the hosted galaxy using the mass luminosity relation. Finally, once we have the total number of subhaloes (as determined initially from the Poisson distribution), we can sum all of their calculated luminosities to obtain the total in satellite galaxies. At the same time, the average luminosity of the brightest satellite galaxy can be calculated in a more direct fashion. Using the subhalo mass distribution, we can get the probability distribution of the mass of the most massive subhalo. Analogously to what was done in the previous section for the second most massive subhalo (see equation \[2ndmass\]), this will be given by $$\label{1stmass} P_1(m_1,M_h)=N(m_1|M_h) {\rm e}^{-<N>} \, ,$$ where $<N>=\int_m^\infty N(m'|M_h) dm'$ as before. Used together with the mass luminosity relation, the average luminosity of the galaxy hosted in the most massive subhalo (and therefore the most luminous of the satellites) is simply given by $<L_1>(M_h)=\int_0^\infty L(m_1) P_1(m_1,M_h) dm_1$. Central galaxy {#centbuild} -------------- The way we have built up the CLF gives us a natural way of obtaining the distribution of the luminosity of first brightest galaxies with cluster luminosity, since we are already introducing a distribution with mass. We use the results of both the lognormal and concentration models for comparison. The total luminosity is obtained by summing over the BCG and satellite contributions, and taking into account the effect of the destroyed subhaloes from section 2.1. The average total luminosity at any given mass is simply the sum of the average luminosities of the BCG and satellites, and is, by construction, equal to the one shown in figure \[fig:fdest\]. Finally, we can also obtain the global distribution of BCG luminosity over all clusters. To do this, we use the cluster threshold mass, as calculated in the previous section, and simply integrate the conditional distribution we have multiplied by the halo mass function: $$\label{bcgglobal} f(L_1)=\int_0^\infty f(L_1|M) n(M) p(M) dM \, ,$$ where $n(M)$ is the halo mass function, given by (\[stmf\]), and $p(M)$ is the probability that a halo of mass $M$ hosts a rich cluster, as given in figure \[clustprob\]. Other models for BCGs ===================== In this section, we introduce two other simple models to complement the one presented above in order to allow better comparisons with the observational results. The first, and, a priori, best motivated is based on the simple assumption that the BCGs are merely the extreme values of the unique distribution that applies to all cluster galaxies. 
We do this based on a regular cluster galaxy luminosity function, and build the BCG distribution directly from it. The second is based on assuming some form of cannibalism. We use a simple approach to model this mechanism, that of merging the brightest galaxy with one other, where both are taken from the universal distribution of the first case we consider. Extremes of a general distribution {#xtremes} ---------------------------------- The simplest assumption possible when studying the distribution of the galaxies in a cluster is that they are all drawn from the same statistical distribution (for example, a Schechter function for luminosity). The galaxies in a cluster would then simply be a random sample drawn from this distribution, with the brightest galaxy simply the extreme value of this sample. This approach has been studied in the literature before (e.g., @tr). In particular, it has been shown that this approach leads to a result from extreme value theory, the Gumbel distribution, for the overall distribution of the magnitudes of the first brightest galaxy [@bb84; @bb]. This is given by $$\label{gumbel} f(M)=a e^{a(M-M^*)-e^{a(M-M^*)}} \, ,$$ with $M^*=M_G+\frac{0.577}{a}$, where $M_G$ is the mean of the magnitude values and $a$ is a measure of the steepness of fall of the parent distribution. In the present paper, we take a slightly different approach, in order to match that which we will take for the other models. This model and subsequent calculations are similar to the ones presented in @tr, although some details, like the luminosity function used, will be different. We begin by assuming that the parent distribution of the luminosity of the galaxies in a cluster is given by a Schechter function, equation (\[schechter\]) (for simplicity, we use the same values for $M_*$ and the faint end slope, $\alpha$, as the global luminosity function). The only dependence on the actual cluster considered comes in the normalization, which we set so that the total luminosity in all the galaxies equals the cluster luminosity, $L_c$: $$\label{clusternorm} \phi_*(L_c)=\frac{L_c}{\Gamma[2+\alpha] L_*} \, .$$ We then take the value of the distribution of the first brightest galaxy at a given luminosity to be the probability that there are no galaxies brighter than that luminosity, times the probability that there is a galaxy at that luminosity. The latter is simply given by (\[schechter\]), while for the former we take a Poisson fluctuation around the average number of galaxies obtained by integrating the parent distribution, (\[schechter\]). Thus, the luminosity distribution for the brightest cluster galaxy is given by: $$\label{dist_1bg} f_1(L,L_c)=\phi(L,L_c) e^{-\int_L^\infty \phi(L',L_c)dL'} \, ,$$ where the integral in the exponential can be resolved to $\int_L^\infty \phi(L')dL'=\phi_*(L_c) \Gamma[1+\alpha,L/L_*]$. Using the same principles as discussed in section \[sect:clmass\] above, it is simple to show that this probability distribution is adequately normalized to 1. Similarly, the probability for the luminosity of the second brightest galaxy is given by the product of the probability of having a galaxy at a given luminosity by the probability of there being a single galaxy brighter than that luminosity. In general, and again taking Poisson fluctuations around the average number, the distribution of the n-th brightest galaxy (i.e., the one with $n-1$ galaxies brighter than it) will be given by: $$\label{dist_n} f_n(L,L_c)=\phi(L,L_c) \big(\int_L^\infty \phi(L')dL'\big)^{n-1} e^{-\int_L^\infty \phi(L')dL'}/(n-1)!
\, .$$ Likewise, the joint probability of having the first brightest galaxy at luminosity $L_1$ and the second at $L_2$ (given by the product of the probability of a galaxy at $L_2$, another at $L_1$, none in between and none above $L_1$) can also be derived: $$\label{dist_12bg} f_{12}(L_1,L_2,L_c)=\phi(L_1,L_c) \phi(L_2,L_c) e^{-\int_{L_2}^\infty \phi(L)dL} \, .$$ Once we have these distributions, it is then easy to obtain the various statistics we are interested in: the average magnitude of the first and second brightest galaxies, $<m_1>$ and $<m_2>$, the dispersion in first brightest galaxy magnitude, $\sigma_1$, and the average magnitude difference between these two, $<\Delta_{12}>=<m_1-m_2>$. Some caution is necessary with the last one, since the fact that $L_1$ and $L_2$ are not independent variables would necessitate the use of the joint distribution; however, it turns out that the integrals over this joint distribution resolve to two different integrals over the two separate distributions, so that $<\Delta_{12}>=<m_1>-<m_2>$ as calculated for each separate, individual distribution. Cannibalism ----------- To illustrate the effect of galactic cannibalism, we consider here two simple models, starting with a common distribution like the one discussed in the previous section, and then merging the brightest galaxy with one of the others. In general, the distribution of the new BCG will have to be taken from the joint distribution of the previous first and n-th brightest galaxies. It is important to note that these variables are not independent, and as such when calculating the average of the sum it will not, in principle, be possible to separate it into integrals over the two individual distributions (except in the particular case when $n=2$, as noted above). ### $L_1+L_2$ The simplest possible model to consider when taking account of cannibalism is to merge the two brightest galaxies from a common distribution. Thus, the luminosity of the brightest galaxy is now given by the sum of the luminosities of the previous two brightest galaxies, with a distribution given by the joint distribution of equation (\[dist\_12bg\]). The second brightest galaxy is now the old third brightest, with a distribution given by equation (\[dist\_n\]), with $n=3$. The remaining calculations follow the same pattern as discussed in the previous case. It is to be expected that, due to the fact that the new first brightest luminosity is the result of the sum of two previous variables, the dispersion in first brightest luminosity will be slightly lower than before. Likewise, it is obvious that the difference in magnitude between first and second brightest galaxies will now be much larger. ### $L_1+L_n$ A slightly more realistic toy model to illustrate galactic cannibalism is to proceed similarly as described above, but instead of merging the first brightest galaxy with the second, to take some weighting function to reflect the probability that the merger will occur with any one other of the galaxies in the cluster. We take the probability of the merger occurring with the n-th brightest galaxy to be proportional to its average luminosity, $L_n$; this gives greater weight to the first few brightest galaxies, but since the average luminosity decreases only slowly with $n$, the probability is split over a wide range. We consider a possible merger down to the 30th brightest galaxy in the cluster, for which the merger probability given by this prescription will be below 1%.
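Before moving on, the statistical (extreme value) baseline of section \[xtremes\] can be evaluated in a few lines of code. The sketch below is illustrative only: the cluster luminosity, the assumed $L_*$ value and the luminosity grid are arbitrary choices, the Schechter normalization is imposed numerically rather than through the Gamma function, and magnitudes are defined only up to an arbitrary zero point (which cancels in $\Delta_{12}$ and $\sigma_1$).

```python
import numpy as np
from math import factorial

alpha, L_star = -1.09, 4.0e10        # K-band faint-end slope and an assumed L_*
L_c = 1.0e12                         # illustrative total cluster luminosity

L = np.logspace(8.0, 12.5, 6000)
shape = (L / L_star)**alpha * np.exp(-L / L_star) / L_star
phi = shape * L_c / np.trapz(L * shape, L)       # total light normalized to L_c

# N(>L): counts integrated downward from the bright end
seg = 0.5 * (phi[1:] + phi[:-1]) * np.diff(L)
N_above = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])

def f_rank(k):
    """Distribution of the k-th brightest galaxy (k - 1 galaxies brighter)."""
    return phi * N_above**(k - 1) * np.exp(-N_above) / factorial(k - 1)

def mag_stats(f):
    m = -2.5 * np.log10(L)                       # magnitudes up to a zero point
    norm = np.trapz(f, L)
    mean = np.trapz(m * f, L) / norm
    var = np.trapz((m - mean)**2 * f, L) / norm
    return mean, np.sqrt(var)

m1, s1 = mag_stats(f_rank(1))
m2, _ = mag_stats(f_rank(2))
print(f"Delta12 = {m2 - m1:.2f} mag, sigma1 = {s1:.2f} mag, r = {(m2 - m1)/s1:.2f}")
```

As expected for a common parent distribution, the resulting $r$ stays below unity; the cannibalism variants described above then modify these order statistics accordingly.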
The total probability distribution of the new BCG luminosity is obtained from the joint probability distribution of the different weighted pairs. Since the variables are not independent and therefore this joint distribution cannot be split into a product of terms each dependent only on a single variable, to calculate $<m_1>$, $<m_2>$ and $\sigma_1$ it is necessary to solve complicated integrations. We turn instead to a Monte Carlo method, building up the distribution of the new BCG luminosity by randomly generating merger pairs and the luminosity of their components, and then summing them. The new second brightest galaxy will be either the old one, or the old third brightest, depending on whether the merger occurred with the former or not. Results ======= We start by showing the results for average magnitude of first and second brightest galaxies as a function of total cluster luminosity, for each of the models, in figures \[m1avgfig\] and \[m2avgfig\]. The first noticeable thing is that the curves for both the concentration and lognormal models are very similar, which is not too surprising given the similar mass-luminosity relations obtained for each of them (see table 2), even though they were built in independent ways. This is in fact a success for the concentration model, since the lognormal model is by construction made to provide the best fit to the observed BCG magnitude, while the concentration model is not. Comparing the different sets of curves, it is easy to see the differences between the models. The statistical distribution has the lowest average BCG luminosity, which is considerably higher for the other models. This comes from the fact that all other models take the BCG distribution to be special, like an additional distribution added on to the base Schechter distribution of the satellite galaxies. Looking at the average magnitude of the second brightest galaxy, the most obvious factor is that the one from the more extreme cannibalism model is much lower than the others; once again, this is unsurprising since this is essentially the third brightest galaxy in the statistical model. Likewise, the 1+n cannibalism model gives essentially the same result as the statistical one, since the second brightest galaxy is the same in both in most cases. Both of our models give considerably brighter values for the second brightest galaxy. This probably indicates that the luminosity function we used to generate the statistical and cannibalism models does not have enough bright galaxies (i.e., $L_*$ is too low), which is not very surprising since we are using the parameters of the global one. On the other hand, if this were to be the case then it is quite probable that the cannibalism models would then give BCGs which are too bright. ![Average magnitudes, in the $K$-band, of the first brightest galaxy, as a function of the total cluster luminosity, as calculated for the different models: the ones based on the CLF formalism introduced, the statistical model and two forms of cannibalism: the extreme $L_1+L_2$, and the softer $L_1+L_n$. The data points shown are binned values of cluster galaxy data supplied by Yen-Ting Lin (2006, private communication).[]{data-label="m1avgfig"}](fig7.ps){width="84mm"} ![Same as figure \[m1avgfig\], but for the second brightest galaxy in the cluster.[]{data-label="m2avgfig"}](fig8.ps){width="84mm"} In any case, this problem should be less of an issue when looking at the magnitude gap $\Delta_{12}$, shown in figure \[delta12fig\]. 
The values for the cannibalism 1+2 model are obviously much too high to match the observed ones: it is clear that merging the two brightest galaxies leaves too big a gap to the next brightest. The values for the statistical model are too low, in this case probably indicating that it is the BCG which is too faint. The values for the other, 1+n, cannibalism model look rather better, while both of our models show good agreement with the observations. But note the size of the errorbars in the observational data: the scatter in the observed values is quite large (see also @ls [@vdb]). Since the scatter in the BCG magnitude is small, most of the scatter here in $\Delta_{12}$ is likely coming from the second brightest galaxy. The shape seen in the curves of figure \[delta12fig\] is unsurprising: the fact that $\Delta_{12}$ is increasing as the total luminosity goes down is a natural consequence of the decreasing number of satellites at this lower end. In fact, these correspond to haloes of only a few times $10^{13} h^{-1} {\rm M_\odot}$, and therefore many of these systems will not actually even be clusters (cf figure \[clustprob\]). ![Values for the average magnitude difference between first and second brightest galaxies, $\Delta_{12}$. The data points come are again binned values from the catalogue supplied by Yen-Ting Lin. The large errorbars reflect the fact that the scatter in the values of $\Delta_{12}$ in these clusters is very large. For comparison, another observational value for $\Delta_{12}$, but in the $B$-band, is of $\approx 0.55$ magnitudes [@sgh]. The values we obtain are also qualitatively consistent with the results of @milos [@ls].[]{data-label="delta12fig"}](fig9.ps){width="84mm"} Figure \[sigfig\] shows the calculated dispersion in the magnitude of the first brightest galaxy. Binning the Lin observational data results in values of $\sigma_1$ of 0.3 to 0.4 magnitudes. A direct comparison of this value with the model results shows that the values we obtain for our model are slightly too low, while the values for the statistical and cannibalism models look reasonable. There is, however, an additional point to bear in mind: these observational values are obtained by binning over a certain cluster luminosity range. Part of this observed scatter then simply comes from the fact that the average magnitude is changing as a function of this. When taking this into account, the values we get from our model are in fact in good agreement with the observed ones. Also, using Bayes’ theorem, it is possible to derive values for the dispersion in the mass of the hosting halo for a given galaxy luminosity. Doing so we obtain values which are in qualitative agreement with the ones found by @vdb, although a direct comparison is complicated by the fact that they use $b_J$-band instead of $K$-band for the galaxy luminosity. ![Dispersion in the magnitude of the first brightest galaxy. The data set used to plot the data points in the previous figures gives a value of between 0.3 and 0.4 magnitudes for the statistical error in each cluster luminosity bin. Another observationally measured dispersion in first brightest magnitude, in the $B$-band, is $\sigma_1\simeq0.24$ [@pl]. 
Using a CLF approach similar to ours, @coorayc obtains a value of $\sigma_1=0.17$ in the r-band.[]{data-label="sigfig"}](fig10.ps){width="84mm"} Conclusion ========== In the present paper we focus on the implications of the mass luminosity relation first introduced in paper I on the properties of clusters, and more in particular on whether BCGs are statistical or special in nature. We expand the model of paper I by introducing scatter into the relation, building a CLF based on two different models: a simple lognormal shape or a model based on the distribution of concentrations in haloes of a given mass. We also introduce a simple model to evaluate the mass fraction in subhaloes which have been completely disrupted since they were accreted. We have argued that this model naturally gives a separation between central and satellite galaxies in a cluster, with the former having a distinct distribution based on the halo mass function. We have shown that this leads to a characteristic bump in the cluster galaxy luminosity function, qualitatively similar to that seen in some observational work and some semi-analytical HOD based models. This is caused by the fact that, at any given high mass, haloes are considerably more abundant than subhaloes, coupled with the fact that, in any single system, central galaxies will be considerably brighter than the satellites (since the subhaloes are much less massive than the parent halo). Together with this, the faint end is completely determined by the satellite galaxies in the subhaloes, since we put in a minimum mass threshold for the haloes we consider host clusters. Finally, to look at the question of the nature of BCGs, we study some statistical indicators that may provide a clue to this problem, namely the ratio $r$ between the magnitude difference between first and second brightest galaxies in a cluster, $\Delta_{12}$, and the dispersion of the former, $\sigma_1$. We also introduce two simple models to account for two different possibilities that are usually considered: the statistical hypothesis, that is, that all galaxies in a cluster, including the BCG, are drawn from the same distribution; and galactic cannibalism, where the BCG grows by merging with other galaxies in the cluster. As is already known, any model of the former gives a value for $r$ which is too small compared to what is observed. At the same time, we show that the simplest case of the latter, that of merging the two brightest galaxies from a common distribution, gives a value of $\Delta_{12}$ which is far too large. From the results we obtain, it is possible to draw some answers to the issue discussed in the introduction about the nature of BCGs. The statistical hypothesis, which assumes that BCGs are drawn from the same universal distribution, can be ruled out. It gives values of $\sigma_1$ which are too big and $\Delta_{12}$ which are too small. On the other hand, the simplistic model we analyse for galactic cannibalism looks to be far too extreme. Mainly, it gives values of $\Delta_{12}$ which are far too large compared to what is observed. This model mimics the scenario of two similar sized clusters merging, with a final distribution of galaxies which can be well fit by a single distribution, but where the BCG is then built up by merging the two BCGs of the original clusters. 
From our results, we expect that if such a scenario does occur, a merging of the brightest galaxies is excluded, as it leaves too bright a BCG and too large a gap to the second brightest galaxy (but see the discussion in @lm). A more general cannibalistic scenario, where the BCG is built up by merging with one other galaxy, is however not excluded, nor is the possibility of minor mergers, where one of the merging systems is much smaller than the other such that its brightest galaxy will not be the second brightest galaxy in the resulting cluster. It is worth noting that this fits in well with the results of recent semi-analytic simulations done by @lucia, who find that BCG growth occurs fast enough that major mergers are relatively rare, and that most of the later growth is through minor mergers. Our model gives results which can be regarded as halfway between the other two: the BCG is in fact the product of a special distribution and is considerably brighter than what would result from a single distribution, but the second brightest galaxy is not as faint as in the cannibalistic scenario. The question remains, however, of what BCG formation scenario is expected in this case. A satisfactory answer, in the framework of the way the mass luminosity relation is built, would require going back to the simulations and following the behaviour of the halo and its subhaloes. There is an important assumption which has been implicit throughout this paper: that both the central and satellite galaxies follow the same mass luminosity relation. This may indeed appear to be contradictory with the possibility that the central galaxies are special, since it may seem to imply that they are formed through the same processes. On the other hand, the model is based on structure build-up through the merger of dark matter haloes, and we assume that the satellite galaxies were formed in their own independent haloes, prior to being accreted into their present parent halo. This original halo would be the one that determines their properties, since the galaxy stops accreting gas or undergoing mergers of its own once its halo merges into the parent system and it becomes a satellite (and hence the need for some mass loss prescription). Therefore, it is to be expected that at least in some cases, they would have been a BCG in their own system themselves, and therefore it may not be unreasonable to assign them the same mass luminosity relation. Still, this leaves out the very important factor that the formation epoch may well be different, together with subsequent growth of the BCG in the parent system. At the same time, halo numbers completely dominate at high mass (and therefore it is to be expected the same is true of central galaxy numbers at high luminosity), and consequently the total average mass luminosity relation should be pretty much the same as calculated. We should stress the fact that our concentration model gives quite good results, since unlike the lognormal model, it does not involve any fitting of parameters to BCG results. In fact, it is quite interesting that it gives values for the dispersion in BCG magnitude so close to the ones from the lognormal model, when the latter was set to the value that results in the best fit to the observed BCG magnitudes.
In light of the recent discussion about the effects of additional parameters, such as environmental density, halo formation time or concentration (e.g., @wechsler [@berlind]), on the clustering of halos and subsequently on the HOD, this seems to indicate that taking the halo mass as the primary determinant of the hosted galaxy luminosity, and then taking concentration as a secondary variable which determines the scatter around the average value is a good way to proceed. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank Yen-Ting Lin for making his data on cluster galaxy luminosities available to us. AV acknowledges financial support from Fundação para a Ciência e Tecnologia (Portugal), under grant SFRH/BD/2989/2000. [99]{} Benson A. J., Frenk C. S., Baugh C. M., Cole S., Lacey C. G., 2003, MNRAS, 343, 679 Bhavsar S.P., 1989, ApJ, 338, 718 Bhavsar S.P., Barrow J.D., 1985, MNRAS, 213, 857 Berlind A. A., Weinberg D. H., 2002, ApJ, 575,587 Berlind A.A., Kazin E., Blanton M.R., Pueblas S., Scoccimarro R., Hogg D.W., astro-ph/0610524, submitted to ApJ Bernardi M., Hyde J.B., Sheth R.K., Miller C.J., Nichol R.C., 2006, ApJ, in press, astro-ph/0607117 Bernstein J.P., Bhavsar S.P., 2000, MNRAS Binggeli B., Sandage A., Tammann G.A., 1988, ARA&A, 26, 509 Bullock, J. S., Kolatt, T. S., Sigad, Y., Somerville, R. S., Kravtsov, A. V., Klypin, A. A., Primack, J. R., Dekel, A., 2001, MNRAS, 321, 559 Capaccioli M., 1989, in The World of Galaxies, ed. H.G. Corwin& L. Bottinelli, Springer, Berlin, 208 Colless M., 1989, MNRAS, 237, 799 Conroy C., Wechsler R.H., Kravtsov A.V., 2006, ApJ, 647, 201 Cooray A., 2006, MNRAS, 365, 842 Cooray A., Milosavljević M., 2005a, ApJ, 627, L85 Cooray A., Milosavljević M., 2005b, ApJ, 627, L89 De Lucia G., Blaizot J., 2006, MNRAS, accepted, astro-ph/0606519 De Propris R. et al., 2003, MNRAS, 342, 725 Dressler A., 1978, ApJ, 222, 23 Eke V.R. et al., 2004, MNRAS, 355, 769 Gao L., White S.D.M., Jenkins A., Stoehr F., Springel V., MNRAS, 355, 819 Graham A., Lauer T.R., Colless M., Postman M., 1996, ApJ, 465, 534 Hausman M.A., Ostriker J.P., 1978, ApJ, 224, 320 Hoessel J.G., Gunn J.E., Thuan T.X., 1980, ApJ, 241, 486 Hoessel J.G., Schneider D.P., 1985, AJ, 90, 1648 Kochanek C. S., et al., 2001, ApJ, 560, 566 Kravtsov A. V., Berlind A. A., Wechsler R. H., Klypin A. A., Gottl[ö]{}ber A., Allgood B., Primack J. R., 2004, ApJ, 609, 35 Lin Y., Mohr J.J., 2004, ApJ, 617, 879 Lin Y., Mohr J.J., Stanford S.A., 2004, ApJ, 610, 745 von der Linden A., Best P.N., Kauffmann G., White S.D.M., 2006, astro-ph/0611196, submitted to MNRAS Loh Y., Strauss M.A., 2006, MNRAS, 366, 373 Macciò A.V., Dutton A.A., van den Bosch F.C., Moore B., Potter D., Stadel J., 2006, astro-ph/0608157, submitted to MNRAS Milosavljević M., Miller C.J., Furlanetto S.R., Cooray A., 2006, ApJ, 637, L9 Navarro J. F., Frenk C. S., White S. D. M., 1997, ApJ, 490, 493 Norberg P. et al., 2002, MNRAS, 336, 907 Ostriker J.P., Hausman M. A., 1977, ApJ, 217, L125 Ostriker J.P., Tremaine S.D., 1975, ApJ, 202, 113 Peebles, P.J.E., 1968, ApJ, 153, 13 Postman M., Lauer T.R., 1995, ApJ, 440, 28 Sandage, A., 1972, ApJ, 178, 1 Schneider D. P., Gunn J. E., Hoessel J. G., 1983, ApJ, 268, 476 Shaw L., Weller J., Ostriker J.P., Bode P., 2006, ApJ, 646, 815 Sheth R. K., Tormen G., MNRAS, 1999, 308, 119 Spergel D. N. et al., 2006, astro-ph/0603449, submitted to ApJ Tasitsiomi A., Kravtsov A. V., Wechsler R. H., Primack J. R., 2004, ApJ. 
614, 533 Tremaine S.D., Richstone D.O., 1977, ApJ, 212, 311 Vale A., Ostriker J. P., 2004, MNRAS, 353, 189 Vale A., Ostriker J. P., 2006, MNRAS, 371, 1173 (Paper I) van den Bosch F. C., Tormen G., Giocoli C., 2005, MNRAS, 359, 1029 van den Bosch F.C. et al., 2006, astro-ph/0610686, submitted to MNRAS Wang L., Li C., Kauffmann G., De Lucia G., 2006, MNRAS, 371, 537 Wechsler R.H., Zentner A.R., Bullock J.S., Kravtsov A.V., 2006, ApJ, in press, astro-ph/0512416 Weller J., Ostriker J. P., Bode P., Shaw L., 2005, MNRAS, 364, 823 Yagi M., Kashikawa N., Sekiguchi M., Doi M., Yasuda N., Shimasaku K., Okamura S., 2002, AJ, 123, 87 Yang X.H., Mo H.J., van den Bosch F.C., 2003, MNRAS, 339, 1057 Yang X.H., Mo H.J., Jing Y.P., van den Bosch F.C., 2005, MNRAS, 358, 217 Zehavi I. et al., 2005, ApJ, 630, 1 Zentner A.R., Berlind A.A., Bullock J.S., Kravtsov A.V., Wechsler R.H., 2005, ApJ, 624, 505 Zheng Z. et al., 2005, ApJ, 633, 791 Zhu G., Zheng Z., Lin, W.P., Jing Y.P., Kang X., Gao L., 2006, astro-ph/0601120, submitted to ApJ [^1]: E-mail: avale@fisica.ist.utl.pt
--- abstract: 'We report on the results of a three-year program of coordinated X-ray and optical monitoring of the narrow-line Seyfert 1 galaxy NGC 4051. The rapid continuum variations observed in the X-ray spectra are not detected in the optical, although the [*time-averaged*]{} X-ray and optical continuum fluxes are well-correlated. Variations in the flux of the broad H$\beta$ line are found to lag behind the optical continuum variations by 6 days (with an uncertainty of 2–3 days), and combining this with the line width yields a virial mass estimate of $\sim1.1 \times 10^6\,\Msun$, at the very low end of the distribution of AGN masses measured by line reverberation. Strong variability of 686 is also detected, and the response time measured is similar to that of H$\beta$, but with a much larger uncertainty. The 686 line is almost five times broader than H$\beta$, and it is strongly blueward asymmetric, as are the high-ionization UV lines recorded in archival spectra of NGC 4051. The data are consistent with the Balmer lines arising in a low to moderate inclination disk-like configuration, and the high-ionization lines arising in an outflowing wind, of which we observe preferentially the near side. Previous observations of the narrow-line region morphology of this source suggest that the system is inclined by $\sim50\deg$, and if this is applicable to the broad H$\beta$-emitting region, a central mass of $\sim1.4 \times 10^6\,\Msun$ can be inferred. During the third year of monitoring, both the X-ray continuum and the 686 line went into extremely low states, although the optical continuum and the broad H$\beta$ line were both still present and variable. We suggest that the inner part of the accretion disk may have gone into an advection-dominated state, yielding little radiation from the hotter inner disk.' author: - 'B.M. Peterson, I.M. McHardy, B.J. Wilkes, P. Berlind, R. Bertram, M. Calkins, S.J. Collier, J.P. Huchra, S. Mathur, I. Papadakis, J. Peters, R.W. Pogge, P. Romano, S. Tokarz, P. Uttley, M. Vestergaard, and R.M. Wagner' title: 'X-Ray and Optical Variability in NGC 4051 and the Nature of Narrow-Line Seyfert 1 Galaxies' --- 686[4686 He[ii]{}$\lambda4686$]{} ø5007[\[O[iii]{}\]$\lambda5007$]{} Introduction ============ The broad components of permitted emission lines in the spectra of active galactic nuclei (AGNs) typically have velocity widths of a few to several thousands of kilometers per second. The defining characteristic of the subclass of AGNs known as narrow-line Seyfert 1 galaxies (NLS1s) is that the broad components of their emission lines are much narrower ($\vFWHM \ltsim 2000$ km s$^{-1}$) than is typical for broad-line objects (Osterbrock & Pogge 1985). NLS1s are extreme AGNs in other respects as well — their UV–optical properties correlate well with the Boroson & Green (1992) primary spectral eigenvector identified in principal component analysis. In other words, NLS1 classification correlates well with strong optical Fe[ii]{} emission and weak \[O[iii]{}\]$\lambda\lambda4959$, 5007 emission (Boller, Brandt & Fink 1996). While the possible importance of such extreme AGNs has been recognized for two decades (Davidson & Kinman 1978), interest in NLS1s has increased recently as their unusual X-ray properties have come to light: they have unusually steep soft and hard X-ray spectra (Puchnarewicz et al. 1992; Boller, Brandt, & Fink 1996; Brandt, Mathur, & Elvis 1997) and undergo rapid non-linear variability (Boller et al. 1997).
While rapid, large amplitude variability in the UV–optical has not been reported, existing data do not address well the relationship between the X-ray and long-wavelength variations. A good compilation of the observed properties of NLS1s is given by Taniguchi, Murayama, & Nagao (1999). Possible explanations for the narrowness of the permitted lines include the following: 1. NLS1s have more distant-than-normal BLRs (Mason, Puchnarewicz, & Jones 1996; Wandel & Boller 1998). The line widths are attributed to virial motion ($M \approx V^2r/G$) in the vicinity of a supermassive black hole, but the orbital velocities are smaller than in more typical AGNs because of greater distance to the central source. 2. NLS1s are low-inclination (i.e., nearly face-on) systems (Osterbrock & Pogge 1985). In this model, the line widths are again due to orbital motion around the central black hole, and the bulk of the broad-line region (BLR) gas orbits in a common plane that is almost perpendicular to the line of sight, leading to relatively small Doppler widths. 3. NLS1s have relatively low black-hole masses, but high accretion rates. Again, the basic assumption is that the BLR motions are virial, but the central source has a lower mass. The luminosity can be kept relatively high by supposing that the accretion rate (relative to the Eddington rate) is correspondingly high in these sources. The third of these explanations forms the current paradigm for NLS1s, as it explains not only the narrow emission lines, but possibly also the steepness of the soft X-ray spectrum: the temperature structure of an optically thick, geometrically thin accretion disk is given by $$T(r) = 6.3 \times 10^5 \left(\dot{M}/\dot{M}_{\rm Edd}\right)^{1/4} M_8^{-1/4} \left(r/R_{\rm S} \right)^{-3/4}\ {\rm K,}$$ where $M_8$ is the black hole mass in units of $10^8\,M_{\odot}$ and $R_{\rm S}$ is the Schwarzschild radius (Shakura & Sunyaev 1973). The temperature of the inner regions of the disk scales like $M^{-1/4}$, so the strength of the soft X-rays in NLS1s might plausibly be ascribed to a low central mass and a high accretion-rate flow. One way to distinguish among various explanations of the NLS1 phenomenon is to measure the size of the BLR, which can be done via reverberation-mapping techniques (Blandford & McKee 1982; Peterson 1993; Netzer & Peterson 1997). It is usually assumed that the variability of an emission-line $L(t)$ to a variable continuum $C(t)$ can be linearized to the form $$\label{eq:TF} \delta L(t) = \int \Psi(\tau)\ \delta C(t-\tau)\ d\tau,$$ where $\Psi(\tau)$ is the “transfer function,” which depends on the geometry and reprocessing physics of the BLR. A representative time scale for response of a line can be found by a simple cross-correlation of the continuum and emission-line light curves. By convolving eq. (\[eq:TF\]) with $C(t)$, one obtains the cross-correlation function $$\CCF(\tau) = \int \Psi(\tau') \ACF(\tau-\tau')\ d\tau',$$ where $\ACF(\tau)$ is the continuum autocorrelation function (Penston 1991; Peterson 1993). The cross correlation lag $\tau$ can be taken to be the light-travel time across the BLR, so the BLR size is given by $r=c\tau$. By combining this with the emission-line width $\vFWHM$, the mass of the central source can be inferred to be $$M = \frac{f \vFWHM^2 c\tau}{G},$$ where $f$ is a factor of order unity that depends on the still unknown geometry and kinematics of the BLR. 
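For orientation only, the short sketch below (not part of the original analysis) evaluates the virial product $\vFWHM^2\,c\tau/G$ appearing in eq. (4); the lag and line width plugged in are simply the H$\beta$ values reported later in this paper ($\tau \approx 6$ light days, $\vFWHM \approx 1100$ km s$^{-1}$), and multiplying by an order-unity geometric factor $f$ then gives a central mass of order $10^6\,\Msun$, consistent with the estimate quoted below.

```python
# Order-of-magnitude check of the virial product c*tau*v_FWHM^2 / G from
# eq. (4), using the H-beta numbers reported later in the paper
# (lag ~ 6 light days, FWHM ~ 1100 km/s).  Multiplying by the order-unity
# geometric factor f gives a black-hole mass of order 10^6 solar masses.
G_CGS = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
C_CGS = 2.998e10      # speed of light [cm s^-1]
M_SUN = 1.989e33      # solar mass [g]
DAY_S = 86400.0       # seconds per day

def virial_product_msun(lag_days, fwhm_kms):
    """Return c * tau * v_FWHM^2 / G in solar masses (eq. 4 without f)."""
    r_blr = C_CGS * lag_days * DAY_S        # BLR radius estimate, r = c*tau [cm]
    v2 = (fwhm_kms * 1.0e5) ** 2            # squared line width [cm^2 s^-2]
    return v2 * r_blr / G_CGS / M_SUN

print(f"virial product ~ {virial_product_msun(6.0, 1100.0):.2e} M_sun")
```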
There is an implicit assumption that the gravitational force of the central object dominates the kinematics of the BLR; this is formally unproven, but at least in the case of the well-studied Seyfert 1 galaxy NGC 5548, the reverberation-mapping data are consistent with the required $\vFWHM \propto r^{-1/2}$ relationship (Peterson & Wandel 1999). Virial mass estimates based on reverberation-mapping data are now available for nearly 40 AGNs (Wandel, Peterson, & Malkan 1999; Kaspi et al. 2000). Beginning in early 1996, we undertook a program of contemporaneous X-ray and optical spectroscopic monitoring of the galaxy NGC 4051, the only NLS1 galaxy in Seyfert’s (1943) original list of high surface-brightness galaxies with strong emission lines. The X-ray variability characteristics of NGC 4051 are typical of the NLS1 class (Lawrence et al. 1987; McHardy et al. 1995). X-ray observations were made with the [*Rossi X-Ray Timing Explorer (RXTE)*]{}, and optical spectra were obtained with ground-based telescopes, as described below. The purpose of this program has been twofold: 1. To determine the nature of the relationship between the X-ray and UV–optical continuum variations. This is a particularly interesting question in the case of NGC 4051 since the X-ray flux dropped to an extremely low level in 1998 May (Uttley et al. 1999), towards the end of this campaign. 2. To determine the BLR size and virial mass via reverberation techniques. In this contribution, we present the results of this program, and discuss their implications for the nature of the NLS1 phenomenon. Observations and Data Analysis ============================== X-Ray Observations ------------------ The X-ray observations were made with the large area (0.7 m$^{2}$) Proportional Counter Array (PCA) on [*RXTE*]{} (Bradt, Rothschild & Swank 1993). The observations shown here, which are part of a continuing monitoring program, cover the period from 1996 March to 1998 June. The observations were scheduled initially to cover the largest range of variability time scales with the smallest number of observations in order to determine the X-ray power spectrum efficiently. The program therefore consisted of observations approximately every 7 to 10 days throughout the first year, followed by observations approximately every 2 weeks thereafter. In addition, during the first year, there were two more intensive monitoring periods: a two-week period of twice-daily observations and a four-week period of daily observations. Each observation typically lasted 1 ksec. In 1996 December, there was also a period of 3 days during which NGC 4051 was observed for a total of 70 ksec. The PCA consists of 5 proportional counter units (PCUs) but typically only 3 PCUs (numbers 0, 1 and 2) were operational and all count rates refer to the total counts from those 3 PCUs. Where other than 3 PCUs were in operation, the count rate has been normalized to 3 PCUs. The PCUs have 3 layers. Here we only include data from the upper layer as this layer provides the highest signal-to-noise ratio for photons in the energy range 2–20 keV where the flux from NGC 4051 is strongest. We used standard “good time interval” (GTI) criteria to select data with the lowest background. We reject data obtained when the Earth elevation angle was less than 10$^{\circ}$, when the pointing offset from the source was $>0.02^{\circ}$, or during passage through the South Atlantic Anomaly, or up to 5 minutes afterwards.
The PCA is a non-imaging device with a field of view of FWHM $\sim1^\circ$ and so the background which we subtract must be calculated from a model. Here we use the FTOOL routine PCABACKEST V2.0c, with the new “L7” model, to calculate the background. The resultant 2–10 keV lightcurve is shown in the top panel of Fig. 1. Further details of the X-ray light curves and variability are given by Papadakis et al. (2000). For NGC 4051, 10 counts s$^{-1}$ (2–10 keV), from 3 PCUs, corresponds to a flux of $4 \times 10^{-11}$ ergs cm$^{-2}$ s$^{-1}$. Optical Spectroscopy -------------------- Optical spectroscopic observations were obtained between UT 1996 January 12 (Julian Date = JD2450095) and 1998 July 28 (JD2451022), covering three separate observing seasons. Observations were made with the Ohio State CCD Spectrograph on the 1.8-m Perkins Telescope of the Ohio State and Ohio Wesleyan Universities at the Lowell Observatory (data set A) and with the FAST spectrograph (Fabricant et al. 1998) on the 1.5-m Tillinghast Reflector of the Center for Astrophysics on Mt. Hopkins (data set B). A log of these observations is presented in Table 1. Column (1) gives the UT date of each observation and the Julian Date is given in column (2). The origin of the data (set A or B) appears in column (3). The projected size of the spectrograph entrance aperture was $5\arcsecpoint0\times7\arcsecpoint5$ (i.e., a slit width of $5\arcsecpoint0$ and a cross-dispersion extraction window of $7\arcsecpoint5$) for all set A spectra and $3\arcsecpoint0\times4\arcsecpoint6$ for all set B spectra. In each case, the slit was oriented in the east-west direction (i.e., the slit position angle was always 90). The nominal spectral resolution was 9Å for all set A spectra and 5Å for all set B spectra. The wavelength coverage of each spectrum is given in column (4), and the file name is given in column (5). All of these spectra are publicly available on the International AGN Watch site on the World-Wide Web[^1]. The spectroscopic images were processed in standard fashion for CCD frames, including bias subtraction, dark-count correction when necessary, flat-field correction, wavelength calibration, and flux calibration based on standard-star observations. Since even under photometric conditions, AGN spectrophotometry is rarely more accurate than $\sim$10%, the usual technique of flux calibration by comparison with standard stars is far too poor for AGN variability studies. We therefore base our flux calibration on a scale defined by the observed flux in the prominent narrow \[O[iii]{}\]$\lambda\lambda4959,$ 5007 doublet. These lines originate in a low-density region that is more spatially extended than the BLR or the continuum source, which for practical purposes can be regarded as point sources. The larger light-travel time and long recombination time ensure that any narrow-line variations will occur only over much longer time scales than of interest in this experiment. We therefore assume that the \[O[iii]{}\] lines are constant in flux, and use these to scale each spectrum. All spectra are scaled to a constant flux of $F(\mbox{\o5007}) = (3.91 \pm 0.12) \times 10^{-13}$, which is based on the mean of ten spectra from data set A that were obtained under photometric observing conditions during the 1996 observing season. The ø5007 flux measured from photometric spectra from subsequent years supports our assumption that this value can be assumed to be constant over the time scales of interest. 
All of the spectra are adjusted in flux to have this value of $F(\mbox{\o5007})$ by employing the spectral scaling software described by van Groningen & Wanders (1992). The process is as follows: spectra are adjusted in flux by a multiplicative constant that is determined by comparing each spectrum to a “reference spectrum” that has been formed by averaging all the highest-quality (i.e., typically signal-to-noise ratios $S/N \gtsim 30$) spectra and scaling this mean spectrum to the adopted ø5007 flux. All individual spectra are scaled relative to the reference spectrum in a least-squares fashion that minimizes the \[\] residuals in the difference spectrum produced by subtracting the reference spectrum from each individual spectrum. This program also corrects for small zero-point wavelength-calibration errors between the individual spectra, and takes resolution differences into account. At this point, measurements of each of the spectra are made. The continuum flux at $\sim5100$Å (in the rest frame of NGC 4051, $z = 0.002418$, based on 21-cm emission \[deVaucouleurs et al. 1991\]) is determined by averaging the flux in the 5090–5120Å bandpass (in the observed frame). The  emission-line flux is measured by assuming a linear underlying continuum between $\sim4772$Å and $\sim5105$Å, and integrating the flux above this continuum between 4820Å and 4910Å (all wavelengths in the observed frame). The long-wavelength cutoff of this integration band is chosen to avoid the  contamination underneath \[\]$\lambda4959$. We also note that no attempt has been made to correct for contamination of the line measurement by the [*narrow-line*]{} component of , which is of course expected to be constant. We have also measured the flux in the 686 line. Only set B spectra are suitable for this measurement, since set A spectra do not extend shortward far enough to provide a suitable short-wavelength continuum point. The  flux was measured by adopting a linear underlying continuum between $\sim4447$Åand $\sim4775$Å, and integrating the flux above this continuum between 4613Å and 4772Å. We then compare the independent light curves from the two sets of data to identify small systematic flux differences between the sets, as we have done in many previous experiments (see Peterson et al. 1999 and references therein). We attribute these small relative flux offsets to aperture effects, although the procedure we use also corrects for other unidentified systematic differences between data sets. We define a point-source correction factor $\varphi$ by the equation $$\label{eq:defphi} F(\Hbeta)_{\rm true} = \varphi F(\Hbeta)_{\rm observed}.$$ This factor accounts for the fact that different apertures result in different amounts of light loss for the point-spread function (which describes the surface-brightness distribution of both the broad lines and the AGN continuum source) and the partially extended narrow-line region (NLR). After correcting for aperture effects on the point-spread function to narrow-line ratio, another correction needs to be applied to adjust for the different amounts of starlight admitted by different apertures. An extended source correction $G$ is thus defined as $$\label{eq:defG} F_{\lambda}(5100\,{\textstyle {\rm \AA}})_{\rm true} = \varphi F_{\lambda}(5100\,{\textstyle {\rm \AA}})_{\rm observed} - G.$$ The value of $G$ is essentially the nominal difference in the contaminating host-galaxy flux between the two spectrograph entrance apertures employed. 
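As a concrete illustration of the band integration described above (applied before the aperture corrections of eqs. \[eq:defphi\] and \[eq:defG\]), the fragment below integrates a line flux above a linear pseudo-continuum. It is only a minimal sketch: the `wave` and `flux` arrays are hypothetical stand-ins for a single calibrated spectrum, and the small anchor windows around the quoted continuum wavelengths are an assumption of this illustration.

```python
import numpy as np

def line_flux(wave, flux, cont_windows, line_band):
    """Integrate an emission-line flux above a linear pseudo-continuum.

    cont_windows : two (lo, hi) wavelength windows anchoring the continuum
    line_band    : (lo, hi) window over which the line is integrated
    """
    # mean flux density and mean wavelength in each continuum anchor window
    pts = []
    for lo, hi in cont_windows:
        m = (wave >= lo) & (wave <= hi)
        pts.append((wave[m].mean(), flux[m].mean()))
    (w1, f1), (w2, f2) = pts
    cont = f1 + (f2 - f1) / (w2 - w1) * (wave - w1)   # linear continuum

    m = (wave >= line_band[0]) & (wave <= line_band[1])
    return np.trapz(flux[m] - cont[m], wave[m])       # flux above the continuum

# H-beta, using the observed-frame values quoted in the text (continuum
# anchors near 4772 A and 5105 A, integration band 4820-4910 A);
# the +/- 10 A anchor windows are assumed here for illustration only.
# f_hbeta = line_flux(wave, flux, [(4762, 4782), (5095, 5115)], (4820, 4910))
```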
This intercalibration procedure is accomplished by comparing pairs of nearly simultaneous observations from the two data sets to determine $\varphi$ and $G$. In practice, the interval which we define as “nearly simultaneous” is two days or less, which means that in principle any real variability that occurs on time scales this short tends to be somewhat suppressed by the process that allows us to merge the two data sets. In this case, the adjustment has very little impact on the final results because nearly all of the pairs of data separated by two days or less are from data set B and thus have not been adjusted relative to one another. We find that the best-fit constants for set B relative to set A are $\varphi = 0.982 \pm 0.048$ and $G = (-1.243 \pm 0.729) \times 10^{-15}$. Both the H$\beta$ and 686 fluxes are adjusted as in eq. (\[eq:defphi\]); the fact that the factor $\varphi$ is so close to unity indicates that most of the NLR in this galaxy arises within a few arcseconds of the nucleus. The final continuum $F_{\lambda}$(5100Å) and H$\beta$ emission-line fluxes are given in Table 2. Simultaneous (to within 0.1 day) measurements were averaged, weighted by the reciprocal of their variances. Analysis ======== Continuum Variability --------------------- The light curves listed in Table 2 are plotted in Fig. 1, along with contemporaneous 2–10keV X-ray light curves. The data shown here span three observing seasons, beginning in 1996 January and ending in 1998 July. A summary of the general variability characteristics is given in Table 3. For the complete data base and for individual subsets of the data as given in column (1), columns (2) – (4) give respectively the number of individual observations and the average and median time intervals between them. The mean flux is given in column (5), and columns (6) and (7) give two widely used measures of the variability, $F_{\rm var}$, the root-mean-square (rms) fractional variability corrected for measurement error, as defined by Rodríguez-Pascual et al. (1997), and $R_{\rm max}$, the ratio of maximum to minimum flux, respectively. Both the $F_{\rm var}$ and $R_{\rm max}$ parameters are affected by contamination of the measured quantities by constant-flux components; the optical continuum values are somewhat diluted by the constant contribution of the underlying host galaxy, and the emission-line values are affected by both narrow-line contributions and probably slowly varying Fe[ii]{} emission as well. In any case, these will have only a modest effect on $F_{\rm var}$ and $R_{\rm max}$, and inspection of Table 3 shows clearly that the large-amplitude, rapid variations that characterize X-rays in NLS1s are much less pronounced in the optical spectrum. While there is a clear lack of correlated short time-scale behavior of the X-ray and optical continua, the light curves in Fig. 1 suggest that a correlation on longer time scales is possible. To test this quantitatively, we have suppressed the rapid variations by smoothing both the optical continuum and X-ray light curves with a rectangular function of width 30 days, as shown in Fig. 3, similar to what was done by Maoz, Edelson, & Nandra (2000) in a comparison of X-ray and optical variability in the Seyfert 1 galaxy NGC 3516. Cross-correlation of the overlapping parts of these light curves using the methodology described in the next section yields a lag of the optical variations relative to the X-ray of $\tau= 6^{+62}_{-112}$ days with a correlation coefficient $r_{\rm max} = 0.74$, i.e., the mean X-ray and optical fluxes are indeed correlated once the high-frequency variability is suppressed.
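The 30-day rectangular smoothing used here is simply a running mean over an unevenly sampled light curve; a minimal sketch is given below, assuming the light curves are held as Julian Date and flux arrays (the variable names in the usage comments are hypothetical).

```python
import numpy as np

def boxcar_smooth(t, f, width=30.0):
    """Running mean of an unevenly sampled light curve.

    For each epoch t[i], average all points within +/- width/2 days of it
    and return the smoothed fluxes at the original epochs.
    """
    t = np.asarray(t, float)
    f = np.asarray(f, float)
    half = width / 2.0
    return np.array([f[np.abs(t - ti) <= half].mean() for ti in t])

# e.g. smoothed_xray = boxcar_smooth(jd_xray, rate_xray, width=30.0)
#      smoothed_opt  = boxcar_smooth(jd_opt,  f_5100,    width=30.0)
```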
The lag between variations in the two wavebands is highly uncertain, but consistent with zero or any small time lag expected in the continuum emitting region. Emission-Line Variability ------------------------- Comparison of the optical continuum and emission-line light curves shows that the variations in each are quite similar, indicating that the time delay between them is small. The time delay between continuum and emission-line variations can be quantified by cross-correlation of the light curves. During the first year, there is a period from JD2450183 to JD2450262 in which the light curves are well-sampled and the character of the variations permits an accurate cross-correlation measurement. In Table 3, we refer to this subset of data as “subset 1”, and we plot this part of the light curves in an expanded form in Fig. 2. We cross-correlate the data shown here by using the interpolation cross-correlation function (ICCF) method of Gaskell & Sparke (1986) and Gaskell & Peterson (1987) and the discrete correlation function (DCF) method of Edelson & Krolik (1988), where in both cases, we employ the specific implementation described by White & Peterson (1994). The results of the cross-correlation analysis are summarized in Table 4 and the cross-correlation functions are shown in Fig. 4. For both  and 686, Table 4 gives the ICCF centroid , and the location $\tau_{\rm peak}$ of the maximum value of the correlation coefficient $r_{\rm max}$. The centroid is computed using all points near $\tau_{\rm peak}$ with values greater than 0.8$r_{\rm max}$. The uncertainties quoted for  and $\tau_{\rm peak}$ are based on the model-independent Monte-Carlo method described by Peterson et al. (1998). By combining this lag with the Doppler width of the emission line, we can estimate a virial mass, as in eq. (4); for consistency with Wandel et al. (1999) and Kaspi et al. (2000), we use $f=3/\sqrt{2}$ in eq. (4). Since the broad emission-line features are comprised of a number of different components (or contaminants), it is desirable to measure the Doppler width of only the [*variable*]{} part of the emission line. In order to isolate the variable part of the emission line and exclude constant components (such as contamination from the NLR), we measure the relevant line widths in the [*rms*]{} spectrum constructed from all the set B spectra in subset 1. The mean spectrum is constructed by averaging all $N (= 18)$ set B spectra in subset 1, i.e. $$\bar{F}(\lambda) = \frac{1}{N} \sum^{N}_{i=1} F_{i}(\lambda),$$ where $F_{i}(\lambda)$ is the flux density (in ) at wavelength $\lambda$ in the $i$th spectrum. The rms spectrum is similarly constructed as $$\sigma(\lambda) = \left[ \left( \frac{1}{N-1} \right) \sum^{N}_{i=1} \left( F_{i}(\lambda) - \bar{F}(\lambda) \right)^{2} \right]^{1/2}.$$ The mean and rms spectra are shown as the top two panels in Fig. 5. The widths of the  and 686 emission lines (full-width at half maximum, ) are given in Table 4, as Doppler widths in the rest frame of NGC 4051. A virial mass is then computed as described by Wandel, Peterson, & Malkan (1999). On the basis of the  variations, a mass of $1.1^{+0.8}_{-0.5} \times 10^6\,\Msun$ is inferred; unfortunately however, the extremely large uncertainty in the 686 lag renders the virial mass obtained from it not very enlightening, but it is consistent with the  result. 
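For reference, the core of the interpolation cross-correlation measurement described above (a grid of trial lags, linear interpolation of one light curve onto the shifted epochs of the other, and the peak and $>0.8\,r_{\rm max}$ centroid of the resulting correlation curve) can be sketched as follows. This is only a simplified stand-in for the White & Peterson (1994) implementation actually used (which also averages the two interpolation directions and restricts the centroid to points contiguous with the peak), and the variable names in the usage comments are hypothetical.

```python
import numpy as np

def iccf(t_cont, f_cont, t_line, f_line, lags):
    """Simplified interpolation cross-correlation function.

    For each trial lag tau the line light curve (assumed sorted in time)
    is linearly interpolated at (t_cont + tau) and correlated with the
    continuum over the overlapping epochs; a positive peak means the line
    lags the continuum.
    """
    t_cont, f_cont = np.asarray(t_cont, float), np.asarray(f_cont, float)
    t_line, f_line = np.asarray(t_line, float), np.asarray(f_line, float)
    r = np.full(len(lags), np.nan)
    for i, tau in enumerate(lags):
        t_shift = t_cont + tau
        ok = (t_shift >= t_line.min()) & (t_shift <= t_line.max())
        if ok.sum() < 3:
            continue
        f_interp = np.interp(t_shift[ok], t_line, f_line)
        r[i] = np.corrcoef(f_cont[ok], f_interp)[0, 1]
    return r

def lag_peak_and_centroid(lags, r, frac=0.8):
    """Lag of r_max and the centroid of all lags with r > frac * r_max."""
    lags, r = np.asarray(lags, float), np.asarray(r, float)
    i_peak = np.nanargmax(r)
    use = r > frac * r[i_peak]
    centroid = np.sum(lags[use] * r[use]) / np.sum(r[use])
    return lags[i_peak], centroid, r[i_peak]

# lags = np.arange(-20.0, 20.5, 0.5)                      # trial lags [days]
# r = iccf(jd_cont, f_5100, jd_line, f_hbeta, lags)
# tau_peak, tau_cent, r_max = lag_peak_and_centroid(lags, r)
```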
It is also important to keep in mind that because of the unknown geometry and kinematics of the BLR, the virial mass is reliable to only about an order of magnitude, i.e., the systematic uncertainties are much larger than the errors quoted here. However, an independent estimate of the inclination of the system has been made by Christopoulou et al. (1997) on the basis of the NLR morphology and kinematics. These authors model the NLR as an outflowing biconical region of inclination 50 and half-opening angle 23. If the BLR and NLR axes are coaligned, then correcting the virial mass for inclination gives a central mass of $1.4^{+1.0}_{-0.6} \times 10^6\,\Msun$. There are a number of important features in the rms spectrum that deserve attention: first, the constant components, such as the \[\]$\lambda\lambda4959$, 5007 narrow lines that are so prominent in the mean spectrum, are absent, as expected, in the rms spectrum (except for weak residuals which reflect the accuracy to which accurate flux calibration can be achieved). Second, the weak broad wings of  that can be seen in the mean spectrum are much weaker in the rms spectrum, i.e., the line core is more variable than the line wings. This could occur if the higher velocity material is much farther away from the ionizing source, or if some significant component of the high-velocity gas is optically thin (e.g., Shields, Ferland, & Peterson 1995); on physical grounds, we prefer the latter explanation. Third, the rms spectrum also shows that the optical emission varied little, if at all, during this period. The optical spectra of AGNs have broad blended features that extend from just longward of  to just shortward of , and over the range $\sim5100$–5600Å. In the mean spectrum, these  emission features are quite strong, which makes it hard to isolate the 686 emission. The blends are, however, virtually absent in the rms spectrum. Fourth, 686 is very prominent in the rms spectrum and is much broader than . The rest-frame width of this line in the rms spectrum is $\sim5430$, very typical of the line widths seen in normal Seyfert 1 galaxies. The centroid of the 686 is strongly [*blueshifted*]{} relative to , by about 1400. In order to demonstrate that these properties of the 686 profile are real, we also computed the mean and rms spectra based on (a) set A spectra obtained at the same time and (b) set B spectra obtained during the 1997 monitoring season (Year 2). The rms spectra from these subsets, shown in the bottom two panels of Fig. 5, show the same 686 profile characteristics seen in the set B Subset 1 data shown in the second panel. Discussion ========== The Continuum ------------- As described in the previous section, the X-ray and optical continuum variations are not closely coupled on short time scales. The X-ray continuum shows large scale variations on short time scales, as is typical of the NLS1 class. The rapid variations seen in the X-ray are not detected in the optical, as has previously been reported by Done et al. (1990) for this same source and by Young et al. (1999) for IRAS 13224$-$3809. However, if we average over the short time-scale flares, then the X-ray and optical continuum variations [*do*]{} seem to be coupled, though the time resolution of this experiment is insufficient to determine whether there is any lag between them at the level of days or less. 
The absence of strong coupling of the hard X-ray and optical continuum variations argues against reprocessing models in which hard X-rays are absorbed by a dense plasma (such as the accretion disk) and the energy is re-radiated at longer wavelengths (Guilbert & Rees 1988). The X-ray variations are more suggestive of localized flaring types of activity that may arise in a patchy corona above the accretion disk. It has already been pointed out based on these same [*RXTE*]{} data (Uttley et al. 1999) that the X-ray continuum of NGC 4051 virtually “turned off” in early 1998 (around JD 2450800; see Fig. 1). However, the optical spectroscopic data show that the optical continuum and emission lines (and therefore, by inference, the ionizing UV continuum) did not disappear at the same time. This seems in a sense rather suggestive of the kind of behavior that has been seen in Galactic black-hole systems such as GRO J1655$-$40 (Orosz et al. 1997). A possible interpretation of the behavior of NGC 4051 is that the inner X-ray producing (though not necessarily by purely thermal emission) part of the accretion disk in NGC 4051 has entered an advection-dominated accretion-flow (ADAF) state, in which radiation is emitted with very low efficiency (Narayan & Yi 1994; Narayan et al. 1998). This implies that there is a transition radius $r_{\rm tran}$ inside of which the disk is an ADAF and outside of which it radiates efficiently, perhaps like a classical thin disk (Shakura & Sunyaev 1973). The persistence of the optical continuum and emission lines suggests that this transition radius is somewhere between the regions that are most responsible for the soft X-rays and the H-ionizing continuum. We comment on this further in section 4.3 below. The Virial Mass and Implications for NLS1s ------------------------------------------ As noted above, reverberation-based size estimates for the broad emission lines and resulting virial mass estimates provide a potential means of distinguishing among the various NLS1 models. In Fig. 6, we show the relationship between the BLR radius as measured from the  lag as a function of the optical continuum luminosity for all AGNs with Balmer line lags known to reasonable accuracy. All data used here are taken from the compilation of Kaspi et al. (2000), though their parameters for NGC 4051 are superceded by the values reported here. This compilation contains six additional AGNs that could in principle be classified as NLS1s as they meet the criterion $\vFWHM \ltsim 2000$. These sources, which we shall refer to below as “narrow-line objects”, are the Seyfert galaxies Mrk 335 and Mrk 110 (from Wandel, Peterson, & Malkan 1999) and the QSOs PG 0026$+$120, PG 1211$+$143, PG 1351$+$640, and PG 1704$+$608 (from Kaspi et al. 2000). The best-fit regression line ($R_{\rm BLR} \propto L^{0.62 \pm 0.02}$), based on all objects from Table 5 of Kaspi et al. (2000) except NGC 4051, is shown as a dotted line. NGC 4051 lies approximately 2.8$\sigma$ above this regression line, although all the other narrow-line objects clearly fall in the locus defined by the AGNs with broader lines. It is difficult to argue that NGC 4051 is somehow different from the other AGNs, as there are several AGNs that have large displacements from the regression line. This is reflected in the large value of the reduced $\chi^2$ statistic, $\chi^2_{\nu} = 15.7$, for this fit. In Fig. 
7, we plot the mass–luminosity relationship for the AGNs from Kaspi et al., and we show (a) the best-fit regression line based on all objects [*except*]{} the seven narrow-line objects and (b) that based on the narrow-line objects alone. Formally, the slopes $\alpha$ (for $M \propto L^{\alpha}$) are significantly different, with $\alpha = 0.46 \pm 0.06$ for the narrow-line objects (including NGC 4051), and $\alpha = 0.27 \pm 0.03$ for the others. These two fits are separated by typically an order of magnitude in black-hole mass; the black holes in the narrow-line objects are about a factor of 10 lower than those of other AGNs of comparable luminosity. How well do these results allow us to distinguish among the various explanations for the NLS1 phenomenon? We consider the possibilities: 1. [*Do the BLRs of NLS1s have anomalously large radii?*]{} The position of NGC 4051 in Fig. 6 might suggest that this is possible, but the distribution of other narrow-line objects does not support this. Furthermore, as noted above, the scatter in the BLR-radius luminosity relationship is very large, and NGC 4051 is in a statistical sense not the largest outlier in this relationship (simply because other sources have smaller uncertainties in their measured lags). 2. [*Are NLS1s simply low-inclination systems?*]{} If the BLR is a flattened system, at low inclination (i.e., nearly face-on) the line widths will be decreased by a factor $\sin i$, but the measured emission-line lags will be relatively unaffected. On the other hand, assuming that the UV–optical continuum arises in an accretion disk at the same inclination, the apparent UV–optical luminosity is higher at lower inclination (e.g., see Fig. 32 of Netzer 1990). Thus, relative to similar sources at intermediate inclination, the masses of low-inclination sources will be underestimated, and their luminosities will be overestimated, displacing the narrow-line objects in Fig. 7 towards the lower right. This is generally consistent with the location of all seven of the narrow-line objects, including NGC 4051. The line transfer function for would provide a more definitive test of this hypothesis since it would allow determination of the inclination of the system. This would require more and better data than we have obtained in this experiment. However, as noted earlier, Christopoulou et al. (1997) show that the NLR morphology and kinematics suggest a system that is inclined to the line of sight by $\sim50$. It seems reasonable to suppose that the NLR and BLR axes are approximately coaligned. If this is the case, then our virial estimate for the central mass is too small by a modest factor $\sin 50\deg \approx 0.77$, and the corrected central mass is then $1.4^{+1.0}_{-0.6} \times 10^6\,\Msun$. 3. [*Are NLS1s undermassive systems with relatively high accretion rates?*]{} Again, the distribution of the narrow-line objects, including NGC 4051, in Fig. 7 is consistent with this hypothesis. The narrow-line sources on this plot lie below the mass-luminosity relationship for other AGNs, at the lower end of the envelope around this relationship. In summary, the hypothesis that NLS1s have unusually distant BLRs for their luminosity is probably not viable in general, although it could apply to the specific case of NGC 4051. At the present time, however, we cannot distinguish between the low-inclination and low-mass, high accretion-rate hypotheses on the basis of the reverberation results alone. 
The latter is favored on the basis of X-ray considerations and the $50\deg$ inclination inferred from the NLR. Indeed, it is entirely possible that both effects (i.e., low inclination and low black-hole mass) contribute. An improvement in the optical spectroscopic monitoring data could allow determination of the H$\beta$ transfer function, which could allow discrimination between these competing models. Broad He[**II**]{} Emission --------------------------- As noted in section 3.2, we detected very broad, blueshifted 686 emission in the rms spectra. The blueshift of this component is suggestive of radial rather than virialized motion. There is no similar obvious component to the Balmer lines, but we should expect that similar blueshifted features might appear in the other higher-ionization lines in the UV. As no contemporaneous spectra were available, we retrieved the 31 [*International Ultraviolet Explorer*]{} SWP (Short-Wavelength Prime camera, wavelength range $\sim1150$–2000Å) spectra from the Space Telescope Science Institute Multimission Data Archive. We formed an average of all these spectra, since the individual spectra were of rather low signal-to-noise ratio. In Fig. 8, we show the line profiles of 686 (the rms profile, as in Fig. 5) and those of He[ii]{}$\lambda1640$ and C[iv]{}$\lambda1549$ based on the mean [*IUE*]{} spectra. We note that each of these lines has a strong wing extending several thousand kilometers per second blueward of the line peak; indeed, the comparatively large widths and blueshifts of the UV lines in NLS1 spectra have been noted earlier by Rodríguez-Pascual, Mas–Hesse, & Santos-Lleó (1997). It is possible that this gas is related to the known warm absorber in NGC 4051 (McHardy et al. 1995). Interestingly, photoionization equilibrium modeling of the X-ray warm absorber data (Nicastro et al. 1999) suggests a distance from the source of approximately 5 light days, which is consistent with the reverberation result given in Table 4. The differences between the characteristics of the H$\beta$ emission line on one hand and of the high-ionization lines on the other suggest a two-component BLR, which has been proposed on numerous occasions on other grounds (e.g., Collin-Souffrin et al. 1988; van Groningen 1987). In this particular case, an interpretation that is at least qualitatively consistent with all the data and relatively simple is that the Balmer lines arise primarily in material that is in a flattened disk-like configuration at a low to moderate inclination (to account for the narrow width of the H$\beta$ line), and the high-ionization lines arise in an outflowing wind, of which we see preferentially the component on the near side of the disk (to account for high velocity and blueward asymmetry). Such a model is illustrated schematically by Collin-Souffrin et al. (1988; their Fig. 1) and more recently by Dultzin-Hacyan, Taniguchi, & Uranga (1999; their Fig. 1). This geometry was also suggested by van Groningen (1987) to explain the line profiles and profile ratios of the Balmer lines in Mrk 335, another narrow-line object. As noted earlier, our virial mass estimate of $M=1.1\times10^6\,\Msun$ might seriously underestimate the black hole mass if the inclination of the system is very low. Moreover, some low-inclination accretion-disk models predict relatively strong, variable EUV/soft X-ray fluxes (e.g., Netzer 1987; Madau 1988), consistent with observations of NLS1s. We therefore cannot rule out the possibility that the NLS1 phenomenon is due principally to inclination effects.
However, the strong rapid X-ray variability of NLS1s seems to favor the low-mass, high accretion-rate explanation, as does the $50\deg$ inclination of the NLR, unless the BLR and NLR axes of symmetry are very different, which seems rather unlikely on physical grounds. The behavior of the 686 line during Year 3 may provide additional information about the ionizing continuum at the time the X-rays went into an extremely faint state. In Fig. 9, we show the rms spectrum based on set B spectra obtained between JD2450810 and JD2451022. The H$\beta$ line is strong and narrow, as it is in the other rms spectra shown in Fig. 5, which indicates that the continuum shortward of 912Å is still present and variable. However, the 686 line is absent or very weak, indicating that the driving extreme ultraviolet (EUV) continuum shortward of the He[ii]{} edge at 228Å (54.4eV) is [*not*]{} driving 686 variations, either because the EUV flux is low or varying little. Evidence from earlier years (Uttley et al. 1999) shows that the EUV and X-ray fluxes in NGC 4051 are well correlated, which implies that the EUV continuum might also have been extremely weak in Year 3. Simultaneous [*BeppoSAX*]{}, [*RXTE*]{}, and [*EUVE*]{} observations obtained during the low state indicate that this is correct (Uttley et al. 2000). If the He[ii]{} emission has dramatically decreased during Year 3, we could infer that the transition radius between the inner ADAF region and the outer thin-disk structure must occur somewhere outside (or in the vicinity of) the region of the disk that contributes most of the flux at about 228Å, but inside the radius that contributes most of the flux at about 912Å. In this regard, whether or not there is residual 686 emission in the low state during Year 3 becomes an interesting question. Note in particular that the measurements of the 686 flux given in Table 2 and Figure 1 tell us little because these values include flux from broad-line Fe[ii]{} blends and various narrow-line features. In order to address the question of how much 686 persists in the low state, we have attempted a decomposition of the 686 feature. Our first step was to form average high-state (as in the top panel of Fig. 5) and low-state spectra. We then attempted to remove the Fe[ii]{} blends from the spectra by using the optical template spectrum of Boroson & Green (1992), convolved with a Gaussian to match the width of the Fe[ii]{} features in the spectrum of NGC 4051. After subtraction of the flux-scaled and broadened Fe[ii]{} template, we subtracted a power-law continuum from each spectrum. The 686 region of the resulting high-state and low-state spectra is shown in the top panel of Fig. 10. We then subtracted the low-state spectrum from the high-state spectrum, forming the difference spectrum shown in the middle panel of Fig. 10. The flux in the 686 line in the difference spectrum is $7.4 \times 10^{-15}$. We then make the assumption that the 686 line profile in the difference spectrum can be used to model the 686 contribution in the low-state spectrum. We then scale the 686 profile in flux on a trial-and-error basis and subtract it from the low-state spectrum until the flux in the model profile causes the residual to have negative flux. Using this procedure, we find the largest possible value for the 686 line in the low state is $\sim8.8\times10^{-15}$. These values mean that the 686 flux decreased by at least 46% between the Year 1 high state and the Year 3 low state.
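The trial-and-error scaling just described amounts to finding the largest multiple of the difference-spectrum profile that can be subtracted from the low-state spectrum without driving it negative. A schematic version is sketched below, assuming Fe[ii]{}- and continuum-subtracted spectra on a common wavelength grid (the array names are hypothetical) and using the 4613–4772Å integration band quoted earlier.

```python
import numpy as np

def max_heii_scale(wave, low_state, diff_profile, window=(4613.0, 4772.0)):
    """Largest multiple of the difference-spectrum He II profile that can be
    subtracted from the low-state spectrum without producing negative flux
    inside the He II integration window."""
    m = (wave >= window[0]) & (wave <= window[1]) & (diff_profile > 0)
    # The residual low_state - s*diff_profile first goes negative at the
    # pixel where the ratio low_state/diff_profile is smallest.
    return np.min(low_state[m] / diff_profile[m])

def band_flux(wave, spec, window=(4613.0, 4772.0)):
    """Flux of a (continuum-subtracted) spectrum inside the given band."""
    m = (wave >= window[0]) & (wave <= window[1])
    return np.trapz(spec[m], wave[m])

# s_max = max_heii_scale(wave, low_state, high_minus_low)
# heii_upper_limit = s_max * band_flux(wave, high_minus_low)  # low-state bound
```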
Our conclusion is thus that while broad 686 probably did not completely disappear during the low state in Year 3, the line flux did decrease dramatically and any 686 variations during this period were too small to detect. In any case, these results underscore the importance of multiwavelength observations of NLS1s in very low states. Presumably comparison of the strength of the various UV emission lines in the low state relative to the values in the high state could lead to determination of $r_{\rm tran}$. Unfortunately, no ultraviolet data are available during the present campaign. Summary ======= On the basis of three years of combined X-ray and optical spectroscopic monitoring of the narrow-line Seyfert 1 galaxy NGC 4051, we reach the following conclusions: 1. The rapid and strong X-ray variations that characterize narrow-line Seyfert 1 galaxies are detected in our X-ray observations of NGC 4051, but are not detected in the optical, consistent with previous findings. 2. On time scales of many weeks and longer, there does appear to be a correlation between the X-ray and optical continuum fluxes. 3. The variable part of the H$\beta$ emission line has a Doppler width of $\sim1100$ km s$^{-1}$, and a time delay relative to the continuum of about six days. Combining these quantities leads to a virial mass estimate of $\sim1.1\times10^6\,\Msun$. If we assume that the inclination of the system is $50\deg$, as suggested by the NLR study of Christopoulou et al. (1997), then the mass of the central source is $\sim1.4\times10^6\,\Msun$. 4. The 686 emission line is strongly variable, although an accurate time delay cannot be measured. This line is about five times as broad as the H$\beta$ line, and is strongly blueward asymmetric, as are the UV high-ionization lines in this object. 5. In the BLR radius–luminosity relationship, we find that narrow-line objects (those with $\vFWHM \ltsim 2000$ km s$^{-1}$) seem to fall on the same locus as AGNs with broad lines. 6. In the virial mass–luminosity relationship, narrow-line objects populate the low-mass end of a rather broad envelope; they have virial masses typically an order of magnitude lower than other AGNs of similar luminosity. 7. During the third year of this program, the hard X-ray flux decreased by approximately a factor of 10, and the broad-line 686 flux decreased by nearly a factor of two and did not show detectable variations during this low state. At the same time, the optical continuum and broad H$\beta$ emission line were only slightly fainter than previously and continued to vary significantly. This suggests that the innermost part of the accretion disk went into an ADAF state, greatly reducing the production of high-energy continuum photons from the inner part of the accretion disk. A picture that is consistent with the emission-line characteristics is one in which the Balmer lines arise primarily in a disk-like configuration seen at low-to-moderate inclination and the high-ionization lines arise primarily in an outflowing wind. The high-ionization lines are blueward asymmetric because we see emission preferentially from the near side, with the far side at least partially obscured by the disk component (which might be an extension of the accretion disk). This geometry requires that much of the high-ionization line flux arises in a region of scale comparable to the disk system that emits the low-ionization lines and that the disk system is at least partially opaque to the 686 line radiation.
If NGC 4051 is typical of the NLS1 class, then it might be that NLS1s are best described as low-mass, high accretion-rate systems, although the possible role of inclination cannot be discounted. Indeed, the full explanation of the NLS1 phenomenon may involve [*both*]{} inclination and black-hole mass. For support of this work, we are grateful to the National Science Foundation (grant AST–9420080 to The Ohio State University). We thank the referee, M.-H. Ulrich, for suggestions that improved this paper. Blandford, R.D., & McKee, C.F. 1982, ApJ, 255, 419 Boller, Th., Brandt, W.N., & Fink, H. 1996, A&A, 305, 53 Boller, Th., Brandt, W.N., Fabian, A., & Fink, H. 1997, MNRAS, 289, 393 Boroson, T.A., & Green, R.F. 1992, ApJS, 80, 109 Bradt, H.V.D, Rothschild, R.E.,& Swank, J.H., 1993, A&AS, 97, 355 Brandt, W.N., Mathur, S., & Elvis, M. 1997, MNRAS, 285, L25 Christopoulou, P.E., Holloway, A.J., Steffen, W., Mundell, C.G., Thean, A.H.C., Goudis, C.D., Meaburn, J., & Pedlar, A. 1997, MNRAS, 284, 385 Collin-Souffrin, S., Dyson, J.E., McDowell, J.C., & Perry, J.J. 1988, MNRAS, 232, 539 Davidson, K., and Kinman, T.D. 1978, ApJ, 225, 776 deVaucouleurs, G., deVaucouleurs, A, Corwin, H.G., Jr., Buta, R.J., Paturel, G., Fouque, P. 1991, Third Reference Catalog of Bright Galaxies Done, C., Ward, M.J., Fabian, A.C., Kunieda, H., Tsuruta, S., Lawrence, A., Smith, M.G., & Wamsteker, W. 1990, MNRAS, 243, 713 Dultzin-Hacyan, D., Taniguichi, Y., & Uranga, L. 1999, in [*Structure and Kinematics of Quasar Broad Line Regions*]{}, ed. C.M. Gaskell, W.N. Brandt, D. Dultzin-Hacyan, M. Dietrich, & M. Eracleous (San Francisco: ASP), p. 303 Guilbert, P.W., & Rees, M.J. 1988, MNRAS, 233, 475 Edelson, R.A., & Krolik, J.H. 1988, ApJ, 333, 646 Fabricant, D., Cheimets, P., Caldwell, J., & Geary, J. 1998, PASP, 110, 79 Gaskell, C.M., & Peterson, B.M. 1987, ApJS, 65, 1 Gaskell, C.M., & Sparke, L.S. 1986, ApJ, 305, 175 Kaspi, S., Smith, P.S., Netzer, H., Maoz, D., Jannuzi, B.T., & Giveon, U. 2000, ApJ, 533, 631 Lawrence, A., Watson, M.G., Pounds, K.A., & Elvis, M. 1987, Nature, 325, 694 Madau, P. 1988, ApJ, 327, 116 Maoz, D., Edelson, R., & Nandra, K. 2000, AJ, 119, 119 Mason, K., Puchnarewicz, E.M., & Jones, L.R. 1996, MNRAS, 283, L26 McHardy, I.M., Green, A.R., Done, C., Puchnarewicz, E.M., Mason, K.O., Branduardi-Raymont, & Jones, M.H. 1995, MNRAS, 273, 549 Narayan, R., Mahadevan, R., Grindlay, J.E., Popham, R.G., & Gammie, C. 1998, ApJ, 492, 554 Narayan, R., & Yi, I. 1994, ApJ, 428, L13 Netzer, H. 1987, MNRAS, 225, 55 Netzer, H. 1990, in Active Galactic Nuclei, ed. T.J,-L. Courvoisier & M. Mayor (Berlin: Springer–Verlag), p. 141 Netzer, H., & Peterson, B.M. 1997, in Astronomical Time Series, ed. D. Maoz, A. Sternberg, & E.M. Liebowitz (Dordrecht: Kluwer), p. 85 Nicastro, F., Fiore, F., Perola, G.C., & Elvis, M. 1999, ApJ, 512, 184 Orosz, J.A., Remillard, R.A., Bailyn, C.D., & McClintock, J.E. 1997, ApJ, 478, L83 Osterbrock, D.E., & Pogge, R.W. 1985, ApJ, 297, 166 Papadakis, I.E., et al. 2000, in preparation Penston, M.V. 1991, in Variability of Galactic Nuclei, ed. H.R. Miller & P.J. Wiita (Cambridge: Cambridge Univ. Press), p. 343 Peterson, B.M. 1993, PASP, 105, 247 Peterson, B.M., & Wandel, A. 1999, ApJ, 521, L95 Peterson, B.M., Wanders, I., Horne, K., Collier, S., Alexander, T., Kaspi, S., & Maoz, D. 1998, PASP, 110, 660 Peterson, B.M., et al. 1999, ApJ, 510, 659 Puchnarewicz, E.M., Mason, K.O., Córdova, F.A., Kartje, J., Branduardi-Raymont, G., Mittaz, J.P.D., Murdin, P.G., & Allington-Smith, J. 
1992, MNRAS, 256, 589 Rodríguez, P.M., et al. 1997, ApJ, 110, 9 Rodríguez, P.M., Mas–Hesse, J.M., & Santos-Lleó, M. 1997, A&A, 327, 72 Seyfert, C. 1943, ApJ, 97, 28 Shakura, R.I., & Sunyaev, R.A. 1973, A&A, 24, 337 Shields, J.C., Ferland, G.J., & Peterson, B.M. 1995, ApJ, 441, 507 Tanaguchi, Y., Murayama, T., & Nagao, T. 1999, submitted to ApJ (astro-ph/9910036) Uttley, P., McHardy, I.M., Papadakis, I.E., Guainazzi, M., & Fruscione, A. 1999, MNRAS, 307, L6 Uttley, P., McHardy, I.M., Papadakis, I.E., Cagnoni, F, & Fruscione, A. 2000, MNRAS, 312, 880 van Groningen, E. 1987, A&A, 186, 103 van Groningen, E., & Wanders, I. 1992, PASP, 104, 700 Wandel, A., & Boller, Th. 1998, A&A, 331, 884 Wandel, A., Peterson, B.M., & Malkan, M.A. 1999, ApJ, 526, 579 White, R.J., & Peterson, B.M. 1994, PASP, 106, 879 Young, A.J., Crawford, C.S., Fabian, A.C., Brandt, W.N., O’Brien, P.T. 1999, MNRAS, 304, L46 [lcccl]{} 1996 Jan 12 & 95.0 & B & 3670–7450 & n00095b1996 Jan 15 & 98.0 & B & 3650–7520 & n00098b1996 Jan 19 & 102.0 & B & 3650–7410 & n00102b1996 Jan 22 & 105.1 & B & 3660–7410 & n00105b1996 Jan 25 & 108.0 & B & 3650–7520 & n00108b1996 Jan 29 & 112.0 & B & 3640–7500 & n00112b1996 Feb 05 & 118.8 & A & 4510–5670 & n00118a1996 Feb 09 & 122.8 & A & 4520–5670 & n00122a1996 Feb 10 & 124.0 & B & 3660–7400 & n00123b1996 Feb 14 & 127.8 & A & 4520–5670 & n00127a1996 Feb 15 & 129.0 & B & 3660–7420 & n00129b1996 Feb 18 & 132.0 & B & 3660–7530 & n00132b1996 Feb 21 & 134.9 & B & 3660–7410 & n00134b1996 Feb 23 & 136.8 & A & 4520–5680 & n00136a1996 Mar 07 & 149.8 & A & 4520–5680 & n00149a1996 Mar 22 & 164.8 & A & 4530–5680 & n00164a1996 Mar 25 & 167.7 & B & 3640–7500 & n00167b1996 Mar 26 & 168.8 & A & 4520–5690 & n00168a1996 Mar 28 & 170.8 & B & 3650–7390 & n00170b1996 Apr 10 & 183.6 & B & 3660–7420 & n00183b1996 Apr 12 & 185.6 & B & 3670–7420 & n00185b1996 Apr 12 & 185.7 & A & 4540–5690 & n00185a1996 Apr 15 & 188.6 & B & 3660–7410 & n00188b1996 Apr 18 & 191.6 & B & 3660–7410 & n00191b1996 Apr 20 & 193.6 & B & 3660–7400 & n00193b1996 Apr 25 & 198.7 & A & 4510–5680 & n00198a1996 Apr 25 & 198.8 & B & 3650–7410 & n00198b1996 Apr 26 & 199.6 & B & 3660–7400 & n00199b1996 May 03 & 206.7 & A & 4530–5690 & n00206a1996 May 08 & 211.6 & B & 3660–7400 & n00211b1996 May 09 & 212.8 & A & 4540–5680 & n00212a1996 May 10 & 213.6 & B & 3660–7520 & n00213b1996 May 15 & 218.6 & B & 3660–7510 & n00218b1996 May 17 & 220.8 & A & 4540–5700 & n00220a1996 May 18 & 221.7 & B & 3660–7400 & n00221b1996 May 22 & 225.6 & B & 3670–7430 & n00225b1996 May 24 & 227.8 & A & 4540–5700 & n00227a1996 May 26 & 229.7 & B & 3670–7430 & n00229b1996 May 30 & 233.7 & A & 4530–5670 & n00233a1996 Jun 06 & 240.6 & B & 3660–7400 & n00240b1996 Jun 07 & 241.7 & A & 4530–5690 & n00241a1996 Jun 14 & 248.7 & A & 4540–5690 & n00248a1996 Jun 16 & 250.7 & B & 3600–7540 & n00250b1996 Jun 19 & 253.6 & B & 3670–7530 & n00253b1996 Jun 19 & 253.7 & A & 4520–5680 & n00253a1996 Jun 23 & 257.7 & B & 3660–7420 & n00257b1996 Jun 25 & 259.6 & B & 3670–7550 & n00259b1996 Jun 28 & 262.7 & A & 4520–5680 & n00262a1996 Jul 17 & 281.6 & B & 3670–7520 & n00281b1996 Jul 20 & 284.7 & B & 3670–7530 & n00284b1996 Jul 24 & 288.6 & B & 3670–7530 & n00288b1996 Dec 11 & 429.0 & B & 3650–7510 & n00429b1996 Dec 17 & 435.0 & B & 3660–7520 & n00435b1997 Jan 02 & 451.0 & B & 3670–7510 & n00451b1997 Jan 09 & 458.0 & B & 3670–7510 & n00458b1997 Jan 16 & 465.0 & B & 4000–7500 & n00465b1997 Jan 31 & 479.9 & A & 4720–5990 & n00479a1997 Jan 31 & 480.0 & B & 4000–7500 & n00480b1997 Feb 03 & 483.1 & B & 4000–7500 
& n00483b1997 Feb 06 & 485.9 & B & 4000–7530 & n00485b1997 Feb 09 & 489.0 & B & 4000–7500 & n00488b1997 Feb 14 & 493.9 & A & 4380–5550 & n00493a1997 Feb 14 & 494.0 & B & 4000–7520 & n00494b1997 Feb 27 & 506.8 & A & 4360–5500 & n00506a1997 Mar 02 & 510.0 & B & 3660–7520 & n00509b1997 Mar 12 & 519.8 & B & 3660–7500 & n00519b1997 Mar 13 & 520.8 & A & 4290–5820 & n00520a1997 Mar 14 & 521.8 & B & 3670–7310 & n00521b1997 Mar 20 & 527.8 & A & 4390–5940 & n00527a1997 Apr 09 & 547.6 & B & 3670–7500 & n00547b1997 Apr 12 & 550.8 & B & 3670–7520 & n00550b1997 Apr 13 & 551.8 & B & 3680–7540 & n00551b1997 Apr 29 & 567.6 & B & 3670–7530 & n00567b1997 May 01 & 569.6 & B & 3670–7540 & n00569b1997 May 08 & 576.6 & B & 3650–7500 & n00576b1997 May 10 & 578.6 & B & 3650–7520 & n00578b1997 May 12 & 580.7 & B & 3660–7520 & n00580b1997 May 14 & 582.6 & B & 3650–7530 & n00582b1997 May 29 & 597.7 & B & 3660–7520 & n00597b1997 Jun 01 & 600.6 & B & 4000–7530 & n00600b1997 Jun 03 & 602.6 & B & 3660–7540 & n00602b1997 Jun 05 & 604.6 & B & 3670–7530 & n00604b1997 Jun 09 & 608.6 & B & 3670–7530 & n00608b1997 Jun 11 & 610.6 & B & 3670–7540 & n00610b1997 Jun 28 & 627.6 & B & 3670–7530 & n00627b1997 Jul 01 & 630.7 & B & 3670–7520 & n00630b1997 Jul 06 & 635.6 & B & 3670–7530 & n00635b1997 Jul 12 & 641.6 & B & 3670–7530 & n00641b1997 Jul 14 & 643.6 & B & 3670–7520 & n00643b1997 Nov 22 & 775.0 & A & 4310–5800 & n00775a1997 Nov 24 & 777.0 & B & 3750–7510 & n00777b1997 Nov 29 & 782.0 & B & 3750–7500 & n00782b1997 Dec 04 & 787.0 & B & 3750–7500 & n00787b1997 Dec 27 & 810.1 & B & 3670–7530 & n00810b1998 Jan 20 & 834.0 & B & 3680–7540 & n00834b 1998 Jan 24 & 838.1 & B & 3660–7550 & n00838b 1998 Jan 25 & 839.0 & A & 4330–5840 & n00839a1998 Jan 26 & 840.1 & B & 3670–7540 & n00840b 1998 Jan 29 & 843.0 & B & 3650–7520 & n00843b 1998 Feb 02 & 847.0 & B & 3660–7540 & n00847b 1998 Feb 22 & 867.0 & B & 3650–7530 & n00867b 1998 Feb 24 & 869.0 & B & 3670–7540 & n00869b 1998 Feb 28 & 873.0 & B & 3670–7530 & n00872b 1998 Mar 03 & 875.8 & A & 4300–5830 & n00876a1998 Mar 03 & 876.0 & B & 3660–7530 & n00876b 1998 Mar 04 & 877.0 & B & 3660–7510 & n00877b 1998 Mar 13 & 885.8 & A & 4380–5910 & n00886a1998 Mar 20 & 892.8 & B & 3660–7510 & n00892b 1998 Apr 03 & 906.9 & B & 3630–7490 & n00906b 1998 Apr 18 & 921.9 & B & 3610–7470 & n00921b 1998 Apr 23 & 926.8 & A & 4320–5840 & n00927a1998 Apr 27 & 930.8 & B & 3620–7500 & n00930b 1998 May 02 & 935.9 & B & 3630–7490 & n00935b 1998 May 03 & 936.6 & B & 3620–7490 & n00936b 1998 May 16 & 949.6 & B & 3670–7510 & n00949b 1998 May 28 & 961.6 & B & 3660–7530 & n00961b 1998 Jun 02 & 966.7 & B & 3660–7540 & n00966b 1998 Jun 16 & 980.7 & A & 4310–5850 & n00981a1998 Jun 16 & 980.7 & B & 3660–7540 & n00980b 1998 Jun 19 & 983.7 & B & 3650–7520 & n00983b 1998 Jun 24 & 988.6 & B & 3680–7530 & n00988b 1998 Jun 27 & 991.7 & B & 3660–7530 & n00991b 1998 Jun 30 & 994.6 & B & 3680–7520 & n00994b 1998 Jul 15 &1009.6 & B & 3710–7510 & n01009b 1998 Jul 18 &1012.6 & B & 3670–7530 & n01012b 1998 Jul 25 &1019.6 & B & 3700–7500 & n01019b 1998 Jul 28 &1022.6 & B & 3670–7530 & n01022b & & [cccc]{} 95.0 &$ 13.41 \pm 0.54$ & $ 4.17 \pm 0.17$ & $ 4.06 \pm 0.45$ 98.0 &$ 13.88 \pm 0.56$ & $ 4.54 \pm 0.18$ & $ 4.22 \pm 0.46$ 102.0 &$ 14.42 \pm 0.58$ & $ 4.91 \pm 0.20$ & $ 4.53 \pm 0.50$ 105.1 &$ 13.46 \pm 0.54 $ & $ 4.54 \pm 0.18 $ & $\ldots$ 108.0 &$ 13.75 \pm 0.55 $ & $ 4.39 \pm 0.17 $ & $\ldots$ 112.0 &$ 13.75 \pm 0.55$ & $ 4.42 \pm 0.18$ & $ 3.80 \pm 0.42$ 118.8 &$ 14.01 \pm 0.28 $ & $ 4.20 \pm 0.08 $ & $\ldots$ 122.8 &$ 14.12 \pm 0.28 
$ & $ 4.67 \pm 0.09 $ & $\ldots$ 124.0 &$ 13.74 \pm 0.55$ & $ 4.72 \pm 0.19$ & $ 3.81 \pm 0.42$ 127.8 &$ 13.51 \pm 0.27 $ & $ 4.66 \pm 0.09 $ & $\ldots$ 129.0 &$ 14.37 \pm 0.57$ & $ 5.09 \pm 0.20$ & $ 3.97 \pm 0.44$ 132.0 &$ 14.91 \pm 0.60$ & $ 4.42 \pm 0.18$ & $ 3.88 \pm 0.43$ 134.9 &$ 13.39 \pm 0.54$ & $ 4.63 \pm 0.19$ & $ 3.99 \pm 0.44$ 136.8 &$ 14.13 \pm 0.28 $ & $ 4.53 \pm 0.09 $ & $\ldots$ 149.8 &$ 14.70 \pm 0.29 $ & $ 4.91 \pm 0.10 $ & $\ldots$ 164.8 &$ 14.29 \pm 0.29 $ & $ 4.71 \pm 0.09 $ & $\ldots$ 167.7 &$ 14.66 \pm 0.59$ & $ 4.92 \pm 0.20$ & $ 3.86 \pm 0.43$ 168.8 &$ 14.58 \pm 0.29 $ & $ 4.75 \pm 0.09 $ & $\ldots$ 170.8 &$ 14.17 \pm 0.57$ & $ 5.06 \pm 0.20$ & $ 4.14 \pm 0.46$ 183.6 &$ 14.66 \pm 0.59$ & $ 4.84 \pm 0.19$ & $ 4.53 \pm 0.50$ 185.6 &$ 14.51 \pm 0.58$ & $ 5.29 \pm 0.21$ & $ 3.86 \pm 0.42$ 185.7 &$ 14.06 \pm 0.28 $ & $ 5.12 \pm 0.10 $ & $\ldots$ 188.6 &$ 14.61 \pm 0.58$ & $ 5.17 \pm 0.21$ & $ 4.73 \pm 0.52$ 191.6 &$ 14.63 \pm 0.58$ & $ 5.35 \pm 0.21$ & $ 4.78 \pm 0.52$ 193.6 &$ 15.02 \pm 0.60 $ & $ 5.75 \pm 0.23 $ & $\ldots$ 198.7 &$ 14.43 \pm 0.29 $ & $ 5.63 \pm 0.11 $ & $\ldots$ 198.8 &$ 13.54 \pm 0.54$ & $ 5.55 \pm 0.22$ & $ 4.47 \pm 0.49$ 199.6 &$ 14.02 \pm 0.56$ & $ 5.54 \pm 0.22$ & $ 4.83 \pm 0.53$ 206.7 &$ 14.08 \pm 0.28 $ & $ 5.63 \pm 0.11 $ & $\ldots$ 211.6 &$ 13.54 \pm 0.54$ & $ 5.48 \pm 0.22$ & $ 4.63 \pm 0.51$ 212.8 &$ 13.15 \pm 0.26 $ & $ 5.36 \pm 0.11 $ & $\ldots$ 213.6 &$ 13.16 \pm 0.53$ & $ 5.28 \pm 0.21$ & $ 4.73 \pm 0.52$ 218.6 &$ 12.95 \pm 0.52$ & $ 5.09 \pm 0.20$ & $ 4.28 \pm 0.47$ 220.8 &$ 12.27 \pm 0.25 $ & $ 4.59 \pm 0.09 $ & $\ldots$ 221.7 &$ 13.34 \pm 0.53$ & $ 4.28 \pm 0.17$ & $ 3.08 \pm 0.34$ 225.6 &$ 12.26 \pm 0.49$ & $ 4.37 \pm 0.17$ & $ 2.91 \pm 0.32$ 227.8 &$ 13.14 \pm 0.26 $ & $ 4.14 \pm 0.08 $ & $\ldots$ 229.7 &$ 13.91 \pm 0.56$ & $ 4.30 \pm 0.17$ & $ 3.77 \pm 0.41$ 233.7 &$ 12.84 \pm 0.26 $ & $ 4.51 \pm 0.09 $ & $\ldots$ 240.6 &$ 13.18 \pm 0.53$ & $ 4.88 \pm 0.19$ & $ 3.76 \pm 0.41$ 241.7 &$ 13.29 \pm 0.27 $ & $ 4.78 \pm 0.10 $ & $\ldots$ 248.7 &$ 13.93 \pm 0.28 $ & $ 5.64 \pm 0.11 $ & $\ldots$ 250.7 &$ 12.99 \pm 0.52$ & $ 5.68 \pm 0.23$ & $ 4.59 \pm 0.50$ 253.6 &$ 11.67 \pm 0.47$ & $ 5.22 \pm 0.21$ & $ 3.53 \pm 0.39$ 253.7 &$ 12.52 \pm 0.25 $ & $ 4.94 \pm 0.10 $ & $\ldots$ 257.7 &$ 12.09 \pm 0.48$ & $ 4.47 \pm 0.18$ & $ 3.45 \pm 0.38$ 259.6 &$ 11.92 \pm 0.48$ & $ 4.57 \pm 0.18$ & $ 3.35 \pm 0.37$ 262.7 &$ 12.41 \pm 0.25 $ & $ 4.17 \pm 0.08 $ & $\ldots$ 281.6 &$ 13.64 \pm 0.55$ & $ 5.25 \pm 0.21$ & $ 4.17 \pm 0.46$ 284.7 &$ 14.68 \pm 0.59 $ & $ 5.89 \pm 0.24 $ & $\ldots$ 288.6 &$ 13.25 \pm 0.53$ & $ 5.45 \pm 0.22$ & $ 4.74 \pm 0.52$ 429.0 &$ 12.98 \pm 0.52$ & $ 5.84 \pm 0.23$ & $ 4.46 \pm 0.49$ 435.0 &$ 13.53 \pm 0.54$ & $ 5.46 \pm 0.22$ & $ 4.71 \pm 0.52$ 451.0 &$ 12.84 \pm 0.51$ & $ 5.51 \pm 0.22$ & $ 4.05 \pm 0.45$ 458.0 &$ 13.25 \pm 0.53$ & $ 5.38 \pm 0.22$ & $ 4.34 \pm 0.48$ 465.0 &$ 12.30 \pm 0.49$ & $ 4.74 \pm 0.19$ & $ 4.56 \pm 0.50$ 479.9 &$ 12.81 \pm 0.26 $ & $ 5.04 \pm 0.10 $ & $\ldots$ 480.0 &$ 12.68 \pm 0.51$ & $ 5.10 \pm 0.20$ & $ 4.67 \pm 0.51$ 483.1 &$ 11.95 \pm 0.48$ & $ 4.69 \pm 0.19$ & $ 3.08 \pm 0.34$ 485.9 &$ 12.41 \pm 0.50$ & $ 4.70 \pm 0.19$ & $ 3.22 \pm 0.35$ 489.0 &$ 14.03 \pm 0.56$ & $ 4.92 \pm 0.20$ & $ 3.99 \pm 0.44$ 493.9 &$ 12.67 \pm 0.25 $ & $ 4.99 \pm 0.10 $ & $\ldots$ 494.0 &$ 13.53 \pm 0.54$ & $ 4.83 \pm 0.19$ & $ 3.30 \pm 0.36$ 506.8 &$ 11.46 \pm 0.23 $ & $ 4.69 \pm 0.09 $ & $\ldots$ 510.0 &$ 12.30 \pm 0.49$ & $ 4.37 \pm 0.17$ & $ 3.27 \pm 0.36$ 519.8 &$ 12.51 \pm 0.50$ & $ 4.74 \pm 0.19$ & $ 3.68 \pm 
0.41$ 520.8 &$ 12.50 \pm 0.25 $ & $ 5.12 \pm 0.10 $ & $\ldots$ 521.8 &$ 12.65 \pm 0.51$ & $ 4.68 \pm 0.19$ & $ 3.49 \pm 0.38$ 527.8 &$ 10.92 \pm 0.22 $ & $ 4.57 \pm 0.09 $ & $\ldots$ 547.6 &$ 11.80 \pm 0.47$ & $ 4.52 \pm 0.18$ & $ 3.57 \pm 0.39$ 550.8 &$ 11.74 \pm 0.47$ & $ 4.47 \pm 0.18$ & $ 2.96 \pm 0.32$ 551.8 &$ 11.59 \pm 0.46$ & $ 4.42 \pm 0.18$ & $ 3.11 \pm 0.34$ 567.6 &$ 10.88 \pm 0.44$ & $ 4.48 \pm 0.18$ & $ 3.76 \pm 0.41$ 569.6 &$ 11.76 \pm 0.47$ & $ 4.31 \pm 0.17$ & $ 3.12 \pm 0.34$ 576.6 &$ 12.17 \pm 0.49$ & $ 4.12 \pm 0.17$ & $ 2.91 \pm 0.32$ 578.6 &$ 12.02 \pm 0.48$ & $ 4.32 \pm 0.17$ & $ 3.24 \pm 0.36$ 580.7 &$ 11.97 \pm 0.48$ & $ 4.91 \pm 0.20$ & $ 3.28 \pm 0.36$ 582.6 &$ 12.10 \pm 0.48$ & $ 4.49 \pm 0.18$ & $ 3.35 \pm 0.37$ 597.7 &$ 12.57 \pm 0.50$ & $ 4.24 \pm 0.17$ & $ 2.92 \pm 0.32$ 600.6 &$ 12.44 \pm 0.50$ & $ 4.15 \pm 0.17$ & $ 2.59 \pm 0.28$ 602.6 &$ 11.62 \pm 0.47$ & $ 4.61 \pm 0.18$ & $ 3.53 \pm 0.39$ 604.6 &$ 12.00 \pm 0.48$ & $ 4.50 \pm 0.18$ & $ 3.59 \pm 0.40$ 608.6 &$ 11.96 \pm 0.48$ & $ 4.16 \pm 0.17$ & $ 3.39 \pm 0.37$ 610.6 &$ 10.99 \pm 0.44$ & $ 4.24 \pm 0.17$ & $ 2.80 \pm 0.31$ 627.6 &$ 11.26 \pm 0.45$ & $ 4.01 \pm 0.16$ & $ 2.92 \pm 0.32$ 630.7 &$ 11.14 \pm 0.45$ & $ 3.88 \pm 0.16$ & $ 2.49 \pm 0.27$ 635.6 &$ 11.87 \pm 0.47$ & $ 4.24 \pm 0.17$ & $ 2.37 \pm 0.26$ 641.6 &$ 12.33 \pm 0.49$ & $ 4.71 \pm 0.19$ & $ 3.20 \pm 0.35$ 643.6 &$ 10.75 \pm 0.43$ & $ 4.46 \pm 0.18$ & $ 2.83 \pm 0.31$ 775.0 &$ 13.29 \pm 0.27 $ & $ 5.04 \pm 0.10 $ & $\ldots$ 777.0 &$ 12.69 \pm 0.51$ & $ 5.18 \pm 0.21$ & $ 3.61 \pm 0.40$ 782.0 &$ 12.86 \pm 0.51$ & $ 5.41 \pm 0.22$ & $ 3.83 \pm 0.42$ 787.0 &$ 13.49 \pm 0.54$ & $ 5.27 \pm 0.21$ & $ 4.20 \pm 0.46$ 810.1 &$ 14.06 \pm 0.56$ & $ 3.95 \pm 0.16$ & $ 2.97 \pm 0.33$ 834.0 &$ 12.00 \pm 0.48$ & $ 4.85 \pm 0.19$ & $ 2.87 \pm 0.31$ 838.1 &$ 11.44 \pm 0.46$ & $ 4.59 \pm 0.18$ & $ 2.99 \pm 0.33$ 839.0 &$ 11.89 \pm 0.24 $ & $ 4.89 \pm 0.10 $ & $\ldots$ 840.1 &$ 12.50 \pm 0.50$ & $ 4.91 \pm 0.20$ & $ 3.40 \pm 0.37$ 843.0 &$ 12.45 \pm 0.50$ & $ 4.87 \pm 0.19$ & $ 3.29 \pm 0.36$ 847.0 &$ 12.33 \pm 0.49$ & $ 4.82 \pm 0.19$ & $ 3.20 \pm 0.35$ 867.0 &$ 12.25 \pm 0.49$ & $ 4.20 \pm 0.17$ & $ 2.86 \pm 0.31$ 869.0 &$ 12.55 \pm 0.50$ & $ 3.85 \pm 0.15$ & $ 2.53 \pm 0.28$ 873.0 &$ 12.15 \pm 0.49$ & $ 4.09 \pm 0.16$ & $ 3.01 \pm 0.33$ 875.8 &$ 12.29 \pm 0.25 $ & $ 4.49 \pm 0.09 $ & $\ldots$ 876.0 &$ 12.58 \pm 0.50$ & $ 4.31 \pm 0.17$ & $ 3.09 \pm 0.34$ 877.0 &$ 12.44 \pm 0.50$ & $ 4.37 \pm 0.17$ & $ 3.09 \pm 0.34$ 885.8 &$ 11.54 \pm 0.23 $ & $ 4.18 \pm 0.08 $ & $\ldots$ 892.8 &$ 12.14 \pm 0.49$ & $ 4.54 \pm 0.18$ & $ 3.05 \pm 0.34$ 906.9 &$ 11.44 \pm 0.46$ & $ 4.53 \pm 0.18$ & $ 2.60 \pm 0.28$ 921.9 &$ 11.61 \pm 0.46$ & $ 4.53 \pm 0.18$ & $ 2.93 \pm 0.32$ 926.8 &$ 11.13 \pm 0.22 $ & $ 4.43 \pm 0.09 $ & $\ldots$ 930.8 &$ 11.43 \pm 0.46$ & $ 4.01 \pm 0.16$ & $ 2.74 \pm 0.30$ 935.9 &$ 11.92 \pm 0.48 $ & $ 4.51 \pm 0.18 $ & $\ldots$ 936.6 &$ 11.23 \pm 0.45$ & $ 4.39 \pm 0.18$ & $ 2.65 \pm 0.29$ 949.6 &$ 12.03 \pm 0.48$ & $ 5.04 \pm 0.20$ & $ 3.41 \pm 0.38$ 961.6 &$ 11.32 \pm 0.45$ & $ 4.90 \pm 0.20$ & $ 3.09 \pm 0.34$ 966.7 &$ 11.60 \pm 0.46$ & $ 4.39 \pm 0.18$ & $ 3.00 \pm 0.33$ 980.7 &$ 11.72 \pm 0.21$ & $ 4.73 \pm 0.08$ & $ 3.13 \pm 0.34$ 983.7 &$ 11.41 \pm 0.46$ & $ 4.46 \pm 0.18$ & $ 2.60 \pm 0.29$ 988.6 &$ 11.56 \pm 0.46$ & $ 4.30 \pm 0.17$ & $ 2.78 \pm 0.31$ 991.7 &$ 11.35 \pm 0.45$ & $ 4.15 \pm 0.17$ & $ 3.08 \pm 0.34$ 994.6 &$ 11.37 \pm 0.46$ & $ 4.11 \pm 0.16$ & $ 2.56 \pm 0.28$ 1009.6 &$ 10.98 \pm 0.44$ & $ 3.76 \pm 0.15$ & $ 2.50 \pm 0.28$ 
1012.6 &$ 12.18 \pm 0.49$ & $ 3.70 \pm 0.15$ & $ 2.98 \pm 0.33$ 1019.6 &$ 12.75 \pm 0.51$ & $ 4.20 \pm 0.17$ & $ 3.33 \pm 0.37$ 1022.6 &$ 12.09 \pm 0.48$ & $ 4.57 \pm 0.18$ & $ 3.02 \pm 0.33$ [lcccccc]{} All & 140 & 7.5 & 7.3 & $6.20\pm3.85$ & 0.618 & $42.85\pm16.36$ Year 1 & 33 & 2.8 & 0.7 & $7.87\pm3.25$ & 0.411 & $5.766\pm0.459$ Year 2 & 20 & 11.5& 13.0 & $6.42\pm4.95$ & 0.770 & $17.68\pm2.03$ Year 3 & 19 & 14.1 & 14.1 & $1.97\pm1.81$ & 0.906 & $14.79\pm5.65$ All & 126 & 7.4 & 3.1 & $12.74\pm1.08$ & 0.077 & $1.397\pm0.079$ Year 1 & 51 & 3.9 & 3.0 & $13.66\pm0.82$ & 0.050 & $1.287\pm0.073$ Year 2 & 38 & 5.8 & 3.2 & $12.16\pm0.76$ & 0.050 & $1.305\pm0.074$ Year 3 & 37 & 6.9 & 4.5 & $12.06\pm0.69$ & 0.043 & $1.281\pm0.072$ Subset 1& 29 & 2.8 & 2.2 & $13.38\pm0.92$ & 0.059 & $1.287\pm0.073$ All & 126 & 7.4 & 3.1 & $ 4.77\pm0.48$ & 0.095 & $1.592\pm0.092$ Year 1 & 51 & 3.9 & 3.0 & $ 4.91\pm0.48$ & 0.092 & $1.423\pm0.064$ Year 2 & 38 & 5.8 & 3.2 & $ 4.65\pm0.44$ & 0.086 & $1.505\pm0.086$ Year 3 & 37 & 6.9 & 4.5 & $ 4.50\pm0.42$ & 0.086 & $1.462\pm0.084$ Subset 1& 29 & 2.8 & 2.2 & $ 5.02\pm0.51$ & 0.096 & $1.389\pm0.062$ All & 93 &10.1 & 4.5 & $ 3.50\pm0.67$ & 0.154 & $2.038\pm0.316$ Year 1 & 29 & 6.9 & 4.0 & $ 4.08\pm0.52$ & 0.064 & $1.660\pm0.258$ Year 2 & 33 & 6.7 & 3.6 & $ 3.42\pm0.62$ & 0.143 & $1.987\pm0.309$ Year 3 & 31 & 8.2 & 5.0 & $ 3.04\pm0.38$ & 0.057 & $1.680\pm0.263$ Subset 1& 17 & 4.8 & 3.5 & $ 4.08\pm0.65$ & 0.114 & $1.660\pm0.258$ [lcc]{} $\tau_{\rm cent}$ (days) & $5.92^{+3.13}_{-1.96}$ & $4.49^{+4.91}_{-5.60}$ $\tau_{\rm peak}$ (days) & $4.6^{+4.5}_{-1.5}$ & $5.9^{+8.9}_{-4.8}$ $r_{\rm max}$ & 0.800 & 0.767 $V_{\mbox{\scriptsize FWHM}}$ (kms$^{-1}$) & $1110\pm190$ & $5430\pm510$$M_{\rm vir}$ ($10^6\,M_{\odot}$) & $1.1^{+0.8}_{-0.5}$ & $19.4^{+21.5}_{-24.4}$ [^1]: The light curves and spectra are available at URL [http://www.astronomy.ohio-state.edu/$\sim$agnwatch/]{}. All publicly available International AGN Watch data can be accessed at this site, which also includes complete references to published AGN Watch papers.
--- author: - 'Eduard P. Kontar' title: Dynamics of electron beams in the solar corona plasma with density fluctuations --- Introduction ============ One of the challenging problems in the theory of type III bursts, widely discussed in the literature, is the fine structure of the bursts. The fine structure is observed in almost all ranges of frequencies from GHz ([@Benz82]; [@Benz96]) to a few tens of kHz in interplanetary space ([@Chaizy95]). Direct observations of Langmuir waves and energetic electrons show that Langmuir waves have rather clumpy spatial distribution whereas the electron stream seems rather continuous ([@Lin81]; [@Chaizy95]). There are a few alternative ways to explain the observational data. The existing theories can be roughly divided into three groups in accordance with the electron beam density or the energy of the Langmuir waves. The first group of theories is based on the assumption that nonlinear instabilities of strong turbulence theory can suppress quasilinear relaxation ([@Papadopoulos74]) and lead to extreme clumpiness of the spatial distribution of Langmuir waves ([@Thejappa98]). However, some observations and theoretical studies ([@Cairns95]) raise doubts as to whether the Langmuir turbulence level is high enough for strong-turbulence processes. The second, recently developed group of theories, is based on the prediction that an electron beam propagates in a state close to marginal stability, i.e. one where the fluctuation-dependent growth rate is compensated for by the damping rate ([@Robinson92]; [@Robinson93]). In this view, the growth rate of beam-plasma instability is perturbed by the ambient density fluctuations ([@Robinson92]). The third, more traditional group of theories, considers the beam propagation in the limit of weak turbulence theory ([@Ryutov70]; [@Takakura76]; [@Magelssen77]; [@Takakura82]; [@Grognard85]). The basic idea is that the electron beam generates Langmuir waves at the front of the electron stream and the waves are absorbed at the back of the stream, ensuring electron propagation over large distances. However, this idea was not proved for a long time ([@Melrose90]). Recently Mel’nik has demonstrated analytically ([@Melnik95]) that a mono-energetic beam can propagate as a beam-plasma structure (BPS). This result has been confirmed numerically ([@Kontar98]) and applied to the theory of type III bursts ([@Melnik99]). The solution obtained ([@Melnik00a]) directly resolves Sturrock’s dilemma ([@Sturrock64]) and may explain the almost constant speed of type III sources. However, the influence of plasma inhomogeneity on the dynamics of a BPS has never been studied although the correlation between Langmuir wave clumps and density fluctuations demonstrates the importance of such considerations ([@Robinson92c]). The influence of plasma inhomogeneity on Langmuir waves and beam electrons has been studied from various points of view. An account of plasma inhomogeneities may explain why accelerated beam electrons appear in the experiments with quasilinear relaxation of an electron beam ([@Ryutov69]). Relativistic dynamics of an electron beam with random inhomogeneities, as applied to laboratory plasmas, was considered in ([@Hishikawa76]). It has been shown ([@Muschietti85]) that the solar corona density fluctuations may be extremely effective in quenching the beam-plasma instability. 
Moreover, the isotropic plasma inhomogeneities may lead to efficient isotropisation of plasma waves ([@Goldman82]) whereas those elongated along the direction of the ambient magnetic field have little influence on the beam stability. Therefore, the growth rate of beam-plasma instability was postulated to be very high in the regions of low amplitude density fluctuations ([@Melrose86]; [@Melrose87]). Isotropic fluctuations of the ambient plasma density were also employed to explain the low level of Langmuir waves in microbursts ([@Gopalswamy93]). In this paper the dynamics of a spatially limited electron cloud is considered in a plasma with small-scale density fluctuations. In the treatment presented here, quasilinear relaxation is a dominant process and density inhomogeneities are too weak to suppress the instability. Indeed, observations of interplanetary scintillations from extragalactic radio sources ([@Cronyn72]) lead to an average value of $\Delta n/n$ of the order of $10^{-3}$ ([@Smith79]). Nevertheless, the low-intensity density fluctuations lead to significant spatial redistribution of wave energy. The numerical results obtained demonstrate that electrons propagate as a continuous stream while the Langmuir waves generated by the electrons are clumpy. Both electrons and Langmuir waves propagate in a plasma as a BPS with an almost constant velocity. However, density fluctuations lead to some energy losses. Electron beam and density fluctuations ====================================== The problem of one-dimensional electron beam propagation is considered in a plasma with density fluctuations. The one-dimensional character of electron beam propagation is supported by the 3D numerical solution of the kinetic equations ([@Churaev80]) and additionally by the fact that in the case of type III bursts electrons propagate along open magnetic field lines ([@Dulk85]). Electron beam ------------- There is still uncertainty in the literature as to whether electron beams are strong enough to produce strong turbulence or whether the beam is so rarefied that quasilinear relaxation is suppressed by damping or scattering. While some observations are in favor of the strong turbulence regime ([@Thejappa98]), others are interpreted as implying marginal stability ([@Cairns95]). Therefore, we consider the intermediate case of a medium-density beam, which is not strong enough to start strong turbulence processes, $$W/nT \ll (k\lambda _{D})^2, \label{weakt}$$ but is dense enough to make quasilinear relaxation a dominant process. Here $W$ is the energy density of Langmuir waves generated by the beam, $T$ is the temperature of the surrounding plasma, $k$ is the wave number, and $\lambda _{D}$ is the electron Debye length. The initial value problem is solved with an initially unstable electron distribution function, which leads to the formation of a BPS in the case of homogeneous plasma ([@Melnik00a]) $$f(v,x,t=0)=g_0(v)\mbox{exp}(-x^2/d^2), \label{f_0}$$ where $$g_0(v)=\left\{ \begin{array}{ll} \displaystyle \frac{2n'v}{v_0^2}, &\mbox{$v<v_0$},\\ 0, &\mbox{$v>v_0$}. \end{array} \right. \label{g_0}$$ Here $d$ is the characteristic size of the electron cloud and $v_0$ is the velocity of the electron beam. The initial spectral energy density of Langmuir waves $$W(v,x,t=0)\simeq \frac{T}{2\pi ^2\lambda _{D}^2}, \label{W_0}$$ is at the thermal level and uniformly distributed in space.
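For concreteness, the initial conditions (\[f\_0\])–(\[W\_0\]) are easy to lay out on a velocity–space grid. The sketch below is only an illustration of that setup, not the solver used in the paper; the grid resolution and the numerical value of $d$ are assumptions, while $n'$, $v_0$, $T$ and the ambient density follow the text.

```python
# Illustrative tabulation of the initial conditions f(v,x,0) and W(v,x,0)
# (eqs. [f_0], [g_0], [W_0]).  Grid sizes and the value of d are assumed;
# n', v0, T and n0 are the values quoted in the text.
import numpy as np

kB, m_e = 1.38e-16, 9.11e-28          # erg/K, g

n_beam = 100.0                        # n', beam density [cm^-3]
v0 = 1.0e10                           # beam velocity [cm/s]
d = 1.0e9                             # cloud size [cm]  (assumed)
T = 1.0e6                             # coronal temperature [K]
n0 = 5.0e8                            # ambient plasma density [cm^-3]

omega_pe = 5.64e4 * np.sqrt(n0)       # electron plasma frequency [rad/s]
v_Te = np.sqrt(3.0 * kB * T / m_e)    # thermal velocity, ~6.7e8 cm/s
lambda_D = v_Te / (np.sqrt(3.0) * omega_pe)   # Debye length [cm]

v = np.linspace(0.05 * v0, 1.5 * v0, 200)     # velocity grid (assumed)
x = np.linspace(-5.0 * d, 30.0 * d, 800)      # spatial grid (assumed)
V, X = np.meshgrid(v, x, indexing="ij")

g0 = np.where(V < v0, 2.0 * n_beam * V / v0**2, 0.0)            # eq. (g_0)
f0 = g0 * np.exp(-(X / d) ** 2)                                 # eq. (f_0)
W0 = np.full_like(f0, kB * T / (2.0 * np.pi**2 * lambda_D**2))  # eq. (W_0)

print(f"v_Te = {v_Te:.2e} cm/s, lambda_D = {lambda_D:.2f} cm")
```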
The electron temperature of the corona is taken to be $T=10^{6}$K, which gives an electron thermal velocity $v_{Te}=\sqrt{3kT/m}\simeq 6.7\times 10^{8}$cm s$^{-1}$. Ambient density fluctuations ---------------------------- Following common practice in the literature on plasma inhomogeneity Langmuir waves are treated in the approximation of geometrical optics (the WKB approximation) when the length of a Langmuir wave is much smaller than the size of the plasma inhomogeneity ([@Vedenov67]; [@Ryutov69]) $$\label{WKB1} \lambda \ll L,$$ where $$\label{WKB2} L\equiv \left(\frac {1}{\omega _{pe}}\frac{\partial \omega _{pe}}{\partial x}\right)^{-1},$$ is the scale of ambient plasma density fluctuations, and $\omega _{pe}$ is the local electron plasma frequency. The plasma inhomogeneity changes the dispersion properties of Langmuir waves and if the intensity of density fluctuations is small then the dispersion relation can be written $$\label{dispersion} \omega (k,x) =\omega_{pe}\left[1+\frac 12\frac{\Delta n}{n}+\frac{3k^2v_{Te}^2}{2\omega_{pe}^2}\right],$$ where $v_{Te}$ is the electron thermal velocity. The intensity of the density fluctuations should be small ([@Coste75]) $$\label{delta_n} \frac{\Delta n}{n} < \frac{3k^2v_{Te}^2}{\omega_{pe}^2},$$ to ensure that the corresponding fluctuations of local plasma frequency are within the thermal width of plasma frequency. Thus, for the typical parameters of the corona plasma (plasma density $n=5\times 10^8$cm$^{-3}$ or plasma frequency $f_p=\omega _{pe}/2\pi \approx 200.73$MHz), and assuming a beam velocity $v_0=10^{10}$cm s$^{-1}$, the density fluctuations are limited to $\Delta n/n<10^{-2}$. Quasilinear equations --------------------- In the case of weak turbulence theory (\[weakt\]), and under the conditions of the WKB approximation (\[WKB1\],\[delta\_n\]), the evolution of the electron distribution function $f(v,x,t)$ and the spectral energy density $W(v,x,t)$ are described by the system of kinetic equations ([@Ryutov69]) $$\frac{\partial f}{\partial t}+v \frac{\partial f}{\partial x}= \frac{4\pi ^2 e^2 }{m^2}\frac{\partial}{\partial v} \frac{W}{v}\frac{\partial f}{\partial v}, \label{eqk1}$$ and $$\frac{\partial W}{\partial t}+\frac{\partial \omega}{\partial k} \frac {\partial W}{\partial x}-\frac{\partial \omega_{pe}}{\partial x } \frac{\partial W}{\partial k}=\frac{\pi \omega_{pe}}{n}v^2W\frac{\partial f}{\partial v}, \; \omega_{pe}=kv, \label{eqk2}$$ where $\partial \omega/\partial k=3v_{Te}^2/v$ is the group velocity of Langmuir waves, and $W(v,x,t)$ plays the same role for waves as the electron distribution function does for particles. The system (\[eqk1\],\[eqk2\]) describes the resonant interaction $\omega_{pe}=kv$ of electrons and Langmuir waves. On the right-hand side of equations (\[eqk1\],\[eqk2\]) I omit the spontaneous terms due to their small magnitude relative to the induced ones ([@Ryutov70]). The presence of a local plasma frequency gradient leads to two physical effects on the kinetics of the Langmuir waves (\[eqk2\]). Firstly, the characteristic time of the beam-plasma interaction depends on the local density and therefore the resonance condition for the plasmons may itself change during the course of beam propagation. Secondly, the Langmuir wave propagating in the inhomogeneous plasma experiences a shift of wavenumber $\Delta k(x)$, due to the variation of the local refractive index. The second effect has been shown to have the main impact on Langmuir wave kinetics whereas the first effect can be neglected ([@Coste75]). 
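As a quick numerical check of condition (\[delta\_n\]): for waves resonant with the fastest beam electrons ($\omega_{pe}=kv_0$) the right-hand side equals $3(v_{Te}/v_0)^2$, and the short sketch below, using the values quoted above, reproduces the $\Delta n/n<10^{-2}$ limit stated in the text.

```python
# Check of Delta n / n < 3 k^2 v_Te^2 / omega_pe^2 (eq. [delta_n])
# at the beam resonance omega_pe = k v0, with the values quoted in the text.
v_Te = 6.7e8    # cm/s, thermal velocity for T = 1e6 K
v0 = 1.0e10     # cm/s, beam velocity
print(f"upper limit on Delta n / n: {3.0 * (v_Te / v0) ** 2:.1e}")   # ~1.3e-2
```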
Thus, we are confronted with the initial value problem of electron cloud propagation in a plasma with density fluctuations. The problem is nonlinear and is characterized by three different time scales. The fastest process in the system is the quasilinear relaxation, on the quasilinear timescale $\tau \approx n/n'\omega_{pe}$. The second timescale is that of processes connected with plasma inhomogeneity. Thirdly, there is the timescale of an electron cloud propagation in a plasma that significantly exceeds all other timescales. Quasilinear relaxation and plasma inhomogeneity =============================================== The main interaction in the system is beam – wave interaction governed by the quasilinear terms on the right hand side of equations (\[eqk1\],\[eqk2\]) . It is well-known that the unstable electron distribution function (\[g\_0\]) leads to the generation of plasma waves. The result of quasilinear relaxation for an electron beam homogeneously distributed in space is a plateau of the electron distribution function ([@Ryutov70]) $$\begin{aligned} \displaystyle f(v,t\approx \tau) =\left\{\begin{array}{ll} \displaystyle \frac{n'}{v_0},&\mbox{$v<v_0$}\\ 0,&\mbox{$v>v_0$} \end{array} \right. \label{f_relax}\end{aligned}$$ and the spectral energy density $$\begin{aligned} \displaystyle W(v,t\approx \tau) = \displaystyle \frac{mn'}{v_0\omega_{pe}} \int_0^v\left(1-\frac{v_0}{n'}g_0(v)\right)dv,\;\;v<v_0 \label{w_relax}\end{aligned}$$ where $g_0(v)$ is the initial distribution function of the beam. In the case of an inhomogeneous plasma we can also consider relaxation of a homogeneously distributed beam. Thus, the kinetic equations (\[eqk1\],\[eqk2\]) will take the form $$\frac{\partial f}{\partial t}= \frac{4\pi ^2 e^2 }{m^2}\frac{\partial}{\partial v} \frac{W}{v}\frac{\partial f}{\partial v}, \label{k1}$$ and $$\frac{\partial W}{\partial t}+\frac{v^2}{L_0} \frac{\partial W}{\partial v}=\frac{\pi \omega_{pe}}{n}v^2W\frac{\partial f}{\partial v}, \;\;\; \omega_{pe}=kv, \label{k2}$$ where the transport terms are omitted. Here, the inhomogeneity scale is also assumed to be constant and equal to $L_0$. It should be noted that this assumption is physically incorrect. The change in the spectrum of the Langmuir waves is due solely to the spatial movement of the waves with the group velocity. However, from a mathematical point of view, it is well justified as the group velocity of Langmuir waves is small ($3v_{Te}^2/v\ll v$) and effects connected with wave transport can be neglected. Equations (\[k1\],\[k2\]) describe two physical effects: quasilinear relaxation (with characteristic time $\tau $) and the drift of Langmuir waves in velocity space (the characteristic time $\tau _2 = |L_0|/v$. Since $\tau _2\gg \tau$ the influence of plasma inhomogeneity can be considered as the evolution of the final stage of quasilinear relaxation. Two possible cases of plasma density change are considered: plasma density decreasing ($L_0<0$) with distance and plasma density increasing ($L_0>0$) with distance. Plasma density decreasing with distance --------------------------------------- In this case $L_0$ is negative. After the time of quasilinear relaxation, a plateau is established in the electron distribution function and a high level of Langmuir waves is generated. 
Since the quasilinear processes are fast we have a plateau at every moment of time $$\begin{aligned} \displaystyle f(v,t\approx \tau) =\left\{\begin{array}{ll} \displaystyle \frac{n'}{v_0},&\mbox{$v<v_0$}\\ 0,&\mbox{$v>v_0$} \end{array} \right. \label{relax2}\end{aligned}$$ The wave spectrum is changing with time, and from the fact that we have a plateau at every moment equation (\[k2\]) can be reduced to $$\frac{\partial W}{\partial t}-\frac {v^2} {|L_0|} \frac{\partial W}{\partial v}= 0 \label{W}$$ The role of initial wave distribution is played by the spectral energy density generated during the relaxation stage (\[w\_relax\]). Integrating equation (\[W\]) we obtain the solution for $t\gg \tau$ $$\begin{aligned} \label{w1} W(v,t)=\frac{m}{\omega _{pe}}(1/v-t/|L_0|)^{-3} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\cr \;\;\; \times\int_0^{1/(1/v-t/|L_0|)}\left[\frac{n'}{v_0}-g_0(v)\right]dv,\;\; v<u(t)\end{aligned}$$ where $$\label{u} u(t)=\frac{v_0}{1+v_0t/|L_0|}$$ is the maximum velocity of the Langmuir waves. Note, that the electron distribution function is constant and presents a plateau (\[relax2\]). ![The electron distribution function $f(v,t)$ and the spectral energy density of Langmuir waves $W(v,t)$ at various times, for the case where the plasma density decreases with distance, $L_0=-5\times 10^9$cm. Numerical solution of kinetic equations (\[k1\],\[k2\]) $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="fig1"}](ms1121f1.eps){width="90mm"} The numerical solution of equations (\[k1\],\[k2\]) with the initial electron distribution function (\[g\_0\]) is presented in fig. \[fig1\]. Comparing the numerical results and the simplified solution (\[w1\]) we see a good agreement (see fig. \[fig1\]). The plateau for a wide range of velocities is formed after a short time, $t=0.1$s, and it remains almost unchanged up to the end of the calculation. For the time $t>0.1$s, the drift of the Langmuir wave spectrum toward smaller phase velocities becomes observable. At $t=0.5$s, the maximum phase velocity is half of the initial beam velocity. Plasma density increasing with distance --------------------------------------- An increasing plasma density leads to a shift toward larger phase velocities. For $v>v_0$ we have a negative derivative at the edge of the electron distribution function, and electrons absorb waves with the corresponding phase velocities. Absorption of waves then leads to acceleration of particles. This process continues until all the waves generated during the beam relaxation are absorbed by the electrons. In the case of increasing density we are unable to find an exact solution, but we can find the solution for $t\rightarrow \infty$. Using conservation of energy ([@Ryutov69]) $$\begin{aligned} \label{e_cons} \omega _{pe}\int_0^{u(t)}\frac{W(v,t)}{v^2}dv + \int_0^{u(t)}\frac{mn'}{2u(t)}v^2dv \cr =\frac m2 \int_0^{v_0}g_0(v)v^2dv\end{aligned}$$ and that the fact $W(v,t)=0$ at $t\rightarrow \infty$ we can find the maximum velocity for the initial distribution function (\[g\_0\]) $$\label{u_max} u(t\rightarrow \infty) =\sqrt{3/2}v_0$$ ![The electron distribution function $f(v,t)$ and the spectral energy density of Langmuir waves $W(v,t)$ at various times for the case where the plasma density increases with distance, $L_0=5\times 10^9$cm. 
Numerical solution of kinetic equations (\[k1\],\[k2\]) $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="fig2"}](ms1121f2.eps){width="90mm"} As predicted, the numerical solution tends to the maximum velocity $\approx 1.22v_0$ (fig. \[fig2\]). As in the previous case, at $t=0.1$s we have the result of quasilinear relaxation - a plateau in the electron distribution function and a high level of Langmuir waves. For times $t>0.1$s, the drift of Langmuir waves and consequent acceleration of electrons is observable. At $t=1$s almost all plasma waves are observed near the leading edge of the plateau and the maximum plateau velocity is close to the value given by (\[u\_max\]). Propagation of an electron cloud ================================ In this section the numerical results of the evolution of the electron beam in the plasma with density fluctuations are presented. We begin with the case where the ambient density fluctuations in the plasma are periodic and sine-like. The dependency of plasma density on distance is $$\label{sin} n(x)= n_0(1+\alpha \mbox{sin}(x/\Delta x))$$ where $\Delta x$ defines the period of the density fluctuations and $\alpha n_0$ is the amplitude of the density irregularities. The background plasma density is taken as a typical value for the starting frequencies of type III bursts $n_0=5\times 10^8$ cm$^{-3}$, corresponding to a local plasma frequency $f_p=\omega _{pe}/2\pi=200.73$MHz. As noted, small-intensity density fluctuation are considered, i.e. the local plasma frequency change due to the inhomogeneity is less than the thermal width of the plasma frequency (\[delta\_n\]). The value $\alpha$ is taken to be $10^{-3}$, which is considered to be a typical value for solar coronal observations ([@Cronyn72]; [@Smith79]). The spatial period of the plasma fluctuations $\Delta x=d/12$ is taken to be less than the initial size of the electron cloud. Thus, we have regions of size $\pi d/12\approx 0.26d$ with positive and negative density gradients. Recently, it has been shown that an electron beam can propagate in a homogeneous plasma as a BPS ([@Melnik99]; [@Melnik00]; [@Melnik00a]). Therefore, it is important to consider the dynamics of the electron beams at distances greatly exceeding the size of the electron cloud. Initial evolution of the electron beam and formation of a BPS ------------------------------------------------------------- At the initial time $t=0$ we have an electron distribution function which is unstable. Due to fast quasilinear relaxation, electrons form a plateau in the electron distribution function and generate a high level of plasma waves. At time $t=0.1$s, the typical result of quasilinear relaxation is observed. The electron distribution function and the spectral energy density evolve in accordance with the gas-dynamic solution ([@Melnik95]; [@Melnik00a]) $$\begin{aligned} \displaystyle f(v,x,t) =\left\{\begin{array}{ll} \displaystyle \frac{n'}{v_0}\mbox{exp}\left(-\frac{(x-v_0t/2)^2}{d^2}\right), &\mbox{$v<v_0$}\\ 0,&\mbox{$v>v_0$} \end{array} \right. \label{f_hom}\end{aligned}$$ $$\begin{aligned} \label{w_hom} W(v,x,t)=\frac{mn'}{v_0\omega _{pe}}v^4 \left[1-\frac{v}{v_0}\right] \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\cr \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \times\mbox{exp}\left(-\frac{(x-v_0t/2)^2}{d^2}\right), \;\;v<v_0\end{aligned}$$ At this stage the influence of the plasma inhomogeneity is not observable. 
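The background profile (\[sin\]) and the gas-dynamic solution are straightforward to tabulate. The sketch below does only that; it is not the finite-difference scheme used to solve (\[eqk1\],\[eqk2\]), and the value of $d$ and the grids are assumptions made for illustration.

```python
# Sine-like background density (eq. [sin]) and the homogeneous-plasma
# gas-dynamic distribution of the BPS (eq. [f_hom]).  Parameter values follow
# the text except d and the grids, which are assumed.
import numpy as np

n0, alpha = 5.0e8, 1.0e-3        # ambient density [cm^-3], fluctuation amplitude
n_beam, v0 = 100.0, 1.0e10       # n' [cm^-3], beam velocity [cm/s]
d = 1.0e9                        # cloud size [cm]  (assumed)
dx = d / 12.0                    # period parameter of the fluctuations

x = np.linspace(0.0, 30.0 * d, 3000)
n_x = n0 * (1.0 + alpha * np.sin(x / dx))                 # eq. (sin)

def f_hom(v, xx, t):
    """Gas-dynamic electron distribution of the BPS, eq. (f_hom)."""
    plateau = np.where(v < v0, n_beam / v0, 0.0)
    return plateau * np.exp(-((xx - 0.5 * v0 * t) / d) ** 2)

t = 5.0                                                   # s
print("BPS centre at t = 5 s:", 0.5 * v0 * t / d, "d")    # moves at v0/2
print("plateau height n'/v0 =", float(f_hom(0.5 * v0, 0.5 * v0 * t, t)))
```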
The numerical solution of the kinetic equations and the gas-dynamic solution show that electrons propagate in a plasma accompanied by a high level of plasma waves. Since the plasma waves exist at a given point for some time, while the structure passes this point, the spectrum of the waves should change due to the wave movement. To understand the physics of the process we consider the evolution of the electron distribution function and the spectral energy density of Langmuir waves at a given point. The electron distribution function and the spectral energy density of plasma waves ---------------------------------------------------------------------------------- At every spatial point we observe two physical processes. The first process is connected with the spatial movement of a BPS, as would be the case for a homogeneous plasma ([@Melnik00]; [@Melnik00a]). The second process is the influence of plasma inhomogeneity on the Langmuir waves. Depending on the sign of the density gradient, the Langmuir wave spectrum takes on a different form. Consider the time evolution of the electron distribution function and the spectral energy density of Langmuir waves at two close points $x=15.2d$ and $x=15.47d$ (see fig. \[fig3\]). The first point is chosen in the region with increasing density and the second in the region where the density decreases with distance. The first particles arrive to these points at approximately $t\sim 1.9$s. The arriving electrons form a plateau in the electron distribution function and generate a high level of plasma waves for the time of quasilinear relaxation $\tau \approx 0.01$ s. ![The electron distribution function $f(v,x,t)$ and the spectral energy density of Langmuir waves $W(v,x,t)$ at $x=15.2d$ and at $x=15.47d$. Numerical solution of kinetic equations for a plasma with sine-like density fluctuations (\[sin\]) $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="fig3"}](ms1121f3a.eps "fig:"){width="90mm"} ![The electron distribution function $f(v,x,t)$ and the spectral energy density of Langmuir waves $W(v,x,t)$ at $x=15.2d$ and at $x=15.47d$. Numerical solution of kinetic equations for a plasma with sine-like density fluctuations (\[sin\]) $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="fig3"}](ms1121f3b.eps "fig:"){width="90mm"} The movement of the particles leads to the growth of the plateau height at the front of the structure for $1.9\mbox{ s}<t<2.7\mbox{ s}$. Due to the fact that at the front of the structure electrons come with a positive derivative, $\partial f/\partial v >0$, the level of plasma waves also increases. When the peak of the plateau height is reached at $t\approx 2.7$s the reverse process takes place. The plateau height decreases and the arriving electrons have a negative derivative $\partial f/\partial v <0$ that leads to absorption of waves. The growth and decrease of the plateau height and the level of plasma waves is typical for a homogeneous plasma ([@Melnik00a]). However, while the structure passes a given point the spectrum of Langmuir waves experiences the change. This change depends on the sign of the plasma density gradient. In the region with decreasing density ($x=15.47d$) the Langmuir waves have a negative shift in velocity space while the growing plasma density ($x=15.2d$) supplies a positive shift in phase velocity of the plasma waves. 
At the point with the positive gradient, the Langmuir waves shifted in phase velocity space are effectively absorbed by the electrons while the negative plasma gradient does not lead to the absorption of waves. This behavior results in different levels of plasma waves at two very close points with the opposite density-gradient sign. Figure \[fig3\] demonstrates the existence of accelerated electrons with $v>v_0$. These electrons are accelerated by Langmuir waves in the regions with positive plasma-density gradient. Electrons with velocity larger than the initial beam velocity have been observed in laboratory plasma experiments. This effect was also considered from an analytical standpoint by ([@Ryutov69]) in application to laboratory plasmas. Dynamics of electrons and accompanying Langmuir waves ----------------------------------------------------- The processes of wave generation at the front and absorption at the back take place at every spatial point and therefore the structure can travel over large distances, being the source of plasma waves ([@Kontar98]; [@Melnik99]; [@Melnik00a]). At time $t=5.0$s, electrons accompanied by Langmuir waves have passed over a large distance but the general physical picture remains the same (fig. \[fig4\]). Generally, electrons and Langmuir waves propagate as a BPS. At every spatial point electrons form a plateau at the electron distribution function and we have a high level of plasma waves. The electron cloud has a maximum of the electron density at $x=27d$. Plasma waves are also concentrated in this region and the maximum of Langmuir wave density is located at the maximum of electron density $x=27d$. The spectrum of Langmuir waves has a maximum close to $v\approx 0.8v_0$. The spatial profile, averaged over the plasma inhomogeneity period, is close to the result obtained for a homogeneous plasma. ![The electron distribution function $f(v,x,t)$ and the spectral energy density of Langmuir waves $W(v,x,t)$ at $t=5.0$s. Numerical solution of the kinetic equations with sine-like density fluctuations (\[sin\]) $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="fig4"}](ms1121f4a.eps "fig:"){width="90mm"} ![The electron distribution function $f(v,x,t)$ and the spectral energy density of Langmuir waves $W(v,x,t)$ at $t=5.0$s. Numerical solution of the kinetic equations with sine-like density fluctuations (\[sin\]) $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="fig4"}](ms1121f4b.eps "fig:"){width="90mm"} However, the spatial profile of Langmuir waves has a fine structure that can be seen in fig. \[fig5\]. The Langmuir waves are grouped into clumps (the regions with high level of plasma waves, following the terminology of ([@Smith79])). The size of a clump is determined by the spatial size of the density fluctuations and is equal to half of the density fluctuation period $\pi d/12\approx 0.26 d$. The maxima of Langmuir wave density are located in regions of negative plasma-density gradient and the regions with low levels of Langmuir turbulence are where the density gradient is positive. ![Detailed picture of the electron distribution function $f(v,x,t)$ and the spectral energy density of Langmuir waves $W(v,x,t)$ at $t=5.0$s. 
Numerical solution of kinetic equations with sine-like density fluctuations (\[sin\]) $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="fig5"}](ms1121f5a.eps "fig:"){width="90mm"} ![Detailed picture of the electron distribution function $f(v,x,t)$ and the spectral energy density of Langmuir waves $W(v,x,t)$ at $t=5.0$s. Numerical solution of kinetic equations with sine-like density fluctuations (\[sin\]) $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="fig5"}](ms1121f5b.eps "fig:"){width="90mm"} The other interesting result is that, while the Langmuir wave distribution is determined by the irregularities of the ambient plasma, the electron distribution function is a smooth function of distance (see fig. \[fig5\]). The electrons in the structure propagate as a continuous stream, being only slightly perturbed by the density fluctuations. The influence of plasma inhomogeneity on the electron distribution is seen in the appearance of accelerated particles with $v>v_0$ and in the fact that the maximum plateau velocity slightly decreases with time while the beam-plasma structure passes a given point (fig. \[fig3\]). The accelerated electrons tend to accumulate at the front of the structure and the decelerated electrons concentrate at the back of the structure. The energy distribution of waves -------------------------------- The energy distribution of waves $$\label{we} E_w(x,t)=\int_0^{\infty}Wdk=\omega_{pe}\int_0^{v_0}\frac{W(v,x,t)}{v^2}dv$$ is presented in fig. \[e\_sin\], where $E_0=mn'v_0^2/4$ is the initial beam energy. The energy distribution explicitly shows the correlation between the plasma and wave energy-density fluctuations. The regions of decreasing plasma density have higher levels of Langmuir turbulence than the corresponding regions with increasing plasma density. The energy distribution of waves appears to be modulated by the ambient plasma density fluctuations. On the other hand, the wave energy density distribution averaged over the period of density fluctuations has a spatial profile close to that in a homogeneous plasma. The maximum of wave energy together with the maximum of electron density propagates with the constant velocity $\approx 0.5v_0$. ![The energy density of plasma waves $E(x)$ and the local plasma frequency $f_p(x)$ (\[sin\]) as functions of distance at various times. The bold line shows the numerical solution for homogeneous plasma. Numerical solution of kinetic equations $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="e_sin"}](ms1121f6.eps){width="90mm"} The other physical effect that should be noted is the loss of energy by the structure in the form of Langmuir waves. In figs. \[fig4\],\[e\_sin\] we see that there is a small but non-zero level of plasma waves behind the beam-plasma structure. These waves are also concentrated into clumps in the regions where the plasma gradient is negative. To explain why the structure leaves plasma waves behind, we note the negative shift in phase velocity of Langmuir waves in the regions with decreasing density. Due to this shift we have more waves with low phase velocity than the electrons are able to absorb at the back. As a result, the low-velocity waves form a “trace” of the structure ([@Kontar01]). As was discussed previously, the quasilinear time is small but finite. Therefore, the BPS experiences spatial expansion ([@Kontar98]).
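The diagnostic (\[we\]) introduced above is simple to evaluate once a spectrum $W(v,x,t)$ has been tabulated. In the sketch below the gas-dynamic spectrum (\[w\_hom\]) is used merely as a stand-in for the simulated one; the grids and the value of $d$ are assumptions.

```python
# Wave energy density E_w(x) = omega_pe * int_0^{v0} W / v^2 dv  (eq. [we]),
# normalised by E_0 = m n' v0^2 / 4.  The gas-dynamic spectrum (eq. [w_hom])
# stands in for the simulated W(v,x,t).
import numpy as np

m_e, n_beam, v0, d = 9.11e-28, 100.0, 1.0e10, 1.0e9   # d assumed, as before
omega_pe = 5.64e4 * np.sqrt(5.0e8)

v = np.linspace(0.05 * v0, v0, 400)
x = np.linspace(0.0, 30.0 * d, 1000)
V, X = np.meshgrid(v, x, indexing="ij")

t = 5.0
W = (m_e * n_beam / (v0 * omega_pe)) * V**4 * (1.0 - V / v0) \
    * np.exp(-((X - 0.5 * v0 * t) / d) ** 2)

dv = v[1] - v[0]
E_w = omega_pe * (W / V**2).sum(axis=0) * dv          # simple Riemann sum over v
E_0 = m_e * n_beam * v0**2 / 4.0
print("max E_w / E_0 =", float((E_w / E_0).max()))
```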
The initial width-at-half-height of the structure is less than $2d$ whereas the spatial width of the structure at $t=5.0$s is about $5d$. Most of the energy and the majority of particles are concentrated within the width of the structure. Since the quasilinear time depends on the beam density, the quasilinear time for the particles far from the center of the structure is much larger than for the structure electrons. In these regions the time scale associated with the plasma inhomogeneity becomes comparable with the quasilinear time. Indeed, in the tail of the structure we have regions with a zero level of waves (where the Langmuir waves are absorbed by electrons when the plasma density increases) and regions with Langmuir waves (where the plasma density is decreasing). Pseudo-random fluctuations of density ------------------------------------- There is special interest in the case where the density fluctuations are random, which appears to be the case for the solar coronal plasma. A pseudo-random distribution of density fluctuations can easily be built by summing $N$ sine-like perturbations with random amplitude, phase, and period $$\label{random} n(x)= n_0(1+ \sum \limits_{i=1}^{N}\alpha _i \mbox{sin}(x/\Delta x_i+\varphi _i))$$ where $n_0\alpha_i$, $\Delta x_i$, $\varphi _i$ are the amplitude, period, and phase of a given sine-like density oscillation, respectively. The values are chosen in a range that ensures the applicability of the kinetic equations. Thus, $0<\alpha_i\leq 0.001$, $d/12\leq \Delta x_i\leq d/2$, $0<\varphi _i\leq2\pi$, $N=10$ are taken for the numerical calculations. The resulting density profile can be seen in fig. \[fig7\]. The spatial distribution of waves now has a more complex structure (fig. \[fig7\]). However, all the main results obtained for sine-like density fluctuations are also observed for pseudo-random density fluctuations (\[random\]). Firstly, the electron stream propagates in the plasma as a BPS. Secondly, the energy density profile of the Langmuir waves again shows clumps of Langmuir waves. The size of the clumps is determined by the size of the regions with negative density gradient. The distribution function of the beam electrons remains smooth, as in the previous case with sine-like density oscillations. ![The spectral energy density of Langmuir waves $W(v,x,t)$, the energy density of plasma waves $E(x)$ at $t=5.0$s, and the local plasma frequency $f_p(x)$ (\[random\]) as functions of distance. Numerical solution of kinetic equations with random density fluctuations (\[random\]) $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="fig7"}](ms1121f7a.eps "fig:"){width="90mm"} ![The spectral energy density of Langmuir waves $W(v,x,t)$, the energy density of plasma waves $E(x)$ at $t=5.0$s, and the local plasma frequency $f_p(x)$ (\[random\]) as functions of distance. Numerical solution of kinetic equations with random density fluctuations (\[random\]) $n'=100$cm$^{-3}$, $v_0=1.0\times 10^{10}$cm s$^{-1}$.[]{data-label="fig7"}](ms1121f7b.eps "fig:"){width="90mm"} The dependence of wave energy density on the amplitude of the density fluctuations is of special interest.
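A profile of the form (\[random\]) can be generated in a few lines. In the sketch below the parameter ranges follow the text, while the uniform sampling of $\alpha_i$, $\Delta x_i$, $\varphi_i$, the random seed, the value of $d$, and the grid are assumptions made for illustration.

```python
# Pseudo-random density fluctuations, eq. (random):
#   n(x) = n0 * (1 + sum_i alpha_i * sin(x / dx_i + phi_i)),  N = 10.
# Ranges follow the text; the uniform sampling, seed, d and grid are assumed.
import numpy as np

rng = np.random.default_rng(0)
n0, d, N = 5.0e8, 1.0e9, 10

alpha = rng.uniform(0.0, 1.0e-3, N)            # 0 < alpha_i <= 0.001
dx = rng.uniform(d / 12.0, d / 2.0, N)         # d/12 <= Delta x_i <= d/2
phi = rng.uniform(0.0, 2.0 * np.pi, N)         # 0 < phi_i <= 2 pi

x = np.linspace(0.0, 30.0 * d, 4000)
n_x = n0 * (1.0 + (alpha[:, None]
                   * np.sin(x[None, :] / dx[:, None] + phi[:, None])).sum(axis=0))

print("max relative density excursion:", float((n_x / n0 - 1.0).max()))
```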
From equation (\[eqk2\]) it follows that a Langmuir wave propagating with the group velocity $v_{gr}=3v_{Te}^2/v$ over the distance $\Delta l$ experiences a shift of phase velocity $$\label{delta_v} \Delta v \approx \frac{v^2}{Lv_{gr}}\Delta l.$$ Using the density profile (\[sin\]) and estimating $L\approx \Delta l/\alpha$ one derives that $$\label{delta_v2} \frac{\Delta v}{v} \approx \frac{1}{3}\left(\frac{v}{v_{Te}}\right)^2\alpha$$ where we obtain, for our parameters, a phase velocity shift $\leq 0.1 v$. Expression (\[delta\_v2\]) also demonstrates that the shift of the wave phase velocity linearly depends on the amplitude of the density fluctuations. Therefore, in the case with an arbitrary amplitude of the density fluctuations, the higher the amplitude of the plasma inhomogeneity the larger the variations of the wave energy distribution. This tendency can be observed in fig. \[fig7\]. Main results and discussion =========================== From a physical point of view it is interesting to consider the physical processes which lead to the reported results. As we see, the main physical effect, which leads to a complex spatial distribution of waves, is the shift of the phase velocity $\Delta v$ due to the wave movement. The growth rate of beam-plasma instability $$\gamma (x)=\frac{\pi \omega_{pe}}{n}v^2\frac{\partial f}{\partial v}, \label{increment}$$ also depends on distance. However, this dependency of the instability increment on local plasma density is negligible. At every spatial point we have a plateau with $\partial f/\partial v\approx 0$ and the value of $\partial f/\partial v$ is determined by the dynamics of a BPS not by the local plasma density. Therefore, the shift in phase velocity dominates the effect of instability increment dependency on distance. Indeed, if we manually exclude the terms connected with the velocity shift of the Langmuir waves, the spatial profile of waves will become smooth and the solution will be close to that obtained in the case of uniform plasma. This result agrees well with the qualitative results of ([@Coste75]). For application to the theory of type III bursts, special interest is presented by a combination of the two main properties of the solutions. On one hand, the electron beam can propagate in a plasma over large distances, and is a source of a high level of Langmuir waves. A portion of these Langmuir waves can easily be transformed into observable radio emission via nonlinear plasma processes ([@Ginzburg58]). At a scale much greater than the size of the beam, electrons and Langmuir waves propagate as a BPS that may be the source of type III bursts. The BPS propagates in inhomogeneous plasma with velocity $\approx v_0/2$ that can explain the almost-constant speed of the type III source. The finite size of the structure, the spatial expansion of the structure, and conservation of the particle number, are promising results for the theory of type III bursts. On the other hand, plasma inhomogeneity brings additional results. The spatial distribution of Langmuir wave energy is extremely spikey and the distribution of waves is fully determined by the fluctuations of the ambient plasma density. This fact is in good agreement with satellite observations ([@Robinson92c]). Moreover, following the plasma emission model, one obtains the fine structure of the radio emission. At distances about $1$AU the quasilinear time might have a large value and the characteristic time of a wave velocity shift could be comparable to the quasilinear time. 
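Estimate (\[delta\_v2\]) is easily checked for the coronal parameters used in the simulations; the short sketch below reproduces the $\Delta v \lesssim 0.1v$ shift quoted above (all input values are taken from the text).

```python
# Check of eq. (delta_v2): Delta v / v ~ (1/3) (v / v_Te)^2 * alpha,
# for v = v0 = 1e10 cm/s, v_Te = 6.7e8 cm/s and alpha = 1e-3.
v, v_Te, alpha = 1.0e10, 6.7e8, 1.0e-3
print(f"Delta v / v ~ {(v / v_Te) ** 2 * alpha / 3.0:.2f}")   # ~0.07
```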
Therefore, the region of growing plasma density may lead to the suppression of quasilinear relaxation, whereas, in regions with a decreasing density, relaxation is found. Thus the Langmuir waves might be generated in only those spatial regions where the plasma gradient is less than or equal to zero. Indeed, in the tails of a beam-plasma structure the electron beam density is low and Langmuir waves are only observed in certain regions with non-positive density gradient. Summary ======= In this paper the dynamics of a spatially bounded electron beam has been considered. Generally, the solution of the kinetic equations presents a BPS. The structure moves with an approximately constant velocity $\approx v_0/2$ and tends to conserve the number of particles. As in the case of a uniform plasma, electrons form a plateau and generate a high level of plasma waves at every spatial point. However, small-scale inhomogeneity in the ambient plasma leads to significant changes in the spatial distribution of Langmuir waves. It is found that low-intensity oscillations perturb the spatial distribution of Langmuir waves whereas the electron distribution function remains a smooth function of distance. The other interesting fact is that the distribution of waves is determined by the distribution of plasma inhomogeneities. The energy density of Langmuir waves has maxima and minima in the regions with negative and positive density gradient respectively. Nevertheless, more detailed analysis is needed. One needs to include radio emission processes in order to calculate the observational consequences of the model in greater detail. The other challenge is the detailed comparison of such numerical results with satellite observations near the Earth’s orbit. The author is extremely thankful to C. Rosenthal for his kind help in the manuscript preparation. [111]{} Benz, A.O., Zlobec, P., and Jaeggi, M.: 1982, [ A&A]{} [**109**]{}, 305. Benz, A.O., Csillaghy, A., and Aschwanden, M.J.: 1996, [A&A]{} [**309**]{}, 291. Cairns, I.H. and Robinson, P.A.: 1995, [Geophys. Res. Lett.]{} [**22**]{}, 3437. Chaizy, P., Pick, M., Reiner, M., Anderson, K.A., Phillips, J., and Forsyth, R.: 1995, [A&A]{} [**303**]{}, 583. Churaev, R.S. & Agapov, A.V.: 1980, [Sov. J. Plasma Physics]{} [**6**]{}, 232. Coste, J., Reinisich, G., Silevitch, M.B., and Montes, C.: 1975, [ Phys. Fluids]{} [**18**]{}, 679. Cronyn, W.M.: 1972, [ ApJ]{} [**171**]{}, L101. Ginzburg, V.L., and Zheleznyakov, V.V.: 1958, [ Sov. Astron. J.]{} [**2**]{}, 653. Gopalswamy, N.: 1993, [Ap. J.]{} [**402**]{}, 326. Grognard, R.J.-M.: 1985, [In Solar Radiophysics]{} ed. McLean, N.J., Labrum, N.R., Cambridge Univ. Press, 289. Goldman, M.V., and DuBois, D.F.: 1982, [Phys. Fluids]{} [**25**]{}, 1062. Hishikawa, K., and Ryutov, D.D.: 1976, [ J. of The Phys. Soc. of Japan]{} [**41**]{}, 1757. Kontar, E.P., Lapshin, V.I., Mel’nik, V.N.: 1998, [ Plasma Phys. Rep.]{} [**24**]{}, 772. Kontar, E.P.: 2001, [Plasma Phys. & Control. Fusion]{} [**43**]{}, 589. Lin, R.P., Potter, D.W., Gurnett, D.A., and Scarf, F.L.: 1981, [ApJ]{} [**251**]{}, 364. Magelssen, G.R., Smith, D.F.: 1977, [Sol. Phys.]{} [**55**]{}, 211. Mel’nik, V.N.: 1995, [ Plasma Phys. Rep.]{} [**21**]{}, 89. Mel’nik, V.N., Lapshin, V.I., Kontar, E.P.: 1999, [ Sol. Phys.]{} [**184**]{}, 353. Mel’nik, V.N., Kontar, E.P., and Lapshin, V.I.: 2000, [ Sol. Phys.]{} [**196**]{}, 199. Mel’nik, V.N., and Kontar, E.P.: 2000, [ New Astron.]{} [**5**]{}, 35. Melrose, D.B., Dulk, G.A., and Cairns, I.H.: 1986, [A&A]{} [**163**]{}, 229.
Melrose, D.B., and Goldman, M.V.: 1987, [Sol. Phys.]{} [**107**]{}, 329. Melrose, D.B.: 1990, [Sol. Phys.]{} [**130**]{}, 3. Muschietti, L., Goldman, M.V., and Newman, D.: 1985, [Sol. Phys.]{} [**96**]{}, 181. Papadopoulos, K., Goldstein, M.L., and Smith, R.A.: 1974, [ApJ]{} [**190**]{}, 175. Robinson, P.A., Cairns, I.H., and Gurnett, D.A.: 1992, [ApJ]{} [**387**]{}, L101. Robinson, P.A.: 1992, [Sol. Phys.]{} [**139**]{}, 147. Robinson, P.A. and Cairns, I.H.: 1993, [ApJ]{} [**418**]{}, 506. Ryutov, D.D.: 1969, [JETP]{} [**57**]{}, 232. Ryutov, D.D. and Sagdeev, R.Z.: 1970, [JETP]{} [**58**]{}, 739. Smith, D.F., and Sime, D.: 1979, [ApJ]{} [**233**]{}, 998. Sturrock, P.A.: 1964, AAS-NASA Symposium on Physics of Solar Flares, ed. W.N. Hess (NASA SP-50), 357. Suzuki, S., and Dulk, G.A.: 1985, Bursts of Type III and Type V, [In Solar Radiophysics]{}, ed. McLean, N.J., Labrum, N.R., Cambridge University Press, 289. Takakura, T. and Shibahashi, H.: 1976, [Sol. Phys.]{} [**46**]{}, 323. Takakura, T.: 1982, [Sol. Phys.]{} [**78**]{}, 141. Thejappa, G.T. and MacDowall, R.J.: 1998, [ApJ]{} [**498**]{}, 465. Vedenov, A.A., Gordeev, A.V., and Rudakov, L.I.: 1967, [Plasma Phys.]{} [**9**]{}, 719.
--- abstract: 'The collapse transition and its reverse, the explosion transition, in self-gravitating systems are studied by molecular dynamics simulations. A microcanonical ensemble of point particles confined to a spherical box is considered; the particles interact via an attractive soft Coulomb potential. It is observed that the collapse in the particle system indeed takes place when the energy of the uniform state is set near or below the metastability-instability threshold (collapse energy) predicted by the mean-field theory. Similarly, the explosion in the particle system occurs when the energy of the core-halo state is increased above the explosion energy, where according to the mean-field predictions the core-halo state becomes unstable. For a system consisting of 125 – 500 particles, the collapse takes about $10^5$ single-particle crossing times to complete, while a typical explosion is faster by an order of magnitude. A finite lifetime of metastable states is observed. It is also found that the mean-field description of the uniform and the core-halo states is exact within the statistical uncertainty of the molecular dynamics data.' author: - 'I. Ispolatov' - 'M. Karttunen' title: 'Collapses and explosions in self-gravitating systems' --- Introduction {#sec_intro} ============ Systems of particles interacting via a potential with attractive nonintegrable large-$r$ asymptotics, $U(r)\sim r^{-\a}$, $0<\a<3$, and a sufficiently short-range small-$r$ regularization exhibit a gravitational phase transition between a relatively uniform high-energy state and a low-energy state with a core-halo structure [@pr; @ki2; @ch1; @usg; @dv; @ch; @chs; @chi]. Extensive mean-field (MF) studies of the equilibrium properties of such systems [@pr; @ki2; @ch1; @usg; @dv; @ch; @chs; @chi] revealed that in a microcanonical ensemble, during such a transition, the system has to undergo a discontinuous jump from a state that just ceases to be a local entropy maximum to a state with the same energy but a different temperature, which is the global entropy maximum. Due to the long-range nature of the gravitational interaction, the MF studies are believed to provide asymptotically (in the infinite-system limit) exact information about the density and velocity distributions and other thermodynamical parameters of the uniform state. The applicability of the MF theory to the description of the core-halo state is less obvious, as the properties of the core are controlled by the short-range asymptotics of the potential. Relatively little is known, however, about how such a transition actually occurs. Youngkins and Miller [@bm2] performed a Molecular Dynamics (MD) study of a one-dimensional system consisting of concentric spherical shells. Their main emphasis was to check the MF description of the stable and metastable states rather than to study the dynamics of the phase transition itself. Cerruti-Sola, Cipriani, and Pettini [@pet] studied the phase diagram of a more realistic 3-dimensional particle system by using Monte Carlo and MD methods. Their studies again focused on the equilibrium properties of the system rather than on the dynamics of the transitions. In addition, their general conclusion that the gravitational phase transition is of second order apparently contradicts the MF results [@ki2; @usg; @dv; @chi]. Here, we attempt to resolve this contradiction.
An MF description of the dynamics of collapse in ensembles of self-gravitating Brownian particles with a bare $1/r$ interaction, based on a Smoluchowski equation, was developed by Chavanis et al. [@chs]. It predicts a self-similar evolution of the central part of the density distribution towards a finite-time singularity. However, the precise nature of the random force and friction terms in the corresponding Fokker-Planck equation, as well as the applicability of the overdamped limit used to reduce the Fokker-Planck equation to the Smoluchowski equation, are not entirely understood. A more rigorous approach based on the Fokker-Planck equation with the Landau collision integral was used by Lancellotti and Kiessling [@ki3] to prove a scaling property of the central part of the density profile. The model considered there allows the particles to escape to infinity and therefore does not have an equilibrium or even a metastable state. There exists a vast amount of literature on cosmologically and astrophysically motivated studies of the temporal evolution of naturally occurring self-gravitating systems (see, e.g., Ref. [@cos] and references therein). The selection of systems and of their initial and final conditions made in such studies is typically astrophysically motivated; the considered systems are often too complex to allow general conclusions about the phase diagrams and phase transitions in such systems. In this paper we present MD studies of the gravitational collapse and of the reverse, i.e., explosion, transition in a microcanonical ensemble of self-attracting particles. Besides their pure statistical mechanical implications, these studies represent our attempt to bridge a gap between the usually complicated MD and hydrodynamic simulations of realistic astrophysical systems and the MF analysis of the phase diagram of simple self-gravitating models. A system with the soft Coulomb potential $-(r^2+r_0^2)^{-1/2}$ is considered. Such systems have been studied using both MF theory (see, e.g., Refs. [@usg; @chi]) and simulations [@pet]. We chose the microcanonical ensemble as the most fundamental one for long-range interacting systems. It has to be noted that the considered system is strongly ensemble-dependent: while the nature of the uniform state is the same in both microcanonical and canonical ensembles (apart from the difference in their stability range), the core-halo states and the collapse itself in these ensembles have very little in common with each other [@dv; @chs]. An MF phase diagram of the considered self-attracting microcanonical system is presented in Fig. \[fig\_mf\] [@usg; @chi]. ![\[fig\_mf\] Plots of entropy $s(\e)$ (solid line) and inverse temperature $\b(\e)=ds/d\e$ (dashed line) vs. energy $\e$ for a system with a gravitational phase transition and a short-range cutoff.](fig1.eps){width=".45\textwidth"} High- and low-energy branches terminating at the energies $\e_{coll}$ and $\e_{expl}$ correspond to the uniform and core-halo states, respectively. The energy $\e^*$ where the entropies of the core-halo and uniform states are equal is the energy of the true phase transition; the uniform and the core-halo states are metastable in the energy intervals $(\e_{coll},\e^*)$ and $(\e^*,\e_{expl})$, respectively. However, for the phase transition to occur at or near $\e^*$, a macroscopic-scale fluctuation with prohibitively low entropy is required.
Consequently, the metastable branches are stable everywhere except within a vicinity $\Delta \e \sim N^{-2/3}$ of their end-points $\e_{coll}$ and $\e_{expl}$ [@ka2; @chi], where $N$ is the number of particles in the system. Hence it is natural to assume that once the energy of the system in the uniform state is set slightly above $\e_{coll}$ or below it, the system will undergo a collapse to a core-halo state with the same energy and higher entropy. Similarly, if the energy of the core-halo system is set slightly below $\e_{expl}$ or above it, the system will undergo an explosion bringing it to a uniform state with the same energy and higher entropy. Our goal here is to study if and how such a collapse and explosion proceed in a realistic three-dimensional $N$-particle dynamical system. The paper is organized as follows. In the next section we formally introduce the system, outline the MF analysis, and describe the MD setup. Then we present the simulation results for the equilibrium uniform and core-halo states and compare them to the MF predictions. After that we describe and interpret the observed dynamics of the collapse and the explosion transitions. A discussion of the obtained results concludes the paper. Simulation ========== We consider a system consisting of $N$ identical particles of unit mass confined to a spherical container of radius $R$ with reflecting walls. The particles interact via the attractive soft Coulomb pair potential $-(r^2+r_0^2)^{-1/2}$. Using a traditional convention for self-gravitating systems, in which the equilibrium properties of such systems become universal, we define the rescaled energy $\e$, inverse temperature $\b$, distance $x$, velocity $u$, and time $\t$ as $$\begin{aligned} \label{def} \nonumber \e\equiv E{R\over N^2}\\ \nonumber x\equiv{r\over R}\\ \b\equiv{N\over RT}\\ \nonumber u \equiv v \sqrt{R \over N}\\ \nonumber \t\equiv t {N^{1/2}\over R^{3/2}}.\end{aligned}$$ The unit of time, often referred to as the crossing time, $[t]=\frac {R^{3/2}} {N^{1/2}}$, is obtained by dividing the unit of length $R$ by the unit of velocity $\sqrt{N/R}$. This unit of time is also proportional to the period of plasma oscillations in a medium with the charge concentration $N/R^3$. As this time unit has a purely kinematic origin, we do not expect the evolution of systems having different $N$ and $R$ to be universal in time $\t$. The evolution, assuming that it is collisional, is expected to be universal in the relaxation time $\t_{r}=\t{\ln N\over N}$ [@bt], where the factor $N/ \ln N$ is proportional to the number of crossings a typical particle needs to change its velocity by a factor of 2 through weak Coulomb scattering events. The soft core radius $x_0=r_0/R=5\times 10^{-3}$ is chosen to be well below the critical value $x_{gr}\approx 0.021$, above which the collapse-explosion transition is replaced by a normal first-order phase transition [@chi]. The MF theory of the system is described in detail in Ref. [@usg]. The equilibrium velocity distribution is Maxwellian and isothermal, while the equilibrium (saddle-point) density profile $\r({x})$ corresponding to a stable or a metastable state is a spherically symmetric solution of the integral equation (\[extr\]). This equation replaces the Poisson-Boltzmann differential equation for the self-consistent potential (see, for example, Ref. [@pr]) since the interparticle interaction considered here is not purely Coulombic.
![\[fig\_rho\] MF density profiles $\r_u({x})$ of a uniform state (dashed line) and $\r_{c-h}({x})$ of a core-halo state (solid line) for $\e=\e_{coll}$. ](fig2.eps){width=".45\textwidth"} $$\begin{aligned} \label{extr} \nonumber \r({\bf x})=\r_0 F[\r(.), {\bf x}]\\ \nonumber F[\r(.),{\bf x}]=\exp\left [\b \int {\r({\bf x}')\over \sqrt{({\bf x}-{\bf x}')^2+x_0^2}}d^3{\bf x}'\right ]\\ \b={3\over 2} \left[\e + {1\over 2}\int \int{\r({\bf x}_1) \r({\bf x}_2) \over \sqrt{({\bf x}_1-{\bf x}_2)^2+x_0^2} }d^3{\bf x}_1 d^3{\bf x}_2 \right ]^{-1}\\ \nonumber \r_0=\left\{\int F[\r_s(.), {\bf x}]d^3{\bf x}\right\}^{-1}\end{aligned}$$ The equilibrium density profile $\r({x})$ obtained from this equation is then used to calculate the entropy and the pressure. The MF phase diagram of the system is presented in Fig. \[fig\_mf\]. The collapse and explosion energies are $\e_{coll}\approx -0.339$ and $\e_{expl}\approx 0.267$. Examples of the uniform and the core-halo density profiles for $\e=\e_{coll}$ are shown in Fig. \[fig\_rho\]. In the MD simulations we consider a system consisting of $N=125$ – 500 particles in a spherical container of radius $R=1$. All interparticle forces are calculated directly at each time step $dt$. This is done to avoid any mean-field-like effects inevitably present in any truncated multipole or Fourier potential expansion. The particle velocities and coordinates are updated according to the velocity-Verlet algorithm, which is symplectic and reversible. The system is initialized by randomly distributing particles according to a spherically symmetric density profile; typically the appropriate MF density profile $\r(x)$ was used. The potential energy ($U$) of the initial configuration is calculated, and the target kinetic energy $E_k=E-U$ is determined. The particle velocities are randomly generated from some (usually Maxwell) distribution with the appropriate square average. Finally, the deviation of the total energy from its target value, caused by the stochasticity of the velocity assignment, is determined, and the velocities are rescaled to fine-tune the total energy. Due to the isotropy of the random velocity assignment, we always obtained states with sufficiently low total angular momentum, which collapsed to single-core states rather than to binaries [@grv]. To implement the reflective boundary condition, at each time step the normal components $v_{\perp}$ of the velocities of all particles which had escaped from the container were reversed. The values of the normal components were stored to evaluate the pressure on the wall $P$: $$\label{P} P(t)=\frac{\sum_{t'=t-t''/2}^{t'=t+t''/2}v_{\perp}(t')}{2\pi R^2 t''}$$ During each simulation run we measured such characteristics as the kinetic energy $\e_{kin}=\frac{3}{2\b}$, the virial variable $\s$ (with the dimension of energy) quantifying deviations from the virial theorem, $\s \equiv \e + \e_{kin} -\frac{3 PV R}{N^2}$ (where $P$ is the pressure on the wall, $V=4\pi R^3/3$ is the volume of the container, and the factor $N^2/R$ rescales the volume-pressure term to the unit of energy introduced in (\[def\])), the ratio of the velocity moments, $ \frac{5 \langle v^2 \rangle ^2}{ 3 \langle v^4 \rangle }$ (which should be 1 for the Maxwell distribution), and the number of particles $N_c$ in the core of prescribed radius $x_c$. For the last measurement we count the number of particles $N_i$ that are within $x_c$ of the $i$th particle and find the particle which has the largest $N_i$.
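A minimal sketch of this core-counting procedure is given below; it assumes the particle positions are stored in an $N\times 3$ NumPy array in the rescaled units, and the brute-force $O(N^2)$ distance evaluation is adequate for the particle numbers considered here.

```python
import numpy as np

def find_core(positions, x_c):
    """Return (index of the core-defining particle, number of particles in its core)."""
    # pairwise distances; O(N^2) memory, adequate for N of a few hundred particles
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    neighbour_counts = (dist < x_c).sum(axis=1)   # each count includes the particle itself
    i_core = int(np.argmax(neighbour_counts))
    return i_core, int(neighbour_counts[i_core])

if __name__ == "__main__":
    # Example with particles distributed uniformly inside the unit sphere (a placeholder state).
    rng = np.random.default_rng(0)
    n = 250
    directions = rng.normal(size=(n, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = rng.random(n) ** (1.0 / 3.0)
    positions = directions * radii[:, None]
    print(find_core(positions, x_c=1e-2))
```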
In addition, we measured the histograms of the velocity distribution and radial distribution functions, $W(u)$ and $C(x)$, respectively. The latter was defined as the number of particles in the spherical layer of radius $x$ around each particle, normalized by the volume of such a layer, disregarding the nonuniformity of the system and the boundary effect. The measurements of the "scalar" quantities such as energy, kinetic energy, pressure, and velocity distribution moments were taken in time intervals $\t_{meas}$, which were selected sufficiently long to avoid measuring the unchanged configuration repeatedly and sufficiently short not to miss the important details of the system evolution. We usually picked $\t_{meas}$ of the order of the uniform-density sphere crossing time $\t_{cross}^u=\pi$, which is a half-period of the oscillation of a particle released with zero velocity at the container wall. The histogram data, such as the velocity distribution and the radial distribution functions, were incremented at each $\t_{meas}$ and accumulated over a longer time period $\t_{hist}$, $\t_{hist}\sim 10$ – $10^3 \times \t_{meas}$. Our attempts to resolve the high-density part of the radial density profile of the system turned out to be fruitless due to the strong fluctuations in the position of this part. These fluctuations result in smearing of the central peak in both the core-halo and the low-energy uniform states. Working in the center-of-mass reference frame does not resolve this difficulty since, despite being dense, the core typically contains only 10 – 20% of the total system mass (see below) and the positions of the core and of the center of mass of the system do not usually coincide. To control the quality of the simulation, we monitored the total energy $\e$ and the total angular momentum $L$. We selected a timestep $dt$ small enough to keep the total energy variation within 0.01% of its initial value; usually we used $dt =10^{-5}$, or in rescaled units, $d\t \sim 10^{-4}$. For such time steps, the relative deviation of the angular momentum was within $10^{-14}$. All the measurements below are presented in the rescaled dimensionless units as defined in Eq. (\[def\]). Uniform and core-halo equilibrium states: comparison to the MF ============================================================== To check our simulation procedure and possibly resolve the apparent contradiction between the MF and the particle simulation results [@pet], we first considered the system in what we expected to be a stable or a metastable state far away from a transition point. Since we were interested in the equilibrium properties, we initialized the MD systems according to the corresponding MF predictions. This meant that the density profiles were seeded according to the MF profiles and the velocities were assigned according to the Maxwell distribution. We observed that the MF density initialization virtually eliminates the transitory period, while the method of velocity assignment was practically unimportant, provided that it gave the correct value for the total kinetic energy. For example, it takes a system initialized with a flat $W({\bf u})=const$ about $\t\sim \t_r$ to evolve to the Maxwell distribution.
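A schematic version of this initialization step might look as follows. The inverse integrated profile used to seed the radii is a placeholder for the actual MF profile, and inverse-transform sampling is only one possible way to realize the seeding; the energy bookkeeping follows the procedure described above, in simulation units with unit particle masses.

```python
import numpy as np

def initialize(inverse_profile, n, total_energy, r0=5e-3, rng=None):
    """Seed positions from a spherically symmetric profile and assign velocities.

    inverse_profile : callable mapping a uniform random number in [0, 1) to a radius;
        it stands in for the inverse of the integrated MF profile f(x).
    total_energy    : target total energy in simulation units (E = eps * N**2 / R).
    """
    rng = np.random.default_rng() if rng is None else rng
    # isotropic directions, radii drawn from the prescribed radial profile
    d = rng.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    x = d * inverse_profile(rng.random(n))[:, None]
    # potential energy of the soft Coulomb pair interaction -1/sqrt(r^2 + r0^2)
    diff = x[:, None, :] - x[None, :, :]
    r2 = (diff ** 2).sum(axis=-1)
    u_pot = -np.sum(np.triu(1.0 / np.sqrt(r2 + r0 ** 2), k=1))
    # Maxwellian velocities rescaled so that kinetic + potential = total energy
    v = rng.normal(size=(n, 3))
    v *= np.sqrt((total_energy - u_pot) / (0.5 * (v ** 2).sum()))
    return x, v

# Example: flat profile inside the unit sphere, total energy for eps = -0.3, N = 250, R = 1.
x, v = initialize(lambda u: u ** (1.0 / 3.0), n=250, total_energy=-0.3 * 250 ** 2)
```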
A typical plot of the steady-state time dependence of the kinetic energy, virial variable, and total energy is presented in Fig. \[fig\_td\]. ![\[fig\_td\] Plots of the time dependence of the kinetic energy $e_{kin}$ (solid line), virial variable $\s$ (dashed line), and total energy $\e$ (dotted line) for a uniform system of $N=250$ particles at $e=-0.3$. ](fig3.eps){width=".45\textwidth"} The comparison between the MD measurements and the MF results for the uniform and core-halo states is presented in Tables \[tab\_un\] and \[tab\_ch\] and reveals a perfect agreement between these two sets of data.

  \[tab\_un\]                                           MD                               MF
  ----------------------------------------------------- -------------------------------- ---------
  $\e$                                                   $-0.3 \pm 5\times 10^{-7}$       -0.3
  $\e_{kin}$                                             $0.66 \pm 0.05$                  0.644
  $\s$                                                   $0\pm 0.03$                      0.012
  $5 \langle v^2 \rangle ^2 / 3 \langle v^4 \rangle$     $1.01\pm0.04$                    1

  : Equilibrium MD and MF results for a uniform state for $\e=-0.3$, $N=250$, and $0\leq \t \leq 5000$

  \[tab\_ch\]                                           MD                               MF
  ----------------------------------------------------- -------------------------------- ---------
  $\e$                                                   $-0.3392 \pm 2\times 10^{-4}$    -0.339
  $\e_{kin}$                                             $2.9 \pm 0.1$                    2.94
  $\s$                                                   $-1.5 \pm 0.1$                   -1.46
  $5 \langle v^2 \rangle ^2 / 3 \langle v^4 \rangle$     $0.99 \pm 0.03$                  1
  $N_{core}$                                             $48\pm 2$                        47.6

  : Equilibrium MD and MF results for a core-halo state at $\e=-0.339$, $N=250$, and $0\leq \t \leq 1500$

To obtain the expression for the MF virial variable $$\label{vir} \s_{MF}=\e+\e_{kin}(1-8\pi\r(1)/3)$$ we write the pressure at the container wall as $P=2\rho(x=1)\e_{kin}/3$, using the fact that the system is isothermal. Since the interparticle potential is not purely Coulombic, the virial variable is non-zero. The difference is especially prominent for the core-halo states, where more particles "probe" the short-range part of the potential. To evaluate the core radius and the number of core particles of the core-halo system, we considered the integrated MF density profile, $f(x)=\int_0^x 4 \pi y^2 \r(y) dy$ (see Fig. \[fig\_irho\]). ![\[fig\_irho\] Integrated MF density profiles $f_u({x})$ of a uniform state (dashed line) and $f_{c-h}({x})$ of a core-halo state (solid line) for $\e=\e_{coll}$. ](fig4.eps){width=".45\textwidth"} As follows from the figure, the MF core-halo state indeed contains a distinct core with a sharp boundary of radius $x_c\approx10^{-2}$, relatively insensitive to the energy in the range we considered, $|\e|<0.5$. Using this MF core radius, we located cores in the MD core-halo systems which contained a number of particles very similar to that of the MF cores (see Table \[tab\_ch\]). Using a smaller core radius resulted in a significant reduction in the number of observed core particles. A reasonably small over-estimation of the core radius did not affect the results of the MD measurements: we observed that even in a sphere of twice the core radius the number of particles is only marginally (at most by 8%) larger than in the core. To check if the system has more than one core, we performed a search for the second-largest core of the same radius $x_c$. We looked for the largest group of particles which are within $x_c$ of a single particle, with none of these particles belonging to the first, largest core. We never observed the second-largest core containing more than 2 particles; most of the time it contained only a single one.
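For reference, the integrated profile $f(x)$ and the corresponding mean-field estimate $N_{core}=N f(x_c)$ can be obtained with a few lines of quadrature; the tabulated profile used in the example below is a crude stand-in for the actual solution of Eq. (\[extr\]).

```python
import numpy as np

def integrated_profile(x_grid, rho):
    """f(x) = int_0^x 4 pi y^2 rho(y) dy for a tabulated, spherically symmetric rho."""
    integrand = 4.0 * np.pi * x_grid ** 2 * rho
    # cumulative trapezoidal integration, with f(0) = 0
    f = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x_grid))))
    return f

def core_particles(x_grid, rho, n_total, x_c=1e-2):
    """Mean-field estimate N_core = N * f(x_c), interpolating the integrated profile."""
    f = integrated_profile(x_grid, rho)
    return n_total * np.interp(x_c, x_grid, f)

if __name__ == "__main__":
    # Crude placeholder core-halo profile (an assumption, not the MF solution):
    x = np.linspace(1e-4, 1.0, 2000)
    rho = 1.0e5 * np.exp(-(x / 5e-3) ** 2) + 0.1   # sharp core + flat halo, illustrative only
    rho /= integrated_profile(x, rho)[-1]          # normalize to unit total mass
    print(core_particles(x, rho, n_total=250))
```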
In Fig. \[fig\_vel\] we present the MD velocity distribution functions $W(u)$ for a core-halo and a uniform state; the $W(u)$ shown confirm the MF prediction of the Maxwellian form of these distributions. ![\[fig\_vel\] The MD velocity distribution functions $W(u)$ of a core-halo state with $\e=-0.339$ (solid line) and a uniform state with $\e=-0.3$ (dashed line). In both cases $N=250$.](fig5.eps){width=".45\textwidth"} As we mentioned in the previous section, we were unable to resolve the high-density part of the radial density profile due to the core motion. However, an indirect comparison between the radial distributions of particles in the MF and MD was made using the radial distribution function. The MF radial distribution function $C_{MF}(x)$ was computed as $$\label{rdf} C_{MF}(x)= \frac{1}{4\pi x^2}\int\r({\bf x}')\r({\bf x + x}')d {\bf x}'.$$ The good agreement between the MF and the MD radial distribution functions is illustrated in Fig. \[fig\_rdf\]. This indicates that the mutual distribution of particles is correctly predicted by the MF theory. ![\[fig\_rdf\] MF (dashed line) and MD (solid line) radial distribution functions $C(x)$ of a core-halo state with $\e=0.25$. The step at $x=1$ in the MF $C(x)$ is caused by the localization of the core exactly at $x=0$ and the sharp boundary of the container.](fig6.eps){width=".45\textwidth"} To summarize, for all the quantities considered, we observed no systematic deviations between the MF theory and the MD data. Collapse ======== According to the MF theory, if the energy of the uniform state becomes lower than $\e_{coll}\approx-0.339$, the system should undergo a collapse to a core-halo state. To study the collapse, we considered several uniform systems with energies ranging between $\e=-0.5$ and $\e=-0.3$. The systems were initialized according to the MF density distributions. For systems with $\e<\e_{coll}$ the particles were distributed according to the MF density profile for $\e_{coll}$. In perfect agreement with the MF theory, a uniform state with $\e<\e_{coll}$ undergoes a gradual transition to a core-halo state with a typical timescale of $\t_{coll}\sim 10^4$ for $N=125$ – 250 particles. An example of the time dependence of the kinetic energy and the virial variable for a collapsing system is shown in Fig. \[fig\_coll\].
The data is averaged over $\d \t=100$ time intervals.](fig8.eps){width=".45\textwidth"} In the above examples, the energy was set to $\e=-0.5$ which is well below $\e_{coll}\approx-0.339$, and as a consequence the collapse started immediately at $\t=0$ in all simulation runs. If the system energy is $\e_{coll}$, the noticeable increase in kinetic energy and decrease of the virial variable, characteristic for collapse, start not exactly at $\t=0$ but with a small delay (Fig. \[fig\_ec1\]) which varies from run to run from almost zero to about $\t\approx 1500$. This indicates that the MD system is able to overcome the metastability at or near $\e_{coll}$. The observed uncertainty is likely due to the relatively small number of particles. ![\[fig\_ec1\] Plots of the kinetic energy $\e_{kin}$ (top) and the virial variable $\s$ (bottom) vs time $\t$ for a system with $\e=\e_{coll}\approx-0.339$ and $N=250$.The data is averaged over $\d \t=100$ time intervals.](fig9.eps){width=".45\textwidth"} As we increase the energy above $\e_{coll}$, the stability of the uniform state increases which results in a longer lifetime of such state with respect to collapse. In Fig. \[fig\_met\], an evolution of a system with $\e=-0.3$ is shown. The system stays in the uniform state for about $\d \t \approx 5000$ before the collapse starts, after which the evolution proceeds qualitatively similar to the collapses in systems with lower energies. ![\[fig\_met\] Plots of the kinetic energy $\e_{kin}$ (top) and virial variable $\s$ (bottom) vs time $\t$ for a system with $\e=-0.3$ and $N=250$. The data is averaged over $\d \t=100$ time intervals.](fig10.eps){width=".45\textwidth"} To compare the temporal evolution of the kinetic energy, virial variable, and the number of core particles, the relative variables $\e'_{kin}(\t)$, $\s'_{kin}(\t)$, and $N'_{core}(\t)$, all defined as $\e'_{kin}(t)=[\e_{kin}(t)-\e_{kin}(u)]/[\e_{kin}(c-h)-\e_{kin}(u)]$, are plotted in Fig.\[fig\_ec\]. The values $\e_{kin}(u)$ and $\e_{kin}(c-h)$ correspond to the uniform and core-halo states in equilibrium. ![\[fig\_ec\] Plots of the relative values of (from top to the bottom) number of core particles $N'_{core}(\t)$, virial variable $\s'_{kin}(\t)$, and kinetic energy $\e'_{kin}(\t)$ for the system with $\e=-0.339$ and $N=250$.](fig11.eps){width=".45\textwidth"} Figure \[fig\_ec\] indicates that during the initial stages of collapse the core grows faster than the kinetic energy and the virial variable. In addition, one can notice large reversible fluctuations in the number of core particles (the core grows up to 12% of its final value and then disappears) that are not matched by comparable scale fluctuations in the kinetic energy or virial variable. All these observations suggest that the density evolution causing the core formation plays the leading role in the process of collapse while the relaxation of kinetic energy follows. Once the collapse has started, the core grows to about a half of its final size in only $\d\t_{core}\sim 10^3$ for systems with $N=125$ – 500 particles, while the changes in kinetic energy during this interval of time are small. After this rapid initial stage the system relaxes more slowly, and finally after $\t_{coll}\sim 10^5$ reaches the equilibrium core-halo state. Our observations strongly suggest that the growth of the core takes place through a sequential absorption of single particles rather than through hierarchical merging of smaller cores: We never detected other cores containing more than two particles. 
Although the kinetic energy relaxation trails behind the core formation, the velocity distribution function remains Maxwellian throughout the whole evolution, with the temperature corresponding to the current value of the kinetic energy. This is caused by the fast velocity relaxation ($\t_{vel} \leq 1$), as discussed in the previous section. Explosion ========= It is natural to assume that if a system exhibits a collapse, it should also exhibit an explosion, which is the reverse of the collapse transition. According to the MF theory, such an explosion should take place when the core-halo state becomes unstable, i.e., when $\e \geq \e_{expl}\approx 0.267$. To check this prediction, we initialized the MD system according to the MF equilibrium core-halo state and followed its evolution. As in the study of the collapse, for initial states with $\e > \e_{expl}$ we used the MF density profiles of the highest-energy locally stable state, i.e. of the state with $\e = \e_{expl}$. We observe that a system with sufficiently high energy, such as $\e =0.5$ in Fig. \[fig\_expl\] or $\e =0.4$ in Fig. \[fig\_expla\], indeed undergoes an explosion which brings it to the uniform equilibrium state. During such an explosion, the state variables such as kinetic energy and virial variable continuously change from their equilibrium core-halo state values to the uniform-state ones, and the core gradually sheds particles until only one particle is left. ![\[fig\_expl\] Plots of the kinetic energy $\e'_{kin}(\t)$ (top) and relative number of core particles $N'_{core}(\t)$ (bottom) (defined as in Fig. \[fig\_ec\]) vs time $\t$ for a system with $\e=0.5$ and $N=250$. The kinetic energy is averaged over $\d \t=100$ time intervals. ](fig12.eps){width=".45\textwidth"} ![\[fig\_expla\] Same as in Fig. \[fig\_expl\] but for $\e=0.4$. ](fig13.eps){width=".45\textwidth"} The main features of an explosion (Figs. \[fig\_expl\] and \[fig\_expla\]) resemble those of a time-reversed collapse. The kinetic energy evolves relatively uniformly, while the number of core particles changes only slightly during the first stages of evolution and rapidly decreases at the final stages. In the example presented in Fig. \[fig\_expl\], the explosion is complete after the time $t_{expl}\approx 15000$, which is noticeably less than the time for a collapse, $t_{coll}\approx 10^5$ (see Fig. \[fig\_N\]), for a system having the same number of particles ($N=250$). However, the latter is rather vaguely defined due to larger fluctuations in a core-halo than in a uniform state. Similarly to a collapse, the system remains thermalized in velocity space during an explosion. The velocity distribution remains Maxwellian throughout the evolution, with the temperature corresponding to the current value of the kinetic energy. As an illustration, Fig. \[fig\_velm\] shows the ratio of the moments of the velocity distribution, $5 \langle v^2 \rangle ^2 / 3 \langle v^4 \rangle$, which should be 1 for a Gaussian distribution. ![\[fig\_velm\] Plot of the ratio of the moments of the velocity distribution, $5 \langle v^2 \rangle ^2 / 3 \langle v^4 \rangle$, vs. time $\t$ for a system with $\e=0.5$ and $N=250$.](fig14.eps){width=".45\textwidth"}
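This normalized moment ratio (unity for a Maxwellian) is straightforward to evaluate from the sampled velocities; a minimal sketch, using a synthetic Maxwellian sample of the same size as the simulated systems, is:

```python
import numpy as np

def maxwell_moment_ratio(velocities):
    """Ratio 5<v^2>^2 / (3<v^4>) of the speed moments; equals 1 for a Maxwellian."""
    v2 = (velocities ** 2).sum(axis=1)          # squared speeds
    return 5.0 * np.mean(v2) ** 2 / (3.0 * np.mean(v2 ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    v = rng.normal(scale=1.7, size=(250, 3))    # synthetic Maxwellian sample (arbitrary temperature)
    print(maxwell_moment_ratio(v))              # close to 1, up to O(1/sqrt(N)) statistical noise
```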
However, as is evident from a comparison between Figs. \[fig\_expl\] and \[fig\_expla\], as $\e$ gets closer to $\e_{expl}$ the explosion takes longer to initiate. We have observed that even for $\e=0.3$, which is noticeably larger than $\e_{expl} \approx 0.267$, the explosion does not happen during the first $\t=30000$ of evolution. This suggests that either the MF value for $\e_{expl}$ is incorrect, or during the initialization of the system we somehow prepare the system not exactly in the equilibrium (metastable) core-halo state. If the latter is the case, a deviation from equilibrium most probably takes place in the core since, because of its compactness, its equilibration with the rest of the system may take a rather long time. Using the current MD setup, we were unable to determine a reason for this apparent discrepancy. CONCLUSION ========== In the previous sections we have presented the following molecular dynamics results for self-attracting systems with a soft Coulomb potential:

- A collapse from a uniform to a core-halo state was observed. The timescale for the collapse in systems consisting of 125 – 500 particles is of the order of $10^5$ crossing times and is by the same factor longer than the timescale of the velocity relaxation. The collapse starts with a fast growth of a core via absorption of single particles and continues with a more gradual relaxation towards an equilibrium core-halo state. Metastable states exhibit a finite lifetime before collapsing.

- A reverse to collapse, i.e., an explosion transition from a core-halo to a uniform state, was observed. The explosion time is considerably shorter than the collapse time, being of the order of $10^4$ crossing times (125 – 500 particles). An explosion resembles a time-reversed collapse; the core decrease, which happens by shedding individual particles, trails the kinetic energy evolution until the last stages, when the core rapidly disappears.

- Such molecular dynamics characteristics of the equilibrium or metastable uniform and core-halo states as the kinetic energy, wall pressure, number of core particles, particle-particle radial distribution function, and velocity distribution function are found to be equal, within the statistical uncertainty of the molecular dynamics measurements, to the corresponding mean-field predictions.

The long collapse time observed in our simulations appears to be an explanation for the apparent discrepancy between the phase diagram presented in [@pet] and the mean-field phase diagram (see, for example, [@chi]). The relaxation time allowed in [@pet] before the measurements of what was considered to be a steady state, $t_{rel}=3N/|EN|^{3/2}$, which is apparently equivalent to $\t_{rel}<1$, is by far insufficient for a system to collapse. Therefore, the discontinuities in the caloric curves $\beta$ vs $\e$, typical for the collapse and explosion gravitational transitions, were not observed in [@pet]. Although we considered systems only with the soft Coulomb potential, we speculate that a similar agreement between the mean-field and molecular dynamics equilibrium properties of the core-halo state exists for all "soft" long-range (like a Fourier-truncated Coulomb) potentials. This is so because all soft potentials are effectively longer-ranged than the bare Coulomb one. However, the core-halo state in a system with a "harder" short-range cutoff may have completely different properties from the one considered above, and its mean-field theory may be inadequate. As for the uniform states, their properties are virtually independent of the nature of the cutoff (see, for example, [@chi]) and their mean-field description is universally correct.
The main goal of the paper was to check the existence of collapses and explosions and the validity of the mean-field data for self-gravitating systems with a short-range cutoff. For this goal, one or a few molecular dynamics runs for each considered system were sufficient. However, to be able to study the dynamical features of collapses and explosions in more detail and to compare the simulation results to various theoretical models, one needs to study the relaxation averaged over many initial configurations. For example, an interesting question is whether a collapse (or an explosion) indeed consists of two stages: a first, fast stage of collisionless "violent relaxation" with a particle-number-independent rate, and a slower, second stage of soft collisional relaxation with characteristic time $\t_r$ (see, for example, [@bt] and references therein). Another important question is to resolve the apparent contradiction between the mean-field prediction for $\e_{expl}$ and the molecular dynamics observations, outlined at the end of the previous section. Such studies require a more efficient computation code. The main improvement would possibly come from a better force calculator that may include various mean-field-like potential expansions, which are qualitatively justified by this study. We leave this for future research. acknowledgments =============== The authors are thankful to P.-H. Chavanis and E. G. D. Cohen for helpful and inspiring discussions and gratefully acknowledge the support of Chilean FONDECYT under grants 1020052 and 7020052. M.K. would like to thank the Department of Physics at Universidad de Santiago for warm hospitality. T. Padmanabhan, Phys. Rep. [**188**]{}, 285 (1990). B. Stahl, M. K.-H. Kiessling, and K. Schindler, Planet. Space Sci. [**43**]{}, 271 (1995). P.-H. Chavanis and J. Sommeria, Mon. Not. R. Astr. Soc. [**296**]{}, 569 (1998). V. P. Youngkins and B. N. Miller, Phys. Rev. E [**62**]{}, 4583 (2000). I. Ispolatov and E. G. D. Cohen, Phys. Rev. E [**64**]{}, 056103 (2001); Phys. Rev. Lett. [**87**]{}, 210601 (2001). H. J. de Vega and N. Sánchez, Nucl. Phys. B [**625**]{}, 409 (2002). P.-H. Chavanis, Phys. Rev. E [**65**]{}, 056123 (2002). P.-H. Chavanis, C. Rosier, and C. Sire, Phys. Rev. E [**66**]{}, 036105 (2002). S. Chandrasekhar, [*An Introduction to the Study of Stellar Structure*]{} (Dover Publications, 1958), Ch. 11. P.-H. Chavanis and I. Ispolatov, Phys. Rev. E [**64**]{}, 056103 (2001). J. Binney and S. Tremaine, [*Galactic Dynamics*]{} (Princeton Series in Astrophysics, 1987). J. Katz and I. Okamoto, Mon. Not. R. Astr. Soc. [**317**]{}, 163 (2000). M. Cerruti-Sola, P. Cipriani, and M. Pettini, Mon. Not. R. Astr. Soc. [**328**]{}, 339 (2001). P. Hut, M. M. Shara, S. J. Aarseth, R. S. Klessen, J. C. Lombardi Jr., J. Makino, S. McMillan, O. R. Pols, P. J. Teuben, R. F. Webbink, to appear in New Astronomy, astro-ph/0207318. C. Lancellotti and M. Kiessling, Astrophys. J. [**549**]{}, L93 (2001). E. V. Votyakov, H. I. Hidmi, A. De Martino, D. H. E. Gross, Phys. Rev. Lett. [**89**]{}, 031101 (2002).
--- author: - 'M. Wernli' - 'L. Wiesenfeld' - 'A. Faure' - 'P. Valiron' bibliography: - 'cyano\_v3.bib' date: 'Received / Accepted ' title: 'Rotational Excitation of HC$_3$N by H$_2$ and He at low temperatures' --- Introduction ============ Cyanopolyyne molecules, with general formula HC$_{2n+1}$N, $n\ge 1$, have been detected in a great variety of astronomical environments and belong to the most abundant species in cold and dense interstellar clouds [@bell97]. One of these, HC$_{11}$N, is currently the largest unambiguously detected interstellar molecule [@bell85]. The simplest one, [$\mathrm{HC_3N}$]{}(cyanoacetylene), is the most abundant of the family. In addition to interstellar clouds, [$\mathrm{HC_3N}$]{}has been observed in circumstellar envelopes [@pepe04], on Saturn's satellite Titan [@kunde81], in comets [@bockelee00] and in extragalactic sources [@mauersberger90]. Furthermore, [$\mathrm{HC_3N}$]{}has been detected both in the ground level and in excited vibrational levels, thanks to the presence of low-lying bending modes [e.g. @wyrowski03]. Owing to a low rotational constant and a large dipole moment, cyanoacetylene lines are thus observable over a wide range of excitation energies and [$\mathrm{HC_3N}$]{}is therefore considered a very good probe of physical conditions in many environments. Radiative transfer models for the interpretation of observed [$\mathrm{HC_3N}$]{}spectra require the knowledge of the collisional excitation rates participating in line formation. To the best of our knowledge, the only available collisional rates are those of @green78 for the rotational excitation of HC$_3$N by He below 100 K. In cold and dense clouds, however, the most abundant colliding partner is H$_2$. In such environments, para-[$\rm H_2$]{}is populated only in the $J=0$ level and may be treated as a spherical body. @green78 and @dickinson82 postulated that the collisional cross-sections with para-[$\rm H_2$]{}$(J=0)$ are similar to those with He (thus assuming an identical interaction and insensitivity of the scattering to the reduced mass). As a result, rates for excitation by para-[$\rm H_2$]{}were estimated by scaling the rates for excitation by He, while rates involving ortho-[$\rm H_2$]{}were not considered. In the present study, we have computed new rate coefficients for rotational excitation of [$\mathrm{HC_3N}$]{}by He, para-[$\rm H_2$]{}($J=0$) and ortho-[$\rm H_2$]{}($J=1$), in the temperature range 5$-$20 K for He and 5$-$100 K for H$_2$. A comparison between the different partners is presented and the collisional selection rules are investigated in detail. The next section describes details of the PES calculations. The cross-section and rate calculations are presented in Section \[sec:cross\]. A discussion and a first application of these rates is given in Section \[sec:disc\]. Conclusions are drawn in Section 5. The following units are used throughout unless otherwise stated: bond lengths and distances in Bohr; angles in degrees; energies in cm$^{-1}$; and cross-sections in $\AA^2$. Potential energy surfaces {#sec:pot} ========================= Two accurate interatomic potential energy surfaces (PES) have recently been calculated in our group for the interaction of [$\mathrm{HC_3N}$]{}with He and H$_2$. Both surfaces involved the same geometrical setup and similar *ab initio* accuracy. An outline of those PES is given below, while a detailed presentation will be published in a forthcoming article.
In the present work, we focus on low-temperature collision rates, well below the threshold for the excitation of the lower bending mode $\nu_7$ at 223 cm$^{-1}$. The collision partners may thus safely be approximated to be rigid, in order to keep the number of degrees of freedom as small as possible. For small van der Waals complexes, previous studies have suggested [@jeziorska00; @jankowski05] that properly averaged molecular geometries provide a better description of experimental data than equilibrium geometries ($r_e$ geometries). For the $\rm H_2O$ – [$\rm H_2$]{}system, geometries averaged over ground-state vibrational wave-functions ($r_0$ geometry) were shown to provide an optimal approximation of the effective interaction [@faure05; @wernlithese]. Accordingly, we used the [$\rm H_2$]{}bond separation $r_{\rm HH}= 1.44876$ Bohr obtained by averaging over the ground-state vibrational wave-function, similarly to previous calculations [@hodges04; @faure05; @wernli06]. For [$\mathrm{HC_3N}$]{}, as vibrational wave-functions are not readily available from the literature, we resorted to experimental geometries deduced from the rotational spectrum of [$\mathrm{HC_3N}$]{}and its isotopologues (@thor00; see also Table 5.8 in @gordy). The resulting bond separations are the following: $r_{\mathrm{HC_1}}= 1.998385$; $r_{\mathrm{C_1C_2}}=2.276364$;$r_{\mathrm{C_2C_3}}= 2.606688$; $r_{\mathrm{C_3N}}= 2.189625$, and should be close to vibrationally averaged values. For the [$\mathrm{HC_3N}$]{}– He collision, only two coordinates are needed to fully determine the overall geometry. Let $\vec{R}$ be the vector between the center of mass of [$\mathrm{HC_3N}$]{}and He. The two coordinates are the distance $R=|\vec{R}|$ and the angle $\theta_1$ between the [$\mathrm{HC_3N}$]{}rod and the vector ***R***. In our conventions, $\theta_1 = 0$ corresponds to an approach towards the H end of the [$\mathrm{HC_3N}$]{}rod. For the collision with H$_2$, two more angles have to be added, $\theta_2$ and $\phi$, that respectively orient the [$\rm H_2$]{}molecule in the rod-***R*** plane and out of the plane. The [$\mathrm{HC_3N}$]{}– He PES has thus two degrees of freedom, the [$\mathrm{HC_3N}$]{}– [$\rm H_2$]{}four degrees of freedom. As we aim to solve close coupling equations for the scattering, we need ultimately to expand the PES function $V$ over a suitable angular expansion for any intermolecular distance $R$. In the simpler case of the [$\mathrm{HC_3N}$]{}– He system, this expansion is in the form: $$\label{eq:pot} V_{}(R,\theta_1) = \sum_{l_1} v_{l_1}(R)\,P_{l_1}(\cos\theta_1)\quad ,$$ where $P_{l_1}(\cos\theta_1)$ is a Legendre polynomial and $v_{l_1}(R)$ are the radial coefficients. For the [$\mathrm{HC_3N}$]{}– [$\rm H_2$]{}system, the expansion becomes: $$\label{eq:pot2} V(R,\theta_1, \theta_2, \phi) = \sum_{l_1 l_2 l} v_{l_1 l_2 l}(R) s_{l_1 l_2 l}(\theta_1, \theta_2, \phi),$$ where the basis functions $s_{l_1 l_2 l}$ are products of spherical harmonics and are expressed in Eq. (A9) of @green75. Two new indices $l_2$ and $l$ are thus needed, associated respectively with the rotational angular momentum of [$\rm H_2$]{}and the total orbital angular momentum, see also eq. (A2) and (A5) of @green75. Because the Legendre polynomials form a complete set, such expansions should always be possible. 
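In practice, the radial coefficients $v_{l_1}(R)$ of expansion (\[eq:pot\]) can be obtained at each distance by a Legendre projection; the short sketch below uses Gauss-Legendre quadrature and a smooth model anisotropy in place of the actual *ab initio* points, so the numbers are purely illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_radial_coefficients(v_of_costheta, l_max, n_quad=64):
    """Radial coefficients v_l(R) of Eq. (eq:pot) at one fixed distance R.

    v_of_costheta : callable giving the potential as a function of cos(theta_1) at that R.
    """
    u, w = legendre.leggauss(n_quad)          # Gauss-Legendre nodes and weights on [-1, 1]
    vals = v_of_costheta(u)
    p = legendre.legvander(u, l_max)          # columns are P_0(u) ... P_lmax(u)
    # v_l = (2l + 1)/2 * integral_{-1}^{1} V(u) P_l(u) du
    return (2.0 * np.arange(l_max + 1) + 1.0) / 2.0 * (p * (vals * w)[:, None]).sum(axis=0)

# Example with a smooth model anisotropy (an assumption, not the ab initio surface):
model = lambda u: -40.0 * np.exp(-2.0 * u ** 2) + 5.0 * u
print(legendre_radial_coefficients(model, l_max=6))
```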
However, @chapman77 failed to converge the above expansion (\[eq:pot\]) due to the steric hindrance of He by the impenetrable [$\mathrm{HC_3N}$]{}rod, and @green78 abandoned quantum calculations, resorting to quasi-classical trajectory (QCT) studies. Similar difficulties arise for the interaction with H$_2$. Actually, as can be seen in Fig. \[fig:PES\] for small $R$ values, the interaction is moderate or possibly weakly attractive for $\theta_1 \sim 90^{\circ}$ and is extremely repulsive or undefined for $\theta_1 \sim 0, 180^{\circ}$, leading to singularities in the angular expansion and severe Gibbs oscillations in the numerical fit of the PES over Legendre expansions. Accordingly, we resorted to a cautious sampling strategy for the PES, building a spline interpolation in a first step, and postponing the troublesome angular Legendre expansion to a second step. All details will be published elsewhere. Let us summarize this first step for He, then for H$_2$. For the [$\mathrm{HC_3N}$]{}– He PES, we selected an irregular grid in the $\left\{R,\theta_1\right\}$ coordinates. The first-order derivatives of the angular spline were forced to zero for $\theta_1=0,180^{\circ}$ in order to comply with the PES symmetries. For each distance, angles were added until a smooth convergence of the angular spline fit was achieved, resulting in typical angular steps between 2 and 15$^{\circ}$. Then, distances were added until a smooth bicubic spline fit was obtained, amounting to 38 distances in the range 2.75 – 25 Bohr and a total of 644 geometries. The resulting PES is perfectly suited to run quasi-classical trajectories. We used a similar strategy to describe the interaction with H$_2$, while minimizing the number of calculations. We selected a few $\left\{\theta_2,\phi\right\}$ orientation sets, bearing in mind that the dependence of the final PES on the orientation of [$\rm H_2$]{}is weak. In terms of spherical harmonics, the PES depends only on $Y_{l_2m_2}(\theta_2,\phi)$, with $l_2=0,2,4,\ldots$ and $m_2=0,1,2,\ldots$, $|m_2|\leq l_2$. Terms in $Y_{l_2m_2}$ and $Y_{l_2 -m_2}$ are equal by symmetry. Previous studies [@faure05a; @wernli06] have shown that terms with $l_2> 2$ are small, and we consequently truncated the $Y_{l_2m_2}$ series to $l_2\leq 2$. Hence, only four basis functions remain for the orientation of [$\rm H_2$]{}: $Y_{00},Y_{20},Y_{21}$ and $Y_{22}$. Under this assumption, the whole [$\mathrm{HC_3N}$]{}– [$\rm H_2$]{}surface can be obtained from its values for four sets of $\left\{\theta_2, \phi\right\}$ angles at each value of $R$. We actually selected five sets, thus having an over-determined system that allows monitoring the accuracy of the $l_2$ truncation. Consequently, we determined five independent PES, each being constructed similarly to the [$\mathrm{HC_3N}$]{}– He one as a bicubic spline fit over an irregular grid in $\left\{R, \theta_1\right\}$ coordinates. The angular mesh is slightly denser than for the [$\mathrm{HC_3N}$]{}– He PES for small $R$ distances to account for more severe steric hindrance effects involving [$\rm H_2$]{}. In total, we computed 3420 $\left\{R, \theta_1, \theta_2, \phi\right\}$ geometries. Finally, the [$\mathrm{HC_3N}$]{}– H$_2$ interaction can be readily reconstructed from these five PES by expressing its analytical dependence on $\left\{\theta_2, \phi\right\}$ [@wernlithese].
For each value of the intermolecular geometry $\left\{R,\theta_1\right\}$ or $\left\{R, \theta_1, \theta_2, \phi\right\}$, the intermolecular potential energy is calculated at the conventional CCSD(T) level of theory, including the usual counterpoise correction of the Basis Set Superposition Error [@jansen69; @boys70]. We used augmented correlation-consistent atomic sets of triple zeta quality (Dunning’s aug-cc-pVTZ) to describe the [$\mathrm{HC_3N}$]{}rod. In order to avoid any possible steric hindrance problems at the basis set level, we did not use bond functions and instead chose the larger Dunning’s aug-cc-pV5Z and aug-cc-pVQZ basis sets to better describe the polarizable (He, H$_2$) targets, respectively. All calculations employed the direct parallel code <span style="font-variant:small-caps;">Dirccr12</span> [@dirccr12]. Comparison of the [$\mathrm{HC_3N}$]{}– He PES with existing surfaces [@akinojo03; @topic05] showed excellent agreement. The [$\mathrm{HC_3N}$]{}– para-[$\rm H_2$]{}($J=0$) interaction (obtained by averaging the [$\mathrm{HC_3N}$]{}– H$_2$ PES over $\theta_2$ and $\phi$) is qualitatively similar to the [$\mathrm{HC_3N}$]{}– He PES with a deeper minimum (see values at the end of the present section). As illustrated in Figure \[fig:PES\], these PES are largely dominated by the rod-like shape of [$\mathrm{HC_3N}$]{}, implying a prolate ellipsoidal symmetry of the equipotentials. In a second step, let us consider how to circumvent the difficulty of the angular expansion of the above PES, in order to obtain reliable expansions for He and H$_2$ (Eqs. \[eq:pot\] and \[eq:pot2\]). Using the angular spline representation, we first expressed each PES over a fine $\theta_1$ mesh suitable for a subsequent high-$l_1$ expansion. As expected from the work of @chapman77, high-$l_1$ expansions (\[eq:pot\]) resulted in severe Gibbs oscillations for $R$ in the range 5–7 Bohr, completely spoiling the description of the low-energy features of the PES. Then, with low-energy scattering applications in mind, we regularized the PES by introducing a scaling function $S_f$. We replaced $V(R,\theta_1,...)$ by $S_f(V(R,\theta_1,...))$, where $S_f(V)$ returns $V$ when $V$ is lower than a prescribed threshold, and then smoothly saturates to a limiting value when $V$ grows into the repulsive walls. Consequently, the regularized PES retains only the low-energy content of the original PES, unmodified up to the range of the threshold energy; it should not be used for higher collisional energies. However, in contrast to the original PES, it can be easily expanded over Legendre functions to excellent accuracy and is thus suitable for quantum close coupling studies. We selected a threshold value of 300 cm$^{-1}$, and improved the quality of the expansion by applying a weighted fitting strategy [e.g. @hodges04] to focus the fit on the details of the attractive and weakly repulsive regions of the PES. Using $l_1\leq 35$, both the He and H$_2$ PES fits were converged to within 1 cm$^{-1}$ for $V \le 300$ cm$^{-1}$. These expansions still describe the range $300<V<1000$ cm$^{-1}$ to within an accuracy of a few $\rm cm^{-1}$. The corresponding absolute minima are the following (in cm$^{-1}$ and Bohr): for [$\mathrm{HC_3N}$]{}– He, $V=-40.25$ for $R=6.32$ and $\theta_1=95.2^{\circ}$; for [$\mathrm{HC_3N}$]{}– para-[$\rm H_2$]{}($J=0$), $V=-111.24$ for $R=6.41$ and $\theta_1=94.0^{\circ}$; and for [$\mathrm{HC_3N}$]{}– [$\rm H_2$]{}, $V=-192.49$ for $R=9.59$, $\theta_1=180^{\circ}$, and $\theta_2=0^{\circ}$.
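As an illustration of what such a regularization might look like in practice, the sketch below saturates the potential smoothly above a threshold; only the 300 cm$^{-1}$ threshold value is taken from the text, while the tanh form and the saturation scale are assumptions made for the example.

```python
import numpy as np

def regularize(v, v_threshold=300.0, saturation_width=300.0):
    """Smoothly saturate a potential above a threshold (values in cm^-1).

    Below v_threshold the potential is returned unchanged; above it, the output
    saturates smoothly towards v_threshold + saturation_width.  The tanh form and
    the width are illustrative choices; only the threshold value comes from the text.
    """
    v = np.asarray(v, dtype=float)
    excess = np.clip(v - v_threshold, 0.0, None)
    return np.where(v <= v_threshold,
                    v,
                    v_threshold + saturation_width * np.tanh(excess / saturation_width))

if __name__ == "__main__":
    print(regularize([-100.0, 0.0, 250.0, 1000.0, 1e5]))
```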
![The [$\mathrm{HC_3N}$]{}– para-[$\rm H_2$]{}PES. The [$\mathrm{HC_3N}$]{}molecule is shown to scale. Equipotentials (in $\rm cm^{-1}$): in dashed red, -100, -30, -10, -3; in solid black, 0; in blue, 10, 30, 100, 300, 1000, 3000. The dotted circle centered at the [$\mathrm{HC_3N}$]{}center of mass with radius $R=6.41$ Bohr illustrates the angular steric hindrance problem occurring when the collider rotates from the vicinity of the minimum towards the [$\mathrm{HC_3N}$]{}rod. []{data-label="fig:PES"}](PES.eps){height="0.55\textheight"} Inelastic cross section and rates {#sec:cross} ================================= In the following, $J_1$ and $J^\prime_1$ denote the initial and final angular momentum of the [$\mathrm{HC_3N}$]{}molecule, respectively, and $J_2$ denotes the angular momentum of H$_2$. We also denote the largest value of [$J_1, J^\prime_1$]{} as $J_{1\rm up}$. The most reliable approach to compute inelastic cross sections $\sigma_{J_1J^{\,\prime}_1}(E)$ is to perform quantum close coupling calculations. In the case of molecules with a small rotational constant, like [$\mathrm{HC_3N}$]{}[$B=4549.059\mbox{~MHz}$, see e.g. @thor00], quantum calculations soon become intractable, because of the large number of open channels involved. While observations at cm–mm wavelengths culminate at $J_{1\rm up} \lesssim 24$ [@kahane94], sub-mm observations can probe transitions as high as $J_{1\rm up} = 40$, at a frequency of 363.785 GHz and a rotational energy of $202.08 \:\rm cm^{-1}$ [@pepe04; @charnley04; @cauxpc]. It is thus necessary to compute rates with transitions up to $J_1=50$ ($E= 386.8$ cm$^{-1}$), in order to properly converge radiative transfer models. Also, we aim at computing rates up to a temperature of 100 K for H$_2$. We resorted to two methods in order to perform this task. For $J_{1\rm up}\leq 15$, we performed quantum inelastic scattering calculations, as presented in subsection \[par:molscat\]. For $J_{1\rm up} > 15$, we used the QCT method, as presented in subsection \[par:rates\]. For He, of less astrophysical importance (\[He\]/\[H\]$\sim 0.1$), only quantum calculations were performed and were limited to the low-temperature regime ($T$=5$-$20 K and $J_1<10$). Rotational inelastic cross sections with <span style="font-variant:small-caps;">Molscat</span> {#par:molscat} ---------------------------------------------------------------------------------------------- All calculations were made using the rigid rotor approximation, with rotational constants $B_{\rm HC_3N}=0.151739$ cm$^{-1}$ and $B_{\rm H_2}=60.853$ cm$^{-1}$, using the <span style="font-variant:small-caps;">Molscat</span> code [@molscat]. All quantum calculations for [$\mathrm{HC_3N}$]{}– ortho-[$\rm H_2$]{}were performed with $J_{\rm H_2}\equiv J_2=1$. Calculations for [$\mathrm{HC_3N}$]{}– para-[$\rm H_2$]{}were performed with $J_2=0$. We checked at $E_{\rm tot}=E_{\rm coll}+E_{\rm rot} = 30 \,\rm cm^{-1}$ that the inclusion of the closed $J_2=2$ channel led to negligible effects. The energy grid was adjusted to reproduce all the details of the resonances, as they are essential to calculate the rates with high confidence [@dubernet02; @dubernet03; @wernli06]. The energy grid and the quantum methods used are detailed in Table \[tab:param\]. Using this grid, we calculated the whole resonance structure of all the transitions up to $J_1=15$ for the [$\mathrm{HC_3N}$]{}– para-[$\rm H_2$]{}collisions.
At least 10 closed channels were included at each energy to fully converge the [$\mathrm{HC_3N}$]{}rotational basis. We used the hybrid log-derivative/Airy propagator [@alexander87]. We increased the parameter <span style="font-variant:small-caps;">STEPS</span> at the lowest energies to constrain the step length of the integrator below 0.1 to 0.2 Bohr, in order to properly follow the details of the radial coefficients. Other propagation parameters were taken as the <span style="font-variant:small-caps;">molscat</span> default values.

  $E_{\rm tot}\:\rm (cm^{-1})$                  Energy step ($\rm cm^{-1}$)   Method
  --------------------------------------------- ----------------------------- --------
  *[$\mathrm{HC_3N}$]{} – para-[$\rm H_2$]{}*
  $0.3 \rightarrow 60$                          $0.1$                         CC
  $60 \rightarrow 110$                          $10$                          CC
  $40 \rightarrow 200$                          $10$                          CS
  $50 \rightarrow 800$                          $10-100$                      IOS
  *[$\mathrm{HC_3N}$]{} – ortho-[$\rm H_2$]{}*
  $0\rightarrow 30$                             $1$                           CC
  *[$\mathrm{HC_3N}$]{} – He*
  $0\rightarrow 25$                             $0.1$                         CC
  $25\rightarrow 100$                           $5$                           CC
  $100\rightarrow 150$                          $10$                          CC

  : Total energy grids and quantum methods used for the cross-section calculations.[]{data-label="tab:param"}

Two examples of deexcitation cross-sections are shown in Fig. \[fig:sectionop\]. We see that for energies between threshold and about 20 cm$^{-1}$ above threshold, the cross-section displays many shape resonances, justifying *a posteriori* our very fine energy grid. This behaviour is by no means unexpected and is very similar to most earlier calculations in many different systems, see e.g. @dubernet02 [@wernli06] for a discussion. From a semi-classical point of view, those shape resonances manifest the trapping of the wave-packet between the inner repulsive wall and the outer centrifugal barrier, see @wie03 [@abrol01]. At energies higher than about 20 cm$^{-1}$ above threshold, all cross-sections become smooth functions of the energy. Figure \[fig:sectionop\] also shows that ortho-[$\rm H_2$]{}inelastic cross-sections follow very closely the para-[$\rm H_2$]{}ones, including the position of the resonances. Examination of all cross-sections reveals that the relative difference between $\sigma_{J_1J'_1}(E, \mbox{para})$ and $\sigma_{J_1J'_1}(E, \mbox{ortho})$ is less than $5\%$. This justifies *a posteriori* the much smaller amount of computational effort devoted to ortho-[$\rm H_2$]{}collisions as well as the neglect of the $J_2 = 2$ closed para-[$\rm H_2$]{}channels. A detailed discussion of this behaviour is put forward in Section \[sec:paraortho\]. Quantum rates and classical rates {#par:rates} --------------------------------- The quantum collisional rates are calculated for $J_{1\rm up}\leq 15$, at astrophysically relevant temperatures, from 5 K to 100 K. We average the cross-sections described in the preceding section over the Maxwell distribution of velocities, up to a kinetic energy of at least 10 times $kT$. The quantum calculations at the higher end of the energy range are approximated at the IOS level (see Table \[tab:param\]), which is justified at these energies by the smallness of the rotational constant $B_{\rm HC_3N}$. Also, we used a coarse energy grid for the IOS calculations because the energy dependence of the cross-sections becomes very smooth.
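The thermal averaging step just described amounts to the standard Maxwell-Boltzmann integral over collision energies; a minimal sketch, with a synthetic cross-section and with all unit conversions (cm$^{-1}$, $\AA^2$, cm$^3$ s$^{-1}$) omitted for clarity, is:

```python
import numpy as np

def thermal_rate(e_grid, sigma, temperature, reduced_mass, k_b=1.0):
    """Maxwell-Boltzmann average of a state-to-state cross-section.

    k(T) = sqrt(8 / (pi * mu)) * (k_B T)^(-3/2) * Int sigma(E) E exp(-E / k_B T) dE,
    with E the collision (kinetic) energy.  All quantities are assumed to be
    expressed in one consistent unit system.
    """
    kt = k_b * temperature
    weight = sigma * e_grid * np.exp(-e_grid / kt)
    return np.sqrt(8.0 / (np.pi * reduced_mass)) * kt ** (-1.5) * np.trapz(weight, e_grid)

if __name__ == "__main__":
    # Synthetic example: a smooth cross-section on an energy grid extending to ~10 kT.
    e = np.linspace(1e-3, 100.0, 4000)
    sigma = 50.0 / (1.0 + e)          # placeholder sigma(E), not a computed cross-section
    print(thermal_rate(e, sigma, temperature=10.0, reduced_mass=2.0))
```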
For the energy range where $J_1 > 15$ channels are open *and* for deexcitation processes involving those channels, we employ a Quasi-Classical Trajectory (QCT) method, which has been shown in several instances to be a valid approximation for higher collisional energies and large rates [@chapman77; @lepp95; @mandy04; @faure06]. For Monte-Carlo QCT methods, we must devise a way of defining an ensemble of initial conditions for classical trajectories, on the one hand, and of analyzing the final state of each trajectory, on the other hand. Contrary to the asymmetric rotor case [like water, see @faure04], the analysis of final conditions for a linear molecule is straightforward. Using the simplest quantization approximation, we bin the final classical angular momentum $J'_1$ of [$\mathrm{HC_3N}$]{}to the nearest integer. While the quantum formalism goes through a microcanonical calculation —calculating $\sigma_{J_1J_1'}(E)$ for fixed energies, then averaging over velocity distributions— it is possible for QCT calculations to resort directly to a canonical formalism, i.e. to select the initial velocities of the Monte-Carlo ensemble according to the relevant Maxwell-Boltzmann distribution and find the rates as: $$\label{eq:rate} k_{J_1J'_1} = \left(\frac{8kT}{\pi\mu}\right)^{1/2}\,\pi b_{max}^2\,\frac{N}{N_{\rm tot}}$$ where $b_{max}$ is the maximum impact parameter used (with the impact parameter $b$ distributed with the relevant $b\,\textmd{d}b$ probability density) and $N$ is the number of trajectories with the right final $J'_1$ value among all $N_{\rm tot}$ trajectories. The Monte-Carlo standard deviation is: $$\label{eq:error} \frac{\delta k_{J_1J_1'}}{k_{J_1J_1'}} = \left(\frac{N_{\rm tot}-N}{N_{\rm tot}N}\right)^{1/2} \quad ,$$ showing that the accuracy of the method improves for larger rates.

The $b_{max}$ parameter was determined by sending small batches of $500$ to $1,000$ trajectories for fixed $b$ values; values of $20 \leq b_{max} \leq 26$ Bohr were found. We then sent batches of $10,000$ trajectories for each temperature in the range $5-100$ K, with a step of 5 K. Trajectories are integrated by means of a Bulirsch-Stoer algorithm [@numrec92], with a code similar to that of @faure05a. Precision is checked by conservation of total energy and total angular momentum. Some illustrative results are shown in tables \[tab:rates\] and \[tab:rates2\] and illustrated in figures \[fig:ratecompare\] and \[fig:rate12\].

As an alternative to QCT calculations, we tested J-extrapolation techniques, using the form of @depristo79 generally used by astrophysicists (see for example @lamda, section 6). We found that even if it reproduces the interference pattern, the extrapolation systematically underestimates the rates for $J_1\ge 20$. Hence, QCT rates are more precise on average.
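Equations (\[eq:rate\]) and (\[eq:error\]) are straightforward to evaluate once a Monte-Carlo batch has been binned; the sketch below illustrates the bookkeeping (the trajectory counts and $b_{max}$ value are placeholders, not results from the actual batches):

```python
# Illustrative sketch of the QCT rate (eq:rate) and its Monte-Carlo error (eq:error).
import numpy as np

AMU = 1.660539e-27   # kg
K_B = 1.380649e-23   # J/K
BOHR = 5.29177e-11   # m

def qct_rate(T, mu_amu, b_max_bohr, n_good, n_tot):
    """k = sqrt(8kT/(pi mu)) * pi b_max^2 * N/N_tot, returned in cm^3 s^-1."""
    v_mean = np.sqrt(8.0 * K_B * T / (np.pi * mu_amu * AMU))
    return v_mean * np.pi * (b_max_bohr * BOHR) ** 2 * n_good / n_tot * 1e6

def qct_relative_error(n_good, n_tot):
    """delta k / k = sqrt((N_tot - N) / (N_tot * N))."""
    return np.sqrt((n_tot - n_good) / (n_tot * n_good))

mu_hc3n_h2 = 51.0 * 2.0 / 53.0                         # amu, HC3N-H2 reduced mass
print(qct_rate(50.0, mu_hc3n_h2, 24.0, 850, 10000))    # ~3e-10 cm^3 s^-1
print(qct_relative_error(850, 10000))                  # ~3% statistical error
```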
--------- ------------------ ------------------ ------------- ------------------- $J_{1}$ $T = 10 \rm K\;$ $T = 20 \rm K\;$ $T = 50 \rm $T = 100 \rm K\;$ K\;$ 1 2.03(-11) 1.59(-11) 1.32(-11) 1.24(-11) 2 4.94(-11) 4.83(-11) 6.23(-11) 8.04(-11) 3 1.20(-11) 1.04(-11) 8.23(-12) 7.43(-12) 4 2.25(-11) 2.57(-11) 2.85(-11) 2.87(-11) 5 7.01(-12) 6.80(-12) 5.62(-12) 4.77(-12) 6 9.15(-12) 1.18(-11) 1.42(-11) 1.38(-11) 7 3.14(-12) 3.40(-12) 3.46(-12) 3.26(-12) 8 2.45(-12) 3.71(-12) 5.92(-12) 6.61(-12) 9 1.63(-12) 1.63(-12) 1.95(-12) 2.18(-12) 10 5.35(-13) 8.13(-13) 2.00(-12) 2.96(-12) 11 7.81(-13) 7.01(-13) 9.42(-13) 1.36(-12) 12 1.37(-13) 1.58(-13) 6.17(-13) 1.32(-12) 13 2.74(-13) 2.51(-13) 4.17(-13) 8.26(-13) 14 4.14(-14) 4.65(-14) 2.24(-13) 6.28(-13) 15 7.63(-14) 8.26(-14) 1.76(-13) 4.85(-13) --------- ------------------ ------------------ ------------- ------------------- : [$\mathrm{HC_3N}$]{}– para-[$\rm H_2$]{} s($J=0$)collisions. Quantum deexcitation rates in $\mathrm{cm^3\,s^{-1}}$, for $J_1'=0$, for successive initial $J_1$ and for various temperatures. Powers of ten are denoted in parenthesis.[]{data-label="tab:rates"} [llcccc]{} &\ $J_{1}'$ &$J_{1}$& $T = 10 \rm K\;$ & $T = 20 \rm K\;$ & $T = 50 \rm K\;$ & $T = 100 \rm K\;$\ &\ &\ 0 & 1 & 2.03(-11) & 1.59(-11) & 1.32(-11) & 1.24(-11)\ 0 & 2 & 4.94(-11) & 4.83(-11) & 6.23(-11) & 8.04(-11)\ 0 & 3 & 1.20(-11) & 1.04(-11) & 8.23(-12) & 7.43(-12)\ 0 & 4 & 2.25(-11) & 2.57(-11) & 2.85(-11) & 2.87(-11)\ \ 5 & 6 & 6.34(-11) & 5.48(-11) & 4.80(-11) & 4.64(-11)\ 5 & 7 & 1.30(-10) & 1.38(-10) & 1.72(-10) & 2.04(-10)\ 5 & 8 & 3.93(-11) & 3.66(-11) & 3.27(-11) & 3.21(-11)\ 5 & 9 & 6.83(-11) & 7.61(-11) & 8.63(-11) & 8.93(-11)\ \ 10 & 11 & 5.77(-11) & 5.35(-11) & 4.75(-11) & 4.61(-11)\ 10 & 12 & 1.50(-10) & 1.53(-10) & 1.81(-10) & 2.11(-10)\ 10 & 13 & 3.91(-11) & 3.80(-11) & 3.50(-11) & 3.40(-11)\ 10 & 14 & 8.51(-11) & 8.84(-11) & 9.42(-11) & 9.47(-11)\ \ \ 15 & 16 $^\dag$ & 1.45(-10) & 1.49(-10) & 1.80(-10) & 2.28(-10)\ 15 & 17 $^\dag$ & 1.06(-10) & 1.03(-10) & 9.60(-11) & 1.18(-10)\ 15 & 18 $^\dag$ & 8.71(-11) & 9.38(-11) & 7.93(-11) & 8.24(-11)\ 15 & 19 $^\dag$ & 7.59(-11) & 6.81(-11) & 6.23(-11) & 7.10(-11)\ \ 25 & 26 $^\dag$ & 1.14(-10) & 1.50(-10) & 1.83(-10) & 2.30(-10)\ 25 & 27 $^\dag$ & 1.13(-10) & 1.05(-10) & 1.18(-10) & 1.31(-10)\ 25 & 28 $^\dag$ & 8.55(-11) & 8.38(-11) & 8.34(-11) & 7.76(-11)\ 25 & 29 $^\dag$ & 7.67(-11) & 7.45(-11) & 8.31(-11) & 7.02(-11)\ \ 35 & 36 $^\dag$ & 1.16(-10) & 1.34(-10) & 1.73(-10) & 2.32(-10)\ 35 & 37 $^\dag$ & 9.63(-11) & 1.12(-10) & 1.21(-10) & 1.11(-10)\ 35 & 38 $^\dag$ & 8.33(-11) & 9.20(-11) & 8.56(-11) & 9.19(-11)\ 35 & 39 $^\dag$ & 8.77(-11) & 8.51(-11) & 7.65(-11) & 7.51(-11)\ \ For H$_2$, all deexcitation rates $k_{J_1J_1'}(T)$, $J_1\neq J_1'\leq 50$, are fitted with the following formula [@wernli06]: $$\label{eq:fit} \log_{10}\left(k_{J_1J_1'}(T)\right)=\sum_{n=0}^{ 4}a^{(n)}_{J_1J_1'} x^n$$ where $x=T^{-1/6}$. As some transitions have zero probability within the QCT approach, the above formula was employed when rates were bigger than 10$^{-12}$ cm$^3$s$^{-1}$ for at least one grid temperature. For these rates, null grid values were replaced by a very small value, namely 10$^{-14}$ cm$^3$s$^{-1}$, to avoid fitting irregularities. All rates not fulfilling this condition are set to zero. Note that below 20 K, QCT rates for low-probability transitions may show a non physical behaviour. 
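The fit of equation (\[eq:fit\]) is a simple polynomial in $x=T^{-1/6}$; a minimal sketch of its evaluation is given below (the coefficients shown are hypothetical placeholders, the real $a^{(n)}_{J_1J_1'}$ values being provided in the online material discussed next):

```python
# Illustrative sketch: evaluating log10 k(T) = sum_n a_n * T^(-n/6).
# The coefficients below are hypothetical placeholders.
def fitted_rate(T, coeffs):
    x = T ** (-1.0 / 6.0)
    log10_k = sum(a_n * x ** n for n, a_n in enumerate(coeffs))
    return 10.0 ** log10_k          # cm^3 s^-1

a_hypothetical = [-9.6, -2.1, 1.3, 0.4, -0.2]   # a^(0) ... a^(4)
for T in (10.0, 20.0, 50.0, 100.0):
    print(T, fitted_rate(T, a_hypothetical))
```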
All $a^{(n)}_{J_1J_1'}$ coefficients are provided as online material, for a temperature range $5{\rm\; K}\leq T \leq 100{\rm\; K}$. We advise to use the same rates for collisions with ortho-[$\rm H_2$]{}as for para-H$_2$, since their difference is smaller than the uncertainty on the rates themselves. Rates with He were not fitted, but can be obtained upon request to the authors. ---------------------------------- ---------------------------------- ![image](rate1.eps){width="8cm"} ![image](rate2.eps){width="8cm"} ---------------------------------- ---------------------------------- Discussion {#sec:disc} ========== Para and ortho [$\rm H_2$]{}cross-sections {#sec:paraortho} ------------------------------------------ A comparison of the $\sigma_{J_1J_1'}(E)$ cross sections for [$\mathrm{HC_3N}$]{}with ortho-[$\rm H_2$]{}and para-[$\rm H_2$]{}is given in figure \[fig:sectionop\]. It can be seen that the difference between the two spin species of [$\rm H_2$]{}may be considered as very small, in any case smaller than other PES and cross-section uncertainties. This is an unexpected result, as sizeable differences between para-[$\rm H_2$]{}and ortho-[$\rm H_2$]{}inelastic cross-sections exist for other molecules. These differences were expected to increase for a molecule possessing a large dipolar moment, in view of the results obtained for the C$_2$ molecule [@phillips94], the CO molecule [@wernli06], the OH radical [@offer94], the NH$_3$ molecule [@offer89; @flower94] and the $\rm H_2O$ molecule [@phillips96; @dubernet02; @dubernet03; @dubernet06], due to the interaction between the dipole of the molecule and the quadrupole of [$\rm H_2$]{}(J$_2$ $>$ 0). This apparently null result deserves an explanation. We focus on equation (9) of @green75. This equation describes the different matrix elements that couple the various channels in the close-coupling equations. Some triangle rules apply which restrict the number of terms in the sum of equation (9); the relevant angular coupling algebra is represented there as a sum of terms of the type $$\label{eq:green75} \left(\begin{array}[c]{ccc} l &L'& L \\ 0 & 0 & 0 \end{array}\right) \; \left(\begin{array}[c]{ccc} l_1 &J'_1& J_1 \\ 0 & 0 & 0 \end{array}\right) \; \left(\begin{array}[c]{ccc} l_2 &J'_2& J_2 \\ 0 & 0 & 0 \end{array}\right) \; \left\{\begin{array}[c]{ccc} L'& L & l \\ J_{12} & J'_{12} & J \end{array}\right\}\quad , $$ where we have the potential function expanded in terms of Eqs. (4) and (A2) in @green75, by means of the coefficients $v_{l_1l_2 l}$. The symbol $(\ldots)$ are 3-$j$ symbols, the $\{\ldots\}$ is a 6-$j$ symbol, see @messiah69. We also define $\vec{J}_{12}=\vec{J}_1+\vec{J}_2$. We have the following rules: - The para-[$\rm H_2$]{}inelastic collisions are dominated by the $J_2 = 0$ channel (the $J_2=2$ channel is closed till $E_{coll} \gtrsim 365.12 \;\rm cm^{-1}$). Then, only the $l_2=0$ may be retained ($J_2=J'_2=0$), due to the third 3-$j$ symbol in eq.(\[eq:green75\]). - The ortho-[$\rm H_2$]{}remains in $J_2=1$, implying $l_2=0,2$. - For inelastic collisions, $J_1\neq J'_1$ implies potential terms with $l\neq 0$, because of the 6-$j$ term in Eq.(\[eq:green75\]). Indeed, $J_2=J'_2$ and $J_1\neq J'_1$ entail $J_{12}\neq J'_{12}$. The key point is thus to compare the $v_{l_1l_2 l}(R)$ coefficients (eq. \[eq:pot2\]) with $l \neq 0$ in the two cases: - [$l_2=0$]{} para and ortho contributions; - [$l_2=2$]{} ortho contribution only. Figure \[fig:comp\] displays such a comparison. 
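The selection rules listed above can be verified directly from the 3-$j$ symbols. The short check below is our own illustration (it relies on SymPy's Wigner-symbol routines, not on anything used in the original analysis): for para-H$_2$ ($J_2=J'_2=0$) only $l_2=0$ survives, while for ortho-H$_2$ ($J_2=J'_2=1$) both $l_2=0$ and $l_2=2$ contribute:

```python
# Illustrative check of the third 3-j symbol in eq. (green75):
# (l2 J2' J2; 0 0 0) vanishes for l2 = 2 when J2 = J2' = 0 (para-H2).
from sympy.physics.wigner import wigner_3j

for J2 in (0, 1):               # para (J2 = 0) and ortho (J2 = 1)
    for l2 in (0, 2):
        val = wigner_3j(l2, J2, J2, 0, 0, 0)
        print(f"J2 = {J2}, l2 = {l2}: (l2 J2' J2; 0 0 0) = {val}")
```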
We notice that the coupling is largely dominated by the $l_2=0$ contribution, terms which are common to collisions with para and ortho conformations. This is particularly true for $R<10$ Bohr, the relevant part of the interaction for collisions at temperatures higher than a few Kelvin. At a higher intermolecular separation, terms implied only in collisions with ortho-[$\rm H_2$]{}become dominant, but in this regime the potential is also less than a few cm$^{-1}$. Sizeable differences in rates between ortho and para forms are thus expected only either at very low temperatures, or possibly at much higher temperatures, with the opening of [$\rm H_2$]{}(J$_2=2,3$) channels. Propensity rules ---------------- In figure \[fig:ratecompare\], we compare the various rates that we obtain here with the ones previously published by @green78. These authors used a coarse electron-gas approximation for the PES, and computed rates by a QCT classical approach. Despite these approximations, we see that the rates obtained by @green78 are qualitatively comparable with the quantum rates obtained here, in an average way. However, as table \[tab:rates2\] and figures \[fig:rate12\] and \[fig:ratecompare\] show clearly, only quantum calculations manifest the strong $\Delta J = 2$ propensity rule. This rule originates in the shape of the PES, being nearly a prolate ellipsoid, dominated by the rod shape of [$\mathrm{HC_3N}$]{}and *not dominated* by the large dipole of HC$_3$N molecule (3.724 Debye). Because of the very good approximate symmetry $\theta_1\leftrightarrow \pi - \theta_1$, the $l_1$ even terms (equation (\[eq:green75\]) and @green75) are the most important ones, directing the inelastic transition toward even $\Delta J_1$. This propensity has also been explained semi-classically by @miller77 in terms of an interference effect related to the even anisotropy of the PES. These authors show in particular that the reverse propensity can also occur if the odd anisotropy of the PES is sufficiently large. This reverse effect is indeed observed in Fig. \[fig:rate12\] for transitions with $\Delta J>10$. A similar propensity rule has been experimentally observed for CO–He collisions [@sims04]. Besides this strong $\Delta J = 2$ propensity rule, one can see from table \[tab:rates2\] and figures \[fig:ratecompare\], \[fig:rate12\] that the rod-like interaction drives large $\Delta J$ transfers. For instance, for T $> 20$ K, rates for $\Delta J > 6$ are generally larger than rates for $\Delta J = 1$, and rates for $\Delta J > 8$ are only one order of magnitude below those for $\Delta J = 2$. This behaviour is likely to emphasize the role of collisional effects versus radiative ones. This effect, of purely geometric origin, has been predicted previously [@bosanac80] and is of even greater importance for longer rods like $\rm HC_5N$, $\rm HC_7N$, $\rm HC_9N$, see @snell81 [@dickinson82]. We also observe that the ratio $k_{J_1J'_1}(\textrm{He})/k_{J_1J'_1}(\mbox{para-H$_2$}) $ is in average close to $1/1.4 \sim 1/\sqrt{2}$, thus confirming the similarity of He and para-[$\rm H_2$]{}as projectiles, as generally assumed. But it is also far from being a constant, as already observed for H$_2$O [@phillips96] or CO [@wernli06]. Our data shows that the $1/\sqrt{2}$ scaling rule results in errors up to 50%. 
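The $\sim 1/1.4$ ratio quoted above is essentially the kinematic factor expected if the He and para-H$_2$ cross-sections were identical, since the rates scale with the mean collision velocity and hence with the inverse square root of the reduced mass. A minimal back-of-the-envelope check (our own illustration, assuming equal cross-sections):

```python
# Reduced-mass argument behind the ~1/1.4 He vs. para-H2 scaling.
m_hc3n, m_h2, m_he = 51.0, 2.0, 4.0          # approximate masses in amu

def mu(m1, m2):
    """Reduced mass of a colliding pair."""
    return m1 * m2 / (m1 + m2)

ratio = (mu(m_hc3n, m_he) / mu(m_hc3n, m_h2)) ** 0.5
print(ratio)   # ~1.39, i.e. k_He ~ k_H2 / 1.4 for identical cross-sections
```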
Population inversion and critical densities ------------------------------------------- Because of the strong $\Delta J_1=0,2,4$ propensity rule, population inversion could be strengthened if LTE conditions are not met, even neglecting hyperfine effects[^1] [@hunt99]. In order to see the density conditions giving rise to population inversion, we solved the steady-state equations for the population of the $J=0,1,\dots,15$ levels of [$\mathrm{HC_3N}$]{}, including collisions with [$\rm H_2$]{}(densities ranging from $10^2$ to $10^6 \rm\; cm^{-3}$), a black-body photon bath at 2.7 K, in the optically thin approximation, [@goldsmith72] : $$\begin{aligned} \label{eq:ss} \frac{\textmd{d}n_i}{\textmd{d}t}=0&=& +\sum_{j\neq i}n_j\,\left[ A_{ji} + B_{ji} \;n_\gamma\left(\nu_{ji}\right)+k_{ji}\; n_{\rm H_2}\right] \nonumber \\ & & - n_i\,\sum_{j\neq i} \left[A_{ij}+B_{ij}\,n_\gamma\left(\nu_{ij}\right)+k_{ij}\; n_{\rm H_2}\right]\end{aligned}$$ where $i,j$ are the levels, $n_\gamma$ is the photon density at temperature $T_\gamma$ and $n_{\rm H_2} $ is the hydrogen density at kinetic temperature $T_{\rm H_2}$. Figure \[fig:invers\] shows the results at $T_{\rm H_2} = 40$ K. The lines show the population per sub-levels $\left|J_1, m_{J_1}\right>$. For a consequent range of [$\rm H_2$]{}densities, $10^4\lesssim n_{\rm H_2} \lesssim 10^6$, population inversion does occur, for $0\leq J_1\leq 2, 3, 4$. Our new rates are expected to improve the interpretation of the lowest-lying lines of [$\mathrm{HC_3N}$]{}, especially so in the 9 - 20 GHz regions (cm-mm waves), see for example @walms86 [@takano98; @hunt99], and @kal04 for a recent study. Moreover, from the knowledge of both collision coefficients $k_{ij}$ and Einstein coefficients $A_{ij}$, it is possible to derive a critical density of [$\rm H_2$]{}, defined as: $$\label{eq:nstar} n^{\star}_i(T)=\frac{\sum_{j<i}A_{ij}}{\sum_{j<i}k_{ij}}$$ The $n^\star$ density is the [$\rm H_2$]{}density at which photon deexcitation and collisional deexcitation are equal. The evolution of $n^\star$ with $J_1$ at $T= 40\;\rm K$ is given in figure \[fig:critical\]. It can be seen that for many common interstellar media, the LTE conditions are not fully met. It must be underlined that similar effects should appear for the whole cyanopolyyne ($\rm HC_{5,7,9}N$) family, where cross-sections should scale approximately with the rod length [@dickinson82]. It is expected that the propensity rule $\Delta J_1=2,4,\dots$ should remain valid. Also, the critical density should decrease for the higher members of the cyanopolyyne family, as the Einstein A$_{ij}$ coefficients, hence facilitating the LTE conditions. Conclusion ========== We have computed two [*ab initio*]{} surfaces, for the [$\mathrm{HC_3N}$]{}– He and [$\mathrm{HC_3N}$]{}– [$\rm H_2$]{}systems. The latter was built using a carefully selected set of [$\rm H_2$]{}orientations, limiting the computational effort to approximately five times the [$\mathrm{HC_3N}$]{}– He one. Both surfaces were successfully expanded on a rotational basis suitable for quantum calculations using a smooth regularization of the potentials. This approach circumvented the severe convergence problems already noticed by @chapman77 for such large molecules. The final accuracy of both PES is a few cm$^{-1}$ for potential energy below 1000 cm$^{-1}$. 
Rates for rotational excitation of [$\mathrm{HC_3N}$]{}by collisions with He atoms and [$\rm H_2$]{}molecules were computed for kinetic temperatures in the range 5 to 20 K and 5 to 100 K, respectively, combining quantum close coupling and quasi-classical calculations. The rod-like symmetry of the PES strongly favours even $\Delta J_1$ transfers and efficiently drives large $\Delta J_1$ transfers. Quasi classical calculations are in excellent agreement with close coupling quantum calculations but do not account for the even $\Delta J_1$ interferences. For He, results compare fairly with @green78 QCT rates, indicating a weak dependance to the details of the PES. For para-H$_2$, rates are compatible in average with the generally assumed $\sqrt{2}$ scaling rule, with a spread of about 50 %. Despite the large dipole moment of $\mathrm{HC_3N}$, rates involving ortho-H$_2$ are very similar to those involving para-H$_2$, due to the predominance of the rod interactions. A simple steady-state population model shows population inversions for the lowest [$\mathrm{HC_3N}$]{}levels at [$\rm H_2$]{}densities in the range 10$^4-$10$^6$ cm$^{-3}$. This inversion pattern manifests the importance of large angular momentum transfer, and is enhanced by the even $\Delta J_1$ quantum propensity rule. The [$\mathrm{HC_3N}$]{}molecule is large enough to present an original collisional behaviour, where steric hindrance effects hide the details of the interaction, and where quasi classical rate calculations achieve a fair accuracy even at low temperatures. With these findings, approximate studies for large and heavy molecules should become feasible including possibly the modelling of large $\Delta J$ transfer collisions and ro-vibrational excitation of low energy bending or floppy modes. This research was supported by the CNRS national program “Physique et Chimie du Milieu Interstellaire” and the “Centre National d’Etudes Spatiales”. LW was partly supported by a CNRS/NSF contract. MW was supported by the Ministère de l’Enseignement Supérieur et de la Recherche. CCSD(T) calculations were performed on the IDRIS and CINES French national computing centers (projects no. 051141 and x2005 04 20820). <span style="font-variant:small-caps;">Molscat</span> and QCT calculations were performed on local workstations and on the “Service Commun de Calcul Intensif de l’Observatoire de Grenoble” (SCCI) with the valuable help from F. Roch. [c c c c c c c]{} \ [^1]: Hyperfine effects in [$\mathrm{HC_3N}$]{}inelastic collisions will be dealt with in a forthcoming paper, @wie06
--- abstract: 'Earth’s tectonic processes regulate the formation of continental crust, control its unique deep water and carbon cycles, and are vital to its surface habitability. A major driver of steady-state plate tectonics on Earth is the sinking of the cold subducting plate into the underlying mantle. This sinking is the result of the combined effects of the thermal contraction of the lithosphere and of metamorphic transitions within the basaltic oceanic crust and lithospheric mantle. The latter of these effects is dependent on the bulk composition of the planet, e.g., the major, terrestrial planet-building elements Mg, Si, Fe, Ca, Al, and Na, which vary in abundance across the Galaxy. We present thermodynamic phase-equilibria calculations of planetary differentiation to calculate both melt composition and mantle mineralogy, and show that a planet’s refractory and moderately-volatile elemental abundances control a terrestrial planet’s likelihood to produce mantle-derived, melt-extracted crusts that sink. Those planets forming with a higher concentration of Si and Na abundances are less likely to undergo sustained tectonics compared to the Earth. We find only 1/3 of the range of stellar compositions observed in the Galaxy is likely to host planets able to sustain density-driven tectonics compared to the Sun/Earth. Systems outside of this compositional range are less likely to produce planets able to tectonically regulate their climate and may be inhospitable to life as we know it.' author: - 'Cayman T. Unterborn' - 'Scott D. Hull' - Lars Stixrude - 'Johanna K. Teske' - 'Jennifer A. Johnson' - 'Wendy R. Panero' bibliography: - 'main.bib' date: June 2017 title: Stellar Chemical Clues as to The Rarity of Exoplanetary Tectonics --- Introduction ============ The Earth is unique in our Solar System. It is the only planet with plate tectonics and liquid water on the surface. It is not known, however, the extent to which the Earth is unique among all terrestrial planets beyond our Solar System. The *Kepler* mission’s discoveries establish that Earth-sized planets are common in the Galaxy, with as many as 11% of Sun-like stars hosting planets 1-2 times the radius of the Earth and receiving comparable solar flux [@Marc14; @Fult17]. Together with other discovery campaigns, we know now of many exoplanets with masses and radii consistent with being terrestrial, rock/metal-dominated planets, rather than gas-dominated. The degree to which these planets can maintain surface oceans, plate tectonics or even be considered “Earth-like” is not known and is a complex function of the planet’s composition, formation, and dynamical state [e.g. @Fole15; @Fole16]. We assert that for a planet to be “Earth-like” and habitable, it must be habitable in the same manner as the Earth. At a minimum this means the planet must sustain surface liquid water for millions to billions of years. . Because stable liquid water exists in a relatively narrow range in temperatures and pressures, the planet must have a process to regulate atmospheric temperatures. On the Earth, moderate temperatures are maintained by the incoming solar radiation combined with moderate greenhouse warming from CO$_2$, H$_2$O, and CH$_4$. Therefore, supply and regulation of these gases is key to Earth’s, and thus “Earth-like” planet’s climate. This definition is in contrast to the typical one for “Earth-like," in which a planet is defined simply as one with a bulk density characteristic of being roughly that of a mixture of metal $\pm$ rock. 
The Earth’s atmospheric regulatory processes and aqueous chemistry arise from tectonics: the recycling of material between a planet’s surface and mantle. For Earth, the recycling process manifests as special case of tectonics, plate tectonics, in which oceanic crust continuously subducts into the interior and convective upwellings returning some material to the surface. This Earth-scale transport of material produces the buoyant continental crust and releases and sequesters CO$_2$ through weathering of silicate rocks and arc volcanism at subduction zones [@Velb93; @Brad91; @Slee01; @Fole15; @Fole16]. In contrast a terrestrial planet without tectonic processes, such as one with a rigid lid like Mars, or undergoing episodic overturn like Venus, does not have a steady state cycling of material between the surface and interior. Even if Venus had a lower surface temperature, it is unable to regulate atmospheric CO$_{2}$, which may be rapidly released in pulses during overturn or a consequence of its solidification from a magma ocean [@Hama13]. The lack of standing continents and the formation of surface carbonic acid on Venus does not allow CO$_{2}$ to be efficiently buried, and it instead accumulates in the atmosphere, creating a runaway greenhouse. The planetary controls that lead to mobile-lid regimes, including plate tectonics, and static-lid regimes are a matter of debate even for the well-determined compositional, thermal, and structural parameters found in the Solar System [e.g. @More98]. Models of large, terrestrial planets (so-called Super-Earths) have concluded that plate tectonics could be inevitable due to interior-to-surface heat transfer, surface gravity, and fault strength [@Vale07a; @vanH11], while other models, focused on the fault strength integral to subduction initiation, have come to the opposite conclusion that plate tectonics are unlikely [e.g. @ONei07]. More general models find the tectonic state of a super-Earth is a function of planet size, incident solar radiation, and atmospheric composition [e.g. @Fole12]. Each of these models, however, simply scale the Earth in composition and structure and sought only to understand how changes in the physical parameters of a planet affected tectonics, rather than address the much more complicated (and data-lacking) question of how planet chemical diversity affects tectonics. An alternate approach is to assess *probabilistically*, rather than *definitively*, the relative likelihood of tectonics on exoplanets in individual systems by examining the effect of planetary chemistry on plate tectonics [@Unte14; @Unte15; @Stam16]. In this study, we quantify the effect of planet composition on a vital aspect of sustaining plate tectonics on terrestrial planets over billion year timescales: the sinking forces of the exocrust into a planet’s mantle. We therefore address a minimum criterion for plate tectonics: whether or not the surface crusts sink into the mantle due to buoyancy forces arising from thermal and chemical differences between the surface crust and the interior mantle. In the most general sense, tectonic processes of both the Earth and Venus are driven by buoyancy forces. The magnitude of this force is proportional to the integrated density difference between the sinking surface material and the surrounding mantle. These density contrasts are due to the composition and thermal state of the surface material compared to the mantle. 
Those planetary compositions in which crustal material is buoyant even when forced downward through crustal thickening or contraction have no mechanism for cycling crustal material into the interior via subduction. For those planets that have a buoyancy force less than the Earth, though, there is a lower probability of plate tectonics compared to Earth due to the reduced negative buoyancy (that is downward force) available to drive plate tectonics. Those systems are *less likely* to produce planets with lithospheric material of a composition able to sink via buoyancy forces to $\sim$100 km and thereby produce arc volcanism, will not be able to maintain long-lived temperate, atmospheres, nor will they exhibit top-down, buoyancy driven, steady-state crustal recycling as part of tectonic processes. The Earth’s subducting lithosphere is composed of two parts: a 5-10 km thick basaltic layer lying on top of a 50-80 km thick, cold, and rigid layer of lithospheric mantle [e.g. @Fisc10], which formed as a result of the cooling of the surface of Earth followed by induration from a magma ocean for 50-200 million years. The basaltic layer is formed through eruption at mid-ocean ridges as the result of decompression melting of the passive upwelling of the convecting mantle. The composition of oceanic lithosphere is therefore controlled by the temperature and composition of the mantle. Once the subducting basaltic crust reaches a depth of 35-50 km below the surface, pressure-induced mineral metamorphism transforms the basaltic rock into rock denser than the surrounding mantle ($\Delta\rho\sim+100$ kg m$^{-3}$) through the formation of garnet as it replaces orthopyroxene with minor spinel [@Fros08 Supplemental Figure \[fig:phases\]]. This basalt-eclogite transition marks the first of two major metamorphic processes in the basaltic layer responsible for the negative buoyancy that causes plates to sink and continue the cycling from surface to mantle and back. The second phase transition occurs at 300 km below the surface, with excess silica (SiO$_2$) undergoing the coesite-stishovite transition, providing an additional density increase, further promoting sinking. The subducting plate is also thermally contracted relative to the surrounding mantle. The resulting density contrast produces an additional downward buoyancy force, controlled by the temperature difference between subducting plate and surrounding mantle. Pressure- and temperature-induced phase transitions also occur within the mantle surrounding the subducting plate. However, because of the different temperatures of the mantle relative to the plate, these transitions happen at different depths for each. At a depth near 410 km within the Earth, a phase transition occurs in mantle olivine \[(Mg$^{2+}$,Fe$^{2+}$)SiO$_4$\], transforming it to wadsleyite. This phase transition is what delineates Earth’s upper mantle from the transition zone (Supplementary Figure \[fig:phases\]). The depth of transition from olivine to wadsleyite is shallower at lower temperatures such that olivine in the relatively colder subducting plate will transform into the denser wadsleyite at depths shallower than the transition zone. This introduces a wedge of more dense material in the sinking plate above the 410 km transition, further promoting the sinking of subducting plates. These phase changes, and the relative depths of their occurrence, result in a chemical buoyancy force, $F_{c}$, on the plate. 
The integrated density contrast between the plate and mantle due to thermal differences, including depth to pressure-induced transitions, results in a thermal buoyancy force, $F_{t}$. High-temperature atmospheres limit the cooling of the plate on the surface [@Fole16], which will reduce the magnitude of $F_{t}$ and lower the likelihood of plate tectonics on these planets. To date, there are no definitive observations of terrestrial exoplanet atmospheres and thus inference of mobile versus stagnant-lid states through atmospheric observation is currently beyond the reach of exoplanetary science. Together, $F_{c}+F_{t}$ for Earth’s subducting plates contribute a net buoyancy of $\sim-(2-3)* 10^{13}$ N per meter length of subducted plate downward [recalculated from @Kird14]. While the processes responsible for the initiation of subduction are not well-understood on Earth, plates having sufficient negative buoyancy at the depth of the basalt-eclogite transition is a necessary precondition for incipient subduction and prevention of a stagnant lid [@vanH02]. These upper mantle forces, together with the shear traction and induced flow of a slab inciting downward mantle motion, constitute one of two major driving forces of plate tectonics [@Conr02]. An exoplanet’s composition and mineralogy are not directly observable and are instead inferred from mass-radius relations [e.g. @Seag07; @Weis14; @Wolf16; @Unte16]. Given only a planet’s mass and radius, there is considerable degeneracy in planet composition and structure. As noted by @Dorn15, this degeneracy is reduced when the stellar composition of terrestrial planet-building elements (Mg, Si, Fe) is adopted as a proxy for planetary composition. This is well grounded for the Sun-Earth system due to the refractory nature of these elements [@Unte16, Supplementary Table \[tab:comparison\]] and has yet to be tested for Venus as there are no definitive measurements of its interior composition or structure. Relative to solar [@Aspl05], the abundances of the major terrestrial planet-building elements Mg, Si, and Fe vary between 10 and 400% of solar in large stellar surveys [e.g. @Adib12; @Bens14; @Hink14; @Brew16]. Variations in stellar (Mg+2Si)/Fe affect an exoplanet’s core mass fraction and the ratio Mg/Si affects the dominant minerals of the silicate mantle. Decreasing Mg/Si will lead to a shift in the dominance of the upper-mantle mineral Mg$_2$SiO$_4$ (olivine), to MgSiO$_3$ (pyroxene), to SiO$_2$ [quartz; @Unte16; @Unte17]. While oxidation state of a planet will affect each of these simple assumptions somewhat, the extent of these oxidation-reduction reactions is a complex function of disk chemistry and the planet’s initial thermal profile and subsequent evolution, each of which are poorly understood areas in exoplanetary science. To first order, though, the assumption that stellar refractory composition roughly mirrors planetary refractory composition provides testable predictions of planetary mass for those planets orbiting stars of non-Solar composition [@Unte17]. We therefore adopt the stellar compositional diversity observed in the Galaxy as a proxy for the compositional diversity of terrestrial planets. From this starting point, we calculate the composition, mineralogy of the melt-extracted crust and residual mantle as relative contrast in buoyancy between each. We thus quantify how the composition of a rocky planet affects the likelihood of sustaining plate tectonics over billion-year timescales. 
The melt-extracted crust is created when solid-state convection develops in the bulk silicate planet (BSP) after the differentiation of a core and the adiabatic rise of rock towards the surface leads to partial melting of the mantle, producing a melt-extracted crust of different composition from the initial bulk silicate composition. A key feature of continual plate tectonics over geologic timescales is this melt-extracted crust subducting along with underlying lithospheric mantle into the interior. The relative magnitude of buoyancy contrast between the melt-extracted crust and residual mantle, therefore, will lead to a greater or lesser *likelihood* of plate tectonics in comparison to the Earth.

Results
=======

![image](Ternaries.pdf){width="\linewidth"}

We calculate the melt composition and stable mineralogy of the bulk silicate planet and exocrust, benchmarking our model to the Earth as derived from the solar composition (see Supplementary Materials; Supplementary Figure \[fig:phases\]), for hypothetical planetary compositions derived from two samples of stellar compositions. The first includes 1063 FGK stars from @Adib12, which represent thin and thick-disk stars with a metallicity range from -0.8 $<$ \[Fe/H\] $<$ 0.6, over 100 of which are known to host (mostly gas-dominated) planets. Of this first sample, we thermodynamically model the composition and buoyancy of a subset of 609 stellar compositions that are within the internally consistent MELTS database [@Ghio02], 57 of which are known to host planets. We find that the chemical buoyancy ($F_c$) of these systems ranges from $-7.5*10^{12}$ N m$^{-1}$ (sinks) to $4.0*10^{12}$ N m$^{-1}$ (floats), while our model Earth produces a buoyancy force of $\sim-2.0*10^{12}$ N m$^{-1}$ (Figure \[fig:histogram\]). The second sample of stellar compositions comes from the Apache Point Observatory Galactic Evolution Experiment [APOGEE, @Wils10; @Maje15], which is part of the Sloan Digital Sky Survey IV [SDSS-IV; @Blan17]. From the public APOGEE Data Release 13 [DR13, @Alba16], we selected the stars known to host small planets (R$_{\rm {p}}$ $\leq$1.6 R$_{\oplus}$) from the *Kepler* transiting planet survey, totaling 123 stars with a metallicity range from -0.54 $<$ \[Fe/H\] $<$ 0.36. Of these, we thermodynamically model the composition and buoyancy of a subset of 89 stellar compositions that are within the internally consistent MELTS database [@Ghio02]. We find that the chemical buoyancy of these systems ranges from $-10*10^{12}$ N m$^{-1}$ (sinks) to $3.6*10^{12}$ N m$^{-1}$ (floats). Of terrestrial planets with compositions represented by the variation in both stellar datasets, 19% would produce exocrusts that remain more buoyant than their BSP throughout the entire upper mantle ($F_{c} > 0$), and are therefore unable to subduct (Figure \[fig:histogram\]). Furthermore, 61% of the Adibekyan sample and 41% of the *Kepler* sample produce plates more chemically buoyant than that of our model Earth ($F_{c} > F_{c-Earth}$; Figure \[fig:histogram\]) and are therefore *less likely* to host plate tectonics than the Earth. ![Histograms of buoyancy forces calculated using our model for our sample of 609 @Adib12 stars (gray) and 89 *Kepler* planet hosts (purple) with compositions inside the MELTS database [@Ghio02]. Those modeled from stellar composition are shown in red.
The buoyancy force and bulk composition of model Earth is shown as a light gray dashed line.[]{data-label="fig:histogram"}](Histogram.pdf){width=".6\linewidth"} This variation in buoyancy forces is mainly caused by the enrichment of Si and alkali elements (Na, K) within the melt-extracted crust. Partial melting of typical mantle rocks leads to enrichment of silica and alkalis in the melt compared to the parent rock. We find those planetary compositions with lower alkali and silica abundance, such as those crustal materials that form on Earth with basaltic composition, are most likely to sink, while those with greater alkaline and silicic compositions are less likely (Figure \[fig:rocktype\]; Supplementary Figure \[fig:nonEarth\]). This is consistent with Earth’s tectonics, in which low-density and alkali-rich, andesitic continental crust does not subduct [@Cloo93]. ![Alkali abundance as a function of silica of the exocrust compositions for our sample of *Kepler* planet-host (diamonds) and @Adib12 stars (circles). Color is a function of the chemical buoyancy as calculated from a thermally equilibrated, 5% melt layer. Points outlined are those with known planets (exoplanets.org). The green box represents the Earth Range of average Earth basaltic compositions of @Gale13. The composition of Mercury’s Northern Volcanic Plains and Intercrater Plains and Heavily Cratered Terrains [diamonds; @Namu16] and Mars’ type 1 and type 2 crusts [squares, including K$_2$O; @McSw03] are included. Venus’ alkali composition has yet to be determined [@Trei13]. Each crustal composition of Mercury and Mars fall within the positively buoyant andesitic field, despite 3 of the 4 measurements containing Na$_2$O abundances within the Earth-like range.[]{data-label="fig:rocktype"}](RockType.pdf){width=".6\linewidth"} The magnitude of chemical buoyancy is, therefore, closely tied to the abundance of sodium and potassium in the BSP due to their incompatibility in melting processes (Figure \[fig:histogram\]). Because Na is roughly 15 times more abundant than K for the Sun [@Lodd03], we focus on Na as a controlling alkali in the buoyancy calculations. Because all alkalis are moderately volatile in the planetary condensation process [@Lodd03], their abundances are not likely to be mirrored in any orbiting terrestrial planets due to their fractionation relative to the refractory elements (e.g. Mg, Si, Fe) in the planetary formation process. For this model, we assume an incomplete accretion of Na due to volatile loss during planet formation in proportions similar to Sun-Earth fractionation for each star in our sample [$\delta_{ \rm{Na-Sun/Earth}} = \left (\rm{Na/Si}\right)_{\rm{Earth}}/\left(\rm{Na/Si}\right)_{Sun}$ = 0.26 by mole; @McD03; @Aspl05]. Chemical buoyancy is then a function of the total Na in the star as well as the degree of Na volitalization during accretion, $\delta_{\rm {Na}}$: $$\label{eq:fit} F_{c} \approx -1.15+16.6\left(\rm{[Na/Si]}+log10\left(\delta_{\rm{Na}}/\delta_{\rm{Na-Sun/Earth}}\right)\right) * 10^{12} \rm{N m^{-1}}$$ ![Chemical buoyancy force ($F_{c}$) as a function of bulk planetary Mg and Si normalized to Fe (by mole) for values for three values of $\delta_{\rm{Na}}$. Filled circles are those calculations presented here in Figures \[fig:histogram\] and \[fig:rocktype\] for the @Adib12 dataset (black) and *Kepler* hosts (red). 
Open symbols are those buoyancy forces calculated from the relationship between stellar \[Na/Si\] and $F_{c}$ (Equation \[eq:fit\], Supplementary Figure \[fig:fit\]) for the @Adib12 stars (squares) and *Kepler* hosts (circles). The 1063 stars in the stellar survey of @Adib12 span $\sim$2.5 orders of magnitude in total Na; halving or doubling $\delta _{\rm {Na}}$ from the Solar value shifts the entire sample from essentially all to essentially none of the plates being negatively chemically buoyant. The index (Mg+2Si)/Fe is both a primary control on the compositional extent of the thermodynamic database [@Ghio02] as well as defining the approximate proportions of core and mantle of an associated terrestrial planet [@Unte16]. []{data-label="fig:Sodium"}](Sodium.pdf){width="\linewidth"} when \[Na/Si\] is scaled from @Aspl05 (Supplementary Figure \[fig:fit\]), 78% of the variation in $F_c$ can be explained by variation in \[Na/Si\]. For a given stellar \[Na/Si\] value, 95% of $F_c$ values fall within $\pm10^{12}$ N m$^{-1}$ of this trend; the residual scatter is a more complex function of the refractory element composition of the planet and must be derived through the methods outlined here. This correlation between $F_c$ and Na abundance allows us to estimate the chemical buoyancy of the 454 stars in the @Adib12 dataset whose mantle compositions were not located within the MELTS database and were generally those stars with bulk Fe/Si $<$ 0.95 [Bulk Earth Fe/Si = 1; @McD03]. For those stars in the Adibekyan dataset, we find 28% are of compositions likely to produce melt-extracted crust that remains buoyant relative to their mantles ($F_c > 0$), with 69% less chemically buoyant than the Earth (Figure \[fig:histogram\]). The Kepler dataset shows a similar trend, with 22% of stars likely to produce crusts with positive chemical buoyancy and 60% with crustal buoyancy forces greater than the Earth. Each of these systems is therefore *less likely* than the Earth or *completely unlikely* to maintain long-term, steady-state tectonics. When $\delta_{\rm{Na}}$ is twice $\delta_{\rm{Na-Sun/Earth}}$ ($\delta_{\rm{Na}}$ = 0.52), only 1% of stars within the Adibekyan sample and 4% of the Kepler sample produce any plates with a negative chemical buoyancy force ($F_c < 0$), whereas all but fourteen stellar compositions in our sample of 1186 stars produce negatively buoyant plates when $\delta_{\rm{Na}}$ is half of $\delta_{\rm{Na-Sun/Earth}}$ ($\delta_{\rm{Na}}$ = 0.13; Figure \[fig:Sodium\]). The likelihood of producing negative chemical buoyancy in melt-extracted surface plates is therefore very sensitive to $\delta _{\rm {Na}}$ and weakly dependent upon bulk mantle composition. Stars with greater Na abundance than Solar must retain less Na when forming terrestrial planets to create negatively buoyant plates at depth. The same is true for those stars of Solar Na composition with greater $\delta _{\rm {Na}}$ compared to the Earth and Sun ($\delta _{\rm {Na}} >$ 0.26). These high Na planetary systems may initiate subduction, but due to their higher positive chemical buoyancy force, they are unlikely to continue to do so [@Cloo93]. As mantle temperatures increase in this stagnant-lid planet, any volatiles contained within the partially subducted plate will degas into the atmosphere, thus entirely negating any long-term, continuous tectonic regulation of climate.
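As an illustration of how Equation (\[eq:fit\]) is applied, the sketch below evaluates $F_c$ from a stellar \[Na/Si\] value and an assumed Na retention factor $\delta_{\rm Na}$. We read the $10^{12}$ prefactor as applying to the whole bracket; the printed numbers are examples only, not results of the full thermodynamic calculations:

```python
# Illustrative evaluation of the F_c([Na/Si], delta_Na) relation quoted above.
import numpy as np

DELTA_NA_SUN_EARTH = 0.26    # Earth/Sun Na retention by mole, as adopted in the text

def chemical_buoyancy(na_si_dex, delta_na=DELTA_NA_SUN_EARTH):
    """F_c in N m^-1 from stellar [Na/Si] (dex) and the Na volatilization factor."""
    return (-1.15 + 16.6 * (na_si_dex
                            + np.log10(delta_na / DELTA_NA_SUN_EARTH))) * 1e12

print(chemical_buoyancy(0.0))           # solar [Na/Si], Earth-like retention: negative (sinks)
print(chemical_buoyancy(0.1))           # Na-rich star: positive (buoyant) crust
print(chemical_buoyancy(0.0, 0.13))     # stronger Na loss: strongly negative
```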
Our compositionally focused approach is a tool to examine the influence of a host star’s refractory and moderately volatile composition on a key aspect of planetary dynamics: top-down, density-driven plate tectonics. This model attempts to capture only the most general details of terrestrial planetary formation, differentiation, and evolutionary processes, with the sinking of a surface plate through a planet’s mantle being a consequence of the difference in density between the mantle and crust. This model treats the complex processes of accretion, formation of overlying continents, and prolonged geochemical processing as discrete processes at a fixed pressure, temperature, and oxygen fugacity. While not meant to be strictly prescriptive, these results demonstrate first-order trends and controlling factors in the geochemical consequences of variable compositions of terrestrial planets. This approach predicts a lack of plate tectonics on Venus and the potential for (transient) plate tectonics on Mars. For planets with significant thermally insulating atmospheres, more heat is retained at the surface, limiting the cooling of the plate [@Fole15; @Fole16]. A surface temperature 450 K greater than Earth’s, such as on Venus, will significantly reduce $F_{t}$, while $F_{c-Venus} \sim F_{c-Earth}$, such that the sum predicts Venus is less likely than Earth to undergo tectonics, without having to consider the strength or thickness of surface rocks [@Fole12; @Berc14]. In the case of Mars, the lack of water and rapid heat loss due to its small size limits the likelihood of long-term plate tectonics, although it may have experienced subduction in the past [@Slee94].

One of the most significant neglected variables is the abundance of water and carbon in the planet’s interior, which are sensitive to nebula composition, formation processes and location in the nebula, and subsequent evolution. The volatile abundance in terrestrial planets is not well constrained in planetary formation models, both for our Solar System and exoplanetary systems. While planetary H$_2$O and CO$_2$ abundances may inform the scale of exoplanetary oceans and climate, the planet’s relative abundance of moderately volatile elements such as Na is a first-order control on the likelihood of regulating these important aspects of exoplanet habitability via long-term plate tectonics. This approach, therefore, points to the importance of addressing the thermal and dynamic controls for both the highly and moderately volatile abundances of planets in formation models.

Of the small exoplanets discovered to date (R $\leq$ 1.6 Earth radii), few have both size and mass measurements, from which mean density may be calculated. Mean density is a non-unique function of the relative proportions of core, mantle, and gaseous envelope. Of those mass/radius measurements sufficiently dense to suggest the planets are terrestrial, planetary compositions and structures are indistinguishable from one another due to large observational uncertainties. We offer a complementary approach for those planets inferred to be terrestrial by providing a framework for quantifying an exoplanet’s potential geochemical dynamics. Indeed, these results represent the first observational metric with which to gauge the probability of an exoplanetary system maintaining steady-state surface-to-interior geochemical cycling and tectonic behavior with the potential to regulate atmospheric temperatures over long timescales.
Furthermore, the connection between stellar Na abundance, degree of volatilization, and surface plate chemical buoyancy point to the importance of adopting a comparative tool to ascertain the likelihood of exoplanetary plate tectonics compared to the Earth. The exoplanetary field needs to refine models of not only the volatile condensation and accretion in planetary systems, but track the fractionation and dynamics of the refractory and moderately volatile elements within the disk as well. Furthermore, geochemical and geophysical experiments and models must be expanded to those compositions relevant to planets of non-Earth/Solar composition. The upcoming James Webb Space Telescope will provide opportunities for time-intensive observational studies to follow up terrestrial exoplanet atmospheric conditions, and our framework provides a strategy to identify those planetary systems most worthy of follow up observations. Bulk planet density is only one factor in determining whether a planet is “Earth-like.” It is only through this holistic approach to characterizing exoplanets, combining data and models from across scientific fields, that we will truly calibrate the likelihood of an extrasolar planet to be behaviorally Earth-like and habitable to life as we know it. We thank the Cooperative Institute for Dynamic Earth Research (CIDER) for providing the space and intellectual environment for fostering this work as well as funds to support the BurnMan code. This project was supported by NSF-CAREER (EAR-09-55647) to Panero and the Shell Undergraduate Research Experience, OSU Undergraduate Research Scholarship, and Friends of Orton Hall grant to Hull. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. 
Supplementary Material ====================== Stellar Composition Data ------------------------ We apply this Earth-benchmarked model to two samples of stars with abundances available for 9 elements (Mg, Fe, Si, Ca, Al, Ni, Ti, Cr and Na). First is the survey of F, G and K stars in the Galaxy of @Adib12 and the other is a sample of 123 known planet-hosting stars whose abundances were available within the APOGEE Data Release 13 [DR13 @Alba16]. Where available, only those stars with C/O $<$ 0.8 were chosen. We use 1186 total stars in our model. These stars represent both thin and thick-disk stars with a metallicity range from -0.8 to 0.6 \[Fe/H\], of which 235 have detected planets. Of these stars, 698 have compositions that fall within the thermodynamic databases used in this study (Figure \[fig:histogram\]). For those stars outside of the database, we estimate the crustal Na content and chemical buoyancy force through a linear fit of these values for only those 609 stars that were within the MELTS database and in the @Adib12 dataset (Supplemental Figure \[fig:fit\]). We note that a separate publication is in progress (Teske et al. in prep) that will discuss the reliability of FGK dwarf star abundances produced by the APOGEE automated analysis pipeline [ASPCAP; @Garc16; Holtzmann et al. in prep, Jonsson et al, in prep]. Initial results (also discussed in Wilson et al. in prep) suggest that, for solar-type stars, the Fe, Mg, and Si abundances produced by ASPCAP are reliable at the $\sim$0.15 dex level. The ASPCAP Na abundances appear to be less reliable, upon comparison with a small sample of values derived from optical data. For consistency, we did not modify the DR13 APOGEE Na abundances to bring them into better agreement with the optical data. We note that Figure 1 shows that the @Adib12 and the APOGEE samples have similar ranges in Na abundances. Methods Summary --------------- ### Planet Composition Model There is extensive debate as to the exact nature of the Earth’s composition, from one closer to a volatilized carbonaceous chondrite [@McD95; @McD03], to more similar to enstatite chondrites [@Javo10], to mixtures of carbonaceous chondrites and ordinary chondrites [e.g. @Fito12]. We instead adopt the simplest model of a composition that is identical to the Sun in refractory elements, with an oxygen abundance fixed by the fO$_{2}$ of the condensing medium. For key moderately volatile elements such as Na, we assume an ad-hoc condensation efficiency of 26% to reproduce the Earth’s bulk Na composition [@Aspl05; @McD03]. ![ Predicted phases (black) and densities (red) in this model (solid) compared to Earth reference compositions [dashed, @McD03]for (a) the bulk silicate Earth and (b) melt-extracted basalt as a function of pressure. The model mantle is calculated along a 1600 K adiabat while the basalt is calculated along a 1200 K adiabat. While the simplified model presented here does not reproduce the Earth in detail, it does reproduce the correct minerals and pressures of phase transitions and magnitude of density discontinuities. Our model mantle under predicts the olivine fraction in the mantle (55% vs. 60%) while our model basalt predicts clinopyroxene at the expense of garnet relative to average basalt compositions under predicting the density by no more than 3%. Garnet is the dense phase in the basalt-eclogite transition; this model therefore represents a lower bound on the likelihood of subduction. 
[]{data-label="fig:phases"}](phases.pdf){width="\linewidth"}

The condensation temperatures of the moderately volatile elements are highly correlated with their abundances in ordinary and CI chondrites [@Wai77], indicating a condensation process that is both compositionally and thermally driven. However, because of the lower condensation temperature of these elements relative to the refractories [e.g. @Lodd03], there is likely radial mixing of material within the disk. Recent planetary formation models [e.g. @Raym06] show water ice mixing from beyond 2.5 AU into the inner Solar System, while the degree of mixing of the moderately volatile elements increases with the timescale of planet formation [@Bond10a; @Bond10b]. Other disk models addressing the mixing of moderately volatile elements in the disk systematically over-predict the Na abundance in the resulting planets when attempting to reproduce the Earth’s composition [@Matsu16], likely due to not accounting for volatile loss from melting during planetary formation.

[l|cc|cc|cc|cc]{} Mg&6.1&16.6&16.9&17.4&19.9&19.7&3.8-4.6&4.3\ Si&5.8&15.8&15.3&14.4&15.9&16.4&20.0-17.7&19.1\ Fe&5.1&13.8&15.3&14.4&2.4&2.4&2.8-3.9&2.4\ Al&0.4&1.1&1.5&1.2&1.8&1.4&6.3-6.8&6.6\ Ca&0.4&1.0&1.1&1.0&1.3&1.2&4.4-4.6&3.5\ Na&0.3&0.7&0.2&0.2&0.2&0.2&1.6-2.5&3.5\ O&82.0&51.0&49.6&51.3&58.5&58.7&60.0-61.2&60.5\ \[tab:comparison\]

### Planetary Differentiation Model

We model two stages of differentiation from the condensed planet composition: separation of the iron-rich core from the bulk silicate planet (BSP) and crustal rock forming as a result of partial melting (exocrust) from adiabatic decompression of the BSP. This simplified two-stage model broadly reproduces the major element composition of the Earth, core-extracted bulk silicate Earth, and mid-ocean ridge basalt (MORB) from the Solar composition (Supplementary Table \[tab:comparison\]). Differences between our simplified Earth and the true Earth lead to comparable differences in modal abundance of minerals in both MORB and Bulk Silicate Earth (BSE, Supplementary Figure \[fig:phases\]). The resulting absolute differences in calculated densities are 0.6% for the bulk silicate Earth and 2.6% for the modeled MORB composition, with the depths to metamorphic transitions and the relative difference between the transitions for each composition indistinguishable. We model planetary differentiation through self-consistent thermodynamic modeling [@Ghio95; @Ghio02] of cooling the bulk composition from a molten planet to calculate mineral and melt equilibria. The pressure and gas fugacity of the equilibrium calculations are chosen to best reproduce the fraction of differentiated Fe alloy and the composition of the BSE and MORB from the Sun’s composition. We assume Si is the primary light element in the core [@Fisch15] to be consistent with the Mg/Si ratio of the Earth’s mantle [@McD95; @McD03; @Unte16]. The melting of the mantle is calculated by assuming an adiabatic rise of material to lower pressures until the solidus temperature of the rock is reached. We calculate the primary crust composition to be that of the BSE at 5% melt fraction, which is comparable to average mid-ocean ridge basalt (Supplementary Table \[tab:comparison\]) after imposing a systematic correction to MgO to account for an over-stabilization of clinopyroxene in MELTS [@Ghio02]. If no Si is present in the core, the mantle Si abundance would increase accordingly.
This would enrich the melt in silica, forming a crust of buoyant andesite rather than basalt. We solve for the thermodynamically stable mineralogical host of each BSP and exocrustal composition as a function of pressure and temperature using the HeFESTo [@Stix05; @Stix11], open-source BurnMan [@Cott14; @Unte16] and the Exoplanet-Pocketknife (available at https://github.com/ScottHull/Exoplanet-Pocketknife) software packages. Temperatures are assumed to increase adiabatically, calculated self-consistently with depth. The mantle adiabat is fixed at 1600 K, approximately the potential temperature of the Earth’s mantle, while the average exocrust temperature is varied between a 1000 K geotherm ($\Delta \rm{T} =600 K$) and 1600 K (thermally equilibrated, $\Delta \rm{T}=0$). From the mineralogy, the compositions of the minerals, and the modal abundance of each, density profiles are calculated from the mineral-specific equations of state.

![Predicted mantle mineralogies along 1600 K (top) and 1000 K (bottom) geotherms of hypothetical terrestrial planets about stars HD-19994 (left) and HD-89668 (right), representing extremes in Mg/Si. cpx = clinopyroxene, opx = orthopyroxene, c2c = c2/c structured-phase of opx, mw = magnesiowustite, $\alpha$-, $\beta$-, $\gamma$-ol = olivine, wadsleyite, and ringwoodite, respectively. Densities (right axis) of the mantle along a 1600 K geotherm (solid red) are compared to the densities of the inferred slab along a 1000 K geotherm (solid blue), which are a function of the density of the exocrust (dashed cyan) and cold mantle lithosphere (dashed red). []{data-label="fig:nonEarth"}](NonEarth.pdf){width="\linewidth"}

![Crustal Na$_{2}$O wt% composition as a function of molar fraction Na and chemical buoyancy force ($F_{c}$) as a function of stellar \[Na/Si\] for stars within the MELTS database. We adopt the Solar model of @Aspl05 for calculation of \[Na/Si\]. $F_{c}$ is modeled assuming Solar Na is reduced by 74% to the Earth abundance relative to Si [@Aspl05; @McD03]. []{data-label="fig:fit"}](fit.pdf){width="\linewidth"}

An additional consideration is the extent of melting for the formation of the exocrust, which is a function of both mantle composition and temperature. We find that 95% of the sample set is at 5% melt over a 65 K range, such that forming an exocrust of thickness comparable to that of the Earth occurs over a relatively narrow range. For lower mantle temperatures, little-to-no melt may be produced, while greater mantle temperatures will produce greater crustal thickness. For simplicity, we have considered a self-consistent adiabat applicable to the Earth with an Earth-like potential temperature (1600 K). Increasing the potential temperature increases the extent of melting, and consequently the thickness of melt-extracted crust. A greater extent of melting decreases the Na$_2$O fraction in the melt for an approximately constant SiO$_2$ fraction when the melt fraction is $>$ 5% [@Ghio02]; both effects increase the downward force due to chemical buoyancy for a given planet composition. The Earth’s mantle contains trace quantities of both H$_2$O and CO$_2$, which affect the melting behavior of the passive upwelling mantle. Carbon increases the depth of initial melting in the decompression melting of mantle peridotite. This melting occurs at significantly greater depths than assumed in the models here, but contributes very small fractions of melt [$<$0.3%; @Dasg06] and is therefore unlikely to significantly affect the major element chemistry assumed here.
Similarly, water decreases the solidus temperature, increasing the depth of initial melting but not significantly altering the total melt fraction [@Asim03], and it decreases the total iron in the melt. With a simple two-stage model, we do not address the chemical secular evolution of the mantle due to the extraction of continental crust. As such, we overestimate the Na$_{2}$O in the basaltic crust because we neglect the Na$_{2}$O sequestered in the continental crust. Accounting for a continental crust with 3.6 wt% Na$_{2}$O [@Rudn03] would reduce the BSP Na$_{2}$O abundance by about 5%, lowering the Na$_{2}$O content of the resulting melt-extracted crust to about 77% of our predicted value, consistent with typical MORB compositions. This suggests that early plate tectonic processes on Earth required initial secular variation in the bulk composition of incompatible elements before modern plate tectonics could commence.

The downward sinking force is quantified by the buoyancy of a potential sinking plate, made up of a 10 km-thick exocrust and a 50 km-thick lithospheric mantle. For simplicity, we adopt the BSP composition as the lithospheric mantle composition. The buoyancy force per unit length of subducting plate, $F_{b}$, is $$F_{b} = t \int_{0}^{d}\Delta\rho\left(\Delta x,h,\Delta T\right)g\left(h\right)dh$$ where $\Delta\rho(\Delta x,h, \Delta T)$ is the density difference between the down-going plate and mantle compositions as a function of the difference in bulk compositions, $\Delta x$, the depth, $h$, and the potential temperature difference between the mantle and slab, $\Delta T$; $g$ is the acceleration due to gravity, $d$ is the depth to the base of the Earth’s transition zone (where the integrated density differences are a maximum), and $t$ is the slab thickness. While $g$ is a function of the planet mass, the depths to the metamorphic transitions are pressure dependent, such that for a planet of equal core to mantle proportions, the product of surface gravity and the depth at 25 GPa varies by just 1.3% over planets between 1 and 4.5 Earth masses. Therefore, we adopt Earth’s surface acceleration due to gravity and Earth’s pressure-depth relationship as a representative model valid for the calculation of net negative buoyancy in Super-Earths. The magnitude of the chemical buoyancy force is closely tied to the abundance of Na$_{2}$O in the BSP due to its incompatibility in melting processes, $$F_{c} \approx \left(-9.38 + 1.64 \, X_{Na_{2}O}\right) \times 10^{12}\ \rm{N\,m^{-1}}$$ where $X_{Na_{2}O}$ is the weight percent Na$_{2}$O in the planet’s mantle ($X_{Na_{2}O} = 4.97$ wt% for our model Earth).
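To make the force balance concrete, the following short Python sketch (ours, not from the original work) evaluates the fitted chemical-buoyancy relation above and a trapezoidal approximation of the slab buoyancy integral $F_{b}$; the density-difference profile, the 660 km integration depth and all other numbers are illustrative assumptions rather than model output.

```python
import numpy as np

# Minimal illustrative sketch (not the authors' code). All profile values and
# parameters below are assumptions chosen only to demonstrate the two formulas.

def chemical_buoyancy(x_na2o_wt_pct):
    """Fitted relation F_c ~ (-9.38 + 1.64 * X_Na2O) * 1e12, in N per metre of slab."""
    return (-9.38 + 1.64 * x_na2o_wt_pct) * 1e12

def slab_buoyancy(delta_rho, depth, g=9.81, slab_thickness=60e3):
    """Trapezoidal approximation of F_b = t * int_0^d delta_rho(h) g dh.

    delta_rho      : density difference slab minus mantle [kg m^-3] on the depth grid
    depth          : depth grid [m] down to the base of the transition zone
    slab_thickness : 10 km exocrust + 50 km lithospheric mantle, as in the text
    """
    integrand = delta_rho * g
    return slab_thickness * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(depth))

# Chemical buoyancy for the model-Earth mantle Na2O content quoted in the text:
print(chemical_buoyancy(4.97))

# Slab buoyancy for a purely hypothetical, constant density-difference profile
# down to an assumed 660 km base of the transition zone:
depth = np.linspace(0.0, 660e3, 200)          # m
delta_rho = 50.0 * np.ones_like(depth)        # kg m^-3, illustrative only
print(slab_buoyancy(delta_rho, depth))
```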
---
abstract: 'In this article we solve the problem of maximizing the expected utility of future consumption and terminal wealth to determine the optimal pension or life-cycle fund strategy for a cohort of pension fund investors. The setup is strongly related to a DC pension plan where additionally (individual) consumption is taken into account. The consumption rate is subject to a time-varying minimum level and terminal wealth is subject to a terminal floor. Moreover, the preference between consumption and terminal wealth as well as the intertemporal coefficient of risk aversion are time-varying and therefore depend on the age of the considered pension cohort. The optimal consumption and investment policies are calculated in the case of a Black-Scholes financial market framework and hyperbolic absolute risk aversion (HARA) utility functions. We generalize Ye (2008) (2008 American Control Conference, 356-362) by adding an age-dependent coefficient of risk aversion and extend Steffensen (2011) (Journal of Economic Dynamics and Control, 35(5), 659-667), Hentschel (2016) (Doctoral dissertation, Ulm University) and Aase (2017) (Stochastics, 89(1), 115-141) by considering consumption in combination with terminal wealth and allowing for consumption and terminal wealth floors via an application of HARA utility functions. A case study on fitting several models to realistic, time-dependent life-cycle consumption and relative investment profiles shows that only our extended model with time-varying preference parameters provides sufficient flexibility for an adequate fit. This is of particular interest to life-cycle products for (private) pension investments or pension insurance in general.'
address:
- 'Department of Mathematics, Technical University of Munich, Munich, Germany'
- 'Department of Actuarial Studies and Business Analytics, Macquarie University, Sydney, Australia'
author:
- Andreas Lichtenstern
- 'Pavel V. Shevchenko'
- Rudi Zagst
bibliography:
- 'Bibliography.bib'
title: 'Optimal life-cycle consumption and investment decisions under age-dependent risk preferences[^1]'
---

Pension investments, optimal life-cycle consumption and investment, age-dependent risk aversion, HARA utility function, martingale method

G11, G22, C61, D14

Introduction {#sec:Introduction}
============

A suitable management of pensions needs to consider earnings/contributions and investment, but should also account for the required consumption during the accumulation and/or decumulation phase. To this end, in this paper we consider the finite horizon portfolio problem of maximizing expected utility of future consumption and terminal wealth to determine the optimal pension or life-cycle fund strategy for a cohort of pension fund investors. The setup is strongly related to a DC pension plan where additionally (individual) consumption is taken into account. Within this framework, [@LaknerNygren2006] describe the trade-off the investor faces as a compromise between ‘living well’ (consumption) and ‘becoming rich’ (terminal wealth). Classical consumption-investment problems consider constant risk aversion in the intertemporal utility functions for consumption besides a personal discount rate or impatience factor, see [@Merton1969] or [@Merton1971]. Within classical models (where constant relative risk aversion (CRRA) utilities are applied), optimal portfolio policies turn out to be constant over the life-cycle, meaning time and wealth independent.
According to [@Aase2017] this is ‘against empirical evidence, and against the typical recommendations of portfolio managers’. Furthermore, [@Aase2017] and [@YangFang2014] argue that the tendency of stocks to outperform bonds over long horizons in the past is one of the reasons why people at a younger age are advised to allocate a higher proportion of wealth to equities compared to older people. Evidence for changing risk aversion over the life-cycle is reported in the literature, although there is no broad agreement on its behavior: [@Morin1983], [@Bakshi1994], [@Palsson1996], [@Bellante2004], [@AlAjmi2008], [@Ho2009], [@Yao2011] and [@AlbertDuffy2012] observe increasing risk aversion with age, [@Bellante1986] and [@Wang1997] find risk aversion decreasing with age, and [@Riley1992] detect different behavior between the pre- and post-retirement phases. Age-dependent risk preferences can be motivated economically by the observed behavior of people to reduce their investment risk stepwise the closer they get to retirement. This behavior is reflected in many life-cycle fund allocation policies, see for instance [@Milliman2010] or [@TIAACREF_LifecycleFunds2015]. An important economic reason behind this is that the older the person, the less time is left until retirement entry and therefore the less likely it is for her to overcome a potential market crash; this is strongly connected to the fear of having insufficient wealth left for retirement. Moreover, it is reasonable that the closer one is to retirement, the more satisfaction is connected with savings, i.e. with a lower consumption surplus, which yields a higher initial wealth for the decumulation phase. Based on these economic reasons, it is meaningful to consider age-varying preference parameters (dependent on the age of the pension cohort or the individual investor) in the form of a coefficient of risk aversion, later called $b(t)$, and a weighting factor, later referred to as $a(t)$, that governs the relative importance of consumption at different points in time. The latter has no impact on risk aversion but can control for the varying preference between consumption and terminal wealth over time. In an analysis of the optimal controls in Section \[sec:NCS\] we show that our proposed model can explain and describe people’s observed behavior of reducing relative risky investments over time while simultaneously targeting, on average, a certain consumption rate profile. In contrast, we find that the previously described existing models are not able to capture this behavior. Therefore, Section \[sec:NCS\] in particular shows that it is economically important to have separate functions or parameters for the preference of consumption over terminal wealth and for risk aversion, $a(t)$ and $b(t)$. In addition, consumption and wealth floors are introduced which have an economic meaning as minimum required levels of consumption and wealth. This motivates the development of a dynamic life-cycle model with time-varying risk preferences such as the coefficient of risk aversion, together with consumption and wealth floors, which can capture the age-dependent consumption and investment behavior of investors. Related literature on this topic considers stochastic income and unemployment risks, see [@BodieMertonSamuelson1992], [@Koo1998], [@Munk2000], [@Viceira2001], [@Huang2008], [@Jang2013], [@Bensoussan2016], [@Wang2016] or [@Chen2018].
Setups where the investor faces uncertain lifetime, mortality and optimal life insurance are considered in [@Yaari1965], [@PliskaYe2007], [@MenoncinRegis2017], [@ZouCadenillas2014], [@KronborgSteffensen2015], [@ShenWei2016], [@Duarte2011], [@Huang2012] and [@Ye2008]; optimal consumption and investment under insurer default risk is studied by [@JangKooPark2019]. [@KraftMunk2011], [@KraftMunkWagner2018], [@AndreassonShevchenkoNovikov2017], [@CuocoLiu2000] and [@DamgaardFuglsbjergMunk2003] analyze optimal housing as a durable good. Constraints in the optimization problem are considered in [@Cuoco1997], [@Elie2008] and [@Grandits2015]. Moreover, [@Akian1996], [@Altarovici2017] and [@Dai2009] analyze the portfolio problem under transaction costs. The application of HARA utility functions in a life-cycle context can be found in [@Huang2008], [@Ye2008], [@ChangRong2014], [@ChangChang2017] and [@Wang2017]. Moreover, [@BackLiuTeguia2019] study a life-cycle consumption problem for HARA utility with time-independent, increasing risk aversion and examine the relation between age and portfolio risk by using Monte Carlo analysis. [@TangPurcalZhang2018] study an optimal consumption-investment problem under a CRRA utility function with age-independent risk aversion, but examine the impact of hyperbolic discounting, where the rate of time preference is a function of time. We generalize this approach by considering general $a(t)$ or ${e^{- \beta t} a(t)}$, respectively, and by introducing age-varying risk aversion. In this paper we apply HARA utility functions to both consumption and terminal wealth and consider time-varying preferences: an age-dependent preference between consumption and terminal wealth and an age-dependent coefficient of risk aversion in the intertemporal consumption utility. For simplicity, income is treated as a deterministic process. Furthermore, we do not model mortality and consider a fixed time horizon $T$ that corresponds to a retirement age; thus we assume the agent survives up to the age of retirement. A positive, fixed floor in the terminal utility ensures a minimum liquid asset wealth level at the age of retirement, which is meaningful as the retiree needs wealth to live from and could possibly afford housing from this wealth. In addition, a positive, time-varying floor in the consumption utility guarantees a minimum (time-dependent) consumption rate. This is essential during the accumulation phase, as certain expenditures need to be covered: for instance living expenses and rental payments when the home is rented, mortgage payments and maintenance costs when the home is bought and financed by debt, or only maintenance costs when the agent already fully owns a house (e.g. inherited). Therefore, the economic demand for both a positive minimum level of consumption and terminal wealth can be motivated. Most related to our work are [@Ye2008], [@Steffensen2011], [@Hentschel2016] and [@Aase2017]. The difference between our approach and these papers is as follows. [@Ye2008] considers income, mortality and HARA utilities for both consumption and terminal wealth under a constant coefficient of risk aversion, i.e. constant $b(t)$, but where the age-dependent preference between consumption and wealth $a(t)$ is incorporated. We generalize the results by introducing a time-dependent coefficient of risk aversion $b(t)$.
[@Steffensen2011] provides a first insight into the optimal policy when the utility parameters of the intertemporal utility, which is of a CRRA type, are time-varying; thus $a(t)$ and $b(t)$ are captured. But the model disregards terminal wealth, a consumption floor and labor income. In a similar fashion, [@Hentschel2016] studies the consumption problem for CRRA utility with habit formation and considers $a(t)$ and $b(t)$. Similar to [@Steffensen2011], neither terminal wealth nor a consumption floor nor income are included in their model. Finally, [@Aase2017] uses the martingale method (which allows one to reformulate the optimal stochastic control problem as a simpler maximization problem with a constraint) to determine optimal consumption and investment under mortality and a CRRA utility with age-dependent risk aversion $b(t)$. But the model does not consider terminal wealth, a consumption floor, income or a time-varying preference $a(t)$. The main contributions and innovations of this paper can be summarized as follows: we consider all the ‘ingredients’ of the models in the above mentioned papers ($a(t)$, $b(t)$, terminal wealth, floors for consumption and terminal wealth via HARA utilities, income process), which leads to a novel, very flexible and more realistic dynamic life-cycle model framework. We extend or generalize [@Ye2008] by adding an age-dependent coefficient of risk aversion $b(t)$, and [@Steffensen2011], [@Hentschel2016] and [@Aase2017] by considering terminal wealth and allowing for consumption and terminal wealth floors via an application of HARA utility functions. The corresponding consumption-investment problem is solved analytically and an interpretation is provided. In a case study, where we fit realistic predetermined target policies for consumption and relative allocation to several models, we find that only our proposed and most general model is sufficiently flexible to describe human preferences on consumption and investment in a suitable fashion. This implies that modeling the agent’s preferences in an age-dependent fashion is indispensable. To solve the respective portfolio problem, we follow a separation approach similar to the ones developed by [@KaratzasShreve1998] and [@LaknerNygren2006]. It divides the original consumption-terminal wealth optimization problem into two sub-problems, the corresponding consumption problem and the terminal wealth problem. These separate problems are solved individually. Due to the time-dependent preference parameters we apply the martingale method in line with [@Aase2017] to solve the individual problems in closed form. Afterwards, we show how the individual solutions have to be glued together in order to obtain the general solution to the original consumption-terminal wealth problem. The remainder of this paper is organized as follows. Section \[sec:FinancialMarketModel\] introduces the financial market and the portfolio problem of interest, Section \[sec:SeparationTechnique\] shows the separation approach and the solution to the problem. A fit of the analytic strategy to suitable consumption and investment curves is conducted in Section \[sec:NCS\], followed by an investigation of the optimal controls and the corresponding wealth process. Section \[sec:Conclusion\] concludes.
\[app:Proofs\] summarizes all proofs of the claimed statements: the proofs for Section \[sec:ConsumptionProblem:y\] on the consumption problem can be found in \[app:ProofsConsumptionProblem\], the proofs related to Section \[sec:TerminalWealthProblem\] on the terminal wealth problem in \[app:ProofsTerminalWealthProblem\], and for the proofs associated with Section \[sec:OptimalMerging\] on merging both individual solutions, see \[app:ProofsOptimalMerging\]. The financial market model and consumption-investment problem {#sec:FinancialMarketModel} ============================================================= We consider a frictionless financial market $M$ which consists of $N+1$ continuously traded assets, one risk-free asset and $N$ risky assets. Let $[0,T]$ represent the fixed and finite investment horizon. Uncertainty in the continuous-time financial market is modeled by a complete, filtered probability space $(\Omega, \CMcal{F}, \left(\CMcal{F}_{t}\right)_{t \in [0,T]}, {\mathbb{P}})$, where $\Omega$ is the sample space, ${\mathbb{P}}$ the real-world probability measure, $\CMcal{F}_{t}$ is the natural filtration generated by $W(s)$, ${0 \le s \le t}$, augmented by all the null sets, and ${W = \left(W(t)\right)_{t \in [0,T]}}$, $W(t) = (W_{1}(t), \hdots, W_{N}(t))'$, $N \in \mathbb{N}$, is a standard $N$-dimensional Brownian motion. The price of the risk-free asset at time $t$ is denoted by $P_{0}(t)$ and is subject to the equation $$\begin{aligned} dP_{0}(t) = r P_{0}(t)dt,\ P_{0}(0) = 1,\end{aligned}$$ with constant risk-free interest rate $r > 0$. The remaining $N$ assets in the market are risky assets with price $P_{i}(t)$, $i = 1, \hdots, N$, at time $t$ subject to the stochastic differential equations $$\begin{aligned} dP_{i}(t) = {} & P_{i}(t) \left(\mu_{i} dt + \sigma_{i} dW(t)\right) = P_{i}(t) \left(\mu_{i} dt + \sum_{j = 1}^{N} \sigma_{ij} dW_{j}(t)\right),\ P_{i}(0) = p_{i} > 0,\end{aligned}$$ with constant drift ${\mu = \left(\mu_{1},\hdots,\mu_{N}\right)' \in \mathbb{R}_{+}^{N}}$, ${\mu - r \mathbf{1} > \mathbf{0}}$, and constant volatility vector $\sigma_{i} = (\sigma_{i1}, \hdots, \sigma_{iN}) \in \mathbb{R}_{+}^{1 \times N}$. The volatility matrix is defined by ${\sigma = \left(\sigma_{ij}\right)_{i,j = 1,\hdots,N}}$, the covariance matrix of the log-returns is ${\Sigma = \sigma \sigma'}$ which is assumed to be strongly positive definite, i.e. there exists $K > 0$ such that ${\mathbb{P}}$-a.s. it holds ${x' \Sigma x \ge K x' x}$, ${\forall x \in {\mathbb{R}}^{N}}$. Furthermore, let ${\gamma = \sigma^{-1} (\mu - r \mathbf{1})}$ denote the market price of risk. In this case of Black-Scholes market dynamics, according to [@KaratzasShreve1998], there exists a unique risk-neutral probability measure ${{\mathbb{Q}}\sim {\mathbb{P}}}$ defined by ${\frac{d{\mathbb{Q}}}{d{\mathbb{P}}} | _{\CMcal{F}_{t}} := e^{-\frac{1}{2} \|\gamma\|^{2} t - \gamma' W(t)}}$ and the market is complete (that allows to value payment streams under the measure ${\mathbb{Q}}$ as expected discounted values, meaning that the cost of a portfolio replicating the contract is given by its expected discounted value under ${\mathbb{Q}}$). The corresponding pricing kernel or state price deflator, denoted by $\tilde{Z}(t)$, is defined as $$\begin{aligned} \label{eq:PricingKernel} \tilde{Z}(t) := e^{- \left(r + \frac{1}{2}\|\gamma\|^{2}\right) t - \gamma' W(t)}\end{aligned}$$ and can be used for the valuation of payment streams under the real-world probability measure. 
Its dynamics are subject to the stochastic differential equation $$\begin{aligned} d \tilde{Z}(t) = - \tilde{Z}(t) \left(r dt + \gamma' dW(t)\right),\ \tilde{Z}(0) = 1.\end{aligned}$$ We consider $\CMcal{F}_{t}$-progressively measurable trading strategies $\varphi = (\varphi_{0}, \hat{\varphi})'$, ${\hat{\varphi} = (\varphi_{1}, \hdots, \varphi_{N})'}$, such that ${\mathbb{P}}$-a.s. it holds $\int_{0}^{T} |\varphi_{0}(t)| dt < \infty$ and $\int_{0}^{T} \varphi_{i}(t)^{2} dt < \infty$. $\varphi_{i}(t)$ represents the number of individual shares of asset $i$ held by the investor at time $t$. The corresponding relative portfolio process is denoted by ${\pi = (\pi_{0}, \hat{\pi}')'}$ with risky relative investment ${\hat{\pi} = (\pi_{1}, \hdots, \pi_{N})'}$ and risk-free relative investment ${\pi_{0}(t) = 1-\hat{\pi}(t)'\mathbf{1}}$, where $\pi_{i}(t)$ denotes the fraction of wealth allocated to asset $i$ at time $t$. It is to satisfy $\int_{0}^{T} \pi_{i}(t)^{2} dt < \infty$, ${\mathbb{P}}$-a.s.. Moreover, let $(c(t))_{t \in [0,T]}$ denote a non-negative, progressively measurable, real-valued stochastic consumption rate process with $\int_{0}^{T} c(t) dt < \infty$, ${\mathbb{P}}$-a.s., and $(y(t))_{t \in [0,T]}$ a non-negative, deterministic income-rate process with $\int_{0}^{T} y(t) dt < \infty$. Those technical conditions are assumed to get a solution for the subsequently formulated stochastic problem. The dynamics of the investor’s wealth process $V = \left(V(t)\right)_{t \in [0,T]}$ under the strategy $(\pi,c)$ to initial wealth $V(0) = v_{0} > 0$, including liquid assets, consumption and income, is then given by $$\begin{aligned} \label{eq:SDE:V:y} dV(t) = {} & V(t) \left[\left(r + \hat{\pi}(t)' \left(\mu - r \mathbf{1}\right)\right) dt + \hat{\pi}(t)'\sigma dW(t)\right] - c(t) dt + y(t) dt.\end{aligned}$$ The relative investment in the risk-free asset is ${\pi_{0}(t) = 1 - \hat{\pi}(t)' \mathbf{1}}$. We consider the objective of maximizing expected utility of future terminal wealth and consumption, starting at time $0$ and ending at $T$. Hence the objective function to be maximized is $$\begin{aligned} \label{eq:ObjectiveFunction} J(\pi,c;v_{0}) = {\mathbb{E}}\left[\int_{0}^{T} U_{1}(t,c(t)) dt + U_{2}(V(T))\right],\end{aligned}$$ where $v_{0} > 0$ denotes the initial endowment of the investor. All expectations in this paper are with respect to the real-world measure ${\mathbb{P}}$. The general portfolio optimization problem with initial wealth ${V(0) = v_{0} > 0}$ to be solved is then given by $$\begin{aligned} \label{eq:OptimizationProblem} \mathcal{V}(v_{0}) = \sup_{(\pi,c) \in \Lambda} J(\pi,c;v_{0})\end{aligned}$$ subject to . $\mathcal{V}(v_{0})$ is the value function of the problem. $\Lambda$ denotes the set of admissible strategies $(\pi,c)$ such that ${V(t) + \int_{t}^{T} e^{- r (s-t)} y(s) ds \ge 0}$, ${\mathbb{P}}$-a.s., $\forall t \in [0,T]$, and which admit a unique solution to while satisfying the integrability condition ${{\mathbb{E}}\left[\int_{0}^{T} |U_{1}(t,c(t))| dt\right] < \infty}$. The so-called budget constraint reads $$\begin{aligned} \label{eq:BudgetConstraint:y} {\mathbb{E}}\left[\int_{0}^{T} \tilde{Z}(t) c(t) dt + \tilde{Z}(T) V(T)\right] \le v_{0} + {\mathbb{E}}\left[\int_{0}^{T} \tilde{Z}(t) y(t) dt\right] = v_{0} + \int_{0}^{T} e^{-r t} y(t) dt.\end{aligned}$$ It describes the requirement that today’s value of future consumption and terminal wealth, less income, must not exceed the initial endowment. 
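As a purely illustrative companion to the market model and wealth equation above (not part of the paper), the following Python sketch simulates one path of a single risky asset ($N = 1$), the pricing kernel $\tilde{Z}(t)$ and the Euler-discretized wealth process under a constant relative allocation with deterministic consumption and income rates; all numerical values are assumptions, and the budget constraint is only evaluated along this one path, whereas the constraint itself holds in expectation and would require a Monte Carlo average over many paths.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper), N = 1 risky asset
r, mu, sigma = 0.02, 0.06, 0.20        # risk-free rate, drift, volatility
gamma = (mu - r) / sigma               # market price of risk
T, n = 40.0, 40 * 252                  # horizon in years, number of Euler steps
dt = T / n
rng = np.random.default_rng(0)

pi_hat = 0.5                           # constant fraction of wealth in the risky asset
c = lambda t: 20.0                     # consumption rate per year (assumption)
y = lambda t: 30.0                     # deterministic income rate per year (assumption)

t_grid = np.linspace(0.0, T, n + 1)
W = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))

P = np.exp((mu - 0.5 * sigma**2) * t_grid + sigma * W)    # risky asset price, P(0) = 1
Z = np.exp(-(r + 0.5 * gamma**2) * t_grid - gamma * W)    # pricing kernel Z~(t)

V = np.empty(n + 1)
V[0] = 100.0                                              # initial wealth v_0
for k in range(n):
    dW = W[k + 1] - W[k]
    # Euler step of dV = V[(r + pi'(mu - r)) dt + pi' sigma dW] - c dt + y dt
    V[k + 1] = (V[k]
                + V[k] * ((r + pi_hat * (mu - r)) * dt + pi_hat * sigma * dW)
                - c(t_grid[k]) * dt + y(t_grid[k]) * dt)

# Both sides of the budget constraint along this single path; an actual check of
# the constraint requires the expectation, i.e. an average over many such paths.
lhs = np.sum(Z[:-1] * np.array([c(t) for t in t_grid[:-1]])) * dt + Z[-1] * V[-1]
rhs = V[0] + np.sum(np.exp(-r * t_grid[:-1]) * np.array([y(t) for t in t_grid[:-1]])) * dt
print(f"P(T) = {P[-1]:.2f}, V(T) = {V[-1]:.2f}, lhs = {lhs:.2f}, rhs = {rhs:.2f}")
```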
It can be shown that for the optimal $(\hat{\pi}^{\star}, c^{\star})$ to Problem , Equation holds with equality. We consider a preference utility model given by the utility functions $$\begin{aligned} \begin{split} \label{eq:utilitymodel:new} U_{1}(t,c) = {} & \left(e^{- \beta t} a(t)\right) \frac{1-b(t)}{b(t)} \left(\frac{1}{1-b(t)} \left(c - \bar{c}(t)\right)\right)^{b(t)}, \\ U_{2}(v) = {} & e^{- \beta T} \hat{a} \frac{1-\hat{b}}{\hat{b}} \left(\frac{1}{1-\hat{b}} (v-F)\right)^{\hat{b}}, \end{split}\end{aligned}$$ for $\beta \ge 0$, $b : [0,T] \to (- \infty, 1)\backslash \{0\}$ continuous, $\hat{b} < 1$, $\hat{b} \neq 0$, $a(t) > 0$, $\hat{a} > 0$, $c(t) > \bar{c}(t)$, $\bar{c}(t) \ge 0$ deterministic, and $v > F$ with $F \ge 0$. $U_{2}$ is a continuously differentiable and strictly concave terminal utility function, $U_{1}$ denotes a continuous (intertemporal consumption) utility function which is continuously differentiable and strictly concave in the second argument. This utility model accounts for several desired aspects: minimum liquid asset wealth level $F \ge 0$ at the age of retirement $T$, minimum consumption rate $\bar{c}(t) \ge 0$ and time-varying preference of consumption over terminal wealth in terms of $a(t)$. Moreover, the coefficient of risk aversion $b(t)$ in the consumption utility is now a continuous function in time. Notice that the associated *Arrow-Pratt measure* ${\mathcal{A}(v) := - \frac{U''(v)}{U'(v)} = - \frac{\partial}{\partial v} \ln U'(v)}$ of absolute risk aversion, developed by [@Pratt1964] and [@Arrow1970], admits the following hyperbolic representation $$\begin{aligned} \mathcal{A}_{1}(t,c) = \frac{1 - b(t)}{c - \bar{c}(t)},\ \mathcal{A}_{2}(v) = \frac{1 - \hat{b}}{v - F}.\end{aligned}$$ For this reason, we use the notation of an increasing $b(t)$ as a synonym for a decreasing coefficient of risk aversion and vice versa. Further note that $a(t)$ does not appear in $\mathcal{A}_{1}(t,c)$. Therefore we have two input functions $a(t)$ and $b(t)$ where $a(t)$ has no influence on risk aversion, but $b(t)$ determines it; hence a very flexible model. Since we have $c(t) > \bar{c}(t)$ and $V(T) > F$ by definition of the utility functions in , we restrict $$\begin{aligned} \label{eq:Condition:v0:minimalrequirement} v_{0} > \int_{0}^{T} e^{- r s} \left(\bar{c}(s) - y(s)\right) ds + e^{- r T} F =: F(0)\end{aligned}$$ on the initial endowment in . It is useful to define $$\begin{aligned} F(t) = {} & {\mathbb{E}}\left[\int_{t}^{T} \frac{\tilde{Z}(s)}{\tilde{Z}(t)} \bar{c}(s) ds + \frac{\tilde{Z}(T)}{\tilde{Z}(t)} F - \int_{t}^{T} \frac{\tilde{Z}(s)}{\tilde{Z}(t)} y(s) ds \Big| \mathcal{F}_{t}\right] \nonumber \\ = {} & \int_{t}^{T} {\mathbb{E}}\left[\frac{\tilde{Z}(s)}{\tilde{Z}(t)} \Big| \mathcal{F}_{t}\right] \left(\bar{c}(s) - y(s)\right) ds + F {\mathbb{E}}\left[\frac{\tilde{Z}(T)}{\tilde{Z}(t)} \Big| \mathcal{F}_{t}\right] = \int_{t}^{T} e^{- r (s-t)} \left(\bar{c}(s) - y(s)\right) ds + e^{- r (T-t)} F. \label{eq:def:F(t)}\end{aligned}$$ $F(t)$ can be interpreted as the time $t$ value of all future minimal liabilities less income. $F(t)$ equals the sum of the time $t$ wealth necessarily required to meet all the future minimum living expenses and expenditures $\bar{c}(s)$, ${s \in [t,T]}$ during the remaining time and the time $t$ value of the minimum desired terminal wealth level $F$; future salary income is subtracted as it reduces the time $t$ value of the minimum required capital. 
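The utility model and the floor process $F(t)$ translate directly into code. The sketch below (ours; the specific choices of $a(t)$, $b(t)$, $\bar{c}(t)$, $y(t)$, $\hat{a}$, $\hat{b}$, $F$ and the remaining parameters are hypothetical placeholders) implements $U_{1}$, $U_{2}$, the Arrow-Pratt coefficient $\mathcal{A}_{1}$ and $F(t)$ via numerical quadrature.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical parameter choices for illustration (not the paper's calibration)
r, beta, T = 0.02, 0.03, 40.0
F_T = 100.0                                   # terminal wealth floor F
a_hat, b_hat = 1.0, -2.0                      # terminal utility parameters
a = lambda t: np.exp(-0.01 * t)               # weight of consumption utility, a(t) > 0
b = lambda t: -2.0 + 0.02 * t                 # risk-aversion parameter, b(t) < 1 and b(t) != 0
c_bar = lambda t: 15.0 + 0.2 * t              # consumption floor c_bar(t)
y = lambda t: 30.0                            # deterministic income rate y(t)

def U1(t, c):
    """Intertemporal HARA consumption utility U_1(t, c) for c > c_bar(t)."""
    bt = b(t)
    return np.exp(-beta * t) * a(t) * (1 - bt) / bt * ((c - c_bar(t)) / (1 - bt)) ** bt

def U2(v):
    """Terminal HARA utility U_2(v) for v > F."""
    return np.exp(-beta * T) * a_hat * (1 - b_hat) / b_hat * ((v - F_T) / (1 - b_hat)) ** b_hat

def arrow_pratt_consumption(t, c):
    """A_1(t, c) = (1 - b(t)) / (c - c_bar(t))."""
    return (1 - b(t)) / (c - c_bar(t))

def floor_F(t):
    """F(t) = int_t^T e^{-r(s-t)} (c_bar(s) - y(s)) ds + e^{-r(T-t)} F."""
    integral, _ = quad(lambda s: np.exp(-r * (s - t)) * (c_bar(s) - y(s)), t, T)
    return integral + np.exp(-r * (T - t)) * F_T

print(U1(10.0, 40.0), U2(150.0), arrow_pratt_consumption(10.0, 40.0), floor_F(0.0))
```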
Solution: Separation technique {#sec:SeparationTechnique} ============================== In the sequel we follow the separation technique approach by [@KaratzasShreve1998] and [@LaknerNygren2006] for solving the consumption-terminal wealth problem as defined by . We split the problem into two sub-problems: the consumption-only and terminal wealth-only problem. Both individual problems are separately solved via the martingale method, similar to the approach by [@Aase2017]. The individual problem solutions are optimally merged at the end. For this sake, let us consider the two individual problems first. The consumption problem {#sec:ConsumptionProblem:y} ----------------------- The consumption-only problem is $$\begin{aligned} \begin{split} \label{eq:OptimizationProblem:ConsumptionOnly} J_{1}(\pi,c;v_{1}) = {} & {\mathbb{E}}\left[\int_{0}^{T} U_{1}(t,c(t)) dt\right], \\ \mathcal{V}_{1}(v_{1}) = {} & \sup_{(\pi,c) \in \Lambda_{1}} J_{1}(\pi,c;v_{1}) \end{split}\end{aligned}$$ subject to the budget constraint $$\begin{aligned} \label{eq:BudgetConstraint:ConsumptionOnly:y} {\mathbb{E}}\left[\int_{0}^{T} \tilde{Z}(t) c(t) dt\right] \le v_{1} + {\mathbb{E}}\left[\int_{0}^{T} \tilde{Z}(t) y(t) dt\right] = v_{1} + \int_{0}^{T} e^{-r t} y(t) dt.\end{aligned}$$ $\Lambda_{1}$ denotes the set of admissible strategies $(\pi,c)$ such that ${V(t) + \int_{t}^{T} e^{- r (s-t)} y(s) ds \ge 0}$, ${\mathbb{P}}$-a.s., $\forall t \in [0,T]$, and which admit a unique solution to while satisfying ${{\mathbb{E}}\left[\int_{0}^{T} |U_{1}(t,c(t))| dt\right] < \infty}$. [@Steffensen2011] provides a proof for CRRA utility functions by solving the associated Hamilton-Jacobi-Bellman (HJB) equation. We follow the approach by [@Aase2017], likewise for a HARA utility function. We extend the findings of [@Aase2017] by introducing a time-varying, deterministic consumption floor $\bar{c}(t)$, a time-varying preference function $a(t)$ of consumption over terminal wealth and an income-rate process $y(t)$. In order to guarantee the consumption rate floor, note $c(t) > \bar{c}(t)$, let us assume the following lower boundary for $v_{1}$ which equals the integral over the discounted consumption floor rate minus income rate over the whole horizon of interest: $$\begin{aligned} \begin{split} \label{eq:Condition:v1_&_eq:def:F1(t)} v_{1} > {} & \int_{0}^{T} e^{- r s} \left(\bar{c}(s) - y(s)\right) ds =: F_{1}(0), \\ F_{1}(t) := {} & {\mathbb{E}}\left[\int_{t}^{T} \frac{\tilde{Z}(s)}{\tilde{Z}(t)} \left(\bar{c}(s) - y(s)\right) ds \Big| \mathcal{F}_{t}\right] = \int_{t}^{T} e^{- r (s-t)} \left(\bar{c}(s) - y(s)\right) ds. \end{split}\end{aligned}$$ Notice that ${v_{1} < 0}$ is possible since a sufficiently large positive income stream can be high enough to finance consumption. Using the martingale method we solve the problem as summarized by the theorem below. 
\[thm:Solution:ConsumptionOnly\] The solution to the optimal stochastic control problem with intertemporal utility function $U_{1}$ in is $$\begin{aligned} \hat{\pi}_{1}(t ; v_{1}) = {} & \frac{1}{1 - b(\tilde{t}_{1})} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V_{1}(t ; v_{1}) - F_{1}(t)}{V_{1}(t ; v_{1})}, \\ c_{1}(t;v_{1}) = {} & g(t,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} + \bar{c}(t) = (1-b(t)) \left(\lambda_{1} \frac{e^{\beta t}}{a(t)} \tilde{Z}(t)\right)^{\frac{1}{b(t)-1}} + \bar{c}(t), \\ V_{1}(t ; v_{1}) = {} & \int_{t}^{T} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds + F_{1}(t), \\ V_{1}(T ; v_{1}) = {} & 0,\end{aligned}$$ for all $t \in [0,T]$, where $$\begin{aligned} g(s,t; v_{1}) = (1-b(s)) \left(\frac{e^{\beta s - b(s) \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) (s-t)}}{a(s)}\right)^{\frac{1}{b(s)-1}} \lambda_{1}^{\frac{1}{b(s)-1}}.\end{aligned}$$ $\lambda_{1} = \lambda_{1}(v_{1}) > 0$ satisfies the budget constraint uniquely and is subject to the equation $$\begin{aligned} \label{eq:ConsumptionOnly:lambda} \int_{0}^{T} g(t,0; v_{1}) dt = v_{1} - F_{1}(0).\end{aligned}$$ $\tilde{t}_{1} = \tilde{t}_{1}(v_{1}) \in (t,T)$ is the solution to the equation $$\begin{aligned} \label{eq:ConsumptionOnly:tautilde} \int_{t}^{T} \frac{1}{b(s)-1} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds = \frac{1}{b(\tilde{t}_{1})-1} \int_{t}^{T} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds.\end{aligned}$$ For the optimal $c_{1}(t;v_{1})$, Equation is fulfilled with equality. We remind the reader that all proofs can be found in \[app:Proofs\]. It is clear that ${c_{1}(t;v_{1}) > \bar{c}(t)}$, a.s.. We now aim to interpret the optimal investment strategy as proportional portfolio insurance (PPI) strategy. The first strategy family corresponds to a constant multiple, the latter one is more general and also covers proportional strategies with time-varying or even state-dependent multiples. [@ZielingMahayniBalder2014] evaluate the performance of such strategies. Theorem \[thm:Solution:ConsumptionOnly\] shows that the optimal investment strategy generally is a PPI strategy with time-varying floor $F_{1}(t)$ at time $t$, equal to the time $t$ value of the accumulated outstanding future consumption floor minus income. Notice that $\tilde{t}_{1}$ can firstly be determined at time $t$, since the value depends on the stochastic $\tilde{Z}(t)$ which is not known before time $t$. Hence, $\tilde{t}_{1}$ is time- and also state-dependent and thus the optimal PPI strategy itself is time- and state-dependent through its PPI multiple. The PPI multiple in summary is time-varying, state-dependent and depends on all future coefficients of risk aversion via $b(\tilde{t}_{1})$. Furthermore, ${V_{1}(0 ; v_{1}) > F_{1}(0)}$ holds by the assumption in . In addition, $\hat{\pi}_{1}(t ; v_{1})$ converges to $0$ when ${V_{1}(t ; v_{1})}$ approaches $F_{1}(t)$. Thus, ${V_{1}(t ; v_{1}) > F_{1}(t)}$ a.s., which additionally follows directly from the formula for $V_{1}(t ; v_{1})$ in Theorem \[thm:Solution:ConsumptionOnly\]. This further implies that $(\hat{\pi}_{1},c_{1})$ is an admissible pair, i.e. ${(\hat{\pi}_{1},c_{1}) \in \Lambda_{1}}$. The next remark provides the solution under time-independent risk aversion. When $b(t) \equiv b$, then $$\begin{aligned} \hat{\pi}_{1}(t ; v_{1}) = \frac{1}{1 - b} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V_{1}(t ; v_{1}) - F_{1}(t)}{V_{1}(t ; v_{1})}\end{aligned}$$ which is a conventional CPPI strategy with constant multiple. 
Moreover, if $\bar{c}(t) - y(t) \equiv 0$, i.e. the minimum consumption is eating up the whole income, then $$\begin{aligned} \hat{\pi}_{1}(t ; v_{1}) = \frac{1}{1 - b} \Sigma^{-1} (\mu - r \mathbf{1}),\end{aligned}$$ which is a constant mix strategy and represents the standard, well-known result for CRRA utility with constant risk aversion parameter. Some comments on the initial capital $v_{1}$ and the sign of the risky investments come next. As already pointed out, a start with a negative initial capital ${V_{1}(0 ; v_{1}) = v_{1} < 0}$ to Problem is possible and might be reasonable in a sense that accumulated income over the life-cycle is expected to exceed total consumption. Hence, there is no need to require positive capital to this problem. For this reason, ${V_{1}(t ; v_{1}) < 0}$ can happen and might be reasonable, too. Theorem \[thm:Solution:ConsumptionOnly\] tells that the optimal relative investment strategy is given by $$\begin{aligned} \hat{\pi}_{1}(t ; v_{1}) = {} & \frac{1}{1 - b(\tilde{t}_{1})} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V_{1}(t ; v_{1}) - F_{1}(t)}{V_{1}(t ; v_{1})},\end{aligned}$$ where ${V_{1}(t ; v_{1}) > F_{1}(t)}$ a.s.. Let ${\left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0}$ for ${i \in \left\{1, \hdots, N\right\}}$, which for instance is the case when there is only one risky asset ($N = 1$) because then ${\Sigma^{-1} \left(\mu - r\right) = \frac{\mu - r}{\sigma^{2}} > 0}$ since $\mu - r \mathbf{1} > 0$ was assumed. Then $$\begin{aligned} & \left(\hat{\pi}_{1}(t ; v_{1})\right)_{i} > 0\ \Leftrightarrow\ V_{1}(t ; v_{1}) > 0, \\ & \left(\hat{\pi}_{1}(t ; v_{1})\right)_{i} < 0\ \Leftrightarrow\ V_{1}(t ; v_{1}) < 0.\end{aligned}$$ Even if the first part of the remark argues that ${V_{1}(t ; v_{1}) < 0}$ is a meaningful case, the conclusion ${\left(\hat{\pi}_{1}(t ; v_{1})\right)_{i} < 0}$ under ${\left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0}$ sounds odd at a first glance. But when looking at the optimal exposure to risky asset $i$, one finds that $$\begin{aligned} \left(\hat{\pi}_{1}(t ; v_{1}) V_{1}(t ; v_{1})\right)_{i} = {} & \frac{1}{1 - b(\tilde{t}_{1})} \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} \left(V_{1}(t ; v_{1}) - F_{1}(t)\right),\end{aligned}$$ which, under ${\left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0}$, is positive no matter if ${V_{1}(t ; v_{1}) < 0}$ or ${V_{1}(t ; v_{1}) > 0}$. Therefore, the amount of money invested in the risky assets is always positive. The opposite inequalities and conclusions for $\left(\hat{\pi}_{1}(t ; v_{1})\right)_{i}$ and $\left(\hat{\pi}_{1}(t ; v_{1}) V_{1}(t ; v_{1})\right)_{i}$ apply if ${\left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} < 0}$. In summary, the sign of the optimal exposure to the single risky assets is determined by $$\begin{aligned} \left(\hat{\pi}_{1}(t ; v_{1}) V_{1}(t ; v_{1})\right)_{i} > 0\ \Leftrightarrow\ \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0.\end{aligned}$$ Thus, ${\left(\hat{\pi}_{1}(t ; v_{1}) V_{1}(t ; v_{1})\right)_{i} > 0}$ is possible although it might be ${\left(\hat{\pi}_{1}(t ; v_{1})\right)_{i} < 0}$. Finally, let ${\left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0}$ for all ${i \in \left\{1, \hdots, N\right\}}$. 
When ${V_{1}(t ; v_{1}) < 0}$, the optimal exposure to the risk-free asset is negative because $$\begin{aligned} \underbrace{V_{1}(t ; v_{1})}_{< 0} \left(1 - \underbrace{\hat{\pi}_{1}(t ; v_{1})' \mathbf{1}}_{< 0}\right) < V_{1}(t ; v_{1}) < 0.\end{aligned}$$ This in turn implies that in case of ${V_{1}(t ; v_{1}) < 0}$, the investor takes leverage by borrowing from the risk-free account to achieve her investment goals. Leverage at this point can make sense as future income provides some security; note that ${V_{1}(t ; v_{1}) < 0}$ immediately implies that the time $t$ value of accumulated future income exceeds the expected value of consumption. Some more properties of $\hat{\pi}_{1}(t ; v_{1})$ can be found analytically as follows. The first and second derivative of $\left(\hat{\pi}_{1}(t ; v_{1})\right)_{i}$, $i = 1, \hdots, N$, with respect to wealth $V_{1}(t ; v_{1})$ are $$\begin{aligned} \frac{\partial}{\partial V_{1}(t ; v_{1})} \left(\hat{\pi}_{1}(t ; v_{1})\right)_{i} = {} & \frac{1}{1 - b(\tilde{t}_{1})} \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} \frac{F_{1}(t)}{V_{1}(t ; v_{1})^{2}}, \\ \frac{\partial^{2}}{\partial V_{1}(t ; v_{1})^{2}} \left(\hat{\pi}_{1}(t ; v_{1})\right)_{i} = {} & - 2 \frac{1}{1 - b(\tilde{t}_{1})} \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} \frac{F_{1}(t)}{V_{1}(t ; v_{1})^{3}}.\end{aligned}$$ Let ${\left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0}$ for ${i \in \left\{1, \hdots, N\right\}}$, then 1. ${\frac{\partial}{\partial V_{1}(t ; v_{1})} \left(\hat{\pi}_{1}(t ; v_{1})\right)_{i} \stackrel{(>)}{\ge} 0\ \Leftrightarrow\ F_{1}(t) \stackrel{(>)}{\ge} 0}$. 2. ${\frac{\partial^{2}}{\partial V_{1}(t ; v_{1})^{2}} \left(\hat{\pi}_{1}(t ; v_{1})\right)_{i} \stackrel{(<)}{\le} 0\ \Leftrightarrow}$ either ${F_{1}(t) \stackrel{(>)}{\ge} 0}$ and ${V_{1}(t ; v_{1}) > 0}$ or ${F_{1}(t) \stackrel{(<)}{\le} 0}$ and ${V_{1}(t ; v_{1}) < 0}$. This implies that at time $t$: 1. $\left(\hat{\pi}_{1}(t ; v_{1})\right)_{i}$ is increasing in $V_{1}(t ; v_{1})$ if and only if ${F_{1}(t) \ge 0}$, and decreasing in $V_{1}(t ; v_{1})$ otherwise. 2. $\left(\hat{\pi}_{1}(t ; v_{1})\right)_{i}$ is concave in $V_{1}(t ; v_{1})$ if and only if 1. either ${F_{1}(t) \ge 0}$ and ${V_{1}(t ; v_{1}) > 0}$ 2. or ${F_{1}(t) \le 0}$ and ${V_{1}(t ; v_{1}) < 0}$, and convex in $V_{1}(t ; v_{1})$ otherwise. The opposite inequalities and conclusions for $\left(\hat{\pi}_{1}(t ; v_{1})\right)_{i}$ and its derivatives apply if ${\left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} < 0}$. The optimal controls in Theorem \[thm:Solution:ConsumptionOnly\] determine the value function and the value for $\lambda_{1}$ as follows. \[thm:Solution:ConsumptionOnly:ValueFunction:lambda\] The optimal value function $\mathcal{V}_{1}(v_{1})$ to Problem is strictly increasing and concave in $v_{1}$. 
Its value and first and second derivative with respect to the initial budget $v_{1}$ are given by $$\begin{aligned} \mathcal{V}_{1}(v_{1}) = {} & \int_{0}^{T} \frac{1-b(t)}{b(t)} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{\frac{b(t)}{b(t)-1}} dt, \\ \mathcal{V}_{1}^{\prime}(v_{1}) = {} & \lambda_{1} > 0, \\ \mathcal{V}_{1}^{\prime\prime}(v_{1}) = {} & \lambda_{1}^{\prime} = - \left(\int_{0}^{T} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{- \frac{b(t)-2}{b(t)-1}} dt\right)^{-1} < 0.\end{aligned}$$ The terminal wealth problem {#sec:TerminalWealthProblem} --------------------------- The terminal wealth-only problem is $$\begin{aligned} \begin{split} \label{eq:OptimizationProblem:TerminalWealthOnly} J_{2}(\pi,c;v_{2}) = {} & {\mathbb{E}}\left[U_{2}(V(T))\right], \\ \mathcal{V}_{2}(v_{2}) = {} & \sup_{(\pi,c) \in \Lambda_{2}} J_{2}(\pi,c;v_{2}) \end{split}\end{aligned}$$ subject to the budget constraint $$\begin{aligned} \label{eq:BudgetConstraint:TerminalWealthOnly} {\mathbb{E}}\left[\tilde{Z}(T) V(T)\right] \le v_{2},\ v_{2} \ge 0.\end{aligned}$$ $\Lambda_{2}$ denotes the set of admissible strategies $(\pi,c)$ such that ${V(t) \ge 0}$, ${\mathbb{P}}$-a.s., $\forall t \in [0,T]$, and which admit a unique solution to for ${y(t) \equiv 0}$. In order to guarantee the terminal wealth floor, note $V(T) > F$, let us assume the following lower bound for $v_{2}$ which equals the discounted terminal floor: $$\begin{aligned} v_{2} > {} & e^{- r T} F =: F_{2}(0),\ F_{2}(t) := {\mathbb{E}}\left[\frac{\tilde{Z}(T)}{\tilde{Z}(t)} F \Big| \mathcal{F}_{t}\right] = e^{- r (T-t)} F \ge 0. \label{eq:Condition:v2:WithoutProbabilityConstraint_&_eq:def:F2(t)}\end{aligned}$$ Applying the martingale approach leads to the solution to the terminal wealth problem according to the upcoming theorem. \[thm:Solution:TerminalWealthOnly:WithoutProbabilityConstraint\] The solution to Problem with terminal utility function $U_{2}$ in is $$\begin{aligned} \hat{\pi}_{2}(t ; v_{2}) = {} & \frac{1}{1-\hat{b}} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V_{2}(t ; v_{2}) - F_{2}(t)}{V_{2}(t ; v_{2})}, \\ c_{2}(t;v_{2}) = {} & 0, \\ V_{2}(t ; v_{2}) = {} & \left(v_{2} - e^{- r T} F\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} + F_{2}(t), \\ V_{2}(T ; v_{2}) = {} & \left(v_{2} - e^{- r T} F\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} \tilde{Z}(T)^{\frac{1}{\hat{b}-1}} + F,\end{aligned}$$ for all $t \in [0,T]$. For the optimal $\hat{\pi}_{2}(t ; v_{2})$, Equation is fulfilled with equality. Theorem \[thm:Solution:TerminalWealthOnly:WithoutProbabilityConstraint\] shows that the optimal fraction of wealth allocated to the risky assets follows a CPPI strategy with floor $F_{2}(t) \ge 0$ at time $t$, with constant multiple. Moreover, ${V_{2}(0 ; v_{2}) > F_{2}(0) = e^{- r T} F}$ by the assumption in . In addition, $\hat{\pi}_{2}(t ; v_{2})$ converges to $0$ when ${V_{2}(t ; v_{1})}$ approaches $F_{2}(t)$. Thus, it follows ${V_{2}(t ; v_{2}) > F_{2}(t)}$ a.s., which additionally yields that $(\hat{\pi}_{2},0)$ is an admissible pair, i.e. ${(\hat{\pi}_{2},0) \in \Lambda_{2}}$. The characteristics ${V_{2}(t ; v_{2}) > F_{2}(t)}$ a.s. 
also directly follows from the formula for ${V_{2}(t ; v_{2})}$ in Theorem \[thm:Solution:TerminalWealthOnly:WithoutProbabilityConstraint\]. The next remark shows that the optimal proportion allocated to the risky assets is constant over time if one disregards the floor $F$. When $F = 0$, then $$\begin{aligned} \hat{\pi}_{2}(t ; v_{2}) = \frac{1}{1 - \hat{b}} \Sigma^{-1} (\mu - r \mathbf{1})\end{aligned}$$ which is a constant mix strategy and equals the standard result for CRRA utility with constant risk aversion parameter, where the optimal fraction of wealth allocated to the single risky assets does not depend on time or wealth. In what follows we analyze some characteristics of the optimal strategy $\hat{\pi}_{2}(t ; v_{2})$. The first and second derivative of $\left(\hat{\pi}_{2}(t ; v_{2})\right)_{i}$, $i = 1, \hdots, N$, with respect to wealth $V_{2}(t ; v_{2})$ are $$\begin{aligned} \frac{\partial}{\partial V_{2}(t ; v_{2})} \left(\hat{\pi}_{2}(t ; v_{2})\right)_{i} = {} & \frac{1}{1 - \hat{b}} \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} \frac{F_{2}(t)}{V_{2}(t ; v_{2})^{2}}, \\ \frac{\partial^{2}}{\partial V_{2}(t ; v_{2})^{2}} \left(\hat{\pi}_{2}(t ; v_{2})\right)_{i} = {} & - 2 \frac{1}{1 - \hat{b}} \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} \frac{F_{2}(t)}{V_{2}(t ; v_{2})^{3}}.\end{aligned}$$ Let ${\left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0}$ for ${i \in \left\{1, \hdots, N\right\}}$. Then ${\frac{\partial}{\partial V_{2}(t ; v_{2})} \left(\hat{\pi}_{2}(t ; v_{2})\right)_{i} \ge 0}$ and ${\frac{\partial^{2}}{\partial V_{2}(t ; v_{2})^{2}} \left(\hat{\pi}_{2}(t ; v_{2})\right)_{i} \le 0}$, where the inequalities hold strictly when $F > 0$. Hence, $\left(\hat{\pi}_{2}(t ; v_{2})\right)_{i}$ increases and is concave in the wealth $V_{2}(t ; v_{2})$. Otherwise, if ${\left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} < 0}$ for ${i \in \left\{1, \hdots, N\right\}}$, then $\left(\hat{\pi}_{2}(t ; v_{2})\right)_{i}$ decreases and is convex in the wealth $V_{2}(t ; v_{2})$. For the optimal exposure to the risky assets it therefore holds $$\begin{aligned} \left(\hat{\pi}_{2}(t ; v_{2}) V_{2}(t ; v_{2})\right)_{i} > 0\ \Leftrightarrow\ \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0.\end{aligned}$$ Thus, either it is ${\left(\hat{\pi}_{2}(t ; v_{2})\right)_{i} > 0}$ and ${\left(\hat{\pi}_{2}(t ; v_{2}) V_{2}(t ; v_{2})\right)_{i} > 0}$ or ${\left(\hat{\pi}_{2}(t ; v_{2})\right)_{i} < 0}$ and ${\left(\hat{\pi}_{2}(t ; v_{2}) V_{2}(t ; v_{2})\right)_{i} < 0}$. The optimal controls in Theorem \[thm:Solution:TerminalWealthOnly:WithoutProbabilityConstraint\] determine the value function and the value for $\lambda_{2}$. \[thm:Solution:TerminalWealthOnly:ValueFunction:lambda\] The optimal value function $\mathcal{V}_{2}(v_{2})$ to Problem is strictly increasing and concave in $v_{2}$. 
Its value and first and second derivative with respect to the initial budget $v_{2}$ are given by $$\begin{aligned} \mathcal{V}_{2}(v_{2}) = {} & e^{\left[- \beta + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \frac{\left(1-\hat{b}\right)^{1-\hat{b}}}{\hat{b}} \hat{a} \left(v_{2} - F_{2}(0)\right)^{\hat{b}}, \\ \mathcal{V}_{2}^{\prime}(v_{2}) = {} & \lambda_{2} > 0, \\ \mathcal{V}_{2}^{\prime\prime}(v_{2}) = {} & \lambda_{2}^{\prime} = - e^{\left[- \beta + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \left(1-\hat{b}\right)^{2-\hat{b}} \hat{a} \left(v_{2} - F_{2}(0)\right)^{\hat{b}-2} < 0.\end{aligned}$$ The Lagrange multiplier is given by as $$\begin{aligned} \lambda_{2} = e^{-\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \left(1-\hat{b}\right)^{1-\hat{b}} \hat{a} \left(v_{2} - F_{2}(0)\right)^{\hat{b}-1} > 0.\end{aligned}$$ Optimal merging of the individual solutions {#sec:OptimalMerging} ------------------------------------------- Let $(\pi_{1}(t;v_{1}), c_{1}(t;v_{1}))$ denote the optimal controls to Problem with optimal wealth process $V_{1}(t;v_{1})$ to the initial wealth ${v_{1} \ge \int_{0}^{T} e^{- r t} \left(\bar{c}(t) - y(t)\right) dt = F_{1}(0)}$ and $(\pi_{2}(t;v_{2}), c_{2}(t;v_{2}))$ the optimal controls to Problem with optimal wealth process $V_{2}(t;v_{2})$ to the initial wealth ${v_{2} \ge e^{- r T} F = F_{2}(0)}$. Then merging the two solutions to solve Problem is based on the following theorem. \[thm:ConnectionValueFunctions\] The connection between the value functions is $$\begin{aligned} \mathcal{V}(v_{0}) = \sup_{v_{1} \ge F_{1}(0),\ v_{2} \ge F_{2}(0),\ v_{1} + v_{2} = v_{0}} \left\{\mathcal{V}_{1}(v_{1}) + \mathcal{V}_{2}(v_{2})\right\}.\end{aligned}$$ Notice that ${F(t) = F_{1}(t) + F_{2}(t)}$, hence ensures that ${v_{0} = v_{1} + v_{2} > F_{1}(0) + F_{2}(0)}$ is claimed. When discounted future income exceeds consumption over the considered period, i.e. when the initial budget to the consumption problem is negative (${v_{1} < 0}$), then ${v_{2} > v_{0}}$ and a higher amount of money $v_{2}$ is invested according to the terminal wealth problem at initial time as the initial endowment $v_{0}$ of the investor. Theorem \[thm:ConnectionValueFunctions\] shows that an optimal allocation to consumption and terminal wealth at $t = 0$ together with the solution to the two separate problems equals the solution to the original optimization problem. The optimal initial budgets are denoted by $v_{1}^{\star}$ and $v_{2}^{\star}$. The next lemma provides a condition for $v_{1}^{\star}$ and $v_{2}^{\star}$. \[lemma:EqualityConditionValueFunctions\] The optimal $v_{1}^{\star}$ solves $$\begin{aligned} \label{eq:OptimalMerging:ValueFunction} \mathcal{V}_{1}^{\prime}(v_{1}) - \mathcal{V}_{2}^{\prime}(v_{0} - v_{1}) = 0\end{aligned}$$ and is subject to ${F_{1}(0) \le v_{1}^{\star} \le v_{0} - F_{2}(0)}$. The optimal $v_{2}^{\star}$ is then given by ${v_{2}^{\star} = v_{0} - v_{1}^{\star}}$. Within our specified setup, we can address the condition in Lemma \[lemma:EqualityConditionValueFunctions\] in more detail, the result is provided next. \[lemma:EqualityConditionValueFunctions:Solution\] The optimal $v_{1}^{\star}$ to exists uniquely and satisfies the boundary condition $F_{1}(0) \le v_{1}^{\star} \le v_{0} - F_{2}(0)$. 
$v_{1}^{\star}$ is the solution to the equation $$\begin{aligned} \label{eq:Separation:Optimalv1} v_{1} - \int_{0}^{T} \chi(t) \left(v_{0} - v_{1} - F_{2}(0)\right)^{\frac{\hat{b}-1}{b(t)-1}} dt = F_{1}(0)\end{aligned}$$ with $$\begin{aligned} \label{eq:definition:chi} \chi(t) = (1-b(t)) \left(1-\hat{b}\right)^{\frac{1-\hat{b}}{b(t)-1}} \left(\frac{\hat{a}}{a(t)}\right)^{\frac{1}{b(t)-1}} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{e^{\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T}}\right)^{\frac{1}{b(t)-1}} > 0.\end{aligned}$$ The optimal $v_{2}^{\star}$ is given by ${v_{2}^{\star} = v_{0} - v_{1}^{\star}}$. Moreover, the optimal Lagrange multiplier $\lambda_{1}^{\star} = \lambda_{1}(v_{1}^{\star})$ is given by $$\begin{aligned} \lambda_{1}^{\star} = \left(1-\hat{b}\right)^{1-\hat{b}} \hat{a} e^{-\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \left(v_{0} - v_{1}^{\star} - F_{2}(0)\right)^{\hat{b}-1}.\end{aligned}$$ For general $a(t)$ and $b(t)$, $v_{1}^{*}$ as the unique solution to Equation can for instance be determined numerically. Denote by $v_{1}^{\star} \ge F_{1}(0),\ v_{2}^{\star} \ge F_{2}(0)$ with $v_{1}^{\star} + v_{2}^{\star} = v_{0}$ the optimal allocation of the initial wealth according to Lemma \[lemma:EqualityConditionValueFunctions:Solution\] in what follows and denote $\lambda_{1}^{\star} = \lambda_{1}(v_{1}^{\star})$ and $\tilde{t}_{1}^{\star} = \tilde{t}_{1}(v_{1}^{\star})$. We use the individual solutions to the two separate Problems and and merge both solutions optimally according to Lemma \[lemma:EqualityConditionValueFunctions:Solution\] to obtain the solution to the original Problem . \[thm:Solution:Merging:OriginalProblem\] The optimal wealth process is given by ${V^{\star}(t; v_{0}) = V_{1}(t;v_{1}^{\star}) + V_{2}(t;v_{2}^{\star})}$. 
The optimal controls to Problem are $$\begin{aligned} c^{\star}(t; v_{0}) = c_{1}(t;v_{1}^{\star}),\ \hat{\pi}^{\star}(t; v_{0}) = \frac{\hat{\pi}_{1}(t;v_{1}^{\star}) V_{1}(t;v_{1}^{\star}) + \hat{\pi}_{2}(t;v_{2}^{\star}) V_{2}(t;v_{2}^{\star})}{V_{1}(t;v_{1}^{\star}) + V_{2}(t;v_{2}^{\star})}.\end{aligned}$$ The optimal controls and the optimal wealth process to Problem under the utility function setup are given by $$\begin{aligned} \hat{\pi}^{\star}(t ; v_{0}) = {} & \Sigma^{-1} (\mu - r \mathbf{1}) \frac{\frac{1}{1 - b(\tilde{t}_{1}^{\star})} \left(V_{1}(t ; v_{1}^{\star}) - F_{1}(t)\right) + \frac{1}{1-\hat{b}} \left(V_{2}(t ; v_{2}^{\star}) - F_{2}(t)\right)}{V^{\star}(t ; v_{0})}, \\ c^{\star}(t;v_{0}) = {} & g(t,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} + \bar{c}(t) = (1-b(t)) \left(\lambda_{1}^{\star} \frac{e^{\beta t}}{a(t)} \tilde{Z}(t)\right)^{\frac{1}{b(t)-1}} + \bar{c}(t), \\ V^{\star}(t ; v_{0}) = {} & \int_{t}^{T} g(s,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds + \left(v_{2}^{\star} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} + F(t), \\ V^{\star}(T ; v_{0}) = {} & \left(v_{2}^{\star} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} \tilde{Z}(T)^{\frac{1}{\hat{b}-1}} + F, \\ V_{1}(t ; v_{1}^{\star}) = {} & \int_{t}^{T} g(s,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds + F_{1}(t), \\ V_{2}(t ; v_{2}^{\star}) = {} & \left(v_{2}^{\star} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} + F_{2}(t) \text{,\ $\forall t \in [0,T]$, with}\end{aligned}$$ ${g(s,t; v_{1}^{\star}) = \chi(s) e^{\frac{b(s)}{b(s)-1} \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) t} \left(v_{0} - v_{1}^{\star} - F_{2}(0)\right)^{\frac{\hat{b}-1}{b(s)-1}}}$, and ${\tilde{t}_{1}^{\star} = \tilde{t}_{1}(v_{1}^{\star}) \in (t,T)}$ solves $$\begin{aligned} b(\tilde{t}_{1}^{\star}) = {} & 1 + \frac{\int_{t}^{T} g(s,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds}{\int_{t}^{T} \frac{1}{b(s)-1} g(s,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds}.\end{aligned}$$ For the optimal $(\hat{\pi}^{\star}(t ; v_{0}), c^{\star}(t;v_{0}))$, Equation holds with equality. It follows immediately that ${c_{1}(t;v_{1}) > \bar{c}(t)}$, a.s.. Theorem \[thm:Solution:Merging:OriginalProblem\] furthermore proves that the general optimal relative investment strategy can be written as a mixture of a PPI and a CPPI strategy, but is not necessarily of a PPI or even CPPI type itself. The PPI comes from the consumption-only problem, see Theorem \[thm:Solution:ConsumptionOnly\], the CPPI arises as the solution to the terminal wealth-only problem, see Theorem \[thm:Solution:TerminalWealthOnly:WithoutProbabilityConstraint\]. The way which of the two strategies dominates the overall optimal investment policy is initially determined by the wealth distribution through $v_{1}^{\star}$ and $v_{2}^{\star}$ and later through $V_{1}(t ; v_{1}^{\star})$ and $V_{2}(t ; v_{2}^{\star})$. The special case where the coefficient of risk aversion $b(t)$ from consumption equals the one from terminal wealth $\hat{b}$ at any time is covered by the next remark. \[remark:Ye:case\] Assume $b(t) \equiv \hat{b}$ constant. 
Then the optimal controls turn into $$\begin{aligned} \hat{\pi}^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) = {} & \frac{1}{1-\hat{b}} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) - F(t)}{V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})}, \\ c^{\star}_{(b(t) \equiv \hat{b})}(t;v_{0}) = {} & \zeta(t) \left(V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) -F(t)\right) + \bar{c}(t),\end{aligned}$$ with $$\begin{aligned} \zeta(t) = \frac{\chi(t)}{\int_{t}^{T} \chi(s) ds + 1} > 0,\end{aligned}$$ where $$\begin{aligned} \chi(t) = \left(\frac{\hat{a}}{a(t)}\right)^{\frac{1}{\hat{b}-1}} e^{- \frac{1}{\hat{b}-1} \left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] (T-t)} > 0.\end{aligned}$$ The optimal investment strategy $\hat{\pi}^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})$ now is a traditional CPPI strategy with floor $F(t)$ and constant multiple vector ${\frac{1}{1-\hat{b}} \Sigma^{-1} (\mu - r \mathbf{1})}$. The optimal consumption rate $c^{\star}_{(b(t) \equiv \hat{b})}(t;v_{0})$ is the sum of the consumption floor $\bar{c}(t)$ and the time-varying proportion $\zeta(t)$ of the cushion ${V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) - F(t)}$ at time $t$. The fraction between the risky exposure (vector) and consumption is time-varying and it holds $$\begin{aligned} \frac{\hat{\pi}^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})}{c^{\star}_{(b(t) \equiv \hat{b})}(t;v_{0})} = \frac{1}{1-\hat{b}} \Sigma^{-1} (\mu - r \mathbf{1}) \left(\zeta(t) + \frac{\bar{c}(t)}{V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) - F(t)}\right)^{-1}. \label{eq:fraction:b(t)constant}\end{aligned}$$ Optimal consumption $c^{\star}_{(b(t) \equiv \hat{b})}(t;v_{0})$ as well as, under ${\Sigma^{-1} (\mu - r \mathbf{1}) > \mathbf{0}}$, optimal risky exposure ${\hat{\pi}^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})}$ linearly increase in the cushion ${V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) - F(t)}$. Hence, the higher the surplus ${V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) - F(t)}$, the more is invested risky and the more is consumed. The formula shows that, under ${\Sigma^{-1} (\mu - r \mathbf{1}) > \mathbf{0}}$, an increase in the cushion ${V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) - F(t)}$ leads to a stronger increase in the risky exposure ${\hat{\pi}^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})}$ than in consumption $c^{\star}_{(b(t) \equiv \hat{b})}(t;v_{0})$. Therefore, for a larger surplus ${V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) - F(t)}$, also the relative increase in the risky exposure is larger than the relative increase in consumption, thus investing money in stocks is preferred to consuming. The associated optimal wealth process is given as a function of the pricing kernel $$\begin{aligned} V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) = \frac{1}{\zeta(t)} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(v_{0} - F(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \frac{\chi(t)}{\int_{0}^{T} \chi(t) dt + 1} + F(t).\end{aligned}$$ This special case result coincides with the findings by [@Ye2008], who used the HJB approach, extended by additionally providing the optimal wealth process $V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})$. 
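For this constant-risk-aversion special case, the optimal feedback rules are fully explicit and easy to evaluate. The following sketch (ours; a single risky asset and purely illustrative parameter values, not a calibration from the paper) computes $\chi(t)$, $\zeta(t)$, the floor $F(t)$ and the resulting optimal consumption rate and risky weight at a given time and wealth level, which makes the CPPI character of the policy visible: both the risky weight and the consumption surplus scale with the cushion $V - F(t)$.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (assumptions), single risky asset N = 1
r, mu, sigma = 0.02, 0.06, 0.20
gamma = (mu - r) / sigma
beta, T = 0.03, 40.0
b_hat, a_hat = -2.0, 1.0                      # constant risk aversion, b(t) = b_hat
a = lambda t: np.exp(-0.01 * t)               # consumption weight a(t), hypothetical
c_bar = lambda t: 15.0                        # consumption floor
y = lambda t: 30.0                            # deterministic income rate
F_T = 100.0                                   # terminal wealth floor F

def F(t):
    """Time-t value of future minimum liabilities less income, plus the terminal floor."""
    integral, _ = quad(lambda s: np.exp(-r * (s - t)) * (c_bar(s) - y(s)), t, T)
    return integral + np.exp(-r * (T - t)) * F_T

def chi(t):
    kappa = beta - b_hat * (r - 0.5 * gamma**2 / (b_hat - 1.0))
    return (a_hat / a(t)) ** (1.0 / (b_hat - 1.0)) * np.exp(-kappa * (T - t) / (b_hat - 1.0))

def zeta(t):
    integral, _ = quad(chi, t, T)
    return chi(t) / (integral + 1.0)

def optimal_controls(t, V):
    """CPPI weight and consumption rate of the remark above, for wealth V > F(t)."""
    pi_star = (mu - r) / ((1.0 - b_hat) * sigma**2) * (V - F(t)) / V
    c_star = zeta(t) * (V - F(t)) + c_bar(t)
    return pi_star, c_star

print(optimal_controls(10.0, 800.0))
```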
We aim to interpret the optimal $\hat{\pi}^{\star}(t ; v_{0})$ for time-varying $b(t)$ and particularly to point out the difference to constant $b(t)$ in Remark \[remark:Ye:case\]. Writing ${V_{1}(t ; v_{1}^{\star}) = V^{\star}(t ; v_{0}) - V_{2}(t ; v_{2}^{\star})}$ where $V_{2}(t ; v_{2}^{\star})$ follows the wealth process of a standard CPPI strategy with floor $F_{2}(t)$ at time $t$ to the initial endowment $v_{2}^{\star}$ and constant multiplier vector ${\frac{1}{1-\hat{b}} \Sigma^{-1} (\mu - r \mathbf{1})}$, we obtain the following representation of the optimal investment decision $$\begin{aligned} \hat{\pi}^{\star}(t ; v_{0}) = {} & \Sigma^{-1} (\mu - r \mathbf{1}) \frac{\frac{1}{1 - b(\tilde{t}_{1}^{\star})} \left(V^{\star}(t ; v_{0}) - V_{2}(t ; v_{2}^{\star}) - F_{1}(t)\right) + \frac{1}{1-\hat{b}} \left(V_{2}(t ; v_{2}^{\star}) - F_{2}(t)\right)}{V^{\star}(t ; v_{0})} \nonumber \\ = {} & \Sigma^{-1} (\mu - r \mathbf{1}) \left\{\frac{1}{1 - b(\tilde{t}_{1}^{\star})} \frac{V^{\star}(t ; v_{0}) - F_{1}(t)}{V^{\star}(t ; v_{0})} + \frac{\hat{b}- b(\tilde{t}_{1}^{\star})}{(1-\hat{b}) \left(1 - b(\tilde{t}_{1}^{\star})\right)} \frac{V_{2}(t ; v_{2}^{\star}) - \frac{1 - b(\tilde{t}_{1}^{\star})}{\hat{b}- b(\tilde{t}_{1}^{\star})} F_{2}(t)}{V^{\star}(t ; v_{0})}\right\} \nonumber \\ = {} & \Sigma^{-1} (\mu - r \mathbf{1}) \left\{\frac{1}{1 - b(\tilde{t}_{1}^{\star})} \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})} + \frac{\hat{b} - b(\tilde{t}_{1}^{\star})}{(1-\hat{b}) \left(1 - b(\tilde{t}_{1}^{\star})\right)} \frac{V_{2}(t ; v_{2}^{\star}) - F_{2}(t)}{V^{\star}(t ; v_{0})}\right\} \nonumber \\ = {} & \Sigma^{-1} (\mu - r \mathbf{1}) \left\{\frac{1}{1 - b(\tilde{t}_{1}^{\star})} \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})} + \frac{\hat{b} - b(\tilde{t}_{1}^{\star})}{(1-\hat{b}) \left(1 - b(\tilde{t}_{1}^{\star})\right)} \frac{V_{2}(t ; v_{2}^{\star})}{V^{\star}(t ; v_{0})} \frac{V_{2}(t ; v_{2}^{\star}) - F_{2}(t)}{V_{2}(t ; v_{2}^{\star})}\right\}, \label{eq:SeparationTechnique:pihat:2xPPI}\end{aligned}$$ which can be implemented easily; $F(t)$ is defined in . Formula shows that the optimal relative allocation $\hat{\pi}^{\star}(t ; v_{0})$ can be written as a PPI strategy in $V^{\star}(t ; v_{0})$ with floor $F(t)$ plus a PPI strategy in $V_{2}(t ; v_{2}^{\star})$ with floor $F_{2}(t)$. Alternatively, write ${V_{2}(t ; v_{2}^{\star}) = V^{\star}(t ; v_{0}) - V_{1}(t ; v_{1}^{\star})}$, where $V_{1}(t ; v_{1}^{\star})$ is the replicating wealth process of a PPI strategy with floor $F_{1}(t)$ to the initial wealth $v_{1}^{\star}$ and now time- and state-varying multiplier vector $\frac{1}{1 - b(\tilde{t}_{1})} \Sigma^{-1} (\mu - r \mathbf{1})$ and, in contrast to $V_{2}(t ; v_{2}^{\star})$, a non-zero consumption rate process. 
Then $\hat{\pi}^{\star}(t ; v_{0})$ can be reformulated as $$\begin{aligned} \hat{\pi}^{\star}(t ; v_{0}) = {} & \Sigma^{-1} (\mu - r \mathbf{1}) \left\{\left(\frac{1}{1 - b(\tilde{t}_{1}^{\star})} - \frac{1}{1-\hat{b}}\right) \frac{V_{1}(t ; v_{1}^{\star}) - F_{1}(t)}{V^{\star}(t ; v_{0})} + \frac{1}{1-\hat{b}} \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})}\right\} \nonumber \\ = {} & \Sigma^{-1} (\mu - r \mathbf{1}) \left\{\frac{1}{1-\hat{b}} \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})} - \frac{\hat{b} - b(\tilde{t}_{1}^{\star})}{(1-\hat{b}) (1 - b(\tilde{t}_{1}^{\star}))} \frac{V_{1}(t ; v_{1}^{\star}) - F_{1}(t)}{V^{\star}(t ; v_{0})}\right\} \nonumber \\ \begin{split} \label{eq:SeparationTechnique:pihat:V1:1xCPPI:1xPPI} = {} & \Sigma^{-1} (\mu - r \mathbf{1}) \left\{\frac{1}{1-\hat{b}} \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})} - \frac{\hat{b} - b(\tilde{t}_{1}^{\star})}{(1-\hat{b}) (1 - b(\tilde{t}_{1}^{\star}))} \frac{V_{1}(t ; v_{1}^{\star})}{V^{\star}(t ; v_{0})} \frac{V_{1}(t ; v_{1}^{\star}) - F_{1}(t)}{V_{1}(t ; v_{1}^{\star})}\right\}. \end{split}\end{aligned}$$ This formula shows that the optimal relative investment $\hat{\pi}^{\star}(t ; v_{0})$ is the sum of a conventional CPPI strategy on $V^{\star}(t ; v_{0})$ with floor $F(t)$ and a PPI strategy on $V_{1}(t ; v_{1}^{\star})$ with floor $F_{1}(t)$. Recall from Remark \[remark:Ye:case\] that $\hat{\pi}^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})$ for constant $b(t) \equiv \hat{b}$ follows a traditional CPPI strategy ${\frac{1}{1-\hat{b}} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0}) - F(t)}{V^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})}}$ to the floor $F(t)$. The formula for $\hat{\pi}^{\star}(t ; v_{0})$ in shows that the optimal strategy $\hat{\pi}^{\star}(t ; v_{0})$ for time-varying $b(t)$ consists of two parts: 1. The first part coincides with $\hat{\pi}^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})$ and is a traditional CPPI strategy ${\frac{1}{1-\hat{b}} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})}}$ in $V^{\star}(t ; v_{0})$ to the floor $F(t)$. 2. The second, additional part is a time- and state-varying term which can be either positive, negative or zero; hence it can reduce or increase the risky investment, or leave it unchanged, in comparison with $\hat{\pi}^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})$. It is the second part which leads to a deviation in $\hat{\pi}^{\star}(t ; v_{0})$ compared to $\hat{\pi}^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})$. Therefore, we analyze this second part in what follows. Note that by Theorem \[thm:Solution:ConsumptionOnly\] it holds ${V_{1}(t ; v_{1}) > F_{1}(t)}$ a.s. 1. If ${V^{\star}(t ; v_{0}) > 0}$, which is reasonable, for instance, for ${v_{0} > 0}$ and an income rate that covers or exceeds consumption, then it follows $$\begin{aligned} \frac{V_{1}(t ; v_{1}^{\star})}{V^{\star}(t ; v_{0})} \frac{V_{1}(t ; v_{1}^{\star}) - F_{1}(t)}{V_{1}(t ; v_{1}^{\star})} = \frac{V_{1}(t ; v_{1}^{\star}) - F_{1}(t)}{V^{\star}(t ; v_{0})} > 0.\end{aligned}$$ This implies for ${i = 1, \hdots, N}$ at time $t$: 1. $\hat{b} > b(\tilde{t}_{1}^{\star})$: $$\begin{aligned} \left(\hat{\pi}^{\star}(t ; v_{0})\right)_{i} < \frac{1}{1-\hat{b}} \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})}\ \Leftrightarrow\ \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0.\end{aligned}$$ 2. 
$\hat{b} = b(\tilde{t}_{1}^{\star})$: $$\begin{aligned} \hat{\pi}^{\star}(t ; v_{0}) = \frac{1}{1-\hat{b}} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})}.\end{aligned}$$ 3. $\hat{b} < b(\tilde{t}_{1}^{\star})$: $$\begin{aligned} \left(\hat{\pi}^{\star}(t ; v_{0})\right)_{i} > \frac{1}{1-\hat{b}} \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})}\ \Leftrightarrow\ \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} < 0.\end{aligned}$$ 2. If ${V^{\star}(t ; v_{0}) < 0}$, which is reasonable, for instance, for ${v_{0} < 0}$ and a high consumption demand in the past, then it follows $$\begin{aligned} \frac{V_{1}(t ; v_{1}^{\star})}{V^{\star}(t ; v_{0})} \frac{V_{1}(t ; v_{1}^{\star}) - F_{1}(t)}{V_{1}(t ; v_{1}^{\star})} = \frac{V_{1}(t ; v_{1}^{\star}) - F_{1}(t)}{V^{\star}(t ; v_{0})} < 0.\end{aligned}$$ This in turn implies for ${i = 1, \hdots, N}$ at time $t$: 1. $\hat{b} > b(\tilde{t}_{1}^{\star})$: $$\begin{aligned} \left(\hat{\pi}^{\star}(t ; v_{0})\right)_{i} > \frac{1}{1-\hat{b}} \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})}\ \Leftrightarrow\ \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0.\end{aligned}$$ 2. $\hat{b} = b(\tilde{t}_{1}^{\star})$: $$\begin{aligned} \hat{\pi}^{\star}(t ; v_{0}) = \frac{1}{1-\hat{b}} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})}.\end{aligned}$$ 3. $\hat{b} < b(\tilde{t}_{1}^{\star})$: $$\begin{aligned} \left(\hat{\pi}^{\star}(t ; v_{0})\right)_{i} < \frac{1}{1-\hat{b}} \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} \frac{V^{\star}(t ; v_{0}) - F(t)}{V^{\star}(t ; v_{0})}\ \Leftrightarrow\ \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} < 0.\end{aligned}$$ In particular, consider the situation ${V^{\star}(t ; v_{0}) > 0}$ and let ${\left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0}$ hold for risky asset $i$. Under ${\hat{b} > b(\tilde{t}_{1}^{\star})}$, the optimal relative investment in stock $i$, which is $\left(\hat{\pi}^{\star}(t ; v_{0})\right)_{i}$, is reduced compared to the relative investment decision $\left(\hat{\pi}^{\star}_{(b(t) \equiv \hat{b})}(t ; v_{0})\right)_{i}$ under ${b(t) \equiv \hat{b}}$. Since ${\hat{b} > b(\tilde{t}_{1}^{\star})}$ can be interpreted as higher risk aversion for consumption than for terminal wealth, this is meaningful. In the situation ${V^{\star}(t ; v_{0}) < 0}$ the interpretation seems counterintuitive at first glance. But when looking at risky exposures rather than risky relative investments, analogous conclusions hold. The same approach should be used when considering ${V^{\star}(t ; v_{0}) = 0}$. Furthermore, it is worth mentioning that $\hat{\pi}^{\star}(t ; v_{0})$ approaches $0$ when $V^{\star}(t ; v_{0})$ approaches $F(t)$, which can be observed in ; the argument is the following: When $V^{\star}(t ; v_{0})$ falls towards $F(t)$, then $V_{1}(t ; v_{1}^{\star})$ automatically approaches $F_{1}(t)$ and $V_{2}(t ; v_{2}^{\star})$ simultaneously converges towards $F_{2}(t)$, since $V^{\star}(t ; v_{0}) = V_{1}(t ; v_{1}^{\star}) + V_{2}(t ; v_{2}^{\star})$, ${F(t) = F_{1}(t) + F_{2}(t)}$ and ${V_{1}(t ; v_{1}^{\star}) > F_{1}(t)}$, ${V_{2}(t ; v_{2}^{\star}) > F_{2}(t)}$ a.s., which was already shown in Sections \[sec:ConsumptionProblem:y\] and \[sec:TerminalWealthProblem\]. We moreover proved that in this case ${\hat{\pi}_{1}(t;v_{1}^{\star})}$ and ${\hat{\pi}_{2}(t;v_{2}^{\star})}$ approach $0$. 
By Theorem \[thm:Solution:Merging:OriginalProblem\] it follows that also $\hat{\pi}^{\star}(t; v_{0})$ must converge to $0$. Therefore, as ${v_{0} > F(0)}$ is assumed, it follows that ${V^{\star}(t ; v_{0}) > F(t)}$ a.s., which can additionally be seen in the respective formula in Theorem \[thm:Solution:Merging:OriginalProblem\], and the optimal decision rules provide portfolio insurance over the whole life-cycle. $F(t)$ is called the minimum asset wealth level; it holds $F(T) = F$. The optimal exposure to the risky assets equals the sum of the optimal risky exposures of the two sub-problems $$\begin{aligned} \hat{\pi}^{\star}(t; v_{0}) V^{\star}(t ; v_{0}) = \hat{\pi}_{1}(t;v_{1}^{\star}) V_{1}(t;v_{1}^{\star}) + \hat{\pi}_{2}(t;v_{2}^{\star}) V_{2}(t;v_{2}^{\star})\end{aligned}$$ and by the findings in Sections \[sec:ConsumptionProblem:y\] and \[sec:TerminalWealthProblem\] it holds $$\begin{aligned} \left(\hat{\pi}^{\star}(t ; v_{0}) V^{\star}(t ; v_{0})\right)_{i} > 0\ \Leftrightarrow\ \left(\Sigma^{-1} (\mu - r \mathbf{1})\right)_{i} > 0.\end{aligned}$$ For ease of exposition we have so far assumed that the income process is deterministic. The following remark shows the solution for a stochastic income process. \[remark:y(t)stochastic\] Let $(y(t))_{t \in [0,T]}$ be a non-negative, stochastic income-rate process with $\int_{0}^{T} y(t) dt < \infty$, ${\mathbb{P}}$-a.s. The stated results remain valid after replacing integrals of the form ${\int_{t}^{T} e^{-r (s-t)} y(s) ds}$ by the more general conditional expectation ${{\mathbb{E}}\left[\int_{t}^{T} \frac{\tilde{Z}(s)}{\tilde{Z}(t)} y(s) ds \Big| \mathcal{F}_{t}\right] = \int_{t}^{T} {\mathbb{E}}\left[\frac{\tilde{Z}(s)}{\tilde{Z}(t)} y(s) \Big| \mathcal{F}_{t}\right] ds = \int_{t}^{T} e^{-r (s-t)} {\mathbb{E}}_{{\mathbb{Q}}}\left[y(s) \Big| \mathcal{F}_{t}\right] ds}$ (the last equality holding by the Bayes formula) for arbitrary ${t \in [0,T]}$, in particular in the definition of $F_{1}(t)$ and $F(t)$. If $(y(t))_{t \in [0,T]}$ is assumed to be independent of $\CMcal{F}$, i.e. independent of the market randomness, then the conditional expectation ${{\mathbb{E}}\left[\int_{t}^{T} \frac{\tilde{Z}(s)}{\tilde{Z}(t)} y(s) ds \Big| \mathcal{F}_{t}\right]}$ reduces to ${\int_{t}^{T} e^{-r (s-t)} {\mathbb{E}}_{{\mathbb{Q}}}\left[y(s)\right] ds}$. For the lower bounds of $v_{0}$ and $v_{1}$, the conditions and need to be replaced by $$\begin{aligned} v_{0} > {} & \int_{0}^{T} e^{- r s} \left(\bar{c}(s) - \bar{y}(s)\right) ds + e^{- r T} F, \\ v_{1} > {} & \int_{0}^{T} e^{- r s} \left(\bar{c}(s) - \bar{y}(s)\right) ds,\end{aligned}$$ where ${\bar{y}(s) = \sup \left\{x \ge 0: {\mathbb{P}}(y(s) \ge x) = 1\right\}}$ denotes the minimal level of income; ${\bar{y}(s) > 0}$ is meaningful, for instance, due to unemployment benefits paid by the government. Analysis of optimal controls and wealth process: A case study {#sec:NCS} ============================================================= This section aims to calibrate the life-cycle model to realistic time-dependent consumption and investment structures observed in practice and to outline the difference between our solution with age-dependent $a(t)$ and $b(t)$ functions and the models in which only $a(t)$, only $b(t)$, or neither is time-varying. Hence, we not only estimate $\hat{b}$, $a(t)$ and $b(t)$ for our model, but additionally provide the respective estimates when $a(t)$ or $b(t)$, or both, are assumed to be constant. 
A comparison of the fit of the different models allows us to assess how accurately each model describes the agent’s behavior. For notational convenience we denote the three benchmark models as follows: - $M_{a,b(t)}$: $a(t) \equiv a$ constant, $b(t)$ time-varying - $M_{a(t),b}$: $a(t)$ time-varying, $b(t) \equiv b$ constant - $M_{a,b}$: $a(t) \equiv a$ and $b(t) \equiv b$ constant The subscript thus indicates whether $a(t)$ or $b(t)$ is age-varying. Therefore, our model is denoted by $M_{a(t),b(t)}$. As already indicated in Section \[sec:Introduction\], $M_{a,b(t)}$ is (partially) covered by [@Steffensen2011], [@Hentschel2016] and [@Aase2017]; $M_{a(t),b}$ and $M_{a,b}$ are covered by [@Ye2008]. In Subsection \[sec:ComparisonCRRA\] below, we additionally analyze the impact of the floors $\bar{c}(t)$ and $F$, where our model $M_{a(t),b(t)}$ is compared to the same model but with CRRA utility functions, i.e. $\bar{c}(t) \equiv 0$ and $F \equiv 0$. The CRRA model is denoted by $M_{a(t),b(t)}^{CRRA}$ and is (partially) considered by [@Steffensen2011], [@Hentschel2016] and [@Aase2017]. Assumptions ----------- We assume an exemplary agent with average income, liabilities, etc. A similar case study can be carried out for a pension cohort, but for simplicity and data availability we consider an individual client. In detail, we make the following (simplifying) assumptions: Let the market consist of one risk-free and one risky asset ($N = 1$) with parameters ${r = 0.5 \%}$, ${\mu = 5 \%}$, and ${\sigma = 20 \%}$; these values correspond approximately to the EURONIA Overnight Rate and the performance of the DAX 30 Performance Index as an equity index over the $11$-year period from 17 October 2007 to 17 October 2018. The risky asset can coincide with, but is not restricted to, a pure equity portfolio. In general it can be any given portfolio of risky assets. The price process of the risky asset is assumed to be ${P(t) = p_{1} e^{(\mu - \frac{1}{2} \sigma^2) t + \sigma W(t)} = p_{1} e^{\frac{1}{2} (\mu + r) (1 - \frac{\sigma}{\gamma}) t} \tilde{Z}(t)^{- \frac{\sigma}{\gamma}}}$ with initial price ${P(0) = p_{1} = 100}$. Furthermore, let ${T = 40}$ years be the time to retirement, with the investor currently aged $25$ years and retiring at age $65$. For the net salary function it is assumed ${y(t) = \frac{\tilde{r}}{e^{\tilde{r}} - 1} y_{0} e^{\tilde{r} t}}$ with $y_{0} = 26,200$ EUR and $\tilde{r} = 2.07 \%$. This corresponds to a net annual starting salary approximately equal to the average for a graduate in Germany in 2017 (cf. online portals [@Absolventa2018] or [@Stepstone2017]), with an annual increase equal to the average for a household’s net salary in Germany over the years 2011 to 2016 according to [@StatistischesBundesamtEinkommen2018]. Net income accumulated over the first year is ${\int_{0}^{1} y(t)dt = y_{0}}$ and income accumulated within the year from time $s$ to $s+1$ is ${\int_{s}^{s+1} y(t)dt = \frac{\tilde{r}}{e^{\tilde{r}} - 1} y_{0} \frac{e^{\tilde{r} (s+1)} - e^{\tilde{r} s}}{\tilde{r}} = y_{0} e^{\tilde{r} s}}$. For the agent’s utility functions, let ${\beta = 3 \%}$ (cf. [@Ye2008]) and ${\hat{a} = 1}$. Let the terminal wealth floor be ${F = 435,125}$ EUR, which is motivated by the following argument: according to [@StatistischesBundesamtJahrbuch2017], [@aktuare2017] or [@WKO2016], a lifetime of around $81$ years can be expected for a currently $25$-year-old person in Germany. 
Thus $81 - 65 = 16$ years of survival are expected after retirement at the age of $65$. We assume that the agent secures the income inflow during retirement to be $75 \%$ of the last wage paid from year $64$ to $65$ (replacement ratio of $75 \%$), which is ${\int_{39}^{40} y(t)dt = y_{0} e^{39 \tilde{r}} = 58,736}$ EUR. Assume that every year, half of this amount is covered by a separate pension account or plan, e.g. provided by the government. In addition, the agent wants to insure against longevity risk and hence considers $16 \times (100 + 30) \% = 20.8$ years instead of $16$ years for the remaining lifetime after the age of retirement. Thus, $F$, as a value at time $T$, is chosen to be ${F = \int_{0}^{20.8} \frac{0.75 \times 58,736 \text{ EUR}}{2} e^{-r t} dt = \frac{0.75 \times 58,736 \text{ EUR}}{2} \left(\frac{1 - e^{- 20.8 \times r}}{r}\right) = 435,125 \text{ EUR}}$. Finally, the function for the net consumption floor is supposed to take the form ${\bar{c}(t) = \frac{\bar{r}}{e^{\bar{r}} - 1} \bar{c}_{0} e^{\bar{r} t}}$ with $\bar{c}_{0} = 14,880$ EUR and $\bar{r} = 1.93 \%$. This corresponds to a starting value of approximately $50 \%$ of the average household consumption in Germany in 2016, with an annual increase equal to the increase in average household consumption in Germany over the years 2011 to 2016 (published by [@StatistischesBundesamtEinkommen2018]). The minimum consumption expenses incurred within the first year are ${\int_{0}^{1} \bar{c}(t)dt = \bar{c}_{0}}$, and within year $s$ to $s+1$ they are ${\int_{s}^{s+1} \bar{c}(t)dt = \bar{c}_{0} e^{\bar{r} s}}$. The assumed income and consumption floor rates are visualized in Figure \[fig:Analysis:cbar:y\]. Fitting / Calibration under exponential preferences and discussion ------------------------------------------------------------------ In what follows we calibrate the remaining utility parameters $\hat{b}$, $a(t)$ and $b(t)$ to suitable curves for consumption and relative allocation. The targeted curves for parameter fitting are summarized in Table \[tab:Analysis:SamplePoints\]. The consumption rate $c^{\star}(t;v_{0})$ is calibrated with respect to the hump-shaped pattern observed by [@Carroll1997], [@GourinchasParker2002], [@JensenSteffensen2015] and [@TangPurcalZhang2018]. The relative risky investment $\hat{\pi}^{\star}(t ; v_{0})$ is calibrated towards the $(100 - \text{age}) \%$ rule of thumb; a similar structure is frequently applied by financial advisors and asset management companies for life-cycle funds (see [@Malkiel1990], [@BodieCrane1997], [@Shiller2005], [@Minderhoud2011], [@Milliman2010], [@Shafir2013]). Following this popular rule, the client starts at age $25$ with a $75 \%$ equity investment and decreases it linearly with age such that she ends with a $35 \%$ equity investment at the retirement age of $65$. We would like to mention that in particular relative risky investment curves or products provided by asset management companies are to be understood as deterministic, i.e. wealth-/state-independent. Therefore, we calibrate the remaining unknown parameters with respect to the expected values for consumption and risky relative investment. In more detail, we fit the expected value for consumption, which is ${{\mathbb{E}}\left[c^{\star}(t;v_{0})\right]}$, to the given consumption curve. 
For ${{\mathbb{E}}\left[\hat{\pi}^{\star}(t ; v_{0})\right]}$ we apply the following estimate: we estimate the risky exposure ${{\mathbb{E}}\left[\hat{\pi}^{\star}(t ; v_{0}) V^{\star}(t ; v_{0})\right]}$ without any bias and then replace $V^{\star}(t ; v_{0})$ by its unbiased expectation ${{\mathbb{E}}\left[V^{\star}(t ; v_{0})\right]}$ to obtain the estimate ${\frac{{\mathbb{E}}\left[\hat{\pi}^{\star}(t ; v_{0}) V^{\star}(t ; v_{0})\right]}{{\mathbb{E}}\left[V^{\star}(t ; v_{0})\right]}}$ for ${{\mathbb{E}}\left[\hat{\pi}^{\star}(t ; v_{0})\right]}$. By doing this we replace ${{\mathbb{E}}\left[\hat{\pi}^{\star}(t ; v_{0})\right]}$ by ${\frac{{\mathbb{E}}\left[\hat{\pi}^{\star}(t ; v_{0}) V^{\star}(t ; v_{0})\right]}{{\mathbb{E}}\left[V^{\star}(t ; v_{0})\right]}}$ and fit the latter expression to the given linear relative investment curve. For further reading on deterministic investment strategies we refer to [@ChristiansenSteffensen2013] and [@ChristiansenSteffensen2018]. In summary, we have unbiased estimates for the expected values of optimal consumption, risky exposure and wealth process, and a modified estimate for the expectation of the optimal relative risky investment. Let $a(t)$ and $b(t)$ take the form of an exponential function, i.e. ${a(t) = a_{0} e^{\lambda_{a} t}}$ and ${b(t) = b_{0} e^{\lambda_{b} t}}$. Moreover, let $v_{0} = 250,000$ EUR. The estimation is carried out via the Matlab function *lsqcurvefit*, which solves nonlinear curve-fitting (data-fitting) problems in a least-squares sense and minimizes the sum of the squared relative distances. The underlying time points for target consumption and allocation are set weekly on an equidistant grid, which yields $2,080$ points in the time interval $[0,T]$ with ${T = 40}$. Table \[tab:Analysis:ParameterEstimation:Error:Benchmark:RichInvestor\] gives an overview of the estimated utility parameters and provides the sum of squared relative errors as a quality criterion. The errors show that considering age-dependent functions $a(t)$ and $b(t)$ simultaneously in model $M_{a(t),b(t)}$ leads to a considerable improvement in the accuracy of the fit compared to any of the three benchmark models: the sum of squared relative distances of model $M_{a(t),b(t)}$ is only $19.38 \%$ of the respective sum for model $M_{a(t),b}$, which provides the second-best fit in terms of the sum of squared relative residuals. Figure \[fig:Analysis:InputFunctions:Benchmark:RichInvestor\] visualizes the fitted parameters and preference functions $\hat{b}$, $a(t)$, $b(t)$. The table and figure show that the estimated coefficient of risk aversion $\hat{b}$ for our model $M_{a(t),b(t)}$ is more negative, which means a higher risk aversion, compared to the three benchmark models $M_{a,b(t)}$, $M_{a(t),b}$, $M_{a,b}$. Furthermore, $a(t)$ is decreasing in both model $M_{a(t),b(t)}$ and model $M_{a(t),b}$. In contrast, $b(t)$ increases in model $M_{a(t),b(t)}$ over time whereas it decreases in the comparison model $M_{a,b(t)}$. The $b(t)$ curves of models $M_{a,b(t)}$, $M_{a(t),b}$, $M_{a,b}$ stay very close together over the whole life-cycle, whereas $b(t)$ in $M_{a(t),b(t)}$ starts more negative and ends less negative. In summary, this means that in model $M_{a(t),b(t)}$ the risk aversion decreases through the increasing $b(t)$, but the investor's preference between consumption and terminal wealth shifts more and more towards terminal wealth through the decreasing $a(t)$. 
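The calibration itself was performed in Matlab via *lsqcurvefit*. The following Python sketch only outlines an analogous least-squares fit of the exponential preference parameters $(a_{0}, \lambda_{a}, b_{0}, \lambda_{b}, \hat{b})$ to target curves by minimizing squared relative distances; the hump-shaped consumption target and the map `model_curves` are crude placeholders (they are *not* the closed-form expressions of the solved model or the data of Table \[tab:Analysis:SamplePoints\]), while the allocation target follows the $(100 - \text{age})\%$ rule described above.

```python
import numpy as np
from scipy.optimize import least_squares

t_grid = np.linspace(0.0, 40.0, 2080)              # weekly grid as in the case study
target_alloc = 0.75 - 0.01 * t_grid                # (100 - age)% rule, age 25 at t = 0
target_cons = 20_000 + 15_000 * np.exp(-((t_grid - 22.0) / 12.0) ** 2)  # placeholder hump shape

def model_curves(theta, t):
    """Placeholder for E[c*(t;v0)] and E[pi*(t;v0) V*(t;v0)] / E[V*(t;v0)] as functions of
    theta = (a0, lam_a, b0, lam_b, b_hat); to be replaced by the model's closed-form expressions."""
    a0, lam_a, b0, lam_b, b_hat = theta
    cons = 20_000 + a0 * np.exp(lam_a * t)          # stand-in shapes only
    alloc = 1.0 / (1.0 - b_hat) + b0 * np.exp(lam_b * t)
    return cons, alloc

def relative_residuals(theta):
    cons, alloc = model_curves(theta, t_grid)
    return np.concatenate([(cons - target_cons) / target_cons,
                           (alloc - target_alloc) / target_alloc])

theta0 = np.array([10_000.0, 0.01, 0.3, -0.02, -2.0])
fit = least_squares(relative_residuals, theta0)
print(fit.x, np.sum(fit.fun ** 2))                  # fitted parameters, sum of squared rel. errors
```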
Figure \[fig:Analysis:FittedConsumptionInvestment:Benchmark:RichInvestor\] illustrates the expected optimal consumption rate and relative risky investment for the fitted parameters in comparison with the given target policies or average profile. In addition to Table \[tab:Analysis:ParameterEstimation:Error:Benchmark:RichInvestor\] the figure illustrates that, under exponential preferences $a(t)$ and $b(t)$, only the most flexible model $M_{a(t),b(t)}$ provides an accurate and precise fit for both consumption rate and risky relative allocation. We observe that the benchmark models $M_{a,b(t)}$, $M_{a(t),b}$, $M_{a,b}$ apparently do not provide enough flexibility to simultaneously describe the predetermined consumption and relative allocation curves. Whereas the fits for the relative investment $\hat{\pi}^{\star}(t ; v_{0})$ look acceptable, all three benchmark models fail to explain the targeted consumption rate $c^{\star}(t;v_{0})$. We further notice that $c^{\star}(t;v_{0})$ and $\hat{\pi}^{\star}(t ; v_{0})$ for the models $M_{a,b(t)}$ and $M_{a(t),b}$ are very similar (red and black lines in the respective figures). In summary, Table \[tab:Analysis:ParameterEstimation:Error:Benchmark:RichInvestor\] and Figure \[fig:Analysis:FittedConsumptionInvestment:Benchmark:RichInvestor\] demonstrate that model $M_{a(t),b(t)}$ is the only one among our considered models which provides enough flexibility to model a hump-shaped consumption curve alongside a linear risky allocation curve. All three benchmark models, which disregard time-dependency of $a(t)$ or $b(t)$ or both, do not lead to a satisfactory fit. In addition, fitting optimal consumption of the four models to the given consumption curve, while ignoring relative investments, shows the same picture. The result is that the sum of the squared distances associated with model $M_{a(t),b(t)}$ is only $21.26 \%$ of the respective sum associated with the second-best model $M_{a,b(t)}$. This supports our findings and the conclusion that time-varying preference parameters are indeed needed to accurately model the given time-dependent hump-shaped consumption and linear risky allocation curves. In addition to the parameter estimation for the expected path, we provide the figures for optimal consumption, risky relative portfolio and wealth process of all four models under two representative scenarios: a mostly upward (see Figure \[fig:Analysis:FittedConsumptionInvestment:IncreasingPath:RichInvestor\]) and a mostly downward (see Figure \[fig:Analysis:FittedConsumptionInvestment:DecreasingPath:RichInvestor\]) moving path for the underlying stock. The corresponding expected paths for the consumption rate, the relative risky investment and the wealth process can be found in Figure \[fig:Analysis:FittedConsumptionInvestment:Benchmark:RichInvestor\]. In the increasing stock price case, optimal consumption and risky relative allocation for model $M_{a(t),b(t)}$ stay very close to the targeted curves since the corresponding wealth stays close to its expected path and shows some reverting behavior. For a more strongly increasing underlying price process, consumption exceeds the given consumption curve of the expected path. When the stock price decreases, optimal consumption and risky allocation for model $M_{a(t),b(t)}$ fall below the target curves after approximately $15$ to $20$ years. In particular, higher consumption can no longer be afforded due to the poorly performing equity market. 
This goes hand in hand with a reduction in the relative risky allocation. At first glance, it seems that there is a big difference in optimal consumption between our model $M_{a(t),b(t)}$ and the three benchmark models $M_{a,b(t)}$, $M_{a(t),b}$ and $M_{a,b}$, while the optimal risky investments and wealth paths for all four models remain within a quite narrow range, although the deviation of the risky investments from their target curve can be large. This is due to the different scales for wealth and consumption. Figure \[fig:Analysis:Difference:c:pi:V:exp\] visualizes the differences, denoted by $\Delta$, in the fitted consumption, the relative risky investment and the corresponding wealth process between the three benchmark models and our model in the expected-path situation. It can be observed that the relative risky allocation $\hat{\pi}^{\star}(t;v_{0})$ of model $M_{a(t),b(t)}$ exceeds those of the three benchmark models in the first half of the considered period of $40$ years by up to eight percentage points, and falls below them in the second half. Moreover, the difference appears to be monotonically decreasing in age. Furthermore, the wealth process which corresponds to model $M_{a(t),b(t)}$ outperforms the three benchmark models in the first half, but provides a lower wealth in the second half due to a higher consumption rate from approximately year $8$ to year $30$, with a certain recovery in wealth close to retirement. The two exemplary scenarios and the expected development, which was used for fitting, show that the benchmark models $M_{a,b(t)}$, $M_{a(t),b}$ and $M_{a,b}$ overestimate the given consumption curve in early and late years (close to $t = 0$ and $t = 40$) and underestimate it in between. For our model $M_{a(t),b(t)}$, the optimal consumption rate stays very close to its target curve until consumption can no longer be afforded because of low wealth resulting from a strong market decline. We conclude that, especially in phases of poor stock performance, both $c^{\star}(t;v_{0})$ and $\hat{\pi}^{\star}(t ; v_{0})$ can deviate substantially from their given curves. Comparison with CRRA {#sec:ComparisonCRRA} -------------------- We conclude the case study section by exploring the impact of minimum consumption and wealth floors on calibration and optimal controls. To this end, we fit the model $M_{a(t),b(t)}$ using the very same parameters and target curves as before, but now enforce $\bar{c}(t) \equiv 0$ and $F \equiv 0$. This CRRA model is referred to as $M_{a(t),b(t)}^{CRRA}$. Table \[tab:Analysis:ParameterEstimation:Error:Benchmark:RichInvestor:CRRA\] provides the estimated parameters and the sum of the squared relative residuals. In terms of this sum, it is clear that model $M_{a(t),b(t)}$ provides a more adequate fit than model $M_{a(t),b(t)}^{CRRA}$: its sum is only $4.82 \%$ of the sum corresponding to $M_{a(t),b(t)}^{CRRA}$. Going even further, all three benchmark models $M_{a,b(t)}$, $M_{a(t),b}$ and $M_{a,b}$ from the previous subsection, which all consider minimum levels for consumption and wealth, provide a more precise fit than $M_{a(t),b(t)}^{CRRA}$ in view of the sum of squared relative residuals. This shows that the introduction of floors for consumption and wealth in the model is essential. 
Figure \[fig:Analysis:InputFunctions:Benchmark:RichInvestor:CRRA\] visualizes the estimated input functions; Figure \[fig:Analysis:FittedConsumptionInvestment:Benchmark:RichInvestor:CRRA\] shows the fitted consumption and relative risky portfolio processes together with the expected wealth and stock price paths. Besides the larger sum of squared relative distances for model $M_{a(t),b(t)}^{CRRA}$, the fitted risky investments $\hat{\pi}^{\star}(t ; v_{0})$ in Figure \[fig:Analysis:FittedConsumptionInvestment:Benchmark:RichInvestor:CRRA\] in particular show that zero floors for consumption and wealth ($\bar{c}(t) \equiv 0$ and $F \equiv 0$) lead to an imprecise calibration and a large deviation from the given target curve due to a drop in model flexibility. Table \[tab:Analysis:ParameterEstimation:Error:Benchmark:RichInvestor:CRRA\] suggests that the estimation attempts to compensate for this drop in flexibility with a higher risk aversion, in terms of more negative estimated values for $\hat{b}$ and $b(t)$; see also Figure \[fig:Analysis:InputFunctions:Benchmark:RichInvestor:CRRA\]. Conclusion {#sec:Conclusion} ========== This paper studies the optimal quantitative and dynamic consumption and investment strategies under age-dependent risk preferences (coefficient of risk aversion $b(t)$ and preference between consumption and terminal wealth $a(t)$). The findings demonstrate that strategies applied for life-cycle pension funds or pension insurance could be significantly improved by taking age-dependent risk preferences into account. To this end, the paper combines terminal wealth with a minimum level and consumption with a minimum level under time-varying risk preferences into a dynamic life-cycle consumption-investment model. A sound economic understanding of the model parts is provided. In Section \[sec:SeparationTechnique\] the corresponding portfolio optimization problem is solved analytically with a separation approach, which allows the consumption part and the terminal wealth part of the original consumption-investment problem to be solved separately. The formulas show that age-dependent risk preferences in combination with terminal wealth considerations and minimum levels for consumption and wealth have a significant impact on the optimal controls. Section \[sec:NCS\] investigates the optimal controls and provides a comparison with already existing and solved benchmark models. The analysis is divided into two parts. In the first part the risk preferences are calibrated towards given realistic curves for consumption and investment. The result emphasizes that only our proposed flexible model, in comparison with the other considered benchmark models, provides an adequate fit of the agent’s behavior. We draw the conclusion that time-varying preferences (risk aversion $b(t)$ and preference between consumption and terminal wealth $a(t)$) are necessary to provide a sufficient degree of flexibility to accurately fit the two control variables consumption and investment: our proposed model turns out to be able to explain the given consumption and investment decisions, whereas the benchmark models fail to do so. The very same result is obtained when time-dependent preference functions are considered but the consumption and wealth floors are omitted. The second part focuses on the analysis of the optimal consumption, investment and wealth behavior in a positive and in a negative market environment. Future research on this topic could deal with generalizations of the dynamic life-cycle model. 
For instance, investment constraints could be included to make the whole setup more applicable as budgets in practice are commonly exposed to constraints on allocation or risk. Furthermore, since unemployment risk and uncertain future income are essential for individuals, those risks and impacts on the optimal controls and wealth process could be further explored. Finally, including mortality and a life insurance product into the model could help people in determining their optimal individual life insurance investment embedded in a more realistic, flexible framework. Acknowledgements {#acknowledgements .unnumbered} ================ Pavel V. Shevchenko acknowledges the support of Australian Research Council’s Discovery Projects funding scheme (project number DP160103489). Proofs {#app:Proofs} ====== The consumption problem {#app:ProofsConsumptionProblem} ----------------------- The Lagrangian of the Problem subject to is $$\begin{aligned} \mathcal{L}(c,\lambda_{1}) = {} & {\mathbb{E}}\left[\int_{0}^{T} U_{1}(t,c(t)) dt\right] - \lambda_{1} \left({\mathbb{E}}\left[\int_{0}^{T} \tilde{Z}(t) \left(c(t) - y(t)\right) dt\right] - v_{1}\right) \\ = {} & {\mathbb{E}}\left[\int_{0}^{T} U_{1}(t,c(t)) - \lambda_{1} \left(\tilde{Z}(t) \left(c(t) - y(t)\right) - \frac{1}{T} v_{1}\right) dt\right].\end{aligned}$$ By the structure of the utility function, the optimal $c_{1}$ fulfills $c_{1}(t;v_{1}) > \bar{c}(t)$ and thus the first order conditions involve existence of a Lagrange multiplier $\lambda_{1} = \lambda_{1}(v_{1}) > 0$ such that the optimal $c_{1}$ maximizes $\mathcal{L}(c,\lambda_{1})$ and such that complementary slackness holds true. Hence it can be shown that the Karush-Kuhn-Tucker conditions besides the first derivative condition are satisfied. Following [@Aase2017], let $\nabla_{h} \mathcal{L}(c,\lambda_{1};h)$ denote the directional derivative of $\mathcal{L}(c,\lambda_{1})$ in the feasible direction $h$. The directional derivative of a function $f$ in the direction $h$ is generally defined by $$\begin{aligned} \nabla_{h} f(x) = \lim_{y \to 0} \frac{f(x + h y) - f(x)}{y}.\end{aligned}$$ If $f$ is differentiable at $x$ this results in $$\begin{aligned} \nabla_{h} f(x) = f^{\prime}(x) h.\end{aligned}$$ In our case, for the inner function it holds $$\begin{aligned} \nabla_{h} \left(U_{1}(t,c(t)) - \lambda_{1} \left(\tilde{Z}(t) \left(c(t) - y(t)\right) - \frac{1}{T} v_{1}\right)\right) = {} & \frac{\partial}{\partial c} \left(U_{1}(t,c(t)) - \lambda_{1} \left(\tilde{Z}(t) \left(c(t) - y(t)\right) - \frac{1}{T} v_{1}\right)\right) h(t) \\ = {} & \left(\frac{\partial}{\partial c} U_{1}(t,c(t)) - \lambda_{1} \tilde{Z}(t)\right) h(t).\end{aligned}$$ By the dominated convergence theorem, which allows interchanging expectation and differentiation, the first order condition gives $$\begin{aligned} 0 = {} & {\mathbb{E}}\left[\int_{0}^{T} \left(\frac{\partial}{\partial c} U_{1}(t,c(t)) - \lambda_{1} \tilde{Z}(t)\right) h(t) dt\right] \\ = {} & {\mathbb{E}}\left[\int_{0}^{T} \left(e^{- \beta t} a(t) \left(\frac{1}{1-b(t)} \left(c(t) - \bar{c}(t)\right)\right)^{b(t)-1} - \lambda_{1} \tilde{Z}(t)\right) h(t) dt\right]\end{aligned}$$ for all feasible $h$. 
In order to fulfill this condition for any $h$, the optimal consumption rate process must be $$\begin{aligned} \label{eq:Solution:c1:ConsumptionOnly} c_{1}(t;v_{1}) = (1-b(t)) \left(\lambda_{1} \frac{e^{\beta t}}{a(t)} \tilde{Z}(t)\right)^{\frac{1}{b(t)-1}} + \bar{c}(t),\ t \in [0,T].\end{aligned}$$ Since $U_{1}(t,c)$ strictly increases in $c$, the budget constraint turns into an equality for the optimal solution, i.e. $$\begin{aligned} {\mathbb{E}}\left[\int_{0}^{T} \tilde{Z}(t) \left(c_{1}(t;v_{1}) - y(t)\right) dt\right] = v_{1}.\end{aligned}$$ Plugging in and applying Fubini's theorem, the budget condition becomes $$\begin{aligned} v_{1} = {} & {\mathbb{E}}\left[\int_{0}^{T} \tilde{Z}(t) \left((1-b(t)) \left(\lambda_{1} \frac{e^{\beta t}}{a(t)} \tilde{Z}(t)\right)^{\frac{1}{b(t)-1}} + \bar{c}(t) - y(t)\right) dt\right] \\ = {} & \int_{0}^{T} (1-b(t)) \left(\lambda_{1} \frac{e^{\beta t}}{a(t)}\right)^{\frac{1}{b(t)-1}} {\mathbb{E}}\left[\tilde{Z}(t)^{\frac{b(t)}{b(t)-1}}\right] dt + \int_{0}^{T} {\mathbb{E}}\left[\tilde{Z}(t)\right] \left(\bar{c}(t) - y(t)\right) dt \\ = {} & \int_{0}^{T} (1-b(t)) \left(\lambda_{1} \frac{e^{\beta t}}{a(t)}\right)^{\frac{1}{b(t)-1}} e^{- \frac{b(t)}{b(t)-1} \left(r + \frac{1}{2}\|\gamma\|^{2}\right) t + \frac{1}{2} \left(\frac{b(t)}{b(t)-1}\right)^{2} \|\gamma\|^{2} t} dt \\ & + \int_{0}^{T} e^{- \left(r + \frac{1}{2}\|\gamma\|^{2}\right) t + \frac{1}{2} \|\gamma\|^{2} t} \left(\bar{c}(t) - y(t)\right) dt \\ = {} & \int_{0}^{T} (1-b(t)) \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{\frac{1}{b(t)-1}} dt + \int_{0}^{T} e^{- r t} \left(\bar{c}(t) - y(t)\right) dt \\ = {} & \int_{0}^{T} (1-b(t)) \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{\frac{1}{b(t)-1}} dt + F_{1}(0).\end{aligned}$$ Here we used that $\tilde{Z}(t)$ is a log-normal random variable and so is $\tilde{Z}(t)^{\frac{b(t)}{b(t)-1}}$. For any $v_{1} > F_{1}(0) = \int_{0}^{T} e^{- r t} \left(\bar{c}(t) - y(t)\right) dt$, the above equality determines $\lambda_{1} > 0$ uniquely, since the integral in which $\lambda_{1}$ appears strictly decreases in $\lambda_{1}$ and has the limits $0$ and $\infty$ as $\lambda_{1}$ approaches $\infty$ and $0$, respectively. It follows immediately that the condition $v_{1} > \int_{0}^{T} e^{- r t} \left(\bar{c}(t) - y(t)\right) dt$ is necessary. 
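Since the right-hand side of the budget identity is continuous and strictly decreasing in $\lambda_{1}$, the multiplier can in practice be obtained by a bracketing root search. The following minimal Python sketch illustrates this; the market parameters and the stand-ins for $a(t)$, $b(t)$, $\bar{c}(t)$ and $y(t)$ are illustrative placeholders only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Placeholder inputs (illustrative only)
r, beta, gamma, T = 0.005, 0.03, 0.225, 40.0
a = lambda t: 1.0
b = lambda t: -2.0 + 0.01 * t            # stand-in for the time-varying risk aversion
cbar = lambda t: 15_000.0
y = lambda t: 26_000.0

def integrand(t, lam1):
    # integrand of the budget identity above
    expo = (beta - b(t) * (r - 0.5 * gamma**2 / (b(t) - 1.0))) * t
    return (1.0 - b(t)) * (np.exp(expo) / a(t)) ** (1.0 / (b(t) - 1.0)) * lam1 ** (1.0 / (b(t) - 1.0))

def budget(lam1):
    risky_leg = quad(lambda t: integrand(t, lam1), 0.0, T)[0]
    F1_0 = quad(lambda t: np.exp(-r * t) * (cbar(t) - y(t)), 0.0, T)[0]
    return risky_leg + F1_0

def solve_lambda1(v1):
    # budget(lam1) - v1 is strictly decreasing in lam1; bracket widely and apply Brent's method
    return brentq(lambda lam: budget(lam) - v1, 1e-12, 1e12)

lam1 = solve_lambda1(v1=250_000.0)
print(lam1, budget(lam1))
```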
The optimal wealth process $V_{1}(t ; v_{1})$ which arises by applying $c_{1}(t ; v_{1})$ is $$\begin{aligned} V_{1}(t ; v_{1}) = {} & {\mathbb{E}}\left[\int_{t}^{T} \frac{\tilde{Z}(s)}{\tilde{Z}(t)} \left(c_{1}(s;v_{1}) - y(s)\right) ds \Big| \mathcal{F}_{t}\right] \\ = {} & \frac{1}{\tilde{Z}(t)} {\mathbb{E}}\left[\int_{t}^{T} \tilde{Z}(s) \left\{(1-b(s)) \left(\lambda_{1} \frac{e^{\beta s}}{a(s)} \tilde{Z}(s)\right)^{\frac{1}{b(s)-1}} + \bar{c}(s) - y(s)\right\} ds \Big| \mathcal{F}_{t}\right] \\ = {} & \frac{1}{\tilde{Z}(t)} \left\{{\mathbb{E}}\left[\int_{t}^{T} (1-b(s)) \left(\lambda_{1} \frac{e^{\beta s}}{a(s)}\right)^{\frac{1}{b(s)-1}} \tilde{Z}(s)^{\frac{b(s)}{b(s)-1}} ds \Big| \mathcal{F}_{t}\right] + {\mathbb{E}}\left[\int_{t}^{T} \tilde{Z}(s) \left(\bar{c}(s) - y(s)\right) ds \Big| \mathcal{F}_{t}\right]\right\} \displaybreak \\ = {} & \frac{1}{\tilde{Z}(t)} \left\{\int_{t}^{T} (1-b(s)) \left(\lambda_{1} \frac{e^{\beta s}}{a(s)}\right)^{\frac{1}{b(s)-1}} {\mathbb{E}}\left[\tilde{Z}(s)^{\frac{b(s)}{b(s)-1}} \Big| \mathcal{F}_{t}\right] ds + \int_{t}^{T} \left(\bar{c}(s) - y(s)\right) {\mathbb{E}}\left[\tilde{Z}(s) \Big| \mathcal{F}_{t}\right] ds\right\}.\end{aligned}$$ $\tilde{Z}(s)$ can be written as $\frac{\tilde{Z}(s)}{\tilde{Z}(t)} \tilde{Z}(t)$ where $\frac{\tilde{Z}(s)}{\tilde{Z}(t)}$ is independent of $\mathcal{F}_{t}$ and $\tilde{Z}(t)$ is $\mathcal{F}_{t}$-measurable. Therefore it follows $$\begin{aligned} {\mathbb{E}}\left[\tilde{Z}(s) \Big| \mathcal{F}_{t}\right] = {} & \tilde{Z}(t) {\mathbb{E}}\left[\frac{\tilde{Z}(s)}{\tilde{Z}(t)}\right] = \tilde{Z}(t) e^{- \left(r + \frac{1}{2}\|\gamma\|^{2}\right) (s-t) + \frac{1}{2} \|\gamma\|^{2} (s-t)} = \tilde{Z}(t) e^{- r (s-t)}, \\ {\mathbb{E}}\left[\tilde{Z}(s)^{\eta} \Big| \mathcal{F}_{t}\right] = {} & \tilde{Z}(t)^{\eta} {\mathbb{E}}\left[\left(\frac{\tilde{Z}(s)}{\tilde{Z}(t)}\right)^{\eta}\right] = \tilde{Z}(t)^{\eta} e^{- \eta \left(r + \frac{1}{2}\|\gamma\|^{2}\right) (s-t) + \frac{1}{2} \eta^{2} \|\gamma\|^{2} (s-t)} = \tilde{Z}(t)^{\eta} e^{- \eta \left(r - \frac{1}{2} (\eta - 1) \|\gamma\|^{2}\right) (s-t)}\end{aligned}$$ for any $\eta \in {\mathbb{R}}$, where we used that $\frac{\tilde{Z}(s)}{\tilde{Z}(t)}$ and thus $\left(\frac{\tilde{Z}(s)}{\tilde{Z}(t)}\right)^{\eta}$ are log-normally distributed. Define the function $g$ by $$\begin{aligned} g(s,t; v_{1}) = (1-b(s)) \left(\frac{e^{\beta s - b(s) \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) (s-t)}}{a(s)}\right)^{\frac{1}{b(s)-1}} \lambda_{1}^{\frac{1}{b(s)-1}},\end{aligned}$$ then the optimal wealth process is given by $$\begin{aligned} \label{eq:V1:ConsumptionProblem} V_{1}(t ; v_{1}) = \int_{t}^{T} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds + F_{1}(t)\end{aligned}$$ with $F_{1}(t)$ defined in . 
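Given a multiplier $\lambda_{1}$ and a realized value of the pricing kernel $\tilde{Z}(t)$, the wealth formula above can be evaluated by straightforward quadrature. The sketch below uses illustrative placeholder inputs and takes $F_{1}(t) = \int_{t}^{T} e^{-r(s-t)}\left(\bar{c}(s) - y(s)\right) ds$, consistent with the expression for $F_{1}(0)$ used above (an assumption on the omitted definition of $F_{1}(t)$).

```python
import numpy as np
from scipy.integrate import quad

# Placeholder inputs (illustrative only); lam1 would come from the budget equation
r, beta, gamma, T, lam1 = 0.005, 0.03, 0.225, 40.0, 0.5
a = lambda s: 1.0
b = lambda s: -2.0 + 0.01 * s
cbar = lambda s: 15_000.0
y = lambda s: 26_000.0

def g(s, t):
    # g(s, t; v1) as defined above
    expo = beta * s - b(s) * (r - 0.5 * gamma**2 / (b(s) - 1.0)) * (s - t)
    return (1.0 - b(s)) * (np.exp(expo) / a(s)) ** (1.0 / (b(s) - 1.0)) * lam1 ** (1.0 / (b(s) - 1.0))

def V1(t, Z_t):
    risky_leg = quad(lambda s: g(s, t) * Z_t ** (1.0 / (b(s) - 1.0)), t, T)[0]
    F1_t = quad(lambda s: np.exp(-r * (s - t)) * (cbar(s) - y(s)), t, T)[0]
    return risky_leg + F1_t

print(V1(10.0, 0.8))   # wealth at t = 10 for a realized pricing-kernel value of 0.8
```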
The dynamics can be calculated as $$\begin{aligned} d V_{1}(t ; v_{1}) = {} & \left(- g(t,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} dt + \int_{t}^{T} d_{t} \left(g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}}\right) ds\right) \\ & + \left(- \left(\bar{c}(t) - y(t)\right) dt + \int_{t}^{T} d_{t} \left(e^{- r (s-t)} \left(\bar{c}(s) - y(s)\right)\right) ds\right) \\ = {} & - g(t,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} dt + \int_{t}^{T} d_{t} \left(g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}}\right) ds - \left(\bar{c}(t) - y(t)\right) dt \\ & + \left(\int_{t}^{T} r e^{- r (s-t)} \left(\bar{c}(s) - y(s)\right) ds\right) dt \\ = {} & \left(- g(t,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} - \left(\bar{c}(t) - y(t)\right) + \int_{t}^{T} r e^{- r (s-t)} \left(\bar{c}(s) - y(s)\right) ds\right) dt \\ & + \int_{t}^{T} d_{t} \left(g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}}\right) ds.\end{aligned}$$ Notice that by Itô’s formula, $$\begin{aligned} d \left(\tilde{Z}(t)^{\frac{1}{b(s)-1}}\right) = {} & \frac{1}{b(s)-1} \tilde{Z}(t)^{\frac{1}{b(s)-1} - 1} d \tilde{Z}(t) + \frac{1}{2} \frac{1}{b(s)-1} \left(\frac{1}{b(s)-1} - 1\right) \tilde{Z}(t)^{\frac{1}{b(s)-1} - 2} \tilde{Z}(t)^{2} \|\gamma\|^{2} dt \\ = {} & \tilde{Z}(t)^{\frac{1}{b(s)-1}} \left\{\left[- \frac{1}{b(s)-1} r + \frac{1}{2} \frac{1}{b(s)-1} \left(\frac{1}{b(s)-1} - 1\right) \|\gamma\|^{2}\right] dt - \frac{1}{b(s)-1} \gamma' dW(t)\right\}.\end{aligned}$$ Moreover, it holds $$\begin{aligned} d_{t} g(s,t; v_{1}) = {} & (1-b(s)) \left(\frac{e^{\beta s - b(s) \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) s}}{a(s)}\right)^{\frac{1}{b(s)-1}} \lambda_{1}^{\frac{1}{b(s)-1}} d_{t} \left(e^{\frac{b(s)}{b(s)-1} \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) t}\right) \\ = {} & (1-b(s)) \left(\frac{e^{\beta s - b(s) \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) s}}{a(s)}\right)^{\frac{1}{b(s)-1}} \lambda_{1}^{\frac{1}{b(s)-1}} \\ & \times \frac{b(s)}{b(s)-1} \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) e^{\frac{b(s)}{b(s)-1} \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) t} dt \\ = {} & \frac{b(s)}{b(s)-1} \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) (1-b(s)) \left(\frac{e^{\beta s - b(s) \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) (s-t)}}{a(s)}\right)^{\frac{1}{b(s)-1}} \lambda_{1}^{\frac{1}{b(s)-1}} dt \\ = {} & \frac{b(s)}{b(s)-1} \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) g(s,t; v_{1}) dt.\end{aligned}$$ With this we obtain $$\begin{aligned} d_{t} \left(g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}}\right) = {} & g(s,t; v_{1}) d \left(\tilde{Z}(t)^{\frac{1}{b(s)-1}}\right) + \tilde{Z}(t)^{\frac{1}{b(s)-1}} d_{t} g(s,t; v_{1}) + 0 \\ = {} & g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} \\ & \times \left\{\left[- \frac{1}{b(s)-1} r + \frac{1}{2} \frac{1}{b(s)-1} \left(\frac{1}{b(s)-1} - 1\right) \|\gamma\|^{2}\right] dt - \frac{1}{b(s)-1} \gamma' dW(t)\right\} \\ & + \tilde{Z}(t)^{\frac{1}{b(s)-1}} \frac{b(s)}{b(s)-1} \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) g(s,t; v_{1}) dt \\ = {} & g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} \left\{\left(r - \frac{1}{b(s)-1} \|\gamma\|^{2}\right) dt - \frac{1}{b(s)-1} \gamma' dW(t)\right\}.\end{aligned}$$ Define $$\begin{aligned} Y(t) = \int_{t}^{T} \frac{1}{b(s)-1} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds.\end{aligned}$$ In summary, the dynamics of the optimal wealth process is then given by $$\begin{aligned} d V_{1}(t ; v_{1}) = {} & \left(- g(t,t; 
v_{1}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} - \left(\bar{c}(t) - y(t)\right) + \int_{t}^{T} r e^{- r (s-t)} \left(\bar{c}(s) - y(s)\right) ds\right) dt \nonumber \\ & + \int_{t}^{T} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} \left\{\left(r - \frac{1}{b(s)-1} \|\gamma\|^{2}\right) dt - \frac{1}{b(s)-1} \gamma' dW(t)\right\} ds \nonumber \\ = {} & \Bigg(- g(t,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} - \left(\bar{c}(t) - y(t)\right) + \int_{t}^{T} r e^{- r (s-t)} \left(\bar{c}(s) - y(s)\right) ds \nonumber \\ & + \int_{t}^{T} \left(r - \frac{1}{b(s)-1} \|\gamma\|^{2}\right) g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds\Bigg) dt \nonumber \\ & - \underbrace{\left(\int_{t}^{T} \frac{1}{b(s)-1} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds\right)}_{= Y(t)} \gamma' dW(t) \nonumber \\ = {} & \Bigg\{r \underbrace{\left(\int_{t}^{T} e^{- r (s-t)} \left(\bar{c}(s) - y(s)\right) ds + \int_{t}^{T} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds\right)}_{= V_{1}(t ; v_{1})} \nonumber \\ & - g(t,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} - \left(\bar{c}(t) - y(t)\right) - \|\gamma\|^{2} \underbrace{\int_{t}^{T} \frac{1}{b(s)-1} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds}_{= Y(t)}\Bigg\} dt \nonumber \\ & - Y(t) \gamma' dW(t) \nonumber \\ = {} & \left(r V_{1}(t ; v_{1}) - g(t,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} - \left(\bar{c}(t) - y(t)\right) - \|\gamma\|^{2} Y(t)\right) dt - Y(t) \gamma' dW(t) \nonumber \\ = {} & \mu_{V_{1}}(t) dt - Y(t) \gamma' dW(t) \label{eq:SDE:OptimalV:ConsumptionOnly}\end{aligned}$$ with drift $$\begin{aligned} \mu_{V_{1}}(t) = {} & r V_{1}(t ; v_{1}) - g(t,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} - \bar{c}(t) + y(t) - \|\gamma\|^{2} Y(t).\end{aligned}$$ By it follows $$\begin{aligned} c_{1}(t;v_{1}) = (1-b(t)) \left(\lambda_{1} \frac{e^{\beta t}}{a(t)} \tilde{Z}(t)\right)^{\frac{1}{b(t)-1}} + \bar{c}(t) = g(t,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} + \bar{c}(t).\end{aligned}$$ Hence $$\begin{aligned} \mu_{V_{1}}(t) = {} & r V_{1}(t ; v_{1}) - c_{1}(t;v_{1}) + y(t) - \|\gamma\|^{2} Y(t).\end{aligned}$$ In order to determine the optimal investment strategy $\pi_{1}(t ; v_{1})$ to Problem we compare the optimal wealth dynamics in and : $$\begin{aligned} d V_{1}(t ; v_{1}) = {} & V_{1}(t ; v_{1}) \left[\left(r + \hat{\pi}_{1}(t ; v_{1})' \left(\mu - r \mathbf{1}\right)\right) dt + \hat{\pi}_{1}(t ; v_{1})'\sigma dW(t)\right] - c_{1}(t;v_{1}) dt + y(t) dt, \\ d V_{1}(t ; v_{1}) = {} & \left(r V_{1}(t ; v_{1}) - c_{1}(t;v_{1}) + y(t) - \|\gamma\|^{2} Y(t)\right) dt - Y(t) \gamma' dW(t).\end{aligned}$$ Matching the diffusion terms yields the equality $$\begin{aligned} \hat{\pi}_{1}(t ; v_{1}) = - \frac{Y(t)}{V_{1}(t ; v_{1})} \Sigma^{-1} (\mu - r \mathbf{1})\end{aligned}$$ which simultaneously matches the drift terms. 
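For completeness, the claim that this choice of $\hat{\pi}_{1}$ also matches the drift terms can be verified directly. Assuming, as is standard in this market setting, that $\gamma = \sigma^{-1}(\mu - r \mathbf{1})$ and $\Sigma = \sigma \sigma'$, one has $$\begin{aligned} V_{1}(t ; v_{1})\, \hat{\pi}_{1}(t ; v_{1})' (\mu - r \mathbf{1}) = - Y(t)\, (\mu - r \mathbf{1})' \Sigma^{-1} (\mu - r \mathbf{1}) = - Y(t)\, \|\gamma\|^{2},\end{aligned}$$ so that the drift of the self-financing wealth dynamics equals $r V_{1}(t ; v_{1}) - \|\gamma\|^{2} Y(t) - c_{1}(t;v_{1}) + y(t) = \mu_{V_{1}}(t)$, which is exactly the drift obtained above.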
By the first mean value theorem for integrals[^2] it furthermore follows that there exists $\tilde{t}_{1} \in (t,T)$ such that $$\begin{aligned} Y(t) = {} & \int_{t}^{T} \frac{1}{b(s)-1} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds = \frac{1}{b(\tilde{t}_{1})-1} \int_{t}^{T} g(s,t; v_{1}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds \\ \stackrel{\eqref{eq:V1:ConsumptionProblem}}{=} {} & \frac{1}{b(\tilde{t}_{1})-1} \left(V_{1}(t ; v_{1}) - F_{1}(t)\right).\end{aligned}$$ This determines the optimal investment strategy to be $$\begin{aligned} \hat{\pi}_{1}(t ; v_{1}) = \frac{1}{1 - b(\tilde{t}_{1})} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V_{1}(t ; v_{1}) - F_{1}(t)}{V_{1}(t ; v_{1})}.\end{aligned}$$ Firstly, the value function of this problem is $$\begin{aligned} \mathcal{V}_{1}(v_{1}) = {} & {\mathbb{E}}\left[\int_{0}^{T} U_{1}(t,c_{1}(t;v_{1})) dt\right] = {\mathbb{E}}\left[\int_{0}^{T} e^{- \beta t} \frac{1-b(t)}{b(t)} a(t) \left(\frac{1}{1-b(t)} \left(c_{1}(t;v_{1}) - \bar{c}(t)\right)\right)^{b(t)} dt\right] \\ = {} & {\mathbb{E}}\left[\int_{0}^{T} e^{- \beta t} \frac{1-b(t)}{b(t)} a(t) \left(\lambda_{1} \frac{e^{\beta t}}{a(t)} \tilde{Z}(t)\right)^{\frac{b(t)}{b(t)-1}} dt\right] \\ = {} & \int_{0}^{T} e^{- \beta t} \frac{1-b(t)}{b(t)} a(t) \left(\lambda_{1} \frac{e^{\beta t}}{a(t)}\right)^{\frac{b(t)}{b(t)-1}} {\mathbb{E}}\left[\tilde{Z}(t)^{\frac{b(t)}{b(t)-1}}\right] dt \\ = {} & \int_{0}^{T} \frac{1-b(t)}{b(t)} \left(\frac{e^{\beta t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{\frac{b(t)}{b(t)-1}} {\mathbb{E}}\left[\tilde{Z}(t)^{\frac{b(t)}{b(t)-1}}\right] dt \\ = {} & \int_{0}^{T} \frac{1-b(t)}{b(t)} \left(\frac{e^{\beta t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{\frac{b(t)}{b(t)-1}} e^{- \frac{b(t)}{b(t)-1} \left(r + \frac{1}{2}\|\gamma\|^{2}\right) t + \frac{1}{2} \left(\frac{b(t)}{b(t)-1}\right)^{2} \|\gamma\|^{2} t} dt \displaybreak \\ = {} & \int_{0}^{T} \frac{1-b(t)}{b(t)} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{\frac{b(t)}{b(t)-1}} dt,\end{aligned}$$ where $\lambda_{1}$ is subject to . From differentiating both sides of Equation with respect to $v_{1}$ we derive $$\begin{aligned} 1 = {} & \frac{\partial}{\partial v_{1}} \int_{0}^{T} (1-b(t)) \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{\frac{1}{b(t)-1}} dt \nonumber \\ = {} & \int_{0}^{T} (1-b(t)) \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \frac{\partial}{\partial v_{1}} \left(\lambda_{1}^{\frac{1}{b(t)-1}}\right) dt. 
\label{eq:Lagrange:ConsumptionOnly:helpderivative}\end{aligned}$$ This helps to identify $\mathcal{V}_{1}^{\prime}(v_{1})$ to be $$\begin{aligned} \mathcal{V}_{1}^{\prime}(v_{1}) = {} & \frac{\partial}{\partial v_{1}} \int_{0}^{T} \frac{1-b(t)}{b(t)} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{\frac{b(t)}{b(t)-1}} dt \\ = {} & \int_{0}^{T} \frac{1-b(t)}{b(t)} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \frac{\partial}{\partial v_{1}} \left(\lambda_{1}^{\frac{b(t)}{b(t)-1}}\right) dt \\ = {} & \int_{0}^{T} \frac{1-b(t)}{b(t)} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \frac{\partial}{\partial v_{1}} \left(\left(\lambda_{1}^{\frac{1}{b(t)-1}}\right)^{b(t)}\right) dt \\ = {} & \int_{0}^{T} \frac{1-b(t)}{b(t)} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} b(t) \left(\lambda_{1}^{\frac{1}{b(t)-1}}\right)^{b(t)-1} \frac{\partial}{\partial v_{1}} \left(\lambda_{1}^{\frac{1}{b(t)-1}}\right) dt \\ = {} & \lambda_{1} \int_{0}^{T} (1-b(t)) \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \frac{\partial}{\partial v_{1}} \left(\lambda_{1}^{\frac{1}{b(t)-1}}\right) dt \stackrel{\eqref{eq:Lagrange:ConsumptionOnly:helpderivative}}{=} \lambda_{1}.\end{aligned}$$ further implies concavity of $\mathcal{V}_{1}(v_{1})$ as $$\begin{aligned} 1 = {} & \int_{0}^{T} (1-b(t)) \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \frac{\partial}{\partial v_{1}} \left(\lambda_{1}^{\frac{1}{b(t)-1}}\right) dt \\ = {} & \int_{0}^{T} (1-b(t)) \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \frac{1}{b(t)-1} \lambda_{1}^{\frac{1}{b(t)-1} - 1} \lambda_{1}^{\prime} dt \\ = {} & - \lambda_{1}^{\prime} \int_{0}^{T} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{- \frac{b(t)-2}{b(t)-1}} dt\end{aligned}$$ and thus $$\begin{aligned} \mathcal{V}_{1}^{\prime\prime}(v_{1}) = \lambda_{1}^{\prime} = - \left(\int_{0}^{T} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{a(t)}\right)^{\frac{1}{b(t)-1}} \lambda_{1}^{- \frac{b(t)-2}{b(t)-1}} dt\right)^{-1} < 0.\end{aligned}$$ The terminal wealth problem {#app:ProofsTerminalWealthProblem} --------------------------- The Lagrangian of the Problem subject to is $$\begin{aligned} \mathcal{L}(V,\lambda_{2}) = {} & {\mathbb{E}}\left[U_{2}(V)\right] - \lambda_{2} \left({\mathbb{E}}\left[\tilde{Z}(T) V\right] - v_{2}\right) = {\mathbb{E}}\left[U_{2}(V) - \lambda_{2} \left(\tilde{Z}(T) V - v_{2}\right)\right].\end{aligned}$$ First of all, it is clear that ${c_{2}(t ; v_{2}) \equiv 0}$. 
By the structure of the utility function, the optimal $V_{2}$ fulfills $V_{2}(T;v_{2}) > F$ and thus the first order conditions involve existence of a Lagrange multiplier $\lambda_{2} = \lambda_{2}(v_{2}) > 0$ such that the optimal $V_{2}$ maximizes $\mathcal{L}(V,\lambda_{2})$ and such that complementary slackness holds true. Hence it can be shown that the Karush-Kuhn-Tucker conditions besides the first derivative condition are satisfied. By the dominated convergence theorem, the first order condition with respect to the directional derivative gives $$\begin{aligned} 0 = {} & {\mathbb{E}}\left[\left(\frac{\partial}{\partial V} U_{2}(V) - \lambda_{2} \tilde{Z}(T)\right) h\right] = {\mathbb{E}}\left[\left(e^{- \beta T} \hat{a} \left(\frac{1}{1-\hat{b}} (V-F)\right)^{\hat{b}-1} - \lambda_{2} \tilde{Z}(T)\right) h\right],\end{aligned}$$ which has to be satisfied for all suitable $h$; hence the optimal terminal wealth has to fulfill $$\begin{aligned} \label{eq:Solution:V2T:TerminalWealthOnly} V_{2}(T ; v_{2}) = (1-\hat{b}) \left(\lambda_{2} \frac{e^{\beta T}}{\hat{a}} \tilde{Z}(T)\right)^{\frac{1}{\hat{b}-1}} + F.\end{aligned}$$ Since $U_{2}(V)$ strictly increases in $V$, complementary slackness implies equality for the budget constraint $$\begin{aligned} {\mathbb{E}}\left[\tilde{Z}(T) V_{2}(T ; v_{2})\right] = v_{2}.\end{aligned}$$ Using and Fubini this gives $$\begin{aligned} v_{2} = {} & {\mathbb{E}}\left[\tilde{Z}(T) \left((1-\hat{b}) \left(\lambda_{2} \frac{e^{\beta T}}{\hat{a}} \tilde{Z}(T)\right)^{\frac{1}{\hat{b}-1}} + F\right)\right] = (1-\hat{b}) \left(\lambda_{2} \frac{e^{\beta T}}{\hat{a}}\right)^{\frac{1}{\hat{b}-1}} {\mathbb{E}}\left[\tilde{Z}(T)^{\frac{\hat{b}}{\hat{b}-1}}\right] + F {\mathbb{E}}\left[\tilde{Z}(T)\right] \\ = {} & (1-\hat{b}) \left(\lambda_{2} \frac{e^{\beta T}}{\hat{a}}\right)^{\frac{1}{\hat{b}-1}} e^{- \frac{\hat{b}}{\hat{b}-1} \left(r + \frac{1}{2}\|\gamma\|^{2}\right) T + \frac{1}{2} \left(\frac{\hat{b}}{\hat{b}-1}\right)^{2} \|\gamma\|^{2} T} + F e^{- \left(r + \frac{1}{2}\|\gamma\|^{2}\right) T + \frac{1}{2} \|\gamma\|^{2} T} \\ = {} & (1-\hat{b}) \left(\frac{e^{\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T}}{\hat{a}}\right)^{\frac{1}{\hat{b}-1}} \lambda_{2}^{\frac{1}{\hat{b}-1}} + e^{- r T} F \displaybreak \\ = {} & (1-\hat{b}) \left(\frac{e^{\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T}}{\hat{a}}\right)^{\frac{1}{\hat{b}-1}} \lambda_{2}^{\frac{1}{\hat{b}-1}} + F_{2}(0).\end{aligned}$$ Solving for $\lambda_{2}$ yields $$\begin{aligned} \label{eq:Lagrange:TerminalWealthOnly} \lambda_{2} = \left(\frac{v_{2} - F_{2}(0)}{(1-\hat{b}) \left(\frac{e^{\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T}}{\hat{a}}\right)^{\frac{1}{\hat{b}-1}}}\right)^{\hat{b}-1} = e^{-\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \left(1-\hat{b}\right)^{1-\hat{b}} \hat{a} \left(v_{2} - F_{2}(0)\right)^{\hat{b}-1}\end{aligned}$$ where ${v_{2} > F_{2}(0) = e^{- r T} F}$ in is required. 
Plugging this back into , the optimal terminal wealth is $$\begin{aligned} V_{2}(T ; v_{2}) = {} & (1-\hat{b}) \left(\frac{e^{\beta T}}{\hat{a}} \tilde{Z}(T)\right)^{\frac{1}{\hat{b}-1}} \lambda_{2}^{\frac{1}{\hat{b}-1}} + F \nonumber \\ = {} & (1-\hat{b}) \left(\frac{e^{\beta T}}{\hat{a}} \tilde{Z}(T)\right)^{\frac{1}{\hat{b}-1}} \left(\frac{v_{2} - F_{2}(0)}{(1-\hat{b}) \left(\frac{e^{\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T}}{\hat{a}}\right)^{\frac{1}{\hat{b}-1}}}\right) + F \nonumber \\ = {} & \left(v_{2} - F_{2}(0)\right) \left(e^{\hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} \tilde{Z}(T)\right)^{\frac{1}{\hat{b}-1}} + F \nonumber \\ = {} & \left(v_{2} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} \tilde{Z}(T)^{\frac{1}{\hat{b}-1}} + F. \label{eq:Solution:V2T:TerminalWealthOnly:lambdaInserted}\end{aligned}$$ The optimal wealth process replicates $V_{2}(T ; v_{2})$ and is uniquely given by $$\begin{aligned} V_{2}(t ; v_{2}) = {} & {\mathbb{E}}\left[\frac{\tilde{Z}(T)}{\tilde{Z}(t)} V_{2}(T ; v_{2}) \Big| \mathcal{F}_{t}\right] = \frac{1}{\tilde{Z}(t)} {\mathbb{E}}\left[\tilde{Z}(T) \left\{\left(v_{2} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} \tilde{Z}(T)^{\frac{1}{\hat{b}-1}} + F\right\} \Big| \mathcal{F}_{t}\right] \\ = {} & \frac{1}{\tilde{Z}(t)} \left\{\left(v_{2} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} {\mathbb{E}}\left[\tilde{Z}(T)^{\frac{\hat{b}}{\hat{b}-1}} \Big| \mathcal{F}_{t}\right] + F {\mathbb{E}}\left[\tilde{Z}(T) \Big| \mathcal{F}_{t}\right]\right\} \\ = {} & \frac{1}{\tilde{Z}(t)} \left\{\left(v_{2} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} \tilde{Z}(t)^{\frac{\hat{b}}{\hat{b}-1}} e^{- \frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) (T-t)} + F \tilde{Z}(t) e^{- r (T-t)}\right\}.\end{aligned}$$ This finally gives $$\begin{aligned} \label{eq:V2:TerminalWealthProblem} V_{2}(t ; v_{2}) = \left(v_{2} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} + F_{2}(t)\end{aligned}$$ with $F_{2}(t)$ defined in . 
Recall that $$\begin{aligned} d \left(\tilde{Z}(t)^{\frac{1}{\hat{b}-1}}\right) = \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left\{\left[- \frac{1}{\hat{b}-1} r + \frac{1}{2} \frac{1}{\hat{b}-1} \left(\frac{1}{\hat{b}-1} - 1\right) \|\gamma\|^{2}\right] dt - \frac{1}{\hat{b}-1} \gamma' dW(t)\right\}.\end{aligned}$$ It follows by Itô $$\begin{aligned} d & \left(e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}}\right) \\ & = e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} d \left(\tilde{Z}(t)^{\frac{1}{\hat{b}-1}}\right) + \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} d \left(e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t}\right) + 0 \\ & = e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left\{\left[- \frac{1}{\hat{b}-1} r + \frac{1}{2} \frac{1}{\hat{b}-1} \left(\frac{1}{\hat{b}-1} - 1\right) \|\gamma\|^{2}\right] dt - \frac{1}{\hat{b}-1} \gamma' dW(t)\right\} \\ & \quad + \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} dt \\ & = e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \\ & \quad \times \left\{\left[- \frac{1}{\hat{b}-1} r + \frac{1}{2} \frac{1}{\hat{b}-1} \left(\frac{1}{\hat{b}-1} - 1\right) \|\gamma\|^{2}\right] dt - \frac{1}{\hat{b}-1} \gamma' dW(t) + \frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) dt\right\} \\ & = e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left\{\frac{1}{\hat{b}-1}\left[(\hat{b}-1) r + \frac{1}{2} \left(\frac{1}{\hat{b}-1} - 1 - \frac{\hat{b}}{\hat{b}-1}\right) \|\gamma\|^{2}\right] dt - \frac{1}{\hat{b}-1} \gamma' dW(t)\right\} \\ & = e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left\{\left(r - \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) dt - \frac{1}{\hat{b}-1} \gamma' dW(t)\right\}.\end{aligned}$$ Then the optimal wealth dynamics can be calculated as $$\begin{aligned} d V_{2}(t ; v_{2}) = {} & \left(v_{2} - F_{2}(0)\right) d \left(e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}}\right) + r F_{2}(t) dt \\ = {} & \underbrace{\left(v_{2} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}}}_{\stackrel{\eqref{eq:V2:TerminalWealthProblem}}{=} V_{2}(t ; v_{2}) - F_{2}(t)} \left\{\left(r - \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) dt - \frac{1}{\hat{b}-1} \gamma' dW(t)\right\} + r F_{2}(t) dt \\ = {} & r V_{2}(t ; v_{2}) dt + \left(V_{2}(t ; v_{2}) - F_{2}(t)\right) \left\{- \frac{1}{\hat{b}-1} \|\gamma\|^{2} dt - \frac{1}{\hat{b}-1} \gamma' dW(t)\right\}.\end{aligned}$$ Comparing the diffusion term with the one from for $y(t) \equiv 0$ implies $$\begin{aligned} \hat{\pi}_{2}(t ; v_{2}) = \frac{1}{1-\hat{b}} \Sigma^{-1} (\mu - r \mathbf{1}) \frac{V_{2}(t ; v_{2}) - F_{2}(t)}{V_{2}(t ; v_{2})}\end{aligned}$$ which automatically matches the drifts iff ${c_{2}(t ; v_{2}) \equiv 0}$. 
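As a further check on the feedback rule $\hat{\pi}_{2}$, one can discretise the wealth SDE with an Euler–Maruyama scheme, apply $\hat{\pi}_{2}$ with zero consumption along the simulated path, and compare the resulting terminal wealth with the closed-form expression driven by $\tilde{Z}(T)$ built from the same Brownian increments. A minimal sketch with a single risky asset follows; all numerical values are assumptions, and agreement holds only up to discretisation error.

```python
# Euler-Maruyama consistency check for the terminal-wealth strategy (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
r, beta, T = 0.02, 0.03, 10.0
mu, sigma = np.array([0.06]), np.array([[0.2]])      # one risky asset (assumed values)
Sigma = sigma @ sigma.T
gamma = np.linalg.solve(sigma, mu - r)               # market price of risk
g2 = float(gamma @ gamma)
a_hat, b_hat, F, v2 = 1.0, -1.0, 50.0, 100.0
F2 = lambda t: np.exp(-r * (T - t)) * F              # F_2(t) = e^{-r(T-t)} F

n_steps = 4_000
dt = T / n_steps
V, Z = v2, 1.0                                       # wealth and pricing kernel Z~(t)
for k in range(n_steps):
    t = k * dt
    dW = rng.standard_normal(1) * np.sqrt(dt)
    pi = (1.0 / (1.0 - b_hat)) * np.linalg.solve(Sigma, mu - r) * (V - F2(t)) / V
    V += V * (r + float(pi @ (mu - r))) * dt + V * float(pi @ sigma @ dW)
    Z += Z * (-r * dt - float(gamma @ dW))           # dZ~ = -Z~ (r dt + gamma' dW)

V_closed = ((v2 - F2(0.0))
            * np.exp(b_hat / (b_hat - 1.0) * (r - 0.5 * g2 / (b_hat - 1.0)) * T)
            * Z ** (1.0 / (b_hat - 1.0)) + F)
print("Euler-Maruyama V_2(T):", V, "   closed form on the same path:", V_closed)
```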
The value function of this problem is given by $$\begin{aligned} \mathcal{V}_{2}(v_{2}) = {} & {\mathbb{E}}\left[U_{2}(V_{2}(T ; v_{2}))\right] = {\mathbb{E}}\left[U_{2}\left(\left(v_{2} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} \tilde{Z}(T)^{\frac{1}{\hat{b}-1}} + F\right)\right] \\ = {} & e^{- \beta T} \frac{1-\hat{b}}{\hat{b}} \hat{a} \left(\frac{1}{1-\hat{b}}\right)^{\hat{b}} \left(v_{2} - F_{2}(0)\right)^{\hat{b}} e^{\frac{\hat{b}^{2}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} {\mathbb{E}}\left[\tilde{Z}(T)^{\frac{\hat{b}}{\hat{b}-1}}\right] \\ = {} & e^{- \beta T} \frac{\left(1-\hat{b}\right)^{1-\hat{b}}}{\hat{b}} \hat{a} \left(v_{2} - F_{2}(0)\right)^{\hat{b}} e^{\frac{\hat{b}^{2}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} e^{- \frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} \displaybreak \\ = {} & e^{\left[- \beta + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \frac{\left(1-\hat{b}\right)^{1-\hat{b}}}{\hat{b}} \hat{a} \left(v_{2} - F_{2}(0)\right)^{\hat{b}}.\end{aligned}$$ This implies $$\begin{aligned} \mathcal{V}_{2}^{\prime}(v_{2}) = {} & e^{\left[- \beta + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \frac{\left(1-\hat{b}\right)^{1-\hat{b}}}{\hat{b}} \hat{a} \hat{b} \left(v_{2} - F_{2}(0)\right)^{\hat{b}-1} \\ = {} & e^{\left[- \beta + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \left(1-\hat{b}\right)^{1-\hat{b}} \hat{a} \left(v_{2} - F_{2}(0)\right)^{\hat{b}-1} \\ \stackrel{\eqref{eq:Lagrange:TerminalWealthOnly}}{=} {} \lambda_{2} > 0.\end{aligned}$$ Due to the assumption ${v_{2} - F_{2}(0) > 0}$ in , it is straightforward that ${\mathcal{V}_{2}^{\prime\prime}(v_{2}) = \lambda_{2}^{\prime} < 0}$: $$\begin{aligned} \mathcal{V}_{2}^{\prime\prime}(v_{2}) = - e^{\left[- \beta + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \left(1-\hat{b}\right)^{2-\hat{b}} \hat{a} \left(v_{2} - F_{2}(0)\right)^{\hat{b}-2} < 0.\end{aligned}$$ Optimal merging of the individual solutions {#app:ProofsOptimalMerging} -------------------------------------------   1. $\mathcal{V}(v_{0}) \ge \sup_{v_{1} \ge F_{1}(0),\ v_{2} \ge F_{2}(0),\ v_{1} + v_{2} = v_{0}} \left\{\mathcal{V}_{1}(v_{1}) + \mathcal{V}_{2}(v_{2})\right\}$: Let $(\pi_{1}(t;v_{1}), c_{1}(t;v_{1}))$ and $(\pi_{2}(t;v_{2}), c_{2}(t;v_{2}))$ denote the optimal controls to Problems and with optimal wealth processes $V_{1}(t;v_{1})$ and $V_{2}(t;v_{2})$ to the initial wealths ${v_{1} \ge F_{1}(0)}$ and ${v_{2} \ge F_{2}(0)}$. Then, as the budget constraints for the optimal solutions to all three problems hold with equality, $$\begin{aligned} \mathcal{V}_{1}(v_{1}) + \mathcal{V}_{2}(v_{2}) = {} & {\mathbb{E}}\left[\int_{0}^{T} U_{1}(t,c_{1}(t;v_{1})) dt + U_{2}(V_{2}(T;v_{2}))\right] \\ \le {} & \sup_{(\pi,c) \in \Lambda} J(\pi,c;v_{0}) = \mathcal{V}(v_{0})\end{aligned}$$ for all $v_{1},v_{2}$ with ${v_{1} + v_{2} = v_{0}}$. Thus $$\begin{aligned} \mathcal{V}(v_{0}) \ge \sup_{v_{1} \ge F_{1}(0),\ v_{2} \ge F_{2}(0),\ v_{1} + v_{2} = v_{0}} \left\{\mathcal{V}_{1}(v_{1}) + \mathcal{V}_{2}(v_{2})\right\}.\end{aligned}$$ 2. 
$\mathcal{V}(v_{0}) \le \sup_{v_{1} \ge F_{1}(0),\ v_{2} \ge F_{2}(0),\ v_{1} + v_{2} = v_{0}} \left\{\mathcal{V}_{1}(v_{1}) + \mathcal{V}_{2}(v_{2})\right\}$: Let $(\pi^{\star}, c^{\star})$ denote the optimal controls which maximize $\mathcal{V}(v_{0})$ with optimal wealth process $V^{\star}$ to the initial wealth $v_{0} > 0$. Define $$\begin{aligned} v_{1} = {\mathbb{E}}\left[\int_{0}^{T} \tilde{Z}(t) \left(c^{\star}(t) - y(t)\right) dt\right],\ v_{2} = {\mathbb{E}}\left[\tilde{Z}(T) V^{\star}(T)\right].\end{aligned}$$ Then, ${v_{1} + v_{2} = v_{0}}$ and $$\begin{aligned} \mathcal{V}(v_{0}) = {} & {\mathbb{E}}\left[\int_{0}^{T} U_{1}(t,c^{\star}(t)) dt\right] + {\mathbb{E}}\left[U_{2}(V^{\star}(T))\right] \le \mathcal{V}_{1}(v_{1}) + \mathcal{V}_{2}(v_{2}).\end{aligned}$$ Hence $$\begin{aligned} \mathcal{V}(v_{0}) \le \sup_{v_{1} \ge F_{1}(0),\ v_{2} \ge F_{2}(0),\ v_{1} + v_{2} = v_{0}} \left\{\mathcal{V}_{1}(v_{1}) + \mathcal{V}_{2}(v_{2})\right\}.\end{aligned}$$ In accordance with Theorem \[thm:ConnectionValueFunctions\] and by expressing $v_{2} = v_{0} - v_{1}$, the candidate for the optimal ${v_{1}^{\star}}$ is the one that satisfies the first order derivative condition on the budget $$\begin{aligned} 0 = \frac{\partial}{\partial v_{1}} \left(\mathcal{V}_{1}(v_{1}) + \mathcal{V}_{2}(v_{0} - v_{1})\right) = \mathcal{V}_{1}^{\prime}(v_{1}) - \mathcal{V}_{2}^{\prime}(v_{0} - v_{1})\end{aligned}$$ such that ${v_{1}^{\star} \ge F_{1}(0)}$, ${v_{2}^{\star} = v_{0} - v_{1}^{\star}}$ with ${v_{2}^{\star} \ge F_{2}(0)}$; thus ${F_{1}(0) \le v_{1}^{\star} \le v_{0} - F_{2}(0)}$. Theorems \[thm:Solution:ConsumptionOnly:ValueFunction:lambda\] and \[thm:Solution:TerminalWealthOnly:ValueFunction:lambda\] tell that $\mathcal{V}_{1}(v_{1})$ and $\mathcal{V}_{2}(v_{2})$ are strictly concave functions in $v_{1}$ respectively $v_{2}$. Therefore, it follows $$\begin{aligned} 0 = \frac{\partial^{2}}{\partial v_{1}^{2}} \left(\mathcal{V}_{1}(v_{1}) + \mathcal{V}_{2}(v_{0} - v_{1})\right) = \mathcal{V}_{1}^{\prime\prime}(v_{1}) + \mathcal{V}_{2}^{\prime\prime}(v_{0} - v_{1}) < 0.\end{aligned}$$ This implies that the candidates $v_{1}^{\star}$ and ${v_{2}^{\star} = v_{0} - v_{1}^{\star}}$ are the solution when the constraint $F_{1}(0) \le v_{1}^{\star} \le v_{0} - F_{2}(0)$ applies. 
In accordance with Theorems \[thm:Solution:ConsumptionOnly:ValueFunction:lambda\] and \[thm:Solution:TerminalWealthOnly:ValueFunction:lambda\] we have $$\begin{aligned} \mathcal{V}_{1}^{\prime}(v_{1}) = {} & \lambda_{1}, \\ \mathcal{V}_{2}^{\prime}(v_{2}) = {} & \lambda_{2} = e^{-\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \left(1-\hat{b}\right)^{1-\hat{b}} \hat{a} \left(v_{2} - F_{2}(0)\right)^{\hat{b}-1}.\end{aligned}$$ By equating $\mathcal{V}_{1}^{\prime}(v_{1})$ and $\mathcal{V}_{2}^{\prime}(v_{0} - v_{1})$ we obtain $$\begin{aligned} \eqref{eq:OptimalMerging:ValueFunction} \text{ in Lemma \ref{lemma:EqualityConditionValueFunctions}} \ \Leftrightarrow\ & \mathcal{V}_{1}^{\prime}(v_{1}) = \mathcal{V}_{2}^{\prime}(v_{0} - v_{1}) \\ \Leftrightarrow\ & \lambda_{1} = \lambda_{2} = e^{-\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \left(1-\hat{b}\right)^{1-\hat{b}} \hat{a} \left(v_{0} - v_{1} - F_{2}(0)\right)^{\hat{b}-1}.\end{aligned}$$ Inserting $\lambda_{1}$ in Equation , the optimal $v_{1}^{\star}$ is the solution to $$\begin{aligned} v_{1} - \int_{0}^{T} \chi(t) \left(v_{0} - v_{1} - F_{2}(0)\right)^{\frac{\hat{b}-1}{b(t)-1}} dt = F_{1}(0),\end{aligned}$$ where the continuous function $\chi(t)$ is defined by $$\begin{aligned} \chi(t) = (1-b(t)) \left(1-\hat{b}\right)^{\frac{1-\hat{b}}{b(t)-1}} \left(\frac{\hat{a}}{a(t)}\right)^{\frac{1}{b(t)-1}} \left(\frac{e^{\left[\beta - b(t) \left(r - \frac{1}{2} \frac{1}{b(t)-1} \|\gamma\|^{2}\right)\right] t}}{e^{\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T}}\right)^{\frac{1}{b(t)-1}} > 0.\end{aligned}$$ It remains to verify ${F_{1}(0) \le v_{1}^{\star} \le v_{0} - F_{2}(0)}$ and uniqueness of $v_{1}^{\star}$. For this sake, define the function $f$ by $$\begin{aligned} f: (-\infty,v_{0} - F_{2}(0)],\ f(x) = x - \int_{0}^{T} \chi(t) \left(v_{0} - x - F_{2}(0)\right)^{\frac{\hat{b}-1}{b(t)-1}} dt - F_{1}(0).\end{aligned}$$ $v_{1}^{\star}$ is the root of the function $f$, i.e. ${f(v_{1}^{\star}) = 0}$, if it holds ${v_{1}^{\star} \ge F_{1}(0)}$. $f$ is continuous in $x$, the exponent ${\frac{\hat{b}-1}{b(t)-1}}$ within the first integral is positive. Furthermore, due to ${v_{0} > F(0)}$ claimed in and ${F(t) = F_{1}(t) + F_{2}(t)}$, we have for the limits $$\begin{aligned} \lim_{x \searrow F_{1}(0)} f(x) = {} & - \int_{0}^{T} \chi(t) \left(v_{0} - F_{1}(0) - F_{2}(0)\right)^{\frac{\hat{b}-1}{b(t)-1}} dt = - \int_{0}^{T} \chi(t) \left(v_{0} - F(0)\right)^{\frac{\hat{b}-1}{b(t)-1}} dt < 0, \\ \lim_{x \nearrow v_{0} - F_{2}(0)} f(x) = {} & v_{0} - F_{2}(0) - \int_{0}^{T} \chi(t) \left(v_{0} - \left(v_{0} - F_{2}(0)\right) - F_{2}(0)\right)^{\frac{\hat{b}-1}{b(t)-1}} dt - F_{1}(0) \\ = {} & v_{0} - F_{2}(0) - F_{1}(0) = v_{0} - F(0) > 0.\end{aligned}$$ Note, ${F_{1}(0) \le v_{1} = v_{0} - v_{2} \le v_{0} - F_{2}(0)}$ for general $v_{1}$ and $v_{2}$. Additionally, $f$ is strictly monotone increasing in $x$ since $$\begin{aligned} f^{\prime}(x) = 1 + \int_{0}^{T} \chi(t) \frac{\hat{b}-1}{b(t)-1} \left(v_{0} - x - F_{2}(0)\right)^{\frac{\hat{b}-b(t)}{b(t)-1}} dt > 0,\ \forall x \le v_{0} - F_{2}(0).\end{aligned}$$ We conclude that there exists a unique root ${x \in [F_{1}(0), v_{0} - F_{2}(0)]}$ such that ${f(x) = 0}$. Therefore, we conclude that the optimal $v_{1}^{*}$ and $v_{2}^{\star} = v_{0} - v_{1}^{*}$ exist and are unique. $v_{1}^{*}$ is the solution to Equation . 
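Because $f$ is continuous and strictly increasing with a sign change on $[F_{1}(0), v_{0}-F_{2}(0)]$, the optimal split can be obtained with any bracketing root finder. A minimal sketch using `scipy.optimize.brentq` is given below; the profiles $a(t)$, $b(t)$, the floors $F_{1}(0)$, $F_{2}(0)$ and all market parameters are illustrative assumptions.

```python
# Numerical determination of the optimal budget split v1* (illustrative sketch;
# the time-varying profiles a(t), b(t) and all other inputs are assumed values).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

r, beta, T, g2 = 0.02, 0.03, 10.0, 0.09          # g2 stands for ||gamma||^2
a = lambda t: 1.0
b = lambda t: -1.0 - 0.5 * t / T                 # keeps b(t) < 0 < 1 on [0, T]
a_hat, b_hat, F = 1.0, -1.0, 50.0
F1_0 = 20.0                                      # illustrative value for F_1(0)
F2_0 = np.exp(-r * T) * F                        # F_2(0) = e^{-rT} F
v0 = 150.0                                       # must satisfy v0 > F(0) = F1_0 + F2_0

def chi(t):
    num = np.exp((beta - b(t) * (r - 0.5 * g2 / (b(t) - 1.0))) * t)
    den = np.exp((beta - b_hat * (r - 0.5 * g2 / (b_hat - 1.0))) * T)
    return ((1.0 - b(t)) * (1.0 - b_hat) ** ((1.0 - b_hat) / (b(t) - 1.0))
            * (a_hat / a(t)) ** (1.0 / (b(t) - 1.0)) * (num / den) ** (1.0 / (b(t) - 1.0)))

def f(x):
    integrand = lambda t: chi(t) * (v0 - x - F2_0) ** ((b_hat - 1.0) / (b(t) - 1.0))
    return x - quad(integrand, 0.0, T)[0] - F1_0

# f changes sign on [F1(0), v0 - F2(0)] and is strictly increasing there,
# so Brent's method finds the unique root v1*.
v1_star = brentq(f, F1_0, v0 - F2_0 - 1e-9)
print("v1* =", v1_star, "   v2* =", v0 - v1_star)
```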
The optimal Lagrange multiplier $\lambda_{1}^{\star} = \lambda_{1}(v_{1}^{\star})$ is then given by $$\begin{aligned} \lambda_{1}^{\star} = \left(1-\hat{b}\right)^{1-\hat{b}} \hat{a} e^{-\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T} \left(v_{0} - v_{1}^{\star} - F_{2}(0)\right)^{\hat{b}-1}.\end{aligned}$$ Starting with ${V^{\star}(t; v_{0}) = V_{1}(t;v_{1}^{\star}) + V_{2}(t;v_{2}^{\star})}$ we compare the dynamics of both sides of the equation: $$\begin{aligned} \label{eq:Merging:dV:dV1+dV2} d V^{\star}(t; v_{0}) = d V_{1}(t;v_{1}^{\star}) + d V_{2}(t;v_{2}^{\star}).\end{aligned}$$ Equation for $V^{\star}(t; v_{0})$, $V_{1}(t;v_{1}^{\star})$ and $V_{2}(t;v_{2}^{\star})$, with $y(t) \equiv 0$ for $V_{2}(t;v_{2}^{\star})$, provides $$\begin{aligned} d V^{\star}(t; v_{0}) = {} & V^{\star}(t; v_{0}) \left[\left(r + \hat{\pi}^{\star}(t; v_{0})' \left(\mu - r \mathbf{1}\right)\right) dt + \hat{\pi}^{\star}(t; v_{0})'\sigma dW(t)\right] - c^{\star}(t; v_{0}) dt + y(t) dt, \\ d V_{1}(t;v_{1}^{\star}) = {} & V_{1}(t;v_{1}^{\star}) \left[\left(r + \hat{\pi}_{1}(t;v_{1}^{\star})' \left(\mu - r \mathbf{1}\right)\right) dt + \hat{\pi}_{1}(t;v_{1}^{\star})'\sigma dW(t)\right] - c_{1}(t;v_{1}^{\star}) dt + y(t) dt, \\ d V_{2}(t;v_{2}^{\star}) = {} & V_{2}(t;v_{2}^{\star}) \left[\left(r + \hat{\pi}_{2}(t;v_{2}^{\star})' \left(\mu - r \mathbf{1}\right)\right) dt + \hat{\pi}_{2}(t;v_{2}^{\star})'\sigma dW(t)\right].\end{aligned}$$ Comparing the diffusion terms in gives $$\begin{aligned} \hat{\pi}^{\star}(t; v_{0}) = \frac{\hat{\pi}_{1}(t;v_{1}^{\star}) V_{1}(t;v_{1}^{\star}) + \hat{\pi}_{2}(t;v_{2}^{\star}) V_{2}(t;v_{2}^{\star})}{V^{\star}(t; v_{0})}.\end{aligned}$$ Inserting this back and comparing the drift terms finally leads to $$\begin{aligned} c^{\star}(t; v_{0}) = c_{1}(t;v_{1}^{\star}).\end{aligned}$$ Notice that the pair ${(\hat{\pi}^{\star},c^{\star})}$ is admissible, i.e. 
${(\hat{\pi}^{\star},c^{\star}) \in \Lambda}$ because ${(\hat{\pi}_{1},c_{1}) \in \Lambda_{1}}$ and ${(\hat{\pi}_{2},0) \in \Lambda_{2}}$ which implies $$\begin{aligned} V^{\star}(t; v_{0}) = \underbrace{V_{1}(t;v_{1}^{\star})}_{\ge - \int_{t}^{T} e^{- r (s-t)} y(s) ds} + \underbrace{V_{2}(t;v_{2}^{\star})}_{\ge 0} \ge - \int_{t}^{T} e^{- r (s-t)} y(s) ds,\ {\mathbb{P}}-a.s.,\ \forall t \in [0,T].\end{aligned}$$ Using the solutions in Theorems \[thm:Solution:ConsumptionOnly\] and \[thm:Solution:TerminalWealthOnly:WithoutProbabilityConstraint\] we derive the following for the utility setup in : $$\begin{aligned} \hat{\pi}^{\star}(t ; v_{0}) = {} & \Sigma^{-1} (\mu - r \mathbf{1}) \frac{\frac{1}{1 - b(\tilde{t}_{1}^{\star})} \left(V_{1}(t ; v_{1}^{\star}) - F_{1}(t)\right) + \frac{1}{1-\hat{b}} \left(V_{2}(t ; v_{2}^{\star}) - F_{2}(t)\right)}{V^{\star}(t ; v_{0})}, \\ c^{\star}(t;v_{0}) = {} & c_{1}(t;v_{1}^{\star}) = g(t,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(t)-1}} + \bar{c}(t) = (1-b(t)) \left(\lambda_{1}^{\star} \frac{e^{\beta t}}{a(t)} \tilde{Z}(t)\right)^{\frac{1}{b(t)-1}} + \bar{c}(t), \\ V^{\star}(t ; v_{0}) = {} & V_{1}(t;v_{1}^{\star}) + V_{2}(t;v_{2}^{\star}) \\ = {} & \int_{t}^{T} g(s,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds + F_{1}(t) + \left(v_{2}^{\star} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} + F_{2}(t) \\ = {} & \int_{t}^{T} g(s,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds + \left(v_{2}^{\star} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} + F(t), \\ V^{\star}(T ; v_{0}) = {} & \left(v_{2}^{\star} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} \tilde{Z}(T)^{\frac{1}{\hat{b}-1}} + F, \\ V_{1}(t ; v_{1}^{\star}) = {} & \int_{t}^{T} g(s,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds + F_{1}(t), \\ V_{2}(t ; v_{2}^{\star}) = {} & \left(v_{2}^{\star} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} + F_{2}(t),\end{aligned}$$ for all $t \in [0,T]$, with $$\begin{aligned} g(s,t; v_{1}^{\star}) = {} & (1-b(s)) \left(\frac{e^{\beta s - b(s) \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) (s-t)}}{a(s)}\right)^{\frac{1}{b(s)-1}} \left(\lambda_{1}^{\star}\right)^{\frac{1}{b(s)-1}} \\ = {} & (1-b(s)) \left(1-\hat{b}\right)^{\frac{1-\hat{b}}{b(s)-1}} \left(\frac{\hat{a}}{a(s)}\right)^{\frac{1}{b(s)-1}} \left(\frac{e^{\beta s - b(s) \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) (s-t)}}{e^{\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T}}\right)^{\frac{1}{b(s)-1}} \left(v_{0} - v_{1}^{\star} - F_{2}(0)\right)^{\frac{\hat{b}-1}{b(s)-1}} \\ \stackrel{\eqref{eq:definition:chi}}{=} {} & \chi(s) e^{\frac{b(s)}{b(s)-1} \left(r - \frac{1}{2} \frac{1}{b(s)-1} \|\gamma\|^{2}\right) t} \left(v_{0} - v_{1}^{\star} - F_{2}(0)\right)^{\frac{\hat{b}-1}{b(s)-1}}.\end{aligned}$$ Furthermore, $\tilde{t}_{1}^{\star} = \tilde{t}_{1}(v_{1}^{\star}) \in (t,T)$ solves : $$\begin{aligned} b(\tilde{t}_{1}^{\star}) = {} & 1 + \frac{\int_{t}^{T} g(s,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds}{\int_{t}^{T} \frac{1}{b(s)-1} g(s,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{b(s)-1}} ds}.\end{aligned}$$ The formula for the optimal 
investment strategy is straightforward from Theorem \[thm:Solution:Merging:OriginalProblem\] as ${b(\tilde{t}_{1}^{\star}) \equiv \hat{b}}$ and ${V^{\star}(t ; v_{0}) = V_{1}(t ; v_{1}^{\star}) + V_{2}(t ; v_{2}^{\star})}$ for any $t \in [0,T]$. The optimal $v_{1}^{\star}$ can be determined by Lemma \[lemma:EqualityConditionValueFunctions:Solution\] as the solution to Equation : $$\begin{aligned} v_{1}^{\star} - \left(v_{0} - v_{1}^{\star} - F_{2}(0)\right) \int_{0}^{T} \chi(t) dt = F_{1}(0),\end{aligned}$$ where $$\begin{aligned} \chi(t) = \left(\frac{\hat{a}}{a(t)}\right)^{\frac{1}{\hat{b}-1}} e^{- \frac{1}{\hat{b}-1} \left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] (T-t)}.\end{aligned}$$ Therefore, $$\begin{aligned} v_{1}^{\star} = \frac{\left(v_{0} - F_{2}(0)\right) \int_{0}^{T} \chi(t) dt + F_{1}(0)}{\int_{0}^{T} \chi(t) dt + 1}.\end{aligned}$$ is the optimal budget to the consumption problem, ${v_{2}^{\star} = v_{0} - v_{1}^{\star}}$ is the optimal budget to the terminal wealth problem. Furthermore, by Lemma \[lemma:EqualityConditionValueFunctions:Solution\] one knows $$\begin{aligned} \left(\lambda_{1}^{\star}\right)^{\frac{1}{\hat{b}-1}} = \frac{1}{1-\hat{b}} \left(\frac{\hat{a}}{e^{\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T}}\right)^{\frac{1}{\hat{b}-1}} \left(v_{0} - v_{1}^{\star} - F_{2}(0)\right).\end{aligned}$$ This enables us to calculate $g(s,t; v_{1}^{\star})$ to be $$\begin{aligned} g(s,t; v_{1}^{\star}) = {} & (1-\hat{b}) \left(\frac{e^{\beta s - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) (s-t)}}{a(s)}\right)^{\frac{1}{\hat{b}-1}} \left(\lambda_{1}^{\star}\right)^{\frac{1}{\hat{b}-1}} \\ = {} & \left(\frac{\hat{a}}{a(s)}\right)^{\frac{1}{\hat{b}-1}} e^{- \frac{1}{\hat{b}-1} \left[\beta (T-s) + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) (s-t-T)\right]} \left(v_{0} - v_{1}^{\star} - F_{2}(0)\right) \\ = {} & \left(\frac{\hat{a}}{a(s)}\right)^{\frac{1}{\hat{b}-1}} e^{- \frac{1}{\hat{b}-1} \left[\beta (T-s) + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) (s-t-T)\right]} \left(v_{0} - \frac{\left(v_{0} - F_{2}(0)\right) \int_{0}^{T} \chi(t) dt + F_{1}(0)}{\int_{0}^{T} \chi(t) dt + 1} - F_{2}(0)\right) \\ = {} & \left(\frac{\hat{a}}{a(s)}\right)^{\frac{1}{\hat{b}-1}} e^{- \frac{1}{\hat{b}-1} \left[\beta (T-s) + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) (s-t-T)\right]} \left(\frac{v_{0} - F_{2}(0) - F_{1}(0)}{\int_{0}^{T} \chi(t) dt + 1}\right) \\ = {} & \left(\frac{\hat{a}}{a(s)}\right)^{\frac{1}{\hat{b}-1}} e^{- \frac{1}{\hat{b}-1} \left[\beta (T-s) + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) (s-t-T)\right]} \left(\frac{v_{0} - F(0)}{\int_{0}^{T} \chi(t) dt + 1}\right)\end{aligned}$$ with $F(0) = \int_{0}^{T} e^{- r s} \left(\bar{c}(s) - y(s)\right) ds + e^{- r T} F$ defined in , and thus using Theorem \[thm:Solution:Merging:OriginalProblem\]: $$\begin{aligned} V_{1}(t ; v_{1}^{\star}) = {} & \int_{t}^{T} g(s,t; v_{1}^{\star}) \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} ds + F_{1}(t) \\ = {} & \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(\frac{v_{0} - F(0)}{\int_{0}^{T} \chi(t) dt + 1}\right) \int_{t}^{T} \left(\frac{\hat{a}}{a(s)}\right)^{\frac{1}{\hat{b}-1}} e^{- \frac{1}{\hat{b}-1} \left[\beta (T-s) + \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) (s-t-T)\right]} ds + F_{1}(t) \\ = {} & \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} 
\left(v_{0} - F(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \frac{\int_{t}^{T} \chi(s) ds}{\int_{0}^{T} \chi(t) dt + 1} + F_{1}(t).\end{aligned}$$ With, again from Theorem \[thm:Solution:Merging:OriginalProblem\], $$\begin{aligned} V_{2}(t ; v_{2}^{\star}) = {} & \left(v_{2}^{\star} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} + F_{2}(t) \\ = {} & \left(v_{0} - v_{1}^{\star} - F_{2}(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} + F_{2}(t) \\ = {} & \left(\frac{v_{0} - F_{2}(0) - F_{1}(0)}{\int_{0}^{T} \chi(t) dt + 1}\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} + F_{2}(t) \\ = {} & \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(v_{0} - F(0)\right) \frac{e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t}}{\int_{0}^{T} \chi(t) dt + 1} + F_{2}(t)\end{aligned}$$ because ${F(t) = F_{1}(t) + F_{2}(t)}$ $\forall t \in [0,T]$, it follows $$\begin{aligned} V^{\star}(t ; v_{0}) = {} & V_{1}(t ; v_{1}^{\star}) + V_{2}(t ; v_{2}^{\star}) \\ = {} & \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(v_{0} - F(0)\right) \frac{1}{\int_{0}^{T} \chi(t) dt + 1} \left\{e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \int_{t}^{T} \chi(s) ds + e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t}\right\} \\ & + F_{1}(t) + F_{2}(t) \displaybreak \\ = {} & \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(v_{0} - F(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \frac{\int_{t}^{T} \chi(s) ds + 1}{\int_{0}^{T} \chi(t) dt + 1} + F(t).\end{aligned}$$ Finally, the optimal consumption rate process can then be determined from Theorem \[thm:Solution:Merging:OriginalProblem\] as $$\begin{aligned} c^{\star}(t;v_{0}) = {} & (1-\hat{b}) \left(\lambda_{1}^{\star} \frac{e^{\beta t}}{a(t)} \tilde{Z}(t)\right)^{\frac{1}{\hat{b}-1}} + \bar{c}(t) \\ = {} & \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(\frac{e^{\beta t}}{a(t)}\right)^{\frac{1}{\hat{b}-1}} \left(\frac{\hat{a}}{e^{\left[\beta - \hat{b} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right)\right] T}}\right)^{\frac{1}{\hat{b}-1}} \left(\frac{v_{0} - F(0)}{\int_{0}^{T} \chi(t) dt + 1}\right) + \bar{c}(t) \\ = {} & \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(v_{0} - F(0)\right) \left(\frac{\hat{a}}{a(t)}\right)^{\frac{1}{\hat{b}-1}} e^{- \frac{1}{\hat{b}-1} \beta (T-t) + \frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) T} \left(\frac{1}{\int_{0}^{T} \chi(t) dt + 1}\right) + \bar{c}(t) \\ = {} & \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(v_{0} - F(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \frac{\chi(t)}{\int_{0}^{T} \chi(t) dt + 1} + \bar{c}(t) \\ = {} & \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(v_{0} - F(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \frac{\int_{t}^{T} \chi(s) ds + 1}{\int_{0}^{T} \chi(t) dt + 1} \frac{\chi(t)}{\int_{t}^{T} \chi(s) ds + 1} + \bar{c}(t) \\ = {} & \frac{\chi(t)}{\int_{t}^{T} \chi(s) ds + 1} \left(V^{\star}(t ; v_{0}) - F(t)\right) + \bar{c}(t).\end{aligned}$$ By 
defining $$\begin{aligned} \zeta(t) = \frac{\chi(t)}{\int_{t}^{T} \chi(s) ds + 1} > 0\end{aligned}$$ we obtain $$\begin{aligned} c^{\star}(t;v_{0}) = \zeta(t) \left(V^{\star}(t ; v_{0}) - F(t)\right) + \bar{c}(t).\end{aligned}$$ With the definition of $\zeta(t)$, the optimal wealth process finally can be written as $$\begin{aligned} V^{\star}(t ; v_{0}) = {} & \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(v_{0} - F(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \frac{\int_{t}^{T} \chi(s) ds + 1}{\int_{0}^{T} \chi(t) dt + 1} + F(t) \\ = {} & \frac{1}{\zeta(t)} \tilde{Z}(t)^{\frac{1}{\hat{b}-1}} \left(v_{0} - F(0)\right) e^{\frac{\hat{b}}{\hat{b}-1} \left(r - \frac{1}{2} \frac{1}{\hat{b}-1} \|\gamma\|^{2}\right) t} \frac{\chi(t)}{\int_{0}^{T} \chi(t) dt + 1} + F(t).\end{aligned}$$ [^1]: *Preprint* [^2]: For two integrable functions $f(x)$ and $g(x)$ on the interval $(a,b)$, where $f(x)$ is continuous and $g(x)$ does not change sign on $(a,b)$, there exists $d \in (a,b)$ such that $$\begin{aligned} \int_{a}^{b} f(x) g(x) dx = f(d) \int_{a}^{b} g(x) dx.\end{aligned}$$
--- author: - Jean Daniel Mukam - | Jean Daniel Mukam (jean.d.mukam@aims-senegal.org)\ African Institute for Mathematical Sciences (AIMS)\ Senegal\ \ [ Supervised by : Dr. Antoine Tambue]{}\ [ AIMS-South Africa and University of Cape Town ]{}\ [ antonio@aims.ac.za]{} date: | [ 13 June 2015]{}\ [*Submitted in Partial Fulfillment of a Masters II at AIMS Senegal*]{}\ title: | Stochastic Calculus with Jumps Processes : Theory and Numerical Techniques\ (Master Thesis) --- Abstract {#abstract .unnumbered} ======== In this work we consider a stochastic differential equation (SDEs) with jump. We prove the existence and the uniqueness of solution of this equation in the strong sense under global Lipschitz condition. Generally, exact solutions of SDEs are unknowns. The challenge is to approach them numerically. There exist several numerical techniques. In this thesis, we present the compensated stochastic theta method (CSTM) which is already developed in the literature. We prove that under global Lipschitz condition, the CSTM converges strongly with standard order 0.5. We also investigated the stability behaviour of both CSTM and stochastic theta method (STM). Inspired by the tamed Euler scheme developed in [@Martin1], we propose a new scheme for SDEs with jumps called compensated tamed Euler scheme. We prove that under non-global Lipschitz condition the compensated tamed Euler scheme converges strongly with standard order $0.5$. Inspired by [@Xia2], we propose the semi-tamed Euler for SDEs with jumps under non-global Lipschitz condition and prove its strong convergence of order $0.5$. This latter result is helpful to prove the strong convergence of the tamed Euler scheme. We analyse the stability behaviours of both tamed and semi-tamed Euler scheme We present also some numerical experiments to illustrate our theoretical results.\ \ **Key words** : Stochastic differential equation, strong convergence, mean-square stability, Euler scheme, global Lipschitz condition, polynomial growth condition, one-sided Lipschitz condition. Declaration {#declaration .unnumbered} ----------- I, the undersigned, hereby declare that the work contained in this essay is my original work, and that any work done by others or by myself previously has been acknowledged and referenced accordingly. ![image](signjd.jpg) Jean Daniel Mukam, 25 May 2015 INTRODUCTION {#introduction .unnumbered} ============ In many branches of sciences like finance, economics, biology, engineering, ecology one often encountered some problems influenced by uncertainties. For example, in finance, the unpredictable nature of events such as markets crashes and booms may have significant and sudden impact on the stock price fluctuations. Therefore, in order to have more realistic prediction of these phenomena, it is natural to model them with equations which involves the deterministic part and the random part including jump. The SDEs with jumps is the generalization of both deterministic part and random part with jumps. SDEs with jumps have probability theory and stochastic process as prerequisites. We refer to [@Oksa], [@Oks], [@Phi] for general notions in probability theory and stochastic process. In this thesis, under global Lipschitz condition, we prove the existence and uniqueness of solution of SDEs with jumps. We focus on the strong convergence of the compensated stochastic theta methods (CSTM) of these equations under global Lipschitz condition. In particular, we prove that CSTM have strong convergence of order $0.5$. 
We investigate the stability of both CSTM and stochastic theta method (STM). For the linear case, we prove that under the assumption $\dfrac{1}{2}\leq \theta\leq 1$, CSTM holds the A-stability property. For the general nonlinear problem, we study the stability for $\theta=1$. In this case, when the drift coefficient have a negative one-sided Lipschitz coefficient, the diffusion coefficient and the jump coefficient satisfy the global Lipschitz condition, we prove that STM reproduce stability under certain step-size and the CSTM is stable for any step-size. Most phenomena are modelised by SDEs with jumps where the drift coefficient is one-sided Lipschitz and satisfies the polynomial growth condition. For such equations, it is proved in [@Martin3] that Euler explicit method fails to converge strongly to the exact solution while Euler implicit method converges strongly, but requires much computational efforts. Recently, a new explicit and efficient method was developed in [@Martin1] called tamed Euler scheme. In [@Martin1], the authors proved that the tamed Euler converges strongly with order 0.5 to the exact solution of SDEs under non-global Lipschitz condition. In this thesis, we extend the tamed Euler scheme by introducing a compensated tamed Euler scheme for SDEs with jumps. We prove that this scheme converges strongly with standard order $0.5$. We also extend the semi-tamed Euler developed in [@Xia2] and we prove that this scheme converge strongly with order $0.5$ for SDEs with jumps. As a consequence of this latter result, we prove the strong convergence of the tamed Euler scheme for SDEs with jumps. The stability analysis of both tamed Euler and semi-tamed Euler are done in this thesis. This thesis is organized as follows. In chapter 1, we recall some basic notions in probability theory and stochastic process. Chapter 2 is devoted to the proof of the existence and uniqueness of SDEs with jumps under global Lipschitz condition. In chapter 3, we focus on the strong convergence of the CSTM and the stability analysis of both CSTM and STM. In chapter 4, under non-global Lipschitz condition we investigate the strong convergence of the compensated tamed Euler scheme. In chapter 5, under non-global Lipschitz condition, we investigate the strong convergence and stability of both semi-tamed Euler scheme and tamed Euler scheme. Our theoretical results are illustrated by numerical examples at the end of chapter 3, chapter 4 and chapter 5. \[chp: chap1 Basic Notions in probability theory and stochastic process\] Basic notions in probability theory and stochastic process ========================================================== Basic notions in probability theory ------------------------------------- In this chapter, we present some basic concepts and results in probability theory and stochastic process useful to understand the notion of stochastic differential equations. More details for this chapter can be found in [@Oksa], [@Oks] and [@Phi]. ### **Basic notions in probability theory** **\[$\sigma$- algebra\]** Let $\Omega$ be a non-empty set. 1. A $\sigma$-algebra (or $\sigma$-field) $\mathcal{F}$ on $\Omega$ is a family of subsets of $\Omega$ satisfying 1. $\Omega \in \mathcal{F}$. 2. $\forall A\in \mathcal{F}$, $A^c\in\mathcal{F}$. 3. If $(A_i)_{i\in I}$ is a countable collection of set in $\mathcal{F}$, then $\cup_{i\in I} A_i\in \mathcal{F}$. 2. Let $\mathcal{F}_1$ and $\mathcal{F}_2$ be two $\sigma$-algebra on $\Omega$. 
$\mathcal{F}_1$ is said to be a sub-$\sigma$-algebra of $\mathcal{F}_2$ if $\mathcal{F}_1\subset\mathcal{F}_2$. <!-- --> 1. Given any family $\mathcal{B}$ of subset of $\Omega$, we denote by $$\begin{aligned} \sigma(\mathcal{B}):=\cap\{\mathcal{C}: \hspace{0.2cm}\mathcal{C}, \hspace{0.2cm}\sigma- \text{algebra of}\hspace{0.2cm} \Omega, \hspace{0.2cm}\mathcal{B}\subset \mathcal{C}\}\end{aligned}$$ the smallest $\sigma$-field of $\Omega$ containing $\mathcal{B}$, $\sigma(\mathcal{B})$ is called the $\sigma$-field generated by $\mathcal{B}$. When $\mathcal{B}$ is a collection of all open sets of a topological space $\Omega$, $\sigma(\mathcal{B})$ is called the Borel $\sigma$-algebra on $\Omega$ and the elements of $\sigma(\mathcal{B})$ are called Borel sets. 2. If $ X: \Omega \longrightarrow\mathbb{R}^n$ is a function, then the $\sigma$-algebra generated by $X$ is the smallest $\sigma$-algebra on $\Omega$ containing all the sets of the form $$\begin{aligned} \{X^{-1}(U) : U\subset \mathbb{R}^n, \hspace{0.5cm}\text{open}\}.\end{aligned}$$ **\[Probability measure\]**. Let $\mathcal{F}$ be a $\sigma$-field on $\Omega$. A probability measure is an application $\mathbb{P} : \mathcal{F}\longrightarrow [0,1]$ satisfying 1. $\mathbb{P}(\Omega)=1-\mathbb{P}(\emptyset)=1$. 2. If $(A_i)_{i\in I}$ is a countable collection of elements of $\mathcal{F}$ pairwise disjoints, then $\mathbb{P}(\cup_{i\in I}A_i)=\sum\limits_{i\in I}\mathbb{P}(A_i)$. **\[Probability space\]**. Let $\Omega$ be a non-empty set, $\mathcal{F}$ a $\sigma$-field on $\Omega$ and $\mathbb{P}$ a probability measure on $\mathcal{F}$. The triple $(\Omega, \mathcal{F}, \mathbb{P})$ is called a probability space. **\[Negligeable set\]** 1. Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, $A\subset\Omega$ is said to be $\mathbb{P}$-null or negligeable if $\mathbb{P}(A)=0$ 2. A property is said to be true almost surely (a.s) if the set on which this property is not true is negligeable. **\[Measurability and random variable\]** 1. Let $(\Omega, \mathcal{F}, \mathbb{P})$ and $(\Omega', \mathcal{F}', \mathbb{P}')$ be two probability spaces. A function $X : \Omega \longrightarrow \Omega'$ is said to be $\mathcal{F}$-measurable if and only if $$\begin{aligned} X^{-1}(U) := \{ \omega\in \Omega : X(\omega) \in U\}\subset \mathcal{F}, \hspace{0.5cm} \forall\hspace{0.2cm} U\in \mathcal{F}'\end{aligned}$$ 2. A random variable $X$ is a function $X : \Omega \longrightarrow \Omega'$ $\mathcal{F}$-measurable. 3. If $\Omega'=\mathbb{R}$, then $X$ is called a real random variable. 4. If $\Omega'=\mathbb{R}^n$, $n>1$ then $X$ is called a vector random variable. In the following, unless otherwise state, $(\Omega, \mathcal{F}, \mathbb{P})$ denote a probability space and $X$ a random variable, $X :\Omega \longrightarrow \mathbb{R}^n$. . Every random variable induces a probability measure on $\mathbb{R}^n$ denoted $\mu_X$ and define by $\mu_X(B) :=\mathbb{P}(X^{-1}(B))$, $\forall\, B$ open set of $\mathbb{R}^n$. $\mu_X$ is called the distribution function of $X$. **\[Expected value\]** 1. If $X$ is a random variable such that $\int_{\Omega}||X(\omega)||d\mathbb{P}(\omega)<\infty$ almost surely, the quantity $$\begin{aligned} \mathbb{E}(X) :=\int_{\Omega}X(\omega)d\mathbb{P}(\omega) =\int_{\mathbb{R}^n}d\mu_X(x) \end{aligned}$$ is called the expected value of $X$, where $||.||$ denote the euclidean norm on $\mathbb{R}^n$. 2. 
In general, if $f : \mathbb{R}^n\longrightarrow \mathbb{R}^m$ is measurable and $\int_{\Omega}||f(X(\omega))||d\mathbb{P}(\omega)<\infty $ almost surely, then the quantity $\mathbb{E}(f(X))$ defined by $$\begin{aligned} \mathbb{E}(f(X)) :=\int_{\Omega}f(X(\omega))d\mathbb{P}(\omega)=\int_{\mathbb{R}^n}f(x)d\mu_X(x)\end{aligned}$$ is called the expected value of $f(X)$. **\[Independent random variables\]** Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. 1. Two elements $A$ and $B$ of $\mathcal{F}$ are independent if $$\begin{aligned} \mathbb{P}(A\cap B)=\mathbb{P}(A)\,\mathbb{P}(B).\end{aligned}$$ 2. Two random variables $X_1$ and $X_2$ of $(\Omega, \mathcal{F}, \mathbb{P})$ are independent if for every choice of Borel sets $B_1$ and $B_2$ the following holds : $$\begin{aligned} \mathbb{P}(X_1\in B_1, X_2\in B_2)=\mathbb{P}(X_1\in B_1)\times\mathbb{P}(X_2\in B_2).\end{aligned}$$ The following proposition is from [@Oksa]. Two random variables $X_1$ and $X_2$ are independent if and only if for any measurable positive functions $f_1$ and $f_2$, the following equality holds $$\begin{aligned} \mathbb{E}(f_1(X_1)f_2(X_2))=\mathbb{E}(f_1(X_1)) \mathbb{E}(f_2(X_2)).\end{aligned}$$ **\[Conditional probability\]** For any event $A$ such that $\mathbb{P}(A)>0$, the conditional probability on $A$ is the probability measure defined by : $$\begin{aligned} \mathbb{P}(B/A) :=\dfrac{\mathbb{P}(A\cap B)}{\mathbb{P}(A)},\hspace{0.3cm} \forall B\in \mathcal{F}.\end{aligned}$$ ### Conditional expectation Let $X$ be a random variable such that $\int_{\Omega}|X(\omega)|d\mathbb{P}(\omega)<\infty$ almost surely. Let $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. The conditional expectation of $X$ relative to the $\sigma$-algebra $\mathcal{G}$ is a random variable denoted by $\mathbb{E}(X/\mathcal{G})$ satisfying 1. $ \mathbb{E}(X/\mathcal{G})$ is $\mathcal{G}$-measurable. 2. $\int_G\mathbb{E}(X/\mathcal{G})d\mathbb{P}=\int_GXd\mathbb{P}, \hspace{0.5cm}\forall \hspace{0.2cm} G\in \mathcal{G}$. In the literature, $\mathbb{E}(X/\mathcal{G})$ is called the projection of $X$ upon $\mathcal{G}$. The proof of the following theorem can be seen in [@Phi]. 1. $ \mathbb{E}(\mathbb{E}(X/\mathcal{G}))=\mathbb{E}(X)$. 2. If $X$ is $\mathcal{G}$-measurable, then $\mathbb{E}(X/\mathcal{G})=X$. 3. $\mathbb{E}((X+Y)/\mathcal{G})=\mathbb{E}(X/\mathcal{G})+\mathbb{E}(Y/\mathcal{G})$. 4. (Tower property) If $\mathcal{G}\subset \mathcal{G}'$, then $\mathbb{E}(X/\mathcal{G})=\mathbb{E}(\mathbb{E}(X/\mathcal{G}')/\mathcal{G})$. 5. If $\sigma(X)$ and $\mathcal{G}$ are independent, then $\mathbb{E}(X/\mathcal{G})=\mathbb{E}(X)$. 6. If $X\leq Y$ a.s., then $\mathbb{E}(X/\mathcal{G})\leq \mathbb{E}(Y/\mathcal{G})$. 7. If $X$ is $\mathcal{G}$-measurable, then $\mathbb{E}(XY/\mathcal{G})=X\mathbb{E}(Y/\mathcal{G})$. ### Convergence of random variables . Let $p\in[1,\infty)$, we denote by $\mathbb{L}^p(\Omega, \mathbb{R}^n)$ the space of equivalence classes of measurable functions $X :\Omega \longrightarrow \mathbb{R}^n$ such that $$\begin{aligned} ||X||^p_{\mathbb{L}^p(\Omega,\mathbb{R}^n)} := \mathbb{E}(||X||^p) =\int_{\Omega}||X(\omega)||^pd\mathbb{P}(\omega)<+\infty.\end{aligned}$$ Let $(X_n)\subset \mathbb{L}^p(\Omega, \mathbb{R}^n)$ be a sequence of random variables and $X\in \mathbb{L}^p(\Omega, \mathbb{R}^n)$ a random variable. Let $$\begin{aligned} N:=\{\omega : \lim_{n\longrightarrow \infty}X_n(\omega)=X(\omega)\}\end{aligned}$$ 1. $(X_n)$ converges to $X$ almost surely if $N^c$ is negligible. 2. 
$(X_n)$ converges in probability to $X$ if $$\begin{aligned} \forall\hspace{0.2cm}\epsilon >0\hspace{0.3cm}\lim_{n\longrightarrow\infty}\mathbb{P}(||X_n-X||>\epsilon)=0.\end{aligned}$$ 3. $(X_n)$ converges in $\mathbb{L}^p$ to $X$ if $$\begin{aligned} \lim_{n\longrightarrow +\infty}\mathbb{E}(||X_n-X||^p)=0. \end{aligned}$$ : **\[Frobenius norm\]** The Frobenius norm of an $m\times n$ matrix $A=(a_{ij})_{1\leq i\leq m;\, 1\leq j\leq n}$ is defined by $$\begin{aligned} ||A||:=\sqrt{\sum_{i=1}^m\sum_{j=1}^n|a_{ij}|^2}.\end{aligned}$$ The Frobenius norm and the Euclidean norm coincide for vectors. \[ch1Minkowski\]**\[Minkowski inequality : Integral form\]**. Let $1\leq p<+\infty$ and let $(X, \mathcal{A}, dx)$ and $(Y, \mathcal{B}, dy)$ be $\sigma$-finite measure spaces. Let $F$ be a measurable function on the product space $X\times Y$. Then $$\begin{aligned} \left(\int_X\left|\int_YF(x,y)dy\right|^pdx\right)^{1/p}\leq \int_Y\left(\int_X|F(x,y)|^pdx\right)^{1/p}dy.\end{aligned}$$ The above inequality can be written as $$\begin{aligned} \left\|\int_YF(.,y)dy\right\|_{L^p(X,\mathcal{A}, dx)}\leq \int_Y||F(.,y)||_{L^p(X, \mathcal{A}, dx)}dy.\end{aligned}$$ **\[Gronwall inequality\] : Continuous form** Let $a(t)$ and $b(t)$ be two continuous and positive functions defined on $\mathbb{R}_+$ and let $c\geq 0$ be a constant such that $$\begin{aligned} a(t)\leq b(t)+c\int_0^ta(s)ds, \hspace{0.5cm} \forall\, t\in \mathbb{R}_+,\end{aligned}$$ then $$\begin{aligned} a(t)\leq b(t)+c\int_0^tb(s)e^{c(t-s)}ds, \hspace{0.5cm}\forall\, t\in \mathbb{R}_+.\end{aligned}$$ **\[Gronwall inequality\] : Discrete form**. Let $\theta$ and $K$ be two positive constants and $(v_n)$ be a sequence satisfying : $$\begin{aligned} v_{n+1}\leq (1+\theta)v_n+K,\end{aligned}$$ then $$\begin{aligned} v_n\leq e^{n\theta}v_0+K\dfrac{e^{n\theta}-1}{e^{\theta}-1}.\end{aligned}$$ [@Gron]. **\[Borel Cantelli\]** Let $(A_n)_{n\in \mathbb{N}}$ be a family of subsets of $\Omega$. 1. If $\sum\limits_{n\in\mathbb{N}}\mathbb{P}(A_n)<\infty$, then $\mathbb{P}\left[\limsup\limits_{n\longrightarrow\infty}A_n\right]=0$. 2. If the events $(A_n)$ are independent and $\sum\limits_{n\in\mathbb{N}}\mathbb{P}(A_n)=\infty$, then $\mathbb{P}\left[\limsup\limits_{n\longrightarrow \infty}A_n\right]=1$. [@Phi]. Stochastic processes -------------------- Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. A family $(\mathcal{F}_t)_{t\geq 0}$ of sub-$\sigma$-algebras of $\mathcal{F}$ is called a filtration if $\mathcal{F}_s\subset \mathcal{F}_t$, $\forall\hspace{0.2cm} 0\leq s\leq t$. If $(\mathcal{F}_t)$ is such that $\mathcal{F}_t=\cap_{s>t}\mathcal{F}_s$, then $(\mathcal{F}_t)_{t\geq 0}$ is said to be right continuous. A stochastic process is a family of vector random variables $(X_t)_{t\geq 0}$. That is, for all $t\geq 0$, the application $\begin{array}{cccc} X_t & : \Omega &\longrightarrow &\mathbb{R}^n\\ & \omega & \longmapsto& X_t(\omega) \end{array}$ is measurable. If $(X_t)_{t\geq 0}$ is a stochastic process, then for each $\omega\in\Omega$, the application $t\longmapsto X_t(\omega)$ is called a sample path. Let $(\mathcal{F}_t)$ be a filtration on $(\Omega, \mathcal{F}, \mathbb{P})$. A stochastic process $(X_t)$ is said to be $\mathcal{F}_t$-adapted if $\forall\hspace{0.2cm} t\geq 0$, $X_t$ is $\mathcal{F}_t$-measurable. **\[Martingale\]** Let $(\mathcal{F}_t)_{t\geq 0}$ be a filtration on $(\Omega, \mathcal{F}, \mathbb{P})$. A stochastic process $(M_t)_{t\geq 0}$ is called an $\mathcal{F}_t$-martingale if the following properties hold 1. $(M_t)$ is $\mathcal{F}_t$-adapted. 2. 
$\mathbb{E}||M_t||<\infty $, $\forall\hspace{0.1cm} t\geq 0$. 3. $\mathbb{E}(M_t/\mathcal{F}_s)=M_s$, $\forall\hspace{0.1cm} 0\leq s\leq t$. <!-- --> 1. If the condition $(iii)$ of the previous definition is replaced by $\mathbb{E}(M_t/\mathcal{F}_s)\geq M_s$, $\forall\hspace{0.1cm} 0\leq s\leq t$, then $(M_t)$ is called a submartingale. 2. If the condition $(iii)$ of the previous definition is replaced by $\mathbb{E}(M_t/\mathcal{F}_s)\leq M_s$, $\forall\hspace{0.1cm} 0\leq s\leq t$, then $(M_t)$ is called a supermartingale. 3. A positive submartingale is a submartingale $(X_t)_{t\geq 0}$ satisfying $X_t\geq 0$ for all $t\geq 0$. **\[Predictable process\]** Let $(\mathcal{F}_t)_{t\geq 0}$ be a filtration on $(\Omega, \mathcal{F}, \mathbb{P})$. A stochastic process $(X_t)_{t\geq 0}$ is called an $\mathcal{F}_t$-predictable process if for all $t> 0$, $X_t$ is measurable with respect to the $\sigma$-algebra generated by $\{X_s, \hspace{0.2cm}s<t\}$. Let $M=(M_t)$ be a submartingale. Then for $1<p<\infty$, we have 1. **Markov’s inequality** $$\begin{aligned} \mathbb{P}\left(\sup_{0\leq s \leq t}||M_s||\geq \alpha\right)\leq\dfrac{\mathbb{E}(||M_t||)}{\alpha},\hspace{0.6cm} \forall\, \alpha>0. \end{aligned}$$ 2. **Doob’s maximal inequality** $$\begin{aligned} \mathbb{E}\left[\left(\sup_{0\leq s \leq t}||M_s||\right)^p\right]^{1/p}\leq \dfrac{p}{p-1}\mathbb{E}\left[||M_t||^p\right]^{1/p}. \end{aligned}$$ [@Phi]. **\[Wiener process or Brownian motion\]** Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $(\mathcal{F}_t)_{t\geq 0}$ a filtration on this space. An $\mathcal{F}_t$-adapted stochastic process $(W_t)_{t\geq 0}$ is called a Wiener process or Brownian motion if : 1. $W_0=0$. 2. $ t \longmapsto W_t$ is almost surely continuous. 3. $(W_t)_{t\geq 0}$ has independent increments $($i.e. $W_t-W_s$ is independent of $W_r,\hspace{0.2cm} r\leq s)$. 4. $W_t-W_s\rightsquigarrow \mathcal{N}(0, t-s)$, for $0\leq s \leq t$. Usually, this property is called stationarity of the increments. If $(W_t)$ is an $\mathcal{F}_t$-Brownian motion, then the following processes are $\mathcal{F}_t$-martingales 1. $W_t$. 2. $W_t^2-t$. 3. $\exp\left(\gamma W_t-\gamma^2\dfrac{t}{2}\right),\hspace{0.3cm}\forall\; \gamma\in \mathbb{R}$. Let $0\leq s\leq t$, then 1. $$\begin{aligned} \mathbb{E}(W_t/\mathcal{F}_s)&=&\mathbb{E}(W_t-W_s+W_s/\mathcal{F}_s)\\ &=&W_s+\mathbb{E}(W_t-W_s/\mathcal{F}_s) \hspace{0.1cm}\text{since} \hspace{0.2cm} W_s \hspace{0.1cm} \text{is} \hspace{0.1cm} \mathcal{F}_s-\text{measurable}\\ & =& W_s+ \mathbb{E}(W_t-W_s)\hspace{0.1cm} (\text{since the increments are independent}) \\ &=&W_s \hspace{0.3cm}(\text{since} \hspace{0.1cm} W_t-W_s\rightsquigarrow\mathcal{N}(0,t-s)). \end{aligned}$$ 2. $$\begin{aligned} \mathbb{E}(W_t^2-t/\mathcal{F}_s)&=&\mathbb{E}(W_t^2+W_s^2-2W_sW_t+2W_sW_t-W_s^2/\mathcal{F}_s)-t\\ &=&\mathbb{E}((W_t-W_s)^2/\mathcal{F}_s)+W_s\mathbb{E}((2W_t-W_s)/\mathcal{F}_s)-t \\ & & \hspace{0.1cm}(\text{since} \hspace{0.2cm} W_s \hspace{0.1cm} \text{is} \hspace{0.1cm} \mathcal{F}_s-\text{measurable})\\ &=&\mathbb{E}((W_t-W_s)^2)+ W_s\mathbb{E}(W_t-W_s)+W_s\mathbb{E}(W_t/\mathcal{F}_s)-t\\ & &(\text{since the increments are independent})\\ &=&t-s+0+W_s^2-t\hspace{0.2cm} \text{since} \hspace{0.2cm} W_t-W_s\rightsquigarrow\mathcal{N}(0, t-s)\\ &=&W_s^2-s. 
\end{aligned}$$ Using the same argument as above, we have : $$\begin{aligned} \mathbb{E}(e^{\gamma W_t}/\mathcal{F}_s)&=& e^{\gamma W_s}\mathbb{E}(e^{\gamma(W_t-W_s)}/\mathcal{F}_s)\\ &=& e^{\gamma W_s}\mathbb{E}(e^{\gamma W_{t-s}})\\ &=&e^{\gamma W_s}\int_{-\infty}^{+\infty}e^{\gamma x}\dfrac{e^{-x^2/(2(t-s))}}{\sqrt{2\pi(t-s)}}dx\\ &=&e^{\gamma W_s}e^{\gamma^2(t-s)/2}=e^{\gamma W_s+\gamma^2(t-s)/2}. \end{aligned}$$ Therefore, $$\begin{aligned} \mathbb{E}\left(\exp\left(\gamma W_t-\gamma^2\dfrac{t}{2}\right)/\mathcal{F}_s\right)&=&\mathbb{E}(e^{\gamma W_t}/\mathcal{F}_s)e^{-\gamma^2t/2}\\ &=&e^{\gamma W_s+\gamma^2(t-s)/2}e^{-\gamma^2t/2}\\ &=& \exp(\gamma W_s-{\gamma^2 s}/2). \end{aligned}$$ The following proposition is from [@Oks]. Almost all sample paths of a Brownian motion are nowhere differentiable. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $(\mathcal{F}_t)$ a filtration on this space. Let $(S_k)_{k\geq 1}$ be an $\mathcal{F}_t$-adapted sequence of random times on $(\Omega, \mathcal{F}, \mathbb{P})$ with $0\leq S_1(\omega)\leq S_2(\omega)\leq ...$ for all $\omega \in \Omega$. The $\mathcal{F}_t$-adapted process $N=(N_t)_{t\geq 0}$ defined by : $$\begin{aligned} N_t:=\sum_{k\geq 1}\mathbf{1}_{\{S_k\leq t\}}\end{aligned}$$ is called a counting process with jump times $S_k$. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $(\mathcal{F}_t)$ a filtration on this space. An $\mathcal{F}_t$-adapted counting process $(N_t)$ is called a Poisson process of intensity $\lambda >0$ if : 1. $N_0=0$. 2. $\forall\; 0\leq t_0<t_1<...<t_n$, the random variables $\{N_{t_j}-N_{t_{j-1}}, \hspace{0.3cm} 1\leq j\leq n\}$ are independent. 3. For $0\leq s\leq t$, $N_t-N_s\approx N_{t-s}$, where $\approx$ stands for equality in law. 4. For all $t>0$, $N_t$ follows a Poisson law with parameter $\lambda t$ (and we denote $N_t\leftrightsquigarrow \mathcal{P}(\lambda t)$). That is $$\begin{aligned} \mathbb{P}(N_t=k)=e^{-\lambda t}\dfrac{(\lambda t)^k}{k!},\hspace{0.5cm} k\in \mathbb{N}. \end{aligned}$$ **\[Compound Poisson process\]** Let $(Z_n)$ be a sequence of discrete independent identically distributed random variables with probability law $\nu_Z$. Let $N=(N_t)$ be a Poisson process with parameter $\lambda$. Let’s assume that $(N_t)$ and $(Z_n)$ are independent. A compound Poisson process with intensity $\lambda>0$ and jump law $\nu_Z$ is an $\mathcal{F}_t$-adapted stochastic process $(Y_t)$ defined by : $$\begin{aligned} Y_t: =\sum_{k=1}^{N_t}Z_k. \end{aligned}$$ **\[Compensated Poisson process\]** A compensated Poisson process associated to a Poisson process $N$ with intensity $\lambda$ is a stochastic process $\overline{N}$ defined by : $$\begin{aligned} \overline{N}(t) := N(t)-\lambda t.\end{aligned}$$ \[ch1quadratic\] Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $(\mathcal{F}_t)$ a filtration on this space. If $(N_t)$ is an $\mathcal{F}_t$-adapted Poisson process with intensity $\lambda$, then 1. $\overline{N}$ is an $\mathcal{F}_t$-adapted martingale. 2. $\mathbb{E}(\overline{N}(t+s)-\overline{N}(t))=0$. 3. $\mathbb{E}[\overline{N}(t+s)-\overline{N}(t)]^2=\lambda s, \hspace{0.5cm} \forall\; t,s \geq 0$. 4. $\overline{N}_t^2-\lambda t$ is a martingale. <!-- --> 1. 
Let $0\leq s\leq t$, then $$\begin{aligned} \mathbb{E}(\overline{N}_t/\mathcal{F}_s)&=& \mathbb{E}(\overline{N}_t-\overline{N}_s+\overline{N}_s/ \mathcal{F}_s)\\ &=&\mathbb{E}(\overline{N}_t-\overline{N}_s/\mathcal{F}_s)+\overline{N}_s\\ &=&\mathbb{E}(N_t-N_s-\lambda t+\lambda s/\mathcal{F}_s)+N_s-\lambda s\\ &=&\mathbb{E}(N_t-N_s)-\lambda t+\lambda s +N_s-\lambda s \hspace{0.1cm}\\ & & \text{since the increments of the Poisson process are independent}\\ &=& \lambda (t-s)-\lambda t+ N_s\hspace{0.2cm}(\text{since } N_t-N_s\rightsquigarrow\mathcal{P}(\lambda(t-s)))\\ &=& N_s-\lambda s\\ &=& \overline{N}(s).\end{aligned}$$ 2. $$\begin{aligned} \mathbb{E}(\overline{N}(t+s)-\overline{N}(t))&=& \mathbb{E}(N(t+s)-N(t)-\lambda s)\\ &=& \lambda (t+s-t)-\lambda s= 0.\end{aligned}$$ 3. $$\begin{aligned} [\overline{N}(t+s)-\overline{N}(t)]^2&=&[N(t+s)-N(t)-\lambda s]^2\\ &=&[N(t+s)-N(t)]^2+\lambda^2s^2-2\lambda s(N(t+s)-N(t)).\end{aligned}$$ Since $N(t)\rightsquigarrow \mathcal{P}(\lambda t)$, using the relation $\mathbb{E}(N_t)=var(N_t)= \lambda t$, it follows that : $$\begin{aligned} \mathbb{E}[\overline{N}(t+s)-\overline{N}(t)]^2&=&\lambda(t+s-t)+\lambda^2(t+s-t)^2+\lambda^2s^2-2\lambda s(\lambda s)=\lambda s. \end{aligned}$$ 4. $$\begin{aligned} \mathbb{E}[\overline{N}^2_t-\lambda t/\mathcal{F}_s]&=& \mathbb{E}[\overline{N}^2_t/\mathcal{F}_s]-\lambda t\\ &=&\mathbb{E}[\overline{N}^2_t+\overline{N}^2_s-2\overline{N}_t\overline{N}_s+2\overline{N}_t\overline{N}_s- \overline{N}_s^2/\mathcal{F}_s]-\lambda t\\ &=&\mathbb{E}[(\overline{N}_t-\overline{N}_s)^2/\mathcal{F}_s]+\mathbb{E}[\overline{N}_s(\overline{N}_t-\overline{N}_s)/\mathcal{F}_s]+\mathbb{E}[\overline{N}_t\overline{N}_s/\mathcal{F}_s]-\lambda t.\end{aligned}$$ Using the fact that $\overline{N}_t$ has independent increments and using the first part of the theorem, it follows that $$\begin{aligned} \mathbb{E}[\overline{N}^2_t-\lambda t/\mathcal{F}_s]&=&\mathbb{E}[(\overline{N}_t-\overline{N}_s)^2]+\overline{N}_s\mathbb{E}(\overline{N}_t-\overline{N}_s)+\overline{N}_s\mathbb{E}[\overline{N}_t-\overline{N}_s+\overline{N}_s/\mathcal{F}_s]-\lambda t\\ &=&\lambda (t-s)+0+\overline{N}_s\mathbb{E}[\overline{N}_t-\overline{N}_s+\overline{N}_s/\mathcal{F}_s]-\lambda t\\ &=&\lambda t-\lambda s+0+0+\overline{N}^2_s-\lambda t\\ &=&\overline{N}^2_s-\lambda s.\end{aligned}$$ This completes the proof. Stochastic integral ------------------- Let $\mathbb{M}^p([0,T], \mathbb{R})$ be the subspace of $\mathbb{L}^p([0,T], \mathbb{R})$ such that for any process $(X_t)\in \mathbb{M}^p([0,T], \mathbb{R})$ we have $$\begin{aligned} \mathbb{E}\left(\int_0^T|X(t)|^pdt\right)<\infty.\end{aligned}$$ Consider a Brownian motion $W$ and a stochastic process $(X_t)$ both adapted to a given filtration $(\mathcal{F}_t)$. We will define the following expression called stochastic integral $$\begin{aligned} I_t(X)=\int_0^tX(s)dW(s).\end{aligned}$$ We will also give some of its properties. Let’s start with the stochastic integral of simple processes. **\[Elementary process or simple process\]** A process $(X_t)_{t\in[0,T]}\in\mathbb{L}^p([0,T], \mathbb{R})$ is called a simple or elementary process if there exists a partition $0=t_0<t_1<...<t_n=T$ such that $$\begin{aligned} X_s(\omega)=\sum_{j=0}^{n-1}1_{]t_j, t_{j+1}]}(s)\theta_j(\omega),\end{aligned}$$ where $\theta_j$ is a bounded $\mathcal{F}_{t_j}$-measurable random variable. 
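Returning for a moment to Proposition \[ch1quadratic\]: its moment identities are easy to confirm by simulation, and the compensated Poisson process will reappear as the driving martingale of the jump integrals in the later chapters. The short sketch below (Python/NumPy; the intensity and the two time points are assumed values) checks the zero-mean increment, the variance $\lambda s$ and, through unconditional expectations, the martingale property of $\overline{N}_t^2-\lambda t$.

```python
# Monte Carlo check of the compensated Poisson moment identities (illustrative sketch;
# the intensity lam and the times t, s are assumed values).
import numpy as np

rng = np.random.default_rng(2)
lam, t, s = 3.0, 1.5, 0.7
n = 1_000_000

# increments of a Poisson process over disjoint intervals are independent Poisson variables
N_t = rng.poisson(lam * t, size=n)               # N(t)
incr = rng.poisson(lam * s, size=n)              # N(t+s) - N(t), independent of N(t)

Nbar_incr = incr - lam * s                       # compensated increment
print("E[N_bar(t+s) - N_bar(t)]     ~", Nbar_incr.mean(), "  (should be 0)")
print("E[(N_bar(t+s) - N_bar(t))^2] ~", (Nbar_incr ** 2).mean(), "  lambda*s =", lam * s)

# martingale property of N_bar(t)^2 - lambda*t, seen through its constant expectation
Nbar_t = N_t - lam * t
Nbar_ts = N_t + incr - lam * (t + s)
print("E[N_bar(t+s)^2 - lambda(t+s)] ~", (Nbar_ts ** 2).mean() - lam * (t + s))
print("E[N_bar(t)^2   - lambda t   ] ~", (Nbar_t ** 2).mean() - lam * t, "  (both ~0)")
```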
**\[ Itô’s integral\]** The Itô’s Integral of the simple process $(X_t)_{t\in\mathbb{R}}\in\mathbb{L}^2([0,T], \mathbb{R})$ is defined by $$\begin{aligned} I_t(X)=\int_0^tX(s)dW(s) :=\sum_{j=0}^{n-1}\theta_j(W_{t_{j+1}}-W_{t_j}).\end{aligned}$$ If $f$ is an elementary function in $\mathbb{L}^2([a,b],\mathbb{R})$ and $W_t$ a Brownian motion, then : 1. $\mathbb{E}\left(\int_a^bf(t)dW_t\right)=0$. 2. $\mathbb{E}\left(\int_a^bf(t)dW_t\right)^2=\int_a^b\mathbb{E}(f^2(t))dt$. <!-- --> 1. By definition we have $$\begin{aligned} \int_a^bf(t)dW_t=\sum_{j=0}^{n-1}f_j(W_{t_j+1}-W_{t_j}).\end{aligned}$$ By taking expectation in both sides, we obtain $$\begin{aligned} \mathbb{E}\left[\int_a^bf(t)dW_t\right]=\sum_{j=0}^{n-1}\mathbb{E}(f_j)\mathbb{E}(W_{t_{j+1}}-W_{t_j})=0,\end{aligned}$$ since $W_{t_{j+1}}-W_{t_j}$ is a normal distribution with mean $0$ and standard deviation $\sqrt{t_{j+1}-t_j}$. 2. $$\begin{aligned} \left(\int_a^bf(t)dW_t\right)^2&=&\left[\sum_{j=0}^{n-1}f_j(B_{t_{j+1}}-W_{t_j})\right]^2\\ &=&\sum_{j=0}^{n-1}(f_j)^2(W_{t_{j+1}}-W_{t_j})^2+\sum_{l=0}^{n-1}\sum_{k=0, k\neq l}^{n-1}f_lf_k(W_{t_{l+1}}-W_{t_l})(W_{t_{k+1}}-W_{t_k}).\end{aligned}$$ Taking expectation in both sides and using independence of the increments of Brownian motion, we get $$\begin{aligned} \mathbb{E}\left(\int_a^bf(t)dW_t\right)^2&=&\sum_{j=0}^{n-1}\mathbb{E}(f_j)^2E\left(W_{t_{j+1}}-W_{t_j}\right)^2\\ &=&\sum_{j=0}^{n-1}\mathbb{E}(f_j)^2(t_{j+1}-t_j)\\ &=&\int_a^b\mathbb{E}(f^2(t))dt.\end{aligned}$$ The following proposition can be seen in [@Oks]. For any process $X=(X_t)_{t\geq 0}\in\mathbb{M}^2([0,T], \mathbb{R})$, such that $\mathbb{E}|X_t|^2<\infty $ for all $t\geq 0$, there exist a sequence $(f^{(n)}_t)_{t\geq 0}$ of simple process such that $\mathbb{E}|f^{(n)}_t|^2<\infty$ and $$\begin{aligned} \lim_{n\longrightarrow \infty}\mathbb{E}\left[\int_0^t|X_s-f_s^{(n)}|^2ds\right]=0.\end{aligned}$$ For any process $X=(X_t)_{t\geq 0}\in\mathbb{M}^2([0,T], \mathbb{R})$, we define a stochastic integral of $X$ with respect to a Brownian motion $W$ by : $$\begin{aligned} \int_0^tX_sdW(s)=\lim_{n\longrightarrow \infty}\int_0^tf^{(n)}_sdW(s), \end{aligned}$$ where $(f^{(n)}_t)$ is the sequence of simple process converging almost surely to $X$ according to the previous proposition. Moreover, using Itô isometry for elementaries functions one can prove that the limit on this definition does not depend on the actual choice of $(f^{(n)}).$ **\[Properties of Itô integral\]**. For any process $X=(X_t)_{t\geq 0}\in\mathbb{M}^2([0,T], \mathbb{R})$ such that $\mathbb{E}|X_t|^2<\infty$, for any functions $f,g\in \mathbb{M}^2([0,T], \mathbb{R})$ and $0\leq S<U<T$, the following holds : 1. $\int_S^TfdW(t)=\int_S^UfdW(t)+\int_U^TfdW(t)$ almost surely. 2. $\int_S^T(cf+g)dW(t)=c\int_S^TfdW(t)+\int_S^TgdW(t)$, for any constant $c$. 3. $\int_S^TfdW(t)$ is $\mathcal{F}_T$-measurable. 4. $\mathbb{E}\left(\int_0^tX_sdW(s)\right)=0$. 5. $\mathbb{E}\left(\int_0^tX_sdW(s)\right)^2=\int_0^t\mathbb{E}(X_s^2)ds$. [@Oks] [@Oks] For any elementary function $f^{(n)}$ $\mathcal{F}_t$-adapted, the integral $$\begin{aligned} I_n(t, \omega)=\int_0^tf^{(n)}dW(r) \end{aligned}$$ is a martingale with respect to $\mathcal{F}_t$. 
For $t\leq s$, we have : $$\begin{aligned} \mathbb{E}[I_n(s,\omega)/\mathcal{F}_t]&=&\mathbb{E}\left[\left(\int_0^sf^{(n)}dW(r)\right)/\mathcal{F}_t\right]\\ &=&\mathbb{E}\left[\left(\int_0^tf^{(n)}dW(r)\right)/\mathcal{F}_t\right]+ \mathbb{E}\left[\left(\int_t^sf^{(n)}dW(r)\right)/\mathcal{F}_t\right]\\ &=&\int_0^tf^{(n)}dW(r)+\mathbb{E}\left[\sum_{t\leq t^{(n)}_j\leq t^{(n)}_{j+1}\leq s}f^{(n)}_j\Delta W_j/\mathcal{F}_t\right]\\ &=&\int_0^tf^{(n)}dW(r)+\sum_{t\leq t^{(n)}_j\leq t^{(n)}_{j+1}\leq s}\mathbb{E}[f^{(n)}_j\Delta W_j/\mathcal{F}_t]\\ &=&\int_0^tf^{(n)}dW(r)+\sum_{t\leq t^{(n)}_j\leq t^{(n)}_{j+1}\leq s}\mathbb{E}[\mathbb{E}[f^{(n)}_j\Delta W_j/\mathcal{F}_{t_j}]/\mathcal{F}_t]\\ &=&\int_0^tf^{(n)}dW(r)+\sum_{t\leq t^{(n)}_j\leq t^{(n)}_{j+1}\leq s}\mathbb{E}[f^{(n)}_j\mathbb{E}[\Delta W_j/\mathcal{F}_{t_j}]/\mathcal{F}_t]\\ &=&\int_0^tf^{(n)}dW(r),\hspace{0.2cm} \text{since} \hspace{0.2cm} \mathbb{E}[\Delta W_j/\mathcal{F}_{t_j}]=\mathbb{E}[\Delta W_j]=0\\ &=&I_n(t, \omega). \end{aligned}$$ **\[Generalisation\]** Let $f(t, \omega)\in \mathbb{M}^2([0, T], \mathbb{R})$ for all $t$. Then the integral $$\begin{aligned} M_t(\omega)=\int_0^tf(s, \omega)dW(s) \end{aligned}$$ is a martingale with respect to $\mathcal{F}_t$ and $$\begin{aligned} \mathbb{P}\left[\sup_{0\leq t\leq T}|M_t|\geq \lambda\right]\leq\dfrac{1}{\lambda^2}\mathbb{E}\left[\int_0^Tf^2(s, \omega)ds\right], \hspace{0.3cm} \forall\; \lambda >0 \end{aligned}$$ [@Oks]. ### One dimensional Itô Formula **\[1-dimensional Itô process\]** Let $W_t$ be a $1$-dimensional Brownian motion on $(\Omega, \mathcal{F}, \mathbb{P})$. An Itô process (or stochastic integral) is any stochastic process $X_t$ of the form $$\begin{aligned} X_t=X_0+\int_0^tu(s,\omega)ds+\int_0^tv(s, \omega)dW(s), \label{ch1Ito1} \end{aligned}$$ where $u\in \mathbb{L}^1([0,T], \mathbb{R})$ and $v\in\mathbb{L}^2([0, T], \mathbb{R})$. **\[ first $1$- dimensional Itô formula\]** Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a complete probability space, $(W_t)_{t\in \mathbb{R}_{+}}$ a one-dimensional Brownian motion and $f : \mathbb{R}\longrightarrow \mathbb{R}$ a twice differentiable function. If $(X_t)$ is any process of the form \[ch1Ito1\], then $f(X_t)$ is an Itô process and $$\begin{aligned} f(X_t)=f(X_0)+\int_0^t f'(X_s)u_sds+\dfrac{1}{2}\int_0^tf''(X_s)v_s^2ds+\int_0^tf'(X_s)v_sdW_s. \end{aligned}$$ [@Oksa]. **\[ second $1$- dimensional Itô formula\]** If in the previous proposition we consider $f :[0, \infty)\times\mathbb{R}\longrightarrow\mathbb{R}$ such that $f$ is once differentiable with respect to the first variable $t$ and twice differentiable with respect to the second variable $x$, then $f(t, X_t)$ is an Itô process and $$\begin{aligned} f(t, X_t)=f(0, X_0)+\int_0^t\dfrac{\partial f}{\partial t}(s,X_s)ds+\int_0^t\dfrac{\partial f}{\partial x}(s,X_s)u_sds+\int_0^t\dfrac{\partial f}{\partial x}(s,X_s)v_sdW_s+\dfrac{1}{2}\int_0^t\dfrac{\partial^2 f}{\partial x^2}(s,X_s)v_s^2ds, \end{aligned}$$ or in its differential form : $$\begin{aligned} df(t, X_t)=\dfrac{\partial f}{\partial t}(t, X_t)dt+\dfrac{\partial f}{\partial x}(t, X_t)dX_t+\dfrac{1}{2}\dfrac{\partial^2 f}{\partial x^2}(t, X_t)(dX_t)^2. \end{aligned}$$ $(dX_t)^2=dX_tdX_t$ is computed according to the rules $$\begin{aligned} dtdt=dW_tdt=dtdW_t=0,\hspace{1cm} dW_tdW_t=dt. \end{aligned}$$ [@Oksa]. ### Multi-dimensional Itô integral **\[m-dimensional Brownian motion\]**[@Oks]. Let $W_1, \cdots, W_m$ be $m$ independent Brownian motions. The process $W=(W_1, W_2, ..., W_m)$ is called an $m$-dimensional Brownian motion.
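To make the Itô formulas above concrete, the following sketch (an illustration only; the constants $u$, $v$, the horizon and the step size are arbitrary choices) applies the formula to $f(x)=x^2$ and the Itô process $dX_t=u\,dt+v\,dW_t$ with constant coefficients, for which it gives $d(X_t^2)=(2uX_t+v^2)\,dt+2vX_t\,dW_t$, and compares $X_T^2$ with the discretised right-hand side along one simulated path.

```python
import numpy as np

rng = np.random.default_rng(1)

u, v, X0, T, n = 0.3, 0.8, 1.0, 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)

X = X0 + np.cumsum(u * dt + v * dW)            # path of dX = u dt + v dW (exact here)
Xm = np.concatenate(([X0], X[:-1]))            # left endpoints X_{t_k}

# Ito formula for f(x) = x^2:  d(X^2) = (2 u X + v^2) dt + 2 v X dW
rhs = X0**2 + np.cumsum((2 * u * Xm + v**2) * dt + 2 * v * Xm * dW)

print("X_T^2        =", X[-1]**2)
print("Ito formula  =", rhs[-1])               # close to X_T^2 for small dt
```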
Let $\mathbb{L}^{n\times m}([0, T], \mathbb{R}^{n\times m})$ denote the set of $n\times m$ matrices $v=[v_{ij}(t, \omega)]$, $1\leq i\leq n$, $1\leq j\leq m$, where $v_{ij}(t, \omega)\in \mathbb{L}^2([0, T], \mathbb{R})$. $\int_0^tv_sdW_s$ denotes the Itô integral of $v$ with respect to the $m$-dimensional Brownian motion $W$. It can be written in its matrix form $$\begin{aligned} \int_0^TvdW(s)=\int_0^T\left(\begin{array}{ccc} v_{11}&\cdots& v_{1m}\\ .& &.\\ .& &.\\ .& &.\\ v_{n1}&\cdots &v_{nm} \end{array} \right)\left(\begin{array}{c} dW_1(s)\\ .\\ .\\ .\\ dW_m(s) \end{array} \right), \end{aligned}$$ which is an $n\times 1$ matrix (column vector) whose $i^{th}$ component is given by $$\begin{aligned} \sum_{j=1}^m\int_0^Tv_{ij}(s, \omega)dW_j(s). \end{aligned}$$ **\[$n$-dimensional Itô process\]**[@Oks]. Let $W$ be an $m$-dimensional Brownian motion and $v=[v_{i,j},\ 1\leq i\leq n, \hspace{0.2cm} 1\leq j\leq m]$ an element of $\mathbb{L}^{n\times m}([0, t], \mathbb{R}^{n\times m})$. Let $u=(u_i)_{i=1}^n$ be such that $u_i\in\mathbb{L}^2([0, T])$ for all $1\leq i\leq n$. An $n$-dimensional Itô process is any stochastic process of the form $$\begin{aligned} dX(t)=udt+vdW(t), \end{aligned}$$ which is a system of $n$ Itô processes, where the $i^{th}$ process is given by : $$\begin{aligned} dX_i(t)=u_idt+\sum_{j=1}^mv_{ij}dW_j(t). \end{aligned}$$ **\[General Itô formula\]** Let $ dX(t)=udt+vdW(t)$ be an $n$-dimensional Itô process. Let $g(t, x)=(g_1(t,x),..., g_p(t,x))$ be a function once differentiable with respect to $t$ and twice differentiable with respect to $x$. Then the process $Y(t)=g(t, X(t))$ is also a $p$-dimensional Itô process, whose components $Y_k$ are given by : $$\begin{aligned} dY_k(t)=\dfrac{\partial g_k}{\partial t}(t, X_t)dt+\sum_{i=1}^n\dfrac{\partial g_k}{\partial x_i}(t, X_t)dX_i+\dfrac{1}{2}\sum_{i=1}^n\sum_{j=1}^n\dfrac{\partial^2 g_k}{\partial x_i\partial x_j}(t, X_t)dX_idX_j, \end{aligned}$$ where $dW_idW_j=\delta_{ij}dt$ and $dW_idt=dtdW_i=0$. [@Oks]. Stochastic process with jumps and Stochastic integral with jumps ---------------------------------------------------------------- 1. A function $f : [0,T]\longrightarrow \mathbb{R}^n$ is said to be right continuous with left limits at $t\in[0,T]$ if $$\begin{aligned} f(t^+) : =\lim_{s\longrightarrow t^+}f(s) \hspace{0.5cm}\text{and}\hspace{0.2cm}f(t^-) : =\lim_{s\longrightarrow t^-}f(s)\hspace{0.2cm}\text{exist}\hspace{0.2cm} \text{and} \hspace{0.5cm} f(t^+)=f(t). \end{aligned}$$ 2. A function $f : [0,T]\longrightarrow \mathbb{R}^n$ is said to be left continuous with right limits if $$\begin{aligned} f(t^+) : =\lim_{s\longrightarrow t^+}f(s) \hspace{0.5cm}\text{and}\hspace{0.2cm}f(t^-) : =\lim_{s\longrightarrow t^-}f(s)\hspace{0.2cm}\text{exist}\hspace{0.2cm} \text{and} \hspace{0.5cm} f(t^-)=f(t).\end{aligned}$$ In the literature, the French acronyms “càdlàg” and “càglàd” denote respectively functions which are right continuous with left limits and left continuous with right limits. - If $f$ is right continuous with left limits at $t$, then $\Delta f(t)=f(t)-f(t^-)$ is called the jump of $f$ at $t$. - If $f$ is left continuous with right limits at $t$, then $\Delta f(t)=f(t^+)-f(t)$ is called the jump of $f$ at $t$. A stochastic process $X=(X_t)_{t\geq 0}$ is called a jump process if the sample path $s\longmapsto X_s$ is left continuous (càglàd) or right continuous (càdlàg) $\forall s\geq 0$. **\[Lévy process\]** A stochastic process $X=\{X_t,\hspace{0.3cm} t\geq 0\}$ is a Lévy process if the following conditions are fulfilled 1.
The increments on disjoint time intervals are independent. That is, for $0\leq t_0<t_1<...<t_n$, $\{X_{t_j}-X_{t_{j-1}},\hspace{0.3cm} 1\leq j\leq n\}$ are independent. 2. The increments of sample paths are stationary : $X_t-X_s\approx X_{t-s}$ (equality in distribution) for $0\leq s\leq t$. 3. The sample paths are right continuous with left limits. The Brownian motion and the Poisson process starting at $0$ are Lévy processes. [@Oksa] Let $\textbf{D}_{ucp}$ denote the space of càdlàg adapted processes equipped with the topology of the uniform convergence in probability (ucp) on compact sets. ucp : $H_n\longrightarrow H$ if $\forall\; t\geq 0$ $\sup\limits_{0\leq s\leq t}|H_n(s)-H(s)|\longrightarrow 0$ in probability ($A_n\longrightarrow A$ in probability if $\forall\; \epsilon>0, \exists\; n_{\epsilon}\in \mathbb{N}$ such that $n>n_{\epsilon}\Longrightarrow \mathbb{P}(|A_n-A|>\epsilon)<\epsilon$). In the sequel $\textbf{L}_{ucp}$ denotes the space of adapted càglàd processes (left continuous with right limits) equipped with the ucp topology. [@Oksa] Let $H$ be an elementary process, i.e. there exists a partition $0=t_0<t_1<...<t_n=T$ such that $$\begin{aligned} H=\sum_{j=0}^{n-1}H_j1_{[t_j, t_{j+1})}, \end{aligned}$$ where the $H_j$ are $\mathcal{F}_{t_j}$-measurable. Let $(X_t)$ be a Lévy process. The stochastic integral $\int_0^tH(s)dX(s)$ is defined by $$\begin{aligned} J_XH(t) : =\int_0^tH(s)dX(s): =\sum_{j=0}^{n-1} H_j(X(t_{j+1})-X(t_j)), \hspace{0.3cm} t\geq 0.\end{aligned}$$ [@Oksa] Let $X$ be a semimartingale, then the mapping $J_X$ can be extended to the continuous linear map $$\begin{aligned} J_X : \textbf{L}_{ucp}\longrightarrow \textbf{D}_{ucp}.\end{aligned}$$ The above proposition allows us to define a stochastic integral of the form $$\begin{aligned} \int_0^tH(s)dX(s),\end{aligned}$$ where $H\in \textbf{L}_{ucp}$. [@Oksa] For $H\in \textbf{L}_{ucp}$ we define $\int_0^tH(s)dX(s)$ by : $$\begin{aligned} \int_0^tH(s)dX(s) : =\lim_{n\longrightarrow +\infty}\int_0^tH^{n}(s)dX(s),\end{aligned}$$ where $(H^{n})$ is a sequence of simple processes converging to $H$ in ucp. Let $f\in\textbf{L}_{ucp}$ and $(\overline{N_t})$ be a compensated Poisson process. The following holds 1. $\mathbb{E}\left(\int_0^tf(s)d\overline{N}(s)\right)=0$. 2. $\mathbb{E}\left(\int_0^tf(s)d\overline{N}(s)\right)^2=\lambda\int_0^t\mathbb{E}(f(s)^2)ds$. [@Oksa] ### Itô formula for jump processes **\[Itô jump-diffusion process\]** An Itô jump-diffusion process is any process of the form $$\begin{aligned} X_t=X_0+\int_0^ta(X_s)ds+\int_0^tb(X_s)dW_s+\int_0^tc(X_s)dN_s. \label{ch1ito1} \end{aligned}$$ The coefficient $a\in\mathbb{L}^1([0,T], \mathbb{R}^n)$ is called the drift coefficient, $b\in \mathbb{L}^2([0,T], \mathbb{R}^{n\times m})$ is called the diffusion coefficient and $c(X_s)\in\mathbb{L}^2([0,T], \mathbb{R}^n)$ is called the jump coefficient. $W$ is an $m$-dimensional Brownian motion and $N$ a one-dimensional Poisson process. [@Oksa pp 6]**\[Itô formula for jump processes\]** If $(X_t)$ is a jump-diffusion process of the form \[ch1ito1\] (stated here in the one-dimensional case) and $f : \mathbb{R}\longrightarrow \mathbb{R}$ is any twice differentiable function, then $f(X_t)$ is a jump-diffusion process and satisfies the following equation $$\begin{aligned} f(X_t)=f(X_0)+\int_0^t\dfrac{\partial f}{\partial x}(X_s)a_sds+\dfrac{1}{2}\int_0^t\dfrac{\partial^2f}{\partial x^2}(X_s)b_s^2ds +\int_0^t\dfrac{\partial f}{\partial x}(X_s)b_sdW_s+\int_0^t(f(X_{s^-}+c( X_{s^-}))-f(X_{s^-}))dN_s.
\end{aligned}$$ \[ch1Itoproduct\] **\[Itô’s lemma for products\]** If $X_t$ and $Y_t$ are two Itô jump-diffusion processes, then $X_tY_t$ is an Itô jump-diffusion process and $$\begin{aligned} d(X_tY_t)=Y_tdX_t+X_tdY_t+dX_tdY_t.\end{aligned}$$ $dX_tdY_t$ is called the Itô corrective term and it is computed according to the relations $$\begin{aligned} dt.dt=dN_tdt=dtdN_t=dW_tdN_t=dN_tdW_t=0,\hspace{0.5cm} dN_tdN_t=dN_t.\end{aligned}$$ [@Oksa] Having recalled these basic notions of probability theory and stochastic processes, we are now ready to provide in the following chapter the proof of the existence and uniqueness of solutions of SDEs with jumps under global Lipschitz conditions. Existence and uniqueness of solution of the jump-diffusion Itô’s stochastic differential equations ================================================================================================== In this chapter, we give the general formulation of the compensated stochastic differential equation (CSDE) with jumps, which will be helpful to prove the existence and uniqueness of solutions of the stochastic differential equation with jumps. General formulation ------------------- Throughout this work, $||.||$ denotes the Frobenius matrix norm and $(\Omega, \mathcal{F}_t, \mathbb{P})$ denotes a complete probability space. For all $x, y\in\mathbb{R}^n$, $\langle x,y\rangle=x_1y_1+\cdots+ x_ny_n$ is the inner product. For all $a, b\in\mathbb{R}$, $ a\vee b :=\max(a,b)$. Throughout this work, we consider a jump-diffusion Itô stochastic differential equation (SDE) of the form $$\begin{aligned} dX(t)=f(X(t^-))dt+g(X(t^-))dW(t)+h(X(t^-))dN(t),\hspace{0.5cm} X(0^-)=X_0, \label{ch2jdi1}\end{aligned}$$ where $X_0$ is the initial condition, $X(t^-)=\lim\limits_{s\longrightarrow t^-}X(s)$, $W(t)$ is an $m$-dimensional Brownian motion and $N(t)$ is a $1$-dimensional Poisson process with intensity $\lambda>0$. We assume $W_t$ and $N_t$ to be both $\mathcal{F}_t$-adapted, and $f : \mathbb{R}^n\longrightarrow \mathbb{R}^n$, $g : \mathbb{R}^n \longrightarrow\mathbb{R}^{n\times m}$ and $h : \mathbb{R}^n\longrightarrow \mathbb{R}^n$. Our aim in this chapter is to prove the existence and uniqueness of the solution of equation \[ch2jdi1\] in the strong sense. **\[Strong solution\]** A stochastic process $X=\{X_t\}_{t\in[0,T]}$ is called a strong solution of the Itô jump-diffusion differential equation \[ch2jdi1\] if : 1. $X_t$ is $\mathcal{F}_t-$measurable $\forall\; t\in[0,T]$. 2. $\mathbb{P}\left[\int_0^t|f(X_s)|ds<\infty\right]=\mathbb{P}\left[\int_0^t|g(X_s)|^2ds<\infty\right] =\mathbb{P}\left[\int_0^t|h(X_s)|^2ds<\infty\right]=1$. 3. $X$ satisfies equation \[ch2jdi1\] almost surely. \[ch2assump\] Throughout this chapter, we make the following hypotheses : there exist positive constants $L$ and $K$ such that for all $x, y\in\mathbb{R}^n$, 1. $\mathbb{E}||X(0)||^2<+\infty$ and $X(0)$ is independent of the Wiener process $W(t)$ and of the Poisson process $N(t)$. 2. $f$, $g$ and $h$ satisfy the global Lipschitz condition : $$\begin{aligned} ||f(x)-f(y)||^2\vee ||g(x)-g(y)||^2\vee ||h(x)-h(y)||^2\leq L||x-y||^2\end{aligned}$$ 3. $f$, $g$ and $h$ satisfy the linear growth condition : $$\begin{aligned} ||f(x)||^2\vee ||g(x)||^2\vee ||h(x)||^2\leq K(1+||x||^2).\end{aligned}$$ The global Lipschitz condition implies the linear growth condition, so it is enough to state the assumption only for the global Lipschitz condition.
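To fix ideas about the class of equations \[ch2jdi1\], the sketch below simulates one path with a crude Euler discretisation (this is only an illustration: the linear coefficients $f(x)=ax$, $g(x)=bx$, $h(x)=cx$, the intensity and all numerical values are arbitrary choices, and this naive scheme is not one of the methods analysed in the next chapter).

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary linear coefficients satisfying the global Lipschitz condition.
a, b, c, lam = -1.0, 0.5, 0.3, 2.0          # lam = Poisson intensity
f = lambda x: a * x
g = lambda x: b * x
h = lambda x: c * x

T, n, X0 = 1.0, 1000, 1.0
dt = T / n

X = np.empty(n + 1)
X[0] = X0
for k in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))       # Brownian increment
    dN = rng.poisson(lam * dt)              # Poisson increment
    # crude Euler step for dX = f(X)dt + g(X)dW + h(X)dN
    X[k + 1] = X[k] + f(X[k]) * dt + g(X[k]) * dW + h(X[k]) * dN

print("X(T) for this path:", X[-1])
```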
A solution $\{X_t\}$ of \[ch2jdi1\] is said to be pathwise unique if any other solution $\overline{X}$ is stochastically indistinguishable from it, that is, $\mathbb{P}\{X(t)=\overline{X}(t), \hspace{0.2cm} \forall\; t\in [0,T]\}=1$. In order to prove the existence and uniqueness of the solution of \[ch2jdi1\], it is useful to write it in its compensated form. ### Compensated stochastic differential equation (CSDE) From the relation $\overline{N}(t)= N(t)-\lambda t$, we have $d\overline{N}(t)=dN_t-\lambda dt$. Substituting this latter relation in \[ch2jdi1\] leads to : $$\begin{aligned} dX(t)=f_{\lambda}(X(t^-))dt+g(X(t^-))dW(t)+h(X(t^-))d\overline{N}(t), \label{ch2jdi2}\end{aligned}$$ where $$\begin{aligned} f_{\lambda}(x) :=f(x)+\lambda h(x). \label{ch2jdi4}\end{aligned}$$ Equation \[ch2jdi2\] can be rewritten in its integral form $$\begin{aligned} X(t)=X_0+\int_0^tf_{\lambda}(X(s^-))ds+\int_0^tg(X(s^-))dW(s)+\int_0^th(X(s^-))d\overline{N}(s). \label{ch2jdi3}\end{aligned}$$ \[ch2lemma1\] If Assumptions \[ch2assump\] are satisfied, then the function $f_{\lambda}$ satisfies the global Lipschitz and the linear growth conditions with constants $L_{\lambda}=(1+\lambda)^2L$ and $K_{\lambda}=(1+\lambda)^2K$ respectively. 1. Using the global Lipschitz condition satisfied by $f$ and $h$, it follows from \[ch2jdi4\] that: $$\begin{aligned} ||f_{\lambda}(x)-f_{\lambda}(y)||^2&=&||f(x)-f(y)+\lambda (h(x)-h(y))||^2\\ &\leq& \left(\sqrt{L}||x-y||+\lambda\sqrt{L}||x-y||\right)^2=(1+\lambda)^2L||x-y||^2.\end{aligned}$$ 2. Along the same lines as above, we obtain the linear growth condition satisfied by $f_{\lambda}$. Well-posedness problem ---------------------- Based on Lemma \[ch2lemma1\] and using the fact that equations \[ch2jdi1\] and \[ch2jdi2\] are equivalent, the existence and uniqueness of the solution of equation \[ch2jdi1\] is equivalent to the existence and uniqueness of the solution of equation \[ch2jdi2\]. \[ch2th1\] If Assumptions \[ch2assump\] are fulfilled, then there exists a unique strong solution of equation \[ch2jdi2\]. In order to prove Theorem \[ch2th1\], we need the following lemma. \[ch2lemma2\] Let $X^0(t)=X_0, \hspace{0.5cm} \forall t\in [0,T]$ and $$\begin{aligned} X^{n+1}(t)=X_0+\int_0^tf_{\lambda}(X^n(s^-))ds+\int_0^tg(X^n(s^-))dW(s)+\int_0^th(X^n(s^-))d\overline{N}(s),\end{aligned}$$ then $$\begin{aligned} \mathbb{E}||X^{n+1}(t)-X^n(t)||^2 \leq \dfrac{(Mt)^{n+1}}{(n+1)!}, \label{lemI1}\end{aligned}$$ where $M$ is a positive constant depending on $ \lambda, K, L, X_0$. The proof is by induction. 1. For $n=0$, $$\begin{aligned} ||X^1(t)-X^0(t)||^2&=&\left\|\int_0^tf_{\lambda}(X_0(s^-))ds+\int_0^tg(X_0(s^-))dW(s)+\int_0^th(X_0(s^-))d\overline{N}(s)\right\|^2\nonumber\\ &\leq& 3\left\|\int_0^tf_{\lambda}(X_0(s^-))ds\right\|^2+3\left\|\int_0^tg(X_0(s^-))dW(s)\right\|^2\nonumber\\ &+&3\left\|\int_0^th(X_0(s^-))d\overline{N}(s)\right\|^2. \label{E1}\end{aligned}$$ Using the Cauchy-Schwarz inequality and the linear growth condition, it follows that: $$\begin{aligned} \mathbb{E}(I_1(t)):=\mathbb{E}\left(3\left\|\int_0^tf_{\lambda}(X_0(s^-))ds\right\|^2\right)&\leq & 3T\int_0^t\mathbb{E}||f_{\lambda}(X_0(s^-))||^2ds\nonumber\\ &\leq & 3TK_{\lambda}\int_0^t(1+\mathbb{E}||X_0||^2)ds. \label{EI1}\end{aligned}$$ From the martingale property of $\overline{N}(t)$ and the linear growth condition, it follows that : $$\begin{aligned} \mathbb{E}(I_2(t)) := \mathbb{E}\left(3\left\|\int_0^th(X_0(s^-))d\overline{N}(s)\right\|^2\right)&=&3\lambda\int_0^t\mathbb{E}||h(X_0(s^-))||^2ds\nonumber\\ &\leq & 3\lambda K\int_0^t(1+\mathbb{E}||X_0||^2)ds.
\label{EI2}\end{aligned}$$ Using the martingale property of $W(t)$ and the linear growth condition, it follows that : $$\begin{aligned} \mathbb{E}(I_3(t)) := \mathbb{E}\left(3\left\|\int_0^tg(X_0(s^-))dW(s)\right\|^2\right)\leq 3K\int_0^t(1+\mathbb{E}||X_0||^2)ds. \label{EI3}\end{aligned}$$ Taking expectation in both sides of and using estimations , and leads to : $$\begin{aligned} \mathbb{E}||X^1(t)-X^0(t)||^2\leq (3TK_{\lambda}+3K+3\lambda K)\int_0^t(1+\mathbb{E}||X_0||^2)ds\leq Mt, \label{E2}\end{aligned}$$ where $M=(3TK_{\lambda}+3K+3\lambda K) \vee (3TL_{\lambda} +3L+3\lambda L)$. 2. Let’s assume that inequality holds up to a certain rank $n\geq 0$. We have to show that it remains true for $n+1$. That is, we have to prove that $\mathbb{E}||X^{n+2}(t)-X^{n+1}(t)||^2\leq \dfrac{(Mt)^{n+2}}{(n+2)!}$. $$\begin{aligned} X^{n+2}(t)&=&X_0+\int_0^tf_{\lambda}(X^{n+1}(s^-))ds+\int_0^tg(X^{n+1}(s^-))dW(s)+\int_0^th(X^{n+1}(s^-))d\overline{N}(s),\\ X^{n+1}(t)&=&X_0+\int_0^tf_{\lambda}(X^{n}(s^-))ds+\int_0^tg(X^{n}(s^-))dW(s)+\int_0^th(X^{n}(s^-))d\overline{N}(s)\end{aligned}$$ $$\begin{aligned} ||X^{n+2}(t)-X^{n+1}(t)||^2&=&\left\|\int_0^t\left[f_{\lambda}(X^{n+1}(s^-)-f_{\lambda}(X^n(s^-))\right]ds\right.\\ &+&\int_0^t[g(X^{n+1}(s^-))-g(X^n(s^-))]dW(s)\\ &+&\left.\int_0^t[h(X^{n+1}(s^-))-h(X^n(s^-))]d\overline{N}(s)\right\|^2.\end{aligned}$$ Using the inequality $(a+b+c)^2\leq 3a^2+3b^2+3c^2$ for all $a, b,c\in\mathbb{R}$, it follows that : $$\begin{aligned} ||X^{n+2}(t)-X^{n+1}(t)||^2&\leq & 3\left\|\int_0^t[f_{\lambda}(X^{n+1}(s^-))-f_{\lambda}(X^n(s^-))ds\right\|^2\\ &+&3\left\|\int_0^t[g(X^{n+1}(s^-))-g(X^n(s^-))dW\right\|^2\\ &+&3\left\|\int_0^t[h(X^{n+1}(s^-))-h(X^n(s^-))]d\overline{N}\right\|^2.\end{aligned}$$ Using the martingale properties of $W(s)$ and $\overline{N}(s)$ and the global Lipschitz condition satisfied by $f_{\lambda}$, $g$ and $h$, it follows that : $$\begin{aligned} \mathbb{E}||X^{n+2}(t)-X^{n+1}(t)||^2&\leq&(3TL_{\lambda}+3L+3\lambda L)\int_0^t\mathbb{E}||X^{n+1}(s^-)-X^n(s^-)||^2ds.\end{aligned}$$ Using the hypothesis of induction, it follows that : $$\begin{aligned} \mathbb{E}||X^{n+2}(t)-X^{n+1}(t)||^2&\leq &M\int_0^t\dfrac{(Ms)^{n+1}}{(n+1)!}ds=\dfrac{(Mt)^{n+2}}{(n+2)!}.\end{aligned}$$ This complete the proof of the lemma. **\[Theorem \[ch2th1\]\]** **** : Let $X_1$ and $X_2$ be two solutions of . Then : $$\begin{aligned} X_1(t)=X_0+\int_0^tf_{\lambda}(X_1(s^-))ds+\int_0^tg(X_1(s^-))dW(s)+\int_0^th(X_1(s^-))d\overline{N}(s),\end{aligned}$$ $$\begin{aligned} X_2(t)=X_0+\int_0^tf_{\lambda}(X_2(s^-))ds+\int_0^tg(X_2(s^-))dW(s)+\int_0^th(X_2(s^-))d\overline{N}(s).\end{aligned}$$ Therefore, $$\begin{aligned} ||X_1(t)-X_2(t)||^2&=&\left\|\int_0^t\left[f_{\lambda}(X_1(s^-))-f_{\lambda}(X_2(s^-))\right]ds +\int_0^t[g(X_1(s^-))-g(X_2(s^-))]dW(s)\nonumber\right.\\ &+&\left.\int_0^t[h(X_1(s^-))-h(X_2(s^-))]d\overline{N}(s)\right\|^2\nonumber\\ &\leq& 3\left\|\int_0^t[f_{\lambda}(X_1(s^-))-f_{\lambda}(X_2(s^-))]ds\right\|^2+3\left\|\int_0^t[g(X_1(s^-))-g(X_2(s^-))]dW(s)\right\|^2\nonumber\\ &+&3\left\|\int_0^t[h(X_1(s^-))-h(X_2(s^-))]d\overline{N}(s)\right\|^2. 
\label{tI1}\end{aligned}$$ Using Cauchy-Schwartz inequality and the globaly Lipschitz condition, it follows that : $$\begin{aligned} \mathbb{E}(I_1(t)) &:=&\mathbb{E}\left(3\left\|\int_0^t[f_{\lambda}(X_1(s^-))-f_{\lambda}(X_2(s^-))]ds\right\|^2\right)\nonumber\\ &\leq& 3t\int_0^t\mathbb{E}\left\|f_{\lambda}(X_1(s^-))-f_{\lambda}(X_2(s^-))\right\|^2\nonumber\\ &\leq& 3tL_{\lambda}\int_0^t\mathbb{E}||X_1(s^-)-X_2(s^-)||^2ds. \label{I1}\end{aligned}$$ Using the martingale property of $W(s)$ and the global Lipschitz condition, it follows that : $$\begin{aligned} \mathbb{E}(I_2(t))&:=& \mathbb{E}\left(3\left\|\int_0^t[g(X_1(s^-))-g(X_2(s^-))]dW(s)\right\|^2\right)\nonumber\\ &\leq& 3\int_0^t\mathbb{E}\left\|g(X_1(s^-))-g(X_2(s^-))\right\|^2ds\nonumber\\ &\leq& 3L\int_0^t\mathbb{E}\left\|X_1(s^-)-X_2(s^-)\right\|^2ds. \label{I2}\end{aligned}$$ Along the same lines as above, we obtain : $$\begin{aligned} \mathbb{E}(I_3(t))&:=& \mathbb{E}\left(3\left\|\int_0^t[h(X_1(s^-))-h(X_2(s^-))]d\overline{N}(s)\right\|^2\right)\nonumber\\ &\leq& 3L\int_0^tE\left\|X_1(s^-)-X_2(s^-)\right\|^2ds. \label{I3}\end{aligned}$$ Taking expectation in both sides of and using estimations , and leads to : $$\begin{aligned} \mathbb{E}||X_1(t)-X_2(t)||^2\leq (3tL_{\lambda}+3L+3\lambda L)\int_0^t\mathbb{E}||X_1(s^-)-X_2(s^-)||^2ds. \label{tI2}\end{aligned}$$ Applying Gronwall lemma (contonuous form) to inequality leads to : $$\begin{aligned} \mathbb{E}||X_1(t)-X_2(t)||^2=0, \hspace{0.5cm} \forall t\;\in[0,T].\end{aligned}$$ It follows from Markov’s inequality that : $$\begin{aligned} \forall\; a>0,\hspace{0.5cm} \mathbb{P}(||X_1-X_2||^2>a)=0.\end{aligned}$$ Therefore, $$\begin{aligned} \mathbb{P}\left(\{X_1(t)=X_2(t), \hspace{0.2cm} t\in[0,T]\}\right)=1 \hspace{0.2cm} a.s.\end{aligned}$$ ****: From the sequence $X^n(t)$ defined in Lemma \[ch2lemma2\], it follows that : $$\begin{aligned} ||X^{n+1}(t)-X^{n}(t)||^2&=&\left\|\int_0^t[f_{\lambda}(X^n(s^-))-f_{\lambda}(X^{n-1}(s^-))]ds+\int_0^t[g(X^n(s^-))-g(X^{n-1}(s^-))]dW(s)\right.\nonumber\\ &+&\left.\int_0^t[h(X^n(s^-))-h(X^{n-1}(s^-))]d\overline{N}(s)\right\|^2. \label{ch2I3}\end{aligned}$$ Taking expectation and the supremum in the both sides of inequality and using inequality $(a+b+c)^2\leq 3a^2+3b^2+3c^2$ for all $a, b, c \in\mathbb{R}$ leads to : $$\begin{aligned} \mathbb{E}\left(\sup_{0\leq t\leq T}||X^{n+1}(t)-X^n(t)||^2\right)&\leq& 3T\sup_{0\leq t\leq T}\left(\mathbb{E}\int_0^t||f_{\lambda}(X^n(s^-)-f_{\lambda}(s^-)||^2ds\right)\nonumber\\ &+&3\mathbb{E}\left(\sup_{0\leq t\leq T}M_1(t)\right)+3\mathbb{E}\left(\sup_{0\leq t\leq T}M_2(t)\right), \label{ch2I4}\end{aligned}$$ where $$\begin{aligned} M_1(t)&=&\left\|\int_0^t[g(X^n(t))-g(X^{n-1}(t))]dW(s)\right\|^2\\ M_2(t)&=&\left\|\int_0^t[h(X^n(t))-h(X^{n-1}(t))]d\overline{N}(s)\right\|^2.\end{aligned}$$ Using the global Lipschitz condition, it follows that $$\begin{aligned} \mathbb{E}\left(\sup_{0\leq t\leq T}\int_0^t||f_{\lambda }(X^n(s^-))-f_{\lambda}(X^{n-1}(s^-))||^2ds\right)\leq L_{\lambda}\int_0^T\mathbb{E}||X^n(s^-)-X^{n-1}(s^-)||^2ds. \label{ch2M0}\end{aligned}$$ Using respectively Doop’s maximal inequality, martingale property of $W(s)$ and global Lipschitz condition satisfied by $g$, it follows that : $$\begin{aligned} \mathbb{E}\left(\sup_{0\leq t\leq T}M_1(t)\right)\leq 4M_1(t)&=&4\mathbb{E}\left\|\int_0^T[g(X^n(s^-))-X^{n-1}(s^-))]dW(s)\right\|^2\nonumber\\ &=&4\int_0^T\mathbb{E}||g^(X^n(s^-))-g(X^{n-1}(s^-))||^2ds\nonumber\\ &\leq& 4L\int_0^T\mathbb{E}||X^n(s^-)-X^{n-1}(s^-)||^2ds. 
\label{ch2M1}\end{aligned}$$ Along the same lines as above, we obtain $$\begin{aligned} \mathbb{E}\left(\sup_{0\leq t\leq T}M_2(t)\right)\leq 4\lambda L\int_0^T\mathbb{E}||X^n(s^-)-X^{n-1}(s^-)||^2ds. \label{ch2M2}\end{aligned}$$ Inserting , and in leads to : $$\begin{aligned} \mathbb{E}\left(\sup_{0\leq t\leq T}||X^{n+1}(t)-X^n(t)||^2\right)\leq (3L_{\lambda}+12L+12\lambda L)\int_0^T\mathbb{E}||X^n(s^-)-X^{n-1}(s^-)||^2ds.\end{aligned}$$ Using Lemma \[ch2lemma2\], it follows that : $$\begin{aligned} \mathbb{E}\left(\sup_{0\leq t\leq T}||X^{n+1}(t)-X^n(t)||^2\right)\leq C\int_0^T\dfrac{(Ms^-)^n}{n!}ds=C\dfrac{(MT^-)^{n+1}}{(n+1)!},\end{aligned}$$ where $C=3L_{\lambda}+12L+12\lambda L$. It follows from Doop’s and Markov’s inequalities that : $$\begin{aligned} \mathbb{P}\left(\sup_{0\leq t\leq T}||X^{n+1}(t)-X^n(t)||>\dfrac{1}{2^{n+1}}\right)\leq \dfrac{\mathbb{E}||X^{n+1}(t)-X^n(t)||^2}{\left(\dfrac{1}{(2^{n+1})}\right)^2}\leq C\dfrac{(2^2MT)^{n+1}}{(n+1)!}.\end{aligned}$$ Moreover, $$\begin{aligned} \sum_{n=0}^{\infty}\dfrac{(2^2MT)^{n+1}}{(n+1)!}=e^{2^2MT}-1<\infty.\end{aligned}$$ Using Borel Cantelli’s Lemma, it follows that : $$\begin{aligned} \mathbb{P}\left(\sup_{0\leq t\leq T}||X^{n+1}(t)-X^n(t)||>\dfrac{1}{2^{n+1}}\right)=0, \hspace{0.5cm} \text{almost surely}.\end{aligned}$$ Therefore, for almost every $\omega\hspace{0.1cm}\in\hspace{0.1cm} \Omega, \hspace{0.1cm}\exists\hspace{0.1cm} n_0(\omega)$ such that $$\begin{aligned} \sup_{0\leq t\leq T}||X^{n+1}(t)-X^n(t)||\leq\dfrac{1}{2^{n+1}},\hspace{0.2cm} \forall\: n\geq n_0(\omega).\end{aligned}$$ Therefore $X^n$ converge uniformly on $[0,T]$. Let $X$ be its limit. Futhermore, since $X^n_t$ is continuous and $\mathcal{F}_t$-measurable for all $n\in\mathbb{N}$, it follows that $X(t)$ is continuous and $\mathcal{F}_t$- measurable. It remains to prove that $X$ is a solution of equation \[ch2jdi1\]. One can see that $(X^n)$ converges in $\mathbb{L}^2([0,T]\times \Omega, \mathbb{R}^n)$. Indeed, $$\begin{aligned} ||X^m_t-X^n_t||^2_{\mathbb{L}^2}\leq \sum_{k=n}^{m-1}||X^{k+1}_t-X^k_t||_{\mathbb{L}^2} \leq \sum_{k=n}^{\infty}\dfrac{(MT)^{k+1}}{(k+1)!}\longrightarrow 0, \hspace{0.5cm}\text{as}\hspace{0.5cm} n\longrightarrow \infty. \end{aligned}$$ Therefore, $(X^n)$ is a Cauchy sequence in a Banach $\mathbb{L}^2([0,T]\times \Omega, \mathbb{R}^n)$, so its converges to $X$. Using Fatou’s lemma, it follows that : $$\begin{aligned} \mathbb{E}\left[\int_0^T||X_t-X^n_t||^2dt\right]\leq\liminf_{m\longrightarrow +\infty}\mathbb{E}\left[\int_0^T||X^m_t-X^n_t||^2dt\right]\longrightarrow 0,\hspace{0.3cm}\text{when} \hspace{0.3cm}m\longrightarrow +\infty. \end{aligned}$$ Using the global Lipschitz condition and the Ito isometry, it follows that $$\begin{aligned} \mathbb{E}\left\|\int_0^t[g(X_s)-g(X^n_s)]dW_s\right\|^2\leq L\mathbb{E}\int_0^t||X_s-X^n_s||^2ds\longrightarrow 0\hspace{0.3cm}\text{when}\hspace{0.3cm}n\longrightarrow+\infty. \end{aligned}$$ So we have the following convergence in $\mathbb{L}^2([0,T]\times\mathbb{R}^n)$ $$\begin{aligned} \int_0^tg(X^n_s)dW_s\longrightarrow \int_0^tg(X_s)dW_s. \end{aligned}$$ Along the same lines as above, the following holds convergence holds in $\mathbb{L}^2([0,T]\times\mathbb{R}^n)$ $$\begin{aligned} \int_0^tg(X^n_s)d\overline{N}_s\longrightarrow \int_0^tg(X_s)d\overline{N}_s. 
\end{aligned}$$ Using the Hölder inequality and the global Lipschitz condition, it follows that : $$\begin{aligned} \mathbb{E}\left\|\int_0^t[f_{\lambda}(X_s)-f_{\lambda}(X^n_s)]ds\right\|^2\leq TL_{\lambda}\mathbb{E}\int_0^t||X_s-X^n_s||^2ds\longrightarrow 0\hspace{0.3cm}\text{when}\hspace{0.3cm}n\longrightarrow+\infty.\end{aligned}$$ So the following convergence holds in $\mathbb{L}^2([0,T]\times\mathbb{R}^n)$ $$\begin{aligned} \int_0^tf_{\lambda}(X^n_s)ds\longrightarrow \int_0^tf_{\lambda}(X_s)ds. \end{aligned}$$ Therefore, taking the limit in the sense of $\mathbb{L}^2([0,T]\times\mathbb{R}^n)$ in both sides of the equality $$\begin{aligned} X^{n+1}(t)=X_0+\int_0^tf_{\lambda}(X^n(s^-))ds+\int_0^tg(X^n(s^-))dW(s)+\int_0^th(X^n(s^-))d\overline{N}(s)\end{aligned}$$ leads to : $$\begin{aligned} X(t)=X_0+\int_0^tf_{\lambda}(X(s^-))ds+\int_0^tg(X(s^-))dW(s)+\int_0^th(X(s^-))d\overline{N}(s).\end{aligned}$$ So $X(t)$ is a strong solution of \[ch2jdi2\]. This completes the proof of Theorem \[ch2th1\]. In general, analytical solutions of SDEs are unknown. Knowing that the exact solution exists, one tool to approximate it is numerical resolution. In the following chapters, we provide some numerical schemes for SDEs with jumps. Strong convergence and stability of the compensated stochastic theta methods ============================================================================ Our goal in this chapter is to prove the strong convergence of the compensated stochastic theta method (CSTM) and to analyse the stability behaviour of both the stochastic theta method (STM) and the CSTM under global Lipschitz conditions. The strong convergence and stability of the STM for SDEs with jumps have been investigated in [@Desmond1], while the strong convergence and stability of the CSTM for SDEs with jumps have been investigated in [@Xia1]. Most results presented in this chapter are from [@Desmond1] and [@Xia1]. In the following section, we recall the theta method, which will be used to introduce the STM and the CSTM. Theta Method ------------ Let us consider the following deterministic differential equation $\left\{\begin{array}{ll} u'(t)=f(t, u(t))\\ u(t_0)=u_0, \end{array} \right.$ which can be written in the following integral form : $$\begin{aligned} u(t)=u_0+\int_{t_0}^tf(s,u(s))ds. \label{ch3Euler1}\end{aligned}$$ ### Euler explicit method This method uses the following approximation : $$\begin{aligned} \int_a^bf(s)ds\simeq (b-a)f(a). \end{aligned}$$ So for a constant step $\Delta t$, the Euler explicit approximation of \[ch3Euler1\] is given by : $$\begin{aligned} u_{k+1}=u_k+\Delta tf(t_k,u_k),\end{aligned}$$ where $u_k :=u(t_k)$. ### Euler implicit method This method uses the following approximation $$\begin{aligned} \int_a^bf(s)ds\simeq (b-a)f(b).\end{aligned}$$ Therefore, for a constant step $\Delta t$, the Euler implicit approximation of \[ch3Euler1\] is : $$\begin{aligned} u_{k+1}=u_k+\Delta tf(t_{k+1},u_{k+1}).\end{aligned}$$ ### Theta Euler method In order to have a better approximation of the integral, we can take a convex combination of the explicit and implicit Euler methods. So we have the following approximation $$\begin{aligned} \int_a^bf(s)ds\simeq (b-a)[(1-\theta)f(a)+\theta f(b)],\end{aligned}$$ where $\theta$ is a constant satisfying $0\leq \theta\leq 1$.
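The three approximations above can be compared on a hypothetical scalar test problem; the sketch below (assuming the test ODE $u'=-u$, $u(0)=1$, an arbitrary choice) implements the theta Euler step, which for a linear ODE can be solved for $u_{k+1}$ in closed form, and prints the error at $T=1$ for $\theta=0$, $\theta=1/2$ and $\theta=1$.

```python
import numpy as np

def theta_method(lam_ode, u0, T, n, theta):
    """Theta Euler method for the linear test ODE u' = lam_ode * u."""
    dt = T / n
    u = u0
    for _ in range(n):
        # u_{k+1} = u_k + dt [ (1-theta) lam u_k + theta lam u_{k+1} ]
        u = (1 + (1 - theta) * dt * lam_ode) * u / (1 - theta * dt * lam_ode)
    return u

T, n = 1.0, 50
exact = np.exp(-T)
for theta in (0.0, 0.5, 1.0):
    approx = theta_method(-1.0, 1.0, T, n, theta)
    print(f"theta={theta:3.1f}  u_n={approx:.6f}  error={abs(approx - exact):.2e}")
```

As expected, the trapezoidal choice $\theta=1/2$ gives a markedly smaller error than the two one-sided choices for the same step size.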
Hence, for a constant step $\Delta t$, the Euler theta approximation of \[ch3Euler1\] is : $$\begin{aligned} u_{k+1}=u_k+\Delta t[(1-\theta)f(t_k, u_k)+\theta f(t_{k+1},u_{k+1}) ].\end{aligned}$$ For $\theta=1$, the Euler theta method is called the Euler backward method, which is also the Euler implicit method. ### Stochastic theta method and compensated stochastic theta method (STM and CSTM) In order to have an approximate solution of equation \[ch2jdi1\], we use the theta Euler method for the deterministic integral and the Euler explicit method for the two random parts. So we have the following approximate solution of \[ch2jdi1\], called the stochastic theta method (STM) : $$\begin{aligned} Y_{n+1}=Y_n+(1-\theta)f(Y_n)\Delta t+\theta f(Y_{n+1})\Delta t+g(Y_n)\Delta W_n+h(Y_n)\Delta N_n, \label{ch2approxi1}\end{aligned}$$ where $Y_n\approx X(t_n)$ denotes the numerical approximation of $X(t_n)$, $\Delta W_n :=W(t_{n+1})-W(t_n)$ and $ \Delta N_n := N(t_{n+1})-N(t_n)$. Applying the same rules as for the STM to equation \[ch2jdi2\] leads to : $$\begin{aligned} Y_{n+1}=Y_n+(1-\theta)f_{\lambda}(Y_n)\Delta t+\theta f_{\lambda}(Y_{n+1})\Delta t +g(Y_n)\Delta W_n+h(Y_n)\Delta\overline{N}_n, \label{ch3comp1}\end{aligned}$$ where $$\begin{aligned} f_{\lambda}(x)=f(x)+\lambda h(x).\end{aligned}$$ The numerical approximation \[ch3comp1\] is called the compensated stochastic theta method (CSTM). Strong convergence of the CSTM on a finite time interval \[0,T\] ---------------------------------------------------------------- In this section, we prove the strong convergence of order $0.5$ of the CSTM. Throughout, $T$ is a fixed constant. For $t\in [t_n,t_{n+1})$ we define the continuous time approximation of \[ch3comp1\] as follows : $$\begin{aligned} \overline{Y}(t):=Y_n +(1-\theta)(t-t_n)f_{\lambda}(Y_n)+\theta(t-t_n)f_{\lambda}(Y_{n+1})+g(Y_n)\Delta W_n(t)+h(Y_n)\Delta\overline{N}_n(t), \label{ch3contapproxi1}\end{aligned}$$ where $\Delta W_n(t):=W(t)-W(t_n)$ and $\Delta\overline{N}_n(t):=\overline{N}(t)-\overline{N}(t_n)$. The continuous approximation \[ch3contapproxi1\] can be written in the following integral form : $$\begin{aligned} \overline{Y}(t)&=&Y_0+\int_0^t\left[(1-\theta)f_{\lambda}(Y(s))+\theta f_{\lambda}(Y(s+\Delta t))\right]ds+\int_0^tg(Y(s))dW(s)\nonumber\\ &+&\int_0^th(Y(s))d\overline{N}(s), \label{ch3contapproxi2}\end{aligned}$$ where $$\begin{aligned} Y(s):=\sum_{n=0}^{\infty}\mathbf{1}_{\{t_n\leq s<t_{n+1}\}}Y_n.\end{aligned}$$ It follows from \[ch3contapproxi1\] that $\overline{Y}(t_n)=Y_n$. In other words, $\overline{Y}(t)$ and $Y_n$ coincide at the grid points. The main result of this section is formulated in the following theorem. \[ch3th1\] Under Assumptions \[ch2assump\], the continuous time approximation $\overline{Y}(t)$ given by \[ch3contapproxi2\] converges to the true solution $X(t)$ of \[ch2jdi1\] in the mean square sense. More precisely, $$\begin{aligned} \mathbb{E}\left(\sup_{0\leq t \leq T}||\overline{Y}(t)-X(t)||^2\right)\leq C(1+\mathbb{E}||X_0||^2)\Delta t,\end{aligned}$$ where $C$ is a positive constant independent of the stepsize $\Delta t$. In order to prove Theorem \[ch3th1\], we need the following two lemmas. \[ch3lemma1\] Under Assumptions \[ch2assump\], there exists a fixed constant $\Delta t_0$ such that for any stepsize $\Delta t$ satisfying $0<\Delta t<\Delta t_0<\dfrac{1}{K_{\lambda}+1}$, the following bound of the numerical solution holds $$\begin{aligned} \sup_{0\leq n\Delta t\leq T}\mathbb{E}||Y_n||^2\leq C_1(1+\mathbb{E}||X_0||^2),\end{aligned}$$ where $C_1$ is a positive constant independent of $\Delta t$.
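Before turning to the proofs, a direct implementation of the STM \[ch2approxi1\] and the CSTM \[ch3comp1\] for scalar equations is sketched below (illustrative only: the linear coefficients and parameter values are arbitrary choices, and the implicit relation in $Y_{n+1}$ is solved here by a simple fixed-point iteration, which converges when $\theta L_{\lambda}\Delta t$ is small enough, rather than exactly).

```python
import numpy as np

def stm_cstm_step(Y, dt, dW, dN, f, g, h, lam, theta, compensated, iters=50):
    """One step of the STM (compensated=False) or CSTM (compensated=True)."""
    if compensated:
        drift = lambda x: f(x) + lam * h(x)          # f_lambda
        dM = dN - lam * dt                           # compensated Poisson increment
    else:
        drift = f
        dM = dN
    explicit = Y + (1 - theta) * drift(Y) * dt + g(Y) * dW + h(Y) * dM
    Z = explicit                                     # initial guess for Y_{n+1}
    for _ in range(iters):                           # fixed-point iteration
        Z = explicit + theta * drift(Z) * dt
    return Z

# Arbitrary linear example: f(x) = a x, g(x) = b x, h(x) = c x.
a, b, c, lam = -3.0, 1.0, -0.5, 2.0
f = lambda x: a * x
g = lambda x: b * x
h = lambda x: c * x

rng = np.random.default_rng(4)
T, n, theta, Y_stm, Y_cstm = 1.0, 200, 0.5, 1.0, 1.0
dt = T / n
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    dN = rng.poisson(lam * dt)
    Y_stm = stm_cstm_step(Y_stm, dt, dW, dN, f, g, h, lam, theta, False)
    Y_cstm = stm_cstm_step(Y_cstm, dt, dW, dN, f, g, h, lam, theta, True)

print("STM  Y_N =", Y_stm)
print("CSTM Y_N =", Y_cstm)
```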
From , it follows that : $$\begin{aligned} ||Y_{n+1}-\theta \Delta t f_{\lambda}(Y_{n+1})||^2=||Y_n+(1-\theta)f_{\lambda}(Y_n)\Delta t+g(Y_n)\Delta W_n+h(Y_n)\Delta \overline{N}_n||^2. \label{ch3eq1}\end{aligned}$$ Taking expectation in both sides of leads to : $$\begin{aligned} \mathbb{E}||Y_{n+1}-\Delta t\theta f_{\lambda}(Y_{n+1})||^2&=&\mathbb{E}||Y_n||^2+(1-\theta)^2(\Delta t)^2\mathbb{E}||f_{\lambda}(Y_n)||^2+\mathbb{E}||g(Y_n)\Delta W_n||^2+\mathbb{E}||h(Y_n)\Delta\overline{N}_n||^2\nonumber\\ &+&2\mathbb{E}\langle Y_n,(1-\theta)f_{\lambda}(Y_n)\Delta t\rangle. \label{ch3eq2}\end{aligned}$$ Since $W$ is a Brwonian motion, $\Delta W_n=W_{t_{n+1}}-W_{t_n}\leftrightsquigarrow \mathcal{N}(0,t_{n+1}-t_n)$. So $\mathbb{E}(\Delta W_n)=0$. Using the properties $\mathbb{E}(\Delta W_n)=0$ and $\mathbb{E}(\Delta \overline{N}_n)=0$, we have $$\begin{aligned} \mathbb{E}\langle Y_n,g(Y_n)\Delta W_n\rangle=\mathbb{E}\langle f_{\lambda}(Y_n)\Delta, g(Y_n)\Delta W_n\rangle=\mathbb{E} \langle f_{\lambda}(Y_n)\Delta t, h(Y_n)\Delta \overline{N}_n\rangle=0.\end{aligned}$$ The martingale properties of $\Delta W_n$ and $\Delta \overline{N}_n$ leads to : $$\begin{aligned} \mathbb{E}||g(Y_n)\Delta W_n||^2=\mathbb{E}||g(Y_n)||^2\Delta t\hspace{0.5cm} \text{and}\hspace{0.5cm} \mathbb{E}||h(Y_n)\Delta \overline{N}_n||^2=\lambda \Delta t\mathbb{E}||h(Y_n)||^2.\end{aligned}$$ Hence equality becomes : $$\begin{aligned} \mathbb{E}||Y_{n+1}-\Delta t\theta f_{\lambda}(Y_{n+1})||^2&=&\mathbb{E}||Y_n||^2+(1-\theta)^2(\Delta t)^2\mathbb{E}||f_{\lambda}(Y_n)||^2+\mathbb{E}||g(Y_n)||^2\Delta t\nonumber\\ &+&2(1-\theta)\Delta t\mathbb{E}\langle Y_n, f_{\lambda}(Y_n)\rangle+\lambda \Delta t\mathbb{E}||h(Y_n)||^2. \label{ch3eq3}\end{aligned}$$ Using Cauchy-Schwartz inequality and the linear growth condition, it follows that : $$\begin{aligned} \mathbb{E}\langle Y_n, f_{\lambda}(Y_n)\rangle=\mathbb{E}||Y_nf_{\lambda}(Y_n)||&\leq&\sqrt{\mathbb{E}||Y_n||^2\mathbb{E}||f_{\lambda}(Y_n)||^2}\\ &\leq& \dfrac{1}{2}\mathbb{E}||Y_n||^2+\dfrac{1}{2}\mathbb{E}||f_{\lambda}(Y_n)||^2\\ &\leq& \dfrac{K_{\lambda}}{2}+\dfrac{1}{2}(1+K_{\lambda})\mathbb{E}||Y_n||^2.\end{aligned}$$ By the same arguments as above, it follows that : $$\begin{aligned} \mathbb{E}\langle Y_{n+1}, f_{\lambda}(Y_{n+1})\rangle\leq \dfrac{K_{\lambda}}{2}+\dfrac{1}{2}(1+K_{\lambda})\mathbb{E}||Y_{n+1}||^2.\end{aligned}$$ Since $\theta\in [0,1]$ and $\Delta t\in]0,1[$, it follows from that : $$\begin{aligned} \mathbb{E}||Y_{n+1}||^2&\leq& 2\Delta t\mathbb{E}|\langle Y_{n+1},f_{\lambda}(Y_{n+1})\rangle|+\mathbb{E}||Y_n||^2+\Delta t \mathbb{E}||f_{\lambda}(Y_n)||^2+\Delta t\mathbb{E}||g(Y_n)||^2\\ &+& 2\Delta t\mathbb{E}Y_n, f_{\lambda}(Y_n)+\lambda\Delta t\mathbb{E}||h(Y_n)||^2\\ &\leq&2\Delta t\left[\dfrac{1}{2}(1+K_{\lambda})\mathbb{E}||Y_{n+1}||^2+\dfrac{1}{2}K_{\lambda}\right]+\mathbb{E}||Y_n||^2+\Delta tK_{\lambda}(1+\mathbb{E}||Y_n||^2)+\Delta tK(1+\mathbb{E}||Y_n||^2)\\ &+&2\Delta t\left[\dfrac{1}{2}(1+K_{\lambda})\mathbb{E}||Y_n||^2+\dfrac{1}{2}K_{\lambda}\right]+\lambda \Delta tK(1+\mathbb{E}||Y_n||^2).\end{aligned}$$ Therefore, from the above inequality the following holds : $$\begin{aligned} (1-\Delta t-\Delta tK_{\lambda})\mathbb{E}||Y_{n+1}||^2&\leq &(1+\Delta tK_{\lambda}+\Delta tK+\Delta t+\Delta tK_{\lambda}+\lambda \Delta tK)\mathbb{E}||Y_n||^2\\ &+&\Delta tK_{\lambda}+\Delta tK_{\lambda}+\Delta tK+\Delta tK_{\lambda}+\lambda \Delta tK\\ &\leq & (1+2\Delta tK_{\lambda}+\Delta tK+\Delta t+\Delta t\lambda K)\mathbb{E}||Y_n||^2+3\Delta t K_{\lambda}+\Delta 
tK+\lambda\Delta t K.\end{aligned}$$ Then it follows from the previous inequality that : $$\begin{aligned} \mathbb{E}||Y_{n+1}||^2\leq\left(1+\dfrac{3K_{\lambda}\Delta t+K\Delta t+\lambda K\Delta t+2\Delta t}{1-\Delta t-K_{\lambda}\Delta t}\right)\mathbb{E}||Y_n||^2+\dfrac{3K_{\lambda}\Delta t+K\Delta t+\lambda K\Delta t+2\Delta t}{1-\Delta t-K_{\lambda}\Delta t}.\end{aligned}$$ Since $\Delta t<\Delta t_0<\dfrac{1}{K_{\lambda}+1}$, we have $1-\Delta t-K_{\lambda}\Delta t>1-\Delta t_0-K_{\lambda}\Delta t_0>0$ and then $$\begin{aligned} \mathbb{E}||Y_{n+1}||^2\leq\left(1+\dfrac{3K_{\lambda}+K+\lambda K t+2 }{1-\Delta t_0-K_{\lambda}\Delta t_0}\Delta t\right)\mathbb{E}||Y_n||^2+\dfrac{3K_{\lambda}+K+\lambda K t+2}{1-\Delta t_0-K_{\lambda}\Delta t_0}\Delta t_0.\end{aligned}$$ In the short form, we have $$\begin{aligned} \mathbb{E}||Y_{n+1}||^2\leq (1+A)\mathbb{E}||Y_{n}||^2+B, \label{ch3eq4}\end{aligned}$$ where $$\begin{aligned} A=\dfrac{3K_{\lambda}+K+\lambda K+2}{1-\Delta t_0-K_{\lambda}\Delta t_0}\Delta t\hspace{0.5cm}\text{and}\hspace{0.5cm} B=\dfrac{3K_{\lambda}+K+\lambda K+2}{1-\Delta t_0-K_{\lambda}\Delta t_0}\Delta t_0.\end{aligned}$$ Applying Gronwall lemma (discrete form) to leads to : $$\begin{aligned} \mathbb{E}||Y_n||^2<e^{nA}\mathbb{E}||X_0||^2+B\dfrac{e^{nA}-1}{e^A-1}, \label{ch3eq5}\end{aligned}$$ $$\begin{aligned} nA=\dfrac{3K_{\lambda}+K+\lambda K+2}{1-\Delta t_0-K_{\lambda}\Delta t_0}n\Delta t \leq \dfrac{3K_{\lambda}+K+\lambda K+2}{1-\Delta t_0-K_{\lambda}\Delta t_0}T,\hspace{0.2cm} \text{since}\hspace{0.2cm} n\Delta t\leq T.\end{aligned}$$ Therefore, it follows from that : $$\begin{aligned} \mathbb{E}||Y_n||^2\leq e^C\mathbb{E}||X_0||^2+B\dfrac{e^C-1}{e^D-1}, \label{ch3eq6}\end{aligned}$$ where $$\begin{aligned} C=\dfrac{3K_{\lambda}+K+\lambda K+2}{1-\Delta t_0-K_{\lambda}\Delta t_0}T\hspace{0.2cm}\text{and}\hspace{0.5cm} D=\dfrac{3K_{\lambda}+K+\lambda K+2}{1-\Delta t_0-K_{\lambda}\Delta t_0}\Delta t_0.\end{aligned}$$ It is straightforward to see that $B$, $C$ and $D$ are independents of $\Delta t$. can be rewritten into the following appropriate form : $$\begin{aligned} \mathbb{E}||Y_n||^2\leq C_1(1+\mathbb{E}||X_0||^2)\hspace{0.3cm} \hspace{0.2cm} C_1=\max\left(e^C, B\dfrac{e^C-1}{e^D-1}\right).\end{aligned}$$ This complete the proof of the lemma. \[ch3lemma2\] If the conditions of Lemma \[ch3lemma1\] are satisfied, then there exist a positive constant $C_2$ independent of $\Delta t$ such that for $s\in [t_n, t_{n+1})$ $$\begin{aligned} \mathbb{E}||\overline{Y}(s)-Y(s)||^2\vee \mathbb{E}||\overline{Y}(s)-Y(s+\Delta t)||^2\leq C_2(1+\mathbb{E}||X_0||^2)\Delta t.\end{aligned}$$ 1. The continous interpolation of the numerical solution is given by $$\begin{aligned} \overline{Y}(s)=Y_n+(1-\theta)(s-t_n)f_{\lambda}(Y_n)+\theta(s-t_n)f_{\lambda}(Y_{n+1})+g(Y_n)\Delta W_n(s)+h(Y_n)\Delta \overline{N}(s),\end{aligned}$$ where $$\begin{aligned} Y(s)=\sum_{n=0}^{\infty}1_{\{t_n\leq s<t_{n+1}\}}Y_n.\end{aligned}$$ For $s\in[t_n, t_{n+1})$, we have $Y(s)=Y_n$. Then, we have the following equality : $$\begin{aligned} \overline{Y}(s)-Y(s)&=&(1-\theta)(s-t_n)f_{\lambda}(Y_n)+\theta(s-t_n)f_{\lambda}(Y_{n+1})+g(Y_n)\Delta W_n(s)\nonumber\\ &+&h(Y_n)\Delta\overline{N}(s). 
\label{ch3eq7}\end{aligned}$$ By squaring both sides of and taking expectation, using the martingale properties of $\Delta W_n$ and $\Delta\overline{N}_n$ leads to : $$\begin{aligned} \mathbb{E}||\overline{Y}(s)-Y(s)||^2&\leq &3(1-\theta)^2(s-t_n)^2\mathbb{E}||f_{\lambda}(Y_n)||^2+3\theta^2(s-t_n)^2\mathbb{E}|f_{\lambda}(Y_{n+1})|^2+3\mathbb{E}||g(Y_n)\Delta W_n(s)||^2\nonumber\\ &+&3\mathbb{E}||h(Y_n)\Delta\overline{N}_n(s)||^2\nonumber\\ &\leq & 3(1-\theta)^2\Delta t^2\mathbb{E}||f_{\lambda}(Y_n)||^2+3\theta ^2\Delta t^2\mathbb{E}||f_{\lambda}(Y_{n+1})||^2+3\Delta t\mathbb{E}||g(Y_n)||^2\nonumber\\ &+&3\lambda\Delta t\mathbb{E}||h(Y_n)||^2. \label{ch3eq8}\end{aligned}$$ By using the linear growth condition and the fact that $\theta\in[0,1]$, it follows from that : $$\begin{aligned} \mathbb{E}||\overline{Y}(s)-Y(s)||^2&\leq &3\Delta tK_{\lambda}(1+\mathbb{E}||Y_n||^2)+3\Delta K_{\lambda}(1+\mathbb{E}||Y_{n+1}||^2)\nonumber\\ &+&3\Delta tK(1+\mathbb{E}||Y_n||^2)+3\Delta t\lambda K(1+\mathbb{E}||Y_n||^2). \label{ch3eq9}\end{aligned}$$ Now by application of Lemma \[ch3lemma2\] to , it follows that there exist a constant $C_1>0$ independent of $\Delta t$ such that : $$\begin{aligned} \mathbb{E}||\overline{Y}(s)-Y(s)||^2\leq C_1(1+\mathbb{E}||X_0||^2)\Delta t.\end{aligned}$$ 2. For $s\in[t_n, t_{n+1})$, $s+\Delta t \in[t_{n+1}, t_{n+2})$ and then $Y(s+\Delta t)=Y_{n+1}$. So it follows from that : $$\begin{aligned} Y(s+\Delta t)&=&Y_{n+1}\\ &=&Y_n+(1-\theta)(t_{n+1}-t_n)f_{\lambda}(Y_n)+\theta(t_{n+1}-t_n)f_{\lambda}(Y_{n+1})+g(Y_n)\Delta W_n+h(Y_n)\Delta\overline{N}_n.\end{aligned}$$ So we have $$\begin{aligned} \overline{Y}(s)-Y(s+\Delta t)&=&(1-\theta)(s-t_{n+1})f_{\lambda}(Y_n)+\theta(s-t_{n+1})f_{\lambda}(Y_{n+1})+g(Y_n)\left(W(s)-W(t_{n+1})\right)\nonumber\\ &+&h(Y_n)\left(\overline{N}(s)-\overline{N}(t_{n+1})\right). \label{ch3eq10}\end{aligned}$$ By squaring both sides of , taking expectation and using martingale properties of $\Delta W_n$ and $\Delta\overline{N}_n$, it follows that $$\begin{aligned} \mathbb{E}||\overline{N}(s)-Y(s+\Delta t)||^2\leq 3\Delta t\mathbb{E}||f_{\lambda}(Y_n)||^2+3\Delta t\mathbb{E}||f_{\lambda}(Y_{n+1})||^2+3\Delta t\mathbb{E}||g(Y_n)||^2+3\lambda \Delta t \mathbb{E}||h(Y_n)||^2.\end{aligned}$$ Applying respectively the linear growth condition and Lemma \[ch3lemma2\] to the previous inequality, it follows that there exist a positive constant $C_2$ independent of $\Delta t$ such that : $$\begin{aligned} \mathbb{E}||\overline{Y}(s)-Y(s+\Delta t)||^2\leq C_2(1+\mathbb{E}||X_0||^2)\Delta t.\end{aligned}$$ This complete the proof of Lemma \[ch3lemma2\]. Now, we are ready to give the proof of Theorem \[ch3th1\]. 
**\[Theorem \[ch3th1\]\]** From equations and , it follows that : $$\begin{aligned} ||\overline{Y}(s)-X(s)||^2&=&\left\|\int_0^s[(1-\theta)(f_{\lambda}(Y(r))-f_{\lambda}(X(r^-))+\theta(f_{\lambda}(Y+\Delta t))-f_{\lambda}(X(r^-))]dr\right.\\ &+&\left.\int_0^s(g(Y(r))-g(X(r^-)))dW(r)+\int_0^s(h(Y(r))-h(X(r^-)))d\overline{N}(r)\right\|^2.\end{aligned}$$ Taking expectation in both sides of the above equality and using the inequality $(a+b+c)^2\leq 3a^2+3b^2+3c^2$ for all $a, b, c\in\mathbb{R}$ leads to : $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq s\leq t}||\overline{Y}(s)-X(s)||^2\right]\leq 3\mathbb{E}\left(\sup_{0\leq s\leq t}M_1(t)\right)+3\mathbb{E}\left(\sup_{0\leq s\leq t}M_2(t)\right)+3\mathbb{E}\left(\sup_{0\leq s\leq t}M_3(t)\right), \label{ch3eq11}\end{aligned}$$ where $$\begin{aligned} M_1(t)=\left\|\int_0^s[(1-\theta)(f_{\lambda}(Y(r))-f_{\lambda}(X(r^-))]dr\right\|^2,\hspace{0.5cm} M_2(s)=\left\|\int_0^s[g(Y(r))-g(X(r^-))]dW(r)\right\|^2\end{aligned}$$ $$\begin{aligned} \text{and}\hspace{0.5cm} M_3(s)=\left\|\int_0^s[h(Y(r))-h(X(r^-))]d\overline{N}(r)\right\|^2.\end{aligned}$$ Using Holder inequality, it follows that : $$\begin{aligned} M_1(s)\leq s\int_0^s\left\|(1-\theta)(f_{\lambda}(Y(r))-f_{\lambda}(X(r^-))+\theta(f_{\lambda}(Y(r+\Delta t))-f_{\lambda}(X(r^-))\right\|^2dr. \label{ch3eq12}\end{aligned}$$ Using the convexity of the application $x\longmapsto ||x||^2$, it follows from that $$\begin{aligned} M_1(s)\leq s\int_0^s(1-\theta)||f_{\lambda}(Y(r))-f_{\lambda}(X(r^-))||^2dr+s\int_0^s\theta||f_{\lambda}(Y(r+\Delta t))-f_{\lambda}(X(r^-))||^2dr.\end{aligned}$$ Taking the supremum in both sides of the above inequality and using the global Lipschitz condition satisfied by $f_{\lambda}$, and then taking expectation it follows that $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq s\leq t}M_1(s)\right]\leq t(1-\theta)L_{\lambda}\int_0^t\mathbb{E}||Y(r)-X(r^-)||^2dr+t\theta L_{\lambda}\int_0^t\mathbb{E}||Y(r)-X(r^-)||^2dr. \label{ch3eq13}\end{aligned}$$ Using Doop’s inequality, it follows that : $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq s\leq t}M_2(s)\right]\leq 4\sup_{0\leq s\leq t}\mathbb{E}[M_2(s)]=4\sup_{0\leq s\leq t}\int_0^s\mathbb{E}||g(Y(r))-g(X(r^-))||^2dr.\end{aligned}$$ Using the global Lipschitz condition satisfied by $g$, it follows that : $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq s\leq t}M_2(s)\right]\leq 4L\int_0^t\mathbb{E}||Y(r)-X(r^-)||^2dr. \label{ch3eq14}\end{aligned}$$ Along the same lines as above, we have $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq s\leq t}M_3(s)\right]=4\lambda L\int_0^t\mathbb{E}||Y(r)-X(r^-)||^2dr. \label{ch3eq15}\end{aligned}$$ Inserting , and in leads to : $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq s\leq t}||\overline{Y}(s)-X(s)||^2\right]&\leq& 3T(1-\theta)L_{\lambda}\int_0^t\mathbb{E}||Y(r)-X(r^-)||^2dr\nonumber\\ &+&3T\theta L_{\lambda}\int_0^t\mathbb{E}||Y(r)-X(r^-)||^2dr\nonumber\\ &+&12L\int_0^t\mathbb{E}||Y(r)-X(r^-)||^2dr\nonumber\\ &+&12\lambda L\int_0^t\mathbb{E}||Y(r)-X(r^-)||^2dr. 
\label{ch3eq16}\end{aligned}$$ Using the fact that $$\begin{aligned} ||Y(r)-X(r^-)||^2=||(Y(r)-\overline{Y}(r)-(X(r^-)-\overline{Y}(r))||^2\leq 2||Y(r)-\overline{Y}(r)||^2+2||X(r^-)-\overline{Y}(r)||^2,\end{aligned}$$ it follows from that : $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq s\leq t}||\overline{Y}(s)-X(s)||^2\right]&\leq & 6T(1-\theta)L_{\lambda}\int_0^t\left[\mathbb{E}||Y(r)-\overline{Y}(r)||^2+\mathbb{E}||\overline{Y}(r)-X(r^-)||^2\right]dr\\ &+&6T\theta L_{\lambda}\int_0^t\left[\mathbb{E}||Y(r)-\overline{Y}(r)||^2+\mathbb{E}||\overline{Y}(r)-X(r^-)||^2\right]dr\\ &+&24L(1+\lambda)\int_0^t\left[\mathbb{E}||Y(r)-\overline{Y}(r)||^2+\mathbb{E}||\overline{Y}(r)-X(r^-)||^2\right]dr.\end{aligned}$$ Using lemma \[ch3lemma2\] in the above inequality, it follows that $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq s\leq t}||\overline{Y}(s)-X(s)||^2\right]&\leq & [6TL_{\lambda}+24L(1+\lambda)]\int_0^t\mathbb{E}\left[\sup_{0\leq r\leq s}||\overline{Y}(r)-X(r)||^2\right]ds\nonumber\\ &+&[6T^2L_{\lambda}+24TL(1+\lambda)]C_2(1+\mathbb{E}||X_0||^2)\Delta t. \label{ch3eq17}\end{aligned}$$ Applying Gronwall lemma (continous form) to leads to the existence of a positive constant $C$ independent of $\Delta t$ such that $$\begin{aligned} \mathbb{E}\left[\sup_{0\leq s\leq t}||\overline{Y}(s)-X(s)||^2\right]\leq C(1+\mathbb{E}||X_0||^2)\Delta t.\end{aligned}$$ This complete the proof of Theorem \[ch3th1\]. The strong convergence of the STM has been studied in [@Desmond1]. Since STM and CSTM convergence strongly to the exact slution, it is interesting to study their stability behaviours. Linear mean-square stability of the CSTM ----------------------------------------- In this section, we focus on the linear mean-square stability. Let’s consider the following linear test equation with real coefficients $$\begin{aligned} dX(t)=aX(t^-)dt+bX(t^-)dW(t)+cX(t^-)dN(t), \hspace{0.5cm} X(0)=X_0. \label{ch3lin1}\end{aligned}$$ The exact solution $X$ of SDEs is said to be exponentially mean-square stable if there exist constants $\alpha>0$ and $L>0$ such that : $$\begin{aligned} \mathbb{E}||X(t)||^2\leq Le^{-\alpha t}\mathbb{E}||X(0)||^2.\end{aligned}$$ 1. The numerical solution $X_n$ of SDEs is said to be exponentially mean-square stable if there exist constants $\alpha>0$ and $L>0$ such that : $$\begin{aligned} \mathbb{E}||X_n||^2\leq Le^{-\alpha t}\mathbb{E}||X(0)||^2.\end{aligned}$$ 2. The numerical solution $X_n$ of SDEs is said to be mean-square stable if there exist constants $0<L<1$ such that : for all $n\in[0,T]$ $$\begin{aligned} \mathbb{E}||X_{n+1}||^2<L\mathbb{E}||X_n||^2.\end{aligned}$$ 3. A numerical method is said to be A-stable if it is stable for any stepsize. It is proved in [@Desmond1] that the exact solution of have the following stability property : $$\begin{aligned} \lim_{t\longrightarrow \infty}\mathbb{E}||X(t)||^2=0\Longleftrightarrow l := 2a+b^2+\lambda c(2+c)<0, \label{ch3lin2}\end{aligned}$$ where $\lambda$ is the intensity of the poisson precess $(N_t)_{t\geq 0}$. Under condition , the numerical solution of produced by compensated stochastic theta method is mean-square stable for any stepsize $\Delta t>0$ if and only if $\dfrac{1}{2}\leq\theta \leq 1$. 
For $0\leq \theta<\dfrac{1}{2}$ this numerical solution is mean-square stable for any stepsize $\Delta t>0$ satisfying : $$\begin{aligned} \Delta t<\dfrac{-l}{(1-2\theta)(a+\lambda c)^2}.\end{aligned}$$ Applying the compensated theta method to gives $$\begin{aligned} Y_{n+1}=Y_n+(1-\theta)\Delta t(a+\lambda c)Y_n+\theta \Delta t(a+\lambda c)Y_{n+1}+bY_n\Delta W_n+cY_n\Delta\overline{N}_n.\end{aligned}$$ So we have $$\begin{aligned} (1-\theta \Delta ta-\theta \Delta t\lambda c)Y_{n+1}=Y_n+(1-\theta)\Delta t(a+\lambda c)Y_n+bY_n\Delta W_n+c\Delta\overline{N}_n.\end{aligned}$$ It follows that : $$\begin{aligned} (1-\theta \Delta ta-\theta \Delta t\lambda c)^2\mathbb{E}||Y_{n+1}||^2=[1+(1-\theta)\Delta t(a+\lambda c)]^2\mathbb{E}||Y_n||^2+b^2\Delta t\mathbb{E}||Y_n||^2+c^2\lambda\Delta t\mathbb{E}||Y_n||^2.\end{aligned}$$ Therefore, $$\begin{aligned} \mathbb{E}||Y_{n+1}||^2=\dfrac{1+[2(1-\theta)(a+\lambda c)+b^2+c^2\lambda]\Delta t+(1-\theta)^2(a+\lambda c)^2\Delta t^2}{(1-\theta \Delta t a-\theta \Delta t\lambda c)^2}\mathbb{E}||Y_n||^2.\end{aligned}$$ It follows that $\mathbb{E}||Y_n||^2$ is a geometric sequence which converge if and only if $$\begin{aligned} \dfrac{1+[2(1-\theta)(a+\lambda c)+b^2+c^2\lambda]\Delta t+(1-\theta)^2(a+\lambda c)^2\Delta t^2}{(1-\theta \Delta t a-\theta \Delta t\lambda c)^2}<1.\end{aligned}$$ That is if and only if $$\begin{aligned} (1-2\theta)(a+\lambda c)^2\Delta t<-l. \label{ch3lin3}\end{aligned}$$ It follows that : - If $\dfrac{1}{2}\leq \theta\leq 1$, then the condition is satisfied for any stepsize. And then the numerical solution is mean-square stable for any stepsize. - If $0\leq\theta<\dfrac{1}{2}$, then it follows from that if $0<\Delta t<\dfrac{-l}{(1-2\theta)(a+\lambda c)^2}$, the numerical method is stable. Changing $c$ to $-2-c$ does not affect the mean-square stability conditon . Hence the exact solution of have the same stability property under this transformation. It is interesting to look for what happens to the numerical solution under this transformation. A numerical method applied to is said to be jump symmetric if whenever stable (unstable) for $\{a,b,c,\lambda , \Delta t\}$ it is also stable (unstable) for $\{a,b,-2-c,\lambda , \Delta t\}$. The compensated stochastic theta method applied to is jump symmetric if and only if $\theta =\dfrac{1}{2}$. 1. For $\theta=\dfrac{1}{2}$, clearly the stability condition of the numerical solution is equivalent to the stability condition of the exact solution. Since the stability condition is invariant under the transformation $c\longmapsto -2-c$, it follows that the jump symmetric property holds. 2. If $\theta \neq \dfrac{1}{2}$, the right hand side of remains the same under the transformation $c\longmapsto -2-c$, but the left hand side changes. Therefore the jump symmetry property does not holds. If the exact solution of the problem is mean-square stable, then for $\dfrac{1}{2}<\theta\leq 1$, it follows from the stability property of the CSTM is preserved under the transformation $c\longmapsto -2-c$. Nonlinear mean-square stability ------------------------------- This section is devoted to the nonlinear mean-square analysis. Troughout, this section, we make the following assumptions. 
\[ch3assump1\] We assume that there exist constants $\mu, \sigma, \gamma$ such that for all $x,y\in\mathbb{R}^n$ $$\begin{aligned} \langle x-y, f(x)-f(y)\rangle\leq \mu||x-y||^2\label{ch3nonlin1}\\ ||g(x)-g(y)||^2\leq \sigma||x-y||^2\nonumber\\ ||h(x)-h(y)||^2\leq \gamma||x-y||^2.\nonumber\end{aligned}$$ Usually, condition is called “one-sided Lipschitz condition”. ### Nonlinear mean-square stability of the exact solution [@Desmond2 Theorem 4, pp 13] \[ch3th2\] Under assumptions \[ch3assump1\], any two solutions $X(t)$ and $Y(t)$ of the SDEs with jumps with $\mathbb{E}||X_0||^2<\infty$ and $\mathbb{E}||Y_0||^2<\infty$ satisfy the following property $$\begin{aligned} \mathbb{E}||X(t)-Y(t)||^2\leq \mathbb{E}||X_0-Y_0||^2e^{\alpha t},\end{aligned}$$ where $\alpha : =2\mu+\sigma +\lambda\sqrt{\gamma}(\sqrt{\gamma}+2)$. Hence, the condition $\alpha <0$ is sufficient for the exponential mean-square stability property. The two solutions $X(t)$ and $Y(t)$ of satisfy respectively $$\begin{aligned} dX(t)=f(X(t^-))dt+g(X(t^-))dW(t)+h(X(t^-))dN(t)\end{aligned}$$ and $$\begin{aligned} dY(t)=f(Y(t^-))dt+g(Y(t^-))dW(t)+h(Y(t^-))dN(t).\end{aligned}$$ Applying Itô’s lemma for product (Lemma \[ch1Itoproduct\]) to the stochastic process $Z(t)=||X(t)-Y(t)||^2$ leads to $$\begin{aligned} dZ(t)&=&2\langle X(t^-)-Y(t^-),d(X(t^-))-d(Y(t^-))\rangle+||d(X(t^-))-d(Y(t^-))||^2\\ &=&\left[2\langle X(t^-)-Y(t^-), f(X(t^-))-f(Y(t^-))\rangle+2\lambda \langle X(t^-)-Y(t^-), h(X(t^-))-h(Y(t^-))\rangle\right.\\ &+&\left.||g(X(t^-))-g(Y(t^-))||^2+\lambda||h(X(t^-))-h(Y(t^-))||^2\right]dt+dM_t,\end{aligned}$$ where $M_t$ is a martingale and where we used the following rule of calculation $$\begin{aligned} dtdt=dtdW(t)=0,\hspace{0.5cm} dN_tdN(t)=dN_t,\hspace{0.5cm} dW_tdW_t=dt\hspace{0.5cm} and \hspace{0.2cm}dtdN(t)=dW_tdN_t=0.\end{aligned}$$ Using Assumptions \[ch3assump1\], ones get : $$\begin{aligned} d||X(t)-Y(t)||^2&\leq& [2\mu||X(t^-)-Y(t^-)||^2+2\lambda\sqrt{\gamma}||X(t^-)-Y(t^-)||^2+\sigma||X(t^-)-Y(t^-)||^2\\ &+&\lambda\gamma||X(t^-)-Y(t^-)||^2]dt +dM(t),\end{aligned}$$ So we have $$\begin{aligned} d||X(t^-)-Y(t^-)||^2\leq[2\mu+\sigma+\lambda\sqrt{\gamma}(\sqrt{\gamma}+2)]||X(t^-)-Y(t^-)||^2dt+dM(t),\end{aligned}$$ which can be writen into its following integral form : $$\begin{aligned} ||X(t^-)-Y(t^-)||^2\leq [2\mu+\sigma+\lambda\sqrt{\gamma}(\sqrt{\gamma}+2)]\int_0^t||X(s^-)-Y(s^-)||^2ds+\int_0^tdM(s). \label{ch3nonlin2}\end{aligned}$$ Taking expectation in both sides of and using the fact that $\mathbb{E}\left(\int_0^tdM(s)\right)=0$ leads to : $$\begin{aligned} \mathbb{E}||X(t^-)-Y(t^-)||^2\leq [2\mu+\sigma+\lambda\sqrt{\gamma}(\sqrt{\gamma}+2)]\int_0^t\mathbb{E}||X(s^-)-Y(s^-)||^2ds. \label{ch3nonlin3}\end{aligned}$$ Applying Gronwall lemma ( continuous form ) to leads to : $$\begin{aligned} \mathbb{E}||X(t^-)-Y(t^-)||^2\leq \mathbb{E}||X_0-Y_0||^2e^{[2\mu+\sigma+\lambda\sqrt{\gamma}(\sqrt{\gamma}+2)]t}.\end{aligned}$$ This complete the proof of Theorem \[ch3th2\]. 
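A rough Monte Carlo illustration of this bound is sketched below (all choices are assumptions made for the example: the coefficients $f$, $g$, $h$, the resulting constants $\mu=-1$, $\sigma=0.25$, $\gamma=0.01$, the intensity $\lambda=1$, and the use of a fine-step Euler discretisation with common noise in place of the two exact solutions).

```python
import numpy as np

rng = np.random.default_rng(3)

# Example coefficients satisfying the assumptions (mu=-1, sigma=0.25, gamma=0.01).
f = lambda x: -2.0 * x + np.sin(x)
g = lambda x: 0.5 * x
h = lambda x: 0.1 * np.sin(x)
lam = 1.0
mu, sigma, gamma = -1.0, 0.25, 0.01
alpha = 2 * mu + sigma + lam * np.sqrt(gamma) * (np.sqrt(gamma) + 2)

T, n, M = 1.0, 500, 20_000           # horizon, time steps, Monte Carlo paths
dt = T / n
X = np.full(M, 1.0)                  # X_0 = 1
Y = np.full(M, 0.0)                  # Y_0 = 0
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), M)
    dN = rng.poisson(lam * dt, M)
    # same noise for both solutions; fine Euler step as a proxy for the exact flows
    X = X + f(X) * dt + g(X) * dW + h(X) * dN
    Y = Y + f(Y) * dt + g(Y) * dW + h(Y) * dN

print("alpha                      =", alpha)                  # negative here
print("E|X(T)-Y(T)|^2 (simulated) =", np.mean((X - Y) ** 2))  # should lie below the bound
print("|X_0-Y_0|^2 e^{alpha T}    =", 1.0 * np.exp(alpha * T))
```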
For the linear test equation \[ch3lin1\], the one-sided Lipschitz and the global Lipschitz conditions become $$\begin{aligned} \langle x-y, f(x)-f(y)\rangle=a|x-y|^2\\ |g(x)-g(y)|^2=b^2|x-y|^2\\ |h(x)-h(y)|^2=c^2|x-y|^2.\end{aligned}$$ Along the same lines as for the nonlinear case, we obtain : $$\begin{aligned} \mathbb{E}|X(t^-)|^2=\mathbb{E}|X_0|^2e^{(2a+b^2+\lambda c(2+c))t}.\end{aligned}$$ Therefore, we have the following equivalence for the linear mean-square stability $$\begin{aligned} \lim_{t\longrightarrow +\infty}\mathbb{E}|X(t)|^2=0\Longleftrightarrow l:= 2a+b^2+\lambda c(2+c)<0.\end{aligned}$$ Based on Theorem \[ch3th2\], it is interesting to analyse whether or not the numerical solutions reproduce the mean-square stability of the exact solution. ### Nonlinear mean-square stability of the numerical solutions **\[Stability of the stochastic theta method\]**\[ch3th3\] Under Assumptions \[ch3assump1\] and the further hypothesis $\alpha<0$, for $$\begin{aligned} \Delta t< \dfrac{-\alpha}{\lambda^2\gamma},\end{aligned}$$ the Euler backward method (STM with $\theta=1$) applied to equation \[ch2jdi1\] is mean-square stable in the sense that $$\begin{aligned} \mathbb{E}||X_n-Y_n||^2\leq \mathbb{E}||X_0-Y_0||^2e^{\beta_1(\Delta t)n\Delta t},\end{aligned}$$ where $$\begin{aligned} \beta_1(\Delta t):=\dfrac{1}{\Delta t}\ln\left(\dfrac{1+(\sigma+\lambda\gamma+2\lambda\sqrt{\gamma})\Delta t+\lambda^2\gamma\Delta t^2}{1-2\mu\Delta t}\right).\end{aligned}$$ Let us introduce the following notations : $$\begin{aligned} \Delta Z_n=X_n-Y_n, \hspace{0.2cm} \Delta f_n=f(X_n)-f(Y_n),\hspace{0.2cm}\Delta g_n=g(X_n)-g(Y_n), \hspace{0.2cm} \Delta h_n=h(X_n)-h(Y_n).\end{aligned}$$ If $\theta =1$, the numerical approximation \[ch2approxi1\] applied to $X$ and $Y$ gives : $$\begin{aligned} Y_{n+1}=Y_n+f(Y_{n+1})\Delta t+g(Y_n)\Delta W_n +h(Y_n)\Delta N_n\\ X_{n+1}=X_n+f(X_{n+1})\Delta t+g(X_n)\Delta W_n +h(X_n)\Delta N_n.\end{aligned}$$ So we have : $$\begin{aligned} ||\Delta Z_{n+1}-\Delta f_{n+1}\Delta t||^2=||\Delta Z_n+\Delta g_n\Delta W_n+\Delta h_n\Delta N_n||^2.
\label{ch3nonlin5}\end{aligned}$$ Using the independence of $\Delta W_n$ and $\Delta N_n$ and the fact that $$\begin{aligned} \mathbb{E}|\Delta N_n|^2&=&var(\Delta N_n)+\left(\mathbb{E}(\Delta N_n)\right)^2=\lambda\Delta t+\lambda^2\Delta t^2\\ \mathbb{E}||\Delta W_n||^2&=&\Delta t, \hspace{0.5cm} \mathbb{E}||\Delta W_n||=0, \hspace{0.5cm} \mathbb{E}|\Delta N_n|=\lambda\Delta t,\end{aligned}$$ we obtain from the following estimation : $$\begin{aligned} \mathbb{E}||\Delta Z_{n+1}||^2-2\Delta t\mathbb{E}\langle \Delta Z_{n+1}, \Delta f_{n+1}\rangle &\leq& \mathbb{E}||\Delta Z_n||^2+\Delta t\mathbb{E}||\Delta g_n||^2+\lambda \Delta t(1+\lambda \Delta t)\mathbb{E}||\Delta h_n||^2\\ &+&2\lambda\Delta t\mathbb{E}\langle\Delta Z_n, \Delta h_n\rangle.\end{aligned}$$ Using the one-sided Lipschitz condition and the global Lipschitz condition, it follows that : $$\begin{aligned} \mathbb{E}||\Delta Z_{n+1}||^2 &\leq& 2\Delta t\mu \mathbb{E}||\Delta Z_{n+1}||^2+\mathbb{E}||\Delta Z_n||^2+\sigma\Delta t\mathbb{E}||\Delta Z_n||^2+\lambda\Delta t(1+\lambda\Delta t)\gamma \mathbb{E}||\Delta Z_n||^2\\ (1-2\mu\Delta t)\mathbb{E}||\Delta Z_{n+1}||^2&\leq&[1+(\sigma + \lambda \gamma +2\lambda\sqrt{\gamma})\Delta t+\lambda^2\gamma\Delta t^2]\mathbb{E}||\Delta Z_n||^2.\end{aligned}$$ The latter inequality leads to : $$\begin{aligned} \mathbb{E}||\Delta Z_{n+1}||^2 \leq \left[\dfrac{1+(\sigma + \lambda \gamma + 2\lambda\sqrt{\gamma})\Delta t+\lambda^2\gamma\Delta t^2}{1-2\mu \Delta t}\right]\mathbb{E}||\Delta Z_n||^2.\end{aligned}$$ Therefore, we have : $$\begin{aligned} \mathbb{E}||\Delta Z_n||^2 \leq \left[\dfrac{1+(\sigma + \lambda \gamma + 2\lambda\sqrt{\gamma})\Delta t+\lambda^2\gamma\Delta t^2}{1-2\mu \Delta t}\right]^n\mathbb{E}||\Delta Z_0||^2. \label{ch3nonlin6}\end{aligned}$$ In order to have stability, we impose the following condition : $$\begin{aligned} \dfrac{1+(\sigma + \lambda \gamma + 2\lambda\sqrt{\gamma})\Delta t+\lambda^2\gamma\Delta t^2}{1-2\mu \Delta t}<1. \label{ch3nonlin7}\end{aligned}$$ The hypothesis $\alpha<0$ implies that $\mu<0$. So $1-2\mu\Delta t>0$, for all positive stepsize. It follows that is equivalent to $$\begin{aligned} \Delta t< \dfrac{-\alpha}{\lambda^2\gamma}.\end{aligned}$$ Applying the equality $a^n=e^{n\ln a}$, for all $a>0$ and all $n\in\mathbb{N}$ to complete the proof of Theorem \[ch3th3\]. **\[A-stability of the compensated Euler backward method\]**\[ch3th3\] Under Assumptions \[ch3assump1\] and the further hypothesis $\alpha<0$, for any stepsize, the compensated backward Euler method ( CSTM with $\theta=1$) for equation is mean square stable in the sense that : $$\begin{aligned} \mathbb{E}||X_n-Y_n||^2\leq \mathbb{E}||X_0-Y_0||^2e^{\beta_2(\Delta t)n\Delta t},\end{aligned}$$ where $$\begin{aligned} \beta_2(\Delta t) :=\dfrac{1}{\Delta t}ln\left(\dfrac{1+(\sigma +\lambda\gamma)\Delta t}{1-2(\mu+\lambda\sqrt{\gamma})\Delta t}\right).\end{aligned}$$ We use the same notations as for the proof of Theorem \[ch3th3\] except for $\Delta f^{\lambda}_n$ for which we have $\Delta f^{\lambda}_n=f_{\lambda}(X_n)-f_{\lambda}(Y_n)$. Along the same line as for the proof of Theorem \[ch3th3\], we obtain : $$\begin{aligned} ||\Delta Z_{n+1}-\Delta t\Delta f^{\lambda}_{n+1}||^2=||\Delta Z_n+\Delta g_n\Delta W_n+\Delta h_n\Delta\overline{N}_n||^2. 
\label{ch3nonlin8}\end{aligned}$$ Furthermore, $\mathbb{E}[\Delta\overline{N}_n]=0$ and $\mathbb{E}|\Delta\overline{N}_n|^2=\lambda\Delta t$. Moreover, the drift $f_{\lambda}$ satisfies the one-sided Lipschitz estimate $$\begin{aligned} \langle x-y, f_{\lambda}(x)-f_{\lambda}(y)\rangle&=&\langle x-y, f(x)-f(y)\rangle+\lambda\langle x-y, h(x)-h(y)\rangle\\ &\leq& (\mu+\lambda\sqrt{\gamma})||x-y||^2.\end{aligned}$$ Using the independence of $\Delta W_n$ and $\Delta\overline{N}_n$, it follows from that $$\begin{aligned} \mathbb{E}||\Delta Z_{n+1}||^2\leq 2\Delta t\mathbb{E}\langle \Delta Z_{n+1}, \Delta f^{\lambda}_{n+1}\rangle+\mathbb{E}||\Delta Z_n||^2+\Delta t \mathbb{E}||\Delta g_n||^2+\lambda \Delta t\mathbb{E}||\Delta h_n||^2. \label{ch3nonlin9}\end{aligned}$$ Using the one-sided Lipschitz and the global Lipschitz conditions, it follows that $$\begin{aligned} (1-2(\mu+\lambda\sqrt{\gamma})\Delta t)\mathbb{E}||\Delta Z_{n+1}||^2\leq(1+\sigma\Delta t+\lambda\gamma\Delta t)\mathbb{E}||\Delta Z_n||^2.\end{aligned}$$ Therefore, $$\begin{aligned} \mathbb{E}||\Delta Z_n||^2\leq \left[\dfrac{1+\sigma\Delta t+\lambda\gamma\Delta t}{1-2(\mu+\lambda\sqrt{\gamma})\Delta t}\right]^n\mathbb{E}||\Delta Z_0||^2. \label{ch3nonlin10}\end{aligned}$$ In order to have stability, we need the following condition to be fulfilled $$\begin{aligned} \dfrac{1+\sigma\Delta t+\lambda\gamma\Delta t}{1-2(\mu+\lambda\sqrt{\gamma})\Delta t}<1. \label{ch3nonlin11}\end{aligned}$$ From the hypothesis $\alpha<0$, we have $2(\mu+\lambda\sqrt{\gamma})<0$ and then $1-2(\mu+\lambda\sqrt{\gamma})\Delta t>0$ for any stepsize. Hence condition is equivalent to $\alpha\Delta t<0$, which is satisfied for any stepsize. Applying the relation $a^n=e^{n\ln a}$ completes the proof of the theorem.

Numerical Experiments
---------------------

The purpose of this section is to illustrate our theoretical results on strong convergence and stability. We focus on the linear case. We consider the linear jump-diffusion Itô stochastic differential equation (SDE) $$\begin{aligned} \label{ch3num1} \left\{\begin{array}{ll} dX(t)=aX(t^-)dt+bX(t^-)dW(t)+cX(t^-)dN(t), \hspace{0.5cm} t\geq 0,\hspace{0.5cm} c>-1,\\ X(0)=1. \end{array} \right.\end{aligned}$$

### Strong convergence illustration

In order to illustrate the strong convergence result, we need the exact solution of problem . The problem has the following process as its unique solution $$\begin{aligned} X(t)=X_0\exp\left[\left(a-\dfrac{b^2}{2}\right)t+bW(t)\right](1+c)^{N(t)},\end{aligned}$$ which can be written in the following equivalent form $$\begin{aligned} X(t)=X_0\exp\left[\left(a-\dfrac{b^2}{2}\right)t+bW(t)+\ln(1+c)N(t)\right].\end{aligned}$$ 1. Obviously, the functions $f(x)=ax$, $g(x)=bx$ and $h(x)=cx$ satisfy the global Lipschitz condition and the linear growth condition. Therefore, from Theorem \[ch2th1\], it follows that the problem admits a unique solution. 2. Let us consider the following Itô jump-diffusion process $$\begin{aligned} Z(t)=\left(a-\dfrac{b^2}{2}\right)t+bW(t)+N(t)\ln(1+c).\end{aligned}$$ The function $f: \mathbb{R}\longrightarrow \mathbb{R}, \hspace{0.3cm} x\longmapsto X_0\exp(x)$ is infinitely differentiable. 
Then applying the Itô formula for jump processes to the process $Z(t)$ leads to $$\begin{aligned} f(Z_t)&=&f(Z_0)+\int_0^t\left(a-\dfrac{b^2}{2}\right)f'(Z_{s^-})ds+\dfrac{1}{2}\int_0^tb^2f''(Z_{s^-})ds+\int_0^tbf'(Z_{s^-})dW(s)\nonumber\\ &+&\int_0^t(f(Z_s)-f(Z_{s^-}))dN(s), \label{ch3num2}\end{aligned}$$ where $$\begin{aligned} f(Z_s)-f(Z_{s^-})&=&X_0\exp[Z_{s^-}+\ln(1+c)]-X_0\exp(Z_{s^-})\nonumber\\ &=&(1+c)X_0\exp(Z_{s^-})-X_0\exp(Z_{s^-})\nonumber\\ &=&cX_0\exp(Z_{s^-})=cf(Z_{s^-}) \label{ch3num3}\end{aligned}$$ and $$\begin{aligned} X(s^-)=f(Z_{s^-})=f'(Z_{s^-})=f''(Z_{s^-}). \label{ch3num4}\end{aligned}$$ Substituting and in and rewriting the result in its differential form leads to $$\begin{aligned} dX(t)=aX(t^-)dt+bX(t^-)dW(t)+cX(t^-)dN(t).\end{aligned}$$ So $X(t)$ satisfies the desired equation. For the numerical simulation, we take $a=b=1$, $c=0.5$ and $\lambda=1$. We have the following graphs for the strong error. We use $5000$ sample paths. The algorithms for the simulation are based on [@Desmond3]. We take $d t=2^{14}$ and $\Delta t=2^{p-1}$ for $p=1,2,3,4,5$. The error is computed at the end point $T=1$.

![Mean square error of the CSTM with $\theta=0$](errorl0.png)

![Mean square error of the CSTM with $\theta=0.5$](errorl1.png)

![Mean square error of the CSTM with $\theta=1$](errorl2.png)

### Mean-square stability illustration

In order to illustrate our theoretical result of A-stability, we first consider two examples. **Example I** $a=b=2$, $c=-0.9$ and $\lambda=9$. **Example II** $a=-7$, $b=c=1$ and $\lambda=4$. In both examples, the stability condition is satisfied, so the exact solutions of both examples are mean-square stable. For $\theta$ slightly less than $0.5$ (for instance $\theta=0.495$), both numerical solutions may be unstable for a large stepsize ($\Delta t=60, 25$), but for $\dfrac{1}{2}\leq \theta\leq 1$, the numerical solutions of both examples are stable. From top to bottom, we present the numerical results for Example I and Example II, respectively.

![A-stability for example I](Asta1.png)

![A-stability for example II](Asta2.png)

The following curves provide the stability comparison between the CSTM and the STM. We focus on Example I. Here, $a>0$ and $c<0$, so the jump part can stabilise the problem. In this case, from the theoretical result, the STM is stable for $\Delta t<0.0124829$. For $\Delta t=0.005$, the stability behaviours of the CSTM and the STM look the same. But for $\Delta t=0.5$ the CSTM is stable while the STM produces oscillations. For $\Delta t=0.1$, the numerical solution of the STM grows rapidly to the scale of $10^7$ and is unstable, while the numerical solution of the CSTM is stable. So the CSTM works better than the STM.

![Stability behavior of the STM](stm1.png)

![Stability behavior of the CSTM](cstm1.png)

![Stability behavior of the STM](stm2.png)

![Stability behavior of the CSTM](cstm2.png)

![Stability behavior of the STM](stm3.png)

![Stability behavior of the CSTM](cstm3.png)

In this chapter, we provided the proof of the strong convergence of order $0.5$ of the CSTM under the global Lipschitz condition. We also studied the stability behaviour of both the STM and the CSTM. We proved that the CSTM has better stability properties than the STM. Some situations in real life are modelled by SDEs with jumps where the drift coefficient does not satisfy the global Lipschitz condition. It is proved in [@Martin3] that the explicit Euler method for such equations diverges strongly. The tamed Euler scheme for SDEs without jumps is currently being investigated by many authors. 
The compensated tamed Euler scheme for SDEs with jumps is not yet well developed in the literature. In the following chapter, we establish the strong convergence of the compensated tamed Euler scheme for SDEs with jumps under a non-global Lipschitz condition. This scheme is slightly different from what has already been done in the literature.

Strong convergence of the compensated tamed Euler scheme for stochastic differential equations with jumps under a non-global Lipschitz condition
============================================================================================================================================

Under a non-global Lipschitz condition, the explicit Euler method fails to converge strongly to the exact solution, while the implicit Euler method converges but requires more computational effort. The strong convergence of the tamed Euler scheme has been investigated in [@Martin1]. This scheme is explicit and requires less computational effort than the implicit Euler method. In this chapter, we extend the strong convergence of the tamed Euler scheme by introducing its compensated form for stochastic differential equations with jumps. More precisely, we prove that under a non-global Lipschitz condition, the compensated tamed Euler scheme converges strongly with order $0.5$ to the exact solution of the SDE with jumps. This scheme is different from the one proposed in [@Kon]. As opposed to what is done in [@Kon], here we obtain the strong convergence and the rate of convergence simultaneously, under more relaxed conditions. The contents of this chapter can also be found in [@atjdm1].

Compensated tamed Euler scheme {#ch4intro}
------------------------------

In this chapter, we still consider jump-diffusion Itô stochastic differential equations (SDEs) of the form $$\begin{aligned} dX(t)= f(X(t^{-}))dt +g(X(t^{-}))dW(t)+h(X(t^{-}))dN(t), \hspace{0.5cm} X(0)=X_0, \label{ch4exactsol}\end{aligned}$$ where $W(t)$ is an $m$-dimensional Brownian motion, $f :\mathbb{R}^d\longrightarrow\mathbb{R}^d$ satisfies the one-sided Lipschitz condition and the polynomial growth condition, the functions $g : \mathbb{R}^d \longrightarrow\mathbb{R}^{d\times m}$ and $h :\mathbb{R}^d \longrightarrow\mathbb{R}^d$ satisfy the global Lipschitz condition, and $N(t)$ is a one-dimensional Poisson process with parameter $\lambda$. We recall that the compensated Poisson process $\overline{N}(t) := N(t)-\lambda t$ is a martingale satisfying the following properties: $$\begin{aligned} \mathbb{E}\left(\overline{N}(t+s)-\overline{N}(t)\right)=0,\,\qquad \qquad \mathbb{E}\vert \overline{N}(t+s)-\overline{N}(t)\vert^2=\lambda s,\qquad s, t \geqslant 0.\end{aligned}$$ We can rewrite the jump-diffusion SDE in the following equivalent form $$\begin{aligned} dX(t)= f_\lambda(X(t^{-}))dt +g(X(t^{-}))dW(t)+h(X(t^{-}))d\overline{N}(t),\end{aligned}$$ where $$f_\lambda(x)=f(x)+\lambda h(x). \label{ch4flambda}$$ To ease notation, we will use $X(t)$ instead of $X(t^{-})$. 
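As a quick illustration of the compensated formulation, the following Python sketch (with hypothetical coefficient functions $f$, $g$, $h$ and a hypothetical intensity $\lambda$) checks the two moment identities of the compensated increments recalled above and verifies that one explicit step written with $(f, \Delta N)$ coincides with the same step written with $(f_\lambda, \Delta\overline{N})$.

```python
import numpy as np

# Sketch of the compensated formulation: dNbar = dN - lambda*dt and f_lambda = f + lambda*h.
# The coefficient functions f, g, h and the intensity lam below are hypothetical examples.
lam, dt, n = 2.0, 1e-3, 10**6
rng = np.random.default_rng(1)

dN = rng.poisson(lam*dt, n)
dNbar = dN - lam*dt                              # compensated Poisson increments
print(np.mean(dNbar), np.mean(dNbar**2), lam*dt) # ~0 and ~lambda*dt, as recalled above

def f(x):     return -x - x**3                   # one-sided Lipschitz drift (hypothetical)
def g(x):     return 0.3*x                       # globally Lipschitz diffusion (hypothetical)
def h(x):     return 0.5*x                       # globally Lipschitz jump coefficient (hypothetical)
def f_lam(x): return f(x) + lam*h(x)             # shifted drift of the compensated form

# One explicit step written in the two equivalent forms, with the same increments:
x = 1.0
dW = rng.normal(0.0, np.sqrt(dt))
step_N    = x + f(x)*dt     + g(x)*dW + h(x)*dN[0]
step_Nbar = x + f_lam(x)*dt + g(x)*dW + h(x)*dNbar[0]
print(step_N, step_Nbar)                         # identical up to floating-point rounding
```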
If $T$ is the final time, the tamed Euler scheme is defined by $$X_{n+1}^{M}=X_{n}^{M}+\dfrac{\Delta t f(X_{n}^{M})}{1+ \Delta t\Vert f(X_{n}^{M}) \Vert }+g(X_{n}^{M}) \Delta W_n +h(X_{n}^{M})\Delta N_n \label{ch4tam}$$ and the compensated tamed Euler scheme is given by $$\begin{aligned} Y_{n+1}^{M}=Y_{n}^{M}+\dfrac{\Delta t f_\lambda(Y_{n}^{M})}{1+ \Delta t \Vert f_{\lambda}(Y_{n}^{M}) \Vert }+g(Y_{n}^{M}) \Delta W_n +h(Y_{n}^{M})\Delta\overline{N}_n, \label{ch4tamc}\end{aligned}$$ where $M\in\mathbb{N}$ is the number of steps and $\Delta t=\dfrac{T}{M}$ is the stepsize. Inspired by [@Martin1], we prove the strong convergence of the numerical approximation to the exact solution of .

Moment bounds for the numerical solution
-----------------------------------------

\[ch4nota1\] Throughout this chapter, $(\Omega, \mathcal{F}, \mathbb{P})$ denotes a complete probability space with a filtration $(\mathcal{F}_t)_{t\geq 0}$, and $||X||_{L^p(\Omega, \mathbb{R}^d)}$ is equal to $(\mathbb{E}||X||^p)^{1/p}$, for all $p\in[1,+\infty)$ and all $(\mathcal{F}_t)$-adapted processes $X$. For all $x, y\in\mathbb{R}^d$, we denote by $\langle x, y\rangle=x.y= x_1y_1+x_2y_2+\cdots+x_dy_d$, $||x||=\left(\langle x, x\rangle\right)^{1/2}$ and $||A||=\sup_{x\in\mathbb{R}^d, ||x||\leq 1}||Ax||$ for all $A\in\mathbb{R}^{m\times d}$. We also use the following convention: $\sum_{i=u}^na_i=0$ for $u>n$. We define the continuous-time interpolation of the discrete numerical approximation of by the family of processes $\left(\overline{Y}^M\right)_M $, $ \overline{Y}^M : [0,T]\times\Omega \longrightarrow \mathbb{R}^d $, such that $$\begin{aligned} \overline{Y}^M_t =Y^M_n+\dfrac{(t-n\Delta t)f_{\lambda}(Y^M_n)}{1+\Delta t||f_{\lambda}(Y^M_n)||}+g(Y^M_n)(W_t-W_{n\Delta t})+ h(Y^M_n)(\overline{N}_t-\overline{N}_{n\Delta t}), \label{ch4continoussolu}\end{aligned}$$ for all $M\in\mathbb{N}$, all $n\in\{0,\cdots, M-1\}$, and all $t\in[n\Delta t, (n+1)\Delta t)$. \[ch4assumption1\] Throughout this chapter, we make the following assumptions: $(A.1)$ $f,g,h\in C^1$. $(A.2)$ For all $p>0$, there exists a finite $M_p>0$ such that $\mathbb{E}||X_0||^p\leq M_p$. $(A.3)$ $g$ and $h$ satisfy the global Lipschitz condition: $$\begin{aligned} ||g(x)-g(y)||\vee ||h(x)-h(y)||\leq C||x-y||, \hspace{0.5cm} \forall\;x,y\in \mathbb{R}^d.\end{aligned}$$ $(A.4)$ $f$ satisfies the one-sided Lipschitz condition: $$\begin{aligned} \langle x-y, f(x)-f(y)\rangle\leq C||x-y||^2,\hspace{0.5cm} \forall\; x,y\in \mathbb{R}^d.\end{aligned}$$ $(A.5)$ $f$ satisfies the superlinear growth condition (also referred to below as the polynomial growth condition): $$\begin{aligned} ||f(x)-f(y)||\leq C(K+ ||x||^c+||y||^c)||x-y||, \hspace{0.5cm} \forall\; x,y\in \mathbb{R}^d,\end{aligned}$$ where $K$, $C$ and $c$ are strictly positive constants. Under conditions $(A.1)$, $(A.2)$ and $(A.3)$ of Assumptions \[ch4assumption1\], it is proved in [@Desmond2 Lemma 1] that has a unique solution with all moments bounded. We note that if Assumptions \[ch4assumption1\] are satisfied, the function $f_{\lambda}$ defined in satisfies the one-sided Lipschitz condition and the superlinear growth condition with constants $C_{\lambda} :=C(1+\lambda)$ and $K_{\lambda} :=K+\lambda$. 
Indeed, for all $x,y\in\mathbb{R}^d$, $$\begin{aligned} \langle x-y, f_{\lambda}(x)-f_{\lambda}(y)\rangle&=&\langle x-y,f(x)-f(y)\rangle+\lambda\langle x-y, h(x)-h(y)\rangle \\ &\leq& C(1+\lambda)||x-y||^2,\\ ||f_{\lambda}(x)-f_{\lambda}(y)||&\leq& ||f(x)-f(y)||+\lambda||h(x)-h(y)||\\ &\leq& C(K+\lambda +||x||^c+||y||^c)||x-y|| \\ &=&C(K_{\lambda}+||x||^c+||y||^c)||x-y||.\end{aligned}$$ Since the value of the constant does not matter too much, we will use $C$ instead of $C_{\lambda}$ and $K$ instead of $K_{\lambda}$. Throughout this work, the generic constants $C_p$ may change from one line to another. We will sometimes write $Y_n^M$ instead of $Y_n^M(\omega)$ to simplify the notation. The main result of this section is given in the following theorem. \[ch4theorem1\] Let $Y_n^M : \Omega\longrightarrow \mathbb{R}^d$ be defined by for all $M\in\mathbb{N}$ and all $ n\in\{0,\cdots, M\}$. Then the following inequality holds: $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\mathbb{E}\left[||Y_n^M||^p\right]<+\infty, \end{aligned}$$ for all $p\in[1,\infty)$. In order to prove Theorem \[ch4theorem1\], we introduce the following notation, which facilitates the computations. \[ch4notation1\] $$\begin{aligned} \alpha^M_k := \mathrm{1}_{\{||Y^M_k||\geq 1\}}\left\langle\dfrac{Y^M_k}{||Y^M_k||}, \dfrac{g(Y^M_k)}{||Y^M_k||}\Delta W^M_k\right\rangle,\\\\ \beta^M_k := \mathrm{1}_{\{||Y^M_k||\geq 1\}}\left\langle\dfrac{Y^M_k}{||Y^M_k||}, \dfrac{h(Y^M_k)}{||Y^M_k||}\Delta\overline{N}^M_k\right\rangle, \end{aligned}$$ $$\begin{aligned} \beta :=\left(1+K+2C+KTC+TC+||f_{\lambda}(0)||+||g(0)||+||h(0)||\right)^4,\\\\ D^M_n := (\beta+||X_0||)\exp\left(\dfrac{3\beta}{2}+\sup_{u\in\{0,\cdots,n\}}\sum_{k=u}^{n-1}\left[\dfrac{3\beta}{2}||\Delta W^M_k||^2+\dfrac{3\beta}{2}||\Delta\overline{N}^M_k||+\alpha^M_k+\beta^M_k\right]\right),\\ \Omega^M_n :=\{\omega\in \Omega : \sup_{k\in\{0,1,\cdots, n-1\}}D^M_k(\omega)\leq M^{1/2c}, \sup_{k\in\{0,1,\cdots,n-1\}}||\Delta W^M_k(\omega)||\leq 1,\\ \sup_{k\in\{0,1,\cdots,n-1\}}||\Delta \overline{N}^M_k(\omega)||\leq 1\}. \end{aligned}$$ In order to prove Theorem \[ch4theorem1\], we need the following lemmas. \[ch4lemma1\] For all positive real numbers $a$ and $b$, the following inequality holds $$\begin{aligned} 1+a+b^2\leq e^{a+\sqrt{2}b}. \end{aligned}$$ For $a\geq 0$ fixed, let us define the function $f(b)=e^{a+\sqrt{2}b}-1-a-b^2$. It can easily be checked that $f'(b)=\sqrt{2}e^{a+\sqrt{2}b}-2b$ and $f''(b)=2(e^{a+\sqrt{2}b}-1)$. Since $a$ and $b$ are positive, it follows that $f''(b)\geq 0$ for all $b\geq 0$. So $f'$ is a non-decreasing function. Therefore, $f'(b)\geq f'(0)=\sqrt{2}e^a>0$ for all $b\geq 0$. This implies that $f$ is a non-decreasing function. Hence $f(b)\geq f(0)=e^a-1-a$ for all $b\geq 0$. Since $1+a\leq e^a$ for all positive numbers $a$, it follows that $f(b)\geq 0$ for all positive numbers $b$, i.e., $1+a+b^2\leq e^{a+\sqrt{2}b}$ for all $b\geq0$. Therefore, for all fixed $a\geq 0$ and all $b\geq0$, $1+a+b^2\leq e^{a+\sqrt{2}b}$. The proof of the lemma is complete. Following this approach closely, we have the following main lemma. \[ch4lemma2\] The following inequality holds for all $M\in \mathbb{N}$ and all $n\in\{0,1,\cdots, M\}$ $$\begin{aligned} \mathbf{1}_{\Omega^M_n}||Y^M_n||\leq D^M_n, \label{ch4Denobound} \end{aligned}$$ where $D_n^M$ and $\Omega^M_n$ are given in Notation \[ch4notation1\]. 
Using the inequality $\dfrac{\Delta t}{1+\Delta t||f_{\lambda}(x)||}\leq T$, the global Lipschitz condition of $g$ and $h$ and the polynomial growth condition of $f_{\lambda}$ we have the following estimation on $\Omega^M_{n+1}\cap\{\omega\in \Omega : ||Y^M_n(\omega)||\leq 1\}$, for all $n\in\{0,1,\cdots, M-1\}$ $$\begin{aligned} ||Y^M_{n+1}||&\leq ||Y^M_n||+\dfrac{\Delta t||f_{\lambda}(Y^M_n)||}{1+\Delta t||f_{\lambda}(Y^M_n)||}+||g(Y^M_n)||||\Delta W^M_n||+||h(Y^M_n)||||\Delta\overline{N} ^M_n|| \nonumber \\ &\leq ||Y^M_n||+T||f_{\lambda}(Y^M_n)-f_{\lambda}(0)||+T||f_{\lambda}(0)||+ ||g(Y^M_n)-g(0)||+||g(0)||\nonumber\\ &+||h(Y^M_n)-h(0)||+||h(0)||\nonumber\\ &\leq ||Y^M_n||+TC(K+||Y^M_n||^c+||0||^c)||Y^M_n-0||+T||f_{\lambda}(0)||+C||Y^M_n||+C||Y^M_n||\nonumber\\ &+||g(0)||+||h(0)||.\nonumber \end{aligned}$$ Since $||Y^M_n||\leq 1$, it follows that : $$\begin{aligned} ||Y^M_{n+1}|| &\leq 1+KTC +TC+2C+T||f_{\lambda}(0)||+||g(0)||+||h(0)||\leq \beta. \label{ch4normY} \end{aligned}$$ Futhermore, from the numerical approximation , we have $$\begin{aligned} \label{ch4partnorm2} ||Y^M_{n+1}||^2&=&||Y^M_n||^2+\dfrac{\Delta t^2||f_{\lambda}(Y^M_n)||^2}{(1+\Delta t||f_{\lambda}(Y^M_n)||)^2}+||g(Y^M_n)\Delta W^M_n||^2+||h(Y^M_n)\Delta\overline{N}^M_n||^2\nonumber\\ &+&\dfrac{2\Delta t\langle Y^M_n, f_{\lambda}(Y^M_n)\rangle}{1+\Delta t||f_{\lambda}(Y^M_n)||}+2\langle Y^M_n,g(Y^M_n)\Delta W^M_n\rangle+2\langle Y^M_n,h(Y^M_n)\Delta\overline{N}^M_n\rangle\nonumber\\ &+&\dfrac{2\langle \Delta tf_{\lambda}(Y^M_n),g(Y^M_n)\Delta W^M_n\rangle}{1+\Delta t||f_{\lambda}(Y^M_n)||}+\dfrac{2\langle\Delta tf_{\lambda}(Y^M_n),h(Y^M_n)\Delta\overline{N}^M_n\rangle}{1+\Delta t||f_{\lambda}(Y^M_n)||}\nonumber\\ &+&2\langle g(Y^M_n)\Delta W^M_n,h(Y^M_n)\Delta\overline{N}^M_n\rangle. \end{aligned}$$ Using Cauchy-Schwartz inequality and the estimation $ \dfrac{1}{1+\Delta t||f_{\lambda}(Y^M_n)||}\leq 1$, we obtain the following inequality from : $$\begin{aligned} ||Y^M_{n+1}||^2 &\leq& ||Y^M_n||^2+\Delta t^2||f_{\lambda}(Y^M_n)||^2+||g(Y^M_n)||^2||\Delta W^M_n||^2+||h(Y^M_n)||^2|\Delta\overline{N}^M_n|^2\nonumber\\ &+&2\Delta t\left|\langle Y^M_n, f_{\lambda}(Y^M_n)\rangle\right|+2\langle Y^M_n,g(Y^M_n)\Delta W^M_n\rangle+2\langle Y^M_n, h(Y^M_n)\Delta\overline{N}^M_n\rangle\nonumber\\ &+&2\Delta t\left|\langle f_{\lambda}(Y^M_n),g(Y^M_n)\Delta W^M_n\rangle\right|+2\Delta t\left|\langle f_{\lambda}(Y^M_n), h(Y^M_n)\Delta\overline{N}^M_n\rangle\right|\nonumber\\ &+&2\langle g(Y^M_n)\Delta W^M_n, h(Y^M_n)\Delta\overline{N}^M_n\rangle. \label{ch4ine14} \end{aligned}$$ Using the estimation $2ab\leq a^2+b^2$, inequality becomes : $$\begin{aligned} ||Y^M_{n+1}||^2 &\leq & ||Y^M_n||^2+\Delta t^2||f_{\lambda}(Y^M_n)||^2+||g(Y^M_n)||^2||\Delta W^M_n||^2+||h(Y^M_n)||^2|\Delta\overline{N}^M_n|^2\nonumber\\ &+& 2\Delta t\left|\langle Y^M_n,f_{\lambda}(Y^M_n)\rangle\right|+2\langle Y^M_n,g(Y^M_n)\Delta W^M_n\rangle+2\langle Y^M_n,h(Y^M_n)\Delta\overline{N}^M_n\rangle\nonumber\\ &+&\Delta t^2||f_{\lambda}(Y^M_n)||^2+||g(Y^M_n)||^2||\Delta W^M_n||^2+\Delta t^2||f_{\lambda}(Y^M_n)||^2\nonumber\\ &+&||h(Y^M_n)||^2|\Delta\overline{N}^M_n|^2 +||g(Y^M_n)||^2||\Delta W^M_n||^2+||h(Y^M_n)||^2|\Delta\overline{N}^M_n|^2. 
\label{ch4ine15} \end{aligned}$$ Putting similars terms of inequality together, we obtain : $$\begin{aligned} ||Y^M_{n+1}||^2 &\leq & ||Y^M_n||^2+3\Delta t^2||f_{\lambda}(Y^M_n)||^2+3||g(Y^M_n)||^2||\Delta W^M_n||^2+3||h(Y^M_n)||^2|\Delta\overline{N}^M_n|^2\nonumber\\ &+&2\Delta t\left|\langle Y^M_n,f_{\lambda}(Y^M_n)\rangle\right|+ 2\langle Y^M_n,g(Y^M_n)\Delta W^M_n\rangle\nonumber\\ &+&2\langle Y^M_n,h(Y^M_n)\Delta\overline{N}^M_n\rangle \label{ch4ine16} \end{aligned}$$ on $\Omega$, for all $M\in\mathbb{N}$ and all $n\in\{0,1,\cdots, M-1\}$. In addition, for all $x\in\mathbb{R}^d$ such that $||x||\geq 1$, the global Lipschitz condition satisfied by $g$ and $h$ leads to : $$\begin{aligned} ||g(x)||^2&\leq& (||g(x)-g(0)||+||g(0)||)^2\nonumber\\ &\leq & (C||x||+||g(0)||)^2\nonumber\\ &\leq & (C+||g(0)||)^2||x||^2\nonumber\\ &\leq &\beta||x||^2. \label{ch4normdeg2} \end{aligned}$$ Along the same lines as above, for all $x\in\mathbb{R}^d$ such that $||x||\geq 1$, we have : $$\begin{aligned} ||h(x)||^2\leq \beta||x||^2. \label{ch4normdeh2} \end{aligned}$$ Also, for all $x\in\mathbb{R}^d$ such that $||x||\geq 1$, the one-sided Lipschitz condition satisfied by $f_{\lambda}$ leads to : $$\begin{aligned} \langle x,f_{\lambda}(x)\rangle&=&\langle x,f_{\lambda}(x)-f_{\lambda}(0)+f_{\lambda}(0)\rangle=\langle x,f_{\lambda}(x)-f_{\lambda}(0)\rangle+\langle x,f_{\lambda}(0)\rangle\nonumber\\ &\leq & C||x||^2+||x||||f_{\lambda}(0)||\nonumber\\ &\leq &(C+||f_{\lambda}(0)||)||x||^2\nonumber\\ &\leq &\sqrt{\beta}||x||^2. \label{ch4crochetf} \end{aligned}$$ Futhermore, for all $x\in\mathbb{R}^d$ such that $1\leq ||x||\leq M^{1/2c}$ and for all $M\in\mathbb{N}$, using the polynomial growth condition of $f_{\lambda}$, the following inequality holds $$\begin{aligned} ||f_{\lambda}(x)||^2&\leq&\left(||f_{\lambda}(x)-f_{\lambda}(0)||+||f_{\lambda}(0)||\right)^2\nonumber\\ &\leq & \left(C(K+||x||^c)||x||+||f_{\lambda}(0)||\right)^2\nonumber\\ &\leq & \left(C(K+1)||x||^{c+1}+||f_{\lambda}(0)||\right)^2\nonumber\\ &\leq & (KC+C+||f_{\lambda}(0)||)^2||x||^{2(c+1)}\nonumber\\ &\leq & M\sqrt{\beta}||x||^2. \label{ch4normf2} \end{aligned}$$ Now combining inequalities , , , and , we obtain : $$\begin{aligned} ||Y^M_{n+1}||^2 &\leq& ||Y^M_n||^2+\dfrac{3T^2\sqrt{\beta}}{M}||Y^M_{n}||^2+3\beta||Y^M_n||^2||\Delta W^M_n||^2+3\beta||Y^M_n||^2|\Delta \overline{N}^M_n|^2\nonumber\\ &+&\dfrac{2T\sqrt{\beta}}{M}||Y^M_n||^2+2\langle Y^M_n, g(Y^M_n)\Delta W^M_n\rangle+2\langle Y^M_n, h(Y^M_n)\Delta\overline{N}^M_n\rangle\nonumber\\ &\leq &||Y^M_n||^2+\dfrac{(3T^2+2T)\sqrt{\beta}}{M}||Y^M_n||^2+3\beta||Y^M_n||^2||\Delta W^M_n||^2+3||Y^M_n||^2|\Delta\overline{N}^M_n|^2\nonumber\\ &+&2\langle Y^M_n, g(Y^M_n)\Delta W^M_n\rangle +2\langle Y^M_n, h(Y^M_n)\Delta\overline{N}^M_n\rangle. \end{aligned}$$ Using the inequality $3T^2+2T\leq 3\sqrt{\beta}$, it follows that : $$\begin{aligned} ||Y^M_{n+1}||^2&\leq& ||Y^M_n||^2+\dfrac{3\beta}{M}||Y^M_n||^2+3\beta||Y^M_n||^2||\Delta W^M_n||^2+3\beta||Y^M_n||^2|\Delta\overline{N}^M_n|^2\nonumber\\ &+&2\langle Y^M_n,g(Y^M_n)\Delta W^M_n\rangle+2\langle Y^M_n,h(Y^M_n)\Delta\overline{N}^M_n\rangle\nonumber\\ &=&||Y^M_n||^2 \left(1+\dfrac{3\beta}{M}+3\beta||\Delta W^M_n||^2+3\beta||\Delta\overline{N}^M_n||^2+2\left<\dfrac{Y^M_n}{||Y^M_n||}, \dfrac{g(Y^M_n)}{||Y^M_n||}\Delta W^M_n\right> \right.\nonumber\\ &+&\left. 
2\left\langle\dfrac{Y^M_n}{||Y^M_n||}, \dfrac{h(Y^M_n)}{||Y^M_n||}\Delta\overline{N}^M_n\right\rangle\right)\nonumber\\ &=&||Y^M_n||^2\left(1+\dfrac{3\beta}{M}+3\beta||\Delta W^M_n||^2+3\beta|\Delta\overline{N}^M_n|^2+2\alpha^M_n+2\beta^M_n\right). \label{ch4expY1} \end{aligned}$$ Using Lemma \[ch4lemma1\] for $a=\dfrac{3\beta}{M}+3\beta||\Delta W^M_n||^2+2\alpha^M_n+2\beta^M_n$ and $b=\sqrt{3\beta}|\Delta\overline{N}^M_n| $ it follows from that : $$\begin{aligned} ||Y^M_{n+1}||^2\leq ||Y^M_n||^2\exp\left(\dfrac{3\beta}{M}+3\beta||\Delta W^M_n||^2+3\beta|\Delta\overline{N}^M_n|+2\alpha^M_n+2\beta^M_n\right) \label{ch4expY2} \end{aligned}$$ on $\{w\in\Omega : 1\leq ||Y^M_n(\omega)||\leq M^{1/2c}\}$, for all $M\in\mathbb{N}$ and all $n\in\{0,1,\cdots,M-1\}$. In order to complete our proof, we need the following map $$\begin{aligned} \tau^M_l : \Omega \longrightarrow\{-1,0,1,\cdots,l\},\hspace{0.5cm}l\in\{0,1,\cdots, M\}, \end{aligned}$$ such that : $$\begin{aligned} \tau^M_l(\omega) :=\max\left(\{-1\}\cup\{n\in\{0,1,\cdots,l-1\} : ||Y^M_n(\omega)||\leq 1\}\right), \end{aligned}$$ for all $\omega\in\Omega$, $M\in\mathbb{N}$ and all $l\in\{0,1,\cdots,M\}$. For $M\in\mathbb{N}$ fixed we prove by induction on $n\in\{0,1,\cdots,M\}$ that $$\begin{aligned} \mathbf{1}_{\Omega^M_n}||Y^M_n||\leq D^M_n. \end{aligned}$$ \[ch4borneD1\] - For $n=0$, $D_0^M =(\beta+||X_0||)\exp(\beta)$ and $||Y^M_0||=||X_0||$. Since $\beta\geq 1 $ we have $\exp(\beta)\geq 1$. So the following inequality holds $$\begin{aligned} \mathbf{1}_{\Omega^M_0}||Y^M_0||\leq D^M_0. \end{aligned}$$ - Let $l\in\{0,1,\cdots,M-1\}$ be arbitrary and let’s assume that $\mathbf{1}_{\Omega^M_n}||Y^M_n||\leq D^M_n$ for all $n\in\{0,1,\cdots,l\}$. We want to prove that inequality holds for $n=l+1$. Let $\omega\in\Omega^M_{l+1}$ we have to prove that $||Y^M_{l+1}(\omega)||\leq D^M_{l+1}(\omega)$. Since $(\Omega^M_n)$ is a decreasing sequence and $\omega\in\Omega^M_{l+1}$, we have $\omega\in\Omega^M_k$ and it follows from the hypothesis of induction that : $||Y^M_k(\omega)||\leq D^M_k(\omega)$, for all $k\in\{0,\cdots, l\}$. Also, since $\omega\in\Omega^M_{k+1}$, by definition of $\Omega^M_{k+1}$ it follows that $D^M_k(\omega)\leq M^{1/2c}$, for all $k\in\{0, \cdots, l\}$. So for all $k\in\{0,1,\cdots,l\}$, $$\begin{aligned} ||Y^M_k(\omega)||\leq D^M_k(\omega)\leq M^{1/2c}. \end{aligned}$$ For all $k\in\{\tau^M_{l+1}(\omega)+1,\tau^M_{l+1}(\omega)+2,\cdots,l\}$ we have $$\begin{aligned} 1\leq ||Y^M_k(\omega)||\leq M^{1/2c}. \label{ch4con} \end{aligned}$$ Since holds, it follows from , that $$\begin{aligned} ||Y^M_{k+1}(\omega)||&\leq& ||Y^M_k(\omega)||\exp\left(\dfrac{3\beta}{2M}+\dfrac{3\beta}{2}||\Delta W^M_k(\omega)||^2\right.\\ &+&\left.\dfrac{3\beta}{2}|\Delta\overline{N}^N_k(\omega)|+\alpha^M_k(\omega)+\beta^M_k(\omega)\right) \end{aligned}$$ for all $k\in\{\tau^M_{l+1}(\omega)+1,\tau^M_{l+1}+2,\cdots,l\}$. For $k=l$ from the previous inequality, we have : $$\begin{aligned} ||Y^M_{l+1}(\omega)||&\leq &||Y^M_l(\omega)||\exp\left(\dfrac{3\beta}{2M}+\dfrac{3\beta}{2}||\Delta W^M_l(\omega)||^2+\dfrac{3\beta}{2}|\Delta\overline{N}^N_l(\omega)|\right.\nonumber\\ &+&\left.\alpha^M_l(\omega)+\beta^M_l(\omega)\right). 
\label{ch4iter} \end{aligned}$$ Iterating $l-\tau^M_{l+1}(\omega)$ times leads to $$\begin{aligned} ||Y^M_{l+1}(\omega)|| &\leq &||Y^M_{\tau^M_{l+1}(\omega)+1}(\omega)||\exp\left(\sum_{m=\tau^M_{l+1}(\omega)+1}^l\left[\dfrac{3\beta}{2M}+\dfrac{3\beta}{2}||\Delta W^M_m(\omega)||^2\right.\right.\\ &+&\left.\left.\dfrac{3\beta}{2}|\Delta\overline{N}^M_m(\omega)|+\alpha^M_m(\omega)+\beta^M_m(\omega)\right]\right). \end{aligned}$$ By definition of $\tau^M_l(\omega)$, we have $||Y^M_{\tau^M_{l+1}(\omega)}(\omega)||\leq 1$. Then it follows from that $||Y^M_{\tau^M_{l+1}(\omega)+1}(\omega)||\leq \beta$. So the above estimation of $||Y^M_{l+1}(\omega)||$ becomes : $$\begin{aligned} ||Y^M_{l+1}(\omega)||&\leq& \beta\exp\left(\sum_{m=\tau^M_{l+1}(\omega)+1}^l\left[\dfrac{3\beta}{2M}\right.\right.\\ &+&\dfrac{3\beta}{2}||\Delta W^M_m(\omega)||^2+\dfrac{3\beta}{2}|\Delta\overline{N}^M_m(\omega)|+\left.\left.\alpha^M_m(\omega)+\beta^M_m(\omega)\right]\right)\\ &\leq& (\beta+||X_0||)\exp\left(\dfrac{3\beta}{2}+\sup_{u\in\{0,1,\cdots,l+1\}}\sum_{m=u}^l\left[\dfrac{3\beta}{2}||\Delta W^M_m(\omega)||^2\right.\right.\\ &+&\left.\left.\dfrac{3\beta}{2}|\Delta\overline{N}^M_m(\omega)|+\alpha^M_m(\omega)+\beta^M_m(\omega)\right]\right)=D^M_{l+1}(\omega). \end{aligned}$$ Therefore $||Y^M_{l+1}(\omega)||\leq D^M_{l+1}(\omega)$. This complete the proof of Lemma \[ch4lemma2\]. The following is from [@Martin1 Lemma 3.2 pp 15]. \[ch4lemma3\] Let $n\in\mathbb{N}$ and $Z : \Omega \longrightarrow \mathbb{R}^m$ be an $m-$dimensional standard normal random variable. Then for all $a\in\left[0,\dfrac{1}{4}\right]$ the following inequality holds $$\begin{aligned} \mathbb{E}\left[\exp(a||Z||^2)\right]=(1-2a)^{-m/2}\leq e^{2am}. \end{aligned}$$ Using the relation $||Z||^2=|Z_1|^2+|Z_2|^2+\cdots+|Z_n|^2$ and the fact that $(Z_i)$ are independent and identically distributed, we have : $$\begin{aligned} \mathbb{E}\left[\exp(a||Z||^2)\right]=\mathbb{E}\left[\exp\left(\sum_{i=1}^ma|Z_i|^2\right)\right]=\mathbb{E}\left[\prod_{i=1}^m\exp\left(a|Z_i|^2\right)\right]=\left[\mathbb{E}\left(\exp(a|Z_1|^2)\right)\right]^m. \label{ch4normal1} \end{aligned}$$ From the definition of the expected value of the standard normal random variable, we have : $$\begin{aligned} \mathbb{E}[\exp(a|Z_1|^2)]=\int_{-\infty}^{+\infty}e^{ax^2}\dfrac{1}{\sqrt{2\pi}}e^{-x^2/2}dx=\dfrac{1}{\sqrt{1-2a}}. \end{aligned}$$ Using the inequality $\dfrac{1}{1-x}\leq e^{2x} \hspace{0.3cm }\forall x\in\left[0,\dfrac{1}{2}\right]$, it follows that $$\begin{aligned} \mathbb{E}[\exp(a|Z_1|^2)]=\dfrac{1}{\sqrt{1-2a}}\leq e^{2a},\hspace{0.5cm}\forall a\in\left[0,\dfrac{1}{4}\right]. \label{ch4normal2} \end{aligned}$$ Combining and leads to : $$\begin{aligned} \mathbb{E}\left[\exp(a||Z||^2)\right]=(1-2a)^{-m/2}\leq e^{2am},\hspace{0.5cm}\forall a\in\left[0,\dfrac{1}{4}\right]. \end{aligned}$$ The following lemma and its proof are based on with only different value of the coefficient $\beta$. \[ch4lemma4\] The following inequality holds : $$\begin{aligned} \sup_{M\in\mathbb{N}, M\geq 4\beta pT}\mathbb{E}\left[\exp\left(\beta p\sum_{k=0}^{M-1}||\Delta W^M_k||^2\right)\right]<\infty. \end{aligned}$$ Let $Z=\mathcal{N}(0,1)$ be an $m$-dimensional standard normal random variable. 
Since for $k=0,\cdots, M-1$ the increments $\Delta W^M_k$ are independent, stationary and normally distributed with mean $0$ and variance $\dfrac{T}{M}$, so that $||\Delta W^M_k||^2$ has the same distribution as $\dfrac{T}{M}||Z||^2$, it follows that: $$\begin{aligned} \mathbb{E}\left(\exp\left[\beta p\sum_{k=0}^{M-1}||\Delta W^M_k||^2\right]\right)&=&\prod_{k=0}^{M-1}\mathbb{E}\left[\exp(\beta p||\Delta W^M_k||^2)\right]\\ &=&\left(\mathbb{E}\left[\exp\left(\beta p\dfrac{T}{M}||Z||^2\right)\right]\right)^M \\ &\leq &\left[\exp\left(2\beta pm\dfrac{T}{M}\right)\right]^M\text{(using Lemma \ref{ch4lemma3})}\\ &\leq &\exp(2\beta pTm)<\infty, \end{aligned}$$ for all $p\in[1,+\infty)$ and all $M\in\mathbb{N}\cap[4\beta pT, \infty)$. \[ch4lemma5\] Let $Y$ be an $m$-dimensional standard normal random variable and $c\in\mathbb{R}^m$. Then $$\begin{aligned} \mathbb{E}[\exp(\langle c, Y\rangle)]=\exp\left(\dfrac{||c||^2}{2}\right). \end{aligned}$$ $\mathbb{E}[\exp(\langle c, Y\rangle)]$ is the moment generating function of $Y$ evaluated at $c$. Since the components $Y_1,\cdots,Y_m$ of $Y$ are independent standard normal random variables, it follows directly that $$\begin{aligned} \mathbb{E}[\exp(\langle c, Y\rangle)]=\prod_{i=1}^m\mathbb{E}[\exp(c_iY_i)]=\prod_{i=1}^m\exp\left(\dfrac{c_i^2}{2}\right)=\exp\left(\dfrac{||c||^2}{2}\right). \end{aligned}$$ The following lemma is from [@Martin2 Lemma 5.7, pp 15]. \[ch4lemma6\] The following inequality holds $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\mathbf{1}_{\{||x||\geq 1\}}\left<\dfrac{x}{||x||},\dfrac{g(x)}{||x||}\Delta W^M_k\right>\right)\right]\leq \exp\left[\dfrac{p^2T(C+||g(0)||)^2}{M}\right], \end{aligned}$$ for all $x\in\mathbb{R}^d, k\in\{0,1,\cdots,M-1\}, p\in[1,\infty)$ and all $z\in\{-1,1\}$. Let $a^{\top}$ denote the transpose of a vector or matrix $a$. For $||x||<1$ the left-hand side equals $1$ because of the indicator function, so the inequality is immediate. Let now $x\in\mathbb{R}^d$ with $||x||\geq 1$ and set $c := pz\dfrac{g(x)^{\top}x}{||x||^2}\in\mathbb{R}^m$. Since $\Delta W^M_k$ has the same distribution as $\sqrt{\dfrac{T}{M}}\,Z$, where $Z$ is an $m$-dimensional standard normal random variable, we have : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\left<\dfrac{x}{||x||},\dfrac{g(x)}{||x||}\Delta W^M_k\right>\right)\right]&=&\mathbb{E}\left[\exp\left(\langle c,\Delta W^M_k\rangle\right)\right]\\ &=&\mathbb{E}\left[\exp\left(\left\langle \sqrt{\dfrac{T}{M}}\,c, Z\right\rangle\right)\right]. \end{aligned}$$ Using Lemma \[ch4lemma5\] and the estimate $||c||\leq p\dfrac{||g(x)||}{||x||}$, it follows that $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\left<\dfrac{x}{||x||},\dfrac{g(x)}{||x||}\Delta W^M_k\right>\right)\right]&=&\exp\left[\dfrac{T}{2M}||c||^2\right]\\ &\leq &\exp\left[\dfrac{p^2T}{M}\dfrac{||g(x)||^2}{||x||^2}\right]. \end{aligned}$$ Using the global Lipschitz condition and the fact that $||x||\geq 1$, we have : $$\begin{aligned} \dfrac{||g(x)||^2}{||x||^2}\leq\dfrac{(||g(x)-g(0)||+||g(0)||)^2}{||x||^2}\leq\dfrac{(C+||g(0)||)^2||x||^2}{||x||^2}\leq (C+||g(0)||)^2. \end{aligned}$$ Therefore, for all $x\in\mathbb{R}^d$ such that $||x||\geq 1$, we have $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\left<\dfrac{x}{||x||},\dfrac{g(x)}{||x||}\Delta W^M_k\right>\right)\right]\leq \exp\left[\dfrac{p^2T(C+||g(0)||)^2}{M}\right], \end{aligned}$$ for all $M\in\mathbb{N}$, $k\in\{0,\cdots,M-1\}$, all $p\in[1,\infty)$ and $z\in\{-1,1\}$. Following the same approach closely, we have the following lemma. 
\[ch4lemma7\] Let $\alpha^M_n :\Omega\longrightarrow\mathbb{R}$ for $M\in\mathbb{N}$ and $n\in\{0,1,\cdots,M\}$ defined in Notation \[ch4notation1\], then the following inequality holds : $$\begin{aligned} \sup_{z\in\{-1,1\}}\sup_{M\in\mathbb{N}}\left\|\sup_{n\in\{0,1,\cdots,M\}}\exp\left(z\sum_{k=0}^{n-1}\alpha^M_k\right)\right\|_{L^p(\Omega, \mathbb{R})}<\infty, \end{aligned}$$ for all $p\in[2,+\infty)$. The time discrete stochastic process $z\sum_{k=0}^{n-1}\alpha^M_k$, $n\in\{0,1,\cdots, M\}$ is an $(\mathcal{F}_{nT/M})_{n\in\{0,\cdots,M\}}-$ martingale for every $z\in\{-1,1\}$ and $M\in \mathbb{N}$. So $\exp\left(z\sum_{k=0}^{n-1}\alpha^M_k\right)$ is a positive $(\mathcal{F}_{nT/M})_{n\in\{0,\cdots,M\}}-$ submartingale for every $z\in\{-1,1\}$ and $M\in\mathbb{N}$ since $\exp$ is a convex function. Applying Doop’s maximal inequality leads to : $$\begin{aligned} \left\|\sup_{n\in\{0,\cdots,M\}}\exp\left(z\sum_{k=0}^{n-1}\alpha^M_k\right)\right\|_{L^p{(\Omega, \mathbb{R})}}&=&\left(\mathbb{E}\left|\sup_{n\in\{0,\cdots,M\}}\exp\left(pz\sum_{k=0}^{n-1}\alpha^M_k\right)\right|\right)^{1/p}\nonumber\\ &\leq &\left(\dfrac{p}{p-1}\right)\left(\mathbb{E}\left |\exp\left(pz\sum_{k=0}^{M-1}\alpha^M_k\right)\right|\right)^{1/p}\nonumber\\ &= &\dfrac{p}{p-1}\left\|\exp\left(z\sum_{k=0}^{M-1}\alpha^M_k\right)\right\|_{L^p{(\Omega,\mathbb{R})}}. \label{ch4alpha1} \end{aligned}$$ Using Lemma \[ch4lemma6\], it follows from the previous inequality that : $$\begin{aligned} \mathbb{E}\left[\exp(pz\alpha^M_k)/\mathcal{F}_{kT/M}\right]\leq\exp\left(\dfrac{p^2T(C+||g(0)||)^2}{M}\right). \label{ch4alpha2} \end{aligned}$$ Using inequality , it follows that : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-1}\alpha^M_k\right)\right]&=&\mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-2} \alpha^M_k\right)\mathbb{E}[\exp(p\alpha^M_{M-1}/\mathcal{F}_{(M-1)T/M}\right]\\ &\leq &\mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-2}\alpha^M_k\right)\right]\exp\left(\dfrac{p^2T(C+||g(0)||)^2}{M}\right). \end{aligned}$$ Iterating the previous inequality $M$ times gives : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-1}\alpha^M_k\right)\right] \leq \exp(p^2T(C+||g(0)||)^2). \label{ch4alpha3} \end{aligned}$$ Now combining inequalities and leads to $$\begin{aligned} \sup_{z\in\{-1,1\}}\sup_{M\in\mathbb{N}}\left\|\sup_{n\in\{0,\cdots,M\}}\exp\left(z\sum_{k=0}^{n-1}\alpha^M_k\right)\right\|_{L^p(\Omega,\mathbb{R})} \leq 2\exp(p^2T(C+||g(0)||)^2) <\infty, \end{aligned}$$ for all $p\in[2,\infty)$. \[ch4lemma8\] For all $c\in\mathbb{R}$, we have : $$\begin{aligned} \mathbb{E}[\exp(c\Delta \overline{N}^M_n)]=\exp\left[\dfrac{(e^c+c-1)\lambda T}{M}\right], \end{aligned}$$ for all $M\in\mathbb{N}$ and all $n\in\{0,\cdots,M\}$. It is known that if $Y$ is a random variable following the poisson law with parameter $\lambda$, then its moment generating function is given by : $$\begin{aligned} \mathbb{E}[\exp(cY)]=\exp(\lambda(e^c-1)). \end{aligned}$$ \[gene\] Since $\Delta N_n$ follows a poisson law with parameter $\lambda\Delta t$, it follows that $$\begin{aligned} \mathbb{E}[\exp(c\Delta\overline{N}^M_n)]&=&\mathbb{E}[\exp(c\Delta N^M_n+c\lambda\Delta t)]\\ &=&\mathbb{E}\left[\exp\left(\dfrac{\lambda T}{M}\right)\exp(c\Delta N^M_n)\right]\\ &=&\exp\left(\dfrac{c\lambda T}{M} \right)\exp\left[ \dfrac{\lambda T}{M}(e^c-1)\right]\\ &=&\exp\left[\dfrac{(e^c+1-1)\lambda T}{M}\right]. 
\end{aligned}$$ \[ch4lemma9\] The following inequality holds $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\mathbf{1}_{\{||x||\geq 1\}}\left<\dfrac{x}{||x||},\dfrac{h(x)}{||x||}\Delta\overline{N}^M_n\right>\right)\right]\leq\exp\left[\dfrac{\lambda \left(e^{p(C+||h(0)||)}+p(C+||h(0)||\right)}{M}\right], \end{aligned}$$ for all $M\in\mathbb{N}$, $z\in\{-1,1\}$, all $p\in[1, +\infty)$ and all $n\in\{0,\cdots,M\}$. For $x\in\mathbb{R}^d$ such that $||x||\neq 0$, we have : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\left<\dfrac{x}{||x||},\dfrac{h(x)}{||x||}\Delta\overline{N}^M_n\right>\right)\right] &\leq& \mathbb{E}\left[\exp\left(pz\dfrac{||x||||h(x)||}{||x||^2}\Delta\overline{N}^M_n\right)\right]\\ &=& \mathbb{E}\left[\exp\left(pz\dfrac{||h(x)||}{||x||}\Delta \overline{N}^M_n\right)\right]. \end{aligned}$$ For all $x\in\mathbb{R}^d$ such that $||x||\geq 1$, using the global Lipschitz condition satisfied by $h$, we have : $$\begin{aligned} \dfrac{||h(x)||}{||x||}\leq \dfrac{||h(x)-h(0)||+||h(0)||}{||x||}\leq C+||h(0)||. \label{ch4normeh} \end{aligned}$$ So from inequality and using Lemma \[ch4lemma8\] it follows that : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\mathbf{1}_{\{||x||\geq 1\}}\left<\dfrac{x}{||x||},\dfrac{h(x)}{||x||}\Delta\overline{N}^M_n\right>\right)\right] &\leq &\mathbb{E}[\exp(pz(C+||h(0)||)\Delta\overline{N}^M_n)]\\ &\leq &\exp\left[\dfrac{\left(e^{p(C+||h(0)||)}+p(C+||h(0)||-1\right)\lambda T}{M}\right]\\ &\leq &\exp\left[\dfrac{\left(e^{p(C+||h(0)||)}+p(C+||h(0)||\right)\lambda T}{M}\right]. \end{aligned}$$ \[ch4lemma10\] Let $\beta^M_n :\Omega \longrightarrow \mathbb{R}$ define as in Notation \[ch4notation1\] for all $M\in\mathbb{N}$ and all $n\in\{0,\cdots,M\}$, then we have the following inequality $$\begin{aligned} \sup_{z\in\{-1,1\}}\sup_{M\in\mathbb{N}}\left\|\sup_{n\in\{0,\cdots, M\}}\exp\left(z\sum_{K=0}^{n-1}\beta^M_k\right)\right\|_{L^p(\Omega, \mathbb{R})}<+\infty. \end{aligned}$$ For the same reason as for $\alpha^M_k$, $\beta^M_k$ is an $(\mathcal{F}_{nT/M})$- martingale. So $\exp\left(pz\sum_{k=0}^{n-1}\beta^M_k\right)$ is a positive $(\mathcal{F}_{nT/M})$- submartingale for all $M\in\mathbb{N}$ and all $n\in\{0,\cdots, M\}$. Using Doop’s maximal inequality we have : $$\begin{aligned} \left\|\sup_{n\in\{0,\cdots, M\}}\exp\left(z\sum_{k=0}^{n-1}\beta^M_k\right)\right\|_{L^p(\Omega, \mathbb{R})} &\leq &\left(\dfrac{p}{p-1}\right)\left\|\exp\left(z\sum_{k=0}^{M-1}\beta^M_k\right)\right\|_{L^p(\Omega,\mathbb{R})}, \label{ch4beta1} \end{aligned}$$ $$\begin{aligned} \left\|\exp\left(z\sum_{k=0}^{M-1}\beta^M_k\right)\right\|^p_{L^p(\Omega,\mathbb{R})}&=&\mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-1}\beta^M_k\right)\right]=\mathbb{E} \left[\exp\left(pz\left(\sum_{k=0}^{M-2}\beta^M_k\right)+pz\beta_{M-1}^M\right)\right]\\ &=&\mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-2}\beta^M_k\right) \mathbb{E}\left[\exp\left(pz\beta^M_{M-1}\right)/\mathcal{F}_{(M-1)T/M}\right] \right]. \end{aligned}$$ Using Lemma \[ch4lemma9\] it follows that $$\begin{aligned} \left\|\exp\left(z\sum_{k=0}^{M-1}\beta^M_k\right)\right\|^p_{L^p(\Omega,\mathbb{R})} \leq \mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-2}\beta^M_k\right)\right]\exp\left[\dfrac{\left(e^{p(C+||h(0)||)}+p(C+||h(0)||)\right)\lambda T}{M}\right]. 
\end{aligned}$$ Iterating this last inequality $M$ times leads to : $$\begin{aligned} \left(\mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-1}\beta^M_k\right)\right]\right)^p\leq \exp\left[\lambda T\left(e^{p(C+||h(0)||)}+Tp(C+||h(0)||\right)\right], \label{ch4beta2} \end{aligned}$$ for all $M\in\mathbb{N}$, all $p\in (1,\infty)$ and all $z\in\{-1,1\}$. Combining inequalities and complete the proof of Lemma \[ch4lemma10\] \[ch4lemma11\] The following inequality holds $$\begin{aligned} \sup_{M\in\mathbb{N}}\mathbb{E}\left[\exp\left(p\beta\sum_{k=0}^{M-1}||\Delta\overline{N}^M_k||\right)\right]<+\infty,\end{aligned}$$ for all $p\in[1, +\infty)$. Using independence, stationarity of $\Delta\overline{N}_k^M$ and Lemma \[ch4lemma8\], it follows that : $$\begin{aligned} \sup_{M\in\mathbb{N}}\mathbb{E}\left[\exp\left(p\beta\sum_{k=0}^{M-1}||\Delta\overline{N}^M_k||\right)\right]&=&\prod_{k=0}^{M-1}\mathbb{E}[\exp(p\beta||\Delta\overline{N}^M_k||)]\\ &=&\left(\mathbb{E}[\exp(p\beta||\Delta\overline{N}^M_k||)]\right)^M\\ &=&\left(\exp\left[\dfrac{(e^{p\beta}+p\beta-1)\lambda T}{M}\right]\right)^M\\ &=&\exp[e^{p\beta}+p\beta-1]<+\infty, \end{aligned}$$ for all $p\in[1, +\infty)$. Inspired by [@Martin1 Lemma 3.5, pp 15], we have the following estimation. \[ch4lemma12\] \[Uniformly bounded moments of the dominating stochastic processes\]. Let $M\in\mathbb{N}$ and $D_n^M : \Omega \longrightarrow [0,\infty)$ for $n\in\{0,1,\cdots, M\}$ be define as above, then we have : $$\begin{aligned} \sup_{M\in\mathbb{N}, M\geq8\lambda pT}\left\|\sup_{n\in\{0,1,\cdots, M\}}D_n^M\right\|_{L^p(\Omega, \mathbb{R})}<\infty, \end{aligned}$$ for all $p\in[1,\infty)$. Let’s recall that : $$\begin{aligned} D_n^M =(\beta+||\varepsilon||)\exp\left(\dfrac{3\beta}{2}+\sup_{u\in\{0,\cdots,n\}}\sum_{k=u}^{n-1}\dfrac{3\beta}{2}||\Delta W^M_k||^2+\dfrac{3\beta}{2}|\Delta\overline{N}^M_k|+\alpha^M_k+\beta^M_k\right). \end{aligned}$$ Using Holder inequality, it follows that : $$\begin{aligned} \sup_{M\in\mathbb{N}, M\geq 8\lambda pT}\left\|\sup_{n\in\{0,\cdots, M\}}D_n^M\right\|_{L^p(\Omega, \mathbb{R})}&\leq &e^{3\beta/2}\left(\beta+||\varepsilon||_{L^{4p}(\Omega, \mathbb{R})}\right)\\ &\times& \sup_{M\in\mathbb{N}, M\geq 8\lambda pT}\left\|\exp\left(\dfrac{3\beta}{2}\sum_{k=0}^{M-1}||\Delta W^M_k||^2\right)\right\|_{L^{2p}(\Omega, \mathbb{R})}\\ &\times &\sup_{M\in\mathbb{N}}\left\|\exp\left(\dfrac{3\beta}{2}\sum_{k=0}^{M-1}|\Delta\overline{N}_k^M|\right)\right\|_{L^{8p}(\Omega, \mathbb{R})}\\ &\times &\left(\sup_{M\in\mathbb{N}}\left\|\sup_{n\in\{0,\cdots, M\}}\exp\left(\sup_{u\in\{0,\cdots,n\}}\sum_{k=u}^{n-1}\alpha_k^M\right)\right\|_{L^{16p}(\Omega, \mathbb{R})}\right)\\ &\times &\left(\sup_{M\in\mathbb{N}}\left\|\sup_{n\in\{0,\cdots, M\}}\exp\left(\sup_{u\in\{0,\cdots,n\}}\sum_{k=u}^{n-1}\beta_k^M\right)\right\|_{L^{16p}(\Omega, \mathbb{R})}\right)\\ &=& A_1\times A_2\times A_3\times A_4\times A_5. \end{aligned}$$ By assumption $A_1$ is bounded. Lemma \[ch4lemma4\] and \[ch4lemma11\] show that $A_2$ and $A_3$ are bounded. 
Using the Holder inequality again and Lemma \[ch4lemma7\], it follows that: $$\begin{aligned} A_4=\left\|\sup_{n\in\{0,\cdots, M\}}\exp\left(\sup_{u\in\{0,\cdots,n\}}\sum_{k=u}^{n-1}\alpha_k^M\right)\right\|_{L^{16p}(\Omega, \mathbb{R})} \end{aligned}$$ $$\begin{aligned} \leq\left\|\sup_{n\in\{0,\cdots, M\}}\exp\left(\sum_{k=0}^{n-1}\alpha^M_k\right)\right\|_{L^{32p}(\Omega,\mathbb{R})} \times \left\|\sup_{u\in\{0,\cdots, M\}}\exp\left(-\sum_{k=0}^{u-1}\alpha^M_k\right)\right\|_{L^{32p}(\Omega,\mathbb{R})} <+\infty, \end{aligned}$$ for all $M\in\mathbb{N}$ and all $p\in[1,\infty)$. Along the same lines as above, we prove that $A_5$ is bounded. Since each of the terms $A_1, A_2, A_3, A_4$ and $A_5$ is bounded, this completes the proof of Lemma \[ch4lemma12\]. The following lemma is an extension of [@Martin1 Lemma 3.6, pp 16]. Here, we include the jump part. \[ch4lemma13\] Let $M\in\mathbb{N}$ and $\Omega_M^M\in\mathcal{F}$. The following holds : $$\begin{aligned} \sup_{M\in\mathbb{N}}\left(M^p\mathbb{P}[(\Omega_M^M)^c]\right)<+\infty, \end{aligned}$$ for all $p\in[1,\infty)$. Using the subadditivity of the probability measure and Markov's inequality, it follows that $$\begin{aligned} \mathbb{P}[(\Omega_M^M)^c] &\leq & \mathbb{P}\left[\sup_{n\in\{0,\cdots, M-1\}}D_n^M>M^{1/2c}\right]+M\mathbb{P}\left[\|W_{T/M}\|>1\right]+M\mathbb{P}\left[|\overline{N}_{T/M}|>1\right]\nonumber\\ &\leq & \mathbb{P}\left[\sup_{n\in\{0,\cdots, M-1\}}|D_n^M|>M^{1/2c}\right]+M\mathbb{P}\left[\|W_{T}\|>\sqrt{M}\right]+M\mathbb{P}\left[|\overline{N}_{T}|>M\right]\nonumber\\ &\leq & \mathbb{P}\left[\sup_{n\in\{0,\cdots, M-1\}}|D_n^M|>M^{1/2c}\right]+M\mathbb{P}\left[\|W_{T}\|^q>M^{q/2}\right]+M\mathbb{P}\left[|\overline{N}_{T}|^q>M^q\right]\nonumber\\ &\leq &\mathbb{E}\left[\sup_{n\in\{0,\cdots, M-1\}}|D_n^M|^q\right]M^{-q/2c}+\mathbb{E}[\|W_T\|^q]M^{1-q/2}+ \mathbb{E}[|\overline{N}_{T}|^q] M^{1-q} \nonumber,\\ \end{aligned}$$ for all $q>1$. Multiplying both sides of the above inequality by $M^p$ leads to $$\begin{aligned} M^p\mathbb{P}[(\Omega_M^M)^c]\leq \mathbb{E}\left[\sup_{n\in\{0,\cdots, M-1\}}|D_n^M|^q\right]M^{p-q/2c}+\mathbb{E}[\|W_T\|^q]M^{p+1-q/2}+\mathbb{E}[|\overline{N}_{T}|^q] M^{p+1-q} \end{aligned}$$ for all $q>1$. For $q>\max\{2pc, 2p+2\}$, we have $M^{p+1-q/2}<1$, $M^{p-q/2c}<1$ and $ M^{p+1-q}<1$. It follows for this choice of $q$ that $$\begin{aligned} M^p\mathbb{P}[(\Omega_M^M)^c]\leq \mathbb{E}\left[\sup_{n\in\{0,\cdots, M-1\}}|D_n^M|^q\right]+\mathbb{E}[\|W_T\|^q]+\mathbb{E}[|\overline{N}_{T}|^q]. \end{aligned}$$ Using Lemma \[ch4lemma12\] and the fact that $W_T$ and $\overline{N}_{T}$ are independent of $M$, it follows that $$\begin{aligned} \sup_{M\in\mathbb{N}}\left(M^p\mathbb{P}[(\Omega_M^M)^c]\right)<+\infty. \end{aligned}$$ The following lemma can be found in [@Protter Theorem 48 pp 193] or in [@Gundy Theorem 1.1, pp 1]. \[ch4lemma18b\]\[Burkholder-Davis-Gundy inequality\] Let $M$ be a martingale with càdlàg paths and let $p\geq 1$ be fixed. Let $M_t^*=\sup\limits_{s\leq t}||M_s||$. Then there exist constants $c_p$ and $C_p$ such that for any $M$ $$\begin{aligned} c_p \left[\mathbb{E}\left([M,M]_t\right)^{p/2}\right]^{1/p}\leq \left[\mathbb{E}(M_t^*)^p\right]^{1/p}\leq C_p\left[\mathbb{E}\left([ M, M]_t\right)^{p/2}\right]^{1/p}, \end{aligned}$$ for all $0\leq t\leq \infty$, where $[M,M]$ stands for the quadratic variation of the process $M$. The constants $c_p$ and $C_p$ are universal: they do not depend on the choice of $M$. 
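Before applying the Burkholder-Davis-Gundy inequality, the following Python sketch gives a rough Monte Carlo illustration of it for the Brownian stochastic integral $M_t=\int_0^t W_s\,dW_s$, whose quadratic variation is $[M,M]_T=\int_0^T W_s^2\,ds$; the grid, the number of paths and the exponent $p$ are hypothetical choices made only for illustration.

```python
import numpy as np

# Monte Carlo illustration of the Burkholder-Davis-Gundy inequality for
# M_t = int_0^t W_s dW_s, with quadratic variation [M, M]_T = int_0^T W_s^2 ds.
# Grid, number of paths and exponent p are hypothetical, for illustration only.
T, steps, paths, p = 1.0, 1000, 20000, 4
dt = T / steps
rng = np.random.default_rng(3)

dW = rng.normal(0.0, np.sqrt(dt), size=(paths, steps))
W_left = np.cumsum(dW, axis=1) - dW              # Brownian path at the left grid points
M = np.cumsum(W_left * dW, axis=1)               # discrete Ito integral of W against dW
M_star = np.max(np.abs(M), axis=1)               # sup_{t <= T} |M_t| on the grid
QV = np.sum(W_left**2, axis=1) * dt              # Riemann sum for [M, M]_T

lhs = np.mean(M_star**p)                         # E[(M_T^*)^p]
rhs = np.mean(QV**(p / 2))                       # E[[M, M]_T^(p/2)]
print("E[(M*)^p] =", round(lhs, 4),
      "  E[[M,M]^(p/2)] =", round(rhs, 4),
      "  ratio =", round(lhs / rhs, 3))
# BDG states c_p * rhs <= lhs <= C_p * rhs for universal constants c_p, C_p.
```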
The following lemma can be found in [@Martin1 Lemma 3.7, pp 16]. \[ch4lemma15\] Let $k\in\mathbb{N}$ and let $Z : [0,T]\times \Omega \longrightarrow \mathbb{R}^{k\times m}$ be a predictable stochastic process satisfying $\mathbb{P}\left[\int_0^T||Z_s||^2ds<+\infty\right]=1$. Then we have the following inequality $$\begin{aligned} \left\|\sup_{s\in[0,t]}\left\|\int_0^sZ_udW_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}\leq C_p\left(\int_0^t\sum_{i=1}^m||Z_s\vec{e}_i||^2_{L^p(\Omega, \mathbb{R}^k)}ds\right)^{1/2} \end{aligned}$$ for all $t\in[0,T]$ and all $p\in[1,\infty)$. Where $(\vec{e}_1,\cdots,\vec{e}_m)$ is the canonical basis of $\mathbb{R}^m$. Since $W$ is a continuous martingale satisfying $d[ W,W]_s= ds$, it follows from the property of the quadratic variation (see [@Fima 8.21, pp 219] ) that $$\begin{aligned} \left[ \int_0Z_sdW_s, \int_0Z_sdW_s\right]_t=\int_0^t||Z_s||^2d[W,W]_s=\int_0^t||Z_s||^2 ds. \label{ch4Bur2} \end{aligned}$$ Applying Lemma \[ch4lemma18b\] for $M_t=\sup\limits_{0\leq s\leq T}\int_0^tZ_sdW_s$ and using leads to : $$\begin{aligned} \left[\mathbb{E}\left[\sup_{0\leq t\leq T}\left\|\int_0^tZ_udW_u\right\|^p\right]\right]^{1/p}\leq C_p\left[\mathbb{E}\left(\int_0^T||Z_s||^2ds\right)^{p/2}\right]^{1/p}, \end{aligned}$$ where $C_p$ is a positive constant depending on $p$ : Using the definition of $||X||_{L^p(\Omega, \mathbb{R})}$ for any random variable $X$, it follows that $$\begin{aligned} \left\|\sup_{s\in[0,T]}\left\|\int_0^sZ_udW_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}&\leq& C_p\left\|\int_0^T||Z_s||^2ds\right\|^{1/2}_{L^{p/2}(\Omega, \mathbb{R})}\\ &\leq&C_p\left\|\int_0^T\sum_{i=1}^m||Z_s.\vec{e}_i||^2ds\right\|^{1/2}_{L^{p/2}(\Omega, \mathbb{R})}. \end{aligned}$$ Using Minkowski inequality in its integral form (see Proposition \[ch1Minkowski\]) yields : $$\begin{aligned} \left\|\sup_{s\in[0,T]}\left\|\int_0^sZ_udW_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}\leq C_p\left(\int_0^T\sum_{i=1}^m\left\|||Z_s.\vec{e}_i||^2\right\|_{L^{p/2}(\Omega, \mathbb{R}^k)}ds\right)^{1/2}. \end{aligned}$$ Using Holder inequality, it follows that : $$\begin{aligned} \left\|\sup_{s\in[0,T]}\left\|\int_0^sZ_udW_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}\leq C_p\left(\int_0^T\sum_{i=1}^m||Z_s.\vec{e}_i||^2_{L^p(\Omega, \mathbb{R}^k)}ds\right)^{1/2}. \end{aligned}$$ This complete the proof of the lemma. The following lemma and its proof can be found in [@Martin1 Lemma 3.8, pp 16]. \[ch4lemma16\] Let $k\in\mathbb{N}$ and let $ Z^M_l : \Omega \longrightarrow \mathbb{R}^{k\times m}$, $l\in\{0,1,\cdots, M-1\}$, $M\in\mathbb{N}$ be a familly of mappings such that $Z^M_l$ is $\mathcal{F}_{lT/M}/\mathcal{B}(\mathbb{R}^{k\times m})$-measurable for all $l\in\{0,1,\cdots,M-1\}$ and $M\in\mathbb{N}$. Then the following inequality holds : $$\begin{aligned} \left\|\sup_{j\in\{0,1,\cdots,n\}}\left\|\sum_{l=0}^{j-1}Z_l^M\Delta W^M_l\right\|\right\|_{L^p(\Omega, \mathbb{R})} \leq C_p\left(\sum_{l=0}^{n-1}\sum_{i=1}^m||Z^M_l.\vec{e}_i||^2_{L^p(\Omega, \mathbb{R}^k)}\dfrac{T}{M}\right)^{1/2}. \end{aligned}$$ Let $\overline{Z}^M : [0,T]\times\Omega \longrightarrow \mathbb{R}^{k\times m}$ such that $\overline{Z}_s :=Z^M_l$ for all $s\in\left[\dfrac{lT}{M},\dfrac{(l+1)T}{M}\right)$, $l\in\{0,1,\cdots,M-1\}$ and all $M\in\mathbb{N}$. 
Using Lemma \[ch4lemma15\], one obtain : $$\begin{aligned} \left\|\sup_{j\in\{0,1,\cdots,n\}}\left\|\sum_{l=0}^{j-1}Z_l^M\Delta W^M_l\right\|\right\|_{L^p(\Omega, \mathbb{R})} &=&\left\|\sup_{j\in\{0,1,\cdots, n\}}\left\|\int_0^{jT/M}\overline{Z}^M_udW_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}\\ &\leq & \left\|\sup_{s\in\left[0,\dfrac{nT}{M}\right]}\left\|\int_0^s\overline{Z}^M_udW_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}\\ &\leq &C_p\left(\int_0^{nT/M}\sum_{i=1}^m||Z_s.\vec{e}_i||^2_{L^p(\Omega,\mathbb{R}^k)}ds\right)^{1/2}\\ &=&C_p\left(\sum_{l=0}^{n-1}\sum_{i=1}^m||Z^M_l.\vec{e}_i||^2_{L^p(\Omega, \mathbb{R}^k)}\dfrac{T}{M}\right)^{1/2}. \end{aligned}$$ \[ch4lemma18\] Let $k\in\mathbb{N}$ and $Z :[0, T]\times\Omega \longrightarrow \mathbb{R}^k$ be a predictable stochastic process satisfying $\mathbb{E}\left(\int_0^T||Z_s||^2ds\right)<+\infty$. Then the following inequality holds : $$\begin{aligned} \left\|\sup_{s\in[0,T]}\left\|\int_0^sZ_ud\overline{N}_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}\leq C_p\left(\int_0^T||Z_s||^2_{L^p(\Omega, \mathbb{R}^k)}ds\right)^{1/2}, \end{aligned}$$ for all $t\in[0, T]$ and all $p\in[1,+\infty)$. Since $\overline{N}$ is a martingale with càdlàg paths satisfying $d[ \overline{N},\overline{N}]_s=\lambda s$ (see Proposition \[ch1quadratic\]), it follows from the property of the quadratic variation (see [@Fima 8.21, pp 219]) that $$\begin{aligned} \left[\int_0Z_sd\overline{N}_s, \int_0Z_sd\overline{N}_s\right]_t=\int_0^t||Z_s||^2\lambda ds. \label{ch4Bur1} \end{aligned}$$ Applying Lemma \[ch4lemma18b\] for $M_t=\sup\limits_{0\leq s\leq T}\int_0^tZ_sd\overline{N}_s$ and using leads to : $$\begin{aligned} \left[\mathbb{E}\left[\sup_{0\leq t\leq T}\left\|\int_0^tZ_ud\overline{N}_u\right\|^p\right]\right]^{1/p}\leq C_p\left[\mathbb{E}\left(\int_0^T||Z_s||^2ds\right)^{p/2}\right]^{1/p}, \end{aligned}$$ where $C_p$ is a positive constant depending on $p$ and $\lambda$. Using the definition of $||X||_{L^p(\Omega, \mathbb{R})}$ for any random variable $X$, it follows that $$\begin{aligned} \left\|\sup_{s\in[0,T]}\left\|\int_0^sZ_ud\overline{N}_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}\leq C_p\left\|\int_0^T||Z_s||^2ds\right\|^{1/2}_{L^{p/2}(\Omega, \mathbb{R})}. \end{aligned}$$ Using Minkowski inequality in its integral form (see Proposition \[ch1Minkowski\]) yields : $$\begin{aligned} \left\|\sup_{s\in[0,T]}\left\|\int_0^sZ_ud\overline{N}_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}\leq C_p\left(\int_0^T\left\|||Z_s||^2\right\|_{L^{p/2}(\Omega, \mathbb{R}^k)}ds\right)^{1/2}. \end{aligned}$$ Using Holder inequality leads to : $$\begin{aligned} \left\|\sup_{s\in[0,T]}\left\|\int_0^sZ_ud\overline{N}_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}\leq C_p\left(\int_0^T||Z_s||^2_{L^p(\Omega, \mathbb{R}^k)}ds\right)^{1/2}. \end{aligned}$$ This complete the proof of the lemma. \[ch4lemma19\] Let $k\in\mathbb{N}$, $M\in\mathbb{N}$ and $Z^M_l : \Omega \longrightarrow \mathbb{R}^k, l\in\{0,1,\cdots, M-1\}$ be a family of mappings such that $Z^M_l$ is $\mathcal{F}_{lT/M}/\mathcal{B}(\mathbb{R}^k)$-measurable for all $l\in\{0,1,\cdots, M-1\}$, then $\forall\; n\in\{0,1\cdots, M\}$ the following inequality holds : $$\begin{aligned} \left\|\sup_{j\in\{0,1,\cdots, n\}}\left\|\sum_{l=0}^{j-1}Z^M_l\Delta\overline{N}^M_l\right\|\right\|_{L^p(\Omega, \mathbb{R})}\leq C_p\left(\sum_{j=0}^{n-1}||Z^M_j||^2_{L^p(\Omega, \mathbb{R}^k)}\dfrac{T}{M}\right)^{1/2}, \end{aligned}$$ where $C_p$ is a positive constant independent of $M$. 
Let’s define $\overline{Z}^M : [0, T]\times\Omega\longrightarrow \mathbb{R}^k$ such that $\overline{Z}^M_s := Z^M_l$ for all $s\in\left[\dfrac{lT}{M}, \dfrac{(l+1)T}{M}\right)$, $l\in\{0, 1,\cdots, M-1\}$. Using the definition of stochastic integral and Lemma \[ch4lemma18\], it follows that : $$\begin{aligned} \left\|\sup_{j\in\{0,1,\cdots, n\}}\left\|\sum_{l=0}^{j-1}Z^M_l\Delta\overline{N}^M_l\right\|\right\|_{L^p(\Omega, \mathbb{R})} &= &\left\|\sup_{j\in\{0,1,\cdots, n\}}\left\|\int_0^{jT/M}\overline{Z}^M_ud\overline{N}^M_u\right\|\right\|_{L^p(\Omega, \mathbb{R})}\\ &\leq &\left\|\sup_{s\in[0, nT/M]}\left\|\int_0^s\overline{Z}^M_ud\overline{N}_u\right\|\right\|_{L^p(\Omega, \mathbb{R}^k)}\\ &\leq & C_p\left(\int_0^{nT/M}||\overline{Z}^M_u||^2_{L^p(\Omega, \mathbb{R}^k)}ds\right)^{1/2}\\ &=&C_p\left(\sum_{j=0}^{n-1}||Z_j^M||^2_{L^p(\Omega,\mathbb{R}^k)}\dfrac{T}{M}\right)^{1/2}. \end{aligned}$$ This complete the proof of lemma. Now we are ready to prove Theorem \[ch4theorem1\]. **\[ Theorem \[ch4theorem1\]\]** Let’s first represent the numerical approximation $Y^M_n$ in the following appropriate form : $$\begin{aligned} Y^M_n&=&Y_{n-1}^M+\dfrac{\Delta tf_{\lambda}(Y^M_{n-1})}{1+\Delta t||f_{\lambda}(Y^M_{n-1})||}+g(Y_{n-1})\Delta W^M_{n-1}+h(Y^M_{n-1})\Delta\overline{N}^M_{n-1}\\ &=&X_0+\sum_{k=0}^{n-1}\dfrac{\Delta t f_{\lambda}(Y_k^M)}{1+\Delta t||f_{\lambda}(Y^M_k)||}+\sum_{k=0}^{n-1}g(Y^M_k)\Delta W^M_k+\sum_{k=0}^{n-1}h(Y^M_k)\Delta\overline{N}^M_k\\ &=& X_0+ \sum_{k=0}^{n-1}g(0)\Delta W^M_k+\sum_{k=0}^{n-1}h(0)\Delta\overline{N}^M_k+\sum_{k=0}^{n-1}\dfrac{\Delta tf_{\lambda}(Y^M_{n-1})}{1+\Delta t||f_{\lambda}(Y^M_{n-1})||}\\ &+&\sum_{k=0}^{n-1}(g(Y^M_k)-g(0))\Delta W^M_k+\sum_{k=0}^{n-1}(h(Y^M_k)-h(0))\Delta\overline{N}^M_k, \end{aligned}$$ for all $M\in\mathbb{N} $ and all $n\in\{0,\cdots,M\}$. Using the inequality $$\begin{aligned} \left\|\dfrac{\Delta tf_{\lambda}(Y^M_k)}{1+\Delta t||f_{\lambda}(Y^M_k)||}\right\|_{L^P(\Omega, \mathbb{R}^d)} <1\end{aligned}$$ it follows that : $$\begin{aligned} ||Y^M_n||_{L^p(\Omega, \mathbb{R}^d)} &\leq &||X_0||_{L^p(\Omega,\mathbb{R}^d)}+\left\|\sum_{k=0}^{n-1}g(0)\Delta W^M_k\right\|_{L^p(\Omega, \mathbb{R}^d)}+\left\|\sum_{k=0}^{n-1}h(0)\Delta\overline{N}^M_k\right\|_{L^p(\Omega,\mathbb{R}^d)}+M\\ &+&\left\|\sum_{k=0}^{n-1}(g(Y^M_k)-g(0))\Delta W^M_k\right\|_{L^p(\Omega,\mathbb{R}^d)}+\left\|\sum_{k=0}^{n-1}(h(Y^M_k)-h(0))\Delta\overline{N}^M_k\right\|_{L^p(\Omega,\mathbb{R}^d)}.\end{aligned}$$ Using Lemma \[ch4lemma16\] and Lemma \[ch4lemma19\], it follows that : $$\begin{aligned} ||Y^M_n||_{L^p(\Omega,\mathbb{R}^d)} &\leq &||X_0||_{L^p(\Omega,\mathbb{R})}+C_p\left(\sum_{k=0}^{n-1}\sum_{i=1}^{m}||g_i(0)||^2\dfrac{T}{M}\right)^{1/2}+C_p\left(\sum_{k=0}^{n-1}||h(0)||^2\dfrac{T}{M}\right)^{1/2}\nonumber\\ &+&M+ C_p\left(\sum_{k=0}^{n-1}\sum_{i=1}^m||(g_i(Y_k^M)-g_i(0))\Delta W^M_k||^2_{L^p(\Omega, \mathbb{R}^d)}\dfrac{T}{M}\right)^{1/2}\nonumber\\ &+&C_p\left(\sum_{k=0}^{n-1}\lambda||(h(Y_k^M)-h(0))\Delta W^M_k||^2_{L^p(\Omega, \mathbb{R}^d)}\dfrac{T}{M}\right)^{1/2}\nonumber\\ &\leq&||X_0||_{L^p(\Omega, \mathbb{R}^d)}+C_p\left(\dfrac{nT}{M}\sum_{i=1}^m||g_i(0)||^2\right)^{1/2}+C_p\left(\dfrac{nT}{M}||h(0)||^2\right)^{1/2}\nonumber\\ &+&M+C_p\left(\sum_{k=0}^{n-1}\sum_{i=1}^m||g_i(Y^M_k)-g_i(0)||^2_{L^p(\Omega,\mathbb{R}^d)}\dfrac{T}{M}\right)^{1/2}\nonumber\\ &+&C_p\left(\sum_{k=0}^{n-1}||h(Y^M_k)-h(0)||^2_{L^p(\Omega,\mathbb{R}^d)}\dfrac{T}{M}\right)^{1/2}. 
\label{ch4MB1}\end{aligned}$$ From $||g_i(0)||^2\leq ||g(0)||^2$ and the global Lipschitz condition satisfied by $g$ and $h$, we obtain $||g_i(Y^M_k)-g_i(0)||_{L^p(\Omega, \mathbb{R}^d)}\leq C||Y^M_k||_{L^p(\Omega,\mathbb{R}^d)}$ and $||h(Y^M_k)-h(0)||_{L^p(\Omega, \mathbb{R}^d)}\leq C||Y^M_k||_{L^p(\Omega,\mathbb{R}^d)}$. So using , we obtain $$\begin{aligned} ||Y^M_n||_{L^p(\Omega, \mathbb{R}^d)} &\leq & ||X_0||_{L^p(\Omega,\mathbb{R}^d)}+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+M\\ &+&C_p\left(\dfrac{Tm}{M}\sum_{k=0}^{n-1}||Y^M_k||^2_{L^p(\Omega,\mathbb{R}^d)}\right)^{1/2} +C_p\left(\dfrac{T}{M}\sum_{k=0}^{n-1}||Y^M_k||^2_{L^p(\Omega,\mathbb{R}^d)}\right)^{1/2}.\end{aligned}$$ Using the inequality $(a+b+c)^2\leq 3a^2+3b^2+3c^2$, it follows that : $$\begin{aligned} ||Y^M_n||^2_{L^p(\Omega,\mathbb{R}^d)}&\leq& 3\left(||X_0||_{L^p(\Omega,\mathbb{R}^d)}+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+M\right)^2\nonumber\\ &+&\dfrac{3T(C_p\sqrt{m}+C_p)^2}{M}\sum_{k=0}^{n-1}||Y^M_k||^2_{L^p(\Omega, \mathbb{R}^d)}, \label{ch4MB2}\end{aligned}$$ for all $p\in[1,\infty)$. Using the fact that $ \dfrac{3T(p\sqrt{m}+C_p)^2}{M}<3(p\sqrt{m}+C_p)^2 $ we obtain the following estimation $$\begin{aligned} ||Y^M_n||^2_{L^p(\Omega,\mathbb{R}^d)}&\leq& 3\left(||X_0||_{L^p(\Omega,\mathbb{R}^d)}+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+M\right)^2\nonumber\\ &+&3T(C_p\sqrt{m}+C_p)^2\sum_{k=0}^{n-1}||Y^M_k||^2_{L^p(\Omega, \mathbb{R}^d)}, \label{ch4MB}\end{aligned}$$ Applying Gronwall lemma to leads to $$\begin{aligned} ||Y^M_n||^2_{L^p(\Omega, \mathbb{R}^d)}\leq 3e^{3(C_p\sqrt{m}+C_p)^2}\left(||X_0||_{L^p(\Omega,\mathbb{R}^d)}+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+M\right)^2. \label{ch4MB3}\end{aligned}$$ Taking the square root and the supremum in the both sides of leads to : $ \sup\limits_{n\in\{0,\cdots, M\}}||Y^M_n||_{L^p(\Omega, \mathbb{R}^d)} $ $$\begin{aligned} \leq \sqrt{3}e^{3(C_p\sqrt{m}+C_p)^2}\left(||X_0||_{L^p(\Omega,\mathbb{R}^d)}+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+M\right) \label{ch4MB4}\end{aligned}$$ Unfortunately, is not enough to conclude the proof of the lemma due to the term $M$ in the right hand side. Using the fact that $(\Omega_n^M)_n$ is a decreasing sequence and by exploiting Holder inequality, we obtain : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots,M\}}\left\|\mathbf{1}_{(\Omega_n^M)^c}Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}& \leq &\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots,M\}}\left\|\mathbf{1}_{(\Omega^M_M)^c}\right\|_{L^{2p}(\Omega,\mathbb{R}^d)}\left\|Y^M_n\right\|_{L^{2p}(\Omega,\mathbb{R}^d)}\nonumber\\ &\leq &\left(\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots,M\}}\left(M\left\|\mathbf{1}_{(\Omega^M_M)^c}\right\|_{L^{2p}(\Omega,\mathbb{R}^d)}\right)\right)\nonumber\\ &\times &\left(\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots,M\}}\left(M^{-1}||Y^M_n||_{L^{2p}(\Omega,\mathbb{R}^d)}\right)\right). \label{ch4MB5}\end{aligned}$$ Using inequality we have $ \left(\sup\limits_{M\in\mathbb{N}}\sup\limits_{n\in\{0,\cdots,M\}}\left(M^{-1}||Y^M_n||_{L^{2p}(\Omega,\mathbb{R}^d)}\right)\right)$ $$\begin{aligned} \leq \sqrt{3}e^{3(C_p\sqrt{m}+C_p)^2}\left(\dfrac{||X_0||_{L^{2p}(\Omega, \mathbb{R}^d)}}{M}+\dfrac{C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||}{M}+1\right)\nonumber\\ \leq \sqrt{3}e^{3(C_p\sqrt{m}+C_p)^2}\left(||X_0||_{L^{2p}(\Omega,\mathbb{R}^d)}+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+1\right)<+\infty, \label{ch4MB6}\end{aligned}$$ for all $p\geq 1$. 
From the relation $$\begin{aligned} \left\|\mathbf{1}_{(\Omega^M_M)^c}\right\|_{L^{2p}(\Omega, \mathbb{R}^d)}= \mathbb{E}\left[\mathbf{1}_{(\Omega^M_M)^c}\right]^{1/2p}= \mathbb{P}\left[(\Omega^M_M)^c\right]^{1/2p},\end{aligned}$$ it follows using Lemma \[ch4lemma13\] that : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left(M\left\|\mathbf{1}_{(\Omega^M_M)^c}\right\|_{L^{2p}(\Omega,\mathbb{R}^d)}\right)=\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left(M^{2p}\mathbb{P}\left[(\Omega^M_M)^c\right]\right)^{1/2p}<+\infty, \label{ch4MB7}\end{aligned}$$ for all $p\geq 1$. So plugging and in leads to : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|\mathbf{1}_{(\Omega^M_n)^c}Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}<+\infty. \label{ch4MB8}\end{aligned}$$ Furthermore, we have $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)} &\leq &\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|\mathbf{1}_{(\Omega^M_n)}Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}\nonumber\\ &+&\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|\mathbf{1}_{(\Omega^M_n)^c}Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}. \label{ch4MB9}\end{aligned}$$ From , the second term of inequality is bounded, while using Lemma \[ch4lemma2\] and Lemma \[ch4lemma12\] we have : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|\mathbf{1}_{(\Omega^M_n)}Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}\leq \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|D_n^M\right\|_{L^p(\Omega,\mathbb{R}^d)}<+\infty. \label{ch4MB10}\end{aligned}$$ Finally plugging and in leads to : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}<+\infty.\end{aligned}$$ Strong convergence of the compensated tamed Euler scheme -------------------------------------------------------- The main result of this chapter is given in the following theorem. \[ch4theorem2\] Under Assumptions \[ch4assumption1\], for all $p\in[1,+\infty)$ there exists a positive constant $C_p$ such that : $$\begin{aligned} \left(\mathbb{E}\left[\sup_{t\in[0,T]}\left\|X_t-\overline{Y}^M_t\right\|^p\right]\right)^{1/p}\leq C_p\Delta t^{1/2}, \label{ch4inetheo} \end{aligned}$$ for all $M\in \mathbb{N}$, where $X : [0,T]\times \Omega\longrightarrow \mathbb{R}^d$ is the exact solution of equation and $\overline{Y}^M_t$ is the time-continuous approximation defined by . In order to prove Theorem \[ch4theorem2\], we need the following two lemmas. Following closely , we have the following lemma. \[ch4lemma21\] Let $Y_n^M$ be defined by for all $M\in\mathbb{N}$ and all $n\in\{0,1,\cdots, M\}$, then we have $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,1,\cdots, M\}}\left(\mathbb{E}\left[||f_{\lambda}(Y_n^M)||^p\right]\vee \mathbb{E}\left[\left\|g(Y_n^M)\right\|^p\right]\vee \mathbb{E}\left[\left\|h(Y_n^M)\right\|^p\right]\right)<+\infty,\end{aligned}$$ for all $p\in[1,\infty)$. From the polynomial growth condition of $f_{\lambda}$, for all $x\in\mathbb{R}^d$ we have $$\begin{aligned} ||f_{\lambda}(x)||\leq C(K+||x||^c)||x||+||f_{\lambda}(0)||=CK||x||+C||x||^{c+1}+||f_{\lambda}(0)||.\end{aligned}$$ - If $||x||\leq 1$, then $CK||x||\leq CK$, hence $$\begin{aligned} ||f_{\lambda}(x)||&\leq& CK+C||x||^{c+1}+||f_{\lambda}(0)||\nonumber\\ &\leq & KC+KC||x||^{c+1}+C+C||x||^{c+1}+||f_{\lambda}(0)||+||f_{\lambda}(0)||||x||^{c+1}\nonumber\\ &=&(KC+C+||f_{\lambda}(0)||)(1+||x||^{c+1}). 
\label{ch4eq1}\end{aligned}$$ - If $||x||\geq 1$, then $C||x||\leq C||x||^{c+1}$, hence $$\begin{aligned} ||f_{\lambda}(x)||&\leq & KC||x||^{c+1}+C||x||^{c+1}+||f_{\lambda}(0)||\nonumber\\ &\leq & KC+KC||x||^{c+1}+C+C||x||^{c+1}+||f_{\lambda}(0)||+||f_{\lambda}(0)||||x||^{c+1}\nonumber\\ &=&(KC+C+||f_{\lambda}(0)||)(1+||x||^{c+1}). \label{ch4eq2}\end{aligned}$$ So it follows from and that $$\begin{aligned} ||f_{\lambda}(x)|| \leq (KC+C+||f_{\lambda}(0)||)(1+||x||^{c+1}), \hspace{0.5cm} \text{for all} \hspace{0.3cm} x\in\mathbb{R}^d. \label{ch4eq3}\end{aligned}$$ Using inequality and Theorem \[ch4theorem1\], it follows that : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots,M\}}\left\|f_{\lambda}(Y_n^M)\right\|_{L^p(\Omega, \mathbb{R}^d)}&\leq& (KC+C+||f_{\lambda}(0)||)\\ &\times&\left(1+\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|Y^M_n\right\|^{c+1}_{L^{p(c+1)}(\Omega, \mathbb{R}^d)}\right)\\ &<&+\infty,\end{aligned}$$ for all $p\in[1,\infty)$. In other hand, using the global Lipschitz condition satisfied by $g$ and $h$, it follows that : $$\begin{aligned} ||g(x)|| \leq C||x||+ ||g(0)|| \hspace{0.2cm} \text{and}\hspace{0.2cm} ||h(x)||\leq C||x||+||h(0)||. \label{ch4stron2}\end{aligned}$$ Using once again Theorem \[ch4theorem1\], it follows from that : $$\begin{aligned} \sup_{M\in\mathbb{N}, n\in\{0, \cdots, M\}}\left\|g(Y^M_n)\right\|_{L^p(\Omega, \mathbb{R}^d)}\leq||g(0)||+C\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|Y^M_n\right\|_{L^p(\Omega, \mathbb{R}^d)}<+\infty,\end{aligned}$$ for all $p\in[1,\infty)$. Using the same argument as for $g$ the following holds $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|h(Y^M_n)\right\|_{L^p(\Omega,\mathbb{R}^d)}<+\infty, \end{aligned}$$ for all $p\in[1, +\infty)$. This complete the proof of Lemma \[ch4lemma21\]. For $s\in[0,T]$ let $\lfloor s\rfloor$ be the greatest grid point less than $s$. We have the following lemma. \[ch4lemma22\] For any stepsize $\Delta t$, the following inequalities holds $$\begin{aligned} \sup_{t\in[0,T]}\left\|\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}\leq C_p\Delta t^{1/2}, \end{aligned}$$ $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{t\in[0,T]}\left\|\overline{Y}^M_t\right\|_{L^p(\Omega, \mathbb{R}^d)}<\infty, \end{aligned}$$ $$\begin{aligned} \sup_{t\in[0,T]}\left\|f_{\lambda}(\overline{Y}^M_t)-f_{\lambda}(\overline{Y}^M_{\lfloor t\rfloor})\right\|_{L^p(\Omega, \mathbb{R}^d)}\leq C_p\Delta t^{1/2}. 
\end{aligned}$$ - Using Lemma \[ch4lemma18\], Lemma \[ch4lemma15\] and the time continous approximation , it follows that : $ \sup\limits_{t\in[0,T]}\left\|\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)} $ $$\begin{aligned} &\leq &\dfrac{T}{M}\left(\sup_{t\in[0,T]}\left\|\dfrac{f_{\lambda}(\overline{Y}^M_{\lfloor t\rfloor})}{1+\Delta t||f_{\lambda}(\overline{Y}^M_{\lfloor t\rfloor})||}\right\|_{L^2(\Omega, \mathbb{R}^d)}\right)+\sup_{t\in[0, T]}\left\|\int^t_{\lfloor t\rfloor} g(\overline{Y}^M_{\lfloor t\rfloor})dW_s\right\|_{L^p(\Omega, \mathbb{R}^d)}\nonumber\\ &+& \sup_{t\in[0,T]}\left\|\int^t_{\lfloor t\rfloor}h(\overline{Y}^M_{\lfloor t\rfloor})d\overline{N}_s\right\|_{L^p(\Omega, \mathbb{R}^d)}\nonumber\\ &\leq &\dfrac{T}{\sqrt{M}}\left(\sup_{n\in\{0,\cdots, M\}}||f_{\lambda}(Y^M_n)||_{L^p(\Omega, \mathbb{R}^d)}\right)+\sup_{t\in[0,T]}\left(\dfrac{T}{M}\sum_{i=1}^m\int^t_{\lfloor t\rfloor}||g_i(\overline{Y}^M_s)||^2_{L^p(\Omega, \mathbb{R}^k)}ds\right)^{1/2}\nonumber\\ &+& \sup_{t\in[0,T]}\left(\dfrac{TC_p}{M}\int^t_{\lfloor t\rfloor}||h(\overline{Y}^M_s)||^2_{L^p(\Omega, \mathbb{R}^d)}ds\right)^{1/2}\nonumber\\ &\leq &\dfrac{T}{\sqrt{M}}\left(\sup_{n\in\{0,\cdots, M\}}||f_{\lambda}(Y^M_n)||_{L^p(\Omega, \mathbb{R}^d)}\right)+\dfrac{\sqrt{Tm}}{\sqrt{M}}\left(\sup_{i\in\{1,\cdots, m\}}\sup_{n\in\{0,\cdots, M\}}||g_i(Y^M_n)||_{L^p(\Omega, \mathbb{R}^k)}\right)\nonumber\\ &+&\dfrac{C_p\sqrt{T}}{\sqrt{M}}\left(\sup_{n\in\{0, \cdots,M\}}||h(Y^M_n)||_{L^p(\Omega, \mathbb{R}^d)}\right), \label{ch4Thcontinous} \end{aligned}$$ for all $M\in\mathbb{N}$. Using inequality and Lemma \[ch4lemma21\], it follows that : $$\begin{aligned} \left[\sup_{t\in[0,T]}\left\|\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}\right]<C_p\Delta t^{1/2}, \label{Ch4bon1}\end{aligned}$$ for all $p\in[1,\infty)$ and all stepsize $\Delta t$. - Using the inequalities , $||a||\leq ||a-b||+||b||$ for all $a,b\in\mathbb{R}^d$ and Theorem \[ch4theorem1\] it follows that $$\begin{aligned} \sup_{t\in[0,T]}||\overline{Y}^M_t||_{L^p(\Omega, \mathbb{R}^d)}&\leq&\left[\sup_{t\in[0,T]}\left\|\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}\right]+\sup_{t\in[0,T]}\left\|\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}\\ &\leq&\dfrac{C_p}{M^{1/2}}+\sup_{t\in[0,T]}\left\|\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}\\ &<&C_pT^{1/2}+\sup_{t\in[0,T]}\left\|\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}\\ &<&\infty,\end{aligned}$$ for all $p\in[1,+\infty)$ and all $M\in\mathbb{N}$. 
- Further, using the polynomial growth condition : $$\begin{aligned} ||f_{\lambda}(x)-f_{\lambda}(y)||\leq C(K+||x||^c+||y||^c)||x-y||,\end{aligned}$$ for all $x, y\in\mathbb{R}^d$, it follows using Holder inequality that : $$\begin{aligned} \sup_{t\in[0,T]}||f_{\lambda}(\overline{Y}^M_t)-f_{\lambda}(\overline{Y}^M_{\lfloor t\rfloor})||_{L^p(\Omega, \mathbb{R}^d)} &\leq &C\left(K+2\sup_{t\in[0,T]}||\overline{Y}^M_t||^c_{L^{2pc}(\Omega, \mathbb{R}^d)}\right)\nonumber\\ &\times & \left(\sup_{t\in[0,T]}||\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}||_{L^{2p}(\Omega, \mathbb{R}^d)}\right) \label{ch4Thfcontinou}\end{aligned}$$ Using and the first part of Lemma \[ch4lemma22\], the following inequality holds $$\begin{aligned} \left[\sup_{t\in[0,T]}||f_{\lambda}(\overline{Y}^M_t)-f_{\lambda}(\overline{Y}^M_{\lfloor t\rfloor})||_{L^p(\Omega, \mathbb{R}^d)}\right]<C_p\Delta t^{1/2}, \label{ch4Thffinal}\end{aligned}$$ for all $p\in[1,\infty)$ and for all stepsize $\Delta t$. Now we are ready to give the proof of Theorem \[ch4theorem2\]. **\[ Theorem \[ch4theorem2\]\]** Let’s recall that for $s\in[0,T]$, $\lfloor s\rfloor$ denote the greatest grid point less than $s$. The time continuous solution can be writen into its integral form as bellow : $$\begin{aligned} \overline{Y}^M_s=X_0+\int_0^s\dfrac{f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})}{1+\Delta t||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||}du+ \int_0^s g(\overline{Y}^M_{\lfloor u\rfloor})dW_u+\int_0^s h(\overline{Y}^M_{\lfloor u\rfloor})d\overline{N}_u, \label{ch4continoussol2} \end{aligned}$$ for all $s\in[0, T]$ almost surely and all $M\in\mathbb{N}$. Let’s estimate first the quantity $||X_s-\overline{Y}^M_s||^2$ $$\begin{aligned} X_s-\overline{Y}_s&=&\int_0^s\left(f_{\lambda}(X_u)-\dfrac{f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})}{1+\Delta t||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||}\right)du+\int_0^s\left(g(X_u)-g(\overline{Y}^M_{\lfloor u\rfloor})\right)dW_u\\ &+&\int_0^s\left(h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})\right)d\overline{N}_u. \end{aligned}$$ Using the relation $d\overline{N}_u=dN_u-\lambda du$, it follows that $$\begin{aligned} X_s-\overline{Y}_s&=&\int_0^s\left[\left(f_{\lambda}(X_u)-\dfrac{f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})}{1+\Delta t||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||}\right)-\lambda \left(h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})\right)\right]du\\ &+&\int_0^s\left(g(X_u)-g(\overline{Y}^M_{\lfloor u\rfloor})\right)dW_u+\int_0^s\left(h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})\right)dN_u. \end{aligned}$$ The function $ k :\mathbb{R}^m\longrightarrow \mathbb{R}$, $x \longmapsto ||x||^2$ is twice differentiable. Applying Itô’s formula for jumps process to the process $X_s-\overline{Y}^M_s$ leads to : $$\begin{aligned} \left\|X_s-\overline{Y}^M_s\right\|^2&=&2\int_0^s\left<X_u-\overline{Y}^M_u, f_{\lambda}(X_u)-\dfrac{f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})}{1+\Delta t||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||}\right>du\\ &-&2\lambda\int_0^s\left<X_u-\overline{Y}^M_u, h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})\right>du +\sum_{i=1}^m\int_0^s||g_i(X_u)-g_i(\overline{Y}^M_{\lfloor u\rfloor})||^2du\\ &+&2\sum_{i=1}^m\int_0^s\left<X_u-\overline{Y}^M_u, g_i(X_u)-g_i(\overline{Y}^M_{\lfloor u\rfloor})\right>dW^i_u\\ &+& \int_0^s\left[||X_u-\overline{Y}^M_u+h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^2-||X_u-\overline{Y}^M_u||^2\right]dN_u. 
\end{aligned}$$ Using again the relation $d\overline{N}_u=dN_u-\lambda du$ leads to : $$\begin{aligned} \left\|X_s-\overline{Y}^M_s\right\|^2&=&2\int_0^s\left<X_u-\overline{Y}^M_u, f_{\lambda}(X_u)-\dfrac{f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})}{1+\Delta t||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||}\right>du\nonumber\\ &-&2\lambda\int_0^s\left<X_u-\overline{Y}^M_u, h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})\right>du+\sum_{i=1}^m\int_0^s||g_i(X_u)-g_i(\overline{Y}^M_{\lfloor u\rfloor})||^2du\nonumber\\ &+&2\sum_{i=1}^m\int_0^s\left<X_u-\overline{Y}^M_u, g_i(X_u)-g_i(\overline{Y}^M_{\lfloor u\rfloor})\right>dW^i_u\nonumber\\ &+& \int_0^s\left[||X_u-\overline{Y}^M_u+h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^2-||X_u-\overline{Y}^M_u||^2\right]d\overline{N}_u\nonumber\\ &+&\lambda\int_0^s\left[||X_u-\overline{Y}^M_u+h(X_u)-h(\overline{Y}^M_{\lfloor u \rfloor})||^2-||X_u-\overline{Y}^M_u||^2\right]du\nonumber\\ &=&A_1+A_2+A_3+A_4+A_5+A_6. \label{ch4Th1} \end{aligned}$$ In the next step, we give some useful estimations of $A_1, A_2, A_3$ and $A_6$. $$\begin{aligned} A_1& : =&2\int_0^s\left<X_u-\overline{Y}^M_u,f_{\lambda}-\dfrac{f_{\lambda}(\overline{Y}_{\lfloor u\rfloor})}{1+\Delta t||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||}\right>du\\ &=&2\int_0^s\left\langle X_s-\overline{Y}^M_u,f_{\lambda}(X_u)-f_{\lambda}(\overline{Y}^M_u)\right\rangle du\\ &+&2\int_0^s\left<X_s-\overline{Y}^M_u,f_{\lambda}(\overline{Y}^M_u)-\dfrac{f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})}{1+\Delta t||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||}\right>du.\\ &=& A_{11}+A_{12} \end{aligned}$$ Using the one-sided Lipschitz condition satisfied by $f_{\lambda}$ leads to : $$\begin{aligned} A_{11} &: =&2\int_0^s\left\langle X_s-\overline{Y}^M_u,f_{\lambda}(X_u)-f_{\lambda}(\overline{Y}^M_u)\right\rangle du\nonumber\\ &\leq& 2C\int_0^u||X_u-\overline{Y}^M_u||^2du. \label{ch4ThA11} \end{aligned}$$ Moreover, using the inequality $\langle a, b\rangle\leq |a||b|\leq \dfrac{a^2}{2}+\dfrac{b^2}{2}$ leads to : $$\begin{aligned} A_{12}&=& 2\int_0^s\left<X_u-\overline{Y}^M_u,f_{\lambda}(\overline{Y}^M_u)-\dfrac{f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})}{1+\Delta t||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||}\right>du\nonumber\\ &=&2\int_0^s\left<X_u-\overline{Y}^M_u, f_{\lambda}(\overline{Y}^M_u)-f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})\right>ds\nonumber\\ &+&2\Delta t\int_0^s\left<X_u-\overline{Y}^M_u, \dfrac{f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||}{1+\Delta t||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||}\right>du\nonumber\\ &\leq &\int_0^s||X_u-\overline{Y}^M_u||^2du+\int_0^s||f_{\lambda}(\overline{Y}^M_u)-f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||^2du\nonumber\\ &+&\int_0^s||X_u-\overline{Y}^M_u||^2du+\dfrac{T^2}{M^2}\int_0^s||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||^4du\nonumber\\ &\leq &2\int_0^s||X_u-\overline{Y}^M_u||^2du+\int_0^s||f_{\lambda}(\overline{Y}^M_u)-f_{\lambda}(\overline{Y}_{\lfloor u\rfloor})||^2du\nonumber\\ &+&\dfrac{T^2}{M^2}\int_0^s||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||^4du. \label{ch4ThA12} \end{aligned}$$ Combining and give the following estimation of $A_1$ : $$\begin{aligned} A_1 &\leq & (2C+2)\int_0^s||X_u-\overline{Y}^M_u||^2du+\int_0^s||f_{\lambda}(\overline{Y}^M_u)-f_{\lambda}(\overline{Y}_{\lfloor u\rfloor})||^2du\nonumber\\ &+&\dfrac{T^2}{M^2}\int_0^s||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||^4du. 
\label{ch4ThA1} \end{aligned}$$ Using again the inequality $2\langle a, b\rangle\leq 2|a||b|\leq a^2+b^2$ and the global Lipschitz condition satisfied by $h$ leads to : $$\begin{aligned} A_2 & : =&-2\lambda\int_0^s\left<X_u-\overline{Y}^M_u, h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})\right>du\nonumber\\ &=&-2\lambda\int_0^s\left<X_u-\overline{Y}^M_u, h(X_u)-h(\overline{Y}^M_u)\right>du-2\lambda\int_0^s\left<X_u-\overline{Y}^M_u, h(\overline{Y}^M_u)-h(\overline{Y}^M_{\lfloor u\rfloor})\right>du\nonumber\\ &\leq &(2\lambda+\lambda C^2)\int_0^s||X_u-\overline{Y}^M_u||^2du+\lambda C^2\int_0^s||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2du. \label{ch4ThA2} \end{aligned}$$ Using the inequalities $||g_i(x)-g_i(y)||\leq ||g(x)-g(y)||$ and $(a+b)^2\leq 2a^2+2b^2$ and the global Lipschitz condition we have $$\begin{aligned} A_3 &: =&\sum_{i=1}^m\int_0^s||g_i(X_u)-g_i(\overline{Y}^M_{\lfloor u\rfloor})||^2du\nonumber\\ &\leq &m\int_0^s||g(X_u)-g(\overline{Y}^M_{\lfloor u\rfloor})||^2du\nonumber\\ &\leq &m\int_0^s||g(X_u)-g(\overline{Y}^M_u)+g(\overline{Y}^M_u)-g(\overline{Y}^M_{\lfloor u\rfloor})||^2du\nonumber\\ &\leq &2m\int_0^s||g(X_u)-g(\overline{Y}^M_u)||^2du+2m\int_0^s||g(\overline{Y}^M_u)-g(\overline{Y}^M_{\lfloor u\rfloor})||^2du\nonumber\\ &\leq& 2mC^2\int_0^s||X_u-\overline{Y}^M_u||^2du+2mC^2\int_0^s||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2du. \label{ch4ThA3} \end{aligned}$$ Using the same idea as above we obtain the following estimation of $A_6$ : $$\begin{aligned} A_6 & : =&\lambda\int_0^s\left[X_u-\overline{Y}^M_u+h(\overline{Y}^M_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^2-||X_u-\overline{Y}^M_u||^2\right]du\nonumber\\ &\leq &3\lambda\int_0^s||X_u-\overline{Y}^M_u||^2du+2\lambda\int_0^s||h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^2du\nonumber\\ &\leq &3\lambda\int_0^s||X_u-\overline{Y}^M_u||^2du+4\lambda\int_0^s||h(X_u)-h(\overline{Y}^M_u)||^2du\nonumber\\ &+ &4\lambda\int_0^s||h(\overline{Y}^M_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^2du\nonumber\\ &\leq &(3\lambda+4\lambda C^2)\int_0^s||X_u-\overline{Y}^M_u||^2du+4\lambda C^2\int_0^s||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2du. \label{ch4ThA6} \end{aligned}$$ Inserting , , and in we obtain : $$\begin{aligned} \left\|X_s-\overline{Y}^M_s\right\|^2&\leq &(2C+2+2mC^2+5\lambda+5\lambda C^2)\int_0^s||X_u-\overline{Y}^M_u||^2du\nonumber\\ &+&(2mC^2+5\lambda C^2)\int_0^s||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2du\nonumber\\ &+&\int_0^s||f_{\lambda}(\overline{Y}^M_u)-f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||^2du+\dfrac{T^2}{M^2}\int_0^s||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||^4du\nonumber\\ &+&2\sum_{i=1}^m\int_0^s\left<X_u-\overline{Y}^M_u, g_i(X_u)-g_i(\overline{Y}^M_{\lfloor u\rfloor})\right>dW^{i}_u\nonumber\\ &+&\int_0^s\left[||X_u-\overline{Y}^M_u+h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^2-||X_u-\overline{Y}^M_u||^2\right]d\overline{N}_u. 
\end{aligned}$$ Taking the supremum in both sides of the previous inequality leads to $$\begin{aligned} \sup_{s\in[0,t]}\left\|X_s-\overline{Y}^M_s\right\|^2&\leq &(2C+2+2mC^2+5\lambda+5\lambda C^2)\int_0^t||X_u-\overline{Y}^M_u||^2du\nonumber\\ &+&(2mC^2+5\lambda C^2)\int_0^t||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2du\nonumber\\ &+&\int_0^t||f_{\lambda}(\overline{Y}^M_u)-f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||^2du+\dfrac{T^2}{M^2}\int_0^t||f_{\lambda}(\overline{Y}^M_{\lfloor u\rfloor})||^4du\nonumber\\ &+&2\sup_{s\in[0,t]}\left|\sum_{i=1}^m\int_0^s\left<X_u-\overline{Y}^M_u, g_i(X_u)-g_i(\overline{Y}^M_{\lfloor u\rfloor})\right>dW^{i}_u\right|\nonumber\\ &+&\sup_{s\in[0,t]}\left|\int_0^s\left[||X_u-\overline{Y}^M_u+h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^2\right]d\overline{N}_u\right|\nonumber\\ &+&\sup_{s\in[0,t]}\left|\int_0^s||X_u-\overline{Y}^M_u||^2d\overline{N}_u\right| \label{ch4Th2}\end{aligned}$$ Using Lemma \[ch4lemma15\] we have the following estimation for all $p\geq 2$ $$\begin{aligned} B_1& :=&\left\|2\sup_{s\in[0,t]}\left|\sum_{i=1}^m\int_0^s\left<X_u-\overline{Y}^M_u, g_i(X_u)-g_i(\overline{Y}^M_{\lfloor u\rfloor})\right>dW^i_u\right|\right\|_{L^{p/2}(\Omega, \mathbb{R})}\\ &\leq &C_p\left(\int_0^t\sum_{i=1}^m\left\|\left<X_u-\overline{Y}^M_u, g_i(X_u)-g_i(\overline{Y}^M_{\lfloor u\rfloor})\right>\right\|^2_{L^{p/2}(\Omega, \mathbb{R})}ds\right)^{1/2}. \end{aligned}$$ Moreover, using Holder inequality, the inequalities $ab\leq \dfrac{a^2}{2}+\dfrac{b^2}{2}$ and $(a+b)^2\leq 2a^2+b^2$ we have the following estimations for all $p\geq 2$ $$\begin{aligned} B_1 &\leq& C_p\left(\int_0^t\sum_{i=1}^m\left\|\left<X_u-\overline{Y}^M_u, g_i(X_u)-g_i(\overline{Y}^M_{\lfloor u\rfloor})\right>\right\|^2_{L^{p/2}(\Omega, \mathbb{R})}ds\right)^{1/2}\nonumber\\ &\leq &C_p\left(\int_0^t\sum_{i=1}^m||X_u-\overline{Y}_u^M||^2_{L^p(\Omega, \mathbb{R})}||g_i(X_u)-g_u(\overline{Y}^M_{\lfloor u\rfloor})||^2_{L^p(\Omega, \mathbb{R}^d)}ds\right)^{1/2}\nonumber\\ &\leq &\dfrac{C_p}{\sqrt{2}}\left(\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||_{L^p(\Omega, \mathbb{R}^d)}\right)\left(2pC^2m\int_0^t||X_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}ds\right)^{1/2}\nonumber\\ &\leq &\dfrac{1}{4}\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}+p^2C_p^2m\int_0^t||X_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega,\mathbb{R}^d)}ds\nonumber\\ &\leq &\dfrac{1}{4}\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}+2p^2C_p^2m\int_0^t||X_s-\overline{Y}^M_s||^2_{L^p(\Omega,\mathbb{R}^d)}ds\nonumber\\ &+&2p^2C_p^2m\int_0^t||\overline{Y}^M_s-\overline{Y}^M_{\lfloor u\rfloor}||^2_{L^p(\Omega,\mathbb{R}^d)}ds, \label{ch4ThB1} \end{aligned}$$ Using Lemma \[ch4lemma18\] and the inequality $(a+b)^4\leq 16a^4+16b^4$, it follows that $$\begin{aligned} B_2 &: =&\left\|\sup_{s\in[0,t]}\left|\int_0^s||X_u-\overline{Y}^M_u+h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^2d\overline{N}_u\right|\right\|_{L^{p/2}(\Omega, \mathbb{R}^d)}\nonumber\\ &\leq &C_p\left(\int_0^t||X_u-\overline{Y}^M_u+h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2}\nonumber\\ &\leq &C_p\left(\int_0^t16||X_u-\overline{Y}^M_u||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}+16||h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2}, \end{aligned}$$ for all $p\geq 2$. 
Using the inequality $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$, it follows that : $$\begin{aligned} B_2&\leq &2C_p\left(\int_0^t||X_u-\overline{Y}^M_u||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2} +2C_p\left(\int_0^t||h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2}\nonumber\\ & =& B_{21}+B_{22}. \label{ch4ThB} \end{aligned}$$ Using Holder inequality, it follows that $$\begin{aligned} B_{21} &: = &2 C_p\left(\int_0^t||X_u-\overline{Y}^M_u||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2}\nonumber\\ &\leq &2C_p\left(\int_0^t||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)}||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)}du\right)^{1/2}\nonumber\\ &\leq &\dfrac{1}{4}\sup_{u\in[0,t]}||X_u-\overline{Y}^M_u||_{L^p(\Omega,\mathbb{R}^d)}8C_p\left(\int_0^t||X_u-\overline{Y}^M_u||^2_{L^p(\Omega,\mathbb{R}^d)}du\right)^{1/2}. \end{aligned}$$ Using the inequality $2ab\leq a^2+b^2$ leads to : $$\begin{aligned} B_{21}&\leq &\dfrac{1}{16}\sup_{u\in[0,t]}||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)} +16C^2_p\int_0^t||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)}du. \label{ch4ThB21} \end{aligned}$$ Using the inequalities $(a+b)^4\leq 4a^4+4b^4$ and $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$, we obtain $$\begin{aligned} B_{22}&: = &2C_p\left(\int_0^t||h(X_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2}\nonumber\\ &\leq &2C_p\left(\int_0^t\left[4||h(X_u)-h(\overline{Y}^M_u)||^4_{L^{p/2}(\Omega,\mathbb{R}^d)}+4||h(\overline{Y}^M_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^4_{L^{p/2}(\Omega,\mathbb{R}^d)}\right]du\right)^{1/2}\\ &\leq &4C_p\left(\int_0^t||h(X_u)-h(\overline{Y}^M_u)||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2} +4C_p\left(\int_0^t||h(\overline{Y}^M_u)-h(\overline{Y}^M_{\lfloor u\rfloor})||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2}. \end{aligned}$$ Using the global Lipschitz condition, leads to : $$\begin{aligned} B_{22}&\leq &4C_p\left(\int_0^tC||X_u-\overline{Y}^M_u||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2}+4C_p\left(\int_0^tC||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2}. \end{aligned}$$ Using the same estimations as for $B_{21}$, it follows that : $$\begin{aligned} B_{22} &\leq &\dfrac{1}{16}\sup_{u\in[0,t]}||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)} +64C_p\int_0^t||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)}du\nonumber\\ &+&\dfrac{1}{4}\sup_{u\in[0,t]}||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}+64C_p\int_0^t||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}du.\nonumber \end{aligned}$$ Taking the supremum under the integrand in the last term of the above inequality and using the fact that we don’t care about the value of the constant leads to : $$\begin{aligned} B_{22}&\leq &\dfrac{1}{16}\sup_{u\in[0,t]}||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)} +64C_p\int_0^t||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)}du\nonumber\\ &+&C_p\sup_{s\in[0,t]}||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}. \label{ch4ThB22} \end{aligned}$$ Inserting and into gives : $$\begin{aligned} B_2&\leq &\dfrac{1}{8}\sup_{u\in[0,t]}||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)}+C_p\int_0^t||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)}du\nonumber\\ &+&C_p\sup_{s\in[0,t]}||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}. 
\label{ch4ThB2} \end{aligned}$$ Using again Lemma \[ch4lemma18\] leads to : $$\begin{aligned} B_3 &: =& \left\|\sup_{u\in[0,t]}\left(\int_0^s||X_u-\overline{Y} ^M_u||^2d\overline{N}_u\right)^{1/2}\right\|_{L^{p/2}(\Omega, \mathbb{R}^d)}\\ &\leq &C_p\left(\int_0^t||X_u-\overline{Y}^M_u||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}du\right)^{1/2}. \end{aligned}$$ Using the same argument as for $B_{21}$, we obtain : $$\begin{aligned} B_3 &\leq &\dfrac{1}{8}\sup_{u\in[0,t]}||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)}+C_p\int_0^t||X_u-\overline{Y}^M_u||^2_{L^p(\Omega, \mathbb{R}^d)}du. \label{ch4ThB3} \end{aligned}$$ Taking the $L^{p}$ norm in both side of , inserting inequalities , , and using Holder inequality in its integral form (see Proposition \[ch1Minkowski\]) leads to : $$\begin{aligned} \left\|\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||\right\|^2_{L^p(\Omega, \mathbb{R}^d)}&=&\left\|\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2\right\|_{L^{p/2}(\Omega, \mathbb{R}^d)}\end{aligned}$$ $$\begin{aligned} &\leq & C_p\int_0^t||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}ds+C_p\int_0^t||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega,\mathbb{R}^d)}ds\\ &+&\int_0^t||f_{\lambda}(X_s)-f_{\lambda}(\overline{Y}^M_{\lfloor s\rfloor})||^2_{L^{p}(\Omega, \mathbb{R}^d)}ds+C_p\sup_{u\in[0,t]}||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}\\ &+&\dfrac{T^2}{M^2}\int_0^t||f_{\lambda}(\overline{Y}^M_{\lfloor s\rfloor})||^4_{L^{2p}(\Omega, \mathbb{R}^d)}ds +2C_p\int_0^t||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}ds\\ &+&\dfrac{1}{2}\left\|\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||\right\|^2_{L^{p}(\Omega, \mathbb{R}^d)}. \end{aligned}$$ for all $t\in[0,T]$ and all $p\in[2,+\infty)$. The previous inequality can be rewrite in the following appropriate form : $ \dfrac{1}{2}\left\|\sup\limits_{s\in[0,t]}\left\|X_s-\overline{Y}^M_s\right\|\right\|^2_{L^{p}(\Omega, \mathbb{R}^d)} $ $$\begin{aligned} &\leq & C_p\int_0^t||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}ds+C_p\int_0^t||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega,\mathbb{R}^d)}ds\\ &+&\int_0^t||f_{\lambda}(X_s)-f_{\lambda}(\overline{Y}^M_{\lfloor s\rfloor})||^2_{L^{p}(\Omega, \mathbb{R}^d)}ds+C_p\sup_{u\in[0,t]}||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}\\ &+&\dfrac{T^2}{M^2}\int_0^t||f_{\lambda}(\overline{Y}^M_{\lfloor s\rfloor})||^4_{L^{2p}(\Omega, \mathbb{R}^d)}ds +2C^2m\int_0^t||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}ds. \end{aligned}$$ Applying Gronwall lemma to the previous inequality leads to : $ \dfrac{1}{2}\left\|\sup\limits_{s\in[0,t]}||X_s-\overline{Y}^M_s||\right\|^2_{L^p(\Omega, \mathbb{R}^d)} $ $$\begin{aligned} &\leq &C_pe^{C_p}\left(\int_0^T||f_{\lambda}(\overline{Y}^M_s)-f_{\lambda}(\overline{Y}^M_{\lfloor s\rfloor})||^2_{L^p(\Omega, \mathbb{R}^d)}ds+C_p\sup_{t\in[0,t]}||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}\right.\\ & &\left. +\dfrac{T^2}{M^2}\int_0^T||f_{\lambda}(\overline{Y}^M_{\lfloor s\rfloor})||^4_{L^{2p}(\Omega, \mathbb{R}^d)}ds +C_p\int_0^T||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}ds\right). 
\end{aligned}$$ From the inequality $\sqrt{a+b+c}\leq \sqrt{a}+\sqrt{b}+\sqrt{c}$, it follows that $ \left\|\sup\limits_{t\in[0, T]}||X_t-\overline{Y}^M_t||\right\|_{L^p(\Omega, \mathbb{R}^d)} $ $$\begin{aligned} &\leq &C_pe^{C_p}\left(\sup_{t\in[0,T]}||f_{\lambda}(\overline{Y}^M_t)-f_{\lambda}(\overline{Y}^M_{\lfloor t\rfloor})||_{L^p(\Omega, \mathbb{R}^d)}+C_p\sup_{t\in[0,T]}||\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}||_{L^p(\Omega, \mathbb{R}^d)} \right.\nonumber\\ &+& \left. \dfrac{T}{M}\left[\sup_{n\in\{0,\cdots, M\}}||f_{\lambda}(Y^M_n)||^2_{L^p(\Omega, \mathbb{R}^d)}\right]+C_p\sup_{t\in[0,T]}||\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}||_{L^p(\Omega, \mathbb{R}^d)}\right), \label{ch4Gronwall} \end{aligned}$$ for all $p\in[2,\infty)$. Using Lemma \[ch4lemma21\] and Lemma \[ch4lemma22\] it follows from \[ch4Gronwall\] that $$\begin{aligned} \left(\mathbb{E}\left[\sup_{t\in[0,T]}\left\|X_t-\overline{Y}^M_t\right\|^p\right]\right)^{1/p}\leq C_p(\Delta t)^{1/2}, \label{ch4final} \end{aligned}$$ for all $p\in[2,\infty)$. Using the Hölder inequality, one can prove that holds for all $p\in[1,2]$. The proof of the theorem is complete. Numerical Experiments --------------------- In order to illustrate our theoretical result, we consider the following stochastic differential equation $\left\{\begin{array}{ll} dX_t=-X_t^4dt+X_tdW_t+X_tdN_t\\ X_0=1, \end{array} \right.$ with jump intensity $\lambda=1$. It is straightforward to verify that Assumptions \[ch4assumption1\] are satisfied. We use the Monte Carlo method to evaluate the error. The exact solution is approximated by the numerical solution computed with the small stepsize $dt=2^{-14}$. The following curve shows the strong error obtained with $5000$ paths; a sketch of the corresponding computation is given after the figure. ![Strong error of the compensated tamed Euler scheme](comtam.png) 
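For readers who wish to reproduce this experiment, the following Python sketch shows one possible implementation of the compensated tamed Euler scheme and of the Monte Carlo estimation of the strong error. The final time $T=1$, the list of coarse stepsizes, the random seed and the fact that the error is measured at the terminal time instead of the supremum over $[0,T]$ are assumptions made only for illustration; they are not prescribed by the text above.

``` python
import numpy as np

# Minimal sketch of the experiment; T, the coarse grids, the seed and the
# terminal-time error are assumptions, not specifications from the text.
rng = np.random.default_rng(0)

T, lam, X0 = 1.0, 1.0, 1.0
f = lambda x: -x**4                  # drift
g = lambda x: x                      # diffusion coefficient
h = lambda x: x                      # jump coefficient
f_lam = lambda x: f(x) + lam * h(x)  # f_lambda = f + lambda*h

def compensated_tamed_euler(dW, dNbar, dt):
    """One path of the compensated tamed Euler scheme driven by the given increments."""
    y = X0
    for dw, dn in zip(dW, dNbar):
        fy = f_lam(y)
        y = y + dt * fy / (1.0 + dt * abs(fy)) + g(y) * dw + h(y) * dn
    return y

M_ref = 2**14                        # reference grid, dt = 2^{-14}
dt_ref = T / M_ref
n_paths = 5000
levels = [2**k for k in range(4, 10)]        # coarse grids (assumed)
mse = {M: 0.0 for M in levels}

for _ in range(n_paths):
    dW_ref = rng.normal(0.0, np.sqrt(dt_ref), M_ref)
    dN_ref = rng.poisson(lam * dt_ref, M_ref).astype(float)
    dNbar_ref = dN_ref - lam * dt_ref        # compensated Poisson increments
    x_ref = compensated_tamed_euler(dW_ref, dNbar_ref, dt_ref)  # proxy for the exact solution
    for M in levels:
        r = M_ref // M                       # aggregate increments to the coarse grid
        dW = dW_ref.reshape(M, r).sum(axis=1)
        dNbar = dNbar_ref.reshape(M, r).sum(axis=1)
        y = compensated_tamed_euler(dW, dNbar, T / M)
        mse[M] += abs(x_ref - y)**2 / n_paths

for M in levels:
    print(f"dt = {T/M:.5f}   strong L^2 error ~ {np.sqrt(mse[M]):.4e}")
```

On a log-log scale the errors produced in this way are expected to decrease with a slope close to $1/2$, in agreement with Theorem \[ch4theorem2\].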
In this chapter, we proposed a compensated tamed Euler scheme to solve numerically SDEs with jumps under a non-global Lipschitz condition. We proved the strong convergence of order $0.5$ of the compensated tamed Euler scheme. This scheme is explicit and therefore requires less computational effort than the implicit scheme. In some situations, the drift can be decomposed into a Lipschitz continuous part and a non-Lipschitz continuous part. In the following chapter, we combine the tamed Euler scheme and the Euler scheme to obtain another scheme, called the semi-tamed Euler scheme, in order to solve numerically the kind of equation mentioned above. Strong convergence and stability of the semi-tamed Euler and the tamed Euler scheme for stochastic differential equations with jumps, under non-global Lipschitz continuous coefficients ======================================================================================================================================================================================== An explicit numerical method, called the compensated tamed Euler scheme, was developed in the previous chapter. More precisely, it is proved that this numerical approximation has strong convergence of order $0.5$ for stochastic differential equations with jumps under a non-global Lipschitz condition. In this chapter, following the idea of [@Xia2], we propose a semi-tamed Euler scheme to solve stochastic differential equations with jumps, where the drift coefficient is decomposed into a Lipschitz continuous part and a non-Lipschitz continuous part. We prove that for SDEs with jumps, the semi-tamed Euler scheme converges strongly with order $0.5$. We use this result to deduce a strong convergence of order $0.5$ of the tamed Euler scheme for SDEs with jumps, where the drift coefficient satisfies the non-global Lipschitz condition. We also investigate the stability of both the semi-tamed Euler scheme and the tamed Euler scheme. The contents of this chapter can also be found in [@atjdm1] and [@atjdm2]. Semi-tamed Euler scheme {#ch5intro} ----------------------- In this chapter, we consider again a jump-diffusion Itô stochastic differential equation (SDE) of the form : $$\begin{aligned} dX(t)= f(X(t^{-}))dt +g(X(t^{-}))dW(t)+h(X(t^{-}))dN(t), \hspace{0.5cm} X(0)=X_0, \label{ch5exactsol1}\end{aligned}$$ where $W_t$ is an $m$-dimensional Brownian motion and $f :\mathbb{R}^d\longrightarrow\mathbb{R}^d$, with $f(x)=u(x)+v(x)$, satisfies the global one-sided Lipschitz condition; here $u, v : \mathbb{R}^d\longrightarrow\mathbb{R}^d$, $u$ is the globally Lipschitz continuous part while $v$ is the non-globally Lipschitz continuous part. The functions $ g : \mathbb{R}^d \longrightarrow\mathbb{R}^{d\times m}$ and $h :\mathbb{R}^d \longrightarrow\mathbb{R}^d$ satisfy the global Lipschitz condition, and $N_t$ is a one-dimensional Poisson process with parameter $\lambda$. Using the relation $f=u+v$, equation can be rewritten into its equivalent form : $$\begin{aligned} dX(t)=u(X(t^-))dt+v(X(t^-))dt+g(X(t^-))dW(t)+h(X(t^-))dN(t), \hspace{0.2cm} X(0)=X_0.\end{aligned}$$ Since $h(X(t^-))dN(t)=h(X(t^-))d\overline{N}(t)+\lambda h(X(t^-))dt$, where $\overline{N}(t)=N(t)-\lambda t$ is the compensated Poisson process, we can also rewrite the jump-diffusion SDE in the following equivalent form $$\begin{aligned} dX(t)= f_\lambda(X(t^{-}))dt +g(X(t^{-}))dW(t)+h(X(t^{-}))d\overline{N}(t),\end{aligned}$$ where $$f_\lambda(x)=f(x)+\lambda h(x)= u(x)+\lambda h(x)+v(x). \label{ch5flambda}$$ If $T$ is the final time, we consider the tamed Euler scheme $$X_{n+1}^{M}=X_{n}^{M}+\dfrac{\Delta t f(X_{n}^{M})}{1+ \Delta t\Vert f(X_{n}^{M}) \Vert }+g(X_{n}^{M}) \Delta W_n +h(X_{n}^{M})\Delta N_n \label{ch5tam}$$ and the semi-tamed Euler scheme $$\begin{aligned} Y_{n+1}^{M}=Y_{n}^{M}+u(Y^M_n)\Delta t+\dfrac{\Delta t v(Y_{n}^{M})}{1+ \Delta t \Vert v(Y_{n}^{M}) \Vert }+g(Y_{n}^{M}) \Delta W_n +h(Y_{n}^{M})\Delta N_n, \label{ch5semi}\end{aligned}$$ where $\Delta t=\dfrac{T}{M}$ is the time step-size and $M\in\mathbb{N}$ is the number of steps; a short code illustration of these two one-step updates is given below, after the continuous-time interpolation $\eqref{ch5continoussolu}$. Inspired by [@Xia2] and [@Martin1], we prove the strong convergence of the numerical approximation and deduce the strong convergence of to the exact solution of . Moment bounds of the numerical solution ------------------------------------------ Throughout this chapter, we use Notations \[ch4nota1\]. Note that the numerical approximation can be written in the following equivalent form $$\begin{aligned} Y_{n+1}^{M}=Y_{n}^{M}+u(Y^M_n)\Delta t+\lambda h(Y^M_n)\Delta t+\dfrac{\Delta t v(Y_{n}^{M})}{1+ \Delta t \Vert v(Y_{n}^{M}) \Vert }+g(Y_{n}^{M}) \Delta W_n +h(Y_{n}^{M})\Delta \overline{N}_n. \label{ch5semitam}\end{aligned}$$ We define the continuous-time interpolation of the discrete numerical approximation of $\eqref{ch5semitam}$ by the family of processes $\left(\overline{Y}^M\right)_M $, $ \overline{Y}^M : [0,T]\times\Omega \longrightarrow \mathbb{R}^d $ such that : $$\begin{aligned} \overline{Y}^M_t &=&Y^M_n+u(Y^M_n)(t-n\Delta t)+\lambda h(Y^M_n)(t-n\Delta t)+\dfrac{(t-n\Delta t)v(Y^M_n)}{1+\Delta t||v(Y^M_n)||}\nonumber\\ &+&g(Y^M_n)(W_t-W_{n\Delta t})+ h(Y^M_n)(\overline{N}_t-\overline{N}_{n\Delta t}), \label{ch5continoussolu}\end{aligned}$$ for all $M\in\mathbb{N}$, all $n\in\{0,\cdots, M-1\}$, and all $t\in[n\Delta t, (n+1)\Delta t)$. 
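To make the difference between the tamed scheme and the semi-tamed scheme concrete, the following sketch spells out the two one-step updates. The NumPy interface, the shapes assumed for $g$ and $h$ and the function names are illustrative choices, not part of the schemes themselves.

``` python
import numpy as np

# Illustrative one-step maps; u, v, f, g, h and the increments are placeholders.
def tamed_step(x, f, g, h, dt, dW, dN):
    """X_{n+1} from X_n: the whole drift f is tamed."""
    fx = f(x)
    return x + dt * fx / (1.0 + dt * np.linalg.norm(fx)) + g(x) @ dW + h(x) * dN

def semi_tamed_step(y, u, v, g, h, dt, dW, dN):
    """Y_{n+1} from Y_n: only the non-Lipschitz part v of the drift is tamed."""
    vy = v(y)
    return (y + dt * u(y)
              + dt * vy / (1.0 + dt * np.linalg.norm(vy))
              + g(y) @ dW + h(y) * dN)
```

Here $x, y\in\mathbb{R}^d$ are NumPy vectors, `g(x)` returns a $d\times m$ matrix applied to the Brownian increment `dW`, and `dN` is the scalar Poisson increment; replacing `dN` by the compensated increment and `u` by $u_\lambda=u+\lambda h$ gives the equivalent form used in the moment-bound analysis below.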
\[ch5assumption1\] We assume that : $(A.1)$ $f,g,h\in C^1$. $(A.2)$ For all $p>0$, there exists a finite $M_p>0$ such that $\mathbb{E}||X_0||^p\leq M_p$. $(A.3)$ $g$, $h$ and $u$ satisfy the global Lipschitz condition : $$\begin{aligned} ||g(x)-g(y)||\vee ||h(x)-h(y)||\vee ||u(x)-u(y)|| \leq C||x-y||, \hspace{0.5cm} \forall\; x,y\in \mathbb{R}^d.\end{aligned}$$ $(A.4)$ $f$ satisfies the one-sided Lipschitz condition : $$\begin{aligned} \langle x-y, f(x)-f(y)\rangle\leq C||x-y||^2,\hspace{0.5cm} \forall\;x,y\in \mathbb{R}^d,\end{aligned}$$ $(A.5)$ $v$ satisfies the superlinear growth condition : $$\begin{aligned} ||v(x)-v(y)||\leq C(K+ ||x||^c+||y||^c)||x-y||, \hspace{0.5cm} \forall\;x,y\in \mathbb{R}^d,\end{aligned}$$ where $K$, $C$ and $c$ are strictly positive constants. Under conditions $(A.1)$, $(A.2)$ and $(A.3)$ of Assumptions \[ch5assumption1\] it is proved in [@Desmond2 Lemma 1] that has a unique solution with all moments bounded. Let's define $u_{\lambda}(x)=u(x)+\lambda h(x)$. From Assumptions \[ch5assumption1\], it is straightforward to prove that $u_{\lambda}$ satisfies the global Lipschitz condition with constant $C_{\lambda}=(1+\lambda)C$ and that $v$ satisfies the one-sided Lipschitz condition. We denote by $C_p$ a generic constant whose value may change from one line to another. We will sometimes use $Y_n^M$ instead of $Y_n^M(\omega)$ to simplify notations. The main result of this section is formulated in the following theorem, which is based on [@Martin1 Lemma 3.9 pp 16]. Here, we include the jump part. \[ch5theorem1\] Let $Y_n^M : \Omega\longrightarrow \mathbb{R}^d$ be defined by for all $M\in\mathbb{N}$ and all $ n\in\{0,\cdots, M\}$. The following inequality holds : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\mathbb{E}\left[||Y_n^M||^p\right]<+\infty, \end{aligned}$$ for all $p\in[1,\infty)$. In order to prove Theorem \[ch5theorem1\] we introduce the following notation, which facilitates the computations. \[ch5notation1\] $$\begin{aligned} \alpha^M_k := \mathrm{1}_{\{||Y^M_k||\geq 1\}}\left\langle\dfrac{Y^M_k+u_{\lambda}(Y^M_k)\Delta t}{||Y^M_k||}, \dfrac{g(Y^M_k)}{||Y^M_k||}\Delta W^M_k\right\rangle,\\\\ \beta^M_k := \mathrm{1}_{\{||Y^M_k||\geq 1\}}\left\langle\dfrac{Y^M_k+u_{\lambda}(Y^M_k)\Delta t}{||Y^M_k||}, \dfrac{h(Y^M_k)}{||Y^M_k||}\Delta\overline{N}^M_k\right\rangle, \end{aligned}$$ $$\begin{aligned} \beta=\left(1+K+3C_{\lambda}+KTC_{\lambda}+TC_{\lambda}+||u_{\lambda}(0)||+||g(0)||+||h(0)||\right)^4,\\\\ D^M_n := (\beta+||\varepsilon||)\exp\left(4\beta+\sup_{u\in\{0,\cdots,n\}}\sum_{k=u}^{n-1}\left[2\beta||\Delta W^M_k||^2+2\beta||\Delta\overline{N}^M_k||+\alpha^M_k+\beta^M_k\right]\right),\\ \Omega^M_n :=\{\omega\in \Omega : \sup_{k\in\{0,1,\cdots, n-1\}}D^M_k(\omega)\leq M^{1/2c}, \sup_{k\in\{0,1,\cdots,n-1\}}||\Delta W^M_k(\omega)||\leq 1,\\ \sup_{k\in\{0,1,\cdots,n-1\}}||\Delta \overline{N}^M_k(\omega)||\leq 1\}, \end{aligned}$$ for all $M\in\mathbb{N}$ and $k\in\{0,\cdots, M\}$. Following closely [@Xia2 Lemma 2.1], we have the following main lemma. \[ch5lemma2\] The following inequality holds for all $M\in \mathbb{N}$ and all $n\in\{0,1,\cdots, M\}$ $$\begin{aligned} \mathbf{1}_{\Omega^M_n}||Y^M_n||\leq D^M_n. 
\label{ch5Denobound} \end{aligned}$$ Using the inequality $\dfrac{\Delta t}{1+\Delta t||u_{\lambda}(x)||}\leq T$, the global Lipschitz condition of $g$ and $h$ and the polynomial growth condition of $v$ we have the following estimation on $\Omega^M_{n+1}\cap\{\omega\in \Omega : ||Y^M_n(\omega)||\leq 1\}$, for all $n\in\{0,1,\cdots, M-1\}$ $$\begin{aligned} ||Y^M_{n+1}||&\leq ||Y^M_n||+||u_{\lambda}(Y^M_n)||\Delta t+\dfrac{\Delta t||v(Y^M_n)||}{1+\Delta t||v(Y^M_n)||}+||g(Y^M_n)||||\Delta W^M_n||+||h(Y^M_n)||||\Delta\overline{N} ^M_n|| \nonumber \\ &\leq ||Y^M_n||+T||u_{\lambda}(Y^M_n)-u_{\lambda}(0)||+T||u_{\lambda}(0)||+T||v(Y^M_n)-v(0)||+T||v(0)||\nonumber\\ &+ ||g(Y^M_n)-g(0)||+||g(0)||+||h(Y^M_n)-h(0)||+||h(0)||\nonumber\\ &\leq 1+C||Y^M_n||+T||u_{\lambda}(0)||+TC(K+||Y^M_n||^c+||0||^c)||Y^M_n-0||+T||v(0)||\nonumber\\ &+C||Y^M_n||+C||Y^M_n||+||g(0)||+||h(0)||\nonumber\\ &\leq 1+KTC +TC+3C++T||u_{\lambda}(0)||+T||v(0)||+||g(0)||+||h(0)||\leq \beta. \label{ch5normY} \end{aligned}$$ Futhermore, from the numerical approximation , we have $$\begin{aligned} \label{ch5partnorm2} ||Y^M_{n+1}||^2&=&||Y^M_n||^2+||u_{\lambda}(Y^M_n)||^2\Delta t^2+\dfrac{\Delta t^2||v(Y^M_n)||^2}{(1+\Delta t||f_{\lambda}(Y^M_n)||)^2}+||g(Y^M_n)\Delta W^M_n||^2\nonumber\\ &+& ||h(Y^M_n)\Delta\overline{N}^M_n||^2+ 2\Delta t\langle Y^M_n, u_{\lambda}(Y^M_n)\rangle+\dfrac{2\Delta t\langle Y^M_n, v(Y^M_n)\rangle}{1+\Delta t||f_{\lambda}(Y^M_n)||}\nonumber\\ &+&2\langle Y^M_n,g(Y^M_n)\Delta W^M_n\rangle+2\langle Y^M_n,h(Y^M_n)\Delta\overline{N}^M_n\rangle+\dfrac{2\Delta t^2\langle u_{\lambda}(Y^M_n), v(Y^M_n)\rangle}{1+\Delta t||v(Y^M_n)||}\nonumber\\ &+&2\Delta t\langle u_{\lambda}(Y^M_n), g(Y^M_n)\Delta W_n\rangle+2\Delta t\langle u_{\lambda}(Y^M_n), h(Y^M_n)\Delta\overline{N}^M_n\rangle\nonumber\\ &+&\dfrac{2\langle \Delta tv(Y^M_n),g(Y^M_n)\Delta W^M_n\rangle}{1+\Delta t||v(Y^M_n)||}+\dfrac{2\langle \Delta tv(Y^M_n),h(Y^M_n)\Delta\overline{N}^M_n\rangle}{1+\Delta t||v(Y^M_n)||}\nonumber\\ &+&2\langle g(Y^M_n)\Delta W^M_n,h(Y^M_n)\Delta\overline{N}^M_n\rangle. \end{aligned}$$ Using the estimations $a\leq |a|$ and $ \dfrac{1}{1+\Delta t||v(Y^M_n)||}\leq 1$, we obtain the following inequality from equation : $$\begin{aligned} ||Y^M_{n+1}||^2 &\leq& ||Y^M_n||^2+||u_{\lambda}(Y^M_n)||\Delta t^2+\Delta t^2||v(Y^M_n)||^2+||g(Y^M_n)||^2||\Delta W^M_n||^2\nonumber\\ &+&||h(Y^M_n)||^2||\Delta\overline{N}^M_n||^2+2\Delta t\langle Y^M_n, u_{\lambda}(Y^M_n)\rangle+2\Delta t|\langle Y^M_n, v(Y^M_n)\rangle|\nonumber\\ &+&2\langle Y^M_n, g(Y^M_n)\Delta W^M_n\rangle+2\langle Y^M_n, h(Y^M_n)\Delta\overline{N}^M_n\rangle+2\Delta t^2|\langle u_{\lambda}(Y^M_n), v(Y^M_n)\rangle|\nonumber\\ &+&2\langle \Delta tu_{\lambda}(Y^M_n), g(Y^M_n)\Delta W_n^M\rangle+2\langle \Delta tu_{\lambda}(Y^M_n), h(Y^M_n)\Delta\overline{N}^M_n\rangle\nonumber\\ &+&2\Delta t|\langle v(Y^M_n), g(Y^M_n)\Delta W^M_n\rangle|+2\Delta t|\langle v(Y^M_n), h(Y^M_n)\Delta\overline{N}^M_n\rangle|\nonumber\\ &+&2\langle g(Y^M_n)\Delta W^M_n, h(Y^M_n)\Delta\overline{N}^M_n\rangle. 
\label{ch5ine14} \end{aligned}$$ Using the estimation $2ab\leq a^2+b^2$, the inequality becomes : $$\begin{aligned} ||Y^M_{n+1}||^2 &\leq & ||Y^M_n||^2+||u_{\lambda}(Y^M_n)||^2\Delta t^2+||v(Y^M_n)||^2\Delta t^2+||g(Y^M_n)||^2||\Delta W^M_n||^2\nonumber\\ &+&||h(Y^M_n)||^2||\Delta\overline{N}^M_n||^2+2\Delta t\langle Y^M_n, u_{\lambda}(Y^M_n)\rangle+2\Delta t|\langle Y^M_n, v(Y^M_n)\rangle|\nonumber\\ &+&2\langle Y^M_n+\Delta tu_{\lambda}(Y^M_n), g(Y^M_n)\Delta W^M_n\rangle+2\rangle Y^M_n+\Delta tu_{\lambda}(Y^M_n), h(Y^M_n)\Delta\overline{N}^M_n\rangle\nonumber\\ &+&||u_{\lambda}(Y^M_n)||^2\Delta t^2+||u_{\lambda}(Y^M_n)||^2\Delta t^2+||v(Y^M_n)||^2\Delta t^2+||h(Y^M_n)||^2||\Delta\overline{N}^M_n||^2\nonumber\\ &+&||v(Y^M_n)||^2\Delta t^2+||g(Y^M_n)||^2||\Delta W^M_n||^2+||v(Y^M_n)||^2\Delta t^2+||h(Y^M_n)||^2||\Delta\overline{N}^M_n||^2\nonumber\\ &+&||g(Y^M_n)||^2||\Delta W^M_n||^2+||h(Y^M_n)||^2||\Delta\overline{N}^M_n||^2 \label{ch5ine15} \end{aligned}$$ Putting similars terms of inequality together, we obtain : $$\begin{aligned} ||Y^M_{n+1}||^2 &\leq & ||Y^M_n||^2+3||u_{\lambda}(Y^M_n)||^2\Delta t^2+4||v(Y^M_n)||^2\Delta t^2+3||g(Y^M_n)||^2||\Delta W^M_n||^2\nonumber\\ &+&4||h(Y^M_n)||^2||\Delta\overline{N}^M_n||^2+2\Delta t\langle Y^M_n, u_{\lambda}(Y^M_n)\rangle+2\Delta t|\langle Y^M_n, v(Y^M_n)\rangle|\nonumber\\ &+&2\langle Y^M_n+\Delta tu_{\lambda}(Y^M_n), g(Y^M_n)\Delta W^M_n\rangle\nonumber\\ &+&2\langle Y^M_n+\Delta tu_{\lambda}(Y^M_n), h(Y^M_n)\Delta\overline{N}^M_n\rangle, \label{ch5ine16} \end{aligned}$$ on $\Omega$, for all $M\in\mathbb{N}$ and all $n\in\{0,1,\cdots, M-1\}$. In addition, for all $x\in\mathbb{R}^d$ such that $||x||\geq 1$, the global Lipschitz condition satisfied by $g$, $h$ and $u_{\lambda}$ leads to : $$\begin{aligned} ||g(x)||^2&\leq& (||g(x)-g(0)||+||g(0)||)^2\nonumber\\ &\leq & (C||x||+||g(0)||)^2\nonumber\\ &\leq & (C+||g(0)||)^2||x||^2\nonumber\\ &\leq &\beta||x||^2 \label{ch5normdeg2} \end{aligned}$$ Along the same lines as above, for all $x\in\mathbb{R}^d$ such that $||x||\geq 1$, we have : $$\begin{aligned} ||h(x)||^2\leq \beta||x||^2 \hspace{0.3cm} \text{and} \hspace{0.3cm} ||u_{\lambda}(x)||\leq \beta ||x||. \label{ch5normdeh2} \end{aligned}$$ Also, for all $x\in\mathbb{R}^d$ such that $||x||\geq 1$, the one-sided Lipschitz condition satisfied by $v$ leads to : $$\begin{aligned} \langle x,v(x)\rangle&=&\langle x,v(x)-v(0)+v(0)\rangle=\langle x-0,v(x)-v(0)\rangle +\langle x,v(0)\rangle\nonumber\\ &\leq & C||x||^2+||x||||v(0)||\nonumber\\ &\leq &(C+||v(0)||)||x||^2\nonumber\\ &\leq &\sqrt{\beta}||x||^2. \label{ch5crochetv} \end{aligned}$$ Along the same lines as above, $\forall x\in\mathbb{R}^d$ such that $||x||\geq 1$, we have $$\begin{aligned} \langle x, u(x)\rangle\leq \sqrt{\beta}||x||^2. \label{ch5crochetu} \end{aligned}$$ Futhermore, using the polynomial growth condition satisfied by $v$, the following inequality holds for all $x\in\mathbb{R}^d$ with $1\leq ||x||\leq M^{1/2c}$ and for all $M\in\mathbb{N}$ $$\begin{aligned} ||v(x)||^2&\leq&(||v(x)-v(0)||+||v(0)||)^2\nonumber\\ &\leq & (C(K+||x||^c)||x||+||v(0)||)^2\nonumber\\ &\leq & (C(K+1)||x||^{c+1}+||v(0)||)^2\nonumber\\ &\leq & (KC+C+||v(0)||)^2||x||^{2(c+1)}\nonumber\\ &\leq & M\sqrt{\beta}||x||^2. \label{ch5normv2} \end{aligned}$$ Using the global-Lipschitz condition of $u_{\lambda}$ leads to $$\begin{aligned} ||u_{\lambda}(x)||^2\leq \sqrt{\beta}||x||^2. 
\label{ch5normu2} \end{aligned}$$ Now combining inequalities , , , ,, and we obtain $$\begin{aligned} ||Y^M_{n+1}||^2 &\leq& ||Y^M_n||^2+\dfrac{3T^2\sqrt{\beta}}{M}||Y^M_n||^2+\dfrac{4T^2\sqrt{\beta}}{M}||Y^M_n||^2+3\beta||Y^M_n||^2||\Delta W^M_n||^2\nonumber\\ &+&+4\beta||Y^M_n||^2||\Delta\overline{N}^M_n||^2+\dfrac{2T\sqrt{\beta}}{M}||Y^M_n||^2+\dfrac{2T\sqrt{\beta}}{M}||Y^M_n||^2\nonumber\\ &+&2\langle Y^M_n+\Delta tu_{\lambda}(Y^M_n), g(Y^M_n)\Delta W^M_n\rangle+2\langle Y^M_n+\Delta tu_{\lambda}(Y^M_n), h(Y^M_n)\Delta\overline{N}^M_n\rangle\\ &\leq &||Y^M_n||^2+\left(\dfrac{8T^2\sqrt{\beta}}{M}+\dfrac{4T\sqrt{\beta}}{M}\right)||Y^M_n||^2+4\beta||Y^M_n||^2||\Delta W^M_n||^2+4\beta||Y^M_n||^2||\Delta\overline{N}^M_n||^2\\ &+&2\langle Y^M_n+\Delta tu_{\lambda}(Y^M_n), g(Y^M_n)\Delta W^M_n\rangle+2\langle Y^M_n+\Delta tu_{\lambda}(Y^M_n), h(Y^M_n)\Delta\overline{N}^M_n\rangle. \end{aligned}$$ Using the inequality $8T^2+4T\leq 8\sqrt{\beta}$, it follows that : $$\begin{aligned} ||Y^M_{n+1}||^2&\leq& ||Y^M_n||^2+\dfrac{8\beta}{M}||Y^M_n||^2+4\beta||Y^M_n||^2||\Delta W^M_n||^2+4\beta||Y^M_n||^2||\Delta\overline{N}^M_n||^2\nonumber\\ &+&2\langle Y^M_n+\Delta tu_{\lambda}(Y^M_n),g(Y^M_n)\Delta W^M_n\rangle+2\langle Y^M_n+\Delta tu_{\lambda}(Y^M_n),h(Y^M_n)\Delta\overline{N}^M_n\rangle\nonumber\\ &=&||Y^M_n||^2 \left(1+\dfrac{8\beta}{M}+4\beta||\Delta W^M_n||^2+4\beta||\Delta\overline{N}^M_n||^2 \right.\nonumber\\ &+&\left. \left\langle\dfrac{Y^M_n+\Delta tu_{\lambda}(Y^M_n)}{||Y^M_n||}, \dfrac{g(Y^M_n)}{||Y^M_n||}\Delta W^M_n\right\rangle+2\left\langle\dfrac{Y^M_n+\Delta tu_{\lambda}(Y^M_n)}{||Y^M_n||}, \dfrac{h(Y^M_n)}{||Y^M_n||}\Delta\overline{N}^M_n\right\rangle\right)\nonumber\\ &=&||Y^M_n||^2\left(1+\dfrac{8\beta}{M}+4\beta||\Delta W^M_n||^2+4\beta||\Delta\overline{N}^M_n||^2+2\alpha^M_n+2\beta^M_n\right). \label{ch5expY1} \end{aligned}$$ Using Lemma \[ch4lemma1\] for $a=\dfrac{8\beta}{M}+4\beta||\Delta W^M_n||^2+2\alpha^M_n+2\beta^M_n$ and $b=2\sqrt{\beta}||\Delta\overline{N}^M_n|| $ it follows from that : $$\begin{aligned} ||Y^M_{n+1}||^2\leq ||Y^M_n||^2\exp\left(\dfrac{8\beta}{M}+4\beta||\Delta W^M_n||^2+4\beta||\Delta\overline{N}^M_n||+2\alpha^M_n+2\beta^M_n\right), \label{ch5expY2} \end{aligned}$$ on $\{w\in\Omega : 1\leq ||Y^M_n(\omega)||\leq M^{1/2c}\}$, for all $M\in\mathbb{N}$ and $n\in\{0,\cdots,M-1\}.$ Now combining and and using mathematical induction as used in the proof of Lemma \[ch4lemma2\] complete the proof of Lemma \[ch5lemma2\]. The following lemma and its proof are similars to [@Martin1 Lemma 3.3 pp 15] with only different value of the coefficient $\beta$. \[ch5lemma4\] The following inequality holds : $$\begin{aligned} \sup_{M\in\mathbb{N}, M\geq 4\beta pT}\mathbb{E}\left[\exp\left(\beta p\sum_{k=0}^{M-1}||\Delta W^M_k||^2\right)\right]<\infty. \end{aligned}$$ The following lemma is based on [@Martin2 Lemma 5.7]. \[ch5lemma6\] The following inequality holds $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\mathbf{1}_{\{||x||\geq 1\}}\left<\dfrac{x+u(x)T/M}{||x||},\dfrac{g(x)}{||x||}\Delta W^M_k\right>\right)\right]\leq \exp\left[\dfrac{p^2T(1+TC+T||u(0)||)^2(C+||g(0)||)^2}{M}\right]. \end{aligned}$$ Let $a^{\top}$ stand for the transposed of a vector $a$, let $Y$ be the $m$ column vector defined by : $Y=\sqrt{\dfrac{T}{M}}(1,\cdots,1)$ and let $\mathcal{N}(0,1)$ be a $1$-dimensional standard normal random variable. 
Then we have $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\left<\dfrac{x+u(x)T/M}{||x||},\dfrac{g(x)}{||x||}\Delta W^M_k\right>\right)\right]= \mathbb{E}\left[\exp\left(pz\left<\dfrac{x+u(x)T/M}{||x||},\dfrac{g(x)}{||x||}Y\mathcal{N}(0,1)\right>\right)\right]. \end{aligned}$$ Using Lemma \[ch4lemma5\] it follows that : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\left<\dfrac{x+u(x)T/M}{||x||},\dfrac{g(x)}{||x||}\Delta W^M_k\right>\right)\right]&\leq&\exp\left[p^2z^2\left\|\dfrac{g(x)^{\top}(x+u(x)T/M)Y||}{||x||^2}\right\|^2\right]\\ &=&\exp\left[\dfrac{p^2T}{M}\dfrac{||g(x)^{\top}(x+u(x)T/M)||^2}{||x||^4}\right]\nonumber\\ &\leq & \exp\left[\dfrac{p^2T}{M}\dfrac{||g(x)||^2||x+u(x)T/M||^2}{||x||^4}\right]. \end{aligned}$$ From the global Lipschitz condition, for all $x\in\mathbb{R}^d$ such that $||x||\geq1$ we have $$\begin{aligned} ||g(x)||^2\leq (||g(x)-u(0)||+||g(0)||)^2\leq (C||x||^2+||g(0)||)^2\leq (C+||g(0)||)^2||x||^2 \end{aligned}$$ $$\begin{aligned} ||x+u(x)T/M||\leq ||x||+T/M||u(x)||&\leq&||x||+T/M||u(x)-u(0)||+T/M||u(0)||\\ &\leq &||x||+TC||x||+T||u(0)||\\ &\leq &(1+TC+T||u(0)||)||x||. \end{aligned}$$ Therefore, it follows that : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz_{\{||x||\geq 1\}}\left<\dfrac{x+u(x)T/M}{||x||},\dfrac{g(x)}{||x||}\Delta W^M_k\right>\right)\right]\leq \exp\left[\dfrac{p^2T(1+TC+T||u(0)||)^2(C+||g(0)||)^2}{M}\right]. \end{aligned}$$ for all $M\in\mathbb{N}$, $k\in\{0,\cdots,M-1\}$, $p\in[1,\infty)$ and $z\in\{-1,1\}$. Following closely [@Xia2 Lemma 2.3 ] we have the following lemma. \[ch5lemma7\] Let $\alpha^M_n :\Omega\longrightarrow\mathbb{R}$ for $M\in\mathbb{N}$ and $n\in\{0,1,\cdots,M\}$ define as in notation . Then the following inequality holds : $$\begin{aligned} \sup_{z\in\{-1,1\}}\sup_{M\in\mathbb{N}}\left\|\sup_{n\in\{0,1,\cdots,M\}}\exp\left(z\sum_{k=0}^{n-1}\alpha^M_k\right)\right\|_{L^p(\Omega, \mathbb{R})}<\infty, \end{aligned}$$ for all $p\in[2,+\infty)$ The time discrete stochastic process $z\sum_{k=0}^{n-1}\alpha^M_k$, $n\in\{0,1,\cdots, M\}$ is an $(\mathcal{F}_{nT/M})_{n\in\{0,\cdots,M\}}-$ martingale for every $z\in\{-1,1\}$ and $M\in \mathbb{N}$. So $\exp\left(z\sum_{k=0}^{n-1}\alpha^M_k\right)$ is a positive $(\mathcal{F}_{nT/M})_{n\in\{0,\cdots,M\}}-$ submartingale for every $z\in\{-1,1\}$ and $M\in\mathbb{N}$ since $\exp$ is a convex function. Applying Doop’s maximal inequality leads to : $$\begin{aligned} \left\|\sup_{n\in\{0,\cdots,M\}}\exp\left(z\sum_{k=0}^{n-1}\alpha^M_k\right)\right\|_{L^p{(\Omega, \mathbb{R})}}&=&\left(\mathbb{E}\left|\sup_{n\in\{0,\cdots,M\}}\exp\left(pz\sum_{k=0}^{n-1}\alpha^M_k\right)\right|\right)^{1/p}\nonumber\\ &\leq &\left(\dfrac{p}{p-1}\right)\left(\mathbb{E}\left |\exp\left(pz\sum_{k=0}^{M-1}\alpha^M_k\right)\right|\right)^{1/p}\nonumber\\ &= &\dfrac{p}{p-1}\left\|\exp\left(z\sum_{k=0}^{M-1}\alpha^M_k\right)\right\|_{L^p{(\Omega,\mathbb{R})}}. \label{ch5alpha1} \end{aligned}$$ Using Lemma \[ch5lemma6\], it follows that : $$\begin{aligned} \mathbb{E}\left[\exp(pz\alpha^M_k)/\mathcal{F}_{kT/M}\right]\leq\exp\left(\dfrac{p^2T(C+||g(0)||)^2(1+TC+T||u(0)||)^2}{M}\right). \label{ch5alpha4} \end{aligned}$$ Using inequality , it follows that : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-1}\alpha^M_k\right)\right]&=&\mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-2} \alpha^M_k\right)\mathbb{E}[\exp(p\alpha^M_{M-1}/\mathcal{F}_{(M-1)T/M}\right]\\ &\leq &\mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-2}\alpha^M_k\right)\right]\exp\left(\dfrac{p^2T(C+||g(0)||)^2(1+TC+T||u(0)||)^2}{M}\right). 
\end{aligned}$$ Iterating the previous inequality $M$ times gives : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\sum_{k=0}^{M-1}\alpha^M_k\right)\right] \leq \exp(p^2T(C+||g(0)||)^2(1+TC+T||u(0)||)^2). \label{ch5alpha5} \end{aligned}$$ Now combining inequalities and leads to $$\begin{aligned} \sup_{z\in\{-1,1\}}\sup_{M\in\mathbb{N}}\left\|\sup_{n\in\{0,\cdots,M\}}\exp\left(z\sum_{k=0}^{n-1}\alpha^M_k\right)\right\|_{L^p(\Omega,\mathbb{R})} &\leq& 2\exp(p^2T(C+||g(0)||)^2(1+TC+T||u(0)||)^2) \\ &< &+\infty, \end{aligned}$$ for all $p\in[2,\infty)$. \[ch5lemma9\] The following inequality holds $ \mathbb{E}\left[\exp\left(pz\mathbf{1}_{\{||x||\geq 1\}}\left<\dfrac{x+u(x)T/M}{||x||},\dfrac{h(x)}{||x||}\Delta\overline{N}^M_n\right>\right)\right] $ $$\begin{aligned} \leq\left[\exp\left(\dfrac{\left[e^{p(C+||h(0)||)(1+TC+T||u(0)||)}+p(C+||h(0)||)(1+TC+T||u(0)||)\right]\lambda T}{M}\right)\right], \end{aligned}$$ for all $M\in\mathbb{N}$, all $p\in[1, +\infty)$ and all $n\in\{0,\cdots,M\}$, $z\in\{-1,1\}$. For $x\in\mathbb{R}^d$ such that $||x||\neq 0$, we have : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\left<\dfrac{x+u(x)T/M}{||x||},\dfrac{h(x)}{||x||}\Delta\overline{N}^M_n\right>\right)\right] &\leq& \mathbb{E}\left[\exp\left(pz\dfrac{||x+u(x)T/M||||h(x)||}{||x||^2}\Delta\overline{N}^M_n\right)\right]. \end{aligned}$$ Using the global Lipschitz condition on $h$, for all $x\in\mathbb{R}^d$ such that $||x||\geq 1$, we have : $$\begin{aligned} ||h(x)||\leq ||h(x)-h(0)||+||h(0)||\leq (C+||h(0)||)||x||. \label{ch5normeh}\\ ||x+u(x)T/M||\leq (1+TC+T||u(0)||)||x||\label{ch5normeh1} \end{aligned}$$ So using inequalities and , it follows that : $$\begin{aligned} \mathbb{E}\left[\exp\left(pz\mathbf{1}_{\{||x||\geq 1\}}\left<\dfrac{x+u(x)T/M}{||x||},\dfrac{h(x)}{||x||}\Delta\overline{N}^M_n\right>\right)\right] \leq\mathbb{E}\left( \exp[pz(C+||h(0)||)(1+TC+T||u(0)||)\Delta\overline{N}^M_n \right). \end{aligned}$$ Using Lemma \[ch4lemma8\], it follows that : $ \left[\exp\left(pz\mathbf{1}_{\{||x||\geq 1\}}\left<\dfrac{x+u(x)T/M}{||x||},\dfrac{h(x)}{||x||}\Delta\overline{N}^M_n\right>\right)\right] $ $$\begin{aligned} &\leq &\mathbb{E}[\exp(pz(C+||h(0)||)(1+TC+T||u(0)||)\Delta\overline{N}^M_n)]\\ &\leq &\left[\exp\left(\dfrac{\left[e^{p(C+||h(0)||)(1+TC+T||u(0)||)}+p[(C+||h(0)||)(1+TC+T||u(0)||)-1\right]\lambda T}{M}\right)\right]\\ &\leq &\left[\exp\left(\dfrac{\left[e^{p(C+||h(0)||)(1+TC+T||u(0)||)}+p[(C+||h(0)||)(1+TC+T||u(0)||)\right]\lambda T}{M}\right)\right]. \end{aligned}$$ The following lemma is similar to [@Xia2 Lemma 2.3]. \[ch5lemma10\] Let $\beta^M_n :\Omega \longrightarrow \mathbb{R}$ defined in Notation \[ch5notation1\] for all $M\in\mathbb{N}$ and all $n\in\{0,\cdots,M\}$ then we have the following inequality $$\begin{aligned} \sup_{z\in\{-1,1\}}\sup_{M\in\mathbb{N}}\left\|\sup_{n\in\{0,\cdots, M\}}\exp\left(z\sum_{K=0}^{n-1}\beta^M_k\right)\right\|_{L^p(\Omega, \mathbb{R})}<+\infty. \end{aligned}$$ Following the proof of [@Martin1 Lemma 3.4 ], the result is straightforward using lemmas \[ch5lemma9\] and \[ch5lemma6\]. \[ch5lemma11\] The following inequality holds $$\begin{aligned} \sup_{M\in\mathbb{N}}\mathbb{E}\left[\exp\left(p\beta\sum_{k=0}^{M-1}||\Delta\overline{N}^M_k||\right)\right]<+\infty,\end{aligned}$$ for all $p\in[1, +\infty)$. The proof is similar to the proof of Lemma \[ch4lemma11\] with only different value of $\beta$. The following lemma is based on Lemma \[ch5lemma10\]. \[ch5lemma12\] \[Uniformly bounded moments of the dominating stochastic processes\]. 
Let $M\in\mathbb{N}$ and $D_n^M : \Omega \longrightarrow [0,\infty)$ for $n\in\{0,1,\cdots, M\}$ be defined as in Notation . Then we have : $$\begin{aligned} \sup_{M\in\mathbb{N}, M\geq8\lambda pT}\left\|\sup_{n\in\{0,1,\cdots, M\}}D_n^M\right\|_{L^p(\Omega, \mathbb{R})}<\infty, \end{aligned}$$ for all $p\in[1,\infty)$. The following lemma is an extension of [@Martin1 Lemma 3.6, pp 16]. Here, we include the jump part. \[ch5lemma13\] Let $M\in\mathbb{N}$ and $\Omega_M^M\in\mathcal{F}$. The following inequality holds : $$\begin{aligned} \sup_{M\in\mathbb{N}}\left(M^p\mathbb{P}[(\Omega_M^M)^c]\right)<+\infty, \end{aligned}$$ for all $p\in[1,\infty)$. **\[Theorem \[ch5theorem1\]\]**. Let’s first represent the numerical approximation $Y^M_n$ in the following appropriate form : $$\begin{aligned} Y^M_n&=&Y_{n-1}^M +u_{\lambda}(Y^M_{n-1})T/M+\dfrac{\Delta tv(Y^M_{n-1})}{1+\Delta t||v(Y^M_{n-1})||}+g(Y^M_{n-1})\Delta W^M_{n-1}+h(Y^M_{n-1})\Delta\overline{N}^M_{n-1}\\ &=&X_0+\sum_{k=0}^{n-1}u(Y^M_{k})T/M+\sum_{k=0}^{n-1}\dfrac{\Delta t v(Y_k^M)}{1+\Delta t||v(Y^M_k)||}+\sum_{k=0}^{n-1}g(Y^M_k)\Delta W^M_k+\sum_{k=0}^{n-1}h(Y^M_k)\Delta\overline{N}^M_k\\ &=& X_0+u(0)nT/M+ \sum_{k=0}^{n-1}g(0)\Delta W^M_k+\sum_{k=0}^{n-1}h(0)\Delta\overline{N}^M_k+\sum_{k=0}^{n-1}T/M(u(Y^M_k)-u(0))\\ &+&\sum_{k=0}^{n-1}\dfrac{\Delta tv(Y^M_{k})}{1+\Delta t||v(Y^M_{k})||}+\sum_{k=0}^{n-1}(g(Y^M_k)-g(0))\Delta W^M_k+\sum_{k=0}^{n-1}(h(Y^M_k)-h(0))\Delta\overline{N}^M_k, \end{aligned}$$ for all $M\in\mathbb{N} $ and all $n\in\{0,\cdots,M\}$. Using the inequality $$\begin{aligned} \left\|\dfrac{\Delta tv(Y^M_k)}{1+\Delta t||v(Y^M_k)||}\right\|_{L^p(\Omega, \mathbb{R}^d)} <1,\end{aligned}$$ it follows that : $$\begin{aligned} ||Y^M_n||_{L^p(\Omega, \mathbb{R}^d)} &\leq &||X_0||_{L^p(\Omega,\mathbb{R}^d)}+ ||u(0)||nT/M+\left\|\sum_{k=0}^{n-1}g(0)\Delta W^M_k\right\|_{L^p(\Omega, \mathbb{R}^d)}+M\\ &+&\left\|\sum_{k=0}^{n-1}h(0)\Delta\overline{N}^M_k\right\|_{L^p(\Omega,\mathbb{R}^d)}+\left\|\sum_{k=0}^{n-1}(g(Y^M_k)-g(0))\Delta W^M_k\right\|_{L^p(\Omega,\mathbb{R}^d)}\\ &+&\left\|\sum_{k=0}^{n-1}(h(Y^M_k)-h(0))\Delta\overline{N}^M_k\right\|_{L^p(\Omega,\mathbb{R}^d)}.\end{aligned}$$ Using Lemma \[ch4lemma16\] and Lemma \[ch4lemma19\], it follows that $$\begin{aligned} ||Y^M_n||_{L^p(\Omega,\mathbb{R}^d)} &\leq &||X_0||_{L^p(\Omega,\mathbb{R}^d)}+||u(0)||nT/M+C_p\left(\sum_{k=0}^{n-1}\sum_{i=1}^{m}||g_i(0)||^2\dfrac{T}{M}\right)^{1/2}+C_p\left(\sum_{k=0}^{n-1}||h(0)||^2\dfrac{T}{M}\right)^{1/2}\nonumber\\ &+&M+ C_p\left(\sum_{k=0}^{n-1}\sum_{i=1}^m||g_i(Y_k^M)-g_i(0)||^2_{L^p(\Omega, \mathbb{R}^d)}\dfrac{T}{M}\right)^{1/2}\nonumber\\ &+&C_p\left(\sum_{k=0}^{n-1}\lambda||h(Y_k^M)-h(0)||^2_{L^p(\Omega, \mathbb{R}^d)}\dfrac{T}{M}\right)^{1/2}\nonumber\\ &\leq&||X_0||_{L^p(\Omega, \mathbb{R}^d)}+T||u(0)||+C_p\left(\dfrac{nT}{M}\sum_{i=1}^m||g_i(0)||^2\right)^{1/2}+C_p\left(\dfrac{nT}{M}||h(0)||^2\right)^{1/2}\nonumber\\ &+&M+C_p\left(\sum_{k=0}^{n-1}\sum_{i=1}^m||g_i(Y^M_k)-g_i(0)||^2_{L^p(\Omega,\mathbb{R}^d)}\dfrac{T}{M}\right)^{1/2}\nonumber\\ &+&C_p\left(\sum_{k=0}^{n-1}||h(Y^M_k)-h(0)||^2_{L^p(\Omega,\mathbb{R}^d)}\dfrac{T}{M}\right)^{1/2}. \label{ch5MB1}\end{aligned}$$ From $||g_i(0)||^2\leq ||g(0)||^2$ and the global Lipschitz condition satisfied by $g$ and $h$, we obtain $||g_i(Y^M_k)-g_i(0)||_{L^p(\Omega,\mathbb{R}^d)}\leq C||Y^M_k||_{L^p(\Omega,\mathbb{R}^d)}$ and $||h(Y^M_k)-h(0)||_{L^p(\Omega,\mathbb{R}^d)}\leq C||Y^M_k||_{L^p(\Omega,\mathbb{R}^d)}$. 
So using , we have : $$\begin{aligned} ||Y^M_n||_{L^p(\Omega, \mathbb{R}^d)} &\leq & ||X_0||_{L^p(\Omega,\mathbb{R}^d)}+T||u(0)||+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+M\\ &+&C_p\left(\dfrac{Tm}{M}\sum_{k=0}^{n-1}||Y^M_k||^2_{L^p(\Omega,\mathbb{R}^d)}\right)^{1/2} +C_p\left(\dfrac{T}{M}\sum_{k=0}^{n-1}||Y^M_k||^2_{L^p(\Omega,\mathbb{R}^d)}\right)^{1/2}.\end{aligned}$$ Using the inequality $(a+b+c)^2\leq 3a^2+3b^2+3c^2$, it follows that : $$\begin{aligned} ||Y^M_n||^2_{L^p(\Omega,\mathbb{R}^d)}&\leq& 3\left(||X_0||_{L^p(\Omega,\mathbb{R}^d)}+T||u(0)||+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+M\right)^2\nonumber\\ &+&\dfrac{3T(C_p\sqrt{m}+C_p)^2}{M}\sum_{k=0}^{n-1}||Y^M_k||^2_{L^p(\Omega, \mathbb{R}^d)},\end{aligned}$$ Using the inequality $ \dfrac{3T(C_p\sqrt{m}+C_p)^2}{M}<3(C_p\sqrt{m}+C_p)^2 $, it follows that : $$\begin{aligned} ||Y^M_n||^2_{L^p(\Omega,\mathbb{R}^d)}&\leq& 3\left(||X_0||_{L^p(\Omega,\mathbb{R}^d)}+T||u(0)||+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+M\right)^2\nonumber\\ &+&3T(C_p\sqrt{m}+C_p)^2\sum_{k=0}^{n-1}||Y^M_k||^2_{L^p(\Omega, \mathbb{R}^d)}, \label{ch5MB2}\end{aligned}$$ for all $p\in[2,\infty)$. Applying the Gronwall inequality to leads to : $$\begin{aligned} ||Y^M_n||^2_{L^p(\Omega,\mathbb{R}^d)}\leq 3e^{3(C_p\sqrt{m}+C_p)^2}\left(||X_0||_{L^p(\Omega,\mathbb{R}^d)}+T||u(0)||+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+M\right)^2 \label{ch5MB3}\end{aligned}$$ Taking the square root and the supremum on both sides of the previous inequality leads to : $\sup\limits_{n\in\{0,\cdots, M\}}||Y^M_n||_{L^p(\Omega, \mathbb{R}^d)}$ $$\begin{aligned} \leq\sqrt{3}e^{3(C_p\sqrt{m}+C_p)^2}\left(||X_0||_{L^p(\Omega,\mathbb{R}^d)}+T||u(0)||+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+M\right). \label{ch5MB4}\end{aligned}$$ Unfortunately, is not enough to conclude the proof of the theorem due to the term $M$ on the right-hand side. Using the fact that $(\Omega_n^M)_n$ is a decreasing sequence and exploiting Hölder’s inequality, we obtain : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots,M\}}\left\|\mathbf{1}_{(\Omega_n^M)^c}Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}& \leq &\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots,M\}}\left\|\mathbf{1}_{(\Omega^M_M)^c}\right\|_{L^{2p}(\Omega,\mathbb{R}^d)}\left\|Y^M_n\right\|_{L^{2p}(\Omega,\mathbb{R}^d)}\nonumber\\ &\leq &\left(\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots,M\}}\left(M\left\|\mathbf{1}_{(\Omega^M_M)^c}\right\|_{L^{2p}(\Omega,\mathbb{R}^d)}\right)\right)\nonumber\\ &\times &\left(\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots,M\}}\left(M^{-1}||Y^M_n||_{L^{2p}(\Omega,\mathbb{R}^d)}\right)\right). 
\label{ch5MB5}\end{aligned}$$ From inequality we have $ \left(\sup\limits_{M\in\mathbb{N}}\sup\limits_{n\in\{0,\cdots,M\}}\left(M^{-1}||Y^M_n||_{L^p(\Omega,\mathbb{R}^d)}\right)\right)$ $$\begin{aligned} \leq \sqrt{3}e^{3(C_p\sqrt{m}+C_p)^2}\left(\dfrac{||X_0||_{L^p(\Omega, \mathbb{R}^d)}}{M}+\dfrac{T||u(0)||}{M}+\dfrac{C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||}{M}+1\right)\nonumber\\ \leq \sqrt{3}e^{3(C_p\sqrt{m}+C_p)^2}\left(||X_0||_{L^p(\Omega,\mathbb{R}^d)}+T||u(0)||+C_p\sqrt{Tm}||g(0)||+C_p\sqrt{T}||h(0)||+1\right)<+\infty, \label{ch5MB6}\end{aligned}$$ From the relation $$\begin{aligned} \left\|\mathbf{1}_{(\Omega^M_M)^c}\right\|_{L^{2p}(\Omega, \mathbb{R}^d)}= \mathbb{E}\left[\mathbf{1}_{(\Omega^M_M)^c}\right]^{1/2p}= \mathbb{P}\left[(\Omega^M_M)^c\right]^{1/2p},\end{aligned}$$ it follows using Lemma \[ch5lemma13\] that : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left(M\left\|\mathbf{1}_{(\Omega^M_M)^c}\right\|_{L^{2p}(\Omega,\mathbb{R}^d)}\right)=\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left(M^{2p}\mathbb{P}\left[(\Omega^M_M)^c\right]\right)^{1/2p}<+\infty. \label{ch5MB7}\end{aligned}$$ So plugging and in leads to : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|\mathbf{1}_{(\Omega^M_n)^c}Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}<+\infty. \label{ch5MB8}\end{aligned}$$ Furthermore, we have $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)} &\leq &\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|\mathbf{1}_{(\Omega^M_n)}Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}\nonumber\\ &+&\sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|\mathbf{1}_{(\Omega^M_n)^c}Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}. \label{ch5MB9}\end{aligned}$$ From , the second term of inequality is bounded, while using Lemma \[ch5lemma2\] and Lemma \[ch5lemma12\] we have : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|\mathbf{1}_{(\Omega^M_n)}Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}\leq \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|D_n^M\right\|_{L^p(\Omega,\mathbb{R}^d)}<+\infty. \label{ch5MB10}\end{aligned}$$ Finally plugging and in leads to : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,\cdots, M\}}\left\|Y^M_n\right\|_{L^p(\Omega,\mathbb{R}^d)}<+\infty.\end{aligned}$$ This completes the proof of Theorem \[ch5theorem1\]. Strong convergence of the semi-tamed Euler scheme ------------------------------------------------- \[ch5theorem2\] Under Assumptions \[ch5assumption1\], for all $p\in[1,+\infty)$ there exists a positive constant $C_p$ such that : $$\begin{aligned} \left(\mathbb{E}\left[\sup_{t\in[0,T]}\left\|X_t-\overline{Y}^M_t\right\|^p\right]\right)^{1/p}\leq C_p\Delta t^{1/2}, \label{ch5inetheo} \end{aligned}$$ for all $M\in \mathbb{N}$, where $X : [0,T]\times \Omega\longrightarrow \mathbb{R}^d$ is the exact solution of equation and $\overline{Y}^M_t$ is the time-continuous solution defined in . In order to prove Theorem \[ch5theorem2\], we need the following two lemmas. \[ch5lemma14\]\[Based on [@Martin1 Lemma 3.10, pp 16]\]. Let $Y_n^M$ be defined by for all $M\in\mathbb{N}$ and all $n\in\{0,1,\cdots, M\}$. Then the following inequality holds : $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{n\in\{0,1,\cdots, M\}}\left(\mathbb{E}\left[||u_{\lambda}(Y_n^M)||^p\right]\vee \mathbb{E}[||v(Y^M_n)||^p]\vee \mathbb{E}\left[||g(Y_n^M)||^p\right]\vee \mathbb{E}\left[||h(Y_n^M)||^p\right]\right)<+\infty,\end{aligned}$$ for all $p\in[1,\infty)$. 
The proof is similar to the proof [@Martin1 Lemma 3.10, pp 16]. For $s\in[0,T]$ let $\lfloor s\rfloor$ be the greatest grid point less than $s$. We have the following lemma. \[ch5lemma16\] The following inequalities holds for any stepsize $\Delta t$. $$\begin{aligned} \sup_{t\in[0,T]}\left\|\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}\leq C_p\Delta t^{1/2}, \end{aligned}$$ $$\begin{aligned} \sup_{M\in\mathbb{N}}\sup_{t\in[0,T]}\left\|\overline{Y}^M_t\right\|_{L^p(\Omega, \mathbb{R}^d)}<\infty, \end{aligned}$$ $$\begin{aligned} \sup_{t\in[0,T]}\left\|v(\overline{Y}^M_t)-v(\overline{Y}^M_{\lfloor t\rfloor})\right\|_{L^p(\Omega, \mathbb{R}^d)}\leq C_p\Delta t^{1/2}. \end{aligned}$$ - Using the time continous approximation , Lemma \[ch4lemma19\] and Lemma \[ch4lemma16\], it follows that : $ \sup\limits_{t\in[0,T]}||\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}||_{L^p(\Omega, \mathbb{R}^d)} $ $$\begin{aligned} &\leq &\dfrac{T}{M}\sup_{t\in[0,T]}\left\|u_{\lambda}(\overline{Y}^M_{\lfloor t\rfloor})\right\|_{L^p(\Omega, \mathbb{R}^d)}+\dfrac{T}{M}\left(\sup_{t\in[0,T]}\left\|\dfrac{v(\overline{Y}^M_{\lfloor t\rfloor})}{1+\Delta t||v(\overline{Y}^M_{\lfloor t\rfloor})||}\right\|_{L^p(\Omega, \mathbb{R}^d)}\right)\nonumber\\ &+& \sup_{t\in[0, T]}\left\|\int^t_{\lfloor t\rfloor} g(\overline{Y}^M_{\lfloor t\rfloor})dW_s\right\|_{L^p(\Omega, \mathbb{R}^d)}+\sup_{t\in[0,T]}\left\|\int^t_{\lfloor t\rfloor}h(\overline{Y}^M_{\lfloor t\rfloor})d\overline{N}_s\right\|_{L^p(\Omega, \mathbb{R}^d)}\nonumber\\ &\leq &\dfrac{T}{\sqrt{M}}\left(\sup_{n\in\{0,\cdots M\}}\|u_{\lambda}(Y^M_n)\|_{L^p(\Omega,\mathbb{R}^d)}\right)+\dfrac{T}{\sqrt{M}}\left(\sup_{n\in\{0,\cdots, M\}}||v(Y^M_n)||_{L^p(\Omega, \mathbb{R}^d)}\right)\nonumber\\ &+& C_p\sup_{t\in[0,T]}\left(\dfrac{T}{M}\sum_{i=1}^m\int^t_{\lfloor t\rfloor}||g_i(\overline{Y}^M_s)||^2_{L^p(\Omega, \mathbb{R}^d)}ds\right)^{1/2}+C_p\sup_{t\in[0,T]}\left(\dfrac{TC_p}{M}\int^t_{\lfloor t\rfloor}||h(\overline{Y}^m_s)||^2_{L^p(\Omega, \mathbb{R}^d)}ds\right)^{1/2}\nonumber\\ &\leq &\dfrac{T}{\sqrt{M}}\left(\sup_{n\in\{0,\cdots, M\}}||u_{\lambda}(Y^M_n)||_{L^p(\Omega, \mathbb{R}^d)}\right)+\dfrac{T}{\sqrt{M}}\left(\sup_{n\in\{0,\cdots, M\}}||v(Y^M_n)||_{L^p(\Omega, \mathbb{R}^d)}\right)\nonumber\\ &+& \dfrac{C_p\sqrt{Tm}}{\sqrt{M}}\left(\sup_{i\in\{1,\cdots, m\}}\sup_{n\in\{0,\cdots, M\}}||g_i(Y^M_n)||_{L^p(\Omega, \mathbb{R}^d)}\right)\nonumber\\ &+&\dfrac{C_p\sqrt{T}}{\sqrt{M}}\left(\sup_{n\in\{0, \cdots,M\}}||h(Y^M_n)||_{L^p(\Omega, \mathbb{R}^d)}\right) \label{Thcontinous} \end{aligned}$$ for all $p\in[1,+\infty)$ and all $M\in\mathbb{N}$. Using inequality and Lemma \[ch5lemma14\] it follows that : $$\begin{aligned} \left[\sup_{t\in[0,T]}||\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}||_{L^p(\Omega, \mathbb{R}^d)}\right]<C_p\Delta t^{1/2}, \label{Ch5bon1}\end{aligned}$$ for all $p\in[1, +\infty)$ and for all stepsize $\Delta t$. 
- Using inequality , inequality $||a||\leq ||a-b||+||b||$ for all $a,b\in\mathbb{R}^d$ and Theorem \[ch5theorem1\] it follows that $$\begin{aligned} \sup_{t\in[0,T]}||\overline{Y}^M_t||_{L^p(\Omega, \mathbb{R}^d)}&\leq&\left[\sup_{t\in[0,T]}\left\|\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}\right]+\sup_{t\in[0,T]}\left\|\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}\\ &\leq&C_p\Delta t^{1/2}+\sup_{t\in[0,T]}\left\|\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}\\ &<&C_pT^{1/2}+\sup_{t\in[0,T]}\left\|\overline{Y}^M_{\lfloor t\rfloor}\right\|_{L^p(\Omega, \mathbb{R}^d)}<\infty,\end{aligned}$$ for all $p\in[1,+\infty)$ and all $M\in\mathbb{N}$. - Further, using the polynomial growth condition : $$\begin{aligned} ||v(x)-v(y)||\leq C(K+||x||^c+||y||^c)||x-y||,\end{aligned}$$ for all $x, y\in\mathbb{R}^d$, it follows using Holder inequality that : $$\begin{aligned} \sup_{t\in[0,T]}||v(\overline{Y}^M_t)-v(\overline{Y}^M_{\lfloor t\rfloor})||_{L^p(\Omega, \mathbb{R}^d)} &\leq &C\left(K+2\sup_{t\in[0,T]}||\overline{Y}^M_t||^c_{L^{2pc}(\Omega, \mathbb{R}^d)}\right)\nonumber\\ &\times & \left(\sup_{t\in[0,T]}||\overline{Y}^M_t-\overline{Y}^M_{\lfloor t\rfloor}||_{L^{2p}(\Omega, \mathbb{R}^d)}\right) \label{ch4Thfcontinous}\end{aligned}$$ Using and the first part of Lemma \[ch5lemma16\], the following inequality holds for all $p\in[1,+\infty)$ $$\begin{aligned} \left[\sup_{t\in[0,T]}||v(\overline{Y}^M_t)-v(\overline{Y}^M_{\lfloor t\rfloor})||_{L^p(\Omega, \mathbb{R}^d)}\right]<C_p\Delta t^{1/2}, \label{ch4Thffinal}\end{aligned}$$ for all $p\in[1,\infty)$ and all $M\in\mathbb{N}$. Now we are ready to prove Theorem \[ch5theorem2\]. **\[ Theorem \[ch5theorem2\]\]** Let’s recall that for $z\in[0,T]$, $\lfloor z\rfloor$ is the greatest grid point less than $z$. The time continuous solution can be written into its integral form as bellow : $$\begin{aligned} \overline{Y}^M_s=\varepsilon+\int_0^su(\overline{Y}^M_{\lfloor z\rfloor})dz+\int_0^s\dfrac{v(\overline{Y}^M_{\lfloor z\rfloor})}{1+\Delta t||v(\overline{Y}^M_{\lfloor z\rfloor})||}dz+ \int g(\overline{Y}^M_{\lfloor z\rfloor})dW_z+\int h(\overline{Y}^M_{\lfloor z\rfloor})d\overline{N}_z, \label{ch5continoussol2} \end{aligned}$$ for all $z\in[0, T]$ almost sure (a.s) and all $M\in\mathbb{N}$. Let’s estimate first the quantity $||X_s-\overline{Y}^M_s||^2$ $$\begin{aligned} X_s-\overline{Y}_s&=&\int_0^s\left(u_{\lambda}(X_z)-u_{\lambda}(\overline{Y}^M_{\lfloor z \rfloor})\right)dz+\int_0^s\left(v(X_z)-\dfrac{v(\overline{Y}^M_{\lfloor z\rfloor})}{1+\Delta t||v(\overline{Y}^M_{\lfloor z\rfloor})||}\right)dz\\ &+&\int_0^s\left(g(X_z)-g(\overline{Y}^M_{\lfloor z\rfloor})\right)dW_z+\int_0^s\left(h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})\right)d\overline{N}_z. \end{aligned}$$ Using the relation $d\overline{N}_z=dN_z-\lambda dz$, it follows that $$\begin{aligned} X_s-\overline{Y}_s&=&\int_0^s\left[\left(v(X_z)-\dfrac{v(\overline{Y}^M_{\lfloor z\rfloor})}{1+\Delta t||v(\overline{Y}^M_{\lfloor z\rfloor})||}\right)+\left(u(X_z)-u(\overline{Y}^M_{\lfloor z\rfloor})\right)+\lambda\left(h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})\right)\right]dz\\ &+&\int_0^s\left(g(X_z)-g(\overline{Y}^M_{\lfloor z\rfloor})\right)dW_z+\int_0^s\left(h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})\right)dN_z. \end{aligned}$$ The function $ k :\mathbb{R}^n\longrightarrow \mathbb{R}$, $x \longmapsto ||x||^2$ is twice differentiable. 
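For the reader’s convenience, we recall the form of Itô’s formula for jump processes that is used in the next step; this display is only a reminder, stated under the usual integrability assumptions, and is not part of the original computation. If $dZ_z=a_z\,dz+\sum_{i=1}^m\sigma^i_z\,dW^i_z+\gamma_z\,dN_z$ and $f\in C^2(\mathbb{R}^d,\mathbb{R})$, then $$\begin{aligned} f(Z_t)=f(Z_0)+\int_0^t\left\langle \nabla f(Z_{z^-}), a_z\right\rangle dz+\sum_{i=1}^m\int_0^t\left\langle \nabla f(Z_{z^-}), \sigma^i_z\right\rangle dW^i_z+\dfrac{1}{2}\sum_{i=1}^m\int_0^t\left\langle \sigma^i_z, \nabla^2f(Z_{z^-})\sigma^i_z\right\rangle dz+\int_0^t\left[f(Z_{z^-}+\gamma_z)-f(Z_{z^-})\right]dN_z. \end{aligned}$$ Taking $f=k=||\cdot||^2$ (so that $\nabla k(x)=2x$ and $\nabla^2k(x)=2I_d$) and $Z=X-\overline{Y}^M$ produces exactly the terms displayed below.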
Applying Itô’s formula for jump process to the process $X_s-\overline{Y}^M_s$ with the function $k$ leads to : $$\begin{aligned} ||X_s-\overline{Y}^M_s||^2&=&2\int_0^s\left<X_z-\overline{Y}^M_z, v(X_z)-\dfrac{v(\overline{Y}^M_{\lfloor z\rfloor})}{1+\Delta t||v(\overline{Y}^M_{\lfloor z\rfloor})||}\right>dz+2\lambda\int_0^s\left\langle X_z-\overline{Y}^M_z, h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})\right\rangle dz\\ &+&2\int_0^s\left\langle X_z-\overline{Y}^M_z, u(X_z)-u(\overline{Y}^M_{\lfloor z\rfloor})\right\rangle dz +\sum_{i=1}^m\int_0^s||g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})||^2dz\\ &+&2\sum_{i=1}^m\int_0^s\left<X_z-\overline{Y}^M_z, g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})\right>dW^i_z\\ &+& \int_0^s\left[||X_z-\overline{Y}^M_z+h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^2-||X_z-\overline{Y}^M_z||^2\right]dN_z. \end{aligned}$$ Using again the relation $d\overline{N}_z=dN_z-\lambda dz$ leads to $$\begin{aligned} ||X_s-\overline{Y}^M_s||^2&=&2\int_0^s\left<X_z-\overline{Y}^M_z, v(X_z)-\dfrac{v(\overline{Y}^M_{\lfloor z\rfloor})}{1+\Delta t||v(\overline{Y}^M_{\lfloor z\rfloor})||}\right>dz+2\lambda\int_0^s\left\langle X_z-\overline{Y}^M_z, h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})\right\rangle dz\nonumber\\ &+&2\int_0^s\left\langle X_z-\overline{Y}^M_z, u(X_z)-u(\overline{Y}^M_{\lfloor z\rfloor})\right\rangle dz+\sum_{i=1}^m\int_0^s||g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})||^2dz \nonumber\\ &+&2\sum_{i=1}^m\int_0^s\left<X_z-\overline{Y}^M_z, g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})\right>dW^i_z\nonumber\\ &+& \int_0^s\left[||X_z-\overline{Y}^M_u+h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^2-||X_z-\overline{Y}^M_z||^2\right]d\overline{N}_z\nonumber\\ &+&\lambda\int_0^s\left[||X_z-\overline{Y}^M_z+h(X_z)-h(\overline{Y}^M_{\lfloor z \rfloor})||^2-||X_z-\overline{Y}^M_z||^2\right]dz\nonumber\\ &=&A_1+A'_1+A_2+A_3+A_4+A_5+A_6. \label{ch5Th1} \end{aligned}$$ In the next step, we give some useful estimations of $A_1, A'_1, A_2, A_3$ and $A_6$. $$\begin{aligned} A_1&=&2\int_0^s\left\langle X_z-\overline{Y}^M_z,v(X_z)-\dfrac{v(\overline{Y}_{\lfloor z\rfloor})}{1+\Delta t||v(\overline{Y}^M_{\lfloor z\rfloor})||}\right\rangle dz\\ &=&2\int_0^s<X_s-\overline{Y}^M_z,v(X_z)-v(\overline{Y}^M_z)>dz\\ &+&2\int_0^s\left\langle X_s-\overline{Y}^M_z,v(\overline{Y}^M_z)-\dfrac{v(\overline{Y}^M_{\lfloor z\rfloor})}{1+\Delta t||v(\overline{Y}^M_{\lfloor z\rfloor})||}\right\rangle dz\\ &=& A_{11}+A_{12}. \end{aligned}$$ Using the one-sided Lipschitz condition satiasfied by $v$ leads to $$\begin{aligned} A_{11} &=&2\int_0^s\langle X_s-\overline{Y}^M_z,v(X_z)-v(\overline{Y}^M_z)\rangle dz\nonumber\\ &\leq& 2C\int_0^s||X_z-\overline{Y}^M_z||^2dz. 
\label{ch5ThA11} \end{aligned}$$ Moreover, using the inequality $\langle a, b\rangle\leq |a||b|\leq \dfrac{a^2}{2}+\dfrac{b^2}{2}$ leads to : $$\begin{aligned} A_{12}&=& 2\int_0^s\left\langle X_z-\overline{Y}^M_z,v(\overline{Y}^M_z)-\dfrac{v(\overline{Y}^M_{\lfloor z\rfloor})}{1+\Delta t||v(\overline{Y}^M_{\lfloor z\rfloor})||}\right\rangle dz\nonumber\\ &=&2\int_0^s\left\langle X_z-\overline{Y}^M_z, v(\overline{Y}^M_z)-v(\overline{Y}^M_{\lfloor z\rfloor})\right\rangle dz\nonumber\\ &+&2\Delta t\int_0^s\left\langle X_z-\overline{Y}^M_z, \dfrac{v(\overline{Y}^M_{\lfloor z\rfloor})||v(\overline{Y}^M_{\lfloor z\rfloor})||}{1+\Delta t||v(\overline{Y}^M_{\lfloor z\rfloor})||}\right\rangle dz\nonumber\\ &\leq &\int_0^s||X_z-\overline{Y}^M_z||^2dz+\int_0^s||v(\overline{Y}^M_z)-v(\overline{Y}^M_{\lfloor z\rfloor})||^2dz\nonumber\\ &+&\int_0^s||X_z-\overline{Y}^M_z||^2dz+\dfrac{T^2}{M^2}\int_0^s||v(\overline{Y}^M_{\lfloor z\rfloor})||^4dz\nonumber\\ &\leq &2\int_0^s||X_z-\overline{Y}^M_z||^2dz+\int_0^s||v(\overline{Y}^M_z)-v(\overline{Y}_{\lfloor z\rfloor})||^2dz\nonumber\\ &+&\dfrac{T^2}{M^2}\int_0^s||v(\overline{Y}^M_{\lfloor z\rfloor})||^4dz \label{ch5ThA12} \end{aligned}$$ Combining and give the following estimation for $A_1$ : $$\begin{aligned} A_1 &\leq & (2C+2)\int_0^s||X_z-\overline{Y}^M_z||^2dz+\int_0^s||v(\overline{Y}^M_z)-v(\overline{Y}_{\lfloor z\rfloor})||^2dz\nonumber\\ &+&\dfrac{T^2}{M^2}\int_0^s||v(\overline{Y}^M_{\lfloor z\rfloor})||^4dz. \label{ch5ThA1} \end{aligned}$$ Using again the inequality $\langle a, b\rangle\leq |a||b|\leq \dfrac{a^2}{2}+\dfrac{b^2}{2}$ and the global-Lipschitz condition satisfied by $u$ leads to : $$\begin{aligned} A_2 &=&2\int_0^s\left\langle X_z-\overline{Y}^M_z, u(X_z)-u(\overline{Y}^M_{\lfloor z\rfloor})\right\rangle dz\nonumber\\ &=&2\int_0^s\left\langle X_z-\overline{Y}^M_z, u(X_z)-u(\overline{Y}^M_z)\right\rangle dz+\int_0^s\left\langle X_z-\overline{Y}^M_z, u(\overline{Y}^M_z)-u(\overline{Y}^M_{\lfloor z\rfloor})\right\rangle dz\nonumber\\ &\leq &2C\int_0^s||X_z-\overline{Y}^M_z||^2dz+2C\int_0^s||\overline{Y}^M_z-\overline{Y}^M_{\lfloor z\rfloor}||^2dz. \label{ch5ThA2} \end{aligned}$$ Using the same arguments as for $A_2$ leads to the following estimation of $A'_1$ $$\begin{aligned} A'_1&=&2\lambda\int_0^s\left\langle X_z-\overline{Y}^M_z, h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})\right\rangle dz \nonumber\\ &\leq& 2\lambda C\int_0^s||X_z-\overline{Y}^M_z||^2dz+2\lambda C\int_0^s||\overline{Y}^M_z-\overline{Y}^M_{\lfloor z\rfloor}||^2dz. 
\label{ch5ThA1n} \end{aligned}$$ Using the inequalities $||g_i(x)-g_i(y)||\leq ||g(x)-g(y)||$ and $(a+b)^2\leq 2a^2+2b^2$ and the global Lipschitz condition satisfyed by $g$, we have $$\begin{aligned} A_3 &=&\sum_{i=1}^m\int_0^s||g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})||^2dz\nonumber\\ &\leq &m\int_0^s||g(X_z)-g(\overline{Y}^M_{\lfloor z\rfloor})||^2dz\nonumber\\ &= &m\int_0^s||g(X_z)-g(\overline{Y}^M_z)+g(\overline{Y}^M_z)-g(\overline{Y}^M_{\lfloor z\rfloor})||^2dz\nonumber\\ &\leq &2m\int_0^s||g(X_z)-g(\overline{Y}^M_z)||^2dz+2m\int_0^s||g(\overline{Y}^M_z)-g(\overline{Y}^M_{\lfloor z\rfloor})||^2dz\nonumber\\ &\leq& 2mC^2\int_0^s||X_z-\overline{Y}^M_z||^2dz+2mC^2\int_0^s||\overline{Y}^M_z-\overline{Y}^M_{\lfloor z\rfloor}||^2dz \label{ch5ThA3} \end{aligned}$$ Using the same reasons as above we obtain the following estimation for $A_6$ : $$\begin{aligned} A_6 &=&\lambda\int_0^s\left[X_z-\overline{Y}^M_z+h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^2-||X_z-\overline{Y}^M_z||^2\right]dz\nonumber\\ &\leq &3\lambda\int_0^s||X_z-\overline{Y}^M_z||^2dz+2\lambda\int_0^s||h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^2dz\nonumber\\ &\leq &3\lambda\int_0^s||X_z-\overline{Y}^M_z||^2dz+4\lambda\int_0^s||h(X_z)-h(\overline{Y}^M_z)||^2dz\nonumber\\ &+ &4\lambda\int_0^s||h(\overline{Y}^M_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^2dz\nonumber\\ &\leq &(3\lambda+4\lambda C^2)\int_0^s||X_z-\overline{Y}^M_z||^2dz+4\lambda C^2\int_0^s||\overline{Y}^M_z-\overline{Y}^M_{\lfloor z\rfloor}||^2dz. \label{ch5ThA6} \end{aligned}$$ Inserting , , , and in we obtain : $$\begin{aligned} ||X_s-\overline{Y}^M_s||^2&\leq &(4C+2+2mC^2+3\lambda+4\lambda C^2+2\lambda C)\int_0^s||X_z-\overline{Y}^M_z||^2dz\nonumber\\ &+&(2C+2mC^2+4\lambda C^2+2\lambda C)\int_0^s||\overline{Y}^M_z-\overline{Y}^M_{\lfloor z\rfloor}||^2dz\nonumber\\ &+&\int_0^s||v(\overline{Y}^M_z)-v(\overline{Y}^M_{\lfloor z\rfloor})||^2dz+\dfrac{T^2}{M^2}\int_0^s||v(\overline{Y}^M_{\lfloor z\rfloor})||^4dz\nonumber\\ &+&2\sum_{i=1}^m\int_0^s\left<X_z-\overline{Y}^M_z, g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})\right>dW^{i}_z\nonumber\\ &+&\int_0^s\left[||X_z-\overline{Y}^M_z+h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^2-||X_z-\overline{Y}^M_z||^2\right]d\overline{N}_z. \end{aligned}$$ Taking the supremum in both sides of the previous inequality leads to $$\begin{aligned} \sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2&\leq &(4C+2+2mC^2+3\lambda+4\lambda C^2+2\lambda C)\int_0^t||X_z-\overline{Y}^M_z||^2dz\nonumber\\ &+&(2C+2mC^2+4\lambda C^2+2\lambda C)\int_0^t||\overline{Y}^M_z-\overline{Y}^M_{\lfloor z\rfloor}||^2dz\nonumber\\ &+&\int_0^t||v(\overline{Y}^M_z)-v(\overline{Y}^M_{\lfloor z\rfloor})||^2dz+\dfrac{T^2}{M^2}\int_0^t||v(\overline{Y}^M_{\lfloor z\rfloor})||^4dz\nonumber\\ &+&2\sup_{s\in[0,t]}\left|\sum_{i=1}^m\int_0^s\left<X_z-\overline{Y}^M_z, g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})\right>dW^{i}_z\right|\nonumber\\ &+&\sup_{s\in[0,t]}\left|\int_0^s\left[||X_z-\overline{Y}^M_z+h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^2\right]d\overline{N}_z\right|\nonumber\\ &+&\sup_{s\in[0,t]}\left|\int_0^s||X_z-\overline{Y}^M_z||^2d\overline{N}_z\right|. 
\label{ch5Th2}\end{aligned}$$ Using Lemma \[ch4lemma15\] we have the following estimations $$\begin{aligned} B_1& :=&\left\|2\sup_{s\in[0,t]}\left|\sum_{i=1}^m\int_0^s\left<X_z-\overline{Y}^M_z, g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})\right>dW^i_z\right|\right\|_{L^{p/2}(\Omega, \mathbb{R})}\\ &\leq &C_p\left(\int_0^t\sum_{i=1}^m\left\|\left<X_z-\overline{Y}^M_z, g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})\right>\right\|^2_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}, \end{aligned}$$ for all $p\in[2,\infty)$. Moreover, using Holder inequality, the inequality $ab\leq \dfrac{a^2}{2}+\dfrac{b^2}{2}$ and the global Lipschitz condition satisfied by $g$, we have the following estimation for $B_1$. $$\begin{aligned} B_1 &\leq& C_p\left(\int_0^t\sum_{i=1}^m\left\|\left<X_z-\overline{Y}^M_z, g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})\right>\right\|^2_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\nonumber\\ &\leq &C_p\left(\int_0^t\sum_{i=1}^m||X_z-\overline{Y}_z^M||^2_{L^p(\Omega, \mathbb{R}^d)}||g_i(X_z)-g_i(\overline{Y}^M_{\lfloor z\rfloor})||^2_{L^p(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\nonumber\\ &\leq &C_p\left(\int_0^t\dfrac{1}{2}||X_z-\overline{Y}_z^M||^2_{L^p(\Omega, \mathbb{R}^d)}2m||g(X_z)-g(\overline{Y}^M_{\lfloor z\rfloor})||^2_{L^p(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\nonumber\\ &\leq &\dfrac{C_p}{\sqrt{2}}\left(\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||_{L^p(\Omega, \mathbb{R}^d)}\right)\left(2C^2m\int_0^t||X_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\nonumber\\ &\leq &\dfrac{1}{4}\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}+C_p^2m\int_0^t||X_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega,\mathbb{R}^d)}dz\nonumber\\ &\leq &\dfrac{1}{4}\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}+2C_p^2m\int_0^t||X_s-\overline{Y}^M_s||^2_{L^p(\Omega,\mathbb{R}^d)}dz\nonumber\\ &+&2C_p^2m\int_0^t||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega,\mathbb{R}^d)}dz. \label{ch5ThB1} \end{aligned}$$ Using Lemma \[ch4lemma18\] and the inequality $(a+b)^4\leq 4a^4+4b^4$, it follows that : $$\begin{aligned} B_2 &=&\left\|\sup_{s\in[0,t]}\left|\int_0^s||X_z-\overline{Y}^M_z+h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^2d\overline{N}_z\right|\right\|_{L^{p/2}(\Omega, \mathbb{R}^d)}\nonumber\\ &\leq &C_p\left(\int_0^t||X_z-\overline{Y}^M_z+h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\nonumber\\ &\leq &C_p\left(\int_0^t\left[4||X_z-\overline{Y}^M_z||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}+4||h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}\right]dz\right)^{1/2}, \end{aligned}$$ for all $p\in[2,\infty)$. Using the inequality $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$, it follows that : $$\begin{aligned} B_2&\leq &2C_p\left(\int_0^t||X_z-\overline{Y}^M_z||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2} +2C_p\left(\int_0^t||h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\nonumber\\ & =& B_{21}+B_{22}. 
\label{ch5ThB} \end{aligned}$$ Using Holder inequality, it follows that : $$\begin{aligned} B_{21} &: = &2 C_p\left(\int_0^t||X_z-\overline{Y}^M_z||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\nonumber\\ &\leq &2C_p\left(\int_0^t||X_z-\overline{Y}^M_z||^2_{L^p(\Omega, \mathbb{R}^d)}||X_z-\overline{Y}^M_z||^2_{L^p(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\nonumber\\ &= &2C_p\left(\int_0^t\dfrac{1}{16}||X_z-\overline{Y}^M_z||^2_{L^p(\Omega, \mathbb{R}^d)}16||X_z-\overline{Y}^M_z||^2_{L^p(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\nonumber\\ &\leq &\dfrac{1}{4}\sup_{s\in[0,t]}||X_z-\overline{Y}^M_z||_{L^p(\Omega,\mathbb{R}^d)}8C_p\left(\int_0^t||X_z-\overline{Y}^M_z||^2_{L^p(\Omega,\mathbb{R}^d)}dz\right)^{1/2}. \end{aligned}$$ Using the inequality $2ab\leq a^2+b^2$ leads to : $$\begin{aligned} B_{21}&\leq &\dfrac{1}{16}\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)} +16C_p\int_0^t||X_s-\overline{Y}^M_z||^2_{L^p(\Omega, \mathbb{R}^d)}dz. \label{ch5ThB21} \end{aligned}$$ Using the inequalities $(a+b)^4\leq 4a^4+4b^4$ and $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$, we have the following bound for $B_{22}$ $$\begin{aligned} B_{22}&: = &2C_p\left(\int_0^t||h(X_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\nonumber\\ &\leq &2C_p\left(\int_0^t4||h(X_z)-h(\overline{Y}^M_z)||^4_{L^{p/2}(\Omega,\mathbb{R}^d)}+4||h(\overline{Y}^M_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^4_{L^{p/2}(\Omega,\mathbb{R}^d)}dz\right)^{1/2}\\ &\leq &4C_p\left(\int_0^t||h(X_z)-h(\overline{Y}^M_z)||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}\\ &+&4C_p\left(\int_0^t||h(\overline{Y}^M_z)-h(\overline{Y}^M_{\lfloor z\rfloor})||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}. \end{aligned}$$ Using the global Lipschitz condition satisfied by $h$ leads to : $$\begin{aligned} B_{22}&\leq &4C_p\left(\int_0^tC^4||X_z-\overline{Y}^M_z||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}+4C_p\left(\int_0^tC^4||\overline{Y}^M_z-\overline{Y}^M_{\lfloor z\rfloor}||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}. \end{aligned}$$ Using the same idea as for $B_{21}$, it follows that : $$\begin{aligned} B_{22} &\leq &\dfrac{1}{16}\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)} +64C_p\int_0^t||X_z-\overline{Y}^M_z||^2_{L^p(\Omega, \mathbb{R}^d)}dz\nonumber\\ &+&\dfrac{1}{16}\sup_{s\in[0,t]}||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}+64C_p\int_0^t||\overline{Y}^M_z-\overline{Y}^M_{\lfloor z\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}dz.\nonumber \end{aligned}$$ Taking the supremum under the integrand of the last term in the above inequality and using the fact that we don’t care about the value of the constant leads to : $$\begin{aligned} B_{22}&\leq &\dfrac{1}{16}\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)} +64C_p\int_0^t||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}ds\nonumber\\ &+&C_p\sup_{s\in[0,t]}||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}. \label{ch5ThB22} \end{aligned}$$ Inserting and into gives : $$\begin{aligned} B_2&\leq &\dfrac{1}{8}\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}+C_p\int_0^t||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}ds\nonumber\\ &+&C_p\sup_{s\in[0,t]}||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}. 
\label{ch5ThB2} \end{aligned}$$ Using again Lemma \[ch4lemma18\] leads to : $$\begin{aligned} B_3 &: =& \left\|\sup_{s\in[0,t]}\int_0^s||X_z-\overline{Y} ^M_z||^2d\overline{N}_z\right\|_{L^{p/2}(\Omega, \mathbb{R}^d)}\\ &\leq &C_p\left(\int_0^t||X_z-\overline{Y}^M_z||^4_{L^{p/2}(\Omega, \mathbb{R}^d)}dz\right)^{1/2}. \end{aligned}$$ Using the same argument as for $B_{21}$, we obtain : $$\begin{aligned} B_3 &\leq &\dfrac{1}{8}\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}+C_p\int_0^t||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}ds \label{ch5ThB3} \end{aligned}$$ Taking the $L^{p}$ norm in both side of and inserting inequalities , and leads to : $$\begin{aligned} \left\|\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||\right\|^2_{L^{p}(\Omega, \mathbb{R}^d)}&=&\left\|\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||^2\right\|_{L^{p/2}(\Omega, \mathbb{R}^d)}\end{aligned}$$ $$\begin{aligned} &\leq & C_p\int_0^t||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}ds+C_p\int_0^t||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega,\mathbb{R}^d)}ds\\ &+&\int_0^t||v(X_s)-v(\overline{Y}^M_{\lfloor s\rfloor})||^2_{L^{p}(\Omega, \mathbb{R}^d)}ds+C_p\sup_{s\in[0,t]}||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}\\ &+&\dfrac{T^2}{M^2}\int_0^t||v(\overline{Y}^M_{\lfloor s\rfloor})||^4_{L^{2p}(\Omega, \mathbb{R}^d)}ds +2C_p\int_0^t||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}ds\\ &+&\dfrac{1}{2}\left\|\sup_{s\in[0,t]}||X_s-\overline{Y}^M_s||\right\|^2_{L^{p/2}(\Omega, \mathbb{R}^d)}, \end{aligned}$$ for all $t\in[0,T]$ and all $p\in[2,+\infty)$. Where $C_p$ is the generic constant. The previous inequality can be writen in the following appropriate form $ \dfrac{1}{2}\left\|\sup\limits_{s\in[0,t]}||X_s-\overline{Y}^M_s||\right\|^2_{L^{p}(\Omega, \mathbb{R}^d)} $ $$\begin{aligned} &\leq & C_p\int_0^t||X_s-\overline{Y}^M_s||^2_{L^p(\Omega, \mathbb{R}^d)}ds+C_p\int_0^t||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega,\mathbb{R}^d)}ds\\ &+&\int_0^t||v(X_s)-v(\overline{Y}^M_{\lfloor s\rfloor})||^2_{L^{p}(\Omega, \mathbb{R}^d)}ds+C_p\sup_{s\in[0,t]}||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}\\ &+&\dfrac{T^2}{M^2}\int_0^t||v(\overline{Y}^M_{\lfloor s\rfloor})||^4_{L^{2p}(\Omega, \mathbb{R}^d)}ds +2C^2m\int_0^t||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}ds. \end{aligned}$$ Applying Gronwall lemma to the previous inequality leads to : $ \dfrac{1}{2}\left\|\sup\limits_{s\in[0,t]}||X_s-\overline{Y}^M_s||\right\|^2_{L^p(\Omega, \mathbb{R}^d)} $ $$\begin{aligned} &\leq &C_pe^{C_p}\left(\int_0^T||v(\overline{Y}^M_s)-v(\overline{Y}^M_{\lfloor s\rfloor})||^2_{L^p(\Omega, \mathbb{R}^d)}ds+C_p\sup_{u\in[0,t]}||\overline{Y}^M_u-\overline{Y}^M_{\lfloor u\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}\right.\\ & &\left. +\dfrac{T^2}{M^2}\int_0^T||v(\overline{Y}^M_{\lfloor s\rfloor})||^4_{L^{2p}(\Omega, \mathbb{R}^d)}ds +C_p\int_0^T||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||^2_{L^p(\Omega, \mathbb{R}^d)}ds\right). 
\end{aligned}$$ Using the inequality $\sqrt{a+b+c}\leq \sqrt{a}+\sqrt{b}+\sqrt{c}$, it follows that $ \left\|\sup\limits_{t\in[0, T]}||X_t-\overline{Y}^M_t||\right\|_{L^p(\Omega, \mathbb{R}^d)} $ $$\begin{aligned} &\leq &C_pe^{C_p}\left(\sup_{s\in[0,T]}||v(\overline{Y}^M_s)-v(\overline{Y}^M_{\lfloor s\rfloor})||_{L^p(\Omega, \mathbb{R}^d)}+C_p\sup_{s\in[0,T]}||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||_{L^p(\Omega, \mathbb{R}^d)} \right.\nonumber\\ &+& \left. \dfrac{T^2}{M}\left[\sup_{n\in\{0,\cdots, M\}}||v(Y^M_n)||^2_{L^{2p}(\Omega, \mathbb{R}^d)}\right]+C_p\sup_{s\in[0,T]}||\overline{Y}^M_s-\overline{Y}^M_{\lfloor s\rfloor}||_{L^p(\Omega, \mathbb{R}^d)}\right), \label{ch5Gronwall} \end{aligned}$$ for all $p\geq 2$. Using Lemma \[ch5lemma14\] and Lemma \[ch5lemma16\], it follows that : $$\begin{aligned} \mathbb{E}\left[\sup_{t\in[0,T]}\left\|X_t-\overline{Y}^M_t\right\|^p\right]^{1/p}\leq C_p(\Delta t)^{1/2}, \end{aligned}$$ for all $M\in\mathbb{N}$ and all $p\in[2,\infty)$. The application of Hölder’s inequality shows that the latter inequality is satisfied for all $p\in[1,\infty)$, and this completes the proof of Theorem \[ch5theorem2\]. Strong convergence of the tamed Euler scheme -------------------------------------------- \[ch5theorem2b\] Under Assumptions \[ch5assumption1\], for all $p\in[1, +\infty)$ there exists a constant $C_p>0$ such that $$\begin{aligned} \left(\mathbb{E}\left[\sup_{t\in[0,T]}\left\|X_t-\overline{X}^M_t\right\|^p\right]\right)^{1/p}\leq C_p\Delta t^{1/2}, \end{aligned}$$ for all $M\in\mathbb{N}$, where $X : [0, T]\times\Omega \longrightarrow \mathbb{R}^d$ is the exact solution of and $\overline{X}^M_t$ is the time-continuous interpolation of the numerical solution defined by : $$\begin{aligned} \overline{X}^M_t &=&X^M_n+\dfrac{(t-n\Delta t)f(X^M_n)}{1+\Delta t||f(X^M_n)||}+g(X^M_n)(W_t-W_{n\Delta t})+h(X^M_n)(\overline{N}_t-\overline{N}_{n\Delta t}), \label{ch5continoussol1} \end{aligned}$$ for all $M\in\mathbb{N}$, all $n\in\{0,\cdots, M\}$ and all $t\in[n\Delta t, (n+1)\Delta t)$. Using the relation $\Delta\overline{N}^M_n=\Delta N^M_n-\lambda\Delta t$, the time-continuous interpolation of can be expressed in the following form $$\begin{aligned} \overline{X}^M_t =X^M_n+\lambda(t-n\Delta t)h(X^M_n)+\dfrac{(t-n\Delta t)f(X^M_n)}{1+\Delta t||f(X^M_n)||}+g(X^M_n)(W_t-W_{n\Delta t})+h(X^M_n)(N_t-N_{n\Delta t}), \end{aligned}$$ for all $t\in[n\Delta t, (n+1)\Delta t)$. From the numerical solution and using the relation $\Delta N^M_n=\Delta\overline{N}^M_n+\lambda\Delta t$, it follows that: $$\begin{aligned} X^M_{n+1}&=&X^M_n+\dfrac{\Delta tf(X^M_n)}{1+\Delta t||f(X^M_n)||}+g(X^M_n)\Delta W^M_n+h(X^M_n)\Delta N^M_n\nonumber\\ &=&X^M_n+\lambda h(X^M_n)T/M+\dfrac{\Delta tf(X^M_n)}{1+\Delta t||f(X^M_n)||}+g(X^M_n)\Delta W^M_n\nonumber\\ &+&h(X^M_n)\Delta \overline{N}^M_n. \label{ch5contam} \end{aligned}$$ The functions $\lambda h$ and $f$ satisfy the same conditions as $u_{\lambda}$ and $v$ respectively. So from it follows that the numerical solution satisfies the same hypotheses as the numerical solution . Hence, it follows from Theorem \[ch5theorem2\] that there exists a constant $C_p>0$ such that $$\begin{aligned} \left(\mathbb{E}\left[\sup_{t\in[0,T]}\left\|X_t-\overline{X}^M_t\right\|^p\right]\right)^{1/p}\leq C_p\Delta t^{1/2}, \end{aligned}$$ for all $p\in[1,\infty)$. Linear mean-square stability ---------------------------- The goal of this section is to find a stepsize for which the tamed Euler scheme and the semi-tamed Euler scheme are stable. 
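Throughout the stability analysis below we use the standard notion: a numerical approximation $(X_n)_{n\geq 0}$ computed with a fixed stepsize $\Delta t$ is said to be mean-square stable if $\lim\limits_{n\longrightarrow\infty}\mathbb{E}|X_n|^2=0$; this is the discrete counterpart of the property recalled for the exact solution in the next paragraph.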
The first approach to the stability analysis of a numerical method is to study the stability behavior of the method for a scalar linear equation. So we will focus on the linear case. Let’s consider a linear test equation with real scalar coefficients: $$\begin{aligned} dX(t)= aX(t^{-})dt +bX(t^{-})dW(t)+cX(t^{-})dN(t), \hspace{0.5cm} X(0)=X_0. \label{ch5linearequation}\end{aligned}$$ It is proved in [@Desmond2] that the exact solution of is mean-square stable if and only if $l :=2a+b^2+\lambda c(2+c)<0$. That is : $$\begin{aligned} \label{ch5stabilitycondition} \lim_{t\longrightarrow \infty}|X(t)|^2=0 \Longleftrightarrow l := 2a+b^2+\lambda c(2+c) <0.\end{aligned}$$ In this section, to ease notation, the tamed Euler approximation $X^M$ will be replaced by $X$ and the semi-tamed Euler approximation $Y^M$ will be replaced by $Y$. We have the following result for the numerical method . Under condition \[ch5stabilitycondition\], the semi-tamed Euler scheme is mean-square stable if and only if $$\begin{aligned} \Delta t<\dfrac{-l}{(a+\lambda c)^2}. \end{aligned}$$ Applying the semi-tamed Euler scheme to leads to $$\begin{aligned} Y_{n+1}=Y_n+aY_n\Delta t+\lambda cY_n\Delta t+bY_n\Delta W_n+cY_n\Delta\overline{N}_n. \label{ch5compen1} \end{aligned}$$ Squaring both sides of leads to $$\begin{aligned} Y_{n+1}^2&=&Y_n^2+(a+\lambda c)^2\Delta t^2Y_n^2+b^2Y_n^2\Delta W_n^2+c^2Y_n^2\Delta\overline{N}_n^2+2(a+\lambda c)Y_n^2\Delta t+2bY_n^2\Delta W_n\nonumber\\ &+&2cY_n^2\Delta\overline{N}_n+2b(a+\lambda c)Y_n^2\Delta t\Delta W_n+2c(a+\lambda c)Y_n^2\Delta t\Delta\overline{N}_n+2bcY_n^2\Delta W_n\Delta\overline{N}_n. \label{ch5compen2} \end{aligned}$$ Taking expectation on both sides of and using the relations $\mathbb{E}(\Delta W_n^2)=\Delta t$, $\mathbb{E}(\Delta\overline{N}_n^2)=\lambda \Delta t$ and $\mathbb{E}(\Delta W_n)=\mathbb{E}(\Delta\overline{N}_n)=0$ leads to $$\begin{aligned} \mathbb{E}|Y_{n+1}|^2=(1+(a+\lambda c)^2\Delta t^2+(b^2+\lambda c^2+2a+2\lambda c)\Delta t)\mathbb{E}|Y_n|^2. \end{aligned}$$ So the numerical method is stable if and only if $$\begin{aligned} 1+(a+\lambda c)^2\Delta t^2+(b^2+\lambda c^2+2a+2\lambda c)\Delta t<1. \end{aligned}$$ That is if and only if $$\begin{aligned} \Delta t<\dfrac{-l}{(a+\lambda c)^2}. \end{aligned}$$ \[thmncts\] Under condition \[ch5stabilitycondition\], the tamed Euler scheme is mean-square stable if one of the following conditions is satisfied : - $a(1+\lambda c\Delta t)\leq 0$, $2a-l>0$ and $\Delta t<\dfrac{2a-l}{a^2+\lambda^2c^2}$. - $a(1+\lambda c\Delta t)>0$ and $\Delta t<\dfrac{-l}{(a+\lambda c)^2}$. Applying the tamed Euler scheme to equation leads to : $$\begin{aligned} X_{n+1}=X_n+\dfrac{aX_n\Delta t}{1+\Delta t|aX_n|}+bX_n\Delta W_n+cX_n\Delta N_n. 
\label{ch5eq1}\end{aligned}$$ By squaring both sides of leads to : $$\begin{aligned} X_{n+1}^2&=&X_n^2+\dfrac{a^2X^2_n\Delta t^2}{(1+\Delta t|aX_n|)^2}+b^2X_n^2\Delta W_n^2+c^2X_n^2\Delta N_n^2+\dfrac{2aX_n^2\Delta t}{1+\Delta t|aX_n|}+2bX_n^2\Delta W_n\\ &+&2cX_n^2\Delta N_n+\dfrac{2abX_n^2}{1+\Delta t|aX_n|}\Delta W_n+\dfrac{2acX_n^2\Delta t}{1+\Delta t|aX_n|}\Delta N_n+2bcX_n^2\Delta W_n\Delta N_n.\end{aligned}$$ Using the inequality $ \dfrac{a^2\Delta t^2}{1+\Delta t|aX_n|}\leq a^2\Delta t^2$, the previous equality becomes $$\begin{aligned} X_{n+1}^2&\leq &X_n^2+a^2X2_n\Delta t^2+b^2X_n^2\Delta W_n^2+c^2X_n^2\Delta N_n^2+\dfrac{2aX_n^2\Delta t}{1+\Delta t|aX_n|}+2bX_n^2\Delta W_n\\ &+&2cX_n^2\Delta N_n+\dfrac{2abX_n^2}{1+\Delta t|aX_n|}\Delta W_n+\dfrac{2acX_n^2\Delta t}{1+\Delta t|aX_n|}\Delta N_n+2bcX_n^2\Delta W_n\Delta N_n.\end{aligned}$$ Taking expectation in both sides of the previous equality and using independence and the fact that $\mathbb{E}(\Delta W_n)=0$, $\mathbb{E}(\Delta W_n^2)=\Delta t$, $\mathbb{E}(\Delta N_n)=\lambda\Delta t$, $\mathbb{E}(\Delta N_n^2)=\lambda \Delta t+\lambda^2\Delta t^2$ leads to : $$\begin{aligned} \mathbb{E}|X_{n+1}|^2&\leq& \left[1+a^2\Delta t^2+b^2\Delta t+\lambda^2c^2\Delta t^2+(2+ c)\lambda c\Delta t\right]\mathbb{E}|X_n|^2\nonumber\\ &+&\mathbb{E}\left(\dfrac{2aX^2_n\Delta t(1+\lambda c\Delta t)}{1+\Delta t|aX_n|}\right). \label{ch5eq2}\end{aligned}$$ - If $a(1+\lambda c\Delta t)\leq 0$, it follows from that $$\begin{aligned} \mathbb{E}|X_{n+1}|^2\leq \{1+(a^2+\lambda^2c^2)\Delta t^2+[b^2+\lambda c(2+c)]\Delta t\}\mathbb{E}|X_n|^2.\end{aligned}$$ Therefore, the numerical solution is stable if $$\begin{aligned} 1+(a^2+\lambda^2c^2)\Delta t^2+[b^2+\lambda c(2+c)]\Delta t<1.\end{aligned}$$ That is $\Delta t<\dfrac{2a-l}{a^2+\lambda^2c^2}$. - If $a(1+\lambda c\Delta t)> 0$, using the fact that $\dfrac{2aX_n^2\Delta t(1+\lambda c\Delta t)}{1+\Delta t|aX_n|}< 2aY_n^2\Delta t(1+\lambda c\Delta t)$, inequality becomes $$\begin{aligned} \mathbb{E}|X_{n+1}|^2\leq\left[1+a^2\Delta t^2+b^2\Delta t+\lambda^2c^2\Delta t^2+2\lambda ac\Delta t^2+(2+ c)\lambda c\Delta t+2a\Delta t\right]\mathbb{E}|X_n|^2. \label{ch5eq3}\end{aligned}$$ Therefore, it follows from that the numerical solution is stable if $1+a^2\Delta t^2+b^2\Delta t+\lambda^2c^2\Delta t^2+2\lambda ac\Delta t^2+(2+ c)\lambda c\Delta t+2a\Delta t<1$. That is $\Delta t<\dfrac{-l}{(a+\lambda c)^2}$. In Theorem \[thmncts\], we can easily check that if $l<0$, we have: $$\begin{aligned} \left\lbrace \begin{array}{l} a(1+\lambda c\Delta t)\leq0,\\ 2a-l>0 \\ \Delta t<\dfrac{2a-l}{a^2+\lambda^2c^2}\\ \end{array} \right. \Leftrightarrow \left\lbrace \begin{array}{l} a \in (l/2,0], c\geq 0,\\ \Delta t <\dfrac{2a-l}{a^2+\lambda^2 c^2}\\ \end{array} \right. \bigcup \left\lbrace \begin{array}{l} a \in (l/2,0), c<0,\\ \Delta t <\dfrac{2a-l}{a^2+\lambda^2 c^2} \\ \Delta t\leq \dfrac{-1}{ \lambda c} \\ \end{array} \right.\\ \bigcup \left\lbrace \begin{array}{l} a>0, c<0 \\ \Delta t <\dfrac{2a-l}{a^2+\lambda^2 c^2} \\ \Delta t \geq \dfrac{-1}{ \lambda c} \end{array} \right. \end{aligned}$$ $$\begin{aligned} \left\lbrace \begin{array}{l} a(1+\lambda c\Delta t)>0,\\ \Delta t<\dfrac{-l}{(a+\lambda c)^2}\\ \end{array} \right. \Leftrightarrow \left\lbrace \begin{array}{l} a >0, c>0,\\ \Delta t < \dfrac{-l}{(a+\lambda c)^2}\\ \end{array} \right. 
\bigcup \left\lbrace \begin{array}{l} a >0, c<0,\\ \Delta t <\dfrac{-l}{(a+\lambda c)^2} \wedge \dfrac{-1}{ \lambda c} \\ \end{array} \right.\\ \bigcup \left\lbrace \begin{array}{l} a<0, c<0 \\ \Delta t <\dfrac{-l}{(a+\lambda c)^2} \\ \Delta t > \dfrac{-1}{ \lambda c} \end{array} \right. \end{aligned}$$ Nonlinear mean-square stability ------------------------------- In this section, we focus on the mean-square stability of the approximation . It is proved in [@Desmond2] that under the following conditions, $$\begin{aligned} \langle x-y, f(x)-f(y)\rangle&\leq& \mu||x-y||^2,\\ ||g(x)-g(y)||^2&\leq& \sigma||x-y||^2,\\ ||h(x)-h(y)||^2 &\leq &\gamma ||x-y||^2,\end{aligned}$$ where $\mu$, $\sigma$ and $\gamma$ are constants, the eaxct solution of SDE is mean-square stable if $$\begin{aligned} \alpha :=2\mu+\sigma+\lambda\sqrt{\gamma}(\sqrt{\gamma}+2)<0. \end{aligned}$$ In this section, to easy notations, the tamed Euler appoximation $X^M$ will be replaced by $X$ and the semi-tamed Euler approximation $Y^M$ will be replaced by $Y$. Following the literature of [@Xia2], in order to examine the mean-square stability of the numerical solution given by , we assume that $f(0)=g(0)=h(0)=0$. Also we make the following assumptions. \[ch5assumption2\] There exist a positive constants $\rho$, $\beta$, $\theta$, $K$, $C$ and $a>1$ such that $$\begin{aligned} \langle x-y, u(x)-u(y)\rangle\leq -\rho||x-y||^2,\hspace{1cm} ||u(x)-u(y)||\leq K||x-y||,\\ \langle x-y, v(x)-v(y)\rangle\leq-\beta||x-y||^{a+1}, \hspace{1.5cm} ||v(x)||\leq \overline{\beta}||x||^a,\\ ||g(x)-g(y)||\leq\theta||x-y||,\hspace{2cm} ||h(x)-h(y)||\leq C||x-y|. \end{aligned}$$ We define $\alpha_1= -2\rho+\theta^2+\lambda C(2+C)$. Under Assumptions \[ch5assumption2\] and the further hypothesis $2\beta-\overline{\beta}>0$, for any stepsize $\Delta t<\dfrac{-\alpha_1}{(K+\lambda C)^2}\wedge\dfrac{2\beta}{[2(K+\lambda C)+\overline{\beta}]\overline{\beta}}\wedge\dfrac{2\beta-\overline{\beta}}{2(K+\lambda C)\overline{\beta}}$, the numerical solution is exponentiallly mean-square stable. The numerical solution is given by $$\begin{aligned} Y_{n+1}=Y_n+\Delta tu_{\lambda}(Y_n)+\dfrac{\Delta tv(Y_n)}{1+\Delta t||v(Y_n)||}+g(Y_n)\Delta W_n+h(Y_n)\Delta\overline{N}_n, \end{aligned}$$ where $u_{\lambda}=u+\lambda h$. Taking the inner product in both sides of the previous equation leads to $$\begin{aligned} ||Y_{n+1}||^2&=&||Y_n||^2+\Delta t^2||u_{\lambda}(Y_n)||^2+\dfrac{\Delta t^2||v(Y_n)||^2}{\left(1+\Delta t||v(Y_n)||\right)^2}+||g(Y_n)||^2||\Delta W_n||^2+||h(Y_n)||^2|\Delta\overline{N}_n|^2\nonumber\\ &+&2\Delta\langle Y_n, u_{\lambda}(Y_n)\rangle+2\Delta t\left\langle Y_n, \dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}\right\rangle+2\langle Y_n, g(Y_n)\Delta W_n\rangle+2\langle Y_n, h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&+2\Delta t^2\left\langle u_{\lambda}(Y_n), \dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}\right\rangle+2\Delta t\langle u_{\lambda}(Y_n), g(Y_n)\Delta W_n\rangle+2\Delta t\langle u_{\lambda}(Y_n), h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&2\Delta t\left\langle\dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}, g(Y_n)\Delta W_n\right\rangle+2\Delta t\left\langle\dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}, h(Y_n)\Delta\overline{N}_n\right\rangle\nonumber\\ &+&2\langle g(Y_n)\Delta W_n, h(Y_n)\Delta\overline{N}_n\rangle. 
\label{ch5meansemi1} \end{aligned}$$ Using Assumptions \[ch5assumption2\], it follows that : $$\begin{aligned} 2\Delta t\left\langle Y_n, \dfrac{v(Y_n)}{1+\Delta ||v(Y_n)||}\right\rangle\leq\dfrac{-2\beta\Delta t||Y_n||^{a+1}}{1+\Delta t||v(Y_n)||} \label{ch5meansemi2} \end{aligned}$$ $$\begin{aligned} 2\Delta t^2\left<u_{\lambda}(Y_n), \dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}\right>&\leq &\dfrac{2\Delta t^2||u_{\lambda}(Y_n)||||v(Y_n)||}{1+\Delta t||v(Y_n||}\nonumber\\ &\leq&\dfrac{2\Delta t^2(K+\lambda C)\overline{\beta}||Y_n|^{a+1}}{1+\Delta t||v(Y_n)||}. \label{ch5meansemi3} \end{aligned}$$ Let’s define $\Omega_n=\{\omega\in\Omega : ||Y_n||>1\}$. - On $\Omega_n$ we have $$\begin{aligned} \dfrac{\Delta t^2||v(Y_n)||^2}{\left(1+\Delta t||v(Y_n)||\right)^2}\leq\dfrac{\Delta t||v(Y_n)||}{1+\Delta t||v(Y_n)||}\leq\dfrac{\overline{\beta}\Delta t||Y_n||^{a+1}}{1+\Delta t||v(Y_n)||}. \label{ch5meansemi4} \end{aligned}$$ Therefore using , and , equality becomes : yields $$\begin{aligned} \|Y_{n+1}\|^2& \leq &\|Y_n\|^2+\Delta t^2 \|u_{\lambda}(Y_n)\|^2+ \|g(Y_n)\|^2\|\Delta W_n\|^2+\|h(Y_n)\|^2|\Delta\overline{N}_n|^2\nonumber\\ &+&2\Delta t\langle Y_n, u_{\lambda}(Y_n)\rangle+2\langle Y_n, g(Y_n)\Delta W_n +2\langle Y_n, h(Y_n)\Delta\overline{N}_n\rangle \nonumber\\ &+&2\Delta t\langle u_{\lambda}(Y_n), g(Y_n)\Delta W_n\rangle+2\Delta t\langle u_{\lambda}(Y_n), h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&2\Delta t\left\langle\dfrac{v(Y_n)}{1+\Delta t\|v(Y_n)\|}, g(Y_n)\Delta W_n\right\rangle+2\Delta t\left\langle\dfrac{v(Y_n)}{1+\Delta t\|v(Y_n)\|}, h(Y_n)\Delta\overline{N}_n\right\rangle\nonumber\\ &+&2\langle g(Y_n)\Delta W_n, h(Y_n)\Delta\overline{N}_n\rangle+\dfrac{\left[-2\beta\Delta t+2(K+\lambda c)\overline{\beta}\Delta t^2+\overline{\beta}\Delta t\right]\|Y_n\|^{a+1}}{1+\Delta t\|v(Y_n)\|}. \label{ch5meansemi4a} \end{aligned}$$ The hypothesis $\Delta t<\dfrac{2\beta-\overline{\beta}}{2(K+\lambda C)\overline{\beta}}$ implies that $-2\beta\Delta t+2(K+\lambda C)\overline{\beta}\Delta t^2+\overline{\beta}\Delta t<0$. Therefore, becomes $$\begin{aligned} ||Y_{n+1}||^2&\leq &||Y_n||^2+\Delta t^2||u_{\lambda}(Y_n)||^2+2\Delta t<Y_n, u_{\lambda}(Y_n)>+||g(Y_n)||^2||\Delta W_n||^2\nonumber\\ &+&||h(Y_n)||^2||\Delta\overline{N}_n||^2+2\langle Y_n,g(Y_n)\Delta W_n\rangle+2\langle Y_n, h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&2\Delta t\langle u_{\lambda}(Y_n), g(Y_n)\Delta W_n\rangle+2\Delta t\langle u_{\lambda}(Y_n), h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&2\Delta t\left\langle \dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}, g(Y_n)\Delta W_n\right\rangle+2\Delta t\left\langle\dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}, h(Y_n)\Delta\overline{N}_n\right\rangle\nonumber\\ &+&2\langle g(Y_n)\Delta W_n, h(Y_n)\Delta\overline{N}_n\rangle. \label{ch5meansemi4b} \end{aligned}$$ - On $\Omega_n^c$ we have $$\begin{aligned} \dfrac{\Delta t^2||v(Y_n)||^2}{\left(1+\Delta t||v(Y_n)||\right)^2}\leq\dfrac{\Delta t^2||v(Y_n)||^2}{1+\Delta t||v(Y_n)||}\leq\dfrac{\overline{\beta}^2\Delta t^2||Y_n||^{2a}}{1+\Delta t||v(Y_n)||}\leq\dfrac{\overline{\beta}^2\Delta t^2||Y_n||^{a+1}}{1+\Delta t||v(Y_n)||}. 
\label{ch5meansemi5} \end{aligned}$$ Therefore, using , and , equality becomes $$\begin{aligned} ||Y_{n+1}||^2&\leq &||Y_n||^2+\Delta t^2||u_{\lambda}(Y_n)||^2+2\Delta t<Y_n, u_{\lambda}(Y_n)>+||g(Y_n)||^2||\Delta W_n||^2\nonumber\\ &+&||h(Y_n)||^2||\Delta\overline{N}_n||^2+2\langle Y_n,g(Y_n)\Delta W_n\rangle+2\langle Y_n, h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&2\Delta t\langle u_{\lambda}(Y_n), g(Y_n)\Delta W_n\rangle+2\Delta t\langle u_{\lambda}(Y_n), h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&2\Delta t\left\langle \dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}, g(Y_n)\Delta W_n\right\rangle+2\Delta t\left\langle\dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}, h(Y_n)\Delta\overline{N}_n\right\rangle\nonumber\\ &+&2\langle g(Y_n)\Delta W_n, h(Y_n)\Delta\overline{N}_n\rangle+\dfrac{\left[-2\beta\Delta t+2(K+\lambda c)\overline{\beta}\Delta t^2+\overline{\beta}^2\Delta t^2\right]||Y_n||^{a+1}}{1+\Delta t t||v(Y_n)||}. \label{ch5meansemi5a} \end{aligned}$$ The hypothesis $\Delta t<\dfrac{2\beta}{[2(K+\lambda C)+\overline{\beta}]\overline{\beta}}$ implies that $-2\beta\Delta t+2(K+\lambda C)\overline{\beta}\Delta t^2+\overline{\beta}^2\Delta t^2<0$. Therefore, becomes $$\begin{aligned} ||Y_{n+1}||^2&\leq &||Y_n||^2+\Delta t^2||u_{\lambda}(Y_n)||^2+2\Delta t<Y_n, u_{\lambda}(Y_n)>+||g(Y_n)||^2||\Delta W_n||^2\nonumber\\ &+&||h(Y_n)||^2||\Delta\overline{N}_n||^2+2\langle Y_n,g(Y_n)\Delta W_n\rangle+2\langle Y_n, h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&2\Delta t\langle u_{\lambda}(Y_n), g(Y_n)\Delta W_n\rangle+2\Delta t\langle u_{\lambda}(Y_n), h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&2\Delta t\left\langle \dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}, g(Y_n)\Delta W_n\right\rangle+2\Delta t\left\langle\dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}, h(Y_n)\Delta\overline{N}_n\right\rangle\nonumber\\ &+&2\langle g(Y_n)\Delta W_n, h(Y_n)\Delta\overline{N}_n\rangle. \label{ch5meansemi5b} \end{aligned}$$ Finally, from the discussion above on $\Omega_n$ and $\Omega_n^c$, it follows that on $\Omega$, the following inequality holds for all $\Delta t<\dfrac{2\beta}{[2(K+\lambda C)+\overline{\beta}]\overline{\beta}}\wedge\dfrac{2\beta-\overline{\beta}}{2(K+\lambda C)\overline{\beta}}$ $$\begin{aligned} ||Y_{n+1}||^2&\leq &||Y_n||^2+\Delta t^2||u_{\lambda}(Y_n)||^2+2\Delta t<Y_n, u_{\lambda}(Y_n)>+||g(Y_n)||^2||\Delta W_n||^2\nonumber\\ &+&||h(Y_n)||^2||\Delta\overline{N}_n||^2+2\langle Y_n,g(Y_n)\Delta W_n\rangle+2\langle Y_n, h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&2\Delta t\langle u_{\lambda}(Y_n), g(Y_n)\Delta W_n\rangle+2\Delta t\langle u_{\lambda}(Y_n), h(Y_n)\Delta\overline{N}_n\rangle\nonumber\\ &+&2\Delta t\left\langle \dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}, g(Y_n)\Delta W_n\right\rangle+2\Delta t\left\langle\dfrac{v(Y_n)}{1+\Delta t||v(Y_n)||}, h(Y_n)\Delta\overline{N}_n\right\rangle\nonumber\\ &+&2\langle g(Y_n)\Delta W_n, h(Y_n)\Delta\overline{N}_n\rangle. \label{ch5meansemi6} \end{aligned}$$ Taking expectation in both sides of and using the martingale properties of $\Delta W_n$ and $\Delta\overline{N}_n$ leads to : $$\begin{aligned} \mathbb{E}||Y_{n+1}||^2&\leq&\mathbb{E}||Y_n||^2+\Delta t^2\mathbb{E}||u_{\lambda}(Y_n)||^2+2\Delta t\mathbb{E}\langle Y_n, u_{\lambda}(Y_n)\rangle+\Delta t\mathbb{E}||g(Y_n)||^2\nonumber\\ &+&\lambda\Delta t\mathbb{E}||h(Y_n)||^2. 
\label{ch5meansemi7} \end{aligned}$$ From Assumptions \[ch5assumption2\], we have $$\begin{aligned} ||u_{\lambda}(Y_n)||^2\leq (K+\lambda C)^2||Y_n||^2 \hspace{0.5cm} \text{and} \hspace{0.5cm}\langle Y_n, u_{\lambda}(Y_n)\rangle\leq (-\rho+\lambda C)||Y_n||^2. \end{aligned}$$ So inequality gives $$\begin{aligned} \mathbb{E}||Y_{n+1}||^2&\leq&\mathbb{E}||Y_n||^2+(K+\lambda C)^2\Delta t^2\mathbb{E}||Y_n||^2+2(-\rho+\lambda C)\Delta t\mathbb{E}||Y_n||^2+\theta^2\Delta t\mathbb{E}||Y_n||^2\\ &+&\lambda C^2\Delta t\mathbb{E}||Y_n||^2\\ &=&\left[1-2\rho\Delta t+(K+\lambda C)^2\Delta t^2+2\lambda C\Delta t+\theta^2\Delta t+\lambda C^2\Delta t\right]\mathbb{E}||Y_n||^2. \end{aligned}$$ Iterating the previous inequality leads to $$\begin{aligned} \mathbb{E}||Y_n||^2\leq\left[1-2\rho\Delta t+(K+\lambda C)^2\Delta t^2+2\lambda C\Delta t+\theta^2\Delta t+\lambda C^2\Delta t\right]^n\mathbb{E}||Y_0||^2. \end{aligned}$$ In oder to have stability, we impose : $$\begin{aligned} 1-2\rho\Delta t+(K+\lambda C)^2\Delta t^2+2\lambda C\Delta t+\theta^2\Delta t+\lambda C^2\Delta t<1. \end{aligned}$$ That is $$\begin{aligned} \Delta t<\dfrac{-[-2\rho+\theta^2+\lambda C(2+C)]}{(K+\lambda C)^2} =\dfrac{-\alpha_1}{(K+\lambda C)^2}. \end{aligned}$$ In the following , we analyse the mean-square stability of the tamed Euler. We make the following assumptions which are essentially consequences of Assumptions \[ch5assumption2\]. \[ch5assumption3\] we assume that there exists positive constants $\beta$, $\overline{\beta}$, $\theta$, $\mu$, $K$, $C$, $\rho$, and $a>1$ such that : $$\begin{aligned} \langle x-y, f(x)-f(y)\rangle\leq &-\rho ||x-y||^2-\beta||x-y||^{a+1},\nonumber\\ ||f(x)|| \leq \overline{\beta}||x||^a+K||x||,\nonumber\\ ||g(x)-g(y)|| \leq &\theta||x-y||,\nonumber\\ ||h(x)-h(y)|| \leq &C||x-y||,\nonumber\\ \langle x-y, h(x)-h(y)\rangle \leq &-\mu ||x-y||^2. \label{assumparticular} \end{aligned}$$ Apart from , Assumption \[ch5assumption3\] is a consequence of Assumption \[ch5assumption2\]. Under Assumptions \[ch5assumption3\] and the further hypothesis $\beta-C\overline{\beta}>0$, $\overline{\beta}(1+2C)-2\beta<0$, $K+\theta^2+\lambda C^2-2\mu\lambda+2\lambda CK<0$, the numerical solution is mean-square stable for any stepsize $$\begin{aligned} \Delta t<\dfrac{-[K+\theta^2+\lambda C^2-2\mu\lambda+2\lambda CK]}{2K^2+\lambda^2C^2}\wedge\dfrac{\beta-C\overline{\beta}}{\overline{\beta}^2}. \end{aligned}$$ From equation , we have $$\begin{aligned} ||X_{n+1}||^2&=&||X_n||^2+\dfrac{\Delta t^2||f(X_n)||^2}{\left(1+\Delta t||f(X_n)||\right)^2}+||g(X_n)\Delta W_n||^2+||h(X_n)\Delta N_n||^2\nonumber\\ &+&\left\langle X_n,\dfrac{\Delta tf(X_n)}{1+\Delta t||f(X_n)||}\right\rangle+2\left\langle X_n+\dfrac{\Delta tf(X_n)}{1+\Delta t||f(X_n)||}, g(X_n)\Delta W_n\right\rangle\nonumber\\ &+&2\left\langle X_n+\dfrac{\Delta tf(X_n)}{1+\Delta t||f(X_n)||}, h(X_n)\Delta N_n\right\rangle+2\langle g(X_n)\Delta W_n, h(X_n)\Delta N_n\rangle. \label{ch5meantamed1} \end{aligned}$$ Using assumptions \[ch5assumption3\], it follows that : $$\begin{aligned} 2\left\langle X_n, \dfrac{\Delta tf(X_n)}{1+\Delta t||f(X_n)||}\right\rangle &\leq &\dfrac{-2\Delta t\rho||X_n||^2}{1+\Delta t||f(X_n)||}-\dfrac{2\beta \Delta t||X_n||^{a+1}}{1+\Delta t||f(X_n)||} \leq -\dfrac{2\beta \Delta t||X_n||^{a+1}}{1+\Delta t||f(X_n)||}. 
\label{ch5meantamed2} \end{aligned}$$ $$\begin{aligned} ||g(X_n)\Delta W_n||^2 \leq \theta^2 ||X_n||^2||\Delta W_n||^2\hspace{0.5cm}\text{and}\hspace{0.5cm} ||h(X_n)\Delta N_n||^2 \leq C^2||X_n||^2|\Delta N_n|^2.\label{ch5meantamed2a} \end{aligned}$$ $$\begin{aligned} 2\langle X_n, h(X_n)\Delta N_n\rangle =2\langle X_n, h(X_n)\rangle\Delta N_n \leq -2\mu||X_n||^2|\Delta N_n|. \label{ch5meantamed3} \end{aligned}$$ $$\begin{aligned} 2\left\langle\dfrac{\Delta tf(X_n)}{1+h||f(X_n)||}, h(X_n)\Delta N_n\right\rangle &\leq &\dfrac{2\Delta t||f(X_n)||||h(X_n)|||\Delta N_n|}{1+\Delta t||f(X_n)||}\nonumber\\ &\leq &\dfrac{2\Delta tC\overline{\beta}||X_n||^{a+1}}{1+\Delta t||f(X_n)||}|\Delta N_n|+2CK||X_n||^2|\Delta N_n|. \label{ch5meantamed4} \end{aligned}$$ So from Assumptions \[ch5assumption3\], we have $$\begin{aligned} \label{ch5assmeantamed} \left\{ \begin{array}{lllll} \left\langle X_n, \dfrac{\Delta tf(X_n)}{1+\Delta t||f(X_n)||}\right\rangle\leq -\dfrac{2\beta \Delta t||X_n||^{a+1}}{1+\Delta t||f(X_n)||}\\ ||g(X_n)\Delta W_n||^2 \leq \theta^2 ||X_n||^2||\Delta W_n||^2\\ ||h(X_n)\Delta N_n||^2 \leq C^2||X_n||^2|\Delta N_n|^2\\ 2\langle Y_n, h(X_n)\Delta N_n\rangle \leq -2\mu||X_n||^2|\Delta N_n|\\ 2\left\langle\dfrac{\Delta tf(X_n)}{1+h||f(X_n)||}, h(X_n)\Delta N_n\right\rangle \leq\dfrac{2\Delta tC\overline{\beta}||X_n||^{a+1}}{1+\Delta t||f(X_n)||}|\Delta N_n|+2CK||X_n||^2|\Delta N_n| \end{array} \right. \end{aligned}$$ Let’s define $\Omega_n :=\{w\in\Omega : ||X_n(\omega)||>1\}$. - On $\Omega_n$ we have : $$\begin{aligned} \dfrac{\Delta t^2||f(X_n)||^2}{\left(1+\Delta t||f(X_n)||\right)^2}&\leq& \dfrac{\Delta t||f(X_n)||}{1+\Delta t||f(X_n)||} \leq \dfrac{\Delta t\overline{\beta}||X_n||^a}{1+\Delta t||f(X_n)||}+K\Delta t||X_n||\nonumber\\ &\leq& \dfrac{\Delta t\overline{\beta}||X_n||^{a+1}}{1+\Delta t||f(X_n)||}+K\Delta t||X_n||^2. \label{ch5meantamed5} \end{aligned}$$ Therefore using and , equality becomes $$\begin{aligned} ||X_{n+1}||^2&\leq&||X_n||^2+K\Delta t||X_n||^2+\theta^2||X_n||^2||\Delta W_n||^2+C^2||X_n||^2|\Delta N_n|^2\nonumber\\ &+&2\left\langle X_n+\dfrac{\Delta tf(X_n)}{1+\Delta t||f(X_n)||}, g(X_n)\Delta W_n\right\rangle-2\mu||X_n||^2|\Delta N_n|+2CK|\Delta N_n|\nonumber\\ &+&2\langle g(X_n)\Delta W_n, h(X_n)\Delta N_n\rangle+\dfrac{\left[-2\beta\Delta t+\overline{\beta}\Delta t+2\overline{\beta}C\Delta t\right]||X_n||^{a+1}}{1+\Delta t||f(X_n)||}. \label{ch5meantamed6} \end{aligned}$$ Using the hypothesis $\overline{\beta}(1+2C)-2\beta<0$, becomes $$\begin{aligned} ||X_{n+1}||^2&\leq&||X_n||^2+K\Delta t||X_n||^2+\theta^2||X_n||^2||\Delta W_n||^2+C^2||X_n||^2|\Delta N_n|^2\nonumber\\ &+&2\left\langle X_n+\dfrac{\Delta tf(X_n)}{1+\Delta t||f(X_n)||}, g(X_n)\Delta W_n\right\rangle-2\mu||X_n||^2|\Delta N_n|+2CK|\Delta N_n|\nonumber\\ &+&2\langle g(X_n)\Delta W_n, h(X_n)\Delta N_n\rangle. \label{ch5meantamed7} \end{aligned}$$ - On $\Omega_n^c$, we have : $$\begin{aligned} \dfrac{\Delta t^2||f(X_n)||^2}{\left(1+\Delta t||f(X_n)||\right)^2}&\leq& \dfrac{\Delta t^2||f(X_n)||^2}{1+\Delta t||f(X_n)||} \leq \dfrac{2\Delta t^2\overline{\beta}^2||X_n||^{2a}}{1+\Delta t||f(X_n)||}+2K^2\Delta t^2||X_n||^2\\ &\leq& \dfrac{2\Delta t^2\overline{\beta}^2||X_n||^{a+1}}{1+\Delta t||f(X_n)||}+2K^2\Delta t^2||X_n||^2. 
\label{ch5meantamed8} \end{aligned}$$ Therefore, using , and , becomes $$\begin{aligned} ||X_{n+1}||^2&\leq&||X_n||^2+2K^2\Delta t^2||X_n||^2+\theta^2||X_n||^2||\Delta W_n||^2+C^2||X_n||^2|\Delta N_n|^2\nonumber\\ &+&2\left\langle X_n+\dfrac{\Delta tf(X_n)}{1+\Delta t||f(X_n)||}, g(X_n)\Delta W_n\right\rangle-2\mu||Y_n||^2|\Delta N_n|+2CK|\Delta N_n|\nonumber\\ &+&2\langle g(X_n)\Delta W_n, h(X_n)\Delta N_n\rangle+\dfrac{\left[2C\overline{\beta}\Delta t-2\beta\Delta t+2\overline{\beta}^2\Delta t^2\right]||X_n||^{a+1}}{1+\Delta t||f(X_n)||}. \label{ch5meantamed9} \end{aligned}$$ The hypothesis $\Delta t<\dfrac{\beta-C\overline{\beta}}{\overline{\beta}^2}$ implies that $2C\overline{\beta}\Delta t-2\beta\Delta t+2\overline{\beta}^2\Delta t^2<0$. Therefore, becomes $$\begin{aligned} ||X_{n+1}||^2&\leq&||X_n||^2+2K^2\Delta t^2||X_n||^2+\theta^2||X_n||^2||\Delta W_n||^2+C^2||X_n||^2|\Delta N_n|^2\nonumber\\ &+&2\left\langle X_n+\dfrac{\Delta tf(X_n)}{1+\Delta t||f(X_n)||}, g(X_n)\Delta W_n\right\rangle-2\mu||X_n||^2|\Delta N_n|+2CK|\Delta N_n|\nonumber\\ &+&2\langle g(X_n)\Delta W_n, h(X_n)\Delta N_n\rangle. \label{ch5meantamed10} \end{aligned}$$ From the above discussion on $\Omega_n$ and $\Omega_n^c$, the following inequality holds on $\Omega$ for all $\Delta t<\dfrac{\beta-C\overline{\beta}}{\overline{\beta}^2}$ and $\overline{\beta}(1+2C)-\beta<0$ $$\begin{aligned} ||X_{n+1}||^2&\leq&||X_n||^2+K\Delta t||X_n||^2+2K^2\Delta t^2||X_n||^2+\theta^2||X_n||^2||\Delta W_n||^2+C^2||X_n||^2|\Delta N_n|^2\nonumber\\ &+&2\left\langle X_n+\dfrac{\Delta tf(X_n)}{1+\Delta t||f(X_n)||}, g(X_n)\Delta W_n\right\rangle-2\mu||X_n||^2|\Delta N_n|+2CK|\Delta N_n|\nonumber\\ &+&2\langle g(X_n)\Delta W_n, h(X_n)\Delta N_n\rangle. \label{ch5meantamed11} \end{aligned}$$ Taking Expectation in both sides of , using the relation $\mathbb{E}||\Delta W_n||=0$, $\mathbb{E}||\Delta W_n||^2=\Delta t$, $\mathbb{E}|\Delta N_n|=\lambda\Delta t$ and $\mathbb{E}|\Delta N_n|^2=\lambda^2\Delta t^2+\lambda\Delta t$ leads to : $$\begin{aligned} \mathbb{E}||X_{n+1}||^2&\leq& \mathbb{E}||X_n||^2+2K^2\Delta t^2\mathbb{E}||X_n||^2+\theta^2\Delta t\mathbb{E}||X_n||^2+\lambda^2C^2\Delta t^2\mathbb{E}||X_n||^2+\lambda C^2\Delta t\mathbb{E}||X_n||^2\\ &-&2\mu\lambda\Delta t\mathbb{E}||X_n||^2+\lambda CK\Delta t\mathbb{E}||X_n||^2\\ &=&\left[1+(2K^2+\lambda^2C^2)\Delta t^2+(K+\theta^2+\lambda C^2-2\mu\lambda+2\lambda CK)\Delta t\right]\mathbb{E}||X_n||^2. \end{aligned}$$ Iterating the last inequality leads to $$\begin{aligned} \mathbb{E}||X_n||^2\leq \left[1+(2K^2+\lambda^2C^2)\Delta t^2+(K+\theta^2+\lambda C^2-2\mu\lambda+2\lambda CK)\Delta t\right]^n\mathbb{E}||X_0||^2. \end{aligned}$$ In order to have stability, we impose $$\begin{aligned} 1+(2K^2+\lambda^2C^2)\Delta t^2+(K+\theta^2+\lambda C^2-2\mu\lambda+2\lambda CK)\Delta t<1. \end{aligned}$$ That is $$\begin{aligned} \Delta t<\dfrac{-[K+\theta^2+\lambda C^2-2\mu\lambda+2\lambda CK]}{2K^2+\lambda^2C^2}. \end{aligned}$$ Numerical Experiments --------------------- In this section, we present some numerical experiments that illustrate our theorical strong convergence and stability results. For the strong convergence illustration of and , let’s consider the stochastic differential equation $$\begin{aligned} dX_t=(-4X_t-X^3_t)dt+X_tdW_t+X_tdN_t, \label{ch5numeric1} \end{aligned}$$ with initial $X_0=1$. $N$ is the scalar poisson process with parameter $\lambda=1$. Here $u(x)=-4x$, $v(x)=-x^3$ $g(x)=h(x)=x$. 
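For readers who want to reproduce this experiment, the following Python sketch (not part of the thesis; NumPy, the seed, the step-size and the number of paths are illustrative choices) integrates the test equation above with the tamed Euler scheme, in which the whole drift increment $\Delta t f(X_n)$ is replaced by $\Delta t f(X_n)/(1+\Delta t\|f(X_n)\|)$ as in the analysis above. A semi-tamed variant, assumed here to tame only the non-Lipschitz part $v$ of the drift, is included for comparison.

```python
import numpy as np

# Test SDE: dX = (-4X - X^3) dt + X dW + X dN,  X_0 = 1, Poisson intensity lambda = 1.
u = lambda x: -4.0 * x          # Lipschitz part of the drift
v = lambda x: -x ** 3           # non-Lipschitz part of the drift
f = lambda x: u(x) + v(x)
g = lambda x: x                 # diffusion coefficient
h = lambda x: x                 # jump coefficient
lam, T, X0 = 1.0, 1.0, 1.0

def tamed_euler_path(dt, rng, semi_tamed=False):
    """One path of the (semi-)tamed Euler scheme on [0, T]."""
    X = X0
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))
        dN = rng.poisson(lam * dt)
        if semi_tamed:
            # semi-tamed (assumed form): only the non-Lipschitz part v is tamed
            drift = u(X) * dt + v(X) * dt / (1.0 + dt * abs(v(X)))
        else:
            # tamed: the whole drift increment is tamed, as in the stability analysis
            drift = f(X) * dt / (1.0 + dt * abs(f(X)))
        X = X + drift + g(X) * dW + h(X) * dN
    return X

rng = np.random.default_rng(0)
samples = [tamed_euler_path(dt=2 ** -8, rng=rng) for _ in range(2000)]
print("estimated E|X_T|^2 :", np.mean(np.square(samples)))
```

Repeating the run over several step-sizes produces the sample estimate of $\mathbb{E}\|X_n\|^2$ whose decay or blow-up is what the stability comparison below reports.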
It is easy to check that $u, v, g$ and $h$ satisfy Assumptions \[ch5assumption1\]. For the illustration of the linear mean-square stability , we consider a linear test equation $\left\{\begin{array}{ll} dX(t)=aX(t^-)dt+bX(t^-)dW(t)+cX(t^-)dN(t)\\ X(0)=1 \end{array} \right.$ We consider the particular case $a=-1$, $b=2$, $c=-0.9$ and $\lambda=9$. In this case $l=-0.91$, $\dfrac{-l}{(a+\lambda c)^2}<0.084$ and $\dfrac{2a-l}{a^2+\lambda c^2}<0.074$. $a(1+\lambda c\Delta t)<0$ for $\Delta t<0.124$. We test the stability behaviour of the semi-tamed and of the tamed Euler scheme for different step-sizes, $\Delta t=0.02, 0.05$ and $0.08$. We use $7\times 10^3$ sample paths. For all step-sizes $\Delta t<0.083$ the semi-tamed Euler scheme is stable. But for the step-size $\Delta t=0.08>0.074$, the tamed Euler scheme is unstable while the semi-tamed Euler scheme is stable. So the semi-tamed Euler scheme works better than the tamed Euler scheme. ![Error of the tamed Euler scheme](erortame.png) ![Error of the semi-tamed Euler scheme](erorsemi.png) ![Stability tamed Euler](statame1.png) ![Stability semi-tamed Euler](stasemitame1.png) ![Stability tamed Euler](statame2.png) ![Stability semi-tamed Euler](stasemitame2.png) ![Stability tamed Euler](statame3.png) ![Stability semi-tamed Euler](stasemitame3.png) Conclusion {#conclusion .unnumbered} ========== In this thesis, we provided an overview of probability theory which allowed us to define some basic concepts of stochastic processes. Under the global Lipschitz condition, we proved the existence and uniqueness of the solution of stochastic differential equations (SDEs) with jumps. In general, it is difficult to find the exact solution of most SDEs; one way to approximate the exact solution is numerical resolution. We provided in this dissertation some numerical techniques to solve SDEs with jumps. More precisely, we investigated the strong convergence of the compensated stochastic theta method (CSTM) under the global Lipschitz condition. We investigated the stability of both the CSTM and the stochastic theta method (STM) and proved that, for the linear test equation, when $\dfrac{1}{2}\leq\theta\leq 1$ the CSTM is A-stable while the STM is not. So the CSTM works better than the STM. Under the non-global Lipschitz condition the explicit Euler method fails to converge strongly, while the implicit Euler method converges strongly but requires much more computational effort. We extended the tamed Euler scheme by introducing the compensated tamed Euler scheme for SDEs with jumps, which converges strongly with the standard order $0.5$ to the exact solution. We also extended the semi-tamed Euler scheme proposed in [@Xia2] to SDEs with jumps under the non-global Lipschitz condition and proved its strong convergence. The latter enabled us to define the tamed Euler scheme and to prove its strong convergence, which had not yet been done in the literature. In this thesis, we also analysed the stability behaviour of both the tamed and semi-tamed Euler schemes in the linear and the nonlinear case. We proved that these two numerical schemes reproduce the exponential mean-square stability property of the exact solution. All the numerical schemes presented in this work have rate of convergence $0.5$. The tamed Milstein scheme was introduced in [@Xia3], where the authors proved the strong convergence of order $1$ of this scheme for SDEs without jumps. The case with jumps is not yet well developed in the literature. The weak convergence under the non-global Lipschitz condition has not yet been investigated.
In the future, We would like to focus on the opened topics mentioned above. Appendix {#appendix .unnumbered} ======== The goal of this section is to present some Scilab codes for simulations. A.1 Code for simulation of the mean square error {#a.1-code-for-simulation-of-the-mean-square-error .unnumbered} ------------------------------------------------ lambda = 1; Xzero = 1; T = 1; N = 2^(14); dt = T/N; a=1, b=1, c=0.5, theta=1, A=a+lambda*c M = 5000; Xerr = zeros(M,5); for s = 1:M, dW = sqrt(dt)*grand(1,N,'nor',0,1) W = cumsum(dW); dN=grand(1,N,'poi',dt*lambda)-dt*lambda*ones(1,N); W(1)=0 dP=dN+dt*lambda*ones(1,N); P=cumsum(dP,'c'); P(1)=0; X=linspace(0,1,N); Xtrue=exp((a-1/2*b^2)*X+b*W+log(1+c)*P); for p = 1:5 R = 2^(p-1); Dt = R*dt; L = N/R; Xtemp = Xzero; for j = 1:L Winc = sum(dW(R*(j-1)+1:R*j)); Ninc=sum(dN(R*(j-1)+1:R*j)); Xtemp=(Xtemp+(1-theta)*A*Xtemp*Dt+b*Xtemp*Winc+c*Xtemp*Ninc)/(1-A*theta*Dt); end Xerr(s,p) = abs(Xtemp - Xtrue(N))^2; end end Dtvals = dt*(2.^([0:4])); T=mean(Xerr,'r')^(1/2); disp(T); plot2d('ll',Dtvals,T,[5]) plot2d('ll',Dtvals,Dtvals^(1/2),[3]) legends([ 'mean square error for CSTM for theta=1','Reference line' ],[5,3,2],2) xtitle("Mean square stability for CSTM"); xlabel("Time") ylabel("E|Y_L-X(T)|^0.5") A.2 Code for the simulation of the mean square stability {#a.2-code-for-the-simulation-of-the-mean-square-stability .unnumbered} -------------------------------------------------------- T=2500, M=5000, Xzero=1, a=-7,b=1, c=1, lambda=4, Dt=25, A=a+lambda*c // drift coefficient for compensated equation N=T/Dt; theta=[0.4995, 0.50,0.51];// different value for theta Xms=zeros(3,N);//initialisation of the mean for i=1:3 Xtemp=Xzero*ones(M,1);// initialization of the solution for j=1:N Winc=grand(M,1,'nor',0,sqrt(Dt)); // generation of random //variables following the normal distribution Ninc=grand(M,1,'poi',Dt*lambda)-Dt*lambda*ones(M,1);// generation of // compensated poisson process B=1-theta(i)*A*Dt Xtemp=(Xtemp+(1-theta(i))*A*Xtemp*Dt+b*Xtemp.*Winc+c*Xtemp.*Ninc)/B; Xms(i,j)=mean(Xtemp^2); end end X=linspace(0,T,N); plot(X,Xms(1,:),'b',X,Xms(2,:),'r',X,Xms(3,:),'g') legends(['Theta=0.499' 'theta=0.50' 'Theta=0.51' ], [2,5,3],2) xtitle("Mean-square stability for CSTM"); xlabel("tn") ylabel("E|Yn|^2") Acknowledgements {#acknowledgements .unnumbered} ================ This dissertation would not have been possible without the guidance and the help of several individuals who in one way or another contributed and extended their valuable assistance in the preparation and completion of this study. I would like to take this opportunity to express my gratitude to my supervisor Dr. Antoine Tambue to give me the chance to work with him; for his availability, his time to guide my researches, for sincerity, kindness, patience, understanding and encouragement that I will never forget. I am very happy for the way you introduced me in this nice topic which gave us two preprints papers. I would also like to thank Dr. Mousthapha Sene, my tutor for the time he took to read this work and give a valuable suggestions. Special thanks to Neil Turok for his wonderful initiative. Thanks to all staff of AIMS-Senegal, tutors and my classmates. I would like to express my gratitude to all lecturers who taught me at AIMS-Senegal, I would like to mention here Pr. Dr. Peter Stollmann. I would like to extend my thanks to the Ph.D. students at the chair of Mathematics of AIMS-senegal for their moral support during my stay in Senegal. 
Many thanks to all the lecturers in my country (Cameroon) who have been training me since my undergraduate studies. Big thanks to my family, to my friends and to everyone who supported me during my studies.

Avner Friedman, Stochastic Differential Equations and Applications, Volumes 1 and 2.

Leslie Lamport, LaTeX: A Document Preparation System.

Bernt Oksendal and Agnès Sulem, Applied Stochastic Control of Jump Diffusions, Third Edition.

Bernt Oksendal, Stochastic Differential Equations: An Introduction with Applications, Fifth Edition, Corrected printing.

Jean Jacod and Philip Protter, Essentiel en Théorie des Probabilités.

Desmond J. Higham, Peter E. Kloeden, Convergence and stability of implicit methods for jump-diffusion systems.

D. J. Higham, P. E. Kloeden, Numerical methods for nonlinear stochastic differential equations with jumps.

Xiaojie Wang, Siqing Gan, Compensated stochastic theta methods for stochastic differential equations with jumps.

Martin Hutzenthaler, Arnulf Jentzen and Peter E. Kloeden, Strong convergence of an explicit numerical method for SDEs with nonglobally Lipschitz continuous coefficients.

Martin Hutzenthaler, Arnulf Jentzen, Convergence of the stochastic Euler scheme for locally Lipschitz coefficients.

Martin Hutzenthaler, Arnulf Jentzen and Peter E. Kloeden, Divergence of the multilevel Monte Carlo Euler method for nonlinear stochastic differential equations.

Xiaofeng Zong, Fuke Wu, Chengming Huang, Convergence and stability of the semi-tamed Euler scheme for stochastic differential equations with non-Lipschitz continuous coefficients.

Xiaojie Wang, Siqing Gan, The tamed Milstein method for commutative stochastic differential equations with non-globally Lipschitz continuous coefficients.

Giuseppe Da Prato, Jerzy Zabczyk, Stochastic Equations in Infinite Dimensions.

Fima C. Klebaner, Introduction to Stochastic Calculus with Applications, Second Edition.

Philip E. Protter, Stochastic Integration and Differential Equations, Second Edition.

Sever Silvestru Dragomir, Some Gronwall Type Inequalities and Applications.

Desmond J. Higham, An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations.

Carlo Marinelli, Michael Röckner, On the maximal inequalities of Burkholder, Davis and Gundy (2013).

Konstantinos Dareiotis, Chaman Kumar and Sotirios Sabanis, On Tamed Euler Approximations of SDEs with Random Coefficients and Jumps.

A. Tambue and J. D. Mukam, Strong convergence of the tamed and the semi-tamed Euler schemes for stochastic differential equations with jumps under non-global Lipschitz condition.

A. Tambue and J. D. Mukam, Stability of the semi-tamed and tamed schemes for stochastic differential equations with jumps under non-global Lipschitz condition.
--- abstract: 'Given a graph $G$, an incidence matrix $\mathcal{N}(G)$ is defined on the set of distinct isomorphism types of induced subgraphs of $G$. It is proved that Ulam’s conjecture is true if and only if the $\mathcal{N}$-matrix is a complete graph invariant. Several invariants of a graph are then shown to be reconstructible from its $\mathcal{N}$-matrix. The invariants include the characteristic polynomial, the rank polynomial, the number of spanning trees and the number of hamiltonian cycles in a graph. These results are stronger than the original results of Tutte in the sense that actual subgraphs are not used. It is also proved that the characteristic polynomial of a graph with minimum degree 1 can be computed from the characteristic polynomials of all its induced proper subgraphs. The ideas in Kocay’s lemma play a crucial role in most proofs. Kocay’s lemma is used to prove Whitney’s subgraph expansion theorem in a simple manner. The reconstructibility of the characteristic polynomial is then demonstrated as a direct consequence of Whitney’s theorem as formulated here.' author: - | Bhalchandra D. Thatte\ Allan Wilson Centre for Molecular Ecology and Evolution,\ and Institute of Fundamental Sciences,\ Massey University, Palmerston North, New Zealand\ `b.thatte@massey.ac.nz` date: | Submitted: June 29, 2004\ Mathematics Subject Classifications: 05C50, 05C60 title: 'Kocay’s lemma, Whitney’s theorem, and some polynomial invariant reconstruction problems' --- Introduction {#sec-intro} ============ Suppose we are given the collection of induced subgraphs of a graph. There is a natural partial order on this collection defined by the induced subgraph relationship between members of the collection. An incidence matrix may be constructed to represent this relationship along with the multiplicities with which members of the collection appear as induced subgraphs of other members. Given such a matrix, is it possible to construct the graph or compute some of its invariants? Such a question is motivated by the treatment of chromatic polynomials in Biggs [@biggs1993]. Biggs demonstrates that it is possible to compute the chromatic polynomial of a graph from its incidence matrix. The idea of Kocay’s lemma in graph reconstruction theory is extremely useful in studying the question for other invariants. In this paper, we present several results on a relationship between Ulam’s reconstruction conjecture and the incidence matrix. Extending the reconstruction results of Tutte and Kocay, we show that many graph invariants can be computed from the incidence matrix. We then consider the problem of computing the characteristic polynomial of a graph from the characteristic polynomials of all induced proper subgraphs. Finally, we present a new short proof of Whitney’s subgraph expansion theorem, and demonstrate the reconstructibility of the characteristic polynomial of a graph using Whitney’s theorem. Notation -------- We consider only finite simple graphs in this paper. Let $G$ be a graph with vertex set $VG$ and edge set $EG$. The number of vertices of $G$ is denoted by $v(G)$ and the number of edges is denoted by $e(G)$. When $VG=\emptyset $, we denote $G$ by $\Phi$, and call the graph a [*null graph*]{}. When $EG=\emptyset $, we call the graph an [*empty graph*]{}. When $F$ is a subgraph of $G$, we write $F\subseteq G$, and when $F$ is a proper subgraph of $G$, we write $F\subsetneq G$. 
The subgraph of $G$ induced by $S \subseteq VG$ is the subgraph whose vertex set is $S$ and whose edge set contains all the edges having both end vertices in $S$. It is denoted by $G_S$. The subgraph of $G$ induced by $VG - S$ is denoted by $G-S$, or simply $G-u$ if $S=\{u\}$. A subgraph of $G$ with vertex set $V\subseteq VG$ and edge set $E\subseteq EG$ is denoted by $G_{(V,E)}$, or just $G_E$ if $V$ consists of the end vertices of edges in $E$. The same notation is used when $E = (e_1,e_2, \ldots ,e_k)$ is a tuple of edges, some of which may be identical. Isomorphism of two graphs $G$ and $H$ is denoted by $G\cong H$. For $i > 0$, a graph isomorphic to a cycle of length $i$ is denoted by $C_i$, and the number of cycles of length $i$ in $G$ is denoted by $\psi_i(G)$, where, as a convention, $C_i \cong K_i$ for $i \in \{1,2\}$. The number of hamiltonian cycles is denoted by a special symbol $ham(G)$ instead of $\psi_{v(G)}(G)$. While counting the number of subgraphs of a graph $G$ that are isomorphic to a graph $F$, it is important to make a distinction between induced subgraphs and edge subgraphs. The number of subgraphs of $G$ that are isomorphic to $F$ is denoted by $\left[\begin{array}{c}G\\F\end{array}\right]$, and the number of induced subgraphs of $G$ that are isomorphic to $F$ is denoted by $\displaystyle\binom{G}{F}$. The two numbers are related by $$\left[\begin{array}{c}G\\F\end{array}\right] = \sum_{H|VH=VF}\displaystyle\binom{G}{H}\left[\begin{array}{c}H\\F\end{array}\right]$$ where the summation is over distinct isomorphism types of graphs $H$. The characteristic polynomial of $G$ is denoted by $P(G;\lambda) = \sum_{i = 0}^{v(G)} c_i(G) \lambda^{v(G)-i}$. The collection $\mathcal{PD}(G) = \{P(G-S;\lambda)\mid S\subsetneq VG\}$ is called the [*complete polynomial deck*]{} of $G$. Note that a polynomial may appear in the collection more than once. The [*rank*]{} of a graph $G$, which has $comp(G)$ components, is defined by $v(G) - comp(G)$, and its [*co-rank*]{} is defined by $e(G)-v(G) + comp(G)$. The rank polynomial of $G$ is defined by $R(G;x,y) = \sum \rho_{rs}x^ry^s$, where $\rho_{rs}$ is the number of subgraphs of $G$ with rank $r$ and co-rank $s$. The set of consecutive integers from $a$ to $b$ is denoted by $[a,b]$; in particular, $N_k = [1,k]$. Ulam's Conjecture ----------------- The [*vertex deck*]{} of a graph $G$ is the collection $\mathcal{VD}(G) = \{G-v\mid v \in VG\}$, where the subgraphs in the collection are 'unlabelled' (or isomorphism types). Note that the vertex deck is not exactly a set: an isomorphism type may appear more than once in the vertex deck. A graph $G$ is said to be [*reconstructible*]{} if its isomorphism class is determined by $\mathcal{VD}(G)$. Ulam [@ulam1960] proposed the following conjecture. Graphs on more than 2 vertices are reconstructible. A property or an invariant of a graph $G$ is said to be reconstructible if it can be calculated from $\mathcal{VD}(G)$. For example, Kelly's Lemma allows us to count the number of vertex-proper subgraphs of $G$ of any given type. \[lem-kelly\][**(Kelly's Lemma [@kelly1957])**]{} If $F$ is a graph such that $v(F) < v(G)$ then $$\label{eq-kelly} \left[\begin{array}{c}G\\F\end{array}\right] = \frac{1}{v(G)-v(F)}\sum_{u\in VG}\left[\begin{array}{c}G-u\\F\end{array}\right]$$ therefore, $\left[\begin{array}{c}G\\F\end{array}\right]$ is reconstructible from $\mathcal{VD}(G)$.
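As a quick sanity check, Kelly's Lemma can be verified by brute force on small graphs. The sketch below is only an illustration (the helper functions are ours, not from the paper); it uses the induced-subgraph form of the identity, with $\displaystyle\binom{G}{F}$ and $\displaystyle\binom{G-u}{F}$ in place of the subgraph counts, which, as remarked next, satisfies the same equation.

```python
from itertools import combinations, permutations

def induced_edges(E, S):
    """Edges of the subgraph induced by the vertex set S."""
    S = set(S)
    return {e for e in E if e <= S}

def isomorphic(V1, E1, V2, E2):
    """Brute-force isomorphism test, adequate for very small graphs."""
    if len(V1) != len(V2) or len(E1) != len(E2):
        return False
    V1 = list(V1)
    for perm in permutations(V2):
        phi = dict(zip(V1, perm))
        if {frozenset(phi[x] for x in e) for e in E1} == set(E2):
            return True
    return False

def count_induced(V, E, FV, FE):
    """Number of induced subgraphs of (V, E) isomorphic to (FV, FE)."""
    return sum(isomorphic(S, induced_edges(E, S), FV, FE)
               for S in combinations(V, len(FV)))

# G = C_4 (the 4-cycle), F = the path on three vertices.
V = [0, 1, 2, 3]
E = [frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]]
FV = [0, 1, 2]
FE = [frozenset((0, 1)), frozenset((1, 2))]

lhs = count_induced(V, E, FV, FE)
deck_sum = sum(count_induced([w for w in V if w != u],
                             [e for e in E if u not in e], FV, FE)
               for u in V)
rhs = deck_sum / (len(V) - len(FV))
print(lhs, rhs)   # Kelly's Lemma: both sides equal 4
```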
Also, in Equation (\[eq-kelly\]), $\left[\begin{array}{c}G\\F\end{array}\right]$ and $\left[\begin{array}{c}G-u\\F\end{array}\right]$ may be replaced by $\displaystyle\binom{G}{F}$ and $\displaystyle\binom{G-u}{F}$, respectively. Tutte [@tutte1977], [@tutte1984] proved the reconstructibility of the characteristic polynomial and the chromatic polynomial. Tutte's results were simplified by an elegant counting argument by Kocay [@kocay1981]. This argument is useful to count certain subgraphs that span $VG$. Let $S = \{F_1, F_2, \ldots ,F_k\}$ be a family of graphs. Let $c(S,H)$ be the number of tuples $(X_1, X_2, \ldots ,X_k)$ of subgraphs of $H$ such that $X_i\cong F_i\, \forall \, i$, and $\cup_{i = 1}^{k}X_i = H$. We call it the number of [*$S$-covers*]{} of $H$. \[lem-kocay\] [**(Kocay's Lemma [@kocay1981])**]{} $$\prod_{i = 1}^{k} \left[\begin{array}{c}G\\F_i\end{array}\right] = \sum_{X} c(S,X)\left[\begin{array}{c}G\\X\end{array}\right]$$ where the summation is over all isomorphism types of subgraphs of $G$. Also, if $v(F_i)\, <\, v(G)\,\forall\, i$ then $\sum_{X} c(S,X)\left[\begin{array}{c}G\\X\end{array}\right]$ over all isomorphism types $X$ of spanning subgraphs of $G$ can be reconstructed from the vertex deck of $G$. We refer to [@bondy1991] for a survey of reconstruction problems. The chromatic polynomial and the $\mathcal{N}$-matrix ----------------------------------------------------- Stronger reconstruction results on the chromatic polynomial were implicit in Whitney's work [@whitney1932], although Ulam's conjecture had not been posed at the time. Motivation for some of the work presented in this paper comes from Whitney's work on the chromatic polynomials. The discussion of the chromatic polynomial presented here is based on [@biggs1993]. A graph $G$ is called [*quasi-separable*]{} if there exists $K\subsetneq VG$ such that $G_K$ is a complete graph and $G-K$ is disconnected. If $|K| \leq 1$ then $G$ is said to be separable. (Theorem 12.5 in [@biggs1993]) \[thm-biggs\] The chromatic polynomial of a graph is determined by its proper induced subgraphs that are not quasi-separable. The procedure of computing the chromatic polynomial may be outlined as follows. First a matrix $\mathcal{N}(G) = (N_{ij})$ is constructed. The rows and the columns of $\mathcal{N}(G)$ are indexed by induced subgraphs $\Lambda_1, \Lambda_2, \ldots , \Lambda_I = G$, which are the distinct isomorphism types of non-quasi-separable induced subgraphs of $G$. The list includes $K_1 = \Lambda_1$ and $K_2=\Lambda_2$. The indexing graphs are ordered in such a way that $v(\Lambda_i)$ are in non-decreasing order. The entry $N_{ij}$ is the number of induced subgraphs of $\Lambda_i$ that are isomorphic to $\Lambda_j$. It is a lower triangular matrix with diagonal entries 1. The computation of the chromatic polynomial is performed by a recursive procedure beginning with the first row of the $\mathcal{N}$-matrix, computing at each step certain polynomials in terms of the corresponding polynomials for non-quasi-separable induced subgraphs on fewer vertices. A few observations about the procedure are useful to motivate the work in this paper. The graphs $C_4$ and $K_4$ are the only non-quasi-separable graphs on $4$ vertices. Also, for any $i$, $N_{i1}=v(\Lambda_i)$, and $N_{i2}= e(\Lambda_i)$. Therefore, graphs on 4 or fewer vertices that index the first few rows of the $\mathcal{N}$-matrix can be inferred from the matrix entries.
Therefore, we conclude that the computation of the chromatic polynomial can be performed on the matrix entries alone, even if the induced subgraphs indexing the rows and the columns of $\mathcal{N}(G)$ are unspecified. Therefore, we will think of the $\mathcal{N}$-matrix as [*unlabelled* ]{}, that is, we will assume that the induced subgraphs indexing the rows and the columns are not given. A natural question is what other invariants can be computed from the (unlabelled) $\mathcal{N}$-matrix? Obviously, the characteristic polynomial $P(G;\lambda)$ cannot always be computed from $\mathcal{N}(G)$. For example, the only non-quasi-separable induced subgraphs of any tree $T$ are $K_1$ and $K_2$, so $P(T;\lambda)$ cannot be computed from $\mathcal{N}(T)$. Therefore, we omit the restriction of non-quasi-separability on the induced subgraphs used in the construction of the incidence matrix. We then investigate which invariants of a graph are determined by its $\mathcal{N}$-matrix. The Sections  \[sec-n-rec1\] and  \[sec-nmatrix\] are devoted to the study of reconstruction from the $\mathcal{N}$-matrix. In Section \[sec-n-rec1\], we formally define the $\mathcal{N}$-matrix, and the related concept of the edge poset of induced subgraphs of a graph. We then prove several basic results on the relationship between the $\mathcal{N}$-matrix, the edge labelled poset and reconstruction. In particular, we show that Ulam’s conjecture is true if and only if the $\mathcal{N}$-matrix itself is a complete graph invariant. We then prove that Ulam’s conjecture is true if and only if the edge labelled poset has no non-trivial automorphisms. We also prove the $\mathcal{N}$-matrix reconstructibility of trees and forests. In Section \[sec-nmatrix\] we compute several invariants of a graph from its $\mathcal{N}$-matrix. We prove that the characteristic polynomial $P(G;\lambda)$ of a graph $G$, its rank polynomial $R(G;x,y)$, the number of spanning trees in $G$, the number of Hamiltonian cycles in $G$ etc., can be computed from $\mathcal{N}(G)$. In the standard proof of the reconstructibility of these invariants, one first counts the disconnected subgraphs of each type, (see [@bondy1991]). In view of Theorem \[thm-equiv-disc\], the proofs in Section \[sec-nmatrix\] are more involved. Theorem \[thm-equiv-disc\] implies that if there are counter examples to Ulam’s conjecture then there are many more counter examples to reconstruction from the $\mathcal{N}$-matrix. Therefore, we hope that the study of $\mathcal{N}$-matrix reconstructibility will highlight new difficulties. Similar generalisations of the reconstruction problem were also suggested by Tutte, (notes on pp. 123-124 in [@tutte1984]). Reconstruction of the characteristic polynomial ----------------------------------------------- The proof of the reconstructibility of the characteristic polynomial of a graph from its $\mathcal{N}$-matrix is also of independent technical interest, since other authors have considered the question of computing $P(G; \lambda)$ given the [*polynomial deck*]{} $\{P(G-u;\lambda); u \in VG\}$. This question was originally proposed by Gutman and Cvetkovic [@gut-cve1975], and has been studied by others, for example, [@schwenk1979] & [@sciriha2002]. This question remains open. So we consider a weaker question in Section \[sec-poly\]: the question of computing the characteristic polynomial of a graph from its complete polynomial deck. 
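To make the input of this question concrete, the following sketch (an illustration only; NumPy and the choice of $C_4$ are ours, not the paper's) computes $P(G;\lambda)$ and the complete polynomial deck $\mathcal{PD}(G) = \{P(G-S;\lambda)\mid S\subsetneq VG\}$ of a small graph directly from adjacency matrices.

```python
import numpy as np
from itertools import combinations

def char_poly(A):
    """Coefficients of P(G; lambda) = det(lambda*I - A), highest power first."""
    return np.poly(np.linalg.eigvals(A)).real.round(6)

# G = the 4-cycle C_4, given by its adjacency matrix.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n = A.shape[0]
print("P(G; lambda):", char_poly(A))      # [1, 0, -4, 0, 0], i.e. lambda^4 - 4 lambda^2

# Complete polynomial deck: K is the set of kept vertices, so S = VG \ K ranges over
# all proper subsets of VG (including the empty set, which contributes P(G) itself).
deck = []
for k in range(1, n + 1):
    for K in combinations(range(n), k):
        deck.append(tuple(char_poly(A[np.ix_(K, K)])))

for p in sorted(set(deck)):
    print(deck.count(p), "x", p)
```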
Here we present basic facts about the characteristic polynomial, and outline the idea of Section \[sec-poly\]. A graph is called [*elementary*]{} if each of its components is $1$-regular or $2$-regular. In other words, each component of an elementary graph is a single edge ($K_2$) or a cycle ($C_r; r >2$). Let $L_i$ be the collection of all unlabelled $i$-vertex elementary graphs. So, $L_0 = \{\Phi \}$, $L_1 = \emptyset$, $L_2 = \{K_2\}$, and so on. \[lem-sachs\] (Proposition 7.3 in [@biggs1993]) Coefficients of the characteristic polynomial of a graph $G$ are given by $$(-1)^i c_i(G) = \sum_{F \in L_i, F\subseteq G} (-1)^{r(F)} 2^{s(F)}$$ where $r(F)$ and $s(F)$ are the rank and the co-rank of $F$, respectively. Thus, $c_0(G) = 1$, $c_1(G) = 0$, and $c_2(G) = e(G)$. (Note 2d in [@biggs1993]) \[lem-derivative\] Let $P'(G;\lambda)$ denote the first derivative of $P(G;\lambda)$ with respect to $\lambda$. Then, $$\label{eq-derivative} P'(G;\lambda) = \sum_{u\in VG}P(G-u;\lambda)$$ From the above two lemmas, it is clear that the problem of reconstructing a characteristic polynomial (either from the vertex deck or the complete polynomial deck) reduces to computing the coefficient $c_{v(G)}(G)$, which is the constant term in $P(G;\lambda)$. This in turn is a problem of counting the elementary spanning subgraphs of $G$ - a problem that can be solved using Kocay’s Lemma in case of reconstruction from the vertex deck. Motivated by Kocay’s Lemma, we ask the following question. Suppose the coefficients $c_{i_1}(G), c_{i_1}(G), \ldots , c_{i_k}(G)$ are known, and $i_1 + i_2 + \ldots + i_k \geq v(G)$. If the coefficients $c_{i_j}; 1 \leq j \leq k$ are multiplied, can we get some information about the spanning subgraphs of $G$? This is especially tempting if $i_1+i_2 + \ldots + i_k = v(G)$, since the product is expected to have some relationship with the disconnected spanning elementary subgraphs of $G$. This idea is explored in Section \[sec-poly\]. In Section \[sec-whitney\], we present a very simple new proof of Whitney’s subgraph expansion theorem, again based on Kocay’s lemma. We then present a more direct argument to compute the characteristic polynomial of a graph from its vertex deck, based on our formulation of Whitney’s theorem. Ulam’s conjecture and the $\mathcal{N}$-matrix {#sec-n-rec1} ============================================== Let $\Lambda(G) = \{\Lambda_i; \,i \in [1,I]\}$ be the set of distinct isomorphism types of nonempty induced subgraphs of $G$. We call this the $\Lambda$-deck of $G$. Let $\mathcal{N}(G)=(N_{ij})$ be an $I$ x $I$ incidence matrix where $N_{ij}$ is the number of induced subgraphs of $\Lambda_i$ that are isomorphic to $\Lambda_j$. Thus $N_{ii}$ is 1 for all $i \in [1,I]$. We call an invariant of a graph [*$\mathcal{N}$-matrix reconstructible*]{} if it can be computed from the (unlabelled) $\mathcal{N}$-matrix of the graph. As an example, the ladder graph $L_3$ and its collection of distinct induced subgraphs with nonempty edge sets are shown in Figure 1. Below each graph (except $L_3$) is shown its multiplicity as an induced subgraph in $L_3$ and its name. 
\[fig-l3\] Figure 1: the ladder graph $L_3=\Lambda_9$ and its distinct induced subgraphs with nonempty edge sets, $\Lambda_1,\ldots,\Lambda_8$; below each subgraph its multiplicity as an induced subgraph of $L_3$ is shown: $9\,\Lambda_1$, $6\,\Lambda_2$, $12\,\Lambda_3$, $2\,\Lambda_4$, $6\,\Lambda_5$, $6\,\Lambda_6$, $3\,\Lambda_7$ and $6\,\Lambda_8$. The rows and the columns of the $\mathcal{N}$-matrix of $L_3$ are both indexed by $\Lambda_1$ to $\Lambda_9$. The $\mathcal{N}$-matrix of $L_3$ is shown below. $$\mathcal{N}(L_3) = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 3 & 2 & 2 & 0 & 1 & 0 & 0 & 0 & 0 \\ 4 & 1 & 2 & 1 & 0 & 1 & 0 & 0 & 0 \\ 4 & 0 & 4 & 0 & 0 & 0 & 1 & 0 & 0 \\ 6 & 3 & 6 & 1 & 2 & 2 & 1 & 1 & 0 \\ 9 & 6 & 12 & 2 & 6 & 6 & 3 & 6 & 1 \\ \end{pmatrix}$$ Let us associate an [*edge labelled poset*]{} with the graph $G$. Define a partial order $\preceq $ on the set $\Lambda(G)$ as follows: $\Lambda_j \preceq \Lambda_k$ if and only if $\Lambda_j$ is an induced subgraph of $\Lambda_k$. This poset is denoted by $(\Lambda(G), \preceq)$. We make the poset $(\Lambda(G), \preceq)$ an edge labelled poset by assigning a positive integer to every edge of its Hasse diagram, such that if $\Lambda_k$ covers $\Lambda_j$ then the edge label on $\Lambda_j$-$\Lambda_k$ is $\displaystyle\binom{\Lambda_k}{\Lambda_j}$. We say that two edge labelled posets are isomorphic if they are isomorphic as posets, and there is an isomorphism between them that preserves the edge labels. This naturally leads to the notion of the [*abstract edge labelled poset*]{} of $G$: it is the isomorphism class of the edge labelled poset of $G$. Note that the notion of the abstract edge labelled poset of a graph is not to be confused with the isomorphism class of the Hasse diagram as a graph. An isomorphism from an edge labelled poset to itself is called an automorphism of the edge labelled poset. We denote the abstract edge labelled poset of $G$ by $\mathcal{ELP}(G)$. The Hasse diagram of the abstract edge labelled poset is simply the Hasse diagram of the edge labelled poset of $G$ with labels $\Lambda_i$ removed. The Hasse diagram of $\mathcal{ELP}(L_3)$ is shown in Figure 2.
\[fig-hasse\] Figure 2: the Hasse diagram of $\mathcal{ELP}(L_3)$; each edge of the diagram carries the label $\displaystyle\binom{\Lambda_k}{\Lambda_j}$ of the corresponding covering relation. \[lem-rank\] There is a rank function $\rho$ on $\mathcal{ELP}(G)$ such that $\rho(\Lambda_i) = \rho(\Lambda_j) +1$ whenever $\Lambda_i$ covers $\Lambda_j$. Each $\Lambda_i$ in $\Lambda(G)$ is nonempty. Therefore, for each $\Lambda_i$ in $\Lambda(G)$ and for each $k$ such that $2 \leq k \leq v(\Lambda_i)$ there is at least one nonempty induced subgraph $\Lambda_j$ of $\Lambda_i$ such that $v(\Lambda_j) = k$. Moreover, empty induced subgraphs do not belong to $\Lambda(G)$. Therefore, $\rho(\Lambda_i) = v(\Lambda_i)$ meets the requirements of a rank function. Stanley [@stanley1997] defines a rank function such that $\rho(x) = 0$ for a minimal element $x$. But we have deviated from that convention since $\rho(\Lambda_i) = v(\Lambda_i)$ for each $\Lambda_i \in \Lambda(G)$ is more convenient here. We now demonstrate that $\mathcal{N}(G)$ and $\mathcal{ELP}(G)$ are really equivalent, that is, they can be constructed from each other. \[lem-kelly1\] Let $F$ and $H$ be two graphs, and let $q$ be an integer such that $v(F) \leq q \leq v(H)$. Then $$\label{eq-kelly1} \sum_{X| v(X) = q}\displaystyle\binom{H}{X}\displaystyle\binom{X}{F} = \displaystyle\binom{v(H)-v(F)}{q-v(F)}\displaystyle\binom{H}{F}$$ where the summation is over distinct isomorphism types $X$. This is similar to Kelly's Lemma \[lem-kelly\]. Each induced subgraph of $H$ that is isomorphic to $F$ is also an induced subgraph of $\displaystyle\binom{v(H)-v(F)}{q-v(F)}$ induced subgraphs of $H$ that have $q$ vertices. \[lem-elp\_eq\_n\] The structures $\mathcal{N}(G)$ and $\mathcal{ELP}(G)$ can be constructed from each other. We first show how $\mathcal{N}(G)$ is constructed from $\mathcal{ELP}(G)$. The matrix $\mathcal{N}(G)$ is an $I\times I$ matrix where $I$ is the number of points in $\mathcal{ELP}(G)$. Without loss of generality, suppose that the points of $\mathcal{ELP}(G)$ are labelled from $\Lambda_1$ to $\Lambda_I$ such that if $\rho(\Lambda_i) < \rho(\Lambda_j)$ then $i < j$, where $\rho$ is the rank function defined in Lemma \[lem-rank\]. Correspondingly, the rows and the columns of $\mathcal{N}(G)$ are indexed from $\Lambda_1$ to $\Lambda_I$. The edge labels in $\mathcal{ELP}(G)$ immediately give some of the entries in $\mathcal{N}(G)$: if $\Lambda_i$ covers $\Lambda_j$ then $N_{ij}$ is the label on the edge joining $\Lambda_i$ and $\Lambda_j$. The diagonal entries are 1. Except $N_{11}$, all the other entries in the first row are 0. We construct the remaining entries of $\mathcal{N}(G)$ by induction on the rank. The base case is rank 2. It corresponds to the first row, and is already filled. Let $f(r)$ denote the number of points of $\mathcal{ELP}(G)$ that have rank at most $r$. Suppose now that the first $f(r)$ rows of $\mathcal{N}(G)$ are filled for some $r \geq 2$. Let $\Lambda_i$ be a graph of rank $r+1$, and let $\Lambda_j$ be a graph of rank at most $r$.
Then $N_{ij}$ is computed by applying Lemma \[lem-kelly1\] with $q = r$. $$\sum_{\Lambda_k| \rho(\Lambda_k) = r} \displaystyle\binom{\Lambda_i}{\Lambda_k} \displaystyle\binom{\Lambda_k}{\Lambda_j} = \displaystyle\binom{v(\Lambda_i)-v(\Lambda_j)}{r-v(\Lambda_j)} \displaystyle\binom{\Lambda_i}{\Lambda_j} = (r+1-v(\Lambda_j))N_{ij}$$ On the LHS, $\displaystyle\binom{\Lambda_k}{\Lambda_j}$ are known by induction hypothesis. Since $\Lambda_k$ are the graphs covered by $\Lambda_i$, $\displaystyle\binom{\Lambda_i}{\Lambda_k}$ are the edge labels. Therefore, $N_{ij}$ can be computed. This completes the construction of $\mathcal{N}(G)$ from $\mathcal{ELP}(G)$. To construct $\mathcal{ELP}(G)$ from $\mathcal{N}(G)$, define a partial order $\preceq $ on $\{\Lambda_1, \Lambda_2, \ldots , \Lambda_I\}$ as follows: $\Lambda_j \preceq \Lambda_i$ if $N_{ij} \neq 0$. In this poset, if $\Lambda_i$ covers $\Lambda_j$ then assign an edge label $N_{ij}$ to the edge $\Lambda_j-\Lambda_i$ of the Hasse diagram of the poset. This completes the construction of $\mathcal{ELP}(G)$ from $\mathcal{N}(G)$. \[lem-num-vertices\] Given $\mathcal{N}(G)$, $v(\Lambda_i)$ and $e(\Lambda_i)$ can be counted for each graph in $\Lambda(G)$. There is a unique row in $\mathcal{N}(G)$ that has only one nonzero entry (the diagonal entry 1). This row corresponds to $\Lambda_1 \cong K_2$, and we assume it to be the first row. Now $e(\Lambda_i) = N_{i1}$ for each $\Lambda_i$. By Lemma \[lem-elp\_eq\_n\], $\mathcal{ELP}(G)$ is uniquely constructed. By Lemma \[lem-rank\], the rank function of the poset defined by $\rho(\Lambda_1)=2$ gives $v(\Lambda_i)=\rho(\Lambda_i)$ for each $\Lambda_i$. Now on, without the loss of generality, we will assume that the nonisomorphic induced subgraphs $\Lambda_1, \Lambda_2, \ldots, \Lambda_I$ of a graph $G$ under consideration are ordered so that $v(\Lambda_i)$ are in a non-decreasing order. The first row will correspond to $\Lambda_1 \cong K_2$ and the last row to $\Lambda_I \cong G$. \[lem-kelly3\] The collection $\{\mathcal{N}(G-u)| u \in VG, e(G-u) > 0\}$ is unambiguously determined by $\mathcal{N}(G)$. Note that this collection is a “multiset”, that is, an $\mathcal{N}$-matrix may appear multiple times in the collection. Let $j \neq I$. The graph $\Lambda_j$ is a vertex deleted subgraph of $G$ if and only if for all $i \neq j \neq I$, $N_{ij}=0$. Now $N(\Lambda_j)$ is obtained by deleting $k$’th row and $k$’th column for each $k$ such that $N_{jk} = 0$. A multiplicity $N_{Ij}$ is assigned to $N(\Lambda_j)$. Equivalently, we can construct $\mathcal{ELP}(G)$ by Lemma \[lem-elp\_eq\_n\], then construct the down set $\mathcal{ELP}(\Lambda_j)$ of each $\Lambda_j$ that is covered by $\Lambda_I = G$, and then construct $\mathcal{N}(\Lambda_j)$, and assign it a multiplicity equal to the edge label on $\Lambda_I-\Lambda_j$. [**Remark**]{} It is is possible that for distinct $j$ and $k$, the matrices $N(\Lambda_j)$ and $N(\Lambda_k)$ are equal. In this case a multiplicity $N_{Ij}$ is assigned to $N(\Lambda_j)$ and $N_{Ik}$ is assigned to $N(\Lambda_k)$ while constructing the above collection. \[lem-empty\] Let $rK_1$ be the $r$-vertex empty graph. The number of induced subgraphs of $G$ isomorphic to $rK_1$ is determined by $\mathcal{N}(G)$. The required number is $$\displaystyle\binom{G}{rK_1} = \displaystyle\binom{v(G)}{r} - \sum_{j\mid v(\Lambda_j) = r}N_{Ij}$$ where indices $j$ in the summation are determined by Lemma \[lem-num-vertices\]. 
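Lemma \[lem-num-vertices\] is straightforward to mechanise. The sketch below (illustrative code, not part of the paper) recovers $e(\Lambda_i)$ and $v(\Lambda_i)$ from the unlabelled $\mathcal{N}$-matrix of $L_3$ displayed earlier: $e(\Lambda_i)=N_{i1}$, and, as a small consequence of the argument proving Lemma \[lem-rank\], $v(\Lambda_i)$ equals $2$ plus the length of a longest chain from $\Lambda_1\cong K_2$ to $\Lambda_i$ in the order defined by $\Lambda_j \preceq \Lambda_i$ whenever $N_{ij}\neq 0$.

```python
import numpy as np

# The unlabelled N-matrix of the ladder graph L_3, copied from the example above.
N = np.array([
    [1, 0, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0, 0, 0, 0],
    [2, 0, 1, 0, 0, 0, 0, 0, 0],
    [3, 0, 0, 1, 0, 0, 0, 0, 0],
    [3, 2, 2, 0, 1, 0, 0, 0, 0],
    [4, 1, 2, 1, 0, 1, 0, 0, 0],
    [4, 0, 4, 0, 0, 0, 1, 0, 0],
    [6, 3, 6, 1, 2, 2, 1, 1, 0],
    [9, 6, 12, 2, 6, 6, 3, 6, 1],
])
I = N.shape[0]

# e(Lambda_i) = N_{i1}; row Lambda_1, the unique row whose only nonzero entry is the
# diagonal, corresponds to K_2 and is assumed to be the first row.
edges = N[:, 0]

# v(Lambda_i): with the rows ordered by non-decreasing v(Lambda_i), a longest chain
# below Lambda_i in the poset (Lambda_j <= Lambda_i iff N_{ij} != 0) only visits
# rows with smaller index, so it can be computed by a single forward pass.
chain = [0] * I
for i in range(I):
    preds = [j for j in range(i) if N[i, j] != 0]
    chain[i] = 1 + max(chain[j] for j in preds) if preds else 0
vertices = [2 + c for c in chain]

print("e(Lambda_i):", edges.tolist())   # [1, 1, 2, 3, 3, 4, 4, 6, 9]
print("v(Lambda_i):", vertices)         # [2, 3, 3, 3, 4, 4, 4, 5, 6]
```

The output agrees with the multiplicities and subgraph sizes visible in the $L_3$ example.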
We are interested in the question of reconstructing a graph $G$ or some of its invariants given $\mathcal{N}(G)$. As indicated earlier, we will assume that the induced subgraphs $\Lambda_i; i \in [1,I]$ are not given. We have the following relationship between Ulam’s conjecture and the $\mathcal{N}$-matrix reconstructibility. \[prop:equiv-un\] Ulam’s conjecture is true if and only if all graphs on three or more vertices are $\mathcal{N}$-matrix reconstructible. Proof of [*if*]{}: by Lemma \[lem-kelly1\], $\mathcal{N}(G)$ is constructed from $\mathcal{VD}(G)$. Therefore, Ulam’s conjecture is true if all graphs are $\mathcal{N}$-matrix reconstructible. In fact, a graph is reconstructible if it is $\mathcal{N}$-matrix reconstructible. Proof of [*only if*]{}: this is proved by induction on the number of vertices. Let Ulam’s conjecture be true. Since $N_{i1} = e(\Lambda_i)$ for all $i$, every non-empty three vertex graph is $\mathcal{N}$-matrix reconstructible. Now, let all graphs on at most $n$ vertices, where $n\geq 3$, be $\mathcal{N}$-matrix reconstructible. Let $G$ be a graph on $n+1$ vertices. By Lemma \[lem-kelly3\], the collection $\{\mathcal{N}(G-u); u\in VG, e(G-u) > 0\}$ is unambiguously determined by $\mathcal{N}(G)$. The number of empty graphs in $\mathcal{VD}(G)$ is 0 or 1, and is determined by Lemma \[lem-empty\]. Therefore, by induction hypothesis, $\mathcal{VD}(G)$ is uniquely determined. Now the result follows from the assumption that Ulam’s conjecture is true. Since $\mathcal{N}(G)$ and $\mathcal{ELP}(G)$ are equivalent by Lemma \[lem-elp\_eq\_n\], we rephrase Proposition \[prop:equiv-un\] as follows. \[prop:equiv-elp\] Ulam’s conjecture is true if and only if all graphs on three or more vertices are reconstructible from their abstract edge labelled posets. We would like to point out that reconstructing $G$ from $\mathcal{N}(G)$ or from $\mathcal{ELP}(G)$ is not proved to be equivalent to reconstructing $G$ from $\mathcal{VD}(G)$. This poses a difficulty. For example, proving $\mathcal{N}$-matrix reconstructibility of disconnected graphs is as hard as Ulam’s conjecture, although disconnected graphs are known to be vertex reconstructible. This is proved below. For graphs $X$ and $Y$, we use the notation $X+Y$ to denote a graph that is a disjoint union of two graphs isomorphic to $X$ and $Y$, respectively. Suppose $G$ and $H$ are connected graphs having the same vertex deck. Consider graphs $2G = G+G$ and $2H=H+H$. \[lem-kelly2\] Let $F$ be a graph on fewer than $2v(G)$ vertices. If $F$ has a component isomorphic to $G$ (in which case we write $F=G+X$) then $\displaystyle\binom{2G}{G+X} = \displaystyle\binom{2H}{H+X}$. If $F$ has no component isomorphic to $G$ then $\displaystyle\binom{2G}{F} = \displaystyle\binom{2H}{F}$. When $F=G+X$, $X$ must have fewer than $v(G)-1$ vertices. Since $G$ and $H$ have identical vertex decks, by Kelly’s Lemma  \[lem-kelly\], $\displaystyle\binom{G}{X} = \displaystyle\binom{H}{X}$. Therefore, $\displaystyle\binom{2G}{G+X} = \displaystyle\binom{2H}{H+X}$. When $F$ does not have a component isomorphic to $G$, then if $F$ has a component on $v(G)$ vertices then $\displaystyle\binom{2G}{F} = \displaystyle\binom{2H}{F} = 0$. Therefore, assume that all components of $F$ have at most $v(G)-1$ vertices. Any realisation of $F$ as an induced subgraph of $2G$ is a disjoint union of graphs isomorphic to $X$ and $Y$ such that $X$ is an induced subgraph of one component of $2G$ and $Y$ is an induced subgraph of the other component of $2G$. 
Moreover, $v(X) < v(G)$ and $v(Y) < v(G)$. Now $\displaystyle\binom{2G}{F}=\displaystyle\binom{2H}{F}$ follows from the fact that $G$ and $H$ have identical vertex decks and Kelly’s Lemma. The following corollary is an immediate consequence of the above lemma. \[cor-correspondence\] Define a correspondence $f$ between $\Lambda(2G)$ and $\Lambda(2H)$ as follows. 1. $f(2G) = 2H$ 2. $F\in \Lambda(2G)$ is not $2G$ but has a component isomorphic to $G$. We write $F= G+X$, and set $f(F) = H+X$. 3. $F\in \Lambda(2G)$ has no component isomorphic to $G$. In this case we set $f(F) = F$. The correspondence defined above is a bijection. \[lem-2g2h\] $\mathcal{N}(2G) = \mathcal{N}(2H)$. For the bijection $f$ between non-empty induced subgraphs of $2G$ and $2H$ that was defined in Corollary \[cor-correspondence\], we show that, for any two nonisomorphic induced subgraphs $F_1$ and $F_2$ of $2G$, $$\label{eq-2g2h} \displaystyle\binom{F_2}{F_1} = \displaystyle\binom{f(F_2)}{f(F_1)}$$ In view of Corollary \[cor-correspondence\], it is sufficient to show this when at least one of the graphs $F_1$ and $F_2$ has a component isomorphic to $G$. 1. When $F_2 = 2G$, then Equation (\[eq-2g2h\]) follows from Lemma \[lem-kelly2\] and Corollary \[cor-correspondence\]. 2. When $F_1 = G+X$ and $F_2 = G+Y$, and $v(X) < v(G)$ and $v(Y) < v(G)$, we have $\displaystyle\binom{F_2}{F_1} = \displaystyle\binom{Y}{X} = \displaystyle\binom{H+Y}{H+X} = \displaystyle\binom{f(F_2)}{f(F_1)}$. 3. $F_2 = G+Z$, $v(Z) < v(G)$, but $F_1$ has no component isomorphic to $G$. In this case, any realisation of $F_1$ as an induced subgraph of $F_2$ may be represented (possibly in many ways) as $F_1 = X+Y$ where $X$ is an induced proper subgraph of the component $G$ of $F_2$ and $Y$ is an induced subgraph of $Z$. Moreover, $v(X) < v(G)$ and $v(Y) < v(G)$. Since $\displaystyle\binom{G}{X} = \displaystyle\binom{H}{X}$, we have $\displaystyle\binom{G+Z}{F_1} = \displaystyle\binom{H+Z}{f(F_1)}$. Note that the actual value of $\displaystyle\binom{F_2}{F_1}$ may be written by considering all possible ways of realising $F_1$ as an induced subgraph of $G+Z$. Thus we have shown Equation (\[eq-2g2h\]) for arbitrary non-empty induced subgraphs of $2G$, which implies the result. \[thm-equiv-disc\] Ulam’s conjecture is true if and only if disconnected graphs on three or more vertices are $\mathcal{N}$-matrix reconstructible. The [*only if*]{} part follows from Proposition \[prop:equiv-un\]. The [*if*]{} part is proved by contradiction. Suppose $G$ and $H$ are connected nonisomorphic graphs with the same vertex deck, that is, they are a counter example to Ulam’s conjecture. Then $2G$ and $2H$ are nonisomorphic but $\mathcal{N}(2G) = \mathcal{N}(2H)$ by Lemma \[lem-2g2h\]. Therefore, $2G$ and $2H$ are disconnected graphs that are not $\mathcal{N}$-matrix reconstructible. The following result is proved along the lines of Lemma \[lem-2g2h\]. Ulam’s conjecture is true if and only if the edge labelled poset of each graph has only the trivial automorphism. The proof of [*only if*]{} is done by contradiction. Suppose that $\mathcal{ELP}(G)$ has a nontrivial automorphism $\sigma$. Then there are nonisomorphic induced subgraphs $\Lambda_i$ and $\Lambda_j$ of $G$ such that $\sigma(\Lambda_i) = \Lambda_j$. The downsets (or the edge labelled posets) of $\Lambda_i$ and $\Lambda_j$ must be isomorphic. Therefore, by Proposition \[prop:equiv-elp\], there is a counter example to Ulam’s conjecture. The proof of [*if*]{} is also done by contradiction. 
Suppose that Ulam’s conjecture is false, and $G$ and $H$ are connected nonisomorphic graphs having identical vertex decks. We show that $\mathcal{ELP}(G+H)$ has a nontrivial automorphism. Define a bijective map $\sigma : \Lambda(G+H)\rightarrow \Lambda(G+H)$ as follows. 1. The graph $G+H$ is mapped to itself. 2. If $\Lambda_i\in \Lambda(G+H)$ has a component isomorphic to $G$, then denote $\Lambda_i$ by $G+X$, where $X$ is a proper subgraph of the component isomorphic to $H$. In this case, set $\sigma(G+X) = H+X$. 3. If $\Lambda_i$ is $H+X$, where $X$ is a proper subgraph of the component isomorphic to $G$, then set $\sigma(H+X) = G+X$. 4. For all other graphs $\Lambda_i \in \Lambda(G+H)$, $\sigma(\Lambda_i) = \Lambda_i$. We now show that $\sigma$ is an automorphism of $\mathcal{ELP}(G+H)$. That is, we show that $\displaystyle\binom{\Lambda_i}{\Lambda_j} = \displaystyle\binom{\sigma(\Lambda_i)}{\sigma(\Lambda_j)}$ for any two graphs $\Lambda_i $ and $\Lambda_j$ in $\Lambda(G+H)$. We have to consider only the case in which at least one of $\Lambda_i $ and $\Lambda_j$ has a component isomorphic to $G$ or $H$, and $v(\Lambda_j) \leq v(\Lambda_i)$. 1. $\Lambda_j = G+X$ and $\Lambda_i = G+H$. In this case,\ $\displaystyle\binom{G+H}{G+X} = \displaystyle\binom{H}{X} = \displaystyle\binom{G}{X} = \displaystyle\binom{H+G}{H+X} = \displaystyle\binom{\sigma(G+H)}{\sigma(G+X)}$. 2. $\Lambda_j = G+X$ and $\Lambda_i = G+Y$ and $v(Y) < v(G)=v(H)$. In this case,\ $\displaystyle\binom{G+Y}{G+X} = \displaystyle\binom{Y}{X} = \displaystyle\binom{H+Y}{H+X} = \displaystyle\binom{\sigma(G+Y)}{\sigma(G+X)}$. 3. $\Lambda_j = G+X$ and $\Lambda_j = H+Y$ and $Y\ncong G$. In this case,\ $\displaystyle\binom{H+Y}{G+X} = \displaystyle\binom{G+Y}{H+X} = 0$. 4. $\Lambda_j = G+X$ and $\Lambda_i $ has no component isomorphic to $G$ or $H$. In this case,\ $\displaystyle\binom{\Lambda_i}{G+X} = \displaystyle\binom{\Lambda_i}{H+X} = 0$. 5. $\Lambda_j$ has no component isomorphic to $G$ or $H$, and $\Lambda_i = G+H$. This is trivial since $\sigma(\Lambda_j) = \Lambda_j$ and $\sigma(G+H) = G+H$ 6. $\Lambda_j$ has no component isomorphic to $G$ or $H$ and $\Lambda_i = G+X$, where $v(X) < v(G)=v(H)$. In this case, a realisation of $\Lambda_j$ as an induced subgraph of $G+X$ may be written as $\Lambda_j = Y+Z$, where $Y$ is an induced subgraph of $G$ and $Z$ is an induced subgraph of $X$. Since, $\displaystyle\binom{G}{Y}=\displaystyle\binom{H}{Y}$, the number of such realisations is $\displaystyle\binom{G}{Y}\displaystyle\binom{X}{Z} = \displaystyle\binom{H}{Y}\displaystyle\binom{X}{Z}$. By summing over all possible ways of realising $\Lambda_j$ as an induced subgraph of $G+X$, we get $\displaystyle\binom{G+X}{\Lambda_j} = \displaystyle\binom{H+X}{\Lambda_j}$. 7. All the above arguments are valid when $G$ and $H$ are interchanged. Thus we have constructed a non-trivial automorphism, completing the [*if*]{} part. We conclude this section with a result on trees. \[thm-trees\] Trees and forests are $\mathcal{N}$-matrix reconstructible. The class of simple acyclic graphs is closed under vertex deletion. Therefore, we can use the method in the proof of Proposition \[prop:equiv-un\]. Let $T$ be a tree or a non-empty forest on three or more vertices. We prove by induction on $v(T)$ that $T$ is uniquely reconstructible from $\mathcal{N}(T)$. The base case is $v(T) = 3$. All graphs on 3 vertices are $\mathcal{N}$-matrix reconstructible by Lemma \[lem-num-vertices\]. 
Suppose each acyclic graphs on at most $k$ can be recognised and reconstructed from its $\mathcal{N}$-matrix. Let $v(T) = k+1$. By Lemma \[lem-kelly3\], the collection $\{\mathcal{N}(T-u)| u \in VT\}$ is unambiguously determined. Then by induction hypothesis, $T-u$ are determined (along with their multiplicities). The subgraphs in the vertex deck that are not determined by Lemma \[lem-kelly3\] are the ones having no edges. Since Ulam’s conjecture has been proved for trees and disconnected simple graphs in [@kelly1957], $T$ is $\mathcal{N}$-matrix reconstructible. [**Remark**]{} If Ulam’s conjecture is true for a class of graphs that is closed under vertex deletion, then the class is also $\mathcal{N}$-matrix reconstructible. Tutte-Kocay theory on the $\mathcal{N}$-matrix. {#sec-nmatrix} =============================================== In this section we will compute several invariants of a graph $G$ from its $\mathcal{N}$-matrix. The invariants include the number of spanning trees, the number of spanning unicyclic subgraphs containing a cycle of specified length, the characteristic polynomial and the rank polynomial. [**An outline of the proof.**]{} First we outline how the above mentioned invariants are calculated from the vertex deck using Kocay’s Lemma. 1. Suppose the graphs $F_1, F_2, \ldots , F_k$ satisfy $\sum_i v(F_i) = v(G)$ and $v(F_i)< v(G) \forall i$. Kocay’s Lemma then gives the number of disconnected spanning subgraphs having components isomorphic to $F_1, F_2, \ldots , F_k$. 2. Kacay’s lemma is then applied to $F_1 = F_2 = \ldots = F_k = K_2$, where $k=v(G)-1$. Since disconnected spanning subgraphs of each type are counted in the first step, we can now count the number of spanning trees. 3. The second step is repeated with $k=v(G)$. Since the number of spanning trees and disconnected spanning subgraphs of each type are known from the first two steps, we can now count the number of hamiltonian cycles. 4. Once the above three steps are completed, many other invariants, such as the characteristic polynomial, rank polynomial, etc. are easily computed. The procedure outlined above cannot be implemented on the $\mathcal{N}$-matrix in a straight forward manner. We do not know all the induced proper subgraphs. But we observe that the above procedure essentially reduces counting certain spanning subgraphs to counting them on vertex proper subgraphs. It turns out that we do not really need the number of vertex proper subgraphs of each type. We only need to know the ‘cycle structure’, that is, $\psi_i(\Lambda_j)$ for each $i \leq v(\Lambda_j)$, for each $j < I$. Next we outline the strategy to construct the cycle structure. Suppose $X, Y, \ldots$ is a list of some graph invariants that are either polynomials or numbers, for example, the number of hamiltonian cycles in a graph or the chromatic polynomial of a graph. We say that an invariant $Z$ can be reduced to invariants $X, Y, \ldots $ (or $Z$ has a reduction on the $\mathcal{N}$-matrix) if for each graph $G$ having a non-empty edge set, 1. $Z(G)$ can be written as $Z(G) = \Theta(X(G_U), Y(G_V), ...)$ where $\Theta(x,y,...)$ is a polynomial in $x,y, \ldots$, and $U, V, \dots $ are proper subsets of $VG$. 2. the coefficient of each term in the polynomial can be computed from $\mathcal{N}(G)$. Proving an identity that gives a reduction of an invariant $Z$ as in the above equation is not in itself sufficient to claim that $Z$ is $\mathcal{N}$-matrix reconstructible. 
If the invariants $X_1, X_2, \ldots, X_k$ appear on the RHS of the above equation, then it is essential to show that the invariants $X_1, X_2, \ldots, X_k$ themselves can be reduced to $X_1, X_2, \ldots, X_k$. The reconstructibility of $Z$ and $X_1, X_2, \ldots, X_k$ from the $\mathcal{N}$-matrix is then proved by induction on $v(G)$. That is possible because of the requirement that the sets $U,V,\ldots $ are proper subsets of $VG$. It is worth noting that the chromatic polynomial computation given in Biggs [@biggs1993] essentially follows a similar style. In several lemmas that precede the main theorem, we will prove identities of the form $Z(G) = \Theta(X(G_U), Y(G_V), ...)$. It will become clear that in the end all invariants computed here will reduce to the cycle structure of proper subgraphs. \[lem-kelly-cycles\] For $i < v(G)$, the number of cycles of length $i$ in $G$ has a reduction on $\mathcal{N}(G)$ given by $$\psi_i(G) = \frac{1}{v(G)-i}\sum_{u\in VG}\psi_i(G-u) = \frac{1}{v(G)-i}\sum_{j\mid v(\Lambda_j)=v(G)-1}N_{Ij}\psi_i(\Lambda_j)$$ This immediately follows from Kelly’s Lemma  \[lem-kelly\] and Lemma \[lem-kelly3\]. \[def-cycle-cover\] Let $X$ be a subset of $VG$. Let $A\equiv (a_i)_{i=1}^k$ be a sequence in $[2,|X|]$. A $k$-tuple of cycles in $G_X$, corresponding to the sequence $A$, is a $k$-tuple of cycles in $G_X$ such that the $k$ cycles have lengths $a_1, a_2, \ldots, a_k$, respectively. Additionally, if the cycles in the $k$-tuple span the set $X$, then it is called a [*spanning cycle cover*]{} of $G_X$, corresponding to the sequence $A$. The number of $k$-tuples of cycles in $G_X$, corresponding to the sequence $A$, is denoted by $p(A\rightarrow G_X)$. The number of spanning cycle covers of $G_X$, corresponding to the sequence $A$, is denoted by $c(A\rightarrow G_X)$. \[lem-p2c\] $$\label{eq-p2c} p(A \rightarrow G_X) = \sum_{Y\subseteq X}c(A\rightarrow G_Y)$$ $$\label{eq-p2c-proof} \begin{split} p(A \rightarrow G_X) & = \prod_{j=1}^{k}\psi_{a_j}(G_X)\\ & = |\{(F_1,F_2,\ldots,F_{k})\mid (\forall j\in[1,k])(F_j\subseteq G_X,\, F_j\cong C_{a_j})\}| \\ & = \sum_{Y\subseteq X}c(A\rightarrow G_Y) \end{split}$$ Thus we have grouped together the $k$-tuples of cycles in groups that span each subset of $X$. This is essentially the idea of Kocay’s Lemma \[lem-kocay\]. \[lem-exp-p\] If $A \equiv (a_i)_{i = 1}^k $ is a sequence in $[2,v(G)-1]$ then $p(A\rightarrow G)$ has a reduction on $\mathcal{N}(G)$ given by $$p(A \rightarrow G) =\prod_{i=1}^{k}\psi_{a_i}(G) = \prod_{i=1}^{k} \left(\frac{1}{v(G)-a_i} \sum_{j\mid v(\Lambda_j)=v(G)-1}N_{Ij}\psi_{a_i}(\Lambda_j)\right)$$ This follows from the definition of $p(A\rightarrow G)$ and Lemma \[lem-kelly-cycles\]. \[lem-exp-c\] If $A \equiv (a_i)_{i = 1}^k $ is a sequence in $[2,v(G)-1]$ then $c(A\rightarrow G)$ has a reduction on $\mathcal{N}(G)$. By Möbius inversion of Equation (\[eq-p2c\]), we write $$c(A\rightarrow G_X) = \sum_{Y\subseteq X}(-1)^{|X\backslash Y|} p(A\rightarrow G_Y)$$ which implies $$c(A\rightarrow G) = \sum_{j=1}^I(-1)^{v(G)- v(\Lambda_j)}N_{Ij}p(A \rightarrow \Lambda_j)$$ By Lemma \[lem-exp-p\], the RHS of this equation has a reduction on $\mathcal{N}(G)$. Therefore, $c(A\rightarrow G)$ has a reduction on $\mathcal{N}(G)$. The following definition restricts the spanning cycle covers of Definition \[def-cycle-cover\] to connected spanning cycle covers. \[def-connected-spanning-cycle-cover\] Let $X$ be a subset of $VG$. Let $A\equiv (a_i)_{i=1}^k$ be a sequence in $[2,|X|]$. 
A [*connected spanning cycle cover*]{} of $G_X$, corresponding to the sequence $A$, is a $k$ tuple of cycles in $G_X$ such that the $k$ cycles have lengths $a_1, a_2, \ldots, a_k$, respectively, and together they constitute a connected subgraph spanning the set $X$. More formally, it is a $k$-tuple $(F_1,F_2,\ldots,F_{k})$ such that $(\forall j\in [1,k])(F_j\subseteq G_X, F_j\cong C_{a_j})$, $\cup_{j=1}^k VF_j = X$, and $\cup_{j=1}^kF_j$ is connected. The number of connected spanning cycle covers of $G_X$, corresponding to the sequence $A$, is denoted by $con(A\rightarrow G_X)$. The [*disconnected spanning cycle covers*]{} are defined similarly, and their number, corresponding to a sequence $A$, is denoted by $discon(A\rightarrow G_X)$. Let $A\equiv (A_i)_{i=1}^l$ be a list of $l$ non-increasing sequences such that $A_i \equiv (a_{ij})_{j=1}^{k_i}$; $a_{ij}\in [2,v(G)]$. Let $B \equiv (b_i)_{i=1}^l$ be a sequence in $[2,v(G)]$. Let $m \leq n$. We now define quantities $Q_m(A,B\rightarrow G)$ and $T_p(A,B\rightarrow G)$ that are based on connected spanning cycle covers as follows. $$\begin{split} Q_m(A,B\rightarrow G) & = \sum_{\substack {S\subseteq V(G)\\ |S|=m}} \prod_{i = 1}^{l} \left( \sum_{\substack{X\subseteq S \\ |X|=b_i}} con(A_i\rightarrow G_{X}) \right)\\ & = \sum_{\substack{S\subseteq V(G)\\|S|=m}} \left( \sum_{\substack{(X_1,X_2,\ldots,X_l)\mid \\ \bigcup_{j=1}^{l}X_j\subseteq S\\ |X_j|=b_j\forall j}} \left( \prod_{i = 1}^{l}con(A_i\rightarrow G_{X_i}) \right) \right)\\ & = \sum_{p\leq m} T_p(A,B\rightarrow G)\displaystyle\binom{v(G)-p}{m-p} \end{split}$$ where $$\begin{split} T_p(A,B\rightarrow G) & = \sum_{\substack{(X_1,X_2,\ldots,X_l)\mid \\ \cup_{j=1}^{l}X_j\subseteq V(G)\\ |\cup_{j=1}^{l}X_j| = p \\ |X_j|=b_j\forall j} } \left( \prod_{i = 1}^{l}con(A_i\rightarrow G_{X_i}) \right) \end{split}$$ Solving the system of equations for $T_m(A,B\rightarrow G)$, we can write $$\label{eq-q2tp} \begin{split} T_m(A,B\rightarrow G) & = \sum_{p\leq m} (-1)^{m-p} \displaystyle\binom{v(G)-p}{m-p}Q_p(A,B\rightarrow G) \end{split}$$ When $m=v(G)$, this is simply $$\begin{split} T_{v(G)}(A,B\rightarrow G) & = \sum_{p\leq v(G)} (-1)^{v(G)-p} Q_p(A,B\rightarrow G) \end{split}$$ Note that if $m < max_{i,j}(a_{ij})$ for some $i,j$ then $T_m(A,B\rightarrow G)$ and $Q_m(A,B\rightarrow G)$ are both 0. \[lem-exp-q-tp\] If $A_i; i \in [1,l]$ are sequences in $[2,v(G)-1]$, and $B\equiv (b_i)_{i=1}^l$ are sequences in $[2,v(G)-1]$ then $Q_m(A,B\rightarrow G)$ and $T_m(A,B\rightarrow G)$ have reductions on the $\mathcal{N}$-matrix for each $m \leq v(G)$. We write $Q_m(A,B\rightarrow G)$ in terms of $\Lambda_j$ and the entries of $\mathcal{N}(G)$. $$\label{eq-exp-q} \begin{split} Q_m(A,B\rightarrow G) & = \sum_{\substack {S\subseteq V(G)\\ |S|=m}} \prod_{i = 1}^{l} \left( \sum_{\substack{X\subseteq S \\ |X|=b_i}} con(A_i\rightarrow G_{X}) \right)\\ &= \sum_{p\mid v(\Lambda_p)=m}N_{Ip} \prod_{i = 1}^{l} \left( \sum_{j\mid v(\Lambda_j)=b_i}N_{pj}con(A_i\rightarrow \Lambda_j) \right) \end{split}$$ Since $b_i<v(G)$, Equations (\[eq-exp-q\]) reduce $Q_m(A,B\rightarrow G)$ to the invariants $con(A_i\rightarrow \Lambda_j)$ for $m\leq v(G)$. Therefore, by Equation (\[eq-q2tp\]), $T_m(A,B\rightarrow G)$ are also reduced to the invariants $con(A_i\rightarrow \Lambda_j)$ for each $m\leq v(G)$. Note that if $a_{ik} > v(\Lambda_j)$ for some $k$ then $con(A_i\rightarrow \Lambda_j)=0$. \[lem-con\] If $A\equiv (a_i)_{i=1}^k$ is a sequence in $[2,v(G)-1]$ then $con(A\rightarrow G)$ has a reduction on $\mathcal{N}(G)$. 
The idea of the proof is similar to Kocay’s Lemma. First $c(A\rightarrow G)$ is written as $con(A\rightarrow G) + discon(A\rightarrow G)$. Then $discon(A\rightarrow G)$ is expressed in terms of $con(A_i \rightarrow G_{X_i})$ where $X_i$ are proper subsets of $VG$, and $A_i$ are certain subsequences of $A$. Then $discon(A\rightarrow G)$ are related to $T_{v(G)}(A^P, B\rightarrow G)$ for certain subsequences $A^P$ of $A$ and certain sequences $B$ constructed from appropriate partitions of $VG$. Since the reductions of $c(A\rightarrow G)$ and $T_{v(G)}(A^P, B\rightarrow G)$ have already been obtained, we get a reduction of $con(A\rightarrow G)$. Let $\mathcal{P}_q(N_k)$ be the set of all partitions of $N_k$ in $q$ parts. A partition $P$ in $\mathcal{P}_q(N_k)$, is denoted by $P = \{N_k^1,N_k^2,\ldots ,N_k^q\}$. Consider an arbitrary $k$ tuple $(F_1,F_2,\ldots,F_{k})$ such that $(\forall i\in [1,k])(F_i\subseteq G,F_i\cong C_{a_i})$. It defines a partition $P\in \mathcal{P}_q(N_k)$ so that $h$ and $j$ are in the same part of $P$ if and only if $F_h$ and $F_j$ are subgraphs of the same connected component of $\cup_{i=1}^kF_i$. We denote the contribution to $c(A\rightarrow G)$ from such tuples by $c_P(A\rightarrow G)$, and write $$c(A\rightarrow G) = \sum_{q}\sum_{P\in \mathcal{P}_q}c_P(A\rightarrow G)$$ Let the set of solutions to the equation $\sum_{i=1}^qb_i = v(G)$ be $\mathcal{B}(q)$. We can then write $$c(A\rightarrow G) = \sum_{q} \sum_{P\in \mathcal{P}_q} \sum_{B\in \mathcal{B}(q)} c_{PB}(A\rightarrow G)$$ where one more suffix $B$ in the summand is used to denote those tuples for which the connected component corresponding to part $N_k^i$ has $b_i$ vertices, for $i\in [1,q]$. Now expanding the summand in terms of $con(\ldots)$, we get $$\begin{split} c(A\rightarrow G) = \sum_{q} \sum_{P\in \mathcal{P}_q} \sum_{B\in \mathcal{B}(q)} \sum_{\substack{(X_1,X_2,\ldots,X_q)\mid \\ \cup_{j=1}^{q}X_j= V(G)\\ |X_j|=b_j\forall j} } \prod_{i=1}^q con(A_i\rightarrow G_{X_i}) \end{split}$$ where $A_i$ is the subsequence of $A$ with indexing set $N_k^i$. Innermost summation and product are now replaced by $T_{v(G)}(A^P,B\rightarrow G)$, so $$\label{eq-pqb} \begin{split} c(A\rightarrow G) = \sum_{q} \sum_{P\in \mathcal{P}_q} \sum_{B\in \mathcal{B}(q)} T_{v(G)}(A^P,B\rightarrow G) \end{split}$$ where $A^P$ is the collection of subsequences $A_i$ of $A$; $i\in [1,q]$, corresponding to the partition $P$. By Lemma \[lem-exp-c\], the LHS of Equation (\[eq-pqb\]) has a reduction on $\mathcal{N}(G)$. By Lemma \[lem-exp-q-tp\], each term on the RHS, except the term $con(A\rightarrow G)$, which corresponds to $q = 1$, has a reduction on $\mathcal{N}(G)$. This proves that $con(A\rightarrow G)$ has a reduction on $\mathcal{N}(G)$. \[cor-trees\] Let $tr(G)$ be the number of spanning trees in $G$, and let $uni(G,r)$ be the number of spanning unicyclic subgraphs of $G$ containing an $r$-cycle, respectively. Then, $tr(G)$ and $uni(G,r); r\in [3,v(G)]$ have reductions on $\mathcal{N}(G)$. Define $A\equiv (a_i)_{i=1}^{v(G)-1}$ such that $a_i=2 \,\forall \, i\in [1,v(G)-1]$. By Lemma \[lem-con\], $con(A\rightarrow G)$ has a reduction on $\mathcal{N}(G)$. But $con(A\rightarrow G) = (v(G)-1)!tr(G)$. Therefore, $tr(G)$ has a reduction $\mathcal{N}(G)$. To reduce $uni(G,r);r\in [3,v(G)-1]$, define $A\equiv(a_j)_{j=1}^{v(G)-r+1}$ such that $a_1 = r$, and $a_j = 2 \,\forall \, j\in [2,v(G)-r+1]$. Again, $con(A\rightarrow G) = (v(G)-r)!uni(G,r)$ has a reduction $\mathcal{N}(G)$ by Lemma \[lem-con\]. 
Thus $uni(G,r)$ also has a reduction on $\mathcal{N}(G)$. To reduce the number of hamiltonian cycles, let $A\equiv(a_i)_{i=1}^{v(G)}; a_i = 2\,\forall \,i\in [1,v(G)]$. We have, $con(A\rightarrow G) = (v(G)-1)!S(v(G),v(G)-1)tr(G) + \sum_{i=3}^{v(G)-1}v(G)!uni(G,i) + v(G)!ham(G)$, where $S(v(G),v(G)-1)$ is the Stirling number of the second kind computed for $(v(G),v(G)-1)$. Therefore, $ham(G)$ has a reduction on $\mathcal{N}(G)$. \[lem-ci\] The coefficients $c_i(G)$ of the characteristic polynomial of $G$ are given by $c_0(G) = 1$ and $$c_i(G) = \frac{1}{v(G)-i}\sum_{j\mid v(\Lambda_j)=v(G)-1}N_{Ij}c_i(\Lambda_j) \,\,\text{for}\,\, 0 < i < v(G).$$ By Lemma \[lem-derivative\] we write $P'(G;\lambda) = \sum_{u\in VG}P(G-u;\lambda)$. Equating identical powers of $\lambda $ on the two sides, we get the result. \[cor-elementary\] If $F$ is an elementary graph on $v(G)$ vertices then $\left[\begin{array}{c}G\\F\end{array}\right]$ has a reduction on $\mathcal{N}(G)$. The case when $F$ is a hamiltonian cycle is handled in Corollary \[cor-trees\]. If $F$ is not a cycle, define a sequence $A\equiv (a_i)_{i=1}^k,\, a_i \in [2,v(G)-1]$ such that $\sum_{i=1}^k a_i = v(G)$. It is uniquely associated with an elementary graph $F\in L_{v(G)}$, so that the components of $F$ are cycles of length $a_i$, or $K_2$ if $a_i=2$. Now $c(A\rightarrow G) = c(A\rightarrow F)\left[\begin{array}{c}G\\F\end{array}\right]$, and $c(A\rightarrow F)$ depends only on the multiplicity of each cycle length in $F$. By Lemma \[lem-exp-c\], $c(A\rightarrow G)$ has a reduction on the $\mathcal{N}$-matrix. Therefore, $\left[\begin{array}{c}G\\F\end{array}\right]$ has a reduction on the $\mathcal{N}$-matrix. The following chart shows how various invariants were reduced to other invariants. For example, it shows that $con(A\rightarrow G); \, 2 \leq a_i \leq v(G)$ can be reduced to computing $ham(G)$, $con(A\rightarrow G); \, 2 \leq a_i < v(G)$ and $p(A\rightarrow G); \, 2 \leq a_i < v(G)$. It is clear from the diagram that computing $con(A\rightarrow G); \, 2 \leq a_i \leq v(G)$ and $ham(G)$ reduces to computing the same invariants for induced proper subgraphs. \[fig-dependency\] This makes the reconstructibility of several invariants obvious. \[thm-everything\] Suppose that $G$ is a simple finite graph, and we are given $\mathcal{N}(G)$. 1. Let $A\equiv (a_i)_{i=1}^k$ be a sequence such that $a_i \in [2,v(G)]$. Then $con(A\rightarrow G)$ and $ham(G)$ are reconstructible from $\mathcal{N}(G)$. 2. If $A_i; i \in [1,l]$ are sequences in $[2,v(G)]$, and $B\equiv (b_i)_{i=1}^l$ is a sequence in $[2,v(G)]$ then $Q_m(A,B\rightarrow G)$ and $T_m(A,B\rightarrow G)$ are reconstructible from $\mathcal{N}(G)$ for each $m \leq v(G)$. 3. the number of spanning trees in $G$, the number of cycles of length $i$, for $3 \leq i \leq v(G)$, the number of unicyclic subgraphs containing a cycle of length $i$, for each $i \in [3,v(G)]$, and the characteristic polynomial $P(G;\lambda)$ are all reconstructible from $\mathcal{N}(G)$. We prove the first item by induction on $v(G)$. The base case is $v(G) = 2$. In this case $G = K_2$. Now suppose that $con(A\rightarrow G)$ and $ham(G)$ are reconstructible from $\mathcal{N}(G)$ when $v(G) < s$ for an arbitrary sequence $A\equiv (a_i)_{i=1}^k$ of integers in $[2,v(G)]$. Now let $v(G) = s$.
Lemmas and Corollaries \[lem-kelly-cycles\] to \[cor-trees\] imply that computations of $con(A\rightarrow G)$ and $ham(G)$ reduce to computations on induced proper subgraphs of $G$, thus completing the induction step, and the proof of the first item. Since all other intermediate invariants $T_m(\ldots)$, $Q_m(\ldots)$, the number of spanning trees, the number of unicyclic graphs having a cycle of a specified length, the number of cycles of each length, the number of elementary spanning graphs of each type, etc. have been reduced to computations of the invariants $con(A\rightarrow \Lambda_j); j < I$ and $\psi_i(\Lambda_j); i < v(G), j < I$, the remaining parts of the theorem follow. There is another way of proving the $\mathcal{N}$-matrix reconstructibility of the characteristic polynomial. From $\mathcal{N}(G)$, it is possible to construct $\mathcal{N}(\bar{G})$, and then invoke the result of Hagos [@hagos2000] in the induction step. Hagos proved that the pair $(P(G;\lambda),P(\bar{G};\lambda))$ can be reconstructed from the collection $\{(P(G-u;\lambda),P(\bar{G}-u;\lambda)); u\in VG\}$. We skip the details of this argument. The proof presented here counts many other invariants. It is likely that the deck of pairs of polynomials considered by Hagos contains enough information for counting hamiltonian cycles and spanning trees. Now we count the subgraphs with a given number of components, and a given number of edges in each component, and use it to compute the rank polynomial. Let $\mathcal{G}(p,l,(p_i,q_i)_{i=1}^l)$ be the family of graphs with $p$ vertices and $l$ components, such that the $i$’th component has $p_i$ vertices and $q_i$ edges for $i \in [1,l]$. So, $\sum_i p_i = p$. We also assume that $p_i \geq p_j$ whenever $i < j$. By extending the notation $\left[\begin{array}{c}G\\F\end{array}\right]$ defined earlier, we denote by $\left[\begin{array}{c}G\\\mathcal{G}(p,l,(p_i,q_i)_{i=1}^l)\end{array}\right]$ the number of subgraphs of $G$ that belong to the family $\mathcal{G}(p,l,(p_i,q_i)_{i=1}^l)$. \[lem-kedge\] The number of connected spanning subgraphs of $G$ with $k$ edges, that is, $\left[\begin{array}{c}G\\\mathcal{G}(v(G),1,(v(G),k))\end{array}\right]$, is reconstructible from $\mathcal{N}(G)$ for all $k$. When $k < v(G)-1$, $\left[\begin{array}{c}G\\\mathcal{G}(v(G),1,(v(G),k))\end{array}\right]$ is 0. When $k\in [v(G)-1, e(G)]$, we prove the result by induction on $k$. The base case $k = v(G)-1$, which corresponds to the number of spanning trees, was proved in Theorem \[thm-everything\]. Let the claim be true for all $k\in [v(G)-1,q-1]$. To prove the claim for $k=q$, define $A\equiv (a_i)_{i=1}^q$ such that $a_i=2$ for all $i\leq q$. We can write $$\label{eq-kedge-con} con(A\rightarrow G) = \sum_{i=v(G)-1}^q i!S(q,i) \left[\begin{array}{c}G\\\mathcal{G}(v(G),1,(v(G),i))\end{array}\right]$$ In Theorem \[thm-everything\], $con(A\rightarrow G)$ was shown to be reconstructible from $\mathcal{N}(G)$. By the induction hypothesis, all terms on the RHS, except $\left[\begin{array}{c}G\\\mathcal{G}(v(G),1,(v(G),q))\end{array}\right]$, are known. Solving Equation (\[eq-kedge-con\]) for $\left[\begin{array}{c}G\\\mathcal{G}(v(G),1,(v(G),q))\end{array}\right]$ completes the induction step and the proof. \[lem-lcompo\] $\left[\begin{array}{c}G\\\mathcal{G}(v(G),l,(n_i,m_i)_{i=1}^l)\end{array}\right]$, where $\sum_{i=1}^ln_i = v(G)$ and $m_i > 0$ for all $i$, is reconstructible from $\mathcal{N}(G)$. Let $A\equiv (A_i)_{i=1}^l$, where $A_i\equiv (a_{ij})_{j=1}^{m_i}$; $a_{ij} = 2\,\forall i,j$, and $B\equiv (n_i)_{i=1}^l$.
By Theorem \[thm-everything\], $T_{v(G)}(A,B\rightarrow G)$ is $\mathcal{N}$-matrix reconstructible. We first express $T_{v(G)}(A,B\rightarrow G)$ in terms of the subgraphs to be counted, and then count the subgraphs by induction. $$\begin{split} T_{v(G)}(A,B\rightarrow G) & = \sum_{\substack{(X_1,X_2,\ldots,X_l)\mid \\ \cup_{j=1}^{l}X_j= V(G)\\ |X_j|=n_j\forall j} } \prod_{i = 1}^{l}con(A_i\rightarrow G_{X_i}) \\ & = \sum_{\substack{(X_1,X_2,\ldots,X_l)\mid \\ \cup_{j=1}^{l}X_j= V(G)\\ |X_j|=n_j\forall j} } \prod_{i = 1}^{l}\left( \sum_{q_i = n_i-1}^{m_i} q_i!S(m_i,q_i) \left[\begin{array}{c}G_{X_i}\\\mathcal{G}(n_i,1,(n_i,q_i))\end{array}\right] \right)\\ & = \sum_{\substack{(X_1,X_2,\ldots,X_l)\mid \\ \cup_{j=1}^{l}X_j= V(G)\\ |X_j|=n_j\forall j} } \sum_{\substack{(q_1,q_2,\ldots,q_l)\mid \\ n_j-1\leq q_j \leq m_j \forall j} } \prod_{i = 1}^{l}\left( q_i!S(m_i,q_i)\left[\begin{array}{c}G_{X_i}\\\mathcal{G}(n_i,1,(n_i,q_i))\end{array}\right] \right)\\ & = \sum_{\substack{(q_1,q_2,\ldots,q_l)\mid \\ n_j-1\leq q_j \leq m_j \forall j} } \sum_{\substack{(X_1,X_2,\ldots,X_l)\mid \\ \cup_{j=1}^{l}X_j= V(G)\\ |X_j|=n_j\forall j} } \prod_{i = 1}^{l} \left( q_i!S(m_i,q_i)\left[\begin{array}{c}G_{X_i}\\\mathcal{G}(n_i,1,(n_i,q_i))\end{array}\right] \right)\\ & = \sum_{\substack{(q_1,q_2,\ldots,q_l)\mid \\ n_j-1\leq q_j \leq m_j \forall j} } \left( \prod_{i = 1}^{l} q_i!S(m_i,q_i) \right) \left( \sum_{\substack{(X_1,X_2,\ldots,X_l)\mid \\ \cup_{j=1}^{l}X_j= V(G)\\ |X_j|=n_j\forall j} } \prod_{i = 1}^{l} \left[\begin{array}{c}G_{X_i}\\\mathcal{G}(n_i,1,(n_i,q_i))\end{array}\right] \right) \\ \end{split}$$ The sequence $(n_i,q_i)_{i=1}^l$ may be written as $(n_i^\prime,q_i^\prime)^{\mu_i}$; $i = 1$ to $r$, which denotes that the pair $(n_i^\prime,q_i^\prime)$ appears $\mu_i$ times in the sequence $(n_i,q_i)_{i=1}^l$, the pairs $(n_i^\prime,q_i^\prime)$ are all distinct for $i = 1$ to $r$, and that $\sum_{i=1}^r \mu_i = l$. Then each subgraph of $G$ that belongs to the family $\mathcal{G}(v(G),l,(n_i,q_i)_{i=1}^l)$ is counted $\prod_{i=1}^r \mu_i!$ times in the inner summation. Therefore, $$\label{eq-dics-subgraphs} \begin{split} T_{v(G)}(A,B\rightarrow G) & = \sum_{\substack{(q_1,q_2,\ldots,q_l)\mid \\ n_j-1\leq q_j \leq m_j \forall j} } \left(\prod_{i = 1}^{l}q_i!S(m_i,q_i)\right) \left(\prod_{i=1}^r \mu_i!\right) \left[\begin{array}{c}G\\\mathcal{G}(v(G),l,(n_i,q_i)_{i=1}^l)\end{array}\right] \end{split}$$ The LHS of Equation (\[eq-dics-subgraphs\]) is known by Theorem \[thm-everything\]. Now we prove the claim by induction on $\sum_i m_i$. The base case of induction corresponds to the case in which each component in the subgraphs being counted has the minimum number of edges, that is, $m_i = n_i -1$ for all $i\leq l$. In this case, there is only one term on the RHS of Equation (\[eq-dics-subgraphs\]), and it contains the unknown $\left[\begin{array}{c}G\\\mathcal{G}(v(G),l,(n_i,m_i)_{i=1}^l)\end{array}\right]$, which can be solved for. Suppose the claim is true for $\sum_i m_i < m$. Now let $\sum_i m_i = m$. In this case, as in Lemma \[lem-kedge\], there is only one unknown $\left[\begin{array}{c}G\\\mathcal{G}(v(G),l,(n_i,m_i)_{i=1}^l)\end{array}\right]$ on the RHS of Equation (\[eq-dics-subgraphs\]), corresponding to $q_i = m_i$ for all $i$. All other terms on the RHS are known by the induction hypothesis. We can compute the unknown term to obtain the desired result. This completes the induction step and the proof. \[thm-rank\] The rank polynomial $R(G;x,y)$ is reconstructible from $\mathcal{N}(G)$. Lemma \[lem-lcompo\] can be applied to all induced subgraphs of $G$.
So, we can count the number of subgraphs with $v$ vertices (none of which isolated), $e$ edges and $l$ components for all $v\leq v(G)$, $e\leq e(G)$ and $l\geq 1$. Therefore, $\rho_{rs}$ in the expression for the rank polynomial are known. Computing $P(G;\lambda)$ from $\mathcal{PD}(G)$ {#sec-poly} =============================================== In this section, we consider the problem of computing the characteristic polynomial of a graph from its complete polynomial deck. We prove that elementary spanning subgraphs of each type other than hamiltonian cycles can be counted from the complete polynomial deck of a graph, thus proving that the characteristic polynomial of a non-hamiltonian graph is reconstructible from its complete polynomial deck. Here we apply the idea of Kocay’s Lemma to the coefficients of the characteristic polynomials. Let $A \equiv (a_i)_{i = 1}^k$ be a non-increasing sequence in $[2,v(G)]$. In this section, we define the notation $p(A \rightarrow G_X)$ and $c(A\rightarrow G_X)$ differently. $$\label{eq-pc} \begin{split} p(A \rightarrow G_X) & = \prod_{j=1}^{k}(-1)^{a_j}c_{a_j}(G_X)\\ & = \prod_{j = 1}^{k}\left( \sum_{F\subseteq G_X,\,F\in L_{a_j}}(-1)^{r(F)}2^{s(F)} \right)\\ & = \sum_{\substack{(F_1,F_2,\ldots,F_{k})\mid \\ (\forall j\in [1,k])(F_j\subseteq G_X,\,F_j\in L_{a_j})} } \left((-1)^{\sum_{j=1}^{k} r(F_j)} 2^{\sum_{j=1}^{k} s(F_j)}\right)\\ & = \sum_{Y\subseteq X}c(A\rightarrow G_Y) \end{split}$$ where $$\begin{split} c(A\rightarrow G_Y) & = \sum_{\substack{(F_1,F_2,\ldots,F_{k})\mid \\ (\forall j\in [1,k])(F_j\subseteq G_Y,\,F_j\in L_{a_j})\\ \bigcup_{j=1}^{k}(VF_j)\, =\, Y} } \left((-1)^{\sum_{j=1}^{k} r(F_j)} 2^{\sum_{j=1}^{k} s(F_j)}\right) \end{split}$$ Thus we have grouped together tuples of elementary subgraphs in groups that span each subset of $X$. \[lem-pd2cdec\] If $A$ is a sequence defined over $[2,v(G)-1]$ then $c(A\rightarrow G)$ is reconstructible from the complete polynomial deck of $G$. As in the proof of Lemma \[lem-exp-c\], $p(A \rightarrow G_X)$ can be computed for each induced subgraph $G_X$. By Möbius inversion of Equation (\[eq-pc\]), we write $$\label{eq-pd-mobius} c(A\rightarrow G_X) = \sum_{Y\subseteq X}(-1)^{|X\backslash Y|} p(A\rightarrow G_Y)$$ But we cannot compute the RHS of Equation (\[eq-pd-mobius\]) because, in general, for $X\subsetneq VG$, we do not know which polynomials in $\mathcal{PD}(G)$ correspond to the induced subgraphs of $G_X$. But this is not a problem if $X = VG$. We can write $$c(A\rightarrow G) = \sum_{Y\subseteq VG}(-1)^{|VG\backslash Y|} p(A\rightarrow G_Y)$$ Now the RHS, and hence $c(A\rightarrow G)$, can be computed. [**Remark.**]{} Note that we would not be able to compute $p(A\rightarrow G)$ if we defined $A$ in $[2,v(G)]$, because we do not know $c_{v(G)}(G)$. Let $\lambda (m,p) \equiv (x_1, x_2, \ldots , x_p)$ denote a partition of $m$. We assume that $x_1\, \geq \, x_2\, \ldots \, \geq x_p$. We write $\lambda (m) $ when the number of parts $p$ is not relevant. Also, we just write $\lambda $ instead of $\lambda (m,p)$ when $m$ and $p$ are either understood from the context or not relevant. Another partition $\lambda ^\prime (m,p+1) $ can be obtained from $\lambda (m,p)$ by replacing an $x_i$ by $y$ and $z$ such that $y+z = x_i$, and ordering the numbers in a non-increasing order. Any partition that is obtained from $\lambda (m,p)$ by a sequence of such operations is called a [*refinement*]{} of $\lambda (m,p)$. Also, $\lambda (m,p)$ is a trivial refinement of itself. 
If $\lambda ^\prime (m,q)$ is a refinement of $\lambda (m,p)$, then we denote it by $\lambda ^\prime (m,q) \preceq \lambda (m,p)$. The relation $\preceq $ between partitions is a partial order. Now consider partitions in which the smallest part $x_p$ is at least 2. Associated with each such partition $\lambda (m,p)$, there is a unique elementary graph $F_{\lambda }\in L_m$, whose $i$’th component is a cycle of length $x_i$, or an edge if $x_i\, =\, 2$. If the sequence $A\equiv (a_i)_{i=1}^k$ is defined such that $\lambda \equiv (a_1,a_2, \ldots, a_k)$ is a non-trivial partition of $v(G)$, then we denote $c(A\rightarrow G)$ by $c(\lambda \rightarrow G)$. \[lem-refinement\] Let $\lambda \equiv (a_1,a_2, \ldots, a_k)$ be a non-trivial partition of $n = v(G)$. Then, $$\label{eq:refinement} \begin{split} c(\lambda \rightarrow G) & = \sum_{\lambda ^\prime \preceq \lambda} c(\lambda \rightarrow F_{\lambda ^\prime}) \left[\begin{array}{c}G\\F_{\lambda ^\prime}\end{array}\right] \end{split}$$ From the definition of $c(A\rightarrow G)$, we write $$\begin{split} c(\lambda\rightarrow G) & = \sum_{\substack{(F_1,F_2,\ldots,F_{k})\mid \\ (\forall j\in [1,k])(F_j\subseteq G,\,F_j\in L_{a_j})\\ \bigcup_{j=1}^{k}(VF_j)\, =\, VG} } \left((-1)^{\sum_{j=1}^{k} r(F_j)} 2^{\sum_{j=1}^{k} s(F_j)}\right)\\ & = \sum_{F\subseteq G, F\in L_n} \sum_{\substack{(F_1,F_2,\ldots,F_{k})\mid \\ (\forall j\in [1,k])(F_j\subseteq F,\,F_j\in L_{a_j})\\ \bigcup_{j=1}^{k}F_j\, =\, F} } \left((-1)^{\sum_{j=1}^{k} r(F_j)} 2^{\sum_{j=1}^{k} s(F_j)}\right)\\ & = \sum_{\substack{F\subseteq G,F\in L_n}} c(\lambda \rightarrow F)\\ & = \sum_{\lambda ^\prime \preceq \lambda} c(\lambda \rightarrow F_{\lambda ^\prime}) \left[\begin{array}{c}G\\F_{\lambda ^\prime}\end{array}\right] \end{split}$$ The last step may be explained as follows: if $F$ is a disjoint union of elementary graphs $F_j\in L_{a_j}; j \in [1,k]$, where $v(F) = v(G) = n$, then $F$ is isomorphic to an elementary graph $F_{\lambda ^\prime}$ for some refinement ${\lambda ^\prime}$ of $\lambda$. Trivially, if each $F_j$ is the cycle $C_{a_j}$ then $F = F_{\lambda}$. We then group the terms $c(\lambda \rightarrow F)$ by the isomorphism type of $F$. \[lem-non-hamiltonian\] If $F$ is an elementary graph on $n = v(G)$ vertices, other than the cycle, then $\left[\begin{array}{c}G\\F\end{array}\right]$ is reconstructible from $\mathcal{PD}(G)$. Since $F$ is not a cycle, it is isomorphic to $F_{\lambda_0}$ for a unique non-trivial partition $\lambda_0 $ of $v(G)$. From Equation (\[eq:refinement\]) we write $$\label{eq:nonh1} \begin{split} \left[\begin{array}{c}G\\F_{\lambda_0}\end{array}\right] &= \frac{1}{c(\lambda_0 \rightarrow F_{\lambda_0 })} \left( c(\lambda_0 \rightarrow G) -\sum_{\lambda \prec \lambda_0} c(\lambda_0 \rightarrow F_{\lambda }) \left[\begin{array}{c}G\\F_{\lambda}\end{array}\right] \right) \end{split}$$ Now we expand $\left[\begin{array}{c}G\\F_{\lambda}\end{array}\right]$ on the RHS of the above equation by repeated application of the same equation, and obtain the following solution. $$\label{eq:nonh2} \left[\begin{array}{c}G\\F_{\lambda_0}\end{array}\right] = \sum_{\lambda_q \prec \lambda_{q-1} \prec \ldots \prec \lambda_0} \frac{\left(-1\right)^q c(\lambda_q \rightarrow G) \prod_{i=0}^{q-1}c(\lambda_i\rightarrow F_{\lambda_{i+1}})} {\prod_{i=0}^qc(\lambda_i\rightarrow F_{\lambda_i})}$$ where the summation is over all chains $\lambda_q \prec \lambda_{q-1} \prec \ldots \prec \lambda_0$; $q \geq 0$, and an empty product is 1.
There are finitely many terms in the above summation since there are finitely many refinements of $\lambda_0$. Since $\lambda_0 $ is a non-trivial partition of $n$ (that is, $x_i < v(G)\,\forall\, i$), by Lemma \[lem-pd2cdec\], $c(\lambda \rightarrow G)$ is reconstructible for each $\lambda \preceq \lambda_0$. Also, for each $\lambda \preceq \lambda_0$, $c(\lambda \rightarrow F_{\lambda })$ is non-zero. (Here we would like to repeat that we have considered only those partitions in which the smallest part is at least 2.) Thus the RHS can be computed. The main theorem in this section now follows from the above lemmas. \[thm-deg1\] If $F$ is an elementary graph on $v(G)$ vertices, other than a cycle, then $\left[\begin{array}{c}G\\F\end{array}\right]$ is reconstructible from $\mathcal{PD}(G)$. Therefore, if there is a vertex of degree 1 in $G$, then the characteristic polynomial of $G$ is reconstructible from its complete polynomial deck. The degree sequence of a graph is reconstructed from its complete polynomial deck as follows. Consider the polynomials of degree $v(G)-1$ in $\mathcal{PD}(G)$. They are the characteristic polynomials of the vertex deleted subgraphs $G-u$ of $G$ for $u\in VG$. Since $c_2(G)$ and $c_2(G-u)$ count the number of edges of $G$ and $G-u$, respectively, we know the degree of $u$ in $G$ for each $u \in VG$. Thus the premise of Theorem \[thm-deg1\] is recognised from $\mathcal{PD}(G)$. The coefficients $c_i(G);i<v(G)$ can be computed using Lemma \[lem-derivative\]. Since there are no hamiltonian cycles in $G$, Lemma \[lem-non-hamiltonian\] implies that the constant term in the characteristic polynomial of $G$ can be calculated. [**Remark.**]{} Whenever non-hamiltonicity of a graph is recognised from its complete polynomial deck, its characteristic polynomial can be computed as well. Whitney’s Theorem {#sec-whitney} ================= In Section \[sec-intro\], it was stated that the computation of the chromatic polynomial of a graph requires only non-separable induced subgraphs of the graph. Whitney’s proof of this fact was based on his theorem that separable subgraphs can be counted from the counts of non-separable subgraphs. Let $n_t(G)$ denote the number of subgraphs of $G$ of type $t$, where ‘type’ of a graph is determined by the number of blocks of each isomorphism type. He proved (stated in the terminology of [@biggs1993]) that there is a polynomial $\phi_t$, independent of $G$, such that $$n_t(G) = \phi_t(n_{\sigma}(G), n_{\rho}(G), ...)$$ where $\sigma$, $\rho$, ... are non-separable types with no more edges than $t$. Here we prove the result using Kocay’s Lemma. Our presentation explicitly describes the polynomial in Whitney’s theorem. Let $S_0 = \{F_1, F_2, \ldots ,F_k\}$ be a family of non-separable graphs, some of them possibly isomorphic. Thus $S_0$ represents a ‘graph type’. Extending the notation introduced earlier, we write $\left[\begin{array}{c}G\\S_0\end{array}\right]$ to denote the number of subgraphs of $G$ of type $S_0$. We define a partial order $\preceq $ on graph types as follows. Let $S$ be a graph type. We say that $S\preceq S_0$ if $c(S_0,X)$ is non-zero for some graph $X$ of type $S$. It is easily seen that $c(S_0,X)$ depends only on the type $S$ of $X$, not on a particular choice of $X$. So, we write it as $c(S_0,S)$.
For any graph $G$, by Kocay’s Lemma \[lem-kocay\], $$p(S_0) = \prod_{i = 1}^{k} \left[\begin{array}{c}G\\F_i\end{array}\right] = \sum_{X} c(S_0,X)\left[\begin{array}{c}G\\X\end{array}\right]$$ Terms on the RHS can be grouped together according to the types of $X$, so we can write $$\begin{split} p(S_0) &= \sum_{S\preceq S_0} c(S_0,S)\left[\begin{array}{c}G\\S\end{array}\right] \\ &= c(S_0,S_0)\left[\begin{array}{c}G\\S_0\end{array}\right]+\sum_{S\prec S_0} c(S_0,S)\left[\begin{array}{c}G\\S\end{array}\right] \end{split}$$ Therefore, $$\left[\begin{array}{c}G\\S_0\end{array}\right] = \frac{1}{c(S_0,S_0)} \left( p(S_0)-\sum_{S\prec S_0}c(S_0,S)\left[\begin{array}{c}G\\S\end{array}\right] \right)$$ Now we repeatedly apply the same equation to $\left[\begin{array}{c}G\\S\end{array}\right]$ on the RHS, as we did in Equation (\[eq:nonh2\]). We thus get the polynomial of Whitney’s theorem. \[thm-whitney1\] $$\label{eq-whitney} \left[\begin{array}{c}G\\S_0\end{array}\right] = \sum_{S_q \prec S_{q-1} \prec \ldots \prec S_0} \frac{\left(-1\right)^qp(S_q)\prod_{i=0}^{q-1}c(S_i,S_{i+1})} {\prod_{i=0}^qc(S_i,S_i)}$$ where the summation is over all chains $S_q \prec S_{q-1} \prec \ldots \prec S_0$; $q \geq 0$, and an empty product is 1. There are finitely many terms in the summation in Equation (\[eq-whitney\]) because each $S_q$ has fewer blocks in it than $S_{q-1}$. While some other known proofs of this theorem are based on not very different ideas (for example, see [@biggs1978]), the above explicit formulation of the polynomial seems new. It allows us to argue about the reconstructibility of the characteristic polynomial more directly than in other standard proofs. \[cor-w2p\] The characteristic polynomial of a graph is reconstructible from its vertex deck. Let $G$ be the graph under consideration. Its elementary spanning subgraphs other than the hamiltonian cycles are counted as in the standard proof. Let $n = v(G)$, and let $S_0$ be the type of an $n$-vertex elementary graph $H$ other than $C_n$. Any block in a type $S \prec S_0$ has fewer than $n$ vertices, and $H$ is the only graph of type $S_0$ that has $n$ vertices. So $\left[\begin{array}{c}G\\S_0\end{array}\right]$ can be counted using Whitney’s Theorem \[thm-whitney1\] and Kelly’s Lemma \[lem-kelly\]. To count hamiltonian cycles, we set $S_0 = \{nK_2\}$, that is, a graph type consisting of $n$ blocks, each one of them a $K_2$. Since no subgraph of $G$ has $n$ blocks isomorphic to $K_2$, $\left[\begin{array}{c}G\\S_0\end{array}\right] = 0$. On the RHS of Equation (\[eq-whitney\]), there is precisely one term that contributes hamiltonian cycles. That is, $S_1 = \{C_n\} \prec S_0$ is the unique chain that contributes $ham(G)$, implying that the terms containing $ham(G)$ cannot cancel out. All other blocks that appear in Equation (\[eq-whitney\]) have fewer than $n$ vertices, so can be counted using Kelly’s Lemma \[lem-kelly\]. Therefore, $ham(G)$ is reconstructible. The reconstructibility of $P(G;\lambda)$ now follows from Lemma \[lem-sachs\]. [**Remark.**]{} In the standard proof of the reconstructibility of the characteristic polynomial, one applies Kocay’s Lemma directly. As a result one has to proceed step by step, counting spanning trees, then spanning unicyclic subgraphs, etc., as we did in Corollary \[cor-trees\]. These intermediate steps are skipped by the direct application of Whitney’s theorem.
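The counting identities above also lend themselves to quick computational sanity checks. The following sketch is not part of the argument; it assumes the Python `networkx` package, uses brute-force cycle counting (so it is practical only for very small graphs), and simply verifies the vertex-deck identity $(v(G)-i)\,\psi_i(G)=\sum_{u\in VG}\psi_i(G-u)$ that underlies Lemma \[lem-kelly-cycles\].

```python
from itertools import combinations, permutations
import networkx as nx

def psi(G, i):
    """Brute-force count of the cycles of length i in G (each cycle counted once)."""
    count = 0
    for S in combinations(G.nodes, i):
        for order in permutations(S):
            # check that the vertex sequence closes into a cycle of G
            if all(G.has_edge(order[k], order[(k + 1) % i]) for k in range(i)):
                count += 1
    return count // (2 * i)   # every i-cycle is traced 2*i times (i rotations, 2 directions)

G = nx.gnp_random_graph(7, 0.5, seed=1)   # a small random test graph
n = G.number_of_nodes()
for i in range(3, n):                     # the identity requires i < v(G)
    deck_sum = sum(psi(G.subgraph(set(G) - {u}), i) for u in G)
    assert (n - i) * psi(G, i) == deck_sum
print("Kelly-type cycle identity verified for all i < v(G).")
```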
Problems and discussion ======================= Expressing $c_n(G)$ or $\psi_n(G)$, where $n = v(G)$, as polynomials in $c_j(G-S)$ or $\psi_j(G-S)$; $S\subsetneq VG$, would be of interest. Alternatively, we would like to construct a generalisation of the characteristic polynomial which can be computed more naturally from the poset of induced subgraphs, and from which the characteristic polynomial can be easily computed. Such a goal is motivated on the one hand by the proofs in Section \[sec-nmatrix\], and, on the other hand, by similar generalisations of the chromatic polynomial, viz. Stanley’s chromatic symmetric function (see [@stanley1995] & [@stanley1999]), and another recent two variable generalisation of the chromatic polynomial [@dohmen2003]. Both these generalisations are closely related to the lattice of connected partitions of $VG$ (see [@stanley1995] for definitions). A relationship between the poset of induced subgraphs defined in this paper and the lattice of connected partitions of the vertex set defined by Stanley could possibly be established using Kocay’s Lemma. Such a result would be useful in understanding the exact relationship between different expansions (and reconstructibility) of several important invariants in a unified way. The reconstruction of the number of hamiltonian cycles is difficult and indirect in the proofs we have presented here, and in the original proof by Tutte as well. A reason for this difficulty is seen in Whitney’s theorem. Observe that for an $n$-vertex graph $G$, the polynomial of Whitney’s theorem contains a term in $ham(G)$ only if the type $S_0$ contains $n$ copies of $K_2$, or a $C_n$, and possibly other types of blocks. As a result, in Corollary \[cor-w2p\] we had to count all possible blocks with at most $n$ edges. But can we count the number of hamiltonian cycles from the $\mathcal{N}$-matrix at least as clearly as in Corollary \[cor-w2p\]? Towards this goal, we would like to understand the relationship between the structure of the edge labelled poset for separable graphs and that for blocks, and prove a generalisation of Whitney’s theorem. The crucial difference between the proofs in Section \[sec-nmatrix\] and Section \[sec-poly\] is in Lemmas \[lem-exp-c\] and \[lem-pd2cdec\]. In Lemma \[lem-pd2cdec\], the use of Möbius inversion was limited to the computation of $c(A\rightarrow G)$ because we did not know the partial order on the induced subgraphs. This suggests that a general ‘expansion’ for the number of hamiltonian cycles would probably involve a summation over chains in $\mathcal{ELP}(G)$. Therefore, counting hamiltonian cycles from $\mathcal{PD}(G)$ and the original problem of Gutman and Cvetković seem difficult. This is probably why many known results on the reconstruction of the characteristic polynomial of a graph from its characteristic polynomial deck assume the graph to contain several pendant vertices. We propose the following generalisation of reconstruction for studying questions similar to the one posed by Gutman and Cvetković. Suppose $f$ is a graph invariant, and we are interested in reconstructing $G$ or partial invariants of $G$ from the deck $\mathcal{D}(G;f) = \{f(G-u); u \in VG\}$. A new collection $\mathcal{D}!(G;f)$ is recursively defined as $\{(f(G-u), \mathcal{D}!(G-u;f)); u\in VG\}$. We then define an equivalence relation $\sim $ on graphs such that $H_1 \sim H_2$ if $(f(H_1), \mathcal{D}!(H_1)) = (f(H_2),\mathcal{D}!(H_2))$.
This relation gives an incidence matrix (or an edge labelled poset) on the types of induced subgraphs of $G$, where ‘type’ refers to an equivalence class under the relation $\sim $ defined above. It can be shown that for many invariants $f$, Ulam’s conjecture is true if and only if all graphs $G$ on more than 2 vertices are reconstructible from $\mathcal{D}!(G;f)$. One example of such an invariant is: $f(G) = 1$ if $G$ has a vertex of degree 1, and $f(G) = 0$ otherwise. Another example is $f(G) = P(G;\lambda)$. The proof of this is similar to that of Proposition \[prop:equiv-un\]: the base case follows from the fact that any three vertex graph $G$ is completely determined by $\mathcal{D}!(G;f)$ for the above invariants. The problem of reconstructing $G$ from the deck $\mathcal{D}!(G;f)$ is similar to the generalisation of the reconstruction problem suggested by Tutte, (Notes on pp. 123-124 in [@tutte1984]). We are not really interested in the question of computing $f(G)$ from $\mathcal{D}(G;f)$. Rather we ask the question - what are the incomplete invariants $f$, (that is, the invariants that do not determine a graph completely,) and classes of graphs $G$, for which $\mathcal{D}!(G;f)$ could be constructed from $\mathcal{D}(G;f)$? If we could construct $\mathcal{D}!(G;f)$ from $\mathcal{D}(G;f)$, then we could also prove all the results of Section \[sec-nmatrix\]. We would like to investigate this question when $f(G) = P(G;\lambda)$, and when $f(G) = (P(G;\lambda), P(\bar{G};\lambda))$ - the invariant which was considered by Hagos [@hagos2000]. Acknowledgements {#acknowledgements .unnumbered} ---------------- I take this opportunity to thank Allan Wilson Centre for the support and encouragement. I would also like to thank the referee for several useful suggestions for improving the presentation. [10]{} N. Biggs. On cluster expansions in graph theory and physics. , 29:159–173, 1970. N. Biggs. . Cambridge University Press, 1993. A. Bondy. A graph reconstructor’s manual. , pages 221–252, 1991. K. Dohmen, A. Pönitz, and P. Tittmann. A new two variable generalisation of the chromatic polynomial. , 6:069–090, 2003. I. Gutman and D. M. Cvetković. The reconstruction problem for the characteristic polynomial of graphs. , pages 45–48, 1975. Elias M. Hagos. The characteristic polynomial of a graph is reconstructible from the characteristic polynomials of its vertex-deleted subgraphs and their complements. , 7:\#R12, 2000. P. J. Kelly. A congruence theorem for trees. , 7:961–968, 1957. W. L. Kocay. On reconstructing spanning subgraphs. , 11:301–313, 1981. A. J. Schwenk. On the eigen values of a graph. In L. W. Beineke and R. J. Wilson, editors, [*Selected Topics in Graph Theory*]{}, pages 307–336. Academic Press, New York, 1979. I. Sciriha. Polynomial reconstruction and terminal vertices. , 356:145–156, 2002. R. P. Stanley. A symmetric function generalisation of the chromatic polynomial of a graph. , 111:166–194, 1995. R. P. Stanley. , volume 1. Cambridge University Press, 1997. R. P. Stanley. , volume 2. Cambridge University Press, 1999. W. T. Tutte. All the king’s horses. In J.A.Bondy and U.S.R.Murthy, editors, [*Graph Theory and Related Topics, (Proceedings of the conference held in honour of W. T. Tutte on the occasion of his 60th birthday, Waterloo 1977)*]{}, pages 15–33. Academic Press, New York, 1979. W. T. Tutte. , volume 21. Addison-Wesley Publishing Co., Reading, Mass., 1984. S. Ulam. . Wiley (Interscience), New York, 1960. H. Whitney. A logical expansion in mathematics. , 38:572–579, 1932.
--- abstract: 'Recent $\gamma$-ray observations suggest that the $\gamma$-ray millisecond pulsar (MSP) population is separated into two sub-classes with respect to the pair multiplicity. Here, we calculate the cosmic ray electron/positron spectra from MSPs. Based on the assumption of the equipartition in the pulsar wind region the typical energy of electrons/positrons ejected by a MSP with the pair multiplicity of order unity is $\sim50$ TeV. In this case, we find that a large peak at 10 - 50 TeV energy range would be observed in the cosmic ray electron/positron spectrum. Even if the fraction of pair starved MSPs is 10%, the large peak would be detectable in the future observations. We also calculate the contribution from MSPs with high pair multiplicity to the electron/positron spectrum. We suggest that if the multiplicity of dominant MSP population is $\sim 10^3$, electrons/positrons from them may contribute to the observed excess from the background electron/positron flux and positron fraction.' author: - | Shota Kisaka$^{1 \ast}$, Norita Kawanaka$^{2 \star}$\ $^1$ Department of Physics, Hiroshima University, Higashi-Hiroshima 739-8526, Japan\ $^2$ Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem, 91904, Israel\ Email: $^\ast$ kisaka@theo.phys.sci.hiroshima-u.ac.jp, $^\star$ norita@phys.huji.ac.il title: TeV Cosmic Ray Electrons from Millisecond Pulsars --- stars: neutron — cosmic rays. INTRODUCTION ============ The [*Fermi*]{} Gamma-Ray Space Telescope has detected $\gamma$-ray pulsed emissions from more than twenty millisecond pulsars (MSPs) [@Ab11], which have a rotation angular frequency $\Omega\sim 10^3$s$^{-1}$ and a stellar surface magnetic field $B_s\sim 10^{8.5}$G. The detection of the GeV emissions from a pulsar magnetosphere means that electrons and positrons are accelerated to more than $\sim$ TeV by the electric field parallel to the magnetic field, which arises in a depleted region of the Goldreich-Julian (GJ) charge density [@GJ69]. The $\gamma$-ray light curve is an important tool for probing the particle acceleration process in the pulsar magnetosphere. Therefore, the $\gamma$-ray emission region has been explored by comparing theoretical models such as polar cap [@DH96], outer gap [@CHR86] and slot gap models [@MH04] with the observed light curve (e.g., Venter, Harding & Guillemot 2009; Romani & Watters 2010; Kisaka & Kojima 2011). @VHG09 fitted the pulse profiles of the [*Fermi*]{} detected MSPs with the geometries of $\gamma$-ray emission region predicted by different theoretical models. They found that the pulse profiles of six of eight MSPs could be fitted by the geometries of either the outer gap or the slot gap model, as was the case of canonical pulsars. They interpreted that copious pairs are produced in the magnetosphere of these MSPs. However, @VHG09 also found that the pulse profiles of remaining two MSPs show the unusual behavior in the $\gamma$-ray light curves and could not be fitted by the geometry of either the outer gap or the slot gap models. They proposed that these unusual light curves could be fitted by the pair starved polar cap model [@MH04b], in which the multiplicity of the pairs is not high enough to completely screen the electric field above the polar cap, and the particles are continuously accelerated up to high altitude over the entire open field line region. Thus, from the model fitting of the $\gamma$-ray light curves, @VHG09 suggested that the $\gamma$-ray MSP population is separated into two sub-classes. 
The important fact is that radio pulsed emission is also detected from all currently detected $\gamma$-ray MSPs and is remarkably similar to that from canonical pulsars. The pulsar radio emission is a highly coherent process because the brightness temperature is extremely high. In theoretical models of the radio emission mechanisms, some authors have assumed that the radio emitting region contains a highly relativistic primary beam with a large Lorentz factor ($\sim 10^7$) and a number density nearly equal to the GJ density, together with a secondary electron/positron plasma with a relatively small bulk streaming Lorentz factor ($\sim 10$ - $10^3$) and a large pair multiplicity ($\sim 10^3$ - $10^5$) (e.g., Melrose 1995; Lyutikov, Blandford & Machabeli 1999; Gedalin, Gruman & Melrose 2002). However, the existence of pair starved MSPs suggests that the radio emission mechanisms should be insensitive to the particle number density down to sub-GJ number density. The pulsar radio emission mechanism is still poorly understood, so that observationally-based constraints are valuable [@M95]. Therefore, an independent verification of the extent of the MSP multiplicity, especially of the existence of the pair starved MSPs, is important for the pulsar radio emission mechanisms. Recently, HESS has discovered a new TeV source, which is located in the close vicinity of the globular cluster Terzan 5 [@HE11]. Several globular clusters, including Terzan 5, also emit GeV $\gamma$-rays [@Ab09b; @KHC10; @Ab10b; @Ta11], which may plausibly be due to a number of MSPs residing in these clusters [@HUM05; @VDC09]. Thus, inverse Compton scattering by the high-energy particles ejected from MSPs has been proposed as the origin of the observed TeV emission [@BS07; @VDC09]. The high-energy electron/positron spectrum ejected from MSPs would be a useful probe for the multiplicity of the MSPs. However, from the TeV spectra alone, we cannot distinguish between the two models [@BS07; @VDC09], which assume different pair multiplicities [@HE11]. Another way to investigate the electron/positron spectrum ejected from MSPs is its direct measurement. Since high-energy electrons/positrons can propagate only about a few kpc due to energy losses by synchrotron and inverse Compton emission, the direct detection of the electrons/positrons ejected from MSPs in the globular clusters is unlikely. However, for the following reasons, we may detect those from nearby MSPs. MSPs have much lower spin-down luminosity than canonical pulsars. @BVD08 investigated the possible contribution of the nearby MSP PSR J0437-4715 to the cosmic ray electron/positron spectrum. They concluded that unlike canonical pulsars such as the Geminga pulsar, the contribution from a MSP to the observed electron/positron flux is negligible. However, since the lifetime of MSPs is much longer than that of canonical pulsars ($>10^{10}$ yr), there should be many more nearby active MSPs. Furthermore, Kashiyama, Ioka & Kawanaka (2011; hereafter KIK11) pointed out that since white dwarf pulsars have a long lifetime and continue to inject electrons/positrons after the nebulae stop expanding, the adiabatic energy losses of electrons/positrons in the pulsar wind nebula region are negligible. Also, the synchrotron cooling of electrons/positrons is small, and the high-energy electrons/positrons can escape the nebulae without losing much energy. Although they considered the case of white dwarfs, their results are also applicable to MSPs.
Therefore MSPs could potentially contribute to the observed high energy cosmic ray electrons/positrons and will be detectable by the next generation experiments, such as CALET [@To08] and CTA [@CTA]. In this paper, we investigate the contribution of electrons/positrons ejected from the MSPs to the observed cosmic ray spectrum. In section 2, we apply KIK11 model to the case of MSPs. We estimate the typical energy of electrons/positrons from the MSPs and show that during the propagation in pulsar wind nebulae, the adiabatic losses and radiative cooling of electrons/positrons are not so large. We also describe the propagation in the interstellar medium (ISM). In section 3, we calculate the energy spectrum of cosmic ray electrons/positrons from the MSPs and show the possibility that the electrons/positrons from these MSPs are detectable for the future observations. THE MODEL ========= Acceleration and cooling ------------------------ In order to estimate the energy of electrons/positrons available in the wind region and their adiabatic and radiative cooling in the shocked region, we adopt the model of KIK11. For a pulsar wind nebula formed by a MSP, we consider the conditions that the relativistic wind blasts off from the light cylinder $\sim R_{\rm lc}=c/\Omega$ where $c$ is the speed of light, and two shock fronts are formed between the supersonic pulsar wind and ISM. Since the energy from a MSP is continuously transported to the wind, the shock fronts are expanding until the pressure of the shocked region $P_{\rm sh}$ becomes equal to that of ISM $p$. Although KIK11 considered the case of white dwarf pulsars, the situations are similar to the case of MSPs because they have a long lifetime and the supernova shock front have already decayed. We assume that the effects of binary companion are negligible, because the fraction of solid angle occupied by companion is small ($<$1%). We also neglect radiative loss due to curvature radiation within light cylinder ($\sim 10$%). From now on we set fiducial parameters of the MSP’s surface magnetic field strength, angular frequency and radius as $B_0 = 10^{8.5}$G, $\Omega = 10^3$s$^{-1}$ and $R = 10^6$cm, respectively. We assume the energy equipartition between particles and magnetic field, $\varepsilon_eN=B^2/8\pi$, and the conservation of the particle number flux, $4\pi r^2cN\sim$ constant, in the MSP wind region. Here, $N$ is the number density of electrons/positrons. The number density can be described as $$N=N_{\rm lc}\left(\frac{R_{\rm lc}}{r}\right)^2=\frac{B_{\rm lc}\Omega\kappa}{2\pi ce}\left(\frac{R_{\rm lc}}{r}\right)^2,$$ where $\kappa$ is the multiplicity of electrons/positrons, $N_{\rm lc}$ and $B_{\rm lc}$ are the number density and the magnetic field at the light cylinder, respectively. We assume the magnetic field configuration as pure dipole ($B\propto r^{-3}$) within light cylinder. Outside the light cylinder, we assume the conservation of the energy flux of the magnetic field, $Br\sim$ constant. 
Using these assumptions, the typical energy of electrons/positrons $\varepsilon_e$ can be described as $$\varepsilon_e = \frac{e\psi_{\max}}{\kappa} \sim 50\kappa^{-1}\left(\frac{B_0}{10^{8.5}{\rm G}}\right)\left(\frac{\Omega}{10^3{\rm s^{-1}}}\right)^2\left(\frac{R}{10^{6}{\rm cm}}\right)^3 {\rm TeV}, \label{eq:2.1}$$ where $\psi_{\rm max}$ is the electric potential difference across the open magnetic field lines described as $$\psi_{\max}=\frac{B_0\Omega^2R^3}{2c^2}\sim 5\times10^{13}\left(\frac{B_0}{10^{8.5}{\rm G}}\right)\left(\frac{\Omega}{10^3{\rm s^{-1}}}\right)^2\left(\frac{R}{10^{6}{\rm cm}}\right)^3 {\rm Volt}. \label{eq:2.2}$$ These values are similar to those in the case of white dwarf pulsars (KIK11). Note that the typical energy depends on the pair multiplicity. Next, we estimate the adiabatic and the radiative cooling of electrons/positrons in the shocked region. The outer shock of the pulsar wind nebula finally decays when the pressure of the shocked region $P_{\rm sh}$ becomes equal to that of the ISM $p$. If the outer shock decaying time is shorter than the lifetime of MSP, the adiabatic cooling is negligible. In order to estimate the outer shock decaying timescale, we solve the equation of motion and the energy conservation law at the outer shock front. The equation of motion can be described as $$\label{eos} \frac{d}{dt} \left\{ \frac{4 \pi}{3} R_{\rm out}^3 \rho \frac{dR_{\rm out}}{dt} \right\} = 4\pi R_{\rm out}^2 P_{\rm sh},$$ where $R_{\rm out}$ is the radius of the outer shock front, $P_{\rm sh}$ is the pressure of the shocked region and $\rho$ is the density of ISM. The energy equation is $$\label{ece} \frac{d}{dt} \left\{ \frac{4 \pi}{3} R_{\rm out}^3 \frac{3}{2} P_{\rm sh} \right\} = L_{\rm sd} - P_{\rm sh} \frac{d}{dt} \left\{ \frac{4 \pi}{3} R_{\rm out}^3 \right\} ,$$ where $L_{\rm sd}$ is the spin-down luminosity of MSP, $$L_{\rm sd} =\frac{B^2_0\Omega^4R^6}{2c^3} = 2\times 10^{33}\left(\frac{B_0}{10^{8.5}{\rm G}}\right)^2\left(\frac{\Omega}{10^3{\rm s^{-1}}}\right)^4\left(\frac{R}{10^{6}{\rm cm}}\right)^6 {\rm erg}\ {\rm s}^{-1}.$$ In the derivation of eq.(\[ece\]), we assume that in the shocked region the internal energy of particles is $3P_{\rm sh}/2$ because the energy of particles is the relativistic regime. Using the typical value for the density of ISM $\rho\sim 10^{-24}{\rm g}\ {\rm cm}^{-3}$, the pressure in ISM is $p\sim 10^{-13} {\rm dyn}\ {\rm cm}^{-2}$. Solving above equations, the outer shock decays at about $$t_{\rm dec}\sim 10^6 \left(\frac{B_0}{10^{8.5}{\rm G}}\right)\left(\frac{\Omega}{10^3{\rm s^{-1}}}\right)^2\left(\frac{R}{10^{6}{\rm cm}}\right)^3 \left(\frac{T}{10^{3}{\rm K}}\right)^{-5/4} {\rm yr}, \label{eq:2.3}$$ where $T$ is the temperature of ISM. The lifetime of a MSP $\tau$ can be estimated as $$\tau=\frac{E_{\rm rot}}{L_{\rm sd}}, \label{eq:2.4}$$ where $E_{\rm rot}$ is the rotation energy described as $$E_{\rm rot}\sim 10^{52}\left(\frac{M}{1.4M_{\odot}}\right)\left(\frac{R}{10^6{\rm cm}}\right)^2\left(\frac{\Omega}{10^3{\rm s^{-1}}}\right)^2 {\rm erg}, \label{eq:2.5}$$ where $M$ is the mass of a MSP. For the fiducial parameters of MSP $$\tau\sim 5\times 10^{10} \left(\frac{M}{1.4M_{\odot}}\right)\left(\frac{B_0}{10^{8.5}{\rm G}}\right)^{-2}\left(\frac{\Omega}{10^3{\rm s^{-1}}}\right)^{-2}\left(\frac{R}{10^{6}{\rm cm}}\right)^{-4} {\rm yr}. \label{eq:2.7}$$ We find that the outer shock decays at a very early stage of the lifetime of MSP and does not expand after $t > t_{\rm dec}$. 
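For concreteness, the fiducial numbers quoted above can be reproduced with a few lines of Python. This is only an illustrative sketch in Gaussian cgs units; the conversion factor $1\,{\rm statvolt}\approx 300\,{\rm V}$ and the value of $c$ are the only inputs beyond the fiducial parameters.

```python
# Fiducial MSP parameters in Gaussian cgs units, as adopted in the text.
B0    = 10**8.5     # surface magnetic field [G]
Omega = 1.0e3       # spin angular frequency [s^-1]
R     = 1.0e6       # stellar radius [cm]
c     = 3.0e10      # speed of light [cm s^-1]
kappa = 1.0         # pair multiplicity (pair-starved case)

# Potential drop across the open field lines: psi_max = B0 Omega^2 R^3 / (2 c^2)
psi_max_statvolt = B0 * Omega**2 * R**3 / (2.0 * c**2)
psi_max_volt = psi_max_statvolt * 299.79      # 1 statvolt ~ 300 V

# Typical e+/- energy eps_e = e psi_max / kappa; in eV this is numerically psi_max in volts
eps_e_TeV = psi_max_volt / kappa / 1.0e12

# Spin-down luminosity L_sd = B0^2 Omega^4 R^6 / (2 c^3)
L_sd = B0**2 * Omega**4 * R**6 / (2.0 * c**3)

print(f"psi_max ~ {psi_max_volt:.1e} V")   # ~ 5e13 V
print(f"eps_e   ~ {eps_e_TeV:.0f} TeV")    # ~ 50 TeV for kappa = 1
print(f"L_sd    ~ {L_sd:.1e} erg/s")       # ~ 2e33 erg/s
```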
Therefore, similarly to the case of white dwarf pulsars (KIK11), the adiabatic cooling gives only a minor contribution to the cooling process of the high-energy electrons/positrons. Also for the radiative cooling by the synchrotron radiation in the shocked region $R_{\rm in} < r < R_{\rm out}$, we follow the discussion in KIK11 based on the diffusion in the shocked region. We take the Bohm limit, where the fluctuation of the magnetic field is comparable to the coherent magnetic field strength. In this limit, the timescale $t_{\rm diff}$ for the electron/positron trapping in the shocked region is given by $$t_{\rm diff}=\frac{d^2}{2D_{\rm sh}}=\frac{3}{2}\frac{eBd^2}{\varepsilon_ec},$$ where $D_{\rm sh}=cr_{\rm g}/3$ is the diffusion coefficient under the Bohm limit, $d$ is the size of the shocked region and $r_g=\varepsilon_e/eB$ is the Larmor radius of an electron/positron with energy $\varepsilon_e$. To estimate the diffusion timescale, we need to know the size and the magnetic field strength of the shocked region. Here we consider the time $t>t_{\rm dec}$, so that the size of the shocked region is of the order of the radius of the forward shock front at $t=t_{\rm dec}$, $d\sim R_{\rm out}(t=t_{\rm dec})\sim 3\times 10^{19}{\rm cm}$. For the radius of the inner shock front at $t>t_{\rm dec}$, we use the balance condition between the momentum transferred by the wind and the pressure of the ISM $$\frac{L_{\rm sd}}{4\pi R^2_{\rm in}c}=p.$$ For the fiducial parameters, $R_{\rm in}(t>t_{\rm dec})\sim 2\times 10^{17}$ cm. The strength of the magnetic field at the inner radius $B_{\rm in}$ can be estimated as $B_{\rm in}\sim 2 \times 10^{-6}$ G, which is almost the same as that of the ISM. Then, the diffusion timescale is $$t_{\rm diff}\sim 2 \times 10^4 \left(\frac{\varepsilon_e}{50 {\rm TeV}}\right)^{-1} {\rm yr}.$$ The synchrotron energy loss of a particle with energy $\varepsilon_e$ is described as $$\frac{d\varepsilon_e}{dt}=-\frac{4}{3}\sigma_{\rm T}c\beta^2\frac{B^2}{8\pi}\left(\frac{\varepsilon_e}{m_ec^2}\right)^2,$$ where $\sigma_{\rm T}$ is the Thomson scattering cross section, $\beta$ is the particle velocity normalized by the speed of light and $m_e$ is the mass of the electron/positron. Then, the typical energy loss $\Delta\varepsilon_e$ of the electrons/positrons with energy $\varepsilon_e$ can be estimated as $$\frac{\Delta\varepsilon_e}{\varepsilon_e}\sim 0.3\left(\frac{B_0}{10^{8.5}{\rm G}}\right)^2 \left(\frac{\Omega}{10^3{\rm s}^{-1}}\right)^4\left(\frac{R}{10^6{\rm cm}}\right)^6. \label{eq:2.8}$$ This means that the high-energy electrons/positrons injected into the shocked region lose roughly 30% of their energy by synchrotron radiation before diffusing out into the ISM. Therefore, as in the case of white dwarf pulsars (KIK11), we can conclude that the radiative energy loss of electrons/positrons in the pulsar wind nebula is not so large. The above expressions for the estimate of the energy losses are only applicable to the case that the velocity of a MSP is subsonic in the ISM. The observed velocities of MSPs are smaller than those of canonical pulsars on average [@Ho05]. However, some MSPs have large velocities and a few of them actually form bow shock nebulae [@St03; @HB06].
In this case, the size of the bow shock is described as (e.g., Wilkin 1996) $$R_{\rm bow}=\left(\frac{L_{\rm sd}}{4\pi c\rho V^2}\right)^{1/2}\sim 10^{16}\left(\frac{B_0}{10^{8.5}{\rm G}}\right)\left(\frac{\Omega}{10^3{\rm s^{-1}}}\right)^{2}\left(\frac{R}{10^{6}{\rm cm}}\right)^{3}\left(\frac{V}{10^7{\rm cm\ s^{-1}}}\right)^{-1} {\rm cm},$$ where $V$ is the velocity of a MSP. Due to the assumption of the energy equipartition, the strength of the magnetic field is $$B_{\rm bow}=\left(\frac{2L_{\rm sd}}{cR^2_{\rm bow}}\right)^{1/2}\sim 50\left(\frac{V}{10^7{\rm cm\ s^{-1}}}\right) \mu {\rm G}.$$ The ratio of the Larmor radius of electrons/positrons to the bow shock radius is $$\frac{r_{\rm g}}{R_{\rm bow}}\sim 0.5 \kappa^{-1}.$$ The fact that $r_{\rm g}/R_{\rm bow}$ is close to unity supports the picture that electrons/positrons may escape from the bow shock region. Therefore, in the case of $\kappa\sim 1$, high-energy electrons/positrons can escape with an efficiency of order unity [@B08; @BA10]. Even if we consider the case of $\kappa \gg 1$, the synchrotron loss can be estimated by using eqs. (11), (14) and (16) as $$\frac{\Delta\varepsilon_e}{\varepsilon_e}\sim 9\times 10^{-4}\left(\frac{B_0}{10^{8.5}{\rm G}}\right)^2 \left(\frac{\Omega}{10^3{\rm s}^{-1}}\right)^4\left(\frac{R}{10^6{\rm cm}}\right)^6\left(\frac{V}{10^7{\rm cm\ s^{-1}}}\right).$$ Therefore, we can conclude again that the radiative energy loss of electrons/positrons in the pulsar wind nebula is not so large. Diffusion in the interstellar medium ------------------------------------ The observed electron/positron spectrum after the propagation in the ISM is obtained by solving the diffusion equation $$\frac{\partial}{\partial t}f(t,r,\varepsilon_e)=D(\varepsilon_e)\nabla^2 f+\frac{\partial}{\partial \varepsilon_e}\left( P(\varepsilon_e)f \right) +Q(t,r,\varepsilon_e),$$ where $f(t,r,\varepsilon_e)$ is the energy distribution function of electrons/positrons, $D(\varepsilon _e)=D_0(1+\varepsilon_e/3{\rm GeV})^{\delta}$ is the diffusion coefficient, $P(\varepsilon_e)$ is the cooling function of the electrons/positrons which takes into account synchrotron emission and inverse Compton scattering during the propagation, and $Q(t,\varepsilon_e,r)$ is the injection term. Here we adopt $D_0=5.8\times 10^{28}{\rm cm}^2{\rm s}^{-1}$, $\delta=1/3$, which is consistent with the boron-to-carbon ratio according to the latest GALPROP code. Atoyan et al. (1995) showed a solution in the case of an instantaneous injection from a single point-like source, i.e. $Q(t, \varepsilon_e,r) \approx Q_0(\varepsilon_e) \delta(t-t_i)\delta(r)$. Then the observed spectrum $G(t,r,\varepsilon_e; \tilde{t})$ would be $$G(t,r,\varepsilon_e; \tilde{t})=\frac{Q_0(\varepsilon_{e,0})P(\varepsilon_{e,0})}{\pi ^{3/2} P(\varepsilon_e)d_{\rm diff}(\varepsilon_e, \varepsilon_{e,0})^3}\exp \left(-\frac{r^2}{d_{\rm diff}(\varepsilon_e, \varepsilon_{e,0})^2} \right),$$ where $\varepsilon_{e,0}$ is the energy at the time $\tilde{t}\,(<t)$ of the electrons/positrons which are cooled down to $\varepsilon_e$ at the time $t$, and $d_{\rm diff}$ is the diffusion length given by $$d_{\rm diff}=2\left[ \int_{\varepsilon_e}^{\varepsilon_{e,0}} \frac{D(x)dx}{P(x)} \right]^{1/2}.
\label{d_diff}$$ The cooling function $P(\varepsilon_e)$ can be described as $$P(\varepsilon_e)=\frac{4\sigma_T \varepsilon_e^2}{3m_e^2 c^3} \left[ \frac{B^2}{8\pi} + \int d\varepsilon_{\gamma} u_{\rm tot} (\varepsilon_{\gamma}) f_{\rm KN} \left( \frac{4\varepsilon_e \varepsilon_{\gamma}}{m_e^2 c^4} \right) \right],$$ where $u_{\rm tot}(\varepsilon_{\gamma})d\varepsilon_{\gamma}$ is the energy density of the interstellar radiation fields with photon energy between $\varepsilon_{\gamma}$ and $\varepsilon_{\gamma}+d\varepsilon_{\gamma}$ (including the cosmic microwave background, starlight, and dust emission; Porter et al. 2008), and $B$ is the interstellar magnetic field, which we here set to $1\mu {\rm G}$. The function $f_{\rm KN}(x)$ is the correction factor for the Klein-Nishina (KN) effect, which approaches unity when $x$ is much smaller than unity. The mathematical expression of $f_{\rm KN}$ can be found in Moderski et al. (2005). As shown in the last section, MSPs in general have a very long lifetime ($\tau \sim 5\times 10^{10}{\rm yr}$), comparable to the cosmic age. For a point-like source with a finite injection duration, taking into account the time dependence of the injection rate ($Q_0(\varepsilon_e)\rightarrow Q_0(\varepsilon_e, \tilde{t})$), the spectrum can be calculated by integrating $G(t,r,\varepsilon_e;\tilde{t})$ over $\tilde{t}$: $$f_1(t,r,\varepsilon_e; t_i)=\int_{t_i}^{t} d\tilde{t}~G\left( t,r,\varepsilon_e; \tilde{t} \right) ,$$ where $t_i$ is the time when the particle injection from the source started. Here we assume that the electron/positron injection spectrum can be described either by the mono-energetic distribution $$\label{mono} Q_0(\varepsilon_e,\tilde{t})\propto \left( 1+\frac{\tilde{t}-t_i}{\tau} \right)^{-2},$$ or by the power-law distribution $$\label{pl} Q_0(\varepsilon_e,\tilde{t})\propto \varepsilon_e^{-\alpha}\exp \left( -\frac{\varepsilon_e}{\varepsilon_{\rm cut}}\right) \left( 1+\frac{\tilde{t}-t_i}{\tau} \right)^{-2},$$ where $\alpha$ is the intrinsic power-law index of the electron/positron spectrum and $\varepsilon_{\rm cut}$ is the maximum electron/positron energy from a source. We can then calculate the average electron/positron spectrum by taking into account the birth rate of MSPs as follows: $$f_{\rm ave}(\varepsilon_e)=\int_0^{t_0} dt_i \int_0^{d_{\rm diff}(\varepsilon_e,\varepsilon_{e,i})} 2\pi r dr f_1(t_0,r,\varepsilon_e; t_i)R,$$ where $t_0$ is the cosmic age (i.e. the present time), and $R$ is the local pulsar birth rate (${\rm yr}^{-1}~{\rm kpc}^{-2}$). Here $\varepsilon_{e,i}$ is the energy at the time $t_i$ of the CR electrons/positrons that are cooled down to $\varepsilon_e$ at $t_0$. RESULTS AND DISCUSSIONS ======================= First, we calculate the cosmic ray electron/positron spectra from the pair starved MSPs. We set the pair multiplicity $\kappa= 1$, the lifetime $\tau= 5\times 10^{10}$ yr, the total energy $E_{\rm rot}= 10^{52}$ erg, the local birth rate $R = 3\times 10^{-9}$ yr$^{-1}$ kpc$^{-2}$ and the fraction of energy lost to synchrotron emission to 30% for each MSP. We assume that each MSP has the same parameter values ($B_0=10^{8.5}{\rm G}, \Omega=10^3{\rm s}^{-1}, R=10^6{\rm cm}$), because most MSPs have almost the same spin-down luminosity. For the injection distribution function, we assume the mono-energetic distribution of eq. (\[mono\]) with energy $\varepsilon_e=50$ TeV.
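The single-source solution above can be evaluated with a few lines of code. The sketch below is only an illustration of its ingredients, not the calculation used for the figures: the cooling function is approximated by its Thomson-limit form $P(\varepsilon_e)\simeq k\varepsilon_e^2$ with an assumed effective radiation energy density (so that $\varepsilon_{e,0}$ has a closed form), Klein-Nishina corrections are ignored, and all variable names and the example numbers in the last line are ours:

```python
import numpy as np

GeV = 1.602e-3                      # erg
kpc, yr = 3.086e21, 3.156e7         # cm, s
c, sigT, mec2 = 2.998e10, 6.652e-25, 8.187e-7   # cgs constants

# Thomson-limit cooling: P(eps) ~ k eps^2, with B = 1 microGauss and an
# assumed ~1 eV/cm^3 effective radiation field as a stand-in for u_tot
U_B, U_rad = (1.0e-6)**2 / (8.0 * np.pi), 1.602e-12
k = 4.0 / 3.0 * sigT * c * (U_B + U_rad) / mec2**2

D0, delta = 5.8e28, 1.0 / 3.0
D = lambda eps: D0 * (1.0 + eps / (3.0 * GeV))**delta   # diffusion coefficient [cm^2/s]
P = lambda eps: k * eps**2                               # cooling function [erg/s]

def eps0_of(eps, dt):
    """Energy at injection of a particle observed at eps after a time dt."""
    return eps / (1.0 - k * eps * dt)       # valid while k*eps*dt < 1

def d_diff(eps, eps0, n=400):
    """Diffusion length d_diff = 2 [ int_eps^eps0 D(x)/P(x) dx ]^(1/2)."""
    x = np.linspace(eps, eps0, n)
    y = D(x) / P(x)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))   # trapezoid rule
    return 2.0 * np.sqrt(integral)

def G(t, r, eps, t_tilde, Q0=lambda e: 1.0):
    """Atoyan-type point-source Green's function for injection at t_tilde."""
    e0 = eps0_of(eps, t - t_tilde)
    dd = d_diff(eps, e0)
    return Q0(e0) * P(e0) / (np.pi**1.5 * P(eps) * dd**3) * np.exp(-(r / dd)**2)

# example: 1 TeV electrons observed from a source 0.5 kpc away, injected 1e5 yr ago
print(G(t=1.0e5 * yr, r=0.5 * kpc, eps=1.0e3 * GeV, t_tilde=0.0))
```

The finite-duration spectrum $f_1$ and the local average $f_{\rm ave}$ then follow by integrating this Green's function over the injection time $\tilde{t}$, and over the source birth times and distances weighted by the birth rate, as written above.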
Even if we consider the power-law distribution eq.(\[pl\]), the energy range of the distribution is narrow, because the allowed range of the cutoff energy is only 50 - 80 TeV given the observational constraints from KASCADE/GRAPES/CASA-MIA [@KY09]. Thus the distribution should be nearly mono-energetic. Note that our local MSP birth rate is based on the local MSP surface density of 38$\pm$16 pulsars kpc$^{-2}$ for a 430 MHz luminosity above 1 mJy kpc$^2$ [@L08]. At present, 23 MSPs are detected within 1 kpc [@ATNF]. We can only detect MSPs that have their radio beam directed toward us and a radio flux larger than the detection threshold. However, the cosmic ray electrons/positrons ejected from MSPs are distributed isotropically because of the Galactic magnetic field, so that a large number of MSPs will contribute to the observed electron/positron spectrum. Therefore, our local MSP birth rate corresponds to a lower limit based on the current radio observations. In Figure \[fig:3.1\], the electron/positron flux from multiple pair starved MSPs is shown. The thick solid, dashed and dash-dotted lines show the total electron/positron spectra if the fraction of pair starved MSPs is 100%, 25% and 10%, respectively. The thin solid line shows the contribution of electrons/positrons from the pair starved MSPs when the fraction is 100%. The background flux is shown as a thin dashed line. We adopt a background model of an exponentially cutoff power law with an index of $-3.0$ and a cutoff at 1.5 TeV, which is similar to that shown in @Ah08 and reproduces the data in the $\sim$10 GeV - 1 TeV range well. It is very interesting that there is a large peak in the 10-50 TeV energy range. The existence of this peak cannot be ruled out by the current observations. The high-energy component is more enhanced for long-duration sources [@AAV95; @KIN10]. This is because the longer the injection lasts, the larger the fraction of fresh electrons/positrons that reach the Earth without losing much of their energy during propagation. MSPs are continuously injecting sources with a duration as long as the cosmic age, so that their spectrum has nearly the same shape as the injection spectrum, with a soft low-energy tail. This distinguishes them from other sources such as young pulsars, whose typical injection duration is only $\sim 10^4$-$10^5$ yr. The fitting results of @VHG09 indicate that the fraction of pair starved MSPs is 25%, although their sample contains only eight objects. Even if the fraction is 10%, the flux is $\sim 20$ m$^{-2}$ s$^{-1}$ sr$^{-1}$ GeV$^2$ at 10 TeV. In this case, the electron/positron flux can be detected by near-future missions such as CALET (assuming a geometrical factor times observation time of $\sim 220$ m$^2$ sr days for an observation time of $\sim 5$ yr), because the predicted electron/positron flux is sufficiently large. It has been considered that the number of astrophysical sources contributing above the several-TeV energy range is quite small, given the birth rates of supernovae and canonical pulsars in the vicinity of the Earth [@Ko04; @KIN10; @KIOK11]. However, we find that it is possible for multiple pair starved MSPs to contribute to the 10 TeV energy range of the electron/positron spectrum. Therefore, if the anisotropy of the observed electrons/positrons is weak in the 10 TeV energy range, we suggest that pair starved MSPs may contribute to the spectrum significantly.
Next, we also investigate the contribution of MSPs with high pair multiplicity to the observed cosmic ray electrons/positrons. In this model, we assume that the injection function of these MSPs is a power-law distribution with index $\alpha=2$, cutoff energy $\varepsilon_{e, {\rm cut}}=1$ TeV and minimum energy $\varepsilon_{e, \min}=1$ GeV. In this case, the pair multiplicity is $\sim 2000$. The other parameters take the same values as in the case of the pair starved MSPs. We assume that the fraction of MSPs with high multiplicity is 100%. The results are shown in Figure \[fig:3.2\] for the electron/positron spectrum and in Figure \[fig:3.3\] for the positron fraction. Both figures show that electrons/positrons ejected from MSPs with high multiplicity partially contribute to the excess over the background flux observed by PAMELA, HESS and [*Fermi*]{}. In this energy range, other sources such as canonical pulsars [@S70; @AAV95; @CCY96; @ZC01; @Ko04; @G07; @Bu08; @P08; @HBS09; @YKS09; @MCG09; @Gr09; @KIN10; @HGH10] would also contribute to the observed electron/positron spectrum. Note that even if other sources dominate the observed excess, the total spectrum including the contribution of MSPs with high multiplicity does not significantly exceed the observed electron/positron spectrum. SUMMARY ======= In this paper, we have shown the possibility that cosmic ray electrons/positrons from MSPs contribute significantly to the observed spectrum. Although MSPs have relatively low spin-down luminosities and a low birth rate, their lifetime is so long that there are many active MSPs in the vicinity of the Earth. Furthermore, such long-lived sources continuously inject electrons/positrons after the nebula ceases expanding, so the adiabatic energy losses in the pulsar wind nebula region are negligible. The synchrotron cooling in the nebula is also small, so the high-energy electrons/positrons can escape the nebula without losing much energy. We calculate the diffusive propagation of high-energy electrons/positrons in the ISM, taking into account the cooling via synchrotron emission and inverse Compton scattering, and predict their spectrum observed at the Earth. In the case of MSPs with a multiplicity of unity, the typical energy of the produced electrons/positrons should be $\sim 50$ TeV, based on the assumption of equipartition in the MSP wind region. Since the long injection duration makes the spectrum hard, the peak is enhanced in the 10 - 50 TeV energy range. Even if the fraction of pair starved MSPs is as small as 10%, this peak would be detectable by future missions such as CALET and CTA. Although a single young source can produce a similar spectral feature in this energy range, in the case of pair starved MSPs the anisotropy of the electron/positron flux would be weaker because a number of sources contribute to it. If this peak is detected, it will have a great impact on studies of pulsar radio emission mechanisms, because the existence of pair starved MSPs suggests that the radio emission mechanism should be insensitive to the particle number density down to sub-GJ number densities. Such a detection would also suggest that the current outer gap model should be modified, because @WH11 suggested that most MSPs are located above the pair death line of the outer gap and that their multiplicity is larger than unity. We also calculate the electron/positron spectrum from MSPs with high pair multiplicity.
We suggest that if multiplicity of these MSPs is the order of $\sim 10^3$, electrons/positrons from them partially contribute to the observed excess of the total spectrum and the positron fraction. Acknowledgements {#acknowledgements .unnumbered} ================ We thank K. Ioka, K. Kashiyama, T. N. Kato, Y. Kojima, J. Takata and S. J. Tanaka for useful discussions and comments. This work was supported in part by the Grant-in-Aid for Scientific Research from the Japan Society for Promotion of Science (S.K.). [99]{} Abdo, A. A., et al. 2011, arXiv:1108.1435 Abdo, A. A., et al. 2010, A&A, 524, 75 Abdo, A. A., et al. 2009, Science, 325, 845 Abramowski, A., et al. 2011, A&A, 531, L18 Ackermann, M., et al. 2010, Phys. Rev. D, 82, 092004 Adriani, O., et al. 2009, Nature, 458, 607 Aharonian, F. A., et al. 2008, Phys. Rev. Lett., 101, 261104 Aharonian, F. A., et al. 2009, A&A, 508, 561 Atoyan, A. M., Aharonian, F. A., & Völk, H. J. 1995, Phys. Rev. D, 52, 3265 Bandiera, R. 2008, A&A, 490, L3 Bednarek, W., & Sitarek, J. 2007, MNRAS, 377, 920 Blasi, P., & Amato, E. 2010, arXiv:1007.4745 Büsching, I., Venter, C., & de Jager, O. C. 2008, Adv. Space Res., 42, 497 Büsching, I., de Jager, O. C., Potgieter, M. S., & Venter, C. 2008, ApJ, 678, L39 Chang, J., et al. 2008, Nature, 456, 362 Cheng, K. S., Ho, C., & Ruderman, M. 1986, ApJ, 300, 522 Chi, X., Cheng, K. S., & Young, E. C. M. 1996, ApJ, 459, L83 CTA Consortium. 2010, arXiv:1008.3703 Daugherty, J. K., & Harding, A. K. 1996, ApJ, 458, 278 Gedalin, M., Gruman, E., & Melrose, D. B. 2002, MNRAS, 337, 422 Goldreich, P., & Julian, W. H. 1969, ApJ, 157, 869 Grasso, D., et al. 2009, Astropart. Phys., 32, 140 Grimani, C. 2007, A&A, 474, 339 Harding, A. K., Usov, V. C., & Musliov, A. G. 2005, ApJ, 622, 531 Heyl, S. J., Gill, R., & Hernquist, L. 2010, MNRAS, 406, L25 Hobbs, G., Lorimer, D. R., Lyne, A. G., & Kramer, M. 2005, MNRAS, 360, 974 Hooper, D., Blasi, P., & Serpico, P. D. 2009, J. Cosmol. Astropart. Phys., JCAP(2009)025 Hui, C. Y., & Becker, W. 2006, A&A, 448, L13 Kashiyama, K., Ioka, K., & Kawanaka, N. 2011, Phys. Rev. D, 83, 023002 Kawanaka, N., Ioka, K., & Nojiri, M. M. 2010, ApJ, 710, 958 Kawanaka, N., Ioka, K., Ohira, Y., & Kashiyama, K. 2011, ApJ, 729, 93 Kisaka, S., & Kojima, Y. 2011, ApJ, 739, 14 Kistler, M. D., & Yüksel, H. 2009, arXiv:0912.0264 Kobayashi, T., Komori, Y., Yoshida, K., & Nishinuma, J. 2004, ApJ, 601, 340 Kong, A. K. H., Hui, C. Y., & Cheng, K. S. 2010, ApJ, 712, 36 Lorimer, D. R. 2008, Living Rev. Relativ., 11, 8 Lyutikov, M., Blandford, R. D., & Machabeli, G. 1999, MNRAS, 305, 338 Malyshev, D., Cholis, I., & Gelfand, J. 2009, Phys. Rev. D, 80, 063005 Manchester, R. N., Hobbs, G. B., Teoh, A., & Hobbs, M. 2005, AJ, 129, 1993 Melrose, D. B. 1995, J. Astrophys. Astron., 16, 137 Moderski, R.,Sikora, M., Coppi, P. S., & Aharonian, F. A. 2005, MNRAS, 363, 954 Muslimov, A. G., & Harding, A. K. 2004a, ApJ, 606, 1143 Muslimov, A. G., & Harding, A. K. 2004b, ApJ, 617, 471 Porter, T. A., Moskalenko, I. V., Strong, A. W., Orlando, E., & Bouchet, L. 2008, ApJ, 682, 400 Profumo, S. 2012, Cent. Eur. J. Phys., 10, 1 Romani, R. W., & Watters, K. P. 2010, ApJ, 714, 810 Shen, C. S. 1970, ApJ, 162, L181 Stappers, B. W., Gaensler, B. M., Kaspi, V. M., van der Klis, M., & Lewin, W. H. G. 2003, ApJ, 299, 1372 Tam, P. H. T., et al. 2011, ApJ, 729, 90 Torii, S., et al. 2008a, J. Phys.: Conf. Ser., 120, 062020 Torii, S., et al. 2008b, arXiv:0809.0760 Venter, C., de Jager, O. C., & Clapson, A.-C. 2009, ApJ, 696, L52 Venter, C., Harding, A. 
K., & Guillemot, L. 2009, ApJ, 707, 800 Wang, R.-B., & Hirotani, K. 2011, ApJ, 736, 127 Wilkin, F. P. 1996, ApJ, 459, L31 Yüksel, H., Kistler, M. D., & Stanev, T. 2009, Phys. Rev. Lett., 103, 051101 Zhang, L., & Cheng, K. S. 2001, A&A, 368,1063 ![Electron/positron spectrum predicted from MSPs with the fraction of pair starved MSPs 100% (thin solid line), and its sum (thick solid line) with the background (thin dashed line). The injection distribution function is the mono-energetic distribution eq.(\[mono\]) with the energy $\varepsilon_e=50$ TeV and the multiplicity $\kappa = 1$. Data points correspond to measurements of ATIC (purple boxes, Chang et al. 2008), HESS (light-green triangles and black triangles, Aharonian et al. 2008; 2009), PPB-BETS (yellow triangles, Torii et al. 2008b), [*Fermi*]{} (blue circles, Ackermann et al. 2010) and GRAPES (black circle, Kistler & Yüksel 2009). We also show the total spectra from pair starved MSPs with the different fractions: 25% (thick dashed line) and 10% (dot-dashed line). We assume that the lifetime $\tau=5\times 10^{10}$ yr, the total energy $E_{\rm rot}=10^{52}$ erg, the local birth rate $R=3\times 10^{-9}$ yr$^{-1}$ kpc$^{-2}$ and the fraction of the energy loss is 30%.[]{data-label="fig:3.1"}](fig1.eps){width="160mm"} ![Electron/positron spectrum predicted from MSPs which have the power-law injection function with the cutoff energy $\varepsilon_{e,{\rm cut}}=1$ TeV, the minimum energy $\varepsilon_{e,{\rm min}} = 1$ GeV and index $\alpha=2$ (thin solid line). The thick solid line shows the total spectrum. For other parameters, we use the same values as in the case of Figure 1.[]{data-label="fig:3.2"}](fig2.eps){width="160mm"} ![Total positron fraction resulting from the spectrum (solid line) that have the same parameters as in Figure 2, and the background (dotted line), compared with the PAMELA data as red circles [@pamera]. Note that the solar modulation is important below $\sim 10$ GeV.[]{data-label="fig:3.3"}](fig3.eps){width="160mm"}
1
--- abstract: 'Two high performance coronagraphic approaches compatible with segmented and obstructed telescope pupils are described. Both concepts use entrance pupil amplitude apodization and a combined phase and amplitude focal plane mask to achieve full coronagraphic extinction of an on-axis point source. While the first concept, named Apodized Pupil Complex Mask Lyot Coronagraph (APCMLC), relies on a transmission mask to perform the pupil apodization, the second concept, named Phase-Induced Amplitude Apodization complex mask coronagraph (PIAACMC), uses beam remapping for lossless apodization. Both concepts theoretically offer complete coronagraphic extinction (infinite contrast) of a point source in monochromatic light, with high throughput and sub-$\lambda$/D inner working angle, regardless of aperture shape. The PIAACMC offers nearly 100% throughput and approaches the fundamental coronagraph performance limit imposed by first principles. The steps toward designing the coronagraphs for arbitrary apertures are described for monochromatic light. Designs for the APCMLC and the higher performance PIAACMC are shown for several monolith and segmented apertures, such as the apertures of the Subaru Telescope, Giant Magellan Telescope (GMT), Thirty Meter Telescope (TMT), the European Extremely Large Telescope (E-ELT) and the Large Binocular Telescope (LBT). Performance in broadband light is also quantified, suggesting that the monochromatic designs are suitable for use in up to 20% wide spectral bands for ground-based telescopes.' author: - Olivier Guyon - 'Philip M. Hinz' - Eric Cady - Ruslan Belikov - Frantz Martinache bibliography: - 'ms.bib' title: High Performance Lyot and PIAA Coronagraphy for Arbitrarily shaped Telescope Apertures --- Introduction {#sec:intro} ============ Direct imaging of exoplanets with ground-based telescopes is becoming possible thanks to advances in adaptive optics, as demonstrated by several recent direct imaging exoplanet discoveries [@2010Sci...329...57L; @2008Sci...322.1348M; @2013ApJ...763L..32C]. While current ground-based instruments are most sensitive to relatively massive and young planets at large angular separation (typically beyond a few tenths of an arcsecond), recent developments in coronagraphic techniques, “extreme” Adaptive Optics and calibration techniques are pushing detection limits deeper in contrast and closer in angular separation, soon providing access to the planet-rich inner parts of planetary systems [@2008SPIE.7015E..31M; @2008SPIE.7014E..41B; @2009SPIE.7440E..20M; @2011ApJ...729..132C]. Direct imaging of the inner part (1 to 5 AU) of young planetary systems is of especially high scientific value to constrain and understand planetary systems formation and evolution near the habitable zone, and requires the combination of an efficient coronagraph offering small inner working angle and a high level of wavefront correction and calibration. High contrast imaging from space allows access to considerably better contrast than possible with ground-based telescopes, thanks to the absence of atmospheric turbulence. Laboratory coronagraphy systems have demonstrated that raw contrasts of about 1e-9 can be achieved in a stable environment with a deformable mirror and a coronagraph (see for example [@2007Natur.446..771T]). At such high contrast, coronagraphic imaging can allow characterization of potentially habitable planets through spectroscopy from space [@2009arXiv0911.3200L]. 
While most high performance coronagraphs are designed for unobstructed circular pupils, current and future large ground-based telescopes are centrally obscured, and also segmented above 8.4-m diameter. Future large space-based telescopes will also likely be centrally obscured and/or segmented, although a telescope dedicated to high contrast imaging could be built off-axis if required for coronagraphy [@2009arXiv0911.3200L]. The scientific return of an exoplanet direct imaging mission or instrument is a steep function of telescope diameter: larger telescopes allow access to exoplanets at smaller angular separations, which are brighter in reflected light (apparent luminosity scales as inverse square of angular separation in reflected light), more numerous (the number of planets of a given type accessible with a telescope scales as the third power of telescope diameter), and more relevant to exoplanet systems habitability than widely separated planets. Larger ground-based telescope size also allows higher contrast observation by better concentrating planet light over the speckle halo background, and the gain in collecting area enables spectroscopic characterization. It is therefore essential to identify and develop coronagraph concepts which can deliver high performance on centrally obscured and/or segmented apertures. Coronagraph designs for centrally obscured and/or segmented apertures have been proposed by several authors, offering a wide range of solutions and approaches: - [[**Lyot-type coronagraphs with amplitude masks.**]{} Most studies of coronagraph designs for obscured and/or segmented apertures considered Lyot-type coronagraph optimized for high contrast by either apodization of the entrance pupil (APLC concept introduced by [@2003AA...397.1161S]) or apodization of the focal plane mask (Band-limited coronagraph concept introduced by [@2002ApJ...570..900K]). For the apodized pupil Lyot coronagraph (APLC), [@2005ApJ...618L.161S] and [@2009ApJ...695..695S] showed that the entrance pupil apodizer can be optimized for centrally obscured pupils. Using this technique, [@2007AA...474..671M] studied the APLC for ELTs, finding high throughput solutions offering better than 1e-5 contrast at and beyond 3 $\lambda$/D separation. [@2010AA...519A..61M] proposed using a multistage apodized pupil Lyot coronagraph (APLC) to mitigate central obstruction limitations. While central obstruction can be mitigated in the Lyot-type coronagraph design, [@2005ApJ...633..528S; @2005ApJ...626L..65S] showed that spiders and gaps in APLC and band-limited Lyot coronagraphs diffract light within the geometrical aperture, making it impossible to achieve very high contrast on segmented apertures. Moderate-contrast band-limited Lyot coronagraphs have been designed for the NIRCAM instrument [@2009SPIE.7440E..28K] on the James Webb Space Telescope, but the aggressive Lyot stop designs, which remove much of the residual diffraction from the secondary structure and the segmented primary, come at the cost of significant companion throughput.]{} - [[**Phase mask coronagraphs.**]{} Coronagraphs using phase focal plane masks are also affected by central obscuration and spiders/gaps. [@2003SPIE.4860..171L] showed that the 4-quadrant phase mask can only achieve full coronagraphic suppression on unobstructed pupils free of gaps or spiders, as any obscuration diffracts light outward in the Lyot plane. 
The optical vortex coronagraph is similarly affected by obscurations, although [@2011OptL...36.1506M] showed that the central obstruction can be mitigated by a dual-stage approach, where the second stage rejects most of the light diffracted by the central obstruction. For both the vortex and the 4 quadrant phase mask coronagraphs, no solution has been found to eliminate the light diffracted by spiders and gaps. ]{} - [[**Shaped Apertures.**]{} For moderate contrast levels and relatively large IWA, shaped apertures can be designed for centrally obscured and segmented pupils. [@2006PASJ...58..627T] designed shaped apertures delivering 1e-7 contrast at 4 $\lambda$/D. Similarly, [@2011arXiv1108.4050C] showed that shaped pupils can be designed for approximately 1e-6 contrast and $\approx 4 \lambda$/D inner working angle for a variety of centrally obscured and segmented apertures. ]{} A different approach to this problem is to remap the entrance aperture to remove the central obstruction and/or spiders. [@2005PASP..117..295M] propose a 2-mirror system to remove the central obstruction and spiders for a four-quadrant coronagraph. [@2006PASP..118..860G] propose a high efficiency nulling coronagraph concept adapted to central obstruction and spiders by performing destructive interference between pairs of unobstructed off-axis subapertures. [@2009PASP..121.1232L] demonstrate that a prism-like transmissive device and aspheric optics can be used to remove both the central obstruction and the spiders from the Subaru Telescope pupil, theoretically allowing high performance coronagraphy with the full telescope aperture. These remapping solutions are complex, challenging to implement and align, and extremely sensitive to tip-tilt and stellar angular size [@2009ApJ...702..672C] at high contrast: when points on either side of an obstruction are brought next to each other in the remapped pupil, a small tip-tilt in the entrance beam leads to a phase discontinuity in the remapped beam. When it is due to the finite stellar angular diameter, the diffraction caused by this discontinuity cannot be mitigated or controlled by wavefront control, as it is incoherent (opposite sides of the stellar disk produce diffracted light components of opposite signs). [@2007ApJ...658.1386S] chose to avoid the problem entirely by using an unobstructed 1.5-m diameter off-axis part of the 5-m Palomar telescope to perform high contrast imaging with the optical vortex coronagraph. While this allows the use of high performance coronagraphs designed for unobstructed apertures, the performance loss due to the use of an aperture considerably smaller than the full telescope is significant. The solutions previously proposed to mitigate the effects of central obstruction, spiders and gaps are generally suitable for ground-based coronagraphy at a few $\lambda$/D IWA, with a raw contrast around $10^{-5}$, as reported by [@2008AA...492..289M], who performed a study of coronagraphic performance on ELTs including realistic assumptions on the level of residual wavefront error after an extreme-AO system. For most of the coronagraphs, the central obstruction and spiders were found to have a major impact on performance, limiting the achievable contrast to $10^{-4}$ in the 1 to 4 $\lambda$/D separation range.
The notable exceptions to this rule were the Achromatic Interfero Coronagraph , which is insensitive to centro-symmetric pupil features (such as a central obstruction or a set of four radial spiders at 90 deg), and the APLC, which could be designed to take into account central obstruction and was found to be quite robust to spiders at the $10^{-5}$ contrast level. The coronagraphs concepts for which ground-based designs compatible with central obscuration have been proposed (shaped aperture, APLC, band-limited Lyot coronagraph) are unfortunately not able to offer IWA less than $\approx 2 \lambda/D$, and also do not enable high contrast (approximately $10^{9}$) coronagraphy on centrally obscured or segmented apertures. The work presented in this paper is aimed at demonstrating that high performance coronagraphy is possible in monochromatic light on centrally obscured and/or segmented pupils for both ground-based and space-based telescopes. The Apodized Pupil Complex Mask Lyot Coronagraph (APCMLC) and Phase-Induced Amplitude Apodization complex mask coronagraph (PIAACMC) concepts, previously described for circular unobstructed apertures in [@2010ApJS..190..220G], are here adapted to arbitrarily shaped apertures. Section \[sec:APCMLC\] describes how the APCMLC can be adapted to non-circular apertures, and a step by step process to design a APCMLC for any aperture shape is proposed and examples are shown. In Section \[sec:PIAACMC\], the PIAACMC is shown to offer performance superior to the APCMLC, and its design for centrally obscured and segmented apertures is discussed, with examples representative of current and future large telescopes shown. High performance APCMLC and PIAACMC for pupils with strong aspect ratios is briefly discussed in Section \[sec:aspectratio\]. Chromaticity of the concepts is discussed in Section \[sec:chrom\]. Results are discussed in Section \[sec:conclusion\]. ![image](fig01s.eps) ![image](fig02.eps) [lcccc]{} \ Subaru APCMLC \#1 & 0.596 $\lambda/D$ & 99.62% & 68.88% & 0.71 $\lambda/D$\ Subaru APCMLC \#2 & 0.8 $\lambda/D$ & 24.89% & 54.65% & 0.90 $\lambda/D$\ Subaru APCMLC \#3 & 1.2 $\lambda/D$ & 8.57% & 39.19% & 1.30 $\lambda/D$\ \ GMT APCMLC \#1 & 0.666 $\lambda/D$ & 99.64% & 64.50% & 0.78 $\lambda/D$\ GMT APCMLC \#2 & 0.7 $\lambda/D$ & 79.47% & 61.99% & 0.81 $\lambda/D$\ GMT APCMLC \#3 & 1.2 $\lambda/D$ & 35.16% & 11.39% & 1.28 $\lambda/D$\ GMT APCMLC \#4 & 1.5 $\lambda/D$ & 28.59% & 9.68% & 1.58 $\lambda/D$\ \ TMT APCMLC \#1 & 0.764 $\lambda/D$ & 99.72% & 55.67% & 0.86 $\lambda/D$\ TMT APCMLC \#2 & 0.8 $\lambda/D$ & 85.48% & 53.23% & 0.90 $\lambda/D$\ TMT APCMLC \#3 & 1.2 $\lambda/D$ & 36.08% & 34.26% & 1.27 $\lambda/D$\ TMT APCMLC \#4 & 1.5 $\lambda/D$ & 40.94% & 28.41% & 1.57 $\lambda/D$\ \ E-ELT APCMLC \#1 & 0.825 $\lambda/D$ & 99.85% & 54.26% & 0.93 $\lambda/D$\ E-ELT APCMLC \#2 & 0.9 $\lambda/D$ & 78.76% & 49.92% & 1.00 $\lambda/D$\ E-ELT APCMLC \#3 & 1.2 $\lambda/D$ & 50.56% & 38.43% & 1.29 $\lambda/D$\ Apodized pupil complex mask Lyot Coronagraph (APCMLC) for apertures of arbitrary shape {#sec:APCMLC} ====================================================================================== Principle {#ssec:APCMLCprinc} --------- In this section, it is shown that the APCMLC is compatible with non-circular apertures, as illustrated in Figure \[fig:APCMLCprinciple\], and a description of how it can be designed for arbitrarily shaped pupils is provided. 
While the APCMLC description provided here remains qualitative and focused on aspects relevant to non-circular apertures, a more complete analytical description is provided in [@2010ApJS..190..220G] for circular unobstructed apertures. We describe the APCMLC by following how electric field (also refered to as complex amplitude in this paper) from an on-axis point source propagates through the coronagraph system. The APCMLC, illustrated in Figure \[fig:APCMLCprinciple\], uses a circular focal plane mask to partially transmit and phase shift the on-axis point spread function (PSF) core (complex amplitude B on Figure \[fig:APCMLCprinciple\]). The transmission and phase shift are uniform within the mask radius, and the mask is fully transmissive, with no phase shift, outside this radius. This produces a destructive interference within the geometric pupil, between the light that passes around the focal plane mask disk and the phase-shifted light passing through the focal plane phase-shifting disk. With a Lyot mask (Lmask) selecting only the geometric pupil, a coronagraphic effect is achieved. The concept is thus an intermediate point between the conventional Lyot coronagraph or Apodized Pupil Lyot Coronagraph (APLC) [@2003AA...397.1161S], which use a large fully opaque focal plane mask, and the phase mask coronagraph [@1997PASP..109..815R; @2000SPIE.4006..377G; @1999PASP..111.1321G; @2010AA...509A...8N] which uses a small size fully transmissive phase-shifting focal plane mask. In the APCMLC, the focal plane mask size can be chosen anywhere between these two extremes, and defines the ratio between the amount of light within the circular mask and outside the mask. As the focal plane mask radius decreases, a smaller fraction of the light is within the mask radius, and its transmission must increase to maintain the flux balance between the “inside focal plane mask” (corresponding to low spatial frequencies in the pupil) and “outside focal plane mask” (high spatial frequencies in the pupil) components, a necessary condition to achieve destructive interference. Full destructive interference within the geometric pupil also requires that the two components are equal in amplitude for every point within the pupil. Since this match does not naturally occur, all three concepts (APLC, Roddier phase mask coronagraph and APCMLC) require the entrance pupil to be amplitude apodized to enforce this match. Qualitatively, for small focal plane mask size, the apodization mostly changes the pupil light distribution for the “outside focal plane mask” component, while the light distribution for the “inside focal plane mask” component is mostly driven by the size of the focal plane mask. The entrance pupil apodization can therefore be iteratively derived to force the “outside focal plane mask” component to match the “inside focal plane mask component”, using the following steps: 1. [Adopt a focal plane mask diameter $a$]{} 2. [Compute the on-axis complex amplitude PSF for the aperture. This is the Fourier transform of the pupil complex amplitude P]{} 3. [Clip the PSF: values outside the focal plane mask radius are forced to zero]{} 4. [Inverse-Fourier transform the clipped PSF, and adopt this function as the apodized pupil plane amplitude function A, after multiplication by a factor $\Lambda_a$ so that its maximum value across the pupil is be equal to 1 (full transmission)]{} 5. 
[Return to step (2), with the output of step (4) as the pupil complex amplitude function]{} This iterative algorithm is a generalization of the iterative algorithm used by [@2000SPIE.4006..377G; @2002AA...391..379G] and detailed in @guyonPhD to compute the optimal apodization for the phase mask coronagraph (for which the mask is fully transmissive), and of the iterative algorithm used to compute the optimal apodization for the APLC (for which the mask is fully opaque) on centrally obscured circular apertures [@2005ApJ...618L.161S; @2010AA...520A.110M] and on more complex pupil shapes [@2009ApJ...695..695S]. It has been shown that the apodization solutions obtained for rectangular and circular apertures are prolate functions for which analytical expressions exist. Apodization functions can also be computed for centrally obscured apertures [@2005ApJ...618L.161S], and for arbitrary, non circularly symmetric pupils [@2009ApJ...695..695S]. A remarkable property of the iterative algorithm described above is that it converges for a wide range of pupil shapes and focal plane mask diameters [@guyonPhD]. For small focal plane mask diameters, convergence is due to the fact that modifying the entrance aperture light distribution predominantly affects the “outside focal plane mask” light component. Exact apodization solutions for the APLC and APCMLC therefore exist for most aperture geometries and focal plane mask diameters. An example APCMLC design on a non-circular aperture, for which the entrance pupil apodization function was computed using the iterative algorithm described in this section, is shown in Figure \[fig:APCMLCprinciple\]. The APCMLC is now described analytically for monochromatic light, using the notations shown in Figure \[fig:APCMLCprinciple\]. The entrance pupil shape is defined by the real function $P(\mathbf{r})$, with $\mathbf{r}$ the 2-D position vector in the pupil plane, and $P(\mathbf{r})=1$ for points within the pupil and $P(\mathbf{r})=0$ outside of the pupil. The apodizer function $Apo(\mathbf{r})$ is applied to the pupil, yielding the following complex amplitude in plane $A$: $$\Psi_A(\mathbf{r}) = Apo(\mathbf{r}) P(\mathbf{r})$$ The $\mathbf{r}$-dependence is dropped in subsequent equations. The iterative algorithm previously described is used to numerically compute the apodization function $Apo(\mathbf{r})$, which converges to a pupil function $\psi_a$ that is the eigenvector of the “truncate (by $P$), Fourier transform, truncate (by the focal plane disk $M_a$), and inverse Fourier transform” operator, with eigenvalue equal to the scaling factor $\Lambda_a$ used in step (4) of the iterative algorithm given previously: $$\label{equ:La} ( \psi_a P ) \otimes \widetilde{M_a} = \Lambda_a \psi_a$$ where $\otimes$ is the convolution operator, $M_a$ is equal to 1 within a disk of diameter $a$ and equal to 0 outside it, and $\widetilde{M_a}$ is the Fourier transform of $M_a$. In the APCMLC, the apodizer function is chosen equal to $\psi_a$: $$\Psi_A = \psi_a P$$ The focal plane mask complex amplitude is: $$F_{mask} = 1 - (1-t) M_a$$ where $t$ is the complex amplitude transmission within the circular focal plane mask.
The complex amplitude in plane B is : $$\Psi_B = F_{mask} \widetilde{\Psi_A} = \widetilde{\Psi_A} - (1-t) M_a \widetilde{\Psi_A}$$ The complex amplitude in plane C is obtained by Fourier transform of $\Psi_B$: $$\label{equ:PsiC0} \Psi_C = \Psi_A - (1-t) (\psi_a P) \otimes \widetilde{M_a}$$ By combining equations \[equ:La\] and \[equ:PsiC0\], and multiplying by $P(r)$, the complex amplitude in plane C within the geometrical pupil is: $$\label{equ:PsiC} \Psi_C P(\mathbf{r}) = P(\mathbf{r}) \times (1- (1-t) \Lambda_a) \psi_a$$ This equation shows that, if $t = 1-\Lambda_a^{-1}$ (this value is now noted $t_a$), then $\Psi_C$ is equal to zero within the geometric pupil. This is the condition for a APCMLC, which completely removes light from an on-axis point source, provided that a Lyot pupil plane mask $L_{mask}(\mathbf{r})=P(\mathbf{r})$ is used to only select light within the geometric pupil. Since $\Lambda_a < 1$, $t_a$ is negative: the focal plane mask is both partially transmissive and π-phase shifting. A coronagraphic solution requires $t_a>-1$, and therefore exists only if $\Lambda_a > 0.5$: the focal plane size needs to be sufficiently large so that light going through the mask can be balanced with light going outside the mask. The same pupil apodization technique is used in the Apodized Pupil Lyot Coronagraph (APLC) to optimize the pupil entrance complex amplitude to the hard edged opaque focal plane mask [@2003AA...397.1161S]. In the APLC, $t=0$ in equation \[equ:PsiC\], and the coronagraphic extinction is therefore not total for an on-axis point source. Equation \[equ:PsiC\] shows that the on-axis PSF in the final focal plane mask is an exact copy of the non-coronagraphic PSF, scaled by $(1-\Lambda_a)^2$ in intensity. For large focal plane masks diameter $a$, $\Lambda_a$ is close to 1, and the coronagraphic extinction is satisfactory. The APLC concept has been adopted for the Palomar Observatory high contrast imaging program [@2011PASP..123...74H] and the Gemini Planet Imager [@2008SPIE.7015E..31M] and has been validated in laboratory demonstrations [@2011AJ....142..119T]. The APCMLC concept is very similar to the APLC, the only fundamental difference being that its focal plane mask transmission is allowed to be non-zero, therefore allowing full coronagraphic extinction for any focal plane mask size $a$ for which $\Lambda_a > 0.5$. APCMLC designs for segmented apertures {#ssec:APCMLCdesigns} -------------------------------------- Apodized pupil complex mask Lyot coronagraphs (APCMLCs) were designed for the Subaru Telescope, Giant Magellan Telescope (GMT), Thirty Meter Telescope (TMT) and European Extremely Large Telescope (E-ELT) pupil geometries, following the process described in the previous section. For each pupil geometry, several focal plane mask sizes were chosen. The designs with the smallest possible focal plane mask sizes use full transmission $\pi$-phase shifting circular focal plane masks, and are referred to as optimal IWA APCMLC designs in this paper. As the focal plane mask size increases, it also becomes more opaque, the system throughput (which is equal to the apodizer throughput) decreases and the IWA increases. Results are summarized in Table \[tab:APCMLC\], and show that optimal IWA designs offer IWAs around 0.9 $\lambda/D$ and throughputs around 60%. 
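As a concrete illustration of the design procedure of this section, the following minimal sketch iterates the truncate / Fourier transform / truncate operator on a pixelized pupil array to obtain the apodization $\psi_a$, the eigenvalue $\Lambda_a$, the mask transmission $t_a = 1-\Lambda_a^{-1}$ and the apodizer throughput. The function name, array sizes and sampling choices are illustrative assumptions, not taken from the original work:

```python
import numpy as np

def apcmlc_prolate(P, mask_radius_lod, pupil_diam_pix, niter=200):
    """Iteratively compute the generalized prolate apodization psi_a for a
    pixelized pupil P (1 inside the pupil, 0 outside), together with the
    eigenvalue Lambda_a, the focal plane mask amplitude transmission
    t_a = 1 - 1/Lambda_a and the apodizer throughput."""
    N = P.shape[0]
    # focal plane coordinates in lambda/D units (pupil spans pupil_diam_pix pixels)
    f = np.fft.fftfreq(N) * pupil_diam_pix
    FX, FY = np.meshgrid(f, f, indexing="ij")
    in_mask = FX**2 + FY**2 < mask_radius_lod**2      # circular focal plane mask

    psi = P.astype(float).copy()
    for _ in range(niter):
        F = np.fft.fft2(psi * P)          # on-axis complex amplitude PSF
        F[~in_mask] = 0.0                 # clip outside the focal plane mask
        psi = np.fft.ifft2(F).real        # back to the pupil plane
        Lambda_a = psi[P > 0].max()
        psi /= Lambda_a                   # renormalize so max over pupil = 1
    t_a = 1.0 - 1.0 / Lambda_a            # mask amplitude transmission (negative)
    throughput = np.mean(psi[P > 0]**2)   # apodizer (= system) throughput
    return psi * P, Lambda_a, t_a, throughput

# example with a toy centrally obscured circular pupil
N, Dpix = 512, 128
x = np.arange(N) - N / 2
XX, YY = np.meshgrid(x, x, indexing="ij")
r = np.hypot(XX, YY)
pupil = ((r < Dpix / 2) & (r > 0.3 * Dpix / 2)).astype(float)
psi_a, Lam, t_a, T = apcmlc_prolate(pupil, mask_radius_lod=0.8, pupil_diam_pix=Dpix)
print("Lambda_a = %.3f   t_a = %.3f   throughput = %.2f" % (Lam, t_a, T))
```

The same routine applies unchanged to segmented pupil arrays; only the input `pupil` map changes.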
For all designs, the IWA is approximately equal to the focal plane mask radius, and the throughput decreases rapidly with increasing focal plane mask size: with a 1.2 $\lambda/D$ radius, the throughput ranges from approximately 10% to 35% depending on the pupil geometry. The performance of the optimal IWA design is largely independent of pupil geometry, and is similar for segmented apertures to the performance previously reported for a full unobstructed circular pupil [@2010ApJS..190..220G]. However, as the mask size increases, pupil geometry has a larger impact on performance, as the range of pupil plane spatial frequencies accessed by the focal plane mask begins to overlap with the low spatial frequency components of the pupil geometry (central obstruction, large segments, thick spider vanes). This difference is most noticeable between the GMT pupil with few large segments and the TMT or EELT geometries with numerous small segments. Selected examples of apodization functions and Lyot plane intensity images are shown in Figure \[fig:APCMLCexamples\]. In each case, the apodization function is smooth and free of high spatial frequencies, and no light is left within the geometric pupil in the Lyot pupil plane, as all residual starlight is diffracted outside of the pupil and in the gaps between segments. ![image](fig03.eps) Table \[tab:APCMLC\] gives for several APCMLC designs the key design parameters (focal plane mask size $a$, focal plane mask transmission) as well as the coronagraph performance (throughput and IWA). For each pupil shape considered, the first design (design \# 1) is the most aggressive in IWA, with a nearly fully transmissive focal plane mask. This aggressive design is also the one with the highest throughput, as the apodization strength needs to increase for larger focal plane mask sizes. The APCMLC throughput never exceeds 70% due to the need for a pupil apodization. Transmission curves are given in Figure \[fig:APCMLCtransm\] for the APCMLC designs listed in Table \[tab:APCMLC\]. Phase Induced Amplitude Apodization Complex Mask Coronagraph (PIAACMC) for apertures of arbitrary shape {#sec:PIAACMC} ======================================================================================================= Lossless Phase-Induced Amplitude Apodization (PIAA) --------------------------------------------------- ![image](fig04s.eps) While the APCMLC described in Section \[sec:APCMLC\] achieves full on-axis coronagraphic extinction for almost any pupil shape, its throughput is limited by the entrance apodization required to achieve perfect destructive interference in the output pupil plane. The system throughput decreases as the focal plane mask size increases, with a maximum throughput equal to 72% for a 0.64 $\lambda/D$ radius purely phase-shifting transparent mask on a circular unobstructed aperture. Throughput, and consequently angular resolution, degrade rapidly with increased focal plane mask size: it is 18% for a 2 $\lambda/D$ radius mask. The results obtained in Section \[ssec:APCMLCdesigns\] also show that the APCMLC maximum throughput (achieved for the designs with the most aggressive IWA) is lower on segmented pupils than it is for an unobstructed circular pupil (“Throughput” column of Table \[tab:APCMLC\]). Moreover, throughput, angular resolution and IWA are significantly degraded when the focal plane mask size is increased - while mitigation of undesired chromatic effects at the focal plane mask may require a larger and more opaque mask. 
Phase-induced Amplitude Apodization (PIAA) uses aspheric mirrors to achieve a lossless beam apodization [@2003AA...404..379G], and can therefore produce a highly apodized beam suitable for high contrast imaging without the angular resolution loss and throughput loss of a conventional apodizer. PIAA can also be used to replace the entrance apodization in the APCMLC design described in Section \[sec:APCMLC\], as previously proposed for unobstructed circular pupils [@2010ApJS..190..220G]. The resulting coronagraph, denoted Phase-induced Amplitude Apodization Complex mask coronagraph (PIAACMC), offers simultaneously full throughput, small inner working angle and total on-axis extinction. An example PIAACMC design is shown in Figure \[fig:PIAACMCprinc\] for a segmented centrally obscured pupil. The entrance pupil P (image shown in the lower left of the figure) is apodized with lossless aspheric PIAA optics. Because the PIAA optics perform apodization by remapping instead of selective transmission, the resulting pupil P1 shape is modified. A conventional apodizing mask may be used to fine-tune the apodization if the PIAA optics do not exactly produce the required amplitude distribution (this will be addressed in the following section). The resulting pupil A is shown in the second image from the lower left corner. The image of an on-axis point source is shown in the center image, where the phase-shifting partially transmissive focal plane mask is inserted. In the output pupil plane C, all light within the pupil has been removed, while diffracted starlight fills the gap and obstructions of the segmented pupil. A Lyot mask (noted Lmask) can then select only the geometric pupil area (after remapping) to fully block on-axis starlight while fully transmitting the light from distant off-axis source. A well-documented side-effect of apodization with PIAA optics is that off-axis PSFs are highly distorted, and corrective optics (inverse PIAA) are required at the output of the coronagraph to maintain diffraction limited sharp PSFs over a scientifically useful field of view [@2009PASP..121.1232L]. Except for PIAA and inverse PIAA optics, the PIAACMC architecture is functionally identical to the APCMLC architecture described in Section 2.1: between planes P1 (output of the PIAA optics) and the plane immediately after the pupil plane Lyot mask, the architecture is an APCMLC. The main difference between APCMLC and PIAACMC is that the lossless apodization allows increased performance by maintaining full throughput and angular resolution, regardless of the focal plane mask size adopted. Designing a PIAACMC for a non circular aperture ----------------------------------------------- We consider in this work PIAACMC designs that perform a lossless PIAA apodization of the pupil to produce a generalized prolate function for the aperture. We note that other apodization functions could be adopted, and could potentially lead to superior performance, but this is not explored in this paper. In the unobstructed circular pupil case [@2010ApJS..190..220G], designing the PIAACMC is relatively simple, as PIAA apodization using a radial remapping function preserves the circular aperture shape. The prolate function can thus be first computed, and then realized with a radial PIAA apodization. Designing a PIAACMC for complex shaped apertures is considerably more challenging because the PIAA apodization modifies the aperture shape, which itself changes the generalized prolate function. 
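The radial remapping functions used in PIAA can be related to a target apodization profile through flux conservation. The short sketch below illustrates this standard construction for a circularly symmetric profile; it produces only the geometric remapping $r_1=f(r_0)$, not the aspheric mirror shapes, and the target profile used as input is an arbitrary example rather than a profile from this work:

```python
import numpy as np

def radial_remapping(I_out, R=1.0, n=1000):
    """Radial remapping r1 = f(r0) that turns a uniform circular beam of
    radius R into the target intensity profile I_out(r1), via flux
    conservation: (r0/R)^2 = normalized cumulative flux of I_out inside f(r0)."""
    r1 = np.linspace(0.0, R, n)
    seg = 0.5 * (I_out(r1[1:]) * r1[1:] + I_out(r1[:-1]) * r1[:-1]) * np.diff(r1)
    flux = np.concatenate(([0.0], np.cumsum(seg)))
    flux /= flux[-1]                       # normalized cumulative flux F(r1)
    def f(r0):
        return np.interp((np.asarray(r0) / R)**2, flux, r1)   # invert F
    return f

# example target: a smooth, centrally peaked apodization profile (toy choice)
I_target = lambda r: np.cos(0.5 * np.pi * r)**2 + 0.01
f = radial_remapping(I_target)
print(f([0.2, 0.5, 0.9]))      # output radii corresponding to three input radii
```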
In addition to this circular design problem, if the aperture is not circularly symmetric, the generalized prolate function is also not circularly symmetric, and the required remapping function therefore cannot be written as a radial function. While PIAA optics can be designed for any radial remapping [@2003AA...404..379G], an arbitrarily chosen 2D remapping function can almost never be realized with a set of two PIAA optics. To overcome the two challenges listed above (the circular design problem due to the effect of PIAA on the aperture shape, and the complexity/impossibility of designing PIAA optics for non-circularly symmetric remapping), a hybrid PIAACMC design is adopted, which includes a conventional apodizer after the remapping to produce the required prolate function. Thanks to this post-apodizer, the output of the PIAA apodization does not need to exactly match the generalized prolate function, allowing radial remapping functions to be used on non-circularly symmetric apertures. The goal of the design optimization is to bring the PIAA apodization and generalized prolate functions close, in order to minimize the strength of the post-apodizer and thus maintain a high system throughput. The proposed steps for designing a PIAACMC for complex shaped apertures are: 1. [Choose a radial remapping function $r_1=f_b(r_0)$, where $r_0$ and $r_1$ are the radial coordinates in the input (before remapping) and output (after remapping) pupils, respectively. For convenience, the remapping function is selected among a pre-computed set of functions used to produce prolate spheroidal apodizations on circular apertures. The focal plane mask diameter corresponding to the prolate spheroidal function is denoted $b$, and the corresponding remapping function and apodization intensity profile are respectively $f_b$ and $I_b$.]{} 2. [Apply the remapping function to the entrance pupil. The remapping transforms the entrance pupil intensity P(x,y) into P1(x,y).]{} 3. [Choose a focal plane mask diameter $a$.]{} 4. [Compute the generalized prolate function $Prola(x,y)$ for the remapped aperture shape defined by $P1(x,y)>0$, using the focal plane mask diameter $a$. This is done iteratively as described in Section \[ssec:APCMLCprinc\].]{} 5. [Compute the amplitude ratio $Apo(x,y)=Prola(x,y)/P1(x,y)$. This is the post-apodizer amplitude transmission function. $Apo(x,y)$ is then scaled to ensure that its maximum value is equal to 1. The intensity-weighted average of $Apo(x,y)^2$ defines the coronagraph throughput for off-axis sources.]{} Steps (3) to (5) are repeated for different values of the focal plane mask size $a$. The off-axis coronagraph throughput is computed for each choice of $a$, and the final focal plane mask size is chosen to maximize throughput. This optimization links the choice of the remapping function (step (1)) to a value of the focal plane mask radius $a$. For a circular unobstructed aperture, the solution would be $a=b$, for which the PIAA apodization would perfectly match the generalized prolate function. On arbitrarily shaped pupils, the focal plane mask radius is usually close to, but not equal to, $b$. Stronger apodization functions correspond to larger values of $a$ and $b$. ![image](fig05.eps) ![\[fig:PIAACMC\_Subaru\_PSFs\] Simulated Subaru PIAACMC images of 5 point sources of equal brightness. The point sources are at coordinates (0;0), (1;0), (0;2), (-4;0) and (0;-8) in $\lambda/D$ units. The non-coronagraphic image (left) shows all five point sources.
The partially transmissive central focal plane mask is visible in the intermediate focal plane image (center), where off-axis PSFs are distorted by the PIAA remapping. In the output focal plane image (right), the central source is fully canceled and the off-axis PSF images are sharp thanks to the inverse PIAA optics. ](fig06.eps) PIAACMC design examples ----------------------- ### Centrally obscured pupils: Subaru Telescope pupil The Subaru Telescope pupil is representative of current large aperture astronomical telescopes, with a large central obstruction and thick spiders. Both features must be taken into account for the design of a high performance coronagraph. Figure \[fig:PIAACMC\_Subaru\] shows two PIAACMC designs for the Subaru Telescope pupil. The small IWA design (left) was computed from the $b/2=0.6 \lambda/D$ beam remapping, and uses a small sub-$\lambda/D$ radius focal plane mask with high transmission. The large IWA design was computed from $b/2=1.2 \lambda/D$, adopts a larger, mostly opaque focal plane mask, and relies on a stronger PIAA remapping. Both designs offer throughput above 97%, and their throughput could be further increased by slightly elongating the focal plane mask, which was kept circular for simplicity in this study. The large IWA design, by relying on a stronger PIAA remapping, introduces a large pupil deformation, as visible in the figure. The post-focal plane mask pupil images demonstrate the PIAACMC's ability to diffract all of the light from a central source outside the geometrical pupil, including within the gaps of the pupil (here, the central obstruction and spiders). Figure \[fig:PIAACMC\_Subaru\_PSFs\] shows intensity images of a field consisting of five equally bright point sources. The left images are obtained without a coronagraph, and simply show the imaging quality of the Subaru pupil in the absence of wavefront aberrations. The center column shows images in plane B of Figure \[fig:PIAACMCprinc\], immediately after the focal plane mask. The focal plane mask in the low IWA design (top) is more transmissive, and is also physically smaller. The large IWA design (bottom) introduces large off-axis aberrations due to the strong remapping. In the final coronagraphic images (right column), the central source is perfectly removed, and the images of the off-axis sources are sharp thanks to the inverse-PIAA optics. ### Segmented pupils: Giant Magellan Telescope (GMT) The Giant Magellan Telescope (GMT) consists of one central 8.4-m circular segment, surrounded by a ring of six 8.4-m diameter segments. While the outer segments are unobscured, the central segment includes a central obstruction due to the secondary mirror and its support structure. ![image](fig07.eps) Figure \[fig:PIAACMC\_GMT\] shows three PIAACMC designs for the GMT pupil: a small IWA design computed for $b/2=0.7 \lambda/D$ (design \#1), a medium IWA design computed for $b/2=1.2 \lambda/D$ (design \#2), and a large IWA design computed for $b/2=1.5 \lambda/D$ (design \#3). As $b$ increases, the PIAA remapping becomes stronger, and the physical size of the focal plane mask increases. In each case, the PIAACMC achieves complete suppression of the on-axis point source, and its light is diffracted outside the geometrical aperture in plane C, including between the seven subapertures and within the secondary mirror obstruction and support structure.
The imaging quality of the GMT PIAACMC designs is illustrated in the right panel of Figure \[fig:PIAACMC\_GMT\], which shows that for each of the three designs, the final coronagraphic image maintains high throughput and largely uncompromised imaging quality outside the central $\approx 1 \lambda/D$ region. The images also show that off-axis aberrations are stronger as the design relies more on PIAA remapping, although these aberrations are well corrected by the inverse PIAA system. ### Highly segmented pupils: European Extremely Large Telescope (E-ELT) and Thirty Meter Telescope (TMT) ![image](fig08.eps) ![image](fig09.eps) Figures \[fig:PIAACMC\_EELT\] and \[fig:PIAACMC\_TMT\] each show three PIAACMC designs for the European Extremely Large Telescope (E-ELT) and the Thirty Meter Telescope (TMT) pupil geometries. Both pupils consist of a large number of small segments, a central obstruction and spider vanes. Each of the six designs achieves total rejection of a central point source with high system throughput (between 97.8% and 99.98%) for off-axis sources. The inner working angle ranges from $\approx$ 0.8 $\lambda$/D for the most aggressive designs (designs \#1) to $\approx$ 1.0 $\lambda$/D for the more conservative designs (designs \#3). Figures \[fig:PIAACMC\_EELT\] and \[fig:PIAACMC\_TMT\] show that thanks to the phase-shifting focal plane mask, light from an on-axis source is diffracted between the small segments of the pupil, within the spider vane shadows, within the central obstruction and outside the overall pupil: in the output pupil plane, no light is present within the geometric pupil. For these designs, the Lyot mask must block the gaps between the segments while transmitting light within the segments, and it must therefore be carefully aligned with the pupil. A Lyot mask for which the masked zones are slightly oversized may be used to accommodate pupil alignment errors at the cost of system throughput. Discussion ---------- [lcccccc]{} Design & $b/2$ & PIAA strength & focal plane mask radius & mask transmission & throughput & IWA\ Subaru PIAACMC \#1 & 0.6 $\lambda/D$ & 2.42 & 0.603 $\lambda/D_{syst}$ & 84.24% & 99.91% & 0.67 $\lambda/D$\ Subaru PIAACMC \#2 & 0.9 $\lambda/D$ & 6.79 & 0.916 $\lambda/D_{syst}$ & 8.57% & 99.39% & 0.88 $\lambda/D$\ Subaru PIAACMC \#3 & 1.2 $\lambda/D$ & 26.83 & 1.33 $\lambda/D_{syst}$ & 2.06% & 97.04% & 1.11 $\lambda/D$\ \ GMT PIAACMC \#1 & 0.7 $\lambda/D$ & 3.30 & 0.693 $\lambda/D_{syst}$ & 98.55% & 99.98% & 0.72 $\lambda/D$\ GMT PIAACMC \#2 & 1.2 $\lambda/D$ & 26.83 & 1.12 $\lambda/D_{syst}$ & 20.71% & 99.47% & 0.89 $\lambda/D$\ GMT PIAACMC \#3 & 1.5 $\lambda/D$ & 124.09 & 1.32 $\lambda/D_{syst}$ & 16.64% & 99.14% & 0.92 $\lambda/D$\ \ TMT PIAACMC \#1 & 0.8 $\lambda/D$ & 4.69 & 0.797 $\lambda/D_{syst}$ & 85.51% & 99.80% & 0.78 $\lambda/D$\ TMT PIAACMC \#2 & 1.2 $\lambda/D$ & 26.83 & 1.16 $\lambda/D_{syst}$ & 32.46% & 98.51% & 0.94 $\lambda/D$\ TMT PIAACMC \#3 & 1.5 $\lambda/D$ & 124.09 & 1.394 $\lambda/D_{syst}$ & 27.73% & 98.71% & 0.99 $\lambda/D$\ \ E-ELT PIAACMC \#1 & 0.8 $\lambda/D$ & 4.69 & 0.816 $\lambda/D_{syst}$ & 99.87% & 97.77% & 0.81 $\lambda/D$\ E-ELT PIAACMC \#2 & 1.2 $\lambda/D$ & 26.83 & 1.15 $\lambda/D_{syst}$ & 45.58% & 99.50% & 0.93 $\lambda/D$\ E-ELT PIAACMC \#3 & 1.5 $\lambda/D$ & 124.09 & 1.37 $\lambda/D_{syst}$ & 37.85% & 99.42% & 0.98 $\lambda/D$\ ![image](fig10.eps) Table \[tab:PIAACMC\] summarizes the PIAACMC designs discussed in this section.
For each design, the circular remapping function was first chosen, and is represented in the table by the parameter $b$, which is the diameter of the focal plane mask used to iteratively compute the generalized prolate function for a circular aperture. A small value of $b$ indicates a weak apodization. The PIAA strength listed in the table is the surface brightness ratio between the brightest and faintest parts of the remapped beam, and is a function of only $b$. This ratio is a good indicator for both the level of distortions of the off-axis PSFs in the intermediate focal plane, and for the difficulty in making the PIAA optics. Current PIAA optics for conventional PIAA coronagraphs have a strength around 100, and any value below 100 therefore corresponds to PIAA optics that can be manufactured to nm-level surface accuracy without technological advances. For PIAA strength values above 100, a hybrid scheme where some of the edge apodization is offloaded to a conventional apodizer should be adopted, at the cost of lower throughput (typically up to 10% throughput loss) and loss of angular resolution and IWA (by typically up to 5%). ![image](fig11.eps) Pupils with strong aspect ratio {#sec:aspectratio} =============================== Challenges ---------- The APCMLC and PIAACMC coronagraphs described in the previous section achieve full starlight suppression by performing, for each point in the output pupil, perfect destructive interference between the light that passes through the circular focal plane mask and the light that passes around it. To offer $\lambda/D$-level inner working angle, these concepts therefore require that the telescope’s non coronagraphic point spread function consist of a central diffraction spot within which a disk containing approximately half of the total PSF flux can be drawn, surrounded by other fainter diffractive features (rings, spikes). The examples given in the previous sections (Subaru, GMT, TMT, E-ELT) fulfill this requirement, as these pupil shapes are sufficiently close to a disk. While very sparse or elongated apertures are not compatible with the APCMLC and PIAACMC concepts as described so far, simple geometric transformations can extend the concepts to a wider range of pupil shapes. The Large Binocular Telescope (LBT) pupil is used in this section as an example of a sparse aperture with a strong aspect ratio: with its two centrally obscured 8.4-m diameter circular subapertures separated by 14.4m (center to center), the LBT pupil has a strong aspect ratio (8.4-m x 22.8-m). The corresponding non-coronagraphic PSF consists of three bright interference fringes within an envelope defined by the single aperture PSF. No circular mask can be drawn within the central bright fringe that contains half of the total PSF flux. Using non-circular focal plane masks ------------------------------------ Stretching the LBT pupil along its narrow direction by a factor four would create a pupil sufficiently close to circular for the APCMLC and PIAACMC concepts as presented above. This stretch is equivalent to using an elliptical focal plane mask, which is four times longer in the direction running along the fringe in the PSF. Figure \[fig:LBT\_APCMLC\_design\] shows an APCMLC design for the LBT pupil using an elliptical focal plane mask. 
The design shown does produce total extinction of an on-axis point source, and its inner working angle is close to 1 $\lambda$/D along the long axis of the pupil (here, D is defined as the diameter of the circle circumscribing the pupil, and is equal to 22.8 m), while it is $\approx$ 3.5 $\lambda$/D along the short axis (fundamentally limited by the telescope pupil diffraction along this axis, rather than by the coronagraph). A focal plane mask consisting of three separate zones covering part of the three central fringes may also be adopted to further improve system throughput, although this has not been numerically tested. The same elliptical focal plane mask scheme can also be applied to the PIAACMC concept on the LBT pupil. Interestingly, the pupil remapping which is part of the PIAACMC concept may be chosen to also bring the two apertures closer together, to approach the circular pupil case. The elliptical focal plane mask approach may also be adopted to improve the APCMLC and PIAACMC performance for other non-circular pupil geometries: the focal plane mask shape should ideally be chosen to best match the non-coronagraphic PSF in order to maximize the conventional apodizer’s transmission. For example, the generalized prolate function for the Subaru Telescope PIAACMC design \#1 is slightly elongated due to the off-axis spider vanes. This produces a slight mismatch with the circularly symmetric remapping function, which is absorbed by the conventional apodizer. Most of the conventional apodizer’s light loss (0.1% total) is due to this mismatch. For this example, using a slightly elliptical focal plane mask would only improve throughput by at most 0.1% since the pupil is very close to being circular. More importantly, the elliptical focal plane mask may allow high performance operation of the PIAACMC without an apodizer. Adopting a hexagonal shaped focal plane mask would offer similar benefits for hexagonal-shaped pupils such as the one shown in Figures \[fig:APCMLCprinciple\] and \[fig:PIAACMCprinc\]. Pupil remapping --------------- With extremely sparse pupil geometries, the re-design of the focal plane mask geometry may not be sufficient to adapt the pupil shape to the APCMLC and PIAACMC requirements. In this case, geometrical transformation of the sparse entrance pupil into a more compact geometry can be achieved through pupil remapping. This scheme was explored to implement coronagraphy on sparse apertures [@2002AA...396..345R; @2002AA...391..379G], and is commonly referred to as the hypertelescope concept. Even if pupil remapping is not required, it may be useful to improve the APCMLC and PIAACMC system throughput. With sparse apertures, the apodizer becomes less transmissive: for example, the LBT pupil APCMLC design given in this section offers a 41% throughput, which is significantly less than the $\approx$60% throughput of comparable APCMLC designs for the Subaru, GMT, TMT and E-ELT pupils. Bringing the LBT subapertures closer together with periscope-like optics would allow for higher throughput in the coronagraph. In order to maintain a good image quality over a wide field of view, the original pupil geometry should be re-created prior to the final imaging focal plane: the compact pupil is only an intermediate step required for efficient removal of the central source’s light.
[lcccccc]{} Monochromatic & 1568.34 nm & 0 & 0 & 0 & 0\ 2% band & 1568.25 nm & 5.29e-5 & 9.56e-11 & 8.31e-7 & 3.10e-7\ 4% band & 1568.03 nm & 1.89e-4 & 2.15e-10 & 3.35e-6 & 1.16e-6\ 10% band & 1566.54 nm & 1.08e-3 & 2.19e-7 & 2.13e-5 & 7.07e-6\ 20% band & 1561.24 nm & 4.30e-3 & 7.68e-7 & 8.57e-5 & 2.80e-5\ 40% band & 1543.88 nm & 1.52e-2 & 6.09e-7 & 3.24e-4 & 1.05e-4\ Chromaticity {#sec:chrom} ============ Sensitivity to chromatic effects -------------------------------- All coronagraph systems discussed in this paper were designed for monochromatic light operation. While the design of polychromatic APCMLC and PIAACMC systems is outside the scope of this paper (this will be discussed in a future publication), we describe qualitatively in this section how the monochromatic designs perform in broadband light. Several effects result in a loss of performance in broadband light: 1. [The physical size of the focal plane mask is adjusted for a single wavelength. While the mask size is independent of wavelength, it should ideally scale linearly with wavelength.]{} 2. [The phase shift introduced by the mask may vary as a function of wavelength, while it should ideally be constant across the spectral band.]{} 3. [The transmission of the mask may vary as a function of wavelength, while it should ideally be constant across the band.]{} The amplitude of the last two effects is a function of how the focal plane mask is manufactured. In this section, we assume that no attempt to achromatize the mask phase shift has been made, and that the mask consists of a single material deposited on a substrate, with the material thickness adjusted for monochromatic light operation. The sensitivity to chromatic effects is mostly driven by the focal plane mask transmission for both APCMLC and PIAACMC systems. Designs with large, nearly opaque focal plane masks are more tolerant to chromatic effects, since the mask’s role becomes close to that of a simple light block, and the mask size relative to the on-axis source image increases. To illustrate broadband performance, we adopt in the next section a monochromatic PIAACMC design with partial ($0<|t|<1$) focal plane mask transmission. Example: PIAACMC design for a centrally obscured pupil ------------------------------------------------------ We consider in this section the PIAACMC design \#2 for the Subaru Telescope pupil. It is assumed that the focal plane mask size is optimized for monochromatic light at $\lambda = 1.65 \mu m$, and that the mask is a disk of material ($SiO_2$). The mask transmission is fixed to the ideal monochromatic value, and is not assumed to change with wavelength. Several scenarios are considered: monochromatic, 2%, 4%, 10%, 20% and 40% wide bands (all centered at 1.65 $\mu$m). The mask thickness is a free parameter, and is adjusted for each case to yield the best broadband on-axis extinction, as measured by the total light in the final focal plane. Results are shown in Table \[tab:chrom\]. The last 3 columns of the table show spatially averaged contrast values between the coronagraph’s inner working angle ($0.88 \lambda / D$) and $3.6 \lambda/D$. This particular design delivers better than 1e-4 averaged raw contrast in a 20% wide spectral band, and is therefore valuable for ground-based use behind adaptive optics. Pupil and focal plane images and contrast radial profiles are shown in Figure \[fig:chrom\] across a 40% wide band centered at $1.65 \mu m$, illustrating that raw contrast is best at the center of the band.
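To make the origin of this chromatic degradation more concrete, the short sketch below illustrates only the second effect listed above (the wavelength dependence of the mask phase shift): the thickness of a single-material layer is tuned to give a $\pi$ phase shift at the central wavelength, and the residual phase error is evaluated across the band. This is a simplified illustration rather than the design code used for Table \[tab:chrom\]; the constant refractive index, the function names and the numerical values are our own assumptions.

```python
import numpy as np

def mask_phase_shift(wavelengths_um, lambda0_um=1.65, n_index=1.44):
    """Phase shift (radians) of a single-material focal plane mask layer.

    The layer thickness is chosen so that the shift equals pi at lambda0.
    A constant refractive index is assumed for simplicity; a real design
    would use the material's dispersion (e.g. a Sellmeier law for SiO2).
    """
    thickness_um = lambda0_um / (2.0 * (n_index - 1.0))  # gives pi at lambda0
    return 2.0 * np.pi * (n_index - 1.0) * thickness_um / wavelengths_um

# Phase error across a 20% band centered at 1.65 um (illustrative only)
band = np.linspace(0.9 * 1.65, 1.1 * 1.65, 5)
errors = mask_phase_shift(band) - np.pi
for wl, err in zip(band, errors):
    print(f"lambda = {wl:.3f} um, phase error = {err:+.3f} rad")
```

A sketch like this only captures the phase chromaticity; the wavelength dependence of the mask size and of its transmission (effects 1 and 3 above) would have to be modelled separately.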
![image](fig12s.eps) Conclusions {#sec:conclusion} =========== The APCMLC and PIAACMC concepts, previously proposed for unobstructed circular apertures, are also applicable to telescopes with arbitrary pupil shapes. Their performance is largely unaffected by aperture shape, and full throughput low-IWA coronagraphy is therefore theoretically possible on any pupil shape with the PIAACMC. The demonstration that the coronagraph with the highest known theoretical performance can be applied on any pupil may remove the requirement that a future space-based exoplanet direct imaging mission should use an off-axis telescope. On ground-based telescopes, whose optical designs are generally not driven by coronagraphy, high efficiency coronagraphy at and within 1 $\lambda/D$ is possible, potentially allowing direct imaging of habitable planets around nearby M and K type main sequence stars, for which the planet-to-star contrast is favorable but the angular separation is extremely challenging and requires $\approx \lambda/D$ IWA even on a 30-m class telescope. Manufacturing and implementation challenges have not been addressed in this paper. Manufacturing an achromatic focal plane mask for the APCMLC or PIAACMC is challenging, as its size should scale linearly with wavelength, and its complex amplitude transmission should be achromatic. Similar challenges have been previously addressed for other coronagraphs [@2011ApJ...729..144S], using carefully designed multilayer coatings of variable thickness and/or sub-$\lambda/D$ sized features optimized to produce the required chromatic dependence within the geometric pupil [@2003AA...403..369S; @2012AA...538A..55N]. The PIAA optics required for the PIAACMC are however not as challenging to manufacture as PIAA optics previously made for hard edged opaque focal plane masks, because the PIAACMC’s entrance apodization is milder. As for any high performance low IWA coronagraph, the PIAACMC performance is highly sensitive to residual wavefront errors, which must be actively sensed and controlled. The PIAACMC’s high throughput is an asset for achieving the required wavefront quality, as wavefront sensing can be performed rapidly, using all incoming light. Small IWA high contrast coronagraphy requires exquisite control of tip-tilt and low order wavefront errors. The central star angular size may also impose limits on the achievable performance [@2009ApJ...702..672C]. These issues have not been addressed or quantified in this paper, but may drive the optimal coronagraph design for a particular scientific application. We note that both sub-$\lambda/D$ IWA coronagraph designs described in this paper can also be designed for IWA equal to or larger than $\lambda$/D if required, offering lower performance but improved resilience against pointing errors and stellar angular size. While the APCMLC design with larger IWA has a lower throughput (due to the stronger apodization), for the PIAACMC, the large-IWA designs maintain full throughput and total on-axis extinction, offering a wide range of practical high performance coronagraphic options.
--- author: - 'G. Galletta' - 'V. Casasola' - 'L. Piovan' - 'E. Merlin' - 'D. Bettoni' date: 'Received ; Accepted 20 September 2006' title: Relations between ISM tracers in galaxies --- [We study the relations existing between the fluxes emitted in the CO(1-0) line, at 60 and 100 $\mu$m, and in the B and soft X-ray wavebands for galaxies of all morphological types. The large set of data that we created allows us to revisit some of the known relations existing between the different tracers of the Interstellar Medium (ISM): the link between the FIR flux and the CO line emission, and the relation between the X-ray emission of non-active galaxies and the blue or FIR luminosity.]{} [Using catalogues of galaxies and works presented in the literature, we collected fluxes in the FIR, 21 cm, CO(1-0) line and soft X-ray bands for two samples, consisting of normal and interacting galaxies respectively. Joining together these samples, we have data for a total of 2953 galaxies, not all of them observed in the four above wavebands.]{} [ All the relations found are discussed in the framework of the star formation activity, which is the link for most of them. We note that when active star formation is present, it may link the galaxy fluxes at almost all wavelengths, from X-rays to microwaves. On the contrary, in early-type galaxies, where the current star formation rate has faded out, the link between X-ray and FIR fluxes disappears. This result obtained for early-type galaxies is discussed and explained in detail in the framework of a suitable theoretical model, obtained by coupling chemo-dynamical N-body simulations with a dusty spectrophotometric code of population synthesis.]{} Introduction ============ The observations of galaxies at various wavelengths, going from radio to X-rays, allow us to study the relationships existing between the various phases of the interstellar gas, and between gas, dust and stars. Some of these relations have been known for many years, such as that between CO and far infrared (FIR) luminosities [@mirabel; @sanders; @solomon; @devereux]. Others, connected with X-ray emission, have been studied more recently [@padovani; @david; @ranalli]. At present, different tracers of the gas are known, such as millimetric lines for the cold molecular gas, the 21 cm line for atomic hydrogen at $\sim$100 K, IR bands for molecules at thousands of degrees, and UV lines and X-ray emission for hotter gas. The dust distribution is also traced by FIR emission at 60 and 100 $\mu$m if the grains are warm [@bregman], or at 170 $\mu$m if they are colder [@popescu]. The availability of large archives of observations at the above wavelengths (except for molecular lines) has allowed in recent years the compilation of catalogues containing a huge number of galaxies. Using these catalogues and the works presented in the literature, we collected fluxes in the FIR, 21 cm, CO(1-0) line and soft X-ray bands for two wide samples of normal [@normal] and interacting [@interacting] galaxies. Joining together these samples, we have data for a total of 2953 galaxies, not all of them observed in the four above wavebands. The fluxes measured with the different tracers now allow a study of the link existing between dust, gas and stars based on hundreds of galaxies. It is known that the fluxes emitted by a galaxy at very different wavelengths may be linked together by means of the star formation mechanism (see @david [@ranalli]).
For instance, the formation of massive stars generates the heating of the dust clouds in which they are embedded, by absorption of their UV radiation, and produces a re-emission of this energy in the far infrared. This process links the current star formation rate to the IR emission at 60 and 100 $\mu$m [@thronson]. The ionizing radiation of stars may also produce the evaporation of the molecular clouds. Inside these clouds, where the particle density is high enough to produce a significant number of collisions between H$_2$ and CO molecules, the latter are excited and produce photons, but in optically thick regions. The warming by the UV stellar light makes these regions less dense, making the CO lines visible at their edge. Because of this mechanism, these lines are considered tracers of the cold molecular hydrogen, which does not emit observable lines. The newly formed stars are also responsible for the X-ray emission, produced by very massive stars, by core-collapse SNe, and by high mass X-ray binaries. According to the mechanisms described above, we expect that galaxies with active star formation will have a far infrared emission, but also CO and X-ray emissions induced by the more massive stars, linked together by means of different relations. When the star formation decreases or vanishes, the far infrared emission decreases as well, but it may be fed by the stellar light absorbed and re-emitted in the infrared by dust (cirrus), while low-mass X-ray binaries and Type I SNe contribute to the high energy galaxy spectrum. In addition, AGB stars, surrounded by dust, and the cooling flows of the interstellar medium ejected by supernovae may produce additional IR and X-ray emission, respectively. To study the activity of the galaxies in different wavebands, we collected data starting from the original fluxes at 60 and 100 $\mu$m, in the CO(1-0) 2.6 mm line and in soft X-rays used to compile our catalogues [@norm_cat; @inter_cat]. The merging of the two above catalogues produces 1764 known values of far infrared fluxes (1837 have 100 $\mu$m flux), 391 soft X-ray fluxes and 434 values of the CO(1-0) line luminosity. We extracted from the LEDA catalogue [@leda] the values of the distance moduli, blue absolute magnitudes and morphological classification for all of them. There are 1038 galaxies with evident signs of interaction or disturbed morphologies according to the catalogues of @arp [@am; @vv]. We shall refer to them as “perturbed galaxies”. The remaining 1915 galaxies, which appear neither morphologically nor dynamically perturbed, are called “normal galaxies”. In our sample, we have 253 galaxies with a spectral classification of the nucleus, and 231 of these appear to host an AGN (Seyfert 1, 2 or transition type, Seyfert 3 or LINERs) according to the classifications of @ho and @veron. Most of the remaining 2722 galaxies lack information about the nuclear spectrum or have spectra of HII regions (22 starburst spectra). They are not included in any AGN catalogue and for this reason in the following discussion we refer to them as “non-active galaxies” and to the others as “active galaxies”. With all these data, we cross-correlated the various tracers to understand and revisit the main relations existing between the X, FIR, CO and B luminosities. Cold gas and warm dust ====================== The relations existing between different cold components of the ISM, such as the molecular gas and the dust, have been studied for many years [@mirabel; @solomon; @bregman].
These studies find that the global galaxy luminosity derived from the CO(1-0) line is directly related to the flux at 100 $\mu$m. With our large sample we can now test these relations using galaxies of different morphological types and levels of activity or interaction. In Figure \[CO\_100\] we plot the logarithm of the flux measured from the CO(1-0) line vs. the logarithm of the IRAS flux at 100 $\mu$m. In our plots, we have 193 galaxies with classification from E to Sb and 178 from Sbc to Sm. The relation found by @bregman for a sample of early-type galaxies, log S$_{CO}$=log S$_{100}$ - 1.76, is also plotted for comparison, as a dotted line. The relations are evident with this wider sample of galaxies. In these diagrams, active and non-active galaxies appear mixed together without clear differences and have been plotted together. The same behaviour appears for interacting and non-interacting galaxies, which are not distinguished in our plots. For all the galaxy types, we find: $$Log S_{CO}= 1.06\ Log S_{100} + 2.02 \label{eqCO_100}$$ with a correlation coefficient of 0.74 and an r.m.s. of 0.37. In the above formula, S$_{100}$ is in mJy and S$_{CO}$ is in Jy km/s. Similar relations exist between the CO fluxes and the FIR magnitudes, defined as: $$m_{FIR} = -2.5\ Log(2.58\ S_{60}+S_{100})+ 22.25 \label{def_mfir}$$ where S$_{60}$ and S$_{100}$, the fluxes at 60 and 100 $\mu$m respectively, are in mJy. We find for all the galaxy types: $$Log S_{CO} = 0.41\ m_{FIR}+ 6.86 \label{CO_mfir}$$ with a correlation coefficient of 0.69 and an r.m.s of 0.40. The results are based on 179 early-type and 170 late-type galaxies. Because of their similarity with Figure \[CO\_100\], these relations are not plotted in this paper. We note that irregular galaxies are not fitted by these relations but show a wide spread. In our sample there are just 10 of them, and their representative points have not been plotted in Figure \[CO\_100\]. X-ray component ================ We are interested in understanding what relations exist between L$_X$, the X-ray luminosity, and the other global galaxy properties. The existence of a proportionality between the L$_X$ produced by discrete sources and L$_B$, the blue luminosity of the whole galaxy, is known from the literature. This relation has been studied by @ciotti and compared by @beuing with soft X-ray fluxes measured by the ROSAT satellite. It appears that late-type galaxies have a global X-ray luminosity directly proportional to L$_B$, while early-type systems are dominated by the emission produced by hot diffuse gas and their $L_X$ is proportional to the square of the blue luminosity, as discussed by @beuing. For this reason, the early and late-type galaxies are discussed separately. Late-type galaxies ------------------ With our data, the X-ray luminosity of galaxies with morphological type later than Sb can be fitted by a linear relation as a function of L$_B$ (dotted line in Fig. \[X1\], left panel). The direct proportionality is expressed by the equation: $$Log L_X = Log L_B - 3.85 \label{XBlate}$$ with an r.m.s. from the observed data of $\sigma$=0.61, based on 63 galaxies. In this formula and in the following ones, all the luminosities are expressed in solar units. If, instead of the blue luminosity, we use the galaxy area $D^2_{kpc}$, calculated from the apparent diameter measured at the 25 mag arcsec$^{-2}$ isophote and converted in kpc$^2$, we find that the relation is still present, but with a larger spread. It becomes: $$Log L_X = Log D^2_{kpc} + 3.83$$ ($\sigma$=0.80) for a sample of 64 galaxies.
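For convenience, the empirical fits quoted so far (Eqs. \[eqCO\_100\], \[def\_mfir\], \[CO\_mfir\] and \[XBlate\]) can be packaged as simple helper functions. The sketch below is only a wrapper around the published coefficients; the function names and the example flux values are ours and purely illustrative.

```python
import math

def m_fir(s60_mjy, s100_mjy):
    """FIR magnitude from the 60 and 100 micron IRAS fluxes (mJy), Eq. (def_mfir)."""
    return -2.5 * math.log10(2.58 * s60_mjy + s100_mjy) + 22.25

def log_sco_from_s100(s100_mjy):
    """log CO(1-0) flux (Jy km/s) predicted from the 100 micron flux (mJy), Eq. (eqCO_100)."""
    return 1.06 * math.log10(s100_mjy) + 2.02

def log_sco_from_mfir(mfir):
    """log CO(1-0) flux (Jy km/s) predicted from the FIR magnitude, Eq. (CO_mfir)."""
    return 0.41 * mfir + 6.86

def log_lx_late_type(log_lb):
    """log L_X (solar units) of a late-type galaxy from log L_B (solar units), Eq. (XBlate)."""
    return log_lb - 3.85

# Hypothetical input values, for illustration only
print(m_fir(s60_mjy=5000.0, s100_mjy=12000.0))
print(log_sco_from_s100(12000.0))
print(log_lx_late_type(10.0))
```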
A relation similar to that of @ciotti has been found by some authors [@padovani; @david; @ranalli], but using 60 $\mu$m fluxes or FIR luminosities. The values of $L_{FIR}$ are calculated using the formula: $$Log L_{FIR}= 2.59+Log(2.58\ S_{60}+S_{100})+2 Log\ d$$ where L$_{FIR}$ is in solar luminosities, fluxes are in mJy and the galaxy distance d is in Mpc. From our data it is possible to find a relation between L$_X$ and L$_{FIR}$ that fits the values of late-type galaxies. We found L$_X \propto$ L$_{FIR}^{0.90}$, similar to the L$_X \propto$ L$_{FIR}^{0.88}$ found by @ranalli for fluxes between 0.5 and 2 keV and to the L$_X \propto$ L$_{FIR}^{0.95}$ found by @david using fluxes between 0.5 and 4.5 keV. Forcing the relation to a linear proportionality between L$_X$ and L$_{FIR}$ we find: $$Log L_X = Log L_{FIR} - 3.18 \label{XFIR}$$ with a $\sigma$ of 0.47, based on 147 galaxies. This relation is plotted as a dashed line in the right panels of Figures \[X1\] to \[X3\]. We note that the B and FIR luminosities are also connected in late-type galaxies by means of a linear relation fitted by: $$Log L_{FIR} = Log L_B - 0.38$$ with an r.m.s.=0.5. This equation, inserted into relation (\[XBlate\]), gives: $$Log L_X = Log L_{FIR} -3.47$$ similar to the result of equation (\[XFIR\]) and to that found by @ranalli. This is an independent way to confirm our results and to verify the existence of a global link between L$_{FIR}$, B light and X-ray emission. The connection between the B luminosity or galaxy area and the X or FIR luminosities will be discussed in Section \[Discussion\]. Early-type galaxies ------------------- When the early-type galaxies are considered in the above described relations involving X-ray emission, the correlations become less evident. Considering soft X-ray and B luminosities, we find a relation: $$Log L_X = 2\ Log L_B - 13.57 \label{XBearly}$$ ($\sigma$=0.73) based on 224 galaxies and plotted as a full line in Fig. \[X2\], left panel. The above formula agrees with the expected relation for X-ray emission coming from hot diffuse gas, as discussed by @beuing. The relation still holds if $D^2_{kpc}$ (kpc$^2$) is used. It becomes: $$Log L_X = 2 Log D^2_{kpc} + 1.51$$ ($\sigma$=0.85) for 226 early-type galaxies from E to Sb. Many galaxies with high blue luminosity, an indication of high masses and of recent star formation, lie quite far from the mean line, with a behaviour different from that of late-type galaxies. If the X-ray fluxes are compared with the FIR luminosity, the disagreement with the behaviour found in late-type galaxies is more evident. The plot of L$_X$ vs. L$_{FIR}$ for early-type galaxies shows the representative points of the galaxies lying above the relation (\[XFIR\]) found for late-type galaxies (Fig.\[X2\], right panel). To understand this apparent disagreement, we need a theoretical analysis of the far infrared emission, as explained in Section 4. Active galaxies --------------- Active galaxies (Seyfert 1, Seyfert 2 and LINERs) have X-ray, B and FIR fluxes that are not linked together. This happens because, in addition to the emission mechanisms stimulating the light emission in the different wavebands described for non-active galaxies, there is an X-ray emission coming from the nucleus. In fact, the points representative of these active galaxies are spread in the plot over the discrete sources line and around the diffuse gas line (see Fig.\[X3\], left side).
In the L$_X$–L$_{FIR}$ diagram (Fig.\[X3\], right side) the spread is similar to that of the early-type galaxies plotted in Fig.\[X2\], but we plotted the active galaxies separately because of the particular nature of their X-ray emission, due to the nuclear contribution. Modelling $L_{X}$, $L_{B}$ and $L_{FIR}$ of early-type galaxies {#Modelling} =============================================================== To cast light on the nature of the relations observed between $L_{X}$, $L_{B}$ and $L_{FIR}$ for early-type galaxies, one has to consider the various components of a galaxy (stars, gas and dust) and to understand their mutual interactions as far as the spectral energy distribution (SED) is concerned. There are two basic schemes to model the formation and evolution of early-type galaxies: (1) the semi-analytical models, from which a great deal of our understanding of the chemo-spectro-photometric properties is derived, and (2) the N-Body Tree-SPH simulations which, in contrast, have been only occasionally used to study the spectro-photometric properties of early-type galaxies. In the following part of this section we proceed as follows. First, we analyse the drawbacks of semi-analytical models, in particular those affecting the calculation of the infrared emission of early-type galaxies. Second, we discuss how dynamical simulations and a dusty spectrophotometric code, when combined, allow us to move a step forward in the calculation of the SED properties. Third, we show in detail how our model has been built and how the coupling between dynamics and dusty population synthesis has been carried out. The semi-analytical models and their drawbacks ---------------------------------------------- The semi-analytical models approximate a galaxy as a point-mass system in which gas is turned into stars by means of suitable recipes for star formation, and heavy elements are produced by stellar nucleosynthesis and stellar winds/explosions. The standard evolutionary population synthesis technique (EPS) is usually applied to derive the SED of the galaxy, with models able to explain many global features of early-type galaxies, as amply described by many authors [@Arimoto87; @Arimoto89; @Bressan94; @Gibson97b; @Tantalo96; @Tantalo98]. There are three important and problematic issues of these models to be discussed for our purposes. First, to determine the age at which the galactic wind sets in [@Larson74; @Larson75], we need some hypotheses about Dark and Baryonic Matter and their relative distributions, and about the heating and cooling efficiency of the various mechanisms, in order to properly evaluate the total gravitational potential well and to describe the thermal history of the gas. In this scheme it turns out that the galactic wind typically occurs for ages $t_{GW} < 1$ Gyr, later in a massive early-type galaxy and much earlier in galaxies of lower mass [@Arimoto87; @Arimoto89; @Bressan94; @Gibson97b; @Tantalo96; @Tantalo98; @Chiosi98]. The maximum duration of the star forming activity therefore follows in these models the trend $\Delta t_{SF} \propto M_{G}$.
This trend of the SFH is, however, contrary to what is required by the observed trend of the $\alpha$-enhancement of early-type galaxies, which implies that the maximum duration of the star forming activity should decrease when the galaxy mass increases $\left( \Delta t_{SF} \propto M_{G}^{-1}\right)$ [see @Bressan96; @Kuntschner00; @Trager00a; @Trager00b; @Tantalo04; @Thomas05 for more details on the enhancement in $\alpha$-elements and the SFH of early-type galaxies]. Second, after the galactic wind phase, star formation no longer occurs and the evolution is merely passive. However, AGB and RGB stars continue to lose gas in amounts that are comparable to those before the galactic wind [@Chiosi00]. What is the fate of this gas? One may imagine that the large amount of gas lost by stars will expand into the Dark Matter halo and, once heated up to an energy overwhelming the gravitational potential, will escape the galaxy. Most likely a sort of dynamical equilibrium is reached, in which gas is continuously ejected by stars and lost by the galaxy. It may therefore happen that some amount of gas is always present in the galaxy. The question is not trivial because, if an early-type galaxy is free of gas and contains only stars, the SED is expected to drop off long-ward of about $2 \mu m$ and no IR emission should be detected. However, as already pointed out long ago by @Guhathakurta86 [@Knapp89] (see also Fig. \[X2\]), many early-type galaxies of the local universe emit in the IR. The origin of this flux in the MIR/FIR is likely due to dust present in a diffuse ISM which, heated up by the galactic radiation field, emits at those wavelengths. Therefore, to match the IR emission one has to allow for some amount of diffuse ISM. An interesting question to raise is therefore: how much gas can be present today in an elliptical galaxy and how is it distributed across the galaxy? Even if we can correctly estimate the amount of gas ejected by stars, the fate of this gas goes beyond the possibilities of classical semi-analytical models. As a third point, note that when we fold many SSPs to calculate a galaxy SED using the classical EPS technique, we simply convolve their fluxes with the SFH of the galaxy. Many classical spectrophotometric semi-analytical models of galaxies are built in this way: there is no dust at the level of the SSPs and again no dust at the level of the galaxy model [see e.g. @Arimoto87; @Arimoto90; @Bruzual93; @Tantalo96; @Kodama97; @Tantalo98; @Buzzoni02; @Buzzoni05]. To calculate the emission by dust, a higher level of sophistication of the model is required. Indeed, one has to develop a model in which the sources of radiation and the emitting/absorbing medium are spatially distributed, and to face and solve the radiative transfer problem, simulating in a realistic way the interactions among the various physical components of a galaxy. Among recent models of this kind are those by @Silva98, @Devriendt99 and @Takagi03. Improving upon semi-analytical models ------------------------------------- Two drawbacks of the semi-analytical models therefore concern: (1) the description of the galactic wind, which is supposed to occur within a finite time interval, and (2) the star formation history, which is reversed, allowing a longer SFH for more massive galaxies. These two problems, combined with the lack of geometrical information about the distribution of gas and dust, make semi-analytical models unsuitable for properly calculating the IR emission of early-type galaxies.
To improve upon them we need to use the results obtained from dynamical simulations. These have been shown to be able to properly model the ejection of gas by the galaxy as a sort of continuous process, taking place whenever a gas particle heated up by various mechanisms has acquired a velocity greater than the escape velocity [see e.g. @Carraro98; @Kawata01; @Springel01; @Chiosi02]. They are able to reproduce the SF history of early-type galaxies both in the context of the monolithic collapse scenario [@Kawata01; @Chiosi02] and, recently, in the context of the hierarchical scenario [@DeLucia06]. Finally, the galaxy is no longer a mass point, but a fully three-dimensional structure of the galaxy is available, with the spatial distribution of stars and gas. @Merlin06, with the aid of *N-Body Tree-SPH* simulations based on quasi-cosmological initial conditions in the standard Cold Dark Matter scenario (S-CDM), modelled the formation and evolution of two early-type galaxies of different total mass (Dark + Baryonic Matter in the cosmological proportions 9:1). The total masses under consideration are $1.62\times 10^{12} M_\odot$ (Model A) and $0.03\times 10^{12} M_\odot$ (Model B). The galaxies have been followed from their separation from the global expansion of the universe to their collapse to virialized structures, the formation of stars and the subsequent nearly passive evolution. They are followed for a long period of time, i.e. 13 Gyr (Model A) and 5 Gyr (Model B), in any case well beyond the stages of active star formation, which occur within the first 3 to 4 Gyr (see below). The models take into account radiative cooling by several processes, heating by energy feedback from supernova explosions (both Type I and II) and chemical enrichment. All the models conform to the so-called *revised monolithic scheme*, because mergers of substructures have occurred very early in the galaxy life. Some parameters and results of the two models are summarized in Table \[tabcosmo\]. Note that the shape of the resulting galaxies is nearly spherical both in Dark Matter and in stars. ----------------------------------------- ---------- ---------- -- Model A B Cosmological background S-CDM S-CDM Initial redshift 50 53 $\Omega_m$ 1 1 $H_0 \mbox{ } [km\ s^{-1} Mpc^{-1}] $ 50 50 Gas particles 13719 13904 CDM particles 13685 13776 Total Mass $ 1.62 $ $ 0.03 $ Initial baryonic mass fraction 0.10 0.10 Present gas mass 0.062 0.0004 Present star mass 0.091 0.0029 $M_{star}/M_{baryons}$ 0.556 0.82 Initial radius 33 9 Half-Mass radius of stars 7 1 Half-Mass radius of DM 52 15 Effective radius of stars 5.2 0.8 Present virial radius 300 41 Axial ratio b/a (stars) 1.08 1.04 Axial ratio c/a (stars) 1.07 1.00 Axial ratio b/a (Dark Matter) 1.14 1.14 Axial ratio c/a (Dark Matter) 1.17 0.96 Age of the last computed model 13 5 ----------------------------------------- ---------- ---------- -- : Initial parameters for the dynamical simulations of @Merlin06 in the standard CDM scenario. Masses are in units of $10^{12}M_\odot$, radii are in kpc and ages are in Gyr. \[tabcosmo\] The third drawback of classical semi-analytical models was the lack of a description of the dusty component, which for our purposes needs to be included. The semi-analytical chemo-spectro-photometric model developed by @Piovan06b allows us to overcome this issue. It takes into account not only the geometrical structure of galaxies of different morphological types, but also the effect of dust in converting the UV and optical light into far-IR radiation.
In brief, the @Piovan06b model follows the infall scheme, allows for the onset of galactic winds, and contains three main components: (i) the diffuse interstellar medium composed of gas and dust, whose emission and extinction properties have been studied in detail by @Piovan06a, (ii) the large complexes of molecular clouds in which new stars are formed and (iii) the stars of any age and chemical composition. The total gas and star mass provided by the chemical model are distributed over the whole volume by means of suitable density profiles, one for each component and depending on the galaxy type (spheroidal, disk and disk plus bulge). The galaxy is then split into suitable volume elements, to each of which the appropriate amounts of stars, molecular clouds and interstellar medium are assigned. Each elemental volume absorbs radiation from all other volumes and from the interstellar medium in between. The elemental volume also re-emits the absorbed light and produces radiation by the stars that it contains. On the other hand, the star formation, the initial mass function and the chemical enrichment of the @Piovan06b model are very similar to those of @Bressan94 [@Tantalo96; @Tantalo98; @Portinari98]. Coupling dynamical simulations and dusty population synthesis models -------------------------------------------------------------------- The description of an early-type galaxy, as far as predicting its spectro-photometric infrared properties is concerned, can therefore be realized with a suitable combination of dynamical and spectro-photometric approaches. Coupling the dynamical models with spectro-photometric synthesis requires a number of steps that deserve some remarks. ### Radial density profiles. Fig. \[RaggiViriali\] shows the cumulative distribution of gas and stars as a function of the radial galactocentric distance normalized to the virial radius for model A (top panel) and model B (bottom panel). The gas is generally distributed in the external regions of the galaxy and steeply decreases inward. In contrast, the stars are more concentrated toward the centre. The gradients in the spherically averaged star and gas content provided by the dynamical models are the primary information to load into the spectro-photometric code of @Piovan06b. They allow us to infer the amount of gas contained within a given radius or within a given aperture. We fix the total dimension of the portion of the average model producing the IR flux to a diameter $D_{gal} = 25$ kpc, consistent with the mean galaxy size of the observed sample. As the spectro-photometric code of @Piovan06b suited to describe early-type galaxies is written in spherical symmetry, we have to derive suitable spherical distributions for the density of stars and gas to be used in the model. The task is facilitated by the nearly spherical shape of the dynamical models. To this aim, we consider the sphere of radius $R_{gal}$ centred at the centre of mass of the stellar component. The sphere is then divided into a number of thin spherical shells, whose derived average density of stars and gas is shown in Fig. \[Fit\_RHO\]. Even if the centres of mass of the star and gas distributions may not be exactly coincident, this is not relevant here, so that the same coordinate centre can be used for both components.
In order to secure a smooth behaviour at the galaxy radius $R_{gal}$, the star and gas density profiles are represented by the law: $$\rho _{i}=\rho _{0i}\left[ 1+\left( \frac{r}{r_{c}^{i}}\right) ^{2}\right] ^{-\gamma _{i}} \label{rhostar_ell}$$ where $``i"$ stands for $``stars"$ or $``gas"$, and $r_{c}^{i}$ are the corresponding core radii. The above representation is more suited to our aims than the classical King law. The fits are shown in Fig. \[Fit\_RHO\] (solid lines). They are normalized in such a way that the integral over the galaxy volume corresponds to the amount of gas contained inside $R_{gal}$. ### Star formation rate. In the dynamical models, the period of intense star formation, during which most of the star mass is built up, is confined within the first 3 to 4 Gyr. In Model A this is followed by a long tail of minimal stellar activity which continues forever. If this activity were real, we would expect a background of young stars giving rise to a significant emission in the UV-optical region up to the present, which is not compatible with the observed spectra of typical early-type galaxies. As already pointed out by @Merlin06, this minimal stellar activity is an artefact of the poor mass resolution for the baryonic component, in other words of the low number of particles considered in the numerical simulations. To cope with this, we simply set the star formation rate to zero when only one or two star particles are involved. This is equivalent to cutting the star formation rate for ages older than about 5 Gyr. The problem does not occur with Model B simply because the last computed model is at 6 Gyr. ### Checking dynamical models against chemo-spectro-photometric models. To this aim we plug the star formation history (SFH) of the dynamical models into the chemical code of @Portinari98. The closed-box approximation is adopted. The total baryonic mass of the chemical models is the same as in the dynamical ones, and the same holds for the initial mass function of the stars composing each star particle: @Kroupa98 in our case. In Fig. \[BIG\_SF\] we show the results obtained by inserting the SFH of Model A into a classical chemical model with total baryonic mass $M_B$ equal to $1.6 \cdot 10^{11} M_{\odot}$. The top panels display the adopted SFH (left) and the gas metallicity of the chemical model (right). The bottom left panel shows the temporal variation of the star mass $M_{star}$ and gas mass $M_{gas}$, whereas the bottom right panel shows the ratios $M_{star}/M_B$ and $M_{gas}/M_B$ for both the dynamical (thin lines) and the chemical model (thick lines). The agreement is very good, thus confirming the internal consistency between the two descriptions of the same object. We also show the amount of gas at $13$ Gyr contained in the whole galaxy for both the dynamical (heavy dots) and the classical chemical models (open circles), and the amount of gas contained inside $R_{gal}$ (open squares). Indeed there is little gas left over inside the 25 kpc radius region. Similarly, in Fig. \[SMALL\_SF\] we show the results obtained by inserting the SFH of Model B into a classical chemical model with a total baryonic mass of $3.5 \cdot 10^{9} M_{\odot}$. The only difference is that the maximum age of the dynamical model is 5 Gyr.
This cross-checking of the models is particularly significant because: first, it ensures that the results of the analytical models fairly reproduce those of the dynamical simulations as far as some important features are concerned; second, it ensures that we can safely use the results of the chemical models to prolong the evolutionary history of Model B up to the present; third, it ensures that we can safely apply the population synthesis technique of @Piovan06b. Knowing the amount of gas, we need to specify the fraction of it in the form of dust, to finally be able to derive the whole SED from the X-rays to the FIR and look for relationships between the luminosities in the X, B and FIR pass-bands we want to interpret. Our models, both semi-analytical and chemo-dynamical, are not suitable to describe the evolution of the compositions and abundances of *both* the gas and dust phases. The relative proportions of the various components of the dust would require the detailed study of the evolution of the dusty environment and the complete information on the dust yields, as in the models of @Dwek98 [@Dwek05]. This would lead to a better and more physically sound correlation between the composition of dust and the star formation and chemical enrichment history of the galaxy itself, however at the price of increasing the complexity and the uncertainty of the problem. The key parameter to calculate the amount of dust is the dust-to-gas ratio, defined as $\delta =M_{d}/ M_{H}$, where $M_d$ and $M_{H}$ are the total dust and hydrogen mass, respectively. For the Milky Way and the galaxies of the Local Group, $\delta$ is estimated to vary from about $1/100$ to $1/500$, and typical values $\delta = 0.01$, $\delta =0.00288$ and $\delta =0.00184$ are used for the Milky Way (MW) and the Large and Small Magellanic Clouds (LMC and SMC). These dust-to-gas mass ratios describe a decreasing sequence, going from the MW to the LMC and SMC. Since these galaxies also describe a sequence of decreasing metallicity, a simple assumption is to hypothesize $\delta \varpropto Z$ in such a way as to match the approximate results for the MW, LMC and SMC: $\delta =\delta_{\odot}\left(Z/Z_{\odot}\right)$. This relation simply implies that the higher the metal content of a galaxy, the higher the abundance of grains per $H$ atom. However, the metallicity difference does not only imply a difference in the absolute abundance of heavy elements in the dust, but also a difference in the composition pattern as a function of the star formation history [@Dwek98; @Dwek05]. Despite these uncertainties [@Devriendt99], the relation $\delta \varpropto Z$ is often adopted to evaluate the amount of dust in galaxy models [e.g. @Silva98], by simply scaling the dust content adopted for the ISM of the MW to the metallicity under consideration. The $1.6 \cdot 10^{11}M_{\odot}$ and $3.5 \cdot 10^{9} M_{\odot}$ galaxy models reach an average metallicity of solar and slightly more than twice solar, respectively. To describe them we have adopted the description of @Piovan06a [@Piovan06b], where a model of the dusty ISM taking into account different metallicities is built. The problem however remained unsettled for metallicities higher than the solar one, where the relative proportions holding for the MW average diffuse ISM model have been adopted and the amount of dust scaled with $\delta \varpropto Z$.
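As a concrete illustration of this scaling, the short sketch below assigns a dust mass to a model galaxy from its hydrogen mass and metallicity under the $\delta \varpropto Z$ assumption. It is a minimal sketch: the function name, the adopted solar metallicity and the example masses are our own assumptions, and a full model would also adjust the grain composition pattern with metallicity, as discussed above.

```python
def dust_mass(m_hydrogen_msun, metallicity, delta_sun=0.01, z_sun=0.019):
    """Dust mass under the delta = delta_sun * (Z / Z_sun) scaling.

    m_hydrogen_msun : hydrogen gas mass in solar masses
    metallicity     : metal mass fraction Z of the diffuse ISM
    delta_sun       : dust-to-gas ratio adopted for the Milky Way
    z_sun           : solar metallicity (assumed value)
    """
    delta = delta_sun * (metallicity / z_sun)
    return delta * m_hydrogen_msun

# Illustrative values only: a gas-poor early-type model with solar metallicity
print(dust_mass(m_hydrogen_msun=1.0e8, metallicity=0.019))  # ~1e6 Msun of dust
```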
Therefore, for the $1.6 \cdot 10^{11}M_{\odot}$ galaxy with solar metallicity the MW diffuse ISM model has been adopted $\left(\delta =\delta_{\odot} \right)$, while for the $3.5 \cdot 10^{9} M_{\odot}$ model we followed the $\delta \varpropto Z$ relation, using the MW average pattern of dust composition. The connection between the results of this model and the observed diagrams is discussed in the following section. Discussion {#Discussion} ========== Our data confirm and extend the previously known relations existing between the various tracers of the ISM in galaxies of different morphological types. In the literature, the relation found by @bregman between S$_{CO}$ and S$_{100}$ indicates a direct proportionality (slope=1) between the two fluxes and differs from that of @solomon, which exhibits a steeper gradient. Our relation (\[eqCO\_100\]) agrees quite well with the proportionality found by @bregman, the slope we found being equal to 1.06. The similarity between the two curves in Figure \[CO\_100\] is evident. This link derives, as described in the introduction, from the excitation of the gas clouds by the currently forming stars and from the warming of the dust present in the galaxy. Late-type galaxies ------------------ In late-type galaxies (t$>$Sb) our data show the existence of a linear relation between the soft X-ray fluxes and other indicators of recent and current star formation, such as the B and FIR luminosities respectively (equations \[XBlate\] and \[XFIR\]). This has been known since the first X-ray observations of large samples [@fabbiano0], and this connection between B and X-ray luminosity in late-type galaxies has been interpreted as due to the contribution of discrete X-ray sources, whose number is proportional to the quantity of already formed stars [@ciotti; @beuing]. The recent work of @fabbiano, which is able to resolve the single X-ray binaries in 14 galaxies, indicates that the X-ray luminosity produced by discrete sources is related to the B luminosity by a similar relation, with an intercept value of -3.63, similar to our -3.85 of equation \[XBlate\]. In addition to the interstellar radiation, which is proportional to the number of already formed stars, the X-ray emission is also produced by HII regions, where there is an ongoing vigorous star formation [@david]. This latter contribution appears more evident in FIR light and may explain the existence of a similar linear relation between L$_X$ and L$_{FIR}$. Early-type galaxies ------------------- In early-type galaxies the behaviour of these relations is quite different. For most of these galaxies the star formation is exhausted, and it may be present only in a few of them, possibly fed by gas accretion phenomena. Different mechanisms have been suggested to explain the X-ray emission in this kind of galaxies. The main ones are the thermal emission due to the hot ISM and the emission generated by a relatively old population of end products of stellar evolution, composed of Type I supernova remnants and low-mass X-ray binaries not yet evolved. In particular, for the fainter galaxies the X-ray emission is compatible with discrete sources and seems to be dominated by compact accreting systems, while for the brighter objects the emission from the hot diffuse gas still present in the galactic potential well appears as an additional component [@beuing]. The size of this population of relatively old objects is well represented by the total blue luminosity of the galaxy.
For this reason the X-ray fluxes in early-type galaxies are still linked with the total blue luminosity, which represents the more recent part of the star formation history of the galaxy. In the FIR, however, since the star formation in most of these systems is almost exhausted, mechanisms different from the emission of warm dust heated by newly born stars predominate. The FIR emission comes from circumstellar dusty shells around AGB stars and from an interstellar medium fed by the outflow of dusty gas from AGB and RGB stars, as described in Sect. \[Modelling\]. The key point to interpret the observed trends is that we deal with an emission coming from a more or less small amount of dust distributed over the whole galaxy and heated by an average interstellar radiation field due to all the stars of any age. The situation is quite different from what happens, for instance, in starburst galaxies, where high optical depth dusty regions reprocess the light coming from newly born stars embedded in the parental environment. We can therefore conclude that in most of our early-type galaxies the mechanism of IR emission is not strictly related to the star formation, and the link between the younger generations of stars and the dust emission is lost. For these reasons one may expect that the soft X-ray luminosity in early-type galaxies is traced by the total blue luminosity [*but not*]{} by the FIR luminosity. With the end of the star formation, the far infrared emission of these galaxies has faded out, and an early-type galaxy with the same $L_X$ as a late-type one will have a lower $L_{FIR}$. This could explain the location of the points in Fig. \[X2\] (right panel), on the left side of the linear relation. To check if this interpretation is correct, we apply the detailed chemo-dynamical spectrophotometric model described in the previous section, in order to estimate the luminosities produced by the stars in connection with the various phenomena present inside the galaxy, taking into account the contribution by dust as well. Since the theoretical model cannot derive the $L_{X}$ luminosity, we proceed in the following way. The luminosities $L_{B}$ and $L_{FIR}$ are directly derived from the model. Then, we assume that the X-ray production of these galaxies follows the dependence on $L_{B}$ given by our relation (\[XBearly\]). In this way we may estimate the expected X-ray flux and define a representative point in the $L_{FIR}$ vs $L_{X}$ plot. We start by considering two template models, in which all the parameters are fixed using the clues coming from the dynamical simulations of @Merlin06, as described in Sect. \[Modelling\]. The King profiles represented in Fig. \[Fit\_RHO\] are similar for all the components, with $\gamma_{stars} \simeq \gamma_{gas} \simeq 1.5$ and $r_{c}^{stars} \simeq r_{c}^{gas} \simeq 0.5$ kpc, while the dimension of the galaxy is an average one, corresponding to most of the galaxies available in the catalogue. The SFH is exactly the one obtained from the dynamical simulations. The two values of $L_{FIR}$ and $L_{X}$ obtained for the $3.5 \cdot 10^{9} M_{\odot}$ and $1.6 \cdot 10^{11} M_{\odot}$ baryonic mass models are plotted in Fig. \[LxLfirES\]. The more massive galaxy fits well into the region defined by the observed galaxies, while the model of smaller mass, even if it falls above the linear relationship as we could expect, belongs to a region not covered by the observed data.
The calculated levels of emission $L_{X}$ and $L_{FIR}$ of this galaxy are very low, and for this reason they belong to a region where we do not have enough observations. The weak $L_{FIR}$ emission of this galaxy can be explained by its dynamical evolution, in which almost all the gas is consumed to form stars and the galactic winds are very efficient [see @Chiosi02 for more details about galactic winds in low mass galaxies]. Therefore, even if the trend of this galaxy is the expected one for early-type galaxies (the model stays above the linear relation), nothing more definite can be said, because we lack observed data in that region of the diagram. Much more interesting is the model of higher mass. The calculated luminosities of this model, with its exhaustion of the star formation, seem to agree well with the observations of early-type galaxies. However, the model needs to be checked against other possibilities, in order to understand the way in which the various parameters of the model influence the spread of the early-type galaxies within the observational data. First of all we have to check the effect of the geometrical parameters and of the masses of stars/gas. ### The galactic radius In Fig. \[Raggi\] we show the model of $1.6 \cdot 10^{11} M_{\odot}$ baryonic mass as the galactic radius is varied, keeping the galactic centre at the centre of mass of the stellar component. The radii taken into account range from $6$ kpc to $50$ kpc. All the other parameters are fixed. Four models are represented (filled circles) and connected by a continuous line, and the smallest and largest models are marked with arrows. For larger radii we observe an increase of both $L_{FIR}$ and $L_{X}$, with a more pronounced increase in $L_{X}$. Since the density profile is unchanged, both increases in luminosity are simply due to the larger amount of material included when taking into account larger radii in the dynamical simulation. The stronger increase in $L_{X}$ than in $L_{FIR}$ can be simply explained. $L_{X}$ is tied to $L_{B}$, which is directly connected to the stellar luminosity. The stellar component is more massive and more concentrated toward the centre than the gaseous one (Fig. \[RaggiViriali\], upper panel). It follows that at increasing radius we introduce into the models more stars and more gas, but the added amount of stars is larger than the gaseous one, shifting $L_{B}$ (and then the related $L_{X}$) more than $L_{FIR}$. Finally, we observe that, even taking into account the smallest radius of $6$ kpc, it is not possible to move the theoretical point near the linear relation holding for spirals. ### The masses of stars and gas We also investigated in Fig. \[Raggi\] what happens if we ignore the clues coming from the dynamical simulations on the masses of stars and gas and arbitrarily start varying the amount of stars or gas, keeping everything else fixed. Filled diamonds represent the shift of the model of lowest radius when the mass of stars inside $R_{gal}$ is changed, going from a fraction $f_{*}=0.2$ to $f_{*}=1.0$ of the total amount of stars in the dynamical model. The effect is simply to move the point along a line roughly parallel to the linear relation. A smaller amount of stars directly implies a lower luminosity $L_{B}$ (and therefore a lower $L_{X}$), but also a lower $L_{FIR}$, because the weaker radiation field makes the dust cooler and shifts the peak of the dust emission to wavelengths longer than $100 \mu m$, with the result of a smaller $L_{FIR}$.
Finally, with open circles we show in Fig. \[Raggi\] five models obtained with a fixed amount of stars and a varying mass of gas (and therefore of dust), from $f_{d}=0.2$ to $f_{d}=1.0$ in fraction of the total amount of gas in the dynamical model. The effect of this huge increase of the mass of diffuse gas and dust (in the original model at $R_{gal} = 6$ kpc only $0.03 \%$ of the gas is inside $R_{gal}$) is to shift the models straight toward the linear relation. It can be explained in the following way. Increasing the amount of diffuse gas/dust (with all the parameters fixed and the star formation exhausted) implies more absorption of the stellar radiation and therefore a smaller $L_{B}$ (and $L_{X}$). On the other hand, $L_{FIR}$ remains almost unchanged or becomes smaller. The reason is that the strongly increased mass of dust makes the average stellar radiation field weaker, and therefore the increased emission of dust (due to the bigger mass) peaks at wavelengths longer than $100 \mu m$, leaving $L_{FIR}$ almost unchanged. Even if in this way we can shift the model toward the linear relation, the situation is physically unrealistic, requiring a huge amount of gas/dust concentrated in the centre of an early-type galaxy with exhausted star formation, which is not commonly observed and also not predicted by dynamical models. ### The scale radii Further geometrical parameters that must be examined are the scale radii $r_{c}^{i}$ of the King laws - eqn. (\[rhostar\_ell\]) - that describe the distribution of the stellar and gaseous components. The averaged profiles shown in Fig. \[Fit\_RHO\] and used for the models of Figs. \[LxLfirES\] and \[Raggi\] are both characterized by $r_{c}^{i} \simeq 0.5$, allowing for a concentrated amount of stars and gas in the inner regions. Keeping all the other parameters fixed, we investigated what happens if we allow for a uniform distribution of one or both of the physical components. Three cases have been considered: a uniform distribution of gas keeping fixed the stellar one $\left(r_{c}^{gas} \rightarrow \infty, r_{c}^{stars} \simeq 0.5 \right)$, a uniform distribution of stars keeping fixed the gaseous one $\left(r_{c}^{stars} \rightarrow \infty, r_{c}^{gas} \simeq 0.5 \right)$ and, finally, a uniform distribution of both components $\left(r_{c}^{stars} \rightarrow \infty, r_{c}^{gas} \rightarrow \infty \right)$. The results are shown in Fig. \[Profili\], for two radii of the galaxy model, $R_{gal}=6$ kpc and $R_{gal}=20$ kpc, respectively. The three different distributions give a similar result: a weaker $L_{FIR}$, shifting the point to the left, and a slightly higher $L_{X}$. This can be explained in the following way: for $r_{c}^{stars}$ and $r_{c}^{gas}$ both $\simeq 0.5$, the diffuse ISM and the stars are both concentrated in the inner region of the galaxy, with a density of stars/gas many orders of magnitude higher than in the outer regions. This is the best condition to produce a high $L_{FIR}$, because the regions of highest dust density are the same in which the average radiation field heating the dust is also strongest. The spatial distribution of the ISM favours the interaction with the stellar radiation. When we destroy this coupling between stellar emission and gas density, as we do by allowing for a uniform distribution of gas or stars or both, the emission in the $L_{FIR}$ becomes weaker.
The weakening of the dust emission is stronger for the larger radius of $20$ kpc because in all three cases one or both components are distributed over a huge galactic volume, and we have a low density of gas, possibly coupled with a weak radiation field. For the $6$ kpc model, even if the coupling in the central regions is destroyed, the galaxy is small enough to keep a good level of $L_{FIR}$, even when the matter is equally distributed across the whole galaxy volume.

### The star formation history

The last and main point to be examined is how varying the star formation history affects the position of the galaxies in the $L_{FIR}$ vs $L_{X}$ plot. In Figs. \[Raggi\] and \[Profili\] the galaxies of different morphological type form a sequence that, going from systems in which the star formation was exhausted long ago to systems in which star formation is still active, moves toward the linear relation, suggesting the key role played by the star formation. First of all we calculate the $L_{FIR}$ and $L_{X}$ obtained from the SEDs and models of @Piovan06b for real galaxies of the local universe: three spiral galaxies $(M100, M51$ and $NGC6946)$ and two starburst galaxies $(Arp220$ and $M82)$. The key point is that the SFHs of these galaxies allow us to cover a good number of different star formation histories. All these SFHs, unlike those of the ellipticals obtained from dynamical simulations, never end, and in the case of the two starbursters a strong burst of star formation is added in the last few million years. A large fraction of $L_{FIR}$ therefore comes from the young and deeply obscured regions of star formation and not only from the diffuse component. The results, presented in Fig. \[RealGalaxies\], show that the three models of spirals stay near the linear relation, while the two starbursters stay below the line, with the model of $Arp220$, powered by a huge burst of star formation, falling well below the linear relation. The stronger the emission coming from the regions of star formation, the larger the shift toward higher $L_{FIR}$ and lower $L_{X}$ (due to the lower $L_{B}$). The results obtained from the models are quite similar to the observational data: for $M100$ we get $(L_{FIR},L_{X})=(10.28,7.29)$ with the observations giving $(10.37,7.01)$, for $Arp220$ we have $(L_{FIR},L_{X})=(11.92,7.17)$ compared with $(11.99,7.60)$, and for $M82$ we get $(L_{FIR},L_{X})=(10.15,6.45)$ against $(9.79,6.31)$. However, these galaxy models, even though they represent real galaxies well, differ in many parameters from the early-type galaxy model of $1.6 \cdot 10^{11} M_{\odot}$, such as geometry and mass. These parameters, together with the SFH, obviously combine to determine the position of the models in the $L_{FIR}$ vs $L_{X}$ plot. To isolate the effect of the SFH, we first re-calculated the SFHs of the above five theoretical models, rescaled to the mass of $1.6 \cdot 10^{11} M_{\odot}$ of the early-type galaxy model. In Fig. \[SFH\] we show four of the five SFHs obtained. Second, we fixed all the geometrical parameters to the same values used for the average model of the $1.6 \cdot 10^{11} M_{\odot}$ early-type galaxy. The additional parameters, that is, the escape time of young stars from their parental molecular clouds, the library of SEDs of young dusty regions, and the mass of gas in the diffuse and molecular components, are fixed to the values used in @Piovan06b for spirals and starbursters as appropriate.
In Fig. \[varyingSFHs\] we finally show the results obtained as a function of the SFH for the galaxy of $1.6 \cdot 10^{11} M_{\odot}$, keeping all the other parameters fixed. It is interesting to observe that, since in these models the star formation never ends and the galactic wind is not included, the classical semi-analytical chemical evolution can be much more safely coupled to the spectro-photometric code. The effect of varying the SFH at fixed mass is to enhance $L_{FIR}$, keeping $L_{X}$ almost fixed and shifting the points toward the linear relation at higher infrared luminosities. This is ultimately due to the strong and efficient reprocessing of the light coming from very young stars, occurring in the dusty star-forming regions. As a consequence, models with starburst-like SFHs shift, as expected, toward higher $L_{FIR}$ luminosities than models with spiral-like SFHs, because of the stronger star formation and therefore stronger emission coming from young dusty regions. This can also be understood by looking in detail at the relative contributions to $L_{FIR}$ coming from the regions of star formation (let us call it $f_{SFR}$) and from the diffuse interstellar medium ($f_{ISM}$), both expressed as usual in $\log(L_{FIR}/L_{\odot})$. We get the following values: ($f_{SFR}=10.15, f_{ISM}=10.48$), ($f_{SFR}=10.02, f_{ISM}=10.45$), ($f_{SFR}=9.98, f_{ISM}=10.37$) for the three models with spiral-like SFHs, while we have ($f_{SFR}=10.97, f_{ISM}=10.47$) and ($f_{SFR}=11.93, f_{ISM}=10.56$) for the models with starburst-like SFHs. The stronger the contribution from star-forming regions, the higher $L_{FIR}$, while $L_{B}$ (and $L_{X}$) remains almost unchanged. Models slightly dominated by the ISM contribution, but with a significant contribution coming from obscured newly born stars, are the most suitable to agree with the linear relation of spirals. It is worth noticing that in Fig. \[varyingSFHs\] we show both the results obtained applying the early-type linear relation between $L_{X}$ and $L_{B}$ - eqn. (\[XBearly\]) - and the late-type one - eqn. (\[XBlate\]). Since, however, the SFHs used (see Fig. \[SFH\]) are typical of late-type galaxies (or starbursters), it is physically more sound to apply eqn. (\[XBlate\]) to obtain the $L_{X}$ luminosity. As a last point, we also calculated a sequence of models in which one of the SFHs of the spirals has been chosen (namely that of NGC$6946$), with all the parameters fixed and only the mass varied. As we see from Fig. \[varyingSFHs\], the effect of varying the mass is to shift the object diagonally, almost along the relation. This is simply explained by the smaller amounts of stars/gas emitting radiation.

Conclusions
===========

We have been able to describe the relations existing in a galaxy between the various tracers of the ISM and to fix the coefficients of the relations between the FIR, B and X-ray luminosities, both for early-type and late-type galaxies. The large set of data we used allowed us to redefine more clearly the relation existing between the CO and the 100 $\mu$m fluxes. We found that the relation, first obtained by @bregman for early-type galaxies, is valid also for late-type galaxies. In these galaxies, the X-ray flux appears linked also to the B and FIR emissions. The only relation lacking from observations, i.e. the one between L$_X$ and L$_{FIR}$, has been studied by means of the most recent chemo-dynamical models coupled with dusty evolutionary population synthesis.
The calculated luminosities of the models seem to confirm our hypothesis of a connection between the exhaustion of the star formation and the “migration” of the early-type galaxies above the linear relation in the L$_X$ vs L$_{FIR}$ plot. In the frame of our assumptions, we may therefore conclude that the prediction of our dusty chemo-dynamical models of galaxy evolution is consistent with the observed lack of a direct relation between $L_{X}$ and $L_{FIR}$ for early-type galaxies, which is due to the different mechanisms of production of FIR light in galaxies where star formation is no longer active. In most of our early-type galaxies the mechanism of IR emission is no longer strictly related to the ongoing star formation and to the reprocessing of the radiation in the dense regions where new stars are born. The FIR emission therefore most likely comes from circumstellar dusty shells around AGB stars and from a diffuse interstellar medium fed by the outflow of dusty gas from AGB and RGB stars.

Finally, we can summarize our results as follows: (i) the SFH of the galaxies seems to have the strongest effect on the position of early-type galaxies in the L$_X$ vs L$_{FIR}$ plot; (ii) other parameters, like the radius of the galaxy and the scale radii of stars and gas, play a secondary role, even if they can significantly contribute to the scatter of the models in the region above the linear relation; (iii) the mass is the main parameter explaining the scatter of the points along the linear relation.

This research has been partially funded by the University of Padua with Funds ex 60% 2005. We acknowledge Prof. C. Chiosi for useful discussions on theoretical subjects of this paper. L. Piovan is pleased to acknowledge the hospitality and stimulating environment provided by the Max-Planck-Institut für Astrophysik in Garching, where part of the work described in this paper was carried out during his visit as an EARA fellow on leave from the Department of Astronomy of the Padua University. We also thank the Referee for the detailed and useful comments on this topic.

Arimoto, N., & Yoshii, Y., 1987, A&A, 173, 23
Arimoto, N., & Yoshii, Y., 1989, A&A, 224, 361
Arimoto, N., & Tarrab, I., 1990, A&A, 228, 6
Arp, H. 1966, Atlas of Peculiar Galaxies, California Institute of Technology, Pasadena, CA
Arp, H. C., & Madore, B. F. 1987, A Catalog of Southern Peculiar Galaxies and Associations, Cambridge University Press
Bettoni, D., Galletta, G., & García-Burillo, S. 2003a, A&A, 405, 5
Bettoni, D., Galletta, G., & García-Burillo, S. 2003b, VizieR Online Data Catalog, 340, 50005
Beuing, J., Döbereiner, S., Böhringer, H., & Bender, R., 1999, MNRAS, 302, 209
Bregman, J.N., Hogg, D.E., & Roberts, M.S., 1992, ApJ, 387, 484
Bressan, A., Chiosi, C., & Fagotto, F., 1994, ApJS, 94, 63
Bressan, A., Chiosi, C., & Tantalo, R., 1996, A&A, 311, 425
Bruzual, G., & Charlot, S., 1993, ApJ, 405, 538
Buzzoni, A., 2002, AJ, 123, 1188
Buzzoni, A., 2005, MNRAS, 361, 725
Casasola, V., Bettoni, D., & Galletta, G. 2004a, A&A, 422, 941
Casasola, V., Bettoni, D., & Galletta, G. 2004b, VizieR Online Data Catalog, 342, 20941
Carraro, G., Lia, C., & Chiosi, C., 1998, MNRAS, 297, 1021
Chiosi, C., 1980, A&A, 83, 206
Chiosi, C., Bressan, A., Portinari, L., & Tantalo, R. 1998, A&A, 339, 355
Chiosi, C., 2000, A&A, 364, 423
Chiosi, C., & Carraro, G., 2002, MNRAS, 335, 335
Ciotti, L., Pellegrini, S., Renzini, A., & D’Ercole, A. 1991, ApJ, 376, 380
David, L. P., Jones, C., & Forman, W. 1992, ApJ, 388, 82
De Lucia, G., Springel, V., White, S. D. M., & Kauffmann, G., 2006, MNRAS, 366, 499
de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H.G., Buta, R.J., Paturel, G., & Fouque, P., 1991, Third Reference Catalogue of Bright Galaxies (RC3), Springer-Verlag: New York
Devereux, N. A., & Young, J. S. 1991, ApJ, 371, 515
Devriendt, J. E. G., Guiderdoni, B., & Sadat, R., 1999, A&A, 350, 381
Dwek, E. 1998, ApJ, 501, 643
Dwek, E. 2005, in AIP Conf. Proc. 761: The Spectral Energy Distributions of Gas-Rich Galaxies: Confronting Models with Data, ed. Popescu, C. C., & Tuffs, R. J., 103
Fabbiano, G., Kim, D.-W., & Trinchieri, G. 1992, ApJS, 80, 531
Gibson, B.K., & Matteucci, F. 1997a, ApJ, 475, 47
Gibson, B.K., 1997b, MNRAS, 290, 471
Griffiths, R. E., & Padovani, P. 1990, ApJ, 360, 483
Guhathakurta, P., Knapp, G. R., Kim, D. W., & Jura, M., 1986, BAAS, 18, 926
Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1997, ApJS, 112, 315
Kawata, D., 2001, ApJ, 558, 598
Kim, D.-W., & Fabbiano, G. 2004, ApJ, 611, 846
Kodama, T., & Arimoto, N. 1997, A&A, 320, 41
Knapp, G. R., Guhathakurta, P., Kim, D.-W., & Jura, M. A., 1989, ApJS, 70, 329
Kuntschner, H., 2000, MNRAS, 315, 184
Kroupa, P., 1998, MNRAS, 298, 231
Larson, R. B., 1974, MNRAS, 169, 229
Larson, R. B., & Dinerstein, H. L., 1975, PASP, 87, 911
Merlin, E., & Chiosi, C., 2006, MNRAS, in press (astro-ph/0605052)
Paturel, G., Andernach, H., Bottinelli, L., Di Nella, H., Durand, N., Garnier, R., Gouguenheim, L., Lanoix, P., Martinet, M.C., Petit, C., Rousseau, J., Theureau, G., & Vauglin, I., 1997, A&AS, 124, 109
Piovan, L., Tantalo, R., & Chiosi, C., 2006a, MNRAS, 366, 923
Piovan, L., Tantalo, R., & Chiosi, C., 2006b, MNRAS, in press (astro-ph/0605541)
Popescu, C. C., Tuffs, R. J., Völk, H. J., Pierini, D., & Madore, B. F. 2002, ApJ, 567, 221
Portinari, L., Chiosi, C., & Bressan, A. 1998, A&A, 334, 505
Ranalli, P., Comastri, A., & Setti, G. 2003, A&A, 399, 39
Sanders, D. B., & Mirabel, I. F. 1985, ApJL, 298, L31
Sanders, D. B., Scoville, N. Z., Young, J. S., Soifer, B. T., Schloerb, F. P., Rice, W. L., & Danielson, G. E. 1986, ApJL, 305, L45
Silva, L., Granato, G. L., Bressan, A., & Danese, L., 1998, ApJ, 509, 103
Solomon, P. M., & Sage, L. J. 1988, ApJ, 334, 613
Springel, V., Yoshida, N., & White, S. D. M., 2001, New Astronomy, 6, 79
Tagaki, T., Vansevicius, V., & Arimoto, N. 2003, Publ. Astron. Soc. Jap., 55, 385
Tantalo, R., Chiosi, C., & Bressan, A. 1996, A&A, 311, 361
Tantalo, R., Chiosi, C., Bressan, A., Marigo, P., & Portinari, L. 1998, A&A, 335, 823
Tantalo, R., & Chiosi, C., 2004, MNRAS, 353, 405
Thomas, D., Maraston, C., Bender, R., & de Oliveira, C. M., 2005, ApJ, 621, 673
Thronson, H. A., & Telesco, C. M. 1986, ApJ, 311, 98
Trager, S. C., Faber, S. M., Worthey, G., & González, J. J., 2000a, AJ, 119, 164
Trager, S. C., Faber, S. M., Worthey, G., & González, J. J., 2000b, AJ, 120, 165
Véron-Cetty, M.-P., & Véron, P. 2003, A&A, 412, 399
Vorontsov-Velyaminov, B. A. 1959, Atlas and Catalog of Interacting Galaxies
--- abstract: 'Age determination is undertaken for nearby early-type (BAF) stars, which constitute attractive targets for high-contrast debris disk and planet imaging surveys. Our analysis sequence consists of: acquisition of $uvby\beta$ photometry from catalogs, correction for the effects of extinction, interpolation of the photometry onto model atmosphere grids from which atmospheric parameters are determined, and finally, comparison to the theoretical isochrones from pre-main sequence through post-main sequence stellar evolution models, accounting for the effects of stellar rotation. We calibrate and validate our methods at the atmospheric parameter stage by comparing our results to fundamentally determined $T_\text{eff}$ and $\log g$ values. We validate and test our methods at the evolutionary model stage by comparing our results on ages to the accepted ages of several benchmark open clusters (IC 2602, $\alpha$ Persei, Pleiades, Hyades). Finally, we apply our methods to estimate stellar ages for 3493 field stars, including several with directly imaged exoplanet candidates.' author: - 'Trevor J. David and Lynne A. Hillenbrand' bibliography: - 'main.bib' title: 'The Ages of Early-Type Stars: Strömgren Photometric Methods Calibrated, Validated, Tested, and Applied to Hosts and Prospective Hosts of Directly Imaged Exoplanets' --- Introduction {#sec:intro} ============ In contrast to other fundamental stellar parameters such as mass, radius, and angular momentum – that for certain well-studied stars and stellar systems can be anchored firmly in observables and simple physics – stellar ages for stars other than the Sun have no firm basis. Ages are critical, however, for many investigations involving time scales including formation and evolution of planetary systems, evolution of debris disks, and interpretation of low mass stars, brown dwarfs, and so-called planetary mass objects that are now being detected routinely as faint point sources near bright stars in high contrast imaging surveys. The Era of Direct Imaging of Exoplanets {#subsec:directimaging} --------------------------------------- Intermediate-mass stars ($1.5-3.0\ M_{\odot}$) have proven themselves attractive targets for planet search work. Hints of their importance first arose during initial data return from IRAS in the early 1980s, when several A-type stars (notably Vega but also $\beta$ Pic and Fomalhaut) as well K-star Eps Eri – collectively known as “the fab four” – distinguished themselves by showing mid-infrared excess emission due to optically thin dust in Kuiper-Belt-like locations. Debris disks are signposts of planets, which dynamically stir small bodies resulting in dust production. Spitzer results in the late 2000s solidified the spectral type dependence of debris disk presence (e.g. @carpenter2006 [@wyatt2008]) for stars of common age. For a random sample of field stars, however, the primary variable determining the likelihood of debris is stellar age [@kains2011]. The correlation in radial velocity studies of giant planet frequency with stellar mass [@fischer2005; @gaidos2013] is another line of evidence connecting planet formation efficiency to stellar mass. The claim is that while $\sim$14% of A stars have one or more $>1 M_\mathrm{Jupiter}$ companions at $<$5 AU, only $\sim$2% of M stars do (@johnson2010, c.f. @lloyd2013 [@schlaufman2013]). 
Consistently interpreted as indicators of hidden planets, debris disks finally had their long-awaited observational connection to planets with the watershed discovery of [*directly imaged*]{} planetary mass companions. These were – like the debris disks before them – found first around intermediate-mass A-type stars, rather than the solar-mass FGK-type stars that had been the subject of much observational work at high contrast during the 2000s. HR 8799 [@marois2008; @marois2010] followed by Fomalhaut [@kalas2008] and $\beta$ Pic [@lagrange2009; @lagrange2010] have had their planets [*and indeed one planetary system*]{}, digitally captured by ground-based and/or space-based high contrast imaging techniques. Of the known [*bona fide*]{} planetary mass ($< 10 M_\text{Jup}$) companions that have been directly imaged, six of the nine are located around the three A-type host stars mentioned above, with the others associated with lower mass stars including the even younger 5-10 Myr old star 1RXS 1609-2105 [@lafreniere2008; @ireland2011] and brown dwarf 2MASS 1207-3933 [@chauvin2004] and the probably older GJ 504 [@kuzuhara2013]. Note that to date these directly imaged objects are all “super-giant planets" and not solar system giant planet analogs (e.g. Jupiter mass or below). Based on the early results, the major direct imaging planet searches have attempted to optimize success by preferentially observing intermediate-mass, early-type stars. The highest masses are avoided due to the limits of contrast. Recent campaigns include those with all the major large aperture telescopes: Keck/NIRC2, VLT/NACO, Gemini/NICI, and Subaru/HiCAO. Current and near-future campaigns include Project 1640 (P1640; Hinkley et al. 2011) at Palomar Observatory, Gemini Planet Imager (GPI), operating on the Gemini South telescope, VLT/SPHERE, and Subaru/CHARIS. The next-generation TMT and E-ELT telescopes both feature high contrast instruments. @mawet2012 compares instrumental contrast curves in their Figure 1. Despite the technological developments over the past decade, given the as-built contrast realities, only the largest, hottest, brightest, and therefore the youngest planets, i.e. those less than a few to a few hundred Myr in age, are still self-luminous enough to be amenable to direct imaging detection. Moving from the 3-10 $M_\text{Jupiter}$ detections at several tens of AU that are possible today/soon, to detection of lower mass, more Earth-like planets located at smaller, more terrestrial zone, separations, will require pushing to higher contrast from future space-based platforms. The targets of future surveys, whether ground or space, are however not likely to be substantially different from the samples targeted in today’s ground-based surveys. The most important parameter really is age, since the brightness of planets decreases so sharply with increasing age due to the rapid gravitational contraction and cooling [@fortney2008; @burrows2004]. There is thus a premium on identifying the closest, youngest stars. The Age Challenge {#subsec:agechallenge} ----------------- Unlike the other fundamental parameters of stellar mass (unambiguously determined from measurements of double-line eclipsing binaries and application of Kepler’s laws) and stellar radius (unambiguously measured from interferometric measurements of angular diameters and parallax measurements of distances), there are no directly interpretable observations leading to stellar age. 
Solar-type stars ($\sim 0.7-1.4 M_{\odot}$, spectral types F6-K5) were the early targets of radial velocity planet searches and later of debris disk searches that can imply the presence of planets. For these objects, although more work remains to be done, there are established activity-rotation-age diagnostics that are driven by the presence of convective outer layers and can serve as proxies for stellar age [e.g. @mamajek2008]. For stars significantly different from our Sun, however, and in particular the intermediate-mass stars ($\sim 1.5-3.0 M_{\odot}$, spectral types A0-F5 near the main sequence) of interest here, empirical age-dating techniques have not been sufficiently established or calibrated. Ages have been investigated recently for specific samples of several tens of stars using color-magnitude diagrams by @nielsen2013 [@vigan2012; @moor2006; @su2006; @rhee2007; @lowrance2000]. Perhaps the most robust ages for young BAF stars come from clusters and moving groups, which contain not only the early-type stars of interest, but also lower mass stars to which the techniques mentioned above can be applied. These groups are typically dated using a combination of stellar kinematics, lithium abundances, rotation-activity indicators, and placement along theoretical isochrones in a color-magnitude diagram. The statistics of these coeval stellar populations greatly reduce the uncertainty in derived ages. However, only four such groups exist within $\sim$ 60 pc of the Sun, and the number of early-type members is small. Field BAF stars having late-type companions at wide separation could have ages estimated using the methods valid for F6-K5 age dating. However, these systems are not only rare in the solar neighborhood, but considerable effort is required to establish companionship, e.g. [@stauffer1995; @barrado1997; @song2000]. Attempts to derive fractional main sequence ages for A-stars based on the evolution of rotational velocities are ongoing [@zorec2012], but this method is undeveloped and a bimodal distribution in $v \sin i$ for early-type A-stars may inhibit its utility. Another method, asteroseismology, which detects low-order oscillations in stellar interiors to determine the central density and hence age, is heavily model-dependent, observationally expensive, and best suited for older stars with denser cores. The most general and quantitative way to age-date A0-F5 field stars is through isochrone placement. As intermediate-mass stars evolve quickly along the H-R diagram, they are better suited for age-dating via isochrone placement than their low-mass counterparts, which remain nearly stationary on the main sequence for many Gyr [@soderblom2010]. Indeed, the mere presence of an early-type star on the main sequence suggests moderate youth, since the hydrogen burning phase is relatively short-lived. However, isochronal ages are obviously model-dependent, and they require precise placement of the stars on an H-R diagram, which in turn requires a parallax. The major uncertainties arise from lack of information regarding metallicity [@nielsen2013], rotation [@collinssmith1985] and multiplicity [@derosa2014].

Our Approach
------------

![image](f1.pdf){width="95.00000%"}

Although many nearby BAF stars are individually well studied, there is historically no modern data set providing consistently derived stellar ages for this population of stars.
Here we apply Strömgren photometric techniques and, by combining modern stellar atmosphere and stellar evolutionary codes, develop methods for robust age determination for stars more massive than the Sun. The technique uses specific filters, careful calibration, definition of photometric indices, correction for any reddening, interpolation of physical atmospheric parameters from index plots, correction for rotation, and finally Bayesian estimation of stellar ages from evolutionary models that predict the atmospheric parameters as a function of mass and age. Specifically, our work uses high-precision archival $uvby\beta$ photometry and model atmospheres to determine the fundamental stellar atmospheric parameters $T_\text{eff}$ and $\log g$. Placing stars accurately in a $\log T_\mathrm{eff}$ vs. $\log g$ diagram leads to derivation of their ages and masses. We consider [@bressan2012] evolutionary models that include pre-main sequence evolutionary times (2 Myr at 3 $M_{\odot}$ and 17 Myr at 1.5 $M_{\odot}$), which are a significant fraction of any intermediate-mass star’s absolute age, as well as [@ekstrom2012] evolutionary models that self-consistently account for stellar rotation, which has non-negligible effects on the inferred stellar parameters of rapidly rotating early-type stars. Figure \[fig:evolutiont\] shows model predictions for the evolution of both physical and observational parameters. The primary sample to which our technique is applied in this work consists of 3499 BAF field stars within 100 pc and with $uvby\beta$ photometry available in the [@hauck1998] catalog, hereafter HM98. The robustness of our method is tested at different stages with several control samples. To assess the uncertainties in our atmospheric parameters we consider (1) 69 $T_\mathrm{eff}$ standard stars from [@boyajian2013] or [@napiwotzki1993]; (2) 39 double-lined eclipsing binaries with standard $\log{g}$ from [@torres2010]; (3) 16 other stars from [@napiwotzki1993], also for examining $\log{g}$. To examine isochrone systematics, stars in four open clusters are studied (31 members of IC 2602, 51 members of $\alpha$ Per, 47 members of the Pleiades, and 47 members of the Hyades). Some stars belonging to sample (1) above are also contained in the large primary sample of field stars.

The Strömgren Photometric System {#sec:uvby}
================================

![The $u, v, b, y,$ H$\beta_\text{wide}$ and H$\beta_\text{narrow}$ passbands. Overplotted on an arbitrary scale is the synthetic spectrum of an A0V star generated by [@munari2005] from an ATLAS9 model atmosphere. The $uvby$ filter profiles are those of [@bessell2011], while the H$\beta$ filter profiles are those originally described in [@crawford1966] and the throughput curves are taken from [@castelli2006].[]{data-label="fig:filters"}](f2.pdf){width="45.00000%"}

Historically, Strömgren photometric methods have been used precisely for the purpose of determining stellar parameters for early-type stars. Recent applications include work by @nieva2013 [@dallemese2012; @onehag2009; @allende1999]. An advantage over more traditional color-magnitude diagram techniques [@nielsen2013; @derosa2014] is that distance knowledge is not required, so the distance-age degeneracy is removed. Also, metallicity effects are relatively minor (as addressed in an Appendix) and rotation effects are well modelled and can be corrected for (§ \[subsec:vsinicorrection\]).
Description of the Photometric System {#subsec:uvbydescription}
-------------------------------------

The $uvby\beta$ photometric system comprises four intermediate-band filters ($uvby$) first advanced by [@stromgren1966] plus the H$\beta$ narrow and wide filters developed by [@crawford1958]; see Figure \[fig:filters\]. Together, the two filter sets form a well-calibrated system that was specifically designed for studying earlier-type BAF stars, for which the hydrogen line strengths and continuum slopes in the Balmer region change rapidly with temperature and gravity. From the fluxes contained in the six passbands, five $uvby\beta$ indices are defined. The color indices, ($b-y$) and ($u-b$), and the $\beta$-index, $$\beta = \mathrm{H}\beta_\text{narrow} - \mathrm{H}\beta_\text{wide},$$ are all sensitive to temperature and weakly dependent on surface gravity for late A- and F-type stars. The Balmer discontinuity index, $$c_1 = (u-v) - (v-b),$$ is sensitive to temperature for early-type (OB) stars and to surface gravity for intermediate (AF) spectral types. Finally, the metal line index, $$m_1 = (v-b) - (b-y),$$ is sensitive to the metallicity $[M/H]$. For each index there is a corresponding intrinsic, dereddened index denoted by a naught subscript, with, e.g., $c_0, (b-y)_0,$ and $(u-b)_0$ referring to the intrinsic, dereddened equivalents of the indices $c_1, (b-y),$ and $(u-b)$, respectively. Furthermore, although reddening is expected to be negligible for the nearby sources of primary interest to us, automated classification schemes that divide a large sample of stars into groups corresponding to earlier than, around, and later than the Balmer maximum will sometimes rely on the reddening-independent indices defined by [@crawfordmandwewala1976] for A-type dwarfs: $$\begin{aligned}
[c_1] &= c_1 - 0.19 (b-y) \\
[m_1] &= m_1 + 0.34 (b-y) \\
[u-b] &= [c_1] + 2 [m_1]. \end{aligned}$$ In addition, two further indices useful for early A-type stars, $a_0$ and $r^*$, are defined as follows: $$\begin{aligned}
a_0 &= 1.36(b-y)_0 + 0.36m_0 + 0.18c_0 - 0.2448 \\
&= (b-y)_0 + 0.18[(u-b)_0 - 1.36], \\
r^* &= 0.35c_1 - 0.07(b-y)-(\beta-2.565). \end{aligned}$$ Note that $r^*$ is a reddening-free parameter, and thus insensitive to the use of reddened or unreddened photometric indices.

Extinction Correction {#subsec:reddening}
---------------------

Though the sample of nearby stars to which we apply the Strömgren methodology is assumed to be unextincted or only lightly extincted, interstellar reddening is significant for the more distant stars, including those in the open clusters used in § \[subsec:openclustertests\] to test the accuracy of the ages derived with our $uvby\beta$ methodology. In the cases where extinction is thought to be significant, corrections are performed using the `UVBYBETA`[^1] and `DEREDD`[^2] programs for IDL. These IDL routines take as input $(b-y), m_1, c_1, \beta$, and a class value (between 1 and 8) that is used to roughly identify the region of the H-R diagram in which an individual star resides. For our sample, stars belong to only four of the eight possible classes. These classes are summarized as follows: (1) B0-A0, III-V, $2.59 < \beta < 2.88$, $-0.20 < c_0 < 1.00$, (5) A0-A3, III-V, $2.87 < \beta < 2.93$, $-0.01 < (b-y)_0 < 0.06$, (6) A3-F0, III-V, $2.72 < \beta < 2.88$, $0.05 < (b-y)_0 < 0.22$, and (7) F1-G2, III-V, $2.60 < \beta < 2.72$, $0.22 < (b-y)_0 < 0.39$.
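For reference, the derived indices of § \[subsec:uvbydescription\] reduce to simple arithmetic on the four quantities $(b-y)$, $m_1$, $c_1$, and $\beta$ tabulated in photometric catalogs. The sketch below is a minimal illustration in Python; the function and variable names are ours, and the observed indices stand in for the intrinsic (naught) ones entering $a_0$ when reddening is negligible.

```python
def derived_indices(by, m1, c1, beta):
    """Derived Stromgren quantities from the catalog indices (b-y), m1, c1, beta.

    Works on scalars or NumPy arrays.  The (u-b) colour follows from the
    definitions of c1 and m1: (u-b) = c1 + 2*m1 + 2*(b-y).
    """
    c1_sq = c1 - 0.19 * by                  # [c1]
    m1_sq = m1 + 0.34 * by                  # [m1]
    ub_sq = c1_sq + 2.0 * m1_sq             # [u-b]
    ub    = c1 + 2.0 * m1 + 2.0 * by        # (u-b)
    a0    = by + 0.18 * (ub - 1.36)         # = 1.36(b-y) + 0.36 m1 + 0.18 c1 - 0.2448
    rstar = 0.35 * c1 - 0.07 * by - (beta - 2.565)   # reddening-free by construction
    return {"[c1]": c1_sq, "[m1]": m1_sq, "[u-b]": ub_sq,
            "u-b": ub, "a0": a0, "rstar": rstar}
```

When reddening is not negligible, the dereddened indices described in this subsection are used in place of the observed ones.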
The class values in this work were assigned to individual stars based on their known spectral types (provided in the XHIP catalog [@anderson2011]), and on $\beta$ values where needed. In some instances, namely for A0-A3 stars assigned to class (5) but with values of $\beta < 2.87$, the dereddening procedure was unable to proceed. For these cases, stars were instead assigned to class (1) if they were of spectral type A0-A1, or to class (6) if they were of spectral type A2-A3. Depending on the class of an individual star, the program then calculates the dereddened indices $(b-y)_0, m_0, c_0$, the color excess $E(b-y)$, $\delta m_0$, the absolute V magnitude $M_V$, the stellar radius, and the effective temperature. Notably, the $\beta$ index is unaffected by reddening, as it is the flux difference between two narrow-band filters with essentially the same central wavelength. Thus, no corrections are performed on $\beta$, and this index can be used robustly in coarse classification schemes. To transform $E(b-y)$ to $A_V$ we use the extinction measurements of [@schlegel1998], and to propagate the effects of reddening through to the various $uvby\beta$ indices we use the calibrations of [@crawfordmandwewala1976]: $$\begin{aligned}
E(m_1) &= -0.33 E(b-y) \\
E(c_1) &= 0.20 E(b-y) \\
E(u-b) &= 1.54 E(b-y).\end{aligned}$$ From these relations, given the intrinsic color index $(b-y)_0$, the dereddened indices $m_0, c_0, (u-b)_0,$ and $a_0$ can be computed. In § \[subsec:tefflogguncertainties\] we quantify the effects of extinction and extinction uncertainty on the final atmospheric parameter estimates, $T_\mathrm{eff}$ and $\log g$.

Utility of the Photometric System {#subsec:uvbyutility}
---------------------------------

From the four basic Strömgren indices – the $b-y$ color, $\beta$, $c_1$, and $m_1$ – accurate determinations of the stellar atmospheric parameters $T_\text{eff}, \log g$, and $[M/H]$ are possible for B, A, and F stars. Either empirical [e.g. @crawford1979; @lester1986; @olsen1988; @smalley1993; @smalley1995; @clem2004] or theoretical [e.g. @balona1984; @moon1985; @napiwotzki1993; @balona1994; @lejeune1999; @castelli2006; @castelli2004; @onehag2009] calibrations are necessary. Uncertainties of 0.10 dex in $\log g$ and 260 K in $T_\text{eff}$ are claimed as achievable, and we reassess these uncertainties ourselves in § \[subsec:tefflogguncertainties\].

Determination of Atmospheric Parameters $T_\mathrm{eff}, \log g$ {#sec:atmosphericparameters}
================================================================

Procedure {#subsec:atmosphericparameters}
---------

![image](f3.pdf){width="99.00000%"} ![image](f4.pdf){width="99.00000%"}

Once equipped with $uvby\beta$ colors and indices and an understanding of the effects of extinction, arriving at the fundamental parameters $T_\text{eff}$ and $\log g$ for program stars proceeds either by interpolation among theoretical color grids (generated by convolving filter sensitivity curves with model atmospheres) or by explicit formulae (often polynomials) that can be derived empirically or from the theoretical color grids. In both cases, calibration to a sample of stars with atmospheric parameters that have been independently determined through fundamental physics is required. See e.g. [@figueras1991] for further description. Numerous calibrations, both theoretical and empirical, of the $uvby\beta$ photometric system exist.
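Before interpolating in any such calibration, the observed indices must, where necessary, be dereddened with the relations of § \[subsec:reddening\]. A minimal sketch follows (Python; the names are ours, and this is an illustration rather than a transcription of the `DEREDD` routine):

```python
def deredden(by, m1, c1, ub, eby):
    """Intrinsic indices from observed ones, given a colour excess E(b-y),
    using the Crawford & Mandwewala (1976) relations quoted above.
    beta needs no correction: it is unaffected by reddening."""
    by0 = by - eby
    m0  = m1 + 0.33 * eby            # since E(m1) = -0.33 E(b-y)
    c0  = c1 - 0.20 * eby            # since E(c1) = +0.20 E(b-y)
    ub0 = ub - 1.54 * eby            # since E(u-b) = +1.54 E(b-y)
    a0  = by0 + 0.18 * (ub0 - 1.36)  # intrinsic a0 built from the dereddened indices
    return by0, m0, c0, ub0, a0
```

For the nearby program stars $E(b-y)$ is taken to be negligible, so the observed and intrinsic indices coincide.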
For this work we use the [@castelli2006; @castelli2004] color grids generated from solar-metallicity (Z=0.017, in this case) ATLAS9 model atmospheres using a microturbulent velocity parameter of $\xi = $ 0 km s$^{-1}$ and the new ODF. We do not use the alpha-enhanced color grids. The grids are readily available from F. Castelli[^3] or R. Kurucz[^4]. Prior to assigning atmospheric parameters to our program stars directly from the model grids, we first investigated the accuracy of the models using samples of BAF stars with fundamentally determined $T_\mathrm{eff}$ (through interferometric measurements of the angular diameter and estimates of the total integrated flux) and $\log g$ (from measurements of the masses and radii of double-lined eclipsing binaries). We describe these validation procedures in § \[subsec:teffvalidation\] and § \[subsec:loggvalidation\].

Atmospheric parameter determination occurs in three different observational Strömgren planes depending on the temperature regime (see Figure \[fig:uvbygrids\]); this is in order to avoid the degeneracies that are present in any single observational plane when mapped onto the physical parameter space of $\log T_\text{eff}$ and $\log g$. Building on the original work of e.g. [@stromgren1951; @stromgren1966], [@moon1985] and later [@napiwotzki1993] suggested assigning physical parameters in the following three regimes: for cool stars ($T_\text{eff} \leq$ 8500 K), $\beta$ or $(b-y)$ can be used as a temperature indicator and $c_0$ is a surface gravity indicator; for intermediate-temperature stars (8500 K $\leq T_\text{eff} \leq$ 11000 K), the temperature indicator is $a_0$ and the surface gravity indicator is $r^*$; finally, for hot stars ($T_\text{eff} \gtrsim$ 11000 K), the $c_0$ or $[u-b]$ indices can be used as temperature indicators while $\beta$ is a gravity indicator (note that the role of $\beta$ is reversed for hot stars compared to its role for cool stars). We adopt here $c_1$ vs. $\beta$ for the hottest stars, $a_0$ vs. $r^*$ for the intermediate temperatures, and $(b-y)$ vs. $c_1$ for the cooler stars. Choosing the appropriate plane for parameter determination effectively means establishing a crude temperature sequence prior to fine parameter determination; in this, the $\beta$ index is critical. Because the $\beta$ index switches from being a temperature indicator to a gravity indicator within the temperature range of interest to us (spectral types B0-F5, luminosity class IV/V stars), atmospheric parameter determination proceeds according to the temperature regime.

For the $T_\mathrm{eff}$ and $\log g$ calibrations described below, temperature information existed for all of the calibration stars, though this is not the case for our program stars. In the general case we must rely on photometric classification to assign stars to the late, intermediate, and early groups, and then proceed to determine atmospheric parameters in the relevant $uvby\beta$ planes. [@ttmoon1985] provides a scheme, implemented in the `UVBYBETA` IDL routine, for roughly identifying the region of the H-R diagram in which a star resides. However, because our primary sample of field stars is assumed to be unextincted, and because the `UVBYBETA` program relies on user-supplied class values based on unverified spectral types from the literature, we opt for a classification scheme based solely on the $uvby\beta$ photometry. [@monguio2014], hereafter M14, designed a sophisticated classification scheme based on the work of [@stromgren1966].
The M14 scheme places stars into early (B0-A0), intermediate (A0-A3), and late (later than A3) groups based solely on $\beta$, the reddened color $(b-y)$, and the reddening-free parameters $[c_1], [m_1], [u-b]$. The M14 scheme improves upon the previous method of [@figueras1991] by imposing two new conditions (see their Figure 2 for the complete scheme) intended to prevent the erroneous classification of some stars. For our sample of 3499 field stars (see § \[subsec:fieldstars\]), there are 699 stars lacking $\beta$ photometry, all but three of which cannot be classified by the M14 scheme. For such cases, we rely on supplementary spectral type information and manually assign these unclassified stars to the late group. Using the M14 scheme, the final makeup of our field star sample is 85.9% late, 8.4% intermediate, and 5.7% early. Sample and Numerical Methods ---------------------------- For all stars in this work, $uvby\beta$ photometry is acquired from the [@hauck1998] compilation (hereafter HM98), unless otherwise noted. HM98 provides the most extensive compilation of $uvby\beta$ photometric measurements, taken from the literature and complete to the end of 1996 (the photometric system has seen less frequent usage/publication in more modern times). The HM98 compilation includes 105,873 individual photometric measurements for 63,313 different stars, culled from 533 distinct sources, and are presented both as individual measurements and weighted means of the literature values. The HM98 catalog provides $(b-y), m_1, c_1,$ and $\beta$ and the associated errors in each parameter if available. From these indices $a_0$ and $r^*$ are computed according to Equations (7), (8) & (9). The ATLAS9 $uvby\beta$ grids provide a means of translating from ($b-y, m_1, c_1, \beta, a_0, r^*$) to a precise combination of ($T_\mathrm{eff}, \log g$). Interpolation within the model grids is performed on the appropriate grid: ($(b-y)$ vs. $c_1$ for the late group, $a_0$ vs. $r^*$ for the intermediate group, and $c_1$ vs. $\beta$ for the early group). The interpolation is linear and performed using the SciPy routine `griddata`. Importantly, the model $\log g$ values are first converted into linear space so that $g$ is determined from the linear interpolation procedure before being brought back into log space. The model grids used in this work are spaced by 250 K in $T_\mathrm{eff}$ and 0.5 dex in $\log g$. To improve the precision of our method of atmospheric parameter determination in the future, it would be favorable to use model color grids that have been calculated at finer resolutions, particularly in $\log g$, directly from model atmospheres. However, the grid spacings stated above are fairly standardized among extant $uvby\beta$ grids. Rotational Velocity Correction {#subsec:vsinicorrection} ------------------------------ ![Vectors showing the magnitude and direction of the rotational velocity corrections at 100 (black), 200, and 300 (light grey) km s$^{-1}$ for a grid of points in log(Teff)-log$g$ space, with PARSEC isochrones overlaid for reference. While typical A-type stars rotate at about 150 km s$^{-1}$, high-contrast imaging targets are sometimes selected for slow rotation and hence favorable inclinations, typically $v \sin i <$50 km s$^{-1}$ or within the darkest black vectors. 
For rapid rotators, a 100$\%$ increase in the inferred age due to rotational effects is not uncommon.[]{data-label="fig:rotation-vectors"}](f5.pdf){width="49.00000%"} Early-type stars are rapid rotators, with rotational velocities of $v \sin i \gtrsim 150$ km s$^{-1}$ being typical. For a rotating star, both surface gravity and effective temperature decrease from the poles to the equator, changing the mean gravity and temperature of a rapid rotator relative to a slower rotator [@sweetroy1953]. Vega, rotating with an inferred equatorial velocity of $v_\mathrm{eq} \sim 270$ km s$^{-1}$ at a nearly pole-on inclination, has measured pole-to-equator gradients in $T_\mathrm{eff}$ and $\log{g}$ that are $\sim$ 2400 K and $\sim$ 0.5 dex, respectively [@peterson2006]. The apparent luminosity change due to rotation depends on the inclination: a pole-on ($i=0^\circ$) rapid rotator appears more luminous than a nonrotating star of the same mass, while an edge-on ($i=90^\circ$) rapid rotator appears less luminous than a nonrotating star of the same mass. [@sweetroy1953] found that a $(v \sin i)^2$ correction factor could describe the changes in luminosity, gravity, and temperature. The net effect of stellar rotation on inferred age is to make a rapid rotator appear cooler, more luminous, and hence older when compared to a nonrotating star of the same mass (or more massive when compared to a nonrotating star of the same age). Optical colors can be affected since the spectral lines of early type stars are strong and broad. @kraftwrubel1965 demonstrated specifically in the Str[ö]{}mgren system that the effects are predominantly in the gravity indicators ($c_1$, which then also affects the other gravity indicator $r^*$) and less so in the temperature indicators ($b-y$, which then affects $a_0$). [@figueras1998], hereafter FB98, used Monte-Carlo simulations to investigate the effect of rapid rotation on the measured $uvby\beta$ indices, derived atmospheric parameters, and hence isochronal ages of early-type stars. Those authors concluded that stellar rotation conspires to artificially enhance isochronal ages derived through $uvby\beta$ photometric methods by 30-50% on average. To mitigate the effect of stellar rotation on the parameters $T_\text{eff}$ and $\log(g)$, FB98 presented the following corrective formulae for stars with $T_\mathrm{eff} > 11000$ K: $$\begin{aligned} \Delta T_\mathrm{eff} &= 0.0167 (v \sin i)^2 + 218, \\ \Delta \log g &= 2.10 \times 10^{-6} (v \sin i)^2 + 0.034.\end{aligned}$$ For stars with $8500 \mathrm{K} \leq T_\mathrm{eff} \leq 11000 \mathrm{K}$, the analogous formulae are: $$\begin{aligned} \Delta T_\mathrm{eff} &= 0.0187 (v \sin i)^2 + 150, \\ \Delta \log g &= 2.92 \times 10^{-6} (v \sin i)^2 + 0.048.\end{aligned}$$ In both cases, $\Delta T_\mathrm{eff}$ and $\Delta \log g$ are *added* to the $T_\mathrm{eff}$ and $\log g$ values derived from $uvby\beta$ photometry. Notably, the rotational velocity correction is dependent on whether the star belongs to the early, intermediate, or late group. Specifically, FB98 define three regimes: $T_\mathrm{eff}<$8830 K (no correction), 8830 K$<$Teff$<$9700 K (correction for intermediate A0-A3 stars), Teff$>$9700 K (correction for stars earlier than A3). [@song2001], who performed a similar isochronal age analysis of A-type stars using $uvby\beta$ photometry, extended the FB98 rotation corrections to stars earlier and later than B7 and A4, respectively. 
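To make the interpolation procedure of § \[subsec:atmosphericparameters\] and the corrective formulae above concrete, the sketch below interpolates $(T_\mathrm{eff}, \log g)$ in the plane appropriate to each group and then adds the FB98 terms. The layout of the `grid` structure and all names are our own illustrative choices, not those of the ATLAS9 distribution or of FB98.

```python
import numpy as np
from scipy.interpolate import griddata

# Observational plane adopted for each photometric group
# (dictionary keys and index names are ours): x-index vs y-index.
PLANES = {"late":         ("by", "c1"),     # (b-y) vs c1
          "intermediate": ("a0", "rstar"),  # a0 vs r*
          "early":        ("c1", "beta")}   # c1 vs beta

def atmospheric_parameters(group, x_obs, y_obs, grid):
    """Linear interpolation of (Teff, log g) for one star.

    `grid` is assumed to be a dict of 1-D arrays over the ATLAS9 nodes
    (spaced by 250 K in Teff and 0.5 dex in log g), with keys 'teff',
    'logg' and the synthetic indices named as in PLANES.  The gravity is
    interpolated as g, not log g, and converted back afterwards.
    """
    xkey, ykey = PLANES[group]
    nodes = np.column_stack([grid[xkey], grid[ykey]])
    star = np.array([[x_obs, y_obs]])
    teff = griddata(nodes, grid["teff"], star, method="linear")[0]
    g = griddata(nodes, 10.0 ** grid["logg"], star, method="linear")[0]
    return teff, np.log10(g)

def fb98_correction(teff, logg, vsini):
    """Additive FB98 rotation corrections in the two regimes quoted above;
    below 8500 K no correction is attempted."""
    if teff > 11000.0:
        return teff + 0.0167 * vsini**2 + 218.0, logg + 2.10e-6 * vsini**2 + 0.034
    if teff >= 8500.0:
        return teff + 0.0187 * vsini**2 + 150.0, logg + 2.92e-6 * vsini**2 + 0.048
    return teff, logg
```

Whether the correction is actually applied to a given star depends on its photometric group, as described next.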
In the present work, a more conservative approach is taken and the rotation correction is applied only to stars in the early or intermediate groups, as determined by the classification scheme discussed in § \[subsec:atmosphericparameters\]. This decision was partly justified by the abundance of late-type stars that fall below the ZAMS in the open cluster tests (§ \[subsec:openclustertests\]), for which the rotation correction would have a small (due to the lower rotational velocities of late-type stars) but exacerbating effect on these stars whose surface gravities are already thought to be overestimated. We include these corrections and, as illustrated in Figure \[fig:rotation-vectors\], emphasize that in their absence we would err on the side of over-estimating the age of a star, meaning conservatively overestimating rather than underestimating companion masses based on assumed ages. As an example, for a star with $T_\mathrm{eff} \approx$ 13,275 K and log$g$ $\approx$ 4.1, assumed to be rotating edge-on at 300 km s$^{-1}$, neglecting to apply the rotation correction would result in an age of $\sim$ 100 Myr. Applying the rotation correction to this star results in an age of $\sim$ 10 Myr. Of note, the FB98 corrections were derived for atmospheric parameters determined using the synthetic $uvby\beta$ color grids of [@moon1985]. It is estimated that any differences in derived atmospheric parameters resulting from the use of color grids other than those of [@moon1985] are less than the typical measurement errors in those parameters. In § \[subsec:tefflogguncertainties\] we quantify the effects of rotation and rotation correction uncertainty on the final atmospheric parameter estimation, $T_\mathrm{eff}, \log g$. Calibration and Validation Using the HM98 Catalog ================================================= In this section we assess the effective temperatures and surface gravities derived from atmospheric models and $uvby\beta$ color grids relative to fundamentally determined temperatures (§ \[subsec:teffvalidation\]) and surface gravities (§ \[subsec:loggvalidation\]). Effective Temperature {#subsec:teffvalidation} --------------------- A fundamental determination of $T_\mathrm{eff}$ is possible through an interferometric measurement of the stellar angular diameter and an estimate of the total integrated flux. We gathered 69 stars (listed in Table  \[table:teffcal\]) with fundamental $T_\mathrm{eff}$ measurements from the literature and determine photometric temperatures for these objects from interpolation of $uvby\beta$ photometry in ATLAS9 model grids. Fundamental $T_\mathrm{eff}$ values were sourced from [@boyajian2013], hereafter B13, and [@napiwotzki1993], hereafter N93. Several stars have multiple interferometric measurements of the stellar radius, and hence multiple fundamental $T_\mathrm{eff}$ determinations. For these stars, identified as those objects with multiple radius references in Table  \[table:teffcal\], the mean $T_\mathrm{eff}$ and standard deviation were taken as the fundamental measurement and standard error. Among the 16 stars with multiple fundamental $T_\mathrm{eff}$ determinations by between 2 and 5 authors, there is a scatter of typically several percent (with 0.1-4% range). 
Additional characteristics of the $T_\mathrm{eff}$ “standard” stars are summarized as follows: spectral types B0-F9, luminosity classes III-V, 2 km s$^{-1}$ $\leq v \sin i \leq$ 316 km s$^{-1}$, mean and median $v \sin i$ of 58 and 26 km s$^{-1}$, respectively, 2.6 pc $\leq d \leq$ 493 pc, and a mean and median \[Fe/H\] of -0.08 and -0.06 dex, respectively. Line-of-sight rotational velocities were acquired from the [@glebocki2005] compilation and \[Fe/H\] values were taken from SIMBAD. Variability and multiplicity were considered, and our sample is believed to be free of any possible contamination due to either of these effects. From the HM98 compilation we retrieved $uvby\beta$ photometry for these “effective temperature standards.” The effect of reddening was considered for the hotter, statistically more distant stars in the N93 sample. Comparing mean $uvby\beta$ photometry from HM98 with the dereddened photometry presented in N93 revealed that nearly all of these stars have negligible reddening ($E(b-y) \leq$ 0.001 mag). The exceptions are HD 82328, HD 97603, HD 102870, and HD 126660 with color excesses of $E(b-y)=$ 0.010, 0.003, 0.011, and 0.022 mag, respectively. Inspection of Table  \[table:teffcal\] indicates that despite the use of the reddened HM98 photometry the $T_\mathrm{eff}$ determinations for three of these four stars are still of high accuracy. For HD 97603, there is a discrepancy of $>$ 300 K between the fundamental and photometric temperatures. However, the $uvby\beta$ $T_\mathrm{eff}$ using reddened photometry for this star is actually hotter than the fundamental $T_\mathrm{eff}$. Notably, the author-to-author dispersion in multiple fundamental $T_\mathrm{eff}$ determinations for HD 97603 is also rather large. As such, the HM98 photometry was deemed suitable for all of the “effective temperature standards.” ![*Top:* Comparison of the temperatures derived from the ATLAS9 $uvby\beta$ color grids (T$_{uvby}$) and the fundamental effective temperatures ($\mathrm{T_{fund}}$) taken from B13 and N93. *Bottom:* Ratio of $uvby\beta$ temperature to fundamental temperature, as a function of $T_\mathrm{uvby}$. For the majority of stars, the $uvby\beta$ grids can predict $\mathrm{T_{eff}}$ to within $\sim 5 \%$ without any additional correction factors. []{data-label="fig:teff-cal-3"}](f6.pdf){width="45.00000%"} For the sake of completeness, different model color grids were investigated, including those of [@fitzpatrick2005], which were recently calibrated for early group stars, and those of [@onehag2009], which were calibrated from MARCS model atmospheres for stars cooler than 7000 K. We found the grids that best matched the fundamental effective temperatures were the ATLAS9 grids of solar metallicity with no alpha-enhancement, microturbulent velocity of 0 km s$^{-1}$, and using the new opacity distribution function (ODF). The ATLAS9 grids with microturbulent velocity of 2 km s$^{-1}$ were also tested, but were found to worsen both the fractional $T_\mathrm{eff}$ error and scatter, though only nominally (by a few tenths of a percent). For the early group stars, temperature determinations were attempted in both the $c_1-\beta$ and \[$u-b$\]-$\beta$ planes. The $c_1$ index was found to be a far better temperature indicator in this regime, with the \[$u-b$\] index underestimating $T_\mathrm{eff}$ relative to the fundamental values $>$10% on average. 
Temperature determinations in the $c_1-\beta$ plane, however, were only $\approx$ 1.9% cooler than the fundamental values, regardless of whether $c_1$ or the dereddened index $c_0$ was used. This is not surprising as the $c_1-\beta$ plane is not particularly susceptible to reddening. At intermediate temperatures, the $a_0-r^*$ plane is used. In this regime, the ATLAS9 grids were found to overestimate $T_\mathrm{eff}$ by $\approx$ 2.0% relative to the fundamental values. Finally, for the late group stars temperature determinations were attempted in the $(b-y)-c_1$ and $\beta-c_1$ planes. In this regime, $(b-y)$ was found to be a superior temperature indicator, improving the mean fractional error marginally and reducing the RMS scatter by more than 1%. In this group, the model grids overpredict $T_\mathrm{eff}$ by $\approx$ 2.4% on average, regardless of whether the reddened or dereddened indices are used. Figure \[fig:teff-cal-3\] shows a comparison of the temperatures derived from the ATLAS9 $uvby\beta$ color grids and the fundamental effective temperatures given in B13 and N93. For the majority of stars the color grids can predict the effective temperature to within about 5 $\%$. A slight systematic trend is noted in Figure \[fig:teff-cal-3\], such that the model color grids overpredict $T_\mathrm{eff}$ at low temperatures and underpredict $T_\mathrm{eff}$ at high temperatures. We attempt to correct for this systematic effect by applying $T_\mathrm{eff}$ offsets in three regimes according to the mean behavior of each group: late and intermediate group stars were shifted to cooler temperatures by 2.4% and 2.0%, respectively, and early group stars were shifted by 1.9% toward hotter temperatures. After offsets were applied, the remaining RMS error in temperature determinations for these “standard” stars was 3.3%, 2.5%, and 3.5% for the late, intermediate, and early groups, respectively, or 3.1% overall. Taking the uncertainties or dispersions in the fundamental $T_\mathrm{eff}$ determinations as the standard error, there is typically a 5-6 $\sigma$ discrepancy between the fundamental and photometric $T_\mathrm{eff}$ determinations. However, given the large author-to-author dispersion observed for stars with multiple fundamental $T_\mathrm{eff}$ determinations, it is likely that the formal errors on these measurements are underestimated. Notably, N93 does not publish errors for the fundamental $T_\mathrm{eff}$ values, which are literature means. However, those authors did find fractional errors in their photometric $T_\mathrm{eff}$ ranging from 2.5-4% for BA stars. In § \[subsec:openclustertests\], we opted not to apply systematic offsets, instead assigning $T_\mathrm{eff}$ uncertainties in three regimes according to the average fractional uncertainties noted in each group. In our final $T_\mathrm{eff}$ determinations for our field star sample (§ \[subsec:fieldstars\]) we attempted to correct for the slight temperature systematics and applied offsets, using the magnitude of the remaining RMS error (for all groups considered collectively) as the dominant source of uncertainty in our $T_\mathrm{eff}$ measurement (see § \[subsec:tefflogguncertainties\]). As demonstrated in Figure  \[fig:teffcal-vsini\], rotational effects on our temperature determinations for the $T_\mathrm{eff}$ standards were investigated. 
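The group-dependent offsets and residual scatter quoted above can be folded into a single step when assigning final temperatures and uncertainties; the sketch below (dictionary keys and function name are ours) illustrates one way to do so.

```python
# Multiplicative Teff corrections and residual fractional RMS per group,
# from the comparison with fundamental temperatures described above.
TEFF_SCALE    = {"late": 0.976, "intermediate": 0.980, "early": 1.019}
TEFF_FRAC_RMS = {"late": 0.033, "intermediate": 0.025, "early": 0.035}

def corrected_teff(teff_uvby, group):
    """Shift a grid-interpolated temperature onto the fundamental scale and
    attach the residual RMS of its group as the uncertainty."""
    teff = TEFF_SCALE[group] * teff_uvby
    return teff, TEFF_FRAC_RMS[group] * teff
```

For the field sample, the collective $\sim$3.1% RMS is adopted as the dominant $T_\mathrm{eff}$ uncertainty, as noted above.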
Notably, the FB98 $v\sin{i}$ corrections appear to enhance the discrepancy between our temperature determinations and the fundamental temperatures for the late and intermediate groups, while moderately improving the accuracy for the early group. For the late group this is expected, as the correction formulae were originally derived for intermediate and early group stars. Notably, however, only two stars in the calibration sample exhibit projected rotational velocities $>200$ km s$^{-1}$. We examine the utility of the $v\sin{i}$ correction further in § \[subsec:loggvalidation\] & § \[subsec:openclustertests\]. The effect of metallicity on the determination of $T_\mathrm{eff}$ from the $uvby\beta$ grids is investigated in Figure \[fig:teff-cal-2\], which shows the ratio of the grid-determined temperature to the fundamental temperature as a function of \[Fe/H\]. The sample of temperature standards spans a large range in metallicity, yet there is no indication of any systematic effect with \[Fe/H\], justifying our choice to assume solar metallicity throughout this work (see further discussion of metallicity effects in the Appendix). The effect of reddening on our temperature determinations was considered, but since the vast majority of sources with fundamental effective temperatures are nearby, no significant reddening was expected. Indeed, no systematic trend of the temperature residuals as a function of distance was noted. In summary, our finding that the ATLAS9-predicted $T_\mathrm{eff}$ values are $\sim 2 \%$ hotter than fundamental values for AF stars is consistent with the results of @bertone2004, who found shifts of 4-8% toward warmer $T_\mathrm{eff}$ from fits of ATLAS9 models to spectrophotometry relative to $T_\mathrm{eff}$ values determined from the infrared flux method (IRFM). We attempt systematic corrections with offsets of magnitude $\sim 2\%$ according to group, and the remaining RMS error between $uvby\beta$ temperatures and fundamental values is $\sim 3\%$.

![Ratio of the $uvby\beta$ temperature to fundamental temperature as a function of $v \sin i$, for the late (left), intermediate (middle), and early (right) group stars. The solid horizontal colored lines indicate the mean ratios in each case. The arrows represent both the magnitude and direction of the change in the ratio $T_{uvby}/T_\mathrm{fund}$ after applying the FB98 rotation corrections. The dashed horizontal colored lines indicate the mean ratios after application of the rotation correction. The rotation correction appears to improve temperature estimates for early group stars, but worsen estimates for the late and intermediate groups. Notably, however, the vast majority of $T_\mathrm{eff}$ standards are slowly rotating ($v\sin{i}<150$ km s$^{-1}$). Note one rapidly rotating intermediate group star extends beyond the scale of the figure, with a rotation-corrected $T_{uvby}/T_\mathrm{fund}$ ratio of $\approx 1.26$.[]{data-label="fig:teffcal-vsini"}](f7.pdf){width="48.00000%"}

![Ratio of the $uvby\beta$ temperature to fundamental temperature as a function of \[Fe/H\].
There is no indication that the grids systematically overestimate or underestimate $T_\mathrm{eff}$ for different values of \[Fe/H\].[]{data-label="fig:teff-cal-2"}](f8.pdf){width="30.00000%"} [cccccccccccc]{} 4614 & F9V & 5973 $\pm$ 8 & 3 & 5915 & 4.442 & -0.28 & 1.8 & 0.372 & 0.185 & 0.275 & 2.588\ 5015 & F8V & 5965 $\pm$ 35 & 3 & 6057 & 3.699 & 0.04 & 8.6 & 0.349 & 0.174 & 0.423 & 2.613\ 5448 & A5V & 8070 & 18 & 8350 & 3.964 & & 69.3 & 0.068 & 0.189 & 1.058 & 2.866\ 6210 & F6Vb & 6089 $\pm$ 35 & 1 & 5992 & 3.343 & -0.01 & 40.9 & 0.356 & 0.183 & 0.475 & 2.615\ 9826 & F8V & 6102 $\pm$ 75 & 2,4 & 6084 & 3.786 & 0.08 & 8.7 & 0.346 & 0.176 & 0.415 & 2.629\ 16765 & F7V & 6356 $\pm$ 46 & 1 & 6330 & 4.408 & -0.15 & 30.5 & 0.318 & 0.160 & 0.355 & 2.647\ 16895 & F7V & 6153 $\pm$ 25 & 3 & 6251 & 4.118 & 0.00 & 8.6 & 0.325 & 0.160 & 0.392 & 2.625\ 17081 & B7V & 12820 & 18 & 12979 & 3.749 & 0.24 & 23.3 & -0.057 & 0.104 & 0.605 & 2.717\ 19994 & F8.5V & 5916 $\pm$ 98 & 2 & 5971 & 3.529 & 0.17 & 7.2 & 0.361 & 0.185 & 0.422 & 2.631\ 22484 & F9IV-V & 5998 $\pm$ 39 & 3 & 5954 & 3.807 & -0.09 & 3.7 & 0.367 & 0.173 & 0.376 & 2.615\ 30652 & F6IV-V & 6570 $\pm$ 131 & 3,6 & 6482 & 4.308 & 0.00 & 15.5 & 0.298 & 0.163 & 0.415 & 2.652\ 32630 & B3V & 17580 & 18 & 16536 & 4.068 & & 98.2 & -0.085 & 0.104 & 0.318 & 2.684\ 34816 & B0.5IV & 27580 & 18 & 28045 & 4.286 & -0.06 & 29.5 & -0.119 & 0.073 & -0.061 & 2.602\ 35468 & B2III & 21230 & 18 & 21122 & 3.724 & -0.07 & 53.8 & -0.103 & 0.076 & 0.109 & 2.613\ 38899 & B9IV & 10790 & 18 & 11027 & 3.978 & -0.16 & 25.9 & -0.032 & 0.141 & 0.906 & 2.825\ 47105 & AOIV & 9240 & 18 & 9226 & 3.537 & -0.28 & 13.3 & 0.007 & 0.149 & 1.186 & 2.865\ 48737 & F5IV-V & 6478 $\pm$ 21 & 3 & 6510 & 3.784 & 0.14 & 61.8 & 0.287 & 0.169 & 0.549 & 2.669\ 48915 & A0mA1Va & 9755 $\pm$ 47 & 7,8,9,10,11 & 9971 & 4.316 & 0.36 & 15.8 & -0.005 & 0.162 & 0.980 & 2.907\ 49933 & F2Vb & 6635 $\pm$ 90 & 12 & 6714 & 4.378 & -0.39 & 9.9 & 0.270 & 0.127 & 0.460 & 2.662\ 56537 & A3Vb & 7932 $\pm$ 62 & 3 & 8725 & 4.000 & & 152 & 0.047 & 0.198 & 1.054 & 2.875\ 58946 & F0Vb & 6954 $\pm$ 216 & 3,18 & 7168 & 4.319 & -0.25 & 52.3 & 0.215 & 0.155 & 0.615 & 2.713\ 61421 & F5IV-V & 6563 $\pm$ 24 & 11,13,14,15,18 & 6651 & 3.983 & -0.02 & 4.7 & 0.272 & 0.167 & 0.532 & 2.671\ 63922 & BOIII & 29980 & 18 & 29973 & 4.252 & 0.16 & 40.7 & -0.122 & 0.043 & -0.092 & 2.590\ 69897 & F6V & 6130 $\pm$ 58 & 1 & 6339 & 4.290 & -0.26 & 4.3 & 0.315 & 0.149 & 0.384 & 2.635\ 76644 & A7IV & 7840 & 18 & 8232 & 4.428 & -0.03 & 142 & 0.104 & 0.216 & 0.856 & 2.843\ 80007 & A2IV & 9240 & 18 & 9139 & 3.240 & & 126 & 0.004 & 0.140 & 1.273 & 2.836\ 81937 & F0IVb & 6651 $\pm$ 27 & 3 & 7102 & 3.840 & 0.17 & 146 & 0.211 & 0.180 & 0.752 & 2.733\ 82328 & F5.5IV-V & 6299 $\pm$ 61 & 3,18 & 6322 & 3.873 & -0.16 & 7.1 & 0.314 & 0.153 & 0.463 & 2.646\ 90839 & F8V & 6203 $\pm$ 56 & 3 & 6145 & 4.330 & -0.11 & 8.6 & 0.341 & 0.171 & 0.333 & 2.618\ 90994 & B6V & 14010 & 18 & 14282 & 4.219 & & 84.5 & -0.066 & 0.111 & 0.466 & 2.730\ 95418 & A1IV & 9181 $\pm$ 11 & 3,18 & 9695 & 3.899 & -0.03 & 40.8 & -0.006 & 0.158 & 1.088 & 2.880\ 97603 & A5IV(n) & 8086 $\pm$ 169 & 3,6,18 & 8423 & 4.000 & -0.18 & 177 & 0.067 & 0.195 & 1.037 & 2.869\ 102647 & A3Va & 8625 $\pm$ 175 & 5,6,18 & 8775 & 4.188 & 0.07 & 118 & 0.043 & 0.211 & 0.973 & 2.899\ 102870 & F8.5IV-V & 6047 $\pm$ 7 & 3,18 & 6026 & 3.689 & 0.12 & 5.4 & 0.354 & 0.187 & 0.416 & 2.628\ 118098 & A2Van & 8097 $\pm$ 43 & 3 & 8518 & 4.163 & -0.26 & 200 & 0.065 & 0.183 & 1.006 & 2.875\ 118716 & B1III & 25740 & 18 & 23262 & 
3.886 & & 113 & -0.112 & 0.058 & 0.040 & 2.608\ 120136 & F7IV-V & 6620 $\pm$ 67 & 2 & 6293 & 3.933 & 0.24 & 14.8 & 0.318 & 0.177 & 0.439 & 2.656\ 122408 & A3V & 8420 & 18 & 8326 & 3.500 & -0.27 & 168 & 0.062 & 0.164 & 1.177 & 2.843\ 126660 & F7V & 6202 $\pm$ 35 & 3,6,18 & 6171 & 3.881 & -0.02 & 27.7 & 0.334 & 0.156 & 0.418 & 2.644\ 128167 & F4VkF2mF1 & 6687 $\pm$ 252 & 3,18 & 6860 & 4.439 & -0.32 & 9.3 & 0.254 & 0.134 & 0.480 & 2.679\ 130948 & F9IV-V & 5787 $\pm$ 57 & 1 & 5899 & 4.065 & -0.05 & 6.3 & 0.374 & 0.191 & 0.321 & 2.625\ 136202 & F8IV & 5661 $\pm$ 87 & 1 & 6062 & 3.683 & -0.04 & 4.9 & 0.348 & 0.170 & 0.427 & 2.620\ 141795 & kA2hA5mA7V & 7928 $\pm$ 88 & 3 & 8584 & 4.346 & 0.38 & 33.1 & 0.066 & 0.224 & 0.950 & 2.885\ 142860 & F6V & 6295 $\pm$ 74 & 3,6 & 6295 & 4.130 & -0.17 & 9.9 & 0.319 & 0.150 & 0.401 & 2.633\ 144470 & BlV & 25710 & 18 & 25249 & 4.352 & & 107 & -0.112 & 0.043 & -0.005 & 2.621\ 162003 & F5IV-V & 5928 $\pm$ 81 & 3 & 6469 & 3.916 & -0.03 & 11.9 & 0.294 & 0.147 & 0.497 & 2.661\ 164259 & F2V & 6454 $\pm$ 113 & 3 & 6820 & 4.121 & -0.03 & 66.4 & 0.253 & 0.153 & 0.560 & 2.690\ 168151 & F5Vb & 6221 $\pm$ 39 & 1 & 6600 & 4.203 & -0.28 & 9.7 & 0.281 & 0.143 & 0.472 & 2.653\ 169022 & B9.5III & 9420 & 18 & 9354 & 3.117 & & 196 & 0.016 & 0.102 & 1.176 & 2.778\ 172167 & AOVa & 9600 & 18 & 9507 & 3.977 & -0.56 & 22.8 & 0.003 & 0.157 & 1.088 & 2.903\ 173667 & F5.5IV-V & 6333 $\pm$ 37 & 3,18 & 6308 & 3.777 & -0.03 & 16.3 & 0.314 & 0.150 & 0.484 & 2.652\ 177724 & A0IV-Vnn & 9078 $\pm$ 86 & 3 & 9391 & 3.870 & -0.52 & 316 & 0.013 & 0.146 & 1.080 & 2.875\ 181420 & F2V & 6283 $\pm$ 106 & 16 & 6607 & 4.187 & -0.03 & 17.1 & 0.280 & 0.157 & 0.477 & 2.657\ 185395 & F3+V & 6516 $\pm$ 203 & 3,4 & 6778 & 4.296 & 0.02 & 5.8 & 0.261 & 0.157 & 0.502 & 2.688\ 187637 & F5V & 6155 $\pm$ 85 & 16 & 6192 & 4.103 & -0.09 & 5.4 & 0.333 & 0.151 & 0.380 & 2.631\ 190993 & B3V & 17400 & 18 & 16894 & 4.195 & -0.14 & 140 & -0.083 & 0.100 & 0.295 & 2.686\ 193432 & B9.5V & 9950 & 18 & 10411 & 3.928 & -0.15 & 23.4 & -0.021 & 0.134 & 1.015 & 2.852\ 193924 & B2IV & 17590 & 18 & 17469 & 3.928 & & 15.5 & -0.092 & 0.087 & 0.271 & 2.662\ 196867 & B9IV & 10960 & 18 & 10837 & 3.861 & -0.06 & 144 & -0.019 & 0.125 & 0.889 & 2.796\ 209952 & B7IV & 13850 & 18 & 13238 & 3.913 & & 215 & -0.061 & 0.105 & 0.576 & 2.728\ 210027 & F5V & 6324 $\pm$ 139 & 6 & 6496 & 4.187 & -0.13 & 8.6 & 0.294 & 0.161 & 0.446 & 2.664\ 210418 & A2Vb & 7872 $\pm$ 82 & 3 & 8596 & 3.966 & -0.38 & 136 & 0.047 & 0.161 & 1.091 & 2.886\ 213558 & A1Vb & 9050 $\pm$ 157 & 3 & 9614 & 4.175 & & 128 & 0.002 & 0.170 & 1.032 & 2.908\ 215648 & F6V & 6090 $\pm$ 22 & 3 & 6198 & 3.950 & -0.26 & 7.7 & 0.331 & 0.147 & 0.407 & 2.626\ 216956 & A4V & 8564 $\pm$ 105 & 5,18 & 8857 & 4.198 & 0.20 & 85.1 & 0.037 & 0.206 & 0.990 & 2.906\ 218396 & F0+($\lambda$ Boo) & 7163 $\pm$ 84 & 17 & 7540 & 4.435 & & 47.2 & 0.178 & 0.146 & 0.678 & 2.739\ 219623 & F8V & 6285 $\pm$ 94 & 1 & 6061 & 3.85 & 0.04 & 4.9 & 0.351 & 0.169 & 0.395 & 2.624\ 222368 & F7V & 6192 $\pm$ 26 & 3 & 6207 & 3.988 & -0.14 & 6.1 & 0.330 & 0.163 & 0.399 & 2.625\ 222603 & A7V & 7734 $\pm$ 80 & 1 & 8167 & 4.318 & & 62.8 & 0.105 & 0.203 & 0.891 & 2.826 Surface Gravity {#subsec:loggvalidation} --------------- To assess the surface gravities derived from the $uvby\beta$ grids, we compare to results on both double-lined eclipsing binary and spectroscopic samples. 
### Comparison with Double-Lined Eclipsing Binaries

[@torres2010] compiled an extensive catalog of 95 double-lined eclipsing binaries with fundamentally determined surface gravities for all 190 individual stars. Eclipsing binary systems allow for dynamical determinations of the component masses and geometrical determinations of the component radii. From the mass and radius of an individual component, the Newtonian surface gravity, $g=GM/R^2$, can be calculated. From these systems, 39 of the primary components have $uvby\beta$ photometry available for determining surface gravities using our methodology. The spectral type range for these systems is O8-F2, with luminosity classes IV and V. The mass ratio (primary/secondary) for these systems ranges from $\approx$ 1.00-1.79, and the orbital periods of the primaries range from $\approx$ 1.57-8.44 days. In the cases of low mass ratios, the primary and secondary components should have nearly identical fundamental parameters, assuming they are coeval. In the cases of high mass ratios, given that the individual components are presumably unresolved, we assume that the primary dominates the $uvby\beta$ photometry. For both cases (of low and high mass ratios), we assume that the photometry allows for accurate surface gravity determinations for the primary components, and so we only consider the primaries from the [@torres2010] sample. It is important to note that the eclipsing binary systems used for the surface gravity calibration are more distant than the stars for which we can interferometrically determine angular diameters and effective temperatures. Thus, for the surface gravity calibration it was necessary to compute the dereddened indices $(b-y)_0, m_0, c_0$ in order to obtain the highest accuracy possible for the intermediate-group stars, which rely on $a_0$ (an index using dereddened colors) as a temperature indicator. Notably, however, we found that the dereddened photometry actually worsened $\log{g}$ determinations for the early and late groups. Dereddened colors were computed using the IDL routine `UVBYBETA`. The results of the $\log g$ calibration are presented in Table \[table:loggcal\] and Figure  \[fig:logg-cal-fig1\]. As described above, for the late group stars ($T_\mathrm{eff} <$ 8500 K), $\log g$ is determined in the $(b-y)-c_1$ plane. The mean and median of the $\log g$ residuals (in the sense of grid-fundamental) are -0.001 dex and -0.038 dex, respectively, and the RMS error is 0.145 dex. As in § \[subsec:teffvalidation\], we found that the $\beta-c_1$ plane produced less accurate atmospheric parameters, relative to fundamental determinations, for late group stars. For the intermediate group stars (8500 K $\leq T_\mathrm{eff} \leq$ 11000 K), $\log g$ is determined in the $a_0-r^*$ plane. The mean and median of the $\log g$ residuals are -0.060 dex and -0.069 dex, respectively, with RMS error 0.091 dex. For the early group stars ($T_\mathrm{eff} >$ 11000 K), $\log g$ is determined in the $c_1-\beta$ plane. The mean and median of the $\log g$ residuals are -0.0215 dex and 0.024 dex, respectively, with RMS error 0.113 dex. The $[u-b]-\beta$ plane was also investigated for early group stars, but was found to produce $\log{g}$ values of lower accuracy relative to the fundamental determinations. When considered collectively, the mean and median of the $\log g$ residuals for all stars are -0.017 dex and -0.034 dex, and the RMS error is 0.127 dex.
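Because the fundamental gravities in this comparison follow directly from the dynamical masses and geometrical radii, the conversion is simply $g=GM/R^2$. A short sketch of that conversion in cgs units is given below; the input values are illustrative, not entries from the [@torres2010] catalog.

```python
import math

G     = 6.674e-8   # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33   # solar mass, g
R_SUN = 6.957e10   # solar radius, cm

def log_g(mass_msun, radius_rsun):
    """log10 surface gravity (cgs) from a dynamical mass and geometric radius."""
    g = G * mass_msun * M_SUN / (radius_rsun * R_SUN) ** 2
    return math.log10(g)

print(log_g(1.0, 1.0))   # ~4.44 (the Sun)
print(log_g(5.0, 3.0))   # ~4.18, an illustrative early-type dwarf
```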
The uncertainties in our surface gravities that arise from propagating the photometric errors through our atmospheric parameter determination routines are of order $\sim$ 0.02 dex, significantly lower than the uncertainties demonstrated by the comparison to fundamental values of $\log g$. As stated above, the main concern with using double-lined eclipsing binaries as surface gravity calibrators for our photometric technique is contamination from the unresolved secondary components. The $\log g$ residuals were examined as a function of both mass ratio and orbital period. While the amplitude of the scatter is marginally larger for low mass ratio or short period systems, in all cases our $\log g$ determinations remain within 0.2 dex of the fundamental values $\approx 85\%$ of the time. To assess any potential systematic inaccuracies of the grids themselves, the surface gravity residuals were examined as a function of $T_\mathrm{eff}$ and the grid-determined $\log g$. Figure \[fig:logg-cal-fig3\] shows the $\log g$ residuals as functions of $T_\mathrm{eff}$ and $\log g$. No considerable systematic effects as a function of either effective temperature or $\log g$ were found in the $uvby\beta$ determinations of $\log g$. The effect of rotational velocity on our $\log g$ determinations was considered. As before, $v \sin i$ data for the surface gravity calibrators were collected from [@glebocki2005]. As seen in Figure \[fig:dlogg-vsini\], the majority of the $\log g$ calibrators are relatively slowly rotating ($v \sin i \leq$ 150 km s$^{-1}$). While the $v \sin i$ correction increases the accuracy of our $\log g$ determinations for the early-group stars in most cases, the correction appears to worsen our determinations for the intermediate group, which appear systematically high to begin with. The potential systematic effect of metallicity on our $\log g$ determinations is considered in Figure \[fig:logg-cal-fig8\], showing the surface gravity residuals as a function of \[Fe/H\]. Metallicity measurements were available for very few of these stars, and were primarily taken from [@ammons2006; @anderson2012]. Nevertheless, there does not appear to be a global systematic trend in the surface gravity residuals with metallicity. There is a larger scatter in $\log g$ determinations for the more metal-rich, late-type stars; however, it is not clear that this effect is strictly due to metallicity. In summary, for the open cluster tests we assign $\log g$ uncertainties in three regimes: $\pm$ 0.145 dex for stars belonging to the late group, $\pm$ 0.091 dex for the intermediate group, and $\pm$ 0.113 dex for the early group. For our sample of nearby field stars we opt to assign a uniform systematic uncertainty of $\pm$ 0.116 dex for all stars. We do not attempt to correct for any systematic effects by applying offsets in $\log g$, as we did with $T_\mathrm{eff}$. As noted in the discussion of the $T_\mathrm{eff}$ calibration, we do apply the $v \sin i$ correction to both intermediate and early group stars, as these corrections permit us to better reproduce open cluster ages (as presented in § \[subsec:openclustertests\]). ![image](f9.pdf){width="99.00000%"} ![Surface gravity residuals, $\Delta \log g$ (in the sense of fundamental-$uvby\beta$), as a function of $uvby\beta$-determined $\log{(T_\mathrm{eff})}$ (left) and $\log{g}$ (right). Solid points represent eclipsing binary primaries from [@torres2010] and open circles are stars with spectroscopic $\log{g}$ determinations in N93.
Of the 39 eclipsing binaries, only six have residuals greater than 0.2 dex in magnitude. This implies that the $uvby\beta$ grids determine $\log g$ to within 0.2 dex of fundamental values $\sim 85\%$ of the time. Surface gravity residuals are largest for the cooler stars. Photometric surface gravity measurements are in better agreement with spectroscopic determinations than the eclipsing binary sample. There is no indication for a global systematic offset in $uvby\beta$-determined $\log g$ values as a function of either $T_\mathrm{eff}$ or $\log g$. []{data-label="fig:logg-cal-fig3"}](f10.pdf){width="45.00000%"} ![Surface gravity residuals, $\Delta \log g$ (in the sense of fundamental-$uvby\beta$), of eclipsing binary primaries as a function of $v \sin i$. Arrows indicate the locations of points after application of the [@figueras1998] $v\sin{i}$ correction, where in this case late group stars received the same correction as the intermediate group.[]{data-label="fig:dlogg-vsini"}](f11.pdf){width="45.00000%"} ![Surface gravity residuals, $\Delta \log g$ (in the sense of fundamental-$uvby\beta$), as a function of \[Fe/H\]. The metallicity values have been taken primarily from [@ammons2006], with additional values coming from [@anderson2012]. While metallicities seem to exist for very few of the surface gravity calibrators used here, there does not appear to be a systematic trend in the residuals with \[Fe/H\]. There is a larger amount of scatter for the more metal-rich late-type stars, however the scatter is confined to a relatively small range in \[Fe/H\] and it is not clear that this effect is due to metallicity effects. []{data-label="fig:logg-cal-fig8"}](f12.pdf){width="45.00000%"} [ccccccccccccc]{} EM Car & O8V & 34000 $\pm$ 2000 & 21987 & 3.855 $\pm$ 0.016 & 3.878 & 146.0 & & 0.279 & -0.042 & 0.083 & 2.617\ V1034 Sco & O9V & 33200 $\pm$ 900 & 28228 & 3.923 $\pm$ 0.008 & 3.969 & 159.0 & -1.0 & 0.190 & -0.024 & -0.068 & 2.587\ AH Cep & B0.5Vn & 29900 $\pm$ 1000 & 24867 & 4.017 $\pm$ 0.009 & 4.115 & 154.0 & & 0.290 & -0.064 & 0.003 & 2.611\ V578 Mon & B1V & 30000 $\pm$ 740 & 25122 & 4.176 $\pm$ 0.015 & 4.200 & 107.0 & & 0.206 & -0.024 & -0.003 & 2.613\ V453 Cyg & B0.4IV & 27800 $\pm$ 400 & 24496 & 3.725 $\pm$ 0.006 & 3.742 & 130.0 & & 0.212 & -0.004 & -0.004 & 2.590\ CW Cep & B0.5V & 28300 $\pm$ 1000 & 22707 & 4.050 $\pm$ 0.019 & 3.716 & 120.0 & & 0.355 & -0.077 & 0.050 & 2.601\ V539 Ara & B3V & 18100 $\pm$ 500 & 17537 & 3.924 $\pm$ 0.016 & 3.964 & 85.6 & & -0.033 & 0.089 & 0.268 & 2.665\ CV Vel & B2.5V & 18100 $\pm$ 500 & 17424 & 3.999 $\pm$ 0.008 & 3.891 & 42.8 & & -0.057 & 0.083 & 0.273 & 2.659\ AG Per & B3.4V & 18200 $\pm$ 800 & 15905 & 4.213 $\pm$ 0.020 & 4.311 & 92.6 & -0.04 & 0.048 & 0.079 & 0.346 & 2.708\ U Oph & B5V & 16440 $\pm$ 250 & 15161 & 4.076 $\pm$ 0.004 & 3.954 & 350.0 & & 0.081 & 0.050 & 0.404 & 2.695\ V760 Sco & B4V & 16900 $\pm$ 500 & 15318 & 4.176 $\pm$ 0.019 & 4.061 & & & 0.169 & 0.023 & 0.392 & 2.701\ GG Lup & B7V & 14750 $\pm$ 450 & 13735 & 4.298 $\pm$ 0.009 & 4.271 & 123.0 & & -0.049 & 0.115 & 0.514 & 2.747\ $\zeta$ Phe & B6V & 14400 $\pm$ 800 & 13348 & 4.121 $\pm$ 0.004 & 4.153 & 111.0 & & -0.039 & 0.118 & 0.559 & 2.747\ $\chi^2$ Hya & B8V & 11750 $\pm$ 190 & 11382 & 3.710 $\pm$ 0.007 & 3.738 & 131.0 & & -0.020 & 0.110 & 0.841 & 2.769\ V906 Sco & B9V & 10400 $\pm$ 500 & 10592 & 3.656 $\pm$ 0.012 & 3.719 & 81.3 & & 0.039 & 0.101 & 0.996 & 2.805\ TZ Men & A0V & 10400 $\pm$ 500 & 10679 & 4.224 $\pm$ 0.009 & 4.169 & 14.4 & & 0.000 & 0.142 & 0.918 & 2.850\ V1031 Ori & 
A6V & 7850 $\pm$ 500 & 8184 & 3.559 $\pm$ 0.007 & 3.793 & 96.0 & & 0.076 & 0.174 & 1.106 & 2.848\ $\beta$ Aur & A1m & 9350 $\pm$ 200 & 9167 & 3.930 $\pm$ 0.005 & 3.894 & 33.2 & -0.11 & 0.017 & 0.173 & 1.091 & 2.889\ V364 Lac & A4m: & 8250 $\pm$ 150 & 7901 & 3.766 $\pm$ 0.005 & 3.707 & & & 0.107 & 0.168 & 1.061 & 2.875\ V624 Her & A3m & 8150 $\pm$ 150 & 7902 & 3.832 $\pm$ 0.014 & 3.794 & 38.0 & & 0.111 & 0.230 & 1.025 & 2.870\ V1647 Sgr & A1V & 9600 $\pm$ 300 & 9142 & 4.252 $\pm$ 0.008 & 4.087 & & & 0.040 & 0.174 & 1.020 & 2.899\ VV Pyx & A1V & 9500 $\pm$ 200 & 9560 & 4.087 $\pm$ 0.008 & 4.004 & 22.1 & & 0.028 & 0.161 & 1.013 & 2.881\ KW Hya & A5m & 8000 $\pm$ 200 & 8053 & 4.078 $\pm$ 0.006 & 4.390 & 16.6 & & 0.122 & 0.232 & 0.832 & 2.827\ WW Aur & A5m & 7960 $\pm$ 420 & 8401 & 4.161 $\pm$ 0.005 & 4.286 & 35.8 & & 0.081 & 0.231 & 0.944 & 2.862\ V392 Car & A2V & 8850 $\pm$ 200 & 10263 & 4.296 $\pm$ 0.011 & 4.211 & 163.0 & & 0.097 & 0.108 & 1.019 & 2.889\ RS Cha & A8V & 8050 $\pm$ 200 & 7833 & 4.046 $\pm$ 0.022 & 4.150 & 30.0 & & 0.136 & 0.186 & 0.866 & 2.791\ MY Cyg & F0m & 7050 $\pm$ 200 & 7054 & 3.994 $\pm$ 0.019 & 3.882 & & & 0.219 & 0.226 & 0.709 & 2.756\ EI Cep & F3V & 6750 $\pm$ 100 & 6928 & 3.763 $\pm$ 0.014 & 3.904 & 16.2 & 0.27 & 0.234 & 0.199 & 0.658 & 2.712\ FS Mon & F2V & 6715 $\pm$ 100 & 6677 & 4.026 $\pm$ 0.005 & 3.992 & 40.0 & 0.07 & 0.266 & 0.148 & 0.594 & 2.688\ PV Pup & A8V & 6920 $\pm$ 300 & 7327 & 4.255 $\pm$ 0.009 & 4.386 & 66.4 & & 0.200 & 0.169 & 0.636 & 2.722\ HD 71636 & F2V & 6950 $\pm$ 140 & 6615 & 4.226 $\pm$ 0.014 & 4.104 & 13.5 & 0.15 & 0.278 & 0.157 & 0.496 &\ RZ Cha & F5V & 6450 $\pm$ 150 & 6326 & 3.905 $\pm$ 0.006 & 3.808 & & 0.02 & 0.312 & 0.155 & 0.482 &\ BW Aqr & F7V & 6350 $\pm$ 100 & 6217 & 3.979 $\pm$ 0.018 & 3.877 & & & 0.328 & 0.165 & 0.432 & 2.650\ V570 Per & F3V & 6842 $\pm$ 50 & 6371 & 4.234 $\pm$ 0.019 & 3.998 & 44.9 & 0.06 & 0.308 & 0.165 & 0.441 &\ CD Tau & F6V & 6200 $\pm$ 50 & 6325 & 4.087 $\pm$ 0.007 & 3.973 & 18.9 & 0.19 & 0.314 & 0.178 & 0.436 &\ V1143 Cyg & F5V & 6450 $\pm$ 100 & 6492 & 4.322 $\pm$ 0.015 & 4.155 & 19.8 & 0.22 & 0.294 & 0.165 & 0.451 & 2.663\ VZ Hya & F3V & 6645 $\pm$ 150 & 6199 & 4.305 $\pm$ 0.003 & 4.182 & & -0.22 & 0.333 & 0.145 & 0.370 & 2.629\ V505 Per & F5V & 6510 $\pm$ 50 & 6569 & 4.323 $\pm$ 0.016 & 4.325 & 31.4 & -0.03 & 0.287 & 0.142 & 0.435 & 2.654\ HS Hya & F4V & 6500 $\pm$ 50 & 6585 & 4.326 $\pm$ 0.005 & 4.471 & 23.3 & 0.14 & 0.287 & 0.160 & 0.397 & 2.648 \[table:loggcal\] ### Comparison with Spectroscopic Measurements The Balmer lines are a sensitive surface gravity indicator for stars hotter than $T_\mathrm{eff} \gtrsim$ 9000 K and can be used as a semi-fundamental surface gravity calibration for the early- and intermediate-group stars. The reason why surface gravities derived using this method are considered semi-fundamental and not fundamental is because the method still relies on model atmospheres for fitting the observed line profiles. Nevertheless, surface gravities determined through this method are considered of high fidelity and so we performed an additional consistency check, comparing our $uvby\beta$ values of $\log g$ to those with well-determined spectroscopic $\log g$ measurements. N93 fit theoretical profiles of hydrogen Balmer lines from [@kurucz1979] to high resolution spectrograms of the H$\beta$ and H$\gamma$ lines for a sample of 16 stars with $uvby\beta$ photometry. The sample of 16 stars was mostly drawn from the list of photometric $\beta$ standards of [@crawford1966]. 
We compared the $\log g$ values we determined through interpolation in the $uvby\beta$ color grids to the semi-fundamental spectroscopic values determined by N93. The results of this comparison are presented in Table  \[table:loggspec\]. Though N93 provide dereddened photometry for the spectroscopic sample, we found using the raw HM98 photometry produced significantly better results (yielding an RMS error that was three times lower). For the early group stars, the atmospheric parameters were determined in both the $c_0-\beta$ plane and the $[u-b]-\beta$ plane. In both cases, $\beta$ is the gravity indicator, but we found that the $\log g$ values calculated when using $c_0$ as a temperature indicator for hot stars better matched the semi-fundamental spectroscopic $\log g$ values. This result is consistent with the result from the effective temperature calibration that suggests $c_0$ better predicted the effective temperatures of hot stars than $[u-b]$. As before, $\log{g}$ for intermediate group stars is determined in the $a_0-r^*$ plane. We tested $uvby\beta$ color grids of different metallicity, alpha-enhancement, and microturbulent velocity and determined that the non-alpha-enhanced, solar metallicity grids with microturbulent velocity $v_\mathrm{turb}$ = 0 km s$^{-1}$ best reproduced the spectroscopic surface gravities for the sample of 16 early- and intermediate-group stars measured by N93. The $\log g$ residuals, in the sense of (spectroscopic – grid), as a function of the grid-calculated effective temperatures are plotted in Figure  \[fig:logg-cal-fig3\]. There is no evidence for a significant systematic offset in the residuals as a function of either the $uvby\beta$-determined $T_\mathrm{eff}$ or $\log{g}$. For the early group, the mean and median surface gravity residuals are -0.007 dex and 0.004 dex, respectively, with RMS 0.041 dex. For the intermediate group, the mean and median surface gravity residuals are -0.053 dex and -0.047 dex, respectively, with RMS 0.081 dex. Considering both early and intermediate group stars collectively, the mean and median surface gravity residuals are -0.027 dex and -0.021 dex, and the RMS 0.062 dex. One issue that may cause statistically larger errors in the $\log g$ determinations compared to the $T_\mathrm{eff}$ determinations is the linear interpolation in a low resolution logarithmic space (the $uvby\beta$ colors are calculated at steps of 0.5 dex in $\log g$). In order to mitigate this effect one requires either more finely gridded models or an interpolation scheme that takes the logarithmic gridding into account. 
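The interpolation step itself is generic two-dimensional inverse interpolation in a color-color (or color-index) plane. A minimal sketch of one way to perform it with off-the-shelf linear interpolation is shown below; the handful of grid nodes and the observed point are fabricated for illustration, and the real ATLAS9 grids are far denser in $T_\mathrm{eff}$ than in $\log g$ (0.5 dex steps), which is the limitation discussed above.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical model-grid nodes in, e.g., the c1-beta plane, each tagged
# with the (Teff, log g) at which the synthetic colors were computed.
grid_c1   = np.array([0.90, 1.00, 0.95, 1.05])
grid_beta = np.array([2.80, 2.82, 2.86, 2.88])
grid_teff = np.array([10500., 10000., 11000., 10400.])
grid_logg = np.array([3.5, 4.0, 3.5, 4.0])

nodes = np.column_stack([grid_c1, grid_beta])
obs   = np.array([[0.97, 2.84]])              # observed (c1, beta)

# Linear interpolation of the tabulated parameters at the observed colors.
teff = griddata(nodes, grid_teff, obs, method="linear")[0]
logg = griddata(nodes, grid_logg, obs, method="linear")[0]
print(teff, logg)
```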
[ccccccccccc]{} 63 & A2V & 8970 & 9047 & 3.73 & 3.912 & 0.026 & 0.181 & 1.050 & 1.425 & 2.881\ 153 & B2IV & 20930 & 20635 & 3.78 & 3.872 & -0.090 & 0.087 & 0.134 & 0.264 & 2.627\ 1641 & B3V & 16890 & 16528 & 4.07 & 4.044 & -0.085 & 0.104 & 0.319 & 0.485 & 2.683\ 2421 & AOIV & 9180 & 9226 & 3.49 & 3.537 & 0.007 & 0.149 & 1.186 & 1.487 & 2.865\ 4119 & B6V & 14570 & 14116 & 4.18 & 4.176 & -0.062 & 0.111 & 0.481 & 0.673 & 2.730\ 4554 & AOVe & 9360 & 9398 & 3.82 & 3.863 & 0.006 & 0.155 & 1.112 & 1.425 & 2.885\ 5191 & B3V & 17320 & 16797 & 4.28 & 4.292 & -0.080 & 0.106 & 0.297 & 0.470 & 2.694\ 6588 & B3IV & 17480 & 17025 & 3.82 & 3.864 & -0.065 & 0.079 & 0.292 & 0.418 & 2.661\ 7001 & AOVa & 9540 & 9508 & 4.01 & 3.977 & 0.003 & 0.157 & 1.088 & 1.403 & 2.903\ 7447 & B5III & 13520 & 13265 & 3.73 & 3.712 & -0.016 & 0.088 & 0.575 & 0.743 & 2.707\ 7906 & B9IV & 10950 & 10838 & 3.85 & 3.861 & -0.019 & 0.125 & 0.889 & 1.130 & 2.796\ 8585 & A1V & 9530 & 9615 & 4.11 & 4.175 & 0.002 & 0.170 & 1.032 & 1.373 & 2.908\ 8634 & B8V & 11330 & 11247 & 3.69 & 3.672 & -0.035 & 0.113 & 0.868 & 1.077 & 2.768\ 8781 & B9V & 9810 & 9868 & 3.54 & 3.593 & -0.011 & 0.128 & 1.129 & 1.380 & 2.838\ 8965 & B8V & 11850 & 11721 & 3.47 & 3.422 & -0.031 & 0.100 & 0.784 & 0.969 & 2.725\ 8976 & B9IVn & 11310 & 11263 & 4.23 & 4.260 & -0.035 & 0.131 & 0.831 & 1.076 & 2.833\ \[table:loggspec\] Summary of Atmospheric Parameter Uncertainties {#subsec:tefflogguncertainties} ---------------------------------------------- Precise and accurate stellar ages are the ultimate goal of this work. The accuracy of our ages is determined by both the accuracy with which we can determine atmospheric parameters and any systematic uncertainties associated with the stellar evolutionary models and our assumptions in applying them. The precision, on the other hand, is determined almost entirely by the precision with which we determine atmospheric parameters and, because there are some practical limits to how well we may ever determine $T_\mathrm{eff}$ and $\log{g}$, the location of the star in the H-R diagram (e.g. stars closer to the main sequence will always have more imprecise ages using this method). It is thus important to provide a detailed accounting of the uncertainties involved in our atmospheric parameter determinations, as the final uncertainties quoted in our ages will arise purely from the values of the $\sigma_{T_\mathrm{eff}}, \sigma_{\log{g}}$ used in our $\chi^2$ calculations. Below we consider the contribution of the systematics already discussed, as well as the contributions from errors in interpolation, photometry, metallicity, extinction, rotational velocity, multiplicity, and spectral peculiarity. [**Systematics:**]{} The dominant source of uncertainty in our atmospheric parameter determinations are the systematics quantified in § \[subsec:teffvalidation\] and § \[subsec:loggvalidation\]. All systematic effects inherent to the $uvby\beta$ method, and the particular model color grids chosen, which we will call $\sigma_\mathrm{sys}$, are embedded in the comparisons to the stars with fundamentally or semi-fundamentally determined parameters, summarized as approximately $\sim 3.1\%$ in $T_\mathrm{eff}$ and $\sim 0.116$ dex in $\log{g}$. We also found that for stars with available \[Fe/H\] measurements, the accuracy with which we can determine atmospheric parameters using $uvby\beta$ photometry does not vary systematically with metallicity, though we further address metallicity issues both below and in an Appendix. 
[**Interpolation Precision:**]{} To estimate the errors in atmospheric parameters due to the numerical precision of the interpolation procedures employed here, we generated 1000 random points in each of the three relevant $uvby\beta$ planes. For each point, we obtained ten independent $T_\mathrm{eff}, \log{g}$ determinations to test the repeatability of the interpolation routine. The scatter in independent determinations of the atmospheric parameters were found to be $<10^{-10}$ K, dex, and thus numerical errors are assumed zero. [**Photometric Errors:**]{} Considering the most basic element of our approach, there are uncertainties due to the propagation of photometric errors through our atmospheric parameter determination pipeline. As discussed in § \[subsec:fieldstars\], the photometric errors are generally small ($\sim 0.005$ mag in a given index). Translating the model grid points in the rectangular regions defined by the magnitude of the mean photometric error in a given index, and then interpolating to find the associated atmospheric parameters of the perturbed point, we take the maximum and minimum values for $T_\mathrm{eff}$ and $\log{g}$ to calculate the error due to photometric measurement error. To simplify the propagation of photometric errors for individual stars, we performed simulations with randomly generated data to ascertain the mean uncertainty in $T_\mathrm{eff}$, $\log{g}$ that results from typical errors in each of the $uvby\beta$ indices. We begin with the HM98 photometry and associated measurement errors for our sample (3499 stars within 100 pc, B0-F5, luminosity classes IV-V). Since the HM98 compilation does not provide $a_0$ or $r^*$, as these quantities are calculated from the four fundamental indices, we calculate the uncertainties in these parameters using the crude approximation that none of the $uvby\beta$ indices are correlated. Under this assumption, the uncertainties associated with $a_0$ and $r^*$ are as follows: $$\begin{aligned} \sigma_{a_0} &= \sqrt{1.36^2\sigma_{b-y}^2+0.36^2\sigma_{m_1}^2+0.18^2\sigma_{c_1}^2} \\ \sigma_{r^*} &= \sqrt{0.07^2\sigma_{b-y}^2+0.35^2\sigma_{c_1}^2+\sigma_{\beta}^2}.\end{aligned}$$ A model for the empirical probability distribution function (hereafter PDF) for the error in a given $uvby\beta$ index is created through a normalized histogram with 25 bins. From this empirical PDF, one can randomly draw values for the error in a given index. For each $uvby\beta$ plane, 1,000 random points in the appropriate range of parameter space were generated with photometric errors drawn as described above. The eight ($T_\mathrm{eff}$, $\log{g}$) values corresponding to the corners and midpoints of the “standard error rectangle” centered on the original random data point are then evaluated. The maximally discrepant ($T_\mathrm{eff}$, $\log{g}$) values are saved and the overall distributions of $\Delta T_\mathrm{eff}/T_\mathrm{eff}$ and $\Delta \log{g}$ are then analyzed to assess the mean uncertainties in the atmospheric parameters derived in a given $uvby\beta$ plane due to the propagation of typical photometric errors. For the late group, points were generated in the range of $(b-y)-c_1$ parameter space bounded by 6500 K $\leq T_\mathrm{eff} \leq$ 9000 K and $3.0 \leq \log{g} \leq 5.0$. 
In this group, typical photometric uncertainties of $\left \langle \sigma_\mathrm{b-y} \right \rangle$ = 0.003 mag and $\left \langle \sigma_\mathrm{c_1} \right \rangle$ = 0.005 mag lead to average uncertainties of 0.6 % in $T_\mathrm{eff}$ and 0.055 dex in $\log{g}$. For the intermediate group, points were generated in the range of $a_0-r^*$ parameter space bounded by 8500 K $\leq T_\mathrm{eff} \leq$ 11000 K and $3.0 \leq \log{g} \leq 5.0$. In this group, typical photometric uncertainties of $\left \langle \sigma_\mathrm{a_0} \right \rangle$ = 0.005 mag and $\left \langle \sigma_\mathrm{r^*} \right \rangle$ = 0.005 mag lead to average uncertainties of 0.8 % in $T_\mathrm{eff}$ and 0.046 dex in $\log{g}$. For the early group, points were generated in the range of $c_1-\beta$ parameter space bounded by 10000 K $\leq T_\mathrm{eff} \leq$ 30000 K and $3.0 \leq \log{g} \leq 5.0$. In this group, typical photometric uncertainties of $\left \langle \sigma_\mathrm{c_1} \right \rangle$ = 0.005 mag and $\left \langle \sigma_\mathrm{\beta} \right \rangle$ = 0.004 mag lead to average uncertainties of 1.1 % in $T_\mathrm{eff}$ and 0.078 dex in $\log{g}$. Across all three groups, the mean uncertainty due to photometric errors is $\approx 0.9\%$ in $T_\mathrm{eff}$ and $\approx 0.060$ dex in $\log{g}$. [**Metallicity Effects:**]{} For simplicity and homogeneity, our method assumes solar composition throughout. However, our sample can more accurately be represented as a Gaussian centered at -0.109 dex with $\sigma \approx$ 0.201 dex. Metallicity is a small, but non-negligible, effect and allowing \[M/H\] to change by $\pm$ 0.5 dex can lead to differences in the assumed $T_\mathrm{eff}$ of $\sim$ 1-2$\%$ for late-, intermediate-, and some early-group stars, or differences of up to $6\%$ for stars hotter than $\sim$ 17000 K (of which there are few in our sample). In $\log{g}$, shifts of $\pm$0.5 dex in \[M/H\] can lead to differences of $\sim 0.1$ dex in the assumed $\log{g}$ for late- or early-group stars, or $\sim 0.05$ dex in the narrow region occupied by intermediate-group stars. Here, we estimate the uncertainty the metallicity approximation introduces to the fundamental stellar parameters derived in this work. We begin with the actual $uvby\beta$ data for our sample, and \[Fe/H\] measurements from the XHIP catalog [@anderson2012], which exist for approximately 68% of our sample. Those authors collected photometric and spectroscopic metallicity determinations of Hipparcos stars from a large number of sources, calibrated the values to the high-resolution catalog of [@wu2011] in an attempt to homogenize the various databases, and published weighted means for each star. The calibration process is described in detail in §5 of [@anderson2012]. For each of the stars with available \[Fe/H\] in our field star sample, we derive $T_\mathrm{eff}, \log{g}$ in the appropriate $uvby\beta$ plane for the eight cases of \[M/H\]=-2.5,-2.0,-1.5,-1.0,-0.5,0.0,0.2,0.5. Then, given the measured \[Fe/H\], and making the approximation that \[M/H\]=\[Fe/H\], we perform a linear interpolation to find the most accurate values of $T_\mathrm{eff}, \log{g}$ given the color grids available. We also store the atmospheric parameters a given star would be assigned assuming \[M/H\]=0.0. Figure \[fig:metallicity-error\] shows the histograms of $T_\mathrm{eff}/T_\mathrm{eff, [M/H]=0}$ and $\log{g}-\log{g}_\mathrm{[M/H]=0}$. 
We take the standard deviations in these distributions to reflect the typical error introduced by the solar metallicity approximation. For $T_\mathrm{eff}$, there is a 0.8% uncertainty introduced by the true dispersion of metallicities in our sample, and for $\log{g}$, the uncertainty is 0.06 dex. These uncertainties in the atmospheric parameters are naturally propagated into uncertainties in the age and mass of a star through the likelihood calculations outlined in § \[subsubsec:formalism\]. ![Distributions of the true variations in $T_\mathrm{eff}$ (left) and $\log{g}$ (right) caused by our assumption of solar metallicity. The “true” $T_\mathrm{eff}$ and $\log{g}$ values are determined for the $\sim 68\%$ of our field star sample with \[Fe/H\] measurements in XHIP and from linear interpolation between the set of atmospheric parameters determined in eight ATLAS9 grids [@castelli2006; @castelli2004] that vary from -2.5 to 0.5 dex in \[M/H\].[]{data-label="fig:metallicity-error"}](f13.pdf){width="49.00000%"} [**Reddening Effects:**]{} For the program stars studied here, interstellar reddening is assumed negligible. Performing the reddening corrections (described in § \[subsec:reddening\]) on our presumably unreddened sample of stars within 100 pc, we find for the $\sim 80\%$ of stars for which dereddening proved possible, that the distribution of $A_V$ values in our sample is approximately Gaussian with a mean and standard deviation of $\mu=0.007, \sigma=0.125$ mag, respectively (see Figure \[fig:sample-specs\]). Of course, negative $A_V$ values are unphysical, but applying the reddening corrections to our $uvby\beta$ photometry and deriving the atmospheric parameters for each star in both the corrected and uncorrected cases gives us an estimate of the uncertainties in those parameters due to our assumption of negligible reddening out to 100 pc. The resulting distributions of $T_\mathrm{eff,0}/T_\mathrm{eff}$ and $\log{g}_0-\log{g}$, where the naught subscripts indicate the dereddened values, are sharply peaked at 1 and 0, respectively. The FWHM of these distributions indicate an uncertainty $<0.2\%$ in $T_\mathrm{eff}$ and $\sim 0.004$ dex in $\log{g}$. For the general case of sources at larger distances that may suffer more significant reddening, the systematic effects of under-correcting for extinction are illustrated in Figure \[fig:redvectors\]. ![The effect of interstellar reddening on atmospheric parameters derived from $uvby\beta$ photometry. The isochrones and mass tracks plotted are those of [@bressan2012]. The tail of each vector represents a given point in a specific photometric plane ($(b-y)-c_1$ for the late group stars in red, $a_0-r^*$ for the intermediate group stars in teal, and $c_1-\beta$ for the early group stars in black) and its corresponding value in \[$T_\mathrm{eff}, \log{g}$\]. The tip of the vector points to the new value of \[$T_\mathrm{eff}, \log{g}$\] after each point in photometric space has been “dereddened” assuming arbitrary values of $A_V$. The shifts in $uvby\beta$ space have been computed according to the extinction measurements of [@schlegel1998] and [@crawfordmandwewala1976], assuming $A_V \simeq 4.237 E(b-y)$. The magnitudes of $A_V$ chosen for this figure represent the extremes of values expected for our sample of nearby stars and are meant to illustrate the directionality of the effects of reddening as propagated through the $uvby\beta$ planes. 
Finally, note that for the early group (black vectors), the $A_V$ values are an order of magnitude larger and much higher than expected for our sample. Again, this is to illustrate the directionality of the reddening effect, which is particularly small for the early group, which relies on $c_1$, the Balmer discontinuity index, for temperature, and $\beta$, a color between two narrow-band filters with nearly the same central wavelength, for $\log{g}$.[]{data-label="fig:redvectors"}](f14.pdf){width="49.00000%"}

[**Uncertainties in Projected Rotational Velocities:**]{} The [@glebocki2005] compilation contains mean $v\sin{i}$ measurements, as well as individual measurements from multiple authors. Of the 3499 stars in our sub-sample of the HM98 catalog, 2547 stars have $v\sin{i}$ values based on 4893 individual $v\sin{i}$ measurements, 1849 of which have an accompanying measurement error. Of these measurements, 646 are for intermediate or early group stars, for which rotation corrections are performed in our method. The mean fractional error in $v\sin{i}$ for this subset of measurements is $\sim 13\%$. Calculating the atmospheric parameters for these stars, then performing the FB98 $v\sin{i}$ corrections using $v_\mathrm{rot}$ and $v_\mathrm{rot} \pm \sigma_{v_\mathrm{rot}}$, allows us to estimate the magnitude of the uncertainty in $T_\mathrm{eff}, \log{g}$ due to the uncertainties in $v\sin{i}$ measurements. The resulting RMS errors in $T_\mathrm{eff}, \log{g}$ are 0.7% and 0.01 dex, respectively. When $v\sin{i}$ measurements are not available, an average value based on the spectral type can be assumed, resulting in a somewhat larger error. The systematic effects of under-correcting for rotation are illustrated in Figure \[fig:rotation-vectors\].

[**Influence of Multiplicity:**]{} In a large study such as this one, a high fraction of stars are binaries or higher multiples. Slightly more than 30% of our sample stars are known to be members of multiple systems. We choose not to treat these stars differently, given the unknown multiplicity status of much of the sample, and caution our readers to use due care regarding this issue.

[**Influence of Spectral Peculiarities:**]{} Finally, early-type stars possess several peculiar subclasses (e.g., Ap, Bp, and Am stars) for which anomalous behavior has been reported in the $uvby\beta$ system with respect to their “normal-type” counterparts. Some of these peculiarities have been linked to rotation, which we do account for. We note that peculiar subclasses constitute $\sim 4\%$ of our sample and these stars could suffer unquantified errors in the determination of fundamental parameters when employing a broad methodology based on calibrations derived from mostly normal-type stars (see Tables \[table:teffcal\] & \[table:loggcal\] for a complete accounting of the spectral types used for calibrations). As these subclasses were included in the atmospheric parameter validation stage (§ \[sec:atmosphericparameters\]), and satisfactory accuracies were still obtained, we chose not to adjust our approach for these stars and estimate that the uncertainties introduced by their inclusion are negligible.

[**Final Assessment:**]{} Our final atmospheric parameter uncertainties are dominated by the systematic effects quantified in § \[subsec:teffvalidation\] and § \[subsec:loggvalidation\], with the additional effects outlined above contributing very little to the total uncertainty. The largest additional contributor comes from the photometric error.
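These terms are combined in quadrature to produce the final uncertainty budget summarized in the next paragraph. As a quick sanity check, a minimal sketch using the representative per-term values quoted in this section (the reddening term is taken at its quoted upper bound):

```python
import numpy as np

# Per-term uncertainties quoted above; Teff terms in per cent, log g terms in dex.
teff_terms = {"sys": 3.1, "num": 0.0, "phot": 0.9, "feh": 0.8, "av": 0.2, "vsini": 0.7}
logg_terms = {"sys": 0.116, "num": 0.0, "phot": 0.060, "feh": 0.06, "av": 0.004, "vsini": 0.01}

def quad(terms):
    """Quadrature sum of the individual error contributions."""
    return float(np.sqrt(np.sum(np.square(list(terms.values())))))

print(f"sigma(Teff)  = {quad(teff_terms):.1f} %")    # ~3.4 %
print(f"sigma(log g) = {quad(logg_terms):.2f} dex")  # ~0.14 dex
```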
Adding in quadrature the sources $\sigma_\mathrm{sys}, \sigma_\mathrm{num}, \sigma_\mathrm{phot}, \sigma_\mathrm{[Fe/H]}, \sigma_{v\sin{i}}$ and $\sigma_\mathrm{A_V}$ results in final error estimates of 3.4% in $T_\mathrm{eff}$ and 0.14 dex in $\log{g}$. The use of $uvby\beta$ photometry to determine fundamental stellar parameters is estimated in previous literature to lead to uncertainties of just 2.5% in $T_\text{eff}$ and 0.1 dex in $\log g$ [@asiain1997], with our assessment of the errors somewhat higher. The uncertainties that we derive in our Strömgren method work can be compared with those given by other methods. The Geneva photometry system ($U,B1,B2, V1, G$ filters), like the Strömgren system, has been used to derive $T_\mathrm{eff}, \log{g}$, and \[M/H\] values based on atmospheric grids [@kobinorth1990; @kunzli1997], with [@kunzli1997] finding 150-250 K (few percent) errors in $\log{T_\mathrm{eff}}$ and 0.1-0.15 dex errors in log g, comparable to our values. From stellar model atmosphere fitting to high dispersion spectra, errors of 1-5% in $T_\mathrm{eff}$ and 0.05-0.15 dex (typically 0.1 dex) in $\log g$ are quoted for early type stars [e.g. @nieva2011], though systematic effects in log g on the order of an additional 0.1 dex may be present. Wu et al. (2011) tabulate the dispersions in atmospheric parameters among many different studies, finding author-to-author values that differ for OBA stars by 300-5000 K in $T_\text{eff}$ (3-12%) and 0.2-0.6 dex in $\log{g}$ (cm/s$^2$), and for FGK stars 40-100 K in $T_\text{eff}$ and 0.1-0.3 dex in $\log{g}$ (cm/s$^2$). Age Estimation from Isochrones {#sec:ageestimation} ============================== Selection of Evolutionary Models -------------------------------- Once $T_\text{eff}$ and $\log g$ have been established, ages are determined through a Bayesian grid search of the fundamental parameter space encompassed by the evolutionary models. In this section we discuss the selection of evolution models, the Bayesian approach, numerical methods, and resulting age/mass uncertainties. ![Comparison of PARSEC isochrones (solid lines), Ekström isochrones in the rotating case (dashed lines), and Ekström isochrones in the non-rotating case (dotted lines). The solid black lines are evolutionary tracks for stars of intermediate-mass, from the PARSEC models. All evolutionary tracks plotted are for solar metallicity.[]{data-label="fig:isochrone-compare"}](f15.pdf){width="49.00000%"} Two sets of isochrones are considered in this work. The model families are compared in Figure \[fig:isochrone-compare\]. The PARSEC solar-metallicity isochrones of [@bressan2012], hereafter B12, take into account in a self-consistent manner the pre-main-sequence phase of evolution. The PARSEC models are the most recent iteration of the Padova evolutionary models, with significant revisions to the major input physics such as the equation of state, opacities, nuclear reaction rates and networks, and the inclusion of microscopic diffusion. The models are also based on the new reference solar composition, $Z = 0.01524$ from [@caffau2011], but can be generated for a wide range of metallicities. The B12 models cover the mass range $0.1-12 M_\odot$. PARSEC isochrones are attractive because early-type dwarfs have relatively rapid evolution with the pre-main-sequence evolution constituting a significant fraction of their lifetimes, i.e. $\tau_\text{PMS} / \tau_\text{MS}$ is larger compared to stars of later types. 
For stars with effective temperatures in the range 6500 K - 25000 K (approximately spectral types B0-F5), the B12 models predict pre-main sequence lifetimes ranging from $\sim$ 0.2-40 Myr, main-sequence lifetimes from $\sim$ 14 Myr - 2.2 Gyr, and the ratio $\tau_\mathrm{PMS}/\tau_\mathrm{MS} \sim 1.6-2.4 \%$. A star of given initial mass thus can be followed consistently through the pre-MS, MS, and post-MS evolutionary stages. As a consequence, most points in $T_\text{eff}-\log{g}$ space will have both pre-ZAMS and post-ZAMS ages as possible solutions. Figure  \[fig:evolutiont\] illustrates the evolution of atmospheric and corresponding photometric properties according to the PARSEC models. The solar-metallicity isochrones of [@ekstrom2012], hereafter E12, also use updated opacities and nuclear reaction rates, and are the first to take into account the effects of rotation on global stellar properties at intermediate masses. They are available for both non-rotating stars and stars that commence their lives on the ZAMS with a rotational velocity of 40$\%$ of their critical rotational velocity ($v_\text{rot,i}/v_\text{crit}$ = 0.4); however, the [@ekstrom2012] models do not take the pre-main sequence phase into account. The E12 models currently exist only for solar metallicity (Z=0.014 is used), but cover a wider range of masses ($0.8-120 M_\odot$). The E12 models are attractive because they explicitly account for rotation, though at a fixed percentage of breakup velocity. All outputs of stellar evolutionary models (e.g., lifetimes, evolution scenarios, nucleosynthesis) are affected by axial stellar rotation, which for massive stars enhances the MS lifetime by about 30$\%$ and may increase isochronal age estimates by about 25$\%$ [@meynet2000]. In terms of atmospheres, for A-type stars, stellar rotation increases the strength of the Balmer discontinuity relative to a non-rotating star with the same color index [@maeder1970]. In the E12 models, the convective overshoot parameter was selected to reproduce the observed main sequence width at intermediate masses, which is important for our aim of distinguishing the ages of many field stars clustered on the main sequence with relatively large uncertainties in their surface gravities. Figure  \[fig:isochrone-compare\] shows, however, that there is close agreement between the B12 and the rotating E12 models. Thus, there is not a significant difference between the two models with regard to the predicted width of the MS band. It should be noted that the $uvby\beta$ grids of [@castelli2006; @castelli2004] were generated assuming a solar metallicity value of Z=0.017. As discussed elsewhere, metallicity effects are not the dominant uncertainty in our methods, and we are thus not concerned about the very small metallicity differences between the two model isochrone sets nor about the third metallicity value assumed in the model atmospheres. In matching data to evolutionary model grids, a general issue is that nearly any given point in an H-R diagram (or equivalently in $T_\mathrm{eff}$-$\log g$ space) can be reproduced by multiple combinations of stellar age and mass. Bayesian inference can be used to determine the relative likelihoods of these combinations, incorporating prior knowledge about the distributions of the stellar parameters being estimated.
Bayesian Age Estimation
-----------------------

A simplistic method for determining the theoretical age and mass for a star on the Hertzsprung-Russell (H-R) diagram is interpolation between isochrones or evolutionary models. Some problems with this approach, as pointed out by [@gtakeda2007; @pont2004], are that interpolation between isochrones accounts for neither the nonlinear mapping of time onto the H-R diagram nor the non-uniform distribution of stellar masses observed in the Galaxy. As a consequence, straightforward interpolation between isochrones results in an age distribution for field stars that is biased towards older ages compared to the distribution predicted by stellar evolutionary theory. Bayesian inference of stellar age and mass aims to eliminate such a bias by accounting for observationally and/or theoretically motivated distribution functions for the physical parameters of interest. As an example, for a given point with error bars on the H-R diagram, a lower stellar mass should be considered more likely due to the initial mass function. Likewise, due to the longer main-sequence timescales for lower mass stars, a star that is observed to have evolved off the main sequence should have a probability distribution in mass that is skewed towards higher masses, because higher mass stars spend a more significant fraction of their entire lifetime in the post-MS stage.

### Bayes Formalism {#subsubsec:formalism}

Bayesian estimation of the physical parameters can proceed from comparison of the data with a selection of models. Bayes’ Theorem states: $$P(\mathrm{model|data}) \propto P(\mathrm{data|model}) \times P(\mathrm{model})$$ The probability of a model given a set of data is proportional to the product of the probability of the data given the model and the probability of the model itself. In the language of Bayesian statistics, this is expressed as: $$\mathrm{posterior} \propto \mathrm{likelihood} \times \mathrm{prior}.$$ Our model is the set of stellar parameters, age ($\tau$) and mass ($M_*$), and our data are the measured effective temperature, $T_\mathrm{eff}$, and surface gravity, $\log g$, for a given star. At any given combination of age and mass, the predicted $T_\mathrm{eff}$ and $\log g$ are provided by stellar evolutionary models. The $\chi^2$ statistic for an individual model can be computed as follows: $$\begin{aligned} \chi^2 (\tau, M_*) &= \sum \frac{(O-E)^2}{\sigma^2} \\ &= \frac{[(T_\mathrm{eff})_O-(T_\mathrm{eff})_E]^2}{\sigma_{T_\mathrm{eff}}^2} + \frac{[(\log{g})_O-(\log{g})_E]^2}{\sigma_{\log{g}}^2},\end{aligned}$$ where the subscripts O and E refer to the observed and expected (or model) quantities, respectively, and $\sigma$ is the measurement error in the relevant quantity. Assuming Gaussian statistics, the relative likelihood of a specific combination of ($T_\mathrm{eff}, \log g$) is: $$\begin{aligned} P(\mathrm{data|model}) &= P(T_\mathrm{eff, obs}, \log{g}_\mathrm{obs} | \tau, M_*) \\ &\propto \exp\left[-\frac{1}{2}\chi^2(\tau, M_*)\right]. \end{aligned}$$ Finally, the joint posterior probability distribution for a model with age $\tau$ and mass $M_*$ is given by: $$\begin{aligned} P(\mathrm{model|data}) &= P(\tau, M_*|T_\mathrm{eff, obs}, \log{g}_\mathrm{obs}) \\ &\propto \exp\left[-\frac{1}{2}\chi^2(\tau, M_*)\right] P(\tau)P(M_*),\end{aligned}$$ where $P(\tau)$ and $P(M_*)$ are the prior probability distributions in age and mass, respectively.
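In practice this posterior is evaluated on a discrete grid of model ages and masses. A minimal sketch of that evaluation is given below (illustrative function and variable names, not our production code); the priors enter only as multiplicative weights over the age and mass bins and are specified in the next subsection.

```python
import numpy as np

def posterior_grid(teff_obs, sig_teff, logg_obs, sig_logg,
                   model_teff, model_logg, prior_age, prior_mass):
    """Joint posterior P(tau, M | data) on an (age x mass) model grid.

    model_teff, model_logg : 2-D arrays of model predictions (age bins x mass bins)
    prior_age, prior_mass  : 1-D prior weights over the age and mass bins
    """
    chi2 = ((model_teff - teff_obs) / sig_teff) ** 2 \
         + ((model_logg - logg_obs) / sig_logg) ** 2
    like = np.exp(-0.5 * chi2)                         # Gaussian likelihood
    post = like * prior_age[:, None] * prior_mass[None, :]
    return post / post.sum()                           # normalized posterior

# The marginalized 1-D distributions follow by summing over the other axis:
#   p_age = post.sum(axis=1);  p_mass = post.sum(axis=0)
```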
The prior probabilities of age and mass are assumed to be independent such that $P(\tau,M_*)=P(\tau)P(M_*)$.

### Age and Mass Prior Probability Distribution Functions {#subsubsec:priors}

Standard practice in the Bayesian estimation of stellar ages is to assume an age prior that is uniform in linear age, e.g. [@pont2004; @jorgensen2005; @gtakeda2007; @nielsen2013]. There are two main justifications for choosing a uniform age prior: 1) it is the least restrictive choice of prior and 2) at this stage the assumption is consistent with observations that suggest a fairly constant star formation rate in the solar neighborhood over the past 2 Gyr [@cignoni2006]. Since the evolutionary models are logarithmically gridded in age, the relative probability of age bin $i$ is given by the bin width in linear age divided by the total range in linear age: $$P(\log(\tau_{i}) \leq \log(\tau) < \log(\tau_{i+1})) = \frac{\tau_{i+1}-\tau_i}{\tau_n - \tau_0},$$ where $\tau_n$ and $\tau_0$ are the largest and smallest allowed ages, respectively. This weighting scheme gives a uniform probability distribution in linear age. As noted by [@gtakeda2007], it is important to understand that assuming a flat prior in linear age corresponds to a highly non-uniform prior in the measured quantities of $\log T_\mathrm{eff}$ and $\log g$. This is due to the non-linear mapping between these measurable quantities and the physical quantities of mass and age in evolutionary models. Indeed, the ability of the Bayesian approach to implicitly account for this effect is considered one of its main strengths. As is standard in the Bayesian estimation of stellar masses, an initial mass function (IMF) is assumed for the prior probability distribution of all possible stellar masses. Several authors point out that Bayesian estimates of physical parameters are relatively insensitive to the mass prior (i.e. the precise form of the IMF assumed), especially in the case of parameter determination over a small or moderate range in mass space. For this work considering BAF stars, the power-law IMF of [@salpeter1955] is assumed for the mass prior, so that the relative probability of mass bin $i$ is given by the following expression: $$P(M_i \leq M < M_{i+1}) \propto M_i^{-2.35}.$$

### Numerical Methods {#subsubsec:numericalmethods}

As [@gtakeda2007] point out, in Bayesian age estimation interpolation should be performed only along isochrones and not between them. To avoid biasing our derived physical parameters by interpolating between isochrones, we generated a dense grid of PARSEC models. The evolutionary models were acquired with a spacing of 0.0125 dex in log(age/yr) and 0.0001 $M_\odot$ in mass. All probabilities were then computed on a 321 $\times$ 321 grid ranging from log(age/yr)=6 to 10 and from 1-10 $M_\odot$.

### Age and Mass Uncertainties

Confidence intervals in age and mass are determined from the one-dimensional marginalized posterior probability distributions for each parameter. Since the marginalized probability distributions can often be asymmetric, the region chosen for determining confidence intervals is that of the Highest Posterior Density (HPD). This method selects the smallest range in a parameter that encompasses $N \%$ of the probability. The HPD method is discussed in more detail in the appendix. Notably, uncertainties in the ages depend on where in the $\log g$ and $\log T_\text{eff}$ parameter space the star is located, and whether a pre-main sequence or a post-zero-age-main sequence age is more appropriate.
In the pre-main sequence phase, both atmospheric parameters are important in age determination. For post-ZAMS stars, however, the relative importance of the two parameters changes. When stars are just bouncing back from the ZAMS and are starting to evolve through the MS phase, $\log g$ must be known precisely (within the range of $\sim$4.3 to 4.45) in order to derive a good age estimate. The age at which this bounce occurs will be a function of mass (earlier for more massive stars). Otherwise, once late B, A, and early F stars are comfortably settled on the MS, their evolution is at roughly constant temperature (see Figure  \[fig:isochrone-compare\]) and so the gravity precision becomes far less important, with temperature precision now critical. The Methodology Tested on Open Clusters {#subsec:openclustertests} ======================================= ![Histograms of the visual extinction, $A_V$, in magnitudes for individual members of the four open clusters considered here. The extinction values are calculated using the relation $A_V=4.237E(b-y)$, with the $(b-y)$ color excesses computed as described in § \[subsec:reddening\].[]{data-label="fig:clusters-av"}](f16.pdf){width="45.00000%"} ![image](f17.pdf){width="49.00000%"} ![image](f18.pdf){width="49.00000%"} ![image](f19.pdf){width="49.00000%"} ![image](f20.pdf){width="49.00000%"} An important test of our methods is to assess the ages derived from our combination of $uvby\beta$ photometry, atmospheric parameter placement, and comparison to evolutionary models relative to the accepted ages for members of well-studied open clusters. We investigate four such clusters with rigorous age assessment in previous literature: IC 2602, $\alpha$ Persei, the Pleiades, and the Hyades. The youngest ($\lesssim 20-30$ Myr) open clusters may be age-dated kinematically, by tracing the space motions of individual members back to the time when the stars were in closest proximity to one another [@soderblom2010]. After $\lesssim$ 1 galactic rotation period, however, individual member motions are randomized to the extent of limiting the utility of the kinematic method. Beyond $\sim 20-30$ Myr, the most precise open cluster ages come from the lithium depletion boundary (LDB) technique. This method uses the lithium abundances, which diminish predictably with time, of the lowest mass cluster members to converge on precise ($\sim$10%) ages. LDB ages are available for IC 2602: $\tau = 46^{+6}_{-5}$ Myr [@dobbie2010], $\alpha$ Per: $\tau = 90 \pm 10$ Myr [@stauffer1999], and the Pleiades: $\tau = 125 \pm 8$ Myr [@stauffer1998]. The LDB technique does not work past $\sim$ 250 Myr, so the Hyades is dated based on isochrone fitting in the H-R diagram using stars with high precision distance measurements, with currently accepted age $625 \pm 50$ Myr [@perryman1998]. Process ------- Membership probabilities, $uvby\beta$ photometry, and projected rotational velocities are obtained for members of these open clusters via the WEBDA open cluster database[^5]. For the Pleiades, membership information was augmented and cross-referenced with [@stauffer2007]. Both individual $uvby\beta$ measurements and calculations of the mean and scatter from the literature measurements are available from WEBDA in each of the photometric indices. 
As the methodology requires accurate classification of the stars according to regions of the H-R diagram, we inspected the spectral types and $\beta$ indices and considered only spectral types B0-F5 and luminosity classes III-V for our open cluster tests. In contrast to the field stars studied in the next section, the open clusters studied here are distant enough for interstellar reddening to significantly affect the derived stellar parameters. The photometry is thus dereddened as described in § \[subsec:reddening\]. Figure \[fig:clusters-av\] shows the histograms of the visual extinction $A_V$ for each cluster, with the impact of extinction on the atmospheric parameter determination illustrated above in Figure \[fig:redvectors\]. In many cases, individual cluster stars have multiple measurements of $v \sin i$ in the WEBDA database and we select the measurement from whichever reference is the most inclusive of early-type members. In very few cases does a cluster member have no rotational velocity measurement present in the database; for these stars we assume the mean $v \sin i$ according to the $T_\mathrm{eff}-v\sin{i}$ relation presented in Appendix B of [@gray2005book]. Atmospheric parameters are determined for each cluster member, as described in § \[sec:atmosphericparameters\]. Adopting our knowledge from the comparison to fundamental and semi-fundamental atmospheric parameters (§ \[subsec:teffvalidation\] & § \[subsec:loggvalidation\]), a uniform 1.6$\%$ shift towards cooler $T_\mathrm{eff}$ was applied to all temperatures derived from the model color grids to account for systematic effects in those grids. The FB98 $v \sin i$ corrections were then applied to the atmospheric parameters. The $v\sin{i}$ corrections prove to be a crucial step in achieving accurate ages for the open clusters (particularly for the Pleiades). Results ------- The results of applying our procedures to open cluster samples appear in Figure \[fig:cluster-hrds\]. While the exact causes of the remaining scatter observed in the empirical isochrones for each cluster are not known, possible contributors may be systematic or astrophysical in nature, or due to incorrect membership information. Multiplicity, variability, and spectral peculiarities were among the causes investigated for this scatter, but the exclusion of objects on the basis of these criteria did not improve age estimation for any individual cluster. The number of stars falling below the theoretical ZAMS, particularly for stars with $\log{T_\mathrm{eff}} \lesssim 3.9$, is possibly systematic and may be due to an incomplete treatment of convection by the ATLAS9 models. This source of uncertainty is discussed in further detail in § \[subsec:belowzams\]. For each cluster, we publish the individual stars considered, along with relevant parameters, in Tables \[table:ic2602\], \[table:alphaper\], \[table:pleiades\], & \[table:hyades\]. In each table, the spectral types and $v\sin{i}$ measurements are from WEBDA, while the dereddened $uvby\beta$ photometry and atmospheric parameters are from this work. ### Ages from Bayesian Inference Once atmospheric parameters have been determined, age determination proceeds as outlined in § \[sec:ageestimation\]. For each individual cluster member, the $\chi^2$, likelihood, and posterior probability distribution are calculated for each point on a grid ranging from log(age/yr)=6.5 to 10, with masses restricted to $1 \leq M/M_\odot \leq 10$.
The resolution of the grid is 0.0175 dex in log(age/yr) and 0.045 $M_\odot$ in mass. The 1-D marginalized posterior PDFs for each individual cluster member are normalized and then summed to obtain an overall posterior PDF in age for the cluster as a whole. This composite posterior PDF is also normalized prior to the determination of statistical measures (mean, median, confidence intervals). Additionally, the posterior PDFs in log(age) for each member are multiplied to obtain the total probability in each log(age) bin that all members have a single age. While the summed PDF better depicts the behavior of individual stars or groups of stars, the multiplied PDF is best for assigning a single age to the cluster and evaluating any potential systematics of the isochrones themselves. As shown in Figure \[fig:cluster-hist\], the summed age PDFs for each cluster generally follow the same behavior: (1) the peaks are largely determined by the early group (B-type) stars which have well-defined ages due to their unambiguous locations in the $T_\mathrm{eff}-\log{g}$ diagram; (2) examining the age posteriors for individual stars, the intermediate group stars tend to overpredict the cluster age relative to the early group stars, and the same is true for the late group stars with respect to the intermediate group stars, resulting in a large tail at older ages for each of the summed PDFs due to the relatively numerous and broad PDFs of the later group stars. For IC 2602 and the Pleiades, the multiplied PDFs have median ages and uncertainties that are in close agreement with the literature ages. Notably, the results of the open cluster tests favor an age for the Hyades that is older ($\sim$ 800 Myr) than the accepted value, though not quite as old as the recent estimate of 950$\pm$100 Myr from [@brandt2015]. The Bayesian age analysis also favors an age for $\alpha$ Per that is younger ($\sim$ 70 Myr) than the accepted value based on lithium depletion, but older than the canonical 50 Myr from the Upper Main Sequence Turnoff [@mermilliod1981]. In an appendix, we perform the same analysis for the open clusters on p($\tau$) rather than p($\log{\tau}$), yielding similar results. ![image](openclusters-log-v1.pdf){width="90.00000%"} The results of the open cluster test are presented in Table \[table:clusters\]. It is noted that all statistical measures of the marginalized age PDFs quoted hereafter are from PDFs normalized in log(age), as opposed to converting to linear age and then normalizing. This choice was made due to the facts that 1) the isochrones are provided in uniform logarithmic age bins, and 2) the marginalized PDFs of individual stars are more symmetric (and thus better characterized by traditional statistical measures) in log(age) than in linear age. Notably, the median age is equivalent regardless of whether one chooses to analyze prob($\log \tau$) or prob($\tau$). This issue is discussed further in an appendix. In general, there is very close agreement in the Bayesian method ages between B12 and rotating E12 models. For IC 2602 and the Pleiades, our analysis yields median cluster ages (as determined from the multiplied PDFs) that are within 1-$\sigma$ of accepted values, regardless of the evolutionary models considered. The Bayesian analysis performed with the PARSEC models favors an age for $\alpha$ Persei that is $\sim$ 20% younger than the currently accepted value, and an age for the Hyades that is $\sim$ 20% older.
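As an aside, the two ways of combining member posteriors described above (summing to show the behavior of individual stars and groups, multiplying to assign a single cluster age) can be sketched as follows; this is a minimal illustration with hypothetical array names, not the actual analysis code.

```python
import numpy as np

def combine_cluster_pdfs(member_pdfs):
    """Combine 1-D age PDFs of cluster members (rows of `member_pdfs`,
    all on the same log-age grid) into summed and multiplied composites."""
    pdfs = np.asarray(member_pdfs, dtype=float)
    pdfs = pdfs / pdfs.sum(axis=1, keepdims=True)   # normalize each member

    # Summed composite: reflects the behavior of individual stars and groups.
    summed = pdfs.sum(axis=0)
    summed /= summed.sum()

    # Multiplied composite: probability, bin by bin, that all members share a
    # single age; done in log space to avoid numerical underflow.
    log_prod = np.log(pdfs + 1e-300).sum(axis=0)
    multiplied = np.exp(log_prod - log_prod.max())
    multiplied /= multiplied.sum()

    return summed, multiplied
```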
[cccccccccccc]{} IC 2602 & 46$^{+6}_{-5}$ & [@ekstrom2012] & 80 & 32-344 & 42 & 41-46 & 39\ & & [@bressan2012] & 79 & 27-284 & 46 & 44-50 & 37\ $\alpha$ Persei & 90$^{+10}_{-10}$ & [@ekstrom2012] & 234 & 83-1618 & 71 & 68-74 & 50\ & & [@bressan2012] & 226 & 74-1500 & 70 & 69-74 & 48\ Pleiades & 125$^{+8}_{-8}$ & [@ekstrom2012] & 277 & 81-899 & 128 & 126-130 & 126\ & & [@bressan2012] & 271 & 85-948 & 123 & 121-126 & 115\ Hyades & 625$^{+50}_{-50}$ & [@ekstrom2012] & 872 & 518-1940 & 827 & 812-837 & 631\ & & [@bressan2012] & 844 & 487-1804 & 764 & 747-780 & 501 ### Ages from Isochrone Fitting As a final test of the two sets of evolutionary models, we used $\chi^2$-minimization to find the best-fitting isochrone for each cluster. By fitting all members of a cluster simultaneously, we are able to assign a single age to all stars, test the accuracy of the isochrones for stellar ensembles, and test the ability of our $uvby\beta$ method to reproduce the shapes of coeval stellar populations in $T_\mathrm{eff}-\log{g}$ space. For this exercise, we did not interpolate between isochrones, choosing instead to use the default spacing for each set of models (0.1 dex and 0.0125 dex in log(age/yr) for the E12 and B12 models, respectively). For the best results, we consider only the sections of the isochrones with $\log{g}$ between 3.5 and 5.0 dex. The results of this exercise are shown in Figure \[fig:bestfit-isochrones\]. The best-fitting E12 isochrone (including rotation) is consistent with accepted ages to within 1% for the Pleiades and Hyades, $\sim 15\%$ for IC 2602, and $\sim 44\%$ for $\alpha$-Per. For the B12 models, the best-fit isochrones are consistent with accepted ages to $\sim 8\%$ for the Pleiades, $\sim 20\%$ for the Hyades and IC 2602, and $\sim 47\%$ for $\alpha$-Per. The B12 models produce systematically younger ages than the E12 models, by a fractional amount that increases with absolute age. As detailed above, the open cluster tests revealed that our method is able to distinguish between ensembles of differing ages, from tens to hundreds of Myr, at least in a statistical sense. For individual stars, large uncertainties may remain, particularly for the later types, owing almost entirely to the difficulty in determining both precise and accurate surface gravities. The open cluster tests also demonstrate the importance of a $v \sin i$ correction for early (B0-A0) and intermediate (A0-A3) group stars in determining accurate stellar parameters. While the $v \sin i$ correction was not applied to the late group (A3-F5 in this case) stars, it is likely that stars in this group experience non-negligible gravity darkening. The typically unknown inclination angle, $i$, also contributes significant uncertainties in derived stellar parameters and hence ages. ![image](f25.pdf){width="45.00000%"} ![image](f26.pdf){width="45.00000%"} ![image](f27.pdf){width="45.00000%"} ![image](f28.pdf){width="45.00000%"} ![image](f29.pdf){width="45.00000%"} ![image](f30.pdf){width="45.00000%"} ![image](f31.pdf){width="45.00000%"} ![image](f32.pdf){width="45.00000%"} The Methodology Applied to Nearby Field Stars {#subsec:fieldstars} ============================================= ![image](f33.pdf){width="90.00000%"} ![Histograms of the uncertainties (in mag) for different $uvby\beta$ indices for the sample of $\sim$ 3500 field stars discussed in § \[subsec:fieldstars\]. The solid lines in each plot indicate the position of the mean uncertainty in that parameter. 
Uncertainties in $a_0$ and $r^*$ are calculated according to Eqns. (13) & (14).[]{data-label="fig:hm98err"}](f34.pdf){width="45.00000%"} ![image](f35.pdf){width="99.00000%"} As an application of our developed, calibrated, validated, and tested methodology, we consider the complete HM98 photometric catalog of 63,313 stars. We are interested only in nearby stars that are potential targets for high contrast imaging campaigns, and for which interstellar extinction is negligible. We thus perform a distance cut at 100 pc, using distances from the XHIP catalog [@anderson2012]. We perform an additional cut in spectral type (using information from XHIP), considering only B0-F5 stars belonging to luminosity classes IV,V, because this is the range for which our method has been shown to work with high fidelity and additionally these are the primary stars of interest to near-term high-contrast imaging surveys. In total, we are left with 3499 stars. Figure \[fig:sample-specs\] shows the distribution of our field star sample in spectral type, distance, $A_V$, \[Fe/H\], and $v\sin{i}$. The distributions of photometric errors in given $uvby\beta$ indices are shown in Figure  \[fig:hm98err\], and the mean errors in each index are summarized as follows: $\left \langle \sigma_{b-y} \right \rangle, \left \langle \sigma_{m_1} \right \rangle, \left \langle \sigma_{c_1} \right \rangle, \left \langle \sigma_{\beta} \right \rangle, \left \langle \sigma_{a_0} \right \rangle, \left \langle \sigma_{r^*} \right \rangle = 0.003, 0.004, 0.005, 0.004, 0.005, 0.005$ mag. Projected rotational velocities for the sample of nearby field stars are sourced from the [@glebocki2005] compilation, which contains $v\sin{i}$ measurements for 2874 of the stars, or $\sim 82\%$ of the sample. For an additional 8 stars $v\sin{i}$ measurements are collected from [@zorec2012], and for another 5 stars $v\sin{i}$ values come from [@schroeder2009]. For the remaining stars without $v\sin{i}$ measurements, a projected rotational velocity is assumed according to the mean $v\sin{i}-T_\mathrm{eff}$ relation from Appendix B of [@gray2005book]. Atmospheric parameters are corrected for rotational velocity effects as outlined in § \[subsec:vsinicorrection\]. Atmospheric parameter determination was not possible for six stars, due to discrepant positions in the relevant $uvby\beta$ planes: HIP 8016 (a B9V Algol-type eclipsing binary), HIP 12887 (a poorly studied F3V star), HIP 36850 (a well-studied A1V+A2Vm double star system), HIP 85792 (a well-studied Be star, spectral type B2Vne), HIP 97962 (a moderately studied B9V star), and HIP 109745 (an A0III star, classified in XHIP as an A1IV star). Consequently, ages and masses were not computed for these stars. An H-R diagram of the entire sample is shown in Figure  \[fig:hrd\], with the evolutionary models of [@bressan2012] overlaid. Equipped with atmospheric parameters for the remaining 3493 stars, and assuming uniform uncertainties of 3.4% and 0.14 dex in $T_\mathrm{eff}$ and $\log{g}$, respectively, ages and masses were computed via the process outlined in § \[sec:ageestimation\]. Posterior probabilities were calculated on a uniform 321$\times$321 grid of the [@bressan2012] models, gridded from 1 Myr-10 Gyr in steps of 0.0125 dex in log(age), and from 1-10$M_\odot$ in steps of 0.028$M_\odot$. As the [@bressan2012] models exist for high resolution timesteps, no interpolation between isochrones was required. 
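A minimal sketch of this grid-based calculation and the subsequent marginalization, assuming Gaussian measurement errors on $\log{T_\mathrm{eff}}$ and $\log{g}$ and model quantities pre-evaluated on the age-mass grid (the function name, arguments, and array shapes are illustrative assumptions, not the actual pipeline):

```python
import numpy as np

def age_mass_posterior(logT_obs, sig_logT, logg_obs, sig_logg,
                       model_logT, model_logg, prior):
    """2D posterior on an (age, mass) grid from Gaussian errors on
    log Teff and log g; `model_logT`, `model_logg`, and `prior` are
    arrays of shape (n_age, n_mass) evaluated on the same grid."""
    chi2 = ((model_logT - logT_obs) / sig_logT) ** 2 \
         + ((model_logg - logg_obs) / sig_logg) ** 2
    posterior = np.exp(-0.5 * chi2) * prior
    posterior /= posterior.sum()

    p_age = posterior.sum(axis=1)    # marginalize over mass
    p_mass = posterior.sum(axis=0)   # marginalize over age
    return posterior, p_age, p_mass
```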
From the 2D joint posterior PDF, we obtain the marginalized 1D PDFs in age and mass, from which we compute the mean (expected value), median, mode (most probable value), as well as 68% and 95% confidence intervals. Interpolated ages and masses are also included, and these values may be preferred, particularly for objects with an interpolated age $\lesssim 10^8$ yr and a $\log{g}$ placing them near the ZAMS (see § \[subsec:belowzams\] for more detail). The table of ages and masses for all 3493 stars, including our newly derived atmospheric parameters, is available as an electronic table, and a portion (sorted in ascending age) is presented here in Table \[table:fieldstars\]. In rare instances (for $\sim 5\%$ of the sample), true 68% and 95% confidence intervals were not obtained due to numerical precision, the star’s location near the edge of the computational grid, or some combination of the two effects. In these cases the actual confidence interval quoted is noted as a flag in the electronic table. [ccccccccccccccc]{} 65474 & 24718 & 4.00 & 6 & 7 & 9 & 5-12 & 2-14 & 13 & 9.59 & 9.61 & 9.62 & 9.4-9.9 & 9.2-10.0 & 10.26\ 61585 & 21790 & 4.32 & 6 & 7 & 11 & 4-18 & 1-21 & 1 & 7.53 & 7.52 & 7.48 & 7.3-7.7 & 7.1-8.0 & 7.34\ 61199 & 16792 & 4.18 & 18 & 22 & 33 & 13-53 & 3-60 & 36 & 4.84 & 4.83 & 4.78 & 4.6-5.0 & 4.5-5.2 & 4.95\ 60718 & 16605 & 4.35 & 19 & 23 & 35 & 13-55 & 3-61 & 1 & 4.75 & 4.74 & 4.70 & 4.6-4.9 & 4.4-5.1 & 4.58\ 60000 & 15567 & 4.12 & 21 & 26 & 40 & 14-65 & 4-77 & 60 & 4.27 & 4.26 & 4.22 & 4.1-4.4 & 4.0-4.6 & 4.48\ 100751 & 17711 & 3.94 & 23 & 29 & 43 & 20-50 & 5-52 & 48 & 5.41 & 5.42 & 5.35 & 5.1-5.6 & 5.0-5.9 & 5.91\ 23767 & 16924 & 4.10 & 23 & 30 & 44 & 18-56 & 4-61 & 46 & 4.96 & 4.95 & 4.92 & 4.7-5.2 & 4.5-5.4 & 5.14\ 92855 & 19192 & 4.26 & 24 & 29 & 34 & 23-38 & 8-40 & 8 & 6.39 & 6.37 & 6.25 & 6.0-6.6 & 5.8-7.1 & 5.95\ 79992 & 14947 & 3.99 & 26 & 31 & 48 & 18-78 & 4-89 & 88 & 4.01 & 4.00 & 3.97 & 3.8-4.1 & 3.7-4.3 & 4.45 As with the open clusters, we can sum the individual, normalized PDFs in age to produce composite PDFs for various subsets of our sample. Figure \[fig:compositeage\] depicts the composite age PDF for our entire sample, as well as age PDFs for the subsets of B0-B9, A0-A4, A5-A9, and F0-F5 stars. From these PDFs we can ascertain the statistical properties of these subsets of solar neighborhood stars, which are presented in Table \[table:compositeagepdfs\]. ![Normalized composite age PDFs for our sample of field B0-F5 stars within 100 pc. The normalized composite PDFs are created by summing the normalized, 1D marginalized age PDFs of individual stars in a given spectral type grouping. The black curve represents the composite PDF for all spectral types, while the colored curves represent the composite PDFs for the spectral type groups B0-B9, A0-A4, A5-A9, F0-F5 (see legend). Circles represent the expectation values of the composite PDFs, while squares represent the medians. The solid and dashed lines represent the 68% and 95% confidence intervals, respectively, of the composite PDFs.
The statistical measures for these composite PDFs are also presented in Table \[table:compositeagepdfs\].[]{data-label="fig:compositeage"}](f36.pdf){width="45.00000%"} [cccccc]{} B0-B9 & 93 & 122 & 147 & 56-316 & 8-410\ A0-A4 & 296 & 365 & 392 & 200-794 & 39-1090\ A5-A9 & 572 & 750 & 854 & 434-1372 & 82-1884\ F0-F5 & 1554 & 1884 & 2024 & 1000-4217 & 307-6879\ \[table:compositeagepdfs\] Empirical Mass-Age Relation {#subsec:massagerelation} --------------------------- From our newly derived set of ages and masses of solar-neighborhood B0-F5 stars, we can determine an empirical mass-age relation. Using the mean ages and masses for all stars in our sample, we performed a linear least squares fit using the NumPy polyfit routine, yielding the following relation, valid for stars $1.04<M/M_\odot<9.6$: $$\log(\mathrm{age/yr}) = 9.532 - 2.929 \log\left ( \frac{M}{M_\odot}\right ).$$ The RMS error between the data and this relation is a fairly constant 0.225 dex as a function of stellar mass. ![image](f37.pdf){width="49.00000%"} ![image](f38.pdf){width="49.00000%"} Empirical Spectral-Type-Age/Mass Relations {#subsec:sptagerelation} ------------------------------------------ We can also derive empirical spectral-type-age and spectral-type-mass relations for the solar neighborhood, using the mean ages and masses derived from our 1D marginalized posterior PDFs, and spectral type information from XHIP. These relations are plotted in Figure \[fig:sptrelations\], and summarized in Table \[table:sptagerelation\]. [cccccc]{} B0 & 19 & & 4.75 & & 1\ B1 & 6 & & 9.59 & & 1\ B2 & 15 & 13 & 6.96 & 0.81 & 2\ B3 & 41 & 16 & 5.22 & 0.50 & 3\ B4 & 26 & 12 & 4.94 & 0.59 & 4\ B5 & 44 & 16 & 3.94 & 0.49 & 5\ B6 & 84 & 51 & 3.69 & 0.23 & 4\ B7 & 140 & 209 & 3.23 & 0.60 & 13\ B8 & 99 & 43 & 3.28 & 0.38 & 18\ B9 & 154 & 86 & 2.88 & 0.88 & 67\ A0 & 285 & 437 & 2.47 & 0.40 & 120\ A1 & 313 & 217 & 2.23 & 0.29 & 132\ A2 & 373 & 320 & 2.11 & 0.30 & 144\ A3 & 462 & 412 & 2.07 & 0.96 & 100\ A4 & 540 & 333 & 1.84 & 0.19 & 37\ A5 & 514 & 350 & 1.86 & 0.81 & 81\ A6 & 628 & 265 & 1.85 & 0.49 & 46\ A7 & 574 & 262 & 1.78 & 0.30 & 79\ A8 & 642 & 272 & 1.64 & 0.11 & 62\ A9 & 800 & 339 & 1.62 & 0.21 & 102\ F0 & 994 & 544 & 1.52 & 0.19 & 324\ F1 & 948 & 352 & 1.51 & 0.13 & 68\ F2 & 1280 & 526 & 1.42 & 0.19 & 441\ F3 & 1687 & 633 & 1.34 & 0.23 & 605\ F4 & 1856 & 600 & 1.30 & 0.12 & 129\ F5 & 2326 & 697 & 1.27 & 0.18 & 905\ \[table:sptagerelation\] Discussion ========== The precision of the age-dating method described here relies on the use of Strömgren $uvby\beta$ photometry to finely distinguish stellar atmosphere parameters and compare them to isochrones from stellar evolution models. For ages $\leq$ 10 Myr and $\gtrsim$ 100 Myr, in particular, there is rapid evolution of $\log{T_\text{eff}}$ and $\log{g}$ for intermediate-mass stars (see Figure \[fig:evolutiont\]). This enables greater accuracy in age determination through isochrone placement for stars in this mass and age range. Fundamentally, our results rely on the accuracy of both the stellar evolution models and the stellar atmosphere models that we have adopted. Accuracy is further set by the precision of the photometry, the derived atmospheric parameters, the calibration of the isochrones, and the ability to determine whether an individual star is contracting onto the main sequence or expanding off of it.
By using isochrones that include both pre-MS and post-MS evolution in a self-consistent manner [@bressan2012], we can determine pre-ZAMS in addition to post-ZAMS ages for every data point in ($T_\mathrm{eff}$, $\log{g}$). Above, we have described our methodology in detail, including corrections for reddening and rotation, and we have presented quality control tests that demonstrate the precision and accuracy of our ages. In this section we describe several aspects of our analysis of specific interest, including the context of previous estimates of stellar ages for early type stars (§ \[sec:context\]), how to treat stars with locations apparently below the ZAMS (§ \[subsec:belowzams\]), and discussion of notable individual objects (§ \[subsec:specialstars\]). We will in the future apply our methods to new spectrophotometry. Methods Previously Employed in Age Determination for Early Type Stars {#sec:context} --------------------------------------------------------------------- In this section we place our work on nearby open cluster stars and approximately 3500 nearby field stars in the context of previous age estimation methods for BAF stars. [@song2001] utilized a method quite similar to ours, employing $uvby\beta$ data from the catalogs of [@hauck1980; @olsen1983; @olsen1984], the color grids of [@moon1985] including a temperature-dependent gravity modification suggested by [@napiwotzki1993], and isochrones from [@schaller1992], to determine the ages of 26 Vega-like stars. For A-type stars, [@vican2012] determined ages for *Herschel* DEBRIS survey stars by means of isochrone placement in log($T_\text{eff}$)-log($g$) space using [@li2008] and [@pinsonneault2004] isochrones, and atmospheric parameters from the literature. [@rieke2005] published age estimates for 266 B- & A-type main sequence stars using cluster/moving group membership, isochrone placement in the H-R diagram, and literature ages (mostly coming from earlier application of $uvby\beta$ photometric methods). Among later type F dwarfs, previous age estimates come primarily from the Geneva-Copenhagen Survey [@casagrande2011], but their reliability is limited by the substantially different values published in various iterations of the catalog [@nordstrom2004; @holmberg2007; @holmberg2009; @casagrande2011] and the inherent difficulty of isochrone dating these later type dwarfs. More recently, [@nielsen2013] applied a Bayesian inference approach to the age determination of 70 B- & A-type field stars via $M_V$ vs $B-V$ color-magnitude diagram isochrone placement, assuming a constant star formation rate in the solar neighborhood and a Salpeter IMF. [@derosa2014] estimated the ages of 316 A-type stars through placement in a $M_K$ vs $V-K$ color-magnitude diagram. Both of these broad-band photometric studies used the theoretical isochrones of [@siess2000]. Considering the above sources of ages, the standard deviation among them suggests scatter among authors ranging from only 15% for some stars up to 145%, with a typical value of 40%. The full range (as opposed to the dispersion) of published ages is 3-300%, with a peak around the 80% level. The value of the age estimates presented here resides in the large sample of early type stars and the uniform methodology applied to them. Stars Below the Main Sequence {#subsec:belowzams} ----------------------------- In Figure \[fig:hrd\] it may be noted that many stars, particularly those with $\log \mathrm{T_{eff}} \leq 3.9$, are located well below the model isochrones.
Using rotation-corrected atmospheric parameters, $\sim$ 540 stars, or $\sim 15\%$ of the sample, fall below the theoretical ZAMS. Prior studies also faced a similarly large fraction of stars falling below the main sequence. [@song2001] arbitrarily assigned an age of 50 Myr to any star lying below the 100 Myr isochrone used in that work. [@tetzlaff2011] arbitrarily shifted stars towards the ZAMS and treated them as ZAMS stars. Several possibilities might be invoked to explain the large population of stars below the $\log{g}-\log{T_\mathrm{eff}}$ isochrones: (1) failure of evolutionary models to predict the true width of the MS, (2) spread of metallicities, with the metal-poor MS residing beneath the solar-metallicity MS, (3) overaggressive correction for rotational velocity effects, or (4) systematics involved in surface gravity or luminosity determinations. Of these explanations, we consider (4) the most likely, with (3) also contributing somewhat. @valenti2005 found a 0.4 dex spread in $\log{g}$ among their main sequence FGK stars along with a 0.1 dex shift downward relative to the expected zero metallicity main sequence. The Bayesian age estimates for stars below the MS are likely to be unrealistically old, so we compared the ages for these stars with interpolated ages. Using the field star atmospheric parameters and [@bressan2012] models, we performed a 2D linear interpolation with the SciPy routine `griddata`. Stars below the main sequence could be easily identified by selecting objects with $\log{\mathrm{(age/yr)}}_\mathrm{Bayes} - \log{\mathrm{(age/yr)}}_\mathrm{interp} > 1.0$. Notably, for these stars below the MS, the linear interpolation produces more realistic ages than the Bayesian method. A comparison of the Bayesian and interpolated ages for all stars is presented in Figure \[fig:bayesinterp\]. Of note, there is closer agreement between the Bayesian and interpolation methods with regard to estimating masses. Figure \[fig:bayesinterp\] further serves to illustrate the difference between the Bayesian ages and the interpolated ages, which scatters over an order of magnitude about a 1:1 relationship. A number of stars that fall below the MS and have independently constrained ages are examined in detail in § \[subsec:specialstars\]. These stars have interpolated ages that are more in line with prior studies, and in light of this, we publish the interpolated ages in addition to the Bayesian ages in the final electronic table. ![Comparison of ages for BAF field stars derived through 2D linear interpolation and Bayesian inference. Grey points represent those stars with $\Delta \log{\mathrm{age/yr}} > 1$ (in the sense of Bayesian minus interpolated), which coincide with the same stars that reside below the MS.[]{data-label="fig:bayesinterp"}](f39.pdf){width="40.00000%"} Stars of Special Interest {#subsec:specialstars} ------------------------- In this section we discuss stars of particular interest given that they have either spatially resolved debris disks, detected possibly planetary mass companions, or both. As a final test of the [@bressan2012] evolutionary models and our Bayesian age and mass estimation method, we performed our analysis on these stars, including the Sun. ![image](f40.pdf){width="45.00000%"} ![image](f41.pdf){width="45.00000%"} ### Sun The atmospheric parameters of our Sun are known with a precision that is orders of magnitude higher than what is obtainable for nearby field stars.
One would thus expect the assumed priors to have a negligible influence on the Bayesian age and mass estimates. The effective temperature of the Sun is calculated to be $T_\mathrm{eff} = 5771.8 \pm 0.7$ K from the total solar irradiance [@kopp2011], the solar radius [@haberreiter2008], the IAU 2009 definition of the AU, and the CODATA 2010 value of the Stefan-Boltzmann constant. The solar surface gravity is calculated to be $\log{g} = 4.43812 \pm 0.00013$ dex from the IAU 2009 value of $GM_\odot$ and the solar radius [@haberreiter2008]. Using these values, our Bayesian analysis yields a median age of 5.209$\pm$0.015 Gyr. The Bayesian estimation also yields a mass estimate of 0.9691$\pm$0.0003 $M_\odot$. Performing a 2D linear interpolation yields a slightly older age of 5.216 Gyr and slightly lower mass of 0.9690 $M_\odot$. As expected, the precise solar values lead to an elliptical joint posterior PDF in age and mass, and symmetric 1D marginalized PDFs. The difference between the Bayesian age estimate and interpolated age is negligible in this regime of extremely small uncertainties. This test also demonstrates that the [@bressan2012] evolutionary models may introduce a systematic overestimation of ages and underestimation of masses towards cooler temperatures, though because the Sun is substantially different from our sample stars we do not extrapolate this conclusion to our sample. ### HR 8799 HR 8799 is located near the ZAMS and is metal-poor with \[Fe/H\]=$-0.47 \pm 0.10$ dex [@gray1999]. However, because HR 8799 is a $\lambda$ Boo peculiar-type star, its photospheric metallicity may not reflect the global stellar metal abundance. The age of HR 8799 is believed to be $30^{+20}_{-10}$ Myr based on its proposed membership to the Columba association [@zuckerman2011]. Figure \[fig:hrd\] shows that HR 8799 lies well below the theoretical ZAMS. This location is well-documented from other spectroscopic and photometric analyses of the star, and is likely due to a combination of its genuine youth and subsolar metallicity. Consistent with the discussion in § \[subsec:belowzams\] and as illustrated in Figure \[fig:bayesinterp\], our Bayesian age analysis leads to an unrealistically old age for the star, with a median age of 956 Myr and a 68% confidence interval of 708-1407 Myr. The Bayesian approach also seems to overestimate the mass, with a median mass of $1.59 M_\odot$ and 68% confidence interval of $1.49-1.68 M_\odot$. Notably, however, 2D linear interpolation leads to more reasonable age estimates: 26 Myr assuming our newly derived atmospheric parameters ($T_\mathrm{eff}$=7540 K, $\log{g}$ = 4.43), or 25 Myr using $T_\mathrm{eff}$=7430 K and $\log{g}$ = 4.35 from [@gray1999]. ### $\beta$ Pic [@zuckerman2001] assigned an age of 12 Myr to $\beta$ Pic based on its proposed membership to the moving group of the same name. Isochronal age estimates for the star have ranged from the ZAMS age to 300 Myr [@barrado1999]. [@nielsen2013] performed a Bayesian analysis concluding a median age of 109 Myr with a 68% confidence interval of 82-134 Myr. Although barely below the ZAMS, $\beta$ Pic in our own Bayesian analysis has a much older median age of 524 Myr with a 68% confidence interval of 349-917 Myr. Prior authors also have noted that $\beta$ Pic falls below the ZAMS on a color-magnitude diagram. As was the case for HR 8799, we conclude that our erroneous age for $\beta$ Pic is due to the dominance of the prior assumptions in exactly such a scenario.
However, the interpolated age using our atmospheric parameters of $T_\mathrm{eff}$=8300 K, $\log{g}$=4.389, is 20 Myr. Using the [@gray2006] values of $T_\mathrm{eff}$=8052 K (within $1\sigma$ of our determination), $\log{g}$=4.15 ($>1.5\sigma$ away from our surface gravity), we obtain an interpolated age of 308 Myr. ### $\kappa$ And $\kappa$ Andromedae is another proposed member of the Columba association [@zuckerman2011]. Using the nominal 30 Myr age, [@carson2013] suggested that a companion discovered via direct imaging is of planetary mass ($12-13 M_\mathrm{Jup}$). [@hinkley2013] refuted this claim, concluding an age of $220\pm100$ Myr from multiple isochronal analyses in §3.2 of that work. This older age estimate leads to a model-dependent companion mass of $50^{+16}_{-13} M_\mathrm{Jup}$. Our Bayesian analysis allows us to nearly rule out a 30 Myr age with a 95% confidence interval of 29-237 Myr. The mean, median, mode, and 68% confidence interval of the 1D marginalized posterior PDF in age for $\kappa$ And are 118, 150, 191, and 106-224 Myr, respectively. Notably, $\kappa$ And has a projected rotational velocity of $v\sin{i} \sim 160$ km s$^{-1}$ [@glebocki2005], and we find its rotation-corrected atmospheric parameters ($T_\mathrm{eff} = 11903 \pm 405$ K, $\log{g} = 4.35 \pm 0.14$ dex) produce an interpolated age of 16 Myr. Using uncorrected atmospheric parameters ($T_\mathrm{eff} = 11263 \pm 383$ K, $\log{g} = 4.26 \pm 0.14$ dex) leads to an interpolated age of 25 Myr. ### $\zeta$ Delphini [@derosa2014] recently published the discovery of a wide companion to $\zeta$ Delphini (HIP 101589). Those authors estimated the age of the system as 525$\pm$125 Myr, from the star’s positions on a color-magnitude and a temperature-luminosity diagram, leading to a model-dependent companion mass of 50$\pm$15 M$_\mathrm{Jup}$. Our method yields a mean age of 552 Myr, with 68% and 95% confidence intervals of 531-772 Myr, and 237-866 Myr, respectively. Our revised age is in agreement with the previous estimate of [@derosa2014], although favoring the interpretation of an older system and thus more massive companion. The interpolated ages for $\zeta$ Del are somewhat older: 612 Myr for the rotation-corrected atmospheric parameters $T_\mathrm{eff}$=8305 K, $\log{g}$=3.689, or 649 Myr for the uncorrected parameters $T_\mathrm{eff}$=8639 K, $\log{g}$=3.766. Note that in this case moderate rotation ($v \sin i =$ 99.2 km s$^{-1}$) leads to a discrepancy of only $\approx 6\%$ in the derived ages. ### 49 Ceti 49 Ceti does not have a known companion at present, but does possess a resolved molecular gas disk [@hughes2008]. The star is a proposed member of the 40 Myr Argus association, which would require cometary collisions to explain the gaseous disk that should have dissipated by $\sim$ 10 Myr due to radiation pressure [@zuckerman2012]. With a mean rotational velocity of $\sim$ 190 km s$^{-1}$ [@glebocki2005], and evidence that the star is highly inclined to our line of sight, rotational effects on photometric H-R diagram placement are prominent. Our $uvby\beta$ atmospheric parameters for 49 Ceti are $T_\mathrm{eff}$ = 10007 $\pm$ 340 K, $\log{g} = 4.37 \pm 0.14$ dex, after rotational effects were accounted for. These parameters place the star essentially on the ZAMS, with an interpolated age of 9 Myr, calling into question the cometary genesis of its gaseous disk.
However, the uncorrected atmospheric parameters ($T_\mathrm{eff} = 9182 \pm 309$ K, $\log{g}=4.22 \pm 0.14$ dex) are more consistent with the A1 spectral type and produce an interpolated age of 57 Myr, which seems to support the cometary collision hypothesis. This case illustrates the importance of high-precision atmospheric parameters. Conclusions =========== In the absence of finely calibrated empirical age indicators, such as the rotation-activity-age relation for solar-type stars [e.g. @mamajek2008], ages for early spectral type stars typically have come from open cluster and moving group membership, or through association with a late-type companion that can be age-dated through one of the applicable empirical methods. Because of their rapid evolution, early type stars are amenable to age dating via isochrones. In this paper we have investigated the use of Strömgren photometric techniques for estimating stellar atmospheric parameters, which are then compared to isochrones from modern stellar evolution models. Bayesian inference is a particularly useful tool in the estimation of parameters such as age and mass from evolutionary models for large samples that span considerable ranges in temperature, luminosity, mass, and age. The Bayesian approach produces unbiased ages relative to a straightforward interpolation among isochrones, which leads to age estimates that are biased towards older ages. However, as noted earlier, stars located beyond the range of the theory (below the theoretical ZAMS in our case) are assigned unreasonably old ages with the Bayesian method. This presumably is due to the clustering of isochrones and the dominance of the prior in inference scenarios in which the prior probability is changing quickly relative to the magnitude of the uncertainty in the atmospheric parameters. Linear interpolation for stars apparently below the MS may produce more reasonable age estimates. The most important parameter for determining precise stellar ages near the ZAMS is the luminosity or surface gravity indicator. Effective temperatures, or observational proxies for temperature, are currently estimated with suitable precision. However, $\log g$, luminosity, or absolute magnitude (requiring a precise distance as well), are not currently estimated with the precision needed to meaningfully constrain the ages of field stars near the ZAMS. This effect is particularly pronounced for lower temperature stars where, for a given shift in $\log g$, the inferred age can change by many orders of magnitude. Our open cluster tests indicated that the age uncertainties due to the choice of evolutionary models are not significant compared to those introduced by the uncertainties in the surface gravities. We have derived new atmospheric parameters (taking stellar rotation into account) and model-dependent ages and masses for 3493 BAF stars within 100 pc of the Sun. Our method of atmospheric parameter determination was calibrated and validated against stars with fundamentally determined atmospheric parameters. We further tested and validated our method of age estimation using open clusters with well-known ages. In determining the uncertainties in all of our newly derived parameters we conservatively account for the effects of systematics, metallicity, numerical precision, reddening, photometric errors, and uncertainties in $v\sin{i}$ as well as unknown rotational velocities. Field star ages must be considered with caution.
At minimum, our homogeneously derived set of stellar ages provides a relative youth ordering. For those stars below the MS we encourage the use of interpolated ages rather than Bayesian ages, unless more precise atmospheric parameters become available. Using the new set of ages, we presented an empirical mass-age relation for solar neighborhood B0-F5 stars. We also presented empirical relations between spectral type and age/mass, and we discussed ages in detail for several famous low mass companion and/or debris disk objects. An anticipated use of our catalog is in the prioritization of targets for direct imaging of brown dwarf and planetary mass companions. David & Hillenbrand (2015b, *in preparation*) will explore how ages derived using this methodology can be applied to investigations such as debris disk evolution. The authors wish to thank John Stauffer for his helpful input on sources of $uvby\beta$ data for open clusters and Timothy Brandt for helpful discussions during the proof stage of this work regarding the open cluster analysis, resulting in the appendix material concerning logarithmic versus linear approaches and a modified version of Figure 17. This material is based upon work supported in 2014 and 2015 by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144469. This research has made use of the WEBDA database, operated at the Institute for Astronomy of the University of Vienna, as well as the SIMBAD database and VizieR catalog access tool, operated at CDS, Strasbourg, France. Metallicity Effects {#metallicity} ------------------- We do not account explicitly for metallicity in this study, having assumed solar values in both our atmospheric models and our evolutionary grids. Our analysis in the $T_\mathrm{eff}$ and $\log g$ calibrations found that for stars with fundamentally determined atmospheric parameters and available \[Fe/H\] measurements, the accuracy with which we can determine atmospheric parameters using $uvby\beta$ photometry does not vary systematically with metallicity. The effects of different metallicity assumptions on the Str[ö]{}mgren index atmospheric grids are illustrated in Figure \[fig:grids\_metallicity\]. Moving from the atmospheric grid to the evolutionary grid, Figure 17 of @valenti2005 illustrates that for the coolest stars under consideration here, which were the focus of their study, variation of metallicity from +0.5 to -0.5 dex in \[Fe/H\] corresponds to a +0.1 to -0.1 dex shift in $\log g$ of an evolutionary isochrone. Among hotter stars, Figure \[fig:grids\_metallicity\] shows that metallicity uncertainty affects temperatures only slightly, and gravities minimally if at all. We similarly calculated the effect on atmospheric parameter determination when allowing the model color grids to vary from +0.5 to -0.5 dex in \[M/H\], which notably represent the extremes of the metallicity range included in our sample (less than 1% of stars considered here have $\left | [\mathrm{Fe/H}] \right |>0.5$ dex). Figures \[fig:met-teff\] & \[fig:met-logg\] examine in detail the effects of metallicity on $T_\mathrm{eff}, \log{g}$ determinations in the relevant $uvby\beta$ planes. In summary, $T_\mathrm{eff}$ variations of up to $\sim 1\%$ in the $(b-y)-c_1$ plane, $\sim 2\%$ in the $a_0-r^*$ plane, and $6\%$ in the $c_1-\beta$ plane are possible with shifts of $\pm$ 0.5 dex in \[M/H\].
Notably, however, $T_\mathrm{eff}$ variations above the 2% level are only expected in the $c_1-\beta$ plane for stars hotter than $\sim 17000$ K, or roughly spectral type B4, of which there are very few in our sample. Similarly, metallicity shifts of $\pm$ 0.5 dex can cause variations of $\sim$ 0.1 dex in $\log{g}$ in the $(b-y)-c_1$ and $c_1-\beta$ planes, while the same variation in the $a_0-r^*$ plane produces surface gravity shifts closer to $\sim 0.05$ dex. By contrast, metallicity effects are more prominent in color-magnitude techniques. Recently, @nielsen2013 executed a Bayesian analysis of the locations in the $M_V$ vs $B-V$ diagram of Gemini/NICI targets to derive their ages, including confidence contours for the stellar masses, ages, and metallicities. That work demonstrates that, in this particular color-magnitude diagram, higher metallicity correlates with increasing mass and decreasing age. Metal-poor stars will have erroneously young ages attributed to them when solar metallicity is assumed. ![image](f42.pdf){width="99.00000%"} ![image](f43.pdf){width="30.00000%"} ![image](f44.pdf){width="30.00000%"} ![image](f45.pdf){width="30.00000%"} ![image](f46.pdf){width="30.00000%"} ![image](f47.pdf){width="30.00000%"} ![image](f48.pdf){width="30.00000%"} Confidence Intervals -------------------- All confidence intervals in age and mass quoted in this work are the bounds of the Highest Posterior Density (HPD) Region. For a given posterior probability density, $p(\theta | x)$, the $100(1-\alpha) \%$ HPD region is defined as the subset, $\mathcal{C}$, of $\theta$ values: $$\mathcal{C} = \left \{ \theta : p(\theta | x) \geq p^*\right \},$$ where $p^*$ is the largest number such that $$\int_{\theta: p(\theta | x) \geq p^*} p(\theta | x) \mathrm{d} \theta = 1- \alpha.$$ In other words, the HPD region is the set of most probable values (corresponding to the smallest range in $\theta$) that encloses $100(1-\alpha) \%$ of the posterior mass. The HPD method is particularly suited for finding confidence intervals of skewed probability distributions, such as the stellar age posteriors studied in this work. To find the HPD region numerically, a function iteratively lowers a test value of $p^*$, integrating the normalized posterior PDF over the region where it exceeds $p^*$, until the enclosed area/volume reaches the desired confidence level.
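A minimal numerical sketch of this procedure for a discretized, normalized 1-D posterior is given below; selecting grid cells in order of decreasing probability is equivalent to lowering $p^*$, and the function name and interface are illustrative rather than the actual implementation.

```python
import numpy as np

def hpd_bounds(grid, pdf, level=0.68):
    """Bounds of the smallest set of grid cells enclosing `level` of the
    probability (the HPD interval for a unimodal 1-D PDF)."""
    grid = np.asarray(grid, dtype=float)
    pdf = np.asarray(pdf, dtype=float)
    pdf = pdf / pdf.sum()                     # ensure normalization
    order = np.argsort(pdf)[::-1]             # cells sorted by decreasing probability
    cumulative = np.cumsum(pdf[order])
    n_keep = np.searchsorted(cumulative, level) + 1
    selected = order[:n_keep]
    return grid[selected].min(), grid[selected].max()
```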
Open Cluster Tables ------------------- [ccccccccc]{} HD 91711 & B8 V & -0.062 & 0.146 & 0.457 & 2.745 & 14687 $\pm$ 235 & 4.467 $\pm$ 0.113 & 153\ HD 91839 & A1 V & 0.025 & 0.178 & 1.033 & 2.904 & 9509 $\pm$ 152 & 4.188 $\pm$ 0.091 & 146\ HD 91896 & B7 III & -0.081 & 0.093 & 0.346 & 2.660 & 16427 $\pm$ 263 & 3.782 $\pm$ 0.113 & 155\ HD 91906 & A0 V & 0.016 & 0.177 & 1.005 & 2.889 & 9799 $\pm$ 157 & 4.146 $\pm$ 0.113 & 149\ HD 92275 & B8 III/IV & -0.056 & 0.125 & 0.562 & 2.709 & 13775 $\pm$ 220 & 3.852 $\pm$ 0.113 & 153\ HD 92467 & B95III & -0.026 & 0.168 & 0.833 & 2.851 & 11178 $\pm$ 179 & 4.423 $\pm$ 0.113 & 110\ HD 92478 & A0 V & 0.010 & 0.183 & 0.978 & 2.925 & 9586 $\pm$ 153 & 4.431 $\pm$ 0.091 & 60\ HD 92535 & A5 V n & 0.104 & 0.194 & 0.884 & 2.838 & 8057 $\pm$ 129 & 4.344 $\pm$ 0.145 & 140\ HD 92536 & B8 V & -0.043 & 0.131 & 0.705 & 2.795 & 13183 $\pm$ 211 & 4.423 $\pm$ 0.113 & 250\ HD 92568 & A M & 0.209 & 0.237 & 0.625 & 2.748 & 7113 $\pm$ 114 & 4.341 $\pm$ 0.145 & 126\ HD 92664 & B8 III P & -0.083 & 0.118 & 0.386 & 2.702 & 15434 $\pm$ 247 & 4.145 $\pm$ 0.113 & 65\ HD 92715 & B9 V nn & -0.027 & 0.136 & 0.882 & 2.836 & 12430 $\pm$ 199 & 4.362 $\pm$ 0.113 & 290\ HD 92783 & B85V nn & -0.033 & 0.124 & 0.835 & 2.804 & 12278 $\pm$ 196 & 4.130 $\pm$ 0.113 & 230\ HD 92837 & A0 IV nn & -0.007 & 0.160 & 0.953 & 2.873 & 10957 $\pm$ 175 & 4.322 $\pm$ 0.113 & 220\ HD 92896 & A3 IV & 0.114 & 0.193 & 0.838 & 2.831 & 8010 $\pm$ 128 & 4.425 $\pm$ 0.145 & 139\ HD 92938 & B3 V n & -0.075 & 0.105 & 0.384 & 2.690 & 15677 $\pm$ 251 & 4.015 $\pm$ 0.113 & 120\ HD 92966 & B95V nn & -0.019 & 0.158 & 0.930 & 2.878 & 11372 $\pm$ 182 & 4.445 $\pm$ 0.113 & 225\ HD 92989 & A05Va & 0.008 & 0.180 & 0.982 & 2.925 & 9979 $\pm$ 160 & 4.480 $\pm$ 0.091 & 148\ HD 93098 & A1 V s & 0.017 & 0.180 & 0.993 & 2.915 & 9688 $\pm$ 155 & 4.385 $\pm$ 0.091 & 135\ HD 93194 & B3 V nn & -0.078 & 0.105 & 0.357 & 2.668 & 17455 $\pm$ 279 & 4.015 $\pm$ 0.113 & 310\ HD 93424 & A3 Va & 0.060 & 0.197 & 0.950 & 2.890 & 8852 $\pm$ 142 & 4.247 $\pm$ 0.113 & 95\ HD 93517 & A1 V & 0.052 & 0.196 & 0.976 & 2.919 & 9613 $\pm$ 154 & 4.510 $\pm$ 0.091 & 220\ HD 93540 & B6 V nn & -0.065 & 0.116 & 0.476 & 2.722 & 15753 $\pm$ 252 & 4.308 $\pm$ 0.113 & 305\ HD 93549 & B6 V & -0.066 & 0.123 & 0.454 & 2.729 & 15579 $\pm$ 249 & 4.422 $\pm$ 0.113 & 265\ HD 93607 & B25V n & -0.084 & 0.102 & 0.292 & 2.675 & 17407 $\pm$ 279 & 4.098 $\pm$ 0.113 & 160\ HD 93648 & A0 V n & 0.041 & 0.188 & 1.025 & 2.890 & 9672 $\pm$ 155 & 4.157 $\pm$ 0.091 & 215\ HD 93714 & B2 IV-V n & -0.092 & 0.100 & 0.201 & 2.647 & 18927 $\pm$ 303 & 3.979 $\pm$ 0.113 & 40\ HD 93738 & A0 V nn & -0.027 & 0.158 & 0.842 & 2.817 & 12970 $\pm$ 208 & 4.336 $\pm$ 0.113 & 315\ HD 93874 & A3 IV & 0.071 & 0.203 & 0.947 & 2.896 & 8831 $\pm$ 141 & 4.367 $\pm$ 0.091 & 142\ HD 94066 & B5 V n & -0.068 & 0.117 & 0.439 & 2.680 & 15096 $\pm$ 242 & 3.792 $\pm$ 0.113 & 154\ HD 94174 & A0 V & 0.046 & 0.193 & 0.946 & 2.907 & 9305 $\pm$ 149 & 4.391 $\pm$ 0.113 & 149 [ccccccccc]{} BD$+$49 868 & F5 V & 0.261 & 0.165 & 0.459 & 2.683 & 6693 $\pm$ 107 & 4.455 $\pm$ 0.145 & 20\ HD 19767 & F0 V N & 0.176 & 0.178 & 0.756 & 2.765 & 7368 $\pm$ 118 & 4.174 $\pm$ 0.145 & 140\ HD 19805 & A0 Va & -0.000 & 0.161 & 0.931 & 2.887 & 10073 $\pm$ 161 & 4.344 $\pm$ 0.113 & 20\ HD 19893 & B9 V & -0.031 & 0.131 & 0.850 & 2.807 & 12614 $\pm$ 202 & 4.176 $\pm$ 0.113 & 280\ HD 19954 & A9 IV & 0.150 & 0.200 & 0.794 & 2.792 & 7632 $\pm$ 122 & 4.297 $\pm$ 0.145 & 85\ HD 20135 & A0 P & -0.011 & 0.186 & 0.970 & 2.848 & 10051 $\pm$ 161 & 3.998 $\pm$ 
0.113 & 35\ BD$+$49 889 & F5 V & 0.292 & 0.156 & 0.418 & 2.656 & 6430 $\pm$ 103 & 4.352 $\pm$ 0.145 & 65\ BD$+$49 896 & F4 V & 0.261 & 0.168 & 0.472 & 2.686 & 6686 $\pm$ 107 & 4.410 $\pm$ 0.145 & 30\ HD 20365 & B3 V & -0.079 & 0.103 & 0.346 & 2.681 & 16367 $\pm$ 262 & 4.025 $\pm$ 0.113 & 145\ HD 20391 & A1 Va n & 0.026 & 0.179 & 1.006 & 2.901 & 10415 $\pm$ 167 & 4.386 $\pm$ 0.091 & 260\ HD 20487 & A0 V N & -0.016 & 0.151 & 0.976 & 2.856 & 11659 $\pm$ 187 & 4.198 $\pm$ 0.113 & 280\ BD$+$47 808 & F1 IV N & 0.183 & 0.179 & 0.759 & 2.763 & 7281 $\pm$ 116 & 4.062 $\pm$ 0.145 & 180\ BD$+$48 892 & F3 IV-V & 0.246 & 0.167 & 0.524 & 2.696 & 6800 $\pm$ 109 & 4.359 $\pm$ 0.145 & 20\ BD$+$48 894 & F0 IV & 0.174 & 0.202 & 0.734 & 2.770 & 7416 $\pm$ 119 & 4.284 $\pm$ 0.145 & 75\ HD 20809 & B5 V & -0.074 & 0.109 & 0.395 & 2.696 & 15934 $\pm$ 255 & 4.097 $\pm$ 0.113 & 200\ HD 20842 & A0 Va & -0.005 & 0.157 & 0.950 & 2.886 & 10258 $\pm$ 164 & 4.325 $\pm$ 0.113 & 85\ HD 20863 & B9 V & -0.034 & 0.134 & 0.810 & 2.813 & 12154 $\pm$ 194 & 4.267 $\pm$ 0.113 & 200\ BD$+$49 914 & F5 V & 0.281 & 0.170 & 0.431 & 2.664 & 6520 $\pm$ 104 & 4.395 $\pm$ 0.145 & 120\ HD 20919 & A8 V & 0.168 & 0.191 & 0.757 & 2.775 & 7463 $\pm$ 119 & 4.259 $\pm$ 0.145 & 50\ BD$+$49 918 & F1 V N & 0.186 & 0.183 & 0.770 & 2.755 & 7235 $\pm$ 116 & 3.977 $\pm$ 0.145 & 175\ HD 20931 & A1 Va & 0.018 & 0.174 & 0.979 & 2.911 & 9588 $\pm$ 153 & 4.342 $\pm$ 0.113 & 85\ BD$+$47 816 & F4 V & 0.271 & 0.155 & 0.452 & 2.672 & 6600 $\pm$ 106 & 4.399 $\pm$ 0.145 & 28\ HD 20961 & B95V & -0.019 & 0.163 & 0.920 & 2.875 & 10537 $\pm$ 169 & 4.344 $\pm$ 0.113 & 25\ BD$+$46 745 & F4 V & 0.274 & 0.169 & 0.462 & 2.674 & 6566 $\pm$ 105 & 4.332 $\pm$ 0.145 & 160\ HD 20969 & A8 V & 0.186 & 0.192 & 0.715 & 2.758 & 7291 $\pm$ 117 & 4.239 $\pm$ 0.145 & 20\ HD 20986 & A3 V N & 0.046 & 0.190 & 1.004 & 2.896 & 9584 $\pm$ 153 & 4.243 $\pm$ 0.091 & 210\ HD 21005 & A5 V N & 0.074 & 0.189 & 0.987 & 2.862 & 8266 $\pm$ 132 & 4.197 $\pm$ 0.145 & 250\ HD 21091 & B95IV nn & -0.019 & 0.152 & 0.938 & 2.856 & 12477 $\pm$ 200 & 4.416 $\pm$ 0.113 & 340\ HD 21092 & A5 V & 0.054 & 0.218 & 0.938 & 2.893 & 8775 $\pm$ 140 & 4.311 $\pm$ 0.091 & 75\ TYC 3320-1715-1 & F4 V & 0.281 & 0.153 & 0.469 & 2.663 & 6495 $\pm$ 104 & 4.220 $\pm$ 0.145 & 110\ HD 21152 & B9 V & -0.018 & 0.158 & 0.943 & 2.868 & 11306 $\pm$ 181 & 4.353 $\pm$ 0.113 & 225\ HD 232793 & F5 V & 0.311 & 0.172 & 0.377 & 2.645 & 6274 $\pm$ 100 & 4.362 $\pm$ 0.145 & 93\ HD 21181 & B85V N & -0.038 & 0.122 & 0.784 & 2.766 & 13726 $\pm$ 220 & 4.119 $\pm$ 0.113 & 345\ HD 21239 & A3 V N & 0.045 & 0.190 & 0.997 & 2.910 & 9182 $\pm$ 147 & 4.320 $\pm$ 0.091 & 145\ HD 21278 & B5 V & -0.073 & 0.111 & 0.398 & 2.705 & 15274 $\pm$ 244 & 4.152 $\pm$ 0.113 & 75\ HD 21302 & A1 V N & 0.022 & 0.177 & 0.989 & 2.888 & 10269 $\pm$ 164 & 4.301 $\pm$ 0.091 & 230\ BD$+$48 923 & F4 V & 0.270 & 0.153 & 0.464 & 2.673 & 6603 $\pm$ 106 & 4.362 $\pm$ 0.145 & 20\ HD 21345 & A5 V N & 0.051 & 0.208 & 0.969 & 2.893 & 9435 $\pm$ 151 & 4.324 $\pm$ 0.091 & 200\ HD 21398 & B9 V & -0.030 & 0.145 & 0.825 & 2.837 & 11615 $\pm$ 186 & 4.372 $\pm$ 0.113 & 135\ HD 21428 & B3 V & -0.077 & 0.105 & 0.363 & 2.686 & 16421 $\pm$ 263 & 4.076 $\pm$ 0.113 & 200\ HD 21481 & A0 V N & -0.013 & 0.164 & 0.993 & 2.858 & 11187 $\pm$ 179 & 4.141 $\pm$ 0.113 & 250\ HD 21527 & A7 IV & 0.093 & 0.231 & 0.855 & 2.856 & 8231 $\pm$ 132 & 4.486 $\pm$ 0.145 & 80\ HD 21551 & B8 V & -0.048 & 0.118 & 0.673 & 2.746 & 14869 $\pm$ 238 & 4.220 $\pm$ 0.113 & 380\ HD 21553 & A6 V N & 0.072 & 0.206 & 0.921 & 2.872 
& 8381 $\pm$ 134 & 4.414 $\pm$ 0.145 & 150\ HD 21619 & A6 V & 0.052 & 0.221 & 0.935 & 2.894 & 8843 $\pm$ 141 & 4.329 $\pm$ 0.091 & 90\ BD$+$49 957 & F3 V & 0.258 & 0.168 & 0.500 & 2.687 & 6699 $\pm$ 107 & 4.334 $\pm$ 0.145 & 56\ HD 21641 & B85V & -0.042 & 0.131 & 0.721 & 2.747 & 12914 $\pm$ 207 & 3.929 $\pm$ 0.113 & 215\ BD$+$49 958 & F1 V & 0.198 & 0.188 & 0.732 & 2.739 & 7137 $\pm$ 114 & 3.989 $\pm$ 0.145 & 155\ HD 21672 & B8 V & -0.050 & 0.119 & 0.649 & 2.747 & 13473 $\pm$ 216 & 4.071 $\pm$ 0.113 & 225\ BD$+$48 944 & A5 V & 0.063 & 0.220 & 0.931 & 2.886 & 8799 $\pm$ 141 & 4.305 $\pm$ 0.091 & 120\ HD 21931 & B9 V & -0.029 & 0.147 & 0.835 & 2.829 & 11998 $\pm$ 192 & 4.343 $\pm$ 0.113 & 205\ [ccccccccc]{} HD 23157 & A9 V & 0.168 & 0.190 & 0.725 & 2.778 & 7463 $\pm$ 121 & 4.369 $\pm$ 0.145 & 100\ HD 23156 & A7 V & 0.111 & 0.215 & 0.815 & 2.837 & 8046 $\pm$ 130 & 4.498 $\pm$ 0.145 & 70\ HD 23247 & F3 V & 0.237 & 0.174 & 0.527 & 2.704 & 6863 $\pm$ 111 & 4.424 $\pm$ 0.145 & 40\ HD 23246 & A8 V & 0.170 & 0.184 & 0.758 & 2.773 & 7409 $\pm$ 120 & 4.234 $\pm$ 0.145 & 200\ HD 23288 & B7 V & -0.051 & 0.120 & 0.636 & 2.747 & 13953 $\pm$ 226 & 4.151 $\pm$ 0.113 & 280\ HD 23302 & B6 III & -0.054 & 0.098 & 0.638 & 2.690 & 13308 $\pm$ 216 & 3.478 $\pm$ 0.113 & 205\ HD 23289 & F3 V & 0.244 & 0.164 & 0.521 & 2.699 & 6796 $\pm$ 110 & 4.387 $\pm$ 0.145 & 40\ HD 23326 & F4 V & 0.250 & 0.164 & 0.514 & 2.691 & 6741 $\pm$ 109 & 4.358 $\pm$ 0.145 & 40\ HD 23324 & B8 V & -0.052 & 0.116 & 0.634 & 2.747 & 13748 $\pm$ 223 & 4.126 $\pm$ 0.113 & 255\ HD 23338 & B6 IV & -0.061 & 0.104 & 0.553 & 2.702 & 13696 $\pm$ 222 & 3.772 $\pm$ 0.113 & 130\ HD 23351 & F3 V & 0.249 & 0.176 & 0.507 & 2.695 & 6755 $\pm$ 109 & 4.391 $\pm$ 0.145 & 80\ HD 23361 & A25Va n & 0.069 & 0.201 & 0.959 & 2.872 & 8356 $\pm$ 135 & 4.309 $\pm$ 0.145 & 235\ HD 23375 & A9 V & 0.180 & 0.187 & 0.710 & 2.765 & 7336 $\pm$ 119 & 4.318 $\pm$ 0.145 & 75\ HD 23410 & A0 Va & 0.004 & 0.164 & 0.975 & 2.899 & 10442 $\pm$ 169 & 4.382 $\pm$ 0.113 & 200\ HD 23409 & A3 V & 0.070 & 0.202 & 0.980 & 2.892 & 8903 $\pm$ 144 & 4.270 $\pm$ 0.091 & 170\ HD 23432 & B8 V & -0.039 & 0.127 & 0.758 & 2.793 & 12695 $\pm$ 206 & 4.250 $\pm$ 0.113 & 235\ HD 23441 & B9 V N & -0.029 & 0.135 & 0.858 & 2.822 & 11817 $\pm$ 191 & 4.209 $\pm$ 0.113 & 200\ HD 23479 & A9 V & 0.188 & 0.166 & 0.716 & 2.755 & 7239 $\pm$ 117 & 4.212 $\pm$ 0.145 & 150\ HD 23489 & A2 V & 0.033 & 0.183 & 1.012 & 2.907 & 9170 $\pm$ 149 & 4.239 $\pm$ 0.091 & 110\ HD 23512 & A2 V & 0.057 & 0.196 & 1.035 & 2.909 & 8852 $\pm$ 143 & 4.214 $\pm$ 0.091 & 145\ HD 23511 & F5 V & 0.279 & 0.174 & 0.412 & 2.674 & 6521 $\pm$ 106 & 4.477 $\pm$ 0.145 & 28\ HD 23514 & F5 V & 0.285 & 0.179 & 0.443 & 2.668 & 6450 $\pm$ 104 & 4.307 $\pm$ 0.145 & 40\ HD 23513 & F5 V & 0.278 & 0.170 & 0.423 & 2.673 & 6528 $\pm$ 106 & 4.447 $\pm$ 0.145 & 30\ HD 23568 & B95Va n & -0.024 & 0.139 & 0.914 & 2.847 & 11731 $\pm$ 190 & 4.301 $\pm$ 0.113 & 240\ HD 23567 & F0 V & 0.159 & 0.196 & 0.735 & 2.788 & 7560 $\pm$ 122 & 4.407 $\pm$ 0.145 & 50\ HD 23585 & F0 V & 0.168 & 0.185 & 0.713 & 2.780 & 7472 $\pm$ 121 & 4.405 $\pm$ 0.145 & 100\ HD 23608 & F5 V & 0.278 & 0.177 & 0.482 & 2.673 & 6492 $\pm$ 105 & 4.185 $\pm$ 0.145 & 110\ HD 23607 & F0 V & 0.108 & 0.203 & 0.814 & 2.841 & 8085 $\pm$ 131 & 4.534 $\pm$ 0.145 & 12\ HD 23629 & A0 V & -0.001 & 0.163 & 0.986 & 2.899 & 10340 $\pm$ 168 & 4.342 $\pm$ 0.113 & 170\ HD 23632 & A0 Va & 0.006 & 0.167 & 1.009 & 2.899 & 10461 $\pm$ 169 & 4.312 $\pm$ 0.113 & 225\ HD 23628 & A4 V & 0.090 & 0.189 & 0.904 & 2.853 & 8163 
$\pm$ 132 & 4.381 $\pm$ 0.145 & 215\ HD 23643 & A35V & 0.079 & 0.194 & 0.943 & 2.862 & 8258 $\pm$ 134 & 4.301 $\pm$ 0.145 & 185\ HD 23733 & A9 V & 0.207 & 0.177 & 0.672 & 2.736 & 7066 $\pm$ 114 & 4.174 $\pm$ 0.145 & 180\ HD 23732 & F5 V & 0.258 & 0.172 & 0.460 & 2.688 & 6695 $\pm$ 108 & 4.473 $\pm$ 0.145 & 50\ HD 23753 & B8 V N & -0.046 & 0.113 & 0.712 & 2.736 & 13096 $\pm$ 212 & 3.859 $\pm$ 0.113 & 240\ HD 23791 & A9 V+ & 0.139 & 0.214 & 0.758 & 2.811 & 7776 $\pm$ 126 & 4.480 $\pm$ 0.145 & 85\ HD 23850 & B8 III & -0.048 & 0.102 & 0.701 & 2.695 & 13446 $\pm$ 218 & 3.483 $\pm$ 0.113 & 280\ HD 23863 & A8 V & 0.116 & 0.201 & 0.857 & 2.826 & 7926 $\pm$ 128 & 4.354 $\pm$ 0.145 & 160\ HD 23872 & A1 Va n & 0.032 & 0.182 & 1.013 & 2.894 & 10028 $\pm$ 162 & 4.247 $\pm$ 0.091 & 240\ HD 23873 & B95Va & -0.023 & 0.143 & 0.907 & 2.852 & 10897 $\pm$ 177 & 4.255 $\pm$ 0.113 & 90\ HD 23886 & A4 V & 0.068 & 0.214 & 0.915 & 2.880 & 8974 $\pm$ 145 & 4.343 $\pm$ 0.091 & 165\ HD 23912 & F3 V & 0.274 & 0.154 & 0.481 & 2.671 & 6531 $\pm$ 106 & 4.242 $\pm$ 0.145 & 130\ HD 23924 & A7 V & 0.100 & 0.223 & 0.852 & 2.852 & 8121 $\pm$ 132 & 4.460 $\pm$ 0.145 & 100\ HD 23923 & B85V N & -0.033 & 0.124 & 0.839 & 2.794 & 12911 $\pm$ 209 & 4.159 $\pm$ 0.113 & 310\ HD 23948 & A1 Va & 0.033 & 0.191 & 0.984 & 2.905 & 9237 $\pm$ 150 & 4.307 $\pm$ 0.091 & 120\ HD 24076 & A2 V & 0.008 & 0.168 & 0.923 & 2.867 & 10196 $\pm$ 165 & 4.298 $\pm$ 0.091 & 155\ HD 24132 & F2 V & 0.245 & 0.149 & 0.597 & 2.692 & 6744 $\pm$ 109 & 4.182 $\pm$ 0.145 & 230 \[table:pleiades\] [ccccccccc]{} HD 26015 & F3 V & 0.252 & 0.174 & 0.537 & 2.693 & 6732 $\pm$ 109 & 4.244 $\pm$ 0.145 & 25\ HD 26462 & F1 IV-V & 0.230 & 0.165 & 0.596 & 2.710 & 6916 $\pm$ 112 & 4.291 $\pm$ 0.145 & 30\ HD 26737 & F5 V & 0.274 & 0.168 & 0.477 & 2.674 & 6558 $\pm$ 106 & 4.263 $\pm$ 0.145 & 60\ HD 26911 & F3 V & 0.258 & 0.176 & 0.525 & 2.690 & 6682 $\pm$ 108 & 4.228 $\pm$ 0.145 & 30\ HD 27176 & A7 m & 0.172 & 0.187 & 0.785 & 2.767 & 7380 $\pm$ 120 & 4.087 $\pm$ 0.145 & 125\ HD 27397 & F0 IV & 0.171 & 0.194 & 0.770 & 2.766 & 7410 $\pm$ 120 & 4.173 $\pm$ 0.145 & 100\ HD 27429 & F2 VN & 0.240 & 0.171 & 0.588 & 2.693 & 6828 $\pm$ 111 & 4.270 $\pm$ 0.145 & 150\ HD 27459 & F0 IV & 0.129 & 0.204 & 0.871 & 2.812 & 7782 $\pm$ 126 & 4.198 $\pm$ 0.145 & 35\ HD 27524 & F5 V & 0.285 & 0.161 & 0.461 & 2.656 & 6461 $\pm$ 105 & 4.213 $\pm$ 0.145 & 110\ HD 27561 & F4 V & 0.270 & 0.162 & 0.482 & 2.677 & 6594 $\pm$ 107 & 4.284 $\pm$ 0.145 & 30\ HD 27628 & A2 M & 0.133 & 0.225 & 0.707 & 2.756 & 7944 $\pm$ 129 & 4.743 $\pm$ 0.145 & 30\ HD 27819 & A7 IV & 0.080 & 0.209 & 0.982 & 2.857 & 8203 $\pm$ 133 & 4.170 $\pm$ 0.145 & 35\ HD 27901 & F4 V N & 0.238 & 0.178 & 0.597 & 2.704 & 6837 $\pm$ 111 & 4.233 $\pm$ 0.145 & 110\ HD 27934 & A5 IV-V & 0.064 & 0.201 & 1.053 & 2.867 & 8506 $\pm$ 138 & 3.884 $\pm$ 0.091 & 90\ HD 27946 & A7 V & 0.149 & 0.192 & 0.840 & 2.783 & 7584 $\pm$ 123 & 4.112 $\pm$ 0.145 & 210\ HD 27962 & A3 V & 0.020 & 0.193 & 1.046 & 2.889 & 9123 $\pm$ 148 & 4.004 $\pm$ 0.091 & 30\ HD 28024 & A9 IV- N & 0.165 & 0.175 & 0.947 & 2.753 & 7279 $\pm$ 118 & 3.503 $\pm$ 0.145 & 215\ HD 28226 & A M & 0.164 & 0.213 & 0.771 & 2.775 & 7493 $\pm$ 121 & 4.248 $\pm$ 0.145 & 130\ HD 28294 & F0 IV & 0.198 & 0.173 & 0.694 & 2.745 & 7174 $\pm$ 116 & 4.194 $\pm$ 0.145 & 135\ HD 28319 & A7 III & 0.097 & 0.198 & 1.011 & 2.831 & 7945 $\pm$ 129 & 3.930 $\pm$ 0.145 & 130\ HD 28355 & A7 m & 0.112 & 0.226 & 0.908 & 2.832 & 7930 $\pm$ 128 & 4.207 $\pm$ 0.145 & 140\ HD 28485 & F0 V+ N & 0.200 & 0.192 & 0.717 & 2.740 & 
7129 $\pm$ 115 & 4.035 $\pm$ 0.145 & 150\ HD 28527 & A5 m & 0.085 & 0.218 & 0.964 & 2.856 & 8180 $\pm$ 133 & 4.194 $\pm$ 0.145 & 100\ HD 28546 & A7 m & 0.142 & 0.234 & 0.796 & 2.809 & 7726 $\pm$ 125 & 4.354 $\pm$ 0.145 & 30\ HD 28556 & F0 IV & 0.147 & 0.202 & 0.814 & 2.795 & 7645 $\pm$ 124 & 4.244 $\pm$ 0.145 & 140\ HD 28568 & F5 V & 0.274 & 0.168 & 0.466 & 2.676 & 6564 $\pm$ 106 & 4.315 $\pm$ 0.145 & 55\ HD 28677 & F2 V & 0.214 & 0.176 & 0.654 & 2.725 & 7032 $\pm$ 114 & 4.161 $\pm$ 0.145 & 100\ HD 28911 & F5 V & 0.283 & 0.163 & 0.459 & 2.663 & 6481 $\pm$ 105 & 4.249 $\pm$ 0.145 & 40\ HD 28910 & A9 V & 0.144 & 0.200 & 0.830 & 2.796 & 7659 $\pm$ 124 & 4.213 $\pm$ 0.145 & 95\ HD 29169 & F2 V & 0.236 & 0.183 & 0.567 & 2.708 & 6880 $\pm$ 111 & 4.321 $\pm$ 0.145 & 80\ HD 29225 & F5 V & 0.276 & 0.171 & 0.461 & 2.675 & 6547 $\pm$ 106 & 4.316 $\pm$ 0.145 & 45\ HD 29375 & F0 IV-V & 0.187 & 0.187 & 0.740 & 2.754 & 7257 $\pm$ 118 & 4.106 $\pm$ 0.145 & 155\ HD 29388 & A5 IV-V & 0.062 & 0.199 & 1.047 & 2.870 & 8645 $\pm$ 140 & 3.927 $\pm$ 0.091 & 115\ HD 29499 & A M & 0.140 & 0.231 & 0.826 & 2.810 & 7713 $\pm$ 125 & 4.266 $\pm$ 0.145 & 70\ HD 29488 & A5 IV-V & 0.080 & 0.196 & 1.017 & 2.852 & 8127 $\pm$ 132 & 4.025 $\pm$ 0.145 & 160\ HD 30034 & A9 IV- & 0.150 & 0.195 & 0.813 & 2.791 & 7610 $\pm$ 123 & 4.218 $\pm$ 0.145 & 75\ HD 30210 & A5 m & 0.091 & 0.252 & 0.955 & 2.845 & 8126 $\pm$ 132 & 4.181 $\pm$ 0.145 & 30\ HD 30780 & A9 V+ & 0.122 & 0.207 & 0.900 & 2.813 & 7823 $\pm$ 127 & 4.141 $\pm$ 0.145 & 155\ HD 31845 & F5 V & 0.294 & 0.165 & 0.439 & 2.658 & 6396 $\pm$ 104 & 4.229 $\pm$ 0.145 & 25\ HD 32301 & A7 IV & 0.079 & 0.202 & 1.034 & 2.847 & 8116 $\pm$ 131 & 3.975 $\pm$ 0.145 & 115\ HD 33254 & A7 m & 0.132 & 0.251 & 0.835 & 2.824 & 7797 $\pm$ 126 & 4.306 $\pm$ 0.145 & 30\ HD 33204 & A7 m & 0.149 & 0.245 & 0.803 & 2.796 & 7634 $\pm$ 124 & 4.270 $\pm$ 0.145 & 30\ HD 25202 & F4 V & 0.206 & 0.172 & 0.695 & 2.724 & 7082 $\pm$ 115 & 4.064 $\pm$ 0.145 & 160\ HD 28052 & F0 IV-V N & 0.153 & 0.183 & 0.934 & 2.767 & 7431 $\pm$ 120 & 3.733 $\pm$ 0.145 & 170\ HD 18404 & F5 IV & 0.269 & 0.169 & 0.481 & 2.680 & 6605 $\pm$ 107 & 4.299 $\pm$ 0.145 & 0\ HD 25570 & F4 V & 0.249 & 0.147 & 0.557 & 2.688 & 6752 $\pm$ 109 & 4.183 $\pm$ 0.145 & 34\ HD 40932 & A2 M & 0.079 & 0.205 & 0.978 & 2.853 & 8224 $\pm$ 133 & 4.191 $\pm$ 0.145 & 18 \[table:hyades\] Alternative Treatment of Open Clusters -------------------------------------- As described in § \[subsubsec:numericalmethods\] The 1-D marginalized PDF in age for an individual star is computed on a model grid that is uniformly spaced in log(age). As such, the prior probability of each bin is also encoded in log(age) (see § \[subsubsec:priors\]). Thus, the resultant PDF is naturally in the units of $d$ p($\log{\tau}$)/$d\log{\tau}$, where $p$ is probability and $\tau$ is age. In order to transform p($\log{\tau}$) to p($\tau$) one uses the conversion p($\tau$) = p($\log{\tau}$)/$\tau$. Statistical measures *other than the median*, such as the mean, mode, confidence intervals, etc. will be different depending on whether the PDF being quantified is p($\log{\tau}$) or p($\tau$). For example, $10^{\left \langle \log{\tau} \right \rangle} \neq \left \langle \tau \right \rangle$. Strictly speaking, however, both values are meaningful and authors frequently choose to report one or the other in the literature. In the case at hand, p($\log{\tau}$) for an individual star is more symmetric than the linear counterpart, p($\tau$). 
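To make the change of variables and the invariance of the median concrete, the following is a minimal numerical sketch (assuming only NumPy; the grid and the log-normal shape are arbitrary illustrative choices rather than actual stellar PDFs, and natural logarithms are used so that the constant factor arising for base-10 logarithms simply cancels upon normalisation):

```python
import numpy as np

# Grid uniform in log(age), mirroring a model grid spaced uniformly in log(tau).
# The log-normal shape below is purely illustrative, not a real stellar PDF.
log_tau = np.linspace(np.log(1.0), np.log(1000.0), 4000)     # ln(age/Myr)
dlog = log_tau[1] - log_tau[0]
p_log = np.exp(-0.5 * ((log_tau - np.log(100.0)) / 0.4) ** 2)
p_log /= (p_log * dlog).sum()                                 # normalise p(ln tau)

# Change of variables: p(tau) dtau = p(ln tau) dln tau  =>  p(tau) = p(ln tau) / tau
tau = np.exp(log_tau)
p_lin = p_log / tau                                           # automatically normalised

def median(x, density):
    """Median of a density sampled on a (possibly non-uniform) grid x."""
    w = density * np.gradient(x)
    cdf = np.cumsum(w) / w.sum()
    return np.interp(0.5, cdf, x)

print(np.exp(median(log_tau, p_log)))     # median age from p(ln tau)
print(median(tau, p_lin))                 # median age from p(tau): the same value

mean_log = (log_tau * p_log * dlog).sum()
mean_lin = (tau * p_lin * np.gradient(tau)).sum()
print(np.exp(mean_log), mean_lin)         # exp(<ln tau>) != <tau> in general
```

The two median estimates agree, whereas the exponentiated mean of the logarithmic PDF and the mean of the linear PDF do not.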
Given this greater symmetry, one could reasonably argue that $10^{\left \langle \log{\tau} \right \rangle}$ is a more meaningful metric than $\left \langle \tau \right \rangle$. In either case, because the PDFs in age or log(age) are both skewed, the median (which, again, is the same regardless of whether p($\tau$) or p($\log{\tau}$) is under consideration) is actually the most meaningful quantification of the PDF, since it is less susceptible to extreme values than either the mean or mode. With respect to the open clusters, regardless of whether our analyses are performed in logarithmic or linear space, our results favor ages that are younger and older than accepted values for $\alpha$ Per and the Hyades, respectively.

![image](openclusters-lin-v1.pdf){width="90.00000%"}

[cccccccccccc]{} IC 2602 & 46$^{+6}_{-5}$ & [@ekstrom2012] & 22 & 3-39 & 41 & 41-42\ & & [@bressan2012] & 24 & 3-40 & 40 & 37-43\ $\alpha$ Persei & 90$^{+10}_{-10}$ & [@ekstrom2012] & 41 & 3-68 & 63 & 61-68\ & & [@bressan2012] & 45 & 3-71 & 62 & 58-66\ Pleiades & 125$^{+8}_{-8}$ & [@ekstrom2012] & 61 & 3-113 & 125 & 122-131\ & & [@bressan2012] & 77 & 3-117 & 112 & 107-120\ Hyades & 625$^{+50}_{-50}$ & [@ekstrom2012] & 118 & 3-403 & 677 & 671-690\ & & [@bressan2012] & 288 & 17-593 & 738 & 719-765

[^1]: <http://idlastro.gsfc.nasa.gov/ftp/pro/astro/uvbybeta.pro>

[^2]: <http://idlastro.gsfc.nasa.gov/ftp/pro/astro/deredd.pro>

[^3]: <http://wwwuser.oats.inaf.it/castelli>

[^4]: <http://kurucz.harvard.edu/grids/gridP00ODFNEW/uvbyp00k0odfnew.dat>

[^5]: <http://www.univie.ac.at/webda/>
1
--- abstract: 'Genetic differences between individuals associated to quantitative phenotypic traits, including disease states, are usually found in non-coding genomic regions. These genetic variants are often also associated to differences in expression levels of nearby genes (they are “expression quantitative trait loci” or eQTLs, for short) and presumably play a gene regulatory role, affecting the status of molecular networks of interacting genes, proteins and metabolites. Computational systems biology approaches to reconstruct causal gene networks from large-scale omics data have therefore become essential to understand the structure of networks controlled by eQTLs together with other regulatory genes, as well as to generate detailed hypotheses about the molecular mechanisms that lead from genotype to phenotype. Here we review the main analytical methods and softwares to identify eQTLs and their associated genes, to reconstruct co-expression networks and modules, to reconstruct causal Bayesian gene and module networks, and to validate predicted networks *in silico*.' author: - 'Lingfei Wang$^{1}$ and Tom Michoel$^{1,\ast}$' title: Detection of regulator genes and eQTLs in gene networks --- $^1$Division of Genetics and Genomics, The Roslin Institute, The University of Edinburgh, Midlothian EH25 9RG, Scotland, United Kingdom $^\ast$Corresponding author, E-mail: tom.michoel@roslin.ed.ac.uk Introduction ============ ![A flow chart for a typical systems genetics study and the corresponding softwares. Steps in light yellow are covered in this chapter.\[fig-flow\]](fig1.pdf) Genetic differences between individuals are responsible for variation in the observable phenotypes. This principle underpins genome-wide association studies (GWAS), which map the genetic architecture of complex traits by measuring genetic variation at single-nucleotide polymorphisms (SNPs) on a genome-wide scale across many individuals [@mackay2009genetics]. GWAS have resulted in major improvements in plant and animal breeding [@goddard2009mapping] and in numerous insights into the genetic basis of complex diseases in human [@manolio2013bringing]. However, quantitative trait loci (QTLs) with large effects are uncommon and a molecular explanation for their trait association rarely exists [@mackay2009genetics]. The vast majority of QTLs indeed lie in non-coding genomic regions and presumably play a gene regulatory role [@hindorff2009potential; @schaub2012linking]. Consequently, numerous studies have identified *cis*- and *trans*-acting DNA variants that influence gene expression levels (i.e., “expression QTLs”; eQTLs) in model organisms, plants, farm animals and human (reviewed in [@rockman2006genetics; @georges2007mapping; @cookson2009mapping; @cheung2009genetics; @cubillos2012lessons]). Gene expression programmes are of course highly tissue- and cell-type specific, and the properties and complex relations of eQTL associations across multiple tissues are only beginning to be mapped [@dimas2009common; @foroughi2015; @greenawalt2011survey; @ardlie2015genotype]. At the molecular level, a mounting body of evidence shows that *cis*-eQTLs primarily cause variation in transcription factor (TF) binding to gene regulatory DNA elements, which then causes changes in histone modifications, DNA methylation and mRNA expression of nearby genes; *trans*-eQTLs in turn can usually be attributed to coding variants in regulatory genes or *cis*-eQTLs of such genes [@albert2015role]. 
Taken together, these results motivate and justify a systems biological view of quantitative genetics (“systems genetics”), where it is hypothesized that genetic variation, together with environmental perturbations, affects the status of molecular networks of interacting genes, proteins and metabolites; these networks act within and across different tissues and collectively control physiological phenotypes [@williams2006expression; @kadarmideen2006genetical; @rockman2008reverse; @schadt2009; @schadt2012new; @civelek2014systems; @bjorkegren2015genome]. Studying the impact of genetic variation on gene regulation networks is of crucial importance in understanding the fundamental biological mechanisms by which genetic variation causes variation in phenotypes [@chen2008], and is expected to lead to the discovery of novel disease biomarkers and drug targets in human and veterinary medicine [@schadt2009b]. Since direct experimental mapping of genetic, protein–protein or protein–DNA interactions is an immensely challenging task, further exacerbated by the cell-type specific and dynamic nature of these interactions [@walhout2006unraveling], comprehensive, experimentally verified molecular networks will not become available for multi-cellular organisms in the foreseeable future. Statistical and computational methods are therefore essential to reconstruct trait-associated causal networks by integrating diverse omics data [@rockman2008reverse; @schadt2009; @ritchie2015methods]. A typical systems genetics study collects genotype and gene, protein and/or metabolite expression data from a large number of individuals segregating for one or more traits of interest. After raw data processing and normalization, eQTLs are identified for each of the expression data types, and a co-expression matrix is constructed. Causal Bayesian gene networks, co-expression modules (i.e. clusters) and/or causal Bayesian module networks are then reconstructed. *In silico* validation of predicted networks and modules using independent data confirms their overall validity, ideally followed by experimental validation of the most promising findings in a relevant cell line or model organism (Figure \[fig-flow\]). Here we review the main analytic principles behind each of the steps from eQTL identification to *in silico* network validation, and present a selection of most commonly used methods and softwares for each step. Throughout this chapter, we tacitly assume that all data has been quality controlled, pre-processed and normalized to suit the assumptions of the analytic methods presented here. For expression data, this usually means working with log-transformed data where each gene expression profile is centred around zero with standard deviation one. We also assume that the data has been corrected for any confounding factors, either by regressing out known covariates and/or by estimating hidden factors [@stegle2012using]. Genetics of gene expression {#sec:genet-gene-expr} =========================== A first step towards identifying molecular networks affected by DNA variants is to identify variants that underpin variations in eQTLs of transcripts [@cookson2009mapping], proteins [@foss2007] or metabolites [@nicholson2011genome] across individuals. When studying a single trait, as in GWAS, it is possible to consider multiple statistical models to explicitly account for additive and/or dominant genetic effects [@laird2011]. 
However, when the possible effects of a million or more SNPs on tens of thousands of molecular abundance traits need to be tested, as is common in modern genetics of gene expression studies, the computational cost of testing SNP-trait associations one-by-one becomes prohibitive. To address this problem, new methods have been developed to calculate the test statistics for the parametric linear regression and analysis of variance (ANOVA) models [@shabalin2012matrix] and the non-parametric ANOVA model (or Kruskal-Wallis test) [@qi2014] using fast matrix multiplication algorithms, implemented in the softwares **matrix-eQTL** (<http://www.bios.unc.edu/research/genomic_software/Matrix_eQTL/>) [@shabalin2012matrix] and **kruX** (<https://github.com/tmichoel/krux>) [@qi2014]. In both softwares, genotype values of $s$ genetic markers and expression levels of $k$ transcripts, proteins or metabolites in $n$ individuals are organized in an $s\times n$ genotype matrix ${\mathbf{G}}$ and $k\times n$ expression data matrix ${\mathbf{X}}$. Genetic markers take values $0,1,\dots,\ell$, where $\ell$ is the maximum number of alleles ($\ell=2$ for biallelic markers), while molecular traits take continuous values. In the linear model, a linear relation is tested between the expression level of gene $i$ and the genotype value (i.e. the number of reference alleles) of SNP $j$. The corresponding test statistic is the Pearson correlation between the $i$th row of ${\mathbf{X}}$ and the $j$th row of ${\mathbf{G}}$, for all values of $i$ and $j$. Standardising the data matrices to zero mean and unit variance, such that for all $i$ and $j$, $$\begin{aligned} \sum_{l=1}^nX_{il} = \sum_{l=1}^nG_{jl} = 0 \quad\text{and}\quad \sum_{l=1}^n X_{il}^2 = \sum_{l=1}^n G_{jl}^2 = n , \end{aligned}$$ it follows that the correlation values can be computed as $$\begin{aligned} R_{ij} = \sum_{l=1}^nX_{il}G_{jl} = ({\mathbf{X}}{\mathbf{G}}^T)_{ij},\end{aligned}$$ where ${\mathbf{G}}^T$ denotes the transpose of ${\mathbf{G}}$. Hence, a single matrix multiplication suffices to compute the test statistics for the linear model for all pairs of traits and SNPs. The ANOVA models test if expression levels in different genotype groups originate from the same distribution. Therefore, ANOVA models can account for both additive and dominant effects of a genetic variant on expression levels. In the parametric ANOVA model, suppose the test samples are divided into $\ell+1$ groups by the SNP $j$. The mean expression level for gene $i$ in each group $m$ can be written as $$\begin{aligned} \overline{X_i^{(m,j)}} = \frac1{n^{(m,j)}} \sum_{\{l\colon G_{jl}=m\}} X_{il},\end{aligned}$$ where $n^{(m,j)}$ is the number of samples in genotype group $m$ for SNP $j$. Again assuming that the expression data is standardised, the F-test statistic for testing gene $i$ against SNP $j$ can be written as $$\begin{aligned} F_i^{(j)} = \frac{n-\ell-1}{\ell} \frac{SS_i^{(j)}}{n-SS_i^{(j)}},\end{aligned}$$ where $SS_i^{(j)}$ is the sum of squares between groups, $$\begin{aligned} SS_i^{(j)} = \sum_{m=0}^\ell n^{(m,j)}\overline{X_i^{(m,j)}}^2.\end{aligned}$$ Let us define the $n\times s$ indicator matrix ${\mathbf{I}}^{(m)}$ for genotype group $m$, i.e. ${\mathbf{I}}_{lj}^{(m)} = 1$ if $G_{jl}=m$ and $0$ otherwise. 
Then $$\begin{aligned} \sum_{\{l\colon G_{jl}=m\}} X_{il} = \left({\mathbf{X}}{\mathbf{I}}^{(m)}\right)_{ij}.\end{aligned}$$ Hence, for each pair of expression level $X_i$ and SNP $G_j$, the sum of squares matrix $SS_i^{(j)}$ can be computed via $\ell-1$ matrix multiplications [^1]. In the non-parametric ANOVA model, the expression data matrix is converted to a matrix ${\mathbf{T}}$ of data ranks, independently over each row. In the absence of ties, the Kruskal-Wallis test statistic is given by $$\begin{aligned} S_{ij} = \frac{12}{n(n+1)} \sum_{m=0}^\ell n^{(m,j)}\,\overline{T_i^{(m,j)}}^2 - 3(n+1),\end{aligned}$$ where $\overline{T_i^{(m,j)}}$ is the average expression rank of gene $i$ in genotype group $m$ of SNP $j$, defined as $$\begin{aligned} \overline{T_i^{(m,j)}}&=\frac1{n^{(m,j)}} \sum_{\{l\colon G_{jl}=m\}}T_{il},\end{aligned}$$ which can be similarly obtained from the $\ell-1$ matrix multiplications. There is as yet no consensus about which statistical model is most appropriate for eQTL detection. Non-parametric methods were introduced in the earliest eQTL studies [@brem2002; @schadt2008] and have remained popular, as they are robust against variations in the underlying genetic model and trait distribution. More recently, the linear model implemented in matrix-eQTL has been used in a number of large-scale studies [@lappalainen2013transcriptome; @ardlie2015genotype]. A comparison on a dataset of 102 human whole blood samples showed that the parametric ANOVA method was highly sensitive to the presence of outlying gene expression values and SNPs with singleton genotype group. Linear models reported the highest number of eQTL associations after empirical False Discovery Rate (FDR) correction, with an expected bias towards additive linear associations. The Kruskal-Wallis test was most robust against data outliers and heterogeneous genotype group sizes and detected a higher proportion of non-linear associations, but was more conservative for calling additive linear associations than linear models [@qi2014]. In summary, when large numbers of traits and markers have to be tested for association, efficient matrix multiplication methods can be employed to calculate all test statistics at once, leading to a dramatic reduction in computation time compared to calculating these statistics one-by-one for every pair using traditional methods. Matrix multiplication is a basic mathematical operation which has been purposely studied and optimized for tens of years [@golub1996]. Highly efficient packages, such as **BLAS** (<http://www.netlib.org/blas/>) and **LAPACK** (<http://www.netlib.org/lapack/>), are available for use on generic CPUs, and are indeed employed in most mainstream scientific computing softwares and programming languages, such as Matlab and R. In recent years, Graphics Processor Unit (GPU)-accelerated computing, such as CUDA, has revolutionised scientific calculations that involve repetitive operations in parallel on bulky data, offering even more speedup than the existing CPU-based packages. The first applications of GPU computing in eQTL analysis have already appeared (e.g. [@hemani2014detection]), and more can be expected in the future. Lastly, for pairs exceeding a pre-defined threshold on the test statistic, a $p$-value can be computed from the corresponding test distribution, and these $p$-values can then be further corrected for multiple testing by common procedures [@shabalin2012matrix; @qi2014]. 
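As a concrete illustration of this matrix formulation, the following is a minimal sketch (assuming NumPy and SciPy; it is not the matrix-eQTL or kruX implementation, and the toy data, threshold and variable names are illustrative). It computes the linear-model statistics for all SNP-gene pairs with a single matrix product and converts them to p-values; monomorphic SNPs are assumed to have been removed beforehand.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy data: k genes x n samples and s SNPs x n samples (genotypes coded 0/1/2).
k, s, n = 1000, 5000, 200
X = rng.normal(size=(k, n))                          # expression levels
G = rng.integers(0, 3, size=(s, n)).astype(float)    # genotype values

def standardize(M):
    """Centre each row and scale it to unit variance (row sum of squares = n)."""
    Z = M - M.mean(axis=1, keepdims=True)
    return Z / Z.std(axis=1, keepdims=True)

Xs, Gs = standardize(X), standardize(G)

# All k x s Pearson correlations from a single matrix product: R = Xs Gs^T / n
R = Xs @ Gs.T / n

# Convert correlations to t statistics and two-sided p-values (n - 2 d.o.f.)
T = R * np.sqrt((n - 2) / (1.0 - R ** 2))
P = 2.0 * stats.t.sf(np.abs(T), df=n - 2)

# Keep pairs passing a pre-defined threshold, before multiple-testing correction
hits = np.argwhere(P < 1e-8)
```

The same pattern extends to the ANOVA statistics by replacing the single product with one product per genotype indicator matrix, as described above.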
Co-expression networks and modules
==================================

Co-expression gene networks\[sec-coex\]
---------------------------------------

The Pearson correlation is the simplest and computationally most efficient similarity measure for gene expression profiles. For genes $i$ and $j$, their Pearson correlation can be written as $$\label{eq:1} C_{ij}=\sum_{l=1}^nX_{il}X_{jl}\,.$$ In matrix notation, this can be written as the matrix product [$${\mathbf{C}}={\mathbf{X}}{\mathbf{X}}^T.$$]{} Gene pairs with large positive or negative correlation values tend to be up- or down-regulated together, due to either a direct regulatory link between them, or being jointly co-regulated by a third, often hidden, factor. By filtering for correlation values exceeding a significance threshold determined by comparison with randomly permuted data, a discrete co-expression network is obtained. Assuming that a high degree of co-expression signifies that genes are involved in the same biological processes, graph theoretical methods can be employed, for instance, to predict gene function [@sharan2007network]. One drawback of the Pearson correlation is that by definition it is biased towards *linear* associations. To overcome this limitation, other measures are available. The Spearman correlation uses expression data ranks (cf. Section \[sec:genet-gene-expr\]) in Equation \[eq:1\], and will give high scores to *monotonic* relations. Mutual information is the most general measure and detects both linear and non-linear associations. For a pair of discrete random variables $A$ and $B$ (representing the expression levels of two genes) taking values $a_l$ and $b_m$, respectively, the mutual information is defined as $$MI(A,B)=H(A)+H(B)-H(A,B),$$ where $$\begin{aligned} H(A) &= -\sum_l P(a_l)\log P(a_l),\\ H(B) &= -\sum_m P(b_m)\log P(b_m),\\ H(A,B) &= -\sum_{lm} P(a_l,b_m) \log P(a_l,b_m),\end{aligned}$$ are the individual and joint Shannon entropies of $A$ and $B$, and $P(a_l)=P(A=a_l)$, and likewise for the other terms. Since gene expression data are continuous, mutual information estimation is non-trivial and usually involves some form of discretisation [@daub2004]. Mutual information has been successfully used as a co-expression measure in a variety of contexts [@butte2000; @basso2005; @faith2007].

Clustering and co-expression module detection {#sec:clust-co-expr}
---------------------------------------------

It is generally understood that cellular functions are carried out by “modules”, groups of molecules that operate together and whose function is separable from that of other modules [@hartwell1999]. Clustering gene expression data (i.e. dividing genes into discrete groups on the basis of similarities in their expression profiles) is a standard approach to detect such functionally coherent gene modules. The literature on gene expression clustering is vast and cannot possibly be reviewed comprehensively here. It includes “standard” methods such as hierarchical clustering [@eisen1998cluster], $k$-means [@tavazoie1999systematic], graph-based methods that operate directly on co-expression networks [@sharan2000click], and model-based clustering algorithms which assume that the data is generated by a mixture of probability distributions, one for each cluster [@medvedovic2002bayesian]. Here we briefly describe a few recently developed methods with readily available softwares.
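All of the module detection methods described next start from a co-expression matrix or network of the kind introduced above. As a minimal, generic sketch (assuming NumPy and SciPy; it does not follow any particular package, and the bin number, permutation count and quantile are arbitrary illustrative choices), the following computes Pearson and Spearman co-expression matrices, a simple discretisation-based mutual information estimate for one gene pair, and a permutation-derived significance threshold for building a discrete network:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)

def standardize(X):
    """Centre each gene profile and scale it to unit variance."""
    Z = X - X.mean(axis=1, keepdims=True)
    return Z / Z.std(axis=1, keepdims=True)

def coexpression(X):
    """Pearson co-expression matrix C = Z Z^T / n for standardized profiles Z."""
    Z = standardize(X)
    return Z @ Z.T / X.shape[1]

def spearman_coexpression(X):
    """Spearman correlation: Pearson correlation of row-wise ranks."""
    R = np.apply_along_axis(rankdata, 1, X)
    return coexpression(R)

def mutual_information(x, y, bins=10):
    """Simple histogram (discretisation) estimate of MI for one gene pair."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

def permutation_threshold(X, n_perm=100, quantile=0.999):
    """Co-expression cut-off derived from randomly permuted expression profiles."""
    null = []
    for _ in range(n_perm):
        Xp = np.array([rng.permutation(row) for row in X])   # permute each profile
        C = coexpression(Xp)
        null.append(np.abs(C[np.triu_indices_from(C, k=1)]))
    return np.quantile(np.concatenate(null), quantile)

# Toy data: 50 genes x 30 samples
X = rng.normal(size=(50, 30))
C = coexpression(X)
edges = np.abs(C) > permutation_threshold(X)   # discrete co-expression network
```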
#### Modularity maximization Modularity maximization is a network clustering method that is particularly popular in the physical and social sciences, based on the assumption that intra-module connectivity should be much denser than inter-module connectivity [@newman2004; @newman2006b]. In the context of co-expression networks, this method can be used to identify gene modules directly from the correlation matrix ${\mathbf{C}}$ [@Ayroles:2009]. Suppose the genes are grouped into $N$ modules $M_l,~l=1,\dots,N$. Each module $M_l$ is a non-empty set that can contain any combination of the genes $i=1,\dots,k$, but each gene is contained by exactly one module. Also define $M_0$ as the set containing all genes. The modularity score function is defined as [$$S(M)=\sum_{l=1}^N\left(\frac{W(M_l,M_l)}{W(M_0,M_0)}-\left(\frac{W(M_l,M_0)}{W(M_0,M_0)}\right)^2\right),$$]{} where $W(A,B) =\sum_{i\in A,j\in B,i\ne j}w(C_{ij})$ is a weight function, summing over all the edges that connect one vertex in $A$ with another vertex in $B$, and $w(x)$ is a monotonic function to map correlation values to edge strengths. Common functions are $w(x)=|x|$, $|x|^\beta$ (power law) [@Langfelder:2008], $e^{\beta |x|}$ (exponential) [@Ayroles:2009], or $1/(1+e^{\beta x})$ (sigmoid) [@lee2009learning]. A modularity maximization software particularly suited for large networks is **Fast Modularity** (<http://www.cs.unm.edu/~aaron/research/fastmodularity.htm>) [@Clauset:2004]. #### Markov Cluster algorithm The Markov Cluster (MCL) algorithm is a graph-based clustering algorithm, which emulates random walks among gene vertices to detect clusters in a graph obtained directly from the co-expression matrix ${\mathbf{C}}$. It is implemented in the **MCL** software (<http://micans.org/mcl/>) [@Van-Dongen:2001; @Enright:2002]. The MCL algorithm starts with the correlation matrix ${\mathbf{C}}$ as the probability flow matrix of a random walk, and then iteratively suppresses weak structures of the network and performs a multi-step random walk. In the end, only backbones of the network structure remain, essentially capturing the modules of co-expression network. To be precise, the MCL algorithm performs the following two operations on ${\mathbf{C}}$ alternatingly: - **Inflation:** The algorithm first contrasts stronger direct connections against weaker ones, using an element-wise power law transformation, and normalizes each column separately to sum to one, such that the element $C_{ij}$ corresponds to the dissipation rate from vertex $X_i$ to $X_j$ in a single step. The inflation operation hence updates ${\mathbf{C}}$ as ${\mathbf{C}}\rightarrow\mathbf{\Gamma}_\alpha{\mathbf{C}}$, where the contrast rate $\alpha>1$ is a predefined parameter of the algorithm. After operation $\mathbf{\Gamma}_\alpha$, each element of ${\mathbf{C}}$ becomes [$$C_{ij}\rightarrow\mathbf{\Gamma}_\alpha C_{ij}=|C_{ij}|^\alpha/\sum_{p=1}^k|C_{pj}|^\alpha.$$]{} - **Expansion:** The probability flow matrix ${\mathbf{C}}$ controls the random walks performed in the expansion phase. After some integer $\beta\ge2$ steps of random walk, gene pairs with strong direct connections and/or strong indirect connections through other genes tend to see more probability flow exchanges, suggesting higher probabilities of belonging to the same gene modules. 
The expansion operation for the $\beta$-step random walk corresponds to the matrix power operation [$${\mathbf{C}}\rightarrow{\mathbf{C}}^\beta.$$]{}

The MCL algorithm performs the above two operations iteratively until convergence. Non-zero entries in the convergent matrix ${\mathbf{C}}$ connect gene pairs belonging to the same cluster, whereas all inter-cluster edges attain the value zero, so that the cluster structure can be obtained directly from this matrix [@Van-Dongen:2001; @Enright:2002].

#### Weighted Gene Co-expression Network Analysis

With higher than average correlation or edge densities within clusters, genes from the same cluster typically share more neighbouring (i.e. correlated) genes. The weighted number of shared neighbouring genes can hence serve as another measure of gene function similarity. This information is captured in the so-called topological overlap matrix ${\mathbf{\Omega}}$, first defined in [@ravasz2002] for binary networks as [$$\omega_{ij}=\frac{A_{ij}+\sum_u A_{iu}A_{uj}}{\mathrm{min}(k_i,k_j)+1-A_{ij}},$$]{} where $A$ is the (binary) adjacency matrix of the network and $k_i=\sum_uA_{iu}$ is the connectivity of vertex $X_i$. The $\sum_uA_{iu}A_{uj}$ term represents vertex similarity through shared neighbouring genes, and the remaining terms normalise the output so that $0\le\omega_{ij}\le1$. This concept was later extended to networks with weighted edges by applying a “soft threshold” pre-processing step to the correlation matrix, for example as $$\begin{aligned} A_{ij}&=\left|\frac{1+C_{ij}}{2}\right|^\alpha,\\ \intertext{or} A_{ij}&=\left|C_{ij}\right|^\alpha,\end{aligned}$$ such that $0\le A_{ij}\le1$ [@zhang2005b]. Note that in the first case only positive correlations have high edge weight, whereas in the second case positive and negative correlations are treated equally. The parameter $\alpha>1$ is determined such that the weighted network with adjacency matrix $A$ has approximately a scale-free degree distribution [@zhang2005b]. In principle, any clustering algorithm (including the aforementioned ones) can be applied to the topological overlap matrix ${\mathbf{\Omega}}$. In the popular **WGCNA** software (<http://labs.genetics.ucla.edu/horvath/htdocs/CoexpressionNetwork/Rpackages/WGCNA/>) [@Langfelder:2008], which is a multi-purpose toolbox for network analysis, hierarchical clustering with a dynamic tree-cut algorithm [@langfelder2008defining] is employed.

#### Model-based clustering

Model-based clustering approaches assume that the observed data is generated by a mixture of probability distributions, one for each cluster, and take explicitly into account the noise of gene expression data. To infer model parameters and cluster assignments, techniques such as Expectation Maximization (EM) or Gibbs sampling are used [@liu2002]. A recently developed method assumes that the expression levels of genes in a cluster are random samples drawn from a mixture of normal distributions, where each mixture component corresponds to a clustering of samples for that module, i.e. it performs a two-way co-clustering operation [@joshi2008]. The method is available as part of the **Lemon-Tree** package (<https://github.com/eb00/lemon-tree>) and has been successfully used in a variety of applications [@bonnet2015]. The co-clustering is carried out by a Gibbs sampler which iteratively updates the assignment of each gene and, within each gene cluster, the assignment of each experimental condition.
The co-clustering operation results in the full posterior distribution, which can be written as $$p(\mathcal{C}\mid{\mathbf{X}}) \propto \prod_{l=1}^N \prod_{u=1}^{L_l} \iint p(\mu,\tau) \prod_{i\in M_l}\prod_{m\in \mathcal{E}_{l,u}} p (X_{im}\mid \mu,\tau)\; d\mu d\tau,$$ where $\mathcal{C}=\{M_l, \mathcal{E}_{l,u}\colon l=1,\dots,N; u=1,\dots, L_l\}$ is a co-clustering consisting of $N$ gene modules $M_l$, each of which has a set of $L_l$ sample clusters $\mathcal{E}_{l,u}$; $p(X_{im}\mid \mu,\tau)$ is a normal distribution function with mean $\mu$ and precision $\tau$; and $p(\mu,\tau)$ is a non-informative normal-gamma prior. Detailed investigations of the convergence properties of the Gibbs sampler showed that the best results are obtained by deriving consensus clusters from multiple independent runs of the sampler. In the **Lemon-Tree** package, consensus clustering is performed by a novel spectral graph clustering algorithm [@michoel2012] applied to the weighted graph of pairwise frequencies with which two genes are assigned to the same gene module [@bonnet2015].

Causal gene networks
====================

Using genotype data to prioritize edge directions in co-expression networks\[sec-direction\]
--------------------------------------------------------------------------------------------

Pairwise correlations between gene expression traits define undirected co-expression networks. Several studies have shown that pairs of gene expression traits can be causally ordered using genotype data [@zhu2004; @chen2007harnessing; @aten2008using; @schadt2005integrative; @neto2008inferring; @neto2013modeling; @millstein2009disentangling]. Although varying in their statistical details, these methods conclude that gene $A$ is causal for gene $B$ if expression of $B$ associates significantly with $A$’s eQTLs and this association is abolished by conditioning on expression of $A$ and on any other known confounding factors. In essence, this is the principle of “Mendelian randomization”, first introduced in epidemiology as an experimental design to detect causal effects of environmental exposures on human health [@smith2003mendelian], applied to gene expression traits. To illustrate how these methods work, let $A$ and $B$ be two random variables representing two gene expression traits, and let $E$ be a random variable representing a SNP which is an eQTL for genes $A$ and $B$. Since genotype cannot be altered by gene expression (i.e. $E$ cannot have any incoming edges), there are three possible regulatory models to explain the joint association of $E$ to $A$ and $B$:

1. $E\rightarrow A\rightarrow B$: the association of $E$ to $B$ is indirect and due to a causal interaction from $A$ to $B$.

2. $E\rightarrow B\rightarrow A$: idem with the roles of $A$ and $B$ reversed.

3. $A\leftarrow E\rightarrow B$: $A$ and $B$ are independently associated to $E$.

To determine if gene $A$ mediates the effect of SNP $E$ on gene $B$ (model 1), one can test whether conditioning on $A$ abolishes the correlation between $E$ and $B$, using the partial correlation coefficient $$\begin{aligned} {\mathrm{cor}(E,B\mid A)} = \frac{{\mathrm{cor}(E,B)}-{\mathrm{cor}(E,A)}{\mathrm{cor}(B,A)}}{\sqrt{(1-{\mathrm{cor}(E,A)}^2)(1-{\mathrm{cor}(B,A)}^2)}}. \end{aligned}$$ If model 1 is correct, then ${\mathrm{cor}(E,B\mid A)}$ is expected to be zero, and this can be tested for example using Fisher’s $Z$ transform to assess the significance of a sample correlation coefficient.
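A minimal sketch of this test (assuming NumPy and SciPy; the function names and the toy data are illustrative, and this is not the implementation used in the NEO software) is:

```python
import numpy as np
from scipy import stats

def fisher_z_pvalue(r, n, k=0):
    """Two-sided p-value for a (partial) correlation r with k conditioning variables."""
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)
    return 2 * stats.norm.sf(abs(z))

def partial_corr(x, y, z):
    """cor(x, y | z) for three 1-D arrays, using the pairwise Pearson correlations."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

def test_model1(E, A, B):
    """Model 1 (E -> A -> B): cor(E, B) should vanish after conditioning on A."""
    n = len(E)
    p_marginal = fisher_z_pvalue(np.corrcoef(E, B)[0, 1], n, k=0)
    p_conditional = fisher_z_pvalue(partial_corr(E, B, A), n, k=1)
    # Model 1 is supported if E and B are associated marginally but not given A
    return p_marginal, p_conditional

# Toy example generated according to E -> A -> B
rng = np.random.default_rng(2)
E = rng.integers(0, 3, size=300).astype(float)
A = 0.8 * E + rng.normal(size=300)
B = 0.7 * A + rng.normal(size=300)
print(test_model1(E, A, B))   # small p_marginal, large p_conditional
```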
The same approach can be used to test model 2, and if neither is significant, it is concluded that no inference on the causal direction between $A$ and $B$ can be made (using SNP $E$), i.e. that model 3 is correct. For more details, see [@aten2008using], who have implemented this approach in the **NEO** software (<http://labs.genetics.ucla.edu/horvath/htdocs/aten/NEO/>). Other approaches are based on the same principle, but use statistical model selection to identify the most likely causal model, with the probability density functions (PDFs) of the three models given by:

- $p(E,A,B)=p(E)p(A\mid E)p(B\mid A)$,

- $p(E,A,B)=p(E)p(B\mid E)p(A\mid B)$,

- $p(E,A,B)=p(E)p(A\mid E) p(B\mid E,A)$,

where the dependence on $A$ in the last term of the last model indicates that there may be a residual correlation between $B$ and $A$ not explained by $E$. The minimal additive model assumes the distributions are [@schadt2005integrative] [$$\begin{aligned} E&\sim&\mathrm{Bernoulli}(q),\nonumber\\ A\mid E&\sim&\mathrm{N}(\mu_{A\mid E},\sigma_A^2),\nonumber\\ B\mid A&\sim&\mathrm{N}\left(\mu_B+\rho\frac{\sigma_B}{\sigma_A}(A-\mu_A),(1-\rho^2)\sigma_B^2\right),\nonumber\\ B\mid E,A&\sim&\mathrm{N}\left(\mu_{B\mid E}+\rho\frac{\sigma_B}{\sigma_A}(A-\mu_{A\mid E}),(1-\rho^2)\sigma_B^2\right),\nonumber\end{aligned}$$]{} so that $E$ follows a Bernoulli distribution, $A\mid E$ follows a normal distribution whose mean depends on $E$, and $B\mid A$ has a conditional normal distribution whose mean and variance depend in part on $A$. For $(B\mid E,A)$, the mean of $B$ also depends on $E$. The parameters of all distributions can be estimated by maximum likelihood, and the model with the highest likelihood is selected as the most likely causal model. The number of free parameters can be accounted for using penalties such as the Akaike information criterion (AIC) [@schadt2005integrative]. The approach has been extended in various ways. In [@chen2007harnessing], likelihood ratio tests, comparison to randomly permuted data, and false discovery rate estimation techniques are used to convert the three model scores into a single probability value $P(A\to B)$ for a causal interaction from gene $A$ to $B$. This method is available in the **Trigger** software (<https://www.bioconductor.org/packages/release/bioc/html/trigger.html>). In [@millstein2009disentangling] and [@neto2013modeling], the model selection task is recast as a single hypothesis test, using $F$-tests and Vuong’s model selection test respectively, resulting in a significance $p$-value for each gene-gene causal interaction. It should be noted that all of the above approaches suffer from limitations due to their inherent model assumptions. In particular, the presence of unequal levels of measurement noise among genes, or of hidden regulatory factors causing additional correlation among genes, can confuse causal inference. For example, an excessive error level in the expression data of gene $A$ may cause the true structure $E\rightarrow A\rightarrow B$ to be mistaken for $E\rightarrow B\rightarrow A$. These limitations are discussed in [@rockman2008reverse; @li2010critical].

Using Bayesian networks to identify causal regulatory mechanisms
----------------------------------------------------------------

Bayesian networks are probabilistic graphical models which encode conditional dependencies between random variables in a directed acyclic graph (DAG).
Although Bayesian networks cannot fully reflect certain mechanisms of gene regulation, such as self-regulation or feedback loops, they remain a popular method for modelling gene regulation networks, as they provide a clear methodology for learning statistical dependency structures from possibly noisy data [@Friedman:1999; @Friedman:2000; @koller2009]. We adopt our previous convention from Section \[sec:genet-gene-expr\], where we have the gene expression data ${\mathbf{X}}$ and genetic markers ${\mathbf{G}}$. The model contains a total of $k$ vertices (i.e. random variables), $X_i$ with $i=1,\dots,k$, corresponding to the expression level of gene $i$. Given a DAG ${\mathcal{G}}$, and denoting the parental vertex set of $X_i$ by ${\mathbf{Pa}}^{({\mathcal{G}})}(X_i)$, the acyclic property of ${\mathcal{G}}$ allows the joint probability distribution function to be defined as $$\label{eq:2} p(X_1,\dots,X_k\mid{\mathcal{G}}) =\prod_{i=1}^kp(X_i \mid {\mathbf{Pa}}^{({\mathcal{G}})}(X_i)).$$ In its simplest form, we model the conditional distributions as $$p\bigl(X_i\mid {\mathbf{Pa}}^{({\mathcal{G}})}(X_i)\bigr) = N\biggl(\alpha_i + \sum_{X_j\in{\mathbf{Pa}}^{({\mathcal{G}})}(X_i)} \beta_{ji}(X_j-\alpha_j),\sigma_i^2\biggr),$$ where $(\alpha_i,\sigma_i)$ and $\beta_{ji}$ are parameters for vertex $X_i$ and edge $X_j\rightarrow X_i$ respectively, as part of the DAG structure ${\mathcal{G}}$. Under this model, the Bayesian network is called a linear Gaussian network. The likelihood of data ${\mathbf{X}}$ given the graph ${\mathcal{G}}$ is $$\begin{aligned} p({\mathbf{X}}\mid{\mathcal{G}}) = \prod_{i=1}^k\prod_{l=1}^n p(X_{il}\mid \{X_{jl}, X_j\in{\mathbf{Pa}}^{({\mathcal{G}})}(X_i)\}).\end{aligned}$$ Using Bayes’ rule, the log-posterior of the DAG ${\mathcal{G}}$ given the gene expression data ${\mathbf{X}}$ becomes [$$\log p({\mathcal{G}}\mid{\mathbf{X}})=\log p({\mathbf{X}}\mid {\mathcal{G}})+\log p({\mathcal{G}})-\log p({\mathbf{X}}),$$]{} where $p({\mathcal{G}})$ is the prior probability of ${\mathcal{G}}$, and $p({\mathbf{X}})$ is a constant once the expression data are given, so subsequent calculations do not depend on it. Typically, a locally optimal DAG is found by starting from a random graph and randomly ascending the likelihood by adding, modifying, or removing one directed edge at a time [@Friedman:1999; @Friedman:2000; @koller2009]. Alternatively, the posterior distribution $p({\mathcal{G}}\mid{\mathbf{X}})$ can be estimated with Bayesian inference using Markov Chain Monte Carlo (MCMC) simulation, which allows significance levels to be estimated at an extra computational cost. The parameter values of $\alpha$, $\beta$, and $\sigma$, as part of ${\mathcal{G}}$, can be estimated by maximum likelihood. When a Bayesian network is modified by a single edge, only the vertices affected by the change require recalculation, whilst all others remain intact. This significantly reduces the amount of computation needed for each random step. A further speedup is achievable if we constrain the maximum number of parents each vertex can have, either by using the same fixed number for all nodes, or by pre-selecting a variable number of potential parents for each node using, for instance, a preliminary $L_1$-regularisation step [@schmidt2007learning]. Two DAGs are called Markov equivalent if they result in the same PDF [@koller2009]. Clearly, using gene expression data alone, Bayesian networks can only be resolved up to Markov equivalence.
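To make the scoring step concrete, the following is a minimal sketch (assuming NumPy; it is not the implementation of any particular package, and the toy chain network and function names are illustrative) of the linear Gaussian log-likelihood $\log p({\mathbf{X}}\mid{\mathcal{G}})$ evaluated for candidate parent sets, together with a check that two Markov-equivalent chains receive identical scores:

```python
import numpy as np

def node_loglik(X, i, parents):
    """Gaussian log-likelihood of gene i given its parent set (OLS regression)."""
    n = X.shape[1]
    y = X[i]
    D = np.column_stack([np.ones(n)] + [X[j] for j in parents]) if parents else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    resid = y - D @ beta
    sigma2 = resid @ resid / n                     # maximum-likelihood variance
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

def dag_loglik(X, parent_sets):
    """log p(X | G): sum of the conditional Gaussian log-likelihoods over all vertices."""
    return sum(node_loglik(X, i, parent_sets[i]) for i in range(X.shape[0]))

# Toy example: 3 genes generated from the chain 0 -> 1 -> 2
rng = np.random.default_rng(3)
n = 200
X = np.zeros((3, n))
X[0] = rng.normal(size=n)
X[1] = 0.8 * X[0] + rng.normal(size=n)
X[2] = 0.8 * X[1] + rng.normal(size=n)

chain = {0: [], 1: [0], 2: [1]}            # true structure
empty = {0: [], 1: [], 2: []}              # no edges
print(dag_loglik(X, chain), dag_loglik(X, empty))   # the chain scores higher

# Markov equivalence: the reversed chain 2 -> 1 -> 0 obtains exactly the same score,
# so expression data alone cannot distinguish the two orientations.
reversed_chain = {0: [1], 1: [2], 2: []}
print(np.isclose(dag_loglik(X, chain), dag_loglik(X, reversed_chain)))   # True

# A single-edge move changes only the terms of the affected vertices, e.g. adding
# 0 -> 2 only requires recomputing node_loglik(X, 2, [1, 0]).
```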
To break this equivalence and uncover a more specific causal gene regulation network, genotype data is incorporated in the model inference process. The most straightforward approach is to use any of the methods in the previous section to calculate the probability $P(X_i\to X_j)$ of a causal interaction from $X_i$ to $X_j$ [@zhu2004; @zhu2008; @zhu2012stitching; @zhang2013integrated], for example by defining the prior as $p({\mathcal{G}})=\prod_{X_i}\left(\prod_{X_j\in{\mathbf{Pa}}^{({\mathcal{G}})}(X_i)}P(X_j\to X_i) \prod_{X_j\not\in{\mathbf{Pa}}^{({\mathcal{G}})}(X_i)}(1-P(X_j\to X_i))\right)$. A more ambitious approach is to jointly learn the eQTL associations and causal trait (i.e. gene or phenotype) networks. In [@neto2010causal], EM is used to alternatingly map eQTLs given the current DAG structure, and update the DAG structure and model parameters given the current eQTL mapping. In [@scutari2014multiple], Bayesian networks are learned where SNPs and traits both enter as variables in the model, with the constraint that traits can depend on SNPs, but not vice versa. However, the additional complexity of both methods means that they are computationally expensive and have only been applied to problems with a handful of traits [@neto2010causal; @scutari2014multiple]. A few additional “tips and tricks” are worth mentioning: - First, when the number of vertices is much larger than the sample count, we may break the problem into independent sub-problems by learning a separate Bayesian network for each co-expression module (Section \[sec-coex\] and [@zhang2013integrated]). Dependencies between modules could then be learned as a Bayesian network among the module eigengenes [@langfelder2007eigengene], although this does not seem to have been explored. - Second, Bayesian network learning algorithms inevitably result in locally optimal models which may contain a high number of false positives. To address this problem, we can run the algorithm multiple times and report an averaged network, only consisting of edges which appear sufficiently frequent. - Finally, another technique that helps in distinguishing genuine dependencies from false positives is *bootstrapping*, where resampling with replacement is executed on the existing sample pool. A fixed number of samples are randomly selected and then processed to predict a Bayesian network. This process is repeated many times, essentially regarding the distribution of sample pool as the true PDF, and allowing to estimate the robustness of each predicted edge, so that only those with high significance are retained [@friedman1999data]. In theory, even the whole pipeline of Figure \[fig-flow\] up to the *in silico* validation could be simulated in this way. Although bootstrapping is computationally expensive and mostly suited for small datasets, it could be used in conjunction with the separation into modules on larger datasets. Using module networks to identify causal regulatory mechanisms -------------------------------------------------------------- Module network inference is a statistically well-grounded method which uses probabilistic graphical models to reconstruct modules of co-regulated genes and their upstream regulatory programs, and which has been proven useful in many biological case studies [@add1; @segal2003; @friedman2004; @bonnet2015]. 
The module network model was originally introduced as a method to infer regulatory networks from large-scale gene expression compendia, as implemented in the **Genomica** software (<http://genomica.weizmann.ac.il>)[@segal2003]. Subsequently the method has been extended to integrate eQTL and gene expression data [@lee2006; @lee2009learning; @zhang2010bayesian]. The module network model starts from the same formula as Equation . It is then assumed that genes belonging to the same module share the same parents and conditional distributions; these conditional distributions are parameterized as decision trees, with the parental genes on the internal (decision) nodes and normal distributions on the leaf nodes [@segal2003]. Recent algorithmic innovations decouple the module assignment and tree structure learning from the parental gene assignment and use Gibbs sampling and ensemble methods for improved module network inference [@joshi2008; @joshi2009]. These algorithms are implemented in the **Lemon-Tree** software (<https://github.com/eb00/lemon-tree>), a command line software suite for module network inference [@bonnet2015]. Illustrative example {#sec:illustrative-example} -------------------- We have recently identified genome-wide significant eQTLs for 6,500 genes in seven tissues from the Stockholm Atherosclerosis Gene Expression (STAGE) study [@foroughi2015], and performed co-expression clustering and causal networks reconstruction [@talukdar2015]. To illustrate the above concepts, we show some results for a co-expression cluster in visceral fat (88 samples, 324 genes) which was highly enriched for tissue development genes ($P=5\times 10^{-10}$) and contained 10 genome-wide significant eQTL genes and 25 transcription factors, including eight members of the homeobox family (Figure \[fig-figs\]a). ![ **(a)** Heatmap of standardized expression profiles across 88 visceral fat samples for 10 eQTL genes and 25 TFs belonging to a co-expression cluster inferred from the STAGE data. **(b)** Co-expression of HAP1 and FOXG1 across 88 visceral fat samples. **(c)** Association between HAP1’s eQTL (rs1558285) and expression of HAP1 (red), FOXG1 (blue) and FOXG1 adjusted for HAP1 and FOXG1’s eQTL (green). **(d)** Association between FOXG1’s eQTL (rs7160881) and expression of FOXG1 (blue), HAP1 (red), and HAP1 adjusted for FOXG1 and HAP1’s eQTL (green). **(e)** Causal interactions inferred between the same genes as in (a) using Bayesian network inference. **(f)** Example of a regulatory module inferred by **Lemon-Tree** from the STAGE data. See Section \[sec:illustrative-example\] for further details.\[fig-figs\]](fig2.pdf) A representative example of an inferred causal interaction is given by the co-expression interaction between HAP1 (huntingtin-associated protein 1, chr17 q21.2-21.3) and FOXG1 (forkhead box G1, chr14 q11-q13). The expression of both genes is highly correlated ($\rho=0.85$, $P=4.4\times 10^{-24}$, Figure \[fig-figs\]b). HAP1 expression shows a significant, non-linear association with its eQTL rs1558285 ($P=1.2\times 10^{-4}$); this SNP also associates significantly with FOXG1 expression in the cross-association test ($P=0.0024$), but not anymore after conditioning FOXG1 on HAP1 and its own eQTL rs7160881 ($P=0.67$) (Figure \[fig-figs\]c). 
In contrast, although FOXG1 expression is significantly associated with its eQTL rs7160881 ($P=0.0028$), there is no association between this SNP and HAP1 expression ($P=0.037$), and conditioning on FOXG1 and HAP1’s eQTL has only a limited effect ($P=0.19$) (Figure \[fig-figs\]d). Using conditional independence tests (Section \[sec-direction\]), this results in a high-confidence prediction that HAP1 $\to$ FOXG1 is causal. A standard greedy Bayesian network search algorithm [@schmidt2007learning] was run on the aforementioned cluster of 324 genes. Figure \[fig-figs\]e shows the predicted consensus sub-network of causal interactions between the 10 eQTLs and 25 TFs. This illustrates how a sparse Bayesian network can accurately represent the fully connected co-expression network (all 35 genes have high-mutual co-expression, cf. Figure \[fig-figs\]a). Figure \[fig-figs\]f shows a typical regulatory module inferred by the **Lemon-Tree** software, also from the STAGE data. Here a heatmap is shown of the genotypes of an eQTL (top), the expression levels of a regulatory gene (middle), predicted to regulate a co-expression module of 11 genes (bottom). The red lines indicate sample clusters representing separate normal distributions inferred by the model-based co-clustering algorithm (Section \[sec:clust-co-expr\]). *In silico* validation of predicted gene regulation networks {#sec:silico-valid-pred} ============================================================ Gene regulation networks reconstructed from omics data represent hypotheses about the downstream molecular implications of genetic variations in a particular cell or tissue type. An essential first step towards using these networks in concrete applications (e.g. discovering novel candidate drug target genes and pathways) consists of validating them using independent data. The following is a non-exhaustive list of typical *in silico* validation experiments. #### Model likelihood comparison and cross-validation. When different algorithms are used to infer gene network models, their log-likelihoods can be compared to select the best one. (With the caveat that the same data that was used to learn the models is used to compare them, this comparison is meaningful only when the algorithms optimize *exactly* the same (penalized) log-likelihood functions.) In a $K$-fold cross-validation experiment, the available samples are divided into $K$ subsets of approximately equal size. For each subset, models are learned from a dataset consisting of the $K-1$ other subsets, and the model likelihood is calculated using only the unseen data subset. Thus, cross-validation is used to test the generalisability of the inferred network models to unseen data. For an example where model likelihood comparison and cross-validation were used to compare two module network inference strategies, see [@joshi2009]. #### Functional enrichment. Organism-specific gene ontology databases contain structured functional gene annotations [@ashb00]. These databases can be used to construct gene signature sets composed of genes annotated to the same biological process, molecular function or cellular component. Reconstructed gene networks can then be validated by testing for enriched connectivity of gene signature sets using a method proposed by [@zhu2008]. For a given gene set, this method considers all network nodes belonging to the set and their nearest neighbours, and from this set of nodes and edges, the largest connected sub-network is identified. 
Then the enrichment of the gene set in this sub-network is tested using the Fisher exact test and compared to the enrichment of randomly selected gene sets of the same size.

#### Comparison with physical interaction networks.

Networks of transcription factor-target interactions based on ChIP-sequencing data [@furey2012chip] from diverse cell and tissue types are available from the **ENCODE** [@encode2012], **Roadmap Epigenomics** [@kundaje2015integrative] and **modENCODE** [@gerstein2010integrative; @roy2010identification; @yue2014comparative] projects, while physical protein-protein interaction networks are available for many organisms through databases such as the **BioGRID** [@chatr2014biogrid]. Due to indirect effects, networks predicted from gene expression data rarely show a significant overlap with networks of direct physical interactions. A more appropriate validation is therefore to test for enrichment of short connection paths in the physical networks between pairs predicted to interact in the reconstructed networks [@bonnet2015].

#### Gene perturbation experiments.

Gene knock-out experiments provide the ultimate gold standard of a causal network intervention, and genes differentially expressed between knock-out and control experiments can be considered as true positive direct or indirect targets of the knocked-out gene. Predicted gene networks can be validated by compiling relevant (i.e. performed in a relevant cell or tissue type) gene knock-out experiments from the **Gene Expression Omnibus** (<http://www.ncbi.nlm.nih.gov/geo/>) or **ArrayExpress** (<https://www.ebi.ac.uk/arrayexpress/>) and measuring the overlap between gene sets responding to a gene knock-out and network genes predicted to be downstream of the knocked-out gene. Overlap significance can be estimated by using randomized networks with the same degree distribution as the predicted network.

Future perspective: Integration of multi-omics data
===================================================

Although combining genotype and transcriptome data to reconstruct causal gene networks has led to important discoveries in a variety of applications [@civelek2014systems], important details are not incorporated in the resulting network models, particularly regarding the causal molecular mechanisms linking eQTLs to their target genes, and the relation between variation in transcript levels and protein levels, with the latter ultimately determining phenotypic responses. Several recent studies have shown that at the molecular level, *cis*-eQTLs primarily cause variation in transcription factor binding to gene regulatory DNA elements, which then causes changes in histone modifications, DNA methylation and mRNA expression of nearby genes (reviewed in [@albert2015role]). Although mRNA expression can be used as a surrogate for protein expression, due to diverse post-transcriptional regulation mechanisms, the correlation between mRNA and protein levels is known to be modest [@lu2007; @schwanhausser2011], and genetic loci that affect mRNA and protein expression levels do not always overlap [@foss2007; @wu2013variation]. Thus, an ideal systems genetics study would integrate genotype data and molecular measurements at all levels of gene regulation from a large number of individuals. Human lymphoblastoid cell lines (LCL) are emerging as the primary model system to test such an approach.
Whole-genome mRNA and micro-RNA sequencing data are available for 462 LCL samples from five populations genotyped by the 1000 Genomes Project [@lappalainen2013transcriptome]; protein levels from quantitative mass spectrometry for 95 samples [@wu2013variation]; ribosome occupancy levels from sequencing of ribosome-protected mRNA for 50 samples [@cenik2015integrative]; DNA-occupancy levels of the regulatory TF PU.1, the RNA polymerase II subunit RBP2, and three histone modifications from ChIP-sequencing of 47 samples [@waszak2015population]; and the same three histone modifications from ChIP-sequencing of 75 samples [@grubert2015genetic]. These population-level datasets can be combined further with three-dimensional chromatin contact data from Hi-C [@rao20143d] and ChIA-PET [@grubert2015genetic], knock-down experiments followed by microarray measurements for 59 transcription-associated factors and chromatin modifiers [@cusanovich2014functional], as well as more than 260 ENCODE assays (including ChIP-sequencing of 130 TFs) [@encode2012] in a reference LCL cell line (GM12878). Although the number of samples where all measures are simultaneously available is currently small, this number is sure to rise in the coming years, along with the availability of similar measurements in other cell types. Despite the challenging heterogeneity of data and analyses in the integration of multi-omics data, web-based toolboxes, such as **GenomeSpace** (<http://www.genomespace.org>) [@add1] can prove helpful to non-programmer researchers. Conclusions {#sec:conclusions} =========== In this chapter we have reviewed the main methods and softwares to carry out a systems genetics analysis, which combines genotype and various omics data to identify eQTLs and their associated genes, reconstruct co-expression networks and modules, reconstruct causal Bayesian gene and module networks, and validate predicted networks *in silico*. Several method and software options are available for each of these steps, and by necessity a subjective choice about which ones to include had to be made, based largely on their ability to handle large datasets, their popularity in the field, and our personal experience of using them. Where methods have been compared in the literature, they have usually been performed on a small number of datasets for a specific subset of tasks, and results have rarely been conclusive. That is, although each of the presented methods will give somewhat different results, no objective measurements will consistently select one of them as the “best” one. Given this lack of objective criterion, the reader may well prefer to use a single software that allows to perform all of the presented analyses, but such an integrated software does not currently exist. Nearly all of the examples discussed referred to the integration of genotype and transcriptome data, reflecting the current dominant availability of these two data types. However, omics technologies are evolving at a fast pace, and it is clear that data on the variation of TF binding, histone modifications, and post-transcriptional and protein expression levels will soon become more widely available. Developing appropriate statistical models and computational methods to infer causal gene regulation networks from these multi-omics datasets is surely the most important challenge for the field. 
Acknowledgements {#acknowledgements .unnumbered} ================ The authors’ work is supported by the BBSRC \[BB/M020053/1\] and Roslin Institute Strategic Grant funding from the BBSRC \[BB/J004235/1\]. [100]{} Mackay TF, Stone EA and Ayroles JF. The genetics of quantitative traits: challenges and prospects. *Nature Reviews Genetics* **10**:565–577 (2009). Goddard ME and Hayes BJ. Mapping genes for complex traits in domestic animals and their use in breeding programmes. *Nature Reviews Genetics* **10**:381–391 (2009). Manolio TA. Bringing genome-wide association findings into clinical use. *Nature Reviews Genetics* **14**:549–558 (2013). Hindorff LA *et al.* Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. *Proceedings of the National Academy of Sciences* **106**:9362–9367 (2009). Schaub MA *et al.* Linking disease associations with regulatory information in the human genome. *Genome Research* **22**:1748–1759 (2012). Rockman MV and Kruglyak L. Genetics of global gene expression. *Nature Reviews Genetics* **7**:862–872 (2006). Georges M. Mapping, fine mapping, and molecular dissection of quantitative trait loci in domestic animals. *Annu Rev Genomics Hum Genet* **8**:131–162 (2007). Cookson W *et al.* Mapping complex disease traits with global gene expression. *Nature Reviews Genetics* **10**:184–194 (2009). Cheung VG and Spielman RS. Genetics of human gene expression: mapping dna variants that influence gene expression. *Nature Reviews Genetics* **10**:595–604 (2009). Cubillos FA, Coustham V and Loudet O. Lessons from [eQTL]{} mapping studies: non-coding regions and their role behind natural phenotypic variation in plants. *Current Opinion in Plant Biology* **15**:192–198 (2012). Dimas AS *et al.* Common regulatory variation impacts gene expression in a cell type–dependent manner. *Science* **325**:1246–1250 (2009). Foroughi Asl H *et al.* Expression quantitative trait loci acting across multiple tissues are enriched in inherited risk of coronary artery disease. *Circulation: Cardiovascular Genetics* **8**:305–315 (2015). Greenawalt DM *et al.* A survey of the genetics of stomach, liver, and adipose gene expression from a morbidly obese cohort. *Genome Research* **21**:1008–1016 (2011). Ardlie KG *et al.* The genotype-tissue expression ([GTEx]{}) pilot analysis: Multitissue gene regulation in humans. *Science* **348**:648–660 (2015). Albert FW and Kruglyak L. The role of regulatory variation in complex traits and disease. *Nature Reviews Genetics* **16**:197–212 (2015). Williams RW. Expression genetics and the phenotype revolution. *Mammalian Genome* **17**:496–502 (2006). Kadarmideen HN, von Rohr P and Janss LL. From genetical genomics to systems genetics: potential applications in quantitative genomics and animal breeding. *Mammalian Genome* **17**:548–564 (2006). Rockman MV. Reverse engineering the genotype–phenotype map with natural genetic variation. *Nature* **456**:738–744 (2008). Schadt EE. Molecular networks as sensors and drivers of common human diseases. *Nature* **461**:218–223 (2009). Schadt EE and Bj[ö]{}rkegren JL. New: network-enabled wisdom in biology, medicine, and health care. *Science Translational Medicine* **4**:115rv1–115rv1 (2012). Civelek M and Lusis AJ. Systems genetics approaches to understand complex traits. *Nature Reviews Genetics* **15**:34–48 (2014). 
Bj[ö]{}rkegren JL *et al.* Genome-wide significant loci: How important are they?: Systems genetics to understand heritability of coronary artery disease and other common complex disorders. *Journal of the American College of Cardiology* **65**:830–845 (2015). Chen Y *et al.* Variations in [DNA]{} elucidate molecular networks that cause disease. *Nature* **452**:429–435 (2008). Schadt EE, Friend SH and Shaywitz DA. A network view of disease and compound screening. *Nat Rev Drug Disc* **8**:286–295 (2009). Walhout AJ. Unraveling transcription regulatory networks by protein–[DNA]{} and protein–protein interaction mapping. *Genome Research* **16**:1445–1454 (2006). Ritchie MD *et al.* Methods of integrating data to uncover genotype-phenotype interactions. *Nature Reviews Genetics* **16**:85–97 (2015). Stegle O *et al.* Using probabilistic estimation of expression residuals (peer) to obtain increased power and interpretability of gene expression analyses. *Nature Protocols* **7**:500–507 (2012). Foss EJ *et al.* Genetic basis of proteome variation in yeast. *Nature Genetics* **39**:1369–1375 (2007). Nicholson G *et al.* A genome-wide metabolic [QTL]{} analysis in [E]{}uropeans implicates two loci shaped by recent positive selection. *PLoS Genetics* **7**:e1002270 (2011). Laird N and Lange C. *The Fundamentals of Modern Statistical Genetics* (Springer2011). Shabalin AA. Matrix [eQTL]{}: ultra fast [eQTL]{} analysis via large matrix operations. *Bioinformatics* **28**:1353–1358 (2012). Qi J *et al.* : Matrix-based non-parametric [eQTL]{} discovery. *BMC Bioinformatics* **15**:11 (2014). Brem RB *et al.* Genetic dissection of transcriptional regulation in budding yeast. *Science* **296**:752–755 (2002). Schadt EE *et al.* . *PLoS Biol* **6**:e107 (2008). Lappalainen T *et al.* Transcriptome and genome sequencing uncovers functional variation in humans. *Nature* **501**:506–511 (2013). Golub GH and Van Loan CF. *Matrix computations* (The Johns Hopkins University Press1996), third edn. Hemani G *et al.* Detection and replication of epistasis influencing transcription in humans. *Nature* **508**:249–253 (2014). Sharan R, Ulitsky I and Shamir R. Network-based prediction of protein function. *Molecular Systems Biology* **3**:88 (2007). Daub CO *et al.* Estimating mutual information using [B]{}-spline functions – an improved similarity measure for analysing gene expression data. *BMC Bioinformatics* **5**:118 (2004). Butte A and Kohane I. Mutual information relevance networks: Functional genomic clustering using pairwise entropy measurements. *Pac Symp Biocomputing* **5**:415–426 (2000). Basso K *et al.* Reverse engineering of regulatory networks in human b cells. *Nat Genet* **37**:382–390 (2005). Faith JJ *et al.* Large-scale mapping and validation of *Escherichia coli* transcriptional regulation from a compendium of expression profiles. *PLoS Biol* **5**:e8 (2007). Hartwell LH *et al.* From molecular to modular cell biology. *Nature* **402**:C47–C52 (1999). Eisen MB *et al.* Cluster analysis and display of genome-wide expression patterns. *PNAS* **95**:14863–14868 (1998). Tavazoie S *et al.* Systematic determination of genetic network architecture. *Nature Genetics* **22**:281–285 (1999). Sharan R and Shamir R. : a clustering algorithm with applications to gene expression analysis. In *Proc Int Conf Intell Syst Mol Biol*, vol. 8, 16 (2000). Medvedovic M and Sivaganesan S. Bayesian infinite mixture model based clustering of gene expression profiles. *Bioinformatics* **18**:1194–1206 (2002). 
Newman MEJ and Girvan M. Finding and evaluating community structure in networks. *Phys Rev E* **69**:026113 (2004). Newman MEJ. Modularity and community structure in networks. *PNAS* **103**:8577–8582 (2006). Ayroles JF *et al.* Systems genetics of complex traits in drosophila melanogaster. *Nat Genet* **41**:299–307 (2009). Langfelder P and Horvath S. Wgcna: an r package for weighted correlation network analysis. *BMC Bioinformatics* **9**:559 (2008). Lee SI *et al.* Learning a prior on regulatory potential from eqtl data. *PLoS Genetics* **5**:e1000358 (2009). Clauset A, Newman MEJ and Moore C. Finding community structure in very large networks. *Phys Rev E* **70**:066111 (2004). Van Dongen SM. Graph clustering by flow simulation (2001). Enright AJ, Van Dongen S and Ouzounis CA. An efficient algorithm for large-scale detection of protein families. *Nucleic Acids Research* **30**:1575–1584 (2002). Ravasz E *et al.* Hierarchical organization of modularity in metabolic networks. *Science* **297**:1551–1555 (2002). Zhang B and Horvath S. A general framework for weighted gene co-expression network analysis. *Stat Appl Genet Mol Biol* **4**:17 (2005). Langfelder P, Zhang B and Horvath S. Defining clusters from a hierarchical cluster tree: the dynamic tree cut package for r. *Bioinformatics* **24**:719–720 (2008). Liu JS. *[M]{}onte [C]{}arlo strategies in scientific computing* (Springer2002). Joshi A, Van de Peer Y and Michoel T. Analysis of a [Gibbs]{} sampler for model based clustering of gene expression data. *Bioinformatics* **24**:176–183 (2008). Bonnet E, Calzone L and Michoel T. Integrative multi-omics module network inference with [Lemon-Tree]{}. *PLoS Computational Biology* **11** (2015). Michoel T and Nachtergaele B. Alignment and integration of complex networks by hypergraph-based spectral clustering. *Physical Review E* **86**:056111 (2012). Zhu J *et al.* An integrative genomics approach to the reconstruction of gene networks in segregating populations. *Cytogenet Genome Res* **105**:363–374 (2004). Chen LS, Emmert-Streib F and Storey JD. Harnessing naturally randomized transcription to infer regulatory relationships among genes. *Genome Biology* **8**:R219 (2007). Aten JE *et al.* Using genetic markers to orient the edges in quantitative trait networks: the [NEO]{} software. *BMC Systems Biology* **2**:34 (2008). Schadt EE *et al.* An integrative genomics approach to infer causal associations between gene expression and disease. *Nature Genetics* **37**:710–717 (2005). Neto EC *et al.* Inferring causal phenotype networks from segregating populations. *Genetics* **179**:1089–1100 (2008). Neto EC *et al.* Modeling causality for pairs of phenotypes in system genetics. *Genetics* **193**:1003–1013 (2013). Millstein J *et al.* Disentangling molecular relationships with a causal inference test. *BMC Genetics* **10**:23 (2009). Smith GD and Ebrahim S. ‘mendelian randomization’: can genetic epidemiology contribute to understanding environmental determinants of disease? *International Journal of Epidemiology* **32**:1–22 (2003). Li Y *et al.* Critical reasoning on causal inference in genome-wide linkage and association studies. *Trends in Genetics* **26**:493–498 (2010). Friedman N, Nachman I and Pe[é]{}r D. Learning bayesian network structure from massive datasets: The “sparse candidate” algorithm. In *Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence*, UAI’99, 206–215 (Morgan Kaufmann Publishers Inc., San Francisco, CA, USA1999). 
Friedman N *et al.* Using bayesian networks to analyze expression data. *Journal of Computational Biology* **7**:601–620 (2000). Koller D and Friedman N. *Probabilistic Graphical Models: Principles and Techniques* ([The MIT Press]{}2009). Schmidt M, Niculescu-Mizil A and Murphy K. Learning graphical model structure using [L1]{}-regularization paths. In *AAAI*, vol. 7, 1278–1283 (2007). Zhu J *et al.* Integrating large-scale functional genomic data to dissect the complexity of yeast regulatory networks. *Nature Genetics* **40**:854–861 (2008). Zhu J *et al.* Stitching together multiple data dimensions reveals interacting metabolomic and transcriptomic networks that modulate cell regulation. *PLoS Biology* **10**:e1001301 (2012). Zhang B *et al.* . *Cell* **153**:707–720 (2013). Neto EC *et al.* Causal graphical models in systems genetics: a unified framework for joint inference of causal network and genetic architecture for correlated phenotypes. *The Annals of Applied Statistics* **4**:320 (2010). Scutari M *et al.* Multiple quantitative trait analysis using [B]{}ayesian networks. *Genetics* **198**:129–137 (2014). Langfelder P and Horvath S. Eigengene networks for studying the relationships between co-expression modules. *BMC Systems Biology* **1**:54 (2007). Friedman N, Goldszmidt M and Wyner A. Data analysis with [B]{}ayesian networks: A bootstrap approach. In *Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence*, 196–205 (Morgan Kaufmann Publishers Inc.1999). Segal E *et al.* Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. *Nat Genet* **34**:166–167 (2003). Friedman N. Inferring cellular networks using probabilistic graphical models. *Science* **308**:799–805 (2004). Lee S *et al.* . *Proc Natl Acad Sci USA* **103**:14062–14067 (2006). Zhang W *et al.* A [B]{}ayesian partition method for detecting pleiotropic and epistatic [eQTL]{} modules. *PLoS Computational Biology* **6**:e1000642 (2010). Joshi A *et al.* Module networks revisited: computational assessment and prioritization of model predictions. *Bioinformatics* **25**:490–496 (2009). Talukdar H *et al.* Cross-tissue regulatory gene networks in coronary artery disease. *Cell Systems* **2**:196–208 (2016). Ashburner M *et al.* . *Nat Genet* **25**:25–29 (2000). Furey TS. –seq and beyond: new and improved methodologies to detect and characterize protein–[DNA]{} interactions. *Nature Reviews Genetics* **13**:840–852 (2012). . . *Nature* **489**:57–74 (2012). Kundaje A *et al.* Integrative analysis of 111 reference human epigenomes. *Nature* **518**:317–330 (2015). Gerstein M *et al.* Integrative analysis of the [C]{}aenorhabditis elegans genome by the [modENCODE]{} project. *Science* **330**:1775–1787 (2010). Roy S *et al.* Identification of functional elements and regulatory circuits by [D]{}rosophila [modENCODE]{}. *Science* **330**:1787–1797 (2010). Yue F *et al.* A comparative encyclopedia of [DNA]{} elements in the mouse genome. *Nature* **515**:355–364 (2014). Chatr-aryamontri A *et al.* The [BioGRID]{} interaction database: 2015 update. *Nucleic acids research* gku1204 (2014). Lu P *et al.* Absolute protein expression profiling estimates the relative contributions of transcriptional and translational regulation. *Nature Biotech* **25**:117–124 (2007). Schwanhausser B *et al.* . *Nature* **473**:337–342 (2011). Wu L *et al.* Variation and genetic control of protein abundance in humans. *Nature* **499**:79–82 (2013). 
Cenik C *et al.* Integrative analysis of rna, translation and protein levels reveals distinct regulatory variation across humans. *Genome Research* doi:10.1101/gr.193342.115 (2015). Waszak SM *et al.* Population variation and genetic control of modular chromatin architecture in humans. *Cell* **162**:1039–1050 (2015). Grubert F *et al.* Genetic control of chromatin states in humans involves local and distal chromosomal interactions. *Cell* **162**:1051–1065 (2015). Rao SS *et al.* A [3D]{} map of the human genome at kilobase resolution reveals principles of chromatin looping. *Cell* **159**:1665–1680 (2014). Cusanovich DA *et al.* The functional consequences of variation in transcription factor binding. *PLoS Genetics* **10**:e1004226 (2014). Qu K *et al.* Integrative genomic analysis by interoperation of bioinformatics tools in GenomeSpace. *Nature Methods* **13**:245–247 (2016). [^1]: There are only $\ell-1$ matrix multiplications, because the data standardization implies that ${\mathbf{X}}{\mathbf{I}}^{(0)}=1-\sum_{m=1}^{\ell-1}{\mathbf{X}}{\mathbf{I}}^{(m)}$.
--- abstract: 'We investigate the stellar population of star-forming galaxies at *z* $\sim$ 4 by focusing on their slope of rest-frame ultraviolet (UV) continuum, $\beta$ where $f_{\lambda} \propto \lambda^{\beta}$. We investigate the sample of bright Lyman Break Galaxies (LBGs) with *i’* $\leq$ 26.0 in the Subaru/XMM-Newton Deep Survey field by using the SED fitting analysis. We find that the apparently redder ($\beta_{\mathrm{obs}} > -1.73$) LBGs tend to be dusty ($\mathrm{Av} > 1.0$), have the young stellar population ($\beta_{\mathrm{int}} < -2.42$), and the intrinsically active star-forming galaxies (SFR $\gtrsim$ a few $\times\ 10^{2}\,\mathrm{M_{\solar}}\>\mathrm{yr^{-1}}$). It means that a significant fraction of the UV–selected LBGs at *z* $\sim$ 4 is on-going active and dust obscured star-forming galaxies. We compare the IR to UV luminosity ratio assuming the dust attenuation laws with the sub-millimeter observations from previous works. The result suggests that the Calzetti-like dust attenuation law is preferable for the active and dusty star-forming LBGs at $z = 4$. We also find that an extrapolation of the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ relation toward the fainter magnitude range below our sample magnitude limit intersects the $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation previously obtained in the deeper narrow-area observations at $M_{\mathrm{UV}} = -18.9$ and $\beta = -1.94$, which coincides with the break point of $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation observed so far. The coincidence suggest that we see the almost dust-free population at $M_{\mathrm{UV,obs}} \gtrsim -18.9$.' author: - 'Satoshi <span style="font-variant:small-caps;">Yamanaka</span>' - 'Toru <span style="font-variant:small-caps;">Yamada</span>,' title: 'The UV spectral slope beta and stellar population of most active star-forming galaxies at z$\sim$4' --- Introduction {#S1Intr} ============ The ultraviolet (UV) continuum spectrum of star-forming galaxies is characterized by the spectral index $\beta$ in the form of $f_{\lambda} \propto \lambda^{\beta}$ [@Calz94]. The $\beta$ values are related with physical quantities such as age, metallicity, and dust extinction of the galaxies. In the case of less dust attenuation, younger age, and lower metallicity, the galaxy has the larger negative $\beta$ value (e.g., [@Bouw10]; [@Stan16]). Since it is relatively simple to measure $\beta$ values even if objects are at high redshift, the $\beta$ index is a useful tool to probe their physical quantities. The typical value of $\beta$ found in the previous works is $\sim -1.7$ at $z \sim 4$ for $\sim L_{*}$ galaxies with $M_{\mathrm{UV}} \sim -21.0$ ([@Bouw12]; [@Fink12]), and it is bluer at higher redshift up to $z \sim 7$ ($\beta$–*z* relation: e.g., [@Wil11]). At the fainter magnitude (e.g. $M_{\mathrm{UV}} \sim -19.0$), the observed $\beta$ value has still uncertainties and the relation between $\beta$ and UV absolute magnitude ($\beta$–$M_{\mathrm{UV}}$ relation) has been a subject of debate for the several years (e.g., [@Bouw12]; [@Bouw14]; [@Dunl12]; [@Dunl13]; [@Fink12]; [@Rog14]). and report that bright galaxies have redder $\beta$ values and faint galaxies have bluer $\beta$ values, while and report that $\beta$ values are constant over the observed magnitude range. This inconsistency in the $\beta$–$M_{\mathrm{UV}}$ relation can be caused by both large photometric errors for faint galaxies and selection bias. 
In order to reveal the true $\beta$–$M_{\mathrm{UV}}$ relation, a further large sample of objects with small photometric uncertainties is needed and/or it is necessary to assess the incompleteness of the observed $\beta$ distribution. Recently, @Dunc15 and @Bouw14 show that the $\beta$ value decreases with the $M_{\mathrm{UV}}$ value. finds the trend by combining the results of the previous literature ([@Bouw14];[@Dunc14]; [@Dunl12]; [@Dunl13]; [@Fink12]; [@Rog14]; [@Wil11]) and discusses the trend by assessing the observational bias (incompleteness) of the observed $\beta$ distribution in the faint magnitude range. The $\beta$–$M_{\mathrm{UV}}$ relation is understood as another aspect of the mass-metallicity relation seen in star-forming galaxies as the $\beta$ value depends on the dust extinction ([@Bouw09], , ). Moreover, it is suggested that there is a “knee” in the $\beta$–$M_{\mathrm{UV}}$ relation at $M_{\mathrm{UV}} \sim -19.0$, and the dependence of $\beta$ on $M_{\mathrm{UV}}$ becomes weaker at $M_{\mathrm{UV}} \gtrsim -19.0$ than at $M_{\mathrm{UV}} < -19.0$ [@Bouw14]. The change of the dependence is interpreted as the change of the dependence of the dust extinction on the UV luminosity or the stellar mass (e.g., [@Pann09]; [@Redd10]). The semi-analytic model predicts the sudden change of the dust-to-gas mass ratio at the critical metallicity and the dust mass rapidly increase at the metallicity level larger than the critical metallicity (e.g., [@Hira11]). The change of the dependence of $\beta$ on $M_{\mathrm{UV}}$ perhaps indicates the existence of the critical metallicity. The redshift dependence of $\beta$ is interpreted as the dust attenuation history or the dust production history in star-forming galaxies. Interestingly, the dust attenuation history at $z \gtrsim 3$ estimated from the redshift dependence of $\beta$ is smoothly connected with the dust attenuation history at $z \lesssim 3$ estimated from the direct measurements of both IR and UV luminosity [@Burg13]. The dust attenuation history is also used for revealing the history of the true (dust-corrected) cosmic Star Formation Rate (SFR) density in the high-z universe because it is still difficult to obtain the IR luminosity for high-z star-forming galaxies (e.g., [@Bouw09], ; [@Mada14]). Currently, the $\beta$–*z* relation has been used for considering the source of cosmic reionization, assuming that the $\beta$ value represents the production rate of hydrogen ionizing photons since the production rate is susceptible to the stellar population (e.g., [@Dunc15]; [@Bouw15], ). We note that the samples in most of the literature are overlapped and not independent since the set of GOODS-South/HST or HUDF/HST data is mostly used so far in their works. Due to the small observed area, the number of bright objects at $z = 4$ in the field is limited (except for [@Rog14]) and these previous studies focus on relatively faint galaxies at high redshift. The $\beta$ distribution of luminous objects is, however, also important since such population provides important clues to understand early star-formation history in the universe (e.g., [@Cucc12]; [@Hatf18]). In fact, by using stacked images, @LeeKS11 investigate the $\beta$ value of ultra-luminous star-forming galaxies ($19.46 \leq I < 24.96$) at *z* $\sim$ 4 and they find that the $\beta$ value of the stacked star-forming galaxies tends to be redder toward the brighter magnitude range. 
When taking all the previous works into consideration, the investigation for individual and ”normal” luminous galaxies ($L \sim L_{*}$) is very comprehensive. Recent Atacama Large Millimeter/submillimeter Array (ALMA) observations have revealed the dust properties of high redshift star-forming galaxies through the relation between the ratio of IR to UV (so called IRX) and the UV slope $\beta$ (IRX–$\beta$ relation; e.g., [@Cpk15]; [@Bouw16]; [@McLu18]). In order to interpret these results, it is very important to investigate the detailed relation between the UV slope $\beta$ and the stellar population which is hidden by dust extinction. On the other hand, there is difficulty in studying the intrinsic values of the $\beta$, $M_{\mathrm{UV}}$, and stellar population for the luminous and massive galaxies, since the effect of dust extinction on their color degenerates with the age and metallicity of their stellar population: the more massive systems are on average older and metal rich. It is essential to resolve these degeneracy and understand the intrinsic properties of star formation and effects of dust extinction. In this paper, we present the results of the UV slope $\beta$ and stellar population for the relatively bright galaxies with $M_{\mathrm{UV}} \lesssim -20$ at $z \sim 4$ in relatively wide-area and deep Subaru/XMM-Newton Deep Survey (SXDS) field. Wide area coverage is essential to sample rare luminous galaxies. In section 2, we explain the data and sample selection. In section 3, we describe our method to evaluate the $\beta$ value. In section 4, we show the result of the observed $\beta$–$M_{UV}$ relation, and we assess the incompleteness of our sample selection. In section 5, we discuss the intrinsic $\beta$–$M_{UV}$ relation, most active star-forming galaxies, and dust attenuation law for $z \sim 4$ galaxies. In section 6, we give our conclusions to summarize this work. In regard to the cosmological parameters, we assume $\Omega_{m,0} = 0.3$, $\Omega_{\Lambda,0} = 0.7$, $H_{\mathrm{0}} = 70\>\mathrm{km\>s^{-1}\>Mpc^{-1}}$. Finally throughout this work we apply the AB magnitude system ([@OkeGunn]; [@Fkgt96]). Data and sample selection {#S2Dtss} =========================== Data {#S2s1da} ------ In our analysis, we select the Lyman Break Galaxies (LBGs) at $z \sim 4$ in the SXDS field which is partially covered by other surveys (i.e., UDS-UKIDSS/UKIRT, UDS-CANDELS/HST, and SEDS/Spitzer). Figure \[fig1\] shows the field map of each survey and indicates that UDS-CANDELS and SEDS survey do not cover the entire field of SXDS or UDS-UKIDSS. Our catalog includes *B*, *V*, *R*, *i’*, *z’*, and updated-*z’* from Subaru/Suprime-Cam, *J*, *H*, and *K* from UKIRT/WFCAM, F125W and F160W from HST/WFC3, and 3.6$\>\micron$ and 4.5$\>\micron$ from Spitzer/IRAC. The *B*, *V*, *R*, *i’*, *z’* band images are taken from the archived SXDS data [@Furu08]. The CCD of Subaru/Suprime-Cam is replaced with the new CCD of Hamamatsu Photonics in 2008, and the total response function improve especially at the longer wavelength. The Subaru/Suprime-Cam *z’* band images are taken after the replacement again, and we call the images the updated-*z’* band. The limiting magnitude of the updated-*z’* images is $\sim 0.5\>\mathrm{mag}$ deeper than the archived SXDS data [@Furu16]. 
The *J*, *H*, and *K* band images are taken from the UKIRT Deep Sky Survey (UKIDSS: [@Law07]) DR10, and the F125W and F160W band images are taken from the Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS: [@Grog11]; [@Koek11]). For the data from the *B* to *K* band, we use 2”-diameter aperture magnitudes measured with SExtractor[^1] ver.2.5.0 [@BeAr96]. The images with smaller PSFs are convolved with Gaussian kernels so that the FWHM of the stars (1”.0) matches that of the original updated-*z’* band image. On the other hand, the 3.6$\>\micron$ and 4.5$\>\micron$ photometry is taken from the Spitzer Extended Deep Survey (SEDS) catalog [@Ash13], and we apply an aperture correction. Finally, we pick up the objects which lie in the overlapping region covered by both the SXDS and UDS-UKIDSS fields (see figure \[fig1\]: the overlapping region is filled by yellow slanting lines) because we need both optical and NIR photometry for estimating the UV slope $\beta$ value of *z* $\sim$ 4 galaxies. The information on the imaging data is summarized in table \[tab1\].

![ Field map of the imaging data used in our analysis. The blue solid line shows the sky coverage of the SXDS field; the green dot-dash line represents the sky coverage observed with Subaru/updated-*z’*; the red dashed line represents the sky coverage of the UDS-UKIDSS field; the black dotted line represents the sky coverage of the UDS-CANDELS field; and the brown two-dot-dash line represents the sky coverage of the SEDS field. Our catalog consists of the objects within the area covered by all of the SXDS, updated-*z’*, and UDS fields since we use the *i’*, *z’*, updated-*z’*, and *J* band photometry for estimating the UV slope $\beta$ value. This area is filled by the yellow slanting lines. []{data-label="fig1"}](Pic.Region_SXDS_new1812.eps){width="80mm"}

\[tab1\] (1)@Furu08; (2)@Furu16; (3); (4)@Grog11; (5)@Koek11; (6)@Ash13;\ The values show the total magnitude where the completeness of the source detection is 50%.

Sample selection and SED fitting {#S2s2ss}
----------------------------------

From the photometric sample, we select the objects satisfying all of the following criteria.

- $i' \leq 26.0$

- Subaru/*z’* or Subaru/updated-*z’* $\geq$ 2$\,\sigma$

- UKIRT/*J* or HST/F125W $\geq$ 2$\,\sigma$

- $B - R > 1.2$, $R - i' < 0.7$, and $B - R > 1.6(R - i') + 1.9$

- $3.5 \leq z_{phot} < 4.5$ with reduced $\chi^2 \leq 2$

Criterion (1) is applied so as to select galaxies bright enough to have small photometric errors, and the magnitude threshold corresponds to $S / N \gtrsim 11.5$. Criteria (2) and (3) are required to estimate the $\beta$ value accurately. Due to stellar spikes and/or saturated pixels, some objects are not detected in the deeper Subaru/updated-*z’* or HST/F125W imaging but are detected in the shallower Subaru/*z’* or UKIRT/*J* imaging; therefore, we adopt criteria (2) and (3) in this form. Criterion (4) is the *BRi’*–LBG selection investigated by @Ouch04 for the Subaru/Suprime-Cam filter set, and this criterion is intended to pick up star-forming galaxies at *z* $\sim$ 4. In @Ouch04, the detectability of *z* $\sim$ 4 galaxies and the contamination rate from low-*z* galaxies are discussed, and readers should refer to that work for more details. After applying criteria (1), (2), (3), and (4), the total number of objects is $\sim$ 2100. Criterion (5) is applied so as to select reliable galaxies at *z* $\sim$ 4.
The reduced $\chi^{2}$ value is calculated for each galaxy as $\chi^2 / \mathrm{d.o.f}$, in which d.o.f $=$ (number of observed broad-band filters for each galaxy) $-$ (number of free parameters in the fitting). In the selection procedure, our concern is only the photometric redshift, and thus we adopt the number of free parameters $=$ 1. As a result, our catalog contains $\sim$ 1800 objects which are visually checked in order to avoid stellar spikes and/or saturated pixels. We write down the detail of $\sim$ 300 objects, which are excluded by the criteria (5). Among $\sim$ 300 objects, (i) $\sim$ 130 objects are the reduced $\chi^2 > 2$ objects at $3.5 \leq z_{phot} < 4.5$, (ii) $\sim$ 90 objects are the low-z interlopers at $z_{phot} < 3.0$, and (iii) $\sim$ 80 objects are the slightly lower/higher redshift objects at $3.0 \leq z_{phot} < 3.5$ or $4.5 \leq z_{phot} < 5.0$. First of all, it is reasonable that we exclude the objects in the sub-sample (ii) because they are the possible contamination in our study. On the other hand, the objects in the sub-sample (i) and (iii) are the potential LBGs at $3.5 \leq z_{phot} < 4.5$. We check the influence of the sub-sample (i) and (iii) on the UV slope $\beta$, the UV magnitude, and the other quantities, and consequently we confirm that the $\sim$ 300 objects excluded by the criteria (5) does not change our results and the criteria (5) is reasonable. \[tab2\] For the investigation of photometric redshift and stellar population, we use the Hyperz[^2] photometric redshift code ver.1.1 [@Bolz00] with the @Bruz03 templates[^3] (hereafter BC03) and the STARBURST99[^4] [@Leit99] templates (hereafter SB99). The BC03 templates are chosen as ”typical galaxy” models and constructed from five different star formation histories (SFHs), thirty age values, and three metallicity values with the Chabrier Initial Mass Function (IMF). The SB99 templates are adopted as ”young star-forming galaxy” models and constructed from two SFHs, thirty age values, four metallicity values, and two extreme nebular continuum cases with the Kroupa IMF. In the run of the Hyperz, the dust attenuation value, Av, is ranged from 0.0 to 3.0 with $\Delta \mathrm{Av} = 0.1$ assuming the @Calz00 attenuation law for dust extinction curve. The details of the parameters are summarized in table \[tab2\]. The motivation to use the BC03 and SB99 model templates simultaneously is that the BC03 template is not enough to describe young star-forming galaxies: A time step, which is critical to make spectra of young-age galaxies, in the SB99 computation is much smaller than that in the BC03 computation. Although the ages of the SB99 template set is very young, we consider that our template sets are optimal for fitting the spectrum of young star-forming galaxies. We note that we compare the photometry and the UV slope $\beta$ value used in this work with those of the published catalog. In the UDS-CANDELS field (the small area enclosed with the black dotted line in figure \[fig1\]), a multi-wavelength photometry catalog has been published by the CANDELS collaborators [@Gala13], and the catalog has the total flux of Subaru/*BVRi’z’*, UKIRT/*JHK*, HST/F125WF160W, and Spitzer/3.6$\>\micron$4.5$\>\micron$. Their photometry of the Subaru and UKIRT bands are consistent with ours if we take account of the difference in the measurement method. 
The UV slope $\beta$ value of our catalog is also comparable to that of the published catalog when applying the same measurement method for UV slope $\beta$ to both catalogs. However, the difference of the HST and Spitzer bands is slightly larger than expected. We consider that for the HST data the difference is attributed to our very large PSF correction factor applied to be matched with the Subaru images. For the Spitzer data the difference is due to the uncertainty in the aperture correction. We remark that our catalog tends to have the slightly bluer *K* $-$ \[3.6\] and *K* $-$ \[4.5\] colors than those of the published one. Example results of SED fitting {#S2s3exs} -------------------------------- We show two examples in figures \[fig2\] and \[fig3\] in order to show the validity of our SED fitting analysis. The first figure is the red LBG observed with Spitzer ($\beta_{\mathrm{obs}} = -1.27$ and $M_{\mathrm{UV,obs}} = -20.38$), and the second figure is the blue LBG observed without Spitzer ($\beta_{\mathrm{obs}} = -2.39$ and $M_{\mathrm{UV,obs}} = -20.32$). In the top nine panels of each figure, we show the stamps of the imaging data from Subaru/*B* to UKIRT/*K*. The center of the images represent the detected position, and the green two straight lines in each image with 1” length are placed at 1” from the detected position. In the bottom left panel, we show the observed photometry and best-fit SED. The blue plus points with the error bar represent the observed photometry and its uncertainty, and the red solid line represents the best-fit SED. The black open circle with the arrow represents the upper limit of the photometry at 2$\sigma$ level. In the bottom right three panels, we show the $\chi^{2}$ map of our SED fitting analysis on one-dimensional space. The vertical axis represents the $\chi^{2}$ value and the horizontal axis represents the photometric redshift, dust attenuation, and age. The horizontal black dashed line in each panel represents the minimum $\chi^{2}$ value. We emphasize that the two examples we show here are categorized as faintest objects, and most of the other objects are fitted better than the examples. Our SED fitting procedure performs well due to the deep imaging data of UKIRT/*HK* which covers the wavelength of Balmer break for LBGs at *z* $\sim$ 4. ![image](im10051_comb.eps){width="140mm"} ![image](im10643_comb.eps){width="140mm"} Measurement method of UV slope $\beta$ {#S3Mmuvb} ======================================== According to the original definition of @Calz94, the UV slope $\beta$ value should be estimated from the Spectral Energy Distribution (SED) from 1250Å to 2600Å through the 10 fitting windows. However, spectroscopic observation generally requires a long exposure time for measuring the continuum flux from high-z galaxies due to their faint continuum. Furthermore, the number of objects which can be observed at a time is limited depending on slit configuration. Therefore, it is impractical to accurately measure the continuum flux from spectroscopic data for all the $\sim$ 1800 targets in our sample. Instead, we apply the simple power-law fitting to the broad-band photometry with the following functional form, $$M(\lambda_{x}) = -2.5 (\beta + 2) \log \lambda_{x} + Const \label{eq1}$$ where $\lambda_{x}$ is the effective wavelength of $x$th broad-band filter, $M$($\lambda_{x}$) is the measured magnitude of $x$th broad-band filter, and [*Const*]{} is a constant value. 
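As a concrete illustration, equation (\[eq1\]) reduces to a linear regression of the broad-band magnitudes against $\log_{10}\lambda$. The following is a minimal Python sketch, not the exact pipeline used here: the function name, the weighting scheme, and the effective wavelengths and magnitudes in the usage example are hypothetical placeholders.

```python
import numpy as np

def fit_uv_slope(lambda_eff, mags, mag_errs):
    """Fit M(lambda) = -2.5*(beta + 2)*log10(lambda) + const (equation 1).

    lambda_eff : effective wavelengths of the broad-band filters [Angstrom]
    mags       : AB magnitudes in those filters
    mag_errs   : 1-sigma magnitude uncertainties
    Returns (beta, beta_err).
    """
    x = np.log10(np.asarray(lambda_eff, dtype=float))
    y = np.asarray(mags, dtype=float)
    w = 1.0 / np.asarray(mag_errs, dtype=float)   # numpy convention: weights = 1/sigma
    (slope, _), cov = np.polyfit(x, y, deg=1, w=w, cov=True)
    beta = -slope / 2.5 - 2.0                     # slope = -2.5*(beta + 2)
    beta_err = np.sqrt(cov[0, 0]) / 2.5
    return beta, beta_err

# Hypothetical example for a z ~ 4 LBG measured in i', z', updated-z', and J
lam = [7700.0, 9100.0, 9150.0, 12500.0]   # approximate effective wavelengths [Angstrom]
mag = [25.30, 25.22, 25.20, 25.05]
err = [0.05, 0.08, 0.06, 0.15]
beta, beta_err = fit_uv_slope(lam, mag, err)
print(f"beta = {beta:.2f} +/- {beta_err:.2f}")
```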
This method is suitable since the bias in the $\beta$ estimation is small ([@Fink12]; [@Rog13]). For the fitting, we conduct the least square fitting to the *i’*, *z’*, updated-*z’*, and *J* band filters which cover the rest-frame wavelength from $\sim$ 1500Å to $\sim$ 2500Å for the objects at *z* $=$ 3.5–4.5. In our analysis, an uncertainty of the $\beta$ value represents the uncertainty of the least square fitting when taking account of the photometric error of the broad-band filters. Although using the larger number of the photometry data points results in more accurate determination of the UV slope $\beta$ value, we need to select the optimal broad-band filters for the fitting so as to avoid strong spectral features. In the rest-frame UV and optical wavelength range, the redshifted Ly$\alpha$ ($\lambda$ $=$ 1216Å) line (or Lyman break) and Balmer break ($\lambda$ $\sim$ 3600Å) can be a contamination affecting the broad-band photometry. Figure \[fig4\] illustrates the position of the Lyman and Balmer breaks in the observed-frame, and filter profiles of the optical and NIR broad-band. For the sake of clarity, we show three model spectra with clear Lyman and Balmer breaks at *z* $=$ 3, 4, and 5, and we omit the filter profiles of Subaru/updated-*z’*, HST/F125W, F160W, Spitzer/3.6$\>\micron$, and 4.5$\>\micron$ bands. This figure indicates that the *R* ($\lambda$ $\sim$ 6000–7000Å) and *H* ($\lambda$ $\sim$ 15000-17000Å) band filters are probably affected by Lyman or Blamer breaks, and the wavelength coverage from *i’* to *J* band (black horizontal line in upper panel of figure \[fig4\]) is suitable for calculating the UV slope $\beta$ value of *z* $\sim$ 4 LBGs. Consequently, we use the Subaru/*i’*, *z’*, updated-*z’*, UKIRT/*J*, and HST/F125W band filters. Another critical point is that the robustness of the $\beta$ measurement for the faint and blue galaxies is strongly influenced by the depth of the imaging data at longer wavelength in our analysis. As mentioned above, we select the $i' \leq 26.0$ LBGs, and we use the Subaru/*i’*, *z’*, updated-*z’*, UKIRT/*J*, and HST/F125W band filters for the $\beta$ measurement. Under the condition, for instance, the galaxies with $i' = 26.0$ and $\beta < -2.0$ have the larger photometric uncertainty in the *z’*, updated-*z’*, *J*, and F125W band filters compared with the galaxies with $i' = 26.0$ and $\beta \geq -2.0$, and hence the bluer galaxies have the larger uncertainty in $\beta$. Moreover, the sample completeness of the extremely blue galaxies such as $\beta \sim -3.0$ is also influenced by the depth of the imaging data although the extremely blue objects are rare. From equation \[eq1\], the case of $\beta = -2.0$ is derived from $M(\lambda_{x}) = Const$, namely, all the broad-band photometry is same. For the appropriate $\beta$ measurement with the wide $\beta$ range, it is required that the magnitude threshold of *i’* band is brighter than the $\sim$2–3$\sigma$ limiting magnitude of the *z’*, updated-*z’*, *J*, and F125W bands. Our selection criteria described in section \[S2s2ss\] is applied by taking the aspect into consideration. We use the conservative criteria and consider that our selection does not cause the strong bias in the $\beta$ distribution except for the extremely faint and blue objects. In order to quantify and discuss the influence, we estimate the recovery fraction in section \[S4s2RF\]. ![ Model spectra and important broad-band filters. 
In the top panel, the green, blue, and red solid lines represent the model spectrum at *z* $=$ 3, 4, and 5, respectively, which clearly show the Lyman and Balmer breaks. The length of the black horizontal line denotes the wavelength coverage of Subaru/*i’*, Subaru/*z’*, and UKIRT/*J* band filters which are used for calculating the UV slope $\beta$. In the bottom panel, we show the Subaru/*BVRi’z’* and UKIRT/*JHK* band filters from left to right. For the sake of clarity, the filter responses of UKIRT/*JHK* are multiplied by an arbitrary constant. []{data-label="fig4"}](Pic.speccomp5-multi.eps){width="80mm"} Results {#S4Rslt} ======= ![ Observed UV slope $\beta$ vs. UV absolute magnitude at rest-frame 1500Å ($\beta$–$M_{\mathrm{UV}}$ relation). The UV absolute magnitude is calculated by integrating the flux of the best-fit SED model template from rest-frame 1450Å to 1550Å. The green filled circles represent the individual objects in the area of the SEDS (Spitzer field) and the blue filled squares represent the individual objects out of the area. The magenta open circles with the error bar represent the mean UV slope $\beta$ value, standard deviation (left side), and mean error (right side) of the UV slope $\beta$ for each magnitude bin, and they are summarized in table \[tab3\]. []{data-label="fig5"}](Pic2-Official.SEDSBetaMuv+1sig-v1cal.v1BobsMobs.eps){width="80mm"} Relation between $\beta$ and $M_{UV}$ {#S4S1BM} ------------------------------------- Figure \[fig5\] shows the obtained distribution of the UV slope $\beta$ as a function of the UV absolute magnitude at rest-frame 1500Å ($\beta$–$M_{\mathrm{UV}}$ relation). The UV absolute magnitude for each object is calculated from the best-fit SED model template by integrating the flux from rest-frame 1450Å to 1550Å. The green filled circles show the objects in the area observed with Spitzer and the blue ones show the objects out of the area. There seems to be no notable systematic difference in the distribution of the sample with and without the Spitzer data, and the lack of the information about 3.6$\>\micron$ and 4.5$\>\micron$ does not influence the selection for *z* $\sim$ 4 star-forming galaxies very much. In fact, we conduct the Kolmogorov–Smirnov test (K–S test) for the whole sample and some magnitude sub-samples. The null hypothesis of the K–S test is that the samples with (green) and without (blue) SEDS/Spitzer data are derived from the same distribution. The p-values of all the test are $>> 5\,\%$, and the null hypothesis is not rejected at 5$\%$ significance level. Therefore, there is no evidence that the information about 3.6$\>\micron$ and 4.5$\>\micron$ influences the $\beta$–$M_{\mathrm{UV}}$ relation. The magenta open circles with the error bar indicate the mean $\beta$ value and the mean $M_{\mathrm{UV}}$ value for each magnitude bin. The standard deviation of the $\beta$ distribution is indicated by the thick marks toward the left side, and the typical uncertainty in the $\beta$ value for the individual objects is shown by the thick marks toward the right side. For the mean values, we just apply the simple geometric mean without taking account of the individual uncertainty in $\beta$ for the individual objects. This is because the mean $\beta$ value can be biased toward positive values if we weight the $\beta$ values by the individual uncertainty of the objects, i.e., the uncertainty is not symmetric and becomes smaller toward positive $\beta$ values than the opposite. 
The mean $\beta$ value, standard deviation, typical uncertainty, and mean $M_{\mathrm{UV}}$ value for each bin are summarized in table \[tab3\]. In addition, other useful information such as the median values is also summarized. \[tab3\] The standard deviation is clearly larger than the typical uncertainty in the $\beta$ value except for the two faintest magnitude bins. Therefore, in the magnitude range of $M_{\mathrm{UV}} \lesssim -20.5$, the observed $\beta$ distribution is more scattered than the typical uncertainty, and the observed scatter represents the variation of the stellar population and dust extinction among the sample. In the magnitude range of $M_{\mathrm{UV}} \gtrsim -20.5$, the typical uncertainty in $\beta$ becomes as large as the standard deviation, and in particular the objects with $M_{\mathrm{UV}} \gtrsim -20.0$ are not uniformly distributed. Although we do not add any further criteria to our sample, since our purpose is to investigate the bright LBGs with *i’* $\leq$ 26.0, we check the sample completeness in section \[S4s2RF\] and discuss the results for some $M_{\mathrm{UV}}$ sub-samples. The mean $\beta$ value does not appear to decrease with the mean UV magnitude. In order to quantify the $\beta$–$M_{\mathrm{UV}}$ trend, we conduct a least-squares linear fit to our mean $\beta$ and $M_{\mathrm{UV}}$ values listed in table \[tab3\]. We use the four data points in $-22.0 \leq M_{\mathrm{UV}} \leq -20.0$, and the slope of the fitted linear relation becomes $-0.02\,\pm\,0.02$, which is nearly zero compared with the values in the previous works at similar redshift, $-0.13\,\pm\,0.02$ [@Bouw14] and $-0.10\,\pm\,0.03$ [@Kurc14]. We note that our targets are relatively bright LBGs and the dynamic range of $M_{\mathrm{UV}}$ in our study is smaller than that in the previous works, $-22.0 \leq M_{\mathrm{UV}} \leq -15.5$ [@Bouw14] and $-21.0 \leq M_{\mathrm{UV}} \leq -15.0$ [@Kurc14]. When calculating the slope of the $\beta$–$M_{\mathrm{UV}}$ relation, we use the *standard deviation of the mean* as the uncertainty of each mean $\beta$ value. This uncertainty is calculated from the standard deviation divided by the number of galaxies in each bin ($= \sigma_{\beta} / N_{\mathrm{obj}}$) described in table \[tab3\], and thus the uncertainty of the slope of the $\beta$–$M_{\mathrm{UV}}$ relation is much smaller than the mean error of $\beta$. In figure \[fig6\], we show the direct comparison of our result to the results for *z* $\sim$ 4 super-luminous stacked LBGs ([@LeeKS11], red diamonds) and *z* $\sim$ 4 faint LBGs ([@Bouw14], green triangles). The red data points are calculated by us from the photometry described in @LeeKS11 since not all of the $\beta$ values and their uncertainties are given in their paper. The error bars denote the typical uncertainty in the $\beta$ value. The error bars of @Bouw14 show the sum of the random and systematic errors described in their paper. The blue circles with the error bars show our result, which is the same as the magenta points in figure \[fig5\] but with error bars showing the typical uncertainty. In the magnitude range of $-22.0 \leq M_{\mathrm{UV}} \leq -20.0$, our results are consistent with the previous works. Although the dynamic range of $M_{\mathrm{UV}}$ is smaller than in the previous works, the luminous star-forming galaxies at *z* $\sim$ 4, which are selected from the ground-based wide-field images, seem to show a weaker $\beta$–$M_{\mathrm{UV}}$ relation over the magnitude range of $-22.0 \leq M_{\mathrm{UV}} \leq -20.0$.
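For concreteness, the binned statistics of table \[tab3\] and the slope quoted above can be reproduced schematically as below. This is a rough sketch under simple assumptions: arithmetic means per magnitude bin, the standard error of the mean ($\sigma_{\beta}/\sqrt{N_{\mathrm{obj}}}$) as the per-bin weight (the text quotes $\sigma_{\beta}/N_{\mathrm{obj}}$, so the exact weighting may differ), and illustrative bin edges.

```python
import numpy as np

def binned_beta_stats(m_uv, beta, edges):
    """Per-bin mean M_UV, mean beta, scatter, and standard error of the mean."""
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (m_uv >= lo) & (m_uv < hi)
        n = int(sel.sum())
        if n < 2:
            continue
        scatter = beta[sel].std(ddof=1)
        rows.append((m_uv[sel].mean(), beta[sel].mean(), scatter, scatter / np.sqrt(n)))
    return np.array(rows)

def beta_muv_slope(mean_muv, mean_beta, sigma_mean):
    """Weighted least-squares slope (and its error) of the beta--M_UV relation."""
    coeffs, cov = np.polyfit(mean_muv, mean_beta, deg=1, w=1.0 / sigma_mean, cov=True)
    return coeffs[0], np.sqrt(cov[0, 0])

# Hypothetical usage with per-object arrays m_uv_obs and beta_obs:
# stats = binned_beta_stats(m_uv_obs, beta_obs, edges=np.arange(-22.0, -19.75, 0.5))
# slope, slope_err = beta_muv_slope(stats[:, 0], stats[:, 1], stats[:, 3])
```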
![ Comparison of this work with the previous studies at the similar redshift. For the sake of clarity, the vertical scale is changed from figure \[fig5\]. The red open diamonds and green open triangle with the error bars show the results from @LeeKS11 and @Bouw14, respectively. The blue open circles with the error bars show the result from our sample which is same as the magenta points in figure \[fig5\], but the error bars are the typical uncertainty. []{data-label="fig6"}](Pic.liter-comp-new1812.v1.eps){width="80mm"} It seems that the distribution of the objects in figure \[fig5\] is truncated and the shape of the distribution looks like a “triangle”. This must be a result from either some physical constraint or some sample selection bias, or both. For example, a galaxy at *z* $=$ 4.5 with the *i’*-band magnitude *i’* $= 26.0$ has $M_{UV} \lesssim -20.1$, and therefore the number of objects, which are selected from our selection criteria, decrease in the region around $M_{UV} \sim -20$ and $\beta \sim -2.5$. By using only our three data points in $-22.0 \leq M_{\mathrm{UV}} \leq -20.5$, the slope of the relation indeed becomes $-0.09\,\pm\,0.04$ which is quite similar to the value of the previous works. However, the number of the objects in the brightest (most left) bin is much less than the other bins (see column 8 in table \[tab3\]), and the slope is almost estimated from only the two data points. We need to assess the incompleteness of the observed $\beta$ distribution in order to take account of our selection bias, which is discussed in the next section. Recovery Fraction {#S4s2RF} ----------------- As shown in figure \[fig5\], the observed $\beta$ distribution is restricted in the “triangle” zone. It seems that there are three truncations, namely, (a) at the top left side, (b) at the bottom left side, and (c) at the bottom right side. In order to discuss the reason of those truncation and evaluate the validity of our results, we calculate the recovery fraction which is the number ratio of recovered objects to input objects by using Monte Carlo method. At first, we make a uniform input distribution on $\beta$–$M_{\mathrm{UV}}$ space for the quantitative discussion. For this purpose we consider the $8 \times 13 = 104$ grids with $\Delta \beta = 0.5$ and $\Delta M_{UV} = 0.25$, and we generate 300 mock galaxy spectra whose $\beta$ and $M_{UV}$ values place in each the small grid (so the total spectra are $104 \times 300 = 31,200$). The mock spectra are constructed from the BC03 or SB99 model templates which are the similar template sets as described in section \[S2s2ss\]. All of the parameters such as SFH, dust attenuation value, age, metallicity, and source redshift are determined at random. We note that the range of the dust attenuation value and source redshift for the mock galaxies are different from the range described in section \[S2s2ss\], and they are $0.0 \leq {\rm Av} \leq 1.5$ and $3.5 \leq z_{s} \leq 4.5$. The number of the age step is also changed from 30 into 15. If a resultant spectrum does not place in the designated small grid, we repeat the procedure until the desired $\beta$ and $M_{\mathrm{UV}}$ values. Second, we calculate the apparent magnitude of the broad-band filters for each mock spectrum and we put the artificial galaxies on the real observed images from Subaru/*B* to UKIRT/*K* by using the IRAF mkobjects task. 
Since we check that the impact of Spitzer/3.6$\>\micron$ and 4.5$\>\micron$ is negligible for the $\beta$–$M_{\mathrm{UV}}$ relation in section \[S4S1BM\], we omit both the information in our simulation. The size and shape of the mock galaxies are also determined at random so that the size distribution of our simulated objects reproduces the observed size distribution. Finally these embedded mock galaxies are re-detected, re-measured, and re-compiled by the same manners described in section \[S2s2ss\]. In the SED fitting procedure, however, we change the number of the age step from 30 to 15 in order to save the computer resource. After the compilation, we count the number of final recovered objects for each small grid and calculate the number ratio of recovered to input objects. The final result includes the impact from the image quality, the magnitude criteria, the LBG selection, and the photo-*z* selection. Note that the prepared objects are restricted by only the rest-frame [*UV*]{} information and thus the rest-frame [*optical*]{} information such as Balmer break is purely determined at random. ![ Recovery fraction which is the number ratio of recovered to input objects on $\beta$–$M_{\mathrm{UV}}$ space. The black grid lines represent the area where we prepare the input objects uniformly throughout the $\beta$–$M_{\mathrm{UV}}$ space and a total of 31,200 mock galaxies is distributed. The colored area represents the detected area where we can find recovered objects. The white colored area represents the non-detection area where we cannot find any recovered objects. []{data-label="fig7"}](Pic-Official.BC5.BetaRF.v3.eps){width="80mm"} Figure \[fig7\] shows the final recovery fraction map by the color-coded area. The vertical axis is the UV slope $\beta$ and the horizontal axis is the absolute magnitude at rest-frame 1500Å. Although the UV absolute magnitude for input objects is given as total magnitude, the UV absolute magnitude for recovered objects is calculated from 2”-diameter aperture photometry. Therefore we convert the total magnitude of input objects to the 2” aperture magnitude by the aperture correction: $M_{\mathrm{UV,aperture}} = M_{\mathrm{UV,total}} + 0.352$. The black lattice lines indicate each area where $\sim$ 300 mock galaxies (or input objects) are distributed except for both the faintest and brightest magnitude bins where $\sim$ 150 mock galaxies are distributed. The white area represents the non-detection area which means that there are no recovered objects. We find that the relative value of the recovery fraction is roughly homogeneous over the area where the observed objects are distributed ($-2.5 \lesssim \beta \lesssim -0.5$ and $-22.0 \lesssim M_{\mathrm{UV}} \lesssim -20.0$) and does not depend on the UV magnitude except for the area around the truncation (c). The rough homogeneity of the relative value indicates that the observed $\beta$–$M_{\mathrm{UV}}$ distribution is not strongly biased at least over the area of $-2.5 \leq \beta \leq -0.5$ and $-22.0 \leq M_{\mathrm{UV}} \leq -20.0$, and our measurement for the $\beta$–$M_{\mathrm{UV}}$ relation described in section \[S4S1BM\] is reasonable. In other words, we find no evidence that the truncation (a) and (b) are artificial, and they must be caused by some other reasons. On the other hand, this figure also shows that the truncation (c) is artificially caused by our sample selection. 
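Schematically, the recovery fraction of figure \[fig7\] is the ratio of two two-dimensional histograms over the ($M_{\mathrm{UV}}$, $\beta$) grid. The sketch below assumes hypothetical grid limits; only the cell sizes $\Delta \beta = 0.5$ and $\Delta M_{\mathrm{UV}} = 0.25$ and the aperture-correction constant follow the text.

```python
import numpy as np

def recovery_fraction(muv_in, beta_in, muv_rec, beta_rec, muv_edges, beta_edges):
    """Number ratio of recovered to input mock galaxies per (M_UV, beta) cell."""
    n_in, _, _ = np.histogram2d(muv_in, beta_in, bins=[muv_edges, beta_edges])
    n_rec, _, _ = np.histogram2d(muv_rec, beta_rec, bins=[muv_edges, beta_edges])
    return np.where(n_in > 0, n_rec / np.maximum(n_in, 1.0), np.nan)

# Cell sizes from the text (Delta beta = 0.5, Delta M_UV = 0.25); outer limits hypothetical
beta_edges = np.arange(-3.5, 0.75, 0.5)     # 8 cells in beta
muv_edges = np.arange(-22.5, -19.0, 0.25)   # 13 cells in M_UV

# Input (total) magnitudes are converted to 2"-aperture magnitudes before binning,
# using the aperture correction quoted above: M_UV,aperture = M_UV,total + 0.352
# frac = recovery_fraction(muv_in_aper, beta_in, muv_rec, beta_rec, muv_edges, beta_edges)
```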
Our simulation indicates that the truncation (c) is attributed to the selection criteria requiring detection in Subaru/*z’* or Subaru/updated-*z’*, and in UKIRT/*J* or HST/F125W. The figure also indicates that the recovery fraction locally peaks around $\beta \sim -2.0$ and that there are some fluctuating peaks at $\beta \sim -0.25$. Qualitatively, prominent spectral features can be easily identified by the SED fitting procedure, and the recovery fraction may then become higher than at other $\beta$ values. Since the Lyman Break technique preferentially selects blue galaxies with $\beta \sim -2.0$, our simulation indeed reflects our sample selection rather than the assumptions about the input objects. In addition, red galaxies with $\beta \sim -0.25$ have a clear Balmer break if their red color is due to an aged stellar population. In our simulation the rest-frame optical information is determined at random, and hence too many input galaxies with a prominent Balmer break are probably generated at $\beta \sim -0.25$. In conclusion, the inhomogeneity of the recovery fraction seen in figure \[fig7\] is due to our sample selection criteria adopting the Lyman Break technique and the photo-*z* estimation. The truncations (a) and (b) can be interpreted as follows: the observed number of LBGs decreases toward brighter UV magnitudes and the average $\beta$ value converges to $\beta \sim -1.7$. The decrease in the number of LBGs with UV magnitude is explained by the drop of the UV luminosity function, since the characteristic luminosity of *z* $\sim$ 4 LBGs is $M_{\mathrm{UV}}^{*} = -21.14$ [@Yoshi06]. However, it is not clear why the average UV slope $\beta$ value converges to $\beta \sim -1.7$. Qualitatively, galaxies with $\beta \gtrsim -1.7$ should contain a large amount of dust and their UV magnitude becomes fainter due to the dust obscuration. Therefore red and bright galaxies are a rare, or almost impossible, population, and this causes the truncation (a). On the other hand, galaxies with $\beta \lesssim -1.7$ contain less dust and can retain a bright UV magnitude. As we cannot find blue and bright galaxies in figure \[fig5\], such objects are indeed a rare population in the observational data. In summary, we conclude that the truncations (a) and (b) are not caused by our sample selection alone and are most likely caused by some physical requirements, while the truncation (c) is clearly caused by our sample selection. In order to understand what makes blue and bright galaxies rare and to reveal the reason for the truncation (b), we discuss the underlying stellar population of the LBGs in our sample in section \[S5Dis\].

Discussion {#S5Dis}
==========

Relation between Intrinsic $\beta$ and Intrinsic $M_{\mathrm{UV}}$ {#S5s1IBM}
------------------------------------------------------------------

We here estimate the dust-corrected $\beta$ (hereafter the [*intrinsic*]{} UV slope, $\beta_{\mathrm{int}}$) and the dust-corrected $M_{\mathrm{UV}}$ (hereafter the [*intrinsic*]{} UV absolute magnitude, $M_{\mathrm{UV,int}}$). In section \[S4s2RF\], our simulation indicates that the observed distribution in $\beta$–$M_{\mathrm{UV}}$ space is shaped by physical reasons. Both the observed $\beta$ and $M_{\mathrm{UV}}$ values strongly depend on the dust attenuation value, and hence it is helpful to investigate the $\beta$–$M_{\mathrm{UV}}$ distribution before the dust reddening.
In our discussion, we assume that the reasonable best-fit physical quantities are estimated from the SED fitting analysis in which the observed photometry covers the wavelength range between rest-frame $\sim$ 900Å and $\sim$ 4400Å (or $\sim$ 9000Å in part) for *z* $\sim$ 4 objects. ![image](Pic.ConvBetaMuv-new1812.v1.eps){width="140mm"} We calculate the intrinsic UV slope by equation \[eq1\] for the intrinsic magnitude of Subaru/*i’*, Subaru/updated-*z’*, and UKIRT/*J* band filters. For estimating the intrinsic magnitude, we convolve the intrinsic SED, which is reproduced with the best-fit physical quantities without any dust extinction, with the three broad-band filters. We note that the intrinsic UV slope depends on the prepared model templates in the SED fitting (i.e., SFH, age, and metallicity) and has discrete values in out discussion. Figure \[fig8\] shows the conversion from observed to intrinsic value for $\beta$ and $M_{\mathrm{UV}}$. The left panel (a) of figure \[fig8\] shows the *observed* $\beta$ as a function of the *observed* $M_{\mathrm{UV}}$ ($\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation, same as figure \[fig4\]). The right panel (b) of figure \[fig8\] shows the *intrinsic* $\beta$ as a function of the *intrinsic* $M_{\mathrm{UV}}$ ($\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ relation). The blue filled circles, green filled triangles, and red filled squares represent individual objects with the best-fit dust attenuation value of Av $<$ 0.5, 0.5 $\leq$ Av $<$ 1.0, and Av $\geq$ 1.0, respectively. In the panel (a), we confirm that the objects with the higher dust attenuation value are distributed at the upper area where the $\beta_{\mathrm{obs}}$ value becomes redder. This trend is natural and is not inconsistent with the previous studies reported as the relation between the $\beta_{\mathrm{obs}}$ and dust attenuation value (IRX–$\beta$ relation: e.g., [@Calz94]; [@Meur99]; [@Take12]). In the panel (b), due to the large dust correction, the objects with the higher dust attenuation value and the redder $\beta_{\mathrm{obs}}$ value tend to be distributed at the bottom left area where the $\beta_{\mathrm{int}}$ and $M_{\mathrm{UV,int}}$ value becomes bluer and brighter. Moreover, the trend of the distribution is different from the panel (a), namely, the slope of the $\beta$–$M_{\mathrm{UV}}$ relation is nearly constant or positive. We discuss this distribution for different sub-samples in the following. Both two panels in figure \[fig910\] shows the same $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ distribution but the color-coding represent the different sub-samples classified according to with and without SEDS/Spitzer (left) and $M_{\mathrm{UV,obs}}$ (right). In the left panel, the red filled squares and blue filled circles indicate the objects with and without SEDS/Spitzer data, respectively. In the right panel, the blue filled circles, green filled triangles, and red filed squares indicate the objects with $M_{\mathrm{UV,obs}} > -20.5$, $-20.5 \geq M_{\mathrm{UV,obs}} > -21.0$, and $M_{\mathrm{UV,obs}} < -21.0$, respectively. The large open circle, triangle, and square with the error bars in each panel represent the median value and the median uncertainty for each sub-sample. We calculate the dust attenuation value at $\chi^{2}_{min} + 1$ for the individual objects as the uncertainty of the dust attenuation, and then the uncertainty in Av is converted into the uncertainty in $\beta_{\mathrm{int}}$ and $M_{\mathrm{UV,int}}$. 
Therefore, the error bars in figure \[fig910\] denote the uncertainty in Av. We also show the histogram of $\beta_{\mathrm{int}}$ and $M_{\mathrm{UV,int}}$ for each sub-sample. ![image](Pic.BintMint-new1812.v3SEDS.eps){width="70mm"} ![image](Pic.BintMint-new1812.v4Mobs.eps){width="70mm"} From figure \[fig910\], there seems not to be systematic difference in the $\beta_{\mathrm{int}}$ distribution. From the left panel, we find that the information of the Spitzer data does not influence the estimation of the $\beta_{\mathrm{int}}$ value although the uncertainty in $\beta_{\mathrm{int}}$ and $M_{\mathrm{UV,int}}$ for the sub-sample with Spitzer tends to be smaller than that for the sub-sample without Spitzer. From the right panel, we find that the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ distribution of each $M_{\mathrm{UV,obs}}$ sub-sample is almost parallel to each other. When calculating the slope of the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ relation for each sub-sample by the same manner described in section \[S4S1BM\], we obtain the value of $0.12 \pm 0.01$ ($M_{\mathrm{UV,obs}} > -20.5$), $0.14 \pm 0.01$ ($-20.5 \geq M_{\mathrm{UV,obs}} > -21.0$), and $0.16 \pm 0.02$ ($M_{\mathrm{UV,obs}} < -21.0$). It means that the variation of the $M_{\mathrm{UV,obs}}$ value does not significantly affect the shape of the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ distribution. Although the faint objects such as the objects with $M_{\mathrm{UV,obs}} > -20.5$ have the $\beta_{\mathrm{obs}}$ value with the large uncertainty, the $\beta_{\mathrm{int}}$ values are reasonably well evaluated after the SED fitting using the all photometric data points. ![image](Pic.BintMint-new1812.v1Av-Bobs.eps){width="140mm"} Figure \[fig11\] shows the same $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ distribution as figure \[fig910\] but the color-coding represent the sub-samples classified according to Av and $\beta_{\mathrm{obs}}$. In the left panel, the blue filled circles, green filled triangles, and red filled squares indicate the objects with Av $<$ 0.5, 0.5 $\leq$ Av $<$ 1.0, and Av $\geq$ 1.0, respectively. In the right panel, the blue filled circles and red filled squares indicate the objects with $\beta_{\mathrm{obs}} \leq -1.73$ and $\beta_{\mathrm{obs}} > -1.73$, respectively. $\beta_{\mathrm{obs}} = -1.73$ represents the median $\beta_{\mathrm{obs}}$ value for our whole sample. The open circle, triangle, and square with the error bars in each panel represent the median value and the median uncertainty for each sub-sample. As mentioned above, the uncertainty in $\beta_{\mathrm{int}}$ and $M_{\mathrm{UV,int}}$ is estimated from the uncertainty in Av. Figure \[fig11\], interestingly, shows that the objects which are dusty and redder in the observed $\beta$ tend to be bluer in the intrinsic $\beta$ and brighter in the intrinsic $M_{\mathrm{UV}}$. In addition, the intrinsic $\beta$ value slightly increases with the intrinsic $M_{\mathrm{UV}}$ value and the trend is opposite of that of the $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation. Our result can be interpreted as follows. The more intense ongoing star-forming galaxies, whose [*intrinsic*]{} $\beta$ and $M_{\mathrm{UV}}$ value are bluer and brighter, generate and/or contain a large amount of dust, and the [*observed*]{} $\beta$ and $M_{\mathrm{UV}}$ value result in a redder and fainter value due to the dust attenuation. 
Then, the nearly constant $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ distribution observed in our analysis is formed by the galaxies which have a blue $\beta_{\mathrm{int}}$ and bright $M_{\mathrm{UV,int}}$ value because they are distributed at the area of a red $\beta_{\mathrm{obs}}$ and faint $M_{\mathrm{UV,obs}}$ value. According to our SED fitting analysis, a young-age stellar population is responsible for the bluest $\beta_{\mathrm{int}}$ value. In other words, there are some young-age galaxies with the bluest $\beta_{\mathrm{int}}$ value in the brightest $M_{\mathrm{UV,int}}$ range, but there are no intermediate-age and old-age galaxies with the bluest $\beta_{\mathrm{int}}$ value in the brightest $M_{\mathrm{UV,int}}$ range. This is not surprising because the intrinsic UV luminosity is expected to be sensitive to the age of the stellar population. It is hard to sustain a very high star formation rate with the intermediate and long time duration due to rapid gas depletion. The UV luminosity is dominated by the stars at ”turn-off point” on Hertzsprung–Russell Diagram which is an age indicator of the young stellar population. We, however, emphasize that other parameters such as metallicity and/or IMF can also explain the reason of the bluest $\beta_{\mathrm{int}}$ value. Indeed some literature argue that dusty star-forming galaxies have a ”top-heavy” IMF although the discussion still continues ([@Bau05], [@Tacc08], [@Bas10]). Under the top-heavy IMF environment, hot and massive stars can be formed more and more, and the bluer $\beta_{\mathrm{int}}$ value is easily produced. Otherwise, among the galaxies with the bluest $\beta_{\mathrm{int}}$ value, there may be a post-primordial starburst which is dominated by extremely metal-poor (or PopIII) stars. In order to investigate the star formation activity, we plot the Star Formation Rate (SFR) of the individual objects as a function of their stellar mass in figure \[fig12\]. For the estimation of SFR, we convolve the best-fit template with the GALEX/FUV filter response curve and use the calibration for FUV luminosity ([@Hao11]; [@Kenn12]). For estimating the stellar mass, we multiply the best-fit normalization factor to the output from the BC03 model template. Figure \[fig12\] shows the dust-corrected SFR as a function of the stellar mass, and the blue circle, green triangle, and red squares represent the individual objects with the dust attenuation value of Av $<$ 0.5, 0.5 $\leq$ Av $<$ 1.0, and Av $\geq$ 1.0, respectively. The large open circle, triangle, and square with the error bars show the median value and median uncertainty of each sub-sample. The uncertainty in SFR is estimated from the uncertainty in Av, and the uncertainty in stellar mass is estimated from the photometric uncertainty of *K* band since the estimation of the stellar mass is almost determined by the *K* band photometry. We also plot the previous results of the SFR–$\mathrm{M_{*}}$ relation called as main-sequence of star forming galaxies at *z* $\sim$ 4 ([@Spe14], [@Ste14], [@Cap17]). ![ Dust-corrected SFR vs. stellar mass estimated from the BC03 model templates. The SFR is estimated from the luminosity at the *GALEX*/FUV filter by using the @Hao11 calibration. The stellar mass is calculated multiplying the normalization factor to the output of the BC03 model. The color-coding represents the best-fit dust attenuation value. We also draw the SFR–$\mathrm{M_{*}}$ relation at *z* $\sim$ 4 reported by the previous works ([@Spe14], [@Ste14], [@Cap17]). 
[]{data-label="fig12"}](Pic.SFRMs-new1812.v2AvSingle.eps){width="80mm"} The figure shows that the most intense star-forming galaxies in our sample have SFR $\gtrsim$ a few $\times\ 10^{2}\,\mathrm{M_{\solar}}\>\mathrm{yr^{-1}}$, and they are the objects with Av $\geq$ 1.0. Since most of the objects with Av $\geq$ 1.0 have $\beta_{\mathrm{obs}} > -1.73$ and $\beta_{\mathrm{int}} \leq -2.42$ ($\beta_{\mathrm{int}} = -2.42$ is the median $\beta_{\mathrm{int}}$ value for our whole sample) from figure \[fig11\], our analysis indicates that the highly dust-attenuated and intense star-forming galaxies at *z* $\sim$ 4 tend to have $\beta_{\mathrm{obs}} > -1.73$ and $\beta_{\mathrm{int}} \leq -2.42$. When comparing our result with the previous works, our median value of Av $<$ 0.5 and 0.5 $\leq$ Av $<$ 1.0 sub-sample is consistent with the relation from the previous works although the distribution of our sample is significantly scattered. The sub-sample with Av $\geq$ 1.0 tends to be distributed above the relation from the previous works, and the deviation of the median value from the relation is larger than the uncertainty in SFR. Because the galaxies distributed above the star formation main sequence are classified as starburst phase (e.g., [@Cap17]; [@Bisi18]), we consider that the objects with Av $\geq$ 1.0 are indeed in the starburst phase and our results is not inconsistent with the previous works. In conclusion, we find some highly dust-attenuated (Av $\geq$ 1.0) and intense star-forming (SFR $\gtrsim$ a few $\times\ 10^{2}\,\mathrm{M_{\solar}}\>\mathrm{yr^{-1}}$) galaxies at *z* $\sim$ 4 which have $\beta_{\mathrm{obs}} > -1.73$ and $\beta_{\mathrm{int}} \leq -2.42$. Finally, we consider a simple case in which the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ trend monotonically continues in the fainter magnitude range. According to @Bouw14, the $\beta_{\mathrm{obs}}$ value becomes bluer when the $M_{\mathrm{UV,obs}}$ value becomes fainter, but the slope of the $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation becomes flatter in $M_{\mathrm{UV,obs}} \gtrsim -19.0$. In order to establish both the observed and intrinsic $\beta$–$M_{\mathrm{UV}}$ relation without contradiction, it is expected that the $\beta_{\mathrm{int}}$ value becomes redder and converges to the certain $\beta$ value toward the fainter magnitude range. When we extrapolate the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ relation faintward below our sample magnitude limit, we will find the intersection point of the observed and intrinsic $\beta$–$M_{\mathrm{UV}}$ relation. Since the dust attenuation value becomes smaller toward the fainter magnitude range along the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ relation, the intersection point (or convergence point) will represent the position of the appearance of nearly dust-free population. Our $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ relation shows $\beta_{\mathrm{int}} = 0.61 + 0.14 M_{\mathrm{UV,int}}$ by the same manner described in section \[S4S1BM\], and the $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation from @Bouw14 shows $\beta_{\mathrm{obs}} = -4.39 - 0.13 M_{\mathrm{UV,obs}}$ in $M_{\mathrm{UV,obs}} \leq -18.8$. As a result, both relations intersect at $M_{\mathrm{UV}} = -18.9$ and $\beta = -1.94$ and its point corresponds to the break point of $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation at $M_{\mathrm{UV}} = -18.8$ and $\beta = -1.95$ reported by @Bouw14. 
Therefore the transition of the $\beta_{\mathrm{obs}}$-$M_{\mathrm{UV,obs}}$ relation around $M_{\mathrm{UV}} \sim -18.8$ indicates that we really see the almost dust-free population in $M_{\mathrm{UV}} > -18.8$, and the apparently bluest star-forming galaxies at *z* $\sim$ 4 distribute around $\beta \sim -2.0$. Case of fixed star formation history and SMC attenuation law {#S5s2fhsmc} -------------------------------------------------------------- In this section and the following sections (section \[S5s3zjkd\] and \[S5s4iasfg\]), we verify our results by using different and somewhat independent ways. We emphasize that these verification is intended not only to check the results from our SED fitting analysis but also to strengthen our suggestion, i.e., we find the dusty and on-going active star-forming galaxies at *z* $\sim$ 4. First of all, we repeat the SED fitting analysis (1) by fixing SFH parameter and (2) by using SMC attenuation law for dust extinction curve from @Pre84 and @Bouch85. Figures \[fig13\] and \[fig14\] show the result of the case (1) and (2), respectively. The figures show the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ relation, and the fixed SFH parameter or dust extinction curve used in the SED fitting is labeled on the top of each panel. In figure \[fig13\], the first and second rows show the results of the BC03 model template and the third row shows the results of the SB99 model template. In all the panels except for the case of the SMC attenuation law, the blue, green, and red points represent the individual objects with the best-fit dust attenuation value of Av $<$ 0.5, 0.5 $\leq$ Av $<$ 1.0, and Av $\geq$ 1.0, respectively. In the case of the SMC attenuation law, the blue, green, and red points represent the objects with Av $<$ 0.3, 0.3 $\leq$ Av $<$ 0.6, and Av $\geq$ 0.6, respectively. In figure \[fig14\], the large diamonds with the error bars represent the median value and the median uncertainty for each sub-sample. Although the error bars is quite large for the objects with Av $\geq$ 0.6 in the right panel, it is caused from the small number of objects in the sub-sample. ![image](Pic-Official.zpBetaMuv_SFH.v2Av.eps){width="140mm"} From figure \[fig13\], we find that the global trend of the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ relation does not significantly change among SFH parameters, which supports our interpretation described in section \[S5s1IBM\]. We note that the $\beta_{\mathrm{int}}$ value has discrete values and makes discrete sequences, especially in the panel (c). It is attributed to the age step of the prepared model template in the SED fitting, and the more large number of the age step will dilute the discrete sequences. However, it is not critical when taking account of the moderate uncertainty in photometry. In brief, the effect of dust attenuation significantly distorts the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ relation, which is probably positive, and then the $\beta$–$M_{\mathrm{UV}}$ relation results in the negative $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation reported by the previous works. In $-22 \lesssim M_{\mathrm{UV,obs}} \lesssim -20$, however, the $\beta_{\mathrm{obs}}$ value seems to be constant to the $M_{\mathrm{UV,obs}}$ value (constant $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation) due to the existence of dusty active star-forming population. 
![image](Pic-Official.AvBintMint_Comp_Cal-SMC.unique1.eps){width="130mm"} Figure \[fig14\] shows that the best-fit Av value from the SMC attenuation law becomes much smaller than that from the attenuation law because the slope of the dust extinction curve of the SMC is much steeper than that of the . Consequently, we cannot identify the intrinsically active star-forming galaxies which show the high dust attenuation (Av $>$ 1.0), blue $\beta_{\mathrm{int}}$ value ($\beta_{\mathrm{int}} < -2.42$), and red $\beta_{\mathrm{obs}}$ value ($\beta_{\mathrm{obs}} > -1.7$), although we can again find that the intrinsic $\beta$ value slightly increases with the intrinsic $M_{\mathrm{UV}}$ value. Actually, recent works of Atacama Large Millimeter/submillimeter Array (ALMA) observations report that the SMC dust attenuation law is appropriate for normal star forming galaxies at high redshift (e.g., [@Cpk15]; [@Bouw16]). On the other hand, as discussed in section \[S5s4iasfg\], the -like attenuation law is partly required to reproduce the results of the Submillimeter Common User Bolometer Array 2 (SCUBA2) from @Copp15 and @Kopr18. *zJK*-diagram {#S5s3zjkd} --------------- In this section, we compare the observed color of the *z’JK* band photometry with the predicted color which is estimated from the model simulation. Since our sample tends to have a larger photometric error in the broad-band filters at longer wavelength owing to the depth of the imaging data, the weight of the broad-band filters at longer wavelength becomes smaller than the opposite in the SED fitting analysis. It is possible that the photometry of the *z’JHK* band filters does not have a considerable constraint on the best-fit SED. We therefore focus on the observed color of the *z’JK* band photometry, and more directly compare the observed value with the predicted value in the color–color space. For the model simulation, we calculate the color of the two SFH model templates with some condition: One is the BC03 Instantaneous Burst model (hereafter IB), and the other is the BC03 Continuous Constant star formation model (hereafter CSF). We consider that the IB and CSF SFH model is most opposite case in star formation activity, and the models are helpful to interpret the observed results. For the sake of simplicity, we fix the metallicity value with $Z = 0.2 Z_{\solar}$ and the redshift value with *z* $=$ 3.5 and 4.5. In order to clarify the variation of the colors depending on the dust and age, we calculate the colors of the IB and CSF model templates with (a) the fixed age but the variable dust ranging from 0.0 to 3.0, and (b) the variable age ranging from 10$\>\mathrm{Myr}$ to 15.0$\>\mathrm{Gyr}$ but the fixed dust. ![image](Pic-Official.color_zJHK.v3obj-mdl.eps){width="140mm"} Figure \[fig15\] shows the $z - J$ vs. $J - K$ color–color diagram. The vertical axis is the $z - J$ color and the horizontal axis is the $J - K$ color. The $J - K$ color can trace the Balmer break of galaxies at *z* $\sim$ 4 and the $z - J$ color represents the observed UV slope $\beta$. The blue filled triangles and the red filled circles denote the objects with $\beta_{\mathrm{obs}} \leq -1.73$ and $\beta_{\mathrm{obs}} > -1.73$ in our sample, respectively. In this figure, we only use the objects satisfying all the following condition, $z' > 3\,\sigma$, $J > 3\,\sigma$, and $K > 3\,\sigma$, so as to calculate the reliable colors. The blue and red large circles with the error bars denote the median value and median uncertainty in the observed colors. 
In the left panel (a), the green lines represent the CSF model template with age $=$ 10$\>\mathrm{Myr}$, and the purple lines represent the IB model template with age $=$ 100$\>\mathrm{Myr}$. The solid and dashed lines represent the case of *z* $=$ 3.5 and 4.5, respectively, and the space between the lines is filled with the shaded area. The solid circles on each line indicate the dust attenuation value of Av $=$ 0.0, 1.0, 2.0, and 3.0 from bottom left to top right, and the given value is labeled beside the circles. In the right panel (b), the green lines represent the CSF model template with Av $=$ 1.0, and the purple lines represent the IB model template with Av $=$ 0.0. The solid and dashed lines represent the case of *z* $=$ 3.5 and 4.5, respectively, and the space between the lines is filled with the shaded area. The solid circles on each line indicate the age value of 10$\>\mathrm{Myr}$, 100$\>\mathrm{Myr}$, 500$\>\mathrm{Myr}$, and 1$\>\mathrm{Gyr}$ from bottom left to top right, and the given value is labeled beside the circles. We note that we omit the label of 100$\>\mathrm{Myr}$ for the green line in the panel (b) since the corresponding point is placed under the median value and cannot be seen. The figure indicates that the observed distribution of the $\beta_{\mathrm{obs}} > -1.73$ sub-sample tends to be reproduced by the star-forming, dusty, and very young-age, that is bluer $\beta_{\mathrm{int}}$, population. Although we only show the extreme and slightly arbitrary cases in the figure, we can deduce the other possibilities from the examples such as star-forming, less dusty, and middle-age population. However, when we take the other possibilities into consideration, the above interpretation is not changed because the direction of the increase in age and dust is different. We consider that the observed $J - K$ color of the $\beta_{\mathrm{obs}} > -1.73$ sub-sample is not sufficiently red, and thus the middle-age and old-age population is not preferred in the SED fitting analysis. The observed distribution of the $\beta_{\mathrm{obs}} \leq -1.73$ sub-sample tends to be reproduced by the less star-forming, less dusty, and young-age population, although the sub-sample can be also reproduced by the star-forming, less dusty, and middle-age population. We note that there are some outliers in our sample, but most of them have a lower signal to noise ratio ($S/N \sim 3$–$5$) in *J* and/or *K* band than the other objects. In summary, the interpretation from the *z’JK* color–color diagram is consistent with the interpretation from our SED fitting analysis, and therefore a part of star-forming galaxies at *z* $\sim$ 4 in our sample is indeed classified as dusty star-forming population. Expected features of most active star-forming galaxies at *z* $\sim$ 4 {#S5s4iasfg} ------------------------------------------------------------------------ Last of this paper, we show two estimation for the IR features of the active star-forming galaxies at *z* $\sim$ 4: One is the luminosity ratio of IR to UV so called IRX, and the other is the flux density at observed-frame 850$\>\micron$, $S_{850}$. Our sample does not have the rest-frame IR information for the individual objects and therefore we use the approximate conversion. 
For estimating the IRX value, we apply the empirical conversion between ${\rm IRX_{TIR-FUV}}$ and ${\rm A_{FUV}}$ for low-*z* galaxies reported by @Burg05: ${\rm A_{FUV}} = -0.028[{\rm log_{10}}L_{\rm TIR}/L_{\rm FUV}]^3 + 0.392[{\rm log_{10}}L_{\rm TIR}/L_{\rm FUV}]^2 + 1.094[{\rm log_{10}}L_{\rm TIR}/L_{\rm FUV}] + 0.546$. For estimating the $S_{\mathrm{850}}$ value, we first calculate the total (bolometric) IR luminosity by utilizing the not dust-corrected FUV luminosity and the IRX value, and then we convert the total IR Luminosity into the flux density at observed-frame 850$\>\micron$. In the conversion, we use the modified blackbody $+$ power-law formula as the dust thermal emission and the total IR luminosity is estimated by integrating the modeled spectrum from 8$\>\micron$ to 1000$\>\micron$ in the rest-frame. The formula is, $$S (\nu, T_{d}) \propto \left\{ \begin{array}{ll} \frac{\nu^{\beta}\nu^{3}}{e^{h\nu/kT_{d}}-1} & (\nu \leq \nu_{c}); \\ \nu^{-\alpha} & (\nu > \nu_{c}), \end{array} \label{eq2} \right.$$ where $S(\nu, T_{d})$ is the flux density at $\nu$ for a dust temperature $T_{d}$ in the units of Jy and $\beta$ is a dust emissivity index. The connecting frequency, $\nu_{c}$, is calculated from, $$\Biggl. \frac{dS}{d\nu} \Biggr|_{\nu=\nu_{c}} = -\alpha. \label{eq3}$$ For the sake of simplicity, we fix all the above parameters and the source redshift referring to @Copp15: the dust temperature of $T_{d} = 38\>$K, the dust emissivity index of $\beta_{\mathrm{dust}} = 1.5$, the power-law index of $\alpha = 1.7$, and the source redshift of $z = 3.87$. We emphasize that the cautious treatment is required for the comparison between our result and the previous results presented in this paper. ![image](Pic2-Official.IRXBeta.v01Av.eps){width="52mm"} ![image](Pic2-Official.IRXBeta-v2smc.v01Av.eps){width="52mm"} ![image](Pic2-Official.IRXBeta-Comp.v1med.eps){width="52mm"} Figure \[fig161718\] shows the IRX–$\beta$ relation obtained from the attenuation law (left) and SMC attenuation law (middle). The vertical axis is the predicted IRX value, and the horizontal axis is the observed UV slope $\beta$. The small dots represent each object, and the color-coding is same as the figure \[fig14\]. The large blue, green, and red square with the error bars represent the median value and median uncertainty for each sub-sample. In the right panel, we show the median values of our result and the previous works from (2016: AM16, magenta square), (2017: F17, red triangle), and (2016: B16, orange circle). The sample of AM16 is LBGs at *z* $\sim$ 3 in the COSMOS field and the IR luminosity is obtained from the stacked image of the Herschel and AzTEC. The sample of F17 is massive star-forming galaxies at *z* $\sim$ 3.2 in the COSMOS field, which are distributed on the main-sequence of star formation, and the IR luminosity is obtained from the stacked image of the ALMA. We note that both the samples consist of the relatively more massive ($\mathrm{M_{*} \gtrsim 10^{10} M_{\solar}}$) and lower redshift LBGs compared with our sample. The sample of B16 is LBGs at *z* $=$ 4–10 in the Hubble Ultra Deep Field, and the IR Luminosity is obtained from the stacked image of the ALMA. For B16, the data points in this panel represent the 2$\,\sigma$ upper limit of the formal uncertainty for the $\mathrm{M_{*} < 10^{9.75} M_{\solar}}$ sample described in table 13 of their paper, and thus their sample is relatively less massive (and possibly higher redshift) LBGs compared with our sample. 
The black solid and dashed lines show the relation based on the attenuation law from @Meur99 and @Take12, respectively. The black dot-dashed line shows the relation based on the SMC attenuation law from @Bouw16. In the case of the attenuation law (left panel), our sample shows the systematically bluer UV slope $\beta$ and the systematic offset becomes larger at the larger IRX value. According to previous works for lower redshift star-forming galaxies (e.g., [@Redd06]; [@Hein13]; [@Oteo13]; [@Alva16]), normal star-forming galaxies are distributed along the IRX–$\beta$ relation, and IR luminous galaxies such as Luminous InfraRed Galaxies (LIRGs; $L_{\mathrm{TIR}} > 10^{11}\,L_{\solar}$) or Ultra Luminous InfraRed Galaxies (ULIRGs; $L_{\mathrm{TIR}} > 10^{12}\,L_{\solar}$) are distributed above the IRX–$\beta$ relation. The offset of our red points implies the presence of IR excess galaxies at at *z* $\sim$ 4 such as local LIRG/ULIRGs although the systematic shift can be attributed to the uncertainty of IRX which comes from the conversion from $A_{\mathrm{FUV}}$ to $\mathrm{IRX_{TIR−-FUV}}$ and/or the failure of the SED fitting analysis. In the case of the SMC attenuation law (middle panel), our sample also shows the systematically bluer UV slope $\beta$ especially at the larger IRX value. Most of our sample, however, show the moderate IRX value (IRX $\leq$ 10), and we find a few IR excess galaxies in our sample. In conclusion, our sample indicates the presence of the IR excess galaxies at *z* $\sim$ 4. When comparing the previous works (right panel in figure \[fig161718\]), our sample from attenuation law tends to have the bluer UV slope $\beta$ at the larger IRX value than all the previous works while our sample from SMC attenuation law is comparable to the those of AM16 and F17. Our results from both attenuation law are not consistent with the result of B16. We note that the difference in the stellar mass of the sample is critical for the IRX–$\beta$ relation since both the IRX and $\beta$ values depend on the stellar mass (e.g., [@Alva16]; [@Bouw16]; [@Fink12]; [@Fuda17]), and we consider that the inconsistency between our results and B16 is attributed to the difference in the stellar mass. The red data point from F17 at $\beta \sim -1.7$ and IRX $\sim$ 10 (most left side point) is comparable to our result from attenuation law although the other data point from F17 is comparable to that from SMC. The authors mention that the most left side point is uncertain because of the small sample size of the bin. Therefore, our result from SMC attenuation law is not inconsistent with the previous works. ![image](Pic-Official.IRX-850um.v1cal.v3K5sigSecure-scat.eps){width="60mm"} ![image](Pic-Official.IRX-850um.v2smc.v6K5sig-scat.eps){width="60mm"} For the further verification of our result, figure \[fig1920\] shows the prediction of $S_{\mathrm{850}}$ for the case of the attenuation law (Left) and the SMC attenuation law (Right). The vertical axis is the predicted $S_{850}$ value and the horizontal axis is the predicted IRX value. The blue open diamonds represents the individual objects detected at $> 5\,\sigma$ level in *K* band photometry in our sample. The green filled circle with the orange error bars denotes the median value and median uncertainty of the whole sample. The uncertainty is also estimated from the uncertainty in Av. The horizontal magenta solid line denotes the flux density for the stacked LBG at *z* $\sim$ 4 measured in @Copp15 whose sample is quite similar to our sample. 
Since the sample in @Copp15 consists of the *K* band detected objects, we only show the $K > 5\,\sigma$ objects in the figure. According to @Copp15, the flux density measured for the stacked image is $S_{\mathrm{850}} = 0.411 \pm 0.064\>$mJy. We note that the result of @Copp15 is derived from the SED template library constructed by @Swin14 and the modified blackbody $+$ power-law formula is used just for checking the validity of their SED fitting analysis. The figure shows that the predicted $S_{\mathrm{850}}$ flux from the SMC attenuation law is insufficient to reproduce the stacking result of @Copp15, but the result from the attenuation law is consistent with the stacking result. This comparison indicates that a part of *z* $\sim$ 4 LBGs are indeed significantly dust attenuated and there must be IR luminous star-forming galaxies in our sample. Alternatively, at least the SMC attenuation law is unsuitable for high-*z* and *K*-detected LBGs. However, the difference between @Copp15 result and ours can be due to the fact that the stacking result is the average weighted by luminosities while our median values are not. Since the red LBGs in our sample can be easily detected and measured by using the ALMA, future ALMA observations for individual detection will potentially solve the discrepancy. We consider the possible interpretation of our optical/NIR-based IRX–$\beta$ relation. The IRX–$\beta$ relation is expressed as $\log_{10} \mathrm{IRX} = \log_{10}(10^{0.4*\mathrm{c1}*(\beta - \beta_{0})} - 1.0) + \mathrm{c2}$ where c1, c2, and $\beta_{0}$ are a constant value. The c1 value is the slope of the relation between dust attenuation Av and the observed UV slope $\beta$, $d \mathrm{Av}/d \beta$, which is specified by the dust extinction curve. The c2 value represents the bolometric correction because the observed UV and IR Luminosity is not the representative value and we need the correction factor for the observed values. The $\beta_{0}$ value is the intrinsic UV slope $\beta$ as investigated in this paper. In short, the IRX–$\beta$ relation assumes that the extinction curve and the stellar population hidden by dust does not vary significantly with the physical quantities of the star-forming galaxies. In the previous works for the IR-based IRX–$\beta$ relation, by using the fixed $\beta_{0}$ value ($\sim -2.2$), the authors discuss the suitable extinction curve for reproducing the IRX–$\beta$ relation seen in high redshift galaxies (e.g., [@Cpk15]; [@Alva16]; [@Bouw16]). @Redd18 explain the the IRX–$\beta$ relation of the *z* $\sim$ 2 galaxies by using the SMC attenuation law and the more bluer $\beta_{0}$ value ($\sim -2.6$), which is derived from the recent stellar population synthesis model. Moreover, @LeeKS12 and @Redd12 discuss the variation of the extinction curve according to the observed UV magnitude and the age of star-forming galaxies. From our analysis, assuming a certain dust extinction curve, the observed properties are not represented by the IRX–$\beta$ relation with the fixed $\beta_{0}$ value, and it is required that there is the variation of the intrinsic $\beta$ value or the variation of the extinction curve, or both, depending on the physical quantities of the star-forming galaxies. The prediction of the $S_{850}$ flux indicates that our sample is expected to include the highly dust attenuated and IR luminous galaxies which are explained by the attenuation law. 
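The two ingredients discussed above can be written down compactly. The sketch below implements (i) the parametric IRX-$\beta$ form quoted in the previous paragraph and (ii) a numerical inversion of the @Burg05 polynomial used to convert the best-fit $A_{\mathrm{FUV}}$ into an IRX value. It is a schematic illustration only: the coefficients passed in the demonstration calls (c1, c2, $\beta_{0}$, and the example $A_{\mathrm{FUV}}$) are placeholders and not the values adopted in our analysis.

```python
import numpy as np
from scipy.optimize import brentq

def irx_from_beta(beta, c1, c2, beta0):
    """Parametric IRX-beta relation:
    log10 IRX = log10(10**(0.4*c1*(beta - beta0)) - 1) + c2
    c1    : slope dA/dbeta set by the extinction curve
    c2    : bolometric-correction term
    beta0 : intrinsic (dust-free) UV slope; the relation is defined for beta > beta0
    """
    return 10.0 ** (np.log10(10.0 ** (0.4 * c1 * (beta - beta0)) - 1.0) + c2)

def afuv_from_logirx(x):
    """Burgarella et al. (2005) polynomial: A_FUV as a function of x = log10(L_TIR/L_FUV)."""
    return -0.028 * x**3 + 0.392 * x**2 + 1.094 * x + 0.546

def logirx_from_afuv(a_fuv):
    """Numerically invert the Burgarella et al. (2005) relation (monotonic for -1 < x < 5)."""
    return brentq(lambda x: afuv_from_logirx(x) - a_fuv, -1.0, 5.0)

# illustrative values only (not the coefficients adopted in the analysis)
print(irx_from_beta(-1.7, c1=2.0, c2=0.0, beta0=-2.2))   # IRX for a moderately red object
print(10.0 ** logirx_from_afuv(1.8))                      # IRX implied by A_FUV = 1.8 mag
```

Written this way, it is clear that changing the assumed extinction curve moves c1, while changing the stellar population hidden by dust moves $\beta_{0}$; either shift displaces the predicted IRX-$\beta$ locus in the manner discussed above.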
Therefore, while the less dusty galaxies can be characterized by either attenuation law of and SMC, the highly dust attenuated galaxies are most likely characterized by the attenuation law and the bluer intrinsic $\beta$ value. Although it is difficult to confirm the variation from our results, there seems to be the variation of the intrinsic $\beta$ value or the extinction curve according to the physical quantities of the star-forming galaxies. Conclusion {#S6Conc} ========== In this work, we investigate the UV slope $\beta$ and stellar population of bright star-forming galaxies at *z* $\sim$ 4 in the SXDS field which is the wide-area and deep survey field. We use the imaging data of Subaru/*BVRi’z’*updated-*z’*, UKIRT/*JHK*, HST/F125WF160W, and Spitzer/3.6$\>\micron$ 4.5$\>\micron$, and we construct the sample of star-forming galaxies at *z* $\sim$ 4 by both Lyman Break technique and photometric redshift selection. The UV slope $\beta$ is calculated by the simple power-law fit, and the stellar population is estimated from the optical and NIR photometry thorough the SED fitting analysis. Consequently, we find a sign that some star-forming galaxies, which experience on-going active star formation and suffer heavy dust attenuation, really exist in the *z* $\sim$ 4 universe. We list our main results below. - There seems to be little dependence of the observed UV slope $\beta$ on the observed UV absolute magnitude $M_{\mathrm{UV}}$ in the range of $-22.0 \lesssim M_{\mathrm{UV}} \lesssim -20.0$ although the dynamic range of $M_{\mathrm{UV}}$ is limited. The slope of the $\beta$–$M_{\mathrm{UV}}$ relation is $-$0.02 $\pm$ 0.02, and it is more shallower than the previous studies for similar redshift but fainter LBGs ($-$0.13 $\pm$ 0.02 from [@Bouw14] and $-$0.10 $\pm$ 0.03 from [@Kurc14]). - For investigating the dependence of the UV slope $\beta$ on the dust attenuation, age, metallicity, and SFH, we calculate the *intrinsic* (dust-corrected) UV slope, $\beta_{\mathrm{int}}$, and *intrinsic* UV absolute magnitude, $M_{\mathrm{UV,int}}$, by using the results of the SED fitting analysis. The star-forming galaxies with the bluest $\beta_{\mathrm{int}}$ and brightest $M_{\mathrm{UV,int}}$ value are the dusty star-forming population which is observed with the red $\beta_{\mathrm{obs}}$ value. The dusty star-forming population has $\beta_{\mathrm{obs}} > -1.73$, $\mathrm{Av} \geq 1.0$, $\beta_{\mathrm{int}} \leq -2.42$, and SFR $\gtrsim$ a few $\times\ 10^{2}\,\mathrm{M_{\solar}}\>\mathrm{yr^{-1}}$, and we see the flat $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ distribution due to such population. - We find the intersection point of the $\beta_{\mathrm{int}}$–$M_{\mathrm{UV,int}}$ relation and the $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation by extrapolating our relation toward the fainter magnitude range. The intersection point represents the position of the appearance of nearly dust-free population, and it is at $\beta = -1.94$ and $M_{UV} = -18.88$ which is close to the break point of $\beta_{\mathrm{obs}}$–$M_{\mathrm{UV,obs}}$ relation reported by @Bouw14. - Our result does not depend on the SFHs used in the SED fitting analysis. However, our result depends on the assumption of the attenuation law. The best-fit dust attenuation value assuming the SMC attenuation law is found to be smaller than that obtained with the attenuation law. The trend that the intrinsic $\beta$ value increases with the intrinsic $M_{\mathrm{UV,int}}$ value appears for both the cases. 
- We compare the observed color of the *zJK* broad-band filters with the expected colors. Since the *z-J* color traces the UV slope $\beta$ and the *J-K* color traces the Balmer break of *z* $\sim$ 4 LBGs, we can also infer the stellar population by the observed quantities. The observed color of the $\beta_{\mathrm{obs}} > -1.73$ sub-sample of the *z* $~$ 4 star-forming galaxies is well reproduced by star-forming, dusty, and young-age (blue $\beta_{\mathrm{int}}$) population. - We estimate the IRX ($= L_{\mathrm{TIR}}/L_{\mathrm{FUV}}$) value and the flux density at observed-frame 850$\>\micron$, $S_{\mathrm{850}}$, from only the optical and NIR imaging data. The optical/NIR-based IRX–$\beta$ relation indicates the variation of the intrinsic $\beta$ value or the variation of the dust attenuation law, or both, according to the physical quantities of the star-forming galaxies. The $S_{\mathrm{850}}$ value estimated from the SMC attenuation law is not consistent with the stacking results of @Copp15, and thus the attenuation law is preferable to the *z* $\sim$ 4 intrinsically luminous LBGs. - Our analysis indicates that a significant fraction of *z* $\sim$ 4 LBGs are the highly dust attenuated and IR luminous population such as ULIRGs/LIRGs. This population has not been recognized very well in the previous analysis but is important in understanding early phase of galaxy formation possibly linking the typical blue LBGs and the further very red sub-mm selected galaxies. We appreciate M. Kajisawa, A. Inoue, K. mawatari, and T. Hashimoto for helpful comments and discussions. This work is mainly based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. The UKIDSS project is defined in @Law07. UKIDSS uses the UKIRT Wide Field Camera (WFCAM; [@Cas07]). The photometric system is described in @Hew06, and the calibration is described in @Hodg09. The pipeline processing and science archive are described in Irwin et al (2009, in prep) and @Hamb08. We used UKIDSS data release 10. This work is based on observations taken by the CANDELS Multi-Cycle Treasury Program with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Data analysis was in part carried out on the open use data analysis computer system at the Astronomy Data Center, ADC, of the National Astronomical Observatory of Japan. We used the interactive analysis servers (anam\[01-16\]), the batch processing servers (bapm\[01-06\]), the terminal workstations (new-r\[01-13\]), and the disk space (home and mfst). This work was supported by JSPS KAKENHI Grant Number JP26400217. lvarez-M[á]{}rquez, J., Burgarella, D., Heinis, S., et al. 2016, , 587, A122 Ashby, M. L. N., Willner, S. P., Fazio, G. G., et al. 2013, , 769, 80 Baugh, C. M., Lacey, C. G., Frenk, C. S., et al. 2005, , 356, 1191 Bastian, N., Covey, K. R., Meyer, M. R., 2010, , 48, 339 Bertin, E. & Arnouts, S., 1996, , 117, 393 Bolzonella, M., Miralles, J.-M., Pello, R., et al. 2000, , 363, 476 Bisigello, L., Caputi, K. I., Grogin, N., & Koekemoer, A. 2018, , 609, A82 Bouchet, P., Lequeux, J., Maurice, E., Prévot, L. and Prévot-Burnichon, M. L., 1985, , 149, 330 Bouwens, R. J., Illingworth, G. D., Franx, M., et al. 2009, , 705, 936 Bouwens, R. J., Illingworth, G. 
D., Oesch, P. A., et al. 2010, , 708, L69 Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2012, , 754, 83 Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2014, , 793, 115 Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2015, , 811, 140 Bouwens, R. J., Smit, R., Labb[é]{}, I., et al. 2016a, , 831, 176 Bouwens, R. J., Aravena, M., Decarli, R., et al. 2016, , 833, 72 Bruzual, G. & Charlot, S., 2003, , 344, 1000 Burgarella, D., Buat, V. & Iglesias-Páramo, J., 2005, , 360, 1413 Burgarella, D., Buat, V., Gruppioni, C., et al. 2013, , 554, A70 Calzetti, D., Kinney, A. L., Storch-Bergmann, T., 1994, , 429, 582 Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, , 533, 682 Capak, P. L., Carilli, C., Jones, G., et al. 2015, , 522, 455 Caputi, K. I., Deshmukh, S., Ashby, M. L. N., et al. 2017, , 849, 45 Casali, M., Adamson, A., Alves de Oliveira, C., et al. 2007, , 467, 777 Coppin, K. E. K., Geach, J. E., Almaini, O., et al. 2015, , 446, 1293 Cucciati, O., Tresse, L., Ilbert, O., et al. 2012, , 539, A31 Duncan, K., Conselice, C. J., Mortlock, A., et al. 2014, , 444, 2960 Duncan, K. & Conselice, C. J., 2015, , 451, 2030 Dunlop, J. S., McLure, R. J., Robertson, B. E., et al. 2012, , 420, 901 Dunlop, J. S., Rogers, A. B., McLure, R. J., et al. 2013, , 432, 3520 Finkelstein, S. L., Papovich, C., Salmon, B., et al. 2012, , 756, 164 Fudamoto, Y., Oesch, P. A., Schinnerer, E., et al. 2017, , 472, 483 Fukugita, M., Ichikawa, T., Gunn, J. E., et al. 1996, , 111, 1748 Furusawa, H., Kosugi, G., Akiyama, M., et al. 2008, , 176, 1 Furusawa, H, Kashikawa, N, Kobayashi, M. A. R., et al. 2016, , 822, 46 Galametz, A., Grazian, A., Fontana, A., et al. 2013, , 206, 10 Grogin, N. A., Kocevski, D. D., Faber, S. M., et al. 2011, , 197, 35 Hambly, N. C., Collins, R. S., Cross, N. J. G., et al. 2008, , 384, 637 Hao, Cai-Na, Kennicutt, R. C., Johnson, B. D., et al. 2011, , 741, 124 Hatfield, P. W., Bowler, R. A. A., Jarvis, M. J., & Hale, C. L. 2018, , Heinis, S., Buat, V., B[é]{}thermin, M., et al. 2013, , 429, 1113 Hewett, P. C., Warren, S. J., Leggett, S. K., & Hodgkin, S. T., 2006, , 367, 454 Hirashita, H., & Kuo, T.-M. 2011, , 416, 1340 Hodgkin, S. T., Irwin, M. J., Hewett, P. C., & Warren, S. J., 2009, , 394, 675 Kennicutt, Jr, R. C., & Evans, II, N. J., 2012, , 50, 531 Koekemoer, A. M., Faber, S. M., Ferguson, H. C., et al. 2011, , 197, 36 Koprowski, M. P., Coppin, K. E. K., Geach, J. E., et al. 2018, arXiv:1801.00791 Kurczynski, P., Gawiser, E., Rafelski, M., et al. 2014, , 793, L5 Lawrence, A., Warren, S. J., Almaini, O., et al. 2007, , 379, 1599 Lee, K.-S., Dey, A., Reddy, N., et al. 2011, , 733, 99 Lee, K.-S., Alberts, S., Atlee, D., et al. 2012, , 758, L31 Leitherer, C., Schaerer, D., Goldader, J. D., et al. 1999, , 123, 3 Madau, P., & Dickinson, M. 2014, , 52, 415 McLure, R. J., Dunlop, J. S., Cullen, F., et al. 2018, , 476, 3991 Muerer, G. R., Heckman, T. M. & Calzetti, D. 1999, , 521, 64 Oke, J. B., & Gunn, J. E., 1983, , 266, 713 Oteo, I., Cepa, J., Bongiovanni, [Á]{}., et al. 2013, , 554, L3 Ouchi, M., Shimasaku, K., Okamura, S., et al. 2004, , 611, 685 Pannella, M., Carilli, C. L., Daddi, E., et al. 2009, , 698, L116 Prévot, M. L., Lequeux, J., Maurice, E., Prévot, L. and Rocca-Volmerange, B., 1984, , 132, 389 Reddy, N. A., Steidel, C. C., Fadda, D., 2006, , 644, 792 Reddy, N. A., Erb, D. K., Pettini, M, Steidel, C. C., Shapley, A. E., 2010, , 712, 1070 Reddy, N., Dickinson, M., Elbaz, D., et al. 2012, , 744, 154 Reddy, N. A., Oesch, P. A., Bouwens, R. J., et al. 
2018, , 853, 56 Rogers, A, B., McLure, R. J., Dunlop, J. S., 2013, , 429, 2456 Rogers, A. B., McLure, R. J., Dunlop, J. S., et al. 2014, , 440, 3714 Speagle, J. S., Steinhardt, C. L., Capak, P. L., Silverman, J. D., 2014, , 214, 15 Stanway, E. R., Eldridge, J. J., & Becker, G. D. 2016, , 456, 485 Steinhardt, C. L.,Speagle, J. S., Capak, P. L., Silverman, J. D., 2014, , 791, 25 Swinbank, A. M., Simpson, J. M., Smail, I., et al. 2014, , 438, 1267 Takeuchi, T. T., Yuan, F.-T., Ikeyama, A., Murata, K. L. & Inoue, A. K., 2012, , 755, 144 Tacconi, L. J., Genzel, R., Smail, I., et al. 2008, , 680, 246 Yoshida, M., Shimasaku, K., Kashikawa, N., et al. 2006, , 653, 988 Wilkins, S. M., Bunker, A. J., Stanway, E., Lorenzoni, S. & Caruana, J., 2011, , 417, 717 [^1]: $\langle$http://www.astromatic.net/software/sextractor$\rangle$ [^2]: $\langle$http://webast.ast.obs-mip.fr/hyperz/$\rangle$ [^3]: $\langle$http://www.bruzual.org/bc03/$\rangle$ [^4]: $\langle$http://www.stsci.edu/science/starburst99/docs/default.htm$\rangle$
--- abstract: 'In this article, we propose a new probability distribution named the power Maxwell distribution (PMaD). It is another extension of the Maxwell distribution (MaD) which provides more flexibility for analyzing data with a non-monotone failure rate. Different statistical properties such as reliability characteristics, moments, quantiles, mean deviation, generating functions, conditional moments, stochastic ordering, the residual lifetime function and various entropy measures have been derived. The estimation of the parameters of the proposed probability distribution is addressed by the maximum likelihood and Bayes estimation methods. The Bayes estimates are obtained under a gamma prior using the squared error loss function. Lastly, real-life applications of the proposed distribution are illustrated through different lifetime data.' author: - | [Abhimanyu Singh Yadav$^{1}$[^1], Hassan S. Bakouch $^2$, Sanjay Kumar Singh$^3$ and Umesh Singh$^3$]{}\ $^1$[Department of Statistics, Central University of Rajasthan, Ajmer, India]{}\ $^2$ [Department of Mathematics, Faculty of Science, Tanta University, Tanta, Egypt]{}\ $^3$[Department of Statistics and DST-CIMS, BHU, Varanasi, India]{} title: ' **Power Maxwell distribution: Statistical Properties, Estimation and Application**' --- **Keywords:** Maxwell distribution, Power Maxwell distribution, moments, stochastic order, entropy, Classical and Bayes estimation. Introduction ============ The Maxwell distribution has broad applications in statistical physics, physical chemistry, and related areas. Besides physics and chemistry, it also has a good number of applications in reliability theory. The Maxwell distribution was first used as a lifetime distribution by Tyagi and Bhattacharya (1989). Inferences based on the generalized Maxwell distribution were discussed by Chaturvedi and Rani (1998). Bekker and Roux (2005) considered the estimation of reliability characteristics for the Maxwell distribution under the Bayes paradigm. Radha and Venkatesan (2005) discussed the prior selection procedure for the Maxwell probability distribution. Shakil et al. (2008) studied the distributions of the product $|XY|$ and ratio $|X/Y|$ when X and Y are independent random variables having the Maxwell and Rayleigh distributions. Dey and Maiti (2010) proposed the Bayesian estimation of the parameter of the Maxwell distribution. Tomer and Panwar (2015) discussed the estimation procedure for the parameter of the Maxwell distribution in the presence of progressive type-I hybrid censored data. After this, Modi (2015) and Saghir and Khadim (2016) proposed the length-biased Maxwell distribution and discussed its various properties. Furthermore, several generalizations based on the Maxwell distribution have been advocated and statistically justified. Recently, two more extensions of the Maxwell distribution have been introduced by Sharma et al.
(2017a, 2017b), who discussed classical as well as Bayesian estimation of the parameters along with real-life applications.\ A random variable $Z$ follows the Maxwell distribution with scale parameter $\alpha$, denoted as $Z \sim MaD(\alpha )$, if its probability density function (pdf) and cumulative distribution function (cdf) are given by $$f(z,\alpha)=\dfrac{4}{\sqrt{\pi}} \alpha^{\frac{3}{2}} z^{2}e^{-\alpha z^{2}}~~~~z\ge 0,\alpha>0$$ and $$F(z,\alpha)=\dfrac{2}{\sqrt{\pi}}\gamma\left( \frac{3}{2},\alpha z^{2}\right)$$ respectively, where $\gamma (a, z) =\int_{0}^{z}p^{a-1}e^{-p}dp$ is the lower incomplete gamma function.\ In this article, we propose the PMaD as a new generalization of the Maxwell distribution and discuss its various statistical properties and applications. The objective of this article is to study the statistical properties of the PMaD and then estimate the unknown parameters using classical and Bayes estimation methods. Another motivation regarding the advantages of the PMaD comes from its flexibility to model data with a non-monotone failure rate. Thus, it can be taken as an excellent alternative to several inverted families of distributions. The uniqueness of this study comes from the fact that we provide a comprehensive description of the mathematical and statistical properties of this distribution with the hope that it will attract more extensive applications in biology, medicine, economics, reliability, engineering, and other areas of research.\ The rest of the paper is organized in the following manner. The introduction to the proposed study, including the methodological details, is given in Section 1. Section 2 provides some statistical properties related to the proposed model. The residual and reversed residual lifetime functions for the PMaD are derived in Section 3. In Section 4, several entropy measures are obtained. The MLE and Bayes estimation procedures are discussed in Section 5. In Section 6, a simulation study is carried out to compare the performance of the maximum likelihood estimates (MLEs) and Bayes estimates. In Section 7, we illustrate the application and usefulness of the proposed model by applying it to four data sets. Finally, Section 8 offers some concluding remarks. Power Maxwell Distribution and Statistical Properties ===================================================== In the statistical literature, several generalizations based on a given baseline probability distribution have been advocated according to the needs of the study. These generalized models accommodate various shapes of the hazard rate and are more flexible. Here, this paper provides another generalization of the MaD using a power transformation of Maxwell random variates. Let us consider the transformation $X=Z^{\frac{1}{\beta}}$, where $Z\sim MaD(\alpha)$. Then the resulting distribution of $X$ is called the power Maxwell distribution with parameters $\alpha$ and $\beta$. Henceforth, it is denoted by $X\sim PMaD(\alpha,\beta)$, where $\alpha$ and $\beta$ are the scale and shape parameters of the proposed distribution. The probability density function and cumulative distribution function of the PMaD are given by $$f(x,\alpha,\beta)=\dfrac{4}{\sqrt{\pi}} \alpha^{\frac{3}{2}}\beta x^{3\beta-1}e^{-\alpha x^{2\beta}}~~~~x\ge 0,\alpha,\beta>0$$ $$F(x,\alpha,\beta)=\dfrac{2}{\sqrt{\pi}}\gamma\left( \frac{3}{2},\alpha x^{2\beta}\right)$$ respectively.
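A short numerical sketch of these two functions is given below, together with the sampling scheme implied by the transformation $X=Z^{\frac{1}{\beta}}$: since $\alpha Z^{2}\sim \mathrm{Gamma}(3/2,1)$ when $Z\sim MaD(\alpha)$, PMaD variates can be drawn by transforming gamma variates. The code is only an illustration of the formulas above, written in Python; the parameter values in the Monte Carlo check are arbitrary.

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma P(a, z)

def pmad_pdf(x, alpha, beta):
    """PMaD density f(x) = (4/sqrt(pi)) * alpha^{3/2} * beta * x^{3b-1} * exp(-alpha*x^{2b})."""
    x = np.asarray(x, dtype=float)
    return (4.0 / np.sqrt(np.pi)) * alpha**1.5 * beta * x**(3.0 * beta - 1.0) \
        * np.exp(-alpha * x**(2.0 * beta))

def pmad_cdf(x, alpha, beta):
    """F(x) = (2/sqrt(pi)) * gamma_lower(3/2, alpha*x^{2b}) = P(3/2, alpha*x^{2b}),
    because Gamma(3/2) = sqrt(pi)/2, so the prefactor is absorbed by the regularized form."""
    x = np.asarray(x, dtype=float)
    return gammainc(1.5, alpha * x**(2.0 * beta))

def pmad_rvs(alpha, beta, size, rng=None):
    """Draw PMaD(alpha, beta) variates: if G ~ Gamma(3/2, 1) then X = (G/alpha)^{1/(2*beta)}."""
    rng = np.random.default_rng(rng)
    g = rng.gamma(shape=1.5, scale=1.0, size=size)
    return (g / alpha) ** (1.0 / (2.0 * beta))

# quick Monte Carlo check with arbitrary parameter values
alpha, beta = 0.5, 1.5
sample = pmad_rvs(alpha, beta, size=200_000, rng=1)
for q in (0.5, 1.0, 1.5):
    print(q, pmad_cdf(q, alpha, beta), np.mean(sample <= q))
```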
The different mathematical and statistical properties of the PMaD, such as the moments, reliability, hazard, median, mode, coefficient of variation, mean deviation, conditional moments, Lorenz curve, stochastic ordering, residual life and entropy measures, have been derived in the following subsections. ![image](density){width="6.5in" height="4in"} Asymptotic behaviour -------------------- This subsection describes the asymptotic nature of the density and survival functions of the proposed distribution. To illustrate the asymptotic behaviour, we first show that $\lim\limits_{x\rightarrow 0}f(x,\alpha,\beta)=0$ (for $\beta>1/3$) and $\lim\limits_{x\rightarrow \infty}f(x,\alpha,\beta)=0$. Therefore, using (2.1), $$\begin{split} \lim\limits_{x\rightarrow 0}f(x,\alpha,\beta)&=\dfrac{4}{\sqrt{\pi}} \alpha^{\frac{3}{2}}\beta\lim\limits_{x\rightarrow 0} x^{3\beta-1}e^{-\alpha x^{2\beta}}\\&=\dfrac{4}{\sqrt{\pi}} \alpha^{\frac{3}{2}}\beta\times 0=0, \end{split}$$ since $x^{3\beta-1}\rightarrow 0$ as $x\rightarrow 0$ whenever $3\beta-1>0$; hence $\lim\limits_{x\rightarrow 0}f(x,\alpha,\beta)=0$.\ Similarly, $$\begin{split} \lim\limits_{x\rightarrow \infty}f(x,\alpha,\beta)&=\dfrac{4}{\sqrt{\pi}} \alpha^{\frac{3}{2}}\beta\lim\limits_{x\rightarrow \infty} x^{3\beta-1}e^{-\alpha x^{2\beta}}\\&=0, \end{split}$$ because the exponential factor decays to zero faster than any power of $x$ grows; hence $\lim\limits_{x\rightarrow \infty}f(x,\alpha,\beta)=0$.\ In the same way, the asymptotic behaviour of the survival function can also be obtained, and we find that $\lim\limits_{x\rightarrow 0}S(x)=1$ and $\lim\limits_{x\rightarrow \infty}S(x)=0$. Reliability and hazard functions -------------------------------- The characteristics based on the reliability and hazard functions are very useful for studying the pattern of any lifetime phenomenon. The reliability and hazard functions of the proposed distribution are derived as follows: - The reliability function $R(x,\alpha,\beta)$ is given by $$R(x)=1-\dfrac{2}{\sqrt{\pi}}\gamma\left( \frac{3}{2},\alpha x^{2\beta}\right)$$ - The mean time to system failure (MTSF) is given as $$Mt(x)=\dfrac{2}{\sqrt{\pi}}\left( \frac{1}{\alpha}\right) ^{\frac{1}{2\beta}}\Gamma\left( \dfrac{3\beta+1}{2\beta}\right)$$ - The hazard function $H(x)$ is given as $$H(x)=\dfrac{4\alpha^{\frac{3}{2}}\beta x^{3\beta-1}e^{-\alpha x^{2\beta}}}{\sqrt{\pi}-2\gamma\left( \frac{3}{2},\alpha x^{2\beta}\right) }$$ ![image](hazard){width="6.5in" height="4in"} - The reverse hazard rate $h(x)$ is obtained as $$h(x)=\dfrac{2\alpha^{\frac{3}{2}}\beta x^{3\beta-1}e^{-\alpha x^{2\beta}}}{\gamma\left( \frac{3}{2},\alpha x^{2\beta}\right) }$$ - The odds function is defined as $$O(x)=\dfrac{2\gamma\left( \frac{3}{2},\alpha x^{2\beta}\right) }{\sqrt{\pi}-2\gamma\left( \frac{3}{2},\alpha x^{2\beta}\right) }$$ Moments ------- Let $x_1,x_2,\cdots, x_n$ be random observations from PMaD$(\alpha,\beta)$. The $r^{th}$ moment $\mu_r^{'}$ about the origin is defined as $$\begin{split} \mu_r^{'}&=E(x^{r})=\int_{x=0}^{\infty}x^{r}f(x,\alpha,\beta)\, dx\\&=\dfrac{2}{\sqrt{\pi}}\left( \dfrac{1}{\alpha}\right)^{\frac{r}{2\beta}}\Gamma\left( \dfrac{3\beta+r}{2\beta}\right) \end{split}$$ The first, second, third and fourth raw moments about the origin are obtained by putting $r=1, 2, 3, 4$ in the above expression. For $r=1$ we get the mean of the proposed distribution.
Thus, $$\mu_1^{'}=\dfrac{2}{\sqrt{\pi}}\left( \frac{1}{\alpha}\right) ^{\frac{1}{2\beta}}\Gamma\left( \dfrac{3\beta+1}{2\beta}\right)$$ and for $r=2, 3, 4$ $$\mu_2^{'}=\dfrac{2}{\sqrt{\pi}}\left( \frac{1}{\alpha}\right) ^{\frac{1}{\beta}}\Gamma\left( \dfrac{3\beta+2}{2\beta}\right)$$ $$\mu_3^{'}=\dfrac{2}{\sqrt{\pi}}\left( \frac{1}{\alpha}\right) ^{\frac{3}{2\beta}}\Gamma\left( \dfrac{3\beta+3}{2\beta}\right)$$ and $$\mu_4^{'}=\dfrac{2}{\sqrt{\pi}}\left( \frac{1}{\alpha}\right) ^{\frac{2}{\beta}}\Gamma\left( \dfrac{3\beta+4}{2\beta}\right)$$ The respective central moments can be evaluated using the following relations: $$\mu_2=\mu_2^{'}-\left( \mu_1^{'}\right)^{2}=\dfrac{4}{\pi}\left( \dfrac{1}{\alpha}\right)^{\frac{1}{\beta}}\left[ \sqrt{\frac{\pi}{4}} \Gamma\left( \dfrac{3\beta+2}{2\beta}\right) -\left\lbrace \Gamma\left( \dfrac{3\beta+1}{2\beta}\right)\right\rbrace^{2} \right]$$ $$\mu_3=\mu_3^{'}-3\mu_2^{'}\mu_1^{'}+2\left( \mu_1^{'}\right) ^{3}$$ $$\mu_4=\mu_4^{'}-4\mu_3^{'}\mu_1^{'}+6\mu_2^{'}\left( \mu_1^{'}\right) ^{2}-3\left( \mu_1^{'}\right) ^{4}$$ Coefficient of Skewness and Kurtosis ------------------------------------ The coefficients of skewness and kurtosis measure the asymmetry and the peakedness of the distribution, respectively. They are obtained from the moment-based relations suggested by Pearson: $$\beta_1=\dfrac{\left[ \mu_3^{'}-3\mu_2^{'}\mu_1^{'}+2\left( \mu_1^{'}\right) ^{3}\right] ^{2}}{\left[\mu_2^{'}-\left( \mu_1^{'}\right)^{2} \right]^{3} }$$ and $$\beta_2=\dfrac{\mu_4^{'}-4\mu_3^{'}\mu_1^{'}+6\mu_2^{'}\left( \mu_1^{'}\right) ^{2}-3\left( \mu_1^{'}\right) ^{4}}{\left[\mu_2^{'}-\left( \mu_1^{'}\right)^{2} \right]^{2} }$$ These values are calculated in Table 1 for different combinations of the model parameters, and it is observed that the shape of the PMaD is right skewed, and almost symmetrical for some choices of $\alpha, \beta$. Also, it can be platykurtic, mesokurtic or leptokurtic; thus the PMaD may be used to model skewed as well as symmetric data. Coefficient of variation ------------------------ The coefficient of variation (CV) is calculated by $$CV=\dfrac{\sqrt{\mu_2^{'}-\left( \mu_1^{'}\right)^{2}}}{\mu_1^{'}}\times 100$$ Mode and Median --------------- The mode $(M_0)$ of PMaD $(\alpha,\beta)$ is obtained by solving the equation $$\dfrac{d}{dx}f(x,\alpha,\beta)|_{M_0}=0$$ which yields $$M_0=\left( \dfrac{3\beta-1}{2\alpha\beta}\right)^{\frac{1}{2\beta}}$$ provided $\beta>1/3$, so that an interior maximum exists. The median $(M_{d})$ of the proposed distribution can be approximated by using the empirical relation among the mean, median and mode.
Thus, the median is $$M_{d}=\frac{1}{3} M_{0}+\frac{2}{3}\mu_1^{'}=\frac{1}{3} \left[ \left( \dfrac{3\beta-1}{2\alpha\beta}\right)^{\frac{1}{2\beta}}+ \dfrac{4}{\sqrt{\pi}}\left( \frac{1}{\alpha}\right) ^{\frac{1}{2\beta}}\Gamma\left( \dfrac{3\beta+1}{2\beta}\right)\right]$$

Table 1: Mean ($\mu_1^{'}$), variance ($\mu_2$), skewness ($\beta_1$), kurtosis ($\beta_2$), mode ($M_0$) and coefficient of variation (CV) of the PMaD for different combinations of $\alpha$ and $\beta$; the first block fixes $\alpha=0.5$ and varies $\beta$, the second fixes $\beta=0.75$ and varies $\alpha$, and the third sets $\alpha=\beta$.

| $\alpha,\ \beta$ | $\mu_1^{'}$ | $\mu_2$ | $\beta_1$ | $\beta_2$ | $M_0$ | CV |
|------------------|-------------|---------|-----------|-----------|--------|--------|
| 0.5, 0.5 | 3.0008 | 5.9992 | 2.6675 | 7.0010 | 1.0000 | 0.8162 |
| 0.5, 1.0 | 1.5962 | 0.4530 | 0.2384 | 3.1071 | 1.4142 | 0.4217 |
| 0.5, 1.5 | 1.3376 | 0.1499 | 0.0102 | 2.7882 | 1.3264 | 0.2894 |
| 0.5, 2.5 | 1.1780 | 0.0445 | 0.0481 | 2.7890 | 1.2106 | 0.1792 |
| 0.5, 3.5 | 1.1204 | 0.0211 | 0.1037 | 2.4351 | 1.1533 | 0.1298 |
| 0.5, 0.75 | 1.9392 | 1.1443 | 0.7425 | 3.8789 | 1.4057 | 0.5516 |
| 1.0, 0.75 | 1.2216 | 0.4541 | 0.7425 | 3.8789 | 0.8855 | 0.5516 |
| 1.5, 0.75 | 0.9323 | 0.2645 | 0.7425 | 3.8789 | 0.6758 | 0.5516 |
| 2.5, 0.75 | 0.6632 | 0.1338 | 0.7425 | 3.8789 | 0.4807 | 0.5516 |
| 3.5, 0.75 | 0.5299 | 0.0855 | 0.7425 | 3.8789 | 0.3841 | 0.5516 |
| 1, 1 | 1.1287 | 0.2265 | 0.2384 | 3.1071 | 1.0000 | 0.4217 |
| 2, 2 | 0.8723 | 0.0372 | 0.0102 | 2.7895 | 0.8891 | 0.2212 |
| 3, 3 | 0.8484 | 0.0163 | 0.0831 | 2.6907 | 0.8736 | 0.1506 |
| 4, 4 | 0.8509 | 0.0094 | 0.1069 | 1.9643 | 0.8750 | 0.1140 |
| 5, 5 | 0.8586 | 0.0062 | 0.0677 | 0.1072 | 0.8805 | 0.0915 |

Mean Deviation -------------- The mean deviation (MD) about the mean $(\mu^{'}_{1}=\mu)$ is defined by $$\begin{split} MD&=\int_{x}|x-\mu| f(x,\alpha,\beta)\, dx\\&=\int_{x=0}^{\mu}(\mu-x)f(x,\alpha,\beta)\, dx+\int_{x=\mu}^{\infty}(x-\mu) f(x,\alpha,\beta)\, dx \end{split}$$ After simplification, we get $$\begin{split} MD&=2\mu F(\mu)-2\int_{0}^{\mu} x\, f(x,\alpha,\beta)\, dx\\&=\dfrac{4}{\sqrt{\pi}}\left[ \mu\,\gamma\left( \frac{3}{2},\alpha \mu^{2\beta}\right)-\left(\dfrac{1}{\alpha}\right)^{\frac{1}{2\beta}}\gamma\left( \dfrac{3\beta+1}{2\beta},\alpha \mu^{2\beta}\right)\right] \end{split}$$ Generating Functions -------------------- In distribution theory, generating functions are very useful for generating the respective moments of a distribution, and they uniquely determine the distribution. The different generating functions for PMaD $(\alpha,\beta)$ have been calculated as follows: - The moment generating function (mgf) $M_X(t)$ of a random variable $X$ is obtained as $$M_X(t)=E(e^{tX})=\dfrac{2}{\sqrt{\pi}}\sum_{r=0}^{\infty}\frac{t^{r}}{r!} \left( \dfrac{1}{\alpha}\right)^{\frac{r}{2\beta}} \Gamma\left( \dfrac{3\beta+r}{2\beta}\right)$$ - The characteristic function (chf) $\phi_X(t)$ of a random variable $X$ is obtained by replacing $t$ by $jt$: $$\phi_X(t)=E(e^{jtX})=\dfrac{2}{\sqrt{\pi}}\sum_{r=0}^{\infty}\frac{(jt)^{r}}{r!} \left( \dfrac{1}{\alpha}\right)^{\frac{r}{2\beta}} \Gamma\left( \dfrac{3\beta+r}{2\beta}\right)$$ where $j^2=-1$.
- The kumulants generating function (KGF) is obtained as $$K_X(t)=\ln \left[ \dfrac{2}{\sqrt{\pi}}\sum_{i=0}^{\infty}\frac{1}{j!} \left( \dfrac{t}{\alpha^{2\beta}}\right)^{r} \Gamma\left( \dfrac{3\beta+r}{2\beta}\right)\right]$$ Conditional Moment and MGF -------------------------- Let $X$ be a random variable from PMaD$ (\alpha,\beta)$, then conditional moments $E(X^r|X>k)$ and conditional mgf $E(e^{tx}|X>k)$ are evaluated in following expressions; $$\begin{split} E(X^r|X>k)&=\dfrac{\int_{x>k}x^r f(x,\alpha,\beta) dx}{\int_{x>k} f(x,\alpha,\beta) dx}\\&=\dfrac{2\left(\frac{1}{\alpha}\right)^{\frac{r}{2\beta}}\gamma\left( \dfrac{3\beta+r}{2\beta},\alpha x^{2\beta}\right)}{\sqrt{\pi}-2\gamma\left( \frac{3}{2},\alpha x^{2\beta}\right)} \end{split}$$ $$\begin{split} E(e^{tx}|X>k)&=\dfrac{\int_{x>k}e^{tx}f(x,\alpha,\beta) dx}{\int_{x>k} f(x,\alpha,\beta) dx}\\&=\dfrac{2\sum_{i=0}^{\infty}\dfrac{t^i}{i!}\left(\frac{1}{\alpha}\right)^{\frac{r}{2\beta}}\gamma\left( \dfrac{3\beta+r}{2\beta},\alpha x^{2\beta}\right)}{\sqrt{\pi}-2\gamma\left( \frac{3}{2},\alpha x^{2\beta}\right)} \end{split}$$ respectively. Bonferroni and Lorenz Curves ---------------------------- In economics to measure the income and poverty level, Bonferroni and Lorenz curves are frequently used. These two have good linkup to each other and has more comprehensive applications in actuarial as well as in demography. It was initially proposed and studied by Bonferroni (1920), matthematically, it is defined as; $$\zeta(\nu)_{b}=\dfrac{1}{\nu \mu}\int_{0}^{q}xf(x,\alpha,\beta)dx$$ $$\zeta(\nu)_{l}=\dfrac{1}{ \mu}\int_{0}^{q}xf(x,\alpha,\beta)dx$$ respectively. where $q=F^{-1}(\nu)$ and $\mu=E(X)$. Hence using eqn (2.1), the above two equations are reduces as $$\zeta(\nu)_{b}=\dfrac{\sqrt{\alpha}~IG\left( \frac{1+2\beta}{2\beta},\alpha q^{2\beta}\right) }{\nu\Gamma\left( \frac{3\beta+1}{2\beta}\right) }$$ $$\zeta(\nu)_{l}=\dfrac{\sqrt{\alpha}~IG\left( \frac{1+2\beta}{2\beta},\alpha q^{2\beta}\right) }{\Gamma\left( \frac{3\beta+1}{2\beta}\right) }$$ Stochastic Ordering ------------------- A random variable $X$ is said to be stochastically greater $(Y \le_{st} X)$ than $Y$ if $F_X(x)\le F_Y (x)$ for all $x$. In the similar way, $X$ is said to be stochastically greater $(X \le_{st} Y )$ than $Y$ in the - hazard rate order $(X \le_{hr} Y )$ if $h_X(x) \ge h_Y (x)$ $\forall x$. - mean residual life order $(X \le_{mrl} Y )$ if $m_X(x) \ge m_Y (x)$ $\forall x$. - likelihood ratio order $(X \le_{lr} Y )$ if $\left[\dfrac{f_{X}(x)}{f_{Y}(x)}\right]$ decreases in x. From the above relations, we can veryfied that; $$(X \le_{lr} Y )\Rightarrow (X \le_{hr} Y )\Downarrow (X \le_{st} Y )\Rightarrow (X \le_{mrl} Y )$$ The PMaD is ordered with respect to the strongest likelihood ratio ordering as shown in the following theorem.\ **Theorem:** Let $X\sim PMaD(\alpha_1,\beta_1)$ and $Y\sim PMaD(\alpha_2,\beta_2)$. Then $(X \le_{lr} Y )$ and hence $(X \le_{hr} Y )$, $(X \le_{mrl} Y )$ and $(X \le_{st} Y )$ for all values of $\alpha_i,\beta_i$; $i=1, 2$.\ **Proof:** The likelihood ratio is $\left[\dfrac{f_{X}(x)}{f_{Y}(x)}\right]$ i.e. 
$$\begin{split} \Phi=\dfrac{f_{X}(x)}{f_{Y}(x)}&=\left( \dfrac{\alpha_1}{\alpha_2}\right)^{\frac{3}{2}} \left( \dfrac{\beta_1}{\beta_2}\right) x^{3(\beta_1-\beta_2)} e^{-(\alpha_1 x^{2\beta_1}+\alpha_2 x^{2\beta_2})} \end{split}$$ Therefore, $$\Phi^{'}=\dfrac{d}{dx}\log \left( \dfrac{f_{X}(x)}{f_{Y}(x)}\right) =\dfrac{1}{x}\left[ 3(\beta_1-\beta_2)-(\alpha_1 x^{2\beta_1}+\alpha_2 x^{2\beta_2})\right]$$ If $\beta_1=\beta_2=\beta$ (say), then $\Phi^{'}<0$, which shows that $(X \le_{lr} Y )$. The remaining ordering behaviour can be proved in the same way. Also, if $\alpha_1=\alpha_2=\alpha$ (say) and $\beta_1<\beta_2$, then again $\Phi^{'}<0$, which shows that $(X \le_{lr} Y )$. The remaining orderings can be proved in the same way.

Residual Lifetime
=================

In survival analysis, the term residual lifetime is often used to describe the remaining lifetime of a system. Here, we derive the expressions of the residual life and the reversed residual life for the PMaD. The residual lifetime is defined by $ R_{t}= P[X -t|X > t]$, $t \ge0$, and the reversed residual life is described as $\bar{R}_{t} = P[t -X|X \le t]$, which denotes the time elapsed since the failure of a component given that its lifetime is less than or equal to $t$.

- **Residual life time function**\
The survival function of the residual lifetime is given by $$S_{R_t}(x)=\dfrac{S(t+x)}{S(t)}= \dfrac{\sqrt{\pi}-2\gamma\left[ \frac{3}{2},\alpha (x+t)^{2\beta}\right] }{\sqrt{\pi}-2\gamma\left( \frac{3}{2},\alpha t^{2\beta}\right) }~~~~~ ;x> 0$$ The corresponding probability density function is $$f_{R_t}(x)= \dfrac{4 \alpha^{\frac{3}{2}}\beta (x+t)^{3\beta-1}e^{-\alpha (x+t)^{2\beta}}}{\sqrt{\pi}-2\gamma\left( \frac{3}{2},\alpha t^{2\beta}\right) }$$ Thus, the hazard function is obtained as $$h_{R_t}(x)=\dfrac{4 \alpha^{\frac{3}{2}}\beta (x+t)^{3\beta-1}e^{-\alpha (x+t)^{2\beta}}}{\sqrt{\pi}-2\gamma\left[ \frac{3}{2},\alpha (x+t)^{2\beta}\right]}$$

- **Reversed residual lifetime function**\
The survival function of the reversed residual lifetime is given by $$S_{\bar{R}_t}=\dfrac{F(t-x)}{F(t)}= \dfrac{\gamma\left[ \frac{3}{2},\alpha (t-x)^{2\beta}\right] }{\gamma\left( \frac{3}{2},\alpha t^{2\beta}\right) }~~~~~ ;0\le x<t$$ The associated pdf is evaluated as $$f_{\bar{R}_t}=\dfrac{2 \alpha^{\frac{3}{2}}\beta (t-x)^{3\beta-1}e^{-\alpha (t-x)^{2\beta}}}{\gamma\left( \frac{3}{2},\alpha t^{2\beta}\right) }$$ Hence, the hazard function based on the reversed residual lifetime is obtained as $$h_{\bar{R}_t}=\dfrac{2 \alpha^{\frac{3}{2}}\beta (t-x)^{3\beta-1}e^{-\alpha (t-x)^{2\beta}}}{\gamma\left[ \frac{3}{2},\alpha (t-x)^{2\beta}\right]}$$

Entropy Measurements
====================

In information theory, entropy plays a vital role in quantifying the uncertainty associated with a probability distribution. In this section, we discuss different entropy measures for the PMaD. For more details about entropy measures, see Renyi (1961).

Renyi Entropy
-------------

The Renyi entropy of a r.v.
$x$ is defined as $$\begin{split} R_E&=\dfrac{1}{(1-\delta)}\ln\left[ \int_{x=0}^{\infty}f^{\delta}(x,\alpha,\beta)dx\right]\\&=\dfrac{1}{(1-\delta)}\ln \left[ \int_{x=0}^{\infty}\left\lbrace \dfrac{4}{\sqrt{\pi}} \alpha^{\frac{3}{2}}\beta x^{3\beta-1}e^{-\alpha x^{2\beta}} \right\rbrace^{\delta} dx \right] \end{split}$$ Hence, after solving the integral, we get the following $$\begin{split} R_E&=\dfrac{1}{(1-\delta)}\left[\delta \ln 4-\frac{\delta}{2}\ln\pi+\delta\ln\beta-\dfrac{1-\delta-2\beta}{2\beta}\ln\alpha-\dfrac{3\delta\beta-\delta+1}{2\beta}\ln \delta+\ln \Gamma\left( \dfrac{3\beta\delta-\delta+1}{2\beta}\right) \right] \end{split}$$

$\Delta$-Entropy
----------------

The $\Delta$-entropy is obtained as follows $$\Delta_E=\dfrac{1}{\Delta-1}\left[1-\int_{x=0}^{\infty}f^{\Delta}(x,\alpha,\beta)dx \right]$$ Using the pdf (1.4) and after simplification, the expression for the $\Delta$-entropy is given by $$\Delta_E=\dfrac{1}{\Delta-1}\left[1-\left( \dfrac{4}{\sqrt{\pi}}\right) ^{\Delta}\beta^{\Delta} \left( \dfrac{1}{\alpha}\right)^{\dfrac{1-\Delta-2\beta}{2\beta}} \left( \dfrac{\Gamma\left(\dfrac{3\Delta\beta-\Delta+1}{2\beta} \right) }{\Delta^{\dfrac{3\Delta\beta-\Delta+1}{2\beta}}}\right) \right]$$

Generalized Entropy
-------------------

The generalized entropy is obtained as $$G_E=\dfrac{\nu_\lambda\mu^{-\lambda}-1}{\lambda(\lambda-1)}~~~~~;\lambda\ne 0,1$$ where $\nu_\lambda=\int_{x=0}^{\infty}x^{\lambda}f(x,\alpha,\beta)dx$ and $\mu=E(X)$. The value of $\nu_\lambda$ is calculated as $$\begin{split} \nu_\lambda&=\dfrac{2}{\sqrt{\pi}}\left( \dfrac{1}{\alpha}\right)^{\frac{\lambda}{2\beta}}\Gamma\left( \dfrac{3\beta+\lambda}{2\beta}\right) \end{split}$$ Using (4.6) and (2.8), we get $$G_E=\dfrac{\left( \dfrac{4}{\pi}\right)^{\frac{1-\lambda}{2}} \Gamma\left( \dfrac{3\beta+\lambda}{2\beta}\right) \left\lbrace \Gamma\left( \dfrac{3\beta+1}{2\beta}\right) \right\rbrace ^{-\lambda}-1}{\lambda(\lambda-1)} ~~~~~;\lambda\ne0,1$$

Parameter Estimation
====================

Here, we describe the maximum likelihood and Bayes estimation methods for estimating the unknown parameters $\alpha, \beta$ of the PMaD. The estimators obtained under these methods are not available in closed form; thus, numerical approximation techniques are used to obtain the solutions. Further, the performance of these estimators is studied through Monte Carlo simulation.

Maximum Likelihood Estimation
-----------------------------

The most popular and efficient method of classical parameter estimation is maximum likelihood. The estimators obtained by this method possess several optimality properties. Maximum likelihood estimation requires the formulation of the likelihood function. Thus, let us suppose that $X_1,X_2,\cdots,X_n$ is an iid random sample of size $n$ taken from PMaD $(\alpha,\beta)$.
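For concreteness, such a sample, together with the density and distribution function used repeatedly above, can be set up in R (the software used for the computations reported later in the paper). The helper names `dpmad`, `ppmad` and `rpmad` below are ours, not the paper's; the sampler exploits the fact that $\alpha X^{2\beta}\sim\textrm{Gamma}(3/2,1)$, which follows directly from the cdf $F(x)=\frac{2}{\sqrt{\pi}}\gamma(3/2,\alpha x^{2\beta})$, whereas the paper itself mentions a Newton-Raphson based generator. With these helpers in place, the likelihood that follows is straightforward to evaluate.

``` r
# Density, cdf and a sampler for PMaD(alpha, beta); the names dpmad/ppmad/rpmad are ours.
# pgamma(z, shape = 1.5) is the regularised lower incomplete gamma, i.e. (2/sqrt(pi)) * gamma(3/2, z).
dpmad <- function(x, alpha, beta)
  (4 / sqrt(pi)) * alpha^1.5 * beta * x^(3 * beta - 1) * exp(-alpha * x^(2 * beta))
ppmad <- function(q, alpha, beta) pgamma(alpha * q^(2 * beta), shape = 1.5)
rpmad <- function(n, alpha, beta) (rgamma(n, shape = 1.5) / alpha)^(1 / (2 * beta))

x <- rpmad(50, alpha = 0.75, beta = 0.75)   # an illustrative sample of size 50
```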
The likelihood function is written as $$\begin{split} L(\alpha,\beta)&=\prod_{i=1}^{n}\dfrac{4}{\sqrt{\pi}} \alpha^{\frac{3}{2}}\beta x_i^{3\beta-1}e^{-\alpha x_i^{2\beta}}=\dfrac{4^{n}}{\pi^{n/2}} \alpha^{\frac{3n}{2}}\beta^{n} e^{-\alpha\sum_{i=1}^{n} x_i^{2\beta}}\left( \prod_{i=1}^{n}x_i^{3\beta-1}\right) \end{split}$$ The log-likelihood function is written as $$\ln L(\alpha,\beta)=l=n\ln 4-\frac{n}{2}\ln \pi+\dfrac{3n}{2}\ln \alpha+n\ln\beta -\alpha\sum_{i=1}^{n}x_i^{2\beta}+(3\beta-1)\sum_{i=1}^{n}\ln x_i$$ For the MLEs of $\alpha$ and $\beta$, we set $$\dfrac{\partial l}{\partial\alpha}=0~~~\&~~~\dfrac{\partial l}{\partial\beta}=0$$ which yield $$\dfrac{3n}{2\alpha}-\sum_{i=1}^{n}x_i^{2\beta}=0$$ $$\dfrac{n}{\beta}-2\alpha\sum_{i=1}^{n} x_i^{2\beta}\ln x_i+3\sum_{i=1}^{n}\ln x_i =0$$ The MLEs of the parameters are obtained by solving the above two equations simultaneously. Here, we use non-linear maximization techniques to obtain the solution.

### Uniqueness of MLEs

The uniqueness of the MLEs discussed above can be checked using the following propositions.\ **Proposition 1:** If $\beta$ is fixed, then $\hat{\alpha}$ exists and is unique.\ **Proof:** Let $L_{\alpha}=\dfrac{3n}{2\alpha}-\sum_{i=1}^{n}x_i^{2\beta}$. Since $L_{\alpha}$ is continuous with $\lim\limits_{\alpha\rightarrow 0}L_{\alpha}=\infty$ and $\lim\limits_{\alpha\rightarrow \infty}L_{\alpha}=-\sum_{i=1}^{n}x_i^{2\beta}<0$, $L_{\alpha}$ has at least one root in the interval $(0,\infty)$. Moreover, $L_{\alpha}$ is strictly decreasing in $\alpha$, since $\partial L_{\alpha}/\partial\alpha=-3n/(2\alpha^{2})<0$. Thus, $L_{\alpha}=0$ has a unique solution in $(0,\infty)$.\ **Proposition 2:** If $\alpha$ is fixed, then $\hat{\beta}$ exists and is unique.\ **Proof:** Let $L_{\beta}=\dfrac{n}{\beta}-2\alpha\sum_{i=1}^{n} x_i^{2\beta}\ln x_i+3\sum_{i=1}^{n}\ln x_i$. Since $L_{\beta}$ is continuous with $\lim\limits_{\beta\rightarrow 0}L_{\beta}=\infty$ and $\lim\limits_{\beta\rightarrow \infty}L_{\beta}<0$, it follows, in the same way as above, that $\hat{\beta}$ exists and is unique.

### Fisher Information Matrix

Here, we derive the Fisher information matrix for constructing $100(1-\Psi)\%$ asymptotic confidence intervals for the parameters using large-sample theory. The Fisher information matrix can be obtained by using equation (5.2) as $$I(\hat{\alpha},\hat{\beta})=-E\begin{pmatrix} l_{\alpha\alpha} & l_{\alpha\beta}\\ \\ l_{\beta\alpha} & l_{\beta\beta} \end{pmatrix}_{(\hat{\alpha},\hat{\beta})} \eqno{(5.2.1)}$$ where $$l_{\alpha\alpha}=-\dfrac{3n}{2\alpha^2},~~l_{\alpha\beta}=-2\sum_{i=1}^{n}x_i^{2\beta}\ln x_i, ~~ l_{\beta\beta}=-\dfrac{n}{\beta^2}-4\alpha\sum_{i=1}^{n}x_i^{2\beta}(\ln x_i)^{2}$$ The above matrix can be inverted, and the diagonal elements of $I^{-1}(\hat{\alpha},\hat{\beta})$ provide the asymptotic variances of $\hat{\alpha}$ and $\hat{\beta}$, respectively. Two-sided $100(1-\Psi)\%$ asymptotic confidence intervals for $\alpha$ and $\beta$ are then obtained as $$[\alpha_l,\alpha_u] \in [\hat{\alpha}\mp Z_{1-\frac{\Psi}{2}}\sqrt{var(\hat{\alpha})}]$$ $$[\beta_l,\beta_u] \in [\hat{\beta}\mp Z_{1-\frac{\Psi}{2}}\sqrt{var(\hat{\beta})}]$$ respectively.

Bayes Estimation
----------------

In this subsection, the Bayes estimation procedure for the PMaD is developed. In this framework, the unknown parameters are treated as random variables, and this randomness is quantified in the form of a prior distribution.
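Before moving on to the Bayesian analysis, the maximum-likelihood fit and the information-based intervals described above can be sketched in R with the `nlm()` routine mentioned later in the paper. The negative log-likelihood below is a direct transcription of the log-likelihood above; the object `data`, the starting values and the confidence level are illustrative choices of ours.

``` r
# Negative log-likelihood of PMaD; par = c(alpha, beta), x = observed sample
negll <- function(par, x) {
  a <- par[1]; b <- par[2]
  if (a <= 0 || b <= 0) return(1e10)          # keep the optimiser inside the parameter space
  -(length(x) * (log(4) - 0.5 * log(pi) + 1.5 * log(a) + log(b)) -
      a * sum(x^(2 * b)) + (3 * b - 1) * sum(log(x)))
}

fit  <- nlm(negll, p = c(1, 1), x = data, hessian = TRUE)  # 'data' is the observed sample
mle  <- fit$estimate                                       # (alpha_hat, beta_hat)
vcov <- solve(fit$hessian)                                 # inverse observed information
ci   <- cbind(lower = mle - qnorm(0.975) * sqrt(diag(vcov)),
              upper = mle + qnorm(0.975) * sqrt(diag(vcov)))
```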
Here, we take two independent gamma priors for the shape and scale parameters. The gamma prior is quite flexible, as it can assume a variety of shapes. Thus, the joint prior $g(\alpha,\beta)$ is given by $$g(\alpha,\beta)\propto \alpha^{a-1}\beta^{c-1}e^{-b\alpha-d\beta}~~;~~~\alpha, \beta>0$$ where $a$, $b$, $c$ and $d$ are the hyperparameters of the considered priors. Using (5.1) and (5.5), the joint posterior density function $\pi(\alpha,\beta|x)$ is derived as $$\begin{split} \pi(\alpha,\beta|x)&= \dfrac{L(x|\alpha,\beta)g(\alpha,\beta)}{\int_{\alpha}\int_{\beta} L(x|\alpha,\beta)g(\alpha,\beta) d\alpha\,d\beta}\\&= \dfrac{\alpha^{\frac{3n}{2}+a-1}\beta^{n+c-1} e^{-\alpha \left(b+\sum_{i=1}^{n} x_i^{2\beta}\right) }e^{-d\beta}\left( \prod_{i=1}^{n}x_i^{3\beta-1}\right) }{\int_{\alpha}\int_{\beta}\alpha^{\frac{3n}{2}+a-1}\beta^{n+c-1} e^{-\alpha \left(b+\sum_{i=1}^{n} x_i^{2\beta}\right) }e^{-d\beta}\left( \prod_{i=1}^{n}x_i^{3\beta-1}\right) \quad d\alpha\quad d\beta} \end{split}$$ In Bayesian analysis, the specification of a proper loss function plays an important role. Here, we use the most frequently employed loss, the squared error loss function (SELF), to obtain the estimates of the parameters. It is defined as $$L(\phi,\hat{\phi})\propto \left( \phi-\hat{\phi}\right)^{2}$$ where $\hat{\phi}$ is an estimate of $\phi$. The Bayes estimate under SELF is the posterior mean, evaluated as $$\hat{\phi}_{SELF}=\left[ E(\phi|x)\right]$$ provided the expectation exists and is finite. Thus, the Bayes estimators based on equation (5.6) under SELF are given by $$\hat{\alpha}_{bs}=E(\alpha|x)=\dfrac{1}{\eta} \int_{\alpha}\int_{\beta}\alpha^{\frac{3n}{2}+a}\beta^{n+c-1} e^{-\alpha \left(b+\sum_{i=1}^{n} x_i^{2\beta}\right) }e^{-d\beta}\left( \prod_{i=1}^{n}x_i^{3\beta-1}\right) ~ d\alpha~ d\beta$$ and $$\hat{\beta}_{bs}=E(\beta|x)=\dfrac{1}{\eta} \int_{\alpha}\int_{\beta}\alpha^{\frac{3n}{2}+a-1}\beta^{n+c} e^{-\alpha \left(b+\sum_{i=1}^{n} x_i^{2\beta}\right) }e^{-d\beta}\left( \prod_{i=1}^{n}x_i^{3\beta-1}\right) ~ d\alpha~ d\beta$$ where $\eta=\int_{\alpha}\int_{\beta}\alpha^{\frac{3n}{2}+a-1}\beta^{n+c-1} e^{-\alpha \left(b+\sum_{i=1}^{n} x_i^{2\beta}\right) }e^{-d\beta}\left( \prod_{i=1}^{n}x_i^{3\beta-1}\right) ~d\alpha~ d\beta$ is the normalizing constant.\ From equations (5.9) and (5.10), it is easy to observe that the posterior expectations appear as ratios of two integrals. Thus, analytical solutions for these expectations are not obtainable, and numerical approximation techniques have to be employed to obtain them. Here, we use one of the most popular and quite effective approximation techniques, suggested by Lindley (1980).
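As an independent sanity check on the approximation described next, the ratios of integrals in (5.9)-(5.10) can also be evaluated by brute-force numerical integration over a two-dimensional grid. This is our own sketch, not part of the paper's procedure; the hyperparameter values and grid ranges are illustrative, and the grid must be wide enough to cover the region where the posterior has appreciable mass.

``` r
# Brute-force evaluation of the posterior means in (5.9)-(5.10) on a grid (sketch only).
# 'cc' is used instead of 'c' for the hyperparameter to avoid masking base::c().
post_means <- function(x, a = 2, b = 2, cc = 2, d = 2,
                       agrid = seq(0.01, 3, length.out = 300),
                       bgrid = seq(0.01, 3, length.out = 300)) {
  g <- expand.grid(alpha = agrid, beta = bgrid)
  logpost <- (3 * length(x) / 2 + a - 1) * log(g$alpha) +
             (length(x) + cc - 1) * log(g$beta) -
             b * g$alpha - d * g$beta +
             mapply(function(al, be) -al * sum(x^(2 * be)) + (3 * be - 1) * sum(log(x)),
                    g$alpha, g$beta)
  w <- exp(logpost - max(logpost))          # unnormalised posterior weights
  c(alpha = sum(w * g$alpha) / sum(w),      # posterior means = ratios of integrals
    beta  = sum(w * g$beta)  / sum(w))
}
```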
The detailed description is given below: $$\begin{aligned} \label{eq20} (\hat{\alpha},\hat{\beta})_{Bayes}&=&\dfrac{\int_{\alpha}\int_{\beta}u(\alpha,\beta)e^{\rho(\alpha,\beta)+l}\quad d\alpha d\beta }{\int_{\alpha}\int_{\beta}e^{\rho(\alpha,\beta)+l}\quad d\alpha d\beta}\\&=&(\hat{\alpha},\hat{\beta})_{ml}+\dfrac{1}{2}[(u_{\alpha\alpha}+ 2 u_\alpha\rho_\alpha)\tau_{\alpha\alpha}+(u_{\alpha\beta}+2u_\alpha\rho_\beta)\tau_{\alpha\beta}+(u_{\beta\alpha}+ 2u_\beta\rho_\alpha)\tau_{\beta\alpha}\nonumber\\ &+&(u_{\beta\beta}+2u_\beta\rho_\beta)\tau_{\beta\beta}]+\dfrac{1}{2}[(u_\alpha\tau_{\alpha\alpha}+u_\beta\tau_{\alpha\beta})(l_{30}\tau_{\alpha\alpha}+2l_{21}\tau_{\alpha\beta}+l_{12}\tau_{\beta\beta})\nonumber\\ &+&(u_\alpha\tau_{\beta\alpha}+u_\beta\tau_{\beta\beta})(l_{21}\tau_{\alpha\alpha}+2 l_{12}\tau_{\beta\alpha}+l_{03}\tau_{\beta\beta})] \end{aligned}$$ where $u(\alpha,\beta)=(\alpha,\beta)$, $\rho(\alpha,\beta)=\ln g(\alpha,\beta)$ and $l=\ln L(\alpha,\beta|\underbar x)$, $$\begin{aligned} l_{ab}=\dfrac{\partial^{3}l}{\partial\alpha^{a}\partial\beta^{b}},\quad a,b=0,1,2,3\quad a+b=3,\quad \rho_\alpha=\dfrac{\partial \rho}{\partial\alpha},\quad\rho_\beta=\dfrac{\partial \rho}{\partial\beta}\\ u_\alpha=\dfrac{\partial u}{\partial\alpha},\quad u_\beta=\dfrac{\partial u}{\partial \beta},\quad u_{\alpha\alpha}=\dfrac{\partial^2 u}{\partial\alpha^2},\quad u_{\beta\beta}=\dfrac{\partial^2 u}{\partial\beta^2},\quad u_{\alpha\beta}=\dfrac{\partial^2 u}{\partial\alpha\partial\beta}, \end{aligned}$$ $$\tau_{\alpha\alpha}=\dfrac{1}{l_{20}},\,\tau_{\alpha\beta}=\dfrac{1}{l_{11}}=\tau_{\beta\alpha},\, \tau_{\beta\beta}=\dfrac{1}{l_{02}}$$ Since $u(\alpha,\beta)$ is a function of both $\alpha$ and $\beta$, we consider the two cases below.

- If $u(\alpha,\beta)=\alpha$ in (5.12), then $$\begin{aligned} u_\alpha&=1,\quad u_\beta=0, \quad u_{\alpha\alpha}=u_{\beta\beta}=0,\quad u_{\alpha\beta}=u_{\beta\alpha}=0 \end{aligned}$$

- If $u(\alpha,\beta)=\beta$ in (5.12), then $$\begin{aligned} u_\beta&=1,\quad u_\alpha=0, \quad u_{\alpha\alpha}=u_{\beta\beta}=0,\quad u_{\alpha\beta}=u_{\beta\alpha}=0\\ \end{aligned}$$

and the remaining derivatives of the log-likelihood function are obtained as $$l_{30}=\dfrac{3n}{\alpha^3},~~l_{11}=-2\sum_{i=1}^{n}x_i^{2\beta}\ln x_i, ~~ l_{03}=\dfrac{2n}{\beta^3}-8\alpha\sum_{i=1}^{n}x_i^{2\beta}(\ln x_i)^{3}$$ $$l_{12}=-4\sum_{i=1}^{n}x_i^{2\beta}(\ln x_i)^{2}=l_{21}$$ Using these derivatives, the Bayes estimates of $(\alpha,\beta)$ are obtained from the following expressions $$\begin{split} \hat{\alpha}_{bl}=&\hat{\alpha}_{ml}+\dfrac{1}{2}[(2u_\alpha\rho_\alpha)\tau_{\alpha\alpha}+(2u_\alpha\rho_\beta)\tau_{\alpha\beta}]+\dfrac{1}{2}[(u_\alpha\tau_{\alpha\alpha})(l_{30}\tau_{\alpha\alpha}+2l_{21}\tau_{\alpha\beta}+l_{12}\tau_{\beta\beta})\\&+(u_\alpha\tau_{\beta\alpha})(l_{21}\tau_{\alpha\alpha}+2l_{12}\tau_{\beta\alpha}+l_{03}\tau_{\beta\beta})] \end{split}$$ $$\begin{split} \hat{\beta}_{bl}&=\hat{\beta}_{ml}+\dfrac{1}{2}[(2u_\beta\rho_\alpha)\tau_{\beta\alpha}+(2u_\beta\rho_\beta)\tau_{\beta\beta}]+\dfrac{1}{2}[(u_\beta\tau_{\alpha\beta})(l_{30}\tau_{\alpha\alpha}+2l_{21}\tau_{\alpha\beta}+l_{12}\tau_{\beta\beta})\\&+(u_\beta\tau_{\beta\beta})(l_{21}\tau_{\alpha\alpha}+2l_{12}\tau_{\beta\alpha}+l_{03}\tau_{\beta\beta})] \end{split}$$

Simulation Study
================

In this section, a Monte Carlo simulation study is performed to assess the performance of the obtained estimators in terms of their mean squared errors (MSEs).
The maximum likelihood estimates of the parameters are evaluated by using $nlm () $ function, and MLEs of reliability characteristics are obtained by using invariance properties. The Bayes estimates of the parameter are evaluated by Lindley’s approximation technique. The hyper-parameters values are chosen in such a way that the prior mean is equal to the true value, and prior variance is taken as very small, say 0.5. All the computations are done by $R 3.4.1$ software. At first, we generated 5000 random samples from PMaD $(\alpha,\beta)$ using Newton-Raphson algorithm for different variation of sample sizes as $n=10$ (small), $n=20, 30$ (moderate), $n=50$ (large) for fixed $(\alpha=0.75, \beta=0.75)$ and secondly for different variation of $(\alpha, \beta)$ when sample size is fixed say $(n=20)$ respectively. Average estimates and mean square error (MSE) of the parameters and reliability characteristics are calculated for the above mentioned choices, and the corresponding results are reported in Table 2. The asymptotic confidence interval (ACI) and asymptotic confidence length (ACL) are also obtained and presented in Table 3. From this extensive simulation study, it has been observed that the precision of MLEs and Bayes estimator are increasing when the sample size is increasing while average ACL is decresing. Further, the Bayes estimators are more precise as compared ML estimators for all considered cases. [ccccccccc]{}\ n & $\alpha, \beta$ & $\alpha_{ml}$& $\beta_{ml}$ & $MTTF_{ml}$ &$R (t)_{ml}$& $H(t)_{ml}$ & $\alpha_{bl}$ & $\beta_{bl}$\ \[0\][\*]{}[10]{} & \[0\][\*]{}[0.75,0.75]{} & 0.5070 & 1.1598 & 1.5119 & 0.9691 & 0.1663 & 0.5063 & 1.1028\ & & 0.0631 & 0.2588 & 0.0164 & 0.0049 & 0.0947 & 0.0631 & 0.2027\ \[0\][\*]{}[20]{} & & 0.6560 & 0.8848 & 1.4922 & 0.9343 & 0.2965 & 0.6521 & 0.8647\ & & 0.0098 & 0.0326 & 0.0093 & 0.0014 & 0.0703 & 0.0105 & 0.0263\ \[0\][\*]{}[30]{} & & 0.7096 & 0.8064 & 1.4883 & 0.9163 & 0.3504 & 0.7058 & 0.7951\ & & 0.0022 & 0.0103 & 0.0071 & 0.0004 & 0.0010 & 0.0025 & 0.0087\ \[0\][\*]{}[50]{} & & 0.7542 & 0.7453 & 1.4869 & 0.8988 & 0.3968 & 0.7514 & 0.7397\ & & 0.0003 & 0.0031 & 0.0046 & 0.0001 & 0.0003 & 0.0003 & 0.0031\ \ \[0\][\*]{}[20]{} & \[0\][\*]{}[0.5,0.75]{} & 0.6603 & 0.6832 & 1.7380 & 0.9044 & 0.3400 & 0.6574 & 0.6716\ & & 0.0261 & 0.0125 & 0.0585 & 0.0017 & 0.0099 & 0.0252 & 0.0117\ & \[0\][\*]{}[0.5, 1.5]{} & 0.7290 & 0.3033 & 4.6222 & 0.7871 & 0.3556 & 0.7258 & 0.3229\ & & 0.0528 & 1.4330 & 11.9171 & 0.0402 & 0.1139 & 0.0513 & 1.3866\ & \[0\][\*]{}[1.5, 0.5]{} & 0.5090 & 2.9297 & 1.1531 & 0.9983 & 0.0207 & 0.5517 & 2.8634\ & & 0.9907 & 6.6465 & 0.0242 & 0.1274 & 26.0695 & 0.9087 & 6.3006\ & \[0\][\*]{}[2.5,2.5]{} & 1.0448 & 0.5958 & 1.4084 & 0.7953 & 0.6393 & 1.2825 & 0.6727\ & & 2.1402 & 3.6573 & 0.3860 & 0.0373 & 0.3553 & 1.5058 & 3.3715\ \[tab:addlabel\] [cccccccc]{}\ n & $\alpha, \beta$ & $\alpha_L$ & $\alpha_U$ & $ACL_\alpha$ & $\beta_L$& $\beta_U$ & $ACl_\beta$\ 10 & 0.75,0.75 & 0.0874 & 0.9266 & 0.8393 & 0.5711 & 1.7485 & 1.1775\ 20 & 0.75,0.75 & 0.3209 & 0.9911 & 0.6703 & 0.5525 & 1.2171 & 0.6646\ 30 & 0.75,0.75 & 0.4263 & 0.9928 & 0.5665 & 0.5555 & 1.0574 & 0.5019\ 50 & 0.75,0.75 & 0.5290 & 0.9794 & 0.4505 & 0.5631 & 0.9275 & 0.3644\ \ \[0\][\*]{}[20]{} & 0.5, 0.75 & 0.3255 & 0.9951 & 0.6696 & 0.4142 & 0.9523 & 0.5381\ & 0.5, 1.5 & 0.3794 & 1.0785 & 0.6991 & 0.4819 & 1.7425 & 1.2429\ & 1.5, 0.5 & 0.4206 & 1.7812 & 0.76058 & 0.2260 & 1.8334 & 1.3807\ & 2.5, 2.5 & 0.5804 & 2.9509 & 0.9788 & 0.54133 & 2.7783 & 1.1365\ \[tab:addlabel\] 
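The core of such a simulation can be sketched with the helpers introduced earlier (`rpmad` for sampling and `negll` for the fit). The seed, starting values and the single $(n,\alpha,\beta)$ setting shown below are illustrative choices of ours; the study reported in the tables repeats this over several sample sizes and parameter combinations with 5000 replications and adds the Lindley-based Bayes estimates.

``` r
# Monte Carlo sketch for one (n, alpha, beta) setting
set.seed(1)
n <- 20; alpha0 <- 0.75; beta0 <- 0.75; R <- 5000
est <- t(replicate(R, {
  x <- rpmad(n, alpha0, beta0)                 # draw a PMaD sample
  nlm(negll, p = c(1, 1), x = x)$estimate      # MLEs for this replication
}))
bias <- colMeans(est) - c(alpha0, beta0)                 # average bias of (alpha_hat, beta_hat)
mse  <- colMeans(sweep(est, 2, c(alpha0, beta0))^2)      # mean squared errors
```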
Real Data Illustration
======================

This section demonstrates the practical applicability of the proposed model in real-life scenarios, especially for survival/reliability data taken from different sources. The proposed distribution is compared with the Maxwell distribution (MaD) and its different generalizations, such as the length biased Maxwell distribution (LBMaD), area biased Maxwell distribution (ABMaD), extended Maxwell distribution (ExMaD) and generalized Maxwell distribution (GMaD). For these models, the parameter estimates are obtained by the method of maximum likelihood, and the suitability of the PMaD is assessed using model selection tools such as the negative log-likelihood (-log L), Akaike information criterion (AIC), corrected Akaike information criterion (AICC), Bayesian information criterion (BIC) and the Kolmogorov-Smirnov (K-S) test. In general, smaller values of these statistics indicate a better fit to the data. The point estimates of the parameters, the reliability function and the hazard function for each data set are reported in Table 6. The interval estimates of the parameters and the corresponding asymptotic confidence lengths are also evaluated and presented in Table 7.

**Data Set-I (Bladder Cancer Data):** This data set represents the remission times (in months) of 128 bladder cancer patients and was initially used by Lee and Wang (2003). The same data set was used by Sharma et al. (2017) to show the superiority of the extended Maxwell distribution.\
**Data Set-II : Item Failure Data**\
This data set is taken from Murthy et al. (2004). It consists of 50 items put into use at $t = 0$, with failure times recorded in weeks.\
**Data Set-III:** The data set was initially considered by Chhikara and Folks (1977). It represents 46 repair times (in hours) for an airborne communication transceiver.\
**Data Set-IV: Flood data**\
The data are the exceedances of flood peaks (in m$^3$/s) of the Wheaton River near Carcross in Yukon Territory, Canada. The data consist of 72 exceedances for the years 1958–1984, rounded to one decimal place. These data were analyzed by Choulakian and Stephens (2011).
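For each of these data sets, fit summaries of the kind reported in the tables below can be obtained along the following lines, again with the `negll` and `ppmad` helpers sketched earlier. Here `x` stands for one of the four data vectors, and the AICC shown uses the usual small-sample correction; these formula choices are ours and are not quoted from the paper.

``` r
# Fit the PMaD to one data vector x and compute model-comparison criteria
fit  <- nlm(negll, p = c(1, 1), x = x)
k    <- 2                                   # number of fitted parameters
n    <- length(x)
logL <- -fit$minimum
aic  <- -2 * logL + 2 * k
aicc <- aic + 2 * k * (k + 1) / (n - k - 1)
bic  <- -2 * logL + k * log(n)
ks   <- ks.test(x, ppmad,
                alpha = fit$estimate[1],
                beta  = fit$estimate[2])$statistic   # one-sample K-S statistic
```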
[cccccccc]{}\ \ Model & $ \hat{\alpha} $ & $ \hat{\beta} $ & -logL & AIC & AICC & BIC & K-S\ **PMaD** & **0.7978** & **0.1637** & **366.3820** & **736.7639** & **732.8599** & **742.4680** & **0.3675**\ MaD & 0.0076 &– & 1014.4440 & 2030.8870 & 2028.9190 & 2033.7400 & 0.4144\ LBMaD & 98.6386 & – & 669.3668 & 1340.7340 & 1338.7650 & 1343.5860 & 0.4906\ ABMaD & 78.9109 & – & 767.8122 & 1537.6240 & 1535.6560 & 1540.4770 & 0.5608\ ExMaD & 0.8447 & 1.4431 & 412.1232 & 828.2464 & 824.3424 & 833.9504 & 0.8265\ GMaD & 0.7484 & 527.2314 & 426.6019 & 857.2037 & 853.2997 & 862.9078 & 0.7086\ \ Model & $ \hat{\alpha} $ & $ \hat{\beta} $ & -logL & AIC & AICC & BIC & K-S\ **PMaD** & **0.8339** & **0.1820** & **135.8204** & **275.6407** & **271.8961** & **279.4648** & **0.2625**\ MaD & 0.0104 & – & 367.8528 & 737.7056 & 735.7890 & 739.6177 & 0.4268\ LBMaD & 72.1146 & – & 315.1624 & 632.3248 & 630.4081 & 634.2368 & 0.5112\ ABMaD & 57.6917 & – & 374.1247 & 750.2494 & 748.3328 & 752.1615 & 0.5825\ ExMaD & 0.6186 & 1.0139 & 151.2998 & 306.5996 & 302.8550 & 310.4237 & 0.7327\ GMaD & 0.5400 & 534.1569 & 151.2643 & 306.5287 & 302.7840 & 310.3527 & 0.3920\ \ Model & $ \hat{\alpha} $ & $ \hat{\beta} $ & - logL & AIC & AICC & BIC & K-S\ **PMaD** & **0.8735** & **0.2709** & **101.9125** & **207.8249** & **204.1040** & **211.4822** & **0.2136**\ MaD & 0.0406 & – & 245.1383 & 492.2766 & 490.3675 & 494.1052 & 0.5027\ LBMaD & 18.4603 & – & 237.4945 & 476.9890 & 475.0799 & 478.8176 & 0.5771\ ABMaD & 14.7683 & – & 284.7017 & 571.4034 & 569.4943 & 573.2320 & 0.6324\ ExMaD & 0.7290 & 0.8672 & 103.3052 & 210.6104 & 206.8895 & 214.2677 & 0.2989\ GMaD & 0.6015 & 122.7666 & 110.8521 & 225.7042 & 221.9833 & 229.3615 & 0.4392\ \ Model & $ \hat{\alpha} $ & $ \hat{\beta} $ & - logL & AIC & AICC & BIC & K-S\ **PMaD** & 0.805185 & 0.1504145 & 212.8942 & 429.7884 & 425.9623 & 434.3418 & 0.2760\ MaD & 0.005032 & – & 610.9235 & 1223.847 & 1221.904 & 1226.124 & 0.3821\ LBMaD & 149.0315 & – & 426.3076 & 854.6153 & 852.6724 & 856.8919 & 0.4113\ ABMaD & 119.2252 & – & 493.3271 & 988.6543 & 986.7114 & 990.9309 & 0.4529\ ExMaD & 0.697471 & 1.306933 & 251.9244 & 507.8487 & 504.0226 & 512.4021 & 0.7487\ GMaD & 0.648149 & 919.7356 & 251.2767 & 506.5534 & 502.7273 & 511.1068 & 0.4998\ \[tab:addlabel\] Data Min Q1 Q2 Mean Q3 Max Kurtosis Skewness ------ ------- ------- ------- -------- -------- -------- ---------- ---------- I 0.080 3.348 6.395 9.366 11.838 79.050 18.483 3.287 II 0.013 1.390 5.320 7.821 10.043 48.105 9.408 2.306 III 0.200 0.800 1.750 3.607 4.375 24.500 11.803 2.888 IV 0.100 2.125 9.500 12.204 20.125 64.000 5.890 1.473 : Summary of the data sets \[tab:addlabel\] Data $ \alpha_{ml}$ $ \beta_{ml}$ $ MTTF_{ml} $ $ R(t)_{ml} $ $ H(t)_{ml} $ $ \alpha_{bl}$ $ \beta_{bl}$ ------ ---------------- --------------- --------------- --------------- --------------- ---------------- --------------- I 0.7978 0.1637 28.2109 0.7019 0.2827 0.7962 0.1639 II 0.8339 0.1820 15.3594 0.6953 0.3224 0.8292 0.1821 III 0.8735 0.2709 4.0773 0.7212 0.4326 0.8675 0.2703 IV 0.8052 0.1504 42.8622 0.6923 0.2696 0.8023 0.1506 : Real data estimates \[tab:addlabel\] Data $ \alpha_L $ $ \alpha_U $ $ ACL_\alpha $ $ \beta_L $ $ \beta_U $ $ ACL_\beta $ ------ -------------- -------------- ---------------- ------------- ------------- --------------- I 0.6545 0.9411 0.2866 0.1373 0.1902 0.0529 II 0.5962 1.0717 0.4754 0.1376 0.2263 0.0888 III 0.6202 1.1269 0.5067 0.2081 0.3337 0.1256 IV 0.6126 0.9978 0.3852 0.1186 0.1822 0.0636 : Interval estimates based on real data 
\[tab:addlabel\]

![Empirical cumulative distribution function and QQ plot for the data set-I](ecdf1){width="6.5in" height="3in"}

![Empirical cumulative distribution function and QQ plot for the data set-II](ecdf2){width="6.5in" height="3in"}

![Empirical cumulative distribution function and QQ plot for the data set-III](ecdf3){width="6.5in" height="3in"}

![Empirical cumulative distribution function and QQ plot for the data set-IV](ecdf4){width="6.5in" height="3in"}

From the tabulated values of the model selection tools, it is noticed that the PMaD has the smallest -log L, AIC, AICC, BIC and K-S values. The empirical cumulative distribution function and Q-Q plots for the four data sets are also displayed in the figures above. Therefore, the PMaD can be recommended as a good alternative to the existing family of Maxwell distributions. A summary of the considered data sets is given above; the skewness is positive for all data sets, which makes them well suited to the proposed model.

Conclusion
==========

This article proposed the power Maxwell distribution (PMaD) as an extension of the Maxwell distribution and studied its various mathematical and statistical properties, such as reliability characteristics, moments, median, mode, mean deviation, generating functions, stochastic ordering, residual life functions and entropy. We also studied the skewness and kurtosis of the PMaD and found that it is capable of modeling positively skewed as well as symmetric data sets. The unknown parameters of the PMaD were estimated by the maximum likelihood and Bayes estimation methods. The MLEs of the reliability function and hazard function were also obtained using the invariance property. The 95% asymptotic confidence intervals for the parameters were constructed using the Fisher information matrix. The MLEs and Bayes estimators were compared through Monte Carlo simulation, and it was observed that the Bayes estimators are more precise under an informative prior. Finally, medical/reliability data were used to show the practical utility of the power Maxwell distribution, and it was observed that the PMaD provides a better fit than the other members of the Maxwell family of distributions. Thus, it can be recommended as an alternative model for data with non-monotone failure rates.

[20]{}
Maxwell, J., 1860. On the dynamical theory of gases, presented to the meeting of the British Association for the Advancement of Science. Scientific Letters I, 616.
Gupta, R. C., Gupta, R. D. and Gupta, P. L. (1998): Modeling failure time data by Lehman alternatives. Communications in Statistics - Theory and Methods, 27 (4), 887-904.
Lee, E. T. and Wang, J. W. (2003): Statistical Methods for Survival Data Analysis. Wiley, New York, DOI:10.1002/0471458546.
Vikas Kumar Sharma, Hassan S. Bakouch & Khushboo Suthar (2017): An extended Maxwell distribution: Properties and applications, Communications in Statistics - Simulation and Computation, 46:9, 6982-7007, DOI: 10.1080/03610918.2016.1222422.
Vikas Kumar Sharma, Sanku Dey, Sanjay Kumar Singh & Uzma Manzoor (2017): On Length and Area biased Maxwell distributions, Communications in Statistics - Simulation and Computation, DOI: 10.1080/03610918.2017.1317804.
Bekker, A., and J. J. Roux. 2005. Reliability characteristics of the Maxwell distribution: A Bayes estimation study. Communications in Statistics - Theory and Methods 34:2169-78. doi:10.1080/STA-200066424.
Chaturvedi, A., and U. Rani. 1998.
Classical and Bayesian reliability estimation of the generalized Maxwell failure distribution. Journal of Statistical Research 32:113-20.
Lindley, D. V. (1980). Approximate Bayesian methods. Trabajos de Estadistica 31, 223-237.
Dey, S., and S. S. Maiti. 2010. Bayesian estimation of the parameter of Maxwell distribution under different loss functions. Journal of Statistical Theory and Practice 4:279-87. doi:10.1080/15598608.2010.10411986.
Modi, K. 2015. Length-biased weighted Maxwell distribution. Pakistan Journal of Statistics and Operation Research 11:465-72. doi:10.18187/pjsor.v11i4.1008.
Saghir, A., and A. Khadim. 2016. The mathematical properties of length biased Maxwell distribution. Journal of Basic and Applied Research International 16:189-95.
Bonferroni, C. E. (1930). Elementi di Statistica Generale, Seeber, Firenze.
Gross, A. J. and Clark, V. A. (1975). Survival distributions: reliability applications in the biomedical sciences. New York: John Wiley and Sons.
Tyagi, R. K., and S. K. Bhattacharya. 1989. A note on the MVU estimation of reliability for the Maxwell failure distribution. Estadistica 41:73-79.
Lin, C., Duran, B. S. and Lewis, T. O. (1989). Inverted gamma as a life distribution. Microelectron. Reliab. 29(4):619-626.
Renyi, A. (1961). On measures of entropy and information, in: Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley.
Tomer, S. K., and Panwar, M. S. 2015. Estimation procedures for Maxwell distribution under type I progressive hybrid censoring scheme. Journal of Statistical Computation and Simulation 85:339-56.

[^1]: Corresponding author E-mail: asybhu10@gmail.com
--- abstract: 'In this paper, we study the representations of the new finite-dimensional pointed Hopf algebras in positive characteristic given in [@Cib09]. We find that these Hopf algebras are symmetric algebras. We determine the simple modules and their projective covers over these Hopf algebras. We show that these Hopf algebras are of wild representation type.' address: - 'School of Mathematical Science, Yangzhou University, Yangzhou 225002, China; and School of Science, Huaihai Institute of Technology, Lianyungang 222005, China' - 'School of Mathematical Science, Yangzhou University, Yangzhou 225002, China' author: - Ying Zhang - 'Hui-Xiang Chen' title: 'REPRESENTATIONs OF FINITE DIMENSIONAL POINTED HOPF ALGEBRAS OVER $\mathbb{Z}_n$' --- Introduction and Preliminaries {#1} ============================== The construction and classification of Hopf algebras play an important role in the theory of Hopf algebras. During the last few years several classification results for pointed Hopf algebras were obtained based on the theory of Nichols algebras [@And98; @And02; @And08]. In [@Cib09], Cibils, Lauve and Witherspoon studied Nichols algebras via an embedding in Hopf quiver algebras. They constructed some new finite dimensional Hopf algebras in positive characteristic $p$, which are pointed Hopf algebras over $\mathbb{Z}_n$, the cyclic group of order $n$, where $p|n$. In this paper, we study these Hopf algebras. We organize the paper as follows. In this section, we recall some properties of projective cover and representation theories of Artin algebras, and integrals in a finite dimensional Hopf algebra, which can be found in [@Aus95; @Igl09; @Mon93]. In Section \[2\], we introduce the Hopf algebras $\mathcal{B}(V)\# kG$ and its “lifting" $H(\lambda,\mu)$ given in [@Cib09], and investigate some properties of $H(\lambda,\mu)$. We show that $\mathcal{B}(V)\# kG$ and $H(\lambda,\mu)$ are symmetric algebras. In Section \[3\], we describe the simple modules over $H(\lambda,\mu)$. Then we consider the tensor products of simple module by using the idea of [@Ch00] and prove that the tensor product of any two simple modules is indecomposable. Through computing idempotent elements, we find the projective covers of these simple modules. In Section \[4\], we compute the extensions of some simple modules over the Hopf algebras and prove that these Hopf algebras are of wild representation type. Now we recall some general facts about the representation theory of a finite dimensional algebra. Let $A$ be a finite dimensional algebra over an algebraically closed field and $\widehat{A}=\{S_1,\cdots,S_n\}$ be a complete set of non-isomorphic simple $A$-modules. Let $P(S)$ denote the projective cover of $S$, $S\in\widehat{A}$. It is well-known that ${}_A\!A\cong \bigoplus_{S\in\widehat{A}}P(S)^{{\rm dim} S}$ as left $A$-modules by Wedderburn-Artin theorem. Let $H$ be a finite-dimensional Hopf algebra. A left integral in $H$ is an element $t\in H$ such that $ht=\varepsilon(h)t$ for all $h\in H$. A right integral in $H$ is an element $t'\in H$ such that $t'h=\varepsilon(h)t'$ for all $h\in H$. $\int_H^l$ denotes the space of left integrals, and $\int_H^r$ denotes the space of right integrals. $H$ is called unimodular if $\int_H^l=\int_H^r$. Note that $\int_H^l$ and $\int_H^r$ are each one-dimensional (see [@Mon93]). A $k$-algebra $A$ is called symmetric if there exists a nondegenerate $k$-bilinear form $\beta:A\times A\rightarrow k$, which is associative and symmetric. 
A symmetric algebra $A$ is self-injective, that is, the left regular module $A$ is injective. A finite dimensional Hopf algebra $H$ is a symmetric algebra if and only if $H$ is unimodular and $S^2$ is inner, where $S$ is the antipode of $H$ [@Lorenz; @ObSchn]. Throughout this paper, we work over an algebraically closed field $k$ with a positive characteristic $p$. All algebras, Hopf algebras and modules are finite dimensional over $k$. Unless otherwise stated, all maps are $k$-linear, dim and $\otimes$ stand for dim$_k$ and $\otimes_k$, respectively. The Hopf Algebras $\mathcal{B}(V)\# kG$ and $H(\lambda, \mu)$ {#2} ============================================================= Let $n>1$ be a positive integer with $p|n$. Let $G=\langle g\rangle$ be the cyclic group of order $n$. Then $kG$ has a 2-dimensional indecomposable right-right Yetter-Drinfeld module $V$. $V$ has a basis $\{v_1, v_2\}$ such that the right $kG$-action and $kG$-coaction are defined by $$v_1\cdot g=v_1,\ v_2\cdot g=v_1+v_2,\ \rho(v)=v\otimes g,\ v\in V.$$ Then one can form a Nichols algebra $\mathcal{B}(V)$ and the corresponding pointed Hopf algebra $\mathcal{B}(V)\# kG$. $\mathcal{B}(V)\# kG$ is a finite dimensional graded Hopf algebra, which is generated as an algebra by three elements $g$, $a$ and $b$ (see [@Cib09]). When $p=2$, the generators $g$, $a$ and $b$ of $\mathcal{B}(V)\# kG$ are subject to the relations: $$\begin{aligned} & g^n=1,\ g^{-1}ag=a,\ g^{-1}bg=a+b,&\end{aligned}$$ $$\begin{aligned} & a^2=0, \ b^4=0, \ baba=abab, \ {b^2}a=ab^2+aba. &\end{aligned}$$ When $p>2$, the generators $g$, $a$ and $b$ of $\mathcal{B}(V)\# kG$ are subject to the relations: $$\begin{aligned} & g^n=1, \ g^{-1}ag=a,\ g^{-1}bg=a+b, &\end{aligned}$$ $$\begin{aligned} & a^p=0, \ b^p=0, \ ba=ab+{\frac{1}{2}}a^2. &\end{aligned}$$ The coalgebra structure and the antipode of $\mathcal{B}(V)\# kG$ are determined by $$\begin{aligned} & \bigtriangleup(g)=g\otimes g, \ \bigtriangleup(a)={a\otimes 1}+{g\otimes a}, \ \bigtriangleup(b)={b\otimes 1}+{g\otimes b};&\end{aligned}$$ $$\begin{aligned} & \varepsilon(g)=1,\ \varepsilon(a)=\varepsilon(b)=0;&\end{aligned}$$ $$\begin{aligned} S(g)=g^{-1}, \ S(a)=-g^{-1}a, \ S(b)=-g^{-1}b.\end{aligned}$$ Note that $kG$ is the coradical of $\mathcal{B}(V)\# kG$ and $kG$ is a Hopf subalgebra of $\mathcal{B}(V)\# kG$. Furthermore, one may construct filtered pointed Hopf algebras as “lifting" of $\mathcal{B}(V)\# kG$, that is those whose associated graded algebra is $\mathcal{B}(V)\# kG$. In the case of $p>2$, Cibils, Lauve and Witherspoon gave some examples of liftings of $\mathcal{B}(V)\# kG$, which can be described as follows. Assume $p>2$, and let $\lambda, \mu\in k$. The Hopf algebra $H(\lambda, \mu)$ is generated, as an algebra, by $g$, $a$ and $b$ with the relations $$\begin{aligned} & g^n=1,\ g^{-1}ag=a, \ g^{-1}bg=a+b, &\end{aligned}$$ $$\begin{aligned} & a^p=\lambda(1-g^p), \ b^p=\mu(1-g^p),\ ba=ab+{\frac{1}{2}}a^2.&\end{aligned}$$ The coalgebra structure and the antipode of $H(\lambda, \mu)$ are determined by the same equations as $\mathcal{B}(V)\# kG$. Note that $kG$ is the coradical of $H(\lambda, \mu)$ and $kG$ is a Hopf subalgebra of $H(\lambda, \mu)$. Moreover, when $\lambda=\mu=0$, $H(0,0)=\mathcal{B}(V)\# kG$. \[2.1\] When $p=2$, in $\mathcal{B}(V)\# kG$ we have $(1)$ $bg^i=ig^ia+g^ib$, $i\geqslant 0$. In particular, $g^2$ is central in $\mathcal{B}(V)\# kG$. $(2)$ $\mathcal{B}(V)\# kG$ is a symmetric Hopf algebra. \(1) It can be proved by induction on $i$ from the relation $g^{-1}bg=a+b$. 
\(2) Let $H=\mathcal{B}(V)\# kG$. Then the set $\{g^iabab^3|0\leqslant i\leqslant n-1\}$ are linearly independent in $H$ by [@Cib09 Theorem 3.1 and Corollary 3.4]. Let $t=(\sum\limits_{0\leqslant i\leqslant n-1}g^i)abab^3$. Then $t$ is a non-zero element of $H$. Since $g^n=1$, $g(\sum\limits_{0\leqslant i\leqslant n-1}g^i)=\sum\limits_{0\leqslant i\leqslant n-1}g^i$. It follows that $gt=t=\varepsilon(g)t$. By the definition of $H$, we also have $at=(\sum\limits_{0\leqslant i\leqslant n-1}g^i)a^2bab^3=0=\varepsilon(a)t$ and $bt=(\sum\limits_{0\leqslant i\leqslant n-1}bg^i)abab^3=\sum\limits_{0\leqslant i\leqslant n-1}(ig^ia+g^ib)abab^3 =\sum\limits_{0\leqslant i\leqslant n-1}g^ibabab^3=\sum\limits_{0\leqslant i\leqslant n-1}g^iabab^4=0=\varepsilon(b)t$. Since $g, a, b$ are generators of $H$, it follows that $\int_H^l=kt$. On the other hand, we have $a(a+b)=a^2+ab=ab$ and $bg=g(a+b)$. Hence $tg=(\sum\limits_{0\leqslant i\leqslant n-1}g^i)abab^3g=(\sum\limits_{0\leqslant i\leqslant n-1}g^i)ga(a+b)a(a+b)^3 =(\sum\limits_{0\leqslant i\leqslant n-1}g^i)abab^3=\varepsilon(g)t$. We also have $ta=(\sum\limits_{0\leqslant i\leqslant n-1}g^i)abab^3a =(\sum\limits_{0\leqslant i\leqslant n-1}g^i)abab(ab^2+aba) =(\sum\limits_{0\leqslant i\leqslant n-1}g^i)baba(ab^2+aba) =0=\varepsilon(a)t$ and $tb=(\sum\limits_{0\leqslant i\leqslant n-1}g^i)abab^4=0=\varepsilon(b)t$. Thus, $\int_H^r=kt=\int_H^l$, and so $H$ is unimodular. It is easy to check that $S^2(g)=g$, $S^2(a)=g^{-1}ag$ and $S^2(b)=g^{-1}bg$. Hence $S^2$ is inner since $S^2$ is an algebra automorphism. It follows that $H$ is a symmetric Hopf algebra. In the rest of this section, we assume $p>2$. Let $n=p^st$ with $p\nmid t$ and $s\geqslant 1$. Let $\lambda, \mu\in k$. Now we give some properties of $H(\lambda,\mu)$. \[2.2\] In $H(\lambda,\mu)$, we have $(1)$ $bg^i=ig^ia+g^ib$, $ba^j=a^jb+\frac{j}{2}a^{j+1}$ and $bg^ia^j=(i+\frac{j}{2})g^ia^{j+1}+g^ia^jb$ for all $i,\ j\geqslant 0$. In particular, $g^p$ is central in $H(\lambda,\mu)$. $(2)$ If $1\leqslant m\leqslant p-1$, then $$ab^m =\sum_{0\leqslant i\leqslant m}\alpha_{m,i}b^{m-i}a^{i+1},$$ where $\alpha_{m,i}\in k$ with $\alpha_{m,0}=1$, $\alpha_{m,1}=-\frac{m}{2}$ and $\alpha_{m,2}=\frac{1}{4}m(m-1)$. $(3)$ If $1\leqslant m\leqslant p-1$, then $$gb^m =\sum_ {0\leqslant i\leqslant m}\beta_{m,i}b^{m-i}ga^i,$$ where $\beta_{m,i}\in k$ with $\beta_{m,0}=1$, $\beta_{m,1}=-m$ and $\beta_{m,2}=\frac{3}{4}m(m-1)$. \(1) The first two equalities can be proved by induction on $i$ and $j$, respectively. The third one follows from the first two equalities. \(2) By the relations of the generators, $ab^m$ can be expressed as $ab^m = \sum\limits_{0\leqslant i\leqslant m}\alpha_{m,i}b^{m-i}a^{i+1}$ for some $\alpha_{m,i}\in k$. Then for $1\leqslant m< p-1$, by Part (1) we have $$\begin{split} ab^{m+1}&=(\sum\limits_{0\leqslant i\leqslant m}\alpha_{m,i}b^{m-i}a^{i+1})b\\ &=\sum\limits_{0\leqslant i\leqslant m}\alpha_{m,i}b^{m-i}(a^{i+1}b)\\ &=\sum\limits_{0\leqslant i\leqslant m}\alpha_{m,i}b^{m-i}(ba^{i+1}-\frac{i+1}{2}a^{i+2})\\ &=\sum\limits_{0\leqslant i\leqslant m}\alpha_{m,i}b^{m+1-i}a^{i+1}-\sum\limits_{0\leqslant i\leqslant m}\frac{i+1}{2}\alpha_{m,i}b^{m-i}a^{i+2}. \end{split}$$ Hence one gets that $\alpha_{m+1,0}=\alpha_{m,0}$, $\alpha_{m+1,m+1}=-\frac{m+1}{2}\alpha_{m,m}$ and $\alpha_{m+1,i}=\alpha_{m,i}-\frac{i}{2}\alpha_{m,i-1}$ for all $1\leqslant i\leqslant m$. From the definition of $H(\lambda, \mu)$, we know that $\alpha_{1,0}=1$ and $\alpha_{1,1}=-\frac{1}{2}$. 
Then by induction on $m$, it is easy to check that $\alpha_{m,0}=1$, $\alpha_{m,1}=-\frac{m}{2}$ and $\alpha_{m,2}=\frac{1}{4}m(m-1)$ for all $1\leqslant m\leqslant p-1$. \(3) It is similar to Part (2). We also have $\beta_{1, 0}=1$, $\beta_{1, 1}=-1$, $\beta_{m+1, 0}=\beta_{m, 0}$, $\beta_{m+1, m+1}=-\frac{m+2}{2}\beta_{m, m}$ and $\beta_{m+1, i}=\beta_{m, i}-\frac{i+1}{2}\beta_{m, i-1}$ for all $1\leqslant i\leqslant m<p-1$. \[2.3\] $H(\lambda,\mu)$ is a symmetric Hopf algebra. From $g^n=1$ and char$k=p$, it is easy to check that $g(\sum\limits_{0\leqslant i\leqslant n-1}g^i)=\sum\limits_{0\leqslant i\leqslant n-1}g^i$ and $(1-g^p)(\sum\limits_{0\leqslant i\leqslant n-1}ig^i)=0$. Since $\{g^ia^{p-1}b^{p-1}|0\leqslant i\leqslant n-1\}$ are linearly independent (see [@Cib09]), $t=(\sum\limits_{0\leqslant i\leqslant n-1}g^i)a^{p-1}b^{p-1}$ is a non-zero element of $H(\lambda,\mu)$. Then we have $gt=t=\varepsilon(g)t$, $at=a^p(\sum\limits_{0\leqslant i\leqslant n-1}g^i)b^{p-1}=\lambda(1-g^p)(\sum\limits_{0\leqslant i\leqslant n-1}g^i)b^{p-1}=0=\varepsilon(a)t$ and $$\begin{split} bt&=(\sum_{0\leqslant i\leqslant n-1}bg^ia^{p-1})b^{p-1}\\ &=[\sum_{0\leqslant i\leqslant n-1}(i+\frac{p-1}{2})g^ia^p+g^ia^{p-1}b]b^{p-1}\\ &=a^p(\sum_{0\leqslant i\leqslant n-1}ig^i)b^{p-1}+\frac{p-1}{2}a^p(\sum_{0\leqslant i\leqslant n-1}g^i)b^{p-1}+b^p(\sum_{0\leqslant i\leqslant n-1}g^i)a^{p-1}\\ &=0=\varepsilon(b)t. \end{split}$$ Since $g,a,b$ are generators of $H(\lambda,\mu)$, one gets that $\int_H^l=kt$. On the other hand, since $ba=a(b+\frac{1}{2}a)$, we have $(a+b)^{p-1}=b^{p-1}+a\sum\limits_{0\leqslant j\leqslant p-2}\alpha_ja^jb^{p-2-j}$ for some $\alpha_j\in k$. Hence $$\begin{split} tg&=(\sum_{0\leqslant i\leqslant n-1}g^i)a^{p-1}b^{p-1}g\\ &=(\sum_{0\leqslant i\leqslant n-1}g^i)a^{p-1}g(a+b)^{p-1}\ \ ({\rm by}\ bg=g(a+b))\\ &=(\sum_{0\leqslant i\leqslant n-1}g^i)a^{p-1}[b^{p-1}+a\sum_{0\leqslant j\leqslant p-2}\alpha_ja^jb^{p-2-j}]\\ &=t+a^p(\sum_{0\leqslant i\leqslant n-1}g^i)(\sum_{0\leqslant j\leqslant p-2}\alpha_ja^jb^{p-2-j})\\ &=t=\varepsilon(g)t,\\ \end{split}$$ $$\begin{split} ta&=(\sum_{0\leqslant i\leqslant n-1}g^i)a^{p-1}b^{p-1}a\\ &=(\sum_{0\leqslant i\leqslant n-1}g^i)a^{p-1}a(b+\frac{1}{2}a)^{p-1}\ ({\rm by}\ ba=a(b+\frac{1}{2}a))\\ &=a^p(\sum_{0\leqslant i\leqslant n-1}g^i)(b+\frac{1}{2}a)^{p-1}\\ &=0=\varepsilon(a)t\\ \end{split}$$ and $$\begin{split} tb&=b^p(\sum_{0\leqslant i\leqslant n-1}g^i)a^{p-1}=0=\varepsilon(b)t, \end{split}$$ where we use the facts that $a^p=\lambda(1-g^p)$ and $b^p=\mu(1-g^p)$ are central elements in $H(\lambda, \mu)$. Thus, $\int_H^r=kt=\int_H^l$, and so $H$ is unimodular. It is easy to check that $S^2$ is inner. It follows that $H$ is a symmetric Hopf algebra. \[2.4\] Let $J$ be the Jacobson radical of $H(\lambda,\mu)$. Then $(1)$ If $t=1$, then $a, b\in J$. $(2)$ If $t>1$ and $\lambda\mu\neq0$, then $a,b \not\in J$. \(1) Assume $t=1$. Then $n=p^s$. Since char$k$=$p$ and $a^p=\lambda(1-g^p)$, we have $a^n=a^{p^s}=[\lambda(1-g^p)]^{p^{s-1}}=\lambda ^{p^{s-1}}(1-g^{p^s})= \lambda^{p^{s-1}}(1-g^n)=0$. On the other hand, we have $ag=ga$ and $ab=ba-\frac{1}{2}a^2=(b-\frac{1}{2}a)a$. Hence $aH(\lambda, \mu)=H(\lambda,\mu)a$, and consequently $H(\lambda, \mu)a$ is equal to the ideal $\langle a\rangle$ of $H(\lambda, \mu)$ generated by $a$. It follows that $(H(\lambda, \mu)a)^n=H(\lambda, \mu)a^n=0$. Thus, $H(\lambda, \mu)a\subseteq J$, and so $a\in J$. Similarly, we have $b^n=0$. 
Consider the quotient algebra $H(\lambda, \mu)/\langle a\rangle$ of $H(\lambda, \mu)$ modulo $\langle a\rangle$. Then $H(\lambda, \mu)/\langle a\rangle$ is generated, as an algebra, by $\overline{g}$ and $\overline{b}$. In this case, we have $\overline{g}\overline{b}=\overline{b}\overline{g}$. It follows that the ideal $\langle\overline{b}\rangle$ of $H(\lambda, \mu)/\langle a\rangle$ generated by $\overline{b}$ satisfies $\langle\overline{b}\rangle^n=0$. Therefore, $b\in J$. \(2) Assume $t>1$ and $\lambda\mu\neq0$. Then $g^{p^m}\neq1$ for all $m\geqslant 0$. Hence $a^{p^m}=\lambda^{p^{m-1}}(1-g^{p^m})\neq0$ for all $m\geqslant 0$. This means that $a$ is not a nilpotent element, and so $a\notin J$. Similarly, $b\notin J$. \[2.5\] If $\lambda\neq0$, then $H(\lambda,\mu)\cong H(1,\lambda^{-1}\mu)$. Assume $\lambda\neq0$. Let $g$, $a$, $b$ and $g_0$, $a_0$, $b_0$ denote the generators of $H(\lambda,\mu)$ and $H(1,\lambda^{-1}\mu)$, respectively. Then in $H(1,\lambda^{-1}\mu)$ we have $g_0^n=1$, $g_0^{-1}(\lambda^{\frac{1}{p}}a_0)g=\lambda^{\frac{1}{p}}a_0$, $g_0^{-1}(\lambda^{\frac{1}{p}}b_0)g_0=\lambda^{\frac{1}{p}}a_0+\lambda^{\frac{1}{p}}b_0$. $(\lambda^{\frac{1}{p}}a_0)^p=\lambda(1-g_0^p)$, $(\lambda^{\frac{1}{p}}b_0)^p=\mu(1-g_0^p)$, and $(\lambda^{\frac{1}{p}}b_0)(\lambda^{\frac{1}{p}}a_0) =(\lambda^{\frac{1}{p}}a_0)(\lambda^{\frac{1}{p}}b_0)+{\frac{1}{2}}(\lambda^{\frac{1}{p}}a_0)^2$. It follows that there is an algebra map $\varphi:H(\lambda,\mu)\rightarrow H(1,\lambda^{-1}\mu)$ such that $\varphi(g)=g_0$, $\varphi(a)=\lambda^{\frac{1}{p}}a_0$ and $\varphi(b)=\lambda^{\frac{1}{p}}b_0$. It is easy to see that $\varphi$ is a Hopf algebra homomorphism. Similarly, there exists a Hopf algebra homomorphism $\psi: H(1, \lambda^{-1}\mu)\rightarrow H(\lambda, \mu)$ such that $\psi(g_0)=g$, $\psi(a_0)=\lambda^{-\frac{1}{p}}a$ and $\psi(b_0)=\lambda^{-\frac{1}{p}}b$. Obviously, $\varphi\circ\psi={\rm id}$ and $\psi\circ\varphi={\rm id}$, and so $H(\lambda,\mu)\cong H(1,\lambda^{-1}\mu)$. Simple modules and Projective Modules over $H(\lambda, \mu)$ {#3} ============================================================ Throughout this section, assume $p>2$. Let $n=p^st$ with $p\nmid t$ and $s\geqslant 1$. Let $\xi$ be a $t$-$th$ primitive root of unity in $k$. Let $\lambda, \mu\in k$. We will investigate simple modules and projective modules over $H(\lambda, \mu)$ in this section. Note that $kG$ is the coradical of $H(\lambda, \mu)$. Since $p|n$, we know that $kG$ is not semisimple. It has $t$ non-isomorphic simple modules, which are all 1-dimensional and given by the corresponding algebra homomorphisms $\rho_i: kG\rightarrow k$, $\rho_i(g)=\xi^i$, $0\leqslant i\leqslant t-1$. Moreover, $kG$ has $n$ non-isomorphic indecomposable modules, which can be described by the matrix representations as follows: $$\rho_{r,i}(g)=\left(\begin{array}{ccccc} \xi^i&1&\cdots&0&0\\ 0&\xi^i&\cdots&0&0\\ \cdots&\cdots&\cdots&\cdots&\cdots\\ 0&0&\cdots&\xi^i&1\\ 0&0&\cdots&0&\xi^i \end{array}\right)_{r\times r}$$ where $1\leqslant r\leqslant p^s$ and $0\leqslant i\leqslant t-1$ (see [@Don09]). \[3.1\]If $t=1$, there is only one simple module $S$ over $H(\lambda,\mu)$, which is $1$-dimensional and given by $g\cdot v=v$, $a\cdot v=0$ and $b\cdot v=0$ for all $v\in S$. In particular, $H(\lambda,\mu)$ is a local algebra in this case. Assume $t=1$. 
Then by Lemma \[2.4\](1), we know that $a,b\in J$, the Jacobson radical of $H(\lambda, \mu)$, and $H(\lambda,\mu)/\langle a, b\rangle\cong kG$, where $\langle a, b\rangle$ is the ideal of $H(\lambda, \mu)$ generated by $a$ and $b$. Hence the theorem follows. In the rest of this section, assume $t>1$. \[3.2\] Let $M$ be an $H(\lambda,\mu)$-module. If there exists an element $0\neq v\in M$ such that $g\cdot v=\alpha v$ and $a\cdot v=\beta v$ for some $\alpha,\beta\in k$ with $\beta\neq 0$, then the following statements holds: $(1)$ If $1\leqslant m\leqslant p-1$, then $$ab^m \cdot v= \sum_{0\leqslant j\leqslant m} \alpha_{m,j}b^j\cdot v\ \mbox{ and }\ gb^m\cdot v=\sum_ {0\leqslant j\leqslant m}\beta_{m, j}b^j\cdot v,$$ where $\alpha_{m,j}, \beta_{m, j}\in k$ with $\alpha_{m,m}=\beta$, $\alpha_{m, m-1}=-{\frac{m}{2}}\beta^2$, $\beta_{m, m}=\alpha$, and $\beta_{m, m-1}=-m\alpha\beta$. $(2)$ $N ={\rm span}\{v, b\cdot v, \cdots, b^{p-1}\cdot v\}$ is an submodule of $M$. $(3)$ $\{v, b\cdot v, \cdots, b^{p-1}\cdot v\}$ are linearly independent. $(4)$ Consider the actions of $g$ and $a$ on $N$. Then $\alpha$ and $\beta$ are the only eigenvalues of $g$ and $a$, respectively, with multiplicity $p$. Moreover, $v$ is the unique common eigenvector of $g$ and $a$ up to a non-zero scale multiple. $(5)$ $N$ is a simple $H(\lambda,\mu)$-module. \(1) It follows from Parts (2) and (3) of Lemma \[2.2\]. \(2) Since $b^p=\mu(1-g^p)$, it follows from Part (1). \(3) Suppose that $\{v, b\cdot v, \cdots, b^{p-1}\cdot v\}$ are linearly dependent. Since $v\neq 0$, there exists an $m$ with $0\leqslant m< p-1$ such that $\{v, b\cdot v, \cdots, b^m\cdot v\}$ are linearly independent, but $\{v, b\cdot v, \cdots, b^m\cdot v, b^{m+1}\cdot v\}$ are linearly dependent. Hence there are some $\alpha_i\in k$ such that $b^{m+1}\cdot v=\sum\limits_{0\leqslant i\leqslant m}\alpha_ib^i\cdot v$. Thus, $ab^{m+1}\cdot v=\sum\limits_{0\leqslant i\leqslant m}\alpha_iab^i\cdot v$. By Part (1), we have $ab^{m+1} \cdot v= \sum\limits_{0\leqslant j\leqslant m+1} \alpha_{m+1,j}b^j\cdot v=\beta b^{m+1}\cdot v+ \sum\limits_{0\leqslant j\leqslant m} \alpha_{m+1,j}b^j\cdot v$ and $$\begin{array}{rcl} \sum\limits_{0\leqslant i\leqslant m}\alpha_iab^i\cdot v &=&\sum\limits_{0\leqslant i\leqslant m} \sum\limits_{0\leqslant j\leqslant i}\alpha_i\alpha_{i, j}b^j\cdot v\\ &=&\alpha_m\beta b^m\cdot v+\sum\limits_{0\leqslant j\leqslant m-1}\gamma_jb^j\cdot v,\\ \end{array}$$ where $\gamma_j\in k$ for $0\leqslant j\leqslant m-1$. Hence we have $$\begin{array}{rcl} ab^{m+1}\cdot v-\beta b^{m+1}\cdot v&=&\sum\limits_{0\leqslant j\leqslant m} \alpha_{m+1,j}b^j\cdot v\\ &=&-\frac{m+1}{2}\beta^2b^m\cdot v+\sum\limits_{0\leqslant j\leqslant m-1} \alpha_{m+1,j}b^j\cdot v\\ \end{array}$$ and $$a(\sum\limits_{0\leqslant i\leqslant m}\alpha_ib^i\cdot v)- \beta(\sum\limits_{0\leqslant i\leqslant m}\alpha_ib^i\cdot v) =\sum\limits_{0\leqslant j\leqslant m-1}(\gamma_j-\alpha_j\beta)b^j\cdot v.$$ It follows that $-\frac{m+1}{2}\beta^2b^m\cdot v+\sum\limits_{0\leqslant j\leqslant m-1} \alpha_{m+1,j}b^j\cdot v=\sum\limits_{0\leqslant j\leqslant m-1}(\gamma_j-\alpha_j\beta)b^j\cdot v.$ Since $-\frac{m+1}{2}\beta^2\neq0$, one gets that $\{v, b\cdot v, \cdots, b^m\cdot v\}$ are linearly dependent, a contradiction. \(4) It follows from Parts (1) and (3). \(5) Let $N_0$ be a non-zero submodule of $N$. Then $N_0$ must contain a common eigenvector of $g$ and $a$. Hence $v\in N_0$ by Part (4), and so $N_0=N$. This shows that $N$ is a simple module. 
Now we will compute simple modules over $H(\lambda,\mu)$. Note that $H(\lambda, \mu)=\mathcal{B}(V)\# kG$ if $\lambda=\mu=0$. We first consider the case of $\lambda=0$. \[3.3\] Let $\mu\in k$. Then there are $t$ non-isomorphic simple modules $T_i$ over $H(0,\mu)$, $0\leqslant i\leqslant t-1$. Each $T_i$ is $1$-dimensional and given by $$g\cdot v=\xi^iv,\ a\cdot v=0,\ b\cdot v=\mu^\frac{1}{p}(1-\xi^i)v,\ v\in T_i.$$ Let $0\leqslant i\leqslant t-1$. Then it is easy to see that there is an algebra map $\rho_i: H(0, \mu)\rightarrow k$ such that $\rho_i(g)=\xi^i$, $\rho_i(a)=0$ and $\rho_i(b)=\mu^\frac{1}{p}(1-\xi^i)$. It follows that $T_0, T_1, \cdots, T_{t-1}$ given in the theorem are non-isomorphic 1-dimensional simple $H(0, \mu)$-modules. By the proof of Lemma \[2.4\](1), one knows that the ideal $\langle a\rangle$ of $H(0, \mu)$ generated by $a$ is equal to $H(0, \mu)a=aH(0, \mu)$. Since $a^p=0$, $\langle a\rangle^p=(H(0, \mu)a)^p=H(0, \mu)a^p=0$. Hence $\langle a\rangle\subseteq J$, the Jacobson radical of $H(0, \mu)$. Thus, any simple $H(0, \mu)$-module is a simple module over the quotient algebra $H(0, \mu)/\langle a\rangle$. However, $H(0, \mu)/\langle a\rangle$ is a commutative algebra and $k$ is an algebraically closed field. It follows that any simple $H(0, \mu)$-module is 1-dimensional and determined by an algebra map from $H(0, \mu)$ to $k$. Now let $\rho: H(0, \mu)\rightarrow k$ be an algebra map. Then $\rho(a)=0$. Since $\rho(g)^n=\rho(g^n)=\rho(1)=1$, $\rho(g)=\xi^i$ for some $0\leqslant i\leqslant t-1$. Since $b^p=\mu(1-g^p)$, $\rho(b)^p=\mu(1-\rho(g)^p)=\mu(1-\xi^{ip})=(\mu^{\frac{1}{p}}(1-\xi^i))^p$, and so $\rho(b)=\mu^{\frac{1}{p}}(1-\xi^i)$. Thus, $\rho=\rho_i$. This completes the proof. For the case of $\lambda\neq 0$, by Lemma \[2.5\], we may assume $\lambda=1$. Let $S_0$ be the trivial $H(1, \mu)$-module given by the counit $\varepsilon: H(1, \mu)\rightarrow k$. Then dim$S_0=1$, and $$g\cdot v=v,\ a\cdot v=0,\ b\cdot v=0,\ v\in S_0.$$ Now let $A$ be the subalgebra of $H(1, \mu)$ generated by $g$ and $a$. Then $A$ is a Hopf subalgebra of $H(1, \mu)$. Hence $H(1, \mu)$ is a free right (left) $A$-module [@Mon93]. Note that $A$ is a commutative algebra. For $1\leqslant i\leqslant t-1$, there is an algebra map $\rho_i: A\rightarrow k$ defined by $\rho_i(g)=\xi^i$ and $\rho_i(a)=1-\xi^i$. Let $X_i$ denote the corresponding left $A$-module. Then dim$X_i=1$, $g\cdot x=\xi^ix$ and $a\cdot x=(1-\xi^i)x$ for all $x\in X_i$. Let $S_i=H(1, \mu)\otimes_AX_i$. Then $S_i$ is a non-zero left cyclic $H(1, \mu)$-module generated by $1\otimes x$, where $0\neq x\in X_i$. \[3.4\] Let $0\leqslant i\leqslant t-1$. Then we have $(1)$ $S_0, S_1, \cdots, S_{t-1}$ are non-isomorphic simple $H(1, \mu)$-modules. $(2)$ If $i\neq 0$, ${\rm dim}S_i=p$ and there is a $0\neq v\in S_i$ such that $g\cdot v=\xi^i v$ and $a\cdot v=(1-\xi^i)v$. Moreover, $\{v, b\cdot v, \cdots, b^{p-1}\cdot v\}$ is a basis of $S_i$. $(3)$ If $M$ is a simple $H(1, \mu)$-module, then $M$ is isomorphic to some $S_i$. We have already known that $S_0$ is a simple $H(1, \mu)$-module and dim$S_0=1$. Now let $1\leqslant i\leqslant t-1$ and take $0\neq x\in X_i$. Let $v=1\otimes x\in S_i$. Then $g\cdot v=\xi^i v$ and $a\cdot v=(1-\xi^i)v$. Since $S_i$ is a cyclic $H(1, \mu)$-module generated by $v$, it follows from Lemma \[3.2\] that $S_i$ is a simple $H(1, \mu)$-module with dim$S_i=p$. 
Moreover, $\{v, b\cdot v, \cdots, b^{p-1}\cdot v\}$ is a basis of $S_i$, and $v$ is the unique common eigenvector of the actions of $g$ and $a$ on $S_i$ up to a non-zero scale multiple. Thus, $S_0, S_1, \cdots, S_{t-1}$ are non-isomorphic simple $H(1, \mu)$-modules. This shows Parts (1) and (2). Now let $M$ be a simple $H(1,\mu)$-module. Since $k$ is an algebraically closed field and $ga=ag$, there is a non-zero vector $v\in M$ such that $g\cdot v=\alpha v$ and $a\cdot v=\beta v$ for some $\alpha, \beta\in k$. Hence $A\cdot v=kv$. Since $g^n=1$, $\alpha^n=\alpha^{p^st}=(\alpha^t)^{p^s}=1$. Hence $\alpha^t=1$, and consequently $\alpha=\xi^i$ for some $0\leqslant i\leqslant t-1$. Since $a^p=1-g^p$, we have $\beta^p=1-\xi^{ip}=(1-\xi^i)^p$. It follows that $\beta=1-\xi^i$. Since $M$ is a simple $H(1, \mu)$-module and $H(1, \mu)=\sum\limits_{0\leqslant j\leqslant p-1}b^jA$, one gets that $M=H(1, \mu)\cdot v={\rm span}\{v, b\cdot v, \cdots, b^{p-1}\cdot v\}$. We divide the discussion into the following two cases. For the case: $i=0$. In this case, $g\cdot v=v$, $a\cdot v=0$ and $b^p\cdot v=\mu(1-g^p)\cdot v=0$. Hence there is an integer $m$ with $0\leqslant m\leqslant p-1$ such that $b^m\cdot v\neq0$ but $b^{m+1}\cdot v=0$. If $m=0$, then $g\cdot v=v$, $a\cdot v=0$ and $b\cdot v=0$. Hence $M=kv\cong S_0$ since $M$ is simple. If $m>0$, then by Lemma \[2.2\] it follows that $ab^m\cdot v=0$ and $gb^m\cdot v=b^m\cdot v$. Thus, $k\{b^m\cdot v\}$ is a non-zero $H(1,\mu)$-submodule of $M$, and so $M=k(b^m\cdot v)\cong S_0$ since $M$ is simple. In this case, $v=\gamma b^m\cdot v$ for some $0\neq \gamma\in k$, which implies that $b\cdot v=0$, and so $m=0$, a contradiction. For the case: $1\leqslant i\leqslant t-1$. In this case, $a\cdot v=(1-\xi^i)v\neq0$. Since $M$ is a simple $H(1, \mu)$-module, it follows from Lemma \[3.2\] that $k\{v, b\cdot v, \cdots, {b^{p-1}\cdot v}\}$ is a basis of $M$. In this case, $M$ is isomorphic to $S_i$. In fact, let $0\neq x\in X_i$. Then there is an $A$-module isomorphism $f: X_i\rightarrow kv$, $f(x)=v$, where $kv$ is obviously an $A$-submodule of $M$. Since $M=H(1, \mu)\cdot v$, we have an $H(1, \mu)$-module epimorphism $$\psi: S_i=H(1, \mu)\otimes_AX_i\xrightarrow{{\rm id}\otimes f} H(1, \mu)\otimes_A(kv)\xrightarrow{\cdot}M$$ given by $\psi(h\otimes x)=h\cdot f(x)=h\cdot v$, $h\in H(1, \mu)$. Since both $S_i$ and $M$ are simple, $\psi$ must be an isomorphism. For any integer $i$, let $0\leqslant\overline{i}\leqslant t-1$ with $\overline{i}\equiv i$ (mod $t)$. For any positive integer $m$, let $I_m$ denote the identity $m\times m$-matrix over $k$. For any matrix $X$ over $k$, let $r(X)$ denote the rank of $X$. For $1\leqslant i, j\leqslant t-1$, let $\{b^{i_1}\cdot v\}_{0\leqslant i_1\leqslant p-1}$ and $\{b^{j_1}\cdot w\}_{0\leqslant j_1\leqslant p-1}$ be the basis of $S_i$ and $S_j$ as stated in Theorem \[3.4\], respectively. Then $\{b^{i_1}\cdot v\otimes b^{j_1}\cdot w\}_{0\leqslant i_1,j_1\leqslant p-1}$ is a basis of $S_i\otimes S_j$. For any $0\neq u=\sum x_{i_1,j_1}b^{i_1}\cdot v \otimes b^{j_1}\cdot w\in S_i\otimes S_j$, let $h(u)={\rm max}\{i_1+j_1|x_{i_1,j_1}\neq0\}$ and let $$u(1)={\rm max}\{i_1|x_{i_1,j_1}\neq0 \mbox{ for some $j_1$}\}\mbox{ and } u(2)={\rm max}\{j_1|x_{u(1),j_1}\neq0\}.$$ With the above notations, we have the following lemma. \[3.5\] Let $0\neq u\in S_i\otimes S_j$ with $h(u)=u(1)=l>0$. Assume $v_1=g\cdot u-\xi^{i+j}u\neq 0$. Then $(1)$ $h(v_1)< l$. 
$(2)$ If $v_1(2)=0$, then there is an element $u'\in S_i\otimes S_j$ with $h(u')\leqslant l$ and $u'(1)=$ $v_1(1)$ such that $g\cdot u''-\xi^{i+j}u''=0$, or $(g\cdot u''-\xi^{i+j}u'')(1)<v_1(1)$, where $u''=u+u'$. $(3)$ If $v_1(2)>0$, then there is an element $u'\in S_i\otimes S_j$ with $h(u')\leqslant l$ and $u'(1)=$ $v_1(1)$ such that $g\cdot u''-\xi^{i+j}u''=0$, or $(g\cdot u''-\xi^{i+j}u'')(1)<v_1(1)$, or $(g\cdot u''-$ $\xi^{i+j}u'')(1)=v_1(1)$ and $(g\cdot u''-\xi^{i+j}u'')(2)<v_1(2)$, where $u''=u+u'$. Let $v_1(1)=m$ and $v_1(2)=s$. \(1) It follows from Lemma \[3.2\](1). \(2) Assume $s=0$. By Part (1), we have $0\leqslant m<l$. Hence $v_1=\alpha b^m\cdot v\otimes w+\sum\limits_{i_1<m}\alpha_{i_1, j_1}b^{i_1}\cdot v\otimes b^{j_1}\cdot w$ for some $\alpha$, $\alpha_{i_1, j_1}\in k$ with $\alpha\neq 0$. Take $u'=\alpha\xi^{-(i+j)}(1-\xi^j)^{-1}b^m\cdot v\otimes b\cdot w$ and let $u''=u+u'$. Then $h(u')=m+1\leqslant l$, $u'(1)=m$ and $$g\cdot u'-\xi^{i+j}u'=-\alpha b^m\cdot v\otimes w+\sum\limits_{i_1<m, j_1\leqslant 1} \beta_{i_1, j_1}b^{i_1}\cdot v\otimes b^{j_1}\cdot w.$$ Since $g\cdot u''-\xi^{i+j}u''=v_1+g\cdot u'-\xi^{i+j}u'$, we know that $g\cdot u''-\xi^{i+j}u''=0$, or $(g\cdot u''-\xi^{i+j}u'')(1)<m$. \(3) Assume $s>0$. Then $$v_1=\sum\limits_{0\leqslant j_1\leqslant s}\alpha_{j_1}b^m\cdot v \otimes b^{j_1}\cdot w+\sum\limits_{i_1<m}\alpha_{i_1, j_1}b^{i_1}\cdot v\otimes b^{j_1}\cdot w$$ for some $\alpha_{j_1}$, $\alpha_{i_1, j_1}\in k$ with $\alpha_s\neq 0$. Note that $m+s\leqslant h(v_1)<l\leqslant p-1$. Hence $s<p-1$ and so $1<s+1<p$. Let $u'=\alpha_s(s+1)^{-1}\xi^{-(i+j)}(1-\xi^j)^{-1}b^m\cdot v\otimes b^{s+1}\cdot w$ and $u''=u+u'$. Then $h(u')=m+s+1\leqslant l$, $u'(1)=m$ and $$g\cdot u'-\xi^{i+j}u'=-\alpha_s b^m\cdot v\otimes b^s\cdot w+\sum\limits_{j_1<s} \beta_{j_1}b^m\cdot v\otimes b^{j_1}\cdot w+\sum\limits_{i_1<m, j_1\leqslant s+1} \beta_{i_1, j_1}b^{i_1}\cdot v\otimes b^{j_1}\cdot w.$$ Since $g\cdot u''-\xi^{i+j}u''=v_1+g\cdot u'-\xi^{i+j}u'$, we know that $g\cdot u''-\xi^{i+j}u''=0$, or $(g\cdot u''-\xi^{i+j}u'')(1)<m$, or $(g\cdot u''-\xi^{i+j}u'')(1)=m$ and $(g\cdot u''-\xi^{i+j}u'')(2)<s$. \[3.6\] Let $0\neq u\in S_i\otimes S_j$ with $h(u)=u(1)=l>0$. If $g\cdot u\neq\xi^{i+j}u$, then there is an element $\overline{u}\in S_i\otimes S_j$ with $h(\overline{u})\leqslant l$ and $\overline{u}(1)<l$ such that $g\cdot\underline{u}=\xi^{i+j}\underline{u}$, where $\underline{u}=u+\overline{u}$. Let $u_1=u$, $v_1=g\cdot u_1-\xi^{i+j}u_1\neq 0$, $m_1=v_1(1)$ and $s_1=v_1(2)$. Then it follows from Lemma \[3.5\] that $m_1<l$ and there is an elements $u_1'\in S_i\otimes S_j$ with $h(u_1')\leqslant l$ and $u_1'(1)=m_1<l$ such that $g\cdot u_2=\xi^{i+j}u_2$, or $(g\cdot u_2-\xi^{i+j}u_2)(1)<m_1$, or $(g\cdot u_2-\xi^{i+j}u_2)(1)=m_1$ and $(g\cdot u_2-\xi^{i+j}u_2)(2)<s_1$, where $u_2=u_1+u_1'$. If $g\cdot u_2=\xi^{i+j}u_2$, then the theorem follows. Otherwise, let $v_2=g\cdot u_2-\xi^{i+j}u_2\neq0$, $v_2(1)=m_2$ and $v_2(2)=s_2$. Since $u_1(1)=l$ and $u_1'(1)=m_1<l$, $u_2(1)=l$, and so $h(u_2)=l$. By replacing $u_1$ with $u_2$, it follows from Lemma \[3.5\] that there is an $u_2'\in S_i\otimes S_j$ with $h(u_2')\leqslant l$ and $u_2'(1)=m_2<l$ such that $g\cdot u_3=\xi^{i+j}u_3$, or $(g\cdot u_3-\xi^{i+j}u_3)(1)<m_2$, or $(g\cdot u_3-\xi^{i+j}u_3)(1)=m_2$ and $(g\cdot u_3-\xi^{i+j}u_3)(2)<s_2$, where $u_3=u_2+u_2'$. Since $h(u_1')\leqslant l$ and $h(u_2')\leqslant l$, $h(u_1'+u_2')\leqslant l$. 
Furthermore, we have $u_2'(1)=m_2<m_1=u_1'(1)$, or $u_2'(1)=m_2=m_1=u_1'(1)$ and $u_2'(2)=s_2<s_1$. It follows that $(u_1'+u_2')(1)\leqslant m_1<l$. We also have $u_3=u_2+u_2'=u_1+u_1'+u_2'$. If $g\cdot u_3=\xi^{i+j}u_3$, then the theorem follows. Otherwise, let $v_3=g\cdot u_3-\xi^{i+j}u_3\neq0$, $v_3(1)=m_3$ and $v_3(2)=s_3$. Since $u_2(1)=l$ and $u_2'(1)=m_2<l$, $u_3(1)=l$, and so $h(u_3)=l$. Then we may repeat the above procedure by replacing $u_2$ with $u_3$, and continue. Thus one obtains a sequence of elements $u_1', u_2', u_3', \cdots$ in $S_i\otimes S_j$ with $h(u_q')\leqslant l$ and $u_q'(1)=m_q<l$ such that $g\cdot u_{q+1}=\xi^{i+j}u_{q+1}$, or $m_{q+1}:=(g\cdot u_{q+1}-\xi^{i+j}u_{q+1})(1)<m_q$, or $m_{q+1}:=(g\cdot u_{q+1}-\xi^{i+j}u_{q+1})(1)=m_q$ and $s_{q+1}:=(g\cdot u_{q+1}-\xi^{i+j}u_{q+1})(2)<s_q$, where $u_{q+1}=u_q+u_q'$, $q=1, 2, 3, \cdots$. We claim that the above procedure will stop. In fact, if $v_q=g\cdot u_q-\xi^{i+j}u_q\neq 0$ for all $q\geqslant 1$, then $m_{q+1}<m_q$, or $m_{q+1}=m_q$ and $s_{q+1}<s_q$ for all $q\geqslant 1$. Since $l>m_1\geqslant m_2\geqslant m_3\geqslant\cdots\geqslant 0$, there is a $q\geqslant 1$ such that $m_q=m_{q+1}=m_{q+2}=\cdots$. Then it follows that $s_q>s_{q+1}>s_{q+2}>\cdots\geqslant 0$. This is impossible. Thus, there exists an integer $m\geqslant 1$ such that $v_q=g\cdot u_q-\xi^{i+j}u_q\neq 0$ for all $1\leqslant q\leqslant m$, but $g\cdot u_{m+1}-\xi^{i+j}u_{m+1}=0$. Then the theorem follows. \[3.7\] Let $\{S_i\}_{0\leqslant i\leqslant t-1}$ be the complete set of non-isomorphic simple $H(1,\mu)$-modules defined in Theorem \[3.4\]. Then soc$(S_i\bigotimes S_j)\cong S_{\overline{i+j}}$ and $S_i\bigotimes S_j$ is indecomposable. In particular, $S_0\otimes S_i\cong S_i$ and $S_i\otimes S_0\cong S_i$. Here $0\leqslant i, j\leqslant t-1$. It is obvious that $S_0\otimes S_i\cong S_i$ and $S_i\otimes S_0\cong S_i$ for all $0\leqslant i\leqslant t-1$. Now let $1\leqslant i, j\leqslant t-1$. Let $\{b^{i_1}\cdot v|0\leqslant i_1\leqslant p-1\}$ and $\{b^{j_1}\cdot w|0\leqslant j_1\leqslant p-1\}$ be the bases of $S_i$ and $S_j$ as stated in Theorem \[3.4\], respectively. Then $\{b^{i_1}\cdot v\otimes b^{j_1}\cdot w|0\leqslant i_1, j_1\leqslant p-1\}$ is a basis of $S_i\bigotimes S_j$. By Lemma \[3.2\](1), the matrix of the action of $g$ on $S_i\bigotimes S_j$ with respect to the basis $\{v\otimes w, v\otimes b\cdot w, \cdots, v\otimes b^{p-1}\cdot w, b\cdot v\otimes w, b\cdot v\otimes b\cdot w, \cdots, b\cdot v\otimes b^{p-1}\cdot w, \cdots, b^{p-1}\cdot v\otimes w, b^{p-1}\cdot v\otimes b\cdot w, \cdots, b^{p-1}\cdot v\otimes b^{p-1}\cdot w\}$ has the form $$G_0=\left(\begin{array}{cccc} G_{11}&G_{12}&\cdots&G_{1p}\\ 0&G_{22}&\cdots&G_{2p}\\ \cdots&\cdots&\cdots&\cdots\\ 0&0&\cdots&G_{pp} \end{array}\right)$$ where each $G_{st}$ $(s\leqslant t)$ is an upper triangular $p\times p$-matrix, and $G_{ss}$ has the form $$\left(\begin{array}{ccccc} \xi^{i+j}&\alpha_{12}&*&\cdots&*\\ 0&\xi^{i+j}&\alpha_{23}&\cdots&*\\ 0&0&\xi^{i+j}&\cdots&*\\ \cdots&\cdots&\cdots&\cdots&\cdots\\ 0&0&0&\cdots&\xi^{i+j} \end{array}\right)$$ with $\alpha_{i_1,i_1+1}\neq 0$. Hence $\xi^{i+j}$ is the unique eigenvalue of the action of $g$ on $S_i\bigotimes S_j$. Moreover, $r(\xi^{i+j}I_p-G_{ss})=p-1$. It follows that $r(\xi^{i+j}I_{p^2}-G_0)\geqslant p(p-1)$. Thus, dim$V_{\xi^{i+j}}\leqslant p$, where $V_{\xi^{i+j}}$ is the eigenspace of the action of $g$ on $S_i\otimes S_j$. Obviously, $u_0=v\otimes w\in V_{\xi^{i+j}}$.
For any $1\leqslant l\leqslant p-1$, let $u_{(l)}=b^l\cdot v\otimes w$. Then $h(u_{(l)})=u_{(l)}(1)=l>0$. It follows from Lemma \[3.2\](1) that $g\cdot u_{(l)}\neq\xi^{i+j}u_{(l)}$. Then by Theorem \[3.6\], there is an element $u_{(l)}'\in S_i\otimes S_j$ with $h(u_{(l)}')\leqslant l$ and $u_{(l)}'(1)<l$ such that $g\cdot u_l=\xi^{i+j}u_l$, where $u_l=u_{(l)}+u_{(l)}'$. Obviously, $u_l(1)=l$ and $h(u_l)=l$ for all $0\leqslant l\leqslant p-1$. It follows that $\{u_0, u_1, \cdots, u_{p-1}\}\subset V_{\xi^{i+j}}$ are linearly independent over $k$. Thus, $\{u_0, u_1, \cdots, u_{p-1}\}$ is a $k$-basis of $V_{\xi^{i+j}}$. Let $v_l=g\cdot u_{(l)}-\xi^{i+j}u_{(l)}$. Then it follows from Lemma \[3.2\] that $v_l=-l\xi^{i+j}(1-\xi^i)b^{l-1}\cdot v\otimes w +\sum\limits_{i_1<l-1}\alpha_{i_1}b^{i_1}\cdot v\otimes w$. Hence $v_l(1)=l-1$ and $v_l(2)=0$. Since $g\cdot u_l-\xi^{i+j}u_l=0$, $v_l+g\cdot u'_{(l)}-\xi^{i+j}u'_{(l)}=0$. Hence $(g\cdot u'_{(l)}-\xi^{i+j}u'_{(l)})(1)=l-1$ and $(g\cdot u'_{(l)}-\xi^{i+j}u'_{(l)})(2)=0$. By Lemma \[3.2\], we know that $l-1=(g\cdot u'_{(l)}-\xi^{i+j}u'_{(l)})(1)\leqslant u'_{(l)}(1) <l$, which forces that $u'_{(l)}(1)=l-1$. Since $u'_{(l)}(1)+u'_{(l)}(2)\leqslant h(u'_{(l)})\leqslant l$, $u'_{(l)}(2)\leqslant 1$. If $u'_{(l)}(2)=0$, then it follows from Lemma \[3.2\] that $l-1=(g\cdot u'_{(l)}-\xi^{i+j}u'_{(l)})(1)< u'_{(l)}(1)=l-1$, a contradiction. Therefore, $u'_{(l)}(2)=1$, and so $h(u'_{(l)})=l$. Thus we have $$u'_{(l)}=\alpha b^{l-1}\cdot v\otimes b\cdot w+\beta b^{l-1}\cdot v\otimes w +\sum\limits_{i_1<l-1}\alpha_{i_1,j_1}b^{i_1}\cdot v\otimes b^{j_1}\cdot w.$$ Again by Lemma \[3.2\], one gets $$g\cdot u'_{(l)}-\xi^{i+j}u'_{(l)}=-\alpha\xi^{i+j}(1-\xi^j) b^{l-1}\cdot v\otimes w +\sum\limits_{i_1<l-1}\beta_{i_1,j_1}b^{i_1}\cdot v\otimes b^{j_1}\cdot w.$$ Since $v_l+g\cdot u'_{(l)}-\xi^{i+j}u'_{(l)}=0$, $\alpha=-l(1-\xi^i)(1-\xi^j)^{-1}$, and hence $$u'_{(l)}=-l(1-\xi^i)(1-\xi^j)^{-1}b^{l-1}\cdot v\otimes b\cdot w+\beta b^{l-1}\cdot v\otimes w +\sum\limits_{i_1<l-1}\alpha_{i_1,j_1}b^{i_1}\cdot v\otimes b^{j_1}\cdot w.$$ Since $ga=ag$, $a\cdot V_{\xi^{i+j}}\subseteq V_{\xi^{i+j}}$. Consider the action of $a$ on $V_{\xi^{i+j}}$. Then $a\cdot u_0=(1-\xi^{i+j})u_0$. For $1\leqslant l\leqslant p-1$, let $u=u_l+\alpha_1u_{l-1}+\ldots+\alpha_lu_0$ be an element in $V_{\xi^{i+j}}$. If $a\cdot u=\alpha u$ for some $\alpha\in k$, then by comparing the coefficients of the term $b^l\cdot v\otimes w$, we find that $\alpha=1-\xi^{i+j}$. It follows that $1-\xi^{i+j}$ is the unique eigenvalue for the action of $a$ on $V_{\xi^{i+j}}$. Using Lemma \[3.2\], one finds that the coefficient of the term $b^{l-1}\cdot v\otimes w$ in $a\cdot u-(1-\xi^{i+j})u$ is $-\frac{l}{2}(1-\xi^i)(1-\xi^{i+j})$. We divide the discussion into the following two cases. For case 1: $i+j\neq t$. In this case, $a\cdot u-(1-\xi^{i+j})u\neq0$, and hence $u$ is not an eigenvector of the action of $a$. It follows that $u_0$ is the unique common eigenvector of the actions of $g$ and $a$ up to a non-zero scalar multiple. It follows from Theorem \[3.4\] that soc$(S_i\otimes S_j)\cong S_{\overline{i+j}}$. For case 2: $i+j=t$. In this case, $1$ is the unique eigenvalue of the action of $g$. It follows from Theorem \[3.4\] that any simple submodule of $S_i\otimes S_j$ is isomorphic to $S_0$, and is spanned by a non-zero vector $v'$ with $g\cdot v'=v'$, $a\cdot v'=0$ and $b\cdot v'=0$. Now we have $g\cdot u_0=u_0$ and $a\cdot u_0=0$.
By Lemma \[2.2\](2), it follows that $g\cdot(b^l\cdot u_0)=b^l\cdot u_0$ and $a\cdot(b^l\cdot u_0)=0$ for all $1\leqslant l\leqslant p-1$. Since $\Delta(b)={b\otimes 1}+{g\otimes b}$, one can see that $(b^l\cdot u_0)(1)=l$, $(b^l\cdot u_0)(2)=0$. It follows that $\{u_0, b\cdot u_0, \cdots, b^{p-1}\cdot u_0\}$ are linearly independent and contained in $V_{\xi^{i+j}}=V_1$. Furthermore, $b\cdot(b^{p-1}\cdot u_0)=b^p\cdot u_0=0$. Thus, soc$(S_i\otimes S_j)=k(b^{p-1}\cdot u_0)\cong S_0$. This completes the proof. Now we are going to investigate the indecomposable projective modules over $H(\lambda, \mu)$. Let $e_i=\frac{1}{t}\sum_{j=0}^{t-1}(\xi^{-ip^s}g^{p^s})^j$. Then $\{e_0, e_1, \cdots, e_{t-1}\}$ is a set of primitive orthogonal idempotents in $kG$ since $\xi^{p^s}$ is also a primitive $t$-th root of unity. Now we have $(1-\xi^{-ip^s}g^{p^s})e_i=\frac{1}{t}[1-(\xi^{-ip^s}g^{p^s})^t]=0$, that is, $g^{p^s}e_i=\xi^{ip^s}e_i$. Hence $\{g^{i_1}e_i|0\leqslant i_1\leqslant p^s-1\}$ is a basis of $kGe_i$ and dim$kGe_i=p^s$. Under this basis, the matrix of the action of $g$ on $kGe_i$ is $$\left(\begin{array}{ccccc} 0&0&\cdots&0&\xi^{ip^s}\\ 1&0&\cdots&0&0\\ \cdots&\cdots&\cdots&\cdots&\cdots\\ 0&0&\cdots&0&0\\ 0&0&\cdots&1&0 \end{array}\right)_{p^s\times p^s}.$$ The characteristic polynomial of $g$ on $kGe_i$ is $\chi(x)=x^{p^s}-\xi^{ip^s}=(x-\xi^i)^{p^s}$, the last equality holding since ${\rm char}\, k=p$. Acting on $kGe_i$, $g$ has a unique eigenvalue $\xi^i$ with multiplicity $p^s$. By Lemma \[2.2\], $g^p\in Z(H(\lambda,\mu))$, the center of $H(\lambda,\mu)$. Hence $\{e_0, e_1, \cdots, e_{t-1}\}$ is a set of central orthogonal idempotents of $H(\lambda, \mu)$. It follows that $H(\lambda,\mu)=\bigoplus_{0\leqslant i\leqslant t-1}H(\lambda,\mu)e_i$ is a decomposition of the left regular module $H(\lambda,\mu)$, which is also a decomposition of $H(\lambda, \mu)$ into two-sided ideals. Thus, the action of $g$ on $H(\lambda,\mu)e_i$ has the unique eigenvalue $\xi^i$ (with multiplicity $p^{s+2}$). So $g$ has the unique eigenvalue $\xi^i$ when it acts on any indecomposable projective module occurring in $H(\lambda,\mu)e_i$. Note that dim$H(\lambda, \mu)$=dim$(\mathcal{B}(V)\# kG)=p^2n=p^{s+2}t$ and $$H(\lambda, \mu)e_i={\rm span}\{g^{i_1}a^{i_2}b^{i_3}e_i| 0\leqslant i_1\leqslant p^s-1, 0\leqslant i_2, i_3\leqslant p-1\}.$$ Hence dim$H(\lambda, \mu)e_i=p^{s+2}$ and $\{g^{i_1}a^{i_2}b^{i_3}e_i| 0\leqslant i_1\leqslant p^s-1, 0\leqslant i_2, i_3\leqslant p-1\}$ is a basis of $H(\lambda,\mu)e_i$. Now we can prove the main results of this section. \[3.8\] Let $\{T_0, T_1, \cdots, T_{t-1}\}$ be the complete set of non-isomorphic simple $H(0,\mu)$-modules given in Theorem \[3.3\]. Let $P(T_i)$ denote the projective cover of $T_i$. Then $P(T_i)\cong H(0,\mu)e_i$, where $0\leqslant i\leqslant t-1$. Since $\xi^i$ is an eigenvalue of the action of $g$ on $T_i\cong P(T_i)/{\rm rad}(P(T_i))$, $\xi^i$ is the unique eigenvalue of the action of $g$ on $P(T_i)$. It follows that $P(T_i)$ must be the unique indecomposable direct summand of $H(0,\mu)e_i$ up to isomorphism of $H(0, \mu)$-modules. Since dim$T_i=1$, the left regular module $H(0, \mu)$ has the decomposition $H(0, \mu)\cong\bigoplus_{0\leqslant i\leqslant t-1}P(T_i)$, which forces that $P(T_i)\cong H(0,\mu)e_i$. Now we are going to consider the case of $\lambda=1$. Let us first show the following lemma for the case of $\mu=0$. \[3.9-1\] In the Hopf algebra $H(1, 0)$, we have $$b^mab^{p-1}=\frac{m!}{2^m}a^{m+1}b^{p-1},\ \ m\geqslant 0.$$ We prove the equation $b^mab^{p-1}=\frac{m!}{2^m}a^{m+1}b^{p-1}$ by induction on $m$. If $m=0$, it is obvious.
Now let $m\geqslant 0$ and assume $b^mab^{p-1}=\frac{m!}{2^m}a^{m+1}b^{p-1}$. Since $b^p=0$, by Lemma \[2.2\](1) we have $$\begin{array}{rcl} b^{m+1}ab^{p-1}&=&\frac{m!}{2^m}ba^{m+1}b^{p-1}\\ &=&\frac{m!}{2^m}(a^{m+1}b+\frac{m+1}{2}a^{m+2})b^{p-1}\\ &=&\frac{m!}{2^m}(a^{m+1}b^p+\frac{m+1}{2}a^{m+2}b^{p-1})\\ &=&\frac{(m+1)!}{2^{m+1}}a^{m+2}b^{p-1}.\\ \end{array}$$ This completes the proof. \[3.9\] Let $\{S_0, S_1, \cdots, S_{t-1}\}$ be the complete set of non-isomorphic simple $H(1,\mu)$-modules described in Theorem \[3.4\]. Let $P(S_i)$ denote the projective cover of $S_i$. Then $(1)$ $P(S_0)\cong H(1,\mu)e_0$ and ${\rm dim}P(S_0)=p^{s+2}$. $(2)$ Let $1\leqslant i\leqslant t-1$. Then ${\rm dim}P(S_i)=p^{s+1}$. Moreover, if $\mu=0$, then $P(S_i)\cong H(1,0)b^{p-1}e_i$ and $\{g^{i_1}a^{i_2}b^{p-1}e_i|0\leqslant i_1\leqslant p^s-1, 0\leqslant i_2\leqslant p-1\}$ is a basis of $H(1, 0)b^{p-1}e_i$. If $\mu\neq0$ and $s=1$, then $P(S_i)\cong H(1,\mu)b_0^{p-1}e_i$, and $H(1,\mu)b_0^{p-1}e_i$ has a basis $\{g^{i_1}a^{i_2}b_0^{p-1}e_i| 0\leqslant i_1\leqslant p^s-1, 0\leqslant i_2\leqslant p-1\}$, where $b_0=b+\alpha_0$ and $\alpha_0=\mu^{\frac{1}{p}}(\xi^i-1)$. \(1) Since $\xi^i$ is an eigenvalue of the action of $g$ on $S_i=P(S_i)/{\rm rad}(P(S_i))$, $\xi^i$ is the unique eigenvalue of the action of $g$ on $P(S_i)$. It follows that $P(S_i)$ must be the unique indecomposable direct summand of $H(1,\mu)e_i$ up to isomorphism of $H(1, \mu)$-modules. By the Wedderburn-Artin Theorem, the left regular module $H(1, \mu)$ has the decomposition $H(1, \mu)\cong\bigoplus_{0\leqslant i\leqslant t-1}P(S_i)^{{\rm dim}S_i}$, where $P(S_i)^m$ denotes the direct sum of $m$ copies of $P(S_i)$. It follows that $H(1, \mu)e_i\cong P(S_i)^{{\rm dim}S_i}$ as left $H(1, \mu)$-modules. Since dim$S_0=1$, one gets that $P(S_0)\cong H(1,\mu)e_0$ and dim$P(S_0)=p^{s+2}$. \(2) Let $1\leqslant i\leqslant t-1$. Since dim$S_i=p$ and dim$H(1, \mu)e_i=p^{s+2}$, $H(1,\mu)e_i\cong P(S_i)^p$, the direct sum of $p$ copies of $P(S_i)$. Hence dim$P(S_i)=p^{s+1}$. Assume $\mu=0$. Then by Lemma \[3.9-1\] we have $b^{p-1}ab^{p-1}=\frac{(p-1)!}{2^{p-1}}a^pb^{p-1}$. Let $\widetilde{e_i}=a^{p^s-p+1}b^{p-1}e_i$. Since $a^p=1-g^p$ and $g^p\in Z(H(1, 0))$, we have $a^p\in Z(H(1, 0))$. Therefore, we have $$\begin{split} \widetilde{e_i}^2&=a^{p^s-p+1}b^{p-1}a^{p^s-p+1}b^{p-1}e_i\\ &=a^{2(p^s-p)+1}b^{p-1}ab^{p-1}e_i\\ &=\frac{(p-1)!}{2^{p-1}}a^{2(p^s-p)+1}a^pb^{p-1}e_i\\ &=\frac{(p-1)!}{2^{p-1}}a^{p^s-p+1}a^{p^s}b^{p-1}e_i\\ &=\frac{(p-1)!}{2^{p-1}}a^{p^s-p+1}b^{p-1}(1-g^{p^s})e_i\\ &=\frac{(p-1)!}{2^{p-1}}(1-\xi^{ip^s})a^{p^s-p+1}b^{p-1}e_i\\ &=\frac{(p-1)!}{2^{p-1}}(1-\xi^{ip^s})\widetilde{e_i}. \end{split}$$ Then $\widetilde{e_i}^2=\alpha\widetilde{e_i}$ with $\alpha=\frac{(p-1)!}{2^{p-1}}(1-\xi^{ip^s})\neq0$ in $k$. Let $\widehat{e_i}=\alpha^{-1}\widetilde{e_i}$. Then $\widehat{e_i}^2=\widehat{e_i}$. Hence $H(1, 0)\widehat{e_i}$ is a summand of $H(1, 0)e_i$ as a left $H(1, 0)$-module. It follows that $H(1, 0)\widehat{e_i}\cong P(S_i)^m$ for some $1\leqslant m\leqslant{\rm dim}S_i$. Obviously, $H(1, 0)\widehat{e_i}\subseteq H(1, 0)b^{p-1}e_i$. Since $a^p=1-g^p$ and $b^p=0$, it follows from Lemma \[2.2\](1) that $H(1, 0)b^{p-1}e_i={\rm span}\{g^{i_1}a^{i_2}b^{p-1}e_i|0\leqslant i_1\leqslant p^s-1, 0\leqslant i_2\leqslant p-1\}$. Hence $p^{s+1}={\rm dim}P(S_i) \leqslant{\rm dim}(H(1,0)\widehat{e_i})\leqslant{\rm dim}(H(1,0)b^{p-1}e_i) \leqslant p^{s+1}$. This implies that ${\rm dim}(H(1,0)\widehat{e_i})={\rm dim} (H(1, 0)b^{p-1}e_i)=p^{s+1}$.
Hence $P(S_i)\cong H(1,0)\widehat{e_i}=H(1, 0)b^{p-1}e_i$, and consequently $H(1,0)b^{p-1}e_i$ has a basis $\{g^{i_1}a^{i_2}b^{p-1}e_i|0\leqslant i_1\leqslant p^s-1, 0\leqslant i_2\leqslant p-1\}$. Now assume $\mu\neq0$ and $s=1$. Let $\alpha_0=\mu^{\frac{1}{p}}(\xi^i-1)\in k\subseteq H(1, \mu)$ and $b_0=b+\alpha_0\in H(1,\mu)$. Then $b_0^p=\mu(\xi^{ip}-g^p)=\mu(\xi^i-g)^p$, and so $b_0^pe_i=0$. Since $g^p\in Z(H(1,\mu))$, $b_0^p\in Z(H(1,\mu))$. An argument similar to Lemma \[3.9-1\] shows that $b_0^mab_0^{p-1}e_i=\frac{m!}{2^m}a^{m+1}b_0^{p-1}e_i$ for all $m\geqslant 0$. Let $e'_i=\frac{2^{p-1}}{(p-1)!}(1-\xi^{ip})^{-1}ab_0^{p-1}e_i$. Then it follows from an argument similar to the case of $\mu=0$ that $(e'_i)^2=e'_i$, $P(S_i)\cong H(1,\mu)e'_i=H(1,\mu)b_0^{p-1}e_i$ and $\{g^{i_1}a^{i_2}b_0^{p-1}e_i|0\leqslant i_1\leqslant p^s-1, 0\leqslant i_2\leqslant p-1\}$ is a basis of $H(1, \mu)b_0^{p-1}e_i$. If $p=3,5,7,11$, we find that $b_1^p=[b+\mu^{\frac{1}{p}}(g-1)]^p=0$. Then the argument in the proof of Theorem \[3.9\] can be applied to $H(1,\mu)$ with $\mu\neq0$ and $s\geqslant 1$. In this case, we have that $P(S_i)\cong H(1,\mu)b_1^{p-1}e_i$ and $\{g^{i_1}a^{i_2}b_1^{p-1}e_i| 0\leqslant i_1\leqslant p^s-1, 0\leqslant i_2\leqslant p-1\}$ is a basis of $H(1, \mu)b_1^{p-1}e_i$, where $1\leqslant i\leqslant t-1$. \[3.11\] If $t>1$, then $\{e_0, e_1, \cdots, e_{t-1}\}$ is a set of central orthogonal primitive idempotents of $H(\lambda,\mu)$. \[3.11\] If $t>1$, then each block $H(\lambda,\mu)e_i$ of $H(\lambda,\mu)$ is a symmetric algebra. Moreover, $H(\lambda,\mu)e_0$ is a local symmetric algebra. It follows from Lemma \[2.3\] and [@Erd90 Lemma I.3.3] Representation types of $\mathcal{B}(V)\# kG$ and $H(\lambda,\mu)$ {#4} ================================================================== In this section, we will consider the representation types of $\mathcal{B}(V)\# kG$ and $H(\lambda,\mu)$. Let us first consider the simple modules and their projective covers over $\mathcal{B}(V)\# kG$. When $p>2$, $\mathcal{B}(V)\# kG=H(0, 0)$ as noted in the last section. In this case, the simple modules and their projective covers over $\mathcal{B}(V)\# kG$ have been described in the last section, see Theorems \[3.3\] and \[3.8\]. Now let us assume $p=2$ and $n=2^st$ with $2\nmid t$ and $s\geqslant 1$. Let $\xi$ be a $t$-$th$ primitive root of unity in $k$. We denote by $H$ the Hopf algebra $\mathcal{B}(V)\# kG$ defined in Section \[2\]. Since $H$ is a finite dimensional graded Hopf algebra $H=\bigoplus_{m\geqslant 0}H_m$ with $H_0=kG$ and $a, b\in H_1$, a left $H$-module $M$ is a simple $H$-module if and only if $M$ is a simple $kG$-module and $a\cdot M=b\cdot M=0$. Hence we have the following proposition. \[4.1\] Up to isomorphism, there are $t$ simple left $H$-modules $S_i$, which are all 1-dimensional and defined by $$g\cdot x=\xi^ix,\ a\cdot x=b\cdot x=0,\ x\in S_i,$$ where $0\leqslant i\leqslant t-1$. In particular, if $t=1$, then $H$ is a local algebra. Let $e_i=\frac{1}{t}\sum_{j=0}^{t-1}(\xi^{-i2^s}g^{2^s})^j$. Then $\{e_0, e_1, \cdots, e_{t-1}\}$ is a set of primitive orthogonal idempotents in $kG$ and $g^{2^s}e_i=\xi^{i2^s}e_i$. $\{g^{i_1}e_i|0\leqslant i_1\leqslant 2^s-1\}$ is a basis of $kGe_i$ and dim$kGe_i=2^s$. By Lemma \[2.1\], $g^2\in Z(H)$, the center of $H$. Hence $\{e_0, e_1, \cdots, e_{t-1}\}$ is a set of central orthogonal idempotents of $H$. 
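These identities are elementary but easy to misstate, so a minimal numerical sanity check may be helpful. In the sketch below, complex roots of unity stand in for the algebraically closed field $k$ (an illustrative assumption only; the identities being checked merely use that $t$ is invertible and that $\xi^{2^s}$ is again a primitive $t$-th root of unity), and the small values of $s$ and $t$ are arbitrary.

```python
import numpy as np

# Check that e_i = (1/t) * sum_j (xi^{-i*2^s} g^{2^s})^j are orthogonal idempotents
# summing to 1 and that g^{2^s} e_i = xi^{i*2^s} e_i, with g realized as the cyclic
# shift matrix of the regular representation of the cyclic group of order n = 2^s * t.
p, s, t = 2, 2, 3                       # illustrative parameters with gcd(p, t) = 1
n = p**s * t
xi = np.exp(2j * np.pi / t)             # primitive t-th root of unity (stand-in for k)

g = np.roll(np.eye(n), 1, axis=0)       # cyclic shift: g has order n
gps = np.linalg.matrix_power(g, p**s)   # g^{2^s}

def e(i):
    x = xi**(-i * p**s) * gps
    return sum(np.linalg.matrix_power(x, j) for j in range(t)) / t

E = [e(i) for i in range(t)]
assert np.allclose(sum(E), np.eye(n))                      # e_0 + ... + e_{t-1} = 1
for i in range(t):
    assert np.allclose(E[i] @ E[i], E[i])                  # idempotent
    assert np.allclose(gps @ E[i], xi**(i * p**s) * E[i])  # g^{2^s} e_i = xi^{i 2^s} e_i
    for j in range(i):
        assert np.allclose(E[i] @ E[j], 0)                 # pairwise orthogonal
print("idempotent identities verified for n =", n)
```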
It follows that $H=\bigoplus_{0\leqslant i\leqslant t-1}He_i$ is a decomposition of the left regular module $H$, which is also a decomposition of $H$ into two-sided ideals. By a discussion similar to that for $H(\lambda,\mu)$ in Section \[3\], we have the following result from Lemma \[2.1\] and [@Erd90 Lemma I.3.3]. \[4.2\] Let $\{S_0, S_1, \cdots, S_{t-1}\}$ be the complete set of non-isomorphic simple $H$-modules given in Proposition \[4.1\]. Let $P(S_i)$ denote the projective cover of $S_i$. Then $(1)$ $P(S_i)\cong He_i$, where $0\leqslant i\leqslant t-1$. $(2)$ $H$ has $t$ blocks $He_i$. Moreover, each block $He_i$ is a local symmetric algebra. \[4.3\] Let $0\leqslant i\leqslant t-1$. Let $M$ be an indecomposable module of dimension $2$ over the block $He_i$. Then $M$ has one of the following structures: $(1)$ There is a $k$-basis $\{v_1, v_2\}$ in $M$ such that $g\cdot v_1=\xi^iv_1$, $g\cdot v_2=\xi^iv_2$, $a\cdot v_1=$ $a\cdot v_2=0$, $b\cdot v_1=0$ and $b\cdot v_2=v_1$. $(2)$ There is a $k$-basis $\{v_1,v_2\}$ in $M$ such that $g\cdot v_1=\xi^iv_1$, $g\cdot v_2=\xi^iv_2+v_1$, $a\cdot v_1=a\cdot v_2=0$, $b\cdot v_1=0$ and $b\cdot v_2=\gamma v_1$ for some $\gamma\in k$. Let $M$ be a left $He_i$-module of dimension 2. Then $M$ is a $kGe_i$-module. Since $g^{2^s}e_i=\xi^{i2^s}e_i$, there is a basis $\{v_1, v_2\}$ of $M$ such that the corresponding matrix $G_1$ of the action of $g$ on $M$ is one of the following: $$\begin{matrix} \begin{pmatrix} \xi^i & 0\\0 & \xi^i \end{pmatrix},& \begin{pmatrix} \xi^i & 1\\0 & \xi^i \end{pmatrix}.& \end{matrix}$$ Let $A$ and $B$ denote the matrices of the actions of $a$ and $b$ with respect to the basis $\{v_1, v_2\}$ of $M$, respectively. Assume $G_1=\begin{pmatrix} \xi^i & 1\\0 & \xi^i \end{pmatrix}$. Since $ga=ag$, $AG_1=G_1A$. Hence $A=\begin{pmatrix}\alpha_1 & \alpha_2\\0 & \alpha_1 \end{pmatrix}$ for some $\alpha_1, \alpha_2\in k$. Since $a^2=0$, $A$ is a nilpotent matrix, and so $\alpha_1=0$. From $bg=ga+gb$, one knows that $BG_1=G_1B+G_1A$. Then it follows that $B=\begin{pmatrix} \beta+\xi^i\alpha_2 & \gamma\\0 & \beta \end{pmatrix}$ for some $\beta, \gamma\in k$. Since $b^4=0$, $B$ is a nilpotent matrix. Hence $\beta+\xi^i\alpha_2=\beta=0$, and so $\alpha_2=0$. Thus, $A=0$ and $B=\begin{pmatrix} 0 & \gamma\\0 & 0 \end{pmatrix}$. In this case, $M$ has the structure described in (2). Assume $G_1=\begin{pmatrix} \xi^i & 0\\0 & \xi^i \end{pmatrix}$. Then $G_1B=BG_1$. Since $BG_1=G_1B+G_1A$, $G_1A=0$, and so $A=0$. In this case, under any basis of $M$, the matrix of the action of $g$ is always $G_1$ and $A$ is always 0. If $b\cdot M=0$, then $M\cong S_i\oplus S_i$, a semisimple module. Hence $b\cdot M\neq 0$. So we may choose a basis $\{v_1, v_2\}$ of $M$ such that $B=\begin{pmatrix}0 & 1\\0 & 0\end{pmatrix}$ since $b$ is a nilpotent element of $H$. Thus, $M$ has the structure described in (1). This completes the proof. Let $0\leqslant i\leqslant t-1$. For $\gamma\in k$, let $M(\gamma)$ denote the 2-dimensional module over the block $He_i$ described in Lemma \[4.3\](2). \[4.4\] Let $0\leqslant i\leqslant t-1$ and $\gamma_1, \gamma_2\in k$. Then $M(\gamma_1)\cong M(\gamma_2)$ if and only if $\gamma_1=\gamma_2$. Let $G_1=\begin{pmatrix} \xi^i & 1\\0 & \xi^i \end{pmatrix}$, $B_1=\begin{pmatrix} 0 & \gamma_1\\0 & 0\end{pmatrix}$ and $B_2=\begin{pmatrix} 0 & \gamma_2\\0 & 0\end{pmatrix}$. If $M(\gamma_1)\cong M(\gamma_2)$, there exists an invertible matrix $F\in M_2(k)$ such that $G_1F=FG_1$ and $B_1F=FB_2$.
Then one can get that $\gamma_1=\gamma_2$. \[4.5\] Let $0\leqslant i\leqslant t-1$ and $\beta, \gamma\in k$. Then there is an algebra map $f: H\rightarrow M_2(k)$ defined by $$f(g)=\begin{pmatrix} \xi^i & \beta\\0 & \xi^i\end{pmatrix},\ f(a)=\begin{pmatrix} 0 & 0\\0 & 0\end{pmatrix},\ f(b)=\begin{pmatrix} 0 & \gamma\\0 & 0\end{pmatrix}.$$ Let $M(\beta, \gamma)$ denote the corresponding $H$-module. Obviously, $M(\beta, \gamma)$ is a module over the block $He_i$. One can easily check that $M(\beta, \gamma)\cong M(\beta', \gamma')$ if and only if $(\beta, \gamma)=\alpha(\beta', \gamma')$ in $k\times k$ for some $0\neq\alpha\in k$. If $\beta=\gamma=0$, then $M(\beta, \gamma)\cong S_i\oplus S_i$. Otherwise, $M(\beta, \gamma)$ is indecomposable. Let $\{v_1, v_2\}$ be the basis of $M(\beta, \gamma)$ such that the corresponding representing matrices are given as above. Fix a non-zero element $v\in S_i$. Then there is an exact sequence $$0\rightarrow S_i\xrightarrow{\theta} M(\beta, \gamma)\xrightarrow{\eta}S_i\rightarrow 0$$ given by $\theta(v)=v_1$, $\eta(v_1)=0$ and $\eta(v_2)=v$. Denote by $E(\beta, \gamma)$ the extension of $S_i$ by $S_i$. Then a straightforward verification shows that two extensions $E(\beta, \gamma)$ and $E(\beta', \gamma')$ are equivalent if and only if $(\beta, \gamma)=(\beta', \gamma')$. Thus, we have the following corollary. \[4.6\] Let $0\leqslant i, j\leqslant t-1$. Then $${\rm dim(Ext}(S_i,S_j))= \begin{cases} 2, &{\rm if }\ i=j \\0, &{\rm if }\ i\neq j \end{cases}$$ Now we will consider the representation type of $H$. Since $H$ has $t$ blocks $He_i$, we only need to consider the representation type of each block $He_i$. Let $$I=\{1, a, b, ab, ba, b^2, aba, ab^2, bab,b^3, abab, ab^3, bab^2, abab^2, bab^3, abab^3\}.$$ Then by [@Cib09 Theorem 3.1 and Corollary 3.4], $H$ is a $2^{s+4}t$-dimensional graded Hopf algebra with a basis $\{g^jx|0\leqslant j\leqslant 2^{s}t-1, x\in I\}$. Since $g^{2^s}e_i=\xi^{i2^s}e_i$, by a discussion similar to that for $H(\lambda,\mu)$ in Section \[3\], one gets that each block $He_i$ is $2^{s+4}$-dimensional with a basis $\{g^jxe_i|0\leqslant j\leqslant 2^s-1, x\in I\}$, where $0\leqslant i\leqslant t-1$. Note that ${\rm deg}(a)={\rm deg}(b)=1$ in the graded Hopf algebra $H$. \[4.7\] Let $0\leqslant i\leqslant t-1$. Then the block $He_i$ is of wild representation type. Let $0\leqslant i\leqslant t-1$. Then $\{(g-\xi^i)^jxe_i|0\leqslant j\leqslant 2^s-1, x\in I\}$ is also a basis of $He_i$. Let $J$ denote the Jacobson radical of $He_i$. Since $S_i$ is the unique simple module over the block $He_i$, it follows from Proposition \[4.1\] that $J$ has a basis $\{(g-\xi^i)^jxe_i|0\leqslant j\leqslant 2^s-1, x\in I, j+{\rm deg}(x)\geqslant 1\}$. Since $g^{-1}bg=a+b$, we have $b(g-\xi^i)^m=(g-\xi^i)^mb+m(g-\xi^i)^ma+m\xi^i(g-\xi^i)^{m-1}a$ for all $m\geqslant 1$ by induction on $m$. By these relations and the other relations of $H$, it is easy to check that $N={\rm span}\{(g-\xi^i)^jxe_i, ae_i|0\leqslant j\leqslant 2^s-1, x\in I, j+{\rm deg}(x)\geqslant 2\}$ is a left ideal of $He_i$ and $N\subseteq J^2$. Observe that ${\rm dim}(J/N)=2$. By [@Aus95 Proposition III.1.14] and Corollary \[4.6\], we have ${\rm dim}(J/J^2)={\rm dim(Ext}(S_i,S_i))=2$. It follows that $J^2=N$. Let $M={\rm span}\{ (g-\xi^i)^jxe_i, (g-\xi^i)ae_i, abe_i, bae_i |0\leqslant j\leqslant 2^s-1, x\in I, j+{\rm deg}(x)\geqslant 3\}$. Then it is easy to check that $M$ is a left ideal of $He_i$ and $M\subseteq J^3$.
Moreover, one can check that $J^2/M$ is a semisimple $He_i$-module, and so $J^3\subseteq M$. Thus $J^3=M$. Obviously, $J^2/M={\rm span}\{\overline{ae_i}, \overline{(g-\xi^i)^2e_i}, \overline{(g-\xi^i)be_i}, \overline{b^2e_i}\}$, where $\overline{y}=y+M$ in $J^2/M$ for any $y\in J^2$. Note that $(g-\xi^i)^2e_i=0$ when $s=1$. Hence $3\leqslant{\rm dim}(J^2/M)\leqslant 4$. Since $He_i$ is a local symmetric algebra by Theorem \[4.2\] and ${\rm dim}(J^2/J^3)\geqslant3$, it follows from [@Erd90 Lemma III.4] that $He_i$ is of wild representation type. \[4.8\] Assume $p=2$. Then $\mathcal{B}(V)\# kG$ is of wild representation type. In the rest of this section, assume $p>2$ and $\lambda, \mu\in k$. We will consider the representation type of $H(\lambda, \mu)$. Let $\{e_0, e_1, \cdots, e_{t-1}\}$ be the set of central orthogonal primitive idempotents of $H(\lambda, \mu)$ described as in the last section. Then $H(\lambda, \mu)$ has $t$ blocks $H(\lambda, \mu)e_i$. Hence we only need to consider the representation type of each block $H(\lambda, \mu)e_i$. We first consider the case of $\lambda=0$. From Theorems \[3.3\] and \[3.8\], one knows that $H(0, \mu)$ is a basic algebra and that $T_i$ is the unique simple module over the block $H(0, \mu)e_i$, where $0\leqslant i\leqslant t-1$. Moreover, each block $H(0, \mu)e_i$ is a local symmetric algebra by Lemma \[2.3\] and [@Erd90 Lemma I.3.3]. \[4.9\] We have ${\rm dim(Ext}(T_i, T_i))=2$ over each block $H(0,\mu)e_i$, where $0\leqslant i\leqslant t-1$. Let $0\leqslant i\leqslant t-1$. Then it follows from Theorems \[3.3\] and \[3.8\] that there is only one simple module $T_i$ over the block $H(0,\mu)e_i$. Let $\beta, \gamma\in k$. Then there is an algebra map $f: H(0,\mu)\rightarrow M_2(k)$ defined by $$f(g)=\begin{pmatrix} \xi^i & \beta\\0 & \xi^i\end{pmatrix},\ f(a)=\begin{pmatrix} 0 &0\\0 & 0\end{pmatrix},\ f(b)=\begin{pmatrix}\mu^{\frac{1}{p}}(1-\xi^i) & \gamma\\ 0 &\mu^{\frac{1}{p}}(1-\xi^i)\end{pmatrix}.$$ Let $N(\beta, \gamma)$ be the corresponding $H(0, \mu)$-module. Obviously, $N(\beta, \gamma)$ is a module over the block $H(0, \mu)e_i$. An argument similar to that for $H$ shows that any 2-dimensional module over the block $H(0, \mu)e_i$ is isomorphic to some $N(\beta, \gamma)$ and that $N(\beta, \gamma)\cong N(\beta', \gamma')$ if and only if $(\beta, \gamma)=\alpha(\beta', \gamma')$ for some $0\neq \alpha\in k$. By an argument similar to the case $p=2$, it follows that ${\rm dim(Ext}(T_i,T_i))=2$. \[4.10\] Each block $H(0,\mu)e_i$ is of wild representation type, where $0\leqslant i\leqslant t-1$. Let $0\leqslant i\leqslant t-1$. Since $\{g^{i_1}a^{j_1}b^{k_1}e_i|0\leqslant i_1\leqslant p^s-1, 0\leqslant j_1, k_1\leqslant p-1\}$ is a basis of $H(0, \mu)e_i$, $\{(g-\xi^i)^{i_1}a^{j_1}(b-\mu^{\frac{1}{p}}(1-\xi^i))^{k_1}e_i |0\leqslant i_1\leqslant p^s-1, 0\leqslant j_1, k_1\leqslant p-1\}$ is also a basis of $H(0, \mu)e_i$. Let $J$ denote the Jacobson radical of $H(0, \mu)e_i$. Then it follows from Theorem \[3.3\] that the set $$\left\{(g-\xi^i)^{i_1}a^{j_1}(b-\mu^{\frac{1}{p}}(1-\xi^i))^{k_1}e_i \left|\begin{array}{l} 0\leqslant i_1\leqslant p^s-1,\\ 0\leqslant j_1, k_1\leqslant p-1,\\ 1\leqslant i_1+j_1+k_1\\ \end{array}\right.\right\}$$ is a basis of $J$.
From $g^{-1}bg=a+b$ and $ba=ab+\frac{1}{2}a^2$, one can easily check that $$\begin{array}{ll} &(b-\mu^{\frac{1}{p}}(1-\xi^i))(g-\xi^i)^m\\ =&(g-\xi^i)^m(b-\mu^{\frac{1}{p}}(1-\xi^i))+m(g-\xi^i)^ma+m\xi^i(g-\xi^i)^{m-1}a\\ \end{array}$$ for all $m\geqslant 1$ and $$(b-\mu^{\frac{1}{p}}(1-\xi^i))a=a(b-\mu^{\frac{1}{p}}(1-\xi^i))+\frac{1}{2}a^2.$$ Put $$N={\rm span}\left\{(g-\xi^i)^{i_1}a^{j_1}(b-\mu^{\frac{1}{p}}(1-\xi^i))^{k_1}e_i, ae_i \left|\begin{array}{l}0\leqslant i_1\leqslant p^s-1,\\ 0\leqslant j_1, k_1\leqslant p-1,\\ 2\leqslant i_1+j_1+k_1\\ \end{array}\right.\right\}.$$ Then from the first one of the above two equalities, one can see that $N\subseteq J^2$. Obviously, ${\rm dim}(J/N)=2$. By [@Aus95 Proposition III.1.14] and Lemma \[4.9\], we have ${\rm dim}(J/J^2)={\rm dim(Ext}(T_i, T_i))=2$. It follows that $J^2=N$. Now put $$M={\rm span}\left\{\begin{array}{l} (g-\xi^i)^{i_1}a^{j_1}(b-\mu^{\frac{1}{p}}(1-\xi^i))^{k_1}e_i,\\ (g-\xi^i)ae_i, a^2e_i, a(b-\mu^{\frac{1}{p}}(1-\xi^i))e_i\\ \end{array} \left|\begin{array}{l}0\leqslant i_1\leqslant p^s-1,\\ 0\leqslant j_1, k_1\leqslant p-1,\\ 3\leqslant i_1+j_1+k_1\\ \end{array}\right.\right\}.$$ Since $J^2=N$, $M\subseteq J^3$. Now from the two equalities given above and $ga=ag$, one can check that $M$ is a left ideal of $H(0,\mu)e_i$ and $J^2/M$ is a semisimple module over $H(0, \mu)e_i$. Hence $J^3\subseteq M$ and so $J^3=M$. Obviously, $$J^2/M={\rm span}\left\{\overline{(g-\xi^i)^2e_i}, \overline{ae_i}, \overline{(g-\xi^i)(b-\mu^{\frac{1}{p}}(1-\xi^i))e_i}, \overline{(b-\mu^{\frac{1}{p}}(1-\xi^i))^2e_i}\right\}$$ is 4-dimensional, where $\overline{x}=x+M$ in $J^2/M$ for any $x\in J^2$. Hence ${\rm dim}(J^2/J^3)=4$. Since $H(0,\mu)e_i$ is a local symmetric algebra, it follows from [@Erd90 Lemma III.4] that $H(0, \mu)e_i$ is of wild representation type. Now we consider the case of $\lambda\neq 0$. We only consider the representation type of the block $H(1, \mu)e_0$. From Theorems \[3.4\] and \[3.9\], the trivial module $S_0$ is the unique simple module over the block $H(1, \mu)e_0$, and $H(1, \mu)e_0$ is a basic and local algebra. Furthermore, $H(1, \mu)e_0$ is a symmetric algebra by Lemma \[2.3\] and [@Erd90 Lemma I.3.3]. Then by setting $i=0$ in the proofs of Lemma \[4.9\] and Theorem \[4.10\], one can get the following Lemma \[4.11\] and Theorem \[4.12\] \[4.11\] We have ${\rm dim(Ext}(S_0,S_0))=2$ over the block $H(1,\mu)e_0$. \[4.12\] The block $H(1, \mu)e_0$ is of wild representation type. For the case of $t>1$, we don’t know whether $H(1, \mu)e_i$ is of tame or wild representation type, where $1\leqslant i\leqslant t-1$. Summarizing the above discussion, we have the following result. \[4.13\] Assume $p>2$. Then $H(\lambda,\mu)$ is of wild representation type for any $\lambda, \mu\in k$. In particular, $\mathcal{B}(V)\# kG$ is of wild representation type. **Acknowledgment** {#acknowledgment .unnumbered} ================== This work is supported by NSF of China, No. 10771183, and supported by Doctorate foundation, No. 200811170001, Ministry of Education of China. [99]{} Andruskiewitsch, N., Schneider, H. -J. (1998). Lifting of quantum linear spaces and pointed Hopf algebras of order $p^3$. [*J. Algebra*]{} 209:658-691. Andruskiewitsch, N., Schneider, H. -J. (2002). Pointed Hopf algebras. New directions in Hopf algebras. Math. Sci. Res. Inst. Publ. Vol. 43. pp. 1-68. Cambridge: Cambridge Univ. Press. Andruskiewitsch, N., Fantino, F. (2008). New techniques for pointed Hopf algebras. Preprint, arXiv:0803.3486v2. 
Auslander, M., Reiten, I., Smal[Ø]{}, S. (1995). [*Representation theory of Artin algebras*]{}. Cambridge: Cambridge Univ. Press. Chen, H. X. (2000). Irreducible representations of a class of quantum doubles. [*J. Algebra*]{} 225:391-409. Cibils, C., Lauve, A., Witherspoon, S. (2009). Hopf quivers and Nichols algebras in positive characteristic. [*Proc. Amer. Math. Soc.*]{} 137:4029-4041. Dong, J. C., Chen, H. X. The representations of the quantum double of dihedral groups. Algebra Colloq., to appear. Erdmann, K. (1990). Blocks of tame representation type and related algebras. Lecture Notes in Mathematics, 1428, Springer. Iglesias, A. G. (2010). Representations of finite dimensional pointed Hopf algebras over $\mathbb{S}_3$. [*Rev. Un. Mat. Argentina*]{} 51:51-77. Lorenz, M. (1997). Representations of finite-dimensional Hopf algebras. [*J. Algebra*]{} 188:476-505. Montgomery, S. (1993). [*Hopf algebras and their actions on rings*]{}. CBMS Series in Math. Vol. 82. Providence, RI: Amer. Math. Soc. Oberst, U., Schneider, H.-J. (1973). Über Untergruppen endlicher algebraischer Gruppen. [*Manuscripta Math.*]{} 8:217-241.
1
--- abstract: 'The issue of giving an explicit description of the flow of information concerning the time of bankruptcy of a company (or a state) arriving on the market is tackled by defining a bridge process starting from zero and conditioned to be equal to zero when the default occurs. This enables to catch some empirical facts on the behavior of financial markets: when the bridge process is away from zero, investors can be relatively sure that the default will not happen immediately. However, when the information process is close to zero, market agents should be aware of the risk of an imminent default. In this sense the bridge process leaks information concerning the default before it occurs. The objective of this first paper on Brownian bridges on stochastic intervals is to provide the basic properties of these processes.' author: - 'M. L. Bedini, R. Buckdahn, H.-J. Engelbert' date: 'July 13, 2015' title: Brownian Bridges on Random Intervals --- \[sec:Introduction\]Introduction ================================ Motivated by the problem of modeling the information concerning the default time of a financial company, we propose a new approach to credit-risk. The problem is to give a mathematical model for the flow of information about the time at which the bankruptcy of a company (or state) occurs. The time at which this crucial event takes place is called *default time* and it is modeled by a strictly positive random variable $\tau$. We tackle the problem by defining the process $\beta=(\beta_{t},\, t\geq0)$, a Brownian bridge between 0 and 0 on the random time interval $[0,\tau]$: $$\beta_{t}:=W_{t}-\frac{t}{\tau\vee t}W_{\tau\vee t},\quad t\geq 0\,, \label{eq:EQ0}$$ where $W=(W_{t},\, t\geq0)$ is a Brownian motion independent of $\tau$. Since we are going to model the information about $\tau$ with such a bridge process, we call $\beta$ the *information process*. The filtration $\mathbb{F}^{\beta}=(\mathcal{F}_{t}^{\beta})_{t\geq0}$ generated by the information process provides partial information on the default time before it occurs. The intuitive idea is that when market agents observe $\beta_{t}$ away from 0, they know that the default will not occur immediately; on the other hand, the period of fear of an imminent default corresponds to the situation in which $\beta_{t}$ is close to 0. In our approach the Bayes formula is of particular importance since it allows to infer the a posteriori probability distribution of the default time $\tau$ given the information carried by $\mathcal{F}_{t}^{\beta}$, for $t\geq0$. Two classes of credit risk models have greatly inspired our work: the information-based approach and the reduced-form models. The first one has been developed in [@key-20] by Brody, Hughston and Macrina. They model the information concerning a random payoff $D_{T}$, paid at some pre-established date $T>0$, with the natural, completed filtration generated by the process $\xi=(\xi_{t},\,0\leq t\leq T)$, defined by $\xi_{t}:=\beta_{t}^{T}+\alpha tD_{T}$, where $\beta^{T}=(\beta_{t}^{T},\,0\leq t\leq T)$ is a standard Brownian bridge on the deterministic time interval $[0,T]$. The process $\beta^{T}$ is independent of $D_{T}$, and $\alpha>0$ is a positive constant. The idea is that the true information, represented by the component $\alpha tD_{T}$, about the final payoff is disturbed by some noisy information (rumors, mispricing, etc.), represented by the bridge process. 
In this model for credit risk the information concerning the random cash-flow $D_{T}$ is modeled explicitly but the default time $\tau$ of the company is not. On the other hand, in the models following the reduced-form approach for credit risk the information on the default time is modeled by the natural, completed filtration generated by the single-jump process $H=(H_{t}:=\mathbb{I}_{\{ \tau\leq t\} },\, t\geq0)$ occurring at $\tau$. We refer to the book [@key-21] of Bielecki and Rutkowski and the series of papers [@key-23; @key-24; @key-25] of Jeanblanc and Le Cam, among many other works on the reduced-form approach to credit-risk. Besides the advantages that have made this approach well-known, there is the poor structure of the information concerning $\tau$: people just know if the default has occurred ($H_{t}=1$) or not ($H_{t}=0$). Financial reality is often more complex than this: market agents have indeed more information and there are actually periods in which the default is more likely to happen than in others. We try to reconcile those two approaches considering that in our model the information is carried by the process $\beta$ given by (\[eq:EQ0\]). This paper is organized as follows. In Section 2 we provide some preliminary facts that are used throughout the paper. In Section 3 we give the definition of a bridge $\beta$ of random length $\tau$. We show that $\tau$ is a stopping time with respect to the natural, completed filtration generated by the process $\beta$. Also, we establish that $\beta$ is a Markov process with respect to the filtration $\mathbb{F}^{0}:=(\sigma(\beta_{s},\,0\leq s\leq t))_{t\geq0}$. In Section 4 we derive Bayesian estimates of the distribution of the default time $\tau$. Thereafter, in Section 5 we extend the results obtained in the previous section to more general situations. Section 6 is devoted to the proof of the Markov property of the information process with respect to the filtration $\mathbb{F}^{\beta}$, the smallest filtration which contains $\mathbb{F}^0$ and satisfies the usual conditions of right-continuity and completeness. In Section 7 we show that the process $\beta$ is a semimartingale and we provide its decomposition. Finally, in Section 8 we consider an application to Mathematical Finance concerned with the problem of pricing a Credit Default Swap in an elementary market model. The paper closes with the Appendix where, for the sake of easy reference, we recall the Bayes Theorem, and we prove a slight extension of the so-called innovation lemma (used in Section \[sec:Semimartingale-decomposition\] for the semimartingale decomposition of the information process $\beta$) and, finally, an auxiliary result. This paper is based on the thesis [@key-10] and the main objective of it is to introduce and study the information process $\beta$ and to provide its basic properties. Other topics of the thesis [@key-10] are the enlargement of the filtration $\mathbb{F}^{\beta}$ with a reference filtration $\mathbb{F}$, the classification of the default time $\tau$ (with respect to the filtration $\mathbb{F}^{\beta}$ and with respect to the enlarged filtration) and further applications to Mathematical Finance. Preliminaries ============= We start by recalling some basic results on Brownian bridges and properties of conditional expectations that will be used in the sequel. The interested reader may find further discussions on Brownian bridges, among others, in the books [@key-1] of Karatzas and Shreve or [@key-3] of Revuz and Yor . 
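Before fixing notation, the following minimal simulation sketch illustrates the qualitative behaviour of the information process $\beta$ described in the Introduction: its path vanishes from $\tau$ onwards and is typically away from zero at any fixed time before $\tau$. The exponential law chosen for $\tau$ and all numerical parameters are illustrative assumptions only and are not part of the model.

```python
import numpy as np

# Simulation of beta_t = W_t - (t / (tau v t)) * W_{tau v t} on a time grid.
# The law of tau (here Exp(1)) is an arbitrary illustrative choice; the construction
# only requires a strictly positive tau independent of the Brownian motion W.
rng = np.random.default_rng(0)
tau = rng.exponential(scale=1.0)
T, N = 2.0 * tau, 4000                          # grid over [0, 2*tau], so tau lies inside
t = np.linspace(0.0, T, N + 1)
dt = T / N
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), N))])

k_tau = int(np.ceil(tau / dt))                  # first grid index >= tau (grid proxy for tau)
beta = W - (t / np.maximum(t[k_tau], t)) * W[np.maximum(k_tau, np.arange(N + 1))]

assert np.allclose(beta[k_tau:], 0.0)           # beta vanishes from tau onwards
print("tau =", round(float(tau), 3),
      " max |beta| on [0, tau):", round(float(np.abs(beta[:k_tau]).max()), 3))
```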
As usual, the set of natural numbers is denoted by $\mathbb{N}$ and the set of real numbers by $\mathbb{R}$. If $A\subseteq\mathbb{R}$, then the notation $A_{+}$ stands for $A_{+}:=A\cap\{ x\in\mathbb{R}:x\geq0\}$. If $E$ is a topological space, then the Borel $\sigma$-algebra over $E$ will be denoted by $\mathcal{B}(E)$. If $A$ is a set, its indicator function will be denoted by $\mathbb{I}_{A}$. Let $(\Omega,\,\mathcal{F},\,\mathbf{P})$ be a complete probability space. By $\mathcal{N}_{P}$ we denote the collection of $\mathbf{P}$-null sets. If $X$ is a random variable, the symbol $\sigma(X)$ will denote the $\sigma$-algebra generated by $X$. Let $W:(\Omega,\mathcal{F})\rightarrow(C,\mathcal{C})$ be a map from $\Omega$ into the space of continuous real-valued functions defined on $\mathbb{R}_{+}$ endowed with the $\sigma$-algebra generated by the canonical process, so that with each $\omega\in\Omega$ we associate the continuous function $W(\omega)=(W_{t}(\omega),\, t\geq0)$. Given a strictly positive real number $r$, the function $\beta^{r}:\Omega\rightarrow C$ defined by $$\beta_{t}^{r}\left(\omega\right):=W_{t}\left(\omega\right)-\frac{t}{r\vee t}W_{r\vee t}\left(\omega\right), \ t\geq0, \ \omega\in\Omega\,,$$ is called *bridge of length $r$ associated with $W$*. If $W$ is a Brownian motion on the probability space $(\Omega,\mathcal{F},\mathbf{P})$, then the process $\beta^{r}$ is called *Brownian bridge of length $r$*. We have the following fact concerning the measurability of the process $\beta^{r}$. The map $(r,t,\omega)\mapsto\beta_{t}^{r}(\omega)$ of $((0,+\infty)\times\mathbb{R}_{+}\times\Omega,\,\mathcal{B}((0,+\infty))\otimes\mathcal{B}(\mathbb{R}_{+})\otimes\mathcal{F})$ into $(\mathbb{R},\,\mathcal{B}(\mathbb{R}))$ is measurable. In particular, the $t$-section of $(r,t,\omega)\rightarrow\beta_{t}^{r}(\omega)$: $(r,\omega)\mapsto\beta_{t}^{r}(\omega)$ is measurable with respect to the $\sigma$-algebra $\mathcal{B}((0,+\infty))\otimes\mathcal{F}$, for all $t\geq0$. From the definition of $\beta^{r}$ we have that (i) the map $\omega\rightarrow\beta_{t}^{r}(\omega)$ is measurable for all $r>0$ and $t\geq0$ and (ii) the map $(r,t)\rightarrow\beta_{t}^{r}(\omega)$ is continuous for all $\omega\in\Omega$. It now suffices to proceed with the discretization of the parameter $(r,t)$ in order to define piecewise constant and measurable functions converging pointwise to $(r,t,\omega)\mapsto\beta_{t}^{r}(\omega)$ and to use standard results on the passage to the limit of sequences of measurable functions. The assertion of the lemma then follows immediately. \[cor:measyBetaR\] The map $(r,\omega)\rightarrow\beta_{\cdot}^{r}(\omega)$ of $((0,+\infty)\times\Omega,\,\mathcal{B}((0,+\infty))\otimes\mathcal{F})$ into $(C,\,\mathcal{C})$ is measurable. Note that $\beta^{r}$ is just as the $r$-section of the map $(r,t,\omega)\rightarrow\beta_{t}^{r}(\omega)$. If the process $W$ is a Brownian motion on $(\Omega,\mathcal{F},\mathbf{P})$, the process $\beta^{r}$ is just a Brownian bridge which is identically equal to 0 on the time interval $[r,\,+\infty)$. If we denote by $p(t,x,y),\, x\in\mathbb{R}$, the Gaussian density with variance $t$ and mean $y$, then the function $\varphi_{t}(r,\cdot)$ given by $$\varphi_{t}\left(r,x\right):=\begin{cases} p\left(\frac{t\left(r-t\right)}{r},x,0\right), & 0<t<r, \ x\in\mathbb{R},\\ 0, & r\leq t, \ x\in\mathbb{R}, \end{cases} \label{eq:bbdensity}$$ is equal to the density of $\beta_{t}^{r}$ for $0<t<r$. Furthermore we have the following properties: 1. 
(Expectation)  $\mathbf{E}\left[\beta_{t}^{r}\right]=0, \ t\geq0.$ 2. (Covariance)  $\mathbf{E}\left[\beta_{t}^{r}\beta_{s}^{r}\right]=s\wedge t\wedge r-\frac{(s\wedge r)(t\wedge r)}{r}, \ s,t\geq 0$. 3. (Conditional expectation)   For $t\in[0,r)$ and $t<u$ we have: If $g(\beta^r _u)$ is integrable, then $$\begin{aligned} \lefteqn{\mathbf{E}\left[g\left(\beta_{u}^{r}\right)|\beta_{t}^{r}=x\right]}\nonumber\\ &=&\begin{cases} {\displaystyle \int_{\mathbb{R}}}g\left(y\right)p\left(\frac{r-u}{r-t}\left(u-t\right),\, y,\,\frac{r-u}{r-t}x\right)dy, & t<u<r, \ x\in\mathbb{R},\\ g\left(0\right), & r\leq u, \ x\in\mathbb{R}. \end{cases}\label{eq:cond exp ext bb}\end{aligned}$$ 4. $\beta^{r}$ is a Markov process with respect to its completed natural filtration. 5.  $\beta_{t}^{r}=\beta_{t\wedge r}^{r}, \ t\geq0$. \[BMr\] The process $B^{r}=(B_{t}^{r},\, t\geq0)$ given by $$B_{t}^{r}:=\beta_{t}^{r}+\int_{0}^{t}\frac{\beta_{s}^{r}}{r-s}\, ds\label{eq:BMfrom_extBB}$$ is a Brownian motion stopped at the (deterministic) stopping time $r$, with respect to the completed natural filtration of $\beta^{r}$. Now we substitute the fixed time $r$ by a random time $\tau$. Let $\tau$ be a strictly positive random variable on $(\Omega,\mathcal{F},\mathbf{P})$. The object which we are interested in is given by the composition of the two mappings $(r,t,\omega)\mapsto\beta_{t}^{r}(\omega)$ and $(t,\omega)\mapsto(\tau(\omega),t,\omega)$. We have the following definition: The map $\beta=(\beta_{t},\, t\geq0):\,(\Omega,\,\mathcal{F})\rightarrow(C,\,\mathcal{C})$ is defined by $$\beta_{t}\left(\omega\right):=\beta_{t}^{\tau\left(\omega\right)}\left(\omega\right),\quad (t,\omega)\in\mathbb{R}_+\times\Omega\,.\label{eq:beta def 0}$$ The map $\beta:\,(\Omega,\,\mathcal{F})\rightarrow(C,\,\mathcal{C})$ is measurable. The map $(r,\omega)\rightarrow\beta^{r}(\omega)$ of $((0,+\infty)\times\Omega,\,\mathcal{B}((0,+ \infty))\otimes\mathcal{F})$ into $(C,\,\mathcal{C})$ is measurable because of Corollary \[cor:measyBetaR\]. By definition, the map $\omega\rightarrow(\tau(\omega),\omega)$ of $(\Omega,\mathcal{F})$ into the measurable space $((0,+\infty)\times\Omega,\,\mathcal{B}((0,+\infty))\otimes\mathcal{F})$ is measurable. The statement of the lemma follows from the measurability of the composition of measurable functions. We now consider the conditional law with respect to $\tau$. As common, by $\mathbf{P}_\tau$ we denote the distribution of $\tau$ with respect to $\mathbf{P}$. \[lem:If–is\]If $\tau$ is independent of the Brownian motion $W$, then for any measurable function $G$ on $\left(\left(0,+\infty\right)\times C,\mathcal{B}\left(\left(0,+\infty\right)\right)\otimes\mathcal{C}\right)$ such that $G(\tau,\, W)$ is integrable it follows that $$\mathbf{E}\left[G\left(\tau,\, W\right)|\sigma\left(\tau\right)\right]\left(\omega\right)=\left(\mathbf{E}\left[G\left(r,W\right)\right]\right)_{r=\tau\left(\omega\right)},\;\mathbf{P}\textrm{-a.s.}$$ See, e.g., [@key-8 Ch. II.7, p. 221]. 
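As a complement to these formulas, the following Monte Carlo sketch checks properties 1, 2 and 5 above on simulated paths of $\beta^{r}$; the grid, the number of paths, the choice $r=2$ and the evaluation times are illustrative assumptions only.

```python
import numpy as np

# Monte Carlo check of E[beta_t^r] = 0, E[beta_s^r beta_t^r] = min(s,t,r) - min(s,r)*min(t,r)/r
# and beta_t^r = beta_{min(t,r)}^r, for the bridge of length r built from W.
rng = np.random.default_rng(1)
r, T, N, M = 2.0, 3.0, 300, 20_000               # bridge length, horizon, grid points, paths
tt = np.linspace(0.0, T, N + 1)
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
W = np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)

k_r = int(round(r / dt))
beta = W - (tt / np.maximum(tt[k_r], tt)) * W[:, np.maximum(k_r, np.arange(N + 1))]

s_idx, t_idx = int(round(0.5 / dt)), int(round(1.5 / dt))    # s = 0.5, t = 1.5 < r
s, t = tt[s_idx], tt[t_idx]
theo_cov = min(s, t, r) - min(s, r) * min(t, r) / r
print("empirical mean      :", round(float(beta[:, t_idx].mean()), 3), "(theory 0)")
print("empirical covariance:", round(float((beta[:, s_idx] * beta[:, t_idx]).mean()), 3),
      "(theory", round(theo_cov, 3), ")")
assert np.allclose(beta[:, k_r:], 0.0)           # property 5: the path is frozen at 0 after r
```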
\[cor:LEM if–is\] If $h:\,\left((0,+\infty)\times C,\mathcal{B}\left(\left(0,+\infty\right)\right)\otimes\mathcal{C}\right)\mapsto\left(\mathbb{R},\mathcal{B}(\mathbb{R})\right)$ is a measurable function such that $\mathbf{E}\left[|h\left(\tau,\,\beta\right)|\right]<+\infty$, then $$\mathbf{E}\left[h\left(\tau,\,\beta\right)|\tau=r\right]=\mathbf{E}\left[h\left(r,\,\beta^{r}\right)\right],\; \mathbf{P}_{\tau}\textrm{-a.s.},$$ or, equivalently, $$\mathbf{E}\left[h\left(\tau,\,\beta\right)|\sigma\left(\tau\right)\right]\left(\omega\right)=\left(\mathbf{E}\left[h\left(r,\,\beta^{r}\right)\right]\right)_{r=\tau\left(\omega\right)},\; \mathbf{P}\textrm{-a.s.}$$ The last two formulas provide a useful connection between the law of Brownian bridges and the conditional law with respect to $\sigma(\tau)$ of a generic functional involving $\tau$, the Brownian motion $W$ and the map $\beta$ defined in (\[eq:beta def 0\]). \[sec:Definition-and-First\] Definition and First Properties ============================================================ Let $W=(W_{t},\, t\geq0)$ be a standard Brownian motion (with respect to its completed natural filtration $\mathbb{F}^{W}=(\mathcal{F}_{t}^{W})_{t\geq0}$) starting from 0. Let $\tau:\,\Omega\rightarrow(0,\infty)$ be a strictly positive random time with distribution function denoted by $F$: $F(t):=\mathbf{P}(\tau\leq t),\: t\geq0$. In this paper the following assumption will always be made. \[ass:assumption1\]The random time $\tau$ and the Brownian motion $W$ are independent. The random time $\tau$ (which will be interpreted as *default time*) is supposed to be not an $\mathbb{F}^{W}$-stopping time. This means that the case in which $\tau$ is equal to a positive constant $T$ will not be considered. \[def:obs by MonPon\]The process $\beta=(\beta_{t},\;t\geq0)$ given by $$\beta_{t}:=W_{t}-\frac{t}{\tau\vee t}W_{\tau\vee t},\; t\geq0,\label{eq:betaDEF1}$$ will be called *information process*. In what follows, we shall use the following filtrations: 1. $\mathbb{F}^{0}=(\mathcal{F}_{t}^{0})_{t\geq0}$, the natural filtration of the process $\beta$:\ $\mathcal{F}_{t}^{0}:=\sigma(\beta_{s},\,0\leq s\leq t),\ t\geq0$. 2. $\mathbb{F}^{P}=(\mathcal{F}_{t}^{P})_{t\geq0}$, the completed natural filtration:\ $\mathcal{F}_{t}^{P}:=\mathcal{F}_{t}^{0}\vee\mathcal{N}_{P},\, t\geq0$. 3. $\mathbb{F}^{\beta}=(\mathcal{F}_{t}^{\beta})_{t\geq0}$, the smallest filtration containing $\mathbb{F}^{0}$ and satisfying the usual hypotheses of right-continuity and completeness: $$\mathcal{F}_{t}^{\beta}:=\sigma\left(\bigcap_{u>t}\mathcal{F}_{u}^{0}\cup\mathcal{N}_{P}\right)=\mathcal{F}_{t+}^{0}\vee\mathcal{N}_{P},\: t\geq0.$$ The aim of this section is to prove that the default time $\tau$ is a stopping time with respect to the completed natural filtration $\mathbb{F}^P$ and that the information process $\beta$ is Markov with respect to $\mathbb{F}^P$. Although it is possible to prove directly that $\tau$ is an $\mathbb{F}^{\beta}$-stopping time (see [@key-10 Lemma 2.5]) we point out the following result which better involves the role of the probability measure $\mathbf{P}$. For two sets $A, B\in\mathcal{F}$ we shall write $A\subseteq B \ \; \mathbf{P}$-a.s. if $\mathbf{P}(B\setminus A)=0$. If $A\subseteq B \ \; \mathbf{P}$-a.s. and $B\subseteq A \ \; \mathbf{P}$-a.s., then we write $A=B \ \; \mathbf{P}$-a.s. \[lem:For-all-,\]For all $t>0$, $\{ \beta_{t}=0\} =\{ \tau\leq t\},\ \mathbf{P}$-a.s. In particular, $\tau$ is a stopping time with respect to the filtration $\mathbb{F}^{P}$. 
Using the formula of total probability and Corollary \[cor:LEM if–is\], we have that $$\begin{aligned} \mathbf{P}\left(\beta_{t}=0,\,\tau>t\right) &=\int_{\left(t,+\infty\right)}\mathbf{P}\left(\beta_{t}=0|\tau=r\right)dF(r)\\ &=\int_{\left(t,+\infty\right)}\mathbf{P}\left(\beta_{t}^{r}=0\right)dF\left(r\right)=0\,,\end{aligned}$$ where the latter equality holds because the random variable $\beta_{t}^{r}$ is nondegenerate and normally distributed for $0<t<r$ and, therefore, we obtain $\mathbf{P}(\beta_{t}^{r}=0)=0,\,0<t<r$. Hence, for all $t>0$, $\{ \beta_{t}=0\} \cap\{ \tau>t\} \in\mathcal{N}_{P}$ and, consequently, $\{ \beta_{t}=0\} \subseteq\{ \tau\leq t\} ,\,\mathbf{P}$-a.s. In view of $\{ \tau\leq t\} \subseteq\{ \beta_{t}=0\}$, this yields the first part of the statement: $\{ \tau\leq t\} =\{\beta_{t}=0\} ,\,\mathbf{P}$-a.s. Since $\{ \beta_{t}=0\} \in\mathcal{F}_{t}^{0}$, the event $\{ \tau\leq t\}$ belongs to $\mathcal{F}_{t}^{0}\vee\mathcal{N}_{P}$, for all $t\geq 0$. Hence $\tau$ is a stopping time with respect to $\mathbb{F}^{P}$. $\tau$ is a stopping time with respect to the filtration $\mathbb{F}^{\beta}$. The above proposition states that the process $(\mathbb{I}_{\{ \tau\leq t\} },\, t>0)$ is a modification, under the probability measure $\mathbf{P}$, of the process $(\mathbb{I}_{\{ \beta_{t}=0\} },\, t>0)$. On the other hand, these both processes are not indistinguishable, since the process $\beta$ can hit 0 before $\tau$ (in fact, due to the law of iterated logarithm, this happens uncountably many times $\mathbf{P}$-a.s.). Roughly speaking we can say that in general, if we can observe only $\beta_{t}$, we are sure that $\tau>t$ whenever $\beta_{t}\neq0$. But, if at time $t$ we observe the event $\{ \beta_{t}=0\}$, the information carried by $\mathcal{F}_{t}^{0}$ may be not sufficient to know whether the event $\{ \tau\le t\}$ has occurred or not, we can only say that it occurred $\mathbf{P}$-a.s. We now show that the information process $\beta$ is a Markov process with respect to its natural filtration $\mathbb{F}^{0}$. \[TEO:The-information-process\] The information process $\beta$ is an $\mathbb{F}^{0}$-Markov process: $$\mathbf{E}\left[f\left(\beta_{t+h}\right)|\mathcal{F}_{t}^{0}\right]=\mathbf{E}\left[f\left(\beta_{t+h}\right)|\beta_{t}\right],\quad\mathbf{P}\textrm{-a.s., }t\geq0,\label{eq:statementMK1}$$ for all $h\geq0$ and for every measurable function $f$ which is nonnegative or such that $f(\beta_{t+h})$ is integrable. For $t=0$ the statement is clear. Let us assume $t>0$. On the set $\{ \tau\leq t\}$ we have $\mathbf{P}$-a.s. $$\mathbf{E}\left[f\left(\beta_{t+h}\right)|\mathcal{F}_{t}^{0}\right]\mathbb{I}_{\left\{ \tau\leq t\right\} } =\mathbf{E}\left[f\left(0\right)\mathbb{I}_{\left\{ \tau\leq t\right\} }|\mathcal{F}_{t}^{0}\right] =f\left(0\right)\mathbb{I}_{\left\{ \tau\leq t\right\}} =f\left(0\right)\mathbb{I}_{\left\{ \beta_{t}=0\right\}}\,,$$ which is a measurable function with respect to $\sigma(\beta_{t})$ and hence (\[eq:statementMK1\]) is valid on $\{ \tau\leq t\}$ $\mathbf{P}$-a.s. 
Now we have to prove $$\mathbf{E}\left[f\left(\beta_{t+h}\right)|\mathcal{F}_{t}^{0}\right]\mathbb{I}_{\left\{ \tau>t\right\} }=\mathbf{E}\left[f\left(\beta_{t+h}\right)|\beta_{t}\right]\mathbb{I}_{\left\{ \tau>t\right\} },\;\mathbf{P}\textrm{-a.s.}$$ Both sides being $\mathcal{F}_{t}^{P}$-measurable, it suffices to verify that for all $A\in\mathcal{F}_{t}^{0}$ we have $$\int_{A\cap\left\{ \tau>t\right\} }f\left(\beta_{t+h}\right)d\mathbf{P}=\int_{A\cap\left\{ \tau>t\right\} }\mathbf{E}\left[f\left(\beta_{t+h}\right)|\beta_{t}\right]d\mathbf{P}.\label{eq:equivalentMK1}$$ We observe that $\mathcal{F}_{t}^{0}$ is generated by $$\beta_{t_{n}},\ \xi_n:=\frac{\beta_{t_{n}}}{t_{n}}-\frac{\beta_{t_{n-1}}}{t_{n-1}},\ \xi_{n-1}:=\frac{\beta_{t_{n-1}}}{t_{n-1}}-\frac{\beta_{t_{n-2}}}{t_{n-2}},\ ..., \ \xi_1:=\frac{\beta_{t_{1}}}{t_{1}}-\frac{\beta_{t_{0}}}{t_{0}}\,,$$ $0<t_{0}<t_{1}<...<t_{n}=t$, for $n$ running through $\mathbb{N}$. By the monotone class theorem (see, e.g., [@key-7 I.19, I.21]) it is sufficient to prove (\[eq:equivalentMK1\]) for sets $A$ of the form $A=\{ \beta_{t}\in B,\,\xi_{1}\in B_{1},\ldots,\,\xi_{n}\in B_{n}\}$ with $B, B_{1}, B_{2},\ldots, B_{n}\in \mathcal{B}(\mathbb{R}),\, n\geq1$. Let $g:=\mathbb{I}_{B}$ and $L:=\mathbb{I}_{B_{1}\times B_{2}\times\cdots\times B_{n}}$. Then we have the equality $\mathbb{I}_{A}\!=\!g(\beta_{t})L(\xi_{1},\ldots,\,\xi_{n})$ and, setting $$\eta_{k}:=\frac{W_{t_{k}}}{t_{k}}-\frac{W_{t_{k-1}}}{t_{k-1}},\quad k=1,\ldots, n,$$ we have $\xi_{k}=\eta_{k}$ on $\{ \tau>t\}$, $k=1, \ldots, n$. But, for $r>t$, the random vector $(\eta_{1},\ldots, \eta_{n},\,\beta_{t}^{r},\beta_{t+h}^{r})$ is Gaussian and, denoting by $\textrm{cov}(X,Y)$ the covariance between two random variables $X$ and $Y$, we have that $\textrm{cov}(\eta_{k},\beta_{t}^{r})=\textrm{cov}(\eta_{k},\beta_{t+h}^{r})=0,\; k=1, \ldots, n$. Thus $(\eta_{1},...,\eta_{n})$ is independent of $(\beta_{t}^{r},\beta_{t+h}^{r})$ and, with the notation $H(x,y):=f(x)g(y)$, we also have that $L(\eta_{1},...,\eta_{n})$ is independent of $H(\beta_{t+h}^{r},\beta_{t}^{r})$. Now we can state the following lemma which will allow to complete the proof of Theorem \[TEO:The-information-process\]. \[lem:The-random-variables\] Let $H:\, \mathbb{R}^2\mapsto\mathbb{R}$ be a measurable function. Suppose that $H$ is nonnegative or such that $\mathbf{E}\left[\left| H\left(\beta_{t+h},\beta_{t}\right)\right|\right]<+\infty$. 
Then the random variables $H(\beta_{t+h},\beta_{t})\mathbb{I}_{\{ \tau>t\} }$ and $L(\eta_{1},...,\eta_{n})$ are uncorrelated: $$\begin{aligned} \mathbf{E}\left[H\left(\beta_{t+h},\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\}}L\left(\eta_{1},...,\eta_{n}\right)\right] &=\mathbf{E}\left[H\left(\beta_{t+h},\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\} }\right]\mathbf{E}\left[L\left(\eta_{1},...,\eta_{n}\right)\right]\end{aligned}$$ Using the formula of total probability and Lemma \[lem:If–is\] we get $$\begin{aligned} \lefteqn{\mathbf{E}\left[H\left(\beta_{t+h},\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\} }L\left(\eta_{1},...,\eta_{n}\right)\right]}\\ &=\int_{\left(t,+\infty\right)}\mathbf{E}\left[H\left(\beta_{t+h},\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\} }L\left(\eta_{1},...,\eta_{n}\right)|\tau=r\right]dF\left(r\right)\\ &=\int_{\left(t,+\infty\right)}\mathbf{E}\left[H\left(\beta_{t+h}^{r},\beta_{t}^{r}\right)L\left(\eta_{1},...,\eta_{n}\right)\right]dF\left(r\right)\\ &=\int_{\left(t,+\infty\right)}\mathbf{E}\left[H\left(\beta_{t+h}^{r},\beta_{t}^{r}\right)\right]dF\left(r\right)\mathbf{E}\left[L\left(\eta_{1},...,\eta_{n}\right)\right]\\ &=\mathbf{E}\left[H\left(\beta_{t+h},\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\} }\right]\mathbf{E}\left[L\left(\eta_{1},...,\eta_{n}\right)\right].\end{aligned}$$ The proof of the lemma is finished. We now prove (\[eq:equivalentMK1\]) for our special choice of $A$. From Lemma \[lem:The-random-variables\] above we have $$\begin{aligned} \int_{A\cap\left\{ \tau>t\right\} }f\left(\beta_{t+h}\right)d\mathbf{P} & =\mathbf{E}\left[H\left(\beta_{t+h},\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\} }L\left(\xi_{1},...,\xi_{n}\right)\right]\\ & =\mathbf{E}\left[H\left(\beta_{t+h},\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\} }L\left(\eta_{1},...,\eta_{n}\right)\right]\\ & =\mathbf{E}\left[f\left(\beta_{t+h}\right)g\left(\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\} }\right]\mathbf{E}\left[L\left(\eta_{1},...,\eta_{n}\right)\right]\\ & =\mathbf{E}\left[\mathbf{E}\left[f\left(\beta_{t+h}\right)|\beta_{t}\right]g\left(\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\} }\right]\mathbf{E}\left[L\left(\eta_{1},...,\eta_{n}\right)\right]\\ & =\mathbf{E}\left[\mathbf{E}\left[f\left(\beta_{t+h}\right)|\beta_{t}\right]g\left(\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\} }L\left(\eta_{1},...,\eta_{n}\right)\right]\\ & =\mathbf{E}\left[\mathbf{E}\left[f\left(\beta_{t+h}\right)|\beta_{t}\right]g\left(\beta_{t}\right)\mathbb{I}_{\left\{ \tau>t\right\} }L\left(\xi_{1},...,\xi_{n}\right)\right]\\ & =\int_{A\cap\left\{ \tau>t\right\} }\mathbf{E}\left[f\left(\beta_{t+h}\right)|\beta_{t}\right]d\mathbf{P},\end{aligned}$$ which proves that (\[eq:equivalentMK1\]) is true and this ends the proof. Note that the Markov property is trivially extended to the completed filtration $\mathbb{F}^{P}$. \[sec:Conditional-Expectations\] Bayes Estimates of the Default Time $\tau$ =========================================================================== The basic aim of the present section is to provide estimates of the a priori unknown default time $\tau$ based on the observation of the information process $\beta$ up to time $t$. For fixed $t\geq 0$, the observation is represented by the $\sigma$-algebra $\mathcal{F}^P_t$ and, because of the Markov property, the observation of $\beta_t$ would be sufficient. To this end it is natural to exploit the Bayesian approach. 
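To fix ideas, a single trajectory of the observed information can be simulated. The following sketch is purely illustrative and rests on assumptions that are not part of the model: the a priori law $F$ is taken to be exponential and, given $\tau=r$, the bridge is realized from the underlying Brownian motion $W$ as $\beta_{t}^{r}=W_{t\wedge r}-\frac{t\wedge r}{r}\,W_{r}$, a representation which is consistent with the covariance computations used in the proof above.

```python
import numpy as np

rng = np.random.default_rng(0)

def information_path(lam=1.0, T=2.0, n=400):
    """One illustrative path of the information process beta.

    Assumptions (not from the paper): tau ~ Exp(lam), and, given tau = r,
    the bridge is realized as beta_t^r = W_{min(t,r)} - (min(t,r)/r) * W_r
    for an underlying Brownian motion W.
    """
    tau = rng.exponential(1.0 / lam)
    times = np.union1d(np.linspace(0.0, T, n + 1), [tau])   # grid containing tau
    dW = rng.normal(0.0, np.sqrt(np.diff(times)))
    W = np.concatenate(([0.0], np.cumsum(dW)))               # Brownian motion on the grid
    W_tau = W[np.searchsorted(times, tau)]                   # value of W at the default time
    s = np.minimum(times, tau)                               # min(t, tau) on the grid
    W_s = W[np.searchsorted(times, s)]                       # W evaluated at min(t, tau)
    beta = W_s - (s / tau) * W_tau                           # bridge before tau, zero afterwards
    return times, beta, tau

times, beta, tau = information_path()
# Before tau the path is a Brownian bridge pinned at 0 at time tau; from tau
# onwards it stays at 0, in line with the identification {tau <= t} = {beta_t = 0}.
```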
The idea is to use the knowledge gained from the observation of the flow $(\beta_t, \ t\geq 0)$ for updating the initial knowledge on $\tau$. At time 0, the market agents have only *a priori* knowledge about $\tau$, represented by its distribution function $F$. As time is increasing, information concerning the default becomes available. Using the Bayes theorem (recalled in the Appendix for easy reference), the *a posteriori* distribution of $\tau$ based on the observation of $\beta$ up to $t$ can be derived and in this way agents can update their initial knowledge obtaining a sharper estimate of the default time $\tau$. In this section the $\sigma$-algebra generated by the future of $\beta$ at time $t$ is denoted by $\mathcal{F}^P_{t,\infty}:=\sigma(\beta_{s},\, t\leq s\leq+\infty)\vee\mathcal{N}_{P}$. The following is a standard result on Markov processes: \[lem:basic MKV\]Let $X=(X_{t},\, t\geq0)$ be a stochastic process adapted to a filtration $\mathbb{F}=(\mathcal{F}_{t})_{t\geq0}$. Then the following are equivalent: 1. $X$ is Markov with respect to $\mathbb{F}$. 2. For each $t\geq0$ and bounded (or nonnegative) $\sigma(X_{s},\, s\geq t)$-measurable random variable $Y$ one has $$\mathbf{E}\left[Y|\mathcal{F}_{t}\right]=\mathbf{E}\left[Y|X_{t}\right], \; \mathbf{P}\textrm{-a.s.}$$ See [@key-5 Ch. I, Theorem (1.3)]. The next proposition describes the structure of the a posteriori distribution of $\tau$ based on the observation of $\mathcal{F}^P_t$. \[prop:HJ1\] For all $t,u\geq0$, it holds $$\label{prop:HJ1second} \mathbf{P}\left(\tau\leq u|\mathcal{F}_{t}^{P}\right)=\mathbb{I}_{\left\{ \tau\leq t\wedge u\right\} }+\mathbf{P}\left(t<\tau\leq u|\beta_{t}\right)\mathbb{I}_{\left\{ t<\tau\right\} },\;\mathbf{P}\textrm{-a.s.}$$ Obviously, we have $\{ \tau\leq u\} =\{ \tau\leq t\wedge u\} \cup\{ t<\tau\leq u\}$. The first set of the right-hand side of the above decomposition yields the first summand of the statement. Then it suffices to observe that we have the relation $\{ t<\tau\leq u\} =\{ \beta_{t}\neq0,\beta_{u}=0\}$ $\mathbf{P}\textrm{-a.s.}$ where the set on the right-hand side of the above equality belongs to $\mathcal{F}^P_{t,\infty}$. It remains to apply Lemma \[lem:basic MKV\] in order to complete the proof of the statement. Recalling that $F$ denotes the distribution function of $\tau$ and formula (\[eq:bbdensity\]) for the definition of the function $\varphi_{t}(r,x)$ (which is equal to the density of the Brownian bridge $\beta_{t}^{r}$ at time $t<r$), we have the following result which provides the explicit form of the a posteriori distribution of $\tau$ based on the observation of $\beta$ up to $t$. \[lem:PRE-theo-4.3\]Let $t>0$. Then, for all $u>0$, $\mathbf{P}$-a.s. $$\begin{aligned} \mathbf{P}\left(\tau\leq u|\mathcal{F}_{t}^{P}\right) =\mathbb{I}_{\left\{\tau\leq t\wedge u\right\}}+\frac{{\displaystyle \int_{\left(t,u\right]}}\varphi_{t}\left(r,\beta_{t}\right)dF(r)}{{\displaystyle \int_{\left(t,+\infty\right)}}\varphi_{t}\left(v,\beta_{t}\right)dF(v)}\,\mathbb{I}_{\left\{ t<\tau\right\}}\,.\label{eq:condexptau-1}\end{aligned}$$ The result is a consequence of Proposition \[prop:HJ1\] and the Bayes formula (see Corollary \[cor:(Bayes-formula)-a.s.\]). Theorem \[lem:PRE-theo-4.3\] can be extended to functions $g$ on $\mathbb{R}_+$ as it will be stated in the following corollary. \[LEM:prop\_cond\_exp\_tau\]Let $t>0,\; g:\mathbb{R}_{+}\rightarrow\mathbb{R}$ be a Borel function such that $\mathbf{E}\left[|g(\tau)|\right]<+\infty$. 
Then, $\mathbf{P}$-a.s., $$\begin{aligned} \mathbf{E}\left[g\left(\tau\right)|\mathcal{F}_{t}^{P}\right] & =g\left(\tau\right)\mathbb{I}_{\left\{ \tau\leq t\right\} }+\frac{{\displaystyle \int_{\left(t,+\infty\right)}}g\left(r\right)\varphi_{t}\left(r,\beta_{t}\right)dF(r)}{{\displaystyle \int_{\left(t,+\infty\right)}}\varphi_{t}\left(v,\beta_{t}\right)dF(v)}\,\mathbb{I}_{\left\{ t<\tau\right\}}\,. \label{eq:condexptau2}\end{aligned}$$ If the function $g$ is bounded, then the statement follows from Theorem \[lem:PRE-theo-4.3\] applied to simple functions, combined with a monotone class argument. In the general case, $g$ is approximated pointwise by bounded functions and one passes to the limit. We point out that the function $\phi_{t}$ defined by $$\phi_{t}\left(r,x\right):=\frac{\varphi_{t}\left(r,x\right)}{{\displaystyle \int_{\left(t,+\infty\right)}\varphi_{t}\left(v,x\right)dF\left(v\right)}}, \; \left(r,t\right)\in\left(0,+\infty\right)\times\mathbb{R}_+,\; x\in\mathbb{R}\,, \label{eq:densitybeta2}$$ is, for $t<r$, the a posteriori density function of $\tau$ on $\{\tau>t\}$ based on the observation $\beta_t=x$ (see Corollary \[cor:The-conditional-density\]). Then relation (\[eq:condexptau-1\]), representing the a posteriori distribution of $\tau$ based on the observation of $\mathcal{F}_t^P$, can be rewritten as $$\mathbf{P}\left(\tau\leq u|\mathcal{F}_{t}^{P}\right)=\mathbb{I}_{\left\{ \tau\leq t\right\}} +{\displaystyle \int_{\left(t,u\right]}}\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\mathbb{I}_{\left\{ t<\tau\right\} },\;\mathbf{P}\textrm{-a.s.},$$ while (\[eq:condexptau2\]) is equal to the expression $$\mathbf{E}\left[g\left(\tau\right)|\mathcal{F}_{t}^{P}\right]=g\left(\tau\right)\mathbb{I}_{\left\{ \tau\leq t\right\} }+{\displaystyle \int_{\left(t,+\infty\right)}g\left(r\right)}\,\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\mathbb{I}_{\left\{ t<\tau\right\} },\;\mathbf{P}\textrm{-a.s.}$$ Here it is possible to see how the Bayesian estimate of $\tau$ given above provides better knowledge of the default time $\tau$ through the observation of the information process $\beta$ at time $t$. Extensions of the Bayes Estimates ================================= In this section we shall deal with an extension of the Bayes estimates of $\tau$ provided in Section \[sec:Conditional-Expectations\]. Roughly speaking, we shall derive formulas which include both the Bayes estimates discussed in Section \[sec:Conditional-Expectations\] and the prediction of the information process $\beta$ at some time $u$, the latter being related to the Markov property proved in Section \[sec:Definition-and-First\] (see Theorem \[TEO:The-information-process\]). First we will state a lemma that will be used in the proof of the main results of this section. \[lem:AUXlemma\]Let $0\leq t<u$ and $g$ be a measurable function on $(0,+\infty)\times\mathbb{R}$ such that $g(\tau,\beta_{u})$ is integrable. Then it holds $\mathbf{P}$-a.s.
$$\begin{aligned} \mathbf{E}\left[g\left(\tau,\beta_{u}\right)\mathbb{I}_{\left\{ t<\tau\right\} }|\mathcal{F}_{t}^{P}\right] & =\mathbf{E}\left[g\left(\tau,\beta_{u}\right)|\beta_{t}\right]\mathbb{I}_{\left\{ t<\tau\right\} }\\ & =\mathbf{E}\left[\mathbf{E}\left[g\left(\tau,\beta_{u}\right)|\sigma\left(\tau\right)\vee\sigma\left(\beta_{t}\right)\right]|\beta_{t}\right]\mathbb{I}_{\left\{ t<\tau\right\} }\\ & =\mathbf{E}\left[\left(\mathbf{E}\left[g\left(r,\beta_{u}^{r}\right)|\beta_{t}^{r}\right]\right)_{r=\tau}|\beta_{t}\right]\mathbb{I}_{\left\{ t<\tau\right\} }.\end{aligned}$$ It is clear that the first equality holds true due to the fact that $g(\tau,\beta_{u})\mathbb{I}_{\{ t<\tau\} }$ is $\mathcal{F}_{t,\infty}$-measurable and, hence, Lemma \[lem:basic MKV\] can be applied. The second equality is obvious. Let $h$ be an arbitrary bounded Borel function. Using Lemma \[lem:If–is\] we have that $$\begin{aligned} \mathbf{E}\left[g\left(\tau,\beta_{u}\right)h\left(\beta_{t}\right)\mathbb{I}_{\left\{ t<\tau\right\} }\right] & =\int_{\left(t,+\infty\right)}\mathbf{E}\left[g\left(r,\beta_{u}^{r}\right)h\left(\beta_{t}^{r}\right)\right]dF\left(r\right)\\ & =\int_{\left(t,+\infty\right)}\mathbf{E}\left[\mathbf{E}\left[g\left(r,\beta_{u}^{r}\right)h\left(\beta_{t}^{r}\right)|\beta_{t}^{r}\right]\right]dF\left(r\right)\\ & =\int_{\left(t,+\infty\right)}\mathbf{E}\left[\mathbf{E}\left[g\left(r,\beta_{u}^{r}\right)|\beta_{t}^{r}\right]h\left(\beta_{t}^{r}\right)\right]dF\left(r\right)\\ & =\mathbf{E}\left[\mathbf{E}\left[g\left(r,\beta_{u}^{r}\right)|\beta_{t}^{r}\right]_{r=\tau}h\left(\beta_{t}\right)\mathbb{I}_{\left\{ t<\tau\right\} }\right]\,,\end{aligned}$$ that is, $$\mathbf{E}\left[\left(g\left(\tau,\beta_{u}\right)-\mathbf{E}\left[g\left(r,\beta_{u}^{r}\right)|\beta_{t}^{r}\right]_{r=\tau}\right)h\left(\beta_{t}\right)\mathbb{I}_{\left\{ t<\tau\right\} }\right]=0.$$ But $h$ is arbitrary and thus $$\begin{aligned} \mathbf{E}\left[g\left(\tau,\beta_{u}\right)|\beta_{t}\right] & =\mathbf{E}\left[\mathbf{E}\left[g\left(r,\beta_{u}^{r}\right)|\beta_{t}^{r}\right]_{r=\tau}|\beta_{t}\right],\end{aligned}$$ $\mathbf{P}$-a.s. on $\{ t<\tau\} $, for $t<u$. The next theorem is prepared by the following result. \[cor:Let–be-BIS\] Let $t\geq0$ and $g$ be a measurable function such that $g(\tau,\beta_{t})$ is integrable. Then, $\mathbf{P}$-a.s., $$\mathbf{E}\left[g\left(\tau,\beta_{t}\right)|\mathcal{F}_{t}^{\beta}\right]=g\left(\tau,0\right)\mathbb{I}_{\left\{ \tau\leq t\right\} }+\int_{\left(t,+\infty\right)}g\left(r,\beta_{t}\right)\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\mathbb{I}_{\left\{ t<\tau\right\} }\,.$$ The statement is clear on the set $\{ \tau\leq t\} $. On the set $\{ t<\tau\}$, we first prove it for bounded measurable functions $g$ and, by a monotone class argument, it suffices to consider functions $g$ of the form $g(\tau,\beta_{t})=g_{1}(\tau)g_{2}(\beta_{t})$ where $g_1$ and $g_2$ are bounded measurable functions. 
Then we have that $$\begin{aligned} \mathbf{E}\left[g\left(\tau,\beta_{t}\right)|\mathcal{F}_{t}^{\beta}\right]\mathbb{I}_{\left\{ t<\tau\right\} } & =\frac{\int_{\left(t,+\infty\right)}g_{1}\left(r\right)\varphi_{t}\left(r,\beta_{t}\right)dF(r)}{\int_{\left(t,+\infty\right)}\varphi_{t}\left(v,\beta_{t}\right)dF(v)}\mathbb{I}_{\left\{ t<\tau\right\} }\,g_{2}\left(\beta_{t}\right)\\ & =\int_{\left(t,+\infty\right)}g\left(r,\beta_{t}\right)\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\mathbb{I}_{\left\{ t<\tau\right\} }.\end{aligned}$$ If $g$ is a nonnegative measurable function, we can apply the above result to the functions $g_N:=g\wedge N$ and using the monotone convergence theorem we obtain the asserted equality for $g$. Finally, in the general case, the result is true for the positive and negative parts $g^+$ and $g^-$ of $g$ and for $g=g^+-g^-$ the equality follows from the linearity of both sides. We can now state the main result of this section. \[thm:Let–and\] Let $0<t<u$ and $g$ be a measurable function defined on $(0,+\infty)\times\mathbb{R}$ such that $\mathbf{E}\left[|g(\tau,\,\beta_{u})|\right]<+\infty$. Then, $\mathbf{P}$-a.s. $$\begin{aligned} \lefteqn{\mathbf{E}\left[g\left(\tau,\,\beta_{u}\right)|\mathcal{F}_{t}^{P}\right]=\mathbf{E}\left[g\left(\tau,\,\beta_{u}\right)|\beta_t\right]}\\ =&\;g\left(\tau,0\right)\mathbb{I}_{\left\{ \tau\leq t\right\} }+\int_{\left(t,u\right]}g\left(r,0\right)\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\,\mathbb{I}_{\left\{ t<\tau\right\}}\\ &+\!\!\int\limits_{\left(u,+\infty\right)}\!\!\int\limits_{\mathbb{R}}g\left(r,y\right)p\big(\frac{r-u}{r-t}\left(u-t\right),\, y,\,\frac{r-u}{r-t}\beta_{t}\big)dy\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\mathbb{I}_{\left\{ t<\tau\right\} }\,, \label{eq:prova1}\end{aligned}$$ where $p(t,\cdot,y)$ is the Gaussian density with mean $y$ and variance $t$. On the set $\{ \tau\leq t\} $ the statement is a consequence of the fact that $\tau$ is an $\mathbb{F}^{P}$-stopping time and that $\beta_{u}=0$ on $\{\tau\leq t<u\}$. On the set $\{ t<\tau\}$, from Lemma \[lem:AUXlemma\] we have $$\begin{aligned} \mathbf{E}\left[g\left(\tau,\,\beta_{u}\right)|\mathcal{F}_{t}^{P}\right]\mathbb{I}_{\left\{ t<\tau\right\} } & =\mathbf{E}\left[g\left(\tau,\,\beta_{u}\right)|\beta_{t}\right]\mathbb{I}_{\left\{ t<\tau\right\} }\\ & =\mathbf{E}\left[g\left(\tau,0\right)\mathbb{I}_{\left\{ t<\tau\leq u\right\} }|\beta_{t}\right]+\mathbf{E}\left[g(\tau,\beta_{u})\mathbb{I}_{\left\{ u<\tau\right\} }|\beta_{t}\right]\,,\end{aligned}$$ $\mathbf{P}$-a.s. We remark that due to Corollary \[LEM:prop\_cond\_exp\_tau\] $$\mathbf{E}\left[g\left(\tau,0\right)\mathbb{I}_{\left\{ t<\tau\leq u\right\} }|\beta_{t}\right]=\int_{\left(t,u\right]}g\left(r,0\right)\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\,\mathbb{I}_{\{t<\tau\}}\,.$$ On the other hand, from (\[eq:cond exp ext bb\]), for $t<u<r$, $$\begin{aligned} \mathbf{E}\left[g\left(r,\beta_{u}^{r}\right)|\beta_{t}^{r}=x\right]&=\int_{\mathbb{R}}g\left(r,y\right)p\left(\frac{r-u}{r-t}\left(u-t\right),\,y,\,\frac{r-u}{r-t}x\right)dy\nonumber\\ &=:G_{t,u}\left(r,x\right)\,. 
\label{eq:prova3}\end{aligned}$$ It follows from Lemma \[lem:AUXlemma\] and from (\[eq:prova3\]) that, on $\{ t<\tau\}$ $\mathbf{P}$-a.s., $$\begin{aligned} \mathbf{E}\left[g(\tau,\beta_{u})\mathbb{I}_{\left\{ u<\tau\right\} }|\beta_{t}\right] & =\mathbf{E}\left[\left(\mathbf{E}\left[g\left(r,\beta_{u}^{r}\right)\mathbb{I}_{\left\{ u<r\right\} }|\beta_{t}^{r}\right]\right)_{r=\tau}|\beta_{t}\right]\\ & =\mathbf{E}\left[\left(G_{t,u}\left(r,\beta_{t}^{r}\right)\right)_{r=\tau}|\beta_{t}\right]\\ & =\mathbf{E}\left[G_{t,u}\left(\tau,\beta_{t}\right)|\beta_{t}\right]\\ & =\int_{\left(u,+\infty\right)}G_{t,u}\left(r,\beta_{t}\right)\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right),\end{aligned}$$ where the latter equality follows from Proposition \[cor:Let–be-BIS\]. As an immediate consequence of Theorem \[thm:Let–and\], for $t<u$, we can calculate the conditional expectation of $\beta_{u}$ given $\beta_{t}$ by $$\mathbf{E}\left[\beta_{u}|\mathcal{F}_{t}^{P}\right]=\beta_{t}\int_{\left(u,+\infty\right)}\frac{r-u}{r-t}\,\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\mathbb{I}_{\left\{ t<\tau\right\} },\; \mathbf{P}\textrm{-a.s.},$$ and the conditional distribution of $\beta_u$ given $\beta_t$: For $\Gamma\in\mathcal{B}(\mathbb{R})$, $\mathbf{P}$-a.s., $$\begin{gathered} \mathbf{P}\left(\beta_{u}\in\Gamma|\mathcal{F}_{t}^{P}\right)=\mathbb{I}_{\left\{ 0\in\Gamma\right\}}\,\mathbb{I}_{\left\{ \tau\leq t\right\} } +\mathbb{I}_{\left\{ 0\in\Gamma\right\} } \int_{\left(t,u\right]}\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\,\mathbb{I}_{\left\{ t<\tau\right\} }\\ +\int_{\left(u,+\infty\right)}\int_{\Gamma}p\big(\frac{r-u}{r-t}\left(u-t\right),\, y,\,\frac{r-u}{r-t}\beta_{t}\big)dy\,\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\mathbb{I}_{\left\{ t<\tau\right\} }.\label{eq:prova2}\end{gathered}$$ \[rem:Strong\_Inhom\_MK\] From the factor $\left(r-u\right)/\left(r-t\right)$ in (\[eq:prova2\]) we see that the process $\beta$ cannot be a homogeneous Markov process because $\mathbf{P}(\beta_{u}\in\Gamma|\mathcal{F}_{t}^{P})$ does not depend only on $u-t$ and $(\beta_{t},\Gamma)$. Markov Property\[sec:The-Markov-property2\] =========================================== In this section we are going to strengthen Theorem \[TEO:The-information-process\] on the Markov property of the information process $\beta$. We shall prove that $\beta$ is not only a Markov process with respect to the filtration $\mathbb{F}^0$ (or $\mathbb{F}^P$) but also with respect to $\mathbb{F}^\beta$, the smallest filtration containing $\mathbb{F}^0$ and satisfying the usual conditions. As an important consequence, it turns out that the filtrations $\mathbb{F}^P$ and $\mathbb{F}^\beta$ are equal, which amounts to saying that the filtration $\mathbb{F}^P$ is right-continuous. The result is stated in the following theorem. \[thm:The-process–1\]The process $\beta$ is a Markov process with respect to the filtration $\mathbb{F}^{\beta}$, i.e., $$\mathbf{E}\left[g\left(\beta_{u}\right)|\mathcal{F}_{t}^{\beta}\right]=\mathbf{E}\left[g\left(\beta_{u}\right)|\beta_{t}\right],\;\mathbf{P}\textrm{-a.s.}$$ for all $0\leq t<u$ and all measurable functions $g$ such that $g(\beta_{u})$ is integrable. The proof is divided into two main parts. In the first one we prove the statement of the above theorem for $t>0$, while in the second part we consider the case $t=0$. Throughout the proof we can assume without loss of generality that the function $g$ is continuous and bounded by some constant $M\in\mathbb{R}_+$.
For the first part of the proof, let $t>0$ be a strictly positive real number and let $(t_{n})_{n\in\mathbb{N}}$ be a decreasing sequence converging to $t$ from above: $0<t<...<t_{n+1}<t_{n}<u,\: t_{n}\downarrow t$ as $n\rightarrow\infty$. From the definition of $\mathcal{F}_{t}^{\beta}$ we have that $\mathcal{F}_{t}^{\beta}=\bigcap_{n\in\mathbb{N}}\mathcal{F}_{t_{n}}^{P}$ where we recall that $\mathcal{F}_{v}^{P}=\sigma(\beta_{s},\,0\leq s\leq v)\vee\mathcal{N}_{P}$, $v\geq 0$. Consequently, $$\mathbf{E}\left[g\left(\beta_{u}\right)|\mathcal{F}_{t}^{\beta}\right]=\lim_{n\rightarrow\infty}\mathbf{E}\left[g\left(\beta_{u}\right)|\mathcal{F}_{t_{n}}^{P}\right]\,, \; \mathbf{P}\textrm{-a.s.}$$ From Theorem \[thm:Let–and\] and the definition of $G_{t_n,u}$ by we know that, as $t_{n}<u$, $\mathbf{P}$-a.s. $$\begin{aligned} \nonumber\lefteqn{\mathbf{E}\left[g\left(\beta_{u}\right)|\mathcal{F}_{t_{n}}^{P}\right]=\mathbf{E}\left[g\left(\beta_{u}\right)|\beta_{t_{n}}\right]}\\ &=&g\left(0\right)\mathbb{I}_{\left\{ \tau\leq t_{n}\right\} }+g\left(0\right)\int_{\left(t_{n},u\right]}\phi_{t_{n}}\left(r,\beta_{t_{n}}\right)dF\left(r\right)\,\mathbb{I}_{\left\{ t_{n}<\tau\right\} }\label{eq:equiv1}\\ \nonumber&&+\int_{\left(u,+\infty\right)}G_{t_n,u}\left(r,\beta_{t_n}\right)\,\phi_{t_{n}}\left(r,\beta_{t_{n}}\right)dF\left(r\right)\,\mathbb{I}_{\left\{ t_{n}<\tau\right\} }\,.\end{aligned}$$ We want to prove that $$\lim_{n\rightarrow\infty}\mathbf{E}\left[g\left(\beta_{u}\right)|\mathcal{F}_{t_{n}}^{P}\right]=\mathbf{E}\left[g\left(\beta_{u}\right)|\beta_{t}\right],\;\mathbf{P}\textrm{-a.s.}$$ Using (\[eq:equiv1\]) and Theorem \[thm:Let–and\] we see that this latter relation holds true if the following two identities are satisfied, $\mathbf{P}$-a.s. on $\{t<\tau\}$: $$\begin{aligned} \lim_{n\rightarrow\infty}\int_{\left(t_{n},u\right]}\phi_{t_{n}}\left(r,\beta_{t_{n}}\right)dF\left(r\right)&= \int_{\left(t,u\right]}\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\,,\label{eq:star2}\\ \lim_{n\rightarrow\infty}\int\limits_{\left(u,+\infty\right)}\!\!\!G_{t_n,u}\left(r,\beta_{t_{n}}\right)\phi_{t_{n}}\left(r,\beta_{t_{n}}\right)dF\left(r\right)&=\!\!\! \int\limits_{\left(u,+\infty\right)}\!\!\!G_{t,u}\left(r,\beta_{t}\right)\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\,.\label{eq:star3}\end{aligned}$$ Relation (\[eq:star2\]) can be derived as follows. *Proof of (\[eq:star2\]).* Recalling that by (\[eq:densitybeta2\]) $$\phi_{t_{n}}\left(r,\beta_{t_{n}}\right)=\frac{\varphi_{t_{n}}\left(r,\beta_{t_{n}}\right)}{{\displaystyle \int_{\left(t_{n},+\infty\right)}}\varphi_{t_{n}}\left(v,\beta_{t_{n}}\right)dF(v)},\quad t_n<r\,,$$ the integral on the left-hand side of (\[eq:star2\]) can be rewritten as $$\int_{\left(t_{n},u\right]}\phi_{t_{n}}\left(r,\beta_{t_{n}}\right)dF\left(r\right)=\frac{{\displaystyle \int_{\left(t_{n},u\right]}}\varphi_{t_{n}}\left(r,\beta_{t_{n}}\right)dF(r)}{{\displaystyle \int_{\left(t_{n},+\infty\right)}}\varphi_{t_{n}}\left(v,\beta_{t_{n}}\right)dF(v)}\,.$$ The plan is to apply Lebesgue’s bounded convergence theorem to the numerator and the denominator of the above expression. To this end we have to prove $\mathbf{P}$-a.s. pointwise convergence and uniform boundedness of the integrand $\varphi_{t_n}(\cdot, \beta_{t_n})$ $\mathbf{P}$-a.s. We begin by focusing our attention on the function $(t,r,x)\mapsto\varphi_{t}(r,x)$ defined by , which is continuous on $(0,+\infty)\times[0,+\infty)\times\mathbb{R}\backslash\{ 0\}$. 
Setting $\varphi_{t}(+\infty,x):=p(t,x,0)$ for every $t>0$ and $x\in\mathbb{R},$ we see that the resulting function $(t,r,x)\mapsto\varphi_{t}(r,x)$, now defined on $(0,+\infty)\times[0,+\infty]\times\mathbb{R}\backslash\{ 0\}$, is continuous, too. Hence, noting that $\beta_{t}\neq0$ $\mathbf{P}$-a.s. on the set $\{t<\tau\}$, we obtain $\lim_{n\rightarrow \infty}\varphi_{t_{n}}(r,\beta_{t_{n}})=\varphi_{t}(r,\beta_{t})$ $\mathbf{P}$-a.s. on $\{t<\tau\}$, which provides the pointwise convergence. Now we fix $\omega\in\Omega$ such that $t<\tau(\omega)$ and $\beta_{t}(\omega)\neq0$. Then the set $\{ t_{n}:\, n\in\mathbb{N}\} \times(t,+\infty]\times\{ \beta_{t_{n}}(\omega):\, n\in\mathbb{N}\} $ is obviously contained in a compact subset of $(0,+\infty)\times[0,+\infty]\times\mathbb{R}\backslash\{0\}$ (depending on $\omega$). This implies that $\varphi_{t_{n}}(r,\beta_{t_{n}}(\omega))$ is bounded, uniformly in $n$ and $r$, by a constant depending on $\omega$. Using Lebesgue’s bounded convergence theorem we can conclude $$\lim_{n\rightarrow\infty}\int_{\left(t_{n},u\right]}\varphi_{t_{n}}\left(r,\beta_{t_{n}}\left(\omega\right)\right)dF\left(r\right)=\int_{\left(t,u\right]}\varphi_{t}\left(r,\beta_{t}\left(\omega\right)\right)dF\left(r\right).$$ Consequently, we have proven that $$\lim_{n\rightarrow\infty}\int_{\left(t_{n},u\right]}\varphi_{t_{n}}\left(r,\beta_{t_{n}}\right)dF\left(r\right)=\int_{\left(t,u\right]}\varphi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\,\textrm{on }\left\{ t<\tau\right\} \;\mathbf{P}\textrm{-a.s.}$$ This is also valid for the intervals $(t_{n},+\infty)$ and $(t,+\infty)$ (instead of $(t_{n},u]$ and $(t,u]$). Relation (\[eq:star2\]) follows immediately. Let us conclude the first part of the proof by showing that equality (\[eq:star3\]) is indeed true. *Proof of (\[eq:star3\]).* Recall that $t<t_{n}<u$, $n\in\mathbb{N}$. We start by noting that the function $$y\mapsto p\left(\frac{r-u}{r-t_{n}}\left(u-t_{n}\right),\, y,\,\frac{r-u}{r-t_{n}}\beta_{t_{n}}\right)$$ is a density on $\mathbb{R}$ for all $n$. Denoting by $N(\mu,\sigma^{2})$ the normal distribution with expectation $\mu$ and variance $\sigma^{2}$, and using that $t_{n}\downarrow t$ and $\beta_{t_{n}}(\omega)\rightarrow\beta_{t}(\omega)$ by the continuity of $\beta$, it follows that the probability measures $N(\frac{r-u}{r-t_{n}}\beta_{t_{n}}(\omega),\frac{r-u}{r-t_{n}}(u-t_{n}))$ converge weakly to $N(\frac{r-u}{r-t}\beta_{t}(\omega),\frac{r-u}{r-t}(u-t))$. Since the function $g$ is bounded by $M$, we have that $$\begin{aligned} |G_{t_n,u}(r,\beta_{t_n}(\omega))|&=\left|\int_{\mathbb{R}}g\left(y\right)p\left(\frac{r-u}{r-t_{n}}\left(u-t_{n}\right),\, y,\,\frac{r-u}{r-t_{n}}\beta_{t_{n}}\left(\omega\right)\right)dy\right|\nonumber\\ &\leq M<+\infty\,.\label{eq:HJ1}\end{aligned}$$ Furthermore, $$\begin{aligned} \lefteqn{\lim_{n\rightarrow\infty} G_{t_n,u}\left(r,\beta_{t_n}(\omega)\right)\nonumber}\\ &=\lim_{n\rightarrow\infty}\int_{\mathbb{R}}g\left(y\right)p\left(\frac{r-u}{r-t_{n}}\left(u-t_{n}\right),\, y,\,\frac{r-u}{r-t_{n}}\beta_{t_{n}}\left(\omega\right)\right)dy\nonumber\\ &=\int_{\mathbb{R}}g\left(y\right)p\left(\frac{r-u}{r-t}\left(u-t\right),\,y,\,\frac{r-u}{r-t}\beta_{t}\left(\omega\right)\right)dy =G_{t,u}\left(r,\beta_{t}(\omega)\right)\label{eq:HJ2}\end{aligned}$$ immediately follows from the assumption that $g$ is bounded and continuous combined with the weak convergence of the Gaussian measures stated above.
Now (\[eq:star3\]) can be derived using Lebesgue’s bounded convergence theorem and the properties of $G_{t_n,u}(r,\beta_{t_n}(\omega))$ and $\varphi_{t_{n}}(r,\beta_{t_{n}}(\omega))$ (and hence $\phi_{t_{n}}(r,\beta_{t_{n}}(\omega))$) of $\mathbf{P}$-a.s. boundedness and pointwise convergence verified above. The first part of the proof of the theorem is now finished. In the second part of the proof we consider the case $t=0$ which is divided into two steps. In the first step we assume that there exists $\varepsilon>0$ such that $\mathbf{P}(\tau>\varepsilon)=1$ and in the second step we will drop this condition. Let us assume that there exists $\varepsilon>0$ such that $\mathbf{P}(\tau>\varepsilon)=1$. Let $(t_{n})_{n\in\mathbb{N}}$ be a decreasing sequence of strictly positive real numbers converging to $0$: $0<...<t_{n+1}<t_{n},\: t_{n}\downarrow0$ as $n\rightarrow\infty$. Without loss of generality we assume $t_{n}<\varepsilon$ for all $n\in\mathbb{N}$. Then (\[eq:densitybeta2\]) can be rewritten as follows: $$\label{Phi-rewritten} \phi_{t_{n}}\left(r,\beta_{t_{n}}\right)=\frac{\frac{1}{\sqrt{2\pi t_{n}}}\sqrt{\frac{r}{r-t_{n}}}\exp\left[-\frac{\beta_{t_{n}}^{2}r}{2t_{n}\left(r-t_{n}\right)}\right]}{{\displaystyle \int_{\left(\varepsilon,+\infty\right)}}\frac{1}{\sqrt{2\pi t_{n}}}\sqrt{\frac{s}{s-t_{n}}}\exp\left[-\frac{\beta_{t_{n}}^{2}s}{2t_{n}\left(s-t_{n}\right)}\right]dF\left(s\right)}\,\mathbb{I}_{\left(t_n,+\infty\right)}\left(r\right)\,.$$ We have the following auxiliary result: \[lem:On-the-set-2\] Suppose that $\mathbf{P}(\tau>\varepsilon)=1$. Then the function $r\mapsto\phi_{t_{n}}(r,\beta_{t_{n}})$ is $\mathbf{P}_\tau$-a.s. uniformly bounded by some constant $K=K(\varepsilon,\omega)<+\infty$ and, for all $r>0$, $$\lim_{n\rightarrow\infty}\phi_{t_{n}}\left(r,\beta_{t_{n}}\right)=1\,,\;\mathbf{P}\textrm{-a.s.}$$ See Appendix \[sec:Proofs-of-Section\]. Now we turn to the proof of the Markov property of $\beta$ with respect to $\mathbb{F}^\beta$ at $t=0$ under the additional assumption that $\mathbf{P}(\tau>\varepsilon)=1$. 
Since $\mathbf{E}\left[g(\beta_u)|\mathcal{F}^\beta_0\right]= \lim_{n\rightarrow\infty} \mathbf{E}\left[g(\beta_u)|\mathcal{F}^P_{t_n}\right]$, as in the first part, it is sufficient to verify that $$\lim_{n\rightarrow\infty}\mathbf{E}\left[g\left(\beta_{u}\right)|\mathcal{F}^P_{t_{n}}\right]=\mathbf{E}\left[g\left(\beta_{u}\right)|\beta_{0}\right],\;\mathbf{P}\textrm{-a.s.}\label{eq:equiv 2}$$ Using the formula of total probability, Corollary \[cor:LEM if–is\] and , we can calculate $$\begin{aligned} \mathbf{E}\left[g\left(\beta_{u}\right)|\beta_{0}\right]&=\mathbf{E}\left[g\left(\beta_{u}\right)\right]\\ &=g\left(0\right)\,F\left(u\right)+\int_{\left(u,+\infty\right)}\int_{\mathbb{R}}g\left(y\right)p\left(\frac{r-u}{r}u,y,0\right)dy\,dF\left(r\right).\end{aligned}$$ Consequently, recalling (\[eq:equiv1\]) for computing the left-hand side of (\[eq:equiv 2\]) and the definition of $G_{t_n,u}$ and $G_{t,u}$ by and noting the obvious relation $\lim_{n\rightarrow\infty}\mathbb{I}_{\{ \tau\leq t_{n}\} }=0$, it is sufficient to prove the following two equalities, $\mathbf{P}$-a.s.: $$\begin{aligned} \lim_{n\rightarrow\infty}\int_{\left(t_{n},u\right]}\phi_{t_{n}}\left(r,\beta_{t_{n}}\right)dF\left(r\right) &=F\left(u\right)\,,\label{eq:star 2 bis}\\ \lim_{n\rightarrow\infty}\int\limits_{\left(u,+\infty\right)}G_{t_n,u}\left(r,\beta_{t_n}\right)\,\phi_{t_{n}}\left(r,\beta_{t_{n}}\right)dF\left(r\right)&=\int\limits_{\left(u,+\infty\right)}G_{t,u}\left(r,\beta_t\right)\,dF\left(r\right)\,.\label{eq:star 3 bis}\end{aligned}$$ The first equality relies on Lemma \[lem:On-the-set-2\] and Lebesgue’s bounded convergence theorem. For verifying the second equality we use Lebesgue’s bounded convergence theorem combined with Lemma \[lem:On-the-set-2\] and the boundedness and pointwise convergence of the sequence $G_{t_n,u}(\cdot,\beta_{t_n})$ (see (\[eq:HJ1\]) and (\[eq:HJ2\])). We now turn to the general case where $\tau>0$. Let $\varepsilon>0$ be arbitrary, but fixed. In what follows the process $^{\varepsilon}\!\beta=(^{\varepsilon}\!\beta_{t},\, t\geq0)$ will denote the process defined by $^{\varepsilon}\!\beta_{t}:= (\beta_{t}^{r})_{r=\tau\vee\varepsilon}$. The natural filtration of the process $^{\varepsilon}\!\beta$ will be denoted by $\mathbb{F}^\varepsilon=(\mathcal{F}^\varepsilon_{t})_{t\geq0}$, where $\mathcal{F}_{t}^\varepsilon :=\sigma(^{\varepsilon}\!\beta_{s},\,0\leq s\leq t)$. Obviously, for proving the Markov property of $\beta$ with respect to $\mathbb{F}^\beta$ at $t=0$ it is sufficient to show that $\mathcal{F}^\beta_0:=\mathcal{F}^0_{0+}\vee\mathcal{N}_P$ is $\mathbf{P}$-trivial. In order to show that $\mathcal{F}_{0+}^{0}\vee\mathcal{N}_{P}$ is indeed the trivial $\sigma$-algebra, we consider a set $A\in\mathcal{F}_{0+}^{0}$ and we show that if $\mathbf{P}(A)>0$, then $\mathbf{P}(A)=1$. If $\mathbf{P}(A)>0$, then there exists an $\varepsilon>0$ such that $\mathbf{P}(A\cap\{ \tau>\varepsilon\})>0$. Since $A\in\mathcal{F}_{0+}^{0}$, it follows that $A\in\mathcal{F}_{u}^{0}$ for all $0<u\leq\varepsilon$ and, consequently, $A\cap\{ \tau>\varepsilon\} \in\mathcal{F}_{u}^{0}|_{\{ \tau>\varepsilon\} }\vee\mathcal{N}_{P}$ where $\mathcal{F}_{u}^{0}|_{\{ \tau>\varepsilon\}}$ denotes the trace $\sigma$-field of $\mathcal{F}_{u}^{0}$ on the set $\tau>\varepsilon$. 
Moreover, on the set $\{\tau>\varepsilon\} $, $\beta_{t}={}^{\varepsilon}\!\beta_{t}$ for all $t\geq0$, i.e., $\beta$ and $^{\varepsilon}\beta$ generate the same trace filtration on $\{\tau>\varepsilon\}$ and, consequently, $A\cap\{ \tau>\varepsilon\} \in\mathcal{F}_{u}^\varepsilon|_{\{ \tau>\varepsilon\} }\vee\mathcal{N}_{P}$. Hence, there exists a set $A_{u}\in\mathcal{F}_{u}^\varepsilon$ such that $$A\cap\left\{ \tau>\varepsilon\right\} =A_{u}\cap\left\{ \tau>\varepsilon\right\} \label{eq:star_epsilon}$$ $\mathbf{P}$-a.s., for all $0<u\leq\varepsilon$. Replacing $u$ by $1/n$, for $n\in\mathbb{N}$ sufficiently large, and defining $A_{0}:=\limsup_{n\rightarrow\infty}A_{1/n}$, we obtain that $A_{0}\in\mathcal{F}_{0+}^\varepsilon$. We know from the first step of the second part of the proof that the $\sigma$-field $\mathcal{F}_{0+}^\varepsilon$ is $\mathbf{P}$-trivial and, consequently, $\mathbf{P}(A_{0})\in\{ 0,1\}$. However, from equality (\[eq:star\_epsilon\]) we have that, $\mathbf{P}$-a.s., $A\cap\{ \tau>\varepsilon\} =A_{0}\cap\{\tau>\varepsilon\}$. By hypothesis we have $\mathbf{P}(A\cap\{ \tau>\varepsilon\})>0$ and thus $\mathbf{P}(A_{0} \cap\{\tau>\varepsilon\})=\mathbf{P}(A\cap\{ \tau>\varepsilon\})>0$, which implies that $\mathbf{P}(A_{0})=1$ and, consequently, we get that $\mathbf{P}(A\cap\{ \tau>\varepsilon\})=\mathbf{P}(\{ \tau>\varepsilon\})$. Since $\varepsilon$ is arbitrary we can take the limit for $\varepsilon\downarrow0$, and we obtain that $\mathbf{P}(A)=\mathbf{P}(\Omega)=1$, which ends the proof. \[rem:equalitySTANDARDwithCOMPLETED\] The filtration $\mathbb{F}^{P}$ satisfies the usual conditions of right-continuity and completeness. See, e.g., [@key-5 (Ch. I, (8.12))]. As a consequence of Corollary \[rem:equalitySTANDARDwithCOMPLETED\], the filtrations $\mathbb{F}^{\beta}$ and $\mathbb{F}^{P}$ coincide and, in particular, the $\sigma$-algebra $\mathcal{F}^{\beta}_0$ is $\mathbf{P}$-trivial. It is worth to mention that in fact the statement of Corollary \[rem:equalitySTANDARDwithCOMPLETED\] combined with Theorem \[TEO:The-information-process\] is equivalent to the statement of Theorem \[thm:The-process–1\]. \[sec:Semimartingale-decomposition\] Semimartingale Decomposition of the Information Process ============================================================================================ This section deals with the semimartingale decomposition of $\beta$ with respect to $\mathbb{F}^{\beta}$. To begin with, we recall the notion of the optional projection of a general measurable process $X$. Let $X$ be a nonnegative measurable process and $\mathbb{F}$ a filtration satisfying the usual conditions. There exists a unique (up to indistinguishability) $\mathbb{F}$-optional process $^{o}\!X$ such that $$\mathbf{E}\left[X_{T}\,\mathbb{I}_{\left\{ T<+\infty\right\} }|\mathcal{F}_{T}\right]=\,^{o}\!X_{T}\,\mathbb{I}_{\left\{ T<+\infty\right\} },\;\mathbf{P}\textrm{-a.s.}$$ for every $\mathbb{F}$-stopping time $T$. See, for example, [@key-3 Ch. IV, (5.6)]. \(i)   The process $^{o}\!X$ is called the *optional projection* of the nonnegative measurable process $X$ with respect to $\mathbb{F}$. \(ii)  Let $X$ be an arbitrary measurable process. 
Then we define the *optional projection $^{o}\!X$ of $X$ with respect to $\mathbb{F}$* as $$^{o}\!X_{t}\left(\omega\right):=\begin{cases} ^{o}\!X_{t}^{+}\left(\omega\right)-\,^{o}\!X_{t}^{-}\left(\omega\right), & \textrm{if }^o\!X_{t}^{+}\left(\omega\right)\wedge ^o\!\!X_{t}^{-}\left(\omega\right)<+\infty,\\ +\infty, & \textrm{otherwise}, \end{cases}$$ where $^{o}\!X^{+}$ (resp. $^{o}\!X^{-}$) is the optional projection of the positive part $X^+$ (resp. the negative part $X^{-}$) of $X$ with respect to $\mathbb{F}$. \[rem:If–for rem 63\] Let $\xi$ be an arbitrary random variable and $\mathcal{G}$ a sub-$\sigma$-field of $\mathcal{F}$. Then the conditional expectations $\mathbf{E}\left[\xi^+|\mathcal{G}\right]$ and $\mathbf{E}\left[\xi^-|\mathcal{G}\right]$ always exist and in analogy to the above definition we agree to define the conditional expectation $\mathbf{E}\left[\xi|\mathcal{G}\right]$ by $$\begin{aligned} \mathbf{E}\left[\xi|\mathcal{G}\right] &:=\begin{cases} \mathbf{E}\left[\xi^+|\mathcal{G}\right]-\mathbf{E}\left[\xi^-|\mathcal{G}\right], & \textrm{on } \{\mathbf{E}\left[\xi^+|\mathcal{G}\right]\wedge \mathbf{E}\left[\xi^-|\mathcal{G}\right]<+\infty\},\\ +\infty, & \textrm{otherwise}\,. \end{cases}\end{aligned}$$ Now let $X$ be an arbitrary measurable process and $\mathbb{F}$ a filtration satisfying the usual conditions. We emphasize that then for every $\mathbb{F}$-stopping time $T$ we have $$\mathbf{E}\left[X_{T}\,\mathbb{I}_{\left\{ T<+\infty\right\} }|\mathcal{F}_{T}\right]=\,^{o}\!X_{T}\,\mathbb{I}_{\left\{ T<+\infty\right\} },\;\mathbf{P}\textrm{-a.s.}$$ In particular, $\mathbf{E}\left[X_{t}|\mathcal{F}_{t}\right]=\,^{o}\!X_{t}$, $\mathbf{P}\textrm{-a.s.}$, for all $t\geq 0$. Next we are going to state a slight extension of a well-known result from filtering theory which will be used in the sequel. The reader may find useful references, e.g., in [@key-4 Proposition 5.10.3.1], or in [@key-1000 Ch. VI, (8.4)]. First we will introduce the following definition. Let $B$ be a continuous process, $\mathbb{F}$ a filtration and $T$ an $\mathbb{F}$-stopping time. Then $B$ is called an $\mathbb{F}$-Brownian motion stopped at $T$ if $B$ is an $\mathbb{F}$-martingale with square variation process $\langle B,B\rangle$: $\langle B,B\rangle_t=t\wedge T$, $t\geq 0$. \[pro:innovationLEMMA\] Let $\mathbb{F}=(\mathcal{F}_{t})_{t\geq0}$ be a filtration satisfying the usual conditions, $T$ an $\mathbb{F}$-stopping time and $B$ an $\mathbb{F}$-Brownian motion stopped at $T$. Let $Z=(Z_{t},\, t\geq0)$ be an $\mathbb{F}$-optional process such that $$\label{ass:innovation-lemma} \mathbf{E}\left[\int_{0}^{t}\left|Z_{s}\right|\, ds\right]<+\infty,\quad t\geq0.$$ Let the process $X=(X_{t},\, t\geq0)$ be given by $$X_{t}:=\int_{0}^{t}Z_{s}\, ds+B_{t},\quad t\geq 0\,.$$ Denote by $^{o}\!Z$ the optional projection of $Z$ with respect to $\mathbb{F}^{X}=(\mathcal{F}_{t}^{X})_{t\geq0}$. Then the process $b$, $$b_{t}:=X_{t}-\int_{0}^{t}{}^{o}\!Z_{s}\, ds,\quad t\geq0,$$ is an $\mathbb{F}^{X}$-Brownian motion stopped at $T$. For the sake of easy reference a proof of this result is provided in Appendix \[sec:Proof of Innovation lemma\]. In the remainder of this section we shall make use of the filtration $\mathbb{G}=(\mathcal{G}_{t})_{t\geq0}$ defined as $$\mathcal{G}_{t}:=\bigcap_{u>t}\mathcal{F}_{u}^{\beta}\vee\sigma\left(\tau\right),\quad t\geq 0\,,\label{eq:FF D}$$ which is equal to the initial enlargement of the filtration $\mathbb{F}^{\beta}$ by the $\sigma$-algebra $\sigma(\tau)$. 
In the sequel, the process $Z=(Z_{t},\, t\geq0)$ is defined by $$Z_{t}:=\frac{\beta_{t}}{\tau-t}\,\mathbb{I}_{\{t<\tau\}}\,,\quad t\geq 0\,. \label{eq:processo Z}$$ The following auxiliary results will be used to prove the semimartingale property of the process $\beta$. \[lem:sqrt-tau-1\] We have $$\mathbf{E}\left[\int_{0}^{t}\left|Z_{s}\right|\, ds\right]<+\infty \mbox{ for all } t\geq0\,.$$ Using the formula of total probability, Corollary \[cor:LEM if–is\] and , we can calculate $$\begin{aligned} \mathbf{E}\left[\int_{0}^{t}\left|Z_{s}\right|\, ds\right] & = \int_0^{+\infty}\int_0^{t\wedge r}E\left|\beta^r_s\right|/\left(r-s\right)\,ds\,dF\left(r\right)\\ &=(2/\pi)^{1/2}\int_{0}^{+\infty}r^{-1/2}\int_{0}^{t\wedge r}s^{1/2}\left(r-s\right)^{-1/2}\,ds\,dF(r).\end{aligned}$$ The outer integral on the right-hand side of the above expression can be split into two integrals, the first one over $(0,t]$ and the second one over $(t,+\infty)$. For the first integral we see that $$\begin{aligned} \lefteqn{\int\limits_{(0,t]}r^{-1/2}\int_{0}^{t\wedge r}s^{1/2}\left(r-s\right)^{-1/2}\,ds\,dF(r)}\\ & \leq\int\limits_{(0,t]}\int_{0}^{t\wedge r}\left(r-s\right)^{-1/2}\,ds\,dF(r)=\int\limits_{(0,t]}2\,r^{1/2}\, dF\left(r\right)\leq2\,t^{1/2}<+\infty\,.\end{aligned}$$ The second integral can be estimated as follows: $$\begin{aligned} \lefteqn{\int_{\left(t,+\infty\right)}r^{-1/2}\int_{0}^{t\wedge r}s^{1/2}\left(r-s\right)^{-1/2}\,ds\,dF(r)}\\ &\leq& t^{1/2}\int_{\left(t,+\infty\right)}r^{-1/2}\int_{0}^{t}\left(r-s\right)^{-1/2}\, ds\,dF(r)\\ &\leq& t^{1/2}\int_{\left(t,+\infty\right)}r^{-1/2}\,2\,r^{1/2}\,dF\left(r\right)\leq2\,t^{1/2}<+\infty\,.\end{aligned}$$ The statement of the lemma is now proved. \[cor:The-process-Z\] $Z$ defined by (\[eq:processo Z\]) is integrable with respect to the Lebesgue measure $\mathbf{P}$-a.s.: $$\mathbf{P}\left(\int_{0}^{t}\left|Z_{s}\right|\, ds<+\infty\right)=1, \textrm{ for all } t\geq0\,.$$ \[Z-modification\] In view of Corollary \[cor:The-process-Z\], there can be found a process $\overline{Z}$ which is indistinguishable from $Z$ such that $\int_0^t\left|\overline{Z}_s\right|\, ds<+\infty$ for all $t\geq 0$ *everywhere* (and not only $\mathbf{P}$-a.s.). Without loss of generality we can assume that $Z$ has this property. If this would not be the case we could modify the paths of $Z$ on a negligible set. By this modification, the optional projection $^o\!Z$ with respect to any filtration $\mathbb{F}$ will stay in the same class of indistinguishable processes. We can also modify $\beta$ putting $\beta=0$ on the negligible set where the above integrals are not finite for all $t$. In this way the desired property for $Z$ would be fulfilled automatically. \[rem:We-make-the QWERTY\] (i)  The process $Z$ is optional with respect to the filtration $\mathbb{G}$ because $Z$ is right-continuous and $\mathbb{G}$-adapted. \(ii)  Let $l_+$ be the Lebesgue measure on $\mathbb{R}_+$. Because of Lemma \[lem:sqrt-tau-1\], using Fubini’s theorem, there is a measurable subset $\Lambda$ of $\mathbb{R}_+$ such that $l_+(\mathbb{R}_+\setminus\Lambda)=0$ and $E[|Z_t|]<+\infty$ for all $t\in\Lambda$. For later use, we fix a set $\Lambda$ with these properties. \(iii)  Formulas obtained in Section \[sec:Conditional-Expectations\] allow to compute the optional projection $^o\!Z$ of $Z$ with respect to the filtration $\mathbb{F}^\beta$ on the set $\Lambda$: For all $t\in\Lambda$ we have $\mathbf{P}$-a.s. 
$$\begin{aligned} ^o\!Z_t&=\mathbf{E}\left[Z_{t}|\mathcal{F}_{t}^{\beta}\right] =\mathbf{E}\left[\frac{\beta_{t}}{\tau-t}\,\mathbb{I}_{\left\{ t<\tau\right\} }|\beta_{t}\right]\\ &=\beta_{t}\int_{\left(t,+\infty\right)}\frac{1}{r-t}\phi_{t}\left(r,\beta_{t}\right)dF\left(r\right)\,\mathbb{I}_{\left\{ t<\tau\right\} }\,, \end{aligned}$$ where the first equality follows from Remark \[rem:If–for rem 63\], the second from the Markov property of the process $\beta$ and Definition (\[eq:processo Z\]) of the process $Z$, while the third equality follows directly from Proposition \[cor:Let–be-BIS\] and the measurability of $\beta_{t}$ with respect to $\sigma(\beta_{t})$. Note that in the above equality all terms are well-defined for *every* $t\geq 0$, the condition that $t\in\Lambda$ is only needed for the second and third equality. \[B\] The process $B=(B_{t},\, t\geq0)$ defined by $$B_{t}:=\beta_{t}+\int_{0}^{t}Z_{s}\, ds\,,\quad t\geq 0\,,\label{eq:BIGBM}$$ is a $\mathbb{G}$-Brownian motion stopped at $\tau$. Note that by Corollary \[cor:The-process-Z\] and Remark \[Z-modification\], the process $(Z_{t},\, t\geq0)$ is integrable with respect to the Lebesgue measure, hence $B$ is well-defined. It is clear that the process $B$ is continuous and $\mathbb{G}$-adapted. In order to prove that it is indeed a $\mathbb{G}$-Brownian motion stopped at $\tau$, it suffices to prove that the process $B$ and the process $X$ defined by $X_t:=B_{t}^{2}-(t\wedge\tau)$, $t\geq0$, are both $\mathbb{G}$-martingales. To this end we shall use Corollary \[cor:LEM if–is\]. First we show that $B$ is a $\mathbb{G}$-martingale. Let $n\in\mathbb{N}$ be an arbitrary but fixed natural number, $0<t_{1}<...<t_{n-1}<t_{n}:=t,\: h\geq0$ and $g$ an arbitrary bounded Borel function. Recalling the definition of $B^r$ in , we have that $$\begin{aligned} \lefteqn{\mathbf{E}\left[\left(B_{t+h}-B_{t}\right)g\left(\beta_{t_{1}},\dots,\beta_{t_{n}},\tau\right)\right]}\\ &=\int_{\left(0,+\infty\right)}\mathbf{E}\left[\left(B_{t+h}-B_{t}\right)g\left(\beta_{t_{1}},...,\beta_{t_{n}},\tau\right)|\tau=r\right]dF\left(r\right)\\ & =\int_{\left(0,+\infty\right)}\mathbf{E}\left[\left(B_{t+h}^{r}-B_{t}^{r}\right)g\left(\beta_{t_{1}}^{r},\ldots,\beta_{t_{n}}^{r},r\right)\right]dF(r)=0,\end{aligned}$$ because $B^{r}$ is a Brownian motion and hence a martingale with respect to the filtration generated by $\beta^{r}$ (cf. Lemma \[BMr\]). Using a monotone class argument, from this it can easily be derived that $B$ is a martingale with respect to $\mathbb{G}$. It remains to prove that the process $X$ is a $\mathbb{G}$-martingale. Putting $X_{t}^{r}:=(B_{t}^{r})^{2}-(t\wedge r)$, $t\geq0$, and repeating the same arguments used above, we see that $$\begin{aligned} \lefteqn{\mathbf{E}\left[\left(X_{t+h}-X_{t}\right)g\left(\beta_{t_{1}},\dots,\beta_{t_{n}},\tau\right)\right]}\\ &=\int_{\left(0,+\infty\right)}\mathbf{E}\left[\left(X_{t+h}-X_{t}\right)g\left(\beta_{t_{1}},\ldots,\beta_{t_{n}},\tau\right)|\tau=r\right]dF\left(r\right)\\ & =\int_{\left(0,+\infty\right)}\mathbf{E}\left[\left(X_{t+h}^{r}-X_{t}^{r}\right)g\left(\beta_{t_{1}}^{r},\ldots,\beta_{t_{n}}^{r},r\right)\right]dF(r)=0,\end{aligned}$$ since $X^{r}$ is a martingale with respect to the filtration generated by $\beta^{r}$. As above from this follows that $X$ is a martingale with respect to $\mathbb{G}$. This completes the proof of Lemma \[B\]. We are now ready to state the main result of this section. 
\[TEO:semimartFb\] The process $b=(b_{t},\, t\geq0)$ given by $$\begin{aligned} \nonumber b_{t}&:=\beta_{t}+\int_{0}^{t}\mathbf{E}\left[\frac{\beta_{s}}{\tau-s}\,\mathbb{I}_{\{s<\tau\}}\big|\beta_{s}\right]\, ds\\ &=\beta_t+\int_0^t \beta_s \int_{(s,+\infty)}\frac{1}{r-s}\,\phi_s(r,\beta_s)\, dF(r)\,\mathbb{I}_{\{s<\tau\}}\, ds\,, \label{eq:semimart-dec-1}\end{aligned}$$ where the second equality holds $\mathbf{P}$-a.s., is an $\mathbb{F}^{\beta}$-Brownian motion stopped at $\tau$. Thus the information process $\beta$ is an $\mathbb{F}^\beta$-semimartingale whose decomposition is determined by . First we notice that by Lemma \[lem:sqrt-tau-1\] and Remarks \[rem:If–for rem 63\] and \[rem:We-make-the QWERTY\] (iii), the integrands of the second and third term of are well-defined for all $s\geq 0$ and they are equal $l_+$-a.e. and integrable on $[0,t]\times\Omega$ with respect to $l_+\times\mathbf{P}$. This yields the second equality $\mathbf{P}$-a.s. Now we can apply Proposition \[pro:innovationLEMMA\] to the processes $(-Z)$ and $X$ where $X$ is defined by $X_t=\int_{0}^{t}(-Z_{s})\,ds+B_{t}$, $t\geq0$. By Proposition \[B\] we know that the process $B$ with $B_{t}:=\beta_{t}+\int_{0}^{t}Z_{s}\, ds$, $t\geq 0$, is a $\mathbb{G}$-Brownian motion stopped at $\tau$ (see Proposition \[B\]). Note that $X=\beta$. According to Proposition \[pro:innovationLEMMA\] the process $b$ with $$b_t=X_t-\int_0^{t}{^o(-Z)}_s\,ds=\beta_t+ \int_0^{t}{^o\!Z}_s\,ds=\beta_t+\int_{0}^{t}\mathbf{E}\left[\frac{\beta_{s}}{\tau-s}\,\mathbb{I}_{\{s<\tau\}}\big|\beta_{s}\right]\, ds\,,$$ is a Brownian motion stopped at $\tau$, with respect to $\mathbb{F}^X=\mathbb{F}^\beta$, where we have used Remark \[rem:We-make-the QWERTY\] (iii) for the third equality. This completes the proof of Theorem \[TEO:semimartFb\]. \[rem:QUA COV 1\] Note that the quadratic variation $\langle \beta,\beta\rangle$ of the information process $\beta$ is given by $$\langle \beta,\beta\rangle _{t}=\langle b,b\rangle _{t}=t\wedge\tau,\quad t\geq0\,.$$ This follows immediately from the semimartingale decomposition of $\beta$, see, for example, [@key-3 Ch. IV, (1.19)] (the quadratic variation does not depend on the filtration, provided that the process is a continuous semimartingale). \[sec:Local-time-of\] Example: pricing a Credit Default Swap ============================================================ A rather common financial contract that is traded in the credit market is the Credit Default Swap (CDS). A CDS with maturity $T>0$ is a contract between a buyer who wants to protect against the possibility that the default of a financial asset will take place before $T$, and a seller who provides such insurance. If the default has not occurred before $T$, the buyer will pay to the seller a fee until the maturity. But if the default time $\tau$ occurs before the maturity, the fee will be paid until the default and then the seller will give immediately to the buyer a pre-established amount of money $\delta$, called *recovery*. The recovery may depend on the time at which the default occurs and, hence, it is modeled by a positive function $\delta:[0,T]\rightarrow\mathbb{R}_{+}$. We follow the approach developed in [@key-29], and the reader can find some details also in [@key-4 Ch. 7.8]. 
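Before specializing to the CDS, the following numerical sketch indicates how the quantities appearing in Theorem \[TEO:semimartFb\] and in the pricing formulas below can be evaluated. It is an illustration only: the exponential a priori law, the truncation of the integrals over $(t,+\infty)$ and the observed value of $\beta_{t}$ are assumptions made for the example and are not part of the model; the bridge density $\varphi_{t}(r,x)$ is the centered Gaussian density with variance $t(r-t)/r$ (cf. the numerator in (\[Phi-rewritten\])).

```python
import numpy as np

def varphi(t, r, x):
    # Density of the Brownian bridge beta_t^r at time t < r, evaluated at x:
    # a centered Gaussian with variance t*(r - t)/r.
    var = t * (r - t) / r
    return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def bayes_quantities(t, x, lam=1.0, r_max=50.0, n=20_000):
    """A posteriori survival function and decomposition drift on {t < tau}, given beta_t = x.

    Assumptions (illustration only): F is exponential with rate lam and the
    integrals over (t, +infinity) are truncated at r_max (simple Riemann sums).
    """
    r = np.linspace(t + 1e-6, r_max, n)
    dr = r[1] - r[0]
    f = lam * np.exp(-lam * r)                 # density of the a priori law F
    w = varphi(t, r, x) * f                    # varphi_t(r, x) dF(r)
    phi = varphi(t, r, x) / (w.sum() * dr)     # a posteriori density phi_t(r, x) w.r.t. F

    def survival(u):
        # Psi_t(u) = P(u < tau | beta_t = x) = integral over (u, +inf) of phi_t(v, x) dF(v)
        mask = r > u
        return (phi[mask] * f[mask]).sum() * dr

    # Drift of the decomposition in Theorem TEO:semimartFb, evaluated at time t:
    # beta_t * integral over (t, +inf) of phi_t(r, beta_t) / (r - t) dF(r).
    drift = x * (phi * f / (r - t)).sum() * dr
    return survival, drift

# Hypothetical observation: beta_t = 0.3 at time t = 1.
survival, drift = bayes_quantities(t=1.0, x=0.3)
print(survival(2.0), drift)
```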
We assume that the default-free spot interest rate $r$ is constant and that the fee that the buyer must pay to the seller is paid continuously in time according to some rate $\kappa>0$, that is to say, the buyer has to pay an amount $\kappa dt$ during the time $dt$ until $\tau\wedge T$. In case the default time $\tau$ occurs before the maturity $T$, the seller will pay to the buyer a recovery $\delta(\tau)$ at time $\tau$. If the pricing measure is $\mathbf{P}$ and the market filtration is $\mathbb{G}=(\mathcal{G}_{t})_{t\geq0}$, the price $S_{t}(\kappa,\delta,r,T)$ at time $t$ of the CDS is given by $$S_{t}\left(\kappa,\delta,r,T\right):=e^{rt}\,\mathbf{E}\left[e^{-r\tau}\delta\left(\tau\right)\mathbb{I}_{\left\{ t<\tau\leq T\right\} }-\int_{t\wedge\tau}^{T\wedge\tau}e^{-rv}\kappa \, dv|\mathcal{G}_{t}\right].\label{eq:CDS}$$ We would like to compare the result obtained in our model with the one presented in [@key-29]. However, differently from [@key-29], we restrict ourselves to the simpler situation in which the market filtration is either the minimal filtration $\mathbb{H}$ that makes $\tau$ a stopping time or the filtration $\mathbb{F}^{\beta}$ generated by the information process $\beta$. It is worth noting that the minimal filtration that makes $\tau$ a stopping time is of particular importance in the theory of enlargement of filtrations and its applications to mathematical models of credit risk. We refer to the series of papers [@key-23; @key-24; @key-25] and to the book [@key-4] for the topics of mathematical finance, where the filtration $\mathbb{H}$ is used, and to the books [@key-18] and [@key-7] for a discussion of the subject from a purely mathematical point of view. In the following, let us make the further assumption that $r=0$. We first recall the pricing formula for the market filtration $\mathbb{G}=\mathbb{H}$. If the market filtration $\mathbb{G}$ is the minimal filtration $\mathbb{H}=(\mathcal{H}_{t})_{t\geq0}$ that makes $\tau$ a stopping time, then the price $S_{t}(\kappa,\delta,0,T)$ at time $t$ of a CDS is given by $$\begin{aligned} S_{t}\left(\kappa,\delta,0,T\right) & =\mathbf{E}\left[\delta\left(\tau\right)\mathbb{I}_{\left\{ t<\tau\leq T\right\} }-\kappa\left(\left(\tau\wedge T\right)-t\right)\mathbb{I}_{\left\{t<\tau\right\} }|\mathcal{H}_{t}\right]\nonumber \\ & =\mathbb{I}_{\left\{t<\tau\right\} }\frac{1}{G\left(t\right)}\left(-\int_{t}^{T}\delta\left(v\right)dG\left(v\right)-\kappa\int_{t}^{T}G\left(v\right)dv\right)\label{eq:CDSeasy}\end{aligned}$$ where $G(u):=\mathbf{P}(u<\tau)=1-F(u)$ is supposed to be strictly greater than 0 for every $u\in[0,T]$. See [@key-29 Lemma 2.1]. In our model, i.e., if the market filtration is the filtration $\mathbb{F}^{\beta}=(\mathcal{F}_{t}^{\beta})_{t\geq0}$ generated by the information process $\beta$, the pricing formula is given in the following proposition.
\[lem:If-,-i.e.,\]If $\mathbb{G}=\mathbb{F}^{\beta}$, then the price $S_{t}(\kappa,\delta,0,T)$ at time $t$ of a CDS is given by $$\begin{aligned} S_{t}\left(\kappa,\delta,0,T\right) & =\mathbf{E}\left[\delta\left(\tau\right)\mathbb{I}_{\left\{ t<\tau\leq T\right\} }-\kappa\left(\left(\tau\wedge T\right)-t\right)\mathbb{I}_{\left\{t<\tau\right\} }|\mathcal{F}_{t}^{\beta}\right]\label{price-formula} \\ & =\mathbb{I}_{\left\{t<\tau\right\} }\left(-\int_{t}^{T}\delta\left(v\right)d_{v}\Psi_{t}\left(v\right)-\kappa\int_{t}^{T}\Psi_{t}\left(v\right)dv\right)\label{eq:CDSmy}\end{aligned}$$ where $\Psi_{t}(u):=\mathbf{P}(u<\tau|\mathcal{F}_{t}^{\beta}) =\int_{u}^{+\infty}\phi_{t}(v,\beta_{t})\,dF(v)$ and the notation $d_{v}\Psi_{t}(v)$ in the above formula means that the integral is computed using $v$ as the integration variable. Concerning the first term in (\[price-formula\]), in view of Corollary \[LEM:prop\_cond\_exp\_tau\] and , we have, $\mathbf{P}$-a.s.: $$\begin{aligned} \mathbf{E}\left[\delta\left(\tau\right)\mathbb{I}_{\left\{ t<\tau\leq T\right\} }|\mathcal{F}_{t}^{\beta}\right]&=\mathbb{I}_{\left\{t<\tau\right\} }\int_{t}^{T}\delta\left(v\right)\phi_{t}\left(v,\beta_{t}\right)dF\left(v\right)\\ &=\mathbb{I}_{\left\{t<\tau\right\} }\int_{t}^{T}\delta\left(v\right)d_{v}\Phi_{t}\left(v\right)\end{aligned}$$ where $\Phi_{t}(v):=\mathbf{P}(\tau\leq v|\mathcal{F}_{t}^{\beta})=1-\Psi_{t}(v)$ and hence $$\begin{aligned} \label{Ex1} \mathbf{E}\left[\delta\left(\tau\right)\mathbb{I}_{\left\{ t<\tau\leq T\right\} }|\mathcal{F}_{t}^{\beta}\right]&= \mathbb{I}_{\left\{t<\tau\right\} }\int_{t}^{T}\delta\left(v\right)d_{v}\Phi_{t}\left(v\right)\\ &=-\mathbb{I}_{\left\{t<\tau\right\} }\int_{t}^{T}\delta\left(v\right)d_{v}\Psi_{t}\left(v\right)\,.\end{aligned}$$ Concerning the second term in (\[price-formula\]), again in view of Corollary \[LEM:prop\_cond\_exp\_tau\] and , we obtain $\mathbf{P}$-a.s., $$\begin{aligned} \nonumber\mathbf{E}\left[T\wedge\tau|\mathcal{F}_{t}^{\beta}\right]\mathbb{I}_{\left\{t<\tau\right\} } & =\left(\int_{t}^{T}vd_{v}\Phi_{t}\left(v\right)+T\left(1-\Phi_{t}\left(T\right)\right)\right)\mathbb{I}_{\left\{t<\tau\right\} }\\ \nonumber& =\left(-\int_{t}^{T}vd_{v}\Psi_{t}\left(v\right)+T\Psi_{t}\left(T\right)\right)\mathbb{I}_{\left\{t<\tau\right\}}\\ & =\left(\int_{t}^{T}\Psi_{t}\left(v\right)dv+t\Psi_{t}\left(t\right)\right)\mathbb{I}_{\left\{t<\tau\right\}}\,. \label{Ex2}\end{aligned}$$ Inserting (\[Ex1\]) and (\[Ex2\]) into the price formula (\[price-formula\]) and noting that $\Psi_t(t)=1$ on $\{t<\tau\}$ $\mathbf{P}$-a.s., we obtain the asserted result (\[eq:CDSmy\]). The proof of the proposition is completed. Although there is a formal analogy between (\[eq:CDSeasy\]) and (\[eq:CDSmy\]), the pricing formulas are quite different. The second formula is much more informative because it uses the observation of $\beta_t$ and the Bayes estimate of $\tau$, namely, the a posteriori distribution of $\tau$ after observing $\beta_t$. Given that $t<\tau$, the price in (\[eq:CDSeasy\]) is a deterministic value, while in (\[eq:CDSmy\]) the price depends on the observation $\beta_t$. The so-called *fair spread* of a CDS at time $t$ is the value $\kappa^{*}$ such that $S_{t}(\kappa^{*},\delta,0,T)=0$.
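Both the price (\[eq:CDSmy\]) and this fair spread can be evaluated numerically once the a posteriori density is available, using that $-d_{v}\Psi_{t}(v)=\phi_{t}(v,\beta_{t})\,dF(v)$. The following sketch is again an illustration only; the exponential a priori law, the flat recovery, the truncation of the integrals and the observed value of $\beta_{t}$ are assumptions made for the example and are not part of the model.

```python
import numpy as np

def varphi(t, r, x):
    # Density of the Brownian bridge beta_t^r at time t < r, evaluated at x.
    var = t * (r - t) / r
    return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def cds_price_and_fair_spread(t, x, T, kappa, delta, lam=1.0, r_max=50.0, n=20_000):
    """Evaluate the right-hand side of (eq:CDSmy) and the fair spread on {t < tau}.

    Assumptions (illustration only): F exponential with rate lam, truncated
    Riemann sums, and the identity -d_v Psi_t(v) = phi_t(v, x) dF(v).
    """
    v = np.linspace(t + 1e-6, r_max, n)
    dv = v[1] - v[0]
    f = lam * np.exp(-lam * v)                        # density of F
    post = varphi(t, v, x) * f
    post /= post.sum() * dv                           # phi_t(v, x) f(v), normalized
    in_T = v <= T
    protection = (delta(v[in_T]) * post[in_T]).sum() * dv   # -int_t^T delta(v) d_v Psi_t(v)
    Psi = 1.0 - np.cumsum(post) * dv                  # Psi_t(v) = P(v < tau | beta_t = x)
    premium = Psi[in_T].sum() * dv                    # int_t^T Psi_t(v) dv
    return protection - kappa * premium, protection / premium

# Hypothetical inputs: observation beta_t = 0.3 at t = 1, maturity T = 3,
# flat recovery delta = 0.6 and contractual rate kappa = 0.25.
price, kappa_star = cds_price_and_fair_spread(
    t=1.0, x=0.3, T=3.0, kappa=0.25, delta=lambda v: 0.6 * np.ones_like(v))
print(price, kappa_star)
```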
In our model we have that $$\kappa^{*}=-\frac{\int_{t}^{T}\delta\left(r\right)d_{r}\Psi_{t}\left(r\right)}{\int_{t}^{T}\Psi_{t}\left(r\right)dr}$$ while, in the simpler situation where the market filtration is $\mathbb{H}$, the fair spread is given by $$\kappa^{*}=-\frac{\int_{t}^{T}\delta\left(r\right)dG\left(r\right)}{\int_{t}^{T}G\left(r\right)dr}.$$ When dealing with problems related to credit risk, it is of interest to consider the case where the market filtration is a filtration $\mathbb{G}$ obtained by progressively enlarging a reference filtration $\mathbb{F}$ with another filtration, $\mathbb{D}=(\mathcal{D}_{t})_{t\geq0}$, which is responsible for modeling the information associated with the default time $\tau$. Traditionally, the filtration $\mathbb{D}$ has been settled to be equal to the filtration $\mathbb{H}$ generated by the single-jump process occurring at $\tau$. We intend to consider a different setting where the reference filtration $\mathbb{F}$ will be enlarged with the filtration $\mathbb{F}^{\beta}$ and we will provide the relative pricing formulas of credit instruments like the CDS. However, this is beyond the scope of the present paper. The Bayes Formula ================= Here the basic results on the Bayes formula are recalled without proofs. For further details we refer to [@key-8 Ch. II.7.8] or any book on Bayesian Statistics. Let $\tau$ and $X$ be random variables on a probability space $(\Omega,\,\mathcal{F},\,\mathbf{P})$ with values in measurable spaces $(E_{1},\,\mathcal{E}_{1})$ and $(E_{2},\,\mathcal{E}_{2})$, respectively. Let $\mathbf{P}_{r}$ be a regular conditional distribution of $X$ with respect to $\tau=r$, i.e., for $B\in\mathcal{E}_{2}$, $\mathbf{P}_{r}(B)=\mathbf{P}(X\in B|\tau=r)$, $\mathbf{P}_{\tau}$-a.s. By $\mathbf{P}_{\tau}$ we denote the distribution of $\tau$ on $(E_{1},\,\mathcal{E}_{1})$ (called the *a priori* distribution). Moreover, for $C\in\mathcal{E}_{1}$, let $\mathbf{G}_{C}$ be defined as follows: $$\begin{aligned} \mathbf{G}_{C}\left(B\right) & :=\int_{C}\mathbf{P}_{r}\left(B\right)\mathbf{P}_{\tau}\left(dr\right),\, B\in\mathcal{E}_{2}\,.\label{eq:starUAN}\end{aligned}$$ We are interested in the *a posteriori* probability $\mathbf{Q}(x,C):=\mathbf{P}(\tau\in C|X=x)$, for $x\in E_{2}$ and $C\in\mathcal{E}_{1}$. By $\mathbf{P}_{X}$ we denote the law of $X$. We have $\mathbf{G}_{C}\ll\mathbf{P}_{X}$ and $$\mathbf{Q}\left(x,C\right)=\frac{d\mathbf{G}_{C}}{d\mathbf{P}_{X}}\left(x\right),\quad x\in E_{2},\; C\in\mathcal{E}_{1},\;\mathbf{P}_{X}\textrm{-a.s.}$$ Now we assume that there exists a $\sigma$-finite measure $\mu$ on $(E_{2},\,\mathcal{E}_{2})$ such that $\mathbf{P}_{r}\ll\mu$, for all $r\in E_1$. Furthermore, we assume that there is a measurable function $p$ on $(E_{1}\times E_{2},\,\mathcal{E}_{1}\otimes\mathcal{E}_{2})$ such that $$p\left(r,x\right)=\frac{d\mathbf{P}_{r}}{d\mu}\left(x\right),\quad \mu\textrm{-a.e.}, \; r\in E_{1}\,.$$ We have that 1.  $\mathbf{G}_{C}\ll\mu$ and $\displaystyle{\frac{d\mathbf{G}_{C}}{d\mu}(x)=\int_{C}p(r,x)\mathbf{P}_{\tau}(dr),\;\mu\textrm{-a.e.}}$, 2.  
$\mathbf{P}_{X}\ll\mu$ and $\displaystyle{\frac{d\mathbf{P}_{X}}{d\mu}(x)=\int_{E_{1}}p(r,x)\mathbf{P}_{\tau}(dr),\;\mu\textrm{-a.e.}}$ \[cor:(Bayes-formula)-a.s.\] $$\mathbf{P}\left(\tau\in C|X=x\right)= \mathbf{Q}\left(x,C\right)=\frac{{\displaystyle \int_{C}p\left(r,x\right)\mathbf{P}_{\tau}\left(dr\right)}}{{\displaystyle \int_{E_{1}}p\left(v,x\right)\mathbf{P}_{\tau}\left(dv\right)}},\quad \mathbf{P}_{X}\textrm{-a.s.},\; C\in\mathcal{E}_{1}\,.$$ \[cor:The-conditional-density\] The a posteriori density $q(r,x)$ of $\tau$ under $X=x$ with respect to $\mathbf{P}_{\tau}$ is given by $$q\left(r,x\right):=p\left(r,x\right) \left(\int_{E_{1}}p\left(v,x\right)\mathbf{\mathbf{P}_{\tau}}\left(dv\right)\right)^{-1},\quad r\in E_1,\; x\in E_2\,.$$ Thus $$\mathbf{P}\left(\tau\in C|X=x\right)=\int_{C}q\left(r,x\right)\mathbf{\mathbf{P}_{\tau}}\left(dr\right),\quad \mathbf{P}_{X}\textrm{-a.s.},\; C\in\mathcal{E}_{1}\,.$$ For the proof of Theorem \[lem:PRE-theo-4.3\], we have to choose $(E_1,\mathcal{E}_1)=(\mathbb{R}_+,\mathcal{B}(\mathbb{R}_+))$, $(E_2,\mathcal{E}_2)=(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, $X=\beta_t$, $\tau$ as the default time and the $\sigma$-finite measure $\mu$ as the measure $\delta_0+l$ where $\delta_0$ is the Dirac measure at $0$ and $l$ is the Lebesgue measure on $\mathbb{R}$. Then Corollary \[cor:(Bayes-formula)-a.s.\] can be applied to the second term on the right-hand side of which yields the second term on the right-hand side of . \[sec:Proof of Innovation lemma\] Proof of Proposition \[pro:innovationLEMMA\] (Innovation Lemma) ================================================================================================= We start by observing that the assumption of Proposition \[pro:innovationLEMMA\] implies $$\int_{0}^{t}\left|Z_{s}\right|\, ds<+\infty,\quad t\geq0, \;\mathbf{P}\textrm{-a.s.}$$ Putting $Z$ equal to zero on the negligible set where $\int_0^t |Z_s|\, ds=+\infty$ for some $t\geq0$, we can assume without loss of generality that this property holds everywhere. Hence $\int_{0}^{t}Z_{s}\, ds$ is well defined and finite everywhere. By Remark \[rem:If–for rem 63\] we know that for the optional projection $^o\!Z$ of $Z$ with respect to $\mathbb{F}^X$ we have that $^{o}\!Z_{s}=\mathbf{E}\left[Z_{s}|\mathcal{F}_{s}^{X}\right]$, $\mathbf{P}$-a.s., for all $s\geq0$. This yields $\mathbf{E}\left[\int_{0}^{t}\left|^{o}\!Z_{s}\right|\, ds\right]\leq \mathbf{E}\left[\int_{0}^{t}\left|Z_{s}\right|\, ds\right]<+\infty$, $t\geq0$, and hence $\int_{0}^{t}|{^o\!Z}_{s}|\, ds<+\infty$, for all $t\geq0$, $\mathbf{P}$-a.s. As above, without loss of generality we can modify $^{o}\!Z$ on a $\mathbf{P}$-negligible set such that $\int_{0}^{t}\left|^{o}\!Z_{s}\right|\, ds<+\infty$, $t\geq0$, everywhere, while still being the optional projection of $Z$. Hence $\int_{0}^{t}\,^{o}\!Z_{s}\, ds$ is well-defined and finite everywhere, for all $t\geq0$. Since $^o\!Z$ is $\mathbb{F}^X$-optional, it is clear that the process $\int_{0}^{\cdot}\,^{o}\!Z_{s}\, ds$ is $\mathbb{F}^{X}$-adapted. Therefore, the process $b$ with $b_t:=X_t-\int_0^t{^o\!Z}_s\, ds$, $t\geq0$, is $\mathbb{F}^{X}$-adapted. Furthermore, $Z$ being $\mathbb{F}$-optional, the process $\int_{0}^{\cdot}Z_{s}\, ds$ is $\mathbb{F}$-adapted and hence the process $X$ is also $\mathbb{F}$-adapted. This yields the inclusion $\mathbb{F}^X\subseteq\mathbb{F}$. Next we show that the process $b$ is an $\mathbb{F}^{X}$-martingale. Obviously, $b_{t}$ is integrable for all $t\geq0$ and, as shown above, $b$ is $\mathbb{F}^{X}$-adapted. 
Let $0\leq s<t$. For showing the martingale property, using Fubini’s theorem, we first notice that there exists a Borel set $\Lambda\subseteq\mathbb{R}_+$ such that $l_+(\mathbb{R}_+\setminus \Lambda)=0$ and $Z_s$, and hence also $^o\!Z_s$, is integrable for all $s\in\Lambda$. Now $$\begin{aligned} \lefteqn{\mathbf{E}\left[b_{t}-b_{s}|\mathcal{F}_{s}^{X}\right]}\\ &=\mathbf{E}\left[B_{t}-B_{s}|\mathcal{F}_{s}^{X}\right]+\mathbf{E}\left[\int_{s}^{t}\left(Z_{u}-{}^{o}\!Z_{u}\right)du|\mathcal{F}_{s}^{X}\right]\\ &=\mathbf{E}\left[\mathbf{E}\left[B_{t}-B_{s}|\mathcal{F}_{s}\right]|\mathcal{F}_{s}^{X}\right]+\mathbf{E}\left[\int_{s}^{t}\mathbb{I}_{\Lambda}\left(u\right)\,\left(Z_{u}-{}^{o}\!Z_{u}\right)du|\mathcal{F}_{s}^{X}\right]\\ & =\int_{s}^{t}\mathbb{I}_{\Lambda}\left(u\right)\,\mathbf{E}\left[Z_{u}-{}^{o}\!Z_{u}|\mathcal{F}_{s}^{X}\right]du\\ &=\int_{s}^{t}\mathbb{I}_{\Lambda}\left(u\right)\,\mathbf{E}\left[\mathbf{E}\left[Z_{u}-{}^{o}\!Z_{u}|\mathcal{F}^X_u\right]|\mathcal{F}_{s}^{X}\right]du=0\end{aligned}$$ where we have used that $\mathbb{F}^X\subseteq\mathbb{F}$ and that $B$ is an $\mathbb{F}$-martingale, Fubini’s theorem and properties of the optional projection (see Remark \[rem:If–for rem 63\]). This proves that $b$ is a continuous $\mathbb{F}^X$-martingale (continuity is clear, $b$ being the difference of the continuous process $X$ and a time integral). Finally, since $B$ is an $\mathbb{F}$-Brownian motion stopped at $T$, from the definition of $b$ it is clear that $b$ is a continuous $\mathbb{F}$-semimartingale with square variation process $\langle b,b\rangle$: $\langle b,b\rangle_t=t\wedge T$, $t\geq 0$. In view of the well-known fact that the square variation of continuous semimartingales does not depend on the filtration, this is also true with respect to $\mathbb{F}^X$. This proves that $b$ is an $\mathbb{F}^X$-Brownian motion stopped at $T$. \[sec:Proofs-of-Section\] Proof of Lemma \[lem:On-the-set-2\] ============================================================= First we recall that $(t_{n})_{n\in\mathbb{N}}$ is a strictly decreasing sequence converging to $0$ such that $0<t_{n+1}<t_{n}<\ldots<t_{1}<\varepsilon,\, t_{n}\downarrow0$. 
For proving that $\lim_{n\rightarrow\infty}\phi_{t_{n}}(r,\beta_{t_{n}})=1$, using the assumption that $\mathbf{P}(\tau>\varepsilon)=1$, and , we can write $$\begin{aligned} \nonumber\lefteqn{\phi_{t_{n}}\left(r,\beta_{t_{n}}\right)}\\ =&\frac{\left(2\pi t_{n}\right)^{-1/2}\,{r}^{1/2}\left(r-t_{n}\right)^{-{1/2}}\,\exp\big[-\beta_{t_{n}}^{2}r/2t_{n}\left(r-t_{n}\right)\big]\mathbb{I}_{\left(t_n,+\infty\right)}\left(r\right)}{\!\!\!\!\int\limits_{\left(\varepsilon,+\infty\right)}\left(2\pi t_{n}\right)^{-1/2}\,{s}^{1/2}\left(s-t_{n}\right)^{-1/2}\, \exp\big[-\beta_{t_{n}}^{2}s/2t_{n}\left(s-t_{n}\right)\big]\, dF\left(s\right)}\\ =&\frac{r^{1/2}\left(r-t_{n}\right)^{-1/2}\; \mathbb{I}_{\left(t_n,+\infty\right)}\left(r\right)}{\int\limits_{\left(\varepsilon,+\infty\right)}{s}^{1/2}\left(s-t_{n}\right)^{-1/2}\exp\big[\beta_{t_{n}}^{2}\left(s-r\right)/2\left(r-t_{n}\right)\left(s-t_{n}\right)\big]\,dF\left(s\right)}\,.\label{eq:boh}\end{aligned}$$ First we note that for $s\in\left(\varepsilon,+\infty\right)$ $$\label{Estimate:square-root} 1\leq s^{1/2}\left(s-t_{n}\right)^{-1/2}\leq\varepsilon^{1/2} \left(\varepsilon-t_{n}\right)^{-1/2}\leq\varepsilon^{1/2}\left(\varepsilon-t_{1}\right)^{-1/2}\,.$$ Secondly, if $s\in(\varepsilon,r)$, then $\exp\left[\beta_{t_{n}}^{2}(s-r)/2(r-t_{n})(s-t_{n})\right]\leq1,$ while for $s\in[r,+\infty)$ we have that $(s-r)/(s-t_{n})\leq1$, and thus $$\exp\left[\beta_{t_{n}}^{2}(s-r)/2(r-t_{n})(s-t_{n})\right]\leq\exp\left[\beta_{t_{n}}^{2}/2(r-t_{n})\right].$$ We note that the right-hand side of this inequality is bounded with respect to $n$, since $t_n\downarrow0$ and $\beta_{t_n}\rightarrow0$. Furthermore, $$\lim_{n\rightarrow\infty}s^{1/2}\left(s-t_{n}\right)^{-1/2}\exp\left[\beta_{t_{n}}^{2}\left(s-r\right)/2\left(r-t_{n}\right)\left(s-t_{n}\right)\right]=1\,.$$ Thus we can apply Lebesgue’s bounded convergence theorem and it follows that $$\lim_{n\rightarrow\infty} \int\limits_{\left(\varepsilon,+\infty\right)}\!\!\!\!s^{1/2}\left(s-t_{n}\right)^{-1/2}\exp\left[\beta_{t_{n}}^{2}\left(s-r\right)/2\left(r-t_{n}\right)\left(s-t_{n}\right)\right]\,dF\left(s\right)=1\,.$$ Finally, as for the numerator in , $$\lim_{n\rightarrow\infty}r^{1/2}(r-t_{n})^{-1/2}\, \mathbb{I}_{(t_n,+\infty)}(r)=1,\; r>0\,,$$ we have $\lim_{n\rightarrow\infty}\phi_{t_{n}}(r,\beta_{t_{n}})=1$. In order to prove the $\mathbf{P}_\tau$-a.s. uniform boundedness of $\phi_{t_{n}}(\cdot,\beta_{t_{n}})$, first we notice that in view of the numerator in is uniformly bounded in $r\in(\varepsilon,+\infty)$ and using the assumption that $\mathbf{P}(\tau>\varepsilon)=1$ it follows that it is $\mathbf{P}_\tau$-a.s. uniformly bounded in $r\in(0,+\infty)$, too. It remains to verify that the denominator in is bounded from below by a strictly positive constant only depending on $\varepsilon$ and $\omega$. Using , for $s\in(\varepsilon,+\infty)$ the integrand in the denominator of can be estimated from below by $$\begin{aligned} \lefteqn{{s}^{1/2}\left(s-t_{n}\right)^{-1/2}\exp\big[\beta_{t_{n}}^{2}\left(s-r\right)/2\left(r-t_{n}\right)\left(s-t_{n}\right)\big]}\\ &\geq&\exp\big[\beta_{t_{n}}^{2}\left(s-r\right)/2\left(r-t_{n}\right)\left(s-t_{n}\right)\big]\\ &\geq&\exp\left[-\beta_{t_{n}}^{2}/\left(\varepsilon-t_{1}\right)\right]\, \mathbb{I}_{\left(\varepsilon,r\right)}+\mathbb{I}_{\left[r,+\infty\right)}\geq e^{-\gamma}\end{aligned}$$ where $\gamma=\gamma(\varepsilon,\omega)=\sup_{n\geq1}\beta_{t_{n}}^{2}(\omega)/(\varepsilon-t_{1})<+\infty$. 
Hence for the denominator in it follows $$\begin{aligned} \lefteqn{e^{-\gamma}=\int_{\left(\varepsilon,+\infty\right)}e^{-\gamma}\, dF\left(s\right)}\\ &\leq&\int_{\left(\varepsilon,+\infty\right)}{s}^{1/2}\left(s-t_{n}\right)^{-1/2}\exp\big[\beta_{t_{n}}^{2}\left(s-r\right)/2\left(r-t_{n}\right)\left(s-t_{n}\right)\big]\,dF\left(s\right)\,.\end{aligned}$$ The proof of Lemma \[lem:On-the-set-2\] is completed. Acknowledgment {#acknowledgment .unnumbered} -------------- This work has been financially supported by the European Community’s FP 7 Program under contract PITN-GA-2008-213841, Marie Curie ITN Controlled Systems. [99]{} Bedini M. L. *Information on a Default Time: Brownian Bridges on Stochastic Intervals and Enlargement of Filtrations*. PhD Thesis, Friedrich Schiller University of Jena (Germany), 2012. Bielecki T.R., Jeanblanc M., Rutkowski M. *Hedging of basket of credit derivatives in a credit default swap market*. Journal of Credit Risk, 3: 91–132, 2007. Blumenthal R.M., Getoor R.K. *Markov Processes and Potential Theory*. Academic Press, 1968. Brody D., Hughston L., Macrina A. *Beyond Hazard Rates: A New Framework for Credit-Risk Modeling*. Advances in Mathematical Finance: Festschrift Volume in Honour of Dilip Madan, pp. 231–257, Basel: Birkhäuser, 2007. Bielecki T.R., Rutkowski M. *Credit risk: Modelling Valuation and Hedging*. Springer-Verlag, Berlin, 2001. Dellacherie C. *Capacités et processus stochastiques*. Volume 67 of Ergebnisse. Springer 1972. Dellacherie C., Meyer P.-A. *Probabilities and Potential*. North-Holland, 1978. Jeanblanc M., Le Cam Y. *Immersion property and Credit Risk Modelling*. Optimality and Risk - Modern Trends in Mathematical Finance, pp. 99–132. Springer Berlin Heidelberg, 2010. Jeanblanc M., Le Cam Y. *Progressive enlargement of filtrations with initial times*. Stoch. Process. Appl., 2009, v. 119, No. 8, 2523–2543. Jeanblanc M., Le Cam Y. *Reduced form modelling for credit risk*. Preprint 2007, available at: http://ssrn.com/abstract=1021545. Jeanblanc M., Yor M., Chesney M. *Mathematical Methods for Financial Markets*. Springer, First edition, 2009. Kallenberg O. *Foundations of Modern Probability*. Springer-Verlag, New York, Second edition, 2002. Karatzas I., Shreve S. *Brownian Motion and Stochastic Calculus*. Springer-Verlag, Berlin, Second edition, 1991. Revuz D., Yor M. *Continuous Martingales and Brownian Motion*. Springer-Verlag, Berlin, Third edition, 1999. Rogers L.C.G., Williams D. *Diffusions, Markov Processes and Martingales. Vol. 2: Itô Calculus*. Cambridge University Press, Second edition, 2000. Shiryaev A.N. *Probability*. Springer-Verlag, Second edition, 1991. Matteo Ludovico Bedini, Intesa Sanpaolo, Milano, Italy;\ e-mail: matteo.bedini@intesasanpaolo.com Rainer Buckdahn, Laboratoire de Mathématiques CNRS-UMR 6204,\ Université de Bretagne Occidentale, Brest, France; School of Mathematics, Shandong University, Jinan, Shandong Province, P.R.China;\ e-mail: rainer.buckdahn@univ-brest.fr Hans-Jürgen Engelbert, Friedrich-Schiller-Universität Jena, Jena, Germany; e-mail: hans-juergen.engelbert@uni-jena.de
--- author: - Sébastien Boucksom title: Higher dimensional Zariski decompositions --- \[section\] \[theo\][Corollary]{} \[theo\][Definition]{} \[theo\][Proposition]{} \[theo\][Lemma]{} $Author's$ $address$: Institut Fourier, 100 rue des Maths, BP74, 38402 Saint-Martin d’Hères Cedex, France.\ $e-mail$: sbouckso@ujf-grenoble.fr\ $Abstract$: using currents with minimal singularities, we construct minimal multiplicities for a real pseudo-effective $(1,1)$-class $\alpha$ on a compact complex $n$-fold $X$, which are the local obstructions to the numerical effectivity of $\alpha$. The negative part of $\alpha$ is then defined as the real effective divisor $N(\alpha)$ whose multiplicity along a prime divisor $D$ is just the generic multiplicity of $\alpha$ along $D$, and we get in that way a divisorial Zariski decomposition of $\alpha$ into the sum of a class $Z(\alpha)$ which is nef in codimension 1 and the class of its negative part $N(\alpha)$, which is exceptional in the sense that it is very rigidly embedded in $X$. The positive parts $Z(\alpha)$ generate a modified nef cone, and the pseudo-effective cone is shown to be locally polyhedral away from the modified nef cone, with extremal rays generated by exceptional divisors. We then treat the case of a surface and a hyper-Kähler manifold in some detail: under the intersection form (resp. the Beauville-Bogomolov form), we characterize the modified nef cone and the exceptional divisors; our divisorial Zariski decomposition is orthogonal, and is thus a rational decomposition, which fact accounts for the usual existence statement of a Zariski decomposition on a projective surface, which is thus extended to the hyper-Kähler case. Finally, we explain how the divisorial Zariski decomposition of (the first Chern class of) a big line bundle on a projective manifold can be characterized in terms of the asymptotics of the linear series $|kL|$ as $k\to\infty$.\ 2000 Mathematics Subject Classification: 32J25 Introduction ============ It is known since the pioneering work of O.Zariski \[Zar62\] that the study of the linear series $|kL|$ where $L$ is a line bundle on a projective surface can be reduced to the case where $L$ is numerically effective (nef). The more precise result obtained by Zariski is that any effective ${\mathbf{Q}}$-divisor $D$ on a projective surface $X$ can be uniquely decomposed into a sum $D=P+N$ where $P$ is a nef ${\mathbf{Q}}$-divisor (the positive part), $N=\sum a_j D_j$ is an effective ${\mathbf{Q}}$-divisor (the negative part) such that the Gram matrix $(D_i\cdot D_j)$ is negative definite, and $P$ is orthogonal to $N$ with respect to the intersection form. Zariski shows that the natural inclusion $H^0(kP)\to H^0(kL)$ is necessarily an isomorphism in that case, relating the decomposition to the original problem.\ The proof of the uniqueness in this decomposition shows that the negative part $N$ only depends on the class $\{D\}$ of $D$ in the Néron-Severi group $NS(X)$, so that $\{D\}\mapsto\{P\}$ yields a map from part of the pseudo-effective cone to the nef cone, which we want to study geometrically.\ Building upon the construction by J.-P.Demailly of metrics with minimal singularities on a pseudo-effective line bundle $L$ over a compact complex $n$-fold, we define the minimal multiplicity $\nu(\alpha,x)$ of an arbitrary real pseudo-effective $(1,1)$-class $\alpha$ on a compact complex $n$-fold $X$ at some point $x\in X$. 
This multiplicity $\nu(\alpha,x)$ is the local obstruction at $x$ to the numerical effectivity of $\alpha$, and we then get the negative part of such a class $\alpha$ by setting $N(\alpha)=\sum\nu(\alpha,D)D$, where $D$ ranges over the prime divisors of $X$ and $\nu(\alpha,D)$ is the generic multiplicity of $\alpha$ along $D$ (cf. section 3). This negative part $N(\alpha)$ is an effective ${\mathbf{R}}$-divisor which is exceptional in the sense that it is very rigidly imbedded in $X$. When $X$ is a surface, the divisors we obtain in that way are exactly the effective ${\mathbf{R}}$-divisors whose support $D_1,...,D_r$ have negative definite Gram matrix $(D_i\cdot D_j)$.\ The difference $Z(\alpha):=\alpha-\{N(\alpha)\}$ is a real $(1,1)$-class on $X$ which we call the Zariski projection of $\alpha$. It is not a nef class, but is somehow nef in codimension 1. More precisely, we define the modified nef cone of a Kähler $n$-fold to be the closed convex cone generated by the classes in ${H^{1,1}(X,{\mathbf{R}})}$ which can be written as the push-forward of a Kähler class by a modification. We then show that the Zariski projection $Z(\alpha)$ of a pseudo-effective class $\alpha$ belongs to this modified nef cone. The decomposition $\alpha=Z(\alpha)+\{N(\alpha)\}$ we call the divisorial Zariski decomposition, and it is just induced by the Siu decomposition of a positive current with minimal singularities in $\alpha$ when the latter is big. For such a big class, we give a criterion to recognize a decomposition $\alpha=p+\{N\}$ into a modified nef and big class and the class of an effective real divisor as the divisorial Zariski decomposition of $\alpha$, in terms of the non-Kähler locus of $p$ (cf. section 3.5)\ The geometric picture is now as follows: the pseudo-effective cone of a compact complex $n$-fold $X$ is locally polyhedral away from the modified nef cone, with extremal rays that write ${\mathbf{R}}_+\{D\}$ for some exceptional prime $D$ of $X$. The Zariski projection $Z$ yields a projection from the pseudo-effective cone to the modified nef cone parallel to these exceptional rays, which map is concave (in some sense) and homogeneous, but not continuous up to the boundary of the pseudo-effective cone in general. The fibre $Z^{-1}(p)$ of $Z$ above a modified nef class $p$ is a countable union of simplicial cones generated by exceptional families of primes.\ When $X$ is a surface, a modified nef class is just a nef class; when $\alpha$ is the class of an effective ${\mathbf{Q}}$-divisor $D$ on a projective surface, the divisorial Zariski decomposition of $\alpha$ is just the original Zariski decomposition of $D$. More generally, we show that the divisorial decomposition of a pseudo-effective class $\alpha$ on a Kähler surface is the unique orthogonal decomposition of $\alpha$ into the sum of a modified nef class and the class of an exceptional (in some sense) effective ${\mathbf{R}}$-divisor. This fact accounts for the rationality of the Zariski decomposition on a surface, meaning that the negative part $N$ is rational when $D$ is.\ An interesting fact is that much of the well-known case of a surface carries on to the case where $X$ is a compact hyper-Kähler manifold. Using the quadratic Beauville-Bogomolov form on ${H^{1,1}(X,{\mathbf{R}})}$ and deep results due to D.Huybrechts, we can prove the following facts: a family of primes is exceptional in our sense iff their Gram matrix is negative definite. 
In particular, a prime is exceptional iff it has negative square, and this forces it to be uniruled. The modified nef cone of a hyper-Kähler manifold is just the dual cone to the pseudoeffective cone, which is also the closure of the so-called birational (or bimeromorphic) Kähler cone. Finally, the divisorial Zariski decomposition is the unique orthogonal decomposition into a modified nef class and an exceptional divisor. In particular, the divisorial Zariski decomposition is also rational in that case.\ In a last part, we explain how to tackle the above constructions in a more algebraic fashion. When $L$ is a big divisor on a projective manifold, we prove that the divisorial Zariski projection of $L$ is the only decomposition $L=P+N$ into real divisors with $P$ modified nef and $H^0(\lfloor kP\rfloor)=H^0(kL)$ for every $k$. The minimal multiplicities of $\{L\}$ (and thus its negative part) can be recovered from the asymptotic behaviour of the sections of $kL$. The case of a general pseudo-effective line bundle $L$ is then handled by approximating it by $L+{\varepsilon}A$, where $A$ is ample.\ The methods used in this paper are mostly “transcendental”, since we heavily rely on the theory of currents, but the results we aim at definitely belong to algebraic geometry, and we have thus tried to make the paper legible for more algebraically inclined readers by providing in the first section a rather detailed account of the tools we need afterwards. Technical preliminaries ======================= ${\partial\overline{\partial}}$-cohomology ------------------------------------------ When $X$ is an arbitrary complex manifold, the ${\partial\overline{\partial}}$-lemma of Kähler geometry does not hold, and it is thus better to work with ${\partial\overline{\partial}}$-cohomology. We will just need the $(1,1)$-cohomology space $H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{C}})$, which is defined as the quotient of the space of $d$-closed smooth $(1,1)$-forms modulo the ${\partial\overline{\partial}}$-exact ones. The real structure on the space of forms induces a real structure on $H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{C}})$, and we denote by ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ the space of real points.\ The canonical map from $H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{C}})$ to the quotient of the space of $d$-closed $(1,1)$-currents modulo the ${\partial\overline{\partial}}$-exact ones is injective (because, for any degree $0$ current $f$, ${\partial\overline{\partial}}f$ is smooth iff $f$ is), and is also surjective: given a closed $(1,1)$-current $T$, one can find a locally finite open covering $U_j$ of $X$ such that $T={\partial\overline{\partial}}f_j$ is ${\partial\overline{\partial}}$-exact on $U_j$. If $\rho_j$ is a partition of unity associated to $U_j$ and $f:=\sum\rho_jf_j$, then $T-{\partial\overline{\partial}}f$ is smooth. Indeed, on $U_i$, it is just ${\partial\overline{\partial}}(\sum_j\rho_j(f_i-f_j)$, and each $f_i-f_j$ is smooth since it is even pluri-harmonic. As a consequence, a class $\alpha\in H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{C}})$ can be seen as an affine space of closed $(1,1)$-currents. We denote by $\{T\}\in H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{C}})$ the class of the current $T$. 
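As a brief aside (a standard fact, added here for orientation and not needed in the sequel): when $X$ is compact Kähler, the ${\partial\overline{\partial}}$-lemma does hold, and the space just defined is canonically identified with the usual real $(1,1)$-cohomology,
$$H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})\;\simeq\;{H^{1,1}(X,{\mathbf{R}})}:=H^{1,1}(X,{\mathbf{C}})\cap H^{2}(X,{\mathbf{R}}),$$
which is consistent with the notation ${H^{1,1}(X,{\mathbf{R}})}$ used in the abstract for Kähler and hyper-Kähler manifolds.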
Remark that $i{\partial\overline{\partial}}$ is a real operator (on forms or currents), so that if $T$ is a real closed $(1,1)$-current, its class $\{T\}$ lies in ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ and consists in all the closed currents $T+i{\partial\overline{\partial}}\varphi$ where $\varphi$ is a real current of degree $0$.\ When $X$ is furthermore compact, it can be shown that $H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{C}})$ is finite dimensional. The operator ${\partial\overline{\partial}}$ from smooth functions to smooth closed (1,1)-forms is thus an operator between Fréchet spaces with finite codimensional range; it therefore has closed range, and the quotient map $\theta\mapsto\{\theta\}$ from smooth closed $(1,1)$-forms to $H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{C}})$ endowed with its unique finite-dimensional complex vector space Hausdorff topology is thus continuous and open. General facts about currents ---------------------------- ### Siu decomposition Let $T$ be a closed positive current of bidegree $(p,p)$ on a complex $n$-fold $X$. We denote by $\nu(T,x)$ its Lelong number at a point $x\in X$. The Lelong super-level sets are defined by $E_c(T):=\{x\in X,\nu(T,x)\geq c\}$ for $c>0$, and a well known result of Y.T.Siu \[Siu74\] asserts that $E_c(T)$ is an analytic subset of $X$, of codimension at least $p$. As a consequence, for any analytic subset $Y$ of $X$, the generic Lelong number of $T$ along $Y$, defined by $$\nu(T,Y):=\inf\{\nu(T,x),x\in Y\},$$ is also equal to $\nu(T,x)$ for a very general $x\in Y$. It is also true that, for any irreducible analytic subset $Y$ of codimension $p$ in $X$, the current\ $T-\nu(T,Y)[Y]$ is positive. The symbol $[Y]$ denotes the integration current on $Y$, which is defined by integrating test forms on the smooth locus of $Y$. Since $E_+(T):=\cup_{c>0}E_c(T)$ is a countable union of $p$-codimensional analytic subsets, it contains an at most countable family $Y_k$ of $p$-codimensional irreducible analytic subsets. By what we have said, $T-\nu(T,Y_1)[Y_1]-...-\nu(T,Y_k)[Y_k]$ is a positive current for all $k$, thus the series $\sum_{k\geq 0}\nu(T,Y_k)[Y_k]$ converges, and we have $$T=R+\sum_k\nu(T,Y_k)[Y_k]$$ for some closed positive $(p,p)$-current $R$ such that each $E_c(R)$ has codimension $>p$. The decomposition above is called the Siu decomposition of the closed positive $(p,p)$-current $T$. Since $\nu(T,Y)=0$ if $Y$ is a $p$-codimensional subvariety not contained in $E_+(T)$, it makes sense to write $\sum_k\nu(T,Y_k)[Y_k]=\sum\nu(T,Y)[Y]$, where the sum is implicitely extended over all $p$-codimensional irreducible analytic subsets $Y\subset X$.\ ### Almost positive currents A real $(1,1)$-current $T$ on a complex manifold $X$ is said to be almost positive if $T\geq\gamma$ holds for some smooth real $(1,1)$-form $\gamma$. Let $T\geq\gamma$ be a closed almost positive $(1,1)$-current. On a small enough open set $U$ with coordinates $z=(z_1,...,z_n)$, we write $T={\partial\overline{\partial}}\varphi$ where $\varphi$ is a degree $0$ current. Since $\gamma+Ci{\partial\overline{\partial}}|z|^2$ is a positive $(1,1)$-form on $U$ for $C>0$ big enough, we get that $i{\partial\overline{\partial}}(\varphi+C|z|^2)$ is positive, which means that $\varphi+C|z|^2$ is (the current associated to) a (unique) pluri-subharmonic function on $U$. 
A locally integrable function $\varphi$ on $X$ such that $i{\partial\overline{\partial}}\varphi$ is almost positive is called an almost pluri-subharmonic function, and is thus locally equal to a pluri-subharmonic function modulo a smooth function.\ The Lelong number $\nu(T,x)$ of a closed almost positive $(1,1)$-current $T$ can be defined as $\nu(T+Ci{\partial\overline{\partial}}|z|^2,x)$ as above, since this does not depend on the smooth function $C|z|^2$. Consequently, the Siu decomposition of $T$ can also be constructed, and writes $T=R+\sum\nu(T,D)[D]$, where $D$ ranges over the prime divisors of $X$, and $R$ is a closed almost positive $(1,1)$-current. In fact, we have $R\geq\gamma$ as soon as $T\geq\gamma$ for a smooth form $\gamma$.\ ### Pull-back of a current When $f:Y\to X$ is a $surjective$ holomorphic map between compact complex manifolds and $T$ is a closed almost positive $(1,1)$-current on $X$, it is possible to define its pull back $f^{\star}T$ by $f$ using the analogue of local equations for divisors: write $T=\theta+i{\partial\overline{\partial}}\varphi$ for some smooth form $\theta\in\{T\}$. $\varphi$ is then an almost pluri-subharmonic function, thus locally a pluri-subharmonic function modulo $\mathcal{C}^{\infty}$. One defines $f^{\star}T$ to be $f^{\star}\theta+i{\partial\overline{\partial}}f^{\star}\varphi$, as this is easily seen to be independent of the choices made. Of course, we then have $\{f^{\star}T\}=f^{\star}\{T\}$. ### Gauduchon metrics and compactness On any compact complex $n$-fold $X$, there exists a Hermitian metric $\omega$ such that $\omega^{n-1}$ is ${\partial\overline{\partial}}$-closed. Such a metric is called a Gauduchon metric. As a consequence, for every smooth real $(1,1)$-form $\gamma$, the quotient map $T\mapsto\{T\}$ from the set of closed $(1,1)$-currents $T$ with $T\geq\gamma$ to ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is proper. Indeed, the mass of the positive current $T-\gamma$ is controled by $\int(T-\gamma)\wedge\omega^{n-1}$, and $\int T\wedge\omega^{n-1}=\{T\}\cdot\{\omega\}$ only depends on the class of $T$. The result follows by the weak compactness of positive currents with bounded mass. Another consequence is that the kernel of $T\mapsto\{T\}$ meets the cone of closed positive $(1,1)$-currents at the origin only.\ ### Cycles as currents One can associate to any effective $p$-codimensional ${\mathbf{R}}$-cycle $Y=a_1Y_1+...+a_rY_r$ a closed positive $(p,p)$-current $[Y]=a_1[Y_1]+...+a_r[Y_r]$, called the integration current on $Y$. The map $Y\mapsto[Y]$ so defined is injective, and a result of Thie says that the Lelong number $\nu([Y],x)$ is just the multiplicity of $Y$ at $x$. Consequently, we shall drop the brackets in $[Y]$ when no confusion is to be feared, and write for instance $T=R+\sum\nu(T,D)D$ for a Siu decomposition, because this is more in the spirit of this work. Cones in the ${\partial\overline{\partial}}$-cohomology ------------------------------------------------------- We now assume that $X$ is compact, and fix some reference Hermitian form $\omega$ (i.e. a smooth positive definite $(1,1)$-form). A cohomology class\ $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is said to be pseudo-effective iff it contains a positive current;\ $\alpha$ is nef (numerically effective) iff, for each ${\varepsilon}>0$, $\alpha$ contains a smooth form $\theta_{{\varepsilon}}$ with $\theta_{{\varepsilon}}\geq-{\varepsilon}\omega$;\ $\alpha$ is big iff it contains a Kähler current, i.e. 
a closed $(1,1)$-current $T$ such that $T\geq{\varepsilon}\omega$ for ${\varepsilon}>0$ small enough. Finally, $\alpha$ is a Kähler class iff it contains a Kähler form (note that a smooth Kähler current is the same thing as a Kähler form).\ Since any two Hermitian forms $\omega_1$, $\omega_2$ are commensurable (i.e. $C^{-1}\omega_2\leq\omega_1\leq C\omega_2$ for some $C>0$), these definitions do not depend on the choice of $\omega$.\ The set of pseudo-effective classes is a closed convex cone ${\mathcal{E}}\subset{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$, called the pseudo-effective cone. It has compact base, because so is the case of the cone of closed positive $(1,1)$-currents. Similarly, one defines the nef cone ${\mathcal{N}}$ (a closed convex cone), the big cone ${\mathcal{B}}$ (an open convex cone), and the Kähler cone ${\mathcal{K}}$ (an open convex cone). We obviously have the inclusions $${\mathcal{K}}\subset{\mathcal{B}}\subset{\mathcal{E}}$$ and $${\mathcal{K}}\subset{\mathcal{N}}\subset{\mathcal{E}}.$$ By definition, $X$ is a Kähler manifold iff its Kähler cone ${\mathcal{K}}$ is non-empty. Similarly (but this is a theorem, cf. \[DP01\]) $X$ is a Fujiki manifold (i.e. bimeromorphic to a Kähler manifold) iff its big cone ${\mathcal{B}}$ is non-empty (see also the proof of proposition 2.3 below). If $X$ is Kähler, ${\mathcal{K}}$ is trivially the interior ${\mathcal{N}}^0$ of the nef cone. Similarly, if $X$ is Fujiki, ${\mathcal{B}}$ is trivially the interior ${\mathcal{E}}^0$ of the pseudo-effective cone.\ We will now and then denote by $\geq$ the partial order relation on ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ induced by the convex cone ${\mathcal{E}}$. The Néron-Severi space ---------------------- Given a line bundle $L$ on $X$, each smooth Hermitian metric $h$ on $L$ locally writes as $h(x,v)=|v|^2e^{-2\varphi(x)}$ for some smooth local weight $\varphi$; the curvature form $\Theta_h(L):=\frac{i}{\pi}{\partial\overline{\partial}}\varphi$ is a globally defined real $(1,1)$-form, whose class in ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ we denote by $c_1(L)$, the first Chern class of $L$. We write $dd^c=\frac{i}{\pi}{\partial\overline{\partial}}$ for short. A singular Hermitian metric $h$ on $L$ is by definition a metric $h=h_{\infty}e^{-2\varphi}$, where $h_{\infty}$ is a smooth Hermitian metric on $L$ and the weight $\varphi$ is a locally integrable function. The curvature current of $h$ is defined as $\Theta_h(L):=\Theta_{h_{\infty}}(L)+dd^c\varphi$; it also lies in $c_1(L)$. Conversely, given a smooth Hermitian metric $h_{\infty}$ on $L$, any closed real $(1,1)$-current $T$ in $c_1(L)$ can be written (by definition) as $T=\Theta_{h_{\infty}}(L)+dd^c\varphi$. But $\varphi$ is just a degree $0$ current $a$ $priori$. However, $\varphi$ is automatically $L^1_{loc}$ in case $T$ is almost positive (cf. section 2.2.2), thus each almost positive current $T$ in $c_1(L)$ is the curvature form of a singular Hermitian metric on $L$.\ The image of the homomorphism Pic$(X)\to{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ $L\mapsto c_1(L)$ is called the Néron-Severi group, denoted by $NS(X)$. It is a free ${\mathbf{Z}}$-module, whose rank is denoted by $\rho(X)$, and called the Picard number of $X$. The real Néron-Severi space ${NS(X)_{{\mathbf{R}}}}$ is just the real subspace of dimension $\rho(X)$ in ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ generated by $NS(X)$. 
Kodaira’s embedding theorem can be formulated as follows: $X$ is a projective manifold iff the intersection of the Kähler cone ${\mathcal{K}}$ with ${NS(X)_{{\mathbf{R}}}}$ is non-empty. Similarly, $X$ is a Moishezon manifold (i.e. bimeromorphic to a projective manifold) iff the intersection of the big cone ${\mathcal{B}}$ with ${NS(X)_{{\mathbf{R}}}}$ is non-empty (cf. \[DP01\]). Currents with analytic singularities ------------------------------------ ### Definition A closed almost positive $(1,1)$-current $T$ on a compact complex $n$-fold $X$ is said to have analytic singularities (along a subscheme $V({\mathcal{I}})$ defined by a coherent ideal sheaf ${\mathcal{I}}$) if there exists some $c>0$ such that $T$ is locally congruent to $\frac{c}{2}dd^c\log(|f_1|^2+...+|f_k|^2)$ modulo smooth forms, where $f_1,...,f_k$ are local generators of ${\mathcal{I}}$. $T$ is thus smooth outside the support of $V({\mathcal{I}})$, and it is an immediate consequence of the Lelong-Poincaré formula that $\sum\nu(T,D)D$ is just $c$ times the divisor part of the scheme $V({\mathcal{I}})$. If we first blow-up $X$ along $V({\mathcal{I}})$ and then resolve the singularities, we get a modification $\mu:{\widetilde}{X}\to X$, where ${\widetilde}{X}$ is a compact complex manifold, such that $\mu^{-1}{\mathcal{I}}$ is just ${\mathcal{O}}(-D)$ for some effective divisor $D$ upstairs. The pull-back $\mu^{\star}T$ clearly has analytic singularities along $V(\mu^{-1}{\mathcal{I}})=D$, thus its Siu decomposition writes $$\mu^{\star}T=\theta+cD$$ where $\theta$ is a smooth $(1,1)$-form. If $T\geq\gamma$ for some smooth form $\gamma$, then $\mu^{\star}T\geq\mu^{\star}\gamma$, and thus $\theta\geq\mu^{\star}\gamma$. This operation we call a resolution of the singularities of $T$. ### Regularization(s) of currents We will need two basic types of regularizations (inside a fixed cohomology class) for closed $(1,1)$-currents, both due to J.-P.Demailly. Let $T$ be a closed almost positive $(1,1)$-current on a compact complex manifold $X$, and fix a Hermitian form $\omega$. Suppose that $T\geq\gamma$ for some smooth real $(1,1)$-form $\gamma$ on $X$. Then:\ (i) There exists a sequence of smooth forms $\theta_k$ in $\{T\}$ which converges weakly to $T$, and such that $\theta_k\geq\gamma-C\lambda_k\omega$ where $C>0$ is a constant depending on the curvature of $(T_X,\omega)$ only, and $\lambda_k$ is a decreasing sequence of continuous functions such that $\lambda_k(x)\to\nu(T,x)$ for every $x\in X$.\ (ii) There exists a sequence $T_k$ of currents with analytic singularities in $\{T\}$ which converges weakly to $T$, such that $T_k\geq\gamma-{\varepsilon}_k\omega$ for some sequence ${\varepsilon}_k>0$ decreasing to $0$, and such that $\nu(T_k,x)$ increases to $\nu(T,x)$ uniformly with respect to $x\in X$. Point (ii) enables us in particular to approximate a Kähler current $T$ inside its cohomology class by Kähler currents $T_k$ with analytic singularities, with a very good control of the singularities. A big class therefore contains plenty of Kähler currents with analytic singularities. Intersection of currents ------------------------ Just as cycles, currents can be intersected provided their singular sets are in an acceptable mutual position. Specifically, let $T$ be a closed positive $(1,1)$-current on a complex manifold $X$. Locally, we have $T=dd^c\varphi$ with $\varphi$ a pluri-subharmonic function, which is well defined modulo a pluri-harmonic (hence smooth) function. 
We therefore get a globally well-defined unbounded locus $L(T)$, which is the complement of the open set of points near which $\varphi$ is locally bounded. Assume now that $T_1$, $T_2$ are two closed positive $(1,1)$-currents such that $L(T_j)$ is contained in an analytic set $A_j$ (which may be $X$); locally, we write $T_j=dd^c\varphi_j$ with $\varphi_j$ a pluri-subharmonic function. If $A_1\cap A_2$ has codimension at least $2$, then it is shown in \[Dem92\] that $\varphi_1dd^c\varphi_2$ has locally finite mass, and that $dd^c\varphi_1\wedge dd^c\varphi_2:=dd^c(\varphi_1 dd^c\varphi_2)$ yields a globally defined closed positive $(2,2)$-current, denoted by $T_1\wedge T_2$. It is also true that $T_1\wedge T_2$ lies in the product cohomology class $\{T_1\}\cdot\{T_2\}\in H^{2,2}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})$.\ We will only need the following two special cases: if $T_1$ is a closed positive $(1,1)$-current with analytic singularities along a subscheme of codimension at least $2$, then $T_1\wedge T_2$ exists for every closed positive $(1,1)$-current $T_2$.\ If $D_1$ and $D_2$ are two distinct prime divisors, then $[D_1]\wedge [D_2]$ is a well defined closed positive $(2,2)$-current. Since its support is clearly contained in the set-theoretic intersection $D_1\cap D_2$ (whose codimension is at least $2$), we have $[D_1]\wedge [D_2]=\sum a_j[Y_j]$, where the $Y_j$’s are the components of $D_1\cap D_2$. In fact, it can be shown that $\sum a_jY_j$ is just the $2$-cycle associated to the scheme-theoretic intersection $D_1\cap D_2$, thus $[D_1]\wedge[D_2]$ is just the integration current associated to the cycle $D_1\cdot D_2$. The modified nef cone --------------------- For our purposes, we need to introduce a new cone in ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$, which is somehow the cone of classes that are nef in codimension 1. Let $X$ be a compact complex $n$-fold, and $\omega$ be some reference Hermitian form. Let $\alpha$ be a class in ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$. \(i) $\alpha$ is said to be a modified Kähler class iff it contains a Kähler current $T$ with $\nu(T,D)=0$ for all prime divisors $D$ in $X$. \(ii) $\alpha$ is said to be a modified nef class iff, for every ${\varepsilon}>0$, there exists a closed $(1,1)$-current $T_{{\varepsilon}}$ in $\alpha$ with $T_{{\varepsilon}}\geq-{\varepsilon}\omega$ and $\nu(T_{{\varepsilon}},D)=0$ for every prime $D$. This is again independent of the choice of $\omega$ by commensurability of the Hermitian forms. The set of modified Kähler classes is an open convex cone called the modified Kähler cone and denoted by ${\mathcal{MK}}$. Similarly, we get a closed convex cone ${\mathcal{MN}}$, the modified nef cone. Using the Siu decomposition, we immediately see that ${\mathcal{MK}}$ is non-empty iff the big cone ${\mathcal{B}}$ is non-empty, in which case ${\mathcal{MK}}$ is just the interior of ${\mathcal{MN}}$.\ [**Remark 1**]{}: upon regularizing the currents using (ii) of theorem 2.1, we can always assume that the currents involved in the definition have analytic singularities along a subcheme of codimension at least 2.\ [**Remark 2**]{}: the modified nef cone of a compact complex surface is just its nef cone (cf. section 4.2.1).\ [**Remark 3**]{}: just as for nef classes, one cannot simply take ${\varepsilon}=0$ in the definition of a modified nef class. 
We recall the example given in \[DPS94\]: there exists a ruled surface $X$ over an elliptic curve such that $X$ contains an irreducible curve $C$ with the following property: the class $\{C\}\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is nef, but contains only one positive current, which is of course the integration current $[C]$. The following proposition gives a more “algebraic” characterization of ${\mathcal{MK}}$, which also explains the (seemingly dumb) terminology. A class $\alpha$ lies in ${\mathcal{MK}}$ iff there exists a modification\ $\mu:{\widetilde}{X}\to X$ and a Kähler class ${\widetilde}{\alpha}$ on ${\widetilde}{X}$ such that $\alpha=\mu_{\star}{\widetilde}{\alpha}$. $Proof$: the argument is adapted from \[DP01\], theorem 3.4. If ${\widetilde}{\omega}$ is a Kähler form on ${\widetilde}{X}$ and $\omega$ is our reference Hermitian form on $X$, then $\mu^{\star}\omega\leq C{\widetilde}{\omega}$ for some $C>0$, since ${\widetilde}{X}$ is compact. Since $\mu$ is a modification, we have $\mu_{\star}\mu^{\star}\omega=\omega$, so we get $T:=\mu_{\star}{\widetilde}{\omega}\geq C^{-1}\omega$, and $T$ is thus a Kähler current. Since the singular values of $\mu$ are in codimension at least 2, we immediately see that $\nu(T,D)=0$ for every prime divisor $D$ in $X$, and $\{T\}=\mu_{\star}\{\omega\}$ lies in ${\mathcal{MK}}$ as desired. Conversely, if $\alpha\in{\mathcal{MK}}$ is represented by a Kähler current $T$ with $\nu(T,D)=0$ for all $D$, there exists by (ii) of theorem 2.1 a Kähler current $T_k$ in $\alpha$ with analytic singularities along a subscheme $V_k$ with $\nu(T_k,D)\leq\nu(T,D)$, so that $V_k$ has no divisor component. We select a resolution of the singularities of $T_k$ $\mu:{\widetilde}{X}\to X$, and write $\mu^{\star}T_k=\theta+F$, where $\theta$ is a smooth form and $F$ is an effective ${\mathbf{R}}$-divisor. Since $T_k\geq{\varepsilon}\omega$ for ${\varepsilon}>0$ small enough, we get that $\theta\geq\mu^{\star}{\varepsilon}\omega$. Denoting by $E_1,...,E_r$ the $\mu$-exceptional prime divisors on ${\widetilde}{X}$, it is shown in \[DP01\], lemma 3.5, that one can find ${\delta}_1,...,{\delta}_r>0$ small enough and a closed smooth $(1,1)$-form $\tau$ in $\{{\delta}_1E_1+...+{\delta}_rE_r\}$ such that $\mu^{\star}{\varepsilon}\omega-\tau$ is positive definite everywhere. It follows that $\theta-\tau$ is a Kähler form upstairs. Now, we have $$\alpha=\mu_{\star}\{T_k\}=\mu_{\star}\{\theta-({\delta}_1E_1+...+{\delta}_rE_r)\}=\mu_{\star}\{\theta-\tau\},$$ since $E_j$ is $\mu$-exceptional and so is $F$ because $\mu_{\star}F$ is an effective divisor contained in the scheme $V_k$; this concludes the proof of proposition 2.3. That a modified nef classe is somehow nef in codimension 1 is reflected in the following If $\alpha$ is a modified Kähler (resp. nef) class, then $\alpha_{|D}$ is big (resp. pseudo-effective) for every prime divisor $D\subset X$. $Proof$: if $\alpha$ is a modified nef class and ${\varepsilon}>0$ is given, choose a current $T_{{\varepsilon}}\geq-{\varepsilon}\omega$ in $\alpha$ with analytic singularities in codimension at least 2. Locally, we have $\omega\leq Cdd^c|z|^2$ for some $C>0$, thus $T_{{\varepsilon}}+{\varepsilon}Cdd^c|z|^2$ writes as $dd^c\varphi_{{\varepsilon}}$, where $\varphi_{{\varepsilon}}$ is pluri-subharmonic and is not identically $-\infty$ on $D$. 
Thus the restriction $(\varphi_{{\varepsilon}})_{|D}$ is pluri-subharmonic, and $(T_{{\varepsilon}}+{\varepsilon}Cdd^c|z|^2)_{|D}$ is a well defined closed positive current. It follows that $(T_{{\varepsilon}})_{|D}$ is a well defined almost positive current on $D$, with $(T_{{\varepsilon}})_{|D}\geq-{\varepsilon}C\omega_{|D}$. This certainly implies that $\alpha_{|D}$ is pseudo-effective. The case $\alpha\in{\mathcal{MK}}$ is treated similarly. Currents with minimal singularities ----------------------------------- Let $\varphi_1$, $\varphi_2$ be two almost pluri-subharmonic functions on a compact complex manifold $X$. Then, following \[DPS00\], we say that $\varphi_1$ is less singular than $\varphi_2$ (and write $\varphi_1\preceq\varphi_2$) if we have $\varphi_2\leq\varphi_1+C$ for some constant $C$. We denote by $\varphi_1\approx\varphi_2$ the equivalence relation generated by the pre-order relation $\preceq$. Note that $\varphi_1\approx\varphi_2$ exactly means that $\varphi_1=\varphi_2$ mod $L^{\infty}$.\ When $T_1$ and $T_2$ are two closed almost positive $(1,1)$-currents on $X$, we can also compare their singularities in the following fashion: write $T_i=\theta_i+dd^c\varphi_i$ for $\theta_i\in\{T_j\}$ a smooth form and $\varphi_i$ an almost pluri-subharmonic function. Since any $L^1_{loc}$ function $f$ with $dd^cf$ smooth is itself smooth, it is easy to check that $\varphi_i$ does not depend on the choices made up to equivalence of singularities, and we compare the singularities of the $T_i$’s by comparing those of the $\varphi_i$’s.\ Let now $\alpha$ be a class in ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ and $\gamma$ be a smooth real $(1,1)$-form, and denote by $\alpha[\gamma]$ the set of closed almost positive $(1,1)$-currents $T$ lying in $\alpha$ with $T\geq\gamma$. It is a (weakly) compact and convex subset of the space of $(1,1)$-currents. We endow it with the pre-order relation $\preceq$ defined above. For any family $T_j$, $j\in J$ of elements of $\alpha[\gamma]$, we claim that there exists an infimum $T=\inf_{j\in J}T_j$ in $(\alpha[\gamma],\preceq)$, which is therefore unique up to equivalence of singularities. The proof is pretty straightforward: fix a smooth form $\theta$ in $\alpha$, and write $T_j=\theta+dd^c\varphi_j$ for some quasi pluri-subharmonic functions $\varphi_j$. Since $X$ is compact, $\varphi_j$ is bounded from above; therefore, upon changing $\varphi_j$ into $\varphi_j-C_j$, we may assume that $\varphi_j\leq 0$ for all $ j\in J$. We then take $\varphi$ to be the upper semi-continuous upper enveloppe of the $\varphi_j$’s, $j\in J$, and set $T:=\theta+dd^c\varphi$. It is immediate to check that $T\preceq T_j$ for all $ j$, and that for every $S\in\alpha[\gamma]$, $S\preceq T_j$ for all $ j$ implies that $S\preceq T$. We should maybe explain why $T\geq\gamma$: locally, we can choose coordinates $z=(z_1,...,z_n)$ and a form $q(z)=\sum\lambda_j|z_j|^2$ such that $dd^cq\leq\gamma$ and $dd^cq$ is arbitrarily close to $\gamma$. Writing $\theta=dd^c\psi$ for some smooth local potential $\psi$, the condition $\theta+dd^c\varphi_j\geq\gamma$ implies that $\psi+\varphi_j-q$ is pluri-subharmonic. 
The upper enveloppe $\psi+\varphi-q$ is thus also pluri-subharmonic, which means that $T=\theta+dd^c\varphi\geq dd^cq$; letting $dd^cq$ tend to $\gamma$, we get $T\geq\gamma$, as desired.\ Since any two closed almost positive currents with equivalent singularities have the same Lelong numbers, the Lelong numbers of $\inf T_j$ do not depend on the specific choice of the current. In fact, it is immediate to check from the definitions that $$\nu(\inf_{j\in J} T_j,x)=\inf_{j\in J}\nu(T_j,x).$$\ As a particular case of the above construction, there exists a closed almost positive $(1,1)$-current $T_{\min,\gamma}\in\alpha[\gamma]$ which is a least element in $(\alpha[\gamma],\preceq)$. $T_{\min,\gamma}$ is well defined modulo $dd^cL^{\infty}$, and we call it a current with minimal singularities in $\alpha$, for the given lower bound $\gamma$. When $\gamma=0$ and $\alpha$ is pseudo-effective, we just write $T_{\min}=T_{\min,0}$, and call it a positive current with minimal singularities in $\alpha$. It must be noticed that, even for a big class $\alpha$, $T_{\min}$ will be a Kähler current only in the trivial case: A pseudo-effective class $\alpha$ contains a positive current with minimal singularities $T_{\min}$ which is a Kähler current iff $\alpha$ is a Kähler class. $Proof$: we can write $T_{\min}=\theta+dd^c\varphi$ with $\theta$ a smooth form. If $T_{\min}$ is Kähler, then so is ${\varepsilon}\theta+(1-{\varepsilon})T_{\min}=\theta+dd^c(1-{\varepsilon})\varphi$ for ${\varepsilon}>0$ small enough. We therefore get $\varphi\preceq(1-{\varepsilon})\varphi$ by minimality, that is: $(1-{\varepsilon})\varphi\leq\varphi+C$ for some constant $C$. But this shows that $\varphi$ is bounded, and thus $T_{\min}$ is a Kähler current with identically zero Lelong numbers. Using (i) of theorem 2.1, we can therefore regularize it into a Kähler form inside its cohomology class, qed.\ Finally, we remark that a positive current with minimal singularities in a pseudo-effective class is generally non-unique (as a current), as the example of a Kähler class already shows. The divisorial Zariski decomposition ==================================== In this section $X$ denotes a compact complex $n$-fold, and $\omega$ is a reference Hermitian form, unless otherwise specified. Minimal multiplicities and non-nef locus ---------------------------------------- When $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is a pseudo-effective class, we want to introduce minimal multiplicities $\nu(\alpha,x)$, which measure the obstruction to the numerical effectivity of $\alpha$. For each ${\varepsilon}>0$, let $T_{\min,{\varepsilon}}=T_{\min,{\varepsilon}}(\alpha)$ be a current with minimal singularities in $\alpha[-{\varepsilon}\omega]$ (cf. section 2.8 for the notation). We then introduce the following The minimal multiplicity at $x\in X$ of the pseudo-effective class $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is defined as $$\nu(\alpha,x):=\sup_{{\varepsilon}>0}\nu(T_{\min,{\varepsilon}},x).$$ The commensurability of any two Hermitian forms shows that the definition does not depend on $\omega$. When $D$ is a prime divisor, we define the generic minimal multiplicity of $\alpha$ along $D$ as $$\nu(\alpha,D):=\inf\{\nu(\alpha,x),x\in D\}.$$ We then have $\nu(\alpha,D)=\sup_{{\varepsilon}>0}\nu(T_{\min,{\varepsilon}},D)$, and $\nu(\alpha,D)=\nu(\alpha,x)$ for the very general $x\in D$. Let $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ be a pseudo-effective class. 
\(i) $\alpha$ is nef iff $\nu(\alpha,x)=0$ for every $x\in X$. \(ii) $\alpha$ is modified nef iff $\nu(\alpha,D)=0$ for every prime $D$. $Proof$: if $\alpha$ is nef (resp. modified nef), $\alpha[-{\varepsilon}\omega]$ contains by definition a smooth form (resp. a current $T_{{\varepsilon}}$ with $\nu(T_{{\varepsilon}},D)=0$ for every prime $D$). We thus have $\nu(T_{\min,{\varepsilon}},x)=0$ (resp. $\nu(T_{\min,{\varepsilon}},D)=0$) for every ${\varepsilon}>0$, and thus $\nu(\alpha,x)=0$ (resp. $\nu(\alpha,D)=0$). Conversely, if $\nu(\alpha,x)=0$ for every $x\in X$, applying (i) of theorem 2.1 to $T_{\min,{\varepsilon}}$, we see that $\nu(T_{\min,{\varepsilon}},x)=0$ for every $x\in X$ implies that $\alpha[-{\varepsilon}'\omega]$ contains a smooth form for every ${\varepsilon}'>{\varepsilon}$, and $\alpha$ is thus nef. Finally, if $\nu(\alpha,D)=0$ for every prime $D$, we have $\nu(T_{\min,{\varepsilon}},D)=0$ for every prime $D$. Since $T_{\min,{\varepsilon}}$ lies in $\alpha[-{\varepsilon}\omega]$, $\alpha$ is modified nef by the very definition.\ In view of proposition 3.2, we propose the The non-nef locus of a pseudo-effective class $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is defined by $$E_{nn}(\alpha):=\{x\in X,\nu(\alpha,x)>0\}.$$ Recall that the set $E_+(T):=\{x\in X,\nu(T,x)>0\}$ is a countable union of closed analytic subsets for every closed almost positive $(1,1)$-current $T$. Since $E_{nn}(\alpha)=\cup_{{\varepsilon}>0}E_+(T_{\min,{\varepsilon}})$, the non-nef locus is also a countable union of closed analytic subsets. We do not claim however that each super-level set $\{x\in X,\nu(\alpha,x)\geq c\}$ $(c>0)$ is an analytic subset (this is most certainly not true in general). Using results of M.Paun, proposition 3.2 generalizes as follows: A pseudo-effective class $\alpha$ is nef iff $\alpha_{|Y}$ is nef for every irreducible analytic subset $Y\subset E_{nn}(\alpha)$. $Proof$: since the restriction of a nef class to any analytic subset is nef, one direction is clear. To prove the converse, we cannot directly apply the results of M.Paun \[Pau98\], since we allow slightly negative currents, so we sketch the proof to show that it immediately carries on to our situation. We quote without proof the following results: Let $\alpha$ be any class in ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$, and $Y_1$, $Y_2$ two analytic subsets of $X$. If $\alpha_{|Y_i}$ is nef $(i=1,2)$, then $\alpha_{|Y_1\cup Y_2}$ is nef. Let $\theta$ be a closed smooth $(1,1)$-form and $\gamma$ be any smooth $(1,1)$-form. Let $Y\subset X$ be an analytic subset, and assume that $$\theta_{|Y}+dd^c\varphi\geq\gamma_{|Y}$$ for some smooth function $\varphi$ on $Y$. Then, for every ${\varepsilon}>0$, there exists a neighbourhood $V$ of $Y$ and a smooth function $\varphi_{{\varepsilon}}$ on $V$ such that $$\theta+dd^c\varphi_{{\varepsilon}}\geq\gamma-{\varepsilon}\omega$$ on $V$. We now select once for all a smooth form $\theta$ in $\alpha$. By applying (ii) of theorem 2.1 to $T_{\min,{\varepsilon}}\geq-{\varepsilon}\omega$, we can select a closed $(1,1)$-current with analytic singularities $T^{(1)}_{{\varepsilon}}=\theta+dd^c\varphi^{(1)}_{{\varepsilon}}$ such that $T^{(1)}_{{\varepsilon}}\geq-2{\varepsilon}\omega$ and $\nu(T^{(1)}_{{\varepsilon}},x)\leq\nu(T_{\min,{\varepsilon}},x)$. Let $Y_{{\varepsilon}}$ be the analytic subset along which $T^{(1)}_{{\varepsilon}}$ is singular; we have $Y_{{\varepsilon}}\subset E_+(T_{\min,{\varepsilon}})\subset E_{nn}(\alpha)$. 
Since $\alpha$ is nef by assumption on every component of $Y_{{\varepsilon}}$, using the above two lemmas, we can find a neighbourhood $V_{{\varepsilon}}$ of $Y_{{\varepsilon}}$ and a smooth function $\varphi^{(2)}_{{\varepsilon}}$ on $V_{{\varepsilon}}$ such that $\theta+dd^c\varphi^{(2)}_{{\varepsilon}}\geq-{\varepsilon}\omega$ on $V_{{\varepsilon}}$. We choose a smaller neighbourhood $W_{{\varepsilon}}$ of $Y_{{\varepsilon}}$ with $\overline{W_{{\varepsilon}}}\subset V_{{\varepsilon}}$, and we then set $$\varphi^{(3)}_{{\varepsilon}}:=\cases{\varphi^{(1)}_{{\varepsilon}}& on $X-W_{{\varepsilon}}$,\cr \max_{\eta}(\varphi_{{\varepsilon}}^{(2)}-C_{{\varepsilon}},\varphi^{(1)}_{{\varepsilon}})& on $\overline{W_{{\varepsilon}}}$\cr}$$ where $\max_{\eta}(x,y):=\max\star\rho_{\eta}$ denotes a regularized maximum function obtained by convolution with a regularizing kernel $\rho_{\eta}$ ($\eta$ is chosen so small that $\max_{\eta}(x,y)=x$ when $y<x-1/2$), and $C_{{\varepsilon}}$ is a positive constant, large enough to achieve $\varphi^{(1)}_{{\varepsilon}}\geq\varphi_{{\varepsilon}}^{(2)}-C_{{\varepsilon}}+1$ near $\partial W_{{\varepsilon}}$ (we use that $\varphi^{(1)}_{{\varepsilon}}$ is smooth away from $Y_{{\varepsilon}}$, hence locally bounded near $\partial W_{{\varepsilon}}$). The two parts to be glued then coincide near $\partial W_{{\varepsilon}}$, thus $\varphi^{(3)}_{{\varepsilon}}$ is smooth. Since both $\theta+dd^c\varphi^{(1)}_{{\varepsilon}}$ and $\theta+dd^c\varphi_{{\varepsilon}}^{(2)}-C_{{\varepsilon}}$ are greater than $-2{\varepsilon}\omega$, the gluing property of pluri-subharmonic functions yields that $\theta+dd^c\varphi^{(3)}_{{\varepsilon}}\geq-2{\varepsilon}\omega$. Since this is true for every ${\varepsilon}>0$, this shows that $\alpha$ is indeed nef. We now investigate the continuity of $\alpha\mapsto\nu(\alpha,x)$ and $\nu(\alpha,D)$: For every $x\in X$ and every prime $D$, the maps ${\mathcal{E}}\to{\mathbf{R}}$ $\alpha\mapsto\nu(\alpha,x)$ and $\nu(\alpha,D)$ are convex, homogeneous. They are continuous on the interior ${\mathcal{E}}^0$, and lower semi-continuous on the whole of ${\mathcal{E}}$. $Proof$: let $\alpha$, $\beta$ be two pseudo-effective classes. If $T_{\min,{\varepsilon}}(\alpha)$ and $T_{\min,{\varepsilon}}(\beta)$ are currents with minimal singularities in $\alpha[-{\varepsilon}\omega]$ and $\beta[-{\varepsilon}\omega]$ respectively, then\ $T_{\min,{\varepsilon}}(\alpha)+T_{\min,{\varepsilon}}(\beta)$ belongs to $(\alpha+\beta)[-2{\varepsilon}\omega]$, thus $$\nu(T_{\min,2{\varepsilon}}(\alpha+\beta),x)\leq\nu(T_{\min,{\varepsilon}}(\alpha),x)+\nu(T_{\min,{\varepsilon}}(\beta),x)\leq\nu(\alpha,x)+\nu(\beta,x).$$ We infer from this $\nu(\alpha+\beta,x)\leq\nu(\alpha,x)+\nu(\beta,x)$, and a similar sub-additivity property for $\nu(\cdot,D)$ is obtained along the same lines. Since the homogeneity of our two maps is obvious, the convexity also follows.\ The quotient map $\theta\mapsto\{\theta\}$ from the Fréchet space of closed smooth real $(1,1)$-forms to ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is surjective, thus open. If $\alpha_k\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is a sequence of pseudo-effective classes converging to $\alpha$ and ${\varepsilon}>0$ is given, we can thus find a smooth form $\theta_k\in\alpha-\alpha_k$ for each $k$ big enough such that\ $-{\varepsilon}\omega\leq\theta_k\leq{\varepsilon}\omega$. 
The current $T_{\min,{\varepsilon}}(\alpha_k)+\theta_k$ then lies in $\alpha[-2{\varepsilon}\omega]$, and thus $\nu(T_{\min,2{\varepsilon}}(\alpha),x)\leq\nu(T_{\min,{\varepsilon}}(\alpha_k),x)\leq\nu(\alpha_k,x)$, for each $k$ big enough. We infer from this that $\nu(T_{\min,2{\varepsilon}}(\alpha),x)\leq\liminf_{k\to\infty}\nu(\alpha_k,x)$ for each ${\varepsilon}>0$, hence $\nu(\alpha,x)\leq\liminf_{k\to\infty}\nu(\alpha_k,x)$, by taking the supremum of the left hand-side for ${\varepsilon}>0$. This means that $\alpha\mapsto\nu(\alpha,x)$ is lower semi-continuous, and similarly for $\nu(\alpha,D)$, just replacing $x$ by $D$ in the above proof.\ Finally, the restrictions of our maps to ${\mathcal{E}}^0$ are continuous as any convex map on an open convex subset of a finite dimensional vector space is. Let $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ be a pseudo-effective class, and $T_{\min}$ be a positive current with minimal singularities in $\alpha$. \(i) We always have $\nu(\alpha,x)\leq\nu(T_{\min},x)$ and $\nu(\alpha,D)\leq\nu(T_{\min},D)$. \(ii) When $\alpha$ is furthermore big, we have $\nu(\alpha,x)=\nu(T_{\min},x)$ and $\nu(\alpha,D)=\nu(T_{\min},D)$. $Proof$: since $T_{\min}$ belongs to $\alpha[-{\varepsilon}\omega]$ for every ${\varepsilon}>0$, $\nu(\alpha,x)\leq\nu(T_{\min},x)$ follows for every $x\in X$, for any pseudo-effective class $\alpha$. If $\alpha$ is furthermore big, we can choose a Kähler current $T$ in $\alpha$ with $T\geq\omega$ for some Hermitian form $\omega$. If $T_{\min,{\varepsilon}}$ is a current with minimal singularities in $\alpha[-{\varepsilon}\omega]$, then $(1-{\varepsilon})T_{\min,{\varepsilon}}+{\varepsilon}T$ is a positive current in $\alpha$, and thus $\nu((1-{\varepsilon})T_{\min,{\varepsilon}}+{\varepsilon}T,x)\geq\nu(T_{\min} ,x)$ by minimality of $T_{\min}$, from which we infer $$(1-{\varepsilon})\nu(\alpha,x)+{\varepsilon}\nu(T,x)\geq\nu(T_{\min},x).$$ We thus get the converse inequality $\nu(\alpha,x)\geq\nu(T_{\min},x)$ by letting ${\varepsilon}\to 0$. The case of $\nu(\alpha,D)$ is similar. Definition of the divisorial Zariski decomposition -------------------------------------------------- Let $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ be again a pseudo-effective class, and choose a positive current with minimal singularities $T_{\min}$ in $\alpha$. Since $\nu(\alpha,D)\leq\nu(T_{\min},D)$ for every prime $D$ by proposition 3.8, the series of currents $\sum\nu(\alpha,D)[D]$ is convergent, since it is dominated by $\sum\nu(T_{\min},D)[D]$. The negative part of a pseudo-effective class $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is defined as $N(\alpha):=\sum\nu(\alpha,D)[D]$. The Zariski projection of $\alpha$ is $Z(\alpha):=\alpha-\{N(\alpha)\}$. We call the decomposition $\alpha=Z(\alpha)+\{N(\alpha)\}$ the divisorial Zariski decomposition of $\alpha$. It is certainly highly desirable that the negative part $N(\alpha)$ of a pseudo-effective class be a divisor, i.e. that $\nu(\alpha,D)=0$ for almost every prime $D$. We will see in section 3.3 that it is indeed the case. For the time being, we concentrate on the Zariski projection, which we see as a map $Z:{\mathcal{E}}\to{\mathcal{E}}$. Let $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ be a pseudo-effective class. Then: \(i) Its Zariski projection $Z(\alpha)$ is a modified nef class. \(ii) We have $Z(\alpha)=\alpha$ iff $\alpha$ is modified nef. \(iii) $Z(\alpha)$ is big iff $\alpha$ is. 
\(iv) If $\alpha$ is not modified nef, then $Z(\alpha)$ belongs to the boundary $\partial{\mathcal{MN}}$ of the modified nef cone. $Proof$:(i) Let $T_{\min,{\varepsilon}}$ be as before a current with minimal singularities in $\alpha[-{\varepsilon}\omega]$, and consider its Siu decomposition $T_{\min,{\varepsilon}}=R_{{\varepsilon}}+\sum\nu(T_{\min,{\varepsilon}},D)[D]$. First, we claim that $N_{{\varepsilon}}:=\sum\nu(T_{\min,{\varepsilon}},D)[D]$ converges weakly to $N(\alpha)$ as ${\varepsilon}$ goes to $0$. For any smooth form $\theta$ of bidimension $(1,1)$, $\theta+C\omega^{n-1}$ is a positive form for $C>0$ big enough. Every such $\theta$ is thus the difference of two positive forms, and it is enough to show that $\int N_{{\varepsilon}}\wedge\theta\to\int N(\alpha)\wedge\theta$ for every smooth positive form $\theta$. But $\int N_{{\varepsilon}}\wedge\theta=\sum\nu(T_{\min,{\varepsilon}},D)\int[D]\wedge\theta$ is a convergent series whose general term $\nu(T_{\min,{\varepsilon}},D)\int[D]\wedge\theta$ converges to $\nu(\alpha,D)\int[D]\wedge\theta$ as ${\varepsilon}\to 0$ and is dominated by $\nu(T_{\min},D)\int[D]\wedge\theta$; since $\sum\nu(T_{\min},D)\int[D]\wedge\theta\leq\int T_{\min}\wedge\theta$ converges, our claim follows by dominated convergence.\ In particular, the class $\{N_{{\varepsilon}}-N(\alpha)\}$ converges to zero. Since the map $\theta\mapsto\{\theta\}$ is open on the space of smooth closed $(1,1)$-form, we can find a sequence $\theta_k\geq-{\delta}_k\omega$ of smooth forms with $\theta_k\in\{N_{{\varepsilon}_k}-N(\alpha)\}$ for some sequences ${\varepsilon}_k<<{\delta}_k$ going to zero. It remains to notice that $T_k:=R_{{\varepsilon}_k}+\theta_k$ is a current in $Z(\alpha)$ with $T_k\geq-({\varepsilon}_k+{\delta}_k)\omega$ and $\nu(T_k,D)=0$ for every prime $D$. Since ${\varepsilon}_k+{\delta}_k$ converges to zero, $Z(\alpha)$ is modified nef by definition. \(ii) Since $N(\alpha)=\sum\nu(\alpha,D)[D]$ is a closed positive $(1,1)$-current, it is zero iff its class $\{N(\alpha)\}\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is. The assertion is thus just a reformulation of (ii) in proposition 3.2. \(iii) If $Z(\alpha)$ is big, then of course $\alpha=Z(\alpha)+\{N(\alpha)\}$ is also big, as the sum of a big class and a pseudo-effective one. If conversely $\alpha$ is big, it contains a Kähler current $T$, whose Siu decomposition we write $T=R+\sum\nu(T,D)[D]$. Note that $R$ is a Kähler current since $T$ is; since $T$ belongs to $\alpha[-{\varepsilon}\omega]$ for every ${\varepsilon}>0$, we have $\nu(T,D)\geq\nu(\alpha,D)$, and $R+\sum(\nu(T,D)-\nu(\alpha,D))[D]$ is thus a Kähler current in $Z(\alpha)$ as desired. \(iv) Assume that $Z(\alpha)$ belongs to the interior ${\mathcal{MN}}^0$ of the modified nef cone. By proposition 3.2, we have to see that $\nu(\alpha,D)=0$ for every prime $D$. Suppose therefore that $\nu(\alpha,D_0)>0$ for some prime $D_0$. The class $Z(\alpha)+{\varepsilon}\{D_0\}$ has to lie in the open cone ${\mathcal{MN}}^0$ for ${\varepsilon}$ small enough, thus we can write for $0<{\varepsilon}<\nu(\alpha,D_0)$: $$\alpha=(Z(\alpha)+{\varepsilon}\{D_0\})+(\nu(\alpha,D_0)-{\varepsilon})\{D_0\}+\{\sum_{D\neq D_0}\nu(\alpha,D)D\}.$$ We deduce that $\nu(\alpha,D_0)\leq\nu(Z(\alpha)+{\varepsilon}\{D_0\},D_0)+(\nu(\alpha,D_0)-{\varepsilon})$. Indeed, the class $\{D_0\}$ (resp. $\{\sum_{D\neq D_0}\nu(\alpha,D)D\}$) has minimal multiplicity $\leq 1$ (resp. 
0) along $D_0$, because these are the generic Lelong numbers of the positive current $[D_0]$ (resp. $\sum_{D\neq D_0}\nu(\alpha,D)[D]$) along $D_0$. Now, we also have $\nu(Z(\alpha)+{\varepsilon}\{D_0\},D_0)=0$ since $Z(\alpha)+{\varepsilon}\{D_0\}$ is modified nef by assumption, hence the contradiction $\nu(\alpha,D_0)\leq\nu(\alpha,D_0)-{\varepsilon}$. \(i) The map $\alpha\mapsto N(\alpha)$ is convex and homogeneous on ${\mathcal{E}}$. It is continuous on the interior of the pseudo-effective cone. \(ii) The Zariski projection $Z:{\mathcal{E}}\to{\mathcal{MN}}$ is concave and homogeneous. It is continuous on the interior of ${\mathcal{E}}$. $Proof$: we have already noticed that $\nu(\alpha+\beta,D)\leq\nu(\alpha,D)+\nu(\beta,D)$ for every prime $D$ and every two pseudo-effective classes $\alpha$, $\beta$. This implies that $N(\alpha+\beta)\leq N(\alpha)+N(\beta)$. Homogeneity is obvious, and the first assertion follows. To show continuity, it is enough as above to show that $\alpha\mapsto\int N(\alpha)\wedge\theta$ is continuous on ${\mathcal{E}}^0$ for every positive form $\theta$. But the latter map is convex, and thus continuous on ${\mathcal{E}}^0$ as any convex map on an open convex subset of a finite dimensional vector space is. (ii) is now an obvious consequence of (i) and the relation $Z(\alpha)=\alpha-\{N(\alpha)\}$. Negative part and exceptional divisors -------------------------------------- If $A=D_1,...,D_r$ is a finite family of prime divisors, we denote by $V_+(A)\subset{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ the closed convex cone generated by the classes $\{D_1\},...,\{D_r\}$. Every element of $V_+(A)$ can be written $\alpha=\{E\}$ for some effective ${\mathbf{R}}$-divisor $E$ supported by the $D_j$’s. Since $[E]$ is a positive current in $\alpha$, we have $N(\alpha)\leq E$, and thus $Z(\alpha)$ can be represented by the effective ${\mathbf{R}}$-divisor $E-N(\alpha)$, which is also supported by the $D_j$’s. We conclude: $V_+(A)$ is stable under the Zariski projection $Z$. In particular, we have $Z(V_+(A))=0$ iff $V_+(A)$ meets ${\mathcal{MN}}$ at $0$ only. \(i) A family $D_1,...,D_q$ of prime divisors is said to be an exceptional family iff the convex cone generated by their cohomology classes meets the modified nef cone ${\mathcal{MN}}$ at $0$ only. \(ii) An effective ${\mathbf{R}}$-divisor $E$ is said to be exceptional iff its prime components constitute an exceptional family. We have the following: \(i) An effective ${\mathbf{R}}$-divisor $E$ is exceptional iff $Z(\{E\})=0$. \(ii) If $E$ is an exceptional effective ${\mathbf{R}}$-divisor, we have $E=N(\{E\})$. \(iii) If $D_1,...,D_q$ is an exceptional family of primes, then their classes $\{D_1\},...,\{D_q\}$ are linearly independent in ${NS(X)_{{\mathbf{R}}}}\subset{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$. In particular, the length of the exceptional families of primes is uniformly bounded by the Picard number $\rho(X)$. $Proof$: (i) let $A=D_1,...,D_r$ denote the family of primes supporting $E$, and choose a Gauduchon metric $\omega$ (cf. section 2.2.4). Since $\omega^{n-1}$ is ${\partial\overline{\partial}}$-closed, $\int Z(\alpha)\wedge\omega^{n-1}$ is well defined, and defines a map ${\mathcal{E}}\to{\mathbf{R}}$ $\alpha\mapsto\int Z(\alpha)\wedge\omega^{n-1}$, which is concave and homogeneous (by proposition 3.11), and everywhere non-negative. 
The restriction of this map to $V_+(A)$ shares the same properties, and the class $\alpha:=\{E\}$ is a point in the relative interior of the convex cone $V_+(A)$ at which $\int Z(\alpha)\wedge\omega^{n-1}=0$. By concavity, we thus get $\int Z(\alpha)\wedge\omega^{n-1}=0$ for every $\alpha\in V_+(A)$, and thus $Z(\alpha)=0$ for every such $\alpha\in V_+(A)$, qed.\ (ii) When $E$ is exceptional, we have both $E\geq N(\{E\})$ (because the positive current $[E]$ lies in the class $\{E\}$) and $\{E\}=\{N(\{E\})\}$ (because $Z(\{E\})=0$). Since a closed positive current which yields zero in ${H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is itself zero, we get the result.\ (iii) Since $D_1,...,D_q$ are linearly independent in Div$(X)\otimes{\mathbf{R}}$, the assertion is equivalent to the fact that the quotient map $D\mapsto\{D\}$ is injective on the ${\mathbf{R}}$-vector space of divisors generated by the $D_j$’s. But this is easy: if $E=\sum a_j D_j$ lies in the kernel, we can write $E=E_+-E_-$ with $E_+$ and $E_-$ effective such that $\{E_+\}=\{E_-\}$. By (ii), we get $E_+=E_-$, whence $E=0$, qed. We state as a theorem the following important consequences of (iii): \(i) For every pseudo-effective class $\alpha\in{\mathcal{E}}$, the negative part $N(\alpha)$ is an exceptional effective ${\mathbf{R}}$-divisor supported by at most $\rho(X)$ primes. \(ii) $X$ carries at most countably many exceptional primes. \(iii) The exceptional fiber $Z^{-1}(0)$ is contained in ${NS(X)_{{\mathbf{R}}}}$, and is a union of at most countably many simplicial cones over exceptional families of primes. $Proof$: (i) We have $Z(\alpha)\geq Z(Z(\alpha))+Z(\{N(\alpha)\})$, and $Z(Z(\alpha))=Z(\alpha)$ by proposition 3.10, thus $Z(\{N(\alpha)\})=0$. We immediately deduce from this that any family of primes $D_1,...,D_r$ such that $\nu(\alpha,D_j)>0$ for every $j$ is an exceptional family, and the assertion follows from (iii) of proposition 3.13.\ (ii) We just have to notice that $D\mapsto\{D\}$ is injective on the set of exceptional primes, and maps into the lattice $NS(X)\subset{NS(X)_{{\mathbf{R}}}}$.\ (iii) Since $\{A\}$ is a linearly independent set for every exceptional family of primes $A$, we see that $V_+(A)=\sum_{D\in A}{\mathbf{R}}_+\{D\}$ is a simplicial cone. It remains to observe that $\alpha$ lies in the exceptional fiber $Z^{-1}(0)$ iff $\alpha=\{N(\alpha)\}$, thus $Z^{-1}(0)$ is covered by the simplicial cones $V_+(A)$.\ We will see in section 4.3 that a family $D_1,...,D_q$ of primes on a surface is exceptional iff the Gram matrix $(D_i\cdot D_j)$ is negative definite, i.e. iff $D_1,...,D_q$ can all be blown down to points by a modification towards an analytic surface (singular in general). On a general compact complex $n$-fold $X$, an exceptional divisor is still very rigidly embedded in $X$: If $E$ is an exceptional effective ${\mathbf{R}}$-divisor, then its class $\{E\}$ contains but one positive current, which is $[E]$. In particular, when $E$ is rational, its Kodaira-Iitaka dimension $\kappa(X,E)$ is zero. $Proof$: if $T$ is a positive current in $\{E\}$, we have $\nu(T,D)\geq\nu(\{E\},D)$ for every prime $D$. Using the Siu decomposition of $T$, we thus see that $T\geq\sum\nu(\{E\},D)[D]=N(\{E\})=[E]$, since $E$ is exceptional. But we also have $\{T\}=\{E\}$, hence $T=[E]$, as was to be shown. To get the last point, let $D$ be an element of the linear system $|kE|$ for some integer $k>0$ such that $kE$ is Cartier. 
The positive current $\frac{1}{k}[D]$ then lies in $\{E\}$, thus we have $[D]=k[E]$ as currents, hence $D=kE$ as divisors. This shows that $h^0(kE)=1$ for each such $k>0$, qed. Discontinuities of the Zariski projection ----------------------------------------- It is remarkable that the Zariski projection $Z$ is not continuous in general up to the boundary $\partial{\mathcal{E}}$. If $X$ carries infinitely many exceptional primes, then the Zariski projection $Z:{\mathcal{E}}\to{\mathcal{MN}}$ is not continuous. $Proof$: we use the following lemma: If $D_k$ is an infinite sequence of pairwise distinct prime divisors, the rays ${\mathbf{R}}_+\{D_k\}\subset{\mathcal{E}}$ can accumulate on ${\mathcal{MN}}$ only. $Proof$: suppose that $t_k\{D_k\}$ converges to some non-zero $\alpha\in{\mathcal{E}}$ (for $t_k>0$). For each prime $D$, we then have $D_k\neq D$ and thus $\nu(t_k\{D_k\},D)=t_k\nu(\{D_k\},D)=0$ for infinitely many $k$, because the family $D_k$ is infinite. By lower semi-continuity (proposition 3.7) we deduce $\nu(\alpha,D)=0$ for every prime $D$, i.e. $\alpha$ is modified nef (by proposition 3.2). Assume now that an infinite sequence of exceptional prime divisors $D_k$ exists. Since ${\mathcal{E}}$ has compact base, upon extracting a subsequence, we can assume that $t_k\{D_k\}$ converges to some non-zero $\alpha\in{\mathcal{E}}$ (with $t_k>0$ an appropriate sequence). Since $D_k$ is exceptional, we have $Z(t_k\{D_k\})=0$ for every $k$, but $Z(\alpha)=\alpha$ since $\alpha$ is modified nef by the above lemma. Consequently, $Z^{-1}(0)$ is not closed, and $Z$ is not continuous.\ To get an example of discontinuous Zariski projection, just take $X$ to be the blow-up of ${\mathbf{P}}^2$ in at least 9 general points. Such a rational surface is known to carry infinitely many exceptional curves of the first kind (cf. \[Har77\], p.409). Since a prime divisor $C$ on a surface is exceptional iff $C^2<0$ (cf. section 4.3), the set of exceptional primes on $X$ is infinite, and we have our example. When is a decomposition the Zariski decomposition? -------------------------------------------------- Suppose that we have a decomposition $\alpha=p+\{N\}$ of a pseudo-effective class $\alpha$ into the sum of a modified nef class $p$ and the class of an effective ${\mathbf{R}}$-divisor $N$. We want a criterion that tells us when it is the Zariski decomposition of $\alpha$. We have $N(\alpha)\leq N(p)+N$, and $N(p)=0$ since $p$ is modified nef, thus $N(\alpha)=N$ happens iff $Z(\alpha)=p$, and our question is equivalent to the study of the fibers $Z^{-1}(p)$, with $p\in{\mathcal{MN}}$.\ We will need the following definition: If $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ is a big class, we define its non-Kähler locus as $E_{nK}(\alpha):=\cap_TE_+(T)$ for $T$ ranging among the Kähler currents in $\alpha$. Let us explain the terminology: Let $\alpha\in{H^{1,1}_{{\partial\overline{\partial}}}(X,{\mathbf{R}})}$ be a big class. Then: \(i) The non-nef locus $E_{nn}(\alpha)$ is contained in the non-Kähler locus $E_{nK}(\alpha)$. \(ii) There exists a Kähler current with analytic singularities $T$ in $\alpha$ such that $E_+(T)=E_{nK}(\alpha)$. In particular, the non-Kähler locus $E_{nK}(\alpha)$ is an analytic subset of $X$. \(iii) $\alpha$ is a Kähler class iff $E_{nK}(\alpha)$ is empty. More generally, $\alpha$ is a Kähler class iff $\alpha_{|Y}$ is a Kähler class for every irreducible component $Y$ of the analytic set $E_{nK}(\alpha)$. 
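Before giving the proof, let us illustrate these notions on a simple example (a standard blow-up computation, included here only for concreteness and not used in the sequel). Let $\mu:X\to Y$ be the blow-up of a compact Kähler manifold $Y$ at a point, with exceptional divisor $E$, and let $\alpha:=\mu^{\star}\omega_Y$ for some Kähler class $\omega_Y$ on $Y$. The class $\alpha$ is nef, and it is big since $v(\alpha)=\int_Y\omega_Y^n>0$; in particular $E_{nn}(\alpha)=\emptyset$. It is however not Kähler, since $\alpha\cdot\ell=0$ for every curve $\ell\subset E$. We claim that $$E_{nK}(\alpha)=E.$$ On the one hand, $\mu^{\star}\omega_Y-{\varepsilon}\{E\}$ is a Kähler class for ${\varepsilon}>0$ small enough, so that $T_{{\varepsilon}}:=\omega_{{\varepsilon}}+{\varepsilon}[E]$ is a Kähler current in $\alpha$ with $E_+(T_{{\varepsilon}})=E$ whenever $\omega_{{\varepsilon}}$ is a Kähler form in that class, whence $E_{nK}(\alpha)\subset E$. On the other hand, let $T\geq{\varepsilon}\omega$ be a Kähler current with analytic singularities in $\alpha$ such that $E_+(T)=E_{nK}(\alpha)$, as provided by (ii); if $E$ were not contained in $E_+(T)$, some curve $\ell\subset E$ (a line of $E\simeq{\mathbf{P}}^{n-1}$) would not be contained in $E_+(T)$, and $T$ would then restrict to $\ell$ as a positive current of total mass $\alpha\cdot\ell=0$, contradicting $T\geq{\varepsilon}\omega$. Thus $E\subset E_{nK}(\alpha)$; in particular, the inclusion of (i) can be strict.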
$Proof$:(i) Since $\alpha$ is big, its non-nef locus $E_{nn}(\alpha)$ is just the set $\{x\in X,\nu(T_{\min},x)>0\}$, since we have $\nu(\alpha,x)=\nu(T_{\min},x)$ in that case (cf. proposition 3.8). For every Kähler current $T$ in $\alpha$, we have $\nu(T,x)\geq\nu(T_{\min},x)$ by minimality, and the inclusion $E_{nn}(\alpha)\subset E_{nK}(\alpha)$ ensues. \(ii) First, we claim that given two Kähler currents $T_1$, $T_2$ in $\alpha$, there exists a Kähler current with analytic singularities $T$ such that $E_+(T)\subset E_+(T_1)\cap E_+(T_2)$. Indeed, we can find ${\varepsilon}>0$ small enough such that $T_j\geq{\varepsilon}\omega$. Our currents $T_1$ and $T_2$ thus belong to $\alpha[{\varepsilon}\omega]$, and admit an infimum $T_3$ in that set with respect to $\preceq$ (cf. section 2.8). In particular, $T_3$ is a current in $\alpha$ with $T_3\geq{\varepsilon}\omega$ and $\nu(T_3,x)=\min\{\nu(T_1,x),\nu(T_2,x)\}$ for every $x\in X$. By (ii) of theorem 2.1, there exists a Kähler current with analytic singularities $T$ in $\alpha$ such that $\nu(T,x)\leq\nu(T_3,x)$ for every $x\in X$, hence $E_+(T)\subset E_+(T_1)\cap E_+(T_2)$, and this proves the claim. Using the claim and (ii) of theorem 2.1, it is easy to construct a sequence $T_k$ of Kähler currents with analytic singularities such that $E_+(T_k)$ is a decreasing sequence with $E_{nK}(\alpha)=\cap_kE_+(T_k)$. Since $T_k$ has analytic singularities, $E_+(T_k)$ is an analytic subset, thus the decreasing sequence $E_+(T_k)$ has to be stationary (by the strong Noetherian property), and we eventually get $E_{nK}(\alpha)=E_+(T_k)$ for some $k$, as desired. \(iii) If $\alpha$ is a Kähler class, $E_+(\omega)$ is empty for every Kähler form $\omega$ in $\alpha$, and thus so is $E_{nK}(\alpha)$. Conversely, assume that $\alpha_{|Y}$ is a Kähler class for every component $Y$ of $E_{nK}(\alpha)$, and let $T$ be a Kähler current with analytic singularities such that $E_+(T)=E_{nK}(\alpha)$. $\alpha$ is then a Kähler class by proposition 3.3 of \[DP01\], qed.\ We can now state the following result: Let $p$ be a big and modified nef class. Then the primes $D_1,...,D_r$ contained in the non-Kähler locus $E_{nK}(p)$ form an exceptional family $A$, and the fiber of $Z$ above $p$ is the simplicial cone $Z^{-1}(p)=p+V_+(A)$. When $p$ is an arbitrary modified nef class, $Z^{-1}(p)$ is an at most countable union of simplicial cones $p+V_+(A)$, where $A$ is an exceptional family of primes. $Proof$: note that, by the very definitions, for every pseudo-effective class $\alpha$, the prime components of its negative part $N(\alpha)$ form exactly the set $A$ of primes $D$ contained in the non-nef locus $E_{nn}(\alpha)$. Furthermore, $Z(\alpha)+V_+(A)$ is entirely contained in the fiber $Z^{-1}Z(\alpha)$. Indeed, the restriction of $Z$ to this simplicial cone is a concave map above the affine constant map $Z(\alpha)$, and both coincide at the relative interior point $\alpha$, thus they are equal on the whole of $Z(\alpha)+V_+(A)$. This already proves the last assertion.\ Assume now that $p$ is modified nef and big, and suppose first that $\alpha$ lies in $Z^{-1}(p)$. To see that $\alpha$ lies in $p+V_+(A)$, we have to prove that every prime $D_0$ with $\nu(\alpha,D_0)>0$ lies in $E_{nK}(p)$, that is: $\nu(T,D_0)>0$ for every Kähler current $T$ in $p$. If not, choose a smooth form $\theta$ in $\{D_0\}$. Since $T$ is a Kähler current, so is $T+{\varepsilon}\theta$ for ${\varepsilon}$ small enough. 
For $0<{\varepsilon}<\nu(\alpha,D_0)$ small enough, $T_{{\varepsilon}}:=T+{\varepsilon}\theta+(\nu(\alpha,D_0)-{\varepsilon})[D_0]+\sum_{D\neq D_0}\nu(\alpha,D)[D]$ is then a positive current in $\alpha$ with $\nu(T_{{\varepsilon}},D_0)=\nu(\alpha,D_0)-{\varepsilon}<\nu(\alpha,D_0)=\nu(T_{\min},D_0)$ (the last equality holds by proposition 3.8 because $\alpha$ is big since $p$ is); this is a contradiction which proves the inclusion $Z^{-1}(p)\subset p+V_+(A)$.\ In the other direction, let $T$ be a Kähler current in $p$, and let $T=R+\sum\nu(T,D)[D]$ be its Siu decomposition. $R$ is then a Kähler current with $\nu(R,D)=0$ for every prime $D$, thus its class $\beta:=\{R\}$ is a modified Kähler class. We first claim that we have $D_j\subset E_{nn}(p-{\varepsilon}\beta)$ for every ${\varepsilon}>0$ small enough and every prime component $D_j$ of the non-Kähler locus $E_{nK}(p)$ of $p$. Indeed, since $p-{\varepsilon}\beta$ is big for ${\varepsilon}>0$ small enough, we have $\nu(p-{\varepsilon}\beta,D_j)=\nu(S,D_j)$ if $S$ is a positive current with minimal singularities in $p-{\varepsilon}\beta$, and we have to see that $\nu(S,D_j)>0$. But $S+{\varepsilon}R$ is a Kähler current in $p$, thus $D_j\subset E_{nK}(p)\subset E_+(S+{\varepsilon}R)$ by definition, which exactly means that $\nu(S+{\varepsilon}R,D_j)>0$. The claim follows since $\nu(R,D_j)=0$ by construction of $R$.\ As a consequence of this claim, each prime $D_1,...,D_r$ of our family $A$ occurs in the negative part $N(p-{\varepsilon}\beta)$ for ${\varepsilon}>0$ small enough. Consequently, by the first part of the proof, the Zariski projection of $Z(p-{\varepsilon}\beta)+\{E\}$ is just $Z(p-{\varepsilon}\beta)$ for every effective ${\mathbf{R}}$-divisor $E$ supported by the $D_j$’s and every ${\varepsilon}>0$ small enough. Since $p$ is big, $Z$ is continuous at $p$, thus $Z(p-{\varepsilon}\beta)$ converges to $Z(p)$, which is just $p$ because the latter is also modified nef. Finally, $Z$ is also continuous at the big class $p+\{E\}$, thus the Zariski projection of $Z(p-{\varepsilon}\beta)+\{E\}$ converges to that of $p+\{E\}$, and thus $Z(p+\{E\})=p$ holds. This means that $p+V_+(A)\subset Z^{-1}(p)$, and concludes the proof of theorem 3.20. Structure of the pseudo-effective cone -------------------------------------- Using our constructions, we will prove the following: The boundary of the pseudo-effective cone is locally polyhedral away from the modified nef cone, with extremal rays generated by (the classes of) exceptional prime divisors. $Proof$: this is in fact rather straightforward by now: for each prime $D$, the set ${\mathcal{E}}_D:=\{\alpha\in{\mathcal{E}},\nu(\alpha,D)=0\}$ is a closed convex subcone of ${\mathcal{E}}$. This follows from the fact that $\alpha\mapsto\nu(\alpha,D)$ is convex, homogeneous, lower semi-continuous and everywhere non-negative. If $\alpha\in\partial{\mathcal{E}}$ does not belong to ${\mathcal{MN}}$, it does not belong to ${\mathcal{E}}_D$ for some prime $D$ by proposition 3.2. For every $\beta\in{\mathcal{E}}$, we have either $\beta\in{\mathcal{E}}_D$, or $D$ occurs in the negative part $N(\beta)$. Therefore, ${\mathcal{E}}$ is generated by ${\mathbf{R}}_+\{D\}$ and ${\mathcal{E}}_D$, and the latter does not contain $\alpha$. This means that $\partial{\mathcal{E}}$ is locally polyhedral near $\alpha$. Since $\nu(\alpha,D)>0$, we also see that $D$ is exceptional. 
Finally, the extremal rays of ${\mathcal{E}}$ not contained in ${\mathcal{MN}}=\cap_D{\mathcal{E}}_D$ have to lie outside ${\mathcal{E}}_D$ for some exceptional prime $D$, and since ${\mathcal{E}}={\mathcal{E}}_D+{\mathbf{R}}_+\{D\}$, each such extremal ray is generated by $\{D\}$ for some $D$, qed. Volumes ------- Recall that the volume of a pseudo-effective class $\alpha$ on a compact Kähler $n$-fold is defined to be the supremum $v(\alpha)$ of $\int_XT_{ac}^n$ for $T$ a closed positive $(1,1)$-current in $\alpha$ (cf. \[Bou02\]). A class $\alpha$ is big iff $v(\alpha)>0$, and the volume is a quantitative measure of its bigness. We have already noticed that $Z(\alpha)$ is big iff $\alpha$ is; we have the following quantitative version: Let $\alpha$ be a pseudo-effective class on $X$ compact Kähler. Then $v(Z(\alpha))=v(\alpha)$. The proof is in fact immediate: if $T$ is a positive current in $\alpha$, then we have $T\geq N(\alpha)$ since $T$ belongs to $\alpha[-{\varepsilon}\omega]$ for each ${\varepsilon}>0$, and we deduce that $T\mapsto T-N(\alpha)$ is a bijection between the positive currents in $\alpha$ and those in $Z(\alpha)$. It remains to notice that $(T-N(\alpha))_{ac}=T_{ac}$ to conclude the proof. Zariski decomposition on a surface and a hyper-Kähler manifold ============================================================== It is known since the pioneering work of Zariski \[Zar62\] that any effective divisor $D$ on a projective surface admits a unique Zariski decomposition $D=P+N$, i.e. a decomposition into a sum of ${\mathbf{Q}}$-divisors $P$ and $N$ with the following properties: \(i) $P$ is nef, $N=\sum a_jN_j$ is effective, \(ii) $P\cdot N=0$, \(iii) The Gram matrix $(N_i\cdot N_j)$ is negative definite. We want to show that our divisorial Zariski decomposition is indeed a generalization of such a Zariski decomposition on a surface. Notations --------- $X$ will stand for a compact Kähler surface, or a compact hyper-Kähler manifold. For such an $X$, we denote by $q$ the quadratic form on ${H^{1,1}(X,{\mathbf{R}})}$ defined as follows: when $X$ is a surface, we set $q(\alpha):=\int\alpha^2$, and when $X$ is hyper-Kähler, we choose a symplectic holomorphic form $\sigma$, and let $q(\alpha):=\int\alpha^2(\sigma\overline{\sigma})^{m-1}$ be the usual Beauville-Bogomolov quadratic form, with $\sigma$ normalized so as to achieve $q(\alpha)^m=\int_X\alpha^{2m}$ (with $\dim X=n=2m$). In both cases $({H^{1,1}(X,{\mathbf{R}})},q)$ is Lorentzian, i.e. it has signature $(1,h^{1,1}(X)-1)$; the open cone $\{\alpha\in{H^{1,1}(X,{\mathbf{R}})},q(\alpha)>0\}$ has thus two connected components which are convex cones, and we denote by ${\mathcal{P}}$ the component containing the Kähler cone ${\mathcal{K}}$. We call ${\mathcal{P}}$ the positive cone (attached to the quadratic form $q$). In general, given a linear form $\lambda$ on ${H^{1,1}(X,{\mathbf{R}})}$, we will denote its kernel by $\lambda^{\perp}$ and the two open half-spaces it defines by $\lambda_{>0}$ and $\lambda_{<0}$. The dual $\mathcal{C}^{\star}$ of a convex cone $\mathcal{C}$ in ${H^{1,1}(X,{\mathbf{R}})}$ is seen as a cone in ${H^{1,1}(X,{\mathbf{R}})}$, using the duality induced by $q$. The dual pseudo-effective cone ------------------------------ In both cases, we shall prove that the modified nef cone is the dual cone to the pseudo-effective cone. ### The case of a surface We suppose that $X$ is a surface. We prove the following essentially well-known result: When $X$ is a surface, the Kähler cone and the modified Kähler cone coincide. 
The dual pseudo-effective cone is just the nef cone. $Proof$: if $\alpha\in{\mathcal{MK}}$, it can be represented by a Kähler current with analytic singularities in codimension at least 2, that is at some points $x_1,...,x_r$. Therefore we see that the non-Kähler locus $E_{nK}(\alpha)$ is a discrete set. Since the restriction of any class to a point is (by convention) a Kähler class, theorem 3.19 shows that $\alpha$ lies in fact in ${\mathcal{K}}$.\ Since $\int_X\omega\wedge T$ is positive for every Kähler form $\omega$ and every positive current $T$, we of course have ${\mathcal{K}}\subset{{\mathcal{E}}^{\star}}$, and thus also ${\mathcal{N}}=\overline{{\mathcal{K}}}\subset{{\mathcal{E}}^{\star}}$. The other inclusion is much deeper, since it is a consequence of the Nakai-Moishezon criterion for Kähler classes on a surface, as given in \[Lam99\]. Indeed, this criterion implies that a real $(1,1)$-class $\alpha$ on a Kähler surface is a nef class iff $\alpha\cdot\omega\geq 0$ for every $\omega\in{\mathcal{K}}$ and $\alpha\cdot C\geq 0$ for every irreducible curve $C$. Since a class in ${{\mathcal{E}}^{\star}}$ clearly satisfies these conditions, we get ${{\mathcal{E}}^{\star}}\subset{\mathcal{N}}$, and the proof of theorem 4.1 is over.\ As a consequence, since ${\mathcal{K}}$ is contained in ${\mathcal{P}}$ and since $\overline{{\mathcal{P}}}$ is self dual (just because $q$ is Lorentzian), we get dually that $\overline{{\mathcal{P}}}\subset{\mathcal{E}}$ and thus that ${\mathcal{P}}\subset{\mathcal{E}}^0={\mathcal{B}}$, which means the following: if $\alpha$ is a real $(1,1)$-class with $\alpha^2>0$, then $\alpha$ or $-\alpha$ is big. This generalizes the well known case where $\alpha$ is (the first Chern class of) a line bundle (whose proof is based on Riemann-Roch). ### The hyper-Kähler case In that case, the dual pseudo-effective cone is also equal to the modified nef cone, but the proof uses another description, due to D.Huybrechts, of the dual pseudo-effective cone. In the easy direction, we have: \(i) The modified nef cone ${\mathcal{MN}}$ is contained in both the dual pseudo-effective cone ${{\mathcal{E}}^{\star}}$ and the closure of the positive cone $\overline{{\mathcal{P}}}$. \(ii) We have $q(D,D')\geq 0$ for any two distinct prime divisors $D\neq D'$. $Proof$: to prove (i), we only have to prove that ${\mathcal{MK}}\subset{{\mathcal{E}}^{\star}}$. Indeed, ${\mathcal{MK}}\cap{{\mathcal{E}}^{\star}}\subset{\mathcal{E}}\cap{{\mathcal{E}}^{\star}}$ is trivially contained in $\overline{{\mathcal{P}}}$. We pick a modified Kähler class $\alpha$ and a pseudo-effective class $\beta\in{\mathcal{E}}$, and choose a Kähler current $T$ in $\alpha$ with analytic singularities in codimension at least 2, and a positive current $S$ in $\beta$. By section 2.6, the wedge product $T\wedge S$ is well defined as a closed positive $(2,2)$-current, and lies in the class $\alpha\cdot\beta$. Since $(\sigma\overline{\sigma})^{m-1}$ is a smooth positive form of bidimension $(2,2)$, the integral $\int_XT\wedge S\wedge(\sigma\overline{\sigma})^{m-1}$ is positive. 
But $(\sigma\overline{\sigma})^{m-1}$ is also closed, thus we have $$\int_XT\wedge S\wedge(\sigma\overline{\sigma})^{m-1}=\alpha\cdot\beta\cdot\{(\sigma\overline{\sigma})^{m-1}\}=q(\alpha,\beta),$$ so we have proven that $q(\alpha,\beta)\geq 0$ as desired.\ The second contention is obtained similarly, noting that $\{D\}\cdot\{D'\}$ contains a closed positive $(2,2)$-current, which is $[D\cdot D']$, where $D\cdot D'$ is the effective intersection cycle.\ The other direction ${{\mathcal{E}}^{\star}}\subset{\mathcal{MN}}$ is much deeper. The effective $1$-dimensional cycles $C$ and the effective divisors $D$ define linear forms on ${H^{1,1}(X,{\mathbf{R}})}$ via the intersection form and the Beauville-Bogomolov form $q$ respectively, and we define a rational (resp. uniruled) chamber of the positive cone ${\mathcal{P}}$ to be a connected component of ${\mathcal{P}}-\cup C^{\perp}$ (resp. ${\mathcal{P}}-\cup D^{\perp}$), where $C$ (resp. $D$) runs over the rational curves (resp. the uniruled divisors). By a rational curve (resp. a uniruled divisor) we mean an effective $1$-dimensional cycle all of whose components are irreducible rational curves (resp. an effective divisor all of whose components are uniruled prime divisors). The rational (resp. uniruled) chamber of ${\mathcal{P}}$ cut out by all the $C_{>0}$’s (resp. $D_{>0}$’s) will be called the fundamental rational chamber (resp. the fundamental uniruled chamber). When $X$ is a $K3$ surface, the rational and uniruled chambers are the same thing and coincide with the traditional chambers in that situation. We can now state the following fundamental result: \(i) The positive cone ${\mathcal{P}}$ is contained in ${\mathcal{E}}$. \(ii) If $\alpha\in{\mathcal{P}}$ belongs to one of the rational chambers, then there exists a bimeromorphic map $f:X-\to X'$ to a hyper-Kähler $X'$ such that $$f_{\star}\alpha=\omega'+\{D'\},$$ where $\omega'\in{\mathcal{K}}_{X'}$ is a Kähler class and $D'$ is a uniruled ${\mathbf{R}}$-divisor.\ (iii) When $\alpha\in{\mathcal{P}}$ lies in both the fundamental uniruled chamber and one of the rational chambers, then no uniruled divisor $D'$ occurs in (ii).\ (iv) The fundamental rational chamber coincides with the Kähler cone of $X$. In fact, \[Huy99\] states this only for a very general element $\alpha\in{\mathcal{P}}$, but we have noticed in \[Bou01\] that the elements of the rational chambers are already very general in that respect.\ In the situation (iii), $\alpha$ lies in $f^{\star}{\mathcal{K}}_{X'}$ for some bimeromorphic $f:X-\to X'$ towards a hyper-Kähler $X'$. The union of such open convex cones ${\mathcal{K}}_f:=f^{\star}{\mathcal{K}}_{X'}$ is called the bimeromorphic Kähler cone, and is denoted by ${\mathcal{BK}}$. The union in question yields in fact a partition of ${\mathcal{BK}}$ into open convex cones ${\mathcal{K}}_f$ (since a bimeromorphic map between minimal manifolds which sends one Kähler class to a Kähler class is an isomorphism by a result of A.Fujiki); ${\mathcal{BK}}$ is an open cone, but definitely not convex in general. (iii) tells us that each intersection of a rational chamber with the fundamental uniruled chamber is contained in ${\mathcal{BK}}$, and thus in one of the ${\mathcal{K}}_f$’s.\ We can now describe the dual pseudo-effective cone: The dual pseudo-effective cone ${{\mathcal{E}}^{\star}}$ of a hyper-Kähler manifold coincides with the modified nef cone ${\mathcal{MN}}$. $Proof$: by proposition 4.2, it remains to see that ${{\mathcal{E}}^{\star}}$ is contained in the modified nef cone ${\mathcal{MN}}$. 
By (i) of theorem 4.3, we have ${{\mathcal{E}}^{\star}}\subset\overline{{\mathcal{P}}}$, and it will thus be enough to show that an element of the interior of ${{\mathcal{E}}^{\star}}$ which belongs to one of the rational chambers lies in ${\mathcal{MN}}$. But an element $\alpha$ of the interior of ${{\mathcal{E}}^{\star}}$ has $q(\alpha,D)>0$ for every prime $D$, thus it certainly lies in the fundamental uniruled chamber. If $\alpha$ lies in both the interior of ${{\mathcal{E}}^{\star}}$ and one of the rational chambers, it therefore lies in ${\mathcal{K}}_f=f^{\star}{\mathcal{K}}_{X'}$ for some bimeromorphic $f:X-\to X'$, and it remains to see that ${\mathcal{K}}_f\subset{\mathcal{MN}}$. But if $\omega$ is a Kähler form on $X'$, its pull-back $T:=f^{\star}\omega$ can be defined using a resolution of $f$, and it is easy to check that $T$ is a Kähler current with $\nu(T,D)=0$ for every prime $D$, since $f$ induces an isomorphism $X-A\to X'-A'$ for $A$, $A'$ analytic subsets of codimension at least 2 (this is because $X$ and $X'$ are minimal). Therefore, $\{T\}=f^{\star}\{\omega\}$ belongs to ${\mathcal{MK}}\subset{\mathcal{MN}}$, qed. Exceptional divisors -------------------- When $X$ is a surface or a hyper-Kähler manifold, the fact that a family $D_1,...,D_r$ of prime divisors is exceptional can be read off its Gram matrix. A family $D_1,...,D_r$ of prime divisors is exceptional iff its Gram matrix $(q(D_i,D_j))$ is negative definite. $Proof$: let $V$ be the real vector space of ${\mathbf{R}}$-divisors supported by the $D_j$’s, and let $V_+\subset V$ be the convex cone of effective ones. We begin with a lemma of quadratic algebra: Assume that $(V,q)$ is negative definite. Then every $E\in V$ such that $q(E,D_j)\leq 0$ for all $j$ belongs to $V_+$. $Proof$: if $E\in V$ is non-positive against each $D_j$, we write $E=E_+-E_-$ where $E_+$ and $E_-$ are effective with disjoint supports. We have to prove that $E_-=0$, and this is equivalent by assumption to $q(E_-)\geq 0$. But $q(E_-)=q(E_-,E_+)-q(E_-,E)$. The first term is positive because $E_+$ and $E_-$ have disjoint supports, using (ii) of proposition 4.2, whereas the second is positive by assumption on $E$.\ Let $D_1,...,D_r$ be primes with negative definite Gram matrix. In particular, we then have that $\{V_+\}\subset{H^{1,1}(X,{\mathbf{R}})}$ meets $\overline{{\mathcal{P}}}$ at $0$ only. Since the modified nef cone ${\mathcal{MN}}$ is contained in $\overline{{\mathcal{P}}}$ by proposition 4.2, $\{V_+\}$ $a$ $fortiori$ meets the modified nef cone at $0$ only, which means by definition that $D_1,...,D_r$ is an exceptional family, and this proves that negative definiteness implies exceptionality in theorem 4.5. In the other direction, assume that $D_1,...,D_r$ is an exceptional family of primes. We first prove that the matrix $(q(D_i,D_j))$ is negative semi-definite. If not, we find an ${\mathbf{R}}$-divisor $E$ in $V$ with $q(E)>0$. Writing again $E=E_+-E_-$, with $E_+$ and $E_-$ two effective divisors in $V_+$ with disjoint supports, we have again $q(E_+,E_-)\geq 0$ by (ii) of proposition 4.2, and thus $q(E_+)+q(E_-)\geq q(E)>0$. We may therefore assume that $E$ lies in $V_+$, with $q(E)>0$. But then $E$ or $-E$ is big, and it has to be $E$ because it is already effective. Its Zariski projection $Z(\{E\})$ is then non-zero since it is also big (by proposition 3.10), and it lies in both $\{V_+\}$ and ${\mathcal{MN}}$, a contradiction.\ To conclude the proof of theorem 4.5, we may assume (by induction) that the Gram matrix of $D_1,...,D_{r-1}$ is negative definite. 
If $(V,q)$ is degenerate, the span $V'$ of $D_1,...,D_{r-1}$ is such that its orthogonal space $V'^{\perp}$ in $V$ is equal to the null-space of $V$. We then decompose $D_r=E+F$ in the direct sum $V=V'\oplus V'^{\perp}$. Since $q(E,D_j)=q(D_r,D_j)\geq 0$ for $j<r$, lemma 4.6 yields that $E\leq 0$. Therefore, $F=D_r-E$ lies in $V_+$, and is certainly non-zero. We claim that $\{F\}$ is also modified nef, which will yield the expected contradiction. But $F$ lies in the null-space of $V$, and is therefore non-negative against every prime divisor $D$. If $\alpha$ is a pseudo-effective class, we have $q(\{F\},\alpha)=q(\{F\},Z(\alpha))+q(F,N(\alpha))$. The first term is positive since $Z(\alpha)\in{\mathcal{MN}}={{\mathcal{E}}^{\star}}$, and the second one is positive because $F$ is positive against every effective divisor. We infer from all this that $\{F\}$ lies in ${{\mathcal{E}}^{\star}}={\mathcal{MN}}$, and the claim follows.\ The theorem says in particular that a prime divisor $D$ is exceptional iff $q(D)<0$. On a K3 surface, an easy and well-known argument using the adjunction formula shows that the prime divisors with negative square are necessarily smooth rational curves with square $-2$. In higher dimension, we have: On a hyper-Kähler manifold $X$, the exceptional prime divisors are uniruled. $Proof$: since $D$ is exceptional, it lies outside $\overline{{\mathcal{P}}}={\mathcal{P}}^{\star}$, and we thus find a class $\alpha\in{\mathcal{P}}$ lying in one of the rational chambers such that $q(\alpha,D)<0$. By (ii) of theorem 4.3, there exists a bimeromorphic map between hyper-Kähler manifolds $f:X-\to X'$ such that $f_{\star}\alpha=\omega'+\sum a_j D_j'$ with $\omega'$ a Kähler class, $a_j\geq 0$ and $D_j'$ a uniruled prime divisor. Since the quadratic form is preserved by $f$, we have $0>q(\alpha,D)=q(\omega',f_{\star}D)+\sum a_j q(D_j',f_{\star}D)$, and $q(D_j',f_{\star}D)$ has to be negative for some $j$. But this implies that the two primes $D_j'$ and $f_{\star}D$ coincide, and thus $D=f^{\star}D_j'$ is uniruled since $D_j'$ is. Rationality of the Zariski decomposition ---------------------------------------- We want to prove that the divisorial Zariski decomposition is rational (when $X$ is a surface or a hyper-Kähler manifold) in the sense that $N(\alpha)$ is a rational divisor when $\alpha$ is a rational class. We first show the following characterization of the divisorial Zariski decomposition: If $\alpha\in{H^{1,1}(X,{\mathbf{R}})}$ is a pseudo-effective class, its divisorial Zariski decomposition $\alpha=Z(\alpha)+\{N(\alpha)\}$ is the unique orthogonal decomposition of $\alpha$ into the sum of a modified nef class and the class of an exceptional effective ${\mathbf{R}}$-divisor. $Proof$: we first prove uniqueness: assume that $\alpha=p+\{N\}$ is an orthogonal decomposition with $p$ a modified nef class and $N$ an effective exceptional ${\mathbf{R}}$-divisor. We claim that $N(\alpha)=N$. To see this, let $D_1,...,D_r$ be the support of $N$; the Gram matrix $(q(D_i,D_j))$ is negative definite by theorem 4.5, and $p$ is orthogonal to each $D_j$ because $q(p,N)=0$ and $q(p,D_j)\geq 0$ for all $j$ since $p$ is a modified nef class. We have $N(\alpha)\leq N(p)+N$ and $N(p)=0$ since $p$ is modified nef, thus $N(\alpha)\leq N$. 
But $N(\alpha)-N$ is supported by primes $D_1,...,D_r$ whose Gram matrix is negative definite, and $q(N(\alpha)-N,D_j)=q(p,D_j)-q(Z(\alpha),D_j)$ is non-positive since $p$ is orthogonal to $D_j$ and $Z(\alpha)$ belongs to ${\mathcal{MN}}={{\mathcal{E}}^{\star}}$. Lemma 4.6 thus yields $N(\alpha)\geq N$, and the claim follows. To prove theorem 4.8, we will show the existence of an orthogonal decomposition $\alpha=p+\{N\}$ with $p$ a modified nef class and $N$ an exceptional effective ${\mathbf{R}}$-divisor. When this is done, we must have $N=N(\alpha)$ by the claim, so that $\alpha=Z(\alpha)+\{N(\alpha)\}$ is itself an orthogonal decomposition. A pseudo-effective class $\alpha$ lies in ${{\mathcal{E}}^{\star}}$ iff $q(\alpha,D)\geq 0$ for every prime $D$. $Proof$: if $\beta$ is a pseudo-effective class, we write $q(\alpha,\beta)=q(\alpha,Z(\beta))+q(\alpha,N(\beta))$. The first term is positive because $Z(\beta)$ lies in ${{\mathcal{E}}^{\star}}$, and the second one is positive if $q(\alpha,D)\geq 0$ for each prime $D$. Let $\alpha$ be a pseudo-effective class and let $D_1,...,D_r$, $E_1,...,E_p$ be two families of primes such that: \(i) $q(\alpha,D_j)<0$ and $q(\alpha,E_i)\leq 0$ for every $j$ and $i$. \(ii) $E_1,...,E_p$ is an exceptional family. Then the union of these two families is exceptional. $Proof$: let $F$ be an effective divisor supported by $D_j$’s and $E_i$’s, and assume that $\{F\}$ is a modified nef class. We have to see that $F=0$. But $q(\alpha,F)$ is positive since $F$ is modified nef, thus we see using (i) that $F$ is in fact supported by $E_i$’s, and then (ii) enables us to conclude that $F=0$ as desired.\ At this point, the argument is similar to \[Fuj79\]. If the pseudo-effective class $\alpha$ is already in ${{\mathcal{E}}^{\star}}$, we trivially have our decomposition. Otherwise, consider the family $A$ of primes $D$ such that $q(\alpha,D)<0$. That family is exceptional by lemma 4.10 with $E_1,...,E_p$ an empty family, thus $A$ is finite with negative definite Gram matrix, and is non-empty by lemma 4.9. Let $$\alpha=\alpha_1+\{N_1\}$$ be the decomposition in the direct sum $V^{\perp}\oplus V$, where $V\subset{H^{1,1}(X,{\mathbf{R}})}$ is spanned by $A$. We claim that $N_1$ is effective and that $\alpha_1$ is pseudo-effective. Since $q(N_1,D)=q(\alpha,D)<0$ for every $D\in A$, lemma 4.6 yields that $N_1$ is effective. We can also write $N(\alpha)=E+F$ where $E$ and $F$ are effective with disjoint supports and $F$ is supported by elements of $A$. Then for every $D\in A$ we have $q(F-N_1,D)\leq q(N(\alpha)-N_1,D)$ since $E$ and $D$ are disjoint, and $q(N(\alpha)-N_1,D)=q(\alpha_1,D)-q(Z(\alpha),D)$ is non-positive because $\alpha_1$ and $D$ are orthogonal and $Z(\alpha)$ lies in ${{\mathcal{E}}^{\star}}$. We infer from this that $N(\alpha)\geq N_1$ using lemma 4.6, and $\alpha_1=Z(\alpha)+\{N(\alpha)-N_1\}$ is thus pseudo-effective, and this proves our claim.\ If $\alpha_1$ lies in ${{\mathcal{E}}^{\star}}$, we have our decomposition by construction; otherwise, we iterate the construction: let $B$ be the non-empty exceptional family of primes $D$ such that $q(\alpha_1,D)<0$. Since $A$ is already exceptional and $q(\alpha_1,D)=0$ for $D\in A$, we infer from lemma 4.10 that the union $A_1$ of $A$ and $B$ is again an exceptional family. We decompose $$\alpha_1=\alpha_2+\{N_2\}$$ in the direct sum $V_1^{\perp}\oplus V_1$, where $V_1\subset{H^{1,1}(X,{\mathbf{R}})}$ is spanned by $A_1$. 
The same arguments as above show that in this case also $\alpha_2$ is pseudo-effective and $N_2$ is effective (since $q(N_2,D)=q(\alpha_1,D)\leq 0$ for each $D\in A_1$). But since $B$ is non-empty, $A_1$ is an exceptional family strictly bigger than $A$. Since the length of the exceptional families is uniformly bounded by the Picard number $\rho(X)$ by theorem 3.14, the iteration of the construction has to stop after $l$ steps, for which we get a class $\alpha_l$ which is modified nef. The desired decomposition is then obtained by setting $p:=\alpha_l$ and $N:=N_1+...+N_l$, which is exceptional since it is supported by elements of $A\cup A_1\cup...\cup A_l=A_l$ (since $A\subset A_1\subset ...\subset A_l$ by construction). This concludes the proof of theorem 4.8. The divisorial Zariski decomposition is rational in case $X$ is a surface or a hyper-Kähler manifold. In particular, when $D$ is a pseudo-effective divisor on $X$, the modified nef ${\mathbf{R}}$-divisor $P:=D-N(\{D\})$ is rational and such that the canonical inclusion of $H^0(X,{\mathcal{O}}(kP))$ in $H^0(X,{\mathcal{O}}(kD))$ is surjective for every $k$ such that $kP$ is Cartier. $Proof$: if $\alpha\in NS(X)\otimes{\mathbf{Q}}$ is a rational class, $N(\alpha)$ is necessarily the image of $\alpha$ by the orthogonal projection $NS(X)\otimes{\mathbf{Q}}\to V_{{\mathbf{Q}}}(\alpha)$, where $V_{{\mathbf{Q}}}(\alpha)$ is the ${\mathbf{Q}}$-vector space generated by the cohomology classes of the components of $N(\alpha)$. The latter is therefore rational. As to the second part, let $E$ be an element of the linear system $|kD|$. Since the integration current $\frac{1}{k}[E]$ is positive and lies in $\{D\}$, we have $E\geq kN(\{D\})$. But this exactly means that $kN(\{D\})$ is contained in the base scheme of $|kD|$, as was to be shown. If $p\in{H^{1,1}(X,{\mathbf{R}})}$ is a modified nef class on $X$, its volume is equal to $$v(p)=q(p)^m=\int p^{\dim X}.$$ In general, we have $v(\alpha)=\int Z(\alpha)^{\dim X}$; in particular, the volume of a rational class is rational. $Proof$: we have already proven in proposition 3.22 that $v(\alpha)=v(Z(\alpha))$, so only the first assertion needs a proof. We have shown in \[Bou02\] that the equality $v(p)=\int p^{\dim X}$ is always true when $p$ is a nef class, so the contended equality holds on a surface. In the hyper-Kähler case, since we have chosen the symplectic form $\sigma$ so that $q(\alpha)^m=\int_X\alpha^{2m}$ for any class $\alpha$, we just have to prove $v(p)=q(p)^m$ for $p\in{\mathcal{MN}}$. The latter cone is also the closure of the bimeromorphic Kähler cone ${\mathcal{BK}}$, so we may assume that $p$ lies in $f^{\star}{\mathcal{K}}_{X'}$ for some bimeromorphic map $f:X-\to X'$ between hyper-Kähler manifolds (because both $q$ and the volume are continuous). But since $f$ is an isomorphism in codimension 1, the volume is invariant under $f$, and so is the quadratic form $q$, so we are reduced to the case where $p$ is a Kähler class, for which the equality is always true as we’ve said above. The algebraic approach ====================== In this section, we would like to show what the constructions we have made become when $\alpha=c_1(L)$ is the first Chern class of a line bundle on a projective complex manifold $X$. The general philosophy is that the divisorial Zariski decomposition of a big line bundle can be defined algebraically in terms of the asymptotic linear series $|kL|$. 
When $L$ is just pseudo-effective, sections are of course not sufficient, but we are led back to the big case by approximation. For those who are reluctant to assume projectivity too quickly, we remark that a compact Kähler manifold carrying a big line bundle is automatically projective. From sections to currents and back ---------------------------------- Let $L\to X$ be a line bundle over the projective manifold $X$. Whenever $L$ has sections $\sigma_1,...,\sigma_l\in H^0(X,L)$, there is a canonical way to construct a closed positive current $T\in c_1(L)$ with analytic singularities as follows: choose some smooth Hermitian metric $h$ on $L$, and consider $$\varphi(x):=\frac{1}{2}\log\sum_jh(\sigma_j(x)).$$ Then we define $T=\Theta_h(L)+dd^c\varphi$, where $\Theta_h(L)$ is the first Chern form of $h$. One immediately checks that $T$ is positive and independent of the choice of $h$, and thus depends on the sections $\sigma_j$ only. $T$ has analytic singularities exactly along the common zero-scheme $A$ of the $\sigma_j$’s, and its Siu decomposition is therefore $T=R+[D]$, where $D$ is the divisor part of $A$. When $(\sigma_j)$ is a basis of $H^0(X,L)$, we set $T_{|L|}:=T$. Another way to see $T_{|L|}$ is as the pull-back of the Fubini-Study form on ${\mathbf{P}}H^0(X,L)^{\star}={\mathbf{P}}^N$ (the identification is determined by the choice of the basis of $H^0(L)$) by the rational map\ $\phi_{|L|}:X-\to{\mathbf{P}}H^0(X,L)^{\star}$. $T_{|L|}$ is independent of the choice of the basis up to equivalence of singularities, and carries a great deal of information about the linear system $|L|$: the singular scheme $A$ of $T_{|L|}$ is the base scheme $B_{|L|}$ of the linear system $|L|$, the Lelong number $\nu(T_{|L|},x)$ at $x$ is just the so-called multiplicity of the linear system at $x$, which is defined by $$\nu(|L|,x):=\min\{\nu(E,x),E\in|L|\}.$$ If a modification $\mu:{\widetilde}{X}\to X$ is chosen such that $\mu^{\star}|L|=|M|+F$, where $|M|$ is base-point free and $F$ is an effective divisor, then $\mu^{\star}T_{|L|}=T_{\mu^{\star}|L|}=T_{|M|}+[F]$ where $T_{|M|}$ is smooth since $|M|$ is generated by global sections. The so-called moving self-intersection of $L$, which is by definition $L^{[n]}:=M^n$, is thus also equal to $\int_X(T_{|L|})_{ac}^n$.\ When $L$ is a big line bundle, we get for each big enough $k>0$ a positive current $T_k:=\frac{1}{k}T_{|kL|}$ in $c_1(L)$. A result of Fujita (cf. \[DEL00\]) states that the volume $v(L)$ is the limit of $\frac{1}{k^n}(kL)^{[n]}$, thus we have $v(L)=\lim_{k\to+\infty}\int_XT_{k,ac}^n$.\ Finally, if $T_{\min}$ is a positive current with minimal singularities in $c_1(L)$, we can choose a singular Hermitian metric $h_{\min}$ on $L$ whose curvature current is $T_{\min}$ (by section 2.4). If $L$ is still big and if for each $k$ we choose the basis of $H^0(kL)$ to be orthonormal with respect to $h_{\min}^{\otimes k}$, then it can be shown that $T_k\to T_{\min}$, and we will see in 5.2 that $\nu(T_k,x)=\frac{1}{k}\nu(|kL|,x)$ converges to $\nu(T_{\min},x)=\nu(c_1(L),x)$. In some sense, the family $T_k$ derived from $|kL|$ is cofinal in $(c_1(L)^+,\preceq)$.\ It should however be stressed that $T_{|kL|}$ will in general $not$ be a Kähler current, even if $L$ is big. Indeed, consider the pull-back $L=\mu^{\star}A$ of some ample line bundle $A$ by a blow-up $\mu$. 
Then $kL$ will be generated by global sections for $k$ big enough, and $T_{|kL|}$ is thus smooth for such a $k$, but not a Kähler current, since $L$ is not ample and a smooth Kähler current is just a Kähler form.\ Conversely, to go from currents to sections is the job of the $L^2$ estimates for the $\overline{\partial}$ operator, e.g. in the form of Nadel’s vanishing theorem. Recall that the multiplier ideal sheaf ${\mathcal{I}}(T)$ of a closed almost positive $(1,1)$-current $T$ is defined locally as follows: write $T=dd^c\varphi$ locally at some $x$. Then the stalk ${\mathcal{I}}(T)_x$ is the set of germs of holomorphic functions $f$ at $x$ such that $|f|^2e^{-2\varphi}$ is locally integrable at $x$. Then Nadel’s vanishing states that if $T$ is a Kähler current in the first Chern class $c_1(L)$ of a line bundle $L$, then $H^q(X,{\mathcal{O}}(K_X+L)\otimes{\mathcal{I}}(T))=0$ for every $q>0$. In particular, if $V(T)$ denotes the scheme $V({\mathcal{I}}(T))$, then the restriction map $$H^0(X,{\mathcal{O}}_X(K_X+L))\to H^0(V(T),{\mathcal{O}}_{V(T)}(K_X+L))$$ is surjective. This affords a tool to prove generation of jets at some points, using the following lemma (cf. \[DEL00\]): If $\nu(T,x)<1$, then ${\mathcal{I}}(T)_x={\mathcal{O}}_x$. If $\nu(T,x)\geq n+s$, we have ${\mathcal{I}}(T)_x\subset\mathcal{M}_x^{s+1}$. To illustrate how this works, let us prove the following algebraic characterization of the non-Kähler locus: If $L$ is a big line bundle, then the non-Kähler locus\ $E_{nK}(c_1(L))$ is the intersection of the non-finite loci $\Sigma_k$ of the rational maps $\phi_{|kL|}$, defined as the union of the reduced base locus $B_{|kL|}$ and the set of $x\in X-B_{|kL|}$ such that the fiber $\phi_{|kL|}^{-1}(\phi_{|kL|}(x))$ through $x$ is positive dimensional somewhere. $Proof$: If $x_1,...,x_r\in X$ lie outside $E_{nK}(c_1(L))$, then we can find a Kähler current $T\in c_1(L)$ with analytic singularities such that each $x_j$ lies outside the singular locus of $T$. The latter being closed, there exists a neighbourhood $U_j$ of $x_j$ such that $\nu(T,z)=0$ for every $z\in U_j$. We artificially force an isolated pole at each $x_j$ by setting ${\widetilde}{T}=T+\sum_{1\leq j\leq r}dd^c({\varepsilon}\theta_j(z)\log|z-x_j|)$, where $\theta_j$ is a smooth cut-off function near $x_j$, and ${\varepsilon}>0$ is so small that ${\widetilde}{T}$ is still Kähler. We have $\nu({\widetilde}{T},x_j)={\varepsilon}$, whereas $\nu({\widetilde}{T},z)$ is still zero for every $z\neq x_j$ in $U_j$. We now choose some smooth form $\tau$ in $c_1(K_X)$, and consider the current $T_k:=k{\widetilde}{T}-\tau$. It lies in the first Chern class of $L_k:=kL-K_X$, and is certainly still Kähler for $k$ big enough. We also have $\nu(T_k,z)=0$ for every $z\neq x_j$ close to $x_j$, and $\nu(T_k,x_j)=k{\varepsilon}$. Given $s_1,...,s_r$, we see that, for $k$ big enough, each $x_j$ will be isolated in $E_1(T_k)$, whereas ${\mathcal{I}}(T_k)_{x_j}\subset\mathcal{M}_{x_j}^{s_j+1}$, using Skoda’s lemma. Nadel’s vanishing then implies that the global sections of $kL$ generate $s_j$-jets at $x_j$ for every $j$. This implies that the non-finite locus $\Sigma_k$ is contained in $E_{nK}(c_1(L))$.\ To prove the converse inclusion, we have to find for each $m$ a Kähler current $T_m$ in $c_1(L)$ with $E_+(T_m)\subset\Sigma_m$. To do this, we copy the proof of proposition 7.2 in \[Dem97\]. 
If $L$ is any line bundle such that the non-finite locus $\Sigma_m$ of $mL$ is distinct from $X$ for some $m$, then, for every line bundle $G$, the base locus of $|kL-G|$ is contained in $\Sigma_m$ for $k$ big enough. We then take $G$ to be ample, and set $T_m:=\frac{1}{k}(T_{|kL-G|}+\omega)$ with $k$ big enough so that $B_{|kL-G|}\subset\Sigma_m$ and $\omega$ a Kähler form in $c_1(G)$.\ To prove lemma 5.3, note that $|mL|$ is not empty, so we can select a modification $\mu:{\widetilde}{X}\to X$ such that $\mu^{\star}|mL|=|{\widetilde}{L}|+F$, where $|{\widetilde}{L}|$ is base point free. It is immediate to check that it is enough to prove the lemma for ${\widetilde}{L}$, so we can assume from the beginning that $|L|$ is base-point free, with $m=1$. We set $\phi:=\phi_{|L|}:X\to{\mathbf{P}}^N$ and $\Sigma:=\Sigma_1$. Upon adding a sufficiently ample line bundle to $G$, it is also clear that we may assume $G$ to be very ample. If $x\in X$ lies outside $\Sigma$, the fiber $\phi^{-1}(\phi(x))$ is a finite set, so we can find a divisor $D\in|G|$ which doesn’t meet it. Therefore we have $\phi(x)\in{\mathbf{P}}^N-\phi(D)$, so that for $k$ big enough there exists $H\in|{\mathcal{O}}_{{\mathbf{P}}^N}(k)|$ with $H\geq\phi_{\star}D$ which doesn’t pass through $\phi(x)$. The effective divisor $\phi^{\star}H-D$ is then an element of $|kL-G|$ which doesn’t pass through $x$. The upshot is: for every $x\in X$ outside $\Sigma$, we have $x\in X-B_{|kL-G|}$ for $k$ big enough. By Noetherian induction, we therefore find $k$ big enough such that $B_{|kL-G|}$ is contained in $\Sigma$, as was to be shown. Minimal Lelong numbers ---------------------- When $L$ is a big ${\mathbf{R}}$-divisor, we denote by $L_k:=\lfloor kL\rfloor$ the round-down of $kL$, and by $R_k:=kL-L_k$ the fractional part of $kL$. We then consider the sequence $\nu(|L_k|,x)$. It is easily seen to be subadditive, and therefore $\nu(||L||,x):=\lim_{k\to+\infty}\frac{1}{k}\nu(|L_k|,x)$ exists. We then prove the following: If $L$ is a big ${\mathbf{R}}$-divisor on $X$ and $\alpha:=\{L\}\in NS(X)_{{\mathbf{R}}}$, then $$\nu(\alpha,x)=\nu(||L||,x)$$ for every $x\in X$. $Proof$: let $L=\sum a_jD_j$ be the decomposition of $L$ into its prime components. We choose arbitrary smooth forms $\eta_j$ in $\{D_j\}$, and denote by $\tau_k:=\sum(ka_j-\lfloor ka_j\rfloor)\eta_j$ the corresponding smooth form in $\{R_k\}$. Since $\tau_k$ has bounded coefficients, we can choose a fixed Kähler form $\omega$ such that $-\omega\leq\tau_k\leq\omega$ for every $k$. If $E$ is an effective divisor in $|L_k|$, then $1/k([E]+\tau_k)$ is a current in $\alpha[-1/k\omega]$, therefore $\frac{1}{k}\nu(E,x)\geq\nu(T_{\min,1/k},x)$, where $T_{\min,1/k}$ is a current with minimal singularities in $\alpha[-1/k\omega]$, and this yields $\lim_{k\to\infty}\frac{1}{k}\nu(|L_k|,x)\geq\lim_{k\to\infty}\nu(T_{\min,1/k},x)=\nu(\alpha,x)$. In the other direction, we use an argument related to that of \[DEL00\], Theorem 1.11. 
The Ohsawa-Takegoshi-Manivel $L^2$ extension theorem says in particular that if we are given a Hermitian line bundle $(A,h_A)$ with sufficiently positive curvature form, then for every pseudo-effective line bundle $G$ and every singular Hermitian metric $h$ on $G$ with positive curvature current $T\in c_1(G)$ and every $x\in X$, the evaluation map $$H^0(X,\mathcal{O}(G+A)\otimes\mathcal{I}(T))\to\mathcal{O}_x(G+A)\otimes\mathcal{I}(T)_x$$ is surjective, with an $L^2$ estimate independent of $(G,h)$ and $x\in X$.\ We now fix a Hermitian line bundle $(A,h_A)$ with a sufficiently positive curvature form $\omega_A$ to satisfy the Ohsawa-Takegoshi theorem. We select a positive current with minimal singularities $T_{\min}$ in $\alpha$, and also a Kähler current $T$ in $\alpha$, which is big by assumption; we can then find almost pluri-subharmonic functions $\varphi_{\min}$ and $\varphi$ on $X$ such that $T_{\min}-dd^c\varphi_{\min}$ and $T-dd^c\varphi$ are smooth. We set $G_k:=L_k-A=kL-R_k-A=(k-k_0)L+(k_0L-R_k-A)$, and fix $k_0$ big enough so that $k_0T-\omega-\omega_A$ is a Kähler current. For $k\geq k_0$, the current $T_k:=(k-k_0)T_{\min}+(k_0T-\tau_k-\omega_A)$ is then a positive current in $c_1(G_k)$, thus we can choose for each $k$ a smooth Hermitian metric $h_k$ on $G_k$ such that $T_k$ is the curvature current of the singular Hermitian metric $\exp(-2(k-k_0)\varphi_{\min}-2k_0\varphi)h_k$. Applying the Ohsawa-Takegoshi to $G_k$ equipped with this singular Hermitian metric, we thus get a section $\sigma\in H^0(X,L_k)$ such that $$h_k(\sigma(x))\exp(-2(k-k_0)\varphi_{\min}(x)-2\varphi(x))=1$$ and $$\int_Xh_k(\sigma)\exp(-2(k-k_0)\varphi_{\min}-2\varphi)dV\leq C_1,$$ where $C_1$ does not depend on $k$ and $x$. If we choose a basis $\sigma_1,...,\sigma_l$ of $H^0(X,L_k)$, we infer from this that $$\varphi_{\min}(x)+\frac{1}{k-k_0}\varphi(x)=\frac{1}{2(k-k_0)}\log h_k(\sigma(x))$$ $$\leq\frac{1}{2(k-k_0)}\log\sum h_k(\sigma_j(x))+C_2,$$ where $C_2$ does not depend on $x$. The latter inequality comes from the bound on the $L^2$ norm of $\sigma$, since the $L^2$ norm dominates the $L^{\infty}$ norm. Therefore $$\frac{1}{k-k_0}\nu(|L_k|,x)\leq\nu(\varphi_{\min},x)+\frac{C_3}{k-k_0},$$ where $C_3$ is a bound on the Lelong numbers of $T$. If we let $k\to\infty$ in the last inequality, we get $\nu(||L||,x)\leq\nu(\alpha,x)$ as desired. Zariski decompositions of a divisor ----------------------------------- The usual setting for the problem of Zariski decompositions is the following: let $X$ be a projective manifold, and $L$ a divisor on it. One asks when it is possible to find two ${\mathbf{R}}$-divisors $P$ and $N$ such that: \(i) $L=P+N$ \(ii) $P$ is nef, \(iii) $N$ is effective, \(iv) $H^0(X,kL)=H^0(X,\lfloor kP\rfloor)$ for all $ k>0$, where the round-down $\lfloor F\rfloor$ of an ${\mathbf{R}}$-divisor $F$ is defined coefficient-wise. This can of course happen only if $L$ is already pseudo-effective. When this is possible, one says that $L$ admits a Zariski decomposition (over ${\mathbf{R}}$ or ${\mathbf{Q}}$, depending whether the divisors are real or rational). We want to show that, for a big divisor $L$, this can be read off the negative part $N(\{L\})$. Let $L$ be a big divisor on $X$, and let $N(L):=N(\{L\})$ and $P(L):=L-N(L)$. Then $L=P(L)+N(L)$ is the unique decomposition $L=P+N$ into a modified nef ${\mathbf{R}}$-divisor $P$ and an effective ${\mathbf{R}}$-divisor $N$ such that the canonical inclusion $H^0(\lfloor kP\rfloor)\to H^0(kL)$ is an isomorphism for each $k>0$. 
$Proof$: first, we have to check that $H^0(X,kL)=H^0(X,\lfloor kP(L)\rfloor)$. If $E$ is an effective divisor in the linear system $|kL|$, we have to see that $E\geq\lceil kN(L)\rceil$. But $\frac{1}{k}[E]$ is a positive current in $\{L\}$, thus $E\geq kN(L)$, and so $E\geq\lceil kN(L)\rceil$ since $E$ has integer coefficients.\ Conversely, assume that $L=P+N$ is a decomposition as in theorem 5.5. We have to show that $N=N(L)$, i.e. $\nu(\{L\},D)=\nu(N,D)$ for every prime $D$. In view of theorem 5.4, this will be a consequence of the following lemma: Suppose that a big divisor $L$ can be written $L=P+N$, where $P$ is an ${\mathbf{R}}$-divisor and $N$ is an effective ${\mathbf{R}}$-divisor such that $H^0(X,kL)=H^0(X,\lfloor kP\rfloor)$ for every $k>0$. Then we have: \(i) If $P$ is nef, then $\nu(||L||,x)=\nu(N,x)$ for every $x\in X$. \(ii) If $P$ is modified nef, then $\nu(||L||,D)=\nu(N,D)$ for every prime $D$. $Proof$: the assumption $H^0(X,kL)=H^0(X,\lfloor kP\rfloor)$ means precisely that for every $E\in|kL|$ we have $E\geq\lceil kN\rceil$, thus $\frac{1}{k}\nu(|kL|,x)\geq\sum\frac{\lceil ka_j\rceil}{k}\nu(D_j,x)$ if we write $N=\sum a_jD_j$. We deduce from this the inequality $\lim_{k\to\infty}\frac{1}{k}\nu(|kL|,x)\geq\sum a_j\nu(D_j,x)=\nu(N,x)$. To get the converse inequalities, notice that $$\nu(|kL|,x)\leq\nu(|P_k|,x)+\nu(kN,x)$$ with $P_k:=\lfloor kP\rfloor$ as before; dividing this out by $k$ and letting $k\to+\infty$, we deduce $$\lim_{k\to\infty}\frac{1}{k}\nu(|kL|,x)\leq\lim_{k\to\infty}\frac{1}{k}\nu(kN,x)=\nu(N,x)$$ when $P$ is nef, since $\nu(\{P\},x)=\lim_{k\to\infty}\frac{1}{k}\nu(P_k,x)$ is then always zero, and similarly with $D$ in place of $x$ when $P$ is modified nef (remark that $P$ is big because $L$ is). This concludes the proof of theorem 5.5. Let $L$ be a big divisor on $X$, and assume that $\nu(\{L\},D)$ is irrational for some irreducible divisor $D$. Then there cannot exist a modification $\mu:{\widetilde}{X}\to X$ such that $\mu^{\star}L$ admits a Zariski decomposition over ${\mathbf{Q}}$. $Proof$: if a modification $\mu$ as stated exists, then the negative part $N(\mu^{\star}L)$ has to be rational by theorem 5.5, and we get a contradiction using the following easy lemma: Let $\alpha$ be a pseudo-effective class on $X$, and let $\mu:{\widetilde}{X}\to X$ be a modification. Then we have $$N(\alpha)=\mu_{\star}N(\mu^{\star}\alpha).$$ $Proof$: very easily checked using that a modification is an isomorphism in codimension 1. ### An example of Cutkosky. We propose to analyze in our setting an example due to S.D. Cutkosky \[Cut86\] of a big line bundle $L$ on a 3-fold $X$ whose divisorial Zariski decomposition is not rational, but whose Zariski projection $Z(\{L\})$ is nef. We start from any projective manifold $Y$ for which ${\mathcal{N}}_Y={\mathcal{E}}_Y$. Thus $Y$ might be a smooth curve or any manifold with nef tangent bundle (cf. \[DPS94\]). We pick two very ample divisors $D$ and $H$ on $Y$, and consider $X:={\mathbf{P}}({\mathcal{O}}(D)\oplus{\mathcal{O}}(-H))$, with its canonical projection $\pi:X\to Y$. If we denote by $L:={\mathcal{O}}(1)$ the canonical relatively ample line bundle on $X$, then it is well known that $${H^{1,1}(X,{\mathbf{R}})}=\pi^{\star}H^{1,1}(Y,{\mathbf{R}})\oplus{\mathbf{R}}L.$$ Since $D$ is ample, $L$ is big, but it won’t be nef since $-H$ is not. We are first interested in the divisorial Zariski decomposition of $L$. We have a hypersurface $E:={\mathbf{P}}({\mathcal{O}}(-H))\subset X$, and since $D$ has a section, we see that $E+\pi^{\star}D\in |L|$.
Therefore we get $N(L)\leq N(\pi^{\star}D)+E$; but $\pi^{\star}D$ is nef, so $N(\pi^{\star}D)=0$, and we deduce $N(L)\leq E$. Consequently, $N(L)=\mu_LE$ for some $0\leq\mu_L\leq 1$, and $L=Z(L)+\mu_L E$. We claim that $$\mu_L=\min\{t>0, (L-tE)_{|E}\in{\mathcal{N}}_E\}.$$ First, we have $L-tE=\pi^{\star}D+(1-t)E$, and since $\pi^{\star}D$ is nef, we get that the non-nef locus $E_{nn}(L-tE)$ is contained in $E$ for $0<t<1$. Therefore $L-tE\in{\mathcal{N}}_X$ iff $(L-tE)_{|E}\in{\mathcal{N}}_E$. If this is the case, we have $N(L)\leq N(L-tE)+tE=tE$, and thus $t\geq\mu_L$. Conversely, since $L-\mu_L E=Z(L)$ lies in ${\mathcal{MN}}$, we get that $Z(L)_{|E}\in{\mathcal{E}}_E={\mathcal{N}}_E$ by proposition 2.4 (since $E$ is isomorphic to $Y$ via $\pi$), and we deduce the equality. Now, notice that the projection $\pi$ induces an isomorphism $E\to Y$ such that $L$ becomes $-H$ and thus $E_{|E}$ becomes $-D-H$. The condition $(L-tE)_{|E}\in{\mathcal{N}}_E$ is turned into $-H+t(D+H)\in{\mathcal{N}}_Y$, and we get in the end $$\mu_L=\min\{t>0, -H+t(D+H)\in{\mathcal{N}}_Y\}$$ The picture can be made more precise: \(i) The nef cone ${\mathcal{N}}_X$ is generated by $\pi^{\star}{\mathcal{N}}_Y$ and $L+\pi^{\star}H$. \(ii) The pseudo-effective cone ${\mathcal{E}}_X$ is generated by $\pi^{\star}{\mathcal{N}}_Y$ and by $E$. \(iii) The only exceptional divisor on $X$ is $E$, and the modified Kähler cone coincides with the Kähler cone. The Zariski projection $Z(\alpha)$ of a pseudo-effective class $\alpha$ is thus the projection of $\alpha$ on ${\mathcal{N}}_X$ parallel to ${\mathbf{R}}_+ E$. $Proof$: given line bundles $L_1,...,L_r$ on a compact Kähler manifold $Y$, a class $\alpha=\pi^{\star}\beta$ over $X:={\mathbf{P}}(L_1\oplus...\oplus L_r)$ is nef (resp. pseudo-effective) iff $\beta$ is. A class $\alpha={\mathcal{O}}(1)+\pi^{\star}\beta$ is nef iff $\beta+L_j$ is nef for all $j$, and $\alpha$ is big iff the convex cone generated by $\beta+L_1,...,\beta+L_r$ meets the big cone of $Y$, which condition is equivalent (by homogeneity) to: $\beta+$conv$(L_1,...,L_r)$ meets the big cone; finally $\alpha$ is pseudo-effective iff $\beta+$conv$(L_1,...,L_r)$ meets ${\mathcal{E}}_Y$. In our case $\alpha=\pi^{\star}\beta+L$ is thus nef iff $\beta-H$ is nef, and $\alpha$ is pseudo-effective iff $\beta+[-H,D]$ meets ${\mathcal{N}}_Y$. The latter condition is clearly equivalent to $\beta+D\in{\mathcal{N}}_Y$. Now an arbitrary class $\alpha$ on $X$ can be written uniquely as $\alpha=tL+\pi^{\star}\beta$. If $\alpha$ is pseudo-effective, then $t\geq 0$ (since $L$ is relatively ample); if $t=0$, then $\alpha\in\pi^{\star}{\mathcal{N}}_Y$. Otherwise, we may assume by homogeneity that $t=1$, and thus (i) and (ii) follow from the above discussion.\ By (ii), a pseudo-effective class $\alpha$ can be written $\pi^{\star}\beta+tE$ with $\beta$ nef. Therefore we get $N(\alpha)\leq tE$, and $E$ is thus the only exceptional divisor on $X$. In fact, we even have $E_{nn}(\alpha)\subset E$, and thus $\alpha$ is nef iff $\alpha_{|E}$ is nef. In particular, we see that ${\mathcal{MK}}={\mathcal{K}}$ as desired (use proposition 2.4 again).\ We now assume that $Y$ is a surface. The assumption ${\mathcal{N}}_Y={\mathcal{E}}_Y$ implies that ${\mathcal{N}}_Y=\overline{{\mathcal{P}}}_Y={\mathcal{E}}_Y$, and $\mu_L$ is nothing but the least of the two roots of the quadratic polynomial in $t$ $(-H+t(D+H))^2$; it will thus be irrational for most choices of $H$ and $D$ (on, say, an abelian surface).
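To make the last remark concrete (the following expansion is only meant as an illustration, with all intersection numbers computed on $Y$): one has $$\big(-H+t(D+H)\big)^2=(D+H)^2\,t^2-2\,H\cdot(D+H)\,t+H^2\,,$$ so that the least root is $$\mu_L=\frac{H\cdot(D+H)-\sqrt{\big(H\cdot(D+H)\big)^2-(D+H)^2\,H^2}}{(D+H)^2}\,,$$ the discriminant being non-negative by the Hodge index theorem. In particular $\mu_L$ is irrational as soon as this discriminant is not a perfect square.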
This already yields that the divisorial Zariski decomposition of the rational class $c_1(L)$ will not be rational in general, that is, the analogue of corollary 4.11 is not true in general on a 3-fold.\ Since $Z(L)$ is nef, the volume of $L$ is just $v(Z(L))=Z(L)^3$, with $Z(L)=(1-\mu_L)L+\mu_L\pi^{\star}D$. The cubic intersection form is explicit on ${H^{1,1}(X,{\mathbf{R}})}$ from the relations $$L^3-\pi^{\star}(D-H)\cdot L^2-D\cdot H\cdot L=0$$ and $\pi_{\star}L=1$, $\pi_{\star}L^2=D-H$, thus we can check that $v(L)$ is an explicit polynomial of degree 3 in $\mu_L$ which is also irrational for most choices of $D$ and $H$. We conclude: there exists a big line bundle on a projective 3-fold with an irrational volume, by contrast with proposition 4.12. References. =========== - [**\[Bou01\]**]{} Boucksom, S. — [*Le cône kählerien d’une variété hyperkählerienne*]{}, C.R.A.S. (2001), –. - [**\[Bou02\]**]{} Boucksom, S. — [*On the volume of a line bundle*]{}, math.AG/0201031 (2002). - [**\[Cut86\]**]{} Cutkosky, S.D. — [*Zariski decomposition of divisors on algebraic varieties*]{}, Duke Math. J. [**53**]{} (1986), 149–156. - [**\[Dem82\]**]{} Demailly, J.-P. — [*Estimations $L^2$ pour l’opérateur $\overline{\partial}$ d’un fibré vectoriel holomorphe semi-positif au dessus d’une variété kählerienne complète*]{}, Ann. Sci. Ecole Norm. Sup. [**15**]{} (1982), 457–511. - [**\[Dem92\]**]{} Demailly, J.-P. — [*Regularization of closed positive currents and intersection theory*]{}, J. Alg. Geom. [**1**]{} (1992), 361–409. - [**\[Dem97\]**]{} Demailly, J.-P. — [*Algebraic criteria for Kobayashi hyperbolic projective varieties and jet differentials*]{}, Proc. Symp. Pure Math. [**62.2**]{} (1997). - [**\[DPS94\]**]{} Demailly, J.-P.; Peternell, T.; Schneider, M. — [*Compact complex manifolds with numerically effective tangent bundles*]{}, J. Alg. Geom. [**3**]{} (1994), 295-345. - [**\[DEL00\]**]{} Demailly, J.-P.; Ein, L.; Lazarsfeld, R. — [*A subadditivity property of multiplier ideals*]{}, math.AG/0002035 (2000). - [**\[DPS00\]**]{} Demailly, J.-P.; Peternell, T.; Schneider, M. — [*Pseudoeffective line bundles on compact Kähler manifolds*]{}, math.AG/0006205 (2000). - [**\[Fuj79\]**]{} Fujita, T. — [*On Zariski problem*]{}, Proc. Japan Acad., Ser. A [**55**]{} (1979), 106–110. - [**\[Fuj89\]**]{} Fujita, T. — [*Remarks on quasi-polarized varieties*]{}, Nagoya Math. J. [**115**]{} (1989), 105–123. - [**\[Har77\]**]{} Hartshorne, R. — [*Algebraic geometry*]{}, Springer Verlag, GTM [**52**]{} (1977). - [**\[Huy99\]**]{} Huybrechts, D. — [*The Kähler cone of a compact hyperkähler manifold*]{}, math.AG/9909109 (1999). - [**\[Lam99\]**]{} Lamari, A. — [*Courants kähleriens et surfaces compactes*]{}, Ann. Inst. Fourier [**49**]{} (1999), 249–263. - [**\[Pau98\]**]{} Paun, M. — [*Sur l’effectivité numérique des images inverses de fibrés en droites*]{}, Math. Ann. [**310**]{} (1998), 411–421. - [**\[Siu74\]**]{} Siu, Y.T. — [*Analyticity of sets associated to Lelong numbers and the extension of closed positive currents*]{}, Invent. Math. [**27**]{}(1974), 53–156. - [**\[Zar62\]**]{} Zariski, O. — [The theorem of Riemann-Roch for high multiples of an effective divisor on an algebraic surface]{}, Ann. of Math. (2) [**76**]{} (1962), 560–615.
--- abstract: 'In this paper we construct unstable shocks in the context of 2D isentropic compressible Euler in azimuthal symmetry. More specifically, we construct initial data that when viewed in self-similar coordinates, converges asymptotically to the unstable $C^{\frac15}$ self-similar solution to the Burgers’ equation. Moreover, we show the behavior is stable in $C^8$ modulo a two dimensional linear subspace. Under the azimuthal symmetry assumption, one cannot impose additional symmetry assumptions in order to isolate the corresponding unstable manifold: rather, we rely on modulation variable techniques in conjunction with a Newton scheme.' author: - '[ [^1] ]{}' - '[[^2] ]{}' bibliography: - 'euler.bib' title: '[**Formation of unstable shocks for 2D isentropic compressible Euler** ]{}' --- Introduction ============ Setup of Compressible Euler under azimuthal symmetry ---------------------------------------------------- In this paper we study asymptotically self-similar formation of unstable shocks for the 2D isentropic compressible Euler equations under azimuthal symmetry. The 2D isentropic compressible Euler equations take the form \[eq:Euler\] $$\begin{aligned} \partial_t (\rho u) + {\ensuremath{\mathrm{div\,}}}(\rho\, u \otimes u) + \nabla p(\rho) &= 0 \,, \label{eq:momentum} \\ \partial_t \rho + {\ensuremath{\mathrm{div\,}}}(\rho u)&=0 \,, \label{eq:mass}\end{aligned}$$ where $u :\mathbb{R}^2 \times \mathbb{R} \to \mathbb{R}^2 $ is the velocity of the fluid, $\rho: \mathbb{R}^2 \times \mathbb{R} \to \mathbb{R} _+$ is the density, and $p: \mathbb{R}^2 \times \mathbb{R} \to \mathbb{R} _+$ is the pressure defined by the ideal gas law $$p(\rho) := \tfrac{1}{\gamma} \rho^\gamma\,, \qquad \gamma >1 \,.$$ The associated sound speed $\sigma$ is given by $\sigma=\rho^\lambda$ where $\lambda=\frac{\gamma-1}{2}$. It was shown in [@BuShVi2019], that if one imposes the following azimuthal symmetry $$\label{e:symmetry} u(x,t)\cdot \frac{x}{{\left|x\right|}}= r a(\theta,t), \quad u(x,t)\cdot \frac{x^{\perp}}{{\left|x\right|}}=r b(\theta,t),\quad \rho=r^{\frac2{\gamma-1}} P(\theta,t),$$ where $(r,\theta)$ are the usual polar coordinates, then the equations reduce to the 1D system of equations \[eq:Euler:polar3\] $$\begin{aligned} \label{g3_a_evo} \left(\partial_t + b\partial_{\theta}\right) a + a^2-b^2+ \lambda^{-1} P^{2 \lambda } &=0 \\ \label{g3_b_evo} \left(\partial_t + b\partial_{\theta}\right)b+2a b+ P^{ 2 \lambda -1}\partial_\theta P&=0 \\ \label{g3_P_evo} \left(\partial_t + b\partial_{\theta}\right) P+ \tfrac{\gamma}{ \lambda } a P+ P {\ensuremath{\partial}}_\theta b &=0 \, .\end{aligned}$$ An important difference between Euler under azimuthal rather than radial symmetry is that azimuthal symmetry allows for the presence of non-trivial vorticity. We remark that it was shown in [@BuShVi2019], that the system is locally well-posed in $C^n$ for any $n\geq 1$. In order to avoid issues regarding the irregularity at the origin $r=0$, and in order to ensure finite kinetic-energy, following [@BuShVi2019], we can exploit locality and restrict the solution to the push forward of an annulus under the flow induced by $u$. 
To be more precise, define $A _{\underline{r},\overline{r}}$ to be the annular region $$\begin{aligned} A _{\underline{r},\overline{r}}= \{ x\in\mathbb R^2 \colon \underline{r} < {\left|x\right|} < \overline{r} \} \,.\end{aligned}$$ Fix $0< r_0 < r_1$; then, if $\eta_u$ is the solution to ${\ensuremath{\partial}}_t \eta_u = u \circ \eta_u$ for $t>t_0$ with $\eta_u(x,t_0)=x$, define the time dependent domain $$\label{Omega} \Omega(t)= \eta_u(A_{r_0,r_1},t)\,.$$ Now set $0< R_0 < r_0 < r_1 <R_1$ and let $K>0$. Assuming that ${\left|u\right|}\leq K$ for all $(x,t)\in A _{R_0,R_1}\times [t_0,T_*)$, it follows that $$\Omega(t)\subset A_{R_0,R_1} \quad \text{ for } \quad t \in [t_0,T_*]\,,$$ so long as ${\left|T_*-t_0\right|}$ is assumed to be sufficiently small (depending on $r_0$, $r_1$, $R_0$, $R_1$ and $K$). Then, given a solution $(a,b,P)$ to the one-dimensional system above, we relate it to a solution of the 2D Euler equations via the symmetry ansatz, restricted to the domain $\Omega$ defined above. Brief historical overview ------------------------- The formation of shocks is a classical problem in hyperbolic PDE. The first rigorous proof of shock formation is due to the pioneering work of Lax [@Lax1964] that employed invariants devised by Riemann [@Ri1860] and the method of characteristics. The work of Lax was further generalized and refined by John [@John1974], Liu [@Li1979], and Majda [@Ma1984] (cf. [@Da2010]). In the multi-dimensional setting, Sideris in [@Si1997] demonstrated, using a virial type argument, the existence of solutions that form singularities in finite time. The method of proof does not however lead to a classification of the type of singularity produced. The first proof of shock formation in the multi-dimensional setting was given by Christodoulou [@Ch2007], whereby he proved shock formation in the irrotational, relativistic setting. The work was later generalized to the non-relativistic, irrotational setting [@ChMi2014], and then further extended by Luk and Speck to the 2D setting with non-trivial vorticity [@LuSp2018]. It is important to note that while the cited works are capable of proving shock formation (or simply singularity formation in the case of Sideris), the methods of proof are incapable of distinguishing precise information on the shock’s profile. For example, none of the cited works determine whether the shock occurs at one specific location or whether multiple shocks occur simultaneously. In the recent work by the first author, Shkoller and Vicol [@BuShVi2019], it was shown that in 2D under the azimuthal symmetry one can prove the existence of stable shocks (stable with respect to perturbations that preserve the azimuthal symmetry) whose self-similar profile can be precisely described. This work was extended in [@BuShVi2019b] to 3D in the absence of any symmetry assumption, and further extended to the non-isentropic case in [@BuShVi2019c]. In a different direction, we would like to also bring to the reader’s attention the remarkable recent works of Merle, Raphael, Rodnianski, and Szeftel, [@MRRS1], [@MRRS2], which demonstrated the existence of radially symmetric imploding solutions to the isentropic Euler equation – a completely new form of singularity for the Euler equations.
Unstable shocks for the Burgers’ equation {#s:burgers} ----------------------------------------- Before we state a rough version of the main theorem, let us first review the concept of an *unstable shock* in the context of the 1D Burgers’ equation: $$\label{eq:Burgers} {\ensuremath{\partial}}_t w+ w {\ensuremath{\partial}}_y w = 0\,\quad\mbox{for }y\in\mathbb R\,.$$ The Burgers’ equation satisfies the following four invariances: 1. Galilean symmetry: If $w(y,t)$ is a solution, then $w(y-vt,t)+v$ is a solution for any $v\in\mathbb R$. 2. Temporal rescaling: If $w(y,t)$ is a solution, then $\lambda w(y,\lambda t)$ is a solution for any $\lambda>0$. 3. Translation invariance: If $w(y,t)$ is a solution, then $w(y-y_0, t)$ is a solution for any $y_0\in\mathbb R$. 4. Spatial rescaling: If $w(y,t)$ is a solution, then $\lambda^{-1} w(\lambda y,t)$ is a solution for any $\lambda>0$. Any initial data $w_0$ with a negative slope at some point $y_0$ will shock in finite time. Let us assume that the slope of $w_0$ attains a negative global minimum. By temporal rescaling and translation invariance, without loss of generality, we may assume the global minimum slope is $-1$, occurring at $y=0$. Let us take the initial time to be $t=-1$. By Galilean symmetry, without loss of generality, we may further assume $w_0(0)=0$; it then follows by the method of characteristics that the solution $w$ will shock at $(y,t)=(0,0)$. If in addition $w'''_0(0) =\nu> 0$, then the solution $w$ will converge asymptotically at the blow-up to a self-similar profile ${\overline}W_1$; in particular, we have $$\begin{aligned} \label{limit:1} \lim_{t\rightarrow 0} (-t)^{-\frac12}w(x(-t)^{\frac32},t)= \left(\frac{\nu}{6}\right)^{-\frac12}{\overline}{W}_1\left(\left(\frac{\nu}{6}\right)^{\frac12}x\right)\,,\end{aligned}$$ for any $x\in\mathbb R$, where $$\begin{aligned} \label{explicit:1} {\overline}{W}_1(x) = \left(- \frac x2 + \left(\frac{1}{27} + \frac{x^2}{4}\right)^{\frac 12}\right)^{\frac 13} - \left( \frac x2 + \left(\frac{1}{27} + \frac{x^2}{4}\right)^{\frac 12} \right)^{\frac 13} \,.\end{aligned}$$ Note that one can fix $\nu$ by making use of the spatial rescaling invariance of Burgers’ equation. The shock profile is stable in the sense that given any initial data in a suitably small $C^4$ neighborhood of $w_0$, the resulting solution will satisfy the analogue of the limit above, modulo the invariances of Burgers’ equation. The profile ${\overline}W_1$ (together with its $\nu$-rescaling given on the right hand side of the limit above) satisfies the following self-similar Burgers’ equation $$-\frac{1}{2} {\overline}W_1 + \left( \frac{3}{2}x + {\overline}W_1 \right) \partial_x {\overline}W_1=0\,.$$ In addition to ${\overline}{W}_1$ defined above, the Burgers’ equation admits a countable family of smooth self-similar profiles [@Fontelos]. For each $i\in\mathbb N$, there exists a unique non-trivial analytic profile ${\overline}{W}_i$ satisfying the ODE $$-\frac 1{2i} {\overline}{W}_i + \left( \frac{(2i+1)x}{2i} + {\overline}{W}_i \right) \partial_x {\overline}{W}_i = 0$$ such that $$w_i(x,t)=(-t)^{\frac1{2i}}{\overline}{W}_i(x(-t)^{-\frac{2i+1}{2i}})\,,$$ defines a self-similar solution to the Burgers’ equation. Unlike ${\overline}{W}_1$, the solutions ${\overline}{W}_{i}$ for $i>1$ are unstable: generic small perturbations of the initial data $w_i(\cdot,-1)$ lead to singularities described by the stable self-similar profile ${\overline}W_1$. Indeed, a generic smooth perturbation of $w_i(x,-1)$ leads to initial data whose slope attains its global minimum at a point where the third derivative is positive, which by the discussion above leads to a shock with asymptotic profile ${\overline}W_1$.
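For the reader’s convenience, we record a quick check (not needed in the sequel) that such profiles can be described implicitly. If ${\overline}{W}_i$ is defined implicitly by $x=-{\overline}{W}_i-{\overline}{W}_i^{2i+1}$ (for $i=1$ this is precisely the explicit formula above), then implicit differentiation gives $\partial_x {\overline}{W}_i=-\big(1+(2i+1){\overline}{W}_i^{2i}\big)^{-1}$, while $$\frac{(2i+1)x}{2i}+{\overline}{W}_i=-\frac{1}{2i}\,{\overline}{W}_i\big(1+(2i+1){\overline}{W}_i^{2i}\big)\,,$$ so that $$-\frac 1{2i} {\overline}{W}_i + \left( \frac{(2i+1)x}{2i} + {\overline}{W}_i \right) \partial_x {\overline}{W}_i = -\frac 1{2i}{\overline}{W}_i+\frac 1{2i}{\overline}{W}_i=0\,,$$ which is the self-similar ODE above.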
The profiles ${\overline}{W}_{i}$ for $i>1$ are nevertheless stable modulo a finite-codimension set of initial data: Suppose we are given initial data $w_0$ whose slope attains a global minimum; as a consequence of the invariances of Burgers’ equation, we may further assume $w_0(0)=0$ and $w_0'(0)=-1$. If we further assume that $w_0^{(n)}(0)=0$ for $n=2,\dots,2i$ and that $w_0^{(2i+1)}(0)=\nu>0$, then $$\begin{aligned} \label{limit:2} \lim_{t\rightarrow 0} (-t)^{-\frac1{2i}}w(x(-t)^{\frac{2i+1}{2i}},t)= \left(\frac{\nu}{(2i+1)!}\right)^{-\frac1{2i}}{\overline}{W}_i\left(\left(\frac{\nu}{(2i+1)!}\right)^{\frac1{2i}}x\right)\,,\end{aligned}$$ for all $x\in\mathbb R$. Thus the initial data leading to the unstable shock profiles ${\overline}W_i $ for $i>1$ are described by an unstable manifold of finite codimension. Our main objective in this work is to identify an analogous unstable manifold, $\mathcal{M}_U$, for the compressible Euler equations which leads to unstable blowup dynamics according to the profile ${\overline}{W}_2$. Unlike the case of Burgers described above, the specification of $\mathcal{M}_U$ is not explicit, and must be found via a very careful Newton scheme. Rough statement of main theorem ------------------------------- In this paper, we prove the existence of asymptotically self-similar solutions to the 2D isentropic compressible Euler equations under azimuthal symmetry that, under the appropriate self-similar transformations, are described by the self-similar Burgers’ profile ${\overline}W_2$: There exist initial data $(a_0,b_0, P_0)$ in $C^8$ for which the corresponding solutions $(a,b,P)$ to the system develop a $C^{\frac15}$-cusp singularity in finite time. At blow-up, the solutions $(a,b,P)$ form a singularity at a unique angle; moreover, the singularities may be described in terms of the self-similar Burgers’ profile ${\overline}W_2$ in a manner made precise in Theorem \[thm:general\]. The behavior described is stable in $C^8$ with regard to the initial data modulo a two-dimensional linear subspace. We note that analogous results exist for the Burgers’ equation with transverse viscosity [@CoGhMa2018], the Prandtl equations [@CoGhIbMa18; @CoGhMa19] and the Burgers-Hilbert equation [@yang2020shock]. We also note that the formation of *unstable* blow-up (in a sense analogous to that defined and discussed below) in the context of Bourgain-Wang solutions to NLS was obtained in [@MR3086066] through virial type identities and backwards integration techniques. These papers, however, rely on a symmetry to constrain the position of the singularity, which leads to a comparatively simple classification of initial data leading to unstable blow-up profiles. Isentropic Euler does not satisfy analogous symmetries, leading us to develop a new shooting method in order to describe initial data leading to unstable blowup. We believe that the techniques developed are suitably malleable and could find use in proving the existence of unstable blowup for other PDEs. Statement of main theorem ========================= Riemann invariants ------------------ Before we can state our main theorem, we must first introduce the concept of Riemann invariants, since it is our aim to show that we can prescribe initial data such that one of the Riemann invariants shocks according to the self-similar profile ${\overline}W_2$.
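Before turning to the Euler system, we record a brief numerical illustration of the above statement in the case $i=2$ (a sketch only; the data, grid and tolerances below are illustrative choices and play no role in the analysis). We take $w_0(y)=-y+y^5$ at $t=-1$, so that $\nu=120$, solve Burgers’ equation exactly by the method of characteristics, and compare with ${\overline}{W}_2$ computed from the implicit description $x=-{\overline}{W}_2-{\overline}{W}_2^{5}$:

```python
# Minimal numerical sketch (illustrative parameters only): the exact Burgers solution
# with data w0(y) = -y + y^5 at t = -1 approaches the profile bar-W_2 in self-similar
# variables as t -> 0^-.
import numpy as np
from scipy.optimize import brentq

w0 = lambda y: -y + y**5
W2 = lambda x: brentq(lambda W: -W - W**5 - x, -2 - abs(x), 2 + abs(x), xtol=1e-13)

def w(theta, t):
    # characteristics: theta = y0 + (t + 1) * w0(y0) and w(theta, t) = w0(y0);
    # the map y0 -> theta is strictly increasing for -1 <= t < 0, so brentq applies
    y0 = brentq(lambda y: y + (t + 1.0) * w0(y) - theta, -10.0, 10.0, xtol=1e-13)
    return w0(y0)

xs = np.linspace(-3, 3, 13)
for t in [-1e-1, -1e-2, -1e-3]:
    err = max(abs((-t) ** (-0.25) * w(x * (-t) ** 1.25, t) - W2(x)) for x in xs)
    print(f"t = {t:9.0e}:  sup_x |(-t)^(-1/4) w(x(-t)^(5/4), t) - W_2(x)| = {err:.2e}")
```

The printed error decreases as $t\to 0^-$, consistent with the limit above; generic perturbations of this data instead produce the stable behavior described by ${\overline}{W}_1$.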
As was done in [@BuShVi2019], in order to diagonalize the system - and isolate the Burgers-like behavior of the shock development, we will rewrite - in terms of the Riemann invariants $$w= b+ {\frac{1}{\lambda }} P^\lambda \,, \qquad z= b- {\frac{1}{\lambda }} P^\lambda \,,$$ and the wave speeds $$\Lambda_1= b - P^\lambda= \frac{1- \lambda }{2} w + \frac{1+ \lambda }{2} z \,, \qquad \Lambda_2= b+ P^ \lambda = \frac{1+ \lambda }{2} w + \frac{1- \lambda }{2} z\,.$$ With these substitutions we obtain the following system of nonlinear transport equations \[eq:euler:wza\] $$\begin{aligned} \partial_t w +\left(w+\tfrac{1-\lambda}{1+\lambda}z\right)\partial_{\theta}w &= -a \left(\tfrac{1-2\lambda}{1+\lambda} z+ \tfrac{3+2\lambda}{1+\lambda} w\right) \label{eq:w:evo} \,,\\ \partial_t z +\left(z+\tfrac{1-\lambda}{1+\lambda}w\right)\partial_{\theta}z &= -a \left( \tfrac{1-2\lambda}{1+\lambda} w+ \tfrac{3+2\lambda}{1+\lambda} z\right) \label{eq:z:evo} \,, \\ \partial_t a +\tfrac{1}{1+\lambda} (w+z) \partial_{\theta}a &=-\tfrac{2}{1+\lambda}a^2+\tfrac{1}{2(1+\lambda)}(w+z)^2 - \tfrac{\lambda}{2(1+\lambda)}(w-z)^2 \,. \label{eq:a:evo}\end{aligned}$$ Initial data assumptions {#ss:initial} ------------------------ In this section we will describe the initial data used to construct unstable shock solutions. We introduce a large constant $M$ which will be used to bound certain implicit constants appearing in the paper. We also let $\eps>0$ be a small constant which will parameterize the slope of the initial data. We will denote the initial data at initial time $t=-\eps$ by $$w(\theta,-\eps)=w_0,\quad z(\theta,-\eps)=z_0, \quad a(\theta,-\eps)=a_0\,.$$ The initial data will be assumed to satisfy the following support assumptions $${\ensuremath{\mathrm{supp\,}}}(w_0-\kappa_0)\cup {\ensuremath{\mathrm{supp\,}}}(z_0)\cup{\ensuremath{\mathrm{supp\,}}}(a_0) \subset\left[ -\frac{M\eps}{2} ,\frac{M\eps}{2}\right]\,,$$ where $\kappa_0>0$ will be a predetermined constant. We will further decompose $w_0$ as a sum $$\begin{aligned} \label{datum:0} w_0=\underbrace{\kappa_0+\eps^{\frac14} {\overline}{W_2}\left(\eps^{-\frac54}\theta\right) \chi(\eps^{-1}\theta) + \eps^{\frac 1 4} {\widehat}{W}_0(\eps^{- \frac 5 4} \theta)}_{=: \check w_0(\theta)}+ \eps^{\frac 1 4} \Big(\alpha (\eps^{- \frac 5 4} \theta)^2+\beta (\eps^{- \frac 5 4} \theta)^3 \Big) \chi(\eps^{- \frac 5 4} \theta)\,. \end{aligned}$$ for some smooth fixed cut-off, $\chi$, satisfying $\chi(x) = 1$ for $|x| \le 1$ and supported in a ball of radius $2$. Above, the constants $\alpha, \beta$ are determined by ${\widehat}W_0$ and are not free parameters that we choose as part of the data. The perturbation ${\widehat}W_0$ will be assumed to satisfy the following $$\begin{aligned} {\left \| {\widehat}W_0 \right\|}_{C^8\left(\left[ -\frac{M\eps}{2} ,\frac{M\eps}{2}\right]\right)} &\leq \eps^2\label{eq:C8:bnd} \\ {\widehat}W_0^{(n)}(0)&=0,\quad\mbox{for } n=0,1,4, 5\label{eq:W0:diff}\\ \label{est:hatW:in} {\left|{\widehat}W_0^{(n)}(0)\right|}&\leq \eps,\quad\mbox{for } n=2, 3 \,.\end{aligned}$$ We also assume the following bounds on $z_0$ and $a_0$ $$\begin{aligned} {\left \| z_0 \right\|}_{C^8} + {\left \| a_0 \right\|}_{C^8} &\leq \eps^2\,.\end{aligned}$$ Main theorem ------------ We now state our main theorem: \[thm:general\] Let $\gamma>1$ be given and set $ \lambda = {\tfrac{\gamma-1}{2}}$.
Then there exists a sufficiently large $\kappa_0 = \kappa_0(\lambda) > 0$, sufficiently large $M = M(\lambda,\kappa_0) \geq 1$, and sufficiently small $\eps = \eps(\lambda,\kappa_0,M) \in (0,1)$ such that the following holds: Let $(w_0,z_0,a_0)$ be initial data satisfying the assumptions stipulated in Section \[ss:initial\], with the constants $\alpha$ and $\beta$ left to be chosen. Then, there exist $\alpha$, $\beta$ satisfying ${\left|\alpha\right|}+{\left|\beta\right|}\leq \eps^{\frac{9}{10}}$ and a corresponding solution $(a,z,w) \in C([-\eps,T_*); C^8({{\mathbb T}}))$ to the system satisfying the following properties: - The solution forms a singularity at a computable time $T_*$ and angle $\theta_*$. - $\sup_{t\in[-\eps, T_*)} \left( \| a\|_{ W^{1, \infty }({{\mathbb T}})} + \| z\|_{ W^{1, \infty} ({{\mathbb T}})} + \| w\|_{ L^\infty({{\mathbb T}})} \right) \leq C_M,$ - $\lim_{t \to T_*} {\ensuremath{\partial}}_\theta w(\xi(t),t) = -\infty $ and $\frac{1}{2(T_*-t)} \leq {\left \| {\ensuremath{\partial}}_\theta w(\cdot,t) \right\|}_{L^\infty} \leq \frac{2}{T_*-t}$ as $t \to T_*$, - $w( \cdot , T_*)$ has a cusp singularity of Hölder $C^ {\sfrac{1}{5}} $ regularity. Moreover, $w$ blows up in an asymptotically self-similar manner described by the profile ${\overline}{W}_2$. Specifically, there exist $\nu>0$ and $\kappa_*$ such that $$\begin{aligned} \lim_{t\rightarrow T_*} (T_*-t)^{-\frac1{4}}\left(w(\xi(t)+x(T_*-t)^{\frac{5}{4}},t)-\kappa_*\right)= \left(\frac{\nu}{120}\right)^{-\frac14}{\overline}{W}_2\left(\left(\frac{\nu}{120}\right)^{\frac14}x\right)\,,\end{aligned}$$ where $\nu=\lim_{t \to T_*} (T_*-t)^{6}w^{(5)}(\xi(t),t)$ and $\kappa_*=\lim_{t \to T_*} \kappa(t) $ are explicitly computable, satisfying ${\left|\nu-120\right|}\leq \eps^{\frac 3 4}$ and ${\left|\kappa_0-\kappa_*\right|}\leq \eps$. As a corollary, we show that Theorem \[thm:general\] is stable modulo a two-dimensional linear subspace of initial data: \[c:open\] There exists an open set $\Xi$ of initial data $(\check w_0,z_0,a_0)$ in the $C^8$ topology for which we have the following: for every $(\check w_0,z_0,a_0)\in \Xi$ there exists $\alpha,\beta\in \mathbb R$ such that if we define $w_0$ via the decomposition above, then the conclusion of Theorem \[thm:general\] holds for initial data $(w_0,z_0,a_0)$. Modulation variables and unstable ODEs at $x = 0$ ------------------------------------------------- In order to isolate the self-similar profile, we will need to introduce modulated self-similar variables. These modulation variables allow one to control the time, location, and amplitude of the eventual shock. The idea of using modulation variables is by now classical (cf. [@Merle96; @MeZa97; @MeRa05]). We give the precise definitions of our self-similar variables and modulation variables in Section \[sec:var\], but to facilitate the forthcoming discussion, let us consider the self-similar quantities $(W, Z, A)$ defined through $w(\theta, t) = e^{- \frac{s}{4}} W(x, s) + \kappa(t), z(\theta, t) = Z(x, s)$ and $a(\theta, t) = A(x, s)$, where we rescale time via $s =- \log( \tau - t)$ and space via $x = \frac{\theta - \xi(t)}{(\tau - t)^{\frac 5 4}}$.
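In particular (a direct consequence of the above definitions, recorded here for later reference), spatial derivatives transform as $${\ensuremath{\partial}}_\theta^n w(\theta,t)=(\tau-t)^{-\frac{5n-1}{4}}\,W^{(n)}(x,s)\,,$$ which for $n=5$ is the normalization behind the constant $\nu$ appearing in Theorem \[thm:general\].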
In our case, we introduce the dynamical modulation variables $\tau, \xi, \kappa$, to enable us to constrain $$\begin{aligned} \label{const:intro} W(0, s) = 0, \qquad {\ensuremath{\partial}}_x W(0, s) = -1, \qquad {\ensuremath{\partial}}_x^4 W(0, s) = 0\,, \end{aligned}$$ where the final constraint is notably different than in the works [@BuShVi2019; @BuShVi2019b; @BuShVi2019c], and reflects the different nature of the self-similar profile ${\overline}{W}_2$. In so doing, we obtain the system that we ultimately analyze: $$\begin{aligned} \label{W:0:i} ({\ensuremath{\partial}}_ s- \frac 1 4) W + (g_W + \frac{5}{4}x ) {\ensuremath{\partial}}_x W &= -e^{- \frac 3 4 s} \frac{\dot{\kappa}}{1 - \dot{\tau}} + F_W, \\ \label{Z:0:i} {\ensuremath{\partial}}_s Z + (g_Z + \frac 5 4 x ) {\ensuremath{\partial}}_x Z &= F_Z\,, \\ \label{A:0:i} {\ensuremath{\partial}}_s A + (g_A + \frac 5 4 x) {\ensuremath{\partial}}_x A &= F_A\,. \end{aligned}$$ Above, the quantities $g_W, g_Z, g_A$ are transport speeds, and $F_W, F_Z, F_A$ are forcing terms that we also leave unspecified for the purposes of this discussion. The reader may find the precise definitions in Section \[sec:var\]. In addition, we control the evolution of $\tau, \xi, \kappa$ through ODEs obtained by imposing the constraints above. Importantly, the three modulation variables enable us to constrain only the three quantities appearing in the constraints above. However, a feature of the unstable profiles with $i \ge 2$ is that $W^{(2)}(0, s)$ and $W^{(3)}(0, s)$ need to be zero in the limit as $s \rightarrow \infty$. This in turn cannot be enforced by the introduction of further modulation variables due to the lack of further symmetries in the compressible Euler equations, and so must be enforced by the choice of the unstable manifold, $\mathcal{M}_U$. The equations describing the second and third derivatives of $W$ at $x = 0$ are given by $$\begin{aligned} \label{super:1} &({\ensuremath{\partial}}_s - \frac 3 4) W^{(2)}(0, s) = \text{rapidly decaying forcing terms}\,, \\ \label{super:2} &({\ensuremath{\partial}}_s - \frac{1}{2}) W^{(3)}(0, s) = \text{rapidly decaying forcing terms}\,.\end{aligned}$$ One can see the instability of the manifold due to the negative damping coefficients appearing on the left-hand side of these equations. Indeed, negatively damped ODEs such as these generically grow as $s \rightarrow \infty$, but certain data (as determined by the right-hand side) can lead to decaying solutions. In the context of the Euler equations, the right-hand sides above themselves depend on other elements of the system (such as the modulation variables, and other derivatives of $(W, Z, A)$). For this reason, we are led to develop a Newton scheme which identifies this unstable manifold. An iterative scheme to search for unstable solutions ---------------------------------------------------- In this subsection, we briefly discuss the Newton scheme that can be used to identify an unstable manifold of initial data which leads to a globally decaying solution to the negatively damped ODEs above. For the present discussion, we focus on a model ODE problem. We consider $$\begin{aligned} \label{ODE:model} ({\ensuremath{\partial}}_s - \frac 12) u_{\alpha} = g + \eps f(u_{\alpha}), \qquad u_\alpha(0) = \alpha\,. \end{aligned}$$ We assume for now that the forcing, $g$, has sufficiently strong decay and the nonlinearity, $f$, is an explicit quadratic nonlinearity via $$\begin{aligned} \label{as:setup} |g| \lesssim e^{- \gamma s}, \qquad f(u) = u^2, \qquad \gamma > 0\,.
\end{aligned}$$ For general data, $\alpha$, writing the solution to the model ODE via the Duhamel formula yields $$\begin{aligned} \label{aloha:1} u_\alpha(s) = e^{\frac s 2} \alpha + e^{\frac s 2} \int_0^s e^{- \frac{s'}{2}} g(s') {\,\mathrm{d}}s' + \eps e^{\frac s 2} \int_0^s e^{- \frac{s'}{2}} u_\alpha(s')^2 {\,\mathrm{d}}s'\,. \end{aligned}$$ From the Duhamel formula, it is clear that even under the assumption of $g$ decaying exponentially one cannot expect the solution $u_\alpha$ to decay to zero as $s \rightarrow \infty$ for *generic data*, $\alpha$. Thus, to obtain decaying solutions to the model ODE, one needs to find an unstable manifold of data. In the case of this ODE, this amounts to finding a *particular value* of $\alpha$ which ensures a globally decaying solution. To illustrate how to find this choice of $\alpha$, we now consider the linear version of the problem (setting $\eps = 0$). Upon setting $\eps = 0$, sending $s \rightarrow \infty$, and demanding the asymptotic behavior $u_\alpha(s) \rightarrow 0$ as $s \rightarrow \infty$, we obtain the following relation $$\begin{aligned} \alpha_0 + \int_0^\infty e^{- \frac{s'}{2}} g(s') {\,\mathrm{d}}s' = 0\,,\end{aligned}$$ which links the choice of data, $\alpha_0$, to the forcing, $g$, and guarantees that the solution inherits the decay of $g$: $|u_{\alpha_0}(s)| \lesssim e^{- \gamma s}$. We would now like to modify the choice of data, $\alpha_0$, by an $\eps$ perturbation in order to account for the nonlinear effects when $\eps > 0$ in the model ODE. The overall strategy will be to fix a sequence of times $\{s_n\}$ for $n \in \mathbb{N}$ with the property that $s_n \rightarrow \infty$ as $n \rightarrow \infty$, and a corresponding sequence of data choices $\{ \alpha_n\}$ for $n \in \mathbb{N}$ so that $u_{\alpha_n}(s_n) = 0$. With suitably strong estimates, we will show that $\alpha_n \rightarrow \alpha_\infty$ and correspondingly $u_{\alpha_\infty}(s) \rightarrow 0$ as $s \rightarrow \infty$. Computing the iterate $\alpha_{n+1}$ requires an application of the Implicit Function Theorem, which in turn requires sufficiently strong estimates on the solution. Let us now take the particular selection of times, $s_n = n$. To initiate the induction, we will choose $\alpha_0 = 0$, and $u_0(s)$ the corresponding solution (clearly, $u_0(s_0) = u_0(0) = \alpha_0 = 0$). We describe now the $n \rightarrow n+1$ step of the iteration. We now assume inductively that there exists a choice of $\alpha_n$ so that $u_{\alpha_n}(s_{n}) = 0$ and describe the choice of $\alpha_{n+1}$, which is achieved through the Implicit Function Theorem. We define now the map $\mathcal{T}_n$ given by $\mathcal{T}_n(\alpha) := u_{\alpha}(s_{n+1})$. We now seek an $\alpha_{n+1}$ in a small neighborhood, $\mathcal{B}_n$, of $\alpha_n$ so that $\mathcal{T}_{n}(\alpha_{n+1}) = 0$.
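Before detailing the estimates that make this step rigorous, we record a minimal numerical sketch of the scheme just described for the model problem (the forcing, the value of $\eps$, and all tolerances below are illustrative choices, not taken from the later analysis):

```python
# Minimal numerical sketch of the shooting scheme for the model problem
# (d/ds - 1/2) u = g + eps * u^2,  u(0) = alpha,
# with the illustrative choices g(s) = e^{-s} (gamma = 1) and eps = 0.1.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1
g = lambda s: np.exp(-s)
rhs = lambda s, u: 0.5 * u + g(s) + eps * u ** 2

def T(alpha, s_end):
    # T_n(alpha) := u_alpha(s_end), computed by forward integration
    sol = solve_ivp(rhs, (0.0, s_end), [alpha], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

alpha = 0.0                        # alpha_0 = 0, so u_{alpha_0}(s_0) = 0 trivially
for n in range(1, 11):
    s_next = float(n)
    for _ in range(3):             # Newton steps on alpha; dT/dalpha ~ e^{s/2} is large,
        val = T(alpha, s_next)     # so the error E_n is corrected immediately
        dT = (T(alpha + 1e-8, s_next) - val) / 1e-8
        alpha -= val / dT
    print(f"n = {n:2d}   alpha_n = {alpha:+.10f}   u(s_n) = {T(alpha, s_next):+.1e}")

# For eps = 0 the limiting data is alpha_inf = -int_0^infty e^{-s/2} g(s) ds = -2/3;
# the printed alpha_n converge to an O(eps) perturbation of this value, and the
# corresponding solution decays like the forcing.
```

In this toy setting the iteration converges quickly; the point of the discussion that follows is to identify the estimates on $\mathcal{T}_n$ needed to run the same argument rigorously.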
According to a Taylor expansion of $\mathcal{T}_n$ in $\alpha$, we obtain for some $\alpha_\ast$ satisfying $|\alpha_\ast - \alpha_n| \le |\alpha_n - \alpha_{n+1}|$, $$\begin{aligned} \mathcal{T}_n(\alpha_{n+1}) = \mathcal{T}_n(\alpha_n) + (\alpha_{n+1} - \alpha_n) \frac{{\ensuremath{\partial}}\mathcal{T}_n}{{\ensuremath{\partial}}\alpha}(\alpha_n) + \frac 1 2 (\alpha_{n+1} - \alpha_n)^2 \frac{{\ensuremath{\partial}}^2 \mathcal{T}_n}{{\ensuremath{\partial}}\alpha^2}(\alpha_\ast)\,.\end{aligned}$$ Accordingly, we may apply the Implicit Function Theorem to identify an $\alpha_{n+1}$ so that the left-hand side is zero if we can obtain three estimates: an upper bound on $| \mathcal{T}_n(\alpha_n)|$, a lower bound on $ \frac{{\ensuremath{\partial}}\mathcal{T}_n}{{\ensuremath{\partial}}\alpha}(\alpha_n)$, and an upper bound over $\sup_{\alpha_\ast \in \mathcal{B}_n} |\frac{{\ensuremath{\partial}}^2 \mathcal{T}_n}{{\ensuremath{\partial}}\alpha^2}|$. We thus define the *error at the next time scale* created by this solution as $E_n := u_{\alpha_{n}}(s_{n+1})$, which the new choice of $\alpha_{n+1}$ must rectify in order to achieve the condition $u_{\alpha_{n+1}}(s_{n+1}) = 0$. The first main estimate in the scheme is thus careful control of this error, $E_n$, throughout the iteration. Specifically, using backwards integration from $s_n$, we may obtain the decay estimate $$\begin{aligned} |E_n| = |\mathcal{T}_n(\alpha_n)| \lesssim e^{- \gamma s_n}\,. \end{aligned}$$ Lower bounds on $\frac{{\ensuremath{\partial}}\mathcal{T}_n}{{\ensuremath{\partial}}\alpha}$ are achieved by differentiating the forward integration formula, in $\alpha$, as this formula importantly holds for all $\alpha$. A simple inspection shows that we may expect $\frac{{\ensuremath{\partial}}\mathcal{T}_n}{{\ensuremath{\partial}}\alpha} \sim e^{\frac s 2}$. Third, an upper bound of $\sup_{\alpha_\ast \in [\alpha_n, \alpha_{n+1}]} |\frac{{\ensuremath{\partial}}^2 \mathcal{T}_n}{{\ensuremath{\partial}}\alpha^2}|$ can also be computed by differentiating twice in $\alpha$. Notational Conventions ---------------------- We now discuss some notational conventions that we will be using throughout the analysis. First, for a function $f = f(x, s)$, we use $\| f \|_\infty = \sup_{x} |f(x, s)|$; that is, the $L^\infty$ norm is taken in the $x$ variable only. Next, we define the bracket notation $\langle x \rangle := \sqrt{1 + x^2}$. Lastly, we will often use $A \lesssim B$ to mean $A \le CB$, where $C$ is a universal constant independent of $M, \eps, \kappa_0$. We will use $A \lesssim_M B$ to mean $A \le CB$ where $C$ is a constant that can depend on $M$. Preliminaries to the analysis ============================= Self-similar variables and derivation of equations {#sec:var} -------------------------------------------------- We will employ the notation $$\begin{aligned} \beta_{\tau}=\frac{1}{1-\dot\tau},\quad\beta_1=\frac{1}{1+\lambda},\quad\beta_2=\frac{1-\lambda}{1+\lambda},\quad\beta_3=\frac{1-2\lambda}{1+\lambda},\quad\beta_4=\frac{3+2\lambda}{1+\lambda},\quad\beta_5=\frac{\lambda}{2+2\lambda}\,.\end{aligned}$$ We now introduce the change of coordinates that we work in and the relevant modulation variables.
We define our self-similar temporal and spatial variables as $$\begin{aligned} \label{sec3:1} s =- \log( \tau - t), \qquad x = \frac{\theta - \xi(t)}{(\tau - t)^{\frac 5 4}}\,.\end{aligned}$$ We record the following identities $$\begin{aligned} &\frac{{\ensuremath{\partial}}s}{{\ensuremath{\partial}}t} = (1 - \dot{\tau}) e^s, \qquad \frac{{\ensuremath{\partial}}x}{{\ensuremath{\partial}}t} = \frac{5}{4} (1 - \dot{\tau}) x e^s - \dot{\xi} e^{\frac 5 4 s}, \qquad \frac{{\ensuremath{\partial}}x}{{\ensuremath{\partial}}\theta} = e^{\frac 5 4 s}\,.\end{aligned}$$ We now introduce the new unknowns, $W, Z, A$ which are defined through the following relations $$\begin{aligned} \label{sec3:2} w(\theta, t) = e^{- \frac{s}{4}} W(x, s) + \kappa(t), \qquad z(\theta, t) = Z(x, s), \qquad a(\theta, t) = A(x, s)\,. \end{aligned}$$ In order to solve for the three modulation variables $\kappa$, $\tau$ and $\xi$, we enforce the following constraints $$\begin{aligned} \label{e:constraints} W(0, s) = 0, \qquad {\ensuremath{\partial}}_x W(0, s) = - 1, \qquad {\ensuremath{\partial}}_x^4 W(0, s) = 0\,.\end{aligned}$$ We now record the following calculations $$\begin{aligned} {\ensuremath{\partial}}_t w= & - \frac{1-\dot{\tau}}{4} e^{\frac{3}{4}s} W + (1-\dot{\tau}) e^{\frac{3s}{4}} {\ensuremath{\partial}}_s W + \dot{\kappa} \label{BUDS:1} + \frac{5}{4}(1 - \dot{\tau}) x e^{\frac{3}{4}s} {\ensuremath{\partial}}_x W - \dot{\xi} e^{s} {\ensuremath{\partial}}_x W\,, \\ \label{BUDS:2} {\ensuremath{\partial}}_\theta w = & e^{ s} {\ensuremath{\partial}}_x W\,. \end{aligned}$$ Next, we record the calculations $$\begin{aligned} {\ensuremath{\partial}}_t z = (1 - \dot{\tau}) e^s {\ensuremath{\partial}}_s Z + (\frac 5 4 (1 - \dot{\tau}) x e^s - \dot{\xi} e^{\frac 5 4 s}) {\ensuremath{\partial}}_x Z\label{whilk:miskey:1} ,\quad {\ensuremath{\partial}}_\theta z= e^{\frac 5 4 s} {\ensuremath{\partial}}_x Z\,.
\end{aligned}$$ and similarly, $$\begin{aligned} {\ensuremath{\partial}}_t a = (1 - \dot{\tau}) e^s {\ensuremath{\partial}}_s A + (\frac 5 4 (1 - \dot{\tau}) x e^s - \dot{\xi} e^{\frac 5 4 s}) {\ensuremath{\partial}}_x A, \label{whilk:miskey:4}\quad {\ensuremath{\partial}}_\theta a= \ e^{\frac 5 4 s} {\ensuremath{\partial}}_x A\,.\end{aligned}$$ Then in self-similar variables becomes $$\begin{aligned} {\ensuremath{\nonumber}}&({\ensuremath{\partial}}_s - \frac 1 4) W + \left(\frac 5 4 x - \beta_\tau (\dot{\xi} - \kappa) e^{\frac 1 4 s} +\beta_{\tau}(\beta_2 e^{\frac 1 4 s} Z+ W)\right){\ensuremath{\partial}}_x W \\ &\quad= - \beta_\tau e^{- \frac 3 4 s}\dot{\kappa} - \beta_\tau e^{- \frac 3 4 s} A \Big( \beta_3 Z + \beta_4 (e^{- \frac s 4}W + \kappa) \Big)\,.\label{eq:W:0}\end{aligned}$$ Similarly, we rewrite as $$\begin{aligned} &{\ensuremath{\partial}}_s Z + \left(\frac 5 4 x+ \beta_{\tau}(e^{\frac 1 4 s}(\beta_2\kappa-\dot{\xi} +Z)+\beta_2 W)\right) {\ensuremath{\partial}}_x Z \label{eq:Z:0} = - \beta_\tau e^{-s} A \Big(\beta_3(e^{- \frac s 4}W+ \kappa) + \beta_4Z\Big)\,,\end{aligned}$$ and as $$\begin{aligned} {\ensuremath{\nonumber}}&{\ensuremath{\partial}}_s A + \left(\frac 5 4 x+ \beta_{\tau}(e^{\frac 1 4 s}(\beta_1\kappa-\dot{\xi} +\beta_1Z)+\beta_1W)\right) {\ensuremath{\partial}}_x A \\ &\quad= - 2\beta_\tau\beta_1e^{-s} A^2 + \frac{1}{2}\beta_\tau\beta_1 e^{-s} \Big( e^{- \frac s 4} W + \kappa + Z \Big)^2 \label{eq:A:0} - \beta_\tau\beta_5 e^{-s} \Big( e^{- \frac s 4} W + \kappa - Z \Big)^2\,.\end{aligned}$$ We now compactify the above equations by introducing the following transport speeds $$\begin{aligned} \label{gw:def} &g_W :=\beta_{\tau}W- \beta_\tau (\dot{\xi} - \kappa) e^{\frac 1 4 s} + \beta_{\tau}\beta_2e^{\frac 1 4 s} Z =:\beta_{\tau}W+G_W\,,\\ \label{gz:def} &g_Z :=\beta_{\tau}\beta_2W+ \beta_{\tau}e^{\frac 1 4 s}(\beta_2\kappa-\dot{\xi} +Z) =:\beta_{\tau}\beta_2W+G_Z \,,\\ \label{ga:def} &g_A := \beta_{\tau}\beta_1W+ \beta_{\tau}e^{\frac 1 4 s}(\beta_1\kappa-\dot{\xi} +\beta_1Z)=: \beta_{\tau}\beta_1W+G_A \,, \end{aligned}$$ and forcing terms $$\begin{aligned} \label{def:FW} F_W &:=-\beta_\tau e^{- \frac 3 4 s} A \Big( \beta_3 Z +\beta_4 (e^{- \frac s 4}W + \kappa) \Big)\,,\\ \label{def:FZ} F_Z& := - \beta_\tau e^{-s} A \Big(\beta_3(e^{- \frac s 4}W+ \kappa) + \beta_4Z\Big)\,,\\ \label{def:FA} F_A&:= - 2\beta_\tau\beta_1e^{-s} A^2 + \frac{1}{2}\beta_\tau\beta_1 e^{-s} \Big( e^{- \frac s 4} W + \kappa + Z \Big)^2- \beta_\tau\beta_5 e^{-s} \Big( e^{- \frac s 4} W + \kappa - Z \Big)^2\,.\end{aligned}$$ We note that the quantities $G_W, G_Z, G_A$ are defined through the second equalities in - . With these definitions, our equations become $$\begin{aligned} \label{W:0} ({\ensuremath{\partial}}_ s- \frac 1 4) W + (g_W + \frac{5}{4}x ) {\ensuremath{\partial}}_x W &= -e^{- \frac 3 4 s} \frac{\dot{\kappa}}{1 - \dot{\tau}} + F_W\,, \\ \label{Z:0} {\ensuremath{\partial}}_s Z + (g_Z + \frac 5 4 x ) {\ensuremath{\partial}}_x Z &= F_Z\,, \\ \label{A:0} {\ensuremath{\partial}}_s A + (g_A + \frac 5 4 x) {\ensuremath{\partial}}_x A &= F_A\,. 
\end{aligned}$$ Further, it will be convenient to introduce the notation $$\mathcal V_W:=g_W + \frac{5}{4}x,\qquad \mathcal V_Z:=g_Z + \frac{5}{4}x,\qquad \mathcal V_A:=g_A + \frac{5}{4}x\,.$$ so that we obtain $$\begin{aligned} \label{basic:w} ({\ensuremath{\partial}}_ s- \frac 1 4) W + \mathcal V_W {\ensuremath{\partial}}_x W &= -e^{- \frac 3 4 s} \frac{\dot{\kappa}}{1 - \dot{\tau}} + F_W\,, \\ \label{basic:z} {\ensuremath{\partial}}_s Z + \mathcal V_Z {\ensuremath{\partial}}_x Z &= F_Z\,, \\ \label{basic:a} {\ensuremath{\partial}}_s A + \mathcal V_A {\ensuremath{\partial}}_x A &= F_A\,. \end{aligned}$$ We define now the combination $$\begin{aligned} \label{def:mu} \mu :=- \beta_{\tau} (\dot{\xi} - \kappa)e^{\frac s 4} +\beta_{\tau}\beta_2e^{\frac14 s}Z(0,s) = G_W(0, s)\,. \end{aligned}$$ An unstable self-similar solution to Burgers’ equation {#s:burgers} ------------------------------------------------------ Here we develop properties of the self-similar Burgers profile, ${\overline}{W}:= {\overline}{W}_2$, which solves the equation $$\begin{aligned} \label{Burger:1} - \frac 1 4 {\overline}{W} + ({\overline}{W} + \frac 5 4 x) {\overline}{W}_x = 0\,. \end{aligned}$$ According to [@Fontelos], this equation has an implicit solution $$\begin{aligned} \label{e:implicit:eq} x = - {\overline}{W} - {\overline}{W}^{5}\,.\end{aligned}$$ Differentiating yields $$\begin{aligned} \label{e:w1:implicit} {\overline}{W}^{(1)}=-\frac{1}{1+5{{\overline}W}^4}\,.\end{aligned}$$ Hence ${\overline}{W}^{(1)}\leq 0$, and thus, since ${\overline}{W}(0)=0$, we obtain that ${\overline}{W}\leq 0$ for $x\geq 0$. By Young’s inequality applied to the implicit relation, we have $$\begin{aligned} x \leq - {\overline}{W} -{\overline}{W}^{5} \leq -\frac{{\overline}{W}^{5} }{5x^4}-{{\overline}W}^5+\frac{4x}{5}\,.\end{aligned}$$ Rearranging, we obtain $$-{{\overline}W}^5\geq \frac{x^5}{5(5+x^4)}\,.$$ This lower bound combined with the formula for ${\overline}{W}^{(1)}$ yields $$\label{bound:bar:W:1} {\left| {\overline}{W}^{(1)}\right|}\leq (1+x^4)^{-\frac15}\,.$$ Similarly, using Young’s inequality and the implicit relation, we have $$- {\overline}{W}^{5}\leq 5x+1\,,$$ from which we obtain the estimate $${\left|{\overline}{W}\right|}\leq \frac32 (1+x^4)^{\frac1{20}}\,.$$ Finally, differentiating $5$ times, we obtain $$\label{W5:non} {\overline}{W}^{(5)}(0)=120\,.$$ We now define the weight function $$\begin{aligned} \label{weight:eta} \eta_\gamma := (1 + x^4)^\gamma, \text{ for any } \gamma \in \mathbb{R}\,.\end{aligned}$$ We now record the following lemma, which summarizes the properties of ${\overline}{W}$ that we will be using. Let $\ell$ be sufficiently small relative to universal constants. For $n = 2, 3, 4$ at $x=0$ we have $$\begin{aligned} \label{fifth:deriv:bar:W} &{\overline}{W}(0) = 0\,, \quad {\overline}{W}^{(1)}(0) = -1\,, \quad {\overline}{W}^{(n)}(0) = 0\,, \quad {\overline}{W}^{(5)}(0) = 120 \,.\end{aligned}$$ Furthermore, for $n \ge 2$, ${\overline}W$ satisfies the estimates $$\begin{gathered} \label{decay:bar:2} |{\overline}{W}| \le \frac 3 2 \eta_{\frac{1}{20}}\,, \quad|{\overline}{W}^{(1)}| \le \eta_{- \frac 1 5}\,, \quad |{\overline}{W}^{(n)}| \le C_n \eta_{- \frac 1 5 - \frac{n}{4}} \,,\\ \label{truth:1} -1 + \frac{\ell^7}{50} \le {\overline}{W}^{(1)} \le 0 \quad\text{ for } |x| \ge \ell\,.\end{gathered}$$ Higher order $x$ derivatives ---------------------------- In this section we list the higher order derivatives of $(W,Z,A)$. It will be convenient to introduce the notation: $$\begin{aligned} f^{(n)}(s, x) := {\ensuremath{\partial}}_x^n f(s, x)\,.
\end{aligned}$$ We now differentiate the above system up to eight times. $$\begin{aligned} \label{W:n} \Big( {\ensuremath{\partial}}_s + \frac 1 4 (- 1 + 5n) + \beta_{\tau}(n+1_{n> 1}) W^{(1)} \Big) W^{(n)} +\mathcal V_W {\ensuremath{\partial}}_x W^{(n)} &= F_{W,n}\,, \\ \label{Z:n} ({\ensuremath{\partial}}_s + \frac{5n}{4} + n\beta_{\tau}\beta_2 W^{(1)}) Z^{(n)} + \mathcal V_Z {\ensuremath{\partial}}_x Z^{(n)} &= F_{Z, n}\,, \\ \label{A:n} ({\ensuremath{\partial}}_s + \frac 5 4 n + n\beta_{\tau}\beta_1 W^{(1)} ) A^{(n)} +\mathcal V_A{\ensuremath{\partial}}_x A^{(n)} &= F_{A,n}\,,\end{aligned}$$ where the forcings are defined by $$\begin{aligned} \label{F.W.n} &F_{W,n} := F_W^{(n)} - 1_{n \ge 3}\beta_\tau \sum_{j = 2}^{n-1} \binom{n}{j} W^{(j)} W^{(n+1 - j)} - \sum_{j = 1}^n \binom{n}{j} G_W^{(j)} W^{(n+1-j)}\,, \\ \label{F.Z.n} &F_{Z, n} := F_Z^{(n)} - 1_{n \ge 2}\beta_\tau \beta_2 \sum_{j = 2}^{n} \binom{n}{j} W^{(j)} Z^{(n+1 - j)} - \sum_{j = 1}^n \binom{n}{j} G_Z^{(j)} Z^{(n+1-j)}\,, \\ \label{F.A.n} &F_{A,n} := F_A^{(n)} -1_{n \ge 2} \beta_\tau \beta_1\sum_{j = 2}^{n} \binom{n}{j} W^{(j)} A^{(n+1 - j)}- \sum_{j = 1}^n \binom{n}{j} G_A^{(j)} A^{(n+1-j)}\,.\end{aligned}$$ For repeated future reference, we record here the following expressions which are obtained by differentiating (for $n \ge 1$) $$\begin{aligned} \label{FW:to:the:n} F_W^{(n)} = &- \beta_\tau e^{- \frac 3 4 s} \sum_{j = 0}^n \binom{n}{j} A^{(j)} \Big( \beta_3 Z^{(n-j)} + \beta_4 (e^{- \frac s 4} W + \kappa)^{(n-j)} \Big)\,, \\ F_{Z}^{(n)} = &- \beta_\tau e^{-s} \sum_{j = 0}^{n} \binom{n}{j} A^{(j)} \Big(\beta_3 (e^{- \frac s 4}W+ \kappa)^{(n-j)} + \beta_4 Z^{(n-j)}\Big) \label{def:FZn} \,, \\ {\ensuremath{\nonumber}}F_A^{(n)} = & - 2 \beta_\tau \beta_1 e^{-s} \sum_{j = 0}^n \binom{n}{j} A^{(j)} A^{(n-j)} \\ {\ensuremath{\nonumber}}& + \frac 1 2 \beta_\tau \beta_1 e^{-s} \sum_{j = 0}^n \binom{n}{j} (e^{- \frac s 4}W + \kappa + Z)^{(j)} (e^{- \frac s 4}W + \kappa + Z)^{(n-j)} \\ \label{okey:1} & - \beta_\tau \beta_5 e^{-s} \sum_{j = 0}^n \binom{n}{j} (e^{- \frac s 4}W + \kappa - Z)^{(j)} (e^{- \frac s 4}W + \kappa - Z)^{(n-j)}\,.\end{aligned}$$ By combining with , we obtain the expression $$\begin{aligned} {\ensuremath{\nonumber}}F_{W,n} = & - \beta_\tau e^{- \frac 3 4 s} \sum_{j = 0}^n \binom{n}{j} A^{(j)} \Big( \beta_3 Z^{(n-j)} + \beta_4 (e^{- \frac s 4} W + \kappa)^{(n-j)} \Big) \\ \label{F.W.n.bot} & - 1_{n \ge 3}\beta_\tau \sum_{j = 2}^{n-1} \binom{n}{j} W^{(j)} W^{(n+1 - j)} - \sum_{j = 1}^n \binom{n}{j} G_W^{(j)} W^{(n+1-j)}\,.\end{aligned}$$ By combining with , we obtain the final expression $$\begin{aligned} {\ensuremath{\nonumber}}F_{Z,n} = & - \beta_\tau e^{-s} \sum_{j = 0}^{n} \binom{n}{j} A^{(j)} \Big(\beta_3 (e^{- \frac s 4}W+ \kappa)^{(n-j)} + \beta_4 Z^{(n-j)}\Big) \\ & - 1_{n \ge 2}\beta_\tau \beta_2 \sum_{j = 2}^{n} \binom{n}{j} W^{(j)} Z^{(n+1 - j)} \label{F.Z.n.bot} - \sum_{j = 1}^n \binom{n}{j} G_Z^{(j)} Z^{(n+1-j)}\,.\end{aligned}$$ We now derive the first five constrained ODEs. First, we introduce an important piece of notation to describe the purely $s$-dependent quantities at $x = 0$, $$\begin{aligned} \label{q:def:q} q^{(n)}(s) := W^{(n)}(0, s)\,.
\end{aligned}$$ From the equations and , evaluating $W^{(n)}$, for $n=0,\dots,4$ at $x = 0$ and using the constraints , we obtain the following system of five ODEs in the $s$ variable $$\begin{aligned} \label{eq:ODE:1} &- \frac{\mu}{\beta_\tau} + e^{- \frac 3 4 s} \dot{\kappa} = \frac{1}{\beta_\tau} F_W(0, s)\,, \\ \label{eq:ODE:2} &\dot{\tau} -\frac{1}{\beta_\tau} G^{(1)}_W(0, s) + \frac{\mu}{\beta_\tau} q^{(2)}(s) =\frac{1}{\beta_\tau} F_{W}^{(1)}(0, s)\,, \\ \label{eq:ODE:3} &({\ensuremath{\partial}}_s + \frac 9 4 ) q^{(2)} - 3 \beta_\tau q^{(2)} + \mu q^{(3)} + 2 G_W^{(1)}(0, s) q^{(2)} = F_W^{(2)}(0, s) + G_W^{(2)}(0, s)\,, \\ \label{eq:ODE:4} &({\ensuremath{\partial}}_s + \frac{14}{4}) q^{(3)} - 4 \beta_\tau q^{(3)} + 3 G_W^{(1)}(0, s) q^{(3)} + 3 \beta_\tau |q^{(2)}|^2+ \sum_{j = 2}^3 \binom{3}{j} G_W^{(j)}(0, s) q^{(4-j)}= F_W^{(3)}(0, s)\,, \\ \label{eq:ODE:5} &q^{(5)} \mu + 10 \beta_\tau q^{(2)} q^{(3)} + \sum_{j = 2}^4 \binom{4}{j} G_W^{(j)}(0, s) q^{(5-j)} = F_W^{(4)}(0, s)\,.\end{aligned}$$ In addition, we will need the evolution equation of $W^{(5)}$ at $x=0$, given by $$\begin{aligned} \label{eq:ODE:6} &\partial_s q^{(5)}= - \mu q^{(6)} + (1 - \beta_\tau) q^{(5)} - 10 |q^{(3)}|^2 - \sum_{j = 1}^{5} \binom{5}{j} G_W^{(j)}(0, s) q^{(6-j)} + F_W^{(5)}(0, s)\,. \end{aligned}$$ We also derive the following equation for the difference ${\widetilde}W:=W-{\overline}W$: $$\begin{aligned} &({\ensuremath{\partial}}_ s- \frac 1 4+\beta_\tau {\overline}W^{(1)}) {\widetilde}W +\mathcal V_W {\ensuremath{\partial}}_x {\widetilde}W = -\beta_\tau e^{- \frac 3 4 s} \dot{\kappa} + F_{ W}+((\beta_\tau -1){\overline}W-G_W){\ensuremath{\partial}}_x {\overline}W:={\widetilde}F_W\label{diff:eq0}\,.\end{aligned}$$ The equation for the higher order derivatives $W^{(n)}$ is given by $$\begin{aligned} {\ensuremath{\nonumber}}&{\ensuremath{\partial}}_s {\widetilde}{W}^{(n)} + \Big( \frac 1 4 (-1 + 5n) + \beta_{\tau}\left({\overline}W^{(1)}+ nW^{(1)} \right) \Big) {\widetilde}{W}^{(n)} + \mathcal V_W {\ensuremath{\partial}}_x {\widetilde}{W}^{(n)} \\ {\ensuremath{\nonumber}}&\qquad= F_{W}^{(n)} - 1_{n \ge 2}\beta_\tau \sum_{j = 2}^{n-1} \binom{n}{j} W^{(j)} {\widetilde}W^{(n+1 - j)}- \sum_{j = 1}^n \binom{n}{j} \left(\beta_\tau {\overline}W^{(j+1)}{\widetilde}W ^{(n-j)}+G_W^{(j)}{\widetilde}W ^{(n+1-j)}\right) \\ {\ensuremath{\nonumber}}& \qquad \qquad + (\beta_\tau - 1) \sum_{j = 0}^n \binom{n}{j} {\overline}{W}^{(j)} {\overline}{W}^{(n+1-j)} - \sum_{j = 0}^n \binom{n}{j} G_W^{(j)} {\overline}{W}^{(n+1-j)} \\ \label{diff:eq} & \qquad =: {\widetilde}{F}_{W,n}\,.\end{aligned}$$ $\nabla_{\alpha, \beta}$ derivatives ------------------------------------ We introduce the following notation to compactify the forthcoming equations $$\begin{aligned} \label{c:deriv} f_{c} := {\ensuremath{\partial}}_{c} f, \qquad c \in \{ \alpha, \beta \}\,,\end{aligned}$$ for any function $f$. 
### $\nabla_{\alpha, \beta}$ derivatives of $Z$ We first take ${\ensuremath{\partial}}_c$ of equation which produces $$\begin{aligned} \label{every:time:1} {\ensuremath{\partial}}_s Z_c + \mathcal{V}_Z {\ensuremath{\partial}}_x Z_c = {\ensuremath{\partial}}_c F_Z - Z^{(1)} \Big( \dot{\tau}_c \beta_\tau^2 \beta_2 W + \beta_\tau \beta_2 W_{c} + {\ensuremath{\partial}}_c G_Z \Big) =: F_{Z,0}^{c}\,.\end{aligned}$$ We now use to evaluate the ${\ensuremath{\partial}}_c F_Z$ term appearing above via $$\begin{aligned} \label{pcFz:0} {\ensuremath{\partial}}_c F_Z = \dot{\tau}_c \beta_\tau F_Z - \beta_\tau e^{-s} A_c ( \beta_3 (e^{- \frac s 4}W + \kappa) + \beta_4 Z) - \beta_\tau e^{-s} A(\beta_3 (e^{- \frac s 4} W_c + \kappa_c) + \beta_4 Z_c)\end{aligned}$$ We next compute ${\ensuremath{\partial}}_x^n$ of equation to obtain $$\begin{aligned} {\ensuremath{\nonumber}}&({\ensuremath{\partial}}_s + \frac 5 4 n + n \beta_\tau \beta_2 W^{(1)}) Z_c^{(n)} + \mathcal{V}_Z {\ensuremath{\partial}}_x Z_c^{(n)} \\ {\ensuremath{\nonumber}}&\quad= {\ensuremath{\partial}}_c F_Z^{(n)} - \sum_{j = 0}^n \binom{n}{j} \dot{\tau}_c \beta_\tau^2 \beta_2 Z^{(j+1)} W^{(n-j)} - \sum_{j = 0}^n \binom{n}{j} \beta_\tau \beta_2 Z^{(j+1)} W_c^{(n-j)}\\ {\ensuremath{\nonumber}}&\qquad - \sum_{j = 0}^n \binom{n}{j} Z^{(1+j)} {\ensuremath{\partial}}_c G_Z^{(n-j)} - 1_{n \ge 1} \sum_{j = 1}^n \binom{n}{j} G_Z^{(j)} Z_c^{(n+1-j)} \\ \label{Midterms:1} &\qquad - 1_{n \ge 2} \sum_{j = 2}^n \binom{n}{j} \beta_\tau \beta_2 W^{(j)} Z^{(n-j+1)}_c =: F_{Z,n}^{c}\,. \end{aligned}$$ We now compute the expression for ${\ensuremath{\partial}}_c F_Z^{(n)}$ by computing ${\ensuremath{\partial}}_x^n$ of which yields $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_c F_Z^{(n)} = & \dot{\tau}_c \beta_\tau F_Z^{(n)} - \beta_\tau e^{-s} \sum_{j = 0}^n \binom{n}{j} A_c^{(j)} \Big( \beta_3 ( e^{- \frac s 4} W + \kappa)^{(n-j)} + \beta_4 Z^{(n-j)} \Big) \\ \label{Midterms:2} & - \beta_\tau e^{-s} \sum_{j =0}^n \binom{n}{j} A^{(j)} \Big( \beta_3 ( e^{- \frac s 4} W_c + \kappa_c)^{(n-j)} + \beta_4 Z_c^{(n-j)} \Big)\,.\end{aligned}$$ ### $\nabla_{\alpha, \beta}$ derivatives of $A$ We compute ${\ensuremath{\partial}}_c$ of the basic equation for $A$, , which yields $$\begin{aligned} \label{socialite:1} {\ensuremath{\partial}}_s A_c + \mathcal{V}_A {\ensuremath{\partial}}_x A_c = {\ensuremath{\partial}}_c F_A - \Big( \dot{\tau}_{c} \beta_\tau^2 \beta_1 W + \beta_\tau \beta_1 W_c + {\ensuremath{\partial}}_c G_A \Big) A^{(1)} =: F_{A,0}^{c}\,. 
\end{aligned}$$ Computing now the expression ${\ensuremath{\partial}}_c F_A$ by differentiating , we obtain $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_c F_A = & \dot{\tau}_{c} \beta_\tau F_A + \beta_\tau \beta_1 e^{-s} \Big( e^{- \frac s 4}W + \kappa + Z \Big) \Big( e^{- \frac s 4} W_c + \kappa_c + Z_c \Big) \\ \label{pcFa} &- 2 \beta_\tau \beta_5 e^{-s} \Big( e^{- \frac s 4} W + \kappa - Z \Big) \Big( e^{- \frac s 4} W_c + \kappa_c - Z_c \Big)\,.\end{aligned}$$ We now compute ${\ensuremath{\partial}}_x^n$ of equation which produces $$\begin{aligned} {\ensuremath{\nonumber}}&({\ensuremath{\partial}}_s + \frac{5n}{4} + n \beta_\tau \beta_1 W^{(1)} ) A_c^{(n)} + \mathcal{V}_{A} {\ensuremath{\partial}}_x A_c^{(n)} \\ {\ensuremath{\nonumber}}= & {\ensuremath{\partial}}_c F_A^{(n)} - 1_{n \ge 1} \sum_{j = 1}^n \binom{n}{j} G_A^{(j)} A_c^{(n+1-j)} - 1_{n \ge 2} \sum_{j = 2}^n \binom{n}{j} \beta_\tau \beta_1 W^{(j)} A_c^{(n+1-j)} \\ {\ensuremath{\nonumber}}& - \sum_{j = 0}^n \binom{n}{j} \dot{\tau}_c \beta_\tau^2 \beta_1 W^{(j)} A^{(n+1-j)} - \sum_{j = 0}^n \binom{n}{j} \beta_\tau \beta_1 W_c^{(j)} A^{(n+1-j)} \\ \label{Midterms:3} & - \sum_{j = 0}^n \binom{n}{j} {\ensuremath{\partial}}_c G_A^{(j)} A^{(n+1-j)} =: F_{A,n}^{c}\,. \end{aligned}$$ We now compute ${\ensuremath{\partial}}_x^n$ of the expression for ${\ensuremath{\partial}}_c F_A$ in which yields $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_c F_A^{(n)} =& \dot{\tau}_c \beta_\tau F_A^{(n)} + \beta_\tau \beta_1 e^{-s}\sum_{j = 0}^n \binom{n}{j} \Big( e^{- \frac s 4} W + \kappa + Z \Big)^{(j)} \Big( e^{- \frac s 4} W_c + \kappa_c + Z_c\Big)^{(n-j)} \\ \label{Midterms:4} & - 2 \beta_\tau \beta_5 e^{-s} \sum_{j = 0}^n \binom{n}{j} \Big( e^{- \frac s 4} W + \kappa - Z \Big)^{(j)} \Big( e^{- \frac s 4} W_c+ \kappa_c - Z_c \Big)^{(n-j)}\,.\end{aligned}$$ ### $W$ Quantities For the $W$ equations, we separately write down the $n = 0$ system. Differentiating in $c$ yields $$\begin{aligned} {\ensuremath{\nonumber}}&({\ensuremath{\partial}}_s - \frac 1 4 + \beta_\tau W^{(1)}) {\ensuremath{\partial}}_c W + \mathcal{V}_W {\ensuremath{\partial}}_x {\ensuremath{\partial}}_c W \\ &\qquad= - e^{- \frac 3 4 s} \beta_\tau {\ensuremath{\partial}}_c \dot{\kappa} - e^{- \frac 3 4 s} \dot{\kappa} {\ensuremath{\partial}}_c \dot{\tau} \beta_\tau^2 - {\ensuremath{\partial}}_c G_W W^{(1)} - W^{(1)} \dot{\tau}_{c} \beta_\tau^2 W + {\ensuremath{\partial}}_c F_W\,. \label{eq.dcw.0} \end{aligned}$$ By differentiating in ${\ensuremath{\partial}}_c$, we obtain $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_c F_W = &- {\ensuremath{\partial}}_c \dot{\tau} \beta_\tau^2 e^{- \frac 3 4 s} A \Big( \beta_3 Z + \beta_4 (e^{- \frac s 4}W + \kappa) \Big) - \beta_\tau e^{- \frac 3 4 s} {\ensuremath{\partial}}_c A \Big( \beta_3 Z + \beta_4 (e^{- \frac s 4} W + \kappa) \Big) \\ {\ensuremath{\nonumber}}& - \beta_\tau e^{- \frac 3 4 s} A \Big( \beta_3 {\ensuremath{\partial}}_c Z + \beta_4 (e^{- \frac s 4} {\ensuremath{\partial}}_c W + {\ensuremath{\partial}}_c \kappa) \Big) \\ \label{dc.FW} = & \dot{\tau}_c \beta_\tau F_W - \beta_\tau e^{- \frac 3 4 s} A_c \Big( \beta_3 Z + \beta_4 (e^{- \frac s 4}W + \kappa) \Big) - \beta_\tau e^{- \frac 3 4 s} A \Big( \beta_3 Z_c + \beta_4 (e^{- \frac s 4}W_c + \kappa_c) \Big). 
\end{aligned}$$ We combine with to obtain $$\begin{aligned} \label{eq:C:W} ({\ensuremath{\partial}}_s - \frac 1 4 + \beta_\tau W^{(1)}) {\ensuremath{\partial}}_c W + \mathcal{V}_W {\ensuremath{\partial}}_x {\ensuremath{\partial}}_c W = F_{W,0}^{c}\,, \end{aligned}$$ where the forcing is given by $$\begin{aligned} {\ensuremath{\nonumber}}F_{W,0}^{c} := & \dot{\tau}_c \beta_\tau F_W - \beta_\tau e^{- \frac 3 4 s} A_c \Big( \beta_3 Z + \beta_4 (e^{- \frac s 4} W + \kappa) \Big) - {\ensuremath{\partial}}_c G_W W^{(1)} - W^{(1)} \dot{\tau}_{c} \beta_\tau^2 W \\ \label{dc.FW:0} & - \beta_\tau e^{- \frac 3 4 s} A \Big( \beta_3 Z_c + \beta_4 (e^{- \frac s 4} W_c + \kappa_c) \Big) - e^{- \frac 3 4 s} \beta_\tau {\ensuremath{\partial}}_c \dot{\kappa} - e^{- \frac 3 4 s} \dot{\kappa} {\ensuremath{\partial}}_c \dot{\tau} \beta_\tau^2 \,. \end{aligned}$$ We now take ${\ensuremath{\partial}}_x^n$ of equation . This produces, for $n \ge 1$, $$\begin{aligned} {\ensuremath{\nonumber}}&({\ensuremath{\partial}}_s + \frac{5n-1}{4} +(n+1) \beta_\tau W^{(1)}) {\ensuremath{\partial}}_c W^{(n)} + \mathcal{V}_W {\ensuremath{\partial}}_x {\ensuremath{\partial}}_c W^{(n)} \\ {\ensuremath{\nonumber}}=& - 1_{n \ge 1} \sum_{j = 1}^n \binom{n}{j} \beta_\tau W^{(1+j)} {\ensuremath{\partial}}_c W^{(n-j)} - 1_{n \ge 2} \sum_{j = 0}^{n-2} \binom{n}{j} \beta_\tau W^{(n-j)} {\ensuremath{\partial}}_c W^{(j+1)} \\ {\ensuremath{\nonumber}}& - 1_{n \ge 1} \sum_{j = 0}^{n-1} \binom{n}{j} G_W^{(n-j)} {\ensuremath{\partial}}_c W^{(j+1)} - \sum_{j = 0}^n \binom{n}{j} {\ensuremath{\partial}}_c G_W^{(j)} W^{(n-j+1)} \\ \label{trek:mix} & - \dot{\tau}_c \beta_\tau^2 \sum_{j = 0}^n \binom{n}{j} W^{(1+j)} W^{(n-j)} + {\ensuremath{\partial}}_c {\ensuremath{\partial}}_x^n F_W =: F_{W, n}^{c}\,. \end{aligned}$$ We now use the expression compute $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_c F_W^{(n)} = & \dot{\tau}_c \beta_\tau F_W^{(n)} - \sum_{j = 0}^n \binom{n}{j} \beta_\tau e^{- \frac 3 4 s} {\ensuremath{\partial}}_c A^{(j)} \Big( \beta_3 Z^{(n-j)} + \beta_4 (e^{- \frac s 4} W + \kappa)^{(n-j)} \Big) \\ \label{dcbcFwn} & - \sum_{j = 0}^n \binom{n}{j} \beta_\tau e^{- \frac 3 4 s} A^{(j)} \Big( \beta_3 {\ensuremath{\partial}}_c Z^{(n-j)} + \beta_4 (e^{- \frac s 4} {\ensuremath{\partial}}_c W + {\ensuremath{\partial}}_c \kappa)^{(n-j)} \Big)\,.\end{aligned}$$ Combining now with the expression , we obtain $$\begin{aligned} {\ensuremath{\nonumber}}F_{W,n}^{c} := & \dot{\tau}_c \beta_\tau F_W^{(n)} - \sum_{j = 0}^n \binom{n}{j} \beta_\tau e^{- \frac 3 4 s} {\ensuremath{\partial}}_c A^{(j)} \Big( \beta_3 Z^{(n-j)} + \beta_4 (e^{- \frac s 4} W + \kappa)^{(n-j)} \Big) \\ {\ensuremath{\nonumber}}& - \sum_{j = 0}^n \binom{n}{j} \beta_\tau e^{- \frac 3 4 s} A^{(j)} \Big( \beta_3 {\ensuremath{\partial}}_c Z^{(n-j)} + \beta_4 (e^{- \frac s 4} {\ensuremath{\partial}}_c W + {\ensuremath{\partial}}_c \kappa)^{(n-j)} \Big) \\ {\ensuremath{\nonumber}}& - \sum_{j = 1}^n \binom{n}{j} \beta_\tau W^{(1+j)} {\ensuremath{\partial}}_c W^{(n-j)} - 1_{n \ge 2} \sum_{j = 0}^{n-2} \binom{n}{j} \beta_\tau W^{(n-j)} {\ensuremath{\partial}}_c W^{(j+1)} \\ {\ensuremath{\nonumber}}& - 1_{n \ge 1} \sum_{j = 0}^{n-1} \binom{n}{j} G_W^{(n-j)} {\ensuremath{\partial}}_c W^{(j+1)} - \sum_{j = 0}^n \binom{n}{j} {\ensuremath{\partial}}_c G_W^{(j)} W^{(j+1)} \\ \label{Fncw.fin.2} & - \dot{\tau}_c \beta_\tau^2 \sum_{j = 0}^n \binom{n}{j} W^{(1+j)} W^{(n-j)}\,.\end{aligned}$$ $\nabla_{\alpha, \beta}^2$ derivatives -------------------------------------- ### 
$\nabla_{\alpha, \beta}^2$ derivatives of $W$ We compute ${\ensuremath{\partial}}_{c_2}$ of which results in $$\begin{aligned} {\ensuremath{\nonumber}}&({\ensuremath{\partial}}_s - \frac 1 4 + \beta_\tau W^{(1)}) W_{c_1 c_2} + \mathcal{V}_W {\ensuremath{\partial}}_x W_{c_1 c_2} \\ {\ensuremath{\nonumber}}=& {\ensuremath{\partial}}_{c_1 c_2} F_W - \beta_\tau W^{(1)}_{c_2} W_{c_1} - \beta_\tau^2 \dot{\tau}_{c_2} W^{(1)} W_{c_1} - \Big( \beta_\tau^2 \dot{\tau}_{c_2} W + \beta_\tau W_{c_2} + {\ensuremath{\partial}}_{c_2} G_W \Big) W^{(1)}_{c_1} \\ {\ensuremath{\nonumber}}& - \dot{\tau}_{c_1} \beta_\tau^2 W W^{(1)}_{c_2} - \dot{\tau}_{c_1 c_2} \beta_\tau^2 W W^{(1)} - 2 \beta_\tau^2 \dot{\tau}_{c_1} \dot{\tau}_{c_2} W W^{(1)} - \dot{\tau}_{c_1} \beta_\tau^2 W^{(1)} W_{c_2} - \mathcal{M}^{c_1, c_2} \\ \label{HAIM:1} =:& F_{W,0}^{c_1, c_2}\,,\end{aligned}$$ where the modulation terms have been grouped into $$\begin{aligned} \label{Mod:Mod:1} \mathcal{M}^{c_1, c_2} := e^{- \frac 3 4 s}\Big( \beta_\tau \dot{\kappa}_{c_1 c_2} + \beta_\tau^2 (\dot{\tau}_{c_2} \dot{\kappa}_{c_1} + \dot{\kappa}_{c_2} \dot{\tau}_{c_1}) + \dot{\kappa} \dot{\tau}_{c_1 c_2} \beta_\tau^2 + 2 \beta_\tau^2 \dot{\kappa} \dot{\tau}_{c_1} \dot{\tau}_{c_2} \Big)\,.\end{aligned}$$ Similarly we compute ${\ensuremath{\partial}}_{x}^n$ of which results in the following system for $n \ge 1$ $$\begin{aligned} {\ensuremath{\nonumber}}&\Big( {\ensuremath{\partial}}_s + \frac{5n-1}{4} + (n+1) \beta_\tau W^{(1)} \Big) W_{c_1, c_2}^{(n)} + \mathcal{V}_W {\ensuremath{\partial}}_x W^{(n)}_{c_1, c_2} \\ {\ensuremath{\nonumber}}= & {\ensuremath{\partial}}_{c_1 c_2} F_W^{(n)} - \sum_{i \in \{1, 2\}} \sum_{j = 0}^n \binom{n}{j} \beta_\tau^2 \dot{\tau}_{c_i} W^{(1+j)} W^{(n-j)}_{c_{i'}} - \sum_{j = 0}^n \binom{n}{j} \beta_\tau W^{(j)}_{c_1} W^{(n+1-j)}_{c_2} \\ {\ensuremath{\nonumber}}& - 1_{n \ge 1} \sum_{j = 1}^n \binom{n}{j} \beta_\tau W^{(1+j)} W^{(n-j)}_{c_1 c_2} - \sum_{i = \{1, 2\}} \sum_{j = 0}^{n} \binom{n}{j} \beta_\tau^2 \dot{\tau}_{c_{i'}} W^{(j)} W^{(n+1-j)}_{c_i} \\ {\ensuremath{\nonumber}}&- \sum_{j = 0}^{n} \binom{n}{j} \beta_\tau W^{(n+1-j)}_{c_1} W^{(j)}_{c_2} - 1_{n \ge 2} \sum_{j = 2}^{n} \binom{n}{j} \beta_\tau W^{(j)} W^{(n+1-j)}_{c_1 c_2} \\ {\ensuremath{\nonumber}}& - \sum_{j = 0}^{n} \binom{n}{j} {\ensuremath{\partial}}_{c_2}G_W^{(j)} W^{(n+1-j)}_{c_1} - 1_{n \ge 1} \sum_{j = 1}^{n} \binom{n}{j} G_W^{(j)} W_{c_1 c_2}^{(n-j+1)} \\ & - \sum_{j = 0}^n \binom{n}{j} \left(\dot{\tau}_{c_1 c_2}+2\dot{\tau}_{c_1} \dot{\tau}_{c_2} \right) \beta_\tau^2 W^{(j)} W^{(n+1-j)} =: F_{W,n}^{c_1, c_2}\,. \label{Bernie:1}\end{aligned}$$ We shall now compute the following identity by differentiating $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_{c_1 c_2} F_W = &- \beta_\tau e^{- \frac 3 4 s} \Big( A_{c_1 c_2} ( \beta_3 Z + \beta_4 (e^{- \frac s 4}W + \kappa) ) + A_{c_1} ( \beta_3 Z_{c_2} + \beta_4 (e^{- \frac s4} W_{c_2} + \kappa_{c_2}) ) \Big) \\ {\ensuremath{\nonumber}}& - \beta_\tau e^{- \frac 3 4 s} \Big( A_{c_2} (\beta_3 Z_{c_1} + \beta_4 (e^{- \frac s 4} W_{c_1} + \kappa_{c_1})) + A (\beta_3 Z_{c_1 c_2} + \beta_4 ( e^{- \frac s 4}W_{c_1 c_2} + \kappa_{c_1 c_2})) \Big) \\ \label{find:1} & + \dot{\tau}_{c_2} \beta_\tau {\ensuremath{\partial}}_{c_1} F_W + \dot{\tau}_{c_1 c_2} \beta_\tau F_W + \dot{\tau}_{c_1} \beta_\tau {\ensuremath{\partial}}_{c_2} F_W\,. 
\end{aligned}$$ Similarly, computing ${\ensuremath{\partial}}_x^n$ of the above expression, we record for $n \ge 1$, $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_{c_1 c_2} F_W^{(n)} = & - \beta_\tau e^{- \frac 3 4 s} \sum_{j = 0}^n \binom{n}{j} \Big( A_{c_1 c_2}^{(j)} (\beta_3 Z^{(n-j)} + \beta_4 (e^{- \frac s 4}W+ \kappa)^{(n-j)}) \\ {\ensuremath{\nonumber}}& \qquad \qquad \qquad \qquad \qquad + A_{c_1}^{(j)} (\beta_3 Z_{c_2}^{(n-j)} + \beta_4 (e^{- \frac s 4} W_{c_2} + \kappa_{c_2}) ^{(n-j)}) \Big) \\ {\ensuremath{\nonumber}}& - \beta_\tau e^{- \frac 3 4 s} \sum_{j = 0}^n \binom{n}{j} \Big( A_{c_2}^{(j)} (\beta_3 Z_{c_1}^{(n-j)} + \beta_4 (e^{- \frac s 4}W_{c_1} + \kappa_{c_1})^{(n-j)}) \\ {\ensuremath{\nonumber}}& \qquad \qquad \qquad \qquad \qquad + A^{(j)} (\beta_3 Z_{c_1 c_2}^{(n-j)} + \beta_4 (e^{- \frac s 4} W_{c_1 c_2} + \kappa_{c_1 c_2})^{(n-j)} ) \Big) \\ \label{Fwn:c1:c2:exp} & + \dot{\tau}_{c_2} \beta_\tau {\ensuremath{\partial}}_{c_1} F_W^{(n)} + \dot{\tau}_{c_1 c_2} \beta_\tau F_W^{(n)} + \dot{\tau}_{c_1} \beta_\tau {\ensuremath{\partial}}_{c_2} F_W^{(n)}\,.\end{aligned}$$ ### $\nabla_{\alpha, \beta}^2$ derivatives of $Z$ A calculation of ${\ensuremath{\partial}}_{c_2}$ of equation results in $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_s Z_{c_1 c_2} + \mathcal{V}_Z {\ensuremath{\partial}}_x Z_{c_1 c_2} = & {\ensuremath{\partial}}_{c_1 c_2} F_Z - \sum_{i \in \{1, 2 \}} Z_{c_i}^{(1)} \Big( \dot{\tau}_{c_{i'}} \beta_\tau^2 \beta_2 W + \beta_\tau \beta_2 W_{c_{i'}} + {\ensuremath{\partial}}_{c_{i'}} G_Z \Big) \\ {\ensuremath{\nonumber}}& - Z^{(1)} \Big( \dot{\tau}_{c_1 c_2} \beta_\tau^2 \beta_2 W + 2 \dot{\tau}_{c_1} \dot{\tau}_{c_2} \beta_\tau^2 \beta_2 W + \sum_{i \in \{1, 2\}}\dot{\tau}_{c_i} \beta_\tau^2 \beta_2 W_{c_{i'}} \\ & + \beta_\tau \beta_2 W_{c_1 c_2} + {\ensuremath{\partial}}_{c_1 c_2} G_Z \Big) =: F_{Z,0}^{c_1, c_2}\,.\label{Lauv:1}\end{aligned}$$ Computing ${\ensuremath{\partial}}_x^n$ we obtain $$\begin{aligned} {\ensuremath{\nonumber}}&\Big( {\ensuremath{\partial}}_s + \frac 5 4 n + n \beta_\tau \beta_2 W^{(1)} \Big) Z_{c_1 c_2}^{(n)} + \mathcal{V}_Z {\ensuremath{\partial}}_x Z_{c_1 c_2}^{(n)} \\ {\ensuremath{\nonumber}}= & - 1_{n \ge 2} \sum_{j = 2}^n \binom{n}{j} \beta_\tau \beta_2 W^{(j)} Z_{c_1 c_2}^{(n-j+1)} - 1_{n \ge 1} \sum_{j = 1}^n \binom{n}{j} G_Z^{(j)} Z_{c_1 c_2}^{(n-j+1)} \\ {\ensuremath{\nonumber}}& - \sum_{j = 0}^n \sum_{i \in \{1, 2 \}} \binom{n}{j} Z_{c_i}^{(j+1)} \Big( \dot{\tau}_{c_{i'}} \beta_\tau^2 \beta_2 W^{(n-j)} + \beta_\tau \beta_2 W_{c_{i'}}^{(n-j)} + {\ensuremath{\partial}}_{c_{i'}}G_Z^{(n-j)} \Big) \\ {\ensuremath{\nonumber}}& - \sum_{j = 0}^n \binom{n}{j} Z^{(j+1)} \Big( \dot{\tau}_{c_1 c_2} \beta_\tau^2 \beta_2 W^{(n-j)} + 2 \dot{\tau}_{c_1} \dot{\tau}_{c_2} \beta_\tau^2 \beta_2 W^{(n-j)} + \beta_\tau \beta_2 W_{c_1 c_2}^{(n-j)} \\ {\ensuremath{\nonumber}}& \qquad \qquad \qquad \qquad + \sum_{i \in \{1, 2\}} \dot{\tau}_{c_i} \beta_\tau^2 \beta_2 W_{c_{i'}}^{(n-j)} + {\ensuremath{\partial}}_{c_1 c_2}G_Z^{(n-j)} \Big) \\ & + {\ensuremath{\partial}}_{c_1 c_2} F_Z^{(n)} =: F_{Z,n}^{c_1, c_2}\,. 
\label{Lauv:2}\end{aligned}$$ We now record the expression for $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_{c_2 c_1} F_Z = &- \beta_\tau e^{-s}\Big( A ( \beta_3 ( e^{- \frac s 4} W_{c_1 c_2} + \kappa_{c_1 c_2} ) + \beta_4 Z_{c_1 c_2} ) + A_{c_1 c_2} (\beta_3 (e^{- \frac s 4}W + \kappa) + \beta_4 Z) \Big) \\ {\ensuremath{\nonumber}}& - \beta_\tau e^{-s} \sum_{i \in \{1, 2 \}} A_{c_i} \Big( \beta_3 (e^{- \frac s 4} W_{c_{i'}} + \kappa_{c_{i'}}) + \beta_4 Z_{c_{i'}} \Big) + \dot{\tau}_{c_1} \beta_\tau {\ensuremath{\partial}}_{c_2} F_Z + \dot{\tau}_{c_2} \beta_\tau {\ensuremath{\partial}}_{c_1} F_Z \\ \label{Lauv:3} & + \dot{\tau}_{c_1 c_2} \beta_\tau F_Z\,. \end{aligned}$$ Next, we compute ${\ensuremath{\partial}}_x^n$ of the above expression to obtain $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_{c_2 c_1} F_Z^{(n)} = & - \beta_\tau e^{-s} \sum_{j = 0}^n \binom{n}{j} A^{(j)} \Big( \beta_3 (e^{- \frac s 4} W_{c_1 c_2} + \kappa_{c_1 c_2})^{(n-j)} + \beta_4 Z_{c_1 c_2}^{(n-j)} \Big) \\ {\ensuremath{\nonumber}}& - \beta_\tau e^{-s} \sum_{j = 0}^n \binom{n}{j} A_{c_1 c_2}^{(j)} \Big( \beta_3 (e^{- \frac s 4} W + \kappa)^{(n-j)} + \beta_4 Z^{(n-j)} \Big) \\ {\ensuremath{\nonumber}}& - \beta_\tau e^{-s} \sum_{j = 0}^n \sum_{i \in \{1, 2\}} \binom{n}{j} A_{c_i}^{(j)} \Big( \beta_3( e^{- \frac s 4} W_{c_{i'}} + \kappa_{c_{i'}})^{(n-j)} + \beta_4 Z_{c_{i'}}^{(n-j)} \Big) \\ \label{Lauv:4} &+ \dot{\tau}_{c_1} \beta_\tau {\ensuremath{\partial}}_{c_2} F_Z^{(n)} + \dot{\tau}_{c_2} \beta_\tau {\ensuremath{\partial}}_{c_1} F_Z^{(n)} + \dot{\tau}_{c_1 c_2} \beta_\tau F_Z^{(n)}\,.\end{aligned}$$ ### $\nabla_{\alpha, \beta}^2$ derivatives of $A$ We compute ${\ensuremath{\partial}}_{c_2}$ of equation to obtain $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_s A_{c_1 c_2} + \mathcal{V}_A {\ensuremath{\partial}}_x A_{c_1 c_2} = & {\ensuremath{\partial}}_{c_1 c_2} F_A - \sum_{i \in \{1, 2\}} A^{(1)}_{c_{i'}} \Big( \dot{\tau}_{c_i} \beta_\tau^2 \beta_1 W + \beta_\tau \beta_1 W_{c_i} + {\ensuremath{\partial}}_{c_i} G_A \Big) \\ {\ensuremath{\nonumber}}& - A^{(1)} \Big( \dot{\tau}_{c_1 c_2} \beta_\tau^2 \beta_1 W + 2 \dot{\tau}_{c_1} \dot{\tau}_{c_2} \beta_1 \beta_\tau^3 W + \beta_\tau \beta_1 W_{c_1 c_2} + {\ensuremath{\partial}}_{c_1 c_2}G_A \\ {\ensuremath{\nonumber}}& \qquad \qquad + \sum_{i \in \{1, 2\}} \beta_\tau^2 \beta_1 \dot{\tau}_{c_i} W_{c_{i'}} \Big)\\ \label{Lauv:5} =: & F_{A,0}^{c_1, c_2}\,.
\end{aligned}$$ By computing ${\ensuremath{\partial}}_x^n$ of the above equation, we obtain $$\begin{aligned} {\ensuremath{\nonumber}}&\Big( {\ensuremath{\partial}}_s + \frac 5 4 n + n \beta_\tau \beta_1 W^{(1)} \Big) A_{c_1 c_2}^{(n)} + \mathcal{V}_A {\ensuremath{\partial}}_x A_{c_1 c_2}^{(n)} \\ {\ensuremath{\nonumber}}&\quad= - 1_{n \ge 2} \sum_{j = 2}^n \binom{n}{j} \beta_\tau \beta_1 W^{(j)} A_{c_1 c_2}^{(n-j+1)} - 1_{n \ge 1} \sum_{j = 1}^n \binom{n}{j} G_A^{(j)} A_{c_1 c_2}^{(n-j+1)} \\ {\ensuremath{\nonumber}}& \qquad- \sum_{i = \{1, 2\}} \sum_{j = 0}^n \binom{n}{j} A^{(j+1)}_{c_{i'}} \Big( \dot{\tau}_{c_i} \beta_\tau^2 \beta_1 W^{(n-j)} + \beta_\tau \beta_1 W_{c_{i}}^{(n-j)} + {\ensuremath{\partial}}_{c_i} G_A^{(n-j)} \Big) \\ {\ensuremath{\nonumber}}&\qquad - \sum_{i = \{1, 2 \}} \sum_{j = 0}^n \binom{n}{j} A^{(j+1)} \Big( \dot{\tau}_{c_1 c_2} \beta_\tau^2 \beta_1 W^{(n-j)} + 2 \dot{\tau}_{c_1} \dot{\tau}_{c_2} \beta_1 \beta_\tau^3 W^{(n-j)} + \beta_\tau \beta_1 W_{c_1 c_2}^{(n-j)} \\ \label{Lauv:6} & \qquad \qquad \qquad \qquad + \sum_{i \in \{1, 2 \}} \beta_\tau^2 \beta_1 \dot{\tau}_{c_i} W_{c_{i'}}^{(n-j)} + {\ensuremath{\partial}}_{c_1 c_2} G_A^{(n-j)}\Big) + {\ensuremath{\partial}}_{c_1 c_2} F_A^{(n)} =: F_{A,n}^{(c_1, c_2)}\,. \end{aligned}$$ We next differentiate equation to obtain $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_{c_1 c_2} F_A = & \beta_\tau \beta_1 e^{-s} \Big( e^{- \frac s 4} W_{c_2} + \kappa_{c_2} + Z_{c_2} \Big) \Big( e^{- \frac s 4} W_{c_1} + \kappa_{c_1} + Z_{c_1} \Big) \\ {\ensuremath{\nonumber}}& + \beta_\tau \beta_1 e^{-s} \Big( e^{- \frac s 4} W + \kappa + Z \Big) \Big( e^{- \frac s 4} W_{c_1 c_2} + \kappa_{c_1 c_2} + Z_{c_1 c_2} \Big) \\ {\ensuremath{\nonumber}}& - 2 \beta_\tau \beta_5 e^{-s} \Big( e^{- \frac s 4} W_{c_2} + \kappa_{c_2} - Z_{c_2} \Big) \Big( e^{- \frac s4} W_{c_1} + \kappa_{c_1} - Z_{c_1} \Big) \\ {\ensuremath{\nonumber}}& - 2 \beta_\tau \beta_5 e^{-s} \Big( e^{- \frac s 4} W + \kappa - Z \Big) \Big( e^{- \frac s 4} W_{c_1 c_2} + \kappa_{c_1 c_2} - Z_{c_1 c_2} \Big) \\ \label{Lauv:7} &+ \dot{\tau}_{c_1 c_2} \beta_\tau F_A + \dot{\tau}_{c_2} \beta_\tau {\ensuremath{\partial}}_{c_1}F_A + \dot{\tau}_{c_1} \beta_\tau {\ensuremath{\partial}}_{c_2}F_A\,.\end{aligned}$$ By computing ${\ensuremath{\partial}}_x^n$ of the above, we obtain $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_{c_1 c_2} F_A^{(n)} = & \beta_\tau \beta_1 e^{-s} \sum_{j = 0}^n \binom{n}{j} \Big( e^{- \frac s 4} W_{c_2} + \kappa_{c_2} + Z_{c_2} \Big)^{(j)} \Big( e^{- \frac s 4} W_{c_1} + \kappa_{c_1} + Z_{c_1} \Big)^{(n-j)} \\ {\ensuremath{\nonumber}}+& \beta_\tau \beta_1 e^{-s} \sum_{j = 0}^n \binom{n}{j} \Big( e^{- \frac s 4} W + \kappa + Z \Big)^{(j)} \Big( e^{- \frac s 4} W_{c_1 c_2} + \kappa_{c_1 c_2} + Z_{c_1 c_2} \Big)^{(n-j)} \\ {\ensuremath{\nonumber}}-& 2 \beta_\tau \beta_5 e^{-s} \sum_{j = 0}^n \binom{n}{j} \Big( e^{- \frac s 4} W_{c_2} + \kappa_{c_2} - Z_{c_2} \Big)^{(j)} \Big( e^{- \frac s 4} W_{c_1} + \kappa_{c_1} - Z_{c_1} \Big)^{(n-j)} \\ {\ensuremath{\nonumber}}-& 2 \beta_\tau \beta_5 e^{-s} \sum_{j = 0}^n \binom{n}{j} \Big( e^{- \frac s 4} W + \kappa - Z \Big)^{(j)} \Big( e^{- \frac s 4} W_{c_1 c_2} + \kappa_{c_1 c_2} - Z_{c_1 c_2} \Big)^{(n-j)} \\ \label{Lauv:8} + & \dot{\tau}_{c_1 c_2} \beta_\tau F_A^{(n)} + \dot{\tau}_{c_2} \beta_\tau {\ensuremath{\partial}}_{c_1}F_A^{(n)} + \dot{\tau}_{c_1} \beta_\tau {\ensuremath{\partial}}_{c_2} F_A^{(n)}\,. 
\end{aligned}$$ Initial data ============ We assume the data is of the form $$\label{guitar:or:band} W_0={\overline}{W} \chi(\eps^{\frac 1 4} x) + {\widehat}W_0+\alpha x^2 \chi(x)+\beta x^3 \chi(x)\,,$$ where $\chi$ is a smooth cut-off function satisfying $\chi(x)=1$ for ${\left|x\right|}\leq 1$ and with support contained in the ball of radius $2$. On the perturbation ${\widehat}{W}_0$, we shall assume $$\begin{aligned} \label{assume:1} {\left|\eta_{\frac 1 5} {\widehat}{W}_0^{(n)}(x)\right|} &\le \eps \,,&&\mbox{ for } {\left|x\right|}\leq \eps^{-\frac14} \mbox{ and } n = 0,...,8\,, \\ |{\widehat}{W}_0^{(n)}(0)| &\le \eps\,, &&\text{ for } n = 2, 3\,, \\ {\widehat}{W}_0^{(n)}(0) &= 0 \,,&&\text{ for } n = 0,1, 4, 5, 6\,. \end{aligned}$$ For $Z_0(x) := Z(s_0, x)$, and $A_0(x) = A(s_0, x)$, we assume $$\begin{aligned} \| Z_0^{(n)} \|_\infty &\le \eps^{\frac 3 2}\,,\\ \| A_0^{(n)} \|_\infty &\le \eps^{\frac 3 2} \,.\end{aligned}$$ for $n=0,\dots, 8$. Furthermore, we will assume the following support assumption on the initial data $(W_0,Z_0,A_0)$ $$\label{blue:grass} {\ensuremath{\mathrm{supp\,}}}({W}_0)\cup {\ensuremath{\mathrm{supp\,}}}({Z}_0)\cup {\ensuremath{\mathrm{supp\,}}}({A}_0) \subset [- \frac M 2 \eps^{- \frac 1 4}, \frac M 2 \eps^{-\frac 1 4}] \,.$$ We will now describe the iteration. The quantities $W_{\alpha, \beta}, Z_{\alpha, \beta}, A_{\alpha, \beta}$ solve the system - with initial data $W_{0}$ given by for $W_{\alpha, \beta}$. We now describe the inductive hypotheses. First, we define the time step via $$\begin{aligned} \label{time:step} s_N := - \log(\eps) + N, \qquad N \in \mathbb{N}\,. \end{aligned}$$ The inductive hypotheses we make are the following: $$\begin{aligned} \label{induct:1} W^{(2)}_{\alpha_N, \beta_N}(s_N) = 0, \qquad W_{\alpha_N, \beta_N}^{(3)}(s_N) = 0\,, \end{aligned}$$ To initialize the induction, we take $$\begin{aligned} \label{zero:param} \alpha_0 = - \frac 1 2 {\widehat}{W}_0^{(2)}(0), \qquad \beta_0 = - \frac 1 6 {\widehat}{W}_0^{(3)}(0)\,. \end{aligned}$$ Note that is satisfied for $N = 0$, which is the first step of the iteration, according to , due to which implies that $$\begin{aligned} W^{(2)}_{0, 0}(0, s_0) &= {\overline}{W}^{(2)}(0) + {\widehat}{W}_0^{(2)}(0) - {\widehat}{W}_0^{(2)}(0) = 0 \,,\\ W^{(3)}_{0,0}(0, s_0) &= {\overline}{W}^{(3)}(0) + {\widehat}{W}_0^{(3)}(0) - {\widehat}{W}_0^{(3)}(0) = 0\,. \end{aligned}$$ Bootstrap assumptions {#section:Bootstraps} ===================== In this section we delineate all of our bootstrap assumptions. First, recall the weight function $\eta_\gamma$ defined in . Let us also specify the hierarchy of three small parameters, where $\eps$ is significantly smaller than any power of $M^{-1}$, and in turn $M^{-1}$ is significantly smaller than any power of $\ell$. For the sake of precision, we make the following selections $$\begin{aligned} \label{choice:M} \ell^{-1} = \log \log(M)\,. \end{aligned}$$ Parameter assumptions --------------------- We will first specify bootstrap assumptions on the parameters, $(\alpha, \beta)$, appearing in the specification of the initial data in . 
Throughout the analysis, our parameters $(\alpha, \beta)$ will be contained in the rectangle set $\mathcal{B}_N$, which is defined via $$\begin{aligned} \label{def:BN} \mathcal{B}_N = \left\{ |\alpha -\alpha_{N} | \leq M^{30} \eps^{-\frac34}e^{-\frac74s_{N}} + \eps^{-\frac 3 {10}} e^{- \frac32s_{N}} , {\left|\beta - \beta_{N}\right|} \leq M^{30} \eps^{-\frac12}e^{- \frac32 s_{N}} \right\}\,.\end{aligned}$$ In particular, since $s_0=-\log\eps$ we have $$\begin{aligned} \label{apple:1} |\alpha | \leq 2 M^{30}\eps , \qquad {\left|\beta\right|} \leq 2 M^{30}\eps\,. \end{aligned}$$ Note that the bootstrap in this parameter region will be verified in - . Moreover, notice that due to , is valid for the initial choice of $(\alpha, \beta) = (\alpha_0, \beta_0)$, defined in . We will now drop the subscript $W_{\alpha, \beta}$ as it is understood that $\alpha, \beta$ are fixed, and arbitrary elements of the set $\mathcal{B}_N(\alpha_N, \beta_N)$. Note that we only assume (and therefore prove) the below bootstraps on the time interval $-\log \eps \le s \le s_{N+1}$. We now state the main inductive proposition we will be proving using these bootstrap estimates. The proof of this proposition will take place in Subsection \[subsection:proof\]. \[induct:prop\] Fix $N \in \mathbb{N}$, the parameters $(\eps, M, \ell)$ through . Let $s_N$ be given by . Assume $(\alpha_N, \beta_N)$ are given so that is valid for choice of data , satisfying conditions - . Then there exists $(\alpha_{N+1}, \beta_{N+1})$ so that is valid for $s_{N+1}$ for data given again by . Bootstrap estimates on $(W^{(n)},Z^{(n)},A^{(n)})$ and modulation variables {#subsection:base} --------------------------------------------------------------------------- We will assume the following bootstraps on the support of the solutions: $$\label{e:support} {\ensuremath{\mathrm{supp\,}}}W(s) \cup {\ensuremath{\mathrm{supp\,}}}Z(s) \cup {\ensuremath{\mathrm{supp\,}}}A(s) \subset B(M \eps e^{\frac54 s} ) =: B_f\,,$$ where $B(r)$ is the ball centered at the origin of radius $r$. We give the name $B_f$ to the above ball to compactify notation, as we will frequently write indicator functions on this ball. We will assume the following global in $x$ bootstrap assumptions on $W$: $$\begin{aligned} {\left|W\right|}&\leq \ell \log M \eta_{\frac1{20}} \label{W:boot:0}\,,\\ |W^{(1)}| &\le \ell \log M \eta_{- \frac 15} \label{e:uniform:W1}\,, \\ \label{weds:1} |W^{(n)}|&\leq M^{n^2} \eta_{- \frac 1 5} \quad\text{ for } n = 2,\dots,8 \,,\end{aligned}$$ As a consequence of $\eqref{W:boot:0}$ and , we have that $$\begin{aligned} |W| \le \ell \log(M) \eta_{\frac{1}{20}} \lesssim \ell \log(M) \langle x \rangle^{\frac 1 5} \lesssim \ell \log(M) \langle M \eps e^{\frac 5 4 s} \rangle^{\frac 1 5} \lesssim \ell \log(M) (1 + M^{\frac 1 5} \eps^{\frac 15} e^{\frac s 4})\,, \end{aligned}$$ and thus, $$\begin{aligned} \label{est:W:good} e^{- \frac s 4} |W| \le 1\,,\end{aligned}$$ which we shall use repeatedly. On $Z$ and $A$ we will assume the following bootstraps: $$\begin{aligned} \label{Z:boot:0} \| Z \|_\infty &\le \eps^{\frac 5 4}\,, &\| Z^{(n)} \|_\infty &\le M^{2n^2} e^{- \frac 5 4 s} \,, \\ \label{A:boot:9} \| A \|_\infty &\le M\eps\,, &\| A^{(n)} \|_\infty &\le M^{2n^2} e^{- \frac 5 4 s}\,, \end{aligned}$$ for $n=1,\dots 8$. 
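Before turning to the bootstraps on ${\widetilde}{W}$, we record for later reference how the bounds above are typically combined with the support bootstrap; this is only a consequence of the assumptions just made, together with the estimate $\eta_{\frac r 4} \lesssim (M \eps e^{\frac 5 4 s})^{r}$ on $B_f$, which is invoked repeatedly below. For instance, for $0 \le r \le 1$, $$\begin{aligned} \| Z^{(1)} \eta_{\frac{1-r}{4}} \|_\infty \lesssim \| Z^{(1)} \|_\infty \, (M \eps e^{\frac 5 4 s})^{1-r} \lesssim (M^{2} e^{- \frac 5 4 s}) (M \eps e^{\frac 5 4 s})^{1-r} = M^{3-r} \eps^{1-r} e^{- \frac{5r}{4} s}\,.\end{aligned}$$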
For the difference ${\widetilde}{W}$, we make the following bootstrap assumptions on ${\widetilde}{W}$ and ${\widetilde}{W}^{(1)}$ in the region ${\left|x\right|}\leq \eps^{-\frac14}$ $$\begin{aligned} \label{thurs:1} |{\widetilde}{W}| &\le \eps^{\frac{3}{20}} \eta_{\frac{1}{20}}\,,\\ |{\widetilde}{W}^{(1)}| &\le \eps^{\frac{1}{20}} \eta_{- \frac 1 5}\,. \label{e:Wtilde:1:bootstrap} \end{aligned}$$ For the higher order derivatives of ${\widetilde}{W}$, we will assume the following local in $x$ bootstraps in the region $|x| \le \ell$ $$\begin{aligned} |{\widetilde}{W}^{(n)}| &\le {\left|x\right|}^{6-n}\eps^{\frac{1}{5}}+\eps^{\frac12}\leq 2{\left|\ell\right|}^{6-n}\eps^{\frac{1}{5}},\quad \mbox{for } 0\leq n \leq 5 \label{e:Wtilde:bootstrap} \\ |{\widetilde}{W}^{(6)}| &\le \eps^{\frac{1}{5}},\label{e:W6:bootstrap}\\ \label{gerrard:1} |{\widetilde}{W}^{(7)}| & \le M \eps^{\frac 1 5} \,, \\ \label{gerrard:2} |{\widetilde}{W}^{(8)}| &\le M^3 \eps^{\frac 1 5}\,. \end{aligned}$$ We now make the following crucial bootstrap assumptions, which display decay in $s$ for the unconstrained quantities $q^{(2)}, q^{(3)}$ (recall the notation defined in ), $$\begin{aligned} \label{boot:decay} |q^{(2)}| \le \eps^{\frac{1}{10}} e^{- \frac 3 4 s} , \qquad |q^{(3)}| \le M^{40} e^{- s}\,, \end{aligned}$$ and the following smallness estimate $$\begin{aligned} \label{boot:W:5:0} |{\widetilde}{W}^{(5)}(0,s)| \le \eps^{\frac12} \text{ for } -\log \eps \le s \le s_{N+1}\,, \end{aligned}$$ which in particular, when coupled with , ensures that $$\begin{aligned} \label{moon} |q^{(5)}| \ge 120 - \eps^{\frac 1 2} \ge 100\,. \end{aligned}$$ We also have, crucially, the following estimate $${\left|W^{(1)}\right|}\leq 1+ e^{-\frac 3 4 s}\label{eq:W1:bnd:1}\,.$$ Finally, we have the bootstraps on the modulation variables: $$\begin{aligned} \label{e:GW0} &|\mu| \le \eps^{\frac 1 6} e^{-\frac 3 4 s}\,, && |\dot{\tau}| \le \eps^{\frac 1 6} e^{- \frac 3 4 s}\,, && |\dot{\kappa}| \le \eps^{\frac 1 8}\,, \\ \label{mod:sub} &|\kappa- \kappa_0| \le \eps \,, && |\dot{\xi}| \le 3 \kappa_0\,.\end{aligned}$$ As a consequence we have $${\left|1-\beta_{\tau}\right|}\leq 2 \eps^{\frac 1 6} e^{- \frac 3 4 s}\,,\label{e:1beta:bnd}$$ which will be employed repeatedly in the forthcoming estimates. $\nabla_{\alpha, \beta}$ bootstraps ----------------------------------- We now provide the bootstrap assumptions we make on the $(\alpha, \beta)$ derivatives of the quantities appearing in Subsection \[subsection:base\]. The first bootstraps we provide are for the modulation variables, for which we notably do not distinguish between $\alpha$ and $\beta$ derivatives (recall ${\ensuremath{\partial}}_c \in \{{\ensuremath{\partial}}_\alpha, {\ensuremath{\partial}}_\beta \}$ from ). $$\begin{aligned} \label{mako:1} &|{\ensuremath{\partial}}_c \mu| \le M^{33} \eps^{\frac 1 2} e^{- \frac s 4}\,, && |{\ensuremath{\partial}}_c \dot{\tau}| \le \eps^{\frac 1 2}\,, && |{\ensuremath{\partial}}_c \dot{\kappa}| \le \eps^{\frac 1 4} e^{\frac 1 2 s}\,, \\ \label{Baba:O:1} &|{\ensuremath{\partial}}_c \kappa| \le \eps^{\frac 1 2}\,, && |{\ensuremath{\partial}}_c \dot{\xi}| \le M \eps^{\frac 1 2} \,.\end{aligned}$$ Next, we provide the bootstrap assumptions on ${\ensuremath{\partial}}_\alpha Z, {\ensuremath{\partial}}_\beta Z, {\ensuremath{\partial}}_\alpha A, {\ensuremath{\partial}}_\beta A$, and higher derivatives thereof. We again note that we do not distinguish between $\alpha$ and $\beta$ derivatives for these quantities.
$$\begin{aligned} \label{mako:2} \| {\ensuremath{\partial}}_c Z \|_\infty &\le \eps^{\frac 1 2} \,, & \| {\ensuremath{\partial}}_c A \|_\infty &\le \eps^{\frac 1 2}\,, \\ \label{mako:3} \| {\ensuremath{\partial}}_c Z^{(n)} \|_\infty &\le M^{2n^2} \eps^{\frac 1 2} e^{-\frac12 s}\,, & \|{\ensuremath{\partial}}_c A^{(n)}\|_\infty &\le M^{2n^2} \eps^{\frac 1 2} e^{-\frac12 s}\,,\end{aligned}$$ for $n=1,\dots,7$. Next, we provide the bootstrap assumptions for the elements of the $2 \times 2$ $s$-dependent matrix $$\begin{aligned} \begin{pmatrix} {\ensuremath{\partial}}_\alpha q^{(2)}(s) && {\ensuremath{\partial}}_\beta q^{(2)}(s) \\ {\ensuremath{\partial}}_\alpha q^{(3)}(s) && {\ensuremath{\partial}}_\beta q^{(3)}(s) \end{pmatrix}\,.\end{aligned}$$ For these quantities, we need to distinguish between $\alpha$ and $\beta$ derivatives carefully, which we do via $$\begin{aligned} \label{mako:4} &\frac 1 2 \eps^{\frac34}e^{\frac 3 4 s} \le {\ensuremath{\partial}}_\alpha q^{(2)} \le 4 \eps^{\frac34}e^{\frac 3 4 s }\,, &&|{\ensuremath{\partial}}_\alpha q^{(3)}| \le \eps e^{\frac s 2 }\,, \\ \label{mako:5} &|{\ensuremath{\partial}}_\beta q^{(2)}| \le \eps e^{\frac 3 4 s}\,, &&\frac 1 2 \eps^{\frac12} e^{\frac 1 2 s} \le {\ensuremath{\partial}}_\beta q^{(3)} \le 4 \eps^{\frac12}e^{\frac 1 2 s}\,.\end{aligned}$$ In addition, we will need the enhanced constrained bootstrap $$\begin{aligned} \label{hotel:motel} |{\widetilde}{q}^{(5)}_c(s)| \le \eps^{\frac 3 8} e^{\frac 1 8 s}\,. \end{aligned}$$ Next, we will assume the following bootstrap bounds on ${\ensuremath{\partial}}_c W$ and higher derivatives thereof. $$\begin{aligned} \label{pc:W0} \|{\ensuremath{\partial}}_c W \|_\infty &\le M^{4} \eps^{\frac34} e^{\frac 3 4 s}\,, \\ \label{mako:6} \|{\ensuremath{\partial}}_c W^{(n)} \eta_{\frac{1}{20}} \|_\infty &\le M^{(n+2)^2} \eps^{\frac34}e^{\frac 3 4 s} \,,\end{aligned}$$ for $n=1,\dots,7$. Finally, we assume the following localized bounds in the region $|x| \le \ell$ which are stronger than - $$\begin{aligned} \label{warrior:1} |W_c^{(n)}| &\le \ell^{\frac 1 2} M \eps^{\frac34}e^{\frac 3 4 s} \qquad\text{ for }0 \le n \le 6 \,,\\ \label{warrior:2} |W_c^{(7)}| &\le M \eps^{\frac34}e^{\frac 3 4 s}\,.\end{aligned}$$ $\nabla_{\alpha, \beta}^2$ bootstraps ------------------------------------- We now provide the bootstrap assumptions on second-order ($\alpha, \beta$) derivatives of the quantities in Subsection \[subsection:base\]. For these highest order bootstraps, we do not need to distinguish between $\alpha$ and $\beta$ derivatives. Recall that ${\ensuremath{\partial}}_{c_1 c_2}$ denotes ${\ensuremath{\partial}}_{c_1} {\ensuremath{\partial}}_{c_2}$ with $c_1, c_2 \in \{\alpha, \beta\}$. We impose the following bootstrap assumptions for $0\le n\le 6$ $$\begin{aligned} \label{posty:1} \| {\ensuremath{\partial}}_{c_1 c_2} Z^{(n)} \|_\infty& \le M^{2n^2} \eps^{\frac 5 8} e^{\frac s 4} \,, \\ \label{posty:2} \| {\ensuremath{\partial}}_{c_1 c_2} A^{(n)} \|_\infty &\le M^{2n^2} \eps^{\frac 5 8} e^{\frac s 4} \,, \\ \label{buddy:1} \| {\ensuremath{\partial}}_{c_1 c_2} W^{(n)} \|_\infty &\le M^{(n+5)^2} \eps^{\frac32 }e^{\frac 3 2 s} \,.\end{aligned}$$ We will also need bootstraps on the second derivative of the modulation variables $$\begin{aligned} \label{group:1} &|\mu_{c_1 c_2}| \le M \eps^{\frac54}e^{\frac 5 4 s}\,, && | \dot{\kappa}_{c_1 c_2}| \le M^2 \eps^{\frac54}e^{2s}\,, && |\dot{\tau}_{c_1 c_2}| \le \eps e^{\frac 3 4 s}\,, \\ \label{group:2} &|\kappa_{c_1 c_2}| \le M^3 \eps^{\frac 5 4} e^s\,, && |\dot{\xi}_{c_1 c_2}| \le M^4 \eps^{\frac 5 4} e^s\,.
\end{aligned}$$ Preliminary estimates ===================== In order to analyze the equations - and their higher order spatial derivative counterparts, - , as well as their higher order parameter derivative counterparts, we first provide estimates on the forcing terms appearing in - . These are performed in Subsection \[subsection:Forcing\]. Controlling these forcing terms requires in turn controlling the transport speeds, $G_W, G_Z, G_A$, which is achieved in Subsection \[GW:control\]. The final subsection in this section, Subsection \[subsect:trajectory\], collects estimates on the trajectories associated to the transport structure of equations - . Transport speed estimates {#GW:control} ------------------------- We now provide estimates on the transport speeds, which are defined in - . We begin with the following estimates. \[l:GW\] Let $-1 \leq r \leq 0$, and $n \ge 1$. Then the following estimates are valid on the transport speeds, - . $$\begin{aligned} \label{GW:transport:est} \| G_W \eta_{\frac r 4} \|_{\infty}& \lesssim \eps^{\frac 1 6} e^{- \frac 3 4 s} + M^{3+r}\eps^{(1+r)} e^{\frac{1+5r}{4}s}, && \| G_W^{(n)} \|_\infty \les M^{2n^2} e^{-s}\,, \\ \label{Z:transport:1} \| G_Z +(1-\beta_2)e^{\frac s4}\kappa_0\|_\infty &\lesssim e^{\frac s 4}\,, && \| G_Z^{(n)} \|_\infty \lesssim M^{2n^2} e^{-s}\,, && \\ \| G_A +(1-\beta_1)e^{\frac s4}\kappa_0 \|_\infty &\lesssim e^{\frac s 4}\,, && \| G_A^{(n)} \|_\infty \lesssim M^{2n^2} e^{-s}\label{A:transport:1}\,.\end{aligned}$$ We record the following identity: $$\begin{aligned} \label{def:GW:e} G_W(x, s) = \mu(s) + G_{W,e}(x, s), \qquad G_{W,e}(x, s) := \beta_\tau \beta_2 e^{\frac s 4} \int_0^x Z^{(1)}(x', s) {\,\mathrm{d}}x'\,, \end{aligned}$$ where we have invoked definition for $G_W$ and subsequently for the quantity $\mu(s)$. We estimate for $j \ge 1$, $$\begin{aligned} \label{Gw:j:est} \| G_W^{(j)} \|_\infty = \| \beta_\tau \beta_2 e^{\frac 1 4 s} Z^{(j)} \|_\infty \le 2 e^{\frac 1 4 s} M^{2j^2} e^{- \frac 5 4 s}\,. \end{aligned}$$ Using , we estimate $$\begin{aligned} {\ensuremath{\nonumber}}\| G_W\eta_{\frac r 4} \|_\infty \lesssim &|\mu| + \| G_{W,e} \eta_{\frac r 4} \|_\infty \\ {\ensuremath{\nonumber}}\lesssim & \eps^{\frac 1 6} e^{- \frac 3 4 s}+ \| \langle x \rangle^r \int_0^{x} {\ensuremath{\partial}}_x G_W(x') {\,\mathrm{d}}x' \|_\infty \\ {\ensuremath{\nonumber}}\lesssim & \eps^{\frac 1 6} e^{- \frac 3 4 s}+ \| \langle x \rangle^r \int_0^{x} \langle x' \rangle^{-1-r} {\ensuremath{\partial}}_x G_W(x') \langle x' \rangle^{1+ r} {\,\mathrm{d}}x' \|_\infty \\ {\ensuremath{\nonumber}}\lesssim & \eps^{\frac 1 6} e^{- \frac 3 4 s} + \| {\ensuremath{\partial}}_x G_W \langle x \rangle^{1+r} \|_\infty \\{\ensuremath{\nonumber}}\lesssim& \eps^{\frac 1 6} e^{- \frac 3 4 s} + e^{\frac 1 4 s} \| Z^{(1)} \langle x \rangle^{1+r} \|_\infty \\ \label{hardy:1} \lesssim & \eps^{\frac 1 6} e^{- \frac 3 4 s} + M^{3+r}\eps^{(1+r)} e^{\frac{1+5r}{4}s}\,.\end{aligned}$$ Above, we have invoked estimate for the estimate on $\mu$, the definition to calculate ${\ensuremath{\partial}}_x G_W$, estimate on $Z^{(1)}$, and the estimate to translate spatial weights to growth in $s$. The above calculation, , works when $r < 0$, but at $r = 0$ does not quite work due to having to integrate $\langle x \rangle^{-1}$. 
However, in that case, we may estimate via $$\begin{aligned} \| G_W \|_\infty \lesssim |\mu| + \| G_{W,e} \|_\infty \lesssim \eps^{\frac 1 6} e^{- \frac 3 4 s} + \| \langle x \rangle G_W^{(1)} \|_\infty \lesssim \eps^{\frac 1 6} e^{- \frac 3 4 s} + M^2 e^{-s} (M \eps e^{\frac 5 4 s})\,, \end{aligned}$$ where we have invoked for the estimate on $\mu$, with $j = 1$, and the estimate on the support. We now move to the transport speed $G_Z$. First, for the lowest order quantity, we use the definition and the bootstrap assumptions to estimate $$\begin{aligned} \| G_Z + (1-\beta_2)e^{\frac s4}\kappa_0 \|_\infty \lesssim e^{\frac s 4} (1 + \eps + \eps^{\frac 5 4}) \lesssim e^{\frac s 4}\,. \end{aligned}$$ According to the definition , we estimate $$\begin{aligned} \| G_Z^{(n)} \|_\infty \lesssim e^{\frac 1 4 s} \| Z^{(n)} \|_\infty \lesssim M^{2n^2} e^{- s}\,, \end{aligned}$$ where we have invoked the bootstrap, . For the transport speed $G_A$, we invoke the definition to perform the exact same calculation. Let $c \in \{ \alpha, \beta \}$. For $0 < r \le 1$ and $1\leq n\leq 7$, the following estimates are valid on the transport speeds, - $$\begin{aligned} \label{r:est:1} \| {\ensuremath{\partial}}_c G_W \eta_{- \frac r 4} \|_\infty &\lesssim \eps^{\frac 1 2} + M^{3-r} \eps^{\frac 3 2 - r} e^{\frac{4 - 5r}{4} s}\,, && \| {\ensuremath{\partial}}_c G_W^{(n)} \|_\infty \lesssim M^{2n^2} \eps^{\frac 1 2} e^{- \frac s 4}\,, \\ \label{r:est:2} \| {\ensuremath{\partial}}_c G_Z \|_\infty &\lesssim \eps^{\frac 1 4} e^{\frac s 4}\,, && \| {\ensuremath{\partial}}_c G_Z^{(n)} \|_\infty \lesssim M^{2n^2} \eps^{\frac 1 2} e^{- \frac s 4}\,,\\ \label{r:est:3} \| {\ensuremath{\partial}}_c G_A \|_\infty &\lesssim \eps^{\frac 1 4} e^{\frac s 4}\,, && \| {\ensuremath{\partial}}_c G_A^{(n)} \|_\infty \lesssim M^{2n^2} \eps^{\frac 1 2} e^{- \frac s 4} \,.\end{aligned}$$ We differentiate in $c$ to yield $$\begin{aligned} \label{coolie} {\ensuremath{\partial}}_c G_W = {\ensuremath{\partial}}_c \mu + {\ensuremath{\partial}}_c G_{W,e} = {\ensuremath{\partial}}_c \mu + {\ensuremath{\partial}}_c \dot{\tau} \beta_\tau^2 \beta_2 e^{\frac s 4} \int_0^x Z^{(1)}(x', s) {\,\mathrm{d}}x' + \beta_\tau \beta_2 e^{\frac s 4} \int_0^x {\ensuremath{\partial}}_c Z^{(1)} {\,\mathrm{d}}x'\,. \end{aligned}$$ Multiplying now by a weight of $\eta_{- \frac r 4}$, we obtain for every $r > 0$, $$\begin{aligned} {\ensuremath{\nonumber}}\| {\ensuremath{\partial}}_c G_W \eta_{- \frac r 4} \|_\infty \lesssim & |{\ensuremath{\partial}}_c \mu| + |{\ensuremath{\partial}}_c \dot{\tau}| e^{\frac s 4} \| Z^{(1)} \eta_{\frac{1-r}{4}} \|_\infty + e^{\frac s 4} \| {\ensuremath{\partial}}_c Z^{(1)} \eta_{\frac{1-r}{4}} \|_\infty \\ \lesssim & M^{33} \eps^{\frac 1 2}e^{-\frac s4} + \eps^{\frac 1 2} e^{\frac s 4} (M^2 e^{- \frac 5 4 s}) (M\eps e^{\frac 5 4 s})^{1-r} + e^{\frac s 4} (M^2 \eps^{\frac 1 2} e^{- \frac s 2}) (M\eps e^{\frac 5 4 s} )^{1 - r}{\ensuremath{\nonumber}}\\ \lesssim& \eps^{\frac 1 2} + M^{3-r} \eps^{\frac 3 2 - r} e^{\frac{4 - 5r}{4} s}{\ensuremath{\nonumber}}\,,\end{aligned}$$ where we have invoked for the modulation variables, and for the $Z$ quantities, and to estimate $\eta_{\frac{1-r}{4}}$ in the support of $Z^{(1)}$ and hence ${\ensuremath{\partial}}_c Z^{(1)}$.
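For orientation, we note the endpoint case $r = 1$ of the bound just established; this is a direct specialization and uses only the parameter hierarchy: $$\begin{aligned} \| {\ensuremath{\partial}}_c G_W \eta_{- \frac 1 4} \|_\infty \lesssim \eps^{\frac 1 2} + M^{2} \eps^{\frac 1 2} e^{- \frac s 4} \lesssim \eps^{\frac 1 2}\,,\end{aligned}$$ since $e^{- \frac s 4} \le \eps^{\frac 1 4}$ for $s \ge - \log \eps$ and $\eps$ is small relative to any power of $M^{-1}$.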
We first differentiate $G_W$ to order $n\geq 1$ in $x$ via and then take ${\ensuremath{\partial}}_c$ of the result to produce $$\begin{aligned} {\ensuremath{\partial}}_c G_W^{(n)} = {\ensuremath{\partial}}_c \dot{\tau} \beta_\tau^2 \beta_2 e^{\frac s 4} Z^{(n)} + \beta_2 \beta_\tau e^{\frac s 4} {\ensuremath{\partial}}_c Z^{(n)}\,,\end{aligned}$$ which upon estimating yields $$\begin{aligned} {\ensuremath{\nonumber}}\| {\ensuremath{\partial}}_c G_W^{(n)} \|_\infty \lesssim &|{\ensuremath{\partial}}_c \dot{\tau}| e^{\frac s 4} \| Z^{(n)} \|_\infty + e^{\frac s 4} \| {\ensuremath{\partial}}_c Z^{(n)} \|_\infty \\ \lesssim & M^{2n^2} \eps^{\frac 1 2} e^{- s} + e^{\frac s 4} M^{2n^2} \eps^{\frac 12} e^{- \frac s2} \lesssim M^{2n^2} \eps^{\frac 1 2} e^{- \frac s 4}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked for the modulation variables, and for the $Z$ quantities. By differentiating in ${\ensuremath{\partial}}_c$, we obtain the identities $$\begin{aligned} \label{Sam:smith:1} {\ensuremath{\partial}}_c G_Z = & \dot{\tau}_c \beta_\tau G_Z + \beta_\tau e^{\frac s 4} (\beta_2 \kappa_c - \dot{\xi}_c + Z_c) \\ {\ensuremath{\nonumber}}= & {\ensuremath{\partial}}_c \dot{\tau} \beta_\tau^2 e^{\frac s 4} (\beta_2 \kappa - \dot{\xi} + Z) + \beta_\tau e^{\frac s 4} (\beta_2 {\ensuremath{\partial}}_c \kappa - {\ensuremath{\partial}}_c \dot{\xi} + {\ensuremath{\partial}}_c Z) \\ \label{Sam:smith:2} {\ensuremath{\partial}}_c G_Z^{(n)} = & \dot{\tau}_c \beta_\tau G_Z^{(n)} + \beta_\tau e^{\frac s 4} Z_c^{(n)}\,. \end{aligned}$$ By estimating we obtain $$\begin{aligned} {\ensuremath{\nonumber}}\| {\ensuremath{\partial}}_c G_Z \|_\infty \lesssim & |{\ensuremath{\partial}}_c \dot{\tau}| e^{\frac s 4} (|\kappa| + |\dot{\xi}| + \| Z \|_\infty) + e^{\frac s 4} (|{\ensuremath{\partial}}_c \kappa| + |{\ensuremath{\partial}}_c \dot{\xi}| + \| {\ensuremath{\partial}}_c Z \|_\infty) \\ {\ensuremath{\nonumber}}\lesssim & \eps^{\frac 1 2} e^{\frac s 4} (1 + \eps^{\frac 5 4}) + e^{\frac s 4} (\eps^{\frac 1 4} + \eps^{\frac 1 4} + \eps^{\frac 1 2}) \lesssim \eps^{\frac 1 4} e^{\frac s 4}, \end{aligned}$$ where we have invoked both - for the ${\ensuremath{\partial}}_c$ of the modulation variables, - for the modulation variables themselves, and finally and for the $Z$ quantities, with $j \ge 1$. By estimating we obtain for $1 \le n \le 7$, $$\begin{aligned} {\ensuremath{\nonumber}}\| {\ensuremath{\partial}}_c G_Z^{(n)} \|_\infty \lesssim & |{\ensuremath{\partial}}_c \dot{\tau}| e^{\frac s 4} \| Z^{(n)} \|_\infty + e^{\frac s 4} \| {\ensuremath{\partial}}_c Z^{(n)} \|_\infty \\ \lesssim & e^{\frac s 4} \eps^{\frac 1 2} M^{2 n^2} e^{- \frac 5 4 s} + e^{\frac s 4} M^{2n^2} \eps^{\frac 1 2} e^{- \frac s 2} \lesssim M^{2n^2} \eps^{\frac 1 2} e^{- \frac s 4}, \end{aligned}$$ where we have invoked for the ${\ensuremath{\partial}}_c \dot{\tau}$ term, and then and for $Z^{(n)}$ and ${\ensuremath{\partial}}_c Z^{(n)}$, respectively. For ${\ensuremath{\partial}}_c G_A$, we perform essentially the same estimate as for ${\ensuremath{\partial}}_c G_Z$. Let $c_i \in \{ \alpha, \beta \}$ for $i = 1, 2$, and fix any $0 < r \le 1$. 
Then the following estimates are valid for the transport speeds $$\begin{aligned} \label{spin:1} \| {\ensuremath{\partial}}_{c_1 c_2} G_W \eta_{- \frac r 4} \|_\infty& \lesssim M \eps^{\frac54}e^{\frac 5 4 s} +M^{3-r} \eps^{\frac{13}{8}-r } e^{\frac {7-5r} 4 s}\,, &&\| {\ensuremath{\partial}}_{c_1 c_2} G_W^{(n)} \|_\infty \lesssim M^{2n^2} \eps^{\frac 5 8} e^{\frac s 2}\,, \\ \label{spin:2} \| {\ensuremath{\partial}}_{c_1 c_2} G_Z \|_\infty &\lesssim M^4 \eps^{\frac54}e^{\frac 5 4 s} \,, &&\| {\ensuremath{\partial}}_{c_1 c_2} G_Z^{(n)} \|_\infty \lesssim M^{2n^2} \eps^{\frac 5 8} e^{\frac s 2}\,, \\ \label{spin:3} \| {\ensuremath{\partial}}_{c_1 c_2} G_A \|_\infty &\lesssim M^4 \eps^{\frac54}e^{\frac 5 4 s} \,, &&\| {\ensuremath{\partial}}_{c_1 c_2} G_A^{(n)} \|_\infty \lesssim M^{2n^2} \eps^{\frac 5 8} e^{\frac s 2}\,, \end{aligned}$$ for $1\leq n\leq 7$. We differentiate in ${\ensuremath{\partial}}_{c_2}$, which generates the identities $$\begin{aligned} &{\ensuremath{\partial}}_{c_1 c_2} G_W = \mu_{c_1 c_2} + \beta_\tau \beta_2 e^{\frac s 4} \int_0^x Z^{(1)}_{c_1 c_2} + \beta_\tau^2 \beta_2 \dot{\tau}_{c_i} e^{\frac s 4} \int_0^x Z^{(1)}_{c_{i'}} {\ensuremath{\nonumber}}\\\label{myoo} & \qquad \qquad \qquad + (\dot{\tau}_{c_1 c_2} + 2 \beta_\tau \dot{\tau}_{c_1} \dot{\tau}_{c_2}) \beta_\tau^2 \beta_2 e^{\frac s 4} \int_0^x Z^{(1)}\,, \\ \label{myoo:2} &{\ensuremath{\partial}}_{c_1 c_2} G_W^{(n)} = \beta_\tau \beta_2 e^{\frac s 4} Z^{(n)}_{c_1 c_2} + \beta_\tau^2 \beta_2 \dot{\tau}_{c_i} e^{\frac s 4} Z^{(n)}_{c_{i'}} + (\dot{\tau}_{c_1 c_2} + 2 \beta_\tau \dot{\tau}_{c_1} \dot{\tau}_{c_2}) \beta_\tau^2 \beta_2 e^{\frac s 4} Z^{(n)}\,,\end{aligned}$$ for $ n \ge 1$. Estimating the right-hand side of yields $$\begin{aligned} {\ensuremath{\nonumber}}\| {\ensuremath{\partial}}_{c_1 c_2} G_W \eta_{- \frac r 4} \|_\infty \lesssim & |\mu_{c_1 c_2}| + e^{\frac s 4} \| Z_{c_1 c_2}^{(1)} \eta_{\frac{1-r}{4}} \|_\infty + |\dot{\tau}_{c_i}| e^{\frac s 4} \| Z^{(1)}_{c_{i'}} \eta_{\frac{1-r}{4}} \|_\infty + (|\dot{\tau}_{c_1 c_2}| + |\dot{\tau}_{c}|^2) e^{\frac s 4} \| Z^{(1)} \eta_{\frac{1-r}{4}} \|_\infty \\{\ensuremath{\nonumber}}\lesssim & M \eps^{\frac 54}e^{\frac 5 4 s} + M^{3-r}\eps^{\frac{13}{8}-r} e^{\frac {7-5r} 4 s} + \eps^{2-r} e^{\frac s 4} M^{3-r} e^{(\frac 3 4 - \frac 5 4 r)s} \\ {\ensuremath{\nonumber}}&+ ( \eps e^{\frac 3 4 s} + \eps) M^{3-r} \eps^{1-r} e^{\frac {1-5r} 4 s} \,.\end{aligned}$$ Above, we have used for the $\mu_{c_1 c_2}, \dot{\tau}_{c_1 c_2}$ terms, for the $Z^{(1)}_{c_1 c_2}$ term, for the $Z^{(1)}_c$ term, for the $Z^{(1)}$ term, for the $\dot{\tau}_c$ terms, and finally for the estimation of $\eta$ in the presence of $Z$. Estimating the right-hand side of yields for $n \ge 1$, $$\begin{aligned} \| {\ensuremath{\partial}}_{c_1 c_2} G_W^{(n)} \|_\infty \lesssim e^{\frac s 4} \| Z_{c_1 c_2}^{(n)} \|_\infty + |\dot{\tau}_c| e^{\frac s 4} \| Z^{(n)}_c \|_\infty + (|\dot{\tau}_{c_1 c_2}| + |\dot{\tau}_c|^2) e^{\frac s 4} \| Z^{(n)} \|_\infty \lesssim M^{2n^2} \eps^{\frac 5 8} e^{\frac s 2}\,. \end{aligned}$$ We have invoked for the $Z_{c_1 c_2}^{(n)}$ term, for the $\dot{\tau}_c$ term, for the $Z_c$ term, for the $Z^{(n)}$ term, and for the $\dot{\tau}_{c_1c_2}$ term.
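For the reader's convenience, we spell out the arithmetic behind the last display; it uses nothing beyond the bootstraps just cited. The first term is the dominant one, since $$\begin{aligned} e^{\frac s 4} \| Z^{(n)}_{c_1 c_2} \|_\infty \lesssim e^{\frac s 4} \, M^{2n^2} \eps^{\frac 5 8} e^{\frac s 4} = M^{2n^2} \eps^{\frac 5 8} e^{\frac s 2}\,,\end{aligned}$$ while the second and third terms are bounded by a constant multiple of $M^{2n^2} \eps\, e^{- \frac s 4}$ and are therefore smaller.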
Next, we differentiate - in ${\ensuremath{\partial}}_{c_2}$ to arrive at $$\begin{aligned} \label{lose:1} &{\ensuremath{\partial}}_{c_1 c_2} G_Z = \dot{\tau}_{c_1 c_2} \beta_\tau G_Z + \dot{\tau}_{c_i} \beta_\tau {\ensuremath{\partial}}_{c_{i'}}G_Z + \beta_\tau e^{\frac s 4} \Big( \beta_2 \kappa_{c_1 c_2} -\dot{\xi}_{c_1 c_2} + Z_{c_1 c_2} \Big)\,, \\ \label{lose:2} &{\ensuremath{\partial}}_{c_1 c_2} G_Z^{(n)} = \dot{\tau}_{c_1 c_2} \beta_\tau G_Z^{(n)} + \dot{\tau}_{c_i} \beta_\tau {\ensuremath{\partial}}_{c_{i'}} G_Z^{(n)} + \beta_\tau e^{\frac s 4} Z_{c_1 c_2}^{(n)}\,. \end{aligned}$$ Estimating the right-hand side gives via $$\begin{aligned} {\ensuremath{\nonumber}}\| {\ensuremath{\partial}}_{c_1 c_2} G_Z \|_\infty \lesssim & |\dot{\tau}_{c_1 c_2}| \| G_Z \|_\infty + |\dot{\tau}| \| {\ensuremath{\partial}}_c G_Z \|_\infty + e^{\frac s 4} \Big( |\kappa_{c_1 c_2}| + |\dot{\xi}_{c_1 c_2}| + \| Z_{c_1 c_2} \|_\infty \Big) \\ \lesssim & \eps e^{s} + \eps^{\frac 3 4} e^{\frac s 4} + e^{\frac s 4} \Big( M^3 \eps^{\frac 5 4} e^{s} + M^4 \eps^{\frac 5 4} e^{s} + \eps^{\frac 5 8} e^{\frac s 4} \Big){\ensuremath{\nonumber}}\end{aligned}$$ Above we have invoked and for the $G_Z$ and ${\ensuremath{\partial}}_c G_Z$ terms, respectively. We have also invoked - for the second derivatives of the modulation variables and for the $Z_{c_1 c_2}$ term. For the right-most estimate in , we estimate the right-hand side of , $$\begin{aligned} \| {\ensuremath{\partial}}_{c_1 c_2} G_Z^{(n)} \|_\infty \lesssim & |\dot{\tau}_{c_1 c_2}| \| G_Z^{(n)} \|_\infty + |\dot{\tau}_c| \| {\ensuremath{\partial}}_c G_Z^{(n)} \|_\infty + e^{\frac s 4} \| Z^{(n)}_{c_1 c_2} \|_\infty \\ \lesssim & M^{2n^2} \eps e^{-\frac{s}{4}} + M^{2n^2} \eps e^{- \frac s 4} + M^{2n^2}\eps^{\frac 5 8} e^{\frac s 2} \lesssim M^{2n^2} \eps^{\frac 5 8} e^{\frac s 2}\,, \end{aligned}$$ where we have invoked and for the $G_Z^{(n)}$ and ${\ensuremath{\partial}}_c G_Z^{(n)}$ terms, respectively. A nearly identical estimate is valid for . Forcing estimates {#subsection:Forcing} ----------------- In this subsection, we will provide pointwise estimates on the forcing terms $F_W, F_Z, F_A$, defined in - as well as their various derivatives (spatial and parameter). ### Forcing estimates for $(W,Z,A)$ and its derivatives We now provide estimates on the forcing of $(W,Z,A)$ and their spatial derivatives. For the forcing quantities defined in - and , the following estimates are valid $$\begin{aligned} \label{FW:est:1} \| F_W \|_\infty &\le \eps^{\frac 3 4} e^{- \frac 3 4 s}, &\| F_W^{(n)} \|_\infty &\le \eps^{\frac 3 4} e^{-s} \qquad \text{ for } 1\leq n \leq 8 \\ \label{FW:est:2} \| {\widetilde}{F}_W \|_\infty &\le e^{- \frac 3 4 s}\,, &\| {\widetilde}{F}_{W,1} \eta_{\frac 1 4} \|_\infty &\le \eps^{\frac{1}{10}}\,, \\ \label{FW:est:3} \| F_{W,n} \eta_{\frac 1 5} \|_\infty &\les M^{n^2 - 1} \quad \text{ for } 2 \le n \le 8\,, & \| F_{W}^{(1)} \eta_{\frac 1 4} \|_\infty &\le e^{- \frac 1 2 s}\,, \\ \label{Motor:1} \| F_{W,1} \eta_{\frac 1 5} \|_\infty &\lesssim e^{- \frac 1 2 s} \end{aligned}$$ We use definition to estimate $$\begin{aligned} {\ensuremath{\nonumber}}\| F_W \|_\infty \lesssim &e^{- \frac 3 4 s} \| A \|_\infty (\| Z \|_\infty + \| e^{- \frac s 4} W + \kappa \|_\infty) \lesssim e^{- \frac 3 4 s} M \eps (\eps^{\frac 5 4} +M) \lesssim_M \eps e^{- \frac 3 4 s}\,, \end{aligned}$$ which establishes the first inequality in . 
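Here and in the estimates that follow, constants depending only on $M$ are absorbed by a small power of $\eps$; as a minimal instance of this convention (using only that $\eps$ is small relative to any power of $M^{-1}$), $$\begin{aligned} C(M)\, \eps \, e^{- \frac 3 4 s} \le \eps^{\frac 3 4} e^{- \frac 3 4 s} \qquad \text{whenever } C(M) \le \eps^{- \frac 1 4}\,,\end{aligned}$$ which is precisely the step converting the display above into the stated bound $\| F_W \|_\infty \le \eps^{\frac 3 4} e^{- \frac 3 4 s}$.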
We now want to estimate ${\widetilde}{F}_W$, for which we use definition to bound $$\begin{aligned} {\ensuremath{\nonumber}}\| {\widetilde}{F}_W \|_\infty \le & |\beta_\tau| e^{- \frac 3 4 s}|\dot{\kappa}| + \| F_W \|_\infty + |\beta_\tau - 1| \| {\overline}{W} {\ensuremath{\partial}}_x {\overline}{W} \|_\infty + \| G_W \eta_{- \frac 1 5} \|_\infty \| {\ensuremath{\partial}}_x {\overline}{W} \eta_{\frac 1 5} \|_\infty \\ {\ensuremath{\nonumber}}\lesssim & e^{- \frac 3 4 s} \eps^{\frac 1 8} + \eps^{\frac 3 4} e^{- \frac 3 4 s} + \eps^{\frac 1 6} e^{- \frac 3 4 s} + M^{\frac{11}{5}} \eps^{\frac1 5} e^{- \frac 3 4 s} \\ {\ensuremath{\nonumber}}\lesssim & \eps^{\frac 1 8} e^{- \frac 3 4 s}\,, \end{aligned}$$ which establishes the first inequality in . Above, we have invoked estimate for the $\dot{\kappa}$ term, the previously established estimate on $\| F_W \|_\infty$ in , for the $\beta_\tau - 1$ quantity, and estimate for the $G_W$ term, with $r = - \frac 4 5$. Estimating the expression , we obtain $$\begin{aligned} {\ensuremath{\nonumber}}\| F_W^{(n)} \|_\infty \lesssim &e^{- \frac 3 4 s} \sum_{j = 1}^{n -1} \| A^{(j)} \|_\infty \Big( \| Z^{(n-j)} \|_\infty + e^{- \frac s 4} \|W^{(n-j)} \|_\infty \Big) \\ {\ensuremath{\nonumber}}& + e^{- \frac 3 4 s} \|A^{(n)} \|_\infty \| e^{- \frac s 4}W \mathbbm{1}_{B_f} + \kappa \|_\infty + e^{- \frac 3 4 s} \| A \|_\infty (\| Z^{(n)} \|_\infty + e^{- \frac s 4} \| W^{(n)} \|_\infty) \\ \label{Fwn:est:s:dec} \lesssim & M e^{- 2 s} (e^{- \frac 5 4 s} + e^{- \frac s 4}) + e^{- 2 s} + \eps e^{- \frac 3 4 s} (e^{- \frac 5 4 s} + e^{- \frac s 4}) \les_M \eps e^{-s} \,,\end{aligned}$$ which establishes the second inequality in . To estimate , we have invoked and estimates - . We now turn to the second inequality in . For this, we appeal to the definition $$\begin{aligned} {\ensuremath{\nonumber}}\| F_W^{(1)} \eta_{\frac 1 4} \|_\infty \lesssim &e^{- \frac 3 4 s} \| A^{(1)} \eta_{\frac 1 4} \|_\infty \Big( \| Z \|_\infty + \| e^{- \frac s 4} W \mathbbm{1}_{B_f} + \kappa \|_\infty \Big) + e^{- \frac 3 4 s} \|A \|_\infty \| Z^{(1)} \eta_{\frac 1 4 } \|_\infty \\ {\ensuremath{\nonumber}}& + e^{- \frac 3 4 s} \| A \eta_{\frac{1}{20}} \|_\infty \Big( \| {\overline}{W}^{(1)} \eta_{\frac 1 5} \|_\infty + \| {\widetilde}{W}^{(1)} \eta_{\frac 1 5} \|_\infty \Big) \\ {\ensuremath{\nonumber}}\lesssim & M^2 e^{-2s} (M \eps e^{\frac 5 4 s}) ( \eps^{\frac 5 4} + M ) + M^4 \eps^2 e^{- \frac 3 4 s}+ M e^{- \frac 3 4 s} \eps^{\frac 5 4} (M\eps e^{\frac 5 4 s})^{\frac 1 5} \ell \log M \\ \label{spoke:2} \lesssim & \eps^{\frac 1 8 } e^{- \frac 1 2 s}\,, \end{aligned}$$ where above we have used the inequality $\eta_{\frac r 4} \lesssim (M\eps e^{\frac 5 4 s})^{r}$ in the support of $A, Z$, as well as estimates - and and for the spatial decay property of ${\overline}{W}^{(1)}$. We now arrive at the second estimate in . 
An appeal to gives $$\begin{aligned} {\ensuremath{\nonumber}}\|{\widetilde}{F}_{W,1} \eta_{\frac 1 4}\|_\infty \lesssim & \| F_W^{(1)} \eta_{\frac 1 4} \|_\infty + \| {\overline}{W}^{(2)} {\widetilde}{W} \eta_{\frac 1 4}\|_\infty + \| G_W^{(1)} {\widetilde}{W}^{(1)} \eta_{\frac 1 4} \| + | \beta_\tau - 1| \Big(\| {\overline}{W} ~{\overline}{W}^{(2)} \eta_{\frac 1 4} \|_\infty \\ {\ensuremath{\nonumber}}& \qquad + \| {\overline}{W}^{(1)} \eta_{\frac 1 8}| \| {\overline}{W}^{(1)} \eta_{\frac 1 8} \|_\infty\Big) + \| G_W {\overline}{W}^{(2)} \eta_{\frac 1 4} \|_\infty + \| G_W^{(1)} {\overline}{W}^{(1)} \eta_{\frac 1 4} \|_\infty\\ {\ensuremath{\nonumber}}\lesssim & \| F_W^{(1)} \eta_{\frac 1 4} \|_\infty + \| {\overline}{W}^{(2)} \eta_{\frac{9}{20}} \|_\infty \| {\widetilde}{W} \eta_{- \frac{1}{20}} \|_\infty + (M\eps e^{\frac 5 4 s})^{\frac 1 5} \| G^{(1)}_W \|_\infty \| {\widetilde}{W}^{(1)} \eta_{\frac 1 5} \|_\infty \\ {\ensuremath{\nonumber}}& + \eps^{\frac 1 6} e^{- \frac 3 4 s} + \| {\overline}{W}^{(2)} \eta_{\frac{9}{20}} \|_\infty \| G_W \eta_{- \frac 1 5} \|_\infty + (M\eps e^{\frac 5 4 s})^{\frac 1 5} \| G_W^{(1)} \|_\infty \| {\overline}{W}^{(1)} \eta_{\frac 1 5} \|_\infty \\ {\ensuremath{\nonumber}}\lesssim & \eps^{\frac 18}e^{-\frac s 2}+ \eps^{\frac{3}{20}} + \eps^{\frac 1 6} e^{- \frac 3 4 s} + \eps^{\frac 1 4} e^{- \frac 3 4 s} + \eps^{\frac 1 6} e^{- \frac 3 4 s} +M^{\frac {11} 5} \eps^{\frac 1 5} e^{- \frac 3 4 s} \lesssim \eps^{\frac{3}{20}}\,. \end{aligned}$$ Above, we have used the bootstrap estimates and on ${\widetilde}W$, the bound regarding the decay of ${\overline}{W}^{(2)}$, as well as estimate , which has already been established. We have moreover invoked the previously established estimates on the $G_W$ quantity with $r = - \frac 4 5$ and the $G_W^{(1)}$ quantity. To prove , we first recall the definition , according to which if we pair with estimate yields $$\begin{aligned} {\ensuremath{\nonumber}}\| F_{W,1} \eta_{\frac 1 5} \|_\infty \le &\| F_{W}^{(1)} \eta_{\frac 1 5} \|_\infty + \| G_W^{(1)} \|_\infty \| W^{(1)} \eta_{\frac 1 5} \|_\infty \les \eps^{\frac18}e^{- \frac 1 2 s} + \ell M^3\log M e^{-s}\les \eps^{\frac18}e^{- \frac 1 2 s} \, , \end{aligned}$$ where we have also invoked estimate , and the bootstrap . We now appeal to the definition to perform the third estimate, . We estimate also with the help of $$\begin{aligned} \| F_W^{(n)} \eta_{\frac 1 5} \|_\infty \lesssim& \eps^{\frac 34} e^{-s} (M\eps e^{\frac 5 4s})^{\frac 4 5} = M^{\frac 4 5} \eps^{\frac 7 4}\,, \\ \Big\| 1_{n \ge 3}\beta_\tau \sum_{j = 2}^{n-1} \binom{n}{j} W^{(j)} W^{(n+1 - j)} \eta_{\frac 1 5}\Big\|_\infty \lesssim & \sum_{j = 2}^{n-1} M^{j^2} M^{(n+1-j)^2} \lesssim M^{n^2 - 1} \\ \Big\| \sum_{j = 1}^n \binom{n}{j} G_W^{(j)} W^{(n+1-j)} \eta_{\frac 1 5}\Big\|_\infty \lesssim &\sum_{j = 1}^n M^{2j^2} e^{-s} M^{(n+1-j)^2} \le \eps^{\frac 1 2}\,. \end{aligned}$$ Above we have invoked the elementary inequality $j^2 + (n+1 - j)^2 \le - 1 + n^2$ for $n \ge 3$, and $2 \le j \le n-1$, as well as the estimates on $G_W^{(j)}$ in , and estimates on $W^{(n)}$. We now state a lemma regarding localized estimates, on $|x| \le \ell$, which have an enhanced scaling. The following estimates are valid: $$\begin{aligned} \label{e:FW6:decay} \sup_{|x| \le \ell} |{\widetilde}{F}_{W,6}| \les {\ell \eps^{\frac 1 5}}\,, && \sup_{|x| \le \ell} |{\widetilde}{F}_{W,7}| \le { \eps^{\frac 1 5}}\,, &&& \sup_{|x| \le \ell} |{\widetilde}{F}_{W,8}| \les {M \eps^{\frac 1 5}}\,. 
\end{aligned}$$ We use the definition to estimate via $$\begin{aligned} {\ensuremath{\nonumber}}\sup_{|x| \le \ell } | {\widetilde}{F}_{W,6} | \lesssim &\|F_W^{(6)} \|_\infty + \sum_{j = 2}^{5} \sup_{|x| \le \ell} | {\widetilde}{W}^{(7-j)} | + \sum_{j = 1}^6 \sup_{|x| \le \ell} | {\widetilde}{W}^{(6-j)} | + \eps^{\frac 1 2} \eps^{\frac 1 5} \\ {\ensuremath{\nonumber}}& + \eps^{\frac 1 6} e^{- \frac 3 4 s} + \sum_{j = 1}^6 M^{2j^2} e^{-s} + \eps^{\frac 1 2} \\ {\ensuremath{\nonumber}}\lesssim & \eps^{\frac 3 4} e^{-s} + \ell \eps^{\frac 1 5} + \eps^{\frac 1 2} \eps^{\frac 1 5} + \eps^{\frac 1 6} e^{- \frac 3 4 s} + \eps^{\frac 1 2} \les \ell \eps^{\frac 1 5}\,, \end{aligned}$$ where we have invoked estimates with $n = 6$, and . The identical argument applies to the estimate of ${\widetilde}{F}_{W,7}$ and ${\widetilde}{F}_{W,8}$. For $F_Z$ defined in , the following estimates are valid $$\begin{aligned} \label{Z:0:order} \| F_Z \|_\infty \le & \eps^{\frac 3 4} e^{-s}\,, \\ \label{FZ:est:1} \| F_{Z,1} \| \le & \eps^{\frac 1 2} e^{- \frac 5 4 s} I(s) + e^{- \frac 3 2 s}\,, \\ \label{my:generation} \| F_Z^{(n)} \|_\infty \le & \eps^{\frac 3 4} e^{- \frac 5 4 s} \,,\\ \label{gen:n:FZ:est} \| F_{Z, n} \|_\infty \les & {M^{2n^2-1}} e^{- \frac 5 4 s}\,.\end{aligned}$$ for $2 \le n \le 8$, where $I(s)$ is an integrable function of $s$ satisfying the bound $\int_{s_0}^s |I(s')| ds' < 1$. For estimate , we use the definition to estimate $$\begin{aligned} \|F_Z \|_\infty \lesssim e^{-s} \|A \|_\infty \Big( \| e^{- \frac s 4} W + \kappa \|_\infty + \| Z \|_\infty \Big) \lesssim M^2\eps e^{-s}\,, \end{aligned}$$ where we have invoked , as well as . To estimate $F_Z^{(n)}$, we recall definition , which requires us to estimate the following four types of terms $$\begin{aligned} \sum_{j = 1}^n \| \beta_\tau e^{-s} A^{(j)} (e^{- \frac s 4} W + \kappa)^{(n-j)} \|_\infty &\lesssim e^{-s} \| A^{(j)} \|_\infty \| e^{- \frac s 4} W \mathbbm{1}_{B_f} + \kappa \| \lesssim M e^{-s} e^{- \frac 5 4 s}\,, \\ \sum_{j = 1}^n \| \beta_\tau e^{-s} A^{(j)} Z^{(n-j)} \|_\infty &\lesssim e^{-s} \| A^{(j)} \|_\infty \| Z \| \lesssim M \eps^{\frac 5 4} e^{- \frac 9 4s}\,, \\ \| \beta_\tau e^{-s} A \beta_3 e^{- \frac s 4} W^{(n)} \|_\infty & \lesssim_M \eps e^{- \frac 5 4 s}\,, \\ \| \beta_\tau e^{-s} A \beta_4 Z^{(n)} \|_\infty & \lesssim_M \eps e^{- \frac 9 4 s}\,.\end{aligned}$$ Again, we have used estimates - , as well as estimates for derivatives of $W$. We now provide the estimate . Recall the definition . For this, when coupled with , we need to estimate further the following two terms $$\begin{aligned} \| 1_{n \ge 2} \beta_\tau \beta_2 \sum_{j = 2}^{n} \binom{n}{j} W^{(j)} Z^{(n+1-j)} \|_\infty &\lesssim M^{2 n^2 - 1} 1_{n \ge 2} e^{- \frac 5 4 s}\,, \\ \| \sum_{j = 1}^n G_Z^{(j)} Z^{(n+1-j)} \|_\infty &\lesssim_M e^{- \frac 9 4 s}\,. \end{aligned}$$ Above, we have invoked estimates for derivatives of $W$, for $Z$, as well as for the $G_Z^{(j)}$ terms. For estimate , we estimate all of the terms above by $e^{- \frac 3 2 s}$ with the exception of $$\begin{aligned} |\beta_\tau \beta_3 e^{- \frac 54 s} A W^{(1)} \circ \Phi_Z| \le 10 \eps^{\frac 5 4} e^{- \frac 5 4 s} |\eta_{- \frac 1 5} \circ \Phi_Z| \le \eps e^{- \frac 5 4 s} I(s)\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked the trajectory estimate . 
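For orientation, we sketch why the factor $I(s)$ above can indeed be arranged to satisfy $\int_{s_0}^s |I(s')| \, ds' < 1$. This is a heuristic computation: it assumes that the weight obeys $\eta_{-\frac 1 5}(x) \lesssim (1+|x|)^{-\frac 2 5}$ (as it does for the standard choice $\eta(x) = 1 + x^2$), and it uses the lower bound on $|\Phi_Z|$ established in Section \[subsect:trajectory\] below. Outside an interval of length $O(1)$ around the time $s_\ast$ at which that lower bound degenerates, one has $|\Phi_Z(s')| \gtrsim e^{\frac{s'}{4}}$, so that $$\begin{aligned} \int_{s_0}^{\infty} \eta_{- \frac 1 5} \circ \Phi_Z \, ds' \lesssim 1 + \int_{s_0}^{\infty} e^{- \frac{s'}{10}} \, ds' \lesssim 1\,. \end{aligned}$$ Since $10 \, \eps^{\frac 5 4} = \eps \cdot 10\, \eps^{\frac 1 4}$, one may take $I(s) = 10\, \eps^{\frac 1 4}\, \eta_{-\frac 1 5} \circ \Phi_Z$, whose integral in $s$ is then $\lesssim \eps^{\frac 1 4} < 1$ for $\eps$ sufficiently small.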
\[phone:1\] For $F_A$ defined in , the following estimates are valid $$\begin{aligned} \label{A:0:order} \| F_A \|_\infty &\les M^{\frac 1 2} e^{-s}\,,\\ \label{FA:est:1} \| F_{A,1} \|_\infty &\le e^{- \frac 5 4 s} I(s)\,, \\ \label{myA:generation} \| F_A^{(n)} \|_\infty &\lesssim M^{n^2} e^{- \frac 5 4 s}, \text{ for } 2 \le n \le 8\,, \\ \label{gen:n:FA:est} \| F_{A, n} \|_\infty &\les {M^{2n^2-1}} e^{- \frac 5 4 s} \text{ for } 2 \le n \le 8\,, \end{aligned}$$ where $I(s)$ is an integrable function of $s$ satisfying the bound $\int_{s_0}^s |I(s')| ds' < M$. First, we estimate $F_A$ via the definition in $$\begin{aligned} \|F_A \|_\infty \lesssim e^{-s} \|A\|_\infty^2 + e^{-s} \| e^{- \frac s 4} W + \kappa + Z \|_\infty^2 + e^{-s} \| e^{- \frac s 4} W + \kappa - Z \|_\infty^2 \les M^{\frac12 }e^{-s}\,,\end{aligned}$$ where we have used estimate , , , and , coupled with the fact that $M$ is large relative to $\kappa_0$. We now turn to , for $n \ge 1$, for which we consider . $$\begin{aligned} \| F_A^{(n)} \|_\infty \lesssim & e^{-s} \sum_{j = 0}^n \|A^{(j)} \|_\infty \| A^{(n-j)} \|_\infty + e^{-s} \sum_{j = 0}^n \Big( \| (e^{- \frac s 4} W + \kappa)^{(j)}\|_\infty + \| Z^{(j)} \|_\infty \Big) \times \\ & \Big( ( \| (e^{- \frac s 4} W + \kappa)^{(n-j)}\|_\infty + \| Z^{(n - j)} \|_\infty \Big) \\ \lesssim & M^{n^2} e^{- \frac 5 4 s}\,. \end{aligned}$$ Above, we have invoked - as well as and . The remaining two estimates, and , follow in the same manner as - . ### $\nabla_{a,b}$ forcing estimates We now develop estimates regarding the ${\ensuremath{\partial}}_\alpha$ and ${\ensuremath{\partial}}_\beta$ derivatives of the forcing terms $F_W, F_Z, F_A$. We start with the quantities ${\ensuremath{\partial}}_\alpha F_W$ and ${\ensuremath{\partial}}_\beta F_W$ in the following lemma. \[L:N:F\] Let $n \ge 1$. Then, $$\begin{aligned} \label{Fwc.est.ult:0} \| {\ensuremath{\partial}}_c F_W \|_\infty &\lesssim M \eps^{\frac 3 4} e^{- \frac s 4} \,, & \| F_{W,0}^{c} \|_\infty &\lesssim \eps^{\frac 1 8} \,,\\ \label{Fwc.est.ult} \| {\ensuremath{\partial}}_c F_W^{(n)} \|_\infty &\lesssim \eps^{\frac 3 4} e^{-\frac s 4}\,, & \| F_{W,n}^{c} \eta_{\frac{1}{20}} \|_\infty &\lesssim M^{-1} M^{(n+2)^2} \eps^{\frac34} e^{\frac 3 4 s}\,. \end{aligned}$$ First, we use equation to estimate the first quantity in . 
We proceed in order, starting with $$\begin{aligned} {\ensuremath{\nonumber}}\| {\ensuremath{\partial}}_c F_W \|_\infty \lesssim & \| {\ensuremath{\partial}}_c \dot{\tau} \beta_\tau^2 e^{- \frac 3 4 s} A \Big( \beta_3 Z + \beta_4 (e^{- \frac s 4}W + \kappa) \Big) \|_\infty + \| \beta_\tau e^{- \frac 3 4 s} {\ensuremath{\partial}}_c A \Big( \beta_3 Z + \beta_4 (e^{- \frac s 4} W + \kappa) \Big) \|_\infty \\ {\ensuremath{\nonumber}}& + \| \beta_\tau e^{- \frac 3 4 s} A \Big( \beta_3 {\ensuremath{\partial}}_c Z + \beta_4 (e^{- \frac s 4} {\ensuremath{\partial}}_c W + {\ensuremath{\partial}}_c \kappa) \Big) \|_\infty \\ {\ensuremath{\nonumber}}\lesssim & |{\ensuremath{\partial}}_c \dot{\tau}| e^{- \frac 3 4 s} \| A \|_\infty \Big( \| Z \|_\infty + \| e^{- \frac s 4}W \mathbbm{1}_{B_f} + \kappa \|_\infty \Big) + e^{- \frac 3 4 s} \| {\ensuremath{\partial}}_c A \|_\infty \Big( \| Z \|_\infty + \| e^{- \frac s 4}W \mathbbm{1}_{B_f}+ \kappa \|_\infty \Big) \\ {\ensuremath{\nonumber}}& + e^{- \frac 3 4 s} \| A \|_\infty \Big( \| {\ensuremath{\partial}}_c Z \|_\infty + \| e^{- \frac s 4} {\ensuremath{\partial}}_c W \|_\infty + |{\ensuremath{\partial}}_c \kappa| \Big) \\ \lesssim &M \eps^{\frac 3 2} e^{- \frac 3 4 s} \Big( \eps^{\frac 5 4} + M \Big) + e^{- \frac 3 4 s} \eps^{\frac 1 2} \Big( \eps^{\frac 5 4} + M \Big) + M\eps e^{- \frac 3 4 s} \Big( \eps^{\frac 1 2} +\eps e^{- \frac 3 4 s} + \eps^{\frac 3 8} \Big)\,.{\ensuremath{\nonumber}}\end{aligned}$$ Above, we have invoked repeatedly estimates - , as well as - . Next, we use equation to estimate the second quantity in via $$\begin{aligned} \| e^{- \frac 3 4 s} \beta_\tau {\ensuremath{\partial}}_c \dot{\kappa}+ e^{- \frac 3 4 s} \dot{\kappa} {\ensuremath{\partial}}_c \dot{\tau} \beta_\tau^2 \|_\infty &\lesssim e^{- \frac 3 4 s} |{\ensuremath{\partial}}_c \dot{\kappa}| + e^{- \frac 3 4 s} |\dot{\kappa}| |{\ensuremath{\partial}}_c \dot{\tau}| \lesssim \eps^{\frac 1 4 } e^{-\frac s 4 } + e^{- \frac 3 4 s} \eps^{\frac 5 8}\,, \\ \| {\ensuremath{\partial}}_c G_W W^{(1)} \|_\infty &\le \| {\ensuremath{\partial}}_c G_W \eta_{- \frac 1 5} \|_\infty \| W^{(1)} \eta_{\frac 1 5} \|_\infty \lesssim_M \eps^{\frac 1 2}\,, \\ \| W^{(1)} \dot{\tau}_c W \|_\infty &\le |\dot{\tau}_c| \| W^{(1)} \eta_{\frac 1 5} \|_{\infty} \| W \eta_{- \frac{1}{20}} \|_\infty \lesssim \eps^{\frac 1 2} \,,\end{aligned}$$ where we have invoked the bootstrap bounds , , and for the second line above we have invoked with $r = \frac 4 5$. Next, we use equation to estimate the first quantity in . Specifically, $$\begin{aligned} \| {\ensuremath{\partial}}_c F_W^{(n)} \|_\infty \lesssim & \Big\| \sum_{j = 0}^n \binom{n}{j} {\ensuremath{\partial}}_c \dot{\tau} \beta_\tau^2 e^{- \frac 3 4 s} A^{(j)} \Big( \beta_3 Z^{(n-j)} + \beta_4 (e^{- \frac s 4} W + \kappa)^{(n-j)} \Big) \Big\|_\infty \\ & + \Big\| \sum_{j = 0}^n \binom{n}{j} \beta_\tau e^{- \frac 3 4 s} {\ensuremath{\partial}}_c A^{(j)} \Big( \beta_3 Z^{(n-j)} + \beta_4 (e^{- \frac s 4} W + \kappa)^{(n-j)} \Big) \Big\|_\infty \\ & + \Big\| \sum_{j = 0}^n \binom{n}{j} \beta_\tau e^{- \frac 3 4 s} A^{(j)} \Big( \beta_3 {\ensuremath{\partial}}_c Z^{(n-j)} + \beta_4 (e^{- \frac s 4} {\ensuremath{\partial}}_c W + {\ensuremath{\partial}}_c \kappa)^{(n-j)} \Big) \Big\|_\infty \\ =: & \mathcal{O}_1 + \mathcal{O}_2 + \mathcal{O}_3\,. 
\end{aligned}$$ Bounding $ \mathcal{O}_1$, we obtain $$\begin{aligned} \mathcal{O}_1 &\lesssim \sum_{j = 1}^{n-1} |{\ensuremath{\partial}}_c \dot{\tau}| e^{- \frac 3 4 s} \| A^{(j)} \|_\infty \Big( \| Z^{(n-j)} \|_\infty + \| e^{- \frac s 4} W^{(n-j)} \|_\infty \Big) \\ & \qquad + |{\ensuremath{\partial}}_c \dot{\tau}| e^{- \frac 3 4 s} \|A \|_\infty \Big( \| Z^{(n)} \|_\infty + \| e^{- \frac s 4} W^{(n)} \|_\infty \Big) + |{\ensuremath{\partial}}_c \dot{\tau}| e^{- \frac 3 4 s} \| A^{(n)} \|_\infty \\ & \qquad \times \Big( \| Z \|_\infty + \| e^{- \frac s 4} W \mathbbm{1}_{B_f}+ \kappa \|_\infty \Big) \\ & \lesssim_M \eps^{\frac 1 2} e^{- 2 s} ( e^{- \frac 5 4 s} + e^{- \frac s 4} ) + \eps^{\frac 3 2} e^{- \frac 3 4 s} ( e^{- \frac 5 4 s} + e^{- \frac s4} ) + \eps^{\frac 1 2} e^{- 2 s} ( \eps^{\frac 5 4} + 1 ) \\ &\lesssim_M \eps^{\frac 5 4} e^{-s},\end{aligned}$$ where we have invoked estimates - , as well as . We now bound $ \mathcal{O}_2$ $$\begin{aligned} \mathcal{O}_2 &\lesssim \sum_{j = 1}^{n-1} e^{- \frac 3 4 s} \| {\ensuremath{\partial}}_c A^{(j)} \|_\infty \Big( \| Z^{(n-j)} \|_\infty + e^{- \frac s 4} \| W^{(n-j)} \|_\infty \Big) \\ &\qquad + e^{- \frac 3 4 s} \| {\ensuremath{\partial}}_c A \|_\infty \Big( \| Z^{(n)} \|_\infty + e^{- \frac s 4} \| W^{(n)} \|_\infty \Big) + e^{- \frac 3 4 s} \| {\ensuremath{\partial}}_c A^{(n)} \|_\infty \\ & \qquad \times \Big( \| Z \|_\infty + \| e^{- \frac s 4} W \mathbbm{1}_{B_f}+ \kappa \|_\infty \Big) \\ & \lesssim_M \eps^{\frac 1 2} e^{- \frac 5 4 s} ( e^{- \frac 5 4 s} + e^{- \frac s 4} ) + e^{- \frac 3 4 s} \eps^{\frac 1 2} ( e^{- \frac 5 4 s} + e^{- \frac s 4} ) + \eps^{\frac 1 2} e^{- \frac 5 4 s} ( \eps^{\frac 5 4} + 1) \\ &\lesssim_M \eps^{\frac 3 4} e^{-s}.\end{aligned}$$ We have invoked estimates - , as well as . Finally, we estimate $\mathcal{O}_3$ $$\begin{aligned} \mathcal{O}_3 &\lesssim \sum_{j = 1}^{n-1} e^{- \frac 3 4 s} \|A^{(j)} \|_\infty \Big( \| {\ensuremath{\partial}}_c Z^{(n-j)} \|_\infty + e^{- \frac s 4} \| {\ensuremath{\partial}}_c W^{(n-j)} \|_\infty \Big) \\ & \qquad + e^{- \frac 3 4 s} \| A \|_\infty \Big( \| {\ensuremath{\partial}}_c Z^{(n)} \|_\infty + e^{- \frac s 4} \| {\ensuremath{\partial}}_c W^{(n)} \|_\infty \Big) + e^{- \frac 3 4 s} \| A^{(n)} \|_\infty \\ & \qquad \times \Big( \| {\ensuremath{\partial}}_c Z \|_\infty + e^{- \frac s 4} \| {\ensuremath{\partial}}_c W \|_\infty + | {\ensuremath{\partial}}_c \kappa | \Big) \\ &\lesssim_M e^{- 2 s} ( \eps^{\frac 1 2} e^{- \frac 1 2 s} + \eps^{\frac 3 4 }e^{ \frac s 2} ) + e^{- \frac 3 4 s} \eps ( \eps^{\frac 1 2} e^{- \frac s 2} + \eps^{\frac 3 4} e^{\frac s 2} ) + e^{- 2 s} ( \eps^{\frac 1 2} +\eps^{\frac34}e^{\frac 3 4 s} + \eps^{\frac 3 8} ) \\ &\lesssim_M \eps e^{- \frac s 4}.\end{aligned}$$ We have used the bootstrap bounds - , as well as and - . We now remark that, according to , $$\begin{aligned} \label{born:2} \| {\ensuremath{\partial}}_c F_W^{(n)} \eta_{\frac{1}{20}} \|_\infty \lesssim \eps^{\frac 3 4} e^{- \frac s 4} (M\eps e^{\frac 5 4 s})^{\frac{1}{5}} = M^{\frac 1 5} \eps^{\frac {19} {20}}\,. \end{aligned}$$ Finally, we use equation to estimate the second quantity in . 
In addition to estimate , we need to estimate the following two quadratic quantities in $W$ $$\begin{aligned} \label{truck:0} \| \sum_{j = 1}^n \binom{n}{j} \beta_\tau W^{(1+j)} {\ensuremath{\partial}}_c W^{(n-j)} \eta_{\frac{1}{20}} \|_\infty &\lesssim M^{(1+j)^2} M^{(n-j+2)^2} \eps^{\frac34}e^{\frac 3 4 s} \lesssim M^{-1} M^{(n+2)^2} \eps^{\frac34}e^{\frac 3 4 s}, \end{aligned}$$ and similarly $$\begin{aligned} {\ensuremath{\nonumber}}\| 1_{n \ge 2} \sum_{j = 0}^{n-2} \binom{n}{j} \beta_\tau W^{(n-j)} {\ensuremath{\partial}}_c W^{(j+1)} \eta_{\frac{1}{20}} \|_\infty & \lesssim 1_{n \ge 2} \sum_{j = 0}^{n-2} \| W^{(n-j)}\eta_{\frac{1}{20}} \|_\infty \| {\ensuremath{\partial}}_c W^{(j+1)} \|_\infty \\ \label{truck:-1} & \lesssim M^{(n-j)^2} M^{(j+3)^2} \eps^{\frac34}e^{\frac 3 4 s}\lesssim M^{-1} M^{(n+2)^2} \eps^{\frac34}e^{\frac 3 4 s}\,.\end{aligned}$$ For both of the above estimates, and , we have invoked and - . Next, using again , we need to estimate the following two quantities $$\begin{aligned} \label{truck:1} \| 1_{n \ge 1} \sum_{j = 0}^{n-1} \binom{n}{j} G_W^{(n-j)} {\ensuremath{\partial}}_c W^{(j+1)}\eta_{\frac{1}{20}} \|_\infty &\lesssim 2 M^{2(n-j)^2} e^{-s} M^{(j+2)^2} \eps^{\frac34}e^{\frac 3 4 s} \les_M \eps^{\frac 3 4} e^{- \frac s 4}\,, \\ \label{truck:2} \| \sum_{j = 0}^n \binom{n}{j} {\ensuremath{\partial}}_c G_W^{(j)} W^{(n-j+1)}\eta_{\frac{1}{20}} \|_\infty &\lesssim \sum_{j = 0}^n \| {\ensuremath{\partial}}_c G_W^{(j)} \eta_{- \frac{3}{20}} \|_\infty \| W^{(n-j+1)} \eta_{\frac 1 5} \|_\infty \lesssim_M \eps^{\frac 9 {10}} e^{\frac s 4}\,.\end{aligned}$$ Above, we have appealed to estimates with $r = \frac 3 5$ and on $G_W$ and ${\ensuremath{\partial}}_c G_W$. Finally, according to , we need to estimate $$\begin{aligned} \label{backer:2} \| \dot{\tau}_c \beta_\tau^2 \sum_{j = 0}^n \binom{n}{j} W^{(1+j)} W^{(n-j)}\eta_{\frac{1}{20}} \|_\infty \lesssim_M \eps^{\frac 1 2}\,. \end{aligned}$$ Above, we have used the elementary inequality $$\begin{aligned} (1 + j)^2 + (n - j+2)^2 \le -1 + (n+2)^2 \text{ for } n \ge 1, 1 \le j \le n\,, \end{aligned}$$ and we have invoked estimates , , . Combining - , we obtain the right-most estimate in . We now establish enhanced localized estimates for the bottom order derivatives. The following estimates are valid $$\begin{aligned} \label{better:1} \sup_{|x| \le \ell} |F_{W,7}^{c}| \le {M \ell^{\frac 1 5} \eps^{\frac34}e^{\frac 3 4 s}}\,. \end{aligned}$$ An inspection of the proof of Lemma \[L:N:F\] shows that only terms and need to be estimated, with $n = 7$. Accordingly, we estimate $$\begin{aligned} {\ensuremath{\nonumber}}& \| \sum_{j = 1}^7 \binom{7}{j} \beta_\tau W^{(1+j)} {\ensuremath{\partial}}_c W^{(7-j)} \eta_{\frac{1}{20}} \|_\infty + \| \sum_{j = 0}^{5} \binom{7}{j} \beta_\tau W^{(7-j)} {\ensuremath{\partial}}_c W^{(j+1)} \eta_{\frac{1}{20}} \|_\infty \lesssim \ell^{\frac 1 2} M \eps^{\frac34}e^{\frac 3 4 s}\,, \end{aligned}$$ upon invoking the localized bootstraps and .
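For completeness, we record a short verification of the elementary inequality used above; the other inequalities of this type appearing in this section follow from the same endpoint argument. The map $j \mapsto (1+j)^2 + (n-j+2)^2$ is convex, so its maximum over $1 \le j \le n$ is attained at $j = 1$ or $j = n$, and at either endpoint $$\begin{aligned} 4 + (n+1)^2 = n^2 + 2n + 5 \le n^2 + 4n + 3 = (n+2)^2 - 1 \qquad \text{for } n \ge 1\,. \end{aligned}$$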
\[lemma:j:1\] The following estimates are valid $$\begin{aligned} \label{Force:Z:pc:1} \| {\ensuremath{\partial}}_c F_Z \|_\infty &\le \eps^{\frac 3 4} e^{- \frac s 2}\,, &\| F^{c}_{Z,0} \|_\infty &\le \eps^{\frac 1 2} e^{- \frac s 2} \,,\\ \label{Force:Z:pc:n} \| {\ensuremath{\partial}}_c F_Z^{(n)} \|_\infty &\lesssim \eps^{\frac 3 4} e^{- \frac s 2}\,, &\| F^{c}_{Z,n} \|_\infty &\les {M^{2n^2-1} }\eps^{\frac 1 2} e^{- \frac s 2}\,,\end{aligned}$$ for $1\leq n\leq 7$ First, we use expression to estimate $$\begin{aligned} {\ensuremath{\nonumber}}\| {\ensuremath{\partial}}_c F_Z \|_\infty &\lesssim |\dot{\tau}_c| \| F_Z \|_\infty + e^{-s} \| A_c \|_\infty \Big( \| e^{- \frac s 4}W + \kappa \|_\infty + \| Z \|_\infty \Big) + e^{-s} \| A \|_\infty \Big( e^{- \frac s 4} \| W_c \|_\infty + | \kappa_c| + \| Z_c \|_\infty \Big) \\ \label{teq:1} &\lesssim_M \eps^{\frac 5 4} e^{-s} + \eps^{\frac 1 2} e^{- \frac 3 2s} \Big(1 + \eps^{\frac 5 4} \Big) + \eps e^{-s} \Big( \eps^{\frac 3 4} e^{\frac s 2} + \eps^{\frac 3 8} + \eps^{\frac 12} \Big) \lesssim_M \eps^{\frac 7 8} e^{- \frac s 2}\,. \end{aligned}$$ where above we have also invoked estimate for the $F_Z$ term together with the bootstrap estimates , , , , , and . The first estimate in follows from upon bringing $\eps$ small relative to $M$. Next, we use the identity to estimate $$\begin{aligned} {\ensuremath{\nonumber}}\| F^{c}_{Z,0} \|_\infty &\lesssim_M \| Z^{(1)} \|_\infty \Big( |{\ensuremath{\partial}}_c \dot{\tau}| \| W \|_\infty + \| {\ensuremath{\partial}}_c W \|_\infty + \| {\ensuremath{\partial}}_c G_Z \|_\infty \Big) + \| {\ensuremath{\partial}}_c F_Z \|_\infty \\ {\ensuremath{\nonumber}}&\lesssim_M e^{- \frac 5 4 s} \Big( \eps^{\frac 1 2} e^{\frac s 4} + \eps^{\frac 3 4} e^{\frac 3 4 s} + \eps^{\frac 1 2} e^{\frac s 4} \Big) + \eps^{\frac 3 4} e^{- \frac s 2} \lesssim_M \eps^{\frac 3 4 } e^{- \frac s 2}\,, \end{aligned}$$ from which the second estimate in follows again by bringing $\eps$ small relative to $M$. We now use expression to estimate the first quantity in via $$\begin{aligned} \| {\ensuremath{\partial}}_c F_Z^{(n)} \|_\infty \lesssim &| \dot{\tau}_c | | \beta_\tau | \| F_Z^{(n)} \|_\infty + | \beta_\tau| e^{-s} \sum_{j = 0}^n \| A_c^{(j)} \Big( \beta_3 ( e^{- \frac s 4} W + \kappa)^{(n-j)} + \beta_4 Z^{(n-j)} \Big) \|_\infty \\ &+ | \beta_\tau | e^{-s} \sum_{j =0}^n \| A^{(j)} \Big( \beta_3 ( e^{- \frac s 4} W_c + \kappa_c)^{(n-j)} + \beta_4 Z_c^{(n-j)} \Big) \|_\infty \\ \lesssim & \eps^{\frac 3 2} e^{- \frac 5 4 s} + \eps^{\frac 1 4} e^{-s} + \eps e^{- \frac s 2}\,, \end{aligned}$$ where above we have invoked the forcing estimate, . Next, in order to complete the estimate of the quantity $\| F_{Z,n}^{c} \|_\infty$, we need to estimate the remaining five terms in . The second, third, and sixth terms from the right-side of are estimated via $$\begin{aligned} \sum_{j = 0}^n |\dot{\tau}_c| \| Z^{(j+1)} \|_\infty \| W^{(n-j)} \|_\infty &\lesssim \sum_{j = 0}^n M^{2(j+1)^2} \eps^{\frac 12} e^{- \frac 5 4 s} M^{(n-j)^2} e^{\frac s 4}\,, \\ \sum_{j = 0}^n \| Z^{(j+1)} \|_\infty \| W_c^{(n-j)} \|_\infty &\lesssim \sum_{j = 0}^n M^{2(j+1)^2} e^{- \frac 5 4 s} M^{(n-j)^2} \eps^{\frac34}e^{\frac 3 4 s}\,, \\ \sum_{j = 2}^n \| W^{(j)} \|_\infty \| Z_c^{(n-j+1)} \|_\infty &\lesssim M^{j^2} \eps^{\frac 1 2} M^{2 (n-j+1)^2} e^{- \frac s 2} \lesssim M^{-1 + 2n^2} \eps^{\frac 1 2} e^{- \frac s 2}\,.\end{aligned}$$ Above, we have invoked , , and . 
The fourth and fifth terms from the right-side of are estimated via $$\begin{aligned} \sum_{j = 0}^n \| Z^{(j+1)} \|_\infty \| {\ensuremath{\partial}}_c G_Z^{(n-j)} \|_\infty & \lesssim \sum_{j = 0}^n M^{2(j+1)^2} \eps^{\frac 1 2} e^{-s} \\ \sum_{j = 1}^n \| G_Z^{(j)} \|_\infty \| Z_c^{(n+1-j)} \|_\infty &\lesssim M^{2j^2} M^{(n+1-j)^2} \eps e^{- \frac 3 4 s},\end{aligned}$$ where we have invoked the estimates on $G_Z$ and ${\ensuremath{\partial}}_c G_Z$ from and . Above we have also used the elementary inequality $$\begin{aligned} \label{elem:2} j^2 + 2 (n+1-j)^2 \le -1 + 2 n^2 \quad\text{ for } n \ge 2 \mbox{ and } 2 \le j \le n\,. \end{aligned}$$ The following estimates are valid $$\begin{aligned} \label{Force:A:pc:1} \| {\ensuremath{\partial}}_c F_A \|_\infty &\le \eps^{\frac 1 2} e^{- \frac s 2}\,, &&\| F^{c}_{A,0} \|_\infty \le \eps^{\frac 1 2 } e^{- \frac s2}\,, \\ \label{Force:A:pc:n} \| {\ensuremath{\partial}}_c F_A^{(n)} \|_\infty &\le \eps^{\frac 12} e^{- \frac s 2} ,&&\| F^{c}_{A,n} \|_\infty {\les M^{2n^2-1} \eps^{\frac 1 2} e^{- \frac s 2} }\,,\end{aligned}$$ for $ 1 \le n \le 7$ We appeal to the expression to estimate $$\begin{aligned} {\ensuremath{\nonumber}}\| {\ensuremath{\partial}}_c F_A \|_\infty &\lesssim |\dot{\tau}| \| F_A \|_\infty + e^{-s} \Big( \| e^{- \frac s 4}W + \kappa \|_\infty + \|Z \|_\infty \Big) \Big( \| e^{- \frac s 4} W_c + \kappa_c \|_\infty + \|Z_c \|_\infty \Big) \\ &\lesssim M^{\frac 1 2} \eps^{\frac 1 6} e^{- \frac 7 4 s} + e^{-s} ( M^4 \eps^{\frac34}e^{\frac 1 2 s}+ \eps^{\frac 1 2} + \eps^{\frac 1 2})\,.{\ensuremath{\nonumber}}\end{aligned}$$ Above, we have invoked , , , , , and finally for the $F_A$ contribution. Next, we appeal to the expression to estimate $$\begin{aligned} {\ensuremath{\nonumber}}\| F_{A,0}^{c}\circ \Phi_A^{x_0} \|_\infty \lesssim & \| {\ensuremath{\partial}}_c F_A \circ \Phi_A^{x_0}\|_\infty + \|A^{(1)} \|_\infty \Big( |\dot{\tau}_c| \|W\|_\infty + \| W_c \|_\infty + \| {\ensuremath{\partial}}_c G_A \|_\infty \Big) \\ \lesssim & \eps^{\frac 12} e^{- \frac s 2} + M^2 e^{- \frac 5 4 s} \Big( \eps^{\frac 1 2} e^{\frac s 4} +M^4 \eps^{\frac34}e^{\frac 3 4 s} + \eps^{\frac 1 4} e^{\frac s 4} \Big)\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have appealed to estimates , as well as bootstrap assumptions , , , , and for the ${\ensuremath{\partial}}_c G_A$ contribution. Next, we appeal to the expression of to estimate $$\begin{aligned} {\ensuremath{\nonumber}}\| {\ensuremath{\partial}}_c F_A^{(n)} \|_\infty &\lesssim |\dot{\tau}_c| \| F_A^{(n)} \|_\infty + e^{-s} \sum_{j = 0}^n (\| (e^{- \frac s 4} W^{(j)}+\kappa)^{(j)}\|_{\infty} + \| Z^{(j)} \|_\infty ) \times \\ {\ensuremath{\nonumber}}& \qquad (\|(e^{- \frac s 4} W_c +\kappa_c)^{(n-j)}\|_\infty + \| Z_c^{(n-j)} \|_\infty) \\ {\ensuremath{\nonumber}}&\lesssim_M \eps^{\frac 1 2} e^{-s} + e^{-s} ( 1 + \eps^{\frac 12} + \eps^{\frac 5 4} ) ( \eps^{\frac34}e^{\frac s 2 }+ \eps^{\frac 1 2} e^{- \frac s 2} ) \lesssim_M \eps^{\frac 3 4} e^{- \frac s 2}\,, \end{aligned}$$ where we have invoked estimates , , , - , as well as . The final estimate in requires an estimate of the remaining terms in , which is identical to that of Lemma \[lemma:j:1\]. 
### $\nabla_{a,b}^2$ forcing estimates For $1 \le n \le 6$, the following estimates are valid $$\begin{aligned} \label{disclosure:1} \| {\ensuremath{\partial}}_{c_1 c_2} F_W \|_\infty &\le \eps^{\frac 1 2} e^{\frac s 2}\,, && \| F_{W,0}^{c_1, c_2} \|_\infty \le M^{14} \eps^{\frac32}e^{\frac 3 2 s}\,, \\ \label{disclosure:2} \| {\ensuremath{\partial}}_{c_1 c_2} F_W^{(n)} \|_\infty &\les \eps^{\frac 1 2} e^{\frac s 2}\,, && \| F_{W,n}^{c_1, c_2} \|_\infty \le M^{(n+5)^2 - 1} \eps^{\frac32}e^{\frac 3 2 s}\,.\end{aligned}$$ For the computation of ${\ensuremath{\partial}}_{c_1 c_2} F_W$, we recall the definition of , and proceed to estimate systematically $$\begin{aligned} {\ensuremath{\nonumber}}&\| \beta_\tau e^{- \frac 3 4 s} A_{c_1 c_2} ( \beta_3 Z + \beta_4 (e^{- \frac s 4}W + \kappa) ) \|_\infty \\ & \qquad \lesssim e^{- \frac 3 4 s} \| A_{c_1 c_2 } \|_\infty (\| Z \|_\infty + \| e^{- \frac s 4}W + \kappa \|_\infty) \lesssim_M \eps^{\frac 5 8} e^{- \frac s 2} (\eps^{\frac 5 4} + 1) \lesssim_M \eps^{\frac 5 8} e^{- \frac s 2}\,,{\ensuremath{\nonumber}}\end{aligned}$$ and next $$\begin{aligned} {\ensuremath{\nonumber}}&\| \beta_\tau e^{- \frac 3 4 s} A_{c_1} ( \beta_3 Z_{c_2} + \beta_4 (e^{- \frac s4} W_{c_2} + \kappa_{c_2}) ) \|_\infty \\ & \qquad \lesssim e^{- \frac 3 4 s} \| A_{c} \|_\infty (\| Z_{c} \|_\infty + e^{- \frac s 4} \| W_c \|_\infty + | \kappa_c |) \lesssim_M \eps^{\frac 1 2} e^{- \frac 3 4 s} ( \eps^{\frac 1 2} + \eps^{\frac 3 4} e^{\frac s 2} + \eps^{\frac 1 4} ) \lesssim_M \eps^{\frac 3 4} e^{- \frac s 4}\,.{\ensuremath{\nonumber}}\end{aligned}$$ Above, we have invoked bootstrap assumptions as well as - , and - . The first term on the second line of is estimated in an identical fashion, while the second term is estimated via $$\begin{aligned} {\ensuremath{\nonumber}}&\| \beta_\tau e^{- \frac 3 4 s} A (\beta_3 Z_{c_1 c_2} + \beta_4 (e^{- \frac s 4}W_{c_1 c_2} + \kappa_{c_1 c_2})) \|_\infty \\ {\ensuremath{\nonumber}}& \qquad \lesssim e^{- \frac 3 4 s} \| A \|_\infty (\| Z_{c_1 c_2} \|_\infty + e^{- \frac s 4} \| W_{c_1 c_2} \|_\infty + |\kappa_{c_1 c_2}|) \\ & \qquad \lesssim_M \eps e^{- \frac 3 4 s} ( \eps^{\frac 5 8} e^{\frac s 4} + \eps^{\frac 3 2} e^{\frac 5 4 s} + \eps^{\frac 5 4} e^s) \lesssim_M \eps e^{\frac s 2}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where again we have invoked bootstrap assumptions as well as - , and - . Finally, the last line of is estimated via $$\begin{aligned} {\ensuremath{\nonumber}}&\| \dot{\tau}_{c_2} \beta_\tau {\ensuremath{\partial}}_{c_1} F_W + \dot{\tau}_{c_1 c_2} \beta_\tau F_W + \dot{\tau}_{c_1} \beta_\tau {\ensuremath{\partial}}_{c_2} F_W \|_\infty \\ & \qquad \lesssim |\dot{\tau}_c| \| {\ensuremath{\partial}}_c F_W \|_\infty + |\dot{\tau}_{c_1c_2}| \| F_W \|_\infty \lesssim_M \eps^{\frac 5 4} e^{- \frac s 4} + \eps^{\frac 3 2} \lesssim_M \eps\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked the estimates and . 
Next, to estimate the remaining quantity in , we recall definition , according to which we define the following two auxiliary quantities: $$\begin{aligned} &\mathcal{L}_1 := \beta_\tau W^{(1)}_{c_2} W_{c_1} - \beta_\tau^2 \dot{\tau}_{c_2} W^{(1)} W_{c_1} - \Big( \beta_\tau^2 \dot{\tau}_{c_2} W + \beta_\tau W_{c_2} + {\ensuremath{\partial}}_{c_2} G_W \Big) W^{(1)}_{c_1} \\ &\mathcal{L}_2 := - \dot{\tau}_{c_1} \beta_\tau^2 W W^{(1)}_{c_2} - \dot{\tau}_{c_1 c_2} \beta_\tau^2 W W^{(1)} - 2 \beta_\tau^2 \dot{\tau}_{c_1} \dot{\tau}_{c_2} W W^{(1)} - \dot{\tau}_{c_1} \beta_\tau^2 W^{(1)} W_{c_2},\end{aligned}$$ so that we have the identity $$\begin{aligned} F_{W,0}^{c_1, c_2} = {\ensuremath{\partial}}_{c_1 c_2} F_W + \mathcal{L}_1 + \mathcal{L}_2 - \mathcal{M}^{c_1, c_2}\,, \end{aligned}$$ where $\mathcal{M}^{c_1, c_2}$ has been defined in . We first estimate $\mathcal{L}_1$ via $$\begin{aligned} {\ensuremath{\nonumber}}\| \mathcal{L}_1 \|_\infty \lesssim & (1 + |\dot{\tau}_c|) \| W^{(1)}_c \|_\infty \| W_c \|_\infty + |\dot{\tau}_c| \| W \|_\infty \|W^{(1)}_c \|_\infty + \| {\ensuremath{\partial}}_c G_W \eta_{- \frac{1}{20}} \|_\infty \| W^{(1)}_c \eta_{\frac{1}{20}} \|_\infty \\ {\ensuremath{\nonumber}}\lesssim & (1 + \eps^{\frac 1 2}) M^{13} \eps^{\frac32}e^{\frac 3 2 s} + \eps^{\frac 1 2} M^4 \eps^{\frac32}e^{\frac 3 2 s} + {M^{12}} \eps^{\frac {41} {20}}e^{\frac 3 2 s} {+ M^9 \eps^{\frac 54} e^{\frac 3 4s} }\\ \lesssim & M^{13} \eps^{\frac32}e^{\frac 3 2 s}\,.{\ensuremath{\nonumber}}\end{aligned}$$ Note that for the estimation of the final term above, we have used crucially the spatial decay of $W_c^{(1)}$, as guaranteed by the bootstrap assumption , and we have also applied estimate with $r = \frac{1}{5}$. Next, we estimate $\mathcal{L}_2$ via $$\begin{aligned} {\ensuremath{\nonumber}}\| \mathcal{L}_2 \|_\infty &\lesssim |\dot{\tau}_c| \| W \|_\infty \| W^{(1)}_c \|_\infty +( |\dot{\tau}_{c_1 c_2} | + |\dot{\tau}_c|^2 ) \| W \eta_{-\frac{1}{20}} \|_\infty \| W^{(1)} \eta_{\frac{1}{20}} \|_\infty + |\dot{\tau}_c| \| W^{(1)} \|_\infty \| W_c \|_\infty \\ &\lesssim_M \eps^{\frac 1 2} e^{\frac s 4} + (\eps e^{\frac 3 4 s} + \eps) + \eps^{\frac 5 4} e^{\frac 3 4 s} \lesssim_M \eps e^{\frac 34 s}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we invoke the bootstrap assumptions - , - , , and . Next, we estimate $\mathcal{M}^{c_1, c_2}$ via $$\begin{aligned} {\ensuremath{\nonumber}}|\mathcal{M}^{c_1, c_2}| &\lesssim e^{- \frac 3 4 s}\Big( | \dot{\kappa}_{c_1 c_2}| + |\dot{\kappa}_{c}| |\dot{\tau}_{c}| + |\dot{\kappa}| |\dot{\tau}_{c_1 c_2}| + |\dot{\kappa}| | \dot{\tau}_{c}|^2 \Big) \\ &\lesssim_M e^{- \frac 3 4 s} \Big( \eps^{\frac 54} e^s + \eps^{\frac 34} + \eps^{\frac 9 8} e^{\frac 3 4 s} + \eps^{\frac 5 8}\Big) \lesssim_M \eps^{\frac 5 8} e^{\frac s 4}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked the bootstrap assumptions on the second (parameter) derivatives of the modulation variables, - . We now move to the $1 \le n \le 6$ estimates, for which we first recall the expression of ${\ensuremath{\partial}}_{c_1 c_2} F_W^{(n)}$ from . The estimate of this is identical to the estimate of ${\ensuremath{\partial}}_{c_1 c_2}F_W$ (the $n = 0$ case above), and so we omit it. We now proceed to estimate all of the remaining terms in . 
$$\begin{aligned} \| \sum_{j = 1}^n \beta_\tau^2 \dot{\tau}_{c_i} W^{(1+j)} W^{(n-j)}_{c_{i'}} \|_\infty &\lesssim \sum_{j = 1}^n |\dot{\tau}_c| \| W^{(1+j)} \|_\infty \| W^{(n-j)}_c \|_\infty \lesssim_M \eps^{\frac 5 4} e^{\frac 3 4 s} \,,\\ \| \sum_{j = 0}^n \binom{n}{j} \beta_\tau W_{c_1}^{(j)} W_{c_2}^{(n+1-j)} \|_\infty &\lesssim \sum_{j = 0}^n \|W_{c}^{(j)}\|_\infty \| W_c^{(n+1-j)} \|_\infty \lesssim \sum_{j = 0}^n M^{(j+2)^2} M^{(n-j+3)^2} \eps^{\frac 3 2} e^{\frac 3 2 s}.\end{aligned}$$ We have invoked the bootstrap assumptions on derivatives of $W$, , as well as . We now appeal to the elementary inequality $$\begin{aligned} (j+1)^2 + (n-j+3)^2 \le (n+5)^2 - 1 \text{ for } 0 \le j \le n, \qquad n \ge 1\,. \end{aligned}$$ We continue with $$\begin{aligned} \| \sum_{j =1}^n \binom{n}{j} \beta_\tau W^{(1+j)} W^{(n-j)}_{c_1 c_2} \|_\infty \lesssim \sum_{j = 1}^n \| W^{(1+j)} \|_\infty \|W_{c_1 c_2}^{(n-j)} \|_\infty \lesssim \sum_{j = 1}^n M^{(1+j)^2} M^{(n-j+5)^2} \eps^{\frac 3 2} e^{\frac 3 2 s}\,,\end{aligned}$$ and again appeal to an elementary inequality $$\begin{aligned} (1 + j)^2 + (n-j+5)^2 \le -1 + (n+5)^2 \text{ for } 1 \le j \le n, \qquad n \ge 1\,. \end{aligned}$$ The fifth term on the right-hand side of is formally the same as the second term, with the exception of the $j = 0$ case, which we estimate via $$\begin{aligned} \| \beta_\tau^2 \dot{\tau}_{c_{i'}} W W_{c_i}^{(n+1)} \|_\infty \lesssim_M \eps^{\frac 1 2} e^{\frac s 4} e^{\frac 3 4 (s - s_0)} \lesssim_M {\eps^{\frac 5 4} e^s}\,. \end{aligned}$$ We now move to the term $$\begin{aligned} \| \sum_{j = 0}^n \binom{n}{j} {\ensuremath{\partial}}_{c_2} G_W^{(j)} W_{c_1}^{(n+1-j)} \|_\infty &\lesssim \sum_{j = 0}^n \| {\ensuremath{\partial}}_{c_2} G_W^{(j)} \eta_{- \frac{1}{20}} \|_\infty \| W_{c_1}^{(n+1-j)} \eta_{\frac{1}{20}} \|_\infty\\ &\lesssim_M \eps^{\frac {33} {20} } e^{\frac 3 2 s} + {\eps^{\frac 5 4} e^{\frac 3 4s}}\,.\end{aligned}$$ Above we have invoked with $r = \frac 1 5$. We now move to the final three terms, which are easily estimated via $$\begin{aligned} \| \sum_{j = 1}^n G_W^{(j)} W_{c_1 c_2}^{(n-j+1)} \|_\infty &\lesssim_M \eps^{\frac 3 2} e^{\frac s 2}, \\ \| \sum_{j = 0}^n \beta_\tau^2 (\dot{\tau}_{c_1 c_2} + 2 \dot{\tau}_{c_1} \dot{\tau}_{c_2}) W^{(j)} W^{(n+1-j)} \|_\infty &\lesssim_M \eps e^{\frac 3 4 s} + \eps,\end{aligned}$$ where we have invoked for derivatives of $W$, for $j \ge 1$ for the $G_W$ contribution, and , for ${\ensuremath{\partial}}_c$ and ${\ensuremath{\partial}}_{c}^2$ of $\dot{\tau}$. 
\[Lemma:ka\] For $1 \le n \le 6$, the following estimates are valid $$\begin{aligned} \label{jay:1} \| {\ensuremath{\partial}}_{c_1 c_2} F_Z \|_\infty &\le \eps e^{\frac s 4}\,, && \| F_{Z,0}^{c_1, c_2} \|_\infty \le \eps e^{\frac 1 4 s}\,, \\ \label{jay:2} \| {\ensuremath{\partial}}_{c_1 c_2} F_Z^{(n)} \|_\infty &\le \eps e^{\frac s 4}\,, && \| F_{Z,n}^{c_1, c_2} \|_\infty \les M^{2n^2 -1} \eps^{\frac 5 8} e^{\frac 1 4 s}\,.\end{aligned}$$ First, we turn to the estimation of ${\ensuremath{\partial}}_{c_1 c_2} F_Z^{(n)}$, for which we appeal to the expression given in and estimate term by term via $$\begin{aligned} {\ensuremath{\nonumber}}&\| \beta_\tau e^{-s} \sum_{j = 0}^n \binom{n}{j} A^{(j)} \Big( \beta_3( e^{- \frac s 4} W_{c_1 c_2} + \kappa_{c_1 c_2})^{(n-j)} + \beta_4 Z_{c_1 c_2}^{(n-j)} \Big) \|_\infty \\ {\ensuremath{\nonumber}}&\qquad \lesssim e^{-s} \sum_{j = 0}^n \| A^{(j)} \|_\infty \Big( \| e^{- \frac s 4}W_{c_1 c_2}+ \kappa_{c_1 c_2})^{(n-j)}\|_{\infty} + \| Z^{(n-j)}_{c_1 c_2}\|_\infty \Big) \\ &\qquad \lesssim_M \eps e^{-s} \Big( \eps^{\frac 3 2} e^{\frac 5 4 s}+ \eps^{\frac 5 4} e^s + \eps^{\frac 5 8} e^{\frac s 4} \Big) \lesssim_M \eps^{\frac 5 4} e^{\frac s 4}\,.{\ensuremath{\nonumber}}\end{aligned}$$ Above, we have invoked estimates , , , and . Next, the second term from is estimated via $$\begin{aligned} {\ensuremath{\nonumber}}&\| \beta_\tau e^{-s} \sum_{j = 0}^n \binom{n}{j} A_{c_1 c_2}^{(j)} \Big( \beta_3 (e^{- \frac s 4} W + \kappa)^{(n-j)} + \beta_4 Z^{(n-j)} \Big) \|_\infty \\ {\ensuremath{\nonumber}}& \qquad \lesssim e^{-s} \sum_{j = 0}^n \| A^{(j)}_{c_1 c_2} \|_\infty \Big( e^{- \frac s 4} \|W^{(n-j)} \|_\infty + |\kappa| + \| Z^{(n-j)} \|_\infty \Big) \\ & \qquad \lesssim_M \eps^{\frac 5 8} e^{- \frac 3 4 s} ( 1+ \eps^{\frac 5 4} )\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked , , , and . Next, the third term from is estimated via $$\begin{aligned} {\ensuremath{\nonumber}}& \| \beta_\tau e^{-s} \sum_{j = 0}^n \sum_{i \in \{1, 2\}} \binom{n}{j} A_{c_i}^{(j)} \Big( \beta_3( e^{- \frac s 4} W_{c_{i'}} + \kappa_{c_{i'}})^{(n-j)} + \beta_4 Z_{c_{i'}}^{(n-j)} \Big) \|_\infty \\ {\ensuremath{\nonumber}}& \qquad \lesssim e^{-s} \sum_{j = 0}^n \| A_c^{(j)} \|_\infty \Big( e^{- \frac s 4} \| W_c^{(n-j)} \|_\infty + |\kappa_c| + \| Z_c^{(n-j)} \|_\infty \Big) \\ & \qquad \lesssim_M \eps^{\frac 1 2} e^{-s}\Big( \eps^{\frac 3 4} e^{\frac s 2}+ \eps^{\frac 1 4} + \eps^{\frac 1 2} \Big) \lesssim_M \eps e^{- \frac s 2}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked , - , and We now move to the final terms from which evaluate to $$\begin{aligned} {\ensuremath{\nonumber}}&\| \dot{\tau}_{c_1} \beta_\tau {\ensuremath{\partial}}_{c_2} F_Z^{(n)} + \dot{\tau}_{c_2} \beta_\tau {\ensuremath{\partial}}_{c_1} F_Z^{(n)} + \dot{\tau}_{c_1 c_2} \beta_\tau F_Z^{(n)} \|_\infty \\ & \qquad \lesssim |\dot{\tau}_c| |{\ensuremath{\partial}}_c F_Z^{(n)}| + |\dot{\tau}_{c_1 c_2}| \| F_Z^{(n)} \|_\infty \lesssim \eps^{\frac 5 4} e^{- \frac s 2} + \eps^2 e^{- \frac s 4}\,, {\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked the estimates - , as well as estimates and . We now turn to equation for the form of $F_{Z,n}^{c_1, c_2}$. 
We will estimate term by term, starting with $$\begin{aligned} \| 1_{n \ge 2} \sum_{j = 2}^n \binom{n}{j} \beta_\tau \beta_2 W^{(j)} Z_{c_1 c_2}^{(n-j+1)} \|_\infty &\lesssim 1_{n \ge 2} M^{j^2} M^{2(n-j+1)^2} \eps^{\frac 5 8} e^{\frac s 4} \lesssim M^{-1 + 2n^2} \eps^{\frac 5 8} e^{\frac s 4}\,, \\ \| 1_{n \ge 1} \sum_{j = 1}^n \binom{n}{j} G_Z^{(j)} Z_{c_1 c_2}^{(n-j+1)} \|_\infty& \lesssim_M \eps^{\frac 5 8} e^{- \frac 3 4 s}\,, \end{aligned}$$ where for the first estimate above we have invoked the elementary inequality , and for the second estimate we have invoked . Next, we continue by estimating $$\begin{aligned} {\ensuremath{\nonumber}}&\| \sum_{j = 0}^n \sum_{i \in \{1, 2 \}} \binom{n}{j} Z_{c_i}^{(j+1)} \Big( \dot{\tau}_{c_{i'}} \beta_\tau^2 \beta_2 W^{(n-j)} + \beta_\tau \beta_2 W_{c_{i'}}^{(n-j)} + {\ensuremath{\partial}}_{c_{i'}}G_Z^{(n-j)} \Big) \|_\infty \\ {\ensuremath{\nonumber}}& \qquad \lesssim \sum_{j = 0}^n \| Z^{(j+1)}_c \|_\infty \Big(|\dot{\tau}_c| \| W^{(n-j)} \|_\infty + \| W_c^{(n-j)} \|_\infty + \| {\ensuremath{\partial}}_c G_Z^{(n-j)} \|_\infty \Big) \\ & \qquad \lesssim_M \eps^{\frac 1 2} e^{- \frac s 2} \Big( \eps^{\frac 1 2} e^{\frac s 4} + \eps^{\frac 3 4} e^{\frac 3 4} + \eps^{\frac 1 4} e^{\frac s 4} \Big) \lesssim_M \eps^{\frac 5 4} e^{\frac s 4},{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked the bootstrap assumptions , , - , , as well as on the ${\ensuremath{\partial}}_c G_Z$ term. We return to , and address the third and fourth lines by estimating $$\begin{aligned} {\ensuremath{\nonumber}}& \| \sum_{j = 0}^n \binom{n}{j} Z^{(j+1)} \Big( \dot{\tau}_{c_1 c_2} \beta_\tau^2 \beta_2 W^{(n-j)} + 2 \dot{\tau}_{c_1} \dot{\tau}_{c_2} \beta_\tau^2 \beta_2 W^{(n-j)} + \beta_\tau \beta_2 W_{c_1 c_2}^{(n-j)} \\ {\ensuremath{\nonumber}}& \qquad \qquad \qquad \qquad + \sum_{i \in \{1, 2\}} \dot{\tau}_{c_i} \beta_\tau^2 \beta_2 W_{c_{i'}}^{(n-j)} + {\ensuremath{\partial}}_{c_1 c_2}G_Z^{(n-j)} \Big) \|_{\infty} \\ {\ensuremath{\nonumber}}& \lesssim \sum_{j = 0}^n \| Z^{(j+1)} \|_\infty \Big( |\dot{\tau}_{c_1 c_2}| \| W^{(n-j)} \|_\infty + |\dot{\tau}_c|^2 \| W^{(n-j)} \|_\infty + \| W_{c_1 c_2}^{(n-j)} \|_\infty \\ {\ensuremath{\nonumber}}& \qquad \qquad \qquad \qquad + |\dot{\tau}_c| \| W_c^{(n-j)} \|_\infty + \| {\ensuremath{\partial}}_{c_1 c_2} G_Z^{(n-j)} \|_\infty \Big) \\ & \lesssim_M e^{- \frac 5 4 s} \Big( \eps e^s + \eps e^{\frac s 4} + \eps^{\frac 3 2} e^{\frac 3 2 s} + \eps^{\frac 5 4} e^{\frac 3 4 s} + \eps^{\frac 5 4} e^{\frac 5 4 s} \Big) \lesssim_M \eps^{\frac 5 4} e^{\frac s 4}\,.{\ensuremath{\nonumber}}\end{aligned}$$ Above, we have invoked , , , , , , as well as for the ${\ensuremath{\partial}}_{c_1 c_2} G_Z$ contribution. This concludes the treatment of the terms from and hence the proof of the lemma. 
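For orientation, we sketch how the individual bounds above combine into the final estimate of the lemma: among all of the contributions, only the first term of the preceding group is borderline, while every other term carries an additional factor of at least $\eps^{\frac 5 8}$ or $e^{-s}$. Writing $C_M$ for a generic $M$-dependent constant (this is bookkeeping, not a sharp accounting), and assuming as usual that $e^{-s} \le e^{-s_0} \le \eps$, we obtain $$\begin{aligned} \| F_{Z,n}^{c_1, c_2} \|_\infty \lesssim M^{2n^2 - 1} \eps^{\frac 5 8} e^{\frac s 4} + C_M \, \eps^{\frac 5 8} e^{\frac s 4} \big( \eps^{\frac 5 8} + e^{-s} \big) \les M^{2n^2 - 1} \eps^{\frac 5 8} e^{\frac s 4}\,, \end{aligned}$$ upon taking $\eps$ sufficiently small relative to $M$.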
For $1 \le n \le 6$, the following estimates are valid $$\begin{aligned} \label{found:me:1} \| {\ensuremath{\partial}}_{c_1 c_2} F_A \|_\infty &\le \eps e^{\frac s 4}\,, && \| F_{A,0}^{c_1, c_2} \|_\infty \le \eps e^{\frac 1 4 s}\,, \\ \label{found:me:2} \| {\ensuremath{\partial}}_{c_1 c_2} F_A^{(n)} \|_\infty &\le \eps e^{\frac s 4}\,, && \| F_{A,n}^{c_1, c_2} \|_\infty \les M^{2n^2 -1} \eps^{\frac 5 8} e^{\frac 1 4 s}\,.\end{aligned}$$ First, we use expression to produce the estimates $$\begin{aligned} {\ensuremath{\nonumber}}\| \beta_\tau \beta_1 e^{-s} \Big( e^{- \frac s 4} W_{c_2} + \kappa_{c_2} + Z_{c_2} \Big)\Big( e^{- \frac s 4} W_{c_1} + \kappa_{c_1} + Z_{c_1} \Big) \|_\infty &\le \eps^{\frac 3 2} e^{\frac s 4}\,, \\ \| \beta_\tau \beta_1 e^{-s}\Big( e^{- \frac s 4} W + \kappa + Z \Big) \Big( e^{- \frac s 4} W_{c_1 c_2} + \kappa_{c_1 c_2} + Z_{c_1 c_2} \Big) \|_{\infty} &\le \eps^{\frac 5 4}e^{\frac s 4} , {\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked estimates , , , , , , and . For the last line from expression , we have $$\begin{aligned} {\ensuremath{\nonumber}}&\| \dot{\tau}_{c_1 c_2} \beta_\tau F_A + \dot{\tau}_{c_2} \beta_\tau {\ensuremath{\partial}}_{c_1}F_A + \dot{\tau}_{c_1} \beta_\tau {\ensuremath{\partial}}_{c_2}F_A \| \lesssim |\dot{\tau}_{c_1 c_2}| \| F_A \|_\infty + |\dot{\tau}_c| \| {\ensuremath{\partial}}_c F_A \|_\infty \lesssim M \eps e^{- \frac s 4} + \eps e^{- \frac s 2} \,, {\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked the forcing estimates and . This contribution is clearly bounded by $\eps^{\frac 3 4} e^{- \frac s 4}$ by bringing $\eps$ small relative to $M$. Next, we move to the second estimate in , for which we appeal to the expression . However, these estimates are exactly analogous to those of Lemma \[Lemma:ka\], estimate , and so we omit repeating these estimates. The estimates for general $n$, also follow analogously to Lemma \[Lemma:ka\]. Trajectory estimates {#subsect:trajectory} -------------------- In this subsection, we provide estimates on the trajectories associated with the transport structure of the equations - . We now define these trajectories via $$\begin{aligned} &{\ensuremath{\partial}}_s \Phi^{x_0}_W(s) = \mathcal{V}_W \circ \Phi_{W}^{x_0}\,, & \Phi^{x_0}_{W}(s_0) &= x_0\,, \\ &{\ensuremath{\partial}}_s \Phi^{x_0}_Z(s) = \mathcal{V}_Z \circ \Phi_{Z}^{x_0}\,, & \Phi^{x_0}_{Z}(s_0) &= x_0\,, \\ &{\ensuremath{\partial}}_s \Phi^{x_0}_A(s) = \mathcal{V}_A \circ \Phi_{A}^{x_0}\,, & \Phi^{x_0}_{A}(s_0) &= x_0\,. \end{aligned}$$ \[l:support\] Let $\Phi(s)$ denote either $\Phi^{x_0}_W$, $\Phi^{x_0}_Z(s)$ or $\Phi^{x_0}_A$, then for ${\left|x_0\right|}\leq \frac M 2 \eps^{- \frac 1 4}$ we have $$\label{upp:Z:t} |\Phi^{x_0}(s) | \leq \frac{2M}{3}e^{\frac 5 4 s}\,.$$ As a consequence we obtain $$\begin{aligned} \label{assume:cpct:supp:ZA} {\ensuremath{\mathrm{supp\,}}}W(s)\cup{\ensuremath{\mathrm{supp\,}}}Z(s) \cup {\ensuremath{\mathrm{supp\,}}}A(s) \subset B\left(\frac{3}{4} M \eps e^{\frac 5 4 s}\right)\,,\end{aligned}$$ which verifies the bootstrap assumption We restrict to the case $\Phi=\Phi^{x_0}_W$. The cases $\Phi=\Phi^{x_0}_Z$ and $\Phi=\Phi^{x_0}_A$ will follow in an analogous fashion. 
Recall that for $\Phi=\Phi^{x_0}_W$ we have $${\ensuremath{\partial}}_s \Phi = \frac 5 4 \Phi+ \beta_\tau W\circ \Phi+ G_W\circ \Phi\,.$$ As a consequence of , and , we have $$\label{e:pelican} {\left \| W \right\|}_{\infty}+{\left \| G_W \right\|}_{\infty}\les M^{\frac15}\eps^{\frac15} e^{\frac s4}+e^{\frac s4}\les e^{\frac s4}\,.$$ Thus, by Grönwall, we obtain . The support bound follows directly from , the defining equations -, together with . Let $\Phi(s)$ denote either $\Phi^{x_0}_Z(s)$ or $\Phi^{x_0}_A(s)$; then for ${\left|x_0\right|}\leq \frac M 2 \eps^{- \frac 1 4}$ we have $$\begin{aligned} \label{traj:large:z} |\Phi^{x_0}(s)| &\ge \min (e^{\frac s 4}, e^{\frac s 4} - e^{\frac{s_\ast}{4}}) \text{ for some } s_\ast \ge s_0\,. \end{aligned}$$ We first show that if $\Phi (s) \leq e^{\frac s 4}$, then we have the inequality $$\label{e:PhiZA:left} \frac{{\ensuremath{\partial}}}{{\ensuremath{\partial}}s} \Phi(s) \le - e^{\frac s 4}\,.$$ For notational purposes, we set $(j,G)=(2,G_Z)$ or $(j,G)=(1,G_A)$ for the cases $\Phi(s)=\Phi^{x_0}_Z(s)$ or $\Phi(s)=\Phi^{x_0}_A(s)$, respectively. We then have the ODE $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_s \Phi = & \frac 5 4 \Phi+ \beta_\tau\beta_j W\circ \Phi+ G\circ \Phi\,.\end{aligned}$$ Note that since $\alpha>1$, we have ${\left|\beta_j\right|}<1$. Assuming $\eps$ to be sufficiently small (depending on $\alpha$), applying yields $\beta_{\tau}\beta_{j}\leq 1$. Then if $\Phi (s) \leq e^{\frac s 4}$, we have from , and $$\begin{aligned} \frac{{\ensuremath{\partial}}}{{\ensuremath{\partial}}s} \Phi(s) &\le \frac 5 4 e^{\frac s4}+2\eta_{\frac{1}{20}}\circ\Phi(s)-(1-\beta_j)\kappa_0e^{\frac s4}+\eps^{\frac 1 2}e^{\frac s4}{\ensuremath{\nonumber}}\\ &\le \frac 5 4 e^{\frac s4}-(1-\beta_j)\kappa_0e^{\frac s4}+\eps^{\frac 1 2}e^{\frac s4}\,, \notag \end{aligned}$$ where we used . Since $(1-\beta_j)>0$, then assuming $\kappa_0$ is sufficiently large, depending on $\alpha$, we obtain . We now split the proof of into two subcases: 1. \[case:Tokaji1\] Either $\Phi(s)> e^{\frac s4}$ for all $s\in[s_0,\infty)$, or $x_{0}\leq 0$. 2. \[case:Tokaji2\] We have $x_0>0$ and there exists a smallest $s_1\in[s_0,\infty)$ such that $0<\Phi(s_1)\leq e^{\frac {s_1}4}$. Consider first Case \[case:Tokaji1\]. Note that $\Phi(s)> e^{\frac s4}$ for all $s\in[s_0,\infty)$ directly implies . If $x_0\leq 0$, then implies that $ \Phi(s)\leq -e^{\frac s 4}+\eps^{-\frac14} $ and hence is satisfied for $s_{*}=-\log\eps$. Now consider Case \[case:Tokaji2\]. The estimate implies that $ \frac{d}{ds}\Phi(s)\leq -e^{\frac s4}$ for all $s\geq s_1$. Thus, by continuity, there exists a unique $s_*>s_1$ such that $\Phi(s_*)=0$. Then as a consequence of , by following trajectories forwards and backwards in time from $s_*$ we conclude that $${\left|\Phi(s)\right|}\geq {\left|e^{\frac s4}-e^{\frac{s_*}{4}}\right|}\,,$$ for all $s\in[s_1,\infty)$. In the case $s_1\neq s_0$, if $s\in [s_0,s_1)$ then by definition ${\left|\Phi(s)\right|}\geq e^{\frac s4}$. Thus we have .
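To spell out the forwards and backwards integration used in the last step: since $\Phi(s_\ast)=0$ and, by the differential inequality established above, ${\ensuremath{\partial}}_s \Phi \le - e^{\frac s 4}$ on $[s_1,\infty)$, we have $$\begin{aligned} \Phi(s) = \int_{s_\ast}^{s} {\ensuremath{\partial}}_{s'} \Phi \, ds' \le - 4\big(e^{\frac s4} - e^{\frac{s_\ast}{4}}\big) \quad \text{for } s \ge s_\ast\,, \qquad \Phi(s) = - \int_{s}^{s_\ast} {\ensuremath{\partial}}_{s'} \Phi \, ds' \ge 4\big(e^{\frac{s_\ast}{4}} - e^{\frac s4}\big) \quad \text{for } s_1 \le s \le s_\ast\,, \end{aligned}$$ and in either case $|\Phi(s)| \ge 4\,{\left|e^{\frac s4}-e^{\frac{s_\ast}{4}}\right|} \ge {\left|e^{\frac s4}-e^{\frac{s_\ast}{4}}\right|}$.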
For any ${\left|x_0\right|}\geq \ell$ and $s_0\geq -\log\eps$ we have $$\begin{aligned} \label{e:escape:from:LA} \Phi_W^{x_0} \ge |x_0|\eps^{\frac15} e^{\frac{s}{5}}\,.\end{aligned}$$ Using $W(0,s)=0$, , , and we obtain $$\begin{aligned} \mathcal V_W x&= \frac{5}{4}x^2+x\beta_{\tau} W+G_W x\\ &\geq x^2\left(\frac{5}{4}-\beta_{\tau}{\left \| W^{(1)} \right\|}_{\infty}-{\left|G_W ^{(1)}\right|}\right)-{\left|\mu\right|}\\&\geq x^2\left(\frac{1}{4}-2e^{-\frac34 s}\right)-\eps^{\frac16}e^{-\frac34 s}\geq \frac{x^2}{5}\,,\end{aligned}$$ where in the last inequality we used that ${\left|x\right|}\geq\ell\geq \eps^{\frac14}$ and that $s_0$ is taken to be sufficiently large. Thus we obtain $$\frac{d}{ds} \left(\Phi_W^{x_0}\right)^2=2\mathcal V_W(\Phi_W^{x_0}) \Phi_W^{x_0}\geq \frac{2 (\Phi_W^{x_0})^2}{5}\label{e:escape:from:New:York}\,,$$ and hence follows by Grönwall. Analysis of modulation variables ================================ In this section we close all bootstraps related to the modulation variables $\kappa$, $\xi$ and $\tau$, together with the quantity $\mu$. Modulation variables and their time derivatives ----------------------------------------------- The following lemma verifies the bootstraps . The following estimates are valid $$\begin{aligned} \label{mod:1} |\kappa - \kappa_0| \lesssim \eps^{\frac 9 8}, \qquad |\dot{\xi} - \kappa_0| \lesssim \eps\,.\end{aligned}$$ We integrate $$\begin{aligned} |\kappa(t) - \kappa_0| \le \int_{1-\eps}^t |\dot{\kappa}| {\,\mathrm{d}}t' \lesssim \eps^{\frac 9 8}\,, \end{aligned}$$ where we have invoked . For $\dot{\xi}$, we rearrange to obtain $$\begin{aligned} \beta_\tau \dot{\xi} = \beta_\tau \kappa - e^{- \frac s 4} \mu + \beta_\tau \beta_2 Z(0, s)\,.\end{aligned}$$ Estimating the right-hand side and using that $\beta_\tau \ge \frac 1 2$ on the left-hand side yields $$\begin{aligned} |\dot{\xi} - \kappa_0| \lesssim |\kappa - \kappa_0| + e^{- \frac s 4} |\mu| + \| Z \|_\infty \lesssim \eps^{\frac 9 8} + \eps^{\frac 1 6} e^{- s} + \eps^{\frac 5 4}\,, \end{aligned}$$ where we have invoked the bootstrap bounds and . The following lemma verifies the bootstraps on $\dot{\tau}$, the second estimate of . The following estimates are valid, $$\begin{aligned} |\dot{\tau}| \les M^2 e^{-s}\,.\end{aligned}$$ We rearrange the first ODE equation, , to obtain the following estimate $$\begin{aligned} {\ensuremath{\nonumber}}|\dot{\tau}| \le & |(1 - \dot{\tau})| |G_W^{(1)}(s, 0)| + |(1 - \dot{\tau})| | \mu| | W^{(2)}(s, 0)| + |(1 - \dot{\tau})| | F_W^{(1)}(s, 0)| \\ \les & M^{2} e^{-s} + \eps^{\frac 4 {15}} e^{-\frac 32 s} + \eps^{\frac 3 4} e^{-s} \,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked the second estimate in , the bootstrap bounds , , and the second estimate in to estimate the forcing. The following verifies the bootstraps on $\mu$, the first estimate in . The following estimates are valid, $$\begin{aligned} |\mu| \lesssim_M e^{-s}. \end{aligned}$$ We rearrange for $\mu(s)$, yielding $$\begin{aligned} \label{gW:eq} q^{(5)} \mu(s) = &- 10 \beta_\tau q^{(2)} q^{(3)} - \sum_{j = 2}^4 \binom{4}{j} G_W^{(j)}(0, s) q^{(5-j)} + F_W^{(4)}(s, 0)\,. \end{aligned}$$ We use the bootstrap that $| q^{(5)}(s)| \ge \frac 1 2$, , to estimate the denominator from below.
We then estimate the right-hand side via $$\begin{aligned} {\ensuremath{\nonumber}}|\mu| &\lesssim |q^{(2)}| |q^{(3)}| + \sum_{j = 2}^3 \binom{4}{j} |G_W^{(j)}(0, s)| | q^{(5-j)}| + |G_W^{(4)}(0, s)| + |F_W^{(4)}(0, s)| \\ & \lesssim_M e^{- \frac 3 2 s} + e^{-\frac 7 4 s} + e^{-s} + \eps^{\frac 3 4} e^{-s} \lesssim_M e^{-s} \,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked with $j = 4$, and for the $F_W^{(4)}$ term, as well as the decay bootsraps on $q^{(2)}, q^{(3)}$. The following lemma verifies the bootstrap on $\dot{\kappa}$. The following estimates are valid, $$\begin{aligned} |\dot{\kappa}| \le \frac{\eps^{\frac 1 8}}{2}\,. \end{aligned}$$ We rearrange equation to obtain $$\begin{aligned} {\ensuremath{\nonumber}}|\dot{\kappa}| \le & |(1 - \dot{\tau})| e^{\frac 3 4 s} |\mu| + |(1 - \dot{\tau})| e^{\frac 3 4 s} |F_W(0, s)| \le 2 \eps^{\frac 1 6} + \eps^{\frac 3 4} \le M \eps^{\frac 1 6}\,, \end{aligned}$$ where we have invoked bootstrap for the $\mu$ estimate, and for the estimate on $F_W$. $\nabla_{\alpha, \beta}$ derivatives of modulation variables ------------------------------------------------------------ The following lemma verifies the bootstraps in . Let $c \in \{ \alpha, \beta \}$. Then the following estimates are valid $$\begin{aligned} \label{cor:mod} |\kappa_{c}| \lesssim \eps^{\frac 3 4}, \qquad |\dot{\xi}_{c}| \le \frac M 2 \eps^{\frac 1 2}\,.\end{aligned}$$ First, we have for every $- \eps \le t \le 0$, $$\begin{aligned} |{\ensuremath{\partial}}_c \kappa(t)| = |\int_{ -\eps}^{t} {\ensuremath{\partial}}_c \dot{\kappa}(t') {\,\mathrm{d}}t' | \le \int_{-\eps}^t \eps^{\frac 1 4} e^{\frac 1 2 s(t')} {\,\mathrm{d}}t' \le \int_{s_0}^\infty \eps^{\frac 1 4} e^{\frac 1 2 s'} e^{-s'} {\,\mathrm{d}}s' \lesssim \eps^{\frac 3 4},\end{aligned}$$ where we have used that ${\,\mathrm{d}}s = e^{-s} {\,\mathrm{d}}t$, and the bootstrap assumption on $\dot{\kappa}_c$ in . We now compute ${\ensuremath{\partial}}_c$ of equation to obtain the identity $$\begin{aligned} \label{free:folk:1} \mu_c = \dot{\tau}_c \beta_\tau \mu - e^{\frac s 4} \dot{\xi}_c \beta_\tau + e^{\frac s 4} \kappa_c \beta_\tau + \beta_\tau \beta_2 e^{\frac s 4} Z_c(s, 0)\,, \end{aligned}$$ which upon rearranging for $\dot{\xi}_c$, we obtain $$\begin{aligned} |\dot{\xi}_c| \lesssim e^{- \frac s 4}|\mu_c| + e^{- \frac s 4} |\dot{\tau}_c| |\mu| + |\kappa_c| + \| Z_c \|_\infty \lesssim {M^{33}} \eps^{\frac 1 2} e^{- \frac s 2} + \eps^{\frac 2 3} e^{- s}+ \eps^{\frac 1 2} + \eps^{\frac 1 2} \le \frac M 2 \eps^{\frac 1 2}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where above we have invoked the bootstrap assumptions for ${\ensuremath{\partial}}_c$ of the modulation variables, and for ${\ensuremath{\partial}}_c Z$. The following verifies the first bootstrap in . Let $c \in \{ \alpha, \beta \}$. Then the following estimates are valid $$\begin{aligned} |\mu_c| \le \frac{{M^{33}}}{2} \eps^{\frac 1 2} e^{- \frac s 4}.\end{aligned}$$ We take ${\ensuremath{\partial}}_c$ of equation which produces the identity $$\begin{aligned} {\ensuremath{\nonumber}}q^{(5)} \mu_c = &- q_c^{(5)} \mu - 10 \dot{\tau}_c \beta_\tau^2 q^{(2)} q^{(3)} - 10 \beta_\tau (q^{(2)}_c q^{(3)} + q^{(2)} q^{(3)}_c) \\ \label{deriv:gW} &- \sum_{j = 2}^4 \binom{4}{j} (G_W^{(j)}(0, s) q_c^{(5-j)} + {\ensuremath{\partial}}_c G_W^{(j)}(0, s) q^{(5-j)}) + {\ensuremath{\partial}}_c F_W^{(4)}(0, s)\,, \end{aligned}$$ where we recall that $q^{(j)}(s) := W^{(j)}(0, s)$, according to . We now estimate each of the terms on the right-hand side above. 
$$\begin{aligned} |\mu_c| &\lesssim | q^{(5)}_c | |\mu| + |\dot{\tau}_c| |q^{(2)}| |q^{(3)}| + |q^{(2)}_c| |q^{(3)}| + |q^{(2)}| |q^{(3)}_c| \\ &\qquad + \sum_{j = 2}^4 (|G_W^{(j)}(s, 0)| \| q^{(5-j)}_c \|_\infty + \| {\ensuremath{\partial}}_c G_W^{(j)} \|_\infty |q^{(5-j)}|) + \| {\ensuremath{\partial}}_cF_W^{(4)} \|_\infty \\ &\lesssim \eps^{\frac {13} {24}} e^{- \frac 5 8 s} + \eps^{\frac 1 2} e^{- \frac 3 2 s} + M^{40}\eps^{\frac 3 4} e^{- \frac s 4} + \eps^{\frac35}e^{-\frac{s}{4}} +M^{18} \eps^{\frac 3 4} e^{-\frac 1 4 s} + M^{32} \eps^{\frac 1 2} e^{- \frac s 4} + \eps^{\frac 3 4} e^{- \frac s 4}\les M^{32}\eps^{\frac12}e^{-\frac s4}\,, \end{aligned}$$ where we have invoked estimates , for the $G_W^{(j)}$ contributions, and for the forcing term, and to estimate $\mu$ and $ \dot \tau_c$, as well as the estimates , , , to bound the terms involving $q$. We have also invoked bootstrap . The following verifies the second bootstrap in . Let $c \in \{ \alpha, \beta \}$. Then the following estimates are valid $$\begin{aligned} |\dot{\tau}_c| \le \frac 1 2 \eps^{\frac 1 2}\,.\end{aligned}$$ We take ${\ensuremath{\partial}}_c$ of equation to obtain $$\begin{aligned} \label{Ipso:1} \beta_\tau (1 + \beta_\tau \dot{\tau}) \dot{\tau}_c = {\ensuremath{\partial}}_c G^{(1)}_W(0, s) - \mu_c q^{(2)} - \mu q^{(2)}_c + {\ensuremath{\partial}}_c F^{(1)}_W(0, s)\,. \end{aligned}$$ We now estimate the right-hand side above via $$\begin{aligned} | \eqref{Ipso:1} | \lesssim & \| {\ensuremath{\partial}}_c G_W^{(1)} \|_\infty + |\mu_c| |q^{(2)}| + |\mu| |q^{(2)}_c| + \| {\ensuremath{\partial}}_c F_W^{(1)} \|_\infty \\ \lesssim & M^{2j^2} \eps^{\frac 1 2} e^{- \frac s 4} +{M^{33}}\eps^{\frac 3 5} e^{- s } + \eps^{\frac{11}{12}} + \eps^{\frac 3 4} e^{- \frac s 4} \lesssim_M \eps^{\frac 3 4},\end{aligned}$$ where above we have invoked estimate for the ${\ensuremath{\partial}}_c G_W^{(1)}$ contribution, bootstraps , for the $\mu, \mu_c$ estimates respectively, bootstraps , for the $q^{(2)}, q^{(2)}_c$ contributions respectively, and finally for the ${\ensuremath{\partial}}_c F_W^{(1)}$ estimate. Finally, to conclude, we estimate the prefactor on the left-hand side of from below $$\begin{aligned} \beta_\tau (1 + \beta_\tau \dot{\tau}) \ge \frac 7 8( 1 - \beta_\tau |\dot{\tau}|) \ge \frac 3 4\,. \end{aligned}$$ The following verifies the third bootstrap in . Let $c \in \{ \alpha, \beta \}$. Then the following estimates are valid $$\begin{aligned} |\dot{\kappa}_c| < \frac 12 \eps^{\frac 14} e^{\frac s 2}\,.\end{aligned}$$ We compute ${\ensuremath{\partial}}_c$ of equation to obtain the identity $$\begin{aligned} \label{identity:kappa:c1} \beta_\tau \dot{\kappa}_c = e^{\frac 3 4 s} \mu_c - \dot{\kappa} \beta_\tau^2\dot{\tau}_c + e^{\frac 3 4 s} {\ensuremath{\partial}}_c F_W(0, s)\,, \end{aligned}$$ upon which estimating yields $$\begin{aligned} {\ensuremath{\nonumber}}| \dot{\kappa}_c| \lesssim & e^{\frac 3 4 s} | \mu_c| + |\dot{\kappa}| |\dot{\tau}_c| + e^{\frac 3 4 s} \| {\ensuremath{\partial}}_c F_W \|_\infty \lesssim {M^{33}}e^{\frac 1 2 s} \eps^{\frac 1 2} + \eps^{\frac 5 8} + M \eps^{\frac 3 4} e^{ \frac s 2} \lesssim_M \eps^{\frac 1 2} e^{\frac s 2}, \end{aligned}$$ where we have invoked the bootstraps on the modulation variables, , , as well as the forcing estimate . $\nabla_{\alpha, \beta}^2$ derivatives of modulation variables -------------------------------------------------------------- The following verifies the bootstraps in . Let $c_i \in \{ \alpha, \beta \}$ for $i = 1,2$.
Then the following estimates are valid $$\begin{aligned} |\kappa_{c_1 c_2}| \le M^{\frac 5 2} \eps^{\frac 5 4} e^s, \qquad |\dot{\xi}_{c_1 c_2}| \le M^{\frac 7 2} \eps^{\frac 5 4} e^{s}.\end{aligned}$$ We have to integrate $$\begin{aligned} |\kappa_{c_1 c_2}| = | \int_{1-\eps}^{t} \dot{\kappa}_{c_1 c_2}| \lesssim \int_{s_0}^s M^2 \eps^{\frac 5 4} e^{2s'} e^{-s'} {\,\mathrm{d}}s' = M^2 \eps^{\frac 5 4} (e^s - e^{s_0}),\end{aligned}$$ where above we have invoked the bootstrap assumption on $\dot{\kappa}_{c_1 c_2}$. Next, we want to obtain an expression for $\dot{\xi}_{c_1 c_2}$. For this, we differentiate the expression which produces the identity $$\begin{aligned} \mu_{c_1 c_2} = \beta_\tau \dot{\tau}_{c_2} \mu_{c_1} + \beta_\tau \dot{\tau}_{c_1} \mu_{c_2} + \dot{\tau}_{c_1 c_2} \beta_\tau \mu - e^{\frac s 4} \beta_\tau \dot{\xi}_{c_1 c_2} + e^{\frac s 4} \beta_\tau \kappa_{c_1 c_2} + \beta_\tau \beta_2 e^{\frac s 4} Z_{c_1 c_2}(s, 0),\end{aligned}$$ which rearranging for $\dot{\xi}_{c_1 c_2}$ gives $$\begin{aligned} {\ensuremath{\nonumber}}|\dot{\xi}_{c_1 c_2}| \lesssim & e^{- \frac s 4}( |\mu_{c_1 c_2}| + |\dot{\tau}_c| |\mu_c| + |\dot{\tau}_{c_1 c_2}| |\mu|) + |\kappa_{c_1 c_2}| + \| Z_{c_1 c_2} \|_\infty \\ {\ensuremath{\nonumber}}\lesssim & e^{- \frac s 4} (M \eps^{\frac 5 4} e^{\frac 5 4 s} + {M^{33}} \eps e^{- \frac s 4} + \eps^{\frac 7 6}) + M^3 \eps^{\frac 5 4} e^s + M^{2j^2} \eps^{\frac 5 8} e^{\frac s 4} \\ {\ensuremath{\nonumber}}\lesssim & M^3 \eps^{\frac 5 4} e^s\,, \end{aligned}$$ where above we have invoked - for the second derivatives of the modulation variables, for the $\dot{\tau}_c$ term, for the $\mu$ term, and for the $Z_{c_1 c_2}$ contribution. The following verifies the bootstraps in on $\mu$. Let $c_i \in \{ \alpha, \beta \}$ for $i = 1,2$. Then the following estimates are valid $$\begin{aligned} | \mu_{c_1 c_2}| \le \frac{ M}{2} \eps^{\frac 5 4} e^{\frac 5 4 s}.\end{aligned}$$ We differentiate equation in ${\ensuremath{\partial}}_{c_2}$ to get $$\begin{aligned} \label{bird:1} q^{(5)} \mu_{c_1 c_2} = & - q^{(5)}_{c_2} \mu_{c_1} - q^{(5)}_{c_1 c_2} \mu - 10 ( \dot{\tau}_{c_1 c_2} + 2 \dot{\tau}_{c_1} \dot{\tau}_{c_2} ) \beta_\tau^2 q^{(2)} q^{(3)} - 10 \beta_\tau^2 \dot{\tau}_{c_{i'}} (q^{(2)}_{c_i} q^{(3)} + q^{(2)} q^{(3)}_{c_i}) \\ \label{bird:2} & - 10 \beta_\tau (q^{(3)}_{c_i} q^{(2)}_{c_{i'}} + q^{(2)}_{c_1 c_2} q^{(3)} + q^{(2)} q^{(3)}_{c_1 c_2}) - \sum_{j = 2}^4 \binom{4}{j} ({\ensuremath{\partial}}_{c_2}G_W^{(j)}(s, 0) q_{c_1}^{(5-j)} \\ \label{bird:3} & + G_W^{(j)}(s, 0) q_{c_1 c_2}^{(5-j)} + {\ensuremath{\partial}}_{c_1}G_W^{(j)}(s, 0) q_{c_2}^{(5-j)} + {\ensuremath{\partial}}_{c_1 c_2} G_W^{(5-j)}(s, 0) q^{(5-j)} ) + {\ensuremath{\partial}}_{c_1 c_2} F_W^{(4)}(s, 0)\,. 
\end{aligned}$$ We now estimate all of the terms above, line by line, starting with $$\begin{aligned} | \eqref{bird:1} |& \lesssim \| W^{(5)}_c \|_\infty |\mu_c| + \| W^{(5)}_{c_1 c_2} \|_\infty |\mu| + ( |\dot{\tau}_{c_1 c_2}| + |\dot{\tau}_c|^2) |q^{(2)}| |q^{(3)}| + |\dot{\tau}_c| ( |q^{(2)}_c| |q^{(3)}| + |q^{(2)}| |q^{(3)}_c| ) \\ &\lesssim_M \eps^{\frac 5 4} e^{\frac s 2}+ \eps^{\frac 5 3} e^{\frac 3 4 s} + ( \eps e^{\frac 3 4 s} + \eps )\eps^{\frac1{10}} e^{-\frac 7 4s} + \eps^{\frac12}(\eps^{\frac34}e^{-\frac s4}+\eps^{\frac 3{5}}e^{-\frac s4 })\les_{M} \eps^{\frac54} e^{\frac34 s}\,.\end{aligned}$$ Above, we have invoked , for the $W^{(5)}_c, W^{(5)}_{c_1 c_2}$ contributions, respectively, , and for the estimates on the modulation variables, for the decay estimates on $q^{(2)}, q^{(3)}$, and finally and for $q^{(2)}_c, q^{(3)}_c$ estimates. Next, we bound the terms in $$\begin{aligned} |\eqref{bird:2}| \lesssim & |q_c^{(3)}| |q^{(2)}_c| + |q^{(3)}| |q^{(2)}_{c_1 c_2}| + |q^{(2)}| |q^{(3)}_{c_1 c_2}| + \sum_{j = 2}^4 |{\ensuremath{\partial}}_c G_W^{(j)}(s, 0)| \| W_c^{(5-j)}\|_{\infty} \\ \lesssim &\eps^{\frac 5 4} e^{\frac 5 4 s} +\eps^{\frac 3 2} e^{\frac s 2 }+\eps^{\frac85 }e^{\frac34s} + M^{18}\eps^{\frac54}e^{-\frac s2}\les \eps^{\frac 5 4} e^{\frac 5 4 s} \,,\end{aligned}$$ where above we have invoked estimate for the transport term, as well as the bootstraps , , , for the $q^{(2)}, q^{(3)}$ quantities (and their derivatives in $c$). Lastly, we estimate the terms in $$\begin{aligned} \| \eqref{bird:3} \|_\infty &\lesssim \sum_{j = 2}^4 (\| G_W^{(j)} \|_\infty \| q_{c_1 c_2}^{(5-j)} \|_\infty + \| {\ensuremath{\partial}}_c G_W^{(j)} \|_\infty \| W_c^{(5-j)} \|_\infty + \| {\ensuremath{\partial}}_{c_1 c_2}G_W^{(j)} \|_\infty ) + \| {\ensuremath{\partial}}_{c_1 c_2} F_W^{(4)} \|_\infty \\ &\lesssim_M \eps^{\frac 3 2} e^{\frac s 2} + \eps^{\frac 5 4} e^{\frac s 2} + \eps^{\frac 5 8} e^{\frac s 2} + \eps e^{\frac s 2},\end{aligned}$$ where we have invoked estimates , , , and . The following verifies the bootstraps on $\dot{\tau}_{c_1 c_2}$. Let $c_i \in \{ \alpha, \beta \}$ for $i = 1,2$. Then the following estimates are valid $$\begin{aligned} |\dot{\tau}_{c_1 c_2}| \le \frac{1}{2} \eps e^{\frac 3 4 s}.\end{aligned}$$ We take ${\ensuremath{\partial}}_{c_2}$ of equation to obtain the identity $$\begin{aligned} {\ensuremath{\nonumber}}\beta_\tau (1 + \beta_\tau \dot{\tau}) \dot{\tau}_{c_1 c_2} = & - \beta_\tau^2 (1 + \beta_\tau \dot{\tau}) \dot{\tau}_{c_1} \dot{\tau}_{c_2} - \beta_\tau^3 \dot{\tau} \dot{\tau}_{c_2} \dot{\tau}_{c_1} - \beta_\tau^2 \dot{\tau}_{c_2} \dot{\tau}_{c_1} - {\ensuremath{\partial}}_{c_1 c_2} G_W^{(1)}(0, s) \\ \label{bravo:1} &- \mu_{c_1 c_2} q^{(2)} - \mu_{c_i} q^{(2)}_{c_{i'}} - \mu q^{(2)}_{c_1 c_2} + {\ensuremath{\partial}}_{c_1 c_2} F_W^{(1)}(0,s)\,.
\end{aligned}$$ We now estimate each of the terms on the right-hand side above via $$\begin{aligned} {\ensuremath{\nonumber}}|\dot{\tau}_{c_1 c_2}| &\lesssim |\dot{\tau}_c|^2 + (|\dot{\tau}| + 1) |\dot{\tau}_c|^2 + \| {\ensuremath{\partial}}_{c_1 c_2} G_W^{(1)} \|_\infty + |q^{(2)}| |\mu_{c_1 c_2}| + |\mu_c| |q^{(2)}_c| \\ {\ensuremath{\nonumber}}&\qquad + |\mu| \| W^{(2)}_{c_1 c_2} \|_\infty + \| {\ensuremath{\partial}}_{c_1 c_2} F_W^{(1)} \|_\infty \\ &\lesssim_M \eps + \eps^{\frac 5 8} e^{\frac s 2} + \eps^{\frac 5 4} e^{\frac s 2} + \eps^{\frac 5 4} e^{\frac s 2} + \eps^{\frac 5 4} e^{\frac 3 4 s} + \eps e^{\frac s 2}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked estimates for the $G_W^{(1)}$ term above and for the $F_W^{(1)}$ term. We have also invoked , - for the modulation variables, and . The following verifies the bootstraps on $\dot{\kappa}_{c_1 c_2}$, the second estimate in . Let $c_i \in \{ \alpha, \beta \}$ for $i = 1,2$. Then the following estimates are valid $$\begin{aligned} |\dot{\kappa}_{c_1 c_2}| \le \frac{M^2}{2} \eps^{\frac 5 4} e^{2s}\,.\end{aligned}$$ We compute ${\ensuremath{\partial}}_{c_2}$ of equation to get to $$\begin{aligned} {\ensuremath{\nonumber}}|\beta_\tau \dot{\kappa}_{c_1 c_2}| =& |- \beta_\tau^2 \dot{\tau}_{c_i} \dot{\kappa}_{c_{i'}} + e^{\frac 3 4 s} \mu_{c_1 c_2} - 2 \dot{\kappa} \beta_\tau^2 \dot{\tau}_{c_1} \dot{\tau}_{c_2} + e^{\frac 3 4s} {\ensuremath{\partial}}_{c_1 c_2}F_W(0, s)| \\ {\ensuremath{\nonumber}}\lesssim & \eps^{\frac 3 4} e^{\frac s 2} + M \eps^{\frac 5 4} e^{2s} + \eps^{\frac 5 4} e^{\frac s 2} + \eps^{\frac 1 2} e^{\frac 5 4 s} \lesssim M \eps^{\frac 5 4} e^{2s}\,,\end{aligned}$$ where we have invoked estimates for the first derivative of the modulation variables in $c$, for the $\mu_{c_1 c_2}$ term, and estimate for the ${\ensuremath{\partial}}_{c_1 c_2} F_W$ term. Analysis of $Z$ and $A$ ======================= For this section, we consider the equations for $Z$ and $A$ given by and . We begin with the lowest order estimate, for which there is no damping, in which we verify the first bootstrap assumption in . The quantities $(Z, A)$ satisfy the following bounds $$\begin{aligned} \label{feel:1} &\| Z \|_\infty \le \frac 3 4 \eps^{\frac 5 4}\,, && \| Z^{(n)} \|_{\infty} \le \frac{M^{2n^2}}{2} e^{- \frac 5 4 s} \text{ for } 1 \le n \le 8\,, \\ \label{feel:2} &\| A \|_\infty \le \frac 3 4 M \eps\,, && \| A^{(n)} \|_{\infty} \le \frac{M^{2n^2}}{2} e^{- \frac 5 4 s} \text{ for } 1 \le n \le 8\,, \end{aligned}$$ which thereby verifies the bootstraps and . An application of the Grönwall lemma coupled with estimate yields the estimate $$\begin{aligned} {\ensuremath{\nonumber}}\| Z(\Phi_Z(s, x), s) \|_\infty \le & \| Z(x, s_0) \|_\infty + \int_{s_0}^s \| F_Z(\Phi_Z(s', x), s') \|_\infty ds' \\ \le & \frac 1 2 \eps^{\frac 5 4} + \int_{s_0}^s \eps^{\frac 3 4 } e^{- s'} {\,\mathrm{d}}s' \le \frac 3 4 \eps^{\frac 5 4}\,,{\ensuremath{\nonumber}}\end{aligned}$$ which establishes the desired bound upon invoking that $\Phi_Z(\cdot, x)$ is a diffeomorphism for all $s \ge s_0$. 
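For concreteness, the forcing integral in the last display can be evaluated explicitly. Recalling that $s_0 = -\log \eps$, so that $e^{-s_0} = \eps$, we have the crude but sufficient bound $$\begin{aligned} \int_{s_0}^s \eps^{\frac 3 4 } e^{- s'} {\,\mathrm{d}}s' \le \eps^{\frac 3 4} e^{-s_0} = \eps^{\frac 7 4} \le \tfrac 1 4 \eps^{\frac 5 4}\,, \end{aligned}$$ valid as soon as $\eps \le 2^{-4}$, which is consistent with the bound $\frac 1 2 \eps^{\frac 5 4} + \frac 1 4 \eps^{\frac 5 4} \le \frac 3 4 \eps^{\frac 5 4}$ used above.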
According to , we calculate $$\begin{aligned} \nonumber e^{-\int_{s_0}^s \Big( \frac{5n}{4} + n \beta_\tau \beta_2 W^{(1)} \Big) \circ \Phi_Z {\,\mathrm{d}}s'} = & e^{- \frac{5n}{4}(s - s_0)} e^{- \int_{s_0}^s n \beta_\tau \beta_2 W^{(1)} \circ \Phi_Z} \\ {\ensuremath{\nonumber}}\le & e^{- n \beta_\tau \beta_2 \int_{s_0}^s \eta_{- \frac 1 5} \circ \Phi_Z }e^{- \frac{5n}{4}(s - s_0)} \\ \le &C_n e^{- \frac{5n}{4}(s - s_0)}\,.{\ensuremath{\nonumber}}\end{aligned}$$ Using this estimate, coupled with , the Grönwall lemma, we estimate for $n \ge 2$, $$\begin{aligned} {\ensuremath{\nonumber}}|Z^{(n)}(\Phi_Z(x, s), s)| \le & C_n |e^{-\frac{10}{4} (s - s_0)} Z^{(n)}(s_0, x)| + C_n \int_{s_0}^s |e^{-\frac{10}{4}(s - s')} F_{Z,n} \circ \Phi_Z |{\,\mathrm{d}}s' \\ {\ensuremath{\nonumber}}\le & C_n \eps^{\frac 5 4} e^{- \frac 5 4 (s - s_0)} + C_n \int_{s_0}^s e^{- \frac{10}{4} (s - s')}M^{2n-1} e^{- \frac 5 4 s'} {\,\mathrm{d}}s' \\ {\ensuremath{\nonumber}}\le & \frac{M^{2n}}{2} e^{- \frac 5 4 s}\,.\end{aligned}$$ We now perform a similar calculation for $n = 1$, using estimate in place of . For the $A$ estimates, the identical arguments apply using Lemma \[phone:1\]. For $1 \le n \le 7$, we have the following estimates on $Z$ and $A$ $$\begin{aligned} &\| {\ensuremath{\partial}}_c Z \|_\infty \le \frac 1 2 \eps^{\frac 1 2}\,, && \| {\ensuremath{\partial}}_c Z^{(n)} \|_\infty \le \frac 1 2 M^{2k^2} \eps^{\frac 1 2} e^{- \frac s 2}\,, \\ &\| {\ensuremath{\partial}}_c A \|_\infty \le \frac 1 2 \eps^{\frac 1 2}\,, && \| {\ensuremath{\partial}}_c A^{(n)} \|_\infty \le \frac 1 2 M^{2k^2} \eps^{\frac 1 2} e^{- \frac s 2}\,, \end{aligned}$$ which thereby verifies the bootstraps - . This follows immediately from Grönwall, upon invoking the two right-most estimates in - for $Z$, and similarly - for $A$. For $0 \le n \le 6$, $$\begin{aligned} & \| {\ensuremath{\partial}}_{c_1 c_2} Z^{(n)} \|_\infty \le \frac 1 2 M^{2n^2} \eps^{\frac 5 8} e^{\frac s 4}\,, && \| {\ensuremath{\partial}}_{c_1 c_2} A^{(n)} \|_\infty \le\frac 1 2 M^{2n^2} \eps^{\frac 5 8} e^{\frac s 4},\end{aligned}$$ which therefore verifies the bootstrap assumptions - . This follows immediately from Grönwall, upon invoking the two right-most estimates in - for $Z$, and similarly - for $A$. Analysis of $W$ at $x = 0$ {#s:wx:at:0} ========================== In this section, we analyze $W$ and higher order derivatives of $W$ at $x = 0$. While $q^{(0)}(s), q^{(1)}(s), q^{(4)}(s)$ are constrained from , the quantities $q^{(2)}, q^{(3)}$ and $q^{(5)}$ are not constrained and therefore must be determined through ODEs in $s$ that they obey. ODE analysis of $q^{(2)}, q^{(3)}$ ---------------------------------- In this series of estimates, we use the crucial inductive assumption, , in order to integrate *backwards* the flow. First, we rewrite the ODEs in the following way: $$\begin{aligned} \label{mika} ({\ensuremath{\partial}}_s - \frac 3 4 )q^{(2)} = & \mathcal{F}^{(2)}(s), \qquad ({\ensuremath{\partial}}_s - \frac 1 2) q^{(3)} = \mathcal{F}^{(3)}(s)\,. \end{aligned}$$ where $$\begin{aligned} \label{back:2} &\mathcal{F}^{(2)} := 3(\beta_\tau - 1) q^{(2)} - \mu q^{(3)} - 2 G_W^{(1)}(0, s) q^{(2)} - G_W^{(2)}(0, s) + F_W^{(2)}(0, s)\,, \\ \label{back:3} &\mathcal{F}^{(3)} := 4(\beta_\tau - 1) q^{(3)} - 3 G^{(1)}_W(0, s) q^{(3)} -3 \beta_\tau |q^{(2)}|^2 - 3 G_W^{(2)}(0, s) q^{(2)} - G_W^{(3)}(0, s) + F_W^{(3)}(0, s)\,.\end{aligned}$$ and we recall the notation $q^{(n)}=W^{(n)}(0)$ specified in . 
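Before estimating the particular solutions, we record, as a sketch, the elementary mechanism behind the backwards integration. For a model scalar ODE $({\ensuremath{\partial}}_s - \lambda) q = \mathcal{F}$ with $\lambda > 0$ and with the vanishing condition $q(s_N) = 0$ (which is the content of the inductive assumption for the distinguished parameters $(\alpha_N, \beta_N)$), Duhamel's formula reads $$\begin{aligned} q(s) = \int_{s_N}^{s} e^{\lambda (s - s')} \mathcal{F}(s') {\,\mathrm{d}}s'\,, \end{aligned}$$ so that for $s \le s_N$ the exponential factor satisfies $e^{\lambda(s-s')} \le 1$, while for $s_N \le s \le s_{N+1}$ it is at most $e^{\lambda}$, since $s_{N+1} - s_N = 1$. In particular, the growing factors $e^{\frac 3 4 (s - s_0)}$ and $e^{\frac 1 2 (s - s_0)}$, which would be incurred by integrating forward from $s_0$, never enter these particular estimates.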
We first prove lemmas for the particular quantities $W_{\alpha_{N}, \beta_{N}}^{(2)}(0, s)$ and $W_{\alpha_{N}, \beta_{N}}^{(3)}(0, s)$. Assume that $W^{(2)}_{\alpha_N, \beta_N}(0, s_N) = 0$ and $W^{(3)}_{\alpha_N, \beta_N}(0, s_N) = 0$. Then, for all $s_0 \le s \le s_{N+1}$, the following estimates hold: $$\begin{aligned} \label{britney:spears:1} |\mathcal{F}^{(2)}| \lesssim M^8 e^{-s} , \qquad |\mathcal{F}^{(3)}| \le M^{18} e^{-s}, \qquad s_0 \le s \le s_{N+1}\,,\end{aligned}$$ and in particular, this implies that $$\begin{aligned} \label{eq:2:3:W0} |W^{(2)}_{\alpha_N, \beta_N}(0, s)| \le \frac{M^9}{2} e^{-s} , \qquad |W^{(3)}_{\alpha_N, \beta_N}(0, s)| \le \frac{M^{19}}{2} e^{-s}, \qquad s_0 \le s \le s_{N+1}\,. \end{aligned}$$ The decay estimates follow upon writing the Duhamel formula associated to the evolution of , and crucially using the vanishing at $s_{N}$: $$\begin{aligned} \label{frankie:1} W^{(2)}_{\alpha_N, \beta_N}(0, s) = & \int^s_{s_{N}} e^{\frac 3 4 (s - s')} \mathcal{F}^{(2)}(s') {\,\mathrm{d}}s', \qquad W^{(3)}_{\alpha_N, \beta_N}(0, s) = \int^s_{s_{N}} e^{\frac 12 (s - s')} \mathcal{F}^{(3)}(s') {\,\mathrm{d}}s'\,.\end{aligned}$$ We will thus focus on proving estimates , starting with $$\begin{aligned} |\mathcal{F}^{(2)}| \lesssim & |\beta_\tau - 1| |q^{(2)}| + |\mu| |q^{(3)}| + \| G_W^{(1)} \|_\infty |q^{(2)}| + \| G_W^{(2)} \|_\infty + \|F_W^{(2)} \|_\infty \\ \lesssim & \eps^{\frac 4 {15}} e^{- \frac 3 2 s} + M^{40} \eps^{\frac 1 6} e^{-\frac 7 4 s} + M^{2}\eps^{\frac1{10}}e^{-\frac 7 4 s} + M^{8} e^{-s} + \eps^{\frac 3 4} e^{-s} \lesssim M^{8} e^{-s},\end{aligned}$$ where above we have used estimates for the transport terms $G_W$, and the estimates for the $F_W^{(2)}$ term. We have also invoked , , and . We now move to $$\begin{aligned} | \mathcal{F}^{(3)} | \lesssim & |\beta_\tau - 1| |q^{(3)}| + \| G_W^{(1)} \|_\infty |q^{(3)}| + |q^{(2)}|^2 + \| G_W^{(2)} \|_\infty |q^{(2)}| + \| G_W^{(3)} \|_\infty + \|F_W^{(3)}\|_\infty \\ \lesssim & M^{40} \eps^{\frac 16} e^{- \frac 7 4 s} + M^{42} e^{-2s} + \eps^{\frac15} e^{-\frac 3 2 s} + M^{8}\eps^{\frac1{10}} e^{-\frac 7 4 s} + M^{18} e^{-s} + \eps^{\frac 3 4} e^{-s} \\ \lesssim & M^{18} e^{-s},\end{aligned}$$ where we have invoked estimates for the $q^{(2)}, q^{(3)}$ quantities, for the $|\beta_\tau - 1|$ estimate, for the estimate of $G_W^{(1)}, G_W^{(2)}, G_W^{(3)}$, and for the forcing estimate. To establish , we appeal to (which holds for all values of $s$) $$\begin{aligned} |W^{(2)}_{\alpha_N, \beta_N}(0, s)| \lesssim \int^s_{s_N} e^{\frac 3 4 (s - s')} M^8 e^{-s'} {\,\mathrm{d}}s' \lesssim M^8e^{\frac34 s}\left(e^{-\frac74 s}+e^{-\frac74 s_N}\right) \lesssim M^8 e^{-{s}},\end{aligned}$$ for all $s_0 \le s \le s_{N+1}$, where we have used that $s_{N+1} - s_N = 1$ to estimate $e^{s_{N+1}}e^{-s_{N}}\leq e$. A similar argument applies to $W^{(3)}_{\alpha_N, \beta_N}(s, 0)$. We now verify the bootstrap assumptions , which apply to every $(\alpha, \beta) \in \mathcal{B}_N(\alpha_N, \beta_N)$. 
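The passage from the distinguished parameters $(\alpha_N, \beta_N)$ to an arbitrary $(\alpha, \beta) \in \mathcal{B}_N(\alpha_N, \beta_N)$ rests on the following elementary identity (recorded here as a sketch for $W^{(2)}$; the same identity is applied verbatim to $W^{(3)}$, and it uses only that the parameter rectangle $\mathcal{B}_N$ is convex): $$\begin{aligned} W^{(2)}_{\alpha, \beta}(0, s) - W^{(2)}_{\alpha_N, \beta_N}(0, s) = \int_0^1 \Big( (\alpha - \alpha_N) {\ensuremath{\partial}}_\alpha W^{(2)}(0, s) + (\beta - \beta_N) {\ensuremath{\partial}}_\beta W^{(2)}(0, s) \Big) \Big|_{(\alpha_N, \beta_N) + \theta(\alpha - \alpha_N, \beta - \beta_N)} {\,\mathrm{d}}\theta\,, \end{aligned}$$ from which the triangle-inequality bound used in the next proof follows upon taking suprema of ${\ensuremath{\partial}}_\alpha W^{(2)}(0, s)$ and ${\ensuremath{\partial}}_\beta W^{(2)}(0, s)$ over $\mathcal{B}_N$.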
The following estimates are valid uniformly in the parameter set $\mathcal{B}_N$ given by $$\begin{aligned} |W^{(2)}(0, s)| \le \frac 1 2 \eps^{\frac{1}{10}} e^{- \frac 3 4 s} , \qquad |W^{(3)}(0, s)| \le \frac{M^{40}}{2} e^{- s}\,, \end{aligned}$$ We use the fundamental theorem of calculus in the space of parameters via $$\begin{aligned} {\ensuremath{\nonumber}}|W_{\alpha, \beta}^{(2)}(0, s)| \le & |W^{(2)}_{\alpha_{N}, \beta_{N}}(0, s)| + |\alpha - \alpha_{N}| \sup_{\alpha \in \mathcal{B}_N} |{\ensuremath{\partial}}_\alpha W^{(2)}(0, s)| + |\beta - \beta_{N}| \sup_{\beta \in \mathcal{B}_N} |{\ensuremath{\partial}}_\beta W^{(2)}(0, s)| \\ {\ensuremath{\nonumber}}\le & M^9 e^{-s}+ \Big( M^{30} e^{-s_N} e^{- \frac 3 4 (s_N - s_0)} + \eps^{\frac 1 5} e^{-s_N} e^{- \frac 1 2 (s_N - s_0)} \Big) 4 e^{\frac 3 4 (s - s_0)} \\ {\ensuremath{\nonumber}}& + M^{30} e^{-s_N} e^{- \frac 1 2 (s_N - s_0)} \eps^{\frac 1 4} e^{\frac 3 4 (s - s_0)} \\ \le & \frac12 \eps^{\frac{1}{10}} e^{- \frac 3 4 s}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where above we have used that $s_{N+1} - s_N = 1$, coupled with the particular estimates , the two left-most bootstrap bounds in - , and the assumed size of the parameter rectangle in . Similarly, for the quantity $W^{(3)}_{\alpha, \beta}$, we have $$\begin{aligned} {\ensuremath{\nonumber}}|W_{\alpha, \beta}^{(3)}(0,s)| \le & |W_{\alpha_N, \beta_N}^{(3)}(0, s)| + |\alpha - \alpha_{N}| \sup_{\alpha \in \mathcal{B}_N} |{\ensuremath{\partial}}_\alpha W^{(3)}(0, s)| + |\beta - \beta_N| \sup_{\beta \in \mathcal{B}_N} |{\ensuremath{\partial}}_\beta W^{(3)}(0, s)| \\ {\ensuremath{\nonumber}}\le & M^{19} e^{-s} + \Big( M^{30} e^{-s_N} e^{- \frac 3 4 (s_N - s_0)} + \eps^{\frac 1 5} e^{-s_N} e^{- \frac 1 2 (s_N - s_0)} \Big) \eps^{\frac 1 2} e^{\frac 1 2 (s - s_0)} \\ {\ensuremath{\nonumber}}& + M^{30} e^{- s_N} e^{- \frac 1 2 (s_N - s_0)} 4 e^{\frac 1 2 (s - s_0)} \\ \le & \frac{M^{40}}{2} e^{-s}\,.{\ensuremath{\nonumber}}\end{aligned}$$ Again, we have invoked the particular bound , the two right-most estimates in - , as well as the size of the parameter rectangle in . Finally, we are left at estimating $W^{(5)}(0, s)$, and in particular to verify the bootstrap assumption . As a result, we write the ODE evolution for this quantity, equation , as $$\begin{aligned} \label{calvin:1} &{\ensuremath{\partial}}_s {\widetilde}{q}^{(5)} = \mathcal{F}^{(5)}\,, \end{aligned}$$ where $$\begin{aligned} \label{calvin:2} &\mathcal{F}^{(5)} := - \mu q^{(6)} + (1 - \beta_\tau) q^{(5)} - 10 |q^{(3)}|^2 - \sum_{j = 1}^5 \binom{5}{j} G_W^{(j)}(0, s) q^{(6-j)} + F_W^{(5)}(0, s)\,.\end{aligned}$$ We now verify the bootstrap assumptions . The following estimate is valid for the quantity ${\widetilde}{q}^{(5)}(s)$ $$\label{fifth:deriv:W:0} {\left|{\widetilde}{q}^{(5)}\right|}\les \eps^{\frac 7 8}\,.$$ We use to integrate $$\begin{aligned} \label{e:bluebottle} {\widetilde}{q}^{(5)}(s) = {\widetilde}{q}^{(5)}(s_0) + \int_{s_0}^s \mathcal{F}^{(5)}(s') {\,\mathrm{d}}s'\,, \end{aligned}$$ and we estimate the $\mathcal{F}^{(5)}$ term on the right-hand side via $$\begin{aligned} |\mathcal{F}^{(5)}| \lesssim & \eps^{\frac{11}{30} } e^{- \frac 3 4 s'} + \eps^{\frac 1 8} e^{- \frac 3 4 s'}+ 10 M^{36}e^{-2s'} + e^{-\frac{9}{10}s'} + \eps^{\frac 3 4} e^{-s'} \lesssim \eps^{\frac 18} e^{- \frac 3 4 s'}. 
\label{e:bluebottle2}\end{aligned}$$ Above, we have used the bootstraps on $\mu$, invoked estimate to control the forcing term, to control the transport terms, $G_W^{(j)}$, to estimate the $1 - \beta_\tau$ term, estimates for the $q^{(2)}, q^{(3)}$ terms, and finally for the $q^{(6)}$ term, coupled with the fact that ${\overline}{W}^{(6)}(0) = 0$ so $q^{(6)} = {\widetilde}{q}^{(6)}$. Next, we estimate the initial data via appealing to the specific form of and also the parameter bootstraps, $$\begin{aligned} |{\widetilde}{q}^{(5)}(s_0)| = | {\widehat}{W}_0^{(5)}(0) + \alpha{\ensuremath{\partial}}_x^5 (x^2 \chi(|x|))(0) + \beta {\ensuremath{\partial}}_x^5 (x^3 \chi(|x|) )(0) | \lesssim |\alpha| + |\beta| \lesssim_M \eps\,.\end{aligned}$$ [ODE analysis of $\nabla_{\alpha, \beta} q^{(n)}$ for $n = 2, 3, 5$]{} ---------------------------------------------------------------------- We start with the two formulas, which importantly, are valid for all values of the parameters $(\alpha, \beta) \in \mathcal{B}_n$: $$\begin{aligned} \label{W2:ab} &q^{(2)}(s) = W^{(2)}(0, s) = e^{\frac 34 (s - s_0)} \alpha + \int_{s_0}^s e^{\frac 34 (s - s')} \mathcal{F}^{(2)}(s') {\,\mathrm{d}}s'\,, \\ \label{W3:ab} &q^{(3)}(s) = W^{(3)}(0, s) = e^{\frac 12 (s - s_0)} \beta + \int_{s_0}^s e^{\frac 12 (s - s')} \mathcal{F}^{(3)}(s') {\,\mathrm{d}}s'\,, \end{aligned}$$ where the forcing terms are defined in , . We differentiate the above expressions in $\alpha$, recalling the notation that $q_\alpha := {\ensuremath{\partial}}_\alpha q$ and $q_\beta := {\ensuremath{\partial}}_\beta q$ $$\begin{aligned} \label{forward:1} &{ q^{(2)}_\alpha} = e^{\frac 3 4 (s - s_0)} + \int_{s_0}^s e^{\frac 3 4 (s - s')} {\ensuremath{\partial}}_\alpha \mathcal{F}^{(2)}(s') {\,\mathrm{d}}s'\,, \\ \label{forward:2} &{ q_\alpha^{(3)} } = \int_{s_0}^s e^{\frac 1 2 (s - s')} {\ensuremath{\partial}}_\alpha \mathcal{F}^{(3)}(s') {\,\mathrm{d}}s'\,. \end{aligned}$$ Similarly, differentiating in $\beta$ yields the expressions $$\begin{aligned} \label{forward:1:b} &{ q_\beta^{(2)} } = \int_{s_0}^s e^{\frac 3 4 (s - s')} {\ensuremath{\partial}}_\beta \mathcal{F}^{(2)}(s') {\,\mathrm{d}}s'\,, \\ \label{forward:2:b} & { q_\beta^{(3)} }= e^{\frac 1 2 (s - s_0)} + \int_{s_0}^s e^{\frac 1 2 (s - s')} {\ensuremath{\partial}}_\beta \mathcal{F}^{(3)}(s') {\,\mathrm{d}}s'\,. 
\end{aligned}$$ Third, by integrating - we have $$\begin{aligned} { {\widetilde}{q}^{(5)}_c = {\widetilde}{q}^{(5)}_c(s_0) }+ \int_{s_0}^s {\ensuremath{\partial}}_c \mathcal{F}^{(5)}(s') {\,\mathrm{d}}s'\,.\end{aligned}$$ We now write the expressions: $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_c \mathcal{F}^{(2)} = & 3 \dot{\tau}_c \beta_\tau^2 q^{(2)} + 3 (\beta_\tau - 1) q_c^{(2)} - \mu_c q^{(3)} - \mu q^{(3)}_c - 2 {\ensuremath{\partial}}_c G^{(1)}_W(0,s) q^{(2)} \\ \label{joes:2} & - 2 G^{(1)}_W(0,s) q^{(2)}_c + {\ensuremath{\partial}}_c F^{(2)}_W(0, s) + {\ensuremath{\partial}}_c G^{(2)}_W(0, s)\,, \end{aligned}$$ and $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_c \mathcal{F}^{(3)} =& 4 \beta_\tau^2 q^{(3)} \dot{\tau}_c + 4 (\beta_\tau - 1) q^{(3)}_c - 3 {\ensuremath{\partial}}_c G^{(1)}_W(0, s) q^{(3)} - 3 G^{(1)}_W(0, s) q_c^{(3)} \\ {\ensuremath{\nonumber}}&- 3 \beta_\tau^2 \dot{\tau}_c |q^{(2)}|^2 - 6 \beta_\tau q^{(2)}q_c^{(2)} - 3 {\ensuremath{\partial}}_c G_W^{(2)}(0, s) q^{(2)} - 3 G_W^{(2)}(0, s) q^{(2)}_c \\ & \label{joes:2:2} + {\ensuremath{\partial}}_c G_W^{(3)}(0, s)+ {\ensuremath{\partial}}_c F^{(3)}_W(0, s)\,, \end{aligned}$$ for $c \in \{ \alpha, \beta \}$. We also record, by differentiating , the expression $$\begin{aligned} {\ensuremath{\nonumber}}{\ensuremath{\partial}}_c \mathcal{F}^{(5)} = &- \mu_c q^{(6)} - \mu q^{(6)}_c - \beta_\tau^2 \dot{\tau}_c q^{(5)} + (1 - \beta_\tau) q^{(5)}_c - 20 q^{(3)} q^{(3)}_c \\ \label{joes:3:3} & - \sum_{j = 1}^{4} \binom{5}{j} ({\ensuremath{\partial}}_c G_W^{(j)}(0, s) q^{(6-j)} + G_W^{(j)}(0, s) q_c^{(6-j)}) + {\ensuremath{\partial}}_c G_W^{(5)}(0, s) + {\ensuremath{\partial}}_c F_W^{(5)}(0, s)\,. \end{aligned}$$ The following estimates are valid on the quantities defined in , , $$\begin{aligned} \label{whole:1} |{\ensuremath{\partial}}_c \mathcal{F}^{(2)} | \le \eps^{\frac 5 8}\,, && |{\ensuremath{\partial}}_c \mathcal{F}^{(3)}| \le \eps^{\frac 5 8}\,, && |{\ensuremath{\partial}}_c \mathcal{F}^{(5)}| \le \eps^{\frac 3 8}\,. 
\end{aligned}$$ We now estimate each of the terms in the forcing above in : $$\begin{aligned} {\ensuremath{\nonumber}}|{\ensuremath{\partial}}_c \mathcal{F}^{(2)}| &\lesssim |\dot{\tau}_c| |q^{(2)}| + |\beta_\tau -1| |q_c^{(2)}| + |\mu_c| |q^{(3)}| + |\mu| |q^{(3)}_c| + |{\ensuremath{\partial}}_c G_W^{(1)}(0, s)| |q^{(2)}| \\ {\ensuremath{\nonumber}}& \qquad \qquad + |G_W^{(1)}(0, s)| |q^{(2)}_c| + |{\ensuremath{\partial}}_c F_W^{(2)}(0, s)| + |{\ensuremath{\partial}}_c G_W^{(2)}(0, s)| \\ \label{wolf:alice} &\lesssim_M \eps^{\frac 1 2} e^{-\frac 3 4 s} + \eps^{\frac 1 6} e^{- \frac 3 4 s} + \eps^{\frac 1 2} e^{- \frac 5 4s} + \eps^{\frac{11}{12}}+ \eps^{\frac 1 2} e^{- s} + \eps^{\frac 3 4} e^{- \frac s 4} + \eps^{\frac 3 4 } e^{- \frac s 4} + \eps^{\frac 1 2} e^{- \frac s 4}\les_M \eps^{\frac34}\,,\end{aligned}$$ and similarly, we estimate $$\begin{aligned} {\ensuremath{\nonumber}}|{\ensuremath{\partial}}_c \mathcal{F}^{(3)}| &\lesssim |q^{(3)}| |\dot{\tau}_c| + |\beta_\tau - 1| |q^{(3)}_c| + \| {\ensuremath{\partial}}_c G_W^{(1)} \|_\infty |q^{(3)}| + \|G_W^{(1)}\| |q_c^{(3)}| + |\dot{\tau}_c| |q^{(2)}|^2 \\ {\ensuremath{\nonumber}}& \qquad + |q^{(2)}| |q^{(2)}_c| + \| {\ensuremath{\partial}}_c G_W^{(2)}\| |q^{(2)}| + \|G_W^{(2)}\| |q_c^{(2)}| + \|{\ensuremath{\partial}}_c G_W^{(3)} \| + \|{\ensuremath{\partial}}_c F_W^{(3)} \| \\ & \lesssim_M \eps^{\frac 1 2} e^{-s} + \eps^{\frac{11}{12}} + \eps^{\frac 1 2} e^{- \frac 5 4 s} + \eps^{\frac 3 4} e^{- \frac s 4} + \eps^{\frac 1 2} e^{-\frac 3 2 s} + \eps^{\frac 3 4} + \eps^{\frac 1 2} e^{-\frac 5 4 s} + \eps^{\frac 3 4} e^{-\frac 1 4 s} + \eps^{\frac 1 2} e^{- \frac s 4} + \eps^{\frac 3 4} e^{- \frac s 4}{\ensuremath{\nonumber}}\\ & \lesssim_M \eps^{\frac34}\,. \label{wolf:alice:2}\end{aligned}$$ In both estimates above we have invoked the bootstrap estimate on $\mu$, the estimate on $|1 - \beta_\tau|$, the bootstraps on the $\dot{\tau}_c, \mu_c$ terms, for the decay estimates on $q^{(2)}, q^{(3)}$, - for the estimates on $q^{(2)}_c, q^{(3)}_c$, and finally and for the transport and forcing terms, respectively. 
From and , we can take $\eps$ small relative to the implicit constant which depends on $M$ to conclude that $$\begin{aligned} |{\ensuremath{\partial}}_c \mathcal{F}^{(2)}| \le \eps^{\frac 5 8}, \qquad |{\ensuremath{\partial}}_c \mathcal{F}^{(3)}| \le \eps^{\frac 5 8}\,.\end{aligned}$$ Finally, estimating ${\ensuremath{\partial}}_c \mathcal{F}_5$ yields $$\begin{aligned} {\ensuremath{\nonumber}}|{\ensuremath{\partial}}_c \mathcal{F}_5| &\lesssim |\mu_c| \|W^{(6)} \|_\infty + |\mu| \| W_c^{(6)} \|_\infty + |\dot{\tau}_c| |q^{(5)}| + |1 - \beta_\tau| |q^{(5)}_c| + |q^{(3)}| |q^{(3)}_c| \\ {\ensuremath{\nonumber}}&\qquad + \sum_{j = 1}^4 (\| {\ensuremath{\partial}}_c G_W^{(j)} \|_\infty |q^{(6-j)}| + \| G_W^{(j)} \|_\infty \| W_c^{(6-j)} \|_\infty ) + \| {\ensuremath{\partial}}_c G_W^{(5)} \|_\infty + \| {\ensuremath{\partial}}_c F_W^{(5)} \|_\infty \\ {\ensuremath{\nonumber}}&\lesssim_M \eps^{\frac 1 2} e^{- \frac s 4} + \eps^{\frac{11}{12}} + \eps^{\frac 1 2} + \eps^{\frac 16} e^{- \frac 3 4 s} (1 + \eps^{\frac 3 8} e^{\frac s 8} ) + \eps^{\frac 3 4} e^{- \frac s 4} + \eps^{\frac 1 2} e^{- \frac s 4} + \eps^{\frac 3 4} e^{- \frac s 4}\\ {\ensuremath{\nonumber}}& \qquad + \eps^{\frac 1 2} e^{- \frac s 4} + \eps^{\frac 3 4} e^{- \frac s 4}\\ &\les_M \eps^{\frac12}\,, {\ensuremath{\nonumber}}\end{aligned}$$ from which we can conclude $|{\ensuremath{\partial}}_c \mathcal{F}^{(5)}| \le \eps^{\frac 3 8}$, establishing the final estimate of . We invoke the same set of bootstraps as in the estimate of ${\ensuremath{\partial}}_c \mathcal{F}^{(2)}, {\ensuremath{\partial}}_c \mathcal{F}^{(3)}$ above, and in addition we invoke on the estimate of $q^{(5)}_c$ and on the $W^{(n)}_c$ quantities. The following estimates are valid $$\begin{aligned} \label{joes:oes:1} |q^{(2)}_\alpha - \eps^{\frac 3 4} e^{\frac 3 4 s} |& \le \eps^{\frac 5 4} e^{\frac 3 4s}\,, & |q^{(2)}_\beta| &\le \frac{1}{2} \eps^{\frac 5 4} e^{\frac 3 4 s} \,,\\ \label{joes:oes:2} |q^{(3)}_\alpha| &\le \frac{1}{2} \eps e^{\frac 1 2 s} \,,& |q^{(3)}_\beta - \eps^{\frac 1 2} e^{\frac s 2}| &\le \eps e^{\frac 12 s} \,, \\ \label{joes:oes:3} |{\widetilde}{q}_c^{(5)}|& \le \frac 12 \eps^{\frac 3 8} e^{\frac 1 8 s}\,. \end{aligned}$$ In particular, this verifies the bootstrap estimates - , and . For - , this follows immediately upon combining estimates with the expressions - . For the estimate on ${\widetilde}{q}_c^{(5)}$, we need to use that $$\begin{aligned} {\widetilde}{q}_\alpha^{(5)}(s_0) &= {\ensuremath{\partial}}_x^5|_{x= 0} \Big( x^2 \chi(x) \Big) = 0\,, \\ {\widetilde}{q}_\beta^{(5)}(s_0) &= {\ensuremath{\partial}}_x^5|_{x = 0} \Big( x^3 \chi(x) \Big) = 0\,,\end{aligned}$$ according to . Estimates for $W$ ================= In this section we will verify various pointwise bootstrap estimates on $W$, solving , and derivatives thereof. The main objective is to verify the bootstrap assumptions - , , - , - , as well as . The following lemma verifies the bootstrap . The following estimate is valid on $W^{(1)}$ $$\begin{aligned} |W^{(1)}| \le 1 + \frac \ell 2 M^{40} e^{-s}\,, \end{aligned}$$ which in particular verifies . We subdivide into three regions $|x| \le \ell$, $\ell\le |x| \le \eps^{-\frac14}$ and ${\left|x\right|}\geq\eps^{-\frac14}$. 
In the middle region $\ell \le |x| \le \eps^{- \frac 1 4}$, we have $$\begin{aligned} |W^{(1)}(x, s)| \le |{\overline}{W}^{(1)}(x)| + |{\widetilde}{W}^{(1)}(x, s)| \le 1 - \frac{\ell^7}{50} + |{\widetilde}{W}^{(1)}(x, s)| \le 1 - \frac{\ell^7}{50} + \eps^{\frac 1 5} < 1,\end{aligned}$$ where we have invoked to bound $|{\overline}{W}^{(1)}|$ above in this region, and the bootstrap which is also valid in this region. In the far-field region, $|x| \ge \ell$, we use $$\begin{aligned} |W^{(1)}(x)| \le M \eta_{- \frac 1 5}(x) \lesssim_M (\eps^{- \frac 1 4})^{\frac 4 5}\,. \end{aligned}$$ In the region $|x| \le \ell$, we obtain by a Taylor expansion of $W^{(1)}$ for some $|x_\ast| \le \ell$. $$\begin{aligned} {\ensuremath{\nonumber}}W^{(1)}(x, s) =& -1 + W^{(2)}(0, s) x + W^{(3)}(0, s) \frac{x^2}{2} + W^{(5)}(x_\ast, s) \frac{x^4}{24} \\ {\ensuremath{\nonumber}}= & -1 + W^{(2)}(0, s) x + W^{(3)}(0, s) \frac{x^2}{2} + {\overline}{W}^{(5)}(x_\ast) \frac{x^4}{24} + {\widetilde}{W}^{(5)}(x_\ast, s) \frac{x^4}{24} \\ {\ensuremath{\nonumber}}\ge & (-1 + {\overline}{W}^{(5)}(x_\ast) \frac{x^4}{24} - | {\widetilde}{W}^{(5)}(x_\ast, s) \frac{x^4}{24}|) + W^{(2)}(0, s) x + W^{(3)}(0, s) \frac{x^2}{2} \\ \ge & -1 + \ell M^{40} e^{-s} + \ell^2 \frac{M^{40}}{2} e^{-s}\,.{\ensuremath{\nonumber}}\end{aligned}$$ Above, we have used property to assert that ${\overline}{W}^{(5)}(x_\ast) > \frac 1 2$ via a further Taylor expansion: $$\begin{aligned} {\overline}{W}^{(5)}(x_\ast) > {\overline}{W}^{(5)}(0) - |x_\ast| \| {\overline}{W}^{(6)} \|_\infty > {\overline}{W}^{(5)}(0) - C \ell > \frac 1 2\,. \end{aligned}$$ in which case we use to bound $$\begin{aligned} \frac{x^4}{24} \Big( {\overline}{W}^{(5)}(x_\ast) - |{\widetilde}{W}^{(5)}(x_\ast, s)| \Big) \ge \frac 1 2 - \eps \ge \frac 1 4\,.\end{aligned}$$ We now collect various estimates on damping terms. To do so, we first make the following definitions. $$\begin{aligned} \label{country:coffee:1} D_n &:= \frac 1 4 (- 1 + 5n) + \beta_{\tau}(n+1_{n> 1}) W^{(1)}\,,\\ \label{country:coffee:2} {\widetilde}{D}_n&:= \frac 1 4 (-1 + 5n) + \beta_{\tau}\left({\overline}W^{(1)}+ nW^{(1)} \right) \,,\\ \label{country:coffee:3} D_{n}^{c} &:= \frac{5n-1}{4} +(n+1) \beta_\tau W^{(1)}\,, \\ \label{bravo:16} D_{n,r} &:= D_n - \eta_{- \frac r 4} \mathcal{V}_W {\ensuremath{\partial}}_x \eta_{\frac r 4} = \frac 1 4 (- 1 + 5n) + \beta_{\tau}(n+1_{n> 1}) W^{(1)}- \eta_{- \frac r 4} \mathcal{V}_W {\ensuremath{\partial}}_x \eta_{\frac r 4}\,, \\ \label{country:coffee:4} {\widetilde}{D}_{n,r} &:= {\widetilde}{D}_{n} - \eta_{- \frac r 4} \mathcal{V}_W {\ensuremath{\partial}}_x \eta_{\frac r 4} = \frac 1 4 (-1 + 5n) + \beta_{\tau}\left({\overline}W^{(1)}+ nW^{(1)} \right) - \eta_{- \frac r 4} \mathcal{V}_W {\ensuremath{\partial}}_x \eta_{\frac r 4} \,, \\ \label{country:coffee:5} D_{n, r}^{c} &:= D_{n}^{c} - \eta_{- \frac r 4} \mathcal{V}_W {\ensuremath{\partial}}_x \eta_{\frac r 4} = \frac{5n-1}{4} +(n+1) \beta_\tau W^{(1)} - \eta_{- \frac r 4} \mathcal{V}_W {\ensuremath{\partial}}_x \eta_{\frac r 4}\,. \end{aligned}$$ We now state various estimates on these damping terms. Let $|x_0| \ge \ell$. 
Then, for $D \in \{ {\widetilde}{D}_6, D_7^{c} \}$, ${\overline}{D} \in \{ {\widetilde}{D}_{1, \frac 4 5}, {\widetilde}{D}_{0,- \frac 1 5} \}$, and for $n \ge 2$, $j \ge 1$, the following estimates are valid $$\begin{aligned} \label{headphones:1} D \ge & \frac 1 8\,, \\ \label{street:beat:2} -\int_{s_0}^s {\overline}{D} \circ \Phi^{x_0}_W \le & \frac{1}{50} \log M\,, \\ \label{street:beat:3} - \int_{s_0}^s D_{n, \frac 4 5} \circ \Phi^{x_0}_W \le & - \frac 1 9 (s - s_0) + \frac{1}{50} \log M\,, \\ \label{thor:4} - \int_{s_0}^s W^{(1)} \circ \Phi^{x_0}_W \le &\frac{1}{50} \log M\,, \\ \label{thor:5} - \int_{s_0}^s D^{c}_{j, \frac{1}{5}} \circ \Phi^{x_0}_W \le & \frac{1}{50} \log M\,.\end{aligned}$$ First, for , $$\begin{aligned} {\widetilde}{D}_{6} =\frac 1 4 (-1 + 30) + \beta_{\tau}\left({\overline}W^{(1)}+ 6W^{(1)} \right) \geq \frac{1}{4}- 6{\left|1-\beta_\tau\right|}\geq \frac{1}{8}\,.\label{eq:damping:W6} \end{aligned}$$ where we have used that ${\overline}W^{(1)}\geq -1$, and . An analogous estimate applies for the $D_7^{c}$ term. We turn now to . By a simple calculation, we have $$\begin{aligned} &{\widetilde}{D}_{0, -\frac 1 5}= \beta_{\tau} {\overline}W^{(1)}+ \frac{1}{4} \eta_{-1}+\frac{x^3}{5}\eta_{-1}g_W\,,\\ &{\widetilde}{D}_{1, \frac 4 5}= \beta_{\tau} ( {\overline}W^{(1)}+W^{(1)})- \eta_{-1}-\frac{4x^3}{5}\eta_{-1}g_W\,.\end{aligned}$$ Observe, that for either the case $D_q={\widetilde}{D}_{0, -\frac 1 5}, {\widetilde}{D}_{1, \frac 4 5}$, we have from , , , $$\begin{aligned} {\left|D_q\right|}&\leq 3 \ell \log M \eta_{-\frac15}+\eta_{-1}(1+{\left|x\right|}({\left|W\right|}+{\left|G_W\right|}))\\ &\leq 4 \ell \log M \eta_{-\frac15}+{\left|x\right|}\eta_{-1}( \frac{1}{1000} \log M \eta_{\frac1 {20}}+ \eta_{\frac14})\\ &\leq 6\ell \log M\,.\end{aligned}$$ Thus, using in addition , we have $$\begin{aligned} {\ensuremath{\nonumber}}-\int_{s_0}^s D_{q}\circ\Phi_W^{x_0}(s') \,ds' \leq & 6 \ell \log M \int_{s_0}^s \left(\eta_{-\frac15}(\ell \eps^{\frac15}e^{\frac15 s})+e^{- s}\right) ds' \\ \label{steve:aoki} \leq & 6 \ell \log M (20 \log \ell^{-1}) \leq \frac{1}{50} \log M\,.\end{aligned}$$ The same calculation establishes estimate , , , with minor modifications. Transport Estimates for $W$ --------------------------- We now prove a uniform estimate on ${\widetilde}{ W}^{(6)}$ in the region ${\left|x\right|}\leq \ell$. We will prove the estimates along trajectories originating at ${\left|x_0\right|}\leq \ell$. Note that no trajectory originating outside the ball of radius $\ell$ may enter the ball of radius $\ell$. This is a consequence of . The following establishes the bootstrap bounds - . The following localized estimates hold in the region ${\left|x\right|}\leq \ell $ $$\begin{aligned} \label{iniesta:1} & |{\widetilde}{W}^{(n)}| \le \frac 1 2 ({\left|x\right|}^{6-k}\eps^{\frac{1}{5}}+\eps^{\frac12})\leq {\left|\ell\right|}^{6-n}\eps^{\frac{1}{5}},\quad \mbox{for } n=0,\dots, 5\,, \\ \label{messi:1} & |{\widetilde}{W}^{(6)}| \le \frac 1 2 \eps^{\frac{1}{5}}\,,\\ \label{xavi:1} & |{\widetilde}{W}^{(7)}| \le \frac M 2 \eps^{\frac 1 5}\,, \\ \label{rooney:1} & |{\widetilde}{W}^{(8)}| \le \frac{M^3}{2} \eps^{\frac 1 5} \,. 
\end{aligned}$$ Composing with the flow we have $$\frac{d}{ds}\left({\widetilde}W^{(6)}\circ \Phi_W^{x_0}\right) +\left({\widetilde}{D}_{6}\circ \Phi_W^{x_0}\right)\left({\widetilde}W^{(6)}\circ \Phi_W^{x_0}\right)={\widetilde}{F}_{W,n}\circ \Phi_W^{x_0}\,.$$ Hence, applying Grönwall, and the lower bound , we obtain $${\left|{\widetilde}W^{(6)}\circ \Phi_W^{x_0}\right|}\les {\left|{\widetilde}W^{(6)}(x_0,-\log\eps)\right|}+\ell\eps^{\frac 1 5}\les \ell\eps^{\frac 1 5}\,.$$ The same argument applies for and using the latter two estimates in . From the constraints and the estimate , we have $$\begin{aligned} {\widetilde}W(x) =\frac{{\widetilde}W^{(2)}(0)}{2!}x^2 + \frac{{\widetilde}W^{(3)}(0)}{3!}x^3 + \frac{{\widetilde}W^{(5)}(0)}{5!}x^5 + {\mathcal O}(\eps^{\frac15}|x|^6) \, .\end{aligned}$$ Then applying and , we obtain . For $\ell\leq {\left|x\right|}\leq \eps^{-\frac14}$ we have $$\begin{aligned} \label{rover:1} |{\widetilde}{W}| &\le \frac 1 2 \eps^{\frac{3}{20}}\eta_{\frac{1}{20}} \,, \\ \label{3eb:4} |{\widetilde}{W}^{(1)}| &\le \frac 1 2 \eps^{\frac{1}{20}}\eta_{-\frac15}\,, \end{aligned}$$ which thus verifies the bootstraps - . We write $$\begin{aligned} \label{stuck:1} ({\ensuremath{\partial}}_s + {\widetilde}{D}_{0, - \frac 1 5}) (\eta_{- \frac{1}{20}} {\widetilde}{W}) + \mathcal{V}_W {\ensuremath{\partial}}_x (\eta_{- \frac{1}{20}} {\widetilde}{W}) &= \eta_{- \frac{1}{20}} {\widetilde}{F}_{W,0}\,, \\ \label{stuck:2} ({\ensuremath{\partial}}_s + {\widetilde}{D}_{1, \frac 4 5}) (\eta_{\frac{1}{5}} {\widetilde}{W}^{(1)}) + \mathcal{V}_W {\ensuremath{\partial}}_x (\eta_{\frac 1 5} {\widetilde}{W}^{(1)}) &= \eta_{\frac{1}{5}} {\widetilde}{F}_{W,1}\,.\end{aligned}$$ We now fix any $|x_0| \ge \ell$. We will consider trajectories starting with $(s_\ast, x_0 = \pm \ell)$ or $(s_0, x_0)$ for $|x_0| > \ell$. Writing the solution to we obtain $$\begin{aligned} \eta_{- \frac{1}{20}} {\widetilde}{W} \circ \Phi^{x_0}_W = \eta_{- \frac{1}{20}} {\widetilde}{W}(s_\ast, \Phi^{x_0}_W(s_\ast)) e^{- \int_{s_\ast}^s {\widetilde}{D}_{0, - \frac 1 5} \circ \Phi^{x_0}_W} + \int_{s_\ast}^s e^{- \int_{s'}^s {\widetilde}{D}_{0, - \frac 1 5} \circ \Phi^{x_0}_W} \eta_{- \frac{1}{20}} {\widetilde}{F}_{W} \circ \Phi_W^{x_0} {\,\mathrm{d}}s'\,.\end{aligned}$$ We now estimate both sides to produce $$\begin{aligned} |\eta_{- \frac{1}{20}} {\widetilde}{W} \circ \Phi^{x_0}_W | \le & ( \eps^{\frac 34} + 2 \ell^6 \eps^{\frac 1 5} ) M^{\frac{1}{50}} + \int_{s_\ast}^s M^{\frac{1}{50}} e^{- \frac 3 4 s'} {\,\mathrm{d}}s' \le \frac 1 2 \eps^{\frac{3}{20}}\,. \end{aligned}$$ Above, we have invoked estimate on the forcing term and for the damping term. We have moreover estimated the initial data by using to write $$\begin{aligned} \label{truck} {\widetilde}{W}(x, s_0) = & {\widehat}{W}_0 + \alpha x^2 \chi + \beta x^3 \chi - {\overline}{W} (1 - \chi(\eps^{\frac 1 4}x) )\,.\end{aligned}$$ When $|x| \le \eps^{- \frac 1 4}$, the last term above is zero, and so we estimate, for $|x| \le \eps^{- \frac 1 4}$, $$\begin{aligned} |{\widetilde}{W}(x, s_0) \eta_{- \frac{1}{20}}| \le \|{\widehat}{W}_0 \eta_{- \frac{1}{20}} \|_\infty + |\alpha| + |\beta| \le \eps^{\frac 3 4}\,,\end{aligned}$$ by the estimates and . 
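To make the closing inequality for the weighted quantity $\eta_{- \frac{1}{20}} {\widetilde}{W}$ explicit, we collect the data and forcing contributions displayed above. Recalling that $s_0 = -\log \eps$, the forcing integral contributes at most $\int_{s_\ast}^s M^{\frac{1}{50}} e^{-\frac 3 4 s'} {\,\mathrm{d}}s' \le \frac 4 3 M^{\frac{1}{50}} \eps^{\frac 3 4}$, and therefore, as a sketch of the bookkeeping, $$\begin{aligned} ( \eps^{\frac 34} + 2 \ell^6 \eps^{\frac 1 5} ) M^{\frac{1}{50}} + \tfrac 4 3 M^{\frac{1}{50}} \eps^{\frac 3 4} \le M^{\frac{1}{50}} \big( \tfrac 7 3 \eps^{\frac 3 5} + 2 \ell^6 \eps^{\frac{1}{20}} \big) \eps^{\frac{3}{20}} \le \tfrac 1 2 \eps^{\frac{3}{20}}\,, \end{aligned}$$ using $\eps^{\frac 3 4} = \eps^{\frac 3 5} \eps^{\frac 3 {20}}$, $\eps^{\frac 1 5} = \eps^{\frac 1 {20}} \eps^{\frac 3 {20}}$, and taking $\eps$ small in terms of $M$ and $\ell$.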
Writing the solution to yields $$\begin{aligned} \eta_{\frac 1 5} {\widetilde}{W}^{(1)} \circ \Phi^{x_0}_W = \eta_{\frac 1 5} {\widetilde}{W}^{(1)}(s_\ast, x_0) e^{- \int_{s_\ast}^s {\widetilde}{D}_{1, \frac 4 5} \circ \Phi^{x_0}_W} + \int_{s_\ast}^s e^{- \int_{s'}^s {\widetilde}{D}_{1, \frac 4 5} \circ \Phi^{x_0}_W} \eta_{\frac 1 5} {\widetilde}{F}_{W,1} \circ \Phi_W^{x_0} {\,\mathrm{d}}s' \,.\end{aligned}$$ We now estimate the right-hand side via $$\begin{aligned} {\ensuremath{\nonumber}}|\eta_{\frac 1 5} {\widetilde}{W}^{(1)} \circ \Phi^{x_0}_W| \le &( \eps^{\frac 3 4} + 2 \ell^5 \eps^{\frac 1 5} ) M^{\frac{1}{50}} + \eps^{\frac{1}{10}} M^{\frac{1}{50}} \int_{s_\ast}^s |\eta_{- \frac{1}{20}}(x_0 e^{\frac 1 5 (s' - s_0)}) {\,\mathrm{d}}s' \le \frac 1 2 \eps^{\frac{1}{20}}\,,\end{aligned}$$ where above we have invoked estimate for the damping term, and for the forcing term. For the initial data, we differentiate to obtain $$\begin{aligned} {\widetilde}{W}^{(1)}(x, s_0) = {\widehat}{W}_0' + {\ensuremath{\partial}}_x \Big( \alpha x^2 \chi + \beta x^3 \chi \Big) - {\ensuremath{\partial}}_x \Big( {\overline}{W} (1 - \chi(\eps^{\frac 1 4}x)) \Big),\end{aligned}$$ which upon noting that the latter term is identically zero on $|x| \le \eps^{- \frac 1 4}$, we obtain $$\begin{aligned} |{\widetilde}{W}^{(1)}(x, s_0) \eta_{\frac 1 5}| \le \| \eta_{\frac 1 5} {\widehat}{W}_0' \|_\infty + |\alpha| + |\beta| \le \eps^{\frac 3 4}\,, \end{aligned}$$ upon invoking estimates and . For ${\left|x\right|}\geq \ell$ we have $$\begin{aligned} \label{3eb:1} |W| &\leq \frac{\ell}{2} \log M \eta_{\frac{1}{20}}\,, \\ \label{3eb:2} |W^{(1)}| &\leq \frac \ell 2 \log M \eta_{- \frac 1 5}\,, \\ \label{3eb:3} {\left|W^{(n)}\right|}&\leq \frac 1 2 M^{k^2} \eta_{- \frac 1 5}\quad \text{ for } n =2,\dots, 8\,, \end{aligned}$$ which verifies the bootstraps - . We write, for $n \ge 1$, $$\begin{aligned} \label{stuck:3} &({\ensuremath{\partial}}_s + D_{n, \frac 4 5}) \eta_{\frac 1 5} W^{(n)} + \mathcal{V}_W {\ensuremath{\partial}}_x (\eta_{\frac 1 5} W^{(n)}) = \eta_{\frac{1}{5}} F_{W,n}\,, \\ \label{stuck:4} &({\ensuremath{\partial}}_s + D_{0, -\frac 1 5}) (\eta_{- \frac{1}{20}} W) + \mathcal{V}_W {\ensuremath{\partial}}_x (\eta_{- \frac{1}{20}}W) = \eta_{- \frac{1}{20}} F_{W,0}\,. \end{aligned}$$ We will treat the cases $n = 0$, $n = 1$, and $n \ge 2$ cases separately. Writing Grönwall for gives $$\begin{aligned} \label{nat:1} \eta_{\frac 1 5}W^{(n)} \circ \Phi_W^{x_0} = \eta_{\frac 1 5} W^{(n)}(s_\ast, x_0) e^{- \int_{s_\ast}^s D_{n, \frac 4 5} \circ \Phi_W^{x_0}} + \int_{s_\ast}^s e^{- \int_{s'}^s D_{n, \frac 4 5} \circ \Phi_W^{x_0} } \eta_{\frac 1 5} F_{W,n} \circ \Phi_W^{x_0} {\,\mathrm{d}}s'\,. \end{aligned}$$ Estimating both sides for $n \ge 2$ gives $$\begin{aligned} |\eta_{\frac 1 5}W^{(n)} \circ \Phi_W^{x_0}| \le ( M + 10 \eps^{\frac 1 5} ) e^{- \frac{1}{9} (s - s_\ast)} M^{\frac{1}{50}} +M^{\frac{1}{50}} \int_{s_\ast}^s e^{- \frac{1}{9} (s - s')} M^{-\frac{9}{10}} M^{n^2} {\,\mathrm{d}}s',\end{aligned}$$ where we have appealed to estimate for the damping term and estimate for the forcing. For the $n = 0, 1$ cases, it suffices to prove estimates and in the region $|x| \ge \eps^{- \frac 1 4}$ due to - . In this case, we select $|x_0| \ge \eps^{- \frac 1 4}$ and $s_\ast \ge s_0$ such that $(s_\ast, x_0)$ is the origin of the trajectories consider. More specifically, we take either $|x_0| > \eps^{- \frac 1 4}$ and $s_\ast = s_0$ or $|x_0| = \eps^{- \frac 1 4}$ and any $s_\ast \ge s_0$. 
In this case, continues to hold for $n = 1$, and we estimate via $$\begin{aligned} {\ensuremath{\nonumber}}|\eta_{\frac 1 5} W^{(1)} \circ \Phi_W^{x_0}| \le & |\eta_{\frac 1 5} W^{(1)}(x_0, s_\ast) | | e^{- \int_{s_\ast}^s D_{1, \frac 4 5} \circ \Phi_W^{x_0}} | + \int_{s_\ast}^s |e^{- \int_{s'}^s D_{1, \frac 4 5} \circ \Phi_W^{x_0}}| \| \eta_{\frac 1 5} F_{W,1} \|_\infty {\,\mathrm{d}}s' \\ \label{martin:1} \lesssim & \Big( \sup_{|x| \ge \eps^{- \frac 1 4}} |\eta_{\frac 1 5} W^{(1)}(x, s_0)| + | \eta_{\frac 1 5} W^{(1)}(\eps^{- \frac 1 4}, s_\ast)| \Big) + \int_{s_\ast}^s e^{- \frac 1 2 s'} {\,\mathrm{d}}s' \\ \label{martin:2} \lesssim & \Big(1 +|\eta_{\frac 1 5} {\overline}{W}^{(1)}(\eps^{- \frac 1 4})| + |\eta_{\frac 1 5} {\widetilde}{W}^{(1)}(\eps^{- \frac 1 4}, s_\ast)| \Big) + \int_{s_\ast}^s e^{- \frac 1 2 s'} {\,\mathrm{d}}s' \\ {\ensuremath{\nonumber}}\le & \frac \ell 2 \log M\,. \end{aligned}$$ To evaluate the size of the initial data, from to , we have used to compute $$\begin{aligned} |\eta_{\frac 1 5} W^{(1)}(x, s_0)| = \Big| \Big( {\overline}{W}^{(1)} \chi(\eps^{\frac 1 4}x) + {\overline}{W} \eps^{\frac 1 4} \chi'(\eps^{\frac 1 4}x) + {\widehat}{W}_0' + {\ensuremath{\partial}}_x \Big( \alpha x^2 \chi(x) + \beta x^3 \chi(x) \Big) \Big) \eta_{\frac 1 5} \Big| \lesssim 1\,. \end{aligned}$$ Above, we have invoked the choice to ensure that $\ell \log M$ can be selected larger than the implicit constants appearing in the above estimate. We have also invoked bootstrap to control the ${\widetilde}{W}^{(1)}$ term above. We have also invoked to control the forcing term, and used the fact that $$\begin{aligned} \exp \Big( - \int_{s_0}^s D_{1, \frac 4 5} \circ \Phi_W^{x_0} \Big) \le 10 \quad\text{ for } |x_0| \ge \eps^{- \frac 1 4}\,. \end{aligned}$$ An analogous series of estimates applies to . Transport estimates of $\nabla_c W$ ----------------------------------- We now verify the bootstrap estimates - . For $n=0,\dots, 6$ and ${\left|x\right|}\le \ell$ we have the following estimates $$\begin{aligned} \label{warrior:1:kim} &|W_c^{(n)}| \le M \ell^{\frac 3 4} \eps^{\frac 3 4} e^{\frac 3 4 s}\,, \\ \label{warrior:2:kim} &|W_c^{(7)}| \le \frac M 2 \eps^{\frac 3 4} e^{\frac 3 4 s} \,. \end{aligned}$$ The first inequality above follows for $n = 0$ upon Taylor expanding and noting that $W_c(0, s) = 0$ via $$\begin{aligned} | W_c | \le \ell \sup_{|x| \le \ell} |W_c^{(1)}| \le \ell M \ell^{\frac 1 2} e^{\frac 3 4(s- s_0)}\,. \end{aligned}$$ The exact same argument works for the $n = 1$ inequality. For the $n = 2$ inequality, we also Taylor expand, but must factor in the value at $x = 0$ via $$\begin{aligned} |W^{(2)}_c| \le |W^{(2)}_c(0, s)| + \ell \sup_{|x| \le \ell} |W^{(3)}| \le 4 e^{\frac 3 4(s - s_0)} + \ell M e^{\frac 3 4(s - s_0)}\,. \end{aligned}$$ Finally, for the $n = 7$ case, we directly apply Grönwall to integrate which gives $$\begin{aligned} W_c^{(7)}( \Phi_W(x, s), s) = W_c^{(7)}(x, s) e^{- \int_{s_0}^s D_7^{c} \circ \Phi_W} + \int_{s_0}^s e^{- \int_{s'}^s D_7^{c} \circ \Phi_W} F_{W,7}^{c} \circ \Phi_W {\,\mathrm{d}}s'\,. 
\end{aligned}$$ We note that implies that $$\begin{aligned} e^{- \int_{s_0}^s D_7^{c} \circ \Phi_W^{x_0}} \le e^{- \frac 1 8 (s - s_0)}\,.\end{aligned}$$ Thus, we have $$\begin{aligned} {\ensuremath{\nonumber}}|W^{(7)}_c \circ \Phi_W^{x_0}| \le & 2 W^{(7)}_c(x_0, s_0) e^{-\frac 1 8 (s - s_0)} + \int_{s_0}^s e^{-\frac 1 8(s - s')} \| F_{W,7}^{c} \circ \Phi_W \| {\,\mathrm{d}}s' \\ {\ensuremath{\nonumber}}\le & 2 e^{-\frac 1 8(s - s_0)} + \int_{s_0}^s e^{-\frac 1 8 (s - s')} M \ell^{\frac 1 5} e^{\frac 3 4 (s' - s_0)} {\,\mathrm{d}}s' \\ \le & 2 e^{-\frac 1 8 (s - s_0)} + 2M \ell^{\frac 1 5} e^{\frac 3 4 (s - s_0)}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked the enhanced localized estimate, . We now verify - . For $n=1,\dots,7$ and ${\left|x\right|}\le \ell$ we have the following estimates $$\begin{aligned} \label{prince:1} |W_c| &\le \frac{M^{4}}{2} \eps^{\frac 3 4} e^{\frac 3 4 s} \,, \\ \label{prince:2} |W^{(n)}_c \eta_{\frac{1}{20}} | &\le \frac{M^{(n+2)^2}}{2} \eps^{\frac 34} e^{\frac 3 4 s} \,. \end{aligned}$$ Consider equation for ${\ensuremath{\partial}}_c W$. First, define the rescaled quantity $Q := {\ensuremath{\partial}}_c W e^{- \frac 1 4 (s -s_0)}$, which satisfies $$\begin{aligned} ({\ensuremath{\partial}}_s + \beta_\tau W^{(1)}) Q + \mathcal{V}_W {\ensuremath{\partial}}_x Q = e^{- \frac 1 4 (s - s_0)} F^c_{W,0}\end{aligned}$$ By Grönwall, we have $$\begin{aligned} {\ensuremath{\nonumber}}|Q \circ \Phi_W^{x_0}| \le & |Q(x_0, s_\ast)| e^{- \int_{s_\ast}^s \beta_\tau W^{(1)} \circ \Phi_W^{x_0}} + \int_{s_\ast}^s e^{- \int_{s'}^s \beta_\tau W^{(1)} \circ \Phi_W^{x_0} } |e^{- \frac 1 4 (s' - s_0)} F_{W,0}^{c} \circ \Phi_W^{x_0}| {\,\mathrm{d}}s' \\ \lesssim & (\| W_c(\cdot, s_0) \|_\infty + \ell^{\frac 1 2}M e^{\frac 1 2(s_\ast - s_0)} ) M^{\frac{1}{50}} + M^{\frac{1}{50}} \int_{s_\ast}^s e^{- \frac 1 4 (s' - s_0)} \eps^{\frac 18} {\,\mathrm{d}}s' \,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked for the estimate on the damping term, and estimate for the forcing term. Multiplying through by $e^{\frac 1 4 (s - s_0)}$ and using that $s_\ast \le s$ generates the desired bound. For , we again use Grönwall to estimate $$\begin{aligned} {\ensuremath{\nonumber}}|\eta_{\frac{1}{20}} W_c^{(n)} \circ \Phi_W^{x_0}| \le& |W_c^{(n)}(x_0, s_\ast)| e^{- \int_{s_\ast}^s D^{c}_{n, \frac 1 5} \circ \Phi_W^{x_0}} + \int_{s_\ast}^s e^{- \int_{s'}^s D_{n, \frac 1 5}^{c} \circ \Phi_W^{x_0}} |\eta_{\frac{1}{20}} F_{W,n}^{c} \circ \Phi_W^{x_0}| {\,\mathrm{d}}s' \\ {\ensuremath{\nonumber}}\lesssim & ( \| W_c^{(n)}( \cdot, s_0) \|_\infty + M e^{\frac 3 4 (s_\ast - s_0)} ) M^{\frac{1}{50}} + M^{\frac{1}{50}} \int_{s_0}^s M^{-1} M^{(n+2)^2} e^{\frac 3 4 (s'-s_0)} {\,\mathrm{d}}s' \\ \lesssim & M e^{\frac 3 4 (s_\ast - s_0)} M^{\frac{1}{50}} + M^{\frac{1}{50}} M^{(n+2)^2} M^{-1} e^{\frac 3 4 (s - s_0)}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where we have invoked the estimate on the damping term, and estimate to estimate the forcing term. This concludes the proof of the lemma. Transport estimates for $\nabla_c^2 W$ -------------------------------------- The following verifies the bootstraps . Let $0 \le n \le 6$. $$\begin{aligned} \| {\ensuremath{\partial}}_{c_1 c_2} W^{(n)} \|_\infty \le \frac{ M^{(n+5)^2}}{2} \eps^{\frac 3 4} e^{\frac 3 4 s}\,. 
\end{aligned}$$ Using equation , we write via Grönwall upon noting that $W_{c_1 c_2}(s_0, x) = 0$, $$\begin{aligned} {\ensuremath{\nonumber}}|W^{(n)}_{c_1 c_2} \circ \Phi_{W}^{x_0}| \le &\int_{s_0}^s e^{- \int_{s'}^s D^{c}_n \circ \Phi_W^{x_0}} | F_{W,n}^{c_1, c_2} \circ \Phi_W^{x_0}| {\,\mathrm{d}}s' \\ \lesssim & \int_{s_0}^s e^{\frac{11}{8} (s - s')} M^{(n+5)^2 - 1} e^{\frac 3 2 (s' - s_0)} {\,\mathrm{d}}s' \lesssim M^{(n+5)^2 - 1} e^{\frac 3 2 (s - s_0)}\,,{\ensuremath{\nonumber}}\end{aligned}$$ where above we have used the definition to produce the trivial bound $$\begin{aligned} D_n^{c} \ge - \frac{11}{8}\,,\end{aligned}$$ and estimate - for the forcing. Proof of main theorem ===================== We are now ready to establish all of the assertions in Theorem \[thm:general\]. While the bootstrap estimates put forth in Section \[section:Bootstraps\] have all been verified, the first task is to now establish the inductive proposition, Proposition \[induct:prop\]. Newton iteration {#subsection:proof} ---------------- We now prove the main theorem by designing a Newton scheme on appropriately defined maps $\mathcal{T}_N$. First, we will define the map $\mathcal{T}_N: \mathcal{B}_N(\alpha_N, \beta_N) \subset \mathbb{R}^2 \rightarrow \mathbb{R}^2$ by $$\begin{aligned} \mathcal{T}_N(\alpha, \beta) := (W^{(2)}_{\alpha, \beta}(0, s_{N+1}), W_{\alpha, \beta}^{(3)}(0, s_{N+1}))\,. \end{aligned}$$ Define now the *error* quantities via $$\begin{aligned} &E_{N}^{(2)} := W_{\alpha_N, \beta_N}^{(2)}(0, s_{N+1}) = \mathcal{T}_N^{(1)}(\alpha_N, \beta_N)\,, \\ &E_N^{(3)} := W_{\alpha_N, \beta_N}^{(3)}(0, s_{N+1}) = \mathcal{T}_N^{(2)}(\alpha_N, \beta_N)\,.\end{aligned}$$ An immediate consequence of is the estimate $$\begin{aligned} |E_N^{(2)}| + |E_N^{(3)}| \le M^{25} e^{- s_{N}}.\end{aligned}$$ We now compute the matrix $$\begin{aligned} \nabla_{\alpha, \beta} \mathcal{T}_N = \begin{pmatrix} {\ensuremath{\partial}}_\alpha W^{(2)}_{\alpha, \beta}(0, s_{N+1}) & {\ensuremath{\partial}}_\beta W^{(2)}_{\alpha, \beta}(0, s_{N+1}) \\ {\ensuremath{\partial}}_\alpha W^{(3)}_{\alpha, \beta}(0, s_{N+1}) & {\ensuremath{\partial}}_\beta W^{(3)}_{\alpha, \beta}(0, s_{N+1}) \end{pmatrix},\end{aligned}$$ which, when we evaluate at the point $(\alpha_N, \beta_N)$ produces $$\begin{aligned} \nabla_{\alpha, \beta}|_{\alpha_N, \beta_N} \mathcal{T}_N = \begin{pmatrix} {\ensuremath{\partial}}_\alpha W^{(2)}_{\alpha_N, \beta_N}(0, s_{N+1}) & {\ensuremath{\partial}}_\beta W^{(2)}_{\alpha_N, \beta_N}(0, s_{N+1}) \\ {\ensuremath{\partial}}_\alpha W^{(3)}_{\alpha_N, \beta_N}(0, s_{N+1}) & {\ensuremath{\partial}}_\beta W^{(3)}_{\alpha_N, \beta_N}(0, s_{N+1}) \end{pmatrix}\,.\end{aligned}$$ The bootstrap assumptions - , coupled with the estimates on the second derivatives, enable us to apply the Implicit Function Theorem on $\mathcal{T}_N$ in a neighborhood $\mathcal{B}_N(\alpha_N, \beta_N)$ of $(\alpha_N, \beta_N)$, defined in , to conclude that $$\begin{aligned} \label{crickets:2} |\alpha_{N+1} - \alpha_{N}| &\le M^{25} e^{- \frac 3 4 (s_N - s_0)} e^{-s_N} + \eps^{\frac 1 4} e^{-s_N} e^{- \frac 1 2 (s_N - s_0)}\,, \\ \label{crickets:3} |\beta_{N+1} - \beta_{N}| &\le 2M^{25} e^{- \frac 1 2(s_N - s_0)} e^{-s_N}\,, \end{aligned}$$ which in particular verifies the bootstraps . More specifically, we have used that in the neighborhood $\mathcal{B}_N(\alpha_N, \beta_N)$, we have uniform bounds on the $(\alpha,\beta)$ Hessian of $ \mathcal{T}_N$. 
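To orient the reader, we note, purely as a heuristic for the size of the increments and not as part of the argument, that $(\alpha_{N+1}, \beta_{N+1})$ is in effect chosen so that $\mathcal{T}_N(\alpha_{N+1}, \beta_{N+1}) = 0$ (this is the inductive hypothesis at level $N+1$), which to leading order is the Newton-type update $$\begin{aligned} \begin{pmatrix} \alpha_{N+1} - \alpha_N \\ \beta_{N+1} - \beta_N \end{pmatrix} \approx - \Big( \nabla_{\alpha, \beta}\big|_{\alpha_N, \beta_N} \mathcal{T}_N \Big)^{-1} \begin{pmatrix} E_N^{(2)} \\ E_N^{(3)} \end{pmatrix}\,. \end{aligned}$$ Since the gradient is dominated by its diagonal entries, of approximate size $e^{\frac 3 4 (s_{N+1} - s_0)}$ and $e^{\frac 1 2 (s_{N+1} - s_0)}$ respectively, the bound on $E_N^{(2)}, E_N^{(3)}$ above produces increments of the size recorded in - . We now verify the uniform Hessian bounds just mentioned.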
Estimating ${\ensuremath{\partial}}_{\alpha \alpha} \mathcal{T}_N$ yields $$\begin{aligned} {\ensuremath{\nonumber}}\sup_{\alpha, \beta \in \mathcal{B}_N} |{\ensuremath{\partial}}_{\alpha \alpha} \mathcal{T}_N | |\alpha - \alpha_N| \lesssim_M & e^{\frac 3 2 (s_{N+1} - s_0)} \Big( e^{- \frac 34 (s_N - s_0)} e^{-s_N} + \eps^{\frac 1 5} e^{-s_N} e^{- \frac 1 2(s_N - s_0)} \Big) \\ \lesssim_M & e^{- s_N}\Big( e^{\frac 3 4 (s_N - s_0)} + \eps^{\frac 1 5} e^{s_N - s_0)} \Big) \ll {\ensuremath{\partial}}_\alpha|_{\alpha_N, \beta_N} \mathcal{T}_N\,.{\ensuremath{\nonumber}}\end{aligned}$$ Similarly, for ${\ensuremath{\partial}}_{\alpha \beta} \mathcal{T}_N$ we have $$\begin{aligned} {\ensuremath{\nonumber}}\sup_{\alpha, \beta \in \mathcal{B}_N} |{\ensuremath{\partial}}_{\alpha \beta} \mathcal{T}_N| |\alpha - \alpha_N| \lesssim_M & e^{\frac 3 2 (s_{N+1} - s_0)} \Big( e^{- \frac 34 (s_N - s_0)} e^{-s_N} + \eps^{\frac 1 5} e^{-s_N} e^{- \frac 1 2(s_N - s_0)} \Big) \\ \lesssim_M & e^{- s_N}\Big( e^{\frac 3 4 (s_N - s_0)} + \eps^{\frac 1 5} e^{(s_N - s_0)} \Big) \ll {\ensuremath{\partial}}_\beta |_{\alpha_N, \beta_N} \mathcal{T}_N\,,{\ensuremath{\nonumber}}\end{aligned}$$ and $$\begin{aligned} {\ensuremath{\nonumber}}\sup_{\alpha, \beta \in \mathcal{B}_N} |{\ensuremath{\partial}}_{\alpha \beta} \mathcal{T}_N| |\beta - \beta_N| \lesssim_M e^{\frac 3 2 (s_{N+1} - s_0)} \Big( e^{-s_N} e^{- \frac 1 2(s_N - s_0)} \Big) \ll {\ensuremath{\partial}}_\alpha|_{\alpha_N, \beta_N} \mathcal{T}_N\,.\end{aligned}$$ Finally, estimating ${\ensuremath{\partial}}_{\beta \beta} \mathcal{T}_N$ yields $$\begin{aligned} {\ensuremath{\nonumber}}\sup_{\alpha, \beta \in \mathcal{B}_N} |{\ensuremath{\partial}}_{\beta \beta} \mathcal{T}_N| |\beta - \beta_N| \lesssim_M e^{\frac 3 2 (s_{N+1} - s_0)} \Big( e^{-s_N} e^{- \frac 1 2(s_N - s_0)} \Big) \ll {\ensuremath{\partial}}_\beta|_{\alpha_N, \beta_N} \mathcal{T}_N\,.\end{aligned}$$ We can now send $N \rightarrow \infty$ to obtain our limiting profiles. To make matters precise, we define the following norm, specific to a given $s_\ast \in [s_0, \infty)$. $$\begin{aligned} {\ensuremath{\nonumber}}\Big\| (W, Z, A) \Big\|_X := &\Big\| \| W \eta_{- \frac{1}{20}} \|_{L^\infty} \Big\|_{L^\infty(s_0, s_\ast)} + \sum_{j = 1}^6 \Big\| \| W^{(j)} \eta_{\frac 1 5} \|_{L^\infty} \Big\|_{L^\infty(s_0, s_\ast)} \\ {\ensuremath{\nonumber}}& + \Big\| e^{\frac 3 4 s} W^{(2)}(0, s) \Big\|_{L^\infty(s_0, s_\ast)} + \Big\| e^{ \frac 3 4 s} W^{(3)}(0, s) \Big\|_{L^\infty(s_0, s_\ast)} \\ {\ensuremath{\nonumber}}& + \eps^{- \frac 5 4} \Big\| \| Z \|_\infty \Big\|_{L^\infty(s_0, s_\ast)} + \eps^{- \frac 3 4} \Big\| \| A\|_\infty \Big\|_{L^\infty(s_0, s_\ast)} \\ \label{norm:X} & + \sum_{j = 1}^6 \Big\| e^{\frac 5 4 s} \| Z^{(j)} \|_{L^\infty} \Big\|_{L^\infty(s_0, s_\ast)} + \sum_{j = 1}^6 \Big\| e^{\frac 5 4 s} \| A^{(j)} \|_{L^\infty} \Big\|_{L^\infty(s_0, s_\ast)}\,,\end{aligned}$$ and the corresponding Banach space $$\begin{aligned} X := \text{Closure of } C^\infty_c([s_0, s_\ast], \mathbb{R} )^3 \text{ with respect to } \| \cdot \|_X\,. 
\end{aligned}$$ We also define the following norms in which we measure the modulation variables $$\begin{aligned} {\ensuremath{\nonumber}}\| (\mu, \tau, \kappa, \xi) \|_{Y} := & \eps^{- \frac 1 7} \| e^{\frac 3 4 s} \mu \|_{L^\infty(s_0, s_\ast)} + \eps^{- \frac 1 7} \| e^{\frac 3 4 s} \dot{\tau} \|_{L^\infty(s_0, s_\ast)} + \eps^{- \frac 1 8} \| \dot{\kappa} \|_{L^\infty(s_0, s_\ast)} \\ &+ \frac{1}{\kappa_0} \| \dot{\xi} \|_{L^\infty(s_0, s_\ast)}\,,{\ensuremath{\nonumber}}\end{aligned}$$ and the corresponding Banach space $$\begin{aligned} Y := \text{Closure of } C^\infty_c([s_0, s_\ast])^4 \text{ with respect to } \| \cdot \|_Y\,. \end{aligned}$$ \[corr:infty\] There exist values $(\alpha_\infty, \beta_\infty)$ so that the data $W_0$ given according to yields a global solution, $(W, Z, A) \in X$ and $(\mu, \tau, \kappa, \xi) \in Y$ on $- \log(\eps) \le s <\infty$ which satisfies $$\begin{aligned} \label{X:estimate} \| (W, Z, A) \|_X + \| (\mu, \tau, \kappa, \xi) \|_Y \lesssim_M 1\,,\end{aligned}$$ the constraints $$\begin{aligned} W(0, s) = 0\,, && W^{(1)}(0, s) = -1\,, && W^{(4)}(0, s) = 0\,, \end{aligned}$$ the following asymptotic behavior for the second and third derivatives: $$\begin{aligned} |W^{(2)}(0, s)| \lesssim e^{- \frac 3 4 s}\,, && |W^{(3)}(0, s)| \lesssim e^{-\frac 3 4 s}\,.\end{aligned}$$ Finally, for the fifth derivative $W^{(5)}(0, s)$, there exists a number $\nu$ such that $$\begin{aligned} \label{nu:def} W^{(5)}(0, s) \rightarrow \nu, \qquad |\nu - 120| \lesssim \eps^{\frac 78}\,. \end{aligned}$$ Fix any $s_0 \le s_\ast < \infty$ and consider the sequences $$\begin{aligned} \{ W_{\alpha_N, \beta_N}, Z_{\alpha_N, \beta_N}, A_{\alpha_N, \beta_N} \}_{N \ge \lfloor s_\ast \rfloor + 1} &=: \{W_N, Z_N, A_N\}_{N \ge \lfloor s_\ast \rfloor + 1}\,, \\ \{ \mu_{\alpha_N, \beta_N}, \dot{\tau}_{\alpha_N, \beta_N}, \dot{\kappa}_{\alpha_N, \beta_N}, \dot{\xi}_{\alpha_N, \beta_N} \}_{N \ge \lfloor s_\ast \rfloor + 1} &=: \{ \mu_N, \dot{\tau}_N, \dot{\kappa}_N, \dot{\xi}_N \}_{N \ge \lfloor s_\ast \rfloor + 1}\,. \end{aligned}$$ Our assertion will be that these sequences are Cauchy in the spaces $X$ and $Y$, respectively. Let now $s_0 \le s \le s_\ast$. Recall from the definition of $\mathcal{B}_N$ in , that $$\begin{aligned} |\alpha_{N+1} - \alpha_N| \lesssim_M e^{-s_N} e^{- \frac 1 2 (s_N - s_0)}, \qquad |\beta_{N+1} - \beta_N| \lesssim_M e^{-s_N} e^{- \frac 1 2(s_N - s_0)}\,. \end{aligned}$$ Considering the first term in the definition of , we now estimate $$\begin{aligned} {\ensuremath{\nonumber}}\| (W_{N+1} - W_N ) \eta_{- \frac{1}{20}} \|_{L^\infty} \lesssim_M & e^{-s_N} e^{- \frac 1 2 (s_N - s_0)} \sup_{(\alpha, \beta) \in \mathcal{B}_N} \| {\ensuremath{\partial}}_c W \eta_{- \frac{1}{20}} \|_{L^\infty} \\ \label{above:above} \lesssim_M & e^{- s_N} e^{- \frac 1 2 (s_N - s_0)} e^{\frac 3 4 (s -s_0)}\,, \end{aligned}$$ where we have invoked the estimate . Second, for the derivative terms $1 \le j \le 6$, we have a nearly identical estimate using . Third, we estimate using - $$\begin{aligned} {\ensuremath{\nonumber}}e^{\frac 3 4 s} |W_{N+1}^{(2)}(0, s) - W_{N}^{(2)}(0, s)| \lesssim_M &e^{\frac 3 4 s} e^{-s_N} e^{- \frac 1 2 (s_N - s_0)} \sup_{(\alpha, \beta) \in \mathcal{B}_N} |{\ensuremath{\partial}}_c W^{(2)}(0, s)| \\ \lesssim_M & e^{\frac 3 4 s} e^{-s_N} e^{- \frac 1 2 (s_N - s_0)} e^{\frac 3 4(s - s_0)}\,.\end{aligned}$$ An analogous estimate applies to the fourth quantity in . 
For the quantities in the third and fourth lines of , we use - , coupled with - , in essentially the identical manner to the quantities above. Similarly, for the quantities in $Y$, we couple the estimates - , with the estimates - . As $s \le s_\ast \le s_N \rightarrow \infty$, the estimates above clearly imply that $\{W_N, Z_N, A_N\}$ is a Cauchy sequence in the norm $X$ and $\{ \mu_N, \dot{\tau}_N, \dot{\kappa}_N, \dot{\xi}_N \}_{N \ge \lfloor s_\ast \rfloor + 1}$ forms a Cauchy sequence in $Y$, upon taking the supremum in $s \in [s_0, s_\ast]$. We conclude by sending $s_\ast \rightarrow \infty$. For the final step, we note that the norms $X$ and $Y$ are clearly strong enough to pass to the limit in the equation - . Furthermore, applying and yields that $$\nu=\lim_{s\rightarrow \infty}W^{(5)}(0, s)\,,$$ exists, and by we have $${\left|\nu-120\right|}\lesssim \eps^{\frac78}\,.$$ Consequential quantitative properties for $(w, a, z)$ {#s:bandicoot} ----------------------------------------------------- We finish by providing a proof of the following consequence of our construction. The solution $w(\theta, t)$ satisfies the following Hölder $1/5$ regularity estimate uniformly in $t$ up to the shock time $T_\ast$ $$\begin{aligned} \label{reaCH:1} \sup_{t \in [-\eps, T_\ast]} [w(\cdot, t)]_{\frac{1}{5}} \lesssim 1\,.\end{aligned}$$ Due to the bootstrap bounds , on ${\widetilde}{W}$, and the properties on ${\overline}{W}$, we obtain the following bound on $W = {\overline}{W} + {\widetilde}{W}$, $$\begin{aligned} |{\ensuremath{\partial}}_x W(x, s)| \lesssim \langle x \rangle^{- \frac 4 5}, \end{aligned}$$ where the implicit constant is uniform, and in particular, independent of $s$. Using this, we write $$\begin{aligned} {\ensuremath{\nonumber}}[W(\cdot, s)]_{\frac 1 5} = & \sup_{(x, x')} \frac{|W(x, s) - W(x', s)|}{|x - x'|^{\frac 1 5}} = \sup_{(x, x')} \frac{1}{|x - x'|^{\frac 1 5}} |\int_{x}^{x'} {\ensuremath{\partial}}_x W(y, s) {\,\mathrm{d}}y| \\ \label{tho:1} \lesssim & \sup_{(x, x')} \frac{1}{|x - x'|^{\frac 1 5}} \int_{x}^{x'} \langle y \rangle^{- \frac 4 5} {\,\mathrm{d}}y \lesssim \sup_x \frac{1}{|x|^{\frac 1 5}} \int_0^{|x|} \langle y \rangle^{- \frac 4 5} {\,\mathrm{d}}y \lesssim 1\,. \end{aligned}$$ Finally, we use to argue as follows. Select any $\theta, \theta' \in \mathbb{T}$. Then there exists a corresponding $(x, x')$ determined through so that $$\begin{aligned} \frac{|w(\theta, t) - w(\theta', t)|}{|\theta - \theta'|^{\frac 1 5}} = \frac{|W(x, s) - W(x', s)|}{|x - x'|^{\frac 1 5}}\,.\end{aligned}$$ From here, we take the supremum over $\theta, \theta'$ and apply estimate to reach . The following estimates hold for a constant $C_M$ that depends on $M$, $$\begin{aligned} \sup_{t \in [-\eps, T_\ast)} \sup_{\theta \in \mathbb{T}} |{\ensuremath{\partial}}_\theta a(\theta, t)| &\le C_M\,, \\ \sup_{t \in [-\eps, T_\ast)} \sup_{\theta \in \mathbb{T}} |{\ensuremath{\partial}}_\theta z(\theta, t)| &\le C_M\,, \\ \sup_{t \in [-\eps, T_\ast)} \sup_{\theta \in \mathbb{T}} |w(\theta, t)| &\le 2 \kappa_0\,. \end{aligned}$$ This follows upon pulling back to the original coordinate system via and which gives $$\begin{aligned} \sup_{t} \sup_{\theta} |{\ensuremath{\partial}}_\theta a| &= \sup_{s} \sup_{x} e^{\frac 5 4 s} |A^{(1)}| \le M^{2}\,, \\ \sup_{t} \sup_{\theta} |{\ensuremath{\partial}}_\theta z| &= \sup_{s} \sup_{x} e^{\frac 5 4 s} |Z^{(1)}| \le M^{2}\,,\end{aligned}$$ upon invoking bootstraps and , and upon invoking Corollary \[corr:infty\] to ensure that these bootstraps are satisfied globally. 
We now arrive at the pointwise estimate for $w(\theta, t)$. For this, we use the bootstraps , , and to obtain $$\begin{aligned} {\ensuremath{\nonumber}}|w| \le & e^{- \frac s 4} |W| + |\kappa| \lesssim e^{- \frac s 4} \sup_{\substack{ - \log(\eps) \le s < \infty \\ x \in B_f }} \langle x \rangle^{\frac 1 5} + |\kappa_0| + \eps \\ \lesssim & e^{- \frac s 4} (M \eps e^{\frac 5 4 s} )^{\frac 1 5} + |\kappa_0| + \eps \le 2 |\kappa_0|\,.{\ensuremath{\nonumber}}\end{aligned}$$ We now provide a final lemma to obtain the shock dynamics of ${\ensuremath{\partial}}_\theta w$ at $\theta = \xi(t)$. The following asymptotic behavior is valid, $$\begin{aligned} \label{BUD:1} &\lim_{t \rightarrow T_\ast} \big(T_\ast - t\big)\, {\ensuremath{\partial}}_\theta w(\xi(t), t) = - 1\,.\end{aligned}$$ First, follows upon using , evaluating at $x = 0$, and using the constraint $W^{(1)}(0, s) = -1$, which yields $$\begin{aligned} \label{pre:lim:1} {\ensuremath{\partial}}_\theta w(\xi(t), t) = - \frac{1}{\tau(t) - t}\,. \end{aligned}$$ We now note that, while $\dot{\tau}(t)$ satisfies the bootstrap , $\tau(t)$ is itself uniquely defined upon enforcing $$\begin{aligned} \tau(T_\ast) = T_\ast\,. \end{aligned}$$ Since $\dot{\tau}(t) \rightarrow 0$ as $t \rightarrow T_\ast$, we have $(\tau(t) - t)/(T_\ast - t) \rightarrow 1$, and we may thus take the limit in to get . We now establish the following pointwise asymptotic stability result. \[l:wombat\] Let $W$ be the global solution from Corollary \[corr:infty\] and let $\nu$ be as in . Then the following asymptotic behavior holds $$\begin{aligned} \label{pointwise:limit} \lim_{s \rightarrow \infty} W^{(n)}(x, s) = {\overline}{W}^{(n)}_\nu(x), \qquad n=0,\dots,5\,, \end{aligned}$$ where ${\overline}{W}_\nu$ is the exact, self-similar Burgers profile $$\begin{aligned} \label{bar:nu} {\overline}{W}_\nu(x) := \left(\frac{\nu}{120}\right)^{-\frac14}{\overline}W\left(\left(\frac{\nu}{120}\right)^{\frac14} x\right)\,.\end{aligned}$$ We note that the parameter $\nu$ in is directly related to the spatial rescaling invariance of Burgers’ equation, listed in Section \[s:burgers\]. Let $(W,Z,A)$ be the global solution defined in Corollary \[corr:infty\]. First, it is easily verified that ${\overline}W_{\nu}$ is an exact solution to the self-similar Burgers’ equation , and that the Taylor coefficients of ${\overline}W_{\nu}$ at $x = 0$ up to order five are given by $$\begin{aligned} {\overline}W_{\nu}(0)={\overline}W_{\nu}^{(2)}(0)={\overline}W_{\nu}^{(3)}(0)={\overline}W_{\nu}^{(4)}(0)=0,\quad {\overline}W_{\nu}^{(1)}(0)=-1\quad\mbox{and } {\overline}W_{\nu}^{(5)}(0)=\nu\,.\end{aligned}$$ In particular, in the limit $s\rightarrow \infty$, the Taylor coefficients of $W$ and ${\overline}W_{\nu}$ at $x = 0$ match up to order five. Let us define the difference $${\widetilde}W_{\nu}=W-{\overline}W_{\nu}\,.$$ Hence, by the definition of ${\widetilde}W_{\nu}$, $$\begin{aligned} \label{eq:Taylor:cancellation} \lim_{s\rightarrow \infty }{\widetilde}W_{\nu}^{(n)}(0,s)=0\,,\end{aligned}$$ for all $n=0,\dots,5$. By a similar calculation to (rearranging terms between the left-hand side and the right-hand side), we obtain $$\begin{aligned} &({\ensuremath{\partial}}_s- \frac 1 4+ {\overline}W_{\nu}^{(1)}) {\widetilde}W_{\nu} +(W+\frac54 x) {\ensuremath{\partial}}_x {\widetilde}W_{\nu} = -\beta_\tau e^{- \frac 3 4 s} \dot{\kappa} + F_{W}+((1-\beta_{\tau}W)-G_W){\ensuremath{\partial}}_x W:={\widetilde}F_\nu\,.\end{aligned}$$ Using , and , we have, for any fixed $x_*$, that $$\label{eq:tilde:Fnu:decay} \int_{s_0}^{\infty}{\left|{\widetilde}F_\nu (x_*,s)\right|}\,ds<\infty\,.$$ Now fix $\delta>0$, $x_*\in\mathbb R$ and $s_*\geq -\log \eps$. 
Then as a consequence of and we have $$\label{eq:Taylor:est} {\left|{\widetilde}W(x_*,s_*)\right|}\les_M {\left|x_*\right|}^6 + \delta\,,$$ assuming that $s_*$ is taken sufficiently large dependent on the choice of $\delta$. Now define $\Phi$ to be the trajectory $$\begin{aligned} {\ensuremath{\partial}}_s \Phi(s) = \left(W+\frac54 x\right) \circ \Phi, \qquad \Phi(s_*) = x_*\,. \end{aligned}$$ If we in addition define $q= e^{-\frac54(s-s_*)}{\widetilde}W_\nu$, then $q\circ \Phi$ satisfies the equation $$\begin{aligned} (\partial_s+1+ {\overline}W_{\nu}^{(1)} )(q\circ \Phi)=e^{-\frac54(s-s_*)}{\widetilde}{F}_{\nu}\circ \Phi\,.\end{aligned}$$ Since ${\overline}W_{\nu}^{(1)} \geq -1$, then by Grönwall and , it follows that $$\label{eq:q:phi:est} {\left|q\circ \Phi (s)\right|}\leq {\left|q\circ \Phi (s_*)\right|}+\delta$$ for $s\geq s_*$, assuming that $s_*$ is taken to be sufficiently large, dependent on $\delta$. Combining and we obtain that for $s_* \leq s\leq s_*-\frac{23}{5}\log {\left|x_*\right|}$ and assuming $\delta\leq {\left|x_*\right|}^6$ $$\label{eq:wallaby} {\left|{\widetilde}W_{\nu}\circ \Phi (s)\right|}\les_M e^{\frac54(s-s_*)}({\left|x_*\right|}^6 + \delta) \les_M {\left|x_*\right|}^{\frac{1}{4}} \,.$$ Let us restrict to the case $x_*>0$ and assume the lower bound $$\label{eq:escape:lower:bound} \Phi\left(s_*-\frac{23}{5}\log {\left|x_*\right|}\right)\geq \Gamma\,.$$ In particular, by continuity, implies that for any $x_*\leq x\leq \Gamma$, there exists an $s_*\leq s\leq (s_*-\frac{23}{5}\log {\left|x_*\right|}$ such that $\Phi(s)=x$ and hence by $${\left|{\widetilde}W_{\nu}(x,s)\right|} \les_M {\left|x_*\right|}^{\frac{1}{4}}\,.$$ By taking the limit $s_*\rightarrow \infty$, this implies $$\label{eq:kangaroo} \lim_{s\rightarrow \infty }{\left|{\widetilde}W_{\nu}(x,s)\right|} \les_M {\left|x_*\right|}^{\frac{1}{4}}\,,$$ for any $x_*\leq x\leq \Gamma$. It remains to prove a $x_*$ dependent lower bound on $\Gamma$ that increases as $x_*\rightarrow 0$. First note that by and the Fundamental Theorem of Calculus $$\begin{aligned} W+\frac54 x\geq x\left( \frac54 -{\left \| W^{(1)} \right\|}_{\infty}\right)\geq \frac29 x\,. \end{aligned}$$ Thus by Grönwall $\Phi(s)\geq e^{\frac15(s-s_*)}s_*$, which implies $$\Phi\left(s_*-\frac{23}{5}\log {\left|x_*\right|}\right)\geq {\left|x_*\right|}^{-\frac{1}{45}}\,,$$ and hence we can take $\Gamma= {\left|x_*\right|}^{-\frac{1}{45}}$. Thus by taking $x_*\rightarrow 0$, from we obtain $$\label{eq:bilby} \lim_{s\rightarrow \infty}{\left|{\widetilde}W_{\nu}(x,s)\right|} =0 \,,$$ for all $x>0$. An analogous argument yields for the case $x<0$. The case $x=0$ is trivial since ${\widetilde}W_{\nu}(0,s)=0$ for all $s$. Thus, $W$ converges pointwise to ${\overline}W_{\mu}$. The proof for $n = 1,\dots,5$ works in an analogous manner. We remark that the asymptotic profile that is picked out in is consistent with our estimates . Indeed, by using estimate , we can estimate $$\begin{aligned} \| ({\overline}{W}_\nu - {\overline}{W}) \eta_{- \frac{1}{20}} \|_\infty \lesssim \eps^{\frac 78}\,, \end{aligned}$$ which shows that $W$ can simultaneously lie in a ball of size $\eps^{\frac{3}{20}}$ within ${\overline}{W}$ (in the weighted norm above) and converge pointwise to ${\overline}{W}_\nu$. It is now possible to prove asymptotic stability in a much stronger sense. 
To do so, we define the slightly weaker weighted space by first fixing a $0 < \delta \ll 1$, $$\begin{aligned} \label{norm:X:delta} \| W \|_{\mathcal{X}_{-\delta}} := \| W \eta_{- \frac{1}{20} - \delta} \|_\infty + \sum_{j = 1}^5 \| W^{(j)} \eta_{\frac1 5 - \delta} \|_\infty\,. \end{aligned}$$ For any $\delta > 0$, $$\begin{aligned} \label{asy:conv:1} \Big\| W - {\overline}{W}_\nu \Big\|_{\mathcal{X}_{-\delta}} \rightarrow 0\quad \text{ as } s \rightarrow \infty\,. \end{aligned}$$ This is a standard consequence of pointwise convergence (), uniform estimates on six derivatives, guaranteed by the specification of the norm $X$, , and finally, the compactness afforded by the weaker weight of $\langle x \rangle^{-\delta}$ in our norm . For the purpose of completeness, we include the argument for the lowest order part of the $X_{-\delta}$ norm, while the higher order components work in an exactly analogous fashion. To prove , specifically $\| (W - {\overline}{W}_\nu) \eta_{- \frac{1}{20} - \delta} \|_\infty \rightarrow 0$, we will first fix an arbitrary ${\widetilde}{\eps} > 0$, and demonstrate the existence of $S = S({\widetilde}{\eps})$ large, such that $s > S$ implies $\| (W - {\overline}{W}_\nu) \eta_{- \frac{1}{20} - \delta} \|_\infty \le {\widetilde}{\eps}$. First, there exists $X = X({\widetilde}{\eps}, \delta)$ so that $$\begin{aligned} \| (W - {\overline}{W}_\nu) \eta_{- \frac{1}{20} - \delta} \|_{L^\infty(|x| \ge X)} \le \frac{{\widetilde}{\eps}}{10}\,, \end{aligned}$$ according to the estimate on $W$ and on ${\overline}{W}$ (and hence, ${\overline}{W}_\nu$). We thus restrict to the compact interval $|x| \le X$, which we now subdivide into $N = N({\widetilde}{\eps}, M)$ sub-intervals with centers $x_k$, $k = 0, ..., N$. $N$ will be selected according to the rule: $$\begin{aligned} (\| W^{(1)} \|_\infty + \| {\overline}{W}^{(1)}_\nu \|_\infty) \frac{1}{N} < \frac{{\widetilde}{\eps}}{10}\,.\end{aligned}$$ By the pointwise convergence guaranteed by , there exists an $s_k$ so that $$\begin{aligned} |W(s_k, x_k) - {\overline}{W}_\nu(x_k)| \le \frac{{\widetilde}{\eps}}{10}\,.\end{aligned}$$ Define now $S := \max_{k} s_k$. Estimating, we have $$\begin{aligned} {\ensuremath{\nonumber}}|W(s, x) - {\overline}{W}_\nu(x)| \le & |W(s, x) - W(s, x_k)| + |W(s, x_k) - {\overline}{W}_\nu(x_k)| \\ {\ensuremath{\nonumber}}& + |{\overline}{W}_\nu(x_k) - {\overline}{W}_\nu(x)| \\ {\ensuremath{\nonumber}}\le & (\| W^{(1)} \|_\infty + \| {\overline}{W}^{(1)}_\nu \|_\infty) |x - x_k| + \frac{{\widetilde}{\eps}}{10} \\ \le & \frac{{\widetilde}{\eps}}{10} + \frac{{\widetilde}{\eps}}{10}\,,{\ensuremath{\nonumber}}\end{aligned}$$ for $s > S$. Taking supremum over $|x| \le X$ gives the desired conclusion. We note that the proof follows in a very similar manner to the proof of Corollary 4.7 of [@BuShVi2019]. By finite speed of propagation, the strict support properties imposed in Section \[ss:initial\], can be replaced by the condition that $(w_0,z_0,a_0)$ satisfy the conditions modulo a small perturbation in the $C^8$ topology. The conditions for the cases $n=0,1$ impose no obstruction to $\check w_0$ been chosen within an open set since the conditions may be enforced by choosing $\eps$ and $\kappa_0$ appropriately (it should be noted that these two parameters are free to be chosen from an open set). 
In order to weaken the condition for the case $n=4$, we note that by a Taylor expansion $$\begin{aligned} \partial_{\theta}^4 w_0(\theta)&=\partial_{\theta}^4 w_0(0)+\theta \partial_{\theta}^5 w_0(0)+{\mathcal O}(\eps^{-\frac{29}{4}}\theta^2)\\ &=\partial_{\theta}^4 w_0(0)+120\eps^{-6}\theta +\theta(\partial_{\theta}^3 w_0(0)-120\eps^{-6})+{\mathcal O}(\eps^{-\frac{29}{4}}\theta^2) \, ,\end{aligned}$$ here implicitly we used and that $${\left \| \partial_{\theta}^6\eps^{\frac14} {\overline}W\left(\eps^{-\frac54}\theta\right) \right\|}_{\infty}\les \eps^{-\frac{29} 4} \,.$$ By continuity, given ${\overline}\eps$, then assuming $\partial_{\theta}^4 w_0(0)$ and $\partial_{\theta}^5 w_0(0)-120\eps^{-6}$ to be sufficiently small, there exists a $\theta\in(-{\overline}\eps,{\overline}\eps)$ such that $\partial_{\theta_0}^4 w_0(\theta)=0$. Thus, up to a coordinate translation $\theta\mapsto \theta+\theta_0$, and under the assumptions $\partial_{\theta}^4 w_0(0)$ and $\partial_{\theta}^5 w_0(0)-120\eps^{-6}$ are both sufficiently small, we can remove the assumption for the case $n=4$. The strict assumption for the case $n=5$ may be removed by applying the rescaling $${\widetilde}a(\theta,t)=\mu^{-1}a(\mu \theta,t),\quad{\widetilde}w(\theta,t)=\mu^{-1}w(\mu \theta,t),\quad {\widetilde}z(\theta,t)=\mu^{-1}z(\mu \theta,t)\,,$$ for $\mu$ sufficiently close to $1$. As was noted in [@BuShVi2019], such a rescaling would modify the domain; however, since by finite-speed of propagation we restrict our analysis to a strict subset of the domain, such a rescaling does not impose any problem. [^1]: Department of Mathematics, Princeton University; email: [buckmaster@math.princeton.edu](buckmaster@math.princeton.edu); partially supported by NSF grant DMS-1900149 and a Simons Foundation Mathematical and Physical Sciences Collaborative Grant. [^2]: Department of Mathematics, Princeton University; email: [ssiyer@math.princeton.edu](ssiyer@math.princeton.edu); partially supported by NSF grant DMS-1802940.
1
--- abstract: 'We propose a minimal spectral theory for boundary layer turbulence that captures very well the profile of the mean square velocity fluctuations in the stream-wise direction, and gives a quantitative prediction of the Townsend-Perry constants. The phenomenological model is based on connecting the statistics in the streamwise direction with the energy spectrum of the streamvise velocity fluctuations. The original spectral theory was proposed in Ref. [@gioia2006turbulent] to explain the friction factor and von Kármán law in Ref. [@GGGC10]. We generalized it by including fluctuations in the wall-shear stress and the streamwise velocity. The predicted profiles for the mean velocity and mean square fluctuations are compared with velocity data from wind tunnel experiments.' author: - 'Björn Birnir $^1$, Luiza Angheluta$^2$, John Kaminsky$^1$, Xi Chen$^3$' bibliography: - 'referencef.bib' title: 'Spectral link of the generalized Townsend-Perry constants in turbulent boundary layers' --- Introduction ============ Turbulence is a ubiquitous phenomenon encountered in very diverse natural systems, from the large-scale atmosphere [@wyngaard1992atmospheric] and oceans [@toschi2009lagrangian] all the way down to quantum fluids [@vinen2002quantum], as well as in engineered systems, such as pipelines, heat exchangers, wind turbines, etc. It relates to the complex fluid dynamics that orchestrates the interactions of flow eddies spanning many length-scales and generating non-Gaussian statistics of velocity increments. The statistical properties of these turbulent fluctuations are fundamentally changed when the flow is confined by the presence of solid walls or boundaries [@smits2013wall; @jimenez2013near]. In contrast to bulk turbulence, which is statistically homogeneous and isotropic, the wall-bounded turbulence is characterised by statistically anisotropic properties. Namely, there is a net mean-flow in the streamwise direction along the wall and the different flow structures form depending on their distance to the wall. We typically differentiate between four flow regions as moving away from the wall [@Ob97]: i) the [*viscous region*]{} closest to the wall and where viscous flows dominate, ii) the [*buffer layer*]{}, marking the transition from the viscous layer into the inertial layer, iii) the [*inertial layer*]{} where the log-law of the wall applies, and iv) the [*wake*]{}, the energetic region beyond the inertial layer. A more refined division is given in [@CHS19]. A classical signature of wall-bounded turbulence is the “log-law of the wall” of the mean velocity profile (MVP) due to Prandtl and von Kármán, and reads as $$\label{eq:l-lwall} \langle \tilde u \rangle = \frac{1}{\kappa} \log(\tilde y)+B,$$ where $\kappa$ is the *universal* von Kármán constant that is independent of the microscopic flow characteristics and relates to generic features such as space dimensionality. The distance to the wall $y$ and the mean fluid velocity $u$ along the wall, are typically expressed in the “wall units” determined by the wall shear stress $\tau_0$. This is because $\tau_0$ is an important theoretical concept that is also experimentally measurable. The friction velocity $u_\tau = \sqrt{\langle \tau_0 \rangle/\rho}$ which is set by the wall shear stress $\tau_0$ and the kinematic viscosity $\nu$, and enters in the unit rescalings as $\tilde u= u/u_\tau$ and $\tilde y = yu_\tau/ \nu$. 
The constant fluid density is $\rho$ and the $B$ is a dimensionless constant that is fitted to experimental data, e.g. [@P53]. ![Theoretical predictions from the spectral theory for the MVP $\langle u \rangle$ and mean square velocity fluctuations $\langle w^2\rangle$ (dimensionless variables in wall units). []{data-label="fig:wx1"}](Figure_0.pdf){width=".4\textwidth"} A log-law of the wall was also derived from the “attached eddy hypothesis” by Townsend [@T76]. Townsend showed that the velocity fluctuations, $\tilde w =w/u_\tau$, $\tilde u= \langle \tilde u \rangle + \tilde w$, also follow the log-law of the wall in its second moment, namely $$\label{eq:l-lfluct} {\langle \tilde w^2 \rangle} = - A_1 \log(\tilde y) + B_1,$$ where the coefficients $A_1$ and $B_1$, also called the Townsend-Perry constants, were first measured by Perry and Chong [@PC82; @P86]. More recently, the log-law was generalised to any moment of the streamwise velocity fluctuations, $\tilde w$, assuming Gaussian velocity fluctuations [@MM13], $$\label{eq:l-pfluct} \langle \tilde w^{2p} \rangle^{1/p} = - A_p \log(\tilde y) + B_p.$$ While the generalised log-law is supported by wall-turbulence experiments, the dependance of $A_p$ and $B_p$ on $p$ turns out to be sub-Gaussian, which is confirmed both experimentally and numerically, [@MM13]. The sub-Gaussian behavior was explained in Ref. [@BC16] using the stochastic closure theory of turbulence [@BB211; @BB314] and the analysis was improved in Ref. [@KBK19], using measurements from the Flow Physics Facility (FPF) at the University of New Hampshire. Both of these studies used the results from homogeneous turbulence [@KBBS17] and made an assumption about the form of the fluctuating shear stress in the inertial layer, based on physical principles. In Ref. [@GGGC10], a spectral theory for the log-law of the wall of the MVP was proposed in which it is possible to derive the log-law in the inertial layer and the laminar profile in the viscous layer. The novel contribution is the precise form of the transition in the buffer layer using the the Kolmogorov-Obukhov energy spectrum of turbulent fluctuations. The form of the MVP in the wake is also obtained. This was done by summing the energy of the wall-attached eddies, as hypothesised originally by Townsend in [@T76]. In this paper, we propose a generalisation of the spectral theory that includes fluctuations in the streamwise velocity due to an essentially fluctuating wall shear stress. Fig. \[fig:wx1\] shows the spectral theory predictions of the profiles of the mean velocity and mean square velocity fluctuations across the viscous, buffer and inertial layers. The rest of the paper is structured as follow. We summarise the analysis in Ref. [@GGGC10] and its extension in Section \[sec:MVP\], and generalise it to include the fluctuations in Section \[sec:fluct\]. This produces the log law of the wall in Eq. (\[eq:l-lfluct\]) for the velocity fluctuations and its higher moments in Eq. (\[eq:l-pfluct\]). Then in Section \[sec:functional\], we derive the functional form of the mean-square fluctuations in the viscous layer and the inertial layer. In Section \[sec:SCT\], we use the attached eddy hypothesis and the stochastic closure theory [@BB211; @BB314] to derive the form of the Townsend-Perry and the generalized Townsend-Perry constants. This allows us to derive the streamwise fluctuations in the wall shear stress, and remove the assumption made in Refs. [@BC16] and [@KBK19], and mentioned above. 
Using theory-informed by data analysis, we can construct the Townsend-Perry constants and the generalised Townsend-Perry constants. In Section \[sec:BW\], we extend the formulas for the mean square fluctuations to the buffer layer and the energetic wake. In Section \[sec:data\], we compare the predicted MVP and mean-square velocity profile from this spectral theory to experimental data. In Section \[sec:summary\], we conclude with a discussion on the proposed spectral theory and the role that Townsend’s attached eddies play in it. The Spectral Theory {#sec:MVP} =================== The typical velocity of an inertial eddy of size $s$ can be obtained by integrating out the kinetic energy contained in all eddies of sizes up to $s$ as in Ref. [@GGGC10] $$\label{eq:vs} v_s^2 = \int_{1/s}^\infty E(k)dk,$$ where kinetic energy spectrum follows the Kolmogorov-Obukhov scaling with cutoffs in the injection scale and viscous scales, $E(k) =c_d(\eta k) \frac{2}{3}(\kappa_\epsilon \epsilon )^{2/3}k^{-5/3} c_e(Rk)$, with $\frac{2}{3}(\kappa_\epsilon \epsilon)^{2/3}k^{-5/3}$ being the Kolmogorov-Obukhov spectrum and $c_d (\eta k)$ and $c_e(R k)$ the phenomenological dimensionless corrections functions in the dissipative (set by the Kolmogorov scale $\eta$) and energetic range (set by the system size $R$), respectively. $\kappa_\epsilon$ is a dimensionless parameter, $\epsilon$ is the turbulent energy dissipation rate, $\eta=\nu^{3/4}\epsilon^{-1/4}$ is the viscous length scale and $R$ is the largest length scale in the flow. The dissipative correction function is typically an exponential cutoff function $c_d(\eta k) =\exp(-\beta_d \eta k)$, and the energetic-range (wake) correction function is $c_e(Rk)=(1+(\beta_e/(Rk))^2)^{-17/4}$, which is the form that was proposed by von Kármán. $\beta_d$ and $\beta_e$ are non-negative fitting parameters that can be adjusted to data. By the change of variables $\xi = sk$, we recast Eq. (\[eq:vs\]) as $$v_s^2 = (\kappa_\epsilon \epsilon s)^{2/3}I\left(\frac{\eta}{s},\frac{s}{R}\right),$$ where the spectral function $I$ is given by the formula [@GGGC10] $$\begin{aligned} \label{eq:spectcont} &I\left(\frac{\eta}{s},\frac{s}{R}\right)= \nonumber \\ &\frac{2}{3} \int_1^\infty e^{-\xi \beta_d \eta/s}\xi^{-5/3}\left(1+\left(\frac{\beta_e s}{R\xi}\right)^2\right)^{-17/6} d\xi.\end{aligned}$$ The integral sums the energies of all eddies of a smaller radius than $s$, and computes their contribution to the energy of the eddy of radius $s$. This is the energy (or spectral) formulation of the attached eddy hypothesis of Townsend [@T76]. The $I$-function correctly captures the buffer layer, as the transition from the viscous to the inertial layer, and the asymptotic of the MVP in the energetic wake. The asymptotic values are such that in the inertial layer $I=1$ and in the viscous layer $I=0$. The $I$-function combines the Kolmogorov-Obukhov theory with the observed spectrum in the viscous layer, the inertial layer and the wake and is thus able to capture the transition from one layer to the next. In Ref [@GGGC10], it was used to give the details of the MVP. In this paper, we will use it to capture the profile of mean-square fluctuations. In the buffer layer a different scaling of the attached eddies comes into play, this is the $k_x^{-1}$ scaling of the spectrum that has been debated in literature, but clearly shows up in recent simulations and experiments in the middle of the buffer layer, see Figure 9 (a) in Ref. [@LM15] and Figure 12 (b) in Ref. [@Sa18]. 
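Before turning to the buffer-layer analogue $I_b$, we note that the spectral function $I$ of Eq. (\[eq:spectcont\]) is straightforward to evaluate numerically. The Python sketch below is our own illustration; the values $\beta_d=\beta_e=1$ are placeholders rather than the fitted values used later, and the second call merely indicates the viscous suppression.

```python
import numpy as np
from scipy.integrate import quad

def I(eta_over_s, s_over_R, beta_d=1.0, beta_e=1.0):
    # Spectral function of Eq. (I): sums the Kolmogorov-Obukhov energies of all
    # eddies smaller than s, with dissipative (beta_d) and wake (beta_e) corrections.
    integrand = lambda xi: (2.0 / 3.0) * np.exp(-xi * beta_d * eta_over_s) \
        * xi ** (-5.0 / 3.0) * (1.0 + (beta_e * s_over_R / xi) ** 2) ** (-17.0 / 6.0)
    val, _ = quad(integrand, 1.0, np.inf)
    return val

print(I(0.0, 0.0))  # inertial-layer limit: (2/3) * int_1^inf xi^{-5/3} dxi = 1
print(I(1.0, 0.0))  # deep in the viscous layer the value is noticeably suppressed
```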
In the spectral theory, corresponding $I$-function for this scaling regime is $$\begin{aligned} \label{eq:spectcont1} &I_b\left(\frac{\eta}{s},\frac{s}{R}\right)= \nonumber\\ &\frac{2}{3}s^{-\frac{2}{3}} \int_1^\infty e^{-\xi \beta_d \frac{\eta}{s}}\xi^{-1}\left(1+\left(\frac{\beta_e s}{R\xi}\right)^2\right)^{-\frac{17}{6}} d\xi, \end{aligned}$$ where the subscript $b$ stands for “buffer”. The mean velocity is primarily influenced by the $I$-function, whereas the variation (fluctuation squared) is greatly influenced by the $I_b$-function in the buffer layer. $I$ is associated with the Kolmogorov-Obukhov energy cascade $k_x^{-5/3}$, in the inertial layer, whereas $I_b$ is associated with the $k_x^{-1}$ scaling in the buffer layer. (Here the $x$ denotes the streamwise direction.) We will take $I_b$ to be zero outside the buffer layer. The splitting of the near-wall region based on different scaling of the spectrum was proposed by Perry and Chong [@PC82] who used it build an interpolation model for MVP and the variation, this model was improved in Ref. [@Va15]. The generalised log-law {#sec:fluct} ======================= In this section, we will give a simple derivation of the log-law for the mean-square velocity profile that holds in the limit of large Reynolds number. In the following section we derive the general form of the variation that is not equally transparent. We will generalize the derivation of the MVP in Ref. [@GGGC10], by adding a fluctuation to the mean velocity. We let the velocity along the wall be $$v_1=u+v_1-u=u+w,$$ where $u$ is the mean velocity obtained by averaging $v_1$ over time, and $w$ is the fluctuation. The same derivations as in Ref. [@GGGC10] give the following equations for a dominant eddy of radius $s=y$, if we include the velocity fluctuations. In Ref. [@GGGC10] the shear stress at the distance $y$ from the wall is given by the formula ${\bar \tau_t} = \kappa_\tau \rho y v_y u'$ where $u'$ denotes the $y$ derivative of the velocity $u$ along the wall, and the overline indicates a not-fluctuating quantity. When velocity fluctuations are included the shear stress becomes: $$\label{eq:shear-stress} \tau_t = \kappa_\tau \rho y v_y (u'+w'),$$ where $\rho$ is the density $v_y$ is the (rotational) velocity of an eddy a distance $y$ from the wall and $\kappa_\tau$ is the dimensionless proportionality factor. The energy dissipation rate is related to the wall shear stress as ${\bar \epsilon} = \tau_t u'/\rho$ [@GGGC10] , and including the fluctuations, this becomes $$\label{eq:energy} \epsilon = \tau_t(u'+w')/\rho.$$ The eddy velocity for an eddy with radius $s=y$ at the distance $y$ from the wall is the same as in Ref. [@GGGC10], and as discussed above, $$\label{eq:e-viscosity} v_y= (\kappa_\epsilon \epsilon y)^{1/3} \sqrt{I},$$ where $I$ is the integral from Eq. (\[eq:spectcont\]) and $\kappa_\epsilon$ is a dimensionless proportionality factor. In the inertial layer $I=1$ and $\kappa_\epsilon = 4/5$ according to Kolmogorov’s $4/5$ law. Eliminating $\epsilon$ and $v_y$ from the three equations above, we obtain $$\label{eq:shear-stress_1} \tau_t= (\kappa_\epsilon \kappa_\tau^3)^{1/2} \rho y^2 (u'+w')^2 I^{3/4}.$$ The viscous shear stress is $\rho \nu (u'+w')$ so the total shear stress, including the contribution from the fluctuation is [@T76] $$\tau_t + \rho \nu (u'+w') = \tau_0(1-y/R).$$ Our assumption is that the wall shear stress $\tau_0$ is also a quantity that fluctuates about its mean value. 
We change the rescaled variables in the wall units written here in terms of the friction factor $f$: $\tilde y=y Re\sqrt{f}/R$, $\tilde u = u/(U\sqrt{f})$ and $\tilde w=w/(U\sqrt{f})$ and let $f=\langle \tau_0\rangle/\rho U^2$. Then, the equation above becomes $$\label{eq:total_stress1} {\tilde \kappa}^2 {\tilde y}^2(\tilde u'+\tilde w')^2 I^{3/4}+(\tilde u'+\tilde w') = \frac{\tau_0}{\langle \tau_0 \rangle}\left(1-\frac{\tilde y}{Re\sqrt{f}}\right).$$ If we let $\tilde y \to 0$, $\tilde w \to 0$ and integrate, we get the law of the viscous layer $$\label{eq:viscous} \tilde u = \tilde y,$$ the laminar profile being $$\label{eq:laminar} \tilde u = \left(\tilde y-\frac{\tilde y^2}{2Re\sqrt{f}}\right).$$ In the large Reynolds number limit, solving just for the mean velocity, we obtain the Prandtl-von Kármán law $$\label{eq:velocity} \tilde u = \frac{1}{\tilde \kappa}\log (\tilde y)+D.$$ This is the correct leading term but the full formulas in the next section are more complicated. We now motivate the log-law for the variation. If we solve for both the mean velocity and the fluctuation in the large Reynolds number limit, we get that $$\label{eq:velocity1} \tilde u+\tilde w= \frac{\sqrt{\tau_0}}{\langle \tau_0\rangle^{1/2} \tilde \kappa} \log (\tilde y)+C.$$ This is consistent with the Eq. (\[eq:velocity\]) in the sense that if $\sqrt{\tau_0}=\langle \tau_0\rangle^{1/2}$, then $\tilde w =0$ and we recover Eq. (\[eq:velocity\]). Thus squaring Eq. (\[eq:velocity1\]) gives that $${\tilde u}^2+2\tilde u \tilde w +{\tilde w}^2= \frac{\tau_0}{\langle \tau_0\rangle \tilde \kappa^2} (\log(\hat y))^2+2\frac{\sqrt{\tau_0}}{\tilde \kappa \sqrt{\langle \tau_0 \rangle}} C\log(\tilde y) +C^2.$$ Taking the average, using that $\langle \tilde w \rangle = 0$ and Eq. (\[eq:velocity\]), we get that $$\langle \tilde w^2 \rangle = \frac{2C\langle \sqrt{\tau_0}\rangle-2D\sqrt{\langle\tau_0\rangle}}{\tilde \kappa \sqrt{\langle \tau_0 \rangle}}\log(\tilde y) +C^2-D^2.$$ By comparing this with the generalised log-law in Eq. (\[eq:l-lfluct\]), for the fluctuations squared, we obtain $$\label{eq:gloglaw} \langle \tilde w^2 \rangle = -A \log(\tilde y)+B,$$ where $A = - \frac{2C\langle \sqrt{\tau_0}\rangle-2D\sqrt{\langle\tau_0\rangle}}{\tilde \kappa \sqrt{\langle \tau_0 \rangle}}$ and $B = C^2-D^2$ are the Townsend-Perry constants. The full formulas in next section show that Eq. (\[eq:gloglaw\]) is the leading term and $A = - 2C(\frac{\langle \sqrt{\tau_0}\rangle-\sqrt{\langle\tau_0\rangle}}{\tilde \kappa \sqrt{\langle \tau_0 \rangle}})$, with $C=D$. To simplify the notation, we will now drop the tilde’s from all the variable with the dimensionless units implicitly assumed, unless otherwise stated. The functional form of the Townsend-Perry law {#sec:functional} ============================================= We will now use Eq. (\[eq:total\_stress1\]) to find the general form of the average of the fluctuations squared as a function of the distance to the wall. We consider the Eq. (\[eq:total\_stress1\]) $$\label{eq:total_stress2} {\kappa}^2 {y}^2( u'+ w')^2 I^{3/4}+(u'+w') = \frac{\tau_0}{\langle \tau_0 \rangle}(1-\frac{y}{Re\sqrt{f}}),$$ and first set $I=0$ in the viscous layer. Then $$\label{eq:u_o} u = y - \frac{y^2}{2 Re\sqrt{f}}$$ by averaging and integration in $y$. Integrating Eq. 
(\[eq:total\_stress2\]) and subtracting $u$ gives, $$\label{eq:w_o} w = \frac{\tau_0 - \langle \tau_0 \rangle}{\langle \tau_0 \rangle}\left( y - \frac{y^2}{2 Re\sqrt{f}}\right)$$ and $$\langle w^2 \rangle=\frac{\langle \tau_0^2 \rangle - \langle \tau_0 \rangle^2}{\langle \tau_0 \rangle^2}\left( y - \frac{y^2}{2 Re\sqrt{f}}\right)^2.$$ In the inertial layer $I=1$ and ignoring the small $O(1/y^4)$ term, we get that $$\begin{aligned} u+w &=& \frac{1}{2\kappa^2 y} + 2\frac{\sqrt{\tau_0}}{\kappa \sqrt{\langle \tau_0 \rangle}}\sqrt{1-\frac{y}{2Re\sqrt{f}}} \nonumber\\ &-&2 \frac{\sqrt{\tau_0}}{\kappa \sqrt{\langle \tau_0 \rangle}} \tanh^{-1}\left(\sqrt{1-\frac{y}{2Re\sqrt{f}}}\right)+K, \end{aligned}$$ where $K$ is a constant. Then setting $w=0$, we get that $$\begin{aligned} &&u = \frac{1}{2\kappa^2 y} + \frac{2}{\kappa}\sqrt{1-\frac{y}{2Re\sqrt{f}}}\nonumber\\ &-&\frac{2}{\kappa}\tanh^{-1}\left(\sqrt{1-\frac{y}{2Re\sqrt{f}}}\right)+K',\end{aligned}$$ where $K'$ is another constant, because $\tau_0$ becomes $\langle \tau_0 \rangle$. Subtracting, $u$ from $u+w$ we get $$\begin{aligned} &&w = 2\frac{(\sqrt{\tau_0}-\sqrt{\langle \tau_0 \rangle})}{\kappa \sqrt{\langle \tau_0 \rangle}}\sqrt{1-\frac{y}{2Re\sqrt{f}}}{\nonumber}\\ &-&2 \frac{(\sqrt{\tau_0}-\sqrt{\langle \tau_0 \rangle})}{\kappa \sqrt{\langle \tau_0 \rangle}} \tanh^{-1}\left(\sqrt{1-\frac{y}{2Re\sqrt{f}}}\right)+C,\end{aligned}$$ where $C=K-K'$. Squaring $w$ and taking the average gives $$\begin{aligned} &&\langle w^2 \rangle = 4C\frac{(\langle \sqrt{\tau_0} \rangle-\sqrt{\langle \tau_0 \rangle})}{\kappa \sqrt{\langle \tau_0 \rangle}}\sqrt{1-\frac{y}{2Re\sqrt{f}}} {\nonumber}\\ &-&4C \frac{(\langle \sqrt{\tau_0} \rangle-\sqrt{\langle \tau_0 \rangle})}{\kappa \sqrt{\langle \tau_0 \rangle}} \tanh^{-1}\left(\sqrt{1-\frac{y}{2Re\sqrt{f}}}\right){\nonumber}\\ &+&4\left[\frac{2(\langle\tau_0 \rangle-\sqrt{\langle \tau_0 \rangle}\langle \sqrt{\tau_0}\rangle)}{\kappa^2 \langle \tau_0 \rangle}\left(1-\frac{y}{2Re\sqrt{f}}\right.\right.{\nonumber}\\ &-& \left. 2\sqrt{1-\frac{y}{2Re\sqrt{f}}}\tanh^{-1}(\sqrt{1-\frac{y}{2Re\sqrt{f}}})\right){\nonumber}\\ &+& \left. \left[\tanh^{-1}(\sqrt{1-\frac{y}{2Re\sqrt{f}}})\right]^2\right]+C^2.\end{aligned}$$ From $\tanh^{-1}(x) = \frac{1}{2} \log(\frac{1+x}{1-x})$, we see that the second term in the last formula is of leading order and we get that $$\label{eq:w2der} \langle w^2 \rangle \sim 2C \frac{(\langle \sqrt{\tau_0} \rangle-\sqrt{\langle \tau_0 \rangle})}{\kappa \sqrt{\langle \tau_0 \rangle}}\log\left(\frac{y}{Re\sqrt{f}}\right) + h. o. t.$$ This agrees with the formula (\[eq:gloglaw\]) above. For higher order moments $\langle w^{2p} \rangle^{1/p}$ the similar term, linear in $\tanh^{-1}$ and multiplied by $2C$, is of leading order, $$\label{eq:wpder} \langle w^{2p} \rangle^{1/p} \sim 2C \frac{\langle (\sqrt{\tau_0}-\sqrt{\langle \tau_0 \rangle})^p \rangle^{1/p}}{\kappa \sqrt{\langle \tau_0 \rangle}}\log\left(\frac{y}{Re\sqrt{f}}\right) + h. o. t.$$ These formulas establish the log dependance of the second moment of the fluctuations, with the Townsend-Perry constants, and the log dependence of the higher moments of the fluctuations, with the Generalized Townsend-Perry constants, and justify formulas Eq. (\[eq:l-lfluct\]) and Eq. (\[eq:l-pfluct\]). Together, Eq. (\[eq:l-lfluct\]) and Eq. (\[eq:l-pfluct\]) can be called the generalised log-law of the wall. 
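For the reader's convenience, the leading logarithm in the last two displays comes from the elementary expansion of $\tanh^{-1}$ near its singularity; with $t = \frac{y}{2Re\sqrt{f}}$ small,
$$
\tanh^{-1}\!\left(\sqrt{1-t}\right)
= \frac12 \log\frac{1+\sqrt{1-t}}{1-\sqrt{1-t}}
= \frac12 \log\frac{4}{t} + O(t)
= -\frac12 \log t + \log 2 + O(t)\,,
$$
so the term linear in $\tanh^{-1}$ dominates all bounded terms as $y/(Re\sqrt{f})\to 0$ and produces the $\log$ dependence with the coefficients recorded in Eqs. (\[eq:w2der\]) and (\[eq:wpder\]).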
Derivation of the Generalized Townsend-Perry Constants {#sec:SCT} ====================================================== We consider the dependence of the fluctuation $w$ on the distance $x$ along the wall, to understand the Townsend-Perry constants. So far we have only considered $w(y)$ as a function of the distance $y$ from the wall, but $w(x,y)$ obviously depends on both variables $x$ and $y$. If we consider the eddy depicted in Fig. \[fig:wx\], then we see that the difference in momentum in the $x$ direction, across the eddy, is given by $$\rho(w(x+s)-w(x-s)) \sim 2\rho s w_x,$$ for $y$ fixed, where $w_x=\frac{d}{dx}w$. ![The eddy of radius $s$ and the variation in the fluctuations across it in the $x$ (streamwise) direction.[]{data-label="fig:wx"}](Figure_1){width=".3\textwidth"} This means that the total turbulent stress, across a vertical surface at $x$, denoted by a dotted line on Fig. \[fig:wx\] for an eddy of radius $s\sim y$, is $$\tau_0 = \tau_t+\tau_x,$$ where $\tau_x= 2\kappa_\tau \rho y w_x v_y$, analogous to formula Eq. (\[eq:shear-stress\]) above. Then we get, using Eq. (\[eq:e-viscosity\]) and $$\epsilon = (\tau_t+\tau_x)(u'+w_x) \rho,$$ that $$\tau_t+\tau_x= \kappa^2 \rho I^{3/4} y^2(u'+w_x)^2,$$ where prime denotes the derivative with respect to $y$, and $$\begin{aligned} (\tau_t+\tau_x)^{1/2} &=& \kappa \rho^{1/2}I^{3/8}y(u'+w_x) {\nonumber}\\ &=& \langle \tau_0 \rangle^{1/2}+ \kappa \rho^{1/2}I^{3/8}y|w_x|,\end{aligned}$$ since both parts must be positive. The derivation is completely analogous to the derivation in Sec. \[sec:fluct\], but here with $w$ varying in the $x$ direction and $w_y=0$. This gives that for $y$ fixed, $$\begin{aligned} \tau_0^{1/2}-\langle \tau_0 \rangle^{1/2} &=& (\tau_t+\tau_x)^{1/2}-\langle \tau_0 \rangle^{1/2} {\nonumber}\\ &=& \kappa \rho^{1/2}I^{3/8}y|w_x|.\end{aligned}$$ Considering the leading order $\log(y/2Re\sqrt{f})$ term in Eq. (\[eq:w2der\]) gives the Townsend-Perry constant $$\label{eq:TP} A_1=\frac{2C \rho^{1/2} y\langle |w_x|\rangle}{\sqrt{\langle \tau_0 \rangle}},$$ and the generalized Townsend-Perry constants $$\label{eq:GTP} A_p=\frac{2C \rho^{1/2} y \langle |w_x|^{p}\rangle^{1/p}}{\sqrt{\langle \tau_0 \rangle}},$$ by use of Eq. (\[eq:wpder\]). This justifies the form of the stress tensor assumed in Ref. [@BC16] and used in Ref. [@KBK19]. Finally, we get the expressions $$A_1 = K \langle |w(x+y)-w(x-y)|\rangle$$ and $$A_p = K \langle |w(x+y)-w(x-y)|^{p} \rangle^{1/p},$$ where $K$ is a constant and this produces the relationship between the Townsend-Perry and the generalized Townsend-Perry constants and the structure function of turbulence, see Ref. [@BB211; @BB314; @KBBS17], used in Ref. [@BC16; @KBK19], $$\label{eq:TPstru1} A_1 = K C_1|y^*|^{\zeta_1},$$ $$\label{eq:TPstru2} A_2 = K C^{1/2}_2|y^*|^{\zeta_2/2},$$ and $$\label{eq:TPstrup} A_p = K C^{1/p}_{p}|y^*|^{\zeta_{p}/p},$$ where $-y\leq y^* \leq y$. Considering the ratio, washes out the constant $K$, $$\label{eq:ratio} \frac{A_p}{A_2}= \frac{C^{1/p}_{p}}{C^{1/2}_2}|y^*|^{\zeta_{p}/p-\zeta_2/2},$$ where the $C_p$s are the Kolmogorov-Obukhov coefficients of the structure functions from Ref. [@BB211; @BB314; @KBBS17]. The last ratio was used in Ref. [@KBK19] to get agreement between experimental data and theory. ![The average of the MVP as a function of $log(y)$, where $y$ is the distance from the wall. Comparison of experimental data with theory (black line). 
(a) Theoretical curve is given by an $I$-integral that interpolates between the $k_x^{-5/3}$ to the $k_x^{-1}$ with $a=0.9994$ in the buffer region. (b) Theoretical curve has a uniform $I$-integral with the $k_x^{-5/3}$ scaling present in buffer and inertial regions.[]{data-label="fig:mean velocity"}](Figure_2bis.pdf){width="45.00000%"} The Spectral Theory of mean-square fluctuations {#sec:BW} =============================================== In the above sections we have not used the spectral information in the integral $I$, in Eq. (\[eq:spectcont\]). We have just used the attached eddy hypothesis and set $I=0$ in the viscous layer and $I=1$ in the inertial layer. But following Ref. [@GGGC10], we can now use the spectral information through the integral $I$ to find the beginning of the buffer layer and the form of both the MVP $u$ and the fluctuation $w$ in the buffer layer and in the wake. This allows one to obtain the full functional form of both $u$ and $w$ as functions of the distance $y$ from the wall and compare it with the experimental data in the next section. By use of the energy Eq. (\[eq:energy\]) and the relation $\eta = \nu^{3/4}\epsilon^{-1/4}$ we can find an expression for $\eta/y$, the viscosity parameter that increases as we approach the wall $y\to 0$. If we set the fluctuation equal to zero, $$\eta/y = (\tilde u'(1-\tilde y/Re\sqrt{f})-(\tilde u')^2)^{-1/4}\tilde y^{-1}$$ and find a formula for $\tilde y$ using this equation along with the equation $${\kappa}^2 {\tilde y}^2( u')^2 I^{3/4}+u' = \frac{\tau_0}{\langle \tau_0 \rangle}\left(1-\frac{y}{Re\sqrt{f}}\right).$$ The resulting formula is given in Ref. [@GGGC10], $$\tilde y = \left(\frac{(\eta/y)^{4/3}+ \kappa^{4/3}I^{1/2}(\eta/y,0)}{\kappa^{2/3}(\eta/y)^{8/3}I^{1/4}(\eta/y,0)}\right).$$ It gives the minimum value of $\tilde y$ for which $I(\eta/y,0)>0$ and the small eddies begin to contribute to the turbulent shear stress $\tau_t >0$. In fact for each value of the parameter $\beta_d$ there is a minimum value of $\tilde y$ denoted $\tilde y_v$ below which $I=0$. Only after this minimum does $\tilde y$ increase with $\eta/y$. This gives the end of the viscous layer and the beginning of the buffer layer and a value of the MVP, $u_v$ at $\tilde y_v$. It also gives the value of the fluctuation $w$ at $\tilde y_v$ and we can integrate the differential equations for $u$ and $w$, with respect to $y$, to get the form of both functions in the buffer layer, inertial layer and the wake. Along with the formulas in the viscous layer this gives the full functional form. The differential equations use the spectral information through the full functional form of $I$ and the two parameters $\beta_d$ and $\beta_e$ must be fitted to experimental data. Approximations to the MVP and mean square fluctuations, based on the formulas in Sec. \[sec:functional\] are given in Fig. \[fig:mean velocity\] and \[fig:mean variation\], respectively. To compare with experimental data one must solve the differential equations $$\label{eq:udiff} u'=-\frac{1}{2 \kappa^2 I^{3/4}y^2} + \frac{1}{\kappa I^{3/8}y} \sqrt{1 - \frac{y}{Re\sqrt{f}}+\frac{1}{4\kappa^2 I^{3/4}y^2}}$$ with the initial condition $u = 4.17$ at the beginning of the buffer layer $y =4.17$. 
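A minimal numerical sketch of this integration step is given below; it is our own illustration, the constants $Re$, $f$ and $\kappa$ are placeholder values, and the spectral function is frozen at its inertial-layer value $I=1$ rather than evaluated from Eq. (\[eq:spectcont\]) with fitted $\beta_d,\beta_e$. For large $y$ the right-hand side behaves like $1/(\kappa y)$, which reproduces the log-law slope.

```python
import numpy as np
from scipy.integrate import solve_ivp

Re, f, kappa = 20000.0, 2.0e-3, 0.4   # placeholder values for illustration only

def du_dy(y, u, I=1.0):
    # Right-hand side of the MVP equation, with the spectral function frozen at I = 1;
    # the full model would instead evaluate I(eta/y, y/R) at each y.
    a = kappa**2 * I**0.75 * y**2
    rhs = -1.0 / (2.0 * a) + (1.0 / (kappa * I**0.375 * y)) * np.sqrt(
        1.0 - y / (Re * np.sqrt(f)) + 1.0 / (4.0 * a))
    return [rhs]

sol = solve_ivp(du_dy, (4.17, 500.0), [4.17], rtol=1e-8, dense_output=True)
# sol.sol(y) now gives an approximate MVP u(y) from the start of the buffer layer onward
```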
For the fluctuation we first have to solve the differential equation, ignoring term of order $O(1/y^3)$ and higher, $$\label{eq:wdiff} w'=\frac{ \sqrt{\tau_0}-\sqrt{\langle\tau_0\rangle}}{ \kappa I^{3/8}y\sqrt{\langle \tau_0 \rangle}} \sqrt{1 - \frac{y}{Re\sqrt{f}}},$$ with the initial condition $w=\frac{ \tau_0-\langle\tau_0\rangle}{ \langle \tau_0 \rangle}\left(4.17-\frac{17.39}{2 Re\sqrt{f}}\right)$, from Eq. (\[eq:w\_o\]), at the beginning of the buffer layer. Here $I(y)$ is the integral in Eq. (\[eq:spectcont\]). ![image](Figure_3.pdf){width="80.00000%"} In practice it is easier to vary the initial conditions than to change $\beta_d$ and $\beta_e$, thus we will let the initial condition $y_o$, of $w$, from Equation (\[eq:w\_o\]), vary slightly depending on the Reynolds number in the simulations below. The other initial condition $w_o$ is given by the formula $w_o=\frac{ \tau_0-\langle\tau_0\rangle}{ \langle \tau_0 \rangle}\left(y_o-\frac{y_o^2}{2 Re\sqrt{f}}\right)$. Comparison with Experimental Data {#sec:data} ================================= The data we use to compare with the theory comes from the wind tunnel experiments at the University of Melbourne using the nano-scale thermal anemometry probe (NSTAP) to conduct velocity measurements in the high Re number boundary layer up to $Re_\tau = 20000$. The NSTAT has a sensing length almost one order of magnitude smaller than conventional hot-wire, hence allows for a fully resolved NSTAT measurement of velocity fluctuations, [@Sa18], [@Ba19]. The size of the University of Melbourne wind tunnel and the accuracy of the NSTAT permit the measurement over a very large range of scales. We use the averaged velocity time-series at Reynolds numbers $Re_\tau=6000, 10000,14500, 20000$ and the averaged variance at the same Reynolds numbers. Fig. \[fig:mean velocity\] shows the mean velocity profiles as a function of normalized distance from the wall, whereas Fig. \[fig:mean variation\] shows the averaged fluctuation squared (variation) as a function of the normalized distance to the wall. Both are semi-log plots. First, let us consider the curve describing the MVP in Fig. \[fig:mean velocity\] (panel b). It starts with the Eq. (\[eq:u\_o\]) for the viscous profile because the $I$-function is zero. But then we reach the value $y_v$ where the first attached eddies appear ($y=4.17$) and then the viscous profile changes, instead of reaching its maximum $u=Re \sqrt{f}/2$ at $y=Re \sqrt{f}$, the attached eddies increase the viscosity (decrease the Reynolds number) and the MVP reaches its maximum increase at $y \approx 15$, independent of the Reynolds number. The energy transfer of the attached eddies is captured by the $I$-integral and we integrate the differential equation given by Eq. (\[eq:udiff\]), from $y=4.17$, with the initial condition $u=4.17$. This gives the MVP in Fig. \[fig:mean velocity\] (b). This was already done in Ref. [@GGGC10] and describes how the attached eddies transfer energy into the buffer and the inertial layer. However, we notice that in the predicted MVP over estimates the mean velocity in buffer region. This is because the $I$-function from Eq. (\[eq:spectcont\]) does not account for the formation of the attached eddies which reduce the net energy transfer in the direct cascade. The curves for the fluctuations squared in Fig. \[fig:mean variation\] are obtained in a similar manner. 
The attached eddies fix the peak of $\langle w^2 \rangle$ at $y \approx 15$ and the peak profiles can be fitted by the viscous formula $\langle w^2 \rangle = a (y-\frac{y^2}{30})^2$ where $a \sim( \langle \tau _o^2\rangle -\langle \tau_o \rangle^2)/ \langle \tau_o \rangle^2$. This fit is shown in Fig. \[fig:mean variation\] (c). The peak position is experimentally observed to be fixed, but its height shows a weak Reynolds number dependence $a = -3.06+0.99 \log(Re)$, see [@Sa18]. This relationship can be tested using our theory and this will be done in another publication, see also [@CS20]. Then, we integrate the differential equation from Eq. (\[eq:wdiff\]) for $w$ with the initial data described in last section from some point to the right of the peak, where above peak profile fits the initial condition, this give the profile of the fluctuations squared down to the flat part in the buffer layer. At the beginning of the flat part, $y \approx 60$, the second scaling from Section \[sec:MVP\] begins to dominate the fluctuations, modeling an inverse cascade of attached eddies in the buffer layer. Then we switch to the buffer $I$-function $I_ b$ in the integration and integrate with $I_b$ until we get into the inertial region where the Kolmogorov-Obukhov scaling dominates again and the attached eddies break up. This produces the curves in Fig. \[fig:mean variation\]. We can now compare the functional form of the fluctuations squared shown in Fig. \[fig:mean variation\] with the predictions of the stochastic closure theory (SCT) of turbulence, used in Refs. [@BC16] and [@KBK19], to compute the Townsend-Perry constants, in the inertial (log) layer. These computations use the first structure function $S_1$ of turbulence and we explain how they are performed, see [@BC16] and [@KBK19] for more information. The computed Townsend-Perry constants are listed in Table I. The first structure function of turbulence is, see [@KBBS17], $$\begin{aligned} &&E(\vert u(x,t)-u(y,t)\vert)=S_1(x,y,t){\nonumber}\\ &&=\frac{2}{C}\sum_{k\in\mathbb{Z}^3\backslash\{0\}}\frac{\vert d_k\vert(1-e^{-\lambda_kt})}{\vert k\vert^{\zeta_1}+\frac{4\pi^2\nu}{C}\vert k\vert^{\zeta_1+\frac{4}{3}}}\vert \sin(\pi k\cdot(x-y))\vert,\end{aligned}$$ where the Reynolds number dependence enters through the viscosity $\nu$, and $E$ denotes the expectation (ensamble average). To get the Kolmogorov-Obukhov coefficients, $C_p$ in $$S_p(r, \infty) \sim C_p r^{\zeta_p},$$ for the lag variable $r$ small, and $\zeta_p$ the scaling exponents, we send $t$ to $\infty$ in the above formulas and project onto the longitudinal lag variable ${\bf r} = (r,0,0)$. For $p=1$ this becomes $$\begin{aligned} &&S_1 \sim \frac{2\pi^{\zeta_1}}{C} \sum_{k\neq 0} \frac{|d_k |}{(1+\frac{4\pi^2\nu}{C}|k|^{4/3})} r^{\zeta_1}{\nonumber}\\ &=&\frac{4\pi^{\zeta_1}}{C} \sum_{k = 1}^\infty \frac{a}{(a^2+k^m)(1+\frac{4\pi^2\nu}{C}|k|^{4/3})} r^{\zeta_1},\end{aligned}$$ see [@KBBS17], where $\zeta_1 = 0.37$, see [@BB211]. Now we use the values for $\nu$ in Table 1 in [@KBK19], and the corresponding values for $a,\ m$ and $C$ from Table 3 in the same paper. The Reynolds numbers, 6430, 10,770, 15,740 and 19,670 are close enough to ours 6000, 10,000, 14,500, and 20,000, that we can use value of the parameters in [@KBK19]. 
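The series defining the prefactor $C_1$ converges rapidly and can be evaluated directly. The sketch below is our own illustration: the parameters `a`, `m`, `C` and the viscosity `nu` are placeholders standing in for the fitted values tabulated in Ref. [@KBK19], which are not reproduced here.

```python
import numpy as np

def C1(a, m, C, nu, zeta1=0.37, kmax=200000):
    # Truncated evaluation of C_1 in S_1(r) ~ C_1 r^{zeta_1}, following the
    # projected series above with |d_k| = a / (a^2 + k^m).
    k = np.arange(1, kmax + 1, dtype=float)
    terms = a / ((a**2 + k**m) * (1.0 + (4.0 * np.pi**2 * nu / C) * k**(4.0 / 3.0)))
    return (4.0 * np.pi**zeta1 / C) * terms.sum()

# Example call with purely illustrative parameter values:
print(C1(a=5.0, m=1.5, C=1.0, nu=1.0e-4))
```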
Using these parameter values in the series above gives Table I, where $A_1 \sim K|y^*|^{\zeta_1} C_1$, see Section \[sec:SCT\], and the proportionality factor $K|y^*|^{\zeta_1} = 1/12.952$ is computed at the Reynolds number $15,470$, where the approximated $A_1$ coincides with the measured $A_1$. The $\log$ functions with coefficient $A_1$, from the third column in Table I, and using the constant $B_1$ from the fourth column in Table I, are then compared to the experimental and theoretical values in Fig. \[fig:mean variation\]. The spanwise Townsend-Perry constants, for the spanwise fluctuations, can be computed similarly by projecting onto the spanwise lag variable ${\bf t}=(0,t,0).$ In Fig. \[fig:mean variation\] panel (a), the Townsend-Perry constant $A_1$ computed by the SCT does not agree with the measured slope. This was already observed in Ref. [@KBK19], since for low Reynolds numbers the $C_1$s do not provide a good approximation to the $A_1$s. They only do so for large Reynolds numbers, and the discrepancy in panel (a) occurs at the smallest Reynolds number. This does not happen for the Generalized Townsend-Perry constants; the reasons are explained in Ref. [@KBK19], and for them the $C_p$s, $p \ge 2$, provide good approximations to the $A_p$s for all Reynolds numbers.

  $Re_\lambda$   $C_1$    $A_1$   $B_1$
  -------------- -------- ------- --------
  6000           9.449    0.730   9.373
  10,000         15.628   1.207   13.073
  14,500         15.500   1.197   13.573
  20,000         14.994   1.158   13.673

  : Here, the approximate $A_1$ value is computed from $C_1$ using the proportionality factor $A_1=C_1/(K|y^*|^{\zeta_1})=C_1/12.952$.

![Sketch of the instantaneous streaks, in the streamwise direction, and the wall-attached eddies, in the spanwise direction.[]{data-label="fig:wx1"}](Figure_4.pdf){width=".4\textwidth"}

Discussion {#sec:summary}
==========

We used the spectral theory of the MVP and the variation profile to represent both, and compare with experiment [@Sa18] for a range of Reynolds numbers. Assuming that the wall shear stress is a fluctuating quantity, we can derive the log-law for the variation (\[eq:l-lfluct\]) that was proposed by Townsend and measured by Perry and Chong. This law involves the Townsend-Perry constants. This was first done in the large Reynolds number limit and then for general Reynolds numbers. The Reynolds number dependence of the Townsend-Perry constants is determined by the stochastic closure theory [@BC16], [@KBK19]. We derive the log-law for the higher moments of the fluctuations and the Generalized Townsend-Perry constants based on the functional form of the variation and use the stochastic closure theory to express them in terms of the Kolmogorov-Obukhov coefficients of the structure functions of turbulence [@KBBS17]. This confirms the results in Refs. [@BC16] and [@KBK19]. The spectral function $I$ derived in Ref. [@GGGC10] plays a central role in this theory. It can be considered to be the analytic expression of Townsend's theory of wall-attached eddies. It quantifies when the first eddies appear at the boundary of the viscous and the buffer layer and when they are fully developed in the inertial layer. It even quantifies the limit of their influence in the energetic wake. By introducing the spectral theory into the analysis, it resolves many of the issues that we are faced with in boundary layer turbulence. The $I$-function corresponds to the Kolmogorov-Obukhov cascade $k_x^{-5/3}$ in the inertial layer, but in the buffer layer another cascade $k_x^{-1}$ dominates the fluctuations, although its influence on the MVP is small. 
This is an inverse cascade that can accelerate larger and larger attached eddies. The energy transfer of this cascade is captured by the $I$-function in the buffer layer, $I_b$. With it we are able to produce the functional form of the averaged fluctuations squared in the buffer layer. Once in the inertial layer the original $I$-function dominates again. The final confirmation of this spectral theory is how we are able to improve the fit to experimental values of the MVP in Ref. [@GGGC10], by use of the $I_b$ function in the buffer layer. Although this effect on the MVP is small, the attached eddies siphon a small amount of energy from the MVP in the buffer layer. We model this by a linear combination of the $I$ and $I_b$ functions, $(1-a)I+aI_b$, in the buffer layer, where $a$ is small. This produces a better fit to the measured MVP in the buffer region as shown in Figure \[fig:mean velocity\] (a), whereas the fit without this linear combination, shown in Figure \[fig:mean velocity\] (b), is not as good. It is fair to ask what the Townsend attached eddies actually look like since our spectral method is based on them. Unlike the streamwise streaks and associated vortices that have been visualized since the experiments of Kline et al. in the 1960s, see Refs. [@Kl67] and [@Ji99], the attached eddies are difficult to visualize, either in experiments or simulations. We provide a sketch in Fig. \[fig:wx1\], where streamwise streaks are visualized gradually lifting from the boundary by the flow, and perpendicular to them are spanwise attached eddies being deformed by the alternating slow and fast streamwise flow into a hairpin vortex. This does happen both in experiments and observations, see Ref. [@MM19]. However, these hairpin vortices are made unstable by the striations in the streamwise flow and the typical attached eddies are irregular in shape, with the general feature of being stretched by the flow and attached to the wall. One must interpret their influence in a statistical sense.

#### Acknowledgements: {#acknowledgements .unnumbered}

We are thankful to Ivan Marusic, Milad Samie and Christian E. Willert for kindly sharing with us the wind turbulence experimental data, and to Joe Klewicki for useful conversations. We are grateful to Knut Bauer for providing us with the graphic illustrations. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958 through the Kavli Institute for Theoretical Physics.
1
--- abstract: | We study Padé interpolation at the node $z=0$ of functions $f(z)=\sum_{m=0}^{\infty} f_m z^m$, analytic in a neighbourhood of this node, by [*amplitude and frequency operators*]{} (*sums*) of the form $$\sum_{k=1}^n \mu_k h(\lambda_k z), \qquad \mu_k,\lambda_k\in \mathbb{C}.$$ Here $h(z)=\sum_{m=0}^{\infty} h_m z^m$, $h_m\ne 0$, is a fixed (*basis*) function, analytic at the origin, and the interpolation is carried out by an appropriate choice of *amplitudes* $\mu_k $ and *frequencies* $\lambda_k$. The solvability of the $2n$-multiple interpolation problem is determined by the solvability of the associated moment problem $$\sum_{k=1}^n\mu_k \lambda_k^m={f_m}/{h_m}, \qquad m=\overline{0,2n-1}.$$ In a number of cases, when the moment problem is consistent, it can be solved by the classical method due to Prony and Sylvester, moreover, one can easily construct the corresponding interpolating sum too. In the case of inconsistent moment problems, we propose a regularization method, which consists in adding a special binomial $c_1z^{n-1}+c_2 z^{2n-1}$ to an amplitude and frequency sum so that the moment problem, associated with the sum obtained, can be already solved by the method of Prony and Sylvester. This approach enables us to obtain interpolation formulas with $n$ nodes $\lambda_k z$, being exact for the polynomials of degree ${\leqslant}2n-1$, whilst traditional formulas with the same number of nodes are usually exact only for the polynomials of degree ${\leqslant}n-1$. The regularization method is applied to numerical differentiation and extrapolation. author: - Petr Chunaev and Vladimir Danchenko title: | Approximation by\ amplitude and frequency operators --- Introduction and statement of the problem ========================================= In [@Dan2008; @DanChu2011; @Chu2010; @Chu2012] the so-called *$h$-sums* of the form $$\label{h-sum} \mathcal{H}_n(\{\lambda_k\},h;z)=\sum_{k=1}^n\lambda_k h(\lambda_k z), \qquad z, \lambda_k\in \mathbb{C},\qquad n\in \mathbb{N},$$ are studied. Hereinafter $h(z)=\sum_{m=0}^\infty h_mz^m$ is a function, analytic in a disc $|z|<\rho$, $\rho>0$. We call it *a basis function*. Obviously, $\mathcal{H}_n(z)$ is well-defined and analytic in the disc $|z|<\rho\cdot \min_{k=\overline{1,n}} |\lambda_k|^{-1}$. In [@Dan2008; @DanChu2011; @Chu2010; @Chu2012] operators $\mathcal{H}_n(\{\lambda_k\},h;z)$ are used as a tool for $n$-multiple (Padé) interpolation and approximation of functions $f$, analytic in a neighbourhood of the origin. In particular, it is shown in [@Dan2008] that if $h_m\ne 0$, $m\in \mathbb{N}_0$, then *there always exists a unique set of the numbers* $\lambda_k=\lambda_k(f,h,n)$ such that $$f(z)=\mathcal{H}_n(\{\lambda_k\},h;z) +O(z^n), \qquad z\to 0.$$ On the other hand, in the above-mentioned papers $h$-sums are also used as operators of differentiation, integration, interpolation and extrapolation on certain classes of functions, holomorphic in a fixed neighbourhood of the origin. In this case the numbers $\lambda_k$ are already independent of individual functions $f$ from the class and hence are of a universal kind. 
For instance, the following formulas for numerical differentiation and integration, being exact for the polynomials of degree ${\leqslant}n-1$, are valid [@Dan2008]: $$\label{300} zh'(z)\approx -h(z)+\sum_{k=1}^{n}\lambda_{1,k} h(\lambda_{1,k} z);\quad \int_{0}^{z} h(t)\,dt\approx z \sum_{k=1}^{n}\lambda_{2,k} h(\lambda_{2,k} z).$$ Here the numbers $\lambda_{l,k}$ are absolute constants, being the roots of the polynomials $P_{l,n}$ $(l=1,2)$, which can be defined recursively as follows. Let $P_{l,0}=1$, $v_{l,1}=-1$ $(l=1,2)$, then for $k=1,2,\ldots$ we have $$P_{l,k}=\lambda P_{l,k-1}+v_{l,k},\quad v_{1,k}=-1-\sum_{j=1}^{k-1}\left(1-\frac{j}{k}\right)v_{1,j},\quad v_{2,k}=-\frac{1}{k^2}-\sum_{j=1}^{k-1}\frac{v_{2,j}}{k(k-j)}.$$ In 2013 we proposed [@CD-B; @CD-K] a natural generalization of the $h$-sums, the so-called *amplitude and frequency operators* (*sums*) of the form $$\label{gH} H_n(z)=H_n(\{\mu_k\},\{\lambda_k\},h;z):=\sum_{k=1}^n \mu_k h(\lambda_k z),\qquad \mu_k,\lambda_k\in \mathbb{C},$$ where *amplitudes* $\mu_k$ and *frequencies* $\lambda_k$ are parameters, being independent of each other. In the preprint [@CD-arxiv] and the present paper we give a detailed exposition of the results announced in [@CD-B; @CD-K]. Later on, operators of the form (\[gH\]) were studied in [@YF] but with a fundamentally different approach for constructing them. Namely, there was proposed not an analytic method for that as in this paper, but a numerical one with small residuals (it will be discussed in Section \[Section6\]). The number $n$ in (\[gH\]) is called *the order* of the amplitude and frequency operator, if there are no zeros among the numbers $\mu_k$ and, moreover, the numbers $\lambda_k$ are pairwise distinct (otherwise the order of the operator is less than $n$). As in the case of $h$-sums, we regard amplitude and frequency sums both as approximants of individual functions $f$, analytic at the origin (and then $\mu_k=\mu_k(f,h,n)$ and $\lambda_k=\lambda_k(f,h,n)$), and as special operators (of differentiation, extrapolation, etc.), acting on certain classes of functions (and then $\mu_k=\mu_k(n)$ and $\lambda_k=\lambda_k(n)$). Introduction of the additional parameters $\mu_k$ enables us to formulate the problem of $2n$-multiple (Padé) interpolation at $z=0$ by means of the amplitude and frequency sums (in contrast to the $h$-sums, when $n$-multiple interpolation is only possible). 
Indeed, given Maclaurin series $$f(z)=\sum_{m=0}^{\infty} f_m z^m, \qquad h(z)=\sum_{m=0}^{\infty} h_m z^m, \qquad \text{where } f_{m} = 0, \text{ if } h_{m} = 0,$$ we introduce the numbers $s_m=s_{m}(h,f)$: $$\label{ssmm} s_{m}(h,f)=0, \hbox{ if } f_{m}=0; \qquad s_{m}(h,f)=f_{m}/h_{m}, \hbox{ if } f_{m}\ne 0,\qquad m \in \mathbb{N}_0.$$ For $|z|<\rho\cdot\min_{k=\overline{1,n}} |\lambda_k|^{-1}$ the operator (\[gH\]) has the form $$\label{gH+} H_n(z)=\sum_{k=1}^n \mu_k \sum_{m=0}^{\infty} h_m (\lambda_k z)^m = \sum_{m=0}^{\infty}h_{m} \left(\sum_{k=1}^n \mu_k \lambda_k^m\right) z^{m},$$ hence to realize the $2n$-multiple interpolation $$f(z)=H_n(\{\mu_k\},\{\lambda_k\},h;z)+O(z^{2n}), \qquad z\to 0, \label{2n-interpolation}$$ or, which is the same, $$f^{(m)}(z)=H_n(\{\mu_k\lambda_k^m\},\{\lambda_k\},h^{(m)};z) +O(z^{2n-m}),\quad m=\overline{0, 2n-1}, \quad z\to 0, \label{2n-bis-interpol}$$ the following conditions on the so-called *generalized power sums* (*moments*) $S_m$ should be satisfied: $$\label{SRS} S_m:=\sum_{k=1}^n \mu_k \lambda_k^m=s_m,\qquad m=\overline{0,2n-1}.$$ The system (\[SRS\]) with unknown $\mu_k$, $\lambda_k$ and given $s_m$ is well known as *the discrete moment problem*. Classical works of Prony, Sylvester, Ramanujan and papers of many contemporary researchers are devoted to the problem of its solvability (see [@Prony; @Sylvester; @Ramanujan; @Lyubich; @Lyubich2; @Kung1]). Note that the system (\[SRS\]) is bound up with Hankel forms, orthogonal polynomials, continued fractions, Gaussian quadratures and Padé approximants (a detailed review of these connections is given in [@Lyubich; @Lyubich2] and also in Section \[par2\]). Suppose that the system (\[SRS\]) is solvable. Then, following [@Lyubich], we call the system and its solution *regular* if all $\lambda_k$ are pairwise distinct and all $\mu_k$ are not vanishing. In the case of regular systems (\[SRS\]) we call the problem of $2n$-multiple interpolation (\[2n-interpolation\]) *regularly solvable*. One of the methods for solving regular systems (\[SRS\]) is due to Prony [@Prony]. Consider the following product of determinants: $$\label{equ1} \left| \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0\\ 0 & \mu_1 & \mu_2 & \ldots & \mu_n\\ 0 & \mu_1\lambda_1 & \mu_2\lambda_2 & \ldots & \mu_n\lambda_n\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ 0 & \mu_1\lambda_1^{n-1} & \mu_2\lambda_2^{n-1} & \ldots & \mu_n\lambda_n^{n-1}\\ \end{array} \right| \cdot \left| \begin{array}{ccccc} 1 & \lambda & \lambda^2 & \ldots & \lambda^n\\ 1 & \lambda_1 & \lambda_1^2 & \ldots & \lambda_1^n\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ 1 & \lambda_n & \lambda_n^2 & \ldots & \lambda_n^n\\ \end{array} \right|.$$ By regularity, the former of them does not vanish and the latter does only for $\lambda=\lambda_k$ (as a Vandermonde determinant). On the other hand, direct multiplication of the determinants and taking into account (\[SRS\]) give the following determinant, which is a polynomial of $\lambda$: $$\label{G_n} G_n(\lambda):=\sum_{m=0}^n g_{m} \lambda^{m}= \left| \begin{array}{ccccc} 1 & \lambda & \lambda^2 & \ldots & \lambda^n\\ s_0 & s_1 & s_2 & \ldots & s_n\\ s_1 & s_2 & s_3 & \ldots & s_{n+1}\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ s_{n-1} & s_n & s_{n+1} & \ldots & s_{2n-1}\\ \end{array} \right|.$$ We call $G_n$ [*a generating polynomial*]{} (for functional properties of such polynomials, including orthogonality, completeness, etc., see [@Lyubich]). 
Consequently, the numbers $\lambda_k$ are the simple roots of the generating polynomial $G_n$. If the equation $G_n(\lambda)=0$ is solved, we substitute the numbers found into the system (\[SRS\]). Finally, extracting any $n$ rows from (\[SRS\]) leads to a linear system of equations with unknowns $\mu_k$, which has a unique solution with non-vanishing components. We now give known formulas for calculation of the numbers $\mu_k$ (see, for example, [@DanDod2013]). Let $\sigma_m$ and $\sigma_m^{(k)}$ denote elementary symmetric polynomials of the form $$\sigma_m=\sigma_m(\lambda_1,\ldots,\lambda_n)= \sum_{1 {\leqslant}j_1<\ldots<j_m {\leqslant}n}{\lambda_{j_1}\ldots \lambda_{j_m}}, \quad m=\overline{1,n},$$ $$\sigma_0=1,\quad \sigma_m^{(k)}=\sigma_m(\lambda_1,\ldots,\lambda_{k-1},0, \lambda_{k+1},\ldots,\lambda_n), \quad k=\overline{1,n}.$$ \[400\] The numbers $\mu_k$ are the scalar products $\mu_k=({\mathcal{L}}_k\cdot {\mathcal{S}})$, where ${\mathcal{S}}= (s_0,\ldots, s_{n-1})$ and $$\label{444} {\mathcal{L}}_k=\frac{{g}_n}{{G}'_n(\lambda_k)} \left((-1)^{n-1}\sigma_{n-1}^{(k)}, \ldots , (-1)^{n-m}\sigma_{n-m}^{(k)}, \ldots , -\sigma_{1}^{(k)},\; 1\right).$$ If $V=V(\lambda_1,\lambda_2,\ldots,\lambda_n)$ is a Vandermonde matrix of the first $n$ equations in the system (\[SRS\]), then the elements of the $k$th row ${\mathcal{L}}_k$ of the matrix $V^{-1}$ have the form (\[444\]); see [@DanDod2013] for more details. We now formulate a known criterion of regularity in terms of roots of the polynomial $G_n$. Originally this criterion was obtained in an algebraic form by Sylvester [@Sylvester] (see also [@Kung1 Ch. 5]); later on Lyubich [@Lyubich; @Lyubich2] stated it in the analytical terms, which we use in the present paper. \[Criterion\_Syl\] The system $(\ref{SRS})$ is regular if and only if the generating polynomial $G_n$ is of degree $n$ and all its roots are pairwise distinct. Moreover, the regular system has a unique solution. This theorem immediately implies the following proposition about regular solvability of the interpolation problem (\[2n-interpolation\]) for a function $f$, being analytic in a neighbourhood of the origin. \[th1\] Suppose that the generating polynomial $G_n$, constructed using the numbers $s_m=s_m(h,f)$, $m=\overline{0,2n-1}$, is of degree $n$ and all its roots are pairwise distinct. Then the amplitude and frequency operator $H_n$ is uniquely determined from the system $(\ref{SRS})$ and realizes the $2n$-multiple interpolation $(\ref{2n-interpolation})$ of the function $f$ at the node ${z=0}$. \[remark\_even\_odd\] If the function $f$ is even or odd, then the local precision of the interpolation can be increased. Indeed, if $f$ is even, then $f(z)=\tilde{f}(t)$, $t=z^2$, for some function $\tilde{f}$, analytic at the point $t=0$, and the interpolation (\[2n-interpolation\]) under the assumptions of Theorem \[th1\] with the basic function $h$ gives $$\label{interpolaton_even} \tilde{f}(t)=\sum_{k=1}^n\mu_kh(\lambda_kt)+O(t^{2n})\quad \Leftrightarrow \quad f(z)=\sum_{k=1}^n\mu_kh(\lambda_kz^2)+O(z^{4n}),$$ where it is necessary that $f_{2m}\neq 0 \Rightarrow h_m\neq 0$, see (\[ssmm\]). If $f$ is odd, then $f(z)=z\tilde{f}(t)$, $t=z^2$, and analogously $$f(z)=\sum_{k=1}^n\mu_kzh(\lambda_kz^2)+O(z^{4n+1}).$$ Note that, in contrast to similar discussions in [@YF Corollary  2], here we do not require the functions $f$ and $h$ to be of the same parity. 
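The procedure just described is easy to prototype. The following sketch (ours; it assumes NumPy and a regular system) finds the monic polynomial proportional to $G_n$ from the Hankel system formed by the moments, takes its roots as the frequencies and recovers the amplitudes from the first $n$ equations of (\[SRS\]), as in Lemma \[400\]; it is checked on moments generated by known $\mu_k$, $\lambda_k$.

```python
# Prony-type solution of a regular discrete moment problem (SRS) --
# a minimal numerical sketch (ours), assuming only NumPy.
import numpy as np

def solve_moment_problem(s):
    """Given s_0,...,s_{2n-1}, return (mu, lam) with sum_k mu_k lam_k^m = s_m."""
    s = np.asarray(s, dtype=complex)
    n = len(s) // 2
    H = np.array([[s[i + j] for j in range(n)] for i in range(n)])   # Hankel block
    c = np.linalg.solve(H, -s[n:])       # monic multiple of G_n: lam^n + c_{n-1} lam^{n-1} + ...
    lam = np.roots(np.concatenate(([1.0 + 0j], c[::-1])))
    V = np.vander(lam, n, increasing=True).T                         # V[m, k] = lam_k^m
    mu = np.linalg.solve(V, s[:n])       # amplitudes, cf. Lemma [400]
    return mu, lam

# sanity check: recover known amplitudes and frequencies from their moments
mu0, lam0 = np.array([1.0, -2.0, 0.5]), np.array([0.3, 1.1, -0.7])
s = [np.sum(mu0 * lam0**m) for m in range(6)]
mu, lam = solve_moment_problem(s)
print(lam, mu)          # lam0 and mu0, up to ordering and rounding
```

For the interpolation problem (\[2n-interpolation\]) one feeds the moments $s_m=s_{m}(h,f)$ from (\[ssmm\]) into such a routine.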
\[remark2\] One can consider the interpolation problem (\[2n-interpolation\]) with $O(z^{M})$, where $M>2n$ or $M<2n$, instead of $O(z^{2n})$. Then we accordingly get overdetermined and underdetermined moment systems of the type (\[SRS\]): $$\label{SRS_M} \sum_{k=1}^n \mu_k \lambda_k^m=s_m,\qquad m=\overline{0,M-1}.$$ In some cases the process of solving the consistent systems (\[SRS\_M\]) with $M\ne 2n$ can be reduced to the one of the standard systems (\[SRS\]) (with $M=2n$). It can be done by eliminating the superfluous equations or adding the missed ones (regarding this see also [@Lyubich §5]). But in the present paper we do not consider the case $M\ne 2n$ for the following reasons. The overdetermined systems (\[SRS\_M\]) belong to the non-regular problems of the form (\[SRS\]), where $2n$ is exchanged by $M$ and $\mu_{k}=0$ for $k{\geqslant}n+1$. To apply the Prony-Sylvester method or some other analytical approaches in the standard subsystem of the system (\[SRS\_M\]), one needs a preliminary analysis of its consistency. As far as we know, there exist no reasonably general methods for this purpose. Moreover, it can be seen from the corresponding overdetermined interpolation problem of the form (\[2n-interpolation\]), where $O(z^{2n})$ is exchanged by $O(z^{M})$, that its consistency is quite rigidly connected with individual properties of the functions $f$ and $h$ and thus one has a little chance to obtain more or less general interpolation formulas of the overdetermined type. For example, if a solvable system of the form (\[SRS\]) is supposed to be consistent with the next equation $S_{2n}=s_{2n}$, then the coefficient $f_{2n}$ cannot be chosen arbitrarily as it depends on the parameters $s_0,\ldots s_{2n-1},h_{2n}$ in a certain way. Indeed, the following generalized Newton’s formula, connecting the coefficients $g_m$ of the generating polynomial $G_n$ and the moments $S_{v+m}$, is well-known: $$\label{corollary_form_Newton++++} \sum_{m=0}^{n} S_{v+m}{g}_{m}=0, \qquad v=0,1,\ldots.$$ Therefore the values $s_m=s_m(h,f)$ (see (\[ssmm\])) of the moments $S_m$ with ${m>2n-1}$ (simultaneously with the coefficients $f_{m}=s_m h_m$) in a solvable overdetermined system of the form (\[SRS\_M\]) are uniquely determined from the system (\[SRS\]) (we suppose that $h$ is fixed). A similar situation arises when one obtains formulas for numerical differentiation and extrapolation (see the corresponding sections below). In these problems the sequences $\{s_m\}$ have a certain arithmetic structure and do not satisfy (\[corollary\_form\_Newton++++\]) for $v=n$ and $S_{2n}=s_{2n}$, i.e. one gets solvable systems of the form (\[SRS\]) with $M=2n$ although adding one more equation of the required form leads to an inconsistent system (see, for example, (\[r\_n\_diff\])). As regards the underdetermined moment systems (when $M<2n$) and the corresponding Padé interpolation, they do not arise in the present paper as we solve the interpolation problem of the order higher than $M$ with the same number of independent parameters $\{\mu_k\}_{k=1}^n$ and $\{\lambda_k\}_{k=1}^n$. Nevertheless, the systems (\[SRS\_M\]) (consistent and even inconsistent) are of independent interest, moreover, they are actively studied in numerical analysis. Various methods for finding an approximate solution (in different senses) to them were developed, for example, in [@Beylkin; @Kung_residual; @Beylkin2; @Potts; @YF] (characteristics of one such method are discussed in Section \[Section6\]). 
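A quick numerical check of the relation (\[corollary\_form\_Newton++++\]) may be reassuring, since the same identity is used again in Section \[par-diff\]; the snippet below is ours (NumPy only, with arbitrary amplitudes and frequencies).

```python
# The coefficients g_m of G_n annihilate every shifted block of moments
# (generalized Newton formula) -- a small numerical check (ours).
import numpy as np

mu  = np.array([1.0, -2.0, 0.5])
lam = np.array([0.3, 1.1, -0.7])
n   = len(mu)
S   = np.array([np.sum(mu * lam**m) for m in range(3 * n)])   # moments S_0,...,S_{3n-1}

g = np.poly(lam)[::-1]          # g[m] = coefficient of lam^m in prod_k (lam - lam_k)
print([abs(np.sum(S[v:v + n + 1] * g)) for v in range(2 * n)])   # all ~1e-15
```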
Amplitude and frequency operators in classical problems {#par2} ======================================================= We now consider several classical problems, which are bound up with the class of amplitude and frequency operators. [**.1. Hamburger moment problem.**]{} Theorem \[th1\] raises the question about interpolating amplitude and frequency operators with real $\lambda_k$ and $\mu_k$ (in particular, with $\mu_k>0$). This question is well-studied [@Akhiezer1 Ch. 2] and can be settled by discretization of the following classical Hamburger problem: given a sequence of real numbers $\{s_m\}$, $m\in \mathbb{N}_0$, find a non-negative Borel measure $\mu$ on $\mathbb{R}$ such that $$\label{Hamburger} s_m = \int_{-\infty}^\infty \lambda^m\,d \mu(\lambda),\qquad m\in \mathbb{N}_0.$$ Namely, the following criterion is valid [@Akhiezer1 Ch. 2, §1]: the problem (\[Hamburger\]), where $m=\overline{0,2n-1}$, has a unique solution with the spectrum, consisting of $n$ pairwise distinct points $\lambda_1,\ldots,\lambda_n$, if and only if the leading principal minors $\Delta_k$ of order $k$ of the infinite Hankel matrix $(s_{i+j})_{i,j=0}^\infty$ satisfy the following conditions: $$\label{Hamb_condition} \Delta_1>0, \qquad \Delta_2>0,\qquad \ldots \qquad \Delta_{n}>0,\qquad \Delta_{n+1}=\Delta_{n+2}=\ldots=0.$$ This implies that the discrete moment problem (\[SRS\]) is regularly solvable in real numbers $\lambda_k$ (this is equivalent to the fact that the polynomial (\[G\_n\]) has $n$ pairwise distinct real roots) and $\mu_k>0$ if and only if the sequence (\[ssmm\]) satisfies the first $n$ inequalities in (\[Hamb\_condition\]). Note that then the sequence $\{s_m\}_{m=0}^{2n-1}$ is called *positive*. [**.2. Gauss and Chebyshev quadratures.**]{} Given a function $f$, analytic in a $\rho$-neighbourhood of the origin, suppose that $$F(x):=\frac{1}{x}\int_{-x}^{x}f(t)\,dt,\qquad 0<x <\rho.$$ To construct the amplitude and frequency operator $H_n(\{\mu_k\},\{\lambda_k\},f;x)$ for $F(x)$, we get the (positive) moment sequence $s_{m}=\frac{1-(-1)^{m+1}}{m+1}$, $m=\overline{0,2n-1}$, from (\[ssmm\]) and then consider the corresponding discrete moment problem (\[SRS\]). It is well-known that it is regular for any $n$, moreover, the corresponding generating polynomial (\[G\_n\]) is the Legendre polynomial $P_n$ (we write it in Rodrigues’ form) multiplied by a non-zero constant [@Krylov Ch. 7, §2]: $$G_n(x)=P_n(1) P_n(x), \qquad P_n(x):=\frac{1}{2^n n!}\frac{d^n}{dx^n}(x^2-1)^n.$$ Therefore the frequencies $\lambda_k$ are real, pairwise distinct and belonging to the interval $(-1,1)$ (as roots of the Legendre polynomials, forming an orthogonal system on $[-1,1]$). The amplitudes $\mu_k$ are determined via the numbers $\lambda_k$ by the well-known formulas [@Krylov Ch. 10, §3]: $$\mu_k = \frac{2}{\left( 1-\lambda_k^2 \right) [P'_n(\lambda_k)]^2}>0.$$ Thus we obtain the interpolation formula $$\label{GAUSS} \frac{1}{x}\int_{-x}^{x}f(t)\,dt=\sum_{k=1}^n\mu_kf(\lambda_kx)+r_n(x), \qquad r_n(x)=O(x^{2n}),$$ which is a Gaussian quadrature for each fixed $x$. The amplitudes and frequencies depend only on $n$ but not on $f$. It is known [@Krylov Ch. 10] that the Gaussian quadratures are of the highest algebraic degree of accuracy among all the formulas of the form (\[GAUSS\]) and exact (i.e. $r_n(x)\equiv 0$) for the polynomials of degree ${\leqslant}2n-1$. In a similar manner one can obtain interpolation formulas for integrals with classical weights. 
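Before passing to other classical weights, note that the amplitudes and frequencies in (\[GAUSS\]) are exactly the Gauss–Legendre weights and nodes, so the formula is easy to try out numerically. The sketch below is ours (it assumes NumPy; the test function $\exp$ is arbitrary).

```python
# Check of (GAUSS): the frequencies/amplitudes are the Gauss-Legendre
# nodes/weights, here taken from NumPy instead of solving (SRS) by hand.
import numpy as np
from numpy.polynomial.legendre import leggauss

n = 6
lam, mu = leggauss(n)            # nodes lam_k in (-1,1) and weights mu_k > 0

for x in (0.2, 0.5, 1.0):
    exact = (np.exp(x) - np.exp(-x)) / x            # (1/x) * integral of exp over [-x, x]
    approx = np.sum(mu * np.exp(lam * x))
    print(x, abs(exact - approx))                   # decays like x^(2n) as x -> 0
```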
For example, for $$F(x):=\int_{-x}^{x}\frac{f(t)}{\sqrt{x^2-t^2}}\,dt,\qquad 0< x <\rho,$$ we have $$s_{2m}=\pi\frac{(2m)!}{(2^m m!)^2},\quad s_{2m+1}=0,\quad m=\overline{0,n-1},\qquad G_n(x)=(-2^{1-n}\pi)^n T_n(x),$$ where $T_n(x)=\cos(n\arccos x)$ are the Chebyshev polynomials of the first kind. Calculating the amplitudes $\mu_k$ via the frequencies $\lambda_k$ leads to the following Gauss-Chebyshev quadrature [@Abramovitz §25.4.38] for real $x$, $0<x <\rho$: $$\label{Gauss-Chebyshev} \int_{-x}^{x}\frac{f(t)}{\sqrt{x^2-t^2}}\,dt=\frac{\pi}{n} \sum_{k=1}^n f(\lambda_kx)+r_n(x), \quad \lambda_k=\cos \tfrac{2k-1}{2n}\pi, \quad r_n(x)=O(x^{2n}),$$ whose characteristic property is the equality of the amplitudes $\mu_k=\pi/n$. The remainder can be written more precisely: $$\label{Gauss-Chebyshev-error} r_n(x)=\frac{\pi f^{(2n)}(\xi)}{2^{2n-1}(2n)!}x^{2n},\qquad \xi\in(-x,x).$$ One can deduce it from [@Abramovitz §25.4.38] by a suitable change of variables. [**.3. Padé approximants.**]{} The Padé approximants as well as the Gaussian quadratures are closely related to the classical moment problem (see, for instance, [@Dzyadyk]). Construction of the amplitude and frequency sum $H_n(\{\mu_k\},\{\lambda_k\},h;x)$ with the basis function $h(z)=(z-1)^{-1}$ for some function $f$, analytic in a neighbourhood of the origin, leads to the sequence of moments $s_m=-f_m$, $m=\overline{0,2n-1}$. If the generating polynomial (\[G\_n\]) for this sequence is of degree $n$ and all its roots are pairwise distinct, then by Theorem \[th1\] we get the following interpolation identity: $$\label{2} f(z)=\sum_{k=1}^n\frac{\mu_k}{\lambda_kz-1}+O(z^{2n}).$$ This is a classical Padé approximant of order $[(n-1)/n]$. We recall that classical Padé approximants of order $[m/n]$ are interpolating rational functions of the form $P_m(f;z)/Q_n(f;z)$ (see, for instance, [@Baker §1.1]). Note that the method for solving the problem (\[SRS\]), proposed by Ramanujan [@Ramanujan], is equivalent to the one for construction of the interpolation formula (\[2\]) (see [@Lyubich; @Lyubich2]). [**.4. Exponential sums.**]{} Let $h(z)=\exp(z)$ be a basis function in the amplitude and frequency operator $H_n(\{\mu_k\},\{\lambda_k\},h;z)$ and $f$ be a function, which we are going to interpolate. The corresponding sequence of moments is $s_m=m!f_m$, $m=\overline{0,2n-1}$. Suppose that the problem (\[SRS\]) for this sequence is regular. Then the following formula of $2n$-multiple interpolation at the origin holds: $$f(z)=\sum_{k=1}^n \mu_k e^{\lambda_kz}+O(z^{2n}).$$ (In particular, this result has been already obtained in [@Buchmann] and [@Lyubich].) Interpolation of functions by exponential sums with *simple* equidistant nodes was considered by Prony [@Prony]. At present, many works are devoted to this method and its various modifications and applications (see, for instance, [@Korobov; @Beylkin2; @Beylkin], [@Braess Ch. 6] and references there). A vast investigation of *the exponential series* was conducted in the scientific school of Leont’ev [@Leontiev2]. It is worth mentioning here that members of the school also actively studied several *generalizations of the exponential series* (see, for instance, [@Gromov2; @Shevtsov; @Leontiev3]). Namely, they enquired into the problem of completeness of the infinite systems $\{h(\lambda_k z)\}$, where $h$ were entire functions and $\lambda_k$ given numbers. 
Consequently, they actually considered some *representations* of analytic functions $f$ by amplitude and frequency sums of infinite order, $H_\infty(\{\mu_k\},\{\lambda_k\},h;z)$, and properties of these representations (domains of convergence, admissible classes of the numbers $\lambda_k$ and functions $f$, connections between $\mu_k$ and $\lambda_k$, etc.). In contrast to this approach, we consider *approximations* by amplitude and frequency sums of finite order and respective errors. Moreover, the parameters $\mu_k$ and $\lambda_k$ are not given but uniquely determined by the functions $f$ and $h$. Furthermore, in different applications we regard amplitude and frequency sums as operators with fixed (universal) numbers $\mu_k$ and $\lambda_k$, being determined by the analytic nature of these operators. Examples ======== In this section we give several examples of approximating amplitude and frequency sums for some special functions, in particular, Bessel functions (all arising discrete moment problems are regularly solvable). We will also compare our approximants with similar ones, obtained by other authors. It is known [@Abramovitz §9.1.20-21] that the Bessel function of order zero, $J_0$, has the representation $$J_0(\pm x)=\frac{1}{\pi}\int_{-x}^{x} \frac{\exp(it)}{\sqrt{x^2-t^2}}\,dt= \frac{1}{\pi}\int_{-x}^{x}\frac{\cos(t)}{\sqrt{x^2-t^2}}\,dt, \qquad x>0.$$ From this, (\[Gauss-Chebyshev\]) and (\[Gauss-Chebyshev-error\]) we get the following high-accuracy local approximation of $J_0$ at the point $x=0$ by an amplitude and frequency sum of order $n$: $$\label{J_0} J_0(x)=\frac{1}{n}\sum_{k=1}^n \cos(x\cdot\cos \tfrac{(2k-1)\pi }{2n})+r_n(x), \qquad |r_n(x)|{\leqslant}\frac{|x|^{2n}}{2^{2n-1}(2n)!}.$$ Note that one can obtain the same formula by interpolation of the series [@Abramovitz §9.1.10] $$\label{J_0000} J_0(z)=\sum_{m=0}^\infty\frac{(-1)^m}{(2^mm!)^2}z^{2m}$$ by amplitude and frequency sums with the basis function $h(z)=\exp(z)$. Furthermore, then $$s_{2m}=\frac{(2m)!}{(2^m m!)^2},\qquad s_{2m+1}=0,\qquad m=\overline{1,n-1},$$ cf. $\{s_{2m}\}$ for the Gauss-Chebyshev quadrature in Section 2.2. If in (\[J\_0\]) we use $2n$ instead of $n$ and take into account the symmetry of the frequencies obtained and the parity of cosine, then we get the following amplitude and frequency sum (of the same order $n$): $$\label{J_0_super} H_n(x)=\frac{1}{n}\sum_{k=1}^{n} \cos(x\cdot\cos \tfrac{(2k-1)\pi}{4n}),$$ for which by (\[Gauss-Chebyshev-error\]) we have $$\label{J_0_super_error} J_0(x)\approx H_n(x),\qquad |J_0(x)-H_n(x)|{\leqslant}\frac{|x|^{4n}}{2^{4n-1}(4n)!}.$$ Note that the formula (\[J\_0\_super\]) can be also obtained via (\[interpolaton\_even\]). In [@YF Table 1 and Formula (25)] the following approximant was obtained (for the approach from [@YF] see also Section \[Section6\]): $$\label{YF_J_0_cos} J_0(x)\approx \omega_{11}(x):=\sum_{k=1}^{11}\alpha_m \cos(\beta_m x), \quad \alpha_k\in(0.086,0.096),\quad \beta_k\in(0,1).$$ In comparison with this approximant, the sum (\[J\_0\_super\]) gives much more precise results for $n=11$ in the segment $[0,30]$. Calculations in Maple show that $$\label{YF_J_0_cos++} {\rm log}_{10}\frac{|J_0(x)-\omega_{11}(x)|}{|J_0(x)-H_{11}(x)|}> M(x):=\frac{38}{x+1}, \qquad x\in [0;30].$$ Moreover, in the segment \[0,10\] the minorant $M(x)$ can be exchanged by the more precise $\tilde M(x)=44\log_{10} \frac{1}{x}+50\to\infty$, $x\to 0$. 
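The accuracy of (\[J\_0\_super\]) is easy to confirm against a library implementation of $J_0$; the sketch below is ours and assumes NumPy and SciPy.

```python
# Comparison of the sum (J_0_super) with scipy.special.j0 on [0, 10].
import numpy as np
from scipy.special import j0

def H(n, x):
    """Amplitude and frequency sum (J_0_super) of order n."""
    lam = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (4 * n))
    return np.mean(np.cos(np.multiply.outer(x, lam)), axis=-1)

x = np.linspace(0.0, 10.0, 501)
for n in (3, 5, 8, 11):
    print(n, np.max(np.abs(j0(x) - H(n, x))))
# the maximal error on this segment drops roughly like 10^(4n)/(2^(4n-1)(4n)!),
# until rounding errors dominate (already the case for n = 11)
```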
Note that the absolute error of the formula (\[YF\_J\_0\_cos\]) in the segment $[0,10]$ is quite small and close to $\varepsilon=10^{-12}$. For $30{\leqslant}x{\leqslant}40$ the absolute error of the formula (\[J\_0\_super\]) is also less than the one of (\[YF\_J\_0\_cos\]), but for $x{\geqslant}40$ the errors of both approximants can reach $10^{-1}$, thus use of them does not seem to be fair for such $x$. From (\[2n-bis-interpol\]) and (\[J\_0\_super\]) we get the following interpolation formula for the derivative of the Bessel function, $J'_0$ (see [@Abramovitz §9.1.28]): $$\label{derivative_J_0++} J'_0(x)\approx H'_n(x)= -\frac{1}{n}\sum_{k=1}^{n} \cos \tfrac{(2k-1)\pi}{4n}\, \sin(x\cdot\cos \tfrac{(2k-1)\pi}{4n})$$ with the error (see (\[Gauss-Chebyshev-error\])) $$\label{derivative_J_0_super_error} |J'_0(x)-H'_n(x)|{\leqslant}\frac{|x|^{4n-1}}{2^{4n-1}(4n-1)!}.$$ For comparison, look at the approximant from [@YF Table 2 and Formula (30)]: $$\label{YF_J_1} J'_0(x)\approx \Omega_{13}(x):=-\sum_{k=1}^{13}\alpha_m \sin(\beta_m x),\quad \alpha_k\in(0.001,0.08),\quad \beta_k\in(0.12,1).$$ Calculations show that the formula (\[derivative\_J\_0++\]) with the same $n=13$ has noticeably less error in the segment $[0,40]$: $$\label{YF_J_0_cos++++} {\rm log}_{10}\frac{|J'_0(x)-\Omega_{13}(x)|}{|J'_0(x)-H'_{13}(x)|}> M_1(x):=\frac{40}{x+1}, \qquad x\in [0,40].$$ Moreover, in the segment $[0,10]$ the minorant $M_1(x)$ can be exchanged by the more precise $ \tilde M_1(x)=43\log_{10} \frac{1}{x}+48\to\infty$, $x\to 0$. Note that the absolute error of (\[YF\_J\_1\]) in $[0,10]$ is close to $\varepsilon=10^{-12}$. For $40{\leqslant}x{\leqslant}45$ the absolute error of the formula (\[derivative\_J\_0++\]) is also less than the one of (\[YF\_J\_1\]). For $x{\geqslant}45$ the absolute error of both formulas can be $10^{-2}$ and thus use of them may be unreasonable. Let us obtain one more representation for the function $J_0$ by taking $$\label{sinc} \textrm{sinc}\; x=\frac{\sin x}{x}=\sum_{m=0}^\infty \frac{(-1)^m}{(2m+1)!}x^{2m}$$ as a basis function and using the approach for interpolation of even functions from Remark \[remark\_even\_odd\]. Namely, we take into account that $J_0$ and $\textrm{sinc}$ are even and interpolate the function $\tilde{f}(t):=J_0(x)$, $t=x^2$, by amplitude and frequency sums with the basis function ${h}(t)=\textrm{sinc}\; x$. As it can be easily checked (see (\[J\_0000\]) and (\[sinc\])), then $s_m=\frac{(2m+1)!}{(2^mm!)^2}$, $m=\overline{0,2n-1}$. Solving the corresponding problem (\[SRS\]) (in all examples considered we obtained non-negative $\lambda_k$ and real $\mu_k$) yields $$\tilde{f}(t)=\sum_{k=1}^n\mu_k h(\lambda_k t)+r_n(t),\qquad r_n(t)=O(t^{2n}),$$ or, which is the same, $$\label{J_0_sinc} J_0(x)=\sum_{k=1}^n\mu_k \, \textrm{sinc}(\sqrt{\lambda_k}x)+r_n(x),\qquad r_n(x)=O(x^{4n}).$$ We now compare (\[J\_0\_sinc\]) with the approximate equality $J_0(x)\approx \sum_{k=1}^{11}\alpha_k\,\textrm{sinc}(\beta_k z)$, obtained in [@YF Table 3 and Formula (32)] (the amplitudes and frequencies were found there by a numerical method). It turns out that the amplitude and frequency sum in (\[J\_0\_sinc\]) with the same order $n=11$ gives more precise results for $x\in[0;42]$ (especially in a small neighbourhood of $x=0$). For $x{\geqslant}40$ the absolute errors of both formulas can already exceed $10^{-3}$. 
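The derivative formula (\[derivative\_J\_0++\]) can be checked in the same way, using the identity $J_0'=-J_1$; again a sketch of ours, assuming NumPy and SciPy.

```python
# Comparison of the differentiated sum (derivative_J_0++) with -J_1.
import numpy as np
from scipy.special import j1

def dH(n, x):
    lam = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (4 * n))
    return -np.mean(lam * np.sin(np.multiply.outer(x, lam)), axis=-1)

x = np.linspace(0.0, 10.0, 501)
for n in (5, 8, 13):
    print(n, np.max(np.abs(-j1(x) - dH(n, x))))    # decreases rapidly with n
```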
The authors of [@Beylkin; @Potts] consider numerical methods for interpolation of the function $J_0$ in equidistant nodes by amplitude and frequency sums of the form $\sum_{k=1}^{n}\alpha_k \exp(\beta_k x)$ with complex (but not purely imaginary) $\alpha_k$ and $\beta_k$ and rapidly decreasing moduli of the exponentials. This enables them to obtain good approximants of $J_0$ on large intervals. For example, in [@Beylkin] such an approximant with $n=28$ has the error $\varepsilon{\leqslant}10^{-10}$ for $x\in[0,100\pi]$. The approximant with $n=20$, obtained in [@Potts Example 4.5] by slightly different methods, has the error $\varepsilon{\leqslant}10^{-4}$ for $x\in [0,1000]$. Note that the sums (\[J\_0\_super\]) and (\[derivative\_J\_0++\]) are not recommended for use on such large intervals because of their local character. To guarantee a reasonable rate of approximation, their order must be comparable with the length of the interval considered (as can be seen from the order-precise estimates (\[J\_0\_super\_error\]) and (\[derivative\_J\_0\_super\_error\])). For example, for $n=20$ they do not give reasonable quality of approximation in the segment $[0,1000]$, but in the subsegments $[0,40]$, $[0,30]$ and $[0,20]$ the approximation errors are less than $10^{-16}$, $10^{-25}$ and $10^{-38}$, respectively.

Analytic regularization of the interpolation by amplitude and frequency operators {#regularization_section}
=================================================================================

[**.1. Variation of the moments.**]{} The $2n$-multiple interpolation problem becomes substantially harder when the regularity conditions of Theorem \[th1\] are not satisfied; in particular, it can then be inconsistent. In order to avoid this difficulty, we propose a method of analytic regularization of the discrete moment problem. It consists in a certain variation of the right-hand sides $s_m$ of the system (\[SRS\]), namely, in adding the generalized power sums ${\sigma}\sum_{k=1}^{\nu}\alpha_k (r\beta_k)^m$ to them (another approach is described in Remark \[REG+++\]). The parameters $\alpha_k$, $\beta_k$ are independent of $s_m$, and $\sigma$ and $r$ depend only on $\max \{|s_m|\}$. From the point of view of the interpolation problem, this means that we obtain a new, regularly solvable problem of the form (\[2n-interpolation\]) (and (\[SRS\])), in which $f$ is replaced by a varied function $\tilde f$ such that $\tilde f(z)-f(z)$ is the amplitude and frequency sum $\sigma H_{\nu}(\{\alpha_k\},\{r \beta_k\},h;z)$. We emphasize that this variation of $s_m=s_{m}(h,f)$ is universal within the interpolation problem, as it depends only on $\max \{|s_m|\}$ and not on the functional properties of $f$ and $h$. Moreover, as we will see below, if $\alpha_k$, $\beta_k$, $\sigma$, $r$ are chosen appropriately, then the difference $\tilde f-f$ is just a certain binomial. Let us describe the regularization method in detail. Instead of the function $f$, whose corresponding problem (\[SRS\]) is not regular, we introduce the varied function $$\label{F(z)} {\tilde f}(z):=f(z)+\sigma H_{\nu}(\{\alpha_k\},\{r \beta_k\},h;z),\qquad {\nu}\in \mathbb{N},$$ where $\alpha_k$, $\beta_k$, $\sigma$, $r$ are complex constants.
Since $${\tilde f}(z)= \sum_{m=0}^\infty f_m z^m +\sigma\sum_{k=1}^{\nu}\alpha_k \sum_{m=0}^\infty h_m (r \beta_kz)^m= \sum_{m=0}^\infty \left(s_m+\tau_m \right)h_m z^m,$$ where $\tau_m:=\sigma\sum_{k=1}^{\nu}\alpha_k (r\beta_k)^m$ and $s_m=s_{m}(h,f)$ as above (see (\[ssmm\])), instead of $(\ref{SRS})$ we obtain the system $$\label{Var_disc_mom_syst} \sum_{k=1}^n \mu_k \lambda_k^m=s_m+\tau_m, \qquad m=\overline{0,2n-1},$$ which differs from the system (\[SRS\]) in the regularizing summands $\tau_m$ (if the initial system is regular, then it is natural to set $\sigma=\tau_m\equiv 0$). The task is now to find the numbers $\tau_m$ such that the conditions of Theorem \[th1\] are satisfied. Assume that we have done this, then by Theorem \[th1\] we obtain the interpolation identity $${\tilde f}(z)=H_{n}(\{\mu_k\},\{\lambda_k\},h;z)+O(z^{2n}).$$ Returning to the initial interpolation problem and taking into account (\[F(z)\]) yield $$\label{f_reg} f(z)=H_n(\{\mu_k\},\{\lambda_k\},h;z)-\sigma H_{\nu}(\{\alpha_k\},\{r\beta_k\},h;z) +O(z^{2n}).$$ At the same time it is reasonable to choose ${\nu}$ as small as possible. However, there is a natural restriction on $\nu$, which is seen from the following statement (cf. §4 in [@Lyubich]). \[ranks\] It is necessary for the regularity of the varied system $(\ref{Var_disc_mom_syst})$ that $${\nu}{\geqslant}n-{\operatorname{rank}}\left(s_{i+j-2}\right)_{i,j=1}^n.$$ Consider the Hankel matrices $$\mathbf{H}:=\left(s_{i+j-2}\right)_{i,j=1}^n, \qquad \mathbf{R}:=\sigma\left(\sum_{k=1}^{\nu} \alpha_k (r \beta_k)^{i+j-2}\right)_{i,j=1}^n.$$ For the regularity of the system $(\ref{Var_disc_mom_syst})$ it is necessary that the coefficient before $\lambda^n$ of the corresponding generating polynomial $G_n$ does not vanish, i.e. $\det (\mathbf{H}+\mathbf{R})\ne 0$, ${\operatorname{rank}}(\mathbf{H}+\mathbf{R})= n$. It is well known [@Horn §0.4.5] that for any $n\times n$-matrices $A$ and $B$ $$\label{rank_inequality} {\operatorname{rank}}(A+B){\leqslant}\min\{{\operatorname{rank}}A + {\operatorname{rank}}B;n\}.$$ Consequently, if the system $(\ref{Var_disc_mom_syst})$ is regular, then necessarily $n {\leqslant}{\operatorname{rank}}\mathbf{H} + {\operatorname{rank}}\mathbf{R}$. It remains to note that ${\operatorname{rank}}\mathbf{R}{\leqslant}{\nu}$. Indeed, the following representation is valid: $$\mathbf{R}=\sigma\sum_{k=1}^{\nu} \alpha_k \mathbf{C}(k), \qquad \mathbf{C}(k):=\left((r\beta_k)^{i+j-2}\right)_{i,j=1}^n,$$ where ${\operatorname{rank}}\mathbf{C}(k)=1$ (each next row is the previous one multiplied by $r\beta_k$). From this by the property (\[rank\_inequality\]) we obtain the required bound for ${\operatorname{rank}}\mathbf{R}$. [**.2. Parameters of the regularization.**]{} In the problems under consideration the ranks of matrices $\left(s_{i+j-2}\right)_{i,j=1}^n$ are small. Consequently, taking into account Lemma \[ranks\], we will consider only the overall case ${\nu}=n$ to solve the regularization problem. Then the formula (\[f\_reg\]) has $2n$ summands (if $\sigma\ne 0$, $r\ne 0$), and in this sense the amplitude and frequency sums obtained have no advantage over the $h$-sums of order $2n$. However, an appropriate choice of $\alpha_k$, $\beta_k$, $\sigma$ and $r$ can essentially simplify the latter sum in (\[f\_reg\]). 
Indeed, let $p$ and $q$ be fixed non-zero complex numbers and $$\label{alpha-beta} \alpha_k=\beta_k=\exp \left(\frac{2\pi (k-1) i}{n}\right),\qquad r=\left(\frac{q}{p}\right)^{1/n},\qquad \sigma=\frac{p^2}{nq} r,\qquad k=\overline{1,n},$$ where the number $r$ is any of the $n$ values of the root. Then, as it can be easily seen, in (\[Var\_disc\_mom\_syst\]) we obtain $$\label{TAU} \tau_{n-1}=p,\quad \tau_{2n-1}=q;\qquad \tau_{m}=0 \quad \hbox{ for other }\quad m.$$ Indeed, $$\tau_m=\frac{p^2}{nq} r^{m+1}\sum_{k=1}^n \exp\left({\frac{2\pi (k-1) i}{n}(m+1)}\right),$$ where the sum of the exponents equals $n$ if $m+1$ is divisible by $n$ and zero if not. Consequently, $$\label{SVORACH} \sigma H_n(\{\alpha_k\},\{r\beta_k\},h;z)= p\, h_{n-1}z^{n-1}+q\,h_{2n-1}z^{2n-1}+O(z^{2n}).$$ Thus, assuming the regularity of the varied problem (\[Var\_disc\_mom\_syst\]), we get the formula $$\label{f_reg_expansion} f(z)=H_n(\{\mu_k\},\{\lambda_k\},h;z)- p\, h_{n-1}z^{n-1}-q\,h_{2n-1}z^{2n-1}+O(z^{2n}).$$ In order to obtain the main result of this section, we now show that the above-mentioned problem is indeed regular for a certain choice of the parameters $p$ and $q$. Below we give a possible way of such a choice. The generating polynomial of the system (\[Var\_disc\_mom\_syst\]) for $\alpha_k$ and $\beta_k$ from (\[alpha-beta\]) has the form $$\label{modif_equ} {G}_n(\lambda)={G}_n(p,q;\lambda):= \left| \begin{array}{ccccc} 1 & \lambda & \cdots & \lambda^{n-1} & \lambda^n \\ s_0 & s_1 & \cdots & s_{n-1}+p & s_{n} \\ s_1 & s_2 & \cdots & s_n & s_{n+1} \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ s_{n-1}+p & s_{n} & \cdots & s_{2n-2} & s_{2n-1}+q \\ \end{array} \right|.$$ Obviously, for the parameters $p$ and $q$, being sufficiently large in modulus (comparing with the moments $s_k$ and independently of each other), the roots of this polynomial are arbitrarily close to those of the polynomial of the form (\[modif\_equ\]) with $s_k=0$, $k=\overline{0,2n-1}$. The latter polynomial, as it can be easily checked by expanding the determinant along the first row, has the form $$(-1)^{n(n+1)/2}p^{n}(\lambda^n-q/p),$$ and all its $n$ roots are pairwise distinct. From here, the formula (\[f\_reg\_expansion\]) and Theorem \[th1\] we obtain the following result. \[th2\] Given $p$ and $q$ sufficiently large in modulus, the varied problem $(\ref{Var_disc_mom_syst})$ with the parameters $(\ref{TAU})$ has a regular solution $\{\mu_k\}$, $\{\lambda_k\}$. Moreover, for the constants $c_1=-ph_{n-1}$ and $c_2=-qh_{2n-1}$ the following interpolation formula holds: $$f(z)=c_1z^{n-1}+c_2 z^{2n-1}+\sum_{k=1}^n\mu_k h(\lambda_k z)+O(z^{2n}).$$ \[REG+++\] The above-mentioned regularization with the parameters from (\[alpha-beta\]) and (\[TAU\]) is actually equivalent to adding the binomial $c_1z^{n-1}+c_2 z^{2n-1}$ with non-vanishing coefficients $c_{1}$ and $c_{2}$ to the function $f$. In what follows we will expand the class of regularizable problems by showing that $c_1$ and $c_2$ can be chosen in a different way and not necessarily non-vanishing. In particular, in the extrapolation problem from Section \[par-extrap\] it will be reasonable to set $c_2=0$. \[rem2\] The conditions on $p$ and $q$, mentioned in Theorem \[th2\], are quite qualitative and need additional specification in practice. Several methods for this will be proposed below in particular applications. In the general case one can use the following observations. 
The leading coefficient $g_n=g_n(p)$ of the polynomial ${G}_n$ is obviously a polynomial of $p$ of degree $n$, hence $\deg {G}_n(p,q;\lambda)=n$ for all $p$ except those from the set $$\label{Pi-general} \Pi:=\{p: g_n(p)=0\},$$ containing no more than $n$ points. It is possible to obtain some estimates for the boundaries of the set $\Pi$ using that a matrix with strict diagonal dominance is non-singular (see the Levy–Desplanque theorem in [@Horn Th. 6.1.10]). Namely, if we choose $p$ satisfying the inequality $$|s_{n-1}+p|>\sum_{j=1, j\neq n-i+1}^{n}|s_{i+j-2}|, \qquad i=\overline{1,n},$$ then the determinant for $g_n$ is strict diagonally dominant and hence $g_n\ne 0$. For this it is sufficient to take, for example, $$p> n \max_{k=\overline{0,2n-1}} |s_k|.$$ We now suppose that the generating polynomial (\[modif\_equ\]) is of degree $n$. Then the question about “separation” of its multiple roots arises. As it is easily seen, $${G}_n(p,q;\lambda)={\mathcal S}(p;\lambda) +q\mathcal{T}(p;\lambda),$$ where the polynomial ${\mathcal S}$ is of degree $n$ and the polynomial ${\mathcal T}$ is of degree ${\leqslant}n-1$; both polynomials depend only on $p$. The following statement is valid. \[RAZDEL\] Suppose that in each multiple root $($if any$)$ of the polynomial ${G}_n$ the polynomial ${\mathcal T}$ either does not vanish or has a simple root. Then there exists an arbitrarily small variation $\delta\ne 0$ of the parameter $q$ such that the polynomial ${G}_n(p,q+\delta;\lambda)$ has $n$ simple roots. Assume that ${\lambda}_0$ is an $s$-multiple ($s{\geqslant}2$) root of the polynomial ${G}_n(p,q;\lambda)$. Then in a sufficiently small neighbourhood of the root the polynomial $${G}_n(p,q+\delta;\lambda)={G}_n(p,q;\lambda) +\delta{\mathcal T}(p;\lambda)$$ has the form $${G}_n(p,q+\delta;\lambda)=({\lambda}-{\lambda}_0)^s({\alpha}+ O({\lambda}-{\lambda}_0))+\delta (t_0+t_1(\lambda-{\lambda}_0)+O(({\lambda}-{\lambda}_0)^2)),$$ where ${\alpha}\ne 0$, $|t_0|+|t_1|\ne 0$ and values $O({\lambda}-{\lambda}_0)$, $O(({\lambda}-{\lambda}_0)^2)$, $\lambda\to \lambda_0$, are independent of $\delta$. Choose small $\varepsilon>0$ and $\delta=\delta(\varepsilon)$ so that in the disc ${|\lambda-\lambda_0|{\leqslant}2\varepsilon}$ the polynomial ${G}_n$ has no roots, distinct from $\lambda_0$ (we take into account that the roots depend on $\delta$ continuously), and $$|{G}_n(p,q;\lambda)|>|\delta{\mathcal T}(p;\lambda)|,\qquad |\lambda-\lambda_0|=\varepsilon.$$ By Rouché’s theorem, the polynomial ${G}_n(p,q+\delta;\lambda)$ has exactly $s$ roots in the disc ${|{\lambda}-{\lambda}_0|<\varepsilon}$; we will use $\tilde \lambda_k$ to denote them. If $t_0\ne 0$, then these roots satisfy the equation $$({\lambda}-{\lambda}_0)^s=-\frac{\delta t_0}{\alpha}\; (1+O(\varepsilon)),\qquad \varepsilon\to 0.$$ If $t_0=0$, $t_1\ne 0$, then $\tilde \lambda_1=\lambda_0$ and other roots satisfy the equation $$({\lambda}-{\lambda}_0)^{s-1}=-\frac{\delta t_1}{\alpha}\; (1+O(\varepsilon)),\qquad \varepsilon\to 0.$$ In any case we get $s$ simple roots. Suppose that $\varepsilon$ and $|\delta|$ are so small that the above-mentioned method works simultaneously for all the multiple roots but all the simple ones remain simple (it is possible as the roots depend on $\delta$ continuously). Then we get the polynomial ${G}_n(p,q+\delta;\lambda)$ with $n$ simple roots. 
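The effect of the regularization is easy to observe numerically. In the sketch below (ours, assuming only NumPy; the completely degenerate sequence $s_m\equiv 1$ and the values $n=4$, $p=10$, $q=200$ are chosen purely for illustration) the varied system (\[Var\_disc\_mom\_syst\]) with $\tau_{n-1}=p$, $\tau_{2n-1}=q$ turns out to be regular: the generating polynomial acquires $n$ simple roots and all $2n$ moments are reproduced.

```python
# Regularization of a rank-one (hence non-regular) moment sequence.
import numpy as np

n, p, q = 4, 10.0, 200.0
s = np.ones(2 * n)               # s_m = 1 for all m: Hankel matrix of rank one
s[n - 1] += p                    # tau_{n-1} = p
s[2 * n - 1] += q                # tau_{2n-1} = q

H = np.array([[s[i + j] for j in range(n)] for i in range(n)])
c = np.linalg.solve(H, -s[n:])                       # monic multiple of the generating polynomial
lam = np.roots(np.concatenate(([1.0], c[::-1])))
mu = np.linalg.solve(np.vander(lam, n, increasing=True).T, s[:n])

print(np.round(lam, 4))                              # n pairwise distinct roots
print(max(abs(np.sum(mu * lam**m) - s[m]) for m in range(2 * n)))   # ~1e-12
```

Increasing $|p|$ and $|q|$ pushes the roots towards the $n$ values of $(q/p)^{1/n}$, in agreement with the proof of Theorem \[th2\].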
The above discussion makes the following conjecture quite plausible: the set of the parameters $(p,q)$, for which the interpolation problem considered in this section is regularly solvable, is everywhere dense in $\mathbb{C}^2$. At present, however, we can only prove the following statement, which supplements Theorem \[th2\].

\[th22\] Suppose that $p\notin \Pi$ $($see $(\ref{Pi-general})$$)$ and the conditions of Lemma $\ref{RAZDEL}$ are satisfied. Then there exists an arbitrarily small variation $\delta\ne 0$ of the parameter $q$ such that the varied problem $(\ref{Var_disc_mom_syst})$ with $\tau_{n-1}=p$, $\tau_{2n-1}=q+\delta$ $($all other $\tau_{m}=0)$ has a regular solution $\{\mu_k\}$, $\{\lambda_k\}$, and for the constants $c_1=-ph_{n-1}$ and $c_2=-(q+\delta)h_{2n-1}$ the following interpolation formula holds: $$f(z)=c_1z^{n-1}+c_2 z^{2n-1}+\sum_{k=1}^n\mu_k h(\lambda_k z)+O(z^{2n}).$$

Numerical differentiation by amplitude and frequency operators {#par-diff}
==============================================================

[**.1. Statement of the problem.**]{} As an application of the regularization method we consider the problem of $2n$-multiple interpolation of the function $zf'(z)$ by amplitude and frequency operators $H_n$ with the basis function $f$. (As above, we suppose that $f$ is defined and holomorphic in a neighbourhood of the origin.) The solution of this problem would give a high-accuracy formula for numerical differentiation with local precision $O(z^{2n})$. However, the discrete moment problem (\[SRS\]) with $s_m=m$, $m=\overline{0,2n-1}$, which arises in this case, is non-regular: the generating polynomial (\[G\_n\]) is of degree less than $n$ for $n=1$ and $n{\geqslant}3$, since the algebraic adjunct to $\lambda^n$ obviously vanishes, and it has the double root $\lambda=1$ for $n=2$; in both cases the hypotheses of Theorem \[th1\] fail. Here we apply the regularization method mentioned in Remark \[REG+++\]. More precisely, given some complex parameters $p$ and $q$, we consider the varied function $$\label{TILDE-f} \tilde{f}(z):=zf'(z)+ p\, f_{n-1}z^{n-1}+q\,f_{2n-1}z^{2n-1}, \qquad zf'(z)=\sum_{m=0}^{\infty} mf_m z^m$$ and the interpolating sum $H_n(\{\mu_k\},\{\lambda_k\},{f};z)$. From this and (\[ssmm\]) we obtain the varied moments $$\label{diff_moments} s_m=m, \quad m\neq n-1,2n-1; \qquad s_{n-1}=n-1+p,\qquad s_{2n-1}=2n-1+q,$$ which are independent of $f$. Consequently, $$\label{G_n_diff_2_diag} \hat{G}_n(\lambda):=\sum_{m=0}^n \hat{g}_m\lambda^m= \left| \begin{array}{cccccc} 1 & \lambda & \ldots & \lambda^{n-1} & \lambda^n\\ 0 & 1 & \ldots & n-1+p & n\\ 1 & 2 & \ldots & {n} & {n+1}\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ n-1+p & n & \ldots & {2n-2} & {2n-1+q}\\ \end{array} \right|.$$ If for some $p$ and $q$ the generating polynomial $\hat{G}_n(\lambda)$ has exactly $n$ pairwise distinct roots $\lambda_1,\ldots,\lambda_n$, then by Theorem \[th1\] the varied interpolation problem becomes regular and $$\label{REG PROIZ} zf'(z)=H_n(\{\mu_k\},\{\lambda_k\}, f;z)- p\, f_{n-1}z^{n-1}-q\,f_{2n-1}z^{2n-1}+O(z^{2n}),$$ where $\mu_k$ can be calculated using (\[diff\_moments\]), (\[SRS\]) and Lemma \[400\].

[**.2. Coefficients of the generating polynomial.**]{} In the case under consideration the coefficients $\hat{g}_m$ can be written explicitly.

\[lemma\_dif\_1\] Let $\kappa:=(-1)^{n(n+1)/2}p^{n-3}$.
Then for $n{\geqslant}1$ the coefficients of the polynomial $(\ref{G_n_diff_2_diag})$ have the form $$\begin{aligned} \hat{g}_n&=\kappa p\left(p^2+n(n-1)p +\tfrac{n^2(n^2-1)}{12}\right),\\ \hat{g}_0&=-\kappa \left(p^2q+(2n-1)p^2+(n-1)^2p\,q-\tfrac{n(n^2-1)}{6}p+\tfrac{(n-2)n(n-1)^2}{12}q\right),\quad\quad\quad\;\\ \hat{g}_m&=-\kappa \left((2n-(m+1))p^2-(n-(m+1))p\,q -\tfrac{n(n+1)}{2}\left(\tfrac{n+2}{3}-(m+1)\right)p+\phantom{\tfrac{1}{1}}\right. \\ &\qquad\qquad\qquad\qquad\qquad\left.\phantom{\tfrac{1}{1}} +\tfrac{n(n-1)}{2}\left(\tfrac{2(n+1)}{3}-(m+1)\right)q\right),\qquad m=\overline{1,n-1}.\end{aligned}$$ One can verify the identities for $n=1,2$ directly. From now on, $n{\geqslant}3$. Let us first prove the identity for $\hat{g}_n$ by direct calculation of the algebraic adjunct $(-1)^n D$ to $\lambda^n$ in the determinant (\[G\_n\_diff\_2\_diag\]). We now show that the characteristic polynomial $P_n(\lambda)=\det (A-\lambda I)$ of the matrix $$A:=\left( \begin{array}{ccccc} n-1 & n-2 & n-3 & \ldots & 0 \\ n & n-1 & n-2 & \ldots & 1 \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ 2n-2 & 2n-3 & 2n-4 & \ldots & n-1 \\ \end{array} \right)$$ has the form $$\label{charac_poly} P_n(\lambda)=(-1)^n\lambda^{n-2}\left(\lambda^2-n(n-1) \lambda+\tfrac{n^2(n^2-1)}{12}\right).$$ It is known that for any matrix $B$ $$\label{characht_property_1} \det (B-\lambda I)=(-1)^n(\lambda^n-b_1\lambda^{n-1}+b_2\lambda^{n-2}+\ldots+(-1)^nb_n),$$ where $b_j$ is the sum of all $j$-rowed diagonal minors of the matrix $B$ (see, for instance, [@Jacobson §3.10]). In particular, in terms of the traces of the matrices $B$ and $B^2$ we have $$\label{characht_property_2} b_1=\mathrm{Tr}\,B, \qquad b_2=\tfrac{1}{2}\left((\mathrm{Tr}\,B)^2-\mathrm{Tr}\,B^2\right).$$ For our matrix $A$ all minors of the size greater than two are zero (as subtracting a row from any other one gives a constant row) therefore ${\operatorname{rank}}A=2$. Consequently, the coefficients before the terms with the powers less than $n-2$ in $\det (A-\lambda I)$ are zero. Furthermore, it is clear that $\mathrm{Tr}\,A=n(n-1)$. We now consider the coefficient before $\lambda^{n-2}$. It is easily seen (by direct multiplication of the $k$th row by the $k$th column of the matrix $A$) that $$\mathrm{Tr}\,A^2= \sum_{k=1}^n\sum_{m=0}^{n-1}\left((n-1)^2-(m-k+1)^2\right)= \tfrac{1}{6}n^2(n-1)(5n-7).$$ It follows that $$\tfrac{1}{2}\left((\mathrm{Tr}\,A)^2-\mathrm{Tr}\,A^2\right) =\tfrac{1}{2}\left(n^2(n-1)^2-\tfrac{1}{6}n^2(n-1)(5n-7)\right) =\tfrac{n^2(n^2-1)}{12}.$$ This completes the proof of the formula (\[charac\_poly\]). Let us return to the determinant $D$. Its matrix is mirror symmetric with respect to $A$ (i.e. its columns are placed in a reversed order) and can be obtained by right multiplication of $A$ by the anti-diagonal identity matrix. As is known, the determinant of the $n\times n$ anti-diagonal identity matrix is equal to $(-1)^{n(n-1)/2}$. Hence $$\label{DETER-D} D=(-1)^{\tfrac{n(n-1)}{2}}\det (A+p I)=(-1)^{\tfrac{n(n-1)}{2}}P_n(-p).$$ This and (\[charac\_poly\]) yield the desired formula for $\hat{g}_n=(-1)^n D$, $n{\geqslant}3$. For convenience, we introduce the set of three elements $$\label{Pi} \hat{\Pi}:=\left\{0; \tfrac{n}{2}\left(1-n+d_n\right); \tfrac{n}{2}\left(1-n-d_n\right)\right\},\qquad d_n:=\sqrt{\tfrac{2}{3}(n-1)(n-3)}.$$ Note that if $p\notin \hat{\Pi}$, then $\hat{g}_n\neq 0$. Assume that $p\notin \hat{\Pi}$ and $\hat{g}_n$ are known. 
Then we can determine the desired identities for the other $n$ coefficients from the following system of $n$ linear equations: $$\label{Newton_formula} \sum_{m=0}^n s_{v-m}\hat g_{n-m}=0,\qquad v=\overline{n,2n-1}.$$ The equation for each $v$ can be obtained by summarizing the products of the algebraic adjuncts to the elements of the first row of the determinant (\[G\_n\_diff\_2\_diag\]) (generally, of the determinant (\[G\_n\])) and the corresponding elements of the $(v+2)$th row. The linear system (\[Newton\_formula\]) with unknowns $\hat{g}_0,\ldots,\hat{g}_{n-1}$ has a non-singular matrix (its determinant is equal to the coefficient $\hat{g}_n\ne 0$, $p\notin \hat{\Pi}$), hence it has a unique solution. Consequently, in order to complete the proof of Lemma \[lemma\_dif\_1\] it is sufficient to verify (\[Newton\_formula\]) by direct substitution of the moments (\[diff\_moments\]) and coefficients given in Lemma \[lemma\_dif\_1\]. This verification is quite simple and can be reduced to calculation of the sums $\sum_{m=0}^{n}m^\nu$ for $\nu=1,2$, so we do not dwell on it. Finally, let $p\in \hat{\Pi}$. The case $p=0$ is not interesting as then we have $\hat{G}_n(\lambda)\equiv 0$ (see (\[G\_n\_diff\_2\_diag\]) for $n{\geqslant}3$). In the case $p\in \hat{\Pi}\setminus \{0\}$ the system (\[Newton\_formula\]) is homogeneous and has a singular matrix hence it has infinitely many non-zero solutions, one of those is given in Lemma \[lemma\_dif\_1\] (we use that the coefficients $\hat{g}_m$ depend on $p$ continuously). [**.3. Factorization of the coefficients of the generating polynomial.**]{} The following lemma about factorization of the coefficients of $\hat{G}_n$ is fundamental since it enables us to use Theorem \[th1\] in the problem under consideration. \[lemma\_factor\] Let $n{\geqslant}3$, $p\notin \hat{\Pi}$ $($see $(\ref{Pi})$$)$ and $$\label{PARAM p q} q=q_0(p):=-2\,{\frac {p\left (3\,p+{n}^{2}-1\right )}{\left ({n}-1 \right )\left (n-2\right )}}.$$ Then the ratios $\hat{g}_m/\hat{g}_n$ for $m=\overline{1,n}$ are independent of $p$ and $\hat{g}_0/\hat{g}_n$ depend on $p$ linearly. More precisely, the generating polynomial has the form $$\label{FACTOR G} \hat{G}_n(\lambda)=\hat{g}_n \left({\lambda}^{n}- \frac{6\lambda\left(\lambda^{n-1}-(n-1)\lambda+n-2\right)}{\left (n- 1\right)\left(n-2\right)\left (\lambda-1\right)^{2}} +2+{\frac {6\,p}{\left (n-1\right )\left (n-2\right)}} \right).$$ There exists an arbitrarily small variation of the parameter $p$ such that all the roots of the polynomial $\hat{G}_n$ are pairwise distinct. Taking into account Lemma \[lemma\_dif\_1\], if we solve the equation $\hat{g}_{n-1}=0$, being linear with respect to $q$, then we get (\[PARAM p q\]). Substitution of the expression for $q$ into the other coefficients gives $$\hat{g}_0=\hat{g}_n\left(2+{\frac {6\,p}{\left (n-1\right )\left (n-2\right )}}\right),\qquad \hat{g}_m=-6\,\hat{g}_n{\frac {n-1-m}{\left (n-1\right )\left ({n}-2 \right )}},\qquad m=\overline{1,n-1},$$ where $\hat{g}_n=(-1)^{\tfrac{n(n+1)}{2}}p^{n-2}\left(p^2+n(n-1)p +\tfrac{n^2(n^2-1)}{12}\right)\neq 0$ as $p\notin \Pi$, therefore $$\hat{G}_n(\lambda)=\hat{g}_n\, \left ({\lambda}^{n}-\frac{6}{\left (n- 1\right )\left (n-2\right )}\sum _{m=1}^{n-1}\,{\left (n-m-1\right ){\lambda}^{m}}+2+{\frac {6\,p}{\left (n-1\right )\left (n-2\right )}} \right ),$$ which yields (\[FACTOR G\]) after calculation of the sum. The conclusion about the simplicity of the roots follows immediately from Lemma \[RAZDEL\]. [**.4. 
Main theorem about numerical differentiation by amplitude and frequency operators.**]{} From (\[corollary\_form\_Newton++++\]) we get $$\label{corollary_form_Newton} S_{2n} =-\frac{1}{\hat{g}_n}\sum_{m=0}^{n-1} s_{n+m}\hat{g}_{m}.$$ After substituting the moments (\[diff\_moments\]) and the coefficients from Lemma \[lemma\_factor\] into (\[corollary\_form\_Newton\]), direct calculation gives the expression for the $2n$th moment: $$S_{2n} = 2n-C_n(p),\qquad C_n(p):=\frac{6np}{(n-1)(n-2)}.$$ Therefore the remainder $r_n$, which was denoted in (\[REG PROIZ\]) just as $O(z^{2n})$, has the following more precise form: $$\label{r_n_diff} \tilde{f}(z)-\sum_{k=1}^n\mu_k f(\lambda_k z)= \sum_{m=2n}^{\infty}(m-S_m)f_mz^m= C_n(p)f_{2n}z^{2n}+O(z^{2n+1}).$$ From the foregoing, we get the following statement. \[th3\] Given $n{\geqslant}3$, any $p_0\in \mathbb C$ and arbitrarily small $\varepsilon>0$, there exists a value of the parameter $p$, $|p-p_0|{\leqslant}\varepsilon$, $p\notin \hat{\Pi}$ $($see $(\ref{Pi}))$, such that $$\label{diff_formula_common} zf'(z)=\sum_{k=1}^n\mu_k f(\lambda_k z)-pf_{n-1}z^{n-1}- qf_{2n-1}z^{2n-1}+r_n(z),$$ where $r_n(z)=O(z^{2n})$ is the form $(\ref{r_n_diff})$ and $q=q_0(p)$ $($see $(\ref{PARAM p q})$$)$. Moreover, the frequencies $\lambda_k$ are the pairwise distinct roots of the polynomial $(\ref{FACTOR G})$ and the amplitudes $\mu_k$ are determined uniquely by Lemma \[400\]. Furthermore, $\mu_k=\mu_k(p,n)$ and $\lambda_k=\lambda_k(p,n)$, so they are independent of the function $f$ and universal in this sense. The interpolation formula is exact for the polynomials of degree ${\leqslant}2n-1$, i.e. ${r_n(z)\equiv 0}$ in $(\ref{diff_formula_common})$ for the polynomials $f$ such that $\deg f{\leqslant}2n-1$. Note that the method, which we consider in this section, can be easily extended to interpolation of the functions $z^\nu f^{(\nu)}(z)$, $\nu{\geqslant}2$, hence one can obtain the formulas for numerical differentiation of higher order. **.5. Remarks and examples.** We now make several remarks about practical applications of Theorem \[th3\]. The remainder in the formula (\[diff\_formula\_common\]) is of quite high infinitesimal order, $O(z^{2n})$, and this is achieved by knowing only $n$ values of $f$ and two fixed values of its derivatives at $z=0$. Traditional interpolation approaches with such a number of known values usually have remainders of order $O(z^{n+2})$. In other words, the formula $(\ref{diff_formula_common} )$ is exact for the polynomials of degree ${\leqslant}2n-1$, whereas usual $(n+2)$-point interpolation formulas are exact only for the polynomials of degree ${\leqslant}n+1$. Another important feature of the formula (\[diff\_formula\_common\]) is that the variable interpolation nodes $\lambda_k z$ depend only on the point $z$, where we calculate $zf'(z)$, and are independent of $f$ (in this sense the amplitudes $\mu_k=\mu_k(p,n)$ and frequencies $\lambda_k=\lambda_k(p,n)$ are universal for the whole class of analytic functions). It is seen from the formula (\[diff\_formula\_common\]) that its precision strongly depends on the precision of the values $f_{\nu}=f^{(\nu)}(0)/\nu!$, $\nu=n-1,2n-1$ (of course we assume that the values of the function $f$ are known). In several particular cases this difficulty can be overcome. 
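Before turning to these cases, we illustrate Theorem \[th3\] numerically. The sketch below (ours, assuming NumPy; the values $n=5$, $p=1$ and the test function $f(z)=e^{z}$ are arbitrary) builds the varied moments (\[diff\_moments\]), extracts $\lambda_k$ and $\mu_k$ by the Prony-type procedure described earlier and checks the formula (\[diff\_formula\_common\]).

```python
# Numerical differentiation by an amplitude and frequency operator:
# z f'(z) = sum_k mu_k f(lam_k z) - p f_{n-1} z^{n-1} - q f_{2n-1} z^{2n-1} + O(z^{2n}).
import numpy as np
from math import factorial

n, p = 5, 1.0
q = -2 * p * (3 * p + n**2 - 1) / ((n - 1) * (n - 2))        # q_0(p), formula (PARAM p q)

s = np.array([float(m) for m in range(2 * n)])               # varied moments (diff_moments)
s[n - 1] += p
s[2 * n - 1] += q

H = np.array([[s[i + j] for j in range(n)] for i in range(n)])
c = np.linalg.solve(H, -s[n:])
lam = np.roots(np.concatenate(([1.0], c[::-1])))             # frequencies
mu = np.linalg.solve(np.vander(lam, n, increasing=True).T, s[:n])   # amplitudes

f, f_coeff = np.exp, lambda m: 1.0 / factorial(m)            # exp and its Taylor coefficients
for z in (0.1, 0.3, 0.5):
    approx = (np.sum(mu * f(lam * z))
              - p * f_coeff(n - 1) * z**(n - 1) - q * f_coeff(2 * n - 1) * z**(2 * n - 1))
    print(z, abs(z * f(z) - approx))       # for f = exp, z f'(z) = z e^z; error is O(z^(2n))
```

The same $\lambda_k$ and $\mu_k$ serve every analytic $f$; only the two Taylor coefficients $f_{n-1}$ and $f_{2n-1}$ have to be supplied.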
For instance, if it is known a priori that the function $f$ is even (odd), then there is no necessity in calculation of $f_{n-1}$ and $f_{2n-1}$ for even (odd) $n$ since then $pf_{n-1}z^{n-1}+ qf_{2n-1}z^{2n-1}\equiv 0$ ($pf_{n-1}z^{n-1}\equiv 0$) and the local precision is $O(z^{2n})$ ($O(z^{2n-1})$). In the more general case, when $f$ is even or odd, the formula (\[diff\_formula\_common\]) can be applied to the even auxiliary function $\omega(z)=f^2(z)$ for even $n$. Then the corresponding coefficients $\omega_{n-1}=\omega_{2n-1}\equiv 0$ and $$2zf(z)f'(z)=\sum_{k=1}^n\mu_k f^2(\lambda_k z)+O(z^{2n}).$$ In the most general case, if we want to use (\[diff\_formula\_common\]) systematically, it is necessary to calculate the regularizing binomial $pf_{n-1}z^{n-1}+qf_{2n-1}z^{2n-1}$ for each [*fixed*]{} function $f$. For this purpose some known formulas for numerical differentiation of analytic functions at $z=0$ can be used. For example, one can use several high-accuracy formulas for $f_{\nu}=f^{(\nu)}(0)/\nu!$, obtained in [@Schmeisser]. Note that, in contrast to the formulas for calculation of Taylor coefficients as, for instance, in [@Schmeisser], the formula (\[diff\_formula\_common\]) works well only in a deleted neighbourhood of the point $z=0$. Below we cite several known interpolation formulas for numerical differentiation, being close in form to (\[diff\_formula\_common\]). In [@Ash_Janson_Jones] the following $n$-point formulas for numerical differentiation of real functions were obtained: $$\label{1} f_\nu x^{\nu}= \sum_{k=1}^n\mu_k f(\lambda_k x)+O(x^n),\qquad {\nu}=1,2,\qquad n{\geqslant}{\nu}+1,$$ where $\lambda_k x$, $|\lambda_k-\lambda_j|>1$, are real nodes, minimizing the generalized power sums $S_v=\sum_{k=1}^n\mu_k\lambda_k^v$ for $v{\geqslant}n+1$ (this corresponds to minimization of the remainder). In [@Salzer] interpolation formulas of the Lagrange type for numerical differentiation were constructed on basis of special non-uniformly distributed nodes, also minimizing the remainder. Formulas of the form (\[1\]) for analytic functions were obtained in [@Lyness] via contour integrals. Moreover, it was shown there that for $\lambda_k=\exp (2\pi i (k-1)/n)$ and appropriate $\mu_k$ their formulas were exact for the polynomials of degree ${\leqslant}n+{\nu}-1$. This result can be extended. Indeed, by direct substitution (see the discussion near the formulas (\[alpha-beta\]) and (\[TAU\])) one can check that for any non-vanishing parameters $p$ and $q$ and any integer $0{\leqslant}\nu{\leqslant}n-1$ we have $$pf_\nu z^\nu+qf_{\nu+n}z^{\nu+n}=\sum_{k=1}^n\mu_k f\left(\lambda_k z\right)+ O(z^{2n+\nu}),\quad \lambda_k=\left(\frac{q}{p}\right)^{1/n}\exp\left(\frac{2\pi (k-1)i}{n}\right),$$ where $\mu_k=(-1)^{n-\nu-1}\lambda_k \sigma_{n-\nu-1}^{(k)}\,p^2 \,(qn)^{-1}$ $($see Lemma $\ref{400})$ and in $\lambda_k$ one can take any value of the root. The following formula for analytic functions $f$ is contained in [@Dan2008]: $$f_\nu z^{\nu}= \sum_{k=1}^{(\nu+1) N}\lambda_kf(\lambda_kz)+O(z^{n}), \qquad N=\left[\tfrac{n}{\nu+1}\right], \qquad n>\nu+6.$$ Here the numbers $\lambda_k$ do not depend on $f$ and are non-zero roots of the polynomial $P_n(\lambda)=\sum_{k=0}^N (-1)^k \lambda^{n-k-\nu k}/((\nu+1)^k k!)$ (see estimates for the remainder in [@Dan2008]). Other results of this type were also obtained in [@Chu2010; @Fryantsev]. Let $n=4$ and $p=-1$. Then by (\[PARAM p q\]) and (\[FACTOR G\]) we get $q=4$ and $\hat{G}_4(\lambda)= 9\left(\lambda^4-\lambda^2-2\,\lambda+1\right)$. 
The roots of the generating polynomial $\hat{G}_4$ are $$\lambda_1\approx 1.38647,\quad \lambda_2\approx0.42578,\quad \lambda_{3,4}\approx-0.90612\pm 0.93427\,i.$$ Lemma \[400\] gives $$\mu_1\approx0.967276,\quad \mu_2\approx-0.79945,\quad \mu_{3,4}\approx-0.08390\pm 0.08175\, i,$$ and the formula (\[diff\_formula\_common\]) takes the form $$\label{diff_example_n=4} zf'(z)=\sum_{k=1}^4\mu_k f(\lambda_k z)+f_{3}z^{3}-4f_{7}z^{7}+r_4(z),\quad r_4(z)=-4 f_8 z^8-9 f_9 z^9+\cdots.$$ For instance, set $f(z)=(z+2)^{-1}$ (then $f_3=-1/16$, $f_7=-1/256$). Calculations in Maple show that the error of  (\[diff\_example\_n=4\]) does not exceed $10^{-4}$ for $z\in [-0.5,0.5]$. For $n=7$ and $n=10$ the corresponding errors are less than $10^{-8}$ and $10^{-12}$, correspondingly. Now consider the Bessel function $f=J_0$ from (\[J\_0000\]). This function is even and consequently the regularizing binomial $pf_{n-1}z^{n-1}+ qf_{2n-1}z^{2n-1}$ vanishes for even $n$. Therefore from (\[diff\_example\_n=4\]) we get $$zJ'_0(z)\approx \sum_{k=1}^4 \mu_kJ_0(\lambda_kz).$$ The error of the approximant does not exceed $10^{-4}$ for $z\in[-1,1]$. For $n=6$ and $n=8$ the corresponding errors are less than $10^{-9}$ and $10^{-14}$. **.6. Some estimates.** Absolute values of the amplitudes $\mu_k$ and frequencies $\lambda_k$ play an important role in calculations by the formula (\[diff\_formula\_common\]). We now estimate the frequencies. \[500\] For the roots $\lambda_k$ of the polynomial $(\ref{FACTOR G})$ we have $$|\lambda_k|{\leqslant}1+\frac{O(1)}{\sqrt{n}}, \qquad O(1)>0, \qquad n\to \infty.$$ More precisely, given $n{\geqslant}3$, $$|\lambda_k|{\leqslant}\Lambda:=\left(2\delta\right)^{\frac{3}{\sqrt{n-2}}},\qquad \delta:=1+\frac{3|p|}{(n-1)(n-2)}, \qquad p\notin \Pi.$$ First, we estimate the absolute value of the sum of the last three terms in the brackets in $(\ref{FACTOR G})$: $$\begin{aligned} V:&=\left|-\frac{6\lambda\left({\lambda}^{n-1}-(n-1){\lambda}+n-2\right)}{\left(n-1\right)\left(n-2\right)\left (\lambda-1\right)^{2}} +2+{\frac {6\,p}{\left (n-1\right )\left (n-2\right )}}\right|\\ &{\leqslant}|\lambda|^n \left(\frac{6\left(1+(n-1)|\lambda|^{2-n}+(n-2)|\lambda|^{1-n}\right)}{(n-1)(n-2)\left(|\lambda|-1\right)^{2}} +\frac{2\delta}{|\lambda|^n}\right).\end{aligned}$$ It is easily seen that $\left(2\delta\right)^{\frac{3}{\sqrt{n-2}}}-1{\geqslant}\tfrac{3\ln 2}{\sqrt{n-2}}$ for $\delta>1$. Therefore substituting $|\lambda|=\Lambda$ into the latter expression yields $$V{\leqslant}|\lambda|^n \left(\frac{6 \cdot\left(1+(2n-3)/(2\delta)^{3\sqrt{n-2}}\right)}{(3\ln 2)^2 \cdot(n-1)} +\frac{1}{(2\delta)^{\frac{3n}{\sqrt{n-2}}-1}}\right).$$ It is also easy to check that for $n{\geqslant}3$ and $\delta>1$ the expression in the brackets is less than one, so $V<\Lambda^n$ for the above-mentioned $|\lambda|=\Lambda$. This and Rouché’s theorem imply that all the roots of the polynomial (\[FACTOR G\]) lie in the disc $|\lambda|< \Lambda$. One can use Lemmas \[400\] and \[500\] to obtain estimates for the amplitudes $\mu_k$ but this problem needs more delicate analysis and we do not dwell on it in the present paper. Numerical extrapolation by amplitude and frequency operators {#par-extrap} ============================================================ [**.1. Statement of the problem.**]{} Let us briefly describe the idea of the extrapolation. Let $a>0$, $p,q\in \mathbb{R}$ and $f$ be a function analytic in a disc $|z|<\rho$, $\rho>0$. 
Consider the problem of multiple interpolation of the function $f(az)$ in a neighbourhood of $z=0$ by the amplitude and frequency operator ${H}_n(\{\mu_k\},\{\lambda_k\},f;z)$, where $f$ is chosen as a basis function. As in the case of differentiation, we get a non-regular discrete moment problem with $s_m=a^m$. To regularize it, we introduce the varied function $$\label{ZADACHA extrapol} \tilde{f}(z):=f(az)+pf_{n-1}z^{n-1}+qf_{2n-1}z^{2n-1}$$ with some parameters $p$ and $q$, being non-zero simultaneously. By the same approach that we used at the beginning of Section \[par-diff\], in order to construct the interpolating sum ${H}_n(\{\mu_k\},\{\lambda_k\},f;z)$, we find the sequence of varied moments $$\label{moments_extropal} s_k=a^{k}, \;\; k\neq n-1,2n-1, \;\; k\in\mathbb{N}_0; \quad s_{n-1}=a^{n-1}+p,\quad s_{2n-1}=a^{2n-1}+q,$$ and construct the generating polynomial $$\label{G_n_extrapol_p_q} \check{G}_n(\lambda):=\sum_{m=0}^n \check{g}_m\lambda^m= \left| \begin{array}{cccccc} 1 & \lambda & \ldots & \lambda^{n-1} & \lambda^n\\ 1 & a & \ldots & a^{n-1}+p & a^n\\ a & a^2 & \ldots & a^{n} & a^{n+1}\\ \cdots & \cdots & \cdots & \cdots & \cdots\\ a^{n-1}+p & a^n & \ldots & a^{2n-2} & a^{2n-1}+q\\ \end{array} \right|.$$ If for some $a> 0$, $p$ and $q$ the polynomial $\check{G}_n$ is of degree $n$ and all its roots $\lambda_1,\ldots,\lambda_n$ are pairwise distinct, then by Theorem \[th1\] the varied problem for the function (\[ZADACHA extrapol\]) is regularly solvable, so the following interpolation formula holds: $$\label{100} f(az)=H_n(\{\mu_k\},\{\lambda_k\}, f;z)- p\, f_{n-1}z^{n-1}-q\,f_{2n-1}z^{2n-1}+O(z^{2n}).$$ (Of course, we assume that all the arguments of the function $f$ lie in the disc ${|z|<\rho}$, where it is analytic.) Suppose also that the inequalities ${|\lambda_k|<\delta a}$ with some ${\delta\in (0,1)}$ are valid for all $k=\overline{1,n}$. Then it is natural to call the formula (\[100\]) [*extrapolational*]{} as the values of the function $f$ at the points $\zeta=az$ are approximated by the values of this function at the points $\lambda_kz$, belonging to the disc $\{\xi: |\xi|<\delta |\zeta|\}$, $\delta <1$. In the present section we will obtain such an extrapolation formula and a quantitative estimate for its remainder. We start with a formal description. As in Section \[par-diff\], we first analyse the coefficients and roots of the polynomial $\check{G}_n$ of the form (\[G\_n\_extrapol\_p\_q\]). [**.2. Coefficients of the generating polynomial.**]{} The following statement gives an explicit form of the coefficients $\check{g}_m$. \[lemma\_extrapol\_1\] Let $\kappa:=(-1)^{n(n+1)/2}p^{n-2}$, $p\neq 0,\;-na^{n-1}$. The polynomial $\check{G}_n$ has the following coefficients: $$\label{koef_extrapol} \begin{array}{c} \check{g}_n=\kappa p\left(na^{n-1}+p\right),\qquad \check{g}_0=-\kappa\left(a^{2n-1}p+(n-1)a^{n-1}q+p\,q\right), \\ \check{g}_m=-\kappa a^{n-1-m}\left(a^np-q\right),\qquad m=\overline{1,n-1}. \end{array}$$ The method of proof is the same as in Lemma \[lemma\_dif\_1\]. We first prove the identity for ${\check{g}}_n$ by direct computation of the algebraic adjunct $(-1)^n D$ to the element $\lambda^n$ in the determinant (\[G\_n\_extrapol\_p\_q\]). 
The method of proof is the same as in Lemma \[lemma\_dif\_1\]. We first prove the identity for ${\check{g}}_n$ by direct computation of the algebraic adjunct $(-1)^n D$ to the element $\lambda^n$ in the determinant (\[G\_n\_extrapol\_p\_q\]). For this we show that given the matrix $$A:=\left( \begin{array}{ccccc} a^{n-1} & a^{n-2} & a^{n-3} & \ldots & 1 \\ a^n & a^{n-1} & a^{n-2} & \ldots & a \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ a^{2n-2} & a^{2n-3} & a^{2n-4} & \ldots & a^{n-1} \\ \end{array} \right),$$ the characteristic polynomial $P_n(\lambda)=\det (A-\lambda I)$ has the form $$\label{charac_poly_2} P_n(\lambda)=(-1)^n\lambda^{n-1}\left(\lambda-na^{n-1}\right).$$ Indeed, in this case ${\operatorname{rank}}A=1$ as any two rows of $A$ are proportional. Therefore, by (\[characht\_property\_1\]) and (\[characht\_property\_2\]) the coefficients of the terms with powers less than ${n-1}$ vanish in $\det (A-\lambda I)$. To prove (\[charac\_poly\_2\]), it remains to notice that $\mathrm{Tr} A=na^{n-1}$. Now we return to the determinant $D$. In the same way as in Lemma \[lemma\_dif\_1\], (\[DETER-D\]) and (\[charac\_poly\_2\]) yield the desired formula for ${\check{g}}_n$. If ${\check{g}}_n$ is known, then, as in the case of differentiation above, the other $n$ coefficients can be found from the system (\[Newton\_formula\]) of $n$ linear equations (with $\hat{\phantom{o}}$ replaced by $\check{\phantom{o}}$). This system with respect to the unknowns ${\check{g}}_0,\ldots,{\check{g}}_{n-1}$ has a unique solution for $p\neq 0,\;-na^{n-1}$ as its determinant is equal to ${\check{g}}_n\ne 0$. Thus it suffices to check (\[Newton\_formula\]) by direct substitution of the values (\[moments\_extropal\]) and (\[koef\_extrapol\]). This is straightforward, so we do not dwell on it. [**.3. Roots of the generating polynomial.**]{} From now on we suppose that $q=0$; then, as we will see below, $\check{G}_n$ has exactly $n$ pairwise distinct roots and its coefficients can be calculated by quite simple formulas. Let $p>0$, $q=0$ and $a>0$. Then $$\label{COEFF G_n_extrapol_p_q_special} {\check{g}}_n=\kappa p\left(na^{n-1}+p\right)\neq 0,\qquad {\check{g}}_m=-\kappa p \,a^{2n-1-m},\qquad m=\overline{0,n-1},$$ and the generating polynomial has the form $$\label{G_n_extrapol_p_q_special} \check{G}_n(\lambda)= {\check{g}}_n\left(\lambda^n-\frac{a^{2n-1}}{na^{n-1}+p}\sum_{m=0}^{n-1} \frac{\lambda^m}{a^m}\right)={\check{g}}_n \left(\lambda^n-\frac{a^{n}}{na^{n-1}+p} \frac{\lambda^n-a^n}{\lambda-a}\right).$$ Moreover, $\check{G}_n$ has exactly $n$ pairwise distinct roots. The representation (\[G\_n\_extrapol\_p\_q\_special\]) can be obtained by directly substituting $q=0$ into the formulas (\[koef\_extrapol\]) from Lemma \[lemma\_extrapol\_1\] for $p>0$. It remains to show that the polynomial (\[G\_n\_extrapol\_p\_q\_special\]) has no multiple roots. We rewrite $\check{G}_n$ in the form $$\check{G}_n(\lambda)={\check{g}}_n \frac{P_{n+1}(\lambda)}{\lambda-a},\qquad P_{n+1}(\lambda):=\lambda^{n+1}-a\left(1+\frac{a^{n-1}}{na^{n-1}+p}\right)\lambda^n+\frac{a^{2n}}{na^{n-1}+p}.$$ The set of roots of the polynomial $P_{n+1}$ contains all the roots of the polynomial $\check{G}_n$ and one more root $\lambda=a$. If the polynomial $P_{n+1}$ had a multiple root, then this root would also be a root of its derivative $P'_{n+1}$. However, $$P'_{n+1}(\lambda)=(n+1)\lambda^{n-1}\left(\lambda-\lambda^{*}\right),\qquad \lambda^{*}:=\frac{a n}{n+1}\left(1+\frac{a^{n-1}}{na^{n-1}+p}\right),$$ and, as can easily be seen, the polynomial $P_{n+1}$ does not vanish (for $p>0$) at either root of the derivative $P'_{n+1}$, namely at $0$ and $\lambda^{*}$; more precisely, $P_{n+1}(0)>0$, $P_{n+1}(\lambda^{*})<0$.
Consequently, neither $P_{n+1}$ nor $\check{G}_n$ has multiple roots. [**.4. Estimates of the roots of the generating polynomial.**]{} We now estimate the roots of the polynomial (\[G\_n\_extrapol\_p\_q\_special\]) assuming $q=0$ as above. \[lemma\_extrapol\_2\] Given $p>0$ and $a>0$, for the roots $\lambda_k$ of the polynomial $\check{G}_n$ we have $$\label{COR-G_n} |\lambda_k|<\delta a<a,\qquad \delta=\delta(n,a,p):=\left(1+\frac{p}{na^{n-1}}\right)^{-1/n}, \qquad k=\overline{1,n}.$$ For $|\lambda|=\delta a$ we get $$\frac{a^{2n-1}}{na^{n-1}+p}\sum_{m=0}^{n-1}\left|\frac{\lambda}{a}\right|^m= \frac{a^{2n-1}}{na^{n-1}+p}\sum_{m=0}^{n-1}\left|\delta\right|^m<\frac{na^{n-1}}{na^{n-1}+p}\,a^n= (\delta a)^n=|\lambda|^n.$$ From this, taking into account the first identity in $(\ref{G_n_extrapol_p_q_special})$ and Rouché’s theorem, we conclude that all $n$ roots of the polynomial $\check{G}_n$ belong to the disc $|\lambda|<\delta a$. \[primech\_ext\] We now mention several properties of $\delta=\delta(n,a,p)$, which plays a key role in the process of extrapolation. For a fixed $n$ we obviously have $$\label{COR-G_n-asymp} \delta\in (0,1),\qquad \delta=\left(\frac{n}{p}\right)^{1/n}a^{1-\tfrac{1}{n}} \left(1-O\left(\frac{a^{n-1}}{p}\right)\right),\qquad \frac{a^{n-1}}{p}\to 0,$$ where $O\left(a^{n-1}/p\right)$ is a positive real value. From this it follows, in particular, that if the fraction $a^{n-1}/p$ decreases, then all the roots $\lambda_k$ come closer to the origin. For example, for a fixed $p$ and $a\to 0$ the largest absolute value of $\lambda_k$ is bounded by a value of order $a^{2-1/n}$. [**.5. Main theorem about numerical extrapolation by amplitude and frequency operators. Remarks and examples.**]{} We now aim to estimate the extrapolation remainder (\[100\]) and then formulate the main result of the section. To do so, we need estimates for the generalized power sums $S_v$, $v{\geqslant}2n$, taking into account the sums (\[moments\_extropal\]) with indices ${\leqslant}2n-1$ and $q=0$ as before. \[lemma\_extrapol\_3\] For $p>0$ and $a>0$ the following inequalities hold: $$\label{200} 0{\leqslant}S_v{\leqslant}a^v, \qquad v{\geqslant}2n.$$ We prove this by induction, based on (\[COEFF G\_n\_extrapol\_p\_q\_special\]) and the identities  (\[corollary\_form\_Newton++++\]) for $v{\geqslant}2n$. For $v=2n$ from (\[moments\_extropal\]) and (\[COEFF G\_n\_extrapol\_p\_q\_special\]) we get $$S_{2n} = \frac{na^{3n-1}}{na^{n-1}+p}{\leqslant}a^{2n}.$$ Furthermore, suppose that the inequality $S_v{\leqslant}a^v$ is valid for all $v=\overline{2n,N}$ (recall also that $S_v=a^v$ for $v=\overline{n,2n-1}$, so this bound is available for every index appearing below). Under this assumption, we obtain $$S_{N+1} = \frac{a^{2n-1}}{na^{n-1}+p}\sum_{m=0}^{n-1} \frac{S_{N-n+m+1}}{a^m}{\leqslant}\frac{na^{N+n}}{na^{n-1}+p}{\leqslant}a^{N+1},$$ which completes the proof. Now, using (\[200\]), we can estimate the extrapolation remainder (\[100\]), where $q=0$: $$|r_n(z)|=\left|\sum_{m=2n}^{\infty}(a^m-S_m)f_mz^m\right| {\leqslant}\sum_{m=2n}^{\infty}|f_m||az|^m.$$ Note that this estimate is independent of $p$. Summarizing, we formulate the main result of this section. \[th4\] Let $f$ be analytic in the disc $|z|<\rho$, $a>0$, $p>0$. 
Then for $|z|<\rho/a$ the following extrapolation formula holds: $$\label{extrapol_formula} f(az)=\sum_{k=1}^n\mu_k f(\lambda_k z)-pf_{n-1}z^{n-1}+r_n(z),\qquad |r_n(z)|{\leqslant}\sum_{m=2n}^{\infty}|f_m||az|^m,$$ where the frequencies $\lambda_k$ are the pairwise distinct roots of the polynomial $(\ref{G_n_extrapol_p_q_special})$, and $($see $(\ref{COR-G_n}))$ $$|\lambda_k z|<\delta a|z|<a|z|,\qquad \delta=\left(1+\frac{p}{na^{n-1}}\right)^{-1/n}.$$ The amplitudes $\mu_k$ are uniquely determined by Lemma \[400\]. Moreover, $\lambda_k=\lambda_k(a,p,n)$ and $\mu_k=\mu_k(a,p,n)$, so they are independent of the function $f$ and universal in this sense. The extrapolation formula $(\ref{extrapol_formula})$ is exact for polynomials of degree at most $2n-1$, i.e. $r_n(z)\equiv 0$ for the polynomials $f$ such that $\deg f{\leqslant}2n-1$. If $f_{n-1}=0$, then the extrapolation formula has a particularly simple form and a high degree of accuracy. For instance, for even (odd) functions and even (odd) $n$ the identity from (\[extrapol\_formula\]) has the form $$f(az)=\sum_{k=1}^n\mu_k f(\lambda_k z)+r_n(z).$$ Calculation of $f_{n-1}$ in the general case was discussed at the end of Section \[par-diff\]. We emphasize that the extrapolation character of the formula (\[extrapol\_formula\]) is specified by a proper choice of the parameters $p$ and $q$ in the problem (\[moments\_extropal\]). For instance, for $p=q=0$ this problem is also solvable (but non-regular): one can take $\mu_1=1$ and $\lambda_1=a$ with the rest of the parameters $\mu_k$ equal to zero, and any $\lambda_k$. However, in this case (\[extrapol\_formula\]) becomes a trivial identity. If one takes, for example, $p>0$ and $q=a^n p-\check{g}_n/\kappa$, then the problem (\[moments\_extropal\]) turns out to be regular and, moreover, the coefficients of the generating polynomial $\check{G}_n$ can also be calculated easily: $$\check{g}_0=-\check{g}_np_0, \qquad p_0:=\left(na^{n-1}+p\right)\left(\kappa p \,a^n/\check{g}_n-1\right)+a^{n-1},$$ $$\check{g}_m=-a^{n-1-m}\check{g}_n,\qquad m=\overline{1,n-1}.$$ However, in this case there is no extrapolation since the inequalities $|\lambda_k|<a$ are not valid anymore and we just have an interpolation formula of the form (\[extrapol\_formula\]). Note that for different $a$ one has different extrapolation formulas of the form (\[extrapol\_formula\]), which do not arise from one another. In particular, they cannot be reduced to the case $a=1$ by linear substitution. Indeed, substituting $t=az$ into (\[extrapol\_formula\]) gives $$\label{extrapol_formula_substitute} f(t)=\sum_{k=1}^n\mu_k(a) f\left(\tilde\lambda_k(a) t\right)-pf_{n-1}(t/a)^{n-1}+r_n(t/a),\qquad \tilde\lambda_k(a)=\lambda_k(a)/a,$$ where by Lemma \[lemma\_extrapol\_2\] for any $p>0$ and $a>0$ $$|\tilde\lambda_k(a)|{\leqslant}\delta(n,a,p)=\left(1+\frac{p}{na^{n-1}}\right)^{-1/n}<1,$$ i.e. $a$ does not disappear and is still a controlling parameter. If $a<1$, then for any fixed $p>0$ we get $$|\tilde{\lambda}_k(a)|<\delta(n,a,p)\to a,\qquad n\to\infty.$$ Thus, for large $n$ all the arguments $\tilde{\lambda}_k(a)\,t$ lie almost $a$ times closer to the origin than $t$ (see also Remark \[primech\_ext\]).
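The construction in Theorem \[th4\] is easy to reproduce numerically. The following sketch is ours and not part of the original exposition; it assumes NumPy, takes $n=4$, $a=1/2$, $p=2$ (the parameters of the examples below), computes the frequencies as the roots of (\[G\_n\_extrapol\_p\_q\_special\]), and recovers the amplitudes by solving the Vandermonde system for the first $n$ varied moments instead of using the closed formulas of Lemma \[400\].

```python
# Sketch (ours) of the extrapolation operator of Theorem [th4] with q = 0.
import numpy as np
from math import factorial

def extrapolation_operator(n, a, p):
    c = a**(2 * n - 1) / (n * a**(n - 1) + p)
    coeffs = [1.0] + [-c / a**m for m in range(n - 1, -1, -1)]   # monic polynomial, highest degree first
    lam = np.roots(coeffs).astype(complex)                        # frequencies, pairwise distinct
    s = np.array([a**m for m in range(n)], dtype=complex)
    s[n - 1] += p                                                 # varied moment s_{n-1} = a^{n-1} + p
    V = np.vander(lam, n, increasing=True).T                      # V[m, k] = lam_k^m
    mu = np.linalg.solve(V, s)                                    # amplitudes
    return lam, mu

n, a, p = 4, 0.5, 2.0
lam, mu = extrapolation_operator(n, a, p)
delta = (1.0 + p / (n * a**(n - 1)))**(-1.0 / n)
print(abs(lam).max() < delta * a)                                 # True, in line with Lemma [lemma_extrapol_2]

z = np.linspace(-1.0, 1.0, 2001)                                  # check (extrapol_formula) for f = exp
approx = sum(m_ * np.exp(l_ * z) for m_, l_ in zip(mu, lam)).real - p * z**(n - 1) / factorial(n - 1)
print(np.max(np.abs(np.exp(a * z) - approx)))                     # roughly 1e-7, cf. the remainder bound above
```

For $f(z)=e^z$ the printed error should be of order $10^{-7}$, in agreement with the bound $\sum_{m\geqslant 2n}|f_m||az|^m$ and with the example below.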
Realizing the $n$-point simple or multiple extrapolation (interpolation) on the basis of Lagrange polynomials or other similar approaches, one usually obtains extrapolation (interpolation) formulas that are exact for polynomials of degree $n-1$ (see, for instance, [@Salzer2; @DanChu2011; @Chu2012]). However, our extrapolation formula is exact for polynomials of degree ${\leqslant}2n-1$. It is interesting that the doubling of precision is gained by adding just one regularizing power term $pf_{n-1}z^{n-1}$. We also emphasize that, due to Remark \[primech\_ext\], if $p\to\infty$ and all other parameters are fixed, then the extrapolation nodes tend to the point $z=0$, but at the same time the theoretical error of extrapolation does not increase (see (\[extrapol\_formula\])), as it is independent of $p$. The same phenomenon of the convergence of nodes to the origin was noticed in similar extrapolation problems in [@DanChu2011; @Chu2012]. Let $n=2$, $a=1/2$ and $p=2$ ($q=0$ as above). Then the generating polynomial (\[G\_n\_extrapol\_p\_q\_special\]) has the form $$\check{G}_2(\lambda)=-6\,{{\lambda}}^{2}+\frac{1}{2}\,{\lambda}+\frac{1}{4}.$$ We find its (pairwise distinct) roots and then determine the amplitudes by Lemma \[400\]: $$\label{example1-extrapolation-lambda+} \lambda_{1}=- \frac{1}{6}, \qquad \lambda_{2}=\frac{1}{4},\qquad \mu_{1}=-\frac{27}{5},\qquad \mu_{2}=\frac{32}{5}.$$ Thus we get the following extrapolation formula (written in the form (\[extrapol\_formula\_substitute\])): $$\label{example1-extrapolation++} f(z)\approx -\frac{27}{5} f\left(-\frac{1}{3}z\right)+\frac{32}{5}f\left(\frac{1}{2}z\right)-4f_1z.$$ For example, for $f(z)=e^z$ the absolute error of this formula does not exceed $0.002$ on the real segment $z\in [-0.5,0.5]$. For $n=4$ and $n=8$ and the same parameters $a=1/2$ and $p=2$ the error of the extrapolation formula (\[extrapol\_formula\]) for $e^z$ in $[-1,1]$ does not exceed $10^{-7}$ and $10^{-18}$, respectively. Moreover, in both cases the moduli of the extrapolation nodes are less than $0.58|z|$. On the numerical method for constructing amplitude and frequency operators {#Section6} ========================================================================== As we have already mentioned in Remark \[remark2\], some authors have studied numerical methods for solving the systems (\[SRS\_M\]). In [@YF] one such approach, the method of small residuals in overdetermined moment systems, was used for approximation by amplitude and frequency sums (\[gH\]) in a neighbourhood of the point ${z=0}$. Some discussions from this paper (see Theorems 3 and 5 and Remark 1 there) raise the following important question: can one use the method of small residuals in the context of the Padé interpolation (\[2n-interpolation\]) and approximation by amplitude and frequency operators? From the observations below it is seen that this method can work well only for the rather narrow class of consistent systems, but in the general case one has to give a negative answer to the question. For the discussion we can consider only $M=2n$, as the case $M>2n$ follows from the counterexamples below by adding equations with arbitrary right-hand sides. Following [@YF], instead of the system (\[SRS\]) we consider the one with small residuals $\delta:=\{\delta_m\}_{m=0}^{2n-1}$: $$\label{NVZ} s_m-\sum_{k=1}^n\mu_k\lambda_k^m=\delta_m,\qquad m=\overline{0,2n-1}, \qquad |\delta|:=\max_m|\delta_m|{\leqslant}\varepsilon.$$ It is not difficult to show that for an arbitrarily small $\varepsilon>0$ one can choose the residuals $\delta$, $|\delta|{\leqslant}\varepsilon$, such that the system (\[NVZ\]) is solvable (both for consistent and inconsistent systems (\[SRS\])). 
Furthermore, using this solution, one can construct the corresponding amplitude and frequency sum $H_n(\delta;z)$ of the form (\[gH\]) such that $$\label{NVZ+} f(z)-H_n(\delta;z)=\sum_{m=0}^{2n-1} h_m\delta_m z^m+B_n(\delta;z),\qquad B_n(\delta;z)=O(z^{2n}).$$ If one can take $|\delta|=0$, then we deal with a consistent moment problem (\[SRS\]). As we have already mentioned above, this case is of little interest for numerical analysis because analytical methods work effectively there. If $|\delta|\ne 0$, then obviously one cannot get (\[2n-interpolation\]) from (\[NVZ+\]) even for regular problems. Indeed, given fixed residuals $\delta$, $|\delta|\ne 0$, the right-hand side of (\[NVZ+\]) is just of order $h_k\delta_k z^k=O(|\delta|)z^k$, $z\to 0$, where $k$ is the index of the first non-zero residual. There also exist other serious obstacles to the realization of the method of small residuals in interpolation and approximation problems. The point is that for inconsistent moment problems, letting $|\delta|$ decrease to zero always causes at least one component of the solution to the problem (\[NVZ\]), i.e. one of the amplitudes $\mu_k(\delta)$ or frequencies $\lambda_k(\delta)$, to grow to infinity. This happens because the solution to the initial, non-regularized inconsistent problem (\[SRS\]) does not exist. But a similar situation can occur even for some consistent systems (see Example \[ex6-3\]). In such cases the approximation is impossible: either the computational errors considerably exceed the residuals or, which is even worse, the arguments $\lambda_k(\delta) z\to\infty$ leave the domain of the function $h(z)$ (see Example \[ex6-2\]). Moreover, the corresponding interpolation formulas then usually depend unstably not only on the norm of the residuals but also on their individual components, which makes error estimation via the norm $|\delta|$ impossible (see Examples \[ex6-1\]-\[ex6-3\]). We now give several examples of such divergence for consistent and inconsistent moment problems. For simplicity, let $n=2$. \[ex6-1\] Set $s_0=0$, $s_1=1$, $s_2=0$, $s_3=0$. The system (\[SRS\]) is inconsistent. Solving the system (\[NVZ\]) by the Prony-Sylvester formulas and Lemma \[400\], we get $$\mu_{1}+\mu_2=\delta_0,\qquad \mu_{1}=\frac {(1+\delta_1)^2}{\sqrt {4\,\delta_3 \delta_1-3\,{\delta_2}^{2}+4\,\delta_3}}+O(|\delta|)\to\infty, \qquad |\delta|\to 0.$$ Thus, passage to the limit in $H_n(\delta;z)$ as $|\delta|\to 0$ predictably does not determine any Padé amplitude and frequency sum; moreover, the parameters of the resulting sum depend unstably on the components of the residuals. \[ex6-2\] Set $s_0=1$, $s_1=0$, $s_2=0$, $s_3=1$. The system (\[SRS\]) is inconsistent. By the same formulas for (\[NVZ\]) we get $$\lambda_{1}=\frac{1+O\left(|\delta|\right)}{\delta_0\delta_2+\delta_2-\delta_1^2}\; \to\infty, \qquad |\delta|\to 0,$$ i.e. the argument $\lambda_{1} z$ of the amplitude and frequency sum $H_n(\delta;z)$ in (\[NVZ+\]) tends to infinity and can leave the domain of the basis function $h$. The following example (which arises in the extrapolation problem considered in Section \[par-extrap\]) shows that the method of small residuals can be unsuitable even for consistent systems. \[ex6-3\] Set $s_0=1$, $s_1=1$, $s_2=1$, $s_3=1$. The system (\[SRS\]) is consistent (but not regular); one of its solutions is obvious: $\lambda_1=\mu_1=1$, $\lambda_2=0$, $\mu_2$ is arbitrary. 
However, the method of small residuals leads to the indeterminacy $$\lambda_1\cdot \lambda_2=\frac {\delta_3+\delta_1+\delta_1\delta_3 -2\,\delta_2-\delta_2^2}{\delta_2-2\,\delta_1+\delta_0+\delta_0\delta_2- \delta_1^{2}}.$$ It is not clear how to choose the residuals so that the process converges. If one takes, for instance, $\delta_0=\delta_1=0$, $\delta_2=\delta_3^2$, then again the argument in the amplitude and frequency sum tends to infinity: $$\lambda_1\cdot \lambda_2= \frac{1-2\delta_3-\delta_3^3}{\delta_3}\; \to\infty, \qquad |\delta|\to 0.$$ \[ex6-4\] In [@YF §4.4.2] the following system arises in the approximation of the derivative of the Bessel function, $J'_0$, by amplitude and frequency sums with the basis function $h(x)=(1-J_0(x))/x$ and $n=2$: $$\sum_{k=1}^2\mu_k\lambda_k^{2m+1}=2(m+1),\qquad m=\overline{0,M-1}.$$ It is easily seen that it is inconsistent for $M=4$ (cf. the non-regularized system in the problem of numerical differentiation in Section \[par-diff\]). The authors of [@YF] solve it numerically for $M>4$ and $\varepsilon=2\cdot10^{-16}$ and obtain the following results: $$\mu_1=\overline{\mu_2}=\tfrac{1}{2}+\mu i,\qquad \lambda_1=\overline{\lambda_2}=1+\lambda i,\qquad \mu\approx -9.8\cdot 10^7,\qquad \lambda\approx 5.1\cdot 10^{-9}.$$ It is natural to expect that a further decrease of the residual will cause the moduli of the amplitudes $\mu_k$ to grow to infinity and the moduli of the frequencies $\lambda_k$ to tend to one. Again one gets a divergent process in the context of the Padé interpolation under consideration. Thus, for the reasons given above, the method of small residuals has to be used for constructing amplitude and frequency sums with great circumspection, as it can lead to unacceptable results. Note that the regularization method that we propose also uses residuals; generally speaking, there are two of them: $p$ and $q$. The important difference between our method and the one with small residuals is that $p$ and $q$ are fixed, not necessarily small, and can be calculated by special formulas depending on the specifics of the problems considered in Sections \[regularization\_section\]-\[par-extrap\]. For example, in the problems of numerical differentiation and extrapolation we obtained explicit expressions for $p$ and $q$ which are independent of $z$, $f$ and $h$. Moreover, the corresponding residuals in the interpolation disappear not because of decreasing $|\delta|$ but due to adding a fixed regularizing binomial $c_1z^{n-1}+c_2z^{2n-1}$ to an amplitude and frequency sum. For instance, the following regularized interpolation formula corresponds to Example \[ex6-2\]: $$f(z)=-h_{1}z+\sum_{k=1}^2\mu_kh(\lambda_kz)+O(z^4),\quad \mu_{1,2}=\tfrac{1}{2}\left(1\mp\tfrac{3\sqrt{5}}{5}\right),\quad \lambda_{1,2}=-\tfrac{1}{2}\left(1\pm\sqrt{5}\right),$$ where the remainder depends only on $z$, $f$ and $h$. In conclusion, it is worth mentioning that for increasing the rate of approximation one can use the regularization method even for some regular systems, for example, for those which can be obtained from non-regular ones by a small variation of the moments. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank the referees for their useful suggestions, which helped to improve the paper. [99]{} M. Abramowitz and I.A. Stegun, *Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*, Dover, New York (1972). N.I. 
Akhiezer, *The Classical Moment Problem and Some Related Questions in Analysis*, Oliver & Boyd, Edinburgh (1965). J.M. Ash, S. Janson and R.L. Jones, Optimal numerical differentiation using $N$ function evaluations, *Calcolo* **21**(2) (1984), 151-169. G.A. Baker Jr. and P. Graves-Morris, *Padé Approximants. Part I: Basic Theory. Part II: Extensions and Applications*, Addison-Wesley, Reading, Mass. (1981). G. Beylkin and L. Monzón, On approximation of functions by exponential sums, *Appl. Comput. Harmon. Anal.* **19** (2005), 17–48. G. Beylkin and L. Monzón, Approximation by exponential sums revisited, *Appl. Comput. Harmon. Anal.* **28** (2010) 131–149. D. Braess, Nonlinear Approximation Theory, Springer Ser. Comput. Math., vol. 7, Springer-Verlag, Berlin, 1986. M. Buchmann and A. Pinkus, On a recovery problem, *Ann. of Num. Math.* **4** (1997), 129–142. P.V. Chunaev, On a nontraditional method of approximation, *Proc. of the Steklov Inst. Math.* **270**(1) (2010), 278-284. P.V. Chunaev, On the extrapolation of analytic functions by sums of the form $\sum_k\lambda_k h(\lambda_k z)$, *Math. Notes* **92**(5-6) (2012), 727-730. P.V. Chunaev and V.I. Danchenko, Approximation by the amplitude and frequency operators, arXiv:1409.4188v1, September 15, 2014. V.I. Danchenko, Approximation properties of sums of the form $\sum_k\lambda_kh(\lambda_k z)$, *Math. Notes* **83**(5) (2008), 587–593. V.I. Danchenko and P.V. Chunaev, Approximation by simple partial fractions and their generalizations, *J. of Math. Sci.* **176**(6) (2011), 844-859. V.I. Danchenko and P.V. Chunaev, On approximation by amplitude and frequency sums, Intl. Summer Kazan School-Conference on Function theory, its applications and similar problems: Proceedings, Vol. 46, Kazan University, August 22–28, 2013, 174-175. V. Danchenko and P. Chunaev, Approximation by amplitude and frequency sums, Joint CRM-ISAAC Conference on Fourier Analysis and Approximation Theory: Abstracts, CRM, November 4–8, 2013, 12. V.I. Danchenko and A.E. Dodonov, Estimates for exponential sums. Applications, *J. of Math. Sci.* **188**(3) (2013), 197-206. V.K. Dzyadyk, Generalized problem of moments and the Padé approximation, *Ukrainian Math. J.* **35**(3) (1983), 254-259. A.V. Fryantsev, On numerical approximation of differential polynomials, *Izv. Saratov. Univ. Mat. Mekh. Inform.* **7**(2) (2007), 39–43 \[in Russian\]. V.P. Gromov, On the growth of functions defined by series of the form $\sum_1^\infty d_nf(\lambda_nz)$, *Mat. Sb.* **67**(109:2) (1965), 190–209 \[in Russian\]. R.A. Horn and Ch.R. Johnson, *Matrix Analysis*, Cambridge University Press, Cambridge (2013). N. Jacobson, *Basic algebra I*, W.H. Freeman and Company, New York (1985). N.M. Korobov, Exponential sums and their applications, *Mathematics and Its Applications (Soviet Series)* **80**, Kluwer Academic Publishers (1992). V.I. Krylov, *Approximate Calculation of Integrals*, The Macmillan Co., New York-London (1962). J.P.S. Kung, Canonical forms of binary forms: Variations on a theme of Sylvester. Invariant theory and tableaux (Minneapolis, MN, 1988), *IMA Vol. Math. Appl.* **19** (1990) 46–58, Springer, New York. S.-Y. Kung, A new identification and model reduction algorithm via singular value decomposition, Proc. 12th Asilomar Conf. Circuits, Syst. Comput., Pacific Grove, CA, 1978, 705–714. A.F. Leont’ev, *Sequences of Exponentials*, Nauka, Moscow (1980) \[in Russian\]. A.F. Leont’ev, Representation of functions by generalized exponential series, *Math. Sb.* **62**(2) (1989), 491–505. J.N. 
Lyness, Differentiation formulas for analytic functions, *Math. Comp.* **22** (1968), 352-362. Y.I. Lyubich, Gauss type complex quadrature formulae, power moment problem and elliptic curves, *Mat. Fiz. Anal. Geom.* **9**(2) (2002), 128–145 \[in Russian\]. Y.I. Lyubich, The Sylvester-Ramanujan system of equations and the complex power moment problem, *Ramanujan J.* **8** (2004), 23–45. R. Prony, Sur les lois de la Dilatabilité des fluides élastiques et sur celles de la Force expansive de la vapeur de l’eau et de la vapeur de l’alkool, à différentes températures, *J. de l’Ecole Polytech.* **2**(4) (1795), 28-35 \[in French\]. D. Potts and M. Tasche, Parameter estimation for nonincreasing exponential sums by Prony-like methods, *Linear Algebra Appl.* **439**(4) (2013),1024–1039. S. Ramanujan, Note on a set of simultaneous equations, *J. Indian Math. Soc.* **4** (1912), 94–96. H.E. Salzer, Optimal points for numerical differentiation, *Num. Math.* **2**(1) (1960), 214-227. H.E. Salzer, Formulas for best extrapolation, *Num. Math.* **18** (1971), 144-153. G. Schmeisser, Numerical differentiation inspired by a formula of R.P. Boas, *J. Approx. Theory*, **160**(1-2) (2009), 202-222. V.I. Shevtsov, The representation of integral functions by series of the form $\sum_{n=1}^\infty \alpha_n f(\lambda_n z)$, *Mat. Zametki* **4**(5) (1968), 579–588 \[in Russian\]. J.J. Sylvester, On a remarkable discovery in the theory of canonical forms and of hyperdeterminants, *Phil. Magazine* **2** (1851), 391–410. C.E. Yarman and G.M. Flagg, Generalization of Padé approximation from rational functions to arbitrary analytic functions — Theory. *Math. Comp.* **84** (2015), no. 294, 1835–1860. **Petr Chunaev** Departament de Matemàtiques, Universitat Autònoma de Barcelona Edifici C, Facultat de Ciències, 08193 Bellaterra (Barcelona), Spain e-mail: **Vladimir Danchenko** Functional Analysis and Its Applications Department, Vladimir State University Belokonskoy str. 3/7, Building 3, 600000 Vladimir, Russia e-mail:
1
--- abstract: 'We study randomly initialized residual networks using mean field theory and the theory of difference equations. Classical feedforward neural networks, such as those with tanh activations, exhibit exponential behavior on the average when propagating inputs forward or gradients backward. The exponential forward dynamics causes rapid collapsing of the input space geometry, while the exponential backward dynamics causes drastic vanishing or exploding gradients. We show, in contrast, that by adding skip connections, the network will, depending on the nonlinearity, adopt subexponential forward and backward dynamics, and in many cases in fact polynomial. The exponents of these polynomials are obtained through analytic methods and proved and verified empirically to be correct. In terms of the “edge of chaos” hypothesis, these subexponential and polynomial laws allow residual networks to “hover over the boundary between stability and chaos,” thus preserving the geometry of the input space and the gradient information flow. In our experiments, for each activation function we study here, we initialize residual networks with different hyperparameters and train them on MNIST. Remarkably, our [*initialization time*]{} theory can accurately predict [*test time*]{} performance of these networks, by tracking either the expected amount of gradient explosion or the expected squared distance between the images of two input vectors. Importantly, we show, theoretically as well as empirically, that common initializations such as the Xavier or the He schemes are not optimal for residual networks, because [*the optimal initialization variances depend on the depth*]{}. Finally, we have made mathematical contributions by deriving several new identities for the kernels of powers of ReLU functions by relating them to the zeroth Bessel function of the second kind.' author: - | Greg Yang[^1]\ [Microsoft Research AI]{}\ `gregyang@microsoft.com`\ Samuel S. Schoenholz\ [Google Brain]{}\ `schsam@google.com`\ bibliography: - 'neural\_dynamics.bib' title: 'Mean Field Residual Networks: On the Edge of Chaos' --- Introduction ============ Previous works [@poole_exponential_2016; @daniely_toward_2016; @schoenholz_deep_2017] have shown that randomly initialized neural networks exhibit a spectrum of behavior with depth, from stable to chaotic, which depends on the variance of the initializations: the cosine distance of two input vectors converges exponentially fast with depth to a fixed point in \[0, 1\]; if this fixed point is 1, then the behavior is stable; if this fixed point is 0, then the behavior is chaotic. It has been argued in many prior works [@bertschinger_real-time_2004; @poole_exponential_2016] that effective computation can only be supported by a dynamical behavior that is on [**the edge of chaos**]{}. Too much stability prevents the neural network from telling apart two different inputs. While some chaotic behavior can increase the expressivity of a network, too much chaos makes the neural network think two similar inputs are very different. At the same time, the same initialization variances also control how far gradient information can be propagated through the network; the networks with chaotic forward dynamics will tend to suffer from exploding gradients, while networks with stable forward dynamics will tend to suffer from vanishing gradients. These works have focused on vanilla (fully connected) feedforward networks. 
Here we consider residual networks [@he_deep_2016; @he_identity_2016] (with fully-connected layers and without batchnorm), which are a family of recently proposed neural network architectures that has achieved state-of-the-art performance on image recognition tasks, beating all other approaches by a large margin. The main innovation of this family of architectures is the addition of a passthrough (identity) connection from the previous layer to the next, such that the usual nonlinearity computes the “residual” between the next-layer activation and the previous-layer activation. In this work, we seek to characterize randomly initialized residual networks. One of our main results is that random residual networks for many nonlinearities such as $\tanh$ [**live on the edge of chaos**]{}, in that the cosine distance of two input vectors will converge to a fixed point at a polynomial rate, rather than an exponential rate, as with vanilla tanh networks. Thus a typical residual network will slowly cross the stable-chaotic boundary with depth, hovering around this boundary for many layers. In addition, for most of the nonlinearities considered here, the mean field estimate of the gradient grows subexponentially with depth. In fact, for $\alpha$-ReLU, the $\alpha$th-power of ReLU, for $\alpha < 1$, the gradient grows only polynomially. These theoretical results provide some theoretical justification for why residual networks work so well in practice. In our experiments, we are also able to predict surprisingly well the relative performances of [*trained*]{} residual networks based only on their initialization hyperparameters, in a variety of settings. In particular, we find that the quality of initialization for tanh resnets is determined by [*trainability*]{} (how much gradient explosion on average) while that for ($\alpha$-)ReLU resnets is determined by expressivity (how far can two different input vectors be pulled apart) (see \[sec:experiments\]). To the best of our knowledge, this is the first time that a quantity other than gradient explosion/vanishing has been found to control the quality of initialization. We establish theoretically and empirically that the best initialization variances for residual networks depend on the depth of the network (contrary to the feedforward case [@schoenholz_deep_2017]), so that **common initialization schemes like Xavier [@glorot_understanding_2010] or He [@he_delving_2015] cannot be optimal**. In fact, even the rationale of He initialization is incorrect for ReLU residual networks because it tries to control gradient dynamics rather than expressivity. However we want to emphasize that we study a simplified model of residual networks in this work, with no batchnorm or convolutional layers, so that these results are not necessarily indicative of the MSRA residual network used in practice [@he_deep_2016]. In the body of this paper, we give account of general intuition and/or proof strategy when appropriate for our theoretical results, but we relegate all formal statements and proofs to the appendix. Background {#sec:background} ========== Consider a vanilla feedforward neural network of $L$ layers, with each layer $l$ having $\p N l$ neurons; here layer 0 is the input layer. For the ease of presentation we assume all hidden layer widths are the same $\p N l = N$ for all $l > 0$. Let $\p x 0 = (\p x 0_1, \ldots, \p x 0_{\p N 0})$ be the input vector to the network, and let $\p x l$ for $l > 0$ be the activation of layer $l$. 
Then a neural network is given by the equations $$\begin{aligned} \p x l _i &= \phi(\p h l _i), & \p h l _i &= \sum_{j=1}^N \p w l _{ij} \p x {l-1}_j + \p b l_i \end{aligned}$$ where $\p h l$ is the pre-activation at layer $l$, $\p w l$ is the weight matrix, $\p b l$ is the bias vector, and $\phi$ is a nonlinearity, for example $\tanh$ or ReLU, which is applied coordinatewise to its input. To lighten up notation, we suppress the explicit layer numbers $l$ and write $$\begin{aligned} x_i &= \phi(h_i), & h_i &= \sum_j w_{ij} {\underline}x_j + b_i \end{aligned}$$ where $\bullet$ implicitly denotes $\p \bullet l$, and ${\underline}\bullet$ denotes $\p \bullet {l-1}$ (and analogously, ${\overline}\bullet$ denotes $\p \bullet {l+1}$). A series of papers [@poole_exponential_2016; @raghu_expressive_2016; @schoenholz_deep_2017] investigated the “average behavior” of random neural networks sampled via $\p w l_{ij} \sim {\mathcal{N}}(0, \sigma_w^2/N), \p b l_i \sim {\mathcal{N}}(0, \sigma_b^2)$, for fixed parameters $\sigma_w$ and $\sigma_b$, independent of $l$. Consider the expectation of $\f 1 N \sum_{i=1}^N x^2_i$, the normalized squared length of $x$, over the sampling of $w$ and $b$. @poole_exponential_2016 showed that this quantity converges to a fixed point exponentially fast for sigmoid nonlinearities. Now suppose we propagate two different vectors $\p x 0$ and $(\p x 0)'$ through the network. @poole_exponential_2016 also showed that the expectation of the normalized dot product $\f 1 N \sum_{i=1}^N x_i x'_i$ converges exponentially fast to a fixed point. The ratio between the normalized squared length and the normalized dot product is the cosine distance between $x$ and $x'$. Thus these two exponential convergence results show that the cosine distance converges exponentially fast to a fixed point as well. Intuitively, this means that a vanilla feedforward network “forgets” the geometry of the input space “very quickly,” after only a few layers. In addition, @schoenholz_deep_2017, under certain independence assumptions, showed that the expected normalized squared norm of the gradient also vanishes or explodes in an exponential fashion with depth, with the ”half-life” controlled by $\sigma_w$ and $\sigma_b$. They verified that this theoretical ”half-life” correlates in practice with the maximal number of layers that are admissible to good performance. At the same time, @daniely_toward_2016 published work of similar nature, but phrased in the language of reproducing kernel Hilbert spaces, and provided high probability estimates that are meaningful for the case when the width $N$ is finite and the depth is logarithmic in $N$. However, they essentially fixed the variance parameters $\sigma_\bullet$, and furthermore, their framework (for example the notion of a “skeleton”) does not immediately generalize to the residual network case. In this work, we show that residual networks have very different dynamics from vanilla feedforward networks. In most cases, the cosine distance convergence rate and the gradient growth rate are subexponential in a residual network, and in most cases, these rates may be polynomial. Preliminaries ============= Residual networks were first introduced by [@he_deep_2016] and later refined by [@he_identity_2016], and they are now commonplace among deployed neural systems. The key innovation there is the addition of a shortcut connection from the previous layer to the next. We define the following idealized architectures for ease of analysis. 
Note that we only consider fully-connected affine layers instead of convolutional layers. A [**reduced residual network (RRN)**]{} has the recurrence $$\begin{aligned} x_i &= \phi(h_i) + {\underline}x_i, & h_i &= \sum_j w_{ij} {\underline}x_j + b_i. \end{aligned}$$ A [**(full) residual network (FRN)**]{} in addition has an affine connection given by weights $v$ and biases $a$ from the nonlinearity $\phi(h)$ to the next layer: $$\begin{aligned} x_i &= \sum_j v_{ij} \phi( h_j) + {\underline}x_i + a_i, & h_i &= \sum_j w_{ij} {\underline}x_j + b_i \end{aligned}$$ We are interested in the “average behavior” of these networks when the weights and biases, $\p w l_{ij}, \p b l_i, \p v l_{ij}$, and $\p a l_i$ are sampled i.i.d. from Gaussian distributions resp. with standard deviations $\sigma_w, \sigma_b, \sigma_v,$ and $\sigma_a$, independent of $l$. Here we take the variance of $\p w l_{ij}$ to be $\sigma_w^2/N$ so that the variance of each $h_i$ is $\sigma_w^2$, assuming each ${\underline}x_j$ is fixed (similarly for $\p v l_{ij}$). Such an initialization scheme is standard in practice. We make several key “physical assumptions” to make theoretical computations tractable: \[ass:symAct\] (a) We assume $\la (\p h l _i)^2 \ra = \la (\p h l _j)^2 \ra$ and $\la (\p x 0 _i)^2 \ra = \la (\p x 0 _j)^2 \ra$ for any $i, j, l$. (b) We also assume that the gradient $\pd E/\pd \p x l _i$ with respect to the loss function $E$ satisfies $\la (\pd E/\pd \p x l _i)^2 \ra = \la (\pd E/ \pd \p x l _j)^2 \ra$ for any $i, j, l$. One can see that \[ass:symAct\](a) is satisfied if the input $\p x 0 \in \{\pm 1\}^N$ and \[ass:symAct\](b) is satisfied if \[ass:gradInd\] below is true and the gradient at the last layer $\pd E/\pd x L \in \{\pm 1 \}^N$. But in general it is justified both empirically and theoretically as an approximation, because $(\p h l _i)^2 - (\p h l _j)^2$ stays about constant with $l$, but $(\p h l _i)^2$ and $(\p h l _j)^2$ grow rather quickly at the same pace with $l$ (as will be seen later in calculations), so that their additive difference becomes negligible; similarly for $(\p x l _i)^2$ and $(\pd E/\pd \p h l _i)^2$. \[ass:gradInd\] (a) We assume that we use a different set of weights for backpropagation than those used to compute the network outputs, but sampled i.i.d. from the same distributions. (b) For any loss function $E$, we assume that the gradient at layer $l$, $\pd E/\pd \p x l _i$, is independent of all activations $\p h {l} _j$ and $\p x {l-1} _j$ from the previous layer. \[ass:gradInd\](a) was first made in [@schoenholz_deep_2017] for computing the mean field theory of gradients for feedforward tanh networks. This is similar to the practice of feedback alignment [@lillicrap_random_2016]. Even though we are the first to explicitly formulate \[ass:gradInd\](b), in fact it was already applied implicitly in the gradient calculations of [@schoenholz_deep_2017]. Note that a priori \[ass:gradInd\](b) is not true, as $\pd E/\pd \p x l _i$ depends on $\dot \phi(\p h {l+1} _k)$ for every $k$, which in turn depend on $\p h {l} _j$ for each $j$ and on $\p x {l-1} _k$ for every $k$. Nevertheless, in practice both subassumptions hold very well. Now we define the central quantities studied in this paper. Inevitably, our paper involves a large amount of notation that may be confusing for the first-time reader. We have included a glossary of symbols (\[tab:glossary\]) to ameliorate notation confusion. \[defn:length\] Fix an input $\p x 0$. 
Define the [**length quantities**]{} $\p {\mathbf{q}}l := \la (\p h l_1)^2 \ra$ and $\p {\mathbf{p}}l := \la (\p x l_1)^2 \ra$ for $l > 0$ and $\p {\mathbf{p}}0 = \|\p x 0\|^2/N$. Here the expectations $\la \bullet \ra$ are taken over all random initialization of weights and biases for all layers $l$, as $N \to \infty$ (large width limit). Note that in our definition, the index $1$ does not matter by \[ass:symAct\]. \[defn:corr\] Fix two inputs $\p x 0$ and $\p x 0{}'$. We write $\bullet'$ to denote a quantity $\bullet$ with respect to the input ${\p x 0}'$. Then define [**the correlation quantities**]{} $\p {\boldsymbol{\oldgamma}}l:= \la \p x l_1 \p x l_1{}' \ra$ and $\p {\boldsymbol{\oldlambda}}l:= \la \p h l_1 \p h l_1{}'\ra$ for $l > 0$ and $\p {\boldsymbol{\oldgamma}}0 = \p x 0 \cdot \p x 0 {}' / N$, where the expectations $\la \bullet \ra$ are taken over all random initialization of weights and biases for all layers $l$, as $N \to \infty$ (large width limit). Again, here the index $1$ does not matter by \[ass:symAct\]. By [**metric expressivity**]{}, we mean $\p {\mathbf{s}}l := \f 1 {2N} \la \|\p x l - \p x l {}'\|^2\ra = \f 1 {2N} (\la \|\p x l\|^2\ra + \la \|\p x l {}'\|^2 \ra - 2 \la \p x l \cdot \p x l {}'\ra) = \f 1 2 (\p {\mathbf{p}}l + \p {\mathbf{p}}l {}') - \p {\boldsymbol{\oldgamma}}l$. Additionally, define [**the cosine distance quantities**]{} $\p {\mathbf{e}}l := \p {\boldsymbol{\oldgamma}}l / \sqrt{\p {\mathbf{p}}l \p {\mathbf{p}}l {}'}$ and $\p {\mathbf{c}}l := \p {\boldsymbol{\oldlambda}}l / \sqrt{\p {\mathbf{q}}l \p {\mathbf{q}}l{}'}$, and we will also call $\p {\mathbf{e}}l$ [**angular expressivity**]{}. In this paper, for the ease of presentation, we assume $\p {\mathbf{p}}0 = \p {\mathbf{p}}0 {}'$. Then, as we will see, $\p {\mathbf{p}}l = \p {\mathbf{p}}l{}', \p {\mathbf{q}}l = \p {\mathbf{q}}l{}'$ for all $l$, and as a result, $\p {\mathbf{e}}l = \p {\boldsymbol{\oldgamma}}l / \p {\mathbf{p}}l$ and $\p {\mathbf{s}}l = \p {\mathbf{p}}l - \p {\boldsymbol{\oldgamma}}l = (1 - \p {\mathbf{e}}l) \p {\mathbf{p}}l$. \[defn:grad\] Fix an input $\p x 0$ and a gradient vector $(\pd E/ \pd{\p x L_i})_i$ of some loss function $E$ with respect to the last layer $\p x L$. Then define [**the gradient quantities**]{} $\p {{\boldsymbol{\oldchi}}}l:= \la (\pd E/\pd \p x l _1)^2 \ra, \p {\boldsymbol{\oldchi}}l _\bullet := \la (\pd E/\pd \p \bullet l _1)^2 \ra$ for $\bullet = a, b$, and $\p {\boldsymbol{\oldchi}}l _\bullet := \la (\pd E/\pd \p \bullet l _{11})^2 \ra$ for $\bullet = w, v$. Here the expectations are taken with \[ass:gradInd\] in mind, over both random initialization of forward and backward weights and biases, as $N \to \infty$ (large width limit). Again, the index $1$ or $11$ does not matter by \[ass:symAct\]. #### Asymptotic notations. The expressions $f = O(g) \iff g = \Omega(f)$ have their typical meanings, and $f = \Theta(g)$ iff $f = O(g), g = O(f)$. We take $f(x) = \tilde O(g(x)) \iff g(x) = \tilde \Omega(f(x))$ to mean $f(x) = O(g(x)\log^k x)$ for some $k \in \Z$ (this is slightly different from the standard usage of $\tilde O$), and $f = \tilde\Theta(g) \iff f = \tilde O(g) \And g = \tilde O(f).$ We introduce a new notation: $f = {{\check\Theta}}(g)$ if $f(x) = O(g(x) \cdot x^{\epsilon})$ and $f(x) = \Omega(g(x) \cdot x^{-{\epsilon}})$, as $x \to \infty$, for any ${\epsilon}> 0$. All asymptotic notations are sign-less, i.e. can indicate either positive or negative quantities, unless stated otherwise. 
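As an illustration of these definitions, the length and correlation quantities can be estimated by direct simulation. The sketch below is ours, not from the paper; it assumes NumPy, uses a tanh FRN, and borrows the variances $\sigma_v^2 = 1.5$, $\sigma_a^2 = .5$, $\sigma_w^2 = 1.69$, $\sigma_b^2 = .49$ that appear in the tanh/FRN figures later in the text, with modest width, depth and number of runs.

```python
# Monte Carlo sketch (ours) estimating p^l, gamma^l and e^l = gamma^l / p^l for a tanh FRN.
import numpy as np

def tanh_frn_quantities(L=100, N=300, runs=10, sw2=1.69, sb2=0.49, sv2=1.5, sa2=0.5, e0=0.6, seed=0):
    rng = np.random.default_rng(seed)
    p = np.zeros(L); gam = np.zeros(L)
    for _ in range(runs):
        x = rng.choice([-1.0, 1.0], size=N)                      # p^0 = 1
        xp = np.where(rng.random(N) < (1 + e0) / 2, x, -x)       # gamma^0 ~ e0
        for l in range(L):
            w = rng.normal(0, np.sqrt(sw2 / N), (N, N)); b = rng.normal(0, np.sqrt(sb2), N)
            v = rng.normal(0, np.sqrt(sv2 / N), (N, N)); a = rng.normal(0, np.sqrt(sa2), N)
            h, hp = w @ x + b, w @ xp + b                        # same weights act on both inputs
            x, xp = v @ np.tanh(h) + x + a, v @ np.tanh(hp) + xp + a
            p[l] += (x * x).mean() / runs                        # estimates <(x^l_1)^2>
            gam[l] += (x * xp).mean() / runs                     # estimates <x^l_1 x^l_1'>
    return p, gam, gam / p                                       # e^l, since p^l = p^l' by construction

p, gam, e = tanh_frn_quantities()
print(p[9::20])    # roughly linear growth of p^l with l
print(e[9::20])    # the angular expressivity drifts only slowly toward its fixed point
```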
Overview ======== The primary reason we may say anything about the average behavior of any of the above quantities is the central limit theorem: every time the activations of the previous layer pass through an affine layer whose weights are sampled i.i.d., the output is a sum of a large number of random variables, and thus follows an approximately Gaussian distribution. The mean and variance of these distributions can be computed by keeping track of the means and variances of the activations in the previous layer. In what follows, we use this technique to derive recurrence equations governing ${\mathbf{p}}, {\mathbf{q}}, {\boldsymbol{\oldgamma}}, {\boldsymbol{\oldlambda}}, {{\boldsymbol{\oldchi}}}$ for different architectures and different activation functions. We use these equations to investigate the dynamics of ${\mathbf{e}}$ and ${\mathbf{s}}$, the key quantities in the forward pass, and the dynamics of ${{\boldsymbol{\oldchi}}}$, the key quantity in the backward pass. The cosine distance ${\mathbf{e}}$ in some sense measures the angular geometry of two vectors. If ${\mathbf{e}}= 1$, then the vectors are parallel; if ${\mathbf{e}}= 0$, then they are orthogonal. Just as in [@poole_exponential_2016] and [@schoenholz_deep_2017], we will show that in all of the architectures and activations we consider in this paper, $\p {\mathbf{e}}l$ converges to a fixed point ${\mathbf{e}}^*$ as $l \to \infty$ [^2]. Thus, on the average, as vectors propagate through the network, the geometry of the original input space, for example, linear separability, is “forgotten” by residual networks as well as by vanilla networks. But we will prove and verify experimentally that, while @poole_exponential_2016 and [@schoenholz_deep_2017] showed that the convergence rate to ${\mathbf{e}}^*$ is exponential in a vanilla network, the convergence rate is only polynomial in residual networks, for tanh and $\alpha$-ReLU (\[defn:alphaReLU\]) nonlinearities; see \[thm:dalethRecReduced\], \[thm:eDynamicsFullResTanh\], \[thm:ReLUSquaredConvergence\], and \[thm:alphaReLUeConvergence\]. This slow convergence preserves geometric information in the input space, and allows a typical residual network to “hover over the edge of chaos”: Even when the cosine distance $\p{\mathbf{e}}l$ converges to 0, corresponding to “chaos” (resp. 1, corresponding to “stability”), for the number of layers usually seen in practice, $\p {\mathbf{e}}l$ will reside well away from 0 (resp. 1). Similarly, the quantity ${\mathbf{s}}$ measures the metric geometry of two vectors. The evolution of $\p {\mathbf{s}}l$ with $l$ tells us the ability of the average network to separate two input points in terms of Euclidean distance. Again, for tanh and $\alpha$-ReLU ($\alpha < 1$) nonlinearities, ${\mathbf{s}}$ varies only polynomially with $l$. On the other hand, $\p {{\boldsymbol{\oldchi}}}l$ measures the size of the gradient at layer $l$, and through it we track the dynamics of gradient backpropagation, be it explosion or vanishing. In contrast to vanilla tanh networks, which can experience both of these phenomena depending on the initialization variances, typical residual networks cannot have vanishing gradient, in the sense of vanishing $\p {{\boldsymbol{\oldchi}}}l$ as $l \to 1$; see \[thm:dalethRecReduced\] and \[thm:dalethRecFull\]. 
Furthermore, while vanilla tanh networks exhibit exponentially vanishing or exploding gradients, all of the activation/architecture pairings considered here, except the full residual network with ReLU, have subexponential gradient dynamics. While tanh residual networks (reduced or full) have $\p {{\boldsymbol{\oldchi}}}0 \approx \exp(\Theta(\sqrt l)) \p {{\boldsymbol{\oldchi}}}l$ (\[thm:dalethExpSqrtTanhFullRes\]), $\alpha$-ReLU residual networks for $\alpha < 1$ have $\p {{\boldsymbol{\oldchi}}}0 \approx {\mathsf{poly}}(l) \p {{\boldsymbol{\oldchi}}}l$ (\[thm:dalethDynamicsAlphaReLU\]). Instead of $\pd E/\pd x_i$, we may also consider the size of gradients of actual trainable parameters. For tanh and $\alpha$-ReLU with $\alpha < 1$, they are still subexponential and polynomial (\[thm:alphaReLUAllGradients\]). On the other hand, while $\p {{\boldsymbol{\oldchi}}}0 = \exp(\Theta(l))\p {{\boldsymbol{\oldchi}}}l$ for a ReLU resnet, its weight gradients have size independent of layer, within $O(1)$ (\[thm:alphaReLUAllGradients\])! This is the only instance in this paper of gradient norm being completely preserved across layers. The above overviews the theoretical portion of this paper. Through experiments, we discover that we can very accurately predict whether one random initialization leads to better performance than another on the test set, after training, by leveraging the theory we build. Residual networks of different nonlinearities have different [*controlling quantities*]{}: for resnets with tanh, the optimal initialization is obtained by controlling the gradient explosion $\p {{\boldsymbol{\oldchi}}}0 / \p {{\boldsymbol{\oldchi}}}L$; whereas for ReLU and $\alpha$-ReLU, the optimal initialization is obtained by maximizing ${\mathbf{s}}$ without running into numerical issues (with floating point computation). See \[sec:experiments\] for details. Over the course of our investigation of $\alpha$-ReLU, we derived several new identities involving the associated kernel functions, first defined in [@cho_kernel_2009], which relate them to the zeroth Bessel functions (\[lemma:JalphaBessel,lemma:LAlphaRec,lemma:JalphaRec,lemma:JalphaGrad\]). Theoretical Results =================== In what follows in the main text, we assume $\sigma_\bullet > 0$ for all $\bullet = w, v, b, a$; in the appendix, the formal statement of each main theorem will contain results for other cases. We are interested in the two major categories of nonlinearities used today: tanh-like and rectified units. We make the following formal definitions as a foundation for further consideration. We say a function $\phi$ is [**tanh-like**]{} if $\phi$ is antisymmetric ($\phi(-x) = -\phi(x)$), $|\phi(x)| \le 1$ for all $x$, $\phi(x) \ge 0, \forall x \ge 0$, and $\phi(x)$ monotonically increases to 1 as $x \to \infty$. \[defn:alphaReLU\] Define the $\alpha$-ReLU $\psi_\alpha(x) = x^\alpha$ if $x > 0$ and 0 otherwise.[^3]
|  Antisymmetric/RRN | Any/FRN |
|---|---|
| Theorems \[thm:p\_q\_linear\], \[thm:lambda\_gamma\_recurrence\], \[thm:dalethRecReduced\] | Theorems \[thm:fullResPQRec\], \[thm:full\_res\_l\_g\_recurr\], \[thm:dalethRecFull\] |

: Main Recurrences[]{data-label="tab:recurrences"}

By applying the central limit theorem as described in the last section, we derive a set of recurrences for different activation/architecture pairs, shown in \[tab:recurrences\] (see appendix for proofs). They leverage certain integral transforms [^4] as in the following \[defn:integralTransform\] Define the transforms ${\mathrm{V}}$ and ${\mathrm{W}}$ by ${\mathrm{V}}\phi(q) := \EV[\phi(z)^2: z \sim {\mathcal{N}}(0, q)]$ and ${\mathrm{W}}\phi(\rho, \nu) := \EV[\phi(z)\phi(z'): (z, z') \sim {\mathcal{N}}(0, \begin{pmatrix}\rho & \nu \\ \nu & \rho \end{pmatrix})]$. These recurrences are able to track the corresponding quantities in practice very well. For example, \[fig:theory\_tracks\_pratice\] compares theory vs experiments for the tanh/FRN pair. The agreement is very good for tanh/RRN (not shown, but similar to the case of tanh/FRN with $\sigma_v = 1$ and $\sigma_a = 0$) and $\alpha$-ReLU/FRN as well (see \[fig:alphaReLUTheoryVsEmpirics\]). As mentioned in previous sections, we seek to characterize the long term/high depth behavior of all of the quantities defined in \[sec:background\]. To do so, we solve for the asymptotics of the recurrences in \[tab:recurrences\], where $\phi$ is instantiated with tanh or $\alpha$-ReLU. Our main dynamics results are summarized in \[tab:dynamics\].

|  | Tanh/RRN | Tanh/FRN | ReLU/FRN | $\alpha$-ReLU/FRN, $\alpha<1$ |
|---|---|---|---|---|
| $\p {\mathbf{p}}l$ | $\Theta(l)$, \[thm:p\_q\_linear\] | $\Theta(l)$, \[thm:pIsLinearTanh\] | $\exp(\Theta(l))$, \[thm:pDynamicAlphaReLU\] | $\Theta(l^{1/(1-\alpha)})$, \[thm:pDynamicAlphaReLU\] |
| $\p {\mathbf{s}}l$ | $\Theta(l)$, \[thm:edynamics\] | $\Theta(l)$, \[thm:eDynamicsFullResTanh\] | $\exp(\Theta(l))$, \[thm:ReLUSquaredConvergence\] | $\Theta(l^{1/(1-\alpha)})$, \[thm:alphaReLUeConvergence\] |
| $\p {\mathbf{e}}l - {\mathbf{e}}^*$ | ${{\check\Theta}}(l^{\f2 \pi -1})$, \[thm:edynamics\] | ${\mathsf{poly}}(l)$, \[thm:eDynamicsFullResTanh\] | $\Theta(l^{-2})$, \[thm:ReLUSquaredConvergence\] | ${\mathsf{poly}}(l)$, \[thm:alphaReLUeConvergence\] |
| $\p {{\boldsymbol{\oldchi}}}l$ | $\exp(\Theta(\sqrt l))$, \[thm:dalethExpSqrtTanh\] | $\exp(\Theta(\sqrt l))$, \[thm:dalethRecFull\] | $\exp(\Theta(l))$, \[thm:dalethDynamicsAlphaReLU\] | $\Theta(l^{\f{\alpha^2}{(1-\alpha)(2 \alpha - 1)}})$, \[thm:dalethDynamicsAlphaReLU\] |

: Summary of Main Dynamics Results. Note that while $\p{{\boldsymbol{\oldchi}}}l$ is exponential for ReLU/FRN, the gradients with respect to weight parameters have norms (${\boldsymbol{\oldchi}}_w$ and ${\boldsymbol{\oldchi}}_v$) constant in $l$ (\[thm:alphaReLUAllGradients\]). Also, the $\p {{\boldsymbol{\oldchi}}}l$ entry for $\alpha$-ReLU is for $\alpha \in (3/4, 1)$ only[]{data-label="tab:dynamics"}

Tanh ---- ![Our equations predict the relevant quantities very well in practice. 
These plots make the comparison between prediction and measurements for the full resnet with tanh activation, with $\sigma_v^2 = 1.5$, $\sigma_a^2 = .5$, $\sigma_w^2 = 1.69$, $\sigma_b^2 = .49$. Left-to-right: [**(a)**]{} $\p {\mathbf{p}}l$ and $\p {\boldsymbol{\oldgamma}}l$ against layer $l$ for 200 layers. [**(b)**]{} $\p {\mathbf{e}}l = \p {\boldsymbol{\oldgamma}}l /\p {\mathbf{p}}l$ against $l$ for 200 layers. Both (a) and (b) trace out curves for different initial conditions. [**(c)**]{} Different gradient quantities against $l$ for 50 layers. From left to right the layer number $l$ decreases, following the direction of backpropagation. Notice that the gradient increases in norm as $l \to 1$. All three figures exhibit smooth curves, which are theoretical estimates, and irregular curves with shades around them, which indicate empirical means and standard deviations (both of which taken in regular scale, not log scale). (a) and (b) are made with 20 runs of resnets of width 1000. (c) is made with 25 runs of resnets of width 250.[]{data-label="fig:theory_tracks_pratice"}](graphics/tanh_full_res_theory_vs_exp-chi.pdf){height=".17\textheight"} #### Forward dynamics. When $\phi = \tanh$, $\p {\mathbf{p}}l$ and $\p {\mathbf{q}}l$ increase as $\Theta(l)$ in either RRN or FRN (\[thm:p\_q\_linear\]), as one might expect by observing that ${\mathrm{V}}\tanh( {\mathbf{q}}) \to 1$ as ${\mathbf{q}}\to \infty$ so that, for example in the RRN case, the recurrence ${\mathbf{p}}= {\mathrm{V}}\tanh( {\mathbf{q}}) + {\underline}{\mathbf{p}}$ becomes ${\mathbf{p}}= 1 + {\underline}{\mathbf{p}}$. This is confirmed graphically by the black lines of the leftmost chart of \[fig:theory\_tracks\_pratice\]. We carefully verify that this intuition is correct in its proof in the appendix, and find that in fact $\p {\mathbf{p}}l \sim l$ in the RRN case and $\p {\mathbf{p}}l \sim (\sigma_v^2 + \sigma_a^2)l$ in the FRN case. What about $\p {\boldsymbol{\oldgamma}}l$? The middle chart of \[fig:theory\_tracks\_pratice\] shows that over time, $\p {\mathbf{e}}l = \p {\boldsymbol{\oldgamma}}l / \p {\mathbf{p}}l$ contracts toward the center of the interval $[0, 1]$, but from the looks of it, it is not clear whether there is a stable fixed point ${\mathbf{e}}^*$ of ${\mathbf{e}}$ or not. We prove that, in fact, [**all trajectories of ${\mathbf{e}}$ not starting at 1 do converge to a single fixed point, but only at a polynomial rate**]{}, in both the RRN and FRN cases (\[thm:p\_q\_linear\] and \[thm:full\_res\_l\_g\_recurr\]); we can even explicitly compute the fixed point and the rate of convergence: For FRN, there is a [**unique stable fixed point**]{} ${\mathbf{e}}^* < 1$ determined by the equation $${\mathbf{e}}^* = \f 1 {\sigma_v^2 + \sigma_a^2}[\sigma_v^2 \f 2 \pi \arcsin\lp {\mathbf{e}}^* \rp + \sigma_a^2],$$ and $|{\mathbf{e}}^* - \p {\mathbf{e}}l|$ decreases like $l^{-\delta^*}$, where $$\delta^* := 1 - \f 2 \pi \f 1 {\sqrt{1 - ({\mathbf{e}}^*)^2}} \f{\sigma_v^2 }{\sigma_v^2 + \sigma_a^2}.$$ Since ${\mathbf{e}}^* < 1$, ${\mathbf{s}}= (1 - {\mathbf{e}}) {\mathbf{p}}= \Theta({\mathbf{p}}) = \Theta(l).$ The case of RRN can be viewed as a special case of the above, setting $\sigma_v^2 = 1$ and $\sigma_a^2 = 0$, which yields ${\mathbf{e}}^* = 0$ and $\delta^* = 1 - \f 2 \pi$. We observe that both ${\mathbf{e}}^*$ and $\delta^*$ only depend on the ratio $\rho := \sigma_a/\sigma_v$, so in \[fig:edelta\_plot\] we graph these two quantities as a function of $\rho$. 
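A minimal sketch (ours, not part of the paper) evaluates ${\mathbf{e}}^*$ and $\delta^*$ from the two displayed equations by fixed-point iteration; only the ratio $\rho = \sigma_a/\sigma_v$ enters, so $\sigma_v^2$ may be set to one.

```python
# Sketch (ours): solve the e* fixed-point equation and compute delta* as functions of rho.
import numpy as np

def estar_deltastar(rho, iters=5000):
    sv2, sa2 = 1.0, rho**2
    e = 0.5
    for _ in range(iters):          # the map sends [0, 1] into itself, so the iteration is safe
        e = (sv2 * (2 / np.pi) * np.arcsin(e) + sa2) / (sv2 + sa2)
    delta = 1 - (2 / np.pi) / np.sqrt(1 - e**2) * sv2 / (sv2 + sa2)
    return e, delta

for rho in [0.0, 0.5, 1.0, 2.0]:
    print(rho, estar_deltastar(rho))
# rho = 0 recovers e* = 0 and delta* = 1 - 2/pi; both quantities increase with rho.
```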
${\mathbf{e}}^*$ and $\delta^*$ both increase with $\rho$ and asymptotically approach 1 and $\nicefrac 1 2$ respectively from below. When $\rho = \sigma_a = 0$, ${\mathbf{e}}^* = 0$ and $\delta^* = 1 - \f 2 \pi$. Thus the rate of convergence at its [**slowest**]{} for tanh/FRN is $\delta^* = 1 - \f 2 \pi \approx 0.36338$, where asymptotically the network tends toward a [**chaotic regime**]{} ${\mathbf{e}}^* = 0$, corresponding to a large weight variance and a small bias variance; at its [**fastest**]{} it is $\delta^* = \nicefrac 1 2$, where asymptotically the network tends toward a [**stable regime**]{} ${\mathbf{e}}^* = 1$, corresponding to a large bias variance and a small weight variance. We verify $\delta^*$ by comparing $\p {\mathbf{e}}l - \p {\mathbf{e}}{l-1}$ to $l^{-\delta^* - 1}$ in log-log scale. If $\p {\mathbf{e}}l - {\mathbf{e}}^* = \Theta(l^{-\delta^*})$, then $\p {\mathbf{e}}l - \p {\mathbf{e}}{l-1} = \Theta(l^{-\delta^* - 1})$ and should exhibit the same slope as $l^{-\delta^* - 1}$ as $l \to \infty$. The middle figure of \[fig:edelta\_plot\] ascertains that this is indeed the case, starting around layer number 400.

![Left-to-right: [**(a)**]{} Plots of ${\mathbf{e}}^*$ and $\delta^*$ against $\sigma_a/\sigma_v$. [**(b)**]{} In log-log scale: the dashed line is $l^{-\delta^* - 1}$, and the colored lines are $\p {\mathbf{e}}l - \p {\mathbf{e}}{l-1}$ for different initial conditions $\p {\mathbf{e}}0$. That they become parallel from about $l = 400$ onward verifies that $\p {\mathbf{e}}l - {\mathbf{e}}^* = \Theta(l^{-\delta^*})$. [^5] [**(c)**]{} In log-log scale: The dashed line is $\mathcal A \sqrt l$ ($\mathcal A$ given in \[thm:dalethExpSqrtTanhFullRes\]), and the colored lines are $\log(\p \bullet 1/\p \bullet l)$ for $\bullet = {{\boldsymbol{\oldchi}}}, {\boldsymbol{\oldchi}}_b, {\boldsymbol{\oldchi}}_w$. That they all converge together starting around $l=1000$ indicates that the approximation in \[thm:dalethExpSqrtTanhFullRes\] is very good for large $l$.[]{data-label="fig:edelta_plot"}](graphics/edelta_plot.pdf "fig:"){height=".16\textheight" width=".3\textwidth"} ![](graphics/verify_deltastar_n_grad-chi.pdf "fig:"){height=".16\textheight"}

#### Backward dynamics.

Finally, we show that the gradient is approximated by $$\begin{aligned}
\p {{\boldsymbol{\oldchi}}}{m} &= \exp(\mathcal A(\sqrt{l} - \sqrt{m}) + O(\log l - \log m))\p {{\boldsymbol{\oldchi}}}{l}
\label{eqn:tanhGradEst}\tag{$\star$}\end{aligned}$$ where $\mathcal A = \f 4 3 \sqrt{\f 2 \pi} \sigma_w$ in the RRN case and $\mathcal A = \f 4 3 \sqrt{\f 2 \pi} \f{\sigma_v^2 \sigma_w}{\sqrt{\sigma_v^2 + \sigma_a^2}}$ in the FRN case (\[thm:dalethExpSqrtTanh\] and \[thm:dalethExpSqrtTanhFullRes\]).
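As a quick numerical reading of \[eqn:tanhGradEst\], keeping only the leading $\mathcal A(\sqrt l - \sqrt m)$ term already gives the order of magnitude of the gradient blow-up. The snippet below is our own illustration (not the experiment code), evaluated at the hyperparameters of \[fig:theory\_tracks\_pratice\], i.e. $\sigma_v^2 = 1.5$, $\sigma_a^2 = .5$, $\sigma_w^2 = 1.69$.

```python
import numpy as np

def tanh_log_grad_ratio(l, m, sigma_w, sigma_v=None, sigma_a=None):
    # Leading term of log(chi^(m) / chi^(l)) in (star); the O(log l - log m)
    # correction is dropped.  RRN if sigma_v / sigma_a are omitted, FRN otherwise.
    if sigma_v is None:
        A = 4 / 3 * np.sqrt(2 / np.pi) * sigma_w
    else:
        A = 4 / 3 * np.sqrt(2 / np.pi) * sigma_v**2 * sigma_w / np.sqrt(sigma_v**2 + sigma_a**2)
    return A * (np.sqrt(l) - np.sqrt(m))

# Predicted amplification from layer 200 back to layer 1: on the order of 1e8,
# i.e. exp(Theta(sqrt(l))) rather than exp(Theta(l)).
print(np.exp(tanh_log_grad_ratio(200, 1, sigma_w=1.3,
                                 sigma_v=np.sqrt(1.5), sigma_a=np.sqrt(0.5))))
```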
The rightmost plot of \[fig:edelta\_plot\] verifies that indeed, for large $l \ge 1000$, this is a very good approximation. This demonstrates that the mean field assumption of independent backpropagation weights is very practical and convenient even for residual networks. Note that in the FRN case, the constant $\mathcal A$ can be decomposed into $\mathcal A = \f 4 3 \sqrt{\f 2 \pi} \cdot \sigma_v \cdot \sigma_w \cdot (1 + \sigma_a^2/\sigma_v^2)^{-1/2}$. Consider the ratio $\rho := \sigma_a/\sigma_v$. If $\rho \gg 1$, then ${\mathbf{e}}^* \approx 1$ (\[fig:jjj\_vs\_id\_main\]), meaning that the typical network essentially computes a constant function, and thus unexpressive; at the same time, large $\rho$ makes $\mathcal A$ small, and thus ameliorating the gradient explosion problem, making the network more trainable. On the other hand, if $\rho \ll 1$, then ${\mathbf{e}}^* \approx 0$ (\[fig:jjj\_vs\_id\_main\]), the typical network can tease out the finest differences between any two input vectors, and a final linear layer on top of such a network should be able to express a wide variety of functions [@poole_exponential_2016]; at the same time, small $\rho$ increases $\mathcal A$, worsening the gradient explosion problem, making the network less trainable. This is the same expressivity-trainability tradeoff discussed in [@schoenholz_deep_2017]. $\alpha$-ReLU ------------- #### Forward dynamics. As with the tanh case, to deduce the asymptotic behavior of random $\alpha$-ReLU resnets, we need to understand the transforms ${\mathrm{V}}\psi_\alpha$ and ${\mathrm{W}}\psi_\alpha$. Fortunately, ${\mathrm{V}}\psi_\alpha$ has a closed form, and ${\mathrm{W}}\psi_\alpha$ has been studied before [@cho_kernel_2009]. In particular, if $\alpha > -\f 1 2$, then ${\mathrm{V}}\psi_\alpha( {\mathbf{q}}) = {\mathsf{c}}_\alpha {\mathbf{q}}^{\alpha}$, where ${\mathsf{c}}_\alpha$ is a constant with a closed form given by \[lemma:VtPsiAlpha\]. In addition, by [@cho_kernel_2009], we know that ${\mathrm{W}}\psi_\alpha( {\mathbf{q}}, {\mathbf{c}}{\mathbf{q}}) = {\mathrm{V}}\psi_\alpha( {\mathbf{q}}) {\mathbb{J}}_\alpha({\mathbf{c}})$ for ${\mathbb{J}}_\alpha$ given in \[sec:AlphaReluForwardProofs\]. \[fig:jjj\_vs\_id\_main\] shows a comparison of ${\mathbb{J}}_\alpha$ for different $\alpha$s along with the identity function. Substituting in ${\mathsf{c}}_\alpha {\mathbf{q}}^\alpha$ for ${\mathrm{V}}\psi_\alpha$, we get a difference equation ${\mathbf{p}}- {\underline}{\mathbf{p}}= \sigma_v^2 {\mathsf{c}}_\alpha (\sigma_w^2 {\underline}{\mathbf{p}}+ \sigma_b^2)^\alpha + \sigma_a^2$ governing the evolution of ${\mathbf{p}}$. This should be reminiscent of the differential equation $\dot P(l) = C P(l)^\alpha$, which has solution $\propto l^{1/(1-\alpha)}$ for $\alpha < 1$, and $\propto \exp(Cl)$ when $\alpha = 1$. And indeed, the solutions $\p {\mathbf{p}}l$ to these difference equations behave asymptotically exactly like so (\[thm:pDynamicAlphaReLU\]). Thus [**ReLU behaves very explosively compared to $\alpha$-ReLU with $\alpha<1$**]{}. In fact, in simulations, for $\sigma_w^2 = 1.69$ and $\sigma_v^2 = 1.5$, the ReLU resnets overflows into `inf`s after around 100 layers, while there’s no problem from any other kind of networks we consider. Regardless, [**$\alpha$-ReLU for all $\alpha$ massages $\p {\mathbf{e}}l$ toward a fixed point ${\mathbf{e}}^*$ that depends on $\alpha$**]{}. 
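Before stating the precise rates, the heuristic above is easy to check numerically. The sketch below is our own illustration (not the experiment code); it iterates the difference equation for ${\mathbf{p}}$, estimating ${\mathsf{c}}_\alpha$ by Monte Carlo rather than using its closed form, with the same variances as in our figures, and compares the fitted growth exponent with the predicted $1/(1-\alpha)$ (the fit should approach the prediction as depth grows).

```python
import numpy as np

rng = np.random.default_rng(0)

def c_alpha(alpha, n=1_000_000):
    # c_alpha = V psi_alpha(1) = E[psi_alpha(z)^2] for z ~ N(0, 1), via Monte Carlo
    z = rng.standard_normal(n)
    return np.mean(np.maximum(z, 0.0) ** (2 * alpha))

def p_dynamics(alpha, L, sv2=1.5, sa2=0.5, sw2=1.69, sb2=0.49, p0=1.0):
    # iterate  p - p_prev = sv2 * c_alpha * (sw2 * p_prev + sb2)^alpha + sa2
    ca, p = c_alpha(alpha), [p0]
    for _ in range(L):
        p.append(p[-1] + sv2 * ca * (sw2 * p[-1] + sb2) ** alpha + sa2)
    return np.array(p)

for alpha in (0.5, 0.75):
    p = p_dynamics(alpha, 2000)
    ls = np.arange(1000, 2001)
    slope = np.polyfit(np.log(ls), np.log(p[1000:]), 1)[0]
    print(alpha, slope, 1 / (1 - alpha))   # fitted exponent vs predicted 1/(1-alpha)

p = p_dynamics(1.0, 200)                   # standard ReLU: exponential growth instead
print(np.log(p[-1] / p[-2]), np.log(1 + 0.5 * 1.5 * 1.69))  # per-layer log growth vs log(1 + sv2*sw2/2)
```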
[ When $\phi = \psi_1$, the standard ReLU, $\p {\mathbf{e}}l$ converges to 1 asymptotically as $C l^{-2}$ for an explicit constant $C$ depending on $\sigma_v$ and $\sigma_w$ only (\[thm:ReLUSquaredConvergence\]), so that ${\mathbf{s}}= (1 - {\mathbf{e}}){\mathbf{p}}= \Theta(l^{-2}\exp(\Theta(l))) = \exp(\Theta(l)).$ When $\phi = \psi_\alpha$ for $\alpha < 1$, then $\p {\mathbf{e}}l$ converges to the nonunit fixed point ${\mathbf{e}}^*$ of ${\mathbb{J}}_\alpha$ at a rate of ${{\check\Theta}}(l^{-\mu})$, where $\mu = (1-\dot {\mathbb{J}}_\alpha({\mathbf{e}}^*))/(1-\alpha)$ is independent of the variances (\[thm:alphaReLUeConvergence\]), so that ${\mathbf{s}}= \Theta({\mathbf{p}})$.]{} These rates are verified in \[fig:alphaReLUVerifyExponents\].

#### Backward dynamics.

Finally, we have also characterized the rate of gradient growth for any $\alpha \in (\f 3 4, 1]$. [^7] [**In the case of $\alpha = 1$, the dynamics of ${{\boldsymbol{\oldchi}}}$ is exponential**]{}, the same as that of ${\mathbf{p}}$: $\p{{\boldsymbol{\oldchi}}}{l-m} = \p {{\boldsymbol{\oldchi}}}{l} B^m$ where $B =\f 1 2 \sigma_v^2 \sigma_w^2 + 1$. [**For $\alpha \in (\f 3 4, 1)$, the dynamics is polynomial**]{}, but with a different exponent in general from that of the forward pass: $\p{{\boldsymbol{\oldchi}}}{l-m} = \Theta(1) \p {{\boldsymbol{\oldchi}}}{l} (l/(l-m))^R$ for $R = \f{\alpha^2}{(1-\alpha)(2 \alpha - 1)}$, where the constants in $\Theta(1)$ do not depend on $l$ or $m$. This exponent $R$ is minimized on $\alpha \in [\f 3 4, 1)$ at $\alpha = \nicefrac 3 4$, where $R = \nicefrac 9 2$ (but on $\alpha \in (\f 1 2, 1)$ it is minimized at $\alpha = \nicefrac 2 3$, where $R = 4$); see \[fig:backprop\_exponent\_alpha-relu\]. These exponents are verified empirically in \[fig:alphaReLUVerifyExponents\]. Looking only at ${{\boldsymbol{\oldchi}}}$ and the gradients against the biases, it seems that ReLU suffers from a dramatic case of exploding gradients. But in fact, because ${{\boldsymbol{\oldchi}}}$ gains a factor of $B$ moving backwards while ${\mathbf{p}}$ loses a factor of $B$, the gradient norm ${\boldsymbol{\oldchi}}_w^{(l-m)}$ (and similarly for ${\boldsymbol{\oldchi}}_v^{(l-m)}$) is independent of how far, $m$, the gradient has been propagated (\[thm:alphaReLUAllGradients\]) — this is certainly the best gradient preservation among all of the models considered in this paper. Thus, strangely, random ReLU FRN exhibits both the best (constant for $v$ and $w$) and the worst (exponential for $a$ and $b$) gradient dynamics. This raises the question: is this a better deal than other $\alpha$-ReLUs, for which every learnable parameter sees at most a polynomial blowup with depth in its gradient? Our experiments (discussed below) show that $\alpha$-ReLU is useful to the extent that smaller $\alpha$ avoids numerical issues with exponentiating forward and backward dynamics, but the best performance is given by the largest $\alpha$ that avoids them (\[fig:tanhHeatmaps\](c, d)); in fact, the metric expressivity ${\mathbf{s}}$ determines performance, not gradient explosion (see the $\alpha$-ReLU experiments).

Experimental Results {#sec:experiments}
====================

![From left to right, top to bottom: **(a)** and **(b)**: $\sigma_w^2$, $L$, and test set accuracy of a grid of tanh reduced (left) and full (right) resnets trained on MNIST. Color indicates performance, with lighter colors indicating higher accuracy on the test set. Other than the values on the axes, we have fixed $\sigma_b^2 = \sigma_a^2 = \f 1 2$ and $\sigma_v^2 = 1$. The white dotted lines are given by $\sigma_w^2 L = C$, where $C = 170$ on the left and $C = 145$ on the right. We see that both dotted lines accurately predict the largest optimal $\sigma_w$ for each depth $L$. **(c)** Varying the ratio $\sigma_a^2/\sigma_v^2$ while fixing $\sigma_v/\sqrt{1 + \sigma_a^2/\sigma_v^2}$, and thus fixing $\mathcal A$, the leading constant of $\log \p {{\boldsymbol{\oldchi}}}0 / \p {{\boldsymbol{\oldchi}}}L$. **(d)** In log-log scale: Heatmap gives the test accuracies of ReLU FRN for varying $\sigma_w^2$ and $L$. Curves give level sets for the log ratios $\log \p {\mathbf{s}}L / \p {\mathbf{s}}0 \approx \log \p {\mathbf{p}}L / \p {\mathbf{p}}0 \approx \log \p{{\boldsymbol{\oldchi}}}0 / \p {{\boldsymbol{\oldchi}}}L = L \log(1 + \sigma_v^2 \sigma_w^2/2)$. **(e)** Red heatmap shows the test accuracies of a grid of $\alpha$-ReLU FRN with varying $\alpha$ and $L$ as shown, but with all $\sigma_\bullet$s fixed. The white dashed curve gives a typical contour line of $L^R = \text{const}$, where $R = \f {\alpha^2}{(1-\alpha)(2\alpha-1)}.$ The yellow-to-blue curves form a set of level curves for $\p {\mathbf{s}}l = \p {\mathbf{p}}l - \p {\boldsymbol{\oldgamma}}l = \text{const}$, with yellow curves corresponding to higher levels. []{data-label="fig:tanhHeatmaps"}](graphics/tanh_simple_resnet_trainability "fig:"){height="\height"} ![](graphics/tanh_resnet_trainability "fig:"){height="\height"} ![](graphics/tanh_resnet_sa_sb "fig:"){height="\height"}\
![](graphics/reluHeatmapContour "fig:"){height="\height"} ![](graphics/alphaReluGrid_GammaGapOverlay "fig:"){height="\height"}

Our experiments show a dichotomy of what matters in initialization: for tanh resnets, the quality of an initialization is determined by how much gradient explosion there is (measured by $\p {{\boldsymbol{\oldchi}}}0 / \p {{\boldsymbol{\oldchi}}}L$); for ($\alpha$-)ReLU resnets, it is determined by how expressive the random network is (measured by the metric expressivity $\p {\mathbf{s}}L$). We hypothesize this is because in tanh resnets, the gradient dynamics is much more explosive than the expressivity dynamics ($\exp(\Theta(\sqrt l))$ vs $\Theta(l)$), whereas for ReLU it’s somewhat the opposite (${\boldsymbol{\oldchi}}_w, {\boldsymbol{\oldchi}}_v = \Theta(1)$ vs ${\mathbf{s}}= \exp(\Theta(l))$).

#### Tanh, vary $\sigma_w$.

We train a grid of reduced and full tanh resnets on MNIST, varying the variance $\sigma_w^2$ and the number of layers (for FRN we fix $\sigma_v = 1$). The results are indicated in \[fig:tanhHeatmaps\](a, b). We see that in either model, deeper resnets favor much smaller $\sigma_w$ than shallower ones. The white dotted lines in \[fig:tanhHeatmaps\](a, b) confirm our theory: according to \[eqn:tanhGradEst\], the gradient ratio $R = \p {{\boldsymbol{\oldchi}}}0 / \p {{\boldsymbol{\oldchi}}}L$ satisfies $\log R \propto \sigma_w \sqrt L$ to leading order, so curves of constant $R$ are curves of constant $\sigma_w^2 L$. Indeed, the white dotted lines in \[fig:tanhHeatmaps\](a, b) trace out such a level curve, and they remarkably pinpoint the largest $\sigma_w$ that gives the optimal test set accuracy for each depth $L$. Why isn’t the best initialization given by $R = 1 \iff \sigma_w = 0$? We believe that when $L$ and/or $\sigma_w$ is small, gradient dynamics no longer dominates the initialization quality because it has “less room to explode,” and expressivity issues start to dampen the test time performance.

#### Tanh, vary $\sigma_a^2/\sigma_v^2$.

As suggested in the analysis of \[eqn:tanhGradEst\], the ratio $\rho^2 = \sigma_a^2/\sigma_v^2$ by itself determines the fixed point ${\mathbf{e}}^*$ and its convergence rate, while also contributing to the rate of gradient explosion in tanh FRN.
We seek to isolate its effect on forward dynamics by varying $\sigma_v$ with $\rho$ such that $\sigma_v/\sqrt{1 + \rho^2}$ is kept constant, so that the leading term of the log gradient ratio is kept approximately equal for each $L$ and $\rho$. \[fig:tanhHeatmaps\](c) shows the test accuracies of a grid of tanh FRN initialized with such an ensemble of $\sigma_\bullet$s. What stands out the most is that performance is maximized essentially around a fixed value of $L$ regardless of $\rho$, which shows that indeed gradient dynamics determines the initialization quality in tanh resnets. There is also a minor increase in performance with increasing $\rho$ regardless of $L$; this is counterintuitive as increasing $\rho$ means “decreasing expressivity.” It is currently not clear what accounts for this effect. #### ReLU, vary $\sigma_w$ We train a grid of ReLU FRN on MNIST, varying $\sigma_w^2 \in [0, 1.5]$ while fixing $\sigma_v^2 = 1, \sigma_a^2 = \sigma_b^2 = \f 1 2$. The resulting test set accuracies are shown in \[fig:tanhHeatmaps\](d). The dark upper region signifies failure of training caused by numerical issues with exploding activation and gradient norms: This corresponds to the region where $\p {\mathbf{p}}L$, which is a measure of the mean magnitude of an neuronal activation in layer $L$, becomes too big. We see that the best test accuracies are given by depths just below where these numerical issues occur. However, if we were to predict that the optimal init is the one minimizing $\p {\boldsymbol{\oldchi}}0 / \p {\boldsymbol{\oldchi}}L \ge 1$, then we would be wrong — in fact it is exactly the opposite. In this case, the dynamics of $\p {\mathbf{s}}l, \p {\mathbf{p}}l$, and $\p {\boldsymbol{\oldchi}}0 / \p {\boldsymbol{\oldchi}}l$ are approximately the same (all $\exp(\Theta(l))$ with the same hidden constants), and optimal performance corresponds to the highest $\p {\mathbf{s}}L$, $\p {\mathbf{p}}L$, and $\p {\boldsymbol{\oldchi}}0 / \p {\boldsymbol{\oldchi}}L$ without running into `inf`s. #### $\alpha$-ReLU, vary $\alpha$. We similarly trained a grid of $\alpha$-ReLU FRN on MNIST, varying only $\alpha$ and the depth, fixing all $\sigma_\bullet$. \[fig:tanhHeatmaps\](e) shows their test accuracies. We see similar behavior to ReLU, where when the net is too deep, numerical issues doom the training (black upper right corner), but the best performance is given by $L$ just below where this problem occurs. In this case, if we were to predict optimality based on minimizing gradient explosion, we would be again wrong, and furthermore, the contour plot of $\p {\boldsymbol{\oldchi}}0 / \p {\boldsymbol{\oldchi}}L$ (white dashed line) now gives no information at all on the test set accuracy. In contrast, the contours for $\p {\mathbf{s}}l$ succeeds remarkably well at this prediction (yellow/green lines).[^8] By interpolation, this suggests that indeed in the ReLU case, it is expressivity, not trainability, which determines performance at test time. In all of our experiments, we did not find ${\mathbf{e}}$ dynamics to be predictive of neural network performance. Conclusion ========== In this paper, we have extended the mean field formalism developed by [@poole_exponential_2016; @raghu_expressive_2016; @schoenholz_deep_2017] to residual networks, a class of models closer to practice than classical feedforward neural networks as were investigated earlier. 
We proved and verified that in both the forward and backward passes, most of the residual networks discussed here do not collapse their input space geometry or the gradient information exponentially. We found our theory incredibly predictive of test time performance despite saying nothing about the dynamics of training. In addition, we overwhelmingly find, through theory and experiments, that an optimal initialization scheme must take into account the depth of the residual network. The reason that the Xavier [@glorot_understanding_2010] or He [@he_delving_2015] schemes are not the best for residual networks is in fact not that their statistical assumptions are fragile — theirs are similar to our mean field theoretic assumptions, and they hold up in experiments for large width — but rather that their structural assumptions on the network break very badly on residual nets.

#### Open Problems.

Our work has thus shown that the optimality of initialization schemes can be very unstable with respect to architecture. We hope this work will form a foundation toward a mathematically grounded initialization scheme for state-of-the-art architectures like the original He et al. residual network. To do so, there are still two major components left to study out of the following three: residual/skip connections, batchnorm, and convolutional layers. Recurrent architectures and attention mechanisms are also still mostly unexplored in terms of mean field theory. Furthermore, many theoretical questions have yet to be resolved; the most important with regard to mean field theory is: why can we make \[ass:symAct,ass:gradInd\] and still be able to make accurate predictions? We hope to make progress on these problems in the future and encourage readers to take part in this effort.

Acknowledgments {#acknowledgments .unnumbered}
===============

Thanks to Jeffrey Ling for early exploration experiments and help with the initial draft. Thanks to Felix Wong for offering his wisdom and experience working in statistical physics.

Additional Figures
==================

In figures appearing in the appendix, $\olddaleth$ means ${\boldsymbol{\oldchi}}$ (due to legacy reasons).

[m[.1]{}m[0.6]{}]{} $\alpha=1$ & ![Empirical vs theoretical dynamics for $\p {\mathbf{p}}l, \p {\mathbf{e}}l$, and different gradient quantities for $\alpha$-ReLU, with format similar to \[fig:theory\_tracks\_pratice\]. We refer to each figure on each row from left to right as (a), (b), and (c). Note that in the $\alpha=1$ case, figure (a) ($\p {\mathbf{p}}l$ and $\p {\boldsymbol{\oldgamma}}l$ for different initial values) has log scale y-axis and (a) and (b) have x-axis ranging from 1 to 50, while for other $\alpha$, (a) has normal y-axis and (a) and (b) have x-axis ranging from 1 to 200. We do so because the norm of the activation vector in a typical ReLU resnet blows up into `NaN` at around layer 90, while this is not a problem for $\alpha < 1$. Our theoretical predictions track the average of empirical values closely for forward quantities $\p {\mathbf{p}}l, \p {\boldsymbol{\oldgamma}}l,$ and $\p {\mathbf{e}}l$ for all $\alpha$, but variance is extremely large for $\p {\mathbf{e}}l$ at $\alpha = 1$; our theory also predicts the average gradient norm accurately for $\alpha = 1$ to $\alpha = .7$ (despite the fact that we should not expect so for $\alpha \le .75$ due to exploding variance (\[thm:dalethInfVarAlphaReLU\])), although variance is large for $\alpha = 1$ at earlier layers (i.e. later layers w.r.t. backpropagation).
However it [*consistently and significantly overestimates*]{} the average gradient norm for $\alpha = .6$ to $\alpha = .5$, where the variance is so large that one standard deviation below the mean results in negative values. All plots are made with parameters $\sigma_v^2 = 1.5, \sigma_a^2 = .5, \sigma_w^2 = 1.69, \sigma_b^2 = .49$; only $\alpha$ is varied. All figures exhibit smooth curves, which are theoretical estimates, and irregular curves with shades around them, which indicate empirical means and standard deviations (both taken in regular scale, not log scale). For each $\alpha$, figures (a) and (b) are made with 20 runs of resnets of width 1000. (c) is made with 25 runs of resnets of width 250.[]{data-label="fig:alphaReLUTheoryVsEmpirics"}](graphics/relu_full_res_theory_vs_exp "fig:"){height=".08\textheight"}\
$\alpha=.9$ & ![](graphics/9relu_full_res_theory_vs_exp "fig:"){height=".08\textheight"}\
$\alpha=.8$ & ![](graphics/8relu_full_res_theory_vs_exp "fig:"){height=".08\textheight"}\
$\alpha=.7$ & ![](graphics/7relu_full_res_theory_vs_exp "fig:"){height=".08\textheight"}\
$\alpha=.6$ & ![](graphics/6relu_full_res_theory_vs_exp "fig:"){height=".08\textheight"}\
$\alpha=.55$ & ![](graphics/55relu_full_res_theory_vs_exp "fig:"){height=".08\textheight"}\
$\alpha=.51$ & ![](graphics/51relu_full_res_theory_vs_exp "fig:"){height=".08\textheight"}

[m[.1]{}m[0.4]{}]{} $\alpha=.9$ & ![We verify the exponents of the forward and backward dynamics for $\alpha$-ReLU FRN. For each row, the figures are labeled (a) and (b) from left to right. The format is the same as in \[fig:jjj\_vs\_id\_main\]. All figures are in log-log scale. [**(a)**]{} We exhibit our theoretical dynamics of the cosine distance $\p {\mathbf{e}}l$ based on the recurrences \[thm:fullResPQRec\] and \[thm:full\_res\_l\_g\_recurr\] for different initial conditions $\p {\mathbf{e}}0$. We draw $|\p {\mathbf{e}}l - \p {\mathbf{e}}{l-1}|$ for each of these dynamics in colored solid lines. We predict that each dynamic is ${{\check\Theta}}(l^{-\mu})$, where $\mu = (1 - \dot{\mathbb{J}}_\alpha({\mathbf{e}}^*))/(1-\alpha)$, and the dashed line gives $l^{-\mu-1}$ (\[thm:alphaReLUeConvergence\]), shifted vertically to better compare the slope in log scale (i.e. the exponent of the polynomial dynamics). (See footnote \[footnote:plotDelta\] for why we plot the dynamics this way). We see that our asymptotic prediction is very accurate for the sequence of $\p {\mathbf{e}}l$ that starts with $\p {\mathbf{e}}0 = 0.99$, the closest to ${\mathbf{e}}^*$ for each $\alpha$, while other lines only slowly converge to the same exponent (which is the slope in the log-log plot). This is to be expected based on the proof of \[thm:alphaReLUeConvergence\]. For $\alpha = .9$, the $\p {\mathbf{e}}0 = .99$ line upticks at around $10^3$ and then turns into `NaN`s due to numerical instability. [**(b)**]{} Colored lines are $\p \bullet 0 / \p \bullet l$ for $\bullet = {{\boldsymbol{\oldchi}}}, {\boldsymbol{\oldchi}}_b, {\boldsymbol{\oldchi}}_w$ (we are not taking logs in addition to plotting in log-log scale like in \[fig:jjj\_vs\_id\]). The dashed lines are our asymptotic predictions for the dynamics with corresponding colors, based on \[thm:alphaReLUAllGradients\], again shifted appropriately to easily compare slope visually. We see that for every $\alpha$ our asymptotic predictions are highly accurate. For both (a) and (b), we did not show the $\alpha = 1$ case, as ReLU FRN runs into numerical issues quickly (even with only 100 layers) because of exponential explosions in $\p {\mathbf{p}}l$ and $\p {{\boldsymbol{\oldchi}}}l$ as predicted by \[thm:pDynamicAlphaReLU,thm:dalethDynamicsAlphaReLU\], so we cannot expect to empirically verify the precise predicted asymptotics. All plots are made with parameters $\sigma_v^2 = 1.5, \sigma_a^2 = .5, \sigma_w^2 = 1.69, \sigma_b^2 = .49$; only $\alpha$ is varied. []{data-label="fig:alphaReLUVerifyExponents"}](graphics/9relu_verify_deltastar_n_grad "fig:"){height=".08\textheight"}\
$\alpha=.8$ & ![](graphics/8relu_verify_deltastar_n_grad "fig:"){height=".08\textheight"}\
$\alpha=.7$ & ![](graphics/7relu_verify_deltastar_n_grad "fig:"){height=".08\textheight"}\
$\alpha=.6$ & ![](graphics/6relu_verify_deltastar_n_grad "fig:"){height=".08\textheight"}\
$\alpha=.55$ & ![](graphics/55relu_verify_deltastar_n_grad "fig:"){height=".08\textheight"}\
$\alpha=.51$ & ![](graphics/51relu_verify_deltastar_n_grad "fig:"){height=".08\textheight"}\

Symbol Meaning Ref ----------------------------------------- ---------------------------------------------------------------------------------------------------- ---------------------------------- $\sigma_\bullet$ standard deviation of trainable parameter $\bullet$ $\p x l$ activation vector/input vector $\p h l$ hidden vector $N$ width (same across all layers) $\p {\mathbf{p}}l$ m.n. squared length of activation vector $\p x l$ \[defn:length\] $\p {\mathbf{q}}l$ m.n. squared length of hidden vector $\p h l$ \[defn:length\] $\p {\boldsymbol{\oldgamma}}l$ m.n. dot product $\p x l \cdot \p x l{}'$ \[defn:corr\] $\p {\boldsymbol{\oldlambda}}l$ m.n. dot product $\p h l \cdot \p h l{}'$ \[defn:corr\] $\p {\mathbf{s}}l$ m.n. squared distance $\|\p x l - \p x l {}'\|^2$ \[defn:corr\] $\p {\mathbf{e}}l$ cosine distance $\p {\boldsymbol{\oldgamma}}l / \sqrt{\p {\mathbf{p}}l \p {\mathbf{p}}l {}'}$ \[defn:corr\] ${\mathbf{e}}^*$ limit value of $\p {\mathbf{e}}l$ as $l \to \infty$ $\p {\mathbf{c}}l$ cosine distance $\p {\boldsymbol{\oldlambda}}l / \sqrt{\p {\mathbf{q}}l \p {\mathbf{q}}l {}'}$ \[defn:corr\] $\p {\boldsymbol{\oldchi}}l$ m.n. gradient squared norm w.r.t. $\p x l$ \[defn:grad\] $\p {{\boldsymbol{\oldchi}}_\bullet} l$ m.n. gradient squared norm w.r.t.
| $\phi$ | variable nonlinearity $\R \to \R$ | |
| $\psi_\alpha$ | $\alpha$-ReLU | \[defn:alphaReLU\] |
| ${\mathrm{V}}$ | variance integral transform | \[defn:integralTransform\] |
| ${\mathrm{W}}$ | covariance integral transform | \[defn:integralTransform\] |
| $\delta^*$ | $\p {\mathbf{e}}l$ converges like $\Theta(l^{-\delta^*})$ in tanh FRN | \[thm:eDynamicsFullResTanh\] |
| $\mathcal A$ | leading coeff of $\log \p {\boldsymbol{\oldchi}}0 / \p {\boldsymbol{\oldchi}}L$ in tanh FRN | \[thm:dalethExpSqrtTanhFullRes\] |
| $R$ | $\log \p {\boldsymbol{\oldchi}}0 / \p {\boldsymbol{\oldchi}}L \sim R \log L$ for $(\alpha<1)$-ReLU | \[thm:dalethDynamicsAlphaReLU\] |
| ${\mathbb{J}}_\alpha$ | kernel function of $\alpha$-ReLU | \[lemma:basicJalpha\] |

: Glossary of Symbols. “Mean normalized” is abbreviated “m.n.”[]{data-label="tab:glossary"}

A Listing of Main Theorems
==========================

Tanh
----

### Reduced Residual Network

[lemma]{}[pqrecurrence]{}\[lemma:p\_q\_recurrence\] Suppose $\phi$ is antisymmetric. Then in an RRN, ${\mathbf{p}}$ and ${\mathbf{q}}$ satisfy the recurrence $$\begin{aligned} {\mathbf{q}}&= \sigma_w^2 {\underline}{\mathbf{p}}+ \sigma_b^2\\ {\mathbf{p}}&= {\mathrm{V}}\phi( {\mathbf{q}}) + {\underline}{\mathbf{p}}.\end{aligned}$$

[thm]{}[pqlinear]{}\[thm:p\_q\_linear\] Suppose $\phi$ is tanh-like. Assume RRN architecture.

- If $\sigma_w = 0$, then $\p {\mathbf{p}}l = l {\mathrm{V}}\phi( \sigma_b^2) + \p {\mathbf{p}}0$ and $\p {\mathbf{q}}l = \sigma_b^2$.

- If $\sigma_w > 0$, $\lim_{l \to \infty} \p {\mathbf{p}}l/ l = 1$ and $\lim_{l \to \infty} \p {\mathbf{q}}l /(\sigma_w^2 l) = 1$. If $\phi = \tanh$, then we can obtain more terms of the asymptotic expansions: $$\begin{aligned} \p {\mathbf{p}}l &= l - 2 C \sigma_w^{-1} l^{1/2} - C^2 \sigma_w^{-2} \log l + O(1)\\ \p {\mathbf{q}}l &= \sigma_w^2 l - 2 C \sigma_w l^{1/2} - C^2 \log l + O(1) \end{aligned}$$ as $l \to \infty$, where $C = \sqrt{2 / \pi}$.

[thm]{}[lambdagammarecurrence]{}\[thm:lambda\_gamma\_recurrence\] Suppose $\phi$ is antisymmetric. Then in an RRN, ${\boldsymbol{\oldlambda}}$ and ${\boldsymbol{\oldgamma}}$ satisfy the recurrence $$\begin{aligned} {\boldsymbol{\oldlambda}}&= \sigma_w^2 {\underline}{\boldsymbol{\oldgamma}}+ \sigma_b^2\\ {\boldsymbol{\oldgamma}}&= {\mathrm{W}}\phi({\mathbf{q}}, {\boldsymbol{\oldlambda}}) + {\underline}{\boldsymbol{\oldgamma}}.\end{aligned}$$

[thm]{}[edynamics]{}\[thm:edynamics\] Suppose $\phi$ is a tanh-like nonlinearity in an RRN. Assume $\p {\mathbf{e}}0 < 1$.

- If $\sigma_w = 0$, then $\p {\boldsymbol{\oldgamma}}l = l {\mathrm{W}}\phi( \sigma_b^2, \sigma_b^2) + \p {\boldsymbol{\oldgamma}}0 = l {\mathrm{V}}\phi( \sigma_b^2) + \p {\boldsymbol{\oldgamma}}0$ and $\p {\boldsymbol{\oldlambda}}l = \sigma_b^2$, so that $\p {\mathbf{e}}l \to 1$ and $1 - \p {\mathbf{e}}l = \Theta(l^{-1})$. As a result, $\p {\mathbf{s}}l = \p {\mathbf{p}}l (1 - \p {\mathbf{e}}l) = \Theta(1).$

- If $\sigma_w > 0$, then $\p {\boldsymbol{\oldgamma}}l = {{\check\Theta}}(l^{\f 2\pi})$, and $\p {\mathbf{e}}l \to 0$ like ${{\check\Theta}}(l^{\f 2 \pi - 1})$.
Thus $\p {\mathbf{s}}l = \Theta(\p {\mathbf{p}}l) = \Theta(l).$ [thm]{}[dalethRecReduced]{}\[thm:dalethRecReduced\] For any nonlinearity $\phi$ in an RRN, under assumptions \[ass:symAct\] and \[ass:gradInd\], whenever $\dot \phi^2(\zeta)$ has finite variance for Gaussian variable $\zeta$, $$\begin{aligned} {\underline}{{\boldsymbol{\oldchi}}}&= (\sigma_w^2 {\mathrm{V}}\dot \phi( {\mathbf{q}}) + 1){{\boldsymbol{\oldchi}}},& {\boldsymbol{\oldchi}}_b &= {{\boldsymbol{\oldchi}}}{\mathrm{V}}\dot \phi( {\mathbf{q}}),& {\boldsymbol{\oldchi}}_w &= {{\boldsymbol{\oldchi}}}{\mathrm{V}}\dot \phi( {\mathbf{q}}) {\underline}{\mathbf{p}}. \end{aligned}$$ [thm]{}[dalethExpSqrtTanh]{}\[thm:dalethExpSqrtTanh\] For $\phi = \tanh$ in an RRN, - If $\sigma_w = 0$, $\p {{\boldsymbol{\oldchi}}}m = \p {{\boldsymbol{\oldchi}}}l$ for all $l, m$. - If $\sigma_w > 0$, $$\begin{aligned} \log(\p {{\boldsymbol{\oldchi}}}{m}/\p {{\boldsymbol{\oldchi}}}l) &= \mathcal A (\sqrt l - \sqrt m) + \mathcal B (\log l - \log m) + O(1) \end{aligned}$$ where $\mathcal A = \f 4 3 \sqrt{\f 2 \pi} \sigma_w$ and $\mathcal B = \f 4 {3\pi} - \sigma_w^2 \f 4 {9\pi}$. [thm]{}[dalethExpSqrtTanhAllGrad]{}\[thm:dalethExpSqrtTanhAllGrad\] Suppose $\phi = \tanh$. Then in an RRN - If $\sigma_w = 0$, $\p {\boldsymbol{\oldchi}}l _b = \p {{\boldsymbol{\oldchi}}}L {\mathrm{V}}\dot \phi( \sigma_b^2)$ and $\p {\boldsymbol{\oldchi}}l _w = \p {{\boldsymbol{\oldchi}}}L {\mathrm{V}}\dot \phi( \sigma_b^2) ((l-1) {\mathrm{V}}\phi( \sigma_b^2) + \p {\mathbf{p}}0),$ where $L$ is the last layer. - If $\sigma_w > 0$, $$\begin{aligned} \log(\p{\boldsymbol{\oldchi}}m _b / \p {\boldsymbol{\oldchi}}l _b) &= \mathcal A (\sqrt l - \sqrt m) + \mathcal B_b (\log l - \log m) + O(1)\\ \log(\p{\boldsymbol{\oldchi}}m _w / \p {\boldsymbol{\oldchi}}l _w) &= \mathcal A (\sqrt l - \sqrt m) + \mathcal B_w (\log l - \log m) + O(1) \end{aligned}$$ where $\mathcal A = \f 4 3 \sqrt{\f 2 \pi} \sigma_w$ (same as $\mathcal A$ in \[thm:dalethExpSqrtTanh\]) and $\mathcal B_b = \mathcal B + \f 1 2, \mathcal B_w = \mathcal B - \f 1 2$, with $\mathcal B = \f 4 {3\pi} - \sigma_w^2 \f 4 {9\pi}$ (same as $\mathcal B$ in \[thm:dalethExpSqrtTanh\]). ### Full Residual Network [thm]{}[fullResPQRec]{} \[thm:fullResPQRec\] For any nonlinearity $\phi$ in an FRN, $$\begin{aligned} {\mathbf{q}}&= \sigma_w^2 {\underline}{\mathbf{p}}+ \sigma_b^2\\ {\mathbf{p}}&= \sigma_v^2 {\mathrm{V}}\phi( {\mathbf{q}}) + \sigma_a^2 + {\underline}{\mathbf{p}}\end{aligned}$$ [thm]{}[pIsLinearTanh]{}\[thm:pIsLinearTanh\] Suppose $\phi$ is tanh-like. Assume the FRN architecture. - If $\sigma_w = 0$, then $\p {\mathbf{p}}l = (\sigma_v^2 {\mathrm{V}}\phi( \sigma_b^2) + \sigma_a^2)l +\p {\mathbf{p}}0$, and $\p {\mathbf{q}}l = \sigma_b^2$. - If $\sigma_w > 0$, then $\p {\mathbf{p}}l = b_0 l + b_1 l^{1/2} + b_2 \log l + O(1)$, where $$\begin{aligned} b_0 &= \sigma_v^2 + \sigma_a^2\\ b_1 &= \f{-2C \sigma_v^2 \sigma_w^{-1}}{\sqrt{\sigma_v^2 + \sigma_a^2}}\\ b_2 &= \f{-C^2 \sigma_v^4 \sigma_w^{-2}}{(\sigma_v^2 + \sigma_a^2)^2} \end{aligned}$$ and $C = \sqrt{\f 2 \pi}$. Additionally, $\p {\mathbf{q}}l = \sigma_w^2 b_0 l + \sigma_w^2 b_1 l^{1/2} + \sigma_w^2 b_2 \log l + O(1)$. 
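As a quick numerical companion to \[thm:pIsLinearTanh\] (complementing the empirical verification figure that follows), the following sketch iterates the FRN length recurrence of \[thm:fullResPQRec\] with $\phi = \tanh$, evaluating ${\mathrm{V}}\tanh$ by Gauss–Hermite quadrature, and compares $\p {\mathbf{p}}l / l$ at large depth against the predicted slope $b_0 = \sigma_v^2 + \sigma_a^2$. The parameter values, the initial condition $\p {\mathbf{p}}0$, and the quadrature order are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Gauss-Hermite nodes/weights for the weight exp(-x^2); for z ~ N(0,1) substitute z = sqrt(2) x.
nodes, weights = np.polynomial.hermite.hermgauss(80)

def V(phi, q):
    """V phi(q) = E[phi(sqrt(q) z)^2] for z ~ N(0,1)."""
    z = np.sqrt(2.0 * q) * nodes
    return np.sum(weights * phi(z) ** 2) / np.sqrt(np.pi)

# Illustrative choices (assumptions, not taken from the paper).
sv2, sa2, sw2, sb2 = 1.0, 0.5, 2.0, 0.25   # sigma_v^2, sigma_a^2, sigma_w^2, sigma_b^2
p = 1.0                                    # p^0, an arbitrary initial mean-normalized squared length
L = 20000

for l in range(1, L + 1):
    q = sw2 * p + sb2                      # q^l = sigma_w^2 p^{l-1} + sigma_b^2
    p = sv2 * V(np.tanh, q) + sa2 + p      # p^l = sigma_v^2 V tanh(q^l) + sigma_a^2 + p^{l-1}

print(f"p_L / L = {p / L:.4f}   predicted b0 = sigma_v^2 + sigma_a^2 = {sv2 + sa2:.4f}")
```

At this depth the ratio is already close to $b_0$, with the remaining gap of order $l^{-1/2}$ as the subleading term $b_1 l^{1/2}$ predicts.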
![Empirical verification of \[thm:pIsLinearTanh\].](graphics/tanh_p_asymptotic_expansion.pdf){height=".16\textheight"} [thm]{}[LGRecFullRes]{}\[thm:full\_res\_l\_g\_recurr\] For any nonlinearity $\phi$, in an FRN $$\begin{aligned} {\boldsymbol{\oldlambda}}&= \sigma_w^2 {\underline}{\boldsymbol{\oldgamma}}+ \sigma_b^2\\ {\boldsymbol{\oldgamma}}&= \sigma_v^2 {\mathrm{W}}\phi({\mathbf{q}}, {\boldsymbol{\oldlambda}}) + \sigma_a^2 + {\underline}{\boldsymbol{\oldgamma}}\end{aligned}$$ [thm]{}[eDynamicsFullResTanh]{} \[thm:eDynamicsFullResTanh\] Assume $\phi = \tanh$ in an FRN. Suppose $\p {\mathbf{e}}0 < 1$. - If $\sigma_w = 0$, then $\p {\boldsymbol{\oldlambda}}l = \sigma_b^2$ and $\p {\boldsymbol{\oldgamma}}l = l (\sigma_v^2 {\mathrm{W}}\phi( \sigma_b^2, \sigma_b^2) + \sigma_a^2) + \p {\boldsymbol{\oldgamma}}0 = l (\sigma_v^2 {\mathrm{V}}\phi( \sigma_b^2) + \sigma_a^2) + \p {\boldsymbol{\oldgamma}}0$. Thus $\p {\mathbf{e}}l \to 1$ and $1 - \p {\mathbf{e}}l = \Theta(l^{-1})$. As a result, $\p {\mathbf{s}}l = \p {\mathbf{p}}l (1 - \p {\mathbf{e}}l) = \Theta(1).$ - If $\sigma_w > 0$, then $\p {\mathbf{e}}l$ converges to the unique fixed point ${\mathbf{e}}^* \not = 1$ determined by the equation $${\mathbf{e}}^* = \f 1 {\sigma_v^2 + \sigma_a^2}[\sigma_v^2 \f 2 \pi \arcsin\lp {\mathbf{e}}^* \rp + \sigma_a^2].$$ Furthermore, $\p {\mathbf{e}}l$ converges to ${\mathbf{e}}^*$ polynomially: $|\p {\mathbf{e}}l - {\mathbf{e}}^*|$ is ${{\check\Theta}}(l^{-\delta^*})$, where $$\delta^* := 1 - \f 2 \pi \f 1 {\sqrt{1 - ({\mathbf{e}}^*)^2}} \f{\sigma_v^2 }{\sigma_v^2 + \sigma_a^2} \in [\f 2 \pi - 1, \f 1 2)$$ Since ${\mathbf{e}}^* < 1$, $\p {\mathbf{s}}l = \Theta(\p {\mathbf{p}}l) = \Theta(l).$ [thm]{}[dalethRecFull]{} \[thm:dalethRecFull\] For any nonlinearity $\phi$ in an FRN, under assumptions \[ass:symAct\] and \[ass:gradInd\], whenever $\dot \phi(\zeta)^2$ has finite variance for Gaussian variable $\zeta$, $$\begin{aligned} {\underline}{{\boldsymbol{\oldchi}}}&= (\sigma_v^2\sigma_w^2 {\mathrm{V}}\dot \phi( {\mathbf{q}}) + 1){{\boldsymbol{\oldchi}}},& {\boldsymbol{\oldchi}}_b &= \sigma_v^2{{\boldsymbol{\oldchi}}}{\mathrm{V}}\dot \phi( {\mathbf{q}}),\\ {\boldsymbol{\oldchi}}_w &= \sigma_v^2{{\boldsymbol{\oldchi}}}{\mathrm{V}}\dot \phi( {\mathbf{q}}) {\underline}{\mathbf{p}},& {\boldsymbol{\oldchi}}_v &= {{\boldsymbol{\oldchi}}}{\mathrm{V}}\phi( {\mathbf{q}}),& {\boldsymbol{\oldchi}}_a &= {{\boldsymbol{\oldchi}}}\end{aligned}$$ [thm]{}[dalethExpSqrtTanhFullRes]{}\[thm:dalethExpSqrtTanhFullRes\] Assume $\phi = \tanh$ in an FRN. - If $\sigma_w = 0$, $\p {{\boldsymbol{\oldchi}}}m = \p {{\boldsymbol{\oldchi}}}l$ for all $l, m$. - If $\sigma_w > 0$, then for $l \ge m \ge 0,$ $$\log(\p {{\boldsymbol{\oldchi}}}{m} / \p {{\boldsymbol{\oldchi}}}l) = \mathcal A (\sqrt l - \sqrt m) + \mathcal B (\log l - \log m) + O(1)$$ where $$\begin{aligned} \mathcal A &= \f 4 3 \sqrt{\f 2 \pi} \f{\sigma_v^2 \sigma_w}{\sqrt{\sigma_v^2 + \sigma_a^2}}\\ \mathcal B &= \f 4 {9\pi}\f{ \sigma_v^4 }{\sigma_v^2 + \sigma_a^2}\lp \f 3 {\sigma_v^2 + \sigma_a^2} - \sigma_w^2\rp \end{aligned}$$ \[fig:tanhasymptoticsgrid\] shows empirical verification of the asymptotic expansion of ${{\boldsymbol{\oldchi}}}$ for various values of $\sigma_\bullet$s. ![Empirical verification of the asymptotic expansion of ${{\boldsymbol{\oldchi}}}$ for various values of $\sigma_\bullet$s. Note that we have chosen all small values for $\sigma_\bullet$s. 
For larger values, the constant term in \[thm:dalethExpSqrtTanhFullRes\] begins to dominate (primarily because the expansion $\log(1+x) = x + \Theta(x^2)$ has a large $\Theta$ term when $x$ is large), and ${{\boldsymbol{\oldchi}}}$ behaves more like $\exp(\Theta(l))$ up to depth 1000.[]{data-label="fig:tanhasymptoticsgrid"}](graphics/tanhAsymptoticsGrid){height=".3\textheight"}

[thm]{}[dalethExpSqrtTanhFullResAllGrad]{}\[thm:dalethExpSqrtTanhFullResAllGrad\] Suppose $\phi = \tanh$ in an FRN.

- If $\sigma_w = 0$, then $$\begin{aligned} \p {\boldsymbol{\oldchi}}l _b &= \sigma_v^2 \p {{\boldsymbol{\oldchi}}}L {\mathrm{V}}\dot \phi( \sigma_b^2)\\ \p {\boldsymbol{\oldchi}}l _w &= \sigma_v^2 \p {{\boldsymbol{\oldchi}}}L {\mathrm{V}}\dot \phi( \sigma_b^2) ( (\sigma_v^2 {\mathrm{V}}\phi( \sigma_b^2) + \sigma_a^2)(l-1) +\p {\mathbf{p}}0)\\ \p {\boldsymbol{\oldchi}}l _v &= \p {{\boldsymbol{\oldchi}}}L {\mathrm{V}}\phi( \sigma_b^2)\\ \p {\boldsymbol{\oldchi}}l _a &= \p {{\boldsymbol{\oldchi}}}L. \end{aligned}$$

- If $\sigma_w > 0$, then for $l \ge m \ge 0,$ $$\begin{aligned} \log(\p{\boldsymbol{\oldchi}}m _b / \p {\boldsymbol{\oldchi}}l _b) &= \mathcal A (\sqrt l - \sqrt m) + \mathcal B_b (\log l - \log m) + O(1)\\ \log(\p{\boldsymbol{\oldchi}}m _w / \p {\boldsymbol{\oldchi}}l _w) &= \mathcal A (\sqrt l - \sqrt m) + \mathcal B_w (\log l - \log m) + O(1)\\ \log(\p{\boldsymbol{\oldchi}}m _a / \p {\boldsymbol{\oldchi}}l _a) &= \mathcal A (\sqrt l - \sqrt m) + \mathcal B (\log l - \log m) + O(1)\\ \log(\p{\boldsymbol{\oldchi}}m _v / \p {\boldsymbol{\oldchi}}l _v) &= \mathcal A (\sqrt l - \sqrt m) + \mathcal B (\log l - \log m) + O(1) \end{aligned}$$ where $\mathcal A = \f 4 3 \sqrt{\f 2 \pi} \f{\sigma_v^2 \sigma_w}{\sqrt{\sigma_v^2 + \sigma_a^2}}$ and $\mathcal B = \f 4 {9\pi}\f{ \sigma_v^4 }{\sigma_v^2 + \sigma_a^2}\lp \f 3 {\sigma_v^2 + \sigma_a^2} - \sigma_w^2\rp$ are as in \[thm:dalethExpSqrtTanhFullRes\] and $\mathcal B_b = \mathcal B + \f 1 2$ and $\mathcal B_w = \mathcal B - \f 1 2$.

$\alpha$-ReLU
-------------

[lemma]{}[VtPsiAlpha]{}\[lemma:VtPsiAlpha\] If $\alpha > -\f 1 2$, then ${\mathrm{V}}\psi_\alpha( q) = {\mathsf{c}}_\alpha q^{\alpha}$, where ${\mathsf{c}}_\alpha = \f 1 {\sqrt \pi} 2^{\alpha - 1} \Gamma \left(\alpha+ \f 1 2\right)$.

Note that if $\alpha \le - \f 1 2$, then ${\mathrm{V}}\psi_\alpha( q)$ is not defined (its defining integral does not converge).

### Full Residual Network

By \[thm:fullResPQRec\] and \[lemma:VtPsiAlpha\], we have the length recurrences $$\begin{aligned} {\mathbf{q}}&= \sigma_w^2 {\underline}{\mathbf{p}}+ \sigma_b^2\\ {\mathbf{p}}&= \sigma_v^2 {\mathsf{c}}_\alpha {\mathbf{q}}^\alpha + \sigma_a^2 + {\underline}{\mathbf{p}}\end{aligned}$$

[thm]{}[pDynamicAlphaReLU]{}\[thm:pDynamicAlphaReLU\] Suppose we have the nonlinearity $\phi = \psi_\alpha$. Then in an FRN: If $\alpha = 1$, then $\p {\mathbf{p}}l = \Theta((1 + \sigma_v^2 \sigma_w^2/2)^l)$, with the hidden constant depending on the initial condition. If $0 < \alpha < 1$, then $\p {\mathbf{p}}l = \Theta(l^{\f 1 {1- \alpha}})$. More precisely, $\lim_{l \to \infty} {\mathbf{p}}/ l^{\f 1 {1-\alpha}} = [\sigma_v^2 \sigma_w^{2\alpha} {\mathsf{c}}_\alpha (1 - \alpha)]^{\f 1 {1-\alpha}}$.

\[fig:reluverifypasymptotics\] empirically verifies the asymptotics for $\alpha=1$ for various $\sigma_v$ and $\sigma_w$.

![Verification of the exponential asymptotics of $\p {\mathbf{p}}l$ when $\alpha=1$. The lines of each color correspond to different $(\sigma_w, \sigma_v)$ pairs, which are given in the legend.
The solid lines are given by the recurrences \[thm:fullResPQRec\], and the dashed lines are given by our asymptotics $(1+\sigma_v^2\sigma_w^2/2)^l$ (\[thm:pDynamicAlphaReLU\]). Note that the y-axis is in log-scale.[]{data-label="fig:reluverifypasymptotics"}](graphics/relu_verify_p_asymptotics){height=".2\textheight"} Similarly, by \[thm:full\_res\_l\_g\_recurr\], if ${\mathbf{q}}= {\mathbf{q}}'$, then $$\begin{aligned} {\boldsymbol{\oldlambda}}&= \sigma_w^2 {\underline}{\boldsymbol{\oldgamma}}+ \sigma_b^2\\ {\boldsymbol{\oldgamma}}&= \sigma_v^2 {\mathbf{q}}^\alpha {\mathrm{W}}\psi_\alpha( 1, {\mathbf{c}}) + \sigma_a^2 + {\underline}{\boldsymbol{\oldgamma}}\end{aligned}$$ [thm]{}[ReLUSquaredConvergence]{}\[thm:ReLUSquaredConvergence\] Suppose $\phi = \psi_1$. Then in an FRN, $\p {\mathbf{e}}l \to 1$ and $1 - \p {\mathbf{e}}l \sim [\f 1 4 \sigma_v^2 \sigma_w^2 \inv B U l]^{-2}$ for $B = 1 + \sigma_v^2 \sigma_w^2/2$ and $U = \f {2\sqrt 2}{3\pi}$. As a result, $\p {\mathbf{s}}l = (1 - \p {\mathbf{e}}\l) \p {\mathbf{p}}l = \Theta(l^{-2}\exp(\Theta(l))) = \exp(\Theta(l)).$ [thm]{}[alphaReLUeConvergence]{} \[thm:alphaReLUeConvergence\] Suppose $\phi = \psi_\alpha$ for $0 < \alpha < 1$ in an FRN. Then ${\mathbf{e}}$ converges to the unique nonunit fixed point ${\mathbf{e}}^*$ of ${\mathbb{J}}_\alpha$, and $|{\mathbf{e}}^* - \p {\mathbf{e}}l|$ is ${{\check\Theta}}(l^{-\mu})$, where $\mu = (1-\dot{\mathbb{J}}_\alpha({\mathbf{e}}^*))/(1-\alpha)$. Additionally, $\p {\mathbf{s}}l = \Theta(\p {\mathbf{p}}l) = \Theta(l^{1/(1-\alpha)}).$ \[fig:6reluverifyestar\] verifies empirically that ${\mathbf{e}}^*$ is indeed the fixed point of $\p {\mathbf{e}}l$. \[fig:alphaReLUVerifyExponents\] verifies empirically the convergence rate $l^{-\mu}$. \[fig:MuPlots\] plots $\dot {\mathbb{J}}_\alpha({\mathbf{e}}^*)$ and $\mu$ versus $\alpha$. It certainly looks like $\mu = \f 1 2 (1 - \alpha)$, but we have no proof for it. Based on this conjecture, we see there is a “discontinuity” of $\mu$ at $\alpha = 1$: $\mu \to 0$ as $\alpha \to 1$, but for $\alpha = 1$, the actual convergence dynamics has exponent $-2$ by \[thm:ReLUSquaredConvergence\]. ![Verification of fixed point ${\mathbf{e}}^*$ in \[thm:alphaReLUeConvergence\] for $\alpha = .6$. Different colors correspond to different initial conditions $\p {\mathbf{e}}0$, and the dashed line gives the fixed point.[]{data-label="fig:6reluverifyestar"}](graphics/6relu_verify_estar){height=".2\textheight"} ![[**(a)**]{} A plot of $\dot {\mathbb{J}}_\alpha({\mathbf{e}}^*)$ versus $\alpha$. [**(b)**]{} A plot of the exponent $\mu$ of the dynamics of $|{\mathbf{e}}^* - \p {\mathbf{e}}l|$ (see \[thm:alphaReLUeConvergence\]) []{data-label="fig:MuPlots"}](graphics/JdotPlot "fig:"){height=".2\textheight"} ![[**(a)**]{} A plot of $\dot {\mathbb{J}}_\alpha({\mathbf{e}}^*)$ versus $\alpha$. [**(b)**]{} A plot of the exponent $\mu$ of the dynamics of $|{\mathbf{e}}^* - \p {\mathbf{e}}l|$ (see \[thm:alphaReLUeConvergence\]) []{data-label="fig:MuPlots"}](graphics/MuPlot "fig:"){height=".2\textheight"} Because of the following theorem, we cannot expect the equations of \[thm:dalethRecFull\] to hold for $\alpha \le \f 3 4$. [thm]{}[dalethInfVarAlphaReLU]{} \[thm:dalethInfVarAlphaReLU\] Suppose we have the nonlinearity $\psi_\alpha$ in an FRN. $\Var(\dot \psi_\alpha(\zeta)^2)$ diverges for any Gaussian variable $\zeta$ with mean 0 if $\alpha \le \f 3 4$ but is finite if $\alpha > \f 3 4$. 
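In the same spirit as \[fig:reluverifypasymptotics\] (which treats $\alpha = 1$), the following sketch iterates the $\alpha$-ReLU length recurrence given above for a few values of $\alpha < 1$ and compares $\p {\mathbf{p}}l / l^{1/(1-\alpha)}$ against the limit $[\sigma_v^2 \sigma_w^{2\alpha} {\mathsf{c}}_\alpha (1 - \alpha)]^{1/(1-\alpha)}$ from \[thm:pDynamicAlphaReLU\]. The $\sigma_\bullet^2$ values are those quoted in \[fig:alphaReLUVerifyExponents\]; the initial condition and the depth are arbitrary assumptions.

```python
import math

def c_alpha(a):
    # c_alpha = 2^(alpha - 1) Gamma(alpha + 1/2) / sqrt(pi)   (lemma:VtPsiAlpha)
    return 2.0 ** (a - 1.0) * math.gamma(a + 0.5) / math.sqrt(math.pi)

sv2, sa2, sw2, sb2 = 1.5, 0.5, 1.69, 0.49   # parameters quoted in fig:alphaReLUVerifyExponents
L = 10**6                                   # depth (arbitrary; only needs to be large)

for a in (0.55, 0.75, 0.9):
    ca, p = c_alpha(a), 1.0                 # p^0 = 1 is an arbitrary initial condition
    for _ in range(L):
        q = sw2 * p + sb2                   # q = sigma_w^2 p + sigma_b^2
        p = sv2 * ca * q ** a + sa2 + p     # p = sigma_v^2 c_alpha q^alpha + sigma_a^2 + p
    limit = (sv2 * sw2 ** a * ca * (1.0 - a)) ** (1.0 / (1.0 - a))
    print(f"alpha={a}: p_L / L^(1/(1-alpha)) = {p / L ** (1.0 / (1.0 - a)):.4f}, "
          f"predicted = {limit:.4f}")
```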
[thm]{}[dalethDynamicsAlphaReLU]{} \[thm:dalethDynamicsAlphaReLU\] Suppose we have the nonlinearity $\psi_\alpha$ in an FRN. If $\alpha = 1$, then $\p{{\boldsymbol{\oldchi}}}{l-m} = \p {{\boldsymbol{\oldchi}}}{l} \left(\f 1 2 \sigma_v^2 \sigma_w^2 + 1\right)^m$. If $\alpha \in (\f 3 4, 1)$, then $\p{{\boldsymbol{\oldchi}}}{l-m} = \Theta(1) \p {{\boldsymbol{\oldchi}}}{l} (l/(l-m))^R$ for $R = \f{\alpha^2}{(1-\alpha)(2 \alpha - 1)}$, where the constants in $\Theta(1)$ do not depend on $l$ or $m$. This exponent $\f{\alpha^2}{(1 - \alpha)(2\alpha - 1)}$ is minimized at $\alpha = \f 3 4$ on $\alpha \in (3/4, 1)$, where the value is $\f 9 2$ (and at $\alpha = \f 2 3$ on $\alpha \in (1/2, 1)$, where the value achieved is 4) (\[fig:backprop\_exponent\_alpha-relu\](a)). As a corollary, [thm]{}[alphaReLUAllGradients]{}\[thm:alphaReLUAllGradients\] If $\phi = \psi_1$ in an FRN, then for $l \ge m \ge 0,$ $$\begin{aligned} \p {\boldsymbol{\oldchi}}{l-m} _b &= \Theta(1) \p {{\boldsymbol{\oldchi}}}l B^m,& \p {\boldsymbol{\oldchi}}{l-m} _w &= \Theta(1) \p {{\boldsymbol{\oldchi}}}l B^l,\\ \p {\boldsymbol{\oldchi}}{l-m} _v&= \Theta(1) \p {{\boldsymbol{\oldchi}}}l B^l,& \p {\boldsymbol{\oldchi}}{l-m} _a &= \Theta(1) \p {{\boldsymbol{\oldchi}}}{l} B^m.\end{aligned}$$ where $B = 1 + \sigma_v^2\sigma_w^2/2$. If $\phi = \psi_\alpha$ in an FRN, for $\alpha < 1$, then for $l \ge m \ge 0,$ $$\begin{aligned} \p {\boldsymbol{\oldchi}}{l-m} _b &= \Theta(1) \p {{\boldsymbol{\oldchi}}}l l^R (l-m)^{-R-1},& \p {\boldsymbol{\oldchi}}{l-m} _w &= \Theta(1) \p {{\boldsymbol{\oldchi}}}l l^R (l-m)^{\f \alpha {1-\alpha} - R},\\ \p {\boldsymbol{\oldchi}}{l-m} _v &= \Theta(1) \p {{\boldsymbol{\oldchi}}}l l^R (l-m)^{\f \alpha {1-\alpha} - R},& \p {\boldsymbol{\oldchi}}{l-m} _a &= \Theta(1) \p {{\boldsymbol{\oldchi}}}{l} (l/(l-m))^R.\end{aligned}$$ \[fig:alphaReLUVerifyExponents\] verifies the backward asymptotic dynamics empirically for different $\alpha < 1$. \[fig:backprop\_exponent\_alpha-relu\](b) graphs the exponent $\f \alpha {1-\alpha} - R$ in terms of $\alpha$. We see that on $[0.5, 1]$, the maximum of this exponent is at $\alpha = 1$. ![**(a)** The exponent of the polynomial gradient dynamics with respect to $\alpha$-ReLU versus $\alpha$. **(b)** The exponent of the dynamics of ${\boldsymbol{\oldchi}}_v$ and ${\boldsymbol{\oldchi}}_w$.[]{data-label="fig:backprop_exponent_alpha-relu"}](graphics/backprop_exponent_alpha-relu.pdf "fig:"){width=".4\textwidth"} ![**(a)** The exponent of the polynomial gradient dynamics with respect to $\alpha$-ReLU versus $\alpha$. **(b)** The exponent of the dynamics of ${\boldsymbol{\oldchi}}_v$ and ${\boldsymbol{\oldchi}}_w$.[]{data-label="fig:backprop_exponent_alpha-relu"}](graphics/alphaReLU_chi_w_chi_v_exponent.pdf "fig:"){width=".4\textwidth"} Proofs ====== A brief note about notation: We use $\sim$ to denote both how a random variable is sampled (ex: $x \sim {\mathcal{N}}(0, 1)$ for a Gaussian $x$) and how a function behaves asymptotically, i.e. $f(x) \sim g(x)$ as $x \to a$ iff $\lim_{x \to a} f(x)/g(x) = 1$. Context should be enough to differentiate between these two cases. We in addition use $\simeq$ to denote asymptotic expansion. For example, if $\{\alpha_i\}_{i \ge 0}$ is a sequence of strictly decreasing reals and $\{\beta_i\}_{i \ge 0}$ is a sequence of nonzero reals, then $$f(x) \simeq \sum_{i \ge 0} \beta_i (x - \xi)^{\alpha_i}$$ means that as $x \to \xi$, $f(x) - \sum_{i = 0}^N \beta_i (x - \xi)^{\alpha_i} = \Theta((x- \xi)^{\alpha_{N+1}})$. 
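The exponents appearing in \[thm:dalethDynamicsAlphaReLU\] and \[thm:alphaReLUAllGradients\] are elementary functions of $\alpha$, so the curves of \[fig:backprop\_exponent\_alpha-relu\] can be reproduced directly; the sketch below also checks the two values quoted above, $R(3/4) = 9/2$ and $R(2/3) = 4$, in exact arithmetic.

```python
from fractions import Fraction

def R(a):
    # Exponent of the polynomial gradient dynamics (thm:dalethDynamicsAlphaReLU).
    return a * a / ((1 - a) * (2 * a - 1))

def w_exponent(a):
    # Exponent alpha/(1-alpha) - R governing chi_w and chi_v (thm:alphaReLUAllGradients).
    return a / (1 - a) - R(a)

# Exact checks of the values quoted in the text.
assert R(Fraction(3, 4)) == Fraction(9, 2)
assert R(Fraction(2, 3)) == 4

# Coarse grid over (1/2, 1), corresponding to the two panels of the figure.
for k in range(55, 100, 5):
    a = k / 100
    print(f"alpha={a:.2f}   R = {R(a):8.3f}   alpha/(1-alpha) - R = {w_exponent(a):8.3f}")
```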
Preliminary Lemmas ------------------ \[lemma:cExpansion\] We have $$\f{\sigma_w^2 {\boldsymbol{\oldgamma}}+ \sigma_b^2}{\sigma_w^2 {\mathbf{p}}+ \sigma_b^2} = {\mathbf{e}}(1 + O( {\boldsymbol{\oldgamma}}^{-1})).$$ regardless of whether $\p {\mathbf{e}}l = \p {\boldsymbol{\oldgamma}}l /\p {\mathbf{p}}l$ converges. But suppose $\p {\mathbf{e}}l = \p {\boldsymbol{\oldgamma}}l /\p {\mathbf{p}}l \to {\mathbf{e}}^*$. If ${\mathbf{e}}^* < 1$, then $$\f{\sigma_w^2 {\boldsymbol{\oldgamma}}+ \sigma_b^2}{\sigma_w^2 {\mathbf{p}}+ \sigma_b^2} = {\mathbf{e}}(1 + \Theta( {\boldsymbol{\oldgamma}}^{-1})).$$ If ${\mathbf{e}}^* = 1$, then $$\f{\sigma_w^2 {\boldsymbol{\oldgamma}}+ \sigma_b^2}{\sigma_w^2 {\mathbf{p}}+ \sigma_b^2} = {\mathbf{e}}(1 + \Theta( {\epsilon}{\mathbf{p}}^{-1})),$$ where ${\epsilon}= 1 - {\mathbf{e}}$. Write $M = \sigma_b^2/\sigma_w^2$. $$\begin{aligned} \f{\sigma_w^2 {\boldsymbol{\oldgamma}}+ \sigma_b^2}{\sigma_w^2 {\mathbf{p}}+ \sigma_b^2} &= {\mathbf{e}}(1 + \f{1 + M {\boldsymbol{\oldgamma}}^{-1}}{1 + M {\mathbf{p}}^{-1}})\\ &= {\mathbf{e}}(1 + M(\inv {\boldsymbol{\oldgamma}}- \inv {\mathbf{p}}) + O(\inv {\mathbf{p}}(\inv {\boldsymbol{\oldgamma}}- \inv {\mathbf{p}}))).\end{aligned}$$ In any situation, $\inv {\boldsymbol{\oldgamma}}- \inv {\mathbf{p}}= O(\inv {\boldsymbol{\oldgamma}})$ because ${\boldsymbol{\oldgamma}}\le {\mathbf{p}}$, so this gives the first statement. If ${\mathbf{e}}^*$ exists and ${\mathbf{e}}^* < 1$, then $\inv {\boldsymbol{\oldgamma}}- \inv {\mathbf{p}}= \Theta(\inv {\boldsymbol{\oldgamma}})$, which yields the second statement. If ${\mathbf{e}}^*$ exists and ${\mathbf{e}}^* = 1$, then $\inv {\boldsymbol{\oldgamma}}- \inv {\mathbf{p}}= \inv {\mathbf{p}}((1-{\epsilon})^{-1} - 1) = \inv {\mathbf{p}}({\epsilon}+ O({\epsilon}^2)) = \Theta({\epsilon}\inv {\mathbf{p}})$. For any function $f$ that is $(k+1)$-times differentiable in a neighborhood of $0$, we have the asymptotic expansion $$f(z) = \sum_{n = 0}^k \f{d^n f}{dz^n}(0) \f{z^n}{n!} + O(z^{k+1}), \text{as } z \to 0.$$ Since $$\begin{aligned} \left.\f{d^n}{d(1/q)^n}q^{1/2}{\mathrm{V}}\phi( q)\right\rvert_{q \to \infty} &= \f {(-1)^n} {2^n\sqrt{2\pi}}\int_{-\infty}^\infty \phi^2(z) z^{2n} \dd z\end{aligned}$$ whenever the RHS is integrable, we have \[lemma:Vt\_asymptotic\] Suppose $\phi^2(z) z^{2n}$ is integrable over $z \in \R$ for all $0 \le n \le N + 1$. Then ${\mathrm{V}}\phi( q) = q^{-1/2} (\sum_{n = 0}^N C_n q^{-n} + O(q^{-N-1}))$ as $q \to \infty$, where $$C_n := \f {(-1)^n} {2^n n! \sqrt{2\pi}}\int_{-\infty}^\infty \phi^2(z) z^{2n} \dd z.$$ Note that ${\operatorname{sech}}^d(z) = \Theta(e^{-d|z|})$ for $z \to \infty$ as long as $d > 0$, so that $C_n$ from the above result converges when $\phi = {\operatorname{sech}}^d$. Therefore \[lemma:Vt\_sech\_asymptotics\] Let $d > 0$. We have ${\mathrm{V}}{\operatorname{sech}}^d( q) \simeq q^{-1/2} \sum_{n \ge 0} C_n q^{-n}$, where $$C_n := \f {(-1)^n} {2^n n! \sqrt{2\pi}}\int_{-\infty}^\infty {\operatorname{sech}}^{2d}(z) z^{2n} \dd z.$$ As corollaries, we obtain the following asymptotics. \[lemma:Vt\_dot\_tanh\_asymptotics\] ${\mathrm{V}}\dot \tanh( q) = \f 2 3 \sqrt{\f 2 \pi} q^{-1/2} + \Theta(q^{-3/2})$ as $q \to \infty$. Use \[lemma:Vt\_sech\_asymptotics\] along with the fact that $\dot \tanh(z) = {\operatorname{sech}}^2(z)$ and $\int {\operatorname{sech}}^4 z \dd z = \f 2 3 \tanh z + \f 1 2 {\operatorname{sech}}^2 z \tanh z$. \[lemma:vtanhSqrtConvergence\] $1 - {\mathrm{V}}\tanh( q) = \sqrt{\f 2 \pi} q^{-1/2} + \Theta(q^{-3/2})$ as $q \to \infty$. 
Use \[lemma:Vt\_sech\_asymptotics\] along with the fact that $1 - \tanh^2(z) = {\operatorname{sech}}^2(z)$ and $\int {\operatorname{sech}}^2 z \dd z = \tanh z$. \[lemma:sech\_lower\_bound\] ${\operatorname{sech}}^2(t) \ge \exp(-t^2)$ for all $t$, with equality iff $t = 0$. The lower bound is equivalent to $$\begin{aligned} 2 & \ge e^{t-t^2/2} + e^{-t -t^2/2}\end{aligned}$$ The RHS has derivative $(1 - t)e^{t - t^2/2} - (1 + t)e^{-t-t^2/2}$. This is 0 iff $$\begin{aligned} \f{1 - t}{1 + t} = e^{-2t}\end{aligned}$$ which has a solution 0 and in general can only have solution $t\in (-1, 1)$ (by considering the sign of the LHS). Since each side is analytic in $t \in (-1, 1)$, we expand $$\begin{aligned} \log \f{1 - t}{1 + t} &= \log e^{-2t}\\ \log(1-t) - \log(1+t) &= -2t\\ (-t-\f {t^2} 2-\f {t^3} 3-\cdots) - (t-\f {t^2} 2+\f {t^3} 3-\cdots) &= -2t\\ -2t-\f {2t^3} 3-\cdots &= -2t\end{aligned}$$ which shows that the only solution is $t = 0$. A simple plot shows that $t=0$ is a maximum of the RHS, where the bound in question achieves equality. \[lemma:V\_dot\_tanh\_lower\_bound\] Suppose $\phi = \tanh$. Then ${\mathrm{V}}\dot \phi( q) \ge \f 1 {\sqrt{4q+1}}$. As a sanity check, \[lemma:Vt\_dot\_tanh\_asymptotics\] shows that ${\mathrm{V}}\dot \phi( q) \sim C_0 q^{-1/2}$ where $C_0 \approx .5319$, which is above the .5 in this lemma. By \[lemma:sech\_lower\_bound\], $$\begin{aligned} {\mathrm{V}}\dot\phi( q) &= \int \dd \mu(z) \dot \phi^2(\sqrt q z)\\ &\ge \f 1 {\sqrt{2\pi}} \int \dd z \exp(-z^2/2 - 2qz^2)\\ &= \f 1 {\sqrt{2\pi}} \int \dd z \exp(-(4q + 1)z^2/2)\\ &= \f 1 {\sqrt{4q + 1}}. \end{aligned}$$ \[fig:V\_dot\_tanh\_lower\_bound\] demonstrates \[lemma:V\_dot\_tanh\_lower\_bound\]. ![Illustration of \[lemma:V\_dot\_tanh\_lower\_bound\]: ${\mathrm{V}}\dot \phi( q)$ vs $\f 1 {\sqrt{4q+1}}$ for $\phi = \tanh$. This bound is very tight, and for most purposes, $\f 1 {\sqrt{4q+1}}$ can be taken as a good approximation of ${\mathrm{V}}\dot \phi( q)$.[]{data-label="fig:V_dot_tanh_lower_bound"}](graphics/V_dot_tanh_lower_bound.pdf){width=".4\textwidth"} \[lemma:power\_sum\_asymptotics\] Let $d \in \R$ and $1 < M < N$ with $N - M \in \Z^{\ge 0}$. Set $\Sigma(M, N, d) := \sum_{a=M}^N a^d$. If we fix $M$ and let $N \to \infty$, $$\Sigma(M, N, d) = \begin{cases} \Theta(1) & \text{if $d < -1$}\\ \log N + O(1) & \text{if $d = -1$}\\ \f{N^{d+1}}{d+1} + O(1) & \text{if $-1 < d < 0$}\\ N - M + 1 & \text{if $d =0$}\\ \f 1 {d+1}N^{d+1} + \f 1 2 N^d + O(N^{\max(0, d-1)}) & \text{if $d > 0$} \end{cases}$$ Consider the integrals $A = \int_M^{N+1} a^d \dd a$ and $B = \int^N_{M-1} a^d \dd a $. They evaluate to $A = \f 1 {d+1}((N+1)^{d+1} - M^{d+1})$ and $B = \f 1 {d+1}(N^{d+1} - (M-1)^{d+1})$ when $d \not = -1$ and to $A = \log(N+1) - \log M$ and $B = \log N - \log(M-1)$ when $d = -1$. When $d \le 0$, we have $A \le B$ and $\Sigma(M, N, d) \in [A, B]$; when $d > 0$, $B \le A$ and $\Sigma(M, N, d) \in [B, A].$ Thus, as $N \to \infty$ with $M$ fixed, when $d < -1$, $\Sigma(M, N, d) = \Theta(1)$; when $d = -1$, $\Sigma(M, N, -1) = \log N + O(1)$; and when $d > -1$, we have $\Sigma(M, N, d) = \f{N^{d+1}}{d+1} + O(N^{d})$. Now for $a > 0$ and $d > -1$ and $d \not = 0, 1$, $$\begin{aligned} \int_a^{a+1} z^d - a^d \dd z &= \f 1 {d+1} ( (a+1)^{d+1} - a^{d+1} ) - a^d\\ &= (a^d + \f d 2 a^{d-1} + \cdots ) - a^d\\ &= \f d 2 a^{d-1} + \Theta(a^{d-2}). \end{aligned}$$ where the hidden constants in $\Theta$ depend only on $d$ (and in fact this term vanishes if $d = 1$).
Thus $$\begin{aligned} \Sigma(M, N, d) &= \int_M^{N+1} z^d \dd z - \sum_{a=M}^N [ \f d 2 a^{d-1} + \Theta(a^{d-2})]\\ &= \f 1 {d+1} ((N+1)^{d+1} - M^{d+1}) - \f d 2 \Sigma(M, N, d-1) + \Theta(\Sigma(M, N, d-2)) \end{aligned}$$ If $-1 < d < 0$, then $\Sigma(M, N, d - 1) = \Theta(1)$, so that $\Sigma(M, N, d) = \f{(N+1)^{d+1}}{d+1} + O(1) = \f{N^{d+1}}{d+1} + O(1).$ If $d > 0$ and $d \not = 1$, then $\Sigma(M, N, d - 1) = \f{N^d}d$, so that $$\begin{aligned} \Sigma(M, N, d) &= \f 1 {d+1} N^{d+1} + N^{d} + \Theta(N^{\max(0, d-1)}) - \f 1 2 N^d + \Theta(\Sigma(M, N, d-2)) \\ &= \f 1 {d+1}N^{d+1} + \f 1 2 N^d + O(N^{\max(0, d-1)}). \end{aligned}$$ We can obtain more terms in the expansion for higher $d$ via the Euler-Maclaurin formula, but this suffices for our purposes. Dynamics Zoo ------------ This section deduces the asymptotic behaviors of some sequences governed by recurrence equations. For the most part, the leading term of their asymptotic expansions is as one would expect from the corresponding differential equation. However, in some cases we need subleading terms for later results. They require slightly more nuanced reasoning. First we present a technical lemma. \[lemma:simpleDynamics\] Let $F: \R \times \N \to \R$ be a function such that for a subset $U \sbe \R$, and for all $z, z' \in U, z \ge z' \implies F(z, n) \ge F(z', n)$ for every $n$. Suppose sequences $\p a l, \p b l, \p {c}l$ satisfy - $\p a {l+1} = F(\p a l, l)$ for all $l$; - $\p b {l+1} \le F(\p b l, l)$ for all $l$ above a constant $K_b$. - $\p {c}{l+1} \ge F(\p {c}l, l)$ for all $l$ above a constant $K_c$. and furthermore, $\p a l, \p b l, \p {c}l$ all fall into $U$ for $l$ above a constant $K_U$. If for some $m \ge \max(K_b, K_U)$, $\p b m \le \p a m$, then $\p b l \le \p a l, \forall l \ge m$. Similarly, if for some $n \ge \max(K_c, K_U)$, $\p {c}n \ge \p a n$, then $\p {c}l \ge \p a l, \forall l \ge n$. For the first claim: $\p b m \le \p a m \implies \p b {m+1} \le F(\p b m, m) \le F(\p a m, m) = \p a {m+1}$. Here the last inequality used the monotonicity of $F$. Induction gives the desired result. It’s similar for the second claim, where the inductive step is $\p {c}m \ge \p a m \implies \p {c}{m+1} \ge F(\p {c}m, m) \ge F(\p a m, m) = \p a {m+1}$. \[lemma:alphaDeltaRec\] Suppose $\p {\epsilon}l$ satisfies the recurrence $$\p {\epsilon}l = \p {\epsilon}{l-1} (1 + \f \delta {l^\beta}).$$ for some nonzero constant $\delta \in \R$ independent of $l$. - If $\beta > 1$, then $\p {\epsilon}l = \Theta(1)$. - If $\beta = 1$, then $\p {\epsilon}l = \Theta(l^{\delta})$. - If $0 < \beta < 1$, then $\p {\epsilon}l = \exp(\f{\delta}{1 - \beta}l^{1-\beta} + \tilde\Theta(l^{\psi_1(1-2\beta)}))$, where $\psi_1(x) = \max(0, x)$ is the ReLU function. We have $$\begin{aligned} \log \p {\epsilon}l &= \log \p {\epsilon}{l-1} + \log(1 + \delta/l^\beta)\\ &= \log \p {\epsilon}{l-1} + \delta/l^\beta + \Theta(\delta^2/l^{2\beta}) \end{aligned}$$ for large $l$. If $\beta > 1$, then $\sum_l l^{-\beta}$ converges, and $$\begin{aligned} \log \p {\epsilon}l &= \log \p {\epsilon}0 - \Theta(1)\\ \p {\epsilon}l &= \Theta(1). \end{aligned}$$ If $\beta = 1$, then $$\begin{aligned} \log \p {\epsilon}l &= \log \p {\epsilon}0 + \delta \log l + \Theta(1)\\ \p {\epsilon}l &= \Theta(l^{\delta}). 
\end{aligned}$$ If $\beta < 1$, then $$\begin{aligned} \log \p {\epsilon}l &= \log \p {\epsilon}0 + \f\delta{1 - \beta} l^{1 - \beta} + \tilde\Theta(l^{1 - 2\beta})\\ \p {\epsilon}l &= \exp( \f\delta{1 - \beta} l^{1 - \beta} + \tilde\Theta(l^{\psi_1(1 - 2\beta)})). \end{aligned}$$ \[lemma:alphaDeltaDynamics\] Suppose $\p {\epsilon}l = C l^{-\alpha} + \p {\epsilon}{l-1} (1 + \delta/l^{\beta})$ for $\alpha \in \R$, $C\not=0$, and $\delta \not = 0$. Then - If $\beta > 1$, then - $\p {\epsilon}l = \Theta(l^{1 - \alpha})$ if $\alpha \in (0, 1)$; - $\p {\epsilon}l = \Theta(\log l)$ if $\alpha = 1$; - $\p {\epsilon}l = \Theta(1)$ if $\alpha > 1$. - If $\beta = 1$, then - $\p {\epsilon}l = \Theta(l^{\max(\delta, 1-\alpha)})$ if $1-\delta \not = \alpha$. - $\p {\epsilon}l = \Theta(l^{\delta} \log l)$ if $1-\delta = \alpha$. Furthermore, for $\beta = -\delta = 1$, $\p {\epsilon}l \sim l^{-1}$ if $\alpha > 2$, $\p {\epsilon}l \sim l^{1-\alpha}$ if $\alpha < 2$, and $\p {\epsilon}l \sim l^{\delta} \log l$ if $\alpha = 2$. We can unwind the recurrence to get $$\begin{aligned} \p {\epsilon}l &= \sum_{m=1}^l m^{-\alpha} \prod_{n=m+1}^l (1 + \f \delta {n^\beta}) + \p {\epsilon}0 \prod_{n=1}^l (1 + \f \delta {n^\beta}) \end{aligned}$$ Suppose $\beta > 1$. By \[lemma:alphaDeltaRec\], we get $$\begin{aligned} \p {\epsilon}l &= \Theta(1)\sum_{m=1}^l m^{-\alpha} + \p {\epsilon}0 \Theta(1)\\ &= \begin{cases} \Theta(l^{1 - \alpha}) & \text{if $\alpha \in (0, 1)$}\\ \Theta(\log l) & \text{if $\alpha = 1$}\\ \Theta(1) & \text{if $\alpha > 1$.} \end{cases} \end{aligned}$$ Now suppose $\beta = 1$. By \[lemma:alphaDeltaRec\], we get $$\begin{aligned} \p {\epsilon}l &= \sum_{m=1}^l m^{-\alpha} \Theta(m^{-\delta}l^\delta) + \p {\epsilon}0 \Theta(l^{\delta}) \end{aligned}$$ where the constants hidden inside the $\Theta$ are the same in every term of the sum. If $\alpha > 1 - \delta$, then $m^{-\delta - \alpha} = o(m^{-1})$, so that $\sum_{m=1}^l m^{-\delta - \alpha} = \Theta(1)$, and $$\begin{aligned} \p {\epsilon}l &= \Theta(l^{\delta}) + \p {\epsilon}0 \Theta(l^{\delta})\\ &= \Theta(l^{\delta}). \end{aligned}$$ On the other hand, if $\alpha < 1 - \delta$, then $\sum_{m=1}^l m^{-\delta - \alpha} = \Theta(l^{1 - \delta - \alpha})$. So $$\begin{aligned} \p {\epsilon}l &= \Theta(l^{1 - \alpha}) + \p {\epsilon}0 \Theta(l^{\delta})\\ &= \Theta(l^{1 - \alpha}). \end{aligned}$$ If $\alpha = 1 - \delta$, then $\sum_{m=1}^l m^{-\delta - \alpha} = \Theta(\log l)$. So $$\begin{aligned} \p {\epsilon}l &= \Theta(l^{\delta}\log l) + \p {\epsilon}0 \Theta(l^{\delta})\\ &= \Theta(l^{\delta}\log l). \end{aligned}$$ Finally, if $\beta \in (0, 1)$, then $$\begin{aligned} \p {\epsilon}l &= e^{\f\delta{1-\beta}l^{1-\beta} + \Theta(l^{1-2\beta})}\sum_{m=1}^l m^{-\alpha} e^{\f{-\delta}{1-\beta} m^{1-\beta} + \Theta(m^{1-2\beta})} + e^{\f\delta{1-\beta}l^{1-\beta} + \Theta(l^{1-2\beta})} \end{aligned}$$ The case of $\delta = -1$ telescopes, so that the upper and lower constants hidden in $\Theta$ can both be taken to be 1. \[lemma:invlogDynamics\] Suppose for some $\beta > 0$, a sequence $\p {\epsilon}l$ satisfies $$\p {\epsilon}l = \p {\epsilon}{l-1} (1 - \mu (\p {\epsilon}{l-1})^\beta/l),\quad \p {\epsilon}0 \in (0, \f 1 \mu).$$ Then $\p {\epsilon}l \sim (\beta\mu \log l)^{-1/\beta}$. Consider the differential equation $$\dot x_\mu = - \mu x^{\beta+1}_\mu/t$$ for constant $\mu$ has solution $x_\mu = [\beta(\mu \log t + C)]^{-1/\beta}$ for some constant $C$ determined by initial condition. 
Note that $$-\mu x_\mu(t)^{\beta+1}/t \le x_\mu(t+1) - x_\mu(t) \le - \mu x_\mu(t+1)^{\beta+1}/(t+1) = - (1 - o(t^{-1}))\mu x_\mu (t)^{\beta+1}/t.$$ For any small enough $\alpha > 0$, we apply \[lemma:simpleDynamics\] with $F({\epsilon}, l) = {\epsilon}- \mu {\epsilon}^{\beta+1}/l$ (which is monotonic in ${\epsilon}$ for small enough ${\epsilon}$), $\p {c}l = x_\mu(l)$, and $\p b l = x_{\mu-\alpha}(l)$ to obtain $$x_{\mu - \alpha}(l) \le \p {\epsilon}l \le x_\mu(l)$$ for large enough $l$ and appropriately chosen initial conditions. This shows that $\p {\epsilon}l = \Theta(\log l ^{-1/\beta})$ Taking $\alpha \to 0$, we also obtain the leading coefficient $\p {\epsilon}l \sim [\beta\mu \log l]^{-1/\beta}$. \[lemma:polyDynamics\] Suppose a sequence $\p u l$ is governed by the equation $$\p u l - \p u {l-1} = A(\p u {l-1} + B)^\alpha,$$ where $\alpha \in [0, 1)$ and $A > 0$. Then $\p u l = K_1 l^{{\f 1 {1-\alpha}}}- K_2 l^{{\f \alpha {1-\alpha}}}\log l + o(l^{{\f \alpha {1-\alpha}}}\log l)$, where $K_1 = [A(1-\alpha)]^{{\f 1 {1-\alpha}}}$ and $K_2 = \f 1 2 A^{{{\f 1 {1-\alpha}}}} (1-\alpha)^{{{\f \alpha {1-\alpha}}}- 1} \alpha$. **Leading term.** The differential equation $$\dot x_{A, B} = A(x_{A, B} + B)^\alpha$$ has solution $x_{A,B}(l) = [A(1-\alpha)(l + S)]^{\f 1 {1-\alpha}} - B$ for some constant $S$. Since $\dot x_{A, B}$ is monotonic, we have (writing $x = x_{A, B}$ for brevity) $$A(x_{A, B}(l) + B)^\alpha = \dot x_{A, B}(l) \le x_{A, B}(l+1) - x_{A, B}(l) \le \dot x_{A, B}(l+1) \le (A + o(1))(x_{A, B}(l) + B)^\alpha$$ for large enough $l$. We apply \[lemma:simpleDynamics\] with $F(x, l) = x + A(x + B)^\alpha$ (which is monotonic in $x$ for large $x$), $\p {c}l = x_{A, B}(l)$, and $\p b l = x_{A-{\epsilon}, B}(l)$ to obtain $$x_{A-{\epsilon},B}(l) \le \p u l \le x_{A, B}(l)$$ for large enough $l$ and appropriate initial conditions. Therefore $\lim \p u l / l^{\f 1 {1-\alpha}} \in [[(A - {\epsilon}) ( 1- \alpha)]^{\f 1 {1 - \alpha}}, [A ( 1- \alpha)]^{\f 1 {1 - \alpha}}].$ Taking ${\epsilon}\to 0$ gives the leading term. **Subleading term.** Now let $\p v l := \p u l - \aleph l^{\f 1 {1-a}}$, where $\aleph = [A(1-\alpha)]^{\f 1 {1-\alpha}}.$ Then we have the recurrence $$\begin{aligned} \p v {l+1} + \aleph (l + 1)^{{{\f 1 {1-\alpha}}}} - \p v l - \aleph l^{{{\f 1 {1-\alpha}}}} &= A(\p v l + \aleph l^{{{\f 1 {1-\alpha}}}} + B)^\alpha\\ \p v {l+1} - \p v l + \aleph({{\f 1 {1-\alpha}}}l^{{{\f \alpha {1-\alpha}}}} &+ \f 1 2 ({{\f 1 {1-\alpha}}}) ({{\f \alpha {1-\alpha}}}) l^{{{\f \alpha {1-\alpha}}}-1} + \Theta(l^{{{\f \alpha {1-\alpha}}}- 2})) \\&= A[\aleph^\alpha l^{{{\f \alpha {1-\alpha}}}} + \alpha (\p v l+B) \aleph^{\alpha - 1} \inv l + \Theta((\p v l+B) l^{-1-{{\f 1 {1-\alpha}}}})]\\ \p v {l+1} - \p v l &= {{\f \alpha {1-\alpha}}}\p v l \inv l - \f 1 2 \aleph({{\f 1 {1-\alpha}}})({{\f \alpha {1-\alpha}}}) l^{{{\f \alpha {1-\alpha}}}- 1} + g(l) \end{aligned}$$ for some $g(l) = O(l^{{{\f \alpha {1-\alpha}}}- 2} + \inv l)$ and where, to get the last equation, we have used $A \alpha^\alpha = {{\f 1 {1-\alpha}}}\aleph$ to cancel the $l^{{{\f \alpha {1-\alpha}}}}$ term and simplified $\alpha A \aleph^{\alpha-1} = {{\f \alpha {1-\alpha}}}$. For any $J > 0$, the differential equation $\dot v_J(l) = {{\f \alpha {1-\alpha}}}v_J(l) \inv l - J l^{{{\f \alpha {1-\alpha}}}- 1}$ has solution $v_J(l) = C[l(1-\alpha)]^{{{\f \alpha {1-\alpha}}}} - J l^{{{\f \alpha {1-\alpha}}}} \log l$. 
Note that the functions $F_J(z, n) = z + {{\f \alpha {1-\alpha}}}z \inv n - J n^{{{\f \alpha {1-\alpha}}}- 1}$ and $G_J(z, n) = F_J(z, n) + g(n)$ is monotonic in $z$ (for positive $n$). For large $l$, we also have $\dot v_J(l)$ and $F_J(v_J(l), l) = v_J(l) + \dot v_J(l)$ decreasing in $l$. Thus for any ${\epsilon}> 0$ and $l$ large enough $$G_{J+{\epsilon}}(v_J(l), l) \le F_{J+{\epsilon}/2}(v_J(l), l) \le v_J(l) + \dot v_J(l+1) \le v_J(l+1) \le F_J(v_J(l), l) \le G_{J-{\epsilon}}(v_J(l), l).$$ Now apply \[lemma:simpleDynamics\] with $F = G_{K}$, $\p a l = \p v l, \p {c}l = v_{K-{\epsilon}}, \p b l = v_{K+{\epsilon}}$ where $K := \f 1 2 \aleph({{\f 1 {1-\alpha}}})({{\f \alpha {1-\alpha}}}) = \f 1 2 A^{{{\f 1 {1-\alpha}}}} (1-\alpha)^{{{\f \alpha {1-\alpha}}}- 1} \alpha$, with appropriately chosen initial conditions. This yields $\lim_{l\to \infty} \p v l / (l^{{{\f \alpha {1-\alpha}}}} \log l) \in [-K - {\epsilon}, -K + {\epsilon}]$ for every ${\epsilon}> 0$, and there it must be equal to $K$. We have thus obtained the asymptotic expansion $$\p u l = [A(1-\alpha) l]^{{\f 1 {1-\alpha}}}- \f 1 2 A^{{{\f 1 {1-\alpha}}}} (1-\alpha)^{{{\f \alpha {1-\alpha}}}- 1} \alpha l^{{\f \alpha {1-\alpha}}}\log l + o(l^{{\f \alpha {1-\alpha}}}\log l).$$ \[lemma:polyDynamicsPosAlpha\] Suppose a sequence $\p u l$ is governed by the equation $$\p u l - \p u {l-1} = -A(\p u {l-1} + B)^\alpha,$$ where $\alpha > 1$ and $A > 0$. Then $\p u l \sim [A(\alpha-1)l]^{\f 1 {1-\alpha}}$. Similar to \[lemma:polyDynamics\]. \[lemma:polyDynamicsConstant\] Suppose a sequence $\p u l$ is governed by the equation $$\p u l - \p u {l-1} = A(\p u {l-1} + B)^\alpha + C,$$ where $\alpha \in (0, 1)$. Then $\p u l = K_1 l^{{\f 1 {1-\alpha}}}+ R(l)$, where the remainder $R(l)$ is $$R(l) \sim \begin{cases} -K_2 l^{{{\f \alpha {1-\alpha}}}} \log l &\text{if $\alpha > \f 1 2$}\\ (C-K_2) l \log l &\text{if $\alpha = \f 1 2$ and $K_2 \not=C$}\\ \f{C(1-\alpha)}{1-2\alpha} l & \text{if $ \alpha < \f 1 2$} \end{cases}$$ where $K_1 = [A(1-\alpha)]^{\f 1 {1-\alpha}}, K_2 = \f 1 2 A^{{{\f 1 {1-\alpha}}}} (1-\alpha)^{{{\f \alpha {1-\alpha}}}- 1} \alpha$ as in \[lemma:polyDynamics\]. $u$ is bounded below by the dynamics $\p v l - \p v {l-1} = A(\p v {l-1} + B)^\alpha$ and bounded above by the dynamics $\p w l - \p w {l-1} = (A + o(1))(\p w {l-1} + B)^\alpha$. By \[lemma:polyDynamics\], both $v$ and $w$ are asymptotic to $\p u l \sim [A(1-\alpha)l]^{\f 1 {1-\alpha}}$, which gives the result. Now define $\p v l = \p u l - [A(1-\alpha)l]^{{\f 1 {1-\alpha}}}$, and similar to the proof of \[lemma:polyDynamics\], we find $$\p v {l+1} - \p v l = {{\f \alpha {1-\alpha}}}\p v l \inv l - K l^{{{\f \alpha {1-\alpha}}}- 1} + C + g(l)$$ where $K = \f 1 2 A^{{{\f 1 {1-\alpha}}}} (1-\alpha)^{{{\f \alpha {1-\alpha}}}- 1} \alpha$ and $g(l) = O(l^{{{\f \alpha {1-\alpha}}}- 2} + \inv l)$. 
If ${{\f \alpha {1-\alpha}}}> 1 \iff \alpha > \f 1 2$, then $C + g(l) = o(l^{{{\f \alpha {1-\alpha}}}- 1})$ and we can proceed as in the proof of \[lemma:polyDynamics\] to find $\p v l \sim -K l^{{{\f \alpha {1-\alpha}}}} \log l.$ If ${{\f \alpha {1-\alpha}}}= 1 \iff \alpha = \f 1 2$ and $K \not = C$, then $\p v {l+1} - \p v l = {{\f \alpha {1-\alpha}}}\p v l l^{-1} - (K-C) l^{{{\f \alpha {1-\alpha}}}- 1} + g(l)$, so that the technique used in \[lemma:polyDynamics\] would obtain $\p v l \sim (C-K) l^{{{\f \alpha {1-\alpha}}}} \log l = (C-K) l \log l.$ If ${{\f \alpha {1-\alpha}}}< 1 \iff \alpha < \f 1 2$, then $\p v {l+1} - \p v l = {{\f \alpha {1-\alpha}}}\p v l l^{-1} + C + o(1)$, then by using the differential equation $\dot v_J(l) = {{\f \alpha {1-\alpha}}}v_J(l) l^{-1} + J$ to approximate the difference equation solution and applying \[lemma:simpleDynamics\] as in the proof of \[lemma:polyDynamics\], we obtain $\p v l \sim \f{C(1-\alpha)}{1-2\alpha} l$.

Forward Dynamical Equations
---------------------------

Here we derive the recurrences governing the forward length and correlation quantities ${\mathbf{p}}, {\mathbf{q}}, {\boldsymbol{\oldlambda}}, {\boldsymbol{\oldgamma}}.$ We start with reduced residual networks. We have $$\begin{aligned} {\mathbf{q}}&= \la h_j^2 \ra = \la (\sum_i w_{ji}{\underline}x_i + b_j)^2 \ra\\ &= \la b_j^2 \ra + \sum_i \la w_{ji}^2 {\underline}x_i^2\ra + 2\sum_i \la w_{ji} {\underline}x_i b_j \ra + 2 \sum_{i < k} \la w_{ji} w_{jk} {\underline}x_i {\underline}x_k\ra \end{aligned}$$ But $w_{ji}, w_{jk}, {\underline}x,$ and $b_j$ form an independency, so the last two sums are 0, and the terms in the first sum split multiplicatively. Therefore $$\begin{aligned} {\mathbf{q}}&= \sigma_b^2 + \sum_i \la w_{ji}^2 \ra \la {\underline}x_i^2 \ra\\ &= \sigma_b^2 + N \cdot \f {\sigma_w^2}{N} {\underline}{\mathbf{p}}\\ &= \sigma_b^2 + \sigma_w^2 {\underline}{\mathbf{p}}. \end{aligned}$$ For the recurrence of ${\mathbf{p}}$, we have $$\begin{aligned} {\mathbf{p}}&= \la x_i^2 \ra = \la (\phi(h_i) + {\underline}x_i)^2 \ra \\ &= \la \phi(h_i)^2\ra + \la {\underline}x_i^2 \ra + 2 \la\phi(h_i) {\underline}x_i \ra \end{aligned}$$ As $N \to \infty$, the coefficient $w_{ii}$ of ${\underline}x_i$ in $h_i$ has vanishing variance, so $h_i$ and ${\underline}x_i$ become independent. Therefore $\la \phi(h_i) {\underline}x_i \ra = \la \phi(h_i) \ra \la {\underline}x_i \ra$. Because $h_i$ is the sum of a large number of independent random variables, by CLT, $h_i$ is a Gaussian with mean $\sum_j \la w_{ij}\ra \la {\underline}x_j\ra + \la b_i\ra = 0$ since $\la w_{ij} \ra = \la b_i \ra = 0$. Our antisymmetry assumption on $\phi$ then implies $\la \phi(h_i) \ra = 0$. Therefore, $$\begin{aligned} {\mathbf{p}}&= \la \phi(h_i)^2\ra + \la {\underline}x_i^2 \ra\\ &= {\mathrm{V}}\phi( {\mathbf{q}}) + {\underline}{\mathbf{p}}\end{aligned}$$ as desired.

Similar to \[lemma:p\_q\_recurrence\].

Now, for the full residual networks, the proofs are similar, but we no longer need to assume that $\phi$ is antisymmetric because of the randomization via the extra sets of weights.
$$\begin{aligned} {\mathbf{q}}&= \la h_j^2 \ra = \la ( w_j^i {\underline}x_i + b_j)^2\ra = \la ( w_j^i {\underline}x_i)^2 \ra + \la b_j^2\ra\\ &= \sigma^2_w \la {\underline}x_i^2 \ra + \sigma_b^2\\ &= \sigma^2_w {\underline}{\mathbf{p}}+ \sigma^2_b\\ {\mathbf{p}}&= \la x_i^2 \ra = \la (v^j_i\phi( h_j) +{\underline}x_i + a_i)^2 \ra\\ &= \sigma_v^2\la \phi( h_i)^2 \ra + \la {\underline}x_i^2 \ra + \sigma_a^2\\ &=\sigma_v^2 {\mathrm{V}}\phi( {\mathbf{q}}) + \sigma_a^2 + {\underline}{\mathbf{p}}\\ \end{aligned}$$ where in the third equality for ${\mathbf{p}}$, we are now using the independence of $v_i^j$ from all other variables to cancel out the terms, whereas before we had to rely on $\phi$ being antisymmetric. Similar to \[thm:fullResPQRec\]. Backward Dynamical Equations ---------------------------- Here we derive the recurrences governing the gradient quantities ${{\boldsymbol{\oldchi}}}$ and ${\boldsymbol{\oldchi}}_{\bullet}$ for different $\bullet$, all under the gradient independence assumption. Write $\p {\beta}l_i = \pdf E {\p x l_i}$ for a cost function $E$. For a reduced residual network, we have the following derivative computation: $$\begin{aligned} \pdf {x_i} {{\underline}x_j} &= \delta_{ji} + \dot \phi(h_i) \pdf {h_i}{{\underline}x_j},& \pdf {x_i}{h_j} &= \delta_{ji} \dot \phi(h_j),& \pdf {h_i}{{\underline}x_j} &= w_{ij},& \pdf {h_i}{w_{ij}} &= {\underline}x_j,& \pdf {h_i}{b_j} &= \delta_{ij}. \end{aligned}$$ Then $$\begin{aligned} {\underline}{\beta}_j &= {\beta}_j + \sum_i {\beta}_i \dot \phi(h_i) \pdf {h_i}{{\underline}x_j}\\ &= {\beta}_j + \sum_i {\beta}_i\dot \phi(h_i) w_{ij}\\ \la {\underline}{\beta}_j^2 \ra &= \la [{\beta}_j + \sum_i {\beta}_i\dot \phi(h_i) w_{ij}]^2 \ra\\ &= \la {\beta}_j^2 \ra + \sum_i \la {\beta}_i^2 \dot \phi^2(h_i) (w_{ij})^2\ra \\ &\phantom{={}}+ 2\sum_{i < k} \la {\beta}_i {\beta}_k\dot\phi(h_i)w_{ij}\dot\phi(h_k)w_{kj}\ra + 2 \sum_i\la {\beta}_j {\beta}_i \dot\phi(h_i)w_{ij} \ra \end{aligned}$$ The last two terms of the above vanish as $w_{ij}$ is independent from $w_{kj}$, $h_i, h_k$ and ${\beta}_i, {\beta}_j, {\beta}_k$ by \[ass:gradInd\], and $\la w_{ij} \ra = 0$. Therefore, applying \[ass:symAct\], $$\begin{aligned} \la {\underline}{\beta}_j^2 \ra &= \sigma_w^2\la {\beta}_j^2\ra\la\dot \phi^2(h_i)\ra + \la {\beta}_j^2\ra\\ &= (\sigma_w^2 {\mathrm{V}}\dot \phi( {\mathbf{q}}) + 1)\la {\beta}_j^2 \ra \end{aligned}$$ We similarly have $$\begin{aligned} \pdf E {b_j} &= \sum_i \pdf E {x_i} \pdf {x_i}{h_j} = {\beta}_j \dot \phi(h_j), & \text{since $\pdf {x_i}{h_j} = \delta_{ji} \dot \phi(h_j)$}\\ \la \lp\pdf E {b_j}\rp^2 \ra &= \la {\beta}_j^2 \dot \phi(h_j)^2 \ra = \la {\beta}_j^2 \ra {\mathrm{V}}\dot \phi( {\mathbf{q}}), & \text{by \cref{ass:gradInd}(b)};\\ \pdf E {w_{ji}} &= \sum_i \pdf E {x_i} \pdf {x_i}{h_j}\pdf {h_j}{w_{ji}} = {\beta}_j \dot \phi(h_j){\underline}x_i, & \text{since $\pdf {x_i}{h_j} = \delta_{ji} \dot \phi(h_j)$}\\ \la \lp\pdf E {w_{ji}}\rp^2 \ra &= \la {\beta}_j^2 \dot \phi^2(h_j) {\underline}x_i^2 \ra = \la {\beta}_j^2 \ra {\mathrm{V}}\dot \phi( {\mathbf{q}}) {\underline}{\mathbf{p}}, & \text{by \cref{ass:gradInd}(b)} \end{aligned}$$ In the last equation we have also used the fact that as $N \to \infty$, $h_j$ and $x_i$ become independent (they are jointly Gaussian and their correlation $\la w_{ji}^2 \ra$ goes to 0 with $N$). 
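To make the role of \[ass:gradInd\] concrete, here is a small finite-width Monte Carlo sketch of the identity $\la {\underline}{\beta}_j^2 \ra = (\sigma_w^2 {\mathrm{V}}\dot \phi( {\mathbf{q}}) + 1)\la {\beta}_j^2 \ra$ just derived, instantiated with $\phi = \tanh$: the backpropagated errors $\beta_i$ are drawn independently of the forward weights and pre-activations, as the assumption prescribes. The width, ${\mathbf{q}}$, $\sigma_w^2$, and $\la \beta_j^2 \ra$ are arbitrary illustrative choices, and the $h_i$ are sampled directly from their mean-field Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

N, trials = 1000, 200          # width and number of Monte Carlo repetitions (assumptions)
sw2, q, chi = 2.0, 1.5, 1.0    # sigma_w^2, q, <beta_j^2> (assumptions)

def V_dot_tanh(q, n=10**6):
    # V phi'(q) = E[sech^4(sqrt(q) z)] for phi = tanh, by plain Monte Carlo.
    z = rng.standard_normal(n)
    return np.mean(np.cosh(np.sqrt(q) * z) ** -4)

acc = 0.0
for _ in range(trials):
    h = np.sqrt(q) * rng.standard_normal(N)              # h_i ~ N(0, q)
    beta = np.sqrt(chi) * rng.standard_normal(N)          # beta_i, independent of h and w
    w = rng.standard_normal((N, N)) * np.sqrt(sw2 / N)    # w_ij ~ N(0, sigma_w^2 / N)
    dphi = np.cosh(h) ** -2                               # tanh'(h_i) = sech^2(h_i)
    beta_prev = beta + (beta * dphi) @ w                  # beta_j + sum_i beta_i phi'(h_i) w_ij
    acc += np.mean(beta_prev ** 2)

print("simulated  <beta_prev_j^2>          :", acc / trials)
print("predicted  (sw2 * V phi'(q) + 1) chi:", (sw2 * V_dot_tanh(q) + 1.0) * chi)
```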
For the full residual network, we have the following derivative computations: $$\begin{aligned} \pdf {x_i} {{\underline}x_j} &= \delta_{ji} + \sum_k v_{ik}\dot \phi(h_k) \pdf {h_k}{{\underline}x_j},& \pdf {x_i}{h_j} &= v_{ij} \dot \phi(h_j),& \pdf {h_i}{{\underline}x_j} &= w_{ij},& \pdf {h_i}{w_{ij}} &= {\underline}x_j,& \pdf {h_i}{b_i} &= 1,\\ && \pdf {x_i}{v_{ik}} &= \phi(h_k),& \pdf {x_i}{a_i} &= 1. \end{aligned}$$ Again let ${\beta}_j = \pdf E {x_j}$. Then $$\begin{aligned} {\underline}{\beta}_j &= \sum_i {\beta}_i (\delta_{ji} + \sum_k v_{ik}\dot \phi(h_k) \pdf {h_k}{{\underline}x_j})\\ &= \sum_i {\beta}_i (\delta_{ji} + \sum_k v_{ik}\dot \phi(h_k) w_{kj}) \end{aligned}$$ Thus, $$\begin{aligned} \la {\underline}{\beta}_j^2 \ra &= \la [\sum_i {\beta}_i (\delta_{ji} + \sum_k v_{ik}\dot \phi(h_k) w_{kj})]^2 \ra\\ &= \la {\beta}_j^2 \ra + \sum_{i, k} \la v_{ik}^2 \ra \la w_{kj}^2\ra {\mathrm{V}}\dot \phi( {\mathbf{q}}) \la {\beta}_i^2 \ra\\ &= \la {\beta}_j^2 \ra (1 + \sigma_v^2 \sigma_w^2 {\mathrm{V}}\dot \phi( {\mathbf{q}})) \end{aligned}$$ where in the second equality we applied the independence argument as in the proof of \[thm:dalethRecReduced\], leveraging \[ass:gradInd\], and in the third equality we used \[ass:symAct\] to get $\la {\beta}_i^2\ra = \la {\beta}_j^2 \ra$. The other computations are similar to the proof of \[thm:dalethRecFull\]. Tanh: Reduced Residual Network ------------------------------ ### Forward Dynamics The case with $\sigma_w = 0$ is trivial. We assume $\sigma_w > 0$ from here on. **${\mathbf{p}}$ and ${\mathbf{q}}$ are asymptotically linear with $l$.** We first show that, for any $\omega < 1$, $$l + \p {\mathbf{p}}0 \ge \p {\mathbf{p}}l \ge \omega l$$ and $$\sigma_w^2 (l + \p {\mathbf{p}}0) + \sigma_b^2 \ge \p {\mathbf{q}}l \ge \sigma_w^2 \omega (l-1) + \sigma_b^2,$$ so that $\p {\mathbf{p}}l \sim l$ and $\p {\mathbf{q}}l \sim \sigma_w^2 l$. The upper bounds are trivial, given ${\mathrm{V}}\phi( {\mathbf{q}}) \le 1$ for any ${\mathbf{q}}$. We show the lower bounds for any $\omega < 1$. For any ${\epsilon}> 0$, define $\aleph_{\epsilon}$ by $\phi^2(\aleph_{\epsilon}) = \exp(-{\epsilon})$. Then $$\begin{aligned} {\mathrm{V}}\phi( {\mathbf{q}}) &\ge \exp(-{\epsilon}) \Pr[z \not \in [-\aleph_{\epsilon}, \aleph_{\epsilon}]: z \sim {\mathcal{N}}(0, {\mathbf{q}})]\\ &\ge \exp(-{\epsilon})\lp 1 - \f{2 \aleph_{\epsilon}}{\sqrt{2\pi {\mathbf{q}}}} \rp\end{aligned}$$ where the second inequality follows from an overestimate of the $\Pr[z \in [-\aleph_{\epsilon}, \aleph_{\epsilon}]]$ via the mode of ${\mathcal{N}}(0, {\mathbf{q}})$. For any ${\mathbf{q}}\ge \p {\mathbf{q}}0$, ${\mathrm{V}}\phi( {\mathbf{q}})$ is then lower bounded by $$\phi^2\lp \sqrt{\p {\mathbf{q}}0} \rp \lp1 - \f{2\sqrt{\p {\mathbf{q}}0}}{\sqrt{2 \pi \p {\mathbf{q}}0}}\rp = \phi^2\lp\sqrt{\p {\mathbf{q}}0}\rp \lp1 - \sqrt{\f 2 \pi}\rp > 0.$$ Thus $\p {\mathbf{p}}l$ and $\p {\mathbf{q}}l$ are unbounded with $l$. Furthermore, as ${\mathbf{q}}\to \infty$, the lower bound $\exp(-{\epsilon})\lp 1 - \f{2 \aleph_{\epsilon}}{\sqrt{2\pi {\mathbf{q}}}} \rp$ goes to $\exp(-{\epsilon})$, for any ${\epsilon}$. Therefore, for any $\omega < 1, \p {\mathbf{p}}l \ge \omega l$ and $\p {\mathbf{q}}l \ge \sigma_w^2 \omega (l-1) + \sigma_b^2$. 
**Asymptotic expansion.** Now we repeat the following to get each successive asymptotic term of $\p {\mathbf{p}}l$ and $\p {\mathbf{q}}l$: We plug in the current asymptotic form of $\p {\mathbf{q}}l$ into ${\mathrm{V}}\tanh( {\mathbf{q}}) = 1 - C {\mathbf{q}}^{-1/2} + \Theta({\mathbf{q}}^{-3/2})$ (\[lemma:vtanhSqrtConvergence\]), where $C = \sqrt{2/\pi}$. Next we take the sum $\p {\mathbf{p}}l = \sum_{r=1}^l {\mathrm{V}}\tanh( \p {\mathbf{q}}r) + \p {\mathbf{p}}0$, which yields one more term in the asymptotic expansion of ${\mathbf{p}}$ than the last round. We then repeat until we get only constant terms. The following exhibits a trace of this procedure, where in the summation step for $\p {\mathbf{p}}l$, we implicitly apply $$\begin{aligned} {\mathbf{q}}&= \sigma_w^2 l + o(l) = \sigma_w^2 l (1 + o(1))\\ {\mathbf{q}}^{-1/2} &= \sigma_w^{-1} l^{-1/2}(1 + o(1)) = \sigma_w^{-1} l^{-1/2} + o(l^{-1/2})\\ {\mathbf{p}}&= \sum_{r=1}^l 1 - C (\p {\mathbf{q}}r)^{-1/2} + \Theta((\p {\mathbf{q}}r)^{-3/2}) \\ &= \sum_{r=1}^l 1 - C (\sigma_w^{-1} r^{-1/2} + o(r^{-1/2})) + \Theta(r^{-3/2}) \\ &= l - 2C \sigma_w^{-1} l^{1/2} + o(l^{1/2})\\ {\mathbf{q}}&= \sigma_w^2 l - 2 C \sigma_w l^{1/2} + o(l^{1/2}) = \sigma_w^2 l (1 - 2 C \sigma_w^{-1} l^{-1/2} + o(l^{-1/2}))\\ {\mathbf{q}}^{-1/2} &= \sigma_w^{-1} l^{-1/2} ( 1 + C \sigma_w^{-1} l^{-1/2} + o(l^{-1/2}))= \sigma_w^{-1} l^{-1/2} + C \sigma_w^{-2} l^{-1} + o(l^{-1})\\ {\mathbf{p}}&= \sum_{r=1}^l 1 - C(\sigma_w^{-1} r^{-1/2} + C \sigma_w^{-2} r^{-1} + o(r^{-1})) + \Theta(r^{-3/2})\\ &= l - 2 C \sigma_w^{-1} l^{1/2} - C^2 \sigma_w^{-2} \log l + o(\log l)\\ {\mathbf{q}}&= \sigma_w^2 l (1 - 2 C \sigma_w^{-1} l^{-1/2} - C^2 \sigma_w^{-2} \f{\log l}{l} + o(\f{\log l}{l}))\\ {\mathbf{q}}^{-1/2} &=\sigma_w^{-1} l^{-1/2}(1 + C \sigma_w^{-1} l ^{-1/2} + \f 1 2 C^2 \sigma_w^{-2} \f{\log l}{l} + o (\f{\log l}{l}))\\ {\mathbf{p}}&= \sum_{r=1}^l 1 - C(\sigma_w^{-1} r^{-1/2} + C \sigma_w^{-2} r ^{-1} + \f 1 2 C^2 \sigma_w^{-3} \f{\log r}{r^{3/2}} + o (\f{\log r}{r^{3/2}})) + \Theta(r^{-3/2})\\ &= l - 2 C \sigma_w^{-1} l^{1/2} - C^2 \sigma_w^{-2} \log l + O(1)\end{aligned}$$ which is what we want. \[lemma:WtPhiAntisymmetric\] Let $\phi$ be antisymmetric. Then for $\tau \in [0, \pi/2]$, $$\begin{aligned} {\mathrm{W}}\phi( {q}, {q}\cos \tau) &= \lim_{t \to \tau}\f 1 {\pi \sin t} \int_{w' \ge |w|} \dd w \dd w' \Upsilon(w, w';\tau) \phi(\f {\sqrt{{q}}} {\sqrt{2}}(w + w')) \phi(\f {\sqrt {q}} {\sqrt{2}}(w' - w))\\ &= \f 1 \pi \int_{0}^\infty r\dd r e^{-r^2/2} \int_{0}^{\pi}\dd \theta \Sigma(\sqrt {q}r, \theta; \tau)\\ &= \f 1 \pi \int_0^\infty s \dd s \inv {q}e^{-s^2 \inv {q}/2} \int_0^\pi \dd \theta \Sigma(s, \theta; \tau)\\ &= \f 1 \pi \int_0^\pi \dd \theta \int_0^\infty \dd s e^{-s^2 \inv {q}/2} \pdf{}{s} \Sigma(s, \theta; \tau) \end{aligned}$$ where $\Upsilon(w, w';\tau) :=e^{-\f 1 2(\f{w^2}{1-c} + \f{(w')^2}{1+c})} - e^{-\f 1 2(\f{(w')^2}{1-c} + \f{w^2}{1+c})}$ with $c = \cos \tau$, and $\Sigma(s, \theta; \tau) := \phi(s \sin \theta) \phi(s \sin(\theta - \tau)).$ Of course, in the above lemma, the limit in the first equation is only necessary when $\tau = 0$ or $\tau = \pi/2$. Let $c := \cos \tau$ and $$\Gamma := {\mathrm{W}}\phi( {q}, cq) = \f 1 {2\pi {q}\sqrt{1 - c^2}} \int \dd \mathbf z \exp(-\mathbf z^T \Sigma^{-1} \mathbf z /2) \phi(z)\phi(z'),$$ where $\Sigma = \begin{pmatrix}{q}& cq \\ cq & {q}\end{pmatrix}$. Our proof will have two portions: Symmetrization of the $\Gamma$ integral and trigonometric change of variables for evaluation.
**Symmetrization.** $\Sigma$ is diagonalized by $\Omega = \f 1 {\sqrt{2q}}\begin{pmatrix}-1 & 1 \\ 1 & 1 \end{pmatrix}$, $$\Sigma = \Omega^T \Diag(1 - c, 1 + c) \Omega.$$ By a change of variable $\mathbf w = \Omega \mathbf z$, so that $\dd \mathbf w = \inv {q}\dd \mathbf z$, we have $$\begin{aligned} \Gamma &= \f 1 {2\pi \sqrt{1 - c^2}}\int \dd \mathbf w \exp(-\mathbf w^T \Diag(1 - c, 1 + c)^{-1}\mathbf w/2) \phi(\f {\sqrt {q}} {\sqrt{2}}(w' - w)) \phi(\f {\sqrt {q}} {\sqrt{2}}(w + w'))\\ &= \f 1 {2\pi \sqrt{1 - c^2}} \int \dd w \dd w' e^{-\f 1 2(\f{w^2}{1-c} + \f{(w')^2}{1+c})} \phi(\f {\sqrt{{q}}} {\sqrt{2}}(w' - w)) \phi(\f {\sqrt {q}} {\sqrt{2}}(w + w')) \end{aligned}$$ By a change of variable swapping $w$ with $w'$, we get $$\begin{aligned} \Gamma &= -\f 1 {2\pi \sqrt{1 - c^2}} \int \dd w \dd w' e^{-\f 1 2(\f{(w')^2}{1-c} + \f{w^2}{1+c})} \phi(\f {\sqrt{{q}}} {\sqrt{2}}(w + w')) \phi(\f {\sqrt {q}} {\sqrt{2}}(w' - w)) \end{aligned}$$ Thus $$\begin{aligned} 2 \Gamma &= \f 1 {2\pi \sqrt{1 - c^2}} \int \dd w \dd w' \Upsilon(w, w';\tau) \phi(\f {\sqrt{{q}}} {\sqrt{2}}(w + w')) \phi(\f {\sqrt {q}} {\sqrt{2}}(w' - w)) \end{aligned}$$ where $$\Upsilon(w, w';\tau) = e^{-\f 1 2(\f{w^2}{1-c} + \f{(w')^2}{1+c})} - e^{-\f 1 2(\f{(w')^2}{1-c} + \f{w^2}{1+c})}.$$ ![The integrand of $\Gamma$ after symmetrization. Here $c = .2$ and ${q}= 100$ and $\phi = \tanh$.[]{data-label="fig:symmetrization_of_Gamma"}](graphics/symmetrized_integrand.pdf){height=".2\textheight"} Note that, by the antisymmetry of $\phi$, the integrand $K := \Upsilon(w, w';\tau)\phi(\ldots)\phi(\ldots)$ above has the symmetries $K(w, w') = K(w', w) = K(w, -w')$, and is everywhere nonnegative. \[fig:symmetrization\_of\_Gamma\] displays a contour plot of $K$ for typical values of ${q}$ and $c$. So $$\begin{aligned} \Gamma &= \f 1 {\pi \sqrt{1 - c^2}} \int_{w' \ge |w|} \dd w \dd w' K(w, w'). \end{aligned}$$ This gives the first equation in the lemma. **Polar Coordinates.** Let $\f w {\sqrt{1 - c}} = r \cos \theta, \f {w'} {\sqrt{1 + c}} = r \sin \theta$, so that $$\begin{aligned} w &= r \cos \theta \sqrt{1-c} = \sqrt 2 r \cos \theta \sin \f \tau 2\\ w' &= r \sin \theta \sqrt{1+c} = \sqrt 2 r \sin \theta \cos \f \tau 2\\ \dd w \dd w' &= \sqrt{1-c^2}r \dd r \dd \theta = (\sin^2 \tau) r \dd r \dd \theta. \end{aligned}$$ Then $$\begin{aligned} \mathbf A &:= \int_{w' \ge |w|} e^{-(\f{w^2}{1-c} + \f{(w')^2}{1 +c})/2}\phi(\sqrt{{q}/2}(w+w'))\phi(\sqrt{{q}/2}(w'-w)) \dd w \dd w'\\ &= \sin^2 \tau \int_{0}^\infty r\dd r e^{-r^2/2} \int_{\tau/2}^{\pi - \tau/2}\dd \theta \phi(\sqrt {q}r \sin(\theta + \tau/2))\phi(\sqrt {q}r \sin(\theta - \tau/2)). \end{aligned}$$ Similarly, let $\f w {\sqrt{1 + c}} = r \cos \theta, \f {w'} {\sqrt{1 - c}} = r \sin \theta$, so that $$\begin{aligned} w &= r \cos \theta \sqrt{1+c} = \sqrt 2 r \cos \theta \cos \f \tau 2\\ w' &= r \sin \theta \sqrt{1-c} = \sqrt 2 r \sin \theta \sin \f \tau 2\\ \dd w \dd w' &= \sqrt{1-c^2}r \dd r \dd \theta = (\sin^2 \tau) r \dd r \dd \theta, \end{aligned}$$ and $$\begin{aligned} \mathbf B &= \int_{w' \ge |w|} e^{-(\f{w^2}{1+c} + \f{(w')^2}{1 - c})/2}\phi(\sqrt{{q}/2}(w+w'))\phi(\sqrt{{q}/2}(w'-w)) \dd w \dd w'\\ &= -\sin^2 \tau \int_{0}^\infty r\dd r e^{-r^2/2} \int_{\pi/2 - \tau/2}^{\pi/2 + \tau/2}\dd \theta \phi(\sqrt {q}r \cos(\theta + \tau/2))\phi(\sqrt {q}r \cos(\theta - \tau/2))\\ &= -\sin^2 \tau \int_{0}^\infty r\dd r e^{-r^2/2} \int_{- \tau/2}^{\tau/2}\dd \theta \phi(\sqrt {q}r \sin(\theta + \tau/2))\phi(\sqrt {q}r \sin(\theta - \tau/2)). 
\end{aligned}$$ Thus $$\begin{aligned} \Gamma &= \f 1 {\pi \sqrt{1-c^2}} (\mathbf A - \mathbf B)\\ &= \f 1 \pi \int_{0}^\infty r\dd r e^{-r^2/2} \int_{- \tau/2}^{\pi - \tau/2}\dd \theta \phi(\sqrt {q}r \sin(\theta + \tau/2))\phi(\sqrt {q}r \sin(\theta - \tau/2))\\ &= \f 1 \pi \int_{0}^\infty r\dd r e^{-r^2/2} \int_{0}^{\pi}\dd \theta \phi(\sqrt {q}r \sin(\theta))\phi(\sqrt {q}r \sin(\theta - \tau)). \end{aligned}$$ This gives the second equation in the lemma, and a change of variables $s = \sqrt {q}r$ gives the third. For the fourth equality, we start from the third equality, and apply integration by parts: $$\begin{aligned} &\phantom{{}={}}\f 1 \pi \int_0^\infty s \dd s \inv {q}e^{-s^2 \inv {q}/2} \int_0^\pi \dd \theta \Sigma(s, \theta; \tau)\\ &= \f 1 \pi \int_0^\pi \dd \theta \int_0^\infty \dd s s \inv {q}e^{-s^2 \inv {q}/2} \Sigma(s, \theta; \tau)\\ &= \f 1 \pi \int_0^\pi \dd \theta \biggl(-e^{-s^2 \inv {q}/2} \Sigma(s, \theta; \tau)\biggr\rvert_{s=0}^\infty + \int_0^\infty \dd s e^{-s^2 \inv {q}/2} \pdf{}{s} \Sigma(s, \theta; \tau) \biggr)\\ &= \f 1 \pi \int_0^\pi \dd \theta \int_0^\infty \dd s e^{-s^2 \inv {q}/2} \pdf{}{s} \Sigma(s, \theta; \tau). \end{aligned}$$ where the last equality follows because $\Sigma(0, \theta; \tau) = 0$ and $e^{-s^2\inv {q}/ 2} \to 0$ as $s \to \infty$. In the following lemmas, the “2” is not important, and can be any arbitrary finite or infinite value. Suppose a function $f: (0, 2) \to \R$ is $C^k$ on $(0,2)$. If $\lim_{x \downarrow 0} \p f i (x)$ exists and is finite for every $i \in [0, k]$, then $f$ can be extended to $[0, 2)$ such that one sided $i$th derivatives exist at 0 for all $i \in [0, k]$. Consider $\overline{\p f i}(0) := \p f i (1) - \int_0^1 \p f {i+1}(x) \dd x$ for $i \in [0, k-1]$, which naturally is also equal to $\p f i ({\epsilon}) - \int_0^{\epsilon}\p f {i+1}(x)\dd x$ for any ${\epsilon}> 0$. Certainly $\p f i (x) \to \overline{\p f i}(0)$ as $x \to 0$ if this limit exists — and by assumption it does, for $0 \le i \le k -1$. Therefore, we can define the extension of $\p f i$ to $x = 0$ to be $\p f i (0) := \overline{\p f i}(0)$. But we need to check that for $i \in [0, k-1]$. $$\begin{aligned} \lim_{{\epsilon}\to 0} \f 1 {\epsilon}(\p f i({\epsilon}) - \p f i (0)) = \p f {i+1}(0) \end{aligned}$$ so that all one sided $i$th derivatives exist. But $$\begin{aligned} \f 1 {\epsilon}(\p f i({\epsilon}) - \p f i (0)) &= \f 1 {\epsilon}\int_0^{\epsilon}\p f {i+1}(x)\dd x\\ &=\p f {i+1}(0) + \int_0^1 (\p f {i+1}(x) - \p f i (0)) \I(x \in [0, {\epsilon}]) \dd x \end{aligned}$$ Since $\lim_{x \downarrow 0} \p f {i+1}(x) = \p f {i+1} (0)$, $\p f {i+1}(x) - \p f {i+1}(0)$ is bounded for small $x$, and by dominated convergence, $\int_0^1 (\p f {i+1}(x) - \p f i (0)) \I(x \in [0, {\epsilon}]) \dd x \to \int_0^1 0 \dd x = 0$ as ${\epsilon}\to 0$. Thus $$\begin{aligned} \lim_{{\epsilon}\to 0} \f 1 {\epsilon}(\p f i({\epsilon}) - \p f i (0)) &= \p f {i+1}(0) \end{aligned}$$ as desired. If $f: [0, 2) \to \R$ is $C^k$ on $(0, 2)$ and has one sided derivatives at 0 up to order $k$, then $$\begin{aligned} f({\epsilon}) &= f(0) + {\epsilon}\p f 1 (0) + \cdots + \f{{\epsilon}^{i-1}}{(i-1)!} \p f {i-1} (0) + O({\epsilon}^i) \end{aligned}$$ for any $i \le k$. 
We have $$\begin{aligned} f({\epsilon}) &= f(0) + \int_0^{\epsilon}\p f 1 (x) \dd x\\ &= f(0) + {\epsilon}\p f 1(0) + \int_0^{\epsilon}\p f 1(x) - \p f 1 (0) \dd x\\ &= f(0) + {\epsilon}\p f 1(0) + \int_0^{\epsilon}\int_0^{x_0} \p f 2 (x_2) \dd x_2 \dd x_1\\ &= f(0) + {\epsilon}\p f 1(0) + \f{{\epsilon}^2}{2} \p f 2(0) + \int_0^{\epsilon}\int_0^{x_1} \p f 2 (x_2) - \p f 2 (0) \dd x_2 \dd x_1\\ &\ \ \mathbin{\vdots}\ \\ f({\epsilon}) &= f(0) + {\epsilon}\p f 1 (0) + \cdots + \f{{\epsilon}^{i-1}}{(i-1)!} \p f {i-1} (0) + \int_0^{\epsilon}\dd x_1 \int_0^{x_1} \dd x_2 \cdots \int_0^{x_{i-1}} \dd x_{i} \p f {i}(x_{i}) \end{aligned}$$ for any $i\le k$. It suffices then to bound the size of the integral. Since $\p f i (x) \to \p f i (0)$ as $x \downarrow 0$ by assumption, $|\p f i (x_i)|$ is bounded by some constant $C$ on the integration region $\mathbb{A} := \{(x_1, \ldots, x_i): {\epsilon}\ge x_1 \ge \cdots \ge x_i\}$ for small enough ${\epsilon}$. Therefore, $$\begin{aligned} &\phantom{{}={}}\int_0^{\epsilon}\dd x_1 \int_0^{x_1} \dd x_2 \cdots \int_0^{x_{i-1}} \dd x_{i} \p f {i}(x_{i})\\ &= \int \p f {i}(x_i) \I(\vec x \in \mathbb A) \dd \vec x\\ &\le C |\mathbb A|\\ &= \Theta({\epsilon}^i). \end{aligned}$$ As a corollary, \[lemma:smoothExtensionBoundary\] If $f: (0, 2) \to \R$ is smooth on $(0, 2)$ and $\lim_{x \to 0}\p f i (x)$ exists and is finite for all $i$, then $f$ can be extended to $[0, 2)$ and be [*one-sided smooth*]{} at 0, and $$\begin{aligned} f({\epsilon}) &= f(0) + {\epsilon}\p f 1 (0) + \cdots + \f{{\epsilon}^{i-1}}{(i-1)!} \p f {i-1} (0) + O({\epsilon}^i) \end{aligned}$$ for any $i$. \[lemma:WtExtension\] Let $\phi = \tanh$. For any fixed $c$, ${\mathrm{W}}\phi( {q}, cq)$ is smooth (infinitely differentiable) on ${q}\in (0, \infty)$. As a function of $Q := \inv {q}$, it can be extended smoothly to the point $Q = 0$, so that $$\begin{aligned} {\mathrm{W}}\phi( {q}, cq ) &= \lim_{{q}' \to \infty} {\mathrm{W}}\phi( {q}', cq') + \inv {q}\lim_{{q}' \to \infty} \pd {\mathrm{W}}\phi( {q}', cq')/\pd ({q}')^{-1} + \cdots \\ &\qquad+ \f{{q}^{-i+1}}{(i-1)!} \lim_{{q}' \to \infty} \pd^{i-1} {\mathrm{W}}\phi( {q}', cq')/\pd ({q}')^{-i+1} + O({q}^{-i}) \end{aligned}$$ for any $i \ge 0$. Furthermore, for $c$ bounded away from $1$, the constants hidden $O$ can be taken independent of $c$. **Smoothness on $(0, \infty)$.** By the third equation of \[lemma:WtPhiAntisymmetric\], for $Q \in (0, \infty) \iff {q}\in (0, \infty)$, $$\begin{aligned} &\phantom{{}={}}\f 1 \pi \int_0^\infty s \dd s \left|\pdf{^n}{Q^n}\lp Q e^{-s^2 Q/2}\rp\right| \int_0^\pi \dd \theta |\phi(s \sin \theta)\phi(s \sin(\theta - \tau))|\\ &\le \int_0^\infty s \dd s \left|\pdf{^n}{Q^n}\lp Q e^{-s^2 Q/2}\rp\right| < \infty, \end{aligned}$$ so by Leibniz’s integral rule and a simple induction, all derivatives of ${\mathrm{W}}\phi( {q}, cq)$ against $Q$ exists for any $Q \in (0, \infty)$. **Extension to $Q = 0$.** By \[lemma:smoothExtensionBoundary\], it suffices to show that the limit of $ \pdf{^k{\mathrm{W}}\phi( {q}, cq)}{Q^k}$ exists and is finite as $Q \to 0$, for all $k$. Let $\tau = \arccos c$. By the fourth equation of \[lemma:WtPhiAntisymmetric\], we have explicitly $$\begin{aligned} \pdf{^k{\mathrm{W}}\phi( {q}, cq)}{Q^k} &= \f 1 \pi \int_0^\pi \dd \theta \int_0^\infty \dd s (\nicefrac{-s^2}{2})^k e^{-s^2 Q /2} \pdf{}{s} \Sigma(s, \theta; \tau)\\ &= \f {(-2)^{-k}} \pi \int_0^\pi \dd \theta \int_0^\infty \dd s\ s^{2k} e^{-s^2 Q /2} \pdf{}{s} \Sigma(s, \theta; \tau) \end{aligned}$$ for any $Q \in (0, \infty)$. 
Note that for $\phi = \tanh$, $\dot \phi = {\operatorname{sech}}^2$, $$\begin{aligned} \pdf{}{s} \Sigma(s, \theta; \tau) &= \sin \theta \dot \phi(s \sin \theta)\phi(s \sin (\theta - \tau)) + \sin(\theta-\tau)\phi(s \sin \theta)\dot \phi(s \sin (\theta - \tau)).\\ \end{aligned}$$ We split the integral of $\pdf{^k {\mathrm{W}}\phi}{Q^k}$ as follows: $$\begin{aligned} \pdf{^k{\mathrm{W}}\phi( {q}, cq)}{Q^k} &= \f {(-2)^{-k}} \pi \int_0^\pi \dd \theta \int_0^\infty \dd s\ s^{2k} e^{-s^2 Q /2} \sin \theta \dot \phi(s \sin \theta)\phi(s \sin (\theta - \tau))\\ &\qquad + \f {(-2)^{-k}} \pi \int_0^\pi \dd \theta \int_0^\infty \dd s\ s^{2k} e^{-s^2 Q /2} \sin(\theta-\tau)\phi(s \sin \theta)\dot \phi(s \sin (\theta - \tau)) \end{aligned}$$ We show that for each piece, the limit as $Q \to 0$ exists and is finite, for any $k$. This will prove the smooth extendability of ${\mathrm{W}}\phi$ to $Q = 0$. We will do this for the first piece; the second is similar. For $Q > 0$, the integrand is absolutely integrable, so we may switch the integrals. $$\begin{aligned} &\phantom{{}={}} \int_0^\pi \dd \theta \int_0^\infty \dd s\ s^{2k} e^{-s^2 Q /2} \sin \theta \dot \phi(s \sin \theta)\phi(s \sin (\theta - \tau))\\ &= \int_0^\infty \dd s s^{2k} e^{-s^2 Q/2}\int_0^\pi \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau)) \end{aligned}$$ We now try to bound the inner integral by an exponentially decreasing term $e^{-s \mu}$ for some $\mu$; clearly, by monotone convergence on the outer integral as $Q \to 0$, this would show the limit of the integral exists and is finite. Because $\phi$ is odd and $\dot \phi$ is even, the inner integrand is negative on $\theta \in [0, \tau)$ and positive on $\theta \in (\tau, \pi]$. We will break up the inner integral as follows, for some fixed ${\epsilon}> 0$ satisfying $\tau - {\epsilon}> 0$ independent of $s$ (recall $\tau \in (0, \pi/2]$). $$\begin{aligned} &\phantom{{}={}} \int_0^\pi \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau))\\ &= \left(\int_0^{\epsilon}+ \int_{\pi - {\epsilon}}^\pi \right) \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau)) + \int_{\epsilon}^{\pi - {\epsilon}} \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau)) \end{aligned}$$ Now because $\dot \phi(z) = {\operatorname{sech}}^2(z) \le 2 e^{-z}$, and $\sin \theta \ge \sin {\epsilon}$ on $\theta \in [{\epsilon}, \pi - {\epsilon}]$, $$\begin{aligned} &\phantom{{}={}}\left|\int_{\epsilon}^{\pi - {\epsilon}} \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau))\right|\\ &\le 2\int_{\epsilon}^{\pi - {\epsilon}} \dd \theta \exp(-s \sin{\epsilon}) \\ &= 2(\pi - 2{\epsilon}) \exp(-s \sin{\epsilon}). 
\end{aligned}$$ For the other part: $$\begin{aligned} &\phantom{{}={}}\int_{\pi - {\epsilon}}^\pi \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau))\\ &= \int_{\epsilon}^{0} \sin(\pi - \theta) \dot \phi(s \sin \pi - \theta) \phi(s \sin (\pi - \theta - \tau)) \dd(\pi - \theta)\\ &= \int_0^{\epsilon}\dd \theta \sin\theta \dot \phi(s \sin \theta) \phi(s \sin \theta + \tau) \end{aligned}$$ so that $$\begin{aligned} &\phantom{{}={}}\left(\int_0^{\epsilon}+ \int_{\pi - {\epsilon}}^\pi \right) \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau))\\ &= \int_0^{\epsilon}\dd \theta \sin \theta \dot \phi(s \sin\theta) [\phi(s \sin (\tau + \theta)) - \phi(s \sin(\tau - \theta))] \end{aligned}$$ But by intermediate value theorem, $\phi(s \sin (\tau + \theta)) - \phi(s \sin(\tau - \theta)) = 2 \theta \pd \phi(s \sin(\tau + \theta))/\pd \theta |_{\theta = \psi} = 2 \theta \dot \phi(s \sin (\tau + \psi))s \cos(\tau + \psi)$ for some $\psi \in [- \theta, \theta]$. By the assumption on ${\epsilon}$, $\phi(s \sin (\tau + \theta)) - \phi(s \sin(\tau - \theta)) \le 2{\epsilon}\dot \phi(s \sin(\tau - {\epsilon}))s \cos(\tau - {\epsilon}).$ Then $$\begin{aligned} &\phantom{{}={}} \int_0^{\epsilon}\dd \theta \sin \theta \dot \phi(s \sin\theta) [\phi(s \sin (\tau + \theta)) - \phi(s \sin(\tau - \theta))]\\ &\le \int_0^{\epsilon}\dd \theta \sin \theta \dot \phi(s \sin\theta) 2 {\epsilon}\dot \phi(s \sin(\tau - {\epsilon}))s \cos(\tau - {\epsilon})\\ &\le 2 {\epsilon}\dot \phi(s \sin(\tau - {\epsilon}))s \cos(\tau - {\epsilon}) O(1) \end{aligned}$$ Because $\tau -{\epsilon}> 0$ by assumption on ${\epsilon}$, and because $\dot \phi(z) = \exp(-\Theta_+(z))$, this quantity is $\exp(-\Theta_+(z))$, as desired (here $\Theta_+$ denotes a positive quantity). Thus $$\begin{aligned} &\phantom{{}={}} \int_0^\pi \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau))\\ &= \left(\int_0^{\epsilon}+ \int_{\pi - {\epsilon}}^\pi \right) \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau)) + \int_{\epsilon}^{\pi - {\epsilon}} \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau))\\ &= \exp(-\Theta_+(s)) \end{aligned}$$ and similarly for the other piece of $\pdf{^k {\mathrm{W}}\phi}{Q^k}$, so that $$\begin{aligned} &\phantom{{}={}} \int_0^\infty \dd s s^{2k} e^{-s^2 Q/2}\int_0^\pi \dd \theta \sin \theta \dot \phi(s \sin\theta) \phi(s \sin (\theta - \tau))\\ &= \int_0^\infty \dd s s^{2k} e^{-s^2\f Q 2 - \Theta_+(z)}\\ &\to \int_0^\infty \dd s s^{2k} e^{-\Theta_+(z)} \end{aligned}$$ is finite as $Q \to 0$, by monotone convergence. **Independence of constant hidden in $O(({q}')^{-i})$.** The constant hidden is a function of the ${\epsilon}$ chosen above, which depend on $\tau$, but only to the extent that it must satisfy $\tau - {\epsilon}> 0$. As long as we are interested in a set $\mathcal C$ of $c$ that is bounded away from 1, the corresponding set of $\tau$ is bounded away from $0$, so ${\epsilon}$ can be taken to be some number smaller than all of the corresponding $\tau$. \[lemma:Wt\_le\_arcsin\] Suppose $\phi$ is tanh-like. Then for $c \in [0, 1]$, $${\mathrm{W}}\phi( {q}, cq) \le \f 2 \pi \arcsin(c),$$ and weakly increases to this upper bound as ${q}\to \infty$. Furthermore, - If $c = 0$ or 1, then equality holds regardless of ${q}$. - If $c \in (0, 1)$ is held constant, $\f 2 \pi \arcsin(c) - {\mathrm{W}}\phi( {q}, cq) = \Theta(\inv {q})$, where the hidden constants in $\Theta$ depend on $c$. 
But the constants can be made independent of $c$ if $c \in [{\epsilon}, 1-{\epsilon}]$ for some ${\epsilon}> 0$. The cases of $c = 0$ or 1 are obvious by the definition of ${\mathrm{W}}$. So from here on we assume $c \in (0, 1)$. Let $\tau := \arccos c$. By the first equation of \[lemma:WtPhiAntisymmetric\] and the assumption that $\phi$ is tanh-like, it is immediate that ${\mathrm{W}}\phi({q},cq)$ is nondecreasing in ${q}$. By dominated convergence, using the second equation of \[lemma:WtPhiAntisymmetric\], we get $$\begin{aligned} \lim_{{q}\to \infty} {\mathrm{W}}\phi({q},cq) &= \f 1 \pi \int_{0}^\infty r\dd r e^{-r^2/2} (\pi - 2 \tau)\\ &= \f {\pi - 2 \tau}{\pi}\\ &= \f 2 \pi \arcsin c. \end{aligned}$$ Then the convergence rate is $O(\inv {q})$ by \[lemma:WtExtension\] and Taylor’s theorem. Thus to show the convergence rate is $\Theta(\inv {q})$, it suffices to show that $\mathbf D := \pdf{{\mathrm{W}}\phi( {q}, cq)}{Q} < 0$. But this is apparent from the first equation of \[lemma:WtPhiAntisymmetric\]: For $\tau \in (0, \pi/2)$, $$\begin{aligned} \mathbf D &= \f 1 {\pi \sin \tau} \int_{w' \ge |w|} \dd w \dd w' \Upsilon(w, w';\tau)(-\f 1 {2\sqrt 2} Q^{-3/2})\\ &\qquad\qquad\qquad\quad \times[\dot\phi(\sqrt{{q}/2}(w + w'))\phi(\sqrt{{q}/2}(w'-w))\\ &\qquad\qquad\qquad\qquad + \phi(\sqrt{{q}/2}(w + w'))\dot\phi(\sqrt{{q}/2}(w'-w))]\\ & < 0 \end{aligned}$$ since $\Upsilon$ is positive on the integration domain, and $\dot \phi$ and $\phi$ are both positive for positive arguments, by the assumption of $\phi$ being tanh-like. **Independence of the constants in $\Theta(\inv {q})$ from $c$ when $c \in [{\epsilon}, 1-{\epsilon}]$.** By \[lemma:WtExtension\], the upper constant can be made independent from $c$. Since $\mathbf D$ is monotonically decreasing in $c$ (or monotonically increasing in $\tau$) and $|\mathbf D|$ is monotonically increasing in $c$ (or monotonically decreasing in $\tau$), we have $|\mathbf D| > |\mathbf D|\bigg]_{c = {\epsilon}}$, which can be taken to be the lower constant in $\Theta(\inv {q})$. \[fig:verifywtanhasymptotics\] verifies empirically that the subleading term in ${\mathrm{W}}\tanh( q, cq)$ is linear in $\inv q$, for constant $c$. ![We verify empirically that the subleading term in ${\mathrm{W}}\tanh( q, cq)$ is linear in $\inv q$, for constant $c$. Indeed, observe that the curve of of ${\mathrm{W}}\tanh$ intersects the y-axis at an angle.[]{data-label="fig:verifywtanhasymptotics"}](graphics/verify_Wtanh_asymptotics){height=".15\textheight"} We have by \[lemma:Wt\_le\_arcsin\], $$\begin{aligned} {\boldsymbol{\oldgamma}}&= \f 2 \pi \arcsin({\boldsymbol{\oldlambda}}/{q}) - \Theta({q}^{-1}) + {\underline}{\boldsymbol{\oldgamma}}.\end{aligned}$$ Since ${q}= \sigma_w^2 {\underline}{\mathbf{p}}+ \sigma_b^2$ by \[thm:p\_q\_linear\], and ${\boldsymbol{\oldlambda}}= \sigma_w^2 {\underline}{\boldsymbol{\oldgamma}}+ \sigma_b^2$ by \[thm:lambda\_gamma\_recurrence\], $$\begin{aligned} {\boldsymbol{\oldgamma}}&= \f 2 \pi \arcsin\lp\f{\sigma_w^2 {\underline}{\boldsymbol{\oldgamma}}+ \sigma_b^2}{\sigma_w^2 {\underline}{\mathbf{p}}+ \sigma_b^2}\rp - \Theta({q}^{-1}) + {\underline}{\boldsymbol{\oldgamma}}.\end{aligned}$$ We claim that $\p {\boldsymbol{\oldgamma}}l \to \infty$ as $l \to \infty$. Otherwise, there is some $C$ such that $\p {\boldsymbol{\oldgamma}}l \le C$ for all $l$. For large enough $l$, $\p {\mathbf{p}}l \ge \omega l$ for any $\omega < 1$ and $\arcsin \lp \f{C}{\sigma_w^2 \p {\mathbf{p}}{l-1} + \sigma_b^2} \rp = \Theta(1/l)$ by linearization of $\arcsin$. 
Thus $\p {\boldsymbol{\oldgamma}}l = \Theta(\log l)$, but this contradicts our assumption that ${\boldsymbol{\oldgamma}}$ is bounded. This proves our claim. Therefore, for large enough $l$, $$\f{\sigma_w^2 {\underline}{\boldsymbol{\oldgamma}}+ \sigma_b^2}{\sigma_w^2 {\underline}{\mathbf{p}}+ \sigma_b^2} = {\underline}{\boldsymbol{\oldgamma}}/{\underline}{\mathbf{p}}+ \Theta(l^{-1}).$$ \[fig:arcsin-vs-x\] shows $\f2\pi\arcsin x$ vs $x$. One sees that 1 is an unstable fixed point; if ${\mathbf{e}}< 1 - \epsilon$, then $\f 2 \pi \arcsin {\mathbf{e}}< 1 - \epsilon - \delta$ for some $\delta$. Thus ${c}$ drops monotonically until some threshold under which the linearization of $\arcsin$, $\arcsin x = x + \Theta(x^3)$, is applicable. So for large enough $l$, $$\begin{aligned} {\boldsymbol{\oldgamma}}- {\underline}{\boldsymbol{\oldgamma}}&= \f 2 \pi \arcsin ({\underline}{\boldsymbol{\oldgamma}}/ {\underline}{\mathbf{p}}+ \Theta(\inv l)) - \Theta(\inv l)\\ &= \f 2 \pi {\underline}{\boldsymbol{\oldgamma}}/ {\underline}{\mathbf{p}}+ O(\inv l)\end{aligned}$$ As $\p {\mathbf{p}}l \sim l$ by \[thm:p\_q\_linear\], this difference equation has solution ${\boldsymbol{\oldgamma}}= \Omega(l^{\f 2 \pi -{\epsilon}}), O(l^{\f 2 \pi + {\epsilon}})$ for any ${\epsilon}$ by using the dynamics of \[lemma:alphaDeltaDynamics\] to upper and lower bound this difference equation. ### Backward Dynamics The $\sigma_w = 0$ case is obvious. We will assume $\sigma_w > 0$ from here on. Let $\p {\mathbf{p}}l = b_0 l + b_1 l^{1/2} + b_2 \log l + O(1)$. Then for $D = \f 2 3 \sqrt{\f 2 \pi}$, we have (implicitly applying \[lemma:Vt\_dot\_tanh\_asymptotics\] and \[lemma:power\_sum\_asymptotics\]), $$\begin{aligned} {q}^{-1/2} &= \sigma_w^{-1} b_0^{-1/2} l^{-1/2}(1 - b_1 b_0^{-1} \inv 2 l^{-1/2} - b_2 b_0^{-1} 2^{-1} l^{-1} \log l + O(l^{-1}))\\ {\mathrm{V}}\dot \phi( {q}) & = D {q}^{-1/2} + \Theta({q}^{-3/2})\\ &= D\sigma_w^{-1} b_0^{-1/2} l^{-1/2}(1 - b_1 b_0^{-1} \inv 2 l^{-1/2} - b_2 b_0^{-1} 2^{-1} l^{-1} \log l + O(l^{-1}))\\ \log(B {\mathrm{V}}\dot \phi( {q}) + 1) &= B D \sigma_w^{-1} b_0^{-1/2} l^{-1/2}\\ &\phantom{={}}- (BD \sigma_w^{-1} b_0^{-3/2} b_1 2^{-1} +B^2 D^2 \sigma_w^{-2} b_0^{-1}2^{-1})l^{-1} + \Theta(l^{-3/2}\log l)\\ \sum_{r=1}^l \log(B {\mathrm{V}}\dot \phi( \p {q}r) + 1) &= 2B D \sigma_w^{-1} b_0^{-1/2} l^{1/2} \\ &\phantom{={}} -(BD \sigma_w^{-1} b_0^{-3/2} b_1 2^{-1} +B^2 D^2 \sigma_w^{-2} b_0^{-1}2^{-1})\log l + O(1)\\ \end{aligned}$$ In our case, we have $b_0 = 1, b_1 = -2C \sigma_w^{-1}, b_2 = C^2 \sigma_w^{-2}, B = \sigma_w^2, C = \sqrt{\f 2 \pi}$, which gives $$\sum_{r=1}^l \log(B {\mathrm{V}}\dot \phi( \p {q}r) + 1) = \f 4 3 \sqrt{\f 2 \pi} \sigma_w l^{1/2} + (\f 4 {3\pi} - \sigma_w^2 \f 4 {9\pi}) \log l + O(1).$$ so that $$\begin{aligned} \p {{\boldsymbol{\oldchi}}}{m}/\p {{\boldsymbol{\oldchi}}}l &= \exp\left[\f 4 3 \sqrt{\f 2 \pi} \sigma_w (\sqrt l - \sqrt m) + (\f 4 {3\pi} - \sigma_w^2 \f 4 {9\pi}) (\log l - \log m) + O(1)\right]\\ \end{aligned}$$ The $\sigma_w = 0$ case is obvious. We will assume $\sigma_w > 0$ from here on. As in the proof of \[thm:dalethExpSqrtTanh\], $${\mathrm{V}}\dot \phi( {q}) = D\sigma_w^{-1} b_0^{-1/2} l^{-1/2} + \Theta(\inv l)$$ where $D = \f 2 3 \sqrt{\f 2 \pi}$. 
Thus by \[thm:dalethRecReduced\], $$\log(\p {{\boldsymbol{\oldchi}}}{m}/\p {{\boldsymbol{\oldchi}}}l) = \f 4 3 \sqrt{\f 2 \pi} \sigma_w (\sqrt l - \sqrt m) + (\f 4 {3\pi} - \sigma_w^2 \f 4 {9\pi}) (\log l - \log m) + O(1)$$ $$\log(\p {\boldsymbol{\oldchi}}m_b/\p {\boldsymbol{\oldchi}}l _b) =\f 4 3 \sqrt{\f 2 \pi} \sigma_w (\sqrt l - \sqrt m) + (\f 4 {3\pi} - \f 1 2 - \sigma_w^2 \f 4 {9\pi}) (\log l - \log m) + O(1)$$ Similarly, since ${\mathbf{p}}= l + \Theta(\sqrt l)$ by \[thm:p\_q\_linear\], we have $$\log(\p {\boldsymbol{\oldchi}}m_w/\p {\boldsymbol{\oldchi}}l _w) = \f 4 3 \sqrt{\f 2 \pi} \sigma_w (\sqrt l - \sqrt m) + (\f 4 {3\pi} + \f 1 2 - \sigma_w^2 \f 4 {9\pi}) (\log l - \log m) + O(1)$$ Tanh: Full Residual Network --------------------------- ### Forward Dynamics The $\sigma_w = 0$ case is obvious. We will assume $\sigma_w > 0$ from here on. As in \[thm:p\_q\_linear\], ${\mathbf{p}}$ will have expansion ${\mathbf{p}}= b_0 l + b_1 l^{1/2} + b_2 \log l + O(1).$ Then, for $C = \sqrt{\f 2 \pi}$, $$\begin{aligned} {q}^{-1/2} &= \sigma_w^{-1} b_0^{-1/2} l^{-1/2}(1 - b_1 b_0^{-1} \inv 2 l^{-1/2} - b_2 b_0^{-1} 2^{-1} l^{-1} \log l + O(l^{-1}))\\ \sum_{r=1}^l {\mathrm{V}}\phi( \p {q}r) &= \sum_{r=1}^l 1 - C(\p {q}r)^{-1/2} + \Theta((\p {q}r)^{-3/2})\\ &= l - 2 C \sigma_w^{-1} b_0^{-1/2} l^{1/2} + C \sigma_w^{-1} b_1 b_0^{-3/2} 2^{-1} \log l + O(1)\\ \p {\mathbf{p}}l &= \sigma_v^2 \sum_{r=1}^l {\mathrm{V}}\phi( \p {q}r) + \sigma_a^2 l\\ &= (\sigma_v^2 + \sigma_a^2) l - 2C \sigma_v^2 \sigma_w^{-1} b_0^{-1/2} l^{1/2} + C \sigma_v^2 \sigma_w^{-1} b_1 b_0^{-3/2} 2^{-1} \log l + O(1) \end{aligned}$$ which yields $$\begin{aligned} b_0 &= \sigma_v^2 + \sigma_a^2\\ b_1 &= -2C \sigma_v^2 \sigma_w^{-1} b_0^{-1/2} = \f{-2C \sigma_v^2 \sigma_w^{-1}}{\sqrt{\sigma_v^2 + \sigma_a^2}}\\ b_2 &= \f{-C^2 \sigma_v^4 \sigma_w^{-2}}{(\sigma_v^2 + \sigma_a^2)^2} \end{aligned}$$ Suppose $\phi$ is tanh-like. Then $${\boldsymbol{\oldgamma}}\le \sigma_v^2 \f 2 \pi \arcsin\lp { {\boldsymbol{\oldlambda}}}/{{q}}\rp + \sigma_a^2 + {\underline}{\boldsymbol{\oldgamma}},$$ and $$\sigma_v^2 \f 2 \pi \arcsin\lp { {\boldsymbol{\oldlambda}}}/{{q}}\rp + \sigma_a^2 + {\underline}{\boldsymbol{\oldgamma}}- {\boldsymbol{\oldgamma}}= \Theta({q}^{-1}).$$ Similar to the proof of \[lemma:Wt\_le\_arcsin\]. \[lemma:timeDependentConvergence\] Let $u^* \in [0, 1)$. Let $f_t: [0, 1) \to [0, 1]$ be a continuous function for each $t \in \N$, to each of which we associate two numbers $0 \le a_t \le u^* \le b_t \le 1$. Suppose for each $t$, $f_t(u) > u$ for all $u \in [0, a_t)$ and $f_t(u) < u$ for all $u \in (b_t, 1)$. Assume that $f_t(u) - u \to 0$ as $t \to \infty$, uniformly over $u$. If $a_t \nearrow u^*$ and $b_t \searrow u^*$, then for any $u_0 \in [0, 1)$, the dynamics $u_t = f_t(u_{t-1})$ converges to a limit point. Furthermore, either $u_t \to u^*$ or $u_t$ eventually converges monotonically (decreasing or increasing) to a limit point. Fix a $u_0 \in [0, 1)$. If $u_t \to u^*$ then we are done. Otherwise, suppose there is a neighborhood $[u^* - {\epsilon}, u^* + {\epsilon}]$ such that for an infinite sequence $t_1, t_2, \ldots$, $u_{t_i} \not \in [u^* - {\epsilon}, u^* + {\epsilon}]$. WLOG assume $u_{t_i} < u^* - {\epsilon}$ for all $i$ and $(t_i)_i$ is the sequence of all $t$s that satisfy this inequality. If $(t_i)_i$ contains $\{s: s \ge N\}$ for some $N$, then for some $M > N$, for every $t > M$, $a_t > u^* - {\epsilon}> u_t$. Since $u_t < u^* - {\epsilon}< a_{t+1}$ for all $t > M$, we have $u_{t+1} = f_{t+1}(u_t) > u_t$, so $u_t$ is increasing for all $t > M$ but is bounded above by $u^* - {\epsilon}$.
Thus $u_t$ has a fixed point $\hat u \le u^* - {\epsilon}$ as desired. Now assume there are infinite $i$s such that $t_i - 1 \not= t_{i-1}$ (i.e. $t_i - 1$ is not part of the sequence $(t_i)_i$). We will show that this case is contradictory. Take $T$ large enough such that $a_t > u^* - {\epsilon}/2$ and $|f_t(u) - u| < {\epsilon}/4$ for all $u$ and for all $t \ge T$ ($T$ exists by premise). Let $j$ be the smallest index such that $t_j > T$ and $t_j - 1 \not = t_{j-1}$. By the definition of $j$, $u_{t_j - 1} \ge u^* - {\epsilon}$. If $u_{t_j - 1} \ge u^* - {\epsilon}/2$, then by definition of $T$, $u^* - {\epsilon}> u_{t_j} = f_{t_j}(u_{t_j - 1}) > u_{t_j-1} - {\epsilon}/4 > u^* - 3{\epsilon}/4 > u^* - {\epsilon}$, a contradiction. If $u^* - {\epsilon}\le u_{t_j - 1} \le u^* - {\epsilon}/2$, then by the definition of $T$, $u_{t_j-1} \le a_{t_j-1}$ so that $u_{t_j } = f_{t_j}(u_{t_j - 1}) > u_{t_j - 1} \ge u^* - {\epsilon},$ a contradiction. The “furthermore” claim is clear from our proof above. The $\sigma_w = 0$ case is obvious. We will assume $\sigma_w > 0$ from here on. If $\sigma_a = 0$, then ${\mathbf{e}}^*$ as defined above is 0, and ${\mathbf{e}}= \f {\boldsymbol{\oldgamma}}{\mathbf{p}}$ decreases as $\Theta(l^{\f 2 \pi - 1})$ to 0, by the same reason as before. So from now on suppose $\sigma_a > 0$. We apply \[lemma:timeDependentConvergence\] first to show that ${\mathbf{e}}$ converges. We have $$\begin{aligned} \sigma_v^2 {\mathrm{W}}\phi( {q}, cq) + \sigma_a^2 &= {\mathbf{e}}{\mathbf{p}}- {\underline}{\mathbf{e}}{\underline}{\mathbf{p}}\\ &= {\mathbf{e}}{\mathbf{p}}- {\underline}{\mathbf{e}}{\mathbf{p}}+ {\underline}{\mathbf{e}}{\mathbf{p}}- {\underline}{\mathbf{e}}{\underline}{\mathbf{p}}\\ &= ({\mathbf{e}}- {\underline}{\mathbf{e}}) {\mathbf{p}}+ {\underline}{\mathbf{e}}({\mathbf{p}}- {\underline}{\mathbf{p}})\\ &= ({\mathbf{p}}- {\underline}{\mathbf{p}})[({\mathbf{e}}- {\underline}{\mathbf{e}})\f {\mathbf{p}}{{\mathbf{p}}- {\underline}{\mathbf{p}}} + {\underline}{\mathbf{e}}]\\ \f{\sigma_v^2 {\mathrm{W}}\phi( {q}, cq) + \sigma_a^2}{\sigma_v^2 {\mathrm{V}}\phi( {q}) + \sigma_a^2} &= ({\mathbf{e}}- {\underline}{\mathbf{e}}) \f {\mathbf{p}}{{\mathbf{p}}- {\underline}{\mathbf{p}}} + {\underline}{\mathbf{e}}\\ \f {{\mathbf{p}}- {\underline}{\mathbf{p}}}{\mathbf{p}}\left[\f{\sigma_v^2 {\mathrm{W}}\phi + \sigma_a^2}{\sigma_v^2 {\mathrm{V}}\phi + \sigma_a^2} - {\underline}{\mathbf{e}}\right] &= {\mathbf{e}}- {\underline}{\mathbf{e}}\end{aligned}$$ If we define $f_l(u) := \f{\p {\mathbf{p}}l - \p {\mathbf{p}}{l-1}}{\p {\mathbf{p}}l}\left[\f{\sigma_v^2 {\mathrm{W}}\phi( \p {q}l, \p {c}l \p {q}l) + \sigma_a^2}{\sigma_v^2 {\mathrm{V}}\phi( \p {q}l) + \sigma_a^2} - u\right] + u$ (the LHS of the above), then $f_l(u) - u = O(l^{-1})$ uniformly for all $u$ because $\p {\mathbf{p}}l = \Theta(l)$, $\p {\mathbf{p}}l - \p {\mathbf{p}}{l-1} = \Theta(1)$, and the part in the bracket is $O(1)$, with constants all (able to be taken) independent of $u$. We divide $[0, 1)$ into the following intervals $I_1 = [1, 1/2), I_2 = [1/2, 3/4), I_3 = [3/4, 7/8), \ldots$. For each $I_k$, it is clear that the trajectories of $\p {\mathbf{e}}l = f_l(\p {\mathbf{e}}{l-1})$ with $\p {\mathbf{e}}0 \in I_k$ will fall into some interval $J_k$ bounded away from 1 for all $l \ge L$, for large enough $L$ (dependent on $k$). 
Then we can apply \[lemma:cExpansion,lemma:vtanhSqrtConvergence,lemma:Wt\_le\_arcsin\] to get $f_l(u) = \f{\p {\mathbf{p}}l - \p {\mathbf{p}}{l-1}}{\p {\mathbf{p}}l}\left[\f{\sigma_v^2\f 2 \pi \arcsin(u) + \sigma_a^2}{\sigma_v^2 + \sigma_a^2} - u + o(1)\right] + u$ where the constants in $o(1)$ is uniform for all $\p {\mathbf{e}}0 \in I_k$. For $u < {\mathbf{e}}^*$ (as defined in the theorem statement), $\f{\sigma_v^2\f 2 \pi \arcsin(u) + \sigma_a^2}{\sigma_v^2 + \sigma_a^2} > u$ and for $u > {\mathbf{e}}^*$, $\f{\sigma_v^2\f 2 \pi \arcsin(u) + \sigma_a^2}{\sigma_v^2 + \sigma_a^2} < u$ (see \[fig:arcsin-vs-x\]). Thus as $l \to \infty$, the $o(1)$ term gets smaller and smaller, and this monotonicity holds for $f_l(u) - u = \left[\f{\sigma_v^2\f 2 \pi \arcsin(u) + \sigma_a^2}{\sigma_v^2 + \sigma_a^2} - u + o(1)\right] > 0$ (resp. $<0$) on larger and larger intervals $[0, a_l] \cap J_k$ (resp. $[b_l, 1) \cap J_k$). This proves all the preconditions for \[lemma:timeDependentConvergence\], which yields that $I_k$ converges to a limit point. As this argument is independent of $k$, we have that for all $\p {\mathbf{e}}0 \in [0, 1)$, $\p {\mathbf{e}}l$ converges. Now we solve for the limit point. Suppose ${\mathbf{e}}$ has limit point ${\mathbf{e}}^\dagger$ (possibly different from ${\mathbf{e}}^*$ described in the theorem); if we express $\p {\boldsymbol{\oldgamma}}l = ({\mathbf{e}}^\dagger + \p {\epsilon}l ) \p {\mathbf{p}}l$, then $$\begin{aligned} \sigma_v^2 {\mathrm{W}}\phi( {q}, cq) + \sigma_a^2 &= {\boldsymbol{\oldgamma}}- {\underline}{\boldsymbol{\oldgamma}}\\ &= ({\mathbf{e}}^\dagger + {\epsilon}){\mathbf{p}}- ({\mathbf{e}}^\dagger + {\underline}{\epsilon}) {\underline}{\mathbf{p}}\\ &= {\mathbf{e}}^\dagger({\mathbf{p}}- {\underline}{\mathbf{p}}) + {\epsilon}{\mathbf{p}}- {\underline}{\epsilon}{\underline}{\mathbf{p}}\\ \f{\sigma_v^2 {\mathrm{W}}\phi( {q}, cq) + \sigma_a^2}{\sigma_v^2 {\mathrm{V}}\phi( {q}) + \sigma_a^2} &= {\mathbf{e}}^\dagger + {\epsilon}+ ({\epsilon}- {\underline}{\epsilon})\f{{\underline}{\mathbf{p}}}{{\mathbf{p}}- {\underline}{\mathbf{p}}}\end{aligned}$$ As $l \to \infty$, ${c}\sim {\mathbf{e}}\to {\mathbf{e}}^\dagger$, and ${\mathrm{W}}\phi( {q}, {\mathbf{e}}^\dagger {q}) \to \f 2 \pi \arcsin({\mathbf{e}}^\dagger)$, and ${\mathrm{V}}\phi( {q}) \to 1$. Additionally, ${\underline}{\mathbf{p}}/({\mathbf{p}}- {\underline}{\mathbf{p}}) = \Theta(l)$ and ${\epsilon}= o(1)$ so that ${\epsilon}- {\underline}{\epsilon}= o(l^{-1})$. Then we have, taking limits $l \to \infty$, $$\begin{aligned} \f{\sigma_v^2 \f 2 \pi \arcsin({\mathbf{e}}^\dagger)+\sigma_a^2}{\sigma_v^2 + \sigma_a^2} &= {\mathbf{e}}^\dagger.\end{aligned}$$ Since $f_l$ (as defined above) repels points away from 1, the only solution for ${\mathbf{e}}^\dagger$ when $\p {\mathbf{e}}0 < 1$ is ${\mathbf{e}}^\dagger = {\mathbf{e}}^*$ as specified in the theorem statement. We defer the proof of the convergence rate to ${\mathbf{e}}^*$ to \[thm:TanhFullResConvergenceRate\]. ![Graph of $y({\mathbf{e}}) = \f 1 {\sigma_v^2 + \sigma_a^2}[\sigma_v^2\f 2 \pi \arcsin({\mathbf{e}}) + \sigma_a^2]$ for various $\sigma_v$ and $\sigma_a$.[]{data-label="fig:arcsin-vs-x"}](graphics/arcsin_vs_x.pdf){height=".15\textheight"} \[lemma:fixed\_point\_ineq\] Let ${\mathbf{e}}^*$ be the stable fixed point determined by $\sigma_a$ and $\sigma_v$. 
Then as long as $\sigma_v > 0$, $$\begin{aligned} \f 2 \pi \f 1 {\sqrt{1 - ({\mathbf{e}}^*)^2}} \f{\sigma_v^2 }{\sigma_v^2 + \sigma_a^2} \in (\f 1 2, \f 2 \pi]\end{aligned}$$ Write $\rho := \f{\sigma_a^2}{\sigma_v^2}$. By definition of ${{{\mathbf{e}}^*}}$, we get $$\begin{aligned} {{{\mathbf{e}}^*}}&= \f{\f 2 \pi \arcsin {{{\mathbf{e}}^*}}+ \rho}{1 + \rho}\\ \rho &= \f{{{{\mathbf{e}}^*}}- \f 2 \pi \arcsin {{{\mathbf{e}}^*}}}{1 - {{{\mathbf{e}}^*}}} \end{aligned}$$ Substituting $\rho$ into the expression in question, it follows that we want to show $$\begin{aligned} \f 2 \pi (1 - {{{\mathbf{e}}^*}}^2)^{-1/2} (1 + \rho)^{-1}= \f 2 \pi (1 - {{{\mathbf{e}}^*}}^2)^{-1/2} \left(\f{1 - \f 2 \pi \arcsin {{{\mathbf{e}}^*}}}{1 - {{{\mathbf{e}}^*}}}\right)^{-1}\in (\f 1 2, \f 2 \pi] \end{aligned}$$ for ${{{\mathbf{e}}^*}}\in [0, 1)$ (the endpoint at 1 is not included since $\sigma_v > 0$). But this is $$\begin{aligned} &\phantom{={}} \f 2 \pi (1 - {{{\mathbf{e}}^*}})^{1/2} (1 + {{{\mathbf{e}}^*}})^{-1/2} (1 - \f 2 \pi \arcsin {{{\mathbf{e}}^*}})^{-1}. \end{aligned}$$ Set $g({{{\mathbf{e}}^*}})$ to be this expression. We could proceed by finding critical points, but a simple plot \[fig:deltastarBound\] shows that $g$ is decreasing on $[0, 1)$, with extremal values at the end points: $$g({{{\mathbf{e}}^*}}) \in [\lim_{{{{\mathbf{e}}^*}}\to 1} g({{{\mathbf{e}}^*}}), g(0)), \quad \text{for }{{{\mathbf{e}}^*}}\in [0, 1).$$ Obviously $g(0) = \f 2 \pi$. For the limit, we note that $\arcsin {{{\mathbf{e}}^*}}$ has an asymptotic expansion $\f \pi 2 - \sqrt 2 (1 - {{{\mathbf{e}}^*}})^{1/2} + \Theta((1-{{{\mathbf{e}}^*}})^{3/2})$ at 1, so that $(1 - {{{\mathbf{e}}^*}})^{1/2}(1 - \f 2 \pi \arcsin {{{\mathbf{e}}^*}})^{-1} \to \dfrac{\pi}{2 \sqrt 2}$, and $g({{{\mathbf{e}}^*}}) \to \dfrac{1}{2}$ as ${{{\mathbf{e}}^*}}\to 1$. ![Plot of $g({{{\mathbf{e}}^*}})$ in the proof of \[lemma:fixed\_point\_ineq\][]{data-label="fig:deltastarBound"}](graphics/deltastarBound.pdf){width=".4\textwidth"} \[thm:TanhFullResConvergenceRate\] If $\p {\mathbf{e}}0 < 1$, then $|\p {\mathbf{e}}l - {\mathbf{e}}^*|$ is $\Omega(l^{-\delta^* - \varepsilon})$ and $O(l^{-\delta^* + \varepsilon})$ for any $\varepsilon > 0$, where $$\delta^* := 1 - \f 2 \pi \f 1 {\sqrt{1 - ({\mathbf{e}}^*)^2}} \f{\sigma_v^2 }{\sigma_v^2 + \sigma_a^2} \in [1-\f 2 \pi, \f 1 2),$$ where the bounds on the right follow from \[lemma:fixed\_point\_ineq\]. Define $\omega(q, c) = \f 2 \pi \arcsin(c) - {\mathrm{W}}\tanh( q, c q)$. By \[lemma:Wt\_le\_arcsin\], for large enough $l$, ${c}$ is close to ${\mathbf{e}}^*$, which is bounded away from 0 and 1, so that $\omega({q}, {c}) = \Theta(\inv {q})$ with the constant hidden in $\Theta$ independent of ${c}$. Additionally, by \[lemma:vtanhSqrtConvergence\], $1 - {\mathrm{V}}\tanh( {q}) = \Theta({q}^{-1/2})$.
Therefore, $$\begin{aligned} ({\mathbf{e}}^* + \epsilon){\mathbf{p}}&= \sigma_v^2 (\f 2 \pi \arcsin({\mathbf{e}}^* + {\underline}\epsilon) - \omega({q}, {c}))+ \sigma_a^2 + {\underline}{\boldsymbol{\oldgamma}}\\ &= \sigma_v^2 \f 2 \pi [\arcsin({\mathbf{e}}^*) + \f{{\underline}\epsilon}{\sqrt{1 - ({\mathbf{e}}^*)^2}} + \Theta({\underline}{\epsilon}^2)] - \Theta(\inv l) + \sigma_a^2 + {\underline}{\boldsymbol{\oldgamma}}\\ &= {\mathbf{e}}^*(\sigma_v^2 + \sigma_a^2) + ({\mathbf{e}}^* + {\underline}{\epsilon}) {\underline}{\mathbf{p}}+ \sigma_v^2 \f 2 \pi\f{{\underline}\epsilon}{\sqrt{1 - ({\mathbf{e}}^*)^2}} + \Theta({\underline}{\epsilon}^2) - \Theta(\inv l)\\ {\mathbf{e}}^*({\mathbf{p}}- {\underline}{\mathbf{p}}- \sigma_v^2 - \sigma_a^2) &= {\underline}{\epsilon}{\underline}{\mathbf{p}}- {\epsilon}{\mathbf{p}}+ \sigma_v^2 \f 2 \pi\f{{\underline}\epsilon}{\sqrt{1 - ({\mathbf{e}}^*)^2}} + \Theta({\underline}{\epsilon}^2)- \Theta(\inv l)\\ {\mathbf{e}}^*\sigma_v^2({\mathrm{V}}\phi( {q}) - 1) &= {\underline}{\epsilon}{\underline}{\mathbf{p}}- {\epsilon}{\mathbf{p}}+ \sigma_v^2 \f 2 \pi\f{{\underline}\epsilon}{\sqrt{1 - ({\mathbf{e}}^*)^2}} + \Theta({\underline}{\epsilon}^2) - \Theta(\inv l)\\ {\epsilon}&= \f 1 {\mathbf{p}}({\mathbf{e}}^*\sigma_v^2(1 - {\mathrm{V}}\phi( {q})) + \Theta({\underline}{\epsilon}^2) - \Theta(\inv l) + {\underline}{\epsilon}({\underline}{\mathbf{p}}+ \sigma_v^2 \f 2 \pi\f{1}{\sqrt{1 - ({\mathbf{e}}^*)^2}}))\\ &= \Theta(l^{-3/2}) + {\underline}{\epsilon}(1 - \p \delta l / l)\end{aligned}$$ where $$\begin{aligned} \p \delta l &= \f l {\mathbf{p}}(\sigma_v^2 {\mathrm{V}}\phi( {q}) + \sigma_a^2 -\sigma_v^2 \f 2 \pi\f{1}{\sqrt{1 - ({\mathbf{e}}^*)^2}}) + \Theta({\underline}{\epsilon}/l)\\ &= (1 + \Theta(l^{-1/2}))(\sigma_v^2 (1 - \Theta(l^{-1/2})) + \sigma_a^2 -\sigma_v^2 \f 2 \pi\f{1}{\sqrt{1 - ({\mathbf{e}}^*)^2}})/(\sigma_v^2 + \sigma_a^2) + \Theta({\underline}{\epsilon}/ l)\\ &= \delta^* + O(l^{-1/2}),\end{aligned}$$ where $\delta^* := 1 - \f 2 \pi \f 1 {\sqrt{1 - ({\mathbf{e}}^*)^2}} \f{\sigma_v^2 }{\sigma_v^2 + \sigma_a^2}$, which is positive by \[lemma:fixed\_point\_ineq\]. By taking the $\delta$ of \[lemma:alphaDeltaDynamics\] to be $\delta^* + \varepsilon$ or $\delta^* - \varepsilon$ respectively for lower and upper bounding the dynamics of $\p {\epsilon}l$, the solution $\p {\epsilon}l$ is $\Omega(l^{-\delta^* - \varepsilon})$ and $O(l^{-\delta^* + \varepsilon})$ for any $\varepsilon > 0$ since $\f 1 2 > \delta^*$. ### Backward Dynamics The $\sigma_w = 0$ case is obvious. We will assume $\sigma_w > 0$ from here on. As in the proof of \[thm:dalethExpSqrtTanh\], $$\begin{aligned} \log(\p {{\boldsymbol{\oldchi}}}{m} / \p {{\boldsymbol{\oldchi}}}l) &= 2B D \sigma_w^{-1} b_0^{-1/2} (\sqrt l - \sqrt m) \\ &\phantom{={}} -(BD \sigma_w^{-1} b_0^{-3/2} b_1 2^{-1} +B^2 D^2 \sigma_w^{-2} b_0^{-1}2^{-1})(\log l - \log m) + O(1)\end{aligned}$$ where $B = \sigma_v^2 \sigma_w^2, D = \f 2 3 \sqrt{\f 2 \pi},$ $$\begin{aligned} b_0 &= \sigma_v^2 + \sigma_a^2\\ b_1 &= \f{-2C \sigma_v^2 \sigma_w^{-1}}{\sqrt{\sigma_v^2 + \sigma_a^2}}\\ b_2 &= \f{-C^2 \sigma_v^4 \sigma_w^{-2}}{(\sigma_v^2 + \sigma_a^2)^2}.\end{aligned}$$ with $C = \sqrt{\f 2 \pi}$. This simplifies to the desired form. Similar to \[thm:dalethExpSqrtTanhAllGrad\].
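Before moving on to the $\alpha$-ReLU case, the fixed point ${\mathbf{e}}^*$ and the exponent $\delta^*$ above are easy to sanity-check numerically. The following Python sketch (not part of the proofs; the helper names and parameter choices are ours and arbitrary) iterates the limiting map $u \mapsto \f{\sigma_v^2 \f 2 \pi \arcsin(u) + \sigma_a^2}{\sigma_v^2 + \sigma_a^2}$ to locate ${\mathbf{e}}^*$ and then checks that $\delta^* = 1 - \f 2 \pi \f 1 {\sqrt{1 - ({\mathbf{e}}^*)^2}} \f{\sigma_v^2}{\sigma_v^2 + \sigma_a^2}$ indeed lands in $[1 - \f 2 \pi, \f 1 2)$, as \[lemma:fixed\_point\_ineq\] predicts.

```python
import numpy as np

def e_star(sigma_v, sigma_a, iters=10_000):
    """Stable fixed point of u -> (sigma_v^2*(2/pi)*arcsin(u) + sigma_a^2) / (sigma_v^2 + sigma_a^2),
    found by fixed-point iteration started at u = 0."""
    u = 0.0
    for _ in range(iters):
        u = (sigma_v**2 * (2 / np.pi) * np.arcsin(u) + sigma_a**2) / (sigma_v**2 + sigma_a**2)
    return u

def delta_star(sigma_v, sigma_a):
    """delta* = 1 - (2/pi) * (1 - e*^2)^{-1/2} * sigma_v^2 / (sigma_v^2 + sigma_a^2)."""
    es = e_star(sigma_v, sigma_a)
    return 1 - (2 / np.pi) / np.sqrt(1 - es**2) * sigma_v**2 / (sigma_v**2 + sigma_a**2)

for sv, sa in [(1.0, 0.5), (1.0, 1.0), (2.0, 0.1)]:
    es, ds = e_star(sv, sa), delta_star(sv, sa)
    # lemma:fixed_point_ineq predicts delta* in [1 - 2/pi, 1/2) whenever sigma_v > 0
    print(f"sigma_v={sv}, sigma_a={sa}: e*={es:.6f}, delta*={ds:.6f}",
          1 - 2 / np.pi <= ds < 0.5)
```

For instance, with $\sigma_v = \sigma_a = 1$ the iteration gives ${\mathbf{e}}^* \approx 0.79$ and $\delta^* \approx 0.48$.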
$\alpha$-ReLU: Full Residual Network ------------------------------------ The following can be checked readily Since $\dot \psi_\alpha = \alpha \psi_{\alpha - 1}$, we have as a corollary, \[lemma:Vt\_dotpsi\_alpha\] If $\alpha > \f 1 2$, then ${\mathrm{V}}\dot \psi_\alpha( q) = \alpha^2 {\mathsf{c}}_{\alpha - 1} q^{\alpha-1}$. As a special case, when $\alpha = 1$, ${\mathsf{c}}_\alpha = \f 1 2$. The following is a trivial computation, but useful for many simplifications. ${\mathsf{c}}_{\alpha+1}/{\mathsf{c}}_\alpha = 2\alpha+1$. ### Forward Dynamics {#sec:AlphaReluForwardProofs} \[thm:pDynamic1ReLU\] Suppose we have the nonlinearity $\phi = \psi_1$. Then $\p {\mathbf{p}}l = \Theta((1 + \sigma_v^2 \sigma_w^2/2)^l)$, with the hidden constant depending on the initial condition. We have $$\begin{aligned} {\mathbf{p}}&= \f 1 2\sigma_v^2(\sigma_w^2 {\underline}{\mathbf{p}}+ \sigma_b^2) + \sigma_a^2 + {\underline}{\mathbf{p}}\\ &= (\f 1 2\sigma_v^2\sigma_w^2 + 1) {\underline}{\mathbf{p}}+ \f 1 2 (\sigma_v^2 \sigma_b^2 + \sigma_a^2).\end{aligned}$$ By the standard method of characteristic equation, we get that $$\p {\mathbf{p}}l = A + C B^l$$ where $A = - \f{\sigma_a^2 + \sigma_b^2 \sigma_v^2}{\sigma_v^2 \sigma_w^2}$, $B = 1 + \f{\sigma_v^2 \sigma_w^2}{2}$, and $C$ is a coefficient determined by initial conditions. \[thm:pDynamicLT1ReLU\] Suppose $\alpha < 1$. We have the following asymptotic expansion $$\p {\mathbf{p}}l = K_1 l^{{\f 1 {1-\alpha}}}+ R(l)$$ where the remainder term $$R(l) \sim \begin{cases} -K_2 l^{{{\f \alpha {1-\alpha}}}} \log l &\text{if $\alpha > \f 1 2$}\\ (C-K_2) l \log l &\text{if $\alpha = \f 1 2$ and $K_2 \not=C$}\\ \f{C(1-\alpha)}{1-2\alpha} l & \text{if $ \alpha < \f 1 2$} \end{cases}$$ where $K_1 = [\sigma_v^2 \sigma_w^{2\alpha} {\mathsf{c}}_\alpha (1 - \alpha)]^{\f 1 {1-\alpha}}, K_2 = \f 1 2 [\sigma_v^2 {\mathsf{c}}_\alpha \sigma_w^{2\alpha}]^{{{\f 1 {1-\alpha}}}} (1-\alpha)^{{{\f \alpha {1-\alpha}}}- 1} \alpha$ and $C = \sigma_a^2$. \[fig:55reluverifyleadingcoeffp\] verifies the leading coefficient and the exponent of the leading term. The difference equation governing the evolution of ${\mathbf{p}}$ is $${\mathbf{p}}- {\underline}{\mathbf{p}}= A({\underline}{\mathbf{p}}+ B)^\alpha + C$$ where $A =\sigma_v^2 {\mathsf{c}}_\alpha \sigma_w^{2\alpha}$, $B = \sigma_b^2/\sigma_w^2$, and $C = \sigma_a^2$. Then \[lemma:polyDynamicsConstant\] yields the result. ![Verification of leading term of \[thm:pDynamic1ReLU\] for $\alpha = 0.55$.[]{data-label="fig:55reluverifyleadingcoeffp"}](graphics/55relu_verify_leading_coeff_p){height=".2\textheight"} \[thm:pDynamicLT1ReLU\] combined with \[thm:pDynamic1ReLU\] gives the following result. By [@cho_kernel_2009], we know that ${\mathrm{W}}\psi_\alpha( q, q c) = {\mathrm{V}}\psi_\alpha( q) {\mathbb{J}}_\alpha(c)$, where ${\mathbb{J}}_\alpha(c) = J_\alpha(\arccos c)$ and $$J_\alpha(\theta) := \f 1 {2\pi {\mathsf{c}}_\alpha} (\sin \theta)^{2\alpha + 1} \Gamma(\alpha + 1)\int_0^{\pi / 2} \f{\dd \eta \cos^\alpha \eta}{(1 - \cos \theta \cos \eta)^{1 + \alpha}}.\tag{$\triangle$}\label{eqn:JalphaIntegralFormula}$$ Note that ${\mathbb{J}}_\alpha(c) \in (-\infty, \infty)$ for $\alpha \in (-1, \infty)$ and any $c \in (0, 1)$, even though ${\mathrm{V}}\psi_\alpha$ is only defined for $\alpha > -1/2$. ![(a) ${\mathbb{J}}_\alpha$ for different $\alpha$s and the identity function. 
From this plot, it looks like ${\mathbb{J}}_\alpha(c) \ge c$ and $\dot{\mathbb{J}}_\alpha(c) \le 1$ for all $\alpha \in (\f 1 2, 1]$ with equality iff $c = 1$, but this is misleading. (b) shows $|{\mathbb{J}}_\alpha(c) - c|$ in log scale. Where the curves dip below the x-axis indicate points where ${\mathbb{J}}_\alpha(c) = c$. We see that in fact every ${\mathbb{J}}_\alpha$ has a solution ${\mathbb{J}}_\alpha(c) = c$ for a $c < 1$, when $\alpha < 1$. (c) Furthermore, at each such $c$, $\dot{\mathbb{J}}_\alpha < 1$.[]{data-label="fig:jjj_vs_id"}](graphics/jjjVsId2.pdf "fig:"){width=".3\textwidth"}   ![(a) ${\mathbb{J}}_\alpha$ for different $\alpha$s and the identity function. From this plot, it looks like ${\mathbb{J}}_\alpha(c) \ge c$ and $\dot{\mathbb{J}}_\alpha(c) \le 1$ for all $\alpha \in (\f 1 2, 1]$ with equality iff $c = 1$, but this is misleading. (b) shows $|{\mathbb{J}}_\alpha(c) - c|$ in log scale. Where the curves dip below the x-axis indicate points where ${\mathbb{J}}_\alpha(c) = c$. We see that in fact every ${\mathbb{J}}_\alpha$ has a solution ${\mathbb{J}}_\alpha(c) = c$ for a $c < 1$, when $\alpha < 1$. (c) Furthermore, at each such $c$, $\dot{\mathbb{J}}_\alpha < 1$.[]{data-label="fig:jjj_vs_id"}](graphics/jjj_vs_id_log.pdf "fig:"){width=".3\textwidth"}   ![(a) ${\mathbb{J}}_\alpha$ for different $\alpha$s and the identity function. From this plot, it looks like ${\mathbb{J}}_\alpha(c) \ge c$ and $\dot{\mathbb{J}}_\alpha(c) \le 1$ for all $\alpha \in (\f 1 2, 1]$ with equality iff $c = 1$, but this is misleading. (b) shows $|{\mathbb{J}}_\alpha(c) - c|$ in log scale. Where the curves dip below the x-axis indicate points where ${\mathbb{J}}_\alpha(c) = c$. We see that in fact every ${\mathbb{J}}_\alpha$ has a solution ${\mathbb{J}}_\alpha(c) = c$ for a $c < 1$, when $\alpha < 1$. (c) Furthermore, at each such $c$, $\dot{\mathbb{J}}_\alpha < 1$.[]{data-label="fig:jjj_vs_id"}](graphics/jjj_vs_id_close.pdf "fig:"){width=".3\textwidth"} \[fig:jjj\_vs\_id\] shows a comparison of ${\mathbb{J}}_\alpha$ for different $\alpha$s along with the identity function. By [@daniely_toward_2016 Lemma 11], ${\mathbb{J}}_\alpha$ is an increasing and convex function as long as $\psi_\alpha^2$ is Gaussian-integrable, which is precisely when $\alpha > -1/2$. We can compute ${\mathbb{J}}_\alpha(1) = {\mathrm{W}}\psi_\alpha(q, q)/{\mathrm{V}}\psi_\alpha(q) = 1$, and ${\mathbb{J}}_\alpha(0) = {\mathrm{W}}\psi_\alpha(q, 0)/{\mathrm{V}}\psi_\alpha(q) = {\mathrm{V}}\psi_{\alpha/2}(q)^2/{\mathrm{V}}\psi_\alpha(q) = {\mathsf{c}}_{\alpha/2}^2/{\mathsf{c}}_\alpha = \f 1{2\sqrt \pi} \f{\Gamma(\f \alpha 2 + \f 1 2)^2}{\Gamma(\alpha + \f 1 2)}$. We record these observations as a lemma. \[lemma:basicJalpha\] ${\mathbb{J}}_\alpha(c)$ is an increasing and convex function for each $\alpha > -1/2$ on $c \in [0, 1]$. ${\mathbb{J}}_\alpha(1) = 1$ and ${\mathbb{J}}_\alpha(0) = \f 1{2\sqrt \pi} \f{\Gamma(\f \alpha 2 + \f 1 2)^2}{\Gamma(\alpha + \f 1 2)}$. ![${\mathbb{J}}_1$ vs identity[]{data-label="fig:jjj1_vs_id"}](graphics/jjj1_vs_id.pdf){width=".3\textwidth"} For $\alpha = 1$, @cho_kernel_2009 computed $${\mathbb{J}}_1(c) = \f 1 \pi (\sqrt{1 - c^2} + (\pi - \arccos(c))c).$$ \[fig:jjj1\_vs\_id\] shows a plot of ${\mathbb{J}}_1$ vs identity. It has derivative $\dot{\mathbb{J}}_1(c) = 1 - \f 1 \pi \arccos c$, which shows that $\dot{\mathbb{J}}_1(c) < 1$ with equality iff $c = 1$, and consequently ${\mathbb{J}}_1(c) \ge c$ with equality iff $c = 1$. 
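The closed form for ${\mathbb{J}}_1$ is also convenient to test against a direct Monte Carlo estimate of ${\mathrm{W}}\psi_1( q, cq)/{\mathrm{V}}\psi_1( q)$. The following Python sketch (purely illustrative; the sample size and seed are arbitrary) draws correlated Gaussian pairs and compares the empirical ratio with the @cho_kernel_2009 formula above:

```python
import numpy as np

rng = np.random.default_rng(0)

def J1_closed(c):
    # Cho & Saul closed form for the alpha = 1 (ReLU) arc-cosine kernel
    return (np.sqrt(1 - c**2) + (np.pi - np.arccos(c)) * c) / np.pi

def J1_monte_carlo(c, q=1.0, n=2_000_000):
    # J_1(c) = W psi_1(q, c q) / V psi_1(q), estimated from correlated Gaussian samples
    z1 = rng.standard_normal(n)
    z2 = c * z1 + np.sqrt(1 - c**2) * rng.standard_normal(n)
    x1, x2 = np.sqrt(q) * z1, np.sqrt(q) * z2
    relu = lambda x: np.maximum(x, 0.0)
    return np.mean(relu(x1) * relu(x2)) / np.mean(relu(x1) ** 2)

for c in [0.0, 0.3, 0.7, 0.95]:
    print(c, J1_closed(c), J1_monte_carlo(c))
```

The agreement is within Monte Carlo error, and the estimate is independent of $q$, as expected from the homogeneity of $\psi_1$.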
At the same time, $\dot {\mathbb{J}}_1(c) \ge 0$ with equality iff $c = -1$, so ${\mathbb{J}}_1$ is increasing on $[-1, 1]$. It has an asymptotic expansion ${\mathbb{J}}_1(1 - \varepsilon) = 1 - \varepsilon + \f {2\sqrt 2}{3\pi} \varepsilon^{3/2} + \Theta(\varepsilon^{5/2})$ at 1. The zeroth modified Bessel function of the second kind is defined by ${{\mathcal{K}_0}}(z) = \int_1^\infty e^{-z x} (x^2 - 1)^{-1/2} \dd x$. It is one of the fundamental solutions to the homogeneous differential equation $x^2 \ddot y + x \dot y - x^2 y = 0$. The following lemma shows that $J_\alpha$ can be expressed in terms of ${{\mathcal{K}_0}}$. \[lemma:JalphaBessel\] For any $\alpha > -1$, $J_\alpha(\theta) = \f 1 {2\pi {\mathsf{c}}_\alpha}\sin^{2\alpha+1} \theta \int_0^\infty \dd x {{\mathcal{K}_0}}(x) e^{x\cos \theta} x^\alpha$. @cho_kernel_2009 gave the expression $$2\pi {\mathsf{c}}_\alpha J_\alpha(\theta) = \csc \theta \int_0^\infty \dd u \int_0^\infty \dd v e^{-(u^2+v^2 - 2uv \cos \theta)/2\sin^2\theta} u^\alpha v^\alpha.$$ Note that the integrand is symmetric in $u$ and $v$. Thus, if ${{\mathsf V}}= \{(u, v): u, v \ge 0 \And v \ge u\}$, then $$2\pi {\mathsf{c}}_\alpha J_\alpha(\theta) = 2\csc \theta \int_{{\mathsf V}}\dd u \dd v e^{-(u^2+v^2 - 2uv \cos \theta)/2\sin^2\theta} u^\alpha v^\alpha.$$ Now make the change of variables from ${{\mathsf V}}$ to $\{({\mathbbm{p}}, {\mathbbm{q}}): {\mathbbm{q}}\ge 2 \sqrt{{\mathbbm{p}}}\}$: $$\begin{aligned} {\mathbbm{p}}&= uv & \dd {\mathbbm{p}}&= v \dd u + u \dd v\\ {\mathbbm{q}}&= u + v& \dd {\mathbbm{q}}&= \dd u + \dd v\\ \dd {\mathbbm{p}}\dd {\mathbbm{q}}&= (v - u) \dd u \dd v & \dd u \dd v &= ({\mathbbm{q}}^2 - 4 {\mathbbm{p}})^{-1/2} \dd {\mathbbm{p}}\dd {\mathbbm{q}}\end{aligned}$$ so that we have $$2 \pi {\mathsf{c}}_\alpha J_\alpha(\theta) = 2\csc \theta \int_0^\infty \dd {\mathbbm{p}}e^{{\mathbbm{p}}(1 + \cos \theta)\csc^2\theta} {\mathbbm{p}}^\alpha \int_{2\sqrt{{\mathbbm{p}}}}^\infty \dd {\mathbbm{q}}e^{-{\mathbbm{q}}^2\csc^2 \theta/2}({\mathbbm{q}}^2 - 4 {\mathbbm{p}})^{-1/2}.$$ The inner integral in ${\mathbbm{q}}$ can be expressed in terms of ${{\mathcal{K}_0}}$ by the change of variable ${\mathbbm{q}}^2 = 2 {\mathbbm{p}}(x + 1)$: $$\begin{aligned} 2 \pi {\mathsf{c}}_\alpha J_\alpha(\theta) &= 2\csc \theta \int_0^\infty \dd {\mathbbm{p}}e^{{\mathbbm{p}}(1 + \cos \theta)\csc^2\theta} {\mathbbm{p}}^\alpha \f 1 2 e^{-{\mathbbm{p}}\csc^2 \theta} {{\mathcal{K}_0}}({\mathbbm{p}}\csc^2 \theta)\\ &= \csc\theta \int_0^\infty \dd {\mathbbm{p}}{{\mathcal{K}_0}}({\mathbbm{p}}\csc^2\theta) e^{{\mathbbm{p}}\cos \theta \csc^2\theta} {\mathbbm{p}}^\alpha\\ &= \sin^{2\alpha + 1} \theta \int_0^\infty \dd x {{\mathcal{K}_0}}(x) e^{x \cos \theta} x^\alpha \end{aligned}$$ Define $L_\alpha(\theta) = 2 \pi {\mathsf{c}}_\alpha J_\alpha(\theta)\csc^{2\alpha+1}\theta = \int_0^\infty \dd x {{\mathcal{K}_0}}(x) e^{x \cos \theta} x^\alpha$. \[lemma:LAlphaRec\] If $\alpha > 1$, then $$L_\alpha(\theta) = \csc^2 \theta [(2\alpha-1) \cos \theta L_{\alpha-1}(\theta) + (\alpha-1)^2 L_{\alpha - 2}(\theta)].$$ We will prove this claim for $\theta < 1$, and by continuity this also proves the case $\theta = 1$.
As remarked above, ${{\mathcal{K}_0}}(z) = \ddot {{\mathcal{K}_0}}(z) + \inv z \dot {{\mathcal{K}_0}}(z).$ Thus $$\begin{aligned} L_\alpha(\theta) &= \int_0^\infty \dd x (\ddot {{\mathcal{K}_0}}(x) + \inv x \dot {{\mathcal{K}_0}}(x)) e^{x \cos\theta} x^\alpha\\ &=\dot {{\mathcal{K}_0}}e^{x\cos\theta} x^\alpha \rvert_0^\infty + {{\mathcal{K}_0}}e^{x\cos \theta} x^{\alpha - 1} \rvert_0^\infty\\ &\phantom{={}} - \int \dd x[\cos \theta e^{x \cos \theta} x^\alpha + \alpha e^{x \cos \theta} x^{\alpha - 1}] \dot {{\mathcal{K}_0}}\\ &\phantom{={}} - \int \dd x[\cos \theta e^{x \cos \theta} x^{\alpha - 1} + (\alpha-1) e^{x\cos\theta} x^{\alpha - 2}] {{\mathcal{K}_0}}\end{aligned}$$ Asymptotically, ${{\mathcal{K}_0}}(z) \sim \sqrt{\f{\pi}{2z}} e^{-z}$ as $z \to \infty$ and ${{\mathcal{K}_0}}(z) \sim -\ln(z)$ as $z \searrow 0$, and $\dot{{\mathcal{K}_0}}(z) \sim -\sqrt{\f{\pi}{2z}} e^{-z}$ as $z \to \infty$ and $\dot{{\mathcal{K}_0}}(z) \sim -\inv z$ as $z \searrow 0$. Thus, as $\alpha > 1$, $$\begin{aligned} \dot {{\mathcal{K}_0}}e^{x\cos\theta} x^\alpha \rvert_0^\infty = -\lim_{x \to \infty} \sqrt{\pi/2} e^{-x(1 - \cos \theta)}x^{\alpha - 1} + \lim_{x\searrow 0} e^{x\cos\theta} x^{\alpha-1} = 0\\ {{\mathcal{K}_0}}e^{x\cos\theta} x^{\alpha-1} \rvert_0^\infty = -\lim_{x \to \infty} \sqrt{\pi/2} e^{-x(1 - \cos \theta)}x^{\alpha - 2} + \lim_{x\searrow 0} e^{x\cos\theta} x^{\alpha-1} \ln x = 0 \end{aligned}$$ So $$\begin{aligned} L_\alpha(\theta) &= -\cos \theta L_{\alpha-1}(\theta) - (\alpha-1)L_{\alpha-2}(\theta) - \int \dd x[\cos \theta e^{x \cos \theta} x^\alpha + \alpha e^{x \cos \theta} x^{\alpha - 1}] \dot {{\mathcal{K}_0}}\end{aligned}$$ Via another integration by parts, the integral on the right is $$\begin{aligned} &\phantom{={}} \cos \theta e^{x \cos \theta} x^\alpha {{\mathcal{K}_0}}\rvert_0^\infty + \alpha e^{x\cos \theta} x^{\alpha-1} {{\mathcal{K}_0}}\rvert_0^\infty\\ &\phantom{={}} - \int \dd x[\cos^2 \theta e^{x \cos \theta} x^\alpha + 2 \alpha \cos \theta e^{x \cos \theta} x^{\alpha - 1} + \alpha (\alpha - 1) e^{x \cos \theta} x^{\alpha - 2}] {{\mathcal{K}_0}}\\ &= - [\cos^2 \theta L_\alpha(\theta) + 2 \alpha \cos \theta L_{\alpha-1}(\theta) + \alpha(\alpha-1) L_{\alpha-2}(\theta)] \end{aligned}$$ where the evaluation terms vanish just like before. Altogether, we have $$\begin{aligned} L_\alpha(\theta) &= \cos^2 \theta L_\alpha(\theta) + (2\alpha-1)\cos\theta L_{\alpha-1}(\theta) + (\alpha-1)^2 L_{\alpha-2}(\theta)\\ &= \csc^2 \theta[(2\alpha-1) \cos \theta L_{\alpha-1}(\theta) + (\alpha-1)^2 L_{\alpha-2}(\theta)] \end{aligned}$$ As a corollary we get \[lemma:JalphaRec\] Suppose $\alpha > 1$. Then $$\begin{aligned} J_\alpha(\theta) &= \cos \theta J_{\alpha-1}(\theta) + (\alpha-1)^2 (2\alpha-1)^{-1}(2\alpha-3)^{-1} \sin^2 \theta J_{\alpha-2}(\theta)\\ {\mathbb{J}}_\alpha(c) &= c {\mathbb{J}}_{\alpha-1}(c) + (\alpha-1)^2 (2\alpha-1)^{-1}(2\alpha-3)^{-1}(1-c^2) {\mathbb{J}}_{\alpha-2}(c) \end{aligned}$$ The derivative of $J_\alpha(\theta)$ turns out to be quite simple. \[lemma:JalphaGrad\] Suppose $\alpha > 0$. Then $$\begin{aligned} \dot J_\alpha(\theta) &= -\alpha^2(2 \alpha-1)^{-1} J_{\alpha-1}(\theta) \sin \theta\\ \dot {\mathbb{J}}_\alpha(c) &= \alpha^2 (2 \alpha-1)^{-1} {\mathbb{J}}_{\alpha-1}(c) \end{aligned}$$ We will prove the first formula. The second follows from chain rule. 
By \[lemma:JalphaBessel\], $$\begin{aligned} J_\alpha(\theta) &= \f 1 {2\pi {\mathsf{c}}_\alpha}\sin^{2\alpha+1} \theta \int \dd x {{\mathcal{K}_0}}(x) e^{x\cos \theta} x^\alpha\\ \dot J_\alpha(\theta) &= \f 1 {2\pi {\mathsf{c}}_\alpha}[(2\alpha+1)\sin^{2\alpha} \theta \cos\theta \int \dd x {{\mathcal{K}_0}}(x) e^{x\cos \theta} x^\alpha\\ &\phantom{={}} -\sin^{2\alpha+2}\theta \int \dd x {{\mathcal{K}_0}}(x) e^{x\cos \theta} x^{\alpha+1}]\\ &= (2\alpha+1) \cot \theta J_\alpha(\theta) - \f{{\mathsf{c}}_{\alpha+1}}{{\mathsf{c}}_\alpha}\csc\theta J_{\alpha+1}(\theta)\\ &= (2\alpha+1) \csc \theta [\cos \theta J_\alpha(\theta) - J_{\alpha+1}(\theta)].\\ \end{aligned}$$ As $\alpha + 1 > 1$, by \[lemma:JalphaRec\], this is $$\begin{aligned} &\phantom{={}}-(2\alpha+1)\csc \theta [\alpha^2 (2\alpha+1)^{-1}(2\alpha-1)^{-1} \sin^2 \theta J_{\alpha-1}(\theta)]\\ &= -\alpha^2(2\alpha-1)^{-1} \sin \theta J_{\alpha-1}(\theta). \end{aligned}$$ Thus $\dot {\mathbb{J}}_\alpha(1) = \alpha^2(2\alpha-1)^{-1} {\mathbb{J}}_{\alpha-1}(1) = \alpha^2(2\alpha-1)^{-1}$ for any $\alpha > 0$ by \[lemma:basicJalpha\]. For $1/2 < \alpha \le 1$, $\dot {\mathbb{J}}_\alpha(1) \ge 1$ with equality iff $\alpha = 1$, and for $\alpha = 1/2$, $\dot {\mathbb{J}}_\alpha(1) =\infty > 1$ by continuity of $\dot {\mathbb{J}}_\alpha(c)$ in $\alpha$. Because for $\alpha > -1/2$, ${\mathbb{J}}_\alpha$ is increasing and convex on $[0, 1]$ and ${\mathbb{J}}_\alpha(0) > 0$ by \[lemma:basicJalpha\], ${\mathbb{J}}_\alpha$ intersects identity at a unique point away from 1 when $\alpha \in [1/2, 1)$. We record this as a theorem. \[thm:stableFixedPointsJJ\] For $\alpha \in [1/2, 1)$, ${\mathbb{J}}_\alpha(c) = c$ has two solutions: an unstable solution at 1 (“unstable” meaning $\dot{\mathbb{J}}_\alpha(1) > 1$) and a stable solution ${\mathbf{e}}^* \in (0, 1)$ (“stable” meaning $\dot {\mathbb{J}}_\alpha({\mathbf{e}}^*) < 1$). ![Left-to-right: **(a)** ${\mathbb{J}}_\alpha$ for different $\alpha$s and the identity function (black, dashed line). ${\mathbb{J}}_1$ is highlighted in red. From this plot, it looks like ${\mathbb{J}}_\alpha(c) \ge c$ and $\dot{\mathbb{J}}_\alpha(c) \le 1$ for all $\alpha \in (\f 1 2, 1]$ with equality iff $c = 1$, but this is misleading. **(b)** shows $|{\mathbb{J}}_\alpha(c) - c|$ in log scale. Where the curves dip below the x-axis indicate points where ${\mathbb{J}}_\alpha(c) = c$. We see that in fact every ${\mathbb{J}}_\alpha$ has a solution ${\mathbb{J}}_\alpha(c) = c$ for a $c < 1$, when $\alpha < 1$. **(c)** Furthermore, at each such $c$, $\dot{\mathbb{J}}_\alpha < 1$. (b) and (c) demonstrate the existence of stable fixed points away from 1 for ${\mathbb{J}}_\alpha, \alpha \in (1/2, 1)$, which is confirmed rigorously by \[thm:stableFixedPointsJJ\]. []{data-label="fig:jjj_vs_id_main"}](graphics/jjjVsId2.pdf "fig:"){height=".1\textheight"}   ![Left-to-right: **(a)** ${\mathbb{J}}_\alpha$ for different $\alpha$s and the identity function (black, dashed line). ${\mathbb{J}}_1$ is highlighted in red. From this plot, it looks like ${\mathbb{J}}_\alpha(c) \ge c$ and $\dot{\mathbb{J}}_\alpha(c) \le 1$ for all $\alpha \in (\f 1 2, 1]$ with equality iff $c = 1$, but this is misleading. **(b)** shows $|{\mathbb{J}}_\alpha(c) - c|$ in log scale. Where the curves dip below the x-axis indicate points where ${\mathbb{J}}_\alpha(c) = c$. We see that in fact every ${\mathbb{J}}_\alpha$ has a solution ${\mathbb{J}}_\alpha(c) = c$ for a $c < 1$, when $\alpha < 1$. **(c)** Furthermore, at each such $c$, $\dot{\mathbb{J}}_\alpha < 1$.
(b) and (c) demonstrate the existence of stable fixed points away from 1 for ${\mathbb{J}}_\alpha, \alpha \in (1/2, 1)$, which is confirmed rigorously by \[thm:stableFixedPointsJJ\]. []{data-label="fig:jjj_vs_id_main"}](graphics/jjj_vs_id_log.pdf "fig:"){height=".1\textheight"}   ![Left-to-right: **(a)** ${\mathbb{J}}_\alpha$ for different $\alpha$s and the identity function (black, dashed line). ${\mathbb{J}}_1$ is highlighted in red. From this plot, it looks like ${\mathbb{J}}_\alpha(c) \ge c$ and $\dot{\mathbb{J}}_\alpha(c) \le 1$ for all $\alpha \in (\f 1 2, 1]$ with equality iff $c = 1$, but this is misleading. **(b)** shows $|{\mathbb{J}}_\alpha(c) - c|$ in log scale. Where the curves dip below the x-axis indicate points where ${\mathbb{J}}_\alpha(c) = c$. We see that in fact every ${\mathbb{J}}_\alpha$ has a solution ${\mathbb{J}}_\alpha(c) = c$ for a $c < 1$, when $\alpha < 1$. **(c)** Furthermore, at each such $c$, $\dot{\mathbb{J}}_\alpha < 1$. (b) and (c) demonstrate the existence of stable fixed points away from 1 for ${\mathbb{J}}_\alpha, \alpha \in (1/2, 1)$, which is confirmed rigorously by \[thm:stableFixedPointsJJ\]. []{data-label="fig:jjj_vs_id_main"}](graphics/jjj_vs_id_close.pdf "fig:"){height=".1\textheight"} This result confirms that pictures presented in \[fig:jjj\_vs\_id\_main\]b,c are qualitatively correct, that there are indeed stable fixed points of ${\mathbb{J}}_\alpha$ away from 1. If ${\underline}{\mathbf{e}}< 1$, then $$\begin{aligned} {c}= \f{\sigma_w^2 {\underline}{\boldsymbol{\oldgamma}}+ \sigma_b^2}{\sigma_w^2 {\underline}{\mathbbm{p}}+ \sigma_b^2} &\ge {\underline}{\mathbf{e}}\\ {\mathbb{J}}_1({c}) &\ge {\mathbb{J}}_1({\underline}{\mathbf{e}})\\ {\mathbf{e}}= \f{\sigma_v^2 {\mathsf{c}}_\alpha {\mathbbm{q}}^\alpha{\mathbb{J}}_1({c}) + \sigma_b^2}{\sigma_v^2 {\mathsf{c}}_\alpha {\mathbbm{q}}^\alpha + \sigma_b^2} &\ge {\mathbb{J}}_1({\underline}{\mathbf{e}}) \end{aligned}$$ but ${\mathbf{e}}\ge {\mathbb{J}}_1({\underline}{\mathbf{e}}) > {\underline}{\mathbf{e}}$ as noted above. Thus by monotone convergence ${\mathbf{e}}$ converges, and ${\mathbf{e}}^* = 1$ is the only possible fixed point. By \[lemma:cExpansion\], ${c}= {\underline}{\mathbf{e}}(1 + \Theta({\underline}{\epsilon}{\underline}{\mathbbm{p}}^{-1})) = 1 - {\underline}{\epsilon}+ \Theta({\underline}{\epsilon}\inv {\mathbbm{p}}) = 1 - u{\underline}{\epsilon}$ where $u := 1 - \Theta({\underline}{\mathbbm{p}}^{-1})$. 
Using the asymptotic expansion ${\mathbb{J}}_1(1 - {\epsilon}) = 1 - {\epsilon}+ U {\epsilon}^{3/2} + \Theta({\epsilon}^{5/2})$, we have $$\begin{aligned} (1 - {\epsilon}){\mathbbm{p}}&= \sigma_v^2 \f {\mathbbm{q}}2 {\mathbb{J}}_1(1 - u{\underline}{\epsilon}) + \sigma_a^2 + (1 - {\underline}{\epsilon}) {\underline}{\mathbbm{p}}\\ -{\epsilon}{\mathbbm{p}}&= \sigma_v^2 \f {\mathbbm{q}}2 ({\mathbb{J}}_1(1 - u{\underline}{\epsilon}) - 1) - {\underline}{\epsilon}{\underline}{\mathbbm{p}}\\ &= \sigma_v^2 \f {\mathbbm{q}}2 [- u{\underline}{\epsilon}+ U u^{3/2}{\underline}{\epsilon}^{3/2} + \Theta(u^{5/2}{\underline}{\epsilon}^{5/2})] - {\underline}{\epsilon}{\underline}{\mathbbm{p}}\\ {\epsilon}&= {\underline}{\epsilon}\f 1 {\mathbbm{p}}[{\underline}{\mathbbm{p}}+ \sigma_v^2 \f {\mathbbm{q}}2 (u - U u^{3/2} {\underline}{\epsilon}^{1/2} + \Theta(u^{5/2}{\underline}{\epsilon}^{3/2}))]\\ &= {\underline}{\epsilon}\f 1 {\mathbbm{p}}[{\mathbbm{p}}- \sigma_a^2 + \sigma_v^2 \f {\mathbbm{q}}2 (\Theta(\inv {{\underline}{\mathbbm{p}}}) - U u^{3/2} {\underline}{\epsilon}^{1/2} + \Theta(u^{5/2}{\underline}{\epsilon}^{3/2}))]\\ &= {\underline}{\epsilon}[1 + \f{-\sigma_a^2 + \sigma_v^2 \f {\mathbbm{q}}2 (\Theta(\inv {{\underline}{\mathbbm{p}}}) - U u^{3/2} {\underline}{\epsilon}^{1/2} + \Theta(u^{5/2}{\underline}{\epsilon}^{3/2}))}{{\mathbbm{p}}}]\\ &= {\underline}{\epsilon}[1 + \f{-\sigma_a^2\inv {\mathbbm{q}}+ \f 1 2 \sigma_v^2 (\Theta(\inv {{\underline}{\mathbbm{p}}}) - U u^{3/2} {\underline}{\epsilon}^{1/2} + \Theta(u^{5/2}{\underline}{\epsilon}^{3/2}))}{{\mathbbm{p}}\inv {\mathbbm{q}}}]\end{aligned}$$ Let the content of the bracket on the RHS be $\aleph$. We have ${\mathbbm{p}}\inv {\mathbbm{q}}= (1+ o(1))B/\sigma_w^2$. If ${\epsilon}= O({\underline}{\mathbbm{p}}^{-1})$, then $\aleph = 1 - O(\inv {\mathbbm{p}})$; but because ${\mathbbm{p}}$ is exponentially increasing, the product of these factors over $l$ is bounded away from 0, so ${\epsilon}= \Theta(1)$ and does not converge to 0 — this is a contradiction. Therefore, ${\underline}{\epsilon}= \omega({\underline}{\mathbbm{p}}^{-1})$, and $$\begin{aligned} {\epsilon}&= {\underline}{\epsilon}[1 - \f 1 2 B^{-1}\sigma_v^2 \sigma_w^2 U{\underline}{\epsilon}^{1/2}(1 + o(1))]\\ {\epsilon}- {\underline}{\epsilon}&= - \f 1 2 B^{-1}\sigma_v^2 \sigma_w^2 U{\underline}{\epsilon}^{3/2}(1 + o(1))\end{aligned}$$ Using \[lemma:polyDynamicsPosAlpha\] to upper and lower bound our dynamics, we get that $\p {\epsilon}l \sim [\f 1 4 \sigma_v^2 \sigma_w^2 \inv B U l]^{-2}$. \[lemma:separableDynamics\] Let $\phi$ be any nonlinearity. Suppose ${\mathrm{W}}\phi( r, rd) = {\mathrm{V}}\phi( r){\mathbb{K}}(d)$ for some twice differentiable function ${\mathbb{K}}(d)$ independent of ${\mathbbm{q}}$, where ${\mathbb{K}}(1) = 1$ naturally. Suppose further that - ${\mathbb{K}}(d) = d$ has a solution $d = {\mathbf{e}}^* > 0$ where $\dot{\mathbb{K}}({\mathbf{e}}^*) = \delta < 1$; - ${\mathbb{K}}(d) > d$ for all $d < {\mathbf{e}}^*$ and ${\mathbb{K}}(d) < d$ for all $1 > d > {\mathbf{e}}^*$; and - ${\mathbb{K}}$ is nondecreasing. Let $\p {\epsilon}l := \p {\mathbf{e}}l - {\mathbf{e}}^*$ and suppose $\p {\mathbf{e}}0 < 1$.
If $\p {\boldsymbol{\oldgamma}}l \to \infty$ and ${\mathrm{V}}\phi( \p {\mathbbm{q}}l) \to \infty$, then $\p {\epsilon}l \to 0$ and satisfies
$${\epsilon}= {\underline}{\epsilon}\left(1 - \f{\sigma_a^2 + (1 - \delta + O({\underline}{\epsilon}))\sigma_v^2 {\mathrm{V}}\phi( {\mathbbm{q}})}{{\mathbbm{p}}}\right) + {\mathrm{V}}\phi( {\mathbbm{q}})\Theta(\inv {\boldsymbol{\oldgamma}}\inv {\mathbbm{p}}).$$

First we note that, because ${\mathbf{e}}^*$ is the only stable fixed point of the dynamics $x \mapsto {\mathbb{K}}(x)$ with basin of attraction $[0, 1)$, the convergence $\p {\mathbf{e}}l \to {\mathbf{e}}^*$ can be shown as in the proof of \[thm:eDynamicsFullResTanh\] (using \[lemma:timeDependentConvergence\]); we sketch the argument. Write $\p V l := {\mathrm{V}}\phi( \p {\mathbbm{q}}l)$. When $l$ is large,
$$\begin{aligned}
{c}= \f{\sigma_w^2 {\underline}{\boldsymbol{\oldgamma}}+ \sigma_b^2}{\sigma_w^2 {\underline}{\mathbbm{p}}+ \sigma_b^2} & = {\underline}{\mathbf{e}}( 1 + O(\inv {\boldsymbol{\oldgamma}}))\\
{\mathbf{e}}= \f{\sigma_v^2V{\mathbb{K}}({c}) + \sigma_a^2}{\sigma_v^2 V + \sigma_a^2} &= {\mathbb{K}}({c})(1 + O(\inv V {\mathbb{K}}({c})^{-1})).\end{aligned}$$
If $\p {\boldsymbol{\oldgamma}}l$ is bounded for all $l$, then ${\mathbf{e}}\to 0$ because $\p {\mathbbm{p}}l \to \infty$, and also ${c}\to 0$. Since ${\mathbb{K}}({c}) > 0$ for ${c}\in [0, 1]$ and $\p V l\to \infty$, the second display and the continuity of ${\mathbb{K}}$ give $0 = \lim_{l \to \infty} {\mathbf{e}}= {\mathbb{K}}(0) > 0$, which is impossible by our assumptions. Thus $\p {\boldsymbol{\oldgamma}}l \to \infty$, and we have $\lim_{l \to \infty} {\mathbf{e}}= {\mathbb{K}}(\lim_{l \to \infty} {\mathbf{e}})$. By our assumptions, ${\mathbf{e}}^*$ is the only stable fixed point of ${\mathbb{K}}$ with basin of attraction $[0, 1)$, so this shows that ${\mathbf{e}}\to {\mathbf{e}}^*$ as desired.

Now we derive the equation in question. Note that ${c}= {\underline}{\mathbf{e}}(1 + \Theta(\inv {\boldsymbol{\oldgamma}}))$ because ${\mathbf{e}}^* < 1$. We use the Taylor expansion ${\mathbb{K}}({\mathbf{e}}^* + {\epsilon}) = {\mathbf{e}}^* + \delta {\epsilon}+ O({\epsilon}^2)$.
$$\begin{aligned}
({\mathbf{e}}^* + {\epsilon}){\mathbbm{p}}&= {\sigma_v^2}V {\mathbb{K}}\lp({\mathbf{e}}^* + {\underline}{\epsilon})(1 + \Theta(\inv {\boldsymbol{\oldgamma}}))\rp + {\sigma_a^2}+ ({\mathbf{e}}^* + {\underline}{\epsilon}) {\underline}{\mathbbm{p}}\\
&= {\sigma_v^2}V({\mathbf{e}}^* + \delta({\underline}{\epsilon}+ \Theta(\inv {\boldsymbol{\oldgamma}})) + O({\underline}{\epsilon}^2)) + {\sigma_a^2}+ ({\mathbf{e}}^* + {\underline}{\epsilon}) {\underline}{\mathbbm{p}}\\
{\epsilon}{\mathbbm{p}}&= {\sigma_v^2}V(\delta({\underline}{\epsilon}+ \Theta(\inv {\boldsymbol{\oldgamma}})) + O({\underline}{\epsilon}^2)) + {\underline}{\epsilon}{\underline}{\mathbbm{p}}\\
{\epsilon}&= {\underline}{\epsilon}(1 - \f{{\sigma_a^2}+ (1 - \delta + O({\underline}{\epsilon})){\sigma_v^2}V}{\mathbbm{p}}) + \Theta(V \inv{\boldsymbol{\oldgamma}}\inv {\mathbbm{p}})\end{aligned}$$

We apply \[lemma:separableDynamics\]. We first check the conditions of the lemma, with ${\mathbb{K}}= {\mathbb{J}}_\alpha$. The following conditions were already verified:

- ${\mathbb{J}}_\alpha$ has a fixed point ${\mathbf{e}}^*$ less than but very close to 1, where its slope is $\upsilon := \dot{\mathbb{J}}_\alpha({\mathbf{e}}^*) < 1$.
  (\[thm:stableFixedPointsJJ\])
- ${\mathbb{J}}_\alpha(d) > d$ for all $d < {\mathbf{e}}^*$ and ${\mathbb{J}}_\alpha(d) < d$ for all $d > {\mathbf{e}}^*$. (By the convexity shown in \[lemma:basicJalpha\])
- ${\mathbb{J}}_\alpha$ is nondecreasing (\[lemma:basicJalpha\]).

Furthermore, from its integral formula (\[eqn:JalphaIntegralFormula\]), we see easily that ${\mathbb{J}}_\alpha$ is smooth at ${\mathbf{e}}^* < 1$. We also proved the following:

- $\p {\mathbbm{p}}l \sim [\sigma_v^2 \sigma_w^{2\alpha} {\mathsf{c}}_\alpha (1 - \alpha)]^{\f 1 {1-\alpha}}l^{\f 1 {1-\alpha}}$ (\[thm:pDynamicLT1ReLU\]) and $\p {\boldsymbol{\oldgamma}}l$ is asymptotically a constant fraction of $\p {\mathbbm{p}}l$ (\[lemma:separableDynamics\]), so both go to $\infty$.
- ${\mathrm{V}}\psi_\alpha( {\mathbbm{q}}) = {\mathsf{c}}_\alpha {\mathbbm{q}}^\alpha = {\mathsf{c}}_\alpha (\sigma_w^2 {\mathbbm{p}}+ \sigma_b^2)^\alpha = \Theta(l^{\alpha/(1-\alpha)})$, so it also goes to $\infty$ (\[lemma:VtPsiAlpha\]).

Thus, for $\upsilon = \dot {\mathbb{J}}_\alpha({\mathbf{e}}^*)$,
$$\begin{aligned}
\f{\sigma_a^2 + (1 - \upsilon + O({\underline}{\epsilon}))\sigma_v^2 {\mathrm{V}}\phi( {\mathbbm{q}})}{{\mathbbm{p}}} &\sim \f{(1 - \upsilon)\sigma_v^2\sigma_w^{2\alpha}{\mathsf{c}}_\alpha}{{\mathbbm{p}}^{1-\alpha}}\\
&= \inv l (1-\upsilon)/(1 - \alpha).\end{aligned}$$
Now, ${\mathrm{V}}\phi( {\mathbbm{q}})\inv {\boldsymbol{\oldgamma}}\inv {\mathbbm{p}}= \Theta(l^{-\f1{1-\alpha} - 1})$. By using the dynamics of \[lemma:alphaDeltaDynamics\] to upper and lower bound our dynamics, we have $\p {\epsilon}l = \Omega(l^{-\mu - {\epsilon}}), O(l^{-\mu + {\epsilon}})$ for any ${\epsilon}> 0$, where $\mu = \min((1-\upsilon)/(1-\alpha), 1/(1-\alpha)) = (1-\upsilon)/(1-\alpha).$

### Backward Dynamics

\[lemma:infVarAlphaLe75\] Suppose the random variable $X \sim {\mathcal{N}}(0, \sigma^2)$, and let $Y = \psi_{-\beta}(X)$ for some $\beta > 0$, where $\psi_\alpha$ is the $\alpha$-ReLU. Then for $\xi > 0$, $Y$ has density
$$\Pr[Y \in [\xi, \xi+\dd \xi]] = \f 1 {\beta\sqrt{2\pi \sigma^2}} \xi^{-\f 1 \beta - 1} e^{-\xi^{-2/\beta}/2\sigma^2} \dd \xi.$$
At $\xi = 0$, $Y$ has density given by a Dirac delta of mass $\f 1 2$. Furthermore, $Y$ has finite second moment iff $\beta < \f 1 2$.

We have
$$\begin{aligned}
\Pr[Y \in [\xi, \infty)] &= \Pr[X \in [0, \xi^{-1/\beta}]]\\
&= \f 1 {\sqrt{2\pi \sigma^2}}\int_0^{\xi^{-1/\beta}} e^{-x^2/2\sigma^2}\dd x.
\end{aligned}$$
Differentiating the RHS against $\xi$ using Leibniz’s rule, we get
$$\begin{aligned}
d\Pr[Y \in [\xi, \infty)]/d\xi &= \f 1 {\sqrt{2\pi \sigma^2}} e^{-\xi^{-2/\beta}/2\sigma^2} \f d {d\xi}\xi^{-1/\beta}\\
&= \f {-1} {\beta\sqrt{2\pi \sigma^2}} \xi^{-\f 1 \beta - 1} e^{-\xi^{-2/\beta}/2\sigma^2}.
\end{aligned}$$
Negating both sides gives the density $f_Y$ of $Y$ for $\xi > 0$. For $\xi = 0$, observe that $\lim_{\xi \to 0} f_Y(\xi) = 0$: the polynomial blow-up of $\xi^{-\f 1 \beta -1}$ is dominated by the exponential decay of $e^{-\xi^{-2/\beta}/2\sigma^2}$ as $\xi \to 0$. Thus the contribution to $Y$’s mass at $Y = 0$ from $X > 0$ is 0. On the other hand, all $X < 0$ gets mapped to $Y = 0$, so $f_Y(0) = \f 1 2 \delta_0$, where $\delta_0$ is the Dirac delta. For the second assertion, observe that
$$\begin{aligned}
f_Y(\xi) \sim \f 1 {\beta \sqrt{2\pi \sigma^2}} \xi^{-\f 1 \beta - 1} & \text{as ${\xi \to \infty}$}.
\end{aligned}$$
Thus, $\xi^2 f_Y(\xi)$ is integrable at infinity iff $2 -\f 1 \beta - 1 < -1 \iff \beta < \f 1 2$.
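As a quick numerical sanity check of this lemma (it is not used in any proof), the following Python sketch samples $Y = \psi_{-\beta}(X)$, compares the empirical tail probability $\Pr[Y \ge 2]$ against the integral of the closed-form density above, and prints the empirical second moment on either side of the threshold $\beta = 1/2$; the test point $\xi = 2$ and the truncation at $\xi = 200$ are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(x, alpha):
    """alpha-ReLU: x**alpha for x > 0, and 0 otherwise."""
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = x[pos] ** alpha
    return out

def f_Y(xi, beta, sigma=1.0):
    """Closed-form density of Y = psi_{-beta}(X) for xi > 0, X ~ N(0, sigma^2)."""
    return (xi ** (-1.0 / beta - 1.0)
            * np.exp(-xi ** (-2.0 / beta) / (2.0 * sigma ** 2))
            / (beta * np.sqrt(2.0 * np.pi * sigma ** 2)))

sigma, n = 1.0, 10 ** 6
for beta in (0.3, 0.6):                       # beta < 1/2 vs. beta > 1/2
    Y = psi(rng.normal(0.0, sigma, size=n), -beta)
    emp_tail = (Y >= 2.0).mean()              # empirical P[Y >= 2]
    grid = np.linspace(2.0, 200.0, 200_001)   # mass beyond 200 is negligible here
    vals = f_Y(grid, beta, sigma)
    thr_tail = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))  # trapezoid rule
    note = "  (beta >= 1/2: heavy-tailed, estimate unstable)" if beta >= 0.5 else ""
    print(f"beta={beta}: P[Y>=2] empirical {emp_tail:.4f} vs density {thr_tail:.4f};"
          f" empirical E[Y^2] = {np.mean(Y ** 2):.2f}" + note)
```

For $\beta \ge \f 1 2$ the empirical second moment fluctuates wildly across seeds and sample sizes, consistent with the lemma; Monte Carlo cannot, of course, prove divergence, it only makes the heavy tail visible.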
Note that $\dot \psi_\alpha \propto \psi_{\alpha - 1}$, so it suffices to show that $\Var(\psi_{\alpha-1}(\zeta)^2) = \Var(\psi_{2\alpha - 2}(\zeta))$ is infinite for $\zeta \sim {\mathcal{N}}(0, \sigma^2)$. By \[lemma:infVarAlphaLe75\] with $\beta = 2 - 2\alpha$, $\psi_{2\alpha - 2}(\zeta)$ has finite variance iff $\beta < \f 1 2 \iff \alpha > \f 3 4$.

If $\alpha = 1$, then
$${\underline}{{\boldsymbol{\oldchi}}}= {{\boldsymbol{\oldchi}}}(1 + \f 1 2\sigma_v^2 \sigma_w^2).$$
So $\p {{\boldsymbol{\oldchi}}}{l-m} / \p {{\boldsymbol{\oldchi}}}l = \Theta(1) B^m$ for $B = 1 + \f 1 2\sigma_v^2 \sigma_w^2$. If $\f 1 2 < \alpha < 1$, then ${\underline}{{\boldsymbol{\oldchi}}}/{{\boldsymbol{\oldchi}}}- 1$ is
$$\begin{aligned}
&\phantom{=} \sigma_v^2 \sigma_w^2 {\mathrm{V}}\dot \phi( {\mathbbm{q}})\\
&= \sigma_v^2 \sigma_w^2 \alpha^2 {\mathsf{c}}_{\alpha - 1} {\mathbbm{q}}^{\alpha - 1}\\
&= \sigma_v^2 \sigma_w^2 \alpha^2 {\mathsf{c}}_{\alpha - 1} (\sigma_w^2 {\mathbbm{p}})^{\alpha - 1} + \Theta({\mathbbm{p}}^{\alpha - 2})\\
&= \sigma_v^2 \sigma_w^{2\alpha} \alpha^2 {\mathsf{c}}_{\alpha - 1}(K_1 l^{{\f 1 {1-\alpha}}}- K_2 l^{{\f \alpha {1-\alpha}}}\log l + o(l^{{\f \alpha {1-\alpha}}}\log l))^{\alpha - 1} + \Theta(l^{\f{\alpha - 2}{1 - \alpha}}) & \text{by \cref{thm:pDynamicLT1ReLU}}\\
&= \sigma_v^2 \sigma_w^{2\alpha} \alpha^2 {\mathsf{c}}_{\alpha - 1} [K_1^{\alpha - 1} \inv l + \Theta(l^{-2} \log l)] + O(l^{-3})\\
&= \sigma_v^2 \sigma_w^{2\alpha} \alpha^2 {\mathsf{c}}_{\alpha - 1} K_1^{\alpha - 1} \inv l + \Theta(l^{-2} \log l)\\
&= R \inv l + \Theta(l^{-2} \log l)\end{aligned}$$
where $R = \sigma_v^2 \sigma_w^{2\alpha} \alpha^2 {\mathsf{c}}_{\alpha - 1} K_1^{\alpha - 1} = \f{\alpha^2}{(1-\alpha)(2 \alpha - 1)}$ and $K_1 = [\sigma_v^2 \sigma_w^{2\alpha} {\mathsf{c}}_\alpha (1 - \alpha)]^{\f 1 {1-\alpha}}$. So
$$\begin{aligned}
{\underline}{{\boldsymbol{\oldchi}}}&= {{\boldsymbol{\oldchi}}}\exp(R \inv l + \Theta(l^{-2} \log l))\\
\p {{\boldsymbol{\oldchi}}}{l -m} &= \Theta(1)\p {{\boldsymbol{\oldchi}}}{l} \lp\f l {l-m} \rp^R\end{aligned}$$
as desired.

The proof is similar to that of \[thm:dalethExpSqrtTanhAllGrad\].

[^1]: Work done while at Harvard University

[^2]: Under simplified conditions, @daniely_toward_2016 showed that there exists a fixed point for any “well-behaved” activation function in a feedforward net. However, this result does not apply to architectures with residual connections.

[^3]: Note that in practice, to avoid the diverging gradient $\dot \psi_\alpha(x) \to \infty$ as $x \to 0$, we can use a tempered version $\Psi_\alpha(x)$ of $\alpha$-ReLU, defined by $\Psi_\alpha(x) = (x + {\epsilon})^\alpha - {\epsilon}^\alpha$ on $x > 0$ and 0 otherwise, for some small ${\epsilon}> 0$. The conclusions of this paper on $\psi_\alpha$ should hold similarly for $\Psi_\alpha$ as well.

[^4]: @daniely_toward_2016 called the version of ${\mathrm{W}}\phi$ with fixed $\rho = 1$ the “dual function” of $\phi$.
[^5]: A more natural visualization is to graph $\p {\mathbf{e}}l - {\mathbf{e}}^*$ versus $l^{-\delta^*}$, but because of floating point precision, $\p {\mathbf{e}}l - {\mathbf{e}}^*$ doesn’t converge to 0 but to a small number close to 0, so that the log-log plot wouldn’t look like what is expected.\[footnote:plotDelta\]

[^7]: Our derivations actually apply to all $\alpha \in (\f 1 2, 1]$; at $\alpha = \f 1 2$, the expected norm of the gradient diverges within our mean field formalism. However, at $\alpha \le \f 3 4$, the variance of the gradient already diverges (\[thm:dalethInfVarAlphaReLU\]), so we cannot expect the empirical values to agree with our theoretical predictions. But in fact, empirically our theoretical predictions seem to form an upper bound on the gradient norms (see \[fig:alphaReLUTheoryVsEmpirics\]).

[^8]: The contour for $\p {\mathbf{p}}l$ is similar, but its slopes are slightly off from the heatmap contours.